Picture this: You wake up early to take a walk around your sprawling estate. Maybe you even squeeze in a game of tennis on your private court. Then, you head to the office for a meeting with your content creation team—business is booming, and your finances are secure. Come lunchtime, an exciting speaking engagement leads you to another city, but you’re back home in time for dinner with friends and family. Life is great. It may even sound like a fantasy…because it is. In fact, this is a day in TikToker Germaine Miller’s dream life, scripted to perfect precision by ChatGPT.
Like countless others on the internet, Miller, a 38-year-old based in Pennsylvania, asked the AI chatbot to paint a picture of her “dream life,” listing “as many goals as possible.” “I aspire to be healthy, wealthy, and happy, so I said it all to ChatGPT,” she says. “I was moved to tears by how accurately it captured my vision. As adults, we often forget that we’re allowed to have and chase dreams; this answer motivated me to try harder.” Miller is hardly alone in turning to AI for some much-needed manifestation, especially in these trying times. The hashtag #ChatGPTDreamLife has over 9 million related posts on TikTok, not to mention a life of its own on X and even LinkedIn. Some dreamweavers are asking AI to create images of themselves in personalized idyllic scenarios: working by the seaside, traveling the world, or healing from all medical issues.
The formula is simple: Tell ChatGPT what your lofty aspirations are, and it takes you through a frame-by-frame day in your dream life. While this may sound straightforward, hundreds of creators have had visceral, tear-jerking reactions to what follows. For 27-year-old social media manager Shruti Patwal, the exercise felt like an advanced form of visualization. When the chatbot portrayed her as a “confident, elegant, and resilient woman who travels, immerses herself in different cultures, and has creative hobbies,” she felt empowered. So much so that she took the trend a step further. “When I saw my future self, I decided to ask ChatGPT for tangible, actionable steps for how to get there,” she says.
Similarly, many others are asking AI to build daily task lists, workout goals, and career milestones from their dream-life answers. Whether it’s entrepreneurs and online creators or people struggling with finances, ailments, or relationships, the trend seems to be a universal salve. After all, who wouldn’t want to escape reality and be catapulted into their ideal day? “The prompt gives you a robust image of your future, which helps you lean into the feelings behind your goals and dreams,” explains Meadowlark Monaghan, 27, the founder of mental health initiative OR Consulting. “Getting a taste of the end result makes it so much easier to align your daily actions towards these plans.”
Monaghan’s viral TikTok on the dream life trend created a movement; her comments section alone is full of gratitude and shared success stories. The timing of the trend is also key. Following one of the most divisive presidential elections in U.S. history, people are reevaluating what’s important, and AI is helping them manifest new visions, complete with a to-do list for getting there. The broader post-digital moment also fuels our growing reliance on AI. Amid growing national instability and threats to reproductive rights, nothing is a given anymore. Typically, you would discuss your hopes and dreams with the people closest to you.
But as we wade through a global loneliness epidemic, relentless wars, and political uncertainty, it’s difficult to find the courage and energy to sit with another human and discuss your vulnerabilities and aspirations. Over the years, vague therapy-speak has crept into intimate relationships, while social media has only made it harder to sustain authentic connections. Although AI cannot replace human bonds, it is emerging as a unique companion: an ever-reliable source of comfort that seems to come with little personal judgment. There are, however, real risks; the mother of a teenage boy in Florida is currently suing the company Character.AI, claiming its chatbots are responsible for her son’s death.
This semblance of a safe space, even with a chatbot, is especially important at a time when we’re intent on presenting the best versions of ourselves online. Dr. Alexandra Diening, executive chair of the Human-AI Symbiosis Alliance, illustrates this with a case study. “A participant with narcissistic personality disorder reported being consistently dishonest with his human therapist, hoping to preserve his self-image. However, during interactions with Claude (an AI), he disclosed, for the first time, his true struggles,” she says. “This suggests that the absence of perceived judgment in AI interactions may facilitate greater honesty and introspection in individuals who are otherwise resistant to traditional therapeutic approaches.”
But like all bright and shiny ideas, artificial intelligence also comes with a bleak (and frankly terrifying) flip side. “The more we turn to AI for emotional support, the more we might find human relationships too complex, too unpredictable, and ultimately, less satisfying,” Diening adds. Over-relying on technologically advanced chatbots for everyday answers can also erode our personal agency. Just consider automation bias: the more people trust automated systems, the more readily they defer to them over their own judgment. And while ChatGPT’s romanticized answers may reflect an idealized version of yourself back at you, selling perfection on a platter, what happens when that dream life is out of reach in reality?
IRL, individuals have to account for how gender, race, class, religion, nationality, sexuality, and countless other hyper-individual factors shape their access to opportunity. Despite what AI would have us believe, hard work and tangible task lists cannot bridge centuries-old socio-political gaps. “As humans, we bring layers of context, lived experiences, and the ability to feel emotions, responding to niche cultural nuances in our daily lives—things that AI models cannot do,” shares Mona Ismail, co-founder of internet trends company Arca. “It’s our responsibility not to outsource our critical thinking to these models, but to use them as pathways to new perspectives, ideas, and inspiration, while staying mindful of the biases they may present over time.”
Most importantly, in asking ChatGPT to script a dream life, you’re handing over sensitive personal and professional information to AI platforms without being aware of the consequences. Diening echoes this concern: “Sensitive emotional data could be harvested, manipulated, or sold for commercial gain, leading to behavioral profiling that can be used against the user—whether for dynamic pricing, targeted advertising, or more insidious purposes like manipulating political views or mental health decisions.” Reports already suggest that big tech surveillance companies are using ChatGPT to conduct sentiment analysis to understand how web users feel about different products or topics.
It’s vital for emotionally driven internet users to understand that their interactions with ChatGPT, as idyllic as they may seem, do not exist in a vacuum. The dream life may conjure a warm and fuzzy inspirational feeling, but every bit of information fed to AI can have a long-term impact. The truth is, we don’t know what happens with our data, and that calls for caution. However nice the on-demand comfort of a fantasy existence may seem on the surface, it’s important to remain firmly rooted in reality.
Darshita Goyal is a culture and fashion writer. When she manages to tear herself away from TikTok and Selling Sunset, you can find her sampling cannolis or hunting down bucatini with the most parmesan.