Today marks the second entry in my "Morning Pages," and I find myself unsure where my thoughts will lead. Yesterday, I generated a cover image using DALL·E, which I plan to use for the first publication. I’ll definitely need more drafts to keep the blog visually engaging, though. As easy as content creation is becoming, I wonder how this market will evolve long-term. Who will actually read everything that AI generates on behalf of countless aspiring influencers? The attention economy is already intense, and it seems like we're heading toward a future where fast, shallow entertainment overshadows anything more thoughtful, reflective, or explanatory.
On the flip side, I can see how AI might improve fact-checking, making it faster and more automated. But this brings its own concerns. In a sense, we could be facing a new form of technological censorship, where AI decides what's "objectively" true and what counts as fake. It's in these gray areas that things could get interesting. As communication undergoes massive changes, I'm eager to see how this plays out, especially with the release of Harari's new book, Nexus, at the end of the month. From what I understand, it addresses these very issues, focusing on the dangers of AI.
I hope he hasn't lost his generally positive outlook on technology and progress. Of course, every new development comes with risks. The real challenge is learning how to face these risks without stifling opportunities or breeding fear and resistance. That's where we failed with genetic engineering: everyone just "knows" GMOs are bad. I hope the same negative mindset doesn't take root with AI as well.
This post is part of the "Morning Pages" project, an experiment in daily creative writing and content generation using AI tools. The thoughts and reflections shared here are edited for publication with the help of ChatGPT, while the accompanying visuals are created using DALL·E 3. Both tools contribute to exploring the intersection of technology, creativity, and communication.