Bookmarks #4
How it feels to get an AI email from a friend
> What this did feel like was, it felt as if my friend had buzzed their secretary over the intercom and barked at them to send me a letter, signed “R. Jeeves on behalf of ——————.”
>
> It felt like getting a birthday card with only the prewritten message inside, and no added well-wishes from the wisher’s own pen. An item off the shelf, paid for and handed over, transaction complete.
The last paragraph goes straight to the heart of much of the sadness around AI:
> Years from now, could an AI that was trained on all of my friend’s emails and texts and personal documents sound convincingly like them? Could it be so advanced that I wouldn’t even be able to tell that my friend hadn’t written to me at all? Possibly. And that idea saddens me the most.
I’ve had interesting discussions about AI at work. In a distributed company, how far could someone go with AI? Could you pass a technical interview by feeding it the questions and passing off its responses as your own? Could you contribute to a codebase? Would it even make a difference to employers whether a human or an AI had written the code?
It reminds me of the story of a developer who outsourced their work to China and kept the following work schedule, as reconstructed by the company’s security team from the employee’s browser history:
- 9:00 a.m. – Arrive and surf Reddit for a couple of hours. Watch cat videos.
- 11:30 a.m. – Take lunch.
- 1:00 p.m. – eBay time.
- 2:00-ish p.m. – Facebook updates, LinkedIn.
- 4:30 p.m. – End of day update e-mail to management.
- 5:00 p.m. – Go home.
Incidentally, the second search result for the original article was for a service that claims to be a “state-of-the-art tool that utilizes advanced NLP capabilities to generate concise, professional, and personalized email responses”.
Turning the tables on AI
> Tech companies big and small sell AI as something that thinks for us. It does replace thought with statistics—but it is not intelligent. No one knows what the future will bring. But is a future without thought a better future?
>
> Now, with a tool that might help us think… How about using AI not to think less but more?
This article turns the concept of AI on its head. It’s made me excited to try using these tools in a much more deliberate way - to help me think rather than to have them do the thinking for me. I would say, without hesitation, that iA has been providing the most level-headed, forward-thinking takes on these emerging technologies.
I’ll be honest: I have used the “Improve Writing” command in Raycast on a couple of my blog posts (including this one) and have found it helpful in making my writing clearer and more accessible. It serves as a starting point for refining the initial draft. I haven’t been taking advantage of iA Writer’s authorship tools, but I intend to change that.
I don't want anything your AI generates
> AI output is fundamentally derivative and exploitative (of content, labor and the environment).
Here is a harsher but no less valid perspective on AI. The first sentence distills the issues that seem inescapable with what we currently call AI - namely, LLMs. They are predatory in the truest sense of the word, consuming an immense amount of resources. Is that trade-off justified by their capabilities and potential benefits? I’m inclined to side with Cory here in shouting “no”, yet I find myself using AI tools. Not because they’re ubiquitous and inescapable - we’re not there yet - but because there are ways in which they can enhance rather than replace, ways in which they feel like powerful tools for creativity, research and knowledge management. It’s hard to reconcile the good with the evil and take a stance. For now, I remain on the fence, almost reluctantly.
⁂