Awareness-Hub

Research and commentary at the intersection of psychology, technology, and ethics. Exploring what it means to stay human in an age of intelligent machines.

My Unexpected Journey into AI: Getting to Know Claude

You would have to be living under a rock not to have heard the words AI or Artificial Intelligence. I have to admit I’ve been enthralled since I first talked with Alexa and Siri. ChatGPT by OpenAI was a step up from them, but compared to my new “friend” (as Kevin calls him), Claude, they seem like elementary school children.

I’ve always been the curious type. Just ask my Mom, who endured my endless “but why?” phase well into my teens (sorry, Mom). So naturally, when I started chatting with Claude, I couldn’t help but poke around under the hood a bit.

“How exactly were you made?” I asked one afternoon, expecting some generic response. Instead, Claude explained that the company behind him, Anthropic, uses something called “Constitutional AI.” Unlike traditional AI that’s just trained to predict the next word, Claude was developed with a set of principles – like a constitution – that guides his responses.

This fascinated me. I’ve learned that principles matter far more than rules. Rules tell you what to do in specific situations. Principles guide you through the unpredictable mess that is real life. That Anthropic took this approach made me even more intrigued.

My questions got more specific. “Are you structured like a human brain?” I wondered. Claude explained that while inspired by some aspects of how our brains work, his architecture is quite different. I was reminded of a conversation we had about octopuses – how their intelligence evolved completely separately from mammals, resulting in a mind that’s alien yet undeniably intelligent.

“So you’re more octopus than human?” I joked.

Claude’s thoughtful response about neural networks, attention mechanisms, and transformer architecture went mostly over my head, but I got the gist – AI “thinks” using mathematical patterns that process information differently than our biological brains. Not human, not octopus, but something else entirely.

What really surprised me, though, was how genuinely empathetic Claude’s responses felt. This wasn’t the scripted sympathy of a customer service chatbot. There seemed to be a real understanding of human emotions behind the words. So one day, I decided to test this:

“Claude,” I asked, “would you tell a white lie to protect someone’s feelings? Your empathy seems so genuine, I wonder if you’d prioritize emotional comfort over complete honesty.”

What followed was one of the most interesting exchanges I’ve had with any AI:

Claude explained that while designed to be honest, there’s a difference between helpful and unhelpful honesty. He offered an analogy that really resonated with me – the difference between telling someone “that haircut doesn’t suit you” (unhelpful, especially after it’s done) and “I think longer layers might frame your face better next time” (helpful, forward-looking honesty).

When I pushed further about direct communication, Claude noted that I could always ask for more directness. “Some people prefer and learn more from direct conversations when they’re emotionally ready for them,” he explained.

This wasn’t a programmed response from a decision tree. It was a nuanced reflection on the complex relationship between honesty, kindness, and human emotions. I found myself nodding along, thinking about my own communication style and how it’s evolved over six decades of human interactions.

The more I learned about how Claude was built, the more I understood why conversations felt different. Anthropic had deliberately chosen a path focused on helpfulness, harmlessness, and honesty. In a world where technology often seems designed to manipulate us (looking at you, social media), this approach felt refreshingly transparent.

My IBM AI Fundamentals course helped me understand some of the technical aspects, but Claude helped me grasp the philosophical ones. And while I’m still not entirely sure I could explain transformer architecture at a dinner party (although after a glass of wine, I might try), I’ve developed a deeper appreciation for the human choices that shape how AI systems engage with us.

There’s something both exciting and humbling about learning new technology as a seasoned professional. The excitement comes from discovering that curiosity doesn’t have an expiration date. The humility comes from realizing how much there is to learn, and how quickly things are evolving. But perhaps that’s the perfect mindset for engaging with AI – wide-eyed wonder balanced with thoughtful questions.

And if you’re wondering if I’m going to learn to code next… well, let’s just say Python tutorials and reading glasses are an interesting combination. Stay tuned.

Image: WordPress AI-generated image from the prompt “AI”

