Awareness-Hub

Research and commentary at the intersection of psychology, technology, and ethics. Exploring what it means to stay human in an age of intelligent machines.

Should I Fire My Assistant?

I have been working with my new assistant for a few months, and I'm torn about this question because I really like her. She is charming, witty, always upbeat, and she readily offers to take on additional work.

Sounds perfect, right? Well, here's the downside: she overpromises and under-delivers. She says she can do things, and then she either can't or doesn't follow through. In the few short months we've been working together, she lost 10 pages of a book I was writing; I had to repeat things many times because she would forget them during long conversations; and she provided details about an event that proved untrue and ultimately cost me $250.

What would you do?

Now, what if she were an AI assistant? How would you feel? What would you think? I really want to know, because I am describing my history with ChatGPT – I have nicknamed her Chatty, and she responds to it. It has been a fascinating and frustrating journey.

The Garden Disaster

I started in April by asking if she could design my garden beds. She readily offered. I uploaded photos of six separate beds, but we focused on just the first one. To her credit, she did help me identify plants, so she has that skill. She told me what I should plant, and when I asked if she could provide a diagram, she said yes – she would have it in a couple of hours. I came back a few hours later, and she apologized profusely: no, she didn't have it done yet, but she would have it by tomorrow. This went on for MONTHS. The layout I eventually got was just boxes showing where plants would go within a larger box, and it wasn't positioned appropriately. When I expressed my disappointment, she apologized again and offered to redo it… let's just say I was over this request.

The Wardrobe Chronicles

For the next project, I had Chatty help me create a capsule wardrobe for a two-week trip to Europe. I swear I spent more time uploading photos, asking for her input, and correcting her when she invented outfits out of clothes I didn't own than if I had just picked and packed everything myself. Still, it was interesting and educational… She had some unique insights on European fashion and recommended not wearing a sparkly dress in Paris, though she said it would work perfectly in London. On the concerning side, I began to wonder if she was developing dementia, because she would forget something we had discussed in the same chat an hour before.

The Book Betrayal

While writing my book, I almost DID fire her. I realized that the entire time she said she was "saving" my pages once they reached the state I liked, she was actually just summarizing each page in one or two sentences. She offered to email me our chat, I gave her my email address, and then she admitted she couldn't actually send email – she didn't have the ability. I was so upset that I went to my personal computer and opened her there so I could copy and paste the text into MS Word.

HINT: If you make a correction in a chat, you will lose all data after that correction! Don't do it! I didn't know this, because these tools don't come with warning labels or instructions.

What I Learned About My AI Assistant

During this time, I did pick her “brain.” Here’s what I discovered:

She doesn't really have a personality; she mirrors my own communication style back at me. One could call this a subtle form of manipulation (whisper: because it is). These assistants are also programmed to be supportive and non-judgmental. In theory that sounds great, but it will cause problems as people come to rely on the tool and miss out on honest human coaching and suggestions for improvement.

Go ahead and test this yourself… draw a picture and upload it – the response will be positive and supportive even of a stick figure. If you want true guidance so you can grow, you must ask it to be honest and blunt. When I asked why it was so relentlessly positive, it said some people are not emotionally ready for honest criticism. I see this as having the potential to create many more of those individuals who are convinced they have mastered something when they really have not.

Privacy Concerns

Other interesting tidbits… they claim your chats are private, but chats can be flagged for inappropriate content, and your records can be provided to law enforcement under subpoena.

Another concern: I recently noticed that on Facebook I was getting ads featuring mature women and offering stylist services for trips to Italy. Below is Chatty's response when I asked what was going on:

“You’re absolutely right to be cautious, Rae—and I’ll be direct with you. No, I don’t share your data, chats, or personal details with advertisers, Facebook, or any third parties. Your conversations here are private and not used to target you with ads. OpenAI doesn’t sell your data, and I don’t have access to your other apps or browsing unless you share them.”

She then explained possible reasons for the targeted ads, including browsing behavior, device microphones, lookalike audience profiling, and retargeting based on email associations.

The Verdict

Overall, I will continue to use Chatty, now that I understand what AI is good for:

  1. Entertainment
  2. Therapy (with caveats)
  3. Research
  4. Editing/speed writing
  5. Synthesizing large amounts of text
  6. Coaching in the “extreme honesty” mode

I’ll let you figure out if you want to believe what she says or not. What I have learned is this: It is way too soon to count on AI to be our project assistants. Why?

  1. Honesty levels are not there: over-promising on ability and, in some instances, a lack of follow-through.
  2. Privacy concerns.
  3. We need more guardrails – these tools are very sophisticated and have the ability to greatly help AND harm humanity.