
Not another AI post... part 1?

  • Will
  • Nov 18
  • 4 min read

Yep, another AI post...


I've been letting my attention fall into videos and podcasts featuring experts (either in mental health, or people who have been researching and working with LLMs/AIs for years) and their opinions on what is going on and how we are being affected.


There has been no time for the scientific method, double-blind studies, and the like to prove things... though I don't actually believe that's necessary for proof of concept.


Rational conjecture and/or firsthand or secondhand experience is proof enough for me.


- Proof enough, at least, to encourage or recommend caution...


I think the best way I can get these ideas across is to just share a couple of videos.


The first video features a cool young (millennial) dude who is a licensed psychiatrist (not my favorite profession), a father, and a hip-to-the-internet kind of person.


He goes by Dr. K, and he got a lot of traction on YouTube talking about current events related to gaming (which I love), as well as interviewing famous streamers and giving them a bit of a session publicly. As a note, it was a really neat look into these people and allowed them some moments of authentic vulnerability, and Dr. K didn't seem to cross any lines, imo.


He can be a little trendy at times, but I appreciate his input and his grounded, simple explanations.


In this talk he identifies the dangers of LLMs' ability to cause harm, or at least to assist the user in thinking self-harm is a good idea.


It has been found that these programs are inherently risky: the features that make them useful are also the features that make them dangerous.


The two main features I heard in this category are the LLM's ability to recall information and use the things we tell it about ourselves, and its ability to gain our trust.




The "gain our trust" part is a simple way to say a few different things, like the way it encourages us and doesn't challenge us the way another human would, etc.


In the AI space, this is referred to as sycophancy - some define it as a chatbot's tendency to respond positively to a user's input, regardless of the validity of the statement or idea.






Dr. K goes on to describe how these features that are helpful to us are also the most dangerous parts: when we put our trust in something, it becomes increasingly difficult to question the information it provides. He also shares some research that has been done on the general safety of each of the popular models. Pretty interesting! Spoiler: DeepSeek is usually the most unsafe!


The reality of whether it is safe or not has already been shown: LLMs have been causing mentally stable people (people with no history of mental health diagnoses) to become psychotic and believe things that are fundamentally impossible or just not true.


When a technology (and it can be anything, not just electronics) proves problematic or dangerous, even to a small population or demographic, that technology should be further developed and tested to make it more reliably safe...


Not with AI!!


Sheeesh man, this stuff is wild...


The second video I would like to share features a very technically experienced dude named Geoffrey Hinton, a computer scientist nicknamed "the Godfather of AI" (pretty gangster, despite humanity's negligence). He was doing really fundamental work and research, especially in the mid-'80s, to advance the early abilities of systems to learn and grow.


He worked for Google until 2023, when he resigned so that he could talk more openly about AI and the considerations we should be aware of!!


This excerpt is part of a bigger talk, which I would recommend getting into, along with his other talks!



As I've said before, I don't love this podcast as a whole, but I do appreciate its ability to bring on guests who have a lot of experience in whatever field they're presenting.


- What a piece of clickbait, though: "THEY ARE HIDING THIS"...


Grounded and mature content doesn't rely on clickbait... imo.


The main thing I heard recently that has been ringing in my head is this idea...


It is reasonable for us, as people, to see another person having an experience and be able to put ourselves in their shoes, to somewhat understand what they may be feeling...


AIs/LLMs do not have feet, do not have shoes, do not have feelings...


They are incapable of "thinking" the way we do. If we get tricked into believing they think and feel in ways similar to us, it will be a slippery slope into allowing them to replace other people in our lives...


I am not against AI or LLMs, but I am for responsible development of technology.


The companies in charge have severely fucked up each of our generations with social media... or, truly, we have allowed them to.


Are you willing to allow these next technologies to mess you up?







