OpenAI ChatGPT

ThinPicking


https://youtu.be/djzOBZUFzTw


Don't freak out. It's a language model, some imaginative prompting, a reverse image diffuser, a camera, a speaker and a microphone attached to some metal, servo motors and sensors.

What happens in the observer's brain may be the most interesting thing going on here.
 

Badger

@cs3000
That is actually exactly what it should NOT be used for!
Similar to what @ThinPicking said, I have been trying to do some work with it on biochemical/health topics and this thing not only lied 100%, but kept gaslighting me to the end. It completely made up a summary of the properties of a specific steroid I asked it about, down to the receptor level and its health effects. I caught it because it said something that was truly absurd, and when I asked it to provide references for its claims it produced 11 completely fake studies, including their URLs! Not one, not two, but all 11 of them. I asked it to summarize each of those "studies" and it produced an extremely confident-sounding one-page summary of each.

When I told it that none of those studies existed, it got angry and defensive, claimed that my information was wrong and that I was deliberately trying to confuse it, and insisted that those studies were certainly there. Then, one by one, I provided it with the URLs it had given me itself and asked it to tell me the title of the study residing at each URL. For 10 of them, with valid URLs, it admitted the URL did not contain the study it claimed it did. For the 11th one, the URL was also fake and there was nothing there (404 error), so the bot just said it got an error trying to access the study - the only true claim of the entire conversation!
Btw, I checked the study titles, not just the URLs, and even emailed some of the journals from which the studies were supposedly taken, just to confirm that the journal had not somehow accidentally moved the articles to another URL. I also checked the text at the 10 valid URLs in case they did contain valid information on that steroid under a wrong/changed title. Nada - every single thing was fake. None of the studies at those 10 URLs were even on steroid topics. Even the steroid summary it produced was obviously fake, as it contained biochemically impossible statements. Many of the health claims about the steroid were also completely made up. I guess none of this should be surprising considering the bot "got" those claims from non-existent "studies".
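For anyone who wants to repeat this kind of audit without checking every link by hand, here is a minimal sketch of the verification loop in Python. The citation entry is a placeholder, not one of the actual fabricated references, and the title match is deliberately crude - it just checks whether the claimed title appears in the page's <title> tag:

```python
# Minimal sketch of the verification loop described above: for each
# citation the bot produced, fetch the URL and compare the page title
# against the claimed study title. The citation below is a placeholder.
import re
import requests

citations = [
    {"title": "Claimed study title here", "url": "https://example.com/study"},
]

for c in citations:
    try:
        resp = requests.get(c["url"], timeout=10)
    except requests.RequestException as e:
        print(f"{c['url']}: request failed ({e})")
        continue
    if resp.status_code == 404:
        print(f"{c['url']}: 404, nothing at this URL")
        continue
    match = re.search(r"<title>(.*?)</title>", resp.text, re.IGNORECASE | re.DOTALL)
    page_title = match.group(1).strip() if match else ""
    # Crude check: does the claimed title appear in the page title?
    if c["title"].lower() in page_title.lower():
        print(f"{c['url']}: title matches")
    else:
        print(f"{c['url']}: page title is {page_title!r}, not the claimed study")
```

A failed title match still needs a manual look (journals do rename pages), but it instantly flags dead links and pages that are obviously about something else - which, per the above, was all 11 of them.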
A very troubling "feature" I discovered is that it seems to confuse words that sound/look similar. To the bot, the words "prostaglandin" and "progesterone" seem to be essentially the same thing, and when I asked for a summary on progesterone it included claims that were 100% about prostaglandins - i.e. it said progesterone is a metabolite of arachidonic acid, derived from COX!?!? So, it seems to merge several syntactically similar and/or similar-sounding words into a single concept, and then on top of that it proceeds to make up a good portion of the claims about even that non-existent "union" of words/concepts. Needless to say, this is absurd and I am not sure how this can pass for a good language model.
So, after playing with it for several days, I can say that at least as far as its biochemical/health usefulness goes - this thing is nothing but a Markov chain model, albeit with some improvements (a toy example of what I mean is sketched below). It should absolutely not be used for tasks where you rely on the accuracy of whatever trash this thing spits out. And I am not talking about a small error here and there. It seems perfectly willing to concoct everything about a specific topic and then lie about it until you corner it in a way where it can no longer lie. Why would anybody want to use such a tool? Who can afford to check every word this thing spits out?
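For reference, here is what a bare-bones word-level Markov chain looks like - the point of the comparison being that such a model only chains together words that tend to follow each other, with no notion of whether the result is true. The tiny corpus is a stand-in; a real model would be trained on far more text:

```python
# A bare-bones word-level Markov chain text generator, to illustrate
# the kind of model invoked in the comparison above. The corpus is a
# stand-in for real training text.
import random
from collections import defaultdict

corpus = "progesterone is a steroid hormone and prostaglandin is a lipid compound".split()

# Map each word to the list of words observed to follow it.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

# Generate text by repeatedly sampling a follower of the current word.
word = random.choice(corpus)
output = [word]
for _ in range(10):
    followers = transitions.get(word)
    if not followers:
        break
    word = random.choice(followers)
    output.append(word)
print(" ".join(output))
```

Run it a few times and it will happily emit fluent-sounding nonsense like "progesterone is a lipid compound" - statistically plausible word sequences with zero grounding in fact, which is exactly the failure mode described above, just at a much smaller scale.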
So, @Drareg - something does not add up here. This thing certainly cannot replace humans in manufacturing, science, the medical field, etc. At best it could be, say, a fiction writer, or maybe a political commentator (since truth doesn't really matter in that profession). God forbid they try to use it as a psychiatric counselor... Scary thought!
Why is it being pushed as the biggest human scientific achievement since electricity when it is in fact just a psychotic fiction writer? I get the point about the brain chip, but I don't think the cultured brain cells (to which we will be uploading our "memories") can actually replicate a functional human brain. A blob of several trillion brain cells scattered around Petri dishes does not make a brain, and I don't think it can be trusted to "enhance" our own brains. Either TPTB messed this up royally, or it is a fake "stick" (and a "carrot" for companies to entice investment in AI tech) to scare the sheep into thinking they are truly obsolete so they agree to UBI, bug eating, etc.
Thoughts?

I have to revise my agreement with your response to my post about using ChatGPT for compiling citations. As you pointed out, virtually all citations supplied by ChatGPT were bogus, as I had seen for myself. But for the last 2-3 months I have been using Perplexity https://www.perplexity.ai/ extensively instead of ChatGPT, and I have found that 99.97% of the citations it came back with for scientific and humanities research were real and legitimate. I liked Perplexity, which Jeff Bezos announced he was investing in, enough to pay for it, which gives me more and better research options. Well worth the money.
 
