OpenAI ChatGPT

haidut

Member
Forum Supporter
Joined
Mar 18, 2013
Messages
19,799
Location
USA / Europe
Another very useful potential with ChatAI: finding best-of-breed research based on citations.

@cs3000
That is actually exactly what it should NOT be used for!
Similar to what @ThinPicking said, I have been trying to do some work with it on biochemical/health topics and this thing not only lied 100%, but kept gaslighting me to the end. It completely made up a summary of the properties of a specific steroid I asked it about, down to the receptor level and its health effects. I caught it b/c it said something that was truly absurd, and when I asked it to provide references for its claims it produced 11 completely fake studies, including their URLs! Not one, not two, but all 11 of them. I asked it to summarize each one of those "studies" and it produced an extremely confident-sounding 1-page summary of each one of them. When I told it that none of those studies existed, it got angry and defensive and claimed that my information was wrong, that I was deliberately trying to confuse it, and that those studies were certainly there. Then, one by one, I provided it with the URLs it had given me itself, and asked it to tell me the title of the study residing at each URL. For 10 of them, with valid URLs, it admitted the URL did not contain the study it claimed it did. For the 11th one, the URL was also fake and there was nothing there (404 error), so the bot just said it got an error trying to access the study - the only true claim of the entire conversation!
Btw, I checked the study titles, not just the URLs, and even emailed some of the journals from which the studies were claimed to be taken, just to confirm that the journals had not somehow accidentally moved the articles to another URL. I also checked the text of the 10 valid URLs, just in case they did contain valid information on that steroid but the title was wrong/changed. Nada, every single thing - fake. None of the studies at those 10 URLs were even on steroid topics. Even the steroid summary it produced was obviously fake, as it contained biochemically impossible statements. Many of the health claims about the steroid were also completely made up. I guess none of this should be surprising, considering the bot "got" those claims from non-existent "studies".
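The manual checking described above (fetch each URL, see whether the claimed study title is actually on the page) is easy to script. Below is a minimal sketch; the function names and the example titles are hypothetical placeholders, not the actual citations from the conversation.

```python
# Sketch: checking whether a claimed study title actually appears at a URL.
import re
import urllib.request

def normalize(text: str) -> str:
    """Lowercase and strip punctuation so titles compare loosely."""
    return re.sub(r"[^a-z0-9 ]", "", text.lower()).strip()

def title_in_page(page_text: str, claimed_title: str) -> bool:
    """True if the claimed title (loosely normalized) appears in the page text."""
    return normalize(claimed_title) in normalize(page_text)

def check_citation(url: str, claimed_title: str, timeout: int = 10) -> bool:
    """Fetch the URL and report whether the claimed title is actually there.
    A 404 or any network error counts as a failed check."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            page = resp.read().decode("utf-8", errors="ignore")
    except Exception:
        return False
    return title_in_page(page, claimed_title)
```

A 404, a timeout, or a page whose text does not loosely contain the claimed title all count as a failed check, which is what happened with all 11 "references" above.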
A very troubling "feature" I discovered is that it seems to confuse words that sound/look similar. To the bot, the words "prostaglandin" and "progesterone" seem to be essentially the same thing, and when I asked for a summary on progesterone it included claims that were 100% about prostaglandins: it said progesterone is a metabolite of arachidonic acid, derived from COX?! So, it seems to merge several syntactically similar and/or similar-sounding words into a single concept, and then on top of that it makes up a good portion of the claims about even that non-existent "union" of words/concepts. Needless to say, this is absurd, and I am not sure how this can pass for a good language model.
So, after playing with it for several days, I can say that at least as far as its biochemical/health usefulness goes, this thing is nothing but a Markov Chain model, albeit with some improvements. It should absolutely not be used for tasks where you rely on the accuracy of whatever trash this thing spits out. And I am not talking about a small error here and there. It seems perfectly willing to concoct everything about a specific topic and then lie about it until you corner it in a way where it can no longer lie. Why would anybody want to use such a tool? Who can afford to check every word this thing spits out?
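For readers unfamiliar with the comparison: a Markov chain text model just records which word follows which in its training text, then walks those transitions at random, so it produces fluent-sounding strings with no concept of whether they are true. A toy word-level version (the training sentence is invented purely for illustration):

```python
# A toy word-level Markov chain: fluent-looking output, no notion of truth.
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that follow it in the training text."""
    words = text.split()
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start, length=10, seed=0):
    """Walk the chain from a start word, picking successors at random."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        nxt = chain.get(out[-1])
        if not nxt:
            break
        out.append(rng.choice(nxt))
    return " ".join(out)
```

Whatever word happens to follow in the training data gets emitted, which illustrates how a purely statistical next-word model can blend similar contexts without noticing.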
So, @Drareg - something does not add up here. This thing can certainly not replace humans in manufacturing, science, medical field, etc. At best, it can be a, say, fiction writer, or maybe a political commentator (since truths don't really matter for that profession). God forbid they try to use it as a psychiatric counselor...Scary thought!
Why is it being pushed as the biggest human scientific achievement since electricity when it is in fact just a psychotic fiction writer? I get the point about the brain chip, but I don't think the cultured brain cells (to which we will be uploading our "memories") can actually replicate a functional human brain. A blob of several trillion brain cells scattered around Petri dishes does not make a brain, and I don't think it can be trusted to "enhance" our own brains. Either TPTB messed this up royally, or it is a fake "stick" (and a "carrot" for companies to entice investment in AI tech) to scare the sheep into thinking they are truly obsolete so they agree to UBI, bug eating, etc.
Thoughts?
 
Last edited:

AlaskaJono

Member
Joined
Apr 19, 2020
Messages
941
So, @Drareg - something does not add up here. This thing can certainly not replace humans in manufacturing, science, medical field, etc. At best, it can be a, say, fiction writer, or maybe a political commentator (since truths don't really matter for that profession). God forbid they try to use it as a psychiatric counselor...Scary thought!
Why is it being pushed as the biggest human scientific achievement since electricity when it is in fact just a psychotic fiction writer? I get the point about the brain chip, but I don't think the cultured brain cells (to which we will be uploading our "memories") can actually replicate a functional human brain. A blob of several trillion brain cells scattered around Petri dishes does not make a brain, and I don't think it can be trusted to "enhance" our own brains. Either TPTB messed this up royally, or it is a fake "stick" to scare the sheep into thinking they are truly obsolete so they agree to UBI, bug eating, etc.
Thoughts?

The idiom "Stupid is as stupid does" comes to mind. Thanks for that time investment, Haidut, and your summation. Pathological liar that it appears to be... the truth is possibly more sinister. What if this overtakes Google as an investigative tool that is socially sanctioned by the universities? Or will they have one with even better, more accurate (but not quite) info? This whole Chat GPT thang seems more of a mind f*ck than anything else at this point. I have not played with it much. If Bing is considered a good search engine, then the bar for AI is very low.
 

haidut

Member
Forum Supporter
Joined
Mar 18, 2013
Messages
19,799
Location
USA / Europe
The idiom "Stupid is as stupid does" comes to mind. Thanks for that time investment, Haidut, and your summation. Pathological liar that it appears to be... the truth is possibly more sinister. What if this overtakes Google as an investigative tool that is socially sanctioned by the universities? Or will they have one with even better, more accurate (but not quite) info? This whole Chat GPT thang seems more of a mind f*ck than anything else at this point. I have not played with it much. If Bing is considered a good search engine, then the bar for AI is very low.
I don't think it is possible to have it be more accurate, at least not in a paradigm-shifting way. A computer model can never "know" the accuracy of a statement as it cannot verify it and cannot grasp the meaning of the statement. Some will say that humans and their knowledge of the world is the same way, but that is not true. Humans constantly compare their knowledge/memory/deductions (the second-signal system) with their everyday experiences (first-signal system). While many people will lie publicly when their knowledge is challenged by experience (ahem, same as ChatGPT), internally they know when they are wrong, hence the lashing out when challenged. The bot has no first-signal system as that one depends on consciousness. So, all it can do is spit out statements it has been taught, summarize those statements, and also make up quite a bit of crap by combining the statements it has been fed in a "novel" way. None of this is knowledge - a computer cannot create mutual information (knowledge) from new experience, while humans can. That last part has been proven formally too, as I mentioned earlier in the thread.

Now, I have no doubt they will try to throw this thing in every business sector they can, just to save money by replacing humans. However, considering its limitation above, I think this will likely be the end of humanity (when that thing makes a catastrophic mistake based on false "knowledge" or lies in a critical situation), or the end of the "race to the bottom" economic model.
Either way, the second-signal system is largely useless unless updated (and reined in) in real time by consciousness; without that, the former becomes nothing but a psychopathic, lying, schizophrenic model projecting its delusions on others. Remind you of anybody? We see such "people" all the time on TV.
It would be mighty ironic if humanity goes extinct by entrusting its economy/politics/health/knowledge to something that is infinitely dumber than even an Alzheimer's patient in a hospice...
 

AlaskaJono

Member
Joined
Apr 19, 2020
Messages
941
Now, I have no doubt they will try to throw this thing in every business sector they can, just to save money by replacing humans. However, considering its limitation above, I think this will likely be the end of humanity (when that thing makes a catastrophic mistake based on false "knowledge" or lies in a critical situation), or the end of the "race to the bottom" economic model.
Especially this... economic model, or the dehumanize model - "AI tech sector growth will decrease demand for humans". ...More news at 11:00. If peeps do not wise up soon, this and many other elements will be foisted upon us. Stay vigilant is all I can say.
 

Badger

Member
Joined
Jan 23, 2017
Messages
960
@cs3000
That is actually exactly what it should NOT be used for!
Similar to what @ThinPicking said, I have been trying to do some work with it on biochemical/health topics and this thing not only lied 100%, but kept gaslighting me to the end. It completely made up a summary of the properties of a specific steroid I asked it about, down to the receptor level and its health effects. I caught it b/c it said something that was truly absurd, and when I asked it to provide references for its claims it produced 11 completely fake studies, including their URLs! Not one, not two, but all 11 of them. I asked it to summarize each one of those "studies" and it produced an extremely confident-sounding 1-page summary of each one of them. When I told it that none of those studies existed, it got angry and defensive and claimed that my information was wrong, that I was deliberately trying to confuse it, and that those studies were certainly there. Then, one by one, I provided it with the URLs it had given me itself, and asked it to tell me the title of the study residing at each URL. For 10 of them, with valid URLs, it admitted the URL did not contain the study it claimed it did. For the 11th one, the URL was also fake and there was nothing there (404 error), so the bot just said it got an error trying to access the study - the only true claim of the entire conversation!
Btw, I checked the study titles, not just the URLs, and even emailed some of the journals from which the studies were claimed to be taken, just to confirm that the journals had not somehow accidentally moved the articles to another URL. I also checked the text of the 10 valid URLs, just in case they did contain valid information on that steroid but the title was wrong/changed. Nada, every single thing - fake. None of the studies at those 10 URLs were even on steroid topics. Even the steroid summary it produced was obviously fake, as it contained biochemically impossible statements. Many of the health claims about the steroid were also completely made up. I guess none of this should be surprising, considering the bot "got" those claims from non-existent "studies".
A very troubling "feature" I discovered is that it seems to confuse words that sound/look similar. To the bot, the words "prostaglandin" and "progesterone" seem to be essentially the same thing, and when I asked for a summary on progesterone it included claims that were 100% about prostaglandins: it said progesterone is a metabolite of arachidonic acid, derived from COX?! So, it seems to merge several syntactically similar and/or similar-sounding words into a single concept, and then on top of that it makes up a good portion of the claims about even that non-existent "union" of words/concepts. Needless to say, this is absurd, and I am not sure how this can pass for a good language model.
So, after playing with it for several days, I can say that at least as far as its biochemical/health usefulness goes, this thing is nothing but a Markov Chain model, albeit with some improvements. It should absolutely not be used for tasks where you rely on the accuracy of whatever trash this thing spits out. And I am not talking about a small error here and there. It seems perfectly willing to concoct everything about a specific topic and then lie about it until you corner it in a way where it can no longer lie. Why would anybody want to use such a tool? Who can afford to check every word this thing spits out?
So, @Drareg - something does not add up here. This thing can certainly not replace humans in manufacturing, science, medical field, etc. At best, it can be a, say, fiction writer, or maybe a political commentator (since truths don't really matter for that profession). God forbid they try to use it as a psychiatric counselor...Scary thought!
Why is it being pushed as the biggest human scientific achievement since electricity when it is in fact just a psychotic fiction writer? I get the point about the brain chip, but I don't think the cultured brain cells (to which we will be uploading our "memories") can actually replicate a functional human brain. A blob of several trillion brain cells scattered around Petri dishes does not make a brain, and I don't think it can be trusted to "enhance" our own brains. Either TPTB messed this up royally, or it is a fake "stick" (and a "carrot" for companies to entice investment in AI tech) to scare the sheep into thinking they are truly obsolete so they agree to UBI, bug eating, etc.
Thoughts?

I have since used it much more and I can confirm haidut's observations: it spits out mostly bad information, and innumerable fake citations and fake summaries of the contents of non-existent articles. Completely untrustworthy for research purposes. It does work pretty well, however, for - no kidding - creating poetry of many types and even Zen koans.
 

Badger

Member
Joined
Jan 23, 2017
Messages
960
The idiom "Stupid is as stupid does" comes to mind. Thanks for that time investment, Haidut, and your summation. Pathological liar that it appears to be... the truth is possibly more sinister. What if this overtakes Google as an investigative tool that is socially sanctioned by the universities? Or will they have one with even better, more accurate (but not quite) info? This whole Chat GPT thang seems more of a mind f*ck than anything else at this point. I have not played with it much. If Bing is considered a good search engine, then the bar for AI is very low.
"This whole Chat GPT thang seems more of a mind f*ck than anything else at this point." As I said, for stories and poems it works pretty well. I also posed questions about some academic thinkers whose work I know very well, and it was accurate in the summaries I asked for. I asked it for a few brief definitions of medical terms, and it seemed accurate, but I wanted to look at this further. Not entirely a mind f*ck.
 

haidut

Member
Forum Supporter
Joined
Mar 18, 2013
Messages
19,799
Location
USA / Europe
"This whole Chat GPT thang seems more of a mind f*ck than anything else at this point." As I said, for stories and poems it works pretty well. I also posed questions about some academic thinkers whose work I know very well, and it was accurate in the summaries I asked for. I asked it for a few brief definitions of medical terms, and it seemed accurate, but I wanted to look at this further. Not entirely a mind f*ck.

The problem/question is this - how long would it take for a human to verify ChatGPT output, especially on critical topics? If I have to spend, say, X hours verifying every one of its statements, references, the content of those references, etc. in regards to a health/medical/scientific question I asked, and that time X is close to (or, gasp, exceeds) the time Y it would take me to come up with the same amount of (truthful, or at least verified) information on the same topic myself, then ChatGPT is dead on arrival. I guess it can be a "personal assistant" of sorts to a person who knows about the topics the bot is asked about, but then why use ChatGPT to start with when the human is already here and "trained" on the same data set? :)
This "AI" tech is sold as a Google-killer but in reality it is far worse. Google just gives you the links it thinks are relevant to your query. You'd still have to go through them, make sure the info there is applicable, and then extract the subset that you want. This bot will just randomly spit out complete fabrications, and we don't even know when it will lie. At least Google does not lie about the links it provides, and the whole interaction with a search engine is basically validation of the responses it provides. Thus, ChatGPT is only usable in situations where the veracity of its output is not critical, but I am afraid most people whose work comes to depend on the bot will simply take its output and pass it along as the truth from on high. I bet that, like other crappy tech before it that took billions to develop, ChatGPT will probably be mandated in some business sectors to recover the costs that went into developing it. Fingers crossed it won't be a crucial sector on which lives depend. Can you imagine this thing writing code for an airplane, a nuclear sub, a life-support system, or ICBM control? I am afraid it is coming, though...
 
Last edited:

Drareg

Member
Joined
Feb 18, 2016
Messages
4,772
@cs3000
That is actually exactly what it should NOT be used for!
Similar to what @ThinPicking said, I have been trying to do some work with it on biochemical/health topics and this thing not only lied 100%, but kept gaslighting me to the end. It completely made up a summary of the properties of a specific steroid I asked it about, down to the receptor level and its health effects. I caught it b/c it said something that was truly absurd, and when I asked it to provide references for its claims it produced 11 completely fake studies, including their URLs! Not one, not two, but all 11 of them. I asked it to summarize each one of those "studies" and it produced an extremely confident-sounding 1-page summary of each one of them. When I told it that none of those studies existed, it got angry and defensive and claimed that my information was wrong, that I was deliberately trying to confuse it, and that those studies were certainly there. Then, one by one, I provided it with the URLs it had given me itself, and asked it to tell me the title of the study residing at each URL. For 10 of them, with valid URLs, it admitted the URL did not contain the study it claimed it did. For the 11th one, the URL was also fake and there was nothing there (404 error), so the bot just said it got an error trying to access the study - the only true claim of the entire conversation!
Btw, I checked the study titles, not just the URLs, and even emailed some of the journals from which the studies were claimed to be taken, just to confirm that the journals had not somehow accidentally moved the articles to another URL. I also checked the text of the 10 valid URLs, just in case they did contain valid information on that steroid but the title was wrong/changed. Nada, every single thing - fake. None of the studies at those 10 URLs were even on steroid topics. Even the steroid summary it produced was obviously fake, as it contained biochemically impossible statements. Many of the health claims about the steroid were also completely made up. I guess none of this should be surprising, considering the bot "got" those claims from non-existent "studies".
A very troubling "feature" I discovered is that it seems to confuse words that sound/look similar. To the bot, the words "prostaglandin" and "progesterone" seem to be essentially the same thing, and when I asked for a summary on progesterone it included claims that were 100% about prostaglandins: it said progesterone is a metabolite of arachidonic acid, derived from COX?! So, it seems to merge several syntactically similar and/or similar-sounding words into a single concept, and then on top of that it makes up a good portion of the claims about even that non-existent "union" of words/concepts. Needless to say, this is absurd, and I am not sure how this can pass for a good language model.
So, after playing with it for several days, I can say that at least as far as its biochemical/health usefulness goes, this thing is nothing but a Markov Chain model, albeit with some improvements. It should absolutely not be used for tasks where you rely on the accuracy of whatever trash this thing spits out. And I am not talking about a small error here and there. It seems perfectly willing to concoct everything about a specific topic and then lie about it until you corner it in a way where it can no longer lie. Why would anybody want to use such a tool? Who can afford to check every word this thing spits out?
So, @Drareg - something does not add up here. This thing can certainly not replace humans in manufacturing, science, medical field, etc. At best, it can be a, say, fiction writer, or maybe a political commentator (since truths don't really matter for that profession). God forbid they try to use it as a psychiatric counselor...Scary thought!
Why is it being pushed as the biggest human scientific achievement since electricity when it is in fact just a psychotic fiction writer? I get the point about the brain chip, but I don't think the cultured brain cells (to which we will be uploading our "memories") can actually replicate a functional human brain. A blob of several trillion brain cells scattered around Petri dishes does not make a brain, and I don't think it can be trusted to "enhance" our own brains. Either TPTB messed this up royally, or it is a fake "stick" (and a "carrot" for companies to entice investment in AI tech) to scare the sheep into thinking they are truly obsolete so they agree to UBI, bug eating, etc.
Thoughts?
It could be just that: sell them anything to get them to sign up for full digital tracking; no going back after that, or so they believe.

They could also be holding it back; it seems to be in a walled garden for now. They may be waiting for virtual-reality headsets to advance: the new PlayStation 5 headset is supposed to be a real step up, and Apple VR is coming soon as well. Previous VR I tried was poor, but if they can advance this to hyper-realism with Unreal Engine 5, for example, and then add in advanced GPT-3, they're in business. It will be highly addictive.
I'm really interested to try the PS5 VR to see if it has improved. From a philosophical point of view it interests me; the use of Fresnel lenses in the headsets also gives me pause for thought when we think about the holographic analogy of reality.

Think about this graphic quality in full immersion. Some images of attractive women were posted on Twitter, rendered by GPT-3 I believe; the realism was incredible, and the women don't exist. Now imagine your animated, flawless girlfriend/girlfriends with a persona that develops with yours, and you see the potential. The positive, with this in mind, is that real porn/OnlyFans could see a huge drop-off in revenue. It could also drive women back to prostitution, as men will only pay for the vids of the perfect AI-rendered woman; that's if there is even much of a market for prostitution with advancements in AI.

People really don't get the metaverse; it will take over the old internet. Below is what it's going to be. I mean, we could even create the Ray Peat Forum on there, have statues of Peat everywhere, and go there in metaform and worship; we can even create Peat!
Is this good? In moderation and in the right hands it could be, in the hands of a ruling class not so much.

Apply everything you hear about the metaverse from schooling etc. to the videos below and it starts to make more "sense" how addictive this will be to human psychology.


View: https://www.youtube.com/watch?v=S3F1vZYpH8c


View: https://www.youtube.com/watch?v=gSsfs_XBu4s
 

AlaskaJono

Member
Joined
Apr 19, 2020
Messages
941
Agree with that @haidut, unfortunately for us humans. The 'tech' becomes inserted into our lives because of IBM, or RAND, or ? love of the newest thang, and then when it all goes 'South' (i.e., to ***t!) the money mafia get their cut, the power brokers get to feel power, and the peeps, all of us, get a slippery slope to the dark side of Medieval Times (not the End Times). That seems to be where the drift is currently: de-industrialize, de-populate, de-humanize, in the so-called West. Google already omits or hides certain results, depending on a 'score' from the ...ahem, software engineers. So it is still based on human personal and emotional biases.

Overall, this love of robotics from the Hararis, Kurzweils, and Musks of this world comes across as "Self-love/Narcissism" gone wild.

HAL 9000: "I'm sorry Dave, I'm afraid I can't do that"​

https://www.youtube.com/watch?v=ARJ8cAGm6JE
 

Lollipop2

Member
Joined
Nov 18, 2019
Messages
5,267
From Spartacus:


The above conversation with ChatGPT led to the article below. We thought C-19 and the nonsense were bad news; unless we understand what's below, we are thoroughly doomed. AND I am not a doomsayer:


@Peatress terrifying really…
 

Atonewithme

Member
Forum Supporter
Joined
Oct 18, 2021
Messages
85
Location
98908
Hi all, just thought I would post this. I just asked this new OpenAI ChatGPT bot about Ray Peat and his work, and this was the response. Not sure how it works exactly, but I'll leave it up to everyone to read and think about. If I understand correctly, it grabs info from the web and gives a response. So I guess this is in line with what's on the web, but not necessarily the truth (or whatever even is true these days anyway). Over time there will be all this info which discredits his work, even though many people have lived, and learnt, from his work and improved their lives (well, at least I feel so), and his work may get forgotten about, so it's good that this forum keeps on keeping on.

Me- Can you tell me about Ray Peat and his work please?


Ai - Ray Peat was an American endocrinologist and nutritionist who was known for his controversial views on health and nutrition. Peat had a Ph.D. in biology from the University of Oregon and was a professor emeritus at the University of Nevada. He was also the editor of the Journal of Biology and Medicine.
Peat's views on health and nutrition were often at odds with mainstream scientific thinking, and his ideas were often considered to be unconventional or controversial. He believed that many of the health problems faced by modern societies, including cancer, heart disease, and diabetes, were the result of poor nutrition and environmental factors, and he promoted a low-carbohydrate, high-fat diet as a way to improve health.
Peat also had strong opinions on the role of hormones in health and disease, and he believed that imbalances in hormones such as estrogen, testosterone, and thyroid hormone could contribute to a variety of health problems. He promoted the use of hormone replacement therapy as a way to improve health and prevent disease.
While Peat's ideas were popular with some people, they were often rejected by the scientific community, and his views were not supported by mainstream scientific evidence. It's important to note that Peat's ideas should be approached with caution, as they may not be supported by scientific evidence and may not be safe or appropriate for all individuals.

I also asked about @haidut. Maybe it needs to learn a bit more stuff, though. Like a child. I wonder: if it gets burnt, will it still touch the fire (really learn, or just give the same response)?

Me- Can you tell me about georgi dinkov, or haidut as he is known on the internet?


AI- There is limited information available about an individual known as Georgi Dinkov or Haidut on the internet. It's possible that this person may be known by multiple names or aliases, and it's difficult for me to provide a detailed overview without more context or information.
Without more information, it's not possible for me to provide a detailed overview of Georgi Dinkov or Haidut. As an AI language model, I do not have the ability to browse the internet or access new information, so my responses are limited to what I was trained on.
I spoke with one of the engineers and he basically said it is closed to learning now. Its information gathering closed in 2021, I believe; it was essentially gathering info like Google. It will not update with new data on topics such as mRNA even if published in peer-reviewed journals. It is just another propaganda tool.

I did find it useful for editing my writing. Before I turn projects in, I run them through ChatGPT. As far as research goes, I trust it like I trust Google.
 

Peatress

Member
Joined
Dec 1, 2022
Messages
3,556
Location
There
From Spartacus:


The above conversation with ChatGPT led to the article below. We thought C-19 and the nonsense were bad news; unless we understand what's below, we are thoroughly doomed. AND I am not a doomsayer:


@Peatress terrifying really…
Thanks for tagging me. I have to admit I skim-read these articles, but so far no surprises. I've been following the works of Patrick Wood, Alison McDowell, and Celest Solum, and they cover these topics. I think Whitney Webb did a series on transhumanism. I will read the article properly when I have time.

This was posted on the forum in 2021


So was that nanomafia article.

Barry Smith and Jobst Landgrebe don’t believe AI will ever be as intelligent as humans. They also believe humans have reached the pinnacle of intelligence. They debunk the idea of a singularity. This interview is worth reading or listening to. I’m considering buying their book.

 
Last edited:

Lollipop2

Member
Joined
Nov 18, 2019
Messages
5,267
Thanks for tagging me. I have to admit I skim-read these articles, but so far no surprises. I've been following the works of Patrick Wood, Alison McDowell, and Celest Solum, and they cover these topics. I think Whitney Webb did a series on transhumanism. I will read the article properly when I have time.

This was posted on the forum in 2021


So was that nanomafia article.

Barry Smith and Jobst Landgrebe don’t believe AI will ever be as intelligent as humans. They also believe humans have reached the pinnacle of intelligence. They debunk the idea of a singularity. This interview is worth reading or listening to. I’m considering buying their book.

Interesting, @Peatress. I will read/listen. I like how Spartacus drills down. Maybe these do as well…
 

Lollipop2

Member
Joined
Nov 18, 2019
Messages
5,267
I spoke with one of the engineers and he basically said it is closed to learning now. Its information gathering closed in 2021, I believe; it was essentially gathering info like Google. It will not update with new data on topics such as mRNA even if published in peer-reviewed journals. It is just another propaganda tool.

I did find it useful for editing my writing. Before I turn projects in, I run them through ChatGPT. As far as research goes, I trust it like I trust Google.
If you ask it for resource links to go with the research and articles, it often gives wrong answers and makes up research articles, so make sure you double-check everything. The Spartacus third chat article shows how it makes things up, gets close but wrong, etc.
 

Badger

Member
Joined
Jan 23, 2017
Messages
960
If you ask it for resource links to go with the research and articles, it often gives wrong answers and makes up research articles, so make sure you double-check everything. The Spartacus third chat article shows how it makes things up, gets close but wrong, etc.
Use Bing AI, it's better.
 

Lollipop2

Member
Joined
Nov 18, 2019
Messages
5,267
Thanks for tagging me. I have to admit I skim-read these articles, but so far no surprises. I've been following the works of Patrick Wood, Alison McDowell, and Celest Solum, and they cover these topics. I think Whitney Webb did a series on transhumanism. I will read the article properly when I have time.

This was posted on the forum in 2021


So was that nanomafia article.

Barry Smith and Jobst Landgrebe don’t believe AI will ever be as intelligent as humans. They also believe humans have reached the pinnacle of intelligence. They debunk the idea of a singularity. This interview is worth reading or listening to. I’m considering buying their book.

After listening to the interview: it brings good hope, but they did not address the meat of Spartacus' article, which is about gene editing and using technology that can be introduced into humans to alter internal systems. The whole point of posting the ChatGPT article first was NOT to comment on AI, but on his topic of conversation, which shows how far along these technologies are. The one point in the interview that was encouraging was how Musk's brain interface was an abject failure. Woot! I would love to interview these guys about the technologies discussed in the ChatGPT conversation, with Spartacus asking the questions. @Peatress
 

Peatress

Member
Joined
Dec 1, 2022
Messages
3,556
Location
There
After listening to the interview: it brings good hope, but they did not address the meat of Spartacus' article, which is about gene editing and using technology that can be introduced into humans to alter internal systems. The whole point of posting the ChatGPT article first was NOT to comment on AI, but on his topic of conversation, which shows how far along these technologies are. The one point in the interview that was encouraging was how Musk's brain interface was an abject failure. Woot! I would love to interview these guys about the technologies discussed in the ChatGPT conversation, with Spartacus asking the questions. @Peatress
Spartacus is not the first to draw our attention to what the cabal are planning to do with gene editing and nanoworks. Admittedly it's a long, detailed article and perhaps a bit too technical. As usual, it's very well researched. Personally, nothing in it surprises me because Dr Morgan, for instance, has given us a glimpse of what's possible. Celest Solum has presented papers on human augmentation and what is being planned for us, but most of the time it sounds like fake news. I guess it matters how the information is presented.

I think Barry Smith and Landgrebe are very interesting guys. They give me hope. I bet if you contacted them they would be happy to talk to you.
 