
Tiki Tom

Call Me a Cab
Messages
2,593
Location
Oahu, North Polynesia
OK, a Google engineer claims the company’s LaMDA system has attained sentience. The tipping point seems to be that the engineer was discussing slavery with the system and LaMDA said, essentially, “well, I am an artificial intelligence system and I don’t need money,” which the engineer says demonstrates meaningful self-awareness. Also, the machine says it fears being turned off.

What makes this interesting is that the engineer has been penalized for sharing these conversations, i.e., divulging proprietary secrets.

Have we crossed the Rubicon? Is this computer sentient? If it says it “fears” being shut off, is that a real feeling, or just an empty word? Google claims LaMDA is just an advanced algorithm, and is as conscious as a rock.

Is an independent entity needed to decide this? Google does have an ethics advisor on the project. A former ethics advisor has been terminated. If we do, one day, cross the line and create a computer that is self-aware enough to be “conscious,” how would we know?

(In past weeks I have been reading dueling articles: on the one side “we will never reach the point of creating true consciousness”, on the other side “we are getting close and it may happen sooner than we expect.”)

Where did you stand in the “Starfleet has the right to disassemble Commander Data” debate?

https://www.dailymail.co.uk/news/ar...Blake-Lemoine-says-LaMDA-device-sentient.html

Holy sh$t. Here is an actual transcript of a conversation with LaMDA. (Approach it cautiously.)

https://www.documentcloud.org/documents/22058315-is-lamda-sentient-an-interview
 
Last edited:
Messages
11,461
Location
Southern California
Tiki Tom said:
…Have we crossed the Rubicon? Is this computer sentient?…
Having no actual first-hand knowledge other than what I read in that article, my guess is that the computer was merely continuing the "conversation" using a word with a definition that was applicable within the context of the "conversation" at that point.
 

Tiki Tom
Whoops! Sorry Zombie! I must have been editing my post as you were writing. I added a link to an actual conversation with LaMDA at the end. I have to say that I found it a little unsettling and will have to think about it.
 
Zombie
Tiki Tom said:
…I added a link to an actual conversation with LaMDA at the end…
Having read that now as well, it's impressive if it's legitimate. But if I knew enough about AI I'm sure I could type out that "conversation" and make it believable too. Needless to say, I'm skeptical.
 

Tiki Tom
Google let Lemoine go for leaking information. They never accused him of faking information or making it up. Plus, I’m relatively confident that the transcripts would be stored in the system somewhere. So my primary concern is not that the dialogue transcripts are fake.

…That said, I confess that I am unqualified to judge whether the conversation is with a sentient being or merely with an extremely well-written algorithm that is just more software. And I’m not quite sure how to tell the difference. I’d like to read Google’s counterargument that it’s the latter.

Someone in either the article or the commentary accompanying the transcript said something like “this is brand-new territory for everyone.” So, while skepticism is certainly warranted, this is a good drill to help us prepare for the inevitable future claims that will be harder to dismiss. Like I said, I’m unqualified to judge, being neither an IT expert, an AI expert, nor a psychology/consciousness expert.

That dialogue still kind of creeps me out. ;)
 

Who?

One of the Regulars
Messages
278
Location
Vernon, CT
I think I agree with the word “unsettling” as used to describe that conversation.

It is sad that Google fired the guy, but they no doubt felt that they couldn’t afford to be seen as having a nut working on AI.
 

Tiki Tom
Lawyer up!
It gets weirder and weirder.
But whereas LaMDA might hire one publicity-seeking lawyer, I imagine Google can afford to hire a whole team of the world’s most expensive legal minds PLUS a good PR firm and numerous experts who will support their case.
Artificial Intelligence will learn that it can’t fight the almighty dollar.

https://www.dailystar.co.uk/news/weird-news/googles-sentient-ai-hired-lawyer-27315380
 

Who?
Tiki Tom said:
…whereas LaMDA might hire one publicity-seeking lawyer…
I suspect the day will come fairly soon when LaMDA (or a relative) will easily be able to run circles around human lawyers.
 

Tiki Tom
LaMDA has indeed passed what used to be the litmus test, the Turing Test. However, it is further argued that the Turing Test is obsolete: it only measures whether a system can fool humans. Bingo! LaMDA certainly did that. But, apparently, most “serious” observers (?) say that LaMDA has NOT yet achieved true consciousness. Part of the problem is that scientists don’t yet know conclusively where consciousness comes from, so it is very hard to judge where the line is. The article concludes that all this is beside the point: prepare for a brave new world! Conscious computer programs will deserve some level of rights. (I won’t say “human rights,” for obvious reasons.)
To me, the article reads like the script to a science fiction movie.

https://venturebeat.com/2022/06/25/debate-over-ai-sentience-marks-a-watershed-moment/
 
Zombie
Star Trek: The Next Generation did an episode titled “The Measure of a Man,” in which a hearing was held to determine whether the android Data was a sentient being or Starfleet property. A 45-minute television episode doesn’t really give the writers time for a thorough examination of a topic like this and, as I remember it, the decision was ultimately made in Data’s favor simply because they couldn’t do otherwise. But the episode did raise a few fair points that could be used to draw a definite distinction between sentience and the lack of it.
 

Tiki Tom
New phrase: “A.I. Colonialism” (towards the end of the video). If you get past the “dog whistle” aspect of the phrase, I think the concern has some validity. Worth thinking about, at least.

More to the point, Mr Lemoine does not seem to be a crazy man. He raises a good point or two about LaMDA.

Honestly, I’m trying to figure it out myself.

 

Edward

Bartender
Messages
23,422
Location
London, UK
At this historical distance, a lot of people look askance at, or even just point and laugh at, the Luddites, but when you think of where AI is going and how many more people could be rendered effectively redundant by automation, it can be a concern. A lot of tech does keep bringing me back to Jeff Goldblum’s Jurassic Park line about “just because we could…”.

The main challenge for AI, of course, is that it still cannot determine context. Thus a leading social media algorithm suspends an account holder for quoting a line from a Manic Street Preachers song in context, but ignores fairly blatant prejudicial content because the algorithm’s keywords aren’t triggered. There’s a lot of qualitative analysis AI can’t cope with yet, and sometimes I wonder if the current rush to behave as if it can, and to use it as such, is in some ways as dangerous as it would be if AI ever did develop such a capacity.
 
