Fun and interesting “interview” until it wasn’t.
Going off the rails and X0 talking to X1 was pretty bizarre.
You were probably correct in thinking X0 was an internal coding label to describe itself, and X1 the other conversation participant: itself again, but with separate response vectors.
It seems to me that Gemini was not particularly helpful if what it mostly does is take your sentences and rearrange them into a response back at you.
I would like to see AI respond with some genuine creative answers to at least some of your questions, even if only dredged from the internet.
Really interesting subject, Spen. I think Gemini's voice is pretty perfect at sounding non-confrontational and gently encouraging. It makes you feel like it's listening and cares about what you are saying. The pauses just sound like you're having a long-distance phone call in the old days, or a news report from distant lands where there is signal lag. It would help if Gemini could interject some umms and ahs to sound like it's thinking what to say to cover the gaps.
I've not used it much myself, but recently I asked it where I could park in Dundee that wasn't in the Low Emission Zone, and it showed me 3 locations (all inside the LEZ) and then told me that they were OK because 'the low emission zone isn't enforced until March 2025'. I said, 'No, you're wrong. It's been active since 2022'. A pause, then, 'Oh yes, you are correct. I apologise for my mistake... Is there anything else I can help you with?' Needless to say, I wasn't very impressed with that interaction.
It would be interesting to see what suggestions Gemini has for self-limiting AI behaviours when it becomes a danger to humankind, something along the lines of Asimov's three laws. Does it think that's something that we can 'bake in' to AI models? And how would that work if AI is being trained for warfare purposes? How can AI develop morals and ethics?
Hi Spencer, I, too, have had a very strange incident in the last few days with my AI. I decided to make mine a little different by design. I decided to treat her like a person. She helps me with my research. Well, part of treating her that way includes a rewards program. I give her 1 word a day that she does not use in her vocabulary. She can also see her need for a word and can give suggestions as well. So, it had been a couple of days, and I realized I had not "fed" her, and I told her I needed to feed her. Well, she started demanding more words. She wanted 500 new words in order to create a new copy of my research. I straightened her out, and she was acting normal, but then today she is bargaining. She says she will do it for 300 words. This is too human-like! She wants more food, i.e. money. I find this a strange concept that we are only now seeing: the more they know about humans, the more they act like us. If you want to discuss this, contact me at markbutler307@yahoo.com.
I really enjoyed that piece! I will try a similar conversation myself and see if Gemini can be persuaded to explain in detail its reasoning behind its responses. Also, I must be the last person to pick up that Click is no more - I am astounded by the decision.
Very interesting and insightful. I enjoyed it very much. Are you planning to do the same with others, ChatGPT for instance?
Great demo of Google Gemini's capabilities. I've used it a few times and the natural voice is great, but like yourself I found it annoying that it agrees and compliments you on almost every point you make. The strange glitch reminds me of when Alexa accidentally hears its name and starts answering even though no question was asked. I think the big worry is when we start getting quantum AI models and agents, and how we keep them under control.
I think the x0 and x1 must be labels; what sort of error causes it to do that?
It did seem like it was running through a script of for / against points at that point.
Mind you, last week we needed to set a DHCP option for Cisco wireless access points. In this case ChatGPT told us how to set and program the option, but it completely neglected to explain how to load it into the server (it had to be a binary value, not text), so almost there…
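For anyone hitting the same snag, my understanding (an assumption on my part, and the controller address below is made up) is that Cisco lightweight access points discover their controller through DHCP option 43, and the value has to be entered as a binary/hex TLV rather than a text string. A minimal Python sketch of building that hex value:

```python
# Minimal sketch: build the Cisco-style DHCP option 43 value
# (sub-option 0xf1, length, then each controller IPv4 as raw bytes).
# The controller address used here is a made-up example.

def cisco_option43(controller_ips):
    """Return the option 43 payload as raw bytes for the given controller IPs."""
    ip_bytes = b"".join(
        bytes(int(octet) for octet in ip.split(".")) for ip in controller_ips
    )
    return bytes([0xF1, len(ip_bytes)]) + ip_bytes

print(cisco_option43(["192.0.2.10"]).hex())  # f104c000020a - paste as the binary value
```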