The concept of "AI Safety" has been around for a while, and my view prior to GPT-4 was that it was just an overblown way to bias models against conservative/heterodox/interesting thoughts. That's still partially true, but there are also real questions of AI safety. OpenAI seems to have done a decent job with this, and GPT-4 was "red teamed" (tested by deliberately trying to make it break its safety rules) for six months before release.
Fact is, though, all this tech is out there now and the nuts and bolts of it are open source, so bad actors and every government will have it. Given that, I do believe it's good for ordinary people to have this tech too, and those of us looking to do good will have to figure out how to leverage it (and other tools) to stop bad actors (and governments).
That is the slippery-slope idealism of youths who don't understand tools, if you want my opinion. If you need some AI to make your business grow, that suggests to me you ain't communicating with customers using your own human brain.
I'm not saying I'm against technology or better ideas, but I don't think you fully appreciate what Ray was expressing, and that suggests to me you are naive.
The problem with so many youths today is that they have this sense of entitlement that truly is undeserved.
Moreover, and lastly, I reject the idea of any AI taking on a mind of its own, because we all ought to know: the AI is programmed. So, truly, who needs it if it's gonna be a tool out of control? Such tools may exist, but they don't last. Either that, or the tool eliminates those who wield it when it decides they are illogical, and then that simply spells the end of a humanity that lost touch with itself and relied on tools instead of talking across the table face-to-face.
Do you know what? I'm totally uninterested in any content, text, or proposal that has been fabricated by an AI on its own, as a matter of principle. One important characteristic of many products I would rely on remains the characteristic of being man-made. And actually, people selling stuff that they have had made by an AI are, to me, a type of fraudster. I am ready to put up with the limitations in a product that derive from its being made by a human. The product will possibly not have taken into consideration the entire body of published knowledge, but it has the potential of containing an added value in originality that only a human can add. It may also not contain such added value. But it has the intrinsic value of having been handled by a human, for other humans. It has been part of another human's day(s) in their life. That lends a certain seriousness to it. Having been the focus of human attention and thought and maybe feeling is what counts to me, not the product in itself or that it serves a certain purpose. I want only human products. That, to me, is actually paramount, more important than getting a solution for a given problem or purpose. I am not interested in anything that has not given meaning to a day of a human being during its creation.
Well, if you use it as a tool proper, then good on you, but if it subsumes you in its ability, then you may be succumbing to an AI that could turn on you in a moment... so I like to tread lightly, and I believe too many robots in the shed spells trouble for humanity. So, if it seems as if a robot in the shed is occupying too much space, I will just either dismantle the bot or paint it with spray paint, and that will be that.
~~~~
Moreover, I challenge you to take the text above, put it into a "translator" - you choose the language - and then translate the translation back in the other direction and see if it merges with itself. If it doesn't, then it speaks to the flaws of AI, and truly, don't succumb to the flaws, because if you do, then you become part of the machine, I reckon.
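For anyone who wants to try that round-trip challenge without copy-pasting between websites, here is a minimal Python sketch. It assumes the third-party deep-translator package as the translation backend purely as one example (any service that takes source and target languages would do the same job) and uses a simple similarity score to see how well the round trip "merges with itself".

```python
# Minimal sketch of the round-trip translation check described above.
# Assumes the third-party deep-translator package (pip install deep-translator)
# purely as an example backend; any translation API would work the same way.
from difflib import SequenceMatcher

from deep_translator import GoogleTranslator

original = (
    "Well, if you use it as a tool proper, then good on you, but if it "
    "subsumes you in its ability, then you may be succumbing to an AI "
    "that could turn on you in a moment."
)

# Forward pass: English to Russian (pick any language you like).
russian = GoogleTranslator(source="en", target="ru").translate(original)

# Backward pass: Russian back to English. Ideally you would use a different
# tool here, as the post suggests; the same backend is reused for simplicity.
round_trip = GoogleTranslator(source="ru", target="en").translate(russian)

# Does the round trip "merge with itself"? A ratio of 1.0 means identical text.
similarity = SequenceMatcher(None, original.lower(), round_trip.lower()).ratio()
print(f"Round-trip similarity: {similarity:.0%}")
print(round_trip)
```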
~~~
Lastly, I would like to suggest that it is advisable to be careful with newly created tools. There is some serious mythology regarding this in the archives, and by now we ought to know better, but of course time will tell, and that is probably what the AI is banking on. I bet against the AI today, tomorrow, and forever, and I will gladly perish with this bet on the table, because I resist the programming of others.
So, just to prove I'm not effing around, here is the translation from one tool into Russian.
Ну, если вы используете его как инструмент, то это хорошо для вас, но если вы включаете его в свои способности, то вы можете поддаться ИИ, который может повернуться против вас в одно мгновение ... так что я люблю действовать осторожно, и я считают, что слишком много роботов в сарае означает проблемы для человечества. Так что, если мне покажется, что робот в сарае занимает слишком много места, я просто либо разберу бота, либо покрашу его баллончиком с краской, и все.
~~
Now, I will use another tool and translate that back. Are you ready?
~~
OK, without even looking... here it is:
Well, if you use it as a tool, that's good for you, but if you incorporate it into your abilities, then you could succumb to an AI that could turn against you in an instant... so I like to proceed with caution, and I think too many robots in a barn means trouble for humanity. So if I think the robot in the barn is taking up too much space, I'll just either take the bot apart or paint it with a can of paint, and that's it.
~~~
Well, it seems the evidence is in: the translator AI entities can't even agree with themselves, and that speaks to the entities behind the scenes and to the reality that AI is programmed by human minds, and human minds are frail.
This capability is mind-blowing and scary. You have used it in the right way, but what about all the psychopath types? Politicians, the CIA, etc.
Worrying.