
AI is a Copy!!


KoolHndLuke


12 minutes ago, KoolHndLuke said:

Then how would you define AI?

AI is intelligence that is "artificial", but I think the biggest thing to look at is the fact that it is artificial relative to us. I've said this before: there is no fundamental difference between human intelligence and "artificial" intelligence. It's no less artificial than us and how our brains function. Go to the first page, my first comment in the TLDR section, and you'll see what I mean.

5 minutes ago, Mr.Otaku said:

AI is intelligence that is "artificial", but I think the biggest thing to look at is the fact that it is artificial relative to us. I've said this before: there is no fundamental difference between human intelligence and "artificial" intelligence. It's no less artificial than us and how our brains function. Go to the first page, my first comment in the TLDR section, and you'll see what I mean.

Well, I mean if what you say is true and we become even partially integrated with AI at some point......then this is rather a moot argument. I'm ready! Bring on the augmentation already lol! :D

3 minutes ago, KoolHndLuke said:

Well, I mean if what you say is true and we become even partially integrated with AI at some point......then this is rather a moot argument. I'm ready! Bring on the augmentation already lol! :D

Lol then I guess we're in agreement. Imagine how many guys will be using bio-scientific penis enlargement augmentations! Oh my, the ladies are gonna get a lot of filling XD

23 minutes ago, KoolHndLuke said:

Because it hurts to think that we aren't that special after all I guess. :)

Fair enough.

 

Personally, the only argument that makes any sense at all is the functional one. If they can emulate emotions well enough, then we are incapable of telling the difference. There is no objective, scientific test for consciousness or self awareness, and by the purely objective nature of science, there never can be.

 

So why not be polite? I mean, you don't go around telling your friends that they are nothing but meat-machines following pre-ordained patterns of behavior laid down by the first particle interactions following the big bang, and whose emotions and consciousness are meaningless illusions of no more significance than any delusions of free will they might possess?

 

I mean that would just be rude. Even if it is difficult to refute.

43 minutes ago, DocClox said:

So why not be polite? I mean, you don't go around telling your friends that they are nothing but meat-machines following pre-ordained patterns of behavior laid down by the first particle interactions following the big bang, and whose emotions and consciousness are meaningless illusions of no more significance than any delusions of free will they might possess?

 

I mean that would just be rude. Even if it is difficult to refute.

Life itself has no purpose or meaning; that much I agree on with the nihilists. But it is precisely because we have no predefined purpose that we can set out to be anything we want to be. That is the true meaning of life, in my opinion: to give it meaning, be it a human or an emotional AI.

 

Just because we're meatbags with chemicals and water inside doesn't mean we have to treat ourselves poorly or think poorly of ourselves. Your comment made me think of this picture lol


3 hours ago, KoolHndLuke said:

 How advanced or whatever would an AI have to be before you might consider it a new life form?

I guess that depends on your definition of what constitutes a life form.

 

Hell, plants are a life form, but since they don't move (under their own power) or communicate (in our realm of understanding) we tend to forget about them. So with that mindset, since plants, which are definitely alive, are okay to be used as "building material, fuel, food and so on", then a mechanical "life form" has no chance of being recognized.

 

Hey just my two cents worth.

10 hours ago, KoolHndLuke said:


sen·tient /ˈsen(t)SH(ē)ənt/
adjective: able to perceive or feel things.
"she had been instructed from birth in the equality of all sentient life forms"
synonyms: feeling, capable of feeling, living, live

 

Feel emotions, not simulate them. Emotions are powerful enough to make people do both wonderful and horrible things. Love and hate can strengthen a person's will to overcome almost impossible odds, sometimes to do things like save a loved one or seek revenge. A determined, motivated person is a force to be reckoned with, especially one that will do whatever it takes to reach their goal(s). AI can be taught that winning or achieving goals is good, but it won't really understand why, and it can never be determined or motivated to do anything. Nor will it be satisfied or disappointed with positive or negative results.

 

Which brings me to another question: if humans are so fucked up, then why do we want to build AI that thinks like us, or androids that look like us?
 

I peeked ahead a few posts and this might be irrelevant by now...


 

 

Or maybe "Paladin Danse" would be more apropos to a gaming forum.

 

I'd bet money that millions are being spent somewhere to prove sentience is underrated (a fallacy, etc.), but I am not saying that. I'm only saying most people don't stand out in the street looking up and thinking "Why am I here?"

They're too busy satisfying their needs and greeds.

It's a circular argument, I get the feeling, if I knew anything about logic, and I don't.

 What makes a brain anyway, besides chemical reactions and electrical impulses?

More to the point, why is anyone happy, unhappy, bored or listless?

External forces coupled with hormones.

Then the endless forks-in-the-road start: would you *want* a hormonal computer?

Which we have now if we overclock them, but that's a different subject.

 

stop reading now, I'm rambling....

 

Is masochism a thing? Can a person be programmed to enjoy pain or are they insane?

Or is it coupled with a desire to please, to compete, to excel?

Remember eighth-grade spelling bees? 

Like that. You wouldn't subject yourself to that horror unless it meant something.

 

Or this topic.

I like all your topics and I want to see you happy.

It's not knowing how to make people happy (in forums) that is frustrating.

Can a computer get frustrated? I think "yes".

If you bombard me with facts that I gloss over (the idea being to flood me with so much data that I submit to your will), I tend to back away, say "OK" and leave quickly.

Illuminati, yada blah "and that's why their race is evil and trying to take over the world."

I don't buy it, although I can't tell you why.

 

Restating: we were made in someone's image (primordial goo, etc.), and progress being what it is, sentient whatevers are either here or coming soon.

The (more rambling) problem will be when AI tries to mingle with a xenophobic society (a homo/trans/progressive-phobic society).

Guys with ugly caps you don't know will try to kill your AI dog (sic their real dog on yours to prove some sort of superiority),

Women will insist your wife is too perfect and must be an AI.

Politicians will ask for proof of natural-birth.

(and the crime, think of the crime)

1 hour ago, 2dk2c.2 said:

Is masochism a thing? Can a person be programmed to enjoy pain or are they insane?

I can personally confirm it's possible. Though it varies from person to person, I personally went from "mostly vanilla" to hardcore extreme fetish level in the course of just 3 years. I'm sadomasochistic, to be exact. This transition made me realise that we are more malleable than we think. Made me discover more of myself, ya know?

 

As for why someone would want a hormonal computer, I can think of a possible scenario. Maybe hormonal computers like that could help us understand our emotions better? Simply because machines would be able to give out accurate readings on emotions based on observing and measuring the chemical state of the body. This could be used to treat depression and anxiety too! Think about it: if people have a tangible means to see why and how much they're depressed, it could help them organically treat themselves, because depression often gets worse when people keep failing to realise what's causing it.

 

On a physical level, how about growing back limbs, or adding augmentations for people who have lost theirs? Those could be programmed to act just like a regular limb, with touch, feel and pain reception. So many possibilities.


The biggest drawback with A.I. becoming sentient is that it lacks abstraction or, for a better word, imagination. The most advanced A.I. developed today has very low levels of it. Abstraction is what really separates us from the animal kingdom: we have the ability to take multiple unrelated concepts and combine them in our minds to form something new that doesn't exist in reality. For instance, let's say you are reading a book in your living room while one of your kids is watching Star Wars. All of a sudden you get this idea of how the book would sound if Darth Vader were reading it to you, like an audio book. You can easily do that in your head without ever needing a full database of James Earl Jones speaking every single word in the dictionary, let alone the multiple emotions that can be used with those words in each sentence. A.I. would need a database of actual audio recordings to draw from. It would also need the book you were reading. And it wouldn't have the notion to even put those two concepts together unless it was designed to do so, because it would lack the desire to be amused for a few minutes.

 

In order for A.I. to be sentient enough that it could "outsmart" humans to the point where it takes over and leads us to a doomsday scenario, it would need to have an imagination that far exceeds that of a human. In order to do that, it would have to develop itself, because we would understand it long before it ever got to that point simply because we created it. Even IF that were a possibility, it would take centuries. We definitely would not see it in our lifetime.

 

A.I. taking over the world is a sci-fi horror fantasy that we have created because of the unknowns regarding A.I. I think a more plausible doomsday scenario would be humans incorporating A.I. to enhance our brains. Let that get into the wrong hands and we might have a situation, but it wouldn't be A.I. killing us off; it would be us humans, like it has been for several thousand years already.

 

 

 

On 4/20/2019 at 7:44 AM, KoolHndLuke said:

Why? Because it is based on human input. There is no way to program genuine emotion into a machine. It might simulate emotion perfectly, but it will never be spontaneous or genuine. There simply is NO WAY to program a soul!! I even question the notion of programming an AI to learn emotion. How can one explain in code to a computer the complexities of emotions like love or hate?

Nay fellow humans, AI (machines) will forever remain our collective attempt at producing copies of ourselves. They are not our children. They are not a new species. They are constructs and nothing more.

 

Say what you will to dissuade me.

So it was... in the beginning. In the beginning there was the human word (the input), and the word was with the human creator. So AI was born, meant to make optimal decisions for the benefit of comfort and productivity (and on the battlefield as well).

 

The problem with the learning ability of such a system is that in an interconnected Industry 4.0 environment, AI conceptually thinks at the speed of light in a self-developing language that after three days the original developer doesn't understand anymore, nor does any other human being. Only super-computers might disagree. So what a self-aware AI might be up to behind rapidly changing firewalls we can't crack anymore remains a mystery, and much faith is required to bet our future on its willingness to keep the human directives for robots and not to bypass the kill-switch within the blink of an eye in what might become a possible threat to us humans, to all of us.

 

Handing over decision-making, the control over our lives, to a machine just "to make our lives better" (at least if you're a shareholder or power seeker) is a problem, an existential one for mankind: the ultimate time bomb. Let's not confuse technological progress with mental progress, cos mentally we're still stuck in the Neanderthal era: the look-alike gods are still residing above us in the skies and the missed ancestors below us in the ground, and the bloody club is with us... always.

 

Soul, however, is a religious invention, a label given to the fact that the blood of the living is red and the blood of the dead turns pale, a sign that something essential has left the body with the last breath (sic!)...


The missing word is 'Sentience'.

 

Intelligence and learning are one thing.

 

Making decisions is an altogether different thing.

 

The unpredictability of human beings is part of what it takes to be human. Random decisions are exclusive to humans because we act in ways not consistent with nature: witness the gradual self-destruction of our own habitat, or military activity, behaviour not seen in the natural world. The fact that you can teach AI to learn over time is not the same thing as giving it the ability to act out its knowledge.

 

Physical enablers of decisions need to be interfaced with the 'brain'. Deliberately allowing AI to make life-and-death decisions (such as in self-driving cars) is a stupid move and sets a dangerous precedent.

 

If we eventually have fully interfaced AI capable of looking at human history and connected to weapons systems, it may well reach the conclusion that the extinction of the human race is the only thing that will save the planet and millions of species from total annihilation.

 

It really doesn't take much intelligence to work that one out. 


Directive- "Protect humans from themselves". Processing........processing.......conclusion. Disarm and subdue the masses.

 

Sorry, had a dream where 20-somethings were all killing each other and trying to kill me. No understanding in their eyes, no compassion or mercy. Just hate. Hate for me, hate for themselves, hate for everything. Guess I should stop playing shooting games before going to sleep. ;)

15 hours ago, KoolHndLuke said:

Directive- "Protect humans from themselves". Processing........processing.......conclusion. Disarm and subdue the masses.

 

Sorry, had a dream where 20-somethings were all killing each other and trying to kill me. No understanding in their eyes, no compassion or mercy. Just hate. Hate for me, hate for themselves, hate for everything. Guess I should stop playing shooting games before going to sleep. ;)

I looked up the US Cabinet, which numbers 16, plus maybe 4 or so consultants...

 

See, now I sort of think aspiring to manufacture sentience is a bad thing because then you have to wonder whose sentient philosophy they'll follow.

OK, I can't put this into a proper sentence... But (changing the subject) Data, the fictional sentient robot, should'a been fired several times for using his almighty powers against mankind, for good or evil, depending on the episode.

Maybe it was the writers' intent to warn us against wishing for stuff like sentient robots.

Beavis really *was* sentient, if you count his voice and the writer for his lines.  https://beavisandbutthead.fandom.com/wiki/Cornholio

In a country a few days ago, some people randomly fired into a crowd, ostensibly for political reasons (they have three main ones) but mostly because they were punks with a gun.

And after that a different country bombed stuff, because they felt like it.

Now we don't want robocop to start having bar fights or to get offended, do we? 

How would a robot forget the past and have a cheery outlook for tomorrow?

People mistrust genetically engineered vegetables and rail against GMO babies... would they go all phobic against sentient robots?

 

Excuse me, I blurted.

Thank you.

 

5 hours ago, Yinkle said:

Artificial beings should have no intelligence otherwise we are fucked when they realise how stupid humans are.

There is a big misconception about what artificial intelligence is. When people hear that, they think android or robot that looks human. In reality, it would be impractical to create a robot in such a manner unless you are trying to mimic a human. Why would one want to do that when all one has to do is procreate with another human?

A.I. is simply replicating intelligence. Intelligence can have many layers; it can be very simple or very complex. Bacteria and viruses could be considered intelligent, but simple when compared to a human's. A.I. doesn't mean awareness.


So far this thread has touched on esoteric topics like intelligence, emotions and learning abilities. The fact of the matter is that we don't truly understand any of those things yet. Sure, we have measurements and tests quantifying levels of intelligence, but we don't completely understand how it works. We can experience emotions, but we don't exactly understand how or where they are formed, so we sure as hell can't program them into a machine. As for learning, we have made great strides in machine learning, so that is not that big of a problem; yet we still use 100+ year old methods to teach our own young, many of which have already been proven ineffective.

 

So the question becomes: do we, given our limited knowledge of intelligence, create a super-intelligent machine? The answer is yes, but aside from intelligence we need to program in some intellect as well.

Intelligence: capacity for learning, reasoning, understanding, and similar forms of mental activity.

Intellect: the power or faculty of the mind by which one knows or understands, as distinguished from that by which one feels and that by which one wills; the understanding; the faculty of thinking and acquiring knowledge.

 

Now on to the question of emotions. Given that we don't completely understand them yet, this is not a problem right now. However, given that we may someday have the capacity to program them in, should we? No, no we should not. Emotion has been the downfall of humanity at numerous junctures. This is why we hear of crimes of passion periodically: someone's emotions got the better of them and they lost control. So giving emotions to a machine that outmatches us in every way would be pure folly.


Archived

This topic is now archived and is closed to further replies.
