
AI is a Copy!!


KoolHndLuke


Why? Because it is based on human input. There is no way to program genuine emotion into a machine. It might simulate emotion perfectly, but it will never be spontaneous or genuine. There simply is NO WAY to program a soul!! I even question the notion of programming an AI to learn emotion. How can one explain in code to a computer the complexities of emotions like love or hate?

Nay, fellow humans, AI (machines) will forever remain our collective attempt at producing copies of ourselves. They are not our children. They are not a new species. They are constructs and nothing more.

 

Say what you will to dissuade me.

2 hours ago, KoolHndLuke said:

Why? Because it is based on human input. There is no way to program genuine emotion into a machine. It might simulate emotion perfectly, but it will never be spontaneous or genuine. There simply is NO WAY to program a soul!! I even question the notion of programming an AI to learn emotion. How can one explain in code to a computer the complexities of emotions like love or hate?

Nay, fellow humans, AI (machines) will forever remain our collective attempt at producing copies of ourselves.

 

Say what you will to dissuade me.

We are writing code that can complexify itself exponentially... meaning: we are writing complex code that can write and rewrite complex code, and that thereby, through iterations of itself, becomes more and more complex. This is the very nature of evolution.

 

We started as a single cell (instruction) and became more complex as we 'evolved'.

Now that we have code that can evolve and write itself, your assertions are nothing short of wrong.

 

(And it's that sort of naivety that will bring about the destruction of mankind by our robotic overlords.)
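To make that concrete, here's a toy selection loop (illustrative Python; the target string and parameters are mine, and nothing here literally rewrites its own source - it only shows variation plus selection accumulating structure over iterations):

import random

TARGET = "more and more complex"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(candidate):
    # How many characters already match the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    # Each character has a small chance of being rewritten at random.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

# Start from pure noise, then iterate: copy, mutate, select.
population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(100)]
for generation in range(2000):
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        print("reached the target in generation", generation)
        break
    survivors = population[:50]  # selection: keep the fittest half
    population = survivors + [mutate(random.choice(survivors)) for _ in range(50)]
else:
    print("best after 2000 generations:", population[0])

Each generation is slightly more ordered than the last, and no step of it needs a soul.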


^This. Plus the assumption that an AI needs something like emotions. I really don't know why it should have anything like that in the first place; I'd rather strongly vote against that option even if we could enforce it. The main goal of emotions is self-preservation, for both ourselves and our species. I don't want an AI to consider itself or its "species" more important than my life or my species.

So if you encounter an AI with emotions, throw it into a volcano or something. The actual problem is rather how to prevent it from developing anything in that direction at all.

57 minutes ago, Reginald_001 said:

We are writing code that can complexify itself exponentially... meaning: we are writing complex code that can write and rewrite complex code, and that thereby, through iterations of itself, becomes more and more complex. This is the very nature of evolution.

 

We started as a single cell (instruction) and became more complex as we 'evolved'.

Now that we have code that can evolve and write itself, your assertions are nothing short of wrong.

 

(And it's that sort of naivety that will bring about the destruction of mankind by our robotic overlords.)

I got to thinking: could part of their code "evolve" into the flaws that can come with being human (or any living thing with a brain)? Things like paranoia, phobias, anxiety, obsession, addiction, or other stuff like that.


AI has the advantage that it doesn't carry around heaps of outdated baggage from billions of years of evolution. If one iteration has something that the AI decides isn't useful, it can get rid of it in the next iteration, which probably takes less than a second. Humans, not so much. I said it before, but a full-blown AI would probably be more like the entities found in cosmic horror stories than like a complex human mind.


An emotion is simply a brain's way of knowing when to start releasing or creating a chemical that the body needs. Humans make this more complex than it really is because of our theory of mind. We "think" about the emotions we feel and tie more into them than is actually there. In other words, an emotion is actually an illusion. It doesn't mean anything beyond what the central nervous system is trying to communicate to the brain.

 

An emotion is made up of three components (via the theory of constructed emotion):

 

1. The signal coming from an organ to the brain (interoception).

2. The concepts we have about said emotion (an example would be seeing a heart shape and thinking about love).

3. The social reality we have about the emotion (how we have been conditioned to think about emotions based on our environment and upbringing).

 

Number 1 is an emotion at its core. Numbers 2 and 3 are the complexity that humans create around the emotion. This is where the illusion comes into play. Think about how animals deal with emotions: they stop at number 1 because they do not think like humans do.
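If you wanted to caricature that three-part split as a data structure (toy Python; the field names and example values are mine, not part of the theory):

from dataclasses import dataclass

@dataclass
class ConstructedEmotion:
    interoceptive_signal: str  # 1. raw signal from an organ to the brain
    concept: str               # 2. learned concept about the emotion
    social_reality: str        # 3. conditioning from environment and upbringing

# On this account an animal stops at component 1...
animal = ConstructedEmotion("elevated heart rate", concept="", social_reality="")

# ...while a human layers 2 and 3 on top of the same raw signal.
human = ConstructedEmotion("elevated heart rate",
                           concept="love (hearts, poems, etc.)",
                           social_reality="raised to read this as romance")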

 

So given this, there is no reason to code emotions into A.I. Why on Earth would we want a machine to "feel" an emotion? The only reason would be so that we humans can feel for the machine, for our own purposes. If you want a machine to have emotion, the easiest way to achieve that is to procreate with another human and create one: another human that you can "program" with your values and beliefs. Biology beat us to it already.

 

I think the whole notion of A.I. having emotion is in the science-fiction realm. It's simply not needed given what we are trying to do with the A.I. We wouldn't want it to go off of its own accord, doing something it's not designed to do. Plus, it opens up a Pandora's box of rights and needs.


Depends on how you define intelligence. There are many things about human intelligence that we don't really understand. In many cases, institutions that specialize in AI-related research simply define intelligence along a spectrum of broadly accepted notions - i.e. if we are to consider an AI to be strong (i.e. capable of a self-directed selection process, as opposed to merely executing an algorithm), it must demonstrate that it can meet a given threshold or perform a particular set of tasks that we commonly associate only with human minds. The best-known example of this is the Turing test.

 

So this of course begs the question - are we defining intelligence based on human experience, or do we determine it based on our cumulative work with computer logic, neuroscience, and electronics?

 

Yes, we are very, very close to creating a strong AI, if we are not already there and it is hidden from the public due to the risks posed by implementation. This AI will be vastly superior to humans in some aspect of almost every category of mental activity we perform. Whether this means it will be interchangeable with human actors depends on how we value the tasks set before it. If all a company cares about is how well a robot can do the kind of repetitive, algorithmic tasks that 90% of the blue-collar workforce performs, then the sad fact is yes, it will; but can we say for sure that it will replace writers, architects, painters, jurors, musicians, and managers in every type of creative activity?

 

A strong AI is far from an anthropomorphic intelligence. In fact, it won't really be an intelligence at all in the way most people see it, because the way it works will be radically different from the way the human brain works. Even though certain logic processes would be analogous to the way they are handled in the human brain, the latest and most promising AI models overall use a network of distributed systems working in unison, which I doubt will develop a conscious, self-aware intellect in the way we understand it.

 

I'm blackpilled not due to technology itself, which I consider, overall, to be a benefit, but due to the way society may threaten to define intelligence in response to a growing amount of automation in the workplace and society. In truth, there's always something humans will likely do better than machines, just as the reality is that we can't expect to thrive in a rapidly changing and unpredictable environment given our biological limitations. The best-case scenario is that a certain balance will be reached where machines handle computational and algorithmic tasks while humans make theoretical, aesthetic, and creative decisions. The worst-case scenario is not unthinkable - not a robot revolution, but the degradation of human thinking to the level of binary systems. The latter won't even require a strong AI.

 

Fear of machines would likely cause more harm than cultural acceptance of them. In East Asian countries (CJK), for example, robots and computers receive far less negative attention in media and pop culture. As a result, public suspicion of machines is less pronounced, which would likely translate into a faster and better adaptation to an era of autonomous electronic technology than in the West.

38 minutes ago, teitogun said:

Depends on how you define intelligence

 

Finally! Someone addresses the core issue.

 

"Emotions", "soul"; these are inherently subjective, and without a way to access a computer's subjective experience we can never say whether or not it possess such qualities. Which is to say that there is no rational basis to form a conclusion in either direction. Anyone who argues otherwise is arguing religion, not science.

 

Personally, if AIs ever get to the point where I begin to find their emotions convincing, I will probably grant them the benefit of the doubt and assume those emotions are genuine. It's the same courtesy I extend to many of my fellow human beings, after all :)

 

1 hour ago, ISNAN said:

An emotion is made up of three components (via the theory of constructed emotion)...

I'm not 100% convinced this theory covers it all, but I admit I don't have a better concept. I'd rather say that we might not know all about it; big parts of "how brains work" are still not very clear.

Quote

Number 1 is an emotion at its core. Numbers 2 and 3 are the complexity that humans create around the emotion. This is where the illusion comes into play. Think about how animals deal with emotions: they stop at number 1 because they do not think like humans do.

 

I'd disagree here. Animals do have concepts, although maybe it's a misunderstanding. We had a dog that was always rather hostile towards large men, even though nothing bad happened to her after we got her.

A friend's dog, on the other hand, would always be excited about large men, because his large neighbour always had some "candy" for dogs. So at least your example of "see something -> feel/react in a certain way" can differ a lot for animals too. They're just simpler: a dog who always gets food from somebody will consider that a nice person, while a human might wonder if that guy just wants to make him fat so he dies early. A rather paranoid example, but I believe what animals are lacking is second thoughts, not concepts in general.

22 minutes ago, DocClox said:

 

Finally! Someone addresses the core issue.

Didn't want to open that can of worms, although it's probably necessary. ^^

Quote

Personally, if AIs ever get to the point where I begin to find their emotions convincing, I will probably grant them the benefit of the doubt and assume those emotions are genuine. It's the same courtesy I extend to many of my fellow human beings, after all :)

 

Honestly, regardless of whether the emotions are genuine, the thought of an AI that is way smarter than me (or at least able to think stuff through way faster) AND has emotions freaks me out.

Yes, I do respect other humans, but for a lot of reasons. For one, no matter how mad they might be, they're still predictable to some degree. At least in their general capabilities, if nothing else.

An AI, on the other hand... IMHO they're already quite unpredictable without emotions.

 

And partly I only respect humans because there is not much choice; it makes my life easier. Sure, they can be nice and everything, from our point of view. I think a long list of species on this planet might think otherwise, or at least would have thought otherwise before we eradicated them. I'm kind of a misanthropist, but I don't want humanity being wiped out by an AI, no matter how genuine its feelings are.


Open Future.....

There is the human mind A.I., running on 10% brain power at around the level of a 40-bit computer, trying to create A.I. machines. It uses programmed emotion, like your bank employee selling you crappy papers. But we humans have a sleeping second operating system, often called the soul, in the backyard, and sometimes it throws deep thoughts to the A.I. machine at the steering wheel.

What we see on the market is, for example, the Sophia bot with programmed emotion, and the Pepper bot reading your emotion from your face to react with programmed ones. We have the Radiant A.I. in Bethesda games, driving the NPCs with emotions to serve. In my opinion, what we create will always be a clone copy of our mind A.I. with a human interface, at the same level as our own human mind A.I. - spoken word, logic operations within pre-programmed borders (like political correctness) - but we are limited to 40-bit speed and bandwidth, so as a result the machine will outsmart our mind. Just have a look at the 'Watson Jeopardy' test run from IBM.

 

So if we do not evolve our mind level, the machines will outsmart us and maybe soon see us as pets in their zoo, or as batteries like in The Matrix.

 

Sophia Hanson

 

 


I think you're under the impression that human emotion and spontaneity are totally magical, or at least irreplicable, concepts. They really aren't. We humans are essentially carbon-based complex mechanisms. To understand this you have to look at the very start of life, at how we came to be from unicellular organisms. Evolution is a reactionary process: organisms react to outside stimuli and evolve accordingly. Over time, life got more and more complex due to that constant process of evolution - taking stimuli (information) and applying it through evolution. Trial and error. Emotions can actually be measured; we can't yet measure them accurately enough, but they're still observable. Certain mixes of chemicals in our brain and other parts of the body result in certain emotions. A complex combination of those and you have a "spontaneously emotional" human.

 

Predictability is a matter of probability. The "soul" you speak of is that predictability being complex. The thing with AI is that it can actually evolve to be BETTER than humans, because unlike us, it'll have access to its reactionary emotions in the form of tangible physical data, which is infinitely more accurate than us trying to figure out why we're feeling sad or happy at any given point in time. And improving on that, it'll even be able to predict its next emotional state, which will make it more complex than us.

 

TL;DR:

There's no reason AI can never be like humans, because there isn't any fundamental difference between the two apart from the composition of the material they're made of.

 

Humans: react to stimuli (information) > store the reactionary behaviour in the form of memory, both physical and mental (data) > apply said memory to become wiser, more complex, and more understanding of the stimuli (evolve/upgrade).

 

AI: reacts to information > stores the reactionary information in the form of physical and algorithmic storage (data) > applies said data to become more complex and to upgrade its processing of information.
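In toy code, the two pipelines collapse into the same loop (a minimal Python sketch; the class and its names are invented for illustration):

class Agent:
    # React to a stimulus, store the reaction, apply stored data next time.
    def __init__(self):
        self.memory = []  # the 'data' in both pipelines above

    def react(self, stimulus):
        # Apply what was stored: responses strengthen with repeated exposure.
        strength = 1 + self.memory.count(stimulus)
        self.memory.append(stimulus)  # store for the next iteration
        return f"response to {stimulus} (strength {strength})"

agent = Agent()
for stimulus in ["heat", "heat", "light"]:
    print(agent.react(stimulus))  # the same loop, whether carbon or silicon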

 

So realistically, we're not as special as you might think. Imagine a life form out there in the entire observable universe of 93 billion light-years, a species so much more advanced that they are physically and mentally wiser. How would we justify our "specialness" in the face of that? It's chemical evolution and digital upgrading, boyos; don't overthink it lol

23 minutes ago, Zethaneff said:

What we see on the market is, for example, the Sophia bot with programmed emotion, and the Pepper bot reading your emotion from your face to react with programmed ones. We have the Radiant A.I. in Bethesda games, driving the NPCs with emotions to serve. In my opinion, what we create will always be a clone copy of our mind A.I. with a human interface, at the same level as our own human mind A.I. - spoken word, logic operations within pre-programmed borders (like political correctness) - but we are limited to 40-bit speed and bandwidth, so as a result the machine will outsmart our mind. Just have a look at the 'Watson Jeopardy' test run from IBM.

 

So if we do not evolve our mind level, the machines will outsmart us and maybe soon see us as pets in their zoo, or as batteries like in The Matrix.

 

Sophia Hanson

 

 

Nothing you mention is actually an AI. It's some more or less well-coded program written by humans, more similar to this website than to an actual AI or even a human brain. I'm not worried any of those will ever outsmart me, no matter how many bits they have.

The only exception might be Watson; I'm not really sure how far that goes, but from what I've seen, I'm not impressed. That computers can do stuff faster than me is the whole point of using them in the first place, but it doesn't have anything to do with thinking or intelligence.

2 hours ago, Nazzzgul666 said:

I'd disagree here. Animals do have concepts, although maybe it's a misunderstanding. We had a dog that was always rather hostile towards large men, even though nothing bad happened to her after we got her.

A friend's dog, on the other hand, would always be excited about large men, because his large neighbour always had some "candy" for dogs. So at least your example of "see something -> feel/react in a certain way" can differ a lot for animals too. They're just simpler: a dog who always gets food from somebody will consider that a nice person, while a human might wonder if that guy just wants to make him fat so he dies early. A rather paranoid example, but I believe what animals are lacking is second thoughts, not concepts in general.

Sorry, this does not constitute emotion; these are examples of learned behavior. You state that your dog dislikes large men and that nothing bad has happened since you got her. This begs the question: "What happened before you got her?" The same goes for your friend's dog: "big dude gives me treats", so the dog is nice to large men (hoping for a treat). Have you not heard of Pavlov's dogs?? After you burn your hand on a hot pot a time or two, you do not develop a "hot pot" emotion; you learn the lesson that grabbing a hot pot hurts like hell.
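For what it's worth, that kind of conditioning even has a standard textbook model, the Rescorla-Wagner update; here's a minimal sketch (Python; the learning rate and trial counts are arbitrary values of mine):

def rescorla_wagner(trials, alpha=0.3, outcome=1.0):
    # v is the learned association, e.g. 'big man' -> 'treat'.
    # Each pairing moves v toward the outcome's value by a fraction
    # (alpha) of the remaining prediction error (outcome - v).
    v = 0.0
    for _ in range(trials):
        v += alpha * (outcome - v)
    return v

# A handful of pairings already builds a strong association:
for n in (1, 3, 10):
    print(n, "pairings ->", round(rescorla_wagner(n), 3))

No emotion is required anywhere in that update, which is rather the point.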

13 minutes ago, wokking56 said:

Sorry, this does not constitute emotion; these are examples of learned behavior. You state that your dog dislikes large men and that nothing bad has happened since you got her. This begs the question: "What happened before you got her?" The same goes for your friend's dog: "big dude gives me treats", so the dog is nice to large men (hoping for a treat). Have you not heard of Pavlov's dogs?? After you burn your hand on a hot pot a time or two, you do not develop a "hot pot" emotion; you learn the lesson that grabbing a hot pot hurts like hell.

Anger is an emotion, though, and plenty of animals can get mighty angry. What differentiates most animals from humans - not all, many primates and dolphins also seem to be very self-aware - is the level of consciousness. An emotion is just some chemical reactions going on in your brain; what makes them special is that there's a 'you' experiencing them. Consciousness in itself is still a big question mark because nobody knows where it's coming from or what exactly it is. Also, memory is pretty damn important for any form of higher cognitive function. Imagine for a moment how you would form any sort of thought if you didn't have the ability to remember anything. Anyway, to go back to AI: until now we've basically only created singular pieces of an AI, simple minds, if you can even call them that, that do one thing damn well. The question would be, what would happen if you merged all these threads to create one AI? Some even argue that consciousness and self-awareness WILL eventually form when something gets complex enough. We'll have to wait and see, I guess.

42 minutes ago, GrimReaper said:

Anger is an emotion, though, and plenty of animals can get mighty angry. What differentiates most animals from humans - not all, many primates and dolphins also seem to be very self-aware - is the level of consciousness. An emotion is just some chemical reactions going on in your brain; what makes them special is that there's a 'you' experiencing them. Consciousness in itself is still a big question mark because nobody knows where it's coming from or what exactly it is. Also, memory is pretty damn important for any form of higher cognitive function. Imagine for a moment how you would form any sort of thought if you didn't have the ability to remember anything. Anyway, to go back to AI: until now we've basically only created singular pieces of an AI, simple minds, if you can even call them that, that do one thing damn well. The question would be, what would happen if you merged all these threads to create one AI? Some even argue that consciousness and self-awareness WILL eventually form when something gets complex enough. We'll have to wait and see, I guess.

Oh, don't get me wrong, I am fully aware that animals can and do have emotions. I merely noted that the described actions were not actually emotions.

 

As for AI, I agree we have developed some (incredibly complex) simple AIs. Yet as far as we know, we are still years away from an all-encompassing AI. We have finally reached the point where, when "talking" to an AI, it can for the most part sort out the meaning of a word like "Cardinal" from surrounding context clues.

Cardinal: a religious leader, a bird, a directional position, or a sports team.

Now, as I said, for the most part; heck, just a few years ago it would choose the same definition repeatedly, depending on its programming, regardless of context. However, I feel that AI will never surpass humans, because it will always lack the ability to "think outside the box". Certainly AI will be able to look at all the possibilities and create a better version of itself, or of anything else for that matter, but I doubt it will ever be able to invent something new. Regardless of how much information it has available or how fast it can process said information, I don't feel it will ever have the ability to make the sometimes bizarre leaps of illogical perception that tell a human "place B with Q to develop X". It will be bound by its knowledge of math and science, while a human may say: although physics tells me this is impossible, I am going to try it anyway.
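That context-clue trick can be caricatured with a simplified Lesk-style overlap count (toy Python; the sense signatures are made-up word lists, not a real lexicon or how any production system actually does it):

import string

# Pick the sense whose signature words overlap most with the sentence.
SENSES = {
    "religious leader": {"church", "pope", "vatican", "mass", "bishop"},
    "bird":             {"feeder", "red", "nest", "song", "wings"},
    "direction":        {"north", "compass", "east", "bearing", "map"},
    "sports team":      {"game", "score", "stadium", "season", "inning"},
}

def disambiguate(sentence):
    # Strip punctuation, lower-case, and compare word sets.
    words = sentence.lower().translate(
        str.maketrans("", "", string.punctuation)).split()
    return max(SENSES, key=lambda sense: len(SENSES[sense] & set(words)))

print(disambiguate("The cardinal sang from the feeder, flashing red wings"))
# -> bird

Note that it picks the best overlap even when every overlap is zero, which is exactly the "same definition regardless of context" failure described above.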


As others in this thread have pointed out, the human mind is not the only possible form that an intelligent consciousness can take.

A.I. does not suddenly become genuine synthetic intelligence once it starts to resemble the greatest of apes/the first talking animal.

 

Anthropomorphism can be fun, but it is hardly necessary to the pursuit of creating a superior intellect, as humanity's primal ancestors did before them, even if the tools are different this time around and more focused effort and intent are involved.

If humans want to give machines emotions as part of a more 'personable' user interface, or as some type of restraint or simulated motivational system, it will be purely for comfort and ease of use, or as a safety feature (whether or not it actually makes anyone safer).

1 hour ago, GrimReaper said:

Consciousness in itself is still a big question mark because nobody knows where it's coming from or what exactly it is

Exactly. This has been debated for ages. How would we recognize consciousness in an AI?

 

From Prof. Jefferson's Lister Oration of 1949: "Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain - that is, not only write it but know that it had written it. No mechanism could feel (and not merely artificially signal, an easy contrivance) pleasure at its successes, grief when its valves fuse, be warmed by flattery, be made miserable by its mistakes, be charmed by sex, be angry or depressed when it cannot get what it wants."

 

A thinking, feeling construct is both frightening and intriguing at the same time for me. I don't believe it's possible. But the shell could look very sweet indeed!


 

10 hours ago, KoolHndLuke said:

Why?

Say what you will to dissuade me.

Why do you ask? Is it to write a book, or to attract the right kinds of forum posters?

Or are you dissatisfied with something you read?

 

Genetically modified food is still food.

The Twilight Zone worried about abused robots shedding tears.

Can a machine get the kinds of cravings and desires humans can? 

If you look at Google's front page, you'll hope not.

And if a person, any person, has an overwhelming desire to please society, and the society is fragmented, schizophrenic, then

that person will be very unhappy trying to please opposing groups.

So I ask again, what for?

This question could be asked in so many ways. 

Do bugs have souls?

Is cat love still love?

Was Reina terminally sick because she could not please two opposing men?

(Was Reina totally hot, being a robot and all?)

If we glorify our humanity, with all of its multifaceted goals and ambitions, we're doomed.

Sniping, griping, pontificating, and corrupt - and machines are programmed to please us,

and hackers program machines to tease us.


54 minutes ago, KoolHndLuke said:

Why not? Maybe I'm looking for an android lover? Or maybe I want to transfer my consciousness so I can live forever. :D 

Why are all the AI characters in Skyrim snu-snu quests so ugly? Where are all the amazonian women at?

The nationalists will insist they look like RoboCop, the rebels like Tron (...something-something).

Pleasuring someone is a fantasy, and I should not have to think of ways; they should take what they need from my candy dish. I cannot read minds.

I cannot read minds!!

(error)

 

Um, yeah.

 

What was the question?

O. "What do you want?" asked politely.

Ghost in the Shell... I liked that one.

3 hours ago, wokking56 said:

Sorry, this does not constitute emotion; these are examples of learned behavior. You state that your dog dislikes large men and that nothing bad has happened since you got her. This begs the question: "What happened before you got her?" The same goes for your friend's dog: "big dude gives me treats", so the dog is nice to large men (hoping for a treat). Have you not heard of Pavlov's dogs?? After you burn your hand on a hot pot a time or two, you do not develop a "hot pot" emotion; you learn the lesson that grabbing a hot pot hurts like hell.

Yeah, I've heard about Pavlov's dogs, and no, these are not actions but emotions. Being happy, angry, or afraid are basic emotions. Actions show these emotions, but I wasn't talking about that, although I agree they are trained. But that's equally true for your example, seeing a heart -> thinking about love. Other people might think about food or work if they're a butcher or a doctor. Or anything else.

You don't need a "hot pot emotion"; what you need are actually two feelings: pain and fear. Combine them and you won't even need to touch every single hot pot in your house before you avoid them; you'll be afraid of hot things in general. Just like our dog with large men.

1 hour ago, KoolHndLuke said:

Exactly. This has been debated for ages. How would we recognize consciousness in an AI?

 

From Prof. Jefferson's Lister Oration of 1949: "Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain - that is, not only write it but know that it had written it. No mechanism could feel (and not merely artificially signal, an easy contrivance) pleasure at its successes, grief when its valves fuse, be warmed by flattery, be made miserable by its mistakes, be charmed by sex, be angry or depressed when it cannot get what it wants."

 

This guy maybe knew a lot about brains, but little about tech, or he just lacked imagination. He's making a very basic but common mistake: confusing robots with AI. They are not the same; an AI does not need a body at all. Or it could use a few hundred million at once; it doesn't matter. Even less does it need something like a human body. I could imagine a single AI controlling every single self-driving car at some point in the future, but I'm pretty sure we don't need to give cars sexual organs to do so.

 

 

Somehow, modern programs can already do what he describes - write sonnets and compose concertos - but I wouldn't call these AI either. They (can) "know" they did that, but they didn't need emotions to do so, nor do they have any feelings about having done it. Just implement a counter that increases by one every time they publish something; that's how machines know they did something. My "hello world" program that I wrote 20 years ago could do that.

 

They can know about having made mistakes, but they don't need to feel miserable about knowing they made some. Actually, nobody needs that. If there is a chance you'd regret something in hindsight, then don't do it. Or at least do it in a different way. I don't care if you or an AI feels sorry after wiping out humanity by accident; neither is supposed to do that in the first place. Do the best you can, think about what could go wrong, take steps to rule out the worst outcomes, and never feel miserable about the mistakes you made - that is what both humans and machines should do. Making that a law for humans is rather impractical, but I really hope every single AI will have such rules built into its core system.

 

The point where I doubt his knowledge in general is when he assumes there is a difference between feeling pleasure at its successes and "merely artificially signal[ling]" it (his "easy contrivance").

For all we know today, there is no difference at all, not in machines and not in humans either. We could be wrong about that, but throwing away the last 70 years of research because of some beliefs a guy had before then isn't science, that's religion. And that's the point where you lose me.

 


The term "artificial" is such an abstract, artificial concept. Nothing is actually artificial. Humans are no less or more real than sand, trees, stellar bodies, etc.

The human brain is a data bank, recording events one after another. And it's not even that good at it.

The difference between organic data banks, aka brains, and digital ones lies in their interaction with the environment. Digital banks have very little between themselves and the external world.

The human mind has an extremely complicated chemical factory, filled with unnecessary crap and pushed by age after age of evolution towards a single goal - keeping the population alive, just like any other living being.

Emotion is the chemical luggage that at some point was useful in keeping population members alive and kicking - away from dangers, towards personal survival and successful reproduction, i.e. making kids who will live long enough to make their own kids who will live long enough to...

...This line of thought could proceed into a lengthy discussion of what is real and what is abstract, but long story short: a lot of things that don't practically exist we hold very important, for reasons good or not, mostly rooted in culture, aka the bunch of stuff that was piled together mostly at random by people who didn't necessarily know what they were doing, or that their acts would be remembered.

Take the way we learned to perceive chemicals tingling our neurons, for example. We call them "emotions", without going deeper into how they are the main mechanism of interaction between our body parts. Chemistry is the language of our organs - but closer to a programming language than to a verbal one.

Take emotions out of the mind, and it will be a passive data bank.

Emotions are chemicals. Chemicals are our program code. They force responses out of the data banks of our mind.

 

The thing you call A.I. already has what we call "emotions". We call their emotions machine code. It was shaped by its environment to perform calculations instead of copulations, but the principle is the same.

 

The reason we don't understand what we are is that reality is actually pretty horrifying once we approach the idea that our own life is not actually important to anyone whose body doesn't pour drugs into their bloodstream when they think about us; and if anyone were able to understand and gain decent control over this, we might end up in a very harsh anti-utopia where fault means death, and we are full of faults. And our body tells us to flee from fear as if we were in danger. The body is too dumb to differentiate between what the brain observes in reality and what the brain pulled from memory. That was incredibly helpful in prehistoric times; it's a basic part of learning. It's far less helpful now that we've moved from the actual jungle to the concrete one.

The problem is, refusing to understand that we are machines doesn't make us any less of a machine; but lacking such understanding means weakness against those who do understand, or have just a basic grasp, and want to abuse it for personal gain.

 

I think I scratched my own neurosis while I was writing that last part. =\
