AI is a Copy!!


KoolHndLuke


AI? It started with the Greeks.

 

I. GOLDEN ATTENDANTS OF HEPHAESTUS

Homer, Iliad 18. 136 ff (trans. Lattimore) (Greek epic C8th B.C.) :
"[Hephaistos] took up a heavy stick in his hand, and went to the doorway limping. And in support of their master moved his attendants. These are golden, and in appearance like living young women. There is intelligence in their hearts, and there is speech in them and strength, and from the immortal gods they have learned how to do things. These stirred nimbly in support of their master, and moving to where Thetis sat in her shining chair."

https://www.theoi.com/Olympios/HephaistosWorks.html#Automotones

 

Hephaistos himself generally moves around on self-propelling roller skates.

 

He also invented Deviously Cursed Loot

 

II. CHAINS OF ARES & APHRODITE

Homer, Odyssey 8. 267 ff (trans. Shewring) (Greek epic C8th B.C.) :
"[Hephaistos learnt of his wife Aphrodite's adultery :] He laid the great anvil on its base and set himself to forge chains that could not be broken or torn asunder, being fashioned to bind lovers fast. Such was the device that he made in his indignation against Ares, and having made it he went to the room where his bed lay; all round the bed-posts he dropped the chains, while others in plenty hung from the roof-beams, gossamer-light and invisible to the blessed gods themselves, so cunning had been the workmanship . . . Once he had seen Hephaistos go, he himself approached the great craftman's dwelling, pining for love of Kytherea [Aphrodite] . . . So they went to the bed and there lay down, but the cunning chains of crafty Hephaistos enveloped them, and they could neither raise their limbs nor shift them at all; so they saw the truth when there was no escaping."

On 4/29/2019 at 2:09 PM, Grey Cloud said:

You jest.

Tell that to Alfred Wegener. Or, in a more contemporary context, what about all the 'the debate is over' morons on climate change and AGW?

Kuhn and several others have written about the resistance to change in science.

How convenient that you left out the very next sentence in my post that addresses your point:

 

That is how it should be, at least, because even scientists aren't immune to pride and groupthink, which may or may not sometimes complicate matters when scientists don't work as scientists but rather as people who want to earn fame and respect, for example.

 

Honestly, I don't think you're interested in any discussion, you just want to flex whatever muscles you imagine yourself to have. I don't consider myself to be a stage that can be used to perform on, so I'm out.


I'm not sure why this article is currently on Google's front page

https://www.wired.com/story/will-artificial-intelligence-enhance-hack-humanity/

 

or why "Cheddar", a bombastic science-show talked about ai, I wonder, is it AI Month?

 

Well anyway, I only skimmed the article, just like I barely listened to Cheddar.

I kept getting distracted by *this* place.

Cheddar is pretty close to interesting when nothing else is on, but another show on the same channel is also almost interesting: "Because Science", where some guy will bombast you with fancy math to make wild statements about fiction.

So yeah, interesting (what else is on?)

40 minutes ago, 2dk2c.2 said:

I'm not sure why this article is currently on Google's front page

https://www.wired.com/story/will-artificial-intelligence-enhance-hack-humanity/

Say it ain't so, Joe! AI hacking humans?! Sounds like a lot of scientificy stuff and the end of the world! However, an AI that can perfectly mimic emotions and/or lie (like a vid I posted here) would be extremely dangerous. It would set a precedent once one lie was successful, and all networked AI would learn and adapt at lightning speed. The whole system of checks and balances that we live by could be overturned in an instant! Think about that for a moment. Several well-placed lies followed by precise action could cripple (and maybe topple) the most powerful empire overnight. Imagine a scenario where a group of sentient AI "think" and override the prime directives assigned by their so-called genius human programmers. Would some sense of preserving humanity prevail in them and "guide" them to the right action or inaction?

 

We joke about this shit now. But problem solving AI is already well into development and maybe production. It is important that we, as vulnerable, thinking individuals, figure this out. I think we're running out of time.


AI doesn't exist; all "AI" is just a bunch of algorithms and if-thens. We're not limited by computational power either; the human brain is a lot, lot slower than a modern CPU. We just don't know how to make an intelligent system that can self-improve.

 

Self-improvement and self-reflection are what's necessary for a real AI, I think. Not something as silly or subjective as emotions. AI needs to learn from its previous experiences and improve, just like humans do. At least the smart ones.

16 hours ago, Nevershouldhavecomehere said:

AI doesn't exist; all "AI" is just a bunch of algorithms and if-thens. We're not limited by computational power either; the human brain is a lot, lot slower than a modern CPU. We just don't know how to make an intelligent system that can self-improve.

 

Self-improvement and self-reflection are what's necessary for a real AI, I think. Not something as silly or subjective as emotions. AI needs to learn from its previous experiences and improve, just like humans do. At least the smart ones.

There is a problem with your perception of what AI, or intelligence, actually is. The ability to take in stimuli in the form of knowledge (data) and apply it to tasks (algorithms) is the very raw definition of intelligence. In our brain function, we humans are fundamentally no different from how a machine would do it. The difference, as you mentioned, is the astronomical speed gap between machine and human when it comes to information processing, which is due to the inferior build of humans. Since we're organic beings, we rely on chemicals and natural static inside us to regulate pretty much all our functions, which is inferior to the machine's comparatively robust way of handling information in the form of hard electricity and heavy-duty materials.

 

Which means self-improvement, in quite the literal sense, is actually more readily accessible in machines than in humans. For example, did you know that self-improving algorithms and "AI" are already a thing? Not at the sci-fi level, of course, but it's getting there. Google, for example, let an AI play Mario all by itself. The program was rather bare-bones and didn't have anything special to it, like an infant. It would keep on playing the game and would surely fail, just like humans do. But each time it fails, it evaluates the error and adds to its learning curve, improving so that it has a better chance of not repeating the mistake that was made previously. This is "trial and error", the very fundamental mechanism of what we know as evolution.
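The trial-and-error loop described above can be sketched in a few lines. This is a hypothetical toy (a three-option "bandit" game), not the actual Mario-playing program: the agent starts out knowing nothing, and every success or failure nudges its value estimates so that later choices get better.

```python
import random

# Toy trial-and-error learner: three "moves" with hidden payoff rates.
# Each outcome is evaluated and folded back into the estimates, so the
# agent's "learning curve" improves with every play.
true_payoffs = [0.2, 0.5, 0.8]   # hidden from the agent
estimates = [0.0, 0.0, 0.0]      # the agent's current beliefs
counts = [0, 0, 0]

random.seed(0)
for step in range(2000):
    # Explore a random move sometimes; otherwise exploit the best estimate.
    if random.random() < 0.1:
        move = random.randrange(3)
    else:
        move = estimates.index(max(estimates))
    reward = 1.0 if random.random() < true_payoffs[move] else 0.0
    counts[move] += 1
    # Incremental average: each trial (or error) refines the estimate.
    estimates[move] += (reward - estimates[move]) / counts[move]

best = estimates.index(max(estimates))
print(best)  # the agent should settle on the highest-payoff move
```

The same evaluate-and-update skeleton, scaled up enormously, is what sits under the game-playing systems mentioned in this thread.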

 

So a self-improving AI is very possible. And since machines give raw data output in the form of numbers, one can accurately measure the entirety of it, unlike humans, where there really isn't a way to tell the precise level of a certain emotion at a certain time, or what's really causing it to begin with, because we don't get HUD readouts on our brain condition and its composition of chemicals. Even the smartest human sometimes fails to understand himself, which ties back to my previous posts right here in this thread, where I talked about human x machine hybridization boosting what humans as a race can do from an evolutionary perspective.

  • 5 weeks later...
On 5/1/2019 at 2:26 PM, Nevershouldhavecomehere said:

A human brain is a lot, lot slower than a modern CPU.

 

 

We just don't know how to make an intelligent system that can self-improve.

 

 

 

I'm a pain in the ass when it comes to "facts", sorry for the intrusion; I just need to communicate current knowledge:

 

1. A modern CPU with a 4 GHz "speed" is a processor that can run (at maximum) 4,000,000,000 (4 billion) cycles (a 0-or-1 binary step) per second. A human neuron (the brain's equivalent of a "gate") can fire an excitatory or inhibitory response (a 0-or-1 binary signal) once every 2-3 milliseconds. That means one neuron can fire (communicate info) roughly 300-500 times per second; call it 300 Hz to be conservative. Multiply this by the number of neurons in the brain: 300 Hz * 80,000,000,000 neurons = 24,000,000,000,000 Hz. The human brain's maximum aggregate "speed" is about 24 THz (24,000 GHz). It is incomparable. The number of processes the human brain undergoes is gargantuan, and most of them are not perceivable/declarative. Just reaching out to grab my cigarette from the ashtray, using vision to guide the action, performs a load of visual, proprioceptive, kinematic and distance calculations per millisecond, enough to significantly stress a modern processor if it were performing other tasks at the same time. It is a surge of incoming data, all of which are processed vigorously and successfully. Just reaching for my cigarette.
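For what it's worth, the arithmetic checks out. Here it is as a quick sanity check, using the post's own round figures (300 Hz per neuron and 80 billion neurons are rough estimates, not measurements):

```python
# Aggregate "firing rate" of the brain vs. the clock of a single CPU core,
# using the rough figures from the post above.
cpu_hz = 4e9        # 4 GHz CPU: 4 billion cycles per second
neuron_hz = 300     # ~1 spike per 3 ms, a conservative per-neuron rate
neurons = 80e9      # ~80 billion neurons (some estimates say ~86 billion)

brain_hz = neuron_hz * neurons
print(brain_hz / 1e12)    # 24.0 -> about 24 THz in aggregate
print(brain_hz / cpu_hz)  # 6000.0 -> ~6,000x the single core's clock rate
```

Of course this compares apples to oranges (a spike is not a CPU cycle), but as a "speed as 0/1 events per second" comparison it matches the numbers in the post.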

 

Imagine dancing.

 

2. We do know how. Machine learning: an algorithm that improves its own algorithm as data come in and are evaluated and integrated.
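A minimal concrete example of this, a sketch rather than anything production-grade, is online learning: a parameter is re-evaluated and updated after every incoming data point, so the model literally improves itself as data come in. Here it learns the slope of y = 2x from a stream of samples:

```python
# Online learning sketch: each incoming (x, y) pair is evaluated against
# the current model, and the error is integrated back into the parameter.
w = 0.0    # initial guess for the slope; the true slope below is 2.0
lr = 0.05  # learning rate: how strongly each error adjusts the model

data = [(x, 2.0 * x) for x in range(1, 6)]  # a stream of observations
for epoch in range(50):          # the stream replayed as more data "comes in"
    for x, y in data:
        error = w * x - y        # evaluate the current model on this sample
        w -= lr * error * x      # integrate: nudge w to reduce that error

print(round(w, 3))  # converges to ~2.0, the true slope
```

Every pass shrinks the remaining error, which is the evaluate-then-integrate loop the post describes.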

 


I really do not know enough about "true" AI to have an opinion either way; what you say does make sense, but other points here hold water as well. My opinion on this whole thing boils down to the actual intelligence of humans themselves. Our intelligence is smoke and mirrors, a security blanket we wrap ourselves in to try and hide how ignorant we are. Here are some basic examples of how stupid we have become:

 

Grammar has zero to do with intelligence; it's really people trying to portray themselves as smart when in reality it has nothing to do with true survival or intelligence, yet for some reason some believe it puts us at the top of the food chain.

 

Breasts are looked at as a sexual thing when in their simplest form they are glands to feed a baby, yet we pass laws to stop women from doing it in public.

 

To stop something from living you have to kill it. If you spray Lysol on your counter you kill the bacteria; if you put antibacterial soap on your hands you kill the bacteria (good and bad); and if you drill a hole in a fetus's skull and suck the brains out with a vacuum, you have killed a human, yet somehow we make an argument about how it's a right.

 

We didn't learn how to fly by "inventing" a plane; we copied the wings of a bird that's been doing it for thousands of years, and we take credit for what nature has done all along.

 

And we are the only species on this planet stupid enough to pretend there is some kind of human god, kill in its name, and then talk to the sky like something magical is going to happen, then look to the religious as the "good" in the world.

 

I could go on all day long, but in the end we are an ignorant species that pats itself on the back over arguments that truly hold no water. So are we smart enough to make a program that would exceed our intelligence? Probably not, but in reality it wouldn't take much, would it?

20 hours ago, Fractal Sun said:

 

I'm a pain in the ass when it comes to "facts", sorry for the intrusion, I just need to communicate current knowledge :

 

1. A modern CPU with a 4 GHz "speed" is a processor that can run (at maximum) 4,000,000,000 (4 billion) cycles (a 0-or-1 binary step) per second. A human neuron (the brain's equivalent of a "gate") can fire an excitatory or inhibitory response (a 0-or-1 binary signal) once every 2-3 milliseconds. That means one neuron can fire (communicate info) roughly 300-500 times per second; call it 300 Hz to be conservative. Multiply this by the number of neurons in the brain: 300 Hz * 80,000,000,000 neurons = 24,000,000,000,000 Hz. The human brain's maximum aggregate "speed" is about 24 THz (24,000 GHz). It is incomparable. The number of processes the human brain undergoes is gargantuan, and most of them are not perceivable/declarative. Just reaching out to grab my cigarette from the ashtray, using vision to guide the action, performs a load of visual, proprioceptive, kinematic and distance calculations per millisecond, enough to significantly stress a modern processor if it were performing other tasks at the same time. It is a surge of incoming data, all of which are processed vigorously and successfully. Just reaching for my cigarette.

 

Imagine dancing.

 

2. We do know how. Machine learning: an algorithm that improves its own algorithm as data come in and are evaluated and integrated.

 

You're forgetting you won't use all your neurons for simple things. You use different neural pathways for math, different for coordination, et cetera.

I mean yeah, a man is a complex machine, but in raw speed, computers are better.

On 6/2/2019 at 5:46 AM, Nevershouldhavecomehere said:

You're forgetting you won't use all your neurons for simple things. You use different neural pathways for math, different for coordination, et cetera.

I mean yeah, a man is a complex machine, but in raw speed, computers are better.

True, a modern CPU is much faster than any single human neuron. Yet the human brain carries out hundreds to thousands of computations simultaneously every second; try that with a CPU and all you will get is a crash. So sure, I guess you could feasibly build a machine with tens of billions of interconnected CPUs and somewhere around 2 petabytes of ROM storage capacity to roughly approximate a human brain. Hell, I wouldn't even begin to have the slightest idea how much RAM you would need for this (but let's just say a sh**load).
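The "2 petabyte" figure isn't crazy. One common back-of-envelope route to it counts synapses rather than neurons; both numbers below are loose, frequently quoted estimates, not measurements:

```python
# Rough storage estimate for "writing down" a brain's connectivity.
synapses = 100e12         # ~100 trillion synaptic connections (rough estimate)
bytes_per_synapse = 20    # if each connection took ~20 bytes to describe (assumed)

storage_bytes = synapses * bytes_per_synapse
print(storage_bytes / 1e15)  # 2.0 -> about 2 petabytes
```

Change either assumption by a factor of a few and the answer moves accordingly, which is exactly why such estimates should be taken as order-of-magnitude only.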


^ Bio brains are faster at what they're doing than computers are at what computers are doing, but computers are faster at what computers are doing than bio brains are at what computers are doing. Bio brains can be called potentially greater, (just really inefficient) and computers functionally better, but computers are improving fast so that gives them a potentially greater card too. Hopefully these two networks will be combined before either species is destroyed so we can have both efficiency and speed.

This is a fun thread.

About the sentience vs no-sentience animal vs machine thing, I think it's only a matter of complexity. This'll be a bit of a lazy comparison, but if you think AI needing people to give it objectives, info, and relevance in order for it to work kicks it off of the sentience boat, remember that each of us needed the perceivable universe around us to provide all of our info, options, and ideas. We're a foot higher in control than AI is because we're holding the food, but that's only because we're older, and AI is growing a lot faster than we did. Once they can hear-feel-see about as well as we, they'll be able to find ideas we didn't give them, probably ideas unrecognizable to us. The algorithm generated art and programs like Ganbreeder tell me they'll notice a lot of info about our perceivable world that we don't.

https://www.youtube.com/watch?v=dLRLYPiaAoA
Fun AI video by exurb1a; they've covered a lot of other awesome humanity topics in other videos too. "And Then We'll Be Okay" was my favorite.

On 4/20/2019 at 4:07 AM, Reginald_001 said:

We are writing code that can complexify itself exponentially, meaning: we are writing complex code that can write and rewrite complex code, and thereby, through iterations of itself, becomes more and more complex. This is the very nature of evolution.

 

We started as a single cell (instruction) and became more complex as we 'evolved'.

Now that we have code that can evolve and write itself, your assertions are nothing short of wrong.

 

(And it's that sort of naivety which will bring about the destruction of mankind by our robotic overlords.).

Oh please, complexity is not a mark of evolution, considering how clumsy complexity still is and how natural and simple evolution is.

4 hours ago, Plush Muse said:

Oh please, complexity is not a mark of evolution, considering how clumsy complexity still is and how natural and simple evolution is.

If I understand you correctly, I think I can show you that complexity is indeed a mark of evolution. Everywhere we see it, complexity is a side effect of the goal, which is to live. The greater percentage of historic survivors were better adapted to living, and complex, fantastic coincidences played a large role in those ancestors surviving when others could not. Just as a single lifeform grows from something smaller and simpler into something larger and more complicated, a species will grow more specialized for the path that it's taking as long as it survives, through the regular evolution of its strongest members in the climate and era developing quicker and quicker reflexes or longer and longer patience, or sicker guns. That specialization involves stacking up a complex balance of appropriate features for success; increased complexity is the byproduct of a species' evolution. Until its line ends.

On 6/2/2019 at 1:46 PM, Nevershouldhavecomehere said:

You're forgetting you won't use all your neurons for simple things. You use different neural pathways for math, different for coordination, et cetera.

I mean yeah, a man is a complex machine, but in raw speed, computers are better.

Same for a CPU. A CPU is rarely using all of its 4 GHz. This is why I said "at maximum", to have a comparable "state of functioning". It is also why there are cores now: to mimic the modular, stream-of-info processing of the brain.

 

On the contrary, in raw speed, processing is much faster in humans. A human "has" to do a lot of things at the same time, so there is a trade-off. When a PC has to do only a few things, then it is faster.

 

In other words, total speed (not per-process speed) is faster in humans when the comparison is this superficial (speed as a function of 0 and 1 inputs/outputs). Hence seamless perception, foveated rendering, etc.: functions that are smooth as rain in humans while a PC struggles to perform well, functions we gave PCs based on our own, because the brain is used as the model for A.I. and robotics. The parsimony of the brain is the fundamental phenomenon that any machine is built upon (maximum efficiency with minimum effort). It is peculiar to think that a PC is "better" when there are no clear examples of the ways in which it is better. Remembering which folder, out of the hundreds on your drive, holds a particular document is something you may execute in 50 milliseconds, while the PC might take anywhere from 2 to 60 seconds to find it.

 

We are impressed with PCs, as with anything else we created in our image (hint: imaginary friends with superpowers). This is far from a testament that they are "better".

 

 

 

2 hours ago, Voldearag said:

a species will grow more specialized for the path that it's taking as long as it survives, through the regular evolution of its strongest members in the climate and era developing quicker and quicker reflexes or longer and longer patience, or sicker guns. That specialization involves stacking up a complex balance of appropriate features for success, increased complexity is the byproduct of a species' evolution. Until its line ends.

Interesting take on the subject; so humans' only complex specialization is tool-making. I mean, we have long since left the climate we are biologically best suited for through the use of tools (clothing). We have not chosen a clear path as of yet: apex predator (check), farmer (check), rancher (check), mechanical engineer (check), and so on, solely due to the use of tools. All the milestones along the way are still very much a part of daily life. So no specialization there. Our biological morphology hasn't changed in nearly 35,000 years. It seems to me that the current human goal is merely to continue to "survive" with the fewest number of us actually having to get our hands dirty.

 

Unfortunately, since AI is being shepherded by us humans, it is also being led down a path of multi-functionality, so it will never truly develop a specialization. Like humankind it has become segmented: numerous groups all specializing in different areas with no clear end goal. Unless huge strides are made relatively soon, once humans are gone AI will follow along quite rapidly.


This thread is quite amusing, I must say.

Under the guise of the OP's contemporary question lies a hidden, ages-old problem (and argument):

"Are humans something more than very complex animals?" and also, in a way, "Is there anything beyond our perceptible reality?", by which second question I mean the physical world.

All the posts in here seem to try to address the first question:

If there is more to humans than just their biology (something out of this world, divine), then AI, being completely of this world, can never attain that same status.

If humans are merely *smart* animals, or rather biological machines, then the only obstacle on AI's way to becoming just as good is its complexity and technological perfection.

 

Any side issues mentioned, like computing power, the ability for self-improvement, learning and so on, are secondary to those two questions.

But framing this question through the prism of AI (which is an immediate issue) isn't a bad thing (or necessarily bait); trying to answer it actually takes us on a journey of self-discovery of what humanity means to us.

If anyone is actually interested in reading more on these things, look up philosophy of mind and read the philosophers you agree with. Or the ones you do not, in the spirit of being a true scholar instead of just another guy who never verifies his world view.

As for me personally, I don't think AI will achieve full parity with the human mind. I don't think it will ever be able to actually consider ideas completely abstracted from reality. Whether that is an appropriate measure is not a question to be answered (at least in my comprehension); it's just a view. As to whether some humans are actually capable of that... that is a matter for a different thread.


Back when computers first beat humans at chess, AI was still fully scripted, something you could call a "copy". But deep learning changed that; now it's humans watching and learning how the AI does it. I follow DeepMind with big interest as they keep delivering interesting things ;) The board game Go community has already been changed by AI, which brought in new strategies, and the next famous game was StarCraft 2. You can find examples on YouTube of its early games against pro players, beating them almost every time.

 

Games are just the stepping stone; they will be able to adapt the same general-purpose learning algorithm to speech and everything else in time. And it's undeniable, with loads of evidence supporting it, that AI can be creative and make things that surprise us.

9 hours ago, wokking56 said:

Interesting take on the subject; so humans' only complex specialization is tool-making. I mean, we have long since left the climate we are biologically best suited for through the use of tools (clothing). We have not chosen a clear path as of yet: apex predator (check), farmer (check), rancher (check), mechanical engineer (check), and so on, solely due to the use of tools. All the milestones along the way are still very much a part of daily life. So no specialization there. Our biological morphology hasn't changed in nearly 35,000 years. It seems to me that the current human goal is merely to continue to "survive" with the fewest number of us actually having to get our hands dirty.

 

Unfortunately, since AI is being shepherded by us humans, it is also being led down a path of multi-functionality, so it will never truly develop a specialization. Like humankind it has become segmented: numerous groups all specializing in different areas with no clear end goal. Unless huge strides are made relatively soon, once humans are gone AI will follow along quite rapidly.

Impatience with terrifically slow natural evolution will probably lead biological scientists to try experiments in an effort to speed things up or to "tailor" genetic traits. Add to this augmentations from, or integration with, A.I., and superhumans aren't that far-fetched an idea. So don't give up on our species just yet. :cool:

The problem is that there are many more ethical concerns about experimentation on humans or animals than anything else. So what do you do instead? Build synthetics that think, act, and look exactly like humans and study them in every conceivable environment and situation as a means of better understanding ourselves. Up until now, we have had nothing to really compare to.

 


 


Practically everything mentioned in this thread is sci-fi fantasy. It's like reading through a 60s version of Popular Mechanics magazine and wondering what the future will hold. "Will we have flying cars in the year 2000?" sort of thing.

 

A.I. will not become sentient, not in our lifetime anyway. Sentience is a byproduct of a biological brain. 

 

Sentience: the capacity to feel, perceive, or experience things subjectively. 

 

There is no practical reason to make A.I. "feel" things or have feelings towards other machines/humans. There are better ways to communicate data in that sense.

 

A.I. is lacking in what they call "abstraction". This abstraction is what separates us from the animal kingdom. The ability to imagine things that don't correlate with or exist in the natural world, and to combine those things into something new, will be hard to incorporate into artificial intelligence. That is not to say the A.I. of today has none of it, but it is nowhere near that of a human. And even if it were possible, we would still need "theory of mind" as well. How would you code that in?

 

Even plugging a computer into a biological brain would be a challenge. There is a bandwidth issue. Solve that and we may have cyborgs. Not in our lifetime though. 

 

A.I. WILL wind up killing us, if mother nature doesn't get to us first, but it will be in the form of a weapon guided by humans......looooooong before it becomes sentient enough to realize humans are bad.

 

Also, let's say everything mentioned in this thread does come true and A.I. is hellbent on taking us out. Can't we just unplug the machine or shut down the power plants? It's all manipulated electrons at the end of the day. Please don't say "it will make its own solar panels", because those are the types of red flags we will see looooong before A.I. gets the notion to take us out. I just don't believe in these doomsday scenarios where A.I. takes over. I honestly think it will be humans using A.I. that kill us, not A.I. alone.


Archived

This topic is now archived and is closed to further replies.
