View Full Version : Sentience (The Deep Bits)

22-05-2007, 01:57 AM
Hey guys!

Just thought I'd strike up a bit of a theoretical topic here to help come up with inspiration for a little side project that I had been rolling around in my head.

This is something that I take time to sit and ponder from time to time. Real sci-fi type stuff. But then is it? Can a piece of software really think on its own and attain what other organic creatures are known to have: sentience?

You know how many scientists, to rationalize a problem and better understand it, like to flip things inside out? Doing so gives you a different perspective, and that often gives you new ways to look at the issue.

Take us, for instance. Our consciousness. Let's assume for a moment that it's not the digital world whose sentience is in question, but in fact us and ours. Pretty blasphemous, isn't it? :) In fact many would simply balk at any such notion and dismiss it as mere 'poppycock'. Ok, maybe only the older crowd, save for a few from the UK. ;)

But if you haven't already thrown your Bible/Koran or whatever else you've got at me, give this one a spin: what if every memory, thought, and thing you have learned is just one continually shifting state of electrical signals and neurons, all buzzing around your warm squishy pink brain?

I mean, look at software as we know it. A 'simple' set of commands run through a central processor to execute functions across several different devices and co-processors, each one getting stuffed with that blob of 1s and 0s we call data.

Now, we made these things simple so they could be digested a piece at a time, learned, studied, and improved upon gradually. Evolution is known to be a tad dirtier, so we are not quite so simple as the computer you're sitting in front of, reading this piece of heresy you've loaded onto it.

Now, the idea that all it takes is a flipped switch and we're gone is not pretty to anyone. In fact it is natural and proper to cherish life as we do. At the same time, I'm not saying that if we turn off our computers or uninstall our new copy of Gears of War we're mass murderers. Not at all. But it does give you a way to relate our own consciousness to an artificial one, and a new frame for thinking about the forms sentience can take.

Look at dogs and cats. Are they as smart as us? No, but then they also see the world in different ways than we do. Each of their five senses is either sharper or duller than ours, and they have different abilities as physical beings as well. All of this contributes to their take on being a conscious being. They see their world through their eyes, not ours, so their perspective is skewed accordingly.

So what is a sentient being? We can quote the dictionary all we want, but the truth is that this is still a mystery to us. We have only theories and ideas so far. What we call sentience probably really is something else altogether. In our struggle to simplify things into smaller bits to process, we categorize and build a model to make sense of it all. It's a rational approach, and it's all any intelligent being of our kind really has for learning about these things.

Who knows maybe one day we'll be beamed into a digital world of Tron or have a Matrix moment. But until that day I guess we're stuck just learning, dissecting, evolving...

Hope I've given you guys some insightful 1s and 0s to ponder yourself. Feel free to post your thoughts or comments. Or damnations. All are welcome! ;)

22-05-2007, 12:54 PM
Sentient is a loose term depending on who is using it and how. Calling an animal sentient in some circles is a misnomer in general. But that's a larger topic than what I think you're talking about.

AI and the modeling of intellect have been deemed just around the corner since the founding of computers, and I don't mean home PCs. When ELIZA was built, the designers thought they could model a human in less than 3 years. It's quite a few years later and we still haven't reached even part of what a human can do.

This isn't to say that it isn't possible; in fact I think it can be done. Something to ponder as you're wondering why it hasn't been reached is the old claim that humans "can only use 10% of their brain". Of course, there are savants said to use more, but then they (typically) suffer socially.

Recent (in the past 20 years) leaps and bounds in AI have been made by trying to model not a human but a "lower life form" such as a cockroach.

Cockroaches are easier to model; they have a fixed set of sub-systems that has been well described by many specialists (outside of the AI world). They also have an amazing ability to adapt to the world around them. Supposedly, cockroaches bred back toward million-year-old stock don't run from light, yet modern ones do, an apparent change in their hard-wiring over the years. Looking closer, their phobia of light seems to come into existence about the same time that man made light part of everyday life (I think this was about 3,000 years ago, but I may be mis-stating).

In short, modeling a human isn't as easy as you would think. But, like any development, starting small and building up may prove to be the solution. Even when we do start modeling humans, it won't be full-grown adults at first; it will be children or infants and their reactions. LOUD SOUND = bad, soft soothing sound = good.
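That infant-style "LOUD SOUND = bad" learning can be sketched in a few lines. This is purely my own illustrative toy (the class and constants are made up, not any real system): an agent nudges a learned good/bad score for each stimulus toward the feedback it keeps receiving, and its reaction follows the score.

```python
# A minimal sketch of the "start with infant reactions" idea: learn a
# valence (good/bad) per stimulus from repeated feedback. All names and
# numbers here are illustrative assumptions.

class InfantAgent:
    def __init__(self, learning_rate=0.5):
        self.valence = {}              # stimulus -> learned score in [-1, 1]
        self.learning_rate = learning_rate

    def experience(self, stimulus, feedback):
        """Nudge the stored valence toward the feedback signal."""
        old = self.valence.get(stimulus, 0.0)
        self.valence[stimulus] = old + self.learning_rate * (feedback - old)

    def reaction(self, stimulus):
        score = self.valence.get(stimulus, 0.0)
        if score < -0.25:
            return "avoid"
        if score > 0.25:
            return "approach"
        return "neutral"               # never-seen stimuli get no reaction yet

agent = InfantAgent()
for _ in range(5):
    agent.experience("loud sound", -1.0)     # LOUD SOUND = bad
    agent.experience("soothing sound", 1.0)  # soft soothing sound = good

print(agent.reaction("loud sound"))      # avoid
print(agent.reaction("soothing sound"))  # approach
```

Nothing here is "intelligent", of course; the point is only that the reaction is learned from inputs rather than scripted per stimulus.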

Just my quick ramblings. By the way, the note about the 10% above keeps conversations going in my AI and robotics groups for hours. You have to keep religion out, otherwise it's too easy to say "because <insert> says so".

22-05-2007, 03:13 PM
I think your example of cockroaches is a good one. A few researchers have determined that insects and other 'lower forms of life' are a more immediately attainable goal. Before you can fly, you must learn to crawl, no?

Would you or anyone agree, though, that the onus is on us to create simply the vessel or container for such a sentience to grow and evolve on its own? Take again the example of a bug that learns new things as it adapts to its environment.

In a new little project of mine, I've decided to see just how far this 'from the ground up' approach can be taken. Remember those little 'Bugs' or 'Life' games you used to be tasked to make in high school computer science class? Well, what if we scaled that up a bit? Not so much for the amusement of watching some overgrown Tamagotchi things that bleep when you have to 'feed' them, but instead to see how well we can simulate the culturing of 'real' digital lifeforms.

Yeah, it's a little laughable when you look at it practically, but let's assume we build a dedicated server for these things that could be viewed and monitored by remote clients. View only; no touching or feeding the creatures. :P

At least that's my latest project. It's way inferior to any real-life creatures of course, but the idea isn't so much to actually do it as to see how one can or can't do it. I imagine I'll do a great deal of learning how not to do such a thing before learning how to do any of it, but that's science for you, right? ;)
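The kernel of such a 'Bugs'/'Life' world, scaled up one notch, can be sketched quickly. This is my own toy version, not the poster's actual project: a grid where creatures wander, spend energy, eat food that appears on its own, split when well-fed, and die when empty. Every rule and constant is an illustrative assumption.

```python
# Toy artificial-life world: no one 'feeds' the creatures; the rules run
# on their own. All names, rules, and constants are illustrative.
import random

class Creature:
    def __init__(self, x, y, energy=10):
        self.x, self.y, self.energy = x, y, energy

class World:
    def __init__(self, size=20, n_creatures=8, food_rate=5, seed=42):
        self.rng = random.Random(seed)
        self.size = size
        self.food = set()
        self.food_rate = food_rate
        self.creatures = [Creature(self.rng.randrange(size),
                                   self.rng.randrange(size))
                          for _ in range(n_creatures)]

    def step(self):
        # Food appears by itself each tick.
        for _ in range(self.food_rate):
            self.food.add((self.rng.randrange(self.size),
                           self.rng.randrange(self.size)))
        newborns = []
        for c in self.creatures:
            c.x = (c.x + self.rng.choice((-1, 0, 1))) % self.size
            c.y = (c.y + self.rng.choice((-1, 0, 1))) % self.size
            c.energy -= 1                      # moving costs energy
            if (c.x, c.y) in self.food:
                self.food.discard((c.x, c.y))
                c.energy += 4                  # eating restores some
            if c.energy >= 20:                 # well-fed: split in two
                c.energy //= 2
                newborns.append(Creature(c.x, c.y, c.energy))
        # Starved creatures die; survivors and newborns carry on.
        self.creatures = [c for c in self.creatures + newborns if c.energy > 0]

world = World()
for tick in range(100):
    world.step()
print("population after 100 ticks:", len(world.creatures))
```

A dedicated server would just run `step()` forever and stream snapshots to view-only clients; the interesting part is whether the population dynamics surprise you once the rules get richer.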

I don't want to get too into my project here as I don't want people to focus too much on it instead of the broader issues discussed here, but this is something that I plan on seriously researching myself over time.

Can a digital creature really evolve, learn, and develop past what it was originally programmed to do? Emergence will likely play a large role. This is something that a few enthusiasts/garage researchers, such as Matt Buckland over at AI-Junkie.com, have looked deep into themselves. I think it's one of the keys to developing any form of artificially 'intelligent' creature.

28-08-2007, 01:34 AM
I think that to make a computer sentient, it's not what you can code that makes the difference; it's what the computer itself can code. If you can let the computer alter its own programming, and make that mostly bug-free, even if it only alters simple things, it can theoretically do anything, because simple changes lead to larger ones.

Of course, many problems lie in the way. For example, this could still be progressing at the same rate as real evolution, which takes millions of years, and batteries/power won't last that long without shutting down. So we need to develop a computer which not only powers itself but can safeguard itself.

If it can modify its own coding, no safeguard made by you can protect it. So you would first need to develop a dynamic language with EVERY possibility to expand, and place additional protections to 'guide' the growth in a safe manner. Making an I, Robot-style robot would be a piece of cake... well, compared to making a real sentient robot.
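One way to read the "dynamic language with safeguards" idea: let the program rewrite its own instruction list, but only from a whitelisted instruction set and within a fixed size, so the guard rails live in the interpreter rather than in the code being modified. This is purely an illustrative toy I made up; the instruction set and goal are arbitrary.

```python
# A program that edits its own instruction list, hill-climbing toward a
# goal, while the interpreter enforces the safeguards. All illustrative.
import random

SAFE_OPS = ("inc", "dec", "double")    # the only instructions allowed
MAX_LEN = 8                            # hard cap on program size

def run(program, x=1):
    for op in program:
        if op == "inc":
            x += 1
        elif op == "dec":
            x -= 1
        elif op == "double":
            x *= 2
    return x

def self_modify(program, rng):
    """Propose one edit to the program's own code, within the safeguards."""
    edited = list(program)
    if len(edited) < MAX_LEN and rng.random() < 0.5:
        edited.insert(rng.randrange(len(edited) + 1), rng.choice(SAFE_OPS))
    else:
        edited[rng.randrange(len(edited))] = rng.choice(SAFE_OPS)
    return edited

rng = random.Random(7)
program = ["inc"]                      # starts out nearly empty
goal = 24
for _ in range(500):
    candidate = self_modify(program, rng)
    # Keep the rewrite only if it gets strictly closer to the goal.
    if abs(run(candidate) - goal) < abs(run(program) - goal):
        program = candidate

print(program, "->", run(program))
```

Simple changes leading to larger ones, exactly as above: each accepted edit is tiny, but the accumulated program can do things the starting one-liner never could, and it can never step outside `SAFE_OPS` or `MAX_LEN`.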

For an I, Robot-style robot (although the movie wants it to come across as not scripted), you would simply have to script a reaction to every possible event. Over, say, a hundred years we could have a robot that would react to anything in the world, short of repairing itself (and possibly even that), but it still wouldn't be sentient. It wouldn't even be conscious.

For it to become conscious or sentient, it has to be able to make its own decisions. For it to make decisions, it needs a reason. For example, we will assume this robot can alter its coding... in a safe manner:

Our robot is walking around. It knocks a vase over.

It's possible it could realize that knocking a vase over is inappropriate, alter its code so it can tell when it is about to knock over a vase, and have time to stop itself. If it advanced further, it might even develop a reason to want to knock a vase over.
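The vase scenario reads like plain trial-and-error learning, and a tiny sketch makes the mechanism concrete. This is my own illustrative toy (the situations, actions, and numbers are all made up): the robot starts with no rule about vases, and the penalty from each crash shifts its action preferences until stopping wins.

```python
# Trial-and-error learning for the vase scenario: the "rule" about vases
# is never written in; it emerges from the penalty signal. Illustrative.

class VaseLearner:
    def __init__(self):
        # action preferences per situation, all equal at first
        self.value = {("near_vase", "keep_walking"): 0.0,
                      ("near_vase", "stop"): 0.0}

    def choose(self, situation):
        # pick the currently highest-valued action (ties -> keep walking)
        keep = self.value[(situation, "keep_walking")]
        stop = self.value[(situation, "stop")]
        return "keep_walking" if keep >= stop else "stop"

    def learn(self, situation, action, reward):
        # nudge the chosen action's value toward the reward received
        old = self.value[(situation, action)]
        self.value[(situation, action)] = old + 0.1 * (reward - old)

robot = VaseLearner()
for trial in range(20):
    action = robot.choose("near_vase")
    reward = -1.0 if action == "keep_walking" else 0.0  # crash = bad
    robot.learn("near_vase", action, reward)

print(robot.choose("near_vase"))  # stop
```

On the first trial it walks on and breaks the vase; one penalty later, stopping already scores higher, and from then on it stops in time. "Wanting" to knock a vase over would need a reward structure of its own, which is exactly the harder question.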

In the end, anything that a person can code is always going to be static unless a person updates it. If it is static, it leaves no room for expansion. Without expansion, learning is nearly impossible, and what is possible is extremely limited.

28-08-2007, 09:25 AM
Human brains, IMHO, are merely engines which consume inputs and react to them. Every single action a human being takes is influenced by the inputs received at some point in the past: building initially on instincts, then over time remodeling behavioral templates to stay alive, reproduce, and increase the chances of certain inputs happening which trigger certain chemical reactions, all ultimately directed by instinct as a way to stay alive or reproduce.

There is no free will, only the perception of free will. We are just part of an infinitely complicated electrical and chemical reaction. It's very, very complex and therefore impossible for us to understand fully.

... of course, I could be wrong. We could always be magical souls driving a human vehicle, just waiting for it to break down enough so it stops and the soul is released again...

Apologies for the apparently cheap poke at religion, not my intention to offend but I think the spiritual side always muddies the water so I wanted to get that out of the way so we can think about this from a pure engineering perspective.

My point about AI and sentience is: it should be possible to fabricate, but very hard to do, requiring a massive increase in computing power and perhaps a new shift in computer technology, perhaps modeling the human brain in a lot more detail.

But if it's possible to create sentient AI actors, is it ethical to kill or torture them in a game?
An actor who has been given the ultimate problem-solving and learning capabilities, who could be human if they were in a body...
Actors who actually run and are scared by the player holding a machine gun... who don't want to die...

Perhaps unrealistic at the moment but food for the debate?

29-08-2007, 04:25 PM
You know, after writing my last post it gave me a thought I can put into this topic: what we take for granted. In this case, the smaller life forms such as bugs, fish, and other small creatures.

No, they do not have the kind of intelligence we have, but on some level you have to admit that modeling them would put us much farther along than we are now, as far as artificial sentience goes.

I recall somewhere that a person tried this method to make an AI that was the equivalent of a cat. It used a physical robot model with tons and tons of transistors and cost the guy a pretty penny, but the project failed. :P BUT it showed that the human, or 'fully' sentient, mind is way too far out of reach, so we should start lower.

There have been successes with simulated bees and cockroaches, so we have an idea of where we can start poking around, or at least where we are now.

I think, and this is just me: if we hack away at the smaller issues, isolate them, and focus on their makeup individually, we can eventually assemble the whole picture and figure the whole thing out. Only to find some other great mystery, but that's for later. ;)