Rapid adaptation requires a capacity for prevision. The motives of adaptation are the vital needs of the organism. Artificial consciousness also develops from these features. The purpose of this work is a constructive search for an algorithm of consciousness. The etude "Self-learning butterfly" demonstrates the basics of consciousness at work.

Mechanisms of consciousness

Eugene Kornienko

This work was essentially done in 1997. I wanted to understand how consciousness works using only constructive means. That is, an idea is noteworthy only if it is supported by a detailed algorithm that could work today, not in an uncertain future.

As a result of studying and rejecting numerous dead-end ideas, I was also forced to give up such accepted tools as neural nets and semantic analysis. The initial features of consciousness in a system may develop without a neural net. Language arises only at late stages of the development of consciousness. At the same time, I consider these directions technically very important. After an active system capable of learning and accumulating knowledge is created, its perception of the world may be considerably improved by supplying it with pattern recognition based on neural nets. Its intelligence may be increased by teaching it with proper semantic structures carrying knowledge. In my algorithm of consciousness, structures resembling neural nets, and capacious commands similar to words, emerge and develop.

Consciousness arises in animals as one of the means that improve their adaptation to the environment. Rapid (in comparison with the lifetime) adaptation needs a capacity for prevision. The motives of adaptation are the biological vital needs of the organism. An artificial system that possesses such features may acquire consciousness, too.

In this article I discuss consciousness in a "functional" style. Readers with a trained systems mindset may see in my story an interaction of objects that can be programmed. The algorithm doesn't need many resources, so I think that an animal-like mind might be implemented today. That is why I have placed the article "Dangerously Intelligent" on this site.

I am not sufficiently familiar with the state of the art in artificial intelligence research. I do not claim novelty, and I apologize in advance if I sometimes repeat known ideas without proper references.

The algorithm of consciousness that I demonstrate in the etude "Butterfly and flower" remembers, structures and uses its experience and tries to satisfy the butterfly's wish. The algorithm may be embedded into an ephemeral computer entity or into a toy robot. But the level of consciousness that is possible without special teaching looks like the intellect of a herring. As a result of teaching (for instance, by games) this level might be compared with the intellect of a mouse or even a cat. I paid main attention to "pure" self-learning consciousness, and at present I have no means for interactive teaching of my artificial entity. In my algorithm the brain and the system of interaction with the ambient world are equally important. Before I write about the algorithm, I plan to prepare a short text about sensors and effectors and "associative channels".


What is consciousness?

How is consciousness made? Which processes, mechanisms and interacting objects do consciousness and self-consciousness need? What is needed to make a real consciousness, not merely a model of one?

Often the word consciousness is used as a feature of an entity, which may "lose consciousness" for a short time. The word mind means the principal capacity to be conscious, as in "the human mind". There are other interpretations as well, but I will not dwell on the words yet. Let's pay attention only to the questions "why is consciousness possible, and how does it work?"

It is unknown how to prove that a human does think.

It is not measurements and logical corollaries but our experience and our own consciousness that support the assurance that humans do think. That is why it is difficult for science to approach a deep study of consciousness. We can study the brain, neurons, languages and behavior, but not consciousness per se.

We observe intelligent behavior, not intellect.

Perhaps reliable proof of consciousness is impossible. So let us judge someone's consciousness subjectively, by its behavior. In this way we may accept some behavior as conscious and later discover that this was a mistake. But we have no more reliable way. Let's not waste time searching for a better one until its absence blocks our investigation.

The substance of the subject is subjective.

Man evaluates himself with too much bias. Although I suppose that I can draw useful conclusions about the nature of consciousness from human behavior, I am also sure that these conclusions would not be convincing (to humans). So let's consider only the behavior of other animals.

The behavior of an animal is controlled by its nervous system.

Normally, the larger an animal's brain, the more intelligent its behavior. Nature invented the nervous system, like the other vital organs and systems of the organism, so that the species could survive. The development of the nervous system and the growth of a centralized brain also occurred as a result of natural selection. But there are forms of life, say plants, which survive without a nervous system.

Plants don't live. They survive.

Let's compare the individual behavior of a plant and an animal. What in their behavior helps their species survive? The very word "behavior" does not fit a plant. Plants spend their lifetime without noticeably revealing any individual properties. The survival of a single plant consists in its ability to endure the unfavorable conditions that may occur during its lifetime. The adaptation of the plant's species to a varying environment occurs because plants that cannot bear hard conditions do not survive and do not pass their unlucky features to descendants.

Animals can improve their living conditions.

Animals (say, cats and dogs) behave in a completely different manner. They actively try to cancel the difficult situation. Exactly this I call "behavior". Behavior is a fairly rapid response to a varying environment. If hard conditions persist for a long time, a cat tries to adapt: it searches for proper behavior in the new circumstances. But when all its searches prove futile, the cat is left to endure and hope like a plant.

The quite complex behavior of animals reveals their conditioned and inborn reflexes. This is true both for simple animals like a snail and for the smartest ones like a dog. Unlike plants, animals that have a nervous system have the capacity to acquire conditioned, genetically non-programmed reflexes. They can learn.

Animals are capable of finding new profitable (that is, intelligent?) behavior.

Sometimes conditioned-reflex behavior is so "intelligent" that some people, like me, accept it as intelligent behavior without the quotes. Whether this behavior is a manifestation of "real" consciousness or only a sophisticated adaptation of the organism to a varying world is unknown, because we judge consciousness only by behavior and we have no constructive "indicator of mind". But what, if not adaptation, is human behavior when humans change the world to fit their needs?

So, judging mind by behavior, we may accept two important facts.

Every assertion about mind is subjective, so someone may judge that some humans behave unconsciously.

There is a subjective scale of intelligence. A snail, a cat and a human are arranged on it in order of increasing intelligence.

Maybe mind is absent at the simplest level of the nervous system. Perhaps it arises only in rather developed systems. But since we don't clearly know what mind is, and we don't know the mechanism of such arising, let's allow that all animals with a nervous system are conscious. In particular, the simplest entities possess "zero" or "negligible" intelligence. In this way we may compare the behaviors of different animals to find out what in their behavior seems to be intelligent. Apparently intelligence is not an ability to sleep, eat and breed, because plants have such properties, but they don't have behavior, hence they don't have mind.

I like such definitions of consciousness as the "ability to achieve a goal", "to solve problems" or "to make decisions". But they aren't constructive because they don't rest unambiguously on behavior. Moreover, "goal" and "decision" are themselves defined through consciousness. The ability to communicate, and the forms such communication takes, are closer to behavior. But it is difficult to distinguish communication from physical interaction such as the carrying of pollen.

I think that a good, noticeable sign of intelligence is the ability of active adaptation to a varying environment, that is, the ability to self-learn from one's own experience.

What is the difference between consciousness and self-learning?

Consciousness is an intrinsic property - the creative motor of a self-learning organism.

This definition is quite constructive. You may use any means to make the "creative motor". Invent and build a machine that is capable of self-learning - and the machine will have acquired consciousness.

Sometimes I use the wider notion "adaptation" instead of the word "self-learning". If an entity finds new behavior in new conditions on its own, with nobody teaching it this behavior, then the entity has a capacity for adaptation (for self-learning). The invention of new behavior is a sign of creativity, and creativity is a sign of consciousness.

Homo armed.

A man supplied with modern technological means (books, cars, weapons) can survive in a wider range of environments. He is better adapted to the world than a human as a merely biological entity. The definition of consciousness leads to the conclusion that the consciousness of civilized humanity continues to develop, while the consciousness of the "biological" human may have bounds that have already been reached, since we must learn intensively throughout our lives.

Intelligence is proportional to the capacity for adaptation.

The great potential of adaptation is well seen in a thought experiment. Suppose an entity has the maximum capacity for adaptation: it is able to adapt to each and every demand of the ambient world. Let the entity be forced to adapt to human culture, and it learns to play chess, to design space rockets, to create fine poems and music. This super-adaptive entity learned to do these highly intellectual things on its own. Isn't it intelligent and conscious?!

So any ability of an entity to self-learn is a sign of its consciousness. Everybody has the right to judge subjectively how far the entity's consciousness has developed.

Awareness of self

Awareness of self is a commonly accepted sign of consciousness. However, it is only a part of awareness of the environment. We perceive the ambient world as many different qualities that reflect different features of natural phenomena registered by our senses. We are aware not of our consciousness but of our perceptions of the object world and of our thoughts. These thoughts are represented by images of the object world, that is, by images of our senses. The assertion of one's own consciousness follows from watching one's own behavior. So the problem of self-awareness reduces to the problem of awareness of one's own perceptions.

The awareness of one's own perceptions is supported by the same inner mechanism of consciousness that is the motor of self-learning and creativity. So consciousness per se is neither a brain nor a behavior. It is a specific "mechanism" of information processing.

Consciousness is a process.

We may keep the information and stop the processing. Then consciousness vanishes.

It is important for awareness that the creative mechanism of consciousness produces optimal behavior of the organs. The behavior of the brain is its interaction with other organs. The behavior of a hand is its interaction with the physical world and with the brain. If a rather perfect, "non-improvable" behavior is found, then awareness of it also vanishes: it is replaced by automatic control. That is why well-trained behavior (such as in a musical performance) becomes automatic and does not divert awareness from musical creativity.

Senses that accompany automatic behavior may also be automatic, that is, the senses may be "non-perceivable". Normally we don't notice the force of several tens of kilograms that acts on our feet while we are walking, yet in other circumstances we perceive a slight touch.

Consciousness is not a material thing.

The question "where does consciousness reside" doesn't have sense. Due to its "non-material" nature a "motor of consciousness" doesn't have to be inside a body geometrically. Needed informational processing may be made by a local neural node or by a brain or by a remote processor. Consciousness doesn't place within these organs. For instance you see a butterfly on the screen. Processing of the butterfly's senses occurs within the computer, but the butterfly has not got a "body" at all. It follows from this example that it's wrong to attribute a function of consciousness to a brain. Brain is only one of organs, which supports conscious behavior of a living entity.

Awareness is necessary only where automation is impossible.

A living brain, as an informational machine, constantly interacts with the object world, and only a small part of its capacity the brain uses for conscious creativity. So the mechanism of consciousness is a non-stop process that works even while we are asleep and aware of nothing. Awareness of self is a process that occurs in consciousness simultaneously with other processes, such as control of the body and of its behavior towards the environment. Awareness of self is a process of creative (non-automatic) perception of the ambient world. This perception cannot be totally automated, because a conscious entity has to be ready for sudden, unpredictable changes in the environment. It is necessary to keep attention and to possess a "reserve of intelligence" (that is, awareness and a capacity for conclusions) in order to change one's own behavior properly.

Most conscious processes are the automatic control of the body's organs. We are not aware of them. I allow that some months before birth, while the future baby's brain and organs were learning to cooperate with each other, the baby was aware of this creative, non-automated process. The mother's organism helped the learning and protected her baby from its too-rich creativity.

The awareness of self and the appreciation of self as a separate entity are not the basis of consciousness but a perceivable, most creative side of its work, one that cannot become automatic "specific" behavior. That is why we do not observe another person's consciousness directly; we only observe the person's behavior. My own consciousness, however, is "directly perceivable" to me. Though automatic processes are the main work of consciousness, they are subconscious, so they do not divert our attention from the creative work of consciousness, however tiny it is.

The criterion of understanding the self-learning mechanism is a working self-learning algorithm.

If we design the algorithm (technology) of self-learning (universal adaptation) and embody the algorithm in a synthetic entity, then we provide a way for the entity to be conscious and self-aware.

Life in community

An isolated consciousness cannot be observed. So the simulation of a self-learning system is only possible together with the simulation of its organs of sense, which are transducers for interaction with a synthetic or real world. Objects (things) in the world may have no mind; they perform pre-programmed actions. Some objects (humans and robots) may possess consciousness.

For us only humans are intelligent.

Suppose artificial life has been created, and we check the level of intelligence of an artificial entity. If the entity has developed among strange entities, not among humans, then it has an absolutely non-human world of interests. To our taste the entity will be neither the sage nor the liar of a Turing test. We will see an animal, a machine or a program, and not a bit of a human.

The nude philosopher

The absence of the pressure of human culture does the same thing to a human. Imagine a dark infinite space without gravity, filled with clear air at normal conditions. A nude philosopher hangs in the weightlessness. For simplicity, he is never hungry. How long will he remain wise? ... Soon he will lose his mind.

This thought experiment also shows that life, as well as mind, needs constant dynamic interaction with the environment, and that intelligent life is only possible in a specific cultural medium - for instance, among humans. For us to recognize intelligence in an entity, the entity must adapt to human culture. If a parrot could pass the Turing test, we would undoubtedly accept that it possesses consciousness. But if the same or an even smarter parrot had never met people, it could not pass the test. It is a test for a human mind.


Self-learning, as a sign of consciousness, is linked to survival because an entity that is able to learn is able to foresee what may occur in the next few seconds.

Capacity for learning

In nature the capacity for rapid learning or adaptation ensures the survival of a species in competition with other similar species. Self-learning, as a sign of consciousness, is linked to survival because an entity that can learn is also able to foresee something important that may occur in the next few seconds. The entity foresees the results of its own actions, too. For instance, a cockroach knows that it is running away from a danger.

Basic test of the capacity for learning

The test is meant to be applied to the most primitive entities, like a snail or a hydra. If the test is passed, then the entity is able to foresee, to learn and to self-learn. Hence it possesses at least a rudimentary consciousness.

Preparation.
First we watch the entity in its natural environment. We need to find three very slight effects A, B and C on the entity that lead it to respond in three well-distinguishable ways.

Habituation to the sequence AB.
In natural conditions, when the entity is able to perceive, we first apply effect A. After the entity recognizes the event, we apply effect B. As a result, after many repetitions of this exercise, and provided we do not essentially disturb the entity's normal life, it must show its ability to foresee: it responds to A as if it were B, even when B is not applied.

Habituation to AC.
In the same way, by many repetitions, we try to accustom the entity to the event A followed by C. As a result, its response to A must become close to its natural response to C.

Conclusion.
If an entity responds to event A in one quite repeatable manner, and after re-learning in the same conditions it responds to the same event A in another manner, then the entity possesses a capacity for learning. Notice that a negative result of the test does not prove the absence of this capacity: it may simply be difficult to find good, noticeable testing effects.

The test displays an elementary act of learning, a "quantum of learning". Aplysia, which contains about 20,000 neurons, passes this test of the capacity for learning. I haven't heard that a hydra can learn. It would be interesting to determine the minimal level of neural organization at which learning ability is already detectable.

In this test the researcher does not use encouraging stimuli.

In the test the stimulus for finding new behavior is the internal proper (good) state of the organism. Events B and C are not stimuli; they are objective external effects that must be forecast by the entity as a result of the learning. The found behavior must preserve the proper inner (subjective) state of the entity in spite of these effects. In ordinary teaching a teacher might choose a good stimulus to accelerate the learning. But we only check the fact of learning, not its efficiency.

In fact, the motives of the entity's behavior are hidden from the investigator.

We theoretically reconstruct the motives by studying the entity's behavior. It is possible to make a machine that passes the test but can do nothing else. Such a machine executes behavior that was programmed by its designer. In particular, many AI projects intend to simulate human behavior.

Aplysia invents proper behavior on its own. The mechanism of this invention seems to be simple - Aplysia selects the behavior that best preserves the comfortable conditions of its life. The visible results of this behavior are in principle indistinguishable from the behavior of the machine. The primitive entity really is a "living machine" with a small armory of genetically programmed behaviors that is enough for its survival.

So the behavior of an entity that possesses the simplest level of consciousness looks like the mechanical execution of a program. The entity's consciousness was creative and its awareness was active while the process of self-programming was not yet finished, while the creative mechanism of consciousness (self-learning) was trying out and developing new behavior. After many repetitions of a specific sequence of effects, awareness of the sequence vanishes. Well-trained behavior becomes automatic.

Mechanism of free will

If the choice of behavior is not very complex - for example, if an organism has a limited set of possible reactions, or if all the required behaviors are already known - then the creative work of consciousness is insignificant. A strict "if - then - else" choice is zero consciousness, without creativity, free will or awareness. Hence consciousness displays its "free will" in the very process of creatively choosing proper behavior.

Actually, a decision is chosen associatively, depending on the circumstances. The "will" has no technical importance here. We survey the circumstances and become inclined toward a decision. Continuing to think the situation over, we find new details in it, and now the same circumstances lead to another decision. The subjective impression of free will arises because we remember that we might have made one decision or another in the same situation, and we freely made one of them.

So "free will" is a feature of awareness that consists in our ability to do a conscious action while we cogitate or recall another problem.

This mechanism is not yet embedded in the butterfly. It makes no difference between its thoughts and its actions. I have not used in the program the algorithm of "elaboration without execution", which becomes possible when the brain does not need to waste time on the detailed control of sensors and effectors because they work automatically.

Thanks to the capacity for automating its behavior, the brain may develop purely "mental" experience, that is, it may think. Thought is the normal work of the brain when some organs do not obey its control yet behave properly. Previously started automatic processes free the organs from the brain's detailed control. During very active non-automatic behavior the brain constantly switches from thoughts to conscious control, which blocks the associative deliberation of long-term actions. Perhaps a living brain is organized so that, while the organism is asleep, many organs do not react to the brain's fantasy. This allows the brain to play out long-term stories mentally and to fabricate brief associative images for long-term processes. Later these images are used for the conscious "planning" of behavior.

Unconditioned reflexes

Some "true" knowledge and skills are given to entity at born. What is a nature of the initial attainments? How may the entity receive them without learning? During the germination all organs of a germ gain specific form and structure. From the very beginning eye is "tuned" to see; an arm is able to grasp and so forth. These first capabilities may be named unconditional reflexes. Perhaps brain possesses some such reflexes, too. They may be treated as basic vital knowledge. If the knowledge is useful then it's more possible that the entity will survive and give new offspring. Additionally the brain can learn. If the capability is developed well, then the entity will rapidly adapt to environment, so the possibility of its surviving also increases. Generally all organs, that possess nervous system, can learn. But brain is the best pupil among organs.

Normally "unconditional reflexes" are attributed to motor system. A newborn calf can stand. Perhaps brain also possesses born reflexes, but unlike other organs, brain doesn't need such reflexes. Firstly, it doesn't have direct connection to object world, and secondly, it has a capacity for learning whatever the organism needs.

In particular, not all the details of a living butterfly's behavior (while it searches for another butterfly by smell) are genetically pre-designed, because the environment contains more information than any genetic structure does. Such a search requires the cooperation of all the systems of the butterfly's organism, and this coordination is the work of the brain. The brain is forced (and is able) to learn to control the organism's systems so as to satisfy the organism's inborn needs. It is not detailed complex behavior but the vital needs that are inborn, although the behaviors and sensations of separate organs may be inborn too. For instance, the specific smell of another butterfly is attractive without any learning.

Proper initial reflexes are fixed by natural selection. A butterfly searches for another butterfly by smell. If it does not find another butterfly, there will be no descendants. Natural selection "teaches" the butterfly's species. So this initial knowledge carries a strong meaning tested over thousands of years. It supports the subsequent meanings and conclusions that arise in learning and creative work.

The unconditioned reflexes and the vital needs of the organism begin the hierarchy of meanings that develops in a perceiving entity while it accumulates its experience.

Meaning of objects

If a self-learning system initially possesses "zero knowledge", then at the very beginning of its existence the system cannot separate its perceptions (vague senses) into objects. This ability develops during the accumulation of experience.

Let's consider the set of all thinkable objects, the "forms of consciousness". Some of them reflect objects external to consciousness, some are "pure" forms of consciousness, and some are forms of sub-consciousness. The set contains everything that is representable by means of the nervous system.

The nervous system (including the CNS) is the whole system of the living organism that processes informational streams to support the work of consciousness. Different forms of consciousness are processed by the same means. A neuron cannot know what common process it is involved in. So for the nervous system there is no difference between non-conscious and conscious forms. But for us the difference exists. Why are we aware of some "thinkable objects" and not (and cannot be) aware of others? Say, I am aware of what I see, but I am not aware of which hormone needs to be produced within my organism. Yet both informational processes are equal at the neuron level and are processed by the same nervous system.

We are aware of objects that are perceived non-automatically and that are objectively "representable", that is, they have a strong link to our senses. In other words, we can be aware of an object that is expressed in sounds, colors, smells, heat and so forth. On the contrary, if an object is not representable by our senses, we cannot be aware of it.

Distinguishing the forms of consciousness into those "representable" and "non-representable" through the senses leads to the conclusion that every entity which possesses associative memory and non-automatic perception (ready for unpredictable events) is aware of what it senses. Naturally, the level of this awareness depends on the entity's level of consciousness.

Every thinkable object is associatively linked to other objects. All of an object's links together constitute its meaning. The meaning defines the object's place among other objects and the manner in which the object is used in the nervous informational process. But if an object has no direct or sufficiently strong links to our senses, it lies outside our awareness. The object may take part in the neural informational process just the same and may be meaningful for the brain, but for us it is absent as a conscious meaning. If one may speak of "subconscious meanings", they are formed by the same mechanism of consciousness; they are not purely objective features of natural objects.

A thinkable object that is not linked to other such objects has no meaning at all. It is not perceived as an object. It is cognized neither consciously nor subconsciously. In particular, it cannot be used, because it is impossible to associate it with anything. Such an unlinked object does not exist for the nervous informational process. Likewise, if a thinkable object somehow loses its associative links to other objects, it vanishes as a form of consciousness: it is finally forgotten.

Therefore the set of all thinkable objects together with their links is merely the links themselves. Each object, as a node of the associative net of meanings, has no structure or content beyond its links. My butterfly does not store images of the "objects" it perceives. It only stores the dynamic flows of its senses and the associative links between them.

If an entity initially possesses "zero knowledge" and has no genetically given meanings (needs, wishes, reflexes), then no new meaning can emerge. There is nothing for it to grow from.

But if the entity has inborn needs, then it can grow its tree of meanings while it accumulates experience, which "weaves" environmental information into the existing net of meanings and develops it. The better developed the means of interaction with the ambient world, the wider the range of objective representability, and the higher the awareness of the world.

Self-learning

The speed of learning is a simple and constructive measure of intellect, applicable as a test of the conscious level of a living entity or a machine. The capacities of an entity are better revealed by its self-learning than by teaching (learning with a teacher), which depends on the activity of the teacher and on the way of teaching. It is difficult to eliminate the influence of a teacher when we compare the conscious abilities of, say, an octopus and a crow. Self-learning occurs when an entity meets conditions in which its behavior depends on its feeling. (This is an important condition.) Then the entity finds proper (profitable) behavior that ensures its good feeling.

How does self-learning occur?

The key difference between teaching and self-learning lies in the way the entity's behavior is acquired. In teaching, a teacher shows or directly stimulates the proper behaviors and marks the elements of behavior that are intermediate stages of the whole task. When teaching pupils, robots and artificial neural nets, we apply intermediate marks for every success or failure. In this process only the teacher is able to estimate the efficiency of each stage of the learning.

In self-learning the entity invents its behavior on its own. Here a stimulus (food, the removal of danger) is given only for some successful final result of a behavior that is "proper as a whole". There is a logical causal link between the stimulus and the behavior, but the link is usually far from obvious. One of the main jobs of the internal mechanism of consciousness is to resolve the problem "What is the stimulus given for?"

I judge the quality of my AI algorithms by their self-learning rate on some standard tasks. It would be very interesting to compare different algorithms with animals in similar tests. Where does my AI algorithm stand - between a hydra and a snail, or between an ant and a butterfly?

Self-learning butterfly

For now I have a simple demonstration of adaptive AI that so far looks impressive only to those who understand it. :) It is a Windows program, tmpbrain.zip

Screenshot of the program with a blue butterfly and a red flower

The program displays a small world where a butterfly and a flower live. The butterfly is as simple as I could possibly imagine. It has senses (organs of sense), a motor system (organs of action) and a brain.

The butterfly's organs

A sense of flower, or wish. Wish=0 means that the wish is satisfied; the butterfly senses this with a special organ when it touches the flower. Wish=1 means that the butterfly has an unsatisfied wish.

Vision determines one of four directions toward the flower. Through vision the butterfly perceives one of four qualities, which later become associated with directions toward the flower. The numbers of the directions are four separate (mutually independent) qualities for the butterfly's brain. Initially they have no meaning.

Motion. The butterfly's motor system interprets its input, which is produced by the brain, as a command to move in one of four directions. Only the motor system "knows" what these directions are and how to move.

The brain receives signals from the organs of sense (sensors) and transmits signals to the motor system. The only signal that has a predetermined meaning for the brain is the "wish". The brain tries to generate its output signals so as to satisfy the wish.
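
My reading of this division of labor can be sketched as follows. The class names, the coordinate grid and the direction table are my own illustration; the real tmpbrain program certainly differs in its details.

    import random

    DIRECTIONS = {0: (0, -1), 1: (1, 0), 2: (0, 1), 3: (-1, 0)}  # only Motion "knows" these

    class Vision:
        """Reports one of four opaque qualities; for the brain they initially mean nothing."""
        def sense(self, butterfly, flower):
            dx, dy = flower[0] - butterfly[0], flower[1] - butterfly[1]
            if abs(dx) > abs(dy):
                return 1 if dx > 0 else 3
            return 2 if dy > 0 else 0

    class Motion:
        """Interprets the brain's output as a step in one of four directions."""
        def act(self, position, command):
            dx, dy = DIRECTIONS[command % 4]
            return (position[0] + dx, position[1] + dy)

    class Brain:
        """Knows nothing about space or flowers; it only tries to make the wish become 0."""
        def decide(self, quality, wish):
            return random.randrange(4)          # an untrained brain can only guess

    def wish(butterfly, flower):
        return 0 if butterfly == flower else 1  # satisfied only at the moment of touch

    # one step of the butterfly's life
    butterfly, flower = (0, 0), (3, 2)
    eye, legs, brain = Vision(), Motion(), Brain()
    quality = eye.sense(butterfly, flower)
    command = brain.decide(quality, wish(butterfly, flower))
    butterfly = legs.act(butterfly, command)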

The butterfly's motion is quite slow and chaotic at first. But the butterfly's spatial orientation emerges quite rapidly. Once it has some experience, the butterfly moves not randomly but in almost the proper direction. It decides on this direction on the basis of its previous experience.

I think that at first it is difficult to appreciate the high degree of abstraction implemented in my algorithm. A living brain and living organs of sense are also rather abstract, and they are able to live in any world. This is what gives an organism its intellectual power. Initially no goal, wish or attractor is innate to the brain. This informational machine is only capable of being creative, that is, of searching for "proper behavior".

In my model the senses, the behavior and the wish are all equally blind informational streams for the inexperienced brain. When I say that the brain searches for proper behavior, it means that the brain tries to change the data in all informational streams so as to "foresee" satisfaction of the wish. The brain does not distinguish "objective" data from "subjective" data created by the brain itself. But the host organism uses some of the streams as "behavior", while other informational streams are "senses" that do not obey the brain's commands. The abstract brain does not even know that there is a difference between the streams, and that from the organism's perspective some of them are incoming and others outgoing.

The "wish" is biologically plausible. I think that live brain receives information as from nervous fibers as from general bio-chemical state of medium. The latter is simulated by the signal "Wish". The wish depends on biochemical (and emotional) condition (self-feeling) of the organism. Such a "chemical wish" has a predefined meaning for brain. Algorithmically this meaning consists in better memorizing when the wish is satisfied. Other signals have pure informational sense. The butterfly feels badly (wish is unsatisfied). Using previous experience, it chooses (that is, it recalls) such behavior, which results to well self-feeling (wish is satisfied).

When the butterfly first runs, it has no experience. Its vision (not its brain) sees the flower because its "inborn" function is to see. Its motor system is made so that it can move the butterfly's "body" in one of four directions. The brain does not "understand" that the butterfly is feeling bad, because the butterfly has not yet felt the touch of the flower; hence the "sense of flower" has not yet emitted the "hormones of pleasure" which help memorizing.

The association of a stimulus with a behavior makes a conditioned reflex.

The brain (the only creative motor in the butterfly) has no goal, intention or drive. It does not understand that the butterfly sees a flower, or what that means. It does not understand that it can move, or that it is possible to feel well. The flower is not an attractor for the inexperienced entity. The flower begins to attract the butterfly's attention only as its experience increases. In time, after tens of accidental touches of the flower, a conditioned reflex arises in the butterfly. It sees the flower and understands what it must do to feel well. Now the butterfly possesses a goal, which is the wish plus the way to satisfy the wish.
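
Algorithmically, "better memorizing when the wish is satisfied" can be expressed by strengthening the link between what was just seen and what was just done only at the moment the wish becomes 0. The sketch below is a simplified guess at how such a conditioned reflex could form, not the actual code of the demo.

    import random
    from collections import defaultdict

    class ReflexBrain:
        """Associates a visual quality with the move that preceded a satisfied wish."""

        def __init__(self):
            self.strength = defaultdict(float)   # (quality, move) -> association strength
            self.last = None                     # (quality, move) of the previous step

        def decide(self, quality):
            scores = [self.strength[(quality, move)] for move in range(4)]
            if max(scores) > 0:
                move = scores.index(max(scores))   # recall the most successful move
            else:
                move = random.randrange(4)         # no experience yet: a random trial
            self.last = (quality, move)
            return move

        def feel(self, wish):
            # "hormones of pleasure": memorize well only when the wish is satisfied
            if wish == 0 and self.last is not None:
                self.strength[self.last] += 1.0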

I think that the complexity and richness of a natural entity's goals arises from the variety of ambient conditions that the entity meets while it periodically attains its quite simple wishes. The richer the senses and wishes, the more intelligent the behavior. I think a living butterfly has more developed sensors but a weaker brain than my cyber butterfly.


Associative recall arranges experience in the form of temporal chains of causes and effects. The near future is the future of a recognized past event.

Mechanism for prevision

Forecasting is only possible by "recalling".

Most natural phenomena may be described by physical models with differential equations. "What will occur in the next few seconds" is calculated from the equations. But the brain, like a separate neuron, cannot use predefined equations for prevision, because biologically identical neurons must foresee events of different physical natures. The only means the neurons can apply is "recalling their former sensations" - a neuron produces a nervous signal that depends on its current dynamic condition, and this neuron in this condition always yields almost the same signal. It looks like a "solution" of the equations by table lookup.

So the basic natural mechanism of self-learning uses "induction", that is, an unintentional revealing of repetitions by comparing current sensations with the images of sensations memorized in associative memory. This associative cognition also leads to the arrangement of repeatable experience in the form of causally linked chains of events. Prevision occurs because the entity recognizes not the "current state" but the current process in its development. The "near future" is associatively linked to the current state because both the state and its future have their causally linked analogues in the past. The near future is the future of a recognized past event.
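
Such prediction by recall amounts to matching the recent flow of sensations against memorized chains and reading off what followed the best match in the past. A toy sketch, assuming the experience is simply a list of sensation values:

    def predict_next(history, window=2):
        """Forecast the next sensation by recalling what followed the most recent
        occurrence of the same short chain of sensations (a 'process in development')."""
        if len(history) <= window:
            return None
        recent = history[-window:]
        for start in range(len(history) - window - 1, -1, -1):
            if history[start:start + window] == recent:
                return history[start + window]   # the future of the recognized past event
        return None                              # nothing similar was ever memorized

    # the chain 'cold, wind' was followed by 'rain' before, so 'rain' is foreseen again
    print(predict_next(["cold", "wind", "rain", "sun", "cold", "wind"]))   # prints: rain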

A neuron must be creative and must work successfully in a group.

It is the nervous system that provides the capacity for learning. The number of neurons in the nervous systems of different beings differs by a factor of millions. Hence we are forced to allow that any structure of neurons, even a lone neuron, can learn, though it is almost impossible to detect this capacity in a single neuron. Additionally, in the nervous system and in the brain the neurons are organized so that the adaptive potential increases as the number of neurons increases. These are cues for arranging an artificial neural structure.

Definition of creativity

Creative work is the solution of a problem whose way of solution is unknown. Certainly, the part of a problem that we know how to solve may be solved without creativity. But the part of the problem that we do not even know how to approach can only be solved by a random guess.

Creativity is the capacity to obtain a new result without any form of teaching, that is, by self-learning. Working on a creative problem - a problem with no known means of solution - a creator makes various (stochastic and rational) trials in the vicinity of what he knows. Because of his high interest in the problem, he is able (more than others) to guess that one of his tries brings him closer to the sought solution. Then the creator investigates the newly revealed possibilities, and so forth. Naturally, the creator can be mistaken. This way of trial and error may lead to a dead end. Not everyone catches luck.

In this case the teacher is consciousness itself, which evaluates and selects the approaches to a solution that are worth further investigation. Such self-controlled self-learning is not a chaotic rambling; it looks like a search for a maximum by orderly ascent in random directions.

Basic self-learning looks like the natural selection of behaviors. Conscious creativity looks like the artificial selection of ideas.

The primitive random search

Random search finds new behavior near familiar behavior.

Random search is extremely inefficient. The maximum amount of information that can be guessed randomly is a few bits. Ten bits, that is, a thousand equally likely possibilities, is too many even for a human. That is why random search must be organized as a search for a few new bits in the vicinity of well-known information. It is constructively important that the supposed new information has a very small content. After the "new bits" are successfully found, they are consistently embedded into existing knowledge. Further search continues by the same tiny steps.

To estimate the rate of random self-learning, let's consider a "learning" coding table. Let the table size be N entries, that is, the table must find responses to N input signals. The "ambient world" supplies N different input signals to the table. Each signal demands an appropriate response, which is a number from 1 to N. The input signals arrive at random. The response is also random (1 to N) until the correct one is found. A correct response is memorized when it is fixed by a positive mark.

Under these conditions the probability of a random correct answer equals 1/N. The main level of learning (when each input mostly gets the right output) will be reached in about N^2 steps. If the input is 64 bits wide, then N^2 = 2^128 ≈ 3·10^38. Self-learning always takes a long time, but here it is simply impossible.
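
The N^2 estimate is easy to check empirically for small tables. Below is a sketch of such a "learning" coding table; the choice to count only the guesses made for not-yet-learned inputs is mine.

    import random

    def steps_to_learn(n):
        """Count the random guesses needed until every one of the n inputs has a
        memorized correct response (the 'main level of learning')."""
        code = {signal: random.randrange(n) for signal in range(n)}   # the world's hidden code
        table = {}                                                    # memorized responses
        guesses = 0
        while len(table) < n:
            signal = random.randrange(n)        # the ambient world supplies a random input
            if signal in table:
                continue                        # already learned: answered from memory
            guesses += 1
            if random.randrange(n) == code[signal]:   # a random guess succeeds with probability 1/n
                table[signal] = code[signal]          # fixed by the positive mark
        return guesses

    for n in (8, 16, 32, 64):
        average = sum(steps_to_learn(n) for _ in range(50)) / 50
        print(f"N = {n:3d}   average guesses = {average:7.0f}   N^2 = {n * n}")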

That is why such "universal" self-learning is technically futile. Instead of the random learning it is used a teaching of artificial systems by a teacher (supervisor). A teacher may lead the system "by hand" along a proper path. In this way the system cannot learn how to find the right path. But it knows which path is right. If a teacher also shows proper behavior (the system is forced to behave properly while it is "taught"), and the teacher awards the behavior, then rate of learning increases significantly.

However, in life self-learning is commonplace, especially in primitive animals. Hence it is a common method by which basic neural structures acquire knowledge. A brain cannot know the world before it is born. Unlike the 'physical' organs of sense, the 'informational' brain is cut off from our physical world. Nevertheless a cat will survive even if it has never seen a mouse. The brain is abstract and universal enough to habituate to any environment. How does the brain learn? It learns the way kids do - they try different behaviors near a familiar one. Because of its universality, the brain is barred from using such concrete informational tools as "equations", "rules" and "semantic analysis".

Search for behavior

A living adaptive system must always be learning. It may be sure that it knows the proper behavior, but it must constantly confirm that the behavior really is proper, because the ambient world may change. Such a non-stop self-learning process may be subconscious. This very constant self-learning is the normal work of consciousness as a biological adaptive mechanism.

By controlling the consistent behavior of the vital systems of the organism, the mechanism of consciousness is constantly testing itself. The very survival of the entity (and the survival of its consciousness) depends on the proper work of these systems. The self-testing moderates the creative "flight of fantasy", which could disorder the control system. That is why a person who makes working, rational things is normally more capable of logical reasoning than a person who creates something "non-material". And a lazy person is not reasonable at all. :) The internal worlds of a creator and of a lazy person differ as the internal worlds of a human and an animal do.

In this self-testing process the brain constantly needs marks for its good or bad behavior. If external marks are absent for a long period, the brain loses its capacity for thinking and for controlling the body. This occurs because the mechanism of random search for behavior continues to work, and the brain's behavior, no longer fixed by marks, drifts further and further from the behavior that provided the survival of the organism.

The brain of the nude philosopher gets no marks for its good or bad behavior from the ambient world. So spontaneous changes in the neurons (which are the basic mechanism of creativity) gradually lead to vitally dangerous changes in the brain's behavior. The philosopher loses his mind.

On the contrary, "objective" marks for behavior, that follows constant creative search, reinforces consciousness. The butterfly begins "to understand". It begins to behave correctly in the circumstances where earlier it didn't know what to do or did behave wrongly. Applying the process to more intelligent entity and to some other kind of tasks we might call it a process of accumulating of experience by successful "investigation".

The biological motives of creativity

Not only does a teacher affect a pupil with a definite goal, but the pupil also affects its teacher. The pupil's goal is to get a good mark. If a pupil is not interested in the learning, it is not a pupil but a "database", say a dictionary or a chess program. Why and what should the pupil learn if it has no internal directing drive?

It might seem that in the beginning, while the intellect is not yet developed or a task is too new, learning might occur in the absence of interest. But such an "unconcerned" entity's habituation to a change of surroundings is not unconscious, because consciousness itself controls adaptation. In habituation the entity is forced to acquire new behavior that is useful in the new conditions.

A rat is taught to press a pedal at the sound of a bell. It is not interested in pressing the pedal, but it always has its own internal biological "interest". That interest consists in maintaining its "normal" rat sensation. The sensation is subjective and of value only to this rat. But the sensation was objectively given to the rat by nature as inborn needs.

Habituation, adaptation or self-learning without constant marking (without an interest in the process of learning) looks like natural selection, while teaching looks like artificial selection. A teacher selects and rewards only the proper responses of the pupil. There is a noticeable causal link between the pupil's activity and the rewards or punishments. That is why such intentional teaching works much faster than self-learning. In self-learning the "pupil" must discover on its own these quite difficult causal relations between its behavior and a result beneficial to it.

So the biological stimulus for creativity is an internal vital mark. If a teacher gives the mark, the rate of learning increases. If the mark "is given" by the environment, it may be unreliable and may come too late. The internal mechanism of self-learning must determine the causal link between the entity's activity and such an unreliable but objective mark.

Behavior that satisfies the internal needs of the entity is intelligent (rational). The needs are biologically "programmed". But initially the entity has no rules or cues for how the needs might be satisfied. Since the needs are not mine, I cannot always evaluate whether an entity's behavior is intelligent or not.

Mind in the cyber-butterfly

The intelligence of my butterfly's behavior is understandable to me because I know its internal needs in detail. At first it has no worldview. Its memory holds random data that is utterly unlike the world, so the world does not evoke any reliable associations in the butterfly's mind. The butterfly possesses no notions of logic, space or time; they arise from real experience. While accumulating its experience, it begins to recognize situations that it met earlier, and how it behaved then. It chooses the most successful past experience to repeat. All of the butterfly's possible "ideas" are representable only through images, or fragments of images, observed during the butterfly's life. The butterfly has no means abstracted from its experience to represent its "thoughts".

The butterfly's creativity and self-learning consist in collecting useful experience and in a random search for new behavior against the background of past useful behavior.

The randomness that leads to the development of new behavior may be treated as an unsuccessful attempt to repeat accurately the behavior that the butterfly counts as successful, or as an inexact reproduction of successful former behavior. Either may occur because of random disturbances in memory.
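
In other words, new behavior appears as an imperfect copy of remembered successful behavior. Schematically (a simplified sketch, not the demo's code):

    import random

    def reproduce(successful_moves, disturbance=0.1):
        """Replay remembered successful behavior with occasional random slips.
        The slips are the whole 'creative' search for new behavior."""
        replay = []
        for move in successful_moves:
            if random.random() < disturbance:
                replay.append(random.randrange(4))   # a memory disturbance: a new trial
            else:
                replay.append(move)                  # a faithful repetition of success
        return replay

    print(reproduce([1, 1, 2, 1, 0, 2]))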

The test demo "Butterfly and flower" shows not a sophisticated or very efficient algorithm of consciousness, but a principle possibility of creating proper "rational" behavior from only a vague wish of any nature. In a while you see the butterfly frequently reaches the flower rather than any another definite point of the screen. It means that the butterfly recognizes the flower and drives to it. It has got spatial orientation.

If at the beginning you make the screen large, it is possible that the butterfly will never learn to find the flower for lack of successful experience. But luck may help.

What does the learning rate depend on in the demo? On successful experience. You may help the butterfly learn by placing the flower near it, so that it studies the whole needed range of motions. But note that the butterfly's capability is limited: it may reach maximum skill in a few minutes, after which further learning gives nothing. You may also interfere with it, and then it will never learn.

A living butterfly's brain, like the brain of the cyber butterfly, aims to repeat success. A living butterfly is attracted to a flower by its smell, and the smell is attractive to the butterfly from birth. So I cannot be sure that a living butterfly consciously moves toward the flower. At every moment the unconditioned reflex to the specific smell helps the butterfly's consciousness.

There is no direct "physical" action (like a smell) from the cyber flower on the cyber butterfly. The butterfly does not enjoy seeing the flower. It flies to the flower because, from its experience, it knows that it will feel pleasure when it touches the flower. It is not the smell, nor the look of the flower, that attracts the butterfly, but the knowledge (idea, understanding) that appropriate behavior will bring the pleasure. This is a purely intellectual endeavor, or conditioned reflex. The butterfly consciously desires to touch the flower, because this algorithm does not allow the behavior to become automatic. However, the butterfly is not aware of what it sees - it cannot strive for the flower and "think" about it at the same time - the algorithm does not execute two or more independent learned processes simultaneously.

Absolutely correct moves by the butterfly do not lead to instant success, because it must cover some distance to reach the flower, yet the butterfly has no sense of distance: the "look of the flower" does not differ at different distances. That is why the butterfly experiences uncertainty. The butterfly seems to make "precariously" many attempts to improve its behavior, which as a result comes to look like panic. "He who has seen little cries loudly."

From vague wish to self-awareness

It is possible to construct a system that simulates intelligent behavior but has no internal motive to search for such behavior. Let the effectors strictly execute the brain's commands. If the effectors' behavior does not satisfy the designer's goal, he rewires the inter-neural connections and the input and output channels, and does other tuning of the brain. In neural nets this process is commonly called learning. In such a system neither the brain nor an organ nor a separate neuron possesses creativity or a free choice of behavior. The behavior, as an output signal, is strictly bound to the input data. After learning, the system is an automaton that imitates intelligent (from the human point of view) behavior. A prognosis of sensation is not needed here, though a prognosis might be useful for automating the learning.

The prediction mechanism may instead be designed so that it makes not a precise but a "specific" prognosis. It looks as though the sensors "aim" to feel a definite sensation. For this purpose the learning mechanism - for instance, an algorithm for switching inter-neuronal connections - should be able to synthesize new behavior that achieves these definite sensations. Only such a subjective prognosis, profitable for the system, can make the system creative and conscious.

For the system not to disintegrate into separate "conscious" sub-systems, all means of forecasting and of searching for behavior must drive toward a common goal - providing the "profitable" prognosis by specific sensors. The profitable prognosis is a forecast of a definite positive mark (perception) that was "programmed" into the system by nature or by design. The creative forecasting system obeys this goal instead of making an objective prognosis. The system aims to repeat (to "forecast" rightly) its successful behavior, and it performs a little random search against the background of that behavior.

The main problem of self-learning is that the learning entity does not know what it is being marked (rewarded, punished) for. I suppose that biologically the problem is solved as follows. If an entity feels well - all its needs are satisfied, the state of its organism is normal - then for some time it is able to memorize well what it senses and how it behaves. If the state of the organism worsens, the ability to memorize is poor. That is why a conscious entity better memorizes the behavior that leads to a normal state of its organism.

If a subjectively good feeling is reached too easily, as a result of too simple a behavior, then the organism develops a psychical dependence on that simple behavior, which rapidly turns automatic and thus escapes conscious control. As a result the "conscious" entity loses its free will in choosing behaviors other than the one that leads to the easily achievable euphoria.

The criterion of success is a satisfied wish.

In my butterfly an "analyzer of success" controls this subjective prognostication. Its task is to detect and keep important information (not objective, not strict, but only subjectively important for satisfying a wish) and to try to achieve more success. Despite its strange name, the device performs a simple function: it forbids the butterfly's organs to recall unsuccessful experience. More importantly, this mechanism algorithmically joins the algorithmically independent organs into a whole system with common wishes and concertedly pursued goals. Thanks to this centralization the butterfly perceives itself as a single "I", not as a colony of organs.
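
Despite the grand name, the analyzer can be pictured as a filter that keeps only the episodes ending in success and lets every organ recall only that shared experience. The sketch below is a guess at this function; its names and structure are invented for illustration.

    class SuccessAnalyzer:
        """Keeps only the episodes that ended with a satisfied wish and serves them
        to every organ, so that the organs act as one system with a common goal."""

        def __init__(self):
            self.current = []        # sensations and actions of the ongoing episode
            self.successful = []     # the only experience the organs may recall

        def record(self, organ_name, data):
            self.current.append((organ_name, data))

        def end_episode(self, wish):
            if wish == 0:                        # success: the wish was satisfied
                self.successful.append(self.current)
            self.current = []                    # unsuccessful experience is not kept

        def recall(self, organ_name):
            # each organ recalls only its own part of the commonly successful experience
            return [data for episode in self.successful
                    for name, data in episode if name == organ_name]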

Birth of meaning

The "scientific" definition of information means that whole number of states of a system doesn't carry information, but if some "kinds" of states occur to be impossible (or inevitable) then an information about the system appears. In limit if it's known the system is in a definite state then the information about the system is maximum.

There is a fundamental difference between such an abstract "system" and an informational message. A semantic "message" uses symbols and rules that carry "ascribed" meaning. We ascribe humanly important sense to natural phenomena as if they were symbols: "the useful plant". For us, every work of art, machine or "law of nature" consists of elements that have their symbolic meaning in human culture. We discern signs of rationality in the arrangement of the stars. Likewise, a reader "reads meaning into" a book. Actually, 99% of the informational sense of the book is in the reader's head. The symbols he sees in the book only awaken one association or another in his consciousness.

Semantic information is a superstructure over consciousness. That is why it cannot be the foundation of a (new) conscious system. But scientific information, that is, the emergence of meaning from chaos, is a "physical phenomenon" that allows us to grow a mind from a few initial meanings - "wishes".

What I call self-learning or adaptation is "pure creativity". No rules are known, and naturally there are no symbols for the rules to act upon. Meanwhile, images and then symbols may be invented and applied by a conscious entity while it collects its experience.

The origin of semantics occurs at the sub-conscious level.

Organs (sensors and effectors) that possess their own nervous system and can learn provide the mechanism of this origin. After an arm, under the detailed control of the brain, has learnt a new (almost automatic) motion, a new association for this motion arises in the arm's nervous system. In its creative search the brain may find simplified commands that activate the association which starts the needed motion of the arm. In this way the brain passes from detailed control to "general" control. A short "symbol" appears that carries the information, or meaning, ascribed to it by the nervous system of the arm.
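
The passage from detailed control to a short command can be pictured like this (again a toy illustration with invented names, not the actual mechanism of a nervous system):

    class Arm:
        """An effector with its own memory: repeated detailed control of a motion
        condenses into a single short command that the brain can issue later."""

        def __init__(self):
            self.practice = {}    # tuple of detailed steps -> how many times it was drilled
            self.commands = {}    # short "symbol" -> the learned automatic motion

        def detailed_control(self, steps):
            key = tuple(steps)
            self.practice[key] = self.practice.get(key, 0) + 1
            if self.practice[key] >= 5 and key not in self.commands.values():
                symbol = "m%d" % len(self.commands)    # a new short "symbol" appears
                self.commands[symbol] = key            # its meaning lives in the arm, not the brain
            return steps

        def general_control(self, symbol):
            return list(self.commands[symbol])         # the arm unfolds the motion on its own

    arm = Arm()
    for _ in range(5):
        arm.detailed_control(["bend", "turn", "grasp"])
    print(arm.commands)                  # {'m0': ('bend', 'turn', 'grasp')}
    print(arm.general_control("m0"))     # the brain now issues one symbol instead of three steps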

Conscious semantics is the natural continuation of such spontaneous arrangement of experience.

Some conclusions

An entity is intelligent if it behaves intelligently. Intelligent behavior is behavior that is useful for the entity. The objective criterion of usefulness is the survival of the entity or its species as a result of their "intelligent" behavior. Such a definition of intelligence looks like a definition of rationality. And rational behavior looks like a capacity for rapid adaptation.

The very this "rational" mind, that finds causal-effect links between events, is a feature of every animal and even of a separate neuron! But the events, that are accessible to different animals, are very different as by their informational complexity, as by their physical accessibility. Due to the mind's inner superfluity, a human (and also other high animals) uses it not only for surviving and also for other entertainment.

Consciousness operates not with essences but with their subjective reflections - images and notions. Such abstractions are beyond the brain's capabilities; it is only an informational processor. The brain does not keep images and facts from the surroundings. It keeps its dynamic experience of interaction with the other organs of the body. Likewise, the nervous system of an organ operates with its experience of that organ's interaction with other organs and with the world. Neurons, and the brain as a whole, keep methods of transforming information along the path sensor - brain - effector. They do not keep "direct" data about the source or target of the information. A neuron retains these methods in its internal biochemical structure, which defines what appears at its output for a given dynamics of input "logical" signals. A change in this structure leads both to forgetting and to creativity.

As images are "kept" in consciousness, they and consciousness itself are dynamic objects. They are informational streams, not static "patterns". The mechanism of consciousness functionally unifies abstract brain with concrete organs of senses and actions in a whole intelligent entity that provides an ideal flow of consciousness by its material means.

Normal living conditions improve memorizing, and thereby useful behavior is retained. But if the conditions and the behavior are the usual ones, no new information emerges; there is only a "regeneration" of old information and a repetition of automatic actions. Deterioration of the normal biological state of the organism impairs memorizing. Maybe that is the reason why we remember many good things and few bad ones.

The brain does not live in the object world as the body does, so immediate information about objects is not accessible to it. Transducers (eyes) transform physical effects into an informational stream, which is processed by the brain. At the output there are other transducers (muscles) that transform the informational stream back into objective, physical form. When we see or hear, speak or touch, we act in the object world. If the world changes, we are forced to act in another way. Our brain is able to re-learn how to process the new information. Since such changes in the environment occur quite frequently, the brain turns out to be so valuable an organ that its evolutionary improvement is of vital importance for some species, say humans.

Mind cannot understand everything. But without the subjective and limited mind we cannot think at all. Suppose somebody has created a thinking machine. He knows its construction in detail, and to him the machine seems not to be conscious: merely a known algorithm produces the proper output in response to input signals. Similarly, a neurosurgeon may think that a human is not conscious enough. He sees no mystery in a consciousness that he can affect with a scalpel.

As a result of my investigation I see more and more that although man is the king of the animals, he reigns over thinking animals that are his equals.

1997, Last Update 1999-03-05

Consciousness and logic
Is it possible to prove logically that an entity possesses consciousness and feelings?