Sunday 25 June 2017

Human uniqueness - what the Robot would find difficult to achieve

You are unique - that is your strength

There is only one you

Machine-learning robots can study humans and learn from them, but can they become unique, or adopt traits of uniqueness?

Another way of looking at this is Cybernetics - a method of control. What will a Robot assess as 'normal' behaviour? When, or indeed will it, decide that a behaviour trait is not 'normal' and decline to copy it?

These questions are at the crux of developing humanoid robots.

We are all unique

This is what makes us as humans different from each other. Sure, many of us follow set paths of conduct, but just how much is conditioned and how much 'burned in' from the start?

Although we are all genetically coded, it is the variations in that code that make us unique - not just in appearance but as chemical bipeds, since variations in the coding alter our chemical balance.

'Nature' seems to have got it right in most cases, but perhaps social conditioning is the key to how we turn out.

If we take the American Indian peoples, they had no crime, no prisons, and did without money, taxes and laws. How? Because the people co-existed in groups where none of these constructs was necessary.

Look at the world outside theirs: even with these 'values' it is imperfect and really a failure, in that it aspires to a set of standards it cannot achieve - zero crime, no law-breaking, all tax due paid.

Boundaries and Cybernetics

So, what boundaries do 'normal' humans adhere to that make them 'normal'? What makes seemingly 'ordinary' people resort to things like fetish behaviour, lawlessness, addiction?

Cybernetics - a sort of natural control mechanism - is the key. Essentially it is a check-and-correct method of keeping a situation within agreed parameters.

Say we enact laws: their tenets are a form of 'cybernetic control' in that they set out 'containment boundaries', and adherence to those boundaries is crucial for the law to hold.

Without 'feedback' - that is, data coming back from the object you are interacting with - there is no control over the situation. With no limits you are in a state of anarchy.

It is as simple as hammering in a tack: you stop at the point the tack is flush, because to go on merely damages what you are hammering into. This is a simple 'feedback loop'.

Essentially both we and computers use this logic as a basis of operation, though things can still go wrong.
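As a rough illustration, the tack example can be written as a tiny feedback loop in Python - measure, act, compare against the boundary, stop. The heights, strike depth and tolerance below are invented numbers, not measurements.

# A minimal sketch of the tack-hammering feedback loop described above.
# The sensor reading and strike model are illustrative assumptions only.

def hammer_until_flush(initial_height_mm: float,
                       strike_depth_mm: float = 1.5,
                       tolerance_mm: float = 0.1) -> float:
    """Drive the tack until it is flush, then stop (the feedback boundary)."""
    height = initial_height_mm
    while height > tolerance_mm:                     # feedback: compare measurement to goal
        height = max(0.0, height - strike_depth_mm)  # act: one hammer strike
        print(f"strike -> tack proud by {height:.1f} mm")
    # Stopping here is the 'containment boundary'; further strikes would
    # only damage the surface the tack is driven into.
    return height

hammer_until_flush(6.0)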

What is 'normal'?

'Normal' is difficult to define, but in simple terms everything conforms to certain expected outcomes; once those boundaries are exceeded, damage is done.

The Robot might have to be pre-programmed to observe certain cybernetic conditions, but it relies on cold, programmed logic.

To be like Humans it has to have consciousness and empathy. The level of each has to be set, and the algorithm regulated back to certain parameters of what is 'acceptable'.

Essentially, it is the Human that sets the parameters for the Robot and so sets its behaviour - its cybernetic boundaries of what is acceptable and what is not.

So, what the Robot adheres to is dictated by us; the question is what we direct it to conform to, and we have yet to see whether the Robot will decide its own path - something that may end up being against our values.
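A hypothetical sketch of what Human-set boundaries might look like in code follows. The parameter names, limits and actions are purely illustrative assumptions, not any real Robot's API; the point is simply that the Robot refuses any action that falls outside the limits we defined for it.

# Human-set cybernetic boundaries: the Robot only carries out actions that
# fall inside parameters we have configured in advance (all values invented).

ACCEPTABLE_LIMITS = {
    "grip_force_newtons": 20.0,    # do not squeeze harder than this
    "speech_volume_db": 70.0,      # do not shout
    "approach_distance_m": 0.5,    # respect personal space
}

def within_boundaries(action: dict) -> bool:
    """Return True only if every measured value stays inside its human-set limit."""
    return all(action.get(name, 0.0) <= limit
               for name, limit in ACCEPTABLE_LIMITS.items())

proposed = {"grip_force_newtons": 35.0, "speech_volume_db": 60.0}
if within_boundaries(proposed):
    print("action permitted")
else:
    print("action refused: outside the boundaries we set")   # 35 N exceeds 20 N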

Consideration

The question is: when does the Robot stop before creating a harmful situation? If the Robot cannot consider Human feelings, its usefulness to us at a social level is limited. If it cannot compute when 'enough is enough', or what 'reasonable boundaries' are, then harm might ensue.

This applies across the board to almost any activity - dispensing medication, work, sex; the situations are likely endless.

As Humans, we find these boundaries recognisable, yet we do not always adhere to them ourselves, and by continuing an action we can fail to stop a situation becoming unpleasant - with the attendant consequences.

Some individuals go beyond what is 'normally' acceptable, which may be a result of their own egomania, or of their boundaries being different from the accepted norm.

Multi-level awareness

As Humans, we are different from any other creature because our brain is developed for language. Comparing the Chimpanzee brain to a Human brain is a fruitless exercise; we are not the same animal, and even a genetic difference of a few per cent is a universe away in effect.

Extrapolate that over the millions of individual variances, plus the obvious anthropological differences: Humans cannot breed with Chimpanzees, yet are said to be able to with the Almas of Asia, which would therefore be a closer relative species.

Language drives our brain to operate on three planes - Past, Now and Future. Even before we speak, our thinking brain is 'modelling' the scenarios and end-games of what we will say and do; we generally do not appreciate what is going on here because our brain has developed this as a core function.

(The best scenario is then established and becomes actionable; the end result comes down to 'what is acceptable' as the final action, and this varies from Human to Human, as do the outcome and its consequences.)
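That 'model the end-game before acting' idea can be sketched very crudely in Python. The candidate responses and the scores attached to them are invented assumptions; the point is only the shape of the process - simulate, discard what crosses the 'acceptable' boundary, act on the best of what remains.

# Crude scenario modelling: score possible end-games, filter by acceptability,
# then choose. All names and numbers here are illustrative assumptions.

CANDIDATES = {
    "blunt truth":  {"honesty": 1.0, "hurt": 0.8},
    "gentle truth": {"honesty": 0.8, "hurt": 0.2},
    "white lie":    {"honesty": 0.1, "hurt": 0.0},
}

MAX_ACCEPTABLE_HURT = 0.5   # the boundary that varies from Human to Human

def choose_response(candidates: dict) -> str:
    """Discard end-games that cross the boundary, then pick the most honest one."""
    acceptable = {name: outcome for name, outcome in candidates.items()
                  if outcome["hurt"] <= MAX_ACCEPTABLE_HURT}
    return max(acceptable, key=lambda name: acceptable[name]["honesty"])

print(choose_response(CANDIDATES))   # -> 'gentle truth'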

We don't need to know how our heart works, because the brain 'manages' that 'background' stuff. It is like our cars: we don't have to pump the fuel manually or time the spark while also trying to drive and negotiate road hazards; that 'management' is done in the background, leaving us to just drive.

This is the 'plate spinning' that a computer has to do - and can do - to operate essentially like 'us'.
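Here is one minimal way the 'plate spinning' could look in code, using Python's asyncio: a background 'heartbeat' task keeps running unattended while the foreground task gets on with 'driving'. The task names are assumptions for illustration only.

# Background 'management' running alongside the conscious foreground task.
import asyncio

async def heartbeat():
    """Background housekeeping the 'driver' never has to think about."""
    while True:
        await asyncio.sleep(0.5)
        # pump fuel, time the spark, beat the heart...

async def drive():
    """The foreground task: the only thing we are consciously doing."""
    for step in range(3):
        print(f"negotiating road hazard {step}")
        await asyncio.sleep(1)

async def main():
    background = asyncio.create_task(heartbeat())   # start the spinning plate
    await drive()                                   # concentrate on driving
    background.cancel()                             # stop housekeeping on shutdown
    try:
        await background                            # let the cancelled task finish cleanly
    except asyncio.CancelledError:
        pass

asyncio.run(main())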

Brain V Computer

The 'wetware' Human brain is massively complex for what it is; to replicate that capacity in silicon chips and switchgear would, at present, take a massive computer.

Each day that passes, we go further down the road of achieving a Robot in Human form, something that we have been trying to do for millennia.

Within the next 30 years we may have Humanoid robots in society. We can now capture data, and even crude video reconstructions, from our brains, so it may be possible to create some hybrid that will become a new sub-species.

The halfway stage may be to create remote humanoid Robot entities which receive data and instructions wirelessly from supercomputers.

What must be done is to assess the outcome of what we are trying to create and how we expect it to develop; if we get it wrong, the consequences could be very serious for us.

We need to ask the basic question - 'What are we trying to achieve?' - and to reach a consensus on the thing we are trying to create; getting this wrong could be disastrous. We need a strategy for what we are going to build in and what we are going to ensure is left out.

