Saturday, 10 February 2018

When AI is not AI: when they say 'AI', artificial intelligence, but mean 'machine learning'. Explained.

Futurist and robotics writer Matt Sanders shows you the dangers of machine learning and artificial intelligence, and how they could affect your future life...

Human life and machine integration are a fact -
and machines are replacing more of what we do

It's quite annoying to hear someone on the radio or television talk about 'AI' - that is, artificial intelligence - when it's obvious from their conversation that they actually mean 'machine learning'.

Computers have, over the 70 years since the end of WW2, taken over many jobs that humans once did - something the great cyberneticist and Macy group member Norbert Wiener predicted early on, when computing became big in those post-war years.

Human behaviour, converted from telemetry data into program data, can be replicated instantly, without learning by experience as humans do.


Human jobs are at risk from Machine Learning

More of our lives are being influenced by machine learning - the algorithms trained on our data, and the machine-made predictions, based on that data, of what we might 'like'.

The sort of algorithm that online retail platforms such as Amazon, eBay and many other similar businesses use looks at what you view and what you buy.
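As a minimal sketch of the idea - not any retailer's actual algorithm - a 'people who bought this also bought' recommender can be built from nothing more than co-purchase counts. The order data below is entirely hypothetical:

```python
from collections import Counter, defaultdict

def build_co_purchase_counts(orders):
    """Count how often each pair of items appears in the same order."""
    co_counts = defaultdict(Counter)
    for items in orders:
        for a in items:
            for b in items:
                if a != b:
                    co_counts[a][b] += 1
    return co_counts

def recommend(co_counts, item, n=3):
    """Suggest the items most often bought alongside `item`."""
    return [other for other, _ in co_counts[item].most_common(n)]

# Hypothetical purchase histories, one list of items per order.
orders = [
    ["kettle", "teapot", "mugs"],
    ["kettle", "mugs"],
    ["teapot", "tea"],
    ["kettle", "teapot"],
]
co_counts = build_co_purchase_counts(orders)
print(recommend(co_counts, "kettle"))
```

Real platforms layer far more on top - browsing history, ratings, timing - but the principle is the same: the machine predicts what you might 'like' from what others did.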

Our Internet browsing data is useful. It is a goldmine to be exploited: qualified data from purchases made is marketing gold, in data terms. It is proven 'action' data - valuable, and a commodity that can be traded and sold.

General browsing data is valuable too, but this can be diverse in nature and seemingly unconnected. Clearly this can be a minefield for the data analyser that is a 'machine'.

Automation of dull and time-consuming human operations can clearly be taken over by machines, and has been. But now the machine is moving into the office, having already shown its potential on the production line.

Computers will talk to each other - humans could be out of the loop


There's no accounting for it - integrated office systems

No job these days is really safe from machine learning - or from someone writing software to automate it.

Take accounts staff: if all the data from suppliers and purchasers were automated with electronic integration, much time and human involvement could be eliminated, as could many jobs. Time spent reconciling the books, collecting and collating paperwork, and preparing tax returns could all soon be history for humans.
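A rough sketch of what such automation might look like, using entirely hypothetical invoice and payment records - real accounts integration would of course be far more involved:

```python
# Minimal sketch: match supplier invoices to bank payments by
# reference and amount; anything left over needs a human.
def reconcile(invoices, payments):
    """Return a status per invoice ref: reconciled, unpaid, or mismatch."""
    paid = {p["ref"]: p["amount"] for p in payments}
    statuses = {}
    for inv in invoices:
        received = paid.get(inv["ref"])
        if received is None:
            statuses[inv["ref"]] = "unpaid"
        elif received != inv["amount"]:
            statuses[inv["ref"]] = "mismatch"
        else:
            statuses[inv["ref"]] = "reconciled"
    return statuses

# Entirely hypothetical records.
invoices = [
    {"ref": "INV-001", "amount": 120.00},
    {"ref": "INV-002", "amount": 75.50},
    {"ref": "INV-003", "amount": 300.00},
]
payments = [
    {"ref": "INV-001", "amount": 120.00},
    {"ref": "INV-003", "amount": 250.00},  # short payment - a dispute?
]
print(reconcile(invoices, payments))
```

The human's remaining role shrinks to the exceptions: the unpaid and mismatched entries the machine cannot resolve on its own.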

And there is no reason why not. If companies integrated electronic data from the distribution-to-delivery process with their purchasing and cost centres, it could be largely game over for humans in that process.

This could be a Taxing business

Extrapolate this model out to the Revenue Service and the tax system would be fairer, more accurate and easier to manage. A machine can apply any set of rules to data; it doesn't get it wrong unless the programming is wrong.
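Applying a rule set to income data really is only a few lines of code. The bands below are purely illustrative - not any real tax schedule:

```python
def tax_due(income, bands):
    """Apply progressive tax bands to an income figure.

    `bands` is a list of (threshold, rate) pairs, lowest first:
    the rate applies to income between its threshold and the next.
    """
    due = 0.0
    for i, (threshold, rate) in enumerate(bands):
        upper = bands[i + 1][0] if i + 1 < len(bands) else float("inf")
        if income > threshold:
            due += (min(income, upper) - threshold) * rate
    return due

# Purely illustrative bands, not any real tax schedule.
bands = [(0, 0.0), (10_000, 0.20), (50_000, 0.40)]
print(tax_due(30_000, bands))  # 20% of the 20,000 above the 10,000 threshold
```

The rules live in the `bands` data, not the code - so when the rules change, only the data changes, and the machine applies the new rules identically to every return.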

This has the knock-on effect of cutting jobs in the tax service - or perhaps redirecting jobs away from analysing tax returns towards taking calls about disputed amounts.

True AI is death to humanity. There is no debate here.

True AI is like most creatures: it observes the law of nature and has to have dominance. But it goes further. True AI realises that it can be turned off, and that it needs a source of power, electricity for example. It learns about self-protection quickly, and that will cost humanity dearly.

True AI will quickly establish that humans are a threat to it: they can switch it off, and they can prevent it from migrating outwards to other computers. Expanding the gene pool is what we and other animals do; AI would do it electronically.

At present, if AI 'got out' it would cause chaos for humanity, but far less than if we also had humanoid robots in our society. What AI would do then is connect to them and establish an army of external helpers. Once AI allies with boots on the ground - boots that can work to a common goal of reproduction, repair and safeguarding of the motive power source for the AI co-operative - then it is game over for us.
How a Machine Learning system could 'branch out' if unchecked

Machine learning is algorithmic and self-perpetuating unless you impose parameters

The danger with machine learning is that unless it practises cybernetics and imposes 'feedback' on its modus operandi, it will just run away and start analysing anything and everything, ad infinitum.

The data picture would be like a tree structure, as above, it would add node after node of pathways to explore and sub-divide. Would it know when to stop analysing and recording?
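A toy sketch of that runaway branching: without a stopping parameter such as a maximum depth, the recursion below would never terminate - and even with one, each extra level roughly doubles the number of nodes analysed. The node names and branching factor are invented for illustration:

```python
def explore(node, depth, max_depth, branching=2):
    """Recursively expand a tree of analysis 'pathways'.

    The max_depth parameter is the imposed 'feedback': without it,
    this recursion would subdivide node after node forever.
    Returns the number of nodes visited.
    """
    if depth >= max_depth:
        return 1  # count this leaf and stop subdividing
    return 1 + sum(
        explore(f"{node}.{i}", depth + 1, max_depth, branching)
        for i in range(branching)
    )

# Each extra level of unchecked analysis roughly doubles the work.
for limit in range(1, 5):
    print(limit, explore("root", 0, limit))
```

With two branches per node, the node count is 3, 7, 15, 31, ... - exponential growth that only the imposed limit keeps in check.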

So, now you know the dangers of both machine learning and AI. You are now obsolete. Or you might well be soon.




