Can Artificial Intelligence Be Dangerous for Humans?


Should we be afraid of artificial intelligence? For me, this is an easy question with an even simpler, two-letter answer: no. But not everyone agrees. Many people, including the late physicist Stephen Hawking, have raised concerns that the rise of powerful AI systems could spell the end for humanity. So the question is: can artificial intelligence be dangerous for humans?

What is Artificial Intelligence?

Your view on whether AI will take over the world will depend on whether you think it can develop intelligent behavior surpassing that of humans – something referred to as “superintelligence”. So let’s take a look at how likely this is, and why there is so much concern about the future of AI.

Humans tend to be afraid of what they don’t understand. Fear is often blamed for racism, homophobia, and other sources of discrimination. So it’s no wonder it also applies to new technologies – they’re often surrounded by a certain mystery.

But let us demystify the most popular AI techniques, known collectively as “machine learning”. These allow a machine to learn a task without being programmed with explicit instructions. This may sound spooky, but the reality is that it all comes down to some rather mundane statistics.

The machine, which is a program, or rather an algorithm, is designed with the ability to discover relationships within provided data. Many different methods allow us to achieve this. For example, we can present the machine with images of handwritten letters (a-z), one by one, and ask it to tell us which letter we are showing each time.

We have already provided the possible answers – it can only be one of (a-z). At the start the machine guesses a letter at random and we correct it by providing the right answer. We have also programmed the machine to reconfigure itself so that, if presented with the same letter next time, it is more likely to give us the correct answer. As a consequence, the machine improves its performance over time and “learns” to recognize the alphabet.

In essence, we have programmed the machine to exploit common relationships in the data to achieve the specific task. For instance, all versions of “a” look structurally similar, but different from “b”, and the algorithm can exploit this.
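To make this concrete, here is a minimal sketch of that training loop. It is an illustration only, assuming scikit-learn is available and standing in for handwritten letters with its bundled handwritten-digits dataset; the idea is the same – images in, one answer out of a fixed set of possibilities, corrected against known right answers.

```python
# Sketch of supervised learning: images in, one label out of a fixed set.
# Assumption: scikit-learn's digits dataset stands in for handwritten letters.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

images, labels = load_digits(return_X_y=True)   # pixel data plus the "right answers"
X_train, X_test, y_train, y_test = train_test_split(
    images, labels, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000)        # a rather mundane statistical model
model.fit(X_train, y_train)                      # "correct it" on every training example
print("accuracy on unseen images:", model.score(X_test, y_test))
```

Nothing mysterious is happening here: the algorithm simply adjusts its internal numbers until its guesses match the provided answers as often as possible.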

Humans, however, are already good at reading. A more interesting example is Google DeepMind’s artificial Go player, which has surpassed every human player at the game. It learns in a way quite different from us – playing a number of games against itself that no human could play in a lifetime.

It has been specifically instructed to win, and told that the actions it takes determine whether it wins or not. It has also been told the rules of the game. By playing the game again and again it can discover, in each situation, what the best action is – inventing moves that no human has played before.
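The sketch below shows the same self-play idea on a far smaller scale. It is not DeepMind’s actual method – just a toy reinforcement-learning loop on a simple counting game (players alternately remove 1-3 stones; whoever takes the last stone wins), with all parameter values chosen purely for illustration.

```python
# Toy self-play learning: the program discovers good moves only from win/loss feedback.
import random
from collections import defaultdict

Q = defaultdict(float)        # learned value of (stones_left, move) for the player to move
ALPHA, EPSILON = 0.5, 0.1     # learning rate and exploration rate (illustrative values)

def choose(stones):
    moves = [m for m in (1, 2, 3) if m <= stones]
    if random.random() < EPSILON:
        return random.choice(moves)                      # sometimes try something new
    return max(moves, key=lambda m: Q[(stones, m)])      # otherwise use what was learned

for _ in range(50_000):                                  # play many games against itself
    stones, history = 21, []
    while stones > 0:
        move = choose(stones)
        history.append((stones, move))
        stones -= move
    reward = 1.0                                         # the player who moved last won
    for state, move in reversed(history):
        Q[(state, move)] += ALPHA * (reward - Q[(state, move)])
        reward = -reward                                 # the other player lost, and so on

# The learned policy tends toward the known optimal strategy for this game,
# even though nobody programmed that strategy in.
print({s: max((1, 2, 3), key=lambda m: Q[(s, m)]) for s in range(4, 12)})
```

The only things the program is given are the rules and the win/loss signal; everything else it works out by playing itself, which is the essence of the approach described above.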

Toddlers versus robots

Now, does the fact that an AI can play Go make it smarter than a human? Certainly not. AI is extremely specialized in particular kinds of tasks and it doesn’t display the flexibility that humans do. Humans develop an understanding of the world over the years that no AI has achieved or seems likely to achieve anytime soon.

The fact that AI is dubbed “intelligent” ultimately comes down to the fact that it can learn. But even when it comes to learning, it is no match for humans. Toddlers can learn by watching somebody solve a problem just once. An AI, on the other hand, needs tons of data and a huge number of tries to succeed on very specific problems, and it is difficult to generalize its knowledge to tasks very different from those it was trained on.

The success of modern AI is due less to a breakthrough in new techniques and more to the vast amount of data and computational power available. Some approaches to building more general intelligence involve modelling the human brain in a computer – which we are not even close to achieving.

Ultimately, just because an AI can learn, it doesn’t follow that it will suddenly learn all aspects of human intelligence and outsmart us. There is no simple definition of what human intelligence even is, and we certainly have little idea how exactly intelligence emerges in the brain. But even if we could work it out and then create an AI that could learn to become more intelligent, that doesn’t necessarily mean it would be more successful.

I’m more concerned with how humans use AI. Machine learning algorithms are often thought of as black boxes, and less effort is made in pinpointing the specifics of the solution our algorithms have found. This is an important and frequently neglected aspect, as we are often obsessed with performance and less with understanding. Understanding the solutions that these systems have discovered is important, because only then can we evaluate whether they are correct or desirable solutions.
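One simple way to peek inside the “black box” is permutation importance: shuffle one input at a time and see how much the accuracy drops. The sketch below assumes the model, X_test and y_test from the earlier digit-classification example; it only hints at which pixels the model actually relies on, not why.

```python
# Inspecting a trained model instead of treating it as a black box (sketch).
from sklearn.inspection import permutation_importance

result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top_pixels = result.importances_mean.argsort()[::-1][:5]
print("input pixels the model relies on most:", top_pixels)
```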

Example of Artificial Intelligence

If, for example, we train our system in the wrong way, we can end up with a machine that has learned relationships that do not hold in general. Say, for example, that we want to design a machine to evaluate the ability of potential students in engineering. Probably a terrible idea, but let us follow it through for the sake of the argument.

Traditionally, this is a male-dominated discipline, which means that training samples are likely to come from previous male students. If we do not make sure, for example, that the training data are balanced, the machine might end up with the conclusion that engineering students are male, and incorrectly apply that to future decisions.
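A basic guard against this is simply to look at how skewed the training data are before training anything. The sketch below assumes a hypothetical table of past applicants (past_applicants.csv) with a gender column; the file and column names are illustrative, not from any real system.

```python
# Checking and crudely rebalancing a skewed training set (illustrative sketch).
import pandas as pd

applicants = pd.read_csv("past_applicants.csv")            # hypothetical historical data
print(applicants["gender"].value_counts(normalize=True))   # how imbalanced is it?

# One simple mitigation: keep an equal number of rows from each group.
smallest = applicants["gender"].value_counts().min()
balanced = (applicants.groupby("gender", group_keys=False)
                      .apply(lambda g: g.sample(smallest, random_state=0)))
```

Rebalancing alone does not make such a system a good idea, but it illustrates the kind of check that catches a machine learning a relationship we never intended it to learn.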

Machine learning and artificial intelligence are tools.