
Does AI understand?



Understanding is an interesting thing. We might feel that we know what understanding means, so let's take an example. We know that in most countries cars must drive on the right side of the road. We know that we must do so because regulations say we should drive on the right side. But does that mean that an AI understands why it should drive on the right side?

There are, of course, countries with left-side traffic. When the GPS tells the AI that the vehicle is in Great Britain, the AI knows that it should keep to the left. The reason is that in that particular country traffic drives on the left. These kinds of things are interesting. The AI's databases are programmed with the fact that traffic is left-sided in certain countries. But does the AI even know what a country or a state is?

The AI knows the GPS point, or GPS coordinates, of where it is. Then it compares that parameter to its database and concludes that it is in Great Britain. If we asked the AI where it is, it might answer, "I'm in Great Britain." But then we might check the code list and notice one thing. The AI knows Great Britain only as the area covered by a certain set of GPS coordinates. And when somebody asks where the AI is, those coordinates are connected to the database, as the sketch below illustrates.
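To make this concrete, here is a minimal sketch of that kind of lookup, assuming a hypothetical table of countries stored as GPS bounding boxes. A real system would use polygon borders and a spatial index; the COUNTRY_BOXES and DRIVING_SIDE tables and their values are illustrative assumptions, not real data.

```python
# A minimal sketch: resolving a GPS point to a country name, and from the
# country to the side of the road. The bounding boxes are rough illustrative
# values, not real borders.
COUNTRY_BOXES = {
    # country: (lat_min, lat_max, lon_min, lon_max)
    "Great Britain": (49.9, 60.9, -8.2, 1.8),
    "France":        (41.3, 51.1, -5.1, 9.6),
}
DRIVING_SIDE = {"Great Britain": "left", "France": "right"}

def locate(lat: float, lon: float) -> str:
    """Compare a GPS point against the database and return a country name."""
    for country, (lat_min, lat_max, lon_min, lon_max) in COUNTRY_BOXES.items():
        if lat_min <= lat <= lat_max and lon_min <= lon <= lon_max:
            return country
    return "unknown"

country = locate(51.5034, -0.1276)  # a point in central London
print(country, "- drive on the", DRIVING_SIDE.get(country, "right"))
# -> Great Britain - drive on the left
```

The point of the sketch is that "Great Britain" is, to the system, nothing more than a row matched by a coordinate test; the name carries no meaning of its own.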

In that database is the answer "Great Britain". That database might involve many hierarchical structures: country, region, city, city district, and even streets and street addresses. The thing is that making an AI that can answer with an accurate location is not as hard as people might think. The AI loads the table "Great Britain" when it crosses the border. And if it should find an address like 10 Downing Street, which is the official address of the prime minister, it must first find the city where that address is. So it downloads the table for the region where London is. Then it drives to London, and it replaces that table with the London database.

Then it knows in which city district it can find that address. And then it changes the tables to more and more accurate versions. If people watched that operation, the process would look a little like zooming into a satellite image. At first, the system uses large-area images, but then they become more accurate and cover smaller areas. But an AI driving a robot car would not use satellite images at all. It uses GPS points to find the address.
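The table-swapping described above could look roughly like this sketch, where each level of the hierarchy is a hypothetical table that names the next, finer-grained table. The TABLES structure and its keys are assumptions made up for illustration, not a real schema.

```python
# A sketch of hierarchical table swapping: country -> region -> district -> street.
TABLES = {
    "Great Britain": {"Greater London": "london"},
    "london": {"Westminster": "westminster"},
    "westminster": {"Downing Street": ["10 Downing Street"]},
}

def zoom_to(address_parts):
    """Walk the hierarchy, replacing the active table at each step."""
    table = TABLES["Great Britain"]      # loaded when the border is crossed
    for part in address_parts:
        value = table[part]
        if isinstance(value, str):       # the value names the next table
            table = TABLES[value]
        else:                            # finest level: the street entries
            return value
    return table

print(zoom_to(["Greater London", "Westminster", "Downing Street"]))
# -> ['10 Downing Street']
```

At every step the system simply replaces its active table with a smaller one, which is why the process resembles zooming in, even though no images are involved.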

But if we drove our car to the front of 10 Downing Street and asked where we are, the AI connected to the GPS, and perhaps to a camera that sees the street plate, might say: "In front of 10 Downing Street. That is the official home of the prime minister of Great Britain." The thing is that the AI would find that answer in its database. And if it uses the right algorithm, it can tell lots of things about 10 Downing Street and the prime minister.

It just searches things like Wikipedia pages about those topics and then transforms the texts into speech. That means the AI does not know anything about the things it reads. The AI can search the net by connecting information to the address. First, it might search for 10 Downing Street. Then it finds the words "prime minister" and "home".

Then it positions the data about the prime minister after the 10 Downing Street information. Then it searches for phrases like "current holder of that position". So the AI connects three databases, 10 Downing Street, the prime minister, and the personal data of the current prime minister, into a good and reasonable-looking whole. But the fact is that the computer does not understand what it says.
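A rough sketch of that chain, assuming a hypothetical local store of text snippets standing in for the pages the system might fetch; the PAGES contents and the naive keyword-linking rule are assumptions for illustration only.

```python
# A sketch of chaining lookups by keyword. The snippets stand in for pages
# the system might fetch (e.g. from Wikipedia); the texts are placeholders.
PAGES = {
    "10 Downing Street": "10 Downing Street is the home of the prime minister.",
    "prime minister": "The prime minister is the head of government; "
                      "the current holder of that position is listed elsewhere.",
    "current holder of that position": "Personal data of the current prime minister.",
}

def answer_about(topic: str) -> str:
    """Follow keyword links from page to page and concatenate the texts."""
    parts, seen = [], set()
    queue = [topic]
    while queue:
        key = queue.pop(0)
        if key in seen or key not in PAGES:
            continue
        seen.add(key)
        text = PAGES[key]
        parts.append(text)
        # Naive linking: queue any known page title mentioned in the text.
        for other in PAGES:
            if other != key and other in text:
                queue.append(other)
    return " ".join(parts)

print(answer_about("10 Downing Street"))
```

The join is purely mechanical: the program never models what "prime minister" means, only which strings co-occur.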

The situation is similar to the case where a regular Western person reads a language like Hebrew. We can read Hebrew if we have phonetic marks to rely on. So if the text is written out in Western letters, we can read it very easily.

We can pronounce those words correctly. But without a translation, we don't understand what we are saying. That is one thing we must realize when we are talking about AI. The AI can connect things like Wikipedia pages and read them aloud. It can form a reasonable-looking whole. But does it understand? A person who drives on the streets knows what happens if they do otherwise: breaking the regulations causes problems. So this is the thing that we call understanding.


https://artificialintelligenceandindividuals.blogspot.com/

