
Does AI understand?



Understanding is an interesting thing. We might feel that we know what understanding means, so let's take an example. We know that in most countries cars must drive on the right side of the road. We know that we must do so because regulations require driving on the right. But does that mean that an AI understands why it should drive on the right side?

There are, of course, countries with left-side traffic. When the GPS tells the AI that the vehicle is in Great Britain, the AI knows that it should keep to the left. The reason is that traffic in that country is left-sided. These kinds of things are interesting. The AI's databases record that traffic is left-sided in certain countries. But does the AI even know what a country or a state is?

The AI knows the GPS coordinates of its position. It compares that parameter to its database and concludes that it's in Great Britain. If we asked where the AI is, it might answer, "I'm in Great Britain". But then we might check the code list and notice one thing: the AI only knows that Great Britain is the area covered by certain GPS coordinates. When somebody asks where the AI is, those coordinates are matched against the database.
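The coordinate-to-region matching described above can be sketched very simply. This is a minimal, hypothetical illustration: the bounding boxes below are rough invented placeholders, not real national borders, and a real system would use proper polygon data.

```python
# Map region names to rough bounding boxes: (min_lat, max_lat, min_lon, max_lon).
# These boxes are illustrative placeholders only, not real borders.
REGIONS = {
    "Great Britain": (49.9, 58.7, -8.2, 1.8),
    "France": (41.3, 51.1, -5.1, 9.6),
}

def locate(lat, lon):
    """Return the first region whose bounding box contains the GPS point."""
    for name, (lat_min, lat_max, lon_min, lon_max) in REGIONS.items():
        if lat_min <= lat <= lat_max and lon_min <= lon <= lon_max:
            return name
    return "unknown"

print(locate(51.5, -0.13))  # a point in London
```

The point is that "I'm in Great Britain" is the output of a containment test, nothing more: the string "Great Britain" is just the label attached to a box of coordinates.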

In that database is the answer "Great Britain". The database might involve many hierarchical structures: country, area, city area, and even streets and street addresses. Making an AI that can answer with an accurate location is not as hard as people might think. The AI loads the table "Great Britain" when it crosses the border. If it must find an address like Downing Street 10, the official address of the prime minister, the AI must first find the city where that address is. So it downloads the table for the area where London is. Then it drives to London and replaces that table with the London database.

Then it knows in which city area it can find that address, and it keeps swapping the tables for more and more accurate versions. To an observer, the process looks a little like a zooming satellite image: the system starts with large-area images that become more accurate and cover smaller areas. But if that AI drives a robot car, it would not use satellite images at all. It uses GPS points to find the address.
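The "zooming" table swaps above amount to a depth-first walk down a hierarchy. Here is a minimal sketch under invented data: the nesting (country, city, city area, address) and all names are hypothetical illustration, not a real map database.

```python
# A hypothetical nested table: country -> city -> city area -> addresses.
HIERARCHY = {
    "Great Britain": {
        "London": {
            "Westminster": ["Downing Street 10"],
        },
    },
}

def find_address(address, tree, path=()):
    """Walk down the hierarchy, 'loading' one more detailed table per level,
    and return the chain of tables that leads to the address."""
    for key, sub in tree.items():
        if isinstance(sub, dict):
            result = find_address(address, sub, path + (key,))
            if result:
                return result
        elif address in sub:
            return path + (key, address)
    return None

print(find_address("Downing Street 10", HIERARCHY))
# -> ('Great Britain', 'London', 'Westminster', 'Downing Street 10')
```

Each recursion step corresponds to the car replacing a coarse table with a finer one, just like the zooming satellite image in the text.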

But what if we drove our car to the front of Downing Street 10 and asked where we are? The AI that is connected to the GPS, and whose camera perhaps sees the street plate, might say: "In front of Downing Street 10. That is the official home of the prime minister of Great Britain." The thing is that the AI finds that answer in its database. And if it uses the right algorithm, it can tell lots of things about Downing Street 10 and the prime minister.

It just searches things like Wikipedia pages and then transforms those texts into speech. That means the AI does not know anything about the things it reads. The AI can search the net by connecting the information to the address. First it might search for Downing Street 10. Then it finds the words "prime minister" and "home".

Then it positions the data about the prime minister after the Downing Street 10 information. Then it searches for phrases like "current holder of that position". So the AI connects three databases (Downing Street 10, the prime minister, and the personal data of the current prime minister) into a good and reasonable-looking whole. But the fact is that the computer does not understand what it says.
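The three-database chaining described above is, at bottom, a series of table joins. This is a hedged sketch with invented records and link keys; a real system would pull such records from sources like Wikipedia rather than hard-coded dictionaries.

```python
# Three hypothetical lookup tables: the address, the office it links to,
# and the holder of that office. All records here are invented placeholders.
ADDRESSES = {"Downing Street 10": {"role": "prime minister"}}
ROLES = {"prime minister": {"holder": "the current prime minister"}}

def describe(address):
    """Chain address -> role -> holder into one fluent-sounding sentence."""
    role = ADDRESSES[address]["role"]
    holder = ROLES[role]["holder"]
    # The output sounds informed, but it is only key lookups: no step in
    # this chain models what an address or an office actually *is*.
    return f"{address} is the home of the {role}, currently {holder}."

print(describe("Downing Street 10"))
```

The sentence the function produces is "reasonable-looking", exactly as the text says, yet nothing in the chain resembles understanding: remove one key and the whole performance collapses.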

The situation is similar to the case where a regular Western person reads Hebrew. We can read Hebrew if we have phonetic marks to help us. And if the text were written using Western letters, we could read it very easily.

We can say the words correctly, but without a translation we don't understand what we are saying. That is one thing we must realize when we are talking about AI. The AI can connect things like Wikipedia pages and read them. It can form a reasonable-looking whole. But does it understand? A person who drives on the streets knows that doing otherwise, breaking the regulations, causes problems. That is the thing we call understanding.


https://artificialintelligenceandindividuals.blogspot.com/

