Showing posts with label deep learning. Show all posts

Saturday, February 5, 2022

The new type of quantum computer is based on photon-magnetic interaction.


Photons can be used to manipulate magnetic fields. 


One version of a quantum sensor is a nanotube that contains ionized gas or laser rays. The sensor detects the magnetic field through changes in the position of the ionized gas.

Or it can sense changes in the brightness of the laser rays. The magnetic field can be manipulated by using photons, and that makes it possible to create more accurate sensors. 

It can also start a revolution in nanotechnology. The particle that will be moved can be magnetized, and that magnetic field can then be manipulated by using photons. If that ability is combined with a scanning tunneling microscope, the system can move single atoms on a layer. 

That kind of ability makes a new type of quantum computer possible. A single electron can be taken under the stylus, and then photons can pump data into that electron. That is what makes the new type of quantum computer possible. 


The new type of quantum processor can be based on a material that is full of tunnels. 


The laser rays are shot through those tunnels, and sensors follow their brightness. If the magnetic field can affect the brightness of the laser rays, it becomes easier to input data into photon rays. That makes it easier to create a communication layer between electric and photon-based binary systems, and to connect keyboards and screens straight to quantum computers.

New materials and new types of ideas are the tools for a new type of quantum computer. The image above this text introduces a new type of nanomaterial. That material is full of tunnels, and those tunnels make it useful for new quantum computers. 

Photons can affect magnetic fields, and that makes it possible to drive data between photon-based computers and regular electric binary systems. That allows connecting quantum computers to keyboards and screens. The electric impulses from keyboards are transferred to that structure. The magnetic field, in turn, interacts with photons or with superpositioned and entangled particles. And that allows using quantum computers with regular keyboards. 

The idea of that kind of system is that photon beams are shot through those quantum tunnels. There is a magnetic field in that structure, and the photons are shot into those laser rays. Another way to build the system is to pull the quantum entanglement through those tunnels. The idea is that the energy in that particle pair is symmetrically identical. 

The photons are shot into the channel that connects those particles, and then those photons are used to anneal those particle pairs. Because data travels in both directions, an error-detection system can be created. That kind of system uses two lines with identical data flow; if there are differences between their solutions, there is a mistake or error. 
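The two-line comparison described above can be sketched in code. This is a minimal illustration of the idea, assuming the two channels carry the same data as bit sequences; the channel model and the sample bits are invented for this example.

```python
# Sketch of the dual-line error-detection idea: the same data is
# carried on two channels, and any mismatch between the received
# copies signals a mistake or error. The bit sequences below are
# illustrative assumptions, not part of the original text.

def detect_errors(line_a, line_b):
    """Compare two supposedly identical streams and return the
    positions where they disagree."""
    return [i for i, (a, b) in enumerate(zip(line_a, line_b)) if a != b]

received_a = [1, 0, 1, 1, 0, 0, 1]   # clean copy
received_b = [1, 0, 0, 1, 0, 0, 1]   # bit 2 flipped in transit

print(detect_errors(received_a, received_b))  # [2] -> an error occurred
```

An empty result means both lines agree and the solution is accepted; any reported position marks where the two data flows differ.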

The idea is that the annealing of those superpositioned and entangled particles affects the magnetic fields. That allows the system to transform the photonic data flow into a magnetic field. The magnetic fields can detect the changes in annealing, and that allows driving data to binary systems like keyboards and screens. 


https://phys.org/news/2022-01-scientists-atomically-thin-wires-ribbons.html


https://scitechdaily.com/physicists-manipulate-magnetism-with-light-playground-created-for-observing-exotic-physics/


Image: https://phys.org/news/2022-01-scientists-atomically-thin-wires-ribbons.html





Deep learning algorithms are like an expanding network of connections between databases. 


The data travels in the form of units such as X and Y. The system can connect the data-handling units into the form (XY). The question is in which order the system needs to sort the databases. It is the unknown "X" that makes it deep learning. 

The AI can search all databases where "X" appears, and then it tries to solve the problem (*X). If there is some actor that commonly predicts Y, that term sits in the position of the asterisk (*), which allows the computer to connect it with "Y". So if that thing is "X", the form of the data unit is (XY). 

Then the AI can turn to search for the data unit that comes before the "X". Now the asterisk (*) is before "X", so the form of the data unit being filled is (*XY). If the most common preceding unit is "W", the form of the data is (WXY). Then the system tries to follow the next data module: it replaces the asterisk in (*WXY) with the next letter, until it has made a network of the entire data. 
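The backward pattern-growing step above can be sketched as a simple frequency search: given many recorded sequences of data units, repeatedly prepend the unit that most often precedes the current pattern. The sequences below are invented for illustration, not real data.

```python
from collections import Counter

def grow_pattern(sequences, pattern):
    """Repeatedly prepend the most common unit seen just before
    the current pattern in the recorded sequences, until no
    predecessor is found."""
    pattern = list(pattern)
    while True:
        counts = Counter()
        for seq in sequences:
            for i in range(1, len(seq) - len(pattern) + 1):
                if seq[i:i + len(pattern)] == pattern:
                    counts[seq[i - 1]] += 1   # unit just before the match
        if not counts:
            return pattern                    # reached the start of the data
        best, _ = counts.most_common(1)[0]
        pattern.insert(0, best)               # (XY) -> (WXY) -> (VWXY) ...

sequences = [
    ["V", "W", "X", "Y"],
    ["V", "W", "X", "Y"],
    ["M", "X", "Y"],      # a rarer variant
]
print(grow_pattern(sequences, ["X", "Y"]))  # ['V', 'W', 'X', 'Y']
```

Starting from (XY), the sketch finds "W" as the most common predecessor, then "V", matching the (*XY) → (WXY) filling described above.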

The letters (X, Y, Z) and the others are data units. The data units are databases that contain some kind of data. The data can be, for example, the movement series of a successful operation. But that kind of data-handling tool can also be used to predict how some person would act in polls. The system can mark whether there are differences in answers depending on the sender of the form, and it can also detect how problems are solved. 

The key element in this kind of system is fuzzy logic. There is no guarantee that the letter or data-handling unit W always comes before the XY, but there is an error level that makes the solution acceptable. When the system finds something other than the most usual data-handling unit before the "(XY)", a human operator can check why there is some other data-handling unit.

One thing that can make the case look like (MXY) is simple: the operational area of the AI is different than in the case (WXY). The normal solution (WXY) might be meant for robots operating in a city area, and (MXY) might be meant for robots operating in a mountain area. 
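The error-level rule above can be sketched as a simple frequency threshold: predecessors of "(XY)" that occur more rarely than the accepted error level are flagged for the human operator. The threshold value and the observed units are illustrative assumptions.

```python
from collections import Counter

def flag_unusual(predecessors, error_level=0.2):
    """Return the data-handling units whose relative frequency
    before "(XY)" falls below the accepted error level, so a
    human operator can check them."""
    counts = Counter(predecessors)
    total = len(predecessors)
    return {unit for unit, n in counts.items() if n / total < error_level}

# "W" is the usual predecessor; a single "M" is the mountain-area variant.
observed = ["W", "W", "W", "W", "W", "W", "W", "W", "W", "M"]
print(flag_unusual(observed))  # {'M'} -> check why (MXY) occurred
```

A unit that appears often enough stays within the error level and is accepted without review; only the rare variants like "M" are raised to the operator.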


https://realityisthinking.blogspot.com/

Saturday, December 25, 2021

The brain cells in a dish are learning faster than AI.





Brain cells in a dish are learning faster than AI. But consider that those brain cells in a dish need to learn only one thing. Those brain cells can learn a single thing faster than regular human brains because they need only one connection between them. So if we minimize the data mass that is loaded into the neurons, we can make them react very fast. And they can also learn that data faster than human brains.

When the data mass loaded into the neurons is minimal, there is a smaller number of connections between those neurons. So when the neurons handle data, they do not have to search through connections. 

The reason why brain cells in a dish learn faster than brain cells in human brains is that the brain cells in the dish can use their entire time for solving one problem. The brain cells in the human brain must sometimes concentrate on other things, so there are cuts in the data-handling process of human brains. There is also a lot more information that the human brain must handle beyond the particular problem.

There is a reason why human brain cells in a dish learn faster than human brains. The reason for that fast learning process is that those brain cells in the dish handle more limited information than normal brains. If the only thing those brain cells must learn is some game like chess or a video game, those brain cells will learn it very well and fast. In the real world, meaning places outside laboratories, neurons must handle larger data masses than in laboratories. 

When people walk on the streets, their brains must handle many types of signals. If those neurons were in dishes, the only thing they would have to do is learn some computer game. That is called sensorial adaptation, or selective sensorial adaptation. The idea is that the neurons are in a chamber where they learn only one thing. 

And if that is the only stimulus those neurons get, it makes them learn that one thing very well. A theory is sometimes introduced that Kaspar Hauser (1812-1833), the "boy who grew up in a barrel", or at least in total isolation, was the victim of a "selective adaptation" experiment. In that case, the only stimulus that this poor man got would have been military tactics. 

But the brain in the jar has brought one thing to my mind. Even though those mini-brains are small, they learn things. The memories of a person could be transferred to those cells. And if there are enough brain-cell cultures, it becomes possible to store memories in those cells and then transmit them to another person. 

If memories can be transferred to cell cultures, it might become possible to talk with animals. If those memories could be projected onto computer screens, that could give data about how animals live. The memories of the animals would be downloaded into the cell cultures, and then those memories could be transported to computer screens. Or, of course, some extreme scientists would transfer the EEG of those brain cells to their own brains. 

There is the possibility of transferring "trained neurons" to the nerve channel of a fetus. And that opens new, very fascinating, and at the same time frightening visions in my mind. It would make it possible to create a learning process that continues over generations. So that could be the real "deep learning". That means people like highly trained military officials could be multiplied. 


https://www.newscientist.com/article/2301500-human-brain-cells-in-a-dish-learn-to-play-pong-faster-than-an-ai/


https://en.wikipedia.org/wiki/Kaspar_Hauser


Wednesday, November 24, 2021

Deep neural networks are revolutionizing astronomy and many other things.



Deep learning means that the person knows how to act in certain situations. "Learning" means that the person knows how to act in any situation that connects with a certain case.

But deep learning means that the actor also knows why something is done in a certain case. And also, the actor can predict some situations. In the computer world, prediction means that a solution, like some movement series, is uploaded to RAM (Random Access Memory) for immediate use. 

Deep neural networks in the service of astronomers are opening the road for a new type of artificial intelligence. The idea of the deep neural network that searches for exoplanets is that the system follows a certain well-known exoplanet using multiple different types of telescopes.

Then that system makes a database of the observations, and the network tries to look for phenomena in other stars that match the recorded data. Collecting data also with a smaller telescope simulates targets that are farther away in the universe than the well-known exoplanet. 

So the small telescope can simulate the situation where some exoplanet is located very far from Earth. Then this data can be used as a data matrix for larger telescopes. And by using the larger telescope, the AI can see if the data collected from other, more distant stars matches the data collected from closer stars. 
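The matching step above can be sketched as comparing a reference brightness curve from the well-known exoplanet against candidate measurements from other stars. The curves, the similarity score, and the threshold below are invented illustrations, not real telescope data or the actual ExoMiner method.

```python
# Rough sketch of matching candidate signals against a reference
# "data matrix": here the matrix is a transit light curve, and a
# candidate is accepted if its mean squared difference from the
# reference stays below a threshold (an assumed, illustrative rule).

def mean_squared_difference(reference, candidate):
    return sum((r - c) ** 2 for r, c in zip(reference, candidate)) / len(reference)

def matches(reference, candidate, threshold=0.01):
    """Accept a candidate if it stays close enough to the reference."""
    return mean_squared_difference(reference, candidate) < threshold

reference = [1.0, 1.0, 0.8, 0.6, 0.8, 1.0, 1.0]     # transit dip
candidate = [1.0, 0.99, 0.81, 0.61, 0.8, 1.0, 1.0]  # noisy but similar
flat      = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]     # no dip at all

print(matches(reference, candidate))  # True  -> looks like a transit
print(matches(reference, flat))       # False -> no match
```

A real pipeline would use a trained network rather than a fixed threshold, but the principle is the same: data recorded from a nearby, well-known target acts as the template that distant, noisier signals are scored against.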

The model where the system makes a matrix of one case and then transfers it to other cases would be revolutionary in astronomy. This kind of solution can also benefit many other situations, like AI-controlled cars or similar things. If some solution proves useful, the computer loads the data from the environment and then loads this matrix into its memory. 

And then in the future, when the automated car drives into a similar environment, the system loads the case matrix into RAM (Random Access Memory), so that it can use that data matrix or solution immediately. That means the image of the environment, or the position on the map, acts as a trigger that uploads the solution for immediate use. This is the computer-world version of learning. 
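The trigger-and-preload idea above can be sketched as a small cache: recognizing a known place moves its stored solution into fast memory before it is needed. The place names and movement series below are invented for illustration.

```python
# Minimal sketch of "preload on recognition": the current position
# acts as a trigger that copies a stored solution (a movement
# series) into fast memory for immediate use. All keys and data
# here are hypothetical.

class SolutionCache:
    def __init__(self, stored_solutions):
        self.stored = stored_solutions   # slow storage: position -> movements
        self.ram = {}                    # fast memory for immediate use

    def observe(self, position):
        """Recognizing a known environment preloads its solution."""
        if position in self.stored:
            self.ram[position] = self.stored[position]

    def act(self, position):
        return self.ram.get(position)    # only preloaded data is usable

cache = SolutionCache({"crossing-17": ["slow", "stop", "wait", "go"]})
cache.observe("crossing-17")             # trigger: the place is recognized
print(cache.act("crossing-17"))          # ['slow', 'stop', 'wait', 'go']
```

Before `observe` fires, `act` finds nothing in fast memory; after the trigger, the movement series is ready without a slow lookup, which is the prediction the text describes.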

So the learning system can predict the situation. When the system recognizes the place, and there is a match with some case, the system preloads the solution or the movement series into its RAM. That means the machine learns to predict things. Machine learning is one of the most interesting things in computing. 

In a chess program, machine learning means that the system records the games of the masters and then makes a database for each move. The system must then just make the counter-action against the opponent simply by reconnecting the databases. That makes it possible for the machine to learn things, like which pieces are the most effective against a certain player. The AI can calculate how often some chess player moves a certain piece, 

and conclude what kind of role that piece plays for that certain player. The chess game is a useful testbed for connecting databases. 
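The move-counting idea above can be sketched as simple frequency statistics over recorded games. The sample games and the piece/move representation are invented for illustration; a real chess engine would use far richer features.

```python
from collections import Counter

def piece_frequencies(games):
    """Count how often each piece is moved across all recorded
    games of one player, as relative frequencies."""
    counts = Counter(piece for game in games for piece, _ in game)
    total = sum(counts.values())
    return {piece: n / total for piece, n in counts.items()}

# Each game is a list of (piece, move) pairs for the player under study.
games = [
    [("knight", "Nf3"), ("bishop", "Bc4"), ("knight", "Ng5")],
    [("knight", "Nf3"), ("pawn", "d4")],
]
freqs = piece_frequencies(games)
print(max(freqs, key=freqs.get))  # 'knight' -> this player's key piece
```

The most frequently moved piece hints at the role it plays for that player, which is the conclusion the text describes the AI drawing from the databases.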

But in the same way, the ability to make spontaneous connections can be tested by using astronomical objects as the base. That kind of system can benefit telescopes, chess, and AI, and help make new, more independently operating robots and AI a reality.


https://scitechdaily.com/a-whopping-301-newly-confirmed-exoplanets-discovered-with-new-deep-neural-network-exominer/ 

https://interestandinnovation.blogspot.com/

Newtonian and Einstein models are still useful tools.

The Einstein and Newtonian gravitational principles are still “hard stuff”. And today, we can say that all gravitational models are suitable...