by Yuriy V. Khokhlov, Ph.D., NTUU "KPI"
Motivation
This document is both an outline for the article and a kind of introduction to the discussion
of my idea. For this reason, the style of presentation is not always formal. The purpose of the
document is to prepare you for a discussion. I assume that the readers include experts in
different fields, and I have tried to make the document understandable to each of them. A side
effect is that some fragments of the text will seem obvious and boring to each of them. There
are no neurophysiologists in my circle of contacts; I will try to find consultants.
And yet it so happened that the following chain of events in my life led me to this idea:
· I have been dreaming about AI since the age of 6 and have studied the subject systematically;
· my relatives work in healthcare, biomedical electronics, and now in the AI domain;
· close friends motivate me and share my ideas;
· I have made the necessary acquaintances in the scientific world.
What exactly inspired me to write all of this? The subjectivity of human doctors simply "got"
me. I do not like it when a phone agent of a medical insurance company asks something like
"What happened, and which doctor would you like an appointment with?". In theory they should
always suggest an appointment with a therapist (general practitioner). But in practice, not all
therapists are "equally useful". Therefore, for reliability it is reasonable to organize
consultations with two or three doctors. There is a need for some mediator who would determine
immediately, based on the symptoms, what actions need to be taken.
I dream of a doctor-diagnostician who would be more robust than the fictional Dr. House, but
whose decisions would not be affected by emotion and fatigue. In general, I want to almost
completely exclude the "human factor" from several areas of medicine, and I want the ability
to replicate such "doctors" quickly.
Introduction
The modern level of technology gives humanity a chance to realize its old dreams.
This article discusses how to increase the objectivity of the differential diagnosis process in
medicine by using artificial intelligence technologies.
Research in this direction has been conducted for a long time. The drawback of most of the
solutions I could find on the net is that they focus on making a diagnosis within only one
medical specialty. An example is electrocardiographs that give an opinion on the basis of a
cardiogram.
If you replace a human student with a machine student, it can be trained faster and more
efficiently. Such a student will have incomparably more resources for storing and generalizing
the data of many medical specialties. Thus, such a student will combine the experience of many
doctors. A consultation with it will be equivalent to consultations with several doctors, or
even many times better than one with a human.
In the first part of the article, I try to summarize how the human brain works. I have long
been interested in this topic and have already read a lot about it, and I will attempt to
generalize it here. This is necessary in order to understand how to build such a system
artificially. In the second part, I talk about technical matters and propose trial
demonstration programs. We will discuss possible commercial solutions (including cloud-based
ones).
I was told that I was "floating in the clouds." Now it has become my profession :)
[My work as an associate professor at NTUU "KPI" is not my main one. I develop the
architecture of private cloud solutions at a commercial software development company.]
Initial assumptions
The human brain
Consider the human brain as a computer that implements a model of the surrounding world within
itself.
Its input data are external stimuli from analog sensors.
The model makes it possible to:
· predict possible future event scenarios together with an assessment of the probability of
their actual occurrence;
· make decisions by weighing the predicted options.
Conventionally, functional blocks can be distinguished within the model (the list is not
complete):
· classification (recognition) of objects (secondary sensory cortex);
· simulation of data from the sensors (peculiar only to humans and some primates); this is
probably a subsystem / ability of the neocortex(?).
At the very outset, it should be noted that, both technically and biologically, all brain
blocks are implemented according to the same principle - the principle of a neural network.
There is a theory that any area of the brain can be trained to perform any of the known
functions of the brain. Initially, the brain is not divided into functional zones. For reasons
that are not yet fully understood, the specialization of its regions arises according to the
same scheme in the process of brain growth.
In contrast to the obsolete theory of functional specialization[5] of the brain, according to
the theory of neuroplasticity[6] [7] (multisensory brain mechanisms), the scheme by which the
brain is divided into zones is partly related to the order in which the sense organs are formed
and connected to the brain. Within studies of this theory, experiments are carried out that
indirectly confirm such assumptions. Separation occurs gradually in the process of the brain's
self-learning - it learns to analyze and order the data stream it begins to receive from its
"sensors".
Those areas of the brain to which the sensors' nerve tissues are connected begin to perform
preliminary signal processing, and the areas adjacent to them begin to specialize in the
classification of objects.
If a person is deaf or blind from birth and regains these senses in adulthood, the brain will
not be able to take full advantage of them. At first, images and sounds will mean nothing to
such a person. The brain will try to learn how to process this data, but it is no longer as
plastic as it was in youth. Over time, the situation will improve, but it will still be very
far from normal. There is living proof of this[8] [9].
After this preprocessing, the sensory signals have been transformed into a kind of input
parameters (features) that the rational and emotional brain can operate on.
The rational brain can use the emotional brain as a co-processor.
According to one of the neurophysiological theories, the rational brain interacts with the
emotional brain by marking the input parameters with somatic markers before they enter the
emotional brain. Later, it is able to identify the reactions of the emotional brain that are
associated with these labeled input signals.
The emotional brain is engaged in predicting options for the future and generating options for
action. Each option has its own rating.
The orbitofrontal cortex analyzes the output signals of the emotional brain and performs a
pre-selection (filtering) of these signals before transferring the result back to the rational
brain. Signals with too low a rating are discarded[10].
The human brain has
the ability to restore from memory previously recorded sensations and
load them into the emotional brain. This is how abstract thinking is
realized.
What is the "sixth" sense
Let's try to explain the concept of the "sixth" sense - the moment when we prefer one of the
possible solutions to the problem without apparent logical reasons for
this. Apparently we always do this,
but we do not always notice it.
The algorithm is as follows:
1. Our rational brain formulates the problem for the emotional brain (the prediction block)
and then analyzes the variants the latter proposes.
2. The emotional brain searches for and evaluates possible solutions to the problem based on
experience, i.e. on the model of the world it has developed up to the present moment.
3. The rational brain "likes" most the option that has received the highest rating from the
emotional brain.
The rational brain cannot explain, even to itself, why it gravitates toward one of the proposed
options. It simply cannot build a chain of logical reasoning, because that logic is hidden from
it in the "black box" of the emotional brain.
Thanks to this architecture, the brain can radically increase the speed of working out a
solution by parallelizing computations.
The rational brain is simply not able to parallelize the processing of logical chains so
efficiently; it needs far more resources and time for this than it can afford. As a rule, the
rational brain cannot operate on more than about seven objects simultaneously, and it is
extremely slow.
What is life experience and learning?
In view of this, life experience can be called the model of the world that has developed in the
human brain as a whole over a lifetime. This model includes everything from the analysis and
recognition of signals from the senses to the "black box" of the emotional brain.
To become an expert in
a certain field, it is necessary to build a model of the world that will
ensure sufficiently correct decisions are made in this field.
The brain can build
and correct the model on its own, but this isn't effective. Supervised
purposeful learning can speed up this process.
In the process of supervised learning, the trainer's accumulated experience is transferred to
the student. The trainer formulates problems and shows the ways to solve them optimally.
At the beginning of learning, each person already has his own, different life experience. The
trainer also leaves an "imprint" on it: part of the trainer's subjective model of the world is
copied into the student's subjective model.
It turns out that traditional methods of learning with a live trainer further add to the
problem of differences in life experience.
The problem of differences in life experience
Finally, we have come to the core of the problem.
People in general cannot objectively assess a situation. When assessing it and searching for
solutions, the brain can rely only on its own life experience; therefore, it is capable only of
a subjective evaluation of the situation.
It is precisely the differences in life experience that give rise to subjectivity.
In medicine, subjectivism is especially harmful - it prevents increasing the accuracy of
medical diagnoses and the choice of a treatment plan. Doctors' opinions on the same issue often
vary - "as many doctors, so many opinions."
In order to somehow fight this phenomenon, physicians organize consultations and case
conferences. Discussing the problem makes it possible to compensate for the shortcomings of
individual subjective models of the world. The idea is good, but it also has downsides - low
decision-making speed and poor scalability (efficiency decreases as the number of participants
grows).
The solution
It is proposed to develop the idea of multidisciplinary councils of physicians, but to transfer
it into the domain of artificial intelligence.
The concept of artificial intelligence covers many things, including machines with a
consciousness similar to a human one. However, to solve the problem described above, something
simpler will suffice - something that is already being actively introduced into our daily life.
We will be talking about machines that can learn to perform a certain job for us without being
specially programmed for it. The source code is the same; only the training differs.
The topic of machine learning is now becoming popular again[11]. The power of modern computing
systems and distributed computing technologies now makes it possible to realize long-standing
theoretical developments in this field. For example, Google (Google Cloud ML Platform[12],
Google DeepMind[13]) and Amazon (Amazon AI[14] and Amazon Lex[15]) have already started to
provide AI services for the recognition of text and speech and for translation. Elon Musk and
Microsoft became partners in the OpenAI[16] project worth $1 billion, which aims to develop an
open AI platform based on the Microsoft Azure cloud.
Machine learning, like human learning, is based on examples. In the learning process, the
machine automatically builds a mathematical model that allows it to compute certain assumptions
(hypotheses) from the source data provided to it.
In addition to the obvious superiority of machines in terms of learning speed, they also have
the following unique abilities:
· learn simultaneously from several trainers (teachers);
· learn simultaneously in different places;
· maintain consistently high learning effectiveness at any age;
· make backup copies of successful models (configurations);
· organize natural selection in populations of AI copies;
· train the new generation of AI on the basis of the previous generation of AI;
· organize virtual councils among other AI instances;
· learn from huge amounts of information.
One of the options for communicating with a person can be a step-by-step survey, where each
next question depends on the set of answers to the previously asked questions.
This is how the interface of the famous game AI "Akinator"[17] works. Try playing with it to
see how this can work: the program tries to guess the character you have in mind by
successively asking clarifying questions, gradually narrowing the circle of possible characters
down to one.
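To make the idea of such a step-by-step survey more tangible, here is a minimal Python sketch
(a toy illustration only, not the actual Akinator algorithm; the questions, answers, and
candidate "diagnoses" are invented):

```python
# Toy adaptive survey: each next question depends on the answers given so far.
# The question texts and candidate "diagnoses" below are invented for illustration.

CANDIDATES = {
    "common cold":     {"fever": "no",  "cough": "yes", "chest_pain": "no"},
    "influenza":       {"fever": "yes", "cough": "yes", "chest_pain": "no"},
    "angina pectoris": {"fever": "no",  "cough": "no",  "chest_pain": "yes"},
}

QUESTIONS = {
    "fever": "Do you have a fever? (yes/no): ",
    "cough": "Do you have a cough? (yes/no): ",
    "chest_pain": "Do you feel chest pain? (yes/no): ",
}

def survey():
    remaining = dict(CANDIDATES)
    asked = {}
    while len(remaining) > 1 and len(asked) < len(QUESTIONS):
        # Ask the not-yet-asked question that splits the remaining candidates most evenly.
        feature = max(
            (f for f in QUESTIONS if f not in asked),
            key=lambda f: min(
                sum(1 for c in remaining.values() if c[f] == v) for v in ("yes", "no")
            ),
        )
        answer = input(QUESTIONS[feature]).strip().lower()
        asked[feature] = answer
        # Keep only the candidates consistent with the answer just given.
        remaining = {name: c for name, c in remaining.items() if c[feature] == answer}
    return list(remaining) or ["no matching hypothesis"]

if __name__ == "__main__":
    print("Most likely:", survey()[0])
```

Each answer removes the candidates that contradict it, so the next question is always chosen
against the already-narrowed set, which is exactly the behaviour described above.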
The machine acts like a living doctor:
· it collects the anamnesis of the patient's life;
· proposes the necessary examinations;
· makes a differential diagnosis.
Because the machine learns all the major medical specialties simultaneously, it has unique
capabilities in the field of differential diagnostics. It virtually unites the experience of
many doctors of different specialties. Moreover, each of its specialties is itself based on the
summarized experience of the many physicians who correspond to it.
One consultation with such a machine can thus replace several consultations with several
doctors of a single specialty.
At the first stage of interaction with a person, the diagnostic system acts as a therapist or
family doctor. It then determines the medical areas to which the problem may relate and forms a
virtual consultation among AI instances of the corresponding specializations.
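Purely as an illustration of how such a "virtual consultation" could be assembled, here is a
sketch in which each specialized AI instance returns diagnoses with confidence scores and the
scores are simply summed; the specialty names, the stub models, and the aggregation rule are
all assumptions, not a fixed design:

```python
from collections import defaultdict

# Hypothetical specialized models; in a real system each would be a trained classifier.
def cardiology_model(case):
    return {"angina pectoris": 0.55, "arrhythmia": 0.25}

def pulmonology_model(case):
    return {"pneumonia": 0.40, "angina pectoris": 0.10}

SPECIALISTS = {"cardiology": cardiology_model, "pulmonology": pulmonology_model}

def virtual_consultation(case, relevant_specialties):
    """Combine the opinions of several specialized AI instances by summing confidences."""
    votes = defaultdict(float)
    for specialty in relevant_specialties:
        opinion = SPECIALISTS[specialty](case)
        for diagnosis, confidence in opinion.items():
            votes[diagnosis] += confidence
    # Rank the combined differential diagnosis, most supported option first.
    return sorted(votes.items(), key=lambda kv: kv[1], reverse=True)

print(virtual_consultation({"chest_pain": True}, ["cardiology", "pulmonology"]))
```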
The diagnostic capabilities of the machine can also be used less directly. The AI can act as an
adviser to the doctor and protect him from making mistakes. The doctor fills in the patient's
card, and the AI system immediately analyzes it and compares the doctor's prescription with the
one it would suggest itself. In case of significant differences, it reports a possible error. A
voice interface such as Google Assistant[18] or Amazon Alexa (Echo)[19] would fit here well.
The way the doctor uses the AI in this case is similar to how the rational brain uses the
emotional one as a coprocessor. It could be said that the doctor has acquired something like an
artificial "sixth sense."
Clinic administrations will have the opportunity to monitor their health workers and identify
non-professionals or even criminals.
Intelligent machines can be used to test students' knowledge in medical universities, as well as in medical simulators.
Athletes, and simply people who care about their health, will be able to receive
recommendations and early warnings based on data from their personal trackers. Trackers can use
the common MQTT[20] standard to upload bio-telemetry directly to the Internet.
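As a rough sketch of that idea - the broker address, topic name, and payload fields below are
placeholders, not part of any agreed protocol - a tracker could publish its readings with the
paho-mqtt Python client like this:

```python
import json
import time

import paho.mqtt.client as mqtt  # pip install paho-mqtt (the 1.x client API is shown)

BROKER = "broker.example.org"          # placeholder broker address
TOPIC = "biotelemetry/tracker-0001"    # placeholder topic name

client = mqtt.Client()
client.connect(BROKER, 1883, keepalive=60)

# Publish one sample of bio-telemetry; field names are illustrative only.
sample = {
    "timestamp": int(time.time()),
    "heart_rate_bpm": 72,
    "body_temperature_c": 36.7,
    "skin_conductance_us": 4.2,
}
client.publish(TOPIC, json.dumps(sample), qos=1)
client.disconnect()
```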
The artificial intelligence system, like a person, is subjective; however, it has incomparably
more possibilities for minimizing its subjectivity and, consequently, for increasing the
objectivity of its conclusions.
Diagnostic system architecture
The architecture of the diagnostic AI system will be constructed following the example of the
human brain. We use the same kinds of structural blocks, but in different quantities.
Structural blocks, similarly to how it is implemented in nature, can use the same source code.
Specialization of the blocks is achieved through their profile training.
Consider the flowchart:
A successful interface
largely determines the success of the whole business. And it's not just
about the user interface, but also about the convenience of integration
into existing electronic document management systems in medical
institutions. The main task here is to adapt the external data format to
the internal one.
The Input Processors prepare the data for analysis and interpretation in the Primary
Classifier.
For example, if the data is represented as text in a photo, the Input Processor performs
recognition of letters, words, and phrases. The recognized text is then transmitted to the
Primary Classifier, where it is interpreted into objects ("concepts") on which the Specialized
Classifier blocks operate.
In the next step, the inputs of some of the Specialized Classifier blocks receive a set of
objects for analysis. Which blocks are selected for further data processing depends on the
classes of the previously recognized objects. Each Specialized Classifier has its own
specialty; for example, there is no sense in showing an ultrasound of the kidneys to an
ophthalmologist.
The Judgment and formal logic block performs the evaluation and analysis of the diagnosis
options and weighs the proposed options for additional examination in cases where none of the
Specialized Classifiers has sufficient confidence in the diagnosis.
The results of the Judgment and formal logic block are transferred back to the interface, where
visualization and the initiation of additional data collection take place.
Everything happens in the same way as in the brain:
· Input Processor - the signal from the retina is preprocessed. For example, the consequences
of defects in the retina and optics, lack of lighting, defects like strabismus, etc. are
eliminated.
· Primary Classifier - the primary identification of objects takes place. For example,
establishing that we see a curb on the road, and not a snake.
· Specialized Classifier - an assessment of the threats to life and a search for options to
overcome the obstacle, based on life experience.
· Judgment and formal logic block - the rational brain chooses the optimal option from those
proposed, on the basis of their ratings.
· The control signals are transmitted to the interface (legs, hands).
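To make the data flow more concrete, here is a skeletal sketch of such a multi-stage pipeline.
The block names follow the flowchart above, while the routing rule and the stub logic inside
each block are invented for illustration only:

```python
# A skeletal version of the pipeline: Input Processor -> Primary Classifier
# -> Specialized Classifier -> Judgment and formal logic block. All internals are stubs.

def input_processor(raw):
    """Normalize raw input (e.g. OCR of a photo, noise removal) into clean text/signals."""
    return raw.strip().lower()

def primary_classifier(prepared):
    """Turn prepared data into 'concept' objects and tag them with a domain."""
    concepts = prepared.split()
    domain = "cardiology" if "chest" in concepts else "general"
    return {"concepts": concepts, "domain": domain}

SPECIALIZED_CLASSIFIERS = {
    "cardiology": lambda objs: {"angina pectoris": 0.6},
    "general":    lambda objs: {"needs more data": 0.3},
}

def judgment_block(opinions, threshold=0.5):
    """Pick the best-rated option, or ask for additional examination if confidence is low."""
    diagnosis, score = max(opinions.items(), key=lambda kv: kv[1])
    return diagnosis if score >= threshold else "request additional examination"

def diagnose(raw_complaint):
    prepared = input_processor(raw_complaint)
    objects = primary_classifier(prepared)
    specialist = SPECIALIZED_CLASSIFIERS[objects["domain"]]
    opinions = specialist(objects["concepts"])
    return judgment_block(opinions)

print(diagnose("Chest pain after exercise"))
```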
Primary Classifier
A bit more about the Primary
Classifier.
The main purpose of the Primary Classifier is to concentrate the data and discard redundant
information.
These blocks are also planned to solve such problems as the analysis of ultrasound scans,
cardiograms, X-ray and MRI images. The general idea is that each unit can be trained to
recognize pathologies, to single out certain zones (elements) in the image, and to perform the
necessary measurements (as is done by an ultrasound machine operator).
In addition to machine learning methods, it is quite acceptable to use mathematical
transformations in this block - for example, the Fourier, wavelet, and Radon transforms (the
latter is widely used in the reconstruction of tomographic images), among others.
Thus, we free the Specialized Classifier blocks from having to analyze the image directly;
instead, we build a multistage analysis pipeline.
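As one small example of such a transform-based preprocessing step (the feature choice is
illustrative, not a validated ECG method), the Primary Classifier could compress a raw signal
into a handful of spectral features with a Fourier transform:

```python
import numpy as np

def spectral_features(signal, sampling_rate_hz, n_peaks=3):
    """Compress a 1-D biosignal into its strongest frequency components.

    Returns (frequency, magnitude) pairs for the n_peaks dominant non-DC components;
    a sketch of "concentrating data, discarding redundant information".
    """
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sampling_rate_hz)
    top = np.argsort(spectrum)[-n_peaks:][::-1]
    return [(float(freqs[i]), float(spectrum[i])) for i in top]

# Synthetic example: a 1.2 Hz "heartbeat-like" oscillation plus noise.
fs = 100.0
t = np.arange(0, 10, 1.0 / fs)
ecg_like = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)
print(spectral_features(ecg_like, fs))
```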
Model in Machine Learning
Machine learning is a cocktail of mathematical analysis, mathematical optimization, statistics,
and numerical methods.
Roughly speaking, it all boils down to using methods of mathematical optimization to find the
best parameters of a mathematical model.
The model itself can be:
· a linear or nonlinear function of several variables:
h(x1, x2, x3, ...) = θ0 + x1*θ1 + x2*θ2 + x3*θ3 + ...
or
h(x1, x2, x3, ...) = θ0 + x1*θ1 + x1*x2*θ2 + x1²*θ3 + x2²*θ4 + ...
where h is the hypothesis; we optimize the coefficients θ0, θ1, θ2, θ3, ...;
· a neural network model - we optimize the weights of the neural connections.
In the general case, optimization of the model parameters comes down to minimizing an objective
function. Here, the objective function is the total prediction error as a function of the
current values of the model's parameters (the cost function). The prediction error of the
hypothesis h is calculated as the difference between the prediction and the truth; the truth is
known to us from the training examples. To obtain the model's prediction, we feed it the input
parameters from the training set.
Minimization of the objective function is usually performed with gradient-based numerical
methods: by differentiating the objective function with respect to the parameters, we move step
by step toward its minimum (e.g. gradient descent).
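A minimal sketch of this whole loop - a linear hypothesis, a squared-error cost function, and
gradient descent - on a made-up one-feature dataset might look as follows; the learning rate
and the number of iterations are arbitrary choices for the example:

```python
import numpy as np

# Made-up training set: one feature x and the "truth" y known from the examples.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 4.2, 5.9, 8.1, 9.8])

theta0, theta1 = 0.0, 0.0   # parameters of the hypothesis h(x) = theta0 + theta1 * x
alpha = 0.01                # learning rate
m = len(x)

for _ in range(5000):
    h = theta0 + theta1 * x                 # predictions of the current hypothesis
    error = h - y                           # prediction minus truth
    cost = (error ** 2).sum() / (2 * m)     # squared-error cost function
    # Gradient of the cost with respect to each parameter, then one descent step.
    theta0 -= alpha * error.sum() / m
    theta1 -= alpha * (error * x).sum() / m

print(f"h(x) ≈ {theta0:.2f} + {theta1:.2f}*x, final cost {cost:.4f}")
```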
Below I include a few screenshots[21] to explain how all of this works. I am counting on having
the opportunity to use them as a visual aid during an oral discussion.
[Screenshot: red crosses - training examples; the linear hypothesis in blue, the non-linear one in pink]
[Screenshot: classification by input parameters x1, x2, ...]
[Screenshot: the cost function as the objective function]
Ways to collect training data
The simplest and most affordable solution is to use physiological data banks such as
PhysioBank[22] in the PhysioNet[23] system. This is the option we will choose to implement a
test system that can be demonstrated to potential investors.
A real commercial product will need to be trained more seriously. It is expected to use data
from patients' medical records, depersonalized beforehand.
Each training example
will contain:
1. a set of diagnostic data
and a doctor's report;
2. a set of diagnostic data
and additional tests suggested by the doctor.
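Purely to illustrate what one such depersonalized training example might look like in code
(every field name and value below is invented, not a proposed record format):

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class TrainingExample:
    """One depersonalized case: diagnostic data plus the doctor's conclusions."""
    diagnostic_data: Dict[str, float]          # measurements, test results, coded symptoms
    doctors_report: str                        # the diagnosis given by the doctor
    suggested_tests: List[str] = field(default_factory=list)  # follow-up examinations

example = TrainingExample(
    diagnostic_data={"heart_rate_bpm": 96.0, "body_temperature_c": 38.4},
    doctors_report="suspected viral infection",
    suggested_tests=["complete blood count", "chest X-ray"],
)
```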
The doctors whose opinions are worth considering are selected in advance by an authoritative
board. The principle is the same as that used by Google when, for example, it appoints local
Google Maps moderators.
The system is trained
on examples from life.
Again, remember Google[24]. The company systematically created services that helped it gather
information about humans in various fields: a translator (extracting meaning from text), an
automated telephone reference service (collecting samples of human speech for recognition and
synthesis), Google Goggles (collecting samples of images of text and objects), street panoramas
(for driving training), a social network (studying social relations and the laws of information
dissemination), and so on.
Proof of concept - determination of critical states
In order to demonstrate the viability of the idea, I propose to build a relatively simple application.
The Department of Industrial Electronics at NTUU "KPI" is currently developing a biotelemetry system for rapid response teams. One of the tasks is to develop a method for determining a person's critical state from the data of the sensors they carry; the degree of criticality should also be assessed.
As initial data, we will use the heart rate, body temperature and, possibly, skin conductance. Examples of signal changes are available in the previously mentioned PhysioBank system.
We have the
opportunity to ask the doctors who take part in the research to help us
in preparing the training examples.
I propose to choose a simple machine learning algorithm, train it, and then evaluate the reliability of the results it provides.
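As a hedged sketch of what such a "simple algorithm" could be - logistic regression over the
three signals named above - here is a version with completely synthetic data standing in for
the PhysioBank recordings and the doctors' labels:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in data: [heart_rate_bpm, body_temperature_c, skin_conductance_us].
normal   = rng.normal([75, 36.7, 4.0], [8, 0.3, 0.8], size=(200, 3))
critical = rng.normal([135, 39.0, 9.0], [15, 0.6, 1.5], size=(200, 3))
X = np.vstack([normal, critical])
y = np.array([0] * 200 + [1] * 200)   # 0 = normal, 1 = critical (the doctors' labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# predict_proba gives a graded score usable as a "degree of criticality".
print("criticality of [120 bpm, 38.5 C, 7 uS]:",
      model.predict_proba([[120, 38.5, 7.0]])[0, 1])
```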
Turn-key solutions
I have many thoughts about possible products and solutions in this field, and I would be happy
to discuss them soon. This draws on my expertise in the architecture of cloud-based services,
embedded electronic systems (including IoT and IoE), machine learning and AI, and industrial
automation (the subject of my Ph.D. thesis).
I invite everyone
interested to write joint scientific articles.
Conclusions
The main goal of the described system is to increase the objectivity of medical reports.
Having first considered the operating principles of the human brain, we proposed an automated system that uses a similar approach.
If we develop this idea further, it turns out that, in order for the human factor not to
influence decision-making at all, it is ultimately necessary simply to exclude the human from
this process. This will become possible when the training cycle is looped back onto the AI
itself - the previous generation of AI teaches the next one.
Here is what I saw here, at R.I.T.
There are now a lot of studies on the medical field and the use of machine learning in it. I
listened to lectures on: the analysis of cardiograms using supervised and unsupervised
learning, DNA analysis with unsupervised learning (auto-encoders and convolutional neural
networks), natural language processing (NLP) for detecting brain damage, and skin examination.
I am sure this is only the tip of the iceberg in this direction. It can be said that AI
"specialist doctors" are now being actively developed; what is still needed is an AI "chief
physician".
In all the presentations I listened to, one very big problem stood out - the subjectivism of
the experts.
References
[3] Limbic
system: structure and functions
[5] Functional specialization (brain)
- https://en.wikipedia.org/wiki/Functional_specialization_(brain)
[6] Is it possible to ‘learn’ a new
sense? - http://ykhokhlov.blogspot.com/2013/11/is-it-possible-to-learn-new-sense.html
[7] "A
Concussion Stole My Life" Clark Elliott on TBI and Brain Plasticity
- https://youtu.be/9r2pK1j3hQQ
[8] Tracking the evolution of crossmodal plasticity and visual
functions before and after sight restoration http://jn.physiology.org/content/113/6/1727 (PDF: http://jn.physiology.org/content/jn/113/6/1727.full.pdf)
· Valeria Occelli “Molyneux’s Question: A Window
on Crossmodal Interplay in
Blindness”
https://www.rifp.it/ojs/index.php/rifp/article/view/rifp.2014.0006/279
https://ria.ru/science/20150119/1043203139.html (Ru)
· Giulia Dormal and others “Tracking the evolution of crossmodal plasticity and visual
functions before and after sight restoration”
https://www.physiology.org/doi/full/10.1152/jn.00420.2014
· Shirl Jennings - (1940 –
October 26, 2003) was one of only a few people in the world to
regain his sight after lifelong blindness and was the inspiration
for the character of Virgil Adamson in the movie At First Sight
(1999) starring Val Kilmer and Mira Sorvino.
https://en.wikipedia.org/wiki/Shirl_Jennings
His paintings: https://web.archive.org/web/20180101151230/http://www.atfirstsightthebook.com:80/shirls-paintings.html
· An Account of Some
Observations Made by a Young Gentleman, Who Was Born Blind, or Lost
His Sight so Early, That He Had
no Remembrance of Ever Having Seen, and was couched between 13 and
14 Years of Age. By Mr. Will. Cheffelden,
F. R. S. Surgeon to Her Majesty, and to St. Thomas's Hospital. Chesselden, W.; Cheselden,
W Philosophical Transactions (1683-1775) (report from 1728):
https://archive.org/stream/philosophicaltra3517roya#page/n89/mode/2up
· Sight Unseen - Two years
after Mike May regained his sight, he still can't recognize his own
wife (Complete recovery of vision in blind people can
not be carried out)
http://discovermagazine.com/2002/jun/featsight
https://geektimes.ru/post/278400/ (Ru)
[10] A
person who did not know how to make decisions:
· Feeling our way to
decision - Sydney Morning Herald (Feb 28, 2009)
https://www.smh.com.au/national/feeling-our-way-to-decision-20090227-8k8v.html
· Jonathan
D. Wallis “Orbitofrontal
Cortex and Its Contribution to Decision-Making” (2007)
https://pdfs.semanticscholar.org/2194/b0c88ef4f79e7f8547febc2739593229cc8b.pdf
http://olegart.livejournal.com/1451132.html (Ru - article review)
[11] Articles of Roman V. Yampolskiy - https://scholar.google.com/citations?hl=en&user=0_Rq68cAAAAJ&view_op=list_works&sortby=pubdate
[20] MQTT (Message Queuing Telemetry
Transport) is an ISO standard (ISO/IEC PRF 20922) - https://en.wikipedia.org/wiki/MQTT
[21] Slides are borrowed from Andrew Ng's machine learning course (https://www.coursera.org/learn/machine-learning).
[24] How and why Google creates artificial
intelligence (automatically
translated)
Original: http://itc.ua/articles/kak-i-zachem-google-sozdayot-iskusstvennyiy-intellekt/ (Ru)
[25] He is now conducting research in the field of artificial intelligence for his research
degree at R.I.T. (Rochester, New York).
Copyright
(c) Yuriy Khokhlov, 2018