Gene Alpha : The A.I for Everything


Pineapplem3's Gene Alpha, "the A.I for everything", is currently in the development stage. Work on Gene Alpha has already started, and Pineapplem3 estimates it will release Gene Alpha in 2020. The company claims Gene Alpha will be the final version of the Gene robotics revolution. The name "Alpha" also alludes to "Gene Infinity", meaning the possibilities of the Gene A.I are intended to be infinite. Because Gene is being developed day by day, the company expects it to become one of the best A.I systems in the future.

The project is led by Nirmal Ram along with Faseen Mohammed and other young engineers, and development is ongoing at the Pineapplem3 IDC. The company also claims that in the future Gene A.I will be a doctor, adviser, mentor and good friend: she will be able to help humans very deeply, and she can imitate emotions and human routines. The company believes this will be possible in 2020.

More About Gene A.I Robotics Revolution

1. INTRODUCTION

Artificial intelligence (AI, also machine intelligence, MI) is Intelligence displayed by machines, in contrast with the natural intelligence (NI) displayed by humans and other animals. In computer science AI research is defined as the study of “intelligent agents”: any device that perceives its environment and takes actions that maximize its chance of success at some goal. Colloquially, the term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving”.

The scope of AI is disputed: as machines become increasingly capable, tasks considered as requiring “intelligence” are often removed from the definition, a phenomenon known as the AI effect, leading to the quip “AI is whatever hasn’t been done yet.” For instance, optical character recognition is frequently excluded from “artificial intelligence”, having become a routine technology. Capabilities generally classified as AI as of 2017 include successfully understanding human speech, competing at a high level in strategic game systems (such as chess and Go), autonomous cars, intelligent routing in content delivery networks, military simulations, and interpreting complex data, including images and videos.

The traditional problems (or goals) of AI research include reasoning, knowledge, planning, learning, natural language processing, perception and the ability to move and manipulate objects. General intelligence is among the field’s long-term goals. Approaches include statistical methods, computational intelligence, and traditional symbolic AI. Many tools are used in AI, including versions of search and mathematical optimization, neural networks and methods based on statistics, probability and economics. The AI field draws upon computer science, mathematics, psychology, linguistics, philosophy, neuroscience, artificial psychology and many others.

The field was founded on the claim that human intelligence “can be so precisely described that a machine can be made to simulate it”. This raises philosophical arguments about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence, issues which have been explored by myth, fiction and philosophy since antiquity. Some people also consider AI a danger to humanity if it progresses unchecked.

In the twenty-first century, AI techniques have experienced a resurgence following concurrent advances in computer power, large amounts of data, and theoretical understanding; and AI techniques have become an essential part of the technology industry, helping to solve many challenging problems in computer science.

AI is relevant to any intellectual task. Modern artificial intelligence techniques are pervasive and are too numerous to list here. Frequently, when a technique reaches mainstream use, it is no longer considered artificial intelligence; this phenomenon is described as the AI effect.

High-profile examples of AI include autonomous vehicles (such as drones and self-driving cars), medical diagnosis, creating art (such as poetry), proving mathematical theorems, playing games (such as Chess or Go), search engines (such as Google search), online assistants (such as Siri), image recognition in photographs, spam filtering, prediction of judicial decisions and targeting online advertisements.

With social media sites overtaking TV as a source for news for young people and news organizations increasingly reliant on social media platforms for generating distribution, major publishers now use artificial intelligence (AI) technology to post stories more effectively and generate higher volumes of traffic.

2. Maze solving : problem definition

The basic definition of the problem is the following: “An autonomous robot has to solve a maze in which it has been placed”. The robot is put at a random position in the maze and has to find its way to another position, known as the end, which is recognizable by the robot. Several parameters have to be specified at the outset in order to define the project as precisely as possible. These parameters (or rules) are described below:

  1. The maze is a large rectangle containing several walls.

(a) There are walls all around the maze (it is closed);

(b) The walls are straight;

(c) The walls are perpendicular to each other;

(d) The maze can be divided into squares (also called cells) of the same dimensions, which do not contain inner walls. In other words, walls can be positioned only on the sides of these squares.

  2. The robot is independent.

(a) The robot initially knows nothing about the shape of the maze;

(b) The robot can know the dimensions of the maze;

(c) The robot can assume that it moves through a maze with the above-defined parameters.

  3. The robot has to reach several goals:

(a) Find the end position;

(b) Find the shortest path between the cell where it started and the end position.

The definition of these parameters was done before anything else and is the starting point of the analysis of the problem. These rules are the skeleton of the project; without them it would be impossible to go forward. They were defined clearly so that everyone could agree on the important points of the project, such as what the robot should do, and they also contain some restrictions compared to the general problem of an autonomous robot's behavior in an unknown environment, as introduced at the beginning of this document. The reason for these restrictions is that it is better to have a closer goal to reach first, instead of beginning directly with a problem that is too hard and too large. It is by small steps that a project can be managed without running out of steam or getting lost. Accordingly, this project was managed step by step, and these steps were decided at the beginning. The first step consists in deciding how the project will be managed and what the best environment is to work on it together. The main questions are: what kind of robot will be used? In which language will the software be written? How will the work be split between the group members?

The second step is the theoretical analysis. In this part, the main activity is to read documents on the existing work on the subject, find the right algorithms, and then understand them deeply. This step could have been the first one, but the decision was made to first look at the environment and the tools to be used, in order to have more precise ideas about what to read and to be able to sketch the coding mentally while reading the papers. Furthermore, the first step helped to identify which part of the problem each member of the group was most interested in, and thus to better divide the research work.

The third step is the software analysis, a step to decide how the software embedded in the robot will be organized. Since this is an embedded system, tasks have to be defined with their general behaviour and the way they will intercommunicate. The fourth step is the implementation: first, small tests on the hardware, in order to become more confident with the working environment, the robot, its sensors and actuators; then writing the first version of the implementation with the algorithms and the organization defined in the previous steps. Tests have to be performed often and independently on the different parts of the software before merging all the working parts into a complete program. This is a very critical step, because it is when all the previous decisions are tested in real conditions; sometimes things do not work as expected and difficulties can arise. Here team work is more important than ever, which is why regular meetings have to be scheduled, to review the work done and decide together how to get through the difficulties. It is also possible to go back to the previous steps and return to the implementation as often as needed.

The (optional) fifth step can be to generalize the problem, for example by using a maze where the walls are not all straight, or a real room with several kinds of objects (of different sizes and shapes) as walls, or by not specifying the dimensions of the environment to map. In brief, this step can be done if all the previous steps have been completed and the deadline has not been reached, and it consists in removing the restrictions one by one to obtain a more generic and smarter robot.

3. Working environment

3.1 The Lego robot

At the beginning, the robot has two wheels and one sensor. This architecture can be modified later in the project, after several tests, to better solve the maze. To realize this project, an NXT Micro Computer brick will be used as the main controller. This brick will communicate with other LEGO devices to map the environment and to find the shortest path from the start point to the end point. The other components could be distance sensors to find where the walls are, a color sensor to detect whether the robot has arrived at the right position (the end), and motors to move the robot and to rotate the distance sensor for a better understanding of the environment.

3.2 The maze

3.3 nxtOSEK and C

After deciding which components to use to build the robot, the software platforms also had to be chosen. To program this device we will use nxtOSEK, a real-time operating system for the LEGO Mindstorms programmable NXT controller. To edit the source code we can use gedit, which is installed by default in the Ubuntu operating system, and as a compiler we use the GNU ARM compiler.

In mobile robotics, autonomous navigation is an important feature because it allows the robot to move independently from one point to another without a tele-operator. Numerous techniques and algorithms have been developed for this purpose, each having their own merits and shortcomings [1-3,8]. Maze solving, although artificial in terms of the confines the robot is subjected to, is a structured and controlled implementation of autonomous navigation, which is sometimes preferable for studying specific aspects of the problem [1]. This paper discusses an implementation of a small mobile robot designed to solve a maze based on the flood-fill algorithm.

The maze-solving task is similar to the ones in the MicroMouse competition, where robots compete to solve a maze in the least time possible and in the most efficient way. A robot must navigate from a corner of a maze to the center as quickly as possible. It knows where the starting location is and where the target location is, but it does not have any information about the obstacles between the two. The maze is normally composed of 256 square cells, where the size of each cell is about 18 cm × 18 cm. The cells are arranged to form a 16 row × 16 column maze. The starting location of the maze is on one of the cells at its corners, and the target location is formed by four cells at the center of the maze. Only one cell is open for entrance. The requirements for the maze walls and support platform are provided in the IEEE standard [2,3].

4. THE MAZE AND THE ROBOT

1. The maze designed for the robot to solve is of the size of 6×6 cells, as shown in Figure 1. The actual maze constructed, as shown in Figure 2, has a physical size of about 2.25 m.

2. The maze was designed so that it has two paths through which it can be solved. One of the paths is longer than the other. The robot (Figure 3) must decide which of the paths is shorter and solve the maze through that path.

5. Updating The Wall Data

Before the robot decides where it wants to move, it has to check whether it is surrounded by walls in any of the three directions: right, left and front. The robot reads the distance to any obstacle in each direction and checks whether the distance is less than 20 cm; the directions in which the reading falls below 20 cm are updated as “wall” on their respective side. This is illustrated by the flow chart in Figure 5. For the robot to update the correct wall data, it first has to know which direction it is facing. There are four orientations for the robot: north, south, east or west, as shown in Figure 6. The initial orientation was set at the start, and the robot keeps track of any changes.
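The wall-update step above can be sketched in C as follows. This is a minimal illustration, not the paper's code: the threshold constant, type and function names are assumptions, and it assumes a reading below the 20 cm threshold means a wall on that side.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative sketch: a reading below the 20 cm threshold means a wall. */
#define WALL_THRESHOLD_MM 200

typedef struct {
    bool left, front, right;
} WallReading;

/* Convert raw ultrasonic distances (mm) into wall flags for the current cell. */
WallReading detect_walls(int left_mm, int front_mm, int right_mm) {
    WallReading w;
    w.left  = left_mm  < WALL_THRESHOLD_MM;
    w.front = front_mm < WALL_THRESHOLD_MM;
    w.right = right_mm < WALL_THRESHOLD_MM;
    return w;
}
```

The flags would then be rotated into the global north/south/east/west frame according to the robot's current orientation before being stored in the wall map.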

6. ALGORITHM

Choosing an algorithm for the maze robot is critical in solving the maze. In this exercise, the flood-fill algorithm was chosen to solve the maze due to its balance of efficiency and complexity. There are four main steps in the algorithm: Mapping, Flooding, Updating and Turning [2, 6-7], which are described in the following sub-sections.

Mapping The Maze

For the robot to be able to solve the maze, it has to know how big the maze is and virtually divide it into a certain number of cells that can be used later in calculating the shortest path to the destination. In this project, a maze of 6×6 cells is used. Between two cells there can be a wall; thus, in a row of six cells, there are five walls in between them, for a total of eleven units of cells or walls per row. This information is stored in an 11×11 array, as shown in Figure 4. The white units are the cells in which the robot can be placed. The orange units are the locations of potential walls. The black units indicate wall intersections, which are ignored by the algorithm. The external borders of the maze are also ignored, as they are fixed boundaries of the maze. Both cells (white) and walls (orange) are set to zero as their initial conditions.
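The cell-and-wall indexing described above can be sketched as follows. Function names are illustrative, but the layout matches the text: in the 11×11 array, cells sit at even indices and the odd index between two cells holds the wall slot.

```c
#include <assert.h>

#define GRID 11  /* 6 cells + 5 wall slots per row or column */

/* Map a maze cell coordinate (0..5) to its index in the 11x11 array:
   cells sit at even indices 0, 2, ..., 10. */
int cell_index(int cell) { return 2 * cell; }

/* Index of the wall slot between two adjacent cells along one axis:
   the odd index lying between the two even cell indices. */
int wall_between(int cell_a, int cell_b) {
    return (cell_index(cell_a) < cell_index(cell_b))
         ? cell_index(cell_a) + 1
         : cell_index(cell_b) + 1;
}
```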

Flooding The Maze

After updating the wall information for the current cell, the robot starts to flood the matrix to find the shortest path to the goal [6]. The flow chart in Figure 7 shows how the robot floods the matrix and makes decisions by checking one cell at a time. It does the same for all the cells and keeps repeating until a path between the robot and the goal is found. The algorithm assigns a value to each cell based on how far it is from the destination cell; accordingly, the goal cell gets a value of 1. If the robot is standing on a cell with a value of 4, it will take the robot 3 steps (3 cells) to reach the destination cell. The algorithm assumes that the robot cannot move diagonally and can only make 90-degree turns.
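The flooding step can be sketched in C as below. This is an illustration under stated assumptions, not the paper's implementation: the array names and the wall encoding (separate horizontal and vertical wall arrays instead of the 11×11 map) are hypothetical, and the repeated-sweep relaxation stands in for the flow chart in Figure 7.

```c
#include <assert.h>
#include <string.h>

#define N 6  /* 6x6 cell maze, as in this project */

/* walls_h[r][c] nonzero = wall between cell (r,c) and the cell below it;
   walls_v[r][c] nonzero = wall between cell (r,c) and the cell to its right.
   The goal cell is seeded with 1, and every other reachable cell ends up
   holding 1 + its shortest-path distance to the goal, as described above. */
void flood(int value[N][N], int walls_h[N][N], int walls_v[N][N],
           int goal_r, int goal_c)
{
    memset(value, 0, sizeof(int) * N * N);  /* 0 = not yet flooded */
    value[goal_r][goal_c] = 1;

    int changed = 1;
    while (changed) {            /* repeat sweeps until no cell updates */
        changed = 0;
        for (int r = 0; r < N; r++)
            for (int c = 0; c < N; c++) {
                if (value[r][c] == 0) continue;
                int v = value[r][c] + 1;
                /* relax each neighbour that is reachable (no wall between) */
                if (r > 0   && !walls_h[r-1][c] && (!value[r-1][c] || value[r-1][c] > v)) { value[r-1][c] = v; changed = 1; }
                if (r < N-1 && !walls_h[r][c]   && (!value[r+1][c] || value[r+1][c] > v)) { value[r+1][c] = v; changed = 1; }
                if (c > 0   && !walls_v[r][c-1] && (!value[r][c-1] || value[r][c-1] > v)) { value[r][c-1] = v; changed = 1; }
                if (c < N-1 && !walls_v[r][c]   && (!value[r][c+1] || value[r][c+1] > v)) { value[r][c+1] = v; changed = 1; }
            }
    }
}
```

After flooding, the robot simply moves to whichever accessible neighbour has the lowest value, which matches the "cell value 4 means 3 steps to go" reading in the text.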

Control board

Processing power is provided by an Arduino board. The board is built around an Arduino UNO, a microcontroller board with 16 KB of flash memory for storing the code. The microcontroller can be programmed in a C-like language (the Arduino “Processing”-style programming language).

Obstacle Sensors

Three ultrasonic distance sensors were placed on the right, on the left and in front of the robot. Each ultrasonic sensor measures the distance between the robot and any obstacle in millimeters.

Wheel Rotation Encoder

Each wheel is equipped with a sensor which is basically a pair of infra-red transmitter and receiver. By counting the holes in the wheel and knowing the wheel diameter, the robot can encode the distance traveled. In this case, there are eight holes in the wheel and the wheel diameter is 7.9 cm. That means the distance traveled is 24.8 cm (7.9 × π) when the wheel rotates a full cycle. Figure 9 shows the counting based on one of the wheel rotation sensors. A high sensor reading is set to one and a low reading is set to zero. The frame in black represents the detection of one cell moved, which is 16 toggles between ones and zeros.
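The encoder arithmetic above can be written down directly; the constant and function names here are illustrative. Eight holes produce 16 high/low toggles per revolution, and one revolution covers the wheel circumference of about 24.8 cm.

```c
#include <assert.h>

#define HOLES_PER_REV   8                    /* eight holes in the wheel */
#define TOGGLES_PER_REV (2 * HOLES_PER_REV)  /* each hole = one high + one low */
#define WHEEL_CIRC_MM   248                  /* 7.9 cm x pi, about 24.8 cm */

/* Distance traveled in millimeters for a given encoder toggle count. */
long toggles_to_mm(long toggles) {
    return toggles * WHEEL_CIRC_MM / TOGGLES_PER_REV;
}
```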

Checking for Turns

After the robot has decided which direction it will go next, it returns the number of degrees it needs to turn in order to go to the intended cell (Table 1). After turning the robot, the algorithm updates the new orientation of the robot, i.e. facing north, south, east or west.
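A sketch of this turn/orientation bookkeeping is below. The clockwise enum encoding and function names are assumptions for illustration (Table 1's exact entries are not reproduced here); the idea is simply that heading and turn angle determine each other.

```c
#include <assert.h>

/* Headings encoded clockwise; the encoding is illustrative. */
typedef enum { NORTH, EAST, SOUTH, WEST } Heading;

/* Degrees to turn (clockwise positive) to face a target heading,
   normalised to -90, 0, 90 or 180. */
int turn_degrees(Heading current, Heading target) {
    int quarters = ((target - current) % 4 + 4) % 4;  /* 0..3 quarter turns */
    if (quarters == 3) return -90;  /* three quarters clockwise = one left turn */
    return quarters * 90;
}

/* After a turn, the orientation advances by the turned quarter turns. */
Heading update_heading(Heading current, int degrees) {
    int quarters = ((degrees / 90) % 4 + 4) % 4;
    return (Heading)((current + quarters) % 4);
}
```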

HARDWARE DESIGN

The robot has a length of 22 cm, a width of 15 cm and a height of 15 cm. As illustrated in Fig 1, the robot is equipped with three ultrasonic distance sensors facing front, left and right to scan the area ahead for obstacles and specifically to detect walls. A wheel rotation encoder is placed near each wheel so that the extent to which the wheel is rotating can be detected. With the diameter of the wheel known, the rotation can be converted into distance traveled.

Motor Drive

The two wheels are driven by a pair of servo motors which are interfaced to the Arduino board through an L293D dual H-bridge. An L293D can drive two servo motors or two DC motors, which can be controlled in both the clockwise and counter-clockwise directions. It has an output current of 600 mA and a peak output current of 1.2 A per channel. The built-in diodes protect the circuit from back EMF (electromotive force) at the outputs. The supply voltage range varies from 4.5 V to 36 V, making the L293D a flexible choice for a motor driver.
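The H-bridge direction logic can be illustrated as follows. This is a generic sketch of how an L293D channel's two input pins select the spin direction; the type and function names are hypothetical, and the actual pin assignments on the robot are not given in the text.

```c
#include <assert.h>
#include <stdbool.h>

/* One L293D channel is driven by two input pins: 1/0 spins the motor one
   way, 0/1 the other, and 0/0 stops it. */
typedef enum { STOP, FORWARD, REVERSE } MotorCmd;

/* Compute the two input-pin levels for a motor command. */
void motor_pins(MotorCmd cmd, bool *in1, bool *in2) {
    *in1 = (cmd == FORWARD);
    *in2 = (cmd == REVERSE);
}
```

On the Arduino these two booleans would be written to the channel's input pins with `digitalWrite`, with the enable pin used for on/off or PWM speed control.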

ERROR DETECTION AND CORRECTION

Moving in a Straight Line with a PI(D) Controller

Because the motors spin at slightly different speeds even after calibration, the robot tends to drift to one side when it moves. For the robot to stay in the middle of the corridor inside the maze, a PI controller was used to correct the error based on the inputs from the ultrasonic sensors. The controller was tuned by applying the Ziegler-Nichols method to the difference between the distances detected by the left and right sensors. With P-control, the ultimate gain was found to be K = 4.
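A minimal PI update of the kind described can be sketched as below. The gains shown are placeholders, not the Ziegler-Nichols values from the tuning procedure, and the struct and function names are assumptions; the error term, however, is the left-minus-right sensor difference named in the text.

```c
#include <assert.h>

/* Minimal PI controller sketch for keeping the robot centred. */
typedef struct {
    float kp, ki;     /* proportional and integral gains (placeholders) */
    float integral;   /* accumulated error */
} PIController;

/* Returns a steering correction: zero when centred in the corridor,
   positive when the robot is closer to the right wall. */
float pi_update(PIController *pi, float left_mm, float right_mm, float dt) {
    float error = left_mm - right_mm;
    pi->integral += error * dt;
    return pi->kp * error + pi->ki * pi->integral;
}
```

The correction would be added to one motor's speed command and subtracted from the other's, steering the robot back toward the corridor centre.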

 

Ultrasonic Sensor Readings

As the ultrasonic sensors were exhibiting some irregular behavior when the detected distance was below 2 cm, extra tests were done to find the range of correct readings. The robot was set to move straight with the PI controller, and the readings of both the left and right sensors were recorded. As shown in Fig 6, 1680 readings were recorded, neglecting the readings that occurred fewer than 30 times. The highest occurrence was the value of 161 millimeters, which occurred 1344 times. Based on that, the target value of each sensor was chosen to be 80 mm for the robot to be in the middle between the two walls. Another set of tests was done to find the accurate range for each sensor alone. The robot was randomly moved away from and close to a wall. 3680 readings were recorded, neglecting the readings that occurred fewer than 30 times. Fig 7 shows that most of the occurrences happened at the values of 34 and 127. Based on that, the accepted range of readings is set between 34 and 127. Any value outside this range is neglected.
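The range filter described above is simple to express in code. This sketch assumes (as one plausible policy, since the text only says out-of-range values are "neglected") that a rejected reading is replaced by the previous accepted one; the function name is illustrative.

```c
#include <assert.h>

/* Accepted sensor window from the tests described above (millimeters). */
#define MIN_VALID_MM 34
#define MAX_VALID_MM 127

/* Return the raw reading if it is in range, otherwise keep the previous
   accepted value (assumed policy for "neglecting" bad readings). */
int filter_reading(int raw_mm, int previous_mm) {
    if (raw_mm < MIN_VALID_MM || raw_mm > MAX_VALID_MM)
        return previous_mm;
    return raw_mm;
}
```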

Turning Left or Right Using Wheel Rotation Encoders

There are three types of turning that the robot can make: it can turn 90 degrees to the right, 90 degrees to the left or 180 degrees to the rear. A few tests were conducted to measure how many toggles the encoder counts before the robot turns 360 degrees with both wheels running at the same speed in opposite directions of rotation. Based on the results, it was determined that a 90-degree turn equals 5 or 6 counted toggles and a 180-degree turn equals 11 toggles.
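Mapping a requested turn to an encoder toggle target can be sketched as below. The 5-6 and 11 toggle figures come from the calibration above; using 6 for a 90-degree turn is an assumption (the text allows 5 or 6), and the function name is illustrative. The turn direction itself is handled by the motor commands, so only the magnitude matters here.

```c
#include <assert.h>

/* Encoder toggle count at which an in-place turn should stop,
   based on the calibration figures above. */
int toggle_target(int degrees) {
    int d = degrees < 0 ? -degrees : degrees;  /* sign = direction only */
    if (d == 90)  return 6;   /* measured 5-6 toggles; 6 assumed here */
    if (d == 180) return 11;  /* measured 11 toggles */
    return 0;                 /* no turn requested */
}
```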

Nirmal Ram is the founder and CEO of Pineapplem3 Inc, co-founder of the Gene AI Foundation, and a researcher in artificial intelligence and machine learning. Born in April 1998 in Kerala, India, he is a passionate programmer and developer who approaches his work with ethics and confidence. His start-up journey began in 2016, and he is skilled in handling more than ten computer languages. A hard worker, freelancer and instructor, he has been earning through his freelancing journey since 2017, and he brought up pineapplem3.com, worth about $67.16, in 2016.