Controllers

Controllers Task 1

class controllers.controllers_task1.LearnedController(name, goal, N, net, net_input, communication, **kwargs)

The robots are moved following a controller learned by a neural network.

Parameters
  • name – name of the controller used (in this case learned)

  • goal – task to perform (in this case distribute)

  • N – number of agents in the simulation

  • net – network to be used by the controller

  • net_input – input of the network (one of prox_values, prox_comm or all_sensors)

  • communication – states whether communication is used by the network

  • kwargs – other arguments

perform_control(state, dt)

Extract the input sensing from the list of (7 or 14) proximity sensor readings, one for each sensor. The first 5 entries come from the frontal sensors, ordered from left to right; the last 2 come from the rear sensors, ordered from left to right. In the all_sensors case, the first 7 values refer to prox_values and the following 7 to prox_comm. Each value is normalised by dividing it by 1000, the mean value of the sensor readings.

The obtained sensing is passed as input to the net to obtain the speed and, when communication is enabled, the message to be transmitted (a sketch of this step is given below).

Note

The robots at the ends of the line are kept still, and 0 is always sent as their message.

Parameters
  • state – object containing the agent information

  • dt – control step duration

Return speed, communication

the velocity and the message to communicate
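
A minimal sketch of this control step, assuming a PyTorch-style net and that the state exposes the readings as state.prox_values and state.prox_comm (these attribute names are illustrative, not confirmed by the source):

    import torch

    SENSOR_MEAN = 1000  # mean sensor value, used for normalisation

    def extract_input(state, net_input):
        """Build the (7 or 14)-element sensing vector and normalise it."""
        if net_input == 'prox_values':
            sensing = state.prox_values            # 5 frontal + 2 rear readings
        elif net_input == 'prox_comm':
            sensing = state.prox_comm              # same layout, via communication
        else:                                      # 'all_sensors'
            sensing = state.prox_values + state.prox_comm
        return [s / SENSOR_MEAN for s in sensing]

    # forward pass: the network returns the speed and, when communication
    # is enabled, the message to transmit
    # sensing = torch.tensor([extract_input(state, net_input)])
    # speed, communication = net(sensing)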

class controllers.controllers_task1.ManualController(name, goal, N, net_input, **kwargs)

The robots are moved following a “distributed controller”, that is, a simple proportional controller (a PID with only the proportional term, fixed to -0.01), whose goal is to align the robots by minimizing the difference between the values recorded by the front and rear sensors.

Parameters
  • name – name of the controller used (in this case manual)

  • goal – task to perform (in this case distribute)

  • N – number of agents in the simulation

  • net_input – input of the network (one of prox_values, prox_comm or all_sensors)

  • kwargs – other arguments

Variables

p_distributed_controller – a simple proportional controller that returns the speed to apply

compute_difference(state)

Note

Apply a small correction to the distance measured by the rear sensors, since the front sensor used is at a different x coordinate from the point at which the rear sensor of the robot that follows is aimed. This is because of the curved shape of the face of the Thymio. The final difference is computed as follows:

\[out = front - (back - correction)\]
Parameters

state – object containing the agent information

Return out

the difference between the front and the rear distances
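
As a sketch, the final difference could be computed as below; correction is treated here as a precomputed constant, whose actual derivation depends on the Thymio geometry:

    def compute_difference(front, back, correction):
        """Difference between the front distance and the corrected rear
        distance; the correction compensates for the curved face of the
        Thymio, whose rear sensors aim at a slightly different x
        coordinate than the front sensor."""
        return front - (back - correction)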

neighbors_distance(state)

Check if there is a robot ahead using infrared sensor 2 (front-front) [sensors 2 and 9 in the all_sensors case]. Check if there is a robot behind using infrared sensors 5 (back-left) and 6 (back-right) [sensors 5, 6 and 12, 13 in the all_sensors case]. The distance from the neighbors is computed using a mapping function, obtained experimentally, that converts the intensity values contained in the sensor readings into distances.

Note

The response values are actually intensities: the front value corresponds to the frontal center sensor and the back value to the mean of the response values of the rear sensors.

Parameters

state – object containing the agent information

Return back, front

the distances of the Thymio from the robot behind and the robot ahead
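
The experimental mapping is not reproduced here; a plausible sketch, assuming the calibration is available as paired arrays of intensities and distances (the values below are made up for illustration):

    import numpy as np

    # hypothetical calibration: intensities recorded at known distances (cm)
    calib_intensities = np.array([4500.0, 3400.0, 2300.0, 1300.0, 400.0])
    calib_distances = np.array([1.0, 2.5, 4.0, 5.5, 7.0])

    def intensity_to_distance(intensity):
        """Convert a raw sensor intensity into a distance by linearly
        interpolating the calibration (np.interp expects increasing x)."""
        order = np.argsort(calib_intensities)
        return float(np.interp(intensity,
                               calib_intensities[order],
                               calib_distances[order]))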

perform_control(state, dt)

Keeping the robots at the ends of the line still, move the others using the distributed controller, setting the target {left, right} wheel speeds to the same value so that the robot moves straight ahead. This distributed controller is a simple proportional controller, PID(5, 0, 0, max_out=16.6, min_out=-16.6), that takes as input the difference between the front and back distances measured by the agent (see the sketch below).

Parameters
  • state – object containing the agent information

  • dt – control step duration

Return speed, communication

the velocity and the message to communicate
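
A minimal sketch of this step, reusing the PID class documented below; the names front, back and correction stand in for the quantities returned by neighbors_distance and used by compute_difference:

    from controllers.pid import PID

    # the distributed controller documented above
    p_distributed_controller = PID(5, 0, 0, max_out=16.6, min_out=-16.6)

    def distributed_step(front, back, correction, dt):
        """One control step: the same speed is applied to both wheels,
        so the robot moves straight ahead."""
        error = front - (back - correction)              # compute_difference
        return p_distributed_controller.step(error, dt)  # clipped to +/-16.6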

class controllers.controllers_task1.OmniscientController(name, goal, N, net_input, **kwargs)

The robots can also be moved following an optimal “omniscient” controller. In this case, based on the poses of the robots, the expert controller moves the robots at a constant speed, clipped to a minimum of -16.6 and a maximum of 16.6, calculated from the distance between the current pose and the target one.

Parameters
  • name – name of the controller used (in this case omniscient)

  • goal – task to perform (in this case distribute)

  • N – number of agents in the simulation

  • net_input – input of the network (one of prox_values, prox_comm or all_sensors)

  • kwargs – other arguments

linear_vel(state, constant=10)

Compute the linear velocity as the signed_distance between the current and the goal position of the robot, along the robot's current theta. The final velocity is obtained by multiplying this distance by a constant value (see the sketch below).

\[velocity = constant * self.signed\_distance()\]
Parameters
  • state – object containing the agent information

  • constant – constant multiplier (default: 10; values such as 1 or 4 have also been used)

Return velocity

clipped linear velocity
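
A sketch of the computation, assuming signed_distance has already been evaluated along the robot's current theta:

    def linear_vel(signed_distance, constant=10, max_vel=16.6):
        """Linear velocity proportional to the signed distance to the
        goal, clipped to the allowed range."""
        velocity = constant * signed_distance
        return max(-max_vel, min(max_vel, velocity))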

perform_control(state, dt)

Move the robots using the omniscient controller, setting the target {left, right} wheel speeds to the same value so that the robot moves straight ahead.

Parameters
  • state – object containing the agent information

  • dt – control step duration

Return speed, 0

the computed speed and the message to communicate, fixed to 0 (used only in the case of the prox_comm or all_sensors net input)

Controllers Task 2

class controllers.controllers_task2.ManualController(name, goal, N, net_input, **kwargs)

Using the sensing of the robots, decide which robots are the first and the last in the row, and start sending a communication message containing a value that is incremented by each robot that receives it. Each robot can receive two messages: if the count received from the right is higher than the one received from the left, the agent is in the first half of the row; otherwise, it is in the second. Once the robot is sure about its position (i.e., it can tell on which side of the row it is with respect to the middle of the line), it turns on its top LED, using a different colour depending on whether it is in the first or the second half of the row (a sketch of this counting rule is given after the method documentation below).

Parameters
  • name – name of the controller used (in this case manual)

  • goal – task to perform (in this case colour)

  • N – number of Thymios in the simulation

  • net_input – input of the network (one of prox_values, prox_comm or all_sensors)

  • kwargs – other arguments

perform_control(state, dt)
Parameters
  • state – object containing the agent information

  • dt – control step duration

Return colour, communication

the colour and the message to communicate
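
A sketch of the counting rule described above; the message plumbing is simplified, and counter values of None stand for messages not yet received:

    def forward_message(received_count):
        """Each robot re-transmits the received value incremented by one."""
        return received_count + 1

    def decide_half(count_from_left, count_from_right):
        """Return 'first' or 'second' once the robot can tell on which
        side of the row it is, or None while it is still uncertain."""
        if count_from_left is None or count_from_right is None:
            return None
        return 'first' if count_from_right > count_from_left else 'second'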

class controllers.controllers_task2.OmniscientController(name, goal, N, net_input, **kwargs)

The robots can also turn on and colour the top LED following an optimal “omniscient” controller. In this case, based on the positions of the robots in the line, the omniscient controller colours the robots in the first half with one colour and the others with another.

Parameters
  • name – name of the controller used (in this case omniscient)

  • goal – task to perform (in this case colour)

  • N – number of agents in the simulation

  • net_input – input of the network (one of prox_values, prox_comm or all_sensors)

  • kwargs – other arguments

perform_control(state, dt)

Do not move the robots; just colour the robots belonging to the first half with a certain colour and the other half with a different one.

Parameters
  • state – object containing the agent information

  • dt – control step duration

Return colour, communication

the colour and the message to communicate
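
A sketch of the rule, assuming the controller knows each robot's index in the row (the colour names here are illustrative):

    def colour_for(index, N):
        """First half of the row gets one colour, second half another."""
        return 'red' if index < N / 2 else 'blue'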

PID

Adapted from https://github.com/alessandro-giusti/teaching-notebooks/blob/master/robotics/04%20closedloopcontrol.ipynb

© 2020 Alessandro Giusti – 2020 Giorgia Adorni

class controllers.pid.PID(Kp, Ki, Kd, min_out=-inf, max_out=inf)

A PID controller that implements the following formula.

\[output = K_p \epsilon + K_i \int{\epsilon(t) dt} + K_d \frac {d \epsilon}{d t}\]
Parameters
  • Kp – apply a correction based on a proportional term

  • Ki – apply a correction based on an integral term

  • Kd – apply a correction based on a derivative term

  • min_out=-float("inf") – clip the speed to the minimum allowed velocity

  • max_out=float("inf")) – clip the speed to the maximum allowed velocity

step(e, dt)
Parameters
  • e – error

  • dt – the time elapsed since the last call to step

Return output

speed
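
A minimal sketch of a discrete implementation of this formula; the actual implementation is adapted from the notebook linked above and may differ in detail:

    import math

    class PID:

        def __init__(self, Kp, Ki, Kd, min_out=-math.inf, max_out=math.inf):
            self.Kp, self.Ki, self.Kd = Kp, Ki, Kd
            self.min_out, self.max_out = min_out, max_out
            self.integral = 0.0
            self.prev_error = None

        def step(self, e, dt):
            """One discrete update of the PID formula, clipped to the
            allowed output range."""
            self.integral += e * dt
            derivative = 0.0 if self.prev_error is None else (e - self.prev_error) / dt
            self.prev_error = e
            output = (self.Kp * e + self.Ki * self.integral
                      + self.Kd * derivative)
            return max(self.min_out, min(self.max_out, output))

    # usage, e.g. the distributed controller of task 1:
    # controller = PID(5, 0, 0, min_out=-16.6, max_out=16.6)
    # speed = controller.step(error, dt)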

PID tuning

class controllers.pid_tuning.Thymio(name, index, p, i, d, **kwargs)

Superclass: pyenki.Thymio2 -> the world update step will automatically call the Thymio controlStep.

Parameters
  • name – name of the agent

  • index – index of the agent in the row

  • p – proportional term of the PID controller

  • i – integral term of the PID controller

  • d – derivative term of the PID controller

  • kwargs – other arguments

Variables
  • initial_position – the initial position of the agent is set to None

  • goal_position – the goal position of the agent is set to None

  • goal_angle – the goal angle of the agent is set to None

  • dictionary – the dictionary containing all the agent attributes is set to None

  • colour – the colour of the agent is set to None

  • p_distributed_controller – a simple proportional controller that returns the speed to apply

controlStep(dt: float) → None
Perform one control step:

Compute the error as the difference between the goal and the actual position and use it to compute the velocity of the robot through a proportional controller.

Parameters

dt – control step duration
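
In sketch form, assuming the PID instance and the positions are available (the names here are illustrative):

    def control_step(current_position, goal_position, pid, dt):
        """The positional error fed to the controller yields the wheel speed."""
        error = goal_position - current_position
        return pid.step(error, dt)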

get_input_sensing()
Return sensing

the sensing perceived by the robot based on the net input

controllers.pid_tuning.main(directory, P, I, D)
Parameters
  • directory – directory in which to save the images containing the step response

  • P – proportional term of the PID controller

  • I – integral term of the PID controller

  • D – derivative term of the PID controller

Sensing to distances

class controllers.sensors_to_distances.Thymio(name, index, **kwargs)

Superclass: pyenki.Thymio2 -> the world update step will automatically call the Thymio controlStep.

Parameters
  • name – name of the agent

  • index – index of the agent in the row

  • kwargs – other arguments

Variables
  • initial_position – the initial position of the agent is set to None

  • dictionary – the dictionary containing all the agent attributes is set to None

  • colour – the colour of the agent is set to None

controlStep(dt: float) → None
Perform one control step:

Enable communication and send a message at each time step.

Parameters

dt – control step duration

get_input_sensing()
Return sensing

the sensing perceived by the robot based on the net input

controllers.sensors_to_distances.main(distances, front_prox_values, back_prox_values, front_prox_comms, back_prox_comms, myt_quantity, min_distance)
Parameters
  • distances – array containing the distances among the agents for each experiment

  • front_prox_values – array containing the value of the frontal sensor using prox_values

  • back_prox_values – array containing the value corresponding to the average of the rear sensor readings using prox_values

  • front_prox_comms – array containing the value of the frontal sensor using prox_comm

  • back_prox_comms – array containing the value corresponding to the average of the rear sensor readings using prox_comm

  • myt_quantity – number of agents

  • min_distance – length of the agents
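
These arrays can be used to fit the sensing-to-distance mapping mentioned in task 1. One plausible sketch, using a least-squares polynomial; the original fitting procedure may differ:

    import numpy as np

    def fit_sensor_mapping(distances, responses, deg=3):
        """Polynomial mapping sensor response -> distance, fitted on the
        calibration arrays gathered by the experiments."""
        coeffs = np.polyfit(responses, distances, deg)
        return np.poly1d(coeffs)

    # e.g. for the frontal sensor using prox_values:
    # mapping = fit_sensor_mapping(distances, front_prox_values)
    # estimated_distance = mapping(raw_response)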