
Monday, October 23, 2023

Lava Palaver - PiWars 2024

Introduction 

We aren't actually in PiWars 2024, just a reserve team for the advanced category, but that doesn't mean we don't have to do anything! The challenges still have to be assessed and what's involved thought through.

We've all been involved with a robot workshop, so we haven't had much time to look at these things in depth, but here's a view on the first of the challenges listed, Lava Palaver. The official description of the challenge is here: Lava Palava – Pi Wars

The Challenge

This is a black-painted course 7 metres long and 55cm wide, with walls 7cm high. Part way along is a double chicane where the robot has to turn right then left, followed by a left and then a right. A white line 19mm wide runs along the centre of the course. Without attachments, the maximum width of a robot is 225mm, just under half the width of the course.


A course like this has been used in previous years, but as a change to the layout, a 'speed bump' will be added to the course on the day of the competition; its dimensions are shown below.

This one feature introduces a range of considerations that also apply to the obstacle course challenge covered later. The overhang on a robot, that part of the chassis ahead of the front wheels or tracks, will need to clear the leading edge of the 'bump' and, once over it, avoid colliding with the level part of the course when coming off. A robot could, of course, be made sufficiently robust to collide with this and carry on, either with a strengthened chassis, the addition of a skid, or with a leading idler roller or wheel. The table below lists a range of overhang lengths and the clearances required.
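As a rough guide to where those clearance figures come from, here is a small Python sketch of the geometry: with the front wheels at the foot of the ramp, an overhang of length L needs roughly L x tan(ramp angle) of ground clearance. The ramp angle here is only an assumed example value; the real figure comes from the published bump dimensions.

import math

# Rough clearance check for a chassis overhang meeting a ramped 'speed bump'.
# The ramp angle is an assumed example value, not the official dimension.
ramp_angle_deg = 15.0
theta = math.radians(ramp_angle_deg)

# With the front wheels at the foot of the ramp, an overhang of length L
# pokes out over the slope, and its tip needs roughly L * tan(theta) of
# ground clearance to avoid striking it (wheel radius ignored).
for overhang_mm in (20, 40, 60, 80, 100):
    clearance_mm = overhang_mm * math.tan(theta)
    print("overhang %3dmm -> needs ~%.0fmm clearance" % (overhang_mm, clearance_mm))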


The course is to be navigated autonomously. 225 points are awarded for each of three runs completed within 5 minutes, and an additional 100 points for each of those runs where the robot doesn't collide with the sides of the course. The combined times of the three runs are compared with those of the other robots, and up to 150 points can be awarded, on a decreasing basis, for the fastest times according to the PiWars formula, described here: Formula Scoring System – Pi Wars. Finally, 275 points are awarded for the fastest individual run by a robot. The maximum awardable is therefore 1400 points. From a strategic viewpoint, completing the course three times without touching the sides and within the time limit gains 975 points, about 70% of the maximum, so from an effort perspective it is a worthwhile target in itself. Once a reliable navigation strategy is achieved, the speed can be increased to competitively gain the extra points.
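For anyone checking the arithmetic, a quick sanity check in Python:

completion = 3 * 225       # three runs completed within the time limit
no_touch = 3 * 100         # no wall contact on each of those runs
speed_pool = 150           # shared formula points for combined run times
fastest = 275              # fastest single run overall

maximum = completion + no_touch + speed_pool + fastest
careful = completion + no_touch    # the 'just finish cleanly' target

print(maximum)                     # 1400
print(careful, careful / maximum)  # 975, ~0.70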

Navigation strategies

There are several strategies which come to mind, and they may be adopted either individually or in combination: dead-reckoning, wall following, line following and, for the second and third runs, memorised tracks. Collision avoidance, while not a navigation strategy in itself, is desirable, so it will be included!

Dead-reckoning

As the shape of the course is known, separate routines could be incorporated into the robot's navigation code to drive it in different ways depending on its assumed position. The routines could be, for example (a code sketch follows the list):

Drive-forward-2.5-metres
Turn-right-45-degrees
Drive-forward-700mm
Turn-left-45-degrees
Drive-forward-1-metre
Turn-left-45-degrees
Drive-forward-700mm
Turn-right-45-degrees
Drive-forward-3-metres
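Expressed as Python, the routine might look something like the sketch below. The Robot class and its drive_forward()/turn() methods are hypothetical stand-ins for whatever motor-controller interface a robot actually has; the distances assume the 45-degree chicane geometry above.

class Robot:
    """Stand-in for a real motor-controller interface (hypothetical)."""
    def drive_forward(self, mm):
        print("forward %dmm" % mm)
    def turn(self, degrees):
        print("turn %+ddeg" % degrees)

def run_lava_palava(robot):
    robot.drive_forward(2500)   # straight run to the chicane
    robot.turn(+45)             # positive = right
    robot.drive_forward(700)
    robot.turn(-45)
    robot.drive_forward(1000)   # the offset middle section
    robot.turn(-45)
    robot.drive_forward(700)
    robot.turn(+45)
    robot.drive_forward(3000)   # straight run to the finish

run_lava_palava(Robot())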

These descriptive routines could be taken to successfully navigate the centre of the course, without reference to the 'speed bump'. This is not the shortest route, of course: starting the robot close to the right-side wall, navigating close to the left-hand wall through the chicane, and returning to near the right-side wall on the lead-out would be shorter, and thus faster, for a robot with similar speed capabilities. Robots with a chassis narrower than the maximum would be best placed to take advantage of this strategy.

Dead-reckoning has been done successfully with timed robot runs, but it works more reliably when the wheel dimensions are combined with measured wheel rotations to calculate distance accurately. Similarly, robot direction can be estimated from relative wheel rotations (combined with wheel orientation, depending on the steering method). A robot using mecanum wheels could simply move at the required angle without changing orientation. Gyroscope/accelerometer circuits can be incorporated to provide even more orientation information.
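As a sketch of that wheel-rotation arithmetic for a differential-drive robot (all the dimensions here are assumed example values, not Zyderbot's):

import math

WHEEL_DIAMETER_MM = 60.0    # assumed wheel size
TICKS_PER_REV = 360         # assumed encoder resolution
TRACK_WIDTH_MM = 150.0      # assumed distance between left and right wheels
MM_PER_TICK = math.pi * WHEEL_DIAMETER_MM / TICKS_PER_REV

def odometry(left_ticks, right_ticks):
    """Distance travelled and heading change from encoder counts."""
    left_mm = left_ticks * MM_PER_TICK
    right_mm = right_ticks * MM_PER_TICK
    distance_mm = (left_mm + right_mm) / 2
    # Positive heading change = turn to the left (right wheel travelled further).
    heading_deg = math.degrees((right_mm - left_mm) / TRACK_WIDTH_MM)
    return distance_mm, heading_deg

print(odometry(1000, 1000))   # straight line
print(odometry(900, 1100))    # curving left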

On a simple course such as Lava Palaver, dead-reckoning can provide an effective method of autonomous completion. The assumed measurements in the example could be confirmed on the day of the competition, corrected by physical measurement of the course. This could also include the position of the 'speed bump', to accommodate any speed/power variations which might be needed. Including collision avoidance to correct the estimates along the way will further aid navigation.

Wall following

The course has a wall on either side, and while colliding with either wall might reduce the score, and the time available, the walls do offer a consistent guidance reference throughout. Detecting walls can be done with a variety of non-contact technologies, such as ultrasonic, laser and infrared (IR) distance measurement.

Ultrasonics

Detectors are mounted on the sides and front of the robot and provide a reading of how long an ultrasonic pulse of sound takes to reflect from a surface. These can be either self-contained, carrying out the measurement and providing distance information, or controlled by the robot's controller, with the timing and subsequent distance calculated directly. These detectors can be prone to errors due to the angle of the surface they are facing and its level of reflectivity, hard surfaces working best.
This is a very common low cost ultrasonic sensor, in this case, run from a controller.
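For the controller-driven case, a typical MicroPython reading loop for this style of sensor (an HC-SR04 type) on a Pico looks like the sketch below; the pin numbers are assumptions to match your wiring.

from machine import Pin, time_pulse_us
import time

trig = Pin(3, Pin.OUT)   # assumed wiring
echo = Pin(2, Pin.IN)

def distance_mm():
    # A 10us pulse on the trigger pin starts a measurement.
    trig.value(0)
    time.sleep_us(2)
    trig.value(1)
    time.sleep_us(10)
    trig.value(0)
    # The echo pulse width is the round-trip time; sound covers ~0.343mm/us.
    duration = time_pulse_us(echo, 1, 30000)
    if duration < 0:
        return None          # timed out: target out of range
    return duration * 0.343 / 2

while True:
    print(distance_mm())
    time.sleep_ms(100)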



Laser

In recent years, small laser-equipped distance sensors have become available, such as the VL6180X or VL53L0X models, which can provide an accurate and fast measurement provided that the target surface is reflective enough. The course walls are painted black, which may significantly reduce the effectiveness of this type of sensor, but trying it may be a useful lesson. They also cost a bit more than ultrasonic sensors, which may need to be taken into consideration when adding multiple sensors to a robot.
LIDAR (LIght Detection And Ranging) sensors are not beyond the budget of many robots (in both cost and size) and can provide a detailed map of a robot's surroundings, but they do need to be able to 'see' the course walls, which may prove difficult to engineer a robot to do in this case.
This picture shows a low cost LIDAR sensor driven by an electric motor to give an all-round view. It costs in the region of a good serial servo, which many roboteers use.



InfraRed(IR)

These sensors rely on the level of reflected IR light from a surface illuminated by an associated IR source. They can be very effective at measuring small distances where an ultrasonic detector would fail completely, but they may suffer the same problems as the laser sensors when observing the black sides of the course.
This is a pair of IR sensors with adjustment for triggering sensitivity.

The following is an example sensor layout.

The rectangles show the locations of the sensors for both wall following and collision avoidance; they could be ultrasonic, IR, or both. One option for the front collision detector is to mount it on a servo to sweep the area in front of the robot for greater coverage.
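Tying the side sensors together, a wall-follower can be as simple as a proportional controller that steers away from whichever wall is closer. A minimal sketch, assuming hypothetical read_left()/read_right() helpers returning distances in mm and a set_motors() taking powers from -1.0 to 1.0:

KP = 0.004          # steering gain, found by experiment (assumed value)
BASE_SPEED = 0.6    # cruising power

def wall_follow_step(read_left, read_right, set_motors):
    # Positive error means the robot is nearer the left wall, so steer right.
    error = read_right() - read_left()
    steer = max(-0.3, min(0.3, KP * error))
    set_motors(BASE_SPEED + steer, BASE_SPEED - steer)

# Demo with fake readings: the robot is offset towards the left wall.
wall_follow_step(lambda: 150, lambda: 250, lambda l, r: print(l, r))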



Line Following

The white line down the middle of the course provides an immediate and consistent point of focus for guidance. Line following is a very common entry point to robotics, using error correction to establish a reliable guidance mechanism. Information about the position of the line relative to the robot and its direction is typically obtained via an array of point sensors or from a high resolution optical camera. There are also low resolution optical cameras available which provide a much simpler interface and an image that is easier to analyse for guidance.

Point Sensor Arrays

These come in various types but are typically a light sensor and light source in adjacent pairs, providing a signal based on the reflectivity of the surface, which can be used to detect a white or black line on a black or white background.
An individual sensor, in this case a TCRT5000.

Here, eight sensors have been soldered to a sensor bar and an I2C interface provides access.


Some colour sensors can detect coloured lines, enabling multiple lines to be used for different guidance purposes in the same plane. A basic array would be two such pairs a short distance apart, mounted across the robot's chassis at right angles to the direction of travel. These give basic information: when the right sensor is over the line, turn right; when the left sensor is over the line, turn left. Adding more sensor pairs enables the robot to determine the position of the line more accurately, and placing them closer together gives a greater degree of granularity of control. Using two or more rows of sensors gathers more directional information, and varying the shape of the sensor array (an arc can be beneficial), together with varying the spacing of sensors to give both coarse and fine positional sensing, can be helpful.
Positioning the sensor array ahead of the robot gives more time to make corrections, while placing arrays further back on the chassis reduces over-correction. Arrays can also be useful for initial alignment at the start of a line following run, ensuring the robot is positioned as straight as it can be.
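Whatever the layout, the usual trick is to collapse the array into a single line-position estimate (a weighted average of the sensor readings) and steer on that. A minimal sketch for an eight-sensor bar, with assumed gains and readings normalised so that 1.0 means fully over the white line:

WEIGHTS = [-3.5, -2.5, -1.5, -0.5, 0.5, 1.5, 2.5, 3.5]   # sensor offsets from centre
KP, KD = 0.15, 0.4    # proportional and derivative gains (assumed, to tune)
last_error = 0.0

def line_position(readings):
    total = sum(readings)
    if total == 0:
        return None                   # line lost: switch to line-finding
    return sum(w * r for w, r in zip(WEIGHTS, readings)) / total

def steering(readings):
    global last_error
    error = line_position(readings)
    if error is None:
        return 0.0
    correction = KP * error + KD * (error - last_error)
    last_error = error
    return correction                 # add/subtract from left/right motor power

print(steering([0, 0, 0, 0.2, 0.9, 0.4, 0, 0]))   # line slightly right of centre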
This is a basic two-sensor layout which can be very successful, but such a line follower is typically slow as it constantly has to hunt for the line it's following.

Adding a third, central sensor provides focus, reduces hunting and increases the speed possible.

As with the commercial example above, this is an eight sensor bar. If the line is wide enough then the two central sensors can be the focus, but if it is a narrow line, then ignoring one of the outlier sensors and using the fourth sensor as the focus can help performance.

A nine sensor commercial sensor bar is unusual, but automatically provides for a central focus sensor. The wide sensor bar provides for an increased sensor sweep area when negotiating corners or having to perform line finding.

This is an enhanced nine sensor arrangement with a dense focus in the centre, allowing the robot to line follow using multiple sensors, perhaps not necessarily in the centre, but also has outlier sensors for improving corner performance.
This curved sensor is common in competitive line follower robots, maintaining a focus area and providing depth in the outlier sensors. This is very useful where the robot will be encountering many corners in quick succession.




This final layout is more extreme and might be more at home in a commercial robot, but it can still be useful in smaller robots. The central sensor bar provides the focus and the core steering input. The leading bar provides advance information to allow the controller to take predictive action, and the trailing bar provides alignment information, helping to reduce crabbing and hunting as well as aiding straight-line alignment of the robot at the start.



High Resolution Cameras

While they can require significantly greater processing power in a robot controller, the cost of adding a camera can be very modest, equivalent to a point sensor array. The processing may be more intense, but a camera effectively provides the same level of guidance as a multi-row array, giving a degree of lookahead absent from single-row sensors. Cameras can be mounted away from the surface of the course, so they avoid being snagged on the 'speed bump'. Using a camera can give a very high quality of control, but it does require significant investment in learning to implement in code. A variation on this is to add a pan feature to the camera to provide additional lookahead capability.
Wide-angle cameras such as this can provide a good view for line following, but some compensation for lens distortion might be needed for accuracy.
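As a sketch of the camera approach (not necessarily the code we'll use), OpenCV can threshold a strip of the frame and take the centroid of the white pixels as the line position; the threshold value and strip position are assumptions to tune.

import cv2
import numpy as np

def line_offset(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    h, w = gray.shape
    band = gray[int(h * 0.8):, :]             # strip near the bottom of the frame
    _, mask = cv2.threshold(band, 200, 255, cv2.THRESH_BINARY)
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None                           # line not visible
    cx = m["m10"] / m["m00"]                  # centroid x of the white pixels
    return (cx - w / 2) / (w / 2)             # -1.0 (far left) .. +1.0 (far right)

# Synthetic test frame: a white stripe a little right of centre.
frame = np.zeros((240, 320, 3), dtype=np.uint8)
frame[:, 180:200] = 255
print(line_offset(frame))                     # ~0.19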



Combinations

Combining all of these may be difficult, but a few together would be very useful. Wall following can be a complete solution, but including it with the others for collision avoidance makes the extra points more likely and gives greater confidence in increasing the speed. Line following can achieve the whole navigation of the course, but adding dead-reckoning information can aid speed control: accelerating the robot from the start, slowing as a corner approaches and accelerating afterwards. Without knowing where the 'speed bump' is, a robot must either moderate its speed throughout or risk crashing; dead-reckoning, however, can add a suitable speed reduction to navigate it safely.

Memorising and recall

Having completed one successful run of the course (we will all be successful!!!), we should be able to make our robot do it again just as easily, but with the extra knowledge of having done it once. Recording the robot's good run means that, even without sensors, it should be able to do it again, and perhaps faster: the distance to the corners is known, the 'speed bump' has been located, and where the robot can and can't run at full speed has been determined.
The methods of recording a 'good' run vary, but the course isn't complicated, so a small array of control points may suffice.
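One simple form this could take: log timestamped motor demands during the sensor-guided run and replay them blind afterwards. A sketch (the file name and format are arbitrary choices):

import json
import time

log = []
run_start = time.monotonic()

def record(left, right):
    """Call whenever the motor demand changes during the good run."""
    log.append((time.monotonic() - run_start, left, right))

def save(path="good_run.json"):
    with open(path, "w") as f:
        json.dump(log, f)

def replay(set_motors, path="good_run.json", speed_up=1.0):
    """Play the control points back, optionally faster than the original."""
    with open(path) as f:
        points = json.load(f)
    start = time.monotonic()
    for t, left, right in points:
        while time.monotonic() - start < t / speed_up:
            time.sleep(0.005)
        set_motors(left, right)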

What will make for a good robot for this challenge?

The Lava Palaver is only one of the challenges; going all out to win this one thing may be the sole objective, but in PiWars a robot chassis will need to be adaptable to the other challenges.

Adding a line follower array attachment is perhaps the easiest option, but some mechanism may be needed to allow it to navigate the 'speed bump', such as fitting it with a hinge and either a roller or idler wheel for the time it is in contact with the 'speed bump'. Fitting this hinged part with a detector would also tell the robot that it had found the bump. Line following competitions sometimes feature quite extreme 'bumps' which delicate, high performance robots just get on with.

Placing the array well in front of the robot with this mechanism would also give the robot a small amount of time to decelerate and navigate the bump safely. However, placing the array far in front of the robot may be a problem for steering, depending on the technique involved.


A few mock-up pictures of a suspended sensor running on ball castors. The spring provides suspension to hold the sensor bar down as well as accommodating the rise and fall of the bar.


     
A robot can be up to 300mm long in its base configuration, longer than half the width of the course, so a mecanum wheel equipped robot could find itself colliding with the sides of the course at the corners. Skid or differential steering would offer the chance to steer precisely, but only if the robot slowed down to do so at corners. Ackermann steering would give the best control over the course at speed, but might prove problematic to use in the other challenges. One thing which would be consistent: full length robots with attachments will be at a disadvantage on this challenge.

Remember: 70% of the points are available just for finishing the course without errors three times in 5 minutes, which isn't fast, so just that would be a good result for any robot entrant.

It's not certain, but I suspect East Devon Pirates will have a camera's-eye view of the course with skid steering :)  There's code out there we've used before if you want a look: uggoth/EastDevonPirates2024: Work towards Pi Wars 2024 by East Devon Pirates (github.com)
 

Thursday, June 16, 2022

All bar one

Final blog entry for this year. We got four videos created, and the blog, but failed on the Nature's Bounty video.

Most of a sophisticated apple picker was built, but not enough to be ready for using in a videoed challenge.


We did try to rustle up an alternative in the last few hours, which basically knocked the apples off, but time just ran out, and we concentrated on delivering what we could. Disappointing, but this was our first try at the competition and we've learnt a bit in the process.


It'll be fun seeing what everyone else has done, but we're not sure we'll get away with doing it again; it takes just a bit too much time.

Still, next up is the Sidmouth Science Festival, when we'll have a lot more robots wandering about. Think that's where we came in!!!!


East Devon Pirates

Monday, June 13, 2022

Videos

 Just a brief post to show our video recording area....it's actually a bedroom with the arena laid on top of the bed.


We moved here from the kitchen floor because other people needed the kitchen space, and here we can work all day without being trodden on!!!

The camera is mounted on a tripod to the right, and to the left is a table to sit at with a laptop for robot set-up, in this case with the next variation on our sheep herder, a very much more 'beefed up' version made from recycled robot arms.....and Lego!

Sunday, June 12, 2022

Zyderbot Architecture

 Zyderbot is very modular and the parts have been assembled to work together around ideas put in place early on in the competition. This is a brief overview of what it comprises. 


The central chassis, as described in previous blogs, houses the controller Raspberry Pi, and it is this which forms the central control and communication hub of the robot.

A VNC connection to this Pi is used to connect to it from a laptop and programming is done using the Python IDE or Thonny. The controller Pi also runs the camera, support for which is built into the Pi.

The rest of the robot platform is modularised to provide dedicated functions. There are two groups: the main chassis controllers and the attachment controllers. As this is our first entry into PiWars, we've designed something reusable, and the design and code are on Github.

The main controller runs a command queue, onto which commands can be placed: from an onboard file when running a script, from the command line as direct input, or as a response from a separate controller, which in this case is either a Pico or an ESP32.
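A minimal sketch of the idea (an illustration, not the actual Zyderbot code, which is on Github): several sources feed one queue and the main loop drains it in order.

import queue

commands = queue.Queue()

def feed_from_script(path):
    """One possible source: commands read from an onboard script file."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#"):
                commands.put(line)

def main_loop(dispatch):
    while True:
        cmd = commands.get()      # blocks until any source supplies a command
        if cmd == "STOP":
            break
        dispatch(cmd)             # e.g. pass a motion command to the Motor Controller

# Example: commands.put("FWD 500"); commands.put("STOP"); main_loop(print)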

For autonomous processing, the controller reads a script file and runs commands to either take actions or invoke further autonomous actions, such as running a trough filling hopper or picking an apple. 

A motion command will be sent to the Motor Controller Pico which measures distance and actuates the four chassis motors. 

For remote control operation, an ESP32 is used with a PS3 handheld controller to submit commands to the controller, which are passed on to the Motor Controller, providing the necessary control.

For audio input, a separate Pico is used to listen on a microphone, interpret the audio signals it receives, and then pass these on to the command queue.

The Camera is invoked by commands to enable seeking and following operations, itself issuing commands back to the queue to act on received information.

The attachment controllers are there to enable easy integration between the chassis and the attachments, running all the attachment motors, servos and sensors so relieving the main controller of that burden and providing a division of labour.

For each PiWars challenge an attachment is provided, built to a standard mounting specification. Power is supplied from the batteries as a nominal 12V, and communication with the Pi controller is over USB, which also supplies the attachment controller's power and enables coding on the attachment controller from the central Pi.

There's a lot of code around this, as well as some direct handshake signals to get the timing right, but it's a good platform to take forward into future projects and competitions.

Wednesday, May 25, 2022

Toot Toot

There's a bit of Shepherds Pi and the Farmyard Tour that gives points for audio control, so this is our take on it. It doesn't do much: it listens for a whistle and provides basic interpretation into commands. One whistle 'toot' is a command, as are two and three, and after a brief interlude the last command is cancelled.



For this, a Pi Pico is paired with a microphone....and a whistle...feeding the Pico's ADC with the audio signal, then running an FFT against the signal to pick out the whistle's frequency, 2.7kHz in this case, and converting the blasts into a logic signal.
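We used an FFT, but for picking out a single known frequency the Goertzel algorithm is a cheaper alternative that fits comfortably on a Pico. A sketch in plain Python (the sample rate, block size and threshold are assumed values):

import math

SAMPLE_RATE = 10000   # Hz, assumed ADC sampling rate
TARGET_HZ = 2700      # the whistle frequency
N = 200               # samples per detection block (20ms at 10kHz)

def goertzel_power(samples):
    """Signal power at TARGET_HZ within one block of samples."""
    k = round(N * TARGET_HZ / SAMPLE_RATE)
    coeff = 2 * math.cos(2 * math.pi * k / N)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

def toot_present(samples, threshold=1e6):   # threshold found by experiment
    return goertzel_power(samples) > threshold

# Check: a pure 2.7kHz tone scores far higher than silence.
tone = [1000 * math.sin(2 * math.pi * TARGET_HZ * i / SAMPLE_RATE) for i in range(N)]
print(goertzel_power(tone) > goertzel_power([0.0] * N))   # True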




As with the other attachments, this supports the WHOU enquiry and GPIO handshake, and returns TOOT, TOO2 and TOO3 as commands. Power and serial comms are via USB for easy attachment.

Seems a bit of a small post, but it's a complete sub-project that contributes.



Monday, May 23, 2022

Using the remote

With lots of challenging challenges, it has felt easy to ignore that the Farmyard Tour was to be remote controlled and not autonomous, so we needed a solution a bit more dextrous than a keyboard. Using a full RC set with adapters was always an option, but a more personal system was preferred and an almost universal games controller format was chosen.



It could have been any brand, but it turned out to be a low cost PS3 controller from a no-name Chinese source. As the main Pi controller just wanted a data feed it could translate, the interface chosen was Bluetooth, run by a small ESP32 module dedicated to the communication. This was a lot less bother than direct comms and meant that the controller Pi wasn't cluttered with noisy code. Here is a larger dev board for testing, mounted on a test chassis. The communication is via serial over USB, and a simple handshaking option is used to request data from the PS3.


One of the issues with this configuration is that when the controller times out and disconnects, the ESP32 code can't reconnect, so an extra pin was dedicated to reboot the ESP32 into connection mode again. 
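The Pi side of this link could look something like the sketch below, using pyserial and gpiozero; the port name, baud rate, request byte and reset pin number are all assumptions for illustration.

import time
import serial                                   # pyserial
from gpiozero import DigitalOutputDevice

esp32 = serial.Serial("/dev/ttyUSB0", 115200, timeout=0.5)
reset_pin = DigitalOutputDevice(17)             # wired to the ESP32 reboot line

def poll_controller():
    esp32.write(b"?")                           # handshake: request one data frame
    line = esp32.readline()
    if not line:                                # timed out: controller disconnected
        reset_pin.on()                          # pulse the pin to reboot the ESP32
        time.sleep(0.1)
        reset_pin.off()
        return None
    return line.decode().strip()                # e.g. joystick axes as text

while True:
    frame = poll_controller()
    if frame:
        print(frame)                            # translate into motor commands here
    time.sleep(0.05)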

Here's a brief video of the test chassis twirling in the arena..........


....and negotiating a bit of garden.







Thursday, May 19, 2022

A new chassis

So after lots of testing....well a bit anyway...we needed to build a new chassis, if only because we'd drilled so many holes in the old one! 

First make the battery holder which is also part of the suspension. Here it is.



Well, that's spectacular, isn't it? It's very basic: made from a piece of acrylic tube, one end has a fixed plastic plug with a battery contact permanently attached, and the other has a plastic spring holding the second contact in, retained by a screw. The tube itself forms the central pivot of the suspension.

The two halves of the chassis were updated to accommodate the positioning of the camera as well as the main controller and attachment processors.

This is the rear subframe from below, the motors fixed in place either side of a tunnel which houses the battery tube. 

And a picture of the front subframe, housing motors, a camera and illumination LEDs, with the second half of the battery tunnel shown.

Putting them together, with an empty tube in the middle. The two subframes can rotate independently of each other around the battery tube. This isn't of much use in the arena challenges, but when negotiating the farm tour obstacle course it's expected to come into its own, keeping the chassis stable and in contact with the ground.

As mentioned previously, part of the rebuild was to accommodate a camera.
The camera is mounted below the attachment deck of the chassis (the bolt holes for attachments can be seen top left and right) and faces down and forward, looking at the arena surface ahead of the robot. The LEDs ensure the camera has plenty of illumination and isn't dependent on ambient light, eliminating an uncertain variable.



Close-up of the two subframes partly rotated against each other to show the suspension in operation. Already, with only the motors, LEDs and camera fitted, there are a lot of wires in this robot, and cable management is important, hence the prepared holes in the subframes.

From the outset we wanted interchangeable wheels to adapt to the different challenges. These are 'arena' wheels with their adapters. The adapters were always part of the design: with so many different motor shaft attachments available, we needed to be able to switch them without having to remake whole wheels.



Now it looks a lot more like something with the wheels attached and wires routed. The front chassis mounts for attachments can be seen, and that growing wiring is getting some management.

And suddenly there's a wire explosion!!! At the rear (right) is the main power switch, and just ahead of it is the controller Pi board, mounted vertically. At the far side is the power regulator. Raw power from the batteries is delivered at around 11V and used by the attachments; the regulator ensures the Pi is supplied with a healthy 5V.
Nearest the camera is the motor control Pico, which receives commands from the controller and incorporates an LED to provide a simple status display. This connects to the controller via USB.

And here is the assembled new chassis with the Hungry Cattle hopper attachment. 
There are a lot of parts to this, but it does work very well. Only a short video of this chassis working, but it gives an idea of the basics.




Next up, a blog giving a description of the architecture of these robots.





Thursday, March 17, 2022

Attachment action

After a bit of a delay actually doing things, we're now entering the phase of actually testing some solutions.

First up is the Shepherds Pi herding solution. The chassis is working well, and fitting the basic herder just required some bolts, power and a USB cable. This wasn't a sudden event; the interface has undergone a bit of refinement.


What looks to be the final interface is to run the attachment microcontrollers from USB via a hub, with a separate power supply to run motors and servos, putting the power conversion on the attachment to optimise for each particular attachment; the standard supply is 12V.


This is a picture of the shepherding attachment, controlled by a Pico on a Kitronik control board connected to the arm servos, with a small buck converter power supply and power display. Also shown is a handle to aid picking the chassis up, as it's now getting a significant amount of handling. The control board is a bit over-specified here, but in the next iteration it will have to run two stepper motors.

The shepherding attachment has gone through several phases, and the one shown is good enough for basic positional testing using dead reckoning. Here's a video of a basic operation.


Some erratic movement on the arms due to controller initialisation signals, but this is a good test of a sheep 'fetch'.

While the arms do a good job with the sheep, they do need to do more and so the turbo-shepherd version is in construction and testing. Here's a video of it in test mode. 


Here a stepper motor is now giving lift to the arm, allowing it to be moved up and out of the way, to open and close the gate, and potentially even to pick up recalcitrant sheep!

With the herding going well, the Hungry Cattle challenge is almost complete.

The montage above shows the feed dispenser attached to the chassis. The feed hoppers are attached to a turntable mounted between the wheels of the chassis. The robot drives up alongside a trough, the turntable rotates a full hopper over the trough and dispenses feed. The robot then moves on to the next trough while the turntable rotates the next hopper into position. Once feeding is complete, the turntable rotates back to a central position for refilling. The controller here is a Pico on a dedicated piece of stripboard, running three servos and a small stepper.

Finally, a bit of dimension checking. The rules say it all has to fit within the marked rectangles, so here we are, fitting in!
Picture with arms extended

and then with arms parked.
May need to tidy a few wires up!!!! 

That's it until next time, when we'll have some remote control via a PS3 controller, a dog whistle and maybe even some voice commands.....come by Shep!



Friday, February 25, 2022

Joining bits together!

Another team meeting this week to look at where we're going and to show our progress. This is more of a position update blog; not much to show in a structured way, but an example of where we are.


First up is a video of the sheep herder. This is the basic version in operation, not the turbo, just giving us a view of how things will pan out in the arena. The arms and flippers aren't powered and dead reckoning is used for the small amount of navigation shown.


So we have a sheep dog, but it needs a bit of training! This is also the first time a chassis and attachment have been mated together; it took only a few minutes, and that's how the other attachments should fit!

Also demonstrated was our change to a PS3 controller for remote control operations. We'll be using this for the Farmyard Tour challenge of course, but its main job in the coming weeks is to rehearse the best sequence of operations for the chassis and attachments for testing.

We're still moving on with the apple picking attachment: the laser-cut practice tree shown in the last blog entry is now assembled and needs a bit of weight adding, and the apple picking cup Mk2 is ready for testing.




Finally, the cattle feeding turntable was demonstrated. This challenge is almost ready for full testing, with most of the components finalised and the control code written. This is possibly the simplest attachment, but it still comprises 36 individual 3D printed parts, together with many nuts, bolts, servos and a stepper motor. Hidden in there is the custom controller board as well!


The picture has been shown before, but next time there will be a full demo video; we've already written the routine for it! It's controlled via MicroPython on a Pico.


That's it for this time, next time we'll have working sheep dog arms and a trough filling attachment!