
~Week 10

I apologize for not keeping up with the blog. It has been a busy couple of weeks, but I have made a lot of good progress! Let’s get into the details.

Kyberlib

This past month has been primarily focused on kyberlib. Kyberlib is the backend library for Team 6502 that makes all of our other work easier. It provides functions for common tasks, examples of how to create certain mechanisms, and lots of other useful utilities.

Motor Control

The first major upgrade has been entirely reworking the motor library. Previous years had started work on motor control classes, but I don't think it ever got very far or was tested. I took the initial premise and got it fully working.

First, there is the KBasicMotorController. This class is the parent of all motor controllers. It can control any motor via voltage, percent output, or following another controller. It has accessible open variables to set inversion and brake mode. It is both Sendable and Debuggable, meaning that the voltage information can easily be sent to, and set from, the Dashboard. There are currently examples for both WPI's SpeedControllers and the Victor SPX.
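
To give a sense of the structure, here is a rough sketch of such a base class. This is a simplified illustration with made-up names, not kyberlib's exact API:

```kotlin
// Rough sketch of the shape of a basic motor controller base class.
// Simplified illustration only; kyberlib's real API differs.
abstract class BasicMotorControllerSketch {
    // Open so subclasses and user code can configure behavior directly.
    open var inverted: Boolean = false
    open var brakeMode: Boolean = false

    // Each concrete controller implements the raw hardware call.
    protected abstract fun setRawVoltage(volts: Double)

    fun setVoltage(volts: Double) = setRawVoltage(if (inverted) -volts else volts)

    // Percent output is just a scaled voltage on a nominal 12 V battery.
    fun setPercent(percent: Double) = setVoltage(percent * 12.0)

    // Following mirrors another controller's output.
    fun follow(leader: BasicMotorControllerSketch) { /* subscribe to leader's output */ }
}
```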

In addition to KBasicMotorController, there is the more advanced KMotorController. This class wraps any motor that has an encoder. Encoders are devices that sense how fast a motor is spinning. With this angular velocity, we can tweak our voltages to reach the speed we want the motor to move at. Tweaking these voltages in an optimal way is a very math-heavy topic known as control theory. KMotorController abstracts away most of the math while still allowing the flexibility to tune PIDs, add feedforwards, or even add your own custom control algorithm. KMotorController also provides getters and setters for important values like position, velocity, setpoint, and more. It handles linear-to-rotational conversion so that you can instruct it to move at a linear velocity (i.e. 2 meters per second instead of 2 radians per second). Like KBasicMotorController, KMotorController has full support for debugging and dashboard widgets.
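
Here is a minimal sketch of the kind of feedforward-plus-PID velocity loop that KMotorController hides from you, built on WPILib's real PIDController and SimpleMotorFeedforward classes. The gains and the hardware callbacks are hypothetical placeholders:

```kotlin
import edu.wpi.first.math.controller.PIDController
import edu.wpi.first.math.controller.SimpleMotorFeedforward

// Sketch of a feedforward + PID velocity loop. The gains and the
// getVelocity/setVoltage callbacks are made-up placeholders.
class VelocityLoopSketch(
    private val getVelocityRadPerSec: () -> Double,
    private val setVoltage: (Double) -> Unit
) {
    private val pid = PIDController(0.5, 0.0, 0.0)        // feedback: corrects residual error
    private val ff = SimpleMotorFeedforward(0.1, 2.0)     // feedforward: predicts required volts

    // Linear-to-rotational conversion: v (m/s) = omega (rad/s) * wheel radius
    var wheelRadiusMeters = 0.0508

    fun setLinearVelocity(metersPerSecond: Double) =
        setAngularVelocity(metersPerSecond / wheelRadiusMeters)

    fun setAngularVelocity(radPerSec: Double) {
        val volts = ff.calculate(radPerSec) + pid.calculate(getVelocityRadPerSec(), radPerSec)
        setVoltage(volts)
    }
}
```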

Simulation Support

One of the coolest features added to WPILib in the past couple of years is simulation. This feature allows you to run your robot code on a laptop and simulate its effects on a virtual robot. Using this new feature, however, was a bit clunky: a lot of code had to be written specifically for the simulation to work. One of the nicest features I added to kyberlib is clean, hidden support for simulation. All sensors (e.g. gyroscopes) and motors automatically detect whether they are in a simulation and adjust accordingly. The end result is that you can write code for a typical robot and then seamlessly run the simulation with no adjustments.
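
The pattern looks roughly like this. RobotBase.isSimulation() is the real WPILib check; the gyroscope wrapper and its physics-model fallback are a hypothetical illustration:

```kotlin
import edu.wpi.first.wpilibj.RobotBase

// Sketch of the "detect and adjust" pattern. RobotBase.isSimulation() is real
// WPILib; this gyro wrapper and its simulated state are made up for illustration.
class GyroWrapperSketch {
    private var simulatedAngle = 0.0

    val angleDegrees: Double
        get() = if (RobotBase.isSimulation()) {
            simulatedAngle              // value advanced by a physics model each loop
        } else {
            readHardwareGyro()          // value read from the real sensor
        }

    // Called by the simulation's physics update each loop.
    fun updateSim(omegaDegPerSec: Double, dtSeconds: Double) {
        simulatedAngle += omegaDegPerSec * dtSeconds
    }

    private fun readHardwareGyro(): Double = TODO("real sensor read")
}
```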

Modular Autonomous

In a previous post I talked about implementing my pathing and navigation algorithms. These have now been moved out of the main robot code and into kyberlib. This means that all of the code I have been working on for this robot can easily be reused in future robots. Additionally, I implemented a TravellingSalesman class to find the fastest path through a series of waypoints. Despite not really needing it, I got bored and found some really fascinating optimizations; a sketch of the idea is below.
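
As a sketch of the approach (not the actual kyberlib implementation), one classic combination is a greedy nearest-neighbour tour followed by 2-opt improvement, which repeatedly reverses segments that shorten the path:

```kotlin
import kotlin.math.hypot

// Hypothetical TravellingSalesman sketch: nearest-neighbour construction
// plus 2-opt improvement. Not kyberlib's actual implementation.
data class Waypoint(val x: Double, val y: Double)

fun dist(a: Waypoint, b: Waypoint) = hypot(a.x - b.x, a.y - b.y)

fun solveTsp(points: List<Waypoint>): List<Waypoint> {
    if (points.size < 3) return points
    // 1. Greedy nearest-neighbour tour: always visit the closest unvisited point.
    val remaining = points.toMutableList()
    val tour = mutableListOf(remaining.removeAt(0))
    while (remaining.isNotEmpty()) {
        val next = remaining.minByOrNull { dist(tour.last(), it) }!!
        remaining.remove(next)
        tour.add(next)
    }
    // 2. 2-opt: reverse any segment whose reversal shortens the tour,
    //    and repeat until no improvement is found.
    var improved = true
    while (improved) {
        improved = false
        for (i in 0 until tour.size - 2) {
            for (j in i + 2 until tour.size - 1) {
                val delta = dist(tour[i], tour[j]) + dist(tour[i + 1], tour[j + 1]) -
                            dist(tour[i], tour[i + 1]) - dist(tour[j], tour[j + 1])
                if (delta < -1e-9) {
                    tour.subList(i + 1, j + 1).reverse()  // reverse in place
                    improved = true
                }
            }
        }
    }
    return tour
}
```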

Example Mechanisms

In order to test the usability of the new libraries and to instruct future coders, I added a package of simple out-of-the-box mechanisms. Included are a flywheel using state-space control, a basic elevator, and drivetrains of the three most common types (differential, mecanum, and swerve).
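
For context, a state-space flywheel loop built directly on WPILib's linear-system classes, which an example like this would presumably wrap, looks roughly like the sketch below. The moment of inertia, gearing, and tolerance numbers are placeholders:

```kotlin
import edu.wpi.first.math.Nat
import edu.wpi.first.math.VecBuilder
import edu.wpi.first.math.controller.LinearQuadraticRegulator
import edu.wpi.first.math.estimator.KalmanFilter
import edu.wpi.first.math.system.LinearSystemLoop
import edu.wpi.first.math.system.plant.DCMotor
import edu.wpi.first.math.system.plant.LinearSystemId

// Sketch of a state-space flywheel using WPILib's linear-system machinery.
// Moment of inertia, gearing, and tolerances below are placeholder numbers.
val plant = LinearSystemId.createFlywheelSystem(DCMotor.getNEO(1), 0.00032, 1.0)

val observer = KalmanFilter(
    Nat.N1(), Nat.N1(), plant,
    VecBuilder.fill(3.0),   // how much we trust the model (state stddev, rad/s)
    VecBuilder.fill(0.01),  // how much we trust the encoder (measurement stddev)
    0.020
)
val controller = LinearQuadraticRegulator(
    plant,
    VecBuilder.fill(8.0),   // velocity error tolerance (rad/s)
    VecBuilder.fill(12.0),  // control effort tolerance (volts)
    0.020
)
val loop = LinearSystemLoop(plant, controller, observer, 12.0, 0.020)

// Called every 20 ms: correct with the measured velocity, predict the next voltage.
fun runLoop(measuredRadPerSec: Double, targetRadPerSec: Double): Double {
    loop.setNextR(VecBuilder.fill(targetRadPerSec))
    loop.correct(VecBuilder.fill(measuredRadPerSec))
    loop.predict(0.020)
    return loop.getU(0)     // voltage to apply to the motor
}
```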

Other Small Additions

  • Rewrote KTrajectory to inherit from the WPI trajectory and to have all of the constructors from TrajectoryGenerator. It can be saved to and loaded from files.
  • Created a Debuggable interface that allows classes to easily debug important values to the console, DriverStation, or Dashboard (see the sketch after this list).
  • Reworked how CommandManager functions.
  • Did some experimenting with ways to improve our units library (typing variables is a pain). Also allowed units to retain their dimensions across multiplication and division.
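
Here is a hypothetical sketch of what a Debuggable-style interface can look like (the real kyberlib version differs): implementers expose their important values, and default methods route them to the console or the dashboard.

```kotlin
import edu.wpi.first.wpilibj.smartdashboard.SmartDashboard

// Hypothetical Debuggable sketch; not kyberlib's actual interface.
interface Debuggable {
    val debugName: String
    fun debugValues(): Map<String, Double>

    // Default implementations so implementers get these for free.
    fun debugToConsole() =
        println("$debugName: ${debugValues()}")

    fun debugToDashboard() =
        debugValues().forEach { (key, value) ->
            SmartDashboard.putNumber("$debugName/$key", value)
        }
}
```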

Vision

The other major half of the project I have been working on is implementing our vision localization system. We are doing this through an algorithm called UcoSLAM. This process was far harder than anticipated, so strap in for a tale.

The implementation had two primary difficulties. First, the camera was attached to the robot, but I could not run vision on the robot without severely slowing down its processing. Second, the UcoSLAM algorithm is written in C++ while the rest of my code is written in Kotlin (JVM).

My first thought was to have the robot send the stream back through WPI's built-in CameraServer and then read it from there. The issue with this approach was that there was no way to interact with the CameraServer without setting up the WPILib libraries for the project. I have zero experience doing this, and I believe there would have been build conflicts between the two dependency managers (CMake and Gradle).

The second approach was to read from the CameraServer in Kotlin and then call the important C++ functions from there. However, JNI (calling C++ from Java/Kotlin) is awful, and I gave up on this approach.
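
For a sense of why, here is a hypothetical sketch of just the Kotlin half of a JNI binding. Every name here is made up, and each external function would also need a matching C++ stub compiled into a shared library:

```kotlin
// Hypothetical sketch of the Kotlin side of a JNI binding to UcoSLAM.
// Every name is made up; each `external` function also needs a matching
// C++ stub (e.g. Java_UcoSlamBridge_processFrame) built into a native library.
class UcoSlamBridge {
    companion object {
        init { System.loadLibrary("ucoslam_jni") } // loads libucoslam_jni.so
    }

    // Feed one camera frame in, get an estimated [x, y, z, roll, pitch, yaw] out.
    external fun processFrame(imageBytes: ByteArray, width: Int, height: Int): DoubleArray
}
```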

My present solution is very improvised and needs work. Currently, the robot sends images to my computer. My computer reads those images and writes them to a file. The algorithm then reads this file and writes its output back to the intermediary program. This method is slow and causes a lot of errors. I recently discovered more about how the CameraServer works and am hoping to find a way to retrieve the images directly from within UcoSLAM.
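
The laptop-side intermediary looks roughly like this sketch, assuming OpenCV's Java bindings and the MJPEG stream that CameraServer publishes over HTTP. The team number, port, and file path are placeholders:

```kotlin
import org.opencv.core.Core
import org.opencv.core.Mat
import org.opencv.imgcodecs.Imgcodecs
import org.opencv.videoio.VideoCapture

// Sketch of the laptop-side intermediary: pull frames from the robot's MJPEG
// stream and drop them to a file for UcoSLAM to pick up. The URL, port, and
// file path are placeholders that depend on the actual setup.
fun main() {
    System.loadLibrary(Core.NATIVE_LIBRARY_NAME)
    // CameraServer publishes an MJPEG stream over HTTP.
    val stream = VideoCapture("http://roborio-6502-frc.local:1181/?action=stream")
    val frame = Mat()
    while (stream.read(frame)) {
        // UcoSLAM (run separately) polls this file; slow and racy, as noted above.
        Imgcodecs.imwrite("/tmp/latest_frame.jpg", frame)
    }
}
```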


Week 6

Due to a high workload at the end of the quarter and having a cold near the end of the week, I did not get much done on the project this week. I made some small tweaks to backend libraries, but that's it.


Week 5

Hello everyone! This week has primarily been all about vision. Vision is going to be the cornerstone of this project, and it has taken a while to get right. This week I had three separate periods to work on my independent project.

Day 1

On the first day, with the gracious help of Mr. Beck, I wired up a model driver-station. This allowed me to bring an electronic model of the robot home and test code over the weekends.

Here is a picture of my workspace. On the left is the driver-station. It includes a battery, a power distribution panel, a roboRIO, and the Limelight. I also have the main robotics PC and some monitors to help with work.

Day 2

The second day was mainly focused on Limelight setup and calibration. With the help of Mr. Beck, I got the Limelight hooked up to the driver-station through an ethernet splitter and passive PoE (Power over Ethernet). I will probably upload a video or tutorial showing how to get through this process. After I finished the electronics, I installed the new PhotonVision library that has been created by some wonderful FRC teams. The library enhances the default Limelight code and adds some great features, like multi-object targeting, camera calibration, and more!
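
Reading targets through PhotonLib, PhotonVision's vendor library, ends up being pleasantly short. A small sketch, where the camera name is whatever was configured in the PhotonVision UI:

```kotlin
import org.photonvision.PhotonCamera

// Sketch of reading targeting data through PhotonLib. The camera name
// must match the one configured in the PhotonVision web interface.
val camera = PhotonCamera("limelight")

fun readTarget() {
    val result = camera.latestResult      // one frame's pipeline output
    if (result.hasTargets()) {
        val best = result.bestTarget      // chosen according to the pipeline's sort mode
        println("yaw=${best.yaw} pitch=${best.pitch} area=${best.area}")
    }
}
```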


Day 3

During my third day and over the weekend, I began finding and installing the main algorithm that will use the camera to locate the robot on the field. This is an entire area of research known as Visual Simultaneous Localization and Mapping (Visual SLAM). These algorithms have a wide variety of applications, from AR/VR to autonomous vehicles. With the help of pietroglyph on Chief Delphi, I decided to install an algorithm called UcoSLAM. This algorithm is optimal for my uses because it is largely open-source, allows for quicker tracking, and has other features that will allow for easy testing. I had some issues getting it integrated into my system, but it is mostly working now.

What’s Next?

Over the course of this week I hope to integrate the UcoSLAM algorithm with my robot. This will involve learning how to translate between Kotlin (JVM), which my robot code is written in, and C++, which UcoSLAM is written in.


Week 4

I have been making some solid progress over the past week. I managed to fix the wiring issues I had with my Pigeon, and my previously written auto code works great. A couple of hardware issues have continued causing problems. Primarily, something seems to be wrong with the mecanum drivetrain I have been using: when I try to strafe sideways, it also drifts a bit forward or backward. Here is a video of the results.

Coming soon: I will set up the Limelight at some point this week. This will allow me to start testing ball detection (using HSV filtering), fiducial localization, and ORB-SLAM. Explanations of these algorithms will be up by next weekend.
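
As a preview of the HSV-filtering idea: convert the frame to HSV, then keep only the pixels inside a hue/saturation/value window. A sketch with OpenCV, where the yellow-ish bounds are placeholders that would need tuning on real footage:

```kotlin
import org.opencv.core.Core
import org.opencv.core.Mat
import org.opencv.core.Scalar
import org.opencv.imgproc.Imgproc

// Sketch of HSV filtering for ball detection. The bounds below are
// placeholder values for a yellow-ish ball and would need tuning.
fun ballMask(bgrFrame: Mat): Mat {
    val hsv = Mat()
    Imgproc.cvtColor(bgrFrame, hsv, Imgproc.COLOR_BGR2HSV)
    val mask = Mat()
    // OpenCV hue runs 0..179; this window roughly covers yellow.
    Core.inRange(hsv, Scalar(20.0, 100.0, 100.0), Scalar(35.0, 255.0, 255.0), mask)
    return mask   // white pixels = candidate ball pixels
}
```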


Week 3 Updates

I have made a substantial amount of progress so far. Here are a couple of the highlights from the past few weeks.

I have been working on plugging the Pigeon IMU gyroscope into the CAN bus. While it was initially working, I keep having issues with the CAN bus disconnecting. This is still a work in progress.

I developed the initial robot architecture. It now has a CommandScheduler that manages commands and paths. Additionally, it has prototype vision code and an Informed RRT* pathfinding algorithm. Look through the pages for more details.
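
The scheduler side is the standard WPILib command-based pattern: pump the CommandScheduler once per loop, and it runs whatever commands (like path-following) are queued. A minimal sketch:

```kotlin
import edu.wpi.first.wpilibj.TimedRobot
import edu.wpi.first.wpilibj2.command.CommandScheduler

// Minimal sketch of the standard WPILib command-based entry point.
class Robot : TimedRobot() {
    override fun robotPeriodic() {
        // Polls subsystem periodic() methods and runs scheduled commands each loop.
        CommandScheduler.getInstance().run()
    }
}
```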


Introduction

What is this Project?

This project is meant to be an exploration into all elements of robotics and automation. The goal is to create an FRC robot that can drive itself. This is a complicated process that involves reading multiple cameras and sensors and then performing complex analysis and planning.

This is the mini-FRC robot I will be modifying. It uses a mecanum drivetrain.

To view the robot code, go here.