Team:EPF Lausanne/Hardware


Hardware





With the living organisms now responding to touch, we have a signal that we need to retrieve and analyze in order to make use of it. One simple and fast way to do this is with a computer.
Because our microfluidic chip is small and portable, we wanted to keep this trait throughout the project, so instead of using a big and cumbersome device we opted for a Raspberry Pi.

Raspberry Pi

Raspberry Pi

The Raspberry Pi is a cheap (about 40 CHF) and small single-board computer that we use to monitor the light in each chamber of the microfluidic chip.



This is what the initial device looked like:

Raspberry Pi


In order to track the light we need a camera that is cheap and gives us flexibility.

Raspberry Pi

We chose the Raspberry Pi NoIR camera. NoIR stands for "No Infrared filter", meaning that this camera can also see near-infrared wavelengths.

Near infrared corresponds to wavelengths between 700 and 1000 nm, and we will use this feature of the camera to track the IFP. Moreover, getting our signal in the near infrared instead of the visible spectrum is a nice feature because very few things emit near-infrared light: if we see something, we can be quite confident that it is our bacteria. If we relied only on the visible spectrum, with no depth information, it would be very hard to tell whether the light comes from our organisms or from something else with a similar color or intensity.

The best plan is therefore to capture the spectrum up to the near infrared and then use a filter to remove the visible part.


Raspberry Pi Plan

Light tracking

In order to see the light change in a chamber, we need to track the light intensity. For this part we wrote a custom C++ program using OpenCV (you can find the entire code here: https://github.com/arthurgiroux/igemtracking/).
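
The full program is in the repository linked above; as an illustration only, here is a minimal sketch of the idea, assuming the Pi camera is exposed as a standard V4L2 video device and using made-up chamber coordinates: for every frame, read one rectangle per chamber and log its average intensity.

// Minimal sketch of the tracking loop (not the full program from the repository).
// The camera index and chamber coordinates are placeholders.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main() {
    cv::VideoCapture camera(0);                  // Raspberry Pi camera through the V4L2 driver
    if (!camera.isOpened()) {
        std::cerr << "Cannot open the camera" << std::endl;
        return 1;
    }

    // One rectangle per chamber of the microfluidic chip (example coordinates)
    std::vector<cv::Rect> chambers = {
        cv::Rect(100, 100, 50, 50),
        cv::Rect(200, 100, 50, 50)
    };

    cv::Mat frame, ycrcb;
    while (camera.read(frame)) {
        // Intensity is read from the luma (Y) channel of the YCrCb
        // color space, as explained below
        cv::cvtColor(frame, ycrcb, cv::COLOR_BGR2YCrCb);
        for (size_t i = 0; i < chambers.size(); ++i) {
            double intensity = cv::mean(ycrcb(chambers[i]))[0];
            std::cout << "chamber " << i << ": " << intensity << std::endl;
        }
    }
    return 0;
}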

Here's the result output:

The most common color space used by programs is RGB, where the color of a pixel is split into three components, Red, Green and Blue, each with a value between 0 and 255.

But many other color spaces exist and can be used for different applications; in our case we are interested in light intensity rather than in color.

In our code we use the YCrCb color space, which decomposes a pixel into three components: Y, the luma value (the intensity); Cr, the red difference; and Cb, the blue difference.

YCrCb
Representation of the YCrCb color space
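
To make the decomposition concrete, here is a small self-contained sketch (separate from our repository) that converts one chamber region to YCrCb with OpenCV and reads the mean luma; the image path and the chamber rectangle are placeholders.

// Minimal sketch: measure the light intensity of one chamber using the
// luma (Y) channel of the YCrCb color space.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main() {
    cv::Mat frame = cv::imread("frame.png");   // one frame grabbed from the camera (placeholder path)
    if (frame.empty()) {
        std::cerr << "Could not load the frame" << std::endl;
        return 1;
    }

    cv::Rect chamber(100, 100, 50, 50);         // example chamber position
    cv::Mat roi = frame(chamber);

    // Convert from BGR (OpenCV's default ordering) to YCrCb
    cv::Mat ycrcb;
    cv::cvtColor(roi, ycrcb, cv::COLOR_BGR2YCrCb);

    // Split into the three components described above
    std::vector<cv::Mat> channels;
    cv::split(ycrcb, channels);
    // channels[0] = Y  (luma, the intensity we are interested in)
    // channels[1] = Cr (red difference)
    // channels[2] = Cb (blue difference)

    double luma = cv::mean(channels[0])[0];
    std::cout << "Mean intensity of the chamber: " << luma << std::endl;
    return 0;
}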

Lenses

The Raspberry Pi camera that we use has a fixed lens that is not really suitable for us because we cannot change the focus, the aperture or the zoom.

We searched for a way to change the lens and get as much control over it as possible, and we found that the easiest way was to remove the original lens, put a CS mount on the board and attach a much bigger lens.

We found a cheap lens online (~45 $) and did the mounting ourselves.

The first thing to do is to unplug the camera module from the PCB; once that is done, carefully remove the lens by unscrewing it.


You will then see the camera sensor (CMOS).

You can then screw the CS mount onto the board and attach the new lens.

Here is what we see with this lens:

We can clearly see the chambers of the microfluidic chip.
