Team:Michigan Software/Project
Description
Choosing reliable protocols for new experiments is a problem laboratories routinely face. Even when drawing from research publications, it is difficult to determine which protocols will produce the best results. Experimental practices differ immensely across laboratories, and precise details of these practices may be lost or forgotten as skilled members leave the lab. These two realities give rise to a vast number of experimental protocols without their original curators around to describe them. Furthermore, no tool yet exists that lets wet-lab investigators measure and compare the efficacy of protocols before executing them.
Such fragmentation in protocol methods and their documentation often hampers scientific progress. Indeed, few well-defined protocols are generally agreed upon by the scientific community, in part because no system exists to measure a protocol’s success. In turn, the lack of commonly accepted protocols and their inadequate documentation undermine experimental reproducibility, since methods remain inconsistent across laboratories.
To address these problems, we set out to build a database that integrates a crowdsourced ratings and comments system to clearly document, rate, elaborate on, review, and organize variants of experimental protocols. Such a tool serves as a curator for protocol variants, enables investigators to compare the efficacy and community acceptance of protocols via crowdsourced ratings, and provides an avenue for transferring experiential knowledge through protocol comments. In all, we hope these tools will help students and investigators document, organize, and compare protocols to assist with scientific experimentation.
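A database of this shape maps naturally onto the Django-plus-SQLite stack named under Methods below. The following is a minimal sketch of what such models could look like; every model name and field here is an illustrative assumption, not the team's actual implementation.

from django.db import models
from django.contrib.auth.models import User

class Protocol(models.Model):
    """One documented variant of an experimental protocol."""
    title = models.CharField(max_length=200)
    steps = models.TextField(help_text="Full text of the protocol.")
    # Variants of the same base protocol point back to a parent entry,
    # so related versions stay organized together.
    parent = models.ForeignKey("self", null=True, blank=True,
                               on_delete=models.SET_NULL,
                               related_name="variants")
    created_by = models.ForeignKey(User, on_delete=models.CASCADE)
    created_at = models.DateTimeField(auto_now_add=True)

class Rating(models.Model):
    """A crowdsourced efficacy rating for one protocol variant."""
    protocol = models.ForeignKey(Protocol, on_delete=models.CASCADE,
                                 related_name="ratings")
    user = models.ForeignKey(User, on_delete=models.CASCADE)
    score = models.PositiveSmallIntegerField()  # e.g. 1 (poor) to 5 (reliable)

    class Meta:
        unique_together = ("protocol", "user")  # one rating per user per variant

class Comment(models.Model):
    """Experiential notes attached to a protocol variant."""
    protocol = models.ForeignKey(Protocol, on_delete=models.CASCADE,
                                 related_name="comments")
    user = models.ForeignKey(User, on_delete=models.CASCADE)
    text = models.TextField()
    created_at = models.DateTimeField(auto_now_add=True)

Under this hypothetical schema, comparing protocol variants reduces to a single aggregation, e.g. Protocol.objects.annotate(avg=Avg("ratings__score")).order_by("-avg") (with Avg imported from django.db.models), and SQLite works out of the box as Django's default database backend, requiring no separate database server.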
Aims
Methods
Django package for Python
SQLite
Success
Future Directions