IDG Contributor Network: For data sharing in medical research, fundraising is only the first step
Last week, Sean Parker (Napster cofounder and, famously, Facebook’s founding president) announced the single largest donation ever made to support cancer immunotherapy research. Totaling $250 million, the gift will fund research conducted across six academic institutions, with the possibility of adding researchers if more funding is secured down the line.
It goes without saying that donations to support medical research are fantastic, particularly for fields like immunotherapy that have a harder time attracting traditional funding.
However, a project like this isn’t notable just for the size of the donation, but also for the breadth of coordination that will be required to synthesize research across so many organizations. As past experience shows, pioneering new models of research and discovery can be a challenge. For example, the now-defunct Prize4Life was founded to incentivize research into cures for ALS (Lou Gehrig’s disease). The organization was well funded and recognized for innovations such as a crowdsourced approach to data science intended to foster breakthroughs. The data experiment failed, however, and ultimately so did the organization.
More recently, Theranos has provided a cautionary tale for those looking to change processes without strong underlying data management and quality standards. The company is widely perceived to have an execution problem, but what it really has is a data problem: designing tests that rely on the collection, analysis, and management of massive amounts of private data is a very ambitious undertaking.
To maximize the odds of success for a research program of this scale, well-managed data integration and innovative data sharing will be key.
A few reasons why these areas in particular will be important:
- Ensuring that efforts are not duplicated: Research teams will need to be in lockstep so that limited resources and time are not wasted retreading ground other teams have already covered.
- Accurately reporting progress: As any researcher can tell you, a key part of the process is testing multiple variables and, more often than not, failing. This often happens quickly, and adjustments need to be communicated to other team members and addressed in as near to real time as possible.
- Data sharing: Effective data sharing mechanisms have to be developed to promote collaboration and innovation across the research teams. Existing cloud infrastructure, including machine learning services, should be leveraged to increase the probability of breakthroughs and the success of the program.
- Protecting research and data: When data is stored across multiple locations, the risk of system failure or data loss that could set back the overall project grows quickly. For this reason, establishing a strong network backup program, with routine integrity checks and a contingency plan in the event of failure, is invaluable (a minimal sketch of such checks follows this list).
- Laying the foundation for growth: Ideally, the current teams will be only a starting point for this important research. As donations grow, and presumably the roster of teams along with them, a strong framework for data sharing and reporting will be essential to keep the process running smoothly and efficiently.
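To make the deduplication and backup points concrete, here is a minimal sketch, in Python, of one way a consortium might use content hashes to spot overlapping datasets between sites and verify backup copies. The names here (build_manifest, find_overlap, verify_backup) are illustrative assumptions for this article, not part of any tooling the Parker-funded institutions have announced.

```python
import hashlib
from pathlib import Path

def file_digest(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute a SHA-256 digest of a file, reading in 1 MB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(data_dir: Path) -> dict:
    """Map each dataset file (relative path) to its content hash."""
    return {
        str(p.relative_to(data_dir)): file_digest(p)
        for p in sorted(data_dir.rglob("*"))
        if p.is_file()
    }

def find_overlap(manifest_a: dict, manifest_b: dict) -> set:
    """Hashes present at both sites: candidates for duplicated effort."""
    return set(manifest_a.values()) & set(manifest_b.values())

def verify_backup(manifest: dict, backup_dir: Path) -> list:
    """Return files whose backup copy is missing or corrupted."""
    problems = []
    for rel_path, expected in manifest.items():
        copy = backup_dir / rel_path
        if not copy.is_file() or file_digest(copy) != expected:
            problems.append(rel_path)
    return problems
```

Each site could publish its manifest to a shared store: comparing manifests flags duplicated effort before it wastes months, and running verify_backup against each replica catches silent corruption early.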
Still, the program is off to a good start, thanks both to the size of the donation and to the expertise Parker brings to the project. Personally, I am interested to see whether Parker will drive innovation in the research by leveraging his previous technology “sharing” experience.
This article is published as part of the IDG Contributor Network.