Technology giants such as Google, Facebook Inc (NASDAQ:FB), and Microsoft Corporation (NASDAQ:MSFT) have grown hungrier for data thanks to the recent emergence of a powerful set of machine learning techniques known as deep learning.
Deep learning is what gives software the ability to learn to do things such as recognizing images and understanding language.
Many of the problems where deep learning could be an invaluable asset, however, involve data that is hard to come by or, in some cases, held by organizations unwilling to share it. According to Apple CEO Tim Cook, some customers are already concerned about companies that collect all of their private and personal information.
Vitaly Shmatikov, a professor at Cornell Tech who studies privacy, says, “A lot of people who hold sensitive data sets like medical images are just not going to share them for legal and regulatory concerns. In some sense, we’re depriving these people of the benefits of deep learning.”
He and researchers at Microsoft and Google are trying to find ways around the privacy problem. By providing ways to train the artificial neural networks used in deep learning without collecting everyone’s raw data, they hope to build smarter software and, in doing so, convince keepers of sensitive data to make use of the systems they build.
Shmatikov and his colleague Reza Shokri are testing what they call ‘privacy-preserving deep learning.’ The method lets multiple organizations effectively combine their data to train deep-learning software without the risk of actually sharing that data.
In their scheme, each organization trains deep-learning algorithms on its own data and shares only the key parameters from the trained software. These can then be combined into a system that performs almost as if it had been trained on all the data at once.
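To give a rough sense of the parameter-sharing idea, here is a hypothetical sketch in which two organizations each fit a simple linear model on their own private data and then share only the learned weights, which are averaged into a combined model. (Plain parameter averaging is an illustrative simplification, not the researchers’ actual protocol, which exchanges selected updates during training.)

```python
# Sketch: two organizations train locally; only parameters leave each site.

def train_linear(data, lr=0.1, epochs=200):
    """Fit y = w*x + b by gradient descent on (x, y) pairs."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return w, b

def combine(params):
    """Average parameters from several locally trained models."""
    ws, bs = zip(*params)
    return sum(ws) / len(ws), sum(bs) / len(bs)

# Each organization's private data happens to follow y = 2x + 1.
org_a = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]
org_b = [(0.5, 2.0), (1.5, 4.0), (2.5, 6.0)]

# Only the trained parameters are shared, never the raw records.
shared = [train_linear(org_a), train_linear(org_b)]
w, b = combine(shared)
print(w, b)  # close to the underlying slope 2 and intercept 1
```

The combined model behaves much like one trained on the pooled data, even though neither organization ever saw the other’s records.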
The Cornell research was partly funded by Google, which has published a paper about similar experiments of its own and is talking to Shmatikov about his ideas. The firm’s researchers found a way to train the company’s deep-learning algorithms on data such as smartphone images without transferring that data to Google’s cloud.
That makes it easier for Google to leverage the information held on mobile devices, they wrote. Google declined to comment on the research, though Shmatikov said he believed the company was still working on it.
Microsoft’s cryptography team, meanwhile, has developed its own solution: a way to train deep-learning software on encrypted data and have it spit out encrypted answers. The researchers made this possible through homomorphic encryption, a technique that allows mathematical operations to be performed on encrypted data while producing encrypted results. They have tested the idea using software that recognizes handwriting and a system that estimates a patient’s risk of pneumonia from his vital signs.
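To illustrate what “math on encrypted data” means, here is a toy sketch using the classic Paillier cryptosystem, which is additively homomorphic: multiplying two ciphertexts yields an encryption of the sum of their plaintexts. (This is a simplification for illustration only; Microsoft’s system is built on a different, more powerful homomorphic scheme, and the tiny key below is wholly insecure.)

```python
import math
import random

def keygen(p, q):
    """Paillier key generation from two primes (toy-sized here)."""
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
    mu = pow(lam, -1, n)  # modular inverse; valid when g = n + 1
    return n, lam, mu

def encrypt(n, m):
    """Encrypt m as c = (n+1)^m * r^n mod n^2, with random blinding r."""
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(n, lam, mu, c):
    """Decrypt via m = L(c^lam mod n^2) * mu mod n, where L(x) = (x-1)/n."""
    x = pow(c, lam, n * n)
    return ((x - 1) // n) * mu % n

# Toy key from small primes -- for illustration, never for real use.
n, lam, mu = keygen(293, 433)

c1, c2 = encrypt(n, 17), encrypt(n, 25)
# Multiplying ciphertexts adds the hidden plaintexts: 17 + 25 = 42.
total = decrypt(n, lam, mu, (c1 * c2) % (n * n))
# Raising a ciphertext to a power scales the plaintext: 3 * 17 = 51.
scaled = decrypt(n, lam, mu, pow(c1, 3, n * n))
print(total, scaled)  # 42 51
```

Because sums and scalar multiples can be computed without ever decrypting, a server can evaluate linear steps of a model on data it cannot read, which is the core trick behind running neural networks on encrypted inputs.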
The resulting system, dubbed CryptoNets, requires more computing power than traditional deep-learning software to do the same work. But Kristin Lauter, who leads Microsoft’s cryptography research, says the gap is small enough that CryptoNets could become practical for real-world use. “The health, financial, and pharmaceutical industries are where I think this is most likely to be used first,” she says.