Why Deep Tech and Infrastructure are Sexy Again

March 19th, 2019

“The Future of Computing” seminar organized by Partech, with Accenture and Microsoft, brought together 70 key members of our portfolio companies and significant figures in the computing industry. The event, which was hosted at the Partech Shaker, saw several key speakers share their insights on the future of deep tech and infrastructure as well as how their specific roles and revolutionary products are changing the industry forever.

Reza Malekzadeh, General Partner in our San Francisco office, began the morning by explaining the increasing opportunity and need for new hardware and deep tech within the world of computing. As the industry becomes ever more interesting and exciting, Reza touched on how “infrastructure and deep tech are finally sexy again”, before briefly introducing the following speakers.

Marc Bousquet, Managing Director and Technology Lead France at Accenture, gave a brief overview of the digital transformation that continues to shake the computing industry and its consequences for modern society. He underlined his day-to-day relationship with start-ups and the remarkable ways in which they push the boundaries of conventional tech. However, Marc stressed the need to shift focus towards infrastructure rather than the ever-popular building and design of applications.

He continued by pointing to examples of progressive uses of deep tech in the digital world. First, he described how Google and New Balance teamed up during New York Fashion Week to screen people around the venue, analyse their style and their shoes, and from that define a trend. In a partnership with Airbus, Accenture used glasses fitted with software that showed workers where and how to install the seats, making the manufacturing of a plane easier. He also shared his excitement about a start-up that looks at a social media profile and, using AI to screen an individual’s photos and habits, assigns them a credit score regardless of any prior banking history. Marc concluded by highlighting the new necessity to transform our ways of working and our culture by leveraging our technical skills, confirming that digital transformation is beginning to drive cultural transformation.

Nader Salessi, CEO of NGD Systems, talked about how he and his team have set out to create a “paradigm shift in computational storage”. He explained the forces driving this movement towards computational storage, chief among them the continuous growth of data and the physical space and energy required to store it all. “Data is the new oil”, stated Nader, as he underlined the need for a new generation of storage devices. NGD Systems provides this alternative with a computational storage solution that embeds AI and other deep tech software directly in the device. Nader mentioned possible use cases for the product, underlining how it could work in the hyperscale data centers used by Microsoft Azure and Amazon Web Services, for example, helping to decrease energy costs and physical footprint while maintaining a high level of functionality. By creating a storage device that can both compute and store data, simultaneously and locally, they can decrease “the time it takes to process data by 6x, and the energy consumed by 3x”.

He also underlined how it could be deployed in the modern distributed “Intelligence Edge”, both acting on and storing data locally. The product they have built is remarkably capable and can run in-storage image classification and image similarity search, as well as many other types of AI software. Their “unique, innovative and disruptive technology”, Nader concluded, allows any business to scale up without data storage issues. They have also managed to prevent CDN encryption bottlenecks by providing a computational storage system that reduces CPU load and the user’s TCO while lowering latency.
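
To make the idea concrete, here is a minimal sketch of why computing inside the storage device saves bus traffic: the conventional path ships every record to the host for filtering, while the computational path runs the filter on the drive and returns only the matches. The class and method names are invented for this illustration, not NGD Systems’ actual API.

```python
# Minimal sketch of the computational storage idea: run the filter
# where the data lives and move only the results across the bus.
# ConventionalDrive / ComputationalDrive are invented names for this
# illustration, not NGD Systems' actual API.

class ConventionalDrive:
    def __init__(self, records):
        self.records = records

    def read_all(self):
        # Conventional path: every record crosses the bus to the host.
        return list(self.records)


class ComputationalDrive(ConventionalDrive):
    def query(self, predicate):
        # Computational path: the predicate runs on the drive's own
        # processor, so only matching records leave the device.
        return [r for r in self.records if predicate(r)]


records = [{"id": i, "label": "cat" if i % 10 == 0 else "other"}
           for i in range(100_000)]

host_side = [r for r in ConventionalDrive(records).read_all()
             if r["label"] == "cat"]       # 100,000 records moved to host
in_storage = ComputationalDrive(records).query(
    lambda r: r["label"] == "cat")         # ~10,000 records moved

assert host_side == in_storage
```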

Benjamin Schilz, Co-Founder of Acorus Networks, began his presentation by underlining how his company has changed the archaic nature of DDoS (Distributed Denial of Service) mitigation. DDoS attacks occur around 10 million times a year, with the aim “to block a system, web service or company network”, stated Benjamin. All online services are at risk, from “ecommerce to SaaS”, with attacks becoming ever easier to launch and ever more sophisticated and potent. Benjamin cited the recent example of a series of DDoS attacks that brought down the whole of Sweden’s train network by targeting its signalling system.

Benjamin underlined how Acorus rethinks the way we protect these services. His company has taken outdated forms of mitigation, originally done with onsite appliances deployed much like firewalls, and built a worldwide network of scrubbing centers that process traffic as close to the source as possible. With new, hard-to-detect probing techniques appearing in attacks, strategically timed so they do not show up in graph data, Acorus felt the need to build a “performance and security centric” network that protects a company “no matter where”. Benjamin concluded by saying that a business should expect “the best service during an attack, not the best effort”. To deliver this, Acorus are “moving DDoS mitigation to the Edge era” with “ongoing deployment” as close to the source as it can get.
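
As a toy illustration of what a scrubbing layer does, the sketch below implements a per-source token bucket that sheds excess traffic before it reaches the protected origin. The rates and structure are invented for the example; Acorus’ actual mitigation is far more sophisticated than a single rate limiter.

```python
import time
from collections import defaultdict

# Toy scrubbing filter: a per-source token bucket that sheds traffic
# exceeding its budget before it reaches the origin. The rates and
# structure are illustrative, not a description of Acorus' system.

RATE = 100.0      # tokens replenished per second, per source
BURST = 200.0     # maximum bucket size

buckets = defaultdict(lambda: {"tokens": BURST, "last": time.monotonic()})

def admit(source_ip: str) -> bool:
    """Return True if the packet should be forwarded to the origin."""
    b = buckets[source_ip]
    now = time.monotonic()
    b["tokens"] = min(BURST, b["tokens"] + (now - b["last"]) * RATE)
    b["last"] = now
    if b["tokens"] >= 1.0:
        b["tokens"] -= 1.0
        return True
    return False  # bucket empty: drop at the edge

# A well-behaved client passes; a flood from one source gets shed.
print(admit("203.0.113.7"))                        # True
flood = sum(admit("198.51.100.9") for _ in range(1000))
print(f"{flood} of 1000 flood packets admitted")   # roughly BURST
```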

Mario Trentini, R&D Director of Product at Mipsology, was next to talk about his team’s product Zebra, a compute engine for Deep Learning. The French startup was founded four years ago and designs high-performance Deep Learning solutions using FPGAs. Mario began by highlighting the significant growth of machine learning over the last few years. “There are two main steps to machine learning”, continued Mario: first, “training (or learning)”, in which a data scientist creates neural networks able to learn from existing data; and second, “inference”, which uses the results of the training to answer a question. As ever more complex neural networks are created, powerful CPUs and GPUs are needed to meet execution targets. Mario discussed the decision process behind choosing FPGAs to run their Machine Learning software, underlining how they have managed to mask the complexity and design difficulty of programming them while still delivering the “high performance and flexibility” that makes FPGAs stand out from other processing units.
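
The two steps Mario described map directly onto everyday ML tooling. Here is a minimal scikit-learn example, purely to ground the vocabulary (this is not Mipsology’s stack, which runs inference on FPGAs):

```python
# The two steps of machine learning in miniature (scikit-learn).
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Step 1 - training: a model learns from existing labelled data.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# Step 2 - inference: the trained model answers new questions.
print("predicted digit:", model.predict(X_test[:1])[0])
print("test accuracy:", round(model.score(X_test, y_test), 3))
```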

Zebra can fit anywhere: “in data centers, Edge or embedded”. Mario underlined that alongside this universality, it is also remarkably easy to use: it is “fully integrated for DL inference computing” and works with all neural networks. He pointed out several use cases around video and images, demonstrating its practicality in object detection and segmentation. Mario closed his presentation by showing how their performance is increasing fourfold each year, bringing down both price and recurring cost as it does.

Guillaume Delaporte, Co-founder and VP of Customer Success at OpenIO, began by presenting their next-gen object storage for cloud and on-premise systems. Emphasizing the need for it, Guillaume pointed to the continuously increasing amount of data produced each year: “applications come and go but data persists”. He continued by explaining that the storage market is split in two, “latency focused” and “capacity focused”, with never any real balance between them. “Traditional data solutions cannot keep up”, Guillaume added, citing issues such as expensive proprietary hardware and software, a lack of scalability and vendor lock-in.

OpenIO brings a flexible and smart solution to object storage, focusing on giving clients the ability “to deploy technology in different continents” and “to scale quickly”. “Don’t let storage stunt your growth”, Guillaume remarked. OpenIO have changed conventional storage by building a “conscience” system with a dynamic chunk dispatcher. The system, Guillaume explained, allows for real-time load balancing: it computes a quality score for each disk and selects the one with the best score, effectively managing the storage policy. This process allows for “optimal positioning of the data, flexible scale out/up and high performance”. He underlined how they also build data platforms to last, around 5–7 years, with good control of data retention: keeping and protecting important data while automatically deleting trash. Their software “offers an optimized TCO with linear performance growth and continued performance with scale”. Guillaume highlighted the multiple use cases of their products, pointing to a success story with Dailymotion in which they displaced EMC as its storage provider and reduced its TCO by 50%. He concluded by noting the open source nature of their solution and their continually growing customer base of over 30 large clients.
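
A rough sketch of that selection loop: score every disk, place the next chunk on the one with the best score, and let the scores rebalance as load shifts. The weights below are invented for illustration; OpenIO’s conscience service computes its scores from real metrics in its own way.

```python
# Toy version of score-based chunk placement: rank disks by a quality
# score and send the next chunk to the best one. The scoring weights
# are invented; OpenIO's conscience service scores disks differently.

disks = [
    {"id": "disk-1", "free_ratio": 0.80, "io_load": 0.30},
    {"id": "disk-2", "free_ratio": 0.40, "io_load": 0.10},
    {"id": "disk-3", "free_ratio": 0.95, "io_load": 0.85},
]

def quality_score(disk):
    # More free space is better, heavy I/O load is worse.
    return 0.6 * disk["free_ratio"] + 0.4 * (1.0 - disk["io_load"])

def place_chunk():
    best = max(disks, key=quality_score)
    best["free_ratio"] -= 0.01   # placing data consumes capacity,
    best["io_load"] += 0.05      # and briefly raises the disk's load,
    return best["id"]            # so scores rebalance in real time.

for _ in range(5):
    print("chunk ->", place_chunk())
```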

Next to talk was Frédéric Plais, CEO of Platform.sh. He was proud to share their vision for “the grand reunification” of software production. Frédéric explained that “every company is now a software company” and, as a consequence, you are “expected to deliver software or you will get disrupted by people that do”. However, Frédéric underlined the difficulty of writing good software, pointing to the problem of staging, merely “a pale copy of production”, and the frequent bottlenecks and high cost of the process. “Teams are getting better”, he continued, with “agile methodologies… ways of writing software have become easier”. This led him to share how Platform.sh have unified the development, testing and production of software.

“Platform.sh was built on the idea that the application comes first”, stated Frédéric. He underlined how, increasingly, all the attention nowadays goes to a software product’s features and UI rather than what is going on behind the scenes. Within two years, Platform.sh “were able to clone an application in less than 30 seconds with perfect replication”. As a result, Frédéric highlighted, staging has become very cheap while each Git branch still gets its own staging environment, bringing the “ability to test and deploy with no bottleneck”. Their software makes development teams up to 40% more efficient when building an app or a new feature for it. Frédéric stressed how this “leaves more time to spend on the application” and, more importantly, “time on what matters”. He concluded by demonstrating a few of their use cases: their “managed production runtime” has led companies such as Johnson & Johnson and Arsenal Football Club to use their service to deploy with no downtime when it matters most.
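
The per-branch model is easy to picture in outline: each Git branch gets its own isolated clone of production to test against. The toy sketch below fakes that mapping with a dictionary and a deep copy; the names are hypothetical, and Platform.sh’s real mechanism clones the full application with its services and data, not an in-memory dict.

```python
# Toy model of "every Git branch gets its own staging environment".
# Names and structure are hypothetical, not Platform.sh's API.

import copy

production = {
    "code_rev": "main@a1b2c3",
    "database": {"users": 1_204_331},
    "services": ["web", "worker", "redis"],
}

environments = {"main": production}

def create_environment(branch: str) -> dict:
    # Cloning production gives the branch a faithful copy to test
    # against, instead of a hand-built "pale copy of production".
    env = copy.deepcopy(production)
    env["code_rev"] = f"{branch}@HEAD"
    environments[branch] = env
    return env

staging = create_environment("feature/checkout-v2")
staging["database"]["users"] += 1                     # experiment safely...
assert production["database"]["users"] == 1_204_331   # ...production untouched
print(sorted(environments))
```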

Last to speak was Bernard Ourghanlian, CTO and CSO of Microsoft in Europe. Bernard began the talk by sharing a quote from Mark Weiser, widely considered the “father of ubiquitous computing”: “The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it”. “Good technology is supposed to disappear”, underlined Bernard as he shared how Microsoft are trying to bring storage closer to the Edge. Referring to a study by Gartner, he highlighted how new IoT technology will decrease data center storage by 10%, and how we therefore need to build a more intelligent and distributed environment.

Microsoft are investing more and more in a global infrastructure, with the aim of putting data centers in the ocean, near everyone and with the lowest latency possible. Bernard stressed how he and his team “are trying to embrace Cloud and Edge at all levels”, particularly by incorporating it with Azure. All of this serves the goal of worldwide vertical integration, “from Edge to Cloud and down”, and “from the smallest device to the largest data center”. The three ideas driving this are “people-centered experience, AI and Ubiquitous Computing”, stated Bernard. Sharing more on the topic of AI, he pointed out how Microsoft first implemented AI over 20 years ago to sort spam, and now aim to “run it everywhere and democratise its use by providing all the building blocks that are needed to execute it at scale”.

Bernard stated that “the idea of having a smartphone to do everything is dumb”. “Devices and software come and go within a single year”, he continued, “the individual should therefore be at the center, not the device”. With our level of interaction with devices growing every day, Bernard concluded by stressing the importance of the emerging application patterns of serverless software, artificial intelligence and multi-device practicality.

To conclude

The key speakers received a number of questions from the audience, most of them pertaining to their business models and how they distinguish their product or service from existing heavyweights and market leaders. To learn more about Partech portfolio companies, have a look at Partech’s website.

And the final words go to Reza Malekzadeh:

For the longest time, the world standardized on one of two processor types (x86 or ARM), one of two operating systems (Linux or Windows), one of two relational databases (Oracle or SQL Server), and so on. But the rise of cloud, new distributed architectures and IoT is enabling a new generation of infrastructure and datacenter technologies. The scale of disruption in the technology infrastructure landscape is unprecedented, creating huge opportunities and risks for industry players and their customers. All of these ambitious innovations will require more capital and capacity, but customers in the new IT infrastructure landscape will reward their efforts. Exciting times to be investing in deep technology!

Hungry for more?

You can read Reza’s blogpost on cloud computing.