Darrick Horton is the Founder and CEO of TensorWave, a leading AI compute and cloud solutions company based in Las Vegas, Nevada. As a serial entrepreneur, Darrick has successfully founded and led several technology startups, focusing on data center, cloud infrastructure, and semiconductor technology. His career also includes experience on Lockheed Martin’s Skunk Works team, where he worked on nuclear fusion. Prior to that, he contributed to several research projects including NASA-funded plasma physics research as well as astrophysics research with the LIGO project. Beyond his professional achievements, Darrick has served as the President of Engineers Without Borders, where he dedicated his efforts to supporting the development of disadvantaged communities worldwide.
What inspired you to start TensorWave, and what are the main goals of the company?
Darrick Horton: I believe that monopolies should not exist. So, when I saw what was happening in the GPU market, I felt compelled to do something about it. The dominance of a single player in the GPU space was creating barriers to innovation and access. This inspired me to start TensorWave with the vision of democratizing access to AI compute and providing a viable alternative in the market.
Our main goals are to restore choice and competition in the industry, ensuring that businesses of all sizes have the opportunity to leverage powerful AI technologies without being constrained by supply chain issues or prohibitive costs. By offering advanced, readily available AI compute solutions like the AMD MI300X, we aim to level the playing field and foster an ecosystem where innovation thrives. Ultimately, we want to empower organizations to achieve their AI ambitions and drive forward technological progress.
Can you explain how TensorWave’s technology addresses the current challenges in AI compute and GPU scarcity?
Darrick Horton: TensorWave addresses the current challenges in AI compute and GPU scarcity by providing compute resources through a supply chain completely independent of our competitors'. The global demand for GPUs has skyrocketed, particularly for AI applications, leading to significant supply chain constraints and long wait times, especially for Nvidia-based solutions.
Our approach focuses on AMD-based compute solutions, specifically leveraging the advanced capabilities of the AMD MI300X GPU. By not depending on Nvidia’s supply chain, we can offer immediate availability of high-performance GPUs, which is a critical advantage for startups and enterprises needing to scale their AI workloads quickly and efficiently.
The MI300X not only matches but often exceeds the performance of competing GPUs, particularly for memory-intensive AI tasks. This allows us to provide a superior price-to-performance ratio, ensuring that our customers can achieve their AI objectives without the typical delays and at a lower cost. Additionally, our solutions support open standards, which prevents vendor lock-in and offers our clients greater flexibility and potentially lower total cost of ownership.
By breaking away from the traditional supply chains and offering powerful, readily available AMD-based compute resources, TensorWave is effectively mitigating the GPU scarcity issue and enabling organizations to pursue their AI initiatives without hindrance.
What specific innovations or technological advancements has TensorWave introduced under your leadership?
Darrick Horton: Under my leadership, TensorWave has introduced several groundbreaking innovations and technological advancements that set us apart in the AI compute landscape.
Firstly, we were the first to deploy the AMD MI300X GPUs to customers in a production cloud environment. This was a significant milestone because it gave our clients access to cutting-edge GPU technology that was previously unavailable at scale. The MI300X's superior performance, especially in memory-intensive tasks, has been a game-changer for AI workloads.
Additionally, we were the first to support Long Context and FP8 (8-bit floating point) on AMD GPUs for inference workloads. Long Context support is crucial for applications requiring extensive sequence processing, such as natural language processing and large-scale data analysis. By optimizing FP8 performance, we've enabled more efficient and faster AI computations, which directly benefits our customers' productivity and capacity to innovate.
Moreover, TensorWave has pioneered new network architectures leveraging 800Gb/s Ethernet and RDMA over Converged Ethernet version 2 (RoCEv2) months before any of our competitors. This advanced networking infrastructure significantly reduces latency and increases data throughput, making it ideal for high-performance computing environments. The adoption of 800Gb/s Ethernet has allowed us to build more robust and scalable cloud solutions, ensuring our customers can handle the most demanding AI and machine learning tasks with ease.
These innovations not only demonstrate TensorWave’s commitment to pushing the boundaries of AI technology but also highlight our role in providing superior, reliable, and cutting-edge compute solutions to meet the growing demands of the industry.
How does TensorWave’s approach to AI compute solutions differ from other companies in the industry?
Darrick Horton: TensorWave focuses on delivering the best possible experience for our customers. We go out of our way to ensure that customers’ workloads are optimized for AMD hardware, and we assist with transitions where needed. We are constantly pushing the envelope and deploying new technologies to deliver better solutions for our end users.
Can you share some examples of how TensorWave’s solutions have positively impacted your clients and the AI community?
Darrick Horton: The positive impact we've had on clients and the broader AI community comes from several key aspects of our approach, all reflecting our commitment to delivering the best possible experience for our customers.
Our focus is on optimizing workloads specifically for AMD hardware. This means we don’t just provide the hardware; we work closely with our customers to ensure their AI and compute tasks are finely tuned to take full advantage of AMD’s unique capabilities. This hands-on assistance includes helping customers transition from other platforms, ensuring a smooth and efficient migration process.
We are dedicated to pushing the envelope by constantly deploying new technologies. Our team is at the forefront of innovation, integrating cutting-edge advancements like the AMD MI300X GPUs and 800Gb/s Ethernet infrastructure into our solutions. This proactive adoption of new technologies allows us to deliver superior performance and reliability to our end users, staying ahead of the curve in an ever-evolving industry.
The company places a strong emphasis on customer service. We offer a white-glove service model, ensuring that our customers receive personalized support tailored to their specific needs. This level of dedication helps us build strong, long-term relationships and fosters a deep trust in our capabilities.
TensorWave differentiates itself through our commitment to optimizing for AMD hardware, our relentless pursuit of technological innovation, and our exceptional customer service. These elements combined ensure that our clients receive not only the best hardware solutions but also the most effective and efficient support to maximize their AI and compute performance.
What are the key partnerships or collaborations that have been crucial to TensorWave’s success and growth?
Darrick Horton: Key partnerships have been instrumental to TensorWave’s success and growth, allowing us to leverage expertise, technology, and resources that align with our mission to democratize AI compute.
Our collaboration with AMD is at the core of our technology stack. By focusing on AMD-based compute solutions, we have been able to sidestep the supply chain issues that affect other GPU providers. AMD’s advanced MI300X GPUs enable us to offer superior price-to-performance ratios and immediate availability, which are critical for our customers’ AI training and inference workloads.
Another significant partnership is with Broadcom, which has been crucial in advancing our network infrastructure. Broadcom’s cutting-edge networking technology supports our high-bandwidth, low-latency requirements, enabling us to deploy new network architectures like 800Gb/s Ethernet and RoCEv2. This collaboration ensures that our systems can handle the most demanding AI workloads with efficiency and reliability.
We also work closely with Edgecore, which provides the hardware for our networking solutions. Edgecore’s switches and related equipment are integral to building our high-performance compute clusters. Their hardware, combined with Broadcom’s technology, allows us to create scalable, robust, and cost-effective networking solutions tailored to our customers’ needs.
These partnerships, among others, enable TensorWave to stay ahead of industry trends and continuously innovate. By integrating the best technologies from our partners, we are able to deliver unique and powerful AI compute solutions that help our customers overcome the challenges of GPU scarcity and computational efficiency. Our collaborative efforts not only drive our growth but also contribute to advancing the entire AI compute industry.
How do you envision the future of AI compute technology, and what role do you see TensorWave playing in it?
Darrick Horton: I envision a broad, competitive ecosystem of hardware providers, each optimized for slightly different workloads. I envision no monopolies. I envision cost-competitive solutions for customers of all sizes. TensorWave is leading the charge in bringing a competing solution to market that is performant, cost-effective, and scalable.
What advice would you give to other tech entrepreneurs looking to make a significant impact in the AI and compute technology space?
Darrick Horton: My primary advice for tech entrepreneurs aiming to make a significant impact in AI and compute technology is to focus on accessibility. Find ways to make AI tools more accessible to everyone, regardless of their resources or technical expertise. Here are a few key points to consider:
- Democratize Technology: Ensure that your technology solutions are accessible to a wide range of users. This means creating affordable, scalable, and user-friendly products that can be utilized by both small startups and large enterprises. At TensorWave, we are committed to democratizing AI compute by providing alternatives to the traditional GPU market, which has been dominated by a few key players.
- Leverage Open Standards: Utilize and contribute to open standards. This approach prevents vendor lock-in and promotes a more collaborative and innovative ecosystem. By supporting open standards, you can offer greater flexibility and lower total cost of ownership for your customers.
- Focus on Customer Needs: Always prioritize the needs and challenges of your customers. Work closely with them to understand their pain points and tailor your solutions accordingly. At TensorWave, we ensure our customers’ workloads are optimized for our hardware and assist them through transitions to our solutions.
- Innovate Continuously: Stay ahead of industry trends by constantly pushing the envelope and deploying new technologies. Innovation should be at the core of your strategy. TensorWave’s advancements in deploying MI300X GPUs and pioneering new network architectures are examples of how continuous innovation can set you apart.
- Build Strong Partnerships: Collaborate with key industry players to leverage their expertise and technology. Our partnerships with AMD, Broadcom, and Edgecore have been crucial to our success. These collaborations allow us to integrate the best technologies into our solutions and deliver superior performance to our customers.
- Stay Resilient and Adaptable: The tech industry is fast-paced and constantly evolving. Stay resilient in the face of challenges and be adaptable to changing market conditions. Your ability to pivot and innovate in response to new developments will determine your long-term success.
Anything else we should know?
Darrick Horton: TensorWave has GPUs available for customers today!
Jerome Knyszewski, VIP Contributor to ValiantCEO and the host of this interview, would like to thank Darrick Horton for taking the time to do this interview and share his knowledge and experience with our readers.
If you would like to get in touch with Darrick Horton or his company, you can do so through his LinkedIn page.
Disclaimer: The ValiantCEO Community welcomes voices from many spheres on our open platform. We publish pieces as written by outside contributors with a wide range of opinions, which don’t necessarily reflect our own. Community stories are not commissioned by our editorial team and must meet our guidelines prior to being published.