"In the world of deals, it's the buzz that beats the status quo."
Elias Groll
Welcome to ValiantCEO Magazine’s exclusive interview with Elias Groll, CEO of Codesphere, a pioneering figure in the realm of AI integration and transformative hosting solutions. Elias Groll’s journey is a testament to innovation and entrepreneurial spirit, beginning with his early fascination with coding, which eventually led him to study Computer Science at the age of 15.
After a remarkable stint as a teenage hacker, Elias ventured into Google’s Zurich office as an intern. His experiences there laid the foundation for what would become Codesphere, a company dedicated to revolutionizing the tech landscape.
At Codesphere, Elias is leading a mission to enhance the developer experience by utilizing the power of the cloud, eliminating DevOps bottlenecks, and significantly speeding up feature deployment while cutting infrastructure costs. Their patent-pending technology aims to reduce GPU costs for AI applications by an impressive 90%.
Elias’s vision for success in 2023 is dynamic and forward-looking, focusing on outperforming existing hosting platforms and establishing strong industry relationships.
Join us as Elias Groll unveils the transformative power of AI and self-hosting, setting the stage for the next generation of tech innovation.
Check out more interviews with entrepreneurs here.
We are thrilled to have you join us today, welcome to ValiantCEO Magazine’s exclusive interview! Let’s start off with a little introduction. Tell our readers a bit about yourself and your company.
Elias Groll: I am Elias Groll. I started coding before I turned 10. I started studying Computer Science at 15 because I hacked my school's server and a local university reached out to me, so I took university classes while still in high school. I also worked with SAP during that time. After graduating from high school, I joined Google in Zurich as an intern, and I founded Codesphere in 2020.
Codesphere leverages the cloud for an amazing developer experience and solves the DevOps bottleneck for the growing $800B cloud market.
The platform turns the sequential DevOps process into a multiplayer one, enabling businesses to launch new features 6x faster while saving 72% of infrastructure costs.
We are on a speed run to launch our new (patent-pending) technology for autoscaling AI apps, reducing GPU costs by 90%.
Codesphere makes:
- deployment trivial
- team collaboration easy
- scaling and monitoring your application hassle-free
Can you share with us your journey towards integrating AI into your business operations?
Elias Groll: We are unhappy with the overhype around claims like "AI is sentient," even though LLMs are without question the biggest revolution since the cloud. What they really bring is that you no longer need AI engineers to build a specific model for every use case; a single general-purpose model can serve many use cases without retraining and with only minimal work.
This enables organizations and individuals to optimize processes at scale. Closed-source models are still problematic due to the limited number of tokens per prompt, IP law, and access to sensitive data, but all of that can be solved with open-source models.
The token limit can be addressed by fine-tuning and connecting databases, while the IP belongs to the company that fine-tunes the model, and no data ever leaves it.
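The pattern described here, keeping prompts within token limits by pulling only the relevant records from a connected data source, can be sketched roughly as follows. This is a minimal illustration, not Codesphere's implementation; the keyword-overlap scoring stands in for a real vector search, and the document set is invented.

```python
# Minimal retrieval-augmented prompting sketch: instead of stuffing an
# entire knowledge base into the prompt (and hitting the token limit),
# retrieve only the most relevant records and prepend them as context.
# The keyword-overlap scoring below is an illustrative stand-in for a
# real vector search, not any specific vendor's API.

def score(query: str, doc: str) -> int:
    """Crude relevance: number of query words appearing in the document."""
    return sum(1 for w in query.lower().split() if w in doc.lower())

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k highest-scoring documents for the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble a compact prompt from the retrieved context only."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "Codesphere deployments start at $8 per month.",
    "The cafeteria serves lunch from 12 to 2.",
    "Self-hosted LLM setup takes about 90 seconds.",
]
prompt = build_prompt("How long does self-hosted LLM setup take?", docs)
```

The resulting prompt carries only the two best-matching records, so its length stays bounded no matter how large the connected database grows.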
Absolutely game-changing!
Since we are a technology company that likes to write blog posts about the latest innovations, we always keep an eye out and catch trends early on; AI was no different. Even before the ChatGPT release, we were using DALL-E-generated images for marketing and other company communications.
We had also built fun showcase projects with earlier LLM iterations (e.g. an ask-me-anything chatbot about our company on our website) but dismissed them for more serious use cases because the answer quality was disappointing.
Internally, our AI usage changed with the ChatGPT release: about half of our developers now use tools like Copilot to answer contextual questions that used to require time-consuming searches across Stack Overflow, though with no measurable additional success so far. We do not allow inputting proprietary code, however, because data leakage is a real concern with managed LLMs.
We use AI models to generate API integrations for customers.
Currently, our marketing team uses both LLMs, for tasks like building templates in unfamiliar frameworks, drafting documentation, and keyword research, and Stable Diffusion models for things like blog images.
The second major breakthrough for us came after Meta open-sourced their Llama2 model suite.
Until then, hosting your own large language model applications (as opposed to using managed offerings like OpenAI's) had been technically demanding and computationally very expensive, so it was a rather niche topic for tech companies offering AI-based services.
Basically overnight, that changed. With the possibility of providing similar-quality experiences to those offered by OpenAI, but with full control and ownership over your data, and at much lower cost at scale, almost all corporates and SMEs were suddenly looking to get their hands on models that are safe to use for internal or proprietary data, use cases they would not have wanted to share with companies like OpenAI.
Since we are in the business of making cloud deployments so easy, fast, and standardized that building and scaling software applications in the cloud is no problem, we realized we wanted to enable the same thing for LLM apps.
It took us only a few weeks to get the first prototypes running on our infrastructure, and at TechCrunch Disrupt we publicly launched our first GPU infrastructure optimized for running resource-intensive applications like large language models. With it, setting up a self-hosted solution can take as little as 1 minute 30 seconds and starts at prices as low as $8/month, making it very accessible and competitive with managed solutions.
What specific areas of your business have been most impacted by AI, and how?
Elias Groll: Many internal functions have gained efficiency through AI, most significantly our Marketing and Operations departments, I'd say.
More transformative has been setting ourselves up as a viable provider of hosting for AI infrastructure.
We are now not only consumers of AI but also provide services for anyone using AI; this has, of course, shifted some of our engineering and operations resources toward this area.
Our patent-pending technology makes it possible to request computing resources for AI applications on demand and to scale them based on the amount of traffic in a matter of seconds (soon milliseconds!). We are very confident that this will be nothing short of revolutionary once companies realize the full potential and adoption speeds up.
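As a rough illustration of demand-based scaling (not the patent-pending algorithm itself, whose details are not public), a scaler can map observed traffic to a replica count and release idle GPUs entirely. The capacity figures and scale-to-zero behavior below are illustrative assumptions.

```python
import math

# Toy demand-based scaler: given observed requests per second and the
# throughput of a single GPU replica, compute how many replicas to run.
# Scale-to-zero when idle and the hard replica cap are illustrative
# assumptions, not Codesphere's actual algorithm.

def replicas_needed(requests_per_sec: float,
                    capacity_per_replica: float,
                    max_replicas: int = 10) -> int:
    """Release all GPUs when idle; otherwise run just enough replicas."""
    if requests_per_sec <= 0:
        return 0  # idle GPUs cost money, so release them
    needed = math.ceil(requests_per_sec / capacity_per_replica)
    return min(needed, max_replicas)
```

Scaling idle replicas down to zero is where most of the claimed GPU savings would come from, since reserved GPU time is billed whether or not requests arrive.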
What are the biggest obstacles you’ve faced in implementing AI, and how did you overcome them?
Elias Groll: There are a couple of obstacles to overcome.
Firstly, the pace of innovation in this area is challenging for anybody to keep up with.
Secondly, data leakage is (and should be!) a key concern that gets reviewed from all angles before deciding whether a specific use case is acceptable for services like ChatGPT.
Lastly, and I cannot stress the importance of this enough: any company looking to build internal tools on top of an AI endpoint should consider whether the results are stable.
A couple of researchers have started sounding alarms as OpenAI's endpoints have reportedly decreased in accuracy over time. Answers that were sufficiently accurate before might not be anymore. There are two ways to solve this: pay for an enterprise-level solution like OpenAI on Azure, where you control when updates get applied to the underlying models, or turn to a self-hosted solution where you are in full control.
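One practical way to act on this advice is a stability regression check: run a fixed evaluation set against the endpoint before adopting a new model version, and block the rollout if accuracy drops. The sketch below uses a stubbed model_answer() in place of a real API or self-hosted call; the evaluation set and threshold are invented for illustration.

```python
# Sketch of a stability regression gate for an LLM endpoint: evaluate a
# fixed question set and refuse to build on a model version whose accuracy
# falls below a threshold. model_answer() is a stub standing in for a real
# endpoint call; the eval set and 0.9 threshold are assumptions.

EVAL_SET = [
    ("What is 2 + 2?", "4"),
    ("Capital of France?", "Paris"),
    ("Is 17 prime? (yes/no)", "yes"),
]

def model_answer(question: str) -> str:
    """Stub: replace with a managed-API or self-hosted model call."""
    canned = {"What is 2 + 2?": "4",
              "Capital of France?": "Paris",
              "Is 17 prime? (yes/no)": "yes"}
    return canned.get(question, "")

def accuracy(answer_fn) -> float:
    """Fraction of eval questions answered exactly as expected."""
    hits = sum(1 for q, want in EVAL_SET if answer_fn(q).strip() == want)
    return hits / len(EVAL_SET)

def check_stability(answer_fn, threshold: float = 0.9) -> bool:
    """Gate a model rollout: True only if eval accuracy meets the threshold."""
    return accuracy(answer_fn) >= threshold
```

Re-running the same gate whenever the provider updates the underlying model turns "the answers silently got worse" into a failing check.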
What advice would you give to other CEOs looking to integrate AI into their business?
Elias Groll: You need to understand two main things. AI/LLMs are not too big: they are not sentient and won't replace all jobs anytime soon. But they are also not too small: if you don't use them to optimize your processes, you will be left behind.
Most processes can be automated quite simply, so don't dismiss the idea just because you expect it to take too much effort. Also, understand that for most use cases the raw intelligence of the model does not matter; energy efficiency, execution speed, and the like are more important.
There is a lot of smoke and hype around the topic; not all areas of business are equally suited to, nor benefit equally from, implementing AI into the workflow (or your product offering, for that matter).
How do you see AI evolving in your industry over the next 5 years?
Elias Groll: Models will continue to improve, but with hardware and computational requirements already hitting the limits of what is feasible, we believe the main focus and improvements will be in making the existing models more efficient.
There have already been some very exciting developments there recently, with another open-source model supposedly outperforming Llama 7B in speed by 13x. We have not verified these numbers, but this is a very promising direction that will continue to gain importance over the next five years.
The current models are still a bit bulky for many use cases (they are LARGE language models, trained on huge amounts of textual data). Fine-tuned versions will continue to be developed and made more readily available for all sorts of use cases, which will be very beneficial to all corporate consumers of AI.
We actually believe that at the current pace the open-source community (and ultimately Meta) stands the best chance of winning this race, as the sheer variety of model flavors and applications is already staggering and is only set to increase.
What does “success” in 2023 mean to you? It could be on a personal or business level, please share your vision.
Elias Groll: Since we have our reactive inference technology, with other patents in the works, outperforming all existing hosting platforms (let's see for how long), we have no time to lose.
We need to make as much buzz as possible so that we don't lose deals to the large consultancies and their inferior technology just because they already have the relationships.
Jerome Knyszewski, VIP Contributor to ValiantCEO and the host of this interview would like to thank Elias Groll for taking the time to do this interview and share his knowledge and experience with our readers.
If you would like to get in touch with Elias Groll or his company, you can do so through his LinkedIn page.
Disclaimer: The ValiantCEO Community welcomes voices from many spheres on our open platform. We publish pieces as written by outside contributors with a wide range of opinions, which don’t necessarily reflect our own. Community stories are not commissioned by our editorial team and must meet our guidelines prior to being published.