index.json
[{"content":"","date":"25 October 2025","externalUrl":null,"permalink":"/tags/aws/","section":"Tags","summary":"","title":"Aws","type":"tags"},{"content":" AWS Just Proved It\u0026rsquo;s the Internet\u0026rsquo;s Biggest Single Point of Failure # tl;dr: AWS broke everything with a typo. Supabase ran out of servers for 10 days after raising millions. The cloud is held together with duct tape and hope.\nA single DNS typo—one character—pwned 2,500+ companies simultaneously. Netflix. Reddit. Coinbase. Disney+. PlayStation. The digital economy\u0026rsquo;s entire Jenga tower collapsed because someone fat-fingered a config file.\nThe 130-Minute Catastrophe # The Problem: DynamoDB\u0026rsquo;s DNS broke. The Real Problem: Every Lambda function in existence depended on it. The Actual Problem: SQS queues backed up like a Black Friday checkout line.\nThe Failure Chain:\nDNS misconfiguration in US-EAST-1 broke DynamoDB endpoint resolution Lambda functions couldn\u0026rsquo;t find DynamoDB → started timing out SQS queues fill up because Lambdas can\u0026rsquo;t process messages Dead Letter Queues (DLQs) overflow, creating secondary backlog CloudWatch alarms trigger everywhere, PagerDuty melts down Auto-scaling groups spin up more instances to handle the \u0026ldquo;load\u0026rdquo; (spoiler: doesn\u0026rsquo;t help) Here\u0026rsquo;s the thing nobody tells you about serverless: It\u0026rsquo;s only as reliable as its stateful dependencies. Lambda scales infinitely (in theory). DynamoDB is your bottleneck. When DynamoDB\u0026rsquo;s DNS dies, your entire serverless architecture becomes a very expensive retry machine.\nFixed in 2 hours. Recovered in 12. Why? Because fixing the typo was easy. Clearing millions of backed-up Lambda jobs? That\u0026rsquo;s like trying to unclog the entire internet with a plunger. AWS engineers: Fixed it in 130 minutes! 
The Queue: Hold my 47 million pending tasks\nDigital Monoculture = Digital Extinction Event # Here\u0026rsquo;s the based take: We\u0026rsquo;ve built a system where one region going down creates a cascading failure across the entire planet. That\u0026rsquo;s not resilience. That\u0026rsquo;s a single point of failure with extra steps.\n--- Supabase: The 10-Day Cope Session # Right after Supabase announced their massive Series E funding round—like, immediately after—their EU-2 region just\u0026hellip; stopped working.\nFor 10 days. The excuse? \u0026ldquo;We ran out of nano and micro instances, and that\u0026rsquo;s mostly AWS\u0026rsquo;s fault.\u0026rdquo;\nHere\u0026rsquo;s what happened: Supabase relies on AWS EC2 for their hosted PostgreSQL instances. They use t4g.nano and t4g.micro instances (ARM-based, cheap, efficient) for dev branches and smaller projects.\nThe problem:\nAWS has capacity pools per instance type per availability zone Popular instance types can get exhausted during high-demand periods There\u0026rsquo;s no SLA guaranteeing availability of any specific instance type Supabase\u0026rsquo;s entire branch-creation workflow depended on these instances being available Meanwhile the customers:\nCan\u0026rsquo;t restore backups ❌ Can\u0026rsquo;t restart instances ❌ Can\u0026rsquo;t create branches ❌ Can\u0026rsquo;t do literally any dev work ❌ Even paying customers were bricked. You could stare at your production database, but you couldn\u0026rsquo;t touch it. It\u0026rsquo;s like having a Ferrari with no gas stations within 1,000 miles.\nBlaming AWS didn\u0026rsquo;t land well when you\u0026rsquo;re a managed platform company and your entire value prop is \u0026ldquo;we handle the infra\u0026rdquo;. Plus you just raised millions of dollars and paying customers couldn\u0026rsquo;t work for a week and a half!\nA smarter move would\u0026rsquo;ve been multi-region architecture from day one. 
Instead we see companies running with monoculture dependency, single vendor lock-in, zero redundancy.\nThe Bottom Line # Managed services != managed risk — Well most of the time managed services manage risk, until your vendor runs out of servers\nDNS is still the internet\u0026rsquo;s Achilles heel — One typo can glass an entire region.\nYour SLA is only as good as your weakest dependency — And that\u0026rsquo;s probably AWS US-EAST-1\nObservability \u0026gt; Optimization — You can\u0026rsquo;t fix what you can\u0026rsquo;t see\nThe pendulum is swinging. Hard. Self-hosted infrastructure with actual resilience engineering is starting to look less like paranoia and more like basic hygiene. When a DNS typo can detonate half the internet, and a capacity shortage can paralyze production for 10 days, maybe—just maybe—we need to stop pretending \u0026ldquo;the cloud\u0026rdquo; is a magical solution and start treating it like what it is: Someone else\u0026rsquo;s computer. And it can brick at any moment.\nThis article was originally published on Substack as part of the BoFOSS publication.\n","date":"25 October 2025","externalUrl":null,"permalink":"/posts/aws-single-point-of-failure/","section":"Posts","summary":"","title":"AWS Just Proved It's the Internet's Biggest Single Point of Failure","type":"posts"},{"content":"","date":"25 October 2025","externalUrl":null,"permalink":"/categories/","section":"Categories","summary":"","title":"Categories","type":"categories"},{"content":"","date":"25 October 2025","externalUrl":null,"permalink":"/tags/cloud/","section":"Tags","summary":"","title":"Cloud","type":"tags"},{"content":"","date":"25 October 2025","externalUrl":null,"permalink":"/tags/devops/","section":"Tags","summary":"","title":"Devops","type":"tags"},{"content":"","date":"25 October 2025","externalUrl":null,"permalink":"/tags/infrastructure/","section":"Tags","summary":"","title":"Infrastructure","type":"tags"},{"content":" 📅 Schedule 15-minutes with me ","date":"25 
October 2025","externalUrl":null,"permalink":"/","section":"Mandy Sidana","summary":"","title":"Mandy Sidana","type":"page"},{"content":"","date":"25 October 2025","externalUrl":null,"permalink":"/posts/","section":"Posts","summary":"","title":"Posts","type":"posts"},{"content":"","date":"25 October 2025","externalUrl":null,"permalink":"/tags/","section":"Tags","summary":"","title":"Tags","type":"tags"},{"content":"","date":"25 October 2025","externalUrl":null,"permalink":"/categories/technology/","section":"Categories","summary":"","title":"Technology","type":"categories"},{"content":"","date":"25 October 2025","externalUrl":null,"permalink":"/tags/technology/","section":"Tags","summary":"","title":"Technology","type":"tags"},{"content":"You built a multi-cloud architecture to avoid vendor lock-in. 89% of companies run multi-cloud setups thinking they\u0026rsquo;re getting the best of whichever cloud they want to use. Meanwhile, they\u0026rsquo;re bleeding tens of thousands per month on a single hidden cost that makes leaving economically impossible.\nMulti-Cloud vs Hybrid Cloud (Yes, They\u0026rsquo;re Different) # Multi-cloud = Multiple public cloud providers (AWS + Azure + GCP) in one ecosystem. The goal? Vendor lock-in avoidance, cost optimization, best-of-breed tools.\nHybrid cloud = Your crusty on-prem data center + public cloud. Usually a transition strategy for enterprises who need to keep sensitive data behind their own firewall while using cloud elasticity for everything else.\nThe pitch for multi-cloud sounds amazing:\nNegotiating leverage against providers Failover resilience (AWS US-East-1 goes down? Lmao just route to Azure) Risk diversification The reality? You just traded one problem for three simultaneous flaming dumpster fires of operational complexity.\nPlatform Engineering Was Supposed to Save Us # Most platform engineering teams got too excited for developer experience (DX). 
They built slick internal developer platforms (IDPs) with beautiful UIs, smooth CI/CD, and self-service everything.\nCool. But they treated the underlying infrastructure like a utility - an afterthought you just assume works. What happens when you do that - well, three things break: Security (becomes a late-game patch), Compliance (also a late-game patch) and Cost (a continuously leaking wound you discover in production)\nThe new wave is Infrastructure Platform Engineering (IPE) - treating infrastructure as a first-class product with its own roadmap, SLOs, and cost metrics. This means centralizing policies, defining governance as code from day one, and actually managing the multi-cloud chaos instead of letting it manage you.\nBut even with solid IPE, there\u0026rsquo;s one cost that sneaks up and absolutely wrecks budgets: data egress fees.\nThe Egress Fee Trap: Cloud Providers\u0026rsquo; Hidden Profit Center # Here\u0026rsquo;s the scam in plain English:\nData ingress (moving data INTO the cloud) = Free BUT Data egress (moving data OUT of the cloud) = Pay per GB, and the rates are designed to hurt\nGartner estimates 10-15% of your entire cloud bill is just egress fees. Not compute. Not storage. Just moving your own data around.\nWhere You\u0026rsquo;re Getting Pwned # Cross-region replication: You set up disaster recovery by replicating 10TB of S3 data between two US regions. Seems responsible, right? Amazon treats that as egress from the source region. That\u0026rsquo;s $900+/month just for DR insurance.\nAnalytics exports: Your team runs BigQuery analytics and exports 50GB/day to cloud storage in another region or to an external BI tool. That\u0026rsquo;s $180/month in transfer fees. Every. Single. Month.\nDR test surprise: You restore 15TB of archived data from cloud to on-prem to test your disaster recovery process (which, you should absolutely do). One test = $1,200 unbudgeted egress bill. 
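Those per-GB numbers are easy to sanity-check. A minimal sketch (the per-GB rates here are ballpark assumptions for illustration, not quoted AWS pricing):

```python
# Rough egress-cost estimator. Rates are illustrative assumptions, not
# a provider's current price sheet -- always check the real pricing page.

def egress_cost(gb: float, rate_per_gb: float) -> float:
    """Egress cost in dollars for moving `gb` gigabytes at `rate_per_gb`."""
    return gb * rate_per_gb

# Cross-region DR replication: ~10 TB/month at an assumed ~$0.09/GB
dr_replication = egress_cost(10_000, 0.09)   # ~$900/month
# Analytics exports: 50 GB/day for a month at an assumed ~$0.12/GB
analytics = egress_cost(50 * 30, 0.12)       # ~$180/month
# One-off DR restore test: ~15 TB at an assumed ~$0.08/GB
dr_test = egress_cost(15_000, 0.08)          # ~$1,200 one-off

print(f"DR replication: ${dr_replication:,.0f}/month")
print(f"Analytics exports: ${analytics:,.0f}/month")
print(f"DR restore test: ${dr_test:,.0f} one-off")
```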
Hope finance doesn\u0026rsquo;t see that one.\nThe exponential trap: Egress doesn\u0026rsquo;t scale linearly. One company saw compute/storage grow 3x over 8 months. Normal. But because they added media-heavy features, egress costs jumped 15x. That\u0026rsquo;s not a typo.\nIsn\u0026rsquo;t this illegal? # Someone finally noticed this was basically a protection racket. Enter the European Union\u0026rsquo;s Data Act (the cloud-switching rules often misattributed to the DMA) - Starting January 12, 2027, it will ban hyperscalers from charging egress fees when businesses switch cloud providers. Between now and then, providers can only charge reduced fees equivalent to their actual costs during the switching process.\nTranslation: No more charging $0.09/GB to move data that costs them $0.002/GB to transfer.\nGoogle (and others) saw this coming. So they preemptively announced egress fee waivers - which sounds great until you read the fine print. The catch: You only get free egress if you\u0026rsquo;re closing your account permanently.\nYou have to:\nDelete your entire GCP account Promise to never come back Complete the migration in 60 days It\u0026rsquo;s like they\u0026rsquo;re saying: \u0026ldquo;Sure, we\u0026rsquo;ll help you pack. But only if you swear on your life you\u0026rsquo;re moving out for good.\u0026rdquo;\nThis is the corporate equivalent of your ex saying they\u0026rsquo;ll give your stuff back, but only if you move to a different country and change your phone number.\nAre you truly free? Or just paying three cloud bills with extra steps?\nHow to Fight Back # You\u0026rsquo;re not completely cooked. Here are the actual tactics:\nCDNs for content delivery: Cache your static content closer to users. This alone can cut egress by 60-80% for websites/media apps. The content doesn\u0026rsquo;t need to come from your origin data center every time.\nCompress everything: Before shipping data out, compress it. Standard compression shrinks data volume by 20-40% with negligible performance impact. 
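As a toy illustration of the compression tactic (a sketch, not a benchmark; ratios depend entirely on the data, and already-compressed media like JPEG or MP4 barely shrinks at all):

```python
import gzip
import json

# Illustrative only: repetitive JSON/text compresses far better than the
# conservative 20-40% figure; binary media often compresses hardly at all.
records = [{"id": i, "event": "page_view", "region": "us-east-1"} for i in range(10_000)]
raw = json.dumps(records).encode()
compressed = gzip.compress(raw)

savings = 1 - len(compressed) / len(raw)
print(f"raw={len(raw):,} B, gzip={len(compressed):,} B, "
      f"saved {savings:.0%} of egress volume")
```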
Why aren\u0026rsquo;t you doing this already?\nPrivate connectivity for high-volume transfers: AWS Direct Connect, Azure ExpressRoute - if you consistently push enough data to saturate even a modest 25Gbps link, the cost savings on reduced egress fees often pay for the dedicated link itself. Plus lower latency and better security. Actually worth investigating.\nGovernance and visibility: Use AWS Cost Explorer, Azure Cost Management, etc. to drill specifically into egress costs. Track volume, track destinations, set automated alerts for spikes. That $1,200 DR test bill? You should get notified before it ships, not after.\nIf you\u0026rsquo;re in the EU: Wait until 2027 and let the EU\u0026rsquo;s cloud-switching rules do the heavy lifting. Between now and then, providers can only charge reduced fees equivalent to actual costs. Document everything. When the ban hits, you\u0026rsquo;ll have leverage.\nThe Bottom Line # Multi-cloud architecture isn\u0026rsquo;t inherently bad. The problem is that the hyperscalers designed these fee structures to be deliberately opaque. Moving data between regions? That\u0026rsquo;s one price. Moving to a different cloud? Higher price. Moving back to your data center? Even higher. Every boundary crossing is a toll booth. You built multi-cloud for freedom and flexibility. But if moving your data costs more than your quarterly cloud budget, are you actually free? 
Or did you just build a more expensive cage?\nThis article was originally published on Substack as part of the BoFOSS publication.\n","date":"3 October 2025","externalUrl":null,"permalink":"/posts/big-cloud-highway-robbery-multi-cloud-strategy/","section":"Posts","summary":"","title":"Big Cloud's Highway Robbery (And Why Multi-cloud strategy falls short)","type":"posts"},{"content":"","date":"3 October 2025","externalUrl":null,"permalink":"/tags/cloud-costs/","section":"Tags","summary":"","title":"Cloud-Costs","type":"tags"},{"content":"","date":"3 October 2025","externalUrl":null,"permalink":"/tags/dma/","section":"Tags","summary":"","title":"Dma","type":"tags"},{"content":"","date":"3 October 2025","externalUrl":null,"permalink":"/tags/egress-fees/","section":"Tags","summary":"","title":"Egress-Fees","type":"tags"},{"content":"","date":"3 October 2025","externalUrl":null,"permalink":"/tags/eu-regulation/","section":"Tags","summary":"","title":"Eu-Regulation","type":"tags"},{"content":"","date":"3 October 2025","externalUrl":null,"permalink":"/categories/infrastructure/","section":"Categories","summary":"","title":"Infrastructure","type":"categories"},{"content":"","date":"3 October 2025","externalUrl":null,"permalink":"/tags/multi-cloud/","section":"Tags","summary":"","title":"Multi-Cloud","type":"tags"},{"content":"","date":"3 October 2025","externalUrl":null,"permalink":"/tags/platform-engineering/","section":"Tags","summary":"","title":"Platform-Engineering","type":"tags"},{"content":"","date":"3 October 2025","externalUrl":null,"permalink":"/tags/vendor-lock-in/","section":"Tags","summary":"","title":"Vendor-Lock-In","type":"tags"},{"content":"","date":"1 October 2025","externalUrl":null,"permalink":"/tags/frontend/","section":"Tags","summary":"","title":"Frontend","type":"tags"},{"content":"","date":"1 October 2025","externalUrl":null,"permalink":"/tags/javascript/","section":"Tags","summary":"","title":"Javascript","type":"tags"},{"content":"","date":"1 
October 2025","externalUrl":null,"permalink":"/tags/react/","section":"Tags","summary":"","title":"React","type":"tags"},{"content":" The Vercel Takeover of React That Never Happened # React told everyone to use frameworks. The internet decided Vercel staged a hostile takeover. # Here\u0026rsquo;s the narrative that everyone believes: Vercel saw React as a cash cow. They hired away the core team members from React. They got Next.js plastered all over the docs. Now they\u0026rsquo;re puppeting the entire ecosystem to pump their hosting platform.\nOnly one problem with that theory - It\u0026rsquo;s backwards! The React team invented React Server Components inside Meta. They couldn\u0026rsquo;t ship it internally (lmao Meta infrastructure), so they needed an external guinea pig. They walked up to Vercel and basically said, \u0026ldquo;Hey, wanna rewrite your entire router for our experimental architecture?\u0026rdquo; And Vercel\u0026hellip; said yes? Gigabrain or insane, hard to tell. So React devs joined Vercel to build the thing they already wanted to build. Vercel didn\u0026rsquo;t capture React. React captured Vercel. That being said, this relationship absolutely gave Vercel unfair advantages. They got first-mover status on RSCs. They got homepage placement in React docs. The optics are terrible. But it was never a conspiracy.\n\u0026ldquo;Is React Just Next.js Now?\u0026rdquo; # No. Stop it. Next.js is a framework that uses React. React is a library. You can still use Vite, Remix or Gatsby. You can totally choose to use nothing and suffer in raw React!\nThe docs even say \u0026ldquo;yes you can use React without a framework\u0026rdquo; (in the most passive-aggressive way possible, but I\u0026rsquo;ll get to that). The confusion exists because the community doesn\u0026rsquo;t have clean vocabulary anymore. 
Everyone\u0026rsquo;s fighting over \u0026ldquo;framework vs SPA\u0026rdquo; when the real question is \u0026ldquo;full-stack vs client-only.\u0026rdquo;\nClient-Side React Isn\u0026rsquo;t Going Anywhere # Deep breath. Your SPA isn\u0026rsquo;t deprecated. Meta runs millions of lines of client-rendered React in production. They\u0026rsquo;re not nuking that. React has a god-tier track record of backward compatibility. New features in React 19? Many are explicitly client-only.\nRSCs are additive. Optional. You don\u0026rsquo;t have to use them. Your useState hooks aren\u0026rsquo;t getting nerfed. The sky is not falling. This is fine. 🔥🐶\nSo Why The Framework Push? # If it\u0026rsquo;s not a Vercel conspiracy and client React is safe, why is the React team simping so hard for frameworks? - Performance. That\u0026rsquo;s it.\nAndrew Clark (React core team) straight up said: \u0026ldquo;Use a framework. Full stop.\u0026rdquo; His reasoning was that frameworks ship with data fetching, routing, and SSR out of the box while DIY stacks are usually jank. In the meantime, frameworks have matured and gotten better than most custom setups.\nDan Abramov explained why Create React App got the axe: it was too simple. CRA couldn\u0026rsquo;t guide you toward SSR, SSG, or optimized data fetching. These problems are deeply interconnected. You can\u0026rsquo;t bolt them on later. You need framework-level integration.\nTranslation: React looked at the ecosystem and said \u0026ldquo;y\u0026rsquo;all keep building the same slow SPAs. We\u0026rsquo;re gonna force-feed you performance.\u0026rdquo;\nBased? Maybe. Patronizing? Absolutely.\nWhere React Absolutely Bricked It # The technical direction? Defensible. The communication? 
Catastrophic.\nThe initial docs read like a hit piece on non-framework React:\nSPAs described as having \u0026ldquo;unusual constraints\u0026rdquo; Non-framework React buried under 17 layers of navigation Literally said \u0026ldquo;we can\u0026rsquo;t stop you\u0026rdquo; about using React without a framework That\u0026rsquo;s not documentation. That\u0026rsquo;s a passive-aggressive intervention.\nIt took years and massive community backlash to get balanced docs. The RSC docs? Still scattered across blog posts and framework-specific guides. No central source of truth.\nThe React team didn\u0026rsquo;t just slow-burn the destruction of community trust - they speedran it like the AngularJS team at Google (from 2014).\nRecap of what happened with Angular - AngularJS (Angular 1.x) was huge. Everyone used it. Google announced Angular 2 - which was a complete rewrite. Not an upgrade! A different framework altogether. Migration path? Lmao, what migration path? Your Angular 1 app? Basically legacy code overnight. The community: \u0026ldquo;So\u0026hellip; we just rewrite everything?\u0026rdquo; Google: \u0026ldquo;Yeah, pretty much.\u0026rdquo;\nThe Actual Situation # React wants better performance. Frameworks enable that. RSCs are optional. Client React isn\u0026rsquo;t dying. But the team communicated like they\u0026rsquo;re personally offended by your Vite setup. They acted like your SPA was cringe just for existing.\nHere\u0026rsquo;s the thing: they\u0026rsquo;re managing a massive ecosystem. Impossible to please everyone. But you know what helps? Not talking down to your users. Not burying documentation. Not making people feel abandoned.\nIn the end, you can keep building SPAs. You can adopt frameworks. You can use RSCs or ignore them forever.\nReact gave you new tools. They\u0026rsquo;re powerful. They\u0026rsquo;re optional. But if you\u0026rsquo;re mad, you\u0026rsquo;re not wrong. The rollout was fumbled. The messaging was condescending. 
The docs prioritized one approach (coincidentally the one that makes Vercel money) and treated everything else like a begrudged edge case.\nModern problems require modern solutions. And apparently modern solutions require frameworks. Whether you choose to cope, seethe, or adapt is up to you.\nJust don\u0026rsquo;t let the hype cycle make your technical decisions. Pick what works for your project. Touch grass occasionally. Ship code.\nWe\u0026rsquo;ve all got PRs to merge.\nThis article was originally published on Substack and Medium as part of the BoFOSS publication.\n","date":"1 October 2025","externalUrl":null,"permalink":"/posts/react-evolving-vision/","section":"Posts","summary":"","title":"The Vercel Takeover of React That Never Happened","type":"posts"},{"content":"","date":"1 October 2025","externalUrl":null,"permalink":"/tags/web-development/","section":"Tags","summary":"","title":"Web-Development","type":"tags"},{"content":" Is SSO an ‘Enterprise’ Tier feature? # Let’s start with what SSO is. SSO, or single sign-on, can be considered a master key for your work apps. Usually, you would have a separate key, with its own password, for each service: one for your email, one for project management, another for your cloud storage, and so on. SSO lets you use just one key, a single set of login credentials (username and password), to unlock all the applications you need access to.\nSSO is a win-win for users and security # SSO centralizes how your identity is managed online, so it’s less annoying than juggling dozens of passwords. That convenience is a big win for users, and it’s more than just convenience; it\u0026rsquo;s crucial from the organization’s security perspective. If one weak password gets compromised, SSO helps prevent that from automatically giving an attacker the keys to the entire kingdom, so to speak. 
Plus, you’re just logging in less often overall, which means fewer chances for those credentials to be phished or intercepted.\nSSO is not just convenient but also more secure\nHow does identity management look without SSO? TLDR — It is a headache. Think about onboarding a new hire. Without SSO, someone, maybe the owner or office manager, must manually create an account for every tool that person needs: email, CRM, internal chat, everything—assigning permissions for each one. More crucially, admins must remember to revoke all that access immediately when someone leaves. The ground reality is that lots of SMBs manage this with spreadsheets, and that’s time-consuming, obviously, but it\u0026rsquo;s also prone to errors. And it just doesn’t scale well as you add more people or apps. SSO is the answer.\nBut before you consider this problem solved, you should check out the SSO Wall of Shame, which lists popular SaaS companies where the price difference feels excessive. The website\u0026rsquo;s authors strongly object to single sign-on being locked behind expensive enterprise pricing by many SaaS vendors. The tagline states — A list of vendors that treat single sign-on as a luxury feature, not a core security requirement. Despite its importance, Single Sign-On (SSO) is often locked behind expensive enterprise pricing, leaving smaller teams and individual users without a secure and scalable way to manage identities. SSO shouldn’t be considered a ‘nice to have’ feature for any company — it’s a baseline security requirement. IT and Security teams rely on SSO to centrally manage user accounts, enforce strong authentication, and instantly revoke access when employees leave. Without it, businesses are stuck managing logins across dozens (or even hundreds) of vendors, many of which don’t support essential security features like TOTP 2FA or U2F.\nHowever, authentication and authorization are inherently complex and critical security features for any business. 
So why not just pay for SSO? It boils down to two things: cost and technical hurdles.\nCosts # Often, vendors impose minimum seat counts: you might have to pay for, say, 50 seats even if you only have 15 employees. Ouch. Many smaller businesses are priced out immediately, so they settle for the cheaper plan, which is precisely the one without SSO. But it’s not just the sticker price of the SSO service itself. Remember that administrative overhead without SSO, the spreadsheets and manual work? Yeah, that has a cost too. It’s hidden, maybe, but the time spent on manual account creation, password resets, fixing errors, and potentially cleaning up after a breach caused by a weak password can add up significantly. It might even cost more than the SSO in the long run.\nTechnical hurdles # Implementing SSO is technically challenging because it involves both authentication (identity verification) and authorization (access control). Vendors must support multiple identity protocols, such as SAML and OpenID Connect, while integrating with various identity providers (IdPs), like Okta, Azure AD, and Google Workspace. Each IdP has unique setup requirements, token handling, and security policies, making it difficult for SaaS and commercial open-source (COSS) companies to offer a one-size-fits-all solution. Because of these complexities, SaaS vendors often restrict SSO to enterprise tiers where customers have the resources to configure and maintain their authentication infrastructure. Many vendors also bundle advanced authentication and role-based access control (RBAC) with their enterprise offerings, creating an artificial barrier that prevents smaller teams from accessing these essential security features. Multi-tenancy adds even more complexity, making it challenging for SaaS vendors to offer SSO at lower pricing. The key challenges include:\nEach tenant may use a different IdP and authentication protocol. 
The login process must dynamically route authentication requests to the correct IdP. Access control (RBAC, ABAC) must be enforced per tenant to prevent security risks. Cross-tenant collaboration requires additional safeguards. Regulatory and compliance requirements vary across tenants. What should SaaS companies do?\nBecause of these challenges, many SaaS companies limit SSO to enterprise customers with dedicated IT teams to handle setup and maintenance. However, if a provider can simplify multi-tenant SSO management, they can offer it to smaller teams at a reasonable price, improving security across the board.\nIf SaaS providers truly “take security seriously,” there needs to be a shift in pricing models to make SSO more accessible:\n✅ Including basic SSO support in individual and team plans.\n✅ Offering SSO as a reasonably priced add-on rather than an enterprise-only feature.\n✅ Providing more self-service-friendly SSO configurations for smaller teams.\nTowards that end, others have created an SSO Wall of Fame, listing companies that price their SSO tier reasonably. How much responsibility do vendors have to price essential security features ethically to make them genuinely accessible? Commercial open-source (COSS) companies can make authentication and authorization accessible without forcing security-conscious teams into expensive enterprise contracts. Security shouldn’t be a premium feature — it should be the default, regardless of team size.\nMy 2 cents # At the end of the day, if you’re involved in developing a SaaS product, please don’t make SSO an enterprise-tier, disproportionately priced feature. It makes the internet worse and makes your biggest customers dislike you.\nMy advice for SMBs is to analyze needs, explore affordable options, compare solutions, conduct pilot projects, train staff, and continuously monitor.\nI advise SaaS vendors to offer tailored solutions, flexible seat thresholds, and improved support materials. 
Critically, basic and essential services such as SSO should be decoupled from bundles with premium services. Please avoid using SSO as an upselling technique — Your customers will reward you in the long term.\n","date":"9 July 2025","externalUrl":null,"permalink":"/posts/sso/","section":"Posts","summary":"","title":"Is SSO an Enterprise Tier feature?","type":"posts"},{"content":"","date":"13 June 2025","externalUrl":null,"permalink":"/categories/business/","section":"Categories","summary":"","title":"Business","type":"categories"},{"content":"","date":"13 June 2025","externalUrl":null,"permalink":"/tags/business/","section":"Tags","summary":"","title":"Business","type":"tags"},{"content":"","date":"13 June 2025","externalUrl":null,"permalink":"/tags/cac/","section":"Tags","summary":"","title":"Cac","type":"tags"},{"content":"","date":"13 June 2025","externalUrl":null,"permalink":"/tags/cltv/","section":"Tags","summary":"","title":"Cltv","type":"tags"},{"content":"","date":"13 June 2025","externalUrl":null,"permalink":"/tags/growth/","section":"Tags","summary":"","title":"Growth","type":"tags"},{"content":"","date":"13 June 2025","externalUrl":null,"permalink":"/tags/metrics/","section":"Tags","summary":"","title":"Metrics","type":"tags"},{"content":"","date":"13 June 2025","externalUrl":null,"permalink":"/tags/saas/","section":"Tags","summary":"","title":"Saas","type":"tags"},{"content":" The CLTV/CAC Ratio: The Single Most Important SaaS Growth Metric # TLDR # This article aims to provide product leaders and managers with a thorough understanding of CLTV/CAC and its actionable insights for product strategy, especially in the context of commercial open-source software. 
I also ran the numbers for MongoDB as a real-world example.\nCustomer Lifetime Value (CLTV) # I’m assuming annual rates for this, but it would also work with monthly or daily rates.\nCustomer lifetime value (CLTV) represents the total revenue a business can reasonably expect from a single customer account over the entire relationship. Hence, CLTV = Average Revenue Per Account × Average Customer Lifetime. E.g., if the average contract value (ACV) is $45K and the median customer lifetime is 4 years, then CLTV = $180K. Customer Lifetime: The period for which the customer is retained. Hence, customer lifetime = 1 / churn rate, where churn rate = Number of Customers Lost in a Period / Total Number of Customers at the Beginning of the Period. For instance, if you know your annual churn is 25%, you can expect the average customer to stay with you for 1/0.25 = 4 years. Customer Acquisition Costs (CAC): This is calculated by dividing your company\u0026rsquo;s total spend on direct sales by the number of customers acquired during the period. So, if you spent $1 million on sales this year and acquired 20 new customers, your Customer Acquisition Cost (CAC) would be $1 million / 20 = $50,000 per customer. The CLTV/CAC ratio indicates the revenue that will be generated for every dollar spent on acquiring customers. In our example above, this comes out to be $180K/$50K = 3.6. The Power of the CLTV/CAC Ratio # Ratio \u0026lt;= 1: This indicates the cost to acquire a customer is higher than the revenue generated over their lifetime, signaling an unsustainable business model. This is NOT good and signals a fundamental problem with scaling — you are losing money on every customer. Even at a ratio of exactly 1, where the business breaks even on each customer, it is not generating profit; once you factor in operational costs and cost of capital, you are still losing money. Ratio \u0026gt; 1: This is where you probably are. Generally accepted healthy ratios are 3:1 or higher, but this varies by industry. 
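The definitions above collapse into a few lines of arithmetic; this sketch simply replays the example figures from the text ($45K ACV, 25% annual churn, $1M sales spend for 20 new customers):

```python
# Worked example of the CLTV/CAC arithmetic, using the figures in the text.

def customer_lifetime(annual_churn: float) -> float:
    """Average customer lifetime in years = 1 / annual churn rate."""
    return 1 / annual_churn

def cltv(acv: float, annual_churn: float) -> float:
    """Customer lifetime value = ACV x average customer lifetime."""
    return acv * customer_lifetime(annual_churn)

def cac(sales_spend: float, customers_acquired: int) -> float:
    """Customer acquisition cost = total sales spend / new customers."""
    return sales_spend / customers_acquired

ltv = cltv(acv=45_000, annual_churn=0.25)                 # 45K x 4 years = 180,000
acq = cac(sales_spend=1_000_000, customers_acquired=20)   # 50,000
ratio = ltv / acq                                         # 3.6
print(f"CLTV=${ltv:,.0f}, CAC=${acq:,.0f}, CLTV/CAC={ratio:.1f}")
```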
Also note that the “ideal” ratio depends on the business’s growth stage and investment strategy. Estimating it against competitors:\nWhile it would be challenging to obtain competitors’ CLTV or CAC numbers, for public companies, you can calculate average ACV by dividing revenue from subscriptions by the estimated number of customers. Similar calculations can be performed for CAC using the company\u0026rsquo;s income statement. I will demonstrate such a calculation using MongoDB’s latest filings.\nMongoDB’s CLTV/CAC Ratio # MongoDB\u0026rsquo;s latest fiscal results can be found here:\nhttps://investors.mongodb.com/2025-Annual-Report-and-Proxy-Statement https://investors.mongodb.com/node/13176/pdf\nThe report states that the revenue from subscriptions is $1.94B, and the report cover indicates that they have over 54,500 customers, so that’s roughly $35,596 per customer. That’s a good first start.\nIt\u0026rsquo;s essential to note that MongoDB supports both product-led (MongoDB Atlas) and sales-led approaches, and the numbers associated with each would vary significantly. It’s also fair to assume that the sales-led growth targets larger accounts.\nConveniently, MongoDB does report direct sales customers and their revenue percentage. For the same period as above, they have reported 7,500 direct customers, which account for 88% of subscription revenue. Running the numbers, the average ACV for these 7,500 customers turns out to be $1.94B x 0.88 / 7,500 = $0.22M per customer. The reason we want to use this number is that it allows us to compare it with the sales-led cost of customer acquisition.\nNow, with average ACV, we need the churn rate to calculate CLTV. MongoDB reports (see image below) that the net ARR expansion was roughly 120%, meaning the revenue from existing customers grew by 20% (expansion deals — churn). 
While this doesn’t directly indicate the churn rate, we can assume the industry average for public SaaS companies’ expansion is around 30–35%, meaning the churn rate would be 10–15%.\nSo CLTV = $0.22M/0.15 = $1.46M per customer.\nNow let’s look at the CAC. Sales and marketing costs for 2025 were $161M. Assuming that amount was spent on acquiring 500 customers over the last year (note that 500 is net of churn, so gross acquisitions were higher and our CAC estimate overstates the actual number), it comes out to be $161M/500 = $0.32M per customer.\nFinally, we can examine the CLTV/CAC ratio, which comes out to be $1.46M/$0.32M = 4.5, a very healthy ratio. It’s essential to note that this calculation is performed purely for educational purposes and should not be used to make financial decisions, as the calculations performed make a lot of crude assumptions.\nInterpreting CLTV/CAC Ratio # High CLTV signifies strong customer retention, effective monetization strategies (upselling, cross-selling), and potentially a strong product-market fit. Low CLTV signifies potential issues with customer satisfaction, high churn, ineffective pricing, or limited opportunities for expansion within existing accounts. There may be a need for product improvements, better customer engagement, or pricing adjustments. A high CAC indicates inefficient marketing campaigns, a lengthy sales cycle, a lack of product-market fit, or targeting the wrong audience, requiring significant effort to acquire customers. Low CAC implies efficient marketing and sales processes, strong organic growth, a highly desirable product, or effective targeting. Things Product Managers can look at to impact the CLTV/CAC ratio:\nPrioritizing Features: Utilize CLTV insights to prioritize features that enhance retention (improving lifetime value) or enable upselling or cross-selling (increasing average revenue per account). 
Optimizing Onboarding: Focus on creating a seamless onboarding experience to minimize early churn and maximize the realization of initial value. Growth Loops: Design product features that inherently drive acquisition (e.g., referrals, viral sharing). Pricing Strategy: Inform pricing decisions based on the perceived value and potential lifetime revenue of customers. Understanding Customer Segmentation: Analyze CLTV and CAC across different customer segments to identify the most valuable and cost-effective customer groups. This can inform product roadmap prioritization and the development of targeted features. Appendix # I want to acknowledge that MongoDB hasn’t been open-source since October 2018, when it changed its license from AGPL to SSPL. However, it’s still regarded as a major COSS player and hence used, for example, here.\nThis article was originally published on Medium as part of the BoFOSS publication.\n","date":"13 June 2025","externalUrl":null,"permalink":"/posts/cltv-cac-ratio/","section":"Posts","summary":"","title":"The CLTV/CAC Ratio: The Single Most Important SaaS Growth Metric","type":"posts"},{"content":"","date":"22 May 2025","externalUrl":null,"permalink":"/tags/ai/","section":"Tags","summary":"","title":"Ai","type":"tags"},{"content":"","date":"22 May 2025","externalUrl":null,"permalink":"/tags/competition/","section":"Tags","summary":"","title":"Competition","type":"tags"},{"content":"","date":"22 May 2025","externalUrl":null,"permalink":"/tags/copilot/","section":"Tags","summary":"","title":"Copilot","type":"tags"},{"content":"","date":"22 May 2025","externalUrl":null,"permalink":"/tags/microsoft/","section":"Tags","summary":"","title":"Microsoft","type":"tags"},{"content":" Open Source Copilot Gambit: Microsoft\u0026rsquo;s Two-Pronged Plan to Stifle Competitors # We had some exciting news yesterday about Microsoft’s moves in the AI coding space. This post delves into the strategy and understands what’s behind these actions. 
What does it tell us about the fight for developers?\nMicrosoft is open-sourcing the client-side component of the GitHub Copilot Chat extension for VS Code, and it is being done under the MIT license, which is as permissive as open-source licenses get. This means that anyone can use it, modify it, or make changes based on the sources provided. This allows you to see how Copilot Chat works within VS Code, including how it builds prompts, displays suggestions, and all other user-facing features. The key point here is that the actual AI models used in the backend remain proprietary. So while the Copilot AI model is still owned and controlled by Microsoft, opening up the client means that developers or even large companies could potentially point the Copilot interface at their own models, whether internal or other open-source LLMs, which directly addresses some significant concerns about vendor lock-in and data privacy.\nBesides driving adoption for GitHub Copilot, the open-source approach would allow AI to feel more native to VS Code itself, enabling it to catch up with AI-first editors like Cursor, which are perceived as offering a significantly better AI experience. It’s also worthwhile to note that while OpenAI and Microsoft have remained partners, OpenAI recently launched a preview of its own cloud-based software engineering agent, signalling a move away from Microsoft.\nAmidst this rising competition, something significant happened last month — Microsoft blocked VS Code extensions from running in code forks. Microsoft started enforcing certain license terms, blocking specific key Microsoft VS Code extensions from running in VS Code forks like VSCodium and Cursor. Please note that the license terms themselves aren’t new; they have been in place since 2020. What’s new is the start of enforcing a five-year-old license term that restricts other IDE forks from using extensions shared via the VS Code Marketplace. 
One example is the C++ extension — it provides essential features, such as intelligent code completion and debugging tools, which are crucial for C++ developers. Blocking that extension for a tool like Cursor, which is built on VS Code’s open-source fork, directly broke developer workflows 😱.\nThis certainly appears to be a move aimed at hindering competition, as confirmed by Cursor’s CEO, Michael Truell, who mentioned that they will now be moving to open-source alternatives. This hasn’t gone unnoticed in the industry — developers are calling it unfair competition. There’s even some talk of an FTC complaint being filed. The allegation is that Microsoft is blocking rivals to keep users locked into its ecosystem, which wouldn’t be the first time.\nWhat is immediately striking when examining this is that Microsoft appears to be holding almost contradictory positions. Open-sourcing the client side of GitHub Copilot Chat is a move towards open source and fostering competition. However, they then use licensing to restrict those very competitors, such as Cursor, from using essential extensions built on the same underlying open-source VS Code platform. This is a strategy with two distinct parts: proactively opening up your offering to attract users who would have otherwise chosen open-source alternatives, while also defensively limiting rivals who are building on the same open-source base.\nWhat appears to be an upholding of OSS principles is, in reality, when combined with defensive restrictions, a proactive move to maintain its leadership position and manage both competition and the developer narrative. 
The tension between open platforms and competitive business realities in the AI era is only going to intensify from here.\nThis article was originally published on Medium as part of the BoFOSS publication.\n","date":"22 May 2025","externalUrl":null,"permalink":"/posts/microsoft-copilot-gambit/","section":"Posts","summary":"","title":"Open Source Copilot Gambit: Microsoft's Two-Pronged Plan to Stifle Competitors","type":"posts"},{"content":"","date":"22 May 2025","externalUrl":null,"permalink":"/tags/open-source/","section":"Tags","summary":"","title":"Open-Source","type":"tags"},{"content":"","date":"22 May 2025","externalUrl":null,"permalink":"/tags/strategy/","section":"Tags","summary":"","title":"Strategy","type":"tags"},{"content":"","date":"8 May 2025","externalUrl":null,"permalink":"/tags/business-models/","section":"Tags","summary":"","title":"Business-Models","type":"tags"},{"content":"","date":"8 May 2025","externalUrl":null,"permalink":"/tags/commercial/","section":"Tags","summary":"","title":"Commercial","type":"tags"},{"content":" Evolution of Commercial Open Source Licensing # TLDR: Commercial software vendors\u0026rsquo; licensing strategies are rapidly changing — yet again. License changes and their impacts can be overwhelming for engineers, managers, and open-source software (OSS) contributors, so I aim to break them down in this article.\nEvolution of open source licensing models\nHow did we get here? # OSS vendors need protection from hyperscalers # This all started around the 2010s, when commercial vendors who initially leveraged open-source licenses for distribution and adoption found their business models under threat from cloud providers, such as hyperscalers like AWS. While they reaped the benefits of being free, i.e., usage spread quickly and widely, the downside was that competitors could freely utilize years of labor and offer competing services, often at a scale that the original open-source companies couldn’t match. 
This conflict drove the exploration and implementation of alternative licensing strategies, creating a new type of license that is still effectively open source, but with an added clause that makes it impractical to use the software to compete with the vendor. One such license type is the AGPL, which I have extensively discussed in my earlier post here.\nWhile these new licenses drove adoption nearly as well as purely open source licenses, they also allowed vendors to block new entrants in the market. Any new fork of the project would then have to contribute changes back to the open-source project. This model also prevented price gouging — if a vendor jacks up prices or takes a drastic new direction, users or the community could fork the project (i.e., create their independent version), acting like a built-in accountability mechanism.\nTo summarize, the primary benefits were “frictionless distribution” and the inherent threat of a fork, which arguably kept vendors from abusing the software lock-in.\nCloud infra OSS vendors need more protection from AWS # One downside to this was that the need to shield users from strong copyleft licenses, such as AGPL, for infrastructure components led to complex licensing schemes that included permissive licenses for interfaces. AGPL requires you to open-source your code if users interact with it over a network, while cloud infrastructure products, such as logging tools, do not directly interact with the user.\nTo overcome this, around 2014/2015, vendors began creating and utilizing new, stricter licenses, also referred to as non-compete or source-available licenses. A key example is MongoDB’s Server-Side Public License (SSPL). SSPL goes further, requiring you to open-source your entire service stack if offering the software as a service. 
SSPL addresses scenarios where businesses provide the functionality of the covered work to third parties as a service.\nMongoDB initially transitioned from an open-source license to the Server-Side Public License (SSPL) to prevent cloud provider competition. However, this shift created some distrust of commercial open source — there was quite a strong adverse reaction from many corners of the open source community. Many viewed it as a fundamental step away from the core principles of open source, as it felt like closing off something that was meant to be open.\nAround 2019, the concept of a “triple licensing strategy” for commercial open-source firms was articulated. This strategy involves offering a commercial license, a non-compete license for adoption, and a strong copyleft license for open source goodwill. Now in 2025, Triple-licensing is emerging as a potential strategy, particularly for mature vendors that have already experienced forks (e.g., Redis, Elasticsearch).\nTriple-Licensing Strategy # The basic idea is to offer the software under three distinct licenses simultaneously. Presented as a potential solution to the problems faced by infrastructure component vendors, this strategy involves offering three licenses:\nFirst, you’d have a plain, strong copyleft license, such as the AGPL. This is for the open-source purists, the community members who value that guaranteed freedom and sharing. To maintain the “mantle and goodwill of open source” and simplify licensing complexity compared to the older mixed-license approach. It should be noted, however, that nobody collaborates around strongly copyleft-licensed code unless they are a die-hard free software supporter. It’s essential to recognize that the strong copyleft license in triple licensing is primarily intended for “open source goodwill, nothing more, nothing less,” and does not promote collaboration or drive adoption. 
Second, you’d have something that’s almost like SSPL, designed for adoption, but crucially prevents the direct cloud competition we discussed. This license “replaces the original permissive-shields-for-a-copyleft-core strategy” for driving adoption. We can call it the ‘anti-AWS’ license! Finally, a pay-for commercial license for traditional paying customers. This is the most restrictive option for users and lacks the “frictionless distribution” benefit of open licenses; however, this license generates revenue. Most recently, Redis transitioned from an open-source license to SSPL 1.0 (non-open source) in March 2024, causing quite a stir. However, just over a year later, in 2025, they reversed course and re-licensed under AGPLv3 as part of a triple-licensing strategy.\nElasticsearch did something similar, adding AGPL back into their mix after initially moving away from pure open-source. The ‘free’ tier in the screenshot below, from the Elasticsearch license page, clearly illustrates this change across different versions.\nElasticsearch ‘free’ version moved from Apache to SSPL and then from SSPL to AGPL+SSPL over the years\nConcerns with a triple licensing model # What can go wrong with this triple license model? Personally, I have a few concerns.\nFirstly, vendors can tighten control by removing the free-to-use option entirely after establishing adoption with source-available licenses. This is seen as a potential future “payday” for vendors and a likely cause for users to “wake up.” If vendors regularly drop their source-available licenses, it may force new commercial open-source startups to adopt permissive open-source licenses from the outset. 
This would make “trademarks, quality, and speed” their primary competitive differentiators, which the author views as “probably a good thing.”\nSecondly, while triple-licensing simplifies licensing for infrastructure component vendors and provides a veneer of open-source goodwill, it does not drive adoption or foster collaboration, which is detrimental to the project and the community.\nThis article was originally published on Medium as part of the BoFOSS publication.\n","date":"8 May 2025","externalUrl":null,"permalink":"/posts/evoution_coss_licenses/","section":"Posts","summary":"","title":"Evolution of Commercial Open Source Licensing","type":"posts"},{"content":"","date":"8 May 2025","externalUrl":null,"permalink":"/tags/legal/","section":"Tags","summary":"","title":"Legal","type":"tags"},{"content":"","date":"8 May 2025","externalUrl":null,"permalink":"/tags/licensing/","section":"Tags","summary":"","title":"Licensing","type":"tags"},{"content":" Business Models supporting FOSS end-of-life # Open source forms the backbone of modern software development. However, this landscape is ever-evolving, and inevitably, projects reach their end-of-life (EOL). This presents both a challenge and an opportunity: how can businesses effectively navigate this EOL maze, and what business models can support them?\nMy experience at Sonatype provided a crucial perspective on this. We focused on empowering developers to identify secure and reliable replacements for EOL open-source components, leveraging Software Bill of Materials (SBOM). This proactive approach is vital for maintaining software security and stability. Similarly, at Plotly, I witnessed the impact of EOL on popular charting OSS libraries, highlighting the need for strategic planning and support for users facing these transitions.\nOne key aspect of informing business decisions around EOL framework support lies in understanding the adoption rates of specific package versions. 
While not perfectly accurate, analyzing download statistics from repositories like Maven Central and GitHub, alongside engagement metrics from platforms like Stack Overflow, can offer valuable insights. A high volume of downloads for a particular version, coupled with active discussions (especially those related to implementation challenges or bug fixes) on Stack Overflow, suggests a significant and potentially dependent user base. This data can act as a strong signal, indicating where investment in EOL support and migration pathways would be most impactful.\nEnd-of-life (EOL) Software Scanning SaaS # These platforms help proactively identify EOL components within a project’s dependencies and suggest secure and up-to-date alternatives, much like the product I managed at Sonatype. This is typically offered as a SaaS solution integrated into the development lifecycle, providing continuous monitoring and alerts. Popular vendors in this space include FOSSA obsolescence management, Aikido.dev, and Tenable.io. These tools often integrate with build pipelines and IDEs, offering developers real-time feedback on their dependencies.\nProviding Support Beyond EOL (Never-Ending Support — NES) # Several business models capitalize on the need for EOL support. One approach is offering enhanced support and extended security maintenance for EOL frameworks to organizations that are not yet ready or able to migrate. Vendors such as Herodevs and Tuxcare have established businesses by providing NES (Never-Ending Support) for these open-source frameworks. This typically involves providing critical bug fixes, security patches, and sometimes even limited feature backports beyond the official EOL date, offered as a subscription service. 
While offering “never-ending” support has challenges regarding resource allocation and the evolving threat landscape, it caters to organizations with complex systems or strict compliance requirements that necessitate prolonged stability.\nProviding Migration Services and Tools # Another model focuses on providing migration services and tools. Companies can offer expertise and tooling to help organizations seamlessly transition from EOL frameworks to newer or alternative technologies. This could include automated migration scripts, compatibility assessments, code refactoring services, and expert consulting. These services can significantly reduce the time and cost associated with complex migrations.\nData-Driven Insights and Analytics # Finally, the data-driven insights gleaned from download statistics and community activity can be valuable offerings. Websites like Endoflife.date provide key reports on open-source adoption trends, the prevalence of specific EOL versions, and prediction of upcoming EOL risks based on project activity. This can help organizations make informed decisions about their technology roadmap and proactively plan migrations. This could include identifying emerging EOL risks within their dependency trees based on broader usage patterns.\nNavigating the end-of-life of open-source frameworks effectively is crucial for the stability, security, and longevity of software projects. 
The business models outlined above demonstrate the growing recognition of this need and offer valuable solutions for organizations seeking to manage this ever-evolving landscape.\nThis article was originally published on Medium as part of the BoFOSS publication.\n","date":"15 April 2025","externalUrl":null,"permalink":"/posts/business-models-foss-end-of-life/","section":"Posts","summary":"","title":"Business Models supporting FOSS end-of-life","type":"posts"},{"content":"","date":"15 April 2025","externalUrl":null,"permalink":"/tags/end-of-life/","section":"Tags","summary":"","title":"End-of-Life","type":"tags"},{"content":"","date":"15 April 2025","externalUrl":null,"permalink":"/tags/maintenance/","section":"Tags","summary":"","title":"Maintenance","type":"tags"},{"content":"","date":"15 April 2025","externalUrl":null,"permalink":"/tags/sustainability/","section":"Tags","summary":"","title":"Sustainability","type":"tags"},{"content":"","date":"14 March 2025","externalUrl":null,"permalink":"/tags/business-strategy/","section":"Tags","summary":"","title":"Business-Strategy","type":"tags"},{"content":"","date":"14 March 2025","externalUrl":null,"permalink":"/tags/pricing/","section":"Tags","summary":"","title":"Pricing","type":"tags"},{"content":"In open source businesses, your most formidable competitor often isn’t another company — it’s your own freely available product. Unlike proprietary software battles over pricing, features, or marketing, open-source businesses frequently face an uphill challenge in convincing customers to move from free, community-driven offerings to paid, enterprise editions.\nMany open source companies have discovered that when it comes time to sell an Enterprise edition, the sales team is often fighting an uphill battle against their offering as many customers with deep pockets usually decide to continue with the free version and invest in custom development. 
The open source offering becomes the default option — even if the enterprise version packs additional features and robust support.\n“We’re Giving Too Much Away” — The Sales Perspective # At multiple open source companies where I’ve worked, sales told me that we should cut down features in the open source offering. They are not wrong — when we try to sell an Enterprise edition, the customer often wants to keep using open source and do custom development instead of buying an Enterprise offering. Hence, logically, they arrive at the standard prescription: withhold certain features from the open source edition and reserve them for enterprise users. But strategically removing functionality often backfires. If users perceive intentional limitations, they may simply build their own solutions rather than adopt the premium tier. Docker initially faced backlash when it tightened restrictions on Docker Hub’s free tier, prompting developers to explore alternative registries rather than move up to paid plans.\nDetermining What Goes into Open Source vs. Enterprise # Deciding which features belong in open source isn’t purely about driving upgrades — it’s about fostering adoption and engagement within the developer community:\nCommoditized Features: If competitors widely offer a feature, withholding it makes little sense; inclusion becomes essential to stay relevant. Community Vitality: Features crucial for community momentum, adoption, or attracting new contributors should generally remain open. More reasons to add features to open source could be to drive developer adoption and community engagement. 
Much-asked-for updates and features become important to keep your community engaged and for new developers to pick up your open source framework.\nConversely, features are ideal candidates for enterprise tiers when they:\nAddress niche or enterprise-specific scenarios. Provide compliance or security guarantees (e.g., SOC 2, HIPAA, GDPR). Offer managed or hosted services that significantly reduce operational complexity. Promise high perceived value, such as dedicated support, SLAs, and specialized integrations. GitHub, for instance, differentiates clearly by offering advanced security features, audit logs, and robust CI/CD capabilities in their Enterprise tier — areas highly valuable for compliance-conscious enterprise customers.\nYour enterprise offering needs a block and a pull # In many instances, feedback from sales teams reveals a surprising truth: If customers prefer investing in custom solutions instead of upgrading, it’s often a sign your enterprise product lacks critical pull rather than your open-source product being overly generous. It signals a strategic opportunity to improve the premium offering — enhancing enterprise-grade security, offering managed infrastructure, providing robust customer support, or guaranteeing uptime and reliability.\nIn other situations, it may be that open source users never encounter any blocks when extending the value to their companies. For this reason, when GitHub initially launched, they required developers to pay for private repos (this has since changed as GitHub matured into other value propositions).\nTo summarize,\nThe delta of value between the open-source offering and the commercial offering is not large enough for customers to pay for it. Once hooked, open-source users must experience a conflict that blocks them from creating value at the enterprise level. After experiencing the block, the users must experience a pull (premium features) that goes beyond their current needs and invites them to pay for the product. 
For instance, Confluent built its enterprise success on robust managed Kafka services, significantly reducing customers’ infrastructure overhead, thus creating a clear, compelling value proposition distinct from the open-source Apache Kafka. Here are some standard pulls that open-source companies try to add to their premium offering\nAuthentication \u0026amp; Authorization: GitLab and HashiCorp offer sophisticated access management to appeal to security-sensitive enterprises. Managed Services: MongoDB Atlas and Elastic Cloud offer hosted databases, removing operational overhead for customers. Support and SLAs: Red Hat’s entire enterprise model is built on providing guaranteed uptime, security patches, and comprehensive support, which justify enterprise pricing. Ultimately, thriving open-source businesses don’t succeed by withholding — they win by offering undeniably superior value in their enterprise offerings, clearly addressing enterprise-specific pain points beyond the reach of their open-source counterparts.\nThis article was originally published on Medium as part of the BoFOSS publication.\n","date":"14 March 2025","externalUrl":null,"permalink":"/posts/biggest-competitor-open-source/","section":"Posts","summary":"","title":"When Your Biggest Competitor Is Your Own Open Source Offering","type":"posts"},{"content":"","date":"6 March 2025","externalUrl":null,"permalink":"/tags/coss/","section":"Tags","summary":"","title":"Coss","type":"tags"},{"content":"","date":"6 March 2025","externalUrl":null,"permalink":"/tags/leverage/","section":"Tags","summary":"","title":"Leverage","type":"tags"},{"content":"","date":"6 March 2025","externalUrl":null,"permalink":"/tags/product-management/","section":"Tags","summary":"","title":"Product-Management","type":"tags"},{"content":" The Open Source Edge: 5 Ways PMs Can Gain Leverage # As a product manager in a commercial open-source software (COSS) company, you inherit unique advantages that traditional software vendors don’t. 
Your user base isn’t just your paying customers — it includes a massive global community of developers using your product for free. If leveraged correctly, this can be a powerful asset.\nIn this article, I will explore some advantages you can and should be leveraging.\nFaster Issue Identification # With an open-source project, you don’t rely solely on internal QA or customer support tickets to identify issues. Millions of free users act as an extended testing network, reporting bugs via GitHub issues, forums, and social channels. This crowdsourced feedback helps to surface critical technical problems early.\nHowever, it’s important to prioritize effectively. The most vocal users don’t always represent the most impactful issues. Balancing community-driven reports with enterprise needs and strategic goals is key.\nRapid Product Feedback # Your community isn’t just a source of bug reports — it’s an always-on feedback loop. You can engage users through GitHub discussions, Slack, Discourse forums, polls, and idea boards. For Nexus Repository Manager, I helped launch an Ideas Portal to crowdsource feature requests, capturing thousands of free users\u0026rsquo; upvotes, comments, and valuable context.\nUnlike traditional software vendors, COSS companies can rapidly validate ideas before investing engineering resources, reducing risk and increasing confidence in product decisions.\nBuilt-in Product-Led Growth (PLG) # Most SaaS companies spend heavily on free trials and onboarding to acquire users. Open-source companies already have a massive installed base — your challenge is to create a PLG journey around them.\nYour strategy should identify friction points where paid features create tangible value. 
At Sonatype, we built a free security scanner for Nexus Repository, which identified vulnerabilities but required a paid plan for remediation—this naturally guided business users toward an enterprise upgrade without disrupting their workflow.\nCommunity-Powered Product Demos # Some of the best product demos don’t come from your sales team — they come from your community. Open-source users often build creative, high-impact use cases that showcase your product’s potential better than any scripted demo.\nAt Plotly, our most compelling data visualizations weren’t from internal teams but from open-source contributors. We actively encouraged this by running community challenges, highlighting user-generated content, and integrating the best examples into our marketing.\nAccelerated Developer Advocacy \u0026amp; Adoption # In traditional software, you need dedicated evangelists to drive awareness. In open source, your passionate user base does this for you. If you engage them effectively — through Discord, conferences, and contributor programs — they become your strongest advocates, organically spreading adoption in their organizations.\nBeing a PM at a COSS company means thinking beyond just customers — you’re managing an ecosystem. 
If you can harness community insights, guide free users to paid value, and amplify community contributions, you unlock a growth engine that proprietary software companies can’t easily replicate.\nThis article was originally published on Medium as part of the BoFOSS publication.\n","date":"6 March 2025","externalUrl":null,"permalink":"/posts/open-source-edge-product-managers/","section":"Posts","summary":"","title":"The Open Source Edge: 5 Ways PMs Can Gain Leverage","type":"posts"},{"content":"","date":"25 February 2025","externalUrl":null,"permalink":"/tags/agpl/","section":"Tags","summary":"","title":"Agpl","type":"tags"},{"content":"","date":"25 February 2025","externalUrl":null,"permalink":"/categories/agpl/","section":"Categories","summary":"","title":"AGPL","type":"categories"},{"content":"","date":"25 February 2025","externalUrl":null,"permalink":"/tags/copyleft/","section":"Tags","summary":"","title":"Copyleft","type":"tags"},{"content":"","date":"25 February 2025","externalUrl":null,"permalink":"/tags/free-software/","section":"Tags","summary":"","title":"Free-Software","type":"tags"},{"content":"","date":"25 February 2025","externalUrl":null,"permalink":"/categories/osslicenses/","section":"Categories","summary":"","title":"OSSLicenses","type":"categories"},{"content":" Why AGPL is a force for good? # There’s a common misconception that free software automatically means copyleft and that open source is inherently permissive. In reality, these terms represent different philosophies and legal frameworks.\nCopyleft licenses, including the AGPL, aren’t punitive — they require that modifications or derivative works remain under the same license, fostering a community-driven ecosystem. For example, Grafana Labs switched from Apache to AGPLv3 in 2021. 
Most users deploy Grafana unmodified, so the AGPL clause rarely becomes a hurdle, yet it ensures that any enhancements made to the software benefit everyone.\nAGPLv3 from GNU — Free as in Freedom # Traditional GPL licenses (coming from GNU) were designed to ensure that modifications to software remained open when the software was redistributed. However, there was a loophole: when software was offered as a service over the web — what we call Software-as-a-Service (SaaS) — there was no obligation to share modifications since the software itself was never “distributed” in the traditional sense. The AGPL was introduced to address this gap. The AGPL extends the copyleft requirements of the GPL by including a “network use” clause. Even so, enforcing the AGPL can be challenging. Legal teams often point to instances like Google’s blanket ban on AGPL code — a decision likely driven by the complexity of ensuring correct license compliance. Even when paired with compatible licenses like GPLv3, AGPL’s requirements can extend to an entire distribution, complicating matters further.\nUnmodified AGPL-licensed components # It’s crucial to understand that the AGPL doesn’t force companies to open-source proprietary code used internally or externally by default. If a business develops a proprietary user interface that interacts with an AGPL-licensed backend, such as Loki, there is no obligation to reveal its code.\nThis is akin to how dynamic linking with GPL software doesn’t require releasing proprietary modifications — only when the modified AGPL software is served directly to customers is the source code disclosure triggered, which brings us to our next point.\nModified AGPL-licensed components # In practice, it’s worth noting that most companies use AGPL components in their unmodified form. If the modifications are used strictly internally, the company isn’t obligated to disclose those changes. 
However, once the modified software is made accessible to external users (for instance, through a cloud service), the company must release the modified source code.\nCase: What happened at Neo4j # In May 2018, Neo4j dropped the AGPL for its Enterprise Edition (EE) and instead combined the AGPLv3 with the Commons Clause. This additional clause prohibited non-paying users from reselling the software or offering competing support services.\nThe complexities of AGPL enforcement were highlighted in Neo4j, Inc. v. PureThink, LLC, which revolved around the right to add or modify contractual restrictions on top of AGPL-licensed software. When a licensee removed these restrictions, arguing that AGPLv3 explicitly allowed it, litigation followed, reinforcing the legal uncertainties around AGPL’s enforcement in hybrid licensing models. The licensee argued that under AGPLv3, Section 7, Paragraph 4, licensees are granted the right to remove any “further restrictions” added to the license: “If the program as you received it, or any part of it, contains a notice stating that it is governed by this license along with a term that is a further restriction, you may remove that term.”\nIt\u0026rsquo;s interesting to note that they chose to use an AGPL license and not a GPL license, which would have allowed them to add additional restrictive clauses easily. However, in 2022, a California court issued a partial summary judgment affirming that a license combining the AGPL with non-open-source restrictions cannot be called “free and open source.” The court also upheld Neo4j’s interpretation that licensors (those licensing the software) could add additional restrictive terms, while licensees (those using the software) could not remove those terms.\nSince then, PureThink LLC\u0026rsquo;s founder has appealed to the US Court of Appeals to reconsider the California district court’s decision as this judgment impacts the future of ‘free’ FOSS licenses. 
This case underscores the challenges of blending copyleft and proprietary licensing models, especially when additional restrictions are imposed on AGPL’s open-access principles. The outcome will likely create a binding precedent that would limit one of the major freedoms that AGPLv3 and other GPL licenses aim to protect — the ability to remove restrictions added to GPL-licensed code.\nLatest News: https://www.theregister.com/2025/03/04/free_software_foundation_agplv3/\nThe guardian of Commercial Open Source Software # Some argue that the AGPL license creates a moat for commercial open source companies by not allowing competition to emerge around them. Competitors who might try to build a business around the modified product can only do so under the AGPL license, preventing the creation of any moat for the competitor. Hence, the application of the AGPL by commercial open source companies is defensive, not progressive.\nIn my opinion, inherent advantages for the licensor can be beneficial rather than detrimental as they make FOSS licenses commercially attractive. This helps businesses drive innovation and make money while keeping the end users\u0026rsquo; interests at heart. It also helps them prevent large cloud providers (like AWS) from profiting off their open-source software without contributing back.\nThe winners and the losers # Winner — The Community \u0026amp; commercial open source companies # AGPL licenses are designed to ensure that improvements to a codebase remain open and available to the community. When developers build on AGPL software, any modifications made and deployed — especially over a network — must be released under the same license. By mandating that derivative works remain free, the AGPL protects the collective effort of countless contributors. 
This “pay-it-forward” approach means that every enhancement, bug fix, or new feature becomes accessible to all, fueling rapid innovation and mutual benefit.\nCommercial open source companies also benefit from the competitive moat that prevents others from using their work without contributing. While some argue this limits competition, it also strengthens the business model of COSS companies, allowing them to drive innovation and generate revenue while keeping user interests at heart.\nThe Loser — Cloud Commercial Gatekeepers # The primary impact of the AGPL is on companies like Amazon (AWS), whose cloud services might host popular open-source projects. The AGPL restricts them from offering these projects without complying with its terms. This stance supports the open source community by ensuring fair contribution and preventing free rides on community labor. Consequently, while these companies may have robust infrastructures and significant market power, the AGPL levels the playing field by enforcing a fair contribution policy.\nUnderstanding these nuances is vital for product or engineering managers. 
Recognizing that the AGPL aims to protect community contributions without stifling internal innovation can help balance legal obligations with business strategy while nurturing a collaborative ecosystem that drives open source innovation.\nFurther Reading # https://drewdevault.com/2020/07/27/Anti-AGPL-propaganda.html # This article was originally published on Medium as part of the BoFOSS publication.\n","date":"25 February 2025","externalUrl":null,"permalink":"/posts/why-agpl-force-for-good/","section":"Posts","summary":"","title":"Why AGPL is a force for good?","type":"posts"},{"content":"","date":"24 February 2025","externalUrl":null,"permalink":"/tags/cloud-native/","section":"Tags","summary":"","title":"Cloud-Native","type":"tags"},{"content":"","date":"24 February 2025","externalUrl":null,"permalink":"/tags/cncf/","section":"Tags","summary":"","title":"Cncf","type":"tags"},{"content":"","date":"24 February 2025","externalUrl":null,"permalink":"/tags/donation/","section":"Tags","summary":"","title":"Donation","type":"tags"},{"content":"","date":"24 February 2025","externalUrl":null,"permalink":"/tags/governance/","section":"Tags","summary":"","title":"Governance","type":"tags"},{"content":" The Upside To Donating Projects to the CNCF # In an era where open-source leadership defines the competitive edge, technology directors at open-source companies often have to decide, and explain, whether to donate their project to the Cloud Native Computing Foundation (CNCF). 
The CNCF isn’t just another industry body — it’s a crucible where projects evolve from early-stage ideas into fully mature, community-certified solutions.\nSome recent examples of this strategic play are Solo.io’s decision to donate its leading open-source API gateway to CNCF, and Red Hat\u0026rsquo;s intention to donate multiple tools for creating and managing containers, including Podman, to the Cloud Native Computing Foundation (CNCF).\nMilestones as Maturity Benchmarks / Industry Standards # One of the most compelling advantages of donating a project to CNCF is the stamp of approval that comes with its milestones. The CNCF milestones serve as an industry-trusted benchmark. The CNCF nurtures projects through a clearly defined journey — from the Sandbox phase for early-stage ideas, through Incubation, to eventual Graduation. A graduated project signals technical maturity and a robust, diverse community of contributors. This translates to higher credibility for consumers of those technologies, often large enterprises. Customers and partners alike take note when a project has “made it” through the CNCF pipeline.\nIt is important to note that several projects have remained in the CNCF Sandbox or Incubating stage without graduating. For example, Spotify’s Backstage and KubeVela are notable projects that, despite growing adoption and active communities, haven’t yet met all the CNCF graduation criteria. Remaining in Sandbox or Incubation doesn’t imply failure — it often reflects a project’s ongoing evolution and the rigorous benchmarks set by CNCF.\nDriving Community Innovation and Market Positioning # Beyond credibility, joining CNCF can significantly accelerate innovation. People are more reluctant to volunteer when it’s evident that a corporation is profiting substantially while volunteers receive no share of the benefits; neutral foundation governance lowers that barrier. 
Donating to CNCF opens the door to a broader pool of contributors and ecosystem partners who bring fresh perspectives and technical expertise. By associating with CNCF, your project gains synergies with other leading initiatives, enhancing credibility and accelerating innovation through shared expertise. This validation can be incredibly persuasive to enterprises and partners evaluating your project. A critical advantage in today’s fast-paced market is having a project that evolves quickly and adapts to new challenges. Moreover, aligning with CNCF’s neutral, globally recognized brand can enhance your company’s strategic positioning, signaling that you’re committed to the open-source ethos and long-term industry growth.\nA Measured Security Assessment Process # One standout service is the CNCF TAG-Security Security Assessment Process (TSSA). This process reduces ecosystem risk by improving vulnerability detection and resolution and enhancing domain expertise through collaborative assessments. Another goal is accelerating adoption by providing consistent, structured security documentation, establishing a measurable security baseline, and clearly outlining design goals, potential risks, and next steps.\nThe TSSA offers a view of a project’s security design and fosters a culture of security awareness. It delivers an external security validation that can boost your project’s credibility.\nThe Donation Process and Its Considerations # It’s important to remember that donating a project to CNCF is not a casual decision. The process requires an application and thoughtful decision-making, and acceptance is not guaranteed. For many projects, however, milestone-based validation and the structured TSSA are compelling advantages that help reinforce an already successful governance model. 
For a more detailed list of action items for any project onboarding as a sandbox project — please visit this link.\nThis article was originally published on Medium as part of the BoFOSS publication.\n","date":"24 February 2025","externalUrl":null,"permalink":"/posts/upside-donating-projects-cncf/","section":"Posts","summary":"","title":"The Upside To Donating Projects to the CNCF","type":"posts"},{"content":"","date":"11 February 2025","externalUrl":null,"permalink":"/tags/github/","section":"Tags","summary":"","title":"Github","type":"tags"},{"content":"","date":"11 February 2025","externalUrl":null,"permalink":"/tags/hugging-face/","section":"Tags","summary":"","title":"Hugging-Face","type":"tags"},{"content":"","date":"11 February 2025","externalUrl":null,"permalink":"/tags/machine-learning/","section":"Tags","summary":"","title":"Machine-Learning","type":"tags"},{"content":"","date":"11 February 2025","externalUrl":null,"permalink":"/tags/open-core/","section":"Tags","summary":"","title":"Open-Core","type":"tags"},{"content":" Open-core business strategy @ Hugging Face, AKA \u0026lsquo;GitHub for AI Models\u0026rsquo; # Initially conceived as an AI-powered chatbot for teenagers, Hugging Face pivoted towards democratizing machine learning (ML) models and building an open-source ecosystem. Today, it operates as a community-driven AI company, hosting a repository of state-of-the-art ML models and tools akin to \u0026ldquo;GitHub for AI.\u0026rdquo;\nHugging Face has raised $395 million (Series D), valuing the company at $4 billion. It plans to go public in the future, aiming to be the first company with an emoji ticker on NASDAQ. Its trajectory reflects the growing demand for AI model sharing, collaboration, and enterprise integration, mirroring GitHub\u0026rsquo;s transformation from a code-hosting service into a DevOps powerhouse. 
Hugging Face employs an open-core business model.\nIn this article, I examine the similarities between Hugging Face and GitHub, two businesses that facilitate collaboration among their target users (data scientists and developers, respectively). Both utilize an open-core business model and have developed enterprise offerings that create significant platform lock-in.\nThe Open-Core Business Model # Hugging Face\u0026rsquo;s success, much like GitHub\u0026rsquo;s, is rooted in an open-core approach. Its Transformers library and Model Hub are freely available, allowing researchers, students, and companies to access pre-trained models and build on them without cost. This mirrors GitHub\u0026rsquo;s strategy of providing free public repositories, which have become the foundation for open-source software development.\nHowever, while the core platform remains free, both companies have found ways to monetize at scale. GitHub generates revenue through private repositories, enterprise collaboration tools, and DevOps automation services, including GitHub Actions. Hugging Face, meanwhile, earns revenue through compute services, API-based model inference, AutoTrain, and enterprise AI hosting.\nCollaboration as the Engine of Growth # One of GitHub\u0026rsquo;s key advantages is its ability to facilitate collaboration at scale. Open-source projects thrive because of GitHub\u0026rsquo;s version control, issue tracking, and pull request systems, making it effortless for developers to contribute.\nHugging Face mirrors this in AI, but with models instead of code. The Hugging Face Hub acts as a repository for pre-trained models, datasets, and AI pipelines, allowing ML practitioners to reuse, fine-tune, and build on others\u0026rsquo; work. The impact has been significant — its Transformers library has been forked 8x more than competing AI platforms like H2O.ai.\nThis community-driven approach fuels platform stickiness. 
Much like GitHub, the more models hosted on Hugging Face, the harder it becomes for enterprises and researchers to move elsewhere.\nSimilar lock-in enterprise play # GitHub\u0026rsquo;s enterprise strategy involves offering features tailored to large organizations — enhanced security, compliance, and workflow automation tools — making it indispensable for companies managing complex codebases. Hugging Face has followed a parallel path, targeting AI-driven enterprises that need private model hosting, compliance, and scalable machine learning infrastructure.\nBoth platforms use pricing and features to drive adoption, create lock-in, and monetize growth. GitHub integrates directly into developer workflows, ensuring organizations rely on it for their entire software development lifecycle. Hugging Face integrates into AI pipelines, handling model training, versioning, inference, and deployment, making it deeply embedded in ML workflows.\nThese enterprise plays ensure long-term revenue streams and retention, as switching costs become prohibitively high.\nSimplifying productionalization # GitHub\u0026rsquo;s CI/CD tools (GitHub Actions) enable developers to seamlessly deploy code. Hugging Face takes a similar approach with its Inference API, which allows developers to integrate AI models into production without managing ML infrastructure. This serverless approach to AI echoes GitHub\u0026rsquo;s role in automating cloud deployments.\nThe goal is the same: make production workflows seamless, so users remain locked into the ecosystem.\nProjects (Spaces, Pages) # Hugging Face\u0026rsquo;s Spaces, which allow developers to deploy interactive AI applications using Streamlit, Gradio, and other UI tools, serve as another strategic parallel to GitHub Pages, which hosts static websites and web apps.\nBoth products extend their platforms from development and collaboration to public-facing application deployment. 
This shift moves Hugging Face beyond model hosting, making it a platform for AI-powered applications, much like GitHub became a home for developer tools beyond just version control.\nFuture Opportunities # GitHub has leveraged GitHub Sponsors to allow developers to fund open-source projects. Hugging Face could adopt a similar model, enabling users to fund the development of high-value AI models. It also aligns incentives for Hugging Face to make more robust open-source models than proprietary ones.\nAnother emerging opportunity is a paid model marketplace, where developers can sell fine-tuned AI models. This might be harder since GitHub seems to have a head start here with GitHub Marketplace.\nNonetheless, as AI development converges with software engineering, Hugging Face has room to evolve into a broader AI infrastructure platform, further mirroring GitHub\u0026rsquo;s role in DevOps.\nI don\u0026rsquo;t think GitHub and Hugging Face have to be separate ecosystems. By leveraging automation, creating unified issue tracking, and implementing cross-platform workflows, Hugging Face can build a bridge between the two platforms.\nThis article was originally published on Medium as part of the BoFOSS publication.\n","date":"11 February 2025","externalUrl":null,"permalink":"/posts/hugging-face-github-for-ai-models/","section":"Posts","summary":"","title":"Open-core business strategy @ Hugging Face, AKA 'GitHub for AI Models'","type":"posts"},{"content":"","date":"11 February 2025","externalUrl":null,"permalink":"/tags/platform/","section":"Tags","summary":"","title":"Platform","type":"tags"},{"content":"","date":"5 November 2024","externalUrl":null,"permalink":"/tags/automation/","section":"Tags","summary":"","title":"Automation","type":"tags"},{"content":"","date":"5 November 2024","externalUrl":null,"permalink":"/tags/iaas/","section":"Tags","summary":"","title":"Iaas","type":"tags"},{"content":"","date":"5 November 
2024","externalUrl":null,"permalink":"/tags/migration/","section":"Tags","summary":"","title":"Migration","type":"tags"},{"content":"","date":"5 November 2024","externalUrl":null,"permalink":"/tags/paas/","section":"Tags","summary":"","title":"Paas","type":"tags"},{"content":" Will AI Agents be the end of the PaaS (platform as a service) # There are two main types of cloud services: Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). IaaS providers like AWS and GCP offer direct access to virtualized hardware resources. PaaS providers such as Heroku and Vercel simplify the end-user experience by adding an abstraction layer on top of IaaS. This abstraction simplifies deployment and management, making cloud infrastructure more straightforward. Therefore, PaaS providers charge a premium on top of the underlying server costs for the convenience and additional features they offer.\nDevelopers prefer PaaS because it is easy to use but costs more. Early-stage and small businesses lean on PaaS providers like Heroku because they have limited time and resources and want to keep these platforms\u0026rsquo; streamlined user experience.\nTypically, a smaller company starts on PaaS for speed, but the costs spiral out of control soon. You consider moving to an IaaS like AWS or GCP, but migration is a nightmare—you need to reconfigure infrastructure, handle secrets, and rewrite deployment scripts. Migrating applications from a Platform as a Service (PaaS) to a central cloud platform is often intricate, time-consuming, and costly. How do you proceed? Could AI agents take on this burden and automate the migration process? In theory, AI agents can perform the migration by automating the transfer of workloads from PaaS services such as Heroku to leading cloud platforms, reducing operational costs and delivering better security, compliance, and infrastructure control.\nCan AI Agents Migrate My Apps from PaaS to IaaS? 
# AI agents would need to perform a series of steps almost immaculately to migrate these apps from PaaS to IaaS.\nExtract Application Data — The migration agent would start by pulling data from the PaaS platform\u0026rsquo;s API (e.g., Heroku API). This includes application configurations, dependencies, and metadata required for deployment. To enhance security, the agent should filter out sensitive information (e.g., credentials, keys, and secrets) and retain this data locally rather than transmitting it to external APIs.\nGenerate Dockerfiles for Containerization — The migration agent must then use the non-sensitive application configuration and metadata to generate the Dockerfiles based on the application requirements, creating a containerized environment compatible with IaaS platforms. But this isn\u0026rsquo;t trivial — small misconfigurations in dependencies or runtime settings could lead to deployment failures.\nCreate Terraform Files for Infrastructure as Code (IaC) — The agent would then use the application and infrastructure requirements to generate Terraform files to deploy the apps using IaC. This creates a blueprint for deploying the application and its dependencies in the target cloud environment. The migration agent validates the generated Terraform manifest to ensure it meets all required standards and is deployable on the IaaS platform. A key challenge here? Ensuring compatibility — different IaaS platforms have different default configurations that could break the app.\nSecurely Inject Sensitive Data — The migration agent would then need to reintegrate the sensitive data, securely embedding it into the Terraform configuration as secrets. Finally, the agent initiates an auto-remediation process if errors are detected in the manifest. 
Mistakes here could expose sensitive data or break the application.\nValidate and Deploy — Upon successful validation, the migration agent should generate an output detailing the configuration and deployment specifications for user review. Upon configuration approval, the application is deployed to the target IaaS platform.\nWhile using AI agents to migrate applications from PaaS to IaaS offers many benefits, some potential pitfalls and challenges can arise. Here are some key areas where the approach could go wrong:\nWhat can go wrong? # Well, a lot. Here are a few things the AI Agent will have to design for:\nApplication Breakage Due to Architectural Differences\nFirstly, applications designed for a specific PaaS, like Vercel, may not function as intended in the IaaS environment due to differences in architecture, dependencies, or runtime configurations. PaaS services often come with built-in scaling, logging, and managed services. Moving to IaaS might break these integrations.\nExample: A Vercel app relying on serverless functions might not work as expected on AWS EC2 without significant refactoring.\nSecurity and Compliance Risks\nIf the AI agent fails to correctly filter or secure sensitive data, it could lead to breaches or compliance violations. It can potentially expose credentials in logs, hardcode secrets in code repositories, or trigger compliance violations (e.g., GDPR or SOC 2 risks).\nPoor Cost Optimization\nThe AI agent might misestimate resource needs, leading to over-provisioning (increased costs) or under-provisioning (performance issues). If the application does not scale effectively in the new environment, it could result in performance degradation during peak usage.\nFurthermore, inadequate monitoring setup in the new environment can hinder troubleshooting and performance optimization. 
Lack of proper logging and audit trails during the migration can complicate compliance verification.\nHidden Costs of Migration\nFinally, you might not achieve the cost benefits at all. The overall cost of migration (including downtime, unexpected challenges, and resource allocation) could outweigh the expected savings. Furthermore, transitioning to IaaS may introduce additional costs not accounted for in the original PaaS environment, such as data transfer fees or scaling costs.\nClosing thoughts # As businesses tackle the complexities of cloud migration, AI agents offer a promising way to reduce costs by transitioning from PaaS to IaaS. However, careful planning, thorough testing, and continuous monitoring are crucial to mitigating risks related to compatibility, data security, and resource allocation. AI agents are promising, but they won\u0026rsquo;t replace DevOps teams overnight. While they can automate many steps, human expertise is still crucial for handling edge cases, optimizing infrastructure for performance and cost, and ensuring security and compliance.\nThis article was originally published on Medium as part of the BoFOSS publication.\n","date":"5 November 2024","externalUrl":null,"permalink":"/posts/platform-as-a-service-and-ai-agents/","section":"Posts","summary":"","title":"Will AI Agents be the end of the PaaS (platform as a service)","type":"posts"},{"content":"","date":"17 October 2024","externalUrl":null,"permalink":"/tags/anaconda/","section":"Tags","summary":"","title":"Anaconda","type":"tags"},{"content":"","date":"17 October 2024","externalUrl":null,"permalink":"/tags/business-model/","section":"Tags","summary":"","title":"Business-Model","type":"tags"},{"content":"","date":"17 October 2024","externalUrl":null,"permalink":"/tags/data-science/","section":"Tags","summary":"","title":"Data-Science","type":"tags"},{"content":"","date":"17 October 
2024","externalUrl":null,"permalink":"/tags/freemium/","section":"Tags","summary":"","title":"Freemium","type":"tags"},{"content":" Open-source Business Model at Anaconda # It\u0026rsquo;s no secret that Anaconda operates on a freemium model (open-core) with tiered pricing, catering to individual users, academics, and enterprises. But what\u0026rsquo;s the real magic behind it? How does it all come together?\nAnaconda wasn\u0026rsquo;t always called Anaconda. It originated as Continuum Analytics, a company founded by Travis Oliphant (also the creator of NumPy and SciPy) to make scientific computing in Python more accessible. As the company scaled, the name change to Anaconda reflected the extensive and comprehensive nature of the software distribution. The renaming refined the platform\u0026rsquo;s positioning and broadened its appeal beyond scientific computing.\nThe development of core scientific Python libraries, which form the foundation of Anaconda\u0026rsquo;s ecosystem, has a rich history intertwined with open-source contributions and, in some cases, early government support. While Bokeh\u0026rsquo;s initial funding came from DARPA\u0026rsquo;s XDATA program, other crucial packages like NumPy and SciPy emerged from the efforts of academic researchers and the open-source community. NumPy, which provides fundamental array operations, and SciPy, which offers a wide range of scientific computing tools, were built incrementally by dedicated individuals driven by the need for better data analysis tools in Python.\nFirst Principles # Data Scientists are NOT Software Engineers — Anaconda\u0026rsquo;s business model is built on the understanding that data scientists and analysts often lack the software engineering expertise required for complex environment setups and package management. The platform simplifies these processes, enabling users to focus on data analysis and modeling. 
Need for an integrated platform for Data Science — By offering an integrated platform that includes package management, environment management, and cloud-based notebooks, Anaconda streamlines the data science workflow.\nAnaconda\u0026rsquo;s Open-Core Business Model # This model involves offering a core set of features and functionalities for free, typically as open-source software, while providing proprietary extensions and services for paying customers.\nHow does Anaconda decide which features to open source and which to monetize? It all comes down to the use case. If a feature is broad and widely applicable, it goes open source. If it\u0026rsquo;s specialized or enterprise-focused, it becomes a commercial offering.\nWhile this model benefits from the massive user base that comes from free distribution, creating a network effect and driving adoption, maintaining a competitive edge in both the open-source and commercial markets is difficult and requires continuous innovation.\nHere\u0026rsquo;s how Anaconda\u0026rsquo;s open-core model works in practice:\nOpen Core (Free): The core of Anaconda\u0026rsquo;s offering is the free and open-source Anaconda Distribution. This includes the conda package and environment manager, a vast collection of popular data science libraries (NumPy, SciPy, Pandas, scikit-learn, etc.), and basic Jupyter Notebook support. This free distribution attracts a large community of individual users, academics, and even some businesses.\nProprietary Extensions (Paid): Anaconda offers several paid tiers (Starter, Business, Enterprise) that provide additional features and services not available in the free version. These paid offerings cater primarily to businesses and organizations with more demanding needs, such as:\nEnhanced Security Features: Enterprise-grade security controls, including role-based access control, secure authentication, and vulnerability scanning. 
Custom Package Repositories: The ability to manage and distribute private or proprietary packages within an organization. Cloud-Based Collaboration: Scalable cloud Jupyter notebooks with enhanced collaboration features, more storage, and greater computing power. Dedicated Support: Access to professional support teams for assistance with installation, configuration, and troubleshooting. Tailored Deployments: Custom on-premises installations to meet specific organizational needs. Revenue Streams # Subscriptions: Anaconda offers various subscription tiers (Free, Starter, Business, Enterprise) with increasing features and support. This is their primary revenue source.\nEnterprise Solutions: They provide tailored solutions for large organizations, as listed below.\nTailored Deployments: Custom on-premises installations to meet specific organizational needs. Custom Package Repositories: Allowing enterprises to manage and distribute proprietary packages securely. Advanced Security Features: Enhanced controls for compliance and vulnerability management. Pricing # Anaconda\u0026rsquo;s pricing structure follows a clear progression from free to enterprise tiers, with each level offering additional features and capabilities.\nFree \u0026amp; Starter Tier # The free tier provides the core Anaconda Distribution with basic functionality, while the Starter tier adds enhanced features for individual professionals and small teams.\nBusiness Tier # The Business tier includes advanced collaboration features, enhanced security, and dedicated support for growing organizations.\nEnterprise Features # Enterprise customers often have private packages (custom builds) that they need to distribute to data scientists in the company easily. 
Using conda as a distribution channel for these packages allows data scientists to access them as they would any other Python library (note that they do need to include the custom channel in their config — but this can be a company-wide setting applied broadly to all developer environments).\nCloud Jupyter Notebooks # As users pay more, they get more computing, storage, etc., on cloud Jupyter notebooks. This allows for scalable computing power and storage, enhanced features for team collaboration on notebooks, and access to environments and tools without local installations.\nDeployment Features # Anaconda provides robust deployment capabilities that enable enterprises to move from development to production efficiently.\nOne-Click Deployment: Allows data scientists to deploy machine learning models, notebooks, dashboards, and applications directly from the Anaconda interface with minimal effort. Deploy as REST APIs: Expose models as RESTful APIs for integration with other applications and services. Interactive Applications: Deploy interactive data applications built with frameworks like Bokeh, Dash, or Flask. Hybrid Deployment Options: Support deploying in on-premises, cloud, or hybrid environments. Version Control: Track different versions of deployments for rollback and reproducibility. Automated CI/CD Pipelines: Integrate with continuous integration and continuous deployment tools to automate the deployment process. Monitoring and Logging: Built-in tools for monitoring the performance of deployed models and applications, with logging for troubleshooting. Security and Compliance # Role-Based Access Control (RBAC): Manage user permissions for deployment and access to resources. Secure Authentication: Integration with enterprise authentication systems like LDAP and Active Directory. Audit Trails: Maintain logs of deployment activities for compliance and auditing purposes. 
Consistent Environments: Replicate development environments in production to ensure consistency and reduce \u0026ldquo;it works on my machine\u0026rdquo; issues. Environment Snapshots: Create snapshots of environments that can be shared and deployed across teams. Policy Engine: Enables organizations to enforce security policies, such as blocking the installation of packages with known vulnerabilities. Vulnerability Metadata: Provides enriched data on package vulnerabilities and remediation steps. CVE and License Filtering: Allows filtering based on Common Vulnerabilities and Exposures (CVE) and software licenses to ensure compliance. Strengths and Weaknesses # Strengths # Extensive User Community: The open-source Anaconda Python Distribution and the conda package and environment manager are widely recognized and utilized by millions of data scientists. Comprehensive Tooling: Anaconda provides the open-source tools that expert data scientists require to deliver fundamental data science and machine learning engineering functionalities. Product-Led Growth: By offering a free version of its platform that is popular, well-maintained, and supported by various groups, Anaconda significantly lowers the barriers for new practitioners entering the field. 
Weaknesses # Limited Vision Scope: Anaconda lacks a vision for supplying ready-to-use capabilities for data science tasks, opting instead to rely heavily on open-source components.\nNarrow Target Personas: The platform primarily caters to expert data scientists, neglecting the needs of data engineers, MLOps engineers, and team leaders.\nInsufficient Differentiation: The commercial version of the platform doesn\u0026rsquo;t provide significant advantages to users, as package management and data science/machine learning libraries are also available from other vendors in the market.\nTotal Addressable Market (TAM) # The global data science platform market was valued at USD 103.93 billion in 2023 and is projected to grow from USD 133.12 billion in 2024 to USD 776.86 billion by 2032, exhibiting a CAGR of 24.7% over the forecast period.\nDrivers: Increased adoption of data-driven decision-making, advancements in artificial intelligence, and the proliferation of big data.\nTarget Segments:\nIndividual Data Scientists and Analysts: They claim to have reached 1 million organizations and 43 million users worldwide who require efficient data analysis tools. They continue to support and contribute to open-source projects to maintain strong community relations. They also host events, webinars, and hackathons to engage with developers and data scientists. Academic Institutions: Universities and research organizations integrating data science into curricula and research. They collaborate with academic institutions to incorporate Anaconda into teaching and research, fostering early adoption among future professionals. They offer educational discounts or free licenses to students and educators. Enterprises: Organizations in various industries, such as finance, healthcare, technology, and retail, that leverage data analytics. 
This article was originally published on Medium as part of the BoFOSS publication.\n","date":"17 October 2024","externalUrl":null,"permalink":"/posts/anacondas-business-model/","section":"Posts","summary":"","title":"Open-source Business Model at Anaconda","type":"posts"},{"content":"","externalUrl":null,"permalink":"/authors/","section":"Authors","summary":"","title":"Authors","type":"authors"},{"content":" Mandy Singh # Open Source Cloud Infra Product Manager\n📍 Vancouver, BC V6E 3Z8\n📧 [email protected]\n📱 289-952-6925\n🔗 LinkedIn | Website\nProfessional Summary # Senior product manager with five years\u0026rsquo; experience managing cloud infrastructure and dev tooling, with a strong focus on commercialization, pricing, and GTM. Grew a Kubernetes PaaS 30% YoY by improving platform reliability, developer experience, CI/CD integrations, and pricing, and by adding distribution channels. Led the turnaround of the most popular open-source artifact manager (Nexus Repo) - adding an AI-led scanning tool with a 40% price increase to double the topline. Started my career as a software developer, then found my passion for products as a startup founder - developed and launched a social recruiting platform used by 100K+ candidates.\nProfessional Experience # Plotly Technologies Inc. # Senior Platform Product Manager # Dec 2023 - Present # Remote\nOwned the roadmap for a Kubernetes-based PaaS, growing product revenue significantly through improved customer retention, pricing, packaging, and distribution channels (AWS, managed hosting). Led AI code assistant development allowing users to create data apps rapidly, resulting in a 15% increase in active users and a 28% increase in production apps. Rolled out new pricing and packaging, working closely with sales, finance, CS, and marketing teams. 
Improved platform stickiness by launching persistent storage for ephemeral pods, 1-click deploy, and out-of-the-box auto-scaling, achieving a 3x increase in developer engagement. Added deployment options like Managed Hosting (Plotly-managed VPC), halving TCO while drastically reducing time to install\nSonatype Inc. # Product Manager - Nexus Repository Firewall # Mar 2021 - Dec 2023 # Remote Doubled product revenue for Repository Firewall through TAM expansion and productizing an AI capability with a 40% price increase. Aligned the data science team with business goals, resulting in an industry-leading AI capability that caught over 100K malware packages. Responsible for the data platform roadmap, helping prioritize concepts for AI model improvements, retraining, data governance, APIs, and production SLOs leveraged by multiple products. Led product integration with non-cooperative competitor JFrog, doubling TAM and growing the sales pipeline by 35% by expanding into their user base. Launched high availability \u0026amp; disaster recovery, improved reliability, reduced MTTR by 40%, and increased the sales win rate by 28% for the customer segment with mission-critical business applications\nKribX Inc (Startup) # Product Manager # Apr 2020 - Dec 2020 Launched a robo-advisor for cryptocurrencies, mapping trading strategies to user personas and pricing tiers to achieve significant managed assets in the first year after launch. Propelled 3x growth in user activation through revamped onboarding and a GTM strategy based on prioritized experimentation\nExl Service Holdings Inc # Product Manager - Management Information Assistant # Jun 2018 - Jan 2020\nSecured board approval for a significant budget to develop the AI Assistant – MIA. Reported on strategy, roadmap, and PMF directly to the executive board. Developed a self-service conversational business intelligence tool from scratch for the banking and fintech verticals. 
Think Siri for enterprise reporting and dashboards. Developed the PRD working closely with six reference customers for discovery. Improved results accuracy by 15% by leveraging domain-specific semantic search to win deals against competitors like Tableau. Worked with the data science team on an NLP-to-SQL interpreter (seq2sql model) to achieve an 87% resolution rate and a 25% clickthrough rate on the search-as-you-type suggestions engine. Negotiated partnerships with downstream ETL technology vendors, reducing time to market by six months. Partnered with risk and regulatory consultants to add sales channels for a vertical-specific solution\nPsychd Analytics Pvt Ltd (Startup) # Founder # Jan 2015 - May 2017 Developed the MVP and raised angel investment from a Singapore-based VC. Conducted user research with HR professionals and established product-market fit in 9 months. Grew the social recruiting platform from ideation to release, reaching 100,000 candidates. Optimized the acquisition channel, reducing CAC by 40% using lookalike audiences\nEducation # Master of Business Administration in General Management # Xavier School of Management XLRI (Jan 2017 - Dec 2018)\nBachelor of Technology in Computer Science Engineering # Indraprastha University (Apr 2006 - May 2010)\nSkills \u0026amp; Expertise # Product Management # Design Thinking, First Principles, Pricing \u0026amp; Packaging Product Led Growth, Product Market Fit, Systems Thinking Go-to-Market Strategy, Commercialization, Revenue Optimization Tools \u0026amp; Platforms # Product Tools: Aha!, Figma, Gainsight PX, JIRA, Klue, Looker BI, Loom, Miro, Pendo, ProductBoard, Salesforce, SQL Cloud Infrastructure: AWS IaaS, Circle CI, Coder, Docker, GitOps, Grafana, Jenkins, Kubernetes, OAuth, Open Telemetry Developer Tooling: CLI Design, Discord, Eclipse, GitHub, Package Managers, Shell, VS Code Data Science: Databricks, Datadog, Data Lakehouse, Feature Engineering, Knime, LLMs Last updated: Feb 2025\n","externalUrl":null,"permalink":"/resume/","section":"Mandy 
Sidana","summary":"","title":"CV - Mandy Singh","type":"page"},{"content":"","externalUrl":null,"permalink":"/series/","section":"Series","summary":"","title":"Series","type":"series"}]