Deep Dive 2: Broadcom (AVGO) – The Technologist
Focus Ticker: AVGO
Other Tickers: NVDA, MRVL, INTC, AMD, VMW, AAPL, GOOGL, META, AMZN, MSFT
Lede
This is a deep dive into Broadcom (AVGO), the connectivity and infrastructure chipmaker at the center of the AI data center buildout.
The First Four Questions
What does Broadcom do?
Broadcom designs and manufactures a comprehensive range of semiconductor products—including network switches, NICs, storage controllers, and custom ASICs—that form the backbone of digital connectivity in data centers, consumer devices, and telecommunications networks.
How does Broadcom claim to be different?
Broadcom claims to be different by leveraging its vast IP portfolio, deep analog and mixed-signal design expertise, and an integrated hardware-software strategy that delivers reliable, high-performance connectivity solutions few competitors can match.
How could Broadcom succeed?
Broadcom is succeeding because it directly addresses the escalating demand for robust AI and connectivity infrastructure with proven, scalable solutions and strong financial performance, positioning it as an indispensable partner for hyperscalers and other digital innovators.
It has more success to come if these customers opt for more proprietary chip designs over generalized accelerators, potentially even shifting away from NVIDIA for certain applications.
How could Broadcom fail?
Broadcom could fail if it loses its competitive edge to the rising tide of in-house chip design and specialized competitors: hyperscalers might build their own proprietary silicon without Broadcom’s help, or decline to deploy custom chips at scale, eroding Broadcom’s market share and challenging its diversified business model.
Tell It to Me Like I’m 11 Years Old
What Broadcom Does: Broadcom is a company that makes computer chips that help different parts of electronics talk to each other.
Think of a computer or data center like a big city, and Broadcom’s chips are like the traffic controllers and highways in that city.
They don’t necessarily drive the cars (do the computing) – instead, they connect and direct the traffic so everything runs smoothly.
Core Technology in Simple Terms: Broadcom’s chips are used in many devices to move data around. For example:
• In a data center (the “cloud” where the internet lives), Broadcom chips help connect thousands of computers together so they can share information very fast. It’s like how highways connect different neighborhoods so people can travel between them.
• Inside a computer, Broadcom chips can connect the brain of the computer (the CPU) with memory and other components. Imagine the CPU as a teacher and the memory as a library – Broadcom’s chips are like the hallways and librarians that quickly fetch books (data) from the library and bring them to the teacher when needed.
• Between a CPU and a GPU (a special chip for graphics and AI), Broadcom chips act like messengers.
The CPU might need the GPU to do some heavy lifting (like solving a puzzle or drawing a picture), and Broadcom’s chips make sure the GPU gets the request and sends back the result quickly.
• For storage (like hard drives or SSDs where data is saved), Broadcom chips are the traffic cops that manage data flowing from storage to memory.
When you open a file (say a photo or a game), Broadcom’s technology helps transfer that file’s data from the storage to the CPU and memory so you can use it.
Handling Data Traffic: Just as roads have speed limits and lanes, Broadcom’s chips create data lanes with extremely high speed limits.
Data travels as tiny electric signals or flashes of light through wires or fibers. Broadcom’s technology makes sure these signals go to the right place without crashing or getting lost.
If too many signals try to go through one pathway, Broadcom’s chips can redirect some of them – similar to how a traffic light might control cars at a busy intersection to prevent jams.
In Summary: Broadcom builds the “infrastructure” chips that let all the smart parts of our digital world talk to each other quickly and reliably.
They aren’t the flashy robots or the super-brain computer themselves; they are the communication helpers that keep everything connected and running, much like school hallways, highways, and traffic controllers keep people and cars moving in a city.
Technical Deep Dive
Easy Button: Broadcom makes chips that help everything from data centers to home Wi-Fi and smartphones connect and share data super fast. Their products include high-speed network switches, NICs, storage controllers, wireless chips, and even custom chips built to a client’s specific needs.
Broadcom’s Semiconductor Product Portfolio
Broadcom is known for a broad range of semiconductor products that focus on connecting and communicating data. Their product offerings serve many markets: data center networking, broadband internet, wireless communications, storage, and industrial systems (Source) (Source). Below is a breakdown of Broadcom’s key semiconductor product categories:
• Network Switch Chips (Ethernet ASICs): These are high-performance chips like the Tomahawk and Jericho series that act as the central switches in data centers.
They connect hundreds or thousands of servers together in a network.
Think of these as giant switching hubs that direct data packets between servers.
For instance, Broadcom’s Tomahawk 5 switch can handle up to 51.2 terabits per second of data – that’s 51.2 trillion bits every second, enabling dozens or hundreds of high-speed links at once (Source).
These chips use fast serializer-deserializer (SerDes) technology to achieve such speeds, meaning they take data bits and pack them into super-fast serial signals and then unpack them, kind of like grouping many cars into a high-speed train on one track (Source). (A short sketch of how that aggregate bandwidth breaks down into lanes and ports follows this list.)
• Network Interface Controllers (NICs) and Adapters: Broadcom’s NIC chips go into server cards that plug into computers (or are built on the motherboard) to connect that computer to the network.
These handle data coming in and out of a server at high speeds (like 25Gb, 50Gb, 100Gb Ethernet, etc.). Many servers from companies like Dell, HPE, and even devices like the Raspberry Pi have used Broadcom’s NIC chips to provide reliable network connectivity (Source).
In simpler terms, if the network switch is a post office sorting center, the NIC is like the mailbox on each house – it formats and sends/receives the data to the main network. Broadcom’s NICs (such as the NetXtreme series) are widely used in enterprise and cloud data centers.
• Storage Connectivity & Controllers: Broadcom (through acquisitions of LSI and others) produces chips that manage data storage.
This includes RAID controllers, SAS/SATA controllers, and Fibre Channel adapters used in servers.
These chips ensure data can move between servers and storage devices (like big disk arrays or flash storage) quickly and with error-checking.
For example, Broadcom’s chips are behind the scenes when large databases retrieve information from storage – they handle the communication so that the CPU can get data from disks as fast as possible.
Broadcom also inherited Fibre Channel switch technology (from Brocade) which is used in storage area networks – specialized networks that connect servers to storage with extremely high reliability.
In essence, Broadcom covers both Ethernet (general networking) and Fibre Channel (storage networking) connectivity.
• Broadband and Wireless Chips: Historically, Broadcom is a leader in chips for Wi-Fi/Bluetooth connectivity (these are found in smartphones and laptops) and for broadband modems (like the chips in cable modems or GPON fiber internet devices).
For instance, many home Wi-Fi routers use Broadcom chipsets to deliver wireless internet. These chips involve radio-frequency design (to send data over the air or through cable lines). Although this is a bit outside data centers, it’s part of Broadcom’s broad connectivity portfolio.
(It’s worth noting that Apple iPhones have long used Broadcom’s Wi-Fi/Bluetooth combo chip, though Apple is reportedly working on its own version for the future).
• Custom Silicon and ASICs for Customers: Broadcom also has a business designing Application-Specific Integrated Circuits (ASICs) – custom chips – for large customers (especially cloud “hyperscalers”).
In this case, Broadcom engineers work closely with a client (like a Google or Meta) to develop a chip tailored to that client’s needs (for example, an AI accelerator or a unique networking chip). These custom chips leverage Broadcom’s existing IP blocks (like their SerDes, networking know-how, memory controllers, etc.) but are designed to the specifications of one customer’s particular workload. We’ll dive more into this below, as it’s a growing part of Broadcom’s strategy.
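For readers who like to see the arithmetic, the headline switch numbers above decompose in a simple way: aggregate bandwidth equals the number of SerDes lanes times the speed of each lane, or equivalently the number of front-panel ports times the port speed. The tiny sketch below works through that math for the 51.2 Tbps figure cited above; the 100 Gbps lane speed and the 400G/800G port widths are illustrative assumptions for intuition, not Broadcom specifications.

```python
# Back-of-the-envelope: how aggregate switch bandwidth decomposes into SerDes
# lanes and front-panel ports. Lane and port speeds are illustrative assumptions.

def lanes_needed(aggregate_tbps: float, lane_gbps: float) -> int:
    """Number of SerDes lanes required to carry the aggregate bandwidth."""
    return int(aggregate_tbps * 1000 / lane_gbps)

def ports_available(aggregate_tbps: float, port_gbps: float) -> int:
    """Number of front-panel ports of a given speed the chip could expose."""
    return int(aggregate_tbps * 1000 / port_gbps)

aggregate = 51.2  # Tbps, the Tomahawk 5 figure cited above
print(lanes_needed(aggregate, lane_gbps=100))     # 512 lanes at an assumed 100 Gbps each
print(ports_available(aggregate, port_gbps=800))  # 64 ports of 800G Ethernet
print(ports_available(aggregate, port_gbps=400))  # 128 ports of 400G Ethernet
```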
Integration and Reach: Broadcom’s semiconductors are often embedded deeply in systems and not always visible to end-users, but they are ubiquitous in infrastructure.
Common applications include “data center networking, home connectivity, broadband access, telecommunications equipment, smartphones, base stations, data center servers and storage,” and more (Source).
This means Broadcom chips might be helping deliver your YouTube video from a cloud server, enabling your home internet, and even connecting calls on a cellphone tower – a remarkably broad footprint in connectivity technology.
How Broadcom’s Chips Handle Data Traffic (Granular Level)
Easy Button: Broadcom’s chips work like super-fast train stations that sort and route tiny data packets by checking their addresses and sending them along high-speed lanes in just nanoseconds. They also act like smart post offices inside servers and storage systems, converting computer data into packets, checking for errors, and managing busy traffic so every bit of information reaches the right place efficiently.
• Inside a Data Center Switch (Ethernet switch ASIC): Picture a Broadcom Tomahawk switch chip as a massive train station for data.
Data arrives in packets (the fundamental units of data traffic) on incoming links. The Broadcom chip examines each packet’s address (like looking at a letter’s destination) and then swiftly forwards it out through the appropriate outgoing link that leads closer to the packet’s destination. This all happens within nanoseconds.
The chip has to manage dozens or even hundreds of these packet streams at the same time. It uses an internal switching fabric and on-chip memory buffers to queue packets if multiple packets try to go out of the same port at once (to avoid collisions).
Broadcom’s high-end switches can handle enormous throughput – for example, Tomahawk 4 can move 25.6 Tbps and Tomahawk 5 up to 51.2 Tbps of aggregate data (Source).
At those speeds, signals on each port might be 100 Gbps or faster; Broadcom uses advanced SerDes technology to serialize data (convert parallel data bits into a single stream) and send it over high-speed lanes.
Each lane is like an individual highway for bits, and Broadcom chips might pack hundreds of lanes (Tomahawk 3 had 256 SerDes lanes) to achieve the total bandwidth (Source). Handling such high data rates requires careful engineering: the chip must maintain signal integrity (so bits don’t get garbled at high frequency), manage heat (moving trillions of bits creates thermal energy), and decide packet paths, all in real time.
The granular result is that a Broadcom switch chip can take a torrent of incoming data and effectively sort and redirect each bit to where it needs to go, much like a post office sorting billions of letters per second, with minimal delay. (A toy code sketch of this lookup-and-queue step appears after this list.)
• In a Server’s Data Path (NIC and PCIe switching): When data needs to travel from a server’s memory/CPU to the network (or vice versa), the NIC (Network Interface Card) chip is the gatekeeper.
Broadcom’s NIC chips implement the Ethernet protocols and convert data from the computer’s native format (e.g., PCI Express from the CPU) into network packets and back. At a granular level, suppose the CPU wants to send a chunk of data to another server: it hands that data to the NIC over the internal PCIe bus.
Broadcom’s NIC will packetize that data, add necessary headers (like destination address), and send it out over the cable as an electrical or optical signal.
When receiving, the NIC does the reverse: it takes the incoming signal, converts it into bytes, and places it into the server’s memory for the CPU to use.
This involves managing checksums (to verify data isn’t corrupted in transit), handling interrupts (to alert the CPU when data has arrived), and often implementing offloads (the NIC might handle tasks like aggregation or security encryption so the CPU doesn’t have to).
Broadcom has also developed data processing units (DPUs) like the Stingray adapter, which include extra processors to handle networking and storage tasks on the card (useful in cloud computing to take load off the main CPU).
Additionally, Broadcom’s acquisition of PLX Technology means they supply PCIe switch chips – imagine an internal network inside a server connecting CPUs, GPUs, and storage devices over PCI Express.
These chips function like mini-switches for PCIe lanes, allowing, for example, multiple GPUs to communicate with multiple CPUs efficiently.
At the electrical level, both NICs and PCIe switches must deal with high-speed signals (often 16 GT/s or more per lane in PCIe Gen4/5) – they use equalizers and signal amplifiers internally to maintain the integrity of these pulses of electricity over circuit board traces (a direct application of electromagnetism to ensure the waveform representing a 1 or 0 is still readable after traveling through a wire).
• Between Storage and Compute: Broadcom’s storage controller chips work granularly by speaking the language of storage protocols (like SATA, SAS, or NVMe) on one side and the system’s interconnect (PCIe or others) on the other.
Consider a RAID controller chip in a server (many Broadcom chips are sold under the LSI/Avago MegaRAID brand). If a CPU needs to read data from multiple drives, it sends a request to the RAID controller.
The Broadcom chip then issues the proper commands to each storage drive (e.g., “Drive 1, give me block X; Drive 2, give me block Y”), waits for the drives to return the data, possibly computes parity or error-correction if it’s a RAID setup, and then forwards the combined data up to the CPU.
It manages caches to temporarily hold data, uses error-correcting codes to ensure data integrity, and has to handle differences in speed (some drives are slower, some faster – the controller smooths this out so the CPU isn’t kept waiting too long).
In Fibre Channel networks (commonly used in enterprise storage), Broadcom’s switch or adapter chips take data from servers and send it over optical fiber to storage arrays, using the Fibre Channel protocol, which is tuned for low-latency, lossless delivery.
These chips ensure that even if you have thousands of servers all accessing a pool of disks, the data traffic is coordinated – much like a well-rehearsed orchestra – so that every server gets the data it asked for and the storage isn’t overwhelmed by requests. (A tiny example of the parity math mentioned above also appears after this list.)
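To make the “train station” analogy from the first bullet concrete, here is a toy sketch of the lookup-and-queue step a switch performs: read the packet’s destination, consult a forwarding table, and buffer the packet on the chosen output port if that port is busy. This is purely illustrative; the table entries and port names are invented, and real switch ASICs do this in fixed-function hardware pipelines in nanoseconds rather than in software.

```python
from collections import deque

# Toy model of a switch's forwarding step: look up the destination, pick an
# output port, and queue the packet if the port is busy. Names are invented.

forwarding_table = {
    "10.0.1.0/24": "port_1",  # destinations reachable via port 1
    "10.0.2.0/24": "port_2",  # destinations reachable via port 2
}
output_queues = {"port_1": deque(), "port_2": deque()}

def route(dst_ip: str) -> str:
    """Crude stand-in for longest-prefix match: match on the first three octets."""
    prefix = ".".join(dst_ip.split(".")[:3]) + ".0/24"
    return forwarding_table.get(prefix, "drop")

def forward(packet: dict) -> None:
    port = route(packet["dst"])
    if port != "drop":
        output_queues[port].append(packet)  # buffer until the port can transmit

forward({"dst": "10.0.2.17", "payload": b"..."})
print(len(output_queues["port_2"]))  # 1 packet now queued on port_2
```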
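The storage bullet above mentions that a RAID controller “computes parity” so data survives a drive failure. The essence of that parity is just XOR arithmetic, shown in miniature below; real controllers add striping, caching, and hardware acceleration, so treat this only as an illustration of the underlying idea.

```python
# RAID-style parity in miniature: the parity block is the XOR of the data
# blocks, so any single lost block can be rebuilt from the survivors.

def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

data_blocks = [b"AAAA", b"BBBB", b"CCCC"]  # blocks striped across three drives
parity = xor_blocks(data_blocks)           # written to a fourth drive

# Simulate losing drive 2 and rebuilding its block from the others plus parity.
rebuilt = xor_blocks([data_blocks[0], data_blocks[2], parity])
assert rebuilt == data_blocks[1]           # the lost block is recovered
```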
Applied Physics & Electromagnetism in Broadcom’s Chips
Easy Button: Broadcom’s chips use advanced physics to boost, clean up, and direct super-fast electrical signals—much like tuning a radio so the music stays clear. They also turn electrical signals into light for fiber-optic communication, ensuring data zooms through without interference or overheating.
It’s worth noting that at the core of all these data-moving tasks is serious physics. Broadcom’s engineers leverage electromagnetic principles for high-speed communication:
• At high frequencies, electrical signals on a circuit board trace can behave more like a radio wave (with issues like reflections, interference, and skin effect losses) than a simple DC current.
Broadcom’s chips include analog circuitry to manage these – such as signal equalizers (to amplify certain frequencies that get weakened) and echo cancellers (to mitigate reflections on a line).
For example, those SerDes channels in a Tomahawk switch operate at multi-Gigahertz frequencies; the chip must send out a clean, strong signal and also be able to interpret a very fast incoming signal despite noise.
This relies on electromagnetic waveguiding and filtering techniques at the transistor level. Broadcom’s long history in analog and mixed-signal design (dating back to its origins in HP and Avago) gives it an edge here, because sending a 50 Gbps signal down a copper cable or trace requires fine-tuned analog engineering. (A small equalization sketch follows this list.)
• Some Broadcom chips use optical technology as well (Broadcom, through Avago, is a major provider of optical components). For extremely high data rates over distance, electrical signals are converted to light.
Broadcom manufactures things like laser diodes and photodetectors in fiber-optic transceivers. The physics here involves semiconductor lasers (to generate coherent light) and fiber propagation.
Optical signals aren’t subject to the same resistance and capacitance issues as copper, so they can go longer distances at high speed. Broadcom’s optical products complement their electrical chips – for instance, a Broadcom switch ASIC might feed its data into an optical module (often also using Broadcom tech) to send data between data center racks via fiber cables.
• Electromagnetism also influences power delivery and heat in Broadcom’s chips. These chips have billions of transistors switching on and off, which generates heat and draws large currents.
The design has to ensure currents don’t create magnetic interference and that heat is dissipated.
They use materials with specific dielectric properties in the chip interconnect layers to manage capacitance and inductance.
Simply put, Broadcom’s ability to push more bits through a chip year after year comes down to mastering the physics of smaller transistors (thanks to Moore’s Law) and mastering the electrical pathways so that signals remain clean at higher and higher speeds.
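The equalization idea mentioned above (boosting the frequency content that a lossy channel attenuates, so the receiver can still tell a 1 from a 0) is often implemented as a short feed-forward filter. The sketch below passes a bit pattern through a crude low-pass “channel” and then applies a 3-tap equalizer; the channel model and tap weights are illustrative assumptions, not Broadcom design values.

```python
import numpy as np

# Minimal feed-forward equalization sketch: a lossy channel smears sharp bit
# transitions; a short FIR filter with negative side taps sharpens them back.

bits = np.array([0, 1, 1, 0, 1, 0, 0, 1], dtype=float)
signal = np.repeat(bits, 8)                 # 8 samples per bit

channel = np.ones(6) / 6.0                  # crude low-pass channel (assumed)
received = np.convolve(signal, channel, mode="same")

ffe_taps = np.array([-0.25, 1.5, -0.25])    # 3-tap equalizer (assumed weights)
equalized = np.convolve(received, ffe_taps, mode="same")

# The equalized waveform should show sharper transitions than the received one,
# i.e. a more "open eye" at the sampling instants.
print(np.round(received[4:20], 2))
print(np.round(equalized[4:20], 2))
```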
Custom Chips for Hyperscalers (Google, Meta, etc.)
Easy Button: Broadcom teams up with tech giants like Google and Meta to create custom, high-speed chips that power specialized tasks like AI processing. They help design and manufacture unique silicon—like the parts that make Google’s TPU work—ensuring that these companies’ massive data centers run smoothly.
In recent years, Broadcom has been a behind-the-scenes partner for hyperscale cloud companies (the tech giants) to develop custom chips.
These chips are often unique to the customer and are not sold broadly. Broadcom’s role can range from providing specific technology components to full co-development:
• Google’s Tensor Processing Unit (TPU): Google designed a series of AI accelerator chips called TPUs to power its machine learning needs.
While Google gets credit for the TPU, it was reported that Broadcom played a key role in making those chips a reality (Source). Broadcom provided critical interface technology – for example, the SerDes interfaces that allow TPUs to communicate rapidly with other chips and systems (Source).
Additionally, Broadcom helped translate Google’s chip designs into manufacturable silicon, acting as a custom ASIC manufacturer.
Essentially, Google came up with the “blueprint” for the TPU, and Broadcom’s engineering helped turn that into the physical chip, leveraging Broadcom’s expertise in high-speed I/O and chip fabrication processes.
This partnership is lucrative for Broadcom but also kept somewhat quiet. (Interestingly, news surfaced that Google, concerned about costs, was considering designing future iterations without Broadcom by 2027 to save money (Source) – we’ll revisit that in Competitive Analysis – but for now Broadcom continues to be Google’s partner.)
• Meta and Other Hyperscalers: Similar to Google, other big players like Meta (Facebook’s parent), Amazon, Microsoft, and even some non-US giants like ByteDance have huge demands for custom silicon.
Broadcom’s CEO Hock Tan indicated that three existing hyperscalers are developing their own custom AI accelerators (“XPUs”) with Broadcom’s involvement, and two additional hyperscalers have engaged Broadcom for next-generation AI chip development (Source) (Source).
In fact, a recent analysis suggests Broadcom now has five major hyperscaler customers for custom AI chips: reportedly Google, Meta, and ByteDance, with the two newest engagements speculated in the media to be OpenAI and Apple (Source).
While OpenAI and Apple haven’t been confirmed by the company publicly, Hock Tan’s statements about two new customers (Source) align with such possibilities, as OpenAI (with its need for AI model training compute) and Apple (with a need for specialized silicon, perhaps in AR/AI or on-device machine learning) would both benefit from Broadcom’s custom chip capabilities.
• What these Custom Chips Do: The custom chips in question are often AI accelerators or advanced networking chips.
Easy Button: Broadcom builds custom chips called XPUs that mix high-speed networking with specialized computing, making them perfect for running advanced AI algorithms. These chips can work together in huge clusters—up to a million of them—powering the massive AI supercomputers needed by tech giants.
For example, a hyperscaler might want a chip specialized for running AI algorithms faster or connecting AI processors together in a new way.
Broadcom refers to these generally as “XPUs”, meaning any kind of custom processing unit beyond a traditional CPU/GPU. Broadcom’s value is that it can integrate multiple functions on one chip: high-speed networking, specialty compute engines, memory controllers, etc., using its library of silicon intellectual property.
A concrete example: if OpenAI wanted a custom chip to accelerate GPT-type models, they could contract Broadcom to design an ASIC that’s optimized for transformer calculations and network it with many others.
Broadcom would use its proven building blocks (like the same SerDes used in Tomahawk switches) to ensure these chips can be linked in huge clusters.
Hock Tan even mentioned that by around 2027, certain hyperscalers plan to deploy clusters with 1 million of these custom AI chips connected in a fabric (Source) – an astonishing scale.
Broadcom aims to supply both the chips and the networking gear for these massive AI supercomputers.
• Physics/Engineering Challenges & Broadcom’s Edge: Designing custom chips at the bleeding edge involves significant physics challenges, which plays to Broadcom’s strengths.
For instance, making an AI chip isn’t just about the compute cores – it’s also about feeding those cores with data. Broadcom can incorporate ultra-fast communication links (again, SerDes lanes, networking routers on chip) so that thousands of chips can work in parallel.
This is where those electromagnetic considerations come back; Broadcom’s long experience with high-frequency signal design is a service they bring to hyperscalers.
By collaborating, the hyperscaler gets a cutting-edge chip without reinventing the wheel on every component, and Broadcom secures a customer for large volumes of a chip that only it builds for that customer.
Applied Physics and Electromagnetism Influences
Easy Button: Broadcom chips rely on advanced electromagnetism principles to keep super-fast electric and optical signals clear and accurate, even when they’re zooming along tiny wires or fibers. Engineers fine-tune these chips using techniques like impedance matching and signal equalization to prevent noise and signal loss, ensuring every bit of data gets through correctly.
We touched on physics in earlier subsections, but to explicitly tie it together: Broadcom’s chip functions are governed by principles of electromagnetism and advanced physics, which influence their design:
• High-Speed Signaling: Every Broadcom chip that deals with communications (which is most of them) uses electric signals over copper or optical signals over fiber.
Easy Button: Broadcom chips send data as super-fast electrical or optical signals, and engineers use techniques like impedance matching and signal equalization to keep these signals clear and accurate even at multi-gigabit speeds.
At the multi-gigabit speeds these chips operate, signals can distort. Engineers must account for impedance matching (so signals don’t reflect at junctions), capacitive and inductive effects (wires next to each other on a board can behave like tiny capacitors or inductors, causing cross-talk or delay), and electromagnetic interference (fast switching can emit radio noise).
For example, the SerDes circuits in a Broadcom switch chip use analog techniques to shape the output waveform – often pre-emphasizing certain frequencies – and to equalize the input.
This compensates for the frequency-dependent loss of circuit board traces or cables.
Essentially, Broadcom chips implement physical-layer algorithms that apply signal processing to counteract the physics of the channel.
The Tomahawk 3/4 spec that mentions hundreds of SerDes lanes and port speeds up to 400 Gbit/s per port implies significant electromagnetics work to maintain signal fidelity (Source).
• Thermal and Power Considerations: Broadcom’s high-performance chips can have power consumptions in the range of tens or even hundreds of watts.
Easy Button: Broadcom designs their chips with smart cooling and power grids to spread out electricity and prevent overheating, ensuring the chip stays reliable even when packed with tons of tiny transistors.
The distribution of current across the chip, the power delivery network, and the removal of heat is critically important (as per basic thermodynamics and electronics).
The chips are often made on cutting-edge processes (e.g., 5nm or 7nm TSMC nodes for the latest ones), which means transistor densities are extremely high.
Broadcom employs copper power grids in the chip to spread current and heatsink attachment at the package level to dissipate heat.
The electromigration effect (where high current density can literally move metal atoms over time) must be mitigated by design – they have to ensure no part of the chip’s wiring carries more current than it can handle on average.
All these are physics constraints that Broadcom’s designers work within to ensure reliability over years of operation. (A rough power estimate using the standard dynamic-power formula follows this list.)
• Optical Components: In optical transceivers (like those used for 100G/400G Ethernet links between switches), Broadcom lasers convert electrical signals to light.
The quantum physics of semiconductors comes into play here – Broadcom’s lasers (often DFB lasers or VCSELs) have to be manufactured with precise materials (III-V semiconductors) to emit at the desired wavelength.
The photodiodes to detect light work on the principle of the photoelectric effect, generating current when photons hit.
Broadcom, coming from the old Avago (which was originally part of HP/Agilent focusing on LEDs and lasers), has deep expertise here.
While the digital data is abstract, ultimately it’s photons and electrons moving according to physical laws that make Broadcom’s connectivity possible.
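The thermal point above can be made concrete with the standard dynamic-power relation for CMOS logic, P ≈ α · C · V² · f (activity factor times switched capacitance times supply voltage squared times clock frequency). The inputs below are purely illustrative assumptions, chosen only to show why a big networking or accelerator die naturally lands in the tens of watts before leakage and I/O power are even counted.

```python
# Rough dynamic-power estimate using P = alpha * C * V^2 * f.
# Every input is an illustrative assumption, not a figure for any Broadcom part.

def dynamic_power_watts(alpha, c_farads, v_volts, f_hz):
    return alpha * c_farads * v_volts ** 2 * f_hz

power = dynamic_power_watts(
    alpha=0.15,       # fraction of capacitance switching each cycle (assumed)
    c_farads=300e-9,  # total switched capacitance, 300 nF (assumed)
    v_volts=0.8,      # core supply voltage (assumed)
    f_hz=1.5e9,       # clock frequency, 1.5 GHz (assumed)
)
print(f"{power:.0f} W")  # roughly 43 W of dynamic power under these assumptions
```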
Financial Analysis
Latest Earnings and Performance Highlights
Broadcom’s financial performance has been strong, driven by the surge in demand for its chips (especially related to AI) and the inclusion of its large software acquisition (VMware). Let’s review the latest results:
• Revenue: In the most recent quarter (Q4 of fiscal year 2024), Broadcom reported revenue of $14.05 billion, which was a 51% increase year-over-year (from about $9.3 billion in the same quarter last year) (Source). This massive jump was partly organic growth and partly due to adding VMware’s software revenue.
For the full fiscal year 2024, Broadcom’s net revenue was a record $51.6 billion, up 44% from the prior year (Source).
This is a huge leap in annual sales, reflecting how significant the VMware acquisition and AI-driven chip demand have been.
To put it in perspective, in fiscal 2023 (before VMware was included), Broadcom’s revenue was $35.8 billion (Source) (Source), so Broadcom’s top line has accelerated dramatically.
• Segment Breakdown: Broadcom now reports two segments – Semiconductor Solutions and Infrastructure Software.
In FY2024, semiconductor revenue was about $30.1 billion and software was about $21.5 billion (Source). Thus roughly 58% of revenue came from semiconductors and 42% from software, consistent with Broadcom’s strategy of balancing hardware with enterprise software (Source) (Source).
VMware (infrastructure software) contributed significantly – Broadcom noted VMware added about $3.85B in revenue in the latest quarter alone, and roughly $13.8B for the full year (which aligns with VMware’s prior annual run-rate) (Source).
On the semiconductor side, one standout is AI-related chip sales: Broadcom disclosed that AI-driven semiconductor revenue reached $12.2 billion in FY2024, a 220% increase year-over-year (Source) (Source).
In other words, roughly 40% of Broadcom’s semiconductor revenue was tied to AI applications – showing how crucial the AI trend has been for Broadcom’s growth. (A quick arithmetic pass over these figures follows this list.)
• Profitability: On a GAAP (official accounting) basis, Broadcom’s profitability in FY2024 was impacted by large acquisition-related costs (especially from VMware).
GAAP net income for FY2024 was $5.9 billion (Source) down significantly from $14.1 billion in FY2023 (Source).
This drop doesn’t indicate that the business became less profitable operationally; rather, it’s due to amortization of intangible assets and one-time charges from the VMware deal.
In fact, GAAP gross margin fell to 63% from 69% in the prior year, mainly because of amortizing purchased IP from VMware (Source). GAAP operating income also dipped ($13.5B in 2024 vs. $16.2B in 2023) (Source), again due to those accounting charges.
However, cash flow and non-GAAP earnings tell a different story: Broadcom’s free cash flow for FY2024 was $19.4 billion (Source), which is extremely robust and higher than the GAAP profit. This is because many expenses on GAAP books (like amortization) are non-cash.
On a non-GAAP basis (which excludes most acquisition-related costs), Broadcom’s earnings remain very strong.
For example, in FY2023 Broadcom had ~$18.4B in non-GAAP net income (Source) (Source), and we can infer that FY2024 non-GAAP net income was significantly higher given the revenue jump – likely on the order of the mid-$20B range. The company hasn’t publicly stated a full-year non-GAAP figure, but Q4 non-GAAP net income was $4.8B (Source), and presumably each quarter of FY24 was around that ballpark or higher with growth.
• Margins: Broadcom’s adjusted EBITDA margin is impressive. In FY2024, adjusted EBITDA was $31.9B, which is about 62% of revenue (Source) (Source).
Even in Q4, they mentioned free cash flow was 39% of revenue (Source) for that quarter, reflecting a high cash conversion.
After acquiring VMware, Broadcom aggressively cut costs in the software segment – CEO Hock Tan noted VMware’s operating expenses were cut in half (from $2.4B to $1.2B quarterly) and its margins boosted from ~30% to 70% (Source). This “Broadcom playbook” of maximizing margins has kept overall profitability high.
Gross margins on hardware remain high (above 70% on a non-GAAP basis for semiconductors) and software gross margins are very high as well (enterprise software tends to have ~90% gross margin).
The integrated company’s GAAP gross margin (63%) is lower due to amortization, but underlying margin structure is strong.
• Dividends and Buybacks: Broadcom is known for a generous dividend policy.
The company recently raised its quarterly dividend by 11% (post-split) to $0.59 per share (Source). Before the split, they were paying $5.40 per share per quarter (which is $0.54 post-split).
So $0.59 represents an annual dividend of $2.36 per share post-split. In total, $9.8 billion in dividends were paid in FY2024 (Source) – notably, this exceeds GAAP net income, but is covered by the strong free cash flow.
Broadcom has prioritized returning cash to shareholders; historically, they have also done share buybacks opportunistically, although none were highlighted in the latest year due to focusing on the VMware acquisition integration.
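As promised above, here is a quick arithmetic pass over the reported FY2024 figures. The dollar inputs are the numbers cited in this section; the percentages are simply derived from them, so this sketch is a sanity check rather than new information.

```python
# Derived ratios from the FY2024 figures cited above (inputs are the reported
# numbers; the outputs are simple arithmetic).

total_revenue  = 51.6   # $B, FY2024 net revenue
semis_revenue  = 30.1   # $B, Semiconductor Solutions segment
software_rev   = 21.5   # $B, Infrastructure Software segment
ai_revenue     = 12.2   # $B, AI-related semiconductor revenue
free_cash_flow = 19.4   # $B
dividends_paid = 9.8    # $B
quarterly_div  = 0.59   # $ per share, post-split

print(f"Semis share of revenue:     {semis_revenue / total_revenue:.0%}")    # ~58%
print(f"Software share of revenue:  {software_rev / total_revenue:.0%}")     # ~42%
print(f"AI share of semis revenue:  {ai_revenue / semis_revenue:.0%}")       # ~41%
print(f"Dividends / free cash flow: {dividends_paid / free_cash_flow:.0%}")  # ~51%
print(f"Annualized dividend/share:  ${quarterly_div * 4:.2f}")               # $2.36
```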
Valuation Metrics (Market Cap, P/E, P/S, Growth Trends)
We start with valuation metrics from CML Pro:
And we move to consensus growth estimates:
• Revenue Growth Trends: Broadcom’s recent growth was exceptional (44% in FY24) due to one-time factors (VMware addition, surge in AI hardware demand).
Prior to that, Broadcom was growing at single-digit percentages – for example, 8% in FY2023 (Source) (Source), as some segments like enterprise storage had slowed, partially offset by early AI/networking growth.
Looking forward, consensus estimates predict solid growth ahead, though not as high as 44%. A recent Wall Street consensus was for about 19% revenue growth in FY2025 (Source).
That would imply FY2025 revenue around $61–62 billion. This makes sense because it would include a full year of VMware (vs. 11+ months in FY24) plus growth in semiconductor sales (particularly AI-related).
Beyond FY2025, growth may moderate; early analyst forecasts suggest around low double-digit percentage annual growth (~12-13% per year) in the next several years (Source).
This factors in continued AI momentum but also the fact that some parts of Broadcom (like traditional enterprise networking or broadband) are mature markets.
It also assumes no huge new acquisitions on the immediate horizon (Broadcom had a pattern of boosting growth via acquisitions, but after VMware, they might digest for a bit).
• Net Income and Margins Outlook: As the VMware amortization and restructuring costs taper off in coming years, Broadcom’s GAAP net income should rise sharply (all else equal).
The company even noted that it expects Adjusted EBITDA to be ~66% of revenue in upcoming quarters (Source), showing confidence in margins.
If Broadcom hits ~$60B revenue in FY25 with ~66% EBITDA margin, and interest/taxes are moderate, GAAP net could bounce back into the teens of billions (depending on amortization).
The implication is that Broadcom’s true earning power is higher than the current GAAP $5.9B net indicates – which the market recognizes given the stock valuation. We might eventually see GAAP EPS better align with cash flow once those accounting expenses normalize in a few years.
For example, Broadcom’s non-GAAP net income margin in FY2023 was ~51% ($18.4B on $35.8B) (Source) (Source). If they sustain ~50% net margin (non-GAAP) on $60B, that’s $30B underlying net. GAAP will trail that for a while, but eventually a lot of the VMware intangible amortization will have been expensed. (A back-of-the-envelope version of this math follows below.)
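As noted, here is a back-of-the-envelope version of the growth and margin math discussed above. The growth rate, EBITDA margin, and net-margin inputs are the consensus and company figures quoted in this subsection, treated as simple point estimates; actual results will of course differ.

```python
# Rough FY2025 projection using the figures quoted above. Point estimates only.

fy24_revenue     = 51.6   # $B, reported FY2024 revenue
consensus_growth = 0.19   # ~19% consensus revenue growth for FY2025
ebitda_margin    = 0.66   # ~66% adjusted EBITDA margin indicated by the company
net_margin_ngaap = 0.50   # ~50% non-GAAP net margin, in line with FY2023's ~51%

fy25_revenue = fy24_revenue * (1 + consensus_growth)
print(f"Implied FY2025 revenue:       ${fy25_revenue:.1f}B")                     # ~$61.4B
print(f"Implied adjusted EBITDA:      ${fy25_revenue * ebitda_margin:.1f}B")     # ~$40.5B
print(f"Implied non-GAAP net (rough): ${fy25_revenue * net_margin_ngaap:.1f}B")  # ~$30.7B
```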
Competitive Analysis
Broadcom operates in highly competitive markets, and its competitors vary by product line. Here we’ll compare Broadcom with two prominent peers, NVIDIA (NVDA) and Marvell Technology (MRVL), and also discuss emerging and private competitors.
Broadcom vs. NVIDIA
Different Strengths (AI Brains vs AI Plumbing): NVIDIA and Broadcom have both become pivotal in the AI era, but their roles differ.
NVIDIA is known for its GPUs (Graphics Processing Units), which have become the de-facto “brains” for AI computation.
Broadcom is more about the “plumbing” and infrastructure – networking chips, switches, connectivity, and custom solutions that tie large systems together.
In an AI data center, you might find hundreds or thousands of NVIDIA GPUs doing the heavy compute, and Broadcom’s switches and NICs moving data between those GPUs and to storage.
Market Dynamics: NVIDIA’s latest financial results highlight its dominance in AI computing: in FY2024, NVIDIA’s revenue soared to $60.9 billion (up 126% YoY) and it earned an astounding $32 billion in net income (Source) (Source). This reflects unprecedented demand for its AI chips (like the A100 and H100 GPUs).
Broadcom, meanwhile, with $51.6B revenue and $5.9B GAAP net (or higher adjusted net), actually had lower revenue than NVIDIA in the last fiscal year but a comparable market cap. Why? Investors see Broadcom’s revenue as having a large stable component (software) plus exposure to AI.
NVIDIA is more purely “AI play” and thus has higher growth but also more volatility (if AI demand swings).
Overlap and Competition: There are areas where Broadcom and NVIDIA compete directly:
Easy Button: Broadcom and NVIDIA compete in high-performance networking, with NVIDIA using proprietary InfiniBand to tightly integrate with GPUs for AI supercomputers while Broadcom champions open, cost-effective Ethernet for cloud data centers.
• Networking:
Easy Button: NVIDIA, with its Mellanox acquisition, pushes proprietary InfiniBand for ultra-low latency GPU connections in AI supercomputers, while Broadcom champions open-standard, cost-effective Ethernet for broader cloud networking. Hyperscalers mostly lean toward Ethernet, favoring Broadcom’s approach, though NVIDIA strives to differentiate by tightly coupling its GPUs with its networking solutions for peak performance.
NVIDIA acquired Mellanox in 2020, giving it a strong position in high-performance networking (InfiniBand and high-end Ethernet NICs).
In AI supercomputers, NVIDIA’s InfiniBand is often used to connect GPU servers because of its low latency and advanced features. Broadcom, on the other hand, promotes Ethernet-based networks (with their switch chips and NICs) for AI clusters.
Broadcom’s Ethernet solutions are more open-standard and widely used in general cloud data centers, whereas InfiniBand is a proprietary standard favored in certain HPC and AI setups.
This effectively pits Broadcom against NVIDIA in the networking segment: for example, if a cloud builder chooses to use Ethernet for an AI cluster (with Broadcom’s latest 51.2Tb switches and possibly RoCE – RDMA over Converged Ethernet – NICs), that’s a win for Broadcom; if they choose InfiniBand (with NVIDIA’s switches and NICs from Mellanox), that’s a win for NVIDIA.
The trend has been many hyperscalers leaning into Ethernet because it’s cheaper and more standard, which benefits Broadcom. But NVIDIA is trying to differentiate by tightly coupling their GPUs with their networking for maximum performance.
It’s a classic proprietary vs. open competition.
• DPUs (Smart NICs): Both companies see a future where specialized network/compute combo chips (Data Processing Units) handle tasks like security, virtualization, and storage offload. NVIDIA’s BlueField DPU and Broadcom’s Stingray DPU are competitors.
These devices combine networking with some processing cores (often ARM-based). So far, NVIDIA has marketed BlueField alongside its GPUs (for AI data centers needing fast data movement and security), while Broadcom has leveraged its traditional OEM relationships to push Stingray in enterprise/cloud networking gear. It’s a smaller-scale battle but important in cloud architecture.
• Custom AI Chips vs GPUs: While NVIDIA sells the same GPUs to everyone, Broadcom is enabling custom AI chips for specific big players (as discussed earlier).
In a sense, Broadcom is empowering potential NVIDIA competitors. For example, Google’s TPUs (enabled by Broadcom) reduce Google’s need to buy NVIDIA GPUs (Source) (Source).
If Meta or others succeed with custom accelerators via Broadcom’s help, that could dent NVIDIA’s future sales.
So indirectly, Broadcom and NVIDIA are vying for how AI workloads will be implemented: general GPUs (NVIDIA’s forte) or custom ASICs (Broadcom’s new niche).
That said, currently NVIDIA is far ahead in deployment and ecosystem (software support like CUDA), and Broadcom’s custom chips are more behind-the-scenes for specific users.
Coexistence
It’s also true that Broadcom and NVIDIA often coexist in many environments.
A large cloud provider might use Broadcom switches to connect NVIDIA GPUs – so one doesn’t entirely replace the other.
In fact, Broadcom benefits when NVIDIA sells more GPUs, because more GPUs means more networking needed to wire them together (NVIDIA would prefer you buy their networking, but the capacity of the market is so large that Broadcom is getting design wins too).
During 2024, Broadcom’s CEO called their revenue surge a “Nvidia moment” for Broadcom as well (Source) – implying that Broadcom’s fortunes rose in tandem with NVIDIA’s, thanks to AI.
Summary vs NVIDIA: NVIDIA is a competitor in networking and the de-facto competitor to the custom AI chips Broadcom helps build, but NVIDIA is not competing in Broadcom’s broad connectivity portfolio outside of data centers (e.g., Broadcom’s Wi-Fi, storage controllers, etc., have different competitors). NVIDIA is more singularly focused on AI compute.
Broadcom is more diversified.
If AI demand continues to explode, both can thrive, but if, say, every cloud ends up designing its own AI chips (with Broadcom’s help) and reducing GPU purchases, Broadcom could win business while NVIDIA loses – that’s a longer-term competitive question.
Broadcom vs. Marvell Technology
Easy Button: Broadcom dominates the switch ASIC market and offers a wider portfolio in networking, storage, and custom chips, while Marvell, operating on a smaller scale, is making selective inroads and courting hyperscalers for custom AI accelerator opportunities. Both companies compete in enterprise networking, 5G infrastructure, and storage, with Marvell seen as a potential alternative if tech giants shift away from Broadcom.
Similar Domain, Different Scale: Marvell is a semiconductor company that, like Broadcom, focuses on data center and networking/storage solutions. In many ways, Marvell’s product lines mirror Broadcom’s on a smaller scale:
• Marvell makes switch chips (through its acquisition of Innovium, Marvell has 12.8 Tbps switch ASICs – comparable to Broadcom’s Tomahawk 3 generation).
However, Broadcom’s switch ASIC market share is dominant (~80%+), whereas Marvell is trying to break in with select wins.
• Marvell makes embedded processors and DPUs (via its earlier acquisition of Cavium – Marvell has the Octeon processor line, and it’s repurposed some tech for DPUs).
Broadcom also has processors (e.g., Broadcom has some ARM-based communications processors).
• Both companies serve enterprise networking, 5G infrastructure (Marvell provides 5G ASICs and Broadcom provides some RF components), and storage (Marvell is big in SSD controllers and HDD controllers; Broadcom in RAID and SAS controllers).
• Marvell is also pursuing custom silicon for cloud – Marvell has explicitly stated they are working on custom AI accelerators for at least one hyperscaler. Notably, the report of Google considering ditching Broadcom mentioned Google courting Marvell as an alternative supplier for TPU chips (Source). This suggests Marvell is viewed as the next-best alternative to Broadcom for custom ASIC partnership.
Financial and Size Comparison: Broadcom absolutely dwarfs Marvell financially. Broadcom’s revenue ($51.6B) is almost 10 times Marvell’s (~$5.5B in FY2024) (Source). Marvell, in its fiscal 2024 (year ended January 2024), actually saw a revenue decline of ~7% (Source) and had a GAAP net loss of $933 million (Source).
This loss is partly due to its own acquisition amortizations (Marvell bought Inphi, Innovium, etc., in recent years) and also the fact that Marvell’s core businesses (networking, storage) had a slowdown outside of AI.
On a non-GAAP basis Marvell is profitable, but margins are thinner than Broadcom’s. Broadcom, despite the accounting charges, runs at much higher absolute profit and cash flow.
Marvell’s market cap is around $90-100B as of early 2025 (Source), which is roughly one-tenth of Broadcom’s.
Investors give Marvell a rich valuation because Marvell has been pivoting to the same AI/cloud narrative (Marvell announced that it expects significant revenue from AI Ethernet networking and cloud ASICs over the next few years).
However, Marvell’s current business is in a bit of a transition, whereas Broadcom’s business is already printing huge cash.
Competitive Intersections
Easy Button: Marvell’s Teralynx chip scored wins with Microsoft Azure, but Broadcom’s Tomahawk still leads cloud switches with faster speeds and stronger industry ties.
• Cloud Switches: Marvell’s Teralynx (Innovium) switch chip got a major cloud customer win pre-acquisition (with Microsoft Azure for certain deployments).
Broadcom’s Tomahawk, however, still powers the majority of cloud network switches across Azure, AWS, Google, etc.
The competition here is technological: both companies race to deliver the next generation of switch (Broadcom already has Tomahawk 5 at 51.2Tbps, Marvell’s next-gen might target similar speeds).
So far, Broadcom has been ahead in delivering cutting-edge switch silicon and has established relationships. Marvell’s challenge is to offer either lower cost or some specialization to unseat Broadcom in any accounts. This is an ongoing battle, with Broadcom in pole position.
• Custom ASICs:
Easy Button: Broadcom has five hyperscaler custom chip programs, while Marvell also claims over $800M in pipeline revenue from cloud custom silicon deals with giants like Google and Microsoft. Although Broadcom’s extensive IP portfolio and proven track record give it an edge, Marvell’s potential to offer lower-cost, specialized solutions poses a competitive threat in these one-customer deals.
As mentioned, Broadcom has five hyperscaler custom chip programs (Source) and Marvell also claims a few (Marvell’s CEO has said they have $800M+ in cloud custom silicon revenue in pipeline for 2024-25, presumably from at least Google and maybe Microsoft).
Google’s threat to replace Broadcom with Marvell for TPUs by 2027 indicates a competitive threat (Source) (Source). Broadcom’s advantage is that it has more IP blocks ready and a track record of delivery (TPU, etc.), but Broadcom also has a reputation for hard bargaining on price (Source). Marvell might be willing to do custom work for less to gain market share.
So there is competition for these one-customer deals.
If Marvell snags one big win (say Google shifts to Marvell for TPU gen 5 or Microsoft partners with Marvell for some AI chip), it could give Marvell significant growth and slightly dent Broadcom.
Both can potentially thrive if the overall pie (AI ASICs) grows, but the hyperscalers likely won’t dual-source the exact same chip – they’d pick one partner per project.
• Storage and Other: In enterprise storage, Broadcom’s Fibre Channel business is fairly unique (Marvell doesn’t play much there).
In SAS/SATA controllers, Broadcom (via LSI) and Marvell are competitors (Marvell sells some controllers for SSDs and HDDs; Broadcom sells RAID controllers/HBAs). It’s a stable, if not high-growth, market.
They split the pie along different product lines. In consumer or automotive, Marvell has some presence (automotive Ethernet chips), whereas Broadcom exited some lower-margin businesses in the past (for example, Broadcom sold its wireless IoT business to Synaptics to focus on more profitable segments).
Broadcom’s Edge vs Marvell:
Broadcom’s biggest advantage is scale and diversification.
It has the resources from a $50B revenue engine to invest and also to bundle solutions. For instance, a cloud customer who wants a custom chip might also be buying Broadcom’s networking gear and Fibre Channel adapters, etc., which can give Broadcom a fuller-stack appeal.
Broadcom’s software business, while not directly competing with Marvell (Marvell is purely hardware), gives Broadcom stability to take big bets. Marvell, being smaller, is more purely focused on the chip competition in networking and compute. This can make Marvell more nimble in some cases, but also means Marvell can be more financially stretched if, say, a product cycle goes wrong.
• Internal Design Teams (Hyperscalers): The big cloud companies themselves are a form of competitor.
Easy Button: Big cloud companies like Google, Amazon, Microsoft, and Meta are increasingly developing their own chips, which could reduce their reliance on vendors like Broadcom. While Broadcom aims to partner with these hyperscalers by building their silicon, success in in-house chip design—as seen with Apple’s move in mobile—could shrink Broadcom’s addressable market and revenue.
If Google, Amazon, Microsoft, Meta invest enough, they can design more of their own silicon instead of buying from vendors.
Broadcom’s strategy is to turn this into a partnership (i.e., be the one to build their silicon). But it’s a double-edged sword: if those companies succeed in being mostly self-sufficient (like Amazon designing its own Nitro networking chips, Google designing more TPUs without Broadcom, etc.), then Broadcom’s addressable market could shrink.
In fact, Apple – one of Broadcom’s largest customers on the mobile side – is doing just that in another segment: Apple is reportedly developing its own Wi-Fi/Bluetooth chip to replace Broadcom’s in iPhones by around 2025.
That’s not in data center, but it shows even the seemingly secure businesses can be under threat when a giant customer verticalizes. Broadcom stands to lose a couple of billion in annual revenue if Apple fully drops them (though Broadcom may still supply other components to Apple like RF front-end modules).
In the cloud, the risk is similar – an Amazon or Google going “in-house” for networking silicon could displace Broadcom eventually. However, making high-performance connectivity chips is hard, and not every cloud giant wants to do it alone. This is why many continue to rely on Broadcom or at least have Broadcom as a partner.
• Intel (Barefoot Networks) and AMD (Pensando/Xilinx): Other big semiconductor players also moved into networking. Intel bought Barefoot Networks (which makes the Tofino programmable Ethernet switch ASIC).
Intel has been trying to get into data center switches with that tech; Tofino chips are used in some niche cases where programmability is key (some cloud providers use them at the network edge or for specialty tasks).
But Barefoot/Intel is not (yet) taking major share from Broadcom’s core switch market.
AMD acquired Xilinx (FPGAs) and Pensando (DPUs).
Xilinx FPGAs can be used to create smart NICs or even prototype switches.
Pensando’s DPU is a competitor to Broadcom’s Stingray and NVIDIA’s BlueField in certain cloud deployments (e.g., HPE and Oracle Cloud use Pensando).
These companies are now well-funded by Intel and AMD, which means Broadcom faces well-resourced competitors in networking silicon beyond just Marvell and NVIDIA.
Broadcom’s defense is a combination of incumbency (its chips are already designed into many systems) and performance (its ASICs often outperform more general-purpose solutions like FPGAs in raw throughput).
• Networking Startups and AI Fabric Startups: A few startups have aimed at pieces of Broadcom’s pie.
For example, Ayar Labs is working on optical chip interconnects (potentially challenging Broadcom’s electrical SerDes with an optical approach to chip-to-chip communication in AI systems).
Lightmatter is doing optical computing which could, in futurist scenarios, change how AI processing and interconnect works.
There are also startups focusing on alternative network topologies for AI (like Enfabrica focusing on “composable” disaggregated cluster fabric).
These are mostly early-stage and not a near-term threat, but Broadcom keeps an eye on disruptive technologies.
Broadcom itself can adapt or acquire if a technology becomes promising – for instance, if optical interconnects start to take off, Broadcom’s existing optical module business and capital could push it to integrate that into future products.
• Private ASIC houses: There are a handful of lesser-known custom silicon companies (e.g., Alphawave, a small publicly traded company focused on SerDes IP, plus some private design-services firms), but none match Broadcom’s breadth.
The closest “private” competitors in custom ASIC might be something like Graphcore or Cerebras in AI chips, but those are more independent product companies than contractors like Broadcom.
If Graphcore’s AI accelerator succeeded widely, it could reduce hyperscalers’ need to develop their own (impacting Broadcom’s custom chip business).
So far, those startups have not significantly dented the NVIDIA + (hyperscaler in-house) duopoly in AI hardware.
Broadcom’s Competitive Moat
Easy Button: Broadcom’s competitive moat stems from its vast IP portfolio and a suite of proven networking, optical, and storage products that create an ecosystem few can replicate. By integrating hardware with enterprise software, Broadcom remains a trusted backbone for critical infrastructure even as tech giants explore in-house alternatives.
Broadcom’s competitive advantage lies in its vast portfolio of IP and proven products.
Its networking devices, optical tech, NICs, storage controllers, etc., form an ecosystem that few can replicate (Source).
It’s telling that even as Google considers alternatives, they had to consider Marvell – and Marvell itself is much smaller and less proven in these ultra-high-end designs, which would itself be a risk for Google.
Broadcom also has a reputation for reliability – when you run the backbone of the internet (many core switches run Broadcom inside), trust and track record matter.
Broadcom’s strategy of combining hardware and software (via VMware and previously CA) also gives it a unique position: none of its semiconductor competitors also sell enterprise software suites.
This isn’t direct competition, but it means Broadcom’s business model is diversified in a way NVIDIA’s or Marvell’s is not.
Impact of AI & “DeepSeek” on Broadcom
Easy Button: The AI boom has massively boosted Broadcom’s revenue, with its AI-related semiconductor sales more than tripling to $12.2 billion as companies rely on its networking chips and custom AI accelerators to power data centers.
This growth is fueled by the increasing need for robust AI infrastructure, where Broadcom’s extensive portfolio and deep expertise help build the essential connectivity and processing backbone for next-generation supercomputers.
Essentially, Broadcom is laying the “roads and tools” that enable the tech industry’s AI revolution, securing its role in the long-term evolution of data-driven technology.
Current AI-Driven Semiconductor Infrastructure Demand
The ongoing AI boom has been a game-changer for Broadcom.
As artificial intelligence models (like deep learning neural networks) have grown in importance, the need for powerful hardware to train and run these models has exploded.
Initially, most attention went to GPUs (for computation), but it became clear that fast networking and custom accelerators are also critical – that’s where Broadcom comes in.
Broadcom’s AI Windfall: Broadcom reported that its AI-related semiconductor revenue more than tripled in 2024 to $12.2 billion (Source).
This includes sales of networking chips for AI data centers and likely the custom AI chips for companies like Google.
Essentially, whenever a tech company builds a new AI supercomputer or expands cloud capacity for AI, there’s a high chance they are buying Broadcom’s gear: be it Ethernet switch chips to link servers, NICs for each server, or even entire custom AI ASICs.
CEO Hock Tan highlighted that Broadcom’s big hyperscaler customers are on “multiyear journeys” to develop custom AI accelerators and network them with open Ethernet connectivity (Source).
This indicates that AI demand isn’t a one-quarter fad; it’s expected to sustain for several years as companies build out massive AI infrastructure.
In summary, Broadcom’s future appears robust: it has the technical assets and financial strength to remain a key player in the data-driven world.
The company will need strategic finesse to navigate customer relationships and technological shifts, but its broad portfolio gives it multiple ways to win.
It is effectively building the “roads and tools” for the tech industry’s next era – and as long as there are “cars” (data, AI computations) on those roads, Broadcom will have a significant role (and revenue stream) in enabling that journey.
Conclusion
In conclusion, Broadcom’s comprehensive approach—from building the fundamental “infrastructure” chips that keep our digital world connected to developing high-performance, custom solutions for hyperscalers—positions the company as a critical enabler in today’s and tomorrow’s technology landscape.
Its deep technical expertise, robust and diversified product portfolio, and strong financial performance provide the resilience and competitive edge needed to navigate the rapidly evolving demands of AI and cloud computing.
Despite fierce competition and the challenge of evolving market dynamics, Broadcom’s proven ability to innovate and integrate hardware with enterprise software ensures it remains indispensable as digital connectivity becomes even more central to modern life.
The author has no position in Broadcom (AVGO) at the time of this writing.
Legal
The information contained on this site is provided for general informational purposes, as a convenience to the readers. The materials are not a substitute for obtaining professional advice from a qualified person, firm or corporation. Consult the appropriate professional advisor for more complete and current information. Capital Market Laboratories (“The Company”) does not engage in rendering any legal or professional services by placing these general informational materials on this website.
The Company specifically disclaims any liability, whether based in contract, tort, strict liability or otherwise, for any direct, indirect, incidental, consequential, or special damages arising out of or in any way connected with access to or use of the site, even if I have been advised of the possibility of such damages, including liability in connection with mistakes or omissions in, or delays in transmission of, information to or from the user, interruptions in telecommunications connections to the site or viruses.
The Company makes no representations or warranties about the accuracy or completeness of the information contained on this website. Any links provided to other server sites are offered as a matter of convenience and in no way are meant to imply that The Company endorses, sponsors, promotes or is affiliated with the owners of or participants in those sites, or endorse any information contained on those sites, unless expressly stated.