
ASIC
Ategrity Specialty Insurance Company Holdings
stock NYSE

At Close
Dec 4, 2025 3:59:56 PM EST
18.43 USD  -1.523% (-0.28)  ·  Volume 76,643
After-hours
Dec 4, 2025 4:00:30 PM EST
18.45 USD  +0.109% (+0.02)  ·  Volume 700
ASIC Reddit Mentions

We have sentiment values and mention counts going back to 2017. The complete data set is available via the API.
ASIC Specific Mentions
As of Dec 5, 2025 2:39:58 AM EST (1 min. ago)
Includes all comments and posts. Mentions per user per ticker capped at one per hour.
3 hr ago • u/Numerous_Ruin_4947 • r/CryptoCurrency • rug_pull_president • C
He and his family did rug pull though. Didn't Eric Trump brag that they made billions in crypto? Yeah, all on the backs of people who bought his useless token. It's down -87%. The chart is horrendous. It basically launched at a high value and just trended down.
[https://coinranking.com/coin/9T8gSchgx+officialtrump-trump](https://coinranking.com/coin/9T8gSchgx+officialtrump-trump)
WLFI is just as bad.
[https://coinranking.com/coin/gbTmiRLbC+worldlibertyfinancial-wlfi](https://coinranking.com/coin/gbTmiRLbC+worldlibertyfinancial-wlfi)
Trump and family weaseled themselves into a BTC mining operation. Funny that 90% of ASIC miners are made in China. Are there tariffs on those devices? BTC mining is also a shit business per Saylor. So why get into that business?
sentiment -0.84
3 hr ago • u/mercurygermes • r/btc • strategy_the_great_bifurcation_distinguishing • C
You cannot simply "pivot" a Bitcoin Miner (ASIC) to AI.
ASICs are single-purpose chips designed solely for SHA-256 hashing. They cannot train AI models or render graphics. To pivot to AI, a miner must scrap their entire fleet and purchase Nvidia H100s/GPUs. That requires billions in CAPEX—capital they cannot raise if their core asset (BTC) is devaluing.
Regarding selling equipment to cheaper countries: Sending old machines to Paraguay or Ethiopia preserves the **Network Hashrate**, but it does not help the **Token Price**. It just keeps the Difficulty high while the price falls. That is actually bearish for miner margins.
sentiment -0.64
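The comment above claims mining ASICs are hard-wired for SHA-256 and nothing else. The reason is visible in what the workload actually is: Bitcoin's proof of work is just a double SHA-256 over a block header, repeated with different nonces until the digest falls below a target. A minimal sketch (simplified 4-byte nonce and a leading-zero-bits target, not the real header layout or nBits encoding):

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    # Bitcoin hashes block headers with SHA-256 applied twice
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def meets_target(header: bytes, difficulty_bits: int) -> bool:
    # Valid when the digest, read as a big-endian integer, has at least
    # `difficulty_bits` leading zero bits (a simplification of Bitcoin's
    # real nBits-encoded target)
    return int.from_bytes(double_sha256(header), "big") < (1 << (256 - difficulty_bits))

def mine(prefix: bytes, difficulty_bits: int) -> int:
    # Brute-force a nonce; this loop is the entire job a SHA-256 ASIC does
    nonce = 0
    while not meets_target(prefix + nonce.to_bytes(4, "little"), difficulty_bits):
        nonce += 1
    return nonce
```

An ASIC implements exactly this inner loop in fixed-function gates, trillions of times per second; there is no programmable datapath to repurpose for the matrix multiplication that AI training and inference need.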
5 hr ago • u/Particular_Most_1529 • r/Wallstreetbetsnew • its_live_this_is_not_10x_its_10000x • YOLO • B
YYAI is actually Live!
It's even got a financial partner for Australia registered with ASIC.
[https://www.airwa.finance/](https://www.airwa.finance/)
This is going to go nuts
sentiment -0.38
8 hr ago • u/RobotAlfie • r/Bitcoin • 300_years_from_now_wouldnt_90_of_bitcoins • C
Yes, that's the uncertain part, as miners need the following:
- to cover electricity
- to cover ASIC depreciation
- to earn a competitive return

All of this is priced in fiat terms, not BTC terms. Therefore:
If the price of BTC is high relative to mining costs → security is strong.
If the price is low relative to mining costs → hashpower drops, security declines.
sentiment 0.80
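The cost structure in the comment above can be turned into a toy daily-margin model: revenue is the miner's share of daily BTC issuance priced in fiat, set against fiat-denominated power and depreciation. All numbers used with it are illustrative assumptions, not real market data:

```python
def miner_margin_usd_per_day(
    btc_price_usd: float,
    block_subsidy_btc: float,      # e.g. 3.125 BTC after the 2024 halving
    network_hashrate_ehs: float,   # whole network, in EH/s
    fleet_hashrate_phs: float,     # this miner's fleet, in PH/s
    power_kw: float,               # fleet power draw
    electricity_usd_per_kwh: float,
    fleet_cost_usd: float,         # ASIC purchase price
    fleet_life_years: float,       # straight-line depreciation horizon
) -> float:
    # Expected share of the ~144 blocks mined per day, converted to fiat
    share = fleet_hashrate_phs / (network_hashrate_ehs * 1000.0)  # 1 EH = 1000 PH
    revenue = 144 * block_subsidy_btc * share * btc_price_usd
    power_cost = power_kw * 24 * electricity_usd_per_kwh
    depreciation = fleet_cost_usd / (fleet_life_years * 365)
    return revenue - power_cost - depreciation
```

With the same hypothetical fleet, only the BTC price changes the sign of the margin, which is exactly the "price low relative to mining costs" case the comment describes: costs are fixed in fiat while revenue floats with the token.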
16 hr ago • u/op_rank • r/investing • keep_position_on_tsmc_or • C
I'm long on TSMC. People don't seem to understand that no matter which AI chip is gaining momentum, be it Nvidia's GPUs, Google's TPUs, Broadcom's ASICs, or anyone else's, these chips are all made by TSMC, which is the real winner as long as the AI boom continues. It'd be illogical to have, say, GOOG or AVGO go up while TSM goes down.
sentiment 0.46
1 day ago • u/Addicted2Vaping • r/AMD_Stock • ubss_2025_global_technology_conference_transcript • B
**Tim:** Good morning. We're going to get started here. I'm Tim Arcuri. I'm the semi and semi equipment analyst here at UBS, and we are very honored to have Dr. Lisa Su with us from AMD. So good morning, Lisa.

**Lisa:** Good morning. Thanks for having me.

**Tim:** Great. Thank you. So first, I just wanted to start by talking about the transformation that you led, I think beginning 3 or 4 years ago. You've transformed the company from being less than 20% data center to nearly 50% this year. What have been the drivers for this transformation? Some of it has been market growth, but some of it was a decision that you made years ago to sort of pivot the company in this direction.

**Lisa:** Well, again, Tim, thanks for having me. It's great to be here with everyone. And I think, in the technology sector, it's all about making the right big bets when you look at where the inflection points are. I think over the last, let's call it, 5-plus years, we've been incredibly focused on high-performance computing as a sector, knowing that, as we go forward, compute would be such an important part of unlocking capability and intelligence. And then a few years ago, it became absolutely clear that this was going to be all about AI, that AI was the ultimate application of high-performance computing. And with that, the investment cycles would be there. This was way before ChatGPT and large language models, but it was the idea that we could really use computing to do so much more in terms of unlocking productivity and intelligence going forward.

**Lisa:** So yes, we've pivoted our -- really, our R&D capabilities, both hardware, software, system integration, to a significant focus on high-performance computing and AI. I think it's paid off well. Our data center business has grown very nicely, well ahead of the market, over 50% a year for the last few years. And what we see going forward is even more exciting. Because I think the recognition is that computing is such an important part of the ecosystem today that we see a very large market opportunity as well as significant growth of our business, actually accelerating growth from our, let's call it, 50% plus over the last few years to over 60% plus as we go forward. So no question, data center is the place to be.

**Tim:** Yes, I actually wanted to ask you about that. So you had this Analyst Day recently. You gave us a new $1 trillion data center TAM by 2030. You were saying $500 billion by 2028 before, so you've upped that, but you also importantly said that you can get double-digit share of that pie. You're doing $16 billion in data center this year, which would put you on a 60% CAGR, as you said; that's up from the 50% CAGR over the past 5 years. How are you winning? And what's the crux of your competitive advantage in data center?

**Lisa:** Well, I mean, when you look at what's important in the data center market, I think the key piece is you really have to have a holistic view of the market. It is CPUs, it is GPUs, it is FPGAs, it's the possibility of doing ASICs, it's being able to integrate all of that together. And that's our unique capability. I think we are really the only semiconductor company out there that has all of this foundational IP, and we have invested way ahead of the curve in terms of some of the key enabling technologies. We were the first to implement chiplets in high-volume production. We're now on our fifth generation of chiplets. And the reason I view these as key foundational technologies is because the one thing we know is that the workloads are going to change. There's nothing static about the computing market. There's nothing static about AI. What we see is that there's an incredible pace of innovation out there where there're new workloads, there're new models and there're new use cases.

**Lisa:** And so you really need this entire portfolio of technology capability, which is what we have. So we've built an incredibly strong franchise with our EPYC data center server CPU chips. We're now over 40% revenue share in that market and growing. We have a very, very strong GPU accelerator road map. And yes, we view that as a significant growth opportunity, the largest piece of the TAM. I think, Tim, you might remember when we originally said that the TAM was $300 billion or $400 billion, people thought, "Wow, Lisa, that's really big." And now I would say that I think we're all believers that the TAM is very, very large because we're still in the early stages of this. And our differentiation is going to be offering the right solution for the right workload going forward.

**Tim:** So there's been some recent news in the market that has made people think that ASICs are going to take over the accelerator market. And I just wanted to get your opinion on that and sort of the general competitive landscape in the AI world. Are ASICs really a threat to GPUs? You've said that ASICs are going to be 20%, 25% share of the market. Has anything we've heard recently changed your view on that?

**Lisa:** Yes. I actually don't think so. I think what we have said about the market is what I started with, which is, the market wants the right technology for the right workload. And that is a combination of CPUs, GPUs, ASICs and other devices. As we look at how these workloads evolve, we do see some cases where ASICs can be very valuable. I have to say that Google has done a great job with the TPU architecture over the last number of years. But it is, let's call it, a more purpose-built architecture. It's not built with the same programmability, the same model flexibility, the same capabilities to do training and inference that GPUs have. GPUs have the beauty that they are a highly parallel architecture, but they're also highly programmable. And so they really allow you to innovate at an extremely fast pace. So when we look at the market, we've said that we see a place for all of these accelerators.

**Lisa:** But our view is, as we go forward, especially over the next, let's call it, 5 years or so, that we'll see GPUs still be the significant majority of the market, because we are still so early in the cycle and because software developers actually want the flexibility to innovate on different algorithms. And with that, you're not going to know a priori what to put in your ASIC. So I think that's a difference. So 20% to 25% feels like the right number. I think the other thing that people should recognize is that this is absolutely a huge and growing market. And as a result, you're going to see a lot of innovation on the silicon side as well as on the software side. And in general, I view that as a great thing because that allows for differentiation in the market.

**Tim:** And if a customer came to you and wanted you to build an ASIC for them, is that something that you would do?

**Lisa:** Well, the way we look at these things, Tim, is it's all about what is our secret sauce, what is our differentiation? And from our perspective, the differentiation really comes when we can take our intellectual property together with our customers' intellectual property and know-how and create a case where 1 plus 1 is greater than 3. I think we are extremely good at deeply partnering with customers, and we've done that over the last 10-plus years. We do have -- in addition to all of our standard products with CPUs and GPUs and FPGAs -- a semi-custom business. I don't call that an ASIC business, the difference being that in an ASIC business you're going to do, let's call it, any chip that somebody comes and asks you to do. That's not necessarily where we shine. I think where we shine is when we can put our IP together with our customers' IP.

**Lisa:** And we have done a number of semi-custom designs that build off of our foundational capability so that customers can differentiate. So I think our overall value proposition is that our goal is to take all of our R&D investments, and we now have 25,000 engineers that are integrating at the bleeding edge of technology, hardware, software, system design, and really marry it with our largest customers who want to find that differentiation, and work on how we see that in the portfolio. That could be a custom system design; so we do do some of this putting the pieces together. That could be a special SKU; we have lots of special SKUs that are optimized to given workloads. And that could be special silicon as well. And we've done that in a number of cases across a number of markets over the last couple of years.

**Tim:** Great. So I wanted to go on to another debate that's in the marketplace, and that's whether there's a bubble right now in AI. You weren't going to get away without me asking you this.

**Lisa:** Well, it wasn't the first question, so...

**Tim:** So can you just talk about that? I know NVIDIA went at that pretty hard on their call. So I just wanted to give you a chance to address that.

**Lisa:** Yes, absolutely. So it's kind of curious this -- the conversation about a bubble from my standpoint. I mean, I spend most of my time talking to the largest customers, the largest AI users out there. And there's not a concept of a bubble. What there is a concept of is, we are, let's call it, 2 years into a 10-year super cycle. And that super cycle is computing allows you to unlock more and more levels of capability, more and more levels of intelligence. And that started with training being the primary use case, but that's really very quickly migrated to inference. And now we're seeing, with all of the models out there, there is no one killer model. There's actually a number of different models that are, let's call it, some are better in certain aspects, some are better in other aspects. Some people want to do, let's call it, fine-tuning, reinforcement learning.

**Lisa:** So with all of this capability out there, the one thing that is constant as we talk to customers is we need more compute. That at this point, if there was more compute installed, more compute capability, we would get to the answer faster. And so yes, there is significant investment. I mean, I think all of the CapEx forecasts that have increased over the last 3 to 6 months have certainly shown that there is confidence that those investments are going to lead to better capabilities going forward. And so yes, from the standpoint of do we see a bubble, we don't see a bubble. What we do see is very well-capitalized companies, companies that have significant resources, using those resources at this point in time because it's such a special point in time in terms of AI learning and AI capabilities.

**Tim:** And I guess just on that end, so there's a lot of talk that there's not an ROI for these CapEx dollars. I know that people say that they're short on compute. But when you look at AI and the actual use cases, can you speak to that?

**Lisa:** Yes, absolutely. I think, again, what my -- my view of this is the cause and effect usually takes a little bit more time than people are expecting. But what we're seeing, and I can just tell you our own case at AMD over the last 15 to 18 months. What started as, let's call it, let's try AI for our internal use cases, has now turned into significant clear productivity wins going forward. So there's no question that there is a return on investment for investment in AI. What is the return on investment for enterprises? It is more productivity. It's building better products. It's being able to actually serve your customers in a way that is more intuitive than you have today. And if you look at today's AI, as much progress as we've made over the last couple of years, we're still not at the point where we're fully exploiting the potential of AI. So we're seeing actually a lot more effort over the last 3 to 6 months on the use of agents and how we make sure that AI not only suggests answers in a Copilot fashion, but actually gets to a place where it can actually do a lot of productive work.

**Lisa:** And that is flowing through. We're seeing that across multiple customers. We're seeing that across the largest hyperscale customers. We're seeing that across the large enterprises that are using AI. And I still say that we are in the very, very early innings of seeing that payoff. So as we talk to the largest enterprise customers, I think every conversation is, "Lisa, how can you help us, how can we learn faster so that we can take advantage of the technology?" So I think the return on investment certainly will be there. I think the debate is perhaps more around the largest foundational model companies and whether there's return on investment there. But again, my view is that there's not going to be one best something; there are going to be multiple models that are best optimized for their use cases. And the secret sauce is really in how you integrate it so that customers can take advantage of the technology as smoothly and as easily as possible.

**Tim:** So another point is that you're moving from being a silicon company to being a systems company. And a big piece of that was your acquisition of ZT. And then you -- and your partnership now with Sanmina. So can you actually speak to that? And you're a bit of a fast follower in building these rack-scale systems. So do you think that you've learned from some of the growing pains that your peer had?

**Lisa:** Well, I think if you take a step back and come to why are we doing this integration, the reason we're doing this integration is the time to useful capability, sort of the time that it takes for our customers to bring up this really complex infrastructure is super critical to make as fast as possible. So the full stack solution is a way for us to help customers get to, let's call it, productive compute capacity. And we're very happy with our acquisition of ZT. I think it's one of the smoothest acquisitions, integrations that I've seen. And what we've been able to do is really take, let's call it, best-in-class system design and combine it with our best-in-class hardware and software capability to come up with very, very strong full stack solution. We're super excited about MI450 series and the Helios product that will come to market in 2026. I do think we have learned. I think we learned as an industry, we're always going to learn that putting together these complex rack level systems is hard. There's nothing new about it, but there's certainly ways that you can derisk and ensure that you can go as fast as possible.

**Lisa:** I think a key element for us in our strategy, when we think about our [global] solutions, is that as important as it is to have that reference design capability, it's also really important to have an open ecosystem. And that open ecosystem means that we have an open rack architecture, which we've developed together with Meta, which I think has taken a lot of the best practices out there in the industry. We're working with all of the key suppliers within the rack to ensure, again, that we learn how to bring these up as fast as possible. And then frankly, the ZT team has brought 1,000-plus really skilled engineers to these capabilities. So I think we feel really good about our rack-level solutions. I think the feedback that we've been getting on the Helios rack has been fantastic. I think people see that we've made really smart engineering decisions to ensure that we're able to bring these systems up as smoothly as possible.

**Tim:** Great. One thing I also hear is that you're fighting a battle on multiple fronts. You're fighting Intel and ARM in PC, you're fighting NVIDIA and you're fighting ASICs, and you're not that large of a company yet. So when you think about prioritizing development, do you feel like you're having to sort of disinvest in certain areas and invest in others?

**Lisa:** Well, actually, I think you're pointing out one of our strengths. So I think one of our strengths is the fact that we have a really, really capable and efficient R&D engine. I give Mark Papermaster and the team a lot of credit for that. We've built an execution engine. We've done 5 generations of server CPUs right on time, with performance best-in-class. And the way we develop is we actually develop foundational capabilities that bring all of these computing elements together, so CPUs, GPUs, FPGAs. I actually think this is one of our strengths. We're not religious about the world being taken over by X, because I can tell you for sure, I do not believe the world is going to be taken over by X. I think you're going to need the right compute for the right workload, and that is our strength. And I think we've developed an R&D engine that knows how to execute that. Now there's no question that AI sits above all of this. And so all of the innovation that we're doing in AI, all of the software investments that we're making in AI are there to ensure that it works across the entire portfolio.

**Tim:** Great. Can we talk about the deal with OpenAI? You offered them 10% of the company with warrants. There are various strike prices at each tranche. How did the deal come together? And how does it change your engagement with the other customers?

**Lisa:** Well, first of all, we're very pleased, excited, happy with the OpenAI deal and partnership. To give you some idea of how it came together, it really came together over the last couple of years. We've always been working with them as one of the leading foundational model companies to understand where they think model evolution is going, because that's so critical in determining sort of our long-term road map. When we were looking at what the MI400 series should look like, what would really make it special, how we differentiate long term -- clearly, one of our key strengths has been our memory architecture, which is enabled by chiplets and all that. And a lot of that came from talking to our largest customers, OpenAI being one, but a number of our other large partners, Microsoft, Meta, Oracle, et cetera, also contributed to those thoughts. And when we thought about where we want to go going forward, this is all about going big, not necessarily the typical way that technology evolves, which is sometimes, "Hey, we do smaller partnerships here and there." In AI, it's all about really bringing together hardware, software, co-optimization and co-design.

**Lisa:** And that's what we've really put together with this OpenAI partnership. I think we view it as a way to ensure that we are co-developing with one of the largest model companies in the world. The key here is that with the current structure of our 6-gigawatt partnership, it's a win-win on both sides. So on one hand, we get significant scale with this. If you think about it, as each gigawatt is deployed, that's significant scale to AMD: double-digit billions of dollars of revenue. And it's also an opportunity for OpenAI to be very invested in our technology success as well, because there are a number of commercial as well as technology milestones. Very much a win-win, very highly accretive to our portfolio. And as it relates to other customers, I think the idea of having a very optimized road map is a good thing. As much as we love OpenAI, we also deal with the entire set of customers out there, from the AI natives to the largest hyperscalers, and we're seeing great traction with the road map.

**Tim:** And are you any more engaged? Have you had any more conversations lately that you might not have had, had you not announced that deal?

**Lisa:** I believe that it has given people a view of sort of AMD's capabilities. I think we always had good conversations, but I think the idea of just how competitive the MI400 series road map is, what we have going forward has certainly been helped since we announced the OpenAI deal.

**Tim:** And do you worry about customer concentration? Can you speak a little bit about breadth? If you look out in your forecast, how broad will your customer base be?

**Lisa:** Yes. Look, our view is, we are a general purpose supplier in the sense that OpenAI is a great partner, and we very, very much believe in their success and their road map. But we are highly engaged across all of the largest hyperscalers out there. And from a customer concentration standpoint, the key point is this is a big multigenerational, multi-gigawatt partnership. We have a number of others that are at similar scale, similarly multi-generation. And the truth is, compute is at a premium. This is one of the areas where there are so few companies that can offer this capability. I'd like to believe that in addition to great technology, we focus on our customer success. So it's about total cost of ownership, ensuring that there's significant differentiation and also ensuring that we're very flexible in how people want to operate in terms of the overall ecosystem.

**Lisa:** So from that standpoint, I don't worry about customer concentration. I view this similarly when -- if I give you the example of where we were in the server CPU market when we started with the hyperscale accounts, they didn't all start on day 1 at the same time. They -- different hyperscalers went large at different points in time. And that's the same thing that we're going to see in the AI accelerator road map. We're seeing a very similar pattern in terms of how we engage with customers and how customers view AMD as really a long-term partner, especially since there's this recognition that, in addition to the GPU road map, the CPU road map, the networking road map, the overall sort of capabilities are very attractive.

**Tim:** Great. Well, we've made it to 23 minutes, and we haven't talked about CPU yet. So maybe we can talk about that. So demand is obviously very strong in both PC and in server. So maybe we can talk about that. We keep hearing about hyperscalers asking for supply, and we keep hearing about long-term contracts, particularly on the server side. So can you just talk about that and just talk about the supply environment?

**Lisa:** Yes, absolutely. The last, I would say, several months has been a very interesting story around the CPU world. We are really happy and proud of our partnerships on the CPU side. I think there was this narrative last year that somehow GPUs were going to take over the world and refresh cycles for CPUs would lengthen and you wouldn't have as much, let's call it, market momentum. I think what we started seeing at the beginning of this year is actually a significant refresh cycle starting. So that was very positive. But more interesting is, over the last 3 months, what we've seen is really a significant uptick in CPU demand. And when you look underneath that, it's not just refresh cycles. I mean, there's no question that there were some refresh cycles that were, let's call it, delayed as a result of some of the AI CapEx spending. But a lot of that is being caught up now.

**Lisa:** And what we're also seeing is that as AI moves to more inferencing and there's more work being done, things like agent workloads are starting up, and they're spawning more general purpose CPU needs. Because if you think about it, if you have, let's call it, 1,000 agents or 1,000 virtual employees, they need to operate on some data set. They need to operate on some computing capability. And that requires general purpose CPUs. So we actually have a view that the CPU market will substantially grow over the next 4 or 5 years as we see AI usage really spawn more traditional computing applications. So it is certainly a good thing to see. We love seeing that. I think it's one of the reasons that we're so passionate about the overall road map being important in terms of all of the capabilities. And we see the CPU business as a great business going forward.

**Tim:** And you've gained a bunch of share in data center. Do you think that in server, has your lead at all shrunk? Do you think that you'll continue to gain share?

**Lisa:** We do. We're in a very fortunate place right now where we are a trusted partner on the CPU side, especially for the largest hyperscalers. And the conversations are such like how can we work together to build, let's call it, the best-in-class road map going forward. I think as great as our fifth generation Turin is, we're super excited about our next-generation Venice CPUs. We think that extends our leadership going forward, and that extends as we go into the next generation as well. So I think we have a very strong franchise there. And the key is we're a trusted partner going forward. We're also quite underrepresented in the enterprise space, but I've seen that also as a significant growth opportunity for us. The largest enterprises are all looking for help as to how they modernize their data centers and how they make their choices, and we're very happy to be part of that conversation.

**Tim:** Great. One thing that I was quite surprised about from the Analyst Day was that you had pretty strong share aspiration gains in client actually. You think you can be more than 40% share in the client. Can you just talk about that?

**Lisa:** Yes. So the client PC business has been a place for us that is not necessarily growing by leaps and bounds, but it is an important market. It is a market that has very good customer-facing capability for us. And we've grown extremely well over the last couple of years. I think we've really streamlined our road map. I think we have put it as an AI-first road map, and that has been appreciated. We're now, let's call it, mid- to high 20s share. And as we go forward, we see that only growing. There are areas where I think we are already best-in-class when you look at things like the desktop gaming market. This is an area where we've historically had a lot of success, the desktop channel market, going forward. And we're continuing to grow in premium notebooks. The most valuable part of the PC TAM, the premium segments, is where the product really does matter, and that's where we're actually gaining the most share because our products are superior.

**Tim:** Do you worry that -- because memory prices have gone up so much, do you worry about some despecking in PC? Or do you worry that it hurts the market at all, that it hurts demand?

**Lisa:** Yes. I mean we're certainly watching, Tim, the commodities. There's no question that as the market has gotten tighter, some of the commodities like memory have become tighter. And we certainly are watching for that. I don't think it's a major perturbance to the market. I think it might be a minor perturbance, and we're watching that closely.

**Tim:** Great. And maybe we can talk about some of the bottlenecks that you're worried about over that 2030 forecast you gave. Are there things that you're worried about, like HBM or CoWoS? Or what is something that kind of keeps you up at night that could constrain your growth?

**Lisa:** Well, the great thing about the semiconductor market is, I think we are used to expanding and expanding quickly. So if you put aside sort of very temporal things, what are the most important things? It is advanced technology, sort of access to the most advanced wafers. It's high-bandwidth memory, it's packaging CoWoS, these elements. We have built a very, very strong supply chain over the last couple of years. We have deep partnerships with TSMC, all of the memory vendors, all of the packaging vendors. And I think we feel very confident that we can achieve our growth rates. I think the industry as a whole is very much around ensuring that we do satisfy all of the demand that's out there. The other area that we're watching very closely is power and how data center power is coming online, not just in the United States, but across the world. I will say that this administration has really activated a lot of the power build-out.

**Lisa:** So we're seeing things moving faster. We're seeing that there is a desire to put more power on as quickly as possible, trying to get rid of some of the bureaucracy around that. And I think those are all good things. We're also looking at power outside of the United States. And so there are lots of opportunities. We didn't get to talk about sovereign AI and a lot of the nation state investments that are happening there, which we think are another adder on top of it. So I would summarize it, Tim, as it is -- there're lots and lots of things on the radar screen, but the most important thing is that everyone in the ecosystem recognizes how important the enablement of this computing technology is. And so we're all working together to do that.

**Tim:** Great. Well, we're out of time. Thank you, again, Lisa.

**Lisa:** Wonderful. Thank you so much.

*This motherfucker really loves the word great...*
sentiment 1.00
1 day ago • u/HuzzahBot • r/wallstreetbetsHUZZAH • what_are_your_moves_tomorrow_december_04_2025 • C
Tweet Mirror:[FirstSquawk](https://twitter.com/FirstSquawk/status/1996355657096954339)
>AUSTRALIA’S ASIC URGES COMPANIES TO STRENGTHEN WHISTLEBLOWER PROTECTION PRACTICES
Tweet Mirror:[FirstSquawk](https://twitter.com/FirstSquawk/status/1996355588549386362)
>AUSTRALIA’S S&P/ASX 200 OPENS UP 0.2% AT 8,608.30
sentiment 0.00
1 day ago • u/mjblank2 • r/ValueInvesting • aehr_the_ai_pick_and_shovel_play_that_just_hit_a • Stock Analysis • B
TLDR: Aehr Test Systems (NASDAQ: AEHR) is theoretically the perfect "pick and shovel" play for the AI boom. Every major AI chip (GPU, ASIC) needs massive reliability testing, and Aehr’s technology is the gold standard for it. But theory isn't cash. Management is currently flying blind, pulling guidance due to "tariff uncertainties," and revenue is contracting just when it should be exploding.
**Rating: HOLD.** The tech is validated. The stock price is not. Do not catch this falling knife until visibility returns.
**1. The 30-Second Elevator Pitch**
You can't run a billion-dollar AI data center if your chips fail after three months. That’s where Aehr comes in.
* **What they do:** They make massive "burn-in" test systems (specifically the *Sonoma* and *FOX-XP* lines).
* **Why it matters:** Unlike standard testing that takes seconds, "burn-in" runs chips at high heat and voltage for hours or days to force early failures *before* they get shipped to a hyperscaler.
* **The Catalyst:** As AI chips get hotter, more complex, and more expensive, traditional testing fails. Aehr’s liquid-cooled systems are one of the few that can handle the thermal load of next-gen AI processors.
**2. The Bull Case: The Tech is Winning**
If you ignore the stock chart and look at the engineering, Aehr is winning.
* **The "Hyperscaler" Whale:** Management confirmed multiple follow-on orders for their *Sonoma* systems from a "leading hyperscaler" (industry rumors often point to major cloud players building custom silicon). This customer demanded *shorter lead times*. That signals urgency.
* **Critical Infrastructure Status:** We are moving from "chip shortage" to "reliability crisis." When an NVIDIA H100 or a custom Google TPU costs tens of thousands of dollars, you cannot afford a 1% failure rate in the field. Aehr is effectively selling insurance for billion-dollar clusters.
* **Market Pivot:** They are successfully pivoting from being a "Silicon Carbide (EV)" story to a pure "AI/Data Center" story. Given the slowdown in EVs, this pivot saved the company from irrelevance.
**3. The Bear Case: The "Visibility" Vacuum**
Wall Street hates uncertainty, and Aehr just served up a double portion.
* **The "Tariff" Excuse:** Management refused to provide forward guidance, citing "ongoing tariff-related uncertainty." In the semi-cap equipment world, visibility is the *only* currency that matters. When a CEO says "I can't guide," they are effectively saying, "I don't know when my customers will sign the check."
* **Revenue Contraction:** The AI narrative says "up and to the right," but the income statement says "down and to the left." You cannot trade at a growth multiple with contracting revenue, no matter how cool your technology is.
* **Revenue:** $11.0M (Down from $13.1M YoY)
* **GAAP Net Loss:** $(2.1)M
**4. Financial Health Check**
| Metric | Q1 FY26 (Reported Oct '25) | YoY Change | SCN Comment |
|---|---|---|---|
| **Revenue** | $11.0M | ▼ 16% | The thesis is broken until this flips positive. |
| **Gross Margin** | ~40–45% range | ▬ Stable | Margins are holding, proving they have pricing power. |
| **Net Income** | $(2.1)M | ▼ Loss | Burning cash, though the balance sheet remains healthy. |
| **Cash Position** | ~$24M | ▬ Stable | No immediate dilution risk, but the runway isn't infinite. |
**5. The Strategy**
**Why I am holding (not selling):** The "Hyperscaler" orders are real. The tech is real. If the tariff situation resolves or if the EV sector bottoms out, Aehr could double very quickly because they have high operating leverage. Selling now is selling at the point of maximum pessimism.
**Why I am NOT buying:** "Cheap" stocks can always get cheaper. Until management reinstates guidance, you are gambling, not investing.
**The Trigger to Buy:** I need to see **one** of two things:
1. **Reinstated Guidance:** Even if it's conservative, I need to know they have visibility.
2. **A "Book-to-Bill" Ratio > 1.2:** This would prove that orders are coming in faster than they are shipping them out.
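As an illustration only (the figures below are hypothetical, and `book_to_bill` is a made-up helper, not anything from Aehr's filings), the book-to-bill trigger is just orders divided by shipments:

```python
def book_to_bill(orders_booked: float, revenue_shipped: float) -> float:
    """Ratio > 1 means orders are coming in faster than systems ship out."""
    return orders_booked / revenue_shipped

# Hypothetical quarter: $14M in new orders booked against $11M shipped.
ratio = book_to_bill(14.0, 11.0)
print(f"book-to-bill = {ratio:.2f}")  # ~1.27, above the 1.2 trigger
```

A sustained ratio above 1.0 means backlog is growing; the 1.2 threshold above just adds margin for lumpy order timing.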
**TL;DR:** Great tech, terrible visibility. Put it on your watchlist, keep your current shares, but sit on your hands for now.
**Disclaimer:** *This is not financial advice. Do your own due diligence.*
sentiment -0.95
1 day ago • u/Comfortable-Usual561 • r/NVDA_Stock • ibm_there_is_no_ai_bubble • B
I realize this is an NVIDIA stock forum, and I’m sharing this to summarize IBM’s perspective on the current AI landscape. IBM is a major NVIDIA partner—its watsonx software stack runs primarily on NVIDIA hardware. At the same time, IBM is also collaborating with select ASIC vendors to bring watsonx agentic AI capabilities to their platforms.
=================== Summary ===================
**Economics:** Hyperscalers will struggle to generate acceptable ROI, profits, or reasonable payback periods on today’s massive Gen-AI data-center buildouts. The current pace of infrastructure expansion isn’t economically sustainable, and the growing dependence on debt to fund AI growth (collateralized by extremely expensive GPUs) is also risky.
**AI is not in a bubble, but a business opportunity:** While consumer AI is expensive, unsustainable, and hype-driven, IBM does not believe AI is in a bubble, pointing to:
* large productivity gains and scalability that weren’t previously possible
* inroads into the enterprise, solving real business use cases.
**Jobs:** Most recent layoffs are largely a result of overhiring between 2020 and 2023, not of AI.
**Technology:** While LLMs/GenAI have made a lot of progress, achieving AGI with today’s silicon-based hardware will be difficult; IBM believes quantum computing will ultimately be essential for future AGI breakthroughs.
sentiment 0.92
2 days ago • u/rBitcoinMod • r/Bitcoin • started_mining_today • C
Your submission has been flagged and removed because it relates primarily to bitcoin mining. If you would like to learn more about mining bitcoin, please visit /r/BitcoinMining. Be aware that bitcoin cannot be mined using graphic cards. Specialized ASIC hardware is required. Cloud mining is often a scam.
*I am a bot and cannot respond. Please contact r/Bitcoin moderators [directly via mod mail](https://www.reddit.com/message/compose?to=%2Fr%2FBitcoin) if you have questions.*
sentiment -0.22
2 days ago • u/Agitated_Oil7955 • r/Bitcoin • started_mining_today • B
I got really bored and accidentally fell down a mining rabbit hole. Thirty minutes later I was staring at a black console window like I’d just hacked the Pentagon, and somehow my RTX 4060 Ti is now a full-time crypto employee earning a majestic £0.05 a day.
Is it worth it? Absolutely not.
Did it cure my boredom? Instantly.
And now, as I’m typing this, I’ve already started browsing ASIC miners like some financially reckless dragon…
Meanwhile my bank account is in the corner quietly sobbing and Googling “how to get emancipated from its owner.”
sentiment -0.84
2 days ago • u/rBitcoinMod • r/Bitcoin • thanks_to_exist_bitcoin • C
Your submission has been flagged and removed because it relates primarily to bitcoin mining. If you would like to learn more about mining bitcoin, please visit /r/BitcoinMining. Be aware that bitcoin cannot be mined using graphic cards. Specialized ASIC hardware is required. Cloud mining is often a scam.
*I am a bot and cannot respond. Please contact r/Bitcoin moderators [directly via mod mail](https://www.reddit.com/message/compose?to=%2Fr%2FBitcoin) if you have questions.*
sentiment -0.22
2 days ago • u/JWcommander217 • r/AMD_Stock • technical_analysis_for_amd_123premarket • Technical Analysis • B
[wellll](https://preview.redd.it/p5mizguywz4g1.png?width=1563&format=png&auto=webp&s=d7abe5db9cca47c4e28c856f9f950098cab6c385)
My drawing is my interpretation of yesterday's price action up until the AMZN NVLink announcement. Everything was fine and just chugging along, and then boom, that sucks.
Now here is my thing:
- We know that NVDA has better networking options than we do. That is a given and a known quantity at this time, and it does not really change the current status quo.
- We know that AMD has made acquisitions this year aimed at improving networking, but we really won't see those bets paying off until we are delivering full rack-scale solutions with our own all-in-one optimized servers.
So I'm not sure we were ever in the running for that AMZN business nor were we competing for it at all. Sure its some extra cash but like I think AMZN went to the open market and NVDA is hawking their solution but I'm not 100% certain that we are hawking our networking solutions for the ASICs of AVGO. Sure it might make sense for someone like NVDA who is afraid of losing some of that money they are getting from GPU's to competitors but I just don't think it is a business line that we are even targeting. If anything, we are proposing to deliver a quality full rack solution that has all in one that is BETTER than whatever TPU/ASIC thing that AMZN is working on its own. So you could argue that the NVLink deal literally is the OPPOSITE of the strategy we are pursuing.
So all in all--------I think this is just sort of a negative news story that people read the headline and say ooooooo bad. But as people really think about it, they will be like yeaaaaaaa honestly this is a nothingburger. I dunno thats my take on it. Maybe I'm wrong here.
That 50-day EMA is sitting right there at $217.5ish, and the regular 100-day SMA is sitting right at that $194 area, which is where we saw support firm up when we dipped previously. I think we are going to start to see a narrowing trading range between those two areas while the market digests some mehhhh data coming out. The Fed meeting next week is going to provide a shot in the arm, but I think a rate cut is pretty much a given at this point, so I'm not sure the comments are going to be a big deal. The dot plot might give us some idea where everyone is at post-Powell, but Powell's thoughts at this point probably won't really matter, so he can sit up and claim that he is the Queen of Denmark and the market won't bat an eyelash.
We should hit some 1st Std Deviation price support around $213 so I do think that some narrow trading ranges could be in the future where the bulls and bears are fighting it out. I might change my cash deployment buy program if we don't see a dip further here and start loading up early. Still trying to get a handle. I think if we consolidate in this range, we might be set up nicely for a EOY rally for AMD and I want to have a bigger seat at the table.
sentiment 0.97
2 days ago • u/OkBad4259 • r/BitcoinBeginners • nonce_question • C
As someone who’s spent years trading, building indicators, and watching how markets and mining tech evolve, the way I visualize Bitcoin mining is simple: every ASIC is hashing the same block template, but each machine feeds a different nonce and extra-nonce values into the header so they’re never checking the same number at the same time. It’s not a “guessing race” on one sequence it’s billions of parallel hash attempts with unique inputs. The network accepts the block whose hash first falls below the target difficulty.
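That parallel search can be sketched in a few lines of Python. This is illustrative only: a real Bitcoin header is a fixed 80-byte structure, the target comes from the difficulty bits, and `header_prefix`, `mine`, and the loop bound here are simplified assumptions. The point is just that each rig varies its own nonce (and an extra-nonce baked into its prefix), so no two machines hash the same input.

```python
import hashlib
import struct

def double_sha256(data: bytes) -> bytes:
    # Bitcoin hashes the header with SHA-256 applied twice.
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine(header_prefix: bytes, target: int, max_nonce: int = 100_000):
    """Scan nonces until the header hash falls below the target, or give up."""
    for nonce in range(max_nonce):
        header = header_prefix + struct.pack("<I", nonce)  # 4-byte little-endian nonce
        h = int.from_bytes(double_sha256(header)[::-1], "big")
        if h < target:
            return nonce, h
    return None

# Each machine embeds a different extra-nonce in its prefix, so their
# nonce scans never overlap even though they share the block template.
result = mine(b"block-template|extranonce-1", target=2**252)
```

With an easy target like `2**252`, roughly one hash in sixteen succeeds, so the loop finds a valid nonce almost immediately; at real network difficulty the same loop would run quintillions of times, which is why single-purpose ASICs exist.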
sentiment -0.24
2 days ago • u/AverageUnited3237 • r/stocks • openai_is_not_google • C
Completely wrong. TPU is used in every part of the Google stack. Search, YouTube, cloud, waymo, etc
The TPU, or Tensor Processing Unit, is Google’s own ASIC. We designed TPUs from the ground up to run AI-based compute tasks, making them even more specialized than CPUs and GPUs. TPUs have been at the heart of some of Google’s most popular AI services, including Search, YouTube and DeepMind’s large language models.
This is from Google's OWN blog
https://blog.google/technology/ai/difference-cpu-gpu-tpu-trillium/
Clown.
sentiment 0.63
© 2020 - 2025 ChartExchange LLC