
PNM
PNM Resources, Inc.
stock NYSE

Inactive
Aug 2, 2024
41.77 USD  -1.042% (-0.44)  volume 494,560
Pre-market
0.00 USD  -100.000% (-42.21)  volume 0
After-hours
0.00 USD  0.000% (0.00)  volume 0
PNM Reddit Mentions

We have sentiment values and mention counts going back to 2017. The complete data set is available via the API.
Take me to the API
PNM Specific Mentions
As of Mar 19, 2026 4:06:38 PM EDT (5 minutes ago)
Includes all comments and posts. Mentions per user per ticker capped at one per hour.
7 days ago • u/Sophia1995_miam • r/AMD_Stock • daily_discussion_thursday_20260312 • C
I asked ChatGPT's latest model about it: does this tweet match the MI455X?
Yes — **mostly, but not perfectly**.
The paper’s core thesis is that modern LLM inference is bottlenecked by **memory capacity, memory bandwidth, and interconnect latency**, especially during autoregressive decode, rather than by raw compute. Ma and Patterson explicitly frame the problem that way and propose four hardware directions: **High Bandwidth Flash, Processing-Near-Memory, 3D memory-logic stacking, and lower-latency interconnects**.
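The decode-side bandwidth bound can be sketched with simple arithmetic: each generated token must stream the full weight set from memory, so per-stream throughput is capped by bandwidth divided by model size, regardless of compute. The model size and bandwidth below are illustrative assumptions, not figures from the paper or from AMD:

```python
def decode_tokens_per_sec(mem_bandwidth_tb_s: float, weight_gb: float) -> float:
    """Upper bound on single-stream autoregressive decode rate.

    Assumes every token generation reads all weights once from memory
    (no batching, no caching tricks) -- the memory-bound regime the
    paper describes.
    """
    return (mem_bandwidth_tb_s * 1e12) / (weight_gb * 1e9)

# Hypothetical example: a 400 GB model on a 19.6 TB/s part caps out at
# ~49 tokens/s per stream, no matter how many FLOPS the chip offers.
print(decode_tokens_per_sec(19.6, 400))  # 49.0
```

The point of the bound is that adding compute does not move it; only more bandwidth (or fewer bytes per token, e.g. lower precision) does.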
That maps **fairly well** to AMD’s **MI455X / Helios** story. AMD is publicly positioning Helios as a **rack-scale** system for frontier AI, powered by MI455X GPUs plus EPYC “Venice” CPUs and Pensando NICs, with an emphasis on **bandwidth, energy efficiency, and scale-out networking**. AMD also says the broader MI450-series/Helios platform delivers up to **432 GB of HBM4 per GPU**, **19.6 TB/s of memory bandwidth per GPU**, **31 TB of total HBM4 per rack**, **1.4 PB/s aggregate memory bandwidth**, and **260 TB/s of scale-up interconnect bandwidth**. Those are exactly the kinds of knobs the paper says matter most for inference.
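As a sanity check, the per-GPU and per-rack figures quoted above are internally consistent with a roughly 72-GPU rack (an inference from the numbers as quoted, not a count stated here):

```python
# Rack totals divided by per-GPU figures, using the numbers quoted above.
gpus_from_capacity = 31e12 / 432e9       # 31 TB HBM4 / 432 GB per GPU  ~= 71.8
gpus_from_bandwidth = 1.4e15 / 19.6e12   # 1.4 PB/s / 19.6 TB/s per GPU ~= 71.4

# Both ratios land near 72, so the capacity and bandwidth claims agree.
print(gpus_from_capacity, gpus_from_bandwidth)
```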
So if your question is, “**Does this paper reinforce AMD’s MI455X value proposition?**” the answer is **yes**. A platform centered on **lots of HBM, high memory bandwidth, low-precision inference throughput, and rack-scale fabric** is much closer to the paper’s diagnosis than a design that just chases more FLOPS. AMD is also explicitly selling Helios for both **trillion-parameter training and large-scale AI inference**, which fits the paper’s idea that inference needs first-class hardware treatment rather than being treated as an afterthought.
But it is **not a perfect fit**. The paper is more radical than AMD’s public MI455X pitch. Ma and Patterson are not just saying “add more HBM and a better rack”; they are pointing to **new memory hierarchies** like High Bandwidth Flash and **memory-centric packaging/placement changes** like PNM and 3D memory-logic stacking. AMD’s public MI455X/Helios materials emphasize **HBM4, rack integration, Ethernet/UALink-style fabric, and open system design**. That is a strong **near-term response** to the inference bottleneck, but it is still fundamentally a **GPU-plus-HBM** architecture, not the deeper post-GPU memory architecture the paper is arguing the industry may eventually need.
My bottom line: **the paper is bullish for AMD’s direction, but not a proof that MI455X uniquely solves the problem**. It supports the idea that AMD is betting on the right constraints — **memory and interconnect** — yet the paper’s full recommendation goes beyond what MI455X publicly appears to be today. So I’d call it **“AMD-aligned, but not AMD-complete.”**
If you want, I can do a tighter **MI455X vs Rubin vs this paper** breakdown and show exactly where AMD matches or misses each of the paper’s four hardware shifts.
sentiment 0.89
© 2020 - 2026 ChartExchange LLC