
CAG
Conagra Brands, Inc.
NYSE

At Close (Dec 5, 2025 3:59:59 PM EST)
17.05 USD, +0.590% (+0.10), volume 8,209,184
Bid 0.00 | Ask 0.00 | Spread 0.00
Pre-market (Dec 5, 2025 9:24:30 AM EST)
16.95 USD, 0.000% (0.00), volume 20,903
After-hours (Dec 5, 2025 4:55:30 PM EST)
17.07 USD, +0.110% (+0.02), volume 23,792
CAG Reddit Mentions

We have sentiment values and mention counts going back to 2017. The complete data set is available via the API.
CAG Specific Mentions
As of Dec 7, 2025 4:13:40 PM EST (1 min. ago)
Includes all comments and posts. Mentions per user per ticker capped at one per hour.
1 day ago • u/me_xman • r/investing • whats_the_best_stock_youre_planning_to_buy_for • C
Food companies like GIS CAG KHC CPB DEO CMG.
sentiment 0.36
1 day ago • u/Leo6-2 • r/algorithmictrading • second_opinion_needed_i_recreated_weekly_rotation • Backtest • B
Weekly Rotation Strategy vs SPY buy and hold

Hey everyone, I recreated a trading strategy from a book by a trader who now teaches others, so I figure it's legit and not just hype. But now I'm stuck—it's outputting as a vector, and I'm questioning if my backtest results are realistic or if my code is off.
Where do I go from here? I could run walk-forward tests or Monte Carlo simulations, but realistically, since it's based on weekly candles, I can handle entries/exits manually and use it more like an indicator—no execution issues there, right? The main doubt is whether I backtested it correctly, so I'd love a second opinion on validating it properly, like manual charting or key metrics (win rate, drawdown).
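One way to get the win rate and drawdown numbers mentioned above is to compute them straight from the trade log and equity curve that the implementation further down already produces. The sketch below is illustrative only: it assumes the trade dicts carry 'ticker', 'action', 'shares' and 'price' keys and that the equity curve has an 'equity' column (both match the code below), and it assumes every BUY is closed in full by the next SELL of the same ticker.

import pandas as pd

def quick_validation_metrics(trades, equity_curve: pd.DataFrame):
    """Per-round-trip win rate and max drawdown from the backtest outputs."""
    open_entries = {}     # ticker -> entry price of the currently open position
    round_trip_pnl = []   # realized P&L per closed position

    for t in trades:
        if t['action'] == 'BUY':
            open_entries[t['ticker']] = t['price']
        elif t['action'] == 'SELL' and t['ticker'] in open_entries:
            entry_price = open_entries.pop(t['ticker'])
            round_trip_pnl.append((t['price'] - entry_price) * t['shares'])

    win_rate = (
        sum(pnl > 0 for pnl in round_trip_pnl) / len(round_trip_pnl)
        if round_trip_pnl else float('nan')
    )

    running_peak = equity_curve['equity'].cummax()
    max_drawdown_pct = float(
        ((equity_curve['equity'] - running_peak) / running_peak).min() * 100
    )
    return win_rate, max_drawdown_pct

Win rate here is the fraction of closed round trips with positive P&L; the drawdown is in percent and is computed the same way the metrics code below does it.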
This is the strategy:
The Weekly Rotation strategy is a simple, long-only momentum approach for S&P 500 stocks. It requires just one weekly check (typically Friday after close) to select and rotate into the top 10 strongest performers, aiming to beat the S&P 500 with lower drawdowns by staying in cash during bear markets.
# Key Requirements
* **Universe**: All current, delisted, and joining/leaving S&P 500 stocks for full testing.
* **Filters**: Stocks must have 20-day average volume > 1M shares and price > $1 USD.
* **Market Condition**: SPY close must be above its 200-day SMA (with a 2% buffer below).
* **Max Positions**: 10 stocks, each sized at 10% of total equity (e.g., $100K equity = $10K per position; see the quick arithmetic check after the rules).
# Entry Rules
* On Friday close, confirm market is "up" (SPY > 200-day SMA band).
* From filtered stocks, select those with 3-day RSI < 50 (avoids overbought).
* Rank by highest 200-day Rate of Change (ROC, or % gain); pick top 10.
* Buy all positions market-on-open Monday.
# Exit and Rotation Rules
* Every Friday, re-rank stocks by 200-day ROC.
* Hold if still in top 10; sell and replace if dropped out (market-on-open next day).
* No hard stops normally (rotation handles weakness), but optional 20% stop loss per position if desired.
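A quick arithmetic check of the 10% sizing rule, using made-up numbers (not from the book):

equity = 100_000
slot = 0.10 * equity          # $10,000 per slot, at most 10 slots
price = 83.50                 # hypothetical Friday close of a top-10 stock
shares = int(slot / price)    # 119 shares, bought at Monday's open
cost = shares * price         # $9,936.50 actually deployed
leftover = slot - cost        # $63.50 stays in cash (whole shares only)

The implementation below applies the same arithmetic each Monday when it converts 10% of portfolio value into whole shares.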
"""
Bensdorp Weekly Rotation Strategy - CORRECTED Implementation
Based on "The 30-Minute Stock Trader" by Laurens Bensdorp

pip install pandas numpy yfinance matplotlib seaborn
"""

import pandas as pd
import numpy as np
from pathlib import Path
from datetime import datetime, timedelta
from typing import Dict, List, Tuple, Optional
import warnings
warnings.filterwarnings('ignore')

try:
    import yfinance as yf
except ImportError:
    import subprocess
    subprocess.check_call(['pip', 'install', 'yfinance'])
    import yfinance as yf

try:
    import matplotlib.pyplot as plt
    import seaborn as sns
except ImportError:
    import subprocess
    subprocess.check_call(['pip', 'install', 'matplotlib', 'seaborn'])
    import matplotlib.pyplot as plt
    import seaborn as sns

sns.set_style('darkgrid')


# ============================================================================
# DATA LAYER - Parquet-based local database
# ============================================================================

class MarketDataDB:
    """Local market data storage using Parquet files"""

    def __init__(self, db_path: str = "./market_data"):
        self.db_path = Path(db_path)
        self.db_path.mkdir(parents=True, exist_ok=True)
        self.price_path = self.db_path / "prices"
        self.price_path.mkdir(exist_ok=True)

    def _get_ticker_file(self, ticker: str) -> Path:
        return self.price_path / f"{ticker}.parquet"

    def download_ticker(self, ticker: str, start_date: str, end_date: str,
                        force_refresh: bool = False) -> pd.DataFrame:
        """Download and cache ticker data"""
        file_path = self._get_ticker_file(ticker)

        if file_path.exists() and not force_refresh:
            df = pd.read_parquet(file_path)
            df.index = pd.to_datetime(df.index)
            last_date = df.index[-1].date()
            today = datetime.now().date()

            if (today - last_date).days <= 1:
                return df[start_date:end_date]
            else:
                new_data = yf.download(ticker, start=last_date, end=end_date,
                                       progress=False, auto_adjust=True)
                if not new_data.empty:
                    df = pd.concat([df, new_data[new_data.index > df.index[-1]]])
                    df.to_parquet(file_path)
                return df[start_date:end_date]

        print(f"Downloading {ticker}...")
        try:
            df = yf.download(ticker, start=start_date, end=end_date,
                             progress=False, auto_adjust=True)
            if not df.empty:
                df.to_parquet(file_path)
            return df
        except Exception as e:
            print(f"Error downloading {ticker}: {e}")
            return pd.DataFrame()

    def download_universe(self, tickers: List[str], start_date: str,
                          end_date: str, force_refresh: bool = False) -> Dict[str, pd.DataFrame]:
        """Download multiple tickers"""
        data = {}
        failed = []
        for ticker in tickers:
            try:
                df = self.download_ticker(ticker, start_date, end_date, force_refresh)
                if not df.empty and len(df) > 220:  # Need 200+ for indicators + buffer
                    data[ticker] = df
                else:
                    failed.append(ticker)
            except Exception:
                failed.append(ticker)

        if failed:
            print(f"Skipped {len(failed)} tickers with insufficient data")

        return data


# ============================================================================
# INDICATOR CALCULATIONS - CORRECTED
# ============================================================================

class TechnicalIndicators:
    """Technical indicators - EXACT book methodology"""

    @staticmethod
    def sma(series: pd.Series, period: int) -> pd.Series:
        """Simple Moving Average"""
        return series.rolling(window=period, min_periods=period).mean()

    @staticmethod
    def rsi_wilder(series: pd.Series, period: int = 3) -> pd.Series:
        """
        CORRECTED: Wilder's RSI using exponential smoothing
        Book uses 3-day RSI < 50 to avoid overbought stocks

        This is THE critical fix - original used simple moving average
        """
        delta = series.diff()

        # Separate gains and losses
        gain = delta.where(delta > 0, 0)
        loss = -delta.where(delta < 0, 0)

        # Wilder's smoothing: exponential weighted moving average
        # alpha = 1/period gives the Wilder smoothing
        avg_gain = gain.ewm(alpha=1/period, min_periods=period, adjust=False).mean()
        avg_loss = loss.ewm(alpha=1/period, min_periods=period, adjust=False).mean()

        rs = avg_gain / avg_loss
        rsi = 100 - (100 / (1 + rs))

        return rsi

    @staticmethod
    def roc(series: pd.Series, period: int = 200) -> pd.Series:
        """
        Rate of Change (Momentum)
        Book: "highest rate of change over last 200 trading days"
        """
        return ((series - series.shift(period)) / series.shift(period)) * 100
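# Why ewm(alpha=1/period, adjust=False) reproduces Wilder's smoothing: it applies
#   avg_t = (avg_{t-1} * (period - 1) + x_t) / period,
# the recursion Wilder defined (differing only in how the first average is seeded).
# A quick sanity check with illustrative values (not part of the strategy itself):
#   s = pd.Series([10.0, 10.5, 10.2, 10.8, 11.0, 10.7, 11.2])
#   print(TechnicalIndicators.rsi_wilder(s, 3).round(2))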


# ============================================================================
# STRATEGY IMPLEMENTATION - CORRECTED BOOK RULES
# ============================================================================

class BensdorpWeeklyRotation:
    """
    Weekly Rotation Strategy - CORRECTED implementation

    CRITICAL DIFFERENCES FROM BROKEN VERSION:
    1. Uses Wilder's RSI (exponential), not SMA-based RSI
    2. Executes on MONDAY OPEN, not Friday close
    3. Top 10 selection FIRST, then RSI filter for NEW entries only
    4. Proper rotation: keep anything in top 10, exit anything that drops out

    Entry Rules (Friday evening analysis, Monday morning execution):
    1. Friday close: Check SPY > 200-day SMA (with 2% buffer)
    2. Friday close: Rank all stocks by 200-day ROC
    3. Friday close: Select top 10 by momentum
    4. Friday close: For NEW entries only, filter RSI < 50
    5. Monday open: Execute trades

    Exit Rules:
    1. Hold as long as stock remains in top 10 by ROC
    2. Exit when stock drops out of top 10
    3. No stop losses (rotation serves as exit)
    """

    def __init__(self, initial_capital: float = 10000):
        self.initial_capital = initial_capital
        self.capital = initial_capital
        self.positions = {}  # {ticker: shares}
        self.trades = []
        self.equity_curve = []
        self.indicators = TechnicalIndicators()

    def calculate_indicators(self, data: Dict[str, pd.DataFrame],
                             spy_data: pd.DataFrame) -> pd.DataFrame:
        """Calculate indicators - Friday close data"""

        # Need at least 200 days of SPY data
        if len(spy_data) < 200:
            return pd.DataFrame()

        # Calculate SPY market regime
        spy_sma = self.indicators.sma(spy_data['Close'], 200)
        spy_sma_band = spy_sma * 0.98  # 2% buffer

        # Check if SPY SMA is valid (not NaN)
        spy_sma_value = spy_sma.iloc[-1]
        if isinstance(spy_sma_value, pd.Series):
            spy_sma_value = spy_sma_value.iloc[0]
        if pd.isna(spy_sma_value):
            return pd.DataFrame()

        spy_close_value = spy_data['Close'].iloc[-1]
        if isinstance(spy_close_value, pd.Series):
            spy_close_value = spy_close_value.iloc[0]
        spy_close = float(spy_close_value)

        spy_band_value = spy_sma_band.iloc[-1]
        if isinstance(spy_band_value, pd.Series):
            spy_band_value = spy_band_value.iloc[0]
        spy_band = float(spy_band_value)

        indicator_data = []

        for ticker, df in data.items():
            if len(df) < 203:  # Need 200 for ROC + 3 for RSI
                continue

            try:
                # Calculate indicators using CORRECTED methods
                rsi_3 = self.indicators.rsi_wilder(df['Close'], 3)  # WILDER'S RSI
                roc_200 = self.indicators.roc(df['Close'], 200)

                # Get values
                last_rsi = float(rsi_3.iloc[-1])
                last_roc = float(roc_200.iloc[-1])
                last_close = float(df['Close'].iloc[-1])
                last_volume = float(df['Volume'].iloc[-1])

                # Skip if NaN
                if pd.isna(last_rsi) or pd.isna(last_roc):
                    continue

                # Calculate 20-day average volume for liquidity filter
                avg_volume_20 = float(df['Volume'].rolling(20).mean().iloc[-1])

                indicator_data.append({
                    'ticker': ticker,
                    'date': df.index[-1],
                    'close': last_close,
                    'volume': last_volume,
                    'avg_volume_20': avg_volume_20,
                    'rsi_3': last_rsi,
                    'roc_200': last_roc,
                    'spy_close': spy_close,
                    'spy_sma_band': spy_band
                })

            except Exception:
                continue

        return pd.DataFrame(indicator_data)

    def get_weekly_signals(self, indicators: pd.DataFrame) -> Tuple[List[str], List[str]]:
        """
        CORRECTED rotation logic - matches book exactly

        Key insight: "Solution C" from C# code:
        1. Rank ALL stocks by momentum
        2. Top 10 = target portfolio
        3. KEEP: anything we hold that's still in top 10
        4. ENTER: new positions from top 10, but ONLY if RSI < 50
        5. EXIT: anything not in top 10
        """

        if indicators.empty:
            return [], []

        # Extract SPY regime
        spy_close = float(indicators['spy_close'].iloc[0])
        spy_band = float(indicators['spy_sma_band'].iloc[0])

        # Check market regime: SPY > 200 SMA band
        if spy_close <= spy_band:
            # Bear market: exit everything
            return [], list(self.positions.keys())

        # Filter valid stocks (liquidity + price)
        valid = indicators[
            (indicators['close'] > 1.0) &
            (indicators['avg_volume_20'] > 1_000_000)
        ].copy()

        if valid.empty:
            return [], list(self.positions.keys())

        # STEP 1: Rank by 200-day ROC (momentum)
        valid = valid.sort_values('roc_200', ascending=False)

        # STEP 2: Top 10 by momentum = TARGET PORTFOLIO
        top_10 = valid.head(10)
        top_10_tickers = set(top_10['ticker'].values)

        # STEP 3: KEEP - positions we already hold that are still in top 10
        keeps = [t for t in self.positions.keys() if t in top_10_tickers]

        # STEP 4: ENTER - new positions from top 10 with RSI < 50 filter
        available_slots = 10 - len(keeps)

        # Filter top 10 for new entries: must have RSI < 50 and we don't already hold it
        entry_candidates = top_10[
            (~top_10['ticker'].isin(self.positions.keys())) &
            (top_10['rsi_3'] < 50)
        ]

        enters = entry_candidates['ticker'].head(available_slots).tolist()

        # STEP 5: EXIT - anything we hold that's NOT in top 10
        exits = [t for t in self.positions.keys() if t not in top_10_tickers]

        return enters, exits

    def execute_trades(self, friday_date: datetime, enters: List[str], exits: List[str],
                       friday_data: Dict[str, pd.DataFrame],
                       monday_data: Dict[str, pd.DataFrame]):
        """
        CORRECTED: Execute trades at MONDAY OPEN, not Friday close

        friday_date: Date of signal generation
        friday_data: Data up to and including Friday (for portfolio valuation)
        monday_data: Data including Monday (for execution prices)
        """

        # Calculate portfolio value using Friday close prices
        portfolio_value = self.capital
        for ticker, shares in self.positions.items():
            if ticker in friday_data:
                try:
                    price = float(friday_data[ticker]['Close'].iloc[-1])
                    if not pd.isna(price):
                        portfolio_value += shares * price
                except (ValueError, TypeError, IndexError):
                    pass

        # Execute exits first (Monday open price)
        for ticker in exits:
            if ticker in self.positions and ticker in monday_data:
                shares = self.positions[ticker]
                try:
                    # Get Monday's open price
                    monday_open = float(monday_data[ticker]['Open'].iloc[-1])
                    if pd.isna(monday_open):
                        continue
                except (ValueError, TypeError, IndexError, KeyError):
                    # If no Open price, use Close
                    try:
                        monday_open = float(monday_data[ticker]['Close'].iloc[-1])
                    except (ValueError, TypeError, IndexError, KeyError):
                        continue

                proceeds = shares * monday_open
                self.capital += proceeds

                self.trades.append({
                    'date': monday_data[ticker].index[-1],  # Actual Monday date
                    'ticker': ticker,
                    'action': 'SELL',
                    'shares': shares,
                    'price': monday_open,
                    'value': proceeds
                })

                del self.positions[ticker]

        # Execute entries (Monday open price)
        if enters:
            position_size = portfolio_value * 0.10  # 10% per position

            for ticker in enters:
                if ticker in monday_data:
                    try:
                        # Get Monday's open price
                        monday_open = float(monday_data[ticker]['Open'].iloc[-1])
                        if pd.isna(monday_open) or monday_open <= 0:
                            continue
                    except (ValueError, TypeError, IndexError, KeyError):
                        try:
                            monday_open = float(monday_data[ticker]['Close'].iloc[-1])
                        except (ValueError, TypeError, IndexError, KeyError):
                            continue

                    shares = int(position_size / monday_open)
                    cost = shares * monday_open

                    if self.capital >= cost and shares > 0:
                        self.positions[ticker] = shares
                        self.capital -= cost

                        self.trades.append({
                            'date': monday_data[ticker].index[-1],  # Actual Monday date
                            'ticker': ticker,
                            'action': 'BUY',
                            'shares': shares,
                            'price': monday_open,
                            'value': cost
                        })

    def record_equity(self, date: datetime, data: Dict[str, pd.DataFrame]):
        """Record portfolio value at end of day"""
        portfolio_value = self.capital

        for ticker, shares in self.positions.items():
            if ticker in data:
                try:
                    price = float(data[ticker]['Close'].iloc[-1])
                    if not pd.isna(price):
                        portfolio_value += shares * price
                except (ValueError, TypeError, IndexError):
                    pass

        self.equity_curve.append({
            'date': date,
            'equity': float(portfolio_value),
            'cash': float(self.capital),
            'num_positions': len(self.positions)
        })


# ============================================================================
# BACKTESTING ENGINE - CORRECTED
# ============================================================================

class Backtester:
    """Backtest engine with CORRECTED execution timing"""

    def __init__(self, strategy: BensdorpWeeklyRotation, data_db: MarketDataDB):
        self.strategy = strategy
        self.data_db = data_db

    def run(self, universe: List[str], start_date: str, end_date: str,
            benchmark: str = 'SPY') -> pd.DataFrame:
        """Run backtest with MONDAY OPEN execution"""

        print(f"\n{'='*70}")
        print("BACKTEST: Bensdorp Weekly Rotation (CORRECTED)")
        print(f"Period: {start_date} to {end_date}")
        print(f"Universe: {len(universe)} stocks")
        print(f"Initial Capital: ${self.strategy.initial_capital:,.2f}")
        print(f"{'='*70}\n")

        # Download data
        print("Loading market data...")
        data = self.data_db.download_universe(universe, start_date, end_date)
        spy_data = self.data_db.download_ticker(benchmark, start_date, end_date)

        print(f"Loaded {len(data)} stocks with sufficient history\n")

        # Find all Fridays
        all_dates = spy_data.index
        fridays = [date for date in all_dates if date.dayofweek == 4]  # Friday = 4

        print(f"Simulating {len(fridays)} weeks of trading...")
        print("Each week: Friday analysis → Monday execution\n")

        trades_count = 0
        for i, friday in enumerate(fridays):
            # Get data up to Friday close
            historical_data = {
                ticker: df.loc[:friday]
                for ticker, df in data.items()
                if friday in df.index
            }
            spy_historical = spy_data.loc[:friday]

            # Skip warmup period
            if len(spy_historical) < 200:
                continue

            # Calculate indicators (Friday close)
            indicators = self.strategy.calculate_indicators(
                historical_data, spy_historical
            )

            if indicators.empty:
                # Record equity even if no signals
                self.strategy.record_equity(friday, historical_data)
                continue

            # Get signals (Friday evening)
            enters, exits = self.strategy.get_weekly_signals(indicators)

            # Find next Monday for execution
            next_monday = None
            for future_date in all_dates[all_dates > friday]:
                if future_date.dayofweek == 0:  # Monday = 0
                    next_monday = future_date
                    break

            # If no Monday found (end of data), use next trading day
            if next_monday is None:
                next_available = all_dates[all_dates > friday]
                if len(next_available) > 0:
                    next_monday = next_available[0]
                else:
                    # End of data
                    self.strategy.record_equity(friday, historical_data)
                    continue

            # Get Monday data for execution
            monday_data = {
                ticker: df.loc[:next_monday]
                for ticker, df in data.items()
                if next_monday in df.index
            }

            # Execute trades (Monday open)
            if enters or exits:
                self.strategy.execute_trades(
                    friday, enters, exits,
                    historical_data, monday_data
                )
                trades_count += len(enters) + len(exits)

            # Record equity (use latest available data)
            latest_data = monday_data if monday_data else historical_data
            latest_date = next_monday if next_monday else friday
            self.strategy.record_equity(latest_date, latest_data)

            # Progress
            if (i + 1) % 50 == 0:
                current_equity = self.strategy.equity_curve[-1]['equity']
                print(f"  Week {i+1}/{len(fridays)}: ${current_equity:,.0f}, "
                      f"{len(self.strategy.positions)} positions, {trades_count} total trades")

        print(f"\nBacktest complete! Total trades: {trades_count}\n")

        if not self.strategy.equity_curve:
            raise ValueError("No equity data recorded!")

        return pd.DataFrame(self.strategy.equity_curve).set_index('date')


# ============================================================================
# PERFORMANCE ANALYTICS
# ============================================================================

class PerformanceAnalytics:
    """Performance metrics calculation"""

    @staticmethod
    def calculate_metrics(equity_curve: pd.DataFrame,
                          benchmark_curve: pd.Series,
                          risk_free_rate: float = 0.02) -> Dict:
        """Calculate all performance metrics"""

        strategy_returns = equity_curve['equity'].pct_change().dropna()
        benchmark_returns = benchmark_curve.pct_change().dropna()

        # Align dates
        common_dates = strategy_returns.index.intersection(benchmark_returns.index)
        strategy_returns = strategy_returns.loc[common_dates]
        benchmark_returns = benchmark_returns.loc[common_dates]

        # CAGR
        total_years = (equity_curve.index[-1] - equity_curve.index[0]).days / 365.25
        strategy_cagr = float(
            (equity_curve['equity'].iloc[-1] / equity_curve['equity'].iloc[0])
            ** (1 / total_years) - 1
        ) * 100

        benchmark_cagr = float(
            (benchmark_curve.iloc[-1] / benchmark_curve.iloc[0])
            ** (1 / total_years) - 1
        ) * 100

        # Maximum Drawdown
        cummax = equity_curve['equity'].cummax()
        drawdown = (equity_curve['equity'] - cummax) / cummax * 100
        max_dd = float(drawdown.min())

        bench_cummax = benchmark_curve.cummax()
        bench_drawdown = (benchmark_curve - bench_cummax) / bench_cummax * 100
        bench_max_dd = float(bench_drawdown.min())

        # MAR Ratio
        mar_ratio = abs(strategy_cagr / max_dd) if max_dd != 0 else 0
        bench_mar = abs(benchmark_cagr / bench_max_dd) if bench_max_dd != 0 else 0

        # Sharpe Ratio
        excess_returns = strategy_returns - (risk_free_rate / 252)
        sharpe = float(np.sqrt(252) * excess_returns.mean() / strategy_returns.std())

        bench_excess = benchmark_returns - (risk_free_rate / 252)
        bench_sharpe = float(np.sqrt(252) * bench_excess.mean() / benchmark_returns.std())

        # Sortino Ratio
        downside_returns = strategy_returns[strategy_returns < 0]
        sortino = (
            float(np.sqrt(252) * excess_returns.mean() / downside_returns.std())
            if len(downside_returns) > 0 else 0
        )

        # Total Return
        total_return = float(
            (equity_curve['equity'].iloc[-1] / equity_curve['equity'].iloc[0] - 1) * 100
        )
        bench_total_return = float(
            (benchmark_curve.iloc[-1] / benchmark_curve.iloc[0] - 1) * 100
        )

        return {
            'strategy_cagr': strategy_cagr,
            'benchmark_cagr': benchmark_cagr,
            'strategy_total_return': total_return,
            'benchmark_total_return': bench_total_return,
            'strategy_max_dd': max_dd,
            'benchmark_max_dd': bench_max_dd,
            'mar_ratio': mar_ratio,
            'benchmark_mar': bench_mar,
            'sharpe_ratio': sharpe,
            'benchmark_sharpe': bench_sharpe,
            'sortino_ratio': sortino,
            'total_trades': len(strategy_returns),
            'volatility': float(strategy_returns.std() * np.sqrt(252) * 100)
        }

    @staticmethod
    def print_metrics(metrics: Dict):
        """Pretty print metrics"""

        print(f"\n{'='*70}")
        print("PERFORMANCE SUMMARY")
        print(f"{'='*70}\n")

        print(f"{'Total Return':<30} Strategy: {metrics['strategy_total_return']:>8.2f}% | Benchmark: {metrics['benchmark_total_return']:>8.2f}%")
        print(f"{'CAGR':<30} Strategy: {metrics['strategy_cagr']:>8.2f}% | Benchmark: {metrics['benchmark_cagr']:>8.2f}%")
        print(f"{'Maximum Drawdown':<30} Strategy: {metrics['strategy_max_dd']:>8.2f}% | Benchmark: {metrics['benchmark_max_dd']:>8.2f}%")
        print(f"{'MAR Ratio (CAGR/MaxDD)':<30} Strategy: {metrics['mar_ratio']:>8.2f} | Benchmark: {metrics['benchmark_mar']:>8.2f}")
        print(f"{'Sharpe Ratio':<30} Strategy: {metrics['sharpe_ratio']:>8.2f} | Benchmark: {metrics['benchmark_sharpe']:>8.2f}")
        print(f"{'Sortino Ratio':<30} Strategy: {metrics['sortino_ratio']:>8.2f}")
        print(f"{'Volatility (Annualized)':<30} Strategy: {metrics['volatility']:>8.2f}%")

        print(f"\n{'='*70}")
        print("KEY INSIGHTS:")
        print(f"{'='*70}")

        outperformance = metrics['strategy_cagr'] - metrics['benchmark_cagr']
        dd_improvement = abs(metrics['strategy_max_dd']) - abs(metrics['benchmark_max_dd'])

        print(f"✓ Outperformance: {outperformance:+.2f}% CAGR vs benchmark")
        print(f"✓ Drawdown difference: {dd_improvement:+.2f}% vs benchmark")
        print(f"✓ Risk-adjusted (MAR): {(metrics['mar_ratio']/metrics['benchmark_mar']-1)*100:+.1f}% vs benchmark")
        print(f"✓ Risk-adjusted (Sharpe): {(metrics['sharpe_ratio']/metrics['benchmark_sharpe']-1)*100:+.1f}% vs benchmark")
        print(f"{'='*70}\n")


# ============================================================================
# VISUALIZATION
# ============================================================================

class StrategyVisualizer:
    """Professional visualizations"""

    @staticmethod
    def plot_results(equity_curve: pd.DataFrame,
                     benchmark_curve: pd.Series,
                     trades: List[Dict]):
        """Create comprehensive charts"""

        fig, axes = plt.subplots(3, 1, figsize=(14, 10))
        fig.suptitle('Bensdorp Weekly Rotation Strategy - CORRECTED Backtest',
                     fontsize=16, fontweight='bold')

        # Equity curves
        ax1 = axes[0]
        ax1.plot(equity_curve.index, equity_curve['equity'],
                 label='Strategy (CORRECTED)', linewidth=2, color='#2E86AB')

        benchmark_normalized = (
            benchmark_curve / benchmark_curve.iloc[0] * equity_curve['equity'].iloc[0]
        )
        ax1.plot(benchmark_curve.index, benchmark_normalized,
                 label='S&P 500 (Buy & Hold)', linewidth=2,
                 color='#A23B72', alpha=0.7)

        ax1.set_ylabel('Portfolio Value ($)', fontsize=11, fontweight='bold')
        ax1.set_title('Equity Curve Comparison', fontsize=12, fontweight='bold')
        ax1.legend(loc='upper left', fontsize=10)
        ax1.grid(True, alpha=0.3)
        ax1.yaxis.set_major_formatter(plt.FuncFormatter(lambda x, p: f'${x/1000:.0f}K'))

        # Drawdown
        ax2 = axes[1]
        cummax = equity_curve['equity'].cummax()
        drawdown = (equity_curve['equity'] - cummax) / cummax * 100

        ax2.fill_between(drawdown.index, drawdown, 0,
                         color='#F18F01', alpha=0.5, label='Drawdown')
        ax2.set_ylabel('Drawdown (%)', fontsize=11, fontweight='bold')
        ax2.set_title('Strategy Drawdown', fontsize=12, fontweight='bold')
        ax2.legend(loc='lower left', fontsize=10)
        ax2.grid(True, alpha=0.3)

        # Positions
        ax3 = axes[2]
        ax3.plot(equity_curve.index, equity_curve['num_positions'],
                 linewidth=2, color='#6A994E')
        ax3.set_ylabel('# Positions', fontsize=11, fontweight='bold')
        ax3.set_xlabel('Date', fontsize=11, fontweight='bold')
        ax3.set_title('Portfolio Exposure', fontsize=12, fontweight='bold')
        ax3.set_ylim(0, 11)
        ax3.grid(True, alpha=0.3)

        plt.tight_layout()
        plt.savefig('backtest_CORRECTED.png', dpi=150, bbox_inches='tight')
        print("✓ Chart saved as 'backtest_CORRECTED.png'")
        plt.show()


# ============================================================================
# MAIN EXECUTION
# ============================================================================

def main():
    """Run corrected backtest"""

    # Test both the book period AND recent period
    START_DATE = '2020-01-01'  # Book's period
    # START_DATE = '2020-01-01'  # Recent period for comparison
    END_DATE = datetime.now().strftime('%Y-%m-%d')
    INITIAL_CAPITAL = 10000

    # S&P 500 sample
    SP500_SAMPLE = [
"NVDA","AAPL","MSFT","AMZN","GOOGL","GOOG","AVGO","META","TSLA","BRK.B","LLY","WMT","JPM","V","ORCL","JNJ","XOM","MA","NFLX","COST","PLTR","ABBV","BAC","AMD","HD","PG","KO","GE","CVX","CSCO","UNH","IBM","MU","MS","WFC","CAT","MRK","AXP","GS","PM","TMUS","RTX","CRM","ABT","TMO","MCD","APP","PEP","AMAT","ISRG","LRCX","INTC","DIS","LIN","C","T","AMGN","QCOM","UBER","NEE","INTU","APH","NOW","VZ","TJX","SCHW","BLK","ANET","ACN","DHR","BKNG","GEV","GILD","TXN","KLAC","SPGI","BSX","PFE","SYK","BA","COF","WELL","LOW","UNP","ADBE","PGR","MDT","ETN","PANW","ADI","CRWD","DE","HON","PLD","CB","HCA","BX","CEG","COP","HOOD","KKR","PH","VRTX","MCK","ADP","LMT","CME","CVS","BMY","MO","NEM","SO","CMCSA","NKE","SBUX","DUK","TT","MMM","MMC","GD","DELL","ICE","DASH","MCO","WM","ORLY","SHW","CDNS","SNPS","AMT","MAR","UPS","HWM","REGN","NOC","BK","ECL","USB","APO","TDG","AON","PNC","WMB","CTAS","EMR","MNST","ELV","CI","RCL","MDLZ","EQIX","ITW","ABNB","GLW","COIN","JCI","COR","CMI","GM","PWR","TEL","RSG","HLT","AZO","NSC","CSX","ADSK","TRV","FDX","CL","AEP","AJG","MSI","FCX","FTNT","KMI","SPG","WBD","EOG","SRE","TFC","STX","VST","MPC","PYPL","IDXX","APD","ROST","AFL","DDOG","PSX","WDC","WDAY","ZTS","ALL","VLO","SLB","PCAR","BDX","DLR","O","F","D","URI","NDAQ","LHX","EA","MET","NXPI","BKR","EW","CAH","CBRE","PSA","ROP","XEL","LVS","OKE","DHI","FAST","EXC","TTWO","CARR","CMG","CTVA","AME","FANG","GWW","KR","MPWR","ROK","A","AMP","ETR","AXON","MSCI","DAL","FICO","OXY","TGT","YUM","AIG","PEG","PAYX","SQ","IQV","CCI","VMC","HIG","KDP","CPRT","EQT","TRGP","PRU","VTR","GRMN","HSY","EBAY","CTSH","MLM","NUE","SYY","GEHC","KMB","ON","EFX","GIS","STZ","AVB","DD","IRM","DTE","KEYS","BR","AWK","FITB","VICI","ACGL","NDSN","ODFL","WAB","PCG","DOW","FTV","TROW","SYF","TER","AEE","ZBH","HUBB","BIIB","TDY","ZBRA","CHTR","PPG","OTIS","DXCM","WTW","CTLT","ARES","WEC","LYB","MCHP","CSGP","WY","TSCO","HST","AZN","RMD","FSLR","DOV","ANSS","NTNX","EA","CTRA","KHC","PSTG","LH","INVH","KVUE","CNC","SMCI","RJF","LYV","GOOG","ILMN","DVA","ESS","WAT","TRMB","SWK","LUV","WST","AES","LDOS","FE","DRI","GPC","AVY","HOLX","TTWO","EXPD","CMS","BLDR","ALGN","STLD","ARE","EG","BRO","ES","MKC","JBHT","CNP","IT","WDC","NVR","NTRS","EPAM","POOL","BALL","HBAN","BF.B","EXPE","VTRS","PKG","J","RF","PODD","CAG","GL","STE","CFG","AKAM","BBWI","EQR","SBAC","TPR","K","DAY","FDS","NTAP","IP","ENPH","MGM","SWKS","MAS","COO","DFS","AIZ","TECH","TYL","PAYC","CHRW","MRNA","KEY","TXT","MAA","JKHY","HRL","ULTA","LNT","UDR","NI","HII","KIM","ALLE","KMX","RVTY","CE","DGX","REG","WBA","AMCR","CPT","JNPR","MTCH","APA","BXP","EVRG","RL","PFG","HSIC","BWA","ALB","SOLV","PARA","CRL","CPB","IVZ","NWS","NWSA","MOH","WYNN","HAS","PNW","BG","FRT","FOXA","FOX","VFC","EXE","HOOD","DASH","GEV","APP"
]

    # Initialize system
    data_db = MarketDataDB()
    strategy = BensdorpWeeklyRotation(initial_capital=INITIAL_CAPITAL)
    backtester = Backtester(strategy, data_db)

    # Run backtest
    equity_curve = backtester.run(
        universe=SP500_SAMPLE,
        start_date=START_DATE,
        end_date=END_DATE,
        benchmark='SPY'
    )

    # Load benchmark
    benchmark = data_db.download_ticker('SPY', START_DATE, END_DATE)

    # Calculate metrics
    analytics = PerformanceAnalytics()
    metrics = analytics.calculate_metrics(equity_curve, benchmark['Close'])

    # Print results
    analytics.print_metrics(metrics)

    # Visualize
    visualizer = StrategyVisualizer()
    visualizer.plot_results(equity_curve, benchmark['Close'], strategy.trades)

    # Save trade log
    trades_df = pd.DataFrame(strategy.trades)
    trades_df.to_csv('trade_log_CORRECTED.csv', index=False)
    print("✓ Trade log saved as 'trade_log_CORRECTED.csv'\n")

    return strategy, equity_curve, metrics


if __name__ == "__main__":
    strategy, results, metrics = main()

    print("\n" + "="*70)
    print("CORRECTED BACKTEST COMPLETE")
    print("="*70)
    print("\nCRITICAL FIXES APPLIED:")
    print(" ✓ Wilder's RSI (exponential smoothing)")
    print(" ✓ Monday open execution (not Friday close)")
    print(" ✓ Correct rotation logic (top 10 first, then RSI filter)")
    print(" ✓ Proper position sizing and timing")
    print("\nFiles generated:")
    print(" • backtest_CORRECTED.png")
    print(" • trade_log_CORRECTED.csv")
    print(" • ./market_data/ (cached data)")
    print("="*70 + "\n")

sentiment 1.00
2 days ago • u/Happy-n-Healthy • r/wallstreetbets • weekend_discussion_thread_for_the_weekend_of • C
That's strange, I have CAG calls and they have been behaving erratically. I wonder what's going on....
sentiment 0.18
2 days ago • u/GoZukkYourself • r/wallstreetbets • weekend_discussion_thread_for_the_weekend_of • C
You have 10,000 shares of CAG? Why?
sentiment 0.37
2 days ago • u/Meanboy_og • r/wallstreetbets • weekend_discussion_thread_for_the_weekend_of • C
I think Webull is mad I sold 100 covered calls on CAG though. Wouldn’t let me leg out in profit on the calls lol. Said I would mess up the theme or some crap. Very strange. It’s not margin and I didn’t use the 5k. This is a first for me.
sentiment 0.14
2 days ago • u/Meanboy_og • r/wallstreetbets • weekend_discussion_thread_for_the_weekend_of • C
Well I’m heavy in CAG and just bought 200 shares of UNH and sold a put at a 325 strike. So pray for me please. I like money.
sentiment 0.86
2 days ago • u/Happy-n-Healthy • r/wallstreetbets • daily_discussion_thread_for_december_05_2025 • C
CAG the pump has begun get calls now while they're cheap AF!
sentiment 0.42
2 days ago • u/Happy-n-Healthy • r/wallstreetbets • daily_discussion_thread_for_december_05_2025 • C
NFLX LOL and you laughed at me for buying CAG calls
sentiment 0.67
2 days ago • u/The_Theorist_Guy • r/algotrading • i_backtested_forwardtested_and_actively_trade_a • Strategy • B
I have developed a mathematical framework for decision-making in stochastic environments that challenges the "90% of traders fail" statistic. My bot, testing this framework, currently holds a 100% closed-trade win rate by mathematically managing entry/exit without realizing losses. I am preparing a white paper for arXiv and looking for feedback on the theory.
This didn’t start as a scheme to get rich (or even to make money at all). In fact, the strategy is slow: trades are small (an average trade is 1-10 shares) and low frequency (~0.5-1x/day).
It started with a question: How can one maximize survival in net-negative expectancy environments?
The best-known advice from Warren Buffett and many others of that caliber is that Buy & Hold is the gold standard. We also know that 90-99% of active traders fail. Several friends of mine have tried their hand at trading and blown their accounts/failed. So I became obsessed with solving this problem. I developed a decision-making framework for stochastic environments that minimizes the probability of ruin to near zero.
I have been deploying this framework via a custom trading bot and manual options trading.
I wanted to prove that this wasn't just about picking the 'right' stocks, but about how you manage them. So, I did a blind test: I had my wife pick 9 random numbers from 1 to 50 and applied my strategy to whichever stocks fell on those lines in the Webull Top 50 list.
The bot has been running on this account since May 28, 2025. It strictly adheres to the rule of never selling at a loss and has a defined limit of exposure/allocating max capital per stock.
Automated stock trades: 100% win rate. Manual options trades: 99% win rate (human error).
I’ve included a screenshot of the account showing the tickers and the P/L. I’ve had to blur out the specific share counts and total equity. As you can see, the Sofi position is down 1.5k and that is my "all-in" capital limit. I'm completely unaffected by it. I could sell at a loss or simply hold. Look at the cash balance; it's basically the 80k initial seed capital, which means I am currently only exposed to the market with the profit gained since May. Effectively, I am playing with the house money at the moment. The math behind the position sizing is the 'secret sauce' that makes this work, and after pouring years of my life into solving this, I need to protect the intellectual property for now.
To be considered a "successful" trader by industry standards, you only need a 60-70% win rate. I found this bar to be unacceptable. My data suggests that by removing the human psychological need to stop-loss and replacing it with a mathematical resource allocation strategy, we can substantially reduce the failure rate.
Why does this matter? I believe the normalization of losing in financial markets (and in general) has a net-negative effect on society. What happens to a person's psychology when they lose money? How does that frustration affect their interactions with family and friends? What if the "average person" could participate in the market or any decision making process with a mathematically high probability of success?
I posit that increasing the win rate of the average participant would have a cumulative, beneficial effect on society’s collective mental health and stability.
My research is currently self-funded by the trading results (the system pays for itself). I am currently compiling the data into a white paper that I intend to publish on arXiv.
I am posting this here because I want to start a discussion about it. I have nobody in my personal life who comprehends the applicability of this simple equation, and I believe this approach has value beyond profit/money (which I care nothing for), it has value for everyone that is failing and trying to survive a rigged/unpredictable/chaotic/volatile system.
Here is the list of stocks that were back-tested using the exact same equation/algorithm -ABEV, ABT, AEG, AEP, AQN, BCE, BJ, BMY, BTG, C, CAH, CAG, CHD, CI, CL, COMP, CPB, COST, CVS, D, DG, DGX, DLTR, DUK, ED, ERIC, ES, ETR, EXC, F, GIS, GNW, HL, HRL, HUM, INTC, KDP, KHC, KMB, KR, MDT, MRK, NMR, NNN, NOK, O, PBI, PFE, PG, PLUG, RSG, ROL, SAN, SFM, SILJ, SIRI, SO, STAG, T, TAP, TEF, TU, UMC, VZ, WBA, WEC, WM, WMT, XEL
sentiment 0.99

