Day 22: Monkey Market
Megathread guidelines
- Keep top-level comments as solutions only; if you want to say something other than a solution, put it in a new post. (Replies to comments can be whatever.)
- You can post code in code blocks by using three backticks, then the code, and then three backticks again, or use something such as https://topaz.github.io/paste/ if you prefer sending it through a URL
FAQ
- What is this?: Here is a post with more details: https://programming.dev/post/6637268
- Where do I participate?: https://adventofcode.com/
- Is there a leaderboard for the community?: We have a programming.dev leaderboard with the info on how to join in this post: https://programming.dev/post/6631465
Kotlin
I experimented a lot to improve the runtime and now I am happy with my solution. The JVM doesn’t optimize code that quickly :)
I have implemented a few optimizations to the transformations so that they operate on arrays directly (the file with the implementations is here)
Code
```kotlin
class Day22 {
    private fun nextSecretNumber(start: Long): Long {
        // Modulo 2^24 is the same as "and" with 2^24 - 1
        val pruneMask = 16777216L - 1L
        // * 64 is the same as shifting left by 6
        val mul64 = ((start shl 6) xor start) and pruneMask
        // / 32 is the same as shifting right by 5
        val div32 = ((mul64 shr 5) xor mul64) and pruneMask
        // * 2048 is the same as shifting left by 11
        val mul2048 = ((div32 shl 11) xor div32) and pruneMask
        return mul2048
    }

    fun part1(inputFile: String): String {
        val secretNumbers = readResourceLines(inputFile)
            .map { it.toLong() }
            .toLongArray()

        repeat(NUMBERS_PER_DAY) {
            for (i in secretNumbers.indices) {
                secretNumbers[i] = nextSecretNumber(secretNumbers[i])
            }
        }

        return secretNumbers.sum().toString()
    }

    fun part2(inputFile: String): String {
        // There is a different sample input for part 2
        val input = if (inputFile.endsWith("sample")) {
            readResourceLines(inputFile + "2")
        } else {
            readResourceLines(inputFile)
        }

        val buyers = input
            .map {
                LongArray(NUMBERS_PER_DAY + 1).apply {
                    this[0] = it.toLong()
                    for (i in 1..NUMBERS_PER_DAY) {
                        this[i] = nextSecretNumber(this[i - 1])
                    }
                }
            }

        // Calculate the prices and price differences for each buyer.
        // The pairs are the price (the ones digit) and the key/unique value of each sequence of differences
        val differences = buyers
            .map { secretNumbers ->
                // Get the ones digit
                val prices = secretNumbers.mapToIntArray { it.toInt() % 10 }
                // Get the differences between each number
                val differenceKeys = prices
                    .zipWithNext { a, b -> (b - a) }
                    // Transform the differences to a singular unique value (integer)
                    .mapWindowed(4) { sequence, from, _ ->
                        // Bring each byte from -9 to 9 to 0 to 18, multiply by 19^i and sum
                        // This generates a unique value for each sequence of 4 differences
                        (sequence[from + 0] + 9) +
                            (sequence[from + 1] + 9) * 19 +
                            (sequence[from + 2] + 9) * 361 +
                            (sequence[from + 3] + 9) * 6859
                    }
                // Drop the first 4 prices, as they are not relevant (initial secret number price and 3 next prices)
                prices.dropFromArray(4) to differenceKeys
            }

        // Cache to hold the value/sum of each sequence of 4 differences
        val sequenceCache = IntArray(NUMBER_OF_SEQUENCES)
        val seenSequence = BooleanArray(NUMBER_OF_SEQUENCES)

        // Go through each sequence of differences
        // and get their *first* prices of each sequence.
        // Sum them in the cache.
        for ((prices, priceDifferences) in differences) {
            // Reset the "seen" array
            Arrays.fill(seenSequence, false)
            for (index in priceDifferences.indices) {
                val key = priceDifferences[index]
                if (!seenSequence[key]) {
                    sequenceCache[key] += prices[index]
                    seenSequence[key] = true
                }
            }
        }

        return sequenceCache.max().toString()
    }

    companion object {
        private const val NUMBERS_PER_DAY = 2000

        // 19^4, the differences range from -9 to 9 and the sequences are 4 numbers long
        private const val NUMBER_OF_SEQUENCES = 19 * 19 * 19 * 19
    }
}
```
Rust
Part 2 is crazy slow, but it works, so that's cool :D
Edit: Gonna fix this, because pt2 is stupid. Much better now: 2.4s. Still slow, but not 6 minutes slow.
```rust
#[cfg(test)]
mod tests {
    use std::collections::HashMap;
    use std::iter::zip;

    fn step(start: usize) -> usize {
        let mut next = start;
        next = ((next * 64) ^ next) % 16777216;
        next = ((next / 32) ^ next) % 16777216;
        next = ((next * 2048) ^ next) % 16777216;
        next
    }

    fn simulate(initial: usize) -> usize {
        let mut next = initial;
        for _ in 0..2000 {
            next = step(next);
        }
        next
    }

    #[test]
    fn test_step() {
        assert_eq!(15887950, step(123));
    }

    #[test]
    fn test_simulate() {
        assert_eq!(8685429, simulate(1));
    }

    #[test]
    fn day22_part1_test() {
        let input = std::fs::read_to_string("src/input/day_22.txt").unwrap();

        let initial_values = input
            .split("\n")
            .map(|s| s.parse::<usize>().unwrap())
            .collect::<Vec<usize>>();

        let mut total = 0;
        for value in initial_values {
            total += simulate(value);
        }

        println!("{}", total);
    }

    #[test]
    fn day22_part2_test() {
        let input = std::fs::read_to_string("src/input/day_22.txt").unwrap();

        let initial_values = input
            .split("\n")
            .map(|s| s.parse::<usize>().unwrap())
            .collect::<Vec<usize>>();

        let mut all_deltas = vec![];
        let mut all_values = vec![];

        for value in initial_values {
            let mut deltas = String::with_capacity(2000);
            let mut values = vec![];
            let mut prev = value;
            for _ in 0..2000 {
                let next = step(prev);
                values.push(next % 10);
                deltas.push((10u8 + b'A' + ((prev % 10) as u8) - ((next % 10) as u8)) as char);
                prev = next;
            }
            all_deltas.push(deltas);
            all_values.push(values);
        }

        let mut totals = HashMap::with_capacity(100000);
        for (delta, value) in zip(&all_deltas, &all_values) {
            let mut cache = HashMap::with_capacity(2000);
            for j in 0..delta.len() - 4 {
                let seq = &delta[j..j + 4];
                let bananas = value[j + 3];
                cache.entry(seq).or_insert(bananas);
            }
            for (key, value) in cache {
                *totals.entry(key).or_insert(0) += value;
            }
        }

        let max_bananas = totals.values().max().unwrap();
        println!("{}", max_bananas);
    }
}
```
Six minutes? 😅 I was feeling crappy about my 30 seconds (my naive big O cubed(?) logic means my code spends most of its time testing array equalities - 72 billion samples in the flamegraph!)
Most of my time is wasted on hashmap stuff, and on the processing into the string, which really isn't needed anymore. :/
Have you tried gxhash or one of the other non-cryptographic hashers?
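For anyone curious, the swap is mostly a type-alias change. A rough sketch, assuming the gxhash crate (which exposes `HashMap`/`HashMapExt` drop-in aliases, as the gxhash-based solution further down uses); the function name and the delta-string/price layout are made up to mirror the parent comment's data, and rustc-hash's `FxHashMap` would plug in the same way:

```rust
// Rough sketch only: same tallying logic as the parent comment,
// just with std's SipHash maps swapped for gxhash's aliases.
use gxhash::{HashMap, HashMapExt}; // drop-in aliases around std's HashMap

fn best_sequence_total(all_deltas: &[String], all_values: &[Vec<usize>]) -> usize {
    let mut totals: HashMap<&str, usize> = HashMap::new();
    for (deltas, values) in all_deltas.iter().zip(all_values) {
        // First price seen for each 4-change window of this buyer.
        let mut first_seen: HashMap<&str, usize> = HashMap::new();
        for j in 0..=deltas.len() - 4 {
            first_seen.entry(&deltas[j..j + 4]).or_insert(values[j + 3]);
        }
        for (seq, bananas) in first_seen {
            *totals.entry(seq).or_insert(0) += bananas;
        }
    }
    *totals.values().max().unwrap_or(&0)
}
```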
Uiua
It’s been a while since I posted one of these, but I thought this would be straightforward in Uiua. Turns out that bitwise operations are a bit (haha) of a pain, so the `Rng` operation is very slow (4 seconds for live data). I took this as an opportunity to play with the `⧈` (stencil) operator, which probably slowed things down too.

```uiua
# Data ← 1_10_100_2024
Data ← 1_2_3_2024
Xor  ← °⋯◿2⬚0+∩⋯ # Bitwise xor of two numbers.
Rng  ← ⊙◌◿,Xor×2048.◿,Xor⌊÷32.◿,Xor×64.⊙16777216
Runs ← ⍉(⇌[⍥(Rng.)])2000 Data # Should be constant.
First ← (
  ⊟⊂0⧈₂/-.◿10 ↘¯1                 # Build run, gen pair diffs
  ⍣⊢0▽⊸>0⊢⧈(⨬⋅0⊣≍¯2_1_¯1_3°⊟) 2_4 # Take first value for matching diffs.
)
&p /+≡⊣.Runs
&p /+≡First
```
Rust
Nice breather today (still traumatized from the robots). At some point I thought you had to exploit some special property of the pseudorandom function, but no: just collect all the values, keep one big table indexed by difference sequence, and take the maximum value in that table at the end. Part 1 takes 6.7ms, part 2 19.2ms.
Solution
```rust
fn step(n: u32) -> u32 {
    let a = (n ^ (n << 6)) % (1 << 24);
    let b = a ^ (a >> 5);
    (b ^ (b << 11)) % (1 << 24)
}

fn part1(input: String) {
    let sum = input
        .lines()
        .map(|l| {
            let n = l.parse().unwrap();
            (0..2000).fold(n, |acc, _| step(acc)) as u64
        })
        // More than 2¹⁰ 24-bit numbers requires 35 bits
        .sum::<u64>();
    println!("{sum}");
}

const N_SEQUENCES: usize = 19usize.pow(4);

fn sequence_key(sequence: &[i8]) -> usize {
    sequence
        .iter()
        .enumerate()
        .map(|(i, x)| (x + 9) as usize * 19usize.pow(i as u32))
        .sum()
}

fn part2(input: String) {
    // Table for collecting the amount of bananas for every possible sequence
    let mut table = vec![0; N_SEQUENCES];
    // Mark the sequences we encountered in a round to ensure that only the first occurence is used
    let mut seen = vec![false; N_SEQUENCES];
    for l in input.lines() {
        let n = l.parse().unwrap();
        let (diffs, prices): (Vec<i8>, Vec<u8>) = (0..2000)
            .scan(n, |acc, _| {
                let next = step(*acc);
                let diff = (next % 10) as i8 - (*acc % 10) as i8;
                *acc = next;
                Some((diff, (next % 10) as u8))
            })
            .unzip();

        for (window, price) in diffs.windows(4).zip(prices.iter().skip(3)) {
            let key = sequence_key(window);
            if !seen[key] {
                seen[key] = true;
                table[key] += *price as u32;
            }
        }
        // Reset seen sequences for next round
        seen.fill(false);
    }
    let bananas = table.iter().max().unwrap();
    println!("{bananas}");
}

util::aoc_main!();
```
Also on github
How have I never noticed that `scan()` exists? Very handy. I liked the zipping of the offset prices; neater than my helper method.
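For anyone else who hadn't met them, here are the two idioms in isolation. A minimal sketch, not the code above: `step` stands in for the secret-number update and `diff_windows_with_prices` is a made-up name.

```rust
// Sketch only: scan() carries the current secret as mutable state while
// yielding (price change, new price) pairs; skip(3) lines each 4-change
// window up with the price at the end of that window.
fn diff_windows_with_prices(seed: u32, step: impl Fn(u32) -> u32) -> Vec<(Vec<i8>, u8)> {
    let (diffs, prices): (Vec<i8>, Vec<u8>) = (0..2000)
        .scan(seed, |secret, _| {
            let next = step(*secret);
            let diff = (next % 10) as i8 - (*secret % 10) as i8;
            *secret = next;
            Some((diff, (next % 10) as u8))
        })
        .unzip();

    diffs
        .windows(4)
        .zip(prices.iter().skip(3))
        .map(|(w, &p)| (w.to_vec(), p))
        .collect()
}
```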
Rust
Not too hard today, apart from yesterday’s visit to a cocktail bar leaving me a little hazy in the mind.
code
```rust
use std::{fs, str::FromStr};

use color_eyre::eyre::{Report, Result};
use gxhash::{HashMap, HashMapExt};

const SECRETS_PER_DAY: usize = 2000;
const SEQ_LEN: usize = 4;

type Sequence = [i8; SEQ_LEN];

fn produce(n: usize) -> usize {
    let n = (n ^ (n * 64)) % 16777216;
    let n = (n ^ (n / 32)) % 16777216;
    (n ^ (n * 2048)) % 16777216
}

#[derive(Debug)]
struct Buyer {
    prices: [u8; SECRETS_PER_DAY + 1],
    changes: [i8; SECRETS_PER_DAY],
}

impl Buyer {
    fn price_at_seq(&self, seq: &Sequence) -> Option<u8> {
        self.changes
            .windows(SEQ_LEN)
            .position(|win| win == *seq)
            .and_then(|i| self.price_for_window(i))
    }

    fn price_for_window(&self, i: usize) -> Option<u8> {
        self.prices.get(i + SEQ_LEN).copied()
    }
}

struct BananaMarket {
    buyers: Vec<Buyer>,
}

impl FromStr for BananaMarket {
    type Err = Report;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        let buyer_seeds = s
            .lines()
            .map(|s| s.parse::<usize>())
            .collect::<Result<Vec<_>, _>>()?;

        let buyers = buyer_seeds
            .into_iter()
            .map(|seed| {
                let mut prices = [0; SECRETS_PER_DAY + 1];
                let mut changes = [0; SECRETS_PER_DAY];

                let mut secret = seed;
                let mut price = (seed % 10) as u8;
                prices[0] = price;

                for i in 0..SECRETS_PER_DAY {
                    let last_price = price;
                    secret = produce(secret);
                    price = (secret % 10) as u8;
                    prices[i + 1] = price;
                    changes[i] = price as i8 - last_price as i8;
                }

                Buyer { prices, changes }
            })
            .collect();

        Ok(Self { buyers })
    }
}

impl BananaMarket {
    fn sell_with_seq(&self, seq: &Sequence) -> usize {
        self.buyers
            .iter()
            .map(|b| b.price_at_seq(seq).unwrap_or(0) as usize)
            .sum()
    }

    fn maximise_bananas(&self) -> usize {
        let mut cache: HashMap<Sequence, usize> = HashMap::new();
        for seq in self
            .buyers
            .iter()
            .flat_map(|buyer| buyer.changes.windows(SEQ_LEN))
        {
            let seq = seq.try_into().unwrap();
            cache.entry(seq).or_insert_with(|| self.sell_with_seq(&seq));
        }
        cache.into_values().max().unwrap_or(0)
    }
}

fn part1(filepath: &str) -> Result<usize> {
    let input = fs::read_to_string(filepath)?
        .lines()
        .map(|s| s.parse::<usize>())
        .collect::<Result<Vec<_>, _>>()?;

    let res = input
        .into_iter()
        .map(|n| (0..SECRETS_PER_DAY).fold(n, |acc, _| produce(acc)))
        .sum();

    Ok(res)
}

fn part2(filepath: &str) -> Result<usize> {
    let input = fs::read_to_string(filepath)?;
    let market = BananaMarket::from_str(&input)?;
    Ok(market.maximise_bananas())
}

fn main() -> Result<()> {
    color_eyre::install()?;
    println!("Part 1: {}", part1("d22/input.txt")?);
    println!("Part 2: {}", part2("d22/input.txt")?);
    Ok(())
}
```
Haskell
solution
```haskell
import Control.Arrow
import Data.Bits
import Data.List

import qualified Data.Map as M

parse = fmap (secretNums . read) . lines

secretNums :: Int -> [Int]
secretNums = take 2001 . iterate (step1 >>> step2 >>> step3)
  where
    step1 n = ((n `shiftL` 06) `xor` n) .&. 0xFFFFFF
    step2 n = ((n `shiftR` 05) `xor` n) .&. 0xFFFFFF
    step3 n = ((n `shiftL` 11) `xor` n) .&. 0xFFFFFF

part1 = sum . fmap last

part2 = maximum . M.elems . M.unionsWith (+) . fmap (deltas . fmap (`mod` 10))

deltas l = M.fromListWith (\n p -> p) $ flip zip (drop 4 l) $
    zip4 diffs (tail diffs) (drop 2 diffs) (drop 3 diffs)
  where
    diffs = zipWith (-) (tail l) l

main = getContents >>= print . (part1 &&& part2) . parse
```
Dart
Well, that was certainly a bit easier than yesterday…
I know running a window over each full list of 2000 prices rather than looking for cycles etc means I’m doing a lot of unnecessary work, but it only takes a couple of seconds, so that’ll do.
```dart
import 'package:collection/collection.dart';
import 'package:more/more.dart';

rng(int i) {
  i = ((i << 6) ^ i) % 16777216;
  i = ((i >> 5) ^ i) % 16777216;
  i = ((i << 11) ^ i) % 16777216;
  return i;
}

Iterable<int> getPrices(int val, int rounds) {
  var ret = [val];
  for (var _ in 1.to(rounds)) {
    ret.add(val = rng(val));
  }
  return ret.map((e) => e % 10);
}

int run(int val, int rounds) => 0.to(rounds).fold(val, (s, t) => s = rng(s));

part1(lines) => [for (var i in lines.map(int.parse)) run(i, 2000)].sum;

part2(List<String> lines) {
  var market = <int, int>{}.withDefault(0);
  for (var seed in lines.map(int.parse)) {
    var seen = <int>{};
    for (var w in getPrices(seed, 2000).window(5)) {
      var key = // Can't use lists as keys, so make cheap hash.
          w.window(2).map((e) => e[1] - e[0]).reduce((s, t) => (s << 4) + t);
      if (seen.contains(key)) continue;
      seen.add(key);
      market[key] += w.last;
    }
  }
  return market.values.max;
}
```
Go
Re-familiarizing myself with Go. The solution to Part 2 is fairly simple; the packing of the sequence into a single integer to save memory was an optimization I did afterwards, based on looking at other solutions. I thought it was cool.
```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
)

type SequenceMap struct {
	Data map[int32]int
}

func PackSeq(numbers [4]int8) int32 {
	var packed int32
	for i, num := range numbers {
		packed |= int32(num+9) << (i * 5)
	}
	return packed
}

func UnpackSeq(packed int32) [4]int8 {
	var numbers [4]int8
	for i := range numbers {
		numbers[i] = int8((packed>>(i*5))&0x1F) - 9
	}
	return numbers
}

func NewSequenceMap() SequenceMap {
	return SequenceMap{make(map[int32]int)}
}

func (m *SequenceMap) Increment(seq [4]int8, val int) {
	pSeq := PackSeq(seq)
	acc, ok := m.Data[pSeq]
	if ok {
		m.Data[pSeq] = acc + val
	} else {
		m.Data[pSeq] = val
	}
}

func (m *SequenceMap) Has(seq [4]int8) bool {
	pSeq := PackSeq(seq)
	_, ok := m.Data[pSeq]
	return ok
}

type Generator struct {
	Secret         int64
	LastPrice      int8
	ChangeSequence []int8
}

func NewGenerator(Secret int64) Generator {
	var ChangeSequence []int8
	return Generator{Secret, int8(Secret % 10), ChangeSequence}
}

func (g *Generator) Mix(value int64) *Generator {
	g.Secret = g.Secret ^ value
	return g
}

func (g *Generator) Prune() *Generator {
	g.Secret = g.Secret % 16777216
	return g
}

func (g *Generator) Next() {
	g.Mix(g.Secret * 64).Prune().Mix(g.Secret / 32).Prune().Mix(g.Secret * 2048).Prune()
	Price := int8(g.Secret % 10)
	g.ChangeSequence = append(g.ChangeSequence, Price-g.LastPrice)
	g.LastPrice = Price
	if len(g.ChangeSequence) > 4 {
		g.ChangeSequence = g.ChangeSequence[1:]
	}
}

func ParseInput() []int64 {
	if fileInfo, _ := os.Stdin.Stat(); (fileInfo.Mode() & os.ModeCharDevice) != 0 {
		fmt.Println("This program expects input from stdin.")
		os.Exit(1)
	}

	scanner := bufio.NewScanner(os.Stdin)
	var numbers []int64
	for scanner.Scan() {
		line := scanner.Text()
		num, err := strconv.ParseInt(line, 10, 64)
		if err != nil {
			fmt.Printf("ERROR PARSING VALUE: %s\n", line)
			os.Exit(1)
		}
		numbers = append(numbers, num)
	}
	return numbers
}

func main() {
	numbers := ParseInput()

	m := NewSequenceMap()
	sum := int64(0)
	for i := 0; i < len(numbers); i += 1 {
		g := NewGenerator(numbers[i])
		tM := NewSequenceMap()
		for j := 0; j < 2000; j += 1 {
			g.Next()
			if len(g.ChangeSequence) == 4 {
				if !tM.Has([4]int8(g.ChangeSequence)) {
					tM.Increment([4]int8(g.ChangeSequence), 1)
					if g.LastPrice > 0 {
						m.Increment([4]int8(g.ChangeSequence), int(g.LastPrice))
					}
				}
			}
		}
		sum += g.Secret
	}
	fmt.Printf("Part One: %d\n", sum)

	var bestSeq [4]int8
	bestPrice := 0
	for pSeq, price := range m.Data {
		if price > bestPrice {
			bestPrice = price
			bestSeq = UnpackSeq(pSeq)
		}
	}
	fmt.Printf("Part Two: %d\n", bestPrice)
	fmt.Printf("Best Sequence: %d\n", bestSeq)
}
```
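The 5-bit packing translates almost verbatim to other languages, and it makes a nice point of comparison with the base-19 keys used in the Kotlin and Rust solutions above. A rough Rust rendering of the same idea (the names are mine, not from the Go code above): each change is in -9..=9, so `change + 9` fits in 5 bits and four of them fit in a 32-bit key.

```rust
// Sketch of the pack/unpack pair from the Go solution, not a new algorithm.
fn pack_seq(changes: [i8; 4]) -> u32 {
    changes
        .iter()
        .enumerate()
        .map(|(i, &c)| ((c + 9) as u32) << (i * 5)) // 5 bits per change
        .fold(0, |acc, bits| acc | bits)
}

fn unpack_seq(packed: u32) -> [i8; 4] {
    let mut out = [0i8; 4];
    for (i, slot) in out.iter_mut().enumerate() {
        *slot = ((packed >> (i * 5)) & 0x1F) as i8 - 9;
    }
    out
}
```

The base-19 variant keeps keys dense in 0..19⁴, so a flat array can replace the map entirely, while the shift-based variant is a bit cheaper to compute but leaves gaps in the key space.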
Haskell
I have no idea how to optimize this and am looking forward to the other solutions, which probably run in sub-second times. I like my solution because it was simple to write, which I hadn’t managed in the previous days; it runs in 17 seconds with no less than 100MB of RAM.
```haskell
import Control.Arrow
import Data.Bits (xor)
import Data.Ord (comparing)

import qualified Data.List as List
import qualified Data.Map as Map

parse :: String -> [Int]
parse = map read . filter (/= "") . lines

mix = xor
prune = flip mod 16777216
priceof = flip mod 10

nextSecret step0 = do
  let step1 = prune . mix step0 $ step0 * 64
  let step2 = prune . mix step1 $ step1 `div` 32
  let step3 = prune . mix step2 $ step2 * 2048
  step3

part1 = sum . map (head . drop 2000 . iterate nextSecret)

part2 = map ( iterate nextSecret
          >>> take 2001
          >>> map priceof
          >>> (id &&& tail)
          >>> uncurry (zipWith (curry (uncurry (flip (-)) &&& snd)))
          >>> map (take 4) . List.tails
          >>> filter ((==4) . length)
          >>> map (List.map fst &&& snd . List.last)
          >>> List.foldl (\ m (s, p) -> Map.insertWith (flip const) s p m) Map.empty
            )
   >>> Map.unionsWith (+)
   >>> Map.assocs
   >>> List.maximumBy (comparing snd)

main = getContents >>= print . (part1 &&& part2) . parse
```
Haha, same! Mine runs in a bit under 4s compiled, but uses a similar 100M-ish peak. Looks like we used the same method.
Maybe iterate all the secrets in parallel, and keep a running note of the best sequences so far? I’m not sure how you’d decide when to throw away old candidates, though. Sequences might match one buyer early and another really late.
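For reference, the table-based solutions elsewhere in the thread sidestep the question of when to discard candidates: they fold each first-seen price straight into one global total per sequence, so nothing per buyer has to be retained. A rough sketch of that shape (in Rust rather than Haskell; `step` and `max_bananas` are stand-in names, not anyone's code above):

```rust
// Sketch: stream each buyer's 2000 steps through a rolling base-19 window key
// and credit the first-seen price directly into one global table.
fn max_bananas(seeds: &[u32], step: impl Fn(u32) -> u32) -> u32 {
    const N: usize = 19 * 19 * 19 * 19; // number of distinct 4-change sequences
    let mut totals = vec![0u32; N]; // bananas per sequence, summed over all buyers
    let mut seen = vec![false; N];  // sequences already credited for the current buyer

    for &seed in seeds {
        seen.fill(false);
        let (mut secret, mut key) = (seed, 0usize);
        for i in 0..2000 {
            let next = step(secret);
            // Price change shifted to 0..=18, folded into a rolling window of 4.
            let change = ((next % 10) as i32 - (secret % 10) as i32 + 9) as usize;
            key = (key * 19 + change) % N;
            secret = next;
            if i >= 3 && !seen[key] {
                seen[key] = true;
                totals[key] += next % 10;
            }
        }
    }
    totals.into_iter().max().unwrap_or(0)
}
```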
Haskell
A nice easy one today; shame I couldn’t start on time. I had a go at refactoring to reduce the peak memory usage, but it just ended up a mess. Here’s a tidy version.
```haskell
import Data.Bits
import Data.List
import Data.Map (Map)
import Data.Map qualified as Map

next :: Int -> Int
next = flip (foldl' (\x n -> (x `xor` shift x n) .&. 0xFFFFFF)) [6, -5, 11]

bananaCounts :: Int -> Map [Int] Int
bananaCounts seed =
  let secrets = iterate next seed
      prices = map (`mod` 10) secrets
      changes = zipWith (-) (drop 1 prices) prices
      sequences = map (take 4) $ tails changes
   in Map.fromListWith (const id) $ take 2000 (zip sequences (drop 4 prices))

main = do
  input <- map read . lines <$> readFile "input22"
  print . sum $ map ((!! 2000) . iterate next) input
  print . maximum $ Map.unionsWith (+) $ map bananaCounts input
```