I'm looking at the leaderboard and it raises some interesting questions. Currently the fastest are ~3.4 seconds.
Yesterday the README said that benchmarks were run on a "Premium Intel Digital Ocean Droplet with 2vCPUs and 1.5GB of available memory".
Today it says they are run on a "Mac Mini M1 with 12GB of RAM of available memory", which if the net is to be believed is quite a bit faster than the DO Droplet they said they had been using. I'm going to assume those 3.4 seconds results on the leaderboard were benchmarked on the Mac.
I've got an M2 Max Mac Studio which should be faster than the Mac Mini.
A program to do this challenge must read the entire input file, and it is going to have to do at least some computation for every character in the file while parsing.
So I thought to try to get an idea of what an upper limit might be for how fast this could be done. One idea for that was this:
$ time WC_ALL=C wc -l data.csv
The idea is that wc should be written in C or C++, and counting lines just requires checking each character to see if it is a newline, so it is pretty minimal computation. WC_ALL=C should keep any Unicode handling from happening, which might otherwise slow it down.
This takes 7.1 seconds. (Same without WC_ALL=C, BTW.)
OK, that was unexpected. I then wrote a line counter in C. Allocate a buffer of size N, loop doing (read N bytes from stdin into buffer, scan those bytes counting '\n's) until no more input. With a 1 MiB buffer it took 1 second. With a 1024 byte buffer it took 4.3 seconds. With a 512 byte buffer it took 7.1 seconds.
So...maybe wc just has a small buffer?
Then I decided to try "wc -c". That's 0.008 seconds, which is faster than "cat > /dev/null" (0.6 seconds), suggesting "wc -c" is not reading the file. Someone probably special-cased requests for just the byte count, using stat/fstat to get the file size, or seeking to the end and taking the offset, or something like that.
I then looked at the source for wc [1]. It does indeed special case things like -c. It also special cases -l, because lines, unlike words, can be counted without having to deal with locale stuff.
But my guess that it is using a small buffer is wrong. Its buffer size is 1 MiB, same as mine. So why is my line counter 1 second while "wc -l" is 7.1 seconds?
Looking at it I see that wc is also finding the longest line, even if you have only asked for the number of lines. When I add finding the longest line to mine it then takes 5.1 seconds.
There is also more error handling in wc. Mine just loops as long as read() > 0 and then prints the stats and exits, whereas wc loops as long as read() != 0, and then in the loop does an "if (len < 0)" to see if there was an error.
There is also a check in the loop in wc to see if a flag that gets set on SIGINFO is set. If it is then wc prints the current stats.
Still, on the 7 GB data.csv file, with a 1 MiB read buffer, the read loop should run under 7000 times, so that "if (len < 0)" and "if (siginfo)" are only going to be evaluated under 7000 times each, and their enclosed code only runs if there is a read error (for the first) or every time I hit CTRL-T (for the second). In my tests that's 0 times for both.
That's not nearly enough to explain why it is 2.1 seconds slower than my line counter, which now has the same buffer size, finds the longest line too, and aside from those two rarely-taken if statements is essentially the same loop.
Maybe later I'll see what it takes to build wc locally and try to find where the time is going.
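The buffered read-and-count loop described above is easy to sketch outside C as well — here in Python (my choice, not the commenter's original), with the buffer size exposed so the small-buffer effect can be reproduced:

```python
import os, tempfile

def count_lines(path, bufsize=1 << 20):
    """Count newline bytes by reading fixed-size chunks, like the C loop."""
    total = 0
    with open(path, "rb", buffering=0) as f:  # unbuffered; we manage the buffer
        while True:
            chunk = f.read(bufsize)
            if not chunk:          # read() returns b"" at EOF
                break
            total += chunk.count(b"\n")
    return total

# Smoke test on a throwaway three-line file.
with tempfile.NamedTemporaryFile("wb", delete=False) as tmp:
    tmp.write(b"a\nb\nc\n")
print(count_lines(tmp.name))  # 3
os.remove(tmp.name)
```

Shrinking `bufsize` to a few hundred bytes multiplies the number of read() syscalls, which is the effect the commenter measured.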
A month ago, I went on a performance quest trying to optimize a PHP script that took 5 days to run. Together with the help of many talented developers, I eventually got it to run in under 30 seconds. This optimization process was so much fun, and so many people pitched in with their ideas, that I eventually decided I wanted to do something more.
That's why I built a performance challenge for the PHP community.
The goal of this challenge is to parse 100 million rows of data with PHP, as efficiently as possible. The challenge will run for about two weeks, and at the end there are some prizes for the best entries (amongst the prizes is the very sought-after PhpStorm Elephpant, of which we only have a handful left).
I hope people will have fun with it :)
Pitch this to whoever is in charge of performance at WordPress.
A WordPress instance will happily take over 20 seconds to fully load if you disable caching.
Are you talking about a new, empty WordPress instance running the default theme? Because if so, that doesn't match my anecdotal experience.
If you're talking about a WordPress instance with arbitrary plugins running an arbitrary theme, then sure — but that's an observation about those plugins and themes, not core.
As someone who has to work with WordPress, I have all kinds of issues with it, but "20 seconds to load core with caching disabled" isn't one of them.
Microbenchmarks are very different from optimizing performance in real applications in wide use, though. Someone could do great on this specific benchmark but still have no clue how to make something large like WordPress perform OK out of the box.
WordPress is something that I cannot believe hasn't been displaced by a service that uses separate applications for editing and delivery.
Something like Vercel or Cloudflare could host the content side, published as a worker serving mostly-static content from a larger application; that would be more beneficial and run better with less risk, for that matter. Serving the editing app and auth from the same location is just begging for the issues WP and its plugins have seen.
That's often a skill issue.
Much like anything else, your performance is going to vary a lot based on the architecture of your implementation. You really shouldn't deploy anything into production without some kind of caching, whether that's done in the application itself or with memcached/Redis, Varnish, or OPcache.
I've long since abandoned WP, but this seems like an SQL resource issue as it bogs up against the OOM reaper with no swap available. WordPress is like a mid-level V6 Kia with all the options and a huge aftermarket.
> A month ago, I went on a performance quest trying to optimize a PHP script that took 5 days to run. Together with the help of many talented developers, I eventually got it to run in under 30 seconds
That's a huge improvement! How much was low hanging fruit unrelated to the PHP interpreter itself, out of curiosity? (E.g. parallelism, faster SQL queries etc)
Almost all, actually. I wrote about it here: https://stitcher.io/blog/11-million-rows-in-seconds
A couple of things I did:
- Cursor based pagination
- Combining insert statements
- Using database transactions to prevent fsync calls
- Moving calculations from the database to PHP
- Avoiding serialization where possible
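Two of those points — combining insert statements and wrapping them in a single transaction — can be sketched as follows. This is an illustrative Python/SQLite version with an invented `visits` table, not the author's actual PHP/MySQL code:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE visits (url TEXT, date TEXT)")
rows = [(f"/blog/post-{i % 100}", "2021-06-21") for i in range(10_000)]

# Naive version: one INSERT per row. On an on-disk database each
# autocommitted statement is its own transaction (and its own fsync).
for row in rows:
    conn.execute("INSERT INTO visits VALUES (?, ?)", row)
conn.commit()
conn.execute("DELETE FROM visits")

# Batched version: one transaction, one multi-row INSERT per chunk.
with conn:  # a single transaction for the whole loop
    for i in range(0, len(rows), 400):
        chunk = rows[i:i + 400]
        placeholders = ",".join(["(?, ?)"] * len(chunk))
        flat = [value for row in chunk for value in row]
        conn.execute(f"INSERT INTO visits VALUES {placeholders}", flat)

print(conn.execute("SELECT COUNT(*) FROM visits").fetchone()[0])  # 10000
```

The chunk size of 400 keeps the bound-parameter count under SQLite's historical 999-variable limit; the right batch size depends on the engine.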
In general, it is bad practice to touch transactional datasets in PHP script space. Like all foot-guns, it eventually leads to read-modify-write bugs.
Depending on the SQL engine, there are many PHP cursor optimizations that save moving around large chunks of data.
Clean cached PHP can be fast for parsing transactional REST data, but it is also often used as a bodge language by amateurs. PHP is not slow by default, nor is it meant to run persistently (though its low memory use is nice), but it still gets a lot of justified criticism.
Erlang and Elixir are much better for client/host budgets, but less intuitive than PHP =3
Fun challenge, but running the benchmark on Apple hardware is a weird decision, as Apple doesn't even make server hardware. It would make much more sense to run it on a dedicated Linux box, as that is more accessible and more realistic.
Hehe. Optimization ... it's a good way to learn. Earlier in my career I did a lot of PHP. Usually close to bare.
Other than the obvious point that writing an enormous JSON file is a dubious goal in the first place (really), while PHP can be very fast, this is probably faster to implement in shell with sed/grep, or, almost certainly better, by loading into SQLite and then dumping out from there. Your optimization path then likely becomes index specification and processing, and after the initial load, potentially query or instance parallelization.
The page confirms sqlite is available.
If the judges whinge and shell_exec() is unavailable as a path, the more whinge-tolerant route is to use PHP's SQLite support and then dump to JSON.
If I wanted to achieve this for some reason in reality, I'd have the file on a memory-backed blockstore before processing, which would yield further gains.
Frankly, this is not much of a programming problem; it's more a systems problem, but it's not being specced as such. This shows, in my view, an immature conception of the real problem domain (likely IO-bound). Right tool for the job.
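A sketch of that load-into-SQLite-then-dump route, in Python rather than PHP for brevity; the three-line dataset and the url,timestamp column shape are my assumptions standing in for the real CSV:

```python
import csv, io, json, sqlite3

# Toy stand-in for the challenge's CSV.
raw = io.StringIO(
    "https://example.org/blog/a,2021-06-21T10:00:00\n"
    "https://example.org/blog/a,2021-06-21T11:00:00\n"
    "https://example.org/blog/b,2021-06-22T09:00:00\n"
)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE visits (url TEXT, ts TEXT)")
with conn:
    conn.executemany("INSERT INTO visits VALUES (?, ?)", csv.reader(raw))

# Let the database do the grouping, then dump the aggregate as JSON.
result = {}
for url, day, count in conn.execute(
    "SELECT url, substr(ts, 1, 10) AS day, COUNT(*) "
    "FROM visits GROUP BY url, day ORDER BY url, day"
):
    result.setdefault(url, {})[day] = count

print(json.dumps(result, indent=4))
```

The per-page, per-day aggregation becomes a single GROUP BY, and the host language only formats the (tiny) result.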
5 days to 30 seconds? What kind of factor/order of magnitude is that damn
What takes 5 days to run
Poorly made analytics/datawarehouse stuff.
One query per column per row
Using a language that is 100x slower than naive native programs to do a "speed challenge" is like spending your entire day speed walking to run errands when you can just learn how to drive a car.
Do not update the leaderboard.... at all.
exec('c program that does the parsing');
Where do I get my prize? ;)
The FAQ states that solutions like FFI are not allowed because the goal is to solve it with PHP :)
A month ago, I went on a performance quest trying to optimize a PHP script that took 5 days to run. Together with the help of many talented developers, I eventually got it to run in under 30 seconds.
When people say leetcode interviews are pointless, I might share a link to this post. If that sort of optimization is possible, there is a data structures and algorithms problem in the background somewhere.
I find that these kinds of optimizations are usually more about technical architecture than leetcode. Last time I got speedups this crazy, the biggest win was reducing the number of network/database calls. There were also optimizations around reducing allocations and pulling expensive work out of hot loops. But leetcode interview questions don't tend to cover any of that.
They tend to be about the implementation details of specific algorithms and data structures. Whereas the important skill in most real-world scenarios would be to understand the trade-offs between different algorithms and data structures so that you pick an appropriate off-the-shelf implementation to use.
Well, leetcode asks you to implement the data structure, not how and when to use which data structure. I don't need to know how to implement a bloom filter on a whiteboard off the top of my head to know when to use one.
Do you think they achieved that performance optimisation with a networked service because they switched from insertion sort to quicksort?
Side note: I wasn't aware that there is an active collectors' scene for Elephpants. Awesome!
https://elephpant.me/
Elephpants should be for second and third place. First place should be the double-clawed hammer.
Excellent project. My favorites: the joker, php storm, phplashy, Molly.
which molly project?
I love Mollie!
Are they just confused about what characters require escaping in JSON strings or is PHP weirder than I remember?
PHP has always escaped forward slashes to help prevent malicious JSON from injecting tags into JavaScript I believe. Because it was common for PHP users to json_encode some data and then to write it out into the HTML in a script tag. A malicious actor could include a closing script tag, and then could inject their own HTML tags and scripts etc.
The weirdness is partly in JSON itself. In the JSON spec, the slash (named "solidus" there) is the only character that can be written either plainly or prefixed with a backslash (AKA "reverse solidus").
See page 4, section 9 of the latest ECMA for JSON: https://ecma-international.org/wp-content/uploads/ECMA-404_2...
That's the default output when using json_encode with the JSON_PRETTY_PRINT flag in php.
> That's the default output when using json_encode with the JSON_PRETTY_PRINT flag in php.
JSON_PRETTY_PRINT is irrelevant. Escaping slashes is the default behavior of json_encode(). To switch it off, use JSON_UNESCAPED_SLASHES.
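The underlying spec point is easy to check — here in Python, since both spellings are visible from any JSON library (PHP's json_encode emits the escaped form by default; Python's json.dumps emits the unescaped one):

```python
import json

# Per the JSON spec, "\/" and "/" are both legal spellings of the solidus,
# and they decode to the same string:
assert json.loads('"a\\/b"') == json.loads('"a/b"') == "a/b"

# Python's json.dumps leaves the solidus alone, unlike PHP's default.
print(json.dumps("a/b"))  # "a/b"
```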
> The output should be encoded as a pretty JSON string.
So apparently that is what they consider "pretty JSON". I really don't want to see what they would consider "ugly JSON".
(I think the term they may have been looking for is "pretty-printed JSON" which implies something about the formatting rather than being a completely subjective term)
Pretty JSON not meaning formatting, but more "That was pretty JSON of you."
This is why I jumped from PHP to Go, then why I jumped from Go to Rust.
Go is the most batteries-included language I've ever used. Instant compile times mean I can run tests bound to ctrl/cmd+S every time I save a file. It's more performant (way less memory, similar CPU time) than C# or Java (and certainly all the scripting languages) and has a massive stdlib for anything you could want to do. It's what scripting languages should have been. Anyone can read it, just like Python.
Rust takes the last 20% I couldn't get in a GC language and removes it. Sure, its syntax doesn't make sense to an outsider and you end up with third-party packages for a lot of things, but you can't beat its performance and safety. It removes a whole lot of tests, as those situations just aren't possible.
If Rust scares you use Go. If Go scares you use Rust.
It's almost comical how often people bring up Rust. "Here's a fun PHP challenge!" "Let's talk about Rust..."
Yep. It's like a crossfit vegan religion at this point.
You don't even have to ask. They will tell you and usually add nothing to the conversation while doing so.
Quite off-putting.
Sorry, but it's honestly just a lot of our journeys. We started on scripting languages like PHP/Ruby/Lua (self-taught) or Java/VB/C#/Python (college) and then slowly expanded to other languages as we realized we were being held back by our own tools. Each new language/relationship makes you kick yourself for putting up with things for so long.
I mean, it's kinda like complaining that people are mentioning excavators on your "how I optimised digging a massive ditch with teaspoons" post.
Can't speak for go... but for the handful of languages I've thrown at Claude Code, I'd say it's doing the best job with Rust. Maybe the Rust examples in the wild are just better compared to say C#, but I've had a much smoother time of it with Rust than anything else. TS has been decent though.
I am not smart enough to use Rust, so take this with a grain of salt. However, its syntax just makes me go crazy. Go, on the other hand, is a breath of fresh air. I think unless you really need that additional 20% improvement that Rust provides, Go should be the default for most projects between the two.
I hear you; advanced generics (for complex unions and such) in TypeScript and Rust are honestly unreadable. It's code you spend a day getting right and then no one touches.
I'm just glad modern languages stopped throwing and catching exceptions at random levels in their call chains. PHP, JavaScript, and Java can (though not always) have unreadable error-handling paths, hardly augmenting the error with any useful information, leaving you relying on the stack trace to piece together what happened.
What's a decent time for this?
I was curious what it would take if I approached it the way I do with most CSV transformation tasks that I'm only intending to do once: use Unix command line tools such as cut, sed, sort, and uniq to do the bulk of the work, and then do something in whatever scripting language seems appropriate to put the final output in whatever format is needed.
The first part, using this command [1], produces output lines that look like this:
219,/blog/php-81-before-and-after,2021-06-21
and is sorted by URL path and then date.
With 1 million lines that took 9 or 10 seconds (M2 Max Mac Studio). But with 100 million it took 1220 seconds, virtually all of which was sorting.
Turning that into JSON via a shell script [2] was about 15 seconds. (That script is 44% longer than it would have been had JSON allowed a comma after the last element of an array).
So basically 22 minutes. The sorting is the killer with this type of approach, because the input is 7 GB. The output is only 13 MB, there are under 300 pages, and the largest page count is under 1000, so building the output up in memory as the unsorted input is scanned, and then sorting that, would clearly be way, way faster.
[1] cut -d / -f 4- | sed -e 's/T..............$//' | sort | uniq -c | sed -e 's/^ *//' -e 's/ /,\//'
[2]
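The single-pass, no-sort aggregation suggested above can be sketched like this (Python for brevity; the sample lines and the url,timestamp field layout are my assumptions):

```python
from collections import defaultdict

# Toy input lines in the assumed raw shape: full URL, comma, ISO timestamp.
lines = [
    "https://stitcher.io/blog/php-81-before-and-after,2021-06-21T10:00:00",
    "https://stitcher.io/blog/php-81-before-and-after,2021-06-21T12:30:00",
    "https://stitcher.io/blog/new-in-php-84,2021-06-22T09:00:00",
]

# One pass, no sort: counts[path][date] accumulates as we stream the input.
counts = defaultdict(lambda: defaultdict(int))
for line in lines:
    url, ts = line.rsplit(",", 1)
    path = url[url.index("/", 8):]   # strip scheme and host, keep the path
    counts[path][ts[:10]] += 1       # ts[:10] is the YYYY-MM-DD prefix

print(counts["/blog/php-81-before-and-after"]["2021-06-21"])  # 2
```

Since the aggregate fits easily in memory, only the tiny result ever needs sorting, not the 7 GB input.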
You can check the benchmarks here: https://github.com/tempestphp/100-million-row-challenge/blob...
A "good" run seems to be around 20-40s mark.
I don't have time to put together a submission but I'm willing to bet you can use this:
https://github.com/kjdev/php-ext-jq
And replicate this command:
jq -R ' [inputs | split(",") | {url: .[0], date: .[1] | split("T")[0]}] | group_by(.url) | map({ (.[0].url): ( map(.date) | group_by(.) | map({(.[0]): length}) | add ) }) | add ' < test-data.csv
And it will be faster than anything you can do in native PHP.
Edit: I'm assuming none of the URLs contain a comma, but this is more about offloading the work through an extension, even if you have to custom-build it.
The rules exclude FFI etc.
I took a quick look, the dependency on php 8.5 is mildly irritating, even Ubuntu 26.04 isn't lined up to ship with that version, it's on 8.4.11.
You mention in the README that the goal is to run things in a standard environment, but then you're using a near-bleeding-edge PHP version that people are unlikely to be using?
I thought I'd just quickly spin up a container and take a look out of interest, but now it looks like I'll have to go dig into building my own PHP packages, or compiling my own version from scratch to even begin to look at things?
Those are quite good:
https://launchpad.net/~ondrej/+archive/ubuntu/php
Anyway, whatever you write in an earlier PHP version is likely to work on future versions. PHP has remarkable BC.
If you're just experimenting, might as well start in the browser:
https://alganet.github.io/phasm/
Not all extensions available there, but it has the essentials.
> Also, the generator will use a seeded randomizer so that, for local development, you work on the same dataset as others
Except that the generator script generates dates relative to time()?
True, it's a bug that I'm going to fix, but it only impacts local test data sets and not the real benchmark :)
You should say in the output formatting rules that the pages should be output in the order that the pages are in the input file. Currently it only specifies the order of the visits within the entry for each page.
Awesome. I’ll be following this. I’ll probably learn a ton.
How large is a sample 100M row file in bytes? (I tried to run the generator locally but my php is not bleeding-edge enough)
Around 7GB
Submit at the very end, so others won't know you have a better solution.
Obligatory DuckDB solution:
> duckdb -s "COPY (SELECT url[20:] as url, date, count(*) as c FROM read_csv('data.csv', columns = { 'url': 'VARCHAR', 'date': 'DATE' }) GROUP BY url, date) TO 'output.json' (ARRAY)"
Takes about 8 seconds on my M1 Macbook. JSON not in the right format, but that wouldn't dominate the execution time.
This log in one of the PRs claims a 5.4s running time on some Mac.
https://github.com/tempestphp/100-million-row-challenge/pull...
> The output should be encoded as a pretty JSON string.
...
> Your parser should store the following output in $outputPath as a JSON file:
They don't define what exactly "pretty" means, but superfluous escapes are not very pretty in my opinion.
They probably mean "should look like the output of json_encode($data, JSON_PRETTY_PRINT)", which most PHP devs would be familiar with.
It sounds plausible, but they really need to spell out exactly what the formatting requirements are, because it can make a huge difference in how efficiently you can write the JSON out.
It reminds me of a good read about optimizing PHP for the 1 billion row challenge. TL;DR: at some point you hit a limit in PHP's stream layer.
https://dev.to/realflowcontrol/processing-one-billion-rows-i...
I'm looking at the leaderboard and it raises some interesting questions. Currently the fastest are ~3.4 seconds.
Yesterday the README said that benchmarks were run on a "Premium Intel Digital Ocean Droplet with 2vCPUs and 1.5GB of available memory".
Today it says they are run on a "Mac Mini M1 with 12GB of RAM of available memory", which if the net is to be believed is quite a bit faster than the DO Droplet they said they had been using. I'm going to assume those 3.4 seconds results on the leaderboard were benchmarked on the Mac.
I've got an M2 Max Mac Studio which should be faster than the Mac Mini.
A program to do this challenge must read the entire input file, and it is going to have to do at least some computation for every character in the file while parsing.
So I thought I'd try to get an idea of an upper limit on how fast this could be done. One idea was to time "LC_ALL=C wc -l" on the file.
The idea is that wc is presumably written in C or C++, and counting lines just requires checking each byte to see if it is a newline, so it is pretty minimal computation. LC_ALL=C should keep any Unicode handling from happening which might slow it down.
That takes 7.1 seconds (same without LC_ALL=C, BTW).
OK, that was unexpected. I then wrote a line counter in C. Allocate a buffer of size N, loop doing (read N bytes from stdin into buffer, scan those bytes counting '\n's) until no more input. With a 1 MiB buffer it took 1 second. With a 1024 byte buffer it took 4.3 seconds. With a 512 byte buffer it took 7.1 seconds.
So...maybe wc just has a small buffer?
Then I decided to try "wc -c". That's 0.008 seconds, which is faster than "cat > /dev/null" (0.6 seconds), suggesting "wc -c" is not reading the file at all. Someone probably decided to special-case requests for just the byte count and use stat/fstat to get the file size, or seek to the end and get the offset, or something like that.
I then looked at the source for wc [1]. It does indeed special case things like -c. It also special cases -l, because lines, unlike words, can be counted without having to deal with locale stuff.
But my guess that it is using a small buffer is wrong. Its buffer size is 1 MiB, same as mine. So why is my line counter 1 second and "wc -l" 7.1 seconds?
Looking at it, I see that wc is also finding the longest line, even if you have only asked for the number of lines. When I add longest-line tracking to mine, it takes 5.1 seconds.
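Adding longest-line tracking to the chunk scan might look like this (a sketch; the function and variable names are mine, not wc's):

```c
#include <stddef.h>

/* Scan a chunk, updating the line count, the length of the current
 * (possibly unfinished) line, and the longest line seen so far. */
static void scan_chunk(const char *buf, size_t len,
                       size_t *lines, size_t *cur, size_t *maxlen) {
    for (size_t i = 0; i < len; i++) {
        if (buf[i] == '\n') {
            (*lines)++;
            if (*cur > *maxlen)
                *maxlen = *cur;
            *cur = 0;  /* reset for the next line */
        } else {
            (*cur)++;
        }
    }
}
```

The extra branch and bookkeeping per byte is plausibly what accounts for most of the slowdown over a bare newline count.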
There is also more error handling in wc. Mine just loops as long as read() > 0 and then prints the stats and exits, whereas wc loops as long as read() != 0 and then inside the loop does an "if (len < 0)" to see if there was an error.
There is also a check in the loop in wc to see if a flag that gets set on SIGINFO is set. If it is then wc prints the current stats.
Still, on the 7 GB data.csv file, with a 1 MiB read buffer, the read loop should run under 7000 times, so the "if (len < 0)" and "if (siginfo)" checks each execute under 7000 times, and their enclosed code only runs if there is a read error (for the first) or every time I hit CTRL-T (for the second). In my tests that's 0 times for both.
That's not nearly enough to explain why it is 2.1 seconds slower than my line counter, which now has the same buffer size, finds the longest line too, and, aside from those two not-taken if statements, is essentially the same loop.
Maybe later I'll see what it takes to build wc locally and try to find where the time is going.
[1] https://github.com/apple-oss-distributions/text_cmds/blob/te...
Tempted to submit a Java app wrapped in PHP exec() :D
The rules state that FFI and the like aren't allowed, because the goal is to do it in PHP :)