Most Different

Anthony Howe
Montreal, Quebec, Canada
Twitter: @SirWumpus

Judges' comments:

To use:


./prog file1 file2

./prog -d file1 file2


./prog prog.orig.c prog.c

make test
./prog -d ABXYEFCD.tmp ACDBECFD.tmp
./prog ABXYEFCD.tmp ACDBECFD.tmp

rm -f curds.tmp whey.tmp
cp /usr/share/man/man1/cc.1 curds.tmp
cp /usr/share/man/man1/cc.1 whey.tmp
chmod 666 whey.tmp
make makeholes
./makeholes -n 1000 whey.tmp
./prog curds.tmp whey.tmp
./ 100 ./prog curds.tmp whey.tmp

# Assuming curl(1) installed, grab a really HUGE text file.
curl -L -o war-and-peace.txt
cp war-and-peace.txt nuked.tmp
./makeholes -c'~' -n 1000 nuked.tmp
./prog war-and-peace.txt nuked.tmp
./ 100 ./prog war-and-peace.txt nuked.tmp

Selected Judges' Remarks:

This is the best use of the FNV hash that we have seen in the IOCCC so far! The output, when used without -d, is compatible with POSIX diff and is suitable for use with patch.

We welcome back Canada to the list of nations from where winning entries have been submitted.

Is this code a bug or a feature? :-) Or is this an attempt to corrupt the programming of our youth? Should we heed Kyle's mom's words, uttered during a South Park P.T.A. meeting?

“We must stop dirty © language from getting to our children’s ears!”

Blame Canada:

Or should we teach our youth to understand the intricacies of this code? Ying Tong Iddle I Po! We suggest you read the source for yourself, which might be easier than the academic papers it was inspired by.

NOTE: Unlike the original entry source, prog.orig.c, prog.c uses a 64-bit FNV hash and fixes a function call warning.

Author’s comments:

This is a functioning micro diff tool using an O(NP) algorithm, in contrast to the older O(ND) difference algorithm used by some versions of diff. Its output follows the default diff(1) output format described by POSIX and The Open Group Base Specifications, and is suitable for use with patch(1).

The -d option simply writes the edit distance between file1 and file2.


The FNV-1a hash is a little slow compared to the trivial hash GNU Diff uses. I downloaded a plain-text copy of "War And Peace" from Project Gutenberg, used makeholes.c to make 1000 random changes, then profiled and timed the program versus GNU Diff. The bottleneck appears to be in the file I/O and line hashing, averaging about 0.05s slower. Using a huge file like "War And Peace" for testing offsets diff(1)'s optimised file I/O.

There is no hash collision checking, partly because FNV-1a appears to generate very few collisions, and partly on the assumption that collisions localised within a region of edits are highly unlikely.

Support Files


Wu, Manber, Myers, and Miller; August 1989;
“An O(NP) Sequence Comparison Algorithm”;

Fowler, Noll, Vo; 1994

Fowler, Noll, Vo on Wikipedia

“diff”, The Open Group Base Specifications Issue 7
IEEE Std 1003.1, 2013 Edition

Eugene W. Myers; “An O(ND) Difference Algorithm and Its Variations”;
Algorithmica, 1986, pages 251-266

Webb Miller and Eugene W. Myers; “A File Comparison Program”;
Software-Practice and Experience, Vol. 15(11), pages 1025-1040 (November 1985)

D.S. Hirschberg; "A linear space algorithm for computing maximal common subsequences";
Communications of the ACM, Vol. 18, June 1975, pages 341-343

Which hashing algorithm is best for uniqueness and speed?

Creative Commons License

© Copyright 1984-2016, Leo Broukhis, Simon Cooper, Landon Curt Noll - All rights reserved
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.