⚡ Encoding Benchmark Tool – Compare Encoding Speed & Size
Choosing the right encoding algorithm often comes down to two competing concerns: compactness (encoded size relative to the original) and speed (how fast your application can encode and decode data). The Encoding Benchmark Tool measures both in a single run, for up to nine popular algorithms, directly in your browser with nothing to install.
🔍 What Gets Measured?
For each selected algorithm the tool records four key metrics:
- Encoded Size (bytes) – the byte length of the encoded output for your exact input string.
- Size Overhead (%) – how much larger the output is relative to the original, calculated as ((encodedBytes − originalBytes) / originalBytes) × 100.
- Ops/sec – how many encode operations the algorithm completes per second, measured over thousands of timed iterations (after JIT warm-up) using performance.now().
- Average Latency (µs) – the mean time in microseconds per single encode call, derived from the same timed run.
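All four metrics can be reproduced with a few lines of plain JavaScript. The snippet below is a minimal sketch using Base64 (via the standard btoa) as the example encoder; it is not the tool's actual code:

```javascript
// Compute size overhead, average latency, and ops/sec for one encoder.
const input = "hello, benchmark";
const originalBytes = new TextEncoder().encode(input).length; // UTF-8 byte length
const encodedBytes = btoa(input).length; // Base64 output is ASCII: 1 byte per char

// Size overhead: ((encodedBytes − originalBytes) / originalBytes) × 100
const overheadPct = ((encodedBytes - originalBytes) / originalBytes) * 100;

// Time N encode calls with the high-resolution clock.
const N = 10000;
const t0 = performance.now();
for (let i = 0; i < N; i++) btoa(input);
const elapsedMs = performance.now() - t0;

const avgLatencyUs = (elapsedMs / N) * 1000; // mean microseconds per call
const opsPerSec = N / (elapsedMs / 1000);    // reciprocal of the mean latency

console.log({ overheadPct, avgLatencyUs, opsPerSec });
```

Note that for this 16-byte input Base64 pads to 24 characters, so the overhead reads 50% rather than the asymptotic ~33% — one reason to benchmark with realistic payload lengths.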
📊 Supported Encoding Algorithms
| Algorithm | Typical Overhead | Common Use Case |
|---|---|---|
| Base64 | ~33% | Email attachments, data URIs, JSON payloads |
| Base64 URL | ~33% | JWT tokens, OAuth codes, URL-safe identifiers |
| Base32 | ~60% | TOTP secrets (Google Authenticator), case-insensitive IDs |
| Base58 | ~37% | Bitcoin addresses, IPFS content IDs, human-readable tokens |
| Hex | +100% | Hash representation, byte dumps, color codes |
| URL Encoding | Variable | Query parameters, form submissions, URI components |
| Binary | +700% | Educational, bit-level debugging, protocol analysis |
| Octal | ~200% | Unix permissions, legacy systems, C-style escapes |
| ROT13 | 0% | Trivial obfuscation, spoiler-hiding in forums |
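To see where the overhead figures in the table come from, you can compare a few encodings of the same input directly in the console. This is an illustrative sketch with hex and binary conversion written by hand, not the tool's own encoders:

```javascript
const bytes = new TextEncoder().encode("overhead?");                       // 9 bytes
const b64 = btoa(String.fromCharCode(...bytes));                           // 4 chars per 3 bytes
const hex = [...bytes].map(b => b.toString(16).padStart(2, "0")).join(""); // 2 chars per byte
const bin = [...bytes].map(b => b.toString(2).padStart(8, "0")).join("");  // 8 chars per byte

console.log(b64.length, hex.length, bin.length); // 12, 18, 72 → +33%, +100%, +700%
```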
🚀 How the Performance Benchmark Works
The benchmark engine first runs 100 warm-up iterations per algorithm to allow the JavaScript JIT compiler to optimise the hot path. It then times N iterations (configurable, default 10 000) using the browser's high-resolution performance.now() API, which provides sub-millisecond precision. The total elapsed time is divided by N to compute average latency, and the reciprocal gives ops/sec.
The algorithms run sequentially with a brief setTimeout(0) yield between each to keep the browser responsive and allow the progress indicator to update. Results are ranked by encode speed (ops/sec) from fastest to slowest, while size comparisons rank from smallest to largest encoded output.
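The warm-up-then-measure pattern, and the yield between algorithms, can be sketched as follows (function and property names here are illustrative, not the tool's actual API):

```javascript
// Time one encoder: warm up first, then measure N iterations.
function benchmarkEncoder(encode, input, iterations = 10000) {
  for (let i = 0; i < 100; i++) encode(input); // warm-up: let the JIT optimise the hot path

  const t0 = performance.now();
  for (let i = 0; i < iterations; i++) encode(input);
  const elapsedMs = performance.now() - t0;

  return {
    avgLatencyUs: (elapsedMs / iterations) * 1000, // mean µs per encode call
    opsPerSec: iterations / (elapsedMs / 1000),    // reciprocal of the mean latency
  };
}

// Run algorithms sequentially, yielding to the event loop between each
// so the progress indicator can repaint.
async function runAll(encoders, input) {
  const results = [];
  for (const [name, encode] of Object.entries(encoders)) {
    results.push({ name, ...benchmarkEncoder(encode, input) });
    await new Promise(resolve => setTimeout(resolve, 0));
  }
  return results.sort((a, b) => b.opsPerSec - a.opsPerSec); // fastest first
}
```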
💡 Tips for Accurate Results
- Use at least 10 000 iterations for stable measurements. Fewer iterations produce noisier results due to JIT warm-up and scheduler jitter.
- Close other browser tabs consuming CPU to reduce interference.
- Test with realistic payloads – encoding speed can vary significantly with input length and character distribution (e.g., Base58 uses BigInt arithmetic, which is sensitive to payload size).
- ROT13 size overhead is always 0% because it performs a character-level substitution within the same character set, so output length is identical to input length.
- Hex and Binary show the worst size overhead but are often among the fastest to encode because they use simple lookup-table operations.
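The lookup-table point is easy to see in code: a precomputed 256-entry table turns hex encoding into a single array read per byte, with no per-call arithmetic. This is a sketch of the general technique, not the tool's implementation:

```javascript
// Precompute all 256 two-character hex strings once, up front.
const HEX = Array.from({ length: 256 }, (_, b) =>
  b.toString(16).padStart(2, "0")
);

function toHex(bytes) {
  let out = "";
  for (const b of bytes) out += HEX[b]; // one table lookup per byte
  return out;
}

console.log(toHex(new TextEncoder().encode("Hi"))); // "4869"
```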
🔐 Privacy & Security
All encoding and benchmarking runs entirely in your browser using standard JavaScript APIs. No input data is ever transmitted to any server. This makes the tool safe for benchmarking sensitive strings such as API keys or personal data samples. The tool does not store any input between sessions.