CSV to JSON Converter
Paste CSV (or TSV) on the left, get a JSON array of objects on the right. Headers auto-detected, numbers and booleans typed automatically. Runs in your browser — no upload.
- Paste your CSV into the left textarea.
- Click "Convert to JSON". The first row is treated as headers.
- Copy the JSON array, or download as data.json.
- TSV (tab-separated) and other delimiters are detected automatically by PapaParse.
What does it do?
Parses CSV (RFC 4180-ish) into a JSON array where each object key matches a header column. Quoted fields, embedded commas, escaped double-quotes, and CRLF / LF line endings are all handled. Numbers, booleans, and null-like values ("true", "false", empty) are typed automatically. The delimiter is auto-detected — you can paste TSV, semicolon-separated, or pipe-separated data and it will still work.
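The auto-typing rule can be sketched in plain JavaScript. This is a simplified illustration only: the page itself uses PapaParse, which additionally handles quoted fields, escaped quotes, and delimiter detection.

```javascript
// Type a raw CSV cell: "" -> null, "true"/"false" -> boolean,
// numeric strings -> number, everything else stays a string.
function typeValue(raw) {
  if (raw === "") return null;
  if (raw === "true") return true;
  if (raw === "false") return false;
  const n = Number(raw);
  if (raw.trim() !== "" && !Number.isNaN(n)) return n;
  return raw;
}

// Naive converter: first row is headers, comma delimiter, no quoting.
// (PapaParse handles the hard cases; this just shows the shape of the output.)
function csvToJson(csv) {
  const [headerLine, ...rows] = csv.trim().split(/\r?\n/);
  const headers = headerLine.split(",");
  return rows.map(row =>
    Object.fromEntries(row.split(",").map((cell, i) => [headers[i], typeValue(cell)]))
  );
}
```

Note that `typeValue("007")` returns the number 7 — exactly the leading-zero behavior described in the pitfalls section below.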
Example
CSV input:
name,age,role
Ada,36,engineer
Grace,40,scientist
JSON output:
[
{"name": "Ada", "age": 36, "role": "engineer"},
{"name": "Grace", "age": 40, "role": "scientist"}
]
Common CSV pitfalls and how to handle them
CSV looks simple but has more edge cases than people expect. These are the patterns that produce surprising output.
- Embedded commas without quoting. A field like `Smith, John` (no surrounding quotes) gets split into two columns. Wrap such fields in double quotes: `"Smith, John"`.
- Embedded double-quotes. Inside a quoted field, a literal `"` is escaped by doubling it: `"He said ""hi"""` decodes to `He said "hi"`. Single backslash escaping (`\"`) is non-standard and not supported.
- Inconsistent column counts. Rows with fewer columns than the header get null for missing keys; rows with more columns are reported as a parse warning. The conversion still completes, but check the output.
- Numeric IDs that lose leading zeros. `007` parses as the number 7, not the string "007". If the leading zeros matter (zip codes, phone numbers), keep that field as a string: post-process the JSON to restore the original text, or run the conversion yourself with dynamic typing disabled so every field stays a string.
- BOM at file start. Excel-saved CSVs often include a UTF-8 byte-order mark (`\uFEFF`) at the start. The parser strips it, but if you paste raw bytes from a hex editor you may see the BOM as a stray character on the first header.
- Mixed line endings. CRLF (Windows), LF (Unix), and CR (old Mac) are all recognized. If your output looks like one giant single row, the file may have no line breaks at all — common when CSV is generated by concatenating without `\n`.
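The quoting rule from the second bullet can be sketched as a small field splitter. This is a simplified illustration of RFC 4180 quoting, not PapaParse's actual tokenizer:

```javascript
// Split one CSV record into fields, honoring RFC 4180 quoting:
// a field wrapped in double quotes may contain commas, and a literal
// double quote inside it is written as two double quotes ("").
function splitRecord(line) {
  const fields = [];
  let field = "";
  let inQuotes = false;
  for (let i = 0; i < line.length; i++) {
    const ch = line[i];
    if (inQuotes) {
      if (ch === '"' && line[i + 1] === '"') { field += '"'; i++; } // escaped quote
      else if (ch === '"') inQuotes = false;                        // closing quote
      else field += ch;                                             // commas OK here
    } else if (ch === '"') {
      inQuotes = true;                                              // opening quote
    } else if (ch === ",") {
      fields.push(field); field = "";                               // field boundary
    } else {
      field += ch;
    }
  }
  fields.push(field);
  return fields;
}
```

With this, `"Smith, John",42` yields two fields, not three, and `"He said ""hi"""` decodes to `He said "hi"`.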
Frequently asked questions
Does this support tab-separated values (TSV)?
Yes. PapaParse auto-detects the delimiter from the first kilobyte of input. Tabs, semicolons, pipes, and commas all work without configuration. If detection picks the wrong delimiter (rare on real data), pre-process the input so the intended delimiter is the most frequent separator on each row.
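Delimiter detection can be approximated by counting candidate separators on the first line. This is a naive sketch; PapaParse's real heuristic also scores how consistent the field count is across several rows:

```javascript
// Guess the delimiter by counting which candidate appears most often
// in the first line. Falls back to comma when nothing else wins.
function guessDelimiter(text) {
  const firstLine = text.split(/\r?\n/, 1)[0];
  const candidates = [",", "\t", ";", "|"];
  let best = ",", bestCount = 0;
  for (const d of candidates) {
    const count = firstLine.split(d).length - 1;
    if (count > bestCount) { best = d; bestCount = count; }
  }
  return best;
}
```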
What happens if my CSV has no header row?
The current configuration assumes the first row is headers. If your CSV is headerless, prepend a synthetic header row like `a,b,c` before pasting, or run PapaParse yourself with `header: false` if you need array-of-arrays output.
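Prepending a synthetic header can also be scripted. The helper below is hypothetical (not part of this tool), and the column names `c1, c2, ...` are arbitrary:

```javascript
// Prepend a synthetic header row (c1, c2, ...) to a headerless CSV,
// sized to the number of fields on the first data row.
// Naive: assumes the first row has no quoted fields containing the delimiter.
function addSyntheticHeader(csv, delimiter = ",") {
  const firstRow = csv.split(/\r?\n/, 1)[0];
  const columns = firstRow.split(delimiter).length;
  const header = Array.from({ length: columns }, (_, i) => `c${i + 1}`);
  return header.join(delimiter) + "\n" + csv;
}
```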
How are dates handled?
They stay as strings. The parser only auto-types numbers and booleans — date parsing is intentionally not done because date format ambiguity (`01/02/03` is January 2 in the US, February 1 in the UK) is too risky to guess. Parse them downstream where you know the source convention.
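Downstream, parse dates with the convention you know applies. For example, if the source is known to be day-first (hypothetical helper, not part of this tool):

```javascript
// Parse a known day-first (DD/MM/YYYY) date string explicitly,
// rather than letting a library guess the convention.
function parseDayFirst(s) {
  const [day, month, year] = s.split("/").map(Number);
  return new Date(year, month - 1, day); // JS months are 0-based
}
```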
Can I convert really large CSV files?
Up to about 50 MB before the browser starts to feel sluggish. The textarea is the bottleneck, not the parser. For anything larger, run PapaParse in a Node script; it is the same library.
Is my CSV uploaded anywhere?
No. Everything runs in your browser — your data is parsed by JavaScript on this page and never sent to any server. Verify in the browser developer tools: zero network requests fire when you click Convert.
How do I get JSON in a different shape (nested, grouped)?
This tool produces a flat array of flat objects — that is what CSV represents. To get nested structure, post-process the output with a script (group by a column, transform field names with prefixes, etc.). Trying to encode hierarchy in CSV usually causes more problems than it solves.
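For example, grouping the flat array by a column is a few lines of post-processing (hypothetical `groupBy` helper, not part of this tool):

```javascript
// Group a flat array of row objects by the value of one column.
// Returns an object mapping each distinct value to its rows.
function groupBy(rows, key) {
  const groups = {};
  for (const row of rows) {
    const k = row[key];
    (groups[k] ??= []).push(row);
  }
  return groups;
}
```

Applied to the example output above, `groupBy(rows, "role")` nests the engineers and scientists under their role names.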