Chapter 11. Data conversion

Table of Contents

11.1. Text data conversion tools
11.1.1. Converting a text file with iconv
11.1.2. Checking file to be UTF-8 with iconv
11.1.3. Converting file names with iconv
11.1.4. EOL conversion
11.1.5. TAB conversion
11.1.6. Editors with auto-conversion
11.1.7. Plain text extraction
11.1.8. Highlighting and formatting plain text data
11.2. XML data
11.2.1. Basic hints for XML
11.2.2. XML processing
11.2.3. The XML data extraction
11.2.4. The XML data lint
11.3. Type setting
11.3.1. roff typesetting
11.3.2. TeX/LaTeX
11.3.3. Pretty print a manual page
11.3.4. Creating a manual page
11.4. Printable data
11.4.1. Ghostscript
11.4.2. Merge two PS or PDF files
11.4.3. Printable data utilities
11.4.4. Printing with CUPS
11.5. The mail data conversion
11.5.1. Mail data basics
11.6. Graphic data tools
11.7. Miscellaneous data conversion

This chapter describes tools and tips for converting data formats on the Debian system.

Standards-based tools are in very good shape but support for proprietary data formats is limited.

The following packages for text data conversion caught my eye.

[Tip] Tip

iconv(1) is provided as part of the libc6 package and is available on practically all Unix-like systems for converting character encodings.

You can convert the encoding of a text file with iconv(1) as follows.

$ iconv -f encoding1 -t encoding2 input.txt >output.txt

Encoding values are case insensitive, and "-" and "_" are ignored when matching. Supported encodings can be listed with the "iconv -l" command.
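Since iconv(1) exits with an error at the first invalid byte sequence, a no-op conversion can be used to check whether a file is valid UTF-8 (the file name and content here are only illustrative):

```shell
# create a sample file (the content is illustrative)
printf 'caf\303\251\n' > input.txt        # "café" in valid UTF-8
# iconv exits non-zero at the first invalid byte sequence,
# so a no-op UTF-8 to UTF-8 conversion works as a validity check
if iconv -f UTF-8 -t UTF-8 input.txt >/dev/null 2>&1; then
    echo "input.txt is valid UTF-8"
else
    echo "input.txt is not valid UTF-8"
fi
```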

[Note] Note

Some encodings are only supported for the data conversion and are not used as locale values (Section 8.1, “The locale”).

For character sets that fit in a single byte, such as ASCII and the ISO-8859 character sets, the character encoding means almost the same thing as the character set.

For character sets with many characters, such as JIS X 0213 for Japanese or the Universal Character Set (UCS, Unicode, ISO-10646-1) for practically all languages, there are many encoding schemes to map them into sequences of byte data.

For these, there is a clear distinction between the character set and the character encoding.

The term code page is used as a synonym for the character encoding tables of some vendor-specific encodings.

[Note] Note

Please note that most encoding systems share the same codes with ASCII for the 7-bit characters, but there are some exceptions. If you are converting old Japanese C programs and URL data from the casually-called shift-JIS encoding format to UTF-8, use "CP932" as the encoding name instead of "shift-JIS" to get the expected results: 0x5C → "\" and 0x7E → "~". Otherwise, these bytes are converted to the wrong characters.
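The difference can be sketched with iconv(1) itself, assuming the glibc converters (the bytes 0x5C and 0x7E are written in octal for printf(1)):

```shell
# Under CP932, bytes 0x5C and 0x7E become backslash and tilde:
printf '\134\176' | iconv -f CP932 -t UTF-8      # -> \~
# Under the strict shift-JIS tables they become the yen sign and overline:
printf '\134\176' | iconv -f SHIFT-JIS -t UTF-8  # -> ¥‾
```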

[Tip] Tip

recode(1) may also be used; it offers more than the combined functionality of iconv(1), fromdos(1), todos(1), frommac(1), and tomac(1). For more, see "info recode".
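The EOL (end-of-line) conversions performed by tools such as fromdos(1) and todos(1) can also be sketched with standard tools; the file names and content here are only illustrative, and the sed(1) form relies on the GNU sed "\r" escape:

```shell
# a sample file with DOS line endings (illustrative content)
printf 'hello\r\nworld\r\n' > dosfile.txt
# DOS/Windows (CRLF) to Unix (LF): delete the carriage returns
tr -d '\r' < dosfile.txt > unixfile.txt
# Unix (LF) to DOS (CRLF): append a carriage return to each line (GNU sed)
sed 's/$/\r/' unixfile.txt > dosfile2.txt
```

Note that "tr -d '\r'" removes all carriage returns, including any in the middle of a line; for plain text files this is normally what you want.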

Intelligent modern editors such as the vim program are quite smart and cope well with any encoding system and any file format. You should use these editors under the UTF-8 locale in a UTF-8 capable console for the best compatibility.

An old western European Unix text file, "u-file.txt", stored in the latin1 (iso-8859-1) encoding can be edited simply with vim by the following.

$ vim u-file.txt

This is possible because the file-encoding auto detection mechanism in vim tries the UTF-8 encoding first and, if that fails, assumes latin1.

An old Polish Unix text file, "pu-file.txt", stored in the latin2 (iso-8859-2) encoding can be edited with vim by the following.

$ vim '+e ++enc=latin2 pu-file.txt'

An old Japanese Unix text file, "ju-file.txt", stored in the eucJP encoding can be edited with vim by the following.

$ vim '+e ++enc=eucJP ju-file.txt'

An old Japanese MS-Windows text file, "jw-file.txt", stored in the so called shift-JIS encoding (more precisely: CP932) can be edited with vim by the following.

$ vim '+e ++enc=CP932 ++ff=dos jw-file.txt'

When a file is opened with the "++enc" and "++ff" options, ":w" in the Vim command line stores it in the original format, overwriting the original file. You can also specify the saving format and the file name in the Vim command line, e.g., ":w ++enc=utf8 new.txt".

Please refer to mbyte.txt ("multi-byte text support") in the vim on-line help and to Table 11.2, “List of encoding values and their usage” for the locale values used with "++enc".

The emacs family of programs can perform the equivalent functions.

The Extensible Markup Language (XML) is a markup language for documents containing structured information.

See introductory information at XML.COM.

XML text looks somewhat like HTML. It enables us to manage multiple formats of output for a document. One easy XML system is the docbook-xsl package, which is used here.

Each XML file starts with a standard XML declaration such as the following.

<?xml version="1.0" encoding="UTF-8"?>

The basic syntax for one XML element is marked up as follows.

<name attribute="value">content</name>

An XML element with empty content is marked up in the following short form.

<name attribute="value" />

The "attribute="value"" in the above examples are optional.

A comment in XML is marked up as follows.

<!-- comment -->

Other than adding markup, XML requires minor conversion of the content using predefined entities for the following characters: "&amp;" for "&", "&lt;" for "<", "&gt;" for ">", "&quot;" for """, and "&apos;" for "'".

[Caution] Caution

"<" or "&" can not be used in attributes or elements.

[Note] Note

When SGML-style user defined entities, e.g. "&some-tag;", are used, the first definition wins over later ones. An entity definition is expressed as "<!ENTITY some-tag "entity value">".

[Note] Note

As long as the XML markup is done consistently with a certain set of tag names (either with some data as content or as attribute values), conversion to another XML format is a trivial task using Extensible Stylesheet Language Transformations (XSLT).

There are many tools available to process XML files such as the Extensible Stylesheet Language (XSL).

Basically, once you create a well-formed XML file, you can convert it to any format using Extensible Stylesheet Language Transformations (XSLT).
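As a sketch of such a transformation (the element names here are illustrative, and the xsltproc(1) command from the xsltproc package is assumed), the following stylesheet extracts the content of every <title> element as plain text:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- extract-titles.xsl: print the text of every <title> element,
     one per line (illustrative element names) -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text"/>
  <xsl:template match="/">
    <xsl:for-each select="//title">
      <xsl:value-of select="."/>
      <xsl:text>&#10;</xsl:text>
    </xsl:for-each>
  </xsl:template>
</xsl:stylesheet>
```

It would be run as "$ xsltproc extract-titles.xsl input.xml".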

The Extensible Stylesheet Language for Formatting Objects (XSL-FO) is supposed to be the solution for formatting. The fop package is new to the Debian main archive due to its dependence on the Java programming language. So LaTeX code is usually generated from XML using XSLT, and the LaTeX system is used to create printable files such as DVI, PostScript, and PDF.

Since XML is a subset of Standard Generalized Markup Language (SGML), it can be processed by the extensive tools available for SGML, such as Document Style Semantics and Specification Language (DSSSL).

[Tip] Tip

GNOME's yelp is sometimes handy to read DocBook XML files directly since it renders decently on X.

The Unix troff program, originally developed by AT&T, can be used for simple typesetting. It is usually used to create manpages.

TeX, created by Donald Knuth, is a very powerful typesetting tool and is the de facto standard. LaTeX, originally written by Leslie Lamport, enables high-level access to the power of TeX.

Traditionally, roff is the main Unix text processing system. See roff(7), groff(7), groff(1), grotty(1), troff(1), groff_mdoc(7), groff_man(7), groff_ms(7), groff_me(7), groff_mm(7), and "info groff".

You can read or print a good tutorial and reference on the "-me" macros in "/usr/share/doc/groff/" by installing the groff package.

[Tip] Tip

"groff -Tascii -me -" produces plain text output with ANSI escape code. If you wish to get manpage like output with many "^H" and "_", use "GROFF_NO_SGR=1 groff -Tascii -me -" instead.

[Tip] Tip

To remove "^H" and "_" from a text file generated by groff, filter it by "col -b -x".

The TeX Live software distribution offers a complete TeX system. The texlive metapackage provides a decent selection of TeX Live packages which should suffice for the most common tasks.

There are many references available for TeX and LaTeX.

  • The teTeX HOWTO: The Linux-teTeX Local Guide

  • tex(1)

  • latex(1)

  • texdoc(1)

  • texdoctk(1)

  • "The TeXbook", by Donald E. Knuth, (Addison-Wesley)

  • "LaTeX - A Document Preparation System", by Leslie Lamport, (Addison-Wesley)

  • "The LaTeX Companion", by Goossens, Mittelbach, Samarin, (Addison-Wesley)

This is the most powerful typesetting environment. Many SGML processors use it as their back-end text processor. LyX, provided by the lyx package, and GNU TeXmacs, provided by the texmacs package, offer a nice WYSIWYG editing environment for LaTeX, while many people choose Emacs or Vim as their source editor.

There are many online resources available.

When documents become bigger, TeX may sometimes cause errors. You must increase the pool size in "/etc/texmf/texmf.cnf" (or more appropriately edit "/etc/texmf/texmf.d/95NonPath" and run update-texmf(8)) to fix this.
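The relevant settings look like the following (pool_size and main_memory are real texmf.cnf variables, but the values shown here are only illustrative; pick sizes that fit your document):

```
% in /etc/texmf/texmf.d/95NonPath, then run update-texmf(8)
pool_size = 1250000
main_memory = 5000000
```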

[Note] Note

The TeX source of "The TeXbook" is available online. This file contains most of the required macros. I heard that you can process this document with tex(1) after commenting out lines 7 to 10 and adding "\input manmac \proofmodefalse". It's strongly recommended to buy this book (and all other books by Donald E. Knuth) instead of using the online version, but the source is a great example of TeX input!

Printable data is expressed in the PostScript format on the Debian system. The Common Unix Printing System (CUPS) uses Ghostscript as its rasterizer backend program for non-PostScript printers.

You can merge two PostScript (PS) or Portable Document Format (PDF) files using gs(1) of Ghostscript.

$ gs -q -dNOPAUSE -dBATCH -sDEVICE=pswrite -sOutputFile=bla.ps -f foo1.ps foo2.ps
$ gs -q -dNOPAUSE -dBATCH -sDEVICE=pdfwrite -sOutputFile=bla.pdf -f foo1.pdf foo2.pdf

[Note] Note

PDF, which is a widely used cross-platform printable data format, is essentially the compressed PS format with a few additional features and extensions.

[Tip] Tip

On the command line, psmerge(1) and other commands from the psutils package are useful for manipulating PostScript documents. pdftk(1) from the pdftk package is useful for manipulating PDF documents, too.

Both the lp(1) and lpr(1) commands offered by the Common Unix Printing System (CUPS) provide options for customized printing of the printable data.

You can print 3 collated copies of a file using one of the following commands.

$ lp -n 3 -o Collate=True filename
$ lpr -#3 -o Collate=True filename

You can further customize printer operation by using printer options such as "-o number-up=2", "-o page-set=even", "-o page-set=odd", "-o scaling=200", "-o natural-scaling=200", etc., as documented at Command-Line Printing and Options.

The following packages for mail data conversion caught my eye.

[Tip] Tip

An Internet Message Access Protocol version 4 (IMAP4) server may be used to move mails out of proprietary mail systems, if the mail client software can be configured to use the IMAP4 server too.

Mail (SMTP) data should be limited to a series of 7-bit data. So binary data and 8-bit text data are encoded into a 7-bit format with the Multipurpose Internet Mail Extensions (MIME) and the selection of a charset (see Table 11.2, “List of encoding values and their usage”).
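The effect of such an encoding can be sketched with the base64(1) command from the coreutils package (the sample string is illustrative):

```shell
# 8-bit UTF-8 text encoded into a 7-bit safe Base64 form
printf 'héllo\n' | base64              # -> aMOpbGxvCg==
# and decoded back to the original bytes
printf 'aMOpbGxvCg==\n' | base64 -d    # -> héllo
```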

The standard mail storage format is the mbox format, following RFC 2822 (which updates RFC 822). See mbox(5) (provided by the mutt package).

For European languages, "Content-Transfer-Encoding: quoted-printable" with the ISO-8859-1 charset is usually used for mail since there are not many 8-bit characters. If European text is encoded in UTF-8, "Content-Transfer-Encoding: quoted-printable" is likely to be used since it is mostly 7-bit data.

For Japanese, traditionally "Content-Type: text/plain; charset=ISO-2022-JP" is used for mail to keep the text in 7 bits. But older Microsoft systems may send mail data in Shift-JIS without proper declaration. If Japanese text is encoded in UTF-8, Base64 is likely to be used since the text contains mostly 8-bit data. The situation for other Asian languages is similar.
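The 7-bit nature of ISO-2022-JP can be seen with iconv(1) and od(1); for example, the single hiragana letter "あ" becomes its 7-bit JIS code wrapped in escape sequences (assuming the glibc converter):

```shell
# "あ" (U+3042) converted to ISO-2022-JP; every output byte is below 0x80
printf 'あ' | iconv -f UTF-8 -t ISO-2022-JP | od -An -tx1
# -> 1b 24 42 24 22 1b 28 42   (ESC $ B, JIS code 0x2422, ESC ( B)
```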

[Note] Note

If your non-Unix mail data is accessible by non-Debian client software which can talk to an IMAP4 server, you may be able to move the data out by running your own IMAP4 server.

[Note] Note

If you use other mail storage formats, moving them to the mbox format is a good first step. A versatile client program such as mutt(1) may be handy for this.

You can split mailbox contents into individual messages using procmail(1) and formail(1).

Each mail message can be unpacked using munpack(1) from the mpack package (or other specialized tools) to obtain the MIME-encoded contents.

The following packages for graphic data conversion, editing, and organization caught my eye.

Table 11.17. List of graphic data tools

package popcon size keyword description
gimp V:61, I:300 19827 image(bitmap) GNU Image Manipulation Program
imagemagick I:353 221 image(bitmap) image manipulation programs
graphicsmagick V:2, I:15 5306 image(bitmap) image manipulation programs (fork of imagemagick)
xsane V:16, I:161 2346 image(bitmap) GTK-based X11 frontend for SANE (Scanner Access Now Easy)
netpbm V:28, I:363 5056 image(bitmap) graphics conversion tools
icoutils V:12, I:82 221 png↔ico(bitmap) convert MS Windows icons and cursors to and from PNG formats (favicon.ico)
scribus V:2, I:22 30523 ps/pdf/SVG/… Scribus DTP editor
libreoffice-draw V:119, I:427 13442 image(vector) LibreOffice office suite - drawing
inkscape V:35, I:167 87324 image(vector) SVG (Scalable Vector Graphics) editor
dia V:3, I:28 3620 image(vector) diagram editor (Gtk)
xfig V:1, I:14 6334 image(vector) Facility for Interactive Generation of figures under X11
pstoedit V:3, I:76 1003 ps/pdf→image(vector) PostScript and PDF files to editable vector graphics converter (SVG)
libwmf-bin V:8, I:176 180 Windows/image(vector) Windows metafile (vector graphic data) conversion tools
fig2sxd V:0, I:0 149 fig→sxd(vector) convert XFig files to Draw format
unpaper V:2, I:19 460 image→image post-processing tool for scanned pages for OCR
tesseract-ocr V:8, I:36 1507 image→text free OCR software based on the HP's commercial OCR engine
tesseract-ocr-eng V:6, I:37 4032 image→text OCR engine data: tesseract-ocr language files for English text
gocr V:1, I:10 531 image→text free OCR software
ocrad V:0, I:4 318 image→text free OCR software
eog V:64, I:269 7640 image(Exif) Eye of GNOME graphics viewer program
gthumb V:4, I:20 5318 image(Exif) image viewer and browser (GNOME)
geeqie V:5, I:18 15785 image(Exif) image viewer using GTK
shotwell V:17, I:237 6402 image(Exif) digital photo organizer (GNOME)
gtkam V:0, I:5 1154 image(Exif) application for retrieving media from digital cameras (GTK)
gphoto2 V:0, I:11 955 image(Exif) The gphoto2 digital camera command-line client
gwenview V:27, I:94 11042 image(Exif) image viewer (KDE)
kamera I:94 854 image(Exif) digital camera support for KDE applications
digikam V:2, I:12 2921 image(Exif) digital photo management application for KDE
exiv2 V:2, I:36 305 image(Exif) EXIF/IPTC metadata manipulation tool
exiftran V:1, I:18 70 image(Exif) transform digital camera jpeg images
jhead V:0, I:10 131 image(Exif) manipulate the non-image part of Exif compliant JPEG (digital camera photo) files
exif V:1, I:16 339 image(Exif) command-line utility to show EXIF information in JPEG files
exiftags V:0, I:4 292 image(Exif) utility to read Exif tags from a digital camera JPEG file
exifprobe V:0, I:4 499 image(Exif) read metadata from digital pictures
dcraw V:2, I:16 562 image(Raw)→ppm decode raw digital camera images
findimagedupes V:0, I:1 82 image→fingerprint find visually similar or duplicate images
ale V:0, I:0 839 image→image merge images to increase fidelity or create mosaics
imageindex V:0, I:2 145 image(Exif)→html generate static HTML galleries from images
outguess V:0, I:2 231 jpeg,png universal Steganographic tool
librecad V:2, I:16 8309 DXF CAD data editor (KDE)
blender V:3, I:36 85549 blend, TIFF, VRML, … 3D content editor for animation etc
mm3d V:0, I:0 3868 ms3d, obj, dxf, … OpenGL based 3D model editor
open-font-design-toolkit I:0 10 ttf, ps, … metapackage for open font design
fontforge V:0, I:7 4191 ttf, ps, … font editor for PS, TrueType and OpenType fonts
xgridfit V:0, I:0 806 ttf program for gridfitting and hinting TrueType fonts

[Tip] Tip

Search more image tools using regex "~Gworks-with::image" in aptitude(8) (see Section 2.2.6, “Search method options with aptitude”).

Although GUI programs such as gimp(1) are very powerful, command line tools such as imagemagick(1) are quite useful for automating image manipulation via scripts.

The de facto image file format of digital cameras is the Exchangeable Image File Format (EXIF), which is the JPEG image file format with additional metadata tags. It can hold information such as the date, time, and camera settings.

The Lempel-Ziv-Welch (LZW) lossless data compression patent has expired. Graphics Interchange Format (GIF) utilities which use the LZW compression method are now freely available on the Debian system.

[Tip] Tip

Any digital camera or scanner with removable recording media works with Linux through USB storage readers, since it follows the Design rule for Camera File system and uses the FAT filesystem. See Section 10.1.7, “Removable storage device”.

There are many other programs for converting data. The following packages caught my eye using the regex "~Guse::converting" in aptitude(8) (see Section 2.2.6, “Search method options with aptitude”).

You can also extract data from the RPM format as follows.

$ rpm2cpio file.src.rpm | cpio --extract