String::Escape - Registry of string functions, including backslash escapes
TOML implements a parser for Tom's Obvious, Minimal Language, as defined at [1].
TOML exports two subroutines, from_toml and to_toml.
[1] https://github.com/mojombo/toml
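A minimal sketch using the two exported subroutines (the document names from_toml and to_toml; the exact TOML text and data below are illustrative only):

```perl
use strict;
use warnings;
use TOML qw(from_toml to_toml);

# Parse TOML text into a Perl data structure.
my $data = from_toml(<<'END');
title = "example"

[owner]
name = "tom"
END

print $data->{owner}{name}, "\n";

# And back: serialize a hashref to TOML text.
my $toml = to_toml({ title => "example" });
print $toml;
```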
FOP is the world's first print formatter driven by XSL formatting
objects. It is a Java application that reads a formatting object
tree conforming to the XSL candidate release (21 November 2000) and
then turns it into a PDF document or allows you to preview it
directly on screen.
FOP is part of Apache's XML project. The homepage of FOP is
This module is a wrapper around the diff algorithm from the module
Algorithm::Diff. Its job is to simplify visualization of the differences
between two strings.
Compared to the many other Diff modules, the output is not in diff style,
nor are the recognised differences on line or word boundaries; they are at
character level.
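Since the module builds on Algorithm::Diff, a character-level comparison can be sketched directly with that module's sdiff function. This illustrates only the underlying idea, not this wrapper's own API:

```perl
use strict;
use warnings;
use Algorithm::Diff qw(sdiff);

# Compare two strings character by character.
my @a = split //, 'kitten';
my @b = split //, 'sitting';

# sdiff returns one [op, $a_char, $b_char] triple per position:
# 'u' = unchanged, 'c' = changed, '-' = only in @a, '+' = only in @b.
my @hunks = sdiff(\@a, \@b);
for my $hunk (@hunks) {
    my ($op, $old, $new) = @$hunk;
    printf "%s %s %s\n", $op, $old, $new;
}
```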
String::Format is a Perl module which gives the user
sprintf-like string formatting capabilities with arbitrary
format definitions.
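A minimal sketch, assuming the module's conventional stringf interface, in which the caller maps each format letter to a value of their choosing:

```perl
use strict;
use warnings;
use String::Format;   # exports stringf by default

# Each %-letter is defined by the caller, not fixed by the module.
my $out = stringf(
    "I like %f and %c.",
    f => 'fruit',
    c => 'cheese',
);
print "$out\n";   # I like fruit and cheese.
```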
String::HexConvert is a wrapper around Perl's pack and unpack that converts
a string of hex digits to ASCII and the other way around.
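The underlying pack/unpack technique the module wraps looks like this in core Perl (the module's own function names are not given in this blurb, so only the mechanism is shown):

```perl
use strict;
use warnings;

# Hex digits -> bytes: 'H*' consumes pairs of hex digits, high nybble first.
my $ascii = pack 'H*', '48656c6c6f';      # "Hello"

# Bytes -> hex digits, the other way around.
my $hex = unpack 'H*', 'Hello';           # "48656c6c6f"

print "$ascii $hex\n";
```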
String::ToIdentifier::EN provides a utility method, "to_identifier" for
converting an arbitrary string into a readable representation using the ASCII
subset of \w for use as an identifier in a computer program. The intent is to
make unique identifier names from which the content of the original string can
be easily inferred by a human just by reading the identifier.
If you need the full set of \w including Unicode, see the subclass
String::ToIdentifier::EN::Unicode.
Currently, this process is one way only, and will likely remain this way.
The default is to create camelCase identifiers, or you may pass in a separator
char of your choice such as _.
Binary char groups will be separated by _ even in camelCase identifiers to make
them easier to read, e.g.: foo_2_0xFF_Bar.
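The usage pattern, given the to_identifier subroutine named above (passing the separator as a second argument follows the description, but treat the exact signature as an assumption):

```perl
use strict;
use warnings;
use String::ToIdentifier::EN qw(to_identifier);

# Default: a camelCase identifier built from the ASCII subset of \w.
my $camel = to_identifier('foo bar');

# With an explicit separator character instead of camelCase.
my $snake = to_identifier('foo bar', '_');

print "$camel $snake\n";
```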
A simple string tokenizer which takes a string and splits it on
whitespace. It also optionally takes a string of characters to use as
delimiters, and returns them with the token set as well. This allows for
splitting the string in many different ways.
This is a very basic tokenizer, so more complex needs should be addressed
either with a custom-written tokenizer or by post-processing the output
generated by this module. Basically, this will not fill everyone's needs,
but it spans a gap between a simple split / /, $string and the other
options that involve much larger and more complex modules.
Also note that this is not a lexical analyzer. Many people confuse
tokenization with lexical analysis. A tokenizer merely splits its input
into specific chunks; a lexical analyzer classifies those chunks.
Sometimes these two steps are combined, but not here.
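A core-Perl sketch of the behaviour described above (the tokenize helper here is hypothetical, not this module's actual API): split on whitespace, but return any caller-supplied delimiter characters as tokens of their own:

```perl
use strict;
use warnings;

# Split $str on whitespace; if $delims is given, each of those
# characters also becomes a token in its own right.
sub tokenize {
    my ($str, $delims) = @_;
    if (defined $delims) {
        my $class = qr/[\Q$delims\E]/;
        # Captured delimiters are kept; whitespace runs are dropped.
        return grep { defined && length } split /($class)|\s+/, $str;
    }
    return split ' ', $str;
}

my @t = tokenize('(foo bar)', '()');
print join(' ', @t), "\n";   # ( foo bar )
```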
This module handles the simple but common problem of long strings
and finite terminal width.
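One common approach to that problem, sketched in plain Perl (the elide helper is hypothetical, not necessarily this module's interface): keep both ends of the string and drop the middle so the result fits the available width:

```perl
use strict;
use warnings;

# Shorten $str to at most $width characters, replacing the middle
# with '...' and keeping both ends visible.
sub elide {
    my ($str, $width) = @_;
    return $str if length($str) <= $width;
    my $keep  = $width - 3;            # room for the '...'
    my $left  = int($keep / 2);
    my $right = $keep - $left;
    return substr($str, 0, $left) . '...' . substr($str, -$right);
}

print elide('abcdefghijklmnop', 10), "\n";   # abc...mnop
```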
Uses the output of /dev/urandom, simply converting bytes into 8-bit
characters.
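A minimal sketch of that conversion (assuming a Unix system where /dev/urandom is readable; the byte count of 16 is arbitrary):

```perl
use strict;
use warnings;

# Read raw random bytes; each byte already is an 8-bit character.
open my $fh, '<:raw', '/dev/urandom' or die "urandom: $!";
read $fh, my $bytes, 16 or die "read: $!";
close $fh;

# Inspect the byte values if needed (each is 0..255).
my @codes = unpack 'C*', $bytes;
print scalar(@codes), " random bytes\n";
```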