A Free Implementation of the Unicode Bidirectional Algorithm.
The library implements all of the algorithm as described in the "Unicode
Standard Annex #9, The Bidirectional Algorithm,
http://www.unicode.org/unicode/reports/tr9/". FriBidi is exhaustively tested
against the Bidi Reference Code and, to the best of our knowledge, does not
contain any conformance bugs.
In the API, we were inspired by the document "Bi-Di languages support - BiDi
API proposal" by Franck Portaneri which he wrote as a proposal for adding BiDi
support to Mozilla.
Internally the library uses Unicode entirely. The character property function
was automatically created from the Unicode property list data file,
PropList.txt, available from the Unicode Online Data site. This means that
every Unicode character will be treated in strict accordance with the Unicode
specification. The same is true for the mirroring of characters, which also
works for all the characters listed as mirrorable in the Unicode specification.
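The per-character property lookup described above can be illustrated with Python's standard unicodedata module, which exposes the same Unicode Character Database fields (bidirectional category and mirrored flag). This is an analogy only, not FriBidi's C API:

```python
import unicodedata

# Look up the Unicode bidirectional category and the mirrored flag
# for a few characters, as derived from the Unicode Character Database:
# 'L' = left-to-right, 'AL' = Arabic letter, 'ON' = other neutral.
for ch in ("A", "\u0627", "("):   # Latin A, Arabic alef, open parenthesis
    print(ch, unicodedata.bidirectional(ch), unicodedata.mirrored(ch))
```

The mirrored flag is what drives mirroring of characters such as parentheses in right-to-left runs.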
Unicode::Map8
-------------
The Unicode::Map8 class implements efficient mapping tables between
8-bit character sets and 16-bit character sets like Unicode. About
170 different mapping tables between various known character sets and
Unicode are distributed with this package. The source of these tables
is the vendor mapping tables provided by Unicode, Inc. and the code
tables in RFC 1345. New maps can easily be installed.
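As a rough sketch of what such a mapping table does, here is the same 8-bit-to-Unicode round trip using Python's built-in codecs rather than the Perl API:

```python
# An 8-bit character set is a 256-entry table mapping byte values to
# Unicode code points; ISO-8859-1 (Latin-1) maps byte N to code point N.
octets = bytes([0x41, 0xE9])        # 'A' and 'e-acute' in ISO-8859-1
text = octets.decode("iso-8859-1")  # 8-bit -> Unicode
back = text.encode("iso-8859-1")    # Unicode -> 8-bit
print(text, back == octets)
```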
By coincidence Martin Schwartz created a similar module at the same
time I did. His module is called Unicode::Map and should be available
on CPAN too. Both modules now support a unified interface. Martin's
module will be deprecated in the future.
Since UTF8 support is coming to Perl soon, there might be good reasons
to move this module in the direction of mapping to/from UTF8. I will
probably do so once the Unicode support in the Perl core settles.
COPYRIGHT 1998-1999 Gisle Aas. All rights reserved.
A fast JSON parser and generator optimized for statistical data and
the web. Started out as a fork of RJSONIO, but has been completely
rewritten in recent versions. The package offers flexible, robust,
high performance tools for working with JSON in R and is particularly
powerful for building pipelines and interacting with web APIs. The
implementation is based on the mapping described in the vignette
of the package (Ooms, 2014). In addition to drop-in replacements
for toJSON and fromJSON, jsonlite contains functions to stream,
validate, and prettify JSON data. The unit tests included with the
package verify that all edge cases are encoded and decoded consistently
for use with dynamic data in systems and applications.
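The encode/decode round-trip consistency described above can be sketched with Python's standard json module (an analogy only; jsonlite itself is an R package):

```python
import json

# Round-trip a record set through JSON; edge cases such as null
# must survive encoding and decoding unchanged.
records = [{"name": "a", "x": 1.5}, {"name": "b", "x": None}]
encoded = json.dumps(records)
decoded = json.loads(encoded)
assert decoded == records   # consistent round trip, null preserved
print(encoded)
```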
With Glom you can design table definitions and the relationships
between them, plus arrange the fields on the screen. You can edit
and search the data in those tables, and specify field values in
terms of other fields. It's as easy as it should be.
The design is loosely based on FileMaker Pro, with the added
advantage of separation between interface and data. Its simple
framework should be enough to implement most database
applications. Without Glom these systems normally consist of lots
of repetitive, unmaintainable code.
Glom-specific data such as the relationship definitions is saved
in the Glom document. Glom re-connects to the database server
when it loads a previous Glom document. The document is in XML
format.
Glom uses the PostgreSQL database backend but it cannot edit
databases that it did not create, because it uses only a simple
subset of Postgres functionality.
hamsterdb is a lightweight embedded database engine. It has been
in development for more than three years and concentrates
on ease of use, high performance, stability and portability.
The hamsterdb API is simple and self-documenting. The interface
is similar to other widely-used database engines. Fast algorithms
and data structures guarantee high performance for all scenarios.
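As an illustration of the embedded key-value workflow such an engine provides, here is an analogous sketch using Python's standard dbm module (hamsterdb's own C API is not shown here):

```python
import dbm
import os
import tempfile

# Open-or-create an embedded database file, insert a record, look it up.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
with dbm.open(path, "c") as db:    # "c": create the file if needed
    db[b"key"] = b"value"          # insert a key/value pair
    print(db[b"key"])              # fetch it back
```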
Hamsterdb has hundreds of unit tests with a test coverage of over
90%. Each release is tested with thousands of acceptance tests in
many different configurations, tested on up to six different
hardware architectures and operating systems. Written in plain
ANSI-C, hamsterdb runs on many architectures: Intel-compatible
(x86, x64), PowerPC, SPARC, ARM, RISC and others. Tested operating
systems include Microsoft Windows, Microsoft Windows CE, Linux,
SunOS and other Unices.
JDB is a package of commands for manipulating flat-ASCII databases
from shell scripts. JDB is useful for processing medium amounts of data
(with very little data you'd do it by hand, with megabytes you might
want a real database). JDB is very good at doing things like:
* extracting measurements from experimental output
* re-examining data to address different hypotheses
* joining data from different experiments
* eliminating/detecting outliers
* computing statistics on data (mean, confidence intervals,
histograms, correlations)
* reformatting data for graphing programs
Rather than hand-code scripts to do each special case, JDB provides
higher-level functions.
JDB is built on flat-ASCII databases. By storing data in simple text
files and processing it with pipelines it is easy to experiment (in
the shell) and look at the output.
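For instance, the "computing statistics" use case above amounts to reading a whitespace-separated column from flat text and reducing it. A minimal Python sketch of that step (not JDB's own commands):

```python
import math
import statistics

# One column extracted from a flat-ASCII table of measurements.
raw = "12.1 11.8 12.4 12.0 11.9"
xs = [float(field) for field in raw.split()]

mean = statistics.mean(xs)
# 95% normal-approximation confidence interval for the mean.
half = 1.96 * statistics.stdev(xs) / math.sqrt(len(xs))
print(f"{mean:.2f} +/- {half:.2f}")
```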
JRobin is a 100% pure java implementation of RRDTool's functionality. It
follows the same logic and uses the same data sources, archive types and
definitions as RRDTool does. JRobin supports all standard operations on
Round Robin Database (RRD) files: CREATE, UPDATE, FETCH, LAST, DUMP, XPORT
and GRAPH. JRobin's API is made for those who are familiar with RRDTool's
concepts and logic, but prefer to work with pure java. If you provide the
same data to RRDTool and JRobin, you will get exactly the same results and
graphs. JRobin is built from scratch and uses only very limited portions
of RRDTool's original source code. JRobin does not use native functions and
libraries, has no Runtime.exec() calls and does not require RRDTool to be
present. JRobin is distributed as a software library (jar files) and comes
with full java source code (LGPL licence).
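The core round-robin idea, a fixed-size archive whose oldest consolidated data point is overwritten once the archive is full, can be sketched in a few lines of Python (a conceptual illustration, not JRobin's API):

```python
from collections import deque

# A round-robin archive holds a fixed number of data points;
# when full, each new sample overwrites the oldest one (a ring buffer).
archive = deque(maxlen=3)           # archive with 3 slots
for sample in (10, 20, 30, 40):
    archive.append(sample)          # 10 falls off when 40 arrives
print(list(archive))                # oldest data is gone, size is constant
```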
Because the many-to-many relationships are not real relationships,
they cannot be introspected with DBIx::Class. Many-to-many
relationships are actually just a collection of convenience methods
installed to bridge two relationships. This DBIx::Class component
can be used to store all relevant information about these
non-relationships so they can later be introspected and examined.
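The point that a many-to-many is really two one-to-many relationships bridged through a link table, plus convenience methods, can be sketched outside Perl. A minimal Python illustration with a hypothetical authors/books link table:

```python
# Hypothetical link table bridging two one-to-many relationships:
# author -> link row, and link row -> book.
authors_books = [("alice", "b1"), ("alice", "b2"), ("bob", "b1")]

def books_for(author):
    """Convenience accessor bridging author -> link table -> books."""
    return [book for a, book in authors_books if a == author]

print(books_for("alice"))   # the "many-to-many" is just this bridge
```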
This module is fairly esoteric and, unless you are dynamically
creating something out of a DBIC Schema, is probably the wrong
solution for whatever it is you are trying to do. Please be advised
that compatibility is not guaranteed for DBIx::Class 0.09000+. We
will try to maintain full compatibility, but internal changes might
make it impossible.
Pure Python
All code, at first, is written in pure Python so that py-postgresql will work
anywhere that you can install Python 3. Optimizations in C are made where
needed, but are always optional.
Prepared Statements
Using the PG-API interface, protocol-level prepared statements may be created
and used multiple times: db.prepare(sql)(*args)
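The prepare-once, execute-many call shape can be approximated with the stdlib sqlite3 module (an analogy only; py-postgresql itself prepares statements at the PostgreSQL protocol level and uses $1-style parameters):

```python
import sqlite3

# Parameterized statement reused with different arguments each call,
# mimicking the db.prepare(sql)(*args) shape against an in-memory DB.
conn = sqlite3.connect(":memory:")

def add(a, b):
    return conn.execute("SELECT ? + ?", (a, b)).fetchone()[0]

print(add(1, 2))   # same statement, first set of arguments
print(add(3, 4))   # reused with new arguments
```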
COPY Support
Use the convenient COPY interface to directly copy data from one connection to
another. No intermediate files or tricks are necessary.
Arrays and Composite Types
Arrays and composites are fully supported. Queries requesting them will return
objects that provide access to the elements within.
"pg_python" Quick Console
Get a Python console with a connection to PostgreSQL for quick tests and simple
scripts.
GNU Recutils is a set of tools and libraries to access human-editable,
text-based databases called recfiles. The data is stored as a sequence of
records, each record containing an arbitrary number of named fields.
Advanced capabilities usually found in other data storage systems are
supported: data types, data integrity (keys, mandatory fields, etc.) as well
as the ability of records to refer to other records (a sort of foreign key).
Despite their simplicity, recfiles can be used to store medium-sized
databases.
Recfiles are human-readable and human-writable, yet easy to parse
and manipulate automatically. Obviously they are not suitable for many
tasks (for example, it can be difficult to manage hierarchies in recfiles)
and performance is somewhat sacrificed in favor of readability, but they are
quite handy to store small to medium simple databases.
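As a minimal illustration of how easy recfiles are to parse: records are separated by blank lines and each field is a "Name: value" line. This Python sketch ignores comments, continuation lines, and other details of the full format:

```python
# Parse a tiny recfile: blank-line-separated records of "Name: value" fields.
sample = """\
Name: GNU Hello
Version: 2.12

Name: GNU Recutils
Version: 1.9
"""

records = []
for chunk in sample.strip().split("\n\n"):
    rec = {}
    for line in chunk.splitlines():
        name, _, value = line.partition(": ")
        rec[name] = value
    records.append(rec)

print(records[1]["Name"])
```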