The intent of File::ShareDir is to provide a companion to
Class::Inspector and File::HomeDir: modules that take a task well known
to advanced Perl developers but a little tricky to get right, and make
it easily available to the larger Perl community.
Quite often you want or need your Perl module (CPAN or otherwise) to
have access to a large amount of read-only data that is stored on the
file-system at run-time.
On a Linux-like system this would be a place such as /usr/share;
however, Perl runs on a wide variety of different systems, so no single
location can be relied upon.
Perl provides a little-known mechanism for doing this, but few module
authors are aware of it. As a result, they often resort to some very
strange workarounds to make the data available to their code.
Teeworlds is a freeware online multiplayer game, designed as a
crossover between Quake and Worms. Set on platform-based maps,
players control a cute little bugger with guns to take out as many
opponents as possible. The characters can jump, but move more quickly
using a grappling hook to swing through the levels. The hook can also
be used to latch onto other players and keep them close. The available
weapons include a pistol, shotgun, grenade launcher and a hammer.
The shooting and grappling direction is shown through a cursor,
controlled by the mouse. A special power-up temporarily provides a
ninja sword, used to slash through enemies. Each character has an
amount of health and shield. Items scattered around include additional
ammo, and health and shield bonuses. Unlike Worms, the action is
fast-paced and happens in real time. A capture-the-flag (CTF) mode is
also supported.
icon-slicer is a utility for generating icon themes and libXcursor cursor
themes.
The inputs to icon-slicer are conceptually:
A) a set of multi-layer images, one for each size
B) an XML theme description file
Each image contains all the cursors arranged in a grid; for cursors the
layers are:
- a layer with a dot for the hotspot of each cursor
- the main image, or the first frame of a multi-frame animated cursor
- the second frame of a multi-frame animated cursor
For icons, the layers are:
- a layer with the images
- an optional layer with attachment points for emblems
- an optional layer with boxes for embedding text into icons
In practice, since loading of multilayer images is not supported by standard
image libraries, each layer is input as a separate image file.
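As an illustration of how a hotspot layer might be interpreted, here is a hypothetical Python sketch (not icon-slicer's actual code): the layer is modeled as a pixel grid divided into fixed-size cells, one cell per cursor, and each dot yields a hotspot offset within its cell.

```python
# Hypothetical sketch: recover per-cursor hotspots from a "dot" layer.
# The layer is a 2D grid of pixels (1 = dot, 0 = transparent), divided
# into fixed-size cells, one cell per cursor in the grid.

def find_hotspots(layer, cell_size):
    """Return {(col, row): (x, y)} mapping each grid cell to the dot
    position relative to that cell's top-left corner."""
    hotspots = {}
    for y, row in enumerate(layer):
        for x, pixel in enumerate(row):
            if pixel:
                cell = (x // cell_size, y // cell_size)
                hotspots[cell] = (x % cell_size, y % cell_size)
    return hotspots

# Two 4x4 cells side by side; each dot marks its cursor's hotspot.
layer = [
    [0, 0, 0, 0,  0, 0, 0, 0],
    [0, 1, 0, 0,  0, 0, 0, 0],
    [0, 0, 0, 0,  0, 0, 1, 0],
    [0, 0, 0, 0,  0, 0, 0, 0],
]
print(find_hotspots(layer, 4))  # → {(0, 0): (1, 1), (1, 0): (2, 2)}
```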
Chart::PNGgraph is a perl5 module to create and display PNG output for a graph.
The following classes for graphs with axes are defined:
Chart::PNGgraph::lines
Create a line chart.
Chart::PNGgraph::bars
Create a bar chart.
Chart::PNGgraph::points
Create a chart, displaying the data as points.
Chart::PNGgraph::linespoints
Combination of lines and points.
Chart::PNGgraph::area
Create a graph, representing the data as areas under a line.
Chart::PNGgraph::mixed
Create a mixed type graph, any combination of the above. At the moment this
is fairly limited. Some of the options that can be used with some of the
individual graph types won't work very well. Multiple bar graphs in a mixed
graph won't display very nicely.
Chart::PNGgraph::pie
Create a pie chart.
Clojure is a dynamic programming language that targets the Java Virtual
Machine. It is designed to be a general-purpose language, combining the
approachability and interactive development of a scripting language with
an efficient and robust infrastructure for multithreaded programming.
Clojure is a compiled language - it compiles directly to JVM bytecode,
yet remains completely dynamic. Every feature supported by Clojure is
supported at runtime. Clojure provides easy access to the Java frameworks,
with optional type hints and type inference, to ensure that calls to Java
can avoid reflection.
Clojure is a dialect of Lisp, and shares with Lisp the code-as-data
philosophy and a powerful macro system. Clojure is predominantly a
functional programming language, and features a rich set of immutable,
persistent data structures. When mutable state is needed, Clojure offers a
software transactional memory system that ensures clean, correct,
multithreaded designs.
libESMTP is a library to manage posting (or submission of) electronic
mail using SMTP to a preconfigured Mail Transport Agent (MTA) such as
Exim. It may be used as part of a Mail User Agent (MUA) or another
program that must be able to post electronic mail but where mail
functionality is not the program's primary purpose. libESMTP is not
intended to be used as part of a program that implements a Mail
Transport Agent.
libESMTP is an attempt to provide a robust implementation of the SMTP
protocol for use with mail clients. It is being developed as a reaction
to the experience of incomplete or buggy implementations of SMTP and
also to help remove the need for the installation of MTAs on
workstations which only need them to provide a sendmail command for a
mail client to post its mail.
WHAT IS AMANDA?
---------------
This is a release of Amanda, the Advanced Maryland Automatic
Network Disk Archiver. Amanda is a backup system designed to archive many
computers on a network to a single large-capacity tape drive.
Here are some features of Amanda:
* written in C, freely distributable.
* built on top of standard backup software: Unix dump/restore, and
later GNU Tar and others.
* will back up multiple machines in parallel to a holding disk,
streaming finished dumps one by one to tape as fast as the drive can
write. For example, a ~2 Gb 8mm tape on a ~240K/s interface to a host
with a large holding disk can be filled by Amanda in under 4 hours.
* does simple tape management: will not overwrite the wrong tape.
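The tape-fill estimate above can be checked with simple arithmetic (taking the quoted ~2 Gb capacity and ~240K/s interface speed at face value, in decimal units):

```python
# Back-of-the-envelope check of the tape-fill estimate above.
tape_bytes = 2 * 1000**3         # ~2 Gb 8mm tape (decimal gigabytes)
rate = 240 * 1000                # ~240K/s sustained interface speed
hours = tape_bytes / rate / 3600
print(f"{hours:.1f} hours")      # roughly 2.3 hours, under the 4-hour figure
```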
Libtextcat is a library with functions that implement the classification
technique described in Cavnar & Trenkle, "N-Gram-Based Text Categorization" [1].
It was primarily developed for language guessing, a task on which it is known to
perform with near-perfect accuracy.
The central idea of the Cavnar & Trenkle technique is to calculate a
"fingerprint" of a document with an unknown category, and compare this with the
fingerprints of a number of documents of which the categories are known. The
categories of the closest matches are output as the classification. A
fingerprint is a list of the most frequent n-grams occurring in a document,
ordered by frequency. Fingerprints are compared with a simple out-of-place
metric.
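The fingerprint and out-of-place metric can be sketched in a few lines of Python (a minimal illustration of the Cavnar & Trenkle technique, using character n-grams up to length 3; the real library differs in details such as tokenization, n-gram range and fingerprint size):

```python
from collections import Counter

def fingerprint(text, max_n=3, size=10):
    """Most frequent character n-grams (n = 1..max_n), ordered by
    descending frequency; ties broken alphabetically for stability."""
    grams = Counter()
    for n in range(1, max_n + 1):
        for i in range(len(text) - n + 1):
            grams[text[i:i + n]] += 1
    ranked = sorted(grams, key=lambda g: (-grams[g], g))
    return ranked[:size]

def out_of_place(unknown, known):
    """Sum of rank differences between two fingerprints; n-grams absent
    from the known fingerprint get a fixed maximum penalty."""
    penalty = len(known)
    rank = {g: i for i, g in enumerate(known)}
    return sum(abs(i - rank[g]) if g in rank else penalty
               for i, g in enumerate(unknown))
```

To classify a document, compute its fingerprint and pick the known-category fingerprint with the smallest out-of-place distance.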
[1] The document that started it all: William B. Cavnar & John M. Trenkle (1994)
N-Gram-Based Text Categorization, <http://citeseer.ist.psu.edu/68861.html>.
CQL::Parser provides a mechanism to parse Common Query Language (CQL)
statements. The best description of CQL comes from the CQL homepage at the
Library of Congress http://www.loc.gov/z3950/agency/zing/cql/
CQL is a formal language for representing queries to information retrieval
systems such as web indexes, bibliographic catalogs and museum collection
information. The CQL design objective is that queries be human readable
and human writable, and that the language be intuitive while maintaining
the expressiveness of more complex languages.
A CQL statement can be as simple as a single keyword, or as complicated as
a set of components indicating search indexes, relations, relational
modifiers, proximity clauses and boolean logic. CQL::Parser will parse CQL
statements and return the root node for a tree of nodes which describes
the CQL statement. This data structure can then be used by a client
application to analyze the statement, and possibly turn it into a query
for a local repository.
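CQL::Parser itself is a Perl module; as an illustration of the parse-into-a-tree idea, here is a hypothetical Python sketch for a tiny CQL-like subset (bare terms, "and"/"or", parentheses), returning nested tuples as the node tree:

```python
import re

# Hypothetical sketch, not CQL::Parser's API: parse a tiny CQL-like
# subset into a tree of ("op", left, right) and ("term", word) nodes.

def tokenize(query):
    return re.findall(r'\(|\)|[^\s()]+', query)

def parse(tokens):
    """boolean-query := clause (("and" | "or") clause)*  (left-assoc)"""
    node = parse_clause(tokens)
    while tokens and tokens[0].lower() in ("and", "or"):
        op = tokens.pop(0).lower()
        node = (op, node, parse_clause(tokens))
    return node

def parse_clause(tokens):
    if tokens and tokens[0] == "(":
        tokens.pop(0)            # consume "("
        node = parse(tokens)
        tokens.pop(0)            # consume ")"
        return node
    return ("term", tokens.pop(0))

tree = parse(tokenize("dinosaur and (bird or dog)"))
print(tree)
# → ('and', ('term', 'dinosaur'), ('or', ('term', 'bird'), ('term', 'dog')))
```

A client application would walk such a tree to translate the query for a local repository.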
A simple string tokenizer which takes a string and splits it on
whitespace. It also optionally takes a string of characters to use as
delimiters, and returns them with the token set as well. This allows for
splitting the string in many different ways.
This is a very basic tokenizer, so more complex needs should be
addressed either with a custom-written tokenizer or by post-processing
this module's output. It will not fill everyone's needs, but it spans
the gap between a simple split(/ /, $string) and the options that
involve much larger and more complex modules.
Also note that this is not a lexical analyzer. Many people confuse
tokenization with lexical analysis. A tokenizer merely splits its input
into specific chunks; a lexical analyzer classifies those chunks.
Sometimes these two steps are combined, but not here.
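The behaviour described above can be sketched in a few lines of Python (an illustrative analogue, not this Perl module's actual API): split on whitespace, and optionally return a set of delimiter characters as tokens of their own.

```python
import re

# Illustrative analogue of a basic tokenizer: whitespace splitting,
# with an optional set of delimiter characters returned as tokens.

def tokenize(text, delimiters=""):
    if delimiters:
        pattern = "([" + re.escape(delimiters) + "])"
        pieces = re.split(pattern, text)  # capturing group keeps delimiters
    else:
        pieces = [text]
    tokens = []
    for piece in pieces:
        tokens.extend(piece.split())      # plain whitespace split
    return tokens

print(tokenize("the cat, sat."))                   # → ['the', 'cat,', 'sat.']
print(tokenize("the cat, sat.", delimiters=",."))  # → ['the', 'cat', ',', 'sat', '.']
```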