Tags groups of audio files using CDDB.
TagLookup is a utility for tagging MP3s and other taggable audio file formats.
It inspects a set of audio files and uses their lengths to look up an
appropriate disc from a CDDB-compatible service. TagLookup can be used in two
modes:
* ID -- Given a CDDB ID and a number of files, look up the details of the CDDB
disc from a CDDB service. Tag files using the CDDB disc. Match each file with
each CDDB track using the closest track length.
* Sequence -- Given a number of files, generate a CDDB ID and query a CDDB
service. CDDB IDs are generated based on the sequence of tracks. Choose the
closest matching CDDB disc to tag the files.
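The ID that Sequence mode generates follows the standard freedb/CDDB disc ID scheme: a one-byte checksum of the tracks' start times, the disc length in seconds, and the track count, packed into 32 bits. A minimal sketch in Python (the frame offsets and leadout below are made-up illustrative values, not real disc data):

```python
def cddb_disc_id(offsets, leadout):
    """Compute a CDDB/freedb disc ID.

    offsets -- start of each track in CD frames (75 frames per second,
               including the standard 150-frame lead-in)
    leadout -- frame offset of the disc's leadout
    """
    def digit_sum(n):
        return sum(int(d) for d in str(n))

    # One-byte checksum over the digit sums of each track's start second.
    checksum = sum(digit_sum(off // 75) for off in offsets) % 255
    # Disc length in seconds, from the first track to the leadout.
    length = leadout // 75 - offsets[0] // 75
    return (checksum << 24) | (length << 8) | len(offsets)

# Hypothetical 3-track disc:
print("%08x" % cddb_disc_id([150, 15000, 30000], 45000))  # 08025603
```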
In addition, taglookup can:
* Rename -- Rename files based on their tags.
DBIx::Admin::CreateTable is a pure Perl module.
Database vendors supported: MySQL, Oracle, Postgres, SQLite.
Assumptions:
- Every table has a primary key
- The primary key is a unique, non-null, integer
- The primary key is a single column
- The primary key column is called 'id'
- If a primary key on a table 't' has a corresponding auto-created index,
the index is called 't_pkey': This is true for Postgres, where declaring
a column as a primary key automatically creates an associated index for
that column. The index is named after the table, not after the column.
- If a table 't' (with primary key 'id') has an associated sequence, the
sequence is called 't_id_seq': This is true for both Oracle and Postgres,
which use sequences to populate primary key columns. The sequences are named
after both the table and the column.
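Taken together, the assumptions mean every name the module relies on can be derived from the table name alone. A sketch (not the module's actual API, just the conventions above expressed in Python):

```python
def expected_names(table):
    # Derive the names the assumptions above imply for a given table.
    return {
        "primary_key": "id",
        "index": f"{table}_pkey",       # auto-created by Postgres
        "sequence": f"{table}_id_seq",  # used by Oracle and Postgres
    }

print(expected_names("people"))
```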
Kyua is a testing framework for infrastructure software, originally
designed to equip BSD-based operating systems with a test suite. This
means that Kyua is lightweight and simple, and that Kyua integrates well
with various build systems and continuous integration frameworks.
Kyua features an expressive test suite definition language, a safe
runtime engine for test suites and a powerful report generation engine.
Kyua is for both developers and users, from the developer applying a
simple fix to a library to the system administrator deploying a new
release on a production machine.
Kyua is able to execute test programs written with a plethora of testing
libraries and languages. The library of choice is ATF, for which Kyua
was originally designed, but simple, framework-less test programs and
TAP-compliant test programs can also be executed through Kyua.
The intent of File::ShareDir is to provide a companion to
Class::Inspector and File::HomeDir: modules that take a task well known
to advanced Perl developers but a little tricky in practice, and make
it more accessible to the wider Perl community.
Quite often you want or need your Perl module (CPAN or otherwise) to
have access to a large amount of read-only data that is stored on the
file-system at run-time.
On a Linux-like system this would live in a place such as /usr/share;
however, Perl runs on a wide variety of different systems, so relying
on any one location is unreliable.
Perl provides a little-known mechanism for doing this, but since few
authors are aware of it, they often resort to rather strange
workarounds to make the data available to their code.
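The underlying technique, locating data relative to wherever the module was actually installed, is language-independent. A rough Python illustration of the same idea (the 'share' subdirectory name is an assumed layout for this sketch, not File::ShareDir's actual one):

```python
import os.path

def module_share_dir(module):
    # Resolve a data directory relative to the module's installed
    # location, so the lookup works wherever the package ends up.
    return os.path.join(os.path.dirname(module.__file__), "share")

import json
print(module_share_dir(json))  # e.g. .../lib/python3.x/json/share
```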
Teeworlds is a freeware online multiplayer game, designed as a
crossover between Quake and Worms. Set on platform-based maps,
players control a cute little bugger with guns to take out as many
opponents as possible. The characters can jump but move more quickly
using a grappling hook to swing through the levels; the hook can also
be used to latch onto other players and keep them near. The available
weapons include a pistol, shotgun, grenade launcher and a hammer.
The shooting and grappling direction is shown through a cursor,
controlled by the mouse. A special power-up temporarily provides a
ninja sword, used to slash through enemies. Each character has an
amount of health and shield. Items scattered around include additional
ammo, and health and shield bonuses. Unlike Worms, the action is
fast-paced and happens in real time. A capture-the-flag (CTF) mode is
also supported.
Clojure is a dynamic programming language that targets the Java Virtual
Machine. It is designed to be a general-purpose language, combining the
approachability and interactive development of a scripting language with
an efficient and robust infrastructure for multithreaded programming.
Clojure is a compiled language - it compiles directly to JVM bytecode,
yet remains completely dynamic. Every feature supported by Clojure is
supported at runtime. Clojure provides easy access to the Java frameworks,
with optional type hints and type inference, to ensure that calls to Java
can avoid reflection.
Clojure is a dialect of Lisp, and shares with Lisp the code-as-data
philosophy and a powerful macro system. Clojure is predominantly a
functional programming language, and features a rich set of immutable,
persistent data structures. When mutable state is needed, Clojure offers a
software transactional memory system that ensures clean, correct,
multithreaded designs.
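The persistent data structures mentioned above rely on structural sharing: an "updated" collection is a new value that reuses most of the old one instead of copying it. A toy illustration of the idea in Python (Clojure's real implementations are far more sophisticated, e.g. bit-partitioned tries):

```python
from typing import NamedTuple, Optional

class Cons(NamedTuple):
    head: object
    tail: Optional["Cons"]

xs = Cons(2, Cons(3, None))   # the "old" list (2 3)
ys = Cons(1, xs)              # a "new" list (1 2 3)

# Nothing was copied or mutated: ys shares xs as its tail.
print(ys.tail is xs)  # True
```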
libESMTP is a library to manage the posting (submission) of electronic
mail using SMTP to a preconfigured Mail Transport Agent (MTA) such as
Exim. It may be used as part of a Mail User Agent (MUA) or another
program that must be able to post electronic mail but where mail
functionality is not the program's primary purpose. libESMTP is not
intended to be used as part of a program that implements a Mail
Transport Agent.
libESMTP is an attempt to provide a robust implementation of the SMTP
protocol for use with mail clients. It is being developed as a reaction
to the experience of incomplete or buggy implementations of SMTP and
also to help remove the need for the installation of MTAs on
workstations which only need them to provide a sendmail command for a
mail client to post its mail.
WHAT IS AMANDA?
---------------
This is a release of Amanda, the Advanced Maryland Automatic
Network Disk Archiver. Amanda is a backup system designed to archive many
computers on a network to a single large-capacity tape drive.
Here are some features of Amanda:
* written in C, freely distributable.
* built on top of standard backup software: Unix dump/restore, and
later GNU Tar and others.
* will back up multiple machines in parallel to a holding disk, blasting
finished dumps one by one to tape as fast as we can write files to
tape. For example, a ~2 GB 8mm tape on a ~240 KB/s interface to a host
with a large holding disk can be filled by Amanda in under 4 hours.
* does simple tape management: will not overwrite the wrong tape.
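The throughput figure above is easy to check: at roughly 240 KB/s, writing a 2 GB tape takes about 2.4 hours of pure streaming, comfortably under the 4-hour claim. Rough arithmetic:

```python
tape_kb = 2 * 1024 * 1024       # ~2 GB tape, in KB
rate_kb_per_s = 240             # ~240 KB/s tape interface
hours = tape_kb / rate_kb_per_s / 3600
print(round(hours, 1))  # 2.4
```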
CQL::Parser provides a mechanism to parse Common Query Language (CQL)
statements. The best description of CQL comes from the CQL homepage at the
Library of Congress http://www.loc.gov/z3950/agency/zing/cql/
CQL is a formal language for representing queries to information retrieval
systems such as web indexes, bibliographic catalogs and museum collection
information. The CQL design objective is that queries be human readable
and human writable, and that the language be intuitive while maintaining
the expressiveness of more complex languages.
A CQL statement can be as simple as a single keyword, or as complicated as
a set of components indicating search indexes, relations, relational
modifiers, proximity clauses and boolean logic. CQL::Parser will parse CQL
statements and return the root node for a tree of nodes which describes
the CQL statement. This data structure can then be used by a client
application to analyze the statement, and possibly turn it into a query
for a local repository.
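As a sketch of what "a tree of nodes" means here, the following toy Python parser (not CQL::Parser's API, and far short of real CQL) turns keywords joined by boolean operators into a left-associative tree:

```python
def parse(query):
    # Toy parser: bare terms joined by left-associative AND/OR.
    tokens = query.split()
    tree = tokens[0]
    for i in range(1, len(tokens) - 1, 2):
        tree = (tokens[i].lower(), tree, tokens[i + 1])
    return tree

print(parse("dinosaur and bird or reptile"))
# ('or', ('and', 'dinosaur', 'bird'), 'reptile')
```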
A simple string tokenizer which takes a string and splits it on
whitespace. It also optionally takes a string of characters to use as
delimiters, and returns them with the token set as well. This allows for
splitting the string in many different ways.
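The behaviour described, whitespace splitting plus optional delimiter characters that come back as tokens themselves, can be sketched in a few lines of Python (this is an illustration, not the module's actual implementation):

```python
import re

def tokenize(text, delimiters=""):
    # Split on whitespace; if delimiter characters are supplied,
    # split on them too and keep them in the token stream.
    if delimiters:
        pattern = "([" + re.escape(delimiters) + "])|\\s+"
        return [tok for tok in re.split(pattern, text) if tok]
    return text.split()

print(tokenize("(a b)", "()"))  # ['(', 'a', 'b', ')']
```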
This is a very basic tokenizer, so more complex needs should be
addressed either with a custom-written tokenizer or by post-processing
the output generated by this module. Basically, this will not fill
everyone's needs, but it spans the gap between a simple
split(/ /, $string) and the other options that involve much larger and
more complex modules.
Also note that this is not a lexical analyzer. Many people confuse
tokenization with lexical analysis. A tokenizer merely splits its input
into specific chunks; a lexical analyzer classifies those chunks.
Sometimes these two steps are combined, but not here.