Lots of times, Class::DBI is used in web-based applications. (In fact, coupled
with a templating system that allows you to pass objects, such as
Template::Toolkit, Class::DBI is very much your friend for these.)
And, as we all know, one of the most irritating things about writing web-based
applications is the monotony of writing much of the same stuff over and over
again. And where there's monotony, there's a tendency to skip over stuff that
we all know is really important but is a pain to write - like taint checking
and sensible input validation. (Especially as we can still show a 'working'
application without it!) So we now have CGI::Untaint to take care of a lot of
that for us.
It so happens that CGI::Untaint also plays well with Class::DBI. All you need
to do is to 'use Class::DBI::FromCGI' in your class (or in your local
Class::DBI subclass that all your other classes inherit from. You do do that,
don't you?).
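The synopsis boils down to something like this (the Film class, its columns,
and the CGI query object $q are illustrative, but untaint_columns() and
create_from_cgi() are the methods the module provides):

    package Film;
    use base 'Class::DBI';
    use Class::DBI::FromCGI;
    use CGI::Untaint;

    # Declare how each column must be untainted before it can be stored.
    __PACKAGE__->untaint_columns(
        printable => [qw/title director/],
        integer   => [qw/rating/],
    );

    # Later, in the CGI script: untaint the incoming form values and
    # create the row only if they all pass.
    my $h    = CGI::Untaint->new( $q->Vars );
    my $film = Film->create_from_cgi($h);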
DBIx::Class is an SQL-to-OO mapper with an object API inspired by Class::DBI
(with a compatibility layer as a springboard for porting) and a resultset API
that allows abstract encapsulation of database operations. It aims to make
representing queries in your code as Perl-ish as possible while still providing
access to as many of the capabilities of the database as possible, including
retrieving related records from multiple tables in a single query, with JOIN,
LEFT JOIN, COUNT, DISTINCT, GROUP BY, ORDER BY and HAVING support.
DBIx::Class can handle multi-column primary and foreign keys, complex queries
and database-level paging, and does its best to query the database only when
you've directly asked it for something. If a resultset is used as an iterator,
it fetches rows off the statement handle only as they are requested, in order
to minimise memory usage. It has auto-increment support for SQLite, MySQL,
PostgreSQL, Oracle, SQL Server and DB2, is known to be used in production on
at least the first four, and is fork- and thread-safe out of the box (although
your DBD may not be).
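For a taste of the resultset API, here's a sketch - it assumes a schema with
CD and Artist result classes and a belongs_to 'artist' relationship, none of
which ships with DBIx::Class itself:

    # Fetch 2006 CDs together with their artists in a single query.
    my $rs = $schema->resultset('CD')->search(
        { 'me.year' => 2006 },
        {
            prefetch => 'artist',      # JOIN and pull related rows at once
            order_by => 'artist.name',
        },
    );

    # Used as an iterator, the resultset fetches rows one at a time.
    while ( my $cd = $rs->next ) {
        print $cd->title, ' by ', $cd->artist->name, "\n";
    }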
This project is still under rapid development, so large new features may be
marked EXPERIMENTAL - such APIs are still usable but may have edge-case bugs.
Failing test cases are *always* welcome, and point releases are put out rapidly
as bugs are found and fixed.
pgloader imports data from a flat file and inserts it into one or
more PostgreSQL database tables. It uses one flat file per database
table, and you can configure as many sections as you want, each one
associating a table name with a data file.
Data are parsed and rewritten, then given to the PostgreSQL COPY command.
Parsing is necessary for dealing with line endings and any trailing
separator characters, and for column reordering: your flat data file may
not have the same column order as the database table.
pgloader is also able to load large-object data into PostgreSQL; as of
now, only Informix UNLOAD data files are supported. That command writes
location information for the large-object data into the main data file;
pgloader parses it and adds the text or bytea content, properly escaped,
to the COPY data.
pgloader issues timing statistics every "commit_every" commits. At the
end of each section's processing, it issues a summary of the overall
operations: the number of rows copied and commits made, the time taken
in seconds, and the errors logged and database errors encountered.
The primary goal of the libunwind project is to define a portable and
efficient C programming interface (API) to determine the call-chain of a
program.
The API additionally provides the means to manipulate the preserved
(callee-saved) state of each call-frame and to resume execution at any
point in the call-chain (non-local goto). The API supports both local
(same-process) and remote (across-process) operation. As such, the API
is useful in a number of applications. Some examples include:
o exception handling
The libunwind API makes it trivial to implement the stack-manipulation
aspects of exception handling.
o debuggers
The libunwind API makes it trivial for debuggers to generate
the call-chain (backtrace) of the threads in a running program.
o introspection
It is often useful for a running thread to determine its call-chain.
For example, this is useful to display error messages (to show how
the error came about) and for performance monitoring/analysis.
o efficient setjmp()
With libunwind, it is possible to implement an extremely efficient
version of setjmp(). Effectively, the only context that needs to be
saved consists of the stack-pointer(s).
The rather wacky idea behind AnyData and its sister module DBD::AnyData
is that any data, regardless of source or format, should be accessible and
modifiable with the same simple set of methods. This module provides a
multi-dimensional tied hash interface to data in a dozen different formats.
The DBD::AnyData module adds a DBI/SQL interface for those same formats.
Both modules provide built-in protections including appropriate flocking()
for all I/O and (in most cases) record-at-a-time access to files rather than
slurping of entire files.
Currently supported formats include general-format flat files (CSV, fixed
length, etc.), specific formats (passwd files, httpd logs, etc.), and a
variety of other kinds of formats (XML, Mp3, HTML tables). The number of
supported formats will continue to grow rapidly, since there is an open API
making it easy for any author to create additional format parsers that can
be plugged into AnyData itself and thereby become accessible through either
the tied hash or DBI/SQL interface.
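Reading a CSV file through the tied hash interface looks roughly like this
(the file name and column names here are made up for the example):

    use AnyData;

    # Tie a CSV file to a hash and read it one record at a time.
    my $table = adTie( 'CSV', 'cars.csv', 'r' );
    while ( my $row = each %$table ) {
        print $row->{make}, "\n" if $row->{country} eq 'de';
    }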
A heap is a partially sorted structure where it's always easy to extract the
smallest element. If the collection of elements is changing dynamically, a heap
has less overhead than keeping the collection fully sorted.
The order in which equal elements get extracted is unspecified.
The main order relations supported by this module are "<" (numeric compare) and
"lt" (string compare).
The internals of the module do nothing with the inserted elements except
inspect the key. This means that if you, for example, store a blessed object,
that's what you will get back on extract. It's also OK to keep references to
the elements around and to make changes to them while they are in the heap, as
long as you don't change the key.
Heap::Simple itself is just a loader for the code that will actually implement
the functionality mentioned above. You will need to install something like
Heap::Simple::XS or Heap::Simple::Perl to be able to actually do anything.
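With one of those backends installed, usage looks like this (the hash
elements and the 'price' key are illustrative):

    use Heap::Simple;

    # A numeric min-heap over hash elements, ordered by their 'price' key.
    my $heap = Heap::Simple->new( order => '<', elements => [ Hash => 'price' ] );
    $heap->insert( { name => 'pear',  price => 2 } );
    $heap->insert( { name => 'apple', price => 3 } );

    my $cheapest = $heap->extract_top;   # the element with the smallest key
    print $cheapest->{name}, "\n";       # prints "pear"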
InlineX::C2XS - create an XS file from an Inline C file.
The C file that InlineX::C2XS needs to find contains only the C code.
InlineX::C2XS looks for the file in the ./src directory, expecting that the
filename will be the same as what appears after the final '::' in the
module name (with a '.c' extension) - i.e. if the module is called
My::Next::Mod, it looks for a file ./src/Mod.c and creates a file
named Mod.xs. The file 'INLINE.h' is also created, but only if it is
needed. The generated XS file (and INLINE.h) will be written to the cwd
unless a third argument (specifying a valid directory) is provided to
the c2xs() function.
The created XS file, when packaged with the '.pm' file, an
appropriate 'Makefile.PL', and 'INLINE.h' (if it's needed),
can be used to build the module in the usual way, without
any dependence upon the Inline::C module.
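In the simplest case that's a single call - here the package name (the
second argument) is assumed to be the same as the module name:

    use InlineX::C2XS qw(c2xs);

    # Reads ./src/Mod.c; writes Mod.xs (and INLINE.h if needed) to the cwd.
    c2xs( 'My::Next::Mod', 'My::Next::Mod' );

    # Or write the generated files into a nominated build directory instead.
    c2xs( 'My::Next::Mod', 'My::Next::Mod', './build_dir' );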
This is a second go at a module to simplify installing die() and warn()
handlers, and to make such handlers easier to write and control.
For most people, this just means that if you 'use Religion;' then you'll get
noticeably better error reporting from warn() and die(). This is especially
useful if you are using eval().
Religion provides four classes - WarnHandler, DieHandler, WarnPreHandler, and
DiePreHandler - which, when constructed, return closures that can be stored in
variables and that in turn get invoked via $SIG{__DIE__} and $SIG{__WARN__}.
Note that if Religion is in use, you should not modify $SIG{__DIE__} or
$SIG{__WARN__} unless you are careful to chain to the old handler.
Religion also provides a TraceBack function, which is used by a DieHandler
after you die() to give a better handle on the current scope of your
situation, and to provide information about where you were, which might
influence where you want to go next.
The apptools project includes a set of packages that Enthought has
found useful in creating a number of applications.
- apptools.appscripting: Framework for scripting applications.
- apptools.help: Provides a plugin for displaying documents and examples.
- apptools.io: Provides an abstraction for files and folders in a
file system.
- apptools.logger: Convenience functions for creating logging handlers.
- apptools.naming: Manages naming contexts, supporting non-string data
  types and scoped preferences.
- apptools.permissions: Supports limiting access to parts of an application
unless the user is appropriately authorised (not full-blown security).
- apptools.persistence: Supports pickling and restoring the state of an
object.
- apptools.preferences: Manages application preferences.
- apptools.selection: Manages the communication between providers and
  listeners of selected items in an application.
- apptools.scripting: A framework for automatic recording of Python scripts.
- apptools.sweet_pickle: Handles class-level versioning, to support
  loading of saved data that exists across several generations of
  internal class structures.
- apptools.template: Supports creating templatizable object hierarchies.
- apptools.type_manager: Manages type extensions, including factories to
generate adapters, and hooks for methods and functions.
- apptools.undo: Supports undoing and scripting application commands.
This library is an implementation of the JSON-LD specification in Python.
JSON-LD is designed as a light-weight syntax that can be used to
express Linked Data. It is primarily intended to be a way to express
Linked Data in JavaScript and other Web-based programming environments.
It is also useful when building interoperable Web Services and when
storing Linked Data in JSON-based document storage engines. It is
practical and designed to be as simple as possible, utilizing the
large number of JSON parsers and existing code that is in use today.
It is designed to be able to express key-value pairs, RDF data,
RDFa data, Microformats data, and Microdata. That is, it supports
every major Web-based structured data model in use today.
The syntax does not require many applications to change their JSON, but
lets them easily add meaning by including a context, either in-band or
out-of-band. The syntax is designed not to disturb already deployed
systems running on JSON, but to provide a smooth migration path from
JSON to JSON with added semantics. Finally, the format is intended to be
fast to parse, fast to generate, compatible with both stream-based and
document-based processing, and to require a very small memory footprint
in order to operate.