GnuCap is the GNU Circuit Analysis Package.
The primary component is a general-purpose circuit simulator. It
performs nonlinear dc and transient analyses, Fourier analysis, and ac
analysis. It is fully interactive and command driven. It can also be
run in batch mode or as a server. Spice-compatible models for the
MOSFET (levels 1-7) and diode are included in this release.
GnuCap is not based on Spice, but some of the models have been derived
from the Berkeley models.
Unlike Spice, the engine is designed to do true mixed-mode simulation.
Most of the code is in place for future support of event-driven analog
simulation and true multi-rate simulation.
If you are tired of Spice and want a second opinion; if you want to
play with the circuit and want a simulator that is interactive; if you
want to study the source code and want something easier to follow than
Spice; or if you are a researcher working on modeling and want
automated model-generation tools to make your job easier, try GnuCap.
Lots of times, Class::DBI is used in web-based applications. (In fact, coupled
with a templating system that allows you to pass objects, such as
Template::Toolkit, Class::DBI is very much your friend for these.)
And, as we all know, one of the most irritating things about writing web-based
applications is the monotony of writing much of the same stuff over and over
again. And where there's monotony, there's a tendency to skip over stuff that
we all know is really important but is a pain to write - like taint checking
and sensible input validation. (Especially as we can still show a 'working'
application without it!) So, we now have CGI::Untaint to take care of a lot of
that for us.
It so happens that CGI::Untaint also plays well with Class::DBI. All you need
to do is to 'use Class::DBI::FromCGI' in your class (or in your local
Class::DBI subclass that all your other classes inherit from. You do do that,
don't you?).
The DBD::AnyData module provides a DBI/SQL interface to data in many formats
and from many sources.
Regardless of the format or source of the data, it may be accessed and/or
modified using all standard DBI methods and a subset of SQL syntax.
In addition to standard database access to files, the module also supports
in-memory tables which allow you to create temporary views; to combine data
from a number of sources; to quickly prototype database systems; and to display
or save the data in any of the supported formats (e.g. to display data in a CSV
file as an HTML table). These in-memory tables can be created from any
combination of DBI databases or files of any format. They may also be created
from Perl data structures, which means it's possible to quickly prototype a
database system without any file access or RDBMS backend.
The module also supports converting files between any of the supported formats
(e.g. save selected data from MySQL or Oracle to an XML file).
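DBD::AnyData is a Perl module, so the following is only a language-neutral
sketch of the in-memory-table idea described above - building a temporary,
SQL-queryable table straight from program data structures, with no file access
or RDBMS backend - here illustrated with Python's built-in sqlite3 module
(the table and column names are invented for the example):

```python
# Concept sketch of an in-memory SQL table built from program data
# structures (the idea behind DBD::AnyData's in-memory tables), using
# Python's built-in sqlite3 module -- not DBD::AnyData's Perl API.
import sqlite3

# Data held in ordinary data structures -- no files, no database server.
rows = [("alice", 34), ("bob", 27), ("carol", 41)]

conn = sqlite3.connect(":memory:")  # temporary, in-memory database
conn.execute("CREATE TABLE people (name TEXT, age INTEGER)")
conn.executemany("INSERT INTO people VALUES (?, ?)", rows)

# Query it with ordinary SQL, exactly as if it were a real table.
over_30 = [name for (name,) in
           conn.execute("SELECT name FROM people WHERE age > 30 ORDER BY name")]
print(over_30)  # -> ['alice', 'carol']
```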
This is an SQL to OO mapper with an object API inspired by Class::DBI (with a
compatibility layer as a springboard for porting) and a resultset API that
allows abstract encapsulation of database operations. It aims to make
representing queries in your code as Perl-ish as possible while still providing
access to as many of the capabilities of the database as possible, including
retrieving related records from multiple tables in a single query, JOIN, LEFT
JOIN, COUNT, DISTINCT, GROUP BY, ORDER BY and HAVING support.
DBIx::Class can handle multi-column primary and foreign keys, complex queries
and database-level paging, and does its best to only query the database in
order to return something you've directly asked for. If a resultset is used as
an iterator it only fetches rows off the statement handle as requested in order
to minimise memory usage. It has auto-increment support for SQLite, MySQL,
PostgreSQL, Oracle, SQL Server and DB2 and is known to be used in production
on at least the first four, and is fork- and thread-safe out of the box
(although your DBD may not be).
This project is still under rapid development, so large new features may be
marked EXPERIMENTAL - such APIs are still usable but may have edge-case bugs.
Failing test cases are *always* welcome and point releases are put out rapidly
as bugs are found and fixed.
pgloader imports data from a flat file and inserts it into one or
more PostgreSQL database tables. It uses a flat file per database
table, and you can configure as many Sections as you want, each one
associating a table name and a data file.
Data is parsed and rewritten, then given to the PostgreSQL COPY
command. Parsing is necessary for dealing with end-of-line characters
and possible trailing separator characters, and for column reordering:
your flat data file may not have the same column order as the database
table.
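The parse-and-rewrite step can be sketched as follows. This is a minimal
illustration in Python, not pgloader's actual code; the separator, column
order, and function name are invented for the example. It strips a trailing
separator, reorders the fields to match the table's column order, and escapes
the values the way PostgreSQL's COPY text format expects:

```python
# Minimal sketch of a pgloader-style parse/rewrite step (illustrative
# only): strip a trailing separator, reorder columns, escape for COPY.

def rewrite_for_copy(line, sep, column_order):
    """Turn one flat-file line into one COPY text-format line.

    column_order maps table-column position -> flat-file field index.
    """
    line = line.rstrip("\r\n")
    if line.endswith(sep):          # tolerate a trailing separator
        line = line[:-len(sep)]
    fields = line.split(sep)
    reordered = [fields[i] for i in column_order]
    # COPY's text format needs backslash, tab and newline escaped.
    escaped = [f.replace("\\", "\\\\").replace("\t", "\\t").replace("\n", "\\n")
               for f in reordered]
    return "\t".join(escaped)

# The flat file stores (age, name); the table wants (name, age).
copy_line = rewrite_for_copy("34;alice;", ";", [1, 0])
print(copy_line)  # tab-separated: "alice\t34"
```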
pgloader is also able to load large-object data into PostgreSQL; as of
now, only Informix UNLOAD data files are supported. Such files embed
location information for the large objects in the main data file;
pgloader parses it and adds the text or bytea content, properly
escaped, to the COPY data.
pgloader issues timing statistics every "commit_every" commits. At the
end of processing each section, it issues a summary of the overall
operations: number of rows copied, number of commits, time taken in
seconds, and errors logged, including database errors.
CAL is a nicely-enhanced version of the Unix `cal' command.
Features:
* Highlights today's date when displaying a monthly calendar.
* Displays an optional user-definable list of `special day'
  descriptions (like appointments) to the right of the monthly
  calendar display. Cal can optionally be set to ignore appointments
  older than the current day. Next month's appointments are shown if
  there is room to do so. Multiple appointment data files may also
  be specified on the commandline.
* You can specify your own appointment and color definition files on the
commandline, or use the defaults.
* Date descriptions can display "years since" a given year, useful for
birthdays and anniversaries.
* Completely configurable colors -- eight separate color attributes.
* No ANSI driver needed for colors, and the output may be redirected
anywhere, just like the Unix version. However, ANSI color control may
be enabled (e.g. for Unix) with a #define in the source code.
* Commandline-compatible with Unix `cal' command, but with several
enhanced switch settings.
Requests, bug reports, suggestions, donations, proposals for
contract work, and so forth may be sent to:
Attn: Alex Matulich
Unicorn Research Corporation
4621 N. Landmark Drive
Orlando, FL 32817-1235
USA
407-657-4974 FAX 407-657-6149
or send e-mail to matulich_a@seaa.navsea.navy.mil.
The primary goal of this project is to define a portable and efficient
C programming interface (API) to determine the call-chain of a program.
The API additionally provides the means to manipulate the preserved
(callee-saved) state of each call-frame and to resume execution at any
point in the call-chain (non-local goto). The API supports both local
(same-process) and remote (across-process) operation. As such, the API
is useful in a number of applications. Some examples include:
o exception handling
The libunwind API makes it trivial to implement the stack-manipulation
aspects of exception handling.
o debuggers
The libunwind API makes it trivial for debuggers to generate
the call-chain (backtrace) of the threads in a running program.
o introspection
It is often useful for a running thread to determine its call-chain.
For example, this is useful to display error messages (to show how
the error came about) and for performance monitoring/analysis.
o efficient setjmp()
With libunwind, it is possible to implement an extremely efficient
version of setjmp(). Effectively, the only context that needs to be
saved consists of the stack-pointer(s).
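libunwind itself exposes a C API, but the "introspection" use case above - a
running program determining its own call-chain - can be illustrated in any
language. Here is a concept sketch in Python using the standard traceback
module (this shows the idea, not libunwind's API; the function names are
invented for the example):

```python
# Concept sketch of call-chain introspection (the 'introspection' use
# case above), using Python's traceback module -- not libunwind's C API.
import traceback

def innermost():
    # Walk our own call-chain; outermost frame first, innermost last.
    return [frame.name for frame in traceback.extract_stack()]

def middle():
    return innermost()

def outer():
    return middle()

chain = outer()
print(chain[-3:])  # -> ['outer', 'middle', 'innermost']
```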
The rather wacky idea behind this module and its sister module DBD::AnyData
is that any data, regardless of source or format should be accessible and
modifiable with the same simple set of methods. This module provides a multi-
dimensional tied hash interface to data in a dozen different formats. The
DBD::AnyData module adds a DBI/SQL interface for those same formats.
Both modules provide built-in protections, including appropriate
flock()ing for all I/O and (in most cases) record-at-a-time access to
files rather than slurping entire files.
Currently supported formats include general flat-file formats (CSV,
fixed-length, etc.), specific formats (passwd files, httpd logs, etc.),
and a variety of other kinds of formats (XML, MP3, HTML tables). The
number of supported formats will continue to grow rapidly, since there
is an open API making it easy for any author to create additional
format parsers which can be plugged in to AnyData itself and thereby
become accessible through either the tied-hash or DBI/SQL interface.
A heap is a partially sorted structure where it's always easy to extract the
smallest element. If the collection of elements is changing dynamically, a heap
has less overhead than keeping the collection fully sorted.
The order in which equal elements get extracted is unspecified.
The main order relations supported by this module are "<" (numeric compare) and
"lt" (string compare).
The internals of the module do nothing with the inserted elements
except inspect the key. This means that if you, for example, store a
blessed object, that's what you will get back on extract. It's also OK
to keep references to the elements around and to change them while they
are in the heap, as long as you don't change the key.
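Heap::Simple is a Perl module; the core behaviour it describes - cheap
extraction of the smallest element from a dynamically changing collection -
can be shown with Python's standard heapq module (a concept sketch of the
data structure, not Heap::Simple's API):

```python
# Concept sketch of a min-heap: cheap extraction of the smallest element
# from a changing collection, using Python's standard heapq module
# (illustrates the data structure, not Heap::Simple's Perl API).
import heapq

heap = []
for key in [5, 1, 4, 1, 3]:
    heapq.heappush(heap, key)       # O(log n) insert

heapq.heappush(heap, 0)             # the collection may change at any time

# Extraction always yields the current smallest element first; the order
# in which equal elements (the two 1s) come out is unspecified.
smallest_first = [heapq.heappop(heap) for _ in range(len(heap))]
print(smallest_first)  # -> [0, 1, 1, 3, 4, 5]
```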
Heap::Simple itself is just a loader for the code that will actually implement
the functionality mentioned above. You will need to install something like
Heap::Simple::XS or Heap::Simple::Perl to be able to actually do anything.
InlineX::C2XS - create an XS file from an Inline C file.
The C file that InlineX::C2XS needs to find would contain
only the C code.
InlineX::C2XS looks for the file in the ./src directory, expecting that
the filename will be the same as what appears after the final '::' in
the module name (with a '.c' extension). I.e., if the module is called
My::Next::Mod, it looks for a file ./src/Mod.c and creates a file named
Mod.xs. Also created is the file 'INLINE.h', but only if that file is
needed. The generated XS file (and INLINE.h) will be written to the cwd
unless a third argument (specifying a valid directory) is provided to
the c2xs() function.
The created XS file, when packaged with the '.pm' file, an
appropriate 'Makefile.PL', and 'INLINE.h' (if it's needed),
can be used to build the module in the usual way - without
any dependence upon the Inline::C module.