DenyHosts is a script intended to be run by *ix system administrators to
help thwart ssh server attacks.
If you've ever looked at your ssh log (/var/log/auth.log) you may be alarmed
to see how many attackers have attempted to gain access to your server.
Here is what DenyHosts does:
- Parses /var/log/auth.log to find all login attempts
- Can be run from the command line, cron or as a daemon (new in 0.9)
- Records all failed login attempts for the user and offending host
- For each host that exceeds a threshold count, records the evil host
- Keeps track of each non-existent user (e.g. sdada) whose login attempts failed.
- Keeps track of each existing user (e.g. root) whose login attempts failed.
- Keeps track of each offending host (hosts can be purged)
- Keeps track of suspicious logins
- Keeps track of the file offset, so that subsequent runs resume where the last one left off
- When the log file is rotated, the script will detect it
- Appends offending hosts to /etc/hosts.deny (see the example below)
- Optionally sends an email of newly banned hosts and suspicious logins.
- Resolves IP addresses to hostnames, if you want
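For example, once a host crosses the configured threshold, DenyHosts blocks it
through TCP wrappers by appending a line of roughly this form to
/etc/hosts.deny (the address below is only illustrative):

    sshd: 192.0.2.99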
The ssh library was designed for programmers who need a working SSH
implementation in the form of a library. The client is entirely under the
programmer's control. With libssh, you can remotely execute programs, transfer
files, and use a secure and transparent tunnel for your remote programs.
With its Secure FTP implementation, you can work with remote files easily,
without any third-party program other than libcrypto (from OpenSSL).
A short client sketch follows the feature list below.
libssh features:
* Full C library functions for manipulating a client-side SSH connection
* SSH2 and SSH1 protocol compliant
* Fully configurable sessions
* Server support, SSH agent authentication support
* Support for AES-128, AES-192, AES-256, Blowfish, 3DES in CBC mode
* Use multiple SSH connections in the same process, at the same time
* Use multiple channels in the same connection
* Thread safety when using different sessions at the same time
* POSIX-like SFTP implementation with OpenSSH extension support
* SCP implementation
* RSA and DSS server public key supported
* Compression support (with zlib)
* Public key (RSA and DSS), password and keyboard-interactive authentication
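As an illustration of the client API, here is a minimal sketch of connecting
and authenticating with a password (host, user, and password are placeholders,
and a real client should also verify the server's host key); compile with
something like cc example.c -lssh:

    #include <libssh/libssh.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        ssh_session session = ssh_new();            /* allocate a session */
        if (session == NULL)
            return EXIT_FAILURE;

        /* placeholder host and user */
        ssh_options_set(session, SSH_OPTIONS_HOST, "example.org");
        ssh_options_set(session, SSH_OPTIONS_USER, "alice");

        if (ssh_connect(session) != SSH_OK) {       /* open the connection */
            fprintf(stderr, "connect failed: %s\n", ssh_get_error(session));
            ssh_free(session);
            return EXIT_FAILURE;
        }

        /* Host-key verification is omitted here for brevity; do not skip it
           in real code.  Password authentication is shown, but public key
           and keyboard-interactive methods are available as well. */
        if (ssh_userauth_password(session, NULL, "secret") != SSH_AUTH_SUCCESS)
            fprintf(stderr, "auth failed: %s\n", ssh_get_error(session));

        ssh_disconnect(session);
        ssh_free(session);
        return EXIT_SUCCESS;
    }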
Bytes::Random::Secure provides two interfaces for obtaining crypto-quality
random bytes. The simple interface is built around plain functions. For greater
control over the Random Number Generator's seeding, there is an Object Oriented
interface that provides much more flexibility.
The "functions" interface provides functions that can be used any time you need
a string of a specific number of random bytes. The random bytes are available as
simple strings, or as hex-digits, Quoted Printable, or MIME Base64. There are
equivalent methods available from the OO interface, plus a few others.
This module can be a drop-in replacement for Bytes::Random, with the primary
enhancement of using a cryptographic-quality random number generator to create
the random data. The random_bytes function emulates the user interface of
Bytes::Random's function by the same name. But with Bytes::Random::Secure the
random number generator comes from Math::Random::ISAAC, and is suitable for
cryptographic purposes. The harder problem to solve is how to seed the
generator. This module uses Crypt::Random::Seed to generate the initial seeds
for Math::Random::ISAAC.
Unix provides the standard du utility, which scans your disk and tells you which
directories contain the largest amounts of data. That can help you narrow your
search to the things most worth deleting.
However, that only tells you what's big. What you really want to know is what's
too big. By itself, du won't let you distinguish between data that's big because
you're doing something that needs it to be big, and data that's big because you
unpacked it once and forgot about it.
Most Unix file systems, in their default mode, helpfully record when a file was
last accessed. Not just when it was written or modified, but when it was even
read. So if you generated a large amount of data years ago, forgot to clean it
up, and have never used it since, then it ought in principle to be possible to
use those last-access time stamps to tell the difference between that and a
large amount of data you're still using regularly.
agedu is a program which does this. It does basically the same sort of disk scan
as du, but it also records the last-access times of everything it scans. Then it
builds an index that lets it efficiently generate reports giving a summary of
the results for each subdirectory, and then it produces those reports on demand.
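The last-access stamps agedu relies on are the ordinary atime values that any
program can read through stat(2); a minimal sketch (not part of agedu itself)
that prints them for one file:

    #include <stdio.h>
    #include <sys/stat.h>
    #include <time.h>

    int main(int argc, char **argv)
    {
        struct stat st;

        if (argc < 2 || stat(argv[1], &st) != 0) {
            perror("stat");
            return 1;
        }

        /* st_atime is the last-access time agedu indexes; st_mtime, by
           contrast, only changes when the file's contents are written. */
        printf("last accessed: %s", ctime(&st.st_atime));
        printf("last modified: %s", ctime(&st.st_mtime));
        return 0;
    }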
The biggest difference between runwhen and other schedulers is that
runwhen doesn't have a single daemon overseeing multiple jobs.
The runwhen tools essentially act as a glorified sleep command.
Arguably, runwhen does nothing that at(1) doesn't, and there are
lots of things at(1) does that runwhen deliberately does not:
- runwhen doesn't change user IDs - thus it will never run
anything as the wrong user.
- It doesn't keep a central daemon running at all times -
thus it won't break if that daemon dies.
- It doesn't require any modifications to the system boot procedure.
- It doesn't log through syslog(3) - thus it won't make a mess
on the console if syslogd(1) isn't running.
- It doesn't centralize storage of scheduled jobs (or any other
per-job information) - thus unprivileged users can install and use it
without cooperation from root, and without the use of a setuid program
to handle changes.
- It doesn't send output through mail - thus it doesn't break
if there is no mail system installed.
- It doesn't check access control files - thus it doesn't gratuitously
deny users.
This package consists of Perl modules along with supporting Perl programs
that implement the semantic relatedness measures described by Leacock and
Chodorow (1998), Jiang and Conrath (1997), Resnik (1995), Lin (1998), Hirst and
St-Onge (1998), Wu and Palmer (1994), the adapted gloss overlap measure by
Banerjee and Pedersen (2002), and a measure based on context vectors
by Patwardhan (2003). The details of the Vector measure are described in the
Master's thesis work done by Patwardhan (2003) at the University of Minnesota
Duluth. The Perl modules are designed as objects with methods that take as
input two word senses. The semantic relatedness of these word senses is
returned by these methods. A quantitative measure of the degree to which two
word senses are related has wide-ranging applications in numerous areas, such
as word sense disambiguation and information retrieval. For example, in
order to determine which sense of a given word is being used in a particular
context, the sense having the highest relatedness with its context word
senses is most likely to be the sense being used. Similarly, in information
retrieval, retrieving documents that contain highly related concepts is more
likely to yield higher precision and recall.
A command line interface to these modules is also present in the package. The
simple, user-friendly interface returns the relatedness measure of two given
words.
Many applications which process data-centric XML do that based
on a nice specification, expressed in an XML Schema.
XML::Compile reads and writes XML data with the help of such
schemas. On the Perl side, it uses a tree of nested hashes
with the same structure.
Where other Perl modules, like SOAP::WSDL, help you use these
schemas (often with a lot of run-time (XPath) searches), this
module takes a different approach: instead of run-time
processing of the specification, it first compiles the
expected structure into real Perl, and then uses that to process
the data.
There are many Perl modules with the same intent as this one: translating
between XML and nested hashes. However, there are a few serious
differences: because the schema is used here, we make sure we
only handle correct data. Data-types are formatted and processed
correctly; for instance, integer does accept huge values
(at least 18 digits) as the specification prescribes. Also more
complex data-types like list, union, and substitutionGroup
(unions on complex type level) are supported, which is rarely the
case in other modules.
Apache::SessionX extends Apache::Session. It was initially written to use
Apache::Session from inside HTML::Embperl, but it seems to be useful
outside of Embperl as well, so it is provided here as a standalone module.
Apache::Session is a persistence framework which is particularly useful
for tracking session data between httpd requests. Apache::Session is
designed to work with Apache and mod_perl, but it should work under CGI
and other web servers, and it also works outside of a web server
altogether.
Apache::Session consists of five components: the interface, the object
store, the lock manager, the ID generator, and the serializer. The
interface is defined in SessionX.pm, which is meant to be easily
subclassed. The object store can be the filesystem, a Berkeley DB, a MySQL
DB, an Oracle DB, or a Postgres DB. Locking is done by lock files,
semaphores, or the locking capabilities of MySQL and Postgres.
Serialization is done via Storable, and optionally ASCII-fied via MIME or
pack(). ID numbers are generated via MD5. The reader is encouraged to
extend these capabilities to meet his own requirements.
Desktop aggregators are great. They sit there all day, pinging away at sites,
and as soon as they notice something new, they pop up little windows on your
desktop, and let you read items. But what about when you go home from work?
Or what about when you are on a trip? You get totally out of sync, and don't
know what you've read and haven't read. You are enraged.
Feed on Feeds, a server side aggregator, solves this. It keeps track of what
items you've read, and keeps happily checking up on your feeds no matter where
you are. Whenever you want to see what's new, you just bring up a web page and
scan the newest items. You can mark the items as read so they won't be shown
again. Or, you can just always show the most recent N items, like the way
LiveJournal's friends pages work. Also, having the aggregator in your browser
eliminates the "impedance mismatch" that sometimes occurs between a desktop
aggregator and your browser. All your native browsing methods work on a
Feed on Feeds page. Open pages in new tabs, bookmark them for later, browse
whatever way you like.
This is tidy-devel, built with a shared library.
When editing HTML it's easy to make mistakes. Wouldn't it be nice if
there were a simple way to fix these mistakes automatically and tidy up
sloppy editing into nicely laid out markup? Well now there is, thanks
to Hewlett Packard's Dave Raggett. HTML TIDY is a free utility for
doing just that. It also works great on the atrociously hard-to-read
markup generated by specialized HTML editors and conversion tools, and
can help you identify where you need to pay further attention to
making your pages more accessible to people with disabilities.
Tidy is able to fix up a wide range of problems and to bring to your
attention things that you need to work on yourself. Each item found is
listed with the line number and column so that you can see where the
problem lies in your markup. Tidy won't generate a cleaned-up version
when there are problems that it can't be sure how to handle. These
are logged as "errors" rather than "warnings".
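Since tidy-devel ships the shared library and headers, programs can drive the
same cleanup engine directly. A rough sketch of the classic TidyLib C API
(check your installed headers for exact names; typically linked with -ltidy):

    #include <stdio.h>
    #include <tidy.h>
    #include <buffio.h>     /* named tidybuffio.h in newer releases */

    int main(void)
    {
        const char *input = "<title>Hello</title><p>World";
        TidyBuffer output = {0};
        TidyBuffer errbuf = {0};
        TidyDoc tdoc = tidyCreate();
        int rc = -1;

        /* ask for XHTML output and capture diagnostics in a buffer */
        if (tidyOptSetBool(tdoc, TidyXhtmlOut, yes))
            rc = tidySetErrorBuffer(tdoc, &errbuf);
        if (rc >= 0) rc = tidyParseString(tdoc, input);
        if (rc >= 0) rc = tidyCleanAndRepair(tdoc);
        if (rc >= 0) rc = tidyRunDiagnostics(tdoc);
        if (rc >= 0) rc = tidySaveBuffer(tdoc, &output);

        if (rc >= 0 && output.bp)
            printf("%s", (char *)output.bp);        /* cleaned-up markup */
        if (errbuf.bp)
            fprintf(stderr, "%s", (char *)errbuf.bp); /* warnings/errors */

        tidyBufFree(&output);
        tidyBufFree(&errbuf);
        tidyRelease(tdoc);
        return rc >= 0 ? 0 : 1;
    }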