Interchange is the open source alternative to commercial e-commerce
servers and "best of breed" application server/component applications.
Interchange is one of the most powerful tools available to automate and
database-enable your web site or build online applications. It's also
one of the easiest tools to learn, plus it's open source, making it the
most open and least expensive solution to implement.
For documentation and other resources, see
http://www.interchange.rtfm.info/
iWebCal displays iCal calendars and other vCalendars on the Web. There's no
need for .Mac: With iWebCal you can share calendars with anyone.
jToolkit is a Python web application framework built on mod_python and
Apache. A simple command-line web server is also included for running
applications standalone.
It is aimed at dynamically generated pages rather than mostly-static
pages (for which there are templating solutions). Pages can be produced
using a variety of widgets. It handles sessions and database
connections (and multi-database portability).
This is a port of The Internet Junkbuster Proxy(TM), an excellent way
to enhance your privacy while browsing the web. It also happens
to do a great job of filtering out all those annoying banner ads!
This modified version allows you to specify the appearance of blocked GIFs.
It can automatically compress text/html and text/plain documents for clients
which support Accept-Encoding: gzip (e.g. Netscape 4.7, Internet Explorer 5,
Lynx 2.8.3) to save downstream modem/network bandwidth. It uses the zlib
compression library to perform on-the-fly compression of HTML documents.
Please note that this software does not support IPv6. See www/privoxy
for a worthy successor to this software.
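As a minimal sketch of the on-the-fly compression described above (in Python, not the proxy's actual C code; the function name is illustrative): the response body is gzip-compressed only when the client's Accept-Encoding header advertises gzip support.

```python
import gzip

def compress_if_supported(body: bytes, accept_encoding: str):
    """Return (body, headers), gzip-compressing only when the client allows it."""
    headers = {"Content-Type": "text/html"}
    if "gzip" in accept_encoding.lower():
        body = gzip.compress(body)  # zlib-based, as in the proxy's approach
        headers["Content-Encoding"] = "gzip"
    headers["Content-Length"] = str(len(body))
    return body, headers

html = b"<html><body>" + b"hello " * 100 + b"</body></html>"
plain, _ = compress_if_supported(html, "identity")
small, hdrs = compress_if_supported(html, "gzip, deflate")
assert hdrs["Content-Encoding"] == "gzip"
assert len(small) < len(plain)
assert gzip.decompress(small) == html
```

Clients that do not send "gzip" (such as very old browsers) receive the uncompressed body unchanged, which is why the negotiation saves downstream bandwidth without breaking compatibility.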
This extension provides bindings to the Swish-e API.
Geronimo is a Java EE 5 application server developed by the
Apache Software Foundation and released under the Apache Software License 2.0.
Larbin is a powerful web crawler (also called [web] robot, spider...). It
is intended to fetch a large number of web pages to fill the database of a
search engine. With a fast enough network, Larbin can fetch more than
100 million pages on a standard PC.
Larbin was initially developed for the XYLEME project in the VERSO team at
INRIA. The goal of Larbin was to go and fetch XML pages on the web to fill
the database of an xml-oriented search engine.
The following can be done with Larbin:
o A crawler for a search engine
o A crawler for a specialized search engine (xml, images, mp3...)
o Statistics on the web (about servers or page contents)
Larbin was created by Sebastien Ailleret.
LightSquid - a lightweight Squid report parser and visualizer
* fast and simple installation
* fast log parser generates small per-user data files
* Perl-based CGI script for dynamically generated report pages
* HTML templates for customizing the design
* no database required
* no additional Perl modules required
* various reports
* user group support
* graphical reports
* multilanguage interface
HTdump is an automation utility that retrieves URLs and writes them to a
file, or processes them through a pipeline. It is also a very useful tool
for web developers who work at a low level, for example with CGI binaries.
This makes it a perfect debugging tool, since it allows both usual and
RFC-illegal operations. Packet-level debugging is also included.
LinkChecker can check HTML documents for broken links.
Features:
* recursive checking
* multithreaded
* output can be colored or normal text, HTML, SQL, CSV or a sitemap
graph in XML or GML format.
* additionally reports download time for HTML pages
* HTTP/1.1 and 1.0, HTTPS, FTP, mailto:, news:, nntp:, Gopher,
  Telnet and local file links are supported
  (Javascript links are currently ignored)
* restrict link checking with regular expression filters for URLs
* proxy support
* supports username/password for HTTP and FTP authorization
* robots.txt exclusion protocol support
* i18n support
* command line interface
* (Fast)CGI web interface
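The core idea behind such a link checker can be sketched in a few lines of Python: collect href targets from an HTML page, then probe each absolute URL and report its status. This is a hypothetical illustration of the technique, not LinkChecker's actual API; all names below are made up.

```python
from html.parser import HTMLParser
from urllib.request import Request, urlopen
from urllib.error import URLError, HTTPError

class HrefCollector(HTMLParser):
    """Collect the href attribute of every <a> tag seen while parsing."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def extract_links(html: str):
    parser = HrefCollector()
    parser.feed(html)
    return parser.links

def check_link(url: str, timeout: float = 10.0) -> str:
    """Probe one absolute http(s) URL; return 'ok' or an error description."""
    try:
        with urlopen(Request(url, method="HEAD"), timeout=timeout) as resp:
            return "ok" if resp.status < 400 else f"broken ({resp.status})"
    except HTTPError as err:
        return f"broken ({err.code})"
    except URLError as err:
        return f"unreachable ({err.reason})"

links = extract_links('<a href="https://example.org/">x</a> <a href="/docs">y</a>')
# links == ["https://example.org/", "/docs"]
```

A real checker additionally resolves relative links against the page URL, recurses into fetched pages, and honours robots.txt, as the feature list above describes.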