Serialize your RSS as JavaScript.
Perhaps you use XML::RSS to generate RSS for consumption by RSS parsers.
Perhaps you also get requests for how to use the RSS feed by people who
have no idea how to parse XML, or write Perl programs for that matter.
Enter XML::RSS::JavaScript, a simple subclass of XML::RSS which writes your
RSS feed as a sequence of JavaScript print statements. This means you
can then write the JavaScript to disk, and a user's HTML can simply
include it like so:
<script language="JavaScript" src="/myfeed.js"></script>
What's more, the JavaScript emits HTML that can be fully styled with
CSS. See the CSS examples included with the distribution in the css directory.
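XML::RSS::JavaScript itself is Perl, but the underlying technique is simple. Here is an illustrative Python sketch (not the module's code; the item data and the "rss" CSS class are made up) that turns feed items into a sequence of document.write() statements:

```python
import json

def rss_items_to_js(items):
    """Emit a JavaScript file that document.write()s an HTML list of feed items.

    Illustrative only: `items` is a list of {"title", "link"} dicts; in the
    real module the data comes from a parsed RSS feed.
    """
    lines = ["document.write('<ul class=\"rss\">');"]
    for item in items:
        html = '<li><a href=%s>%s</a></li>' % (json.dumps(item["link"]),
                                               item["title"])
        # json.dumps doubles as a safe JavaScript string-literal quoter here
        lines.append("document.write(%s);" % json.dumps(html))
    lines.append("document.write('</ul>');")
    return "\n".join(lines)

js = rss_items_to_js([{"title": "Hello", "link": "http://example.com/"}])
```

Writing the result to a file like myfeed.js is all that is needed for the script tag above to work.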
The Translate Toolkit is a set of software and documentation designed
to help make the lives of localizers both more productive and less
frustrating. The software includes programs to convert localization
formats to the common PO format and programs to check and manage PO
files. The documentation includes guides on using the tools, running a
localization project, and localizing various projects from
OpenOffice.org to Mozilla.
At its core the software contains a set of classes for handling various
localization storage formats: DTD, properties, OpenOffice.org GSI/SDF,
CSV and of course PO and XLIFF. It also provides scripts to convert
between these formats.
Also part of the Toolkit are Python programs to create word counts,
merge translations and perform various checks on PO and XLIFF files.
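As a taste of the kind of PO processing involved, here is a minimal Python sketch of a per-entry word count (this is not the Toolkit's pocount tool; it only handles single-line msgid strings):

```python
import re

def po_word_counts(po_text):
    """Count source words per msgid in a (simplified) PO file.

    Illustrative sketch only: real PO files allow multi-line strings,
    plurals and escapes, which this deliberately ignores.
    """
    counts = {}
    for match in re.finditer(r'^msgid "(.*)"$', po_text, re.MULTILINE):
        msgid = match.group(1)
        if msgid:                       # skip the empty header msgid
            counts[msgid] = len(msgid.split())
    return counts

sample = '''msgid ""
msgstr "Project-Id-Version: demo"

msgid "Open file"
msgstr "Datei \u00f6ffnen"
'''
```

Word counts like these are what translators use to estimate and bill work, which is why the Toolkit ships a dedicated tool for them.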
Elasticsearch DSL is a high-level library whose aim is to help with writing
and running queries against Elasticsearch. It is built on top of the official
low-level client (elasticsearch-py).
It provides a more convenient and idiomatic way to write and manipulate
queries. It stays close to the Elasticsearch JSON DSL, mirroring its terminology
and structure. It exposes the whole range of the DSL from Python, either directly
using defined classes or via queryset-like expressions.
It also provides an optional wrapper for working with documents as Python
objects: defining mappings, retrieving and saving documents, wrapping the
document data in user-defined classes.
To use the other Elasticsearch APIs (e.g. cluster health), just use the
underlying client.
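To illustrate what "queryset-like" chaining means, here is a toy Python builder (the Search/query/filter/to_dict names mirror the library's, but this sketch is not elasticsearch-dsl itself):

```python
class Search:
    """Toy chainable query builder producing Elasticsearch JSON DSL dicts."""

    def __init__(self):
        self._musts, self._filters = [], []

    def _clone(self):
        s = Search()
        s._musts, s._filters = list(self._musts), list(self._filters)
        return s

    def query(self, kind, **kw):
        # Each call returns a new Search, so partial queries can be reused
        s = self._clone()
        s._musts.append({kind: kw})
        return s

    def filter(self, kind, **kw):
        s = self._clone()
        s._filters.append({kind: kw})
        return s

    def to_dict(self):
        return {"query": {"bool": {"must": self._musts,
                                   "filter": self._filters}}}

s = Search().query("match", title="python").filter("term", published=True)
```

Calling s.to_dict() yields the nested bool query you would otherwise hand-write as JSON, which is exactly the tedium the real library removes.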
RiCal is a new Ruby library for parsing, generating, and using iCalendar
(RFC 2445) format data.
RiCal distinguishes itself from existing Ruby libraries by providing
support for:
- Timezone components in calendars. This means that RiCal parses VTIMEZONE
data and instantiates timezone objects which can be used to convert
times in the calendar to and from UTC. In addition, RiCal allows
created calendars and components to use time zones understood by TZInfo
(from either the TZInfo gem or from Rails ActiveSupport >= 2.2).
When a calendar with TZInfo time zones is exported, RFC 2445 conforming
VTIMEZONE components will be included, allowing other programs to process
the result.
- Enumeration of recurring occurrences. For example, if an Event has one
or more recurrence rules, then the occurrences of the event can be enumerated
as a series of Event occurrences.
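Recurrence enumeration is easy to picture with a toy example. This Python sketch (RiCal itself is Ruby and handles full RFC 2445 RRULEs) expands just a FREQ=WEEKLY;COUNT=n rule:

```python
from datetime import datetime, timedelta

def weekly_occurrences(start, count):
    """Enumerate occurrences of a weekly recurrence rule.

    Illustrative only: covers FREQ=WEEKLY;COUNT=n for a fixed start time,
    a tiny subset of what RFC 2445 recurrence rules allow.
    """
    return [start + timedelta(weeks=i) for i in range(count)]

occurrences = weekly_occurrences(datetime(2009, 4, 6, 9, 0), 3)
```

Real RRULE expansion must also handle BYDAY/BYMONTH modifiers, UNTIL bounds, exception dates, and time zone conversions, which is where a library earns its keep.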
rmmseg-cpp is a high performance Chinese word segmentation utility for
Ruby. It features full Ferret (http://ferret.davebalmain.com/) integration
as well as support for normal Ruby program usage.
rmmseg-cpp is a rewrite in C++ of the original
RMMSeg (http://rmmseg.rubyforge.org/) gem, which is written
in pure Ruby. Though I tried hard to tweak RMMSeg, it still consumes
lots of memory and the segmenting process is rather slow.
The interface is almost identical to RMMSeg but the performance is
much better. This gem is always preferable for production
use. However, if you want to understand how the MMSEG segmenting
algorithm works, the source code of RMMSeg is a better choice than
this one.
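MMSEG resolves ambiguities between candidate "chunks" of words; as a taste of dictionary-driven segmentation, here is a much simpler forward-maximum-matching sketch in Python (the vocabulary is made up, and this is not MMSEG itself):

```python
def forward_max_match(text, dictionary, max_len=4):
    """Segment text by greedy forward maximum matching.

    At each position, take the longest dictionary word that matches,
    falling back to a single character. Far simpler than MMSEG's
    chunk-based disambiguation, but shows the basic idea.
    """
    words, i = [], 0
    while i < len(text):
        for length in range(min(max_len, len(text) - i), 0, -1):
            candidate = text[i:i + length]
            if length == 1 or candidate in dictionary:
                words.append(candidate)
                i += length
                break
    return words

vocab = {"研究", "生命", "起源", "研究生"}
segments = forward_max_match("研究生命起源", vocab)
```

Note that the greedy match picks 研究生 here instead of 研究/生命, a classic mis-segmentation; ambiguities of this kind are exactly what MMSEG's additional rules are designed to handle.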
Trac uses a minimalistic approach to web-based software project management.
Our mission is to help developers write great software. Trac should change a
team's existing development process and policies as little as possible.
All aspects of Trac serve a single goal: simplifying the tracking and
communication of software issues and enhancements, and monitoring overall
progress.
What is Trac?
* An integrated system for managing software projects
* An enhanced wiki
* A flexible, web-based issue tracker
* An interface to the Subversion version control system
At the core of Trac lies an integrated wiki and issue/bug database. Using wiki
markup, every managed object can be linked directly to other issue/bug reports,
code changesets, documentation and files.
Ever tried logging Apache page serve times using '%D'? You'll have discovered
that they aren't a good index of your server's performance, because they depend
more on the client's connection speed, computer and browsing habits than on the
speed of your server.
mod_log_firstbyte is a module for Apache 2.0 which allows you to log the time
between each request being read and the first byte of the response served.
Unlike the total serve time, this index of performance tells you how long Apache
actually spent loading the file off the disk or executing your script: it's
independent of client connection speed. It makes a great performance benchmark
for your server!
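mod_log_firstbyte is C code inside Apache; to see what first-byte timing captures, here is a Python sketch that measures time-to-first-byte against a throwaway local server (the 50 ms sleep stands in for server-side work, and measuring from the client on loopback only approximates the server-side figure the module logs):

```python
import socket
import threading
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

class SlowHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(0.05)            # simulated disk load / script execution
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):   # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), SlowHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

with socket.create_connection(server.server_address) as conn:
    start = time.monotonic()
    conn.sendall(b"GET / HTTP/1.0\r\n\r\n")
    conn.recv(1)                    # block until the first response byte
    ttfb = time.monotonic() - start

server.shutdown()
```

On loopback the measured ttfb is dominated by the simulated work, which is the point: unlike '%D', it excludes the time spent dribbling the response to a slow client.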
The crawl utility starts a depth-first traversal of the web at the
specified URLs. It stores all JPEG images that match the configured
constraints. Crawl is fairly fast and allows for graceful termination.
After terminating crawl, it is possible to restart it at exactly
the same spot where it was terminated. Crawl keeps a persistent
database that allows multiple crawls without revisiting sites.
The main reason for writing crawl was the lack of simple open source
web crawlers. Crawl is only a few thousand lines of code and fairly
easy to debug and customize.
Some of the main features:
- Saves encountered JPEG images
- Image selection based on regular expressions and size constraints
- Resume previous crawl after graceful termination
- Persistent database of visited URLs
- Very small and efficient code
- Supports robots.txt
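The persistent-database feature can be sketched with a few lines of Python (illustrative only; crawl's actual on-disk format is its own). A durable set of visited URLs is what lets a crawl resume without revisiting sites:

```python
import sqlite3

class VisitedDB:
    """Persistent set of visited URLs, so a crawl can resume after exiting.

    Sketch of the idea behind crawl's database; ':memory:' is used in the
    demo below, but a real file path makes the set survive across runs.
    """

    def __init__(self, path):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS visited (url TEXT PRIMARY KEY)")

    def add(self, url):
        with self.conn:  # commit per insert so a kill loses little progress
            self.conn.execute(
                "INSERT OR IGNORE INTO visited VALUES (?)", (url,))

    def __contains__(self, url):
        cur = self.conn.execute(
            "SELECT 1 FROM visited WHERE url = ?", (url,))
        return cur.fetchone() is not None

db = VisitedDB(":memory:")
db.add("http://example.com/")
```

A crawler's main loop then simply skips any URL already in the database before fetching, which gives both resume-after-termination and duplicate suppression for free.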
This perl module provides an Active Server Pages port to the Apache HTTP
Server with perl as the host scripting language. Active Server Pages is
a web application platform that originated with the Microsoft IIS
server. Under Apache for both Win32 and Unix, it allows a developer to
create dynamic web applications with session management and perl code
embedded in static html files.
This is a portable solution, similar to the ActiveState PerlScript and MKS
PScript implementations of perl for IIS ASP. Work has been done, and will
continue, to make porting to and from these other implementations as smooth
as possible.
This module works under the Apache HTTP Server with the mod_perl module
enabled. See http://www.apache.org and http://perl.apache.org for
further information.
For database access, ActiveX, and scripting language issues, please read
the FAQ section.
http://search.cpan.org/dist/Apache-ASP/
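The embedded-code model is easy to sketch. This toy Python renderer (not Apache::ASP, which embeds perl and adds session management on top) shows the <% code %> / <%= expression %> idea:

```python
import io
import re

def render_asp_like(template, namespace):
    """Render a template containing <% code %> and <%= expression %> blocks.

    Toy illustration of the ASP embedding model: static text is copied
    through, <%= %> expressions are evaluated and written out, and <% %>
    blocks are executed for their side effects.
    """
    out = io.StringIO()
    namespace = dict(namespace, write=out.write)
    pos = 0
    for m in re.finditer(r"<%(=?)(.*?)%>", template, re.DOTALL):
        out.write(template[pos:m.start()])      # literal HTML before the tag
        if m.group(1):                          # <%= expr %>
            out.write(str(eval(m.group(2), namespace)))
        else:                                   # <% code %>
            exec(m.group(2).strip(), namespace)
        pos = m.end()
    out.write(template[pos:])                   # trailing literal HTML
    return out.getvalue()

page = render_asp_like("<h1>Hello <%= user %></h1>", {"user": "world"})
```

A production engine additionally compiles templates once, escapes output, and scopes state per request and per session, which is the bulk of what a platform like Apache::ASP provides.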
The Apache::ConfigParser module is used to load an Apache configuration
file to allow programs to determine Apache's configuration directives and
contexts. The resulting object contains a tree based structure using the
Apache::ConfigParser::Directive class, which is a subclass of
Tree::DAG_Node, so all of the methods that enable tree based searches and
modifications from Tree::DAG_Node are also available. The tree structure
is used to represent the ability to nest sections, such as <VirtualHost>,
<Directory>, etc.
Apache does a great job of checking Apache configuration files for errors,
and this module leaves most of that to Apache. This module does minimal
configuration file checking. The module currently checks for:
Start and end context names match
The module checks if the start and end context names match. If the end
context name does not match the start context name, then it is ignored.
The module does not even check whether the configuration contexts have
valid names.
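The start/end matching check can be sketched with a stack. This Python toy (not the module's Perl code; note it reports mismatches, where the module silently ignores them, and its regex is nowhere near a real Apache config parser) shows the idea:

```python
import re

def check_context_matching(config_text):
    """Check that <Section ...> and </Section> context names match and nest.

    Sketch of the one check described above: push each opening context
    name, pop on a matching close, and report anything left unbalanced.
    """
    stack, errors = [], []
    for m in re.finditer(r"<(/?)(\w+)[^>]*>", config_text):
        closing, name = m.group(1), m.group(2)
        if not closing:
            stack.append(name)
        elif stack and stack[-1].lower() == name.lower():  # Apache is case-insensitive
            stack.pop()
        else:
            errors.append("unmatched </%s>" % name)
    errors.extend("unclosed <%s>" % name for name in stack)
    return errors

good = "<VirtualHost *:80>\n<Directory /var/www>\n</Directory>\n</VirtualHost>\n"
bad = "<VirtualHost *:80>\n</Directory>\n"
```

Because sections like <VirtualHost> and <Directory> may nest arbitrarily, a stack (or equivalently, building the tree directly, as the module does via Tree::DAG_Node) is the natural structure for this check.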