Wednesday, November 25, 2009

New features on rt.cpan.org

We implemented and deployed two new features on rt.cpan.org.

Subject Tags

People with many distributions will love it, as it allows you to add a custom string into the subject of emails per distribution. Subjects of emails will look something like [rt.cpan.org YourTokenHere #123].

We decided to leave rt.cpan.org there, but I'm pretty sure there will be people who will try to abuse the feature. Be smart, don't use too short or too long tags, and remember that reporters also receive emails.

Maintainers list with links

It's just UI sugar: each maintainer in the list is now wrapped into a link that leads to all modules this author maintains.

Sunday, October 25, 2009

Faster composite regular expressions

Regular expressions are a powerful tool, but they quickly become too long to be readable. Some people use the //x modifier. I prefer to split them into many smaller regular expressions, for example:

    my $re_num = qr/.../;
    my $re_quoted = qr/.../;
    my $re_value = qr/$re_num|$re_quoted/;

It works just fine; usually I compile them in package space beforehand and then use them in functions with //o:

    my $re_foo = ...;
    sub foo {
        ...
        if ( /^$re_foo/o ) {
            ...
        }
        ...
    }
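
For instance, a concrete (hypothetical) version of the value regexp and its //o use might look like this:

    use strict;
    use warnings;

    # hypothetical concrete versions of the sub-expressions above
    my $re_num    = qr/-?\d+(?:\.\d+)?/;
    my $re_quoted = qr/"[^"]*"/;
    my $re_value  = qr/$re_num|$re_quoted/;

    sub is_value {
        my $str = shift;
        # $re_value never changes, so //o is safe here
        return $str =~ /^$re_value$/o ? 1 : 0;
    }

    print is_value('42'), is_value('"foo"'), is_value('foo'), "\n"; # prints 110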

It doesn't matter what exactly you do; the question is how much speed you lose if you need these REs to be dynamic. I've decided to make a simple test to understand which variant is faster:

    use Benchmark qw(cmpthese);
    my $count = -60;

    my $re = qr/\d+/;
    my $re_pre = qr/^\d+$/;

    cmpthese($count, {
        static => sub { return "123456789" =~ /^\d+$/ },
        o => sub { return "123456789" =~ /^$re$/o },
        no_o => sub { return "123456789" =~ /^$re$/ },
        no_o_pre => sub { return "123456789" =~ $re_pre },
    });

    cmpthese($count, {
        static => sub { return "123456w789" =~ /^\d+$/ },
        o => sub { return "123456w789" =~ /^$re$/o },
        no_o => sub { return "123456w789" =~ /^$re$/ },
        no_o_pre => sub { return "123456w789" =~ $re_pre },
    });

It just compares four different variants: a plain old static regexp, a regexp in a variable with some additions, the same with //o, and finally another RE with all additions built in, used without any quoting. Here are the results:

                  Rate     no_o no_o_pre        o   static
    no_o      851115/s       --     -30%     -41%     -47%
    no_o_pre 1222940/s      44%       --     -15%     -24%
    o        1443941/s      70%      18%       --     -11%
    static   1613818/s      90%      32%      12%       --
                  Rate     no_o no_o_pre        o   static
    no_o      923012/s       --     -33%     -37%     -46%
    no_o_pre 1376153/s      49%       --      -6%     -19%
    o        1471770/s      59%       7%       --     -14%
    static   1705241/s      85%      24%      16%       --

Results are consistent with my hopes. I'll try to explain them, but I can not say I know everything about this. In the 'no_o' case perl has to compile the regular expression each time you run the code. Time spent in compilation is enough to give up 40% to the next variant. 'o' and 'no_o_pre' are very close and I expected something like that. In the 'o' case perl has to compile once at runtime and check the cache each time. In the 'no_o_pre' case perl has to check each time that the thing on the right hand side is an RE object. It's probably possible to make the //o case very close to static by rebuilding the op_tree, however that would disappoint some deparse modules. The static case is the fastest and that's understandable.

Should you use this? Yes. All the time? No. For example, suppose you write a parser for apache logs, not a simple one, but a parser that takes the log file format string and builds regular expressions for that particular format. In that case I would think twice about the design and the way REs are used.
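
To illustrate that design question, here is a minimal sketch of building one RE per log format string; the token table and patterns are simplified stand-ins, not Apache's real format syntax:

    use strict;
    use warnings;

    # hypothetical token => sub-regexp table
    my %token_re = (
        '%h' => qr/(\S+)/,          # remote host
        '%t' => qr/\[([^\]]+)\]/,   # timestamp
        '%r' => qr/"([^"]*)"/,      # request line
        '%s' => qr/(\d{3})/,        # status code
    );

    sub build_log_re {
        my $format = shift;
        # translate the format string into one composite regexp...
        my $re = join '\s+', map { $token_re{$_} || quotemeta($_) } split /\s+/, $format;
        # ...and compile it once per format, not once per log line
        return qr/^$re$/;
    }

    my $re  = build_log_re('%h %t %r %s');
    my @hit = '127.0.0.1 [01/Jan/2009:00:00:00 +0300] "GET / HTTP/1.0" 200' =~ $re;
    print "@hit\n";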

Wednesday, October 07, 2009

Easy thing, but useful, strange that nobody implemented it earlier

This post is about Perl, Mason, memory leaks and hunting them easily in object-oriented applications based on these technologies.

It's not a secret that you can cause a memory leak by introducing a reference cycle. It often happens in tree structures where a parent holds references to all its children and each child references its parent.

Perl has reference weakening that helps avoid most of these problems, or you can ask people to call a method to destroy the structure. Developers who publish modules on the CPAN are usually aware of these solutions and cover this. However, it can be done differently in each module and it's easy to overlook in the docs.
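
For reference, a minimal sketch of such a parent/child cycle and the weakening fix (the Node class and its methods are made up for illustration):

    use strict;
    use warnings;

    package Node;
    use Scalar::Util qw(weaken);

    sub new { return bless { children => [] }, shift }

    sub add_child {
        my ($self, $child) = @_;
        push @{ $self->{'children'} }, $child;   # parent -> child (strong)
        $child->{'parent'} = $self;              # child -> parent closes the cycle...
        weaken $child->{'parent'};               # ...so weaken it and the tree can be freed
        return $child;
    }

    package main;

    my $root = Node->new;
    $root->add_child( Node->new ) for 1 .. 3;
    # without the weaken call the whole tree would never be garbage collected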

For a long time I was using different modules to catch leaks, for example Devel::Leak::Object. It's a really useful module, but I used it with custom patches for better diagnostics.

Recently I had to look into leaks once again and started to wonder how to find a leak that is not reproducible on my machine, but a customer sees it and can not say which request causes it. Looked at the CPAN again. Found Devel::LeakGuard::Object, a new reincarnation of Devel::Leak::Object with additional ways to instrument reporting.

It was very easy to write a simple memory leak tracer for Mason based applications as a Mason plugin. It has already helped me identify three small memory leaks in the Request Tracker software just by enabling this new module in my development environment. Leaks just popped up in the logs while I was testing things.

I hope that MasonX::LeakGuard::Object can help other people as well.

Saturday, September 12, 2009

Improving usability for people and code reuse

For my long-running tisql project I wrote the Parse::BooleanLogic module a while ago. Recently I found a new application for it and it worked pretty well.

In RT we have scrips - a condition, an action and a template. When something happens with a ticket, the change is checked against the conditions of the scrips. Only those actions are applied for which conditions returned a true value. Pretty simple, but a condition is just code and we want code to be reusable.

Conditions are implemented as modules and controlled by an argument, usually a parsable string. Strings work well for people. Lots of RT admins can not write code for conditions and it's silly to ask them to wrap code they can not write into a module.

We have a 'User Defined' condition module for such cases, where you can write code right in the UI. It helps, but only so much. Here goes another problem. If you have code in a module that nobody except you can write, then this module should be a help for everybody, but it's not. Often people want to mix complex things with simple conditions, and either you have to extend the format of the condition's argument or invent a new thing.

I decided to "invent".

There was nothing to invent actually, just technologies to connect together, and that's what I did. Sometimes I think our work is to connect things together. Recall I said that 'User Defined' allows you to type a condition using Perl right in the UI. Ok. What if we replace perl with a custom syntax? Parse::BooleanLogic provides a good parser for pretty arbitrary things inside nested parentheses joined with the boolean operators 'AND' and 'OR'. Almost any condition in RT falls into this category and looks like the following even in perl:

    return 1 if ( this OR that OR (that AND that) ) AND else...
    return 0;

In RT we have TicketSQL, which you can use to search tickets with simple SQL-like conditions:

    x = 10 OR y LIKE 'string' OR z IS NULL

I decided that this would work pretty well in conditions too. In a condition we have the current ticket we check and the change (transaction):

    ( Type = 'Create' AND Ticket.Status = 'resolved' )
    OR ( Type = 'Set' AND Field = 'Status' AND NewValue = 'resolved' )

Looks good and every user can get the syntax, but we're not there yet.

We already have modules that implement conditions and we want to reuse them. Pretty easy to solve:

    ModuleName{'argument'} OR !AnotherModule{'argument'}

I'm really proud that my parser allows me to parse syntax like this without much work:

    sub ParseCode {
        my $self = shift;

        my $code = $self->ScripObj->CustomIsApplicableCode;

        my @errors = ();
        my $res = $parser->as_array(
            $code,
            error_cb => sub { push @errors, $_[0]; },
            operand_cb => sub {
                my $op = shift;
                if ( $op =~ /^(!?)($re_exec_module)(?:{$re_module_argument})?$/o ) {
                    return {
                        module => $2,
                        negative => $1,
                        argument => $parser->dq($3),
                    };
                }
                elsif ( $op =~ /^($re_field)\s+($re_bin_op)\s+($re_value)$/o ) {
                    return { op => $2, lhs => $1, rhs => $3 };
                }
                elsif ( $op =~ /^($re_field)\s+($re_un_op)$/o ) {
                    return { op => $2, lhs => $1 };
                }
                else {
                    push @errors, "'$op' is not a check 'Complex' condition knows about";
                    return undef;
                }
            },
        );
        return @errors? (undef, @errors) : ($res);
    }

It's not only a parser, but a solver as well:

    my $solver = sub {
        my $cond = shift;
        my $self = $_[0];
        if ( $cond->{'op'} ) {
            return $self->OpHandler($cond->{'op'})->(
                $self->GetField( $cond->{'lhs'}, @_ ),
                $self->GetValue( $cond->{'rhs'}, @_ )
            );
        }
        elsif ( $cond->{'module'} ) {
            my $module = 'RT::Condition::'. $cond->{'module'};
            eval "require $module;1" || die "Require of $module failed.\n$@\n";
            my $obj = $module->new (
                TransactionObj => $_[1],
                TicketObj      => $_[2],
                Argument       => $cond->{'argument'},
                CurrentUser    => $RT::SystemUser,
            );
            return $obj->IsApplicable;
        } else {
            die "Boo";
        }
    };

    sub Solve {
        my $self = shift;
        my $tree = shift;

        my $txn = $self->TransactionObj;
        my $ticket = $self->TicketObj;

        return $parser->solve( $tree, $solver, $self, $txn, $ticket );
    }
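
Outside of RT, the same as_array/solve pattern boils down to something like this minimal sketch; the condition string and the operand format here are made up:

    use strict;
    use warnings;
    use Parse::BooleanLogic;

    my $parser = Parse::BooleanLogic->new;

    # parse operands into simple { field, value } hashes
    my $tree = $parser->as_array(
        q{( Type = 'Create' AND Status = 'new' ) OR Field = 'Status'},
        operand_cb => sub {
            my $op = shift;
            $op =~ /^(\w+)\s*=\s*'([^']*)'$/ or die "can't parse '$op'";
            return { field => $1, value => $2 };
        },
    );

    # solve the tree against a fake transaction
    my %txn = ( Type => 'Create', Status => 'new' );
    my $result = $parser->solve( $tree, sub {
        my $cond = shift;
        return ( $txn{ $cond->{'field'} } || '' ) eq $cond->{'value'};
    } );
    print $result ? "matched\n" : "no match\n";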

That's it. Everything else is grammar regexps, column handlers and documentation. Available on the CPAN and on github.

Thursday, September 03, 2009

Even a child can package RT extensions with Module::Install::RTx

Request Tracker has many tools for extending its functionality without patches, but you shouldn't keep those customizations in the same directory as the installation either. Tomorrow you may want to copy them to a new server, or you may need to make changes and test them beforehand. Package your changes as an extension; after all, doing so is trivial.

Sharing more memory between apache/fastcgi processes

If you run a forking apache with mod_perl or a FastCGI application, it's a good idea to load as many modules as possible before the forks. This lets you use copy-on-write more effectively and save memory for other needs.

With the modules you use directly everything is simple: you know the list, you can enumerate them in a file and load it before the forks. But in some cases external modules postpone loading until a certain moment. In such cases you can use a simple trick, originally suggested by JJ, which I described in more detail in Russian in an article for Request Tracker users.
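
As a hypothetical sketch of such a preload file (the module list is made up; load whatever your application actually uses, e.g. via mod_perl's PerlRequire or your FastCGI wrapper):

    # preload.pl - loaded once before apache/fastcgi forks
    use strict;
    use warnings;

    # modules the application uses directly
    use DBI;
    use DateTime;
    use Template;

    # modules that are normally loaded lazily - require them explicitly
    # so their pages end up in memory shared via copy-on-write
    require DateTime::Format::Mail;
    require DBD::mysql;

    1;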

Monday, August 24, 2009

Trying Padre on MacOS

For a while I've wanted to play with Padre. It's an IDE written in the perl programming language and at this point its main target is perl developers. I tried it once on windows, but I don't develop on windows. For development I use perl5.8 from MacPorts on MacOS X.

First of all you find that Padre requires a threaded perl, and it's a reasonable requirement. So I had to switch perl. I've deactivated perl5.8 and installed a new one with threads using the following commands:

    port deactivate perl5.8
    port install perl5.8 +threads

Sure, such things don't go smoothly: binary incompatibility and path changes. The CPAN shell died complaining about missing dzopen in Compress::Zlib. Installed it manually from the CPAN. Didn't help. So I deleted all directories that may affect things:

    # get rid of old perl files, the current version is 5.8.9
    find /opt/local/lib/perl5 -name '5.8.8' | xargs sudo rm -fr
    # get rid of everything related to compression
    find /opt/local/lib/perl5 | grep 'Compress' | xargs sudo rm -fr
    # get rid of everything related to old architecture, new one is darwin-threaded-multi-2level
    find /opt/local/lib/perl5 -type d -name darwin-2level | xargs sudo rm -fr

Ok. CPAN started to work, as it can use the gzip and gunzip commands. Re-installed Compress::Zlib. Then I usually install the CPAN::Reporter module. It slows down installation a little bit, but it helps the perl community provide you better solutions. Installation is simple:

    sh> cpan
    cpan> install CPAN::Reporter
    cpan> o conf init test_report
    cpan> o conf commit

Then I started installing Padre :)

    cpan> install Padre

It takes some time, so in another console I was looking at the breakage in my perl. First of all subversion-perlbindings was broken and svk didn't work. It was easy to fix by reinstalling it with the "port -f upgrade subversion-perlbindings" command. Upgraded in the same way all packages matching p5-*.

Installation of Padre failed, but that doesn't mean much. I deleted lots of files. For example, Algorithm::C3 was deleted while Class::MOP was not. It's all easy to fix. Just install modules when some tests die with a "module is missing" error.

At the end I had problems with File::HomeDir. It's something that affects loading Padre. Filed a bug report and decided to stop at this point.

Some conclusions. MacPorts is good, but not that good. For example, gentoo has the perl-cleaner utility that helps fix things. revdep-rebuild is another gentoo tool that helps a lot during upgrades. In the end, switching from a perl without threads to a version with threads support means a lot of reinstallations. I'm fine with that, but I don't think a lot of people are. I don't think that building everything from scratch for Padre is going to help its adoption. I see more benefit in helping MacPorts and other distributions solve the problems of switching from non-threaded perl to threaded and back. File::HomeDir needs more love. Otherwise I had no problems, but the app is not functioning at this moment. Going to try again some day later.

Tuesday, July 28, 2009

Debugging perl programs/internals in gdb

When it comes to perl internals, print based debugging doesn't work that well. Compilation and installation are too slow, so you can not just place a print and quickly see output. At some point gdb should be used. In the perl world we have Devel::Peek's Dump function to look behind the curtain. In the C world there is sv_dump.
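
From perl space, for comparison, it looks like this:

    use Devel::Peek;

    my $str = "123";
    my $num = $str + 0;   # touch it as a number too, to populate the numeric slot
    Dump($str);           # prints the SV's flags and fields to STDERR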

    # threaded perl:
    (gdb) call Perl_sv_dump(my_perl, variable)
    # not threaded perl:
    (gdb) call Perl_sv_dump(variable)

The Perl_ prefix is some magic; I don't care much why, but in most cases you need to prefix things.

Using breakpoints is a must. Use the pp_* functions to break on, for example Perl_pp_entersub. Here is a simple session where we stop before entering a sub and use the dumper to figure out the sub's name:

    > gdb ./perl
    GNU gdb 6.3.50-20050815 (Apple version gdb-962) (Sat Jul 26 08:14:40 UTC 2008)

    (gdb) break Perl_pp_entersub
    Breakpoint 1 at 0xe512c: file pp_hot.c, line 2663.

    (gdb) run -e 'sub foo { return $x } foo()'

    Breakpoint 1, Perl_pp_entersub (my_perl=0x800000) at pp_hot.c:2663
    2663        dVAR; dSP; dPOPss;
    (gdb) n
    2662    {
    (gdb) 
    2663        dVAR; dSP; dPOPss;
    (gdb) 
    2668        const bool hasargs = (PL_op->op_flags & OPf_STACKED) != 0;
    (gdb) 
    2670        if (!sv)

    (gdb) call Perl_sv_dump(my_perl, sv)
    SV = PVGV(0x8103fc) at 0x813ef0
      REFCNT = 2
      FLAGS = (MULTI,IN_PAD)
      NAME = "foo"
      NAMELEN = 3
      GvSTASH = 0x8038f0    "main"
      GP = 0x3078f0
        SV = 0x0
        REFCNT = 1
        IO = 0x0
        FORM = 0x0  
        AV = 0x0
        HV = 0x0
        CV = 0x813eb0
        CVGEN = 0x0
        LINE = 1
        FILE = "-e"
        FLAGS = 0xa
        EGV = 0x813ef0      "foo"

Quite simple, but when you start investigating internals it's very helpful.

Monday, July 27, 2009

Proper doubly linked list

A doubly linked list is a well-known structure. Each element references the previous and next element in the chain:

    use strict;
    use warnings;

    package List;

    sub new {
        my $proto = shift;
        my $self = bless {@_}, ref($proto) || $proto;
    }

    sub prev {
        my $self = shift;
        if ( @_ ) {
            my $prev = $self->{'prev'} = shift;
            $prev->{'next'} = $self;
        }
        return $self->{'prev'};
    }

    sub next {
        my $self = shift;
        if ( @_ ) {
            my $next = $self->{'next'} = shift;
            $next->{'prev'} = $self;
        }
        return $self->{'next'};
    }

    package main;

    my $head = List->new(v=>1);
    $head->next( List->new(v=>3)->prev( List->new(v=>2) ) );

Clean and simple. If you're experienced in perl you should know that such a thing leaks memory. Each element has at least one reference from a neighbor all the time, so perl's garbage collector never sees that the structure can be collected. It's called a reference cycle, google it. You may also know that weaken from the Scalar::Util module can help you solve this:

    use Scalar::Util qw(weaken);

    sub prev {
        my $self = shift;
        if ( @_ ) {
            my $prev = $self->{'prev'} = shift;
            $prev->{'next'} = $self;
            weaken $self->{'prev'};
        }
        return $self->{'prev'};
    }

    # similar thing for next

So we weaken one group of references, in our example the prev links. It's a win and a loss. Yes, perl frees elements before exit, no more memory leaks, that's our win. But, there is always a but, you can not let the pointer to the first element go out of scope, or otherwise some elements can be freed against your wishes. For a while I thought it was impossible to solve this problem, but recent hacking, reading on perl internals and a question on a mailing list rang a bell. After a short discussion on the #p5p irc channel with Matt Trout, a solution was found. Actually, it's all been there, and Matt even has a module started that may help make it all easier, but here we're going to look at the guts.

We all know the DESTROY method is called on destruction, but few people know that we can prevent the actual destruction by incrementing the reference counter on the object. One woe - you shouldn't do it during global destruction, but there is a module to check when we're called:

    use Devel::GlobalDestruction;

    sub DESTROY {
        return if in_global_destruction;
        do_something_a_little_tricky();
    }

What can we do with this? We have two links: from the current element to the next and from that next back to the current. One of them is weak, and on destroy we can swap them if the element that is going to be destroyed is referenced by a weak link. It's easier in code than in my words:

    sub DESTROY {
        return if in_global_destruction();

        my $self = shift;
        if ( $self->{'next'} && isweak $self->{'next'}{'prev'} ) {
            $self->{'next'}{'prev'} = $self;
            weaken $self->{'next'};
        }
        if ( $self->{'prev'} && isweak $self->{'prev'}{'next'} ) {
            $self->{'prev'}{'next'} = $self;
            weaken $self->{'prev'};
        }
    }

That's it, now you can forget about the heads of the lists and pass around any element you like. isweak is also part of the Scalar::Util module. Good luck with cool lists and other linked structures. Matt is looking for help with his module. You can always find user mst on irc.perl.org to chat about this.
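
To illustrate, a minimal usage sketch, assuming the List class above is assembled with both weakened setters, the Scalar::Util imports and the DESTROY method; per the approach above, the neighbours should stay reachable through the middle element alone:

    my $middle;
    {
        my $first  = List->new( v => 1 );
        my $second = List->new( v => 2 );
        my $third  = List->new( v => 3 );
        $first->next( $second );
        $second->next( $third );
        $middle = $second;
    }   # $first and $third go out of scope here

    # the DESTROY swap keeps the neighbours alive as long as $middle lives
    print $middle->prev->{'v'}, " <-> ", $middle->{'v'}, " <-> ", $middle->next->{'v'}, "\n";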

Thursday, July 02, 2009

Nice article on perl internals nothingmuch wrote

If you're interested in perl5's internals even a little, you'll find this article useful. It doesn't re-describe the already well described SVs, AVs, HVs and other representations of perl structures, but introduces the execution of perl code with examples.

I know a few things about internals, but the author's point of view allowed me to better understand the RETURN and PUSHBACK macros, the stack pointer and the op_tree.

It's one tiny step towards understanding how cool things, like Devel::NYTProf, work.

Enjoy reading!

Friday, June 26, 2009

A Perl resource you may not have known about

Do you know which country dominates on the CPAN? The guys in RostovOnDon.pm know that. They wrote a simple service for it. It looks nice and simple. It has one feature that may be useful - an RSS feed of releases per author.

Friday, May 29, 2009

The Dual-Lived Problem

Chromatic writes about perl's future. His recent post on The Dual-Lived Problem caught my attention and I had time to read it to the bottom. I'll stop on the following idea and will try to promote the tools we at Best Practical develop for our needs:



First, improve the core's automated testing.
This helps everyone; it can identify changes
in the core code that affect the stability
and behavior of the standard library. It can
also identify changes in standard library
modules which do not work on important platforms.


I do believe that the CHIMPS toolset can help with this. I can not say that it's been tested for this purpose, but it is used exactly in this way by BPS. We test each revision of our project against the most recent revision of another project it depends on.

I hope people can try it with some modules they care about and report back issues. You should prefer the developer releases I recently blogged about or the repositories on github.

Finally made a new release of the CHIMPS client

CHIMPS is a collection of tools to smoke test projects out of repositories.

My adventure with this toolset started as a simple task: I wanted to add smoking of our product RTIR. Well, now I know for sure what "yak shaving" means. At the end we have git support, less bandwidth usage, faster tests, new options and an API for new repository back ends.

Some use cases have been described earlier.

Recently I uploaded two new development releases of the Test-Chimps and Test-Chimps-Client distributions to the CPAN. You're welcome to try them with your projects and report back using http://rt.cpan.org or this blog.

Friday, May 22, 2009

Bleeding edge smoke testing of perl software

We use CHIMPS for smoking at Best Practical. It's a modular client-server testing toolkit. One of its parts is the smoking client that works with SVN repositories.

It's really a simple thing, at least it was: you describe projects in repositories and the dependencies between them, start the smoker, and commit to the repos without being afraid you'll break backwards compatibility.

There are multiple cases when it's useful. Sure, you can just smoke test your own projects, but you can also smoke your project against the repository of a module you don't control, but depend on heavily.

For example, you use Moose a lot with fancy things, its bleeding edge features and some workarounds. Chimps may be the only way to help you prevent disaster before a new broken version of Moose is on the CPAN. To do this you set up smoking for your project and describe the Moose repository as well, so smoking goes against Moose's HEAD.

You can also test your project with the Moose that is installed on the system, with Moose's tag 0.01 and with Moose's HEAD. All this in one smoker.

Several days ago I started implementing a new tiny thing in the smoker. This change ended up in a total refactoring of the module and now we can smoke from git too. Adding an additional back end is now easy: you have to implement four methods in a class.

Sadly there have been no releases since 2006, so you have to check it out from github. I'm planning on releasing a dev version on the CPAN.

Wednesday, May 20, 2009

Where you can find the code behind rt.cpan.org

Hi, continuing to blog about http://rt.cpan.org.

Some people still think that rt.cpan.org is something hidden in Best Practical's cages where we hold it like a prisoner. That hasn't been true for a long time.

I started a simple "about rt.cpan.org" document. I hope people will find the link to this document on the main page.

Tuesday, May 19, 2009

Report spam button on rt.cpan.org

Maybe you don't know, but when people complain about the http://rt.cpan.org service, I'm one of those responsible for some of the stupidities. Heh. I'm not going to defend myself.

Fruck. All the cpan authors are responsible. The code behind the service is free and you can send patches and extensions. Do you know the number of those we received? ZERO.

That was a heads-up. Now, about reporting spam: I've written a simple RT extension so regular users can help report messages as spam. It's in Best Practical's repository:

   http://svn.bestpractical.com/cgi-bin/index.cgi/bps/browse/RT-Extension-ReportSpam

I've installed it already, but it has several problems:

1. no icons for the button.

2. when you click the link, the browser jumps to a page when it should be AJAX style. It works locally, but doesn't on the service. It needs debugging.

Help is welcome. You can write to ruz@bestpractical.com or find ruz/ruz_mac on IRC channels.

The way to search those spam tickets is described in the README.

Monday, May 18, 2009

A "new" game you can play on a conference

Ok, YAPC::Russia 2009 is over; it was an awesome event. I really liked it.

I'm going to talk about a game we played. The main idea is to gather new fantastic ideas for startups in -Ofun mode.

You have a subject; in our case it was Perl.

You spend 15 minutes collecting a lot of events related to the subject that most probably will happen in the near future (1-2 years). For example, it's likely that perl5 will be in use at the same rate for the next couple of years. The chances are not 100% but very high, so it's a perfect match.

It's ok to have things with lower chances of happening. That's perfectly fine.

At the end you should have a big list. We used some mindmap building software for managing and writing down ideas.

Then you decide that you're not interested in things that will happen for sure. The audience spends the next 15 minutes throwing away things that have a 50% or greater chance of happening and filling the blanks with things that have lower chances. At the end you need 3 to 5 events per 5-10 people in the room.

For example, Microsoft releases perl6, or crazy things like a nuclear war. It's better to avoid the really crazy things.

Randomize the list and split people into groups; each group gets 3 to 5 events. People should imagine that they live in a world where the events they received have happened. Every group has 15 minutes to discuss the new realities.

A delegate from each group presents the future, describing one or two things in their world. We were describing a CPAN module and an application written in Perl. Here is the list we had:

    1) big Russian IT companies organize a perl school
    2) CPAN gets simpler and more understandable
    3) there is a perl CMS that you can distribute without a developer in the box
    4) Russian kids study perl in primary school

Everything should be recorded in a file, so you have a list of fancy ideas. You can compare different realities and come up with new ideas that are applicable in the real world.

Wednesday, May 13, 2009

instrumenting twitter with perl

Yesterday I decided that I finally want to use twitter. Too much noise around it, but it looks like a nice thing that doesn't take much time. And it's useful for tiny announcements.

What a coincidence: lately I got addicted to music by Glen Hansard and Marketa Irglova from the Once movie. Twitter is the perfect thing for saying what you're listening to right now. Why not?

    sudo cpan Net::Twitter
    vi ~/bin/tw_song
    # 10 minutes of typing
    :wq

One slight problem is how to get the current song from iTunes.

    # googling
    # AppleScript editor

    if appIsRunning("iTunes") then
            tell application "iTunes"
                    artist of current track & " - " & name of current track
            end tell
    end if

    on appIsRunning(appName)
            tell application "System Events" to (name of processes) contains appName
    end appIsRunning

That's it. From the command line you can use it the following way:

    osascript /Users/ruz/Library/Scripts/say_current_itunes_song.scpt

Here is the final perl script:

    #!perl

    use strict;
    use warnings;

    my $song = `osascript /Users/ruz/Library/Scripts/say_current_itunes_song.scpt`;
    exit unless $song;    # nothing playing, nothing to tweet ('return' would die at file level)
    $song =~ s/^\s*//;
    $song =~ s/\s*$//;

    my $status = "Listening $song";
    print $status, "\n";

    require Net::Twitter;
    my $twit = Net::Twitter->new({username=>"myuser", password => "mypass" });
    $twit->update({status => $status});

Saturday, May 09, 2009

Blogging for perl geeks

Hi there. For a while I've been trying to return to blogging without much success. I don't like what's going on with LJ; it's crappy these days. I have a personal blog there, mostly in Russian, and I don't want to turn it into an IT blog. Many months ago I started looking for a new hosting and chose blogger for that. Made a few posts, but html is killing me, and the composer sucks too.

What the hell, I can not blog using some simple syntax. That's just wrong.

Here is my first post from my own blogging software - pod2blog.

Yes, you heard me right. I use perl's Plain Old Documentation (POD) for blogging.

It was a little bit harder than I thought it would be, but it works:

    #!perl

    use 5.008;
    use strict;
    use warnings;
    use utf8;

    use YAML::Any qw();
    use LWP::UserAgent;

    my $ua = LWP::UserAgent->new;

    my $file = shift;

    my $parser = MyParser->new;
    $parser->parse_from_file( $file );
    push_to_blogger( $parser->{'entry'} );

    sub push_to_blogger {
        my $entry = shift;

        my $request = HTTP::Request->new();
        $request->method( 'POST' );
        $request->uri('http://www.blogger.com/feeds/19258261/posts/default');
        $request->header( 'Content_Type' => 'application/atom+xml' );
        $request->header( 'GData-Version' => 2 );
        $request->header( 'Authorization' => 'GoogleLogin auth='. auth_token() );
        $request->content( $entry->as_xml );

        my $res = $ua->request( $request );
        print $res->dump;
    }

    { my $cache = undef;
    sub auth_token {
        return $cache if $cache;

        my $opt = config();
        my $res = $ua->post( 'https://www.google.com/accounts/ClientLogin', [
            Email  => $opt->{'email'},
            Passwd => $opt->{'password'},
            service => 'blogger',
            accountType => 'GOOGLE',
            source => 'ruz_at_cpan-pod2blog-1',
        ] );
        my ($auth) = ($res->content =~ /^Auth=(.*?)$/m)
            or die "Couldn't authenticate: ". $res->dump;
        return $cache = $auth;
    } }

    { my $cache;
    sub config {
        return $cache if $cache;

        require File::HomeDir;
        require File::Spec;
        require YAML::Any;
        $cache = YAML::Any::LoadFile( File::Spec->catfile(
            File::HomeDir->my_home, '.blogger.yml'
        ) );
        return $cache;
    } }

    use Pod::Parser;

    package MyParser;
    use base qw(Pod::Parser);

    use XML::LibXML;

    sub begin_input {
        my $self = shift;

        use XML::Atom::Entry;
        $self->{'entry'} = XML::Atom::Entry->new( Namespace => 'http://www.w3.org/2005/Atom' );

        my $doc = $self->{'doc'} = XML::LibXML->createDocument( "1.0", "UTF-8" );
        my $root = $self->{'in'} = $doc->createElement('div');
        $doc->setDocumentElement( $root );
    }

    sub end_input {
        my $self = shift;
        $self->{'entry'}->content(
            $self->{'doc'}->documentElement->toString,
        );
        return $self->{'entry'};
    }

    sub command {
        my ($self, $command, $paragraph, $line_num) = @_;
        if ($command eq 'head1') {
            my $expansion = $self->interpolate($paragraph, $line_num);
            $self->{'entry'}->title( $expansion );
        }
        elsif ( $command eq 'tags' ) {
            $self->{'entry'}->add_category( {term => $_, scheme => 'http://www.blogger.com/atom/ns#'} )
                foreach map { s/^\s+//;s/\s+$//;$_ } split /,/, $paragraph;
        }
    }

    sub verbatim {
        my ($self, $para, $line_num) = @_;
        my $e;
        if ( $self->{'last'} && $self->{'last'}->nodeName eq 'pre' ) {
            $e = $self->{'last'};
        } else {
            $e = $self->{'last'} = $self->{'doc'}->createElement('pre');
        }
        $e->appendTextNode( $para );
        $self->{'in'}->appendChild( $e );
        return $e;
    }

    sub textblock {
        my ($self, $paragraph, $line_num) = @_;
        my %parse_opts = (
            -expand_seq => 'interior_sequence',
            -expand_text => sub { return $_[0]->{'doc'}->createTextNode( $_[1] ) },
        );
        my $expansion = $self->parse_text( \%parse_opts, $paragraph, $line_num );

        my $e = $self->{'last'} = $self->{'doc'}->createElement('p');
        $e->appendChild( $_ ) foreach $expansion->children;
        $self->{'in'}->appendChild( $e );
        return $e;
    }

    sub interior_sequence {
        my ($self, $cmd, $arg, $node) = @_;
        if ( $cmd eq 'B' ) {
            my $e = $self->{'doc'}->createElement('strong');
            $e->appendChild( $_ ) foreach $node->parse_tree->children;
            return $e;
        }
        return $arg;
    }

    sub parse_from_file {
        my $self = shift;
        my $fname = $_[0];
        # XXX: set publish time
        return $self->SUPER::parse_from_file( @_ );
    }
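
For illustration, a hypothetical post source for this script could look like the following; only =head1, the custom =tags command, B<> sequences and verbatim blocks are handled above:

    =head1 My first post from pod2blog

    =tags perl, blogging

    A regular paragraph with some B<bold> text in it.

        # verbatim paragraphs become <pre> blocks
        print "hello, world\n";

    =cut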

Friday, January 16, 2009

It's close to the day X

We have place defined: the "Business inda .ru style" club provides us a room, internet access and maybe even an online video broadcast.

We have time defined: 9:00 - 21:00 Europe/Moscow TZ (UTC+3h)

We have crew: 12 perl developers and one CSS guru

We have project defined: described earlier in this blog.

We have IRC channels defined: #moscow.hm on irc.perl.org for English speakers

All is set. See you onsite and in IRC.


Monday, January 12, 2009

Idea

Write a perlplanet.ru implementation with additional features, like onsite publishing, tags, comments, some sort of back syncing. Whatever we have time for.

We're going to publish everything in a public repository online, or even keep an up-to-date server running for people to test the app in real time. So people from different places can join us on IRC or develop with us. Even if people don't want to develop, they can help us find quick solutions or just keep pushing us with sarcastic comments :)

Goals

  • Write ready for deployment application within 8-12 hours
  • Share experience
  • Try collective rapid development
  • Try Catalyst framework
  • See how far we can go in a short period of time
  • Have a lot of fun
  • Eat pizza

Tools

perl 5.8, Catalyst, Plagger, git and many different modules from CPAN :)

Sunday, January 11, 2009

Moscow Perl Mongers group organizes a one-day hackmeet next Saturday

Next Saturday (17 Jan 2009) the Moscow Perl community is going to hack the whole day on a simple web-service project. All developers will be in one room with their laptops for up to 12 hours and will try to build a Catalyst-based application starting from a blank page. The experience of the developers varies from "low perl" to "have some catalyst-based projects". It's gonna be fun.

The Russian and English speaking communities who won't be able to join us onsite can help us online. I've created the IRC channel #moscow.hm on irc.perl.org for English speakers and there is #moscow.pm on the rusnet.org.ru server for people speaking Russian. We'll use git (maybe svn) and will be posting commits to some public place (github probably), so you'll be able to hack with us as well.

Details for attendees are on moscow.pm's mailing list.

You're welcome to join us onsite, or online in IRC or the repository. I'll keep posting details here.