Remove last remnants of /thirdparty/.
author Todd Larsen <tlarsen@google.com>
date Mon, 04 Aug 2008 22:43:48 +0000
changeset 64 b73eec62825a
parent 63 9b1909e46633
child 65 d254d4577c30
Remove last remnants of /thirdparty/.
LICENSE.svnmerge
scripts/svn_load_dirs.pl
scripts/svnmerge.py
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/LICENSE.svnmerge	Mon Aug 04 22:43:48 2008 +0000
@@ -0,0 +1,339 @@
+		    GNU GENERAL PUBLIC LICENSE
+		       Version 2, June 1991
+
+ Copyright (C) 1989, 1991 Free Software Foundation, Inc.,
+ 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+
+			    Preamble
+
+  The licenses for most software are designed to take away your
+freedom to share and change it.  By contrast, the GNU General Public
+License is intended to guarantee your freedom to share and change free
+software--to make sure the software is free for all its users.  This
+General Public License applies to most of the Free Software
+Foundation's software and to any other program whose authors commit to
+using it.  (Some other Free Software Foundation software is covered by
+the GNU Lesser General Public License instead.)  You can apply it to
+your programs, too.
+
+  When we speak of free software, we are referring to freedom, not
+price.  Our General Public Licenses are designed to make sure that you
+have the freedom to distribute copies of free software (and charge for
+this service if you wish), that you receive source code or can get it
+if you want it, that you can change the software or use pieces of it
+in new free programs; and that you know you can do these things.
+
+  To protect your rights, we need to make restrictions that forbid
+anyone to deny you these rights or to ask you to surrender the rights.
+These restrictions translate to certain responsibilities for you if you
+distribute copies of the software, or if you modify it.
+
+  For example, if you distribute copies of such a program, whether
+gratis or for a fee, you must give the recipients all the rights that
+you have.  You must make sure that they, too, receive or can get the
+source code.  And you must show them these terms so they know their
+rights.
+
+  We protect your rights with two steps: (1) copyright the software, and
+(2) offer you this license which gives you legal permission to copy,
+distribute and/or modify the software.
+
+  Also, for each author's protection and ours, we want to make certain
+that everyone understands that there is no warranty for this free
+software.  If the software is modified by someone else and passed on, we
+want its recipients to know that what they have is not the original, so
+that any problems introduced by others will not reflect on the original
+authors' reputations.
+
+  Finally, any free program is threatened constantly by software
+patents.  We wish to avoid the danger that redistributors of a free
+program will individually obtain patent licenses, in effect making the
+program proprietary.  To prevent this, we have made it clear that any
+patent must be licensed for everyone's free use or not licensed at all.
+
+  The precise terms and conditions for copying, distribution and
+modification follow.
+
+		    GNU GENERAL PUBLIC LICENSE
+   TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
+
+  0. This License applies to any program or other work which contains
+a notice placed by the copyright holder saying it may be distributed
+under the terms of this General Public License.  The "Program", below,
+refers to any such program or work, and a "work based on the Program"
+means either the Program or any derivative work under copyright law:
+that is to say, a work containing the Program or a portion of it,
+either verbatim or with modifications and/or translated into another
+language.  (Hereinafter, translation is included without limitation in
+the term "modification".)  Each licensee is addressed as "you".
+
+Activities other than copying, distribution and modification are not
+covered by this License; they are outside its scope.  The act of
+running the Program is not restricted, and the output from the Program
+is covered only if its contents constitute a work based on the
+Program (independent of having been made by running the Program).
+Whether that is true depends on what the Program does.
+
+  1. You may copy and distribute verbatim copies of the Program's
+source code as you receive it, in any medium, provided that you
+conspicuously and appropriately publish on each copy an appropriate
+copyright notice and disclaimer of warranty; keep intact all the
+notices that refer to this License and to the absence of any warranty;
+and give any other recipients of the Program a copy of this License
+along with the Program.
+
+You may charge a fee for the physical act of transferring a copy, and
+you may at your option offer warranty protection in exchange for a fee.
+
+  2. You may modify your copy or copies of the Program or any portion
+of it, thus forming a work based on the Program, and copy and
+distribute such modifications or work under the terms of Section 1
+above, provided that you also meet all of these conditions:
+
+    a) You must cause the modified files to carry prominent notices
+    stating that you changed the files and the date of any change.
+
+    b) You must cause any work that you distribute or publish, that in
+    whole or in part contains or is derived from the Program or any
+    part thereof, to be licensed as a whole at no charge to all third
+    parties under the terms of this License.
+
+    c) If the modified program normally reads commands interactively
+    when run, you must cause it, when started running for such
+    interactive use in the most ordinary way, to print or display an
+    announcement including an appropriate copyright notice and a
+    notice that there is no warranty (or else, saying that you provide
+    a warranty) and that users may redistribute the program under
+    these conditions, and telling the user how to view a copy of this
+    License.  (Exception: if the Program itself is interactive but
+    does not normally print such an announcement, your work based on
+    the Program is not required to print an announcement.)
+
+These requirements apply to the modified work as a whole.  If
+identifiable sections of that work are not derived from the Program,
+and can be reasonably considered independent and separate works in
+themselves, then this License, and its terms, do not apply to those
+sections when you distribute them as separate works.  But when you
+distribute the same sections as part of a whole which is a work based
+on the Program, the distribution of the whole must be on the terms of
+this License, whose permissions for other licensees extend to the
+entire whole, and thus to each and every part regardless of who wrote it.
+
+Thus, it is not the intent of this section to claim rights or contest
+your rights to work written entirely by you; rather, the intent is to
+exercise the right to control the distribution of derivative or
+collective works based on the Program.
+
+In addition, mere aggregation of another work not based on the Program
+with the Program (or with a work based on the Program) on a volume of
+a storage or distribution medium does not bring the other work under
+the scope of this License.
+
+  3. You may copy and distribute the Program (or a work based on it,
+under Section 2) in object code or executable form under the terms of
+Sections 1 and 2 above provided that you also do one of the following:
+
+    a) Accompany it with the complete corresponding machine-readable
+    source code, which must be distributed under the terms of Sections
+    1 and 2 above on a medium customarily used for software interchange; or,
+
+    b) Accompany it with a written offer, valid for at least three
+    years, to give any third party, for a charge no more than your
+    cost of physically performing source distribution, a complete
+    machine-readable copy of the corresponding source code, to be
+    distributed under the terms of Sections 1 and 2 above on a medium
+    customarily used for software interchange; or,
+
+    c) Accompany it with the information you received as to the offer
+    to distribute corresponding source code.  (This alternative is
+    allowed only for noncommercial distribution and only if you
+    received the program in object code or executable form with such
+    an offer, in accord with Subsection b above.)
+
+The source code for a work means the preferred form of the work for
+making modifications to it.  For an executable work, complete source
+code means all the source code for all modules it contains, plus any
+associated interface definition files, plus the scripts used to
+control compilation and installation of the executable.  However, as a
+special exception, the source code distributed need not include
+anything that is normally distributed (in either source or binary
+form) with the major components (compiler, kernel, and so on) of the
+operating system on which the executable runs, unless that component
+itself accompanies the executable.
+
+If distribution of executable or object code is made by offering
+access to copy from a designated place, then offering equivalent
+access to copy the source code from the same place counts as
+distribution of the source code, even though third parties are not
+compelled to copy the source along with the object code.
+
+  4. You may not copy, modify, sublicense, or distribute the Program
+except as expressly provided under this License.  Any attempt
+otherwise to copy, modify, sublicense or distribute the Program is
+void, and will automatically terminate your rights under this License.
+However, parties who have received copies, or rights, from you under
+this License will not have their licenses terminated so long as such
+parties remain in full compliance.
+
+  5. You are not required to accept this License, since you have not
+signed it.  However, nothing else grants you permission to modify or
+distribute the Program or its derivative works.  These actions are
+prohibited by law if you do not accept this License.  Therefore, by
+modifying or distributing the Program (or any work based on the
+Program), you indicate your acceptance of this License to do so, and
+all its terms and conditions for copying, distributing or modifying
+the Program or works based on it.
+
+  6. Each time you redistribute the Program (or any work based on the
+Program), the recipient automatically receives a license from the
+original licensor to copy, distribute or modify the Program subject to
+these terms and conditions.  You may not impose any further
+restrictions on the recipients' exercise of the rights granted herein.
+You are not responsible for enforcing compliance by third parties to
+this License.
+
+  7. If, as a consequence of a court judgment or allegation of patent
+infringement or for any other reason (not limited to patent issues),
+conditions are imposed on you (whether by court order, agreement or
+otherwise) that contradict the conditions of this License, they do not
+excuse you from the conditions of this License.  If you cannot
+distribute so as to satisfy simultaneously your obligations under this
+License and any other pertinent obligations, then as a consequence you
+may not distribute the Program at all.  For example, if a patent
+license would not permit royalty-free redistribution of the Program by
+all those who receive copies directly or indirectly through you, then
+the only way you could satisfy both it and this License would be to
+refrain entirely from distribution of the Program.
+
+If any portion of this section is held invalid or unenforceable under
+any particular circumstance, the balance of the section is intended to
+apply and the section as a whole is intended to apply in other
+circumstances.
+
+It is not the purpose of this section to induce you to infringe any
+patents or other property right claims or to contest validity of any
+such claims; this section has the sole purpose of protecting the
+integrity of the free software distribution system, which is
+implemented by public license practices.  Many people have made
+generous contributions to the wide range of software distributed
+through that system in reliance on consistent application of that
+system; it is up to the author/donor to decide if he or she is willing
+to distribute software through any other system and a licensee cannot
+impose that choice.
+
+This section is intended to make thoroughly clear what is believed to
+be a consequence of the rest of this License.
+
+  8. If the distribution and/or use of the Program is restricted in
+certain countries either by patents or by copyrighted interfaces, the
+original copyright holder who places the Program under this License
+may add an explicit geographical distribution limitation excluding
+those countries, so that distribution is permitted only in or among
+countries not thus excluded.  In such case, this License incorporates
+the limitation as if written in the body of this License.
+
+  9. The Free Software Foundation may publish revised and/or new versions
+of the General Public License from time to time.  Such new versions will
+be similar in spirit to the present version, but may differ in detail to
+address new problems or concerns.
+
+Each version is given a distinguishing version number.  If the Program
+specifies a version number of this License which applies to it and "any
+later version", you have the option of following the terms and conditions
+either of that version or of any later version published by the Free
+Software Foundation.  If the Program does not specify a version number of
+this License, you may choose any version ever published by the Free Software
+Foundation.
+
+  10. If you wish to incorporate parts of the Program into other free
+programs whose distribution conditions are different, write to the author
+to ask for permission.  For software which is copyrighted by the Free
+Software Foundation, write to the Free Software Foundation; we sometimes
+make exceptions for this.  Our decision will be guided by the two goals
+of preserving the free status of all derivatives of our free software and
+of promoting the sharing and reuse of software generally.
+
+			    NO WARRANTY
+
+  11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
+FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW.  EXCEPT WHEN
+OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
+PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
+OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.  THE ENTIRE RISK AS
+TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU.  SHOULD THE
+PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
+REPAIR OR CORRECTION.
+
+  12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
+WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
+REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
+INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
+OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
+TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
+YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
+PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGES.
+
+		     END OF TERMS AND CONDITIONS
+
+	    How to Apply These Terms to Your New Programs
+
+  If you develop a new program, and you want it to be of the greatest
+possible use to the public, the best way to achieve this is to make it
+free software which everyone can redistribute and change under these terms.
+
+  To do so, attach the following notices to the program.  It is safest
+to attach them to the start of each source file to most effectively
+convey the exclusion of warranty; and each file should have at least
+the "copyright" line and a pointer to where the full notice is found.
+
+    <one line to give the program's name and a brief idea of what it does.>
+    Copyright (C) <year>  <name of author>
+
+    This program is free software; you can redistribute it and/or modify
+    it under the terms of the GNU General Public License as published by
+    the Free Software Foundation; either version 2 of the License, or
+    (at your option) any later version.
+
+    This program is distributed in the hope that it will be useful,
+    but WITHOUT ANY WARRANTY; without even the implied warranty of
+    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+    GNU General Public License for more details.
+
+    You should have received a copy of the GNU General Public License along
+    with this program; if not, write to the Free Software Foundation, Inc.,
+    51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+
+Also add information on how to contact you by electronic and paper mail.
+
+If the program is interactive, make it output a short notice like this
+when it starts in an interactive mode:
+
+    Gnomovision version 69, Copyright (C) year name of author
+    Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
+    This is free software, and you are welcome to redistribute it
+    under certain conditions; type `show c' for details.
+
+The hypothetical commands `show w' and `show c' should show the appropriate
+parts of the General Public License.  Of course, the commands you use may
+be called something other than `show w' and `show c'; they could even be
+mouse-clicks or menu items--whatever suits your program.
+
+You should also get your employer (if you work as a programmer) or your
+school, if any, to sign a "copyright disclaimer" for the program, if
+necessary.  Here is a sample; alter the names:
+
+  Yoyodyne, Inc., hereby disclaims all copyright interest in the program
+  `Gnomovision' (which makes passes at compilers) written by James Hacker.
+
+  <signature of Ty Coon>, 1 April 1989
+  Ty Coon, President of Vice
+
+This General Public License does not permit incorporating your program into
+proprietary programs.  If your program is a subroutine library, you may
+consider it more useful to permit linking proprietary applications with the
+library.  If this is what you want to do, use the GNU Lesser General
+Public License instead of this License.
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/scripts/svn_load_dirs.pl	Mon Aug 04 22:43:48 2008 +0000
@@ -0,0 +1,2043 @@
+#!/usr/bin/perl -w
+
+# $HeadURL$
+# $LastChangedDate$
+# $LastChangedBy$
+# $LastChangedRevision$
+
+$| = 1;
+
+use strict;
+use Carp;
+use Cwd;
+use Digest::MD5  2.20;
+use File::Copy   2.03;
+use File::Find;
+use File::Path   1.0404;
+use File::Temp   0.12   qw(tempdir tempfile);
+use Getopt::Long 2.25;
+use Text::Wrap;
+use URI          1.17;
+use English;
+
+$Text::Wrap::columns = 72;
+
+# Specify the location of the svn command.
+my $svn = '/usr/bin/svn';
+
+# Process the command line options.
+
+# The base URL for the portion of the repository to work in.  Note
+# that this does not have to be the root of the subversion repository,
+# it can point to a subdirectory in the repository.
+my $repos_base_url;
+
+# The path, relative to the repository base URL, of the directory to
+# load the input directories into.
+my $repos_load_rel_path;
+
+# To specify where tags, which are simply copies of the imported
+# directory, should be placed relative to the repository base URL, use
+# the -t command line option.  This value must contain regular
+# expressions that match portions of the input directory names to
+# create a unique tag for each input directory.  The regular
+# expressions are surrounded by a specified character to distinguish
+# the regular expression from the normal directory path.
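+#
+# Illustrative example (not part of the original script): with the
+# separator character '@' used below, a tag location such as
+#   -t 'tags/project-@\d+\.\d+@'
+# would have the regular expression between the '@' characters matched
+# against an input directory name such as 'project-1.2', yielding a
+# tag such as 'tags/project-1.2'.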
+my $opt_import_tag_location;
+
+# Do not ask for any user input.  Just go ahead and do everything.
+my $opt_no_user_input;
+
+# Do not automatically set the svn:executable property based on the
+# file's exe bit.
+my $opt_no_auto_exe;
+
+# Username to use for commits.
+my $opt_svn_username;
+
+# Password to use for commits.
+my $opt_svn_password;
+
+# Verbosity level.
+my $opt_verbose;
+
+# Path to already checked-out working copy.
+my $opt_existing_wc_dir;
+
+# List of filename patterns to ignore (as in .subversion/config's
+# "global-ignores" option).
+my $opt_glob_ignores;
+
+# This is the character used to separate regular expressions occurring
+# in the tag directory path from the path itself.
+my $REGEX_SEP_CHAR = '@';
+
+# This specifies a configuration file that contains a list of regular
+# expressions to check against a file and the properties to set on
+# matching files.
+my $property_config_filename;
+
+GetOptions('no_user_input'           => \$opt_no_user_input,
+           'no_auto_exe'             => \$opt_no_auto_exe,
+           'property_cfg_filename=s' => \$property_config_filename,
+           'svn_password=s'          => \$opt_svn_password,
+           'svn_username=s'          => \$opt_svn_username,
+           'tag_location=s'          => \$opt_import_tag_location,
+           'verbose+'                => \$opt_verbose,
+           'wc=s'                    => \$opt_existing_wc_dir,
+           'glob_ignores=s'          => \$opt_glob_ignores)
+  or &usage;
+&usage("$0: too few arguments") if @ARGV < 2;
+
+$repos_base_url      = shift;
+$repos_load_rel_path = shift;
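+
+# Illustrative invocation (hypothetical URL and paths, not from the
+# original script):
+#   svn_load_dirs.pl http://svn.example.com/repos/vendor libfoo/current \
+#       /tmp/libfoo-1.1 /tmp/libfoo-1.2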
+
+# Check that the repository base URL and the import directories do not
+# contain any ..'s.
+if ($repos_base_url =~ /\.{2}/)
+  {
+    die "$0: repos base URL $repos_base_url cannot contain ..'s.\n";
+  }
+if ($repos_load_rel_path =~ /\.{2}/)
+  {
+    die "$0: repos import relative directory path $repos_load_rel_path ",
+        "cannot contain ..'s.\n";
+  }
+
+# If there are no directories listed on the command line, then the
+# directories are read from standard input.  In this case, the
+# -no_user_input command line option must be specified.
+if (!@ARGV and !$opt_no_user_input)
+  {
+    &usage("$0: must use -no_user_input if no dirs listed on command line.");
+  }
+
+# The tag option cannot be used when directories are read from
+# standard input because tags may collide and no user input can be
+# taken to verify that the input is ok.
+if (!@ARGV and $opt_import_tag_location)
+  {
+    &usage("$0: cannot use -tag_location when dirs are read from stdin.");
+  }
+
+# If the tag directory is set, then the import directory cannot be '.'.
+if (defined $opt_import_tag_location and $repos_load_rel_path eq '.')
+  {
+    &usage("$0: cannot set import_dir to '.' and use -t command line option.");
+  }
+
+# Set the svn command line options that are used anytime svn connects
+# to the repository.
+my @svn_use_repos_cmd_opts;
+&set_svn_use_repos_cmd_opts($opt_svn_username, $opt_svn_password);
+
+# Check that the tag directories do not contain any ..'s.  Also, the
+# import and tag directories cannot be absolute.
+if (defined $opt_import_tag_location and $opt_import_tag_location =~ /\.{2}/)
+  {
+    die "$0: repos tag relative directory path $opt_import_tag_location ",
+        "cannot contain ..'s.\n";
+  }
+if ($repos_load_rel_path =~ m|^/|)
+  {
+    die "$0: repos import relative directory path $repos_load_rel_path ",
+        "cannot start with /.\n";
+  }
+if (defined $opt_import_tag_location and $opt_import_tag_location =~ m|^/|)
+  {
+    die "$0: repos tagrelative directory path $opt_import_tag_location ",
+        "cannot start with /.\n";
+  }
+
+if (defined $opt_existing_wc_dir)
+  {
+    unless (-e $opt_existing_wc_dir)
+      {
+        die "$0: working copy '$opt_existing_wc_dir' does not exist.\n";
+      }
+
+    unless (-d _)
+      {
+        die "$0: working copy '$opt_existing_wc_dir' is not a directory.\n";
+      }
+
+    unless (-d "$opt_existing_wc_dir/.svn")
+      {
+        die "$0: working copy '$opt_existing_wc_dir' does not have .svn ",
+            "directory.\n";
+      }
+
+    $opt_existing_wc_dir = Cwd::abs_path($opt_existing_wc_dir);
+  }
+
+# If no glob_ignores were specified, try to deduce them from the
+# config file, or use the default below.
+my $ignores_str =
+    '*.o *.lo *.la #*# .*.rej *.rej .*~ *~ .#* .DS_Store';
+
+if ( defined $opt_glob_ignores)
+  {
+    $ignores_str = $opt_glob_ignores;
+  }
+elsif ( -f "$ENV{HOME}/.subversion/config" )
+  {
+    open my $conf, "$ENV{HOME}/.subversion/config";
+    while (<$conf>)
+      {
+        if ( /^global-ignores\s*=\s*(.*?)\s*$/ )
+          {
+	    $ignores_str = $1;
+            last;
+          }
+      }
+  }
+
+my @glob_ignores = map
+                     {
+                       s/\./\\\./g; s/\*/\.\*/g; "^$_\$";
+                     } split(/\s+/, $ignores_str);
+unshift @glob_ignores, '\.svn$';
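+# (Illustrative, not part of the original script: a glob such as '*.o'
+# is converted to the anchored regular expression '^.*\.o$'.)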
+
+# Convert the string URL into a URI object.
+$repos_base_url    =~ s|/*$||;
+my $repos_base_uri = URI->new($repos_base_url);
+
+# Check that $repos_load_rel_path is not a directory here implying
+# that a command line option was forgotten.
+if ($repos_load_rel_path ne '.' and -d $repos_load_rel_path)
+  {
+    die "$0: import_dir '$repos_load_rel_path' is a directory.\n";
+  }
+
+# The remaining command line arguments should be directories.  Check
+# that they all exist and that there are no duplicates.
+if (@ARGV)
+  {
+    my %dirs;
+    foreach my $dir (@ARGV)
+      {
+        unless (-e $dir)
+          {
+            die "$0: directory '$dir' does not exist.\n";
+          }
+
+        unless (-d _)
+          {
+            die "$0: directory '$dir' is not a directory.\n";
+          }
+
+        if ($dirs{$dir})
+          {
+            die "$0: directory '$dir' is listed more than once on command ",
+                "line.\n";
+          }
+        $dirs{$dir} = 1;
+      }
+  }
+
+# Create the tag locations and print them for the user to review.
+# Check that there are no duplicate tags.
+my %load_tags;
+if (@ARGV and defined $opt_import_tag_location)
+  {
+    my %seen_tags;
+
+    foreach my $load_dir (@ARGV)
+      {
+        my $load_tag = &get_tag_dir($load_dir);
+
+        print "Directory $load_dir will be tagged as $load_tag\n";
+
+        if ($seen_tags{$load_tag})
+          {
+            die "$0: duplicate tag generated.\n";
+          }
+        $seen_tags{$load_tag} = 1;
+
+        $load_tags{$load_dir} = $load_tag;
+      }
+
+    exit 0 unless &get_answer("Please examine identified tags.  Are they " .
+                              "acceptable? (Y/n) ", 'ny', 1);
+    print "\n";
+  }
+
+# Load the property configuration filename, if one was specified, into
+# an array of hashes, where each hash contains a regular expression
+# and a property to apply to the file if the regular expression
+# matches.
+my @property_settings;
+if (defined $property_config_filename and length $property_config_filename)
+  {
+    open(CFG, $property_config_filename)
+      or die "$0: cannot open '$property_config_filename' for reading: $!\n";
+
+    my $ok = 1;
+
+    while (my $line = <CFG>)
+      {
+        next if $line =~ /^\s*$/;
+        next if $line =~ /^\s*#/;
+
+        # Split the input line into words taking into account that
+        # single or double quotes may define a single word with
+        # whitespace in it.  The format for the file is
+        # regex control property_name property_value
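+        # For illustration (hypothetical lines, not part of the
+        # original script), such a file might contain:
+        #   \.png$    break  svn:mime-type  image/png
+        #   \.[ch]$   cont   svn:eol-style  native
+        #   \.[ch]$   break  svn:keywords   'Author Date Id Revision'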
+        my @line = &split_line($line);
+        next if @line == 0;
+
+        unless (@line == 2 or @line == 4)
+          {
+            warn "$0: line $. of '$property_config_filename' has to have 2 ",
+                 "or 4 columns.\n";
+            $ok = 0;
+            next;
+          }
+        my ($regex, $control, $property_name, $property_value) = @line;
+
+        unless ($control eq 'break' or $control eq 'cont')
+          {
+            warn "$0: line $. of '$property_config_filename' has illegal ",
+                 "value for column 3 '$control', must be 'break' or 'cont'.\n";
+            $ok = 0;
+            next;
+          }
+
+        # Compile the regular expression.
+        my $re;
+        eval { $re = qr/$regex/i };
+        if ($@)
+          {
+            warn "$0: line $. of '$property_config_filename' regex '$regex' ",
+                 "does not compile:\n$@\n";
+            $ok = 0;
+            next;
+          }
+
+        push(@property_settings, {name    => $property_name,
+                                  value   => $property_value,
+                                  control => $control,
+                                  re      => $re});
+      }
+    close(CFG)
+      or warn "$0: error in closing '$property_config_filename' for ",
+              "reading: $!\n";
+
+    exit 1 unless $ok;
+  }
+
+# Check that the svn base URL works by running svn log on it.  Only
+# get the HEAD revision log message; there's no need to waste
+# bandwidth seeing all of the log messages.
+print "Checking that the base URL is a Subversion repository.\n";
+read_from_process($svn, 'log', '-r', 'HEAD',
+                  @svn_use_repos_cmd_opts, $repos_base_uri);
+print "\n";
+
+my $orig_cwd = cwd;
+
+# The first step is to determine the root of the svn repository.  Do
+# this with the svn log command.  Take the svn_url hostname and port
+# as the initial url and append to it successive portions of the final
+# path until svn log succeeds.
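+#
+# Illustrative example (not part of the original script): for a base
+# URL of http://svn.example.com/repos/project/vendor, the loop below
+# runs 'svn log -r HEAD' against http://svn.example.com, then
+# http://svn.example.com/repos, then http://svn.example.com/repos/project,
+# and so on, taking the first URL for which the command succeeds as the
+# repository root.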
+print "Finding the root URL of the Subversion repository.\n";
+my $repos_root_uri;
+my $repos_root_uri_path;
+my $repos_base_path_segment;
+{
+  my $r = $repos_base_uri->clone;
+  my @path_segments            = grep { length($_) } $r->path_segments;
+  my @repos_base_path_segments = @path_segments;
+  unshift(@path_segments, '');
+  $r->path('');
+  my @r_path_segments;
+
+  while (@path_segments)
+    {
+      $repos_root_uri_path = shift @path_segments;
+      push(@r_path_segments, $repos_root_uri_path);
+      $r->path_segments(@r_path_segments);
+      if (safe_read_from_pipe($svn, 'log', '-r', 'HEAD',
+                              @svn_use_repos_cmd_opts, $r) == 0)
+        {
+          $repos_root_uri = $r;
+          last;
+        }
+      shift @repos_base_path_segments;
+    }
+  $repos_base_path_segment = join('/', @repos_base_path_segments);
+}
+
+if ($repos_root_uri)
+  {
+    print "Determined that the svn root URL is $repos_root_uri.\n\n";
+  }
+else
+  {
+    die "$0: cannot determine root svn URL.\n";
+  }
+
+# Create a temporary directory for svn to work in.
+my $temp_dir = tempdir( "svn_load_dirs_XXXXXXXXXX", TMPDIR => 1 );
+
+# Put in a signal handler to clean up any temporary directories.
+sub catch_signal {
+  my $signal = shift;
+  warn "$0: caught signal $signal.  Quitting now.\n";
+  exit 1;
+}
+
+$SIG{HUP}  = \&catch_signal;
+$SIG{INT}  = \&catch_signal;
+$SIG{TERM} = \&catch_signal;
+$SIG{PIPE} = \&catch_signal;
+
+# Create an object that, when DESTROY'ed, will delete the temporary
+# directory.  The CLEANUP flag to tempdir should do this, but it
+# calls rmtree with 1 as the last argument, which takes extra security
+# measures that do not clean up the .svn directories.
+my $temp_dir_cleanup = Temp::Delete->new;
+
+# Determine the native end-of-line style for this system.  Do this in
+# the most portable way: by writing a file with a single \n in non-binary
+# mode and then reading the file in binary mode.
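+#
+# A minimal sketch of that technique (hypothetical; not the actual
+# determine_native_eol implementation, and $tmp is just some scratch
+# file path):
+#   open my $fh, '>', $tmp or die $!;  print $fh "\n";  close $fh;
+#   open $fh, '<', $tmp or die $!;     binmode $fh;
+#   my $eol = <$fh>;    # "\012" on Unix, "\015\012" on Windows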
+my $native_eol = &determine_native_eol;
+
+# Check that all the directories needed to load the input directories
+# into the repository exist.  If not, ask if they should be created.
+# For tags, do not create the tag directory itself; that is done by the
+# svn cp.
+{
+  print "Finding if any directories need to be created in repository.\n";
+
+  my @dirs_to_create;
+  my @urls_to_create;
+  my %seen_dir;
+  my @load_tags_without_last_segment;
+
+  # Assume that the last portion of the tag directory contains the
+  # version number and remove it from the directories to create,
+  # because the tag directory will be created by svn cp.
+  foreach my $load_tag (sort values %load_tags)
+    {
+      # Skip this tag if there is only one segment in its name.
+      my $index = rindex($load_tag, '/');
+      next if $index == -1;
+
+      # Trim off the last segment and record the result.
+      push(@load_tags_without_last_segment, substr($load_tag, 0, $index));
+    }
+  
+  foreach my $dir ($repos_load_rel_path, @load_tags_without_last_segment)
+    {
+      next unless length $dir;
+      my $d = '';
+      foreach my $segment (split('/', $dir))
+        {
+          $d = length $d ? "$d/$segment" : $segment;
+          my $url = "$repos_base_url/$d";
+          unless ($seen_dir{$d})
+            {
+              $seen_dir{$d} = 1;
+              if (safe_read_from_pipe($svn, 'log', '-r', 'HEAD',
+                                      @svn_use_repos_cmd_opts, $url) != 0)
+                {
+                  push(@dirs_to_create, $d);
+                  push(@urls_to_create, $url);
+                }
+            }
+        }
+    }
+
+  if (@dirs_to_create)
+    {
+      print "The following directories do not exist and need to exist:\n";
+      foreach my $dir (@dirs_to_create)
+        {
+          print "  $dir\n";
+        }
+      exit 0 unless &get_answer("You must add them now to load the " .
+                                "directories.  Continue (Y/n)? ", 'ny', 1);
+
+      my $message = "Create directories to load project into.\n\n";
+
+      foreach my $dir (@dirs_to_create)
+        {
+          if (length $repos_base_path_segment)
+            {
+              $message .= "* $repos_base_path_segment/$dir: New directory.\n";
+            }
+          else
+            {
+              $message .= "* $dir: New directory.\n";
+            }
+        }
+      $message = wrap('', '  ', $message);
+
+      read_from_process($svn, 'mkdir', @svn_use_repos_cmd_opts,
+                        '-m', $message, @urls_to_create);
+    }
+  else
+    {
+      print "No directories need to be created to prepare repository.\n";
+    }
+}
+
+# Either checkout a new working copy from the repository or use an
+# existing working copy.
+if (defined $opt_existing_wc_dir)
+  {
+    # Update an already existing working copy.
+    print "Not checking out anything; using existing working directory at\n";
+    print "$opt_existing_wc_dir\n";
+
+    chdir($opt_existing_wc_dir)
+      or die "$0: cannot chdir '$opt_existing_wc_dir': $!\n";
+
+    read_from_process($svn, 'update', @svn_use_repos_cmd_opts);
+  }
+else
+  {
+    # Check out the svn repository starting at the svn URL into a
+    # fixed directory name.
+    my $checkout_dir_name = 'my_import_wc';
+
+    # Check out only the directory being imported to; otherwise the
+    # checkout of the entire base URL can be very large if it contains
+    # a large number of tags.
+    my $checkout_url;
+    if ($repos_load_rel_path eq '.')
+      {
+        $checkout_url = $repos_base_url;
+      }
+    else
+      {
+        $checkout_url = "$repos_base_url/$repos_load_rel_path";
+      }
+
+    print "Checking out $checkout_url into $temp_dir/$checkout_dir_name\n";
+
+    chdir($temp_dir)
+      or die "$0: cannot chdir '$temp_dir': $!\n";
+
+    read_from_process($svn, 'checkout',
+                      @svn_use_repos_cmd_opts,
+                      $checkout_url, $checkout_dir_name);
+
+    chdir($checkout_dir_name)
+      or die "$0: cannot chdir '$checkout_dir_name': $!\n";
+  }
+
+# At this point, the current working directory is the top level
+# directory of the working copy.  Record the absolute path to this
+# location because the script will chdir back here later on.
+my $wc_import_dir_cwd = cwd;
+
+# Set up the names for the path to the import and tag directories.
+my $repos_load_abs_path;
+if ($repos_load_rel_path eq '.')
+  {
+    $repos_load_abs_path = length($repos_base_path_segment) ?
+                           $repos_base_path_segment : "/";
+  }
+else
+  {
+    $repos_load_abs_path = length($repos_base_path_segment) ?
+                           "$repos_base_path_segment/$repos_load_rel_path" :
+                           $repos_load_rel_path;
+  }
+
+# Now go through each source directory and copy each file from the
+# source directory to the target directory.  For new target files, add
+# them to svn.  For files that no longer exist, delete them.
+my $print_rename_message = 1;
+my @load_dirs            = @ARGV;
+while (defined (my $load_dir = &get_next_load_dir))
+  {
+    my $load_tag = $load_tags{$load_dir};
+
+    if (defined $load_tag)
+      {
+        print "\nLoading $load_dir and will save in tag $load_tag.\n";
+      }
+    else
+      {
+        print "\nLoading $load_dir.\n";
+      }
+
+    # The first hash is keyed by the old name in a rename and the
+    # second by the new name.  The last variable contains a list of
+    # old and new filenames in a rename.
+    my %rename_from_files;
+    my %rename_to_files;
+    my @renamed_filenames;
+
+    unless ($opt_no_user_input)
+      {
+        my $repeat_loop;
+        do
+          {
+            $repeat_loop = 0;
+
+            my %add_files;
+            my %del_files;
+
+            # Get the list of files and directories in the repository
+            # working copy.  This hash is called %del_files because
+            # each file or directory will be deleted from the hash
+            # using the list of files and directories in the source
+            # directory, leaving the files and directories that need
+            # to be deleted.
+            %del_files = &recursive_ls_and_hash($wc_import_dir_cwd);
+
+            # This anonymous subroutine finds all the files and
+            # directories in the directory to load.  It notes the file
+            # type and for each file found, it deletes it from
+            # %del_files.
+            my $wanted = sub
+              {
+                s#^\./##;
+                return if $_ eq '.';
+
+                my $source_path = $_;
+                my $dest_path   = "$wc_import_dir_cwd/$_";
+
+                my ($source_type) = &file_info($source_path);
+                my ($dest_type)   = &file_info($dest_path);
+
+                # Fail if the destination type exists but is of a
+                # different type of file than the source type.
+                if ($dest_type ne '0' and $source_type ne $dest_type)
+                  {
+                    die "$0: does not handle changing source and destination ",
+                        "type for '$source_path'.\n";
+                  }
+
+                if ($source_type ne 'd' and
+                    $source_type ne 'f' and
+                    $source_type ne 'l')
+                  {
+                    warn "$0: skipping loading file '$source_path' of type ",
+                         "'$source_type'.\n";
+                    unless ($opt_no_user_input)
+                      {
+                        print STDERR "Press return to continue: ";
+                        <STDIN>;
+                      }
+                    return;
+                  }
+
+                unless (defined delete $del_files{$source_path})
+                  {
+                    $add_files{$source_path}{type} = $source_type;
+                  }
+              };
+
+            # Now change into the directory containing the files to
+            # load.  First change to the original directory where this
+            # script was run so that if the specified directory is a
+            # relative directory path, then the script can change into
+            # it.
+            chdir($orig_cwd)
+              or die "$0: cannot chdir '$orig_cwd': $!\n";
+            chdir($load_dir)
+              or die "$0: cannot chdir '$load_dir': $!\n";
+
+            find({no_chdir   => 1,
+                  preprocess => sub { sort { $b cmp $a }
+                                      grep { $_ !~ /^[._]svn$/ } @_ },
+                  wanted     => $wanted
+                 }, '.');
+
+            # At this point %add_files contains the list of new files
+            # and directories to be created in the working copy tree
+            # and %del_files contains the files and directories that
+            # need to be deleted.  Because there may be renames that
+            # have taken place, give the user the opportunity to
+            # rename any deleted files and directories to ones being
+            # added.
+            my @add_files = sort keys %add_files;
+            my @del_files = sort keys %del_files;
+
+            # Because the source code management system may keep the
+            # original renamed file or directory in the working copy
+            # until a commit, remove them from the list of deleted
+            # files or directories.
+            &filter_renamed_files(\@del_files, \%rename_from_files);
+
+            # Now change into the working copy directory in case any
+            # renames need to be performed.
+            chdir($wc_import_dir_cwd)
+              or die "$0: cannot chdir '$wc_import_dir_cwd': $!\n";
+
+            # Only do renames if there are both added and deleted
+            # files and directories.
+            if (@add_files and @del_files)
+              {
+                my $max = @add_files > @del_files ? @add_files : @del_files;
+
+                # Print the files that have been added and deleted.
+                # Find the deleted file with the longest name and use
+                # that for the width of the filename column.  Add one
+                # to the filename width to let the directory /
+                # character be appended to a directory name.
+                my $line_number_width = 4;
+                my $filename_width    = 0;
+                foreach my $f (@del_files)
+                  {
+                    my $l = length($f);
+                    $filename_width = $l if $l > $filename_width;
+                  }
+                ++$filename_width;
+                my $printf_format = "%${line_number_width}d";
+
+                if ($print_rename_message)
+                  {
+                    $print_rename_message = 0;
+                    print "\n",
+                      "The following table lists files and directories that\n",
+                      "exist in either the Subversion repository or the\n",
+                      "directory to be imported but not both.  You now have\n",
+                      "the opportunity to match them up as renames instead\n",
+                      "of deletes and adds.  This is a Good Thing as it'll\n",
+                      "make the repository take less space.\n\n",
+                      "The left column lists files and directories that\n",
+                      "exist in the Subversion repository and do not exist\n",
+                      "in the directory being imported.  The right column\n",
+                      "lists files and directories that exist in the\n",
+                      "directory being imported.  Match up a deleted item\n",
+                      "from the left column with an added item from the\n",
+                      "right column.  Note the line numbers on the left\n",
+                      "which you type into this script to have a rename\n",
+                      "performed.\n";
+                  }
+
+                # Sort the added and deleted files and directories by
+                # the lowercase versions of their basenames instead of
+                # their complete path, which makes finding files that
+                # were moved into different directories easier to
+                # match up.
+                @add_files = map { $_->[0] }
+                             sort { $a->[1] cmp $b->[1] }
+                             map { [$_->[0], lc($_->[1])] }
+                             map { [$_, m#([^/]+)$#] }
+                             @add_files;
+                @del_files = map { $_->[0] }
+                             sort { $a->[1] cmp $b->[1] }
+                             map { [$_->[0], lc($_->[1])] }
+                             map { [$_, m#([^/]+)$#] }
+                             @del_files;
+
+              RELIST:
+
+                for (my $i=0; $i<$max; ++$i)
+                  {
+                    my $add_filename = '';
+                    my $del_filename = '';
+                    if ($i < @add_files)
+                      {
+                        $add_filename = $add_files[$i];
+                        if ($add_files{$add_filename}{type} eq 'd')
+                          {
+                            $add_filename .= '/';
+                          }
+                      }
+                    if ($i < @del_files)
+                      {
+                        $del_filename = $del_files[$i];
+                        if ($del_files{$del_filename}{type} eq 'd')
+                          {
+                            $del_filename .= '/';
+                          }
+                      }
+
+                    if ($i % 22 == 0)
+                      {
+                        print
+                          "\n",
+                          " " x $line_number_width,
+                          " ",
+                          "Deleted", " " x ($filename_width-length("Deleted")),
+                          " ",
+                          "Added\n";
+                      }
+
+                    printf $printf_format, $i;
+                    print  " ", $del_filename,
+                           "_" x ($filename_width - length($del_filename)),
+                           " ", $add_filename, "\n";
+
+                    if (($i+1) % 22 == 0)
+                      {
+                        unless (&get_answer("Continue printing (Y/n)? ",
+                                            'ny', 1))
+                          {
+                            last;
+                          }
+                      }
+                  }
+
+                # Get the feedback from the user.
+                my $line;
+                my $add_filename;
+                my $add_index;
+                my $del_filename;
+                my $del_index;
+                my $got_line = 0;
+                do {
+                  print "Enter two indexes for each column to rename, ",
+                        "(R)elist, or (F)inish: ";
+                  $line = <STDIN>;
+                  $line = '' unless defined $line;
+                  if ($line =~ /^R$/i )
+                    {
+                      goto RELIST;
+                    }
+                  
+                  if ($line =~ /^F$/i)
+                    {
+                      $got_line = 1;
+                    }
+                  elsif ($line =~ /^(\d+)\s+(\d+)$/)
+                    {
+                      print "\n";
+
+                      $del_index = $1;
+                      $add_index = $2;
+                      if ($del_index >= @del_files)
+                        {
+                          print "Delete index $del_index is larger than ",
+                                "maximum index of ", scalar @del_files - 1,
+                                ".\n";
+                          $del_index = undef;
+                        }
+                      if ($add_index >= @add_files)
+                        {
+                          print "Add index $add_index is larger than maximum ",
+                                "index of ", scalar @add_files - 1, ".\n";
+                          $add_index = undef;
+                        }
+                      $got_line = defined $del_index && defined $add_index;
+
+                      # Check that the file or directory to be renamed
+                      # has the same file type.
+                      if ($got_line)
+                        {
+                          $add_filename = $add_files[$add_index];
+                          $del_filename = $del_files[$del_index];
+                          if ($add_files{$add_filename}{type} ne
+                              $del_files{$del_filename}{type})
+                            {
+                              print "File types for $del_filename and ",
+                                    "$add_filename differ.\n";
+                              $got_line = undef;
+                            }
+                        }
+                    }
+                } until ($got_line);
+
+                if ($line !~ /^F$/i)
+                  {
+                    print "Renaming $del_filename to $add_filename.\n";
+
+                    $repeat_loop = 1;
+
+                    # Because subversion cannot rename the same file
+                    # or directory twice, which includes doing a
+                    # rename of a file in a directory that was
+                    # previously renamed, a commit has to be
+                    # performed.  Check if the file or directory being
+                    # renamed now would cause such a problem and
+                    # commit if so.
+                    my $do_commit_now = 0;
+                    foreach my $rename_to_filename (keys %rename_to_files)
+                      {
+                        if (contained_in($del_filename,
+                                         $rename_to_filename,
+                                         $rename_to_files{$rename_to_filename}{type}))
+                          {
+                            $do_commit_now = 1;
+                            last;
+                          }
+                      }
+
+                    if ($do_commit_now)
+                      {
+                        print "Now committing previously run renames.\n";
+                        &commit_renames($load_dir,
+                                        \@renamed_filenames,
+                                        \%rename_from_files,
+                                        \%rename_to_files);
+                      }
+
+                    push(@renamed_filenames, $del_filename, $add_filename);
+                    {
+                      my $d = $del_files{$del_filename};
+                      $rename_from_files{$del_filename} = $d;
+                      $rename_to_files{$add_filename}   = $d;
+                    }
+
+                    # Check that any required directories to do the
+                    # rename exist.
+                    my @add_segments = split('/', $add_filename);
+                    pop(@add_segments);
+                    my $add_dir = '';
+                    my @add_dirs;
+                    foreach my $segment (@add_segments)
+                      {
+                        $add_dir = length($add_dir) ? "$add_dir/$segment" :
+                                                      $segment;
+                        unless (-d $add_dir)
+                          {
+                            push(@add_dirs, $add_dir);
+                          }
+                      }
+
+                    if (@add_dirs)
+                      {
+                        read_from_process($svn, 'mkdir', @add_dirs);
+                      }
+
+                    read_from_process($svn, 'mv',
+                                      $del_filename, $add_filename);
+                  }
+              }
+          } while ($repeat_loop);
+      }
+
+    # If there are any renames that have not been committed, then do
+    # that now.
+    if (@renamed_filenames)
+      {
+        &commit_renames($load_dir,
+                        \@renamed_filenames,
+                        \%rename_from_files,
+                        \%rename_to_files);
+      }
+
+    # At this point all renames have been performed.  Now get the
+    # final list of files and directories in the working copy
+    # directory.  The %add_files hash will contain the list of files
+    # and directories to add to the working copy and %del_files starts
+    # with all the files already in the working copy and gets files
+    # removed that are in the imported directory, which results in a
+    # list of files that should be deleted.  %upd_files holds the list
+    # of files that have been updated.
+    my %add_files;
+    my %del_files = &recursive_ls_and_hash($wc_import_dir_cwd);
+    my %upd_files;
+
+    # This anonymous subroutine copies files from the source directory
+    # to the working copy directory.
+    my $wanted = sub
+      {
+        s#^\./##;
+        return if $_ eq '.';
+
+        my $source_path = $_;
+        my $dest_path   = "$wc_import_dir_cwd/$_";
+
+        my ($source_type, $source_is_exe) = &file_info($source_path);
+        my ($dest_type)                   = &file_info($dest_path);
+
+        return if ($source_type ne 'd' and
+                   $source_type ne 'f' and
+                   $source_type ne 'l');
+
+        # Fail if the destination type exists but is of a different
+        # type of file than the source type.
+        if ($dest_type ne '0' and $source_type ne $dest_type)
+          {
+            die "$0: does not handle changing source and destination type ",
+                "for '$source_path'.\n";
+          }
+
+        # Determine if the file is being added or is an update to an
+        # already existing file using the file's digest.
+        my $del_info = delete $del_files{$source_path};
+        if (defined $del_info)
+          {
+            if (defined (my $del_digest = $del_info->{digest}))
+              {
+                my $new_digest = &digest_hash_file($source_path);
+                if ($new_digest ne $del_digest)
+                  {
+                    print "U   $source_path\n";
+                    $upd_files{$source_path} = $del_info;
+                  }
+              }
+          }
+        else
+          {
+            print "A   $source_path\n";
+            $add_files{$source_path}{type} = $source_type;
+
+            # Create an array reference to hold the list of properties
+            # to apply to this object.
+            unless (defined $add_files{$source_path}{properties})
+              {
+                $add_files{$source_path}{properties} = [];
+              }
+
+            # Go through the list of properties for a match on this
+            # file or directory and if there is a match, then apply
+            # the property to it.
+            foreach my $property (@property_settings)
+              {
+                my $re = $property->{re};
+                if ($source_path =~ $re)
+                  {
+                    my $property_name  = $property->{name};
+                    my $property_value = $property->{value};
+
+                    # The property value may not be set in the
+                    # configuration file, since the user may just want
+                    # to set the control flag.
+                    if (defined $property_name and defined $property_value)
+                      {
+                        # Ignore properties that do not apply to
+                        # directories.
+                        if ($source_type eq 'd')
+                          {
+                            if ($property_name eq 'svn:eol-style' or
+                                $property_name eq 'svn:executable' or
+                                $property_name eq 'svn:keywords' or
+                                $property_name eq 'svn:mime-type')
+                              {
+                                next;
+                              }
+                          }
+
+                        # Ignore properties that do not apply to
+                        # files.
+                        if ($source_type eq 'f')
+                          {
+                            if ($property_name eq 'svn:externals' or
+                                $property_name eq 'svn:ignore')
+                              {
+                                next;
+                              }
+                          }
+
+                        print "Adding to '$source_path' property ",
+                              "'$property_name' with value ",
+                              "'$property_value'.\n";
+
+                        push(@{$add_files{$source_path}{properties}},
+                             $property);
+                      }
+
+                    last if $property->{control} eq 'break';
+                  }
+              }
+          }
+
+        # Add svn:executable to files that have their executable bit
+        # set.
+        if ($source_is_exe and !$opt_no_auto_exe)
+          {
+            print "Adding to '$source_path' property 'svn:executable' with ",
+                  "value '*'.\n";
+            my $property = {name => 'svn:executable', value => '*'};
+            push (@{$add_files{$source_path}{properties}},
+                  $property);
+          }
+
+        # Now make sure the file or directory in the source directory
+        # exists in the repository.
+        if ($source_type eq 'd')
+          {
+            if ($dest_type eq '0')
+              {
+                mkdir($dest_path)
+                  or die "$0: cannot mkdir '$dest_path': $!\n";
+              }
+          }
+        elsif ($source_type eq 'l')
+          {
+            my $link_target = readlink($source_path)
+              or die "$0: cannot readlink '$source_path': $!\n";
+            if ($dest_type eq 'l')
+              {
+                my $old_target = readlink($dest_path)
+                  or die "$0: cannot readlink '$dest_path': $!\n";
+                return if ($old_target eq $link_target);
+                unlink($dest_path)
+                  or die "$0: unlink '$dest_path' failed: $!\n";
+              }
+            symlink($link_target, $dest_path)
+              or die "$0: cannot symlink '$dest_path' to '$link_target': $!\n";
+          }
+        elsif ($source_type eq 'f')
+          {
+            # Only copy the file if the digests do not match.
+            if ($add_files{$source_path} or $upd_files{$source_path})
+              {
+                copy($source_path, $dest_path)
+                  or die "$0: copy '$source_path' to '$dest_path': $!\n";
+              }
+          }
+        else
+          {
+            die "$0: does not handle copying files of type '$source_type'.\n";
+          }
+      };
+
+    # Now change into the directory containing the files to load.
+    # First change to the original directory where this script was run
+    # so that if the specified directory is a relative directory path,
+    # then the script can change into it.
+    chdir($orig_cwd)
+      or die "$0: cannot chdir '$orig_cwd': $!\n";
+    chdir($load_dir)
+      or die "$0: cannot chdir '$load_dir': $!\n";
+
+    find({no_chdir   => 1,
+          preprocess => sub { sort { $b cmp $a }
+                              grep { $_ !~ /^[._]svn$/ } @_ },
+          wanted     => $wanted
+         }, '.');
+
+    # The files and directories in %del_files are the ones that need
+    # to be deleted.  Because svn will return an error if a file or
+    # directory is deleted inside a directory that is itself
+    # subsequently deleted, first find all directories and remove from
+    # the list any files and directories contained inside them.  Work
+    # through the list repeatedly, going from short to long names, so
+    # that directories containing other files and directories are
+    # handled first.
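+    # For example (editor's illustration with hypothetical paths): if
+    # %del_files contains 'docs', 'docs/a.txt' and 'docs/img/b.png',
+    # and 'docs' is a directory, then 'docs/a.txt' and 'docs/img/b.png'
+    # are dropped from the list here and only 'docs' is passed to
+    # 'svn rm' later, avoiding errors from deleting paths inside an
+    # already-deleted directory.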
+    my $repeat_loop;
+    do
+      {
+        $repeat_loop = 0;
+        my @del_files = sort {length($a) <=> length($b) || $a cmp $b}
+                        keys %del_files;
+        &filter_renamed_files(\@del_files, \%rename_from_files);
+        foreach my $file (@del_files)
+          {
+            if ($del_files{$file}{type} eq 'd')
+              {
+                my $dir        = "$file/";
+                my $dir_length = length($dir);
+                foreach my $f (@del_files)
+                  {
+                    next if $file eq $f;
+                    if (length($f) >= $dir_length and
+                        substr($f, 0, $dir_length) eq $dir)
+                      {
+                        print "d   $f\n";
+                        delete $del_files{$f};
+                        $repeat_loop = 1;
+                      }
+                  }
+
+                # If any files and/or directories inside a directory
+                # that will be deleted were removed from %del_files,
+                # then restart the entire loop, because one or more
+                # keys have been deleted from %del_files.  Just as
+                # important, do not leave this loop when no deletions
+                # have been done; otherwise later directories that may
+                # contain files and directories to be deleted would
+                # never be processed.
+                last if $repeat_loop;
+              }
+          }
+      } while ($repeat_loop);
+
+    # What is left are files that are not inside any directory to be
+    # deleted, plus the directories to be deleted themselves.  Deeper
+    # files and directories must be deleted first.  Because we have a
+    # hash keyed by the remaining files and directories to delete,
+    # instead of trying to figure out which directories and files are
+    # contained in other directories, just reverse sort by path length
+    # and then alphabetically.
+    my @del_files = sort {length($b) <=> length($a) || $a cmp $b }
+                    keys %del_files;
+    &filter_renamed_files(\@del_files, \%rename_from_files);
+    foreach my $file (@del_files)
+      {
+        print "D   $file\n";
+      }
+
+    # Now change back to the trunk directory and run the svn commands.
+    chdir($wc_import_dir_cwd)
+      or die "$0: cannot chdir '$wc_import_dir_cwd': $!\n";
+
+    # If any of the added files have the svn:eol-style property set,
+    # then pass -b to diff; otherwise diff may fail because the line
+    # endings have changed and the source file and the file in the
+    # repository will not be byte-for-byte identical.
+    my @diff_ignore_space_changes;
+
+    if (keys %add_files)
+      {
+        my @add_files = sort {length($a) <=> length($b) || $a cmp $b}
+                        keys %add_files;
+        my $target_filename = &make_targets_file(@add_files);
+        read_from_process($svn, 'add', '-N', '--targets', $target_filename);
+        unlink($target_filename);
+
+        # Add properties on the added files.
+        foreach my $add_file (@add_files)
+          {
+            foreach my $property (@{$add_files{$add_file}{properties}})
+              {
+                my $property_name  = $property->{name};
+                my $property_value = $property->{value};
+
+                if ($property_name eq 'svn:eol-style')
+                  {
+                    @diff_ignore_space_changes = ('-b');
+                  }
+                
+                # Write the value to a temporary file in case it's multi-line
+                my ($handle, $tmpfile) = tempfile(DIR => $temp_dir);
+                print $handle $property_value;
+                close($handle);
+
+                read_from_process($svn,
+                                  'propset',
+                                  $property_name,
+                                  '--file',
+                                  $tmpfile,
+                                  $add_file);
+              }
+          }
+      }
+    if (@del_files)
+      {
+        my $target_filename = &make_targets_file(@del_files);
+        read_from_process($svn, 'rm', '--targets', $target_filename);
+        unlink($target_filename);
+      }
+
+    # Go through the list of updated files and check the svn:eol-style
+    # property.  If it is set to native, then convert all CR, CRLF and
+    # LF's in the file to the native end of line characters.  Also,
+    # modify diff's command line so that it will ignore the change in
+    # end of line style.
+    if (keys %upd_files)
+      {
+        my @upd_files = sort {length($a) <=> length($b) || $a cmp $b}
+                        keys %upd_files;
+        foreach my $upd_file (@upd_files)
+          {
+            # Always append @BASE to the filename in case it contains
+            # a @ character.  Otherwise the Subversion command line
+            # client will attempt to parse the characters after the @
+            # as a revision and most likely fail, or, if the
+            # characters after the @ happen to form a valid revision,
+            # it may retrieve the wrong information.  With @BASE
+            # appended, any preceding @'s are treated literally and
+            # the correct information is retrieved.
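+            # A quick illustration (editor's note, hypothetical name):
+            # for a file literally named 'notes@2007.txt', passing
+            # 'notes@2007.txt' alone would make svn try to parse
+            # '2007.txt' as a peg revision; 'notes@2007.txt@BASE'
+            # keeps the first '@' literal and pegs the lookup at the
+            # BASE revision.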
+            my @command = ($svn,
+                           'propget',
+                           'svn:eol-style',
+                           "$upd_file\@BASE");
+            my @lines = read_from_process(@command);
+            next unless @lines;
+            if (@lines > 1)
+              {
+                warn "$0: '@command' returned more than one line of output: ",
+                  "'@lines'.\n";
+                next;
+              }
+
+            my $eol_style = $lines[0];
+            if ($eol_style eq 'native')
+              {
+                @diff_ignore_space_changes = ('-b');
+                if (&convert_file_to_native_eol($upd_file))
+                  {
+                    print "Native eol-style conversion modified $upd_file.\n";
+                  }
+              }
+          }
+      }
+
+    my $message = wrap('', '', "Load $load_dir into $repos_load_abs_path.\n");
+    read_from_process($svn, 'commit',
+                      @svn_use_repos_cmd_opts,
+                      '-m', $message);
+
+    # If an update is not run now after a commit, then some file and
+    # directory paths will have older revisions associated with them
+    # and any future commits will fail because they are out of date.
+    read_from_process($svn, 'update', @svn_use_repos_cmd_opts);
+
+    # Now remove any files and directories to be deleted in the
+    # repository.
+    if (@del_files)
+      {
+        rmtree(\@del_files, 1, 0);
+      }
+
+    # Now make the tag by doing a copy in the svn repository itself.
+    if (defined $load_tag)
+      {
+        my $repos_tag_abs_path = length($repos_base_path_segment) ?
+                                 "$repos_base_path_segment/$load_tag" :
+                                 $load_tag;
+
+        my $from_url = $repos_load_rel_path eq '.' ?
+                       $repos_load_rel_path :
+                       "$repos_base_url/$repos_load_rel_path";
+        my $to_url   = "$repos_base_url/$load_tag";
+
+        $message     = wrap("",
+                            "",
+                            "Tag $repos_load_abs_path as " .
+                            "$repos_tag_abs_path.\n");
+        read_from_process($svn, 'cp', @svn_use_repos_cmd_opts,
+                          '-m', $message, $from_url, $to_url);
+
+        # Now check out the tag and run a recursive diff between the
+        # original source directory and the tag for a consistency
+        # check.
+        my $checkout_dir_name = "my_tag_wc_named_$load_tag";
+        print "Checking out $to_url into $temp_dir/$checkout_dir_name\n";
+
+        chdir($temp_dir)
+          or die "$0: cannot chdir '$temp_dir': $!\n";
+
+        read_from_process($svn, 'checkout',
+                          @svn_use_repos_cmd_opts,
+                          $to_url, $checkout_dir_name);
+
+        chdir($checkout_dir_name)
+          or die "$0: cannot chdir '$checkout_dir_name': $!\n";
+
+        chdir($orig_cwd)
+          or die "$0: cannot chdir '$orig_cwd': $!\n";
+        read_from_process('diff', '-u', @diff_ignore_space_changes,
+                          '-x', '.svn',
+                          '-r', $load_dir, "$temp_dir/$checkout_dir_name");
+      }
+  }
+
+exit 0;
+
+sub usage
+{
+  warn "@_\n" if @_;
+  die "usage: $0 [options] svn_url svn_import_dir [dir_v1 [dir_v2 [..]]]\n",
+      "  svn_url        is the file:// or http:// URL of the svn repository\n",
+      "  svn_import_dir is the path relative to svn_url where to load dirs\n",
+      "  dir_v1 ..      list dirs to import otherwise read from stdin\n",
+      "options are\n",
+      "  -no_user_input don't ask yes/no questions and assume yes answer\n",
+      "  -no_auto_exe   don't set svn:executable for executable files\n",
+      "  -p filename    table listing properties to apply to matching files\n",
+      "  -svn_username  username to perform commits as\n",
+      "  -svn_password  password to supply to svn commit\n",
+      "  -t tag_dir     create a tag copy in tag_dir, relative to svn_url\n",
+      "  -v             increase program verbosity, multiple -v's allowed\n",
+      "  -wc path       use the already checked-out working copy at path\n",
+      "                 instead of checkout out a fresh working copy\n",
+      "  -glob_ignores  List of filename patterns to ignore (as in svn's\n",
+      "                 global-ignores config option)\n";
+}
+
+# Get the next directory to load, either from the command line or from
+# standard input.
+my $get_next_load_dir_init = 0;
+my @get_next_load_dirs;
+sub get_next_load_dir
+{
+  if (@ARGV)
+    {
+      unless ($get_next_load_dir_init)
+        {
+          $get_next_load_dir_init = 1;
+          @get_next_load_dirs     = @ARGV;
+        }
+      return shift @get_next_load_dirs;
+    }
+
+  if ($opt_verbose)
+    {
+      print "Waiting for next directory to import on standard input:\n";
+    }
+  my $line = <STDIN>;
+
+  print "\n" if $opt_verbose;
+
+  chomp $line;
+  if ($line =~ m|(\S+)\s+(\S+)|)
+    {
+      $line = $1;
+      set_svn_use_repos_cmd_opts($2, $opt_svn_password);
+    }
+  $line;
+}
+
+# This constant stores the commonly used string to indicate that a
+# subroutine has been passed an incorrect number of arguments.
+use vars qw($INCORRECT_NUMBER_OF_ARGS);
+$INCORRECT_NUMBER_OF_ARGS = "passed incorrect number of arguments.\n";
+
+# Create a temporary file in the temporary directory and store the
+# arguments in it for use with the svn --targets command line option.
+# If any part of the file creation fails, exit the program, as there
+# is no workaround.  Use a unique incrementing number to name the
+# files.
+my $make_targets_file_counter;
+sub make_targets_file
+{
+  unless (@_)
+    {
+      confess "$0: make_targets_file $INCORRECT_NUMBER_OF_ARGS";
+    }
+
+  $make_targets_file_counter = 1 unless defined $make_targets_file_counter;
+
+  my $filename = sprintf "%s/targets.%05d",
+                 $temp_dir,
+                 $make_targets_file_counter;
+  ++$make_targets_file_counter;
+
+  open(TARGETS, ">$filename")
+    or die "$0: cannot open '$filename' for writing: $!\n";
+
+  foreach my $file (@_)
+    {
+      print TARGETS "$file\n";
+    }
+
+  close(TARGETS)
+    or die "$0: error in closing '$filename' for writing: $!\n";
+
+  $filename;
+}
+
+# Set the svn command line options that are used anytime svn connects
+# to the repository.
+sub set_svn_use_repos_cmd_opts
+{
+  unless (@_ == 2)
+    {
+      confess "$0: set_svn_use_repos_cmd_opts $INCORRECT_NUMBER_OF_ARGS";
+    }
+
+  my $username = shift;
+  my $password = shift;
+
+  @svn_use_repos_cmd_opts = ('--non-interactive');
+  if (defined $username and length $username)
+    {
+      push(@svn_use_repos_cmd_opts, '--username', $username);
+    }
+  if (defined $password)
+    {
+      push(@svn_use_repos_cmd_opts, '--password', $password);
+    }
+}
+
+sub get_tag_dir
+{
+  unless (@_ == 1)
+    {
+      confess "$0: get_tag_dir $INCORRECT_NUMBER_OF_ARGS";
+    }
+
+  my $load_dir = shift;
+
+  # Take the tag's relative directory, search for pairs of
+  # REGEX_SEP_CHAR, and substitute what the regular expression inside
+  # each pair captures from the load directory name into the tag
+  # directory name.
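+  # For example (editor's sketch; assumes REGEX_SEP_CHAR is '@' and a
+  # hypothetical -t value): with -t 'release-@(\d+\.\d+)@', loading a
+  # directory named 'myproject-1.4' would produce the tag directory
+  # 'release-1.4', since '(\d+\.\d+)' captures '1.4' from the load
+  # directory name.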
+  my $tag_location = $opt_import_tag_location;
+  my $load_tag     = '';
+  while ((my $i = index($tag_location, $REGEX_SEP_CHAR)) >= 0)
+    {
+      $load_tag .= substr($tag_location, 0, $i, '');
+      substr($tag_location, 0, 1, '');
+      my $j = index($tag_location, $REGEX_SEP_CHAR);
+      if ($j < 0)
+        {
+          die "$0: -t value '$opt_import_tag_location' does not have ",
+              "matching $REGEX_SEP_CHAR.\n";
+        }
+      my $regex = substr($tag_location, 0, $j, '');
+      $regex = "($regex)" unless ($regex =~ /\(.+\)/);
+      substr($tag_location, 0, 1, '');
+      my @results = $load_dir =~ m/$regex/;
+      $load_tag .= join('', @results);
+    }
+  $load_tag .= $tag_location;
+
+  $load_tag;
+}
+
+# Return a two element array.  The first element is a single character
+# that represents the type of object the path points to.  The second
+# is a boolean (1 for true, '' for false) indicating whether the path
+# points to a file and that file is executable.
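+# For instance (editor's note, hypothetical paths): file_info on a
+# directory returns ('d', ''), on an executable regular file it
+# returns ('f', 1), and on a path that does not exist it returns
+# ('0', '').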
+sub file_info
+{
+  lstat(shift) or return ('0', '');
+  -b _ and return ('b', '');
+  -c _ and return ('c', '');
+  -d _ and return ('d', '');
+  -f _ and return ('f', -x _);
+  -l _ and return ('l', '');
+  -p _ and return ('p', '');
+  -S _ and return ('S', '');
+  return ('?', '');
+}
+
+# Start a child process safely without using /bin/sh.
+sub safe_read_from_pipe
+{
+  unless (@_)
+    {
+      croak "$0: safe_read_from_pipe $INCORRECT_NUMBER_OF_ARGS";
+    }
+
+  my $openfork_available = "MSWin32" ne $OSNAME;
+  if ($openfork_available)
+    {
+      print "Running @_\n";
+      my $pid = open(SAFE_READ, "-|");
+      unless (defined $pid)
+        {
+          die "$0: cannot fork: $!\n";
+        }
+      unless ($pid)
+        {
+          # child
+          open(STDERR, ">&STDOUT")
+            or die "$0: cannot dup STDOUT: $!\n";
+          exec(@_)
+            or die "$0: cannot exec '@_': $!\n";
+        }
+    }
+  else
+    {
+      # Redirect the commit message into a temp file and use that to work
+      # around Windows' (non-)handling of multi-line commands.
+      my @commandline = ();
+      my $command;
+      my $comment;
+        
+      while ($command = shift)
+        {
+          if ("-m" eq $command)
+            {
+              # Assign the outer $comment so it can be echoed below.
+              $comment = shift;
+              my ($handle, $tmpfile) = tempfile(DIR => $temp_dir);
+              print $handle $comment;
+              close($handle);
+                
+              push(@commandline, "--file");
+              push(@commandline, $tmpfile);
+            }
+          else
+            {
+              # Munge the command to protect it from the command line
+              $command =~ s/\"/\\\"/g;
+              if ($command =~ m"\s") { $command = "\"$command\""; }
+              if ($command eq "") { $command = "\"\""; }
+              if ($command =~ m"\n")
+                {
+                  warn "$0: carriage return detected in command - may not work\n";
+                }
+              push(@commandline, $command);
+            }
+        }
+        
+      print "Running @commandline\n";
+      if ( $comment ) { print $comment; }
+        
+      # Now do the pipe.
+      open(SAFE_READ, "@commandline |")
+        or die "$0: cannot pipe to command: $!\n";
+    }
+    
+  # parent
+  my @output;
+  while (<SAFE_READ>)
+    {
+      chomp;
+      push(@output, $_);
+    }
+  close(SAFE_READ);
+  my $result = $?;
+  my $exit   = $result >> 8;
+  my $signal = $result & 127;
+  my $cd     = $result & 128 ? "with core dump" : "";
+  if ($signal or $cd)
+    {
+      warn "$0: pipe from '@_' failed $cd: exit=$exit signal=$signal\n";
+    }
+  if (wantarray)
+    {
+      return ($result, @output);
+    }
+  else
+    {
+      return $result;
+    }
+}
+
+# Use safe_read_from_pipe to start a child process safely and exit the
+# script if the child failed for whatever reason.
+sub read_from_process
+{
+  unless (@_)
+    {
+      croak "$0: read_from_process $INCORRECT_NUMBER_OF_ARGS";
+    }
+  my ($status, @output) = &safe_read_from_pipe(@_);
+  if ($status)
+    {
+      print STDERR "$0: @_ failed with this output:\n", join("\n", @output),
+                   "\n";
+      unless ($opt_no_user_input)
+        {
+          print STDERR
+            "Press return to quit and clean up svn working directory: ";
+          <STDIN>;
+        }
+      exit 1;
+    }
+  else
+    {
+      return @output;
+    }
+}
+
+# Get a list of all the files and directories in the specified
+# directory, the type of each one, and an MD5 digest of each file's
+# contents.
+sub recursive_ls_and_hash
+{
+  unless (@_ == 1)
+    {
+      croak "$0: recursive_ls_and_hash $INCORRECT_NUMBER_OF_ARGS";
+    }
+
+  # This is the directory to change into.
+  my $dir = shift;
+
+  # Remember the current directory so that the script can change back
+  # to it after it has finished with the specified directory.
+  my $return_cwd = cwd;
+
+  chdir($dir)
+    or die "$0: cannot chdir '$dir': $!\n";
+
+  my %files;
+
+  my $wanted = sub
+    {
+      s#^\./##;
+      return if $_ eq '.';
+      my ($file_type) = &file_info($_);
+      my $file_digest;
+      if ($file_type eq 'f' or ($file_type eq 'l' and stat($_) and -f _))
+        {
+          $file_digest = &digest_hash_file($_);
+        }
+      $files{$_} = {type   => $file_type,
+                    digest => $file_digest};
+    };
+  find({no_chdir   => 1,
+        preprocess => sub
+          {
+            grep
+              {
+                my $ok = 1;
+                foreach my $x (@glob_ignores)
+                  {
+                    if ($_ =~ /$x/) { $ok = 0; last; }
+                  }
+                $ok
+              } @_
+          },
+        wanted     => $wanted
+       }, '.');
+
+  chdir($return_cwd)
+    or die "$0: cannot chdir '$return_cwd': $!\n";
+
+  %files;
+}
+
+# Given a list of files and directories which have been renamed but
+# not committed, commit them with a proper log message.
+sub commit_renames
+{
+  unless (@_ == 4)
+    {
+      croak "$0: commit_renames $INCORRECT_NUMBER_OF_ARGS";
+    }
+
+  my $load_dir          = shift;
+  my $renamed_filenames = shift;
+  my $rename_from_files = shift;
+  my $rename_to_files   = shift;
+
+  my $number_renames    = @$renamed_filenames/2;
+
+  my $message = "To prepare to load $load_dir into $repos_load_abs_path, " .
+                "perform $number_renames rename" .
+                ($number_renames > 1 ? "s" : "") . ".\n";
+
+  # Text::Wrap::wrap appears to replace multiple consecutive \n's with
+  # one \n, so wrap the text and then append the second \n.
+  $message  = wrap("", "", $message) . "\n";
+  while (@$renamed_filenames)
+    {
+      my $from  = "$repos_load_abs_path/" . shift @$renamed_filenames;
+      my $to    = "$repos_load_abs_path/" . shift @$renamed_filenames;
+      $message .= wrap("", "  ", "* $to: Renamed from $from.\n");
+    }
+
+  # Change to the top of the working copy so that any
+  # directories will also be updated.
+  my $cwd = cwd;
+  chdir($wc_import_dir_cwd)
+    or die "$0: cannot chdir '$wc_import_dir_cwd': $!\n";
+  read_from_process($svn, 'commit', @svn_use_repos_cmd_opts, '-m', $message);
+  read_from_process($svn, 'update', @svn_use_repos_cmd_opts);
+  chdir($cwd)
+    or die "$0: cannot chdir '$cwd': $!\n";
+
+  # Some versions of subversion have a bug where renamed files
+  # or directories are not deleted after a commit, so do that
+  # here.
+  my @del_files = sort {length($b) <=> length($a) || $a cmp $b }
+                  keys %$rename_from_files;
+  rmtree(\@del_files, 1, 0);
+
+  # Empty the list of old and new renamed names.
+  undef %$rename_from_files;
+  undef %$rename_to_files;
+}
+
+# Take one file or directory and determine whether its name is equal
+# to a second one, or is contained inside the second one when the
+# second is a directory.
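+# For example (editor's note, hypothetical paths):
+#   contained_in('src/main.c', 'src', 'd') returns 1
+#   contained_in('src',        'src', 'd') returns 1
+#   contained_in('srcs/a.c',   'src', 'd') returns 0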
+sub contained_in
+{
+  unless (@_ == 3)
+    {
+      croak "$0: contain_in $INCORRECT_NUMBER_OF_ARGS";
+    }
+
+  my $contained      = shift;
+  my $container      = shift;
+  my $container_type = shift;
+
+  if ($container eq $contained)
+    {
+      return 1;
+    }
+
+  if ($container_type eq 'd')
+    {
+      my $dirname        = "$container/";
+      my $dirname_length = length($dirname);
+
+      if ($dirname_length <= length($contained) and
+          $dirname eq substr($contained, 0, $dirname_length))
+        {
+          return 1;
+        }
+    }
+
+  return 0;
+}
+
+# Take an array reference containing a list of files and directories
+# and a hash reference, and remove from the array any files and
+# directories listed in the hash, along with the files contained in
+# any of the listed directories.
+sub filter_renamed_files
+{
+  unless (@_ == 2)
+    {
+      croak "$0: filter_renamed_files $INCORRECT_NUMBER_OF_ARGS";
+    }
+
+  my $array_ref = shift;
+  my $hash_ref  = shift;
+
+  foreach my $remove_filename (keys %$hash_ref)
+    {
+      my $remove_file_type = $hash_ref->{$remove_filename}{type};
+      for (my $i=0; $i<@$array_ref;)
+        {
+          if (contained_in($array_ref->[$i],
+                           $remove_filename,
+                           $remove_file_type))
+            {
+              splice(@$array_ref, $i, 1);
+              next;
+            }
+          ++$i;
+        }
+    }
+}
+
+# Get a digest hash of the specified filename.
+sub digest_hash_file
+{
+  unless (@_ == 1)
+    {
+      croak "$0: digest_hash_file $INCORRECT_NUMBER_OF_ARGS";
+    }
+
+  my $filename = shift;
+
+  my $ctx = Digest::MD5->new;
+  if (open(READ, $filename))
+    {
+      binmode READ;
+      $ctx->addfile(*READ);
+      close(READ);
+    }
+  else
+    {
+      die "$0: cannot open '$filename' for reading: $!\n";
+    }
+  $ctx->digest;
+}
+
+# Read standard input until a line contains the required input or an
+# empty line to signify the default answer.
+sub get_answer
+{
+  unless (@_ == 3)
+    {
+      croak "$0: get_answer $INCORRECT_NUMBER_OF_ARGS";
+    }
+
+  my $message = shift;
+  my $answers = shift;
+  my $def_ans = shift;
+
+  return $def_ans if $opt_no_user_input;
+
+  my $char;
+  do
+    {
+      print $message;
+      $char = '';
+      my $line = <STDIN>;
+      if (defined $line and length $line)
+        {
+          $char = substr($line, 0, 1);
+          $char = '' if $char eq "\n";
+        }
+    } until $char eq '' or $answers =~ /$char/ig;
+
+  return $def_ans if $char eq '';
+  return pos($answers) - 1;
+}
+
+# Determine the native end of line on this system by writing a \n in
+# non-binary mode to an empty file and reading the same file back in
+# binary mode.
+sub determine_native_eol
+{
+  my $filename = "$temp_dir/svn_load_dirs_eol_test.$$";
+  if (-e $filename)
+    {
+      unlink($filename)
+        or die "$0: cannot unlink '$filename': $!\n";
+    }
+
+  # Write the \n in non-binary mode.
+  open(NL_TEST, ">$filename")
+    or die "$0: cannot open '$filename' for writing: $!\n";
+  print NL_TEST "\n";
+  close(NL_TEST)
+    or die "$0: error in closing '$filename' for writing: $!\n";
+
+  # Read the \n in binary mode.
+  open(NL_TEST, $filename)
+    or die "$0: cannot open '$filename' for reading: $!\n";
+  binmode NL_TEST;
+  local $/;
+  undef $/;
+  my $eol = <NL_TEST>;
+  close(NL_TEST)
+    or die "$0: cannot close '$filename' for reading: $!\n";
+  unlink($filename)
+    or die "$0: cannot unlink '$filename': $!\n";
+
+  my $eol_length = length($eol);
+  unless ($eol_length)
+    {
+      die "$0: native eol length on this system is 0.\n";
+    }
+
+  print "Native EOL on this system is ";
+  for (my $i=0; $i<$eol_length; ++$i)
+    {
+      printf "\\%03o", ord(substr($eol, $i, 1));
+    }
+  print ".\n\n";
+
+  $eol;
+}
+
+# Take a filename, open the file and replace all CR, CRLF and LF's
+# with the native end of line style for this system.
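+# For example (editor's illustration): if the native EOL is "\012",
+# the byte sequence "a\015\012b\015c\012" is rewritten as
+# "a\012b\012c\012"; the function returns 1 if the file was modified
+# and 0 if it was already in native form.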
+sub convert_file_to_native_eol
+{
+  unless (@_ == 1)
+    {
+      croak "$0: convert_file_to_native_eol $INCORRECT_NUMBER_OF_ARGS";
+    }
+
+  my $filename = shift;
+  open(FILE, $filename)
+    or die "$0: cannot open '$filename' for reading: $!\n";
+  binmode FILE;
+  local $/;
+  undef $/;
+  my $in = <FILE>;
+  close(FILE)
+    or die "$0: error in closing '$filename' for reading: $!\n";
+  my $out = '';
+
+  # Go through the file and transform it byte by byte.
+  my $i = 0;
+  while ($i < length($in))
+    {
+      my $cc = substr($in, $i, 2);
+      if ($cc eq "\015\012")
+        {
+          $out .= $native_eol;
+          $i += 2;
+          next;
+        }
+
+      my $c = substr($cc, 0, 1);
+      if ($c eq "\012" or $c eq "\015")
+        {
+          $out .= $native_eol;
+        }
+      else
+        {
+          $out .= $c;
+        }
+      ++$i;
+    }
+
+  return 0 if $in eq $out;
+
+  my $tmp_filename = ".svn/tmp/svn_load_dirs.$$";
+  open(FILE, ">$tmp_filename")
+    or die "$0: cannot open '$tmp_filename' for writing: $!\n";
+  binmode FILE;
+  print FILE $out;
+  close(FILE)
+    or die "$0: cannot close '$tmp_filename' for writing: $!\n";
+  rename($tmp_filename, $filename)
+    or die "$0: cannot rename '$tmp_filename' to '$filename': $!\n";
+
+  return 1;
+}
+
+# Split the input line into words taking into account that single or
+# double quotes may define a single word with whitespace in it.
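+# For example (editor's note): split_line(q{one "two three" four})
+# returns ('one', 'two three', 'four'), while an escaped quote such as
+# in q{say \"hi\"} keeps the quote characters in the word, giving
+# ('say', '"hi"').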
+sub split_line
+{
+  unless (@_ == 1)
+    {
+      croak "$0: split_line $INCORRECT_NUMBER_OF_ARGS";
+    }
+
+  my $line = shift;
+
+  # Strip leading whitespace.  Do not strip trailing whitespace which
+  # may be part of quoted text that was never closed.
+  $line =~ s/^\s+//;
+
+  my $line_length  = length $line;
+  my @words        = ();
+  my $current_word = '';
+  my $in_quote     = '';
+  my $in_protect   = '';
+  my $in_space     = '';
+  my $i            = 0;
+
+  while ($i < $line_length)
+    {
+      my $c = substr($line, $i, 1);
+      ++$i;
+
+      if ($in_protect)
+        {
+          if ($c eq $in_quote)
+            {
+              $current_word .= $c;
+            }
+          elsif ($c eq '"' or $c eq "'")
+            {
+              $current_word .= $c;
+            }
+          else
+            {
+              $current_word .= "$in_protect$c";
+            }
+          $in_protect = '';
+        }
+      elsif ($c eq '\\')
+        {
+          $in_protect = $c;
+        }
+      elsif ($in_quote)
+        {
+          if ($c eq $in_quote)
+            {
+              $in_quote = '';
+            }
+          else
+            {
+              $current_word .= $c;
+            }
+        }
+      elsif ($c eq '"' or $c eq "'")
+        {
+          $in_quote = $c;
+        }
+      elsif ($c =~ m/^\s$/)
+        {
+          unless ($in_space)
+            {
+              push(@words, $current_word);
+              $current_word = '';
+            }
+        }
+      else
+        {
+          $current_word .= $c;
+        }
+
+      $in_space = $c =~ m/^\s$/;
+    }
+
+  # Handle any leftovers.
+  $current_word .= $in_protect if $in_protect;
+  push(@words, $current_word) if length $current_word;
+
+  @words;
+}
+
+# This package exists just to delete the temporary directory.
+package Temp::Delete;
+
+sub new
+{
+  bless {}, shift;
+}
+
+sub DESTROY
+{
+  print "Cleaning up $temp_dir\n";
+  File::Path::rmtree([$temp_dir], 0, 0);
+}
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/scripts/svnmerge.py	Mon Aug 04 22:43:48 2008 +0000
@@ -0,0 +1,2170 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+# Copyright (c) 2005, Giovanni Bajo
+# Copyright (c) 2004-2005, Awarix, Inc.
+# All rights reserved.
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License
+# as published by the Free Software Foundation; either version 2
+# of the License, or (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA
+#
+# Author: Archie Cobbs <archie at awarix dot com>
+# Rewritten in Python by: Giovanni Bajo <rasky at develer dot com>
+#
+# Acknowledgments:
+#   John Belmonte <john at neggie dot net> - metadata and usability
+#     improvements
+#   Blair Zajac <blair at orcaware dot com> - random improvements
+#   Raman Gupta <rocketraman at fastmail dot fm> - bidirectional and transitive
+#     merging support
+#
+# $HeadURL$
+# $LastChangedDate$
+# $LastChangedBy$
+# $LastChangedRevision$
+#
+# Requisites:
+# svnmerge.py has been tested with all SVN major versions since 1.1 (both
+# client and server). It is unknown if it works with previous versions.
+#
+# Differences from svnmerge.sh:
+# - More portable: tested as working in FreeBSD and OS/2.
+# - Add double-verbose mode, which shows every svn command executed (-v -v).
+# - "svnmerge avail" now only shows commits in source, not also commits in
+#   other parts of the repository.
+# - Add "svnmerge block" to flag some revisions as blocked, so that
+#   they will not show up anymore in the available list.  Added also
+#   the complementary "svnmerge unblock".
+# - "svnmerge avail" has grown two new options:
+#   -B to display a list of the blocked revisions
+#   -A to display both the blocked and the available revisions.
+# - Improved generated commit message to make it machine parsable even when
+#   merging commits which are themselves merges.
+# - Add --force option to skip working copy check
+# - Add --record-only option to "svnmerge merge" to avoid performing
+#   an actual merge, yet record that a merge happened.
+#
+# TODO:
+#  - Add "svnmerge avail -R": show logs in reverse order
+#
+# Information for Hackers:
+#
+# Identifiers for branches:
+#  A branch is identified in three ways within this source:
+#  - as a working copy (variable name usually includes 'dir')
+#  - as a fully qualified URL
+#  - as a path identifier (an opaque string indicating a particular path
+#    in a particular repository; variable name includes 'pathid')
+#  A "target" is generally user-specified, and may be a working copy or
+#  a URL.
+
+import sys, os, getopt, re, types, tempfile, time, popen2, locale
+from bisect import bisect
+from xml.dom import pulldom
+
+NAME = "svnmerge"
+if not hasattr(sys, "version_info") or sys.version_info < (2, 0):
+    error("requires Python 2.0 or newer")
+
+# Set up the separator used to separate individual log messages from
+# each revision merged into the target location.  Also, create a
+# regular expression that will find this same separator in already
+# committed log messages, so that the separator used for this run of
+# svnmerge.py will have one more LOG_SEPARATOR appended to the longest
+# separator found in all the commits.
+LOG_SEPARATOR = 8 * '.'
+LOG_SEPARATOR_RE = re.compile('^((%s)+)' % re.escape(LOG_SEPARATOR),
+                              re.MULTILINE)
+
+# Each line of the embedded log messages will be prefixed by LOG_LINE_PREFIX.
+LOG_LINE_PREFIX = 2 * ' '
+
+# Set python to the default locale as per environment settings, same as svn
+# TODO we should really parse config and if log-encoding is specified, set
+# the locale to match that encoding
+locale.setlocale(locale.LC_ALL, '')
+
+# We want the svn output (such as svn info) to be non-localized
+# Using LC_MESSAGES should not affect localized output of svn log, for example
+if os.environ.has_key("LC_ALL"):
+    del os.environ["LC_ALL"]
+os.environ["LC_MESSAGES"] = "C"
+
+###############################################################################
+# Support for older Python versions
+###############################################################################
+
+# True/False constants are Python 2.2+
+try:
+    True, False
+except NameError:
+    True, False = 1, 0
+
+def lstrip(s, ch):
+    """Replacement for str.lstrip (support for arbitrary chars to strip was
+    added in Python 2.2.2)."""
+    i = 0
+    try:
+        while s[i] == ch:
+            i = i+1
+        return s[i:]
+    except IndexError:
+        return ""
+
+def rstrip(s, ch):
+    """Replacement for str.rstrip (support for arbitrary chars to strip was
+    added in Python 2.2.2)."""
+    try:
+        if s[-1] != ch:
+            return s
+        i = -2
+        while s[i] == ch:
+            i = i-1
+        return s[:i+1]
+    except IndexError:
+        return ""
+
+def strip(s, ch):
+    """Replacement for str.strip (support for arbitrary chars to strip was
+    added in Python 2.2.2)."""
+    return lstrip(rstrip(s, ch), ch)
+
+def rsplit(s, sep, maxsplits=0):
+    """Like str.rsplit, which is Python 2.4+ only."""
+    L = s.split(sep)
+    if not 0 < maxsplits <= len(L):
+        return L
+    return [sep.join(L[0:-maxsplits])] + L[-maxsplits:]
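+    # For example (editor's note):
+    #   rsplit("svn+ssh://host/repo:1-5", ":", 1)
+    # returns ['svn+ssh://host/repo', '1-5'], splitting only at the
+    # last ':' (this is how pathid:revlist pairs are parsed below).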
+
+###############################################################################
+
+def kwextract(s):
+    """Extract info from a svn keyword string."""
+    try:
+        return strip(s, "$").strip().split(": ")[1]
+    except IndexError:
+        return "<unknown>"
+
+__revision__ = kwextract('$Rev$')
+__date__ = kwextract('$Date$')
+
+# Additional options, not (yet?) mapped to command line flags
+default_opts = {
+    "svn": "svn",
+    "prop": NAME + "-integrated",
+    "block-prop": NAME + "-blocked",
+    "commit-verbose": True,
+}
+logs = {}
+
+def console_width():
+    """Get the width of the console screen (if any)."""
+    try:
+        return int(os.environ["COLUMNS"])
+    except (KeyError, ValueError):
+        pass
+
+    try:
+        # Call the Windows API (requires ctypes library)
+        from ctypes import windll, create_string_buffer
+        h = windll.kernel32.GetStdHandle(-11)
+        csbi = create_string_buffer(22)
+        res = windll.kernel32.GetConsoleScreenBufferInfo(h, csbi)
+        if res:
+            import struct
+            (bufx, bufy,
+             curx, cury, wattr,
+             left, top, right, bottom,
+             maxx, maxy) = struct.unpack("hhhhHhhhhhh", csbi.raw)
+            return right - left + 1
+    except ImportError:
+        pass
+
+    # Parse the output of stty -a
+    out = os.popen("stty -a").read()
+    m = re.search(r"columns (\d+);", out)
+    if m:
+        return int(m.group(1))
+
+    # sensible default
+    return 80
+
+def error(s):
+    """Subroutine to output an error and bail."""
+    print >> sys.stderr, "%s: %s" % (NAME, s)
+    sys.exit(1)
+
+def report(s):
+    """Subroutine to output progress message, unless in quiet mode."""
+    if opts["verbose"]:
+        print "%s: %s" % (NAME, s)
+
+def prefix_lines(prefix, lines):
+    """Given a string representing one or more lines of text, insert the
+    specified prefix at the beginning of each line, and return the result.
+    The input must be terminated by a newline."""
+    assert lines[-1] == "\n"
+    return prefix + lines[:-1].replace("\n", "\n"+prefix) + "\n"
+
+def recode_stdout_to_file(s):
+    if locale.getdefaultlocale()[1] is None or not hasattr(sys.stdout, "encoding") \
+            or sys.stdout.encoding is None:
+        return s
+    u = s.decode(sys.stdout.encoding)
+    return u.encode(locale.getdefaultlocale()[1])
+
+class LaunchError(Exception):
+    """Signal a failure in execution of an external command. Parameters are the
+    exit code of the process, the original command line, and the output of the
+    command."""
+
+try:
+    """Launch a sub-process. Return its output (both stdout and stderr),
+    optionally split by lines (if split_lines is True). Raise a LaunchError
+    exception if the exit code of the process is non-zero (failure).
+
+    This function has two implementations, one based on subprocess (preferred),
+    and one based on popen (for compatibility).
+    """
+    import subprocess
+    import shlex
+
+    def launch(cmd, split_lines=True):
+        # Requiring python 2.4 or higher, on some platforms we get
+        # much faster performance from the subprocess module (where python
+        # doesn't try to close an exorbitant number of file descriptors)
+        stdout = ""
+        stderr = ""
+        try:
+            if os.name == 'nt':
+                p = subprocess.Popen(cmd, stdout=subprocess.PIPE, \
+                                     close_fds=False, stderr=subprocess.PIPE)
+            else:
+                # Use shlex to break up the parameters intelligently,
+                # respecting quotes. shlex can't handle unicode.
+                args = shlex.split(cmd.encode('ascii'))
+                p = subprocess.Popen(args, stdout=subprocess.PIPE, \
+                                     close_fds=False, stderr=subprocess.PIPE)
+            stdoutAndErr = p.communicate()
+            stdout = stdoutAndErr[0]
+            stderr = stdoutAndErr[1]
+        except OSError, inst:
+            # Using 1 as failure code; should get actual number somehow? For
+            # examples see svnmerge_test.py's TestCase_launch.test_failure and
+            # TestCase_launch.test_failurecode.
+            raise LaunchError(1, cmd, stdout + " " + stderr + ": " + str(inst))
+
+        if p.returncode == 0:
+            if split_lines:
+                # Setting keepends=True for compatibility with previous logic
+                # (where file.readlines() preserves newlines)
+                return stdout.splitlines(True)
+            else:
+                return stdout
+        else:
+            raise LaunchError(p.returncode, cmd, stdout + stderr)
+except ImportError:
+    # support versions of python before 2.4 (slower on some systems)
+    def launch(cmd, split_lines=True):
+        if os.name not in ['nt', 'os2']:
+            p = popen2.Popen4(cmd)
+            p.tochild.close()
+            if split_lines:
+                out = p.fromchild.readlines()
+            else:
+                out = p.fromchild.read()
+            ret = p.wait()
+            if ret == 0:
+                ret = None
+            else:
+                ret >>= 8
+        else:
+            i,k = os.popen4(cmd)
+            i.close()
+            if split_lines:
+                out = k.readlines()
+            else:
+                out = k.read()
+            ret = k.close()
+
+        if ret is None:
+            return out
+        raise LaunchError(ret, cmd, out)
+
+def launchsvn(s, show=False, pretend=False, **kwargs):
+    """Launch SVN and grab its output."""
+    username = opts.get("username", None)
+    password = opts.get("password", None)
+    if username:
+        username = " --username=" + username
+    else:
+        username = ""
+    if password:
+        password = " --password=" + password
+    else:
+        password = ""
+    cmd = opts["svn"] + " --non-interactive" + username + password + " " + s
+    if show or opts["verbose"] >= 2:
+        print cmd
+    if pretend:
+        return None
+    return launch(cmd, **kwargs)
+
+def svn_command(s):
+    """Do (or pretend to do) an SVN command."""
+    out = launchsvn(s, show=opts["show-changes"] or opts["dry-run"],
+                    pretend=opts["dry-run"],
+                    split_lines=False)
+    if not opts["dry-run"]:
+        print out
+
+def check_dir_clean(dir):
+    """Check the current status of dir for local mods."""
+    if opts["force"]:
+        report('skipping status check because of --force')
+        return
+    report('checking status of "%s"' % dir)
+
+    # Checking with -q does not show unversioned files or external
+    # directories.  It does, however, print a message for external
+    # directories after a blank line.  So, practically, only the first
+    # line matters: if it is non-empty, there is a local modification.
+    out = launchsvn("status -q %s" % dir)
+    if out and out[0].strip():
+        error('"%s" has local modifications; it must be clean' % dir)
+
+class RevisionLog:
+    """
+    A log of the revisions which affected a given URL between two
+    revisions.
+    """
+
+    def __init__(self, url, begin, end, find_propchanges=False):
+        """
+        Create a new RevisionLog object, which stores, in self.revs, a list
+        of the revisions which affected the specified URL between begin and
+        end. If find_propchanges is True, self.propchange_revs will contain a
+        list of the revisions which changed properties directly on the
+        specified URL. URL must be the URL for a directory in the repository.
+        """
+        self.url = url
+
+        # Setup the log options (--quiet, so we don't show log messages)
+        log_opts = '--xml --quiet -r%s:%s "%s"' % (begin, end, url)
+        if find_propchanges:
+            # The --verbose flag lets us grab merge tracking information
+            # by looking at propchanges
+            log_opts = "--verbose " + log_opts
+
+        # Read the log to look for revision numbers and merge-tracking info
+        self.revs = []
+        self.propchange_revs = []
+        repos_pathid = target_to_pathid(url)
+        for chg in SvnLogParser(launchsvn("log %s" % log_opts,
+                                          split_lines=False)):
+            self.revs.append(chg.revision())
+            for p in chg.paths():
+                if p.action() == 'M' and p.pathid() == repos_pathid:
+                    self.propchange_revs.append(chg.revision())
+
+        # Save the range of the log
+        self.begin = int(begin)
+        if end == "HEAD":
+            # If end is not provided, we do not know which is the latest
+            # revision in the repository. So we set 'end' to the latest
+            # known revision.
+            self.end = self.revs[-1]
+        else:
+            self.end = int(end)
+
+        self._merges = None
+        self._blocks = None
+
+    def merge_metadata(self):
+        """
+        Return a VersionedProperty object, with a cached view of the merge
+        metadata in the range of this log.
+        """
+
+        # Load merge metadata if necessary
+        if not self._merges:
+            self._merges = VersionedProperty(self.url, opts["prop"])
+            self._merges.load(self)
+
+        return self._merges
+
+    def block_metadata(self):
+        if not self._blocks:
+            self._blocks = VersionedProperty(self.url, opts["block-prop"])
+            self._blocks.load(self)
+
+        return self._blocks
+
+
+class VersionedProperty:
+    """
+    A read-only, cached view of a versioned property.
+
+    self.revs contains a list of the revisions in which the property changes.
+    self.values stores the new values at each corresponding revision. If the
+    value of the property is unknown, it is set to None.
+
+    Initially, we set self.revs to [0] and self.values to [None]. This
+    indicates that, as of revision zero, we know nothing about the value of
+    the property.
+
+    Later, if you run self.load(log), we cache the value of this property over
+    the entire range of the log by noting each revision in which the property
+    was changed. At the end of the range of the log, we invalidate our cache
+    by adding the value "None" to our cache for any revisions which fall out
+    of the range of our log.
+
+    Once self.revs and self.values are filled, we can find the value of the
+    property at any arbitrary revision using a binary search on self.revs.
+    Once we find the last revision during which the property was changed,
+    we can lookup the associated value in self.values. (If the associated
+    value is None, the associated value was not cached and we have to do
+    a full propget.)
+
+    An example: We know that the 'svnmerge' property was added in r10, and
+    changed in r21. We gathered log info up until r40.
+
+    revs = [0, 10, 21, 40]
+    values = [None, "val1", "val2", None]
+
+    What these values say:
+    - From r0 to r9, we know nothing about the property.
+    - In r10, the property was set to "val1". This property stayed the same
+      until r21, when it was changed to "val2".
+    - We don't know what happened after r40.
+    """
+
+    def __init__(self, url, name):
+        """View the history of a versioned property at URL with name"""
+        self.url = url
+        self.name = name
+
+        # We know nothing about the value of the property. Setup revs
+        # and values to indicate as such.
+        self.revs = [0]
+        self.values = [None]
+
+        # We don't have any revisions cached
+        self._initial_value = None
+        self._changed_revs = []
+        self._changed_values = []
+
+    def load(self, log):
+        """
+        Load the history of property changes from the specified
+        RevisionLog object.
+        """
+
+        # Get the property value before the range of the log
+        if log.begin > 1:
+            self.revs.append(log.begin-1)
+            try:
+                self._initial_value = self.raw_get(log.begin-1)
+            except LaunchError:
+                # The specified URL might not exist before the
+                # range of the log. If so, we can safely assume
+                # that the property was empty at that time.
+                self._initial_value = { }
+            self.values.append(self._initial_value)
+        else:
+            self._initial_value = { }
+            self.values[0] = self._initial_value
+
+        # Cache the property values in the log range
+        old_value = self._initial_value
+        for rev in log.propchange_revs:
+            new_value = self.raw_get(rev)
+            if new_value != old_value:
+                self._changed_revs.append(rev)
+                self._changed_values.append(new_value)
+                self.revs.append(rev)
+                self.values.append(new_value)
+                old_value = new_value
+
+        # Indicate that we know nothing about the value of the property
+        # after the range of the log.
+        if log.revs:
+            self.revs.append(log.end+1)
+            self.values.append(None)
+
+    def raw_get(self, rev=None):
+        """
+        Get the property at revision REV. If rev is not specified, get
+        the property at revision HEAD.
+        """
+        return get_revlist_prop(self.url, self.name, rev)
+
+    def get(self, rev=None):
+        """
+        Get the property at revision REV. If rev is not specified, get
+        the property at revision HEAD.
+        """
+
+        if rev is not None:
+
+            # Find the index using a binary search
+            i = bisect(self.revs, rev) - 1
+
+            # Return the value of the property, if it was cached
+            if self.values[i] is not None:
+                return self.values[i]
+
+        # Get the current value of the property
+        return self.raw_get(rev)
+
+    def changed_revs(self, key=None):
+        """
+        Get a list of the revisions in which the specified dictionary
+        key was changed in this property. If key is not specified,
+        return a list of revisions in which any key was changed.
+        """
+        if key is None:
+            return self._changed_revs
+        else:
+            changed_revs = []
+            old_val = self._initial_value
+            for rev, val in zip(self._changed_revs, self._changed_values):
+                if val.get(key) != old_val.get(key):
+                    changed_revs.append(rev)
+                    old_val = val
+            return changed_revs
+
+    def initialized_revs(self):
+        """
+        Get a list of the revisions in which keys were added or
+        removed in this property.
+        """
+        initialized_revs = []
+        old_len = len(self._initial_value)
+        for rev, val in zip(self._changed_revs, self._changed_values):
+            if len(val) != old_len:
+                initialized_revs.append(rev)
+                old_len = len(val)
+        return initialized_revs
+
+class RevisionSet:
+    """
+    A set of revisions, held in dictionary form for easy manipulation. If we
+    were to rewrite this script for Python 2.3+, we would subclass this from
+    set (or UserSet).  As this class does not include branch
+    information, it's assumed that one instance will be used per
+    branch.
+    """
+    def __init__(self, parm):
+        """Constructs a RevisionSet from a string in property form, or from
+        a dictionary whose keys are the revisions. Raises ValueError if the
+        input string is invalid."""
+
+        self._revs = {}
+
+        revision_range_split_re = re.compile('[-:]')
+
+        if isinstance(parm, types.DictType):
+            self._revs = parm.copy()
+        elif isinstance(parm, types.ListType):
+            for R in parm:
+                self._revs[int(R)] = 1
+        else:
+            parm = parm.strip()
+            if parm:
+                for R in parm.split(","):
+                    rev_or_revs = re.split(revision_range_split_re, R)
+                    if len(rev_or_revs) == 1:
+                        self._revs[int(rev_or_revs[0])] = 1
+                    elif len(rev_or_revs) == 2:
+                        for rev in range(int(rev_or_revs[0]),
+                                         int(rev_or_revs[1])+1):
+                            self._revs[rev] = 1
+                    else:
+                        raise ValueError, 'Ill formatted revision range: ' + R
+
+    def sorted(self):
+        revnums = self._revs.keys()
+        revnums.sort()
+        return revnums
+
+    def normalized(self):
+        """Returns a normalized version of the revision set, which is an
+        ordered list of couples (start,end), with the minimum number of
+        intervals."""
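+        # For example (editor's note): the revisions {1, 2, 3, 5, 8, 9}
+        # normalize to [(1, 3), (5, 5), (8, 9)], which __str__ renders
+        # as "1-3,5,8-9".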
+        revnums = self.sorted()
+        revnums.reverse()
+        ret = []
+        while revnums:
+            s = e = revnums.pop()
+            while revnums and revnums[-1] in (e, e+1):
+                e = revnums.pop()
+            ret.append((s, e))
+        return ret
+
+    def __str__(self):
+        """Convert the revision set to a string, using its normalized form."""
+        L = []
+        for s,e in self.normalized():
+            if s == e:
+                L.append(str(s))
+            else:
+                L.append(str(s) + "-" + str(e))
+        return ",".join(L)
+
+    def __contains__(self, rev):
+        return self._revs.has_key(rev)
+
+    def __sub__(self, rs):
+        """Compute subtraction as in sets."""
+        revs = {}
+        for r in self._revs.keys():
+            if r not in rs:
+                revs[r] = 1
+        return RevisionSet(revs)
+
+    def __and__(self, rs):
+        """Compute intersections as in sets."""
+        revs = {}
+        for r in self._revs.keys():
+            if r in rs:
+                revs[r] = 1
+        return RevisionSet(revs)
+
+    def __nonzero__(self):
+        return len(self._revs) != 0
+
+    def __len__(self):
+        """Return the number of revisions in the set."""
+        return len(self._revs)
+
+    def __iter__(self):
+        return iter(self.sorted())
+
+    def __or__(self, rs):
+        """Compute set union."""
+        revs = self._revs.copy()
+        revs.update(rs._revs)
+        return RevisionSet(revs)
+
+def merge_props_to_revision_set(merge_props, pathid):
+    """A converter which returns a RevisionSet instance containing the
+    revisions for PATHID as recorded in MERGE_PROPS.  MERGE_PROPS is a
+    dictionary of pathid -> revision set branch integration information
+    (as returned by get_merge_props())."""
+    if not merge_props.has_key(pathid):
+        error('no integration info available for path "%s"' % pathid)
+    return RevisionSet(merge_props[pathid])
+
+def dict_from_revlist_prop(propvalue):
+    """Given a property value as a string containing per-source revision
+    lists, return a dictionary whose key is a source path identifier
+    and whose value is the revisions for that source."""
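+    # For example (editor's note, hypothetical pathids):
+    #   "/trunk:1-10,15 /branches/stable:20-25"
+    # yields {"/trunk": "1-10,15", "/branches/stable": "20-25"}.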
+    prop = {}
+
+    # Multiple sources are separated by any whitespace.
+    for L in propvalue.split():
+        # We use rsplit to play safe and allow colons in pathids.
+        source, revs = rsplit(L.strip(), ":", 1)
+        prop[source] = revs
+    return prop
+
+def get_revlist_prop(url_or_dir, propname, rev=None):
+    """Given a repository URL or working copy path and a property
+    name, extract the values of the property which store per-source
+    revision lists and return a dictionary whose key is a source path
+    identifier, and whose value is the revisions for that source."""
+
+    # Note that propget does not return an error if the property does
+    # not exist, it simply does not output anything. So we do not need
+    # to check for LaunchError here.
+    args = '--strict "%s" "%s"' % (propname, url_or_dir)
+    if rev:
+        args = '-r %s %s' % (rev, args)
+    out = launchsvn('propget %s' % args, split_lines=False)
+
+    return dict_from_revlist_prop(out)
+
+def get_merge_props(dir):
+    """Extract the merged revisions."""
+    return get_revlist_prop(dir, opts["prop"])
+
+def get_block_props(dir):
+    """Extract the blocked revisions."""
+    return get_revlist_prop(dir, opts["block-prop"])
+
+def get_blocked_revs(dir, source_pathid):
+    p = get_block_props(dir)
+    if p.has_key(source_pathid):
+        return RevisionSet(p[source_pathid])
+    return RevisionSet("")
+
+def format_merge_props(props, sep=" "):
+    """Formats the hash PROPS as a string suitable for use as a
+    Subversion property value."""
+    assert sep in ["\t", "\n", " "]   # separator must be whitespace
+    props = props.items()
+    props.sort()
+    L = []
+    for h, r in props:
+        L.append(h + ":" + r)
+    return sep.join(L)
+
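+# Illustrative sketch, not used by the script: a merge property value maps a
+# repository-relative path to a revision list ("/trunk" is a placeholder), and
+# dict_from_revlist_prop()/format_merge_props() act as inverses of each other.
+def _example_revlist_prop_roundtrip():
+    props = dict_from_revlist_prop("/trunk:1-10,14")
+    assert props == {"/trunk": "1-10,14"}
+    assert format_merge_props(props) == "/trunk:1-10,14"
+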
+def _run_propset(dir, prop, value):
+    """Set the property 'prop' of directory 'dir' to value 'value'. We go
+    through a temporary file to not run into command line length limits."""
+    try:
+        fd, fname = tempfile.mkstemp()
+        f = os.fdopen(fd, "wb")
+    except AttributeError:
+        # Fallback for Python <= 2.3 which does not have mkstemp (mktemp
+        # suffers from race conditions. Not that we care...)
+        fname = tempfile.mktemp()
+        f = open(fname, "wb")
+
+    try:
+        f.write(value)
+        f.close()
+        report("property data written to temp file: %s" % value)
+        svn_command('propset "%s" -F "%s" "%s"' % (prop, fname, dir))
+    finally:
+        os.remove(fname)
+
+def set_props(dir, name, props):
+    props = format_merge_props(props)
+    if props:
+        _run_propset(dir, name, props)
+    else:
+        svn_command('propdel "%s" "%s"' % (name, dir))
+
+def set_merge_props(dir, props):
+    set_props(dir, opts["prop"], props)
+
+def set_block_props(dir, props):
+    set_props(dir, opts["block-prop"], props)
+
+def set_blocked_revs(dir, source_pathid, revs):
+    props = get_block_props(dir)
+    if revs:
+        props[source_pathid] = str(revs)
+    elif props.has_key(source_pathid):
+        del props[source_pathid]
+    set_block_props(dir, props)
+
+def is_url(url):
+    """Check if url is a valid url."""
+    return re.search(r"^[a-zA-Z][-+\.\w]*://[^\s]+$", url) is not None
+
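+# Illustrative sketch, not used by the script (the URLs are placeholders):
+# is_url() accepts any scheme://... form and rejects local paths.
+def _example_is_url():
+    assert is_url("http://svn.example.com/repos/trunk")
+    assert is_url("svn+ssh://host/repos")
+    assert not is_url("/home/user/wc")
+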
+def is_wc(dir):
+    """Check if a directory is a working copy."""
+    return os.path.isdir(os.path.join(dir, ".svn")) or \
+           os.path.isdir(os.path.join(dir, "_svn"))
+
+_cache_svninfo = {}
+def get_svninfo(target):
+    """Extract the subversion information for a target (through 'svn info').
+    This function uses an internal cache to let clients query information
+    many times."""
+    if _cache_svninfo.has_key(target):
+        return _cache_svninfo[target]
+    info = {}
+    for L in launchsvn('info "%s"' % target):
+        L = L.strip()
+        if not L:
+            continue
+        key, value = L.split(": ", 1)
+        info[key] = value.strip()
+    _cache_svninfo[target] = info
+    return info
+
+def target_to_url(target):
+    """Convert working copy path or repos URL to a repos URL."""
+    if is_wc(target):
+        info = get_svninfo(target)
+        return info["URL"]
+    return target
+
+_cache_reporoot = {}
+def get_repo_root(target):
+    """Compute the root repos URL given a working-copy path, or a URL."""
+    # Try using "svn info WCDIR". This works only on SVN clients >= 1.3
+    if not is_url(target):
+        try:
+            info = get_svninfo(target)
+            root = info["Repository Root"]
+            _cache_reporoot[root] = None
+            return root
+        except KeyError:
+            pass
+        url = target_to_url(target)
+        assert url[-1] != '/'
+    else:
+        url = target
+
+    # Go through the cache of the repository roots. This avoids extra
+    # server round-trips if we are asking the root of different URLs
+    # in the same repository (the cache in get_svninfo() cannot detect
+    # that of course and would issue a remote command).
+    assert is_url(url)
+    for r in _cache_reporoot:
+        if url.startswith(r):
+            return r
+
+    # Try using "svn info URL". This works only on SVN clients >= 1.2
+    try:
+        info = get_svninfo(url)
+        root = info["Repository Root"]
+        _cache_reporoot[root] = None
+        return root
+    except LaunchError:
+        pass
+
+    # Constrained to older svn clients, we are stuck with this ugly
+    # trial-and-error implementation. It could be made faster with a
+    # binary search.
+    while url:
+        temp = os.path.dirname(url)
+        try:
+            launchsvn('proplist "%s"' % temp)
+        except LaunchError:
+            _cache_reporoot[url] = None
+            return url
+        url = temp
+
+    assert False, "svn repos root not found"
+
+def target_to_pathid(target):
+    """Convert a target (either a working copy path or an URL) into a
+    path identifier."""
+    root = get_repo_root(target)
+    url = target_to_url(target)
+    assert root[-1] != "/"
+    assert url[:len(root)] == root, "url=%r, root=%r" % (url, root)
+    return url[len(root):]
+
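+# Illustrative sketch, not used by the script: a pathid is simply the target
+# URL with the repository root prefix removed; with placeholder URLs (and no
+# svn access) the derivation looks like this.
+def _example_pathid_derivation():
+    root = "http://svn.example.com/repos"         # hypothetical repository root
+    url = root + "/branches/1.0"                  # hypothetical branch URL
+    assert url[len(root):] == "/branches/1.0"     # the resulting pathid
+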
+class SvnLogParser:
+    """
+    Parse the "svn log", going through the XML output and using pulldom (which
+    would even allow streaming the command output).
+    """
+    def __init__(self, xml):
+        self._events = pulldom.parseString(xml)
+    def __getitem__(self, idx):
+        for event, node in self._events:
+            if event == pulldom.START_ELEMENT and node.tagName == "logentry":
+                self._events.expandNode(node)
+                return self.SvnLogRevision(node)
+        raise IndexError, "Could not find 'logentry' tag in xml"
+
+    class SvnLogRevision:
+        def __init__(self, xmlnode):
+            self.n = xmlnode
+        def revision(self):
+            return int(self.n.getAttribute("revision"))
+        def author(self):
+            return self.n.getElementsByTagName("author")[0].firstChild.data
+        def paths(self):
+            return [self.SvnLogPath(n)
+                    for n in  self.n.getElementsByTagName("path")]
+
+        class SvnLogPath:
+            def __init__(self, xmlnode):
+                self.n = xmlnode
+            def action(self):
+                return self.n.getAttribute("action")
+            def pathid(self):
+                return self.n.firstChild.data
+            def copyfrom_rev(self):
+                try: return self.n.getAttribute("copyfrom-rev")
+                except KeyError: return None
+            def copyfrom_pathid(self):
+                try: return self.n.getAttribute("copyfrom-path")
+                except KeyError: return None
+
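+# Illustrative sketch, not used by the script: parsing a minimal, hand-written
+# "svn log --xml -v" document (the revision, author and paths are placeholders).
+def _example_svn_log_parsing():
+    xml = ('<log><logentry revision="7"><author>alice</author>'
+           '<paths><path action="A" copyfrom-path="/trunk" copyfrom-rev="6">'
+           '/branches/1.0</path></paths></logentry></log>')
+    entry = SvnLogParser(xml)[0]
+    assert entry.revision() == 7
+    assert entry.author() == "alice"
+    assert [p.pathid() for p in entry.paths()] == ["/branches/1.0"]
+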
+def get_copyfrom(target):
+    """Get copyfrom info for a given target (it represents the directory from
+    where it was branched). NOTE: repos root has no copyfrom info. In this case
+    None is returned.
+
+    Returns the:
+        - source file or directory from which the copy was made
+        - revision from which that source was copied
+        - revision in which the copy was committed
+    """
+    repos_path = target_to_pathid(target)
+    for chg in SvnLogParser(launchsvn('log -v --xml --stop-on-copy "%s"'
+                                      % target, split_lines=False)):
+        for p in chg.paths():
+            if p.action() == 'A' and p.pathid() == repos_path:
+                # These values will be None if the corresponding elements are
+                # not found in the log.
+                return p.copyfrom_pathid(), p.copyfrom_rev(), chg.revision()
+    return None, None, None
+
+def get_latest_rev(url):
+    """Get the latest revision of the repository of which URL is part."""
+    try:
+        return get_svninfo(url)["Revision"]
+    except LaunchError:
+        # Alternative method for latest revision checking (for svn < 1.2)
+        report('checking latest revision of "%s"' % url)
+        L = launchsvn('proplist --revprop -r HEAD "%s"' % opts["source-url"])[0]
+        rev = re.search("revision (\d+)", L).group(1)
+        report('latest revision of "%s" is %s' % (url, rev))
+        return rev
+
+def get_created_rev(url):
+    """Lookup the revision at which the path identified by the
+    provided URL was first created."""
+    oldest_rev = -1
+    report('determining oldest revision for URL "%s"' % url)
+    ### TODO: Refactor this to use a modified RevisionLog class.
+    lines = None
+    cmd = "log -r1:HEAD --stop-on-copy -q " + url
+    try:
+        lines = launchsvn(cmd + " --limit=1")
+    except LaunchError:
+        # Assume that --limit isn't supported by the installed 'svn'.
+        lines = launchsvn(cmd)
+    if lines and len(lines) > 1:
+        i = lines[1].find(" ")
+        if i != -1:
+            oldest_rev = int(lines[1][1:i])
+    if oldest_rev == -1:
+        error('unable to determine oldest revision for URL "%s"' % url)
+    return oldest_rev
+
+def get_commit_log(url, revnum):
+    """Return the log message for a specific integer revision
+    number."""
+    out = launchsvn("log --incremental -r%d %s" % (revnum, url))
+    return recode_stdout_to_file("".join(out[1:]))
+
+def construct_merged_log_message(url, revnums):
+    """Return a commit log message containing all the commit messages
+    in the specified revisions at the given URL.  The separator used
+    in this log message is determined by searching for the longest
+    svnmerge separator existing in the commit log messages and
+    extending it by one more separator.  This results in a new commit
+    log message that is clearer in describing merges that contain
+    other merges. Trailing newlines are removed from the embedded
+    log messages."""
+    messages = ['']
+    longest_sep = ''
+    for r in revnums.sorted():
+        message = get_commit_log(url, r)
+        if message:
+            message = re.sub(r'(\r\n|\r|\n)', "\n", message)
+            message = rstrip(message, "\n") + "\n"
+            messages.append(prefix_lines(LOG_LINE_PREFIX, message))
+            for match in LOG_SEPARATOR_RE.findall(message):
+                sep = match[1]
+                if len(sep) > len(longest_sep):
+                    longest_sep = sep
+
+    longest_sep += LOG_SEPARATOR + "\n"
+    messages.append('')
+    return longest_sep.join(messages)
+
+def get_default_source(branch_target, branch_props):
+    """Return the default source for branch_target (given its branch_props).
+    Error out if there is ambiguity."""
+    if not branch_props:
+        error("no integration info available")
+
+    props = branch_props.copy()
+    pathid = target_to_pathid(branch_target)
+
+    # To make bidirectional merges easier, find the target's
+    # repository local path so it can be removed from the list of
+    # possible integration sources.
+    if props.has_key(pathid):
+        del props[pathid]
+
+    if len(props) > 1:
+        err_msg = "multiple sources found. "
+        err_msg += "Explicit source argument (-S/--source) required.\n"
+        err_msg += "The merge sources available are:"
+        for prop in props:
+            err_msg += "\n  " + prop
+        error(err_msg)
+
+    return props.keys()[0]
+
+def check_old_prop_version(branch_target, branch_props):
+    """Check if branch_props (of branch_target) are svnmerge properties in
+    old format, and emit an error if so."""
+
+    # Previous svnmerge versions allowed trailing /'s in the repository
+    # local path.  Newer versions of svnmerge will trim trailing /'s
+    # appearing in the command line, so if there are any properties with
+    # trailing /'s, they will not be properly matched later on, so require
+    # the user to change them now.
+    fixed = {}
+    changed = False
+    for source, revs in branch_props.items():
+        src = rstrip(source, "/")
+        fixed[src] = revs
+        if src != source:
+            changed = True
+
+    if changed:
+        err_msg = "old property values detected; an upgrade is required.\n\n"
+        err_msg += "Please execute and commit these changes to upgrade:\n\n"
+        err_msg += 'svn propset "%s" "%s" "%s"' % \
+                   (opts["prop"], format_merge_props(fixed), branch_target)
+        error(err_msg)
+
+def should_find_reflected(branch_dir):
+    should_find_reflected = opts["bidirectional"]
+
+    # If the source has integration info for the target, set find_reflected
+    # even if --bidirectional wasn't specified
+    if not should_find_reflected:
+        source_props = get_merge_props(opts["source-url"])
+        should_find_reflected = source_props.has_key(target_to_pathid(branch_dir))
+
+    return should_find_reflected
+
+def analyze_revs(target_pathid, url, begin=1, end=None,
+                 find_reflected=False):
+    """For the source of the merges in the source URL being merged into
+    target_pathid, analyze the revisions in the interval begin-end (which
+    defaults to 1-HEAD), to find out which revisions are changes in
+    the url, which are changes elsewhere (so-called 'phantom'
+    revisions), optionally which are reflected changes (to avoid
+    conflicts that can occur when doing bidirectional merging between
+    branches), and which revisions initialize merge tracking against other
+    branches.  Return a tuple of four RevisionSet's:
+        (real_revs, phantom_revs, reflected_revs, initialized_revs).
+
+    NOTE: To maximize speed, if "end" is not provided, the function is
+    not able to find phantom revisions following the last real
+    revision in the URL.
+    """
+
+    begin = str(begin)
+    if end is None:
+        end = "HEAD"
+    else:
+        end = str(end)
+        if long(begin) > long(end):
+            return RevisionSet(""), RevisionSet(""), \
+                   RevisionSet(""), RevisionSet("")
+
+    logs[url] = RevisionLog(url, begin, end, find_reflected)
+    revs = RevisionSet(logs[url].revs)
+
+    if end == "HEAD":
+        # If end is not provided, we do not know which is the latest revision
+        # in the repository. So return the phantom revision set only up to
+        # the latest known revision.
+        end = str(list(revs)[-1])
+
+    phantom_revs = RevisionSet("%s-%s" % (begin, end)) - revs
+
+    if find_reflected:
+        reflected_revs = logs[url].merge_metadata().changed_revs(target_pathid)
+        reflected_revs += logs[url].block_metadata().changed_revs(target_pathid)
+    else:
+        reflected_revs = []
+
+    initialized_revs = RevisionSet(logs[url].merge_metadata().initialized_revs())
+    reflected_revs = RevisionSet(reflected_revs)
+
+    return revs, phantom_revs, reflected_revs, initialized_revs
+
+def analyze_source_revs(branch_target, source_url, **kwargs):
+    """For the given branch and source, extract the real and phantom
+    source revisions."""
+    branch_url = target_to_url(branch_target)
+    branch_pathid = target_to_pathid(branch_target)
+
+    # Extract the latest repository revision from the URL of the branch
+    # directory (which is already cached at this point).
+    end_rev = get_latest_rev(source_url)
+
+    # Calculate the base of analysis. If there is a "1-XX" interval in the
+    # merged_revs, we do not need to check those.
+    base = 1
+    r = opts["merged-revs"].normalized()
+    if r and r[0][0] == 1:
+        base = r[0][1] + 1
+
+    # See if the user filtered the revision set. If so, we are not
+    # interested in something outside that range.
+    if opts["revision"]:
+        revs = RevisionSet(opts["revision"]).sorted()
+        if base < revs[0]:
+            base = revs[0]
+        if end_rev > revs[-1]:
+            end_rev = revs[-1]
+
+    return analyze_revs(branch_pathid, source_url, base, end_rev, **kwargs)
+
+def minimal_merge_intervals(revs, phantom_revs):
+    """Produce the smallest number of intervals suitable for merging. revs
+    is the RevisionSet which we want to merge, and phantom_revs are phantom
+    revisions which can be used to concatenate intervals, thus minimizing the
+    number of operations."""
+    revnums = revs.normalized()
+    ret = []
+
+    cur = revnums.pop()
+    while revnums:
+        next = revnums.pop()
+        assert next[1] < cur[0]      # otherwise it is not ordered
+        assert cur[0] - next[1] > 1  # otherwise it is not normalized
+        for i in range(next[1]+1, cur[0]):
+            if i not in phantom_revs:
+                ret.append(cur)
+                cur = next
+                break
+        else:
+            cur = (next[0], cur[1])
+
+    ret.append(cur)
+    ret.reverse()
+    return ret
+
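+# Illustrative sketch, not used by the script: with revisions 3-5 and 7-9 to
+# merge and revision 6 known to be a phantom, a single merge interval covers
+# everything; without the phantom bridging the gap, two intervals are needed.
+def _example_minimal_merge_intervals():
+    revs = RevisionSet("3-5,7-9")
+    assert minimal_merge_intervals(revs, RevisionSet("6")) == [(3, 9)]
+    assert minimal_merge_intervals(revs, RevisionSet("")) == [(3, 5), (7, 9)]
+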
+def display_revisions(revs, display_style, revisions_msg, source_url):
+    """Show REVS as dictated by DISPLAY_STYLE, either numerically, in
+    log format, or as diffs.  When displaying revisions numerically,
+    prefix output with REVISIONS_MSG when in verbose mode.  Otherwise,
+    request logs or diffs using SOURCE_URL."""
+    if display_style == "revisions":
+        if revs:
+            report(revisions_msg)
+            print revs
+    elif display_style == "logs":
+        for start,end in revs.normalized():
+            svn_command('log --incremental -v -r %d:%d %s' % \
+                        (start, end, source_url))
+    elif display_style in ("diffs", "summarize"):
+        if display_style == 'summarize':
+            summarize = '--summarize '
+        else:
+            summarize = ''
+
+        for start, end in revs.normalized():
+            print
+            if start == end:
+                print "%s: changes in revision %d follow" % (NAME, start)
+            else:
+                print "%s: changes in revisions %d-%d follow" % (NAME,
+                                                                 start, end)
+            print
+
+            # Note: the starting revision number to 'svn diff' is
+            # NOT inclusive so we have to subtract one from ${START}.
+            svn_command("diff -r %d:%d %s %s" % (start - 1, end, summarize,
+                                                 source_url))
+    else:
+        assert False, "unhandled display style: %s" % display_style
+
+def action_init(target_dir, target_props):
+    """Initialize for merges."""
+    # Check that directory is ready for being modified
+    check_dir_clean(target_dir)
+
+    # If the user hasn't specified the revisions to use, see if the
+    # "source" is a copy from the current tree and if so, we can use
+    # the version data obtained from it.
+    revision_range = opts["revision"]
+    if not revision_range:
+        # Determining a default endpoint for the revision range that "init"
+        # will use, since none was provided by the user.
+        cf_source, cf_rev, copy_committed_in_rev = \
+                                            get_copyfrom(opts["source-url"])
+        target_path = target_to_pathid(target_dir)
+
+        if target_path == cf_source:
+            # If source was originally copied from target, and we are merging
+            # changes from source to target (the copy target is the merge
+            # source, and the copy source is the merge target), then we want to
+            # mark as integrated up to the rev in which the copy that created
+            # the merge source was committed:
+            report('the source "%s" is a branch of "%s"' %
+                   (opts["source-url"], target_dir))
+            revision_range = "1-" + str(copy_committed_in_rev)
+        else:
+            # If the copy source is the merge source, and
+            # the copy target is the merge target, then we want to
+            # mark as integrated up to the specific rev of the merge
+            # target from which the merge source was copied. Longer
+            # discussion here:
+            # http://subversion.tigris.org/issues/show_bug.cgi?id=2810
+            target_url = target_to_url(target_dir)
+            source_path = target_to_pathid(opts["source-url"])
+            cf_source_path, cf_rev, copy_committed_in_rev = get_copyfrom(target_url)
+            if source_path == cf_source_path:
+                report('the merge source "%s" is the copy source of "%s"' %
+                       (opts["source-url"], target_dir))
+                revision_range = "1-" + cf_rev
+
+    # When neither the merge source nor the target is a copy of the other, and
+    # the user did not specify a revision range, choose a default of the
+    # current revision; saying, in effect, "everything has been merged, so
+    # mark as integrated up to the latest rev on the source URL".
+    revs = revision_range or "1-" + get_latest_rev(opts["source-url"])
+    revs = RevisionSet(revs)
+
+    report('marking "%s" as already containing revisions "%s" of "%s"' %
+           (target_dir, revs, opts["source-url"]))
+
+    revs = str(revs)
+    # If the local svnmerge-integrated property already has an entry
+    # for the source-pathid, simply error out.
+    if not opts["force"] and target_props.has_key(opts["source-pathid"]):
+        error('Repository-relative path %s has already been initialized at %s\n'
+              'Use --force to re-initialize'
+              % (opts["source-pathid"], target_dir))
+    target_props[opts["source-pathid"]] = revs
+
+    # Set property
+    set_merge_props(target_dir, target_props)
+
+    # Write out commit message if desired
+    if opts["commit-file"]:
+        f = open(opts["commit-file"], "w")
+        print >>f, 'Initialized merge tracking via "%s" with revisions "%s" from ' \
+            % (NAME, revs)
+        print >>f, '%s' % opts["source-url"]
+        f.close()
+        report('wrote commit message to "%s"' % opts["commit-file"])
+
+def action_avail(branch_dir, branch_props):
+    """Show commits available for merges."""
+    source_revs, phantom_revs, reflected_revs, initialized_revs = \
+               analyze_source_revs(branch_dir, opts["source-url"],
+                                   find_reflected=
+                                       should_find_reflected(branch_dir))
+    report('skipping phantom revisions: %s' % phantom_revs)
+    if reflected_revs:
+        report('skipping reflected revisions: %s' % reflected_revs)
+    if initialized_revs:
+        report('skipping initialized revisions: %s' % initialized_revs)
+
+    blocked_revs = get_blocked_revs(branch_dir, opts["source-pathid"])
+    avail_revs = source_revs - opts["merged-revs"] - blocked_revs - \
+                 reflected_revs - initialized_revs
+
+    # Compose the set of revisions to show
+    revs = RevisionSet("")
+    report_msg = "revisions available to be merged are:"
+    if "avail" in opts["avail-showwhat"]:
+        revs |= avail_revs
+    if "blocked" in opts["avail-showwhat"]:
+        revs |= blocked_revs
+        report_msg = "revisions blocked are:"
+
+    # Limit to revisions specified by -r (if any)
+    if opts["revision"]:
+        revs = revs & RevisionSet(opts["revision"])
+
+    display_revisions(revs, opts["avail-display"],
+                      report_msg,
+                      opts["source-url"])
+
+def action_integrated(branch_dir, branch_props):
+    """Show change sets already merged.  This set of revisions is
+    calculated from taking svnmerge-integrated property from the
+    branch, and subtracting any revision older than the branch
+    creation revision."""
+    # Extract the integration info for the branch_dir
+    branch_props = get_merge_props(branch_dir)
+    check_old_prop_version(branch_dir, branch_props)
+    revs = merge_props_to_revision_set(branch_props, opts["source-pathid"])
+
+    # Lookup the oldest revision on the branch path.
+    oldest_src_rev = get_created_rev(opts["source-url"])
+
+    # Subtract any revisions which pre-date the branch.
+    report("subtracting revisions which pre-date the source URL (%d)" %
+           oldest_src_rev)
+    revs = revs - RevisionSet(range(1, oldest_src_rev))
+
+    # Limit to revisions specified by -r (if any)
+    if opts["revision"]:
+        revs = revs & RevisionSet(opts["revision"])
+
+    display_revisions(revs, opts["integrated-display"],
+                      "revisions already integrated are:", opts["source-url"])
+
+def action_merge(branch_dir, branch_props):
+    """Record merge meta data, and do the actual merge (if not
+    requested otherwise via --record-only)."""
+    # Check branch directory is ready for being modified
+    check_dir_clean(branch_dir)
+
+    source_revs, phantom_revs, reflected_revs, initialized_revs = \
+               analyze_source_revs(branch_dir, opts["source-url"],
+                                   find_reflected=
+                                       should_find_reflected(branch_dir))
+
+    if opts["revision"]:
+        revs = RevisionSet(opts["revision"])
+    else:
+        revs = source_revs
+
+    blocked_revs = get_blocked_revs(branch_dir, opts["source-pathid"])
+    merged_revs = opts["merged-revs"]
+
+    # Show what we're doing
+    if opts["verbose"]:  # just to avoid useless calculations
+        if merged_revs & revs:
+            report('"%s" already contains revisions %s' % (branch_dir,
+                                                           merged_revs & revs))
+        if phantom_revs:
+            report('memorizing phantom revision(s): %s' % phantom_revs)
+        if reflected_revs:
+            report('memorizing reflected revision(s): %s' % reflected_revs)
+        if blocked_revs & revs:
+            report('skipping blocked revision(s): %s' % (blocked_revs & revs))
+        if initialized_revs:
+            report('skipping initialized revision(s): %s' % initialized_revs)
+
+    # Compute final merge set.
+    revs = revs - merged_revs - blocked_revs - reflected_revs - \
+           phantom_revs - initialized_revs
+    if not revs:
+        report('no revisions to merge, exiting')
+        return
+
+    # When manually marking revisions as merged, we only update the
+    # integration meta data, and don't perform an actual merge.
+    record_only = opts["record-only"]
+
+    if record_only:
+        report('recording merge of revision(s) %s from "%s"' %
+               (revs, opts["source-url"]))
+    else:
+        report('merging in revision(s) %s from "%s"' %
+               (revs, opts["source-url"]))
+
+    # Do the merge(s). Note: the starting revision number to 'svn merge'
+    # is NOT inclusive so we have to subtract one from start.
+    # We try to keep the number of merge operations as low as possible,
+    # because it is faster and reduces the number of conflicts.
+    old_block_props = get_block_props(branch_dir)
+    merge_metadata = logs[opts["source-url"]].merge_metadata()
+    block_metadata = logs[opts["source-url"]].block_metadata()
+    for start,end in minimal_merge_intervals(revs, phantom_revs):
+        if not record_only:
+            # Preset merge/blocked properties to the source value at
+            # the start rev to avoid spurious property conflicts
+            set_merge_props(branch_dir, merge_metadata.get(start - 1))
+            set_block_props(branch_dir, block_metadata.get(start - 1))
+            # Do the merge
+            svn_command("merge --force -r %d:%d %s %s" % \
+                        (start - 1, end, opts["source-url"], branch_dir))
+            # TODO: to support graph merging, add logic to merge the property
+            # meta-data manually
+
+    # Update the set of merged revisions.
+    merged_revs = merged_revs | revs | reflected_revs | phantom_revs | initialized_revs
+    branch_props[opts["source-pathid"]] = str(merged_revs)
+    set_merge_props(branch_dir, branch_props)
+    # Reset the blocked revs
+    set_block_props(branch_dir, old_block_props)
+
+    # Write out commit message if desired
+    if opts["commit-file"]:
+        f = open(opts["commit-file"], "w")
+        if record_only:
+            print >>f, 'Recorded merge of revisions %s via %s from ' % \
+                  (revs, NAME)
+        else:
+            print >>f, 'Merged revisions %s via %s from ' % \
+                  (revs, NAME)
+        print >>f, '%s' % opts["source-url"]
+        if opts["commit-verbose"]:
+            print >>f
+            print >>f, construct_merged_log_message(opts["source-url"], revs),
+
+        f.close()
+        report('wrote commit message to "%s"' % opts["commit-file"])
+
+def action_block(branch_dir, branch_props):
+    """Block revisions."""
+    # Check branch directory is ready for being modified
+    check_dir_clean(branch_dir)
+
+    source_revs, phantom_revs, reflected_revs, initialized_revs = \
+               analyze_source_revs(branch_dir, opts["source-url"])
+    revs_to_block = source_revs - opts["merged-revs"]
+
+    # Limit to revisions specified by -r (if any)
+    if opts["revision"]:
+        revs_to_block = RevisionSet(opts["revision"]) & revs_to_block
+
+    if not revs_to_block:
+        error('no available revisions to block')
+
+    # Change blocked information
+    blocked_revs = get_blocked_revs(branch_dir, opts["source-pathid"])
+    blocked_revs = blocked_revs | revs_to_block
+    set_blocked_revs(branch_dir, opts["source-pathid"], blocked_revs)
+
+    # Write out commit message if desired
+    if opts["commit-file"]:
+        f = open(opts["commit-file"], "w")
+        print >>f, 'Blocked revisions %s via %s' % (revs_to_block, NAME)
+        if opts["commit-verbose"]:
+            print >>f
+            print >>f, construct_merged_log_message(opts["source-url"],
+                                                    revs_to_block),
+
+        f.close()
+        report('wrote commit message to "%s"' % opts["commit-file"])
+
+def action_unblock(branch_dir, branch_props):
+    """Unblock revisions."""
+    # Check branch directory is ready for being modified
+    check_dir_clean(branch_dir)
+
+    blocked_revs = get_blocked_revs(branch_dir, opts["source-pathid"])
+    revs_to_unblock = blocked_revs
+
+    # Limit to revisions specified by -r (if any)
+    if opts["revision"]:
+        revs_to_unblock = revs_to_unblock & RevisionSet(opts["revision"])
+
+    if not revs_to_unblock:
+        error('no available revisions to unblock')
+
+    # Change blocked information
+    blocked_revs = blocked_revs - revs_to_unblock
+    set_blocked_revs(branch_dir, opts["source-pathid"], blocked_revs)
+
+    # Write out commit message if desired
+    if opts["commit-file"]:
+        f = open(opts["commit-file"], "w")
+        print >>f, 'Unblocked revisions %s via %s' % (revs_to_unblock, NAME)
+        if opts["commit-verbose"]:
+            print >>f
+            print >>f, construct_merged_log_message(opts["source-url"],
+                                                    revs_to_unblock),
+        f.close()
+        report('wrote commit message to "%s"' % opts["commit-file"])
+
+def action_rollback(branch_dir, branch_props):
+    """Rollback previously integrated revisions."""
+
+    # Make sure the revision arguments are present
+    if not opts["revision"]:
+        error("The '-r' option is mandatory for rollback")
+
+    # Check branch directory is ready for being modified
+    check_dir_clean(branch_dir)
+
+    # Extract the integration info for the branch_dir
+    branch_props = get_merge_props(branch_dir)
+    check_old_prop_version(branch_dir, branch_props)
+    # Get the list of all revisions already merged into this source-pathid.
+    merged_revs = merge_props_to_revision_set(branch_props,
+                                              opts["source-pathid"])
+
+    # At which revision was the src created?
+    oldest_src_rev = get_created_rev(opts["source-url"])
+    src_pre_exist_range = RevisionSet("1-%d" % oldest_src_rev)
+
+    # Limit to revisions specified by -r (if any)
+    revs = merged_revs & RevisionSet(opts["revision"])
+
+    # make sure there's some revision to rollback
+    if not revs:
+        report("Nothing to rollback in revision range r%s" % opts["revision"])
+        return
+
+    # If even one specified revision lies outside the lifetime of the
+    # merge source, error out.
+    if revs & src_pre_exist_range:
+        err_str  = "Specified revision range falls out of the rollback range.\n"
+        err_str += "%s was created at r%d" % (opts["source-pathid"],
+                                              oldest_src_rev)
+        error(err_str)
+
+    record_only = opts["record-only"]
+
+    if record_only:
+        report('recording rollback of revision(s) %s from "%s"' %
+               (revs, opts["source-url"]))
+    else:
+        report('rollback of revision(s) %s from "%s"' %
+               (revs, opts["source-url"]))
+
+    # Do the reverse merge(s). Note: the starting revision number
+    # to 'svn merge' is NOT inclusive so we have to subtract one from start.
+    # We try to keep the number of merge operations as low as possible,
+    # because it is faster and reduces the number of conflicts.
+    rollback_intervals = minimal_merge_intervals(revs, [])
+    # rollback in the reverse order of merge
+    rollback_intervals.reverse()
+    for start, end in rollback_intervals:
+        if not record_only:
+            # Do the merge
+            svn_command("merge --force -r %d:%d %s %s" % \
+                        (end, start - 1, opts["source-url"], branch_dir))
+
+    # Write out commit message if desired
+    # calculate the phantom revs first
+    if opts["commit-file"]:
+        f = open(opts["commit-file"], "w")
+        if record_only:
+            print >>f, 'Recorded rollback of revisions %s via %s from ' % \
+                  (revs, NAME)
+        else:
+            print >>f, 'Rolled back revisions %s via %s from ' % \
+                  (revs, NAME)
+        print >>f, '%s' % opts["source-url"]
+
+        f.close()
+        report('wrote commit message to "%s"' % opts["commit-file"])
+
+    # Update the set of merged revisions.
+    merged_revs = merged_revs - revs
+    branch_props[opts["source-pathid"]] = str(merged_revs)
+    set_merge_props(branch_dir, branch_props)
+
+def action_uninit(branch_dir, branch_props):
+    """Uninit SOURCE URL."""
+    # Check branch directory is ready for being modified
+    check_dir_clean(branch_dir)
+
+    # If the source-pathid does not have an entry in the svnmerge-integrated
+    # property, simply error out.
+    if not branch_props.has_key(opts["source-pathid"]):
+        error('Repository-relative path "%s" does not contain merge '
+              'tracking information for "%s"' \
+                % (opts["source-pathid"], branch_dir))
+
+    del branch_props[opts["source-pathid"]]
+
+    # Set merge property with the selected source deleted
+    set_merge_props(branch_dir, branch_props)
+
+    # Set blocked revisions for the selected source to None
+    set_blocked_revs(branch_dir, opts["source-pathid"], None)
+
+    # Write out commit message if desired
+    if opts["commit-file"]:
+        f = open(opts["commit-file"], "w")
+        print >>f, 'Removed merge tracking via "%s" for ' % NAME
+        print >>f, '%s' % opts["source-url"]
+        f.close()
+        report('wrote commit message to "%s"' % opts["commit-file"])
+
+###############################################################################
+# Command line parsing -- options and commands management
+###############################################################################
+
+class OptBase:
+    def __init__(self, *args, **kwargs):
+        self.help = kwargs["help"]
+        del kwargs["help"]
+        self.lflags = []
+        self.sflags = []
+        for a in args:
+            if a.startswith("--"):   self.lflags.append(a)
+            elif a.startswith("-"):  self.sflags.append(a)
+            else:
+                raise TypeError, "invalid flag name: %s" % a
+        if kwargs.has_key("dest"):
+            self.dest = kwargs["dest"]
+            del kwargs["dest"]
+        else:
+            if not self.lflags:
+                raise TypeError, "cannot deduce dest name without long options"
+            self.dest = self.lflags[0][2:]
+        if kwargs:
+            raise TypeError, "invalid keyword arguments: %r" % kwargs.keys()
+    def repr_flags(self):
+        f = self.sflags + self.lflags
+        r = f[0]
+        for fl in f[1:]:
+            r += " [%s]" % fl
+        return r
+
+class Option(OptBase):
+    def __init__(self, *args, **kwargs):
+        self.default = kwargs.setdefault("default", 0)
+        del kwargs["default"]
+        self.value = kwargs.setdefault("value", None)
+        del kwargs["value"]
+        OptBase.__init__(self, *args, **kwargs)
+    def apply(self, state, value):
+        assert value == ""
+        if self.value is not None:
+            state[self.dest] = self.value
+        else:
+            state[self.dest] += 1
+
+class OptionArg(OptBase):
+    def __init__(self, *args, **kwargs):
+        self.default = kwargs["default"]
+        del kwargs["default"]
+        self.metavar = kwargs.setdefault("metavar", None)
+        del kwargs["metavar"]
+        OptBase.__init__(self, *args, **kwargs)
+
+        if self.metavar is None:
+            if self.dest is not None:
+                self.metavar = self.dest.upper()
+            else:
+                self.metavar = "arg"
+        if self.default:
+            self.help += " (default: %s)" % self.default
+    def apply(self, state, value):
+        assert value is not None
+        state[self.dest] = value
+    def repr_flags(self):
+        r = OptBase.repr_flags(self)
+        return r + " " + self.metavar
+
+class CommandOpts:
+    class Cmd:
+        def __init__(self, *args):
+            self.name, self.func, self.usage, self.help, self.opts = args
+        def short_help(self):
+            return self.help.split(".")[0]
+        def __str__(self):
+            return self.name
+        def __call__(self, *args, **kwargs):
+            return self.func(*args, **kwargs)
+
+    def __init__(self, global_opts, common_opts, command_table, version=None):
+        self.progname = NAME
+        self.version = version.replace("%prog", self.progname)
+        self.cwidth = console_width() - 2
+        self.ctable = command_table.copy()
+        self.gopts = global_opts[:]
+        self.copts = common_opts[:]
+        self._add_builtins()
+        for k in self.ctable.keys():
+            cmd = self.Cmd(k, *self.ctable[k])
+            opts = []
+            for o in cmd.opts:
+                if isinstance(o, types.StringType) or \
+                   isinstance(o, types.UnicodeType):
+                    o = self._find_common(o)
+                opts.append(o)
+            cmd.opts = opts
+            self.ctable[k] = cmd
+
+    def _add_builtins(self):
+        self.gopts.append(
+            Option("-h", "--help", help="show help for this command and exit"))
+        if self.version is not None:
+            self.gopts.append(
+                Option("-V", "--version", help="show version info and exit"))
+        self.ctable["help"] = (self._cmd_help,
+            "help [COMMAND]",
+            "Display help for a specific command. If COMMAND is omitted, "
+            "display brief command description.",
+            [])
+
+    def _cmd_help(self, cmd=None, *args):
+        if args:
+            self.error("wrong number of arguments", "help")
+        if cmd is not None:
+            cmd = self._command(cmd)
+            self.print_command_help(cmd)
+        else:
+            self.print_command_list()
+
+    def _paragraph(self, text, width=78):
+        chunks = re.split("\s+", text.strip())
+        chunks.reverse()
+        lines = []
+        while chunks:
+            L = chunks.pop()
+            while chunks and len(L) + len(chunks[-1]) + 1 <= width:
+                L += " " + chunks.pop()
+            lines.append(L)
+        return lines
+
+    def _paragraphs(self, text, *args, **kwargs):
+        pars = text.split("\n\n")
+        lines = self._paragraph(pars[0], *args, **kwargs)
+        for p in pars[1:]:
+            lines.append("")
+            lines.extend(self._paragraph(p, *args, **kwargs))
+        return lines
+
+    def _print_wrapped(self, text, indent=0):
+        text = self._paragraphs(text, self.cwidth - indent)
+        print text.pop(0)
+        for t in text:
+            print " " * indent + t
+
+    def _find_common(self, fl):
+        for o in self.copts:
+            if fl in o.lflags+o.sflags:
+                return o
+        assert False, fl
+
+    def _compute_flags(self, opts, check_conflicts=True):
+        back = {}
+        sfl = ""
+        lfl = []
+        for o in opts:
+            sapp = lapp = ""
+            if isinstance(o, OptionArg):
+                sapp, lapp = ":", "="
+            for s in o.sflags:
+                if check_conflicts and back.has_key(s):
+                    raise RuntimeError, "option conflict: %s" % s
+                back[s] = o
+                sfl += s[1:] + sapp
+            for l in o.lflags:
+                if check_conflicts and back.has_key(l):
+                    raise RuntimeError, "option conflict: %s" % l
+                back[l] = o
+                lfl.append(l[2:] + lapp)
+        return sfl, lfl, back
+
+    def _extract_command(self, args):
+        """
+        Try to extract the command name from the argument list. This is
+        non-trivial because we want to allow command-specific options even
+        before the command itself.
+        """
+        opts = self.gopts[:]
+        for cmd in self.ctable.values():
+            opts.extend(cmd.opts)
+        sfl, lfl, _ = self._compute_flags(opts, check_conflicts=False)
+
+        lopts,largs = getopt.getopt(args, sfl, lfl)
+        if not largs:
+            return None
+        return self._command(largs[0])
+
+    def _fancy_getopt(self, args, opts, state=None):
+        if state is None:
+            state= {}
+        for o in opts:
+            if not state.has_key(o.dest):
+                state[o.dest] = o.default
+
+        sfl, lfl, back = self._compute_flags(opts)
+        try:
+            lopts,args = getopt.gnu_getopt(args, sfl, lfl)
+        except AttributeError:
+            # Before Python 2.3, there was no gnu_getopt support.
+            # So we can't parse intermixed positional arguments
+            # and options.
+            lopts,args = getopt.getopt(args, sfl, lfl)
+
+        for o,v in lopts:
+            back[o].apply(state, v)
+        return state, args
+
+    def _command(self, cmd):
+        if not self.ctable.has_key(cmd):
+            self.error("unknown command: '%s'" % cmd)
+        return self.ctable[cmd]
+
+    def parse(self, args):
+        if not args:
+            self.print_small_help()
+            sys.exit(0)
+
+        cmd = None
+        try:
+            cmd = self._extract_command(args)
+            opts = self.gopts[:]
+            if cmd:
+                opts.extend(cmd.opts)
+                args.remove(cmd.name)
+            state, args = self._fancy_getopt(args, opts)
+        except getopt.GetoptError, e:
+            self.error(e, cmd)
+
+        # Handle builtins
+        if self.version is not None and state["version"]:
+            self.print_version()
+            sys.exit(0)
+        if state["help"]: # special case for --help
+            if cmd:
+                self.print_command_help(cmd)
+                sys.exit(0)
+            cmd = self.ctable["help"]
+        else:
+            if cmd is None:
+                self.error("command argument required")
+        if str(cmd) == "help":
+            cmd(*args)
+            sys.exit(0)
+        return cmd, args, state
+
+    def error(self, s, cmd=None):
+        print >>sys.stderr, "%s: %s" % (self.progname, s)
+        if cmd is not None:
+            self.print_command_help(cmd)
+        else:
+            self.print_small_help()
+        sys.exit(1)
+    def print_small_help(self):
+        print "Type '%s help' for usage" % self.progname
+    def print_usage_line(self):
+        print "usage: %s <subcommand> [options...] [args...]\n" % self.progname
+    def print_command_list(self):
+        print "Available commands (use '%s help COMMAND' for more details):\n" \
+              % self.progname
+        cmds = self.ctable.keys()
+        cmds.sort()
+        indent = max(map(len, cmds))
+        for c in cmds:
+            h = self.ctable[c].short_help()
+            print "  %-*s   " % (indent, c),
+            self._print_wrapped(h, indent+6)
+    def print_command_help(self, cmd):
+        cmd = self.ctable[str(cmd)]
+        print 'usage: %s %s\n' % (self.progname, cmd.usage)
+        self._print_wrapped(cmd.help)
+        def print_opts(opts, self=self):
+            if not opts: return
+            flags = [o.repr_flags() for o in opts]
+            indent = max(map(len, flags))
+            for f,o in zip(flags, opts):
+                print "  %-*s :" % (indent, f),
+                self._print_wrapped(o.help, indent+5)
+        print '\nCommand options:'
+        print_opts(cmd.opts)
+        print '\nGlobal options:'
+        print_opts(self.gopts)
+
+    def print_version(self):
+        print self.version
+
+###############################################################################
+# Options and Commands description
+###############################################################################
+
+global_opts = [
+    Option("-F", "--force",
+           help="force operation even if the working copy is not clean, or "
+                "there are pending updates"),
+    Option("-n", "--dry-run",
+           help="don't actually change anything, just pretend; "
+                "implies --show-changes"),
+    Option("-s", "--show-changes",
+           help="show subversion commands that make changes"),
+    Option("-v", "--verbose",
+           help="verbose mode: output more information about progress"),
+    OptionArg("-u", "--username",
+              default=None,
+              help="invoke subversion commands with the supplied username"),
+    OptionArg("-p", "--password",
+              default=None,
+              help="invoke subversion commands with the supplied password"),
+]
+
+common_opts = [
+    Option("-b", "--bidirectional",
+           value=True,
+           default=False,
+           help="remove reflected and initialized revisions from merge candidates.  "
+                "Not required but may be specified to speed things up slightly"),
+    OptionArg("-f", "--commit-file", metavar="FILE",
+              default="svnmerge-commit-message.txt",
+              help="set the name of the file where the suggested log message "
+                   "is written to"),
+    Option("-M", "--record-only",
+           value=True,
+           default=False,
+           help="do not perform an actual merge of the changes, yet record "
+                "that a merge happened"),
+    OptionArg("-r", "--revision",
+              metavar="REVLIST",
+              default="",
+              help="specify a revision list, consisting of revision numbers "
+                   'and ranges separated by commas, e.g., "534,537-539,540"'),
+    OptionArg("-S", "--source", "--head",
+              default=None,
+              help="specify a merge source for this branch.  It can be either "
+                   "a path, a full URL, or an unambiguous substring of one "
+                   "of the paths for which merge tracking was already "
+                   "initialized.  Needed only to disambiguate in case of "
+                   "multiple merge sources"),
+]
+
+command_table = {
+    "init": (action_init,
+    "init [OPTION...] [SOURCE]",
+    """Initialize merge tracking from SOURCE on the current working
+    directory.
+
+    If SOURCE is specified, all the revisions in SOURCE are marked as already
+    merged; if this is not correct, you can use --revision to specify the
+    exact list of already-merged revisions.
+
+    If SOURCE is omitted, then it is computed from the "svn cp" history of the
+    current working directory (searching back for the branch point); in this
+    case, %s assumes that no revision has been integrated yet since
+    the branch point (unless you teach it with --revision).""" % NAME,
+    [
+        "-f", "-r", # import common opts
+    ]),
+
+    "avail": (action_avail,
+    "avail [OPTION...] [PATH]",
+    """Show unmerged revisions available for PATH as a revision list.
+    If --revision is given, the revisions shown will be limited to those
+    also specified in the option.
+
+    When svnmerge is used to bidirectionally merge changes between a
+    branch and its source, it is necessary not to merge the same changes
+    back and forth: e.g., if you committed a merge of a certain
+    revision of the branch into the source, you do not want that commit
+    to appear as available to be merged into the branch (as the code
+    originated in the branch itself!).  svnmerge will automatically
+    exclude these so-called "reflected" revisions.""",
+    [
+        Option("-A", "--all",
+               dest="avail-showwhat",
+               value=["blocked", "avail"],
+               default=["avail"],
+               help="show both available and blocked revisions (aka ignore "
+                    "blocked revisions)"),
+        "-b",
+        Option("-B", "--blocked",
+               dest="avail-showwhat",
+               value=["blocked"],
+               help="show the blocked revision list (see '%s block')" % NAME),
+        Option("-d", "--diff",
+               dest="avail-display",
+               value="diffs",
+               default="revisions",
+               help="show corresponding diff instead of revision list"),
+        Option("--summarize",
+               dest="avail-display",
+               value="summarize",
+               help="show summarized diff instead of revision list"),
+        Option("-l", "--log",
+               dest="avail-display",
+               value="logs",
+               help="show corresponding log history instead of revision list"),
+        "-r",
+        "-S",
+    ]),
+
+    "integrated": (action_integrated,
+    "integrated [OPTION...] [PATH]",
+    """Show merged revisions available for PATH as a revision list.
+    If --revision is given, the revisions shown will be limited to
+    those also specified in the option.""",
+    [
+        Option("-d", "--diff",
+               dest="integrated-display",
+               value="diffs",
+               default="revisions",
+               help="show corresponding diff instead of revision list"),
+        Option("-l", "--log",
+               dest="integrated-display",
+               value="logs",
+               help="show corresponding log history instead of revision list"),
+        "-r",
+        "-S",
+    ]),
+
+    "rollback": (action_rollback,
+    "rollback [OPTION...] [PATH]",
+    """Rollback previously merged in revisions from PATH.  The
+    --revision option is mandatory, and specifies which revisions
+    will be rolled back.  Only the previously integrated merges
+    will be rolled back.
+
+    When manually rolling back changes, --record-only can be used to
+    instruct %s that a manual rollback of a certain revision
+    already happened, so that it can record it and offer that
+    revision for merge henceforth.""" % (NAME),
+    [
+        "-f", "-r", "-S", "-M", # import common opts
+    ]),
+
+    "merge": (action_merge,
+    "merge [OPTION...] [PATH]",
+    """Merge in revisions into PATH from its source. If --revision is omitted,
+    all the available revisions will be merged. In any case, already merged-in
+    revisions will NOT be merged again.
+
+    When svnmerge is used to bidirectionally merge changes between a
+    branch and its source, it is necessary not to merge the same changes
+    back and forth: e.g., if you committed a merge of a certain
+    revision of the branch into the source, you do not want that commit
+    to appear as available to be merged into the branch (as the code
+    originated in the branch itself!).  svnmerge will automatically
+    exclude these so-called "reflected" revisions.
+
+    When manually merging changes across branches, --record-only can
+    be used to instruct %s that a manual merge of a certain revision
+    already happened, so that it can record it and not offer that
+    revision for merge anymore.  Conversely, when there are revisions
+    which should not be merged, use '%s block'.""" % (NAME, NAME),
+    [
+        "-b", "-f", "-r", "-S", "-M", # import common opts
+    ]),
+
+    "block": (action_block,
+    "block [OPTION...] [PATH]",
+    """Block revisions within PATH so that they disappear from the available
+    list. This is useful to hide revisions which will not be integrated.
+    If --revision is omitted, it defaults to all the available revisions.
+
+    Do not use this option to hide revisions that were manually merged
+    into the branch.  Instead, use '%s merge --record-only', which
+    records that a merge happened (as opposed to a merge which should
+    not happen).""" % NAME,
+    [
+        "-f", "-r", "-S", # import common opts
+    ]),
+
+    "unblock": (action_unblock,
+    "unblock [OPTION...] [PATH]",
+    """Revert the effect of '%s block'. If --revision is omitted, all the
+    blocked revisions are unblocked.""" % NAME,
+    [
+        "-f", "-r", "-S", # import common opts
+    ]),
+
+    "uninit": (action_uninit,
+    "uninit [OPTION...] [PATH]",
+    """Remove merge tracking information from PATH. It cleans any kind of merge
+    tracking information (including the list of blocked revisions). If there
+    are multiple sources, use --source to indicate which source you want to
+    forget about.""",
+    [
+        "-f", "-S", # import common opts
+    ]),
+}
+
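+# Illustrative sketch, never executed: the commands defined above are normally
+# driven from the shell, which maps onto main() roughly as follows (the URL
+# and revision numbers are placeholders).
+def _example_command_line_usage():
+    main(["init", "http://svn.example.com/repos/trunk"])  # record branch point
+    main(["avail"])                                        # list unmerged revisions
+    main(["merge", "-r", "13,16-18"])                      # merge selected revisions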
+
+def main(args):
+    global opts
+
+    # Initialize default options
+    opts = default_opts.copy()
+    logs.clear()
+
+    optsparser = CommandOpts(global_opts, common_opts, command_table,
+                             version="%%prog r%s\n  modified: %s\n\n"
+                                     "Copyright (C) 2004,2005 Awarix Inc.\n"
+                                     "Copyright (C) 2005, Giovanni Bajo"
+                                     % (__revision__, __date__))
+
+    cmd, args, state = optsparser.parse(args)
+    opts.update(state)
+
+    source = opts.get("source", None)
+    branch_dir = "."
+
+    if str(cmd) == "init":
+        if len(args) == 1:
+            source = args[0]
+        elif len(args) > 1:
+            optsparser.error("wrong number of parameters", cmd)
+    elif str(cmd) in command_table.keys():
+        if len(args) == 1:
+            branch_dir = args[0]
+        elif len(args) > 1:
+            optsparser.error("wrong number of parameters", cmd)
+    else:
+        assert False, "command not handled: %s" % cmd
+
+    # Validate branch_dir
+    if not is_wc(branch_dir):
+        error('"%s" is not a subversion working directory' % branch_dir)
+
+    # Extract the integration info for the branch_dir
+    branch_props = get_merge_props(branch_dir)
+    check_old_prop_version(branch_dir, branch_props)
+
+    # Calculate source_url and source_path
+    report("calculate source path for the branch")
+    if not source:
+        if str(cmd) == "init":
+            cf_source, cf_rev, copy_committed_in_rev = get_copyfrom(branch_dir)
+            if not cf_source:
+                error('no copyfrom info available. '
+                      'Explicit source argument (-S/--source) required.')
+            opts["source-pathid"] = cf_source
+            if not opts["revision"]:
+                opts["revision"] = "1-" + cf_rev
+        else:
+            opts["source-pathid"] = get_default_source(branch_dir, branch_props)
+
+        # (assumes pathid is a repository-relative-path)
+        assert opts["source-pathid"][0] == '/'
+        opts["source-url"] = get_repo_root(branch_dir) + opts["source-pathid"]
+    else:
+        # The source was given as a command line argument and is stored in
+        # SOURCE.  Ensure that the specified source does not end in a /,
+        # otherwise it's easy to have the same source path listed more
+        # than once in the integrated version properties, with and without
+        # trailing /'s.
+        source = rstrip(source, "/")
+        if not is_wc(source) and not is_url(source):
+            # Check if it is a substring of a pathid recorded
+            # within the branch properties.
+            found = []
+            for pathid in branch_props.keys():
+                if pathid.find(source) > 0:
+                    found.append(pathid)
+            if len(found) == 1:
+                # (assumes pathid is a repository-relative-path)
+                source = get_repo_root(branch_dir) + found[0]
+            else:
+                error('"%s" is neither a valid URL, nor an unambiguous '
+                      'substring of a repository path, nor a working directory'
+                      % source)
+
+        source_pathid = target_to_pathid(source)
+        if str(cmd) == "init" and \
+               source_pathid == target_to_pathid("."):
+            error("cannot init integration source path '%s'\n"
+                  "Its repository-relative path must differ from the "
+                  "repository-relative path of the current directory."
+                  % source_pathid)
+        opts["source-pathid"] = source_pathid
+        opts["source-url"] = target_to_url(source)
+
+    # Sanity check source_url
+    assert is_url(opts["source-url"])
+    # SVN does not support non-normalized URL (and we should not
+    # have created them)
+    assert opts["source-url"].find("/..") < 0
+
+    report('source is "%s"' % opts["source-url"])
+
+    # Get previously merged revisions (except when command is init)
+    if str(cmd) != "init":
+        opts["merged-revs"] = merge_props_to_revision_set(branch_props,
+                                                          opts["source-pathid"])
+
+    # Perform the action
+    cmd(branch_dir, branch_props)
+
+
+if __name__ == "__main__":
+    try:
+        main(sys.argv[1:])
+    except LaunchError, (ret, cmd, out):
+        err_msg = "command execution failed (exit code: %d)\n" % ret
+        err_msg += cmd + "\n"
+        err_msg += "".join(out)
+        error(err_msg)
+    except KeyboardInterrupt:
+        # Avoid traceback on CTRL+C
+        print "aborted by user"
+        sys.exit(1)