We are going to set up our property file, myprop. Below is a snippet of the key-value pairs in the property file, where samplepairs is our Find helper that does not change. Include the property file in the build. The ANT task myreplace is created within the build file. This task gets all the keys and iterates through each key in a foreach loop, as sketched below.
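The original property file snippet is not reproduced here, so the following is a minimal sketch of how such a setup might look. The file name myprop.properties, the sample keys, and the process.key target are assumptions for illustration; propertyselector and foreach are the ANT-Contrib tasks the text describes.

    # myprop.properties -- hypothetical key-value pairs (names assumed)
    samplepairs.token1=oldvalue1
    samplepairs.token2=oldvalue2

    <!-- build file excerpt: a sketch, assuming ant-contrib is available -->
    <taskdef resource="net/sf/antcontrib/antcontrib.properties"/>
    <property file="myprop.properties"/>

    <target name="myreplace">
        <!-- collect all property names matching the pattern into a list -->
        <propertyselector property="key.list" match="samplepairs\..*"/>
        <!-- iterate over each key, handing it to a worker target -->
        <foreach list="${key.list}" param="key" target="process.key"/>
    </target>

    <target name="process.key">
        <echo message="Processing key: ${key}"/>
    </target>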
The ANT-Contrib task used to get the keys is propertyselector, which follows the standard regular expression syntax accepted by ANT's regular expression tasks. Advanced Renamer likewise supports the use of regular expressions for pattern searching and replacing in several methods. These expressions are primarily meant for power users and people with programming experience, but nonetheless, gaining knowledge of the basics will prove very rewarding.
A standard library called PCRE is used, which means that people with prior knowledge of this library will feel right at home, and those learning regular expressions for the first time will be able to apply the same skills in other similar tools. This page will try to give you basic knowledge about the use of regular expressions in the context of file renaming. A regular expression contains normal characters and metacharacters: the normal characters are interpreted literally, while the metacharacters have special meaning.
Let's start out with a simple expression; the most common method with regex support is the Replace method (see the sketch below). On the ANT side, a tokenizer splits the input into token strings and trailing delimiter strings, and there may be zero or more string filters.
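As a hypothetical illustration of the Replace method (the pattern and file names below are assumptions, not taken from the original), a simple expression might look like this:

    Pattern:     IMG_(\d+)
    Replacement: Photo_\1
    Result:      IMG_0042.jpg  ->  Photo_0042.jpg

Here the normal characters IMG_ match literally, the metacharacter sequence \d+ matches one or more digits, the parentheses capture those digits, and \1 in the replacement refers back to the captured group.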
A string filter processes a token and returns either a string or null. If the string is not null, it is passed to the next filter, and this proceeds until all the filters have been called. If a string remains after all the filters, it is output together with its associated token delimiter, if one is present. The trailing delimiter may be overridden by the delimOutput attribute.
Backslash interpretation: a number of attributes, including delimOutput, interpret backslash escapes. Custom tokenizers and string filters can be declared using the typedef task. Some of the filters may be used directly within a filterchain; in this case a tokenfilter is created implicitly, as shown below.
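A minimal sketch of a tokenfilter inside a filterchain (the file names and replacement values are assumptions for illustration):

    <copy file="input.txt" tofile="output.txt">
        <filterchain>
            <tokenfilter delimOutput="\n">
                <!-- default linetokenizer: each line becomes one token -->
                <replacestring from="sun" to="moon"/>
                <trim/>
            </tokenfilter>
        </filterchain>
    </copy>

The delimOutput attribute is one of those that interpret backslash escapes, so \n here stands for a newline.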
The linetokenizer splits the input into lines and is the default tokenizer. The filetokenizer treats all the input as a single token, so be careful not to use it on very large input. The stringtokenizer is based on java.util.StringTokenizer; it splits up the input into strings separated by whitespace or by a specified list of delimiting characters. If the stream starts with delimiter characters, the first token will be the empty string unless the delimsaretokens attribute is used. The replacestring filter is a simple filter to replace strings.
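A sketch of the stringtokenizer with an explicit delimiter list (file names and delimiters are illustrative assumptions):

    <copy file="words.txt" tofile="one-per-line.txt">
        <filterchain>
            <tokenfilter delimOutput="\n">
                <!-- split on commas and spaces, emitting one word per token -->
                <stringtokenizer delims=", "/>
            </tokenfilter>
        </filterchain>
    </copy>

Because the output delimiter is forced to \n, each whitespace- or comma-separated word ends up on its own line.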
The replacestring filter may be used directly within a filterchain. The replaceregex string filter replaces regular expressions; see the Regexp Type concerning the choice of implementation. The containsregex filter selects strings that match regular expressions, and may optionally replace the matched regular expression; again, see the Regexp Type concerning the choice of regular expression implementation.
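A sketch combining both regex filters (the patterns and file names are assumptions for illustration):

    <copy file="build.log" tofile="errors.txt">
        <filterchain>
            <tokenfilter>
                <!-- keep only lines containing ERROR, stripping the prefix -->
                <containsregex pattern="^.*ERROR: (.*)$" replace="\1"/>
                <!-- then normalize backslashes to forward slashes -->
                <replaceregex pattern="\\" replace="/" flags="g"/>
            </tokenfilter>
        </filterchain>
    </copy>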
The trim filter trims whitespace from the start and end of tokens, and the ignoreblank filter removes empty tokens. The uniqfilter suppresses all tokens that match their ancestor token; it is most useful if combined with a sort filter. Turning to mappers, the unpackage mapper replaces the dots in a package name with directory separators; it shares the same syntax as the glob mapper. The chained mapper implementation can contain multiple nested mappers, and its to and from attributes are ignored. File mapping is performed by passing the source filename to the first nested mapper, its results to the second, and so on. The target filenames generated by the last nested mapper comprise the ultimate results of the mapping operation. The filter mapper implementation applies a filterchain to the source file name.
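A sketch of a chained mapper that ends in a filter mapper (directories and patterns are illustrative assumptions):

    <copy todir="dest">
        <fileset dir="src" includes="**/*.java"/>
        <chainedmapper>
            <!-- first drop the directory part of each source name -->
            <flattenmapper/>
            <!-- then rewrite the bare file name through a filterchain -->
            <filtermapper>
                <replacestring from=".java" to=".java.bak"/>
            </filtermapper>
        </chainedmapper>
    </copy>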
The script mapper executes a script; see the Script task for an explanation of scripts and dependencies, and for how to use the nested script element. To use this mapper, the script needs access to the source file name and the ability to return multiple mappings.
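A minimal sketch of the script mapper (the language choice and mapping logic are illustrative): the script reads the source file name from the source variable and registers each result with self.addMappedName, so calling it more than once returns multiple mappings.

    <scriptmapper language="javascript">
        // map each source name to an upper- and a lower-case variant
        self.addMappedName(source.toUpperCase());
        self.addMappedName(source.toLowerCase());
    </scriptmapper>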