vendor cleanup: remove unused, non-Go, and test files

This commit is contained in:
Madhu Rajanna
2019-01-16 00:05:52 +05:30
parent 52cf4aa902
commit b10ba188e7
15421 changed files with 17 additions and 4208853 deletions

View File

@ -1,10 +0,0 @@
# Treat all files in this repo as binary, with no git magic updating
# line endings. Windows users contributing to Go will need to use a
# modern version of git and editors capable of LF line endings.
#
# We'll prevent accidental CRLF line endings from entering the repo
# via the git-review gofmt checks.
#
# See golang.org/issue/9281
* -text

View File

@ -1,6 +0,0 @@
# Add no patterns to .gitignore except for files generated by the build.
last-change
/DATA
# This file is rather large and the tests really only need to be run
# after generation.
/unicode/norm/data_test.go

View File

@ -1,31 +0,0 @@
# Contributing to Go
Go is an open source project.
It is the work of hundreds of contributors. We appreciate your help!
## Filing issues
When [filing an issue](https://golang.org/issue/new), make sure to answer these five questions:
1. What version of Go are you using (`go version`)?
2. What operating system and processor architecture are you using?
3. What did you do?
4. What did you expect to see?
5. What did you see instead?
General questions should go to the [golang-nuts mailing list](https://groups.google.com/group/golang-nuts) instead of the issue tracker.
The gophers there will answer or ask you to file an issue if you've tripped over a bug.
## Contributing code
Please read the [Contribution Guidelines](https://golang.org/doc/contribute.html)
before sending patches.
**We do not accept GitHub pull requests**
(we use [Gerrit](https://code.google.com/p/gerrit/) instead for code review).
Unless otherwise noted, the Go source files are distributed under
the BSD-style license found in the LICENSE file.

93
vendor/golang.org/x/text/README.md generated vendored
View File

@ -1,93 +0,0 @@
# Go Text
This repository holds supplementary Go libraries for text processing, many involving Unicode.
## Semantic Versioning
This repo uses Semantic versioning (http://semver.org/), so
1. MAJOR version when you make incompatible API changes,
1. MINOR version when you add functionality in a backwards-compatible manner, and
1. PATCH version when you make backwards-compatible bug fixes.
Until version 1.0.0 of x/text is reached, the minor version is considered a
major version. So going from 0.1.0 to 0.2.0 is considered to be a major version
bump.
A major new CLDR version is mapped to a minor version increase in x/text.
Any other new CLDR version is mapped to a patch version increase in x/text.
It is important that the Unicode version used in `x/text` matches the one used
by your Go compiler. The `x/text` repository supports multiple versions of
Unicode and will match the version of Unicode to that of the Go compiler. At the
moment this is supported for Go compilers from version 1.7.
## Download/Install
The easiest way to install is to run `go get -u golang.org/x/text`. You can
also manually git clone the repository to `$GOPATH/src/golang.org/x/text`.
## Contribute
To submit changes to this repository, see http://golang.org/doc/contribute.html.
## Testing
Run
go test ./...
from this directory to run all tests. Add the "-tags icu" flag to also run
ICU conformance tests (if available). This requires that you have the correct
ICU version installed on your system.
TODO:
- updating unversioned source files.
## Generating Tables
To generate the tables in this repository (except for the encoding
tables), run `go generate` from this directory. By default tables are
generated for the Unicode version in core and the CLDR version defined in
golang.org/x/text/unicode/cldr.
Running go generate will as a side effect create a DATA subdirectory in this
directory which holds all files that are used as a source for generating the
tables. This directory will also serve as a cache.
## Versions
To update a Unicode version run
UNICODE_VERSION=x.x.x go generate
where `x.x.x` must correspond to a directory in http://www.unicode.org/Public/.
If this version is newer than the version in core it will also update the
relevant packages there. The idna package in x/net will always be updated.
To update a CLDR version run
CLDR_VERSION=version go generate
where `version` must correspond to a directory in
http://www.unicode.org/Public/cldr/.
Note that the code gets adapted over time to changes in the data and that
backwards compatibility is not maintained.
So updating to a different version may not work.
The files in DATA/{iana|icu|w3|whatwg} are currently not versioned.
## Report Issues / Send Patches
This repository uses Gerrit for code changes. To learn how to submit changes to
this repository, see https://golang.org/doc/contribute.html.
The main issue tracker for the text repository is located at
https://github.com/golang/go/issues. Prefix your issue with "x/text:" in the
subject line, so it is easy to find.

View File

@ -1,162 +0,0 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:generate go run gen.go gen_trieval.go
// Package cases provides general and language-specific case mappers.
package cases // import "golang.org/x/text/cases"
import (
"golang.org/x/text/language"
"golang.org/x/text/transform"
)
// References:
// - Unicode Reference Manual Chapter 3.13, 4.2, and 5.18.
// - http://www.unicode.org/reports/tr29/
// - http://www.unicode.org/Public/6.3.0/ucd/CaseFolding.txt
// - http://www.unicode.org/Public/6.3.0/ucd/SpecialCasing.txt
// - http://www.unicode.org/Public/6.3.0/ucd/DerivedCoreProperties.txt
// - http://www.unicode.org/Public/6.3.0/ucd/auxiliary/WordBreakProperty.txt
// - http://www.unicode.org/Public/6.3.0/ucd/auxiliary/WordBreakTest.txt
// - http://userguide.icu-project.org/transforms/casemappings
// TODO:
// - Case folding
// - Wide and Narrow?
// - Segmenter option for title casing.
// - ASCII fast paths
// - Encode Soft-Dotted property within trie somehow.
// A Caser transforms given input to a certain case. It implements
// transform.Transformer.
//
// A Caser may be stateful and should therefore not be shared between
// goroutines.
type Caser struct {
t transform.SpanningTransformer
}
// Bytes returns a new byte slice with the result of converting b to the case
// form implemented by c.
func (c Caser) Bytes(b []byte) []byte {
b, _, _ = transform.Bytes(c.t, b)
return b
}
// String returns a string with the result of transforming s to the case form
// implemented by c.
func (c Caser) String(s string) string {
s, _, _ = transform.String(c.t, s)
return s
}
// Reset resets the Caser to be reused for new input after a previous call to
// Transform.
func (c Caser) Reset() { c.t.Reset() }
// Transform implements the transform.Transformer interface and transforms the
// given input to the case form implemented by c.
func (c Caser) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
return c.t.Transform(dst, src, atEOF)
}
// Span implements the transform.SpanningTransformer interface.
func (c Caser) Span(src []byte, atEOF bool) (n int, err error) {
return c.t.Span(src, atEOF)
}
// Upper returns a Caser for language-specific uppercasing.
func Upper(t language.Tag, opts ...Option) Caser {
return Caser{makeUpper(t, getOpts(opts...))}
}
// Lower returns a Caser for language-specific lowercasing.
func Lower(t language.Tag, opts ...Option) Caser {
return Caser{makeLower(t, getOpts(opts...))}
}
// Title returns a Caser for language-specific title casing. It uses an
// approximation of the default Unicode Word Break algorithm.
func Title(t language.Tag, opts ...Option) Caser {
return Caser{makeTitle(t, getOpts(opts...))}
}
// Fold returns a Caser that implements Unicode case folding. The returned Caser
// is stateless and safe to use concurrently by multiple goroutines.
//
// Case folding does not normalize the input and may not preserve a normal form.
// Use the collate or search package for more convenient and linguistically
// sound comparisons. Use golang.org/x/text/secure/precis for string comparisons
// where security aspects are a concern.
func Fold(opts ...Option) Caser {
return Caser{makeFold(getOpts(opts...))}
}
// An Option is used to modify the behavior of a Caser.
type Option func(o options) options
// TODO: consider these options to take a boolean as well, like FinalSigma.
// The advantage of using this approach is that other providers of a lower-case
// algorithm could set different defaults by prefixing a user-provided slice
// of options with their own. This is handy, for instance, for the precis
// package which would override the default to not handle the Greek final sigma.
var (
// NoLower disables the lowercasing of non-leading letters for a title
// caser.
NoLower Option = noLower
// Compact omits mappings in case folding for characters that would grow the
// input. (Unimplemented.)
Compact Option = compact
)
// TODO: option to preserve a normal form, if applicable?
type options struct {
noLower bool
simple bool
// TODO: segmenter, max ignorable, alternative versions, etc.
ignoreFinalSigma bool
}
func getOpts(o ...Option) (res options) {
for _, f := range o {
res = f(res)
}
return
}
func noLower(o options) options {
o.noLower = true
return o
}
func compact(o options) options {
o.simple = true
return o
}
// HandleFinalSigma specifies whether the special handling of Greek final sigma
// should be enabled. Unicode prescribes handling the Greek final sigma for all
// locales, but standards like IDNA and PRECIS override this default.
func HandleFinalSigma(enable bool) Option {
if enable {
return handleFinalSigma
}
return ignoreFinalSigma
}
func ignoreFinalSigma(o options) options {
o.ignoreFinalSigma = true
return o
}
func handleFinalSigma(o options) options {
o.ignoreFinalSigma = false
return o
}
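The Casers above are plain transform.SpanningTransformers, so the package is typically used through the String and Bytes helpers. Below is a minimal usage sketch (not part of the vendored file); the Turkish, Dutch, and NoLower outputs match the Example file later in this commit, and the folded output assumes the standard full case folding of ß to "ss".

package main

import (
	"fmt"

	"golang.org/x/text/cases"
	"golang.org/x/text/language"
)

func main() {
	// Language-specific mappings: Turkish dots its capital I.
	fmt.Println(cases.Upper(language.Turkish).String("i with dot")) // İ WİTH DOT
	// Dutch title casing keeps the "ij" digraph together.
	fmt.Println(cases.Title(language.Dutch).String("'n ijsberg")) // 'n IJsberg
	// NoLower leaves non-leading letters untouched when title casing.
	fmt.Println(cases.Title(language.Und, cases.NoLower).String("here comes O'Brian")) // Here Comes O'Brian
	// Fold is stateless and safe for concurrent use.
	fmt.Println(cases.Fold().String("Go ß")) // go ss
}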

View File

@ -1,376 +0,0 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package cases
import "golang.org/x/text/transform"
// A context is used for iterating over source bytes, fetching case info and
// writing to a destination buffer.
//
// Casing operations may need more than one rune of context to decide how a rune
// should be cased. Casing implementations should call checkpoint on context
// whenever it is known to be safe to return the runes processed so far.
//
// It is recommended for implementations to not allow for more than 30 case
// ignorables as lookahead (analogous to the limit in norm) and to use state if
// unbounded lookahead is needed for cased runes.
type context struct {
dst, src []byte
atEOF bool
pDst int // pDst points past the last written rune in dst.
pSrc int // pSrc points to the start of the currently scanned rune.
// checkpoints safe to return in Transform, where nDst <= pDst and nSrc <= pSrc.
nDst, nSrc int
err error
sz int // size of current rune
info info // case information of currently scanned rune
// State preserved across calls to Transform.
isMidWord bool // false if next cased letter needs to be title-cased.
}
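// Usage sketch (added for clarity; not part of the original file): the case
// mappers in this package drive a context in a loop, as the Transform
// implementations elsewhere in the package (for example fold.go) do:
//
//	c := context{dst: dst, src: src, atEOF: atEOF}
//	for c.next() {
//		foldFull(&c) // or lower, upper, title
//		c.checkpoint()
//	}
//	return c.ret()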
func (c *context) Reset() {
c.isMidWord = false
}
// ret returns the return values for the Transform method. It checks whether
// there were insufficient bytes in src to complete and introduces an error
// accordingly, if necessary.
func (c *context) ret() (nDst, nSrc int, err error) {
if c.err != nil || c.nSrc == len(c.src) {
return c.nDst, c.nSrc, c.err
}
// This point is only reached by mappers if there was no short destination
// buffer. This means that the source buffer was exhausted and that c.sz was
// set to 0 by next.
if c.atEOF && c.pSrc == len(c.src) {
return c.pDst, c.pSrc, nil
}
return c.nDst, c.nSrc, transform.ErrShortSrc
}
// retSpan returns the return values for the Span method. It checks whether
// there were insufficient bytes in src to complete and introduces an error
// accordingly, if necessary.
func (c *context) retSpan() (n int, err error) {
_, nSrc, err := c.ret()
return nSrc, err
}
// checkpoint sets the return value buffer points for Transform to the current
// positions.
func (c *context) checkpoint() {
if c.err == nil {
c.nDst, c.nSrc = c.pDst, c.pSrc+c.sz
}
}
// unreadRune causes the last rune read by next to be reread on the next
// invocation of next. Only one unreadRune may be called after a call to next.
func (c *context) unreadRune() {
c.sz = 0
}
func (c *context) next() bool {
c.pSrc += c.sz
if c.pSrc == len(c.src) || c.err != nil {
c.info, c.sz = 0, 0
return false
}
v, sz := trie.lookup(c.src[c.pSrc:])
c.info, c.sz = info(v), sz
if c.sz == 0 {
if c.atEOF {
// A zero size means we have an incomplete rune. If we are atEOF,
// this means it is an illegal rune, which we will consume one
// byte at a time.
c.sz = 1
} else {
c.err = transform.ErrShortSrc
return false
}
}
return true
}
// writeBytes adds bytes to dst.
func (c *context) writeBytes(b []byte) bool {
if len(c.dst)-c.pDst < len(b) {
c.err = transform.ErrShortDst
return false
}
// This loop is faster than using copy.
for _, ch := range b {
c.dst[c.pDst] = ch
c.pDst++
}
return true
}
// writeString writes the given string to dst.
func (c *context) writeString(s string) bool {
if len(c.dst)-c.pDst < len(s) {
c.err = transform.ErrShortDst
return false
}
// This loop is faster than using copy.
for i := 0; i < len(s); i++ {
c.dst[c.pDst] = s[i]
c.pDst++
}
return true
}
// copy writes the current rune to dst.
func (c *context) copy() bool {
return c.writeBytes(c.src[c.pSrc : c.pSrc+c.sz])
}
// copyXOR copies the current rune to dst and modifies it by applying the XOR
// pattern of the case info. It is the responsibility of the caller to ensure
// that this is a rune with a XOR pattern defined.
func (c *context) copyXOR() bool {
if !c.copy() {
return false
}
if c.info&xorIndexBit == 0 {
// Fast path for 6-bit XOR pattern, which covers most cases.
c.dst[c.pDst-1] ^= byte(c.info >> xorShift)
} else {
// Interpret XOR bits as an index.
// TODO: test performance for unrolling this loop. Verify that we have
// at least two bytes and at most three.
idx := c.info >> xorShift
for p := c.pDst - 1; ; p-- {
c.dst[p] ^= xorData[idx]
idx--
if xorData[idx] == 0 {
break
}
}
}
return true
}
// hasPrefix returns true if src[pSrc:] starts with the given string.
func (c *context) hasPrefix(s string) bool {
b := c.src[c.pSrc:]
if len(b) < len(s) {
return false
}
for i, c := range b[:len(s)] {
if c != s[i] {
return false
}
}
return true
}
// caseType returns an info with only the case bits, normalized to either
// cLower, cUpper, cTitle or cUncased.
func (c *context) caseType() info {
cm := c.info & 0x7
if cm < 4 {
return cm
}
if cm >= cXORCase {
// xor the last bit of the rune with the case type bits.
b := c.src[c.pSrc+c.sz-1]
return info(b&1) ^ cm&0x3
}
if cm == cIgnorableCased {
return cLower
}
return cUncased
}
// lower writes the lowercase version of the current rune to dst.
func lower(c *context) bool {
ct := c.caseType()
if c.info&hasMappingMask == 0 || ct == cLower {
return c.copy()
}
if c.info&exceptionBit == 0 {
return c.copyXOR()
}
e := exceptions[c.info>>exceptionShift:]
offset := 2 + e[0]&lengthMask // size of header + fold string
if nLower := (e[1] >> lengthBits) & lengthMask; nLower != noChange {
return c.writeString(e[offset : offset+nLower])
}
return c.copy()
}
func isLower(c *context) bool {
ct := c.caseType()
if c.info&hasMappingMask == 0 || ct == cLower {
return true
}
if c.info&exceptionBit == 0 {
c.err = transform.ErrEndOfSpan
return false
}
e := exceptions[c.info>>exceptionShift:]
if nLower := (e[1] >> lengthBits) & lengthMask; nLower != noChange {
c.err = transform.ErrEndOfSpan
return false
}
return true
}
// upper writes the uppercase version of the current rune to dst.
func upper(c *context) bool {
ct := c.caseType()
if c.info&hasMappingMask == 0 || ct == cUpper {
return c.copy()
}
if c.info&exceptionBit == 0 {
return c.copyXOR()
}
e := exceptions[c.info>>exceptionShift:]
offset := 2 + e[0]&lengthMask // size of header + fold string
// Get length of first special case mapping.
n := (e[1] >> lengthBits) & lengthMask
if ct == cTitle {
// The first special case mapping is for lower. Set n to the second.
if n == noChange {
n = 0
}
n, e = e[1]&lengthMask, e[n:]
}
if n != noChange {
return c.writeString(e[offset : offset+n])
}
return c.copy()
}
// isUpper reports whether the current rune is in upper case.
func isUpper(c *context) bool {
ct := c.caseType()
if c.info&hasMappingMask == 0 || ct == cUpper {
return true
}
if c.info&exceptionBit == 0 {
c.err = transform.ErrEndOfSpan
return false
}
e := exceptions[c.info>>exceptionShift:]
// Get length of first special case mapping.
n := (e[1] >> lengthBits) & lengthMask
if ct == cTitle {
n = e[1] & lengthMask
}
if n != noChange {
c.err = transform.ErrEndOfSpan
return false
}
return true
}
// title writes the title case version of the current rune to dst.
func title(c *context) bool {
ct := c.caseType()
if c.info&hasMappingMask == 0 || ct == cTitle {
return c.copy()
}
if c.info&exceptionBit == 0 {
if ct == cLower {
return c.copyXOR()
}
return c.copy()
}
// Get the exception data.
e := exceptions[c.info>>exceptionShift:]
offset := 2 + e[0]&lengthMask // size of header + fold string
nFirst := (e[1] >> lengthBits) & lengthMask
if nTitle := e[1] & lengthMask; nTitle != noChange {
if nFirst != noChange {
e = e[nFirst:]
}
return c.writeString(e[offset : offset+nTitle])
}
if ct == cLower && nFirst != noChange {
// Use the uppercase version instead.
return c.writeString(e[offset : offset+nFirst])
}
// Already in correct case.
return c.copy()
}
// isTitle reports whether the current rune is in title case.
func isTitle(c *context) bool {
ct := c.caseType()
if c.info&hasMappingMask == 0 || ct == cTitle {
return true
}
if c.info&exceptionBit == 0 {
if ct == cLower {
c.err = transform.ErrEndOfSpan
return false
}
return true
}
// Get the exception data.
e := exceptions[c.info>>exceptionShift:]
if nTitle := e[1] & lengthMask; nTitle != noChange {
c.err = transform.ErrEndOfSpan
return false
}
nFirst := (e[1] >> lengthBits) & lengthMask
if ct == cLower && nFirst != noChange {
c.err = transform.ErrEndOfSpan
return false
}
return true
}
// foldFull writes the foldFull version of the current rune to dst.
func foldFull(c *context) bool {
if c.info&hasMappingMask == 0 {
return c.copy()
}
ct := c.caseType()
if c.info&exceptionBit == 0 {
if ct != cLower || c.info&inverseFoldBit != 0 {
return c.copyXOR()
}
return c.copy()
}
e := exceptions[c.info>>exceptionShift:]
n := e[0] & lengthMask
if n == 0 {
if ct == cLower {
return c.copy()
}
n = (e[1] >> lengthBits) & lengthMask
}
return c.writeString(e[2 : 2+n])
}
// isFoldFull reports whether the current rune is already in its case-folded form.
func isFoldFull(c *context) bool {
if c.info&hasMappingMask == 0 {
return true
}
ct := c.caseType()
if c.info&exceptionBit == 0 {
if ct != cLower || c.info&inverseFoldBit != 0 {
c.err = transform.ErrEndOfSpan
return false
}
return true
}
e := exceptions[c.info>>exceptionShift:]
n := e[0] & lengthMask
if n == 0 && ct == cLower {
return true
}
c.err = transform.ErrEndOfSpan
return false
}

View File

@ -1,438 +0,0 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package cases
import (
"strings"
"testing"
"unicode"
"golang.org/x/text/internal/testtext"
"golang.org/x/text/language"
"golang.org/x/text/transform"
"golang.org/x/text/unicode/norm"
"golang.org/x/text/unicode/rangetable"
)
// The following definitions are taken directly from Chapter 3 of The Unicode
// Standard.
func propCased(r rune) bool {
return propLower(r) || propUpper(r) || unicode.IsTitle(r)
}
func propLower(r rune) bool {
return unicode.IsLower(r) || unicode.Is(unicode.Other_Lowercase, r)
}
func propUpper(r rune) bool {
return unicode.IsUpper(r) || unicode.Is(unicode.Other_Uppercase, r)
}
func propIgnore(r rune) bool {
if unicode.In(r, unicode.Mn, unicode.Me, unicode.Cf, unicode.Lm, unicode.Sk) {
return true
}
return caseIgnorable[r]
}
func hasBreakProp(r rune) bool {
// binary search over ranges
lo := 0
hi := len(breakProp)
for lo < hi {
m := lo + (hi-lo)/2
bp := &breakProp[m]
if bp.lo <= r && r <= bp.hi {
return true
}
if r < bp.lo {
hi = m
} else {
lo = m + 1
}
}
return false
}
func contextFromRune(r rune) *context {
c := context{dst: make([]byte, 128), src: []byte(string(r)), atEOF: true}
c.next()
return &c
}
func TestCaseProperties(t *testing.T) {
if unicode.Version != UnicodeVersion {
// Properties of existing code points may change by Unicode version, so
// we need to skip.
t.Skipf("Skipping as core Unicode version %s different than %s", unicode.Version, UnicodeVersion)
}
assigned := rangetable.Assigned(UnicodeVersion)
coreVersion := rangetable.Assigned(unicode.Version)
for r := rune(0); r <= lastRuneForTesting; r++ {
if !unicode.In(r, assigned) || !unicode.In(r, coreVersion) {
continue
}
c := contextFromRune(r)
if got, want := c.info.isCaseIgnorable(), propIgnore(r); got != want {
t.Errorf("caseIgnorable(%U): got %v; want %v (%x)", r, got, want, c.info)
}
// New letters may change case types, but existing case pairings should
// not change. See Case Pair Stability in
// http://unicode.org/policies/stability_policy.html.
if rf := unicode.SimpleFold(r); rf != r && unicode.In(rf, assigned) {
if got, want := c.info.isCased(), propCased(r); got != want {
t.Errorf("cased(%U): got %v; want %v (%x)", r, got, want, c.info)
}
if got, want := c.caseType() == cUpper, propUpper(r); got != want {
t.Errorf("upper(%U): got %v; want %v (%x)", r, got, want, c.info)
}
if got, want := c.caseType() == cLower, propLower(r); got != want {
t.Errorf("lower(%U): got %v; want %v (%x)", r, got, want, c.info)
}
}
if got, want := c.info.isBreak(), hasBreakProp(r); got != want {
t.Errorf("isBreak(%U): got %v; want %v (%x)", r, got, want, c.info)
}
}
// TODO: get title case from unicode file.
}
func TestMapping(t *testing.T) {
assigned := rangetable.Assigned(UnicodeVersion)
coreVersion := rangetable.Assigned(unicode.Version)
if coreVersion == nil {
coreVersion = assigned
}
apply := func(r rune, f func(c *context) bool) string {
c := contextFromRune(r)
f(c)
return string(c.dst[:c.pDst])
}
for r, tt := range special {
if got, want := apply(r, lower), tt.toLower; got != want {
t.Errorf("lowerSpecial:(%U): got %+q; want %+q", r, got, want)
}
if got, want := apply(r, title), tt.toTitle; got != want {
t.Errorf("titleSpecial:(%U): got %+q; want %+q", r, got, want)
}
if got, want := apply(r, upper), tt.toUpper; got != want {
t.Errorf("upperSpecial:(%U): got %+q; want %+q", r, got, want)
}
}
for r := rune(0); r <= lastRuneForTesting; r++ {
if !unicode.In(r, assigned) || !unicode.In(r, coreVersion) {
continue
}
if rf := unicode.SimpleFold(r); rf == r || !unicode.In(rf, assigned) {
continue
}
if _, ok := special[r]; ok {
continue
}
want := string(unicode.ToLower(r))
if got := apply(r, lower); got != want {
t.Errorf("lower:%q (%U): got %q %U; want %q %U", r, r, got, []rune(got), want, []rune(want))
}
want = string(unicode.ToUpper(r))
if got := apply(r, upper); got != want {
t.Errorf("upper:%q (%U): got %q %U; want %q %U", r, r, got, []rune(got), want, []rune(want))
}
want = string(unicode.ToTitle(r))
if got := apply(r, title); got != want {
t.Errorf("title:%q (%U): got %q %U; want %q %U", r, r, got, []rune(got), want, []rune(want))
}
}
}
func runeFoldData(r rune) (x struct{ simple, full, special string }) {
x = foldMap[r]
if x.simple == "" {
x.simple = string(unicode.ToLower(r))
}
if x.full == "" {
x.full = string(unicode.ToLower(r))
}
if x.special == "" {
x.special = x.full
}
return
}
func TestFoldData(t *testing.T) {
assigned := rangetable.Assigned(UnicodeVersion)
coreVersion := rangetable.Assigned(unicode.Version)
if coreVersion == nil {
coreVersion = assigned
}
apply := func(r rune, f func(c *context) bool) (string, info) {
c := contextFromRune(r)
f(c)
return string(c.dst[:c.pDst]), c.info.cccType()
}
for r := rune(0); r <= lastRuneForTesting; r++ {
if !unicode.In(r, assigned) || !unicode.In(r, coreVersion) {
continue
}
x := runeFoldData(r)
if got, info := apply(r, foldFull); got != x.full {
t.Errorf("full:%q (%U): got %q %U; want %q %U (ccc=%x)", r, r, got, []rune(got), x.full, []rune(x.full), info)
}
// TODO: special and simple.
}
}
func TestCCC(t *testing.T) {
assigned := rangetable.Assigned(UnicodeVersion)
normVersion := rangetable.Assigned(norm.Version)
for r := rune(0); r <= lastRuneForTesting; r++ {
if !unicode.In(r, assigned) || !unicode.In(r, normVersion) {
continue
}
c := contextFromRune(r)
p := norm.NFC.PropertiesString(string(r))
want := cccOther
switch p.CCC() {
case 0:
want = cccZero
case above:
want = cccAbove
}
if got := c.info.cccType(); got != want {
t.Errorf("%U: got %x; want %x", r, got, want)
}
}
}
func TestWordBreaks(t *testing.T) {
for _, tt := range breakTest {
testtext.Run(t, tt, func(t *testing.T) {
parts := strings.Split(tt, "|")
want := ""
for _, s := range parts {
found := false
// This algorithm implements title casing given word breaks
// as defined in the Unicode standard 3.13 R3.
for _, r := range s {
title := unicode.ToTitle(r)
lower := unicode.ToLower(r)
if !found && title != lower {
found = true
want += string(title)
} else {
want += string(lower)
}
}
}
src := strings.Join(parts, "")
got := Title(language.Und).String(src)
if got != want {
t.Errorf("got %q; want %q", got, want)
}
})
}
}
func TestContext(t *testing.T) {
tests := []struct {
desc string
dstSize int
atEOF bool
src string
out string
nSrc int
err error
ops string
prefixArg string
prefixWant bool
}{{
desc: "next: past end, atEOF, no checkpoint",
dstSize: 10,
atEOF: true,
src: "12",
out: "",
nSrc: 2,
ops: "next;next;next",
// Test that calling prefix with a non-empty argument when the buffer
// is depleted returns false.
prefixArg: "x",
prefixWant: false,
}, {
desc: "next: not at end, atEOF, no checkpoint",
dstSize: 10,
atEOF: false,
src: "12",
out: "",
nSrc: 0,
err: transform.ErrShortSrc,
ops: "next;next",
prefixArg: "",
prefixWant: true,
}, {
desc: "next: past end, !atEOF, no checkpoint",
dstSize: 10,
atEOF: false,
src: "12",
out: "",
nSrc: 0,
err: transform.ErrShortSrc,
ops: "next;next;next",
prefixArg: "",
prefixWant: true,
}, {
desc: "next: past end, !atEOF, checkpoint",
dstSize: 10,
atEOF: false,
src: "12",
out: "",
nSrc: 2,
ops: "next;next;checkpoint;next",
prefixArg: "",
prefixWant: true,
}, {
desc: "copy: exact count, atEOF, no checkpoint",
dstSize: 2,
atEOF: true,
src: "12",
out: "12",
nSrc: 2,
ops: "next;copy;next;copy;next",
prefixArg: "",
prefixWant: true,
}, {
desc: "copy: past end, !atEOF, no checkpoint",
dstSize: 2,
atEOF: false,
src: "12",
out: "",
nSrc: 0,
err: transform.ErrShortSrc,
ops: "next;copy;next;copy;next",
prefixArg: "",
prefixWant: true,
}, {
desc: "copy: past end, !atEOF, checkpoint",
dstSize: 2,
atEOF: false,
src: "12",
out: "12",
nSrc: 2,
ops: "next;copy;next;copy;checkpoint;next",
prefixArg: "",
prefixWant: true,
}, {
desc: "copy: short dst",
dstSize: 1,
atEOF: false,
src: "12",
out: "",
nSrc: 0,
err: transform.ErrShortDst,
ops: "next;copy;next;copy;checkpoint;next",
prefixArg: "12",
prefixWant: false,
}, {
desc: "copy: short dst, checkpointed",
dstSize: 1,
atEOF: false,
src: "12",
out: "1",
nSrc: 1,
err: transform.ErrShortDst,
ops: "next;copy;checkpoint;next;copy;next",
prefixArg: "",
prefixWant: true,
}, {
desc: "writeString: simple",
dstSize: 3,
atEOF: true,
src: "1",
out: "1ab",
nSrc: 1,
ops: "next;copy;writeab;next",
prefixArg: "",
prefixWant: true,
}, {
desc: "writeString: short dst",
dstSize: 2,
atEOF: true,
src: "12",
out: "",
nSrc: 0,
err: transform.ErrShortDst,
ops: "next;copy;writeab;next",
prefixArg: "2",
prefixWant: true,
}, {
desc: "writeString: simple",
dstSize: 3,
atEOF: true,
src: "12",
out: "1ab",
nSrc: 2,
ops: "next;copy;next;writeab;next",
prefixArg: "",
prefixWant: true,
}, {
desc: "writeString: short dst",
dstSize: 2,
atEOF: true,
src: "12",
out: "",
nSrc: 0,
err: transform.ErrShortDst,
ops: "next;copy;next;writeab;next",
prefixArg: "1",
prefixWant: false,
}, {
desc: "prefix",
dstSize: 2,
atEOF: true,
src: "12",
out: "",
nSrc: 0,
// Context will assign an ErrShortSrc if the input wasn't exhausted.
err: transform.ErrShortSrc,
prefixArg: "12",
prefixWant: true,
}}
for _, tt := range tests {
c := context{dst: make([]byte, tt.dstSize), src: []byte(tt.src), atEOF: tt.atEOF}
for _, op := range strings.Split(tt.ops, ";") {
switch op {
case "next":
c.next()
case "checkpoint":
c.checkpoint()
case "writeab":
c.writeString("ab")
case "copy":
c.copy()
case "":
default:
t.Fatalf("unknown op %q", op)
}
}
if got := c.hasPrefix(tt.prefixArg); got != tt.prefixWant {
t.Errorf("%s:\nprefix was %v; want %v", tt.desc, got, tt.prefixWant)
}
nDst, nSrc, err := c.ret()
if err != tt.err {
t.Errorf("%s:\nerror was %v; want %v", tt.desc, err, tt.err)
}
if out := string(c.dst[:nDst]); out != tt.out {
t.Errorf("%s:\nout was %q; want %q", tt.desc, out, tt.out)
}
if nSrc != tt.nSrc {
t.Errorf("%s:\nnSrc was %d; want %d", tt.desc, nSrc, tt.nSrc)
}
}
}

View File

@ -1,53 +0,0 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package cases_test
import (
"fmt"
"golang.org/x/text/cases"
"golang.org/x/text/language"
)
func Example() {
src := []string{
"hello world!",
"i with dot",
"'n ijsberg",
"here comes O'Brian",
}
for _, c := range []cases.Caser{
cases.Lower(language.Und),
cases.Upper(language.Turkish),
cases.Title(language.Dutch),
cases.Title(language.Und, cases.NoLower),
} {
fmt.Println()
for _, s := range src {
fmt.Println(c.String(s))
}
}
// Output:
// hello world!
// i with dot
// 'n ijsberg
// here comes o'brian
//
// HELLO WORLD!
// İ WİTH DOT
// 'N İJSBERG
// HERE COMES O'BRİAN
//
// Hello World!
// I With Dot
// 'n IJsberg
// Here Comes O'brian
//
// Hello World!
// I With Dot
// 'N Ijsberg
// Here Comes O'Brian
}

View File

@ -1,34 +0,0 @@
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package cases
import "golang.org/x/text/transform"
type caseFolder struct{ transform.NopResetter }
// caseFolder implements the Transformer interface for doing case folding.
func (t *caseFolder) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
c := context{dst: dst, src: src, atEOF: atEOF}
for c.next() {
foldFull(&c)
c.checkpoint()
}
return c.ret()
}
func (t *caseFolder) Span(src []byte, atEOF bool) (n int, err error) {
c := context{src: src, atEOF: atEOF}
for c.next() && isFoldFull(&c) {
c.checkpoint()
}
return c.retSpan()
}
func makeFold(o options) transform.SpanningTransformer {
// TODO: Special case folding, through option Language, Special/Turkic, or
// both.
// TODO: Implement Compact options.
return &caseFolder{}
}

View File

@ -1,51 +0,0 @@
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package cases
import (
"testing"
"golang.org/x/text/internal/testtext"
)
var foldTestCases = []string{
"βß\u13f8", // "βssᏰ"
"ab\u13fc\uab7aꭰ", // ab
"affifflast", // affifflast
"Iİiı\u0345", // ii̇iıι
"µµΜΜςσΣΣ", // μμμμσσσσ
}
func TestFold(t *testing.T) {
for _, tc := range foldTestCases {
testEntry := func(name string, c Caser, m func(r rune) string) {
want := ""
for _, r := range tc {
want += m(r)
}
if got := c.String(tc); got != want {
t.Errorf("%s(%s) = %+q; want %+q", name, tc, got, want)
}
dst := make([]byte, 256) // big enough to hold any result
src := []byte(tc)
v := testtext.AllocsPerRun(20, func() {
c.Transform(dst, src, true)
})
if v > 0 {
t.Errorf("%s(%s): number of allocs was %f; want 0", name, tc, v)
}
}
testEntry("FullFold", Fold(), func(r rune) string {
return runeFoldData(r).full
})
// TODO:
// testEntry("SimpleFold", Fold(Compact), func(r rune) string {
// return runeFoldData(r).simple
// })
// testEntry("SpecialFold", Fold(Turkic), func(r rune) string {
// return runeFoldData(r).special
// })
}
}

839
vendor/golang.org/x/text/cases/gen.go generated vendored
View File

@ -1,839 +0,0 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
// This program generates the trie for casing operations. The Unicode casing
// algorithm requires the lookup of various properties and mappings for each
// rune. The table generated by this generator combines several of the most
// frequently used of these into a single trie so that they can be accessed
// with a single lookup.
package main
import (
"bytes"
"fmt"
"io"
"io/ioutil"
"log"
"reflect"
"strconv"
"strings"
"unicode"
"golang.org/x/text/internal/gen"
"golang.org/x/text/internal/triegen"
"golang.org/x/text/internal/ucd"
"golang.org/x/text/unicode/norm"
)
func main() {
gen.Init()
genTables()
genTablesTest()
gen.Repackage("gen_trieval.go", "trieval.go", "cases")
}
// runeInfo contains all information for a rune that we care about for casing
// operations.
type runeInfo struct {
Rune rune
entry info // trie value for this rune.
CaseMode info
// Simple case mappings.
Simple [1 + maxCaseMode][]rune
// Special casing
HasSpecial bool
Conditional bool
Special [1 + maxCaseMode][]rune
// Folding
FoldSimple rune
FoldSpecial rune
FoldFull []rune
// TODO: FC_NFKC, or equivalent data.
// Properties
SoftDotted bool
CaseIgnorable bool
Cased bool
DecomposeGreek bool
BreakType string
BreakCat breakCategory
// We care mostly about 0, Above, and IotaSubscript.
CCC byte
}
type breakCategory int
const (
breakBreak breakCategory = iota
breakLetter
breakMid
)
// mapping returns the case mapping for the given case type.
func (r *runeInfo) mapping(c info) string {
if r.HasSpecial {
return string(r.Special[c])
}
if len(r.Simple[c]) != 0 {
return string(r.Simple[c])
}
return string(r.Rune)
}
func parse(file string, f func(p *ucd.Parser)) {
ucd.Parse(gen.OpenUCDFile(file), f)
}
func parseUCD() []runeInfo {
chars := make([]runeInfo, unicode.MaxRune)
get := func(r rune) *runeInfo {
c := &chars[r]
c.Rune = r
return c
}
parse("UnicodeData.txt", func(p *ucd.Parser) {
ri := get(p.Rune(0))
ri.CCC = byte(p.Int(ucd.CanonicalCombiningClass))
ri.Simple[cLower] = p.Runes(ucd.SimpleLowercaseMapping)
ri.Simple[cUpper] = p.Runes(ucd.SimpleUppercaseMapping)
ri.Simple[cTitle] = p.Runes(ucd.SimpleTitlecaseMapping)
if p.String(ucd.GeneralCategory) == "Lt" {
ri.CaseMode = cTitle
}
})
// <code>; <property>
parse("PropList.txt", func(p *ucd.Parser) {
if p.String(1) == "Soft_Dotted" {
chars[p.Rune(0)].SoftDotted = true
}
})
// <code>; <word break type>
parse("DerivedCoreProperties.txt", func(p *ucd.Parser) {
ri := get(p.Rune(0))
switch p.String(1) {
case "Case_Ignorable":
ri.CaseIgnorable = true
case "Cased":
ri.Cased = true
case "Lowercase":
ri.CaseMode = cLower
case "Uppercase":
ri.CaseMode = cUpper
}
})
// <code>; <lower> ; <title> ; <upper> ; (<condition_list> ;)?
parse("SpecialCasing.txt", func(p *ucd.Parser) {
// We drop all conditional special casing and deal with them manually in
// the language-specific case mappers. Rune 0x03A3 is the only one with
// a conditional formatting that is not language-specific. However,
// dealing with this letter is tricky, especially in a streaming
// context, so we deal with it in the Caser for Greek specifically.
ri := get(p.Rune(0))
if p.String(4) == "" {
ri.HasSpecial = true
ri.Special[cLower] = p.Runes(1)
ri.Special[cTitle] = p.Runes(2)
ri.Special[cUpper] = p.Runes(3)
} else {
ri.Conditional = true
}
})
// TODO: Use text breaking according to UAX #29.
// <code>; <word break type>
parse("auxiliary/WordBreakProperty.txt", func(p *ucd.Parser) {
ri := get(p.Rune(0))
ri.BreakType = p.String(1)
// We collapse the word breaking properties onto the categories we need.
switch p.String(1) { // TODO: officially we need to canonicalize.
case "MidLetter", "MidNumLet", "Single_Quote":
ri.BreakCat = breakMid
if !ri.CaseIgnorable {
// finalSigma relies on the fact that all breakMid runes are
// also a Case_Ignorable. Revisit this code when this changes.
log.Fatalf("Rune %U, which has a break category mid, is not a case ignorable", ri)
}
case "ALetter", "Hebrew_Letter", "Numeric", "Extend", "ExtendNumLet", "Format", "ZWJ":
ri.BreakCat = breakLetter
}
})
// <code>; <type>; <mapping>
parse("CaseFolding.txt", func(p *ucd.Parser) {
ri := get(p.Rune(0))
switch p.String(1) {
case "C":
ri.FoldSimple = p.Rune(2)
ri.FoldFull = p.Runes(2)
case "S":
ri.FoldSimple = p.Rune(2)
case "T":
ri.FoldSpecial = p.Rune(2)
case "F":
ri.FoldFull = p.Runes(2)
default:
log.Fatalf("%U: unknown type: %s", p.Rune(0), p.String(1))
}
})
return chars
}
func genTables() {
chars := parseUCD()
verifyProperties(chars)
t := triegen.NewTrie("case")
for i := range chars {
c := &chars[i]
makeEntry(c)
t.Insert(rune(i), uint64(c.entry))
}
w := gen.NewCodeWriter()
defer w.WriteVersionedGoFile("tables.go", "cases")
gen.WriteUnicodeVersion(w)
// TODO: write CLDR version after adding a mechanism to detect that the
// tables on which the manually created locale-sensitive casing code is
// based hasn't changed.
w.WriteVar("xorData", string(xorData))
w.WriteVar("exceptions", string(exceptionData))
sz, err := t.Gen(w, triegen.Compact(&sparseCompacter{}))
if err != nil {
log.Fatal(err)
}
w.Size += sz
}
func makeEntry(ri *runeInfo) {
if ri.CaseIgnorable {
if ri.Cased {
ri.entry = cIgnorableCased
} else {
ri.entry = cIgnorableUncased
}
} else {
ri.entry = ri.CaseMode
}
// TODO: handle soft-dotted.
ccc := cccOther
switch ri.CCC {
case 0: // Not_Reordered
ccc = cccZero
case above: // Above
ccc = cccAbove
}
switch ri.BreakCat {
case breakBreak:
ccc = cccBreak
case breakMid:
ri.entry |= isMidBit
}
ri.entry |= ccc
if ri.CaseMode == cUncased {
return
}
// Need to do something special.
if ri.CaseMode == cTitle || ri.HasSpecial || ri.mapping(cTitle) != ri.mapping(cUpper) {
makeException(ri)
return
}
if f := string(ri.FoldFull); len(f) > 0 && f != ri.mapping(cUpper) && f != ri.mapping(cLower) {
makeException(ri)
return
}
// Rune is either lowercase or uppercase.
orig := string(ri.Rune)
mapped := ""
if ri.CaseMode == cUpper {
mapped = ri.mapping(cLower)
} else {
mapped = ri.mapping(cUpper)
}
if len(orig) != len(mapped) {
makeException(ri)
return
}
if string(ri.FoldFull) == ri.mapping(cUpper) {
ri.entry |= inverseFoldBit
}
n := len(orig)
// Create per-byte XOR mask.
var b []byte
for i := 0; i < n; i++ {
b = append(b, orig[i]^mapped[i])
}
// Remove leading 0 bytes, but keep at least one byte.
for ; len(b) > 1 && b[0] == 0; b = b[1:] {
}
if len(b) == 1 && b[0]&0xc0 == 0 {
ri.entry |= info(b[0]) << xorShift
return
}
key := string(b)
x, ok := xorCache[key]
if !ok {
xorData = append(xorData, 0) // for detecting start of sequence
xorData = append(xorData, b...)
x = len(xorData) - 1
xorCache[key] = x
}
ri.entry |= info(x<<xorShift) | xorIndexBit
}
var xorCache = map[string]int{}
// xorData contains byte-wise XOR data for the least significant bytes of a
// UTF-8 encoded rune. An index points to the last byte. The sequence starts
// with a zero terminator.
var xorData = []byte{}
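// Worked example (illustrative values, not taken from the generated tables):
// mapping Ā (U+0100, UTF-8 C4 80) to ā (U+0101, UTF-8 C4 81) gives the
// per-byte XOR mask {0x00, 0x01}; after trimming the leading zero, the single
// byte 0x01 fits the 6-bit fast path and is stored in the trie entry itself.
// A hypothetical two-byte mask {0x01, 0x20} does not fit, so it is appended
// to xorData as 0x00, 0x01, 0x20 and the entry stores the index of the last
// byte (0x20). copyXOR in context.go then applies the bytes to the encoded
// rune from its last byte backwards until it reaches the 0x00 terminator.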
// See the comments in gen_trieval.go re "the exceptions slice".
var exceptionData = []byte{0}
// makeException encodes case mappings that cannot be expressed in a simple
// XOR diff.
func makeException(ri *runeInfo) {
ccc := ri.entry & cccMask
// Set exception bit and retain case type.
ri.entry &= 0x0007
ri.entry |= exceptionBit
if len(exceptionData) >= 1<<numExceptionBits {
log.Fatalf("%U:exceptionData too large %x > %d bits", ri.Rune, len(exceptionData), numExceptionBits)
}
// Set the offset in the exceptionData array.
ri.entry |= info(len(exceptionData) << exceptionShift)
orig := string(ri.Rune)
tc := ri.mapping(cTitle)
uc := ri.mapping(cUpper)
lc := ri.mapping(cLower)
ff := string(ri.FoldFull)
// addString sets the length of a string and adds it to the expansions array.
addString := func(s string, b *byte) {
if len(s) == 0 {
// Zero-length mappings exist, but only for conditional casing,
// which we are representing outside of this table.
log.Fatalf("%U: has zero-length mapping.", ri.Rune)
}
*b <<= 3
if s != orig {
n := len(s)
if n > 7 {
log.Fatalf("%U: mapping larger than 7 (%d)", ri.Rune, n)
}
*b |= byte(n)
exceptionData = append(exceptionData, s...)
}
}
// byte 0:
exceptionData = append(exceptionData, byte(ccc)|byte(len(ff)))
// byte 1:
p := len(exceptionData)
exceptionData = append(exceptionData, 0)
if len(ff) > 7 { // May be zero-length.
log.Fatalf("%U: fold string larger than 7 (%d)", ri.Rune, len(ff))
}
exceptionData = append(exceptionData, ff...)
ct := ri.CaseMode
if ct != cLower {
addString(lc, &exceptionData[p])
}
if ct != cUpper {
addString(uc, &exceptionData[p])
}
if ct != cTitle {
// If title is the same as upper, we set it to the original string so
// that it will be marked as not present. This implies title case is
// the same as upper case.
if tc == uc {
tc = orig
}
addString(tc, &exceptionData[p])
}
}
// sparseCompacter is a trie value block Compacter. There are many cases where
// successive runes alternate between lower- and upper-case. This Compacter
// exploits this by adding a special case type where the case value is obtained
// from or-ing it with the least-significant bit of the rune, creating large
// ranges of equal case values that compress well.
type sparseCompacter struct {
sparseBlocks [][]uint16
sparseOffsets []uint16
sparseCount int
}
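// Worked example (illustrative, not from the generated tables): in a block
// where even-indexed runes carry cUpper (011) and odd-indexed runes carry
// cLower (010), alt below rewrites every entry to the same cXORCase value
// 111, so makeSparse sees one run of identical values and the block
// compresses to a single valueRange.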
// makeSparse returns the modified values together with the number of entries
// the compacted block would contain.
func makeSparse(vals []uint64) ([]uint16, int) {
// Copy the values.
values := make([]uint16, len(vals))
for i, v := range vals {
values[i] = uint16(v)
}
alt := func(i int, v uint16) uint16 {
if cm := info(v & fullCasedMask); cm == cUpper || cm == cLower {
// Convert cLower or cUpper to cXORCase value, which has the form 11x.
xor := v
xor &^= 1
xor |= uint16(i&1) ^ (v & 1)
xor |= 0x4
return xor
}
return v
}
var count int
var previous uint16
for i, v := range values {
if v != 0 {
// Try if the unmodified value is equal to the previous.
if v == previous {
continue
}
// Try if the xor-ed value is equal to the previous value.
a := alt(i, v)
if a == previous {
values[i] = a
continue
}
// This is a new value.
count++
// Use the xor-ed value if it will be identical to the next value.
if p := i + 1; p < len(values) && alt(p, values[p]) == a {
values[i] = a
v = a
}
}
previous = v
}
return values, count
}
func (s *sparseCompacter) Size(v []uint64) (int, bool) {
_, n := makeSparse(v)
// We limit using this method to having 16 entries.
if n > 16 {
return 0, false
}
return 2 + int(reflect.TypeOf(valueRange{}).Size())*n, true
}
func (s *sparseCompacter) Store(v []uint64) uint32 {
h := uint32(len(s.sparseOffsets))
values, sz := makeSparse(v)
s.sparseBlocks = append(s.sparseBlocks, values)
s.sparseOffsets = append(s.sparseOffsets, uint16(s.sparseCount))
s.sparseCount += sz
return h
}
func (s *sparseCompacter) Handler() string {
// The sparse global variable and its lookup method is defined in gen_trieval.go.
return "sparse.lookup"
}
func (s *sparseCompacter) Print(w io.Writer) (retErr error) {
p := func(format string, args ...interface{}) {
_, err := fmt.Fprintf(w, format, args...)
if retErr == nil && err != nil {
retErr = err
}
}
ls := len(s.sparseBlocks)
if ls == len(s.sparseOffsets) {
s.sparseOffsets = append(s.sparseOffsets, uint16(s.sparseCount))
}
p("// sparseOffsets: %d entries, %d bytes\n", ls+1, (ls+1)*2)
p("var sparseOffsets = %#v\n\n", s.sparseOffsets)
ns := s.sparseCount
p("// sparseValues: %d entries, %d bytes\n", ns, ns*4)
p("var sparseValues = [%d]valueRange {", ns)
for i, values := range s.sparseBlocks {
p("\n// Block %#x, offset %#x", i, s.sparseOffsets[i])
var v uint16
for i, nv := range values {
if nv != v {
if v != 0 {
p(",hi:%#02x},", 0x80+i-1)
}
if nv != 0 {
p("\n{value:%#04x,lo:%#02x", nv, 0x80+i)
}
}
v = nv
}
if v != 0 {
p(",hi:%#02x},", 0x80+len(values)-1)
}
}
p("\n}\n\n")
return
}
// verifyProperties verifies that the properties of runes relied upon in the
// implementation hold. Each property is marked with an identifier that is
// referred to in the places where it is used.
func verifyProperties(chars []runeInfo) {
for i, c := range chars {
r := rune(i)
// Rune properties.
// A.1: modifier never changes on lowercase. [ltLower]
if c.CCC > 0 && unicode.ToLower(r) != r {
log.Fatalf("%U: non-starter changes when lowercased", r)
}
// A.2: properties of decompositions starting with I or J. [ltLower]
d := norm.NFD.PropertiesString(string(r)).Decomposition()
if len(d) > 0 {
if d[0] == 'I' || d[0] == 'J' {
// A.2.1: we expect at least an ASCII character and a modifier.
if len(d) < 3 {
log.Fatalf("%U: length of decomposition was %d; want >= 3", r, len(d))
}
// All subsequent runes are modifiers and all have the same CCC.
runes := []rune(string(d[1:]))
ccc := chars[runes[0]].CCC
for _, mr := range runes[1:] {
mc := chars[mr]
// A.2.2: all modifiers have a CCC of Above or less.
if ccc == 0 || ccc > above {
log.Fatalf("%U: CCC of successive rune (%U) was %d; want (0,230]", r, mr, ccc)
}
// A.2.3: a sequence of modifiers all have the same CCC.
if mc.CCC != ccc {
log.Fatalf("%U: CCC of follow-up modifier (%U) was %d; want %d", r, mr, mc.CCC, ccc)
}
// A.2.4: for each trailing r, r in [0x300, 0x311] <=> CCC == Above.
if (ccc == above) != (0x300 <= mr && mr <= 0x311) {
log.Fatalf("%U: modifier %U in [U+0300, U+0311] != ccc(%U) == 230", r, mr, mr)
}
if i += len(string(mr)); i >= len(d) {
break
}
}
}
}
// A.3: no U+0307 in decomposition of Soft-Dotted rune. [ltUpper]
if unicode.Is(unicode.Soft_Dotted, r) && strings.Contains(string(d), "\u0307") {
log.Fatalf("%U: decomposition of soft-dotted rune may not contain U+0307", r)
}
// A.4: only rune U+0345 may be of CCC Iota_Subscript. [elUpper]
if c.CCC == iotaSubscript && r != 0x0345 {
log.Fatalf("%U: only rune U+0345 may have CCC Iota_Subscript", r)
}
// A.5: soft-dotted runes do not have exceptions.
if c.SoftDotted && c.entry&exceptionBit != 0 {
log.Fatalf("%U: soft-dotted has exception", r)
}
// A.6: Greek decomposition. [elUpper]
if unicode.Is(unicode.Greek, r) {
if b := norm.NFD.PropertiesString(string(r)).Decomposition(); b != nil {
runes := []rune(string(b))
// A.6.1: If a Greek rune decomposes and the first rune of the
// decomposition is greater than U+00FF, the rune is always
// a letter and not a modifier.
if f := runes[0]; unicode.IsMark(f) || f > 0xFF && !unicode.Is(unicode.Greek, f) {
log.Fatalf("%U: expected first rune of Greek decomposition to be letter, found %U", r, f)
}
// A.6.2: Any follow-up rune in a Greek decomposition is a
// modifier of which the first should be gobbled in
// decomposition.
for _, m := range runes[1:] {
switch m {
case 0x0313, 0x0314, 0x0301, 0x0300, 0x0306, 0x0342, 0x0308, 0x0304, 0x345:
default:
log.Fatalf("%U: modifier %U is outside of expected Greek modifier set", r, m)
}
}
}
}
// Breaking properties.
// B.1: all runes with CCC > 0 are of break type Extend.
if c.CCC > 0 && c.BreakType != "Extend" {
log.Fatalf("%U: CCC == %d, but got break type %s; want Extend", r, c.CCC, c.BreakType)
}
// B.2: all cased runes with c.CCC == 0 are of break type ALetter.
if c.CCC == 0 && c.Cased && c.BreakType != "ALetter" {
log.Fatalf("%U: cased, but got break type %s; want ALetter", r, c.BreakType)
}
// B.3: letter category.
if c.CCC == 0 && c.BreakCat != breakBreak && !c.CaseIgnorable {
if c.BreakCat != breakLetter {
log.Fatalf("%U: check for letter break type gave %d; want %d", r, c.BreakCat, breakLetter)
}
}
}
}
func genTablesTest() {
w := &bytes.Buffer{}
fmt.Fprintln(w, "var (")
printProperties(w, "DerivedCoreProperties.txt", "Case_Ignorable", verifyIgnore)
// We discard the output as we know we have perfect functions. We run them
// just to verify the properties are correct.
n := printProperties(ioutil.Discard, "DerivedCoreProperties.txt", "Cased", verifyCased)
n += printProperties(ioutil.Discard, "DerivedCoreProperties.txt", "Lowercase", verifyLower)
n += printProperties(ioutil.Discard, "DerivedCoreProperties.txt", "Uppercase", verifyUpper)
if n > 0 {
log.Fatalf("One of the discarded properties does not have a perfect filter.")
}
// <code>; <lower> ; <title> ; <upper> ; (<condition_list> ;)?
fmt.Fprintln(w, "\tspecial = map[rune]struct{ toLower, toTitle, toUpper string }{")
parse("SpecialCasing.txt", func(p *ucd.Parser) {
// Skip conditional entries.
if p.String(4) != "" {
return
}
r := p.Rune(0)
fmt.Fprintf(w, "\t\t0x%04x: {%q, %q, %q},\n",
r, string(p.Runes(1)), string(p.Runes(2)), string(p.Runes(3)))
})
fmt.Fprint(w, "\t}\n\n")
// <code>; <type>; <runes>
table := map[rune]struct{ simple, full, special string }{}
parse("CaseFolding.txt", func(p *ucd.Parser) {
r := p.Rune(0)
t := p.String(1)
v := string(p.Runes(2))
if t != "T" && v == string(unicode.ToLower(r)) {
return
}
x := table[r]
switch t {
case "C":
x.full = v
x.simple = v
case "S":
x.simple = v
case "F":
x.full = v
case "T":
x.special = v
}
table[r] = x
})
fmt.Fprintln(w, "\tfoldMap = map[rune]struct{ simple, full, special string }{")
for r := rune(0); r < 0x10FFFF; r++ {
x, ok := table[r]
if !ok {
continue
}
fmt.Fprintf(w, "\t\t0x%04x: {%q, %q, %q},\n", r, x.simple, x.full, x.special)
}
fmt.Fprint(w, "\t}\n\n")
// Break property
notBreak := map[rune]bool{}
parse("auxiliary/WordBreakProperty.txt", func(p *ucd.Parser) {
switch p.String(1) {
case "Extend", "Format", "MidLetter", "MidNumLet", "Single_Quote",
"ALetter", "Hebrew_Letter", "Numeric", "ExtendNumLet", "ZWJ":
notBreak[p.Rune(0)] = true
}
})
fmt.Fprintln(w, "\tbreakProp = []struct{ lo, hi rune }{")
inBreak := false
for r := rune(0); r <= lastRuneForTesting; r++ {
if isBreak := !notBreak[r]; isBreak != inBreak {
if isBreak {
fmt.Fprintf(w, "\t\t{0x%x, ", r)
} else {
fmt.Fprintf(w, "0x%x},\n", r-1)
}
inBreak = isBreak
}
}
if inBreak {
fmt.Fprintf(w, "0x%x},\n", lastRuneForTesting)
}
fmt.Fprint(w, "\t}\n\n")
// Word break test
// Filter out all samples that do not contain cased characters.
cased := map[rune]bool{}
parse("DerivedCoreProperties.txt", func(p *ucd.Parser) {
if p.String(1) == "Cased" {
cased[p.Rune(0)] = true
}
})
fmt.Fprintln(w, "\tbreakTest = []string{")
parse("auxiliary/WordBreakTest.txt", func(p *ucd.Parser) {
c := strings.Split(p.String(0), " ")
const sep = '|'
numCased := 0
test := ""
for ; len(c) >= 2; c = c[2:] {
if c[0] == "÷" && test != "" {
test += string(sep)
}
i, err := strconv.ParseUint(c[1], 16, 32)
r := rune(i)
if err != nil {
log.Fatalf("Invalid rune %q.", c[1])
}
if r == sep {
log.Fatalf("Separator %q not allowed in test data. Pick another one.", sep)
}
if cased[r] {
numCased++
}
test += string(r)
}
if numCased > 1 {
fmt.Fprintf(w, "\t\t%q,\n", test)
}
})
fmt.Fprintln(w, "\t}")
fmt.Fprintln(w, ")")
gen.WriteVersionedGoFile("tables_test.go", "cases", w.Bytes())
}
// These functions are just used for verification that their definitions have
// not changed in the Unicode Standard.
func verifyCased(r rune) bool {
return verifyLower(r) || verifyUpper(r) || unicode.IsTitle(r)
}
func verifyLower(r rune) bool {
return unicode.IsLower(r) || unicode.Is(unicode.Other_Lowercase, r)
}
func verifyUpper(r rune) bool {
return unicode.IsUpper(r) || unicode.Is(unicode.Other_Uppercase, r)
}
// verifyIgnore is an approximation of the Case_Ignorable property using the
// core unicode package. It is used to reduce the size of the test data.
func verifyIgnore(r rune) bool {
props := []*unicode.RangeTable{
unicode.Mn,
unicode.Me,
unicode.Cf,
unicode.Lm,
unicode.Sk,
}
for _, p := range props {
if unicode.Is(p, r) {
return true
}
}
return false
}
// printProperties prints tables of rune properties from the given UCD file.
// A filter func f can be given to exclude certain values. A rune r will have
// the indicated property if it is in the generated table or if f(r).
func printProperties(w io.Writer, file, property string, f func(r rune) bool) int {
verify := map[rune]bool{}
n := 0
varNameParts := strings.Split(property, "_")
varNameParts[0] = strings.ToLower(varNameParts[0])
fmt.Fprintf(w, "\t%s = map[rune]bool{\n", strings.Join(varNameParts, ""))
parse(file, func(p *ucd.Parser) {
if p.String(1) == property {
r := p.Rune(0)
verify[r] = true
if !f(r) {
n++
fmt.Fprintf(w, "\t\t0x%.4x: true,\n", r)
}
}
})
fmt.Fprint(w, "\t}\n\n")
// Verify that f is correct, that is, it represents a subset of the property.
for r := rune(0); r <= lastRuneForTesting; r++ {
if !verify[r] && f(r) {
log.Fatalf("Incorrect filter func for property %q.", property)
}
}
return n
}
// The newCaseTrie, sparseValues and sparseOffsets definitions below are
// placeholders referred to by gen_trieval.go. The real definitions are
// generated by this program and written to tables.go.
func newCaseTrie(int) int { return 0 }
var (
sparseValues [0]valueRange
sparseOffsets [0]uint16
)

View File

@ -1,219 +0,0 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
package main
// This file contains definitions for interpreting the trie value of the case
// trie generated by "go run gen*.go". It is shared by both the generator
// program and the resultant package. Sharing is achieved by the generator
// copying gen_trieval.go to trieval.go and changing what's above this comment.
// info holds case information for a single rune. It is the value returned
// by a trie lookup. Most mapping information can be stored in a single 16-bit
// value. If not, for example when a rune is mapped to multiple runes, the value
// stores some basic case data and an index into an array with additional data.
//
// The per-rune values have the following format:
//
// if (exception) {
// 15..5 unsigned exception index
// 4 unused
// } else {
// 15..8 XOR pattern or index to XOR pattern for case mapping
// Only 13..8 are used for XOR patterns.
// 7 inverseFold (fold to upper, not to lower)
// 6 index: interpret the XOR pattern as an index
// or isMid if case mode is cIgnorableUncased.
// 5..4 CCC: zero (normal or break), above or other
// }
// 3 exception: interpret this value as an exception index
// (TODO: is this bit necessary? Probably implied from case mode.)
// 2..0 case mode
//
// For the non-exceptional cases, a rune must be either uncased, lowercase or
// uppercase. If the rune is cased, the XOR pattern maps either a lowercase
// rune to uppercase or an uppercase rune to lowercase (applied to the 10
// least-significant bits of the rune).
//
// See the definitions below for a more detailed description of the various
// bits.
type info uint16
const (
casedMask = 0x0003
fullCasedMask = 0x0007
ignorableMask = 0x0006
ignorableValue = 0x0004
inverseFoldBit = 1 << 7
isMidBit = 1 << 6
exceptionBit = 1 << 3
exceptionShift = 5
numExceptionBits = 11
xorIndexBit = 1 << 6
xorShift = 8
// There is no mapping if all xor bits and the exception bit are zero.
hasMappingMask = 0xff80 | exceptionBit
)
// The case mode bits encode the case type of a rune. This includes uncased,
// title, upper and lower case and case ignorable. (For a definition of these
// terms see Chapter 3 of The Unicode Standard Core Specification.) In some rare
// cases, a rune can be both cased and case-ignorable. This is encoded by
// cIgnorableCased. A rune of this type is always lower case. Some runes are
// cased while not having a mapping.
//
// A common pattern for scripts in the Unicode standard is for upper and lower
// case runes to alternate for increasing rune values (e.g. the accented Latin
// ranges starting from U+0100 and U+1E00 among others and some Cyrillic
// characters). We use this property by defining a cXORCase mode, where the case
// mode (always upper or lower case) is derived from the rune value. As the XOR
// pattern for case mappings is often identical for successive runes, using
// cXORCase can result in large series of identical trie values. This, in turn,
// allows us to better compress the trie blocks.
const (
cUncased info = iota // 000
cTitle // 001
cLower // 010
cUpper // 011
cIgnorableUncased // 100
cIgnorableCased // 101 // lower case if mappings exist
cXORCase // 11x // case is cLower | ((rune&1) ^ x)
maxCaseMode = cUpper
)
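// Worked example (assumed trie values, for illustration only): Ā (U+0100)
// and ā (U+0101) alternate between upper and lower case. If the low three
// bits of their shared entry are 111 (cXORCase with x = 1), caseType in
// context.go computes info(lastByte&1) ^ (cm & 0x3): Ā ends in the even
// UTF-8 byte 0x80, giving 0 ^ 3 = cUpper, while ā ends in 0x81, giving
// 1 ^ 3 = cLower. Long runs of such runes therefore share one trie value.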
func (c info) isCased() bool {
return c&casedMask != 0
}
func (c info) isCaseIgnorable() bool {
return c&ignorableMask == ignorableValue
}
func (c info) isNotCasedAndNotCaseIgnorable() bool {
return c&fullCasedMask == 0
}
func (c info) isCaseIgnorableAndNotCased() bool {
return c&fullCasedMask == cIgnorableUncased
}
func (c info) isMid() bool {
return c&(fullCasedMask|isMidBit) == isMidBit|cIgnorableUncased
}
// The case mapping implementation will need to know about various Canonical
// Combining Class (CCC) values. We encode two of these in the trie value:
// cccZero (0) and cccAbove (230). If the value is cccOther, it means that
// CCC(r) > 0, but not 230. A value of cccBreak means that CCC(r) == 0 and that
// the rune also has the break category Break (see below).
const (
cccBreak info = iota << 4
cccZero
cccAbove
cccOther
cccMask = cccBreak | cccZero | cccAbove | cccOther
)
const (
starter = 0
above = 230
iotaSubscript = 240
)
// The exceptions slice holds data that does not fit in a normal info entry.
// The entry is pointed to by the exception index in an entry. It has the
// following format:
//
// Header
// byte 0:
// 7..6 unused
// 5..4 CCC type (same bits as entry)
// 3 unused
// 2..0 length of fold
//
// byte 1:
// 7..6 unused
// 5..3 length of 1st mapping of case type
// 2..0 length of 2nd mapping of case type
//
// case 1st 2nd
// lower -> upper, title
// upper -> lower, title
// title -> lower, upper
//
// Lengths with the value 0x7 indicate no value and imply no change.
// A length of 0 indicates a mapping to a zero-length string.
//
// Body bytes:
// case folding bytes
// lowercase mapping bytes
// uppercase mapping bytes
// titlecase mapping bytes
// closure mapping bytes (for NFKC_Casefold). (TODO)
//
// Fallbacks:
// missing fold -> lower
// missing title -> upper
// all missing -> original rune
//
// exceptions starts with a dummy byte to enforce that there is no zero index
// value.
const (
lengthMask = 0x07
lengthBits = 3
noChange = 0
)
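// Sketch (not part of the original file): decoding the two header bytes of an
// exceptions entry as laid out above. parseExceptionHeader is an illustrative
// name; a length equal to lengthMask (0x7) means "no value, no change".
func parseExceptionHeader(e []byte) (ccc info, nFold, nFirst, nSecond int) {
	ccc = info(e[0]) & cccMask                  // byte 0, bits 5..4: CCC type
	nFold = int(e[0] & lengthMask)              // byte 0, bits 2..0: length of the fold mapping
	nFirst = int(e[1]>>lengthBits) & lengthMask // byte 1, bits 5..3: length of the 1st mapping
	nSecond = int(e[1] & lengthMask)            // byte 1, bits 2..0: length of the 2nd mapping
	return ccc, nFold, nFirst, nSecond
}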
// References to generated trie.
var trie = newCaseTrie(0)
var sparse = sparseBlocks{
values: sparseValues[:],
offsets: sparseOffsets[:],
}
// Sparse block lookup code.
// valueRange is an entry in a sparse block.
type valueRange struct {
value uint16
lo, hi byte
}
type sparseBlocks struct {
values []valueRange
offsets []uint16
}
// lookup returns the value from values block n for byte b using binary search.
func (s *sparseBlocks) lookup(n uint32, b byte) uint16 {
lo := s.offsets[n]
hi := s.offsets[n+1]
for lo < hi {
m := lo + (hi-lo)/2
r := s.values[m]
if r.lo <= b && b <= r.hi {
return r.value
}
if b < r.lo {
hi = m
} else {
lo = m + 1
}
}
return 0
}
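// Sketch (not part of the original file): querying a hand-built sparseBlocks
// value; the sample data is made up, real data lives in the generated tables.
func exampleSparseLookup() uint16 {
	s := sparseBlocks{
		values:  []valueRange{{value: 42, lo: 0x80, hi: 0x8f}},
		offsets: []uint16{0, 1}, // block 0 spans values[0:1]
	}
	return s.lookup(0, 0x85) // 42; bytes outside 0x80..0x8f return 0
}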
// lastRuneForTesting is the last rune used for testing. Everything after this
// is boring.
const lastRuneForTesting = rune(0x1FFFF)

View File

@ -1,61 +0,0 @@
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build icu
package cases
// Ideally these functions would be defined in a test file, but go test doesn't
// allow CGO in tests. The build tag should ensure either way that these
// functions will not end up in the package.
// TODO: Ensure that the correct ICU version is set.
/*
#cgo LDFLAGS: -licui18n.57 -licuuc.57
#include <stdlib.h>
#include <unicode/ustring.h>
#include <unicode/utypes.h>
#include <unicode/localpointer.h>
#include <unicode/ucasemap.h>
*/
import "C"
import "unsafe"
func doICU(tag, caser, input string) string {
err := C.UErrorCode(0)
loc := C.CString(tag)
cm := C.ucasemap_open(loc, C.uint32_t(0), &err)
buf := make([]byte, len(input)*4)
dst := (*C.char)(unsafe.Pointer(&buf[0]))
src := C.CString(input)
cn := C.int32_t(0)
switch caser {
case "fold":
cn = C.ucasemap_utf8FoldCase(cm,
dst, C.int32_t(len(buf)),
src, C.int32_t(len(input)),
&err)
case "lower":
cn = C.ucasemap_utf8ToLower(cm,
dst, C.int32_t(len(buf)),
src, C.int32_t(len(input)),
&err)
case "upper":
cn = C.ucasemap_utf8ToUpper(cm,
dst, C.int32_t(len(buf)),
src, C.int32_t(len(input)),
&err)
case "title":
cn = C.ucasemap_utf8ToTitle(cm,
dst, C.int32_t(len(buf)),
src, C.int32_t(len(input)),
&err)
}
return string(buf[:cn])
}
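// Usage sketch (not part of the original file); the expected result is an
// assumption based on Turkish casing, where i upper-cases to İ.
func exampleDoICU() bool {
	return doICU("tr", "upper", "i") == "İ"
}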

View File

@ -1,210 +0,0 @@
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build icu
package cases
import (
"path"
"strings"
"testing"
"golang.org/x/text/internal/testtext"
"golang.org/x/text/language"
"golang.org/x/text/unicode/norm"
)
func TestICUConformance(t *testing.T) {
// Build test set.
input := []string{
"a.a a_a",
"a\u05d0a",
"\u05d0'a",
"a\u03084a",
"a\u0308a",
"a3\u30a3a",
"a\u303aa",
"a_\u303a_a",
"1_a..a",
"1_a.a",
"a..a.",
"a--a-",
"a-a-",
"a\u200ba",
"a\u200b\u200ba",
"a\u00ad\u00ada", // Format
"a\u00ada",
"a''a", // SingleQuote
"a'a",
"a::a", // MidLetter
"a:a",
"a..a", // MidNumLet
"a.a",
"a;;a", // MidNum
"a;a",
"a__a", // ExtendNumlet
"a_a",
"ΟΣ''a",
}
add := func(x interface{}) {
switch v := x.(type) {
case string:
input = append(input, v)
case []string:
for _, s := range v {
input = append(input, s)
}
}
}
for _, tc := range testCases {
add(tc.src)
add(tc.lower)
add(tc.upper)
add(tc.title)
}
for _, tc := range bufferTests {
add(tc.src)
}
for _, tc := range breakTest {
add(strings.Replace(tc, "|", "", -1))
}
for _, tc := range foldTestCases {
add(tc)
}
// Compare ICU to Go.
for _, c := range []string{"lower", "upper", "title", "fold"} {
for _, tag := range []string{
"und", "af", "az", "el", "lt", "nl", "tr",
} {
for _, s := range input {
if exclude(c, tag, s) {
continue
}
testtext.Run(t, path.Join(c, tag, s), func(t *testing.T) {
want := doICU(tag, c, s)
got := doGo(tag, c, s)
if norm.NFC.String(got) != norm.NFC.String(want) {
t.Errorf("\n in %[3]q (%+[3]q)\n got %[1]q (%+[1]q)\n want %[2]q (%+[2]q)", got, want, s)
}
})
}
}
}
}
// exclude indicates if a string should be excluded from testing.
func exclude(cm, tag, s string) bool {
list := []struct{ cm, tags, pattern string }{
// TODO: Go does not handle certain esoteric breaks correctly. This will be
// fixed once we have a real word break iterator. Alternatively, it
// seems like we're not too far off from making it work, so we could
// fix these last steps. But first verify that using a separate word
// breaker does not hurt performance.
{"title", "af nl", "a''a"},
{"", "", "א'a"},
// All the exclusions below seem to be issues with the ICU
// implementation (at version 57) and thus are not marked as TODO.
// ICU does not handle leading apostrophe for Dutch and
// Afrikaans correctly. See http://unicode.org/cldr/trac/ticket/7078.
{"title", "af nl", "'n"},
{"title", "af nl", "'N"},
// Go terminates the final sigma check after a fixed number of
// ignorables have been found. This ensures that the algorithm can make
// progress in a streaming scenario.
{"lower title", "", "\u039f\u03a3...............................a"},
// This also applies to upper in Greek.
// NOTE: we could fix the following two cases by adding state to elUpper
// and aztrLower. However, considering a modifier to not belong to the
// preceding letter after the maximum modifiers count is reached is
// consistent with the behavior of unicode/norm.
{"upper", "el", "\u03bf" + strings.Repeat("\u0321", 29) + "\u0313"},
{"lower", "az tr lt", "I" + strings.Repeat("\u0321", 30) + "\u0307\u0300"},
{"upper", "lt", "i" + strings.Repeat("\u0321", 30) + "\u0307\u0300"},
{"lower", "lt", "I" + strings.Repeat("\u0321", 30) + "\u0300"},
// ICU title case seems to erroneously remove \u0307 from an upper case
// I unconditionally, instead of only when lowercasing. The ICU
// transform algorithm transforms these cases consistently with our
// implementation.
{"title", "az tr", "\u0307"},
// The spec says to remove \u0307 after Soft-Dotted characters. ICU
// transforms conform but ucasemap_utf8ToUpper does not.
{"upper title", "lt", "i\u0307"},
{"upper title", "lt", "i" + strings.Repeat("\u0321", 29) + "\u0307\u0300"},
// Both Unicode and CLDR prescribe an extra explicit dot above after a
// Soft_Dotted character if there are other modifiers.
// ucasemap_utf8ToUpper does not do this; ICU transforms do.
// The issue with ucasemap_utf8ToUpper seems to be that it does not
// consider the modifiers that are part of composition in the evaluation
// of More_Above. For instance, according to the More_Above rule for lt,
// a dotted capital I (U+0130) becomes i\u0307\u0307 (a small i with
// two additional dots). This seems odd, but is correct. ICU is
// definitely not correct as it produces different results for different
// normal forms. For instance, for an İ:
// \u0130 (NFC) -> i\u0307 (incorrect)
// I\u0307 (NFD) -> i\u0307\u0307 (correct)
// We could argue that we should not add a \u0307 if there already is
// one, but this may be hard to get correct and does not conform to the
// standard.
{"lower title", "lt", "\u0130"},
{"lower title", "lt", "\u00cf"},
// We would conform to ICU ucasemap_utf8ToUpper if we removed support for
// elUpper. However, this is clearly not conformant with the spec. Moreover, the
// ICU transforms _do_ implement this transform and produces results
// consistent with our implementation. Note that we still prefer to use
// ucasemap_utf8ToUpper instead of transforms as the latter have
// inconsistencies in the word breaking algorithm.
{"upper", "el", "\u0386"}, // GREEK CAPITAL LETTER ALPHA WITH TONOS
{"upper", "el", "\u0389"}, // GREEK CAPITAL LETTER ETA WITH TONOS
{"upper", "el", "\u038A"}, // GREEK CAPITAL LETTER IOTA WITH TONOS
{"upper", "el", "\u0391"}, // GREEK CAPITAL LETTER ALPHA
{"upper", "el", "\u0397"}, // GREEK CAPITAL LETTER ETA
{"upper", "el", "\u0399"}, // GREEK CAPITAL LETTER IOTA
{"upper", "el", "\u03AC"}, // GREEK SMALL LETTER ALPHA WITH TONOS
{"upper", "el", "\u03AE"}, // GREEK SMALL LETTER ALPHA WITH ETA
{"upper", "el", "\u03AF"}, // GREEK SMALL LETTER ALPHA WITH IOTA
{"upper", "el", "\u03B1"}, // GREEK SMALL LETTER ALPHA
{"upper", "el", "\u03B7"}, // GREEK SMALL LETTER ETA
{"upper", "el", "\u03B9"}, // GREEK SMALL LETTER IOTA
}
for _, x := range list {
if x.cm != "" && strings.Index(x.cm, cm) == -1 {
continue
}
if x.tags != "" && strings.Index(x.tags, tag) == -1 {
continue
}
if strings.Index(s, x.pattern) != -1 {
return true
}
}
return false
}
func doGo(tag, caser, input string) string {
var c Caser
t := language.MustParse(tag)
switch caser {
case "lower":
c = Lower(t)
case "upper":
c = Upper(t)
case "title":
c = Title(t)
case "fold":
c = Fold()
}
return c.String(input)
}

View File

@ -1,82 +0,0 @@
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package cases
func (c info) cccVal() info {
if c&exceptionBit != 0 {
return info(exceptions[c>>exceptionShift]) & cccMask
}
return c & cccMask
}
func (c info) cccType() info {
ccc := c.cccVal()
if ccc <= cccZero {
return cccZero
}
return ccc
}
// TODO: Implement full Unicode breaking algorithm:
// 1) Implement breaking in separate package.
// 2) Use the breaker here.
// 3) Compare table size and performance of using the more generic breaker.
//
// Note that we can extend the current algorithm to be much more accurate. This
// only makes sense, though, if the performance and/or space penalty of using
// the generic breaker is big. Extra data will only be needed for non-cased
// runes, which means there are sufficient bits left in the caseType.
// ICU prohibits breaking in such cases as well.
// For the purpose of title casing we use an approximation of the Unicode Word
// Breaking algorithm defined in Annex #29:
// http://www.unicode.org/reports/tr29/#Default_Grapheme_Cluster_Table.
//
// For our approximation, we group the Word Break types into the following
// categories, with associated rules:
//
// 1) Letter:
// ALetter, Hebrew_Letter, Numeric, ExtendNumLet, Extend, Format_FE, ZWJ.
// Rule: Never break between consecutive runes of this category.
//
// 2) Mid:
// MidLetter, MidNumLet, Single_Quote.
// (Cf. case-ignorable: MidLetter, MidNumLet, Single_Quote or cat is Mn,
// Me, Cf, Lm or Sk).
// Rule: Don't break between Letter and Mid, but break between two Mids.
//
// 3) Break:
// Any other category: NewLine, MidNum, CR, LF, Double_Quote, Katakana, and
// Other.
// These categories should always result in a break between two cased letters.
// Rule: Always break.
//
// Note 1: the Katakana and MidNum categories can, in esoteric cases, result in
// preventing a break between two cased letters. For now we will ignore this
// (e.g. [ALetter] [ExtendNumLet] [Katakana] [ExtendNumLet] [ALetter] and
// [ALetter] [Numeric] [MidNum] [Numeric] [ALetter].)
//
// Note 2: the rule for Mid is very approximate, but works in most cases. To
// improve, we could store the categories in the trie value and use a FA to
// manage breaks. See TODO comment above.
//
// Note 3: according to the spec, it is possible for the Extend category to
// introduce breaks between other categories grouped in Letter. However, this
// is undesirable for our purposes. ICU prevents breaks in such cases as well.
// isBreak returns whether this rune should introduce a break.
func (c info) isBreak() bool {
return c.cccVal() == cccBreak
}
// isLetter returns whether the rune is of break type ALetter, Hebrew_Letter,
// Numeric, ExtendNumLet, or Extend.
func (c info) isLetter() bool {
ccc := c.cccVal()
if ccc == cccZero {
return !c.isCaseIgnorable()
}
return ccc != cccBreak
}
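// Sketch (not part of the original file): the visible effect of the rules
// above through the package's public Title API (language is
// golang.org/x/text/language, imported in map.go). A single Mid such as '.'
// does not break a word, while two consecutive Mids do; the expected outputs
// follow from the rules above and are assumptions, not quoted test data.
func exampleMidBreak() (string, string) {
	t := Title(language.Und)
	return t.String("a.b"), t.String("a..b") // "A.b", "A..B"
}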

816
vendor/golang.org/x/text/cases/map.go generated vendored
View File

@ -1,816 +0,0 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package cases
// This file contains the definitions of case mappings for all supported
// languages. The rules for the language-specific tailorings were taken and
// modified from the CLDR transform definitions in common/transforms.
import (
"strings"
"unicode"
"unicode/utf8"
"golang.org/x/text/internal"
"golang.org/x/text/language"
"golang.org/x/text/transform"
"golang.org/x/text/unicode/norm"
)
// A mapFunc takes a context set to the current rune and writes the mapped
// version to the same context. It may advance the context to the next rune. It
// returns whether a checkpoint is possible: whether the pDst bytes written to
// dst so far won't need changing as we see more source bytes.
type mapFunc func(*context) bool
// A spanFunc takes a context set to the current rune and returns whether this
// rune would be altered when written to the output. It may advance the context
// to the next rune. It returns whether a checkpoint is possible.
type spanFunc func(*context) bool
// maxIgnorable defines the maximum number of ignorables to consider for
// lookahead operations.
const maxIgnorable = 30
// supported lists the language tags for which we have tailorings.
const supported = "und af az el lt nl tr"
func init() {
tags := []language.Tag{}
for _, s := range strings.Split(supported, " ") {
tags = append(tags, language.MustParse(s))
}
matcher = internal.NewInheritanceMatcher(tags)
Supported = language.NewCoverage(tags)
}
var (
matcher *internal.InheritanceMatcher
Supported language.Coverage
// We keep the following lists separate, instead of having a single per-
// language struct, to give the compiler a chance to remove unused code.
// Some uppercase mappers are stateless, so we can precompute the
// Transformers and save a bit on runtime allocations.
upperFunc = []struct {
upper mapFunc
span spanFunc
}{
{nil, nil}, // und
{nil, nil}, // af
{aztrUpper(upper), isUpper}, // az
{elUpper, noSpan}, // el
{ltUpper(upper), noSpan}, // lt
{nil, nil}, // nl
{aztrUpper(upper), isUpper}, // tr
}
undUpper transform.SpanningTransformer = &undUpperCaser{}
undLower transform.SpanningTransformer = &undLowerCaser{}
undLowerIgnoreSigma transform.SpanningTransformer = &undLowerIgnoreSigmaCaser{}
lowerFunc = []mapFunc{
nil, // und
nil, // af
aztrLower, // az
nil, // el
ltLower, // lt
nil, // nl
aztrLower, // tr
}
titleInfos = []struct {
title mapFunc
lower mapFunc
titleSpan spanFunc
rewrite func(*context)
}{
{title, lower, isTitle, nil}, // und
{title, lower, isTitle, afnlRewrite}, // af
{aztrUpper(title), aztrLower, isTitle, nil}, // az
{title, lower, isTitle, nil}, // el
{ltUpper(title), ltLower, noSpan, nil}, // lt
{nlTitle, lower, nlTitleSpan, afnlRewrite}, // nl
{aztrUpper(title), aztrLower, isTitle, nil}, // tr
}
)
func makeUpper(t language.Tag, o options) transform.SpanningTransformer {
_, i, _ := matcher.Match(t)
f := upperFunc[i].upper
if f == nil {
return undUpper
}
return &simpleCaser{f: f, span: upperFunc[i].span}
}
func makeLower(t language.Tag, o options) transform.SpanningTransformer {
_, i, _ := matcher.Match(t)
f := lowerFunc[i]
if f == nil {
if o.ignoreFinalSigma {
return undLowerIgnoreSigma
}
return undLower
}
if o.ignoreFinalSigma {
return &simpleCaser{f: f, span: isLower}
}
return &lowerCaser{
first: f,
midWord: finalSigma(f),
}
}
func makeTitle(t language.Tag, o options) transform.SpanningTransformer {
_, i, _ := matcher.Match(t)
x := &titleInfos[i]
lower := x.lower
if o.noLower {
lower = (*context).copy
} else if !o.ignoreFinalSigma {
lower = finalSigma(lower)
}
return &titleCaser{
title: x.title,
lower: lower,
titleSpan: x.titleSpan,
rewrite: x.rewrite,
}
}
func noSpan(c *context) bool {
c.err = transform.ErrEndOfSpan
return false
}
// TODO: consider a similar special case for the fast majority lower case. This
// is a bit more involved so will require some more precise benchmarking to
// justify it.
type undUpperCaser struct{ transform.NopResetter }
// undUpperCaser implements the Transformer interface for doing an upper case
// mapping for the root locale (und). It eliminates the need for an allocation
// as it prevents escaping by not using function pointers.
func (t undUpperCaser) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
c := context{dst: dst, src: src, atEOF: atEOF}
for c.next() {
upper(&c)
c.checkpoint()
}
return c.ret()
}
func (t undUpperCaser) Span(src []byte, atEOF bool) (n int, err error) {
c := context{src: src, atEOF: atEOF}
for c.next() && isUpper(&c) {
c.checkpoint()
}
return c.retSpan()
}
// undLowerIgnoreSigmaCaser implements the Transformer interface for doing
// a lower case mapping for the root locale (und) ignoring final sigma
// handling. This casing algorithm is used in some performance-critical packages
// like secure/precis and x/net/http/idna, which warrants its special-casing.
type undLowerIgnoreSigmaCaser struct{ transform.NopResetter }
func (t undLowerIgnoreSigmaCaser) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
c := context{dst: dst, src: src, atEOF: atEOF}
for c.next() && lower(&c) {
c.checkpoint()
}
return c.ret()
}
// Span implements a generic lower-casing. This is possible as isLower works
// for all lowercasing variants. All lowercase variants only vary in how they
// transform a non-lowercase letter. They will never change an already lowercase
// letter. In addition, there is no state.
func (t undLowerIgnoreSigmaCaser) Span(src []byte, atEOF bool) (n int, err error) {
c := context{src: src, atEOF: atEOF}
for c.next() && isLower(&c) {
c.checkpoint()
}
return c.retSpan()
}
type simpleCaser struct {
context
f mapFunc
span spanFunc
}
// simpleCaser implements the Transformer interface for doing a case operation
// on a rune-by-rune basis.
func (t *simpleCaser) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
c := context{dst: dst, src: src, atEOF: atEOF}
for c.next() && t.f(&c) {
c.checkpoint()
}
return c.ret()
}
func (t *simpleCaser) Span(src []byte, atEOF bool) (n int, err error) {
c := context{src: src, atEOF: atEOF}
for c.next() && t.span(&c) {
c.checkpoint()
}
return c.retSpan()
}
// undLowerCaser implements the Transformer interface for doing a lower case
// mapping for the root locale (und), including final sigma handling. It is the
// default lower caser; undLowerIgnoreSigmaCaser above is the variant without
// final sigma handling used by performance-critical packages such as
// secure/precis and x/net/http/idna.
type undLowerCaser struct{ transform.NopResetter }
func (t undLowerCaser) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
c := context{dst: dst, src: src, atEOF: atEOF}
for isInterWord := true; c.next(); {
if isInterWord {
if c.info.isCased() {
if !lower(&c) {
break
}
isInterWord = false
} else if !c.copy() {
break
}
} else {
if c.info.isNotCasedAndNotCaseIgnorable() {
if !c.copy() {
break
}
isInterWord = true
} else if !c.hasPrefix("Σ") {
if !lower(&c) {
break
}
} else if !finalSigmaBody(&c) {
break
}
}
c.checkpoint()
}
return c.ret()
}
func (t undLowerCaser) Span(src []byte, atEOF bool) (n int, err error) {
c := context{src: src, atEOF: atEOF}
for c.next() && isLower(&c) {
c.checkpoint()
}
return c.retSpan()
}
// lowerCaser implements the Transformer interface. The default Unicode lower
// casing requires different treatment for the first and subsequent characters
// of a word, most notably to handle the Greek final Sigma.
type lowerCaser struct {
undLowerIgnoreSigmaCaser
context
first, midWord mapFunc
}
func (t *lowerCaser) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
t.context = context{dst: dst, src: src, atEOF: atEOF}
c := &t.context
for isInterWord := true; c.next(); {
if isInterWord {
if c.info.isCased() {
if !t.first(c) {
break
}
isInterWord = false
} else if !c.copy() {
break
}
} else {
if c.info.isNotCasedAndNotCaseIgnorable() {
if !c.copy() {
break
}
isInterWord = true
} else if !t.midWord(c) {
break
}
}
c.checkpoint()
}
return c.ret()
}
// titleCaser implements the Transformer interface. Title casing algorithms
// distinguish between the first letter of a word and subsequent letters of the
// same word. It uses state to avoid requiring a potentially infinite lookahead.
type titleCaser struct {
context
// rune mappings used by the actual casing algorithms.
title mapFunc
lower mapFunc
titleSpan spanFunc
rewrite func(*context)
}
// Transform implements the standard Unicode title case algorithm as defined in
// Chapter 3 of The Unicode Standard:
// toTitlecase(X): Find the word boundaries in X according to Unicode Standard
// Annex #29, "Unicode Text Segmentation." For each word boundary, find the
// first cased character F following the word boundary. If F exists, map F to
// Titlecase_Mapping(F); then map all characters C between F and the following
// word boundary to Lowercase_Mapping(C).
func (t *titleCaser) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
t.context = context{dst: dst, src: src, atEOF: atEOF, isMidWord: t.isMidWord}
c := &t.context
if !c.next() {
return c.ret()
}
for {
p := c.info
if t.rewrite != nil {
t.rewrite(c)
}
wasMid := p.isMid()
// Break out of this loop on failure to ensure we do not modify the
// state incorrectly.
if p.isCased() {
if !c.isMidWord {
if !t.title(c) {
break
}
c.isMidWord = true
} else if !t.lower(c) {
break
}
} else if !c.copy() {
break
} else if p.isBreak() {
c.isMidWord = false
}
// As we save the state of the transformer, it is safe to call
// checkpoint after any successful write.
if !(c.isMidWord && wasMid) {
c.checkpoint()
}
if !c.next() {
break
}
if wasMid && c.info.isMid() {
c.isMidWord = false
}
}
return c.ret()
}
func (t *titleCaser) Span(src []byte, atEOF bool) (n int, err error) {
t.context = context{src: src, atEOF: atEOF, isMidWord: t.isMidWord}
c := &t.context
if !c.next() {
return c.retSpan()
}
for {
p := c.info
if t.rewrite != nil {
t.rewrite(c)
}
wasMid := p.isMid()
// Break out of this loop on failure to ensure we do not modify the
// state incorrectly.
if p.isCased() {
if !c.isMidWord {
if !t.titleSpan(c) {
break
}
c.isMidWord = true
} else if !isLower(c) {
break
}
} else if p.isBreak() {
c.isMidWord = false
}
// As we save the state of the transformer, it is safe to call
// checkpoint after any successful write.
if !(c.isMidWord && wasMid) {
c.checkpoint()
}
if !c.next() {
break
}
if wasMid && c.info.isMid() {
c.isMidWord = false
}
}
return c.retSpan()
}
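// Usage sketch (not part of the original file): the algorithm above title-cases
// the first cased letter of each word and lower-cases the rest; a mid-word
// apostrophe does not start a new word (compare the "DON'T DO THiS" entry in
// the package's test data).
func exampleTitleUnd() string {
	return Title(language.Und).String("DON'T DO THiS") // "Don't Do This"
}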
// finalSigma adds Greek final Sigma handling to another casing function. It
// determines whether a lowercased sigma should be σ or ς, by looking ahead for
// case-ignorables and a cased letter.
func finalSigma(f mapFunc) mapFunc {
return func(c *context) bool {
if !c.hasPrefix("Σ") {
return f(c)
}
return finalSigmaBody(c)
}
}
func finalSigmaBody(c *context) bool {
// The current rune must be Σ.
// ::NFD();
// # 03A3; 03C2; 03A3; 03A3; Final_Sigma; # GREEK CAPITAL LETTER SIGMA
// Σ } [:case-ignorable:]* [:cased:] → σ;
// [:cased:] [:case-ignorable:]* { Σ → ς;
// ::Any-Lower;
// ::NFC();
p := c.pDst
c.writeString("ς")
// TODO: we should do this here, but right now this will never have an
// effect as this is called when the prefix is Sigma, whereas Dutch and
// Afrikaans only test for an apostrophe.
//
// if t.rewrite != nil {
// t.rewrite(c)
// }
// We need to do one more iteration after maxIgnorable, as a cased
// letter is not an ignorable and may modify the result.
wasMid := false
for i := 0; i < maxIgnorable+1; i++ {
if !c.next() {
return false
}
if !c.info.isCaseIgnorable() {
// All Midword runes are also case ignorable, so we are
// guaranteed to have a letter or word break here. As we are
// unreading the rune, there is no need to unset c.isMidWord;
// the title caser will handle this.
if c.info.isCased() {
// p+1 is guaranteed to be in bounds: if writing ς was
// successful, p+1 will contain the second byte of ς. If not,
// this function will have returned after c.next returned false.
c.dst[p+1]++ // ς → σ
}
c.unreadRune()
return true
}
// A case ignorable may also introduce a word break, so we may need
// to continue searching even after detecting a break.
isMid := c.info.isMid()
if (wasMid && isMid) || c.info.isBreak() {
c.isMidWord = false
}
wasMid = isMid
c.copy()
}
return true
}
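// Usage sketch (not part of the original file): a word-final Σ lowers to ς,
// while a Σ with no preceding cased letter lowers to σ (compare the "ΕΣΆΣ Σ"
// entries in the package's test data).
func exampleFinalSigma() (string, string) {
	low := Lower(language.Und)
	return low.String("ΕΣΆΣ"), low.String("Σ") // "εσάς", "σ"
}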
// finalSigmaSpan would be the same as isLower.
// elUpper implements Greek upper casing, which entails removing a predefined
// set of non-blocked modifiers. Note that these accents should not be removed
// for title casing!
// Example: "Οδός" -> "ΟΔΟΣ".
func elUpper(c *context) bool {
// From CLDR:
// [:Greek:] [^[:ccc=Not_Reordered:][:ccc=Above:]]*? { [\u0313\u0314\u0301\u0300\u0306\u0342\u0308\u0304] → ;
// [:Greek:] [^[:ccc=Not_Reordered:][:ccc=Iota_Subscript:]]*? { \u0345 → ;
r, _ := utf8.DecodeRune(c.src[c.pSrc:])
oldPDst := c.pDst
if !upper(c) {
return false
}
if !unicode.Is(unicode.Greek, r) {
return true
}
i := 0
// Take the properties of the uppercased rune that is already written to the
// destination. This saves us the trouble of having to uppercase the
// decomposed rune again.
if b := norm.NFD.Properties(c.dst[oldPDst:]).Decomposition(); b != nil {
// Restore the destination position and process the decomposed rune.
r, sz := utf8.DecodeRune(b)
if r <= 0xFF { // See A.6.1
return true
}
c.pDst = oldPDst
// Insert the first rune and ignore the modifiers. See A.6.2.
c.writeBytes(b[:sz])
i = len(b[sz:]) / 2 // Greek modifiers are always of length 2.
}
for ; i < maxIgnorable && c.next(); i++ {
switch r, _ := utf8.DecodeRune(c.src[c.pSrc:]); r {
// Above and Iota Subscript
case 0x0300, // U+0300 COMBINING GRAVE ACCENT
0x0301, // U+0301 COMBINING ACUTE ACCENT
0x0304, // U+0304 COMBINING MACRON
0x0306, // U+0306 COMBINING BREVE
0x0308, // U+0308 COMBINING DIAERESIS
0x0313, // U+0313 COMBINING COMMA ABOVE
0x0314, // U+0314 COMBINING REVERSED COMMA ABOVE
0x0342, // U+0342 COMBINING GREEK PERISPOMENI
0x0345: // U+0345 COMBINING GREEK YPOGEGRAMMENI
// No-op. Gobble the modifier.
default:
switch v, _ := trie.lookup(c.src[c.pSrc:]); info(v).cccType() {
case cccZero:
c.unreadRune()
return true
// We don't need to test for IotaSubscript as the only rune that
// qualifies (U+0345) was already excluded in the switch statement
// above. See A.4.
case cccAbove:
return c.copy()
default:
// Some other modifier. We're still allowed to gobble Greek
// modifiers after this.
c.copy()
}
}
}
return i == maxIgnorable
}
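// Usage sketch (not part of the original file), matching the example in the
// comment above: Greek upper casing strips the accents.
func exampleElUpper() string {
	return Upper(language.Greek).String("Οδός") // "ΟΔΟΣ"
}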
// TODO: implement elUpperSpan (low-priority: complex and infrequent).
func ltLower(c *context) bool {
// From CLDR:
// # Introduce an explicit dot above when lowercasing capital I's and J's
// # whenever there are more accents above.
// # (of the accents used in Lithuanian: grave, acute, tilde above, and ogonek)
// # 0049; 0069 0307; 0049; 0049; lt More_Above; # LATIN CAPITAL LETTER I
// # 004A; 006A 0307; 004A; 004A; lt More_Above; # LATIN CAPITAL LETTER J
// # 012E; 012F 0307; 012E; 012E; lt More_Above; # LATIN CAPITAL LETTER I WITH OGONEK
// # 00CC; 0069 0307 0300; 00CC; 00CC; lt; # LATIN CAPITAL LETTER I WITH GRAVE
// # 00CD; 0069 0307 0301; 00CD; 00CD; lt; # LATIN CAPITAL LETTER I WITH ACUTE
// # 0128; 0069 0307 0303; 0128; 0128; lt; # LATIN CAPITAL LETTER I WITH TILDE
// ::NFD();
// I } [^[:ccc=Not_Reordered:][:ccc=Above:]]* [:ccc=Above:] → i \u0307;
// J } [^[:ccc=Not_Reordered:][:ccc=Above:]]* [:ccc=Above:] → j \u0307;
// I \u0328 (Į) } [^[:ccc=Not_Reordered:][:ccc=Above:]]* [:ccc=Above:] → i \u0328 \u0307;
// I \u0300 (Ì) → i \u0307 \u0300;
// I \u0301 (Í) → i \u0307 \u0301;
// I \u0303 (Ĩ) → i \u0307 \u0303;
// ::Any-Lower();
// ::NFC();
i := 0
if r := c.src[c.pSrc]; r < utf8.RuneSelf {
lower(c)
if r != 'I' && r != 'J' {
return true
}
} else {
p := norm.NFD.Properties(c.src[c.pSrc:])
if d := p.Decomposition(); len(d) >= 3 && (d[0] == 'I' || d[0] == 'J') {
// UTF-8 optimization: the decomposition will only have an above
// modifier if the last rune of the decomposition is in [U+300-U+311].
// In all other cases, a decomposition starting with I is always
// an I followed by modifiers that are not cased themselves. See A.2.
if d[1] == 0xCC && d[2] <= 0x91 { // A.2.4.
if !c.writeBytes(d[:1]) {
return false
}
c.dst[c.pDst-1] += 'a' - 'A' // lower
// Assumption: modifier never changes on lowercase. See A.1.
// Assumption: all modifiers added have CCC = Above. See A.2.3.
return c.writeString("\u0307") && c.writeBytes(d[1:])
}
// In all other cases the additional modifiers will have a CCC
// that is less than 230 (Above). We will insert the U+0307, if
// needed, after these modifiers so that a string in FCD form
// will remain so. See A.2.2.
lower(c)
i = 1
} else {
return lower(c)
}
}
for ; i < maxIgnorable && c.next(); i++ {
switch c.info.cccType() {
case cccZero:
c.unreadRune()
return true
case cccAbove:
return c.writeString("\u0307") && c.copy() // See A.1.
default:
c.copy() // See A.1.
}
}
return i == maxIgnorable
}
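// Usage sketch (not part of the original file): Lithuanian lowercasing of
// U+00CC (LATIN CAPITAL LETTER I WITH GRAVE) introduces the explicit dot
// above, per the CLDR rules quoted above.
func exampleLtLower() string {
	return Lower(language.Lithuanian).String("\u00cc") // "i\u0307\u0300"
}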
// ltLowerSpan would be the same as isLower.
func ltUpper(f mapFunc) mapFunc {
return func(c *context) bool {
// Unicode:
// 0307; 0307; ; ; lt After_Soft_Dotted; # COMBINING DOT ABOVE
//
// From CLDR:
// # Remove \u0307 following soft-dotteds (i, j, and the like), with possible
// # intervening non-230 marks.
// ::NFD();
// [:Soft_Dotted:] [^[:ccc=Not_Reordered:][:ccc=Above:]]* { \u0307 → ;
// ::Any-Upper();
// ::NFC();
// TODO: See A.5. A soft-dotted rune never has an exception. This would
// allow us to overload the exception bit and encode this property in
// info. Need to measure performance impact of this.
r, _ := utf8.DecodeRune(c.src[c.pSrc:])
oldPDst := c.pDst
if !f(c) {
return false
}
if !unicode.Is(unicode.Soft_Dotted, r) {
return true
}
// We don't need to do an NFD normalization, as a soft-dotted rune never
// contains U+0307. See A.3.
i := 0
for ; i < maxIgnorable && c.next(); i++ {
switch c.info.cccType() {
case cccZero:
c.unreadRune()
return true
case cccAbove:
if c.hasPrefix("\u0307") {
// We don't do a full NFC, but rather combine runes for
// some of the common cases. (Returning NFC or
// preserving normal form is neither a requirement nor
// a possibility anyway).
if !c.next() {
return false
}
if c.dst[oldPDst] == 'I' && c.pDst == oldPDst+1 && c.src[c.pSrc] == 0xcc {
s := ""
switch c.src[c.pSrc+1] {
case 0x80: // U+0300 COMBINING GRAVE ACCENT
s = "\u00cc" // U+00CC LATIN CAPITAL LETTER I WITH GRAVE
case 0x81: // U+0301 COMBINING ACUTE ACCENT
s = "\u00cd" // U+00CD LATIN CAPITAL LETTER I WITH ACUTE
case 0x83: // U+0303 COMBINING TILDE
s = "\u0128" // U+0128 LATIN CAPITAL LETTER I WITH TILDE
case 0x88: // U+0308 COMBINING DIAERESIS
s = "\u00cf" // U+00CF LATIN CAPITAL LETTER I WITH DIAERESIS
default:
}
if s != "" {
c.pDst = oldPDst
return c.writeString(s)
}
}
}
return c.copy()
default:
c.copy()
}
}
return i == maxIgnorable
}
}
// TODO: implement ltUpperSpan (low priority: complex and infrequent).
func aztrUpper(f mapFunc) mapFunc {
return func(c *context) bool {
// i→İ;
if c.src[c.pSrc] == 'i' {
return c.writeString("İ")
}
return f(c)
}
}
func aztrLower(c *context) (done bool) {
// From CLDR:
// # I and i-dotless; I-dot and i are case pairs in Turkish and Azeri
// # 0130; 0069; 0130; 0130; tr; # LATIN CAPITAL LETTER I WITH DOT ABOVE
// İ→i;
// # When lowercasing, remove dot_above in the sequence I + dot_above, which will turn into i.
// # This matches the behavior of the canonically equivalent I-dot_above
// # 0307; ; 0307; 0307; tr After_I; # COMBINING DOT ABOVE
// # When lowercasing, unless an I is before a dot_above, it turns into a dotless i.
// # 0049; 0131; 0049; 0049; tr Not_Before_Dot; # LATIN CAPITAL LETTER I
// I([^[:ccc=Not_Reordered:][:ccc=Above:]]*)\u0307 → i$1 ;
// I→ı ;
// ::Any-Lower();
if c.hasPrefix("\u0130") { // İ
return c.writeString("i")
}
if c.src[c.pSrc] != 'I' {
return lower(c)
}
// We ignore the lower-case I for now, but insert it later when we know
// which form we need.
start := c.pSrc + c.sz
i := 0
Loop:
// We check for up to n ignorables before \u0307. As \u0307 is an
// ignorable as well, n is maxIgnorable-1.
for ; i < maxIgnorable && c.next(); i++ {
switch c.info.cccType() {
case cccAbove:
if c.hasPrefix("\u0307") {
return c.writeString("i") && c.writeBytes(c.src[start:c.pSrc]) // ignore U+0307
}
done = true
break Loop
case cccZero:
c.unreadRune()
done = true
break Loop
default:
// We'll write this rune after we know which starter to use.
}
}
if i == maxIgnorable {
done = true
}
return c.writeString("ı") && c.writeBytes(c.src[start:c.pSrc+c.sz]) && done
}
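// Usage sketch (not part of the original file): Turkish and Azeri map İ to i
// and a bare I to dotless ı, per the CLDR rules quoted above.
func exampleAztrLower() (string, string) {
	low := Lower(language.Turkish)
	return low.String("İ"), low.String("I") // "i", "ı"
}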
// aztrLowerSpan would be the same as isLower.
func nlTitle(c *context) bool {
// From CLDR:
// # Special titlecasing for Dutch initial "ij".
// ::Any-Title();
// # Fix up Ij at the beginning of a "word" (per Any-Title, not UAX #29)
// [:^WB=ALetter:] [:WB=Extend:]* [[:WB=MidLetter:][:WB=MidNumLet:]]? { Ij } → IJ ;
if c.src[c.pSrc] != 'I' && c.src[c.pSrc] != 'i' {
return title(c)
}
if !c.writeString("I") || !c.next() {
return false
}
if c.src[c.pSrc] == 'j' || c.src[c.pSrc] == 'J' {
return c.writeString("J")
}
c.unreadRune()
return true
}
func nlTitleSpan(c *context) bool {
// From CLDR:
// # Special titlecasing for Dutch initial "ij".
// ::Any-Title();
// # Fix up Ij at the beginning of a "word" (per Any-Title, not UAX #29)
// [:^WB=ALetter:] [:WB=Extend:]* [[:WB=MidLetter:][:WB=MidNumLet:]]? { Ij } → IJ ;
if c.src[c.pSrc] != 'I' {
return isTitle(c)
}
if !c.next() || c.src[c.pSrc] == 'j' {
return false
}
if c.src[c.pSrc] != 'J' {
c.unreadRune()
}
return true
}
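// Usage sketch (not part of the original file): Dutch title casing capitalizes
// both letters of a leading "ij" (compare the "ijs" entry in the package's
// test data).
func exampleNlTitle() string {
	return Title(language.Dutch).String("ijs") // "IJs"
}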
// Not part of CLDR, but see http://unicode.org/cldr/trac/ticket/7078.
func afnlRewrite(c *context) {
if c.hasPrefix("'") || c.hasPrefix("") {
c.isMidWord = true
}
}
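// Usage sketch (not part of the original file): the rewrite above keeps the
// letter after a leading apostrophe lowercased for Afrikaans and Dutch titles
// (compare the "wag 'n bietjie" entry in the package's test data).
func exampleAfTitle() string {
	return Title(language.Afrikaans).String("wag 'n bietjie") // "Wag 'n Bietjie"
}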

View File

@ -1,950 +0,0 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package cases
import (
"bytes"
"fmt"
"path"
"strings"
"testing"
"unicode/utf8"
"golang.org/x/text/internal/testtext"
"golang.org/x/text/language"
"golang.org/x/text/transform"
"golang.org/x/text/unicode/norm"
)
type testCase struct {
lang string
src interface{} // string, []string, or nil to skip test
title interface{} // string, []string, or nil to skip test
lower interface{} // string, []string, or nil to skip test
upper interface{} // string, []string, or nil to skip test
opts options
}
var testCases = []testCase{
0: {
lang: "und",
src: "abc aBc ABC abC İsıI ΕΣΆΣ",
title: "Abc Abc Abc Abc İsıi Εσάσ",
lower: "abc abc abc abc i\u0307sıi εσάσ",
upper: "ABC ABC ABC ABC İSII ΕΣΆΣ",
opts: getOpts(HandleFinalSigma(false)),
},
1: {
lang: "und",
src: "abc aBc ABC abC İsıI ΕΣΆΣ Σ _Σ -Σ",
title: "Abc Abc Abc Abc İsıi Εσάς Σ _Σ -Σ",
lower: "abc abc abc abc i\u0307sıi εσάς σ _σ -σ",
upper: "ABC ABC ABC ABC İSII ΕΣΆΣ Σ _Σ -Σ",
opts: getOpts(HandleFinalSigma(true)),
},
2: { // Title cased runes.
lang: supported,
src: "DžA",
title: "Dža",
lower: "dža",
upper: "DŽA",
},
3: {
// Title breaking.
lang: supported,
src: []string{
"FOO CASE TEST",
"DON'T DO THiS",
"χωΡΊΣ χωΡΊΣ^a χωΡΊΣ:a χωΡΊΣ:^a χωΡΊΣ^ όμΩΣ Σ",
"with-hyphens",
"49ers 49ers",
`"capitalize a^a -hyphen 0X _u a_u:a`,
"MidNumLet a.b\u2018c\u2019d\u2024e\ufe52f\uff07f\uff0eg",
"MidNum a,b;c\u037ed\u0589e\u060cf\u2044g\ufe50h",
"\u0345 x\u3031x x\u05d0x \u05d0x a'.a a.a a4,a",
},
title: []string{
"Foo Case Test",
"Don't Do This",
"Χωρίς Χωρίσ^A Χωρίσ:a Χωρίσ:^A Χωρίς^ Όμως Σ",
"With-Hyphens",
// Note that 49Ers is correct according to the spec.
// TODO: provide some option to the user to treat different
// characters as cased.
"49Ers 49Ers",
`"Capitalize A^A -Hyphen 0X _U A_u:a`,
"Midnumlet A.b\u2018c\u2019d\u2024e\ufe52f\uff07f\uff0eg",
"Midnum A,B;C\u037eD\u0589E\u060cF\u2044G\ufe50H",
"\u0399 X\u3031X X\u05d0x \u05d0X A'.A A.a A4,A",
},
},
// TODO: These are known deviations from the options{} Unicode Word Breaking
// Algorithm.
// {
// "und",
// "x_\u3031_x a4,4a",
// "X_\u3031_x A4,4a", // Currently is "X_\U3031_X A4,4A".
// "x_\u3031_x a4,4a",
// "X_\u3031_X A4,4A",
// options{},
// },
4: {
// Tests title options
lang: "und",
src: "abc aBc ABC abC İsıI o'Brien",
title: "Abc ABc ABC AbC İsıI O'Brien",
opts: getOpts(NoLower),
},
5: {
lang: "el",
src: "aBc ΟΔΌΣ Οδός Σο ΣΟ Σ oΣ ΟΣ σ ἕξ \u03ac",
title: "Abc Οδός Οδός Σο Σο Σ Oς Ος Σ Ἕξ \u0386",
lower: "abc οδός οδός σο σο σ oς ος σ ἕξ \u03ac",
upper: "ABC ΟΔΟΣ ΟΔΟΣ ΣΟ ΣΟ Σ OΣ ΟΣ Σ ΕΞ \u0391", // Uppercase removes accents
},
6: {
lang: "tr az",
src: "Isiİ İsıI I\u0307sIiİ İsıI\u0307 I\u0300\u0307",
title: "Isii İsıı I\u0307sıii İsıi I\u0300\u0307",
lower: "ısii isıı isıii isıi \u0131\u0300\u0307",
upper: "ISİİ İSII I\u0307SIİİ İSII\u0307 I\u0300\u0307",
},
7: {
lang: "lt",
src: "I Ï J J̈ Į Į̈ Ì Í Ĩ xi̇̈ xj̇̈ xį̇̈ xi̇̀ xi̇́ xi̇̃ XI XÏ XJ XJ̈ XĮ XĮ̈ XI̟̤",
title: "I Ï J J̈ Į Į̈ Ì Í Ĩ Xi̇̈ Xj̇̈ Xį̇̈ Xi̇̀ Xi̇́ Xi̇̃ Xi Xi̇̈ Xj Xj̇̈ Xį Xį̇̈ Xi̟̤",
lower: "i i̇̈ j j̇̈ į į̇̈ i̇̀ i̇́ i̇̃ xi̇̈ xj̇̈ xį̇̈ xi̇̀ xi̇́ xi̇̃ xi xi̇̈ xj xj̇̈ xį xį̇̈ xi̟̤",
upper: "I Ï J J̈ Į Į̈ Ì Í Ĩ XÏ XJ̈ XĮ̈ XÌ XÍ XĨ XI XÏ XJ XJ̈ XĮ XĮ̈ XI̟̤",
},
8: {
lang: "lt",
src: "\u012e\u0300 \u00cc i\u0307\u0300 i\u0307\u0301 i\u0307\u0303 i\u0307\u0308 i\u0300\u0307",
title: "\u012e\u0300 \u00cc \u00cc \u00cd \u0128 \u00cf I\u0300\u0307",
lower: "\u012f\u0307\u0300 i\u0307\u0300 i\u0307\u0300 i\u0307\u0301 i\u0307\u0303 i\u0307\u0308 i\u0300\u0307",
upper: "\u012e\u0300 \u00cc \u00cc \u00cd \u0128 \u00cf I\u0300\u0307",
},
9: {
lang: "nl",
src: "ijs IJs Ij Ijs İJ İJs aa aA 'ns 'S",
title: "IJs IJs IJ IJs İj İjs Aa Aa 'ns 's",
},
// Note: this specification is not currently part of CLDR. The same holds
// for the leading apostrophe handling for Dutch.
// See http://unicode.org/cldr/trac/ticket/7078.
10: {
lang: "af",
src: "wag 'n bietjie",
title: "Wag 'n Bietjie",
lower: "wag 'n bietjie",
upper: "WAG 'N BIETJIE",
},
}
func TestCaseMappings(t *testing.T) {
for i, tt := range testCases {
src, ok := tt.src.([]string)
if !ok {
src = strings.Split(tt.src.(string), " ")
}
for _, lang := range strings.Split(tt.lang, " ") {
tag := language.MustParse(lang)
testEntry := func(name string, mk func(language.Tag, options) transform.SpanningTransformer, gold interface{}) {
c := Caser{mk(tag, tt.opts)}
if gold != nil {
wants, ok := gold.([]string)
if !ok {
wants = strings.Split(gold.(string), " ")
}
for j, want := range wants {
if got := c.String(src[j]); got != want {
t.Errorf("%d:%s:\n%s.String(%+q):\ngot %+q;\nwant %+q", i, lang, name, src[j], got, want)
}
}
}
dst := make([]byte, 256) // big enough to hold any result
src := []byte(strings.Join(src, " "))
v := testtext.AllocsPerRun(20, func() {
c.Transform(dst, src, true)
})
if v > 1.1 {
t.Errorf("%d:%s:\n%s: number of allocs was %f; want 0", i, lang, name, v)
}
}
testEntry("Upper", makeUpper, tt.upper)
testEntry("Lower", makeLower, tt.lower)
testEntry("Title", makeTitle, tt.title)
}
}
}
// TestAlloc tests that some mapping methods do not cause any allocation.
func TestAlloc(t *testing.T) {
dst := make([]byte, 256) // big enough to hold any result
src := []byte(txtNonASCII)
for i, f := range []func() Caser{
func() Caser { return Upper(language.Und) },
func() Caser { return Lower(language.Und) },
func() Caser { return Lower(language.Und, HandleFinalSigma(false)) },
// TODO: use a shared copy for these casers as well, in order of
// importance, starting with the most important:
// func() Caser { return Title(language.Und) },
// func() Caser { return Title(language.Und, HandleFinalSigma(false)) },
} {
testtext.Run(t, "", func(t *testing.T) {
var c Caser
v := testtext.AllocsPerRun(10, func() {
c = f()
})
if v > 0 {
// TODO: Right now only Upper has 1 allocation. Special-case Lower
// and Title as well to have less allocations for the root locale.
t.Errorf("%d:init: number of allocs was %f; want 0", i, v)
}
v = testtext.AllocsPerRun(2, func() {
c.Transform(dst, src, true)
})
if v > 0 {
t.Errorf("%d:transform: number of allocs was %f; want 0", i, v)
}
})
}
}
func testHandover(t *testing.T, c Caser, src string) {
want := c.String(src)
// Find the common prefix.
pSrc := 0
for ; pSrc < len(src) && pSrc < len(want) && want[pSrc] == src[pSrc]; pSrc++ {
}
// Test handover for each substring of the prefix.
for i := 0; i < pSrc; i++ {
testtext.Run(t, fmt.Sprint("interleave/", i), func(t *testing.T) {
dst := make([]byte, 4*len(src))
c.Reset()
nSpan, _ := c.Span([]byte(src[:i]), false)
copy(dst, src[:nSpan])
nTransform, _, _ := c.Transform(dst[nSpan:], []byte(src[nSpan:]), true)
got := string(dst[:nSpan+nTransform])
if got != want {
t.Errorf("full string: got %q; want %q", got, want)
}
})
}
}
func TestHandover(t *testing.T) {
testCases := []struct {
desc string
t Caser
first, second string
}{{
"title/nosigma/single midword",
Title(language.Und, HandleFinalSigma(false)),
"A.", "a",
}, {
"title/nosigma/single midword",
Title(language.Und, HandleFinalSigma(false)),
"A", ".a",
}, {
"title/nosigma/double midword",
Title(language.Und, HandleFinalSigma(false)),
"A..", "a",
}, {
"title/nosigma/double midword",
Title(language.Und, HandleFinalSigma(false)),
"A.", ".a",
}, {
"title/nosigma/double midword",
Title(language.Und, HandleFinalSigma(false)),
"A", "..a",
}, {
"title/sigma/single midword",
Title(language.Und),
"ΟΣ.", "a",
}, {
"title/sigma/single midword",
Title(language.Und),
"ΟΣ", ".a",
}, {
"title/sigma/double midword",
Title(language.Und),
"ΟΣ..", "a",
}, {
"title/sigma/double midword",
Title(language.Und),
"ΟΣ.", ".a",
}, {
"title/sigma/double midword",
Title(language.Und),
"ΟΣ", "..a",
}, {
"title/af/leading apostrophe",
Title(language.Afrikaans),
"'", "n bietje",
}}
for _, tc := range testCases {
testtext.Run(t, tc.desc, func(t *testing.T) {
src := tc.first + tc.second
want := tc.t.String(src)
tc.t.Reset()
n, _ := tc.t.Span([]byte(tc.first), false)
dst := make([]byte, len(want))
copy(dst, tc.first[:n])
nDst, _, _ := tc.t.Transform(dst[n:], []byte(src[n:]), true)
got := string(dst[:n+nDst])
if got != want {
t.Errorf("got %q; want %q", got, want)
}
})
}
}
// minBufSize is the size of the buffer by which the casing operations in
// this package are guaranteed to make progress.
const minBufSize = norm.MaxSegmentSize
type bufferTest struct {
desc, src, want string
firstErr error
dstSize, srcSize int
t transform.SpanningTransformer
}
var bufferTests []bufferTest
func init() {
bufferTests = []bufferTest{{
desc: "und/upper/short dst",
src: "abcdefg",
want: "ABCDEFG",
firstErr: transform.ErrShortDst,
dstSize: 3,
srcSize: minBufSize,
t: Upper(language.Und),
}, {
desc: "und/upper/short src",
src: "123é56",
want: "123É56",
firstErr: transform.ErrShortSrc,
dstSize: 4,
srcSize: 4,
t: Upper(language.Und),
}, {
desc: "und/upper/no error on short",
src: "12",
want: "12",
firstErr: nil,
dstSize: 1,
srcSize: 1,
t: Upper(language.Und),
}, {
desc: "und/lower/short dst",
src: "ABCDEFG",
want: "abcdefg",
firstErr: transform.ErrShortDst,
dstSize: 3,
srcSize: minBufSize,
t: Lower(language.Und),
}, {
desc: "und/lower/short src",
src: "123É56",
want: "123é56",
firstErr: transform.ErrShortSrc,
dstSize: 4,
srcSize: 4,
t: Lower(language.Und),
}, {
desc: "und/lower/no error on short",
src: "12",
want: "12",
firstErr: nil,
dstSize: 1,
srcSize: 1,
t: Lower(language.Und),
}, {
desc: "und/lower/simple (no final sigma)",
src: "ΟΣ ΟΣΣ",
want: "οσ οσσ",
dstSize: minBufSize,
srcSize: minBufSize,
t: Lower(language.Und, HandleFinalSigma(false)),
}, {
desc: "und/title/simple (no final sigma)",
src: "ΟΣ ΟΣΣ",
want: "Οσ Οσσ",
dstSize: minBufSize,
srcSize: minBufSize,
t: Title(language.Und, HandleFinalSigma(false)),
}, {
desc: "und/title/final sigma: no error",
src: "ΟΣ",
want: "Ος",
dstSize: minBufSize,
srcSize: minBufSize,
t: Title(language.Und),
}, {
desc: "und/title/final sigma: short source",
src: "ΟΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣ",
want: "Οσσσσσσσσσσσσσσσσσσσσσσσσσσσσσσσσσς",
firstErr: transform.ErrShortSrc,
dstSize: minBufSize,
srcSize: 10,
t: Title(language.Und),
}, {
desc: "und/title/final sigma: short destination 1",
src: "ΟΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣ",
want: "Οσσσσσσσσσσσσσσσσσσσσσσσσσσσσσσσσσς",
firstErr: transform.ErrShortDst,
dstSize: 10,
srcSize: minBufSize,
t: Title(language.Und),
}, {
desc: "und/title/final sigma: short destination 2",
src: "ΟΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣ",
want: "Οσσσσσσσσσσσσσσσσσσσσσσσσσσσσσσσσσς",
firstErr: transform.ErrShortDst,
dstSize: 9,
srcSize: minBufSize,
t: Title(language.Und),
}, {
desc: "und/title/final sigma: short destination 3",
src: "ΟΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣΣ",
want: "Οσσσσσσσσσσσσσσσσσσσσσσσσσσσσσσσσσς",
firstErr: transform.ErrShortDst,
dstSize: 8,
srcSize: minBufSize,
t: Title(language.Und),
}, {
desc: "und/title/clipped UTF-8 rune",
src: "σσσσσσσσσσσ",
want: "Σσσσσσσσσσσ",
firstErr: transform.ErrShortSrc,
dstSize: minBufSize,
srcSize: 5,
t: Title(language.Und),
}, {
desc: "und/title/clipped UTF-8 rune atEOF",
src: "σσσ" + string([]byte{0xCF}),
want: "Σσσ" + string([]byte{0xCF}),
dstSize: minBufSize,
srcSize: minBufSize,
t: Title(language.Und),
}, {
// Note: the choice to change the final sigma at the end in case of
// too many case ignorables is arbitrary. The main reason for this
// choice is that it results in simpler code.
desc: "und/title/final sigma: max ignorables",
src: "ΟΣ" + strings.Repeat(".", maxIgnorable) + "a",
want: "Οσ" + strings.Repeat(".", maxIgnorable) + "A",
dstSize: minBufSize,
srcSize: minBufSize,
t: Title(language.Und),
}, {
// Note: the choice to change the final sigma at the end in case of
// too many case ignorables is arbitrary. The main reason for this
// choice is that it results in simpler code.
desc: "und/title/long string",
src: "AA" + strings.Repeat(".", maxIgnorable+1) + "a",
want: "Aa" + strings.Repeat(".", maxIgnorable+1) + "A",
dstSize: minBufSize,
srcSize: len("AA" + strings.Repeat(".", maxIgnorable+1)),
t: Title(language.Und),
}, {
// Note: the choice to change the final sigma at the end in case of
// too many case ignorables is arbitrary. The main reason for this
// choice is that it results in simpler code.
desc: "und/title/final sigma: too many ignorables",
src: "ΟΣ" + strings.Repeat(".", maxIgnorable+1) + "a",
want: "Ος" + strings.Repeat(".", maxIgnorable+1) + "A",
dstSize: minBufSize,
srcSize: len("ΟΣ" + strings.Repeat(".", maxIgnorable+1)),
t: Title(language.Und),
}, {
desc: "und/title/final sigma: apostrophe",
src: "ΟΣ''a",
want: "Οσ''A",
dstSize: minBufSize,
srcSize: minBufSize,
t: Title(language.Und),
}, {
desc: "el/upper/max ignorables",
src: "ο" + strings.Repeat("\u0321", maxIgnorable-1) + "\u0313",
want: "Ο" + strings.Repeat("\u0321", maxIgnorable-1),
dstSize: minBufSize,
srcSize: minBufSize,
t: Upper(language.Greek),
}, {
desc: "el/upper/too many ignorables",
src: "ο" + strings.Repeat("\u0321", maxIgnorable) + "\u0313",
want: "Ο" + strings.Repeat("\u0321", maxIgnorable) + "\u0313",
dstSize: minBufSize,
srcSize: len("ο" + strings.Repeat("\u0321", maxIgnorable)),
t: Upper(language.Greek),
}, {
desc: "el/upper/short dst",
src: "123ο",
want: "123Ο",
firstErr: transform.ErrShortDst,
dstSize: 3,
srcSize: minBufSize,
t: Upper(language.Greek),
}, {
desc: "lt/lower/max ignorables",
src: "I" + strings.Repeat("\u0321", maxIgnorable-1) + "\u0300",
want: "i" + strings.Repeat("\u0321", maxIgnorable-1) + "\u0307\u0300",
dstSize: minBufSize,
srcSize: minBufSize,
t: Lower(language.Lithuanian),
}, {
desc: "lt/lower/too many ignorables",
src: "I" + strings.Repeat("\u0321", maxIgnorable) + "\u0300",
want: "i" + strings.Repeat("\u0321", maxIgnorable) + "\u0300",
dstSize: minBufSize,
srcSize: len("I" + strings.Repeat("\u0321", maxIgnorable)),
t: Lower(language.Lithuanian),
}, {
desc: "lt/lower/decomposition with short dst buffer 1",
src: "aaaaa\u00cc", // U+00CC LATIN CAPITAL LETTER I GRAVE
firstErr: transform.ErrShortDst,
want: "aaaaai\u0307\u0300",
dstSize: 5,
srcSize: minBufSize,
t: Lower(language.Lithuanian),
}, {
desc: "lt/lower/decomposition with short dst buffer 2",
src: "aaaa\u00cc", // U+00CC LATIN CAPITAL LETTER I GRAVE
firstErr: transform.ErrShortDst,
want: "aaaai\u0307\u0300",
dstSize: 5,
srcSize: minBufSize,
t: Lower(language.Lithuanian),
}, {
desc: "lt/upper/max ignorables",
src: "i" + strings.Repeat("\u0321", maxIgnorable-1) + "\u0307\u0300",
want: "I" + strings.Repeat("\u0321", maxIgnorable-1) + "\u0300",
dstSize: minBufSize,
srcSize: minBufSize,
t: Upper(language.Lithuanian),
}, {
desc: "lt/upper/too many ignorables",
src: "i" + strings.Repeat("\u0321", maxIgnorable) + "\u0307\u0300",
want: "I" + strings.Repeat("\u0321", maxIgnorable) + "\u0307\u0300",
dstSize: minBufSize,
srcSize: len("i" + strings.Repeat("\u0321", maxIgnorable)),
t: Upper(language.Lithuanian),
}, {
desc: "lt/upper/short dst",
src: "12i\u0307\u0300",
want: "12\u00cc",
firstErr: transform.ErrShortDst,
dstSize: 3,
srcSize: minBufSize,
t: Upper(language.Lithuanian),
}, {
desc: "aztr/lower/max ignorables",
src: "I" + strings.Repeat("\u0321", maxIgnorable-1) + "\u0307\u0300",
want: "i" + strings.Repeat("\u0321", maxIgnorable-1) + "\u0300",
dstSize: minBufSize,
srcSize: minBufSize,
t: Lower(language.Turkish),
}, {
desc: "aztr/lower/too many ignorables",
src: "I" + strings.Repeat("\u0321", maxIgnorable) + "\u0307\u0300",
want: "\u0131" + strings.Repeat("\u0321", maxIgnorable) + "\u0307\u0300",
dstSize: minBufSize,
srcSize: len("I" + strings.Repeat("\u0321", maxIgnorable)),
t: Lower(language.Turkish),
}, {
desc: "nl/title/pre-IJ cutoff",
src: " ij",
want: " IJ",
firstErr: transform.ErrShortDst,
dstSize: 2,
srcSize: minBufSize,
t: Title(language.Dutch),
}, {
desc: "nl/title/mid-IJ cutoff",
src: " ij",
want: " IJ",
firstErr: transform.ErrShortDst,
dstSize: 3,
srcSize: minBufSize,
t: Title(language.Dutch),
}, {
desc: "af/title/apostrophe",
src: "'n bietje",
want: "'n Bietje",
firstErr: transform.ErrShortDst,
dstSize: 3,
srcSize: minBufSize,
t: Title(language.Afrikaans),
}}
}
func TestShortBuffersAndOverflow(t *testing.T) {
for i, tt := range bufferTests {
testtext.Run(t, tt.desc, func(t *testing.T) {
buf := make([]byte, tt.dstSize)
got := []byte{}
var nSrc, nDst int
var err error
for p := 0; p < len(tt.src); p += nSrc {
q := p + tt.srcSize
if q > len(tt.src) {
q = len(tt.src)
}
nDst, nSrc, err = tt.t.Transform(buf, []byte(tt.src[p:q]), q == len(tt.src))
got = append(got, buf[:nDst]...)
if p == 0 && err != tt.firstErr {
t.Errorf("%d:%s:\n error was %v; want %v", i, tt.desc, err, tt.firstErr)
break
}
}
if string(got) != tt.want {
t.Errorf("%d:%s:\ngot %+q;\nwant %+q", i, tt.desc, got, tt.want)
}
testHandover(t, Caser{tt.t}, tt.src)
})
}
}
func TestSpan(t *testing.T) {
for _, tt := range []struct {
desc string
src string
want string
atEOF bool
err error
t Caser
}{{
desc: "und/upper/basic",
src: "abcdefg",
want: "",
atEOF: true,
err: transform.ErrEndOfSpan,
t: Upper(language.Und),
}, {
desc: "und/upper/short src",
src: "123É"[:4],
want: "123",
atEOF: false,
err: transform.ErrShortSrc,
t: Upper(language.Und),
}, {
desc: "und/upper/no error on short",
src: "12",
want: "12",
atEOF: false,
t: Upper(language.Und),
}, {
desc: "und/lower/basic",
src: "ABCDEFG",
want: "",
atEOF: true,
err: transform.ErrEndOfSpan,
t: Lower(language.Und),
}, {
desc: "und/lower/short src num",
src: "123é"[:4],
want: "123",
atEOF: false,
err: transform.ErrShortSrc,
t: Lower(language.Und),
}, {
desc: "und/lower/short src greek",
src: "αβγé"[:7],
want: "αβγ",
atEOF: false,
err: transform.ErrShortSrc,
t: Lower(language.Und),
}, {
desc: "und/lower/no error on short",
src: "12",
want: "12",
atEOF: false,
t: Lower(language.Und),
}, {
desc: "und/lower/simple (no final sigma)",
src: "ος οσσ",
want: "οσ οσσ",
atEOF: true,
t: Lower(language.Und, HandleFinalSigma(false)),
}, {
desc: "und/title/simple (no final sigma)",
src: "Οσ Οσσ",
want: "Οσ Οσσ",
atEOF: true,
t: Title(language.Und, HandleFinalSigma(false)),
}, {
desc: "und/lower/final sigma: no error",
src: "οΣ", // Oς
want: "ο", // Oς
err: transform.ErrEndOfSpan,
t: Lower(language.Und),
}, {
desc: "und/title/final sigma: no error",
src: "ΟΣ", // Oς
want: "Ο", // Oς
err: transform.ErrEndOfSpan,
t: Title(language.Und),
}, {
desc: "und/title/final sigma: no short source!",
src: "ΟσσσσσσσσσσσσσσσσσσσσσσσσσσσσσσσσσΣ",
want: "Οσσσσσσσσσσσσσσσσσσσσσσσσσσσσσσσσσ",
err: transform.ErrEndOfSpan,
t: Title(language.Und),
}, {
desc: "und/title/clipped UTF-8 rune",
src: "Σσ" + string([]byte{0xCF}),
want: "Σσ",
atEOF: false,
err: transform.ErrShortSrc,
t: Title(language.Und),
}, {
desc: "und/title/clipped UTF-8 rune atEOF",
src: "Σσσ" + string([]byte{0xCF}),
want: "Σσσ" + string([]byte{0xCF}),
atEOF: true,
t: Title(language.Und),
}, {
// Note: the choice to change the final sigma at the end in case of
// too many case ignorables is arbitrary. The main reason for this
// choice is that it results in simpler code.
desc: "und/title/long string",
src: "A" + strings.Repeat("a", maxIgnorable+5),
want: "A" + strings.Repeat("a", maxIgnorable+5),
t: Title(language.Und),
}, {
// Note: the choice to change the final sigma at the end in case of
// too many case ignorables is arbitrary. The main reason for this
// choice is that it results in simpler code.
desc: "und/title/cyrillic",
src: "При",
want: "При",
atEOF: true,
t: Title(language.Und, HandleFinalSigma(false)),
}, {
// Note: the choice to change the final sigma at the end in case of
// too many case ignorables is arbitrary. The main reason for this
// choice is that it results in simpler code.
desc: "und/title/final sigma: max ignorables",
src: "Οσ" + strings.Repeat(".", maxIgnorable) + "A",
want: "Οσ" + strings.Repeat(".", maxIgnorable) + "A",
t: Title(language.Und),
}, {
desc: "el/upper/max ignorables - not implemented",
src: "Ο" + strings.Repeat("\u0321", maxIgnorable-1) + "\u0313",
want: "",
err: transform.ErrEndOfSpan,
t: Upper(language.Greek),
}, {
desc: "el/upper/too many ignorables - not implemented",
src: "Ο" + strings.Repeat("\u0321", maxIgnorable) + "\u0313",
want: "",
err: transform.ErrEndOfSpan,
t: Upper(language.Greek),
}, {
desc: "el/upper/short dst",
src: "123ο",
want: "",
err: transform.ErrEndOfSpan,
t: Upper(language.Greek),
}, {
desc: "lt/lower/max ignorables",
src: "i" + strings.Repeat("\u0321", maxIgnorable-1) + "\u0307\u0300",
want: "i" + strings.Repeat("\u0321", maxIgnorable-1) + "\u0307\u0300",
t: Lower(language.Lithuanian),
}, {
desc: "lt/lower/isLower",
src: "I" + strings.Repeat("\u0321", maxIgnorable) + "\u0300",
want: "",
err: transform.ErrEndOfSpan,
t: Lower(language.Lithuanian),
}, {
desc: "lt/lower/not identical",
src: "aaaaa\u00cc", // U+00CC LATIN CAPITAL LETTER I GRAVE
err: transform.ErrEndOfSpan,
want: "aaaaa",
t: Lower(language.Lithuanian),
}, {
desc: "lt/lower/identical",
src: "aaaai\u0307\u0300", // U+00CC LATIN CAPITAL LETTER I GRAVE
want: "aaaai\u0307\u0300",
t: Lower(language.Lithuanian),
}, {
desc: "lt/upper/not implemented",
src: "I" + strings.Repeat("\u0321", maxIgnorable-1) + "\u0300",
want: "",
err: transform.ErrEndOfSpan,
t: Upper(language.Lithuanian),
}, {
desc: "lt/upper/not implemented, ascii",
src: "AB",
want: "",
err: transform.ErrEndOfSpan,
t: Upper(language.Lithuanian),
}, {
desc: "nl/title/pre-IJ cutoff",
src: " IJ",
want: " IJ",
t: Title(language.Dutch),
}, {
desc: "nl/title/mid-IJ cutoff",
src: " Ia",
want: " Ia",
t: Title(language.Dutch),
}, {
desc: "af/title/apostrophe",
src: "'n Bietje",
want: "'n Bietje",
t: Title(language.Afrikaans),
}, {
desc: "af/title/apostrophe-incorrect",
src: "'N Bietje",
// The Single_Quote (a MidWord) needs to be retained as unspanned so
// that a successive call to Transform can detect that N should not be
// capitalized.
want: "",
err: transform.ErrEndOfSpan,
t: Title(language.Afrikaans),
}} {
testtext.Run(t, tt.desc, func(t *testing.T) {
for p := 0; p < len(tt.want); p += utf8.RuneLen([]rune(tt.src[p:])[0]) {
tt.t.Reset()
n, err := tt.t.Span([]byte(tt.src[:p]), false)
if err != nil && err != transform.ErrShortSrc {
t.Errorf("early failure:Span(%+q): %v (%d < %d)", tt.src[:p], err, n, len(tt.want))
break
}
}
tt.t.Reset()
n, err := tt.t.Span([]byte(tt.src), tt.atEOF)
if n != len(tt.want) || err != tt.err {
t.Errorf("Span(%+q, %v): got %d, %v; want %d, %v", tt.src, tt.atEOF, n, err, len(tt.want), tt.err)
}
testHandover(t, tt.t, tt.src)
})
}
}
var txtASCII = strings.Repeat("The quick brown fox jumps over the lazy dog. ", 50)
// Taken from http://creativecommons.org/licenses/by-sa/3.0/vn/
const txt_vn = `Với các điều kiện sau: Ghi nhận công của tác giả. Nếu bạn sử
dụng, chuyển đổi, hoặc xây dựng dự án từ nội dung được chia sẻ này, bạn phải áp
dụng giấy phép này hoặc một giấy phép khác có các điều khoản tương tự như giấy
phép này cho dự án của bạn. Hiểu rằng: Miễn — Bất kỳ các điều kiện nào trên đây
cũng có thể được miễn bỏ nếu bạn được sự cho phép của người sở hữu bản quyền.
Phạm vi công chúng — Khi tác phẩm hoặc bất kỳ chương nào của tác phẩm đã trong
vùng dành cho công chúng theo quy định của pháp luật thì tình trạng của nó không
bị ảnh hưởng bởi giấy phép trong bất kỳ trường hợp nào.`
// http://creativecommons.org/licenses/by-sa/2.5/cn/
const txt_cn = `您可以自由: 复制、发行、展览、表演、放映、
广播或通过信息网络传播本作品 创作演绎作品
对本作品进行商业性使用 惟须遵守下列条件:
署名 — 您必须按照作者或者许可人指定的方式对作品进行署名。
相同方式共享 — 如果您改变、转换本作品或者以本作品为基础进行创作,
您只能采用与本协议相同的许可协议发布基于本作品的演绎作品。`
// Taken from http://creativecommons.org/licenses/by-sa/1.0/deed.ru
const txt_ru = `При обязательном соблюдении следующих условий: Attribution — Вы
должны атрибутировать произведение (указывать автора и источник) в порядке,
предусмотренном автором или лицензиаром (но только так, чтобы никоим образом не
подразумевалось, что они поддерживают вас или использование вами данного
произведения). Υπό τις ακόλουθες προϋποθέσεις:`
// Taken from http://creativecommons.org/licenses/by-sa/3.0/gr/
const txt_gr = `Αναφορά Δημιουργού — Θα πρέπει να κάνετε την αναφορά στο έργο με
τον τρόπο που έχει οριστεί από το δημιουργό ή το χορηγούντο την άδεια (χωρίς
όμως να εννοείται με οποιονδήποτε τρόπο ότι εγκρίνουν εσάς ή τη χρήση του έργου
από εσάς). Παρόμοια Διανομή — Εάν αλλοιώσετε, τροποποιήσετε ή δημιουργήσετε
περαιτέρω βασισμένοι στο έργο θα μπορείτε να διανέμετε το έργο που θα προκύψει
μόνο με την ίδια ή παρόμοια άδεια.`
const txtNonASCII = txt_vn + txt_cn + txt_ru + txt_gr
// TODO: Improve ASCII performance.
func BenchmarkCasers(b *testing.B) {
for _, s := range []struct{ name, text string }{
{"ascii", txtASCII},
{"nonASCII", txtNonASCII},
{"short", "При"},
} {
src := []byte(s.text)
// Measure case mappings in bytes package for comparison.
for _, f := range []struct {
name string
fn func(b []byte) []byte
}{
{"lower", bytes.ToLower},
{"title", bytes.ToTitle},
{"upper", bytes.ToUpper},
} {
testtext.Bench(b, path.Join(s.name, "bytes", f.name), func(b *testing.B) {
b.SetBytes(int64(len(src)))
for i := 0; i < b.N; i++ {
f.fn(src)
}
})
}
for _, t := range []struct {
name string
caser transform.SpanningTransformer
}{
{"fold/default", Fold()},
{"upper/default", Upper(language.Und)},
{"lower/sigma", Lower(language.Und)},
{"lower/simple", Lower(language.Und, HandleFinalSigma(false))},
{"title/sigma", Title(language.Und)},
{"title/simple", Title(language.Und, HandleFinalSigma(false))},
} {
c := Caser{t.caser}
dst := make([]byte, len(src))
testtext.Bench(b, path.Join(s.name, t.name, "transform"), func(b *testing.B) {
b.SetBytes(int64(len(src)))
for i := 0; i < b.N; i++ {
c.Reset()
c.Transform(dst, src, true)
}
})
// No need to check span for simple cases, as they will be the same
// as sigma.
if strings.HasSuffix(t.name, "/simple") {
continue
}
spanSrc := c.Bytes(src)
testtext.Bench(b, path.Join(s.name, t.name, "span"), func(b *testing.B) {
c.Reset()
if n, _ := c.Span(spanSrc, true); n < len(spanSrc) {
b.Fatalf("spanner is not recognizing text %q as done (at %d)", spanSrc, n)
}
b.SetBytes(int64(len(spanSrc)))
for i := 0; i < b.N; i++ {
c.Reset()
c.Span(spanSrc, true)
}
})
}
}
}

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@ -1,215 +0,0 @@
// Code generated by running "go generate" in golang.org/x/text. DO NOT EDIT.
package cases
// This file contains definitions for interpreting the trie value of the case
// trie generated by "go run gen*.go". It is shared by both the generator
// program and the resultant package. Sharing is achieved by the generator
// copying gen_trieval.go to trieval.go and changing what's above this comment.
// info holds case information for a single rune. It is the value returned
// by a trie lookup. Most mapping information can be stored in a single 16-bit
// value. If not, for example when a rune is mapped to multiple runes, the value
// stores some basic case data and an index into an array with additional data.
//
// The per-rune values have the following format:
//
// if (exception) {
// 15..5 unsigned exception index
// 4 unused
// } else {
// 15..8 XOR pattern or index to XOR pattern for case mapping
// Only 13..8 are used for XOR patterns.
// 7 inverseFold (fold to upper, not to lower)
// 6 index: interpret the XOR pattern as an index
// or isMid if case mode is cIgnorableUncased.
// 5..4 CCC: zero (normal or break), above or other
// }
// 3 exception: interpret this value as an exception index
// (TODO: is this bit necessary? Probably implied from case mode.)
// 2..0 case mode
//
// For the non-exceptional cases, a rune must be either uncased, lowercase or
// uppercase. If the rune is cased, the XOR pattern maps either a lowercase
// rune to uppercase or an uppercase rune to lowercase (applied to the 10
// least-significant bits of the rune).
//
// See the definitions below for a more detailed description of the various
// bits.
type info uint16
const (
casedMask = 0x0003
fullCasedMask = 0x0007
ignorableMask = 0x0006
ignorableValue = 0x0004
inverseFoldBit = 1 << 7
isMidBit = 1 << 6
exceptionBit = 1 << 3
exceptionShift = 5
numExceptionBits = 11
xorIndexBit = 1 << 6
xorShift = 8
// There is no mapping if all xor bits and the exception bit are zero.
hasMappingMask = 0xff80 | exceptionBit
)
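// Illustrative sketch, not part of the generated tables: given the layout
// documented above, the exception bit and the exception index can be read
// from an info value as follows. Both helper names are hypothetical.
func hasException(c info) bool {
	return c&exceptionBit != 0
}

func exceptionIdx(c info) uint16 {
	return uint16(c) >> exceptionShift // bits 15..5 hold the unsigned index
}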
// The case mode bits encode the case type of a rune. This includes uncased,
// title, upper and lower case and case ignorable. (For a definition of these
// terms see Chapter 3 of The Unicode Standard Core Specification.) In some rare
// cases, a rune can be both cased and case-ignorable. This is encoded by
// cIgnorableCased. A rune of this type is always lower case. Some runes are
// cased while not having a mapping.
//
// A common pattern for scripts in the Unicode standard is for upper and lower
// case runes to alternate for increasing rune values (e.g. the accented Latin
// ranges starting from U+0100 and U+1E00 among others and some Cyrillic
// characters). We use this property by defining a cXORCase mode, where the case
// mode (always upper or lower case) is derived from the rune value. As the XOR
// pattern for case mappings is often identical for successive runes, using
// cXORCase can result in large series of identical trie values. This, in turn,
// allows us to better compress the trie blocks.
const (
cUncased info = iota // 000
cTitle // 001
cLower // 010
cUpper // 011
cIgnorableUncased // 100
cIgnorableCased // 101 // lower case if mappings exist
cXORCase // 11x // case is cLower | ((rune&1) ^ x)
maxCaseMode = cUpper
)
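// Sketch only: for the alternating upper/lower ranges described above, the
// XOR pattern is applied to the 10 least-significant bits of the rune to
// move between the two case forms. applyXOR is hypothetical.
func applyXOR(r rune, pattern rune) rune {
	return r ^ (pattern & 0x3FF) // only the low 10 bits participate
}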
func (c info) isCased() bool {
return c&casedMask != 0
}
func (c info) isCaseIgnorable() bool {
return c&ignorableMask == ignorableValue
}
func (c info) isNotCasedAndNotCaseIgnorable() bool {
return c&fullCasedMask == 0
}
func (c info) isCaseIgnorableAndNotCased() bool {
return c&fullCasedMask == cIgnorableUncased
}
func (c info) isMid() bool {
return c&(fullCasedMask|isMidBit) == isMidBit|cIgnorableUncased
}
// The case mapping implementation will need to know about various Canonical
// Combining Class (CCC) values. We encode two of these in the trie value:
// cccZero (0) and cccAbove (230). If the value is cccOther, it means that
// CCC(r) > 0, but not 230. A value of cccBreak means that CCC(r) == 0 and that
// the rune also has the break category Break (see below).
const (
cccBreak info = iota << 4
cccZero
cccAbove
cccOther
cccMask = cccBreak | cccZero | cccAbove | cccOther
)
const (
starter = 0
above = 230
iotaSubscript = 240
)
// The exceptions slice holds data that does not fit in a normal info entry.
// The entry is pointed to by the exception index in an entry. It has the
// following format:
//
// Header
// byte 0:
// 7..6 unused
// 5..4 CCC type (same bits as entry)
// 3 unused
// 2..0 length of fold
//
// byte 1:
// 7..6 unused
// 5..3 length of 1st mapping of case type
// 2..0 length of 2nd mapping of case type
//
// case 1st 2nd
// lower -> upper, title
// upper -> lower, title
// title -> lower, upper
//
// Lengths with the value 0x7 indicate no value and imply no change.
// A length of 0 indicates a mapping to the zero-length string.
//
// Body bytes:
// case folding bytes
// lowercase mapping bytes
// uppercase mapping bytes
// titlecase mapping bytes
// closure mapping bytes (for NFKC_Casefold). (TODO)
//
// Fallbacks:
// missing fold -> lower
// missing title -> upper
// all missing -> original rune
//
// exceptions starts with a dummy byte to enforce that there is no zero index
// value.
const (
lengthMask = 0x07
lengthBits = 3
noChange = 0
)
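// Sketch only: decoding the two header bytes of an exception entry as laid
// out above; exc is assumed to start at the entry and the function name is
// hypothetical. A length of 0x7 means that the corresponding mapping is absent.
func exceptionLengths(exc []byte) (fold, first, second int) {
	fold = int(exc[0] & lengthMask)                // byte 0, bits 2..0
	first = int(exc[1] >> lengthBits & lengthMask) // byte 1, bits 5..3
	second = int(exc[1] & lengthMask)              // byte 1, bits 2..0
	return fold, first, second
}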
// References to generated trie.
var trie = newCaseTrie(0)
var sparse = sparseBlocks{
values: sparseValues[:],
offsets: sparseOffsets[:],
}
// Sparse block lookup code.
// valueRange is an entry in a sparse block.
type valueRange struct {
value uint16
lo, hi byte
}
type sparseBlocks struct {
values []valueRange
offsets []uint16
}
// lookup returns the value from values block n for byte b using binary search.
func (s *sparseBlocks) lookup(n uint32, b byte) uint16 {
lo := s.offsets[n]
hi := s.offsets[n+1]
for lo < hi {
m := lo + (hi-lo)/2
r := s.values[m]
if r.lo <= b && b <= r.hi {
return r.value
}
if b < r.lo {
hi = m
} else {
lo = m + 1
}
}
return 0
}
// lastRuneForTesting is the last rune used for testing. Everything after this
// is boring.
const lastRuneForTesting = rune(0x1FFFF)

View File

@ -1,46 +0,0 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package main
import (
"fmt"
"go/build"
"go/parser"
"golang.org/x/tools/go/loader"
)
const (
extractFile = "extracted.gotext.json"
outFile = "out.gotext.json"
gotextSuffix = ".gotext.json"
)
// NOTE: The command line tool already prefixes with "gotext:".
var (
wrap = func(err error, msg string) error {
return fmt.Errorf("%s: %v", msg, err)
}
errorf = fmt.Errorf
)
// TODO: still used. Remove when possible.
func loadPackages(conf *loader.Config, args []string) (*loader.Program, error) {
if len(args) == 0 {
args = []string{"."}
}
conf.Build = &build.Default
conf.ParserMode = parser.ParseComments
// Use the initial packages from the command line.
args, err := conf.FromArgs(args, false)
if err != nil {
return nil, wrap(err, "loading packages failed")
}
// Load, parse and type-check the whole program.
return conf.Load()
}

View File

@ -1,53 +0,0 @@
// Code generated by go generate. DO NOT EDIT.
// gotext is a tool for managing text in Go source code.
//
// Usage:
//
// gotext command [arguments]
//
// The commands are:
//
// extract extracts strings to be translated from code
// rewrite rewrites fmt functions to use a message Printer
// generate generates code to insert translated messages
//
// Use "go help [command]" for more information about a command.
//
// Additional help topics:
//
//
// Use "gotext help [topic]" for more information about that topic.
//
//
// Extracts strings to be translated from code
//
// Usage:
//
// go extract <package>*
//
//
//
//
// Rewrites fmt functions to use a message Printer
//
// Usage:
//
// go rewrite <package>
//
// rewrite is typically done once for a project. It rewrites all usages of
// fmt to use x/text's message package whenever a message.Printer is in scope.
// It rewrites Print and Println calls with constant strings to the equivalent
// using Printf to allow translators to reorder arguments.
//
//
// Generates code to insert translated messages
//
// Usage:
//
// go generate <package>
//
//
//
//
package main

View File

@ -1,76 +0,0 @@
// Code generated by running "go generate" in golang.org/x/text. DO NOT EDIT.
package main
import (
"golang.org/x/text/language"
"golang.org/x/text/message"
"golang.org/x/text/message/catalog"
)
type dictionary struct {
index []uint32
data string
}
func (d *dictionary) Lookup(key string) (data string, ok bool) {
p := messageKeyToIndex[key]
start, end := d.index[p], d.index[p+1]
if start == end {
return "", false
}
return d.data[start:end], true
}
func init() {
dict := map[string]catalog.Dictionary{
"de": &dictionary{index: deIndex, data: deData},
"en_US": &dictionary{index: en_USIndex, data: en_USData},
"zh": &dictionary{index: zhIndex, data: zhData},
}
fallback := language.MustParse("en-US")
cat, err := catalog.NewFromMap(dict, catalog.Fallback(fallback))
if err != nil {
panic(err)
}
message.DefaultCatalog = cat
}
var messageKeyToIndex = map[string]int{
"%.2[1]f miles traveled (%[1]f)": 6,
"%[1]s is visiting %[3]s!\n": 3,
"%d more files remaining!": 4,
"%s is out of order!": 5,
"%s is visiting %s!\n": 2,
"Hello %s!\n": 1,
"Hello world!\n": 0,
}
var deIndex = []uint32{ // 8 elements
0x00000000, 0x0000000d, 0x0000001b, 0x00000031,
0x00000047, 0x00000066, 0x00000066, 0x00000066,
} // Size: 56 bytes
const deData string = "" + // Size: 102 bytes
"\x02Hallo Welt!\x0a\x02Hallo %[1]s!\x0a\x02%[1]s besucht %[2]s!\x0a\x02%" +
"[1]s besucht %[3]s!\x0a\x02Noch %[1]d Bestände zu gehen!"
var en_USIndex = []uint32{ // 8 elements
0x00000000, 0x0000000e, 0x0000001c, 0x00000036,
0x00000050, 0x00000093, 0x000000aa, 0x000000c9,
} // Size: 56 bytes
const en_USData string = "" + // Size: 201 bytes
"\x02Hello world!\x0a\x02Hello %[1]s!\x0a\x02%[1]s is visiting %[2]s!\x0a" +
"\x02%[1]s is visiting %[3]s!\x0a\x04\x01\x81\x01\x00\x02\x14\x02One file" +
" remaining!\x00&\x02There are %[1]d more files remaining!\x02%[1]s is ou" +
"t of order!\x02%.2[1]f miles traveled (%[1]f)"
var zhIndex = []uint32{ // 8 elements
0x00000000, 0x00000000, 0x00000000, 0x00000000,
0x00000000, 0x00000000, 0x00000000, 0x00000000,
} // Size: 56 bytes
const zhData string = ""
// Total table size 471 bytes (0KiB); checksum: 7746955
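// Illustrative usage sketch, not part of the generated file: the init
// function above installs the catalog as message.DefaultCatalog, so a
// printer created for one of the listed languages resolves messages through
// it. exampleUsage is hypothetical.
func exampleUsage() {
	p := message.NewPrinter(language.MustParse("de"))
	p.Printf("Hello world!\n") // rendered as "Hallo Welt!\n" via the de dictionary
}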

View File

@ -1,186 +0,0 @@
{
"language": "de",
"messages": [
{
"id": "Hello world!\n",
"key": "Hello world!\n",
"message": "Hello world!\n",
"translation": "Hallo Welt!\n",
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:27:10"
},
{
"id": "Hello {City}!\n",
"key": "Hello %s!\n",
"message": "Hello {City}!\n",
"translation": "Hallo {City}!\n",
"placeholders": [
{
"id": "City",
"string": "%[1]s",
"type": "string",
"underlyingType": "string",
"argNum": 1,
"expr": "city"
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:31:10"
},
{
"id": "Hello {Town}!\n",
"key": "Hello %s!\n",
"message": "Hello {Town}!\n",
"translation": "Hallo {Town}!\n",
"placeholders": [
{
"id": "Town",
"string": "%[1]s",
"type": "string",
"underlyingType": "string",
"argNum": 1,
"expr": "town",
"comment": "Town"
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:35:10"
},
{
"id": "{Person} is visiting {Place}!\n",
"key": "%s is visiting %s!\n",
"message": "{Person} is visiting {Place}!\n",
"translation": "{Person} besucht {Place}!\n",
"placeholders": [
{
"id": "Person",
"string": "%[1]s",
"type": "string",
"underlyingType": "string",
"argNum": 1,
"expr": "person",
"comment": "The person of matter."
},
{
"id": "Place",
"string": "%[2]s",
"type": "string",
"underlyingType": "string",
"argNum": 2,
"expr": "place",
"comment": "Place the person is visiting."
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:40:10"
},
{
"id": "{Person} is visiting {Place}!\n",
"key": "%[1]s is visiting %[3]s!\n",
"message": "{Person} is visiting {Place}!\n",
"translation": "{Person} besucht {Place}!\n",
"comment": "Person visiting a place.",
"placeholders": [
{
"id": "Person",
"string": "%[1]s",
"type": "string",
"underlyingType": "string",
"argNum": 1,
"expr": "pp.Person"
},
{
"id": "Place",
"string": "%[3]s",
"type": "string",
"underlyingType": "string",
"argNum": 3,
"expr": "pp.Place",
"comment": "Place the person is visiting."
},
{
"id": "Extra",
"string": "%[2]v",
"type": "int",
"underlyingType": "int",
"argNum": 2,
"expr": "pp.extra"
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:55:10"
},
{
"id": "{N} more files remaining!",
"key": "%d more files remaining!",
"message": "{N} more files remaining!",
"translation": "Noch {N} Bestände zu gehen!",
"placeholders": [
{
"id": "N",
"string": "%[1]d",
"type": "int",
"underlyingType": "int",
"argNum": 1,
"expr": "n"
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:67:10"
},
{
"id": "Use the following code for your discount: {ReferralCode}\n",
"key": "Use the following code for your discount: %d\n",
"message": "Use the following code for your discount: {ReferralCode}\n",
"translation": "",
"placeholders": [
{
"id": "ReferralCode",
"string": "%[1]d",
"type": "golang.org/x/text/cmd/gotext/examples/extract.referralCode",
"underlyingType": "int",
"argNum": 1,
"expr": "c"
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:73:10"
},
{
"id": [ "msgOutOfOrder", "{Device} is out of order!" ],
"key": "%s is out of order!",
"message": "{Device} is out of order!",
"translation": "",
"comment": "FOO\n",
"placeholders": [
{
"id": "Device",
"string": "%[1]s",
"type": "string",
"underlyingType": "string",
"argNum": 1,
"expr": "device"
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:81:10"
},
{
"id": "{Miles} miles traveled ({Miles_1})",
"key": "%.2[1]f miles traveled (%[1]f)",
"message": "{Miles} miles traveled ({Miles_1})",
"translation": "",
"placeholders": [
{
"id": "Miles",
"string": "%.2[1]f",
"type": "float64",
"underlyingType": "float64",
"argNum": 1,
"expr": "miles"
},
{
"id": "Miles_1",
"string": "%[1]f",
"type": "float64",
"underlyingType": "float64",
"argNum": 1,
"expr": "miles"
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:85:10"
}
]
}

View File

@ -1,206 +0,0 @@
{
"language": "de",
"messages": [
{
"id": "Hello world!\n",
"key": "Hello world!\n",
"message": "Hello world!\n",
"translation": "",
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:28:10"
},
{
"id": "Hello {City}!\n",
"key": "Hello %s!\n",
"message": "Hello {City}!\n",
"translation": "",
"placeholders": [
{
"id": "City",
"string": "%[1]s",
"type": "string",
"underlyingType": "string",
"argNum": 1,
"expr": "city"
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:32:10"
},
{
"id": "Hello {Town}!\n",
"key": "Hello %s!\n",
"message": "Hello {Town}!\n",
"translation": "",
"placeholders": [
{
"id": "Town",
"string": "%[1]s",
"type": "string",
"underlyingType": "string",
"argNum": 1,
"expr": "town",
"comment": "Town"
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:36:10"
},
{
"id": "{Person} is visiting {Place}!\n",
"key": "%s is visiting %s!\n",
"message": "{Person} is visiting {Place}!\n",
"translation": "",
"placeholders": [
{
"id": "Person",
"string": "%[1]s",
"type": "string",
"underlyingType": "string",
"argNum": 1,
"expr": "person",
"comment": "The person of matter."
},
{
"id": "Place",
"string": "%[2]s",
"type": "string",
"underlyingType": "string",
"argNum": 2,
"expr": "place",
"comment": "Place the person is visiting."
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:41:10"
},
{
"id": "{Person} is visiting {Place}!\n",
"key": "%[1]s is visiting %[3]s!\n",
"message": "{Person} is visiting {Place}!\n",
"translation": "",
"comment": "Person visiting a place.",
"placeholders": [
{
"id": "Person",
"string": "%[1]s",
"type": "string",
"underlyingType": "string",
"argNum": 1,
"expr": "pp.Person"
},
{
"id": "Place",
"string": "%[3]s",
"type": "string",
"underlyingType": "string",
"argNum": 3,
"expr": "pp.Place",
"comment": "Place the person is visiting."
},
{
"id": "Extra",
"string": "%[2]v",
"type": "int",
"underlyingType": "int",
"argNum": 2,
"expr": "pp.extra"
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:56:10"
},
{
"id": "{} files remaining!",
"key": "%d files remaining!",
"message": "{} files remaining!",
"translation": "",
"placeholders": [
{
"id": "",
"string": "%[1]d",
"type": "int",
"underlyingType": "int",
"argNum": 1,
"expr": "2"
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:63:10"
},
{
"id": "{N} more files remaining!",
"key": "%d more files remaining!",
"message": "{N} more files remaining!",
"translation": "",
"placeholders": [
{
"id": "N",
"string": "%[1]d",
"type": "int",
"underlyingType": "int",
"argNum": 1,
"expr": "n"
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:68:10"
},
{
"id": "Use the following code for your discount: {ReferralCode}\n",
"key": "Use the following code for your discount: %d\n",
"message": "Use the following code for your discount: {ReferralCode}\n",
"translation": "",
"placeholders": [
{
"id": "ReferralCode",
"string": "%[1]d",
"type": "golang.org/x/text/cmd/gotext/examples/extract.referralCode",
"underlyingType": "int",
"argNum": 1,
"expr": "c"
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:74:10"
},
{
"id": [
"msgOutOfOrder",
"{Device} is out of order!"
],
"key": "%s is out of order!",
"message": "{Device} is out of order!",
"translation": "",
"comment": "FOO\n",
"placeholders": [
{
"id": "Device",
"string": "%[1]s",
"type": "string",
"underlyingType": "string",
"argNum": 1,
"expr": "device"
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:82:10"
},
{
"id": "{Miles} miles traveled ({Miles_1})",
"key": "%.2[1]f miles traveled (%[1]f)",
"message": "{Miles} miles traveled ({Miles_1})",
"translation": "",
"placeholders": [
{
"id": "Miles",
"string": "%.2[1]f",
"type": "float64",
"underlyingType": "float64",
"argNum": 1,
"expr": "miles"
},
{
"id": "Miles_1",
"string": "%[1]f",
"type": "float64",
"underlyingType": "float64",
"argNum": 1,
"expr": "miles"
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:86:10"
}
]
}

View File

@ -1,82 +0,0 @@
{
"language": "en-US",
"messages": [
{
"id": "Hello world!\n",
"key": "Hello world!\n",
"message": "Hello world!\n",
"translation": "Hello world!\n",
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:27:10"
},
{
"id": "Hello {City}!\n",
"key": "Hello %s!\n",
"message": "Hello {City}!\n",
"translation": "Hello {City}!\n"
},
{
"id": "Hello {Town}!\n",
"key": "Hello %s!\n",
"message": "Hello {Town}!\n",
"translation": "Hello {Town}!\n",
"placeholders": [
{
"id": "Town",
"string": "%[1]s",
"type": "string",
"underlyingType": "string",
"argNum": 1,
"expr": "town",
"comment": "Town"
}
]
},
{
"id": "{Person} is visiting {Place}!\n",
"key": "%s is visiting %s!\n",
"message": "{Person} is visiting {Place}!\n",
"translation": "{Person} is visiting {Place}!\n"
},
{
"id": "{Person} is visiting {Place}!\n",
"key": "%[1]s is visiting %[3]s!\n",
"message": "{Person} is visiting {Place}!\n",
"translation": "{Person} is visiting {Place}!\n",
"comment": "Person visiting a place."
},
{
"id": "{N} more files remaining!",
"key": "%d more files remaining!",
"message": "{N} more files remaining!",
"translation": {
"select": {
"feature": "plural",
"arg": "N",
"cases": {
"one": "One file remaining!",
"other": "There are {N} more files remaining!"
}
}
}
},
{
"id": "Use the following code for your discount: {ReferralCode}\n",
"key": "Use the following code for your discount: %d\n",
"message": "Use the following code for your discount: {ReferralCode}\n",
"translation": ""
},
{
"id": [ "msgOutOfOrder", "{Device} is out of order!" ],
"key": "%s is out of order!",
"message": "{Device} is out of order!",
"translation": "{Device} is out of order!",
"comment": "FOO\n"
},
{
"id": "{Miles} miles traveled ({Miles_1})",
"key": "%.2[1]f miles traveled (%[1]f)",
"message": "{Miles} miles traveled ({Miles_1})",
"translation": "{Miles} miles traveled ({Miles_1})"
}
]
}

View File

@ -1,206 +0,0 @@
{
"language": "en-US",
"messages": [
{
"id": "Hello world!\n",
"key": "Hello world!\n",
"message": "Hello world!\n",
"translation": "",
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:28:10"
},
{
"id": "Hello {City}!\n",
"key": "Hello %s!\n",
"message": "Hello {City}!\n",
"translation": "",
"placeholders": [
{
"id": "City",
"string": "%[1]s",
"type": "string",
"underlyingType": "string",
"argNum": 1,
"expr": "city"
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:32:10"
},
{
"id": "Hello {Town}!\n",
"key": "Hello %s!\n",
"message": "Hello {Town}!\n",
"translation": "",
"placeholders": [
{
"id": "Town",
"string": "%[1]s",
"type": "string",
"underlyingType": "string",
"argNum": 1,
"expr": "town",
"comment": "Town"
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:36:10"
},
{
"id": "{Person} is visiting {Place}!\n",
"key": "%s is visiting %s!\n",
"message": "{Person} is visiting {Place}!\n",
"translation": "",
"placeholders": [
{
"id": "Person",
"string": "%[1]s",
"type": "string",
"underlyingType": "string",
"argNum": 1,
"expr": "person",
"comment": "The person of matter."
},
{
"id": "Place",
"string": "%[2]s",
"type": "string",
"underlyingType": "string",
"argNum": 2,
"expr": "place",
"comment": "Place the person is visiting."
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:41:10"
},
{
"id": "{Person} is visiting {Place}!\n",
"key": "%[1]s is visiting %[3]s!\n",
"message": "{Person} is visiting {Place}!\n",
"translation": "",
"comment": "Person visiting a place.",
"placeholders": [
{
"id": "Person",
"string": "%[1]s",
"type": "string",
"underlyingType": "string",
"argNum": 1,
"expr": "pp.Person"
},
{
"id": "Place",
"string": "%[3]s",
"type": "string",
"underlyingType": "string",
"argNum": 3,
"expr": "pp.Place",
"comment": "Place the person is visiting."
},
{
"id": "Extra",
"string": "%[2]v",
"type": "int",
"underlyingType": "int",
"argNum": 2,
"expr": "pp.extra"
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:56:10"
},
{
"id": "{} files remaining!",
"key": "%d files remaining!",
"message": "{} files remaining!",
"translation": "",
"placeholders": [
{
"id": "",
"string": "%[1]d",
"type": "int",
"underlyingType": "int",
"argNum": 1,
"expr": "2"
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:63:10"
},
{
"id": "{N} more files remaining!",
"key": "%d more files remaining!",
"message": "{N} more files remaining!",
"translation": "",
"placeholders": [
{
"id": "N",
"string": "%[1]d",
"type": "int",
"underlyingType": "int",
"argNum": 1,
"expr": "n"
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:68:10"
},
{
"id": "Use the following code for your discount: {ReferralCode}\n",
"key": "Use the following code for your discount: %d\n",
"message": "Use the following code for your discount: {ReferralCode}\n",
"translation": "",
"placeholders": [
{
"id": "ReferralCode",
"string": "%[1]d",
"type": "golang.org/x/text/cmd/gotext/examples/extract.referralCode",
"underlyingType": "int",
"argNum": 1,
"expr": "c"
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:74:10"
},
{
"id": [
"msgOutOfOrder",
"{Device} is out of order!"
],
"key": "%s is out of order!",
"message": "{Device} is out of order!",
"translation": "",
"comment": "FOO\n",
"placeholders": [
{
"id": "Device",
"string": "%[1]s",
"type": "string",
"underlyingType": "string",
"argNum": 1,
"expr": "device"
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:82:10"
},
{
"id": "{Miles} miles traveled ({Miles_1})",
"key": "%.2[1]f miles traveled (%[1]f)",
"message": "{Miles} miles traveled ({Miles_1})",
"translation": "",
"placeholders": [
{
"id": "Miles",
"string": "%.2[1]f",
"type": "float64",
"underlyingType": "float64",
"argNum": 1,
"expr": "miles"
},
{
"id": "Miles_1",
"string": "%[1]f",
"type": "float64",
"underlyingType": "float64",
"argNum": 1,
"expr": "miles"
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:86:10"
}
]
}

View File

@ -1,206 +0,0 @@
{
"language": "en-US",
"messages": [
{
"id": "Hello world!\n",
"key": "Hello world!\n",
"message": "Hello world!\n",
"translation": "",
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:28:10"
},
{
"id": "Hello {City}!\n",
"key": "Hello %s!\n",
"message": "Hello {City}!\n",
"translation": "",
"placeholders": [
{
"id": "City",
"string": "%[1]s",
"type": "string",
"underlyingType": "string",
"argNum": 1,
"expr": "city"
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:32:10"
},
{
"id": "Hello {Town}!\n",
"key": "Hello %s!\n",
"message": "Hello {Town}!\n",
"translation": "",
"placeholders": [
{
"id": "Town",
"string": "%[1]s",
"type": "string",
"underlyingType": "string",
"argNum": 1,
"expr": "town",
"comment": "Town"
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:36:10"
},
{
"id": "{Person} is visiting {Place}!\n",
"key": "%s is visiting %s!\n",
"message": "{Person} is visiting {Place}!\n",
"translation": "",
"placeholders": [
{
"id": "Person",
"string": "%[1]s",
"type": "string",
"underlyingType": "string",
"argNum": 1,
"expr": "person",
"comment": "The person of matter."
},
{
"id": "Place",
"string": "%[2]s",
"type": "string",
"underlyingType": "string",
"argNum": 2,
"expr": "place",
"comment": "Place the person is visiting."
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:41:10"
},
{
"id": "{Person} is visiting {Place}!\n",
"key": "%[1]s is visiting %[3]s!\n",
"message": "{Person} is visiting {Place}!\n",
"translation": "",
"comment": "Person visiting a place.",
"placeholders": [
{
"id": "Person",
"string": "%[1]s",
"type": "string",
"underlyingType": "string",
"argNum": 1,
"expr": "pp.Person"
},
{
"id": "Place",
"string": "%[3]s",
"type": "string",
"underlyingType": "string",
"argNum": 3,
"expr": "pp.Place",
"comment": "Place the person is visiting."
},
{
"id": "Extra",
"string": "%[2]v",
"type": "int",
"underlyingType": "int",
"argNum": 2,
"expr": "pp.extra"
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:56:10"
},
{
"id": "{} files remaining!",
"key": "%d files remaining!",
"message": "{} files remaining!",
"translation": "",
"placeholders": [
{
"id": "",
"string": "%[1]d",
"type": "int",
"underlyingType": "int",
"argNum": 1,
"expr": "2"
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:63:10"
},
{
"id": "{N} more files remaining!",
"key": "%d more files remaining!",
"message": "{N} more files remaining!",
"translation": "",
"placeholders": [
{
"id": "N",
"string": "%[1]d",
"type": "int",
"underlyingType": "int",
"argNum": 1,
"expr": "n"
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:68:10"
},
{
"id": "Use the following code for your discount: {ReferralCode}\n",
"key": "Use the following code for your discount: %d\n",
"message": "Use the following code for your discount: {ReferralCode}\n",
"translation": "",
"placeholders": [
{
"id": "ReferralCode",
"string": "%[1]d",
"type": "golang.org/x/text/cmd/gotext/examples/extract.referralCode",
"underlyingType": "int",
"argNum": 1,
"expr": "c"
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:74:10"
},
{
"id": [
"msgOutOfOrder",
"{Device} is out of order!"
],
"key": "%s is out of order!",
"message": "{Device} is out of order!",
"translation": "",
"comment": "FOO\n",
"placeholders": [
{
"id": "Device",
"string": "%[1]s",
"type": "string",
"underlyingType": "string",
"argNum": 1,
"expr": "device"
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:82:10"
},
{
"id": "{Miles} miles traveled ({Miles_1})",
"key": "%.2[1]f miles traveled (%[1]f)",
"message": "{Miles} miles traveled ({Miles_1})",
"translation": "",
"placeholders": [
{
"id": "Miles",
"string": "%.2[1]f",
"type": "float64",
"underlyingType": "float64",
"argNum": 1,
"expr": "miles"
},
{
"id": "Miles_1",
"string": "%[1]f",
"type": "float64",
"underlyingType": "float64",
"argNum": 1,
"expr": "miles"
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:86:10"
}
]
}

View File

@ -1,203 +0,0 @@
{
"language": "zh",
"messages": [
{
"id": "Hello world!\n",
"key": "Hello world!\n",
"message": "Hello world!\n",
"translation": "",
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:27:10"
},
{
"id": "Hello {City}!\n",
"key": "Hello %s!\n",
"message": "Hello {City}!\n",
"translation": "",
"placeholders": [
{
"id": "City",
"string": "%[1]s",
"type": "string",
"underlyingType": "string",
"argNum": 1,
"expr": "city"
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:31:10"
},
{
"id": "Hello {Town}!\n",
"key": "Hello %s!\n",
"message": "Hello {Town}!\n",
"translation": "",
"placeholders": [
{
"id": "Town",
"string": "%[1]s",
"type": "string",
"underlyingType": "string",
"argNum": 1,
"expr": "town",
"comment": "Town"
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:35:10"
},
{
"id": "{Person} is visiting {Place}!\n",
"key": "%s is visiting %s!\n",
"message": "{Person} is visiting {Place}!\n",
"translation": "",
"placeholders": [
{
"id": "Person",
"string": "%[1]s",
"type": "string",
"underlyingType": "string",
"argNum": 1,
"expr": "person",
"comment": "The person of matter."
},
{
"id": "Place",
"string": "%[2]s",
"type": "string",
"underlyingType": "string",
"argNum": 2,
"expr": "place",
"comment": "Place the person is visiting."
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:40:10"
},
{
"id": "{Person} is visiting {Place}!\n",
"key": "%[1]s is visiting %[3]s!\n",
"message": "{Person} is visiting {Place}!\n",
"translation": "",
"comment": "Person visiting a place.",
"placeholders": [
{
"id": "Person",
"string": "%[1]s",
"type": "string",
"underlyingType": "string",
"argNum": 1,
"expr": "pp.Person"
},
{
"id": "Place",
"string": "%[3]s",
"type": "string",
"underlyingType": "string",
"argNum": 3,
"expr": "pp.Place",
"comment": "Place the person is visiting."
},
{
"id": "Extra",
"string": "%[2]v",
"type": "int",
"underlyingType": "int",
"argNum": 2,
"expr": "pp.extra"
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:55:10"
},
{
"id": "{} files remaining!",
"key": "%d files remaining!",
"message": "{} files remaining!",
"translation": "",
"placeholders": [
{
"id": "",
"string": "%[1]d",
"type": "int",
"underlyingType": "int",
"argNum": 1,
"expr": "2"
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:62:10"
},
{
"id": "{N} more files remaining!",
"key": "%d more files remaining!",
"message": "{N} more files remaining!",
"translation": "",
"placeholders": [
{
"id": "N",
"string": "%[1]d",
"type": "int",
"underlyingType": "int",
"argNum": 1,
"expr": "n"
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:67:10"
},
{
"id": "Use the following code for your discount: {ReferralCode}\n",
"key": "Use the following code for your discount: %d\n",
"message": "Use the following code for your discount: {ReferralCode}\n",
"translation": "",
"placeholders": [
{
"id": "ReferralCode",
"string": "%[1]d",
"type": "golang.org/x/text/cmd/gotext/examples/extract.referralCode",
"underlyingType": "int",
"argNum": 1,
"expr": "c"
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:73:10"
},
{
"id": [ "{Device} is out of order!", "msgOutOfOrder" ],
"key": "%s is out of order!",
"message": "{Device} is out of order!",
"translation": "",
"comment": "FOO\n",
"placeholders": [
{
"id": "Device",
"string": "%[1]s",
"type": "string",
"underlyingType": "string",
"argNum": 1,
"expr": "device"
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:81:10"
},
{
"id": "{Miles} miles traveled ({Miles_1})",
"key": "%.2[1]f miles traveled (%[1]f)",
"message": "{Miles} miles traveled ({Miles_1})",
"translation": "",
"placeholders": [
{
"id": "Miles",
"string": "%.2[1]f",
"type": "float64",
"underlyingType": "float64",
"argNum": 1,
"expr": "miles"
},
{
"id": "Miles_1",
"string": "%[1]f",
"type": "float64",
"underlyingType": "float64",
"argNum": 1,
"expr": "miles"
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:85:10"
}
]
}

View File

@ -1,206 +0,0 @@
{
"language": "zh",
"messages": [
{
"id": "Hello world!\n",
"key": "Hello world!\n",
"message": "Hello world!\n",
"translation": "",
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:28:10"
},
{
"id": "Hello {City}!\n",
"key": "Hello %s!\n",
"message": "Hello {City}!\n",
"translation": "",
"placeholders": [
{
"id": "City",
"string": "%[1]s",
"type": "string",
"underlyingType": "string",
"argNum": 1,
"expr": "city"
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:32:10"
},
{
"id": "Hello {Town}!\n",
"key": "Hello %s!\n",
"message": "Hello {Town}!\n",
"translation": "",
"placeholders": [
{
"id": "Town",
"string": "%[1]s",
"type": "string",
"underlyingType": "string",
"argNum": 1,
"expr": "town",
"comment": "Town"
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:36:10"
},
{
"id": "{Person} is visiting {Place}!\n",
"key": "%s is visiting %s!\n",
"message": "{Person} is visiting {Place}!\n",
"translation": "",
"placeholders": [
{
"id": "Person",
"string": "%[1]s",
"type": "string",
"underlyingType": "string",
"argNum": 1,
"expr": "person",
"comment": "The person of matter."
},
{
"id": "Place",
"string": "%[2]s",
"type": "string",
"underlyingType": "string",
"argNum": 2,
"expr": "place",
"comment": "Place the person is visiting."
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:41:10"
},
{
"id": "{Person} is visiting {Place}!\n",
"key": "%[1]s is visiting %[3]s!\n",
"message": "{Person} is visiting {Place}!\n",
"translation": "",
"comment": "Person visiting a place.",
"placeholders": [
{
"id": "Person",
"string": "%[1]s",
"type": "string",
"underlyingType": "string",
"argNum": 1,
"expr": "pp.Person"
},
{
"id": "Place",
"string": "%[3]s",
"type": "string",
"underlyingType": "string",
"argNum": 3,
"expr": "pp.Place",
"comment": "Place the person is visiting."
},
{
"id": "Extra",
"string": "%[2]v",
"type": "int",
"underlyingType": "int",
"argNum": 2,
"expr": "pp.extra"
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:56:10"
},
{
"id": "{} files remaining!",
"key": "%d files remaining!",
"message": "{} files remaining!",
"translation": "",
"placeholders": [
{
"id": "",
"string": "%[1]d",
"type": "int",
"underlyingType": "int",
"argNum": 1,
"expr": "2"
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:63:10"
},
{
"id": "{N} more files remaining!",
"key": "%d more files remaining!",
"message": "{N} more files remaining!",
"translation": "",
"placeholders": [
{
"id": "N",
"string": "%[1]d",
"type": "int",
"underlyingType": "int",
"argNum": 1,
"expr": "n"
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:68:10"
},
{
"id": "Use the following code for your discount: {ReferralCode}\n",
"key": "Use the following code for your discount: %d\n",
"message": "Use the following code for your discount: {ReferralCode}\n",
"translation": "",
"placeholders": [
{
"id": "ReferralCode",
"string": "%[1]d",
"type": "golang.org/x/text/cmd/gotext/examples/extract.referralCode",
"underlyingType": "int",
"argNum": 1,
"expr": "c"
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:74:10"
},
{
"id": [
"msgOutOfOrder",
"{Device} is out of order!"
],
"key": "%s is out of order!",
"message": "{Device} is out of order!",
"translation": "",
"comment": "FOO\n",
"placeholders": [
{
"id": "Device",
"string": "%[1]s",
"type": "string",
"underlyingType": "string",
"argNum": 1,
"expr": "device"
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:82:10"
},
{
"id": "{Miles} miles traveled ({Miles_1})",
"key": "%.2[1]f miles traveled (%[1]f)",
"message": "{Miles} miles traveled ({Miles_1})",
"translation": "",
"placeholders": [
{
"id": "Miles",
"string": "%.2[1]f",
"type": "float64",
"underlyingType": "float64",
"argNum": 1,
"expr": "miles"
},
{
"id": "Miles_1",
"string": "%[1]f",
"type": "float64",
"underlyingType": "float64",
"argNum": 1,
"expr": "miles"
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract/main.go:86:10"
}
]
}

View File

@ -1,87 +0,0 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package main
//go:generate gotext extract --lang=de,zh
//go:generate gotext generate -out catalog.go
import (
"golang.org/x/text/language"
"golang.org/x/text/message"
)
func main() {
p := message.NewPrinter(language.English)
p.Print("Hello world!\n")
p.Println("Hello", "world!")
person := "Sheila"
place := "Zürich"
p.Print("Hello ", person, " in ", place, "!\n")
// Greet everyone.
p.Printf("Hello world!\n")
city := "Amsterdam"
// Greet a city.
p.Printf("Hello %s!\n", city)
town := "Amsterdam"
// Greet a town.
p.Printf("Hello %s!\n",
town, // Town
)
// Person visiting a place.
p.Printf("%s is visiting %s!\n",
person, // The person of matter.
place, // Place the person is visiting.
)
pp := struct {
Person string // The person of matter. // TODO: get this comment.
Place string
extra int
}{
person, place, 4,
}
// extract will drop this comment in favor of the one below.
// argument is added as a placeholder.
p.Printf("%[1]s is visiting %[3]s!\n", // Person visiting a place.
pp.Person,
pp.extra,
pp.Place, // Place the person is visiting.
)
// Numeric literal
p.Printf("%d files remaining!", 2)
const n = 2
// Numeric var
p.Printf("%d more files remaining!", n)
// Infer better names from type names.
type referralCode int
const c = referralCode(5)
p.Printf("Use the following code for your discount: %d\n", c)
// Using a constant for a message will cause the constant name to be
// added as an identifier, allowing for stable message identifiers.
// Explain that a device is out of order.
const msgOutOfOrder = "%s is out of order!" // FOO
const device = "Soda machine"
p.Printf(msgOutOfOrder, device)
// Double arguments.
miles := 1.2345
p.Printf("%.2[1]f miles traveled (%[1]f)", miles)
}

View File

@ -1,39 +0,0 @@
{
"language": "de",
"messages": [
{
"id": "Hello {From}!\n",
"key": "Hello %s!\n",
"message": "Hello {From}!\n",
"translation": "",
"placeholders": [
{
"id": "From",
"string": "%[1]s",
"type": "string",
"underlyingType": "string",
"argNum": 1,
"expr": "r.Header.Get(\"From\")"
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract_http/pkg/pkg.go:22:11"
},
{
"id": "Do you like your browser ({User_Agent})?\n",
"key": "Do you like your browser (%s)?\n",
"message": "Do you like your browser ({User_Agent})?\n",
"translation": "",
"placeholders": [
{
"id": "User_Agent",
"string": "%[1]s",
"type": "string",
"underlyingType": "string",
"argNum": 1,
"expr": "r.Header.Get(\"User-Agent\")"
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract_http/pkg/pkg.go:24:11"
}
]
}

View File

@ -1,39 +0,0 @@
{
"language": "en-US",
"messages": [
{
"id": "Hello {From}!\n",
"key": "Hello %s!\n",
"message": "Hello {From}!\n",
"translation": "",
"placeholders": [
{
"id": "From",
"string": "%[1]s",
"type": "string",
"underlyingType": "string",
"argNum": 1,
"expr": "r.Header.Get(\"From\")"
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract_http/pkg/pkg.go:22:11"
},
{
"id": "Do you like your browser ({User_Agent})?\n",
"key": "Do you like your browser (%s)?\n",
"message": "Do you like your browser ({User_Agent})?\n",
"translation": "",
"placeholders": [
{
"id": "User_Agent",
"string": "%[1]s",
"type": "string",
"underlyingType": "string",
"argNum": 1,
"expr": "r.Header.Get(\"User-Agent\")"
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract_http/pkg/pkg.go:24:11"
}
]
}

View File

@ -1,39 +0,0 @@
{
"language": "en-US",
"messages": [
{
"id": "Hello {From}!\n",
"key": "Hello %s!\n",
"message": "Hello {From}!\n",
"translation": "",
"placeholders": [
{
"id": "From",
"string": "%[1]s",
"type": "string",
"underlyingType": "string",
"argNum": 1,
"expr": "r.Header.Get(\"From\")"
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract_http/pkg/pkg.go:22:11"
},
{
"id": "Do you like your browser ({User_Agent})?\n",
"key": "Do you like your browser (%s)?\n",
"message": "Do you like your browser ({User_Agent})?\n",
"translation": "",
"placeholders": [
{
"id": "User_Agent",
"string": "%[1]s",
"type": "string",
"underlyingType": "string",
"argNum": 1,
"expr": "r.Header.Get(\"User-Agent\")"
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract_http/pkg/pkg.go:24:11"
}
]
}

View File

@ -1,39 +0,0 @@
{
"language": "zh",
"messages": [
{
"id": "Hello {From}!\n",
"key": "Hello %s!\n",
"message": "Hello {From}!\n",
"translation": "",
"placeholders": [
{
"id": "From",
"string": "%[1]s",
"type": "string",
"underlyingType": "string",
"argNum": 1,
"expr": "r.Header.Get(\"From\")"
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract_http/pkg/pkg.go:22:11"
},
{
"id": "Do you like your browser ({User_Agent})?\n",
"key": "Do you like your browser (%s)?\n",
"message": "Do you like your browser ({User_Agent})?\n",
"translation": "",
"placeholders": [
{
"id": "User_Agent",
"string": "%[1]s",
"type": "string",
"underlyingType": "string",
"argNum": 1,
"expr": "r.Header.Get(\"User-Agent\")"
}
],
"position": "golang.org/x/text/cmd/gotext/examples/extract_http/pkg/pkg.go:24:11"
}
]
}

View File

@ -1,17 +0,0 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package main
//go:generate gotext extract --lang=de,zh
import (
"net/http"
"golang.org/x/text/cmd/gotext/examples/extract_http/pkg"
)
func main() {
http.Handle("/generize", http.HandlerFunc(pkg.Generize))
}

View File

@ -1,25 +0,0 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package pkg
import (
"net/http"
"golang.org/x/text/language"
"golang.org/x/text/message"
)
var matcher = language.NewMatcher(message.DefaultCatalog.Languages())
func Generize(w http.ResponseWriter, r *http.Request) {
lang, _ := r.Cookie("lang")
accept := r.Header.Get("Accept-Language")
tag := message.MatchLanguage(lang.String(), accept)
p := message.NewPrinter(tag)
p.Fprintf(w, "Hello %s!\n", r.Header.Get("From"))
p.Fprintf(w, "Do you like your browser (%s)?\n", r.Header.Get("User-Agent"))
}

View File

@ -1,37 +0,0 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package main
import (
"fmt"
"golang.org/x/text/language"
"golang.org/x/text/message"
)
func main() {
var nPizzas = 4
// The following call gets replaced by a call to the globally
// defined printer.
fmt.Println("We ate", nPizzas, "pizzas.")
p := message.NewPrinter(language.English)
// Prevent build failure, although it is okay for gotext.
p.Println(1024)
// Replaced by a call to p.
fmt.Println("Example punctuation:", "$%^&!")
{
q := message.NewPrinter(language.French)
const leaveAnIdentBe = "Don't expand me."
fmt.Print(leaveAnIdentBe)
q.Println() // Prevent build failure, although it is okay for gotext.
}
fmt.Printf("Hello %s\n", "City")
}

View File

@ -1,16 +0,0 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
package main
import (
"golang.org/x/text/language"
"golang.org/x/text/message"
)
// The printer defined here will be picked up by the first print statement
// in main.go.
var printer = message.NewPrinter(language.English)

View File

@ -1,81 +0,0 @@
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package main
import (
"encoding/json"
"io/ioutil"
"os"
"path/filepath"
"golang.org/x/text/internal"
"golang.org/x/text/language"
"golang.org/x/text/message/pipeline"
)
// TODO:
// - merge information into existing files
// - handle different file formats (PO, XLIFF)
// - handle features (gender, plural)
// - message rewriting
var (
srcLang *string
lang *string
)
func init() {
srcLang = cmdExtract.Flag.String("srclang", "en-US", "the source-code language")
lang = cmdExtract.Flag.String("lang", "en-US", "comma-separated list of languages to process")
}
var cmdExtract = &Command{
Run: runExtract,
UsageLine: "extract <package>*",
Short: "extracts strings to be translated from code",
}
func runExtract(cmd *Command, args []string) error {
tag, err := language.Parse(*srcLang)
if err != nil {
return wrap(err, "")
}
config := &pipeline.Config{
SourceLanguage: tag,
Packages: args,
}
out, err := pipeline.Extract(config)
if err != nil {
return wrap(err, "extraction failed")
}
data, err := json.MarshalIndent(out, "", " ")
if err != nil {
return wrap(err, "")
}
os.MkdirAll(*dir, 0755)
// TODO: this file can probably go if we replace the extract + generate
// cycle with a init once and update cycle.
file := filepath.Join(*dir, extractFile)
if err := ioutil.WriteFile(file, data, 0644); err != nil {
return wrap(err, "could not create file")
}
langs := append(getLangs(), tag)
langs = internal.UniqueTags(langs)
for _, tag := range langs {
// TODO: inject translations from existing files to avoid retranslation.
out.Language = tag
data, err := json.MarshalIndent(out, "", " ")
if err != nil {
return wrap(err, "JSON marshal failed")
}
file := filepath.Join(*dir, tag.String(), outFile)
if err := os.MkdirAll(filepath.Dir(file), 0750); err != nil {
return wrap(err, "dir create failed")
}
if err := ioutil.WriteFile(file, data, 0740); err != nil {
return wrap(err, "write failed")
}
}
return nil
}

View File

@ -1,104 +0,0 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package main
import (
"encoding/json"
"fmt"
"io/ioutil"
"os"
"path/filepath"
"regexp"
"strings"
"golang.org/x/text/message/pipeline"
"golang.org/x/tools/go/loader"
)
func init() {
out = cmdGenerate.Flag.String("out", "", "output file to write to")
}
var (
out *string
)
var cmdGenerate = &Command{
Run: runGenerate,
UsageLine: "generate <package>",
Short: "generates code to insert translated messages",
}
var transRe = regexp.MustCompile(`messages\.(.*)\.json`)
func runGenerate(cmd *Command, args []string) error {
prog, err := loadPackages(&loader.Config{}, args)
if err != nil {
return wrap(err, "could not load package")
}
pkgs := prog.InitialPackages()
if len(pkgs) != 1 {
return fmt.Errorf("more than one package selected: %v", pkgs)
}
pkg := pkgs[0].Pkg.Name()
// TODO: add in external input. Right now we assume that all files are
// manually created and stored in the textdata directory.
// Build up index of translations and original messages.
extracted := pipeline.Locale{}
translations := []*pipeline.Locale{}
err = filepath.Walk(*dir, func(path string, f os.FileInfo, err error) error {
if err != nil {
return wrap(err, "loading data")
}
if f.IsDir() {
return nil
}
if f.Name() == extractFile {
b, err := ioutil.ReadFile(path)
if err != nil {
return wrap(err, "read file failed")
}
if err := json.Unmarshal(b, &extracted); err != nil {
return wrap(err, "unmarshal source failed")
}
return nil
}
if f.Name() == outFile {
return nil
}
if !strings.HasSuffix(path, gotextSuffix) {
return nil
}
b, err := ioutil.ReadFile(path)
if err != nil {
return wrap(err, "read file failed")
}
var locale pipeline.Locale
if err := json.Unmarshal(b, &locale); err != nil {
return wrap(err, "parsing translation file failed")
}
translations = append(translations, &locale)
return nil
})
if err != nil {
return err
}
w := os.Stdout
if *out != "" {
w, err = os.Create(*out)
if err != nil {
return wrap(err, "create file failed")
}
}
_, err = pipeline.Generate(w, pkg, &extracted, translations...)
return err
}

View File

@ -1,352 +0,0 @@
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:generate go build -o gotext.latest
//go:generate ./gotext.latest help gendocumentation
//go:generate rm gotext.latest
package main
import (
"bufio"
"bytes"
"flag"
"fmt"
"go/build"
"go/format"
"io"
"io/ioutil"
"log"
"os"
"strings"
"sync"
"text/template"
"unicode"
"unicode/utf8"
"golang.org/x/text/language"
"golang.org/x/tools/go/buildutil"
)
func init() {
flag.Var((*buildutil.TagsFlag)(&build.Default.BuildTags), "tags", buildutil.TagsFlagDoc)
}
var dir = flag.String("dir", "locales", "default subdirectory to store translation files")
// NOTE: the Command struct is copied from the go tool in core.
// A Command is an implementation of a go command
// like go build or go fix.
type Command struct {
// Run runs the command.
// The args are the arguments after the command name.
Run func(cmd *Command, args []string) error
// UsageLine is the one-line usage message.
// The first word in the line is taken to be the command name.
UsageLine string
// Short is the short description shown in the 'go help' output.
Short string
// Long is the long message shown in the 'go help <this-command>' output.
Long string
// Flag is a set of flags specific to this command.
Flag flag.FlagSet
}
// Name returns the command's name: the first word in the usage line.
func (c *Command) Name() string {
name := c.UsageLine
i := strings.Index(name, " ")
if i >= 0 {
name = name[:i]
}
return name
}
func (c *Command) Usage() {
fmt.Fprintf(os.Stderr, "usage: %s\n\n", c.UsageLine)
fmt.Fprintf(os.Stderr, "%s\n", strings.TrimSpace(c.Long))
os.Exit(2)
}
// Runnable reports whether the command can be run; otherwise
// it is a documentation pseudo-command such as importpath.
func (c *Command) Runnable() bool {
return c.Run != nil
}
// Commands lists the available commands and help topics.
// The order here is the order in which they are printed by 'go help'.
var commands = []*Command{
cmdExtract,
cmdRewrite,
cmdGenerate,
// TODO:
// - update: full-cycle update of extraction, sending, and integration
// - report: report of freshness of translations
}
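// Sketch only: how an additional subcommand (for example the "report" command
// mentioned in the TODO above) could be declared and appended to the commands
// slice. Neither cmdReport nor runReport exists in this package; both are
// hypothetical.
var cmdReport = &Command{
	Run:       runReport,
	UsageLine: "report",
	Short:     "report of freshness of translations",
}

func runReport(cmd *Command, args []string) error {
	// A real implementation would compare the extracted messages with the
	// translation files under *dir and list which translations are stale.
	return nil
}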
var exitStatus = 0
var exitMu sync.Mutex
func setExitStatus(n int) {
exitMu.Lock()
if exitStatus < n {
exitStatus = n
}
exitMu.Unlock()
}
var origEnv []string
func main() {
flag.Usage = usage
flag.Parse()
log.SetFlags(0)
args := flag.Args()
if len(args) < 1 {
usage()
}
if args[0] == "help" {
help(args[1:])
return
}
for _, cmd := range commands {
if cmd.Name() == args[0] && cmd.Runnable() {
cmd.Flag.Usage = func() { cmd.Usage() }
cmd.Flag.Parse(args[1:])
args = cmd.Flag.Args()
if err := cmd.Run(cmd, args); err != nil {
fatalf("gotext: %+v", err)
}
exit()
return
}
}
fmt.Fprintf(os.Stderr, "gotext: unknown subcommand %q\nRun 'go help' for usage.\n", args[0])
setExitStatus(2)
exit()
}
var usageTemplate = `gotext is a tool for managing text in Go source code.
Usage:
gotext command [arguments]
The commands are:
{{range .}}{{if .Runnable}}
{{.Name | printf "%-11s"}} {{.Short}}{{end}}{{end}}
Use "go help [command]" for more information about a command.
Additional help topics:
{{range .}}{{if not .Runnable}}
{{.Name | printf "%-11s"}} {{.Short}}{{end}}{{end}}
Use "gotext help [topic]" for more information about that topic.
`
var helpTemplate = `{{if .Runnable}}usage: go {{.UsageLine}}
{{end}}{{.Long | trim}}
`
var documentationTemplate = `{{range .}}{{if .Short}}{{.Short | capitalize}}
{{end}}{{if .Runnable}}Usage:
go {{.UsageLine}}
{{end}}{{.Long | trim}}
{{end}}`
// commentWriter writes a Go comment to the underlying io.Writer,
// using line comment form (//).
type commentWriter struct {
W io.Writer
wroteSlashes bool // Wrote "//" at the beginning of the current line.
}
func (c *commentWriter) Write(p []byte) (int, error) {
var n int
for i, b := range p {
if !c.wroteSlashes {
s := "//"
if b != '\n' {
s = "// "
}
if _, err := io.WriteString(c.W, s); err != nil {
return n, err
}
c.wroteSlashes = true
}
n0, err := c.W.Write(p[i : i+1])
n += n0
if err != nil {
return n, err
}
if b == '\n' {
c.wroteSlashes = false
}
}
return len(p), nil
}
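// Illustrative usage sketch (not part of the original file): because Write
// emits "// " at the start of every line, wrapping a writer in commentWriter
// turns plain text into Go line comments.
//
//	cw := &commentWriter{W: os.Stdout}
//	io.WriteString(cw, "first line\nsecond line\n")
//	// which prints:
//	//   // first line
//	//   // second line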
// An errWriter wraps a writer, recording whether a write error occurred.
type errWriter struct {
w io.Writer
err error
}
func (w *errWriter) Write(b []byte) (int, error) {
n, err := w.w.Write(b)
if err != nil {
w.err = err
}
return n, err
}
// tmpl executes the given template text on data, writing the result to w.
func tmpl(w io.Writer, text string, data interface{}) {
t := template.New("top")
t.Funcs(template.FuncMap{"trim": strings.TrimSpace, "capitalize": capitalize})
template.Must(t.Parse(text))
ew := &errWriter{w: w}
err := t.Execute(ew, data)
if ew.err != nil {
// I/O error writing. Ignore write on closed pipe.
if strings.Contains(ew.err.Error(), "pipe") {
os.Exit(1)
}
fatalf("writing output: %v", ew.err)
}
if err != nil {
panic(err)
}
}
func capitalize(s string) string {
if s == "" {
return s
}
r, n := utf8.DecodeRuneInString(s)
return string(unicode.ToTitle(r)) + s[n:]
}
func printUsage(w io.Writer) {
bw := bufio.NewWriter(w)
tmpl(bw, usageTemplate, commands)
bw.Flush()
}
func usage() {
printUsage(os.Stderr)
os.Exit(2)
}
// help implements the 'help' command.
func help(args []string) {
if len(args) == 0 {
printUsage(os.Stdout)
// not exit 2: succeeded at 'go help'.
return
}
if len(args) != 1 {
fmt.Fprintf(os.Stderr, "usage: go help command\n\nToo many arguments given.\n")
os.Exit(2) // failed at 'go help'
}
arg := args[0]
// 'go help documentation' generates doc.go.
if strings.HasSuffix(arg, "documentation") {
w := &bytes.Buffer{}
fmt.Fprintln(w, "// Code generated by go generate. DO NOT EDIT.")
fmt.Fprintln(w)
buf := new(bytes.Buffer)
printUsage(buf)
usage := &Command{Long: buf.String()}
tmpl(&commentWriter{W: w}, documentationTemplate, append([]*Command{usage}, commands...))
fmt.Fprintln(w, "package main")
if arg == "gendocumentation" {
b, err := format.Source(w.Bytes())
if err != nil {
logf("Could not format generated docs: %v\n", err)
}
if err := ioutil.WriteFile("doc.go", b, 0666); err != nil {
logf("Could not create file alldocs.go: %v\n", err)
}
} else {
fmt.Println(w.String())
}
return
}
for _, cmd := range commands {
if cmd.Name() == arg {
tmpl(os.Stdout, helpTemplate, cmd)
// not exit 2: succeeded at 'go help cmd'.
return
}
}
fmt.Fprintf(os.Stderr, "Unknown help topic %#q. Run 'go help'.\n", arg)
os.Exit(2) // failed at 'go help cmd'
}
func getLangs() (tags []language.Tag) {
for _, t := range strings.Split(*lang, ",") {
if t == "" {
continue
}
tag, err := language.Parse(t)
if err != nil {
fatalf("gotext: could not parse language %q: %v", t, err)
}
tags = append(tags, tag)
}
return tags
}
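// For example (illustrative values): with -lang=en,zh-Hans the function
// returns the tags [en zh-Hans]; a tag that fails language.Parse aborts the
// run via fatalf.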
var atexitFuncs []func()
func atexit(f func()) {
atexitFuncs = append(atexitFuncs, f)
}
func exit() {
for _, f := range atexitFuncs {
f()
}
os.Exit(exitStatus)
}
func fatalf(format string, args ...interface{}) {
logf(format, args...)
exit()
}
func logf(format string, args ...interface{}) {
log.Printf(format, args...)
setExitStatus(1)
}
func exitIfErrors() {
if exitStatus != 0 {
exit()
}
}

View File

@ -1,55 +0,0 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package main
import (
"os"
"golang.org/x/text/message/pipeline"
)
const printerType = "golang.org/x/text/message.Printer"
// TODO:
// - merge information into existing files
// - handle different file formats (PO, XLIFF)
// - handle features (gender, plural)
// - message rewriting
func init() {
overwrite = cmdRewrite.Flag.Bool("w", false, "write files in place")
}
var (
overwrite *bool
)
var cmdRewrite = &Command{
Run: runRewrite,
UsageLine: "rewrite <package>",
Short: "rewrites fmt functions to use a message Printer",
Long: `
rewrite is typically done once for a project. It rewrites all usages of
fmt to use x/text's message package whenever a message.Printer is in scope.
It rewrites Print and Println calls with constant strings to the equivalent
using Printf to allow translators to reorder arguments.
`,
}
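// Illustrative sketch of the transformation (assumed example; the actual
// rewriting is performed by pipeline.Rewrite): with a message.Printer p in
// scope, a call such as
//
//	fmt.Println("Hello world")
//
// is rewritten to roughly
//
//	p.Printf("Hello world\n")
//
// so that the literal becomes a translatable format string whose arguments
// translators may reorder.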
func runRewrite(cmd *Command, args []string) error {
w := os.Stdout
if *overwrite {
w = nil
}
pkg := "."
switch len(args) {
case 0:
case 1:
pkg = args[0]
default:
return errorf("can only specify at most one package")
}
return pipeline.Rewrite(w, pkg)
}

View File

@ -1 +0,0 @@
issuerepo: golang/go

View File

@ -1,290 +0,0 @@
// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package build
import "testing"
// cjk returns an implicit collation element for a CJK rune.
func cjk(r rune) []rawCE {
// A CJK character C is represented in the DUCET as
// [.AAAA.0020.0002.C][.BBBB.0000.0000.C]
// Where AAAA is the most significant 15 bits plus a base value.
// Any base value will work for the test, so we pick the common value of FB40.
const base = 0xFB40
return []rawCE{
{w: []int{base + int(r>>15), defaultSecondary, defaultTertiary, int(r)}},
{w: []int{int(r&0x7FFF) | 0x8000, 0, 0, int(r)}},
}
}
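// Worked example derived from the formula above: for r = 0x4E00 the first
// element has primary weight 0xFB40 + (0x4E00>>15) = 0xFB40 and the second
// has (0x4E00&0x7FFF)|0x8000 = 0xCE00; both store the rune value 0x4E00 as
// the fourth weight.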
func pCE(p int) []rawCE {
return mkCE([]int{p, defaultSecondary, defaultTertiary, 0}, 0)
}
func pqCE(p, q int) []rawCE {
return mkCE([]int{p, defaultSecondary, defaultTertiary, q}, 0)
}
func ptCE(p, t int) []rawCE {
return mkCE([]int{p, defaultSecondary, t, 0}, 0)
}
func ptcCE(p, t int, ccc uint8) []rawCE {
return mkCE([]int{p, defaultSecondary, t, 0}, ccc)
}
func sCE(s int) []rawCE {
return mkCE([]int{0, s, defaultTertiary, 0}, 0)
}
func stCE(s, t int) []rawCE {
return mkCE([]int{0, s, t, 0}, 0)
}
func scCE(s int, ccc uint8) []rawCE {
return mkCE([]int{0, s, defaultTertiary, 0}, ccc)
}
func mkCE(w []int, ccc uint8) []rawCE {
return []rawCE{rawCE{w, ccc}}
}
// ducetElem is used to define test data that is used to generate a table.
type ducetElem struct {
str string
ces []rawCE
}
func newBuilder(t *testing.T, ducet []ducetElem) *Builder {
b := NewBuilder()
for _, e := range ducet {
ces := [][]int{}
for _, ce := range e.ces {
ces = append(ces, ce.w)
}
if err := b.Add([]rune(e.str), ces, nil); err != nil {
t.Errorf(err.Error())
}
}
b.t = &table{}
b.root.sort()
return b
}
type convertTest struct {
in, out []rawCE
err bool
}
var convLargeTests = []convertTest{
{pCE(0xFB39), pCE(0xFB39), false},
{cjk(0x2F9B2), pqCE(0x3F9B2, 0x2F9B2), false},
{pCE(0xFB40), pCE(0), true},
{append(pCE(0xFB40), pCE(0)[0]), pCE(0), true},
{pCE(0xFFFE), pCE(illegalOffset), false},
{pCE(0xFFFF), pCE(illegalOffset + 1), false},
}
func TestConvertLarge(t *testing.T) {
for i, tt := range convLargeTests {
e := new(entry)
for _, ce := range tt.in {
e.elems = append(e.elems, makeRawCE(ce.w, ce.ccc))
}
elems, err := convertLargeWeights(e.elems)
if tt.err {
if err == nil {
t.Errorf("%d: expected error; none found", i)
}
continue
} else if err != nil {
t.Errorf("%d: unexpected error: %v", i, err)
}
if !equalCEArrays(elems, tt.out) {
t.Errorf("%d: conversion was %x; want %x", i, elems, tt.out)
}
}
}
// Collation element table for simplify tests.
var simplifyTest = []ducetElem{
{"\u0300", sCE(30)}, // grave
{"\u030C", sCE(40)}, // caron
{"A", ptCE(100, 8)},
{"D", ptCE(104, 8)},
{"E", ptCE(105, 8)},
{"I", ptCE(110, 8)},
{"z", ptCE(130, 8)},
{"\u05F2", append(ptCE(200, 4), ptCE(200, 4)[0])},
{"\u05B7", sCE(80)},
{"\u00C0", append(ptCE(100, 8), sCE(30)...)}, // A with grave, can be removed
{"\u00C8", append(ptCE(105, 8), sCE(30)...)}, // E with grave
{"\uFB1F", append(ptCE(200, 4), ptCE(200, 4)[0], sCE(80)[0])}, // eliminated by NFD
{"\u00C8\u0302", ptCE(106, 8)}, // block previous from simplifying
{"\u01C5", append(ptCE(104, 9), ptCE(130, 4)[0], stCE(40, maxTertiary)[0])}, // eliminated by NFKD
// no removal: tertiary value of third element is not maxTertiary
{"\u2162", append(ptCE(110, 9), ptCE(110, 4)[0], ptCE(110, 8)[0])},
}
var genColTests = []ducetElem{
{"\uFA70", pqCE(0x1FA70, 0xFA70)},
{"A\u0300", append(ptCE(100, 8), sCE(30)...)},
{"A\u0300\uFA70", append(ptCE(100, 8), sCE(30)[0], pqCE(0x1FA70, 0xFA70)[0])},
{"A\u0300A\u0300", append(ptCE(100, 8), sCE(30)[0], ptCE(100, 8)[0], sCE(30)[0])},
}
func TestGenColElems(t *testing.T) {
b := newBuilder(t, simplifyTest[:5])
for i, tt := range genColTests {
res := b.root.genColElems(tt.str)
if !equalCEArrays(tt.ces, res) {
t.Errorf("%d: result %X; want %X", i, res, tt.ces)
}
}
}
type strArray []string
func (sa strArray) contains(s string) bool {
for _, e := range sa {
if e == s {
return true
}
}
return false
}
var simplifyRemoved = strArray{"\u00C0", "\uFB1F"}
var simplifyMarked = strArray{"\u01C5"}
func TestSimplify(t *testing.T) {
b := newBuilder(t, simplifyTest)
o := &b.root
simplify(o)
for i, tt := range simplifyTest {
if simplifyRemoved.contains(tt.str) {
continue
}
e := o.find(tt.str)
if e.str != tt.str || !equalCEArrays(e.elems, tt.ces) {
t.Errorf("%d: found element %s -> %X; want %s -> %X", i, e.str, e.elems, tt.str, tt.ces)
break
}
}
var i, k int
for e := o.front(); e != nil; e, _ = e.nextIndexed() {
gold := simplifyMarked.contains(e.str)
if gold {
k++
}
if gold != e.decompose {
t.Errorf("%d: %s has decompose %v; want %v", i, e.str, e.decompose, gold)
}
i++
}
if k != len(simplifyMarked) {
t.Errorf(" an entry that should be marked as decompose was deleted")
}
}
var expandTest = []ducetElem{
{"\u0300", append(scCE(29, 230), scCE(30, 230)...)},
{"\u00C0", append(ptCE(100, 8), scCE(30, 230)...)},
{"\u00C8", append(ptCE(105, 8), scCE(30, 230)...)},
{"\u00C9", append(ptCE(105, 8), scCE(30, 230)...)}, // identical expansion
{"\u05F2", append(ptCE(200, 4), ptCE(200, 4)[0], ptCE(200, 4)[0])},
{"\u01FF", append(ptCE(200, 4), ptcCE(201, 4, 0)[0], scCE(30, 230)[0])},
}
func TestExpand(t *testing.T) {
const (
totalExpansions = 5
totalElements = 2 + 2 + 2 + 3 + 3 + totalExpansions
)
b := newBuilder(t, expandTest)
o := &b.root
b.processExpansions(o)
e := o.front()
for _, tt := range expandTest {
exp := b.t.ExpandElem[e.expansionIndex:]
if int(exp[0]) != len(tt.ces) {
t.Errorf("%U: len(expansion)==%d; want %d", []rune(tt.str)[0], exp[0], len(tt.ces))
}
exp = exp[1:]
for j, w := range tt.ces {
if ce, _ := makeCE(w); exp[j] != ce {
t.Errorf("%U: element %d is %X; want %X", []rune(tt.str)[0], j, exp[j], ce)
}
}
e, _ = e.nextIndexed()
}
// Verify uniquing.
if len(b.t.ExpandElem) != totalElements {
t.Errorf("len(expandElem)==%d; want %d", len(b.t.ExpandElem), totalElements)
}
}
var contractTest = []ducetElem{
{"abc", pCE(102)},
{"abd", pCE(103)},
{"a", pCE(100)},
{"ab", pCE(101)},
{"ac", pCE(104)},
{"bcd", pCE(202)},
{"b", pCE(200)},
{"bc", pCE(201)},
{"bd", pCE(203)},
// shares suffixes with a*
{"Ab", pCE(301)},
{"A", pCE(300)},
{"Ac", pCE(304)},
{"Abc", pCE(302)},
{"Abd", pCE(303)},
// starter to be ignored
{"z", pCE(1000)},
}
func TestContract(t *testing.T) {
const (
totalElements = 5 + 5 + 4
)
b := newBuilder(t, contractTest)
o := &b.root
b.processContractions(o)
indexMap := make(map[int]bool)
handleMap := make(map[rune]*entry)
for e := o.front(); e != nil; e, _ = e.nextIndexed() {
if e.contractionHandle.n > 0 {
handleMap[e.runes[0]] = e
indexMap[e.contractionHandle.index] = true
}
}
// Verify uniquing.
if len(indexMap) != 2 {
t.Errorf("number of tries is %d; want %d", len(indexMap), 2)
}
for _, tt := range contractTest {
e, ok := handleMap[[]rune(tt.str)[0]]
if !ok {
continue
}
str := tt.str[1:]
offset, n := lookup(&b.t.ContractTries, e.contractionHandle, []byte(str))
if len(str) != n {
t.Errorf("%s: bytes consumed==%d; want %d", tt.str, n, len(str))
}
ce := b.t.ContractElem[offset+e.contractionIndex]
if want, _ := makeCE(tt.ces[0]); want != ce {
t.Errorf("%s: element %X; want %X", tt.str, ce, want)
}
}
if len(b.t.ContractElem) != totalElements {
t.Errorf("len(expandElem)==%d; want %d", len(b.t.ContractElem), totalElements)
}
}

View File

@ -1,215 +0,0 @@
// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package build
import (
"testing"
"golang.org/x/text/internal/colltab"
)
type ceTest struct {
f func(in []int) (uint32, error)
arg []int
val uint32
}
func normalCE(in []int) (ce uint32, err error) {
return makeCE(rawCE{w: in[:3], ccc: uint8(in[3])})
}
func expandCE(in []int) (ce uint32, err error) {
return makeExpandIndex(in[0])
}
func contractCE(in []int) (ce uint32, err error) {
return makeContractIndex(ctHandle{in[0], in[1]}, in[2])
}
func decompCE(in []int) (ce uint32, err error) {
return makeDecompose(in[0], in[1])
}
var ceTests = []ceTest{
{normalCE, []int{0, 0, 0, 0}, 0xA0000000},
{normalCE, []int{0, 0x28, 3, 0}, 0xA0002803},
{normalCE, []int{0, 0x28, 3, 0xFF}, 0xAFF02803},
{normalCE, []int{100, defaultSecondary, 3, 0}, 0x0000C883},
// non-ignorable primary with non-default secondary
{normalCE, []int{100, 0x28, defaultTertiary, 0}, 0x4000C828},
{normalCE, []int{100, defaultSecondary + 8, 3, 0}, 0x0000C983},
{normalCE, []int{100, 0, 3, 0}, 0xFFFF}, // non-ignorable primary with non-supported secondary
{normalCE, []int{100, 1, 3, 0}, 0xFFFF},
{normalCE, []int{1 << maxPrimaryBits, defaultSecondary, 0, 0}, 0xFFFF},
{normalCE, []int{0, 1 << maxSecondaryBits, 0, 0}, 0xFFFF},
{normalCE, []int{100, defaultSecondary, 1 << maxTertiaryBits, 0}, 0xFFFF},
{normalCE, []int{0x123, defaultSecondary, 8, 0xFF}, 0x88FF0123},
{normalCE, []int{0x123, defaultSecondary + 1, 8, 0xFF}, 0xFFFF},
{contractCE, []int{0, 0, 0}, 0xC0000000},
{contractCE, []int{1, 1, 1}, 0xC0010011},
{contractCE, []int{1, (1 << maxNBits) - 1, 1}, 0xC001001F},
{contractCE, []int{(1 << maxTrieIndexBits) - 1, 1, 1}, 0xC001FFF1},
{contractCE, []int{1, 1, (1 << maxContractOffsetBits) - 1}, 0xDFFF0011},
{contractCE, []int{1, (1 << maxNBits), 1}, 0xFFFF},
{contractCE, []int{(1 << maxTrieIndexBits), 1, 1}, 0xFFFF},
{contractCE, []int{1, (1 << maxContractOffsetBits), 1}, 0xFFFF},
{expandCE, []int{0}, 0xE0000000},
{expandCE, []int{5}, 0xE0000005},
{expandCE, []int{(1 << maxExpandIndexBits) - 1}, 0xE000FFFF},
{expandCE, []int{1 << maxExpandIndexBits}, 0xFFFF},
{decompCE, []int{0, 0}, 0xF0000000},
{decompCE, []int{1, 1}, 0xF0000101},
{decompCE, []int{0x1F, 0x1F}, 0xF0001F1F},
{decompCE, []int{256, 0x1F}, 0xFFFF},
{decompCE, []int{0x1F, 256}, 0xFFFF},
}
func TestColElem(t *testing.T) {
for i, tt := range ceTests {
in := make([]int, len(tt.arg))
copy(in, tt.arg)
ce, err := tt.f(in)
if tt.val == 0xFFFF {
if err == nil {
t.Errorf("%d: expected error for args %x", i, tt.arg)
}
continue
}
if err != nil {
t.Errorf("%d: unexpected error: %v", i, err.Error())
}
if ce != tt.val {
t.Errorf("%d: colElem=%X; want %X", i, ce, tt.val)
}
}
}
func mkRawCES(in [][]int) []rawCE {
out := []rawCE{}
for _, w := range in {
out = append(out, rawCE{w: w})
}
return out
}
type weightsTest struct {
a, b [][]int
level colltab.Level
result int
}
var nextWeightTests = []weightsTest{
{
a: [][]int{{100, 20, 5, 0}},
b: [][]int{{101, defaultSecondary, defaultTertiary, 0}},
level: colltab.Primary,
},
{
a: [][]int{{100, 20, 5, 0}},
b: [][]int{{100, 21, defaultTertiary, 0}},
level: colltab.Secondary,
},
{
a: [][]int{{100, 20, 5, 0}},
b: [][]int{{100, 20, 6, 0}},
level: colltab.Tertiary,
},
{
a: [][]int{{100, 20, 5, 0}},
b: [][]int{{100, 20, 5, 0}},
level: colltab.Identity,
},
}
var extra = [][]int{{200, 32, 8, 0}, {0, 32, 8, 0}, {0, 0, 8, 0}, {0, 0, 0, 0}}
func TestNextWeight(t *testing.T) {
for i, tt := range nextWeightTests {
test := func(l colltab.Level, tt weightsTest, a, gold [][]int) {
res := nextWeight(tt.level, mkRawCES(a))
if !equalCEArrays(mkRawCES(gold), res) {
t.Errorf("%d:%d: expected weights %d; found %d", i, l, gold, res)
}
}
test(-1, tt, tt.a, tt.b)
for l := colltab.Primary; l <= colltab.Tertiary; l++ {
if tt.level <= l {
test(l, tt, append(tt.a, extra[l]), tt.b)
} else {
test(l, tt, append(tt.a, extra[l]), append(tt.b, extra[l]))
}
}
}
}
var compareTests = []weightsTest{
{
[][]int{{100, 20, 5, 0}},
[][]int{{100, 20, 5, 0}},
colltab.Identity,
0,
},
{
[][]int{{100, 20, 5, 0}, extra[0]},
[][]int{{100, 20, 5, 1}},
colltab.Primary,
1,
},
{
[][]int{{100, 20, 5, 0}},
[][]int{{101, 20, 5, 0}},
colltab.Primary,
-1,
},
{
[][]int{{101, 20, 5, 0}},
[][]int{{100, 20, 5, 0}},
colltab.Primary,
1,
},
{
[][]int{{100, 0, 0, 0}, {0, 20, 5, 0}},
[][]int{{0, 20, 5, 0}, {100, 0, 0, 0}},
colltab.Identity,
0,
},
{
[][]int{{100, 20, 5, 0}},
[][]int{{100, 21, 5, 0}},
colltab.Secondary,
-1,
},
{
[][]int{{100, 20, 5, 0}},
[][]int{{100, 20, 2, 0}},
colltab.Tertiary,
1,
},
{
[][]int{{100, 20, 5, 1}},
[][]int{{100, 20, 5, 2}},
colltab.Quaternary,
-1,
},
}
func TestCompareWeights(t *testing.T) {
for i, tt := range compareTests {
test := func(tt weightsTest, a, b [][]int) {
res, level := compareWeights(mkRawCES(a), mkRawCES(b))
if res != tt.result {
t.Errorf("%d: expected comparison result %d; found %d", i, tt.result, res)
}
if level != tt.level {
t.Errorf("%d: expected level %d; found %d", i, tt.level, level)
}
}
test(tt, tt.a, tt.b)
test(tt, append(tt.a, extra[0]), append(tt.b, extra[0]))
}
}

View File

@ -1,266 +0,0 @@
// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package build
import (
"bytes"
"sort"
"testing"
"golang.org/x/text/internal/colltab"
)
var largetosmall = []stridx{
{"a", 5},
{"ab", 4},
{"abc", 3},
{"abcd", 2},
{"abcde", 1},
{"abcdef", 0},
}
var offsetSortTests = [][]stridx{
{
{"bcde", 1},
{"bc", 5},
{"ab", 4},
{"bcd", 3},
{"abcd", 0},
{"abc", 2},
},
largetosmall,
}
func TestOffsetSort(t *testing.T) {
for i, st := range offsetSortTests {
sort.Sort(offsetSort(st))
for j, si := range st {
if j != si.index {
t.Errorf("%d: failed: %v", i, st)
}
}
}
for i, tt := range genStateTests {
// ensure input is well-formed
sort.Sort(offsetSort(tt.in))
for j, si := range tt.in {
if si.index != j+1 {
t.Errorf("%dth sort failed: %v", i, tt.in)
}
}
}
}
var genidxtest1 = []stridx{
{"bcde", 3},
{"bc", 6},
{"ab", 2},
{"bcd", 5},
{"abcd", 0},
{"abc", 1},
{"bcdf", 4},
}
var genidxSortTests = [][]stridx{
genidxtest1,
largetosmall,
}
func TestGenIdxSort(t *testing.T) {
for i, st := range genidxSortTests {
sort.Sort(genidxSort(st))
for j, si := range st {
if j != si.index {
t.Errorf("%dth sort failed %v", i, st)
break
}
}
}
}
var entrySortTests = []colltab.ContractTrieSet{
{
{10, 0, 1, 3},
{99, 0, 1, 0},
{20, 50, 0, 2},
{30, 0, 1, 1},
},
}
func TestEntrySort(t *testing.T) {
for i, et := range entrySortTests {
sort.Sort(entrySort(et))
for j, fe := range et {
if j != int(fe.I) {
t.Errorf("%dth sort failed %v", i, et)
break
}
}
}
}
type GenStateTest struct {
in []stridx
firstBlockLen int
out colltab.ContractTrieSet
}
var genStateTests = []GenStateTest{
{[]stridx{
{"abc", 1},
},
1,
colltab.ContractTrieSet{
{'a', 0, 1, noIndex},
{'b', 0, 1, noIndex},
{'c', 'c', final, 1},
},
},
{[]stridx{
{"abc", 1},
{"abd", 2},
{"abe", 3},
},
1,
colltab.ContractTrieSet{
{'a', 0, 1, noIndex},
{'b', 0, 1, noIndex},
{'c', 'e', final, 1},
},
},
{[]stridx{
{"abc", 1},
{"ab", 2},
{"a", 3},
},
1,
colltab.ContractTrieSet{
{'a', 0, 1, 3},
{'b', 0, 1, 2},
{'c', 'c', final, 1},
},
},
{[]stridx{
{"abc", 1},
{"abd", 2},
{"ab", 3},
{"ac", 4},
{"a", 5},
{"b", 6},
},
2,
colltab.ContractTrieSet{
{'b', 'b', final, 6},
{'a', 0, 2, 5},
{'c', 'c', final, 4},
{'b', 0, 1, 3},
{'c', 'd', final, 1},
},
},
{[]stridx{
{"bcde", 2},
{"bc", 7},
{"ab", 6},
{"bcd", 5},
{"abcd", 1},
{"abc", 4},
{"bcdf", 3},
},
2,
colltab.ContractTrieSet{
{'b', 3, 1, noIndex},
{'a', 0, 1, noIndex},
{'b', 0, 1, 6},
{'c', 0, 1, 4},
{'d', 'd', final, 1},
{'c', 0, 1, 7},
{'d', 0, 1, 5},
{'e', 'f', final, 2},
},
},
}
func TestGenStates(t *testing.T) {
for i, tt := range genStateTests {
si := []stridx{}
for _, e := range tt.in {
si = append(si, e)
}
// ensure input is well-formed
sort.Sort(genidxSort(si))
ct := colltab.ContractTrieSet{}
n, _ := genStates(&ct, si)
if nn := tt.firstBlockLen; nn != n {
t.Errorf("%d: block len %v; want %v", i, n, nn)
}
if lv, lw := len(ct), len(tt.out); lv != lw {
t.Errorf("%d: len %v; want %v", i, lv, lw)
continue
}
for j, fe := range tt.out {
const msg = "%d:%d: value %s=%v; want %v"
if fe.L != ct[j].L {
t.Errorf(msg, i, j, "l", ct[j].L, fe.L)
}
if fe.H != ct[j].H {
t.Errorf(msg, i, j, "h", ct[j].H, fe.H)
}
if fe.N != ct[j].N {
t.Errorf(msg, i, j, "n", ct[j].N, fe.N)
}
if fe.I != ct[j].I {
t.Errorf(msg, i, j, "i", ct[j].I, fe.I)
}
}
}
}
func TestLookupContraction(t *testing.T) {
for i, tt := range genStateTests {
input := []string{}
for _, e := range tt.in {
input = append(input, e.str)
}
cts := colltab.ContractTrieSet{}
h, _ := appendTrie(&cts, input)
for j, si := range tt.in {
str := si.str
for _, s := range []string{str, str + "X"} {
msg := "%d:%d: %s(%s) %v; want %v"
idx, sn := lookup(&cts, h, []byte(s))
if idx != si.index {
t.Errorf(msg, i, j, "index", s, idx, si.index)
}
if sn != len(str) {
t.Errorf(msg, i, j, "sn", s, sn, len(str))
}
}
}
}
}
func TestPrintContractionTrieSet(t *testing.T) {
testdata := colltab.ContractTrieSet(genStateTests[4].out)
buf := &bytes.Buffer{}
print(&testdata, buf, "test")
if contractTrieOutput != buf.String() {
t.Errorf("output differs; found\n%s", buf.String())
println(string(buf.Bytes()))
}
}
const contractTrieOutput = `// testCTEntries: 8 entries, 32 bytes
var testCTEntries = [8]struct{L,H,N,I uint8}{
{0x62, 0x3, 1, 255},
{0x61, 0x0, 1, 255},
{0x62, 0x0, 1, 6},
{0x63, 0x0, 1, 4},
{0x64, 0x64, 0, 1},
{0x63, 0x0, 1, 7},
{0x64, 0x0, 1, 5},
{0x65, 0x66, 0, 2},
}
var testContractTrieSet = colltab.ContractTrieSet( testCTEntries[:] )
`

View File

@ -1,229 +0,0 @@
// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package build
import (
"strconv"
"testing"
"golang.org/x/text/internal/colltab"
)
type entryTest struct {
f func(in []int) (uint32, error)
arg []int
val uint32
}
// makeList returns a list of entries of length n+2, with n normal
// entries plus a leading and trailing anchor.
func makeList(n int) []*entry {
es := make([]*entry, n+2)
weights := []rawCE{{w: []int{100, 20, 5, 0}}}
for i := range es {
runes := []rune{rune(i)}
es[i] = &entry{
runes: runes,
elems: weights,
}
weights = nextWeight(colltab.Primary, weights)
}
for i := 1; i < len(es); i++ {
es[i-1].next = es[i]
es[i].prev = es[i-1]
_, es[i-1].level = compareWeights(es[i-1].elems, es[i].elems)
}
es[0].exclude = true
es[0].logical = firstAnchor
es[len(es)-1].exclude = true
es[len(es)-1].logical = lastAnchor
return es
}
func TestNextIndexed(t *testing.T) {
const n = 5
es := makeList(n)
for i := int64(0); i < 1<<n; i++ {
mask := strconv.FormatInt(i+(1<<n), 2)
for i, c := range mask {
es[i].exclude = c == '1'
}
e := es[0]
for i, c := range mask {
if c == '0' {
e, _ = e.nextIndexed()
if e != es[i] {
t.Errorf("%d: expected entry %d; found %d", i, es[i].elems, e.elems)
}
}
}
if e, _ = e.nextIndexed(); e != nil {
t.Errorf("%d: expected nil entry; found %d", i, e.elems)
}
}
}
func TestRemove(t *testing.T) {
const n = 5
for i := int64(0); i < 1<<n; i++ {
es := makeList(n)
mask := strconv.FormatInt(i+(1<<n), 2)
for i, c := range mask {
if c == '0' {
es[i].remove()
}
}
e := es[0]
for i, c := range mask {
if c == '1' {
if e != es[i] {
t.Errorf("%d: expected entry %d; found %d", i, es[i].elems, e.elems)
}
e, _ = e.nextIndexed()
}
}
if e != nil {
t.Errorf("%d: expected nil entry; found %d", i, e.elems)
}
}
}
// nextPerm generates the next permutation of the array. The starting
// permutation is assumed to be a list of integers sorted in increasing order.
// It returns false if there are no more permutations left.
func nextPerm(a []int) bool {
i := len(a) - 2
for ; i >= 0; i-- {
if a[i] < a[i+1] {
break
}
}
if i < 0 {
return false
}
for j := len(a) - 1; j >= i; j-- {
if a[j] > a[i] {
a[i], a[j] = a[j], a[i]
break
}
}
for j := i + 1; j < (len(a)+i+1)/2; j++ {
a[j], a[len(a)+i-j] = a[len(a)+i-j], a[j]
}
return true
}
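// Illustrative usage sketch (not part of the original file): starting from a
// sorted slice, repeated calls enumerate all permutations in lexicographic
// order.
//
//	a := []int{1, 2, 3}
//	for ok := true; ok; ok = nextPerm(a) {
//		fmt.Println(a) // [1 2 3] [1 3 2] [2 1 3] [2 3 1] [3 1 2] [3 2 1]
//	}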
func TestInsertAfter(t *testing.T) {
const n = 5
orig := makeList(n)
perm := make([]int, n)
for i := range perm {
perm[i] = i + 1
}
for ok := true; ok; ok = nextPerm(perm) {
es := makeList(n)
last := es[0]
for _, i := range perm {
last.insertAfter(es[i])
last = es[i]
}
for _, e := range es {
e.elems = es[0].elems
}
e := es[0]
for _, i := range perm {
e, _ = e.nextIndexed()
if e.runes[0] != orig[i].runes[0] {
t.Errorf("%d:%d: expected entry %X; found %X", perm, i, orig[i].runes, e.runes)
break
}
}
}
}
func TestInsertBefore(t *testing.T) {
const n = 5
orig := makeList(n)
perm := make([]int, n)
for i := range perm {
perm[i] = i + 1
}
for ok := true; ok; ok = nextPerm(perm) {
es := makeList(n)
last := es[len(es)-1]
for _, i := range perm {
last.insertBefore(es[i])
last = es[i]
}
for _, e := range es {
e.elems = es[0].elems
}
e := es[0]
for i := n - 1; i >= 0; i-- {
e, _ = e.nextIndexed()
if e.runes[0] != rune(perm[i]) {
t.Errorf("%d:%d: expected entry %X; found %X", perm, i, orig[i].runes, e.runes)
break
}
}
}
}
type entryLessTest struct {
a, b *entry
res bool
}
var (
w1 = []rawCE{{w: []int{100, 20, 5, 5}}}
w2 = []rawCE{{w: []int{101, 20, 5, 5}}}
)
var entryLessTests = []entryLessTest{
{&entry{str: "a", elems: w1},
&entry{str: "a", elems: w1},
false,
},
{&entry{str: "a", elems: w1},
&entry{str: "a", elems: w2},
true,
},
{&entry{str: "a", elems: w1},
&entry{str: "b", elems: w1},
true,
},
{&entry{str: "a", elems: w2},
&entry{str: "a", elems: w1},
false,
},
{&entry{str: "c", elems: w1},
&entry{str: "b", elems: w1},
false,
},
{&entry{str: "a", elems: w1, logical: firstAnchor},
&entry{str: "a", elems: w1},
true,
},
{&entry{str: "a", elems: w1},
&entry{str: "b", elems: w1, logical: firstAnchor},
false,
},
{&entry{str: "b", elems: w1},
&entry{str: "a", elems: w1, logical: lastAnchor},
true,
},
{&entry{str: "a", elems: w1, logical: lastAnchor},
&entry{str: "c", elems: w1},
false,
},
}
func TestEntryLess(t *testing.T) {
for i, tt := range entryLessTests {
if res := entryLess(tt.a, tt.b); res != tt.res {
t.Errorf("%d: was %v; want %v", i, res, tt.res)
}
}
}

View File

@ -1,107 +0,0 @@
// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package build
import (
"bytes"
"fmt"
"testing"
)
// We take the smallest, largest and an arbitrary value for each
// of the UTF-8 sequence lengths.
var testRunes = []rune{
0x01, 0x0C, 0x7F, // 1-byte sequences
0x80, 0x100, 0x7FF, // 2-byte sequences
0x800, 0x999, 0xFFFF, // 3-byte sequences
0x10000, 0x10101, 0x10FFFF, // 4-byte sequences
0x200, 0x201, 0x202, 0x210, 0x215, // five entries in one sparse block
}
func makeTestTrie(t *testing.T) trie {
n := newNode()
for i, r := range testRunes {
n.insert(r, uint32(i))
}
idx := newTrieBuilder()
idx.addTrie(n)
tr, err := idx.generate()
if err != nil {
t.Errorf(err.Error())
}
return *tr
}
func TestGenerateTrie(t *testing.T) {
testdata := makeTestTrie(t)
buf := &bytes.Buffer{}
testdata.printArrays(buf, "test")
fmt.Fprintf(buf, "var testTrie = ")
testdata.printStruct(buf, &trieHandle{19, 0}, "test")
if output != buf.String() {
t.Error("output differs")
}
}
var output = `// testValues: 832 entries, 3328 bytes
// Block 2 is the null block.
var testValues = [832]uint32 {
// Block 0x0, offset 0x0
0x000c:0x00000001,
// Block 0x1, offset 0x40
0x007f:0x00000002,
// Block 0x2, offset 0x80
// Block 0x3, offset 0xc0
0x00c0:0x00000003,
// Block 0x4, offset 0x100
0x0100:0x00000004,
// Block 0x5, offset 0x140
0x0140:0x0000000c, 0x0141:0x0000000d, 0x0142:0x0000000e,
0x0150:0x0000000f,
0x0155:0x00000010,
// Block 0x6, offset 0x180
0x01bf:0x00000005,
// Block 0x7, offset 0x1c0
0x01c0:0x00000006,
// Block 0x8, offset 0x200
0x0219:0x00000007,
// Block 0x9, offset 0x240
0x027f:0x00000008,
// Block 0xa, offset 0x280
0x0280:0x00000009,
// Block 0xb, offset 0x2c0
0x02c1:0x0000000a,
// Block 0xc, offset 0x300
0x033f:0x0000000b,
}
// testLookup: 640 entries, 1280 bytes
// Block 0 is the null block.
var testLookup = [640]uint16 {
// Block 0x0, offset 0x0
// Block 0x1, offset 0x40
// Block 0x2, offset 0x80
// Block 0x3, offset 0xc0
0x0e0:0x05, 0x0e6:0x06,
// Block 0x4, offset 0x100
0x13f:0x07,
// Block 0x5, offset 0x140
0x140:0x08, 0x144:0x09,
// Block 0x6, offset 0x180
0x190:0x03,
// Block 0x7, offset 0x1c0
0x1ff:0x0a,
// Block 0x8, offset 0x200
0x20f:0x05,
// Block 0x9, offset 0x240
0x242:0x01, 0x244:0x02,
0x248:0x03,
0x25f:0x04,
0x260:0x01,
0x26f:0x02,
0x270:0x04, 0x274:0x06,
}
var testTrie = trie{ testLookup[1216:], testValues[0:], testLookup[:], testValues[:]}`

View File

@ -1,482 +0,0 @@
// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package collate
import (
"bytes"
"testing"
"golang.org/x/text/internal/colltab"
"golang.org/x/text/language"
)
type weightsTest struct {
opt opts
in, out ColElems
}
type opts struct {
lev int
alt alternateHandling
top int
backwards bool
caseLevel bool
}
// ignore returns an initialized boolean array based on the given Level.
// A negative value means using the default setting of quaternary.
func ignore(level colltab.Level) (ignore [colltab.NumLevels]bool) {
if level < 0 {
level = colltab.Quaternary
}
for i := range ignore {
ignore[i] = level < colltab.Level(i)
}
return ignore
}
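// Worked example derived from the definition above: ignore(colltab.Secondary)
// yields [false false true true true], i.e. primary and secondary weights are
// kept while tertiary and higher levels are ignored.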
func makeCE(w []int) colltab.Elem {
ce, err := colltab.MakeElem(w[0], w[1], w[2], uint8(w[3]))
if err != nil {
panic(err)
}
return ce
}
func (o opts) collator() *Collator {
c := &Collator{
options: options{
ignore: ignore(colltab.Level(o.lev - 1)),
alternate: o.alt,
backwards: o.backwards,
caseLevel: o.caseLevel,
variableTop: uint32(o.top),
},
}
return c
}
const (
maxQ = 0x1FFFFF
)
func wpq(p, q int) Weights {
return W(p, defaults.Secondary, defaults.Tertiary, q)
}
func wsq(s, q int) Weights {
return W(0, s, defaults.Tertiary, q)
}
func wq(q int) Weights {
return W(0, 0, 0, q)
}
var zero = W(0, 0, 0, 0)
var processTests = []weightsTest{
// Shifted
{ // simple sequence of non-variables
opt: opts{alt: altShifted, top: 100},
in: ColElems{W(200), W(300), W(400)},
out: ColElems{wpq(200, maxQ), wpq(300, maxQ), wpq(400, maxQ)},
},
{ // first is a variable
opt: opts{alt: altShifted, top: 250},
in: ColElems{W(200), W(300), W(400)},
out: ColElems{wq(200), wpq(300, maxQ), wpq(400, maxQ)},
},
{ // all but first are variable
opt: opts{alt: altShifted, top: 999},
in: ColElems{W(1000), W(200), W(300), W(400)},
out: ColElems{wpq(1000, maxQ), wq(200), wq(300), wq(400)},
},
{ // first is a modifier
opt: opts{alt: altShifted, top: 999},
in: ColElems{W(0, 10), W(1000)},
out: ColElems{wsq(10, maxQ), wpq(1000, maxQ)},
},
{ // primary ignorables
opt: opts{alt: altShifted, top: 250},
in: ColElems{W(200), W(0, 10), W(300), W(0, 15), W(400)},
out: ColElems{wq(200), zero, wpq(300, maxQ), wsq(15, maxQ), wpq(400, maxQ)},
},
{ // secondary ignorables
opt: opts{alt: altShifted, top: 250},
in: ColElems{W(200), W(0, 0, 10), W(300), W(0, 0, 15), W(400)},
out: ColElems{wq(200), zero, wpq(300, maxQ), W(0, 0, 15, maxQ), wpq(400, maxQ)},
},
{ // tertiary ignorables, no change
opt: opts{alt: altShifted, top: 250},
in: ColElems{W(200), zero, W(300), zero, W(400)},
out: ColElems{wq(200), zero, wpq(300, maxQ), zero, wpq(400, maxQ)},
},
// ShiftTrimmed (same as Shifted)
{ // simple sequence of non-variables
opt: opts{alt: altShiftTrimmed, top: 100},
in: ColElems{W(200), W(300), W(400)},
out: ColElems{wpq(200, maxQ), wpq(300, maxQ), wpq(400, maxQ)},
},
{ // first is a variable
opt: opts{alt: altShiftTrimmed, top: 250},
in: ColElems{W(200), W(300), W(400)},
out: ColElems{wq(200), wpq(300, maxQ), wpq(400, maxQ)},
},
{ // all but first are variable
opt: opts{alt: altShiftTrimmed, top: 999},
in: ColElems{W(1000), W(200), W(300), W(400)},
out: ColElems{wpq(1000, maxQ), wq(200), wq(300), wq(400)},
},
{ // first is a modifier
opt: opts{alt: altShiftTrimmed, top: 999},
in: ColElems{W(0, 10), W(1000)},
out: ColElems{wsq(10, maxQ), wpq(1000, maxQ)},
},
{ // primary ignorables
opt: opts{alt: altShiftTrimmed, top: 250},
in: ColElems{W(200), W(0, 10), W(300), W(0, 15), W(400)},
out: ColElems{wq(200), zero, wpq(300, maxQ), wsq(15, maxQ), wpq(400, maxQ)},
},
{ // secondary ignorables
opt: opts{alt: altShiftTrimmed, top: 250},
in: ColElems{W(200), W(0, 0, 10), W(300), W(0, 0, 15), W(400)},
out: ColElems{wq(200), zero, wpq(300, maxQ), W(0, 0, 15, maxQ), wpq(400, maxQ)},
},
{ // tertiary ignorables, no change
opt: opts{alt: altShiftTrimmed, top: 250},
in: ColElems{W(200), zero, W(300), zero, W(400)},
out: ColElems{wq(200), zero, wpq(300, maxQ), zero, wpq(400, maxQ)},
},
// Blanked
{ // simple sequence of non-variables
opt: opts{alt: altBlanked, top: 100},
in: ColElems{W(200), W(300), W(400)},
out: ColElems{W(200), W(300), W(400)},
},
{ // first is a variable
opt: opts{alt: altBlanked, top: 250},
in: ColElems{W(200), W(300), W(400)},
out: ColElems{zero, W(300), W(400)},
},
{ // all but first are variable
opt: opts{alt: altBlanked, top: 999},
in: ColElems{W(1000), W(200), W(300), W(400)},
out: ColElems{W(1000), zero, zero, zero},
},
{ // first is a modifier
opt: opts{alt: altBlanked, top: 999},
in: ColElems{W(0, 10), W(1000)},
out: ColElems{W(0, 10), W(1000)},
},
{ // primary ignorables
opt: opts{alt: altBlanked, top: 250},
in: ColElems{W(200), W(0, 10), W(300), W(0, 15), W(400)},
out: ColElems{zero, zero, W(300), W(0, 15), W(400)},
},
{ // secondary ignorables
opt: opts{alt: altBlanked, top: 250},
in: ColElems{W(200), W(0, 0, 10), W(300), W(0, 0, 15), W(400)},
out: ColElems{zero, zero, W(300), W(0, 0, 15), W(400)},
},
{ // tertiary ignorables, no change
opt: opts{alt: altBlanked, top: 250},
in: ColElems{W(200), zero, W(300), zero, W(400)},
out: ColElems{zero, zero, W(300), zero, W(400)},
},
// Non-ignorable: input is always equal to output.
{ // all but first are variable
opt: opts{alt: altNonIgnorable, top: 999},
in: ColElems{W(1000), W(200), W(300), W(400)},
out: ColElems{W(1000), W(200), W(300), W(400)},
},
{ // primary ignorables
opt: opts{alt: altNonIgnorable, top: 250},
in: ColElems{W(200), W(0, 10), W(300), W(0, 15), W(400)},
out: ColElems{W(200), W(0, 10), W(300), W(0, 15), W(400)},
},
{ // secondary ignorables
opt: opts{alt: altNonIgnorable, top: 250},
in: ColElems{W(200), W(0, 0, 10), W(300), W(0, 0, 15), W(400)},
out: ColElems{W(200), W(0, 0, 10), W(300), W(0, 0, 15), W(400)},
},
{ // tertiary ignorables, no change
opt: opts{alt: altNonIgnorable, top: 250},
in: ColElems{W(200), zero, W(300), zero, W(400)},
out: ColElems{W(200), zero, W(300), zero, W(400)},
},
}
func TestProcessWeights(t *testing.T) {
for i, tt := range processTests {
in := convertFromWeights(tt.in)
out := convertFromWeights(tt.out)
processWeights(tt.opt.alt, uint32(tt.opt.top), in)
for j, w := range in {
if w != out[j] {
t.Errorf("%d: Weights %d was %v; want %v", i, j, w, out[j])
}
}
}
}
type keyFromElemTest struct {
opt opts
in ColElems
out []byte
}
var defS = byte(defaults.Secondary)
var defT = byte(defaults.Tertiary)
const sep = 0 // separator byte
var keyFromElemTests = []keyFromElemTest{
{ // simple primary and secondary weights.
opts{alt: altShifted},
ColElems{W(0x200), W(0x7FFF), W(0, 0x30), W(0x100)},
[]byte{0x2, 0, 0x7F, 0xFF, 0x1, 0x00, // primary
sep, sep, 0, defS, 0, defS, 0, 0x30, 0, defS, // secondary
sep, sep, defT, defT, defT, defT, // tertiary
sep, 0xFF, 0xFF, 0xFF, 0xFF, // quaternary
},
},
{ // same as first, but with zero element that need to be removed
opts{alt: altShifted},
ColElems{W(0x200), zero, W(0x7FFF), W(0, 0x30), zero, W(0x100)},
[]byte{0x2, 0, 0x7F, 0xFF, 0x1, 0x00, // primary
sep, sep, 0, defS, 0, defS, 0, 0x30, 0, defS, // secondary
sep, sep, defT, defT, defT, defT, // tertiary
sep, 0xFF, 0xFF, 0xFF, 0xFF, // quaternary
},
},
{ // same as first, with large primary values
opts{alt: altShifted},
ColElems{W(0x200), W(0x8000), W(0, 0x30), W(0x12345)},
[]byte{0x2, 0, 0x80, 0x80, 0x00, 0x81, 0x23, 0x45, // primary
sep, sep, 0, defS, 0, defS, 0, 0x30, 0, defS, // secondary
sep, sep, defT, defT, defT, defT, // tertiary
sep, 0xFF, 0xFF, 0xFF, 0xFF, // quaternary
},
},
{ // same as first, but with the secondary level backwards
opts{alt: altShifted, backwards: true},
ColElems{W(0x200), W(0x7FFF), W(0, 0x30), W(0x100)},
[]byte{0x2, 0, 0x7F, 0xFF, 0x1, 0x00, // primary
sep, sep, 0, defS, 0, 0x30, 0, defS, 0, defS, // secondary
sep, sep, defT, defT, defT, defT, // tertiary
sep, 0xFF, 0xFF, 0xFF, 0xFF, // quaternary
},
},
{ // same as first, ignoring quaternary level
opts{alt: altShifted, lev: 3},
ColElems{W(0x200), zero, W(0x7FFF), W(0, 0x30), zero, W(0x100)},
[]byte{0x2, 0, 0x7F, 0xFF, 0x1, 0x00, // primary
sep, sep, 0, defS, 0, defS, 0, 0x30, 0, defS, // secondary
sep, sep, defT, defT, defT, defT, // tertiary
},
},
{ // same as first, ignoring tertiary level
opts{alt: altShifted, lev: 2},
ColElems{W(0x200), zero, W(0x7FFF), W(0, 0x30), zero, W(0x100)},
[]byte{0x2, 0, 0x7F, 0xFF, 0x1, 0x00, // primary
sep, sep, 0, defS, 0, defS, 0, 0x30, 0, defS, // secondary
},
},
{ // same as first, ignoring secondary level
opts{alt: altShifted, lev: 1},
ColElems{W(0x200), zero, W(0x7FFF), W(0, 0x30), zero, W(0x100)},
[]byte{0x2, 0, 0x7F, 0xFF, 0x1, 0x00},
},
{ // simple primary and secondary weights.
opts{alt: altShiftTrimmed, top: 0x250},
ColElems{W(0x300), W(0x200), W(0x7FFF), W(0, 0x30), W(0x800)},
[]byte{0x3, 0, 0x7F, 0xFF, 0x8, 0x00, // primary
sep, sep, 0, defS, 0, defS, 0, 0x30, 0, defS, // secondary
sep, sep, defT, defT, defT, defT, // tertiary
sep, 0xFF, 0x2, 0, // quaternary
},
},
{ // as first, primary with case level enabled
opts{alt: altShifted, lev: 1, caseLevel: true},
ColElems{W(0x200), W(0x7FFF), W(0, 0x30), W(0x100)},
[]byte{0x2, 0, 0x7F, 0xFF, 0x1, 0x00, // primary
sep, sep, // secondary
sep, sep, defT, defT, defT, defT, // tertiary
},
},
}
func TestKeyFromElems(t *testing.T) {
buf := Buffer{}
for i, tt := range keyFromElemTests {
buf.Reset()
in := convertFromWeights(tt.in)
processWeights(tt.opt.alt, uint32(tt.opt.top), in)
tt.opt.collator().keyFromElems(&buf, in)
res := buf.key
if len(res) != len(tt.out) {
t.Errorf("%d: len(ws) was %d; want %d (%X should be %X)", i, len(res), len(tt.out), res, tt.out)
}
n := len(res)
if len(tt.out) < n {
n = len(tt.out)
}
for j, c := range res[:n] {
if c != tt.out[j] {
t.Errorf("%d: byte %d was %X; want %X", i, j, c, tt.out[j])
}
}
}
}
func TestGetColElems(t *testing.T) {
for i, tt := range appendNextTests {
c, err := makeTable(tt.in)
if err != nil {
// error is reported in TestAppendNext
continue
}
// Create one large test per table
str := make([]byte, 0, 4000)
out := ColElems{}
for len(str) < 3000 {
for _, chk := range tt.chk {
str = append(str, chk.in[:chk.n]...)
out = append(out, chk.out...)
}
}
for j, chk := range append(tt.chk, check{string(str), len(str), out}) {
out := convertFromWeights(chk.out)
ce := c.getColElems([]byte(chk.in)[:chk.n])
if len(ce) != len(out) {
t.Errorf("%d:%d: len(ws) was %d; want %d", i, j, len(ce), len(out))
continue
}
cnt := 0
for k, w := range ce {
w, _ = colltab.MakeElem(w.Primary(), w.Secondary(), int(w.Tertiary()), 0)
if w != out[k] {
t.Errorf("%d:%d: Weights %d was %X; want %X", i, j, k, w, out[k])
cnt++
}
if cnt > 10 {
break
}
}
}
}
}
type keyTest struct {
in string
out []byte
}
var keyTests = []keyTest{
{"abc",
[]byte{0, 100, 0, 200, 1, 44, 0, 0, 0, 32, 0, 32, 0, 32, 0, 0, 2, 2, 2, 0, 255, 255, 255},
},
{"a\u0301",
[]byte{0, 102, 0, 0, 0, 32, 0, 0, 2, 0, 255},
},
{"aaaaa",
[]byte{0, 100, 0, 100, 0, 100, 0, 100, 0, 100, 0, 0,
0, 32, 0, 32, 0, 32, 0, 32, 0, 32, 0, 0,
2, 2, 2, 2, 2, 0,
255, 255, 255, 255, 255,
},
},
// Issue 16391: incomplete rune at end of UTF-8 sequence.
{"\xc2", []byte{133, 255, 253, 0, 0, 0, 32, 0, 0, 2, 0, 255}},
{"\xc2a", []byte{133, 255, 253, 0, 100, 0, 0, 0, 32, 0, 32, 0, 0, 2, 2, 0, 255, 255}},
}
func TestKey(t *testing.T) {
c, _ := makeTable(appendNextTests[4].in)
c.alternate = altShifted
c.ignore = ignore(colltab.Quaternary)
buf := Buffer{}
keys1 := [][]byte{}
keys2 := [][]byte{}
for _, tt := range keyTests {
keys1 = append(keys1, c.Key(&buf, []byte(tt.in)))
keys2 = append(keys2, c.KeyFromString(&buf, tt.in))
}
// Separate generation from testing to ensure buffers are not overwritten.
for i, tt := range keyTests {
if !bytes.Equal(keys1[i], tt.out) {
t.Errorf("%d: Key(%q) = %d; want %d", i, tt.in, keys1[i], tt.out)
}
if !bytes.Equal(keys2[i], tt.out) {
t.Errorf("%d: KeyFromString(%q) = %d; want %d", i, tt.in, keys2[i], tt.out)
}
}
}
type compareTest struct {
a, b string
res int // comparison result
}
var compareTests = []compareTest{
{"a\u0301", "a", 1},
{"a\u0301b", "ab", 1},
{"a", "a\u0301", -1},
{"ab", "a\u0301b", -1},
{"bc", "a\u0301c", 1},
{"ab", "aB", -1},
{"a\u0301", "a\u0301", 0},
{"a", "a", 0},
// Only clip prefixes of whole runes.
{"\u302E", "\u302F", 1},
// Don't clip prefixes when last rune of prefix may be part of contraction.
{"a\u035E", "a\u0301\u035F", -1},
{"a\u0301\u035Fb", "a\u0301\u035F", -1},
}
func TestCompare(t *testing.T) {
c, _ := makeTable(appendNextTests[4].in)
for i, tt := range compareTests {
if res := c.Compare([]byte(tt.a), []byte(tt.b)); res != tt.res {
t.Errorf("%d: Compare(%q, %q) == %d; want %d", i, tt.a, tt.b, res, tt.res)
}
if res := c.CompareString(tt.a, tt.b); res != tt.res {
t.Errorf("%d: CompareString(%q, %q) == %d; want %d", i, tt.a, tt.b, res, tt.res)
}
}
}
func TestNumeric(t *testing.T) {
c := New(language.English, Loose, Numeric)
for i, tt := range []struct {
a, b string
want int
}{
{"1", "2", -1},
{"2", "12", -1},
{"", "", -1}, // Fullwidth is sorted as usual.
{"₂", "₁₂", 1}, // Subscript is not sorted as numbers.
{"②", "①②", 1}, // Circled is not sorted as numbers.
{ // Imperial Aramaic is not sorted as a number.
"\U00010859",
"\U00010858\U00010859",
1,
},
{"12", "2", 1},
{"A-1", "A-2", -1},
{"A-2", "A-12", -1},
{"A-12", "A-2", 1},
{"A-0001", "A-1", 0},
} {
if got := c.CompareString(tt.a, tt.b); got != tt.want {
t.Errorf("%d: CompareString(%s, %s) = %d; want %d", i, tt.a, tt.b, got, tt.want)
}
}
}

View File

@ -1,51 +0,0 @@
// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package collate
// Export for testing.
// TODO: no longer necessary. Remove at some point.
import (
"fmt"
"golang.org/x/text/internal/colltab"
)
const (
defaultSecondary = 0x20
defaultTertiary = 0x2
)
type Weights struct {
Primary, Secondary, Tertiary, Quaternary int
}
func W(ce ...int) Weights {
w := Weights{ce[0], defaultSecondary, defaultTertiary, 0}
if len(ce) > 1 {
w.Secondary = ce[1]
}
if len(ce) > 2 {
w.Tertiary = ce[2]
}
if len(ce) > 3 {
w.Quaternary = ce[3]
}
return w
}
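// Worked example derived from the code above: W(100) fills in the defaults and
// yields Weights{Primary: 100, Secondary: 0x20, Tertiary: 0x2, Quaternary: 0},
// while W(100, 30, 5, 1) sets all four levels explicitly.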
func (w Weights) String() string {
return fmt.Sprintf("[%X.%X.%X.%X]", w.Primary, w.Secondary, w.Tertiary, w.Quaternary)
}
func convertFromWeights(ws []Weights) []colltab.Elem {
out := make([]colltab.Elem, len(ws))
for i, w := range ws {
out[i], _ = colltab.MakeElem(w.Primary, w.Secondary, w.Tertiary, 0)
if out[i] == colltab.Ignore && w.Quaternary > 0 {
out[i] = colltab.MakeQuaternary(w.Quaternary)
}
}
return out
}

View File

@ -1,209 +0,0 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package collate
import (
"reflect"
"strings"
"testing"
"golang.org/x/text/internal/colltab"
"golang.org/x/text/language"
)
var (
defaultIgnore = ignore(colltab.Tertiary)
defaultTable = getTable(locales[0])
)
func TestOptions(t *testing.T) {
for i, tt := range []struct {
in []Option
out options
}{
0: {
out: options{
ignore: defaultIgnore,
},
},
1: {
in: []Option{IgnoreDiacritics},
out: options{
ignore: [colltab.NumLevels]bool{false, true, false, true, true},
},
},
2: {
in: []Option{IgnoreCase, IgnoreDiacritics},
out: options{
ignore: ignore(colltab.Primary),
},
},
3: {
in: []Option{ignoreDiacritics, IgnoreWidth},
out: options{
ignore: ignore(colltab.Primary),
caseLevel: true,
},
},
4: {
in: []Option{IgnoreWidth, ignoreDiacritics},
out: options{
ignore: ignore(colltab.Primary),
caseLevel: true,
},
},
5: {
in: []Option{IgnoreCase, IgnoreWidth},
out: options{
ignore: ignore(colltab.Secondary),
},
},
6: {
in: []Option{IgnoreCase, IgnoreWidth, Loose},
out: options{
ignore: ignore(colltab.Primary),
},
},
7: {
in: []Option{Force, IgnoreCase, IgnoreWidth, Loose},
out: options{
ignore: [colltab.NumLevels]bool{false, true, true, true, false},
},
},
8: {
in: []Option{IgnoreDiacritics, IgnoreCase},
out: options{
ignore: ignore(colltab.Primary),
},
},
9: {
in: []Option{Numeric},
out: options{
ignore: defaultIgnore,
numeric: true,
},
},
10: {
in: []Option{OptionsFromTag(language.MustParse("und-u-ks-level1"))},
out: options{
ignore: ignore(colltab.Primary),
},
},
11: {
in: []Option{OptionsFromTag(language.MustParse("und-u-ks-level4"))},
out: options{
ignore: ignore(colltab.Quaternary),
},
},
12: {
in: []Option{OptionsFromTag(language.MustParse("und-u-ks-identic"))},
out: options{},
},
13: {
in: []Option{
OptionsFromTag(language.MustParse("und-u-kn-true-kb-true-kc-true")),
},
out: options{
ignore: defaultIgnore,
caseLevel: true,
backwards: true,
numeric: true,
},
},
14: {
in: []Option{
OptionsFromTag(language.MustParse("und-u-kn-true-kb-true-kc-true")),
OptionsFromTag(language.MustParse("und-u-kn-false-kb-false-kc-false")),
},
out: options{
ignore: defaultIgnore,
},
},
15: {
in: []Option{
OptionsFromTag(language.MustParse("und-u-kn-true-kb-true-kc-true")),
OptionsFromTag(language.MustParse("und-u-kn-foo-kb-foo-kc-foo")),
},
out: options{
ignore: defaultIgnore,
caseLevel: true,
backwards: true,
numeric: true,
},
},
16: { // Normal options take precedence over tag options.
in: []Option{
Numeric, IgnoreCase,
OptionsFromTag(language.MustParse("und-u-kn-false-kc-true")),
},
out: options{
ignore: ignore(colltab.Secondary),
caseLevel: false,
numeric: true,
},
},
17: {
in: []Option{
OptionsFromTag(language.MustParse("und-u-ka-shifted")),
},
out: options{
ignore: defaultIgnore,
alternate: altShifted,
},
},
18: {
in: []Option{
OptionsFromTag(language.MustParse("und-u-ka-blanked")),
},
out: options{
ignore: defaultIgnore,
alternate: altBlanked,
},
},
19: {
in: []Option{
OptionsFromTag(language.MustParse("und-u-ka-posix")),
},
out: options{
ignore: defaultIgnore,
alternate: altShiftTrimmed,
},
},
} {
c := newCollator(defaultTable)
c.t = nil
c.variableTop = 0
c.f = 0
c.setOptions(tt.in)
if !reflect.DeepEqual(c.options, tt.out) {
t.Errorf("%d: got %v; want %v", i, c.options, tt.out)
}
}
}
func TestAlternateSortTypes(t *testing.T) {
testCases := []struct {
lang string
in []string
want []string
}{{
lang: "zh,cmn,zh-Hant-u-co-pinyin,zh-HK-u-co-pinyin,zh-pinyin",
in: []string{"爸爸", "妈妈", "儿子", "女儿"},
want: []string{"爸爸", "儿子", "妈妈", "女儿"},
}, {
lang: "zh-Hant,zh-u-co-stroke,zh-Hant-u-co-stroke",
in: []string{"爸爸", "妈妈", "儿子", "女儿"},
want: []string{"儿子", "女儿", "妈妈", "爸爸"},
}}
for _, tc := range testCases {
for _, tag := range strings.Split(tc.lang, ",") {
got := append([]string{}, tc.in...)
New(language.MustParse(tag)).SortStrings(got)
if !reflect.DeepEqual(got, tc.want) {
t.Errorf("New(%s).SortStrings(%v) = %v; want %v", tag, tc.in, got, tc.want)
}
}
}
}

View File

@ -1,230 +0,0 @@
// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package collate
import (
"archive/zip"
"bufio"
"bytes"
"flag"
"io"
"io/ioutil"
"log"
"path"
"regexp"
"strconv"
"strings"
"testing"
"unicode/utf8"
"golang.org/x/text/collate/build"
"golang.org/x/text/internal/gen"
"golang.org/x/text/language"
)
var long = flag.Bool("long", false,
"run time-consuming tests, such as tests that fetch data online")
// This regression test runs tests for the test files in CollationTest.zip
// (taken from http://www.unicode.org/Public/UCA/<gen.UnicodeVersion()>/).
//
// The test files have the following form:
// # header
// 0009 0021; # ('\u0009') <CHARACTER TABULATION> [| | | 0201 025E]
// 0009 003F; # ('\u0009') <CHARACTER TABULATION> [| | | 0201 0263]
// 000A 0021; # ('\u000A') <LINE FEED (LF)> [| | | 0202 025E]
// 000A 003F; # ('\u000A') <LINE FEED (LF)> [| | | 0202 0263]
//
// The part before the semicolon is the hex representation of a sequence
// of runes. After the hash mark is a comment. The strings
// represented by the rune sequences appear in the file in sorted order, as
// defined by the DUCET.
type Test struct {
name string
str [][]byte
comment []string
}
var versionRe = regexp.MustCompile(`# UCA Version: (.*)\n?$`)
var testRe = regexp.MustCompile(`^([\dA-F ]+);.*# (.*)\n?$`)
func TestCollation(t *testing.T) {
if !gen.IsLocal() && !*long {
t.Skip("skipping test to prevent downloading; to run use -long or use -local to specify a local source")
}
t.Skip("must first update to new file format to support test")
for _, test := range loadTestData() {
doTest(t, test)
}
}
func Error(e error) {
if e != nil {
log.Fatal(e)
}
}
// parseUCA parses a Default Unicode Collation Element Table of the format
// specified in http://www.unicode.org/reports/tr10/#File_Format.
func parseUCA(builder *build.Builder) {
r := gen.OpenUnicodeFile("UCA", "", "allkeys.txt")
defer r.Close()
input := bufio.NewReader(r)
colelem := regexp.MustCompile(`\[([.*])([0-9A-F.]+)\]`)
for i := 1; true; i++ {
l, prefix, err := input.ReadLine()
if err == io.EOF {
break
}
Error(err)
line := string(l)
if prefix {
log.Fatalf("%d: buffer overflow", i)
}
if len(line) == 0 || line[0] == '#' {
continue
}
if line[0] == '@' {
if strings.HasPrefix(line[1:], "version ") {
if v := strings.Split(line[1:], " ")[1]; v != gen.UnicodeVersion() {
log.Fatalf("incompatible version %s; want %s", v, gen.UnicodeVersion())
}
}
} else {
// parse entries
part := strings.Split(line, " ; ")
if len(part) != 2 {
log.Fatalf("%d: production rule without ';': %v", i, line)
}
lhs := []rune{}
for _, v := range strings.Split(part[0], " ") {
if v != "" {
lhs = append(lhs, rune(convHex(i, v)))
}
}
vars := []int{}
rhs := [][]int{}
for i, m := range colelem.FindAllStringSubmatch(part[1], -1) {
if m[1] == "*" {
vars = append(vars, i)
}
elem := []int{}
for _, h := range strings.Split(m[2], ".") {
elem = append(elem, convHex(i, h))
}
rhs = append(rhs, elem)
}
builder.Add(lhs, rhs, vars)
}
}
}
func convHex(line int, s string) int {
r, e := strconv.ParseInt(s, 16, 32)
if e != nil {
log.Fatalf("%d: %v", line, e)
}
return int(r)
}
func loadTestData() []Test {
f := gen.OpenUnicodeFile("UCA", "", "CollationTest.zip")
buffer, err := ioutil.ReadAll(f)
f.Close()
Error(err)
archive, err := zip.NewReader(bytes.NewReader(buffer), int64(len(buffer)))
Error(err)
tests := []Test{}
for _, f := range archive.File {
// Skip the short versions, which are simply duplicates of the long versions.
if strings.Contains(f.Name, "SHORT") || f.FileInfo().IsDir() {
continue
}
ff, err := f.Open()
Error(err)
defer ff.Close()
scanner := bufio.NewScanner(ff)
test := Test{name: path.Base(f.Name)}
for scanner.Scan() {
line := scanner.Text()
if len(line) <= 1 || line[0] == '#' {
if m := versionRe.FindStringSubmatch(line); m != nil {
if m[1] != gen.UnicodeVersion() {
log.Printf("warning:%s: version is %s; want %s", f.Name, m[1], gen.UnicodeVersion())
}
}
continue
}
m := testRe.FindStringSubmatch(line)
if m == nil || len(m) < 3 {
log.Fatalf(`Failed to parse: "%s" result: %#v`, line, m)
}
str := []byte{}
// In the regression test data (unpaired) surrogates are assigned a weight
// corresponding to their code point value. However, utf8.DecodeRune,
// which is used to compute the implicit weight, assigns FFFD to surrogates.
// We therefore skip tests with surrogates. This skips about 35 entries
// per test.
valid := true
for _, split := range strings.Split(m[1], " ") {
r, err := strconv.ParseUint(split, 16, 64)
Error(err)
valid = valid && utf8.ValidRune(rune(r))
str = append(str, string(rune(r))...)
}
if valid {
test.str = append(test.str, str)
test.comment = append(test.comment, m[2])
}
}
if scanner.Err() != nil {
log.Fatal(scanner.Err())
}
tests = append(tests, test)
}
return tests
}
var errorCount int
func runes(b []byte) []rune {
return []rune(string(b))
}
var shifted = language.MustParse("und-u-ka-shifted-ks-level4")
func doTest(t *testing.T, tc Test) {
bld := build.NewBuilder()
parseUCA(bld)
w, err := bld.Build()
Error(err)
var tag language.Tag
if !strings.Contains(tc.name, "NON_IGNOR") {
tag = shifted
}
c := NewFromTable(w, OptionsFromTag(tag))
b := &Buffer{}
prev := tc.str[0]
for i := 1; i < len(tc.str); i++ {
b.Reset()
s := tc.str[i]
ka := c.Key(b, prev)
kb := c.Key(b, s)
if r := bytes.Compare(ka, kb); r == 1 {
t.Errorf("%s:%d: Key(%.4X) < Key(%.4X) (%X < %X) == %d; want -1 or 0", tc.name, i, []rune(string(prev)), []rune(string(s)), ka, kb, r)
prev = s
continue
}
if r := c.Compare(prev, s); r == 1 {
t.Errorf("%s:%d: Compare(%.4X, %.4X) == %d; want -1 or 0", tc.name, i, runes(prev), runes(s), r)
}
if r := c.Compare(s, prev); r == -1 {
t.Errorf("%s:%d: Compare(%.4X, %.4X) == %d; want 1 or 0", tc.name, i, runes(s), runes(prev), r)
}
prev = s
}
}

View File

@ -1,55 +0,0 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package collate_test
import (
"fmt"
"testing"
"golang.org/x/text/collate"
"golang.org/x/text/language"
)
func ExampleCollator_Strings() {
c := collate.New(language.Und)
strings := []string{
"ad",
"ab",
"äb",
"ac",
}
c.SortStrings(strings)
fmt.Println(strings)
// Output: [ab äb ac ad]
}
type sorter []string
func (s sorter) Len() int {
return len(s)
}
func (s sorter) Swap(i, j int) {
s[j], s[i] = s[i], s[j]
}
func (s sorter) Bytes(i int) []byte {
return []byte(s[i])
}
func TestSort(t *testing.T) {
c := collate.New(language.English)
strings := []string{
"bcd",
"abc",
"ddd",
}
c.Sort(sorter(strings))
res := fmt.Sprint(strings)
want := "[abc bcd ddd]"
if res != want {
t.Errorf("found %s; want %s", res, want)
}
}

View File

@ -1,291 +0,0 @@
// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package collate
import (
"testing"
"golang.org/x/text/collate/build"
"golang.org/x/text/internal/colltab"
"golang.org/x/text/unicode/norm"
)
type ColElems []Weights
type input struct {
str string
ces [][]int
}
type check struct {
in string
n int
out ColElems
}
type tableTest struct {
in []input
chk []check
}
func w(ce ...int) Weights {
return W(ce...)
}
var defaults = w(0)
func pt(p, t int) []int {
return []int{p, defaults.Secondary, t}
}
func makeTable(in []input) (*Collator, error) {
b := build.NewBuilder()
for _, r := range in {
if e := b.Add([]rune(r.str), r.ces, nil); e != nil {
panic(e)
}
}
t, err := b.Build()
if err != nil {
return nil, err
}
return NewFromTable(t), nil
}
// modSeq holds a sequence of modifiers in increasing order of CCC long enough
// to cause a segment overflow if not handled correctly. The last rune in this
// list has a CCC of 214.
var modSeq = []rune{
0x05B1, 0x05B2, 0x05B3, 0x05B4, 0x05B5, 0x05B6, 0x05B7, 0x05B8, 0x05B9, 0x05BB,
0x05BC, 0x05BD, 0x05BF, 0x05C1, 0x05C2, 0xFB1E, 0x064B, 0x064C, 0x064D, 0x064E,
0x064F, 0x0650, 0x0651, 0x0652, 0x0670, 0x0711, 0x0C55, 0x0C56, 0x0E38, 0x0E48,
0x0EB8, 0x0EC8, 0x0F71, 0x0F72, 0x0F74, 0x0321, 0x1DCE,
}
var mods []input
var modW = func() ColElems {
ws := ColElems{}
for _, r := range modSeq {
rune := norm.NFC.PropertiesString(string(r))
ws = append(ws, w(0, int(rune.CCC())))
mods = append(mods, input{string(r), [][]int{{0, int(rune.CCC())}}})
}
return ws
}()
var appendNextTests = []tableTest{
{ // test getWeights
[]input{
{"a", [][]int{{100}}},
{"b", [][]int{{105}}},
{"c", [][]int{{110}}},
{"ß", [][]int{{120}}},
},
[]check{
{"a", 1, ColElems{w(100)}},
{"b", 1, ColElems{w(105)}},
{"c", 1, ColElems{w(110)}},
{"d", 1, ColElems{w(0x50064)}},
{"ab", 1, ColElems{w(100)}},
{"bc", 1, ColElems{w(105)}},
{"dd", 1, ColElems{w(0x50064)}},
{"ß", 2, ColElems{w(120)}},
},
},
{ // test expansion
[]input{
{"u", [][]int{{100}}},
{"U", [][]int{{100}, {0, 25}}},
{"w", [][]int{{100}, {100}}},
{"W", [][]int{{100}, {0, 25}, {100}, {0, 25}}},
},
[]check{
{"u", 1, ColElems{w(100)}},
{"U", 1, ColElems{w(100), w(0, 25)}},
{"w", 1, ColElems{w(100), w(100)}},
{"W", 1, ColElems{w(100), w(0, 25), w(100), w(0, 25)}},
},
},
{ // test decompose
[]input{
{"D", [][]int{pt(104, 8)}},
{"z", [][]int{pt(130, 8)}},
{"\u030C", [][]int{{0, 40}}}, // Caron
{"\u01C5", [][]int{pt(104, 9), pt(130, 4), {0, 40, 0x1F}}}, // Dž = D+z+caron
},
[]check{
{"\u01C5", 2, ColElems{w(pt(104, 9)...), w(pt(130, 4)...), w(0, 40, 0x1F)}},
},
},
{ // test basic contraction
[]input{
{"a", [][]int{{100}}},
{"ab", [][]int{{101}}},
{"aab", [][]int{{101}, {101}}},
{"abc", [][]int{{102}}},
{"b", [][]int{{200}}},
{"c", [][]int{{300}}},
{"d", [][]int{{400}}},
},
[]check{
{"a", 1, ColElems{w(100)}},
{"aa", 1, ColElems{w(100)}},
{"aac", 1, ColElems{w(100)}},
{"d", 1, ColElems{w(400)}},
{"ab", 2, ColElems{w(101)}},
{"abb", 2, ColElems{w(101)}},
{"aab", 3, ColElems{w(101), w(101)}},
{"aaba", 3, ColElems{w(101), w(101)}},
{"abc", 3, ColElems{w(102)}},
{"abcd", 3, ColElems{w(102)}},
},
},
{ // test discontinuous contraction
append(mods, []input{
// modifiers; secondary weight equals ccc
{"\u0316", [][]int{{0, 220}}},
{"\u0317", [][]int{{0, 220}, {0, 220}}},
{"\u302D", [][]int{{0, 222}}},
{"\u302E", [][]int{{0, 225}}}, // used as starter
{"\u302F", [][]int{{0, 224}}}, // used as starter
{"\u18A9", [][]int{{0, 228}}},
{"\u0300", [][]int{{0, 230}}},
{"\u0301", [][]int{{0, 230}}},
{"\u0315", [][]int{{0, 232}}},
{"\u031A", [][]int{{0, 232}}},
{"\u035C", [][]int{{0, 233}}},
{"\u035F", [][]int{{0, 233}}},
{"\u035D", [][]int{{0, 234}}},
{"\u035E", [][]int{{0, 234}}},
{"\u0345", [][]int{{0, 240}}},
// starters
{"a", [][]int{{100}}},
{"b", [][]int{{200}}},
{"c", [][]int{{300}}},
{"\u03B1", [][]int{{900}}},
{"\x01", [][]int{{0, 0, 0, 0}}},
// contractions
{"a\u0300", [][]int{{101}}},
{"a\u0301", [][]int{{102}}},
{"a\u035E", [][]int{{110}}},
{"a\u035Eb\u035E", [][]int{{115}}},
{"ac\u035Eaca\u035E", [][]int{{116}}},
{"a\u035Db\u035D", [][]int{{117}}},
{"a\u0301\u035Db", [][]int{{120}}},
{"a\u0301\u035F", [][]int{{121}}},
{"a\u0301\u035Fb", [][]int{{119}}},
{"\u03B1\u0345", [][]int{{901}, {902}}},
{"\u302E\u302F", [][]int{{0, 131}, {0, 131}}},
{"\u302F\u18A9", [][]int{{0, 130}}},
}...),
[]check{
{"a\x01\u0300", 1, ColElems{w(100)}},
{"ab", 1, ColElems{w(100)}}, // closing segment
{"a\u0316\u0300b", 5, ColElems{w(101), w(0, 220)}}, // closing segment
{"a\u0316\u0300", 5, ColElems{w(101), w(0, 220)}}, // no closing segment
{"a\u0316\u0300\u035Cb", 5, ColElems{w(101), w(0, 220)}}, // completes before segment end
{"a\u0316\u0300\u035C", 5, ColElems{w(101), w(0, 220)}}, // completes before segment end
{"a\u0316\u0301b", 5, ColElems{w(102), w(0, 220)}}, // closing segment
{"a\u0316\u0301", 5, ColElems{w(102), w(0, 220)}}, // no closing segment
{"a\u0316\u0301\u035Cb", 5, ColElems{w(102), w(0, 220)}}, // completes before segment end
{"a\u0316\u0301\u035C", 5, ColElems{w(102), w(0, 220)}}, // completes before segment end
// match blocked by modifier with same ccc
{"a\u0301\u0315\u031A\u035Fb", 3, ColElems{w(102)}},
// multiple gaps
{"a\u0301\u035Db", 6, ColElems{w(120)}},
{"a\u0301\u035F", 5, ColElems{w(121)}},
{"a\u0301\u035Fb", 6, ColElems{w(119)}},
{"a\u0316\u0301\u035F", 7, ColElems{w(121), w(0, 220)}},
{"a\u0301\u0315\u035Fb", 7, ColElems{w(121), w(0, 232)}},
{"a\u0316\u0301\u0315\u035Db", 5, ColElems{w(102), w(0, 220)}},
{"a\u0316\u0301\u0315\u035F", 9, ColElems{w(121), w(0, 220), w(0, 232)}},
{"a\u0316\u0301\u0315\u035Fb", 9, ColElems{w(121), w(0, 220), w(0, 232)}},
{"a\u0316\u0301\u0315\u035F\u035D", 9, ColElems{w(121), w(0, 220), w(0, 232)}},
{"a\u0316\u0301\u0315\u035F\u035Db", 9, ColElems{w(121), w(0, 220), w(0, 232)}},
// handling of segment overflow
{ // just fits within segment
"a" + string(modSeq[:30]) + "\u0301",
3 + len(string(modSeq[:30])),
append(ColElems{w(102)}, modW[:30]...),
},
{"a" + string(modSeq[:31]) + "\u0301", 1, ColElems{w(100)}}, // overflow
{"a" + string(modSeq) + "\u0301", 1, ColElems{w(100)}},
{ // just fits within segment with two interstitial runes
"a" + string(modSeq[:28]) + "\u0301\u0315\u035F",
7 + len(string(modSeq[:28])),
append(append(ColElems{w(121)}, modW[:28]...), w(0, 232)),
},
{ // second half does not fit within segment
"a" + string(modSeq[:29]) + "\u0301\u0315\u035F",
3 + len(string(modSeq[:29])),
append(ColElems{w(102)}, modW[:29]...),
},
// discontinuity can only occur in last normalization segment
{"a\u035Eb\u035E", 6, ColElems{w(115)}},
{"a\u0316\u035Eb\u035E", 5, ColElems{w(110), w(0, 220)}},
{"a\u035Db\u035D", 6, ColElems{w(117)}},
{"a\u0316\u035Db\u035D", 1, ColElems{w(100)}},
{"a\u035Eb\u0316\u035E", 8, ColElems{w(115), w(0, 220)}},
{"a\u035Db\u0316\u035D", 8, ColElems{w(117), w(0, 220)}},
{"ac\u035Eaca\u035E", 9, ColElems{w(116)}},
{"a\u0316c\u035Eaca\u035E", 1, ColElems{w(100)}},
{"ac\u035Eac\u0316a\u035E", 1, ColElems{w(100)}},
// expanding contraction
{"\u03B1\u0345", 4, ColElems{w(901), w(902)}},
// Theoretical possibilities
// contraction within a gap
{"a\u302F\u18A9\u0301", 9, ColElems{w(102), w(0, 130)}},
// expansion within a gap
{"a\u0317\u0301", 5, ColElems{w(102), w(0, 220), w(0, 220)}},
// repeating CCC blocks last modifier
{"a\u302E\u302F\u0301", 1, ColElems{w(100)}},
// The trailing combining characters (with lower CCC) should block the first one.
// TODO: make the following pass.
// {"a\u035E\u0316\u0316", 1, ColElems{w(100)}},
{"a\u035F\u035Eb", 5, ColElems{w(110), w(0, 233)}},
// Last combiner should match after normalization.
// TODO: make the following pass.
// {"a\u035D\u0301", 3, ColElems{w(102), w(0, 234)}},
// The first combiner is blocking the second one as they have the same CCC.
{"a\u035D\u035Eb", 1, ColElems{w(100)}},
},
},
}
func TestAppendNext(t *testing.T) {
for i, tt := range appendNextTests {
c, err := makeTable(tt.in)
if err != nil {
t.Errorf("%d: error creating table: %v", i, err)
continue
}
for j, chk := range tt.chk {
ws, n := c.t.AppendNext(nil, []byte(chk.in))
if n != chk.n {
t.Errorf("%d:%d: bytes consumed was %d; want %d", i, j, n, chk.n)
}
out := convertFromWeights(chk.out)
if len(ws) != len(out) {
t.Errorf("%d:%d: len(ws) was %d; want %d (%X vs %X)\n%X", i, j, len(ws), len(out), ws, out, chk.in)
continue
}
for k, w := range ws {
w, _ = colltab.MakeElem(w.Primary(), w.Secondary(), int(w.Tertiary()), 0)
if w != out[k] {
t.Errorf("%d:%d: Weights %d was %X; want %X", i, j, k, w, out[k])
}
}
}
}
}

View File

@ -1,7 +0,0 @@
# Copyright 2012 The Go Authors. All rights reserved.
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file.
chars:
go run ../../maketables.go -tables=chars -package=main > chars.go
gofmt -w -s chars.go

File diff suppressed because one or more lines are too long

View File

@ -1,97 +0,0 @@
// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package main
import (
"log"
"unicode/utf16"
"golang.org/x/text/collate"
"golang.org/x/text/language"
)
// Input holds an input string in both UTF-8 and UTF-16 format.
type Input struct {
index int // used for restoring to original random order
UTF8 []byte
UTF16 []uint16
key []byte // used for sorting
}
func (i Input) String() string {
return string(i.UTF8)
}
func makeInput(s8 []byte, s16 []uint16) Input {
return Input{UTF8: s8, UTF16: s16}
}
func makeInputString(s string) Input {
return Input{
UTF8: []byte(s),
UTF16: utf16.Encode([]rune(s)),
}
}
// Collator is an interface for architecture-specific implementations of collation.
type Collator interface {
// Key generates a sort key for the given input. Implementations
// may return nil if a collator does not support sort keys.
Key(s Input) []byte
// Compare returns -1 if a < b, 1 if a > b and 0 if a == b.
Compare(a, b Input) int
}
// CollatorFactory creates a Collator for a given language tag.
type CollatorFactory struct {
name string
makeFn func(tag string) (Collator, error)
description string
}
var collators = []CollatorFactory{}
// AddFactory registers f as a factory for an implementation of Collator.
func AddFactory(f CollatorFactory) {
collators = append(collators, f)
}
func getCollator(name, locale string) Collator {
for _, f := range collators {
if f.name == name {
col, err := f.makeFn(locale)
if err != nil {
log.Fatal(err)
}
return col
}
}
log.Fatalf("collator of type %q not found", name)
return nil
}
// goCollator is an implementation of Collator using Go's own collator.
type goCollator struct {
c *collate.Collator
buf collate.Buffer
}
func init() {
AddFactory(CollatorFactory{"go", newGoCollator, "Go's native collator implementation."})
}
func newGoCollator(loc string) (Collator, error) {
c := &goCollator{c: collate.New(language.Make(loc))}
return c, nil
}
func (c *goCollator) Key(b Input) []byte {
return c.c.Key(&c.buf, b.UTF8)
}
func (c *goCollator) Compare(a, b Input) int {
return c.c.Compare(a.UTF8, b.UTF8)
}

View File

@ -1,529 +0,0 @@
// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package main // import "golang.org/x/text/collate/tools/colcmp"
import (
"bytes"
"flag"
"fmt"
"io"
"log"
"os"
"runtime/pprof"
"sort"
"strconv"
"strings"
"text/template"
"time"
"golang.org/x/text/unicode/norm"
)
var (
doNorm = flag.Bool("norm", false, "normalize input strings")
cases = flag.Bool("case", false, "generate case variants")
verbose = flag.Bool("verbose", false, "print results")
debug = flag.Bool("debug", false, "output debug information")
locales = flag.String("locale", "en_US", "the locale to use. May be a comma-separated list for some commands.")
col = flag.String("col", "go", "collator to test")
gold = flag.String("gold", "go", "collator used as the gold standard")
usecmp = flag.Bool("usecmp", false,
`use comparison instead of sort keys when sorting. Must be "test", "gold" or "both"`)
cpuprofile = flag.String("cpuprofile", "", "write cpu profile to file")
exclude = flag.String("exclude", "", "exclude errors that contain any of the characters")
limit = flag.Int("limit", 5000000, "maximum number of samples to generate for one run")
)
func failOnError(err error) {
if err != nil {
log.Panic(err)
}
}
// Test holds test data for testing a locale-collator pair.
// Test also provides functionality that is commonly used by the various commands.
type Test struct {
ctxt *Context
Name string
Locale string
ColName string
Col Collator
UseCompare bool
Input []Input
Duration time.Duration
start time.Time
msg string
count int
}
func (t *Test) clear() {
t.Col = nil
t.Input = nil
}
const (
msgGeneratingInput = "generating input"
msgGeneratingKeys = "generating keys"
msgSorting = "sorting"
)
var lastLen = 0
func (t *Test) SetStatus(msg string) {
if *debug || *verbose {
fmt.Printf("%s: %s...\n", t.Name, msg)
} else if t.ctxt.out != nil {
fmt.Fprint(t.ctxt.out, strings.Repeat(" ", lastLen))
fmt.Fprint(t.ctxt.out, strings.Repeat("\b", lastLen))
fmt.Fprint(t.ctxt.out, msg, "...")
lastLen = len(msg) + 3
fmt.Fprint(t.ctxt.out, strings.Repeat("\b", lastLen))
}
}
// Start is used by commands to signal the start of an operation.
func (t *Test) Start(msg string) {
t.SetStatus(msg)
t.count = 0
t.msg = msg
t.start = time.Now()
}
// Stop is used by commands to signal the end of an operation.
func (t *Test) Stop() (time.Duration, int) {
d := time.Now().Sub(t.start)
t.Duration += d
if *debug || *verbose {
fmt.Printf("%s: %s done. (%.3fs /%dK ops)\n", t.Name, t.msg, d.Seconds(), t.count/1000)
}
return d, t.count
}
// generateKeys generates sort keys for all the inputs.
func (t *Test) generateKeys() {
for i, s := range t.Input {
b := t.Col.Key(s)
t.Input[i].key = b
if *debug {
fmt.Printf("%s (%X): %X\n", string(s.UTF8), s.UTF16, b)
}
}
}
// Sort sorts the inputs. It generates sort keys if this is required by the
// chosen sort method.
func (t *Test) Sort() (tkey, tsort time.Duration, nkey, nsort int) {
if *cpuprofile != "" {
f, err := os.Create(*cpuprofile)
failOnError(err)
pprof.StartCPUProfile(f)
defer pprof.StopCPUProfile()
}
if t.UseCompare || t.Col.Key(t.Input[0]) == nil {
t.Start(msgSorting)
sort.Sort(&testCompare{*t})
tsort, nsort = t.Stop()
} else {
t.Start(msgGeneratingKeys)
t.generateKeys()
t.count = len(t.Input)
tkey, nkey = t.Stop()
t.Start(msgSorting)
sort.Sort(t)
tsort, nsort = t.Stop()
}
return
}
func (t *Test) Swap(a, b int) {
t.Input[a], t.Input[b] = t.Input[b], t.Input[a]
}
func (t *Test) Less(a, b int) bool {
t.count++
return bytes.Compare(t.Input[a].key, t.Input[b].key) == -1
}
func (t Test) Len() int {
return len(t.Input)
}
type testCompare struct {
Test
}
func (t *testCompare) Less(a, b int) bool {
t.count++
return t.Col.Compare(t.Input[a], t.Input[b]) == -1
}
type testRestore struct {
Test
}
func (t *testRestore) Less(a, b int) bool {
return t.Input[a].index < t.Input[b].index
}
// GenerateInput generates input phrases for the locale tested by t.
func (t *Test) GenerateInput() {
t.Input = nil
if t.ctxt.lastLocale != t.Locale {
gen := phraseGenerator{}
gen.init(t.Locale)
t.SetStatus(msgGeneratingInput)
t.ctxt.lastInput = nil // allow the previous value to be garbage collected.
t.Input = gen.generate(*doNorm)
t.ctxt.lastInput = t.Input
t.ctxt.lastLocale = t.Locale
} else {
t.Input = t.ctxt.lastInput
for i := range t.Input {
t.Input[i].key = nil
}
sort.Sort(&testRestore{*t})
}
}
// Context holds all tests and settings translated from command line options.
type Context struct {
test []*Test
last *Test
lastLocale string
lastInput []Input
out io.Writer
}
func (ts *Context) Printf(format string, a ...interface{}) {
ts.assertBuf()
fmt.Fprintf(ts.out, format, a...)
}
func (ts *Context) Print(a ...interface{}) {
ts.assertBuf()
fmt.Fprint(ts.out, a...)
}
// assertBuf sets up an io.Writer for output, if it doesn't already exist.
// In debug and verbose mode, output is buffered so that the regular output
// will not interfere with the additional output. Otherwise, output is
// written directly to stdout for a more responsive feel.
func (ts *Context) assertBuf() {
if ts.out != nil {
return
}
if *debug || *verbose {
ts.out = &bytes.Buffer{}
} else {
ts.out = os.Stdout
}
}
// flush flushes the contents of ts.out to stdout, if it is not stdout already.
func (ts *Context) flush() {
if ts.out != nil {
if _, ok := ts.out.(io.ReadCloser); !ok {
io.Copy(os.Stdout, ts.out.(io.Reader))
}
}
}
// parseTests creates all tests from command lines and returns
// a Context to hold them.
func parseTests() *Context {
ctxt := &Context{}
colls := strings.Split(*col, ",")
for _, loc := range strings.Split(*locales, ",") {
loc = strings.TrimSpace(loc)
for _, name := range colls {
name = strings.TrimSpace(name)
col := getCollator(name, loc)
ctxt.test = append(ctxt.test, &Test{
ctxt: ctxt,
Locale: loc,
ColName: name,
UseCompare: *usecmp,
Col: col,
})
}
}
return ctxt
}
func (c *Context) Len() int {
return len(c.test)
}
func (c *Context) Test(i int) *Test {
if c.last != nil {
c.last.clear()
}
c.last = c.test[i]
return c.last
}
func parseInput(args []string) []Input {
input := []Input{}
for _, s := range args {
rs := []rune{}
for len(s) > 0 {
var r rune
r, _, s, _ = strconv.UnquoteChar(s, '\'')
rs = append(rs, r)
}
s = string(rs)
if *doNorm {
s = norm.NFD.String(s)
}
input = append(input, makeInputString(s))
}
return input
}
// A Command is an implementation of a colcmp command.
type Command struct {
Run func(cmd *Context, args []string)
Usage string
Short string
Long string
}
func (cmd Command) Name() string {
return strings.SplitN(cmd.Usage, " ", 2)[0]
}
var commands = []*Command{
cmdSort,
cmdBench,
cmdRegress,
}
const sortHelp = `
Sort sorts a given list of strings. Strings are separated by whitespace.
`
var cmdSort = &Command{
Run: runSort,
Usage: "sort <string>*",
Short: "sort a given list of strings",
Long: sortHelp,
}
func runSort(ctxt *Context, args []string) {
input := parseInput(args)
if len(input) == 0 {
log.Fatalf("Nothing to sort.")
}
if ctxt.Len() > 1 {
ctxt.Print("COLL LOCALE RESULT\n")
}
for i := 0; i < ctxt.Len(); i++ {
t := ctxt.Test(i)
t.Input = append(t.Input, input...)
t.Sort()
if ctxt.Len() > 1 {
ctxt.Printf("%-5s %-5s ", t.ColName, t.Locale)
}
for _, s := range t.Input {
ctxt.Print(string(s.UTF8), " ")
}
ctxt.Print("\n")
}
}
const benchHelp = `
Bench runs a benchmark for the given list of collator implementations.
If no collator implementations are given, the go collator will be used.
`
var cmdBench = &Command{
Run: runBench,
Usage: "bench",
Short: "benchmark a given list of collator implementations",
Long: benchHelp,
}
func runBench(ctxt *Context, args []string) {
ctxt.Printf("%-7s %-5s %-6s %-24s %-24s %-5s %s\n", "LOCALE", "COLL", "N", "KEYS", "SORT", "AVGLN", "TOTAL")
for i := 0; i < ctxt.Len(); i++ {
t := ctxt.Test(i)
ctxt.Printf("%-7s %-5s ", t.Locale, t.ColName)
t.GenerateInput()
ctxt.Printf("%-6s ", fmt.Sprintf("%dK", t.Len()/1000))
tkey, tsort, nkey, nsort := t.Sort()
p := func(dur time.Duration, n int) {
s := ""
if dur > 0 {
s = fmt.Sprintf("%6.3fs ", dur.Seconds())
if n > 0 {
s += fmt.Sprintf("%15s", fmt.Sprintf("(%4.2f ns/op)", float64(dur)/float64(n)))
}
}
ctxt.Printf("%-24s ", s)
}
p(tkey, nkey)
p(tsort, nsort)
total := 0
for _, s := range t.Input {
total += len(s.key)
}
ctxt.Printf("%-5d ", total/t.Len())
ctxt.Printf("%6.3fs\n", t.Duration.Seconds())
if *debug {
for _, s := range t.Input {
fmt.Print(string(s.UTF8), " ")
}
fmt.Println()
}
}
}
const regressHelp = `
Regress runs a monkey test by comparing the results of randomly generated tests
between two implementations of a collator. The user may optionally pass a list
of strings to regress against instead of the default test set.
`
var cmdRegress = &Command{
Run: runRegress,
Usage: "regress -gold=<col> -test=<col> [string]*",
Short: "run a monkey test between two collators",
Long: regressHelp,
}
const failedKeyCompare = `
%s:%d: incorrect comparison result for input:
a: %q (%.4X)
key: %s
b: %q (%.4X)
key: %s
Compare(a, b) = %d; want %d.
gold keys:
a: %s
b: %s
`
const failedCompare = `
%s:%d: incorrect comparison result for input:
a: %q (%.4X)
b: %q (%.4X)
Compare(a, b) = %d; want %d.
`
func keyStr(b []byte) string {
buf := &bytes.Buffer{}
for _, v := range b {
fmt.Fprintf(buf, "%.2X ", v)
}
return buf.String()
}
func runRegress(ctxt *Context, args []string) {
input := parseInput(args)
for i := 0; i < ctxt.Len(); i++ {
t := ctxt.Test(i)
if len(input) > 0 {
t.Input = append(t.Input, input...)
} else {
t.GenerateInput()
}
t.Sort()
count := 0
gold := getCollator(*gold, t.Locale)
for i := 1; i < len(t.Input); i++ {
ia := t.Input[i-1]
ib := t.Input[i]
if bytes.IndexAny(ib.UTF8, *exclude) != -1 {
i++
continue
}
if bytes.IndexAny(ia.UTF8, *exclude) != -1 {
continue
}
goldCmp := gold.Compare(ia, ib)
if cmp := bytes.Compare(ia.key, ib.key); cmp != goldCmp {
count++
a := string(ia.UTF8)
b := string(ib.UTF8)
fmt.Printf(failedKeyCompare, t.Locale, i-1, a, []rune(a), keyStr(ia.key), b, []rune(b), keyStr(ib.key), cmp, goldCmp, keyStr(gold.Key(ia)), keyStr(gold.Key(ib)))
} else if cmp := t.Col.Compare(ia, ib); cmp != goldCmp {
count++
a := string(ia.UTF8)
b := string(ib.UTF8)
fmt.Printf(failedCompare, t.Locale, i-1, a, []rune(a), b, []rune(b), cmp, goldCmp)
}
}
if count > 0 {
ctxt.Printf("Found %d inconsistencies in %d entries.\n", count, t.Len()-1)
}
}
}
const helpTemplate = `
colcmp is a tool for testing and benchmarking collation
Usage: colcmp command [arguments]
The commands are:
{{range .}}
{{.Name | printf "%-11s"}} {{.Short}}{{end}}
Use "col help [topic]" for more information about that topic.
`
const detailedHelpTemplate = `
Usage: colcmp {{.Usage}}
{{.Long | trim}}
`
func runHelp(args []string) {
t := template.New("help")
t.Funcs(template.FuncMap{"trim": strings.TrimSpace})
if len(args) < 1 {
template.Must(t.Parse(helpTemplate))
failOnError(t.Execute(os.Stderr, &commands))
} else {
for _, cmd := range commands {
if cmd.Name() == args[0] {
template.Must(t.Parse(detailedHelpTemplate))
failOnError(t.Execute(os.Stderr, cmd))
os.Exit(0)
}
}
log.Fatalf("Unknown command %q. Run 'colcmp help'.", args[0])
}
os.Exit(0)
}
func main() {
flag.Parse()
log.SetFlags(0)
ctxt := parseTests()
if flag.NArg() < 1 {
runHelp(nil)
}
args := flag.Args()[1:]
if flag.Arg(0) == "help" {
runHelp(args)
}
for _, cmd := range commands {
if cmd.Name() == flag.Arg(0) {
cmd.Run(ctxt, args)
ctxt.flush()
return
}
}
runHelp(flag.Args())
}

View File

@ -1,111 +0,0 @@
// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build darwin
package main
/*
#cgo LDFLAGS: -framework CoreFoundation
#include <CoreFoundation/CFBase.h>
#include <CoreFoundation/CoreFoundation.h>
*/
import "C"
import (
"unsafe"
)
func init() {
AddFactory(CollatorFactory{"osx", newOSX16Collator,
"OS X/Darwin collator, using native strings."})
AddFactory(CollatorFactory{"osx8", newOSX8Collator,
"OS X/Darwin collator for UTF-8."})
}
func osxUInt8P(s []byte) *C.UInt8 {
return (*C.UInt8)(unsafe.Pointer(&s[0]))
}
func osxCharP(s []uint16) *C.UniChar {
return (*C.UniChar)(unsafe.Pointer(&s[0]))
}
// osxCollator implements a Collator based on OS X's CoreFoundation.
type osxCollator struct {
loc C.CFLocaleRef
opt C.CFStringCompareFlags
}
func (c *osxCollator) init(locale string) {
l := C.CFStringCreateWithBytes(
C.kCFAllocatorDefault,
osxUInt8P([]byte(locale)),
C.CFIndex(len(locale)),
C.kCFStringEncodingUTF8,
C.Boolean(0),
)
c.loc = C.CFLocaleCreate(C.kCFAllocatorDefault, l)
}
func newOSX8Collator(locale string) (Collator, error) {
c := &osx8Collator{}
c.init(locale)
return c, nil
}
func newOSX16Collator(locale string) (Collator, error) {
c := &osx16Collator{}
c.init(locale)
return c, nil
}
func (c osxCollator) Key(s Input) []byte {
return nil // sort keys not supported by OS X CoreFoundation
}
type osx8Collator struct {
osxCollator
}
type osx16Collator struct {
osxCollator
}
func (c osx16Collator) Compare(a, b Input) int {
sa := C.CFStringCreateWithCharactersNoCopy(
C.kCFAllocatorDefault,
osxCharP(a.UTF16),
C.CFIndex(len(a.UTF16)),
C.kCFAllocatorDefault,
)
sb := C.CFStringCreateWithCharactersNoCopy(
C.kCFAllocatorDefault,
osxCharP(b.UTF16),
C.CFIndex(len(b.UTF16)),
C.kCFAllocatorDefault,
)
_range := C.CFRangeMake(0, C.CFStringGetLength(sa))
return int(C.CFStringCompareWithOptionsAndLocale(sa, sb, _range, c.opt, c.loc))
}
func (c osx8Collator) Compare(a, b Input) int {
sa := C.CFStringCreateWithBytesNoCopy(
C.kCFAllocatorDefault,
osxUInt8P(a.UTF8),
C.CFIndex(len(a.UTF8)),
C.kCFStringEncodingUTF8,
C.Boolean(0),
C.kCFAllocatorDefault,
)
sb := C.CFStringCreateWithBytesNoCopy(
C.kCFAllocatorDefault,
osxUInt8P(b.UTF8),
C.CFIndex(len(b.UTF8)),
C.kCFStringEncodingUTF8,
C.Boolean(0),
C.kCFAllocatorDefault,
)
_range := C.CFRangeMake(0, C.CFStringGetLength(sa))
return int(C.CFStringCompareWithOptionsAndLocale(sa, sb, _range, c.opt, c.loc))
}

View File

@ -1,183 +0,0 @@
// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package main
import (
"math"
"math/rand"
"strings"
"unicode"
"unicode/utf16"
"unicode/utf8"
"golang.org/x/text/language"
"golang.org/x/text/unicode/norm"
)
// TODO: replace with functionality in language package.
// parent computes the parent language for the given language.
// It returns false if locale is already the root (und).
func parent(locale string) (parent string, ok bool) {
if locale == "und" {
return "", false
}
if i := strings.LastIndex(locale, "-"); i != -1 {
return locale[:i], true
}
return "und", true
}
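// parentChain is an illustrative helper (not in the original file) showing the
// fallback chain that phraseGenerator.init below walks when looking up
// exemplar sets, e.g. "sr-Latn-RS" -> "sr-Latn" -> "sr" -> "und".
func parentChain(locale string) []string {
	chain := []string{locale}
	for p, ok := parent(locale); ok; p, ok = parent(p) {
		chain = append(chain, p)
	}
	return chain
}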
// rewriter is used to both unique strings and create variants of strings
// to add to the test set.
type rewriter struct {
seen map[string]bool
addCases bool
}
func newRewriter() *rewriter {
return &rewriter{
seen: make(map[string]bool),
}
}
func (r *rewriter) insert(a []string, s string) []string {
if !r.seen[s] {
r.seen[s] = true
a = append(a, s)
}
return a
}
// rewrite takes a sequence of input strings, adds variants of these strings
// based on options and removes duplicates.
func (r *rewriter) rewrite(ss []string) []string {
ns := []string{}
for _, s := range ss {
ns = r.insert(ns, s)
if r.addCases {
rs := []rune(s)
rn := rs[0]
for c := unicode.SimpleFold(rn); c != rn; c = unicode.SimpleFold(c) {
rs[0] = c
ns = r.insert(ns, string(rs))
}
}
}
return ns
}
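// caseVariants is an illustrative helper (not in the original file): it walks
// the same unicode.SimpleFold cycle over the first rune that rewrite uses
// above. For "kelvin" it returns the input plus "Kelvin" and "\u212Aelvin"
// (KELVIN SIGN).
func caseVariants(s string) []string {
	out := []string{s}
	rs := []rune(s)
	if len(rs) == 0 {
		return out
	}
	first := rs[0]
	for c := unicode.SimpleFold(first); c != first; c = unicode.SimpleFold(c) {
		rs[0] = c
		out = append(out, string(rs))
	}
	return out
}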
// exemplarySet holds a parsed set of characters from the exemplarCharacters table.
type exemplarySet struct {
typ exemplarType
set []string
charIndex int // cumulative total of phrases, including this set
}
type phraseGenerator struct {
sets [exN]exemplarySet
n int
}
func (g *phraseGenerator) init(id string) {
ec := exemplarCharacters
loc := language.Make(id).String()
// get sets for locale or parent locale if the set is not defined.
for i := range g.sets {
for p, ok := loc, true; ok; p, ok = parent(p) {
if set, ok := ec[p]; ok && set[i] != "" {
g.sets[i].set = strings.Split(set[i], " ")
break
}
}
}
r := newRewriter()
r.addCases = *cases
for i := range g.sets {
g.sets[i].set = r.rewrite(g.sets[i].set)
}
// compute indexes
for i, set := range g.sets {
g.n += len(set.set)
g.sets[i].charIndex = g.n
}
}
// phrase returns the ith phrase, where i < g.n.
func (g *phraseGenerator) phrase(i int) string {
for _, set := range g.sets {
if i < set.charIndex {
return set.set[i-(set.charIndex-len(set.set))]
}
}
panic("index out of range")
}
// generate generates inputs by combining all pairs of exemplar strings.
// If doNorm is true, all input strings are normalized to NFD.
// TODO: allow other variations, statistical models, and random
// trailing sequences.
func (g *phraseGenerator) generate(doNorm bool) []Input {
const (
M = 1024 * 1024
buf8Size = 30 * M
buf16Size = 10 * M
)
// TODO: use a better way to limit the input size.
if sq := int(math.Sqrt(float64(*limit))); g.n > sq {
g.n = sq
}
size := g.n * g.n
a := make([]Input, 0, size)
buf8 := make([]byte, 0, buf8Size)
buf16 := make([]uint16, 0, buf16Size)
addInput := func(str string) {
buf8 = buf8[len(buf8):]
buf16 = buf16[len(buf16):]
if len(str) > cap(buf8) {
buf8 = make([]byte, 0, buf8Size)
}
if len(str) > cap(buf16) {
buf16 = make([]uint16, 0, buf16Size)
}
if doNorm {
buf8 = norm.NFD.AppendString(buf8, str)
} else {
buf8 = append(buf8, str...)
}
buf16 = appendUTF16(buf16, buf8)
a = append(a, makeInput(buf8, buf16))
}
for i := 0; i < g.n; i++ {
p1 := g.phrase(i)
addInput(p1)
for j := 0; j < g.n; j++ {
p2 := g.phrase(j)
addInput(p1 + p2)
}
}
// permutate
rnd := rand.New(rand.NewSource(int64(rand.Int())))
for i := range a {
j := i + rnd.Intn(len(a)-i)
a[i], a[j] = a[j], a[i]
a[i].index = i // allow restoring this order if input is used multiple times.
}
return a
}
func appendUTF16(buf []uint16, s []byte) []uint16 {
for len(s) > 0 {
r, sz := utf8.DecodeRune(s)
s = s[sz:]
r1, r2 := utf16.EncodeRune(r)
if r1 != 0xFFFD {
buf = append(buf, uint16(r1), uint16(r2))
} else {
buf = append(buf, uint16(r))
}
}
return buf
}
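// checkAppendUTF16 is an illustrative helper (not in the original file): for
// valid UTF-8 input, appendUTF16 is expected to agree with the standard
// utf16.Encode. utf16.EncodeRune returns (U+FFFD, U+FFFD) for runes that do
// not need a surrogate pair, which is why a single code unit is appended in
// that branch above.
func checkAppendUTF16(s string) bool {
	got := appendUTF16(nil, []byte(s))
	want := utf16.Encode([]rune(s))
	if len(got) != len(want) {
		return false
	}
	for i := range got {
		if got[i] != want[i] {
			return false
		}
	}
	return true
}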

View File

@ -1,209 +0,0 @@
// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build icu
package main
/*
#cgo LDFLAGS: -licui18n -licuuc
#include <stdlib.h>
#include <unicode/ucol.h>
#include <unicode/uiter.h>
#include <unicode/utypes.h>
*/
import "C"
import (
"fmt"
"log"
"unicode/utf16"
"unicode/utf8"
"unsafe"
)
func init() {
AddFactory(CollatorFactory{"icu", newUTF16,
"Main ICU collator, using native strings."})
AddFactory(CollatorFactory{"icu8", newUTF8iter,
"ICU collator using ICU iterators to process UTF8."})
AddFactory(CollatorFactory{"icu16", newUTF8conv,
"ICU collation by first converting UTF8 to UTF16."})
}
func icuCharP(s []byte) *C.char {
return (*C.char)(unsafe.Pointer(&s[0]))
}
func icuUInt8P(s []byte) *C.uint8_t {
return (*C.uint8_t)(unsafe.Pointer(&s[0]))
}
func icuUCharP(s []uint16) *C.UChar {
return (*C.UChar)(unsafe.Pointer(&s[0]))
}
func icuULen(s []uint16) C.int32_t {
return C.int32_t(len(s))
}
func icuSLen(s []byte) C.int32_t {
return C.int32_t(len(s))
}
// icuCollator implements a Collator based on ICU.
type icuCollator struct {
loc *C.char
col *C.UCollator
keyBuf []byte
}
const growBufSize = 10 * 1024 * 1024
func (c *icuCollator) init(locale string) error {
err := C.UErrorCode(0)
c.loc = C.CString(locale)
c.col = C.ucol_open(c.loc, &err)
if err > 0 {
return fmt.Errorf("failed opening collator for %q", locale)
} else if err < 0 {
loc := C.ucol_getLocaleByType(c.col, 0, &err)
fmt, ok := map[int]string{
-127: "warning: using default collator: %s",
-128: "warning: using fallback collator: %s",
}[int(err)]
if ok {
log.Printf(fmt, C.GoString(loc))
}
}
c.keyBuf = make([]byte, 0, growBufSize)
return nil
}
func (c *icuCollator) buf() (*C.uint8_t, C.int32_t) {
if len(c.keyBuf) == cap(c.keyBuf) {
c.keyBuf = make([]byte, 0, growBufSize)
}
b := c.keyBuf[len(c.keyBuf):cap(c.keyBuf)]
return icuUInt8P(b), icuSLen(b)
}
func (c *icuCollator) extendBuf(n C.int32_t) []byte {
end := len(c.keyBuf) + int(n)
if end > cap(c.keyBuf) {
if len(c.keyBuf) == 0 {
log.Fatalf("icuCollator: max string size exceeded: %v > %v", n, growBufSize)
}
c.keyBuf = make([]byte, 0, growBufSize)
return nil
}
b := c.keyBuf[len(c.keyBuf):end]
c.keyBuf = c.keyBuf[:end]
return b
}
func (c *icuCollator) Close() error {
C.ucol_close(c.col)
C.free(unsafe.Pointer(c.loc))
return nil
}
// icuUTF16 implements the Collator interface.
type icuUTF16 struct {
icuCollator
}
func newUTF16(locale string) (Collator, error) {
c := &icuUTF16{}
return c, c.init(locale)
}
func (c *icuUTF16) Compare(a, b Input) int {
return int(C.ucol_strcoll(c.col, icuUCharP(a.UTF16), icuULen(a.UTF16), icuUCharP(b.UTF16), icuULen(b.UTF16)))
}
func (c *icuUTF16) Key(s Input) []byte {
bp, bn := c.buf()
n := C.ucol_getSortKey(c.col, icuUCharP(s.UTF16), icuULen(s.UTF16), bp, bn)
if b := c.extendBuf(n); b != nil {
return b
}
return c.Key(s)
}
// icuUTF8iter implements the Collator interface
// This implementation wraps the UTF8 string in an iterator
// which is passed to the collator.
type icuUTF8iter struct {
icuCollator
a, b C.UCharIterator
}
func newUTF8iter(locale string) (Collator, error) {
c := &icuUTF8iter{}
return c, c.init(locale)
}
func (c *icuUTF8iter) Compare(a, b Input) int {
err := C.UErrorCode(0)
C.uiter_setUTF8(&c.a, icuCharP(a.UTF8), icuSLen(a.UTF8))
C.uiter_setUTF8(&c.b, icuCharP(b.UTF8), icuSLen(b.UTF8))
return int(C.ucol_strcollIter(c.col, &c.a, &c.b, &err))
}
func (c *icuUTF8iter) Key(s Input) []byte {
err := C.UErrorCode(0)
state := [2]C.uint32_t{}
C.uiter_setUTF8(&c.a, icuCharP(s.UTF8), icuSLen(s.UTF8))
bp, bn := c.buf()
n := C.ucol_nextSortKeyPart(c.col, &c.a, &(state[0]), bp, bn, &err)
if n >= bn {
// Force failure.
if c.extendBuf(n+1) != nil {
log.Fatal("expected extension to fail")
}
return c.Key(s)
}
return c.extendBuf(n)
}
// icuUTF8conv implements the Collator interface.
// This implementation first converts the given UTF8 string
// to UTF16 and then calls the main ICU collation function.
type icuUTF8conv struct {
icuCollator
}
func newUTF8conv(locale string) (Collator, error) {
c := &icuUTF8conv{}
return c, c.init(locale)
}
func (c *icuUTF8conv) Compare(sa, sb Input) int {
a := encodeUTF16(sa.UTF8)
b := encodeUTF16(sb.UTF8)
return int(C.ucol_strcoll(c.col, icuUCharP(a), icuULen(a), icuUCharP(b), icuULen(b)))
}
func (c *icuUTF8conv) Key(s Input) []byte {
a := encodeUTF16(s.UTF8)
bp, bn := c.buf()
n := C.ucol_getSortKey(c.col, icuUCharP(a), icuULen(a), bp, bn)
if b := c.extendBuf(n); b != nil {
return b
}
return c.Key(s)
}
func encodeUTF16(b []byte) []uint16 {
a := []uint16{}
for len(b) > 0 {
r, sz := utf8.DecodeRune(b)
b = b[sz:]
r1, r2 := utf16.EncodeRune(r)
if r1 != 0xFFFD {
a = append(a, uint16(r1), uint16(r2))
} else {
a = append(a, uint16(r))
}
}
return a
}

View File

@ -1,67 +0,0 @@
// Code generated by running "go generate" in golang.org/x/text. DO NOT EDIT.
package currency
import (
"time"
"golang.org/x/text/language"
)
// This file contains code common to gen.go and the package code.
const (
cashShift = 3
roundMask = 0x7
nonTenderBit = 0x8000
)
// currencyInfo contains information about a currency.
// bits 0..2: index into roundings for standard rounding
// bits 3..5: index into roundings for cash rounding
type currencyInfo byte
// roundingType defines the scale (number of fractional decimals) and increments
// in terms of units of size 10^-scale. For example, for scale == 2 and
// increment == 1, the currency is rounded to units of 0.01.
type roundingType struct {
scale, increment uint8
}
// roundings contains rounding data for currencies. This struct is
// created by hand as it is very unlikely to change much.
var roundings = [...]roundingType{
{2, 1}, // default
{0, 1},
{1, 1},
{3, 1},
{4, 1},
{2, 5}, // cash rounding alternative
{2, 50},
}
// regionToCode returns a 16-bit region code. Only two-letter codes are
// supported. (Three-letter codes are not needed.)
func regionToCode(r language.Region) uint16 {
if s := r.String(); len(s) == 2 {
return uint16(s[0])<<8 | uint16(s[1])
}
return 0
}
func toDate(t time.Time) uint32 {
y := t.Year()
if y == 1 {
return 0
}
date := uint32(y) << 4
date |= uint32(t.Month())
date <<= 5
date |= uint32(t.Day())
return date
}
func fromDate(date uint32) time.Time {
return time.Date(int(date>>9), time.Month((date>>5)&0xf), int(date&0x1f), 0, 0, 0, 0, time.UTC)
}
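// datePackingSketch is an illustrative (non-original) snippet of the packed
// layouts used above: regionToCode stores the two ASCII letters of a region in
// one uint16, and toDate packs a date as year<<9 | month<<5 | day, with the
// zero value reserved for "no date".
func datePackingSketch() {
	code := regionToCode(language.MustParseRegion("US")) // 'U'<<8 | 'S' == 0x5553
	d := toDate(time.Date(1999, time.December, 31, 0, 0, 0, 0, time.UTC))
	back := fromDate(d) // 1999-12-31 00:00:00 UTC; the round trip is lossless
	_, _, _ = code, d, back
}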

View File

@ -1,185 +0,0 @@
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:generate go run gen.go gen_common.go -output tables.go
// Package currency contains currency-related functionality.
//
// NOTE: the formatting functionality is currently under development and may
// change without notice.
package currency // import "golang.org/x/text/currency"
import (
"errors"
"sort"
"golang.org/x/text/internal/tag"
"golang.org/x/text/language"
)
// TODO:
// - language-specific currency names.
// - currency formatting.
// - currency information per region
// - register currency code (there is no private use area)
// TODO: remove Currency type from package language.
// Kind determines the rounding and rendering properties of a currency value.
type Kind struct {
rounding rounding
// TODO: formatting type: standard, accounting. See CLDR.
}
type rounding byte
const (
standard rounding = iota
cash
)
var (
// Standard defines standard rounding and formatting for currencies.
Standard Kind = Kind{rounding: standard}
// Cash defines rounding and formatting standards for cash transactions.
Cash Kind = Kind{rounding: cash}
// Accounting defines rounding and formatting standards for accounting.
Accounting Kind = Kind{rounding: standard}
)
// Rounding reports the rounding characteristics for the given currency, where
// scale is the number of fractional decimals and increment is the number of
// units in terms of 10^(-scale) to which to round.
func (k Kind) Rounding(cur Unit) (scale, increment int) {
info := currency.Elem(int(cur.index))[3]
switch k.rounding {
case standard:
info &= roundMask
case cash:
info >>= cashShift
}
return int(roundings[info].scale), int(roundings[info].increment)
}
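// chfRoundingSketch is an illustrative (non-original) snippet mirroring
// TestKindRounding: CHF rounds to 0.01 increments for standard amounts and to
// 0.05 increments for cash amounts.
func chfRoundingSketch() {
	scale, inc := Standard.Rounding(CHF) // scale == 2, inc == 1
	scale, inc = Cash.Rounding(CHF)      // scale == 2, inc == 5
	_, _ = scale, inc
}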
// Unit is an ISO 4217 currency designator.
type Unit struct {
index uint16
}
// String returns the ISO code of u.
func (u Unit) String() string {
if u.index == 0 {
return "XXX"
}
return currency.Elem(int(u.index))[:3]
}
// Amount creates an Amount for the given currency unit and amount.
func (u Unit) Amount(amount interface{}) Amount {
// TODO: verify amount is a supported number type
return Amount{amount: amount, currency: u}
}
var (
errSyntax = errors.New("currency: tag is not well-formed")
errValue = errors.New("currency: tag is not a recognized currency")
)
// ParseISO parses a 3-letter ISO 4217 currency code. It returns an error if s
// is not well-formed or not a recognized currency code.
func ParseISO(s string) (Unit, error) {
var buf [4]byte // Take one byte more to detect oversize keys.
key := buf[:copy(buf[:], s)]
if !tag.FixCase("XXX", key) {
return Unit{}, errSyntax
}
if i := currency.Index(key); i >= 0 {
if i == xxx {
return Unit{}, nil
}
return Unit{uint16(i)}, nil
}
return Unit{}, errValue
}
// MustParseISO is like ParseISO, but panics if the given currency unit
// cannot be parsed. It simplifies safe initialization of Unit values.
func MustParseISO(s string) Unit {
c, err := ParseISO(s)
if err != nil {
panic(err)
}
return c
}
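// parseISOSketch is an illustrative (non-original) snippet mirroring
// TestParseISO: case is fixed up before lookup, malformed codes yield
// errSyntax, and unknown but well-formed codes yield errValue.
func parseISOSketch() {
	u, err := ParseISO("EUR") // u == EUR, err == nil
	_, _ = u, err
	if _, err := ParseISO("zzz"); err == errValue {
		// well-formed, but not a recognized currency
	}
	if _, err := ParseISO("\u22A9"); err == errSyntax {
		// not well-formed (non-ASCII)
	}
	_ = MustParseISO("xts") // returns XTS; panics on error
}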
// FromRegion reports the currency unit that is currently legal tender in the
// given region according to CLDR. It returns false if the region currently does
// not have legal tender.
func FromRegion(r language.Region) (currency Unit, ok bool) {
x := regionToCode(r)
i := sort.Search(len(regionToCurrency), func(i int) bool {
return regionToCurrency[i].region >= x
})
if i < len(regionToCurrency) && regionToCurrency[i].region == x {
return Unit{regionToCurrency[i].code}, true
}
return Unit{}, false
}
// FromTag reports the most likely currency for the given tag. It considers the
// currency defined in the -u extension and infers the region if necessary.
func FromTag(t language.Tag) (Unit, language.Confidence) {
if cur := t.TypeForKey("cu"); len(cur) == 3 {
c, _ := ParseISO(cur)
return c, language.Exact
}
r, conf := t.Region()
if cur, ok := FromRegion(r); ok {
return cur, conf
}
return Unit{}, language.No
}
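// fromRegionTagSketch is an illustrative (non-original) snippet mirroring
// TestFromRegion and TestFromTag: the region determines the current legal
// tender, and an explicit -u-cu- extension on a tag takes precedence over
// region inference.
func fromRegionTagSketch() {
	cur, ok := FromRegion(language.MustParseRegion("NL")) // EUR, true
	_, _ = cur, ok
	cur2, conf := FromTag(language.MustParse("en-u-cu-eur")) // EUR, language.Exact
	_, _ = cur2, conf
}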
var (
// Undefined and testing.
XXX Unit = Unit{}
XTS Unit = Unit{xts}
// G10 currencies https://en.wikipedia.org/wiki/G10_currencies.
USD Unit = Unit{usd}
EUR Unit = Unit{eur}
JPY Unit = Unit{jpy}
GBP Unit = Unit{gbp}
CHF Unit = Unit{chf}
AUD Unit = Unit{aud}
NZD Unit = Unit{nzd}
CAD Unit = Unit{cad}
SEK Unit = Unit{sek}
NOK Unit = Unit{nok}
// Additional common currencies as defined by CLDR.
BRL Unit = Unit{brl}
CNY Unit = Unit{cny}
DKK Unit = Unit{dkk}
INR Unit = Unit{inr}
RUB Unit = Unit{rub}
HKD Unit = Unit{hkd}
IDR Unit = Unit{idr}
KRW Unit = Unit{krw}
MXN Unit = Unit{mxn}
PLN Unit = Unit{pln}
SAR Unit = Unit{sar}
THB Unit = Unit{thb}
TRY Unit = Unit{try}
TWD Unit = Unit{twd}
ZAR Unit = Unit{zar}
// Precious metals.
XAG Unit = Unit{xag}
XAU Unit = Unit{xau}
XPT Unit = Unit{xpt}
XPD Unit = Unit{xpd}
)

View File

@ -1,171 +0,0 @@
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package currency
import (
"fmt"
"testing"
"golang.org/x/text/internal/testtext"
"golang.org/x/text/language"
)
var (
cup = MustParseISO("CUP")
czk = MustParseISO("CZK")
xcd = MustParseISO("XCD")
zwr = MustParseISO("ZWR")
)
func TestParseISO(t *testing.T) {
testCases := []struct {
in string
out Unit
ok bool
}{
{"USD", USD, true},
{"xxx", XXX, true},
{"xts", XTS, true},
{"XX", XXX, false},
{"XXXX", XXX, false},
{"", XXX, false}, // not well-formed
{"UUU", XXX, false}, // unknown
{"\u22A9", XXX, false}, // non-ASCII, printable
{"aaa", XXX, false},
{"zzz", XXX, false},
{"000", XXX, false},
{"999", XXX, false},
{"---", XXX, false},
{"\x00\x00\x00", XXX, false},
{"\xff\xff\xff", XXX, false},
}
for i, tc := range testCases {
if x, err := ParseISO(tc.in); x != tc.out || err == nil != tc.ok {
t.Errorf("%d:%s: was %s, %v; want %s, %v", i, tc.in, x, err == nil, tc.out, tc.ok)
}
}
}
func TestFromRegion(t *testing.T) {
testCases := []struct {
region string
currency Unit
ok bool
}{
{"NL", EUR, true},
{"BE", EUR, true},
{"AG", xcd, true},
{"CH", CHF, true},
{"CU", cup, true}, // first of multiple
{"DG", USD, true}, // does not have M49 code
{"150", XXX, false}, // implicit false
{"CP", XXX, false}, // explicit false in CLDR
{"CS", XXX, false}, // all expired
{"ZZ", XXX, false}, // none match
}
for _, tc := range testCases {
cur, ok := FromRegion(language.MustParseRegion(tc.region))
if cur != tc.currency || ok != tc.ok {
t.Errorf("%s: got %v, %v; want %v, %v", tc.region, cur, ok, tc.currency, tc.ok)
}
}
}
func TestFromTag(t *testing.T) {
testCases := []struct {
tag string
currency Unit
conf language.Confidence
}{
{"nl", EUR, language.Low}, // nl also spoken outside Euro land.
{"nl-BE", EUR, language.Exact}, // region is known
{"pt", BRL, language.Low},
{"en", USD, language.Low},
{"en-u-cu-eur", EUR, language.Exact},
{"tlh", XXX, language.No}, // Klingon has no country.
{"es-419", XXX, language.No},
{"und", USD, language.Low},
}
for _, tc := range testCases {
cur, conf := FromTag(language.MustParse(tc.tag))
if cur != tc.currency || conf != tc.conf {
t.Errorf("%s: got %v, %v; want %v, %v", tc.tag, cur, conf, tc.currency, tc.conf)
}
}
}
func TestTable(t *testing.T) {
for i := 4; i < len(currency); i += 4 {
if a, b := currency[i-4:i-1], currency[i:i+3]; a >= b {
t.Errorf("currency unordered at element %d: %s >= %s", i, a, b)
}
}
// First currency has index 1, last is numCurrencies.
if c := currency.Elem(1)[:3]; c != "ADP" {
t.Errorf("first was %q; want ADP", c)
}
if c := currency.Elem(numCurrencies)[:3]; c != "ZWR" {
t.Errorf("last was %q; want ZWR", c)
}
}
func TestKindRounding(t *testing.T) {
testCases := []struct {
kind Kind
cur Unit
scale int
inc int
}{
{Standard, USD, 2, 1},
{Standard, CHF, 2, 1},
{Cash, CHF, 2, 5},
{Standard, TWD, 2, 1},
{Cash, TWD, 0, 1},
{Standard, czk, 2, 1},
{Cash, czk, 0, 1},
{Standard, zwr, 2, 1},
{Cash, zwr, 0, 1},
{Standard, KRW, 0, 1},
{Cash, KRW, 0, 1}, // Cash defaults to standard.
}
for i, tc := range testCases {
if scale, inc := tc.kind.Rounding(tc.cur); scale != tc.scale && inc != tc.inc {
t.Errorf("%d: got %d, %d; want %d, %d", i, scale, inc, tc.scale, tc.inc)
}
}
}
const body = `package main
import (
"fmt"
"golang.org/x/text/currency"
)
func main() {
%s
}
`
func TestLinking(t *testing.T) {
base := getSize(t, `fmt.Print(currency.CLDRVersion)`)
symbols := getSize(t, `fmt.Print(currency.Symbol(currency.USD))`)
if d := symbols - base; d < 2*1024 {
t.Errorf("size(symbols)-size(base) was %d; want > 2K", d)
}
}
func getSize(t *testing.T, main string) int {
size, err := testtext.CodeSize(fmt.Sprintf(body, main))
if err != nil {
t.Skipf("skipping link size test; binary size could not be determined: %v", err)
}
return size
}
func BenchmarkString(b *testing.B) {
for i := 0; i < b.N; i++ {
USD.String()
}
}

View File

@ -1,27 +0,0 @@
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package currency_test
import (
"fmt"
"time"
"golang.org/x/text/currency"
)
func ExampleQuery() {
t1799, _ := time.Parse("2006-01-02", "1799-01-01")
for it := currency.Query(currency.Date(t1799)); it.Next(); {
from := ""
if t, ok := it.From(); ok {
from = t.Format("2006-01-01")
}
fmt.Printf("%v is used in %v since: %v\n", it.Unit(), it.Region(), from)
}
// Output:
// GBP is used in GB since: 1694-07-07
// GIP is used in GI since: 1713-01-01
// USD is used in US since: 1792-01-01
}

View File

@ -1,215 +0,0 @@
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package currency
import (
"fmt"
"io"
"sort"
"golang.org/x/text/internal"
"golang.org/x/text/internal/format"
"golang.org/x/text/language"
)
// Amount is an amount-currency unit pair.
type Amount struct {
amount interface{} // Change to decimal(64|128).
currency Unit
}
// Currency reports the currency unit of this amount.
func (a Amount) Currency() Unit { return a.currency }
// TODO: based on decimal type, but may make sense to customize a bit.
// func (a Amount) Decimal()
// func (a Amount) Int() (int64, error)
// func (a Amount) Fraction() (int64, error)
// func (a Amount) Rat() *big.Rat
// func (a Amount) Float() (float64, error)
// func (a Amount) Scale() uint
// func (a Amount) Precision() uint
// func (a Amount) Sign() int
//
// Add/Sub/Div/Mul/Round.
var space = []byte(" ")
// Format implements fmt.Formatter. It accepts format.State for
// language-specific rendering.
func (a Amount) Format(s fmt.State, verb rune) {
v := formattedValue{
currency: a.currency,
amount: a.amount,
format: defaultFormat,
}
v.Format(s, verb)
}
// formattedValue is currency amount or unit that implements language-sensitive
// formatting.
type formattedValue struct {
currency Unit
amount interface{} // Amount, Unit, or number.
format *options
}
// Format implements fmt.Formatter. It accepts format.State for
// language-specific rendering.
func (v formattedValue) Format(s fmt.State, verb rune) {
var lang int
if state, ok := s.(format.State); ok {
lang, _ = language.CompactIndex(state.Language())
}
// Get the options. Use DefaultFormat if not present.
opt := v.format
if opt == nil {
opt = defaultFormat
}
cur := v.currency
if cur.index == 0 {
cur = opt.currency
}
// TODO: use pattern.
io.WriteString(s, opt.symbol(lang, cur))
if v.amount != nil {
s.Write(space)
// TODO: apply currency-specific rounding
scale, _ := opt.kind.Rounding(cur)
if _, ok := s.Precision(); !ok {
fmt.Fprintf(s, "%.*f", scale, v.amount)
} else {
fmt.Fprint(s, v.amount)
}
}
}
// Formatter decorates a given number, Unit or Amount with formatting options.
type Formatter func(amount interface{}) formattedValue
// func (f Formatter) Options(opts ...Option) Formatter
// TODO: call this a Formatter or FormatFunc?
var dummy = USD.Amount(0)
// adjust creates a new Formatter based on the adjustments of fn on f.
func (f Formatter) adjust(fn func(*options)) Formatter {
var o options = *(f(dummy).format)
fn(&o)
return o.format
}
// Default creates a new Formatter that defaults to the given currency unit if a numeric
// value is passed that is not associated with a currency.
func (f Formatter) Default(currency Unit) Formatter {
return f.adjust(func(o *options) { o.currency = currency })
}
// Kind sets the kind of the underlying currency unit.
func (f Formatter) Kind(k Kind) Formatter {
return f.adjust(func(o *options) { o.kind = k })
}
var defaultFormat *options = ISO(dummy).format
var (
// Uses Narrow symbols. Overrides Symbol, if present.
NarrowSymbol Formatter = Formatter(formNarrow)
// Use Symbols instead of ISO codes, when available.
Symbol Formatter = Formatter(formSymbol)
// Use ISO code as symbol.
ISO Formatter = Formatter(formISO)
// TODO:
// // Use full name as symbol.
// Name Formatter
)
// options configures rendering and rounding options for an Amount.
type options struct {
currency Unit
kind Kind
symbol func(compactIndex int, c Unit) string
}
func (o *options) format(amount interface{}) formattedValue {
v := formattedValue{format: o}
switch x := amount.(type) {
case Amount:
v.amount = x.amount
v.currency = x.currency
case *Amount:
v.amount = x.amount
v.currency = x.currency
case Unit:
v.currency = x
case *Unit:
v.currency = *x
default:
if o.currency.index == 0 {
panic("cannot format number without a currency being set")
}
// TODO: Must be a number.
v.amount = x
v.currency = o.currency
}
return v
}
var (
optISO = options{symbol: lookupISO}
optSymbol = options{symbol: lookupSymbol}
optNarrow = options{symbol: lookupNarrow}
)
// These need to be functions, rather than curried methods, as curried methods
// are evaluated at init time, causing tables to be included unconditionally.
func formISO(x interface{}) formattedValue { return optISO.format(x) }
func formSymbol(x interface{}) formattedValue { return optSymbol.format(x) }
func formNarrow(x interface{}) formattedValue { return optNarrow.format(x) }
func lookupISO(x int, c Unit) string { return c.String() }
func lookupSymbol(x int, c Unit) string { return normalSymbol.lookup(x, c) }
func lookupNarrow(x int, c Unit) string { return narrowSymbol.lookup(x, c) }
type symbolIndex struct {
index []uint16 // position corresponds with compact index of language.
data []curToIndex
}
var (
normalSymbol = symbolIndex{normalLangIndex, normalSymIndex}
narrowSymbol = symbolIndex{narrowLangIndex, narrowSymIndex}
)
func (x *symbolIndex) lookup(lang int, c Unit) string {
for {
index := x.data[x.index[lang]:x.index[lang+1]]
i := sort.Search(len(index), func(i int) bool {
return index[i].cur >= c.index
})
if i < len(index) && index[i].cur == c.index {
x := index[i].idx
start := x + 1
end := start + uint16(symbols[x])
if start == end {
return c.String()
}
return symbols[start:end]
}
if lang == 0 {
break
}
lang = int(internal.Parent[lang])
}
return c.String()
}
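// symbolDecodeSketch is an illustrative (non-original) snippet of the
// <length><bytes> encoding used by the symbols constant and decoded in lookup
// above: given a start index x, the symbol is symbols[x+1 : x+1+symbols[x]].
func symbolDecodeSketch(syms string, x uint16) string {
	start := x + 1
	end := start + uint16(syms[x])
	return syms[start:end]
}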

View File

@ -1,70 +0,0 @@
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package currency
import (
"testing"
"golang.org/x/text/language"
"golang.org/x/text/message"
)
var (
en = language.English
fr = language.French
en_US = language.AmericanEnglish
en_GB = language.BritishEnglish
en_AU = language.MustParse("en-AU")
und = language.Und
)
func TestFormatting(t *testing.T) {
testCases := []struct {
tag language.Tag
value interface{}
format Formatter
want string
}{
0: {en, USD.Amount(0.1), nil, "USD 0.10"},
1: {en, XPT.Amount(1.0), Symbol, "XPT 1.00"},
2: {en, USD.Amount(2.0), ISO, "USD 2.00"},
3: {und, USD.Amount(3.0), Symbol, "US$ 3.00"},
4: {en, USD.Amount(4.0), Symbol, "$ 4.00"},
5: {en, USD.Amount(5.20), NarrowSymbol, "$ 5.20"},
6: {en, AUD.Amount(6.20), Symbol, "A$ 6.20"},
7: {en_AU, AUD.Amount(7.20), Symbol, "$ 7.20"},
8: {en_GB, USD.Amount(8.20), Symbol, "US$ 8.20"},
9: {en, 9.0, Symbol.Default(EUR), "€ 9.00"},
10: {en, 10.123, Symbol.Default(KRW), "₩ 10"},
11: {fr, 11.52, Symbol.Default(TWD), "TWD 11.52"},
12: {en, 12.123, Symbol.Default(czk), "CZK 12.12"},
13: {en, 13.123, Symbol.Default(czk).Kind(Cash), "CZK 13"},
14: {en, 14.12345, ISO.Default(MustParseISO("CLF")), "CLF 14.1235"},
15: {en, USD.Amount(15.00), ISO.Default(TWD), "USD 15.00"},
16: {en, KRW.Amount(16.00), ISO.Kind(Cash), "KRW 16"},
// TODO: support integers as well.
17: {en, USD, nil, "USD"},
18: {en, USD, ISO, "USD"},
19: {en, USD, Symbol, "$"},
20: {en_GB, USD, Symbol, "US$"},
21: {en_AU, USD, NarrowSymbol, "$"},
}
for i, tc := range testCases {
p := message.NewPrinter(tc.tag)
v := tc.value
if tc.format != nil {
v = tc.format(v)
}
if got := p.Sprint(v); got != tc.want {
t.Errorf("%d: got %q; want %q", i, got, tc.want)
}
}
}

View File

@ -1,400 +0,0 @@
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
// Generator for currency-related data.
package main
import (
"flag"
"fmt"
"log"
"os"
"sort"
"strconv"
"strings"
"time"
"golang.org/x/text/internal"
"golang.org/x/text/internal/gen"
"golang.org/x/text/internal/tag"
"golang.org/x/text/language"
"golang.org/x/text/unicode/cldr"
)
var (
test = flag.Bool("test", false,
"test existing tables; can be used to compare web data with package data.")
outputFile = flag.String("output", "tables.go", "output file")
draft = flag.String("draft",
"contributed",
`Minimal draft requirements (approved, contributed, provisional, unconfirmed).`)
)
func main() {
gen.Init()
gen.Repackage("gen_common.go", "common.go", "currency")
// Read the CLDR zip file.
r := gen.OpenCLDRCoreZip()
defer r.Close()
d := &cldr.Decoder{}
d.SetDirFilter("supplemental", "main")
d.SetSectionFilter("numbers")
data, err := d.DecodeZip(r)
if err != nil {
log.Fatalf("DecodeZip: %v", err)
}
w := gen.NewCodeWriter()
defer w.WriteGoFile(*outputFile, "currency")
fmt.Fprintln(w, `import "golang.org/x/text/internal/tag"`)
gen.WriteCLDRVersion(w)
b := &builder{}
b.genCurrencies(w, data.Supplemental())
b.genSymbols(w, data)
}
var constants = []string{
// Undefined and testing.
"XXX", "XTS",
// G11 currencies https://en.wikipedia.org/wiki/G10_currencies.
"USD", "EUR", "JPY", "GBP", "CHF", "AUD", "NZD", "CAD", "SEK", "NOK", "DKK",
// Precious metals.
"XAG", "XAU", "XPT", "XPD",
// Additional common currencies as defined by CLDR.
"BRL", "CNY", "INR", "RUB", "HKD", "IDR", "KRW", "MXN", "PLN", "SAR",
"THB", "TRY", "TWD", "ZAR",
}
type builder struct {
currencies tag.Index
numCurrencies int
}
func (b *builder) genCurrencies(w *gen.CodeWriter, data *cldr.SupplementalData) {
// 3-letter ISO currency codes
// Start with dummy to let index start at 1.
currencies := []string{"\x00\x00\x00\x00"}
// currency codes
for _, reg := range data.CurrencyData.Region {
for _, cur := range reg.Currency {
currencies = append(currencies, cur.Iso4217)
}
}
// Not included in the list for some reasons:
currencies = append(currencies, "MVP")
sort.Strings(currencies)
// Unique the elements.
k := 0
for i := 1; i < len(currencies); i++ {
if currencies[k] != currencies[i] {
currencies[k+1] = currencies[i]
k++
}
}
currencies = currencies[:k+1]
// Close with dummy for simpler and faster searching.
currencies = append(currencies, "\xff\xff\xff\xff")
// Write currency values.
fmt.Fprintln(w, "const (")
for _, c := range constants {
index := sort.SearchStrings(currencies, c)
fmt.Fprintf(w, "\t%s = %d\n", strings.ToLower(c), index)
}
fmt.Fprint(w, ")")
// Compute currency-related data that we merge into the table.
for _, info := range data.CurrencyData.Fractions[0].Info {
if info.Iso4217 == "DEFAULT" {
continue
}
standard := getRoundingIndex(info.Digits, info.Rounding, 0)
cash := getRoundingIndex(info.CashDigits, info.CashRounding, standard)
index := sort.SearchStrings(currencies, info.Iso4217)
currencies[index] += mkCurrencyInfo(standard, cash)
}
// Set default values for entries that weren't touched.
for i, c := range currencies {
if len(c) == 3 {
currencies[i] += mkCurrencyInfo(0, 0)
}
}
b.currencies = tag.Index(strings.Join(currencies, ""))
w.WriteComment(`
currency holds an alphabetically sorted list of canonical 3-letter currency
identifiers. Each identifier is followed by a byte of type currencyInfo,
defined in gen_common.go.`)
w.WriteConst("currency", b.currencies)
// Hack alert: gofmt indents a trailing comment after an indented string.
// Ensure that the next thing written is not a comment.
b.numCurrencies = (len(b.currencies) / 4) - 2
w.WriteConst("numCurrencies", b.numCurrencies)
// Create a table that maps regions to currencies.
regionToCurrency := []toCurrency{}
for _, reg := range data.CurrencyData.Region {
if len(reg.Iso3166) != 2 {
log.Fatalf("Unexpected group %q in region data", reg.Iso3166)
}
if len(reg.Currency) == 0 {
continue
}
cur := reg.Currency[0]
if cur.To != "" || cur.Tender == "false" {
continue
}
regionToCurrency = append(regionToCurrency, toCurrency{
region: regionToCode(language.MustParseRegion(reg.Iso3166)),
code: uint16(b.currencies.Index([]byte(cur.Iso4217))),
})
}
sort.Sort(byRegion(regionToCurrency))
w.WriteType(toCurrency{})
w.WriteVar("regionToCurrency", regionToCurrency)
// Create a table that maps regions to currencies.
regionData := []regionInfo{}
for _, reg := range data.CurrencyData.Region {
if len(reg.Iso3166) != 2 {
log.Fatalf("Unexpected group %q in region data", reg.Iso3166)
}
for _, cur := range reg.Currency {
from, _ := time.Parse("2006-01-02", cur.From)
to, _ := time.Parse("2006-01-02", cur.To)
code := uint16(b.currencies.Index([]byte(cur.Iso4217)))
if cur.Tender == "false" {
code |= nonTenderBit
}
regionData = append(regionData, regionInfo{
region: regionToCode(language.MustParseRegion(reg.Iso3166)),
code: code,
from: toDate(from),
to: toDate(to),
})
}
}
sort.Stable(byRegionCode(regionData))
w.WriteType(regionInfo{})
w.WriteVar("regionData", regionData)
}
type regionInfo struct {
region uint16
code uint16 // 0x8000 not legal tender
from uint32
to uint32
}
type byRegionCode []regionInfo
func (a byRegionCode) Len() int { return len(a) }
func (a byRegionCode) Swap(i, j int) { a[i], a[j] = a[j], a[i] }
func (a byRegionCode) Less(i, j int) bool { return a[i].region < a[j].region }
type toCurrency struct {
region uint16
code uint16
}
type byRegion []toCurrency
func (a byRegion) Len() int { return len(a) }
func (a byRegion) Swap(i, j int) { a[i], a[j] = a[j], a[i] }
func (a byRegion) Less(i, j int) bool { return a[i].region < a[j].region }
func mkCurrencyInfo(standard, cash int) string {
return string([]byte{byte(cash<<cashShift | standard)})
}
func getRoundingIndex(digits, rounding string, defIndex int) int {
round := roundings[defIndex] // default
if digits != "" {
round.scale = parseUint8(digits)
}
if rounding != "" && rounding != "0" { // 0 means 1 here in CLDR
round.increment = parseUint8(rounding)
}
// Will panic if the entry doesn't exist:
for i, r := range roundings {
if r == round {
return i
}
}
log.Fatalf("Rounding entry %#v does not exist.", round)
panic("unreachable")
}
// genSymbols generates the symbols used for currencies. Most symbols are
// defined in root and there is only very small variation per language.
// The following rules apply:
// - A symbol can be requested as normal or narrow.
// - If a symbol is not defined for a currency, it defaults to its ISO code.
func (b *builder) genSymbols(w *gen.CodeWriter, data *cldr.CLDR) {
d, err := cldr.ParseDraft(*draft)
if err != nil {
log.Fatalf("filter: %v", err)
}
const (
normal = iota
narrow
numTypes
)
// language -> currency -> type -> symbol
var symbols [language.NumCompactTags][][numTypes]*string
// Collect symbol information per language.
for _, lang := range data.Locales() {
ldml := data.RawLDML(lang)
if ldml.Numbers == nil || ldml.Numbers.Currencies == nil {
continue
}
langIndex, ok := language.CompactIndex(language.MustParse(lang))
if !ok {
log.Fatalf("No compact index for language %s", lang)
}
symbols[langIndex] = make([][numTypes]*string, b.numCurrencies+1)
for _, c := range ldml.Numbers.Currencies.Currency {
syms := cldr.MakeSlice(&c.Symbol)
syms.SelectDraft(d)
for _, sym := range c.Symbol {
v := sym.Data()
if v == c.Type {
// We define "" to mean the ISO symbol.
v = ""
}
cur := b.currencies.Index([]byte(c.Type))
// XXX gets reassigned to 0 in the package's code.
if c.Type == "XXX" {
cur = 0
}
if cur == -1 {
fmt.Println("Unsupported:", c.Type)
continue
}
switch sym.Alt {
case "":
symbols[langIndex][cur][normal] = &v
case "narrow":
symbols[langIndex][cur][narrow] = &v
}
}
}
}
// Remove values identical to the parent.
for langIndex, data := range symbols {
for curIndex, curs := range data {
for typ, sym := range curs {
if sym == nil {
continue
}
for p := uint16(langIndex); p != 0; {
p = internal.Parent[p]
x := symbols[p]
if x == nil {
continue
}
if v := x[curIndex][typ]; v != nil || p == 0 {
// Found a defined parent value or reached root; an undefined root value defaults to "" (the ISO symbol).
parentSym := ""
if v != nil {
parentSym = *v
}
if parentSym == *sym {
// Value is the same as parent.
data[curIndex][typ] = nil
}
break
}
}
}
}
}
// Create symbol index.
symbolData := []byte{0}
symbolLookup := map[string]uint16{"": 0} // 0 means default, so block that value.
for _, data := range symbols {
for _, curs := range data {
for _, sym := range curs {
if sym == nil {
continue
}
if _, ok := symbolLookup[*sym]; !ok {
symbolLookup[*sym] = uint16(len(symbolData))
symbolData = append(symbolData, byte(len(*sym)))
symbolData = append(symbolData, *sym...)
}
}
}
}
w.WriteComment(`
symbols holds symbol data of the form <n> <str>, where n is the length of
the symbol string str.`)
w.WriteConst("symbols", string(symbolData))
// Create index from language to currency lookup to symbol.
type curToIndex struct{ cur, idx uint16 }
w.WriteType(curToIndex{})
prefix := []string{"normal", "narrow"}
// Create data for regular and narrow symbol data.
for typ := normal; typ <= narrow; typ++ {
indexes := []curToIndex{} // maps currency to symbol index
languages := []uint16{}
for _, data := range symbols {
languages = append(languages, uint16(len(indexes)))
for curIndex, curs := range data {
if sym := curs[typ]; sym != nil {
indexes = append(indexes, curToIndex{uint16(curIndex), symbolLookup[*sym]})
}
}
}
languages = append(languages, uint16(len(indexes)))
w.WriteVar(prefix[typ]+"LangIndex", languages)
w.WriteVar(prefix[typ]+"SymIndex", indexes)
}
}
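// lookupSymbol is an illustrative sketch (not part of the original file) of
// how the generated symbols string is read back: an index points at a length
// byte followed by the symbol bytes, and index 0 means "use the ISO code".
func lookupSymbol(symbols string, idx uint16) string {
	if idx == 0 {
		return ""
	}
	n := int(symbols[idx])
	return symbols[int(idx)+1 : int(idx)+1+n]
}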
func parseUint8(str string) uint8 {
x, err := strconv.ParseUint(str, 10, 8)
if err != nil {
// Show line number of where this function was called.
log.New(os.Stderr, "", log.Lshortfile).Output(2, err.Error())
os.Exit(1)
}
return uint8(x)
}

View File

@ -1,71 +0,0 @@
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
package main
import (
"time"
"golang.org/x/text/language"
)
// This file contains code common to gen.go and the package code.
const (
cashShift = 3
roundMask = 0x7
nonTenderBit = 0x8000
)
// currencyInfo contains information about a currency.
// bits 0..2: index into roundings for standard rounding
// bits 3..5: index into roundings for cash rounding
type currencyInfo byte
// roundingType defines the scale (number of fractional decimals) and increments
// in terms of units of size 10^-scale. For example, for scale == 2 and
// increment == 1, the currency is rounded to units of 0.01.
type roundingType struct {
scale, increment uint8
}
// roundings contains rounding data for currencies. This struct is
// created by hand as it is very unlikely to change much.
var roundings = [...]roundingType{
{2, 1}, // default
{0, 1},
{1, 1},
{3, 1},
{4, 1},
{2, 5}, // cash rounding alternative
{2, 50},
}
// regionToCode returns a 16-bit region code. Only two-letter codes are
// supported. (Three-letter codes are not needed.)
func regionToCode(r language.Region) uint16 {
if s := r.String(); len(s) == 2 {
return uint16(s[0])<<8 | uint16(s[1])
}
return 0
}
func toDate(t time.Time) uint32 {
y := t.Year()
if y == 1 {
return 0
}
date := uint32(y) << 4
date |= uint32(t.Month())
date <<= 5
date |= uint32(t.Day())
return date
}
func fromDate(date uint32) time.Time {
return time.Date(int(date>>9), time.Month((date>>5)&0xf), int(date&0x1f), 0, 0, 0, 0, time.UTC)
}
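// exampleEncoding is an illustrative sketch (not part of the original file)
// of the two packed formats defined above: a region packs its two-letter code
// into a uint16, and a date packs year, month and day into a uint32.
func exampleEncoding() {
	us := regionToCode(language.MustParseRegion("US")) // 'U'<<8 | 'S' == 0x5553
	d := toDate(time.Date(1792, time.January, 1, 0, 0, 0, 0, time.UTC))
	t := fromDate(d) // 1792-01-01 00:00:00 +0000 UTC
	_, _ = us, t
}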

View File

@ -1,152 +0,0 @@
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package currency
import (
"sort"
"time"
"golang.org/x/text/language"
)
// QueryIter represents a set of Units. The default set includes all Units that
// are currently in use as legal tender in any Region.
type QueryIter interface {
// Next returns true if there is a next element available.
// It must be called before any of the other methods are called.
Next() bool
// Unit returns the unit of the current iteration.
Unit() Unit
// Region returns the Region for the current iteration.
Region() language.Region
// From returns the date from which the unit was used in the region.
// It returns false if this date is unknown.
From() (time.Time, bool)
// To returns the date up till which the unit was used in the region.
// It returns false if this date is unknown or if the unit is still in use.
To() (time.Time, bool)
// IsTender reports whether the unit is a legal tender in the region during
// the specified date range.
IsTender() bool
}
// Query represents a set of Units. The default set includes all Units that are
// currently in use as legal tender in any Region.
func Query(options ...QueryOption) QueryIter {
it := &iter{
end: len(regionData),
date: 0xFFFFFFFF,
}
for _, fn := range options {
fn(it)
}
return it
}
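// exampleQuery is an illustrative usage sketch (not part of the original
// file): iterate over all currencies that have ever been legal tender in the
// US, together with the dates they came into use where known.
func exampleQuery() {
	it := Query(Region(language.MustParseRegion("US")), Historical)
	for it.Next() {
		_ = it.Unit()    // e.g. USD
		_, _ = it.From() // start date, if known
	}
}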
// NonTender returns a new query that also includes matching Units that are not
// legal tender.
var NonTender QueryOption = nonTender
func nonTender(i *iter) {
i.nonTender = true
}
// Historical selects the units for all dates.
var Historical QueryOption = historical
func historical(i *iter) {
i.date = hist
}
// A QueryOption can be used to change the set of unit information returned by
// a query.
type QueryOption func(*iter)
// Date queries the units that were in use at the given point in history.
func Date(t time.Time) QueryOption {
d := toDate(t)
return func(i *iter) {
i.date = d
}
}
// Region limits the query to only return entries for the given region.
func Region(r language.Region) QueryOption {
p, end := len(regionData), len(regionData)
x := regionToCode(r)
i := sort.Search(len(regionData), func(i int) bool {
return regionData[i].region >= x
})
if i < len(regionData) && regionData[i].region == x {
p = i
for i++; i < len(regionData) && regionData[i].region == x; i++ {
}
end = i
}
return func(i *iter) {
i.p, i.end = p, end
}
}
const (
hist = 0x00
now = 0xFFFFFFFF
)
type iter struct {
*regionInfo
p, end int
date uint32
nonTender bool
}
func (i *iter) Next() bool {
for ; i.p < i.end; i.p++ {
i.regionInfo = &regionData[i.p]
if !i.nonTender && !i.IsTender() {
continue
}
if i.date == hist || (i.from <= i.date && (i.to == 0 || i.date <= i.to)) {
i.p++
return true
}
}
return false
}
func (r *regionInfo) Region() language.Region {
// TODO: this could be much faster.
var buf [2]byte
buf[0] = uint8(r.region >> 8)
buf[1] = uint8(r.region)
return language.MustParseRegion(string(buf[:]))
}
func (r *regionInfo) Unit() Unit {
return Unit{r.code &^ nonTenderBit}
}
func (r *regionInfo) IsTender() bool {
return r.code&nonTenderBit == 0
}
func (r *regionInfo) From() (time.Time, bool) {
if r.from == 0 {
return time.Time{}, false
}
return fromDate(r.from), true
}
func (r *regionInfo) To() (time.Time, bool) {
if r.to == 0 {
return time.Time{}, false
}
return fromDate(r.to), true
}

View File

@ -1,107 +0,0 @@
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package currency
import (
"testing"
"time"
"golang.org/x/text/language"
)
func TestQuery(t *testing.T) {
r := func(region string) language.Region {
return language.MustParseRegion(region)
}
t1800, _ := time.Parse("2006-01-02", "1800-01-01")
type result struct {
region language.Region
unit Unit
isTender bool
from, to string
}
testCases := []struct {
name string
opts []QueryOption
results []result
}{{
name: "XA",
opts: []QueryOption{Region(r("XA"))},
results: []result{},
}, {
name: "AC",
opts: []QueryOption{Region(r("AC"))},
results: []result{
{r("AC"), MustParseISO("SHP"), true, "1976-01-01", ""},
},
}, {
name: "US",
opts: []QueryOption{Region(r("US"))},
results: []result{
{r("US"), MustParseISO("USD"), true, "1792-01-01", ""},
},
}, {
name: "US-hist",
opts: []QueryOption{Region(r("US")), Historical},
results: []result{
{r("US"), MustParseISO("USD"), true, "1792-01-01", ""},
},
}, {
name: "US-non-tender",
opts: []QueryOption{Region(r("US")), NonTender},
results: []result{
{r("US"), MustParseISO("USD"), true, "1792-01-01", ""},
{r("US"), MustParseISO("USN"), false, "", ""},
},
}, {
name: "US-historical+non-tender",
opts: []QueryOption{Region(r("US")), Historical, NonTender},
results: []result{
{r("US"), MustParseISO("USD"), true, "1792-01-01", ""},
{r("US"), MustParseISO("USN"), false, "", ""},
{r("US"), MustParseISO("USS"), false, "", "2014-03-01"},
},
}, {
name: "1800",
opts: []QueryOption{Date(t1800)},
results: []result{
{r("CH"), MustParseISO("CHF"), true, "1799-03-17", ""},
{r("GB"), MustParseISO("GBP"), true, "1694-07-27", ""},
{r("GI"), MustParseISO("GIP"), true, "1713-01-01", ""},
// The dates for IE and PR seem wrong, so these may be updated at
// some point, causing the tests to fail.
{r("IE"), MustParseISO("GBP"), true, "1800-01-01", "1922-01-01"},
{r("PR"), MustParseISO("ESP"), true, "1800-01-01", "1898-12-10"},
{r("US"), MustParseISO("USD"), true, "1792-01-01", ""},
},
}}
for _, tc := range testCases {
n := 0
for it := Query(tc.opts...); it.Next(); n++ {
if n < len(tc.results) {
got := result{
it.Region(),
it.Unit(),
it.IsTender(),
getTime(it.From()),
getTime(it.To()),
}
if got != tc.results[n] {
t.Errorf("%s:%d: got %v; want %v", tc.name, n, got, tc.results[n])
}
}
}
if n != len(tc.results) {
t.Errorf("%s: unexpected number of results: got %d; want %d", tc.name, n, len(tc.results))
}
}
}
func getTime(t time.Time, ok bool) string {
if !ok {
return ""
}
return t.Format("2006-01-02")
}

File diff suppressed because it is too large

View File

@ -1,93 +0,0 @@
package currency
import (
"flag"
"strings"
"testing"
"time"
"golang.org/x/text/internal/gen"
"golang.org/x/text/internal/testtext"
"golang.org/x/text/language"
"golang.org/x/text/message"
"golang.org/x/text/unicode/cldr"
)
var draft = flag.String("draft",
"contributed",
`Minimal draft requirements (approved, contributed, provisional, unconfirmed).`)
func TestTables(t *testing.T) {
testtext.SkipIfNotLong(t)
// Read the CLDR zip file.
r := gen.OpenCLDRCoreZip()
defer r.Close()
d := &cldr.Decoder{}
d.SetDirFilter("supplemental", "main")
d.SetSectionFilter("numbers")
data, err := d.DecodeZip(r)
if err != nil {
t.Fatalf("DecodeZip: %v", err)
}
dr, err := cldr.ParseDraft(*draft)
if err != nil {
t.Fatalf("filter: %v", err)
}
for _, lang := range data.Locales() {
p := message.NewPrinter(language.MustParse(lang))
ldml := data.RawLDML(lang)
if ldml.Numbers == nil || ldml.Numbers.Currencies == nil {
continue
}
for _, c := range ldml.Numbers.Currencies.Currency {
syms := cldr.MakeSlice(&c.Symbol)
syms.SelectDraft(dr)
for _, sym := range c.Symbol {
cur, err := ParseISO(c.Type)
if err != nil {
continue
}
formatter := Symbol
switch sym.Alt {
case "":
case "narrow":
formatter = NarrowSymbol
default:
continue
}
want := sym.Data()
if got := p.Sprint(formatter(cur)); got != want {
t.Errorf("%s:%sSymbol(%s) = %s; want %s", lang, strings.Title(sym.Alt), c.Type, got, want)
}
}
}
}
for _, reg := range data.Supplemental().CurrencyData.Region {
i := 0
for ; regionData[i].Region().String() != reg.Iso3166; i++ {
}
it := Query(Historical, NonTender, Region(language.MustParseRegion(reg.Iso3166)))
for _, cur := range reg.Currency {
from, _ := time.Parse("2006-01-02", cur.From)
to, _ := time.Parse("2006-01-02", cur.To)
it.Next()
for j, r := range []QueryIter{&iter{regionInfo: &regionData[i]}, it} {
if got, _ := r.From(); from != got {
t.Errorf("%d:%s:%s:from: got %v; want %v", j, reg.Iso3166, cur.Iso4217, got, from)
}
if got, _ := r.To(); to != got {
t.Errorf("%d:%s:%s:to: got %v; want %v", j, reg.Iso3166, cur.Iso4217, got, to)
}
}
i++
}
}
}

View File

@ -1,335 +0,0 @@
// Code generated by running "go generate" in golang.org/x/text. DO NOT EDIT.
package date
var enumMap = map[string]uint16{
"": 0,
"calendars": 0,
"fields": 1,
"timeZoneNames": 2,
"buddhist": 0,
"chinese": 1,
"coptic": 2,
"dangi": 3,
"ethiopic": 4,
"ethiopic-amete-alem": 5,
"generic": 6,
"gregorian": 7,
"hebrew": 8,
"indian": 9,
"islamic": 10,
"islamic-civil": 11,
"islamic-rgsa": 12,
"islamic-tbla": 13,
"islamic-umalqura": 14,
"japanese": 15,
"persian": 16,
"roc": 17,
"months": 0,
"days": 1,
"quarters": 2,
"dayPeriods": 3,
"eras": 4,
"dateFormats": 5,
"timeFormats": 6,
"dateTimeFormats": 7,
"monthPatterns": 8,
"cyclicNameSets": 9,
"format": 0,
"stand-alone": 1,
"numeric": 2,
"widthAbbreviated": 0,
"widthNarrow": 1,
"widthWide": 2,
"widthAll": 3,
"widthShort": 4,
"leap7": 0,
"sun": 0,
"mon": 1,
"tue": 2,
"wed": 3,
"thu": 4,
"fri": 5,
"sat": 6,
"am": 0,
"pm": 1,
"midnight": 2,
"morning1": 3,
"afternoon1": 4,
"evening1": 5,
"night1": 6,
"noon": 7,
"morning2": 8,
"afternoon2": 9,
"night2": 10,
"evening2": 11,
"variant": 1,
"short": 0,
"long": 1,
"full": 2,
"medium": 3,
"dayPartsCycleType": 0,
"daysCycleType": 1,
"monthsCycleType": 2,
"solarTermsCycleType": 3,
"yearsCycleType": 4,
"zodiacsCycleType": 5,
"eraField": 0,
"era-shortField": 1,
"era-narrowField": 2,
"yearField": 3,
"year-shortField": 4,
"year-narrowField": 5,
"quarterField": 6,
"quarter-shortField": 7,
"quarter-narrowField": 8,
"monthField": 9,
"month-shortField": 10,
"month-narrowField": 11,
"weekField": 12,
"week-shortField": 13,
"week-narrowField": 14,
"weekOfMonthField": 15,
"weekOfMonth-shortField": 16,
"weekOfMonth-narrowField": 17,
"dayField": 18,
"day-shortField": 19,
"day-narrowField": 20,
"dayOfYearField": 21,
"dayOfYear-shortField": 22,
"dayOfYear-narrowField": 23,
"weekdayField": 24,
"weekday-shortField": 25,
"weekday-narrowField": 26,
"weekdayOfMonthField": 27,
"weekdayOfMonth-shortField": 28,
"weekdayOfMonth-narrowField": 29,
"sunField": 30,
"sun-shortField": 31,
"sun-narrowField": 32,
"monField": 33,
"mon-shortField": 34,
"mon-narrowField": 35,
"tueField": 36,
"tue-shortField": 37,
"tue-narrowField": 38,
"wedField": 39,
"wed-shortField": 40,
"wed-narrowField": 41,
"thuField": 42,
"thu-shortField": 43,
"thu-narrowField": 44,
"friField": 45,
"fri-shortField": 46,
"fri-narrowField": 47,
"satField": 48,
"sat-shortField": 49,
"sat-narrowField": 50,
"dayperiod-shortField": 51,
"dayperiodField": 52,
"dayperiod-narrowField": 53,
"hourField": 54,
"hour-shortField": 55,
"hour-narrowField": 56,
"minuteField": 57,
"minute-shortField": 58,
"minute-narrowField": 59,
"secondField": 60,
"second-shortField": 61,
"second-narrowField": 62,
"zoneField": 63,
"zone-shortField": 64,
"zone-narrowField": 65,
"displayName": 0,
"relative": 1,
"relativeTime": 2,
"relativePeriod": 3,
"before1": 0,
"current": 1,
"after1": 2,
"before2": 3,
"after2": 4,
"after3": 5,
"future": 0,
"past": 1,
"other": 0,
"one": 1,
"zero": 2,
"two": 3,
"few": 4,
"many": 5,
"zoneFormat": 0,
"regionFormat": 1,
"zone": 2,
"metaZone": 3,
"hourFormat": 0,
"gmtFormat": 1,
"gmtZeroFormat": 2,
"genericTime": 0,
"daylightTime": 1,
"standardTime": 2,
"Etc/UTC": 0,
"Europe/London": 1,
"Europe/Dublin": 2,
"Pacific/Honolulu": 3,
"Afghanistan": 0,
"Africa_Central": 1,
"Africa_Eastern": 2,
"Africa_Southern": 3,
"Africa_Western": 4,
"Alaska": 5,
"Amazon": 6,
"America_Central": 7,
"America_Eastern": 8,
"America_Mountain": 9,
"America_Pacific": 10,
"Anadyr": 11,
"Apia": 12,
"Arabian": 13,
"Argentina": 14,
"Argentina_Western": 15,
"Armenia": 16,
"Atlantic": 17,
"Australia_Central": 18,
"Australia_CentralWestern": 19,
"Australia_Eastern": 20,
"Australia_Western": 21,
"Azerbaijan": 22,
"Azores": 23,
"Bangladesh": 24,
"Bhutan": 25,
"Bolivia": 26,
"Brasilia": 27,
"Brunei": 28,
"Cape_Verde": 29,
"Chamorro": 30,
"Chatham": 31,
"Chile": 32,
"China": 33,
"Choibalsan": 34,
"Christmas": 35,
"Cocos": 36,
"Colombia": 37,
"Cook": 38,
"Cuba": 39,
"Davis": 40,
"DumontDUrville": 41,
"East_Timor": 42,
"Easter": 43,
"Ecuador": 44,
"Europe_Central": 45,
"Europe_Eastern": 46,
"Europe_Further_Eastern": 47,
"Europe_Western": 48,
"Falkland": 49,
"Fiji": 50,
"French_Guiana": 51,
"French_Southern": 52,
"Galapagos": 53,
"Gambier": 54,
"Georgia": 55,
"Gilbert_Islands": 56,
"GMT": 57,
"Greenland_Eastern": 58,
"Greenland_Western": 59,
"Gulf": 60,
"Guyana": 61,
"Hawaii_Aleutian": 62,
"Hong_Kong": 63,
"Hovd": 64,
"India": 65,
"Indian_Ocean": 66,
"Indochina": 67,
"Indonesia_Central": 68,
"Indonesia_Eastern": 69,
"Indonesia_Western": 70,
"Iran": 71,
"Irkutsk": 72,
"Israel": 73,
"Japan": 74,
"Kamchatka": 75,
"Kazakhstan_Eastern": 76,
"Kazakhstan_Western": 77,
"Korea": 78,
"Kosrae": 79,
"Krasnoyarsk": 80,
"Kyrgystan": 81,
"Line_Islands": 82,
"Lord_Howe": 83,
"Macquarie": 84,
"Magadan": 85,
"Malaysia": 86,
"Maldives": 87,
"Marquesas": 88,
"Marshall_Islands": 89,
"Mauritius": 90,
"Mawson": 91,
"Mexico_Northwest": 92,
"Mexico_Pacific": 93,
"Mongolia": 94,
"Moscow": 95,
"Myanmar": 96,
"Nauru": 97,
"Nepal": 98,
"New_Caledonia": 99,
"New_Zealand": 100,
"Newfoundland": 101,
"Niue": 102,
"Norfolk": 103,
"Noronha": 104,
"Novosibirsk": 105,
"Omsk": 106,
"Pakistan": 107,
"Palau": 108,
"Papua_New_Guinea": 109,
"Paraguay": 110,
"Peru": 111,
"Philippines": 112,
"Phoenix_Islands": 113,
"Pierre_Miquelon": 114,
"Pitcairn": 115,
"Ponape": 116,
"Pyongyang": 117,
"Reunion": 118,
"Rothera": 119,
"Sakhalin": 120,
"Samara": 121,
"Samoa": 122,
"Seychelles": 123,
"Singapore": 124,
"Solomon": 125,
"South_Georgia": 126,
"Suriname": 127,
"Syowa": 128,
"Tahiti": 129,
"Taipei": 130,
"Tajikistan": 131,
"Tokelau": 132,
"Tonga": 133,
"Truk": 134,
"Turkmenistan": 135,
"Tuvalu": 136,
"Uruguay": 137,
"Uzbekistan": 138,
"Vanuatu": 139,
"Venezuela": 140,
"Vladivostok": 141,
"Volgograd": 142,
"Vostok": 143,
"Wake": 144,
"Wallis": 145,
"Yakutsk": 146,
"Yekaterinburg": 147,
"Guam": 148,
"North_Mariana": 149,
"Acre": 150,
"Almaty": 151,
"Aqtau": 152,
"Aqtobe": 153,
"Casey": 154,
"Lanka": 155,
"Macau": 156,
"Qyzylorda": 157,
}
// Total table size 0 bytes (0KiB); checksum: 811C9DC5

329
vendor/golang.org/x/text/date/gen.go generated vendored
View File

@ -1,329 +0,0 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
package main
import (
"flag"
"log"
"strconv"
"strings"
"golang.org/x/text/internal/cldrtree"
"golang.org/x/text/internal/gen"
"golang.org/x/text/language"
"golang.org/x/text/unicode/cldr"
)
var (
draft = flag.String("draft",
"contributed",
`Minimal draft requirements (approved, contributed, provisional, unconfirmed).`)
)
// TODO:
// - Compile format patterns.
// - Compress the large amount of redundancy in metazones.
// - Split trees (with shared buckets) with data that is enough for default
// formatting of Go Time values and tables that are needed for larger
// variants.
// - zone to metaZone mappings (in supplemental)
// - Add more enum values and also some key maps for some of the elements.
func main() {
gen.Init()
r := gen.OpenCLDRCoreZip()
defer r.Close()
d := &cldr.Decoder{}
d.SetDirFilter("supplemental", "main")
d.SetSectionFilter("dates")
data, err := d.DecodeZip(r)
if err != nil {
log.Fatalf("DecodeZip: %v", err)
}
dates := cldrtree.New("dates")
buildCLDRTree(data, dates)
w := gen.NewCodeWriter()
if err := dates.Gen(w); err != nil {
log.Fatal(err)
}
gen.WriteCLDRVersion(w)
w.WriteGoFile("tables.go", "date")
w = gen.NewCodeWriter()
if err := dates.GenTestData(w); err != nil {
log.Fatal(err)
}
w.WriteGoFile("data_test.go", "date")
}
func buildCLDRTree(data *cldr.CLDR, dates *cldrtree.Builder) {
context := cldrtree.Enum("context")
widthMap := func(s string) string {
// Align era with width values.
if r, ok := map[string]string{
"eraAbbr": "abbreviated",
"eraNarrow": "narrow",
"eraNames": "wide",
}[s]; ok {
s = r
}
// Prefix width to disambiguate with some overlapping length values.
return "width" + strings.Title(s)
}
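	// For example (illustrative): widthMap("eraAbbr") yields "widthAbbreviated"
	// and widthMap("narrow") yields "widthNarrow", matching the keys of the
	// generated enumMap.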
width := cldrtree.EnumFunc("width", widthMap, "abbreviated", "narrow", "wide")
length := cldrtree.Enum("length", "short", "long")
month := cldrtree.Enum("month", "leap7")
relTime := cldrtree.EnumFunc("relTime", func(s string) string {
x, err := strconv.ParseInt(s, 10, 8)
if err != nil {
log.Fatal("Invalid number:", err)
}
return []string{
"before2",
"before1",
"current",
"after1",
"after2",
"after3",
}[x+2]
})
// Disambiguate keys like 'months' and 'sun'.
cycleType := cldrtree.EnumFunc("cycleType", func(s string) string {
return s + "CycleType"
})
field := cldrtree.EnumFunc("field", func(s string) string {
return s + "Field"
})
timeType := cldrtree.EnumFunc("timeType", func(s string) string {
if s == "" {
return "genericTime"
}
return s + "Time"
}, "generic")
zoneType := []cldrtree.Option{cldrtree.SharedType(), timeType}
metaZoneType := []cldrtree.Option{cldrtree.SharedType(), timeType}
for _, lang := range data.Locales() {
tag := language.Make(lang)
ldml := data.RawLDML(lang)
if ldml.Dates == nil {
continue
}
x := dates.Locale(tag)
if x := x.Index(ldml.Dates.Calendars); x != nil {
for _, cal := range ldml.Dates.Calendars.Calendar {
x := x.IndexFromType(cal)
if x := x.Index(cal.Months); x != nil {
for _, mc := range cal.Months.MonthContext {
x := x.IndexFromType(mc, context)
for _, mw := range mc.MonthWidth {
x := x.IndexFromType(mw, width)
for _, m := range mw.Month {
x.SetValue(m.Yeartype+m.Type, m, month)
}
}
}
}
if x := x.Index(cal.MonthPatterns); x != nil {
for _, mc := range cal.MonthPatterns.MonthPatternContext {
x := x.IndexFromType(mc, context)
for _, mw := range mc.MonthPatternWidth {
// Value is always leap, so no need to create a
// subindex.
for _, m := range mw.MonthPattern {
x.SetValue(mw.Type, m, width)
}
}
}
}
if x := x.Index(cal.CyclicNameSets); x != nil {
for _, cns := range cal.CyclicNameSets.CyclicNameSet {
x := x.IndexFromType(cns, cycleType)
for _, cc := range cns.CyclicNameContext {
x := x.IndexFromType(cc, context)
for _, cw := range cc.CyclicNameWidth {
x := x.IndexFromType(cw, width)
for _, c := range cw.CyclicName {
x.SetValue(c.Type, c)
}
}
}
}
}
if x := x.Index(cal.Days); x != nil {
for _, dc := range cal.Days.DayContext {
x := x.IndexFromType(dc, context)
for _, dw := range dc.DayWidth {
x := x.IndexFromType(dw, width)
for _, d := range dw.Day {
x.SetValue(d.Type, d)
}
}
}
}
if x := x.Index(cal.Quarters); x != nil {
for _, qc := range cal.Quarters.QuarterContext {
x := x.IndexFromType(qc, context)
for _, qw := range qc.QuarterWidth {
x := x.IndexFromType(qw, width)
for _, q := range qw.Quarter {
x.SetValue(q.Type, q)
}
}
}
}
if x := x.Index(cal.DayPeriods); x != nil {
for _, dc := range cal.DayPeriods.DayPeriodContext {
x := x.IndexFromType(dc, context)
for _, dw := range dc.DayPeriodWidth {
x := x.IndexFromType(dw, width)
for _, d := range dw.DayPeriod {
x.IndexFromType(d).SetValue(d.Alt, d)
}
}
}
}
if x := x.Index(cal.Eras); x != nil {
opts := []cldrtree.Option{width, cldrtree.SharedType()}
if x := x.Index(cal.Eras.EraNames, opts...); x != nil {
for _, e := range cal.Eras.EraNames.Era {
x.IndexFromAlt(e).SetValue(e.Type, e)
}
}
if x := x.Index(cal.Eras.EraAbbr, opts...); x != nil {
for _, e := range cal.Eras.EraAbbr.Era {
x.IndexFromAlt(e).SetValue(e.Type, e)
}
}
if x := x.Index(cal.Eras.EraNarrow, opts...); x != nil {
for _, e := range cal.Eras.EraNarrow.Era {
x.IndexFromAlt(e).SetValue(e.Type, e)
}
}
}
if x := x.Index(cal.DateFormats); x != nil {
for _, dfl := range cal.DateFormats.DateFormatLength {
x := x.IndexFromType(dfl, length)
for _, df := range dfl.DateFormat {
for _, p := range df.Pattern {
x.SetValue(p.Alt, p)
}
}
}
}
if x := x.Index(cal.TimeFormats); x != nil {
for _, tfl := range cal.TimeFormats.TimeFormatLength {
x := x.IndexFromType(tfl, length)
for _, tf := range tfl.TimeFormat {
for _, p := range tf.Pattern {
x.SetValue(p.Alt, p)
}
}
}
}
if x := x.Index(cal.DateTimeFormats); x != nil {
for _, dtfl := range cal.DateTimeFormats.DateTimeFormatLength {
x := x.IndexFromType(dtfl, length)
for _, dtf := range dtfl.DateTimeFormat {
for _, p := range dtf.Pattern {
x.SetValue(p.Alt, p)
}
}
}
// TODO:
// - appendItems
// - intervalFormats
}
}
}
// TODO: this is a lot of data and is probably relatively little used.
// Store this somewhere else.
if x := x.Index(ldml.Dates.Fields); x != nil {
for _, f := range ldml.Dates.Fields.Field {
x := x.IndexFromType(f, field)
for _, d := range f.DisplayName {
x.Index(d).SetValue(d.Alt, d)
}
for _, r := range f.Relative {
x.Index(r).SetValue(r.Type, r, relTime)
}
for _, rt := range f.RelativeTime {
x := x.Index(rt).IndexFromType(rt)
for _, p := range rt.RelativeTimePattern {
x.SetValue(p.Count, p)
}
}
for _, rp := range f.RelativePeriod {
x.Index(rp).SetValue(rp.Alt, rp)
}
}
}
if x := x.Index(ldml.Dates.TimeZoneNames); x != nil {
format := x.IndexWithName("zoneFormat")
for _, h := range ldml.Dates.TimeZoneNames.HourFormat {
format.SetValue(h.Element(), h)
}
for _, g := range ldml.Dates.TimeZoneNames.GmtFormat {
format.SetValue(g.Element(), g)
}
for _, g := range ldml.Dates.TimeZoneNames.GmtZeroFormat {
format.SetValue(g.Element(), g)
}
for _, r := range ldml.Dates.TimeZoneNames.RegionFormat {
x.Index(r).SetValue(r.Type, r, timeType)
}
set := func(x *cldrtree.Index, e []*cldr.Common, zone string) {
for _, n := range e {
x.Index(n, zoneType...).SetValue(zone, n)
}
}
zoneWidth := []cldrtree.Option{length, cldrtree.SharedType()}
zs := x.IndexWithName("zone")
for _, z := range ldml.Dates.TimeZoneNames.Zone {
for _, l := range z.Long {
x := zs.Index(l, zoneWidth...)
set(x, l.Generic, z.Type)
set(x, l.Standard, z.Type)
set(x, l.Daylight, z.Type)
}
for _, s := range z.Short {
x := zs.Index(s, zoneWidth...)
set(x, s.Generic, z.Type)
set(x, s.Standard, z.Type)
set(x, s.Daylight, z.Type)
}
}
set = func(x *cldrtree.Index, e []*cldr.Common, zone string) {
for _, n := range e {
x.Index(n, metaZoneType...).SetValue(zone, n)
}
}
zoneWidth = []cldrtree.Option{length, cldrtree.SharedType()}
zs = x.IndexWithName("metaZone")
for _, z := range ldml.Dates.TimeZoneNames.Metazone {
for _, l := range z.Long {
x := zs.Index(l, zoneWidth...)
set(x, l.Generic, z.Type)
set(x, l.Standard, z.Type)
set(x, l.Daylight, z.Type)
}
for _, s := range z.Short {
x := zs.Index(s, zoneWidth...)
set(x, s.Generic, z.Type)
set(x, s.Standard, z.Type)
set(x, s.Daylight, z.Type)
}
}
}
}
}

View File

@ -1,241 +0,0 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package date
import (
"strconv"
"strings"
"testing"
"golang.org/x/text/internal/cldrtree"
"golang.org/x/text/internal/gen"
"golang.org/x/text/internal/testtext"
"golang.org/x/text/language"
"golang.org/x/text/unicode/cldr"
)
func TestTables(t *testing.T) {
testtext.SkipIfNotLong(t)
r := gen.OpenCLDRCoreZip()
defer r.Close()
d := &cldr.Decoder{}
d.SetDirFilter("supplemental", "main")
d.SetSectionFilter("dates")
data, err := d.DecodeZip(r)
if err != nil {
t.Fatalf("DecodeZip: %v", err)
}
count := 0
for _, lang := range data.Locales() {
ldml := data.RawLDML(lang)
if ldml.Dates == nil {
continue
}
tag, _ := language.CompactIndex(language.MustParse(lang))
test := func(want cldrtree.Element, path ...string) {
if count > 30 {
return
}
t.Run(lang+"/"+strings.Join(path, "/"), func(t *testing.T) {
p := make([]uint16, len(path))
for i, s := range path {
if v, err := strconv.Atoi(s); err == nil {
p[i] = uint16(v)
} else if v, ok := enumMap[s]; ok {
p[i] = v
} else {
count++
t.Fatalf("Unknown key %q", s)
}
}
wantStr := want.GetCommon().Data()
if got := tree.Lookup(tag, p...); got != wantStr {
count++
t.Errorf("got %q; want %q", got, wantStr)
}
})
}
width := func(s string) string { return "width" + strings.Title(s) }
if ldml.Dates.Calendars != nil {
for _, cal := range ldml.Dates.Calendars.Calendar {
if cal.Months != nil {
for _, mc := range cal.Months.MonthContext {
for _, mw := range mc.MonthWidth {
for _, m := range mw.Month {
test(m, "calendars", cal.Type, "months", mc.Type, width(mw.Type), m.Yeartype+m.Type)
}
}
}
}
if cal.MonthPatterns != nil {
for _, mc := range cal.MonthPatterns.MonthPatternContext {
for _, mw := range mc.MonthPatternWidth {
for _, m := range mw.MonthPattern {
test(m, "calendars", cal.Type, "monthPatterns", mc.Type, width(mw.Type))
}
}
}
}
if cal.CyclicNameSets != nil {
for _, cns := range cal.CyclicNameSets.CyclicNameSet {
for _, cc := range cns.CyclicNameContext {
for _, cw := range cc.CyclicNameWidth {
for _, c := range cw.CyclicName {
test(c, "calendars", cal.Type, "cyclicNameSets", cns.Type+"CycleType", cc.Type, width(cw.Type), c.Type)
}
}
}
}
}
if cal.Days != nil {
for _, dc := range cal.Days.DayContext {
for _, dw := range dc.DayWidth {
for _, d := range dw.Day {
test(d, "calendars", cal.Type, "days", dc.Type, width(dw.Type), d.Type)
}
}
}
}
if cal.Quarters != nil {
for _, qc := range cal.Quarters.QuarterContext {
for _, qw := range qc.QuarterWidth {
for _, q := range qw.Quarter {
test(q, "calendars", cal.Type, "quarters", qc.Type, width(qw.Type), q.Type)
}
}
}
}
if cal.DayPeriods != nil {
for _, dc := range cal.DayPeriods.DayPeriodContext {
for _, dw := range dc.DayPeriodWidth {
for _, d := range dw.DayPeriod {
test(d, "calendars", cal.Type, "dayPeriods", dc.Type, width(dw.Type), d.Type, d.Alt)
}
}
}
}
if cal.Eras != nil {
if cal.Eras.EraNames != nil {
for _, e := range cal.Eras.EraNames.Era {
test(e, "calendars", cal.Type, "eras", "widthWide", e.Alt, e.Type)
}
}
if cal.Eras.EraAbbr != nil {
for _, e := range cal.Eras.EraAbbr.Era {
test(e, "calendars", cal.Type, "eras", "widthAbbreviated", e.Alt, e.Type)
}
}
if cal.Eras.EraNarrow != nil {
for _, e := range cal.Eras.EraNarrow.Era {
test(e, "calendars", cal.Type, "eras", "widthNarrow", e.Alt, e.Type)
}
}
}
if cal.DateFormats != nil {
for _, dfl := range cal.DateFormats.DateFormatLength {
for _, df := range dfl.DateFormat {
for _, p := range df.Pattern {
test(p, "calendars", cal.Type, "dateFormats", dfl.Type, p.Alt)
}
}
}
}
if cal.TimeFormats != nil {
for _, tfl := range cal.TimeFormats.TimeFormatLength {
for _, tf := range tfl.TimeFormat {
for _, p := range tf.Pattern {
test(p, "calendars", cal.Type, "timeFormats", tfl.Type, p.Alt)
}
}
}
}
if cal.DateTimeFormats != nil {
for _, dtfl := range cal.DateTimeFormats.DateTimeFormatLength {
for _, dtf := range dtfl.DateTimeFormat {
for _, p := range dtf.Pattern {
test(p, "calendars", cal.Type, "dateTimeFormats", dtfl.Type, p.Alt)
}
}
}
// TODO:
// - appendItems
// - intervalFormats
}
}
}
// TODO: this is a lot of data and is probably relatively little used.
// Store this somewhere else.
if ldml.Dates.Fields != nil {
for _, f := range ldml.Dates.Fields.Field {
field := f.Type + "Field"
for _, d := range f.DisplayName {
test(d, "fields", field, "displayName", d.Alt)
}
for _, r := range f.Relative {
i, _ := strconv.Atoi(r.Type)
v := []string{"before2", "before1", "current", "after1", "after2", "after3"}[i+2]
test(r, "fields", field, "relative", v)
}
for _, rt := range f.RelativeTime {
for _, p := range rt.RelativeTimePattern {
test(p, "fields", field, "relativeTime", rt.Type, p.Count)
}
}
for _, rp := range f.RelativePeriod {
test(rp, "fields", field, "relativePeriod", rp.Alt)
}
}
}
if ldml.Dates.TimeZoneNames != nil {
for _, h := range ldml.Dates.TimeZoneNames.HourFormat {
test(h, "timeZoneNames", "zoneFormat", h.Element())
}
for _, g := range ldml.Dates.TimeZoneNames.GmtFormat {
test(g, "timeZoneNames", "zoneFormat", g.Element())
}
for _, g := range ldml.Dates.TimeZoneNames.GmtZeroFormat {
test(g, "timeZoneNames", "zoneFormat", g.Element())
}
for _, r := range ldml.Dates.TimeZoneNames.RegionFormat {
s := r.Type
if s == "" {
s = "generic"
}
test(r, "timeZoneNames", "regionFormat", s+"Time")
}
testZone := func(zoneType, zoneWidth, zone string, a ...[]*cldr.Common) {
for _, e := range a {
for _, n := range e {
test(n, "timeZoneNames", zoneType, zoneWidth, n.Element()+"Time", zone)
}
}
}
for _, z := range ldml.Dates.TimeZoneNames.Zone {
for _, l := range z.Long {
testZone("zone", l.Element(), z.Type, l.Generic, l.Standard, l.Daylight)
}
for _, l := range z.Short {
testZone("zone", l.Element(), z.Type, l.Generic, l.Standard, l.Daylight)
}
}
for _, z := range ldml.Dates.TimeZoneNames.Metazone {
for _, l := range z.Long {
testZone("metaZone", l.Element(), z.Type, l.Generic, l.Standard, l.Daylight)
}
for _, l := range z.Short {
testZone("metaZone", l.Element(), z.Type, l.Generic, l.Standard, l.Daylight)
}
}
}
}
}
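// exampleLookup is an illustrative sketch (not part of the original file)
// showing how a value in the generated tree is addressed: the path mirrors
// the CLDR structure, with each named element translated through enumMap.
func exampleLookup() string {
	tag, _ := language.CompactIndex(language.MustParse("en"))
	path := []uint16{
		enumMap["calendars"], enumMap["gregorian"], enumMap["months"],
		enumMap["format"], enumMap["widthWide"], 5, // month "5" (May)
	}
	return tree.Lookup(tag, path...)
}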

64522
vendor/golang.org/x/text/date/tables.go generated vendored

File diff suppressed because it is too large

13
vendor/golang.org/x/text/doc.go generated vendored
View File

@ -1,13 +0,0 @@
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:generate go run gen.go
// text is a repository of packages related to internationalization
// (i18n) and localization (l10n), such as character encodings, text
// transformations, and locale-specific text handling.
package text
// TODO: more documentation on general concepts, such as Transformers, use
// of normalization, etc.

View File

@ -1,249 +0,0 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:generate go run maketables.go
// Package charmap provides simple character encodings such as IBM Code Page 437
// and Windows 1252.
package charmap // import "golang.org/x/text/encoding/charmap"
import (
"unicode/utf8"
"golang.org/x/text/encoding"
"golang.org/x/text/encoding/internal"
"golang.org/x/text/encoding/internal/identifier"
"golang.org/x/text/transform"
)
// These encodings vary only in the way clients should interpret them. Their
// coded character set is identical and a single implementation can be shared.
var (
// ISO8859_6E is the ISO 8859-6E encoding.
ISO8859_6E encoding.Encoding = &iso8859_6E
// ISO8859_6I is the ISO 8859-6I encoding.
ISO8859_6I encoding.Encoding = &iso8859_6I
// ISO8859_8E is the ISO 8859-8E encoding.
ISO8859_8E encoding.Encoding = &iso8859_8E
// ISO8859_8I is the ISO 8859-8I encoding.
ISO8859_8I encoding.Encoding = &iso8859_8I
iso8859_6E = internal.Encoding{
Encoding: ISO8859_6,
Name: "ISO-8859-6E",
MIB: identifier.ISO88596E,
}
iso8859_6I = internal.Encoding{
Encoding: ISO8859_6,
Name: "ISO-8859-6I",
MIB: identifier.ISO88596I,
}
iso8859_8E = internal.Encoding{
Encoding: ISO8859_8,
Name: "ISO-8859-8E",
MIB: identifier.ISO88598E,
}
iso8859_8I = internal.Encoding{
Encoding: ISO8859_8,
Name: "ISO-8859-8I",
MIB: identifier.ISO88598I,
}
)
// All is a list of all defined encodings in this package.
var All []encoding.Encoding = listAll
// TODO: implement these encodings, in order of importance.
// ASCII, ISO8859_1: Rather common. Close to Windows 1252.
// ISO8859_9: Close to Windows 1254.
// utf8Enc holds a rune's UTF-8 encoding in data[:len].
type utf8Enc struct {
len uint8
data [3]byte
}
// Charmap is an 8-bit character set encoding.
type Charmap struct {
// name is the encoding's name.
name string
// mib is the encoding type of this encoder.
mib identifier.MIB
// asciiSuperset states whether the encoding is a superset of ASCII.
asciiSuperset bool
// low is the lower bound of the encoded byte for a non-ASCII rune. If
// Charmap.asciiSuperset is true then this will be 0x80, otherwise 0x00.
low uint8
// replacement is the encoded replacement character.
replacement byte
// decode is the map from encoded byte to UTF-8.
decode [256]utf8Enc
// encoding is the map from runes to encoded bytes. Each entry is a
// uint32: the high 8 bits are the encoded byte and the low 24 bits are
// the rune. The table entries are sorted by ascending rune.
encode [256]uint32
}
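// splitEncodeEntry is an illustrative sketch (not part of the original file)
// of the packing used by the encode table above: the high 8 bits hold the
// encoded byte and the low 24 bits hold the rune.
func splitEncodeEntry(e uint32) (b byte, r rune) {
	return byte(e >> 24), rune(e & (1<<24 - 1))
}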
// NewDecoder implements the encoding.Encoding interface.
func (m *Charmap) NewDecoder() *encoding.Decoder {
return &encoding.Decoder{Transformer: charmapDecoder{charmap: m}}
}
// NewEncoder implements the encoding.Encoding interface.
func (m *Charmap) NewEncoder() *encoding.Encoder {
return &encoding.Encoder{Transformer: charmapEncoder{charmap: m}}
}
// String returns the Charmap's name.
func (m *Charmap) String() string {
return m.name
}
// ID implements an internal interface.
func (m *Charmap) ID() (mib identifier.MIB, other string) {
return m.mib, ""
}
// charmapDecoder implements transform.Transformer by decoding to UTF-8.
type charmapDecoder struct {
transform.NopResetter
charmap *Charmap
}
func (m charmapDecoder) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
for i, c := range src {
if m.charmap.asciiSuperset && c < utf8.RuneSelf {
if nDst >= len(dst) {
err = transform.ErrShortDst
break
}
dst[nDst] = c
nDst++
nSrc = i + 1
continue
}
decode := &m.charmap.decode[c]
n := int(decode.len)
if nDst+n > len(dst) {
err = transform.ErrShortDst
break
}
// It's 15% faster to avoid calling copy for these tiny slices.
for j := 0; j < n; j++ {
dst[nDst] = decode.data[j]
nDst++
}
nSrc = i + 1
}
return nDst, nSrc, err
}
// DecodeByte returns the Charmap's rune decoding of the byte b.
func (m *Charmap) DecodeByte(b byte) rune {
switch x := &m.decode[b]; x.len {
case 1:
return rune(x.data[0])
case 2:
return rune(x.data[0]&0x1f)<<6 | rune(x.data[1]&0x3f)
default:
return rune(x.data[0]&0x0f)<<12 | rune(x.data[1]&0x3f)<<6 | rune(x.data[2]&0x3f)
}
}
// charmapEncoder implements transform.Transformer by encoding from UTF-8.
type charmapEncoder struct {
transform.NopResetter
charmap *Charmap
}
func (m charmapEncoder) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
r, size := rune(0), 0
loop:
for nSrc < len(src) {
if nDst >= len(dst) {
err = transform.ErrShortDst
break
}
r = rune(src[nSrc])
// Decode a 1-byte rune.
if r < utf8.RuneSelf {
if m.charmap.asciiSuperset {
nSrc++
dst[nDst] = uint8(r)
nDst++
continue
}
size = 1
} else {
// Decode a multi-byte rune.
r, size = utf8.DecodeRune(src[nSrc:])
if size == 1 {
// All valid runes of size 1 (those below utf8.RuneSelf) were
// handled above. We have invalid UTF-8 or we haven't seen the
// full character yet.
if !atEOF && !utf8.FullRune(src[nSrc:]) {
err = transform.ErrShortSrc
} else {
err = internal.RepertoireError(m.charmap.replacement)
}
break
}
}
// Binary search in [low, high) for that rune in the m.charmap.encode table.
for low, high := int(m.charmap.low), 0x100; ; {
if low >= high {
err = internal.RepertoireError(m.charmap.replacement)
break loop
}
mid := (low + high) / 2
got := m.charmap.encode[mid]
gotRune := rune(got & (1<<24 - 1))
if gotRune < r {
low = mid + 1
} else if gotRune > r {
high = mid
} else {
dst[nDst] = byte(got >> 24)
nDst++
break
}
}
nSrc += size
}
return nDst, nSrc, err
}
// EncodeRune returns the Charmap's byte encoding of the rune r. ok is whether
// r is in the Charmap's repertoire. If not, b is set to the Charmap's
// replacement byte. This is often the ASCII substitute character '\x1a'.
func (m *Charmap) EncodeRune(r rune) (b byte, ok bool) {
if r < utf8.RuneSelf && m.asciiSuperset {
return byte(r), true
}
for low, high := int(m.low), 0x100; ; {
if low >= high {
return m.replacement, false
}
mid := (low + high) / 2
got := m.encode[mid]
gotRune := rune(got & (1<<24 - 1))
if gotRune < r {
low = mid + 1
} else if gotRune > r {
high = mid
} else {
return byte(got >> 24), true
}
}
}
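// exampleRoundTrip is an illustrative usage sketch (not part of the original
// file); Windows1252 is one of the Charmap values defined in the generated
// tables.go of this package.
func exampleRoundTrip() {
	b, ok := Windows1252.EncodeRune('é') // 0xe9, true
	if ok {
		_ = Windows1252.DecodeByte(b) // 'é'
	}
}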

View File

@ -1,258 +0,0 @@
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package charmap
import (
"testing"
"golang.org/x/text/encoding"
"golang.org/x/text/encoding/internal"
"golang.org/x/text/encoding/internal/enctest"
"golang.org/x/text/transform"
)
func dec(e encoding.Encoding) (dir string, t transform.Transformer, err error) {
return "Decode", e.NewDecoder(), nil
}
func encASCIISuperset(e encoding.Encoding) (dir string, t transform.Transformer, err error) {
return "Encode", e.NewEncoder(), internal.ErrASCIIReplacement
}
func encEBCDIC(e encoding.Encoding) (dir string, t transform.Transformer, err error) {
return "Encode", e.NewEncoder(), internal.RepertoireError(0x3f)
}
func TestNonRepertoire(t *testing.T) {
testCases := []struct {
init func(e encoding.Encoding) (string, transform.Transformer, error)
e encoding.Encoding
src, want string
}{
{dec, Windows1252, "\x81", "\ufffd"},
{encEBCDIC, CodePage037, "갂", ""},
{encEBCDIC, CodePage1047, "갂", ""},
{encEBCDIC, CodePage1047, "a¤갂", "\x81\x9F"},
{encEBCDIC, CodePage1140, "갂", ""},
{encEBCDIC, CodePage1140, "a€갂", "\x81\x9F"},
{encASCIISuperset, Windows1252, "갂", ""},
{encASCIISuperset, Windows1252, "a갂", "a"},
{encASCIISuperset, Windows1252, "\u00E9갂", "\xE9"},
}
for _, tc := range testCases {
dir, tr, wantErr := tc.init(tc.e)
dst, _, err := transform.String(tr, tc.src)
if err != wantErr {
t.Errorf("%s %v(%q): got %v; want %v", dir, tc.e, tc.src, err, wantErr)
}
if got := string(dst); got != tc.want {
t.Errorf("%s %v(%q):\ngot %q\nwant %q", dir, tc.e, tc.src, got, tc.want)
}
}
}
func TestBasics(t *testing.T) {
testCases := []struct {
e encoding.Encoding
encoded string
utf8 string
}{{
e: CodePage037,
encoded: "\xc8\x51\xba\x93\xcf",
utf8: "Hé[lõ",
}, {
e: CodePage437,
encoded: "H\x82ll\x93 \x9d\xa7\xf4\x9c\xbe",
utf8: "Héllô ¥º⌠£╛",
}, {
e: CodePage866,
encoded: "H\xf3\xd3o \x98\xfd\x9f\xdd\xa1",
utf8: "Hє╙o Ш¤Я▌б",
}, {
e: CodePage1047,
encoded: "\xc8\x54\x93\x93\x9f",
utf8: "Hèll¤",
}, {
e: CodePage1140,
encoded: "\xc8\x9f\x93\x93\xcf",
utf8: "H€llõ",
}, {
e: ISO8859_2,
encoded: "Hel\xe5\xf5",
utf8: "Helĺő",
}, {
e: ISO8859_3,
encoded: "He\xbd\xd4",
utf8: "He½Ô",
}, {
e: ISO8859_4,
encoded: "Hel\xb6\xf8",
utf8: "Helļø",
}, {
e: ISO8859_5,
encoded: "H\xd7\xc6o",
utf8: "HзЦo",
}, {
e: ISO8859_6,
encoded: "Hel\xc2\xc9",
utf8: "Helآة",
}, {
e: ISO8859_7,
encoded: "H\xeel\xebo",
utf8: "Hξlλo",
}, {
e: ISO8859_8,
encoded: "Hel\xf5\xed",
utf8: "Helץם",
}, {
e: ISO8859_9,
encoded: "\xdeayet",
utf8: "Şayet",
}, {
e: ISO8859_10,
encoded: "H\xea\xbfo",
utf8: "Hęŋo",
}, {
e: ISO8859_13,
encoded: "H\xe6l\xf9o",
utf8: "Hęlło",
}, {
e: ISO8859_14,
encoded: "He\xfe\xd0o",
utf8: "HeŷŴo",
}, {
e: ISO8859_15,
encoded: "H\xa4ll\xd8",
utf8: "H€llØ",
}, {
e: ISO8859_16,
encoded: "H\xe6ll\xbd",
utf8: "Hællœ",
}, {
e: KOI8R,
encoded: "He\x93\xad\x9c",
utf8: "He⌠╜°",
}, {
e: KOI8U,
encoded: "He\x93\xad\x9c",
utf8: "He⌠ґ°",
}, {
e: Macintosh,
encoded: "He\xdf\xd7",
utf8: "Hefl◊",
}, {
e: MacintoshCyrillic,
encoded: "He\xbe\x94",
utf8: "HeЊФ",
}, {
e: Windows874,
encoded: "He\xb7\xf0",
utf8: "Heท",
}, {
e: Windows1250,
encoded: "He\xe5\xe5o",
utf8: "Heĺĺo",
}, {
e: Windows1251,
encoded: "H\xball\xfe",
utf8: "Hєllю",
}, {
e: Windows1252,
encoded: "H\xe9ll\xf4 \xa5\xbA\xae\xa3\xd0",
utf8: "Héllô ¥º®£Ð",
}, {
e: Windows1253,
encoded: "H\xe5ll\xd6",
utf8: "HεllΦ",
}, {
e: Windows1254,
encoded: "\xd0ello",
utf8: "Ğello",
}, {
e: Windows1255,
encoded: "He\xd4o",
utf8: "Heװo",
}, {
e: Windows1256,
encoded: "H\xdbllo",
utf8: "Hغllo",
}, {
e: Windows1257,
encoded: "He\xeflo",
utf8: "Heļlo",
}, {
e: Windows1258,
encoded: "Hell\xf5",
utf8: "Hellơ",
}, {
e: XUserDefined,
encoded: "\x00\x40\x7f\x80\xab\xff",
utf8: "\u0000\u0040\u007f\uf780\uf7ab\uf7ff",
}}
for _, tc := range testCases {
enctest.TestEncoding(t, tc.e, tc.encoded, tc.utf8, "", "")
}
}
var windows1255TestCases = []struct {
b byte
ok bool
r rune
}{
{'\x00', true, '\u0000'},
{'\x1a', true, '\u001a'},
{'\x61', true, '\u0061'},
{'\x7f', true, '\u007f'},
{'\x80', true, '\u20ac'},
{'\x95', true, '\u2022'},
{'\xa0', true, '\u00a0'},
{'\xc0', true, '\u05b0'},
{'\xfc', true, '\ufffd'},
{'\xfd', true, '\u200e'},
{'\xfe', true, '\u200f'},
{'\xff', true, '\ufffd'},
{encoding.ASCIISub, false, '\u0400'},
{encoding.ASCIISub, false, '\u2603'},
{encoding.ASCIISub, false, '\U0001f4a9'},
}
func TestDecodeByte(t *testing.T) {
for _, tc := range windows1255TestCases {
if !tc.ok {
continue
}
got := Windows1255.DecodeByte(tc.b)
want := tc.r
if got != want {
t.Errorf("DecodeByte(%#02x): got %#08x, want %#08x", tc.b, got, want)
}
}
}
func TestEncodeRune(t *testing.T) {
for _, tc := range windows1255TestCases {
// There can be multiple tc.b values that map to tc.r = '\ufffd'.
if tc.r == '\ufffd' {
continue
}
gotB, gotOK := Windows1255.EncodeRune(tc.r)
wantB, wantOK := tc.b, tc.ok
if gotB != wantB || gotOK != wantOK {
t.Errorf("EncodeRune(%#08x): got (%#02x, %t), want (%#02x, %t)", tc.r, gotB, gotOK, wantB, wantOK)
}
}
}
func TestFiles(t *testing.T) { enctest.TestFile(t, Windows1252) }
func BenchmarkEncoding(b *testing.B) { enctest.Benchmark(b, Windows1252) }

View File

@ -1,556 +0,0 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
package main
import (
"bufio"
"fmt"
"log"
"net/http"
"sort"
"strings"
"unicode/utf8"
"golang.org/x/text/encoding"
"golang.org/x/text/internal/gen"
)
const ascii = "\x00\x01\x02\x03\x04\x05\x06\x07\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f" +
"\x10\x11\x12\x13\x14\x15\x16\x17\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f" +
` !"#$%&'()*+,-./0123456789:;<=>?` +
`@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_` +
"`abcdefghijklmnopqrstuvwxyz{|}~\u007f"
var encodings = []struct {
name string
mib string
comment string
varName string
replacement byte
mapping string
}{
{
"IBM Code Page 037",
"IBM037",
"",
"CodePage037",
0x3f,
"http://source.icu-project.org/repos/icu/data/trunk/charset/data/ucm/glibc-IBM037-2.1.2.ucm",
},
{
"IBM Code Page 437",
"PC8CodePage437",
"",
"CodePage437",
encoding.ASCIISub,
"http://source.icu-project.org/repos/icu/data/trunk/charset/data/ucm/glibc-IBM437-2.1.2.ucm",
},
{
"IBM Code Page 850",
"PC850Multilingual",
"",
"CodePage850",
encoding.ASCIISub,
"http://source.icu-project.org/repos/icu/data/trunk/charset/data/ucm/glibc-IBM850-2.1.2.ucm",
},
{
"IBM Code Page 852",
"PCp852",
"",
"CodePage852",
encoding.ASCIISub,
"http://source.icu-project.org/repos/icu/data/trunk/charset/data/ucm/glibc-IBM852-2.1.2.ucm",
},
{
"IBM Code Page 855",
"IBM855",
"",
"CodePage855",
encoding.ASCIISub,
"http://source.icu-project.org/repos/icu/data/trunk/charset/data/ucm/glibc-IBM855-2.1.2.ucm",
},
{
"Windows Code Page 858", // PC latin1 with Euro
"IBM00858",
"",
"CodePage858",
encoding.ASCIISub,
"http://source.icu-project.org/repos/icu/data/trunk/charset/data/ucm/windows-858-2000.ucm",
},
{
"IBM Code Page 860",
"IBM860",
"",
"CodePage860",
encoding.ASCIISub,
"http://source.icu-project.org/repos/icu/data/trunk/charset/data/ucm/glibc-IBM860-2.1.2.ucm",
},
{
"IBM Code Page 862",
"PC862LatinHebrew",
"",
"CodePage862",
encoding.ASCIISub,
"http://source.icu-project.org/repos/icu/data/trunk/charset/data/ucm/glibc-IBM862-2.1.2.ucm",
},
{
"IBM Code Page 863",
"IBM863",
"",
"CodePage863",
encoding.ASCIISub,
"http://source.icu-project.org/repos/icu/data/trunk/charset/data/ucm/glibc-IBM863-2.1.2.ucm",
},
{
"IBM Code Page 865",
"IBM865",
"",
"CodePage865",
encoding.ASCIISub,
"http://source.icu-project.org/repos/icu/data/trunk/charset/data/ucm/glibc-IBM865-2.1.2.ucm",
},
{
"IBM Code Page 866",
"IBM866",
"",
"CodePage866",
encoding.ASCIISub,
"http://encoding.spec.whatwg.org/index-ibm866.txt",
},
{
"IBM Code Page 1047",
"IBM1047",
"",
"CodePage1047",
0x3f,
"http://source.icu-project.org/repos/icu/data/trunk/charset/data/ucm/glibc-IBM1047-2.1.2.ucm",
},
{
"IBM Code Page 1140",
"IBM01140",
"",
"CodePage1140",
0x3f,
"http://source.icu-project.org/repos/icu/data/trunk/charset/data/ucm/ibm-1140_P100-1997.ucm",
},
{
"ISO 8859-1",
"ISOLatin1",
"",
"ISO8859_1",
encoding.ASCIISub,
"http://source.icu-project.org/repos/icu/data/trunk/charset/data/ucm/iso-8859_1-1998.ucm",
},
{
"ISO 8859-2",
"ISOLatin2",
"",
"ISO8859_2",
encoding.ASCIISub,
"http://encoding.spec.whatwg.org/index-iso-8859-2.txt",
},
{
"ISO 8859-3",
"ISOLatin3",
"",
"ISO8859_3",
encoding.ASCIISub,
"http://encoding.spec.whatwg.org/index-iso-8859-3.txt",
},
{
"ISO 8859-4",
"ISOLatin4",
"",
"ISO8859_4",
encoding.ASCIISub,
"http://encoding.spec.whatwg.org/index-iso-8859-4.txt",
},
{
"ISO 8859-5",
"ISOLatinCyrillic",
"",
"ISO8859_5",
encoding.ASCIISub,
"http://encoding.spec.whatwg.org/index-iso-8859-5.txt",
},
{
"ISO 8859-6",
"ISOLatinArabic",
"",
"ISO8859_6,ISO8859_6E,ISO8859_6I",
encoding.ASCIISub,
"http://encoding.spec.whatwg.org/index-iso-8859-6.txt",
},
{
"ISO 8859-7",
"ISOLatinGreek",
"",
"ISO8859_7",
encoding.ASCIISub,
"http://encoding.spec.whatwg.org/index-iso-8859-7.txt",
},
{
"ISO 8859-8",
"ISOLatinHebrew",
"",
"ISO8859_8,ISO8859_8E,ISO8859_8I",
encoding.ASCIISub,
"http://encoding.spec.whatwg.org/index-iso-8859-8.txt",
},
{
"ISO 8859-9",
"ISOLatin5",
"",
"ISO8859_9",
encoding.ASCIISub,
"http://source.icu-project.org/repos/icu/data/trunk/charset/data/ucm/iso-8859_9-1999.ucm",
},
{
"ISO 8859-10",
"ISOLatin6",
"",
"ISO8859_10",
encoding.ASCIISub,
"http://encoding.spec.whatwg.org/index-iso-8859-10.txt",
},
{
"ISO 8859-13",
"ISO885913",
"",
"ISO8859_13",
encoding.ASCIISub,
"http://encoding.spec.whatwg.org/index-iso-8859-13.txt",
},
{
"ISO 8859-14",
"ISO885914",
"",
"ISO8859_14",
encoding.ASCIISub,
"http://encoding.spec.whatwg.org/index-iso-8859-14.txt",
},
{
"ISO 8859-15",
"ISO885915",
"",
"ISO8859_15",
encoding.ASCIISub,
"http://encoding.spec.whatwg.org/index-iso-8859-15.txt",
},
{
"ISO 8859-16",
"ISO885916",
"",
"ISO8859_16",
encoding.ASCIISub,
"http://encoding.spec.whatwg.org/index-iso-8859-16.txt",
},
{
"KOI8-R",
"KOI8R",
"",
"KOI8R",
encoding.ASCIISub,
"http://encoding.spec.whatwg.org/index-koi8-r.txt",
},
{
"KOI8-U",
"KOI8U",
"",
"KOI8U",
encoding.ASCIISub,
"http://encoding.spec.whatwg.org/index-koi8-u.txt",
},
{
"Macintosh",
"Macintosh",
"",
"Macintosh",
encoding.ASCIISub,
"http://encoding.spec.whatwg.org/index-macintosh.txt",
},
{
"Macintosh Cyrillic",
"MacintoshCyrillic",
"",
"MacintoshCyrillic",
encoding.ASCIISub,
"http://encoding.spec.whatwg.org/index-x-mac-cyrillic.txt",
},
{
"Windows 874",
"Windows874",
"",
"Windows874",
encoding.ASCIISub,
"http://encoding.spec.whatwg.org/index-windows-874.txt",
},
{
"Windows 1250",
"Windows1250",
"",
"Windows1250",
encoding.ASCIISub,
"http://encoding.spec.whatwg.org/index-windows-1250.txt",
},
{
"Windows 1251",
"Windows1251",
"",
"Windows1251",
encoding.ASCIISub,
"http://encoding.spec.whatwg.org/index-windows-1251.txt",
},
{
"Windows 1252",
"Windows1252",
"",
"Windows1252",
encoding.ASCIISub,
"http://encoding.spec.whatwg.org/index-windows-1252.txt",
},
{
"Windows 1253",
"Windows1253",
"",
"Windows1253",
encoding.ASCIISub,
"http://encoding.spec.whatwg.org/index-windows-1253.txt",
},
{
"Windows 1254",
"Windows1254",
"",
"Windows1254",
encoding.ASCIISub,
"http://encoding.spec.whatwg.org/index-windows-1254.txt",
},
{
"Windows 1255",
"Windows1255",
"",
"Windows1255",
encoding.ASCIISub,
"http://encoding.spec.whatwg.org/index-windows-1255.txt",
},
{
"Windows 1256",
"Windows1256",
"",
"Windows1256",
encoding.ASCIISub,
"http://encoding.spec.whatwg.org/index-windows-1256.txt",
},
{
"Windows 1257",
"Windows1257",
"",
"Windows1257",
encoding.ASCIISub,
"http://encoding.spec.whatwg.org/index-windows-1257.txt",
},
{
"Windows 1258",
"Windows1258",
"",
"Windows1258",
encoding.ASCIISub,
"http://encoding.spec.whatwg.org/index-windows-1258.txt",
},
{
"X-User-Defined",
"XUserDefined",
"It is defined at http://encoding.spec.whatwg.org/#x-user-defined",
"XUserDefined",
encoding.ASCIISub,
ascii +
"\uf780\uf781\uf782\uf783\uf784\uf785\uf786\uf787" +
"\uf788\uf789\uf78a\uf78b\uf78c\uf78d\uf78e\uf78f" +
"\uf790\uf791\uf792\uf793\uf794\uf795\uf796\uf797" +
"\uf798\uf799\uf79a\uf79b\uf79c\uf79d\uf79e\uf79f" +
"\uf7a0\uf7a1\uf7a2\uf7a3\uf7a4\uf7a5\uf7a6\uf7a7" +
"\uf7a8\uf7a9\uf7aa\uf7ab\uf7ac\uf7ad\uf7ae\uf7af" +
"\uf7b0\uf7b1\uf7b2\uf7b3\uf7b4\uf7b5\uf7b6\uf7b7" +
"\uf7b8\uf7b9\uf7ba\uf7bb\uf7bc\uf7bd\uf7be\uf7bf" +
"\uf7c0\uf7c1\uf7c2\uf7c3\uf7c4\uf7c5\uf7c6\uf7c7" +
"\uf7c8\uf7c9\uf7ca\uf7cb\uf7cc\uf7cd\uf7ce\uf7cf" +
"\uf7d0\uf7d1\uf7d2\uf7d3\uf7d4\uf7d5\uf7d6\uf7d7" +
"\uf7d8\uf7d9\uf7da\uf7db\uf7dc\uf7dd\uf7de\uf7df" +
"\uf7e0\uf7e1\uf7e2\uf7e3\uf7e4\uf7e5\uf7e6\uf7e7" +
"\uf7e8\uf7e9\uf7ea\uf7eb\uf7ec\uf7ed\uf7ee\uf7ef" +
"\uf7f0\uf7f1\uf7f2\uf7f3\uf7f4\uf7f5\uf7f6\uf7f7" +
"\uf7f8\uf7f9\uf7fa\uf7fb\uf7fc\uf7fd\uf7fe\uf7ff",
},
}
func getWHATWG(url string) string {
res, err := http.Get(url)
if err != nil {
log.Fatalf("%q: Get: %v", url, err)
}
defer res.Body.Close()
mapping := make([]rune, 128)
for i := range mapping {
mapping[i] = '\ufffd'
}
scanner := bufio.NewScanner(res.Body)
for scanner.Scan() {
s := strings.TrimSpace(scanner.Text())
if s == "" || s[0] == '#' {
continue
}
x, y := 0, 0
if _, err := fmt.Sscanf(s, "%d\t0x%x", &x, &y); err != nil {
log.Fatalf("could not parse %q", s)
}
if x < 0 || 128 <= x {
log.Fatalf("code %d is out of range", x)
}
if 0x80 <= y && y < 0xa0 {
// We diverge from the WHATWG spec by mapping control characters
// in the range [0x80, 0xa0) to U+FFFD.
continue
}
mapping[x] = rune(y)
}
return ascii + string(mapping)
}
func getUCM(url string) string {
res, err := http.Get(url)
if err != nil {
log.Fatalf("%q: Get: %v", url, err)
}
defer res.Body.Close()
mapping := make([]rune, 256)
for i := range mapping {
mapping[i] = '\ufffd'
}
charsFound := 0
scanner := bufio.NewScanner(res.Body)
for scanner.Scan() {
s := strings.TrimSpace(scanner.Text())
if s == "" || s[0] == '#' {
continue
}
var c byte
var r rune
if _, err := fmt.Sscanf(s, `<U%x> \x%x |0`, &r, &c); err != nil {
continue
}
mapping[c] = r
charsFound++
}
if charsFound < 200 {
log.Fatalf("%q: only %d characters found (wrong page format?)", url, charsFound)
}
return string(mapping)
}
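// exampleParseUCMLine is an illustrative sketch (not part of the original
// file) of the .ucm line format parsed above; in the IBM037 table the encoded
// byte 0x51 maps to U+00E9 ('é').
func exampleParseUCMLine() {
	var c byte
	var r rune
	fmt.Sscanf(`<U00E9> \x51 |0`, `<U%x> \x%x |0`, &r, &c)
	_, _ = r, c // r == 'é', c == 0x51
}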
func main() {
mibs := map[string]bool{}
all := []string{}
w := gen.NewCodeWriter()
defer w.WriteGoFile("tables.go", "charmap")
printf := func(s string, a ...interface{}) { fmt.Fprintf(w, s, a...) }
printf("import (\n")
printf("\t\"golang.org/x/text/encoding\"\n")
printf("\t\"golang.org/x/text/encoding/internal/identifier\"\n")
printf(")\n\n")
for _, e := range encodings {
varNames := strings.Split(e.varName, ",")
all = append(all, varNames...)
varName := varNames[0]
switch {
case strings.HasPrefix(e.mapping, "http://encoding.spec.whatwg.org/"):
e.mapping = getWHATWG(e.mapping)
case strings.HasPrefix(e.mapping, "http://source.icu-project.org/repos/icu/data/trunk/charset/data/ucm/"):
e.mapping = getUCM(e.mapping)
}
asciiSuperset, low := strings.HasPrefix(e.mapping, ascii), 0x00
if asciiSuperset {
low = 0x80
}
lvn := 1
if strings.HasPrefix(varName, "ISO") || strings.HasPrefix(varName, "KOI") {
lvn = 3
}
lowerVarName := strings.ToLower(varName[:lvn]) + varName[lvn:]
printf("// %s is the %s encoding.\n", varName, e.name)
if e.comment != "" {
printf("//\n// %s\n", e.comment)
}
printf("var %s *Charmap = &%s\n\nvar %s = Charmap{\nname: %q,\n",
varName, lowerVarName, lowerVarName, e.name)
if mibs[e.mib] {
log.Fatalf("MIB type %q declared multiple times.", e.mib)
}
printf("mib: identifier.%s,\n", e.mib)
printf("asciiSuperset: %t,\n", asciiSuperset)
printf("low: 0x%02x,\n", low)
printf("replacement: 0x%02x,\n", e.replacement)
printf("decode: [256]utf8Enc{\n")
i, backMapping := 0, map[rune]byte{}
for _, c := range e.mapping {
if _, ok := backMapping[c]; !ok && c != utf8.RuneError {
backMapping[c] = byte(i)
}
var buf [8]byte
n := utf8.EncodeRune(buf[:], c)
if n > 3 {
panic(fmt.Sprintf("rune %q (%U) is too long", c, c))
}
printf("{%d,[3]byte{0x%02x,0x%02x,0x%02x}},", n, buf[0], buf[1], buf[2])
if i%2 == 1 {
printf("\n")
}
i++
}
printf("},\n")
printf("encode: [256]uint32{\n")
encode := make([]uint32, 0, 256)
for c, i := range backMapping {
encode = append(encode, uint32(i)<<24|uint32(c))
}
sort.Sort(byRune(encode))
for len(encode) < cap(encode) {
encode = append(encode, encode[len(encode)-1])
}
for i, enc := range encode {
printf("0x%08x,", enc)
if i%8 == 7 {
printf("\n")
}
}
printf("},\n}\n")
// Add an estimate of the size of a single Charmap{} struct value, which
// includes two 256 elem arrays of 4 bytes and some extra fields, which
// align to 3 uint64s on 64-bit architectures.
w.Size += 2*4*256 + 3*8
}
// TODO: add proper line breaking.
printf("var listAll = []encoding.Encoding{\n%s,\n}\n\n", strings.Join(all, ",\n"))
}
type byRune []uint32
func (b byRune) Len() int { return len(b) }
func (b byRune) Less(i, j int) bool { return b[i]&0xffffff < b[j]&0xffffff }
func (b byRune) Swap(i, j int) { b[i], b[j] = b[j], b[i] }

File diff suppressed because it is too large

View File

@ -1,335 +0,0 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package encoding defines an interface for character encodings, such as Shift
// JIS and Windows 1252, that can convert to and from UTF-8.
//
// Encoding implementations are provided in other packages, such as
// golang.org/x/text/encoding/charmap and
// golang.org/x/text/encoding/japanese.
package encoding // import "golang.org/x/text/encoding"
import (
"errors"
"io"
"strconv"
"unicode/utf8"
"golang.org/x/text/encoding/internal/identifier"
"golang.org/x/text/transform"
)
// TODO:
// - There seems to be some inconsistency in when decoders return errors
// and when not. Also documentation seems to suggest they shouldn't return
// errors at all (except for UTF-16).
// - Encoders seem to rely on or at least benefit from the input being in NFC
// normal form. Perhaps add an example how users could prepare their output.
// Encoding is a character set encoding that can be transformed to and from
// UTF-8.
type Encoding interface {
// NewDecoder returns a Decoder.
NewDecoder() *Decoder
// NewEncoder returns an Encoder.
NewEncoder() *Encoder
}
// A Decoder converts bytes to UTF-8. It implements transform.Transformer.
//
// Transforming source bytes that are not of that encoding will not result in an
// error per se. Each byte that cannot be transcoded will be represented in the
// output by the UTF-8 encoding of '\uFFFD', the replacement rune.
type Decoder struct {
transform.Transformer
// This forces external creators of Decoders to use names in struct
// initializers, allowing for future extensibility without having to break
// code.
_ struct{}
}
// Bytes converts the given encoded bytes to UTF-8. It returns the converted
// bytes or nil, err if any error occurred.
func (d *Decoder) Bytes(b []byte) ([]byte, error) {
b, _, err := transform.Bytes(d, b)
if err != nil {
return nil, err
}
return b, nil
}
// String converts the given encoded string to UTF-8. It returns the converted
// string or "", err if any error occurred.
func (d *Decoder) String(s string) (string, error) {
s, _, err := transform.String(d, s)
if err != nil {
return "", err
}
return s, nil
}
// Reader wraps another Reader to decode its bytes.
//
// The Decoder may not be used for any other operation as long as the returned
// Reader is in use.
func (d *Decoder) Reader(r io.Reader) io.Reader {
return transform.NewReader(r, d)
}
// An Encoder converts bytes from UTF-8. It implements transform.Transformer.
//
// Each rune that cannot be transcoded will result in an error. In this case,
// the transform will consume all source bytes up to, not including the offending
// rune. Source bytes that are not valid UTF-8 will be replaced by
// `\uFFFD`. To return early with an error instead, use transform.Chain to
// preprocess the data with a UTF8Validator.
type Encoder struct {
transform.Transformer
// This forces external creators of Encoders to use names in struct
// initializers, allowing for future extensibility without having to break
// code.
_ struct{}
}
// Bytes converts bytes from UTF-8. It returns the converted bytes or nil, err if
// any error occurred.
func (e *Encoder) Bytes(b []byte) ([]byte, error) {
b, _, err := transform.Bytes(e, b)
if err != nil {
return nil, err
}
return b, nil
}
// String converts a string from UTF-8. It returns the converted string or
// "", err if any error occurred.
func (e *Encoder) String(s string) (string, error) {
s, _, err := transform.String(e, s)
if err != nil {
return "", err
}
return s, nil
}
// Writer wraps another Writer to encode its UTF-8 output.
//
// The Encoder may not be used for any other operation as long as the returned
// Writer is in use.
func (e *Encoder) Writer(w io.Writer) io.Writer {
return transform.NewWriter(w, e)
}
// ASCIISub is the ASCII substitute character, as recommended by
// http://unicode.org/reports/tr36/#Text_Comparison
const ASCIISub = '\x1a'
// Nop is the nop encoding. Its transformed bytes are the same as the source
// bytes; it does not replace invalid UTF-8 sequences.
var Nop Encoding = nop{}
type nop struct{}
func (nop) NewDecoder() *Decoder {
return &Decoder{Transformer: transform.Nop}
}
func (nop) NewEncoder() *Encoder {
return &Encoder{Transformer: transform.Nop}
}
// Replacement is the replacement encoding. Decoding from the replacement
// encoding yields a single '\uFFFD' replacement rune. Encoding from UTF-8 to
// the replacement encoding yields the same as the source bytes except that
// invalid UTF-8 is converted to '\uFFFD'.
//
// It is defined at http://encoding.spec.whatwg.org/#replacement
var Replacement Encoding = replacement{}
type replacement struct{}
func (replacement) NewDecoder() *Decoder {
return &Decoder{Transformer: replacementDecoder{}}
}
func (replacement) NewEncoder() *Encoder {
return &Encoder{Transformer: replacementEncoder{}}
}
func (replacement) ID() (mib identifier.MIB, other string) {
return identifier.Replacement, ""
}
type replacementDecoder struct{ transform.NopResetter }
func (replacementDecoder) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
if len(dst) < 3 {
return 0, 0, transform.ErrShortDst
}
if atEOF {
const fffd = "\ufffd"
dst[0] = fffd[0]
dst[1] = fffd[1]
dst[2] = fffd[2]
nDst = 3
}
return nDst, len(src), nil
}
type replacementEncoder struct{ transform.NopResetter }
func (replacementEncoder) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
r, size := rune(0), 0
for ; nSrc < len(src); nSrc += size {
r = rune(src[nSrc])
// Decode a 1-byte rune.
if r < utf8.RuneSelf {
size = 1
} else {
// Decode a multi-byte rune.
r, size = utf8.DecodeRune(src[nSrc:])
if size == 1 {
// All valid runes of size 1 (those below utf8.RuneSelf) were
// handled above. We have invalid UTF-8 or we haven't seen the
// full character yet.
if !atEOF && !utf8.FullRune(src[nSrc:]) {
err = transform.ErrShortSrc
break
}
r = '\ufffd'
}
}
if nDst+utf8.RuneLen(r) > len(dst) {
err = transform.ErrShortDst
break
}
nDst += utf8.EncodeRune(dst[nDst:], r)
}
return nDst, nSrc, err
}
// HTMLEscapeUnsupported wraps encoders to replace source runes outside the
// repertoire of the destination encoding with HTML escape sequences.
//
// This wrapper exists to comply with URL and HTML forms requiring a
// non-terminating legacy encoder. The produced sequences may lead to data
// loss as they are indistinguishable from legitimate input. To avoid this
// issue, use UTF-8 encodings whenever possible.
func HTMLEscapeUnsupported(e *Encoder) *Encoder {
return &Encoder{Transformer: &errorHandler{e, errorToHTML}}
}
// ReplaceUnsupported wraps encoders to replace source runes outside the
// repertoire of the destination encoding with an encoding-specific
// replacement.
//
// This wrapper is only provided for backwards compatibility and legacy
// handling. Its use is strongly discouraged. Use UTF-8 whenever possible.
func ReplaceUnsupported(e *Encoder) *Encoder {
return &Encoder{Transformer: &errorHandler{e, errorToReplacement}}
}
type errorHandler struct {
*Encoder
handler func(dst []byte, r rune, err repertoireError) (n int, ok bool)
}
// TODO: consider making this error public in some form.
type repertoireError interface {
Replacement() byte
}
func (h errorHandler) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
nDst, nSrc, err = h.Transformer.Transform(dst, src, atEOF)
for err != nil {
rerr, ok := err.(repertoireError)
if !ok {
return nDst, nSrc, err
}
r, sz := utf8.DecodeRune(src[nSrc:])
n, ok := h.handler(dst[nDst:], r, rerr)
if !ok {
return nDst, nSrc, transform.ErrShortDst
}
err = nil
nDst += n
if nSrc += sz; nSrc < len(src) {
var dn, sn int
dn, sn, err = h.Transformer.Transform(dst[nDst:], src[nSrc:], atEOF)
nDst += dn
nSrc += sn
}
}
return nDst, nSrc, err
}
func errorToHTML(dst []byte, r rune, err repertoireError) (n int, ok bool) {
buf := [8]byte{}
b := strconv.AppendUint(buf[:0], uint64(r), 10)
if n = len(b) + len("&#;"); n >= len(dst) {
return 0, false
}
dst[0] = '&'
dst[1] = '#'
dst[copy(dst[2:], b)+2] = ';'
return n, true
}
func errorToReplacement(dst []byte, r rune, err repertoireError) (n int, ok bool) {
if len(dst) == 0 {
return 0, false
}
dst[0] = err.Replacement()
return 1, true
}
// ErrInvalidUTF8 means that a transformer encountered invalid UTF-8.
var ErrInvalidUTF8 = errors.New("encoding: invalid UTF-8")
// UTF8Validator is a transformer that returns ErrInvalidUTF8 on the first
// input byte that is not valid UTF-8.
var UTF8Validator transform.Transformer = utf8Validator{}
type utf8Validator struct{ transform.NopResetter }
func (utf8Validator) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
n := len(src)
if n > len(dst) {
n = len(dst)
}
for i := 0; i < n; {
if c := src[i]; c < utf8.RuneSelf {
dst[i] = c
i++
continue
}
_, size := utf8.DecodeRune(src[i:])
if size == 1 {
// All valid runes of size 1 (those below utf8.RuneSelf) were
// handled above. We have invalid UTF-8 or we haven't seen the
// full character yet.
err = ErrInvalidUTF8
if !atEOF && !utf8.FullRune(src[i:]) {
err = transform.ErrShortSrc
}
return i, i, err
}
if i+size > len(dst) {
return i, i, transform.ErrShortDst
}
for ; size > 0; size-- {
dst[i] = src[i]
i++
}
}
if len(src) > len(dst) {
err = transform.ErrShortDst
}
return n, n, err
}
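
A minimal usage sketch of the API defined above, assuming the sibling charmap package is available: Decoder.String turns legacy bytes into UTF-8 with '\uFFFD' substitution, while chaining UTF8Validator in front of an Encoder fails early on invalid UTF-8 instead of substituting bytes.

package main

import (
	"fmt"

	"golang.org/x/text/encoding"
	"golang.org/x/text/encoding/charmap"
	"golang.org/x/text/transform"
)

func main() {
	// Decoding: undecodable bytes become '\uFFFD' rather than an error.
	s, err := charmap.Windows1252.NewDecoder().String("Gar\xe7on")
	fmt.Println(s, err) // Garçon <nil>

	// Encoding with early failure on invalid UTF-8 input.
	enc := charmap.Windows1252.NewEncoder()
	validating := transform.Chain(encoding.UTF8Validator, enc)
	_, _, err = transform.String(validating, "abc\xff")
	fmt.Println(err) // encoding: invalid UTF-8
}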

View File

@ -1,290 +0,0 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package encoding_test
import (
"io/ioutil"
"strings"
"testing"
"golang.org/x/text/encoding"
"golang.org/x/text/encoding/charmap"
"golang.org/x/text/transform"
)
func TestEncodeInvalidUTF8(t *testing.T) {
inputs := []string{
"hello.",
"wo\ufffdld.",
"ABC\xff\x80\x80", // Invalid UTF-8.
"\x80\x80\x80\x80\x80",
"\x80\x80D\x80\x80", // Valid rune at "D".
"E\xed\xa0\x80\xed\xbf\xbfF", // Two invalid UTF-8 runes (surrogates).
"G",
"H\xe2\x82", // U+20AC in UTF-8 is "\xe2\x82\xac", which we split over two
"\xacI\xe2\x82", // input lines. It maps to 0x80 in the Windows-1252 encoding.
}
// Each invalid source byte becomes '\x1a'.
want := strings.Replace("hello.wo?ld.ABC??????????D??E??????FGH\x80I??", "?", "\x1a", -1)
transformer := encoding.ReplaceUnsupported(charmap.Windows1252.NewEncoder())
gotBuf := make([]byte, 0, 1024)
src := make([]byte, 0, 1024)
for i, input := range inputs {
dst := make([]byte, 1024)
src = append(src, input...)
atEOF := i == len(inputs)-1
nDst, nSrc, err := transformer.Transform(dst, src, atEOF)
gotBuf = append(gotBuf, dst[:nDst]...)
src = src[nSrc:]
if err != nil && err != transform.ErrShortSrc {
t.Fatalf("i=%d: %v", i, err)
}
if atEOF && err != nil {
t.Fatalf("i=%d: atEOF: %v", i, err)
}
}
if got := string(gotBuf); got != want {
t.Fatalf("\ngot %+q\nwant %+q", got, want)
}
}
func TestReplacement(t *testing.T) {
for _, direction := range []string{"Decode", "Encode"} {
enc, want := (transform.Transformer)(nil), ""
if direction == "Decode" {
enc = encoding.Replacement.NewDecoder()
want = "\ufffd"
} else {
enc = encoding.Replacement.NewEncoder()
want = "AB\x00CD\ufffdYZ"
}
sr := strings.NewReader("AB\x00CD\x80YZ")
g, err := ioutil.ReadAll(transform.NewReader(sr, enc))
if err != nil {
t.Errorf("%s: ReadAll: %v", direction, err)
continue
}
if got := string(g); got != want {
t.Errorf("%s:\ngot %q\nwant %q", direction, got, want)
continue
}
}
}
func TestUTF8Validator(t *testing.T) {
testCases := []struct {
desc string
dstSize int
src string
atEOF bool
want string
wantErr error
}{
{
"empty input",
100,
"",
false,
"",
nil,
},
{
"valid 1-byte 1-rune input",
100,
"a",
false,
"a",
nil,
},
{
"valid 3-byte 1-rune input",
100,
"\u1234",
false,
"\u1234",
nil,
},
{
"valid 5-byte 3-rune input",
100,
"a\u0100\u0101",
false,
"a\u0100\u0101",
nil,
},
{
"perfectly sized dst (non-ASCII)",
5,
"a\u0100\u0101",
false,
"a\u0100\u0101",
nil,
},
{
"short dst (non-ASCII)",
4,
"a\u0100\u0101",
false,
"a\u0100",
transform.ErrShortDst,
},
{
"perfectly sized dst (ASCII)",
5,
"abcde",
false,
"abcde",
nil,
},
{
"short dst (ASCII)",
4,
"abcde",
false,
"abcd",
transform.ErrShortDst,
},
{
"partial input (!EOF)",
100,
"a\u0100\xf1",
false,
"a\u0100",
transform.ErrShortSrc,
},
{
"invalid input (EOF)",
100,
"a\u0100\xf1",
true,
"a\u0100",
encoding.ErrInvalidUTF8,
},
{
"invalid input (!EOF)",
100,
"a\u0100\x80",
false,
"a\u0100",
encoding.ErrInvalidUTF8,
},
{
"invalid input (above U+10FFFF)",
100,
"a\u0100\xf7\xbf\xbf\xbf",
false,
"a\u0100",
encoding.ErrInvalidUTF8,
},
{
"invalid input (surrogate half)",
100,
"a\u0100\xed\xa0\x80",
false,
"a\u0100",
encoding.ErrInvalidUTF8,
},
}
for _, tc := range testCases {
dst := make([]byte, tc.dstSize)
nDst, nSrc, err := encoding.UTF8Validator.Transform(dst, []byte(tc.src), tc.atEOF)
if nDst < 0 || len(dst) < nDst {
t.Errorf("%s: nDst=%d out of range", tc.desc, nDst)
continue
}
got := string(dst[:nDst])
if got != tc.want || nSrc != len(tc.want) || err != tc.wantErr {
t.Errorf("%s:\ngot %+q, %d, %v\nwant %+q, %d, %v",
tc.desc, got, nSrc, err, tc.want, len(tc.want), tc.wantErr)
continue
}
}
}
func TestErrorHandler(t *testing.T) {
testCases := []struct {
desc string
handler func(*encoding.Encoder) *encoding.Encoder
sizeDst int
src, want string
nSrc int
err error
}{
{
desc: "one rune replacement",
handler: encoding.ReplaceUnsupported,
sizeDst: 100,
src: "\uAC00",
want: "\x1a",
nSrc: 3,
},
{
desc: "mid-stream rune replacement",
handler: encoding.ReplaceUnsupported,
sizeDst: 100,
src: "a\uAC00bcd\u00e9",
want: "a\x1abcd\xe9",
nSrc: 9,
},
{
desc: "at end rune replacement",
handler: encoding.ReplaceUnsupported,
sizeDst: 10,
src: "\u00e9\uAC00",
want: "\xe9\x1a",
nSrc: 5,
},
{
desc: "short buffer replacement",
handler: encoding.ReplaceUnsupported,
sizeDst: 1,
src: "\u00e9\uAC00",
want: "\xe9",
nSrc: 2,
err: transform.ErrShortDst,
},
{
desc: "one rune html escape",
handler: encoding.HTMLEscapeUnsupported,
sizeDst: 100,
src: "\uAC00",
want: "&#44032;",
nSrc: 3,
},
{
desc: "mid-stream html escape",
handler: encoding.HTMLEscapeUnsupported,
sizeDst: 100,
src: "\u00e9\uAC00dcba",
want: "\xe9&#44032;dcba",
nSrc: 9,
},
{
desc: "short buffer html escape",
handler: encoding.HTMLEscapeUnsupported,
sizeDst: 9,
src: "ab\uAC01",
want: "ab",
nSrc: 2,
err: transform.ErrShortDst,
},
}
for i, tc := range testCases {
tr := tc.handler(charmap.Windows1250.NewEncoder())
b := make([]byte, tc.sizeDst)
nDst, nSrc, err := tr.Transform(b, []byte(tc.src), true)
if err != tc.err {
t.Errorf("%d:%s: error was %v; want %v", i, tc.desc, err, tc.err)
}
if got := string(b[:nDst]); got != tc.want {
t.Errorf("%d:%s: result was %q: want %q", i, tc.desc, got, tc.want)
}
if nSrc != tc.nSrc {
t.Errorf("%d:%s: nSrc was %d; want %d", i, tc.desc, nSrc, tc.nSrc)
}
}
}

View File

@ -1,42 +0,0 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package encoding_test
import (
"fmt"
"io"
"os"
"strings"
"golang.org/x/text/encoding"
"golang.org/x/text/encoding/charmap"
"golang.org/x/text/encoding/unicode"
"golang.org/x/text/transform"
)
func ExampleDecodeWindows1252() {
sr := strings.NewReader("Gar\xe7on !")
tr := charmap.Windows1252.NewDecoder().Reader(sr)
io.Copy(os.Stdout, tr)
// Output: Garçon !
}
func ExampleUTF8Validator() {
for i := 0; i < 2; i++ {
var transformer transform.Transformer
transformer = unicode.UTF16(unicode.BigEndian, unicode.IgnoreBOM).NewEncoder()
if i == 1 {
transformer = transform.Chain(encoding.UTF8Validator, transformer)
}
dst := make([]byte, 256)
src := []byte("abc\xffxyz") // src is invalid UTF-8.
nDst, nSrc, err := transformer.Transform(dst, src, true)
fmt.Printf("i=%d: produced %q, consumed %q, error %v\n",
i, dst[:nDst], src[:nSrc], err)
}
// Output:
// i=0: produced "\x00a\x00b\x00c\xff\xfd\x00x\x00y\x00z", consumed "abc\xffxyz", error <nil>
// i=1: produced "\x00a\x00b\x00c", consumed "abc", error encoding: invalid UTF-8
}

View File

@ -1,173 +0,0 @@
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
package main
import (
"bytes"
"encoding/json"
"fmt"
"log"
"strings"
"golang.org/x/text/internal/gen"
)
type group struct {
Encodings []struct {
Labels []string
Name string
}
}
func main() {
gen.Init()
r := gen.Open("https://encoding.spec.whatwg.org", "whatwg", "encodings.json")
var groups []group
if err := json.NewDecoder(r).Decode(&groups); err != nil {
log.Fatalf("Error reading encodings.json: %v", err)
}
w := &bytes.Buffer{}
fmt.Fprintln(w, "type htmlEncoding byte")
fmt.Fprintln(w, "const (")
for i, g := range groups {
for _, e := range g.Encodings {
key := strings.ToLower(e.Name)
name := consts[key]
if name == "" {
log.Fatalf("No const defined for %s.", key)
}
if i == 0 {
fmt.Fprintf(w, "%s htmlEncoding = iota\n", name)
} else {
fmt.Fprintf(w, "%s\n", name)
}
}
}
fmt.Fprintln(w, "numEncodings")
fmt.Fprint(w, ")\n\n")
fmt.Fprintln(w, "var canonical = [numEncodings]string{")
for _, g := range groups {
for _, e := range g.Encodings {
fmt.Fprintf(w, "%q,\n", strings.ToLower(e.Name))
}
}
fmt.Fprint(w, "}\n\n")
fmt.Fprintln(w, "var nameMap = map[string]htmlEncoding{")
for _, g := range groups {
for _, e := range g.Encodings {
for _, l := range e.Labels {
key := strings.ToLower(e.Name)
name := consts[key]
fmt.Fprintf(w, "%q: %s,\n", l, name)
}
}
}
fmt.Fprint(w, "}\n\n")
var tags []string
fmt.Fprintln(w, "var localeMap = []htmlEncoding{")
for _, loc := range locales {
tags = append(tags, loc.tag)
fmt.Fprintf(w, "%s, // %s \n", consts[loc.name], loc.tag)
}
fmt.Fprint(w, "}\n\n")
fmt.Fprintf(w, "const locales = %q\n", strings.Join(tags, " "))
gen.WriteGoFile("tables.go", "htmlindex", w.Bytes())
}
// consts maps canonical encoding name to internal constant.
var consts = map[string]string{
"utf-8": "utf8",
"ibm866": "ibm866",
"iso-8859-2": "iso8859_2",
"iso-8859-3": "iso8859_3",
"iso-8859-4": "iso8859_4",
"iso-8859-5": "iso8859_5",
"iso-8859-6": "iso8859_6",
"iso-8859-7": "iso8859_7",
"iso-8859-8": "iso8859_8",
"iso-8859-8-i": "iso8859_8I",
"iso-8859-10": "iso8859_10",
"iso-8859-13": "iso8859_13",
"iso-8859-14": "iso8859_14",
"iso-8859-15": "iso8859_15",
"iso-8859-16": "iso8859_16",
"koi8-r": "koi8r",
"koi8-u": "koi8u",
"macintosh": "macintosh",
"windows-874": "windows874",
"windows-1250": "windows1250",
"windows-1251": "windows1251",
"windows-1252": "windows1252",
"windows-1253": "windows1253",
"windows-1254": "windows1254",
"windows-1255": "windows1255",
"windows-1256": "windows1256",
"windows-1257": "windows1257",
"windows-1258": "windows1258",
"x-mac-cyrillic": "macintoshCyrillic",
"gbk": "gbk",
"gb18030": "gb18030",
// "hz-gb-2312": "hzgb2312", // Was removed from WhatWG
"big5": "big5",
"euc-jp": "eucjp",
"iso-2022-jp": "iso2022jp",
"shift_jis": "shiftJIS",
"euc-kr": "euckr",
"replacement": "replacement",
"utf-16be": "utf16be",
"utf-16le": "utf16le",
"x-user-defined": "xUserDefined",
}
// locales is taken from
// https://html.spec.whatwg.org/multipage/syntax.html#encoding-sniffing-algorithm.
var locales = []struct{ tag, name string }{
// The default value. Explicitly state latin to benefit from the exact
// script option, while still making 1252 the default encoding for languages
// written in Latin script.
{"und_Latn", "windows-1252"},
{"ar", "windows-1256"},
{"ba", "windows-1251"},
{"be", "windows-1251"},
{"bg", "windows-1251"},
{"cs", "windows-1250"},
{"el", "iso-8859-7"},
{"et", "windows-1257"},
{"fa", "windows-1256"},
{"he", "windows-1255"},
{"hr", "windows-1250"},
{"hu", "iso-8859-2"},
{"ja", "shift_jis"},
{"kk", "windows-1251"},
{"ko", "euc-kr"},
{"ku", "windows-1254"},
{"ky", "windows-1251"},
{"lt", "windows-1257"},
{"lv", "windows-1257"},
{"mk", "windows-1251"},
{"pl", "iso-8859-2"},
{"ru", "windows-1251"},
{"sah", "windows-1251"},
{"sk", "windows-1250"},
{"sl", "iso-8859-2"},
{"sr", "windows-1251"},
{"tg", "windows-1251"},
{"th", "windows-874"},
{"tr", "windows-1254"},
{"tt", "windows-1251"},
{"uk", "windows-1251"},
{"vi", "windows-1258"},
{"zh-hans", "gb18030"},
{"zh-hant", "big5"},
}

View File

@ -1,86 +0,0 @@
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:generate go run gen.go
// Package htmlindex maps character set encoding names to Encodings as
// recommended by the W3C for use in HTML 5. See http://www.w3.org/TR/encoding.
package htmlindex
// TODO: perhaps have a "bare" version of the index (used by this package) that
// is not pre-loaded with all encodings. Global variables in encodings prevent
// the linker from being able to purge unneeded tables. This means that
// referencing all encodings, as this package does for the default index, links
// in all encodings unconditionally.
//
// This issue can be solved by either solving the linking issue (see
// https://github.com/golang/go/issues/6330) or refactoring the encoding tables
// (e.g. moving the tables to internal packages that do not use global
// variables).
// TODO: allow canonicalizing names
import (
"errors"
"strings"
"sync"
"golang.org/x/text/encoding"
"golang.org/x/text/encoding/internal/identifier"
"golang.org/x/text/language"
)
var (
errInvalidName = errors.New("htmlindex: invalid encoding name")
errUnknown = errors.New("htmlindex: unknown Encoding")
errUnsupported = errors.New("htmlindex: this encoding is not supported")
)
var (
matcherOnce sync.Once
matcher language.Matcher
)
// LanguageDefault returns the canonical name of the default encoding for a
// given language.
func LanguageDefault(tag language.Tag) string {
matcherOnce.Do(func() {
tags := []language.Tag{}
for _, t := range strings.Split(locales, " ") {
tags = append(tags, language.MustParse(t))
}
matcher = language.NewMatcher(tags, language.PreferSameScript(true))
})
_, i, _ := matcher.Match(tag)
return canonical[localeMap[i]] // Default is Windows-1252.
}
// Get returns an Encoding for one of the names listed in
// http://www.w3.org/TR/encoding using the Default Index. Matching is case-
// insensitive.
func Get(name string) (encoding.Encoding, error) {
x, ok := nameMap[strings.ToLower(strings.TrimSpace(name))]
if !ok {
return nil, errInvalidName
}
return encodings[x], nil
}
// Name reports the canonical name of the given Encoding. It will return
// an error if e is not associated with a supported encoding scheme.
func Name(e encoding.Encoding) (string, error) {
id, ok := e.(identifier.Interface)
if !ok {
return "", errUnknown
}
mib, _ := id.ID()
if mib == 0 {
return "", errUnknown
}
v, ok := mibMap[mib]
if !ok {
return "", errUnsupported
}
return canonical[v], nil
}
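
A minimal usage sketch of the three entry points above, assuming golang.org/x/text/language is importable; the chosen label and tag are illustrative:

package main

import (
	"fmt"

	"golang.org/x/text/encoding/htmlindex"
	"golang.org/x/text/language"
)

func main() {
	// "latin1" is one of the WHATWG labels for windows-1252.
	e, err := htmlindex.Get("latin1")
	if err != nil {
		panic(err)
	}
	name, _ := htmlindex.Name(e)
	fmt.Println(name) // windows-1252

	// Default legacy encoding for a locale.
	fmt.Println(htmlindex.LanguageDefault(language.MustParse("ru"))) // windows-1251
}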

View File

@ -1,144 +0,0 @@
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package htmlindex
import (
"testing"
"golang.org/x/text/encoding"
"golang.org/x/text/encoding/charmap"
"golang.org/x/text/encoding/internal/identifier"
"golang.org/x/text/encoding/unicode"
"golang.org/x/text/language"
)
func TestGet(t *testing.T) {
for i, tc := range []struct {
name string
canonical string
err error
}{
{"utf-8", "utf-8", nil},
{" utf-8 ", "utf-8", nil},
{" l5 ", "windows-1254", nil},
{"latin5 ", "windows-1254", nil},
{"latin 5", "", errInvalidName},
{"latin-5", "", errInvalidName},
} {
enc, err := Get(tc.name)
if err != tc.err {
t.Errorf("%d: error was %v; want %v", i, err, tc.err)
}
if err != nil {
continue
}
if got, err := Name(enc); got != tc.canonical {
t.Errorf("%d: Name(Get(%q)) = %q; want %q (%v)", i, tc.name, got, tc.canonical, err)
}
}
}
func TestTables(t *testing.T) {
for name, index := range nameMap {
got, err := Get(name)
if err != nil {
t.Errorf("%s:err: expected non-nil error", name)
}
if want := encodings[index]; got != want {
t.Errorf("%s:encoding: got %v; want %v", name, got, want)
}
mib, _ := got.(identifier.Interface).ID()
if mibMap[mib] != index {
t.Errorf("%s:mibMab: got %d; want %d", name, mibMap[mib], index)
}
}
}
func TestName(t *testing.T) {
for i, tc := range []struct {
desc string
enc encoding.Encoding
name string
err error
}{{
"defined encoding",
charmap.ISO8859_2,
"iso-8859-2",
nil,
}, {
"defined Unicode encoding",
unicode.UTF16(unicode.BigEndian, unicode.IgnoreBOM),
"utf-16be",
nil,
}, {
"undefined Unicode encoding in HTML standard",
unicode.UTF16(unicode.BigEndian, unicode.UseBOM),
"",
errUnsupported,
}, {
"undefined other encoding in HTML standard",
charmap.CodePage437,
"",
errUnsupported,
}, {
"unknown encoding",
encoding.Nop,
"",
errUnknown,
}} {
name, err := Name(tc.enc)
if name != tc.name || err != tc.err {
t.Errorf("%d:%s: got %q, %v; want %q, %v", i, tc.desc, name, err, tc.name, tc.err)
}
}
}
func TestLanguageDefault(t *testing.T) {
for _, tc := range []struct{ tag, want string }{
{"und", "windows-1252"}, // The default value.
{"ar", "windows-1256"},
{"ba", "windows-1251"},
{"be", "windows-1251"},
{"bg", "windows-1251"},
{"cs", "windows-1250"},
{"el", "iso-8859-7"},
{"et", "windows-1257"},
{"fa", "windows-1256"},
{"he", "windows-1255"},
{"hr", "windows-1250"},
{"hu", "iso-8859-2"},
{"ja", "shift_jis"},
{"kk", "windows-1251"},
{"ko", "euc-kr"},
{"ku", "windows-1254"},
{"ky", "windows-1251"},
{"lt", "windows-1257"},
{"lv", "windows-1257"},
{"mk", "windows-1251"},
{"pl", "iso-8859-2"},
{"ru", "windows-1251"},
{"sah", "windows-1251"},
{"sk", "windows-1250"},
{"sl", "iso-8859-2"},
{"sr", "windows-1251"},
{"tg", "windows-1251"},
{"th", "windows-874"},
{"tr", "windows-1254"},
{"tt", "windows-1251"},
{"uk", "windows-1251"},
{"vi", "windows-1258"},
{"zh-hans", "gb18030"},
{"zh-hant", "big5"},
// Variants and close approximates of the above.
{"ar_EG", "windows-1256"},
{"bs", "windows-1250"}, // Bosnian Latin maps to Croatian.
// Use default fallback in case of miss.
{"nl", "windows-1252"},
} {
if got := LanguageDefault(language.MustParse(tc.tag)); got != tc.want {
t.Errorf("LanguageDefault(%s) = %s; want %s", tc.tag, got, tc.want)
}
}
}

View File

@ -1,105 +0,0 @@
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package htmlindex
import (
"golang.org/x/text/encoding"
"golang.org/x/text/encoding/charmap"
"golang.org/x/text/encoding/internal/identifier"
"golang.org/x/text/encoding/japanese"
"golang.org/x/text/encoding/korean"
"golang.org/x/text/encoding/simplifiedchinese"
"golang.org/x/text/encoding/traditionalchinese"
"golang.org/x/text/encoding/unicode"
)
// mibMap maps a MIB identifier to an htmlEncoding index.
var mibMap = map[identifier.MIB]htmlEncoding{
identifier.UTF8: utf8,
identifier.UTF16BE: utf16be,
identifier.UTF16LE: utf16le,
identifier.IBM866: ibm866,
identifier.ISOLatin2: iso8859_2,
identifier.ISOLatin3: iso8859_3,
identifier.ISOLatin4: iso8859_4,
identifier.ISOLatinCyrillic: iso8859_5,
identifier.ISOLatinArabic: iso8859_6,
identifier.ISOLatinGreek: iso8859_7,
identifier.ISOLatinHebrew: iso8859_8,
identifier.ISO88598I: iso8859_8I,
identifier.ISOLatin6: iso8859_10,
identifier.ISO885913: iso8859_13,
identifier.ISO885914: iso8859_14,
identifier.ISO885915: iso8859_15,
identifier.ISO885916: iso8859_16,
identifier.KOI8R: koi8r,
identifier.KOI8U: koi8u,
identifier.Macintosh: macintosh,
identifier.MacintoshCyrillic: macintoshCyrillic,
identifier.Windows874: windows874,
identifier.Windows1250: windows1250,
identifier.Windows1251: windows1251,
identifier.Windows1252: windows1252,
identifier.Windows1253: windows1253,
identifier.Windows1254: windows1254,
identifier.Windows1255: windows1255,
identifier.Windows1256: windows1256,
identifier.Windows1257: windows1257,
identifier.Windows1258: windows1258,
identifier.XUserDefined: xUserDefined,
identifier.GBK: gbk,
identifier.GB18030: gb18030,
identifier.Big5: big5,
identifier.EUCPkdFmtJapanese: eucjp,
identifier.ISO2022JP: iso2022jp,
identifier.ShiftJIS: shiftJIS,
identifier.EUCKR: euckr,
identifier.Replacement: replacement,
}
// encodings maps the internal htmlEncoding to an Encoding.
// TODO: consider using a reusable index in encoding/internal.
var encodings = [numEncodings]encoding.Encoding{
utf8: unicode.UTF8,
ibm866: charmap.CodePage866,
iso8859_2: charmap.ISO8859_2,
iso8859_3: charmap.ISO8859_3,
iso8859_4: charmap.ISO8859_4,
iso8859_5: charmap.ISO8859_5,
iso8859_6: charmap.ISO8859_6,
iso8859_7: charmap.ISO8859_7,
iso8859_8: charmap.ISO8859_8,
iso8859_8I: charmap.ISO8859_8I,
iso8859_10: charmap.ISO8859_10,
iso8859_13: charmap.ISO8859_13,
iso8859_14: charmap.ISO8859_14,
iso8859_15: charmap.ISO8859_15,
iso8859_16: charmap.ISO8859_16,
koi8r: charmap.KOI8R,
koi8u: charmap.KOI8U,
macintosh: charmap.Macintosh,
windows874: charmap.Windows874,
windows1250: charmap.Windows1250,
windows1251: charmap.Windows1251,
windows1252: charmap.Windows1252,
windows1253: charmap.Windows1253,
windows1254: charmap.Windows1254,
windows1255: charmap.Windows1255,
windows1256: charmap.Windows1256,
windows1257: charmap.Windows1257,
windows1258: charmap.Windows1258,
macintoshCyrillic: charmap.MacintoshCyrillic,
gbk: simplifiedchinese.GBK,
gb18030: simplifiedchinese.GB18030,
big5: traditionalchinese.Big5,
eucjp: japanese.EUCJP,
iso2022jp: japanese.ISO2022JP,
shiftJIS: japanese.ShiftJIS,
euckr: korean.EUCKR,
replacement: encoding.Replacement,
utf16be: unicode.UTF16(unicode.BigEndian, unicode.IgnoreBOM),
utf16le: unicode.UTF16(unicode.LittleEndian, unicode.IgnoreBOM),
xUserDefined: charmap.XUserDefined,
}

View File

@ -1,352 +0,0 @@
// Code generated by running "go generate" in golang.org/x/text. DO NOT EDIT.
package htmlindex
type htmlEncoding byte
const (
utf8 htmlEncoding = iota
ibm866
iso8859_2
iso8859_3
iso8859_4
iso8859_5
iso8859_6
iso8859_7
iso8859_8
iso8859_8I
iso8859_10
iso8859_13
iso8859_14
iso8859_15
iso8859_16
koi8r
koi8u
macintosh
windows874
windows1250
windows1251
windows1252
windows1253
windows1254
windows1255
windows1256
windows1257
windows1258
macintoshCyrillic
gbk
gb18030
big5
eucjp
iso2022jp
shiftJIS
euckr
replacement
utf16be
utf16le
xUserDefined
numEncodings
)
var canonical = [numEncodings]string{
"utf-8",
"ibm866",
"iso-8859-2",
"iso-8859-3",
"iso-8859-4",
"iso-8859-5",
"iso-8859-6",
"iso-8859-7",
"iso-8859-8",
"iso-8859-8-i",
"iso-8859-10",
"iso-8859-13",
"iso-8859-14",
"iso-8859-15",
"iso-8859-16",
"koi8-r",
"koi8-u",
"macintosh",
"windows-874",
"windows-1250",
"windows-1251",
"windows-1252",
"windows-1253",
"windows-1254",
"windows-1255",
"windows-1256",
"windows-1257",
"windows-1258",
"x-mac-cyrillic",
"gbk",
"gb18030",
"big5",
"euc-jp",
"iso-2022-jp",
"shift_jis",
"euc-kr",
"replacement",
"utf-16be",
"utf-16le",
"x-user-defined",
}
var nameMap = map[string]htmlEncoding{
"unicode-1-1-utf-8": utf8,
"utf-8": utf8,
"utf8": utf8,
"866": ibm866,
"cp866": ibm866,
"csibm866": ibm866,
"ibm866": ibm866,
"csisolatin2": iso8859_2,
"iso-8859-2": iso8859_2,
"iso-ir-101": iso8859_2,
"iso8859-2": iso8859_2,
"iso88592": iso8859_2,
"iso_8859-2": iso8859_2,
"iso_8859-2:1987": iso8859_2,
"l2": iso8859_2,
"latin2": iso8859_2,
"csisolatin3": iso8859_3,
"iso-8859-3": iso8859_3,
"iso-ir-109": iso8859_3,
"iso8859-3": iso8859_3,
"iso88593": iso8859_3,
"iso_8859-3": iso8859_3,
"iso_8859-3:1988": iso8859_3,
"l3": iso8859_3,
"latin3": iso8859_3,
"csisolatin4": iso8859_4,
"iso-8859-4": iso8859_4,
"iso-ir-110": iso8859_4,
"iso8859-4": iso8859_4,
"iso88594": iso8859_4,
"iso_8859-4": iso8859_4,
"iso_8859-4:1988": iso8859_4,
"l4": iso8859_4,
"latin4": iso8859_4,
"csisolatincyrillic": iso8859_5,
"cyrillic": iso8859_5,
"iso-8859-5": iso8859_5,
"iso-ir-144": iso8859_5,
"iso8859-5": iso8859_5,
"iso88595": iso8859_5,
"iso_8859-5": iso8859_5,
"iso_8859-5:1988": iso8859_5,
"arabic": iso8859_6,
"asmo-708": iso8859_6,
"csiso88596e": iso8859_6,
"csiso88596i": iso8859_6,
"csisolatinarabic": iso8859_6,
"ecma-114": iso8859_6,
"iso-8859-6": iso8859_6,
"iso-8859-6-e": iso8859_6,
"iso-8859-6-i": iso8859_6,
"iso-ir-127": iso8859_6,
"iso8859-6": iso8859_6,
"iso88596": iso8859_6,
"iso_8859-6": iso8859_6,
"iso_8859-6:1987": iso8859_6,
"csisolatingreek": iso8859_7,
"ecma-118": iso8859_7,
"elot_928": iso8859_7,
"greek": iso8859_7,
"greek8": iso8859_7,
"iso-8859-7": iso8859_7,
"iso-ir-126": iso8859_7,
"iso8859-7": iso8859_7,
"iso88597": iso8859_7,
"iso_8859-7": iso8859_7,
"iso_8859-7:1987": iso8859_7,
"sun_eu_greek": iso8859_7,
"csiso88598e": iso8859_8,
"csisolatinhebrew": iso8859_8,
"hebrew": iso8859_8,
"iso-8859-8": iso8859_8,
"iso-8859-8-e": iso8859_8,
"iso-ir-138": iso8859_8,
"iso8859-8": iso8859_8,
"iso88598": iso8859_8,
"iso_8859-8": iso8859_8,
"iso_8859-8:1988": iso8859_8,
"visual": iso8859_8,
"csiso88598i": iso8859_8I,
"iso-8859-8-i": iso8859_8I,
"logical": iso8859_8I,
"csisolatin6": iso8859_10,
"iso-8859-10": iso8859_10,
"iso-ir-157": iso8859_10,
"iso8859-10": iso8859_10,
"iso885910": iso8859_10,
"l6": iso8859_10,
"latin6": iso8859_10,
"iso-8859-13": iso8859_13,
"iso8859-13": iso8859_13,
"iso885913": iso8859_13,
"iso-8859-14": iso8859_14,
"iso8859-14": iso8859_14,
"iso885914": iso8859_14,
"csisolatin9": iso8859_15,
"iso-8859-15": iso8859_15,
"iso8859-15": iso8859_15,
"iso885915": iso8859_15,
"iso_8859-15": iso8859_15,
"l9": iso8859_15,
"iso-8859-16": iso8859_16,
"cskoi8r": koi8r,
"koi": koi8r,
"koi8": koi8r,
"koi8-r": koi8r,
"koi8_r": koi8r,
"koi8-ru": koi8u,
"koi8-u": koi8u,
"csmacintosh": macintosh,
"mac": macintosh,
"macintosh": macintosh,
"x-mac-roman": macintosh,
"dos-874": windows874,
"iso-8859-11": windows874,
"iso8859-11": windows874,
"iso885911": windows874,
"tis-620": windows874,
"windows-874": windows874,
"cp1250": windows1250,
"windows-1250": windows1250,
"x-cp1250": windows1250,
"cp1251": windows1251,
"windows-1251": windows1251,
"x-cp1251": windows1251,
"ansi_x3.4-1968": windows1252,
"ascii": windows1252,
"cp1252": windows1252,
"cp819": windows1252,
"csisolatin1": windows1252,
"ibm819": windows1252,
"iso-8859-1": windows1252,
"iso-ir-100": windows1252,
"iso8859-1": windows1252,
"iso88591": windows1252,
"iso_8859-1": windows1252,
"iso_8859-1:1987": windows1252,
"l1": windows1252,
"latin1": windows1252,
"us-ascii": windows1252,
"windows-1252": windows1252,
"x-cp1252": windows1252,
"cp1253": windows1253,
"windows-1253": windows1253,
"x-cp1253": windows1253,
"cp1254": windows1254,
"csisolatin5": windows1254,
"iso-8859-9": windows1254,
"iso-ir-148": windows1254,
"iso8859-9": windows1254,
"iso88599": windows1254,
"iso_8859-9": windows1254,
"iso_8859-9:1989": windows1254,
"l5": windows1254,
"latin5": windows1254,
"windows-1254": windows1254,
"x-cp1254": windows1254,
"cp1255": windows1255,
"windows-1255": windows1255,
"x-cp1255": windows1255,
"cp1256": windows1256,
"windows-1256": windows1256,
"x-cp1256": windows1256,
"cp1257": windows1257,
"windows-1257": windows1257,
"x-cp1257": windows1257,
"cp1258": windows1258,
"windows-1258": windows1258,
"x-cp1258": windows1258,
"x-mac-cyrillic": macintoshCyrillic,
"x-mac-ukrainian": macintoshCyrillic,
"chinese": gbk,
"csgb2312": gbk,
"csiso58gb231280": gbk,
"gb2312": gbk,
"gb_2312": gbk,
"gb_2312-80": gbk,
"gbk": gbk,
"iso-ir-58": gbk,
"x-gbk": gbk,
"gb18030": gb18030,
"big5": big5,
"big5-hkscs": big5,
"cn-big5": big5,
"csbig5": big5,
"x-x-big5": big5,
"cseucpkdfmtjapanese": eucjp,
"euc-jp": eucjp,
"x-euc-jp": eucjp,
"csiso2022jp": iso2022jp,
"iso-2022-jp": iso2022jp,
"csshiftjis": shiftJIS,
"ms932": shiftJIS,
"ms_kanji": shiftJIS,
"shift-jis": shiftJIS,
"shift_jis": shiftJIS,
"sjis": shiftJIS,
"windows-31j": shiftJIS,
"x-sjis": shiftJIS,
"cseuckr": euckr,
"csksc56011987": euckr,
"euc-kr": euckr,
"iso-ir-149": euckr,
"korean": euckr,
"ks_c_5601-1987": euckr,
"ks_c_5601-1989": euckr,
"ksc5601": euckr,
"ksc_5601": euckr,
"windows-949": euckr,
"csiso2022kr": replacement,
"hz-gb-2312": replacement,
"iso-2022-cn": replacement,
"iso-2022-cn-ext": replacement,
"iso-2022-kr": replacement,
"utf-16be": utf16be,
"utf-16": utf16le,
"utf-16le": utf16le,
"x-user-defined": xUserDefined,
}
var localeMap = []htmlEncoding{
windows1252, // und_Latn
windows1256, // ar
windows1251, // ba
windows1251, // be
windows1251, // bg
windows1250, // cs
iso8859_7, // el
windows1257, // et
windows1256, // fa
windows1255, // he
windows1250, // hr
iso8859_2, // hu
shiftJIS, // ja
windows1251, // kk
euckr, // ko
windows1254, // ku
windows1251, // ky
windows1257, // lt
windows1257, // lv
windows1251, // mk
iso8859_2, // pl
windows1251, // ru
windows1251, // sah
windows1250, // sk
iso8859_2, // sl
windows1251, // sr
windows1251, // tg
windows874, // th
windows1254, // tr
windows1251, // tt
windows1251, // uk
windows1258, // vi
gb18030, // zh-hans
big5, // zh-hant
}
const locales = "und_Latn ar ba be bg cs el et fa he hr hu ja kk ko ku ky lt lv mk pl ru sah sk sl sr tg th tr tt uk vi zh-hans zh-hant"

View File

@ -1,27 +0,0 @@
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package ianaindex_test
import (
"fmt"
"golang.org/x/text/encoding/charmap"
"golang.org/x/text/encoding/ianaindex"
)
func ExampleIndex() {
fmt.Println(ianaindex.MIME.Name(charmap.ISO8859_7))
fmt.Println(ianaindex.IANA.Name(charmap.ISO8859_7))
fmt.Println(ianaindex.MIB.Name(charmap.ISO8859_7))
e, _ := ianaindex.IANA.Encoding("cp437")
fmt.Println(ianaindex.IANA.Name(e))
// Output:
// ISO-8859-7 <nil>
// ISO_8859-7:1987 <nil>
// ISOLatinGreek <nil>
// IBM437 <nil>
}

View File

@ -1,192 +0,0 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
package main
import (
"encoding/xml"
"fmt"
"io"
"log"
"sort"
"strconv"
"strings"
"golang.org/x/text/encoding/internal/identifier"
"golang.org/x/text/internal/gen"
)
type registry struct {
XMLName xml.Name `xml:"registry"`
Updated string `xml:"updated"`
Registry []struct {
ID string `xml:"id,attr"`
Record []struct {
Name string `xml:"name"`
Xref []struct {
Type string `xml:"type,attr"`
Data string `xml:"data,attr"`
} `xml:"xref"`
Desc struct {
Data string `xml:",innerxml"`
} `xml:"description,"`
MIB string `xml:"value"`
Alias []string `xml:"alias"`
MIME string `xml:"preferred_alias"`
} `xml:"record"`
} `xml:"registry"`
}
func main() {
r := gen.OpenIANAFile("assignments/character-sets/character-sets.xml")
reg := &registry{}
if err := xml.NewDecoder(r).Decode(&reg); err != nil && err != io.EOF {
log.Fatalf("Error decoding charset registry: %v", err)
}
if len(reg.Registry) == 0 || reg.Registry[0].ID != "character-sets-1" {
log.Fatalf("Unexpected ID %s", reg.Registry[0].ID)
}
x := &indexInfo{}
for _, rec := range reg.Registry[0].Record {
mib := identifier.MIB(parseInt(rec.MIB))
x.addEntry(mib, rec.Name)
for _, a := range rec.Alias {
a = strings.Split(a, " ")[0] // strip comments.
x.addAlias(a, mib)
// MIB name aliases are prefixed with a "cs" (character set) in the
// registry to identify them as display names and to ensure that
// the name starts with a lowercase letter in case it is used as
// an identifier. We remove it to be left with a nice clean name.
if strings.HasPrefix(a, "cs") {
x.setName(2, a[2:])
}
}
if rec.MIME != "" {
x.addAlias(rec.MIME, mib)
x.setName(1, rec.MIME)
}
}
w := gen.NewCodeWriter()
fmt.Fprintln(w, `import "golang.org/x/text/encoding/internal/identifier"`)
writeIndex(w, x)
w.WriteGoFile("tables.go", "ianaindex")
}
type alias struct {
name string
mib identifier.MIB
}
type indexInfo struct {
// compacted index from code to MIB
codeToMIB []identifier.MIB
alias []alias
names [][3]string
}
func (ii *indexInfo) Len() int {
return len(ii.codeToMIB)
}
func (ii *indexInfo) Less(a, b int) bool {
return ii.codeToMIB[a] < ii.codeToMIB[b]
}
func (ii *indexInfo) Swap(a, b int) {
ii.codeToMIB[a], ii.codeToMIB[b] = ii.codeToMIB[b], ii.codeToMIB[a]
// Co-sort the names.
ii.names[a], ii.names[b] = ii.names[b], ii.names[a]
}
func (ii *indexInfo) setName(i int, name string) {
ii.names[len(ii.names)-1][i] = name
}
func (ii *indexInfo) addEntry(mib identifier.MIB, name string) {
ii.names = append(ii.names, [3]string{name, name, name})
ii.addAlias(name, mib)
ii.codeToMIB = append(ii.codeToMIB, mib)
}
func (ii *indexInfo) addAlias(name string, mib identifier.MIB) {
// Don't add duplicates for the same mib. Adding duplicate aliases for
// different MIBs will cause the compiler to barf on an invalid map: great!
for i := len(ii.alias) - 1; i >= 0 && ii.alias[i].mib == mib; i-- {
if ii.alias[i].name == name {
return
}
}
ii.alias = append(ii.alias, alias{name, mib})
lower := strings.ToLower(name)
if lower != name {
ii.addAlias(lower, mib)
}
}
const maxMIMENameLen = '0' - 1 // officially 40, but we leave some buffer.
func writeIndex(w *gen.CodeWriter, x *indexInfo) {
sort.Stable(x)
// Write constants.
fmt.Fprintln(w, "const (")
for i, m := range x.codeToMIB {
if i == 0 {
fmt.Fprintf(w, "enc%d = iota\n", m)
} else {
fmt.Fprintf(w, "enc%d\n", m)
}
}
fmt.Fprintln(w, "numIANA")
fmt.Fprintln(w, ")")
w.WriteVar("ianaToMIB", x.codeToMIB)
var ianaNames, mibNames []string
for _, names := range x.names {
n := names[0]
if names[0] != names[1] {
// MIME names are mostly identical to IANA names. We share the
// tables by setting the first byte of the string to an index into
// the string itself (< maxMIMENameLen) to the IANA name. The MIME
// name immediately follows the index.
x := len(names[1]) + 1
if x > maxMIMENameLen {
log.Fatalf("MIME name length (%d) > %d", x, maxMIMENameLen)
}
n = string(x) + names[1] + names[0]
}
ianaNames = append(ianaNames, n)
mibNames = append(mibNames, names[2])
}
w.WriteVar("ianaNames", ianaNames)
w.WriteVar("mibNames", mibNames)
w.WriteComment(`
TODO: Instead of using a map, we could use binary search on strings, doing
on-the-fly lower-casing per character. This would always avoid allocation
and be considerably more compact.`)
fmt.Fprintln(w, "var ianaAliases = map[string]int{")
for _, a := range x.alias {
fmt.Fprintf(w, "%q: enc%d,\n", a.name, a.mib)
}
fmt.Fprintln(w, "}")
}
func parseInt(s string) int {
x, err := strconv.ParseInt(s, 10, 64)
if err != nil {
log.Fatalf("Could not parse integer: %v", err)
}
return int(x)
}
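
A minimal sketch of the name-sharing scheme produced by writeIndex above: when the MIME name differs from the IANA name, both are stored in a single string whose first byte is the offset (at most maxMIMENameLen) at which the IANA name starts. pack and unpack are hypothetical helpers that mirror what the generator writes and what mimeName/ianaName in the package later read back:

package main

import "fmt"

const maxMIMENameLen = '0' - 1 // same bound as in the generator

func pack(ianaName, mimeName string) string {
	if ianaName == mimeName {
		return ianaName
	}
	// First byte: index where the IANA name starts; the MIME name follows it.
	return string(rune(len(mimeName)+1)) + mimeName + ianaName
}

func unpack(n string) (mime, iana string) {
	if n[0] <= maxMIMENameLen {
		return n[1:n[0]], n[n[0]:]
	}
	return n, n
}

func main() {
	n := pack("ISO_8859-9:1989", "ISO-8859-9")
	mime, iana := unpack(n)
	fmt.Println(mime, iana) // ISO-8859-9 ISO_8859-9:1989
}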

View File

@ -1,209 +0,0 @@
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:generate go run gen.go
// Package ianaindex maps names to Encodings as specified by the IANA registry.
// This includes both the MIME and IANA names.
//
// See http://www.iana.org/assignments/character-sets/character-sets.xhtml for
// more details.
package ianaindex
import (
"errors"
"sort"
"strings"
"golang.org/x/text/encoding"
"golang.org/x/text/encoding/charmap"
"golang.org/x/text/encoding/internal/identifier"
"golang.org/x/text/encoding/japanese"
"golang.org/x/text/encoding/korean"
"golang.org/x/text/encoding/simplifiedchinese"
"golang.org/x/text/encoding/traditionalchinese"
"golang.org/x/text/encoding/unicode"
)
// TODO: remove the "Status... incomplete" in the package doc comment.
// TODO: allow users to specify their own aliases?
// TODO: allow users to specify their own indexes?
// TODO: allow canonicalizing names
// NOTE: only use these top-level variables if we can get the linker to drop
// the indexes when they are not used. Make them a function or perhaps only
// support MIME otherwise.
var (
// MIME is an index to map MIME names.
MIME *Index = mime
// IANA is an index that supports all names and aliases using IANA names as
// the canonical identifier.
IANA *Index = iana
// MIB is an index that associates the MIB display name with an Encoding.
MIB *Index = mib
mime = &Index{mimeName, ianaToMIB, ianaAliases, encodings[:]}
iana = &Index{ianaName, ianaToMIB, ianaAliases, encodings[:]}
mib = &Index{mibName, ianaToMIB, ianaAliases, encodings[:]}
)
// Index maps names registered by IANA to Encodings.
// Currently different Indexes only differ in the names they return for
// encodings. In the future they may also differ in supported aliases.
type Index struct {
names func(i int) string
toMIB []identifier.MIB // Sorted slice of supported MIBs
alias map[string]int
enc []encoding.Encoding
}
var (
errInvalidName = errors.New("ianaindex: invalid encoding name")
errUnknown = errors.New("ianaindex: unknown Encoding")
errUnsupported = errors.New("ianaindex: unsupported Encoding")
)
// Encoding returns an Encoding for IANA-registered names. Matching is
// case-insensitive.
func (x *Index) Encoding(name string) (encoding.Encoding, error) {
name = strings.TrimSpace(name)
// First try without lowercasing (possibly creating an allocation).
i, ok := x.alias[name]
if !ok {
i, ok = x.alias[strings.ToLower(name)]
if !ok {
return nil, errInvalidName
}
}
return x.enc[i], nil
}
// Name reports the canonical name of the given Encoding. It will return an
// error if e is not associated with a known encoding scheme.
func (x *Index) Name(e encoding.Encoding) (string, error) {
id, ok := e.(identifier.Interface)
if !ok {
return "", errUnknown
}
mib, _ := id.ID()
if mib == 0 {
return "", errUnknown
}
v := findMIB(x.toMIB, mib)
if v == -1 {
return "", errUnsupported
}
return x.names(v), nil
}
// TODO: the coverage of this index is rather spotty. Allowing users to set
// encodings would allow:
// - users to increase coverage
// - allow a partially loaded set of encodings in case the user doesn't need
// them all.
// - write an OS-specific wrapper for supported encodings and set them.
// The exact definition of Set depends a bit on if and how we want to let users
// write their own Encoding implementations. Also, it is not possible yet to
// only partially load the encodings without doing some refactoring. Until this
// is solved, we might as well not support Set.
// // Set sets the e to be used for the encoding scheme identified by name. Only
// // canonical names may be used. An empty name assigns e to its internally
// // associated encoding scheme.
// func (x *Index) Set(name string, e encoding.Encoding) error {
// panic("TODO: implement")
// }
func findMIB(x []identifier.MIB, mib identifier.MIB) int {
i := sort.Search(len(x), func(i int) bool { return x[i] >= mib })
if i < len(x) && x[i] == mib {
return i
}
return -1
}
const maxMIMENameLen = '0' - 1 // officially 40, but we leave some buffer.
func mimeName(x int) string {
n := ianaNames[x]
// See gen.go for a description of the encoding.
if n[0] <= maxMIMENameLen {
return n[1:n[0]]
}
return n
}
func ianaName(x int) string {
n := ianaNames[x]
// See gen.go for a description of the encoding.
if n[0] <= maxMIMENameLen {
return n[n[0]:]
}
return n
}
func mibName(x int) string {
return mibNames[x]
}
var encodings = [numIANA]encoding.Encoding{
enc106: unicode.UTF8,
enc1015: unicode.UTF16(unicode.BigEndian, unicode.UseBOM),
enc1013: unicode.UTF16(unicode.BigEndian, unicode.IgnoreBOM),
enc1014: unicode.UTF16(unicode.LittleEndian, unicode.IgnoreBOM),
enc2028: charmap.CodePage037,
enc2011: charmap.CodePage437,
enc2009: charmap.CodePage850,
enc2010: charmap.CodePage852,
enc2046: charmap.CodePage855,
enc2089: charmap.CodePage858,
enc2048: charmap.CodePage860,
enc2013: charmap.CodePage862,
enc2050: charmap.CodePage863,
enc2052: charmap.CodePage865,
enc2086: charmap.CodePage866,
enc2102: charmap.CodePage1047,
enc2091: charmap.CodePage1140,
enc4: charmap.ISO8859_1,
enc5: charmap.ISO8859_2,
enc6: charmap.ISO8859_3,
enc7: charmap.ISO8859_4,
enc8: charmap.ISO8859_5,
enc9: charmap.ISO8859_6,
enc81: charmap.ISO8859_6E,
enc82: charmap.ISO8859_6I,
enc10: charmap.ISO8859_7,
enc11: charmap.ISO8859_8,
enc84: charmap.ISO8859_8E,
enc85: charmap.ISO8859_8I,
enc12: charmap.ISO8859_9,
enc13: charmap.ISO8859_10,
enc109: charmap.ISO8859_13,
enc110: charmap.ISO8859_14,
enc111: charmap.ISO8859_15,
enc112: charmap.ISO8859_16,
enc2084: charmap.KOI8R,
enc2088: charmap.KOI8U,
enc2027: charmap.Macintosh,
enc2109: charmap.Windows874,
enc2250: charmap.Windows1250,
enc2251: charmap.Windows1251,
enc2252: charmap.Windows1252,
enc2253: charmap.Windows1253,
enc2254: charmap.Windows1254,
enc2255: charmap.Windows1255,
enc2256: charmap.Windows1256,
enc2257: charmap.Windows1257,
enc2258: charmap.Windows1258,
enc18: japanese.EUCJP,
enc39: japanese.ISO2022JP,
enc17: japanese.ShiftJIS,
enc38: korean.EUCKR,
enc114: simplifiedchinese.GB18030,
enc113: simplifiedchinese.GBK,
enc2085: simplifiedchinese.HZGB2312,
enc2026: traditionalchinese.Big5,
}

View File

@ -1,192 +0,0 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package ianaindex
import (
"testing"
"golang.org/x/text/encoding"
"golang.org/x/text/encoding/charmap"
"golang.org/x/text/encoding/internal/identifier"
"golang.org/x/text/encoding/japanese"
"golang.org/x/text/encoding/korean"
"golang.org/x/text/encoding/simplifiedchinese"
"golang.org/x/text/encoding/traditionalchinese"
"golang.org/x/text/encoding/unicode"
)
var All = [][]encoding.Encoding{
unicode.All,
charmap.All,
japanese.All,
korean.All,
simplifiedchinese.All,
traditionalchinese.All,
}
// TestAllIANA tests whether an Encoding supported in x/text is defined by IANA but
// not supported by this package.
func TestAllIANA(t *testing.T) {
for _, ea := range All {
for _, e := range ea {
mib, _ := e.(identifier.Interface).ID()
if x := findMIB(ianaToMIB, mib); x != -1 && encodings[x] == nil {
t.Errorf("supported MIB %v (%v) not in index", mib, e)
}
}
}
}
// TestNotSupported reports the encodings in IANA, but not by x/text.
func TestNotSupported(t *testing.T) {
mibs := map[identifier.MIB]bool{}
for _, ea := range All {
for _, e := range ea {
mib, _ := e.(identifier.Interface).ID()
mibs[mib] = true
}
}
// Many encodings in the IANA index will likely not be supported by the
// Go encodings. That is fine.
// TODO: consider whether we should add this test.
// for code, mib := range ianaToMIB {
// t.Run(fmt.Sprint("IANA:", mib), func(t *testing.T) {
// if !mibs[mib] {
// t.Skipf("IANA encoding %s (MIB %v) not supported",
// ianaNames[code], mib)
// }
// })
// }
}
func TestEncoding(t *testing.T) {
testCases := []struct {
index *Index
name string
canonical string
err error
}{
{MIME, "utf-8", "UTF-8", nil},
{MIME, " utf-8 ", "UTF-8", nil},
{MIME, " l5 ", "ISO-8859-9", nil},
{MIME, "latin5 ", "ISO-8859-9", nil},
{MIME, "LATIN5 ", "ISO-8859-9", nil},
{MIME, "latin 5", "", errInvalidName},
{MIME, "latin-5", "", errInvalidName},
{IANA, "utf-8", "UTF-8", nil},
{IANA, " utf-8 ", "UTF-8", nil},
{IANA, " l5 ", "ISO_8859-9:1989", nil},
{IANA, "latin5 ", "ISO_8859-9:1989", nil},
{IANA, "LATIN5 ", "ISO_8859-9:1989", nil},
{IANA, "latin 5", "", errInvalidName},
{IANA, "latin-5", "", errInvalidName},
{MIB, "utf-8", "UTF8", nil},
{MIB, " utf-8 ", "UTF8", nil},
{MIB, " l5 ", "ISOLatin5", nil},
{MIB, "latin5 ", "ISOLatin5", nil},
{MIB, "LATIN5 ", "ISOLatin5", nil},
{MIB, "latin 5", "", errInvalidName},
{MIB, "latin-5", "", errInvalidName},
}
for i, tc := range testCases {
enc, err := tc.index.Encoding(tc.name)
if err != tc.err {
t.Errorf("%d: error was %v; want %v", i, err, tc.err)
}
if err != nil {
continue
}
if got, err := tc.index.Name(enc); got != tc.canonical {
t.Errorf("%d: Name(Encoding(%q)) = %q; want %q (%v)", i, tc.name, got, tc.canonical, err)
}
}
}
func TestTables(t *testing.T) {
for i, x := range []*Index{MIME, IANA} {
for name, index := range x.alias {
got, err := x.Encoding(name)
if err != nil {
t.Errorf("%d%s:err: unexpected error %v", i, name, err)
}
if want := x.enc[index]; got != want {
t.Errorf("%d%s:encoding: got %v; want %v", i, name, got, want)
}
if got != nil {
mib, _ := got.(identifier.Interface).ID()
if i := findMIB(x.toMIB, mib); i != index {
t.Errorf("%d%s:mib: got %d; want %d", i, name, i, index)
}
}
}
}
}
type unsupported struct {
encoding.Encoding
}
func (unsupported) ID() (identifier.MIB, string) { return 9999, "" }
func TestName(t *testing.T) {
testCases := []struct {
desc string
enc encoding.Encoding
f func(e encoding.Encoding) (string, error)
name string
err error
}{{
"defined encoding",
charmap.ISO8859_2,
MIME.Name,
"ISO-8859-2",
nil,
}, {
"defined Unicode encoding",
unicode.UTF16(unicode.BigEndian, unicode.IgnoreBOM),
IANA.Name,
"UTF-16BE",
nil,
}, {
"another defined Unicode encoding",
unicode.UTF16(unicode.BigEndian, unicode.UseBOM),
MIME.Name,
"UTF-16",
nil,
}, {
"unknown Unicode encoding",
unicode.UTF16(unicode.BigEndian, unicode.ExpectBOM),
MIME.Name,
"",
errUnknown,
}, {
"undefined encoding",
unsupported{},
MIME.Name,
"",
errUnsupported,
}, {
"undefined other encoding in HTML standard",
charmap.CodePage437,
IANA.Name,
"IBM437",
nil,
}, {
"unknown encoding",
encoding.Nop,
IANA.Name,
"",
errUnknown,
}}
for i, tc := range testCases {
name, err := tc.f(tc.enc)
if name != tc.name || err != tc.err {
t.Errorf("%d:%s: got %q, %v; want %q, %v", i, tc.desc, name, err, tc.name, tc.err)
}
}
}

File diff suppressed because it is too large.

View File

@ -1,180 +0,0 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package enctest
import (
"bytes"
"fmt"
"io"
"io/ioutil"
"strings"
"testing"
"golang.org/x/text/encoding"
"golang.org/x/text/encoding/internal/identifier"
"golang.org/x/text/transform"
)
// Encoder or Decoder
type Transcoder interface {
transform.Transformer
Bytes([]byte) ([]byte, error)
String(string) (string, error)
}
func TestEncoding(t *testing.T, e encoding.Encoding, encoded, utf8, prefix, suffix string) {
for _, direction := range []string{"Decode", "Encode"} {
t.Run(fmt.Sprintf("%v/%s", e, direction), func(t *testing.T) {
var coder Transcoder
var want, src, wPrefix, sPrefix, wSuffix, sSuffix string
if direction == "Decode" {
coder, want, src = e.NewDecoder(), utf8, encoded
wPrefix, sPrefix, wSuffix, sSuffix = "", prefix, "", suffix
} else {
coder, want, src = e.NewEncoder(), encoded, utf8
wPrefix, sPrefix, wSuffix, sSuffix = prefix, "", suffix, ""
}
dst := make([]byte, len(wPrefix)+len(want)+len(wSuffix))
nDst, nSrc, err := coder.Transform(dst, []byte(sPrefix+src+sSuffix), true)
if err != nil {
t.Fatal(err)
}
if nDst != len(wPrefix)+len(want)+len(wSuffix) {
t.Fatalf("nDst got %d, want %d",
nDst, len(wPrefix)+len(want)+len(wSuffix))
}
if nSrc != len(sPrefix)+len(src)+len(sSuffix) {
t.Fatalf("nSrc got %d, want %d",
nSrc, len(sPrefix)+len(src)+len(sSuffix))
}
if got := string(dst); got != wPrefix+want+wSuffix {
t.Fatalf("\ngot %q\nwant %q", got, wPrefix+want+wSuffix)
}
for _, n := range []int{0, 1, 2, 10, 123, 4567} {
input := sPrefix + strings.Repeat(src, n) + sSuffix
g, err := coder.String(input)
if err != nil {
t.Fatalf("Bytes: n=%d: %v", n, err)
}
if len(g) == 0 && len(input) == 0 {
// If the input is empty then the output can be empty,
// regardless of whatever wPrefix is.
continue
}
got1, want1 := string(g), wPrefix+strings.Repeat(want, n)+wSuffix
if got1 != want1 {
t.Fatalf("ReadAll: n=%d\ngot %q\nwant %q",
n, trim(got1), trim(want1))
}
}
})
}
}
func TestFile(t *testing.T, e encoding.Encoding) {
for _, dir := range []string{"Decode", "Encode"} {
t.Run(fmt.Sprintf("%s/%s", e, dir), func(t *testing.T) {
dst, src, transformer, err := load(dir, e)
if err != nil {
t.Fatalf("load: %v", err)
}
buf, err := transformer.Bytes(src)
if err != nil {
t.Fatalf("transform: %v", err)
}
if !bytes.Equal(buf, dst) {
t.Error("transformed bytes did not match golden file")
}
})
}
}
func Benchmark(b *testing.B, enc encoding.Encoding) {
for _, direction := range []string{"Decode", "Encode"} {
b.Run(fmt.Sprintf("%s/%s", enc, direction), func(b *testing.B) {
_, src, transformer, err := load(direction, enc)
if err != nil {
b.Fatal(err)
}
b.SetBytes(int64(len(src)))
b.ResetTimer()
for i := 0; i < b.N; i++ {
r := transform.NewReader(bytes.NewReader(src), transformer)
io.Copy(ioutil.Discard, r)
}
})
}
}
// testdataFiles are files in testdata/*.txt.
var testdataFiles = []struct {
mib identifier.MIB
basename, ext string
}{
{identifier.Windows1252, "candide", "windows-1252"},
{identifier.EUCPkdFmtJapanese, "rashomon", "euc-jp"},
{identifier.ISO2022JP, "rashomon", "iso-2022-jp"},
{identifier.ShiftJIS, "rashomon", "shift-jis"},
{identifier.EUCKR, "unsu-joh-eun-nal", "euc-kr"},
{identifier.GBK, "sunzi-bingfa-simplified", "gbk"},
{identifier.HZGB2312, "sunzi-bingfa-gb-levels-1-and-2", "hz-gb2312"},
{identifier.Big5, "sunzi-bingfa-traditional", "big5"},
{identifier.UTF16LE, "candide", "utf-16le"},
{identifier.UTF8, "candide", "utf-8"},
{identifier.UTF32BE, "candide", "utf-32be"},
// GB18030 is a superset of GBK and is nominally a Simplified Chinese
// encoding, but it can also represent the entire Basic Multilingual
// Plane, including codepoints like 'â' that aren't encodable by GBK.
// GB18030 on Simplified Chinese should perform similarly to GBK on
// Simplified Chinese. GB18030 on "candide" is more interesting.
{identifier.GB18030, "candide", "gb18030"},
}
func load(direction string, enc encoding.Encoding) ([]byte, []byte, Transcoder, error) {
basename, ext, count := "", "", 0
for _, tf := range testdataFiles {
if mib, _ := enc.(identifier.Interface).ID(); tf.mib == mib {
basename, ext = tf.basename, tf.ext
count++
}
}
if count != 1 {
if count == 0 {
return nil, nil, nil, fmt.Errorf("no testdataFiles for %s", enc)
}
return nil, nil, nil, fmt.Errorf("too many testdataFiles for %s", enc)
}
dstFile := fmt.Sprintf("../testdata/%s-%s.txt", basename, ext)
srcFile := fmt.Sprintf("../testdata/%s-utf-8.txt", basename)
var coder Transcoder = encoding.ReplaceUnsupported(enc.NewEncoder())
if direction == "Decode" {
dstFile, srcFile = srcFile, dstFile
coder = enc.NewDecoder()
}
dst, err := ioutil.ReadFile(dstFile)
if err != nil {
if dst, err = ioutil.ReadFile("../" + dstFile); err != nil {
return nil, nil, nil, err
}
}
src, err := ioutil.ReadFile(srcFile)
if err != nil {
if src, err = ioutil.ReadFile("../" + srcFile); err != nil {
return nil, nil, nil, err
}
}
return dst, src, coder, nil
}
func trim(s string) string {
if len(s) < 120 {
return s
}
return s[:50] + "..." + s[len(s)-50:]
}
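
A minimal sketch of how a concrete encoding package might drive this shared harness from its own tests; the package name, sample strings, and test names are illustrative only:

package charmap_test

import (
	"testing"

	"golang.org/x/text/encoding/charmap"
	"golang.org/x/text/encoding/internal/enctest"
)

func TestWindows1252(t *testing.T) {
	// Arguments: encoding, encoded form, UTF-8 form, prefix, suffix.
	enctest.TestEncoding(t, charmap.Windows1252, "Gar\xe7on", "Garçon", "", "")
}

func TestWindows1252Files(t *testing.T) {
	enctest.TestFile(t, charmap.Windows1252)
}

func BenchmarkWindows1252(b *testing.B) {
	enctest.Benchmark(b, charmap.Windows1252)
}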

View File

@ -1,137 +0,0 @@
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
package main
import (
"bytes"
"encoding/xml"
"fmt"
"io"
"log"
"strings"
"golang.org/x/text/internal/gen"
)
type registry struct {
XMLName xml.Name `xml:"registry"`
Updated string `xml:"updated"`
Registry []struct {
ID string `xml:"id,attr"`
Record []struct {
Name string `xml:"name"`
Xref []struct {
Type string `xml:"type,attr"`
Data string `xml:"data,attr"`
} `xml:"xref"`
Desc struct {
Data string `xml:",innerxml"`
// Any []struct {
// Data string `xml:",chardata"`
// } `xml:",any"`
// Data string `xml:",chardata"`
} `xml:"description,"`
MIB string `xml:"value"`
Alias []string `xml:"alias"`
MIME string `xml:"preferred_alias"`
} `xml:"record"`
} `xml:"registry"`
}
func main() {
r := gen.OpenIANAFile("assignments/character-sets/character-sets.xml")
reg := &registry{}
if err := xml.NewDecoder(r).Decode(&reg); err != nil && err != io.EOF {
log.Fatalf("Error decoding charset registry: %v", err)
}
if len(reg.Registry) == 0 || reg.Registry[0].ID != "character-sets-1" {
log.Fatalf("Unexpected ID %s", reg.Registry[0].ID)
}
w := &bytes.Buffer{}
fmt.Fprintf(w, "const (\n")
for _, rec := range reg.Registry[0].Record {
constName := ""
for _, a := range rec.Alias {
if strings.HasPrefix(a, "cs") && strings.IndexByte(a, '-') == -1 {
// Some of the constant definitions have comments in them. Strip those.
constName = strings.Title(strings.SplitN(a[2:], "\n", 2)[0])
}
}
if constName == "" {
switch rec.MIB {
case "2085":
constName = "HZGB2312" // Not listed as alias for some reason.
default:
log.Fatalf("No cs alias defined for %s.", rec.MIB)
}
}
if rec.MIME != "" {
rec.MIME = fmt.Sprintf(" (MIME: %s)", rec.MIME)
}
fmt.Fprintf(w, "// %s is the MIB identifier with IANA name %s%s.\n//\n", constName, rec.Name, rec.MIME)
if len(rec.Desc.Data) > 0 {
fmt.Fprint(w, "// ")
d := xml.NewDecoder(strings.NewReader(rec.Desc.Data))
inElem := true
attr := ""
for {
t, err := d.Token()
if err != nil {
if err != io.EOF {
log.Fatal(err)
}
break
}
switch x := t.(type) {
case xml.CharData:
attr = "" // Don't need attribute info.
a := bytes.Split([]byte(x), []byte("\n"))
for i, b := range a {
if b = bytes.TrimSpace(b); len(b) != 0 {
if !inElem && i > 0 {
fmt.Fprint(w, "\n// ")
}
inElem = false
fmt.Fprintf(w, "%s ", string(b))
}
}
case xml.StartElement:
if x.Name.Local == "xref" {
inElem = true
use := false
for _, a := range x.Attr {
if a.Name.Local == "type" {
use = use || a.Value != "person"
}
if a.Name.Local == "data" && use {
attr = a.Value + " "
}
}
}
case xml.EndElement:
inElem = false
fmt.Fprint(w, attr)
}
}
fmt.Fprint(w, "\n")
}
for _, x := range rec.Xref {
switch x.Type {
case "rfc":
fmt.Fprintf(w, "// Reference: %s\n", strings.ToUpper(x.Data))
case "uri":
fmt.Fprintf(w, "// Reference: %s\n", x.Data)
}
}
fmt.Fprintf(w, "%s MIB = %s\n", constName, rec.MIB)
fmt.Fprintln(w)
}
fmt.Fprintln(w, ")")
gen.WriteGoFile("mib.go", "identifier", w.Bytes())
}

Some files were not shown because too many files have changed in this diff.