Commit 6cfe4a6a authored by Martin Lowe

Add Solr search support to base marketplace application


Added dynamic Solr support to the Marketplace API using the 'q' query
string param and a service that consumes it. For now, caching is disabled
on text searches so the functionality could ship sooner. This is taken on
as technical debt and documented on GitHub.

Signed-off-by: Martin Lowe <martin.lowe@eclipse-foundation.org>
parent 50057918
Merge request !78: Base implementation of Solr+MariaDB combo persistence and index
Showing 3022 additions and 23 deletions
@@ -9,6 +9,7 @@ Proof of concept project within the Microservice initiative, the Foundation look
1. Installed and configured JDK 1.8+
1. Apache Maven 3.5.3+
1. Running instance of MariaDB (Docker instructions below)
1. Running instance of Solr server (version 5.5.5 currently supported)
1. GraalVM (for compilation of native-image)
### Optional requirements
@@ -25,6 +26,8 @@ This section will outline configuration values that need to be checked and updated
1. Create a copy of `./config/sample.secret.properties` named `secret.properties` in a location of your choosing on the system, with the config folder in the project root being default configured. If changed, keep this path as it is needed to start the environment later.
1. Update `quarkus.datasource.password` to be the password for the MariaDB user in the newly created `secret.properties` file.
1. Log in to the MariaDB instance and ensure that the database defined in the JDBC string exists. By default, the name of the database is `mpc_db`. This database can be created using the command `CREATE DATABASE mpc_db;`.
1. When using the Solr search engine, a couple of properties need to be added to the properties and secret.properties files. The host property (`eclipse.solr.host`) should be the root URL of your Solr instance (e.g. http://localhost:8093/solr) to allow connections for search indexing. The core property (`eclipse.solr.core`) should be the name of the core that will store your marketplace indexes. If a core does not exist yet, create one through the admin panel of the Solr server and update the core value if needed. Sample snippets for this and the following step are shown after this list.
1. To properly enable the core to work with this application, the configuration on the Solr server should be updated. Copy the contents of `./config/mpc_dev` into your core's configuration folder; an example path for this folder is `/opt/solr/server/solr/marketplace`, though this may change based on how the server is installed and configured. Ensure that these files match the ownership of the other files in this location, otherwise the Solr core may not work as intended.
1. By default, this application binds to port 8090. If port 8090 is occupied by another service, the value of `quarkus.http.port` can be modified to designate a different port.
1. In order to protect endpoints for write operations, an introspection endpoint has been configured to validate OAuth tokens. This introspection endpoint should match the requirements set out by the OAuth group for such endpoints. The URL should be set in `quarkus.oauth2.introspection-url`.
* A property meant for development purposes has been added to this stack to bypass OAuth calls. If set, all calls will return as if authenticated as an admin. The property and value `eclipse.oauth.override=true` can be set in the `application.properties` file to enable this feature.
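
For reference, a minimal sketch of how these properties might be laid out once the steps above are done. The values below are placeholders (host, core name and introspection URL will differ per environment), and the split between the two files may vary in your setup:

```properties
# application.properties -- non-secret configuration
quarkus.http.port=8090
eclipse.solr.host=http://localhost:8093/solr
eclipse.solr.core=marketplace
quarkus.oauth2.introspection-url=https://auth.example.org/oauth2/introspect
# development only: bypass OAuth introspection (all calls act as admin)
# eclipse.oauth.override=true

# secret.properties -- kept outside of version control
quarkus.datasource.password=changeme
```

Similarly, a sketch of the core-configuration copy step, using the example path from above (the `solr` user/group is an assumption and may differ per install):

```shell
cp -r ./config/mpc_dev/* /opt/solr/server/solr/marketplace/
chown -R solr:solr /opt/solr/server/solr/marketplace
```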
<?xml version="1.0" encoding="UTF-8" ?>
<!--
This file allows you to boost certain search items to the top of search
results. You can find out an item's ID by searching directly on the Solr
server. The item IDs are in general constructed as follows:
Search API:
$document->id = $index_id . '-' . $item_id;
Apache Solr Search Integration:
$document->id = $site_hash . '/' . $entity_type . '/' . $entity->id;
If you want this file to be automatically re-loaded when a Solr commit takes
place (e.g., if you have an automatic script active which updates elevate.xml
according to newly-indexed data), place it into Solr's data/ directory.
Otherwise, place it with the other configuration files into the conf/
directory.
See http://wiki.apache.org/solr/QueryElevationComponent for more information.
-->
<elevate>
<!-- Example for ranking the node #1 first in searches for "example query": -->
<!--
<query text="example query">
<doc id="default_node_index-1" />
<doc id="7v3jsc/node/1" />
</query>
-->
<!-- Multiple <query> elements can be specified, contained in one <elevate>. -->
<!-- <query text="...">...</query> -->
</elevate>
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# a couple of test stopwords to test that the words are really being
# configured from this file:
stopworda
stopwordb
# Standard english stop words taken from Lucene's StopAnalyzer
a
an
and
are
as
at
be
but
by
for
if
in
into
is
it
no
not
of
on
or
such
that
the
their
then
there
these
they
this
to
was
will
with
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Syntax:
# "source" => "target"
# "source".length() > 0 (source cannot be empty.)
# "target".length() >= 0 (target can be empty.)
# example:
# "À" => "A"
# "\u00C0" => "A"
# "\u00C0" => "\u0041"
# "ß" => "ss"
# "\t" => " "
# "\n" => ""
# À => A
"\u00C0" => "A"
# Á => A
"\u00C1" => "A"
# Â => A
"\u00C2" => "A"
# Ã => A
"\u00C3" => "A"
# Ä => A
"\u00C4" => "A"
# Å => A
"\u00C5" => "A"
# Æ => AE
"\u00C6" => "AE"
# Ç => C
"\u00C7" => "C"
# È => E
"\u00C8" => "E"
# É => E
"\u00C9" => "E"
# Ê => E
"\u00CA" => "E"
# Ë => E
"\u00CB" => "E"
# Ì => I
"\u00CC" => "I"
# Í => I
"\u00CD" => "I"
# Î => I
"\u00CE" => "I"
# Ï => I
"\u00CF" => "I"
# IJ => IJ
"\u0132" => "IJ"
# Ð => D
"\u00D0" => "D"
# Ñ => N
"\u00D1" => "N"
# Ò => O
"\u00D2" => "O"
# Ó => O
"\u00D3" => "O"
# Ô => O
"\u00D4" => "O"
# Õ => O
"\u00D5" => "O"
# Ö => O
"\u00D6" => "O"
# Ø => O
"\u00D8" => "O"
# Œ => OE
"\u0152" => "OE"
# Þ => TH
"\u00DE" => "TH"
# Ù => U
"\u00D9" => "U"
# Ú => U
"\u00DA" => "U"
# Û => U
"\u00DB" => "U"
# Ü => U
"\u00DC" => "U"
# Ý => Y
"\u00DD" => "Y"
# Ÿ => Y
"\u0178" => "Y"
# à => a
"\u00E0" => "a"
# á => a
"\u00E1" => "a"
# â => a
"\u00E2" => "a"
# ã => a
"\u00E3" => "a"
# ä => a
"\u00E4" => "a"
# å => a
"\u00E5" => "a"
# æ => ae
"\u00E6" => "ae"
# ç => c
"\u00E7" => "c"
# è => e
"\u00E8" => "e"
# é => e
"\u00E9" => "e"
# ê => e
"\u00EA" => "e"
# ë => e
"\u00EB" => "e"
# ì => i
"\u00EC" => "i"
# í => i
"\u00ED" => "i"
# î => i
"\u00EE" => "i"
# ï => i
"\u00EF" => "i"
# ij => ij
"\u0133" => "ij"
# ð => d
"\u00F0" => "d"
# ñ => n
"\u00F1" => "n"
# ò => o
"\u00F2" => "o"
# ó => o
"\u00F3" => "o"
# ô => o
"\u00F4" => "o"
# õ => o
"\u00F5" => "o"
# ö => o
"\u00F6" => "o"
# ø => o
"\u00F8" => "o"
# œ => oe
"\u0153" => "oe"
# ß => ss
"\u00DF" => "ss"
# þ => th
"\u00FE" => "th"
# ù => u
"\u00F9" => "u"
# ú => u
"\u00FA" => "u"
# û => u
"\u00FB" => "u"
# ü => u
"\u00FC" => "u"
# ý => y
"\u00FD" => "y"
# ÿ => y
"\u00FF" => "y"
# ff => ff
"\uFB00" => "ff"
# fi => fi
"\uFB01" => "fi"
# fl => fl
"\uFB02" => "fl"
# ffi => ffi
"\uFB03" => "ffi"
# ffl => ffl
"\uFB04" => "ffl"
# ſt => ft
"\uFB05" => "ft"
# st => st
"\uFB06" => "st"
#-----------------------------------------------------------------------
# This file blocks words from being operated on by the stemmer and word delimiter.
&amp;
&lt;
&gt;
&#039;
&quot;
<?xml version="1.0" encoding="UTF-8" ?>
<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<schema name="mpc_data" version="1.6">
<!-- If you remove this field, you must _also_ disable the update log in solrconfig.xml
or Solr won't start. _version_ and update log are required for SolrCloud
-->
<field name="_version_" type="long" indexed="true" stored="true" />
<!-- points to the root document of a block of nested documents. Required for nested
document support, may be removed otherwise
-->
<field name="_root_" type="string" indexed="true" stored="false" />
<!-- Only remove the "id" field if you have a very good reason to. While not strictly
required, it is highly recommended. A <uniqueKey> is present in almost all Solr
installations. See the <uniqueKey> declaration below where <uniqueKey> is set to "id".
Do NOT change the type and apply index-time analysis to the <uniqueKey> as it will likely
make routing in SolrCloud and document replacement in general fail. Limited _query_ time
analysis is possible as long as the indexing process is guaranteed to index the term
in a compatible way. Any analysis applied to the <uniqueKey> should _not_ produce multiple
tokens
-->
<field name="id" type="string" indexed="true" stored="true" required="true" multiValued="false" />
<!-- Dynamic field definitions allow using convention over configuration
for fields via the specification of patterns to match field names.
EXAMPLE: name="*_i" will match any field ending in _i (like myid_i, z_i)
RESTRICTION: the glob-like pattern in the name attribute must have
a "*" only at the start or the end. -->
<dynamicField name="*_i" type="int" indexed="true" stored="true" />
<dynamicField name="*_is" type="int" indexed="true" stored="true" multiValued="true" />
<dynamicField name="*_s" type="string" indexed="true" stored="true" />
<dynamicField name="*_ss" type="string" indexed="true" stored="true" multiValued="true" />
<dynamicField name="*_l" type="long" indexed="true" stored="true" />
<dynamicField name="*_ls" type="long" indexed="true" stored="true" multiValued="true" />
<dynamicField name="*_t" type="text_general" indexed="true" stored="true" />
<dynamicField name="*_txt" type="text_general" indexed="true" stored="true" multiValued="true" />
<dynamicField name="*_en" type="text_en" indexed="true" stored="true" multiValued="true" />
<dynamicField name="*_b" type="boolean" indexed="true" stored="true" />
<dynamicField name="*_bs" type="boolean" indexed="true" stored="true" multiValued="true" />
<dynamicField name="*_f" type="float" indexed="true" stored="true" />
<dynamicField name="*_fs" type="float" indexed="true" stored="true" multiValued="true" />
<dynamicField name="*_d" type="double" indexed="true" stored="true" />
<dynamicField name="*_ds" type="double" indexed="true" stored="true" multiValued="true" />
<!-- Type used to index the lat and lon components for the "location" FieldType -->
<dynamicField name="*_coordinate" type="tdouble" indexed="true" stored="false" />
<dynamicField name="*_dt" type="date" indexed="true" stored="true" />
<dynamicField name="*_dts" type="date" indexed="true" stored="true" multiValued="true" />
<dynamicField name="*_p" type="location" indexed="true" stored="true" />
<!-- some trie-coded dynamic fields for faster range queries -->
<dynamicField name="*_ti" type="tint" indexed="true" stored="true" />
<dynamicField name="*_tl" type="tlong" indexed="true" stored="true" />
<dynamicField name="*_tf" type="tfloat" indexed="true" stored="true" />
<dynamicField name="*_td" type="tdouble" indexed="true" stored="true" />
<dynamicField name="*_tdt" type="tdate" indexed="true" stored="true" />
<dynamicField name="ignored_*" type="ignored" multiValued="true" />
<dynamicField name="*" type="ignored" multiValued="true" />
<dynamicField name="random_*" type="random" />
<!-- Field to use to determine and enforce document uniqueness.
Unless this field is marked with required="false", it will be a required field
-->
<uniqueKey>id</uniqueKey>
<!-- field type definitions. The "name" attribute is
just a label to be used by field definitions. The "class"
attribute and any other attributes determine the real
behavior of the fieldType.
Class names starting with "solr" refer to java classes in a
standard package such as org.apache.solr.analysis
-->
<!-- The StrField type is not analyzed, but indexed/stored verbatim.
It supports doc values but in that case the field needs to be
single-valued and either required or have a default value.
-->
<fieldType name="string" class="solr.StrField" sortMissingLast="true" />
<!-- boolean type: "true" or "false" -->
<fieldType name="boolean" class="solr.BoolField" sortMissingLast="true" />
<!-- sortMissingLast and sortMissingFirst are optional attributes
currently supported on types that are sorted internally as strings
and on numeric types.
This includes "string","boolean", and, as of 3.5 (and 4.x),
int, float, long, date, double, including the "Trie" variants.
- If sortMissingLast="true", then a sort on this field will cause documents
without the field to come after documents with the field,
regardless of the requested sort order (asc or desc).
- If sortMissingFirst="true", then a sort on this field will cause documents
without the field to come before documents with the field,
regardless of the requested sort order.
- If sortMissingLast="false" and sortMissingFirst="false" (the default),
then default lucene sorting will be used which places docs without the
field first in an ascending sort and last in a descending sort.
-->
<!--
Default numeric field types. For faster range queries, consider the tint/tfloat/tlong/tdouble types.
These fields support doc values, but they require the field to be
single-valued and either be required or have a default value.
-->
<fieldType name="int" class="solr.TrieIntField" precisionStep="0" positionIncrementGap="0" />
<fieldType name="float" class="solr.TrieFloatField" precisionStep="0" positionIncrementGap="0" />
<fieldType name="long" class="solr.TrieLongField" precisionStep="0" positionIncrementGap="0" />
<fieldType name="double" class="solr.TrieDoubleField" precisionStep="0" positionIncrementGap="0" />
<!--
Numeric field types that index each value at various levels of precision
to accelerate range queries when the number of values between the range
endpoints is large. See the javadoc for NumericRangeQuery for internal
implementation details.
Smaller precisionStep values (specified in bits) will lead to more tokens
indexed per value, slightly larger index size, and faster range queries.
A precisionStep of 0 disables indexing at different precision levels.
-->
<fieldType name="tint" class="solr.TrieIntField" precisionStep="8" positionIncrementGap="0" />
<fieldType name="tfloat" class="solr.TrieFloatField" precisionStep="8" positionIncrementGap="0" />
<fieldType name="tlong" class="solr.TrieLongField" precisionStep="8" positionIncrementGap="0" />
<fieldType name="tdouble" class="solr.TrieDoubleField" precisionStep="8" positionIncrementGap="0" />
<!-- The format for this date field is of the form 1995-12-31T23:59:59Z, and
is a more restricted form of the canonical representation of dateTime
http://www.w3.org/TR/xmlschema-2/#dateTime
The trailing "Z" designates UTC time and is mandatory.
Optional fractional seconds are allowed: 1995-12-31T23:59:59.999Z
All other components are mandatory.
Expressions can also be used to denote calculations that should be
performed relative to "NOW" to determine the value, ie...
NOW/HOUR
... Round to the start of the current hour
NOW-1DAY
... Exactly 1 day prior to now
NOW/DAY+6MONTHS+3DAYS
... 6 months and 3 days in the future from the start of
the current day
Consult the TrieDateField javadocs for more information.
Note: For faster range queries, consider the tdate type
-->
<fieldType name="date" class="solr.TrieDateField" precisionStep="0" positionIncrementGap="0" />
<!-- A Trie based date field for faster date range queries and date faceting. -->
<fieldType name="tdate" class="solr.TrieDateField" precisionStep="6" positionIncrementGap="0" />
<!--Binary data type. The data should be sent/retrieved as Base64 encoded Strings -->
<fieldType name="binary" class="solr.BinaryField" />
<!-- The "RandomSortField" is not used to store or search any
data. You can declare fields of this type in your schema
to generate pseudo-random orderings of your docs for sorting
or function purposes. The ordering is generated based on the field
name and the version of the index. As long as the index version
remains unchanged, and the same field name is reused,
the ordering of the docs will be consistent.
If you want different pseudo-random orderings of documents,
for the same version of the index, use a dynamicField and
change the field name in the request.
-->
<fieldType name="random" class="solr.RandomSortField" indexed="true" />
<!-- solr.TextField allows the specification of custom text analyzers
specified as a tokenizer and a list of token filters. Different
analyzers may be specified for indexing and querying.
The optional positionIncrementGap puts space between multiple fields of
this type on the same document, with the purpose of preventing false phrase
matching across fields.
For more info on customizing your analyzer chain, please see
http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters
-->
<!-- A text field that only splits on whitespace for exact matching of words -->
<dynamicField name="*_ws" type="text_ws" indexed="true" stored="true" />
<fieldType name="text_ws" class="solr.TextField" positionIncrementGap="100">
<analyzer>
<tokenizer class="solr.WhitespaceTokenizerFactory" />
</analyzer>
</fieldType>
<!-- A general text field that has reasonable, generic
cross-language defaults: it tokenizes with StandardTokenizer,
removes stop words from case-insensitive "stopwords.txt"
(empty by default), and down cases. At query time only, it
also applies synonyms. -->
<dynamicField name="*_txt_gen" type="text_general" indexed="true" stored="false" />
<fieldType name="text_general" class="solr.TextField" positionIncrementGap="100">
<analyzer type="index">
<tokenizer class="solr.StandardTokenizerFactory" />
<filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />
<filter class="solr.LowerCaseFilterFactory" />
</analyzer>
<analyzer type="query">
<tokenizer class="solr.StandardTokenizerFactory" />
<filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />
<filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true" />
<filter class="solr.LowerCaseFilterFactory" />
</analyzer>
</fieldType>
<!-- A text field with defaults appropriate for English: it
tokenizes with StandardTokenizer, removes English stop words
(lang/stopwords_en.txt), down cases, protects words from protwords.txt, and
finally applies Porter's stemming. The query time analyzer
also applies synonyms from synonyms.txt. -->
<dynamicField name="*_txt_en" type="text_en_splitting" indexed="true" stored="false" />
<fieldType name="text_en" class="solr.TextField" positionIncrementGap="100">
<analyzer type="index">
<tokenizer class="solr.StandardTokenizerFactory" />
<filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_en.txt" />
<filter class="solr.LowerCaseFilterFactory" />
<filter class="solr.EnglishPossessiveFilterFactory" />
<filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt" />
<filter class="solr.PorterStemFilterFactory" />
</analyzer>
<analyzer type="query">
<tokenizer class="solr.StandardTokenizerFactory" />
<filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true" />
<filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_en.txt" />
<filter class="solr.LowerCaseFilterFactory" />
<filter class="solr.EnglishPossessiveFilterFactory" />
<filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt" />
<filter class="solr.PorterStemFilterFactory" />
</analyzer>
</fieldType>
<!-- A text field with defaults appropriate for English, plus
aggressive word-splitting and autophrase features enabled.
This field is just like text_en, except it adds
WordDelimiterFilter to enable splitting and matching of
words on case-change, alpha numeric boundaries, and
non-alphanumeric chars. This means certain compound word
cases will work, for example query "wi fi" will match
document "WiFi" or "wi-fi".
-->
<dynamicField name="*_txt_en_split" type="text_en_splitting" indexed="true" stored="false" />
<fieldType name="text_en_splitting" class="solr.TextField" positionIncrementGap="100"
autoGeneratePhraseQueries="true">
<analyzer type="index">
<tokenizer class="solr.WhitespaceTokenizerFactory" />
<!-- in this example, we will only use synonyms at query time
<filter class="solr.SynonymFilterFactory" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>
-->
<!-- Case insensitive stop word removal.
-->
<filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_en.txt" />
<filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1"
catenateWords="1" catenateNumbers="1" catenateAll="0" splitOnCaseChange="1" />
<filter class="solr.LowerCaseFilterFactory" />
<filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt" />
<filter class="solr.PorterStemFilterFactory" />
</analyzer>
<analyzer type="query">
<tokenizer class="solr.WhitespaceTokenizerFactory" />
<filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true" />
<filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_en.txt" />
<filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1"
catenateWords="0" catenateNumbers="0" catenateAll="0" splitOnCaseChange="1" />
<filter class="solr.LowerCaseFilterFactory" />
<filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt" />
<filter class="solr.PorterStemFilterFactory" />
</analyzer>
</fieldType>
<!-- Less flexible matching, but less false matches. Probably not ideal for product names,
but may be good for SKUs. Can insert dashes in the wrong place and still match. -->
<dynamicField name="*_txt_en_split_tight" type="text_en_splitting_tight" indexed="true" stored="false" />
<fieldType name="text_en_splitting_tight" class="solr.TextField" positionIncrementGap="100"
autoGeneratePhraseQueries="true">
<analyzer>
<tokenizer class="solr.WhitespaceTokenizerFactory" />
<filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="false" />
<filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_en.txt" />
<filter class="solr.WordDelimiterFilterFactory" generateWordParts="0" generateNumberParts="0"
catenateWords="1" catenateNumbers="1" catenateAll="0" />
<filter class="solr.LowerCaseFilterFactory" />
<filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt" />
<filter class="solr.EnglishMinimalStemFilterFactory" />
<!-- this filter can remove any duplicate tokens that appear at the same position - sometimes
possible with WordDelimiterFilter in conjunction with stemming. -->
<filter class="solr.RemoveDuplicatesTokenFilterFactory" />
</analyzer>
</fieldType>
<!-- lowercases the entire field value, keeping it as a single token. -->
<dynamicField name="*_s_lower" type="lowercase" indexed="true" stored="false" />
<fieldType name="lowercase" class="solr.TextField" positionIncrementGap="100">
<analyzer>
<tokenizer class="solr.KeywordTokenizerFactory" />
<filter class="solr.LowerCaseFilterFactory" />
</analyzer>
</fieldType>
<!-- since fields of this type are by default not stored or indexed,
any data added to them will be ignored outright. -->
<fieldType name="ignored" stored="false" indexed="false" multiValued="true" class="solr.StrField" />
<!-- This point type indexes the coordinates as separate fields (subFields)
If subFieldType is defined, it references a type, and a dynamic field
definition is created matching *___<typename>. Alternately, if
subFieldSuffix is defined, that is used to create the subFields.
Example: if subFieldType="double", then the coordinates would be
indexed in fields myloc_0___double,myloc_1___double.
Example: if subFieldSuffix="_d" then the coordinates would be indexed
in fields myloc_0_d,myloc_1_d
The subFields are an implementation detail of the fieldType, and end
users normally should not need to know about them.
-->
<fieldType name="point" class="solr.PointType" dimension="2" subFieldSuffix="_d" />
<!-- A specialized field for geospatial search. If indexed, this fieldType must not be multivalued. -->
<fieldType name="location" class="solr.LatLonType" subFieldSuffix="_coordinate" />
</schema>
\ No newline at end of file
<?xml version="1.0" encoding="UTF-8" ?>
<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<!--
This example schema is the recommended starting point for users.
It should be kept correct and concise, usable out-of-the-box.
For more information, on how to customize this file, please see
http://lucene.apache.org/solr/guide/documents-fields-and-schema-design.html
PERFORMANCE NOTE: this schema includes many optional features and should not
be used for benchmarking. To improve performance one could
- set stored="false" for all fields possible (esp large fields) when you
only need to search on the field but don't need to return the original
value.
- set indexed="false" if you don't need to search on the field, but only
return the field as a result of searching on other indexed fields.
- remove all unneeded copyField statements
- for best index size and searching performance, set "index" to false
for all general text fields, use copyField to copy them to the
catchall "text" field, and use that for searching.
-->
<schema name="default-config" version="1.6">
<!-- attribute "name" is the name of this schema and is only used for display purposes.
version="x.y" is Solr's version number for the schema syntax and
semantics. It should not normally be changed by applications.
1.0: multiValued attribute did not exist, all fields are multiValued
by nature
1.1: multiValued attribute introduced, false by default
1.2: omitTermFreqAndPositions attribute introduced, true by default
except for text fields.
1.3: removed optional field compress feature
1.4: autoGeneratePhraseQueries attribute introduced to drive QueryParser
behavior when a single string produces multiple tokens. Defaults
to off for version >= 1.4
1.5: omitNorms defaults to true for primitive field types
(int, float, boolean, string...)
1.6: useDocValuesAsStored defaults to true.
-->
<!-- Valid attributes for fields:
name: mandatory - the name for the field
type: mandatory - the name of a field type from the
fieldTypes section
indexed: true if this field should be indexed (searchable or sortable)
stored: true if this field should be retrievable
docValues: true if this field should have doc values. Doc Values is
recommended (required, if you are using *Point fields) for faceting,
grouping, sorting and function queries. Doc Values will make the index
faster to load, more NRT-friendly and more memory-efficient.
They are currently only supported by StrField, UUIDField, all
*PointFields, and depending on the field type, they might require
the field to be single-valued, be required or have a default value
(check the documentation of the field type you're interested in for
more information)
multiValued: true if this field may contain multiple values per document
omitNorms: (expert) set to true to omit the norms associated with
this field (this disables length normalization and index-time
boosting for the field, and saves some memory). Only full-text
fields or fields that need an index-time boost need norms.
Norms are omitted for primitive (non-analyzed) types by default.
termVectors: [false] set to true to store the term vector for a
given field.
When using MoreLikeThis, fields used for similarity should be
stored for best performance.
termPositions: Store position information with the term vector.
This will increase storage costs.
termOffsets: Store offset information with the term vector. This
will increase storage costs.
required: The field is required. It will throw an error if the
value does not exist
default: a value that should be used if no value is specified
when adding a document.
-->
<!-- field names should consist of alphanumeric or underscore characters only and
not start with a digit. This is not currently strictly enforced,
but other field names will not have first class support from all components
and back compatibility is not guaranteed. Names with both leading and
trailing underscores (e.g. _version_) are reserved.
-->
<!-- In this _default configset, only four fields are pre-declared:
id, _version_, _text_ and _root_. All other fields will be type guessed and added via the
"add-unknown-fields-to-the-schema" update request processor chain declared in solrconfig.xml.
Note that many dynamic fields are also defined - you can use them to specify a
field's type via field naming conventions - see below.
WARNING: The _text_ catch-all field will significantly increase your index size.
If you don't need it, consider removing it and the corresponding copyField directive.
-->
<field name="id" type="string" indexed="true" stored="true" required="true" multiValued="false" />
<!-- docValues are enabled by default for long type so we don't need to index the version field -->
<field name="_version_" type="plong" indexed="false" stored="false" />
<!-- Dynamic field definitions allow using convention over configuration
for fields via the specification of patterns to match field names.
EXAMPLE: name="*_i" will match any field ending in _i (like myid_i, z_i)
RESTRICTION: the glob-like pattern in the name attribute must have a "*" only at the start or the end. -->
<dynamicField name="*_i" type="pint" indexed="true" stored="true" />
<dynamicField name="*_is" type="pints" indexed="true" stored="true" />
<dynamicField name="*_s" type="string" indexed="true" stored="true" />
<dynamicField name="*_ss" type="strings" indexed="true" stored="true" />
<dynamicField name="*_l" type="plong" indexed="true" stored="true" />
<dynamicField name="*_ls" type="plongs" indexed="true" stored="true" />
<dynamicField name="*_t" type="text_general" indexed="true" stored="true" multiValued="false" />
<dynamicField name="*_txt" type="text_general" indexed="true" stored="true" />
<dynamicField name="*_b" type="boolean" indexed="true" stored="true" />
<dynamicField name="*_bs" type="booleans" indexed="true" stored="true" />
<dynamicField name="*_f" type="pfloat" indexed="true" stored="true" />
<dynamicField name="*_fs" type="pfloats" indexed="true" stored="true" />
<dynamicField name="*_d" type="pdouble" indexed="true" stored="true" />
<dynamicField name="*_ds" type="pdoubles" indexed="true" stored="true" />
<dynamicField name="random_*" type="random" />
<dynamicField name="ignored_*" type="ignored" />
<dynamicField name="*_dt" type="pdate" indexed="true" stored="true" />
<dynamicField name="*_dts" type="pdate" indexed="true" stored="true" multiValued="true" />
<dynamicField name="*_p" type="location" indexed="true" stored="true" />
<dynamicField name="*_srpt" type="location_rpt" indexed="true" stored="true" />
<!-- payloaded dynamic fields -->
<dynamicField name="*_dpf" type="delimited_payloads_float" indexed="true" stored="true" />
<dynamicField name="*_dpi" type="delimited_payloads_int" indexed="true" stored="true" />
<dynamicField name="*_dps" type="delimited_payloads_string" indexed="true" stored="true" />
<dynamicField name="attr_*" type="text_general" indexed="true" stored="true" multiValued="true" />
<!-- Field to use to determine and enforce document uniqueness.
Unless this field is marked with required="false", it will be a required field
-->
<uniqueKey>id</uniqueKey>
<!-- field type definitions. The "name" attribute is
just a label to be used by field definitions. The "class"
attribute and any other attributes determine the real
behavior of the fieldType.
Class names starting with "solr" refer to java classes in a
standard package such as org.apache.solr.analysis
-->
<!-- sortMissingLast and sortMissingFirst are optional attributes
currently supported on types that are sorted internally as strings
and on numeric types.
This includes "string", "boolean", "pint", "pfloat", "plong", "pdate", "pdouble".
- If sortMissingLast="true", then a sort on this field will cause documents
without the field to come after documents with the field,
regardless of the requested sort order (asc or desc).
- If sortMissingFirst="true", then a sort on this field will cause documents
without the field to come before documents with the field,
regardless of the requested sort order.
- If sortMissingLast="false" and sortMissingFirst="false" (the default),
then default lucene sorting will be used which places docs without the
field first in an ascending sort and last in a descending sort.
-->
<!-- The StrField type is not analyzed, but indexed/stored verbatim. -->
<fieldType name="string" class="solr.StrField" sortMissingLast="true" docValues="true" />
<fieldType name="strings" class="solr.StrField" sortMissingLast="true" multiValued="true" docValues="true" />
<!-- boolean type: "true" or "false" -->
<fieldType name="boolean" class="solr.BoolField" sortMissingLast="true" />
<fieldType name="booleans" class="solr.BoolField" sortMissingLast="true" multiValued="true" />
<!--
Numeric field types that index values using KD-trees.
Point fields don't support FieldCache, so they must have docValues="true" if needed for sorting, faceting, functions, etc.
-->
<fieldType name="pint" class="solr.IntPointField" docValues="true" />
<fieldType name="pfloat" class="solr.FloatPointField" docValues="true" />
<fieldType name="plong" class="solr.LongPointField" docValues="true" />
<fieldType name="pdouble" class="solr.DoublePointField" docValues="true" />
<fieldType name="pints" class="solr.IntPointField" docValues="true" multiValued="true" />
<fieldType name="pfloats" class="solr.FloatPointField" docValues="true" multiValued="true" />
<fieldType name="plongs" class="solr.LongPointField" docValues="true" multiValued="true" />
<fieldType name="pdoubles" class="solr.DoublePointField" docValues="true" multiValued="true" />
<fieldType name="random" class="solr.RandomSortField" indexed="true" />
<!-- since fields of this type are by default not stored or indexed,
any data added to them will be ignored outright. -->
<fieldType name="ignored" stored="false" indexed="false" multiValued="true" class="solr.StrField" />
<!-- The format for this date field is of the form 1995-12-31T23:59:59Z, and
is a more restricted form of the canonical representation of dateTime
http://www.w3.org/TR/xmlschema-2/#dateTime
The trailing "Z" designates UTC time and is mandatory.
Optional fractional seconds are allowed: 1995-12-31T23:59:59.999Z
All other components are mandatory.
Expressions can also be used to denote calculations that should be
performed relative to "NOW" to determine the value, ie...
NOW/HOUR
... Round to the start of the current hour
NOW-1DAY
... Exactly 1 day prior to now
NOW/DAY+6MONTHS+3DAYS
... 6 months and 3 days in the future from the start of
the current day
-->
<!-- KD-tree versions of date fields -->
<fieldType name="pdate" class="solr.DatePointField" docValues="true" />
<fieldType name="pdates" class="solr.DatePointField" docValues="true" multiValued="true" />
<!--Binary data type. The data should be sent/retrieved as Base64 encoded Strings -->
<fieldType name="binary" class="solr.BinaryField" />
<!-- solr.TextField allows the specification of custom text analyzers
specified as a tokenizer and a list of token filters. Different
analyzers may be specified for indexing and querying.
The optional positionIncrementGap puts space between multiple fields of
this type on the same document, with the purpose of preventing false phrase
matching across fields.
For more info on customizing your analyzer chain, please see
http://lucene.apache.org/solr/guide/understanding-analyzers-tokenizers-and-filters.html#understanding-analyzers-tokenizers-and-filters
-->
<!-- One can also specify an existing Analyzer class that has a
default constructor via the class attribute on the analyzer element.
Example:
<fieldType name="text_greek" class="solr.TextField">
<analyzer class="org.apache.lucene.analysis.el.GreekAnalyzer"/>
</fieldType>
-->
<!-- A text field that only splits on whitespace for exact matching of words -->
<dynamicField name="*_ws" type="text_ws" indexed="true" stored="true" />
<fieldType name="text_ws" class="solr.TextField" positionIncrementGap="100">
<analyzer>
<tokenizer name="whitespace" />
</analyzer>
</fieldType>
<!-- A general text field that has reasonable, generic
cross-language defaults: it tokenizes with StandardTokenizer,
removes stop words from case-insensitive "stopwords.txt"
(empty by default), and down cases. At query time only, it
also applies synonyms.
-->
<dynamicField name="*_txt_gen" type="text_general" indexed="true" stored="true" />
<fieldType name="text_general" class="solr.TextField" positionIncrementGap="100" multiValued="true">
<analyzer type="index">
<tokenizer name="standard" />
<filter name="stop" ignoreCase="true" words="stopwords.txt" />
<!-- in this example, we will only use synonyms at query time
<filter name="synonymGraph" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>
<filter name="flattenGraph"/>
-->
<filter name="lowercase" />
</analyzer>
<analyzer type="query">
<tokenizer name="standard" />
<filter name="stop" ignoreCase="true" words="stopwords.txt" />
<filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" expand="true" />
<filter name="lowercase" />
</analyzer>
</fieldType>
<!-- A text field with defaults appropriate for English: it tokenizes with StandardTokenizer,
removes English stop words (lang/stopwords_en.txt), down cases, protects words from protwords.txt, and
finally applies Porter's stemming. The query time analyzer also applies synonyms from synonyms.txt. -->
<dynamicField name="*_txt_en" type="text_en" indexed="true" stored="true" />
<fieldType name="text_en" class="solr.TextField" positionIncrementGap="100">
<analyzer type="index">
<tokenizer name="standard" />
<!-- in this example, we will only use synonyms at query time
<filter name="synonymGraph" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>
<filter name="flattenGraph"/>
-->
<!-- Case insensitive stop word removal.
-->
<filter name="stop" ignoreCase="true" words="lang/stopwords_en.txt" />
<filter name="lowercase" />
<filter name="englishPossessive" />
<filter name="keywordMarker" protected="protwords.txt" />
<!-- Optionally you may want to use this less aggressive stemmer instead of PorterStemFilterFactory:
<filter name="englishMinimalStem"/>
-->
<filter name="porterStem" />
</analyzer>
<analyzer type="query">
<tokenizer name="standard" />
<filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" expand="true" />
<filter name="stop" ignoreCase="true" words="lang/stopwords_en.txt" />
<filter name="lowercase" />
<filter name="englishPossessive" />
<filter name="keywordMarker" protected="protwords.txt" />
<!-- Optionally you may want to use this less aggressive stemmer instead of PorterStemFilterFactory:
<filter name="englishMinimalStem"/>
-->
<filter name="porterStem" />
</analyzer>
</fieldType>
<!-- A text field with defaults appropriate for English, plus
aggressive word-splitting and autophrase features enabled.
This field is just like text_en, except it adds
WordDelimiterGraphFilter to enable splitting and matching of
words on case-change, alpha numeric boundaries, and
non-alphanumeric chars. This means certain compound word
cases will work, for example query "wi fi" will match
document "WiFi" or "wi-fi".
-->
<dynamicField name="*_txt_en_split" type="text_en_splitting" indexed="true" stored="true" />
<fieldType name="text_en_splitting" class="solr.TextField" positionIncrementGap="100"
autoGeneratePhraseQueries="true">
<analyzer type="index">
<tokenizer name="whitespace" />
<!-- in this example, we will only use synonyms at query time
<filter name="synonymGraph" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>
-->
<!-- Case insensitive stop word removal.
-->
<filter name="stop" ignoreCase="true" words="lang/stopwords_en.txt" />
<filter name="wordDelimiterGraph" generateWordParts="1" generateNumberParts="1" catenateWords="1"
catenateNumbers="1" catenateAll="0" splitOnCaseChange="1" />
<filter name="lowercase" />
<filter name="keywordMarker" protected="protwords.txt" />
<filter name="porterStem" />
<filter name="flattenGraph" />
</analyzer>
<analyzer type="query">
<tokenizer name="whitespace" />
<filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" expand="true" />
<filter name="stop" ignoreCase="true" words="lang/stopwords_en.txt" />
<filter name="wordDelimiterGraph" generateWordParts="1" generateNumberParts="1" catenateWords="0"
catenateNumbers="0" catenateAll="0" splitOnCaseChange="1" />
<filter name="lowercase" />
<filter name="keywordMarker" protected="protwords.txt" />
<filter name="porterStem" />
</analyzer>
</fieldType>
<!-- Less flexible matching, but less false matches. Probably not ideal for product names,
but may be good for SKUs. Can insert dashes in the wrong place and still match. -->
<dynamicField name="*_txt_en_split_tight" type="text_en_splitting_tight" indexed="true" stored="true" />
<fieldType name="text_en_splitting_tight" class="solr.TextField" positionIncrementGap="100"
autoGeneratePhraseQueries="true">
<analyzer type="index">
<tokenizer name="whitespace" />
<filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" expand="false" />
<filter name="stop" ignoreCase="true" words="lang/stopwords_en.txt" />
<filter name="wordDelimiterGraph" generateWordParts="0" generateNumberParts="0" catenateWords="1"
catenateNumbers="1" catenateAll="0" />
<filter name="lowercase" />
<filter name="keywordMarker" protected="protwords.txt" />
<filter name="englishMinimalStem" />
<!-- this filter can remove any duplicate tokens that appear at the same position - sometimes
possible with WordDelimiterGraphFilter in conjunction with stemming. -->
<filter name="removeDuplicates" />
<filter name="flattenGraph" />
</analyzer>
<analyzer type="query">
<tokenizer name="whitespace" />
<filter name="synonymGraph" synonyms="synonyms.txt" ignoreCase="true" expand="false" />
<filter name="stop" ignoreCase="true" words="lang/stopwords_en.txt" />
<filter name="wordDelimiterGraph" generateWordParts="0" generateNumberParts="0" catenateWords="1"
catenateNumbers="1" catenateAll="0" />
<filter name="lowercase" />
<filter name="keywordMarker" protected="protwords.txt" />
<filter name="englishMinimalStem" />
<!-- this filter can remove any duplicate tokens that appear at the same position - sometimes
possible with WordDelimiterGraphFilter in conjunction with stemming. -->
<filter name="removeDuplicates" />
</analyzer>
</fieldType>
<!-- lowercases the entire field value, keeping it as a single token. -->
<dynamicField name="*_s_lower" type="lowercase" indexed="true" stored="true" />
<fieldType name="lowercase" class="solr.TextField" positionIncrementGap="100">
<analyzer>
<tokenizer name="keyword" />
<filter name="lowercase" />
</analyzer>
</fieldType>
<!-- This point type indexes the coordinates as separate fields (subFields)
If subFieldType is defined, it references a type, and a dynamic field
definition is created matching *___<typename>. Alternately, if
subFieldSuffix is defined, that is used to create the subFields.
Example: if subFieldType="double", then the coordinates would be
indexed in fields myloc_0___double,myloc_1___double.
Example: if subFieldSuffix="_d" then the coordinates would be indexed
in fields myloc_0_d,myloc_1_d
The subFields are an implementation detail of the fieldType, and end
users normally should not need to know about them.
-->
<dynamicField name="*_point" type="point" indexed="true" stored="true" />
<fieldType name="point" class="solr.PointType" dimension="2" subFieldSuffix="_d" />
<!-- A specialized field for geospatial search filters and distance sorting. -->
<fieldType name="location" class="solr.LatLonPointSpatialField" docValues="true" />
<!-- A geospatial field type that supports multiValued and polygon shapes.
For more information about this and other spatial fields see:
http://lucene.apache.org/solr/guide/spatial-search.html
-->
<fieldType name="location_rpt" class="solr.SpatialRecursivePrefixTreeFieldType" geo="true" distErrPct="0.025"
maxDistErr="0.001" distanceUnits="kilometers" />
<!-- Payloaded field types -->
<fieldType name="delimited_payloads_float" stored="false" indexed="true" class="solr.TextField">
<analyzer>
<tokenizer name="whitespace" />
<filter name="delimitedPayload" encoder="float" />
</analyzer>
</fieldType>
<fieldType name="delimited_payloads_int" stored="false" indexed="true" class="solr.TextField">
<analyzer>
<tokenizer name="whitespace" />
<filter name="delimitedPayload" encoder="integer" />
</analyzer>
</fieldType>
<fieldType name="delimited_payloads_string" stored="false" indexed="true" class="solr.TextField">
<analyzer>
<tokenizer name="whitespace" />
<filter name="delimitedPayload" encoder="identity" />
</analyzer>
</fieldType>
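<!-- Illustrative note (not part of the shipped configuration): fields of the
delimited_payloads_* types expect each whitespace-separated token to carry
its payload after the filter's delimiter character, '|' by default. For
example, indexing the value "quarkus|2.0 solr|1.5" into a
delimited_payloads_float field stores the token "quarkus" with payload 2.0
and the token "solr" with payload 1.5.
-->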
</schema>
\ No newline at end of file
<fields>
<!--
Adding German dynamic field types to our Solr Schema
If you enable this, make sure you have a folder called lang with stopwords_de.txt
and synonyms_de.txt in there
This also requires enabling the content in schema_extra_types.xml
-->
<!--
<field name="label_de" type="text_de" indexed="true" stored="true" termVectors="true" omitNorms="true"/>
<field name="content_de" type="text_de" indexed="true" stored="true" termVectors="true"/>
<field name="teaser_de" type="text_de" indexed="false" stored="true"/>
<field name="path_alias_de" type="text_de" indexed="true" stored="true" termVectors="true" omitNorms="true"/>
<field name="taxonomy_names_de" type="text_de" indexed="true" stored="false" termVectors="true" multiValued="true" omitNorms="true"/>
<field name="spell_de" type="text_de" indexed="true" stored="true" multiValued="true"/>
<copyField source="label_de" dest="spell_de"/>
<copyField source="content_de" dest="spell_de"/>
<dynamicField name="tags_de_*" type="text_de" indexed="true" stored="false" omitNorms="true"/>
<dynamicField name="ts_de_*" type="text_de" indexed="true" stored="true" multiValued="false" termVectors="true"/>
<dynamicField name="tm_de_*" type="text_de" indexed="true" stored="true" multiValued="true" termVectors="true"/>
<dynamicField name="tos_de_*" type="text_de" indexed="true" stored="true" multiValued="false" termVectors="true" omitNorms="true"/>
<dynamicField name="tom_de_*" type="text_de" indexed="true" stored="true" multiValued="true" termVectors="true" omitNorms="true"/>
-->
</fields>
<types>
<!--
Adding German language support to our Solr Schema
If you enable this, make sure you have a folder called lang with stopwords_de.txt
and synonyms_de.txt in there
-->
<!--
<fieldType name="text_de" class="solr.TextField" positionIncrementGap="100">
<analyzer type="index">
<charFilter class="solr.MappingCharFilterFactory" mapping="mapping-ISOLatin1Accent.txt"/>
<tokenizer class="solr.WhitespaceTokenizerFactory"/>
<filter class="solr.StopFilterFactory" words="lang/stopwords_de.txt" format="snowball" ignoreCase="true" enablePositionIncrements="true"/>
<filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" splitOnCaseChange="1" splitOnNumerics="1" catenateWords="1" catenateNumbers="1" catenateAll="0" protected="protwords.txt" preserveOriginal="1"/>
<filter class="solr.LowerCaseFilterFactory"/>
<filter class="solr.GermanLightStemFilterFactory"/>
<filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
</analyzer>
<analyzer type="query">
<charFilter class="solr.MappingCharFilterFactory" mapping="mapping-ISOLatin1Accent.txt"/>
<tokenizer class="solr.WhitespaceTokenizerFactory"/>
<filter class="solr.SynonymFilterFactory" synonyms="lang/synonyms_de.txt" ignoreCase="true" expand="true"/>
<filter class="solr.StopFilterFactory" words="lang/stopwords_de.txt" format="snowball" ignoreCase="true" enablePositionIncrements="true"/>
<filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" splitOnCaseChange="1" splitOnNumerics="1" catenateWords="0" catenateNumbers="0" catenateAll="0" protected="protwords.txt" preserveOriginal="1"/>
<filter class="solr.LowerCaseFilterFactory"/>
<filter class="solr.GermanLightStemFilterFactory"/>
<filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
</analyzer>
</fieldType>
-->
</types>
<?xml version="1.0" encoding="UTF-8" ?>
<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<!--
For more details about configurations options that may appear in
this file, see http://wiki.apache.org/solr/SolrConfigXml.
-->
<config name="drupal-4.3-solr-4.x" >
<!-- In all configuration below, a prefix of "solr." for class names
is an alias that causes solr to search appropriate packages,
including org.apache.solr.(search|update|request|core|analysis)
You may also specify a fully qualified Java classname if you
have your own custom plugins.
-->
<!-- Set this to 'false' if you want solr to continue working after
it has encountered a severe configuration error. In a
production environment, you may want solr to keep working even
if one handler is mis-configured.
You may also set this to false using by setting the system
property:
-Dsolr.abortOnConfigurationError=false
-->
<abortOnConfigurationError>${solr.abortOnConfigurationError:true}</abortOnConfigurationError>
<!-- Controls what version of Lucene various components of Solr
adhere to. Generally, you want to use the latest version to
get all bug fixes and improvements. It is highly recommended
that you fully re-index after changing this setting as it can
affect both how text is indexed and queried.
-->
<luceneMatchVersion>${solr.luceneMatchVersion:LUCENE_40}</luceneMatchVersion>
<!-- lib directives can be used to instruct Solr to load any Jars
identified and use them to resolve any "plugins" specified in
your solrconfig.xml or schema.xml (ie: Analyzers, Request
Handlers, etc...).
All directories and paths are resolved relative to the
instanceDir.
If a "./lib" directory exists in your instanceDir, all files
found in it are included as if you had used the following
syntax...
<lib dir="./lib" />
-->
<!-- A dir option by itself adds any files found in the directory to
the classpath; this is useful for including all jars in a
directory.
-->
<lib dir="${solr.contrib.dir:../../../contrib}/extraction/lib" />
<lib dir="${solr.contrib.dir:../../../contrib}/clustering/lib/" />
<!-- The velocity library has been known to crash Solr in some
instances when deployed as a war file to Tomcat. Therefore all
references have been removed from the default configuration.
@see http://drupal.org/node/1612556
-->
<!-- <lib dir="../../contrib/velocity/lib" /> -->
<!-- When a regex is specified in addition to a directory, only the
files in that directory which completely match the regex
(anchored on both ends) will be included.
-->
<!--<lib dir="../../dist/" regex="apache-solr-cell-\d.*\.jar" />-->
<!--<lib dir="../../dist/" regex="apache-solr-clustering-\d.*\.jar" />-->
<!--<lib dir="../../dist/" regex="apache-solr-dataimporthandler-\d.*\.jar" />-->
<!--<lib dir="../../dist/" regex="apache-solr-langid-\d.*\.jar" />-->
<!-- <lib dir="../../dist/" regex="apache-solr-velocity-\d.*\.jar" /> -->
<!-- If a dir option (with or without a regex) is used and nothing
is found that matches, it will be ignored
-->
<!--<lib dir="../../contrib/clustering/lib/" />-->
<!--<lib dir="/total/crap/dir/ignored" />-->
<!-- an exact path can be used to specify a specific file. This
will cause a serious error to be logged if it can't be loaded.
-->
<!--
<lib path="../a-jar-that-does-not-exist.jar" />
-->
<!-- Data Directory
Used to specify an alternate directory to hold all index data
other than the default ./data under the Solr home. If
replication is in use, this should match the replication
configuration.
-->
<!-- <dataDir>${solr.data.dir:}</dataDir> -->
<!-- The DirectoryFactory to use for indexes.
solr.StandardDirectoryFactory, the default, is filesystem
based and tries to pick the best implementation for the current
JVM and platform. One can force a particular implementation
via solr.MMapDirectoryFactory, solr.NIOFSDirectoryFactory, or
solr.SimpleFSDirectoryFactory.
solr.RAMDirectoryFactory is memory based, not
persistent, and doesn't work with replication.
-->
<directoryFactory name="DirectoryFactory"
class="${solr.directoryFactory:solr.StandardDirectoryFactory}"/>
<!-- Index Defaults
Values here affect all index writers and act as a default
unless overridden.
WARNING: See also the <mainIndex> section below for parameters
that override these defaults for Solr's main Lucene index.
-->
<indexConfig>
<useCompoundFile>false</useCompoundFile>
<mergeFactor>4</mergeFactor>
<!-- Sets the amount of RAM that may be used by Lucene indexing
for buffering added documents and deletions before they are
flushed to the Directory. -->
<ramBufferSizeMB>32</ramBufferSizeMB>
<!-- If both ramBufferSizeMB and maxBufferedDocs are set, then
Lucene will flush based on whichever limit is hit first.
-->
<!-- <maxBufferedDocs>1000</maxBufferedDocs> -->
<maxMergeDocs>2147483647</maxMergeDocs>
<maxFieldLength>100000</maxFieldLength>
<writeLockTimeout>1000</writeLockTimeout>
<!-- Expert: Merge Policy
The Merge Policy in Lucene controls how merging is handled by
Lucene. The default in Solr 3.3 is TieredMergePolicy.
The default in 2.3 was the LogByteSizeMergePolicy,
previous versions used LogDocMergePolicy.
LogByteSizeMergePolicy chooses segments to merge based on
their size. The Lucene 2.2 default, LogDocMergePolicy chose
when to merge based on number of documents
Other implementations of MergePolicy must have a no-argument
constructor
-->
<mergePolicy class="org.apache.lucene.index.LogByteSizeMergePolicy"/>
<!-- Expert: Merge Scheduler
The Merge Scheduler in Lucene controls how merges are
performed. The ConcurrentMergeScheduler (Lucene 2.3 default)
can perform merges in the background using separate threads.
The SerialMergeScheduler (Lucene 2.2 default) does not.
-->
<!--
<mergeScheduler class="org.apache.lucene.index.ConcurrentMergeScheduler"/>
-->
<!-- LockFactory
This option specifies which Lucene LockFactory implementation
to use.
single = SingleInstanceLockFactory - suggested for a
read-only index or when there is no possibility of
another process trying to modify the index.
native = NativeFSLockFactory - uses OS native file locking.
Do not use when multiple solr webapps in the same
JVM are attempting to share a single index.
simple = SimpleFSLockFactory - uses a plain file for locking
(For backwards compatibility with Solr 1.2, 'simple' is the
default if not specified.)
More details on the nuances of each LockFactory...
http://wiki.apache.org/lucene-java/AvailableLockFactories
-->
<lockType>single</lockType>
<!-- Expert: Controls how often Lucene loads terms into memory
Default is 128 and is likely good for most everyone.
-->
<!-- <termIndexInterval>256</termIndexInterval> -->
<!-- Unlock On Startup
If true, unlock any held write or commit locks on startup.
This defeats the locking mechanism that allows multiple
processes to safely access a Lucene index, and should be used
with care.
This is not needed if lock type is 'none' or 'single'
-->
<unlockOnStartup>false</unlockOnStartup>
<!-- If true, IndexReaders will be reopened (often more efficient)
instead of closed and then opened.
-->
<reopenReaders>true</reopenReaders>
<!-- Commit Deletion Policy
Custom deletion policies can be specified here. The class must
implement org.apache.lucene.index.IndexDeletionPolicy.
http://lucene.apache.org/java/2_9_1/api/all/org/apache/lucene/index/IndexDeletionPolicy.html
The standard Solr IndexDeletionPolicy implementation supports
deleting index commit points on number of commits, age of
commit point and optimized status.
The latest commit point should always be preserved regardless
of the criteria.
-->
<deletionPolicy class="solr.SolrDeletionPolicy">
<!-- The number of commit points to be kept -->
<str name="maxCommitsToKeep">1</str>
<!-- The number of optimized commit points to be kept -->
<str name="maxOptimizedCommitsToKeep">0</str>
<!--
Delete all commit points once they have reached the given age.
Supports DateMathParser syntax e.g.
-->
<!--
<str name="maxCommitAge">30MINUTES</str>
<str name="maxCommitAge">1DAY</str>
-->
</deletionPolicy>
<!-- Lucene Infostream
To aid in advanced debugging, Lucene provides an "InfoStream"
of detailed information when indexing.
Setting the value to true will instruct the underlying Lucene
IndexWriter to write its debugging info to the specified file
-->
<infoStream file="INFOSTREAM.txt">false</infoStream>
</indexConfig>
<!-- JMX
This example enables JMX if and only if an existing MBeanServer
is found, use this if you want to configure JMX through JVM
parameters. Remove this to disable exposing Solr configuration
and statistics to JMX.
For more details see http://wiki.apache.org/solr/SolrJmx
-->
<!-- <jmx /> -->
<!-- If you want to connect to a particular server, specify the
agentId
-->
<!-- <jmx agentId="myAgent" /> -->
<!-- If you want to start a new MBeanServer, specify the serviceUrl -->
<!-- <jmx serviceUrl="service:jmx:rmi:///jndi/rmi://localhost:9999/solr"/>
-->
<!-- The default high-performance update handler -->
<updateHandler class="solr.DirectUpdateHandler2">
<!-- AutoCommit
Perform a <commit/> automatically under certain conditions.
Instead of enabling autoCommit, consider using "commitWithin"
when adding documents.
http://wiki.apache.org/solr/UpdateXmlMessages
maxDocs - Maximum number of documents to add since the last
commit before automatically triggering a new commit.
maxTime - Maximum amount of time that is allowed to pass
since a document was added before automatically
triggering a new commit.
-->
<autoCommit>
<maxDocs>${solr.autoCommit.MaxDocs:10000}</maxDocs>
<maxTime>${solr.autoCommit.MaxTime:120000}</maxTime>
</autoCommit>
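<!-- As a sketch of the "commitWithin" alternative mentioned above, a client
can bound commit latency per update request instead (document values here
are placeholders):
<add commitWithin="10000">
<doc><field name="id">example-1</field></doc>
</add>
-->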
<!-- softAutoCommit is like autoCommit except it causes a
'soft' commit which only ensures that changes are visible
but does not ensure that data is synced to disk. This is
faster and more near-realtime friendly than a hard commit.
-->
<autoSoftCommit>
<maxDocs>${solr.autoSoftCommit.MaxDocs:2000}</maxDocs>
<maxTime>${solr.autoSoftCommit.MaxTime:10000}</maxTime>
</autoSoftCommit>
<!-- Update Related Event Listeners
Various IndexWriter related events can trigger Listeners to
take actions.
postCommit - fired after every commit or optimize command
postOptimize - fired after every optimize command
-->
<!-- The RunExecutableListener executes an external command from a
hook such as postCommit or postOptimize.
exe - the name of the executable to run
dir - dir to use as the current working directory. (default=".")
wait - the calling thread waits until the executable returns.
(default="true")
args - the arguments to pass to the program. (default is none)
env - environment variables to set. (default is none)
-->
<!-- This example shows how RunExecutableListener could be used
with the script based replication...
http://wiki.apache.org/solr/CollectionDistribution
-->
<!--
<listener event="postCommit" class="solr.RunExecutableListener">
<str name="exe">solr/bin/snapshooter</str>
<str name="dir">.</str>
<bool name="wait">true</bool>
<arr name="args"> <str>arg1</str> <str>arg2</str> </arr>
<arr name="env"> <str>MYVAR=val1</str> </arr>
</listener>
-->
<!-- Enables a transaction log, currently used for real-time get.
"dir" - the target directory for transaction logs, defaults to the
solr data directory. -->
<updateLog>
<str name="dir">${solr.data.dir:}</str>
<!-- if you want to take control of the synchronization you may specify
the syncLevel as one of the following, where 'flush' is the default.
Fsync will reduce throughput.
<str name="syncLevel">flush|fsync|none</str>
-->
</updateLog>
</updateHandler>
<!-- IndexReaderFactory
Use the following format to specify a custom IndexReaderFactory,
which allows for alternate IndexReader implementations.
** Experimental Feature **
Please note - Using a custom IndexReaderFactory may prevent
certain other features from working. The API to
IndexReaderFactory may change without warning or may even be
removed from future releases if the problems cannot be
resolved.
** Features that may not work with custom IndexReaderFactory **
The ReplicationHandler assumes a disk-resident index. Using a
custom IndexReader implementation may cause incompatibility
with ReplicationHandler and may cause replication to not work
correctly. See SOLR-1366 for details.
-->
<!--
<indexReaderFactory name="IndexReaderFactory" class="package.class">
<str name="someArg">Some Value</str>
</indexReaderFactory >
-->
<!-- By explicitly declaring the Factory, the termIndexDivisor can
be specified.
-->
<!--
<indexReaderFactory name="IndexReaderFactory"
class="solr.StandardIndexReaderFactory">
<int name="setTermIndexDivisor">12</int>
</indexReaderFactory >
-->
<query>
<!-- Max Boolean Clauses
Maximum number of clauses in each BooleanQuery, an exception
is thrown if exceeded.
** WARNING **
This option actually modifies a global Lucene property that
will affect all SolrCores. If multiple solrconfig.xml files
disagree on this property, the value at any given moment will
be based on the last SolrCore to be initialized.
-->
<maxBooleanClauses>1024</maxBooleanClauses>
<!-- Solr Internal Query Caches
There are two implementations of cache available for Solr,
LRUCache, based on a synchronized LinkedHashMap, and
FastLRUCache, based on a ConcurrentHashMap.
FastLRUCache has faster gets and slower puts in single
threaded operation and thus is generally faster than LRUCache
when the hit ratio of the cache is high (> 75%), and may be
faster under other scenarios on multi-cpu systems.
-->
<!-- Filter Cache
Cache used by SolrIndexSearcher for filters (DocSets),
unordered sets of *all* documents that match a query. When a
new searcher is opened, its caches may be prepopulated or
"autowarmed" using data from caches in the old searcher.
autowarmCount is the number of items to prepopulate. For
LRUCache, the autowarmed items will be the most recently
accessed items.
Parameters:
class - the SolrCache implementation to use
(LRUCache or FastLRUCache)
size - the maximum number of entries in the cache
initialSize - the initial capacity (number of entries) of
the cache. (see java.util.HashMap)
autowarmCount - the number of entries to prepopulate from
an old cache.
-->
<filterCache class="solr.FastLRUCache"
size="512"
initialSize="512"
autowarmCount="0"/>
<!-- Query Result Cache
Caches results of searches - ordered lists of document ids
(DocList) based on a query, a sort, and the range of documents requested.
-->
<queryResultCache class="solr.LRUCache"
size="512"
initialSize="512"
autowarmCount="32"/>
<!-- Document Cache
Caches Lucene Document objects (the stored fields for each
document). Since Lucene internal document ids are transient,
this cache will not be autowarmed.
-->
<documentCache class="solr.LRUCache"
size="512"
initialSize="512"
autowarmCount="0"/>
<!-- Field Value Cache
Cache used to hold field values that are quickly accessible
by document id. The fieldValueCache is created by default
even if not configured here.
-->
<!--
<fieldValueCache class="solr.FastLRUCache"
size="512"
autowarmCount="128"
showItems="32" />
-->
<!-- Custom Cache
Example of a generic cache. These caches may be accessed by
name through SolrIndexSearcher.getCache(), cacheLookup(), and
cacheInsert(). The purpose is to enable easy caching of
user/application level data. The regenerator argument should
be specified as an implementation of solr.CacheRegenerator
if autowarming is desired.
-->
<!--
<cache name="myUserCache"
class="solr.LRUCache"
size="4096"
initialSize="1024"
autowarmCount="1024"
regenerator="com.mycompany.MyRegenerator"
/>
-->
<!-- Lazy Field Loading
If true, stored fields that are not requested will be loaded
lazily. This can result in a significant speed improvement
if the usual case is to not load all stored fields,
especially if the skipped fields are large compressed text
fields.
-->
<enableLazyFieldLoading>true</enableLazyFieldLoading>
<!-- Use Filter For Sorted Query
A possible optimization that attempts to use a filter to
satisfy a search. If the requested sort does not include
score, then the filterCache will be checked for a filter
matching the query. If found, the filter will be used as the
source of document ids, and then the sort will be applied to
that.
For most situations, this will not be useful unless you
frequently get the same search repeatedly with different sort
options, and none of them ever use "score"
-->
<!--
<useFilterForSortedQuery>true</useFilterForSortedQuery>
-->
<!-- Result Window Size
An optimization for use with the queryResultCache. When a search
is requested, a superset of the requested number of document ids
are collected. For example, if a search for a particular query
requests matching documents 10 through 19, and queryResultWindowSize is 50,
then documents 0 through 49 will be collected and cached. Any further
requests in that range can be satisfied via the cache.
-->
<queryResultWindowSize>20</queryResultWindowSize>
<!-- Maximum number of documents to cache for any entry in the
queryResultCache.
-->
<queryResultMaxDocsCached>200</queryResultMaxDocsCached>
<!-- Query Related Event Listeners
Various IndexSearcher related events can trigger Listeners to
take actions.
newSearcher - fired whenever a new searcher is being prepared
and there is a current searcher handling requests (aka
registered). It can be used to prime certain caches to
prevent long request times for certain requests.
firstSearcher - fired whenever a new searcher is being
prepared but there is no current registered searcher to handle
requests or to gain autowarming data from.
-->
<!-- QuerySenderListener takes an array of NamedList and executes a
local query request for each NamedList in sequence.
-->
<listener event="newSearcher" class="solr.QuerySenderListener">
<arr name="queries">
<!--
<lst><str name="q">solr</str><str name="sort">price asc</str></lst>
<lst><str name="q">rocks</str><str name="sort">weight asc</str></lst>
-->
</arr>
</listener>
<listener event="firstSearcher" class="solr.QuerySenderListener">
<arr name="queries">
<lst>
<str name="q">solr rocks</str><str name="start">0</str><str name="rows">10</str>
</lst>
</arr>
</listener>
<!-- Use Cold Searcher
If a search request comes in and there is no current
registered searcher, then immediately register the still
warming searcher and use it. If "false" then all requests
will block until the first searcher is done warming.
-->
<useColdSearcher>false</useColdSearcher>
<!-- Max Warming Searchers
Maximum number of searchers that may be warming in the
background concurrently. An error is returned if this limit
is exceeded.
Recommend values of 1-2 for read-only slaves, higher for
masters w/o cache warming.
-->
<maxWarmingSearchers>2</maxWarmingSearchers>
</query>
<!-- Request Dispatcher
This section contains instructions for how the SolrDispatchFilter
should behave when processing requests for this SolrCore.
handleSelect affects the behavior of requests such as /select?qt=XXX
handleSelect="true" will cause the SolrDispatchFilter to process
the request and will result in consistent error handling and
formatting for all types of requests.
handleSelect="false" will cause the SolrDispatchFilter to
ignore "/select" requests and fallback to using the legacy
SolrServlet and its Solr 1.1 style error formatting
-->
<requestDispatcher handleSelect="true" >
<!-- Request Parsing
These settings indicate how Solr Requests may be parsed, and
what restrictions may be placed on the ContentStreams from
those requests
enableRemoteStreaming - enables use of the stream.file
and stream.url parameters for specifying remote streams.
multipartUploadLimitInKB - specifies the max size of
Multipart File Uploads that Solr will allow in a Request.
*** WARNING ***
The settings below authorize Solr to fetch remote files. You
should make sure your system has some authentication before
using enableRemoteStreaming="true"
-->
<requestParsers enableRemoteStreaming="true"
multipartUploadLimitInKB="2048000" />
<!-- HTTP Caching
Set HTTP caching related parameters (for proxy caches and clients).
The options below instruct Solr not to output any HTTP Caching
related headers
-->
<httpCaching never304="true" />
<!-- If you include a <cacheControl> directive, it will be used to
generate a Cache-Control header (as well as an Expires header
if the value contains "max-age=")
By default, no Cache-Control header is generated.
You can use the <cacheControl> option even if you have set
never304="true"
-->
<!--
<httpCaching never304="true" >
<cacheControl>max-age=30, public</cacheControl>
</httpCaching>
-->
<!-- To enable Solr to respond with automatically generated HTTP
Caching headers, and to respond to Cache Validation requests
correctly, set the value of never304="false"
This will cause Solr to generate Last-Modified and ETag
headers based on the properties of the Index.
The following options can also be specified to affect the
values of these headers...
lastModFrom - the default value is "openTime" which means the
Last-Modified value (and validation against If-Modified-Since
requests) will all be relative to when the current Searcher
was opened. You can change it to lastModFrom="dirLastMod" if
you want the value to exactly correspond to when the physical
index was last modified.
etagSeed="..." is an option you can change to force the ETag
header (and validation against If-None-Match requests) to be
different even if the index has not changed (ie: when making
significant changes to your config file)
(lastModifiedFrom and etagSeed are both ignored if you use
the never304="true" option)
-->
<!--
<httpCaching lastModifiedFrom="openTime"
etagSeed="Solr">
<cacheControl>max-age=30, public</cacheControl>
</httpCaching>
-->
</requestDispatcher>
<!-- Request Handlers
http://wiki.apache.org/solr/SolrRequestHandler
incoming queries will be dispatched to the correct handler
based on the path or the qt (query type) param.
Names starting with a '/' are accessed with a path equal to
the registered name. Names without a leading '/' are accessed
with: http://host/app/[core/]select?qt=name
If a /select request is processed without a qt param
specified, the requestHandler that declares default="true" will
be used.
If a Request Handler is declared with startup="lazy", then it will
not be initialized until the first request that uses it.
-->
<!-- SearchHandler
http://wiki.apache.org/solr/SearchHandler
For processing Search Queries, the primary Request Handler
provided with Solr is "SearchHandler" It delegates to a sequent
of SearchComponents (see below) and supports distributed
queries across multiple shards
-->
<!--<requestHandler name="search" class="solr.SearchHandler" default="true">-->
<!-- default values for query parameters can be specified, these
will be overridden by parameters in the request
-->
<!--<lst name="defaults">
<str name="echoParams">explicit</str>
<int name="rows">10</int>
</lst>-->
<!-- In addition to defaults, "appends" params can be specified
to identify values which should be appended to the list of
multi-val params from the query (or the existing "defaults").
-->
<!-- In this example, the param "fq=instock:true" would be appended to
any query time fq params the user may specify, as a mechanism for
partitioning the index, independent of any user selected filtering
that may also be desired (perhaps as a result of faceted searching).
NOTE: there is *absolutely* nothing a client can do to prevent these
"appends" values from being used, so don't use this mechanism
unless you are sure you always want it.
-->
<!--
<lst name="appends">
<str name="fq">inStock:true</str>
</lst>
-->
<!-- "invariants" are a way of letting the Solr maintainer lock down
the options available to Solr clients. Any params values
specified here are used regardless of what values may be specified
in either the query, the "defaults", or the "appends" params.
In this example, the facet.field and facet.query params would
be fixed, limiting the facets clients can use. Faceting is
not turned on by default - but if the client does specify
facet=true in the request, these are the only facets they
will be able to see counts for; regardless of what other
facet.field or facet.query params they may specify.
NOTE: there is *absolutely* nothing a client can do to prevent these
"invariants" values from being used, so don't use this mechanism
unless you are sure you always want it.
-->
<!--
<lst name="invariants">
<str name="facet.field">cat</str>
<str name="facet.field">manu_exact</str>
<str name="facet.query">price:[* TO 500]</str>
<str name="facet.query">price:[500 TO *]</str>
</lst>
-->
<!-- If the default list of SearchComponents is not desired, that
list can either be overridden completely, or components can be
prepended or appended to the default list. (see below)
-->
<!--
<arr name="components">
<str>nameOfCustomComponent1</str>
<str>nameOfCustomComponent2</str>
</arr>
-->
<!--</requestHandler>-->
<!-- A Robust Example
This example SearchHandler declaration shows off usage of the
SearchHandler with many defaults declared
Note that multiple instances of the same Request Handler
(SearchHandler) can be registered multiple times with different
names (and different init parameters)
-->
<!--
<requestHandler name="/browse" class="solr.SearchHandler">
<lst name="defaults">
<str name="echoParams">explicit</str>-->
<!-- VelocityResponseWriter settings -->
<!--<str name="wt">velocity</str>
<str name="v.template">browse</str>
<str name="v.layout">layout</str>
<str name="title">Solritas</str>
<str name="defType">edismax</str>
<str name="q.alt">*:*</str>
<str name="rows">10</str>
<str name="fl">*,score</str>
<str name="mlt.qf">
text^0.5 features^1.0 name^1.2 sku^1.5 id^10.0 manu^1.1 cat^1.4
</str>
<str name="mlt.fl">text,features,name,sku,id,manu,cat</str>
<int name="mlt.count">3</int>
<str name="qf">
text^0.5 features^1.0 name^1.2 sku^1.5 id^10.0 manu^1.1 cat^1.4
</str>
<str name="facet">on</str>
<str name="facet.field">cat</str>
<str name="facet.field">manu_exact</str>
<str name="facet.query">ipod</str>
<str name="facet.query">GB</str>
<str name="facet.mincount">1</str>
<str name="facet.pivot">cat,inStock</str>
<str name="facet.range.other">after</str>
<str name="facet.range">price</str>
<int name="f.price.facet.range.start">0</int>
<int name="f.price.facet.range.end">600</int>
<int name="f.price.facet.range.gap">50</int>
<str name="facet.range">popularity</str>
<int name="f.popularity.facet.range.start">0</int>
<int name="f.popularity.facet.range.end">10</int>
<int name="f.popularity.facet.range.gap">3</int>
<str name="facet.range">manufacturedate_dt</str>
<str name="f.manufacturedate_dt.facet.range.start">NOW/YEAR-10YEARS</str>
<str name="f.manufacturedate_dt.facet.range.end">NOW</str>
<str name="f.manufacturedate_dt.facet.range.gap">+1YEAR</str>
<str name="f.manufacturedate_dt.facet.range.other">before</str>
<str name="f.manufacturedate_dt.facet.range.other">after</str>-->
<!-- Highlighting defaults -->
<!--<str name="hl">on</str>
<str name="hl.fl">text features name</str>
<str name="f.name.hl.fragsize">0</str>
<str name="f.name.hl.alternateField">name</str>
</lst>
<arr name="last-components">
<str>spellcheck</str>
</arr>-->
<!--
<str name="url-scheme">httpx</str>
-->
<!--</requestHandler>-->
<!-- Trivia: the pinkPony requestHandler name was agreed upon by the Search API and
apachesolr maintainers. The decision was taken during the DrupalCon Munich code sprint.
-->
<requestHandler name="pinkPony" class="solr.SearchHandler" default="true">
<lst name="defaults">
<str name="defType">edismax</str>
<str name="echoParams">explicit</str>
<bool name="omitHeader">true</bool>
<float name="tie">0.01</float>
<!-- Don't abort searches for the pinkPony request handler (set in solrcore.properties) -->
<int name="timeAllowed">${solr.pinkPony.timeAllowed:-1}</int>
<str name="q.alt">*:*</str>
<!-- By default, don't spell check -->
<str name="spellcheck">false</str>
<!-- Defaults for the spell checker when used -->
<str name="spellcheck.onlyMorePopular">true</str>
<str name="spellcheck.extendedResults">false</str>
<!-- The number of suggestions to return -->
<str name="spellcheck.count">1</str>
</lst>
<arr name="last-components">
<str>spellcheck</str>
<str>elevator</str>
</arr>
</requestHandler>
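<!-- An illustrative request against this default handler (host, port and
core name are assumptions for a local setup):
http://localhost:8983/solr/mpc_dev/select?qt=pinkPony&q=eclipse
-->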
<!-- The MoreLikeThis handler offers many advantages over the standard handler
when performing moreLikeThis requests.-->
<requestHandler name="mlt" class="solr.MoreLikeThisHandler">
<lst name="defaults">
<str name="mlt.mintf">1</str>
<str name="mlt.mindf">1</str>
<str name="mlt.minwl">3</str>
<str name="mlt.maxwl">15</str>
<str name="mlt.maxqt">20</str>
<str name="mlt.match.include">false</str>
<!-- Abort any searches longer than 2 seconds (set in solrcore.properties) -->
<int name="timeAllowed">${solr.mlt.timeAllowed:2000}</int>
</lst>
</requestHandler>
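<!-- A sketch of a MoreLikeThis request using this handler (the id and field
names are placeholders; substitute fields from your schema):
/mlt?q=id:example-1&mlt.fl=content,title&rows=5
-->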
<!-- A minimal query type for doing Lucene queries -->
<requestHandler name="standard" class="solr.SearchHandler">
<lst name="defaults">
<str name="echoParams">explicit</str>
<bool name="omitHeader">true</bool>
</lst>
</requestHandler>
<!-- XML Update Request Handler.
http://wiki.apache.org/solr/UpdateXmlMessages
The canonical Request Handler for Modifying the Index through
commands specified using XML.
Note: Since Solr 1.1, request handlers require a valid content
type header if posted in the body. For example, curl now
requires: -H 'Content-type:text/xml; charset=utf-8'
-->
<requestHandler name="/update"
class="solr.UpdateRequestHandler">
<!-- See below for information on defining
updateRequestProcessorChains that can be used by name
on each Update Request
-->
<!--
<lst name="defaults">
<str name="update.chain">dedupe</str>
</lst>
-->
</requestHandler>
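<!-- An illustrative update post matching the content-type note above (URL and
document values are placeholders):
curl http://localhost:8983/solr/mpc_dev/update -H 'Content-type:text/xml; charset=utf-8' -d '<add><doc><field name="id">example-1</field></doc></add>'
-->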
<!-- Binary Update Request Handler
http://wiki.apache.org/solr/javabin
-->
<requestHandler name="/update/javabin"
class="solr.UpdateRequestHandler" />
<!-- CSV Update Request Handler
http://wiki.apache.org/solr/UpdateCSV
-->
<requestHandler name="/update/csv"
class="solr.CSVRequestHandler"
startup="lazy" />
<!-- JSON Update Request Handler
http://wiki.apache.org/solr/UpdateJSON
-->
<requestHandler name="/update/json"
class="solr.JsonUpdateRequestHandler"
startup="lazy" />
<!-- Solr Cell Update Request Handler
http://wiki.apache.org/solr/ExtractingRequestHandler
-->
<requestHandler name="/update/extract"
startup="lazy"
class="solr.extraction.ExtractingRequestHandler" >
<lst name="defaults">
<!-- All the main content goes into "text"... if you need to return
the extracted text or do highlighting, use a stored field. -->
<str name="fmap.content">text</str>
<str name="lowernames">true</str>
<str name="uprefix">ignored_</str>
<!-- capture link hrefs but ignore div attributes -->
<str name="captureAttr">true</str>
<str name="fmap.a">links</str>
<str name="fmap.div">ignored_</str>
</lst>
</requestHandler>
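<!-- An illustrative extraction request for a rich document (file name and
literal id are placeholders):
curl 'http://localhost:8983/solr/mpc_dev/update/extract?literal.id=doc1&commit=true' -F 'myfile=@example.pdf'
-->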
<!-- XSLT Update Request Handler
Transforms incoming XML with stylesheet identified by tr=
-->
<requestHandler name="/update/xslt"
startup="lazy"
class="solr.XsltUpdateRequestHandler"/>
<!-- Field Analysis Request Handler
RequestHandler that provides much the same functionality as
analysis.jsp. Provides the ability to specify multiple field
types and field names in the same request and outputs
index-time and query-time analysis for each of them.
Request parameters are:
analysis.fieldname - field name whose analyzers are to be used
analysis.fieldtype - field type whose analyzers are to be used
analysis.fieldvalue - text for index-time analysis
q (or analysis.q) - text for query time analysis
analysis.showmatch (true|false) - When set to true and when
query analysis is performed, the produced tokens of the
field value analysis will be marked as "matched" for every
token that is produced by the query analysis
-->
<requestHandler name="/analysis/field"
startup="lazy"
class="solr.FieldAnalysisRequestHandler" />
<!-- Document Analysis Handler
http://wiki.apache.org/solr/AnalysisRequestHandler
An analysis handler that provides a breakdown of the analysis
process of provided documents. This handler expects a (single)
content stream with the following format:
<docs>
<doc>
<field name="id">1</field>
<field name="name">The Name</field>
<field name="text">The Text Value</field>
</doc>
<doc>...</doc>
<doc>...</doc>
...
</docs>
Note: Each document must contain a field which serves as the
unique key. This key is used in the returned response to associate
an analysis breakdown to the analyzed document.
Like the FieldAnalysisRequestHandler, this handler also supports
query analysis by sending either an "analysis.query" or "q"
request parameter that holds the query text to be analyzed. It
also supports the "analysis.showmatch" parameter which when set to
true, all field tokens that match the query tokens will be marked
as a "match".
-->
<requestHandler name="/analysis/document"
class="solr.DocumentAnalysisRequestHandler"
startup="lazy" />
<!-- Admin Handlers
Admin Handlers - This will register all the standard admin
RequestHandlers.
-->
<requestHandler name="/admin/" class="solr.admin.AdminHandlers" />
<!-- This single handler is equivalent to the following... -->
<!--
<requestHandler name="/admin/luke" class="solr.admin.LukeRequestHandler" />
<requestHandler name="/admin/system" class="solr.admin.SystemInfoHandler" />
<requestHandler name="/admin/plugins" class="solr.admin.PluginInfoHandler" />
<requestHandler name="/admin/threads" class="solr.admin.ThreadDumpHandler" />
<requestHandler name="/admin/properties" class="solr.admin.PropertiesRequestHandler" />
<requestHandler name="/admin/file" class="solr.admin.ShowFileRequestHandler" >
-->
<!-- If you wish to hide files under ${solr.home}/conf, explicitly
register the ShowFileRequestHandler using:
-->
<!--
<requestHandler name="/admin/file"
class="solr.admin.ShowFileRequestHandler" >
<lst name="invariants">
<str name="hidden">synonyms.txt</str>
<str name="hidden">anotherfile.txt</str>
</lst>
</requestHandler>
-->
<!-- ping/healthcheck -->
<requestHandler name="/admin/ping" class="solr.PingRequestHandler">
<lst name="invariants">
<str name="qt">pinkPony</str>
<str name="q">solrpingquery</str>
<str name="omitHeader">false</str>
</lst>
<lst name="defaults">
<str name="echoParams">all</str>
</lst>
<!-- An optional feature of the PingRequestHandler is to configure the
handler with a "healthcheckFile" which can be used to enable/disable
the PingRequestHandler.
Relative paths are resolved against the data dir.
-->
<!-- <str name="healthcheckFile">server-enabled.txt</str> -->
</requestHandler>
<!-- Echo the request contents back to the client -->
<requestHandler name="/debug/dump" class="solr.DumpRequestHandler" >
<lst name="defaults">
<str name="echoParams">explicit</str>
<str name="echoHandler">true</str>
</lst>
</requestHandler>
<!-- Solr Replication
The SolrReplicationHandler supports replicating indexes from a
"master" used for indexing and "slaves" used for queries.
http://wiki.apache.org/solr/SolrReplication
In the example below, remove the <lst name="master"> section if
this is just a slave and remove the <lst name="slave"> section
if this is just a master.
-->
<requestHandler name="/replication" class="solr.ReplicationHandler" >
<lst name="master">
<str name="enable">${solr.replication.master:false}</str>
<str name="replicateAfter">commit</str>
<str name="replicateAfter">startup</str>
<str name="confFiles">${solr.replication.confFiles:schema.xml,mapping-ISOLatin1Accent.txt,protwords.txt,stopwords.txt,synonyms.txt,elevate.xml}</str>
</lst>
<lst name="slave">
<str name="enable">${solr.replication.slave:false}</str>
<str name="masterUrl">${solr.replication.masterUrl:http://localhost:8983/solr}/replication</str>
<str name="pollInterval">${solr.replication.pollInterval:00:00:60}</str>
</lst>
</requestHandler>
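<!-- Given the property defaults above, a node can be promoted to master
without editing this file, e.g. via an illustrative startup flag:
java -Dsolr.replication.master=true -jar start.jar
-->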
<!-- Realtime get handler, guaranteed to return the latest stored fields of
any document, without the need to commit or open a new searcher. The
current implementation relies on the updateLog feature being enabled.
-->
<requestHandler name="/get" class="solr.RealTimeGetHandler">
<lst name="defaults">
<str name="omitHeader">true</str>
<str name="wt">json</str>
<str name="indent">true</str>
</lst>
</requestHandler>
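<!-- An illustrative real-time get request (the document id is a placeholder):
/get?id=example-1
-->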
<!-- Search Components
Search components are registered to SolrCore and used by
instances of SearchHandler (which can access them by name)
By default, the following components are available:
<searchComponent name="query" class="solr.QueryComponent" />
<searchComponent name="facet" class="solr.FacetComponent" />
<searchComponent name="mlt" class="solr.MoreLikeThisComponent" />
<searchComponent name="highlight" class="solr.HighlightComponent" />
<searchComponent name="stats" class="solr.StatsComponent" />
<searchComponent name="debug" class="solr.DebugComponent" />
Default configuration in a requestHandler would look like:
<arr name="components">
<str>query</str>
<str>facet</str>
<str>mlt</str>
<str>highlight</str>
<str>stats</str>
<str>debug</str>
</arr>
If you register a searchComponent to one of the standard names,
that will be used instead of the default.
To insert components before or after the 'standard' components, use:
<arr name="first-components">
<str>myFirstComponentName</str>
</arr>
<arr name="last-components">
<str>myLastComponentName</str>
</arr>
NOTE: The component registered with the name "debug" will
always be executed after the "last-components"
-->
<!-- A request handler for demonstrating the spellcheck component.
NOTE: This is purely an example. The whole purpose of the
SpellCheckComponent is to hook it into the request handler that
handles your normal user queries so that a separate request is
not needed to get suggestions.
IN OTHER WORDS, THERE IS A REALLY GOOD CHANCE THE SETUP BELOW IS
NOT WHAT YOU WANT FOR YOUR PRODUCTION SYSTEM!
See http://wiki.apache.org/solr/SpellCheckComponent for details
on the request parameters.
-->
<requestHandler name="/spell" class="solr.SearchHandler" startup="lazy">
<lst name="defaults">
<str name="spellcheck.onlyMorePopular">false</str>
<str name="spellcheck.extendedResults">false</str>
<str name="spellcheck.count">1</str>
</lst>
<arr name="last-components">
<str>spellcheck</str>
</arr>
</requestHandler>
<!-- Term Vector Component
http://wiki.apache.org/solr/TermVectorComponent
-->
<searchComponent name="tvComponent" class="solr.TermVectorComponent"/>
<!-- A request handler for demonstrating the term vector component
This is purely an example.
In reality you will likely want to add the component to your
already specified request handlers.
-->
<requestHandler name="tvrh" class="solr.SearchHandler" startup="lazy">
<lst name="defaults">
<bool name="tv">true</bool>
</lst>
<arr name="last-components">
<str>tvComponent</str>
</arr>
</requestHandler>
<!-- Clustering Component
http://wiki.apache.org/solr/ClusteringComponent
This relies on third party jars which are not included in the
release. To use this component (and the "/clustering" handler),
those jars will need to be downloaded, and you'll need to set
the solr.clustering.enabled system property when running solr...
java -Dsolr.clustering.enabled=true -jar start.jar
-->
<!-- <searchComponent name="clustering"
enable="${solr.clustering.enabled:false}"
class="solr.clustering.ClusteringComponent" > -->
<!-- Declare an engine -->
<!--<lst name="engine">-->
<!-- The name, only one can be named "default" -->
<!--<str name="name">default</str>-->
<!-- Class name of Carrot2 clustering algorithm.
Currently available algorithms are:
* org.carrot2.clustering.lingo.LingoClusteringAlgorithm
* org.carrot2.clustering.stc.STCClusteringAlgorithm
* org.carrot2.clustering.kmeans.BisectingKMeansClusteringAlgorithm
See http://project.carrot2.org/algorithms.html for the
algorithm's characteristics.
-->
<!--<str name="carrot.algorithm">org.carrot2.clustering.lingo.LingoClusteringAlgorithm</str>-->
<!-- Overriding values for Carrot2 default algorithm attributes.
For a description of all available attributes, see:
http://download.carrot2.org/stable/manual/#chapter.components.
Use attribute key as name attribute of str elements
below. These can be further overridden for individual
requests by specifying attribute key as request parameter
name and attribute value as parameter value.
-->
<!--<str name="LingoClusteringAlgorithm.desiredClusterCountBase">20</str>-->
<!-- Location of Carrot2 lexical resources.
A directory from which to load Carrot2-specific stop words
and stop labels. Absolute or relative to Solr config directory.
If a specific resource (e.g. stopwords.en) is present in the
specified dir, it will completely override the corresponding
default one that ships with Carrot2.
For an overview of Carrot2 lexical resources, see:
http://download.carrot2.org/head/manual/#chapter.lexical-resources
-->
<!--<str name="carrot.lexicalResourcesDir">clustering/carrot2</str>-->
<!-- The language to assume for the documents.
For a list of allowed values, see:
http://download.carrot2.org/stable/manual/#section.attribute.lingo.MultilingualClustering.defaultLanguage
-->
<!--<str name="MultilingualClustering.defaultLanguage">ENGLISH</str>
</lst>
<lst name="engine">
<str name="name">stc</str>
<str name="carrot.algorithm">org.carrot2.clustering.stc.STCClusteringAlgorithm</str>
</lst>
</searchComponent>-->
<!-- A request handler for demonstrating the clustering component
This is purely an example.
In reality you will likely want to add the component to your
already specified request handlers.
-->
<!--<requestHandler name="/clustering"
startup="lazy"
enable="${solr.clustering.enabled:false}"
class="solr.SearchHandler">
<lst name="defaults">
<bool name="clustering">true</bool>
<str name="clustering.engine">default</str>
<bool name="clustering.results">true</bool>-->
<!-- The title field -->
<!--<str name="carrot.title">name</str>-->
<!--<str name="carrot.url">id</str>-->
<!-- The field to cluster on -->
<!--<str name="carrot.snippet">features</str>-->
<!-- produce summaries -->
<!--<bool name="carrot.produceSummary">true</bool>-->
<!-- the maximum number of labels per cluster -->
<!--<int name="carrot.numDescriptions">5</int>-->
<!-- produce sub clusters -->
<!--<bool name="carrot.outputSubClusters">false</bool>-->
<!--<str name="defType">edismax</str>
<str name="qf">
text^0.5 features^1.0 name^1.2 sku^1.5 id^10.0 manu^1.1 cat^1.4
</str>
<str name="q.alt">*:*</str>
<str name="rows">10</str>
<str name="fl">*,score</str>
</lst>
<arr name="last-components">
<str>clustering</str>
</arr>
</requestHandler>-->
<!-- Terms Component
http://wiki.apache.org/solr/TermsComponent
A component to return terms and document frequency of those
terms
-->
<searchComponent name="terms" class="solr.TermsComponent"/>
<!-- A request handler for demonstrating the terms component -->
<requestHandler name="/terms" class="solr.SearchHandler" startup="lazy">
<lst name="defaults">
<bool name="terms">true</bool>
</lst>
<arr name="components">
<str>terms</str>
</arr>
</requestHandler>
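<!-- An illustrative terms request (the field name is an assumption; use a
field from your schema):
/terms?terms.fl=label&terms.prefix=ecl
-->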
<!-- Query Elevation Component
http://wiki.apache.org/solr/QueryElevationComponent
a search component that enables you to configure the top
results for a given query regardless of the normal Lucene
scoring.
-->
<searchComponent name="elevator" class="solr.QueryElevationComponent" >
<!-- pick a fieldType to analyze queries -->
<str name="queryFieldType">string</str>
<str name="config-file">elevate.xml</str>
</searchComponent>
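<!-- For reference, elevate.xml pairs query text with the documents to pin,
in roughly this shape (query text and ids are placeholders):
<elevate>
<query text="marketplace">
<doc id="example-1" />
</query>
</elevate>
-->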
<!-- A request handler for demonstrating the elevator component -->
<requestHandler name="/elevate" class="solr.SearchHandler" startup="lazy">
<lst name="defaults">
<str name="echoParams">explicit</str>
</lst>
<arr name="last-components">
<str>elevator</str>
</arr>
</requestHandler>
<!-- Highlighting Component
http://wiki.apache.org/solr/HighlightingParameters
-->
<searchComponent class="solr.HighlightComponent" name="highlight">
<highlighting>
<!-- Configure the standard fragmenter -->
<!-- This could most likely be commented out in the "default" case -->
<fragmenter name="gap"
default="true"
class="solr.highlight.GapFragmenter">
<lst name="defaults">
<int name="hl.fragsize">100</int>
</lst>
</fragmenter>
<!-- A regular-expression-based fragmenter
(for sentence extraction)
-->
<fragmenter name="regex"
class="solr.highlight.RegexFragmenter">
<lst name="defaults">
<!-- slightly smaller fragsizes work better because of slop -->
<int name="hl.fragsize">70</int>
<!-- allow 50% slop on fragment sizes -->
<float name="hl.regex.slop">0.5</float>
<!-- a basic sentence pattern -->
<str name="hl.regex.pattern">[-\w ,/\n\&quot;&apos;]{20,200}</str>
</lst>
</fragmenter>
<!-- Configure the standard formatter -->
<formatter name="html"
default="true"
class="solr.highlight.HtmlFormatter">
<lst name="defaults">
<str name="hl.simple.pre"><![CDATA[<strong>]]></str>
<str name="hl.simple.post"><![CDATA[</strong>]]></str>
</lst>
</formatter>
<!-- Configure the standard encoder -->
<encoder name="html"
class="solr.highlight.HtmlEncoder" />
<!-- Configure the standard fragListBuilder -->
<fragListBuilder name="simple"
default="true"
class="solr.highlight.SimpleFragListBuilder"/>
<!-- Configure the single fragListBuilder -->
<fragListBuilder name="single"
class="solr.highlight.SingleFragListBuilder"/>
<!-- default tag FragmentsBuilder -->
<fragmentsBuilder name="default"
default="true"
class="solr.highlight.ScoreOrderFragmentsBuilder">
<!--
<lst name="defaults">
<str name="hl.multiValuedSeparatorChar">/</str>
</lst>
-->
</fragmentsBuilder>
<!-- multi-colored tag FragmentsBuilder -->
<fragmentsBuilder name="colored"
class="solr.highlight.ScoreOrderFragmentsBuilder">
<lst name="defaults">
<str name="hl.tag.pre"><![CDATA[
<b style="background:yellow">,<b style="background:lawgreen">,
<b style="background:aquamarine">,<b style="background:magenta">,
<b style="background:palegreen">,<b style="background:coral">,
<b style="background:wheat">,<b style="background:khaki">,
<b style="background:lime">,<b style="background:deepskyblue">]]></str>
<str name="hl.tag.post"><![CDATA[</b>]]></str>
</lst>
</fragmentsBuilder>
<boundaryScanner name="default"
default="true"
class="solr.highlight.SimpleBoundaryScanner">
<lst name="defaults">
<str name="hl.bs.maxScan">10</str>
<str name="hl.bs.chars">.,!? &#9;&#10;&#13;</str>
</lst>
</boundaryScanner>
<boundaryScanner name="breakIterator"
class="solr.highlight.BreakIteratorBoundaryScanner">
<lst name="defaults">
<!-- type should be one of CHARACTER, WORD(default), LINE and SENTENCE -->
<str name="hl.bs.type">WORD</str>
<!-- language and country are used when constructing Locale object. -->
<!-- And the Locale object will be used when getting instance of BreakIterator -->
<str name="hl.bs.language">en</str>
<str name="hl.bs.country">US</str>
</lst>
</boundaryScanner>
</highlighting>
</searchComponent>
<!-- Update Processors
Chains of Update Processor Factories for dealing with Update
Requests can be declared, and then used by name in Update
Request Processors
http://wiki.apache.org/solr/UpdateRequestProcessor
-->
<!-- Deduplication
An example dedup update processor that creates the "id" field
on the fly based on the hash code of some other fields. This
example has overwriteDupes set to false since we are using the
id field as the signatureField and Solr will maintain
uniqueness based on that anyway.
-->
<!--
<updateRequestProcessorChain name="dedupe">
<processor class="solr.processor.SignatureUpdateProcessorFactory">
<bool name="enabled">true</bool>
<str name="signatureField">id</str>
<bool name="overwriteDupes">false</bool>
<str name="fields">name,features,cat</str>
<str name="signatureClass">solr.processor.Lookup3Signature</str>
</processor>
<processor class="solr.LogUpdateProcessorFactory" />
<processor class="solr.RunUpdateProcessorFactory" />
</updateRequestProcessorChain>
-->
<!--
This example update chain identifies the language of the incoming
documents using the langid contrib. The detected language is
written to field language_s. No field name mapping is done.
The fields used for detection are text, title, subject and description,
making this example suitable for detecting languages from full-text
rich documents injected via ExtractingRequestHandler.
See more about langId at http://wiki.apache.org/solr/LanguageDetection
-->
<!--
<updateRequestProcessorChain name="langid">
<processor class="org.apache.solr.update.processor.TikaLanguageIdentifierUpdateProcessorFactory">
<str name="langid.fl">text,title,subject,description</str>
<str name="langid.langField">language_s</str>
<str name="langid.fallback">en</str>
</processor>
<processor class="solr.LogUpdateProcessorFactory" />
<processor class="solr.RunUpdateProcessorFactory" />
</updateRequestProcessorChain>
-->
<!-- Response Writers
http://wiki.apache.org/solr/QueryResponseWriter
Request responses will be written using the writer specified by
the 'wt' request parameter matching the name of a registered
writer.
The "default" writer is the default and will be used if 'wt' is
not specified in the request.
-->
<!-- The following response writers are implicitly configured unless
overridden...
-->
<!--
<queryResponseWriter name="xml"
default="true"
class="solr.XMLResponseWriter" />
<queryResponseWriter name="json" class="solr.JSONResponseWriter"/>
<queryResponseWriter name="python" class="solr.PythonResponseWriter"/>
<queryResponseWriter name="ruby" class="solr.RubyResponseWriter"/>
<queryResponseWriter name="php" class="solr.PHPResponseWriter"/>
<queryResponseWriter name="phps" class="solr.PHPSerializedResponseWriter"/>
<queryResponseWriter name="csv" class="solr.CSVResponseWriter"/>
-->
<queryResponseWriter name="json" class="solr.JSONResponseWriter">
<!-- For the purposes of the tutorial, JSON responses are written as
plain text so that they are easy to read in *any* browser.
If you expect a MIME type of "application/json" just remove this override.
-->
<str name="content-type">text/plain; charset=UTF-8</str>
</queryResponseWriter>
<!--
Custom response writers can be declared as needed...
-->
<!-- The solr.velocity.enabled flag is used by Solr's test cases so that this response writer is not
loaded (causing an error if contrib/velocity has not been built fully) -->
<!-- <queryResponseWriter name="velocity" class="solr.VelocityResponseWriter" enable="${solr.velocity.enabled:true}"/> -->
<!-- The XSLT response writer transforms the XML output using any XSLT file found
in Solr's conf/xslt directory. Changes to XSLT files are checked for
every xsltCacheLifetimeSeconds.
-->
<queryResponseWriter name="xslt" class="solr.XSLTResponseWriter">
<int name="xsltCacheLifetimeSeconds">5</int>
</queryResponseWriter>
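<!-- An illustrative request using this writer (the stylesheet name is a
placeholder and must exist under conf/xslt):
/select?q=eclipse&wt=xslt&tr=example.xsl
-->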
<!-- Query Parsers
http://wiki.apache.org/solr/SolrQuerySyntax
Multiple QParserPlugins can be registered by name, and then
used in either the "defType" param for the QueryComponent (used
by SearchHandler) or in LocalParams
-->
<!-- example of registering a query parser -->
<!--
<queryParser name="myparser" class="com.mycompany.MyQParserPlugin"/>
-->
<!-- Function Parsers
http://wiki.apache.org/solr/FunctionQuery
Multiple ValueSourceParsers can be registered by name, and then
used as function names when using the "func" QParser.
-->
<!-- example of registering a custom function parser -->
<!--
<valueSourceParser name="myfunc"
class="com.mycompany.MyValueSourceParser" />
-->
<!-- Legacy config for the admin interface -->
<admin>
<defaultQuery>*:*</defaultQuery>
<!-- configure a healthcheck file for servers behind a
loadbalancer
-->
<!--
<healthcheck type="file">server-enabled</healthcheck>
-->
</admin>
<!-- The following is a dynamic way to include other components or any customized solrconfig.xml content added by other contrib modules -->
<xi:include href="solrconfig_extra.xml" xmlns:xi="http://www.w3.org/2001/XInclude">
<xi:fallback>
<!-- Spell Check
The spell check component can return a list of alternative spelling
suggestions. This component must be defined in
solrconfig_extra.xml if present, since it's used in the search handler.
http://wiki.apache.org/solr/SpellCheckComponent
-->
<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
<str name="queryAnalyzerFieldType">textSpell</str>
<!-- a spellchecker built from a field of the main index -->
<lst name="spellchecker">
<str name="name">default</str>
<str name="field">spell</str>
<str name="spellcheckIndexDir">spellchecker</str>
<str name="buildOnOptimize">true</str>
</lst>
</searchComponent>
</xi:fallback>
</xi:include>
</config>
<!-- Spell Check
The spell check component can return a list of alternative spelling
suggestions.
http://wiki.apache.org/solr/SpellCheckComponent
-->
<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
<str name="queryAnalyzerFieldType">textSpell</str>
<!-- Multiple "Spell Checkers" can be declared and used by this
component
-->
<!-- a spellchecker built from a field of the main index, and
written to disk
-->
<lst name="spellchecker">
<str name="name">default</str>
<str name="field">spell</str>
<str name="spellcheckIndexDir">spellchecker</str>
<str name="buildOnOptimize">true</str>
<!-- uncomment this to require terms to occur in 1% of the documents in order to be included in the dictionary
<float name="thresholdTokenFrequency">.01</float>
-->
</lst>
<!--
Adding a German spellchecker index to our Solr index.
This also requires enabling the content in schema_extra_types.xml and schema_extra_fields.xml.
-->
<!--
<lst name="spellchecker">
<str name="name">spellchecker_de</str>
<str name="field">spell_de</str>
<str name="spellcheckIndexDir">./spellchecker_de</str>
<str name="buildOnOptimize">true</str>
</lst>
-->
<!-- a spellchecker that uses a different distance measure -->
<!--
<lst name="spellchecker">
<str name="name">jarowinkler</str>
<str name="field">spell</str>
<str name="distanceMeasure">
org.apache.lucene.search.spell.JaroWinklerDistance
</str>
<str name="spellcheckIndexDir">spellcheckerJaro</str>
</lst>
-->
<!-- a spellchecker that uses an alternate comparator.
comparatorClass can be one of:
1. score (default)
2. freq (Frequency first, then score)
3. A fully qualified class name
-->
<!--
<lst name="spellchecker">
<str name="name">freq</str>
<str name="field">lowerfilt</str>
<str name="spellcheckIndexDir">spellcheckerFreq</str>
<str name="comparatorClass">freq</str>
<str name="buildOnCommit">true</str>
-->
<!-- A spellchecker that reads the list of words from a file -->
<!--
<lst name="spellchecker">
<str name="classname">solr.FileBasedSpellChecker</str>
<str name="name">file</str>
<str name="sourceLocation">spellings.txt</str>
<str name="characterEncoding">UTF-8</str>
<str name="spellcheckIndexDir">spellcheckerFile</str>
</lst>
-->
</searchComponent>
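<!-- Spelling suggestions from the component above can be requested through the
main search handler, e.g. (illustrative query; spellcheck is off by default
in the pinkPony handler and must be enabled per request):
/select?qt=pinkPony&q=eclpse&spellcheck=true
-->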
# Defines Solr properties for this specific core.
solr.replication.master=false
solr.replication.slave=false
solr.replication.pollInterval=00:00:60
solr.replication.masterUrl=http://localhost:8983/solr
solr.replication.confFiles=schema.xml,mapping-ISOLatin1Accent.txt,protwords.txt,stopwords.txt,synonyms.txt,elevate.xml
solr.mlt.timeAllowed=2000
# You should not set your luceneMatchVersion to anything lower than your Solr
# version.
solr.luceneMatchVersion=LUCENE_40
solr.pinkPony.timeAllowed=-1
# autoCommit after 10000 docs
solr.autoCommit.MaxDocs=10000
# autoCommit after 2 minutes
solr.autoCommit.MaxTime=120000
# autoSoftCommit after 2000 docs
solr.autoSoftCommit.MaxDocs=2000
# autoSoftCommit after 10 seconds
solr.autoSoftCommit.MaxTime=10000
solr.contrib.dir=../../../contrib
# Contains words which shouldn't be indexed for fulltext fields, e.g., because
# they're too common. For documentation of the format, see
# http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.StopFilterFactory
# (Lines starting with a pound character # are ignored.)
# Contains synonyms to use for your index. For the format used, see
# http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.SynonymFilterFactory
# (Lines starting with a pound character # are ignored.)
#Written by CorePropertiesLocator
#Wed Jun 03 15:46:22 UTC 2020
name=mpc_dev
config=solrconfig.xml
schema=schema.xml
dataDir=data
......@@ -10,7 +10,7 @@
<properties>
<project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
<surefire-plugin.version>2.22.0</surefire-plugin.version>
<quarkus.version>1.3.0.Final</quarkus.version>
<quarkus.version>1.6.0.Final</quarkus.version>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<maven.compiler.source>1.8</maven.compiler.source>
<maven.compiler.target>1.8</maven.compiler.target>
......@@ -87,11 +87,6 @@
</dependency>
<!-- Custom dependencies -->
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
<version>1.7.25</version>
</dependency>
<dependency>
<groupId>org.apache.commons</groupId>
<artifactId>commons-lang3</artifactId>
......
package org.eclipsefoundation.core.config;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import java.util.function.Supplier;
import javax.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.config.inject.ConfigProperty;
......@@ -10,6 +10,7 @@ import io.quarkus.security.identity.AuthenticationRequestContext;
import io.quarkus.security.identity.SecurityIdentity;
import io.quarkus.security.identity.SecurityIdentityAugmentor;
import io.quarkus.security.runtime.QuarkusSecurityIdentity;
import io.smallrye.mutiny.Uni;
/**
* Custom override for production that can be enabled to set user roles to
......@@ -32,9 +33,11 @@ public class RoleAugmentor implements SecurityIdentityAugmentor {
}
@Override
public CompletionStage<SecurityIdentity> augment(SecurityIdentity identity, AuthenticationRequestContext context) {
// create a future to contain the original/updated role
CompletableFuture<SecurityIdentity> cs = new CompletableFuture<>();
public Uni<SecurityIdentity> augment(SecurityIdentity identity, AuthenticationRequestContext context) {
return context.runBlocking(build(identity));
}
private Supplier<SecurityIdentity> build(SecurityIdentity identity) {
if (overrideRole) {
// create a new builder and copy principal, attributes, credentials and roles
// from the original
......@@ -45,11 +48,10 @@ public class RoleAugmentor implements SecurityIdentityAugmentor {
// add custom role source here
builder.addRole(overrideRoleName);
// put the updated role in the future
cs.complete(builder.build());
return builder::build;
} else {
// put the unmodified identity in the future
cs.complete(identity);
return () -> identity;
}
return cs;
}
}
\ No newline at end of file
......@@ -22,6 +22,8 @@ import javax.xml.bind.DatatypeConverter;
import org.eclipsefoundation.core.model.RequestWrapper;
import org.eclipsefoundation.core.service.CachingService;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
/**
* Helper class that transforms data into a response usable for the RESTeasy
......@@ -33,6 +35,7 @@ import org.eclipsefoundation.core.service.CachingService;
*/
@ApplicationScoped
public class ResponseHelper {
private static final Logger LOGGER = LoggerFactory.getLogger(ResponseHelper.class);
private static final MessageDigest DIGEST;
static {
......@@ -68,6 +71,7 @@ public class ResponseHelper {
// get the TTL for the current entry
Optional<Long> ttl = cachingService.getExpiration(id, wrapper);
if (!ttl.isPresent()) {
LOGGER.error("TTL not present!");
return Response.serverError().build();
}
......
......@@ -6,7 +6,6 @@
*/
package org.eclipsefoundation.core.model;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Date;
import java.util.HashMap;
......@@ -14,7 +13,6 @@ import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.Optional;
import java.util.stream.Collectors;
import javax.enterprise.context.RequestScoped;
import javax.servlet.http.HttpServletRequest;
......@@ -24,7 +22,9 @@ import javax.ws.rs.core.UriInfo;
import org.apache.commons.lang3.StringUtils;
import org.eclipsefoundation.core.namespace.DeprecatedHeader;
import org.eclipsefoundation.core.namespace.RequestHeaderNames;
import org.eclipsefoundation.core.namespace.UrlParameterName;
import org.eclipsefoundation.core.request.CacheBypassFilter;
import org.eclipsefoundation.marketplace.model.QueryParameters;
import org.eclipsefoundation.marketplace.model.UserAgent;
import org.jboss.resteasy.core.ResteasyContext;
......@@ -40,7 +40,7 @@ import org.jboss.resteasy.core.ResteasyContext;
public class RequestWrapper {
private static final String EMPTY_KEY_MESSAGE = "Key must not be null or blank";
private Map<String, List<String>> params;
private QueryParameters params;
private UriInfo uriInfo;
private HttpServletRequest request;
......@@ -67,7 +67,7 @@ public class RequestWrapper {
* @return the first value set in the parameter map for the given key, or null
* if absent.
*/
public Optional<String> getFirstParam(UrlParameterNames parameter) {
public Optional<String> getFirstParam(UrlParameterName parameter) {
if (parameter == null) {
throw new IllegalArgumentException(EMPTY_KEY_MESSAGE);
}
......@@ -87,7 +87,7 @@ public class RequestWrapper {
* @return the value list for the given key if it exists, or an empty collection
* if none exists.
*/
public List<String> getParams(UrlParameterNames parameter) {
public List<String> getParams(UrlParameterName parameter) {
if (parameter == null) {
throw new IllegalArgumentException(EMPTY_KEY_MESSAGE);
}
......@@ -114,6 +114,21 @@ public class RequestWrapper {
getParams().add(key, value);
}
/**
* Adds the given value for the given key, preserving previous values if they
* exist.
*
* @param key string key to add the value to, must not be null
* @param value the value to add to the key
*/
public void addParam(UrlParameterName key, String value) {
if (key == null) {
throw new IllegalArgumentException(EMPTY_KEY_MESSAGE);
}
Objects.requireNonNull(value);
getParams().add(key.getParameterName(), value);
}
/**
* Sets the value as the value for the given key, removing previous values if
* they exist.
......@@ -131,10 +146,6 @@ public class RequestWrapper {
addParam(key, value);
}
public List<UrlParameterNames> getActiveParameters() {
return params.asMap().keySet().stream().map(UrlParameterNames::getByParameterName).filter(Objects::nonNull)
.collect(Collectors.toList());
}
/**
* Returns this QueryParams object as a Map of param values indexed by the param
......
/*
* Copyright (C) 2019 Eclipse Foundation and others.
*
* This program and the accompanying materials are made
* available under the terms of the Eclipse Public License 2.0
* which is available at https://www.eclipse.org/legal/epl-2.0/
*
* SPDX-License-Identifier: EPL-2.0
*/
package org.eclipsefoundation.core.namespace;
/**
* Namespace containing common URL parameters used throughout the API.
*
* @author Martin Lowe
*/
public enum DefaultUrlParameterNames implements UrlParameterName {
QUERY_STRING("q"),
PAGE("page"),
LIMIT("limit"),
IDS("ids"),
ID("id");
private String parameterName;
private DefaultUrlParameterNames(String parameterName) {
this.parameterName = parameterName;
}
/**
* @return the URL parameters name
*/
@Override
public String getParameterName() {
return parameterName;
}
/**
* Retrieves the UrlParameterName for the given name.
*
* @param name the name to retrieve a URL parameter for
* @return the URL parameter name if it exists, or null if no match is found
*/
public static DefaultUrlParameterNames getByParameterName(String name) {
for (DefaultUrlParameterNames param: values()) {
if (param.getParameterName().equalsIgnoreCase(name)) {
return param;
}
}
return null;
}
}
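// Illustrative usage only (a hypothetical caller, not part of this commit):
// a resource class could resolve the free-text Solr search term through the
// RequestWrapper changes in this changeset, assuming an injected wrapper and
// an SLF4J logger.
//
//   Optional<String> query = wrapper.getFirstParam(DefaultUrlParameterNames.QUERY_STRING);
//   query.ifPresent(q -> LOGGER.debug("Received free-text search term: {}", q));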