author     V3n3RiX <venerix@redcorelinux.org>     2017-12-01 03:04:39 +0000
committer  V3n3RiX <venerix@redcorelinux.org>     2017-12-01 03:04:39 +0000
commit     407525b571b48cfd65e1ad7a02d250a927c967c9 (patch)
tree       844bea44d85dc7218f54970af1c42cc9d55c3f1a   /net-proxy/haproxy/metadata.xml
parent     89c6c06b8c42107dd231687a1012354e7d3039fc (diff)
gentoo resync : 01.12.2017
Diffstat (limited to 'net-proxy/haproxy/metadata.xml')
-rw-r--r--   net-proxy/haproxy/metadata.xml   22
1 file changed, 12 insertions(+), 10 deletions(-)
diff --git a/net-proxy/haproxy/metadata.xml b/net-proxy/haproxy/metadata.xml
index ddb31ac2bb8b..6a09dde73e29 100644
--- a/net-proxy/haproxy/metadata.xml
+++ b/net-proxy/haproxy/metadata.xml
@@ -6,21 +6,23 @@
<name>Christian Ruppert</name>
</maintainer>
<longdescription>
-HAProxy is a free, very fast and reliable solution offering high availability, load balancing, and proxying for TCP and HTTP-based applications. It is particularly suited for web sites crawling under very high loads while needing persistence or Layer7 processing. Supporting tens of thousands of connections is clearly realistic with todays hardware. Its mode of operation makes its integration into existing architectures very easy and riskless, while still offering the possibility not to expose fragile web servers to the Net.
+ HAProxy is a free, very fast and reliable solution offering high availability, load balancing, and proxying for TCP and HTTP-based applications. It is particularly suited for web sites crawling under very high loads while needing persistence or Layer7 processing. Supporting tens of thousands of connections is clearly realistic with todays hardware. Its mode of operation makes its integration into existing architectures very easy and riskless, while still offering the possibility not to expose fragile web servers to the Net.
-It can:
- - route HTTP requests depending on statically assigned cookies
- - spread the load among several servers while assuring server persistence through the use of HTTP cookies
- - switch to backup servers in the event a main one fails
- - accept connections to special ports dedicated to service monitoring
- - stop accepting connections without breaking existing ones
- - add/modify/delete HTTP headers both ways
- - block requests matching a particular pattern
-Its event-driven architecture allows it to easily handle thousands of simultaneous connections on hundreds of instances without risking the system's stability.
+ It can:
+ - route HTTP requests depending on statically assigned cookies
+ - spread the load among several servers while assuring server persistence through the use of HTTP cookies
+ - switch to backup servers in the event a main one fails
+ - accept connections to special ports dedicated to service monitoring
+ - stop accepting connections without breaking existing ones
+ - add/modify/delete HTTP headers both ways
+ - block requests matching a particular pattern
+ Its event-driven architecture allows it to easily handle thousands of simultaneous connections on hundreds of instances without risking the system's stability.
</longdescription>
<use>
<flag name="net_ns">Enable network namespace support (CONFIG_NET_NS)</flag>
<flag name="pcre-jit">Use JIT support for PCRE</flag>
+ <flag name="pcre2">Enable PCRE2 RegEx support</flag>
+ <flag name="pcre2-jit">Use JIT support for PCRE2</flag>
<flag name="slz">Use <pkg>dev-libs/libslz</pkg> compression library</flag>
<flag name="tools">Install additional tools (halog, iprange)</flag>
<flag name="device-atlas">Use <pkg>dev-libs/device-atlas-api-c</pkg> library</flag>