This service interacts with a Redis datastore via HTTP. The setup is meant to be scalable and to handle concurrent requests. The service is built from the following building blocks: a Dropwizard based Webserver, a Guava based LRU cache, and Redis as the backing store. <key, value> pairs are cached in the Guava LRU cache in front of Redis. In its current form, the system favors availability over consistency, as discussed below.
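As an illustration of the cache building block (not the exact code in this repo; the capacity and TTL below are placeholder values), a Guava based LRU cache bounded by entry count can be created as follows,
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import java.util.concurrent.TimeUnit;

public class LruCacheSketch {
    public static void main(String[] args) {
        // Size-bounded cache: least-recently-used entries are evicted first.
        Cache<String, String> cache = CacheBuilder.newBuilder()
                .maximumSize(10_000)                    // placeholder capacity
                .expireAfterWrite(10, TimeUnit.MINUTES) // placeholder TTL
                .build();
        cache.put("a1", "b1");
        System.out.println(cache.getIfPresent("a1"));   // prints b1
    }
}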
A software application typically desires one of the below two properties, consistency or availability, based on the use case, and lives with a slight compromise on the other.
In a realistic data center, where nodes can be down for arbitrary amounts of time and network partitions are common, it is impossible to achieve both consistency and availability. Based on the use case, we prefer one over the other to fit the requirements.
Data consistency is important for systems hosting transactional data. Non-transactional data (typically) demands less consistency, i.e. a small fraction of data loss is acceptable if that helps with high availability, so that the system stays available for a huge throughput of writes, e.g. clickstream data in a typical e-commerce website.
This setup establishes a foundation towards high availability while compromising on consistency. Redis can be set up for asynchronous replication. In case a Redis host goes down, a replica can take over as a replacement. The only gotcha is that it may not have all the data. Such fault tolerance offers high availability at the cost of data consistency.
Currently, Redis is running as a single node. TODO: Setup Redis slave and turn on replication mechanism for high availability.
System requirements,
docker
docker-compose
bash
make
docker login (for the ability to pull public images)
The relevant build and configuration files are Dockerfile.build and resources/redis-app.yml. Clone the repo and run the tests with the commands below,
git clone git@github.com:abhinavmehta14/proxy-redis.git
cd proxy-redis
make test
This executes the below steps in the order they are mentioned,
- seeds Redis with an initial set of <key, value> pairs via RedisAppResourceIntegrationTest.initRedisStore
- runs the integration tests
- builds the Webserver image (required later by make run)
The below command stops the existing proxy container and spins up a new Webserver container,
make run
Note that the above command requires at least one execution of make test before make run, in order to generate the Webserver image; the Webserver image is not yet pulled from Docker cloud.
To access Webserver logs,
docker-compose exec proxy-redis /bin/bash
tail -f logs/*
Logs are appended to two separate files,
logs/proxy-redis-request.log contains request logs
logs/proxy-redis.log contains application logs
To watch Redis metrics or stats,
> redis-cli -h localhost -p 7001
localhost:7001> info
# Server
redis_version:5.0.5
...
# Stats
total_connections_received:7
total_commands_processed:22
instantaneous_ops_per_sec:0
total_net_input_bytes:663
total_net_output_bytes:14857
instantaneous_input_kbps:0.00
instantaneous_output_kbps:0.00
rejected_connections:0
sync_full:0
sync_partial_ok:0
sync_partial_err:0
expired_keys:0
expired_stale_perc:0.00
expired_time_cap_reached_count:0
evicted_keys:0
keyspace_hits:1
keyspace_misses:1
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:0
migrate_cached_sockets:0
slave_expires_tracked_keys:0
active_defrag_hits:0
active_defrag_misses:0
active_defrag_key_hits:0
active_defrag_key_misses:0
Note that the above method assumes you have redis-cli set up, which is not part of the system requirements.
In the Webserver metrics, timers shows useful metrics like min, max, t99, stddev etc. for each endpoint.
One can access the GET endpoint as http://localhost:8080/v1.0/proxy?key=[KEY], where KEY is a required parameter. The Webserver returns the value stored for the key (when present).
There is also a POST endpoint, a backdoor (not part of the requirements) to add key-value pairs to Redis. Sample request,
curl -i -XPOST 'http://localhost:8080/v1.0/proxy?key=a5&value=b5'; echo
Returns,
Alternatively, one can install and use redis-cli to access the Redis DB for writes (or reads) as follows,
redis-cli -h localhost -p 7001
localhost:7001> set a1 b1
OK
localhost:7001> get a1
"b1"
In line with the requirements, key-value pairs can be added to Redis by appending them in RedisAppResourceIntegrationTest.initRedisStore and running make test, which ensures the pairs are set in Redis. These pairs are persisted in the DB until docker-compose rm or docker-compose rm redis is executed.
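A hypothetical sketch of what such seeding could look like, assuming a Jedis client pointed at the Redis container (the real test wiring in this repo may differ),
import redis.clients.jedis.Jedis;

public class RedisSeedSketch {
    // Stand-in for RedisAppResourceIntegrationTest.initRedisStore: each pair
    // appended here ends up in Redis once the tests run.
    static void initRedisStore(Jedis jedis) {
        jedis.set("a1", "b1");
        jedis.set("a2", "b2");
        jedis.set("a5", "b5"); // add new pairs by appending lines like this
    }

    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 7001)) {
            initRedisStore(jedis);
        }
    }
}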
To stop and remove all running containers including data on Redis,
> make stop
docker-compose stop
Stopping proxy ... done
Stopping redis-db ... done
docker-compose rm
Going to remove proxy, proxy-redis_redis_run_1, proxy-redis_proxy-test_run_4, proxy-redis_proxy-test_run_3, proxy-redis_proxy-test_run_2, proxy-redis_proxy-test_run_1, redis-db
Are you sure? [yN] y
Removing proxy ... done
Removing proxy-redis_redis_run_1 ... done
Removing proxy-redis_proxy-test_run_4 ... done
Removing proxy-redis_proxy-test_run_3 ... done
Removing proxy-redis_proxy-test_run_2 ... done
Removing proxy-redis_proxy-test_run_1 ... done
Removing redis-db ... done
To stop Webserver container only,
docker-compose stop proxy-redis
To bring up the Redis store, run the tests, build the Webserver image, and start the Webserver, one can invoke the below command,
make all
This combines all the make tasks.
[Work in progress]
The class com.amehta.proxy.redis.interact.test.RedisAppConcurrentRequestsTest
can be used to send concurrent requests to the HTTP Webserver. The number of requests can be configured in its main() method.
TODO: Figure out how to run a standalone command from the Dropwizard based fat jar and use that command to dockerize this test.
Currently, I can run this class in a native dev environment and see that concurrent requests are being handled.
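For reference, a stripped-down sketch of such a concurrent-request driver using only the JDK (the request count, thread pool size and key below are illustrative, not the values used by RedisAppConcurrentRequestsTest),
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ConcurrentRequestsSketch {
    public static void main(String[] args) throws Exception {
        int numRequests = 100; // configurable, as in RedisAppConcurrentRequestsTest.main()
        ExecutorService pool = Executors.newFixedThreadPool(10);
        for (int i = 0; i < numRequests; i++) {
            pool.submit(() -> {
                try {
                    URL url = new URL("http://localhost:8080/v1.0/proxy?key=a1");
                    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                    // Each thread reports the HTTP status it received under load.
                    System.out.println(Thread.currentThread().getName() + " -> " + conn.getResponseCode());
                    conn.disconnect();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
        }
        pool.shutdown();
    }
}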
TODO: Architecture Diagram
GET endpoint
The endpoint is served by a multi-threaded Webserver where each thread handles requests from a bounded queue of pending requests. The request handler first checks the Guava Cache for the presence of the key.
The cache has various tunable parameters: TTL, and cache size in terms of number of entries. Redis keys are available until a configurable global expiry time.
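A minimal sketch of this lookup path, assuming Jedis as the Redis client, a fall-through to Redis on a cache miss, and illustrative values for the cache size, cache TTL and global Redis expiry (the real values come from the service configuration),
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import java.util.concurrent.TimeUnit;
import redis.clients.jedis.Jedis;

public class ReadThroughSketch {
    public static void main(String[] args) {
        Jedis jedis = new Jedis("localhost", 7001);
        // On a cache miss the loader falls through to Redis.
        LoadingCache<String, String> cache = CacheBuilder.newBuilder()
                .maximumSize(10_000)                    // cache size in number of entries
                .expireAfterWrite(60, TimeUnit.SECONDS) // local cache TTL
                .build(new CacheLoader<String, String>() {
                    @Override
                    public String load(String key) {
                        String value = jedis.get(key);
                        if (value == null) {
                            throw new RuntimeException("key not found: " + key);
                        }
                        return value;
                    }
                });
        // Keys written to Redis carry a configurable global expiry.
        jedis.setex("a1", 3600, "b1");
        System.out.println(cache.getUnchecked("a1")); // first read hits Redis, then is cached
        System.out.println(cache.getUnchecked("a1")); // second read is served from the Guava cache
    }
}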
POST endpoint
TODO
A Guava Cache or Redis lookup or insert is O(1) for all practical purposes.
In practice, the lookup within each segment of the underlying ConcurrentHashMap is non-constant (TODO: confirm whether it is a logarithmic or linear function of the segment size).
Also, multiple threads writing to the same segment can cause contention issues; hence, complexity is also a function of the number of threads writing to the cache.
TODO: Elaborate on the details above.
TODOs:
- CachedRedisService should implement the io.dropwizard.lifecycle.Managed interface (see the sketch below)
- ProxyLoadTestCommand to send concurrent requests and assert the response HTTP code; scripts/curl_test_concurrent.sh needs to be Dockerized
- globalExpiry from config in RedisAppResourceIntegrationTest
- java -jar FAT_JAR addresses this
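A rough sketch of what the first item could look like; the JedisPool field is an assumption about how CachedRedisService holds its Redis connection,
import io.dropwizard.lifecycle.Managed;
import redis.clients.jedis.JedisPool;

public class CachedRedisService implements Managed {
    private final JedisPool jedisPool;

    public CachedRedisService(JedisPool jedisPool) {
        this.jedisPool = jedisPool; // assumed connection-pool field
    }

    @Override
    public void start() throws Exception {
        // Nothing to initialise eagerly; the pool hands out connections on demand.
    }

    @Override
    public void stop() throws Exception {
        jedisPool.close(); // release Redis connections on application shutdown
    }
}
The instance would then be registered with environment.lifecycle().manage(...) in the Dropwizard Application so that start/stop are tied to the server lifecycle.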
Benchmarks below are done using the Apache HTTP server benchmarking tool (ab) for the GET endpoint, hosted by a single replica of the Proxy Service. Command to run the benchmark,
make benchmark_ab
Various parameters (concurrency_level, keep_alive) are controlled from within the Makefile.
Results,
Special conditions | #Keys queried | #Requests | Concurrency | mean (ms) | t50 (ms) | t75 (ms) | t95 (ms) | t99 (ms) | t100 (ms) | Non-2xx responses | Requests per second | Response body size (bytes) |
---|---|---|---|---|---|---|---|---|---|---|---|---|
no connection keep alive | 1 | 100,000 | 10 | 34 | 26 | 32 | 46 | 208 | 1259 | - | - | 13 |
no connection keep alive | 1 | 100,000 | 32 | 113 | 87 | 107 | 275 | 540 | 4853 | - | - | 13 |
with connection keep alive | 1 | 1,000,000 | 32 | 3 | 0 | 1 | 14 | 65 | 400 | 100 | 640 | 13 |
Note that mean and t{xx} values are in milliseconds.
TODO: Improve benchmarking capabilities for multiple URLs using JMeter (docker pull justb4/jmeter).
This benchmark uses the ProxyLoadTestCommand class, a command line utility for advanced assertions on load tests, like response code and body.
TODOs:
- NoRouteToHost exception
The benchmark can be invoked with the command below,
make benchmark
To connect to the running benchmark container,
docker-compose ps # To get service name for benchmark e.g. proxy-redis_benchmark_run_3
docker exec -it proxy-redis_benchmark_run_[%s] bash # use suffix from the output of the command above