incubating program: ServerlessDB for HTAP #483
Describe the feature or project you want to incubate:
Summary
We want to provide a serverless database service based on TiDB on the cloud, focusing on dynamically scaling compute and storage nodes up and down in response to business load changes, with zero user perception. Throughout the life of the database service, the goal is to always keep background resources matched to the business load as closely as possible, helping users maximize cost savings.
Motivation
While TiDB offers cloud services, there are a number of issues.
- Users have to choose node specifications up front, and it is difficult to choose the right ones: they pick specifications that are too small or too large, or the business load simply cannot be evaluated in advance, so users never end up with the right specifications.
- When the business load grows, users have to decide what resources to expand and by how much. In practice, it is difficult to respond in time to extremely rapid load changes, which causes business performance fluctuations.
- When the business load drops, users have to decide what resources to shrink and by how much. A wrong judgment again causes business performance fluctuations.
- Carrying out this expansion and shrinkage work manually is very burdensome: if the system is not expanded, business performance degrades, and if it is not scaled down, resources are wasted.
- If the client uses connection pooling or long connections, neither of the following can be handled well (see the sketch after this list):
  - When scaling out, existing long connections cannot be redistributed, so the load does not spread to the additional compute nodes.
  - When scaling in, zero user awareness cannot be guaranteed, because killing a compute node that still holds connections causes the client to report an exception.
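As a rough illustration of the connection-pool problem above, the following Go sketch shows a typical client-side pool talking to a TiDB cluster through a load balancer. The address, credentials, and database name (lb.example.com:4000, app_user, app_db) are hypothetical; the point is that long-lived pooled connections stay pinned to whichever compute node they were first routed to, and bounding connection lifetime on the client is only a partial workaround, which is what motivates transparent scaling on the service side.

```go
// Minimal sketch, assuming a TiDB cluster behind a load balancer at
// lb.example.com:4000; the DSN, user, and database name are hypothetical.
package main

import (
	"database/sql"
	"log"
	"time"

	_ "github.com/go-sql-driver/mysql" // MySQL-protocol driver commonly used with TiDB
)

func main() {
	db, err := sql.Open("mysql", "app_user:app_pass@tcp(lb.example.com:4000)/app_db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// A typical long-connection pool: connections are reused indefinitely, so a
	// newly added compute node receives none of this traffic (scale-out), and a
	// removed node that still holds these connections surfaces errors (scale-in).
	db.SetMaxOpenConns(50)
	db.SetMaxIdleConns(50)

	// One partial client-side mitigation: bound connection lifetime so the pool
	// periodically reconnects through the load balancer and picks up new nodes.
	// This still cannot guarantee zero awareness during scale-in.
	db.SetConnMaxLifetime(5 * time.Minute)

	if err := db.Ping(); err != nil {
		log.Fatal(err)
	}
	log.Println("connected through the load balancer")
}
```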
Estimated Time
180 days