Fixed NullPointerException while creating temporary function #1384
Open
ashishkshukla wants to merge 2,911 commits into TIBCOSoftware:master from ashishkshukla:master
Conversation
* adding a command-line arg to kill on OOM
* checking if the system is 64-bit to apply the right libgemfirexd<32/64>.so
* adding the agent in case of mac/linux
* enabling copy in case of mac also
* implementing review suggestions
* checking if the agent load would be successful, else continuing without the agent
* implementing review suggestions
* undoing wrong changes to gradle.properties
* syncing store
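The 32-bit vs 64-bit native library selection mentioned above can be sketched as follows. The `sun.arch.data.model` property check is standard JVM behavior; the library file names follow the `libgemfirexd<32/64>.so` pattern from the commit message, and the helper class itself is a hypothetical illustration, not the project's actual code:

```java
public class NativeLibSelector {
    /** Picks the native library name based on the JVM's data model
     *  (32-bit vs 64-bit), following the libgemfirexd<32/64>.so pattern. */
    static String pickLibrary() {
        // sun.arch.data.model reports "64" on 64-bit JVMs, "32" on 32-bit ones
        String model = System.getProperty("sun.arch.data.model", "");
        if (model.contains("64")) {
            return "libgemfirexd64.so";
        }
        return "libgemfirexd.so";
    }

    public static void main(String[] args) {
        System.out.println(pickLibrary());
    }
}
```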
Reset the pool at the end of collect to avoid spillover of the lowlatency pool setting to later operations that may not use the CachedDataFrame execution paths.
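The reset-at-the-end pattern described above is essentially a try/finally restore. A minimal sketch, where the pool names and the static holder are hypothetical stand-ins rather than SnappyData's actual scheduler API:

```java
public class PoolReset {
    // Stand-in for the session's active scheduler pool setting
    static String activePool = "default";

    /** Runs a collect-like action under the low-latency pool, always
     *  restoring the previous pool afterwards so the setting cannot
     *  spill over into later operations. */
    static <T> T withLowLatencyPool(java.util.function.Supplier<T> action) {
        String previous = activePool;
        activePool = "lowlatency";
        try {
            return action.get();
        } finally {
            activePool = previous; // reset even if the action throws
        }
    }

    public static void main(String[] args) {
        String result = withLowLatencyPool(() -> "collected");
        System.out.println(result + " / pool=" + activePool);
    }
}
```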
For cases like macOS, which ships with bash 3.x by default.
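A guard like the following is a common way to detect the old bash 3.x that macOS ships by default; this is a sketch of the general technique, not the repository's actual script:

```shell
#!/usr/bin/env bash
# BASH_VERSINFO[0] holds the bash major version; macOS ships 3.x by
# default, so features introduced in bash 4+ (e.g. associative arrays)
# need a fallback path on such systems.
if [ "${BASH_VERSINFO[0]}" -lt 4 ]; then
  echo "old-bash"
else
  echo "modern-bash"
fi
```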
…directory structure in order to avoid confusion between the PID of the hydra JVM and the PID of the snappy node - Correcting table names in the configuration file required for dmlOps tests Testing Done: Verified the changes by running a few tests from HA and non-HA bts
- avoid unnecessary re-evaluation of the cluster properties target - redirect stderr of spark-shell in testSparkShell to the output string for checks too - update store and spark links - remove distZip from the default assemble target
…#1194) * ignoring failing tests * removing wrongly created test suite * formatting changes
…the server. Because of the previous order there was a chance that this function was executed on this node even before it was registered.
…ere was an issue with running a snappy job with the --packages option outside the snappydata build directory.
…#1195) Key-based aggregations (GROUP BY) already handle copying of the incoming value, but this was missing in non-key flat aggregations. ## Changes proposed in this pull request - refactored the value copy code in ObjectHashMapAccessor and the string clone code (only if required) - use the above for non-key aggregations too - renamed io.snappydata.implicits to io.snappydata.sql.implicits - handle null values during clone/copy of non-primitive aggregate results
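The hazard behind this fix is a general one: streaming operators often reuse a single mutable buffer per row, so an aggregation that stores a reference without copying ends up seeing only the final row's contents. A minimal sketch of the copy-before-store discipline, using a hypothetical buffer rather than SnappyData's ObjectHashMapAccessor:

```java
import java.util.ArrayList;
import java.util.List;

public class CopyOnAggregate {
    /** Aggregates values arriving through a reused mutable buffer,
     *  copying each incoming value before storing it. */
    static List<String> aggregate(String[] values) {
        StringBuilder reused = new StringBuilder(); // buffer reused per row
        List<String> out = new ArrayList<>();
        for (String value : values) {
            reused.setLength(0);
            reused.append(value);
            // Copy before storing: keeping `reused` itself would leave
            // every entry reflecting only the final row's contents
            out.add(reused.toString());
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(aggregate(new String[] {"a", "b", "c"}));
        // prints [a, b, c]
    }
}
```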
Also corrected the copyright headers to be SnappyData ones.
Changed the year from 2017 to 2018 in license headers.
…alCatalog - for paths from SnappyExternalCatalog (e.g. "CREATE VIEW"), the catalog cache needs to be cleared too; added a unit test for this - fixing some occasional failures due to test issues - renamed SnappyTableStatsProviderService.suspendCacheInvalidation to TEST_SUSPEND_CACHE_INVALIDATION to indicate clearly that it is meant only for tests
When using SnappySession (the default), the temporary hive configuration passed is not only used by HiveServer2 but also overrides the internal hive configuration used by SnappyStoreHiveCatalog, causing problems. Now the SnappySession hive configuration is used after adding the "hive.server2" configuration read from the temporary "executionHive" client (which in turn sets it up using hive-site.xml etc. that are ignored by SnappySession's hive meta-store client). Also reduced logging during the temporary hive client initialization.
- new "snappydata.sql.hiveCompatible" property to make some SQL output more hive-compatible; currently this includes the "show tables ..." variants, which have only one "name" column in the output - added unit tests for the above property and the "show tables ..." variants
.jvmkill*.log files. A .hprof dump can be pulled by passing the option '-m' or '--hprofdump' to the script.
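Option handling of this shape is typically done with a case loop over the arguments; the following is a sketch of the general pattern only — the real script's behavior beyond recognizing -m/--hprofdump is an assumption:

```shell
#!/usr/bin/env bash
# Sketch of recognizing the -m/--hprofdump flag mentioned above.
pull_hprof=false
for arg in "$@"; do
  case "$arg" in
    -m|--hprofdump) pull_hprof=true ;;
  esac
done
echo "pull_hprof=$pull_hprof"
```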
Hot fix changes
Updates to backlog doc items
New Spark Extension API Guide
The jar published for snappydata-jdbc is a shadow jar that includes all dependencies.
… getting corrupted (TIBCOSoftware#1381)
* Code changes for SNAP-2779 and SNAP-1338:
- Adding a Redundancy column in the Tables list to view the count of redundant copies.
- Adding a Redundancy Status column in the Tables list to monitor whether redundancy is satisfied or broken.
- Changes for maintaining Redundancy and isRedundancyImpaired details for the count of redundant copies and the redundancy satisfied/broken status.
- Display Redundancy as 'NA' if the distribution type is REPLICATE.
- Display the buckets count in red if any of the buckets is offline.
* changes to tackle the insufficient disk space issue in transactions * fixed the test failure. Apparently in some cases the table name present in ColumnarStore is in lower case, causing a region-not-found exception. The fix is to upper-case the table name. Not debugged why the table name comes in lower case for some partitions.
* Take a region lock on bulk write ops in column tables; in the smart connector case, use a connection to execute a procedure on a server to take the lock, and release the lock using the same connection when the operation is over.
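The same-connection discipline above matters because server-side locks are typically owned by the session or connection that acquired them, so releasing from a different connection would fail or release someone else's lock. A minimal sketch, with an in-memory ReentrantLock standing in for the hypothetical server-side acquire/release procedures:

```java
import java.util.concurrent.locks.ReentrantLock;

public class RegionLockSketch {
    // Stand-in for the server-side region lock behind the procedures
    static final ReentrantLock regionLock = new ReentrantLock();

    /** Performs a bulk write holding the region lock, releasing it via
     *  the same "connection" (here, the same thread) even on failure. */
    static String bulkWrite(Runnable write) {
        regionLock.lock();       // acquire through the connection
        try {
            write.run();
            return "written";
        } finally {
            regionLock.unlock(); // release on the SAME connection
        }
    }

    public static void main(String[] args) {
        System.out.println(bulkWrite(() -> {}));
        System.out.println("locked=" + regionLock.isLocked());
    }
}
```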
* added removeTableUnsafeIfExists to drop a catalog table in an inconsistent state
* adding a test for the DROP_CATALOG_TABLE_UNSAFE procedure
* worked on review comments
* review comment changes
* enhancements to REMOVE_METASTORE_ENTRY
* fixing test for SNAP-3055
* review changes incorporated
* review changes
* removing unnecessary exception handling
* Mask credentials (in case of an s3 URI) in DESCRIBE EXTENDED/FORMATTED output.
* Mask credentials in case of s3 on the UI for external tables.
* Disallow non-admin users access to the tables in SNAPPY_HIVE_METASTORE.
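Masking credentials embedded in an s3 URI can be sketched with a regular expression that strips the user-info portion; the pattern and the mask string here are assumptions for illustration, not the exact ones used by the change:

```java
public class MaskS3Credentials {
    /** Replaces the user-info (access key / secret) portion of an
     *  s3/s3a/s3n URI with a fixed mask so it is safe to display. */
    static String mask(String uri) {
        // Matches "s3://", "s3a://" or "s3n://" followed by credentials
        // up to '@', keeping only the scheme prefix
        return uri.replaceAll("(s3[an]?://)[^@/]+@", "$1****:****@");
    }

    public static void main(String[] args) {
        System.out.println(mask("s3a://AKIAXXXX:secretKey@bucket/path"));
        // prints s3a://****:****@bucket/path
    }
}
```

URIs without embedded credentials pass through unchanged, since the pattern requires an '@' before the first path separator.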
This adds support for the components of Spark's hive session:
1) a catalog that reads from the external hive meta-store using an extra hive-enabled SparkSession;
2) HiveSessionState from the hive-enabled SparkSession, which adds additional resolution rules and strategies for such hive-managed tables;
3) parser changes to delegate to the Spark parser for Hive DDL extensions. A special format "CREATE TABLE ... USING hive" is allowed that explicitly specifies the hive provider for a table.
There are two user-level properties:
- The standard "spark.sql.catalogImplementation", which, when set to "hive", consults the external hive metastore in addition to the builtin catalog. The builtin catalog is consulted first and then the external one, so in case of name clashes the builtin one takes preference. For writes, all tables using "hive" as the provider go to the external hive metastore while the rest use the builtin catalog.
- "snappydata.sql.hiveCompatibility", which can be set to default/spark/full. When set to "spark" or "full", the default behaviour of "create table ..." without any USING provider, and of any Hive DDL extensions, changes to create a hive table instead of a row table.
A lazily instantiated Hive-enabled SparkSession is kept inside SnappySessionState and is referred to if "spark.sql.catalogImplementation" is "hive" for the session. For 1), the list/get/create methods in SnappySessionCatalog have been overridden to read/write to the hive catalog after the snappy catalog if hive support is enabled on the session. For 2), wrapper Rule/Strategy classes have been added that wrap the extra rules/strategies from the hive session and run them only if the property has been enabled on the SnappySession. The code temporarily switches to the hive-enabled SparkSession when running hive rules/strategies, some of which expect the internal sharedState/sessionState to be those of hive.
Honour spark.sql.sources.default for the default data source: if spark.sql.sources.default is explicitly set, then use it in the SQL parser, with the default remaining 'row' as before. Initial code for porting the hive suite. Fix for SNAP-3100: make the behaviour of "drop schema" and "drop database" identical, dropping from both the builtin and external catalogs, since "create schema" is identical to "create database". Fixes for schema/database handling and improved help messages. Improved CommandLineToolsSuite to not print failed output to the screen.
* Added code changes for SNAP-2772 * Added code changes for undeploying packages/jars from the server side.
sumwale force-pushed the master branch 5 times, most recently from 8b43301 to 2b254d9 on October 1, 2021 09:23
sumwale force-pushed the master branch 5 times, most recently from 2c254f0 to 0f2888f on October 18, 2021 17:01
sumwale force-pushed the master branch 2 times, most recently from a466d26 to ea127bd on April 12, 2022 10:05
sumwale force-pushed the master branch 2 times, most recently from 99ec79c to c7b84fa on June 12, 2022 04:19
Changes proposed in this pull request
This pull request fixes a NullPointerException thrown while creating temporary functions.
See issue #1383.
Patch testing
Manual testing.
ReleaseNotes.txt changes
NA
Other PRs
NA