Native C++ driver updates and some fixes (#566)
- update oracle ojdbc6 version: switch to a non-deprecated version available in maven central
- update thrift 0.14.1 and regenerate thrift layer
- update boost to 1.76.0 and googletest to 0.10.0
- changed boost::shared_ptr and other smart pointers to C++11 std::shared_ptr throughout
  (also thrift uses the std counterparts now instead of boost)
- likewise changed boost thread to std thread except for boost::shared_mutex which
  is not available in C++11
- removed ArrayList which uses a risky realloc for expansion that will cause trouble
  for classes having a non-trivial default constructor (the gcc compiler now complains
  about it too); switched to std::vector instead
- removed the placement new strategy used to convert thrift::Row to
  Row/UpdatableRow/Parameters (which depended on a specialized ArrayList constructor to
  avoid touching the array inside) and instead use move constructors to move data from
  thrift::Row to the child classes; while this has the overhead of having to create
  child class objects, overall it is much safer and the overhead is negligible compared
  to the cost of reading+creating the thrift::Row objects from the socket stream (see
  the sketch below)
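  A minimal sketch of the move-based conversion; the stand-in thrift::Row layout and the
  member names here are assumptions, not the generated thrift code or the actual driver:

    #include <string>
    #include <utility>
    #include <vector>

    namespace thrift {
      // stand-in for the generated thrift::Row; the real layout differs
      struct Row {
        std::vector<std::string> values;
      };
    }

    // the client-side Row takes ownership by moving data out of thrift::Row
    // instead of converting it in place with placement new
    class Row {
    public:
      explicit Row(thrift::Row&& src) noexcept : m_values(std::move(src.values)) {}

      Row(Row&&) noexcept = default;
      Row& operator=(Row&&) noexcept = default;

    private:
      std::vector<std::string> m_values;
    };
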
- due to above change, the set of rows in ResultSet are now created using move upfront
  from thrift::Row vector and the iterator uses the same instead of using placement new
  to convert on-the-fly, which is cleaner too since the earlier strategy can lead to placement
  new being done multiple times for the same thrift::Row for scrollable cursors
- changed the ParametersBatch API to "moveParameters", which moves an existing Parameters
  list to the end of the batch, instead of having to use the awkward "createParameters"
  approach
- added reserve, batchSize and parametersAt methods to ParametersBatch (usage sketched below)
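  A rough usage sketch of the batch API; only moveParameters/reserve/batchSize/parametersAt
  come from the notes above, the rest (setInt/setString and the exact signatures) is assumed:

    ParametersBatch batch;
    batch.reserve(rows.size());                // pre-allocate for the expected batch size
    for (const auto& r : rows) {
      Parameters params;
      params.setInt(1, r.id);                  // assumed setter names
      params.setString(2, r.name);
      batch.moveParameters(std::move(params)); // moves this Parameters list to the end
    }
    const size_t count = batch.batchSize();    // number of parameter sets in the batch
    Parameters& first = batch.parametersAt(0);
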
- ResultSets inside Result now use shared_ptrs to avoid creating them on-the-fly every time
- use std::move in places like serialization/deserialization to reduce copy
- removed unused ThreadSafeList and added note about adding move semantics support
  in ThreadSafeMap in the future
- reduced unnecessary copies in ControlConnection and other classes by switching over
  to use move operations
- use std::make_shared for shared_ptr creation throughout (except for nullptr)
- changed NULL to nullptr throughout and replaced NULL equality checks with simple boolean checks
- correct SnappyDataInc github links to point to TIBCOSoftware instead
- updated eclipse CDT configuration files to the latest CDT 10.2.0 version

- fix heap overflow shown by AddressSanitizer (missing null termination in LOG_LEVELS)
- fix use of the vector constructor that takes a size: in C++11 it sets the actual size
  (filling with default-constructed values) rather than the initial capacity, so change
  to use reserve() (see the sketch after these items)
- use range-based for loops instead of explicit iterators
- removed some code repetition in UTF8<->UTF16 conversions
- changed single character append to a string to use push_back
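  A small self-contained example of the reserve/range-for/push_back pattern referred to
  above (the helper name is illustrative):

    #include <string>
    #include <vector>

    std::vector<std::string> copyNames(const std::vector<std::string>& src) {
      // std::vector<std::string> dst(src.size()) would create src.size() empty strings;
      // reserve() only pre-allocates capacity
      std::vector<std::string> dst;
      dst.reserve(src.size());
      for (const auto& name : src) {    // range-based for instead of explicit iterators
        dst.push_back(name);
      }
      if (!dst.empty()) {
        dst.back().push_back(';');      // single-character append uses push_back
      }
      return dst;
    }
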
- update copyright year

- allow for default constructor and assignment for ConnectionProperty
- added appendString and appendBinary methods to Parameter that will either
  append to string/binary if an existing value is present else set it;
  this will be used for a more efficient implementation of SQLPutData
- added a password entry generator utility to ldap-test-server along with a --port option;
  also updated auth.ldif with entries used by ODBC tests
- added "default-schema" property to connection

- fix a few potential memory leaks in case of exceptions as reported by AddressSanitizer
- updated eclipse build to use AddressSanitizer by default for release builds, both in
  compilation and linking; removed it from debug builds since the debugger otherwise becomes
  very slow; switched to the system-installed boost library
- switch to using std::function instead of template TPROC method arguments and move those
  functions from headers to implementation (with no template arguments it is better to
  have the implementation separately instead of force-inlining those large methods); see
  the sketch below
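  A sketch of that change with an illustrative helper name (not the actual driver method):

    #include <functional>

    // before (header-only, instantiated and inlined at every call site):
    //   template <typename TPROC> void withRetries(TPROC proc);
    // after (declared in the header, defined once in the implementation file):
    void withRetries(const std::function<void()>& proc) {
      for (int attempt = 0; attempt < 3; attempt++) {
        try {
          proc();
          return;
        } catch (...) {
          if (attempt == 2) throw;   // rethrow after the last retry
        }
      }
    }
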
- reviewed all classes and either mark them as "final" or else mark their destructors
  as virtual
- changed individual exception handling using Utils::handleExceptionInDestructor
  to a common Utils::handleExceptionsInDestructor that is now passed a std::function
  containing the actual body of the destructor (sketched below)
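  A hedged sketch of the common helper; the real Utils signature and logging may differ:

    #include <exception>
    #include <functional>
    #include <iostream>

    namespace Utils {
      // runs the destructor body and logs anything it throws, since
      // destructors must not propagate exceptions
      inline void handleExceptionsInDestructor(
          const char* operation, const std::function<void()>& body) noexcept {
        try {
          body();
        } catch (const std::exception& e) {
          std::cerr << "ignoring exception in " << operation << ": " << e.what() << '\n';
        } catch (...) {
          std::cerr << "ignoring unknown exception in " << operation << '\n';
        }
      }
    }

    struct Connection {
      void close() { /* may throw */ }
      ~Connection() {
        Utils::handleExceptionsInDestructor("Connection::close", [this] { close(); });
      }
    };
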
- mark a few methods as "noexcept" which are never supposed to throw exceptions (and ensure
  that no exceptions are thrown)
- code duplication: removed some of the methods in Utils used for logging like toStream()
  to instead use Utils::toString() methods with move semantics if required for efficiency
- fix AQP property handling in ClientService
- correct operation name sent to exception messages in ClientService::bulkClose
- Print process and library base addresses in stack traces. The addresses shown in
  SQLException::printStackTrace cannot be translated to code file and line number
  without the process/library base address where it was loaded.
- Added code in SQLException to determine the base addresses for the current executable
  and the snappyodbc library using /proc/self/maps in "staticInitialize" and add the
  message to SQLException::printStackTrace (a simplified sketch follows)
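  A simplified, Linux-only sketch of the base-address lookup; the actual SQLException code
  differs and the helper name is illustrative:

    #include <cstdint>
    #include <fstream>
    #include <sstream>
    #include <string>

    // returns the load (base) address of the first /proc/self/maps entry whose
    // path contains the given module name, or 0 if not found
    uintptr_t findBaseAddress(const std::string& moduleName) {
      std::ifstream maps("/proc/self/maps");
      std::string line;
      while (std::getline(maps, line)) {
        if (line.find(moduleName) == std::string::npos) continue;
        // each line starts with "<start>-<end> perms offset dev inode path"
        uintptr_t start = 0;
        std::istringstream(line.substr(0, line.find('-'))) >> std::hex >> start;
        return start;
      }
      return 0;
    }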

- removed transparent failover in ClientService since it fails in some situation and does
  not restore any session properties or default schema set on the previous connection
- added custom override for operator== in HostAddress so that it compares equal just
  based on either ipAddress+port or hostName+port like in HostAddress.java (sketched
  below); specifically the difference in the serverType field currently causes the
  comparison to always fail, leading to creation of a separate ControlConnection for
  every ClientService
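  A sketch of the relaxed comparison (field names are assumptions): equal if either
  ipAddress+port or hostName+port match, ignoring serverType:

    #include <string>

    struct HostAddress {
      std::string hostName;
      std::string ipAddress;
      int port = 0;
      int serverType = 0;   // deliberately ignored by operator==

      friend bool operator==(const HostAddress& a, const HostAddress& b) {
        if (a.port != b.port) return false;
        if (!a.ipAddress.empty() && a.ipAddress == b.ipAddress) return true;
        return !a.hostName.empty() && a.hostName == b.hostName;
      }
    };
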
- set isCurrent flag in HostAddress and use the same for early comparison to handle cases
  where the server sends back only the IP address, which does not match the
  host name provided in the driver connect
- overall simplified exception handling in ClientService to make it uniform
- maintain a reference to SSLSocketFactory in ClientService and ControlConnection so
  that it outlives the SSL connections (THRIFT-4164)
- added a GlobalConnectionHolder in ControlConnection whose global destructor sets
  the TSSLSocketFactory::setInExit(true) flag so that SSL_shutdown does not cause
  unexpected crashes in Windows during application teardown
- use a map to shared_ptr of ControlConnection in GlobalConnectionHolder for each
  ClientService so that the ControlConnection gets automatically cleaned up when
  all its ClientServices are closed/destroyed
- load-balance now works fine with above changes, so switch tests to use port 1527
- correct indentation/spacing in NetConnection.h/NetConnection.cpp

- updated remaining dependencies: boost => 1.76.0, openssl => 1.1.1k
- fix build on windows with VS2019
- fix SSLParameters handling with std::function that ends up making a
  copy when used via the functor defined in SSLParameters; now using an
  inline lambda closure instead of the functor in SSLParameters
- moved SSLParameters inside SSLSocketFactory to avoid repeated copying
  and for proper ownership instead of strewing copies across classes
- corrected passing of "failure" argument during initial connection
  creation to append the real cause to it so it is shown in exception message
- allow for the truststore property to either point to a PEM file or a
  directory having a bunch of them (initialized using OpenSSL's c_rehash); see the
  sketch below
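  Roughly how this can map onto OpenSSL (a sketch, not the driver's actual code path):
  SSL_CTX_load_verify_locations accepts either a single PEM file or a c_rehash'ed directory:

    #include <sys/stat.h>
    #include <openssl/ssl.h>

    // trustStore may be a PEM bundle file or a directory prepared with c_rehash
    bool loadTrustStore(SSL_CTX* ctx, const char* trustStore) {
      struct stat st;
      if (stat(trustStore, &st) != 0) return false;
      if (S_ISDIR(st.st_mode)) {
        return SSL_CTX_load_verify_locations(ctx, nullptr, trustStore) == 1;
      }
      return SSL_CTX_load_verify_locations(ctx, trustStore, nullptr) == 1;
    }
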
- fix global destructor teardown order to ensure that the statics in
  TSSLSocketFactory are always alive when GlobalConnectionHolder's
  destructor is invoked; this is ensured by encapsulating the global
  static of GlobalConnectionHolder inside a method (the function-local-static idiom,
  sketched below)
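  A sketch of the function-local-static idiom; the holder's contents are omitted:

    class GlobalConnectionHolder { /* map of ClientService -> ControlConnection, etc. */ };

    // a namespace-scope "static GlobalConnectionHolder holder;" may be destroyed after
    // the statics inside TSSLSocketFactory; a function-local static is constructed on
    // first use, so it is destroyed before anything constructed earlier than that call
    GlobalConnectionHolder& globalConnectionHolder() {
      static GlobalConnectionHolder instance;
      return instance;
    }
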
- [gradle] add proper output dirs to extractDependencies so that it only executes
  if a downloadDependencies has re-executed else it will be UP-TO-DATE
- changed insert+pair to emplace in std map/unordered_map
- add "Caused by:" in the SQLException printStackTrace if there is a
  nextException; ensure that process/library address is printed once

- Provide password encryption including for SSL keystores, enabled by
  setting "encrypted-passwords=true". The password field is then assumed
  to be the target name to be looked up in system credential manager to
  obtain the password.
- On Windows this uses the standard CredRead() API with type as
  CRED_TYPE_GENERIC so user must add the password under that heading.
  On Mac OSX the "security" tool is used to lookup the password for the
  given value as provided in the password field.
  On Linux and other systems, the "secret-tool" utility is used to look up the
  password, with the password field split on the first ':' to obtain the
  attribute and its value (illustrated in the sketch below).
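  As an illustration of the Linux path only (a sketch; quoting, error handling and the
  Windows/Mac paths are omitted, and the helper name is made up):

    #include <cstdio>     // popen/pclose (POSIX)
    #include <string>

    // passwordField is "<attribute>:<value>", e.g. "odbc-user:appdb"
    std::string lookupSecret(const std::string& passwordField) {
      const size_t colon = passwordField.find(':');
      if (colon == std::string::npos) return "";
      const std::string cmd = "secret-tool lookup " +
          passwordField.substr(0, colon) + " " + passwordField.substr(colon + 1);
      std::string result;
      if (FILE* pipe = popen(cmd.c_str(), "r")) {
        char buf[256];
        while (fgets(buf, sizeof(buf), pipe) != nullptr) result += buf;
        pclose(pipe);
      }
      while (!result.empty() && result.back() == '\n') result.pop_back();
      return result;
    }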

- updated project files for VS2019 with corrected header/source file list
- more changes for C++11 including fixes for many VS2019 source analysis
  warnings (noexcept additions for move constructors, virtual/final
  additions etc.)
- when no explicit truststore has been specified, then load system installed OpenSSL
  trusted certificate bundles searching in the usual install locations on Windows, Linux and Mac
- check VTIs for authorization before failing: queries on VTIs were failing when authorization is on,
  so check in DataDictionary before failing if storage region is not found

- convert connection failures to 08001/08004 rather than the
  non-standard X0Z01
- added Connection::toString() that returns connection ID with server
- carry through the cause of failure during retries
- enhanced DatabaseMetaData with a couple of ResultSet methods
- added search for system certificates in product installation directory
  if it fails in standard paths (and no explicit truststore is specified)
- added a global initializeSnappyDataService() method to find the
  product directory easily (identical to ClientService::staticInitialize)

- changed the encrypted-passwords property to the more appropriate credential-manager
- updated build scripts to add support for PopOS and Arch/Manjaro/Garuda using
  a new set of binaries
- pass through the cause as a SQLException having SQLState/errorCode etc
  and added copy/move equality operators for passing the thrown exception
  to failoverExhausted

- Newer gdb+gcc releases are terribly slow with "-g3" at least with the C++ driver
  for some unknown reason so switching it to "-g" for both gradle and eclipse.
  The "-g3" supposedly adds more information for macros which might be causing
  trouble with the complicated boost macros. Not important anyway.

- updated gradle to 5.6.4; also modified build.gradle a bit to enable inclusion as a sub-project (e.g. of ODBC layer)
- corrected relative path of SSL certificate installed with the product
- improved error message for credential manager lookup failures

- enabled many additional warning flags with g++ which showed some issues that are fixed
- many other changes to deal with warnings that now show as errors (-Werror)
- added precompiled headers (pch.h) that reduces compilation times by ~30%
- moved implementation header files from cpp/impl and cpp/thrift to headers/impl
- switch back to hostname-only for sockets for now; specifically required for SSL that does
  hostname verification in the certificate

- removed isCurrent checking from HostAddress hash method because this
  is incorrect in general (when the HostAddresses belong to two different
      clusters); instead take care of it in ClientService+ControlConnection
  to go through and pick up the HostAddress marked "isCurrent" if none of
  the existing ones match
- removed IP address fillup code in DNSCacheService using hostName because it is
  incorrect for different kinds of IPv6 addresses (should use inet_pton/getaddrinfo)

- some more reorganization of header for faster compile times
- added implementation of statement cancellation in the C++ driver though the server side
  does not support it yet for Spark-routed queries
- added code to split SQLException messages into records of given size; the records
  are created by splitting on newline, tab and spaces preferred in that order
  searching backward from the end of the designated record size; this is to be used by the
  ODBC layer for SQLGetDiagRec/SQLGetDiagField because individual records have a size limit
  of 512 characters (probably coming from ancient times); a sketch of the splitting follows
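  A sketch of that splitting strategy (the actual SQLException helper differs): scan
  backwards from the record-size limit for a newline, then a tab, then a space:

    #include <initializer_list>
    #include <string>
    #include <vector>

    std::vector<std::string> splitIntoRecords(const std::string& msg, size_t maxLen) {
      std::vector<std::string> records;
      size_t pos = 0;
      while (msg.size() - pos > maxLen) {
        size_t cut = std::string::npos;
        // prefer newline, then tab, then space, searching back from the limit
        for (char sep : {'\n', '\t', ' '}) {
          cut = msg.rfind(sep, pos + maxLen);
          if (cut != std::string::npos && cut > pos) break;
          cut = std::string::npos;
        }
        const bool onSeparator = (cut != std::string::npos);
        if (!onSeparator) cut = pos + maxLen;           // hard split as a last resort
        records.push_back(msg.substr(pos, cut - pos));
        pos = onSeparator ? cut + 1 : cut;              // skip the separator itself
      }
      records.push_back(msg.substr(pos));
      return records;
    }
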
- avoid injecting SQLState=X0Z35 which is server-side stack trace indicator and instead
  copy the SQLState/severity from the previous SQLException in the chain
- compressed the SQLException cause messages a bit to avoid unnecessary repetitions caused
  by repeated copying from "causes" etc.
- when throwing the exception after failover also fails, don't keep the cause inside a new
  top-level exception; instead keep the original cause as the top-level exception while the
  failover failure information is appended to it
- renamed SSL protocol names to be SSLv3, TLSv1.0 etc to match that in java
- refactored property handling in ClientService constructor and improved error messages
- minor improvement in credential-manager lookup failure messages
- removed SSL_MODES and made the property as boolean with default as false
- added an "auto-reconnect" property though not implemented yet

Implementation of auto-reconnect

- try to create the connection again before most operations if it failed previously due to
  server/network failure (isolation-level should be NONE to ensure that it is not in the
  middle of a transaction, except for the beginTransaction operation itself which skips
  that check)
- reconnection is never attempted for some operations like commit/rollback/closeResultSet
- changed ClientService::m_lock to recursive_mutex since both the reconnection check and
  the call to openConnection acquire it (a sketch of the check follows)
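  A hedged sketch of the pre-operation check; member names besides m_lock, and the exact
  flow, are assumptions rather than the real ClientService code:

    #include <mutex>
    #include <stdexcept>

    // called at the start of most operations; commit/rollback/closeResultSet skip it
    void ClientService::reconnectionToServerIfRequired(bool forBeginTransaction) {
      std::lock_guard<std::recursive_mutex> guard(m_lock);
      if (!m_connectionFailed) return;
      // only safe outside an active transaction, except for beginTransaction itself
      if (m_isolationLevel != IsolationLevel::NONE && !forBeginTransaction) {
        throw std::runtime_error("connection lost inside an active transaction");
      }
      openConnection();   // may lock m_lock again; fine since it is recursive
      m_connectionFailed = false;
    }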

- avoid recursive_mutex for individual calls in ClientService and instead send a flag
  to openConnection that will skip the locking if it is already acquired (for the calls
    from reconnectionToServerIfRequired)
- skip SERVER_STACK (X0Z35) in SQLException.fillRecords
- if "includeStackTrace" is false then also skip server-side stack traces
- added log-level=fine; to be used by ODBC layer to print stack traces but avoid
  printing lots of stuff to log-file like log-level=debug would do
- minor fixes to correct warnings (treated as errors) in windows build
- updated windows build files
- retain path starting from "src/" to avoid unnecessarily exposing
  internal filenames (though full paths will still be embedded in the DLLs)
- added .clang-format for expected formatting
- upgraded solutions to VS2019
- removed calls to TSSLSocketFactory::setInExit() which is handled in the thrift library itself now
- added explicit call to cleanupOpenSSL() to mark OpenSSL as uninitialized in the thrift
  TSSLSocket layer which will skip calls to SSL_shutdown (due to cleanup
    of objects that have not been explicitly closed during program teardown)

- renamed "product" build task to clientProduct so that calls to the
  product task from snappy-odbc do not automatically invoke this product
  target too
- cleaned up exception handling; use C++11 features
- add retries to all the available servers in openConnection (and consequently
    for auto-reconnect too)
- completely removed code related to failover in other places
- more updates to C++11 code

- update shadow plugin
- set off-heap system variables before GemFireXD boot else some statics may get set incorrectly
- C++ driver: deal with full name starting with a dot (empty schema)
- fixed a bunch of test failures

- updates to TIBCO branding; changed deprecated gradle options
- fix occasional test failures due to previous test suite artifacts
- fix occasional failure in TransactionDUnit
- remove some flags that cause binary incompatibility even across minor VS version changes

Avoid appending IP address to hostname-for-clients

- causes trouble if client-address is 0.0.0.0 for example
- will also be a problem if exposed hostname is different from the bound IP address
  for clients (e.g. former is public hostname visible to clients while latter is an
      internal IP address only used by servers behind firewalls)

Co-authored-by: Sumedh Wale <[email protected]>
sumwale authored Jun 23, 2021
1 parent 574658d commit 185e02c
Showing 246 changed files with 8,886 additions and 6,467 deletions.
40 changes: 24 additions & 16 deletions build.gradle
@@ -23,7 +23,7 @@ buildscript {
}
dependencies {
classpath 'gradle.plugin.ca.coglinc2:javacc-gradle-plugin:3.0.0'
classpath 'com.github.jengelman.gradle.plugins:shadow:4.0.3'
classpath 'com.github.jengelman.gradle.plugins:shadow:5.2.0'
classpath 'de.undercouch:gradle-download-task:3.4.3'
}
}
@@ -322,6 +322,11 @@ task cleanReports { doLast {


subprojects {

int maxWorkers = project.hasProperty('org.gradle.workers.max') ?
project.property('org.gradle.workers.max') as int :
Runtime.getRuntime().availableProcessors()

// apply compiler options
compileJava.options.encoding = 'UTF-8'

@@ -348,16 +353,18 @@ subprojects {
// Here we want to disable all transitive dependencies on external artifacts. This
// allows us to lock down library versions. However, we want project dependencies to
// be transitive such that the libraries of a dependent project are automatically included.
configurations.all {
dependencies.all { dep ->
if (dep instanceof ModuleDependency && !(dep instanceof ProjectDependency)) {
dep.transitive = false
configurations.all { conf ->
if (!conf.name.startsWith('test')) {
dependencies.all { dep ->
if (dep instanceof ModuleDependency && !(dep instanceof ProjectDependency)) {
dep.transitive = false
}
}
resolutionStrategy {
// fail eagerly on version conflict (includes transitive dependencies)
// e.g. multiple different versions of the same dependency (group and name are equal)
failOnVersionConflict()
}
}
resolutionStrategy {
// fail eagerly on version conflict (includes transitive dependencies)
// e.g. multiple different versions of the same dependency (group and name are equal)
failOnVersionConflict()
}
}

@@ -384,7 +391,7 @@ subprojects {

test {
dependsOn "${subprojectBase}storeProduct"
maxParallelForks = Runtime.getRuntime().availableProcessors()
maxParallelForks = maxWorkers
minHeapSize = '1g'
maxHeapSize = '1g'

@@ -421,7 +428,7 @@ subprojects {
if (rootProject.name == 'snappy-store') {
task dunitTest(type:Test) {
dependsOn "${subprojectBase}storeProduct"
maxParallelForks = Math.max((int)Math.sqrt(Runtime.getRuntime().availableProcessors() + 1) + 1, 2)
maxParallelForks = Math.max(Math.sqrt(maxWorkers + 1) + 1 as int, 2)
minHeapSize = '2g'
maxHeapSize = '2g'

@@ -456,7 +463,7 @@ subprojects {
} else {
tasks.named('dunitTest').configure {
dependsOn "${subprojectBase}storeProduct"
maxParallelForks = Math.max((int)Math.sqrt(Runtime.getRuntime().availableProcessors() + 1) + 1, 2)
maxParallelForks = Math.max(Math.sqrt(maxWorkers + 1) + 1 as int, 2)
minHeapSize = '2g'
maxHeapSize = '2g'

@@ -473,7 +480,7 @@ subprojects {

task wanTest(type:Test) {
dependsOn "${subprojectBase}storeProduct"
maxParallelForks = (int)Math.sqrt(Runtime.getRuntime().availableProcessors() + 1)
maxParallelForks = Math.sqrt(maxWorkers + 1) as int
minHeapSize = '1g'
maxHeapSize = '1g'

@@ -503,7 +510,7 @@ subprojects {

task integrationTest(type:Test) {
dependsOn "${subprojectBase}storeProduct"
maxParallelForks = Runtime.getRuntime().availableProcessors()
maxParallelForks = maxWorkers

includes.clear()

@@ -527,6 +534,7 @@ subprojects {
test.dependsOn subprojectBase + 'cleanTestOutput'
dunitTest.dependsOn subprojectBase + 'cleanDUnitOutput'
wanTest.dependsOn subprojectBase + 'cleanWanOutput'
wanTest.mustRunAfter dunitTest
integrationTest.dependsOn subprojectBase + 'cleanIntegrationOutput'
check.dependsOn.clear()
check.dependsOn test, dunitTest, wanTest
@@ -575,7 +583,7 @@ subprojects {
packaging 'jar'
// optionally artifactId can be defined here
description 'SnappyData store based off Pivotal GemFireXD'
url 'http://www.snappydata.io'
url 'https://github.com/TIBCOSoftware/snappydata'

scm {
connection 'scm:git:https://github.com/TIBCOSoftware/snappy-store.git'
4 changes: 2 additions & 2 deletions filehdr-mod.txt
@@ -1,7 +1,7 @@
/*
* Changes for TIBCO ComputeDB data platform.
* Changes for TIBCO Project SnappyData data platform.
*
* Portions Copyright (c) 2017-2019 TIBCO Software Inc. All rights reserved.
* Portions Copyright (c) 2017-2021 TIBCO Software Inc. All rights reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License"); you
* may not use this file except in compliance with the License. You
2 changes: 1 addition & 1 deletion filehdr.txt
@@ -1,5 +1,5 @@
/*
* Copyright (c) 2017-2019 TIBCO Software Inc. All rights reserved.
* Copyright (c) 2017-2021 TIBCO Software Inc. All rights reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License"); you
* may not use this file except in compliance with the License. You
@@ -78,7 +78,7 @@ private static void print(PrintWriter out, String args[]) {

out.println(SEPERATOR);
out.println();
out.println(" Copyright (c) 2017-2019 TIBCO Software Inc. All rights reserved.");
out.println(" Copyright (c) 2017-2021 TIBCO Software Inc. All rights reserved.");
out.println();
out.println(" Licensed under the Apache License, Version 2.0 (the \"License\"); you");
out.println(" may not use this file except in compliance with the License. You");
@@ -3092,7 +3092,7 @@ public boolean isOffHeap() {
@Override
@Retained
public Object _getValueRetain(RegionEntryContext context, boolean decompress) {
if (decompress) {
if (context != null && decompress) {
return decompress(context, _getValue());
} else {
return _getValue();
@@ -229,6 +229,11 @@ public static boolean checkMembersColocation(PartitionedRegion partitionedRegion
//We don't need a hostname because the equals method doesn't check it.
for(PartitionRegionConfig config: colocatedRegions) {
if(config.isColocationComplete() && !config.containsMember(member)) {
if (partitionedRegion.getLogWriterI18n().fineEnabled()) {
partitionedRegion.getLogWriterI18n().fine("checkMembersColocation for " +
partitionedRegion.getFullPath() + ": colocation not complete for " +
config.getFullPath());
}
return false;
}
}
@@ -270,6 +275,12 @@ private static boolean hasOfflineColocatedChildRegions(PartitionedRegion region)

if(childRegion == null || !childRegion.isAfterBucketMetadataSetup()) {
//If the child region is offline, return true
if (region.getLogWriterI18n().fineEnabled()) {
region.getLogWriterI18n().fine("hasOfflineColocatedChildRegions for " +
region.getFullPath() + ": child " + childName + " is offline " +
(childRegion == null ? "(child region is null)"
: "(child has not finished bucket initialization)"));
}
return true;
} else {
//Otherwise, look for offline children of that region.
@@ -2948,6 +2948,8 @@ public void close(String reason, Throwable systemFailureCause, boolean keepalive
CacheLifecycleListener listener = (CacheLifecycleListener) iter.next();
listener.cacheClosed(this);
}

CacheObserverHolder.setInstance(null);
} // static synchronization on GemFireCache.class
}

@@ -2278,7 +2278,7 @@ public String getURL() throws SQLException {
*/
@Override
public String getUserName() throws SQLException {
return this.service.connArgs.getUserName();
return this.service.getUserName();
}

/**
@@ -74,7 +74,7 @@ public final class ClientService extends ReentrantLock implements LobService {
private HostConnection currentHostConnection;
private HostAddress currentHostAddress;
private String currentDefaultSchema;
final OpenConnectionArgs connArgs;
private final OpenConnectionArgs connArgs;
final Map<String, String> connectionProps;
final List<HostAddress> connHosts;
private boolean loadBalance;
@@ -596,6 +596,10 @@ public final String getCurrentDefaultSchema() {
return this.currentDefaultSchema;
}

public final String getUserName() {
return this.connArgs.getUserName();
}

final void setTXFlag(TransactionAttribute txFlag, boolean val) {
this.txFlags[txFlag.ordinal()] = val ? (byte)1 : -1;
}
@@ -2276,13 +2280,9 @@ public void cancelStatement(final HostConnection source, long stmtId)

// create a new connection to fire cancel since original statement
// connection will be busy and locked; set load-balance to false
OpenConnectionArgs connArgs = new OpenConnectionArgs(this.connArgs);
Map<String, String> props = connArgs.getProperties();
if (props == null) {
props = new HashMap<>(1);
connArgs.setProperties(props);
}
props.put(ClientAttribute.LOAD_BALANCE, "false");
OpenConnectionArgs connArgs = new OpenConnectionArgs(this.connArgs,
this.connectionProps);
connArgs.getProperties().put(ClientAttribute.LOAD_BALANCE, "false");
ClientService service = new ClientService(source.hostAddr, connArgs);
try {
if (stmtId == snappydataConstants.INVALID_ID) {
29 changes: 3 additions & 26 deletions gemfirexd/core/build.gradle
@@ -205,29 +205,6 @@ task odbcMeta {
}
}

/*
task createClassSizeCatalog {
def internalDir = 'com/pivotal/gemfirexd/internal'
def internalOut = "${sourceSets.main.java.outputDir}/${internalDir}"
// need to re-generate class size catalog if any of the classes below change
inputs.files [ "${internalOut}/iapi/types/DataValueDescriptor.class",
"${internalOut}/impl/services/cache/CachedItem.class",
"${internalOut}/impl/store/raw/data/RecordId.class",
"${internalOut}/iapi/store/raw/ContainerKey.class",
"${internalOut}/iapi/services/cache/SizedCacheable.class" ]
outputs.file "src/main/java/${internalDir}/iapi/services/cache/ClassSizeCatalog.java"
def internalCP = 'com.pivotal.gemfirexd.internal'
javaexec {
main = 'org.apache.derbyBuild.ClassSizeCrawler'
classpath = sourceSets.main.runtimeClasspath
// TODO: SW: incomplete code ...
args = [ "-DWS=${workspace}", "-DclassDir=${out.dir}", "-Dout=${derby.engine.src.dir}/${derby.dir}/iapi/services/cache/ClassSizeCatalog.java", "-Dprefix.1=${internalCP}.iapi.types", '-Dprefix.2=com.pivotal.gemfirexd.internal.impl, com.pivotal.gemfirexd.internal.iapi.types.DataValueDescriptor' ]
}
}
*/

compileJava.dependsOn generatePropertiesFiles

// external javadocs
@@ -248,8 +225,8 @@ javadoc.dependsOn.clear()
shadowJar {
dependsOn odbcMeta, doSplit, jar
zip64 = true
baseName = 'snappydata-store'
classifier = ''
archiveBaseName.set('snappydata-store')
archiveClassifier.set('')

mergeServiceFiles()
}
Expand All @@ -258,7 +235,7 @@ archivesBaseName = 'snappydata-store-core'

jar {
dependsOn odbcMeta, doSplit
baseName = 'snappydata-store-core'
archiveBaseName.set('snappydata-store-core')

doLast {
// copy the native JNI library
@@ -7698,8 +7698,7 @@ private String buildPreformattedSqlerrmc(SQLException se) {
private final void addServerInformation(final StringBuilder sb) {
final java.net.Socket sock = this.session.clientSocket;
final java.net.InetAddress addr = sock.getLocalAddress();
final String host = addr.getHostName();
sb.append("(Server=").append(host != null ? host : addr.getHostAddress());
sb.append("(Server=").append(addr);
sb.append('[').append(sock.getLocalPort()).append("],");
sb.append(currentThread()).append(") ");
}
@@ -1385,14 +1385,14 @@ private List<GfxdSystemProcedureMessage> postCreateDDLReplay(final EmbedConnecti
for (GemFireContainer container : uninitializedContainers) {
if (logger.infoEnabled() &&
!Misc.isSnappyHiveMetaTable(container.getSchemaName())) {
logger.info("FabricDatabase: start initializing container: "
logger.info("FabricDatabase.reconnect: start initializing container: "
+ container);
}
container.initializeRegion();
// wait for notification
if (logger.infoEnabled() &&
!Misc.isSnappyHiveMetaTable(container.getSchemaName())) {
logger.info("FabricDatabase: end initializing container: "
logger.info("FabricDatabase.reconnect: end initializing container: "
+ container);
}
}
@@ -1411,7 +1411,6 @@ private List<GfxdSystemProcedureMessage> postCreateDDLReplay(final EmbedConnecti
// check if one is done or goes into WAITING, then submit next
final FabricService service = FabricServiceManager
.currentFabricServiceInstance();
ReentrantLock lock = new ReentrantLock();
Future<Boolean> f = execService.submit(() -> {
boolean initialized = false;
try {
@@ -1481,15 +1480,15 @@ private List<GfxdSystemProcedureMessage> postCreateDDLReplay(final EmbedConnecti
for (GemFireContainer container : failed) {
if (logger.infoEnabled() &&
!Misc.isSnappyHiveMetaTable(container.getSchemaName())) {
logger.info("FabricDatabase: start initializing container: "
logger.info("FabricDatabase.retry: start initializing container: "
+ container);
}

container.initializeRegion();

if (logger.infoEnabled() &&
!Misc.isSnappyHiveMetaTable(container.getSchemaName())) {
logger.info("FabricDatabase: end initializing container: "
logger.info("FabricDatabase.retry: end initializing container: "
+ container);
}
}
@@ -3191,9 +3191,33 @@ public static void authorizeTableOperation(LanguageConnectionContext lcc,
final int numColumns = columns.length;
ArrayList<StatementPermission> permissions = new ArrayList<>(
numColumns + 1);
GemFireContainer rowContainer = (GemFireContainer)Misc.getRegionForTable(
tableName, true).getUserAttribute();
TableDescriptor td = rowContainer.getTableDescriptor();
Region<?, ?> region = Misc.getRegionForTable(tableName, false);
TableDescriptor td;
if (region != null) {
GemFireContainer container = (GemFireContainer)region.getUserAttribute();
td = container.getTableDescriptor();
} else {
// can be a VTI so lookup the data dictionary
TransactionController tc = lcc.getTransactionExecute();
DataDictionary dd = lcc.getDataDictionary();
int mode = dd.startReading(lcc);
try {
SchemaDescriptor sd;
String table;
int dotIndex = tableName.indexOf('.');
if (dotIndex == -1) {
sd = lcc.getDefaultSchema();
table = tableName;
} else {
sd = dd.getSchemaDescriptor(
tableName.substring(0, dotIndex), tc, true);
table = tableName.substring(dotIndex + 1);
}
td = dd.getTableDescriptor(table, sd, tc);
} finally {
dd.doneReading(mode, lcc);
}
}
if (td == null) {
throw StandardException.newException(SQLState.LANG_TABLE_NOT_FOUND,
tableName);
@@ -1100,7 +1100,7 @@ public final String asString() {
public final String getHostName() {
final String hostName = getHostNameForClients();
if (hostName != null) {
return hostName + '/' + this.inetAddress.getHostAddress();
return hostName;
}
return "/" + this.inetAddress.getHostAddress();
}
@@ -305,7 +305,7 @@ public Object getValueByteSource() {
}
else if (OffHeapRegionEntry.class.isAssignableFrom(rclass)) {
retVal = RegionEntryUtils.convertOffHeapEntrytoByteSourceRetain(
(OffHeapRegionEntry)retVal, null, false, true);
(OffHeapRegionEntry)retVal, null, null, false, true);
}
}
}