Merged
Changes from 4 commits
NettyRpcConnection.java
@@ -30,6 +30,7 @@
 import java.util.concurrent.ScheduledExecutorService;
 import java.util.concurrent.ThreadLocalRandom;
 import java.util.concurrent.TimeUnit;
+import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.io.crypto.tls.X509Util;
 import org.apache.hadoop.hbase.ipc.BufferCallBeforeInitHandler.BufferCallEvent;
 import org.apache.hadoop.hbase.ipc.HBaseRpcController.CancellationCallback;
@@ -347,7 +348,7 @@ public void operationComplete(ChannelFuture future) throws Exception {
   private void sendRequest0(Call call, HBaseRpcController hrc) throws IOException {
     assert eventLoop.inEventLoop();
     if (reloginInProgress) {
-      throw new IOException("Can not send request because relogin is in progress.");
+      throw new IOException(HConstants.RELOGIN_IS_IN_PROGRESS);
Contributor:
This is weird, please don't put these kinds of constants in HConstants. There are too many unrelated concerns there already.

A public static string constant in some other file, even this one, is preferred.

Contributor Author (@virajjasani, Sep 12, 2023):
NettyRpcConnection is package-private, hence it can't be accessed from hbase-server.

Contributor Author:
Not sure what the best place to keep this is; anywhere else in hbase-common would also work.

Contributor:
Please do not put it in HConstants; it is IA.Public.

Contributor Author:
I understand, but I am not sure what the best place to keep this is.

Contributor Author:
OK, this is now taken care of.
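For reference, the resolution quoted later in this thread uses RpcConnectionConstants.RELOGIN_IS_IN_PROGRESS. A minimal sketch of what such a holder class could look like; only the class and field names come from this thread, while the package, annotation, and layout are assumptions, not the merged code:

    // Hypothetical sketch of a dedicated constants holder. Only the names
    // RpcConnectionConstants and RELOGIN_IS_IN_PROGRESS appear in this
    // thread; everything else here is assumed.
    package org.apache.hadoop.hbase.ipc;

    import org.apache.yetus.audience.InterfaceAudience;

    @InterfaceAudience.Private
    public final class RpcConnectionConstants {

      private RpcConnectionConstants() {
        // utility class, no instances
      }

      public static final String RELOGIN_IS_IN_PROGRESS =
        "Can not send request because relogin is in progress.";
    }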

     }
     hrc.notifyOnCancel(new RpcCallback<Object>() {

HConstants.java
@@ -91,6 +91,9 @@ public final class HConstants {
   /** Just an array of bytes of the right size. */
   public static final byte[] HFILEBLOCK_DUMMY_HEADER = new byte[HFILEBLOCK_HEADER_SIZE];
 
+  public static final String RELOGIN_IS_IN_PROGRESS =
+    "Can not send request because relogin is in progress.";
+
   // End HFileBlockConstants.
 
   /**
RSProcedureDispatcher.java
@@ -22,8 +22,10 @@
 import java.util.List;
 import java.util.Set;
 import java.util.concurrent.TimeUnit;
+import javax.security.sasl.SaslException;
 import org.apache.hadoop.hbase.CallQueueTooBigException;
 import org.apache.hadoop.hbase.DoNotRetryIOException;
+import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.ServerName;
 import org.apache.hadoop.hbase.client.AsyncRegionServerAdmin;
 import org.apache.hadoop.hbase.client.RegionInfo;
@@ -287,17 +289,7 @@ private boolean scheduleForRetry(IOException e) {
           numberOfAttemptsSoFar);
         return false;
       }
-      // This exception is thrown in the rpc framework, where we can make sure that the call has not
-      // been executed yet, so it is safe to mark it as fail. Especially for open a region, we'd
-      // better choose another region server.
-      // Notice that, it is safe to quit only if this is the first time we send request to region
-      // server. Maybe the region server has accepted our request the first time, and then there is
-      // a network error which prevents we receive the response, and the second time we hit a
-      // CallQueueTooBigException, obviously it is not safe to quit here, otherwise it may lead to a
-      // double assign...
-      if (e instanceof CallQueueTooBigException && numberOfAttemptsSoFar == 0) {
-        LOG.warn("request to {} failed due to {}, try={}, this usually because"
-          + " server is overloaded, give up", serverName, e.toString(), numberOfAttemptsSoFar);
+      if (unableToConnectToServerInFirstAttempt(e)) {
         return false;
       }
       // Always retry for other exception types if the region server is not dead yet.
@@ -330,6 +322,73 @@ private boolean scheduleForRetry(IOException e) {
       return true;
     }
 
+    private boolean unableToConnectToServerInFirstAttempt(IOException e) {
Contributor:
I mean we could extract a method that tests the exception only, and then test numberOfAttemptsSoFar outside the method...
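A minimal sketch of the suggested split, with hypothetical method names that are not from the patch:

    // Hypothetical refactor: this method tests the exception type only...
    private boolean isUnableToConnectException(IOException e) {
      return e instanceof CallQueueTooBigException || isSaslError(e);
    }

    // ...and the caller, scheduleForRetry, tests numberOfAttemptsSoFar itself:
    if (numberOfAttemptsSoFar == 0 && isUnableToConnectException(e)) {
      LOG.warn("request to {} failed in first attempt, give up", serverName, e);
      return false;
    }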

+      // This exception is thrown in the rpc framework, where we can make sure that the call has not
+      // been executed yet, so it is safe to mark it as fail. Especially for open a region, we'd
+      // better choose another region server.
+      // Notice that, it is safe to quit only if this is the first time we send request to region
Contributor:
Better to move this block of comments to the if condition in the caller method? I mean the section starting from 'Notice that, it is safe...'. The numberOfAttemptsSoFar == 0 test is there.

+      // server. Maybe the region server has accepted our request the first time, and then there is
+      // a network error which prevents we receive the response, and the second time we hit a
+      // CallQueueTooBigException, obviously it is not safe to quit here, otherwise it may lead to a
+      // double assign...
+      if (e instanceof CallQueueTooBigException && numberOfAttemptsSoFar == 0) {
+        LOG.warn("request to {} failed due to {}, try={}, this usually because"
+          + " server is overloaded, give up", serverName, e, numberOfAttemptsSoFar);
+        return true;
+      }
+      if (isSaslError(e) && numberOfAttemptsSoFar == 0) {
+        LOG.warn("{} is not reachable; give up after first attempt", serverName, e);
+        return true;
+      }
+      return false;
+    }

+    private boolean isSaslError(IOException e) {
+      if (
+        e instanceof SaslException
+          || (e.getMessage() != null && e.getMessage().contains(HConstants.RELOGIN_IS_IN_PROGRESS))
+      ) {
+        return true;
+      }
+      // check up to 4 levels of cause
Contributor:
Use a for loop here? And why only test 4 levels?

Contributor Author:
It's based on the examples we have seen so far, e.g.:
procedure.RSProcedureDispatcher - request to rs1,61020,1692930044498 failed due to java.io.IOException: Call to address=rs1:61020 failed on local exception: java.io.IOException: org.apache.hbase.thirdparty.io.netty.handler.codec.DecoderException: org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): GSS initiate failed, try=0, retrying...

Contributor:
I think we could just use a loop to get the cause until the cause is null, to check all the exceptions on the chain. And we also need to handle RemoteException specially, unwrapping it instead of just calling getCause?

Contributor Author:
"handle RemoteException specially, to unwrap it instead of just calling getCause"

Yes, that is taken care of:

    private boolean isThrowableOfTypeSasl(Throwable cause) {
      if (cause instanceof IOException) {
        IOException unwrappedException = unwrapException((IOException) cause);
        return unwrappedException instanceof SaslException
          || (unwrappedException.getMessage() != null && unwrappedException.getMessage()
            .contains(RpcConnectionConstants.RELOGIN_IS_IN_PROGRESS));
      }
      return false;
    }

Contributor:
I mean after unwrapping, you still need to go back to the getCause loop, not only test one time...

Contributor Author:
Yes, it is in the loop:

      while (true) {
        cause = cause.getCause();
        if (cause == null) {
          return false;
        }
        if (isThrowableOfTypeSasl(cause)) {
          return true;
        }
      }

isThrowableOfTypeSasl does the unwrap and checks the type of the exception.
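Assembled from the two snippets quoted above, the chain walk in the later revision looks roughly like this (a sketch, not the merged code; unwrapException is assumed to unwrap RemoteException as discussed):

    private boolean isSaslError(IOException e) {
      // Walk the whole cause chain instead of a fixed number of levels;
      // isThrowableOfTypeSasl (quoted above) unwraps RemoteException and
      // checks for SaslException or the relogin-in-progress message.
      Throwable cause = e;
      while (true) {
        if (isThrowableOfTypeSasl(cause)) {
          return true;
        }
        cause = cause.getCause();
        if (cause == null) {
          return false;
        }
      }
    }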

+      Throwable cause = e.getCause();
+      if (cause == null) {
+        return false;
+      }
+      if (isSaslError(cause)) {
+        return true;
+      }
+      cause = cause.getCause();
+      if (cause == null) {
+        return false;
+      }
+      if (isSaslError(cause)) {
+        return true;
+      }
+      cause = cause.getCause();
+      if (cause == null) {
+        return false;
+      }
+      if (isSaslError(cause)) {
+        return true;
+      }
+      cause = cause.getCause();
+      if (cause == null) {
+        return false;
+      }
+      return isSaslError(cause);
+    }

+    private boolean isSaslError(Throwable cause) {
Contributor:
Please do not use the same method name here. IOException is also a Throwable, so although this is valid in Java, it will confuse developers.

+      if (cause instanceof IOException) {
+        IOException unwrappedException = unwrapException((IOException) cause);
+        return unwrappedException instanceof SaslException
+          || (unwrappedException.getMessage() != null
+            && unwrappedException.getMessage().contains(HConstants.RELOGIN_IS_IN_PROGRESS));
+      }
+      return false;
+    }

     private long getMaxWaitTime() {
       if (this.maxWaitTime < 0) {
         // This is the max attempts, not retries, so it should be at least 1.