I’ve been using the Amazon S3 SDK for a while to communicate with a cloud storage provider called DreamHost (DreamObjects).
My application uploads files to my buckets. However, I started having problems because my files exceeded 5 GB, so I had to change the way I send them. I followed the documentation provided on Amazon’s website (documentation here).
However, when using this approach, I get the following error on the console:
Dec 26, 2016 1:28:22 PM com.amazonaws.services.s3.transfer.internal.UploadCallable performAbortMultipartUpload INFO: Unable to abort multipart upload, you may need to manually remove uploaded parts: null (Service: Amazon S3; Status Code: 403; Error Code: SignatureDoesNotMatch; Request ID: tx000000000000f1f16-0058613740-12165497-default) com.amazonaws.services.s3.model.AmazonS3Exception: null (Service: Amazon S3; Status Code: 403; Error Code: SignatureDoesNotMatch; Request ID: tx000000000000f1f16-0058613740-12165497-default), S3 Extended Request ID: 12165497-default-default
However, I do not know what this "Signature" is. My server access credentials are fine; I checked them with third-party programs and even on the site itself, so it shouldn’t be a login/password/authentication problem.
I don’t know how to proceed. Does anyone have any ideas that could help me? Below is the code I’m using.
import com.amazonaws.AmazonClientException;
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.event.ProgressEvent;
import com.amazonaws.event.ProgressListener;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.AmazonS3Exception;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.Upload;
import java.io.File;

try {
    // Credentials and client pointed at the DreamObjects endpoint
    AWSCredentials credentials = new BasicAWSCredentials(UploadBigFile.LOGIN_KEY, UploadBigFile.SECRET_KEY);
    AmazonS3 conn = new AmazonS3Client(credentials);
    conn.setEndpoint("objects-us-west-1.dream.io");

    final File f = new File("C:\\backup\\packages.rar");

    ObjectMetadata metadata = new ObjectMetadata();
    metadata.setContentLength(f.length());

    PutObjectRequest request = new PutObjectRequest(BUCKET_NAME, "FOLDER NAME/cristiano - teste.zip", f);
    request.setMetadata(metadata); // optional: the content length is also inferred from the File

    // Progress listener that prints a status line roughly every 5 MB transferred
    request.setGeneralProgressListener(new ProgressListener() {
        long transferidos = 0L;
        long transferidosParcial = 0L;
        long megaBytes = 5000000L; // ~5 MB between progress messages
        long timeInicial = System.currentTimeMillis();

        @Override
        public void progressChanged(ProgressEvent pe) {
            transferidosParcial += pe.getBytesTransferred();
            transferidos += pe.getBytesTransferred();
            if (transferidosParcial >= megaBytes) {
                long timeFinal = System.currentTimeMillis() - timeInicial;
                int seconds = (int) (timeFinal / 1000) % 60;
                double transf = transferidos / 1024.0 / 1024.0;
                double transParcial = transferidosParcial / 1024.0 / 1024.0;
                double arquivo = f.length() / 1024.0 / 1024.0;
                // calculeTotalTimeToUpload is a helper defined elsewhere that estimates the remaining time
                String tempo = calculeTotalTimeToUpload(seconds, transParcial, arquivo - transf);
                System.out.println("Transferred: " + transf + " MB of " + arquivo + " MB. " + tempo);
                transferidosParcial = 0L;
                timeInicial = System.currentTimeMillis();
            }
        }
    });

    System.out.println("Sending file. Please wait.");
    TransferManager tm = new TransferManager(conn);
    Upload upload = tm.upload(request);
    upload.waitForCompletion();
    System.out.println("File uploaded successfully!");
} catch (AmazonS3Exception ex) {
    System.out.println("Error 1");
    ex.printStackTrace();
} catch (AmazonClientException ex) {
    System.out.println("Error 2");
    ex.printStackTrace();
} catch (InterruptedException ex) {
    System.out.println("Error 3");
    ex.printStackTrace();
}
I almost went deaf!!! The translation of Signature is "assinatura".
– viana
Apparently the error is not because of the file size. Check this link for help: http://stackoverflow.com/questions/14296999/status-code-403-signaturedoesnotmatch-when-i-am-using-amazon-ses
– viana
@Acklay Yeah, I had already looked at that topic, but I couldn’t get anything out of it. And it really isn’t a file-size problem as such, but Amazon’s AWS has different upload types: low level/high level, and different handling for files smaller and larger than 5GB. For files smaller than 5GB I have a routine that works perfectly (see the sketch below). For files larger than 5GB it’s that mess there that doesn’t work hahahah
– Cristiano Bombazar
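For contrast with the multipart path, the routine Cristiano mentions for files under 5 GB is presumably just a single PutObject call. This is a minimal sketch under that assumption, reusing the client setup from the question; the file path and key below are placeholders, not names from the real routine.

// Single-request upload; the S3 API allows this only for objects up to 5 GB.
// The path and key are placeholders, not taken from the actual routine.
AmazonS3 conn = new AmazonS3Client(new BasicAWSCredentials(UploadBigFile.LOGIN_KEY, UploadBigFile.SECRET_KEY));
conn.setEndpoint("objects-us-west-1.dream.io");
File f = new File("C:\\backup\\small-file.zip");
conn.putObject(new PutObjectRequest(BUCKET_NAME, "FOLDER NAME/small-file.zip", f));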
For files of less than 5 GB does this code work, or do you use a different one? Isn’t it possible to split everything into files smaller than 5 GB each and then reassemble them after downloading?
– Victor Stafusa
@Victorstafusa Yes, it is a different code and it works. The problem with splitting the file is that it is a backup file; any lost byte would be a disaster. I have had unexplained problems with file splitting before, where on the client’s computer the files were not split correctly even though on mine they were. So I’m trying to stay away from that option...
– Cristiano Bombazar
What if you send this giant file in one piece to another location and upload it to Amazon from there piece by piece? Also, if any byte is lost, you can use a consistency check (a checksum, for example) to make sure that what was sent is correct, and resend it if not (see the sketch below). And since it is a backup, it is also important that the machine being copied performs as few write-to-disk operations as possible during the operation.
– Victor Stafusa
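The consistency check Victor suggests can be as simple as hashing the file before upload and hashing the downloaded copy, then comparing the two values. A minimal sketch, assuming a plain SHA-256 digest over the whole file (nothing S3-specific such as multipart ETags); sha256Hex is just a hypothetical helper name.

import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;
import java.security.MessageDigest;

// Returns the SHA-256 digest of a file as a hex string; compute it for the
// original file and for the downloaded copy and compare the results.
static String sha256Hex(File file) throws Exception {
    MessageDigest md = MessageDigest.getInstance("SHA-256");
    try (InputStream in = new FileInputStream(file)) {
        byte[] buffer = new byte[8192];
        int read;
        while ((read = in.read(buffer)) != -1) {
            md.update(buffer, 0, read);
        }
    }
    StringBuilder sb = new StringBuilder();
    for (byte b : md.digest()) {
        sb.append(String.format("%02x", b & 0xff));
    }
    return sb.toString();
}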