The general ZXing decoding process:
1: Get the camera's byte[] preview data
2: Parse that data
Key classes in the ZXing client source:
PreviewCallback: the camera preview callback; the raw frame data comes from here.
PlanarYUVLuminanceSource and RGBLuminanceSource: both inherit from LuminanceSource and wrap the raw YUV and RGB data respectively.
AutoFocusCallback: autofocus. If the phone cannot autofocus, ZXing will struggle to decode.
CameraManager: camera management class; opens and closes the camera.
DecodeThread: thread management; its main trick is a CountDownLatch.
DecodeHandler: the data hub. As I understand it, DecodeThread controls the thread and DecodeHandler passes the data along.
DecodeFormatManager: configures the decode formats (one-dimensional and two-dimensional barcodes).
CaptureActivityHandler: the intermediary between decoding and the Activity; both decode success and failure come back through its callbacks.
ViewfinderView: the scan box we see on screen; customize it here if you want visual effects.
The above is my understanding of ZXing.
Below is an analysis and implementation of parsing QR code content from a picture.
Reply:
public class DecodeImageHandler {
    private static final String TAG = DecodeImageHandler.class.getSimpleName();
    // The decoder
    private MultiFormatReader multiFormatReader;
    private static final String ISO88591 = "ISO8859_1";
    // private Context mContext;
    public DecodeImageHandler(Context context) {
        // Decoding hints
        Hashtable<DecodeHintType, Object> hints = new Hashtable<DecodeHintType, Object>(2);
        // The barcode types the decoder should try to recognize.
        Vector<BarcodeFormat> decodeFormats = new Vector<BarcodeFormat>();
        decodeFormats.addAll(DecodeFormatManager.ONE_D_FORMATS);
        decodeFormats.addAll(DecodeFormatManager.QR_CODE_FORMATS);
        decodeFormats.addAll(DecodeFormatManager.DATA_MATRIX_FORMATS);
        hints.put(DecodeHintType.POSSIBLE_FORMATS, decodeFormats);
        hints.put(DecodeHintType.CHARACTER_SET, ISO88591);
        init(context, hints);
    }
    public DecodeImageHandler(Context context, Hashtable<DecodeHintType, Object> hints) {
        init(context, hints);
    }
    private void init(Context context, Hashtable<DecodeHintType, Object> hints) {
        multiFormatReader = new MultiFormatReader();
        multiFormatReader.setHints(hints);
        // mContext = context;
    }
    public Result decode(Bitmap bitmap) {
        // First get the picture's pixel contents as an array
        int width = bitmap.getWidth();
        int height = bitmap.getHeight();
        // -------------------------------------------------
        // RGB mode
        int[] data = new int[width * height];
        bitmap.getPixels(data, 0, width, 0, 0, width, height);
        Result rgbResult = rgbModeDecode(data, width, height);
        if (rgbResult != null) {
            data = null;
            return rgbResult;
        }
        // -------------------------------------------------
        // YUV mode
        byte[] bitmapPixels = new byte[width * height];
        // Convert the int array into a byte array.
        // Note: this naive cast keeps only the low 8 bits of each ARGB pixel
        // (the blue channel), not real luminance, which is one reason the
        // YUV-mode decode often fails (see the discussion below).
        for (int i = 0; i < data.length; i++) {
            bitmapPixels[i] = (byte) data[i];
        }
        // ByteArrayOutputStream baos = new ByteArrayOutputStream();
        // bitmap.compress(Bitmap.CompressFormat.JPEG, 100, baos);
        Result yuvResult = yuvModeDecode(bitmapPixels, width, height);
        bitmapPixels = null;
        return yuvResult;
    }
    // public Result decode(String path) throws IOException {
    //     // Read the picture's height and width
    //     BitmapFactory.Options options = new BitmapFactory.Options();
    //     options.inJustDecodeBounds = true;
    //     BitmapFactory.decodeFile(path, options);
    //
    //     // Get the byte[] directly from the picture file
    //     File file = new File(path);
    //     FileInputStream is = new FileInputStream(file);
    //     ByteArrayOutputStream os = new ByteArrayOutputStream();
    //     int len = -1;
    //     byte[] buf = new byte[512];
    //     while ((len = is.read(buf)) != -1) {
    //         os.write(buf, 0, len);
    //     }
    //     // Close the stream
    //     try {
    //         is.close();
    //     } finally {
    //         if (is != null) {
    //             is.close();
    //         }
    //     }
    //
    //     // Decode
    //     return decode(os.toByteArray(), options.outWidth, options.outHeight);
    // }
    public Result rgbModeDecode(int[] data, int width, int height) {
        Result rawResult = null;
        RGBLuminanceSource source = new RGBLuminanceSource(width, height, data);
        BinaryBitmap bitmap = new BinaryBitmap(new HybridBinarizer(source));
        try {
            rawResult = multiFormatReader.decodeWithState(bitmap);
        } catch (ReaderException re) {
            // continue
        } finally {
            multiFormatReader.reset();
        }
        // Fix up garbled text
        if (rawResult != null) {
            return converResult(rawResult);
        }
        return rawResult;
    }
    public Result yuvModeDecode(byte[] data, int width, int height) {
        Result rawResult = null;
        PlanarYUVLuminanceSource source = new PlanarYUVLuminanceSource(data, width, height, 0, 0, width, height);
        BinaryBitmap bitmap = new BinaryBitmap(new HybridBinarizer(source));
        try {
            rawResult = multiFormatReader.decodeWithState(bitmap);
        } catch (ReaderException re) {
            // continue
        } finally {
            multiFormatReader.reset();
        }
        // Fix up garbled text
        if (rawResult != null) {
            return converResult(rawResult);
        }
        return rawResult;
    }
    /**
     * Decode with ISO8859_1, then convert the text to work around
     * the garbled characters that ISO8859_1 decoding produces.
     */
    private Result converResult(Result rawResult) {
        // Copy the Result and transcode its text
        String str = rawResult.getText();
        String converText = null;
        try {
            converText = BarcodeUtils.converStr(str, ISO88591);
        } catch (UnsupportedEncodingException e) {
            Logger.getInstance(TAG).debug(e.toString());
        }
        // FIXME if conversion failed: 1) the result text may come back blank;
        // 2) fall back to returning the unconverted content
        if (converText != null) {
            return serResultText(rawResult, converText);
        } else {
            return rawResult;
        }
    }
    private Result serResultText(Result rawResult, String converText) {
        Result resultResult = new Result(converText, rawResult.getRawBytes(), rawResult.getResultPoints(),
                rawResult.getBarcodeFormat(), System.currentTimeMillis());
        resultResult.putAllMetadata(rawResult.getResultMetadata());
        return resultResult;
    }
}
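The class above calls `BarcodeUtils.converStr`, a helper whose source is not shown in the post. A minimal sketch of what such a helper might look like, assuming the barcode bytes were really UTF-8 but were decoded as ISO8859_1 (the charset set in the hints above); `BarcodeUtilsSketch` is a hypothetical name, not the author's actual class:

```java
import java.io.UnsupportedEncodingException;

public class BarcodeUtilsSketch {
    // Re-interpret a string that was decoded with the wrong charset:
    // take the raw bytes back out using the charset ZXing used
    // (ISO8859_1, a lossless one-byte-per-char round trip) and
    // decode those bytes again as UTF-8.
    public static String converStr(String text, String sourceCharset)
            throws UnsupportedEncodingException {
        return new String(text.getBytes(sourceCharset), "UTF-8");
    }
}
```

For example, a UTF-8 "é" (bytes 0xC3 0xA9) misread as ISO8859_1 shows up as "Ã©"; running it through this helper recovers "é".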
Reply:
The decoding flow from the camera is:
get the data, build a luminance source from it, and decode in the background.
Our idea:
generate the byte[] data, hand it over, and we should be done.
But note that the camera delivers a YUV data stream; converting a Bitmap directly to byte[] and handing it to the decoder has a high failure rate.
I tried both modes above, rgbModeDecode and yuvModeDecode,
and still had failures.
How can Bitmap data be converted into YUV? I hope someone more experienced can offer guidance.
When parsing pictures on Android, consider two factors: 1. memory, 2. data format.
These are my thoughts; please point out anything I got wrong.
Does anyone know a better way to parse QR codes from pictures?
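On the question above of converting Bitmap data into something the YUV path can use: the cast `(byte) data[i]` keeps only the low 8 bits of each ARGB pixel (the blue channel), not luminance. A minimal sketch of a proper per-pixel luminance conversion using integer BT.601 weights, which approximates the gray values a Y plane would contain; `RgbToLuminance` is a hypothetical class name, not part of ZXing:

```java
public class RgbToLuminance {
    // Convert ARGB pixels (as returned by Bitmap.getPixels) into a
    // luminance byte array usable as the Y plane of a YUV frame.
    // Integer BT.601 weights: 306 + 601 + 117 = 1024, so >> 10 divides
    // by the weight sum without floating point.
    public static byte[] toLuminance(int[] pixels, int width, int height) {
        byte[] luma = new byte[width * height];
        for (int i = 0; i < width * height; i++) {
            int p = pixels[i];
            int r = (p >> 16) & 0xFF;
            int g = (p >> 8) & 0xFF;
            int b = p & 0xFF;
            // (306*r + 601*g + 117*b) >> 10 approximates 0.299r + 0.587g + 0.114b
            luma[i] = (byte) ((306 * r + 601 * g + 117 * b) >> 10);
        }
        return luma;
    }
}
```

The resulting array can then be passed to PlanarYUVLuminanceSource as in yuvModeDecode above; note that ZXing's own RGBLuminanceSource already does an equivalent conversion internally, so in practice the RGB path should be preferred for decoding from a Bitmap.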
Reply:
Convert a Bitmap to grayscale:
public static Bitmap toGrayscale(Bitmap bmpOriginal) {
    int width, height;
    height = bmpOriginal.getHeight();
    width = bmpOriginal.getWidth();
    Bitmap bmpGrayscale = Bitmap.createBitmap(width, height, Bitmap.Config.RGB_565);
    Canvas c = new Canvas(bmpGrayscale);
    Paint paint = new Paint();
    ColorMatrix cm = new ColorMatrix();
    cm.setSaturation(0);
    ColorMatrixColorFilter f = new ColorMatrixColorFilter(cm);
    paint.setColorFilter(f);
    c.drawBitmap(bmpOriginal, 0, 0, paint);
    return bmpGrayscale;
}
Reply:
There is a web version of the builder: http://blog.csdn.net/suntongo/article/details/8742023
Reply:
The byte[] conversion likely fails so often because you did not rotate the data: by default the camera delivers landscape-oriented frames.
I did not know that was the reason; I am a beginner, so please bear with my earlier misunderstanding.
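Following the point about landscape frames: one common fix is to rotate the Y (luminance) plane 90 degrees clockwise before wrapping it in PlanarYUVLuminanceSource for a portrait UI. A minimal sketch under that assumption; `YuvRotate` is a hypothetical name, and the chroma bytes of an NV21 frame would need separate handling:

```java
public class YuvRotate {
    // Rotate the Y (luminance) plane of a YUV frame 90 degrees clockwise.
    // Input is width x height; the output buffer is height x width.
    public static byte[] rotate90(byte[] data, int width, int height) {
        byte[] out = new byte[width * height];
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                // pixel (x, y) moves to row x, column (height - 1 - y)
                // in the rotated image, whose row stride is `height`
                out[x * height + (height - 1 - y)] = data[y * width + x];
            }
        }
        return out;
    }
}
```

After rotating, remember to swap the width and height arguments when constructing the luminance source.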
Reply:
Marking this for now; I will come back and read it later ~