Converting NV21 to ARGB on Android

This article covers the NV21 image format and how it is processed with the Android Camera API, including the in-memory layout of each frame, the YUV-to-RGB conversion formula, and conversion code at the Android JNI layer.


Every frame delivered by Android's Camera.PreviewCallback is in NV21 format, a variant of YUV420. However, the Android documentation does not specify how the Y, U, and V components are actually stored. I searched extensively online without finding a detailed description (sparing you the twists and turns), and finally found an authoritative one here:

http://www.fourcc.org/yuv.php#NV12

Here is a summary:

NV12

YUV 4:2:0 image with a plane of 8 bit Y samples followed by an interleaved U/V plane 
containing 8 bit 2x2 subsampled colour difference samples.

                        Horizontal   Vertical
Y Sample Period              1           1
V (Cr) Sample Period         2           2
U (Cb) Sample Period         2           2

Microsoft defines this format as follows:

 "A format in which all Y samples are found first in memory as an array of 
unsigned char with an even number of lines (possibly with a larger stride for 
memory alignment), followed immediately by an array of unsigned char containing 
interleaved Cb and Cr samples (such that if addressed as a little-endian WORD 
type, Cb would be in the LSBs and Cr would be in the MSBs) with the same total 
stride as the Y samples. This is the preferred 4:2:0 pixel format."

NV21

YUV 4:2:0 image with a plane of 8 bit Y samples followed by an interleaved V/U 
plane containing 8 bit 2x2 subsampled chroma samples. The same as NV12 except 
the interleave order of U and V is reversed.

                        Horizontal   Vertical
Y Sample Period              1           1
V (Cr) Sample Period         2           2
U (Cb) Sample Period         2           2

Microsoft defines this format as follows:

 "The same as NV12, except that Cb and Cr samples are swapped so that the 
chroma array of unsigned char would have Cr followed by Cb for each sample (such 
that if addressed as a little-endian WORD type, Cr would be in the LSBs and Cb 
would be in the MSBs)."

In other words, NV21 is sampled as YUV 4:2:0: for every pixel of an image, the Y component is kept in full, while the U and V components are subsampled at one sample per 2x2 block of pixels. Y is stored as its own plane, while U and V are stored interleaved (packed). Both NV12 and NV21 put the Y plane first and the packed chroma plane after it; the difference is that NV12 interleaves in UV order, while NV21 interleaves in VU order.
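The layout described above can be expressed as simple index arithmetic. Here is a minimal sketch (plain Java; the class and method names are mine, and it assumes even width and height) that locates the Y, V, and U bytes of pixel (x, y) in an NV21 buffer:

```java
public class Nv21Index {
    // Y plane: one byte per pixel, row-major.
    public static int yOffset(int width, int x, int y) {
        return y * width + x;
    }

    // VU plane starts right after the Y plane (width * height bytes in).
    // Each chroma row covers two pixel rows, and each V/U pair covers a
    // 2x2 pixel block; V comes first in NV21.
    public static int vOffset(int width, int height, int x, int y) {
        return width * height + (y / 2) * width + (x / 2) * 2;
    }

    // U is the byte immediately after its paired V.
    public static int uOffset(int width, int height, int x, int y) {
        return vOffset(width, height, x, y) + 1;
    }
}
```

For a 4x4 frame, pixel (2, 2) reads its Y byte at offset 10, its V byte at offset 22, and its U byte at offset 23.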

Once the NV21 storage order is understood, all that remains is the YUV-to-RGB formula.

There are plenty of YUV-to-RGB formulas online; I eventually found a usable one on Wikipedia. Here is the link:

http://en.wikipedia.org/wiki/YUV#Y.27UV420sp_.28NV21.29_to_RGB_conversion_.28Android.29

An excerpt:

void YUVImage::yuv2rgb(uint8_t yValue, uint8_t uValue, uint8_t vValue,
        uint8_t *r, uint8_t *g, uint8_t *b) const {
    *r = yValue + (1.370705 * (vValue-128));
    *g = yValue - (0.698001 * (vValue-128)) - (0.337633 * (uValue-128));
    *b = yValue + (1.732446 * (uValue-128));
    *r = clamp(*r, 0, 255);
    *g = clamp(*g, 0, 255);
    *b = clamp(*b, 0, 255);
}

Slightly simplified, that is:

    r = yValue + (1.370705 * (vValue-128));
    g = yValue - (0.698001 * (vValue-128)) - (0.337633 * (uValue-128));
    b = yValue + (1.732446 * (uValue-128));

    r = r < 0 ? 0 : ( r > 255 ? 255 : r);
    g = g < 0 ? 0 : ( g > 255 ? 255 : g);
    b = b < 0 ? 0 : ( b > 255 ? 255 : b);
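The simplified formula above can be packaged as a small self-contained Java method (a direct port of the snippet, same coefficients; the class name is mine), which is handy for spot-checking values:

```java
public class Yuv2Rgb {
    // Converts one YUV triple (each in 0..255) to an {r, g, b} array,
    // using the same coefficients and clamping as the formula above.
    public static int[] yuv2rgb(int y, int u, int v) {
        double r = y + 1.370705 * (v - 128);
        double g = y - 0.698001 * (v - 128) - 0.337633 * (u - 128);
        double b = y + 1.732446 * (u - 128);
        return new int[] { clamp(r), clamp(g), clamp(b) };
    }

    private static int clamp(double x) {
        return x < 0 ? 0 : (x > 255 ? 255 : (int) x);
    }
}
```

A quick sanity check: neutral chroma (u = v = 128) must map any Y straight to an equal gray, so (128, 128, 128) yields (128, 128, 128).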

At this point the NV21-to-RGB algorithm is clear: for each pixel, fetch its Y, U, and V values, then compute the corresponding RGB values.

Below is my conversion code at the JNI layer on Android:

	// data, width and height are passed in from the Java layer.
	// The isCopy argument of GetByteArrayElements is an output parameter,
	// so NULL is fine when we don't care whether JNI made a copy.
	unsigned char *buffer = (unsigned char *) env->GetByteArrayElements(data, NULL);
	int length = width * height * 4;          // 4 bytes per pixel (RGBA)
	unsigned char *rgbBuf = (unsigned char *) malloc(length);

	for (int iHeight = 0; iHeight < height; iHeight++)
	{
		for (int iWidth = 0; iWidth < width; iWidth++)
		{
			// Y plane: one byte per pixel.
			unsigned char yValue = buffer[width * iHeight + iWidth];
			// VU plane: one V/U pair per 2x2 pixel block, V first (NV21).
			int index = iWidth % 2 == 0 ? iWidth : iWidth - 1;
			unsigned char vValue = buffer[width * height + width * (iHeight / 2) + index];
			unsigned char uValue = buffer[width * height + width * (iHeight / 2) + index + 1];

			double r = yValue + (1.370705 * (vValue - 128));
			double g = yValue - (0.698001 * (vValue - 128)) - (0.337633 * (uValue - 128));
			double b = yValue + (1.732446 * (uValue - 128));

			r = r < 0 ? 0 : (r > 255 ? 255 : r);
			g = g < 0 ? 0 : (g > 255 ? 255 : g);
			b = b < 0 ? 0 : (b > 255 ? 255 : b);

			rgbBuf[width * iHeight * 4 + iWidth * 4 + 0] = (unsigned char) r;
			rgbBuf[width * iHeight * 4 + iWidth * 4 + 1] = (unsigned char) g;
			rgbBuf[width * iHeight * 4 + iWidth * 4 + 2] = (unsigned char) b;
			rgbBuf[width * iHeight * 4 + iWidth * 4 + 3] = 255;   // opaque alpha
		}
	}
	// The input was only read, so JNI_ABORT skips a needless copy-back.
	env->ReleaseByteArrayElements(data, (jbyte *) buffer, JNI_ABORT);

	jbyteArray arr = env->NewByteArray(length);
	env->SetByteArrayRegion(arr, 0, length, (jbyte *) rgbBuf);
	free(rgbBuf);
	return arr;

Here data, width, and height are parameters passed in from the Java layer, and env is the JNIEnv pointer every JNI function receives. The returned arr is a jbyteArray, which arrives in Java as a byte[]. Note that the channel order within each 4-byte pixel here is RGBA; this is the byte order a Bitmap configured as ARGB_8888 expects when filled via copyPixelsFromBuffer, so on the Java side the returned byte[] can be used directly as ARGB_8888 pixel data.
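As a sanity check on that channel order, the helper below (plain Java, no Android APIs; the class and method names are mine) packs one RGBA byte quadruplet, as produced by the JNI code, into the AARRGGBB int layout that ARGB_8888 pixels use:

```java
public class ChannelOrder {
    // Packs one RGBA byte quadruplet (R at the lowest offset, as written
    // by the JNI loop) into a single AARRGGBB int: alpha in the high byte,
    // then red, green, blue.
    public static int rgbaToArgbInt(byte[] rgba, int offset) {
        int r = rgba[offset] & 0xFF;
        int g = rgba[offset + 1] & 0xFF;
        int b = rgba[offset + 2] & 0xFF;
        int a = rgba[offset + 3] & 0xFF;
        return (a << 24) | (r << 16) | (g << 8) | b;
    }
}
```

For example, the byte sequence 0x11, 0x22, 0x33, 0xFF packs to the int 0xFF112233.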

### Converting NV21 data to an ARGB_8888 Bitmap

In Android development, converting image data between formats is a very common task. For the raw frames obtained from the camera (usually NV21 in the YUV family), converting them to a more convenient form such as a `Bitmap` is a frequent need.

#### Overview

One way to do this is to create a temporary `YuvImage` object as an intermediary, use its API to compress the NV21 byte array to a JPEG stream, then read that stream back through a `ByteArrayInputStream` and decode it into the desired `Bitmap`. Note that this path goes through JPEG compression, so it is lossy even at quality 100 and comparatively slow; it suits occasional snapshots better than per-frame processing.

The concrete implementation looks like this:

```java
public static Bitmap nv21ToArgb8888(byte[] data, Camera camera) {
    int width = camera.getParameters().getPreviewSize().width;
    int height = camera.getParameters().getPreviewSize().height;

    // Wrap the NV21 bytes and compress the full frame to JPEG.
    YuvImage image = new YuvImage(data, ImageFormat.NV21, width, height, null);
    ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
    Rect rect = new Rect(0, 0, width, height);
    image.compressToJpeg(rect, 100, byteArrayOutputStream);
    byte[] jpegData = byteArrayOutputStream.toByteArray();

    // Decode the JPEG back into an ARGB_8888 Bitmap.
    ByteArrayInputStream inputStream = new ByteArrayInputStream(jpegData);
    BitmapFactory.Options options = new BitmapFactory.Options();
    options.inPreferredConfig = Bitmap.Config.ARGB_8888;
    return BitmapFactory.decodeStream(inputStream, null, options);
}
```

This function takes two parameters: the NV21 frame data from the camera callback (`data`) and the `Camera` instance in use, which supplies the preview size. The result is a newly created `Bitmap` decoded with the requested configuration.