
OpenGL Basics Tutorial: Creating Vertex Data for a Simple Rectangle

ZIP file | 76KB | updated 2025-04-29
### Topic: Drawing a Simple Rectangle in OpenGL

OpenGL (Open Graphics Library) is a cross-language, cross-platform application programming interface (API) for rendering 2D and 3D vector graphics. It is widely used in video games, simulators, CAD software, and similar fields. In this section we look at how to draw a simple rectangle with OpenGL.

#### Vertex Data for the Rectangle

In computer graphics, vertex data is the basic information a shape is built from; it includes attributes such as vertex positions, colors, and texture coordinates. For a simple rectangle we need to define at least the positions of its four corners. In 2D space, each corner position can be expressed as an (x, y) coordinate.

#### Vertex Buffer Objects (VBOs)

In OpenGL, vertex data is usually stored in a vertex buffer object (VBO): a buffer in GPU memory that can hold vertex positions, normals, texture coordinates, colors, and so on. The advantage of a VBO is transfer efficiency, because it reduces the number of round trips between the CPU and the GPU.

Creating a VBO typically involves these steps:

1. **Generate the buffer object:** create one or more buffer objects with `glGenBuffers`.
2. **Bind the buffer:** bind it to a target such as `GL_ARRAY_BUFFER` with `glBindBuffer`.
3. **Fill the buffer:** upload the vertex data into the buffer with `glBufferData`.
4. **Set vertex attribute pointers:** use functions such as `glVertexPointer`, `glColorPointer`, and `glTexCoordPointer` to tell OpenGL how to interpret the data in the buffer.
5. **Enable the vertex attributes:** enable them with `glEnableClientState`.

#### Steps to Draw a Simple Rectangle

1. **Define the vertex data:** first define the coordinates of the rectangle's four corners, e.g. `(0.5, 0.5), (-0.5, 0.5), (-0.5, -0.5), (0.5, -0.5)` for a standard rectangle.
2. **Set up the OpenGL environment:** initialize OpenGL, set the viewport size, the clear color, and so on.
3. **Create a VBO:** generate a VBO and bind it.
4. **Load the vertex data:** upload the vertex data you defined into the bound VBO.
5. **Configure rendering:** tell OpenGL how to interpret the vertex data by setting the vertex attribute pointers.
6. **Draw the rectangle:** draw with `glDrawArrays` or `glDrawElements`, using `GL_QUADS` as the primitive type, since the rectangle is a quadrilateral made of four vertices.
7. **Clean up:** release the VBO before the program exits.

#### OpenGL Code Example

The following simple example shows how to create a VBO and use it to draw a rectangle:

```c
// Rectangle vertex data
GLfloat vertices[] = {
     0.5f,  0.5f,  // 0
    -0.5f,  0.5f,  // 1
    -0.5f, -0.5f,  // 2
     0.5f, -0.5f   // 3
};

// Set up the OpenGL environment
glClearColor(0.0, 0.0, 0.0, 1.0);  // black background

// Create and bind the VBO
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

// Configure rendering
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, 0);

// Draw the rectangle
glClear(GL_COLOR_BUFFER_BIT);
glDrawArrays(GL_QUADS, 0, 4);

// Release the VBO
glDeleteBuffers(1, &vbo);
```

This code defines the rectangle's vertex data and then calls a series of OpenGL functions to initialize the environment, create the VBO, load the data, configure rendering, and draw the shape. After drawing, it also releases the VBO it allocated.

#### Summary

Drawing a simple rectangle with OpenGL involves defining vertex data, setting up the OpenGL environment, and creating and using a VBO to store and draw that data. As a core OpenGL concept, the VBO greatly improves rendering efficiency, and knowing how to work with VBOs is essential for efficient OpenGL programming. With the steps and sample code above, we can produce a basic shape and build on it toward more complex graphics operations and rendering techniques.

Related recommendations

http://blog.csdn.net/yulinxx/article/details/77896541 在 http://blog.csdn.net/yulinxx/article/details/77894764 基础上添加 参考: 1 OpenGL ES 3.0: 图元重启(Primitive restart) - 皮斯卡略夫 - 博客园 作者:psklf 出处: http://www.cnblogs.com/psklf/p/5750783.html 2 OpenGL Separating Polygons Inside VBO - Stack Overflow 出处: https://stackoverflow.com/questions/26944959/opengl-separating-polygons-inside-vbo You can use primitive restart. The downside of this is that you need an index array, which may not have been necessary otherwise. Apart from that, it’s straightforward. You enable it with: glPrimitiveRestartIndex(0xffff); glEnable(GL_PRIMITIVE_RESTART); 1 2 You can use any index you want as the restart index, but it’s common policy to use the maximum possible index, which is 0xffff if you use indices of type GL_UNSIGNED_SHORT. If you use at least OpenGL 4.3, or OpenGL ES 3.0, you can also replace the above with: glEnable(GL_PRIMITIVE_RESTART_FIXED_INDEX); 1 2 Then you set up an index array where you insert a 0xffff value at every position you want to start a new polygon, and bind the index array as usual. Then you can draw all the polygons with a single glDrawElements() call. 3. Best Practices for Working with Vertex Data 出处: https://developer.apple.com/library/content/documentation/3DDrawing/Conceptual/OpenGLES_ProgrammingGuide/TechniquesforWorkingwithVertexData/TechniquesforWorkingwithVertexData.html

```java
package com.tgdz.gb28181.gl.playvideo_texuture;

import android.content.Context;
import android.graphics.SurfaceTexture;
import android.opengl.GLES30;
import android.opengl.GLUtils;
import android.util.Log;
import android.view.Surface;
import com.blankj.utilcode.util.LogUtils;
import com.km.myapplica.R;
import com.serenegiant.glutils.ShaderConst;
import com.tgdz.gb28181.constant.AreaConstant;
import java.nio.Buffer;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import java.nio.ShortBuffer;

/* loaded from: classes.dex */
public class VideoTextureSurfaceRenderer extends TextureSurfaceRenderer
        implements SurfaceTexture.OnFrameAvailableListener {

    public static final String TAG = "VideoTextureSurfaceRenderer";
    private static final float squareSize = 1.0f;

    private long beginFrameTime;
    private Context context;
    private ShortBuffer drawListBuffer;
    private int frameCount;
    private SurfaceTexture inputTexture;
    int positionHandle;
    private int rotation;
    private int shaderProgram;
    private FloatBuffer textureBuffer;
    int textureCoordinateHandle;
    int textureParamHandle;
    int textureTranformHandle;
    private int[] textures;
    private FloatBuffer vertexBuffer;
    private float[] videoTextureTransform;
    public WaterMark waterMark;

    private static final float[] squareCoords180 = {1.0f, 1.0f, -1.0f, 1.0f, -1.0f, -1.0f, 1.0f, -1.0f};
    private static final float[] squareCoords90 = {-1.0f, 1.0f, -1.0f, -1.0f, 1.0f, -1.0f, 1.0f, 1.0f};
    /*private static final float[] squareCoords = {
        -squareSize,  squareSize, 0.0f,  // top left
        -squareSize, -squareSize, 0.0f,  // bottom left
         squareSize, -squareSize, 0.0f,  // bottom right
         squareSize,  squareSize, 0.0f   // top right
    };*/
    private static final float[] squareCoords = {-1.0f, -1.0f, 1.0f, -1.0f, 1.0f, 1.0f, -1.0f, 1.0f};
    private static final short[] drawOrder = {0, 1, 2, 0, 2, 3};
    private static final float[] textureCoords = {
        0.0f, 1.0f, 0.0f, 1.0f,  // top left
        0.0f, 0.0f, 0.0f, 1.0f,  // bottom left
        1.0f, 0.0f, 0.0f, 1.0f,  // bottom right
        1.0f, 1.0f, 0.0f, 1.0f   // top right
    };

    public VideoTextureSurfaceRenderer(Context context, Surface surface, int width, int height, int framerate) {
        this(context, surface, width, height, framerate, -1);
    }

    public VideoTextureSurfaceRenderer(Context context, Surface surface, int i, int i2, int i3, int rotation) {
        this(context, surface, i, i2, i3, rotation, null);
    }

    public VideoTextureSurfaceRenderer(Context context, Surface surface, int i, int i2, int i3, int i4, Surface surface2) {
        super(surface, i, i2, i3, surface2);
        this.textures = new int[1];
        this.beginFrameTime = 0L;
        this.frameCount = 0;
        this.rotation = i4;
        this.context = context;
        this.videoTextureTransform = new float[16];
    }

    private void setupGraphics() {
        this.shaderProgram = ShaderHelper.createAndLinkProgram(
                ShaderHelper.compileShader(35633, RawResourceReader.readTextFileFromRawResource(this.context, R.raw.vetext_sharder)),
                ShaderHelper.compileShader(35632, RawResourceReader.readTextFileFromRawResource(this.context, R.raw.fragment_sharder)),
                new String[]{"texture", "vPosition", "vTexCoordinate", "textureTransform"});
        this.textureParamHandle = GLES30.glGetUniformLocation(this.shaderProgram, "texture");
        this.textureCoordinateHandle = GLES30.glGetAttribLocation(this.shaderProgram, "vTexCoordinate");
        this.positionHandle = GLES30.glGetAttribLocation(this.shaderProgram, "vPosition");
        this.textureTranformHandle = GLES30.glGetUniformLocation(this.shaderProgram, "textureTransform");
    }

    /* JADX WARN: Can't fix incorrect switch cases order, some code will duplicate */
    /* JADX WARN: Code restructure failed: missing block: B:173:0x00f9, code lost: if (r3.equals("P290C") != false) goto L18; */
    /* Code decompiled incorrectly, please refer to instructions dump.
       To view partially-correct code enable 'Show inconsistent code' option in preferences */
    private void setupVertexBuffer() {
        // Define vertex data (e.g. a simple rectangle)
        /*
        float[] vertexData = {
            // x, y, z, u, v
            -1.0f, -1.0f, 0.0f, 0f, 0f,
            -1.0f,  1.0f, 0.0f, 0f, 1f,
             1.0f,  1.0f, 0.0f, 1f, 1f,
             1.0f, -1.0f, 0.0f, 1f, 0f
        };
        final float[] TEX_VERTEX = {
            0.5f, 0.5f,  // texture coordinate V0
            1f, 1f,      // texture coordinate V1
            0f, 1f,      // texture coordinate V2
            0f, 0.0f,    // texture coordinate V3
            1f, 0.0f     // texture coordinate V4
        };
        final short[] VERTEX_INDEX = {2, 1, 0, 0, 3, 2};

        // Create a vertex buffer object (VBO)
        int[] vbo = new int[1];
        GLES30.glGenBuffers(1, vbo, 0);
        if (vbo[0] == 0) {
            throw new RuntimeException("Failed to create a vertex buffer object");
        }
        // Bind the buffer object
        GLES30.glBindBuffer(GLES30.GL_ARRAY_BUFFER, vbo[0]);
        // Copy the vertex data into the buffer
        vertexBuffer = ByteBuffer
                .allocateDirect(vertexData.length * 4)  // each float is 4 bytes
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer();
        vertexBuffer.put(vertexData).position(0);
        drawListBuffer = ByteBuffer.allocateDirect(VERTEX_INDEX.length * 2)
                .order(ByteOrder.nativeOrder())
                .asShortBuffer()
                .put(VERTEX_INDEX);
        drawListBuffer.position(0);
        GLES30.glBufferData(GLES30.GL_ARRAY_BUFFER, vertexData.length * 4, vertexBuffer, GLES30.GL_STATIC_DRAW);
        // Unbind the buffer to avoid accidental modification
        GLES30.glBindBuffer(GLES30.GL_ARRAY_BUFFER, 0);

        ByteBuffer orderByteBuffer = ByteBuffer.allocateDirect(drawOrder.length * 2);
        orderByteBuffer.order(ByteOrder.nativeOrder());     // modifies this buffer's byte order
        drawOrderBuffer = orderByteBuffer.asShortBuffer();  // view of this buffer as a short buffer
        drawOrderBuffer.put(drawOrder);
        drawOrderBuffer.position(0);                        // index of the next element to read or write, starting at 0
        */

        // Initialize the texture holder
        if (rotation == 1) {
            ByteBuffer bb = ByteBuffer.allocateDirect(squareCoords90.length * 4);
            bb.order(ByteOrder.nativeOrder());
            vertexBuffer = bb.asFloatBuffer();
            vertexBuffer.put(squareCoords90);
            vertexBuffer.position(0);
        } else {
            ByteBuffer bb = ByteBuffer.allocateDirect(squareCoords.length * 4);
            bb.order(ByteOrder.nativeOrder());
            vertexBuffer = bb.asFloatBuffer();
            vertexBuffer.put(squareCoords);
            vertexBuffer.position(0);
        }
        drawListBuffer = ByteBuffer.allocateDirect(drawOrder.length * 2)
                .order(ByteOrder.nativeOrder())
                .asShortBuffer()
                .put(drawOrder);
        drawListBuffer.position(0);
    }

    private void setupTexture() {
        ByteBuffer allocateDirect = ByteBuffer.allocateDirect(textureCoords.length * 4);
        allocateDirect.order(ByteOrder.nativeOrder());
        this.textureBuffer = allocateDirect.asFloatBuffer();
        this.textureBuffer.put(textureCoords);
        this.textureBuffer.position(0);
        GLES30.glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
        GLES30.glActiveTexture(33984);
        if (this.textures == null) {
            this.textures = new int[1];
        }
        GLES30.glGenTextures(1, this.textures, 0);  // generate the texture ID
        checkGlError("Texture generate");
        GLES30.glBindTexture(ShaderConst.GL_TEXTURE_EXTERNAL_OES, this.textures[0]);  // bind the texture
        checkGlError("Texture bind");
        this.inputTexture = new SurfaceTexture(this.textures[0]);
        this.inputTexture.setOnFrameAvailableListener(this, this.backgroundThread.handler);
    }

    @Override // com.tgdz.gb28181.gl.playvideo_texuture.TextureSurfaceRenderer
    protected boolean draw(boolean z) {
        GLES30.glClear(16384);
        drawTexture();
        if (z) {
            this.waterMark.draw();
            return true;
        }
        return true;
    }

    private void drawTexture() {
        GLES30.glUseProgram(this.shaderProgram);
        GLES30.glViewport(0, 0, this.width, this.height);
        GLES30.glEnableVertexAttribArray(this.positionHandle);
        GLES30.glVertexAttribPointer(this.positionHandle, 2, 5126, false, 0, (Buffer) this.vertexBuffer);
        GLES30.glBindTexture(ShaderConst.GL_TEXTURE_EXTERNAL_OES, this.textures[0]);
        GLES30.glActiveTexture(33984);
        GLES30.glUniform1i(this.textureParamHandle, 0);
        GLES30.glEnableVertexAttribArray(this.textureCoordinateHandle);
        GLES30.glVertexAttribPointer(this.textureCoordinateHandle, 4, 5126, false, 0, (Buffer) this.textureBuffer);
        GLES30.glUniformMatrix4fv(this.textureTranformHandle, 1, false, this.videoTextureTransform, 0);
        GLES30.glDrawElements(5, drawOrder.length, 5123, this.drawListBuffer);
        GLES30.glDisableVertexAttribArray(this.positionHandle);
        GLES30.glDisableVertexAttribArray(this.textureCoordinateHandle);
    }

    @Override // com.tgdz.gb28181.gl.playvideo_texuture.TextureSurfaceRenderer
    protected void initGLComponents() {
        setupVertexBuffer();
        setupTexture();
        setupGraphics();
        this.waterMark = new WaterMark(this.context, this.width, this.height);
    }

    @Override // com.tgdz.gb28181.gl.playvideo_texuture.TextureSurfaceRenderer
    protected void deinitGLComponents() {
        GLES30.glDeleteTextures(1, this.textures, 0);
        GLES30.glDeleteProgram(this.shaderProgram);
        this.waterMark.release();
    }

    public void checkGlError(String str) {
        while (true) {
            int glGetError = GLES30.glGetError();
            if (glGetError == 0) {
                return;
            }
            Log.e("SurfaceTest", str + ": glError " + GLUtils.getEGLErrorString(glGetError));
        }
    }

    @Override // com.tgdz.gb28181.gl.playvideo_texuture.TextureSurfaceRenderer
    public SurfaceTexture getInputTexture() {
        return this.inputTexture;
    }

    @Override // android.graphics.SurfaceTexture.OnFrameAvailableListener
    public void onFrameAvailable(SurfaceTexture surfaceTexture) {
        LogUtils.i("onFrameAvailable");
        long currentTimeMillis = System.currentTimeMillis();
        if (this.beginFrameTime == 0) {
            this.beginFrameTime = currentTimeMillis;
        }
        float f = ((this.frameCount * 1000.0f) / this.frameRate) - ((float) (currentTimeMillis - this.beginFrameTime));
        if (f > 0.0f) {
            Log.i(TAG, "onFrameAvailable: delay: " + f + "ms");
            try {
                Thread.sleep((long) f);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
        this.frameCount++;
        try {
            this.inputTexture.updateTexImage();
            this.inputTexture.getTransformMatrix(this.videoTextureTransform);
            super.onFrameAvailable();
        } catch (Exception e2) {
            e2.printStackTrace();
        }
    }

    @Override // com.tgdz.gb28181.gl.playvideo_texuture.TextureSurfaceRenderer
    public void destroy() {
        super.destroy();
    }
}
```

Why is the rotated output not centered, but pushed to the left? My output comes out rotated to the right — can you help me change it so that it rotates 90°, and instead of stretching, it enlarges and crops from the center?


Where is the error in this PyOpenGL code?

```python
# Vertex shader: handles position and texture coordinates
vertex_shader = """
#version 330 core
// This vertex shader uses the GLSL for OpenGL 3.3 core.
layout (location = 0) in vec2 aPos;       // vertex position (screen coordinates)
layout (location = 1) in vec2 aTexCoord;  // texture coordinate, attribute index 1; locates the sampling point on the texture
out vec2 TexCoord;
uniform mat4 transform;  // transform matrix (scale + translation)

void main()
{
    // Extend the 2D position aPos to a homogeneous vec4, multiply by the
    // transform matrix, and assign the result to the built-in gl_Position.
    gl_Position = transform * vec4(aPos, 0.0, 1.0);
    TexCoord = aTexCoord;  // pass the input texture coordinate straight through
}
"""

# Fragment shader: handles color mapping
fragment_shader = """
#version 330 core
in vec2 TexCoord;    // texture coordinate from the vertex shader, in [0, 1]; selects the sampling position
out vec4 FragColor;  // output: the final color of the current fragment, a vec4

uniform sampler2D rasterTexture;    // the raster texture to sample; pixel values are fetched through it
uniform sampler1D colorMapTexture;  // color-ramp texture
uniform float nodata;               // NoData value
uniform vec2 dataRange;             // data range: minimum (dataRange.x) and maximum (dataRange.y), used to normalize values for color mapping

// Color-mapping function (simplified; a more complex colormap can be implemented via texture sampling)
vec3 colorMap(float value)
{
    // Normalize to 0..1
    float norm = (value - dataRange.x) / (dataRange.y - dataRange.x);
    norm = clamp(norm, 0.0, 1.0);  // clamp to 0..1 to guard against out-of-range values
    // Simple blue-green-red gradient (can be replaced with a more complex mapping)
    return vec3(norm, 0.5 - norm * 0.5, 1.0 - norm);
}

void main()
{
    // Sample rasterTexture at TexCoord and take the red channel (assuming the
    // texture stores the data value in a single channel; adjust for multi-channel data).
    float value = texture(rasterTexture, TexCoord).r;

    // If the sampled value equals the NoData value, output fully transparent.
    if (value == nodata) {
        FragColor = vec4(0.0, 0.0, 0.0, 0.0);  // fully transparent
        return;
    }

    // Normalize and map onto the color ramp
    float norm = (value - dataRange.x) / (dataRange.y - dataRange.x);
    norm = clamp(norm, 0.0, 1.0);
    // texture() is the built-in GLSL sampling function; for a 1D texture it takes a single float coordinate
    FragColor = texture(colorMapTexture, norm);
    // Alternatively, call colorMap() on the data value and extend to a vec4 with alpha 1.0:
    //FragColor = vec4(colorMap(value), 1.0);
}
"""
```

Uploaded by: 风浅月明