2D Acceleration for Nanodesktop? Can anyone help me?

pegasus2000
Posts: 160
Joined: Wed Jul 12, 2006 7:09 am

2D Acceleration for Nanodesktop? Can anyone help me?

Post by pegasus2000 »

Until now, Nanodesktop has used NanoTile acceleration to draw its
frames on the screen.

NanoTile uses functions such as sceGuCopyImage to draw the tiles on
the screen. The system works and its speed is good, but some
operations, like PutPixel, are still executed by the CPU, writing
directly into the framebuffer memory (Page 0 - 0x40000000; Page 1 - 0x44000000).

In future versions of nd, I would like to add support for libraries
such as Allegro or SDL.

So, I'd like to add a layer to the Nanodesktop HAL that implements
functions such as PutPixel, GetPixel, DrawLine, etc. using the GU
accelerator. This would increase the speed of Nanodesktop and
would provide the basis for Allegro or SDL support.
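
To make the idea concrete, here is a minimal sketch of what such a layer could look like; the names (ndHAL_2DDriver, ndHAL_GuDriver and so on) are hypothetical, not part of the current Nanodesktop API. The point is only to keep a software backend and a GU backend swappable behind the same calls:

Code: Select all

/* Hypothetical sketch only: not the real Nanodesktop HAL, just an
   illustration of a swappable 2D backend behind common entry points. */

typedef unsigned int ndu32;

typedef struct
{
    void  (*PutPixel) (int x, int y, ndu32 color);
    ndu32 (*GetPixel) (int x, int y);
    void  (*DrawLine) (int x1, int y1, int x2, int y2, ndu32 color);
    void  (*FillRect) (int x, int y, int w, int h, ndu32 color);
} ndHAL_2DDriver;

/* One driver writes to VRAM with the CPU, the other issues GE commands. */
extern ndHAL_2DDriver ndHAL_SoftwareDriver;
extern ndHAL_2DDriver ndHAL_GuDriver;

/* ndAllegro / ndSDL would only ever call through the active driver. */
static ndHAL_2DDriver *ndHAL_Active2D = &ndHAL_SoftwareDriver;

static inline void ndHAL_DrawLine (int x1, int y1, int x2, int y2, ndu32 c)
{
    ndHAL_Active2D->DrawLine (x1, y1, x2, y2, c);
}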

The trouble is that I have little experience with the GU accelerator.

So, I'm wondering whether any of you could write a small set of
routines for nd that could be integrated into the library and
improve the SDK.

Can you help me?
Thank you in advance.
hlide
Posts: 739
Joined: Sun Sep 10, 2006 2:31 am

Post by hlide »

PutPixel, GetPixel: I'm not sure using the GU for those would accelerate anything; it might actually be worse.
I guess DrawLine, DrawRect, etc. can benefit from the GU.
pegasus2000
Posts: 160
Joined: Wed Jul 12, 2006 7:09 am

Post by pegasus2000 »

hlide wrote: PutPixel, GetPixel: I'm not sure using the GU for those would accelerate anything; it might actually be worse.
I guess DrawLine, DrawRect, etc. can benefit from the GU.

I don't know: I don't have much experience with the GU.
I thought the GU could accelerate the calculation of a pixel's
address in memory...

In any case, can you help me? I would like to add to nd a graphical
layer that could later be used by ndAllegro or by ndSDL.
hlide
Posts: 739
Joined: Sun Sep 10, 2006 2:31 am

Post by hlide »

From what I can see:

putPixelRGBA8888(int x, int y, int c):

Code: Select all

LUI  %0, %hi(current_frame_buffer)      // load the framebuffer pointer...
SLL  $a1, $a1, 11                       // y' = y * 512 * 4 (byte offset of the row)
LW   %0, %lo(current_frame_buffer)(%0)  // ...from the global variable
SLL  $a0, $a0, 2                        // x' = x * 4 (byte offset within the row)
ADDU $a0, $a0, $a1                      // offset = x' + y'
ADDU %0, $a0, %0                        // address = current_frame_buffer + offset
JR   $ra                                // return...
SW   $a2, 0(%0)                         // ...while the delay slot does *(address) = c
That's only around 8-10 cycles. If you plan to call one GU primitive per pixel, it would actually be more expensive, because you need far more instructions than that just to tell the GE to draw a single pixel!
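
For reference, the C equivalent of that routine is just as short; this is only a sketch assuming a 512-pixel line stride, 32-bit RGBA8888 pixels and a global current_frame_buffer pointer to the uncached draw buffer:

Code: Select all

/* Software PutPixel, same logic as the assembly above:
   512-pixel stride, 4 bytes per pixel. */
extern unsigned int *current_frame_buffer;   /* uncached pointer to the draw buffer */

static inline void putPixelRGBA8888 (int x, int y, unsigned int c)
{
    current_frame_buffer[(y << 9) + x] = c;  /* offset = y * 512 + x pixels */
}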

I don't have much experience with the GU either, but I can at least tell you that not every operation is worth doing on the GE if you plan to accelerate 2D mode. The right approach is for the user never to call GetPixel and PutPixel at all. More elaborate operations like DrawLine should really benefit from the GE.

How do you implement a DrawLine through the GE? I don't know, but I'm pretty sure you can find it in some of the 2D libraries posted here.
hlide
Posts: 739
Joined: Sun Sep 10, 2006 2:31 am

Post by hlide »

For instance, here is how JGE does it:

Code: Select all

void JGE_Gfx::Gfx_FillRect(int x, int y, int width, int height, PIXEL_TYPE color)
{
	struct VertexColor* vertices = (struct VertexColor*)sceGuGetMemory(2 * sizeof(struct VertexColor));

	vertices[0].color = color;
	vertices[0].x = x; 
	vertices[0].y = y; 
	vertices[0].z = 0.0f;

	vertices[1].color = color;
	vertices[1].x = x + width; 
	vertices[1].y = y + height; 
	vertices[1].z = 0.0f;

	sceGuDisable(GU_TEXTURE_2D);
	sceGuShadeModel(GU_SMOOTH);
	sceGuAmbientColor(0xffffffff);
	sceGuDrawArray(GU_SPRITES, TEXTURE_COLOR_FORMAT|GU_VERTEX_32BITF|GU_TRANSFORM_2D, 2, 0, vertices);
	sceGuEnable(GU_TEXTURE_2D);
}


void JGE_Gfx::Gfx_DrawLine(int x1, int y1, int x2, int y2, PIXEL_TYPE color)
{
	struct VertexColor* vertices = (struct VertexColor*)sceGuGetMemory(2 * sizeof(struct VertexColor));

	vertices[0].color = color;
	vertices[0].x = x1; 
	vertices[0].y = y1; 
	vertices[0].z = 0.0f;

	vertices[1].color = color;
	vertices[1].x = x2; 
	vertices[1].y = y2; 
	vertices[1].z = 0.0f;

	sceGuDisable(GU_TEXTURE_2D);
	sceGuShadeModel(GU_SMOOTH);
	sceGuAmbientColor(0xffffffff);
	sceGuDrawArray(GU_LINES, TEXTURE_COLOR_FORMAT|GU_VERTEX_32BITF|GU_TRANSFORM_2D, 2, 0, vertices);
	sceGuEnable(GU_TEXTURE_2D);
}

// this one is slower than a software version of PutPixel
void JGE_Gfx::Gfx_Plot(int x, int y, PIXEL_TYPE color)
{
	struct VertexColor* vertices = (struct VertexColor*)sceGuGetMemory(1 * sizeof(struct VertexColor));

	vertices[0].color = color;
	vertices[0].x = x; 
	vertices[0].y = y; 
	vertices[0].z = 0.0f;

	sceGuDisable(GU_TEXTURE_2D);
	sceGuShadeModel(GU_SMOOTH);
	sceGuAmbientColor(0xffffffff);
	sceGuDrawArray(GU_POINTS, TEXTURE_COLOR_FORMAT|GU_VERTEX_32BITF|GU_TRANSFORM_2D, 1, 0, vertices);
	sceGuEnable(GU_TEXTURE_2D);
}


void JGE_Gfx::Gfx_PlotArray(float *x, float *y, int count, PIXEL_TYPE color)
{
	struct VertexColor* vertices = (struct VertexColor*)sceGuGetMemory(count * sizeof(struct VertexColor));

	for (int i = 0; i < count; i++)
	{
		vertices[i].color = color;
		vertices[i].x = x[i]; 
		vertices[i].y = y[i]; 
		vertices[i].z = 0.0f;
	}

	sceGuDisable(GU_TEXTURE_2D);
	sceGuShadeModel(GU_SMOOTH);
	sceGuAmbientColor(0xffffffff);
	sceGuDrawArray(GU_POINTS, TEXTURE_COLOR_FORMAT|GU_VERTEX_32BITF|GU_TRANSFORM_2D, count, 0, vertices);
	sceGuEnable(GU_TEXTURE_2D);
}
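
One practical note if you lift these snippets (the frame setup is not shown in the excerpt above): sceGuGetMemory allocates its vertices from the current display list, so calls like these have to sit between sceGuStart and sceGuFinish. A bare-bones per-frame skeleton, with a hypothetical drawFrame2D standing in for the drawing calls, could look like this:

Code: Select all

#include <pspgu.h>
#include <pspdisplay.h>

/* Assumes sceGuInit() and the display/draw buffers were set up at startup. */
static unsigned int __attribute__((aligned(16))) displayList[262144];

extern void drawFrame2D (void);     /* hypothetical: issues Gfx_FillRect / Gfx_DrawLine style calls */

void renderFrame (void)
{
    sceGuStart (GU_DIRECT, displayList);

    drawFrame2D ();                 /* sceGuGetMemory is only valid inside this frame */

    sceGuFinish ();
    sceGuSync (0, 0);               /* wait for the GE to execute the list */
    sceDisplayWaitVblankStart ();
    sceGuSwapBuffers ();
}
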
And yes, I'm not really sure you can have an efficient hardware GetPixel.
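
For completeness, the software fallback that makes a hardware GetPixel unnecessary is just the mirror image of the PutPixel above; again only a sketch assuming a 512-pixel stride, 32-bit pixels and an uncached current_frame_buffer pointer:

Code: Select all

extern unsigned int *current_frame_buffer;   /* uncached pointer to the draw buffer */

/* Software GetPixel: read straight back from VRAM. If the GE might still be
   drawing into this buffer, call sceGuSync(0, 0) before trusting the value. */
static inline unsigned int getPixelRGBA8888 (int x, int y)
{
    return current_frame_buffer[(y << 9) + x];
}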