Suggestion for a different approach to 3D #633

Open
tom-berend opened this issue Mar 20, 2024 · 7 comments

@tom-berend
Contributor

In a different thread, "Allow Three Degrees of Freedom", Alfred Wassermann posted a link to a cube being manipulated in 3D.

Let me offer a path to that 3D vision, very different from the path we are on.

The magic of JSXGraph is combining the surface HTML and underlying SVG. The surface provides mouse dragging, elegant fonts, and LaTeX annotations, while the SVG layers provide points, lines, and curves. JSXGraph is the best kind of witchcraft.

The current 3D-in-SVG implementation is also impressive, but mostly because it exists at all. It is a pale shadow, providing only a handful of the almost 100 elements available in 2D. There is no travel between 2D and 3D; they exist in different, almost unrelated dimensions.

WebGL is the standard for 3D in the browser, providing robust 3D capabilities with AR, VR, and mixed-reality extensions.

We are almost there. Really.


Imagine a simple 2D construction in the current library, three points forming a triangle labeled A B C, and perhaps a slider in the top left corner. Let's call this 'Front View'.
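For concreteness, here is a minimal sketch of that starting construction in today's 2D API (the container id 'jxgbox' and all coordinates are placeholders; the view switching itself does not exist yet):

```js
// Plain 2D JSXGraph today: three draggable points, a polygon, and a slider.
const board = JXG.JSXGraph.initBoard('jxgbox', {
    boundingbox: [-5, 5, 5, -5],   // a 'Front View' would keep exactly this setup
    axis: true
});

const A = board.create('point', [-2, -1], { name: 'A' });
const B = board.create('point', [2, -1], { name: 'B' });
const C = board.create('point', [0, 2], { name: 'C' });
board.create('polygon', [A, B, C]);

// Slider in the top-left corner, unaffected by any later view switching.
const s = board.create('slider', [[-4, 4], [-1, 4], [0, 1, 2]]);
```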

With a click, the user flips to 'Side View'. Now, they are looking down the X-axis and see a vertical line with three points A B C. The slider is in the same place. They drag one of the points to the left (into negative Z), and the triangle becomes apparent.

Click, back to 'Front View'. It has not changed, because we cannot see depth from the front. Click 'Top View', and the triangle reappears.

Then click into 3D, and the triangle reappears in WebGL, with orbital cameras, virtual trackballs, stereo cameras, and the ability to 'fly' through the construction, but perhaps no ability to manually adjust the points.

Perhaps we can add sliders to the 3D scene, use raycasting to select points, etc. But that would be evolutionary and would not break any current capabilities.

There is no major difficulty in rendering a side view in 2D; it only requires the abstract renderer to send different components of the location vectors to the SVG renderer. The BoundingBox would need 4 or 6 parameters, the mouse handlers would have to know which dimension is being dragged, and a few other changes, but nothing that would break backward compatibility.
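A hypothetical sketch of that idea, assuming each point carries a third coordinate internally; the function name and view labels are purely illustrative, not existing JSXGraph API:

```js
// Hypothetical: pick which components of a 3D location vector the SVG
// renderer sees, depending on the active orthographic view.
function projectToView(coords, view) {
    const [x, y, z] = coords;
    switch (view) {
        case 'front': return [x, y];   // looking down the Z-axis
        case 'side':  return [z, y];   // looking down the X-axis
        case 'top':   return [x, z];   // looking down the Y-axis
    }
}

// Dragging in the side view would write back to (z, y) instead of (x, y).
```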

Now we can gently evolve the 2D interface into 3D with planes and surfaces in a natural way.

Building the WebGL part is also straightforward. The ThreeJS library (a popular frontend for WebGL) provides geometries, materials, cameras, and renderers, and unsurprisingly the logic of its geometries looks similar to JSXGraph's. To create an oval, you calculate 40-50 points and create a 'LineGeometry' curve. Adding ThreeJS would make the size of JSXGraphCore jump, but it would be otherwise unnoticeable.
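For what it's worth, a rough sketch of that step using stock three.js classes (BufferGeometry and LineLoop rather than the LineGeometry addon; the radii and color are placeholders):

```js
import * as THREE from 'three';

// Sample ~50 points on an ellipse and draw them as a closed polyline.
const curve = new THREE.EllipseCurve(0, 0, 3, 2, 0, 2 * Math.PI, false, 0);
const points = curve.getPoints(50);                     // array of THREE.Vector2
const geometry = new THREE.BufferGeometry().setFromPoints(points);
const material = new THREE.LineBasicMaterial({ color: 0x0072b2 });
const oval = new THREE.LineLoop(geometry, material);

const scene = new THREE.Scene();
scene.add(oval);
```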

What do you think?


I spent a few days looking at how to build a renderer for WebGL (triggered by the discussion about a third degree of freedom). The plan was to write an SVG emulator; the subset of SVG used by the SVGRenderer is very small, and I thought to approach SVG fidelity by replacing the SVG nodes with a tree of WebGL objects, keeping the dark magic of the top HTML layer.

It works; I have a proof-of-concept, but it requires serious refactoring to go further. The AbstractRenderer discards too much information (e.g., a Point is not really an ellipse) and sends too many messages. And in the end, WebGL 2D is not any better than SVG. But it gave me the idea of the side view and 3D approach.

@alfredwassermann
Member

Dear Tom,
your suggestion to have a WebGL / WebGPU renderer in JSXGraph is pushing at an open door!
As you may imagine, we have watched the development of WebGL carefully over the years. Actually, we did our first experiments with WebGL more than 10 years ago, but abandoned the project due to a lack of resources and a lack of motivation, because for 2D, SVG and canvas were already good enough.
Coincidentally, in the summer term at our university (which will start in 3 weeks) we'll have a seminar in which computer science students have to do a software project. The plan was already to let some of the students experiment and build a prototype of a WebGL renderer.

As you noticed, 3D support in JSXGraph is quite restricted, and it is clear that SVG rendering will soon hit a wall, especially if one wants to go beyond wireframe models. Also, the motivation is low to implement features like the intersection of complex 3D objects with a lot of effort, knowing that this would be relatively easy with WebGL (at least when it comes to displaying those intersected objects).
Nevertheless, SVG rendering has one advantage, namely that 2D and 3D can be combined in one construction very well. It is possible to have multiple 3D views together with a 2D view in one board. Besides having 2D sliders together with a 3D view, I have not seen anyone use this feature yet. But it opens up a lot of possibilities which I do not want to miss. Of course, they can be partially compensated by using multiple boards.

On the positive side - as you also noted - only a few graphics primitives are necessary for rendering JSXGraph 2D in WebGL. Actually, the following elements would more or less suffice (also in SVG, all the graphics "primitives" are based on the path element; a minimal path sketch follows the list):

  • a path element, supporting linear segments and Bezier segments, as well as a fill color.
  • an ellipse element (optionally, could be done with Bezier segments, too)
  • something like an SVG marker element to create arrow heads
  • a possibility to create shadows
  • texts, images and videos (as far as I know these are available)
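
For reference, a minimal sketch of such a path primitive built with plain DOM calls (the 'd' string and colors are arbitrary); a WebGL renderer would need to reproduce roughly this much:

```js
// Minimal SVG path: two line segments plus one cubic Bezier segment, filled.
const SVG_NS = 'http://www.w3.org/2000/svg';
const path = document.createElementNS(SVG_NS, 'path');
path.setAttribute('d', 'M 10 10 L 60 10 L 60 40 C 50 70, 20 70, 10 40 Z');
path.setAttribute('fill', '#ffcc00');
path.setAttribute('stroke', '#0072b2');
document.querySelector('svg').appendChild(path);
```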

But there is one restriction: I want to keep JSXGraph independent of any 3rd-party libraries. That means a WebGL renderer has to be built from the ground up. This is what I want the students to develop. However, if you say that it is easy and only a few lines of code, we can change the topic of the software project and immediately move on to higher-level methods.

Please tell me your thoughts on how we can combine efforts.

@tom-berend
Contributor Author

Dear Alfred,

Sometimes, an idea is 'in the air' and the time is right. You also see the SVG emulator path, which has many advantages, especially 100% backward compatibility. It's exciting that you have some resources to apply to this over the summer.

The good news is that converting SVG to WebGL is easy. The bad news is that it will be a sour dead end.


I'll lay out what I did below. But first, I will repeat my suggestion to move to orthographic top and side views, a draftsman's 'Orthographic 3D'. This will both challenge your summer cohort and improve the magic of JSXGraph.

The move to WebGL is not just aesthetic; it opens doors to many new applications. But the core of JSXGraph is interactive geometry and numerical algorithms; you should focus on moving this unique strength to 3D.

There is a non-trivial amount of work to make Orthographic 3D work. The mouse interactions will be straightforward, but I suspect most elements have not been tested for 3D and some will surely fail.

Orthographic 3D will require some mathematical refactoring too; for example, 'Integral' must know which view is exposed and recalculate on the fly. My math isn't good enough to take the integral of a 3D curve with respect to an arbitrary axis, but I expect the wizards of JSXGraph can make it seem easy. The 2D users won't see any difference, which is how it should be.

Then I would recode 'View3D' to show the 2D-created model. This will probably break all the other 3D elements, and rebuilding them is another worthy task. This breaks backward compatibility, but I suspect there will be few complaints.


You resist using ThreeJS. There may be institutional reasons for this, but ThreeJS has a similar open-source ethos, community, and licensing terms to JSXGraph. I suggest that duplicating their efforts would be a poor use of your resources. Writing WebGL libraries is non-trivial, it is not your core mission, and doing so would leave you years behind the XR world.


Here's what I did.

I renamed a copy of 'svgrenderer.js' as 'webglrenderer.js' and hunted down the places where 'svg' was hardcoded until JSXGraph ran with renderer: 'webgl'.

Actually it became 'webglrenderer.ts'; I added the following to 'scripts':

"webgl": " tsc --target esnext --lib esnext,dom --moduleResolution node src/renderer/webgl -w"

and let TypeScript help me. It created a .js in the same directory, and the step to compile JSXGraphCore didn't notice.

Then I added a class to replace the SVG nodes (createElementNS, insertBefore, etc.) with an internal tree, and modified the code to use this class instead.
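A hypothetical sketch of such a shim; the class and method names are illustrative and not the actual code:

```js
// Hypothetical stand-in for the DOM: a tree of plain objects that mimics
// the handful of SVG node calls the renderer uses.
class GLNode {
    constructor(tagName) {
        this.tagName = tagName;
        this.attributes = {};
        this.children = [];
    }
    setAttributeNS(ns, name, value) { this.attributes[name] = value; }
    appendChild(child) { this.children.push(child); return child; }
    insertBefore(newNode, refNode) {
        const i = this.children.indexOf(refNode);
        this.children.splice(i < 0 ? this.children.length : i, 0, newNode);
        return newNode;
    }
}

// The renderer's createElementNS then returns a GLNode instead of a DOM node,
// and a separate pass walks the tree and emits WebGL draw calls.
const createElementNS = (ns, tag) => new GLNode(tag);
```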

I tracked down code on GitHub to parse SVG (e.g., https://github.com/MaxArt2501/d-path-parser) and built another class to render SVG commands to WebGL.

All pretty standard, and it worked. The left side is SVG, the right side is WebGL. You can drag the points. But it's a dead end.

[Screenshot, 2024-03-20: the same construction rendered in SVG (left) and in the WebGL proof-of-concept (right)]

I'm using a perspective camera, so points diverge from where JSXGraph expects them as you move away from [0,0]. An orthographic camera wouldn't have that problem.
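For what it's worth, a sketch of that switch in three.js terms (viewport and frustum numbers are placeholders):

```js
import * as THREE from 'three';

const width = 800, height = 600;           // placeholder viewport size

// Perspective camera: parallel lines converge, so screen positions drift
// away from the SVG layout as you move from [0,0].
const perspective = new THREE.PerspectiveCamera(45, width / height, 0.1, 1000);

// Orthographic camera: no foreshortening, so positions match the 2D board.
const frustum = 10;
const aspect = width / height;
const ortho = new THREE.OrthographicCamera(
    -frustum * aspect / 2, frustum * aspect / 2,   // left, right
     frustum / 2, -frustum / 2,                    // top, bottom
     0.1, 1000                                     // near, far
);
ortho.position.set(0, 0, 10);
ortho.lookAt(0, 0, 0);
```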

'AbstractRenderer' sends points as ellipses. I check: if they are really small ellipses, I assume they are points and draw them as spheres. A more complete solution would draw them as avocados, with a firm pit and a translucent body, so that I can represent 'strokecolor', 'fillcolor', and 'size' properly. I marked a TODO to add a parameter to 'createPrim()' to hint at the intended usage.

Every point movement sends hundreds of updates from AbstractRenderer, but only a few are useful. The dimensions and materials of the point are not changing.

[Screenshot, 2024-03-20]

I started on Line, which has different and interesting problems. But my experiment already shows the road ahead: the destination is no better than the SVG version, and inferior in many ways. There is no 3D information coming from AbstractRenderer, and too much noise. And I have no way of pushing 3D information from the render side back into the JSXGraph models.


I am anxious to get back to my other projects, including the wrapper I promised you by the end of the month and another project that I think will make you smile.

@gwhitney

As you know, my colleague @Vectornaut and I are interested in strengthening the 3D capability of JSXGraph to the point that it can handle all of the constructions in Books XI-XIII of Euclid, for the sake of a browser plugin that re-enables David Joyce's interactive Elements online. (The interactions currently do not run because Java is disallowed in all major browsers, and even in the few browsers like Pale Moon that do allow Java, you have to go to considerable trouble to install an old enough Java runtime for them to work.) We settled on JSXGraph because it alone among the dynamic geometry systems we could find both (a) is published as a JavaScript package conforming to the rules for major-browser plugins, and (b) has some 3D support.

As a result, we're agnostic to the introduction of three.js as a dependency, so long as it, too, can be bundled in a browser extension and meet the review requirements. (Our initial pass at a plugin used GeoGebra's javascript version, but its inclusion caused all browser reviews to fail -- it is too large and too obfuscated.) We strongly suspect that three.js would be acceptable in a browser plugin, but haven't verified that.

There are a number of 3D elements we would have to add to complete this task: Circle3D is already underway, we will need Polygon3D, and we will likely want to improve the rendering of Sphere3D with some gradients to make instances look more "sphere-like." We likely need a Tetrahedron element as well, since there are many tetrahedra in Euclid's constructions.

However, it's not lost on us that there is a certain amount of logical duplication between the 3D and 2D elements. Personally, I'd agree that in the long run it is simpler to have the underlying model be three-dimensional, and to allow for both 2D projection views (presumably in SVG, and presumably at least in the three axial directions, but hopefully arbitrary perpendicular planar projections) and a 3D view (presumably rendered via WebGL in some way).

I agree that such an approach would lose the ability to have multiple 3D views on a single board together with a 2D construction. I'm not personally convinced this ability is terribly useful; I have not observed it in known uses of JSXGraph, and one can't (to the best of my knowledge) have two 2D views on a single board with independent coordinate systems, yet that lack seems not to have been felt. As @alfredwassermann points out, one can create multiple boards to accomplish most of what multiple views on a single board would.

I am not trying to argue for or against @alfredwassermann's direction in guiding the project and/or @tom-berend's proposals. I'm merely offering (a) my own perspective, and (b) our help: while Vectornaut and I are working on the interactive Euclid plugin project, we are willing to use our time and efforts either to extend the current scheme of separate 3D views with distinct 3D primitives, or to help with a conversion to a scheme in which all primitives are 3D-aware, with some new or modified renderer capable of showing their 3D qualities. Really, just let us know which direction is the primary desired one. But also, to set expectations -- once we are able to ship the desired plugin, our efforts on JSXGraph will likely wind down.

Finally, I would second the thought that there's not too much advantage in rendering 2D constructions in WebGL. But it seems to me that there could be an (alternate) WebGL renderer solely for the existing style of 3D constructions. In other words, once one has a view3d with some 3D elements in it, one could ask that it be rendered in WebGL (perhaps in addition to its SVG rendering, or instead of it). The WebGL canvas would presumably not appear within the SVG canvas, but alongside it in the div in which the usual 2D SVG rendering lives. Just a concept for an intermediate, perhaps interim, path in which the current architecture of entities (and the division into separate 2D and 3D versions thereof) is kept, at least for the time being, but WebGL rendering of 3D views is made possible.

Until there's some decision/direction on these issues, Vectornaut and I will continue working on extending the current architecture and renderer to the point that it can handle Euclid.

@tom-berend
Contributor Author

@gwhitney - What a wonderful project!

I looked through a few pages of the Clark University site, and couldn't resist trying to build a few of the constructions in 2D JSXGraph.

Follow this link for a construction of the tetrahedron from Book XI, Prop. 20, plus a simple cone; the originals are posted below.

You can drag points on the tetrahedron, and it even feels like 3D. It has the clean look of 2D JSXGraph, and of course supports multiple boards per page. I was pleasantly surprised.

I can't see how to meet expectations with a cone or cylinder. Perhaps we can build a new ellipse-like element for the base, connected to apex point B, that transforms from a line into a circle as B is dragged. But the limitations of the cone body become clear under rotation.

The advantage of 2D for my project is the wealth of 'widgets' - the almost 100 elements that complete the last 5% of a construction. My very modest construction used point, segment, midpoint, ellipse, circle, intersection, polygon, and curvedifference.
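For instance, a small sketch of the kind of glue those elements provide (standard element names; coordinates and radii are arbitrary, not my actual construction):

```js
const board = JXG.JSXGraph.initBoard('jxgbox', { boundingbox: [-5, 5, 5, -5] });

const A = board.create('point', [-3, 0], { name: 'A' });
const B = board.create('point', [3, 0], { name: 'B' });
board.create('segment', [A, B]);
const M = board.create('midpoint', [A, B]);            // midpoint of segment AB

const c1 = board.create('circle', [A, 3]);
const c2 = board.create('circle', [B, 3]);
// One of the two intersections of the circles; the index selects the branch.
const P = board.create('intersection', [c1, c2, 0], { name: 'P' });
board.create('polygon', [A, B, P]);
```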

If we move 3D info into the 2D model, we will still want a WebGL rendering option. This advertisement is just vapour, but it points to our future.

Here's a great example of a 3D Sphere using gradients.

We are in agreement - @alfredwassermann will set the direction.

Tom

[Figure: Book XI, Prop. 20]

[Figure: Book XI, Defn. 24]

@gwhitney

Follow this link for a construction of the tetrahedron from Book Xi, Prop 20, plus a simple cone; the originals are posted below.

Not loading for me, sorry.

I can't see how to meet expectations with a cone or cylinder.

As I mentioned, we're couching this as a browser plugin. The plugin reads the original parameters to the Java dynamic geometry applet, translates them into JSXGraph calls, and then executes them. As Joyce has no first-class cone or cylinder elements, we don't need (and couldn't make use of) cone or cylinder elements. I'm not saying that JSXGraph shouldn't ultimately have them, just that they are not directly relevant to our particular project.

Here's a great example of a 3D Sphere using gradients.

Yes, it shows that we will be able to get pretty far with just SVG rendering.

@tom-berend
Contributor Author

tom-berend commented Apr 11, 2024 via email

@Cleonis

Cleonis commented Jun 16, 2024

I noticed this thread; I would like to make some comments.
To avoid misunderstanding: I'm not a developer; I'm commenting as an end user.

Quoting Alfred Wassermann:
"SVG rendering has one advantage, namely that 2D and 3D can be combined in one construction very well. "

I have to say: I don't see any virtue in being able to combine 2D and 3D elements on one board.

I recommend that the JSXGraph team ask every person who is in the business of creating JSXGraph-powered diagrams this yes-or-no question: "Do you anticipate that at some point in the future you will wish to combine 2D and 3D elements on one board?"

I am aware that I'm probably biased on this matter.
In my presentations I use separate boards: one for the visualization, and another for input elements such as sliders, checkboxes, and buttons. The advantage is that it makes the positioning of the visualization and the positioning of the input elements independent.

About 3D rendering
My understanding is: the hard part of 3D rendering is occlusion handling.

With 2D rendering, occlusion handling is already very much non-trivial. (I'm actually curious to know: is 2D occlusion handling performed by JSXGraph, or is it up to the SVG rendering engine?)

My understanding is that the current JSXGraph 3D implementation doesn't have any occlusion handling.
In this fiddle, first a sphere is declared with {fillOpacity: 1.0}, and subsequently a 'parametricsurface3d' with the same radius is declared:
https://jsfiddle.net/Cleonis/8f0h2e34/
So the entire parametric surface, both what is in front of the sphere3D and what is behind the sphere3D, is rendered on top of the filled circle that represents the sphere3D.
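For reference, a minimal sketch of the situation in the fiddle (element names as in the current JSXGraph 3D API; the radius and parameter ranges are placeholders):

```js
const board = JXG.JSXGraph.initBoard('jxgbox', { boundingbox: [-8, 8, 8, -8] });
const view = board.create('view3d',
    [[-6, -3], [8, 8], [[-3, 3], [-3, 3], [-3, 3]]]);

// Opaque sphere...
const sphere = view.create('sphere3d', [[0, 0, 0], 2], { fillOpacity: 1.0 });

// ...and a parametric surface with the same radius, declared afterwards.
// Without depth sorting it is drawn entirely on top of the sphere.
view.create('parametricsurface3d', [
    (u, v) => 2 * Math.sin(v) * Math.cos(u),
    (u, v) => 2 * Math.sin(v) * Math.sin(u),
    (u, v) => 2 * Math.cos(v),
    [0, 2 * Math.PI], [0, Math.PI]
]);
```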

Of course, that is why the JSXGraph team has been watching the development of WebGL.

For the things that I use JSXGraph for, a very small set of 3D elements would already be sufficient. My only real need is for a 3D sphere. (Of course, I'm being totally selfish here.)

(The Three.js project offers an environment for complete scene rendering, with light sources, textures, and what have you. For my purposes, using Three.js would be ginormous overkill, but it seems that in order to have 3D rendering with occlusion handling I will have to start deploying Three.js.)

My suggestion is as follows:
Regard support for 3D rendering (WebGL-powered) as a comparatively small additional feature, keeping the focus on 2D rendering as the core functionality. That is, if the 3D support will always have far fewer graphics elements and far fewer features, that should not be counted against it.
