It is similar, except that with a lot of fractals, the number being colored represents how many iterations are required to get outside of a set threshold (which indicates divergence).
Right, I just meant more that you plot more than just the equality to get some visualizations. It's also a common way to visualize game theory stuff, I thought. You want to know where the expected equilibrium is, but you also want to see, essentially, what the strength of getting there is.
It's also what scientists have done to visualize solutions of PDEs since the 1960s. Author should download Paraview and give it a twirl, to get this perspective.
First create a mesh (Sources -> Plane for 2D, or Sources -> Box if you want to do it in 3D). Set reasonably high values for Resolution on this source. Then use a filter to apply your function, either Filters -> Alphabetical -> Calculator for easy stuff, or Filters -> Alphabetical -> Python Calculator if you want complicated stuff. The "coordsX" etc. are your spatial coordinates on the mesh. Pick whatever color map you want (diverging types are good for this), change the limits on coloring, use a log scale, whatever.
If you do this in 3D on a box, you can then use a slice to scrub through the result on an arbitrarily oriented plane. You could visualize translucent isosurfaces of constant "error" and raytrace them. Or you could take the gradient of your "error" and plot as a vector field. With a bit of leg work you can add a fourth coordinate (time) and make animations. And you can combine all of these. Sky is the limit.
Dude, it's fine to be learning stuff and even writing about it. But if you're still discovering basic stuff like level sets, then maybe hold off on declaring that you've discovered, after centuries of mathematical development, a completely new form of graphing?
Reading it twice and sitting on it, I have an uneasy feeling.
It feels like it distracts more than it illuminates, e.g. the Quasar Equation. I don't know what it capital-M Means that at (x, y) = (0, 0) there's a region with higher differences between y and x/(x^2+y^2).
But counterpoint to myself:
I'm looking at a toy example.
I'm sure there's been plenty of times I was genuinely comparing two equations and needed to understand where there'd differ.
It's just harder for me to grok when one of the equations is "y".
OK, I was expecting some sort of marketing BS at the start, but ... it's genuinely providing a lot more information than the "binary", black-and-white conventional chart does.
Neat, but I think it's deceptive for the website to claim this is a "new type of graphing" [1]. The fuzzy graph of F(x, y) = 0 is simply a 3D plot of z = |F(x, y)|, where z is displayed using color. In other words, F(x, y) is a constraint and z shows us how strongly the constraint is violated. Then the graph given by F(x, y) = 0 is a slice of the 3D graph. If you're claiming that you've discovered visualizing 3D graphs using color, you're about 50 years too late.
[1] https://gods.art/fuzzy_graphs.html
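Concretely, the construction the parent describes fits in a few lines of NumPy. This is a sketch (the function and parameter names are mine, not the site's): evaluate z = |F(x, y)| on a pixel grid, then map the result through any color ramp (e.g. matplotlib's imshow) to get the "fuzzy graph".

```python
import numpy as np

def fuzzy_graph(F, xlim=(-2, 2), ylim=(-2, 2), n=400):
    """Evaluate z = |F(x, y)| on an n x n pixel grid.

    The returned array can be displayed as an image; small values
    (near-solutions of F(x, y) = 0) map to one end of the color ramp,
    large constraint violations to the other.
    """
    x = np.linspace(*xlim, n)
    y = np.linspace(*ylim, n)
    X, Y = np.meshgrid(x, y)
    return np.abs(F(X, Y))

# The unit circle, written as the constraint F(x, y) = x^2 + y^2 - 1 = 0.
Z = fuzzy_graph(lambda x, y: x**2 + y**2 - 1)
```

Pixels where Z is near zero trace out the conventional graph; everything else shows how badly the constraint is violated.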
I thought the same, so I programmed the examples into Desmos 3D (click the show/hide buttons on the left).
https://www.desmos.com/3d/3divdux6jh
Dropping the absolute value makes a better visualization. The 3D graph for Example 4 Shadow Line has an established name, a hyperbolic paraboloid. The color graph for Example 5 Phi Equation doesn't capture the odd symmetry F(x,y)=-F(-x,y). The color graph for Example 6 Underwater Islands looks far inferior to the 3D surface.
Interesting. I didn't know you could plot like this in Desmos. Thanks for doing the work to plug them in.
> Neat, but I think it's deceptive
I think your comment is insightful, but it's also a terrible choice of words (and something we probably do too often here). I very much doubt that deception was the intent.
Sometimes, someone just reinvents the wheel (or improves on it). And if it serves to teach several thousand people about a new visualization technique, I think that's a net positive.
You are right, but the author's first claim leapt out at me (I practice geometry) and made me wince... The author seems to be just plotting the values of the implicit function y - f(x, y)...
So, good for him but some historical perspective is needed when making such sweeping claims.
“For all the history of computational mathematical visualization, graphing equations has been done in binary mode...”
Intentional or not, the linked article opens with a comically untrue statement that, because it is verifiably false, doesn't even qualify as puffery. When I encounter this sort of grandstanding in framing (generally from junior engineers or fresh-from-school product managers), I spell out just how much such misstatements harm the point being made.
It's a turn-off for readers, and it's unnecessary.
"You may be used to seeing graphs like..." or "In grade school, we learned to graph like..."
would probably be more useful than dismissal of the history of visualization of implicit functions. Hopefully, next time, the author will be a bit less grandiose.
People can play with graphing these in 3D and 2D here:
https://c3d.libretexts.org/CalcPlot3D/index.html
https://www.desmos.com/3d
There are some truths though that are worth showing.
1. plotting in this way shows areas that are nearly solutions which is really cool.
2. Not mentioned, but this shows gradient around the solution as well, which helps understand attractor/repulsor a bit intuitively
I mean I have generated these plots before to visualize things like error or sensitivity, but this is clean and very cool. So, credit where credit is due for spreading the idea.
I'm wondering if there are topological tools to find the hyperplane of self-intersection from that surface, which is actually the solution of the equation? Or, given a fuzzy graph z=|F(x,y)|, can we use differential geometry to find 0=F(x,y)? Do any of these questions make sense?
For a general function F, finding the points (x, y) with F(x, y) = 0 has no closed-form solution. The entire field of mathematical optimization is largely dedicated to finding solutions to F(x, y) = 0, in one form or another.
When F has a special structure (say, low-order polynomial), we can actually find the exact solutions. More general structure (e.g. convexity, differentiability) doesn't give us the exact solution, but it lets us use clever numerical algorithms to find them. There are techniques we can use when F has little to no structure, known as "black box" methods, and they work particularly well when we have few variables. In the case of "fuzzy graphs", there are only two variables, so this software takes the approach of computing F(x, y) for every pixel on the screen. In general this doesn't work due to the curse of dimensionality, but it creates good visualizations in low dimensions :)
To answer your question directly, yes we can use differential geometry to speed up optimization. As an example, you've probably heard of gradient descent. Preconditioned gradient descent leverages the geometry of the surface to speed up convergence. In the language of differential geometry, if we're optimizing f(x), then x is "contravariant" but grad(f) is "covariant", so technically we can't just add grad(f) to x since they have different types. We first have to multiply grad(f) by a rank-2 tensor (the "preconditioner") that encodes the local curvature of f around x. This technique is used by the Adam optimizer, with the assumption that the preconditioner is diagonal.
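To make the preconditioning idea concrete, here's a minimal sketch on a badly conditioned quadratic (the test function, names, and step sizes are mine, not Adam's actual implementation). The gradient is multiplied by a rank-2 tensor before it is subtracted from x:

```python
import numpy as np

def precond_gd(grad, x0, precond, lr, steps=200):
    """Gradient descent where the gradient (a covector) is mapped
    through a preconditioner matrix before being subtracted from x."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x - lr * (precond @ grad(x))
    return x

# f(x) = 0.5 * x^T A x with A = diag(100, 1): a badly conditioned bowl.
A = np.diag([100.0, 1.0])
grad = lambda x: A @ x
x0 = [1.0, 1.0]

# Plain GD: the step size is capped by the stiff direction, so the
# shallow direction crawls toward the minimum.
plain = precond_gd(grad, x0, np.eye(2), lr=0.005)

# Diagonal preconditioner = inverse curvature per coordinate (the kind
# of diagonal approximation Adam makes); both directions converge fast.
fast = precond_gd(grad, x0, np.diag(1.0 / np.diag(A)), lr=0.5)
```

After the same number of steps, the preconditioned run is essentially at the minimum while the plain run is still far from it in the shallow direction.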
From school you are used to thinking of functions in their explicit form y = f(x), but you can easily turn that into the implicit form f(x) - y = 0 or, more generally, f(x, y) = 0. With that you can plot the graph of f(x, y) either as a 3D surface with f(x, y) being the height at point (x, y), or encode the function value at (x, y) as a color at (x, y). Where that surface is equal to zero, i.e. where it intersects the z = 0 plane, those are the points of y = f(x). Points (x, y) at which f(x, y) has small non-zero magnitude are what the article calls low-error points or regions: points or regions that almost satisfy y = f(x).
> f(x, y) = 0. With that you can plot the graph of f(x, y) either as a 3D surface with f(x, y) being the height at point (x, y)
If f(x, y) = 0, wouldn’t using f(x, y) for the height just result in a flat graph?
f(x, y) = 0 is true only for some combinations of x and y. It’s an equation to be solved, not a universal statement like ∀ x, y : f(x, y) = 0, nor a definition like f(x, y) ≔ 0 (or “≝”). The solutions to the equation are the points (x, y) where the graph has height 0. Which points these are depends on how f is defined.
For example, f might be defined as f(x, y) ≔ x² + y² – 1. Then the points (x, y) for which f(x, y) = 0 are those on the unit circle (those for which x² + y² = 1). The graph will have height 0 only for those points.
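That circle example can be checked numerically in a couple of lines (a trivial sketch; the sample points are mine):

```python
# f(x, y) := x^2 + y^2 - 1; its zero set is the unit circle.
f = lambda x, y: x**2 + y**2 - 1

on_circle = f(0.6, 0.8)    # 0.36 + 0.64 - 1 = 0: height 0, on the graph
off_circle = f(1.0, 1.0)   # 1 + 1 - 1 = 1: nonzero height, off the graph
```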
When we say "f(x, y) = 0" in this context, we usually also have a separate definition of f(x, y), one that is not necessarily 0 for all x, y. So the constraint "f(x, y) = 0" means "find the pairs (x, y) that make f(x, y) equal 0".
If "f(x, y) = 0" is actually the definition of f(x, y), then yes, it would be a pretty boring graph.
They're really two different types of equal signs.
f(x,y) = x+y might be better written as f(x,y) := x+y where := means "is defined as". Then f(x,y) = 0 is an equation that expands to x+y = 0, or in familiar intro algebra form, y=-x.
g(x,y) := 0 really is a flat plane.
Shameless plug: eight years ago, I created the following website for posting plots of complex functions using similar gradients: https://kettenreihen.wordpress.com/
Those are really cool to look at. I kept trying to click them to learn more, I wish some of them were mini blog posts to give a little bit of grounding.
It's the heat map of the error surface of the equation... Fairly well understood as a concept in the land of optimization and gradient descent.
Interesting, what's being visualized there is actually a failure mode for an unidentifiable equation - the valley where the error is zero and therefore all solutions are acceptable. Introduce noise into the measurements of error, and the flatness of that valley causes odd behaviour.
This is cool to look at, but isn't this just obtained by taking the absolute value of the first equation minus the second? These are very pretty visualizations—but trying to present them as some kind of "sea change" in perspective feels unhelpful.
Ouch, this hurts to read. It's not novel and lacks a very basic understanding of math.
The graph of y/(x^2+y^2)=(x+1)/(x^2+y^2) by definition contains the points that satisfy this equation. This is exactly the set of points for which y = x + 1.
The "fuzzy" graph is just coloring the difference between the left hand side and right hand side. This is very basic, not new, and it's definitely not "the graph of y/(x^2+y^2)=(x+1)/(x^2+y^2)".
Why would you say it's not a graph of y/(x^2+y^2)=(x+1)/(x^2+y^2)? I would argue that a conventional/binary graph is also not a "pure" representation of the equation, but rather one possible representation - one that runs it through a "left_side == right_side?" boolean filter. In fact, there is no way to visualize an equation without doing something to it.
I was surprised to learn there is a Slashdot equation. :)
People who like these types of charts will probably also like domain coloring plots of complex functions:
https://web.archive.org/web/20120208174423/https://maa.org/p...
https://observablehq.com/@rreusser/complex-function-plotter
This is brilliant and oddly obvious in hindsight. Measured values almost always have noise, and equations rarely solve to true zero. Setting a small delta is common practice, but these graphs show that some equations may have odd behaviour when you do that.
Taking it a step further, how would simple algorithms behave when viewed in this way? Rather than just the outcome, we could observe a possibility space...
Michael Levin has talked about interesting dynamics with the bubble sort algorithm, which is only a few lines of code, that have parallels in biological processes, suggesting there is a more nuanced logic to nature that we are not seeing
This sounds a lot like the programs encoded by neural networks.
Isn’t that just done in a higher level language, tweaking the algorithm to allow duplicates, and then being surprised there is clustering?
I mean, I don’t see why that is special? Correct me if I’m wrong. I like his research and views on biological electric spaces, but this I did not understand.
While this perspective has merit, it is hampered by the fact that all of the examples used are polar equations, and the illustrations are therefore unnecessarily dramatic. Given that a Cartesian representation of a polar relationship is always a planar projection of the underlying conic, extrema near valid points are to be expected.
It would be more useful to visually demonstrate linear relationships but of course the errors there would not make for such a punchy blog post.
Very cool! This is also known as signed distance function in computer graphics, or implicit form equations in maths.
With a fuzzy graph, what the essay shows, we plug each point into a recipe, the equation, and see how badly it fails. Big failure → dark region (like a bitter taste). Small failure → light region (almost right). It’s just showing the raw mistake you get after plugging in x and y.
With a signed distance function, instead of looking at the recipe’s mistake, we measure how far the point is from the perfect curve—like pulling out a ruler and measuring the nearest distance to the “correct” line.
It always has units of length and behaves nicely (positive outside, negative inside, zero on the curve).
So the fuzzy graph is about “how wrong is the equation here?”
A signed distance function is “how far away from the exact solution am I?”
They’re related ideas (both start from equations written as something = 0), but they’re not the same thing.
Reasonably, I ask myself: "but isn't dark region far distance and light region close distance?"
Sitting with that:
In the fuzzy graph, we’re coloring by equation error—how badly the equation is satisfied at each point.
In a signed distance field, we color by actual geometric distance to the curve.
Those two numbers aren’t the same unless the equation is written in a very special way. If you multiply an equation by some huge factor (say, multiply everything by 1,000, or divide by something small), the shape of the solution curve doesn’t change—distance hasn’t changed—but the equation’s error suddenly becomes 1,000 times larger (or smaller). That would completely change the shading in the fuzzy graph while leaving the real distances untouched.
Signed distance is measured with a ruler (pure geometry). The fuzzy graph is measuring algebraic error (how close the formula comes to zero). Both can get lighter near the curve and darker away from it, but they’re doing it for different reasons, so they’re not interchangeable.
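That scale-dependence is easy to demonstrate with the unit circle (a small sketch; the function names and sample point are mine):

```python
import numpy as np

def alg_error(x, y, scale=1.0):
    """Algebraic error of the circle equation, optionally rescaled.
    Rescaling the equation doesn't move its solution set at all."""
    return abs(scale * (x**2 + y**2 - 1))

def signed_dist(x, y):
    """True signed distance to the unit circle (negative inside)."""
    return np.hypot(x, y) - 1.0

p = (2.0, 0.0)
d = signed_dist(*p)            # geometric distance: exactly 1
e1 = alg_error(*p)             # algebraic error: 3
e1000 = alg_error(*p, 1000.0)  # same curve, same distance, error 3000
```

Multiplying the equation by 1000 leaves the distance untouched but inflates the fuzzy graph's shading a thousandfold, which is exactly why the two aren't interchangeable.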
It took me a second to figure out what these are showing, because I usually fit plots to data, and there the "low error" areas are the areas where, if there were a datapoint, it would sit where the confidence interval is wide, i.e. low confidence and more likely to be high error in the model.
The dark areas in the plot seem to be the features driving the shape of the plots. That means that these would be the areas the plotter should be most sure about, otherwise the plot would have a different shape. The bright “low error” areas are the areas where the model seems least likely to be correct.
I might be missing an interpretation that makes much more sense, but I think “error” might be the wrong terminology to use here. It doesn’t just mean “difference between A and B”, it includes some idea of being a measure of wrongness.
I've been calling it "error" (the difference between the left and right side of the equation). But if there is a better term to use, I'd like to know it.
My first thought was "how can i do this in 3d and walk around it in VR?"
I can do the VR part - any chance you can share the algo, so I can get the machine to lift it? I can imagine a 3d graphing tool would need spatialisation in order to be properly appreciated.
It's just a matter of subtracting the two functions, taking the absolute value, and putting that number through a color ramp. If you want to see the result in 3D, you can subtract the functions and throw that into a 3D graph plotter. Building a 3D surface plotter yourself would be the hard part, but they already exist, e.g. plug "abs(y/(x^2+y^2) - (x+1)/(x^2+y^2))" in here:
https://c3d.libretexts.org/CalcPlot3D/index.html
This viewer also has a "2d" mode that produces a colored 2D plot.
trouble is, i'm more engineer than mathematician, so while i appreciate that this is an entirely solvable problem, assembling it from scratch would likely mean many errors, and less fun
the 3d plot is nice but not what i would call "spatialised", since it's still a flat render, and I'm exactly thinking about the meshing of the thing. i am familiar with delaunay and marching cube strategies, at least enough to get a machine to hook them up to a spatial plotter
Ummmm... You're just plotting a function of 2 variables (R^2 → R) as a heat map.
"Note that the Shadow Circle is invisible in the conventional graph. In fact, the conventional graph looks identical to a conventional graph of the x=0 equation (as if the denominator was not there)."
Ummm... Yeah, because the equation x / (x^2 + y^2 - 1) = 0 simplifies to x = 0. Your "fuzzy graph" is actually just a plot of the function z(x, y) = |x / (x^2 + y^2 - 1)|, where z is encoded as a color.
> In this case, there is absolutely nothing to show on a conventional graph, as there are actual solutions to this equations.
I feel like this must be missing a "no", but also I'm bad at math, so maybe not.
Is it possible to run this on a chaotic function? I would be interested to see what patterns emerge. I haven't found any code or model to generate these plots.
I wish the grapher had a radial mode. Would probably produce really cool symmetries.
I don't pretend to understand the method by which the "error == 0 surface" is calculated (do they explain it?).
But I am curious if these plots can/have been empirically validated with real world data.
Presumably they just render the absolute error between the lhs and rhs of the equation for every pixel in the plot.
Yep - it's just |left-right|^fuzzyLevel
Seems like this is one way of visualizing the solutions to many closely related equations simultaneously. I wonder what the graph would look like if, instead of coloring based on error, one composited all the solutions within a range of values of the coefficients.
Does he say how the fuzzification is defined?
I need to add more details about that. But it's simply: abs(left-right)^fuzzyValue
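Going by that description, the whole per-pixel computation seems to reduce to a one-liner; here is a small sketch (the 0.2 exponent is my own illustrative choice, not a value from the article):

```python
import numpy as np

def fuzzy(left, right, fuzzy_value=0.2):
    # abs(left - right) ** fuzzyValue, as described above.
    # Exponents below 1 flatten large errors, widening the dark
    # band around the exact solutions of left == right.
    return np.abs(left - right) ** fuzzy_value

# Example: the unit circle x^2 + y^2 = 1.
x, y = np.meshgrid(np.linspace(-2, 2, 401), np.linspace(-2, 2, 401))
z = fuzzy(x**2 + y**2, 1.0)  # near 0 on the circle, 1 at the origin
```

Feeding `z` into any grayscale or heat-map ramp should reproduce the fuzzy-graph look.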
Computers waste a ton of time being perfect when good enough would work just as well. If we get better at pinning down what "mostly right" means, we can make more software faster by trading exactness for speed. You see this kind of thing in quantized LLMs and JPEG compression.
Isn't this essentially how many fractals are colored?
It is similar, except with a lot of fractals, the numbers being colored represent how many iterations are required to get outside of a set threshold (which indicates divergence).
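For comparison, the classic escape-time coloring works roughly like this (a minimal sketch; real fractal renderers add smoothing and palette tricks):

```python
def escape_iterations(c, max_iter=50, threshold=2.0):
    """Mandelbrot escape-time count: the colored value is how many
    iterations of z -> z^2 + c it takes to leave the threshold disk,
    not a difference between two sides of an equation."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > threshold:
            return n
    return max_iter  # assumed in the set (no divergence within the budget)
```

So the "fuzzy" value measures how badly a constraint is violated, while the fractal value measures how quickly an iteration diverges.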
Right, I just meant that you plot more than just the equality to get some visualizations. It's also a common way to visualize game theory, I thought: you want to know where the expected equilibrium is, but you also want to see, essentially, how strong the pull toward it is.
Really beautiful. I bet Ramanujan just “saw” and felt these.
Isn't that what mathematicians have always done with "level lines"?
It's also what scientists have done to visualize solutions of PDEs since the 1960s. Author should download Paraview and give it a twirl, to get this perspective.
First create a mesh (Sources -> Plane for 2D, or Sources -> Box if you want to do it in 3D). Set reasonably high values for Resolution on this source. Then use a filter to apply your function, either Filters -> Alphabetical -> Calculator for easy stuff, or Filters -> Alphabetical -> Python Calculator if you want complicated stuff. The "coordsX" etc. are your spatial coordinates on the mesh. Pick whatever color map you want (diverging types are good for this), change the limits on coloring, use a log scale, whatever.
If you do this in 3D on a box, you can then use a slice to scrub through the result on an arbitrarily oriented plane. You could visualize translucent isosurfaces of constant "error" and raytrace them. Or you could take the gradient of your "error" and plot as a vector field. With a bit of leg work you can add a fourth coordinate (time) and make animations. And you can combine all of these. Sky is the limit.
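Outside ParaView, the sample-a-field-then-derive idea can be roughed out in NumPy (the circle equation here is just a stand-in for whatever lhs/rhs pair you care about):

```python
import numpy as np

# Sample an "error" field on a plane, like the Calculator filter would.
x, y = np.meshgrid(np.linspace(-2, 2, 201), np.linspace(-2, 2, 201))
err = np.abs(x**2 + y**2 - 1)  # stand-in for |lhs - rhs|

# Gradient of the error field, for the vector-field idea above.
# np.gradient returns one array per axis (axis 0 = rows = y).
derr_dy, derr_dx = np.gradient(err, 0.02, 0.02)

# `err` can be colored with a diverging or log-scaled colormap, and
# (derr_dx, derr_dy) drawn as arrows (e.g. with quiver) to show
# where and how fast the constraint violation grows.
```

The 0.02 spacing passed to `np.gradient` matches the grid step, so the arrows come out in the same units as the plot axes.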
I did recently learn of the https://en.wikipedia.org/wiki/Level_set concept, and it is very similar.
Those were popular in the 90s for image processing: e.g. https://shape.polymtl.ca/lombaert/levelset/
Dude, it's fine to be learning stuff and even writing about it. But if you're still discovering basic stuff like level sets, then maybe hold off on declaring that you've discovered, after centuries of mathematical development, a completely new form of graphing?
Reading it twice and sitting on it, I have an uneasy feeling.
It feels like it distracts more than it illuminates, e.g. the Quasar Equation. I don't know what it capital-M Means that near (x, y) = (0, 0) there's a region with higher differences between y and x/(x^2+y^2).
But counterpoint to myself:
I'm looking at a toy example.
I'm sure there have been plenty of times I was genuinely comparing two equations and needed to understand where they'd differ.
It's just harder for me to grok when one of the equations is "y".
OK, I was expecting some sort of marketing BS at the start, but... it's genuinely providing a lot more information than the "binary", black-and-white conventional chart does.
I'm impressed.