Questions related to Mathematics
In The Big Picture (2016), Sean Carroll remarks (page 131): “While math is lumped together with science in many school curricula — and while they certainly enjoy a close and mutually beneficial relationship — at heart they are completely different endeavors.”
Do you agree?
Two examples might help illustrate the issue. Carnot, in his 1824 Reflections on the Motive Power of Fire, idealized a steam engine to exclude all losses of heat and all friction. John James Waterston, in his 1845 On the Physics of Media that are Composed of Free and Perfectly Elastic Molecules in a State of Motion, treated molecules as point particles interacting without friction. In both examples, the physicists discard extraneous physical features to analyze a physical system. Mathematics, in many instances, discards all physical elements extraneous to number and geometry. If that is a valid characterization, then one may argue that mathematics is merely an extreme version of the idealization used in theoretical physics.
Incidentally, Quora raises a similar question in: https://www.quora.com/Are-math-logic-and-science-logic-the-same-thing, with some answers opposing each other.
Stiffness is associated with a small change in the input producing a large change in the output. Phase-field equations such as the non-conserving Allen–Cahn equation and the mass-conserving Cahn–Hilliard equation are stiff differential equations. Their solutions represent the dynamics of the interface between two phases. The Allen–Cahn equation is a second-order nonlinear PDE, while the Cahn–Hilliard equation is fourth-order; their mathematical forms are given in the picture included below, with appropriate initial and boundary conditions. The parameter epsilon in the equations represents the thickness of the interface.
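Since the exact equations are only in the attached picture, the following is a generic illustration (with assumed, illustrative parameters, not those of the post) of why semi-implicit (IMEX) time stepping is commonly used for such stiff phase-field equations: treating the stiff linear diffusion term implicitly removes the severe explicit time-step restriction. A 1D Allen–Cahn form u_t = eps^2 u_xx + u - u^3 with periodic boundary conditions is assumed.

```python
import numpy as np

# Semi-implicit Fourier spectral stepping for 1D Allen-Cahn:
#   u_t = eps^2 * u_xx + u - u^3
# Diffusion (stiff, linear) is implicit; the reaction term is explicit.
N, L, eps, dt, steps = 256, 2 * np.pi, 0.1, 0.1, 200
x = np.linspace(0, L, N, endpoint=False)
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi   # angular wavenumbers
u = 0.1 * np.cos(x)                          # small initial perturbation

for _ in range(steps):
    nonlinear = u - u ** 3                   # explicit reaction term
    u_hat = (np.fft.fft(u) + dt * np.fft.fft(nonlinear)) / (1 + dt * eps ** 2 * k ** 2)
    u = np.real(np.fft.ifft(u_hat))

# The solution separates into the two stable phases near -1 and +1,
# joined by a thin interface of width set by eps.
print(u.min(), u.max())
```

The same IMEX idea carries over to Cahn–Hilliard, where the fourth-order term makes the explicit stability restriction even more severe.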
Thanks in advance for all of your consideration.
How can one analyze a mathematics textbook from the perspective of students' use, in order to examine how textbooks support students in learning mathematics?
Is there any theory that could be used for such research, or any relevant articles or suggestions?
Thank you very much!
Dear talented researchers,
I want to calculate three solar irradiance components (DNI, DHI, and GHI) from equations that contain logistic, maximum/minimum, less-than, and greater-or-equal terms.
How can I use these equations? I do not know how to use equations 12 & 13 because they contain terms such as max and min.
I am not sure, but it seems to be a logistic map.
1. Just for example, how do I evaluate this part of equation 12?
min(1.88 * 10^-8 * CO^4, SOLZEN < 77)
2. How do I deal with mathematical conditions such as greater than, less than 77, or greater-or-equal inside these equations?
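Without the source paper the intended semantics are a guess, but here is a generic sketch of how min/max and comparison terms typically translate into code. The variable names CO and SOLZEN follow the post; the numbers are illustrative only.

```python
# Illustrative values only; in practice these come from the data.
CO, SOLZEN = 300.0, 60.0

# A term like min(1.88e-8 * CO**4, ...) is simply the smaller of two values:
term = min(1.88e-8 * CO ** 4, 1.0)

# A comparison such as "SOLZEN < 77" is an indicator: it evaluates to
# 1 (True) when the condition holds and 0 (False) otherwise, so it can
# switch a term on or off:
gated = term * (SOLZEN < 77)

# Piecewise definitions ("when condition use f, otherwise g") become if/else:
value = term if SOLZEN < 77 else 0.0
print(gated, value)
```

If the paper instead intends the comparison as a validity condition (the equation only applies for SOLZEN < 77), the if/else form is the right reading.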
Thank you very much for your kind response.
I wish to extend a paper by incorporating a particular feature the authors haven't used or considered. However, after going through the literature, it isn't clear how large a role that feature plays; all I know is that it plays a very important role for the output I care about. For experimentation I am assuming a simple linear regression function ax + by, where a serves as the contribution from the paper I am extending and x is its feature set; my goal is to find the parameter b (by MSE minimization) by encoding the new feature in the variable y, and thus determine the strength of y's effect.
However, there are some limitations. First, I am assuming the relationship is linear, which is a very strong assumption, and I am hoping to consider some kind of nonlinearity.
The question is how to proceed from here. Is there any mathematical equation I can consider as an initial assumption?
PS: Note that y here is a continuous value, not categorical.
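As a minimal sketch of the setup described above (entirely synthetic data; in practice x, y, and the target come from the paper's dataset), the coefficient b can be estimated by ordinary least squares, and nonlinearity in y can be probed by adding polynomial terms:

```python
import numpy as np

# Synthetic example: target = a*x + b*y + noise, with true b = 0.7.
rng = np.random.default_rng(0)
n = 500
x = rng.standard_normal(n)
y = rng.standard_normal(n)
target = 2.0 * x + 0.7 * y + 0.1 * rng.standard_normal(n)

# MSE-minimizing (least squares) fit of target ~ a*x + b*y
X = np.column_stack([x, y])
(a_hat, b_hat), *_ = np.linalg.lstsq(X, target, rcond=None)
print(b_hat)  # recovers a value close to the true b

# One simple relaxation of linearity: add polynomial terms in y,
# e.g. target ~ a*x + b1*y + b2*y**2, and check whether b2 matters.
X2 = np.column_stack([x, y, y ** 2])
coef2, *_ = np.linalg.lstsq(X2, target, rcond=None)
print(coef2[2])  # small here, since this synthetic data really is linear
```

If the quadratic (or spline/kernel) coefficients are significantly nonzero on real data, that is evidence the linear assumption is too restrictive.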
A very interesting topic: the "quantification of randomness." In mathematics it is sometimes approached through complexity theory (although that is more about pseudorandomness than randomness), based on the idea that a more complicated series is more random. There are also tests for randomness in statistics, and perhaps the most intriguing measure is related to information theory: entropy (also of relevance to, and a result of, the second law of thermodynamics). In addition, there are pseudorandom number generators and true random number generators, for example using quantum hardware.
So, what I've been trying to do is compile a complete list of all available algorithms, books, or even random number generators that will tell me how random a series is, allowing me to "quantify randomness."
There are 125 unique infinite pseudorandom series that I have discovered and generated based on a rule. Now, how do I test them for randomness and quantify it? Is a given series random, or is there a pattern, or something that would allow me to predict the next number in the series, given that I don't know what the next number is?
Does anyone know of any GitHub links related to any of the above (anything related to quantifying randomness in general that you think would be helpful)?
A book or books on quantifying randomness would be very helpful too. Actually, anything at all...
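One simple, well-known starting point (not the whole answer) is the Shannon entropy of the symbol distribution. A short sketch, showing both its use and its key limitation:

```python
import math
from collections import Counter

def shannon_entropy(seq):
    """Shannon entropy of the empirical symbol distribution, in bits/symbol.
    High entropy is necessary for randomness, but NOT sufficient."""
    counts = Counter(seq)
    n = len(seq)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

constant = [0] * 1000
pattern = [0, 1] * 500          # perfectly predictable, yet uniform counts

print(shannon_entropy(constant))  # 0.0 bits: no uncertainty at all
print(shannon_entropy(pattern))   # 1.0 bit: maximal for 2 symbols
```

The `pattern` example shows why single-symbol entropy alone cannot quantify randomness: it ignores order. Block entropies, runs tests, compressibility (a practical proxy for Kolmogorov complexity), and full statistical suites in the style of NIST SP 800-22 or Diehard/Dieharder probe exactly the structure that entropy misses.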
We are planning to implement matrix-based mathematical algorithms on an FPGA. Could anyone suggest a good book on these topics?
Is there a mathematical relationship between the number of twists per unit length and the frequency at which the crosstalk dip occurs?
Humanity is in its millennial era. We discovered fire from the friction of stones, and now we are working with nanorobots. Once it was a dream to fly, but today all the Premier League, La Liga, and Serie A players travel by airplane at least twice a week, thanks to the unprecedented growth of human science. BUT ONE THING IS STILL MISSING FROM THE GLITTERING PROFILE OF HUMANKIND.
Although we have gravitation theory, Maxwell's theory of electromagnetism, Max Planck's quantum mechanics, Einstein's theory of relativity, and most recently Stephen Hawking's Big Bang concepts... why can we still not travel back and forth in time?
Are there any possibilities in the future?
Why, in terms of mathematics, physics, and theology?
I’m currently doing a project and have a categorical independent variable and a continuous dependent variable. I am trying to find which group in the categorical variable produces the highest values of the continuous variable. I have already done ANOVA and post hoc tests. Does anyone know of any other mathematical or computational methods that could help me with this?
I have a problem finding references on higher-order generating functions, for example for finding an explicit formula for this recurrence relation: https://mathoverflow.net/questions/266478/linear-two-dimensional-recurrence-relation
Actually, in my research there is a three-dimensional recurrence relation. Does anybody know of books about higher-order generating functions in general?
I really appreciate any help you can provide.
Metamathematics -- the fundamental logical paradigm of mathematics -- was never fully defined by Hilbert (nor by anyone else), which has had severe yet commonly ignored consequences for all branches of mathematics and mathematical theory ever since. As a result, it is difficult to find relevant papers, or anybody interested in investigating or discussing the subject.
Can someone please share the relevant mathematics and explanation for the first order analysis of a BJT current mirror? Any link to an article/book/chapter will be very useful. You can also attach the document if that is convenient.
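For orientation (a standard textbook result, not taken from any attached document): for the basic two-transistor mirror with matched devices of current gain $\beta$, and neglecting the Early effect, the reference branch supplies both base currents, so to first order

```latex
I_{\text{out}} \;=\; \frac{I_{\text{ref}}}{1 + \dfrac{2}{\beta}} \;\approx\; I_{\text{ref}}\left(1 - \frac{2}{\beta}\right)
```

A second-order analysis would then add the Early-effect term through the output resistance $r_o = V_A / I_C$.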
How can I calculate the critical length of fiber in a fiber-reinforced polymer composite? Is there a mathematical formula, or do we have to rely on some assumptions?
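For reference, the commonly used Kelly–Tyson estimate (which assumes a constant interfacial shear stress along the fiber) gives the critical fiber length as

```latex
l_c \;=\; \frac{\sigma_f \, d}{2\,\tau_c}
```

where $\sigma_f$ is the fiber tensile strength, $d$ the fiber diameter, and $\tau_c$ the interfacial shear strength (or the matrix shear yield stress, whichever fails first).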
My name is Athanasios Paraskevopoulos; I am an MSc student in Mathematics at the Hellenic Open University. I am looking for partners who work (or used to work) in the field of didactics of mathematics.
If you're interested in helping me with my study, please feel free to contact me via ResearchGate or email: firstname.lastname@example.org
Thank you and kind regards,
I have a simulation code for a Horizontal Washing Machine.
The code solves the equations of motion of the system with Matlab's ode45 and plots the vibration response of the system in the transient state.
In this code, the frequency (omega) is an exponential function of time, as it's stated below (and its diagram is attached to 'the question'):
The resulting displacement response is attached to the question.
It is desired to :
First, increase the frequency to omega_0 by exponential1
Then, increase it to omega_1 by exponential2
But 'the problem' is that:
the displacement response shows an unexpected increase in frequency at the beginning of the second exponential increase (it reaches 20 Hz, which is much larger than the maximum frequency in the simulation, 10 Hz).
Do you know what could be the reason for this response?
Any help would be gratefully appreciated.
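Since the actual simulation code is not shown here, the following is only a guess at one common cause: if the excitation is computed as sin(omega(t)·t) rather than sin(phi(t)) with phi(t) = ∫omega dt, the instantaneous frequency d/dt[omega(t)·t] = omega(t) + t·omega'(t) can overshoot omega(t) during the sweep. A small sketch with an assumed exponential sweep from 1 Hz toward 10 Hz:

```python
import numpy as np

t = np.linspace(0, 10, 100001)
# Assumed exponential frequency law, sweeping 1 Hz -> ~10 Hz
omega = 2 * np.pi * (1 + 9 * (1 - np.exp(-t)))

# Wrong: phase taken as omega(t) * t
f_wrong = np.gradient(omega * t, t) / (2 * np.pi)

# Right: phase accumulated by integrating omega(t) (trapezoidal rule)
phi = np.concatenate(([0.0], np.cumsum(0.5 * (omega[1:] + omega[:-1]) * np.diff(t))))
f_right = np.gradient(phi, t) / (2 * np.pi)

print(f_wrong.max())  # instantaneous frequency exceeds the 10 Hz target
print(f_right.max())  # stays at or below the target sweep
```

If the Matlab code forms its forcing term as `sin(omega(t)*t)`, the same overshoot would appear in the displacement response exactly at the start of the second exponential segment, where omega'(t) jumps.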
I recently came to know about the commercial service https://mathpix.com/ which claims to convert mathematical formulas from (scanned) pdf or even handwritten text to LaTeX.
I have no experience with this. I am interested whether there is an open source solution which solves the same (or a similar) problem.
Hello to all
I am an electrical student and I have a question about optimization.
Can you help me?
To optimize (mathematically, not with meta-heuristic algorithms) the values of the elements and the sizes of the inductor, capacitor, etc. of a power electronic converter, I need a few examples of formulation and simulation (preferably in GAMS).
For thought, regarding the adoption of "beliefs and dogmas":
The blind adoption of beliefs and dogmas by people in populations (cf. The Psychology of the Crowd by Gustave Le Bon) seems to be psychologically coupled, and explainable physically from a social-scientific point of view:
here, too, synchronization within masses occurs
- and it seems to be in accordance with the Kuramoto model.
For this, only a corresponding marketing strategy seems to be necessary (applied mathematics/physics).
Basis: Yoshiki Kuramoto assumed in 1975 that there is a weak relationship (more precisely, coupling) between oscillating systems (oscillators) and that these are almost identical. Kuramoto found that the interaction between each pair of coupled oscillators depends sinusoidally on their phase difference, resulting in the so-called *Kuramoto model*. This can even be illustrated with initially non-synchronous metronomes, which over time (under certain conditions: a movable surface) synchronize themselves.
This even seems to be a basic model of synchronization of coupled systems in nature, biology, chemistry, physics, and the social sciences:
– collective flashing of fireflies [Buck 1988]
– collective oscillation of pancreatic beta cells [Sherman 1991]
– the heartbeat synchronized with ventilation [Schäfer 1998]
– pedestrian induced oscillations on bridges [Strogatz 2005]
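A minimal numerical sketch of the Kuramoto model described above (all parameters illustrative): N oscillators with random natural frequencies, all-to-all sinusoidal coupling of strength K, and the order parameter r measuring how synchronized the population is (r = 0 incoherent, r = 1 fully synchronized).

```python
import numpy as np

# Kuramoto model: dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)
rng = np.random.default_rng(0)
N, K, dt, steps = 200, 2.0, 0.01, 5000
omega = rng.normal(0.0, 0.5, N)        # natural frequencies
theta = rng.uniform(0, 2 * np.pi, N)   # random initial phases

for _ in range(steps):
    # Mean-field form: r * e^{i psi} = (1/N) * sum_j e^{i theta_j}
    z = np.mean(np.exp(1j * theta))
    theta += dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta))

r = np.abs(np.mean(np.exp(1j * theta)))
print(f"order parameter r = {r:.2f}")  # well above 0: K exceeds the critical coupling
```

Below a critical coupling K_c the population stays incoherent (r near 0 for large N); above it, a synchronized cluster forms, which is the transition the metronome demonstration makes visible.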
-Kuramoto, Yoshiki (1975) Self-entrainment of a population of coupled non-linear oscillators. In: Araki H (eds.) International Symposium on Mathematical Problems in Theoretical Physics, Lecture Notes in Physics, Volume 39, Springer-Verlag Berlin, Heidelberg. DOI: 10.1007/BFb0013365.
-Buck J (1988) Synchronous rhythmic flashing of fireflies, III. Q Rev Biol 63(3), 265–289. DOI: 10.1086/415929.
-Sherman A, Rinzel J (1991) Model for synchronization of pancreatic betacells by gap junction coupling. Biophysical journal 59(3), 547–559. DOI: 10.1016/S0006-3495(91)82271-8.
-Schäfer C, Rosenblum MG, Kurths J, Abel HH (1998) Heartbeat synchronized with ventilation, Nature 392(6673), 239–240. DOI: 10.1038/32567.
-Strogatz SH, Abrams DM, McRobie A, Eckhardt B, Ott E (2005) Theoretical mechanics: Crowd synchrony on the Millennium Bridge, Nature 438(7064), 43–44. DOI: 10.1038/438043a.
Credit 'spontaneous synchronization of metronomes' video
#psychology #synchronization #nature #physics #chemistry #biology
One thing I noted in academia is that competition can sometimes be just as fierce as in the world of business.
Sometimes it can be small and petty like who should be first author, often triggered by purely selfish reasons and following justifications.
In other cases competition can be about grants, in some cases effectively rendering someone unemployed. I have seen bullying and discrimination more frequently than in the world of business, the place I come from.
This is truly the dark side of academia, there are also positive things but these are things that make me sick to my stomach.
What is your experience? Do you agree with my rather dark view? If not, why? If yes, how can we fix it?
Best wishes Henrik
Fermat's last theorem was finally solved by Wiles using mathematical tools that were wholly unavailable to Fermat.
Do you believe
A) That we have actually not solved Fermat's theorem the way it was supposed to be solved, and that we must still look for Fermat's original solution, still undiscovered,
B) That Fermat actually made a mistake, and that his 'wonderful' proof -which he did not have the necessary space to fully set forth - was in fact mistaken or flawed, and that we were obsessed for centuries with his last "theorem" when in fact he himself had not really proved it at all?
Newton's second law, sometimes called the fundamental principle of dynamics, is usually considered an irreducible axiom of mechanics. It is actually not a mathematical theorem, but a physical principle based on experiments on our planet. Do you think this law would be valid in an absolute vacuum, or does it reflect the existence of some omnipresent form of aether, which would explain why we need energy to move an object in the absence of any detectable obstacle or damping of any nature (solid, liquid, gaseous, plasma...)?
To help your reflection, cf.
All remarks are welcome.
If yes, you are welcome here. Please be concise and understandable. Please be ethical when communicating. Experts and amateurs of mathematics are invited. Experts are requested to give explanations if amateurs have insufficient understanding of something.
Please, see also:
Previous threads can be found here:
Can someone help me express multi-user massive MIMO detection techniques such as ZF, MMSE, and ML with their mathematical expressions?
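As a sketch of the standard textbook forms (assumed here, since no specific system model was given): with received signal y = Hx + n, channel H (Nr × Nt), and noise variance σ², the zero-forcing detector is x̂_ZF = (HᴴH)⁻¹Hᴴy and the MMSE detector is x̂_MMSE = (HᴴH + σ²I)⁻¹Hᴴy; ML search compares y against all candidate symbol vectors. A minimal numerical example with BPSK symbols:

```python
import numpy as np

rng = np.random.default_rng(1)
Nr, Nt, sigma2 = 8, 4, 0.1

# Rayleigh channel, BPSK symbols, AWGN: y = H x + n
H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)
x = np.sign(rng.standard_normal(Nt)) + 0j
n = np.sqrt(sigma2 / 2) * (rng.standard_normal(Nr) + 1j * rng.standard_normal(Nr))
y = H @ x + n

# Zero-Forcing: pseudo-inverse of the channel
x_zf = np.linalg.pinv(H) @ y

# MMSE: regularizes the inversion with the noise variance
x_mmse = np.linalg.solve(H.conj().T @ H + sigma2 * np.eye(Nt), H.conj().T @ y)

print(np.sign(x_zf.real))    # detected BPSK symbols
print(np.sign(x_mmse.real))
```

ML detection is omitted from the code since it is an exhaustive search (x̂ = argmin over the symbol alphabet of ‖y − Hx‖²), exponential in Nt.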
I am currently working on sensitivity analysis in the context of AHP. I use the online tool BPMSG from Goepel, maybe someone here knows it. However, I have a problem with the traceability of the results. Let's assume that there are exactly 3 criteria in the AHP (C1,C2,C3). Then I would like to know how the final value for an alternative (a1) results if one of the criteria changes in weighting, right?
I'll just say C1 decreases by x. However, the value x that is taken away from C1 must be distributed to C2 and C3. I just wonder which method is used to do this. Is x simply distributed equally to C2 and C3 or does this happen according to the share of C2 or C3 in the sum of C2 and C3?
When I do that, I get the following for the remaining two criteria:
(C1-x) = New C1
(C2 + (C2 / (C2 + C3)) * x) = New C2
(C3 + (C3 / (C2 + C3)) * x) = New C3
Unfortunately, however, I do not know if this is correct. If I multiply the criteria with the corresponding values of alternative a1 and combine the whole thing to a final value, I can calculate the same again with the other alternatives. When I compare the graphs to see how big x has to be to change the final prioritization of the alternatives, I always get the wrong values compared to the online tool. Therefore I would like to know if the redistribution of the weights is correct.
I hope someone can help me despite the long question. Thanks a lot!
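As a check on the proportional scheme in the question (this simply turns the formulas above into code; it is not a statement about what the BPMSG tool actually does internally):

```python
def redistribute(weights, i, x):
    """Decrease weights[i] by x and redistribute x to the other criteria
    in proportion to their current shares (the scheme described above)."""
    w = list(weights)
    rest = sum(w) - w[i]          # sum of the remaining criteria, e.g. C2 + C3
    for j in range(len(w)):
        if j == i:
            w[j] -= x
        else:
            w[j] += x * w[j] / rest
    return w

w = redistribute([0.5, 0.3, 0.2], 0, 0.1)
print(w)  # [0.4, 0.36, 0.24] -- the weights still sum to 1
```

This proportional redistribution preserves the sum of the weights and the ratio between the untouched criteria (C2/C3 stays 0.3/0.2 = 1.5), which is one common convention; equal splitting is the other. Comparing both against the tool's output should reveal which one it uses.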
I am aware that every totally bounded metric space is separable, and that a metric space is compact iff it is totally bounded and complete, but I want to know whether every totally bounded metric space is locally compact. If not, please give an example of a metric space that is totally bounded but not locally compact.
Follow this question on the given link
How long does it take for a journal indexed in the "Emerging Sources Citation Index" to get an Impact Factor? What is the future of journals indexed in the Emerging Sources Citation Index?
Assume a mathematical optimization problem with two positive continuous variables:
0 <= x <= 1
0 <= y <= 1000
I am seeking an efficient way to express the following nonlinear relationship in the form of linear constraints (possibly with the use of binary/integer variables and big M), so the problem can be solved with MILP solvers:
- when 0 <= y < 200 then x = 0
- when y = 200 then 0 <= x <= 1
- when 200 < y <= 1000 then x = 1
The numbers 200 and 1000 are only indicative.
Are there any direct suggestions or papers/books addressing similar problems?
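One possible formulation (a sketch, not taken from any specific reference) uses two binary variables u, v and the natural big-M value M = 1000 − 200 = 800:

```
x <= u              (x can be positive only if u = 1)
200*u <= y          (u = 1 requires y >= 200)
y - 200 <= 800*v    (y > 200 forces v = 1)
v <= x              (v = 1 forces x = 1, since x <= 1)
```

Check of the three cases: y < 200 forces u = 0 and hence x = 0; y = 200 allows u = 1 with v = 0, leaving x free in [0, 1]; y > 200 forces v = 1 and hence x = 1 (with u = 1 then consistent, since y ≥ 200). Note that the strict inequality "y > 200" cannot be enforced exactly in a MILP; at y = 200 the model leaves x free, which matches the stated piecewise definition.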
Jerk is defined as the rate of change of acceleration. I would like to know some practical applications of jerk in order to gain a better understanding. I kindly request some examples.
Re: ARTICLE: "Should Type Theory replace Set Theory as the Foundation of Mathematics?" BY
Type Theory is indicated (by the author) to be a sometimes better alternative and a sometimes-replacement for regular set theory AND thus a sometimes better replacement for the logical foundations for math (and Science). It seems to allow turning what is qualitative and not amenable to regular set theory into things that can be the clear particular objects of logical reasoning. Is this the case? (<-- REALLY, I am asking you.)
It is very rare, if ever, that I have addressed anything I did not have a good understanding of; BUT here is the exception (and a BIG one). (I HAVE VERY, VERY little understanding of this article, even from the crudest qualitative standpoint. You might say I should have researched this more, but it is not my bailiwick; only more confusion on my part would likely have resulted, "shedding no light." My sincere apologies.) ANYHOW:
If indeed things are as the author, Thorsten Altenkirch, says: it seems different things (other than those related to standard propositions in regular set theory) could widen the use of set theory itself yet retaining (including) all of regular set theory (with all of its virtues, as needed). BUT, in addition it is indicated it could be applied to areas (PERHAPS, like biological and behavior science) where present set theory (and the math founded on it) cannot now be applied.
"[ The ] type theoretic axiom of choice hardly corresponds to the axiom of choice as it is used in set theory. Indeed, it is not an axiom but just a derivable fact."
More quoting of the author: "Mathematicians would normally avoid non-structural properties, because they entail that results may not be transferable between different representations of the same concept. However, frequently non-structural properties are exploited to prove structural properties and then it is not clear whether the result is transferable." .... "And because we cannot talk about elements in isolation it is not possible to even state non-structural properties of the natural numbers. Indeed, we cannot distinguish different representations, for example using binary numbers instead." ... "we can actually play the same trick as in set theory and define our number classes as subsets of the largest number class we want to consider and we have indeed the subset relations we may expect. ... Hence Type Theory allows us to do basically the same things as set theory" ... as far as numbers are concerned (modulo the question of constructivity) but in a more disciplined fashion limiting the statements we can express and prove to purely structural ones."
"we cannot talk about elements in isolation. This means that we cannot observe intensional properties of our constructions. This already applies to Intensional Type Theory, so for example we cannot observe any difference between two functions which are pointwise equal." ...
"...Hence in ITT (regular set theory) while we cannot distinguish extensionally equal functions we do not identify them either. This seems to be a rather inconvenient incompleteness of ITT, [ (common set theory)] which is overcome by Type Theory (HoTT)"
"[It] reflects mathematical practice to view isomorphic structures as equal. However, this is certainly not supported by set theory which can distinguish isomorphic structures. Yes, indeed all structural properties are preserved but what exactly are those. In HoTT all properties are structural, hence the problem disappears. ..."
"While not all developments can be done constructively it is worthwhile to know the difference and the difference shouldn’t be relegated to prose but should be a mathematical statement." [AND]: ...
"Mathematicians think and they often implicitly assume that isomorphic representations are interchangeable, which at closer inspection isn’t correct when working in set theory. Modern Type Theory goes one step further by stating that isomorphic representations are actually equal, indeed because they are always interchangeable."...
..."The two main features that distinguish set theory and type theory: constructive reasoning and univalence are not independent of each other. Indeed by being more explicit about choices we have made we can frequently avoid using the axiom of choice which is used to resurrect choices hidden in a proposition. Replacing propositions by types shows that the axiom of choice in many cases is only needed because conventional logic limits us to think about propositions when we should have used more general types."
Oh, here's the link to THE ARTICLE:
One of the central themes in the philosophy of formal sciences (or mathematics) is the debate between realism (sometimes misnamed Platonism) and nominalism (also called "anti-realism"), which has different versions.
In my opinion, what is decisive in this regard is the position adopted on the question of whether objects postulated by the theories of the formal sciences (such as the arithmetic of natural numbers) have some mode of existence independently of the language that we humans use to refer to them; that is, independently of linguistic representations and theories. The affirmative answer assumes that things like numbers or the golden ratio are genuine discoveries, while the negative one understands that numbers are not discoveries but human inventions, they are not entities but mere referents of a language whose postulation has been useful for various purposes.
However, it does not occur to me how an anti-realist or nominalist position can respond to these two realist arguments in philosophy of mathematics: first, if numbers have no existence independently of language, how can one explain the metaphysical difference, which we call numerical, at a time before the existence of humans in which at t0 there was in a certain space-time region what we call two dinosaurs and then at t1 what we call three dinosaurs? That seems to be a real metaphysical difference in the sense in which we use the word "numerical", and it does not even require human language, which suggests that number, quantities, etc., seem to be included in the very idea of an individual entity.
Secondly, if the so-called golden ratio (also represented as the golden number and related to the Fibonacci sequence) is a human invention, how can it be explained that this relationship exists in various manifestations of nature such as the shell of certain mollusks, the florets of sunflowers, waves, the structure of galaxies, the spiral of DNA, etc.? That seems to be a discovery and not an invention, a genuine mathematical discovery. And if it is, it seems something like a universal of which those examples are particular cases, perhaps in a Platonic-like sense, which seems to suggest that mathematical entities express characteristics of the spatio-temporal world. However, this form of mathematical realism does not seem compatible with the version that maintains that the entities that mathematical theories talk about exist outside of spacetime. That is to say, if mathematical objects bear to physical and natural objects the relationship that the golden ratio bears to those mentioned, then it seems that there must be a true geometry and that, ultimately, mathematical entities are not as far out of space-time as has been suggested. After all, not everything that exists in spacetime has to be material, as the social sciences well know, that refer to norms, values or attitudes that are not. (I apologize for using a translator. Thank you.)
I need a clear step-by-step explanation of the inner workings of the YOLO deep learning segmentation model, with all the mathematical nuances.
Could you recommend courses, papers, books or websites about modeling language and formalization?
Thank you for your attention and valuable support.
Have you ever wondered about using dimensional analysis in mathematics, as we do in physics?
For example, the Pythagorean formula is a^2 + b^2 = c^2, which relates the areas of the squares resting on the different sides of a right-angled triangle.
Therefore, based on a simple dimensional analysis, we may conclude:
a^2 + b^2 could NOT be equal to c^3, due to a conflict of dimensions.
This is a simple example. How about using dimensional analysis in other mathematics problems.
Please, feel free to share with me, your idea and comment.
Please, I need books that teach the mathematics of neural networks in depth, and that perhaps suggest possible improvements to the methodology.
A comprehensive way to find the concentration of arbitrary solutions would bring benefits for health, industry, technology, and commerce. Although the Beer–Lambert law is one solution, there are cases where the molar absorptivity epsilon is unknown (for example, a Coca-Cola drink or a cup of coffee). In these cases, suitable alternative ways of determining concentration should be suggested.
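For reference, the Beer–Lambert law relates absorbance $A$ to concentration $c$ through the molar absorptivity $\varepsilon$ and the path length $\ell$:

```latex
A = \varepsilon \,\ell\, c \quad\Longrightarrow\quad c = \frac{A}{\varepsilon \,\ell}
```

When $\varepsilon$ is unknown, one standard workaround is a calibration curve: measure $A$ for a series of known dilutions of the same solution and fit $A$ against $c$; the slope gives $\varepsilon \ell$, after which unknown concentrations can be read off, provided the absorbance stays in the linear range.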
I am thinking of the vector as a point in multidimensional space. The Mean would be the location of a vector point with the minimum squared distances from all of the other vector points in the sample. Similarly, the Median would be the location of the vector point with the minimum absolute distance from all the other vector points.
Conventional thinking would have me calculate the Mean vector as the vector formed from the arithmetic mean of all the vector elements. However, there is a problem with this method. If we are working with a set of unit vectors the result of this method would not be a unit vector. So conventional thinking would have me normalize the result into a unit vector. But how would that method apply to other, non-unit, vectors? Should we divide by the arithmetic mean of the vector magnitudes? When calculating the Median, should we divide by the median of the vector magnitudes?
Do these methods produce a result that is mathematically correct? If not, what is the correct method?
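On the definitions given above: the minimizer of the sum of squared distances is exactly the component-wise arithmetic mean, so that part of the conventional method is mathematically correct (normalizing afterwards changes the point and no longer minimizes anything, though for unit vectors the normalized sum is the standard "circular mean" of directions). The minimizer of the sum of absolute distances is the geometric median, which has no closed form in general; Weiszfeld's algorithm computes it iteratively. A sketch with made-up data:

```python
import numpy as np

def geometric_median(points, iters=200, eps=1e-12):
    """Weiszfeld's algorithm: iteratively reweighted average converging to
    the point minimizing the sum of Euclidean distances to all points."""
    y = points.mean(axis=0)                    # start from the mean
    for _ in range(iters):
        d = np.maximum(np.linalg.norm(points - y, axis=1), eps)
        w = 1.0 / d
        y = (points * w[:, None]).sum(axis=0) / w.sum()
    return y

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [10.0, 10.0]])
print(pts.mean(axis=0))       # the mean is pulled strongly toward the outlier
print(geometric_median(pts))  # the median stays near the main cluster
```

The contrast between the two outputs illustrates the usual trade-off: the mean is algebraically simple but outlier-sensitive, while the geometric median is robust but must be computed iteratively.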
I have data in which the relationship between two parameters seems to fit a model that has two oblique asymptotes. Does anyone have an idea what type of function I should use? Please find attached a screenshot of the data. I appreciate any help.
While working in both software packages, after loading the training and validation data for the prediction of a single output from several input variables (say 10), the software delivers an explicit mathematical equation for future prediction of the specific parameter, but it skips some of the input variables (say 2 or 3, or maybe more). What criteria do these software packages use behind the scenes to pick the most influential parameters when building a mathematical predictive model?
Could you recommend papers, books or websites about mathematical foundations of artificial intelligence?
Thank you for your attention and valuable support.
Currently the only proof of Fermat's Last Theorem is very complex and certainly not the proof that Fermat had in mind.
I wonder if it is possible to use a method that drastically simplifies Wiles' proof, a proof that has received many honors from the entire mathematical community.
Can you please explain how to solve the governing equation to obtain the frequencies, so that they can be compared with the ANSYS results?
What are the practical applications of the special functions of mathematics in the oil and gas industry and related fields? Thank you.
Knowing the orthometric height, latitude, and longitude of one point, and the reduced level, latitude, and longitude of a second point, what is the mathematical expression for the orthometric correction to be applied to the reduced level of the second point to obtain its corresponding orthometric height?
I mean something strictly mathematical and not an algorithmic routine.
The function f(n) produces:
I need a function that removes the zeros from the output of f(n) and produces:
1, 2, 5, 1, 3, 8, 9, ...
How to linearize any of these surface functions (separately) near the origin?
I have attached the statement of the question, both as a screenshot, and as well as a PDF, for your perusal. Thank you.
International exchanges are inevitable in order to develop our projects and to ensure a sufficient critical base for the research. This confronts us with the problem of translating ideas, concepts and results that have developed in our local working language. As we know, English nowadays plays the role of the pivotal language in most conferences and publications. My intention is not to argue with this position -- a pivotal language is needed -- but to understand what are the main problems raised by writing and communicating in a language that is not the one in which the work is done.
English speakers themselves must question the meaning of words, sometimes neologisms, used by a non-English speaker. Of course, what is at stake is not the words but the meaning they convey. These issues are being addressed in the study of learning mathematics in a second language, or in the study of the variety and variability of teachers' vocabularies in different languages.
As researchers the issue is somewhat different. In particular, we must coin words and expressions to name phenomena or concepts in our own working language and then the challenge of translating them, or to understand words and expressions specific to the domain coming from another cultural and linguistic environment-- sometimes via the pivotal language.
I am preparing a short essay on these issues. I will appreciate your contributions, hence my questions:
Do you have examples to share or any particular experience? What do you think about the reasons for these difficulties and the impact they may have on your own communication?