accessing mathematics and headaches
Studying mathematics and getting a headache need not be cause and effect. Still, in seminars, or when reading some papers, one often gets the impression of being a little dumb. I'm pretty sure you know those seminars in which everybody keeps nodding, or those papers full of obvious steps (once written out they take three pages, but they are obvious).
At times I think that we need to be admired for something, and in mathematics this "something" is intelligence. The fewer people who understand something, the more difficult it must be, and the more intelligent the writer. Something similar applies to talks. The strange thing is that if you do not understand something in a seminar, you just keep quiet and spend (waste) one or two hours waiting for the storm to pass you by. One should instead stand up and quietly leave the room.
Our life is short. I should be able to decide quickly whether something is of interest to me. If so, then I can dedicate time to it. For established theories there are plenty of tutorials on the web (God bless the web), but not so for recent research papers or new theories. After all, it is not that difficult to include some examples to illustrate the concepts, even in papers submitted for publication.
I agree that mathematics is not easy, but most of the time we make it even more difficult. I think that mathematicians should be aware of the need to sell the product (I include myself). The Bourbaki school, which had a big influence on the twentieth century, belongs to the past. We should react and swing to the opposite extreme - as is always the case: thesis, antithesis, synthesis - and try to make even state-of-the-art papers accessible. Good ideas are not difficult.
Another point is that technicalities prevail over conceptual research. In a "deep paper" one expects hard-to-follow equations just above obscure paragraphs. With minor exceptions, new approaches and concepts are not normally recognized, though they are usually more influential, as they allow us to think from a different perspective.
I confess I'm not a fan of B. Mandelbrot at all, but I do acknowledge the importance of his work. I agree with today's quote, first, because it seems to support the thesis of this post and, second, because for me mathematics is nothing more (and nothing less) than a language: being a language, mathematics may be used not only to inform but also, among other things, to seduce.
Wednesday, July 11, 2012
Saturday, June 23, 2012
congruence
Congruence or G-congruence (graph congruence) checks whether two sequences (one a permutation of the other) or derivations have the same initial digraphs. It is studied in detail in Sec. 6.1 of the MGG book.
Congruence is used to characterize sequential independence: if a sequence and its permutation are both compatible, coherent and congruent, then they are sequentially independent, i.e. they can be applied to the same initial graph and will eventually derive the same image graph.
Sequential independence means that the ordering of the productions (at least the two orderings specified by these sequences) affects neither the initial nor the final state.
In the MGG book there are some formulas that guarantee congruence when a single production is advanced a finite number of positions inside a sequence. They are known as congruence conditions. They can be calculated at design time, while the grammar is being defined, i.e. they are to some extent independent of the initial graph the sequence will be applied to.
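The congruence check on initial digraphs can be sketched with a toy model. Below, a production is reduced to two sets of elements (what its left-hand side needs and what it adds), and the minimal initial digraph of a sequence collects everything a rule needs that no earlier rule has produced. All names (Rule, mid, congruent) and the set-based simplification are mine, not the matrix formulation of the MGG book.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    needed: frozenset  # elements the left-hand side requires
    added: frozenset   # elements the right-hand side creates

def mid(seq):
    """Minimal initial digraph of a sequence (applied left to right):
    everything a rule needs that no earlier rule has produced."""
    produced, required = set(), set()
    for r in seq:
        required |= r.needed - produced
        produced |= r.added
    return frozenset(required)

def congruent(seq, perm):
    """Toy congruence check: the sequence and its permutation must
    demand the same initial digraph."""
    return mid(seq) == mid(perm)

p1 = Rule(needed=frozenset({"a"}), added=frozenset({"b"}))
p2 = Rule(needed=frozenset({"b"}), added=frozenset({"c"}))
print(congruent([p1, p2], [p2, p1]))  # False: swapping changes the MID
```

Here the two orderings demand different initial digraphs (p2 needs the "b" that only p1 provides), so the sequence and its permutation are not congruent.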
Albert Einstein: Do not worry about your difficulties in Mathematics. I can assure you mine are still greater.
Friday, June 8, 2012
composition and compatibility of sequences
So far I have introduced sequentialization (the productions inside a sequence are applied one after the other). Today I will scratch the surface of parallelism, but first we need to introduce composition of productions.
Consider a sequence. Its composition is just the composition of its productions. Thus, we obtain a single production that performs the same actions as the whole sequence. The natural question that arises with composition is: what do we need sequences and concatenation for? There are some differences that make composition and sequentialization pretty different concepts.
The first is that of intermediate states: sequences of productions generate intermediate states, while their composition does not (it is a single production). Not only that, but the final state may also differ. Think of a sequence that first deletes and then adds one node. If the node had any dangling edges, they are deleted by some epsilon-production. The composition, however, does not touch the node, so it does not delete those edges.
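The set-based sketch below illustrates composition: two productions, each a (delete, add) pair of element sets, collapse into a single production with the same net effect. The helper names are mine, and dangling-edge cleanup (epsilon-productions) is deliberately not modeled, which is precisely where a sequence and its composition can disagree in full MGG.

```python
def apply_rule(graph, rule):
    """Apply a set-based production: remove its delete set, union its add set."""
    delete, add = rule
    assert delete <= graph, "LHS not matched"
    return (graph - delete) | add

def compose(p1, p2):
    """Single production with the net effect of p1 followed by p2:
    delete what p1 deletes plus what p2 deletes from the host,
    add what p1 adds (and p2 keeps) plus what p2 adds."""
    d1, a1 = p1
    d2, a2 = p2
    return (d1 | (d2 - a1), (a1 - d2) | a2)

G = {"n1", "n2", "e12"}
p1 = ({"e12"}, set())   # delete an edge
p2 = (set(), {"e23"})   # add another edge

step_by_step = apply_rule(apply_rule(G, p1), p2)
one_shot = apply_rule(G, compose(p1, p2))
print(step_by_step == one_shot)  # True on this dangling-free example
```

On graphs where deletions leave dangling edges, the step-by-step result (with epsilon-productions) and the one-shot composition would no longer coincide, as the paragraph above explains.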
If two productions are sequentially independent, the order of their application does not matter, so they can be executed in parallel. But in the MGG book, Chap. 6, it is proved that advancing (or delaying) one production two positions inside a sequence in a single step is weaker (less demanding) than advancing it one position and then another.
In general, if two sequences are going to be applied in parallel, then every production in the first sequence needs to be sequentially independent of every production in the second. The previous comments weaken this requirement, making it possible to execute independent subsequences in parallel. See Sec. 5.4 of the MGG book.
Today's post ends quoting Edsger Dijkstra: Elegance is not a dispensable luxury but a quality that decides between success and failure.
Sunday, May 27, 2012
sequentialization and sequential independence
Assume we are given a sequence of productions. Recall that a sequence is just an ordered (finite) set of productions. A natural question is whether some production can be applied before (advanced) or after (delayed) its current position without altering the result of the sequence. In essence, this is sequential independence.
Suppose that our sequence is made up of two productions, p2;p1 (by convention, p1 is applied before p2). In this case, if p2 can be advanced -- or, equivalently, p1 can be delayed -- and the new sequence p1;p2 has the same output as the original, then we say that p2 is sequentially independent of p1 (and vice versa).
Sequential independence is an important concept, both from the theoretical and the practical points of view. Theoretically, it extends commutativity to an arbitrary (though finite) number of elements. Practically, it is closely related to the parallel execution of tasks (more on this in a future post).
Chapter 6 of the MGG book addresses independence for sequences and for derivations (recall that a derivation is a sequence applied to an initial graph). Sufficient conditions for moving productions are stated and proved. In the literature, sequential independence for jumps larger than a single position is tested by checking that the production is sequentially independent of every single production in the jump. One interesting result proved in the MGG book is that this is much more restrictive than what a direct jump actually needs.
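A brute-force version of the two-production case can be sketched as follows: apply the productions in both orders to the same host graph and compare the results. The set-based model and the helper names are mine, a simplification of the MGG treatment.

```python
def apply_rule(graph, rule):
    delete, add = rule
    if not delete <= graph:
        return None  # LHS not matched: the rule is not applicable
    return (graph - delete) | add

def run(host, seq):
    """Apply a sequence left to right; None if some rule fails to match."""
    g = host
    for r in seq:
        g = apply_rule(g, r)
        if g is None:
            return None
    return g

def sequentially_independent(pa, pb, host):
    """pa and pb are sequentially independent on host iff both orders
    are applicable and yield the same final graph."""
    fwd, rev = run(host, [pa, pb]), run(host, [pb, pa])
    return fwd is not None and fwd == rev

host = {"a", "b"}
p1 = ({"a"}, {"c"})
p2 = ({"b"}, {"d"})
p3 = ({"c"}, set())  # needs the "c" that only p1 provides
print(sequentially_independent(p1, p2, host))  # True: disjoint matches
print(sequentially_independent(p1, p3, host))  # False: p3 cannot go first
```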
Sequential independence can be characterized in terms of coherence (see this post or Chap. 4 in the MGG book) and congruence (I will write a post on it; it is addressed in Sec. 6.1 of the MGG book).
Today I quote Henri Poincaré: to doubt everything or to believe everything are two equally convenient solutions; both dispense with the necessity of reflection.
Sunday, May 13, 2012
My personal opinion on publishing
Today's post deviates from previous ones. I will tell you my opinion about publishing. I am not a fan of the standard way: submit, go through a revision-correction process, (eventually) publish.
There are a few problems I see in the process: it is much easier if you have a well-known surname "in the business" (or, more generally, how objective is the evaluation process?); it takes quite a long time (a year and a half is not uncommon); and, most important, what is the real impact? I mean, nowadays most of us use search engines to look for topics, and most research is publicly available on the web. There are other problems that I see but keep to myself (let's be polite).
But my main objection is that once you get through and the paper is eventually published in some journal, potential readers usually have to pay.
Of course, there are good by-products even if the paper is not eventually published. For example, the feedback from the reviewers is normally very valuable. Besides, the author is usually more careful, and the paper is revised and well thought out before submission.
I absolutely support initiatives like the Electronic Journal of Combinatorics and arXiv. Needless to say, everything can be improved, but in my opinion these are two examples of how scientific communication will be driven in the future. In fact, I have submitted a paper to the Electronic Journal of Combinatorics and got it published.
There are two very interesting critiques that I recommend to you:
- The first, written by David Lorge Parnas and titled Stop the Numbers Game, reviews the strongly negative impact on current research of the widespread policy of measuring researchers by the number of papers they publish. In fact, the critique goes a bit further. Thanks to Juan de Lara for letting me know about it.
- The second, signed by 10 computer scientists (Oded Goldreich among them) and titled On evaluating conceptual contributions, is an attempt to draw the community's attention to this problem: conceptual contributions are considered far less relevant than technical ones. I see mathematics as a language, with concepts as the building blocks of the skyscraper, so you can figure out what I think of this.
Saturday, April 28, 2012
initial graphs (minimal and negative initial digraphs)
One of the main advantages of MGG over other approaches to graph transformation is its ability to study properties of the grammar independently of the initial state. Thus, it is possible to define an algorithm (the productions of a grammar) and analyse some of its properties without taking into account the concrete problem it is going to be applied to.
For a given sequence, one natural question is which is the minimal (smallest) graph we have to be given in order to apply the sequence. Recall that MGG is able to study not only the elements that have to be present in order to apply a production, but also those that must not be in the initial graph. The minimal initial digraph (MID) is precisely this smallest graph. The negative initial digraph (NID) specifies those elements that must not appear in the host graph. Both can be explicitly calculated. Sections 4.4 and 5.3 of the MGG book are devoted to this topic. It is interesting that we can use these concepts to contribute to Petri nets (Sec. 8.2).
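A toy computation of both initial digraphs might look as follows. Here a production is a triple of element sets (needed, deleted, added), a deliberate simplification of the book's matrix machinery, and all names are mine: the MID gathers what each rule needs and no earlier rule has produced; the NID gathers what each rule adds and no earlier rule has removed (an added element must not already be present).

```python
def initial_digraphs(seq):
    """seq: list of (needed, deleted, added) sets, applied left to right.
    Returns (MID, NID): elements that must be present / absent initially."""
    produced, removed = set(), set()
    mid, nid = set(), set()
    for needed, deleted, added in seq:
        mid |= needed - produced   # not supplied by an earlier rule
        nid |= added - removed     # would clash unless an earlier rule removed it
        produced |= added
        removed |= deleted
    return frozenset(mid), frozenset(nid)

p1 = ({"n1"}, set(), {"e"})        # needs n1, adds edge e
p2 = ({"e", "n2"}, {"e"}, set())   # needs e and n2, deletes e
p3 = (set(), set(), {"e"})         # re-adds e (fine: p2 removed it)

mid_, nid_ = initial_digraphs([p1, p2, p3])
print(sorted(mid_), sorted(nid_))  # ['n1', 'n2'] ['e']
```

Note that p2's need for "e" does not enlarge the MID (p1 produces it), and p3's addition of "e" does not enlarge the NID (p2 removed it): only "n1", "n2" must be present and only "e" must be absent at the start.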
The main importance of the initial digraphs is that they substitute for the host graph in those cases where some initial state is mandatory, for example when dealing with universal quantifiers in application conditions. Hence, the analysis of the grammar productions can be performed without the need for a concrete host graph.
In previous posts I have commented on non-determinism in MGG, one of its sources being how nodes across productions are related (whether two nodes of the same type in different productions have to be the same node or not). If we have a concrete sequence in mind, the non-determinism disappears as soon as the left-hand sides of the productions are matched in the state graph. But the interesting part of the analysis is the one that happens before the productions are matched into the host graph. Recall that this non-determinism is classified as vertical and horizontal. The initial digraphs deal with horizontal non-determinism, so they also contribute to the study of vertical non-determinism.
Depending on the way completion is carried out, different initial digraphs will be derived, giving rise to the notion of the initial digraph set. Horizontal non-determinism needs further research. For some time I tried to address this issue using representation theory, which I will comment on in a future post. There is a fast introduction to representation theory here.
Summarizing, initial digraphs are interesting for at least three reasons: they keep the MGG analysis independent of the host graph, they help in the study of horizontal non-determinism, and they answer the question of which is the smallest graph able to fire a sequence.
Today I find myself a bit sarcastic. Normally I choose quotes that are related to the topic of the post in some way, but today I cannot resist including the following phrase from David Hilbert: Some people have got a mental horizon of radius zero and call it their point of view.
Tuesday, April 10, 2012
marking places in the host graph
Non-determinism in MGG is a recurrent topic in this blog. It is a property that MGG shares with the other approaches to graph rewriting. However, there are several situations in which determinism is needed, even if we want a non-deterministic grammar.
The need for determinism arises naturally in MGG in the following three sample situations: dangling edges, application conditions and the splitting of rule actions. The first I briefly explained in a previous post: in order to delete any possible edge we append an epsilon-production. Of course, we must ensure that both productions are applied to the same nodes (otherwise, dangling edges would not be removed).
I will touch on application conditions in some future post. Application conditions are just what their name indicates: properties that must be fulfilled by the host graph in order to apply the production. It should suffice to say that in MGG it is possible to calculate a sequence equivalent to a given production plus its application conditions. This sequence extends the production, and again it is necessary to guarantee that all relevant productions are applied to the same nodes. If you are interested in application conditions, they are addressed in Chap. 7 of the MGG book.
At times it is of interest to split a production into subproductions whose overall effect is equivalent. A good example is reachability (some posts on it in the future). If productions of a certain type are split into two (one adding elements and one deleting elements), then the state equation can gather more information. Clearly, both productions should be applied to the same elements. For reachability, refer to Chap. 8 of the MGG book.
How is this achieved in MGG? The idea is quite simple but has some subtleties that I do not address here (see Sec. 5.2 of the MGG book): just mark all nodes the first production is applied to. To this end, introduce a node of a new type (a type present neither in the host graph nor in the LHS or RHS of the productions). Link this node to all nodes where the first production is applied. Alter the second production so that, first, its LHS must find this element and its relevant incident edges and, second, it deletes the marking node (but not its incident edges). Why the second grammar rule should not delete the incident edges is related to sequential independence and is explained in detail in the MGG book.
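The marking trick can be sketched on a toy graph (a pair of node and edge sets). The names are mine; for simplicity the unmark step below removes the marker's incident edges itself, whereas in MGG, as noted above, the rule should leave them to epsilon-productions.

```python
MARK = "__mark__"  # a node type present nowhere else in the grammar

def mark(graph, matched):
    """First production's side effect: attach a fresh marker node to the
    nodes it was applied to."""
    nodes, edges = graph
    return (nodes | {MARK}, edges | {(MARK, n) for n in matched})

def marked_nodes(graph):
    """Second production's LHS: recover the nodes via the marker's edges
    instead of re-matching non-deterministically by type."""
    nodes, edges = graph
    return {n for (m, n) in edges if m == MARK}

def unmark(graph):
    """Delete the marker node (and, in this simplified sketch, its edges)."""
    nodes, edges = graph
    return (nodes - {MARK}, {(m, n) for (m, n) in edges if m != MARK})

G0 = ({"a", "b", "c"}, {("a", "b")})
G1 = mark(G0, {"a", "b"})   # first production applied at nodes a and b
same = marked_nodes(G1)     # second production finds exactly those nodes
G2 = unmark(G1)
print(sorted(same))  # ['a', 'b']
print(G2 == G0)      # True: the marker leaves no trace
```

Because the second production matches the marker instead of re-matching by type, it is forced onto exactly the nodes the first production used.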
Today's quote I have chosen from the great Friedrich Nietzsche: there are no facts, only interpretations. I, as a mathematician, completely agree.