I promised to post from time to time about the FOCS 2013 PC work (to demystify the process). So here is a quick update: We got 280 submissions (well, 281 submissions, but one was just a bad joke). This is up (by more than 10%) from FOCS 2012, but on the other hand our PC is also on the large side, so the number of papers assigned to a PC member is less than 40, which for FOCS/STOC is very reasonable (I reduced this number a bit by encouraging several authors of misplaced papers to withdraw their submissions).
The first stage of the work was assigning each paper to three PC members (more challenging than I expected), followed by the individual review stage, in which each PC member reviews his/her assigned papers without discussing them with other PC members. This way, each paper gets three relatively independent reviews. We are now ending this stage, and it is a good opportunity to talk about the use of sub-reviewers.
A PC member can ask an expert in the specific sub-area of a paper for a review (and we then refer to such an expert as a sub-reviewer). The purpose of sub-reviewers is not to lighten the load of PC members (though this could be seen as a positive byproduct), but rather to elicit expertise the PC is missing. One disadvantage of sub-reviewers which I will not consider here is the additional work for the rest of the community. This is a consideration, but hopefully an expert would be interested in reviewing papers that are very close to his/her expertise (and this would not be too much work).
It is my experience that a major source of mistakes for PCs is the way they use external experts. One fear is an over-confident PC that believes it can judge every work on its own. True enough, “in the good old days” PCs were self-sufficient (especially in the pre-Internet era). But TOC today is much deeper and much wider than it was “in the good old days.” It is easy to make mistakes if you do not have a good understanding of related work. On the other hand, PCs that rely on their sub-reviewers too much are in just as serious a problem. The risk is turning FOCS/STOC into a collection of mini-conferences run by separate sub-communities. This is completely opposite to the purpose of FOCS/STOC: promoting TOC as a joint community and fostering the flow of ideas. For that to happen, accepted papers must be of interest to large fractions of the community. The ideal solution in my eyes is to rely heavily on external experts, but to view their reviews as inputs to the PC work rather than as outsourcing of the PC work. I therefore asked each PC member to become familiar with his/her assigned papers and to reach a separate opinion. I know this is an idealized view, but setting the right ideals can be important.
In my last post, we saw that the problem of learning juntas, hard as it is over Boolean inputs, seems even worse over other alphabets. Coding theory happens to have an inexhaustible supply of such problems. Some of these are long-standing open problems, others are of a more recent vintage. More of these problems seem to crop up all the time. And as will be clear from the list below, this issue does seem to follow me around.
Here are four of my favorite problems, completely solved for $\mathbb{F}_2$ but wide open for any other alphabet (which we take to be some finite field $\mathbb{F}_q$).
Optimal codes of distance $d$: Arguably the mother of all such problems. How many parity checks must a $q$-ary length-$n$ code of distance $d$ have? Here we think of $d$ as a constant and $n$ as going to infinity. For $q = 2$, the Hamming bound tells us that roughly $\frac{d}{2}\log n$ parity checks are necessary. The BCH construction tells us that this is also sufficient. For larger alphabets, the best lower bound is still the Hamming bound, which is still roughly $\frac{d}{2}\log_q n$. Alas, the BCH upper bound is now roughly $(1 - \frac{1}{q})d\log_q n$. For more on this problem and its connections to additive combinatorics, see here.
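To make the gap concrete, here is the sphere-packing calculation behind the Hamming bound (a standard sketch I am adding for orientation, with $t = \lfloor (d-1)/2 \rfloor$): a code of distance $d$ has disjoint radius-$t$ balls around its codewords, so

\[ q^{n-k} \;\ge\; \sum_{i=0}^{t} \binom{n}{i}(q-1)^i \;=\; \Theta(n^{t}) \quad \text{for constant } d \text{ and } q, \]

which forces redundancy $n - k \ge (t - o(1))\log_q n$ over any alphabet. Over $\mathbb{F}_2$, BCH codes achieve redundancy about $t \log n$, matching this; over $\mathbb{F}_q$ the BCH-type constructions give only about $(1 - \frac{1}{q})(d-1)\log_q n$, so the right constant in front of $\log_q n$ is open.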
List-decoding Reed-Muller codes: Given a function $f: \mathbb{F}_q^n \to \mathbb{F}_q$, our goal is to recover all degree-$d$ polynomials that are close to $f$. Of course, we can hope to solve this in reasonable time only if this list is not too large (say, a constant independent of the dimension $n$). The question is what bound on the distance from $f$ (equivalently, the error-rate) is enough to ensure that the list is small; the conjecture is that it is the minimum distance of the degree-$d$ Reed-Muller code. The degree-1 case over $\mathbb{F}_2$ is the problem of Hadamard decoding, solved by Goldreich and Levin. In joint work with Klivans and Zuckerman, we solved the case $q = 2$ for all $d$, and in a subsequent paper, the case $d = 2$ for all $q$. The case of larger $d$ and $q$ is open.
The bottleneck in working with larger alphabets is the following. Fix a polynomial $P$ which has good agreement with $f$. The Goldreich-Levin and GKZ algorithms “guess” the values of $P$ on a $c$-dimensional subspace $S$, where $c$ is a constant. They then use this to recover $P$ on all $(c+1)$-dimensional subspaces containing $S$. Why is this easier? Knowing the right values on a subspace of co-dimension 1 effectively reduces the error-rate by a factor of $q/(q-1)$. For $q = 2$, this halves the error and puts us in the unique-decoding regime where things are easy. But for larger $q$, this error-reduction does not always suffice to specify $P$ unambiguously.
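To spell out the arithmetic in the binary case (a back-of-the-envelope sketch of my own, using the standard facts that degree-$d$ Reed-Muller over $\mathbb{F}_2$ has relative distance $\delta = 2^{-d}$ and that unique decoding works up to half the distance): fixing the values on the known subspace removes, heuristically, the $\frac{1}{q}$ fraction of errors lying on it, so

\[ \eta < \delta \quad\Longrightarrow\quad \eta' = \Bigl(1 - \tfrac{1}{q}\Bigr)\eta. \]

For $q = 2$ this gives $\eta' < \delta/2$, which is exactly the unique-decoding radius; for $q \ge 3$ the surviving factor $1 - \frac{1}{q} \ge \frac{2}{3}$ is too weak, and $\eta'$ may still exceed $\delta/2$.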
Decoding from errors and erasures: Assume that we have an $[n, k, d]_q$ linear code. Such a code can correct $\lfloor \frac{d-1}{2} \rfloor$ errors and $d - 1$ erasures. Assume that we have efficient algorithms for both of these (the latter is always the case for a linear code). Can we combine them in a black-box fashion to get an efficient algorithm that corrects $e$ errors and $s$ erasures as long as $2e + s \le d - 1$? When $q = 2$, the answer is yes (take a guess!). I don’t know the answer for larger $q$. I came upon this problem in conversations with Atri Rudra; perhaps there is a reference for it? Or, better still, a solution?
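For $q = 2$ the guess works as follows (a folklore argument, spelled out here for completeness): fill the $s$ erased coordinates once with all zeros and once with all ones, run the error-decoder on both completions, and output whichever decoded codeword is closer to the received word. If the true codeword has $z$ ones among the erased positions, the two completions introduce $z$ and $s - z$ errors there, and since

\[ \min(z,\, s - z) \;\le\; \left\lfloor \frac{s}{2} \right\rfloor, \]

one of the completions has at most $e + \lfloor s/2 \rfloor \le \lfloor \frac{d-1}{2} \rfloor$ errors in total, which the error-decoder handles. Over larger alphabets there are $q^s$ completions rather than two, and no small set of “opposite” guesses covers them all.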
Learning linear functions with noise: Learning parity with noise is a central problem in learning that is widely believed to be hard. We are given random examples of points in $\{0,1\}^n$, labeled by some unknown parity function $L$; the catch is that each label is flipped independently with probability $\eta$. A seemingly even harder problem is when the noise is adversarial, and the adversary is allowed to flip an arbitrary $\eta$ fraction of the labels over all points in $\{0,1\}^n$. In joint work with Feldman, Khot and Ponnuswami, we showed that the problems are in fact equivalent: there is a reduction from adversarial noise to random noise.
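To make the two noise models concrete, here is a minimal sketch (my own illustration; the function names and the `choose_flips` interface are hypothetical, and the adversarial model in the paper corrupts an $\eta$ fraction of the entire truth table rather than of a finite sample):

```python
import random

def noisy_parity_examples(secret, m, eta):
    """Draw m random examples (x, <x, secret> mod 2), flipping each
    label independently with probability eta (the random-noise model)."""
    n = len(secret)
    out = []
    for _ in range(m):
        x = [random.randrange(2) for _ in range(n)]
        label = sum(xi * si for xi, si in zip(x, secret)) % 2
        if random.random() < eta:  # independent label flip
            label ^= 1
        out.append((x, label))
    return out

def adversarial_parity_examples(secret, m, eta, choose_flips):
    """Same examples, but choose_flips (the adversary) sees all the clean
    examples and picks which ~eta*m labels to flip (adversarial noise)."""
    clean = noisy_parity_examples(secret, m, eta=0.0)
    flips = set(choose_flips(clean, int(eta * m)))
    return [(x, label ^ 1) if i in flips else (x, label)
            for i, (x, label) in enumerate(clean)]
```

The reduction mentioned above turns an algorithm that tolerates the first kind of noise into one that tolerates the second.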
In retrospect, perhaps this is not too surprising: learning parity with noise is already so very hard that making the noise adversarial does not make it worse. Well, the same philosophy ought to apply to other alphabets. But I don’t know of a similar reduction for any other alphabet. This paper makes some progress on this problem, reducing adversarial noise to a more benign noise model, but does not solve the problem fully.
If you have a problem of this flavor, I’d love to hear about it. I am unlikely to have anything insightful to say, but I’d be happy to commiserate with you about how annoying $\mathbb{F}_q$ can be.
Update (5/7): This post earned me a spot in the not-so-exclusive club of people called names such as “narrow-minded”, “biased”, and “religious worshiper” “who doesn’t want to learn something difficult and new” by Luboš Motl. Interestingly, he mostly takes issue with my discounting the possibility that the complexity of SAT is something like $n^{100}$ or $n^{10000}$. Although I do believe such an algorithm is extremely unlikely, its discovery wouldn’t be a bad thing for computational complexity. On the contrary, I imagine that if such an algorithm were discovered, then investigating whether it can be improved and implemented would be seen as meriting the investment of a huge amount of resources, possibly larger than those given to the Large Hadron Collider.
[First, an announcement: Mark Braverman and Bernard Chazelle are organizing a workshop on May 20-21 on Natural Algorithms and the Sciences, bringing together researchers from computer science, mathematics, physics, biology, and engineering to explore interactions among algorithms, dynamical systems, statistical physics, and complexity theory (in all senses of the term).]
In some very pleasant though unsurprising news, Scott Aaronson got tenure at MIT. Among his many talents and activities, Scott has done a fantastic amount of outreach and exposition, explaining the concepts of theoretical computer science and quantum computing to other scientists and interested laymen. He also spends considerable energy trying to dispel wrong opinions, with amazing patience and resilience (apparently due to his immutable psychological makeup). So, in honor of Scott getting tenure, just after a bruising flamewar with various “interesting characters” he managed to find in some dark corner of the Internet, I thought I’d step up to help him along and do my share of dispelling some common myths about complexity, and particularly the P vs NP problem.
P vs. NP: Reasons to Care
The P vs NP question (and its close variants) is the central question of computer science. In fact, it’s hard to think of another question in any field that combines such mathematical depth and beauty with such importance to understanding nature. Scott did a pretty good job explaining why we believe that $P \neq NP$. In this brief post I’ll try to outline why I and many others are so intrigued by this question, by trying to answer various objections that are commonly made against its importance. Some of these issues are mentioned in my book with Arora, but I thought a more explicit discussion might be worthwhile.
What if $P = NP$ but the algorithm for SAT takes $n^{100}$ or $n^{10000}$ time?
In other words, what if SAT has exponential complexity for all inputs up to size $n_0$, for some huge number $n_0$, and only starts becoming comparatively easier after that point? This is the equivalent of asking what if the laws of nature we’ve learned so far are true only in the corner of the universe we’ve managed to observe, and are completely different elsewhere. Yes, this is a logical possibility, but we have not seen the remotest reason to believe it. Also, in recent years we’ve been exploring the computational universe (in terms of the huge growth of input sizes and computational resources) at a much faster rate than the physical one, and as we do so, it only becomes more important to understand the asymptotic behavior of algorithms. Having said that, I agree that after proving $P \neq NP$, the next order of business would be to get a more precise estimate of the number of computational operations needed to solve length-$n$ instances of SAT, as a function of $n$.
The P vs NP question is irrelevant, since P is only a mathematical model having nothing to do with efficient computation.
The universality of computation, leading to equivalent models whether you study Turing machines, cellular automata, electronic computers, mathematical deduction systems, computation by natural systems (see the workshop above), or economic markets, has been a surprising and central phenomenon, validating the formal study of Turing machines and Boolean circuits. But I do agree that P is an imperfect model for efficient computation. Yet it is still a fine tool for studying the complexity of SAT and other NP-complete problems. It’s true that P is often too permissive a model, allowing algorithms that are prohibitively expensive to run on today’s technology, but this is not as much of an issue when the question is whether you can beat the exponential-time brute-force algorithm. (Indeed, it has often been observed that natural problems either have fairly efficient algorithms, with, say, $n^2$ or $n^3$ running time, or seem to have exponential (i.e., $2^{\epsilon n}$ for some $\epsilon > 0$) complexity.)
Perhaps a more serious concern is that P may be too restrictive, since it does not allow taking advantage of quantum mechanics (or perhaps other physical theories), and is also only concerned with worst-case computation. While those concerns are valid in principle, it seems that while quantum computers can do wonders for certain structured problems such as integer factoring, their effect on NP-hard problems such as SAT is fairly limited. Also, while it is possible to posit a theoretical world in which $P \neq NP$ but every problem is easy to solve on average, the evidence so far strongly points to that not being the case. So, while one can argue that instead of P vs NP we should be asking what is the size of the smallest quantum circuit that solves SAT on average, all evidence points to these two questions having the same answer. Moreover, at the moment we are so far from resolving either question that the tools we have to study all those variants are the same. When we start getting close to resolving P vs NP, we might want to try to understand the more refined variants such as quantum or average-case complexity.
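For concreteness (standard facts, added here for reference): Grover’s algorithm speeds up unstructured search only quadratically, while Shor’s algorithm factors in polynomial time:

\[ \underbrace{O(2^{n}\,\mathrm{poly}(n))}_{\text{classical brute force for SAT}} \;\longrightarrow\; \underbrace{O(2^{n/2}\,\mathrm{poly}(n))}_{\text{Grover}}, \qquad \underbrace{\mathrm{poly}(n)}_{\text{Shor, factoring $n$-bit integers}}. \]

The Grover bound is still exponential, which is the sense in which quantum computers seem to have a limited effect on NP-hard problems.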
$P \neq NP$ is not a fundamental property of nature, such as the physical laws.
“Fundamental” is a subjective term, and can be used in different senses. The fundamental theorem of algebra can be (and is) derived from more basic axioms, and so this theorem is fundamental not because it’s the foundation for all algebraic facts but because it is central and important. One can make a similar analogy between “high level” laws of nature, positing the impossibility of perpetual motion machines or of devices that transmit information faster than light, and the laws governing the behavior of elementary particles, from which they may be derived. $P \neq NP$, positing (essentially) the impossibility of a machine that gets a SAT formula and spits out a satisfying assignment, is comparable to these high-level laws in flavor, and I would say also in importance.
P vs NP is only about practical efficiency, whereas the deeper / more fundamental question is which computational problems can be solved at all.
I don’t think anyone who has actually studied complexity for even a short while would share this objection. To paraphrase Edmonds (whom we quote in our book), in many contexts the difference between polynomial and exponential is more significant than the difference between exponential and infinite. An example we give in the book is Arthur Ganson’s kinetic sculpture Machine with Concrete. It is composed of a sequence of 12 gears, each rotating 50 times slower than the previous one. While the first gear rotates 200 times per minute, the last one, at that speed, is for all practical purposes stationary, and indeed is embedded in a block of concrete. So, exponentially hard problems are essentially unsolvable. But it turns out that in most cases the problems we want to solve (from simulating the physical world to finding out whether there is an input making our program take more than some given number $T$ of time steps) are trivially computable, so in such cases the interesting question is whether their complexity is polynomial or exponential.
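Taking the sculpture’s description at face value (my arithmetic, not from the book): the last gear turns at

\[ \frac{200}{50^{11}} \;\approx\; 4 \times 10^{-17} \ \text{revolutions per minute}, \]

i.e., one full revolution roughly every $5 \times 10^{10}$ years, several times the current age of the universe. The concrete is in no danger.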
A related concern is that P vs NP only involves “elementary” mathematics, as opposed to “deep” questions such as, say, the Riemann hypothesis. While the terms “deep” and “elementary” are not well defined (it seems that, similar to the AI effect, some people consider every problem or result in fields they don’t like to be “shallow”), I believe it is again the case that this is not a concern shared by people who have actually studied computational complexity. While this field is very young, it already contains several deep and beautiful results, and it is clear that many more will be needed before we approach resolving P vs NP.
Resolving P vs NP is not interesting, since we already know the answer is that $P \neq NP$.
I can sympathize with this objection, as I too firmly believe that $P \neq NP$, and in fact I believe that concrete problems such as, say, CIRCUIT-SAT cannot be solved in subexponential time. The reason we are looking for a proof is not just to get an extra level of confirmation of what we already know, but because we believe that coming up with such a proof will necessarily have to yield deep new insights about the nature of computation. It is impossible to predict what the nature of such insights would be, or what the extent of their applications would be; but in particular, a better understanding of the dividing line between easy and hard problems can yield, on the one hand, new algorithms, and on the other hand, new encryption schemes that are provably impossible to break.
Even if $P = NP$, we wouldn’t necessarily be able to automate creativity, find hard proofs, compose beautiful symphonies, etc.
Some people making this objection are really thinking of the first objection, namely that $P = NP$ but the algorithm is so inefficient that it still holds that $P \neq NP$ in the “observable computational universe”. Others note that it’s not immediately clear how you would use an algorithm for SAT to do things such as perfect artificial intelligence, weather prediction, etc. Lance Fortnow has a nice description in his book of a universe in which $P = NP$ (see also Impagliazzo’s survey). Indeed, given the vast number of NP-complete problems from all the sciences, combined with the ability (if $P = NP$) to solve even problems higher up in the polynomial hierarchy, I think it’s hard to overestimate the implications of an efficient SAT algorithm. But I still do not see achieving this ideal world as the main reason to study P vs NP. As I said, even if we don’t have a formal proof, there are plenty of reasons to believe that $P \neq NP$. So, I view the fact that we cannot rule out this hypothetical computational utopia not as a sign that it might be true, but rather as a demonstration of how much is still left for us to discover in the world of computation.
A new review is out for Lance Fortnow’s new book “The Golden Ticket: P, NP and the Search for the Impossible”.
In another piece of news: congratulations to the new members of the National Academy of Sciences, Éva Tardos and Avi Wigderson!
Next story in our project, from Erin Wolf Chambers:
I spent most of my first couple of years of graduate school unsuccessfully trying to figure out what “research” meant. I read papers and had plenty of meetings, but somehow had no luck really making new progress on any of the problems I looked at. I tried meeting with different professors in different areas, but always seemed to run into logistical issues. (Indeed, several of the faculty wound up leaving, proving that I wasn’t selecting potential advisors very well at the time!) I dutifully kept a research notebook full of random thoughts and problems, as well as records of every talk I went to.
None of it seemed to help. By the end of my second year, I was frustrated and confused about what was going wrong. Around this time, I had a meeting with my advisor where I confessed my frustration, and told him that I was afraid I wasn’t cut out for grad school after all and was planning to quit if I couldn’t turn things around in my third year. (He had just been on sabbatical for a year, so this was probably out of nowhere from his perspective.) His answer was strangely reassuring and also terrifying: “Oh, you’ve reached THAT stage in your PhD. Alright – let’s get to work. You can do this.”
I’ll be honest – my internal thought process at that point was that this was some sort of hazing ritual, where I had to hit rock bottom before I could start to swim. Yet at the same time, the fact that he had faith in me is still motivating to this day. And it turns out, at least for me, that he was pretty much correct. In those fruitless years of banging my head against walls, I had actually absorbed a ton of techniques and background. More importantly, I had figured out what I liked (and didn’t like), so we were able to really dive into a few mutually interesting problems and make good progress.
A year or two later, I was wondering if this really constituted success. How would I know if I was cut out to keep going and try for a faculty position? (Keep in mind that at this point, a bunch of my friends had already dropped out and gotten ridiculously lucrative industry positions. They seemed really happy, so that fate didn’t seem like the end of the world.) About this time, we had a distinguished speaker in our department colloquium who was a theoretician – one of those amazing and brilliant ones that are terrifying because you could never, ever imagine how they did it. The department organized a meeting for the graduate students after his talk, which a few of our faculty also came to in order to moderate.
In the meeting, one of the most junior students shared a concern that resonated. He said, “So far, essentially all my progress has come suddenly and randomly – months apart, usually because of a good idea in the shower! This isn’t sustainable, and clearly won’t get me through a career. How do I figure out a way to make steady progress?” The speaker laughed, and replied that in his career, it wasn’t about finding a way to make steady progress; rather, it was just about coming to believe that those random ideas would keep coming. All the students were rather shocked, but all the faculty in the room were nodding agreement. This was a revelation – were we all really just flying by the seat of our pants through our careers?
I’m still not sure I have it figured out, or when I’ll reach that elusive success. People all have their own suggestions for how to go about this odd process of theoretical research, but finding what works for you is a lifelong process. I am not anyone else, and the things that motivate me don’t seem to match a lot of what I heard in graduate school. However, each suggestion and hint has been useful, if only to weed out things that don’t work for me. And one or two things have resonated and changed the way I worked; for those suggestions, I will always be grateful to my mentors and collaborators.
In fact, I think this perpetual hunt is the most important lesson I took away from my early years in research. Keep hunting, because progress is elusive and can be unexpected. Find what motivates you, because that is what will keep you thinking about the problem in the shower. Find mentors who believe in you and inspire you, because no one can really get there alone. Perhaps just as importantly (at least for me), find collaborators you like to work with, because it helps to have someone to talk about ideas with in order to keep you motivated. And have faith, because we’re all jumping in without any guarantees, and hoping for that next good idea to come to us in the shower.
Before Communications of the ACM became cool again, I was a regular reader of the Notices of the American Mathematical Society. I still check it out occasionally to keep tabs on the mathematical community.
This month’s issue features a lengthy article with the lofty title “Mathematical Methods in the Study of Historical Chronology”. It covers the work of the Russian mathematician Anatoly Fomenko and his frequent co-author Gleb Nosovsky, who re-examine the traditional chronology of the ancient world based on astronomical records and statistics.
Fomenko’s starting point is that most traditional dating methods are circular (events and archaeological artifacts are dated relative to each other, with very few absolute references). However, the intervals between noteworthy astronomical phenomena, such as solar and lunar eclipses, are sufficiently unique within the period covered by recorded history that they can be used to anchor chronicles mentioning these events. Zodiacs and star atlases can also be dated independently, based on the perceived movement of stars, some of which have shifted considerably over the past two millennia. According to Fomenko and his co-authors, doing so systematically reveals stark inconsistencies in the traditional chronology, which has to be revised if not completely rewritten.
This narrative is appealing and captivating: A lonesome hero, maverick and iconoclast, rides into town and, wielding nothing but a Magnum and Matlab, cuts through the fog of conventional wisdom, “Moneyball” style, chasing out the charlatans and humanitarians. I am mixing my metaphors here, but as we shall see it is quite fitting for Fomenko’s theories. We love to root for the underdog, especially if the little guy is one of our own: David against Goliath, Frodo against Sauron, Nate Silver against the GOP.
The Notices article describes several of Fomenko’s methods but, oddly enough, does not offer any examples of their applications. This is indeed strange, since some of these methods are at least thirty years in the making. Surely they must have led to something interesting? I will fill the gap by consulting the magnum opus of Fomenko et al., “History: Fiction or Science?”, spread across four volumes (in English; there are several more in the original Russian).
Thucydides, in his monumental history of the Peloponnesian War, describes three eclipses, two solar and one lunar, 7 and 11 years apart. Traditional chronology places these events in the years 431, 424, and 413 BC, which disagrees in some details with Thucydides’ account (such as whether the first eclipse was total or partial). To find a solution fully consistent with his description, one has to fast-forward to 1039 AD.
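The pattern-matching step itself is easy to mechanize. Here is a toy sketch (entirely my own illustration, with a fabricated miniature “canon”; real canons list thousands of eclipses, with magnitudes and visibility regions):

```python
# Toy eclipse-pattern dating: find anchor years Y such that the canon
# contains a solar eclipse in year Y, another solar eclipse 7 years
# later, and a lunar eclipse 11 years after that (Thucydides' triple).
# The canon entries below are fabricated placeholders; astronomical
# year numbering is used, so 431 BC = -430.
canon = {(-430, "solar"), (-423, "solar"), (-412, "lunar"),
         (1039, "solar"), (1046, "solar"), (1057, "lunar")}

pattern = [(0, "solar"), (7, "solar"), (18, "lunar")]  # 7 then 11 years

matches = sorted(y for (y, kind) in canon
                 if all((y + off, k) in canon for off, k in pattern))
print(matches)  # -> [-430, 1039]: both anchors pass this coarse test
```

Real canons also record totality, magnitude, and visibility, and it is on those finer details that the candidate anchors disagree.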
Rather than admit that either the original text or its surviving copies are not fully accurate, as traditional historians do, Fomenko concludes that the Athenian general actually lived in the 11th century AD, less than 1,000 years ago. After assembling a few more examples of similar inconsistencies, he declares the entire history of the ancient world a fabrication concocted in the Renaissance.
Was it a fraud, or a giant mistake perpetrated by generations of historians building upon each other’s errors? Fomenko and Nosovsky identify certain patterns in the history that never was. Namely, they assert that most of what passes for historical accounts are copies of actual events, shifted and disguised by changing or translating proper names and places. The Notices article vaguely mentions this in the context of the Bible’s authorship. In reality, exercises in identifying parts of history that are duplicates of each other constitute the bulk of Fomenko’s work.
This is where Fomenko and his comrades abandon the scientific method for good and stand with both feet firmly planted in quackery. Anything goes when establishing equivalences between events, locales, or historical figures: a fleeting phonetic or graphical similarity (often present only if one spells the name backwards, and in Russian), parallelisms in some biographical or geographical details, or just a hunch. A couple of quotes from volume IV, which covers what is traditionally thought of as the history of Britain, illustrate this approach nicely (page 614):
Galfridus proceeds to tell us that the city of New Troy, or London, has been founded on River Thames. We believe the name to have been a reference to the Bosporus initially, which is where we find Constantinople. This strait is very long and relatively narrow; it does look like a river on maps [...]
Let us also take a closer look [at] the word Thames. Bearing in mind the Oriental manner of reading words from the right to the left and the word “sound”, a synonym of the word “strait”. Reversed and unvocalized, it looks as “DNS” – possibly, a version of TMS (Thames).
I mentioned that according to Fomenko, much of ancient history (actually, everything all the way to the XVI century) was not entirely made out of whole cloth but was obtained by shifts, translations, and superposition. Where did the originals take place, and when? It turns out, astonishingly, that most of the world’s history happened somewhere in the steppes of Eastern Europe and the Middle East. Again, from the same text (page 626):
The fact that we managed to identify the Scots of the XII-XV century as the Scythians must also imply that the term Irish had been synonymous to the term “Russian” in the epoch in question (RSS or RSH = Russia sans vocalization); the name Ireland may also have referred to Russia once.
I stop taxing your patience here (similar free associations fill thousands of pages). Hopefully it is enough to convince you that Fomenko makes “The Bible Code” look as well argued as Bourbaki.
Fomenko has quite a following in Russia, with his books widely available and debated there (I counted 65 books co-authored by Fomenko currently in print, not including his textbooks on topology and differential equations). It is not truly surprising, given that Russia is known as a country with an unpredictable past (Orwell’s “He who controls the present controls the past” rings as true today as ever), and its national pride has been badly hurt by the post-Cold War world order. In such an environment, Fomenko’s version of history as a worldwide forgery that ascribes to other nations the events of the Byzantine and Russian past is simultaneously believable and appealing.
Promoting these views from the pages of the Notices of the AMS is a different matter. Most troubling is that this is not the first time in recent memory that we have had a similar conversation. The last time the Notices got a lot of coverage in CS blogs, it was in reaction to Neal Koblitz’s article alleging that modern cryptography is a fraud wrapped in bad mathematics.
Mathematicians have been known (as have men and women of other professions) to occasionally adopt extreme or radical positions. The most outrageous examples include Theodore Kaczynski (the Unabomber) and Serge Lang, who was an outspoken AIDS denialist. By themselves, mathematical credentials are no guarantee of inerrant judgment in matters remote from one’s area of expertise. Nothing I’ve seen suggests that mathematicians are particularly prone to being taken in by conspiracy theories, but neither are they immune to them.
I do find it disturbing that the Notices has offered its pages to views antithetical to mathematics (Neal Koblitz lamented the use of proofs by cryptographers) and to the promotion of phony scholarship. The trajectory is downhill: the article of Neal Koblitz (and a follow-up, joint with Alfred Menezes) merited a serious response, while Fomenko and Nosovsky’s writings are studied as expressions of the nation’s psyche. In a departure from the standards of scientific publications, controversial viewpoints are not accompanied by rebuttals, other than by much-delayed and necessarily curt letters to the editor. (The only source critical of Fomenko that is acknowledged by the Notices article appears in the references but is never cited in the text.)
The most charitable explanation I could fathom was that the article was an elaborate April Fools’ joke (even though the issue became available weeks before April 1st). If this is so, consider me punk’d. Otherwise, it is the editorial board and the readership of the Notices that got duped.
On intellectual passion and its unfortunate confusion with sexual passion (and how it may relate to issues of gender)
The following is a post by Oded Goldreich which I found very interesting. It is based on a brave and important Hebrew post/essay, and I’m grateful to Oded for bringing it to my attention, translating parts of it, and allowing me to post it here as well. I think that this is exactly the kind of discussion our community should have about research life (and I have thus liberally tagged it as part of our research-life stories project).
The following text is based on a post, in Hebrew, that I saw in an Internet forum on gender relations in Israeli academia. The general context of the original text, which deserves to be called an essay, is that of the underrepresentation of females in academia, especially in certain disciplines. The specific text (hereafter the essay) referred to a mostly unnoticed aspect of this underrepresentation.
The author, an anonymous female graduate student in the Humanities, starts her essay with four short anecdotes that illustrate the analysis presented later. She then recalls the two standard arguments made in favor of affirmative action: (1) compensating the candidates themselves for the social obstacles that hindered their earlier development, and (2) serving the next generation by providing adequate role models. At this point, she offers a wise and courageous insight, one that had escaped me before (although I have thought about these issues a lot). She suggests “another reason, a fairly complex one, which hinders the participation of females in academia.” Following is her analysis (slightly revised and freely translated by me):
For a person seeking wisdom (and indeed loving it), the discovery and study of ideas are strong emotional experiences: They reveal a strong passion, an intellectual passion. This passion is characteristic of great researchers, those deeply devoted to the pursuit of wisdom. A strong emotional tie is created among those who share the pursuit, among those who share the intellectual passion for similar questions. This passion is present in their interactions, in which progress is achieved by sharing and confronting ideas. But this intellectual passion presupposes an asexual sphere, where people can be passionate in their interactions and be understood as intellectually rather than sexually passionate.

An academic world consisting mainly of males is a world in which this intellectual passion is not as freely experienced by females (to say the least), because for them this asexual sphere does not exist (i.e., their passion is always understood as sexual). Thus, intellectual passion is always problematic for females, whereas it is hardly so for males. For females but not for males, a big question always exists (i.e., questioning the nature of their passion, asking whether it is intellectual or sexual). Consequently, the intellectual passion of females is restrained, which results in pushing them to more secure confines, which are more mediocre. The process works at several levels.
- The social level (“what will they say?”): Whenever I express passion or excitement in an intellectual interaction, external parties are likely to think that my passion/excitement is sexual. I am interested in the intellectual aspect, but when they see a male as the target of my intellectual passion, they use the social equation female + male + passion = sexual situation. Even when the passion/excitement is linked to intellectual content, it stands in danger of being interpreted as a disguise for the sexual. This social prejudice is applied to all females (regardless of whether they are straight, bisexual or lesbian), but it is never applied to males (regardless of whether they are straight, bisexual or gay).
- The first bilateral level (“what will he think?”): I do not want our relationship to become sexual; I want to keep it intellectual, because this is what I’m interested in. But what will he think of my behavior? If I show excitement or passion, will he not misinterpret them as sexual? Therefore, if I want to keep things intellectual, I had better restrain myself and be less passionate. I will lose in the intensity of our interaction, but I may keep away from the danger of losing it all due to misinterpretation. This danger of misinterpretation is relevant to all females (except maybe proclaimed lesbians), and is irrelevant to males (except maybe proclaimed gays).
- The second bilateral level (“what will he think that I may think?”): But he is also aware of all of this. So he may also feel threatened if I am too passionate, because he may fear that responding positively would be misinterpreted by me as a demonstration of sexual interest. So he thinks that he had better not be intellectually passionate with me, and so I lose again (since this means a less intense interaction). He loses less, since there are many alternatives (males) around with whom he can feel “safe”. For me there are few “safe” alternatives (i.e., fewer females).
- The personal level (“what do I actually think?”): The above refers to cases where there is absolutely no sexual interest on my side, and I only fear being perceived as having such an interest. But what if things are less clear? What if I am really sexually attracted to him? Or what if I am just confused about it, which is possible in light of the confusion between intellectual and sexual passion? Either way, this would be very confusing for me, and this confusion will have a cost (i.e., hinder my intellectual performance).
In principle (or “in theory”), all these problems may arise for males as well, but in the reality of an academic world that consists mainly of males, these problems occur much more often and much more intensely for females. So an academic world with a less disproportionate gender representation would be less problematic for females, though indeed more problematic for males. Needless to say, the latter “sacrifice” (like giving up any other privilege) is fair to ask for and to expect.
Of course, an ideal solution would be a radical revolution in society: getting rid of the prejudices about gender roles, and no longer labeling situations as sexual or not according to the gender of the participants.
My reproduction of parts of the essay comes to an end here. Originally I thought of stopping right here, because I found the argument clear and requiring no interpretation. Surely, any interested reader will have her/his own thoughts and will draw his/her own conclusions. But a friend of mine thought it would be nice if I ended with some of my own thoughts.
I have been aware of the emotional dimension of intellectual activities (and in particular, of the passions involved in them) ever since I can remember. It was also clear to me that a “resource sharing” is taking place here: a sharing of emotional resources between the intellectual and the personal/sexual. I was also aware of the classical Greek philosophical traditions and of the modern psychological and social developmental theories that view the sexual and personal as a “corridor” towards the intellectual and the abstract. What I failed to see, until I read the foregoing essay, is that this “resource sharing” phenomenon may also cause problems in some social realities (i.e., ours).
I guess my blindness toward these problems is related to my experiences as a male in our social setting (at large), which offers different experiences to males and females. One advantage of human society is that one can also learn from what others see, and even come to understand what others experience.
[Oded Goldreich, April 2013]