
Chinese Telegraph Code (CTC)


Michael Rank has an interesting article on Scribd entitled "Chinese telegram, 1978" (5/22/2015).

It's about a 1978 telegram that he bought on eBay.  Here's a photograph:

A preliminary note before providing the transcription and translation of the text: Chinese telegrams are sent and received purely as four-digit codes. The sender has to convert the character text to numbers, and the recipient has to convert the numbers back to characters in order to be able to read the message. I will describe the process in greater detail below.

The characters in blue on the telegram were written by the person who decoded the numbers.

Note that they consistently wrote chǎng 厂 / 廠 as what looks like a "T".

Here's what the telegram says (it's a typical business message; personal messages tended to be much shorter):

Yíshuǐ zhì gé chǎng gōngxiāo kē
wǒ chǎng xiàn yǒu ruǎnpí báiyóu èr dūn
duō zhǔnbèi fāchē yùn guì chǎng jīn lái
diàn xiàng guì chǎng qiúyuán zhū dài gé shǒu-
tào gé guì chǎng shìfǒu cún yǒu huò
yǐbiàn wǒ chǎng bèi kuǎn qǐng sù diàngào

沂水制革厂供销科
我厂现有软皮白油弍吨
多准备发车运贵厂今来
电向贵厂求援猪带革手
套革贵厂是否存有货
以便我厂备款请速电告

Notes: 字 (third character from the right in the next-to-last line) is an error for 存 (字 is CTC 1316 while 存 is 1317). And ruǎnpí báiyóu 软皮白油 is a kind of softening oil for leather.

Michal L. Wright translates the telegram as follows (with some very minor changes):

Yishui Leather Factory Sales and Marketing Division

Our factory currently has over two tonnes of leather softening white oil just about ready to be sent to your factory by truck.
Today we are sending (this) telegram to your factory seeking help (regarding) pig(skin) belt leather and glove leather.
Does your factory have the goods? In order that my factory may prepare funds, please send a telegram right away to inform us.

The Chinese telegraph code consists of 10,000 four-digit numbers from 0000 to 9999. Some telegraph operators could memorize hundreds and, in exceptional cases, a thousand or so of the numbers, but all the others had to be looked up, and that took a lot of time. It is relatively easy to look up the numbers at the receiving end, but at the sending end it requires analysis of the shape of the characters, because they are arranged according to radical and residual strokes, by the four-corner system (N.B.: this is a totally different four-digit identifier from that of the telegraph code; I learned it, but exceedingly few non-professionals ever did), or by some other shape-based system.
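In computational terms, the two directions of the process are just a pair of lookup tables. Here is a minimal sketch in Rust (the pairs 字 = 1316 and 存 = 1317 come from the telegram above; a real code book covers the full 0000–9999 range):

use std::collections::HashMap;

fn main() {
    // A tiny excerpt of the code table: character -> four-digit code.
    let mut encode = HashMap::new();
    encode.insert('字', 1316u16);
    encode.insert('存', 1317u16);

    // The receiving operator works from the inverse mapping.
    let decode: HashMap<u16, char> =
        encode.iter().map(|(&c, &n)| (n, c)).collect();

    println!("存 -> {:04}", encode[&'存']); // sending: character to number
    println!("1317 -> {}", decode[&1317]); // receiving: number to character
}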

I should mention that, in the century and more since the Chinese telegraphic code came into use (the first iterations were created by a Danish astronomer and a French customs officer in the early 1870s), there have been many different refinements and revisions, with a variety of arrangements and orderings.

When I first went to mainland China in 1981, every post office had a telegraphy section.  I was utterly fascinated by how the operators worked, and I would spend hours observing them.  I was astonished by how often they had to look up characters in their dog-eared manuals, and how frequently they had difficulty because they were unable to analyze the shape of the character correctly.  Sometimes it would take several minutes or more to find a refractory character, and they often had to huddle by asking someone else for help.  Since many of the smaller post offices only had a single operator on duty at a time, this meant that they would be stymied until someone who could look up the number of the character joined them.

After several years of watching telegraph operators in China, I never ceased to marvel at how monumentally inefficient a system it was.  My old colleagues in Chinese language and script reform told me several times that, when Premier Zhou Enlai travelled, his biggest expense was telegraphy.  I don't know if that is true or if it was an exaggeration, but I heard it from men like Zhou Youguang and Yin Binyong who were reliable sources of information about such matters pertaining to Chinese writing.

About twenty-five years ago, I was approached by international banking officials and law enforcement agencies who were forced to rely on the telegraph code to identify the characters of Chinese personal names.  Individuals scattered across the globe from different topolectal backgrounds would romanize their names in the wildest possible assortment of completely nonstandard, ad hoc ways, but those in banking and law enforcement who were charged with an exact identification of the individuals with whom they were dealing told me they needed to know which characters were used to write the names, regardless of the romanizations.  They asked me if there were any other alternatives to this method of using the telegraph code, because it was obviously giving them a heap of trouble.  I advised them to hire people who were proficient in pinyin and arrange the telegraph code according to the sounds of the characters in pinyin because that would be the fastest and easiest way for them to look up the numbers.  I don't know if they followed my advice or not.

Wm. C. Hannas, in Asia's Orthographic Dilemma, p. 313 recounts:

I once knew a man who because of his unusual profession had learned enough Standard Telegraphic Code to speak simple Chinese sentences in numbers.  If you asked him, "Nǐ hǎo ma?" (how are you?), he would reply, "2053 1771 1170" or "0008 1170," depending on how he felt.

Similarly, I knew a distinguished Buddhist scholar, Edward Conze, whose language specialty was Pali, who would regularly refer to Chinese characters by their Mathews' Chinese-English Dictionary number.  Conze probably had mastered several hundred characters in this fashion, and he always had a twinkle in his eye when he rattled off the numbers.  I also knew a couple of Sogdian Buddhist specialists who employed the same method for referring to Chinese characters.  I suspect that, among serious Buddhist scholars who didn't know Chinese, this was a common method for referring to specific characters when Mathews' dictionary was pretty much the universal standard for Anglophone sinology.  Now that pinyin is widespread and it is easy to use it to look up characters in various electronic devices, I don't think anyone is memorizing Mathews' numbers any longer.

"The future of Chinese language learning is now" (4/5/14)

Chinese characters aren't as scary as they used to be before pinyin and computers, but they're still "damn hard", in the words of a well-known sage of Chinese language and script studies.


Rust for Python Programmers


Now that Rust 1.0 is out and quite stable, I thought it might be interesting to write an introduction to Rust for Python programmers. This guide goes over the basics of the language and compares different constructs and how they behave.

Language-wise, Rust is a completely different beast compared to Python. Not just because one is a compiled language and the other is interpreted, but also because the principles that go into them are completely different. However, as different as the languages might be at the core, they share a lot in regards to how APIs should work. As a Python programmer, a lot of concepts should feel very familiar.

Syntax

The first difference you will notice as a Python programmer is the syntax. Unlike Python, Rust is a language with lots of curly braces. There is a good reason for this: Rust has anonymous functions, closures, and lots of chaining that Python cannot support well, and these features are much easier to understand and write in a non-indentation-based language. Let's look at the same example in both languages.

First a Python example of printing “Hello World” three times:

def main():
    for count in range(3):
        print "{}. Hello World!".format(count)

And here is the same in Rust:

fn main() {
    for count in 0..3 {
        println!("{}. Hello World!", count);
    }
}

As you can see, quite similar. def becomes fn and colons become braces. The other big syntactic difference is that Rust requires type information for function parameters, which is not something you do in Python. In Python 3, type annotations are available, and they share the same syntax as Rust's.

One new concept compared to Python is these functions with exclamation marks at the end. Those are macros. A macro expands at compile time into something else. This is used for string formatting and printing, for instance, because this way the compiler can enforce correct format strings at compile time. You cannot accidentally mismatch the types or the number of arguments passed to a print call.

Traits vs Protocols

The most familiar yet different feature is object behavior. In Python a class can opt into certain behavior by implementing special methods. This is usually called “conforming to a protocol”. For instance to make an object iterable it implements the __iter__ method that returns an iterator. These methods must be implemented in the class itself and cannot really be changed afterwards (ignoring monkeypatching).

In Rust the concept is quite similar, but instead of special methods it uses traits. Traits accomplish the same goal, but the implementation is locally scoped, and you can implement additional traits for a type from another module. For instance if you want to give integers a special behavior, you can do that without having to change anything about the integer type.
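For instance, here is a minimal sketch of giving integers such special behavior (the Doubled trait is invented for illustration):

trait Doubled {
    fn doubled(&self) -> Self;
}

// We can implement our own trait for the built-in integer type
// without touching the type itself.
impl Doubled for i32 {
    fn doubled(&self) -> i32 {
        *self * 2
    }
}

fn main() {
    let x: i32 = 21;
    println!("{}", x.doubled());
}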

To compare this concept let's see how to implement a type that can be added to itself. First in Python:

class MyType(object):

    def __init__(self, value):
        self.value = value

    def __add__(self, other):
        if not isinstance(other, MyType):
            return NotImplemented
        return self.__class__(self.value + other.value)

And here is the same in Rust:

use std::ops::Add;

struct MyType {
    value: i32,
}

impl MyType {
    fn new(value: i32) -> MyType {
        MyType { value: value }
    }
}

impl Add for MyType {
    type Output = MyType;

    fn add(self, other: MyType) -> MyType {
        MyType { value: self.value + other.value }
    }
}

Here the Rust example looks a bit longer, but it also comes with automatic type handling which the Python example does not do. The first thing you notice is that in Python the methods live on the class, whereas in Rust the data and the operations live independently. The struct defines the data layout, the impl MyType block defines the methods the type itself has, and impl Add for MyType implements the Add trait for that type. For the Add implementation we also need to define the result type of our add operation, but we avoid the extra complexity of having to check the type at runtime like we have to do in Python.

Another difference is that in Rust the constructor is explicit whereas in Python it's quite magical. When you create an instance of an object in Python it will eventually call __init__ to initialize the object, whereas in Rust you just define a static method (by convention called new) which allocates and constructs the object.

Error Handling

Error handling in Python and Rust is completely different. Whereas in Python errors are thrown as exceptions, errors in Rust are passed back in the return value. This might sound strange at first but it's actually a very nice concept. It's pretty clear from looking at a function what error it returns.

This works because a function in Rust can return a Result. A Result is a parametrized type with two sides: a success side and a failure side. For instance Result<i32, MyError> means that the function either returns a 32-bit integer in the success case or MyError if an error happens. What happens if you need to return more than one kind of error? This is where things differ from a philosophical point of view.
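Here is a minimal sketch of such a function and of handling both sides of the Result (MyError is a hypothetical error type):

enum MyError {
    NotPositive,
}

fn double_positive(n: i32) -> Result<i32, MyError> {
    if n > 0 {
        Ok(n * 2) // success side
    } else {
        Err(MyError::NotPositive) // failure side
    }
}

fn main() {
    match double_positive(5) {
        Ok(v) => println!("got {}", v),
        Err(MyError::NotPositive) => println!("input was not positive"),
    }
}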

In Python a function can fail with any error and there is nothing you can do about that. If you have ever used the Python “requests” library, caught all request exceptions, and then got annoyed that SSL errors were not caught by this, you will understand the problem. There is very little you can do if a library does not document what errors it can raise.

In Rust the situation is very different. A function signature includes the error. If you need to return two kinds of errors, the way to do this is to make a custom error type and to convert internal errors into it. For instance if you have an HTTP library that internally might fail with Unicode errors, IO errors, SSL errors, what have you, you need to convert these errors into one error type specific to your library, and users then only need to deal with that. Rust provides error chaining so that such an error can still point back to the original error that created it, if you need that.

You can also at any point use the Box<Error> type which any error converts into, if you are too lazy to make your own custom error type.

Where errors propagate invisibly in Python, errors propagate visibly in Rust. What this means is that you can see whenever a function returns an error, even if you choose not to handle it there. This is enabled by the try! macro, as this example demonstrates:

use std::fs::File;
use std::io::{self, Read};
use std::path::Path;

fn read_file(path: &Path) -> Result<String, io::Error> {
    let mut f = try!(File::open(path));
    let mut rv = String::new();
    try!(f.read_to_string(&mut rv));
    Ok(rv)
}

Both File::open and read_to_string can fail with an IO error. The try! macro will propagate the error upwards and cause an early return from the function and unpack the success side. When returning the result it needs to be wrapped in either Ok to indicate success or Err to indicate failure.

The try! macro invokes the From trait to allow conversion of errors. For instance you could change the return value from io::Error to MyError and implement a conversion from io::Error to MyError by implementing the From trait and it would be automatically invoked there.
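Here is a minimal sketch of that conversion, assuming a hypothetical MyError type that wraps io::Error:

use std::fs::File;
use std::io::{self, Read};
use std::path::Path;

#[derive(Debug)]
enum MyError {
    Io(io::Error),
}

impl From<io::Error> for MyError {
    fn from(err: io::Error) -> MyError {
        MyError::Io(err)
    }
}

// Same function as before, but try! now converts the io::Error
// into MyError automatically through the From impl.
fn read_file(path: &Path) -> Result<String, MyError> {
    let mut f = try!(File::open(path));
    let mut rv = String::new();
    try!(f.read_to_string(&mut rv));
    Ok(rv)
}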

Alternatively you can change the return value from io::Error to Box<Error> and any error can be returned. This way however you can only reason about errors at runtime and no longer at compile time.

If you don't want to handle an error and abort the execution instead, you can unwrap() a result. That way you get the success value and if the result was an error, then the program aborts.

Mutability and Ownership

The part where Rust and Python become completely different languages is the concept of mutability and ownership. Python is a garbage collected language and as a result pretty much everything can happen with the objects at runtime. You can freely pass them around and it will “just work”. Obviously you can still generate memory leaks but most problems will be resolved for you automatically at runtime.

In Rust however there is no garbage collector, yet the memory management still works automatically. This is enabled by a concept known as ownership tracking. All things you can create are owned by another thing. If you want to compare this to Python, you could imagine that all objects in Python are owned by the interpreter. In Rust ownership is much more local. Function calls can have a list of objects, in which case the objects are owned by the list and the list is owned by the function's scope.

More complex ownership scenarios can be expressed through lifetime annotations and function signatures. For instance in the case of the Add implementation in the previous example, the receiver was called self like in Python. However, unlike in Python, the value is “moved” into the function, whereas in Python the method is invoked with a mutable reference. What this means is that in Python you could do something like this:

leaks = []

class MyType(object):
    def __add__(self, other):
        leaks.append(self)
        return self

a = MyType() + MyType()

Whenever you add an instance of MyType to another object you also leak out self to a global list. That means that if you run the above example you have two references to the first instance of MyType: one is in leaks, the other is in a. In Rust this is impossible. There can only ever be one owner. If you appended self to leaks the compiler would “move” the value there, and you could not return it from the function because it had already been moved elsewhere. You would have to move it back first to return it (for instance by removing it from the list again).

So what do you do if you need to have two references to an object? You can borrow the value. You can have an unlimited number of immutable borrows but you can only ever have one mutable borrow (and only if no immutable borrows were given out).

Functions that operate on immutable borrows are marked as &self and functions that need a mutable borrow are marked as &mut self. You can only loan out references if you are the owner. If you want to move the value out of the function (for instance by returning it) you cannot have any outstanding loans and you cannot loan out values after having moved ownership away from yourself.
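Here is a minimal sketch of the two kinds of methods and the borrowing rules (Counter is a made-up type):

struct Counter {
    count: i32,
}

impl Counter {
    // Immutable borrow: can be called through shared references.
    fn get(&self) -> i32 {
        self.count
    }

    // Mutable borrow: requires exclusive access.
    fn incr(&mut self) {
        self.count += 1;
    }
}

fn main() {
    let mut c = Counter { count: 0 };
    c.incr(); // one mutable borrow, no other loans outstanding

    let a = &c;
    let b = &c; // any number of immutable borrows may coexist
    println!("{} {}", a.get(), b.get());

    // c.incr(); // would not compile: a and b still hold loans
}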

This is a big change in how you think about programs but you will get used to it.

Runtime Borrows and Mutable Owners

So far pretty much all of this ownership tracking was verified at compile time. But what if you cannot verify ownership at compile time? You have multiple options at your disposal. One example is that you can use a mutex. A mutex allows you to guarantee at runtime that only one mutable borrow of an object exists, while the mutex itself owns the object. That way you can write code where multiple threads access the same object, but only ever one thread at a time.

This also means that you cannot accidentally forget to use a mutex and cause a data race: it would not compile.

But what if you want to program like in Python and you can't find an owner for memory? In that case you can put an object into a reference-counted wrapper and loan it out at runtime this way. That way you get very close to Python behavior, except that you can create cycles. Python breaks up cycles in its garbage collector; Rust has no equivalent.
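For single-threaded code, the standard library's reference-counted wrapper is Rc, usually combined with RefCell for runtime-checked mutability; a minimal sketch:

use std::cell::RefCell;
use std::rc::Rc;

fn main() {
    // Two handles that share ownership of the same vector.
    let shared = Rc::new(RefCell::new(vec![1, 2, 3]));
    let other = shared.clone();

    // The borrow is checked at runtime here, not at compile time.
    other.borrow_mut().push(4);
    println!("{:?}", *shared.borrow());
}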

To show this in a better way, let's go with a complex Python example and the Rust equivalent:

from threading import Lock, Thread

def fib(num):
    if num < 2:
        return 1
    return fib(num - 2) + fib(num - 1)

def thread_prog(mutex, results, i):
    rv = fib(i)
    with mutex:
        results[i] = rv

def main():
    mutex = Lock()
    results = {}
    threads = []
    for i in xrange(35):
        thread = Thread(target=thread_prog, args=(mutex, results, i))
        threads.append(thread)
        thread.start()
    for thread in threads:
        thread.join()
    for i, rv in sorted(results.items()):
        print "fib({}) = {}".format(i, rv)

So what we do here is spawn 35 threads and make them compute, in a very inefficient manner, increasingly large Fibonacci numbers. Then we join the threads and print the sorted results. One thing you immediately notice here is that there is no intrinsic relationship between the mutex (the lock) and the results dictionary.

Here is the Rust example:

use std::sync::{Arc, Mutex};
use std::collections::BTreeMap;
use std::thread;

fn fib(num: u64) -> u64 {
    if num < 2 { 1 } else { fib(num - 2) + fib(num - 1) }
}

fn main() {
    let locked_results = Arc::new(Mutex::new(BTreeMap::new()));

    let threads: Vec<_> = (0..35).map(|i| {
        let locked_results = locked_results.clone();
        thread::spawn(move || {
            let rv = fib(i);
            locked_results.lock().unwrap().insert(i, rv);
        })
    }).collect();

    for thread in threads {
        thread.join().unwrap();
    }

    for (i, rv) in locked_results.lock().unwrap().iter() {
        println!("fib({}) = {}", i, rv);
    }
}

The big differences to the Python version here are that we use a B-tree map instead of a hash table, and that we put that map into an Arc'ed mutex. What's that? First of all we use a B-tree map because it keeps its keys sorted automatically, which is what we want here. Then we put it into a mutex so that we can lock it at runtime. Relationship established. Lastly we put it into an Arc. An Arc reference counts what it encloses, in this case the mutex. This means that we can make sure the mutex gets deleted only after the last thread finished running. Neat.

So here is how the code works: we count to 35 like in Python, and for each of those numbers we run a local function. Unlike in Python we can use a closure here. Then we make a copy of the Arc for the local thread. This means that each thread sees its own version of the Arc (internally this will increment the refcount, and decrement it automatically when the thread dies). Then we spawn the thread with that closure; the move keyword tells Rust to move the closure's environment into the thread. Then we run the Fibonacci function in each thread. When we lock the mutex we get back a result which we unwrap to get the map, and then we insert into it. Ignore the unwrap for a moment; that's just how you convert explicit results into panics. The point is that you can only ever get at the results map while you hold the mutex's lock. You cannot accidentally forget to lock!

Then we collect all threads into a vector. Lastly we iterate over all threads, join them and then print the results.

Two things of note here: there are very few visible types. Sure, there is the Arc and the Fibonacci function takes unsigned 64-bit integers, but other than that, no types are visible. We can also use a B-tree map here instead of a hash table because Rust provides us with such a type.

Iteration works exactly the same as in Python. The only difference is that in Rust, in this case, we need to acquire the mutex because the compiler cannot know that the threads have finished running and that the mutex is no longer necessary. However there is an API that does not require this; it's just not stable yet in Rust 1.0.

Performance-wise, pretty much what you would expect happens. (This example is intentionally terrible, just to show how the threading works.)

Unicode

My favorite topic: Unicode :) This is where Rust and Python differ quite a bit. Python (both 2 and 3) has a very similar Unicode model, which is to map Unicode data against arrays of characters. In Rust, however, Unicode strings are always stored as UTF-8. I have covered in the past why this is a much better solution than what Python or C# are doing (see also UCS vs UTF-8 as Internal String Encoding). What's very interesting about Rust is how it deals with the ugly reality of our encoding world.

The first thing is that Rust is perfectly aware that operating system APIs (both in Windows Unicode land and in Linux non-Unicode land) are pretty terrible. Unlike Python, however, it does not try to force Unicode into these areas; instead it has different string types that can (within reason) convert between each other reasonably cheaply. This works very well in practice and makes string operations very fast.

For the vast majority of programs there is no encoding/decoding necessary: they accept UTF-8, only need to run a cheap validation check, process UTF-8 strings, and don't need to encode on the way out. If they need to integrate with Windows Unicode APIs, they internally use the WTF-8 encoding, which can quite cheaply convert to UCS-2-like UTF-16 and back.

At any point you can convert between Unicode and bytes and munge the bytes as you need. Then you can later run a validation step and ensure that everything went as intended. This makes writing protocols both really fast and really convenient. Compare this to the constant encoding and decoding you have to deal with in Python just to support O(1) string indexing.
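A minimal sketch of that munge-bytes-then-validate pattern:

fn main() {
    // Munge raw bytes first...
    let mut bytes = "föo".as_bytes().to_vec();
    bytes.extend(b" bar".iter().cloned());

    // ...then validate once at the end. from_utf8 is a validation
    // step, not a conversion: on success it just takes ownership
    // of the bytes.
    let s = String::from_utf8(bytes).unwrap();
    println!("{}", s);
}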

Aside from a really good storage model for Unicode, Rust also has lots of APIs for dealing with Unicode, either as part of the language or on the excellent crates.io index. These include case folding, categorization, Unicode regular expressions, Unicode normalization, well-conforming URI/IRI/URL APIs, segmentation, and simple things such as name mappings.

What's the downside? You can't do "föo"[1] and expect 'ö' to come back. But that's not a good idea anyways.

As an example of how interaction with the OS works, here is an example application that opens a file in the current working directory and prints the contents and the filename:

use std::env;
use std::error::Error;
use std::fs;

fn example() -> Result<(), Box<Error>> {
    let here = try!(env::current_dir());
    println!("Contents in: {}", here.display());
    for entry in try!(fs::read_dir(&here)) {
        let path = try!(entry).path();
        let md = try!(fs::metadata(&path));
        println!("  {} ({} bytes)", path.display(), md.len());
    }
    Ok(())
}

fn main() {
    example().unwrap();
}

All the IO operations use these Path objects that were also shown before, which encapsulate the operating system's internal path properly. They might be bytes, Unicode, or whatever else the operating system uses, but they can be formatted properly by calling .display() on them, which returns an object that can format itself into a string. This is convenient because it means you never accidentally leak out bad strings like we do in Python 3, for instance. There is a clear separation of concerns.

Distribution and Libraries

Rust comes with a combination of virtualenv+pip+setuptools called “cargo”. Well, not entirely virtualenv, as it can only work with one version of Rust by default, but other than that it works as you would expect. Even better than in Python land, you can depend on different versions of libraries, and on git repositories as well as the crates.io index. If you get Rust from the website, it comes with the cargo command that does everything you would expect.
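As a rough illustration, a Cargo.toml manifest might look like this (package name and dependencies are placeholders):

[package]
name = "example"
version = "0.1.0"
authors = ["Your Name <you@example.com>"]

[dependencies]
regex = "0.1"
other-lib = { git = "https://example.com/other-lib.git" }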

Rust as a Python Replacement?

I don't think there is a direct relationship between Python and Rust. Python shines in scientific computing, for instance, and I don't think that is something Rust can tackle in the near future, just because of how much work that would be. Likewise there really is no point in writing shell scripts in Rust when you can do that in Python. That being said, I think that just as many Python programmers have started to pick up Go, even more will start to look at Rust for some areas where they previously used Python.

It's a very powerful language, standing on strong foundations, under a very liberal license, with a very friendly community, and driven by a democratic approach to language evolution.

Because Rust requires very little runtime support it's very easy to use via ctypes and CFFI with Python. I could very well envision a future where there is a Python package that would allow the distribution of a binary module written in Rust and callable from Python without any extra work from the developer needed.
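As a minimal sketch of what the Rust side of such a binding might look like (compiled as a dylib; function and file names here are hypothetical):

// Expose a function over the C ABI; build with
// crate-type = ["dylib"] in Cargo.toml.
#[no_mangle]
pub extern "C" fn add(a: i32, b: i32) -> i32 {
    a + b
}

// Python side, via ctypes:
//   import ctypes
//   lib = ctypes.CDLL("./libexample.so")
//   print lib.add(2, 3)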

pfctdayelise
1 day ago
Woot
Melbourne, Australia

They’re Here, They’re Genderqueer, Get Used to Gender Neutral Pronouns



Chi Luu

Chi Luu is a peripatetic linguist who speaks Australian English and studies dead languages. Every two weeks, she’ll uncover curious stories about language from around the globe for Lingua Obscura.

Transgender and genderqueer issues have been the focus of mainstream media attention of late. Most recently, Bruce Jenner, former Olympian and reality TV personality on Keeping Up With the Kardashians, publicly announced an identity as a trans woman. This story has pushed debate and conversation on gender identity to the forefront of public consciousness.

One linguistic consideration of gender identity is the choice of pronouns to refer to a person who may be transitioning, or who presents an undetermined or neutral gender. This is sometimes referred to by the umbrella terms genderqueer or gender-nonconforming. The Sylvia Rivera Law Project defines the term gender-nonconforming to refer to “people who do not follow other people’s ideas or stereotypes about how they should look or act based on the female or male sex they were assigned at birth.”

With the ongoing debate on gender issues reaching the mainstream, it’s now more obvious than ever that a gender neutral approach to pronouns is needed. Is there a linguistically natural way to be pronominally inclusive of all groups in English? There have been a myriad of different proposals for invented pronouns to cover this gender neutral need.

One of the earlier pronominal neologisms, according to Dennis E. Baron in The Epicene Pronoun: The Word That Failed, is thon (possessive thons), formed by a blend of that one in 1884, which was well-known enough to appear in the Funk and Wagnalls Standard Dictionary by 1898. But you probably wouldn’t hear it bandied about very often these days. Like thon, other invented pronouns suggested over the years have been limited in their usage. This might partly have been because though there was a definite need for an epicene pronoun, with the generic he becoming increasingly problematic, there was no groundswell of urgency from a gender identity perspective until recently. However, we’ve seen before how playful internet neologisms can be widely used and spread by speakers, so why the skepticism over a few brand new pronouns? To put it simply, it’s rare and difficult for new items to enter the closed class of function words in English—the words that exist for syntactic functions, such as pronouns, as opposed to the open class of content words that convey some semantic meaning, such as nouns.


This brings up the thorny question of which gender-obscured pronouns are likely to be widely adopted, particularly in languages like English that have historically treated masculine pronouns as default or generic. What’s the solution? Could newly-coined gender neutral pronouns ever really flourish in the language, regardless of whether their use is officially mandated or not? Sweden recently attempted the feat of officially pushing for the gender neutral pronoun hen in Swedish, with some success.

Complex issues of gender identity aside, many armchair language prescriptivists may be thinking, will no one think of the pronouns? Will gender identity politics ruin pronouns forever?

For a multitude of reasons, having access to a set of gender neutral pronouns in a language can be useful, not just for gender expression and identity but also to avoid sexist or non-inclusive language. In fact, the majority of the world’s languages already use pronouns that don’t specify gender and seem to get on fine. In certain cases specifying a gender is unnecessary and perhaps even distracting. Take note of the following sentences from Baron:

  1. Everyone loves their mother.
  2. Everyone loves his or her mother.
  3. Everyone loves his mother.

In this day and age, is sentence 3, the example with the so-called generic he, really less odd to read than the unwieldy version in sentence 2? What if it was “everyone loves xyr mother” as the Vancouver School Board might have it?

It’s certainly useful to have a gender neutral pronoun, but it’s even more useful if speakers of the language actually make use of these pronouns. Studies on French, Arabic, and Hebrew speakers have shown how speakers make use of existing linguistic approaches to convey gender neutral cases. Amalia Sa’ar’s 2007 study discusses how women in Israel are subconsciously using masculine pronouns to refer to themselves, regarding them as gender neutral: “In Hebrew and Arabic, for example, it is very common to hear expressions as intimate and feminine as ‘when you♂ become♂ a mother’ (Hebrew Kesheʿata nihya ʿima) … (the symbols ♀ and ♂ are used to designate feminine and masculine grammatical gender, respectively) uttered in masculine form by women.”

On the surface, this sounds like how the generic he might have once been used in English. While there is a robust history of the generic he being used in English, it’s debatable whether it has ever been used in a similar vein to refer to only women. Has a sentence such as “Everyone is nervous when he becomes a mother for the first time” ever really been acceptable as a generic?

Then there is always the elephant in the room, as shown by sentence 1. There’s the widespread and organic use of singular they as a fairly serviceable gender neutral pronoun, which nevertheless seems to encounter resistance from grammatically concerned speakers, despite a long history in literature. Before the generic he was even a glint in an 18th century grammarian’s eye, there was a generic, singular they, famously used by those literary hacks, Geoffrey Chaucer, William Shakespeare and Jane Austen, among others. Its common usage in literature shows that it isn’t actually as ungrammatical as it’s been accused of being.

According to Baron, “the absence in English of a third-person, common-gender pronoun became apparent when grammarians in the eighteenth century began objecting to the apparently widespread use of they, their, and them with singular, sex-indefinite antecedents on the grounds that it violated number concord.” (Funnily enough this number agreement violation seems not to have been quite so confusing in a case like the pronoun you, which started out as a plural pronoun before also becoming a grammatically acceptable singular form). Furthermore, “generic he was actually given the force of law when, in 1850, the English Parliament passed ‘An Act for shortening the language used in acts of Parliament,’ which ordered ‘that in all acts words importing the masculine gender shall be deemed and taken to include females, and the singular to include the plural, and the plural the singular, unless the contrary as to gender and number is expressly provided’“. So even generic he, much like a newly coined pronoun, needed an official boost.


Regardless of whether some speakers find it awkward or ungrammatical for prescriptivist reasons, the fact is the singular they is being used more or less naturally as a generic, gender neutral pronoun as it has been in the past, according to Julie Foertsch and Morton Ann Gernsbacher’s 1997 study on the subject, which found that “reading-time experiments demonstrated that singular they is a cognitively efficient substitute for generic he or she, particularly when the antecedent is nonreferential.” This suggests that the use of singular they is already considered a default by English speakers and passes as generic without too much difficulty in comprehension.

They/them is being widely used by many genderqueer people as their preferred pronouns (sometimes even with a singular reflexive variant “themself”), as well as by speakers who simply don’t wish to specify gender for a variety of reasons. So, though it may not be a perfect solution for some, the singular they is here, it’s genderqueer, so get used to it.


pfctdayelise
3 days ago
Lingua Obscura is a newish linguistics blog!
Melbourne, Australia

What happens when you crack your knuckles?



Some people find the habit of cracking knuckles satisfying, while others hate it. Thanks to researchers, we now know what creates that familiar sound.

By cracking knuckles inside an MRI, researchers determined that the sound comes from the rapid development of an air cavity inside the joint fluid. Up until now, the belief was that the popping sound came from bursting air bubbles in the joint, but the sound actually begins before that occurs.

The motivation behind the new research was really to determine what health effects, if any, are associated with the practice. Knuckle cracking has been researched since at least the 1940s. Perhaps the simplest attempt to assess the health impacts comes from Dr. Donald Unger, who fastidiously cracked the knuckles of one hand (but not the other) for 50 years. When his decades of selective knuckle cracking did not result in arthritis, Dr. Unger concluded that knuckle cracking was harmless. This work earned him the most coveted gag prize in science, an Ig Nobel.

On the other hand, a 1989 case study published in the British Medical Journal indicates that cracking knuckles might not be completely benign. Joints from a habitual knuckle cracker showed some hardening and calcification of the finger ligaments. However, the authors do acknowledge that the man in question is not experiencing any distress in his fingers, and they also do not know for sure if the knuckle cracking is responsible for the joint changes they observed. The article does indicate that even in 1989 scientists were close to the source of the sound. They knew that bubbles in the joint were involved but were not quite correct about the mechanism.

Now we know for sure what’s causing the sound, but that’s it. There is still no strong evidence for or against health impacts of knuckle cracking. According to the new study, the energy released upon formation of these bubbles is theoretically sufficient to damage the joints, but there is still no evidence of damage actually occurring.

Dr. Unger’s results are similarly encouraging, but it is impossible to extrapolate every possible outcome from one man’s left hand. For now, the potential costs of joint popping remain unknown. Hopefully someone else will take a crack at the problem.


JSTOR Citations

“Hot-pink bras, cracked knuckles, and bar room brawls are winners at the Ig Nobel awards” Jeanne Lenzer
BMJ: British Medical Journal, Vol. 339, No. 7725 (10 October 2009), p. 829
Published by: BMJ

“Habitual Joint Cracking And Radiological Damage”
P. Watson, A. Hamilton and R. Mollan
BMJ: British Medical Journal, Vol. 299, No. 6715 (Dec. 23 – 30, 1989), p. 1566
Published by: BMJ



Paul Ford on the No Manifesto

the whole Chicago Review PDF is worth reading  

Redrawing Shake It Off

49 animation students each rotoscoped 52 out of 2,767 total frames  
pfctdayelise
4 days ago
I looooooooove rotoscoping!!
Melbourne, Australia

pberry
3 days ago
Creepy
Chico, CA