Urtzi Grau and Francesca Hughes

Cul-de-sac

2023

Given that architecture’s future is written in its schools, we contend that of the many predictive diagrams that Charles Jencks produced, it was his 1969 dissimilarity matrix of the most future-shaping architects of the day that possessed the keenest foresight.1 Not in terms of the architects selected, but in terms of the methodological paradigm it espoused: computationally enhanced prediction, currently impacting the futures of architectural education, and just about everything else too. Appropriating the format of the then-nascent numerical taxonomy – a technique able to process a nearly infinite number of characters, though only visible ones, in calculating the relations between a set of unclassified individuals – Jencks’s little-known closed square grid of 34 columns and rows assigned a numerical value to the dissimilarity or relational distance between pairs of architects from a list of 34.2 Unlike the phylogenies that numerical taxonomy matrices sought to replace, this diagram is not a timeline. In its denial of time’s passage, it fixes us firmly in Friedrich Kittler’s eternal present of computation.3 Its hermetic nature asserts its exhaustive combinatorial logics. Unlike the bifurcating precedent, no space is reserved for the equivalent of new branches yet to be discovered. Thus it simultaneously denies both any outside and indeterminacy – if a property cannot be counted, it does not count.4

But most importantly, for Jencks then and for us now, 1969 marked taxonomy’s turn from a forensic art of judgement to a predictive science; from who your parents may have been to who your offspring will be. As taxonomy signed up to the Cold War’s newly computerised future, in which the latest metaphor for the gene was the computer ‘program’, being able to predict was deemed more important than being able to see or visually understand relations.5 Jencks fast retreated from this graphic mode of analysis, evidently considering it a cul-de-sac in his thinking about architecture’s own cul-de-sacs, perhaps deeming it simply not cosmological enough and, likely, not metaphoric enough either. He later insisted: ‘We think with these diagrams (rivers or trees) – you can’t think outside of metaphor.’6 The numerical taxonomy matrix, with its sacrifice of the branching isomorphism for predictive prowess, paved the way for the future of its own offspring: spreadsheets, and the very different kind of thinking and relations to time they would install.7 While the timeline uses classification of the past in order to map the future (and vice versa), the spreadsheet erects an apparently timeless reality in which it has become, paradoxically, the contemporary instrument par excellence of prediction. The spreadsheet does not ‘paint’ or ‘suggest’ futures, but determines, in parsimonious fashion, the always already risk-stripped and thus optimised future. No risk or inefficiency escapes its relentless analysis, except of course its own.

This predictive chronograph updates Jencks’s dissimilarity matrix by substituting the architects that painted architecture’s future with 34 of the myriad academic managerial instruments/acronyms that currently determine the education of architects: 360 Review, AD, ARC, ARWU, CERIF, CRA, DP, EDI, E&I, EuroCRIS, ERA, FFE, FIPF, FoR, GS, H-Index, IVAR, JR, KPI, LMS, NWOL, OHS, ORCID, QA, QS, RE, REF, RA, RIMS, SMART, SFS, SLO, THE, and UKRI. Collectively they populate (how democratic of them…) the spreadsheets that increasingly govern all decisions in schools, from academic workload benchmarking and studio staff/student ratios to topics of research, design briefs, timetabling and lecture length. That their component parameters have next-to-no bearing on anything that really matters to architectural education does not deter them from predicting its future with Oedipal determinism: future student numbers, future academic performance, future research income, future curricular risk amelioration, future student satisfaction, future workforce profiling, future lucrative research income streams, and so on. Their collective blind spots (though they are hardly spots) miss the unmeasurable qualities of emergent pedagogical experiments, new forms of knowledge and its exchange, or important but revenue-evading research questions. As elsewhere, the aim of this metrical saturation of architectural education is to cancel ‘accidents’ – accidents that might derail the rise of mediocrity, amongst others.8 Hence, unlike the timeline, spreadsheets do not conjure but avoid the future. Or to be more precise, they void the future. The central interest of their prediction is compliance.

For all its lightning-speed agility and its ever-growing list of functions (f), the management spreadsheet freezes possibility as it rehearses the sacrifice numerical taxonomy made (of understanding for predictive prowess) and instead blinds us with its ‘transparency’. This is its formidable rhetorical power. At the same time, the spreadsheet asserts that it is not a diagram. How could it be? Diagrams are rhetorical; they have agency. The spreadsheet purports to have none. It is simply an organisational tool, there to see what we cannot see. Diagrams understand that there is a more complicated world out there. The spreadsheet replaces that world. It never existed. The double power of the arch-diagram that is the spreadsheet is predicated upon its insistence that it is outside of time and that it is not a diagram. Our chronograph is no different.

Its constituent terms, often arriving from distant disciplines such as management or marketing, and paradigms such as 1920s psychology or the Industrial Revolution’s 1802 Factory Act, each contain a history as idiosyncratic as the logics they enforce in the data they now manage. A latter-day Jencksian might organise them in a river-like timeline that unveils how research indexes shifted from an inconspicuous presence to a dictatorial superstructure; why risk assessments started defining design briefs; or at which point online management systems became the mandatory communication channel between students and academics. The danger of this is, of course, to fall for their very same promise: to imagine that we too could predict the future, or the demise, of these systems of prediction. To avoid such tautological pitfalls, we choose instead the cul-de-sac that Jencks discarded due to its assumed inability to predict. In so doing we do not ask the question of where these terms go or come from, but true to the mission of numerical taxonomy, only what immediate and measurable features connect or separate them.

Their mutual dissimilarity is thus calculated via six new requisite characters: General Obfuscation Index (n), Transparency percentage (k), Publicness Degree Adjustment (a), Utterability Level (previously Pronounceability Rank) (α), Ownership Constant (ε) and Grammatical Readiness Ratio (β). The character set has no pretension to universality or even comprehensiveness, but rather accepts the fragmented nature of our endeavour. Not only do we have limited knowledge of the different systems, brands and software products that now encumber schools of architecture around the world, but we are also well aware that architectural education is not the avant-garde of management thinking, but rather a mediocre parvenu or, worse, a dumping ground. These systems have been well tested in other realms of the corporate world and arrive at our schools late and probably already outdated. We prefer not to know what the future of management holds in store for us and, again, to want to would be to accept its very colonial logics: that we can predict each other’s future, as so many latter-day matrices now predict ours.

The digital turn arrived as if it had no history, as if the ‘end of history’ was to be taken literally.9 But instead, it is the future that has ended, or been taken away. Data-driven capitalism, the neoliberal logics it attends to, and the extreme centre it upholds, have seen to it that the cones of vision of predictive systems have given us not more but fewer futures.10 Returning to Jencks’s 1969 matrix, its top right-hand triangle simply duplicates the data below the diagonal in tonal shades. Why install such redundancy in a system that is all about reduction for optimisation? Does this anticipate some future resistance occupation, able to escape from its teleological cul-de-sac? Or better still, that which quite simply refuses to be predicted? We await its arrival, despite the obstacle race of predictive systems, from within their now extensive spaces of duplication and redundancy, the terrains vagues outside of their cones of vision and from behind their mesmeric, didactic surfaces, in full dysphoric jouissance.11

Dissimilarity Matrix Characters used to Calculate Distance between Terms Governing Architectural Education

n, General Obfuscation Index: distance between the actual term and the intended meaning of communication, usually constructed through confusing and ambiguous language that makes the message difficult to understand.

k, Transparency percentage: level measuring the degree to which the arcane, underlying measurement mechanisms are kept hidden, undetectable or otherwise out of view, so as not to obstruct their intended function.

a, Publicness Degree Adjustment: fixed numerical value added to recognise the quality or state of being public or being owned by the public.

α, Utterability Level (a.k.a. Pronounceability Rank): 1-10 value that measures the quality of being expressible in audible words, or the quality of being distinctly pronounced in speech.

ε, Ownership Constant: evaluation of the state or fact of legal possession of and control over the term as property; such ownership may involve multiple rights, collectively referred to as title, which may be separated and held by different parties.

β, Grammatical Readiness Ratio: correction index that describes whether the term is well formed, that is, in accordance with the rules of the grammar of the language from which it derives.
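Combined, the six characters admit a single pairwise distance in the manner of Jencks’s original summation (see note 2). What follows is a minimal sketch only – the normalisation of each character to a common 0–1 range and the equal weighting of the six are our assumptions, not properties of the chronograph itself:

\[
D(t_i, t_j) \;=\; \sum_{c \,\in\, \{n,\, k,\, a,\, \alpha,\, \varepsilon,\, \beta\}} \bigl|\, \hat{c}(t_i) - \hat{c}(t_j) \,\bigr|,
\qquad
\hat{c}(t) \;=\; \frac{c(t) - c_{\min}}{c_{\max} - c_{\min}},
\]

so that two terms identical in every character sit at distance 0 and two terms maximally opposed in all six sit at distance 6 – the analogue of the 0 to 18 range produced by Jencks’s six 0–3 scores.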

Definitions of Terms Governing Architectural Education

1. 360 Review (Johann Baptist Rieffert, military psychologist, Germany, ca.1930)

360-degree feedback (also known as multi-rater feedback, multi-source feedback, or multi-source assessment) is a process through which feedback from an employee’s subordinates, colleagues, and supervisor(s), as well as a self-evaluation by the employee themselves, is gathered. Such feedback can also include, when relevant, feedback from external sources who interact with the employee, such as customers and suppliers or other interested stakeholders.

2. AD (Australia, ca.2020)

The Academic Dashboard (AD) is a management tool presenting a consolidated and summarised view of an individual’s academic activity, including learning, teaching and research performance, by pulling together data from a number of sources. It provides a consolidated view of an academic staff member’s teaching allocation, research, and learning and teaching data, including SET scores, work profile, research outputs and income, HDR supervision, training completions and leave balances.

3. ARC (Australian Government, 2001)

The Australian Research Council (ARC) is the primary non-medical research funding agency of the Australian Government, distributing more than AUD 800 million in grants each year. The council was established by the Australian Research Council Act 2001, and provides competitive research funding to academics and researchers at Australian universities. Most health and medical research in Australia is funded by the more specialised National Health and Medical Research Council (NHMRC), which operates under a separate budget.

4. ARWU (Shanghai Jiao Tong University, 2003)

The Academic Ranking of World Universities (ARWU), also known as the Shanghai Ranking, is an annual publication of world university rankings. The league table was originally compiled and issued by Shanghai Jiao Tong University in 2003, making it the first global university ranking with multifarious indicators.

5. CERIF (European Commission, euroCRIS, 1987)

The Common European Research Information Format (CERIF) is a formal conceptual model to support the management of Research Information, including the setup of and the interoperation between Current Research Information Systems. The model is considered a standard, as it is recommended by the European Union to its Member States. It was developed with the help of the European Commission, and is now managed by euroCRIS.

6. CRA (Robert Glaser, educational psychologist, United States, 1963)

Criterion Referenced Assessment (CRA) uses a set of pre-specified qualities or standards in the assessment of a student’s learning in order to demonstrate they have achieved the projected learning outcomes.

7. DP (University of Management and Technology, United States, 1859)

A Development Plan (DP) is a projected programme outlining objectives and milestones towards the achievement of an organisation’s strategic plan. Development plans can also refer to professional development plans for individuals to improve knowledge and/or skills, address shortcomings or establish objectives a staff member is required to accomplish to support continuous improvement in their career. Development plans may follow a 360 Review (1) and use SMART goals (30) in order to ensure KPIs (19) are met.

8. EDI (Lewis Griggs, HR consultant, United States, ca.1980)

Equity, Diversity and Inclusion (EDI) is a broadly defined term referring to social responsibility aims within organisations, with roots in Diversity & Inclusion training in 1990s corporate America. Diversity is a relational concept that refers to wide-ranging identifications of people based on concepts of class, race, gender, sexuality, able-bodied or able-mindedness, neurological condition, and others. Inclusion and Equity refer to strategies and philosophies moving towards the treatment of ‘diverse’ identities within organisations, companies and institutions.

9. E&I (Assessment) (Australian Government, 2015)

Engagement and Impact (E&I) are widespread criteria for the evaluation of research quality. Impact is the deemed benefit that comes from research, in particular the benefit that occurs beyond academia in its contribution to end-users and society at large. Engagement is the interaction between researchers and those end-users through which that benefit is delivered.

10. EuroCRIS (European Union, 2002)

European Current Research Information System (EuroCRIS) is an international not-for-profit association that brings together experts on research information management and research information management systems to foster cooperation and knowledge-sharing across the worldwide research information community and to promote interoperability of research information through the CERIF standard. Additional areas of activity also include – among others – the uptake of CRIS systems by various stakeholders, research information infrastructures on an institutional, regional, national and international level, best practices in system interoperability and the use and implementation of standards in CRIS such as identifiers, formats, semantics, (controlled) vocabularies, etc.

11. ERA (Australian Research Council, 2010)

Excellence in Research for Australia (ERA) is Australia’s national research evaluation framework, developed and administered by the Australian Research Council. The first full round of ERA occurred in 2010; subsequent rounds followed in 2012, 2015 and 2018, before the exercise was put on hold in 2022.

12. FFE (various sources, ca.2000)

Future-Focused Education (FFE) is an emerging cluster of ideas, beliefs, theories and practices drawn from many sources, within and outside education, that are mobilised by the belief that major change is needed in education if it is to meet future market needs. In today’s context, future-focused education has several very different strands. In one influential strand, education’s links to work and the economy are foregrounded. This work emphasises the skills people need to participate – and drive economic growth – in today’s knowledge-based, networked economies, and argues that education’s purpose is to develop them. In some work they are called the ‘4Cs’: Creativity, Critical thinking, Collaboration and Communication. References to a range of other ‘soft’ skills – for example, innovation, agility, entrepreneurship, digital literacy, and design thinking – are also common.

13. FIPF (Australia, 2020)

The Foundations for Inspiring People Framework (FIPF) is a typical academic dashboard benchmarking framework that outlines measures of quality and performance across all three domains of academic work: research, teaching and engagement. Target priorities are listed: ‘A High-Performing Institution’, ‘Building Leadership and Capability’, ‘Outstanding Talent’ and ‘A Values Based Culture’.

14. FoR (Australian Bureau of Statistics and Statistics New Zealand, 2008; 2020)

Fields of Research (FoR) codes are one of three related classifications developed within the Australian and New Zealand Standard Research Classification (ANZSRC) for use in the collection, analysis and dissemination of research and experimental development statistics. FoR is a three-tier hierarchical classification beginning with major fields (division), then related sub-fields (group) and finally specialty fields or emerging areas of study (field). FoR codes were renegotiated in 2020 to better reflect FFE (12) criteria.

15. GS (Google Inc., 2004)

Google Scholar (GS) is a web search engine that indexes the metadata of academic literature in order to facilitate search, and to feed the tracking platforms used for academic profiling, promotion and funding purposes. See FIPF (13).

16. H-Index (Jorge E. Hirsch, professor of physics, United States, 2005)

The Hirsch index (H-Index) is an author-level metric that measures both the productivity and citation impact of an academic’s publications. The index is based on the set of the scientist’s most cited papers and the number of citations that they have received in other publications. The index has more recently been applied to the productivity and impact of a scholarly journal as well as a group of scientists, such as a department, university or country.
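The arithmetic behind the metric is standard and worth stating plainly – this is the general definition, not anything specific to the systems above. Ranking an author’s papers from most to least cited, the h-index is the largest rank at which a paper still has at least that many citations:

\[
h \;=\; \max \{\, k : c_{(k)} \geq k \,\}, \qquad c_{(1)} \geq c_{(2)} \geq \dots \ \text{(citation counts in descending order)},
\]

so an author whose papers have been cited 10, 8, 5, 4, 3 and 0 times, for example, has an h-index of 4.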

17. IVAR (various sources, ca.2000)

Identity Verified Assessment Requirement (IVAR), also referred to as ‘Identity Verified Assessment with Hurdle’ (IVAH), refers to assessment techniques that seek to reduce instances of plagiarism and student misconduct, such as detection software and digitally invigilated assessment.

18. JR (various sources, ca.2000)

Job-ready (JR) is an educational objective that prioritises employability. A job-ready graduate should have gained all the knowledge and skills required to immediately enter the workforce without need of further training. It is the core objective of vocational education that prepares people to work as a technician or to take up employment in a skilled craft or trade.

19. KPI (Peter Drucker, management consultant, ca.1990)

A Key Performance Indicator (KPI) is a type of performance measurement. KPIs evaluate the success of an organisation or of a particular activity in which it engages or of an individual within an organisation. See SMART goals (30).

20. LMS (Sidney Pressey, professor of psychology, United States, 1924)

A Learning Management System (LMS) is a software application for the administration, documentation, tracking, reporting, automation and delivery of educational courses, training programmes, materials or learning and development programmes. The learning management system concept emerged directly from e-Learning. Learning management systems make up the largest segment of the learning system market, and have seen massive growth in usage due to the pivot to remote learning during the COVID-19 pandemic. See NWOL (21).

21. NWOL (various sources, ca.2000)

The expression ‘new ways of learning’ (NWOL) experienced heightened use as a result of the COVID-19 pandemic, which led to conventional educational practices being conducted through online tools and e-learning platforms. Blended Learning combines New Ways of Learning with face-to-face learning.

22. OHS (Factory Act, United Kingdom, 1802)

Occupational Health and Safety (OHS) is a multidisciplinary field concerned with the safety, health and welfare of people at work. See Risk Assessments (28).

23. ORCID (ORCID Inc., 2012)

Open Researcher and Contributor ID (ORCID) is a non-proprietary alphanumeric code used to uniquely identify authors and contributors of scholarly communication, as well as ORCID’s website and services for looking up authors and their bibliographic output.

24. QA (Joseph M. Juran, engineer and management consultant, United States, ca.1920)

Quality Assurance (QA) is the term used in both manufacturing and service industries to describe the systematic efforts taken to ensure that the product(s) delivered to customer(s) meet the contractual and other agreed-upon performance, design, reliability and maintainability expectations of said customer.

25. QS (Quacquarelli Symonds Limited, 2010)

Quacquarelli Symonds World University Rankings (QS) is an annual publication of university rankings by Quacquarelli Symonds. The Quacquarelli Symonds system comprises three parts: the global overall ranking; the subject rankings; and five independent regional tables – namely Asia, Latin America, Emerging Europe and Central Asia, the Arab Region, and BRICS.

26. RE (James McKeen Cattell, psychologist, United States, 1910)

Research Excellence (RE) is a qualitative objective synonymous with its method of assessment. Evaluative judgements of research outputs are made against universally applied metrics established by associated bodies including academic institutions and the government bodies that fund their activities.

27. REF (Research England, the Scottish Funding Council, the Higher Education Funding Council for Wales, and the Department for the Economy, Northern Ireland, 2014)

The Research Excellence Framework (REF) is a research impact evaluation of British higher education institutions. It is the successor to the Research Assessment Exercise and it was first used in 2014 to assess the period 2008–13.

28. RA (various sources, ca.1950)

A Risk Assessment (RA) is the combined effort of identifying and analysing potential events that may negatively impact individuals, assets, and/or the environment, and determining the tolerability of the risk on the basis of a risk analysis. It informs the provision of mitigating actions as per OHS policy; see OHS (22).

29. RIMS (Pure (Elsevier), Converis (Thomson Reuters), Symplectic Elements, ca.2010)

A Research Information Management System (RIMS, also known as Research Information Management (RIM) or Current Research Information System (CRIS)) is an online application for the collection and management of research compliance information for regulatory agency and campus oversight policy compliance. It is designed to track training taken by Principal Investigators (PIs) and other researchers, making the training records easily accessible for regulatory compliance reporting. It can also be used as a repository for other information needed for regulatory and policy needs.

30. SMART (George T. Doran, Corporate Planning consultant, United States, 1981)

Specific, Measurable, Achievable, Realistic and Time-related (SMART) is a mnemonic acronym, giving criteria to guide in the setting of goals and objectives for better results, for example in project management, employee-performance management and personal development. The term was first proposed by George T. Doran in the November 1981 issue of Management Review. See KPI (19).

31. SFS (various sources, ca.1980)

Student Feedback Surveys (SFS), known more broadly as course evaluations, are usually anonymous paper or electronic questionnaires given to students at the end of a course, which require written or selected response answers to a series of questions in order to evaluate the instruction, content uptake and customer satisfaction of a given course.

32. SLO (Benjamin Bloom, educational psychologist, United States, ca.1940)

Student Learning Objectives (SLO) is an assessment tool that allows a teacher to quantify their impact on student achievement as measured within the parameters of a particular academic or elective standard. An objective does not describe what the instructor will be doing, but instead the skills, knowledge and attitudes that the instructor will be attempting to produce in learners.

33. THE (The Times Higher Education in partnership with Quacquarelli Symonds, 2004; 2010)

The Times Higher Education World University Rankings (THE) is an annual publication of university rankings by the Times Higher Education (THE) magazine. The publisher had collaborated with Quacquarelli Symonds (QS) (25) to publish the joint THE-QS World University Rankings from 2004 to 2009 before it turned to Thomson Reuters for a new ranking system from 2010 to 2013. Since 2014, the publication has had a deal with Elsevier (see RIMS, 29) to provide it with the data used to compile the rankings.

34. UKRI (government of the United Kingdom, 2018)

United Kingdom Research and Innovation (UKRI) is a non-departmental public body of the government of the United Kingdom that directs research and innovation funding, funded through the science budget of the Department for Business, Energy and Industrial Strategy.

We’d like to thank Janelle Woo for her inestimable help with compiling the above Definitions. We hope they have not put her, or you, off academia for good.

Chronograms of Architecture is a collaboration between e-flux Architecture and the Jencks Foundation.

  1. Charles Jencks, ‘Pigeon-holing Made Difficult’, Architectural Design, vol. 11 (January 1969), 582.

  2. Ibid. Kahn, Venturi, Moore, Giurgola, Belluschi, Cambridge 7, Sert, Esherick, Kallmann, Barnes, Johansen, Breuer, Neutra, Nowicki, Eames, Wachsmann, Fuller, Kiesler, Soleri, Greene, Goff, Wright, Lundy, Gropius, Stubbins, Rudolph, Saarinen, Johnson, I. M. Pei, Mies, SOM, Yamasaki, Stone, Harrison. Jencks explains that each value represents the degree of dissimilarity between two architects, calculated by summing the 0–3 scores related to six historical influences, producing a possible minimum of 0 and maximum of 18. Each pairing appears twice, as the intersection between Paolo Soleri’s column and SOM’s row is located in a different cell than the intersection between SOM’s column and Soleri’s row, yet both hold the same value. The matrix is, however, diagonally divided from the upper left corner to the lower right one, cancelling the cells that show the intersection of the architects with themselves, and reserving the bottom-left triangle for numerical values and the top right for grey shades whose saturation is equivalent to the corresponding numerical value – darker colours denote higher values and thus greater distance. Jencks cites the numerical taxonomy source from which he appropriated the matrix: Robert R. Sokal, ‘Numerical Taxonomy’, Scientific American, vol. 215, no. 6 (December 1966), 106–117, 106.

  3. Friedrich A. Kittler, The Truth of the Technological World: Essays on the Genealogy of Presence, trans. Erik Butler (Redwood City: Stanford University Press, 2004).

  4. We are reminded that it is only by the utter denial of other criteria, of properties that cannot or will not be measured (yet), that a system can purport to be a ‘world’ for which, as with all world-building, the final goal is to be hermetic – to exclude not only that/those which it cannot/will not measure, but also its/their futures.

  5. See Sokal, ‘Numerical Taxonomy’, 109. As Sokal offers, we can still ‘see’ the phenetic distance, or dissemblance, between individuals via a phenogram – a branching diagram, deceptively like its antecedents, but fundamentally different: the x-axis is no longer time, but degrees of difference. Time is gone.

  6. ‘Charles Jencks and Alejandro Zaera-Polo Compare Diagrams’, produced by the Canadian Centre for Architecture and filmed in The Cosmic House, December 2017, 11:37 mins. https://www.cca.qc.ca/en/articles/issues/25/a-history-of-references/55386/organizing-tendencies.

  7. Not unlike numerical taxonomy, which had promised a tolerance of complexity – incorporating a near-infinite number of characters in giant matrices and allowing the taxonomist to make, as Jencks hoped, ‘more delicate’ classifications – spreadsheets too have produced not the space for the articulation of difference promised by computation, but the homogeneity that is the signature of mediocrity. Jencks (January 1969), 582.

  8. For a brief account of the relations between transactional logics and the metricisation they require, and the rise of mediocrity in academia, see Francesca Hughes, ‘Failing to Fail’, The Architectural Review, no. 1494 (September 2022), 96–98.

  9. We are indebted to our dear friend and colleague Miguel Rodríguez-Casellas for pointing this out to us.

  10. On the extreme centre, see David Graeber, ‘Graeber on Liberals’, Double Down News, 2020, https://www.filmsforaction.org/watch/david-graeber-on-the-extreme-center/.

  11. On dysphoria see Paul B. Preciado, Dysphoria Mundi: Le son du monde qui s’écroule (Paris: Bernard Grasset, 2022).
