
Leavitt Path Algebras

Iain Dangerfield

A thesis submitted for the degree of


Master of Science
at the University of Otago,
Dunedin, New Zealand

April 27, 2011


Abstract

The central concept of this thesis is that of Leavitt path algebras, a notion
introduced by Abrams and Aranda Pino in [AA1] and by Ara, Moreno and Pardo
in [AMP] in 2004. The idea of using a field K and a row-finite graph E to generate
an algebra LK (E) provides an algebraic analogue to Cuntz and Krieger’s work with
C*-algebras of the form C*(E) (which, despite the name, are analytic concepts). At
the same time, Leavitt path algebras also generalise the algebras constructed by W.
G. Leavitt in [Le1] and [Le2], and it is from this connection that the Leavitt path
algebras get their name.

Although the concept of a Leavitt path algebra is relatively new, in the years
since the publication of [AA1] there has been a flurry of activity on the subject.
Many results were initially shown for row-finite graphs, then extended to countable
(but not necessarily row-finite) graphs (as in [AA3]) and then finally shown for
completely arbitrary graphs (see, for example, [AR]). Most of the research has
focused on the connections between ring-theoretic properties of LK (E) and graph-
theoretic properties of E (for example [AA2], [AR] and [ARM2]), the socle and socle
series of a Leavitt path algebra ([AMMS1], [AMMS2] and [ARM1]) and analogues
between LK (E) and their C*-algebraic equivalents C*(E) (for example [To]). Some
papers have classified certain sets of Leavitt path algebras, such as [AAMMS], which
classifies the Leavitt path algebras of graphs with up to three vertices (and without
parallel edges).

In Chapter 1 we will cover the ring-, module- and graph-theoretic background


necessary to examine these algebras in depth, as well as taking a brief look at
Morita equivalence, a concept that will prove useful at various points in this thesis.


We introduce Leavitt path algebras formally in Chapter 2 and look at various results
that arise from the definition. We also examine simple and purely infinite simple
Leavitt path algebras, as well as the ‘desingularisation’ process, which allows us to
construct row-finite graphs from graphs containing infinite emitters in such a way
that their corresponding Leavitt path algebras are Morita equivalent. In Chapter 3
we examine the socle and socle series of a Leavitt path algebra, while in Chapter
4 we examine Leavitt path algebras that are von Neumann regular, π-regular and
weakly regular, as well as Leavitt path algebras that are self-injective. Finally, in
Appendix A we give a detailed definition of a direct limit, a concept that recurs
throughout this thesis.
Acknowledgements

First and foremost I would like to thank my supervisor John Clark for the amaz-
ing amount of time and effort he put into researching various topics, answering my
many questions and proofreading several versions of this thesis. I would also like to
thank Gonzalo Aranda Pino, Kulumani Rangaswamy, Gene Abrams and Mercedes
Siles Molina for their helpful correpondence in response to my various queries. Fi-
nally, I wish to thank my family, friends and the music of the Super Furry Animals
for providing inspiration throughout the year.

Contents

Abstract i

Acknowledgements iii

1 Preliminaries 1
1.1 Ring Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Module Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.3 Morita Equivalence . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
1.4 Graph Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32

2 Leavitt Path Algebras 39


2.1 Introduction to Leavitt Path Algebras . . . . . . . . . . . . . . . . . 39
2.2 Results and Properties . . . . . . . . . . . . . . . . . . . . . . . . . . 50
2.3 Purely Infinite Simple Leavitt Path Algebras . . . . . . . . . . . . . . 61
2.4 Desingularisation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72

3 Socle Theory of Leavitt Path Algebras 81


3.1 Preliminary Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
3.2 The Socle of a Leavitt Path Algebra . . . . . . . . . . . . . . . . . . 85
3.3 Quotient Graphs and Graded Ideals . . . . . . . . . . . . . . . . . . . 94
3.4 The Socle Series of a Leavitt Path Algebra . . . . . . . . . . . . . . . 109

4 Regular and Self-Injective LPAs 125


4.1 The Subalgebra Construction . . . . . . . . . . . . . . . . . . . . . . 125
4.2 Regularity Conditions for Leavitt Path Algebras . . . . . . . . . . . . 134


4.3 Weakly Regular Leavitt Path Algebras . . . . . . . . . . . . . . . . . 140


4.4 Self-Injective Leavitt Path Algebras . . . . . . . . . . . . . . . . . . . 157

A Direct Limits 167


A.1 Direct Limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167

Bibliography 171

Index 174
Chapter 1

Preliminaries

1.1 Ring Theory


In many texts, a ring R is required to be a monoid under multiplication; that is,
R must contain a multiplicative identity. Indeed, many well-known ring theoretic
results are based on the assumption that such an element exists. However, some
authors omit this requirement, resulting in a more general definition of a ring. As we
will see in Chapter 2, a Leavitt path algebra may not necessarily have an identity,
so throughout this thesis we will assume the more general definition of a ring that
does not require the existence of a multiplicative identity. In the case that R does
have a multiplicative identity, we say that R has identity or that R is unital, and
denote this identity by 1 (or 1R ) as usual.

The following definition gives a very useful generalisation of the concept of a


multiplicative identity.

Definition 1.1.1. A ring R has local units if there exists a set of idempotents E
in R such that, for every finite subset X = {x1 , . . . , xn } ⊆ R, there exists an e ∈ E
such that X ⊆ eRe. In this case, exi = xi = xi e for each i = 1, . . . , n and e is said
to be a local unit for the subset X.
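The definition is easy to experiment with. As a toy illustration (not from the thesis, with ad hoc function names): the ring of finitely supported functions from ℕ to ℚ under pointwise operations has no multiplicative identity, since an identity would need support all of ℕ, but the 0/1-valued indicator functions supply local units.

```python
from fractions import Fraction

# Ring elements: dicts {index: nonzero Fraction}, i.e. finitely
# supported functions N -> Q with pointwise addition and multiplication.

def mul(a, b):
    """Pointwise product; the support shrinks to the intersection."""
    return {i: a[i] * b[i] for i in a if i in b and a[i] * b[i] != 0}

def local_unit(elements):
    """Indicator function of the union of supports: an idempotent e
    with e*x = x = x*e for every x in the given finite set."""
    support = set().union(*(x.keys() for x in elements))
    return {i: Fraction(1) for i in support}

x1 = {0: Fraction(2), 3: Fraction(-1)}
x2 = {1: Fraction(5, 2)}
e = local_unit([x1, x2])

assert mul(e, e) == e                  # e is idempotent
assert mul(e, x1) == x1 == mul(x1, e)  # e x1 = x1 = x1 e
assert mul(e, x2) == x2 == mul(x2, e)
```

Since this ring is commutative the two-sided condition X ⊆ eRe is automatic once ex = x.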

Note that if a ring R has identity 1, then {1} is a set of local units for R. In
Chapter 2 we will show that every Leavitt path algebra has local units (but is not


necessarily unital), and so we will be particularly interested in looking at results for


rings with local units. As we shall see, extending results for unital rings to this more
general case is straightforward in some cases, while in other cases it can be difficult
or even impossible.

When working with rings that do not necessarily have identity we have to take
care with the way certain things are defined. For example, for an arbitrary element
x in an arbitrary ring R we define the two-sided ideal generated by x, denoted
⟨x⟩, to be the set

⟨x⟩ := { ∑_i r_i x s_i + ∑_j r′_j x + ∑_k x s′_k + n · x : r_i, s_i, r′_j, s′_k ∈ R, n ∈ Z }

where the sums are finite. If R is unital, it is easy to see that this expression
simplifies to the more familiar definition ⟨x⟩ = { ∑_i r_i x s_i : r_i, s_i ∈ R }.
Furthermore, in the more general case that R has local units this simplification
still holds, since we can find a nonzero idempotent e ∈ R for which ex = x = xe.

Similarly, for an arbitrary element a ∈ R we define the principal left ideal


generated by a, denoted Ra, to be the set

Ra := {ra + n · a : r ∈ R, n ∈ Z}.

Once again, in the case that R has local units this simplifies to the more familiar
definition Ra = {ra : r ∈ R}, since a = ea for some idempotent e ∈ R.

If R is a ring with local units, then for any element a ∈ R there exists an
idempotent e ∈ R such that a ∈ eRe, by definition. It is easy to see that eRe is a
subring of R. Furthermore, note that eRe is always unital (with identity e), even
if R is not. The following result concerns the subring eRe. Recall that a ring R is
simple if the only two-sided ideals contained in R are {0} and R itself.

Proposition 1.1.2. Let R be a ring with local units. Then R is simple if and only
if the subring eRe is simple for every nonzero idempotent e ∈ R.

Proof. Suppose that R is simple and let e be any nonzero idempotent in R. To show
that eRe is simple it suffices to show that, for any nonzero element exe ∈ eRe, the
two-sided ideal of eRe generated by exe is equal to all of eRe. Take an arbitrary
nonzero element ex′e ∈ eRe. Since ex′e is an element of R and R is simple, the two-
sided ideal of R generated by ex′e is equal to R. Now take another arbitrary element
eye ∈ eRe. Since R = ⟨ex′e⟩ and R has local units we can write y = ∑_i r_i (ex′e) s_i,
where each r_i, s_i ∈ R. Thus

eye = ∑_i e r_i (ex′e) s_i e = ∑_i (e r_i e)(ex′e)(e s_i e)

since e is an idempotent. Thus eye is contained in the two-sided ideal of eRe


generated by ex′e, and since eye was an arbitrary element this shows that eRe is
simple.

Conversely, suppose that f Rf is simple for every nonzero idempotent f ∈ R. As


above, it suffices to show that, for any nonzero element x ∈ R, the two-sided ideal
of R generated by x is equal to all of R. Take arbitrary nonzero elements x′, y ∈ R.
Since R has local units, there exists an idempotent e ∈ R such that x′, y ∈ eRe.
Since eRe is simple, the two-sided ideal of eRe generated by x′ must be all of eRe.
Thus y = ∑_i (e r_i e) x′ (e s_i e), where each r_i, s_i ∈ R. However, this sum is clearly
contained in the two-sided ideal of R generated by x′, and so x′ generates the whole
ring R, as required.
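Proposition 1.1.2 can be sanity-checked numerically (a toy illustration, not part of the thesis) in the unital ring R = M_2(ℝ) with the idempotent e = E_11: the corner eRe consists of matrices supported on the (1,1) entry, a copy of the field, with e as its identity.

```python
import numpy as np

# Corner ring eRe inside R = M_2(R), with e the matrix unit E_11.
e = np.array([[1.0, 0.0], [0.0, 0.0]])
assert np.array_equal(e @ e, e)          # e is an idempotent

A = np.array([[2.0, -1.0], [3.0, 4.0]])  # an arbitrary element of R
corner = e @ A @ e                       # an element of eRe
# eAe keeps only the (1,1) entry, so eRe = {a * E_11}, a copy of R:
assert corner[0, 1] == corner[1, 0] == corner[1, 1] == 0.0
# e is the multiplicative identity of the subring eRe:
assert np.array_equal(e @ corner, corner)
assert np.array_equal(corner @ e, corner)
```

This is consistent with the proposition: M_2(ℝ) is simple (Lemma 1.1.10) and each corner eRe here is a field, hence simple.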

We now move on to another definition that will be important when examining


Leavitt path algebras.

Definition 1.1.3. A ring R is said to be Z-graded if there is a family {Rn : n ∈ Z}


of subgroups of the additive group (R, +) for which

(i) R_m R_n ⊆ R_{m+n} for all m, n ∈ Z, and

(ii) R = ⊕_{n∈Z} R_n as an abelian group.

In this case, the family {Rn : n ∈ Z} is said to be a Z-grading of R, and elements


of each subgroup Rn are called homogeneous elements of degree n. Thus each

element in R can be written uniquely as a sum of homogeneous components (from


the definition of a direct sum – see Section 1.2).

A familiar example of a Z-graded ring is the ring of Laurent polynomials R =
K[x, x⁻¹] over a field K. If we define R_n = {k x^n : k ∈ K} for each n ∈ Z then it is
easy to see that conditions (i) and (ii) of the above definition hold. In many cases
a ring R lends itself naturally to a Z-grading; in particular, we will show that any
Leavitt path algebra LK (E) is a Z-graded ring. This concept of grading extends to
several other ring-theoretic concepts, as the following definition illustrates.
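The Laurent-polynomial grading lends itself to a quick computational check (a toy model, not part of the thesis): represent an element of K[x, x⁻¹] as a dict from degree to coefficient, so the homogeneous component R_n consists of the degree-n monomials, and degrees add under multiplication.

```python
from collections import defaultdict

def mul(p, q):
    """Multiply Laurent polynomials given as {degree: coefficient}."""
    out = defaultdict(int)
    for m, a in p.items():
        for n, b in q.items():
            out[m + n] += a * b       # R_m * R_n lands in R_{m+n}
    return {d: c for d, c in out.items() if c != 0}

p = {2: 3}    # 3x^2, homogeneous of degree 2
q = {-5: 7}   # 7x^-5, homogeneous of degree -5
assert mul(p, q) == {-3: 21}          # homogeneous of degree 2 + (-5) = -3

# A general element is the sum of its homogeneous parts, one per degree,
# which is exactly condition (ii) for this ring:
r = {0: 1, 2: 3, -1: 4}               # 1 + 3x^2 + 4x^-1
```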
Definition 1.1.4. An ideal I of a Z-graded ring R = ⊕_{n∈Z} R_n is said to be a
graded ideal if x = ∑_{n∈Z} x_n ∈ I (with each x_n ∈ R_n) implies that each x_n ∈ I.
In other words, an ideal I is graded if the homogeneous components of any element
in I are in I themselves. Equivalently, we can write I = ⊕_{n∈Z} (I ∩ R_n). Note that
if R is a Z-graded ring and e is an idempotent in R then the subring eRe is also
Z-graded.

If R = ⊕_{n∈Z} R_n and S = ⊕_{n∈Z} S_n are two Z-graded rings, then a homomorphism φ : R → S is said to be a graded homomorphism if φ(R_n) ⊆ S_n for each
n ∈ Z; that is, if φ takes homogeneous elements of degree n in R to homogeneous
elements of degree n in S.

It can be shown (see, for example, [NV, page 6]) that if R is a Z-graded ring and
I is a graded ideal of R, then the quotient ring R/I is also Z-graded. Similarly, if
R and R/I are both Z-graded then I must also be graded.

Lemma 1.1.5. If φ : R → S is a graded homomorphism between two graded rings


R and S, then ker(φ) is a graded ideal of R.

Proof. Let x ∈ ker(φ) and write x = x_{n_1} + · · · + x_{n_t}, where each x_{n_i} ∈ R_{n_i}. Thus
0 = φ(x) = ∑_{i=1}^{t} φ(x_{n_i}). Since φ is a graded homomorphism, φ(x_{n_i}) ∈ S_{n_i} for each
i. However, since S is a graded ring, the element 0 can only be expressed one way
as a sum of homogeneous components from each S_{n_i}, namely 0 = 0 + · · · + 0. Thus
for each i ∈ {1, . . . , t} we have φ(x_{n_i}) = 0 and so x_{n_i} ∈ ker(φ), as required.

Let L be a left ideal of a ring R. Then L is said to be a minimal left ideal of
R if L ≠ 0 and there exists no left ideal K of R such that 0 ⊂ K ⊂ L. Similarly, L
is said to be a maximal left ideal of R if L ≠ R and there exists no left ideal M
of R such that L ⊂ M ⊂ R.

The following lemma provides a useful way to determine when a principal left
ideal is a minimal left ideal.

Lemma 1.1.6. Let R be a ring and let x be a nonzero element of R. If x ∈ Ra for


every nonzero a ∈ Rx, then Rx is a minimal left ideal.

Proof. Suppose Rx contains a nonzero left ideal I and take an arbitrary nonzero
a ∈ I. Since a = bx + n · x for some b ∈ R and n ∈ Z, we have Ra ⊆ Rx. Similarly,
since x ∈ Ra then Rx ⊆ Ra and so Rx = Ra. Since Ra ⊆ I, we must have I = Rx
and so Rx is minimal.
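To see Lemma 1.1.6 in action (a toy example, not from the thesis), take R = M_2(ℝ) and x = E_11, so Rx is the left ideal of matrices whose second column is zero. For any nonzero a ∈ Rx some r ∈ R carries a back to x, since any nonzero first column can be mapped to (1, 0)ᵀ, so Rx is minimal. A spot check of one such a:

```python
import numpy as np

x = np.array([[1.0, 0.0], [0.0, 0.0]])   # x = E_11
a = np.array([[2.0, 0.0], [3.0, 0.0]])   # a nonzero element of Rx

# r carries a's first column (2, 3)^T to (1, 0)^T, so r @ a = x:
r = np.array([[0.5, 0.0], [0.0, 0.0]])
assert np.allclose(r @ a, x)             # hence x ∈ Ra, giving Rx = Ra
```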

Let R be a ring. The Jacobson radical of R, denoted J(R), is the intersection


of the family of all maximal left ideals of R. It can be shown that J(R) is a two-sided
ideal of R. (Similarly, the socle of R is the sum of all minimal left ideals of R; we
will define this concept formally in Section 3.1). We now give two useful results
concerning the Jacobson radical.

Lemma 1.1.7. Let R be a ring. Then J(R) contains no nonzero idempotents.

Proof. It is well-known that every element x ∈ J(R) is quasiregular, that is, there
exists a y ∈ R such that x + y = −xy = −yx (see, for example, [D, Chapter 4]).
Suppose that J(R) contains an idempotent e. Then −e ∈ J(R), so there exists a
y ∈ R such that y − e = ey. Multiplying on the left by −e gives −ey + e = −ey,
and thus e = 0.

The following lemma is from [AA3, Lemma 6.2].

Lemma 1.1.8. Let R be a Z-graded ring. Suppose that R contains a set of local
units E such that each element of E is homogeneous. Then J(R) is a graded ideal.

Proof. Let x ∈ J(R) and decompose x = x_{n_1} + · · · + x_{n_t} into a sum of its homogeneous
components. Let e be an element of E such that exe = x. Then x = exe =
e x_{n_1} e + · · · + e x_{n_t} e. Since e is a local unit it must be an idempotent, and a nonzero
homogeneous idempotent has degree 0 (if e ∈ R_d then e = e² ∈ R_{2d}, forcing d = 0).
Thus e x_{n_i} e is homogeneous with the same degree as x_{n_i}. Since the
decomposition of an element into homogeneous components is unique, we must have
e x_{n_i} e = x_{n_i} for each i ∈ {1, . . . , t}.

By Jacobson [J2, Proposition 3.7.1] we have that J(R)∩eRe = eJ(R)e = J(eRe).


Since x = exe, we have x ∈ J(R) ∩ eRe and so x ∈ J(eRe). Since each x_{n_i} = e x_{n_i} e,
x = x_{n_1} + · · · + x_{n_t} is in fact the decomposition of x into graded components in eRe.
Now, since eRe is a Z-graded unital subring of R (with e as identity), we can apply
Bergman [Be, Corollary 2] to get that J(eRe) is a graded ideal of eRe. Thus x_{n_i} ∈
J(eRe) for each i ∈ {1, . . . , t}. Since J(R) ∩ eRe = J(eRe), we have J(eRe) ⊆ J(R)
and thus x_{n_i} ∈ J(R) for each i ∈ {1, . . . , t}, completing the proof.

A ring R is said to be von Neumann regular if, for every a ∈ R, there exists an
x ∈ R for which a = axa. Furthermore, we say that x is a von Neumann regular
inverse or quasi-inverse for a. Note that any division ring is von Neumann regular,
since we can simply choose x = a−1 if a is nonzero. The question of which Leavitt
path algebras are von Neumann regular (as well as other definitions of ‘regular’) will
be visited in Section 4.2. The following lemma concerning von Neumann regular
rings is from [G1, Lemma 1.3].

Lemma 1.1.9. Let R be a ring and let J and K be two two-sided ideals in R with
J ⊆ K. Then K is von Neumann regular if and only if J and K/J are von Neumann
regular.

Proof. If K is von Neumann regular, then clearly K/J is von Neumann regular.
Now consider a ∈ J. Since J ⊆ K, there exists x ∈ K such that a = axa. Now
y = xax ∈ J (since J is a two-sided ideal) and aya = axaxa = axa = a. Thus J is
von Neumann regular.
Now suppose that K/J and J are both von Neumann regular and consider a ∈ K.
Since K/J is von Neumann regular there exists x ∈ K for which a + J = axa + J,

so that a − axa ∈ J. Since J is von Neumann regular, there exists y ∈ J for


which a − axa = (a − axa)y(a − axa). Thus a = axa + (a − axa)y(a − axa) =
a(x + y − xay − yax + xayax)a, and so K is von Neumann regular.
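Matrix rings over a field are von Neumann regular, and over the reals a quasi-inverse can be produced explicitly: the Moore-Penrose pseudoinverse X of A always satisfies A = AXA. A floating-point check (numpy assumed, comparisons approximate; not part of the thesis):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])        # singular, so A has no true inverse
X = np.linalg.pinv(A)             # a quasi-inverse for A

assert np.allclose(A @ X @ A, A)  # A = AXA: A is von Neumann regular
assert np.allclose(X @ A @ X, X)  # X is in turn regular via A
```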

We conclude this section with a useful result regarding the matrix ring Mn (K),
where K is a field.

Lemma 1.1.10. Let K be a field. Then Mn (K), the ring of n × n matrices over
K, is simple for all n ∈ N.

Proof. Let n ∈ N, let J be a nonzero two-sided ideal of M_n(K) and let A = (a_{ij}) ∈ J.
If we can show that the n × n identity matrix I_n is in ⟨A⟩ (the two-sided ideal
generated by A) then we have M_n(K) = ⟨A⟩ = J, proving that M_n(K) is simple.
Let E_{ij} be the matrix unit with 1 in the (i, j) position and zeros elsewhere. Choose
i, j ∈ {1, . . . , n} such that a_{ij} ≠ 0. Then a_{ij} E_{11} = E_{1i} A E_{j1} ∈ ⟨A⟩. Since K is
a field, we have (a_{ij}^{−1} E_{11})(a_{ij} E_{11}) = E_{11} ∈ ⟨A⟩. By similar arguments, we have
E_{22}, . . . , E_{nn} ∈ ⟨A⟩ and thus I_n = E_{11} + E_{22} + · · · + E_{nn} ∈ ⟨A⟩, as required.
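The argument is constructive and can be replayed numerically (a sketch, not from the thesis): once a_{ij} ≠ 0, the products E_{ki} A E_{jk} = a_{ij} E_{kk} recover every diagonal matrix unit from A, and scaling and summing yields the identity.

```python
import numpy as np

def E(i, j, n):
    """Matrix unit with 1 in position (i, j), 0-indexed."""
    M = np.zeros((n, n))
    M[i, j] = 1.0
    return M

n = 3
A = np.zeros((n, n))
A[1, 1] = 5.0                     # a nonzero element of the ideal <A>
i, j = 1, 1                       # indices with A[i, j] != 0

# E(k,i) @ A @ E(j,k) = A[i,j] * E(k,k); scale by 1/A[i,j] and sum:
I_from_A = sum((1.0 / A[i, j]) * E(k, i, n) @ A @ E(j, k, n)
               for k in range(n))
assert np.allclose(I_from_A, np.eye(n))   # the identity lies in <A>
```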

1.2 Module Theory


Let R be a ring. Recall that an abelian group (M, +) is called a left R-module or
a left module over R if there is a mapping R × M → M , given by (r, m) 7→ rm,
such that, for all r, r1 , r2 ∈ R and all m, m1 , m2 ∈ M , we have

(i) r(m1 + m2 ) = rm1 + rm2 ,

(ii) (r1 + r2 )m = r1 m + r2 m,

(iii) r1 (r2 m) = (r1 r2 )m, and

(iv) 1R m = m (if R has identity).

If M is a left R-module, we sometimes denote M by R M . Furthermore, we say


that N is a submodule of M if N is a subgroup of M and rn ∈ N for all r ∈ R

and all n ∈ N . Note that if we view a ring R as the additive abelian group (R, +),
then R can be seen as a left module over itself, with module multiplication given
by multiplication in the ring R. Furthermore, the submodules of R R are simply the
left ideals of R.

We can define a right R-module M similarly, with a mapping M × R → M given


by (m, r) 7→ mr, and we sometimes denote this by MR . Again, any ring R can be
seen as a right module over itself, and the right submodules of RR are the right
ideals of R. In this section we will concern ourselves primarily with left R-modules,
though analogous results and definitions exist for right R-modules in most cases.

Now let R be a ring and let M, N be left R-modules. A function f : M → N


is called an R-homomorphism if f is a group homomorphism for which f (rx) =
rf (x) for all x ∈ M and all r ∈ R. We denote by HomR (M, N ) the additive abelian
group of all R-homomorphisms from M to N . (This is easily seen to be a group if
we define addition by (f + g)(m) = f (m) + g(m) for all f, g ∈ HomR (M, N ) and
all m ∈ M .) Furthermore, in the case that M = N , HomR (M, N ) is denoted by
EndR (M ) and is called the endomorphism ring of M . (Again, this is easily seen to
be a ring if we define multiplication by (f · g)(m) = f (g(m)) for all f, g ∈ EndR (M )
and all m ∈ M .)

We denote by R-mod the category of all left R-modules together with all R-
homomorphisms f : M → N , where M and N are left R-modules. (See Section
1.3 for a formal definition of a category.) However, in this thesis we will concern
ourselves with a slightly more restricted category of R-modules. We define an
R-module M to be unital if

RM := { ∑_{i=1}^{n} r_i m_i : r_i ∈ R, m_i ∈ M } = M,

and nondegenerate if Rm = 0 (for some m ∈ M ) implies m = 0. We denote by


R-Mod the subcategory of R-mod containing all unital and nondegenerate left R-
modules and all R-homomorphisms between such modules (and define Mod-R to be
the corresponding category of right R-modules.) Note that if R is a unital ring then
every left R-module is unital and nondegenerate, and so R-Mod is the full category

R-mod.

Lemma 1.2.1. Let R be a ring with local units and let M be a unital left R-module.
Then

(i) M is nondegenerate, and

(ii) for every m ∈ M there is a local unit e ∈ R such that em = m.


Proof. Let m ∈ M . Since M is unital, we can write m = ∑_{i=1}^{n} r_i m_i for some r_i ∈ R
and m_i ∈ M . Since R has local units, there exists an idempotent e ∈ R such that
r_i = e r_i for each i = 1, . . . , n. Thus m = ∑_{i=1}^{n} e r_i m_i = e( ∑_{i=1}^{n} r_i m_i ) = em, proving
(ii). Furthermore, if Rm = 0 then m = em = 0, showing that M is nondegenerate
and thus proving (i).

Now let R and S be two rings. Suppose that M is a left R-module and a right
S-module, with the property that (rm)s = r(ms) for all r ∈ R, s ∈ S and m ∈ M .
Then we say that M is an R-S-bimodule, and we sometimes denote M by R MS .
Furthermore, if M and N are two R-S-bimodules, then a map f : M → N is a
bimodule homomorphism if it is both a homomorphism of left R-modules and
right S-modules.

If A is a commutative ring then a ring R is called an A-algebra if R is an


A-A-bimodule. For example, any ring R is a Z-algebra, with the obvious module
multiplications nr = n · r = rn for all n ∈ Z and all r ∈ R. In this thesis we will be
primarily concerned with algebras over an arbitrary field K.

The following lemma gives a useful way of visualising the subring eRe of a ring R,
where e is an idempotent. Recall that EndR (Re) is the ring of all R-homomorphisms
from the left R-module Re to itself.

Lemma 1.2.2. Let R be a ring and let e be an idempotent in R. Then EndR (Re) ≅
(eRe)^Op, where (eRe)^Op is the opposite ring of eRe with multiplication · defined by
(er1 e) · (er2 e) = (er2 e)(er1 e) for all r1 , r2 ∈ R. Similarly, EndR (eR) ≅ eRe.

Proof. Let f be an arbitrary R-homomorphism in EndR (Re) with f (e) = rf e for


some rf ∈ R. (Note that rf defines the entire homomorphism f , since given any
element te ∈ Re we have f (te) = t(f (e)) = trf e.) Consider the map φ : EndR (Re) →
(eRe)Op with φ(f ) = erf e. (To check this is a well-defined function, suppose that
rf e = se for some s ∈ R with rf ≠ s. Then erf e = ese, and so φ is indeed
well-defined.)
Now suppose that f, g ∈ EndR (Re) with f (e) = rf e and g(e) = rg e for some
rf , rg ∈ R. Then (f + g)(e) = f (e) + g(e) = (rf + rg )e, and so

φ(f + g) = e(rf + rg )e = erf e + erg e = φ(f ) + φ(g)

as required. To check that φ is multiplicative, note that we must check that φ(f g) =
φ(f ) · φ(g) = φ(g)φ(f ). Now (f g)(e) = f (rg e) = f (rg e2 ) = rg ef (e) = rg erf e, and so

φ(f g) = e(rg erf )e = (erg e)(erf e) = φ(g)φ(f ).

Thus φ is a ring homomorphism. Now, given any x ∈ (eRe)Op , say x = ere,


let f ∈ EndR (Re) be the homomorphism defined by f (e) = re. Then φ(f ) =
ere = x, and so φ is an epimorphism. Finally, suppose that f ∈ EndR (Re) with
φ(f ) = erf e = 0 (where rf is an element of R for which f (e) = rf e). Then
0 = erf e = ef (e) = f (e2 ) = f (e) and so, for any se ∈ Re, we have f (se) = sf (e) = 0
and thus f = 0. Therefore φ is a monomorphism, and so EndR (Re) ≅ (eRe)^Op.
Using a similar argument, if we define ϕ : EndR (eR) → eRe to be the map
ϕ(f ) = erf e, where rf is an element of R for which f (e) = erf , we obtain the
isomorphism EndR (eR) ≅ eRe.

We now turn our attention to direct products and direct sums. Recall that if
R is a ring and {A_i : i ∈ I} is a family of left R-modules, we define the direct
product of the family {A_i : i ∈ I} to be the R-module formed by taking the
cartesian product of the family, and denote this by ∏_{i∈I} A_i. Furthermore, we define
the external direct sum of the family to be

⊕_{i∈I} A_i := { (a_i)_{i∈I} ∈ ∏_{i∈I} A_i : a_i ≠ 0_{A_i} for only a finite number of indices i ∈ I }.
Note that if I is a finite index set then we have ∏_{i∈I} A_i = ⊕_{i∈I} A_i .

Now if M is an R-module and {M_i : i ∈ I} is a family of submodules of M , we
say the sum ∑_{i∈I} M_i is an internal direct sum if every element m ∈ ∑_{i∈I} M_i has
a unique representation in the form ∑_{i∈I} m_i , where each m_i ∈ M_i and m_i ≠ 0 for
only a finite number of indices i ∈ I. We denote this by ⊕_{i∈I} M_i . If M = ⊕_{i∈I} M_i
then each M_i is said to be a direct summand of M .

It can be shown that a sum ∑_{i∈I} M_i is direct if and only if

M_i ∩ ( ∑_{j∈I, j≠i} M_j ) = {0}

for all i ∈ I. We can also show that any internal direct sum can be regarded as an
external direct sum, and vice versa, and hence there is no ambiguity in the notation.

The following result concerns left ideals generated by idempotents, and is useful
when working with rings with local units (though it is valid for any ring).

Lemma 1.2.3. Let R be an arbitrary ring. If e ∈ R is an idempotent, then

(i) Re is a direct summand of R;

(ii) any direct summand of Re is of the form Rf , where f is an idempotent; and

(iii) if f ∈ Re is an idempotent then Rf is a direct summand of Re.

Proof. (i) Since e = e2 , we have Re = {re : r ∈ R}. Let T = {t − te : t ∈ R}.


It is straightforward to see that T is a left ideal of R. For any r ∈ R, we have
r = re + (r − re), so clearly we have R = Re + T . Furthermore, suppose x ∈ Re ∩ T .
Then x = se and x = t − te for some s, t ∈ R. Using the fact that e = e2 , we
therefore have x = se = se2 = (t − te)e = te − te = 0. Thus Re ∩ T = {0} and so
R = Re ⊕ T , as required.

(ii) Suppose that Re = B ⊕C. Since e = e2 ∈ Re we have e = f +g, where f ∈ B


and g ∈ C. Furthermore, since f ∈ Re we have f = f ′e for some f ′ ∈ R, and thus
f = f ′e = f ′e2 = f e. Therefore f = f e = f (f + g) = f 2 + f g and so f − f 2 = f g.
Now f − f 2 ∈ B and f g ∈ C (since C is a left ideal), and so f 2 − f ∈ B ∩ C = {0}

and thus f 2 = f . We now show that B = Rf . Clearly Rf ⊆ B since B is a left


ideal. Now suppose that x ∈ B. Since x ∈ Re, we have x = xe = x(f + g) and
thus x − xf = xg. Once again this implies x − xf ∈ B ∩ C and so x = xf ∈ Rf ,
completing the proof.

(iii) We show that Re = Rf ⊕ R(e − ef ). First, since f ∈ Re we have f = f ′e for
some f ′ ∈ R, and so f e = f ′e2 = f ′e = f . Thus Rf + R(e − ef ) = Rf e + R(e − ef )e ⊆
Re. Furthermore, if re ∈ Re, then re = r(ef + e − ef ) = (re)f + e(e − ef ) ∈
Rf + R(e − ef ), and so Re = Rf + R(e − ef ). To show the sum is direct, suppose
that x ∈ Rf ∩ R(e − ef ). Since f and e − ef are idempotents, there exist r, s ∈ R
such that x = rf = s(e − ef ). Then rf = rf 2 = s(e − ef )f = s(ef − ef ) = 0, as
required.
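Part (i) of the lemma can be visualised numerically (a toy example, not from the thesis): for an idempotent e, every r splits as r = re + (r − re), with the two pieces lying in Re and T = {t − te} respectively.

```python
import numpy as np

e = np.array([[1.0, 1.0],
              [0.0, 0.0]])            # a non-trivial idempotent in M_2(R)
assert np.allclose(e @ e, e)

r = np.array([[2.0, -1.0],
              [3.0, 4.0]])
re = r @ e                            # the component in Re
t = r - r @ e                         # the component in T = {t - te}

assert np.allclose(re + t, r)         # R = Re + T
assert np.allclose(re @ e, re)        # Re is fixed by right mult. by e
assert np.allclose(t @ e, 0.0)        # T is killed by right mult. by e
```

The last two assertions are what force Re ∩ T = {0}: an element of both is fixed and killed by right multiplication by e, hence zero.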

For a given left R-module M and index set I, we often write


Y M
M I := Mi and M (I) := Mi
i∈I i∈I

where Mi = M for each i ∈ I. As noted above, if I is finite then we have M I = M (I) .


Furthermore, for any n ∈ N we let M n denote the direct sum (or product) of n copies
of M .

Proposition 1.2.4. Let R be a ring and let I be an index set. For any family of
left R-modules {Ai : i ∈ I} and any left R-module B, we have a group isomorphism
!
Ai , B ∼
M Y
Hom = Hom(Ai , B)
i∈I i∈I
L
Proof. For each j ∈ I we define πj : i∈I Ai → Aj to be the natural projection map
L
(ai )i∈I 7→ aj , and φj : Aj → i∈I Ai to be the natural injection map aj 7→ (bi )i∈I ,
L
where bj = aj and bi = 0 for i 6= j. Let f ∈ Hom( i∈I Ai , B). Then, for all i ∈ I
L Q
we have f φi ∈ Hom(Ai , B). Define τ : Hom( i∈I Ai , B) → i∈I Hom(Ai , B) by
τ (f ) = (f φi )i∈I . It is easy to show that τ is a group homomorphism.
L
Suppose that τ (f ) = 0 for some f ∈ Hom( i∈I Ai , B), so that f φi = 0 for all
L P
i ∈ I. Then, given any (ai )i∈I ∈ i∈I Ai , we have f ((ai )i∈I ) = f ( i∈I φi (ai )) =
P
i∈I f φi (ai ) = 0 and so f = 0. Thus τ is a monomorphism. Now let g = (gi )i∈I ∈
CHAPTER 1. PRELIMINARIES 13
Q
Hom(Ai , B), so that gi : Ai → B is a homomorphism for each i ∈ I. Define
i∈I
L P
f ∈ Hom( i∈I Ai , B) by f ((ai )i∈I ) = j∈I gj (πj ((ai )i∈I )). Now, for each j ∈ I we
have f φj (aj ) = gj (aj ) and so f φj = gj . Thus τ (f ) = g and so τ is an epimorphism,
completing the proof.

Let R be a ring. A left R-module P is said to be directly infinite if P is
isomorphic to a proper direct summand of itself. In other words, P is directly
infinite if there exists a nonzero R-module Q such that P ≅ P ⊕ Q. Iterating, for any
n ∈ N, we have

P ≅ P ⊕ Q ≅ (P ⊕ Q) ⊕ Q ≅ P ⊕ Q² ≅ · · · ≅ P ⊕ Qⁿ.

Furthermore, an idempotent e in a ring R is said to be infinite if the right ideal eR


is directly infinite (as a right R-module). The ring R is said to be purely infinite
if every right ideal of R contains an infinite idempotent. In other words, R is purely
infinite if every right ideal of R contains a directly infinite right ideal of the form
eR, where e is an idempotent.

The following result from [AGP, Theorem 1.6] gives a useful way of determining
when a unital ring is purely infinite. We state it here without proof.

Theorem 1.2.5. Let R be a simple unital ring. Then R is purely infinite if and
only if the following conditions are satisfied:

(i) R is not a division ring, and

(ii) for every nonzero element x ∈ R, there exist elements s, t ∈ R such that
sxt = 1.

In Section 2.3 we will be examining purely infinite simple Leavitt path algebras.
As mentioned earlier, any Leavitt path algebra has local units but is not necessarily
unital, and so we will need to adapt Theorem 1.2.5 for the more general case in
which R has local units. This is not straightforward, however, and we will need to
use Morita equivalence (introduced in Section 1.3) to do so.

The following proposition gives a useful way of determining when an idempotent


is infinite.

Proposition 1.2.6. Let R be a ring and let e ∈ R be an idempotent. Then e is


infinite if and only if there is an idempotent f in R and elements x, y in R such that

e = xy, f = yx, and f e = ef = f ≠ e.

Proof. First suppose that e is infinite. Then eR = B ⊕ C, where B, C are nonzero


right ideals of R, and there is an R-isomorphism φ : eR → B. Since e ∈ eR we
have e = f + g, where f ∈ B, g ∈ C and f, g are nonzero. Following the proof of
Lemma 1.2.3 (ii) (but in the context of right ideals), we can conclude that f and g
are idempotents and that B = f R and C = gR, giving eR = f R ⊕ gR. Now, since
f ∈ eR and e is an idempotent we have f = ef , as required. Similarly, g = eg and
so g = eg = (f + g)g = f g + g 2 . Thus g − g 2 = f g ∈ B ∩ C = {0} and so f g = 0.
Therefore we have f e = f (f + g) = f 2 + f g = f , as required. Furthermore, f ≠ e
since g is nonzero.
Now, since φ : eR → f R is an isomorphism, there exists x ∈ eR such that
φ(x) = f and there exists y ∈ f R such that φ(e) = y. Then

yx = φ(e)x = φ(ex) = φ(x) = f.

Furthermore, we have

φ(xy) = φ(x)y = f y = y = φ(e)

and so, since φ is a monomorphism, we also have xy = e, as required.

Conversely suppose that there exist elements f, x, y ∈ R such that f² = f, xy = e, yx = f, and ef = fe = f ≠ e. Let g = e − f, noting that g ≠ 0 since e ≠ f. Then e = f + g and so eR ⊆ fR + gR. Moreover, fR + gR = efR + (e − ef)R ⊆ eR and so we have eR = fR + gR. In fact, this last sum is direct since, if fr₁ = gr₂ for some r₁, r₂ ∈ R, then fr₁ = f²r₁ = fgr₂ = (fe − f²)r₂ = 0. Thus eR = fR ⊕ gR.
We complete the proof by showing that there is an isomorphism φ : eR → fR. Recall that eR = xyR and fR = yxR. Thus we can define an R-homomorphism φ : eR → fR by setting φ(xyr) = yxyr for all r ∈ R. If φ(xyr) = 0 then yxyr = 0 and so xyr = er = e²r = xyxyr = 0, showing that φ is a monomorphism. Furthermore, given yxr ∈ fR we have φ(xyxr) = yxyxr = f²r = fr = yxr, showing that φ is an epimorphism and thus completing the proof.
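A standard example, not drawn from the thesis, shows Proposition 1.2.6 at work: let V be a vector space over a field K with countable basis {v0, v1, v2, . . .} and let R = EndK(V). Writing xy for the composite "apply y, then x", the two shift maps below exhibit the identity 1V as an infinite idempotent.

```latex
y(v_i) = v_{i+1} \ (i \ge 0), \qquad
x(v_i) = \begin{cases} v_{i-1} & i \ge 1,\\ 0 & i = 0. \end{cases}
```

Then e := xy = 1V, while f := yx fixes each vi with i ≥ 1 and kills v0, so f² = f and fe = ef = f ≠ e; by Proposition 1.2.6, the idempotent 1V is infinite.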

Proposition 1.2.6 leads to the following useful corollary.

Corollary 1.2.7. Let R be a ring and let S be a subring of R. If R has no infinite idempotents then S has no infinite idempotents.

Proof. Suppose that R has no infinite idempotents but S has an infinite idempotent e. Then, by Proposition 1.2.6, there exists an idempotent f ∈ S and elements x, y ∈ S such that e = xy, f = yx and fe = ef = f ≠ e. Since these elements are also in R, Proposition 1.2.6 also gives that e is an infinite idempotent of R, a contradiction.

Definition 1.2.8. Let R be a ring, let M1, M2, . . . , Mn be left R-modules and for each i = 1, . . . , n − 1 let fi : Mi → Mi+1 be an R-homomorphism. We say that the sequence

    M1 —f1→ M2 —f2→ · · · —fn−2→ Mn−1 —fn−1→ Mn

is exact if ker(fi+1) = Im(fi) for each i = 1, . . . , n − 2. Furthermore, a short exact sequence is an exact sequence of the form

    0 → A —f→ B —g→ C → 0.

Note that this implies that f is a monomorphism and g is an epimorphism.
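A standard illustration (not taken from the thesis): over R = ℤ, multiplication by 2 followed by reduction modulo 2 yields a short exact sequence of ℤ-modules.

```latex
0 \longrightarrow \mathbb{Z} \xrightarrow{\ \times 2\ } \mathbb{Z} \xrightarrow{\ \pi\ } \mathbb{Z}/2\mathbb{Z} \longrightarrow 0
```

Here ×2 is a monomorphism, the quotient map π is an epimorphism, and Im(×2) = 2ℤ = ker(π), so the sequence is exact at every position.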

We now define an important concept in module theory.

Definition 1.2.9. Let R be a ring. A left R-module P is said to be projective if, for any R-epimorphism g : B → C, where B, C are left R-modules, and any R-homomorphism f : P → C, there exists an R-homomorphism h : P → B such that the following diagram commutes:

            P
          h ↙ │ f
           ↙  ↓
      B ——g——→ C ——→ 0

That is, gh = f.

This definition leads to the following useful lemma.



Lemma 1.2.10. Let A and P be left R-modules, where P is projective, and let f : A → P be an R-homomorphism. If f is an epimorphism, then there exists a left R-module P′ for which A ≅ P ⊕ P′.

Proof. If f is an epimorphism, then by the projective nature of P there exists an R-homomorphism h : P → A such that the following diagram commutes:

            P
          h ↙ │ 1P
           ↙  ↓
      A ——f——→ P ——→ 0

Thus we have fh = 1P, the identity map on P. We begin by showing that A = Im(h) ⊕ ker(f). Let x ∈ A, and write x = hf(x) + (x − hf(x)). Now hf(x) ∈ Im(h), while f(x − hf(x)) = f(x) − fhf(x) = 0 (since fh = 1P) and so x − hf(x) ∈ ker(f). Thus A = Im(h) + ker(f). To show this sum is direct, suppose that y ∈ Im(h) ∩ ker(f). Then y = h(z) for some z ∈ P. Furthermore, 0 = f(y) = f(h(z)) = 1P(z) = z, and so y = h(z) = h(0) = 0. Thus A = Im(h) ⊕ ker(f), as required. Since fh is a monomorphism, h must also be a monomorphism, and so P ≅ Im(h). Letting P′ = ker(f), we therefore have A ≅ P ⊕ P′, as required.

A concept closely related to projective modules is that of injective modules.

Definition 1.2.11. Let R be a ring. A left R-module Q is said to be injective if, for any R-monomorphism g : A → B, where A, B are left R-modules, and any R-homomorphism f : A → Q, there exists an R-homomorphism h : B → Q such that the following diagram commutes:

      0 ——→ A ——g——→ B
            f│      ╱
             ↓    ╱ h
             Q ↙

That is, hg = f.

In the case that R is injective as a left module over itself, we say that R is left
self-injective. We will examine self-injective Leavitt path algebras in Section 4.4.

Lemma 1.2.12. Any direct summand of an injective R-module is injective.

Proof. Suppose that Q = M ⊕ N is an injective R-module. Consider the following diagram of R-modules and R-homomorphisms, where g is a monomorphism:

      0 ——→ A ——g——→ B
            f│
             ↓
             M

Let i : M → Q and π : Q → M be the standard inclusion and projection maps, respectively. Now i ∘ f is an R-homomorphism from A to Q, and so by the injectivity of Q there exists an R-homomorphism h̄ : B → Q such that the following diagram commutes:

      0 ——→ A ——g——→ B
            f│       │
             ↓       │
             M       │ h̄
            i│       │
             ↓       │
             Q ←—————┘

That is, i ∘ f = h̄ ∘ g. Define h : B → M by h = π h̄. Then hg = π h̄ g = π i f = 1M f = f, and so M is injective.

A similar proof shows that the direct product of injective R-modules is also injective. Furthermore, we can use similar arguments to show that any direct summand of a projective R-module is projective, and that the direct sum of projective R-modules is projective.

We conclude this section with a series of results that generalise well-known results
for R-modules, where R is unital, to the more general case that R has local units.
This first result is from [ARM2, Proposition 2.2].

Proposition 1.2.13. Let R be a ring with local units. Then for any idempotent
e ∈ R, the left ideal Re is a projective module in the category R-Mod.

Proof. Since R has local units, it is easy to see that Re is unital and nondegenerate

and is therefore in the category R-Mod. Now consider the diagram

          Re
           │ f
           ↓
      B ——g——→ C ——→ 0

where B, C are in R-Mod and g is an epimorphism. Since e = e² ∈ Re, we can define c = f(e). Furthermore, since g is an epimorphism there exists b ∈ B for which g(b) = c. Now ec = ef(e) = f(e²) = f(e) = c, and so g(eb) = eg(b) = ec = c. Define h : Re → B by h(xe) = xeb for all x ∈ R. Since xe = 0 implies xeb = 0, h is a well-defined R-homomorphism. Thus, for any x ∈ R, gh(xe) = g(xeb) = xg(eb) = xc = xf(e) = f(xe), and so gh = f and Re is projective.

Let R be a ring, let A be a right R-module, B a left R-module and let G be an abelian group. A function f : A × B → G is called an R-bilinear map if, for all a, a′ ∈ A, b, b′ ∈ B and r ∈ R, we have

(i) f(a + a′, b) = f(a, b) + f(a′, b),

(ii) f(a, b + b′) = f(a, b) + f(a, b′), and

(iii) f(ar, b) = f(a, rb).

The tensor product of A and B over R is an abelian group A ⊗R B together with a bilinear map ⊗ : A × B → A ⊗R B that is universal; that is, for every abelian group G and every bilinear map f : A × B → G, there exists a unique group homomorphism f̄ : A ⊗R B → G for which the following diagram commutes:

      A × B ——⊗——→ A ⊗R B
            ╲        │
           f  ╲      │ f̄
                ↘    ↓
                  G

That is, f̄ ∘ ⊗ = f. It can be shown that such an abelian group A ⊗R B will always exist (see, for example, [O, Section 2.2]). The group A ⊗R B is generated by elements of the form a ⊗R b, where a ∈ A, b ∈ B and a ⊗R b = ⊗((a, b)).
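Condition (iii) already forces dramatic collapsing. A classical computation (not from the thesis) shows ℤ/2ℤ ⊗ℤ ℤ/3ℤ = 0: for any generator a ⊗ b,

```latex
a \otimes b = 3(a \otimes b) - 2(a \otimes b) = (a \otimes 3b) - (2a \otimes b) = (a \otimes 0) - (0 \otimes b) = 0,
```

since 2a = 0 in ℤ/2ℤ and 3b = 0 in ℤ/3ℤ, so every generator, and hence the whole group, vanishes.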

Furthermore, if C is a right R-module, D a left R-module and f : A → C and g : B → D are R-homomorphisms, we can define a map f ⊗R g : A ⊗R B → C ⊗R D by (f ⊗R g)(a ⊗R b) = f(a) ⊗R g(b) for all a ∈ A, b ∈ B. When it is clear we are taking the tensor product over R we may write A ⊗R B as simply A ⊗ B. We use the tensor product to define the following important concept.

Definition 1.2.14. Let R be a ring. A module M ∈ R-Mod is said to be flat if the functor − ⊗R M is exact on the category R-Mod. That is, whenever

      0 → A —f→ B —g→ C → 0

is a short exact sequence in R-Mod, then

      0 → A ⊗ M —f⊗1M→ B ⊗ M —g⊗1M→ C ⊗ M → 0

is also a short exact sequence.

It can be shown for any M that this sequence is always exact on the right-hand
side, and so to show that M is flat it suffices to show that any monomorphism
f : A → B gives rise to a monomorphism f ⊗ 1M : A ⊗ M → B ⊗ M .
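A standard non-example (not from the thesis) makes this failure concrete: ℤ/2ℤ is not flat as a ℤ-module. Tensoring the monomorphism "multiplication by 2" on ℤ with ℤ/2ℤ gives

```latex
\mathbb{Z} \otimes_{\mathbb{Z}} \mathbb{Z}/2\mathbb{Z} \xrightarrow{\ (\times 2)\, \otimes\, 1\ } \mathbb{Z} \otimes_{\mathbb{Z}} \mathbb{Z}/2\mathbb{Z},
```

and under the identification ℤ ⊗ℤ ℤ/2ℤ ≅ ℤ/2ℤ this induced map is again multiplication by 2, which is the zero map and hence not a monomorphism.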

We now give several results concerning flat modules over rings with local units.
We begin with the following lemma from [ARM2, Lemma 2.9].

Lemma 1.2.15. Let R be a ring with local units. For any M ∈ Mod-R, the map µM : M ⊗ R → M given by µM((m1 ⊗ r1) + · · · + (mn ⊗ rn)) = m1 r1 + · · · + mn rn is an isomorphism of right R-modules.

Proof. First note that M ⊗ R is indeed a right R-module, with module multiplication given by ((m1 ⊗ r1) + · · · + (mn ⊗ rn))r = (m1 ⊗ r1 r) + · · · + (mn ⊗ rn r) for all (m1 ⊗ r1) + · · · + (mn ⊗ rn) ∈ M ⊗ R and all r ∈ R. Since M ∈ Mod-R, M is unital and so MR = M. Thus any m ∈ M can be written m = m1 r1 + · · · + mn rn for some mi ∈ M and ri ∈ R, and so µM is an epimorphism. Now suppose m1 r1 + · · · + mn rn = 0. Since R has local units, there exists an idempotent e ∈ R such that ri e = ri for each i = 1, . . . , n. Then we have

      (m1 ⊗ r1) + · · · + (mn ⊗ rn) = (m1 ⊗ r1 e) + · · · + (mn ⊗ rn e) = (m1 r1 ⊗ e) + · · · + (mn rn ⊗ e) = (m1 r1 + · · · + mn rn) ⊗ e = 0 ⊗ e = 0.

Thus µM is a monomorphism, completing the proof.



This lemma leads to the following corollary.

Corollary 1.2.16. Any ring R with local units is flat as a left R-module.

Proof. Let A, B ∈ Mod-R and let f : A → B be a monomorphism. Then, by Lemma 1.2.15, there exist isomorphisms µA : A ⊗ R → A and µB : B ⊗ R → B. Let a ⊗ r be a generating element of A ⊗ R. Then

      f µA(a ⊗ r) = f(ar) = f(a)r = µB(f(a) ⊗ r) = µB ∘ (f ⊗ 1R)(a ⊗ r)

and so f µA = µB ∘ (f ⊗ 1R); that is, the following diagram commutes:

      0 ——→ A ⊗ R ——f ⊗ 1R——→ B ⊗ R
              │ µA               │ µB
              ↓                  ↓
      0 ——→   A   ————f————→     B

Since f, µA and µB are monomorphisms, so too is f ⊗ 1R, and thus R is flat as a left R-module.

The following result regarding flat modules is from Rotman [Ro, Theorem 3.60].
Though Rotman’s original result is for unital rings, the proof is the same for rings
with local units and so we omit it.

Proposition 1.2.17. Let R be a ring with local units, F a flat left R-module and
K a submodule of F . Then F/K is a flat left R-module if and only if K ∩ IF = IK
for every finitely generated right ideal I of R.

Recall that, for a unital ring R, an R-module is said to be free if it has a basis;
that is, a linearly independent generating set. The following definition, given in
[ARM2, Definition 2.11], extends this notion to rings with local units.

Definition 1.2.18. Let R be a ring with local units and let F be a left R-module. Suppose there exists an index set I and sets B = {bi}i∈I ⊆ F and U = {ui}i∈I ⊆ R, where each ui is an idempotent and bi = ui bi for all i ∈ I. We say F is a U-free left R-module with U-basis B if, for all x ∈ F, there exists a unique family {ri}i∈I ⊆ R (with only finitely many ri nonzero) such that ri = ri ui for each i ∈ I and

      x = Σi∈I ri bi.

Note that, in particular, we have F = ⊕i∈I Rbi.

If R is a unital ring with identity 1, then taking ui = 1 for each i ∈ I reduces the
definition of a U -free left R-module to the familiar definition of a free left R-module.

We can expand the following result from [Ro, Theorem 3.62] to the more general
case involving local units and U -free modules, applying Proposition 1.2.17 in place
of [Ro, Theorem 3.60]. Again, we will omit the proof.

Theorem 1.2.19. Let R be a ring with local units and let F be a U -free left R-
module. Then, for any submodule S of F , the following statements are equivalent:

(i) F/S is a flat R-module.

(ii) For each element x ∈ S, there exists a homomorphism f : F → S such that


f (x) = x.

(iii) For each finite set of elements {x1 , . . . , xn } of S, there is a homomorphism


f : F → S such that f (xi ) = xi for each i = 1, . . . , n.

We conclude this section with the following proposition from [ARM2, Proposition
2.17], which generalises the well-known result for unital rings that any R-module is
the epimorphic image of a free R-module.

Proposition 1.2.20. If R is a ring with local units then every module M ∈ R-Mod
is the epimorphic image of a U -free left R-module.

Proof. Let M ∈ R-Mod. By Lemma 1.2.1, for every m ∈ M there exists an idempotent em ∈ R such that m = em m. Since M is unital, for any x ∈ M we have x = Σm∈M rm m = Σm∈M rm em m, where only finitely many rm are nonzero. Let φ : ⊕m∈M Rem → M be the map (rm em)m∈M ↦ Σm∈M rm em m, which, by the above observation, is an epimorphism.
To show that ⊕m∈M Rem is U-free, let U = {em}m∈M and B = {bm}m∈M ⊆ ⊕m∈M Rem, where each bm = (bi)i∈M, with bi = em for i = m and bi = 0 otherwise. Then bm = em bm for all m ∈ M. Furthermore, if we take an arbitrary x = (sm em)m∈M ∈ ⊕m∈M Rem (where each sm ∈ R and only finitely many sm are nonzero), then taking rm = sm em we have a unique family {rm}m∈M ⊆ R such that x = Σm∈M rm bm and rm em = rm. Thus ⊕m∈M Rem is a U-free left R-module.

1.3 Morita Equivalence


In this section we examine the concept of ‘Morita Equivalence’, which was defined by
Japanese mathematician Kiichi Morita in 1958. It is a powerful concept: if we can
show that two rings are Morita equivalent, then these two rings will share various
‘Morita invariant’ ring-theoretic properties. We will appeal to Morita equivalence
at several points in this thesis; in particular, we will show in this section that the
property ‘purely infinite’ is Morita invariant (Theorem 1.3.17) and use this to expand
Theorem 1.2.5 to rings with local units (Theorem 1.3.19).

Morita equivalence is a fairly deep and complex field of theory. Here we give
enough background so that the basic concepts can be understood and we have
sufficient tools to apply these concepts to relevant areas; however, some results will
be stated without proof, as they require a large amount of background theory that
would take us far outside the bounds of this thesis. We begin by looking at category
theory.

Definition 1.3.1. A category C is made up of two sets: Obj(C), the set of objects
in C, and Mor(C), the set of morphisms between objects in C. If A, B ∈ Obj(C),
we let Mor(A, B) denote the set of morphisms from A to B. Furthermore, if f ∈
Mor(A, B), we can denote this by the usual function notation f : A → B.

Moreover, there exists an operation ◦ such that, for any A, B, C ∈ Obj(C),


we have ◦ : Mor(B, C) × Mor(A, B) → Mor(A, C). This operation is associative,
so that for all A, B, C, D ∈ Obj(C) and all f ∈ Mor(A, B), g ∈ Mor(B, C) and
h ∈ Mor(C, D) we have h ◦ (g ◦ f ) = (h ◦ g) ◦ f . Furthermore, for all A ∈ Obj(C)
there exists a unique morphism 1A ∈ Mor(A, A) for which f ◦ 1A = f and 1A ◦ g = g,
for all f ∈ Mor(A, B), g ∈ Mor(C, A) and B, C ∈ Obj(C).

In Section 1.2 we introduced the category R-Mod. In light of the above definition,
we can see that the objects of R-Mod are the unital, nondegenerate left R-modules,
the morphisms are R-homomorphisms between such modules, and the operation ◦
is function composition.

Definition 1.3.2. Let C and D be two categories. A covariant functor is a


map F from C to D, denoted F : C → D, that maps each object A ∈ Obj(C) to
F(A) ∈ Obj(D), and each morphism f : A → B to F(f ) : F(A) → F(B), for all
A, B ∈ Obj(C). Furthermore, this map must satisfy the following two conditions:

(i) for all A, B, C ∈ Obj(C) and all f ∈ Mor(A, B) and g ∈ Mor(B, C) we have
F(g ◦ f ) = F(g) ◦ F(f ), and

(ii) for all A ∈ Obj(C) we have F(1A ) = 1F (A) .

A contravariant functor G : C → D is defined similarly, except that G takes


each morphism f : A → B to G(f ) : G(B) → G(A), and thus condition (i) is
modified to G(g ◦ f ) = G(f ) ◦ G(g), where f and g are defined as above.

We illustrate the concept of functors with the following example.

Example 1.3.3. Let R be a ring. Given A, B ∈ R-Mod, let Hom(A, B) denote


the group of R-homomorphisms from A to B, as usual. Furthermore, let Ab denote
the category of abelian groups. Now let M be a fixed module in R-Mod. We
define the map F : R-Mod → Ab by setting F(A) = Hom(M, A) for all A ∈ R-Mod. Furthermore, for all A, B ∈ R-Mod and all f ∈ Hom(A, B), we define F(f) :
Hom(M, A) → Hom(M, B) by F(f )(h) = f h, for all h ∈ Hom(M, A). Then F(f )
is a function from F(A) to F(B).
We show that F is a covariant functor. For all A, B, C ∈ R-Mod, and all
f ∈ Hom(A, B) and g ∈ Hom(B, C), we have F(gf )(h) = gf h = F(g)(f h) =
F(g)(F(f )(h)) for all h ∈ Hom(M, A), and thus F(gf ) = F(g) ◦ F(f ), satisfying
condition (i). Furthermore, F(1A )(h) = h for all h ∈ Hom(M, A), and so F(1A ) is
the identity on the group Hom(M, A) = F(A), satisfying condition (ii). Thus F is
a covariant functor.

We now move on to a concept that allows us to say when two functors are
‘equivalent’ in some way.

Definition 1.3.4. Let C and D be two categories, and let F and G be two covariant functors from C to D. A natural transformation η from F to G (denoted η : F → G) associates to each A ∈ Obj(C) a morphism ηA : F(A) → G(A) in D, such that for every f ∈ Mor(A, B) (where B ∈ Obj(C)) we have ηB ∘ F(f) = G(f) ∘ ηA. In other words,

      F(A) ——F(f)——→ F(B)
        │ ηA           │ ηB
        ↓              ↓
      G(A) ——G(f)——→ G(B)

is a commutative diagram in the category D. (Note that if F and G are contravariant functors then the horizontal arrows in the above diagram are reversed, so that F(f) : F(B) → F(A) and G(f) : G(B) → G(A).)

Furthermore, if ηA is an isomorphism for all A ∈ Obj(C), then η is said to be a natural isomorphism or natural equivalence. In this case, we say that F and G are naturally isomorphic and write F ≅ G.

The concept of natural equivalence now allows us to define Morita equivalence.

Definition 1.3.5. Let R and S be rings. We say that the categories R-Mod and S-Mod are equivalent if there exist functors F : R-Mod → S-Mod and G : S-Mod → R-Mod such that

      G ∘ F ≅ the identity functor on R-Mod, and
      F ∘ G ≅ the identity functor on S-Mod.

Furthermore, if R-Mod and S-Mod are equivalent then we say that R is Morita equivalent to S.

A ring-theoretic property P is said to be Morita invariant if, whenever a ring


R has property P, so too does every ring S that is Morita equivalent to R. It can
be shown that a property P is Morita invariant if it can be characterised purely

in terms of R-Mod, without referencing elements of the modules or elements of R


itself.

We now define a concept that allows us to give an alternative definition of Morita


equivalence.

Definition 1.3.6. Let R and S be two rings, let N be an R-S-bimodule and M an S-R-bimodule, and let (−, −) : N × M → R and [−, −] : M × N → S be two maps. Furthermore, suppose we have two maps φ : N ⊗S M → R and ϕ : M ⊗R N → S given by

      φ(n ⊗ m) = (n, m) and ϕ(m ⊗ n) = [m, n]

for which the following associativity conditions hold:

      φ(n ⊗ m)n′ = n ϕ(m ⊗ n′) and ϕ(m ⊗ n)m′ = m φ(n ⊗ m′)

for all m, m′ ∈ M and n, n′ ∈ N.

A Morita context is a sextuple (R, S, M, N, φ, ϕ) satisfying the above conditions. Furthermore, we say that this Morita context is surjective if both φ and ϕ are surjective. Note that in this case we have R = NM and S = MN.
A ring R is said to be idempotent if R² := {r1 s1 + · · · + rn sn : ri, si ∈ R} = R. Note that if R has local units, then for any r ∈ R there exists an idempotent e ∈ R such that r = er ∈ R², and so any ring with local units is idempotent. This definition allows us to give an equivalent condition for Morita equivalence, as we see in the following theorem from García and Simón [GS, Proposition 2.3], which we state without proof.

Theorem 1.3.7. Let R and S be two idempotent rings. Then R and S are Morita equivalent if and only if there exists a surjective Morita context (R, S, M, N, φ, ϕ).

We owe the following results to Ánh and Márki, whose research examining Morita
equivalence for non-unital rings is invaluable, as we will require many of these results
to be valid for rings that do not necessarily have identity. The first proposition is
from [AM, Proposition 3.3].

Proposition 1.3.8. Let R and S be two Morita equivalent rings with local units.
Then the lattice of ideals of R is isomorphic to the lattice of ideals of S; in particular,
R is simple if and only if S is simple.

This second proposition is from [AM, Proposition 3.5].

Proposition 1.3.9. Let R be a ring with local units. If there exists an idempotent
e ∈ R for which R = ReR, then R is Morita equivalent to the subring eRe.

We now establish some definitions and results that are useful in the context of
Morita equivalence.

Definition 1.3.10. A left R-module P is said to be a generator for R-Mod if every


left R-module M is the epimorphic image of P (I) for some index set I. Furthermore,
we say that P is a progenerator if P is a projective generator.

For any ring R, if we view R as a left R-module then R is a generator for R-Mod. To see this, let M be a module in R-Mod. Since M = RM by the unital property of R-Mod, for any m ∈ M we have m = Σn∈M rn n for some rn ∈ R (where only a finite number of rn are nonzero). Let I = M and define φ : R(I) → M by φ((rm)m∈I) = Σm∈I rm m. Then φ is an epimorphism and so R is a generator for R-Mod.

Definition 1.3.11. Let R be a ring and let P be a right R-module. The trace of P, denoted tr(P), is defined by

      tr(P) = {g1(p1) + · · · + gn(pn) : n ∈ N, pi ∈ P, gi ∈ HomR(P, R)};

that is, tr(P) is the set of all finite sums of elements of the form g(p) with p ∈ P and g ∈ HomR(P, R). It can be shown that tr(P) is a two-sided ideal of R (see, for example, [L2, Proposition 2.40]).

We now look at two results that allow us to determine when a right R-module P
is a generator for Mod-R. The following result has been established for unital rings
(see for example [L2, Theorem 18.8]). Here we extend it to rings with local units by
adapting part of the proof of [AA2, Proposition 10], (i) ⇐⇒ (ii).

Proposition 1.3.12. Let R be a ring with local units and let P be a right R-module.
If tr(P ) = R then P is a generator for Mod-R.

Proof. Let E = {ei : i ∈ I} be a set of local units for R. If tr(P) = R, then for each ei ∈ E we can write ei = gi1(pi1) + · · · + gis(i)(pis(i)) for some pt ∈ P and some gt ∈ HomR(P, R). If we define λei : R → eiR by λei(r) = eir, then letting Ji = {i1, . . . , is(i)} we have that λei ∘ (⊕t∈Ji gt) : P(Ji) → R → eiR is an epimorphism. To see this, take an arbitrary eir ∈ eiR. Then

      eir = λei(eir) = (λei(ei))r = λei(Σt∈Ji gt(pt))r = (λei ∘ (⊕t∈Ji gt))((pt r)t∈Ji),

and so λei ∘ (⊕t∈Ji gt) is indeed an epimorphism. Let J be the disjoint union of the sets Ji and define ϕ : P(J) → R by ϕ|P(Ji) = λei ∘ (⊕t∈Ji gt). Since any element r ∈ R is contained in eiR for some local unit ei, we have that ϕ is also an epimorphism.
Now take an arbitrary right R-module M. Since R is a generator for Mod-R, M is the epimorphic image of R(Λ) for some index set Λ. Thus M is the epimorphic image of (P(J))(Λ) and so P is a generator for Mod-R.

Proposition 1.3.12 leads to the following lemma, which has also been adapted
from the proof of [AA2, Proposition 10], (i) ⇐⇒ (ii).

Lemma 1.3.13. Let R be a ring with local units and let P be a nonzero, finitely
generated projective right R-module. If R is simple then P is a generator for Mod-R.

Proof. Let P be a nonzero, finitely generated projective right R-module. Since P is finitely generated we can write P = x1 R + · · · + xn R, where each xi ∈ P. Define a homomorphism φ : Rn → P by φ((a1, . . . , an)) = x1 a1 + · · · + xn an. Since φ is an epimorphism and P is projective, there must exist P′ ∈ Mod-R for which Rn ≅ P ⊕ P′ (by Lemma 1.2.10).
Thus P is isomorphic to a direct summand of Rn (since, if θ : P ⊕ P′ → Rn is an isomorphism, then Rn = θ(P) ⊕ θ(P′)) and so HomR(P, Rn) ≠ 0. However, since HomR(P, Rn) ≅ (HomR(P, R))n (by the right R-module analogue of Proposition 1.2.4), we have (HomR(P, R))n ≠ 0 and so HomR(P, R) ≠ 0. Thus tr(P) is nonzero and so, since tr(P) is a two-sided ideal of R and R is simple, we have tr(P) = R. Thus, by Proposition 1.3.12, P is a generator for Mod-R.

The significance of generators in the context of Morita equivalence is illustrated


in the following two results, which we will state without proof. The first proposition
is from Ánh and Márki [AM, Theorem 2.5] and generalises a well-known result for
unital rings (see, for example, [L2, Proposition 18.33]) to the more general case of
rings with local units. First, we need to define the concept of a locally projective
module.

Definition 1.3.14. A module P ∈ Mod-R is said to be locally projective if there exists a direct system (Pi)I of finitely generated projective summands of P for which P = lim→ i∈I Pi. (See Appendix A for more information on direct limits.) Note that if P is a finitely generated projective module, then P is locally projective (taking (Pi)I to be simply P).

Proposition 1.3.15. Let R and S be two rings with local units. Then R is Morita equivalent to S if and only if there is a locally projective generator PR = lim→ i∈I Pi in Mod-R for which S ≅ lim→ i∈I End(Pi).

In the case that PR is a progenerator in Mod-R, Proposition 1.3.15 simplifies to 'R is Morita equivalent to S if and only if S ≅ End(PR)'. In particular, we have that R is Morita equivalent to End(PR).

This second proposition is from Lam [L2, Proposition 18.44]. While the result
is given in a unital context, the proof is valid for any ring. Here we state it without
proof.

Proposition 1.3.16. Let R be a ring and let PR be a progenerator in Mod-R. Then


the lattice of right ideals in End(PR ) is isomorphic to the lattice of submodules of
PR .

Note that if R and S are two Morita equivalent rings and PR and PS are pro-
generators for R and S, respectively, then combining Propositions 1.3.8, 1.3.15 and
1.3.16 (and viewing R, S, End(PR ) and End(PS ) as right modules over themselves)
we have that the lattices of submodules of R, S, End(PR ), End(PS ), PR and PS are
all isomorphic.

We now come to the main result of this section, the proof of which has been
expanded from [AA2, Proposition 10], (i) ⇐⇒ (ii).

Theorem 1.3.17. Let R and S be two Morita equivalent rings with local units.
Then R is purely infinite simple if and only if S is purely infinite simple; that is,
the property ‘purely infinite simple’ is Morita invariant.

Proof. Suppose that R is purely infinite simple and let P be a nonzero, finitely generated projective right R-module. We know that P is a generator for Mod-R by Lemma 1.3.13. Since R is purely infinite, it must contain an infinite idempotent e such that the right ideal eR is directly infinite, so that there exists a nonzero submodule B of R such that

      eR ≅ B ⊕ eR ≅ · · · ≅ Bm ⊕ eR

for all m ∈ N.

for all m ∈ N. Now, since B is a direct summand of eR and eR is a projective


right R-module (by the right R-module analogue of Proposition 1.2.13), B is also a
projective right R-module. Furthermore, B is unital and nondegenerate since eR is
unital and nondegenerate.
Since B ⊆ eR ⊆ R, we have HomR(B, R) ≠ 0 (since it contains the inclusion map from B to R) and so tr(B) ≠ 0. Thus, since R is simple, tr(B) = R and so B is a generator for Mod-R (by Proposition 1.3.12). Since P is finitely generated, there is an n ∈ N for which there exists an epimorphism α : Bn → P. Therefore, since P is projective, we have Bn ≅ P ⊕ C for some submodule C of Bn (by Lemma 1.2.10). Thus, setting Q = C ⊕ eR, we have

      eR ≅ Bn ⊕ eR ≅ P ⊕ C ⊕ eR = P ⊕ Q.

Let η : eR → P ⊕ Q be the above isomorphism. Let D be a nonzero submodule of P, so that D′ = η⁻¹(D) is a nonzero submodule – and therefore a nonzero right ideal – of eR. Since R is purely infinite, D′ contains an infinite idempotent f. Thus T′ = fR is a directly infinite submodule of eR and so, since f ∈ eR, fR is a direct summand of eR (by Lemma 1.2.3 (iii)). Thus, letting T = η(T′), we have T ⊆ D ⊆ P. Furthermore, since T′ = fR is a direct summand of eR, T must be a direct summand of P ⊕ Q and therefore of P. Thus every submodule of P contains a nonzero direct summand of P that is directly infinite.
Conversely, suppose that for every nonzero, finitely generated projective right R-module P we have that every submodule of P contains a nonzero direct summand of P that is directly infinite. We show that R must be purely infinite. Let I be a nonzero right ideal of R and let 0 ≠ x ∈ I, so that xR ⊆ I. Since R has local units, x = ex for some idempotent e ∈ R, and thus xR is a right ideal of eR. Now eR is a nonzero, finitely generated, projective (by Proposition 1.2.13) right R-module, and so xR contains a nonzero direct summand T of eR that is directly infinite. By Lemma 1.2.3 (ii) we have that T = fR, where f is an idempotent. Thus f is an infinite idempotent, and f = f² ∈ fR ⊆ xR ⊆ I. We can conclude that every nonzero right ideal of R contains an infinite idempotent and so R is purely infinite.
We already know that simplicity is a Morita invariant property (by Proposition 1.3.8). Furthermore, suppose that PS is a nonzero, finitely generated projective right S-module. Then PS is a generator for Mod-S (by Lemma 1.3.13) and so the lattice of submodules of P must be isomorphic to the lattice of submodules of PS by our observation on page 28. Thus every submodule of PS contains a nonzero direct summand of PS that is directly infinite and so, as shown in the previous paragraph, S is purely infinite. Thus 'purely infinite' is a Morita invariant property between rings with local units, completing the proof.

Theorem 1.3.17 leads to the following useful result.

Proposition 1.3.18. Let R be a ring with local units. Then R is purely infinite
simple if and only if the subring eRe is purely infinite simple for every nonzero
idempotent e ∈ R.

Proof. Suppose that R is purely infinite simple. Then, for every nonzero idempotent
e ∈ R we have R = ReR (by the simplicity of R) and so, by Proposition 1.3.9, R
is Morita equivalent to eRe. Thus, since the property ‘purely infinite simple’ is
a Morita invariant of R (by Theorem 1.3.17), eRe must be purely infinite simple.
Conversely, if eRe is purely infinite simple for every nonzero idempotent e ∈ R then

R is simple, by Proposition 1.1.2. Thus, Proposition 1.3.9 again gives that R is


Morita equivalent to each nonzero ring eRe and so R is purely infinite simple.

We are now finally in a position to adapt Theorem 1.2.5 to the more general case
in which R has local units. Note the subtle difference in condition (ii).

Theorem 1.3.19. Let R be a simple ring with local units. Then R is purely infinite
if and only if the following conditions are satisfied:

(i) R is not a division ring, and

(ii) for every pair of nonzero elements x, y ∈ R, there exist elements s, t ∈ R such
that sxt = y.

Proof. Suppose R is purely infinite. Then R contains an idempotent e such that eR = A ⊕ B, where A ≅ eR and B is nonzero. In particular, R contains a nonzero proper right ideal, and so R cannot be a division ring. Now choose a pair of nonzero elements x, y ∈ R. Since R has local units, there exists an idempotent e such that x, y ∈ eRe. By Proposition 1.3.18, we know that eRe must be purely infinite simple. Since e is the identity for eRe, by Theorem 1.2.5 there exist elements s′, t′ ∈ eRe such that s′xt′ = e. Thus, taking s = s′ and t = t′y, we have sxt = y for s, t ∈ R, proving condition (ii).

Now suppose that conditions (i) and (ii) hold. Let I be a nonzero ideal of R and let x be a nonzero element of I. Then, by condition (ii), for any y ∈ R there exist a, b ∈ R such that y = axb ∈ I, and so R must be simple. Now let f be a nonzero idempotent of R (such an element must exist since R has local units). Since R is simple we have RfR = R, and so R is Morita equivalent to the subring fRf by Proposition 1.3.9. Thus the lattice of ideals of R is isomorphic to the lattice of ideals of fRf (by Proposition 1.3.8) and so it follows from condition (i) that fRf is not a division ring.
Now take a nonzero element x ∈ fRf. Applying condition (ii), we can find s′, t′ ∈ R such that s′xt′ = f. Let s = fs′f and t = ft′f. Then, noting that x = fxf, we have sxt = (fs′f)x(ft′f) = fs′(fxf)t′f = f(s′xt′)f = f(f)f = f. Since f is the identity for fRf and s, t ∈ fRf, Theorem 1.2.5 tells us that fRf is purely infinite. Furthermore, since R is simple, R = RfR and so R and fRf are Morita equivalent (by Proposition 1.3.9). Thus Theorem 1.3.17 gives that R is purely infinite, completing the proof.

The equivalence given in Theorem 1.3.19 will prove useful when we come to
determine precisely which Leavitt path algebras are purely infinite simple in Section
2.3.

1.4 Graph Theory


As we will see, Leavitt path algebras are K-algebras that are generated, in a way, by
directed graphs. In this section we will define a directed graph and introduce several
important graph-theoretic concepts that will be useful when examining Leavitt path
algebras.

Definition 1.4.1. A directed graph E = (E 0 , E 1 , r, s) consists of two sets, E 0
and E 1 , and two maps r, s : E 1 → E 0 . The elements of E 0 are called vertices
and the elements of E 1 edges. For any edge e in E 1 , s(e) is the source of e and
r(e) is the range of e. If s(e) = v and r(e) = w, then we say that v emits e and
w receives e. Informally, we can think of e as having direction from v to w. If
r(e1 ) = s(e2 ) for some edges e1 , e2 ∈ E 1 , we say that e1 and e2 are adjacent.

Since we will be dealing exclusively with directed graphs in this thesis, we will
henceforth refer to them as simply ‘graphs’.

Example 1.4.2. Consider the graph E, where E 0 = {v0 , v1 , v2 , v3 }, E 1 = {e1 , e2 , e3 }
and s(e1 ) = v0 , r(e1 ) = s(e2 ) = s(e3 ) = v1 , r(e2 ) = v2 and r(e3 ) = v3 . We can
illustrate this with the following diagram:

                        •v2
                  e2  ↗
    •v0 --e1--> •v1
                  e3  ↘
                        •v3
CHAPTER 1. PRELIMINARIES 33

From Definition 1.4.1 it follows that, for any vertex v in E 0 , s−1 (v) is the set of
all edges emitted by v, while r−1 (v) is the set of all edges received by v. If v does
not emit any edges, so that s−1 (v) = ∅, then v is called a sink. If v does not receive
any edges, it is called a source. Referring to the graph E in Example 1.4.2, we can
see that v0 is a source, while v2 and v3 are sinks.
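For readers who like to experiment, the data of Definition 1.4.1 is easy to encode computationally. The following Python sketch (the dictionary representation and function names are ours, not part of the thesis material) records the maps s and r and recovers the sinks and sources of Example 1.4.2.

```python
# A directed graph E = (E0, E1, r, s): we store E0 as a set of vertex names
# and the maps s, r : E1 -> E0 as dictionaries keyed by edge name.

def sinks(E0, s):
    """Vertices v with s^{-1}(v) empty, i.e. emitting no edges."""
    return {v for v in E0 if not any(sv == v for sv in s.values())}

def sources(E0, r):
    """Vertices v with r^{-1}(v) empty, i.e. receiving no edges."""
    return {v for v in E0 if not any(rv == v for rv in r.values())}

# The graph E of Example 1.4.2.
E0 = {"v0", "v1", "v2", "v3"}
s = {"e1": "v0", "e2": "v1", "e3": "v1"}   # s(e1)=v0, s(e2)=s(e3)=v1
r = {"e1": "v1", "e2": "v2", "e3": "v3"}   # r(e1)=v1, r(e2)=v2, r(e3)=v3
```

Running this on Example 1.4.2 recovers exactly the source v0 and the sinks v2, v3 noted above.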

If v is a vertex such that |s−1 (v)| = ∞ then v is called an infinite emitter. If
v is either a sink or an infinite emitter, it is called a singular vertex. Otherwise,
v is called a regular vertex. In other words, a vertex v is regular precisely when
0 < |s−1 (v)| < ∞.

A graph E is said to be finite if E 0 is a finite set. If E contains no infinite
emitters, then E is said to be row-finite. Furthermore, if all infinite emitters in a
graph E emit a countably infinite number of edges then we say that E is countable.
Note that a graph can be finite but not row-finite; for example, consider the graph

    •u ==(∞)==> •v

where (∞) denotes an infinite number of edges from u to v (so that u is an infinite
emitter). Many texts assume that a given graph E is row-finite, or even that E
contains no singular vertices at all. However, in this thesis we will not be making
any such assumptions unless stated otherwise.

A path p in a graph E is a sequence of edges e1 e2 . . . en such that r(ei ) = s(ei+1 )
for all i = 1, 2, . . . , n − 1. A path consisting of n edges is said to have length n, and
we write l(p) = n. If a path p contains an infinite number of edges then we say that
p has infinite length. The source of p, denoted s(p), is the source of its initial
edge, s(e1 ), while (if p has finite length) the range of p, denoted r(p), is the range
of its final edge, r(en ). It is also convenient to think of every vertex v ∈ E 0 as being
a path of length 0, with s(v) = v = r(v).

We denote the set of all paths in E by E ∗ . For a given path p = e1 . . . en ∈ E ∗ ,
we define p0 to be the set of all vertices in p; that is, p0 = {s(e1 )} ∪ {r(ei ) : i = 1, 2, . . . , n}.
Furthermore, if q = e1 . . . em for some m ≤ n then we say that q is an initial
subpath of p.

A path p is said to be a cycle if s(p) = r(p) and s(ei ) ≠ s(ej ) for all i ≠ j. In
other words, a cycle is a path that begins and ends on the same vertex and does not
pass through any vertex more than once. If c is a cycle with s(c) = r(c) = v, then
we say that c is based at v. If a graph E does not contain any cycles, it is said to
be acyclic.

An edge e ∈ E 1 is said to be an exit to the path p = e1 . . . en if there exists an
i ∈ {1, . . . , n} such that s(e) = s(ei ) but e ≠ ei . Note that an exit to a path p does
not have to be external to the path itself. For example, if p contains two distinct
edges ei , ej such that s(ei ) = s(ej ), then both ei and ej are exits for p.

Example 1.4.3. Consider the following graph E:

    [Diagram: an edge f from u to v; a cycle e1 e2 e3 e4 based at v; and an
    edge g from the vertex r(e2 ) on the cycle to w.]

Then p = f e1 e2 g is a path in E ∗ with s(p) = u, r(p) = w and l(p) = 4. If we let
q = f e1 , then q is an initial subpath of p. Furthermore, c = e1 e2 e3 e4 is a cycle in E
based at v, and g is an exit for c.

Definition 1.4.4. We define a relation ≥ on E 0 by setting v ≥ w if there is a path
p ∈ E ∗ such that s(p) = v and r(p) = w. (Note that, because we consider a single
vertex to be a path of length 0, it is possible that v = w.) In this case, we say that
v connects to w.

For a vertex v ∈ E 0 , we define the tree of v, denoted T (v), to be the set of all
vertices in E 0 to which v connects; that is

T (v) = {w ∈ E 0 : v ≥ w}.

Note that we always have v ∈ T (v) since all vertices connect to themselves, by
definition. We can extend the definition of a tree to an arbitrary subset X of E 0
by defining T (X) = ⋃v∈X T (v). Since v ∈ T (v) for each v ∈ X we therefore have
X ⊆ T (X).
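Computationally, T (v) is simply the set of vertices reachable from v, so it can be computed by a breadth-first search over the edge relation. The sketch below (representation and names are ours) uses the edge set we read off the diagram of Example 1.4.5.

```python
# T(v) = {w in E0 : v >= w} from Definition 1.4.4, computed by breadth-first
# search; edges are stored as (source, range) pairs.
from collections import deque

def tree(v, edges):
    """The tree of v: all vertices w with v >= w, including v itself."""
    seen, queue = {v}, deque([v])
    while queue:
        u = queue.popleft()
        for (src, rng) in edges:
            if src == u and rng not in seen:
                seen.add(rng)
                queue.append(rng)
    return seen

# The graph E of Example 1.4.5 (edge set read off the diagram).
edges = [("v1", "v2"), ("v2", "u"), ("v2", "v3"), ("v2", "w"),
         ("v3", "v4"), ("w", "v4"), ("v4", "u")]
```

This reproduces the trees computed by hand in Example 1.4.5: T (v1 ) = E 0 and T (w) = {w, v4 , u}.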

A vertex v ∈ E 0 is said to be a bifurcation (or there is a bifurcation at v)
if v emits more than one edge; that is, |s−1 (v)| > 1. Furthermore, a vertex u ∈ E 0
is said to be a line point if there are no bifurcations or cycles based at any vertex
w ∈ T (u). Note that, by definition, any sink is a line point. We denote the set of
all line points in E 0 by Pl (E).

Example 1.4.5. Consider the following graph E:

    [Diagram: v1 → v2 ; v2 emits edges to u, v3 and w; v3 → v4 ; w → v4 ; v4 → u.]

Then, for example, we have T (v1 ) = E 0 , since there is a path from v1 to every vertex
in E, while T (w) = {w, v4 , u}. The only bifurcation in E is v2 . Furthermore, the
line points in E are u, w, v3 and v4 ; that is, Pl (E) = {u, w, v3 , v4 }.
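The line-point condition is also mechanical to check for a finite graph: u is a line point precisely when no vertex in T (u) has a bifurcation or a cycle based at it. A possible Python sketch (our own encoding, with edges as (source, range) pairs, so parallel edges with equal endpoints are not distinguished):

```python
def line_points(E0, edges):
    """Pl(E): vertices u such that no w in T(u) has a bifurcation at it
    or a cycle based at it (finite graphs, no parallel edges assumed)."""
    def tree(v):
        seen, stack = {v}, [v]
        while stack:
            x = stack.pop()
            for (s_, r_) in edges:
                if s_ == x and r_ not in seen:
                    seen.add(r_); stack.append(r_)
        return seen

    def out(v):
        return [r_ for (s_, r_) in edges if s_ == v]

    def ok(w):
        succ = out(w)
        no_bifurcation = len(succ) <= 1
        no_cycle = all(w not in tree(x) for x in succ)  # no path w -> ... -> w
        return no_bifurcation and no_cycle

    return {u for u in E0 if all(ok(w) for w in tree(u))}

E0 = {"v1", "v2", "v3", "v4", "u", "w"}
edges = [("v1", "v2"), ("v2", "u"), ("v2", "v3"), ("v2", "w"),
         ("v3", "v4"), ("w", "v4"), ("v4", "u")]
```

On the graph of Example 1.4.5 this returns exactly Pl (E) = {u, w, v3 , v4 }.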

Definition 1.4.6. We denote by E ∞ the set of all paths of infinite length in E, and
we denote by E ≤∞ the set E ∞ together with the set of all finite paths in E whose
end vertex is a sink. A vertex v is cofinal if, for every path p ∈ E ≤∞ , there exists
a vertex w in p such that v ≥ w. Furthermore, we say that a graph E is cofinal if
all of its vertices are cofinal.

Now we define two concepts that will feature heavily in our study of Leavitt path
algebras.

Definition 1.4.7. Let H be a subset of E 0 . We say that

(i) H is hereditary if v ∈ H implies T (v) ⊆ H, and

(ii) H is saturated if {r(e) : s(e) = v} ⊆ H implies that v ∈ H, for every regular
vertex v ∈ E 0 .

In other words, a subset H is hereditary if, for each v ∈ H, every vertex that
v connects to is also in H. Furthermore, a subset H is saturated if every regular

vertex that feeds into H, and only into H, is also in H. In the study of Leavitt path
algebras we will be particularly interested in subsets of E 0 that are both hereditary
and saturated, which we call simply ‘hereditary saturated subsets’ of E 0 . Note that
if a vertex v is a line point then any vertex w ∈ T (v) must be a line point, since
T (w) ⊆ T (v). Thus Pl (E) is a hereditary subset of E 0 – however, it is not necessarily
saturated.

Let X be an arbitrary subset of E 0 . The hereditary saturated closure of
X, denoted X̄, is the smallest hereditary saturated subset containing X; that is, for
any hereditary saturated subset H containing X, we have X̄ ⊆ H.

Example 1.4.8. Consider again the graph E from Example 1.4.5:

    [Diagram: v1 → v2 ; v2 emits edges to u, v3 and w; v3 → v4 ; w → v4 ; v4 → u.]

We can see that S = {v1 , v2 } forms a saturated (but not hereditary) subset of E 0 .
Furthermore, H = {u, w, v3 , v4 } forms a hereditary subset of E 0 . Indeed, this is the
set of line points of E, which is always hereditary, as noted above. However, H is
not saturated, since {r(e) : s(e) = v2 } = {u, w, v3 } ⊂ H but v2 ∉ H. It is easy to
see that the hereditary saturated closure of H must contain v2 , and therefore must
also contain v1 . Thus H̄ = E 0 .
Since u is the only sink in E, and E is finite, E ≤∞ is the set of all paths in E ∗
that end in u. Since every vertex in E 0 connects to u, every vertex is cofinal and
thus E is cofinal.
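The cofinality claim can also be verified by machine for this graph. The sketch below is a rough check of Definition 1.4.6 under two simplifying assumptions we impose ourselves: E is finite and acyclic, so that E ≤∞ consists exactly of the finite paths ending in a sink (there are no infinite paths).

```python
# Cofinality check for a finite acyclic graph: every path ending in a sink
# must meet T(v), for every vertex v.  (Encoding and names are ours.)

def is_cofinal(E0, edges):
    def tree(v):                              # T(v) = {w : v >= w}
        seen, stack = {v}, [v]
        while stack:
            x = stack.pop()
            for (s_, r_) in edges:
                if s_ == x and r_ not in seen:
                    seen.add(r_); stack.append(r_)
        return seen

    def out(v):
        return [r_ for (s_, r_) in edges if s_ == v]

    # Vertex sequences of all paths in E^{<=infty}: start anywhere and
    # follow edges until a sink is reached (termination uses acyclicity).
    paths = []
    def extend(p):
        succ = out(p[-1])
        if not succ:
            paths.append(p)
        for x in succ:
            extend(p + [x])
    for v in E0:
        extend([v])

    return all(any(w in tree(v) for w in p) for v in E0 for p in paths)

E0 = {"v1", "v2", "v3", "v4", "u", "w"}
edges = [("v1", "v2"), ("v2", "u"), ("v2", "v3"), ("v2", "w"),
         ("v3", "v4"), ("w", "v4"), ("v4", "u")]
```

The graph of Example 1.4.8 passes the check, while deleting the edge v4 → u breaks cofinality (the length-0 path at the sink u is then unreachable from w).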

Lemma 1.4.9. Let E be a graph and let X be a subset of E 0 . Then the hereditary
saturated closure of X is the set ⋃n≥0 Gn (X), where

G0 (X) = T (X), while for n ≥ 1,

Gn (X) = {v ∈ E 0 : 0 < |s−1 (v)| < ∞ and r(s−1 (v)) ⊆ Gn−1 (X)} ∪ Gn−1 (X).

Proof. First, note that Gm (X) ⊆ Gn (X) for each m ≤ n. For ease of notation,
we set G(X) = ⋃n≥0 Gn (X). Now X ⊆ T (X) = G0 (X) ⊆ G(X), and so G(X)
contains X. To show that G(X) is hereditary, suppose that v ∈ G(X) and let
w ∈ T (v). Furthermore, let p = e1 . . . el be a path with s(p) = v and r(p) = w,
and let n be the minimum integer for which v ∈ Gn (X). If n = 0, then v ∈ T (X)
and so w ∈ T (X) ⊆ G(X) and we are done. If n ≠ 0, then by definition we
have that 0 < |s−1 (v)| < ∞ and r(s−1 (v)) ⊆ Gn−1 (X). In particular, we have
r(e1 ) ∈ Gn−1 (X). Now let m be the minimum integer for which r(e1 ) ∈ Gm (X)
(noting that m ≤ n − 1). If m = 0, then again w ∈ T (X) and we are done;
otherwise we have r(e2 ) ∈ Gm−1 (X) by the same logic as above. Thus repeating
this argument either yields that w ∈ T (X) or r(el ) = w ∈ Gk (X) for some k < n.
In either case, w ∈ G(X) and so G(X) is hereditary.
To show that G(X) is saturated, suppose we have a regular vertex v ∈ E 0 such
that r(s−1 (v)) ⊆ G(X). Let n be the minimum integer for which r(s−1 (v)) ⊆ Gn (X).
Then by definition we have v ∈ Gn+1 (X) ⊆ G(X) and so G(X) is saturated.
Finally, suppose that H is any hereditary saturated subset containing X. Since
H is hereditary, it must contain T (X), so that T (X) = G0 (X) ⊆ H. Furthermore,
since H is saturated, H must contain the set S1 = {v ∈ E 0 : 0 < |s−1 (v)| < ∞
and r(s−1 (v)) ⊆ T (X)}, and so G1 (X) = S1 ∪ T (X) ⊆ H. Continuing this
argument, we see that Gn (X) ⊆ H for each non-negative integer n, and so G(X) =
⋃n≥0 Gn (X) ⊆ H. Therefore G(X) is the hereditary saturated closure of X, as
required.
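The proof of Lemma 1.4.9 is effectively an algorithm: start from T (X) and repeatedly adjoin every regular vertex all of whose ranges already lie in the current set. The following sketch (our encoding; for a finite row-finite graph, so every non-sink vertex is regular) implements that iteration.

```python
# Hereditary saturated closure via the chain G0(X) ⊆ G1(X) ⊆ ... of
# Lemma 1.4.9, for a finite row-finite graph.

def closure(X, E0, edges):
    def tree(S):                      # T(S) = union of T(v) for v in S
        seen, stack = set(S), list(S)
        while stack:
            x = stack.pop()
            for (s_, r_) in edges:
                if s_ == x and r_ not in seen:
                    seen.add(r_); stack.append(r_)
        return seen

    G = tree(X)                       # G0(X) = T(X)
    while True:
        # Regular vertices v with r(s^{-1}(v)) contained in the current G.
        step = {v for v in E0
                if [r_ for (s_, r_) in edges if s_ == v]           # emits edges
                and {r_ for (s_, r_) in edges if s_ == v} <= G}    # ranges in G
        if step <= G:                 # chain has stabilised
            return G
        G |= step

E0 = {"v1", "v2", "v3", "v4", "u", "w"}
edges = [("v1", "v2"), ("v2", "u"), ("v2", "v3"), ("v2", "w"),
         ("v3", "v4"), ("w", "v4"), ("v4", "u")]
```

On the graph of Example 1.4.8 this confirms the closure computed there: starting from H = {u, w, v3 , v4 } the iteration first adjoins v2 and then v1 , giving H̄ = E 0 .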

We end with a proposition that incorporates several of the concepts we have
introduced in this section. This result generalises [APS, Lemma 2.8] from the
row-finite case to an arbitrary graph E.

Proposition 1.4.10. Let E be an arbitrary graph. The only hereditary saturated
subsets of E 0 are ∅ and E 0 if and only if the following conditions are satisfied:

(i) E is cofinal, and

(ii) for every singular vertex u ∈ E 0 , we have v ≥ u for all v ∈ E 0 .



Proof. Suppose that the only hereditary saturated subsets of E 0 are ∅ and E 0 . Let
v ∈ E 0 and p ∈ E ≤∞ . To show that E is cofinal, it suffices to show that we can
find a vertex w ∈ p0 for which w ∈ T (v). Let X = {v}. Then X̄ ≠ ∅, and so
X̄ = E 0 = ⋃n≥0 Gn (X) (where each Gn (X) is as defined in Lemma 1.4.9). Let
m ∈ N be the minimum integer for which Gm (v) ∩ p0 ≠ ∅ and let w ∈ Gm (v) ∩ p0 .
If m > 0, then the minimality of m implies that w ∉ Gm−1 (v), and so w is a regular
vertex and r(s−1 (w)) ⊆ Gm−1 (v). However, since w ∈ p0 and w is not a sink,
there must be some edge e in p for which s(e) = w. Thus r(e) ∈ r(s−1 (w)) and so
r(e) ∈ Gm−1 (v) ∩ p0 , contradicting the minimality of m. Therefore m = 0 and so
w ∈ G0 (v) = T (v), as required. Now take a singular vertex u ∈ E 0 and let v ∈ E 0 .
Again, by our hypothesis there must exist a minimum integer m ∈ N for which
u ∈ Gm (v). Suppose that m > 0. Since u is singular, we must have u ∈ Gm−1 (v)
(since only regular vertices are added with each iteration), a contradiction. Thus
m = 0, and so u ∈ T (v). Thus v ≥ u, as required.

Now suppose that conditions (i) and (ii) hold and that there exists a hereditary
saturated subset H of E 0 such that ∅ ⊂ H ⊂ E 0 . Choose a vertex v such that
v ∈ E 0 \H. Now v cannot be a singular vertex, because condition (ii) would imply
that w ≥ v for any w ∈ H, and therefore that v ∈ H by the hereditary nature of H.
In particular, v is not a sink and so s−1 (v) ≠ ∅. Furthermore, r(s−1 (v)) ⊄ H, for
otherwise we would have v ∈ H by the saturated property of H. Thus there exists
an edge e1 ∈ s−1 (v) for which r(e1 ) ∉ H. Again, r(e1 ) cannot be a singular vertex,
so we can repeat the above procedure to find an edge e2 for which s(e2 ) = r(e1 )
and r(e2 ) ∉ H, and so on. Thus we can form an infinite path p = e1 e2 . . . with
p0 ∩ H = ∅. We know that p is infinite since each vertex in p0 is not in H and
therefore cannot be a sink (by the argument used above). Thus p ∈ E ∞ . However,
since E is cofinal, for any w ∈ H there exists a vertex u ∈ p0 such that w ≥ u, and
so u ∈ H, a contradiction. Thus the only hereditary saturated subsets of E 0 are ∅
and E 0 , as required.
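For a small finite graph, Proposition 1.4.10 can be sanity-checked by brute force: enumerate every subset of E 0 and test the two defining properties of Definition 1.4.7 directly. A sketch (our encoding; row-finite, so the regular vertices are exactly those emitting at least one edge):

```python
# Brute-force enumeration of the hereditary saturated subsets of a finite
# row-finite graph, following Definition 1.4.7.
from itertools import combinations

def hereditary_saturated_subsets(E0, edges):
    def out(v):
        return [r_ for (s_, r_) in edges if s_ == v]

    result = []
    verts = sorted(E0)
    for k in range(len(verts) + 1):
        for combo in combinations(verts, k):
            H = set(combo)
            hereditary = all(w in H for v in H for w in out(v))
            saturated = all(v in H for v in E0
                            if out(v) and set(out(v)) <= H)
            if hereditary and saturated:
                result.append(frozenset(H))
    return result

E0 = {"v1", "v2", "v3", "v4", "u", "w"}
edges = [("v1", "v2"), ("v2", "u"), ("v2", "v3"), ("v2", "w"),
         ("v3", "v4"), ("w", "v4"), ("v4", "u")]
```

The graph of Example 1.4.8 is cofinal and its only singular vertex u satisfies v ≥ u for every v, so the proposition predicts that only ∅ and E 0 survive; the enumeration agrees.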
Chapter 2

Leavitt Path Algebras

2.1 Introduction to Leavitt Path Algebras


In this section we will define the central concept of this thesis, that of the Leavitt
path algebra. This concept ties together many aspects of graph theory and ring the-
ory, as we essentially construct a K-algebra from a given graph E by using its edges
and vertices as generating elements, along with a new set of edges known as ‘ghost
edges’. As we shall see, there are many (often surprising) analogues between graph-
theoretic properties of E and ring-theoretic properties of the associated Leavitt path
algebra, LK (E). Furthermore, many well-known algebras, such as the matrix rings
Mn (K) and the Leavitt algebras L(1, n), are isomorphic to the Leavitt path algebra
of some graph E. Thus a graph can often provide a simple visual representation of
some of the more abstract properties of a particular ring.

We begin by defining the slightly simpler notion of a path algebra.

Definition 2.1.1. Let K be a field and E be an arbitrary graph. The path K-
algebra over E, denoted A(E), is defined to be the K-algebra generated by the
sets E 0 and E 1 , i.e. K[E 0 ∪ E 1 ], subject to the following relations:

(A1) vi vj = δij vi for all vi , vj ∈ E 0 ; and

(A2) s(e)e = e = er(e) for all e ∈ E 1 .


As we will see, the relations (A1) and (A2) defined on A(E) essentially preserve
the path structure of the associated graph E, hence the name ‘path algebra’. In
order to extend this concept to a Leavitt path algebra, we need to introduce the
following concept.

Definition 2.1.2. For an arbitrary graph E, the extended graph of E is the
graph Ê = (E 0 , E 1 ∪ (E 1 )∗ , r′ , s′ ), where (E 1 )∗ = {e∗i : ei ∈ E 1 } and the functions
r′ and s′ are defined by

r′ (e∗ ) = s(e), s′ (e∗ ) = r(e) and r′ (e) = r(e), s′ (e) = s(e)

for all e ∈ E 1 . For ease of notation, we usually denote the functions r′ and s′ as
simply r and s.

Essentially, the extended graph introduces a new set of edges (E 1 )∗ , which is a
copy of E 1 but with the direction of each edge reversed; that is, if e ∈ E 1 runs from
u to v, then e∗ ∈ (E 1 )∗ runs from v to u. To distinguish between the two sets of
edges, we refer to E 1 as the set of real edges and (E 1 )∗ as the set of ghost edges.
A path made up of only real edges is called a real path, while a path made up of
only ghost edges is called a ghost path. For a real path p = e1 . . . en , we denote
the ghost path e∗n . . . e∗1 by p∗ . When we refer to a ‘path’ in a graph E it is assumed
that we are talking about a real path, unless stated otherwise. In particular, the
notation E ∗ continues to refer to the set of real paths in E.

We are now able to define a Leavitt path algebra.

Definition 2.1.3. Let K be a field and let E be an arbitrary graph. The Leavitt
path algebra of E with coefficients in K, denoted LK (E), is defined to be the
K-algebra generated by the sets E 0 , E 1 and (E 1 )∗ , i.e. K[E 0 ∪ E 1 ∪ (E 1 )∗ ], subject
to the following relations:

(A1) vi vj = δij vi for all vi , vj ∈ E 0 ;

(A2) s(e)e = e = er(e) and r(e)e∗ = e∗ = e∗ s(e) for all e ∈ E 1 ;

(CK1) e∗i ej = δij r(ej ) for all ei , ej ∈ E 1 ; and

(CK2) v = Σ{e∈E 1 :s(e)=v} ee∗ for every regular vertex v ∈ E 0 .

In other words, the Leavitt path algebra of a graph E is the path K-algebra over
the extended graph Ê, subject to the relations (CK1) and (CK2), which are known
as the Cuntz-Krieger relations. Note that, by the (A1) relation, each v ∈ E 0 is
an idempotent in LK (E) and the elements of E 0 are mutually orthogonal in LK (E).
Thus the vertices of E form a set of orthogonal idempotents in LK (E).

We now give several examples of Leavitt path algebras. From this point we will
always use K to denote an arbitrary field.

Example 2.1.4. The simplest possible example is the graph I1 consisting of a single
vertex v and no edges:

    •v

In this case we have simply LK (I1 ) = Kv, which is isomorphic to the ring K.
Similarly, if we add an extra vertex w to obtain the graph I1 × I1 :

    •v        •w

then we have LK (I1 × I1 ) = Kv ⊕ Kw ≅ K 2 . (Note that Kv + Kw = Kv ⊕ Kw
since v and w are mutually orthogonal.)

Things get more interesting if we add an edge e between v and w to form the
graph M2 :

    •v --e--> •w

In this case LK (M2 ) is generated by the elements v, w, e, e∗ , subject to the four
Leavitt path algebra relations. We show that LK (M2 ) ≅ M2 (K) by defining the
map φ : LK (M2 ) → M2 (K) on the generators of LK (M2 ) as follows:
map φ : LK (M2 ) → M2 (K) on the generators of LK (M2 ) as follows:

φ(v) = E11 , φ(w) = E22 , φ(e) = E12 and φ(e∗ ) = E21 ,

where Eij is the matrix unit with 1 in the (i, j) position and zeros elsewhere. We
extend φ linearly and multiplicatively. Since any element in M2 (K) is a K-linear
combination of the four matrix units listed above, φ is clearly an epimorphism.
Furthermore, it is easy to see that φ is a monomorphism since these matrix units

are linearly independent. However, we also need to check that φ is well-defined:
specifically, that φ preserves the Leavitt path algebra relations on LK (M2 ). This
is often the most important step when defining a homomorphism from a Leavitt
path algebra to another ring, and often the most time-consuming. In this case,
checking that φ preserves the relations is fairly straightforward since there are only
a small number of generating elements; in larger graphs, this process can become
quite messy and drawn-out.

Using the general matrix unit property that Eij Ekl = δjk Eil , it is easy to see
that φ preserves the (A1), (A2) and (CK1) relations. For example, to check that
the equality e = er(e) is preserved by φ we must check that φ(e) = φ(er(e)) for all
e ∈ E 1 , which in this case reduces to showing

φ(ew) = E12 E22 = E12 = φ(e),

as required. To check that the (CK2) relation is preserved, recall that the relation
is only defined at regular vertices. Thus we only need to check that the equality
v = ee∗ is preserved by φ, which is easily seen since

φ(v) = E11 = E12 E21 = φ(e)φ(e∗ ) = φ(ee∗ ).

Thus φ is an isomorphism and so LK (M2 ) ≅ M2 (K), as claimed.¹
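The relation checks for φ in Example 2.1.4 can also be carried out mechanically. In the sketch below (our own helper names, with integer entries standing in for scalars from K) the matrix units are 2×2 nested lists and each Leavitt path algebra relation becomes an equality of matrix products.

```python
# Verifying that the images of v, w, e, e* under φ satisfy the relations
# (A1), (A2), (CK1) and (CK2) inside M2(K).

def mat(i, j):
    """The matrix unit Eij (1-indexed)."""
    return [[1 if (a, b) == (i - 1, j - 1) else 0 for b in range(2)]
            for a in range(2)]

def mul(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[a][k] * B[k][b] for k in range(2)) for b in range(2)]
            for a in range(2)]

phi = {"v": mat(1, 1), "w": mat(2, 2), "e": mat(1, 2), "e*": mat(2, 1)}

# (A2):  e = e r(e),   i.e. φ(e)φ(w)  = φ(e)
# (CK1): e* e = r(e),  i.e. φ(e*)φ(e) = φ(w)
# (CK2): v = e e*,     i.e. φ(e)φ(e*) = φ(v)
# (A1):  v w = 0,      i.e. φ(v)φ(w)  = 0
```

Each assertion mirrors one of the displayed equalities in the example, e.g. φ(ew) = E12 E22 = E12 = φ(e).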

Example 2.1.5. We can generalise the above example by defining the finite line
graph with n vertices, denoted Mn , to be the graph

    •v1 --e1--> •v2 --e2--> •v3 --> · · · --> •vn−1 --en−1--> •vn

We show that LK (Mn ) ≅ Mn (K). Similar to the case n = 2, we define φ on the
generators of LK (Mn ) by

φ(vi ) = Eii , φ(ei ) = Ei(i+1) , and φ(e∗i ) = E(i+1)i

for all i = 1, . . . , n. As in Example 2.1.4, it is straightforward to show that the Leavitt
path algebra relations are preserved by φ. Furthermore, for any matrix unit Eij ∈
Mn (K) with i < j we have Eij = Ei(i+1) E(i+1)(i+2) . . . E(j−1)j = φ(ei ei+1 . . . ej−1 ).
Similarly, if i > j then Eij = φ(e∗i−1 . . . e∗j+1 e∗j ). Since any element in Mn (K) is a
K-linear combination of such matrix units, φ is an epimorphism. Again, it is easy
to see that φ is a monomorphism, and so LK (Mn ) ≅ Mn (K), as claimed.

¹ In this example, we can also show that φ is an isomorphism by showing that the generators
v, w, e, e∗ form a basis for LK (M2 ), in which case there is no need to check that φ preserves the
Leavitt path algebra relations on LK (M2 ). However, we have chosen to use the latter method in
order to emphasise the importance of this step.

Example 2.1.6. We define the single loop graph, denoted R1 , to be the graph

    •v ⟲ e    (a single loop e based at v)

Consider K[x, x−1 ], the ring of Laurent polynomials with coefficients in K. By
defining the map φ : LK (R1 ) → K[x, x−1 ] by φ(v) = 1, φ(e) = x and φ(e∗ ) = x−1 ,
it is straightforward to see that φ preserves the Leavitt path algebra relations and
that LK (R1 ) ≅ K[x, x−1 ].

We can extend the single loop graph to the rose with n leaves graph, denoted
Rn :

    •v ⟲ e1 , e2 , . . . , en    (n loops based at v)

For each n ∈ N, we have that LK (Rn ) is isomorphic to the Leavitt algebra L(1, n),
which is the unital K-algebra generated by elements {xi , yi : i = 1, . . . , n} and
subject to the following relations:

(i) xi yj = δij for all i, j ∈ {1, . . . , n}; and

(ii) y1 x1 + y2 x2 + · · · + yn xn = 1.

If we define φ : LK (Rn ) → L(1, n) by φ(v) = 1, φ(ei ) = yi and φ(e∗i ) = xi , we
can see that relations (i) and (ii) above correspond directly to the Leavitt path
algebra relations (CK1) and (CK2) on LK (Rn ). Furthermore, since v 2 = v and
vei = ei = ei v (and ve∗i = e∗i = e∗i v) for all i = 1, . . . , n, the relations (A1) and
(A2) correspond directly to the unital properties of 1. From here the isomorphism
is clear. Note that in the case n = 1 we have K[x, x−1 ] ≅ L(1, 1), which is consistent
with the single loop example above.

So far we have only looked at examples of Leavitt path algebras of row-finite
graphs. The following graph contains an infinite emitter, which makes the situation
slightly more complex.

Example 2.1.7. We define the infinite clock graph, denoted C∞ , to be the graph

    [Diagram: a central vertex u emitting a countably infinite family of edges,
    with the ith edge running from u to vi .]

where u emits a countably infinite number of edges. We show that LK (C∞ ) ≅
(⊕i≥1 M2 (K)) ⊕ KI22 , where ⊕i≥1 M2 (K) is the direct sum of a countably infinite
number of copies of M2 (K) and I22 = Πi≥1 E22 . If we let ei be the edge from u to
vi , then we can define a map φ : LK (C∞ ) → (⊕i≥1 M2 (K)) ⊕ KI22 on the generators
of LK (C∞ ) as follows:

φ(u) = I22 , φ(vi ) = (E11 )i , φ(ei ) = (E21 )i , and φ(e∗i ) = (E12 )i ,

where (A)i denotes the element of ⊕i≥1 M2 (K) with A ∈ M2 (K) in the ith component
and zeros elsewhere.

Note that the map here is similar to the mapping from LK (M2 ) → M2 (K) in
Example 2.1.4. Indeed, it is as if we have an infinite number of copies of the graph
M2 emanating from a single central vertex u. Thus, in a similar fashion to that
example it is easy to see that the Leavitt path algebra relations are preserved by φ.
As an example, we check that φ(uei ) = φ(ei ) for an arbitrary edge ei :

φ(uei ) = φ(u)φ(ei ) = I22 (E21 )i = (E22 )i (E21 )i = (E21 )i = φ(ei ),

as required. Note that we do not need to check the (CK2) relation as there are no
regular vertices in C∞ . Finally, it is clear that φ is an isomorphism, as required.

From the four defining Leavitt path algebra relations we can deduce the product
of two arbitrary generating elements in LK (E). For example, by applying relations

(A1) and (A2), we can deduce the product of two arbitrary edges ei , ej ∈ E 1 :

ei ej = ei r(ei )s(ej )ej = δr(ei ),s(ej ) ei ej .

Furthermore, for e∗i , e∗j ∈ (E 1 )∗ we have:

e∗i e∗j = e∗i s(ei )r(ej )e∗j = δs(ei ),r(ej ) e∗i e∗j .

Thus the product of two edges ei and ej is nonzero if and only if ei and ej are adjacent
in the graph E. Extending this to an arbitrary number of edges e1 , e2 , . . . en ∈ E 1 ,
we can see that the product e1 e2 . . . en is nonzero if and only if e1 e2 . . . en is a path
in E (and similarly the product e∗n . . . e∗2 e∗1 is nonzero if and only if e∗n . . . e∗2 e∗1 is a
ghost path in E).

The relations (A1) and (A2) give similar results when multiplying an arbitrary
vertex v ∈ E 0 with an arbitrary edge e ∈ E 1 :

ve = δv,s(e) e and ev = δv,r(e) e.

And similarly, for an arbitrary e∗ ∈ (E 1 )∗ we have:

ve∗ = δv,r(e) e∗ and e∗ v = δv,s(e) e∗ .

Thus the product of a vertex by an edge is nonzero only when the vertex is the
source of that edge, and the product of an edge by a vertex is nonzero only when
the vertex is the range of that edge. Essentially the relations (A1) and (A2) can
be seen as preserving the path structure of the graph E, as mentioned earlier. The
following lemma from [AA1, Lemma 1.5] solidifies this concept. The proof here
follows the same argument as the proof of [Rae, Corollary 1.15].

Lemma 2.1.8. Let E be an arbitrary graph. Every monomial in LK (E) is of the
form kpq ∗ , where k ∈ K and p, q ∈ E ∗ . Specifically, every monomial can be expressed
in one of two forms:

(i) kvi , where k ∈ K and vi ∈ E 0 , or

(ii) kei1 . . . eim e∗jn . . . e∗j1 , where k ∈ K, ei1 , . . . , eim , ej1 , . . . , ejn ∈ E 1 and m, n ≥ 0,
m + n ≥ 1,

so that p and q are either paths of length 0 at the vertex vi , or p = ei1 . . . eim , q =
ej1 . . . ejn and at least one of p and q has length greater than 0.

Proof. We proceed by induction on the length of the monomial kx1 . . . xt , where
each xi ∈ E 0 ∪ E 1 ∪ (E 1 )∗ . For t = 1, it is clear that the monomial is either of type
(i) or (ii) above. Now assume it is true that every monomial of length t ≥ 1 can
be written as a monomial of type (i) or (ii) and let β = ky1 . . . yt yt+1 , where each
(i) or (ii) above. Now assume it is true that every monomial of length t ≥ 1 can
be written as a monomial of type (i) or (ii) and let β = ky1 . . . yt yt+1 , where each
yi ∈ E 0 ∪ E 1 ∪ (E 1 )∗ and k ∈ K. Set α = ky1 . . . yt , giving β = αyt+1 . By our
induction hypothesis on α, we have two cases:

Case 1: α = kvi for some vi ∈ E 0 . If yt+1 = vj then β = k δij vi is of type (i). If
yt+1 = ej , where ej ∈ E 1 , then β = kvi s(ej )ej = k δvi ,s(ej ) ej is of type (ii). Similarly,
if yt+1 = e∗j then β is again of type (ii).

Case 2: α = kei1 . . . eim e∗jn . . . e∗j1 , with m, n ≥ 0, m + n ≥ 1 and each ei , ej ∈ E 1 .
We break this case into several subcases:
Case 2.1: yt+1 = vj , for some vj ∈ E 0 and n > 0. Then e∗j1 vj = e∗j1 s(ej1 )vj =
δvj ,s(ej1 ) e∗j1 and so β = k δvj ,s(ej1 ) ei1 . . . eim e∗jn . . . e∗j1 is of type (ii).
Case 2.2: yt+1 = vj for some vj ∈ E 0 and n = 0. Then we must have m > 0 and
so β = k δvj ,r(eim ) ei1 . . . eim is again of type (ii).
Case 2.3: yt+1 = ej for some ej ∈ E 1 and n > 0. By the (CK1) relation we have
e∗j1 ej = δj1 ,j r(ej ).
If n > 1, we have β = k δj1 ,j δs(ej2 ),r(ej ) ei1 . . . eim e∗jn . . . e∗j2 , which is of type (ii).
If n = 1 and m > 0, we have β = k δj1 ,j δr(eim ),r(ej ) ei1 . . . eim , which is again of
type (ii).
Finally, if n = 1 and m = 0, we have β = k δj1 ,j r(ej ), which is of type (i).
Case 2.4: yt+1 = ej for some ej ∈ E 1 and n = 0. Then we must have m > 0 and
so β = ei1 . . . eim ej is of type (ii).
Case 2.5: yt+1 = e∗j for some ej ∈ E 1 and n > 0. Then we have that β =
k δs(ej1 ),r(ej ) ei1 . . . eim e∗jn . . . e∗j1 e∗j is of type (ii).
Case 2.6: yt+1 = e∗j for some ej ∈ E 1 and n = 0. Then m > 0 and so β =
k δr(eim ),r(ej ) ei1 . . . eim e∗j is again of type (ii).

In light of Lemma 2.1.8, we can now describe an arbitrary element of LK (E).
Since any element in LK (E) is a K-linear combination of monomials in LK (E), an
arbitrary element α ∈ LK (E) is of the form

α = k1 p1 q1∗ + · · · + kn pn qn∗ ,

where each ki ∈ K and each pi , qi ∈ E ∗ . In other words, LK (E) is spanned as a
K-vector space by the set {pq ∗ : p, q ∈ E ∗ }. Note that a monomial pq ∗ is only
nonzero if r(p) = r(q).

Lemma 2.1.9. Let E be an arbitrary graph. Then

LK (E) = ⊕v∈E 0 LK (E)v.

Proof. Let x ∈ LK (E). By Lemma 2.1.8, x = k1 p1 q1∗ + · · · + kn pn qn∗ , where each
ki ∈ K and each pi , qi ∈ E ∗ . Thus x = k1 p1 q1∗ v1 + · · · + kn pn qn∗ vn ∈ Σv∈E 0 LK (E)v,
where each vi = s(qi ), and so LK (E) = Σv∈E 0 LK (E)v.
To show this sum is direct, suppose we have y ∈ LK (E)v ∩ Σw∈E 0 ,w≠v LK (E)w
for some v ∈ E 0 . Then y = av = Σw∈E 0 ,w≠v aw w for some a, aw ∈ LK (E) (with
only a finite number of aw nonzero) and so av = (av)v = (Σw∈E 0 ,w≠v aw w)v = 0,
since the vertices of E form a set of mutually orthogonal idempotents in LK (E) (by
the (A1) relation). Thus LK (E) = ⊕v∈E 0 LK (E)v, as required.

From Lemma 2.1.8 we know that every monomial in LK (E) is of the form kpq ∗ ,
where p and q are paths in E. But what happens when we form the product p∗ q?
The following lemma gives a useful result concerning such products.

Lemma 2.1.10. Let E be an arbitrary graph and let p and q be two paths in E.

(i) If p and q have the same length, then in LK (E) we have p∗ q = δp,q r(p).

(ii) If p and q have different lengths, then in LK (E) we have

p∗ q = p∗2 if q is an initial subpath of p with p = qp2 ;
p∗ q = q2 if p is an initial subpath of q with q = pq2 ;
p∗ q = 0 otherwise.

Proof. (i) Let p = ei1 . . . ein and q = ej1 . . . ejn , where each eik , ejk ∈ E 1 . By the
(CK1) relation we have that e∗ik ejk = δeik ,ejk r(eik ) for each k ∈ {1, . . . , n}. Also note
that r(eik ) = s(eik+1 ) and so the (A2) relation gives r(eik )eik+1 = eik+1 . Thus we
have
p∗ q = e∗in . . . e∗i1 ej1 . . . ejn
= (δei1 ,ej1 )e∗in . . . e∗i2 ej2 . . . ejn
..
.
= (δei1 ,ej1 . . . δein−1 ,ejn−1 )e∗in ejn
= (δei1 ,ej1 . . . δein−1 ,ejn−1 δein ,ejn )r(ejn )

Thus, if eik ≠ ejk for any k ∈ {1, . . . , n}, so that p ≠ q, then the above equation
gives p∗ q = 0. Otherwise, if p = q we have p∗ q = r(ejn ) = r(p), as required.

(ii) If q is an initial subpath of p with p = qp2 , then applying (i) gives p∗ q =
(qp2 )∗ q = p∗2 q ∗ q = p∗2 r(q) = p∗2 , since r(q) = s(p2 ). Similarly, if p is an initial
subpath of q with q = pq2 , then p∗ q = p∗ pq2 = q2 .
Now suppose that q is not an initial subpath of p and vice versa. Suppose that
l(p) > l(q) and write p = p1 p2 , where l(p2 ) = l(q). Since p1 6= q, applying (i) gives
p∗ q = p∗2 p∗1 q = 0. If l(q) > l(p), a similar argument completes the proof.
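Lemma 2.1.10 amounts to a simple rewriting rule on edge sequences, which can be sketched as follows. In this encoding (entirely our own) paths are tuples of edge names assumed composable, and the three outcomes are tagged explicitly; we do not track sources and ranges, so the vertex r(p) in case (i) is returned symbolically.

```python
def reduce_ghost_real(p, q):
    """Reduce the product p* q per Lemma 2.1.10.
    Returns ('r(p)',)     when p = q (the vertex r(p)),
            ('*',) + p2   when p = q p2 (the ghost path p2*),
            q2            when q = p q2 (the real path q2),
            None          when the product is zero."""
    n = min(len(p), len(q))
    if p[:n] != q[:n]:          # neither is an initial subpath of the other
        return None
    if len(p) == len(q):        # same length and equal: p* q = r(p)
        return ("r(p)",)
    if len(p) > len(q):         # p = q p2: p* q = p2*
        return ("*",) + p[len(q):]
    return q[len(p):]           # q = p q2: p* q = q2
```

For instance, with p = e1 e2 e3 and q = e1 e2 the rule returns the ghost path e∗3 , matching case (ii) of the lemma.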

Recall the definition of a Z-graded ring from Section 1.1. If we equate degree in
LK (E) with path length in E, it is natural to think of edges as elements of degree 1,
ghost edges as elements of degree −1 and vertices as elements of zero degree. As it
turns out, this intuitive grading does indeed fulfil the requirements for a Z-grading
on LK (E), as the following lemma from [AA1, Lemma 1.7] shows.

Lemma 2.1.11. Let E be an arbitrary graph. Then LK (E) is a Z-graded algebra,
with grading induced by:

deg(v) = 0 for all v ∈ E 0 ; deg(e) = 1 and deg(e∗ ) = −1 for all e ∈ E 1 .

That is, LK (E) = ⊕n∈Z LK (E)n , where for each n ∈ Z we define

LK (E)n = {Σi ki pi qi∗ : l(pi ) − l(qi ) = n},

with each ki ∈ K and each pi , qi ∈ E ∗ .


Proof. From Lemma 2.1.8, it is clear that LK (E) = ⊕n∈Z LK (E)n .

Now we want to show that LK (E)m LK (E)n ⊆ LK (E)m+n for each m, n ∈ Z.
Consider nonzero monomials x = p1 q1∗ ∈ LK (E)m and y = p2 q2∗ ∈ LK (E)n , where
p1 , q1 , p2 , q2 ∈ E ∗ . Note that we have l(p1 ) − l(q1 ) = m and l(p2 ) − l(q2 ) = n.
If xy = p1 q1∗ p2 q2∗ = 0 then we are done, so suppose that xy ≠ 0. According to
Lemma 2.1.10, we have three cases.

Case 1: l(q1 ) = l(p2 ). Then q1∗ p2 = r(p2 ) and so xy = p1 q2∗ . Since l(p1 ) − l(q2 ) =
(m + l(q1 )) − (l(p2 ) − n) = m + n, we have that xy ∈ LK (E)m+n .
Case 2: l(q1 ) > l(p2 ). Then q1 = p2 q for some subpath q of q1 , and so xy = p1 q ∗ q2∗ .
Since l(p1 ) − l(q2 q) = l(p1 ) − (l(q2 ) + l(q)) = l(p1 ) − (l(q2 ) + l(q1 ) − l(p2 )) = (l(p1 ) −
l(q1 )) + (l(p2 ) − l(q2 )) = m + n, we again have that xy ∈ LK (E)m+n .
Case 3: l(p2 ) > l(q1 ). Then a similar argument to Case 2 gives xy ∈ LK (E)m+n .
Finally, if x = Σi p1i q1i∗ ∈ LK (E)m and y = Σj p2j q2j∗ ∈ LK (E)n , then
from the argument above it is clear that xy ∈ LK (E)m+n . Thus LK (E)m LK (E)n ⊆
LK (E)m+n , as required.
LK (E)m+n , as required.

We define the degree of an element x ∈ LK (E) to be the lowest number n for
which x ∈ ⊕_{m≤n} LK (E)m . Recall from Definition 1.1.3 that an element of LK (E)n is
said to be homogeneous of degree n, and so ∪_{n∈Z} LK (E)n is the set of homogeneous
elements in LK (E).

Furthermore, if x is an arbitrary element of LK (E) and d ∈ Z+ , we say that x


is representable as an element of degree d in real (or ghost) edges if x can
be written as a sum of monomials from the spanning set {pq ∗ : p, q ∈ E ∗ } in such a
way that d is the maximum length of a path p (or q) appearing in such monomials.

Note that an element x ∈ LK (E) can be representable as an element of different


degrees in real edges. For example, the element x = v (where v ∈ E 0 is a regular
vertex) has degree 0 in real edges, but the (CK2) relation allows us to write x =
Σ_{s(e)=v} ee∗ , which has degree 1 in real edges.
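The degree bookkeeping above is easy to mirror computationally: the homogeneous degree of a spanning monomial pq∗ is l(p) − l(q), and degrees add under multiplication whenever the product is nonzero. A minimal sketch, with paths encoded simply as lists of edge names (an illustrative encoding, not from the thesis):

```python
def monomial_degree(p, q):
    """Degree of the monomial p q* in the Z-grading of L_K(E): l(p) - l(q)."""
    return len(p) - len(q)

# Case 1 of the proof of Lemma 2.1.11: when l(q1) = l(p2), the factor q1* p2
# collapses to a vertex and the product p1 q2* has degree m + n.
m = monomial_degree(["e1", "e2", "e3"], ["f1"])   # m = 2
n = monomial_degree(["f1"], ["g1", "g2"])         # n = -1
assert monomial_degree(["e1", "e2", "e3"], ["g1", "g2"]) == m + n
```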

Finally, it is natural to ask under what conditions LK (E) is unital and, more
generally, under what conditions LK (E) has local units. We close this section with

the following relevant lemma from [AA1, Lemma 1.6].

Lemma 2.1.12. Let E be a graph.

(i) If E 0 is finite, then Σ_{vi ∈E 0} vi is an identity for LK (E).

(ii) If E 0 is infinite, then E 0 generates a set of local units for LK (E).

Proof. (i) Suppose E 0 is finite and consider an arbitrary monomial kpq ∗ ∈ LK (E),
where p, q ∈ E ∗ and k ∈ K. Let α = Σ_{vi ∈E 0} vi . Then

α(kpq ∗ ) = ( Σ_{vi ∈E 0} vi ) kpq ∗ = k ( Σ_{vi ∈E 0} δvi ,s(p) s(p) ) pq ∗ = ks(p)pq ∗ = kpq ∗ .


Similarly, we can show that (kpq ∗ )α = kpq ∗ . Since any element in LK (E) is a sum
of such monomials, we must have that αx = x = xα for all x ∈ LK (E). Thus α is
an identity for LK (E).

(ii) Suppose E 0 is infinite. Consider a finite subset X = {ai }_{i=1}^{t} of LK (E). We
can write each ai as ai = Σ_{j=1}^{s(i)} kji pij (qji )∗ , where each pij , qji ∈ E ∗ and kji ∈ K. Now
define

V = ∪_{i=1}^{t} {s(pij ), s(qji ) : j = 1, . . . , s(i)}

and let β = Σ_{v∈V} v. Then, using the same arguments as in (i), it is easy to see that
βai = ai = ai β for each ai ∈ X. Since β is a finite sum and an idempotent, it is a
local unit for X. Thus E 0 generates a set of local units for LK (E).

Lemma 2.1.12 tells us that, regardless of whether E 0 is finite or infinite, LK (E)


will always have local units. This property will prove extremely useful when proving
future results.
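The construction of β in part (ii) can be mirrored directly. In the sketch below a path is encoded as the list of vertices it visits, so that path[0] is its source; this encoding and the function name are illustrative assumptions, not notation from the thesis:

```python
def local_unit_support(monomials):
    """Vertices whose sum beta = sum_{v in V} v fixes every monomial in a
    given finite set, as in the proof of Lemma 2.1.12(ii).

    Each monomial p q* is given as a pair (p, q) of paths, where a path is
    the list of vertices it visits, so path[0] is its source vertex."""
    V = set()
    for p, q in monomials:
        V.add(p[0])  # s(p): needed so that beta * (p q*) = p q*
        V.add(q[0])  # s(q): needed so that (p q*) * beta = p q*
    return V

# beta is a finite sum of distinct vertices, hence an idempotent, and it acts
# as a two-sided identity on every element supported on these monomials.
assert local_unit_support([(["v1", "v2"], ["v3", "v4"])]) == {"v1", "v3"}
```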

2.2 Results and Properties


In the previous section we defined a Leavitt path algebra for an arbitrary graph
E and examined its basic structure. Now we continue to examine in detail some

important properties of LK (E), including the extremely powerful result shown in


Proposition 2.2.11. We begin by looking at the ideals of LK (E). This first lemma
is from [AA1, Lemma 3.9].

Lemma 2.2.1. Let E be an arbitrary graph and let J be an ideal of LK (E). The
set of all vertices contained in J, i.e. J ∩ E 0 , forms a hereditary saturated subset of
E 0.

Proof. Let H = J ∩ E 0 and let v ∈ H and w ∈ T (v). Then there exists a path
p ∈ E ∗ with s(p) = v and r(p) = w. By Lemma 2.1.10, we have w = p∗ p = p∗ vp ∈ J
and so H is hereditary.

Now suppose that w is a regular vertex in E 0 such that for all e ∈ E 1 with
s(e) = w we have r(e) ∈ H. Then the (CK2) relation gives
w = Σ_{s(e)=w} ee∗ = Σ_{s(e)=w} er(e)e∗ ∈ J,

and so H is saturated.

If G is a subset of E 0 , then we denote by I(G) the two-sided ideal in LK (E)


generated by the elements of G. This definition gives us the following simple yet
useful lemma from [APS, Lemma 2.1].

Lemma 2.2.2. Let E be an arbitrary graph and let G be a subset of E 0 . Then


I(G) = I(Ḡ), where Ḡ is the hereditary saturated closure of G.

Proof. Let H = I(G) ∩ E 0 . By Lemma 2.2.1 we know that H is a hereditary


saturated subset of E 0 containing G. By definition, Ḡ is the smallest such set, so
G ⊆ Ḡ ⊆ H and, by extension, I(G) ⊆ I(Ḡ) ⊆ I(H). However, since H ⊆ I(G) we
have I(H) ⊆ I(G) and so I(G) = I(Ḡ) = I(H), as required.
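For a finite graph, the hereditary saturated closure Ḡ can be computed by a fixpoint iteration: repeatedly add the ranges of edges leaving the set (heredity), and add any regular vertex all of whose edges land in the set (saturation). A sketch, with the graph encoded as a vertex list and (source, range) edge pairs chosen purely for illustration:

```python
def hereditary_saturated_closure(vertices, edges, G):
    """Smallest hereditary saturated subset of `vertices` containing G.

    `edges` is a list of (source, range) pairs.  Every vertex that emits an
    edge is treated as regular, which holds in any finite graph."""
    H = set(G)
    changed = True
    while changed:
        changed = False
        # hereditary: if v is in H, every vertex it reaches must be in H
        for s, r in edges:
            if s in H and r not in H:
                H.add(r)
                changed = True
        # saturated: a regular vertex whose edges all end in H must be in H
        for v in vertices:
            ranges = [r for s, r in edges if s == v]
            if ranges and v not in H and all(r in H for r in ranges):
                H.add(v)
                changed = True
    return H
```

For the line graph with edges v1 → v2 → v3, the closure of {v2} is all of {v1, v2, v3}: heredity forces v3 in, and saturation then forces v1 in — the same argument used for Mn in Example 2.3.2(i).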

The fact that any Leavitt path algebra has local units leads to the following
useful lemma.

Lemma 2.2.3. Let E be an arbitrary graph and let I be an ideal of LK (E). Then
I ∩ E 0 = E 0 if and only if I = LK (E).

Proof. Suppose I ∩ E 0 = E 0 and take an arbitrary element x ∈ LK (E). Since


E 0 generates a set of local units for LK (E), there must be an e ∈ I such that
ex = x = xe. Since I is an ideal we must have x ∈ I, and so I = LK (E). The
converse is obvious.

Since LK (E) has local units for any graph E, we can apply many of the results in
Section 1.2 to the category LK (E)-Mod. We give a few examples of such applications
here.

Lemma 2.2.4. Let E be an arbitrary graph. The Leavitt path algebra LK (E) is a
projective module in the category LK (E)-Mod.

Proof. By Lemma 2.1.9 we have LK (E) = ⊕_{v∈E 0} LK (E)v. Since each v ∈ E 0 is an

idempotent and LK (E) has local units, we can apply Proposition 1.2.13 to obtain
that each summand LK (E)v is projective in LK (E)-Mod. Thus, since the direct
sum of projective modules is also projective, LK (E) is projective.

Lemma 2.2.4 tells us that every Leavitt path algebra is projective as a left module
over itself (and we can show similarly that every Leavitt path algebra is projective
as a right module over itself). However, the same is not true for injectivity; that is,
not all Leavitt path algebras are left or right self-injective. In Section 4.4 we will
examine self-injective Leavitt path algebras in detail.

For an arbitrary graph E, Corollary 1.2.16 tells us that LK (E) is flat as a left
LK (E)-module. Furthermore, since LK (E) = ⊕_{v∈E 0} LK (E)v, taking B = E 0
and U = E 0 in the definition of U -free module shows that every Leavitt path
algebra LK (E) is an E 0 -free left LK (E)-module with basis E 0 .

Now we briefly return to graph theory to define the concept of a closed path.

Definition 2.2.5. A path p = e1 . . . en with s(p) = v = r(p) is said to be a closed


path based at v. Furthermore, if we have that s(ei ) ≠ v for all i > 1 we say that p
is a closed simple path based at v. We denote the set of all closed paths based
at v by CP(v), and the set of all closed simple paths based at v by CSP(v).

Note that any cycle is a closed simple path based at any of its vertices. However,
a closed simple path based at v may not be a cycle as it may visit any of its vertices
(other than v) more than once. Similarly, a closed path based at v may not be simple
as it may visit v more than once. We illustrate this with the following example.

Example 2.2.6. Consider the graph with vertices u, v, w, x and edges e1 : v → u,
e2 : u → v, e3 : v → w, e6 : w → v, e4 : w → x and e5 : x → w.

Now, e1 e2 and e3 e6 are both cycles based at v. Furthermore, the path p = e3 e4 e5 e6 ∈


CSP(v) but is not a cycle, as p passes through w twice. Finally, the path q =
e1 e2 e3 e4 e5 e6 ∈ CP(v) but is not a closed simple path, as q passes through v twice.

We now use Lemma 2.1.10 to prove the following useful result from [AA1, Lemma
2.3] regarding closed paths.

Lemma 2.2.7. Every closed path (of length greater than zero) can be decomposed
into a unique series of closed simple paths (of length greater than zero); that is, for
every p ∈ CP(v), there exist unique c1 , . . . , cm ∈ CSP(v) (with l(ci ) > 0 for each
i ∈ {1, . . . , m}) such that p = c1 . . . cm .

Proof. Let p = e1 . . . en and let et1 , . . . , etm be the edges in p for which r(eti ) = v,
where t1 < · · · < tm = n. Let c1 = e1 . . . et1 and cj = etj−1 +1 . . . etj for each
1 < j ≤ m. Thus p = c1 . . . cm , where each cj ∈ CSP(v) and l(cj ) > 0.
To show that this decomposition is unique, suppose that p = c1 . . . cr = d1 . . . ds ,
with ci , dj ∈ CSP(v) and l(ci ), l(dj ) > 0. Furthermore, suppose that r ≥ s. By
Lemma 2.1.10 we have c∗1 c1 = v, and so multiplication by c∗1 on the left gives 0 ≠
vc2 . . . cr = c∗1 d1 . . . ds . Since the right-hand side is nonzero, we must have c1 = d1 ,
and so by Lemma 2.1.10 again we have c2 . . . cr = d2 . . . ds (noting that vc2 = c2
and vd2 = d2 ). Repeating this process gives ci = di for each i ∈ {1, . . . , s}, and so
p = c1 . . . cs = d1 . . . ds .

If r > s, we must have pcs+1 . . . cr = p and so v = p∗ p = p∗ pcs+1 . . . cr =


cs+1 . . . cr , which is impossible since l(ci ) > 0 for each ci . A similar argument shows
that we cannot have s > r. Thus r = s and so the decomposition is unique.
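The factorisation in the proof of Lemma 2.2.7 amounts to cutting the path after each edge whose range is v. A short sketch, with edges named by strings and a dictionary r giving each edge's range vertex (an encoding chosen for illustration):

```python
def csp_factorization(path, v, r):
    """Split a closed path based at v into its closed simple path factors,
    cutting after every edge e with r(e) = v, as in Lemma 2.2.7."""
    factors, current = [], []
    for e in path:
        current.append(e)
        if r[e] == v:          # returned to the base vertex: close a factor
            factors.append(current)
            current = []
    return factors

# The graph of Example 2.2.6: the range of each of the six edges.
r = {"e1": "u", "e2": "v", "e3": "w", "e4": "x", "e5": "w", "e6": "v"}
q = ["e1", "e2", "e3", "e4", "e5", "e6"]        # a closed path based at v
assert csp_factorization(q, "v", r) == [["e1", "e2"], ["e3", "e4", "e5", "e6"]]
```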

For a vertex v in an arbitrary graph E, the range index of v, denoted n(v), is


the cardinality of the set

R(v) := {p ∈ E ∗ : r(p) = v}.

Note that n(v) is always nonzero, since v ∈ R(v) for each v ∈ E 0 . We apply this
definition in the following lemma from [A, Lemma 4.4.3].

Lemma 2.2.8. Let E be a finite and row-finite graph and let v ∈ E 0 be a sink.
Then
Iv := span({αβ ∗ : α, β ∈ E ∗ , r(α) = v = r(β)})

is an ideal of LK (E), and Iv ≅ Mn(v) (K).

Proof. Take an arbitrary nonzero monomial αβ ∗ ∈ Iv , so that r(α) = v = r(β),


and a nonzero monomial γδ ∗ ∈ LK (E) with γ, δ ∈ E ∗ . Suppose that γδ ∗ αβ ∗ ≠ 0.
Then δ ∗ α 6= 0 and so (by Lemma 2.1.10) we have that either α = δp or δ = αq
for some paths p, q ∈ E ∗ . In the latter case we must have that l(q) = 0, since
r(α) = v is a sink, and so δ = α. Thus we can generalise to a single case in which
α = δp, where l(p) may be zero. Then δ ∗ α = p and so γδ ∗ αβ ∗ = (γp)β ∗ ∈ Iv , since
r(p) = r(α) = v. This shows that Iv is a left ideal. Similarly, we can show that Iv
is also a right ideal.

Now let n = n(v), as defined above. Since E is both finite and row-finite, n must
also be finite. Rename the elements in the set {α ∈ E ∗ : r(α) = v} as {p1 , . . . , pn },
giving Iv = span{pi p∗j : i, j = 1, . . . , n}. Consider the expression (pi p∗j )(pk p∗l ) and
suppose that j ≠ k and (pi p∗j )(pk p∗l ) ≠ 0. Then, as above, either pj = pk p or
pk = pj q for some paths p, q ∈ E ∗ . In either case, l(p) > 0 or l(q) > 0 since pj ≠ pk .
However, this is impossible as v is a sink. Thus j ≠ k implies that (pi p∗j )(pk p∗l ) = 0.
Otherwise, we have (pi p∗j )(pj p∗l ) = pi vp∗l = pi p∗l . Thus {pi p∗j : i, j = 1, . . . , n} is a set
of matrix units for Iv and so Iv ≅ Mn(v) (K).
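The relation (pi p∗j )(pk p∗l ) = δjk pi p∗l is exactly the defining relation of the standard matrix units Eij , which can be checked directly (a small illustration using nested lists for matrices):

```python
def mat_unit(n, i, j):
    """The n x n matrix unit E_ij: a 1 in position (i, j), zeros elsewhere."""
    return [[1 if (r, c) == (i, j) else 0 for c in range(n)] for r in range(n)]

def mat_mul(A, B):
    """Plain matrix multiplication for square nested-list matrices."""
    n = len(A)
    return [[sum(A[r][k] * B[k][c] for k in range(n)) for c in range(n)]
            for r in range(n)]

# E_ij E_kl = delta_{jk} E_il, mirroring (p_i p_j*)(p_k p_l*) inside I_v:
assert mat_mul(mat_unit(3, 0, 1), mat_unit(3, 1, 2)) == mat_unit(3, 0, 2)
zero = [[0] * 3 for _ in range(3)]
assert mat_mul(mat_unit(3, 0, 1), mat_unit(3, 2, 2)) == zero
```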

Lemma 2.2.8 leads to the following important result from [AAS, Proposition 3.5].

Lemma 2.2.9. Let E be a finite, row-finite and acyclic graph, and let {v1 , . . . , vt }
be the sinks of E. Then

LK (E) ≅ ⊕_{i=1}^{t} Mn(vi ) (K).

Proof. We begin by showing that LK (E) ≅ ⊕_{i=1}^{t} Ivi , where the Ivi are the ideals

defined in Lemma 2.2.8. Consider an arbitrary nonzero monomial αβ ∗ ∈ LK (E),


where α, β ∈ E ∗ . If r(α) = vi for some i ∈ {1, . . . , t}, then αβ ∗ ∈ Ivi . Otherwise, if
r(α) ≠ vi for every i then r(α) is not a sink. Thus we can apply the (CK2) relation
at r(α) (since E is row-finite), giving
αβ ∗ = α ( Σ {e1 e∗1 : e1 ∈ E 1 , s(e1 ) = r(α)} ) β ∗ = Σ {αe1 e∗1 β ∗ : e1 ∈ E 1 , s(e1 ) = r(α)}.

Now consider a specific summand of the above expression, αe′1 (e′1 )∗ β ∗ . Either
r(e′1 ) = vi for some i ∈ {1, . . . , t}, in which case αe′1 (e′1 )∗ β ∗ ∈ Ivi , or r(e′1 ) is not
a sink, in which case we can expand the expression by again applying the (CK2)
relation at r(e′1 ), giving

αe′1 (e′1 )∗ β ∗ = αe′1 ( Σ {e2 e∗2 : e2 ∈ E 1 , s(e2 ) = r(e′1 )} ) (e′1 )∗ β ∗ = Σ {αe′1 e2 e∗2 (e′1 )∗ β ∗ : e2 ∈ E 1 , s(e2 ) = r(e′1 )}.

Suppose that repeating the above process yields a sequence of edges e′1 e′2 . . . that
never reaches a sink, and consider the infinite set of vertices T = {r(e′1 ), r(e′2 ), . . .}.
The vertices in this set must be distinct, since if r(e′i ) = r(e′j ) for some r(e′i ), r(e′j ) ∈
T with i ≠ j then we would have a cycle in E, contradicting the fact that E is acyclic.
However, we cannot have an infinite number of distinct vertices since E is finite.
Thus, for each summand of αβ ∗ , we eventually reach an expression of the form
αe′1 e′2 . . . e′n (e′n )∗ . . . (e′2 )∗ (e′1 )∗ β ∗ , where r(e′n ) is a sink; that is, r(e′n ) = vi for some
i ∈ {1, . . . , t}. Thus each summand of αβ ∗ is in Ivi for some sink vi , and since
αβ ∗ was an arbitrary monomial and these monomials generate LK (E), we have
LK (E) = Σ_{i=1}^{t} Ivi .

To show that this sum is direct, consider two arbitrary monomials αi βi∗ ∈ Ivi
and αj βj∗ ∈ Ivj for i ≠ j. Suppose that (αi βi∗ )(αj βj∗ ) ≠ 0. As in the proof of
Lemma 2.2.8, this implies that either αj = βi p or βi = αj q for some paths p, q ∈ E ∗ .
Again, this is impossible as αj ≠ βi (since vi ≠ vj ) and vi , vj are sinks. Thus
(αi βi∗ )(αj βj∗ ) = 0. Since such monomials generate Ivi and Ivj , we have Ivi Ivj =
{0} for all i ≠ j in {1, . . . , t}. Note also that since E is finite, LK (E) is unital (by
Lemma 2.1.12). Since LK (E) = Σ_{i=1}^{t} Ivi , we have 1 = e1 + · · · + et , where each
ei ∈ Ivi . Now suppose there exists x1 ∈ Iv1 such that x1 = x2 + · · · + xt , where
each xi ∈ Ivi . Then x1 = x1 (e1 + · · · + et ) = x1 e1 , since Ivi Ivj = {0} for i ≠ j,
and so x1 = x1 e1 = (x2 + · · · + xt )e1 = 0. Repeating this argument, we have that
Ivi ∩ Σ_{j=1, j≠i}^{t} Ivj = {0} for each i ∈ {1, . . . , t}, and so the sum is direct. Finally, we
apply Lemma 2.2.8 to complete the proof.

To illustrate Lemma 2.2.9, consider the finite line graph Mt with t vertices
v1 , . . . , vt and edges ei : vi → vi+1 for each i = 1, . . . , t − 1. Here Mt has a single
sink vt with R(vt ) = {vt , pt−1 , . . . , p2 , p1 }, where we define pi = ei ei+1 . . . et−1 for
each i = 1, . . . , t − 1. Thus n(vt ) = t and so Lemma 2.2.9 gives LK (Mt ) ≅ Mt (K),
which agrees with the formulation given in Example 2.1.5.
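The range index in this example can be checked with a short recursion: every path ending at v is either the trivial path v itself or a path ending at the source of some edge into v, followed by that edge. A sketch for finite acyclic graphs, with edges again encoded as (source, range) pairs (an illustrative encoding):

```python
def range_index(v, edges):
    """n(v): the number of paths p with r(p) = v, counting the trivial path
    v itself.  The recursion terminates because the graph is assumed acyclic."""
    return 1 + sum(range_index(s, edges) for s, r in edges if r == v)

# The line graph M_4: v1 -> v2 -> v3 -> v4 has n(v4) = 4, so Lemma 2.2.9
# gives L_K(M_4) isomorphic to the 4 x 4 matrices over K.
edges = [("v1", "v2"), ("v2", "v3"), ("v3", "v4")]
assert range_index("v4", edges) == 4
```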

Definition 2.2.10. Let R be a ring with local units. The ring R is said to be
locally matricial if R = lim−→_{i∈I} Ri , where {Ri : i ∈ I} is an ascending chain of rings
and each Ri is isomorphic to a finite direct sum of finite-dimensional matrix rings
over K.

If E is a row-finite graph, we can use Lemma 2.2.9 to show that LK (E) is


locally matricial if and only if E is acyclic (see, for example, [A], Proposition 4.4.6).
However, to show this for an arbitrary graph E we will need some of the tools
introduced in Section 4.1. This equivalence is finally proved in Theorem 4.2.3.

We now prove the following powerful result from [AMMS1, Proposition 3.1],
which greatly simplifies the proof of several subsequent theorems. Though the
original theorem was given in a row-finite context, the proof is still valid for arbitrary
graphs. Here we have expanded the proof for ease of understanding.

Proposition 2.2.11. Let E be an arbitrary graph. For every nonzero element x ∈


LK (E) there exist y1 , . . . , yr , z1 , . . . , zs ∈ E 0 ∪ E 1 ∪ (E 1 )∗ such that

(i) y1 . . . yr xz1 . . . zs is a nonzero element in Kv for some v ∈ E 0 , or

(ii) there exist a vertex w and a cycle without exits c based at w such that
y1 . . . yr xz1 . . . zs is a nonzero element in

wLK (E)w = { Σ_{i=−m}^{n} ki ci : m, n ∈ N0 and ki ∈ K }.

These two cases are not mutually exclusive.

Proof. We first show that for a nonzero element x in LK (E), there is a path µ in E
such that xµ is nonzero and in only real edges. Consider a vertex v ∈ E 0 such that
xv ≠ 0 (note that such a vertex will always exist, since if x = Σ_{i=1}^{n} ki pi qi∗ , where
each ki ∈ K and pi , qi ∈ E ∗ , then choosing v = s(q1 ) ensures that k1 p1 q1∗ v ≠ 0 and
thus xv ≠ 0). Write xv = Σ_{i=1}^{m} βi e∗i + β, where βi , β ∈ LK (E), ei ∈ E 1 , ei ≠ ej
for i ≠ j, β is a polynomial in real edges, and xv is represented as an element of
minimal degree in ghost edges. We have two cases.
Case (1): xvei = 0 for all i ∈ {1, . . . , m}. This gives, for each ei , xvei =
βi + βei = 0 and so βi = −βei . Thus xv = Σ_{i=1}^{m} −βei e∗i + β = β( Σ_{i=1}^{m} −ei e∗i + v ).
Since xv ≠ 0 we have v − Σ_{i=1}^{m} ei e∗i ≠ 0. Since s(ei ) = v for each ei , by the (CK2)
relation there must exist an f ∈ E 1 such that s(f ) = v but f ≠ ei for each i. Thus
xvf = ( Σ_{i=1}^{m} −βei e∗i + β )f = 0 + βf ≠ 0 (since r(β) = v), and so, since β is in only
real edges, we have a path vf ∈ E ∗ such that xvf is nonzero and in only real edges.

Case (2): xvei ≠ 0 for some i, say for i = 1. Then xve1 = β1 + βe1 , with the
degree in ghost edges of xve1 strictly less than that of xv. If β1 is a polynomial
in only real edges, then we are done. Otherwise, we can repeat the above process,
reducing the degree in ghost edges with each iteration until we are left with an
element in only real edges (this must happen since the degree in ghost edges of xv
must, of course, be finite).

Now we can assume that x ∈ LK (E) is a nonzero polynomial in only real edges.
Write x = Σ_{i=1}^{r} ki αi , where 0 ≠ ki ∈ K for each i and each αi is a real path in

E with αi ≠ αj for i ≠ j. Using induction on r, we will prove that multiplication


on the left and/or right of x by elements from E 0 ∪ E 1 ∪ (E 1 )∗ will produce either
a nonzero scalar multiple of a vertex or a nonzero polynomial in a cycle with no
exits. For r = 1, we have x = k1 α1 . If α1 is a vertex then we are done. Otherwise,
α1 = f1 . . . fn for some fi ∈ E 1 . Thus fn∗ . . . f1∗ x = k1 r(fn ), and so the proposition
is true for r = 1.
Now assume that the property is true for any nonzero element that is the sum of
fewer than r paths satisfying the conditions above. Write x = Σ_{i=1}^{r} ki αi such that
deg(αi ) ≤ deg(αi+1 ) and ki αi ≠ 0 for each i. Let z = α1∗ x. Thus 0 ≠ z = k1 v + Σ_{i=2}^{r} ki βi ,
where v = r(α1 ) and βi = α1∗ αi . Note that deg(βi ) ≤ deg(βi+1 ) and βi ≠ βj for
i ≠ j.
If βi = 0 for some i then we can apply our inductive hypothesis and we are done.
Furthermore, if some βi does not begin or end in v, then we can apply our inductive
hypothesis to vz or zv (both nonzero since our βi are distinct). Thus we can assume
that each βi is nonzero and begins and ends in v.

Now suppose that there exists some path τ such that τ ∗ βi = 0 for some, but not
all, βi . Then we can apply our inductive hypothesis to τ ∗ z ≠ 0 and we are done.
Thus we can suppose that, for a given path τ , if τ ∗ βi ≠ 0 for some i, then τ ∗ βi ≠ 0
for all i. Let τ = βj for some fixed j. Since τ ∗ βj ≠ 0, we must have τ ∗ βj+1 ≠ 0. Since
deg(βj ) ≤ deg(βj+1 ), by Lemma 2.1.10 we have that either βj = βj+1 or βj+1 = βj rj
for some path rj ∈ CP(v). Since the βi are distinct, we must have the latter case.
Thus in general we have βi+1 = βi ri for some path ri ∈ CP(v), and so we can write
z = k1 v + k2 γ1 + k3 γ1 γ2 + · · · + kr γ1 γ2 . . . γr−1 , where each γi is a closed path based
at v.

Now write each γi as γi = γi1 . . . γin(i) , where each γij is a closed simple path
based at v. If the paths γij are not all identical, then we must have γ11 ≠ γij for some
γij , and so γij∗ γ11 = 0 (since one cannot be an initial subpath of the other). Thus
we have 0 ≠ γij∗ zγij = k1 v, since γ11 appears in every term but the first.
Now assume that the paths are identical, so that γij = γ (where γ ∈ E ∗ ) for
each i, j. If γ is not a cycle then γ must contain a cycle; that is, if γ = e1 . . . en

(with each ei ∈ E 1 ) then there exist ei1 , . . . , eik with i1 , . . . , ik ∈ {1, . . . , n} such that
i1 < · · · < ik and d = ei1 . . . eik is a cycle based at v (noting that k < n). Thus we
have that d∗ γ = 0 (since d is clearly not an initial subpath of γ) and so d∗ zd = k1 v
and we are done.
Thus we can assume that z is a polynomial in the cycle c = γ. Suppose that f
is an exit for c, so that s(e) = s(f ) for some edge e in c but f ≠ e. Write c = aeb
(where a, b ∈ E ∗ ) and let ρ = af , which is nonzero since r(a) = s(e) = s(f ). Then
ρ∗ c = f ∗ a∗ aec = f ∗ ec = 0 and so ρ∗ zρ = ρ∗ k1 vρ = k1 r(f ) and we are done.
Finally, if c is a cycle with no exits based at v then z ∈ { Σ_{i=−m}^{n} ki ci : m, n ∈
N and ki ∈ K }, where we understand c−m = (c∗ )m for m ∈ N and c0 = v. Clearly


this set is contained in vLK (E)v since each ci begins and ends in v. To see the
converse containment, first note that the elements of vLK (E)v must be linear com-
binations of monomials αβ ∗ , where α, β ∈ E ∗ , s(α) = v = s(β) and r(α) = r(β).
Now, since c has no exits, any path p ∈ E ∗ with s(p) = v must be of the form
cn p′ , where n ≥ 0 and p′ is an initial subpath of c (for if p were to contain an
edge distinct from any edge in c, that edge would constitute an exit for c). Thus
α = cm α′ and β = cn β ′ for some m, n ≥ 0. Since α′ and β ′ are initial subpaths of c
and r(α′ ) = r(β ′ ), we must have α′ = β ′ . Let α′ = e1 . . . ek . For any edge e in c, the
vertex s(e) emits only e (since c has no exits) and so applying the (CK2) relation
at s(e) yields ee∗ = s(e). Thus

α′ (β ′ )∗ = e1 . . . ek−1 ek e∗k e∗k−1 . . . e∗1 = e1 . . . ek−1 s(ek )e∗k−1 . . . e∗1 = e1 . . . ek−1 e∗k−1 . . . e∗1 = · · · = e1 e∗1 = v,

and so αβ ∗ = cm α′ (β ′ )∗ c−n = cm vc−n = cm (c∗ )n . Again, using the fact that c has
no exits we can apply the (CK2) relation to give cc∗ = v (letting α′ = β ′ = c in the
above equation). Thus αβ ∗ = cm (c∗ )n = cm−n , and so vLK (E)v is precisely the set

of all polynomials in c.
To see that the two cases are not mutually exclusive, consider the graph E
consisting of a single vertex v and a single loop e based at v, and take x = e. Thus
e∗ xv = v (giving case (1)) and vxv = e, a cycle without exits (giving case (2)).
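Since cc∗ = c∗ c = v, the element c is invertible in the corner ring vLK (E)v with inverse c∗ , so the description obtained above can be packaged as an algebra isomorphism (a standard identification, not spelled out in this form in the text, and consistent with LK (R1 ) ≅ K[x, x−1 ] for the single loop graph):

```latex
\[
  vL_K(E)v \;\cong\; K[x, x^{-1}], \qquad c^i \longleftrightarrow x^i \ (i \in \mathbb{Z}),
\]
```

where, as above, $c^{-i} := (c^*)^i$ and $c^0 := v$.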

Proposition 2.2.11 leads to the following useful corollary from [AMMS2, Corol-
lary 3.3].

Corollary 2.2.12. Let E be an arbitrary graph. Then

(i) every Z-graded nonzero ideal of LK (E) contains a vertex, and

(ii) if E contains no cycles without exits, then every nonzero ideal of LK (E) con-
tains a vertex.

Proof. (i) Let I be a Z-graded nonzero ideal of LK (E) and let 0 ≠ x ∈ I. By
Proposition 2.2.11, there exist y, z ∈ LK (E) such that 0 ≠ yxz = Σ_{i=−m}^{n} ki ci , where

c is a cycle without exits based at a vertex w, each ki ∈ K and m, n ∈ N. Since


I is a graded ideal, each summand of yxz must also be in I (since each ki ci is a
homogeneous element of degree il(c) in LK (E)). Then, for t ∈ {−m, . . . , n} such that
kt ct ≠ 0, we have 0 ≠ (kt−1 (c∗ )t )(kt ct ) = w ∈ I, as required.
(ii) Let J be a nonzero ideal of LK (E) and let 0 ≠ x ∈ J. Since E contains no
cycles without exits, again by Proposition 2.2.11 there must exist y, z ∈ LK (E)
such that 0 ≠ yxz = kv for some v ∈ E 0 and k ∈ K. Thus 0 ≠ (k −1 v)kv = v ∈ J,
as required.

The following two ‘Uniqueness theorems’ are given by Tomforde as [To, Theorem
4.6] and [To, Theorem 6.8], respectively. In Tomforde’s paper, the proofs are fairly
involved. However, in light of Proposition 2.2.11 and its subsequent corollary, the
results follow almost instantly.

Theorem 2.2.13 (Graded Uniqueness Theorem). Let E be an arbitrary graph and


let A be a Z-graded ring. If π : LK (E) → A is a graded ring homomorphism for
which π(v) 6= 0 for every vertex v ∈ E 0 , then π is a monomorphism.

Proof. By Lemma 1.1.5, ker(π) is a graded ideal of LK (E). So, by Corollary 2.2.12,
if ker(π) is nonzero it must contain a vertex, contradicting the fact that π(v) ≠ 0
for every vertex v ∈ E 0 . Thus ker(π) = {0} and so π is a monomorphism.

Theorem 2.2.14 (Cuntz-Krieger Uniqueness Theorem). Let E be a graph in which


every cycle has an exit and let A be a ring. If π : LK (E) → A is a ring homomor-
phism for which π(v) ≠ 0 for every vertex v ∈ E 0 , then π is a monomorphism.

Proof. Suppose that ker(π) ≠ {0}. Since ker(π) is an ideal of LK (E) and E contains
no cycles without exits, Corollary 2.2.12 tells us that ker(π) must contain a vertex.
This contradicts the fact that π(v) ≠ 0 for every vertex v ∈ E 0 , and so ker(π) = {0}
and thus π is a monomorphism.

In addition to the above two results, Proposition 2.2.11 also leads to the following
useful theorem from [AMMS2, Theorem 3.7]. Recall that an element x in a ring R
is said to be nilpotent if xn = 0 for some n ∈ N.

Theorem 2.2.15. Let E be an arbitrary graph and let A be a graded K-algebra.


If π : LK (E) → A is a ring homomorphism for which π(v) 6= 0 for every vertex
v ∈ E 0 , and for which each cycle without exits in E is mapped to a non-nilpotent
homogeneous element of nonzero degree, then π is a monomorphism.

Proof. Suppose that ker(π) is nonzero. Since it is a nonzero ideal containing no
vertices, by Proposition 2.2.11 ker(π) must contain a nonzero element of the form
x = Σ_{i=−m}^{n} ki ci , where c is a cycle without exits based at a vertex w, each ki ∈ K
and m, n ∈ N. By hypothesis, π(c) = h, where h is a non-nilpotent homogeneous
element of nonzero degree. Thus π(x) = Σ_{i=−m}^{n} ki π(c)i = Σ_{i=−m}^{n} ki hi = 0. Since
h is not nilpotent, we must have ki = 0 for each i = −m, . . . , n, and so x = 0, a
contradiction. Thus ker(π) = {0} and so π is a monomorphism, as required.

2.3 Purely Infinite Simple Leavitt Path Algebras


We open this section with Theorem 2.3.1, which describes precisely which graphs
yield simple Leavitt path algebras. From this result we then build toward Theorem 2.3.9, which describes precisely which graphs yield Leavitt path algebras that
are both simple and purely infinite; that is, ‘purely infinite simple’.

The following result was first shown for row-finite graphs in [AA1, Theorem 3.11]
and then extended to arbitrary graphs in [AA3, Theorem 3.1]. In comparison to the
published versions, the first part of the proof given here is much simpler, thanks to
Proposition 2.2.11.

Theorem 2.3.1. Let E be an arbitrary graph. Then the Leavitt path algebra LK (E)
is simple if and only if E satisfies the following conditions:

(i) The only hereditary saturated subsets of E 0 are ∅ and E 0 , and

(ii) every cycle in E has an exit.

Proof. Suppose statements (i) and (ii) are true and let J be a nonzero ideal of
LK (E). Since E contains no cycles without exits, Corollary 2.2.12 tells us that
J contains at least one vertex. Thus the vertices of J form a nonempty, hereditary
saturated subset of E 0 (by Lemma 2.2.1) and so J ∩ E 0 = E 0 , by (i). Thus, by
Lemma 2.2.3, we have that J = LK (E), proving LK (E) is simple.

Now suppose that there exists a hereditary saturated subset H of E 0 that is


nonempty and is not equal to E 0 . We will show that this implies that LK (E)
cannot be simple. Define the graph F = (F 0 , F 1 , rF , sF ), where

F 0 = E 0 \H, F 1 = r−1 (E 0 \H), rF = r|F 1 , sF = s|F 1 .

In other words, F consists of all the vertices of E that are not in H, and all the
edges whose range is not in H. To ensure that F is a well-defined graph, we must
ensure that sF (F 1 ) ∪ rF (F 1 ) ⊆ F 0 . From the definition it is clear that rF (F 1 ) ⊆ F 0 .
Furthermore, suppose that there exists an edge e ∈ F 1 with s(e) ∈ H. Then, by the
hereditary nature of H, we have r(e) ∈ H, which contradicts the definition of F 1 .
Thus s(e) ∈ F 0 , and so sF (F 1 ) ⊆ F 0 and F is therefore well-defined.

Define a map φ : LK (E) → LK (F ) on the generators of LK (E) as follows:



φ(v) = v if v ∉ H, and φ(v) = 0 if v ∈ H;
φ(e) = e if r(e) ∉ H, and φ(e) = 0 if r(e) ∈ H;
φ(e∗ ) = e∗ if r(e) ∉ H, and φ(e∗ ) = 0 if r(e) ∈ H.

Extend φ linearly and multiplicatively. To ensure that φ is a K-algebra homomorphism, we must check that it preserves the Leavitt path algebra relations on E. This
is a relatively straightforward (though slightly tedious) process. We include it here
for the sake of completeness.

First, we check that the (A1) relation holds, i.e. that φ(vi )φ(vj ) = δij φ(vi ) for
all vi , vj ∈ E 0 . We must examine several different cases:
Case 1: vi , vj ∉ H. Then φ(vi )φ(vj ) = vi vj = δij vi = δij φ(vi ).
Case 2: vi ∉ H, vj ∈ H. Then δij vi = 0 and so φ(vi )φ(vj ) = 0 = δij φ(vi ). A
similar argument holds for vi ∈ H, vj ∉ H.
Case 3: vi , vj ∈ H. Then φ(vi )φ(vj ) = 0 = δij φ(vi ).

Next, we check that the (A2) relations hold. First, we check that φ(s(e))φ(e) =
φ(e) for all e ∈ E 1 .
Case 1: r(e) ∉ H. Then s(e) ∉ H and so φ(s(e))φ(e) = s(e)e = e = φ(e).
Case 2: r(e) ∈ H. Then φ(s(e))φ(e) = 0 = φ(e).
Similar arguments show that φ(e)φ(r(e)) = φ(e), φ(r(e))φ(e∗ ) = φ(e∗ ) and
φ(e∗ )φ(s(e)) = φ(e∗ ) for all e ∈ E 1 .

Next we check that the (CK1) relation holds, i.e. that φ(e∗i )φ(ej ) = δij φ(r(ei ))
for all ei , ej ∈ E 1 .
Case 1: r(ei ), r(ej ) ∉ H. Then φ(e∗i )φ(ej ) = e∗i ej = δij r(ei ) = δij φ(r(ei )).
Case 2: r(ei ) ∈ H, r(ej ) ∉ H. Then ei ≠ ej , so φ(e∗i )φ(ej ) = 0 = δij φ(r(ei )). A
similar argument holds for r(ei ) ∉ H, r(ej ) ∈ H.
Case 3: r(ei ), r(ej ) ∈ H. Then again φ(e∗i )φ(ej ) = 0 = δij φ(r(ei )).

Finally, we check that the (CK2) relation holds, i.e. that φ( v − Σ_{sE (ei )=v} ei e∗i ) = 0
for all regular vertices v ∈ E 0 .


Case 1: v ∈ H. By the hereditary nature of H, for every edge ei ∈ E 1 with

s(ei ) = v, we have r(ei ) ∈ H. Thus

φ( v − Σ_{sE (ei )=v} ei e∗i ) = φ(v) − Σ_{sE (ei )=v} φ(ei )φ(e∗i ) = 0 − 0 = 0.

Case 2: v ∉ H. Because H is saturated, there must exist at least one edge ei ∈ E 1
such that s(ei ) = v and r(ei ) ∉ H (for otherwise, if r(ei ) ∈ H for all ei ∈ s−1 (v)
then we must have v ∈ H, a contradiction). If r(ei ) ∉ H, then φ(ei )φ(e∗i ) = ei e∗i .
Otherwise, φ(ei )φ(e∗i ) = 0. Recalling that, in the graph F , v only emits edges ei for
which r(ei ) ∉ H in the original graph E (by definition), we have φ( Σ_{sE (ei )=v} ei e∗i ) =
Σ_{sF (ei )=v} φ(ei )φ(e∗i ) = Σ_{sF (ei )=v} ei e∗i . This gives

φ( v − Σ_{sE (ei )=v} ei e∗i ) = v − Σ_{sF (ei )=v} ei e∗i = 0.

Thus φ preserves the Leavitt path algebra relations on E, and so is a K-algebra


homomorphism.

Now consider the ideal ker(φ). Since H is nonempty, there must exist a vertex
v ∈ H. Since φ(v) = 0 and v ≠ 0, we have ker(φ) ≠ {0}. Furthermore, since
H ≠ E 0 , there must exist a vertex w ∈ E 0 \H. Since φ(w) = w ≠ 0, we have
ker(φ) ≠ LK (E). Thus ker(φ) is a proper nontrivial ideal of LK (E), and so LK (E)
is not simple, as required.

To complete the proof, we now suppose that E contains a cycle c without exits,
and show again that this implies that LK (E) cannot be simple. Let v be the base
of this cycle and consider the nonzero ideal ⟨v + c⟩. We show that ⟨v + c⟩ ≠ LK (E)
by showing that v ∉ ⟨v + c⟩. Let c = ei1 . . . eiσ , where s(ei1 ) = r(eiσ ) = v. Since
c has no exits, we have that cc∗ = v (see the proof of Proposition 2.2.11, page 59).
Furthermore, by Lemma 2.1.10 we know that c∗ c = v. In addition, we must have
CSP(v) = {c}, since the existence of a closed simple path based at v that is distinct
from c would imply that c has an exit.

We proceed by contradiction: suppose that v ∈ ⟨v + c⟩. Then there exist
(nonzero) monic monomials αt , βt ∈ LK (E) and scalars kt ∈ K such that

v = Σ_{t=1}^{n} kt αt (v + c)βt .

Each summand in the above expression must begin and end in v, for otherwise
v = v(v)v = Σ_{t=1}^{n} kt v(αt (v + c)βt )v = 0, a contradiction. Furthermore, since c is

based at v, we can write v + c as v(v + c)v. Thus, since the right-hand side of the
above expression is nonzero, each αt and βt must begin and end in v. Thus each
αt , βt is a monomial in vLK (E)v and so, as shown in the proof of Proposition 2.2.11,
we have that each αt and βt is equal to cm or (c∗ )n for some m, n ∈ N0 .

Recalling that cc∗ = v = c∗ c, we have that c and c∗ commute with v + c, and so


αt (v + c) = (v + c)αt for each t = 1, . . . , n. Thus

v = Σ_{t=1}^{n} kt αt (v + c)βt = (v + c) Σ_{t=1}^{n} kt αt βt .

Now each αt βt term is a power of either c or c∗ , and so we can write v = (v+c)P (c, c∗ ),
where P is a polynomial with coefficients in K, i.e.

P (c, c∗ ) = l−m (c∗ )m + · · · + l0 v + · · · + ln cn , m, n ≥ 0.

Suppose that l−i ≠ 0 for some index i > 0, and let m0 be the maximum such index.
Then
(v + c)P (c, c∗ ) = l−m0 (c∗ )m0 + terms of higher degree = v.

Thus we must have that l−m0 = 0, a contradiction. Thus l−i = 0 for all i > 0.
Similarly, we can show that li = 0 for all i > 0. Thus P (c, c∗ ) = l0 v, and so
v = (v + c)l0 v = l0 (v + c), which is impossible since deg(v) = 0 but deg(l0 (v + c)) =
deg(c) > 0. Thus we have obtained our contradiction, proving that LK (E) is simple
and completing the proof.

Example 2.3.2. We now apply Theorem 2.3.1 to some of the Leavitt path algebras
introduced in Section 2.1.

(i) The finite line graph Mn. For every n ∈ N, Mn has no cycles, so trivially condition (ii) of Theorem 2.3.1 is satisfied. Furthermore, suppose that H is a nonempty hereditary saturated subset of (Mn)⁰, so that vi ∈ H for some i = 1, . . . , n. Then, by the hereditary nature of H, we must have vi+1, . . . , vn ∈ H. Furthermore, by the saturated nature of H we must have vi−1 ∈ H, and thus inductively vi−2, . . . , v1 ∈ H. Therefore H = (Mn)⁰ and so condition (i) is satisfied. Thus LK(Mn) ≅ Mn(K) is simple for all n ∈ N, which agrees with the result given in Lemma 1.1.10.

(ii) The single loop graph R1. The single loop in R1 forms a cycle without an exit, so that condition (ii) is not satisfied and thus LK(R1) ≅ K[x, x⁻¹] is not simple.

(iii) The rose with n leaves Rn. Every edge ei ∈ (Rn)¹ is a cycle, and if n ≥ 2 then ei has an exit, since any other edge is an exit. This satisfies condition (ii). Furthermore, condition (i) is trivially satisfied, as Rn has only one vertex, so that the only nonempty subset of (Rn)⁰ is (Rn)⁰ itself. Thus LK(Rn) ≅ L(1, n) is simple for all n ≥ 2.

(iv) The infinite clock graph C∞. In this case, for any radial vertex vi we have that {vi} is a hereditary saturated subset of (C∞)⁰, and so LK(C∞) ≅ (⊕_{i=1}^{∞} M2(K)) ⊕ KI22 is not simple.
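The two graph conditions of Theorem 2.3.1 are mechanically checkable for a finite graph. The following Python sketch (hypothetical helper names; it assumes a finite graph given as a vertex list and a list of (source, range) edge pairs, with parallel edges listed repeatedly) brute-forces the hereditary saturated subsets and detects cycles without exits:

```python
from itertools import combinations

def hereditary_saturated_subsets(vertices, edges):
    """All hereditary saturated subsets H of a finite graph: H is
    hereditary if every edge out of H lands in H, and saturated if
    every non-sink whose ranges all lie in H is itself in H."""
    out = {v: [w for (u, w) in edges if u == v] for v in vertices}
    result = []
    for r in range(len(vertices) + 1):
        for combo in combinations(vertices, r):
            H = set(combo)
            hereditary = all(w in H for v in H for w in out[v])
            saturated = all(v in H for v in vertices
                            if out[v] and set(out[v]) <= H)
            if hereditary and saturated:
                result.append(H)
    return result

def has_cycle_without_exit(vertices, edges):
    """A cycle without an exit lies in the subgraph of vertices emitting
    exactly one edge; any cycle of that partial function is exit-free."""
    out = {v: [w for (u, w) in edges if u == v] for v in vertices}
    nxt = {v: out[v][0] for v in vertices if len(out[v]) == 1}
    for start in nxt:
        walk, v = set(), start
        while v in nxt and v not in walk:
            walk.add(v)
            v = nxt[v]
        if v in walk:
            return True
    return False

def leavitt_simple(vertices, edges):
    """Theorem 2.3.1: L_K(E) is simple iff the only hereditary saturated
    subsets are the empty set and E^0, and every cycle has an exit."""
    hs = hereditary_saturated_subsets(vertices, edges)
    only_trivial = all(H in (set(), set(vertices)) for H in hs)
    return only_trivial and not has_cycle_without_exit(vertices, edges)
```

On the examples above this reproduces the verdicts: the line graph M3 and the rose R2 pass, while R1 (a loop with no exit) and a finite clock fail.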

This next corollary follows directly from Theorem 2.3.1 and Proposition 1.4.10,
and offers an alternative set of conditions that are equivalent to LK (E) being simple.

Corollary 2.3.3. Let E be an arbitrary graph. The Leavitt path algebra LK (E) is
simple if and only if E satisfies the following conditions:

(i) every cycle in E has an exit,

(ii) E is cofinal, and

(iii) for every singular vertex u ∈ E 0 , we have v ≥ u for all v ∈ E 0 .

For a given graph E, we define the set

V1 := {v ∈ E⁰ : |CSP(v)| = 1}.

We say that E satisfies Condition (K) if V1 = ∅. In other words, E satisfies Condition (K) if no vertex in E⁰ is the base of precisely one closed simple path.
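For small graphs one can enumerate CSP(v) directly. Below is a sketch (hypothetical names; edges are given as (source, range) pairs, and the search is cut off at max_len, so it only enumerates CSP(v) fully when all closed simple paths based at v are shorter than that bound):

```python
def closed_simple_paths(edges, v, max_len=6):
    """Closed simple paths based at v: edge-index sequences that start
    and end at v and do not pass through v internally (bounded search)."""
    out = {}
    for idx, (s, r) in enumerate(edges):
        out.setdefault(s, []).append((idx, r))
    paths, stack = [], [((), v)]
    while stack:
        path, cur = stack.pop()
        for idx, r in out.get(cur, []):
            if r == v:
                paths.append(path + (idx,))       # returned to the base
            elif len(path) + 1 < max_len:
                stack.append((path + (idx,), r))  # keep walking, avoiding v
    return paths

def satisfies_condition_K(vertices, edges, max_len=6):
    """Condition (K): no vertex is the base of exactly one closed simple path."""
    return all(len(closed_simple_paths(edges, v, max_len)) != 1
               for v in vertices)
```

For the rose R1 the single loop is the unique closed simple path at its vertex, so Condition (K) fails; for R2 there are two, so it holds, and an acyclic graph satisfies it vacuously.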

The following lemma was first given for row-finite graphs in [AA2, Lemma 7]
and then extended to arbitrary graphs in [AA3, Lemma 4.1].

Lemma 2.3.4. Let E be an arbitrary graph. If LK (E) is simple, then E satisfies


Condition (K).

Proof. Suppose that LK (E) is simple, and suppose there exists a v ∈ E 0 such that
CSP (v) = {p}. If p is not a cycle, it is easy to see that there exists a cycle based at v
whose edges are a subset of the edges of p, contradicting the fact that CSP (v) = {p}.
Thus p is a cycle and so, by condition (ii) of Theorem 2.3.1, there must exist an exit
e for p.
Let A be the set of all vertices in p. Now r(e) ∉ A, for otherwise we would have another closed simple path based at v distinct from p. Let X = {r(e)} and let X̄ be the hereditary saturated closure of X. Recall the definition of Gn(X) from Lemma 1.4.9. Since LK(E) is simple, by condition (i) of Theorem 2.3.1 we have X̄ = E⁰, and so we can find an n ∈ N0 such that

n = min{m : A ∩ Gm(X) ≠ ∅}.

Let w ∈ A ∩ Gn(X) and suppose that n > 0. By the minimality of n, we have w ∉ Gn−1(X). Thus, by the definition of Gn(X), w must be a regular vertex with r(s⁻¹(w)) ⊆ Gn−1(X), i.e. w emits edges only into Gn−1(X). Since w is a vertex in p, there must exist an edge f such that s(f) = w and r(f) ∈ A. Thus r(f) ∈ A ∩ Gn−1(X), contradicting the minimality of n. So we must have n = 0, and therefore w ∈ G0(X) = T(r(e)) (by definition). This means there is a path q from r(e) to w. Since w is in the cycle p, and e is an exit for p, there must also be a path p′ from w to r(e), and so p′q is a cycle based at w. However, this implies that |CSP(v)| ≥ 2, a contradiction.

The following useful result regarding infinite emitters in simple Leavitt path
algebras is from [AA3, Lemma 4.2].

Lemma 2.3.5. Let E be an arbitrary graph such that LK(E) is simple. If z ∈ E⁰ is an infinite emitter, then CSP(z) ≠ ∅. In particular, if LK(E) is simple and E is acyclic, then E must be row-finite.

Proof. Let z ∈ E⁰ be an infinite emitter, and let e ∈ s⁻¹(z). Since LK(E) is simple, by Corollary 2.3.3 (iii) we have that r(e) ≥ z. Thus there is a closed simple path p based at z, and so CSP(z) ≠ ∅. Furthermore, it is easy to see that there is a cycle based at z made up of a subset of the edges of p. Thus any graph E which is acyclic and for which LK(E) is simple cannot contain any infinite emitters.

The following result is from [AA2, Lemma 8].

Lemma 2.3.6. If R is a directed union of a chain of finite-dimensional subalgebras,


then R contains no infinite idempotents. In particular, R is not purely infinite.

Proof. Suppose that R contains an infinite idempotent e. Then, by Proposition 1.2.6, there exists an idempotent f ∈ R and elements x, y ∈ R such that e = xy, f = yx and fe = ef = f ≠ e. Since R is the directed union of a chain of finite-dimensional subalgebras, the elements e, f, x, y must be contained in a finite-dimensional subalgebra S of R. Thus, applying Proposition 1.2.6 again, we have that e is an infinite idempotent in S. Therefore eS = A1 ⊕ B1, where A1 ≠ {0}, and there exists an isomorphism φ : eS → B1. Define φ(A1) = A2 and φ(B1) = B2. Thus B1 = φ(eS) = φ(A1 ⊕ B1) = A2 ⊕ B2. Since A1 ≠ {0} and φ is an isomorphism, A2 ≠ {0} and so B2 is properly contained in B1. Once again, defining φ(A2) = A3 and φ(B2) = B3, we have φ(B1) = B2 = A3 ⊕ B3. By the same logic as above, B3 is properly contained in B2. Thus, repeating the process, we obtain a strictly decreasing chain of right ideals

B1 ⊃ B2 ⊃ B3 ⊃ · · ·

and so eS = A1 ⊕ B1 = A1 ⊕ A2 ⊕ B2 = A1 ⊕ A2 ⊕ A3 ⊕ B3 = · · · , contradicting the fact that S is finite-dimensional.

Recall that a ring R is locally matricial if R = lim→_{i∈I} Ri, where {Ri : i ∈ I} is an ascending chain of rings and each Ri is isomorphic to a finite direct sum of finite-dimensional matrix rings over K. Thus Lemma 2.3.6 leads immediately to the following corollary.

Corollary 2.3.7. Let R be a ring. If R is locally matricial, then R is not purely


infinite.

The following proposition is from [AA2, Proposition 9]. Though the result is
given there in a row-finite context, the proof still holds for arbitrary graphs.

Proposition 2.3.8. Let E be an arbitrary graph. Suppose there exists a vertex


w ∈ E 0 with the property that there are no closed simple paths based at any vertex
v ∈ T (w). Then the corner algebra wLK (E)w is not purely infinite.

Proof. Define a new graph H = (H⁰, H¹, r, s), where H⁰ = T(w), H¹ = s⁻¹(H⁰) and r and s are the functions rE and sE restricted to the set H¹. To show this is a well-defined graph, it is enough to show that r(s⁻¹(H⁰)) ⊆ H⁰. Take a vertex z ∈ H⁰ that is not a sink, and an edge e such that s(e) = z. Since z ∈ T(w), we have r(e) ∈ T(w) = H⁰, as required.
To show that LK(H) is a subalgebra of LK(E), we must show that the Leavitt path algebra relations hold in LK(H). It is clear that the first three relations hold; to show that the (CK2) relation holds, suppose that v is a regular vertex in H. Then v must be a regular vertex in E, and furthermore sH⁻¹(v) = sE⁻¹(v) ⊆ H¹, so the (CK2) relation holds in LK(H).


Since there are no closed simple paths based at any vertex v ∈ T(w), H must be acyclic. Thus, by Theorem 4.2.3² LK(H) is locally matricial, and so by Corollary 2.3.7 LK(H) is not purely infinite. Since wLK(H)w is a subring of LK(H), and LK(H) does not contain any infinite idempotents, by Corollary 1.2.7 wLK(H)w cannot contain any infinite idempotents and is therefore not purely infinite.
Finally, we show that wLK(H)w = wLK(E)w. Let α = ∑_i ki pi qi∗ be an arbitrary element of LK(E), where ki ∈ K and pi, qi ∈ E∗. Then wαw = ∑_j k_{ij} p_{ij} q_{ij}∗, where s(p_{ij}) = w = s(q_{ij}). Thus p_{ij}, q_{ij} ∈ LK(H) and so wLK(E)w ⊆ wLK(H)w. Since LK(H) is a subalgebra of LK(E), the reverse inclusion is clear. Thus wLK(H)w = wLK(E)w and so wLK(E)w is not purely infinite, as required.
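The construction in this proof, restricting E to the tree T(w), is itself a simple reachability computation. A sketch (hypothetical helper names; edges given as (source, range) pairs):

```python
def tree_of(edges, w):
    """T(w): all vertices reachable from w by a (possibly empty) path."""
    seen, frontier = {w}, [w]
    while frontier:
        u = frontier.pop()
        for (s, r) in edges:
            if s == u and r not in seen:
                seen.add(r)
                frontier.append(r)
    return seen

def restriction(edges, w):
    """The graph H of Proposition 2.3.8: H^0 = T(w), H^1 = s^{-1}(H^0).
    Hereditariness of T(w) guarantees every range stays inside H^0."""
    H0 = tree_of(edges, w)
    H1 = [(s, r) for (s, r) in edges if s in H0]
    assert all(r in H0 for (_, r) in H1)  # r(s^{-1}(H^0)) ⊆ H^0
    return H0, H1
```

The internal assertion is exactly the well-definedness check carried out at the start of the proof: since T(w) is hereditary, no edge with source in H⁰ can leave H⁰.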

We now come to the main result of this section. This was first given for row-finite graphs in [AA2, Theorem 11] and then extended to arbitrary graphs in [AA3, Theorem 4.3]. It is here that we can apply Theorem 1.3.19, which we presented in Section 1.3.

² The proof of this theorem is independent of any results in this section.

Theorem 2.3.9. Let E be an arbitrary graph. Then LK (E) is purely infinite simple
if and only if E satisfies the following conditions:

(i) The only hereditary saturated subsets of E 0 are ∅ and E 0 ,

(ii) every cycle in E has an exit, and

(iii) for every vertex v ∈ E 0 , there is a vertex u ∈ T (v) such that u is the base of a
cycle.

Proof. Suppose that conditions (i), (ii) and (iii) hold. Theorem 2.3.1 tells us immediately that LK(E) is simple. Thus, to show that LK(E) is purely infinite, by Theorem 1.3.19 it suffices to show that LK(E) is not a division ring and that for any pair of nonzero elements x, y ∈ LK(E) there exist s, t ∈ LK(E) such that sxt = y. Together, conditions (ii) and (iii) show there exists at least one cycle with an exit in E, and thus there must exist two distinct edges e1 and e2 in E¹. Since e1∗e2 = 0, LK(E) has zero divisors and therefore cannot be a division ring.
Now let x, y be a pair of nonzero elements in LK(E). Since E contains no cycles without exits, by applying Proposition 2.2.11 we can find elements a, b ∈ LK(E) such that axb = u, where u ∈ E⁰. By condition (iii), u connects to some vertex v at the base of a cycle c. Thus either u = v or there is a path p ∈ E∗ with s(p) = u and r(p) = v. By choosing a′ = b′ = u in the former case, or a′ = p∗, b′ = p in the latter, we have elements a′, b′ ∈ LK(E) such that a′ub′ = v.
Since c is a closed simple path based at v and LK(E) is simple, Lemma 2.3.4 tells us there must be at least one other closed simple path q based at v with q ≠ c. For each m ∈ N, let dm = c^{m−1}q. Since c cannot be a subpath of q, and vice versa, we have c∗q = 0 = q∗c. Using that c∗c = v and assuming that m > n, we have dm∗ dn = (q∗(c∗)^{m−1})(c^{n−1}q) = q∗(c∗)^{m−n}q = 0. Similarly, dm∗ dn = 0 for n > m. For the case m = n, we have dm∗ dn = q∗vq = v. Thus dm∗ dn = δ_{m,n} v for all m, n ∈ N.
Since LK(E) is simple, we have ⟨v⟩ = LK(E), and so for an arbitrary w ∈ E⁰ we can write w = ∑_{i=1}^{t} ai v bi for some ai, bi ∈ LK(E). Let aw = ∑_{i=1}^{t} ai di∗ and bw = ∑_{j=1}^{t} dj bj. Using the fact that di∗ dj = δ_{i,j} v, we have

aw v bw = (∑_{i=1}^{t} ai di∗) v (∑_{j=1}^{t} dj bj) = ∑_{i=1}^{t} ai v bi = w.

In other words, for any vertex w ∈ E 0 , we can find aw , bw ∈ LK (E) for which
aw vbw = w.
By Lemma 2.1.12, we can find a finite subset of vertices X = {v1, . . . , vs} for which e = ∑_{i=1}^{s} vi is a local unit for y, so that ey = y = ye. Let a_{vi}, b_{vi} be elements for which a_{vi} v b_{vi} = vi, for each vi ∈ X. Let s′ = ∑_{i=1}^{s} a_{vi} di∗ and t′ = ∑_{j=1}^{s} dj b_{vj}. This gives

s′ v t′ = (∑_{i=1}^{s} a_{vi} di∗) v (∑_{j=1}^{s} dj b_{vj}) = ∑_{i=1}^{s} a_{vi} v b_{vi} = ∑_{i=1}^{s} vi = e.

In summary, we have found elements a, b, a′, b′, s′, t′ ∈ LK(E) for which axb = u, a′ub′ = v and s′vt′ = e. Let s = s′a′a and t = bb′t′y. Thus we have sxt = (s′a′a)x(bb′t′y) = s′a′(axb)b′t′y = s′(a′ub′)t′y = (s′vt′)y = ey = y, and so LK(E) is purely infinite.

Conversely, suppose that LK (E) is purely infinite simple. Again, conditions (i)
and (ii) follow directly from the fact that LK (E) is simple (by Theorem 2.3.1). If
condition (iii) does not hold, then there exists a vertex w ∈ E 0 such that no vertex
v ∈ T (w) is the base of a cycle. Since a cycle can be formed from a subset of edges
of any closed path, there cannot be any closed simple path based at any vertex v ∈
T (w) either. Thus, by Proposition 2.3.8, wLK (E)w is not purely infinite. Finally,
Proposition 1.3.18 gives that LK (E) is not purely infinite, a contradiction.

The following proposition from [AA3, Theorem 4.4] shows that, for any graph E
for which LK (E) is simple, we have the following dichotomy.

Proposition 2.3.10. Let E be an arbitrary graph. If LK(E) is simple, then either

(i) LK (E) is purely infinite simple and E contains a cycle, or

(ii) LK (E) is locally matricial and E is acyclic.



Proof. If E is acyclic then Theorem 4.2.3 tells us that LK (E) is locally matricial.
Otherwise, suppose E contains a cycle c. By Corollary 2.3.3 we have that E is
cofinal, and so every vertex connects to the infinite path c∞ . Thus every vertex
connects to a cycle, satisfying condition (iii) of Theorem 2.3.9. Since LK (E) is
simple, conditions (i) and (ii) of Theorem 2.3.9 are satisfied (by Theorem 2.3.1),
and thus LK (E) is purely infinite simple.

Example 2.3.11. Of the Leavitt path algebras determined to be simple in Example 2.3.2, we now use Proposition 2.3.10 to determine which of these are purely infinite simple.

(i) The finite line graph Mn. Since Mn is acyclic for all n ∈ N, LK(Mn) must be locally matricial for all n ∈ N. This is no surprise, considering that LK(Mn) ≅ Mn(K).

(ii) The rose with n leaves Rn. Since Rn contains n cycles for each n ∈ N, LK(Rn) ≅ L(1, n) must be purely infinite simple for all n ≥ 2.
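Condition (iii) of Theorem 2.3.9 — every vertex connects to a cycle — is likewise decidable for a finite graph by a reachability computation. A sketch (hypothetical names; a vertex lies on a cycle exactly when it is strictly forward-reachable from itself):

```python
def reaches_a_cycle(vertices, edges):
    """True iff every vertex v has some u in T(v) lying on a cycle."""
    out = {v: set() for v in vertices}
    for s, r in edges:
        out[s].add(r)

    def forward(v):
        """Vertices reachable from v by a path of length >= 1."""
        seen, frontier = set(), [v]
        while frontier:
            u = frontier.pop()
            for w in out[u]:
                if w not in seen:
                    seen.add(w)
                    frontier.append(w)
        return seen

    on_cycle = {v for v in vertices if v in forward(v)}
    return all(on_cycle & (forward(v) | {v}) for v in vertices)
```

Together with a check of the simplicity conditions of Theorem 2.3.1, this realises the dichotomy of Proposition 2.3.10 computationally: the rose R2 passes (purely infinite simple), while the acyclic line graph M3 fails (locally matricial).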

2.4 Desingularisation
Recall that a vertex v ∈ E 0 is said to be singular if v is either a sink or an infinite
emitter. In this section we look at the process of ‘desingularisation’, in which we
construct from a given graph E a new graph that contains no singular vertices; in
other words, a graph that is row-finite and has no sinks. This concept was originally
used in the C ∗ -algebra context in [BPRS]. The significance of the desingularisa-
tion process is illustrated in Theorem 2.4.5, in which we show that the Leavitt
path algebra of a graph E is Morita equivalent to the Leavitt path algebra of its
desingularisation.

Definition 2.4.1. Let E be a countable graph. A desingularisation of E is a


graph F constructed from E that contains no singular vertices. We construct F by
‘adding a tail’ to each sink and infinite emitter in E 0 . If v0 is a sink in E, then we
attach an infinite line graph at v0 like so:

•v0 --> •v1 --> •v2 --> •v3 --> ···

If v0 is an infinite emitter in E, then we first list the edges e1, e2, e3, . . . ∈ s⁻¹(v0) (noting that the countability of E allows us to list the edges in this way). Then we again attach an infinite line graph at v0:

•v0 --f1--> •v1 --f2--> •v2 --f3--> •v3 --> ···

We then remove the edges in s−1 (v0 ) and add an edge gj from vj−1 (in the infinite
line graph) to r(ej ) for each ej ∈ s−1 (v0 ). Effectively, we are removing each ej
and replacing it with the path f1 f2 . . . fj−1 gj of length j. Note that both ej and
f1 f2 . . . fj−1 gj have source v0 and range r(ej ).

Note also that the desingularisation of a graph may not necessarily be unique:
differences may arise depending on the way in which we choose to order the edges
in s−1 (v0 ) (in the case that v0 is an infinite emitter).

We now give two examples of the desingularisation process. In these examples


the desingularisation is in fact unique (up to isomorphism), due to the symmetry of
the graphs.

Example 2.4.2. Consider the infinite edges graph

E∞ : •u --(∞)--> •v

Note that u is an infinite emitter and v is a sink, so we add a tail at both vertices in the desingularisation process. Furthermore, each edge emitted by u has range v, and so we obtain the desingularisation

•u --> •u1 --> •u2 --> •u3 --> ···
(together with an edge from u and from each ui down to v)
•v --> •v1 --> •v2 --> •v3 --> ···

Example 2.4.3. Recall the infinite clock graph

C∞ : [the graph with central vertex u emitting infinitely many edges, one to each of the radial vertices v1, v2, v3, v4, . . .]

Again, each vertex in this graph is singular, resulting in an infinite number of infinite tails. Thus the desingularisation of C∞ looks like

•u --> •u1 --> •u2 --> •u3 --> ···
(with edges u → v1, u1 → v2, u2 → v3, u3 → v4, . . . , and an infinite tail •vi --> •vi1 --> •vi2 --> ··· added at each sink vi)
The following proposition is from [AA3, Proposition 5.1].

Proposition 2.4.4. Let E be a countable graph and let F be a desingularisation of


E. Then there exists a monomorphism of K-algebras from LK (E) to LK (F ).

Proof. We define a map φ : LK(E) → LK(F) on the generators of LK(E) as follows. First, we define φ(v) = v for all v ∈ E⁰. Note that this is valid since no vertices are removed in the construction of F, only added. Next, if s(e) is a regular vertex then we define φ(e) = e and φ(e∗) = e∗. Furthermore, if e = ej ∈ s⁻¹(v0), where v0 is an infinite emitter, then we define φ(ej) = f1 f2 . . . fj−1 gj and φ(ej∗) = gj∗ fj−1∗ . . . f2∗ f1∗, where f1, f2, . . . , fj−1 and gj are as in Definition 2.4.1.
Expand φ linearly and multiplicatively. In order to check that φ is a well-defined K-homomorphism, we must check that φ preserves the Leavitt path algebra relations on LK(E). Clearly the (A1) relation is preserved, since each vertex in LK(E) is mapped to itself in LK(F). Similarly, the (A2) relations are easily seen to be preserved, since s(φ(e)) = s(e) and r(φ(e)) = r(e) for all e ∈ E¹ (as noted in Definition 2.4.1). To check the (CK1) relation, note that the only nontrivial situation arises when s(ei) = s(ej) = v0, where v0 is an infinite emitter. In the case that i = j, we have φ(ei∗)φ(ei) = (gi∗ fi−1∗ . . . f2∗ f1∗)(f1 f2 . . . fi−1 gi) = r(ei) = φ(r(ei)). On the other hand, if i ≠ j then φ(ei∗)φ(ej) = (gi∗ fi−1∗ . . . f2∗ f1∗)(f1 f2 . . . fj−1 gj) = 0, since f1 f2 . . . fi−1 gi and f1 f2 . . . fj−1 gj are not subpaths of each other. Thus the (CK1) relation is preserved. Finally, since regular vertices (and the edges they emit) are unchanged by φ, and we only evaluate the (CK2) relation at regular vertices, it is clear that the (CK2) relation is preserved. Thus φ is a well-defined K-homomorphism, as required.
Finally, we show that φ is a monomorphism. Suppose that x ∈ ker(φ) and x ≠ 0. By Proposition 2.2.11 there exist y, z ∈ LK(E) for which either yxz = v ∈ E⁰ or yxz = ∑_{i=−m}^{n} ki c^i ≠ 0, where m, n ∈ N0, ki ∈ K and c is a cycle without exits in E. Since ker(φ) is a two-sided ideal of LK(E), we have yxz ∈ ker(φ). By definition, ker(φ) contains no vertices (since φ(v) = v for all v ∈ E⁰), and so we must have ∑_{i=−m}^{n} ki c^i ∈ ker(φ). Thus φ(∑_{i=−m}^{n} ki c^i) = ∑_{i=−m}^{n} ki φ(c)^i = 0. Note that φ sends paths of length t to paths of length greater than or equal to t, and that c and φ(c) must have the same source and range. Furthermore, φ(c) cannot pass through any vertex more than once (from the definition of φ) and so φ(c) is a cycle in F. Since LK(F) is graded, this implies that each term ki φ(c)^i = 0, and thus each ki = 0, which is impossible since ∑_{i=−m}^{n} ki c^i ≠ 0. Thus ker(φ) = {0} and so φ is a monomorphism, as required.

Proposition 2.4.4 leads to the following powerful result from [AA3, Theorem 5.2].
Here we have greatly expanded the proof to clarify the arguments and results used
at each step.

Theorem 2.4.5. Let E be a countable graph and let F be a desingularisation of E.


Then the Leavitt path algebras LK (E) and LK (F ) are Morita equivalent.

Proof. We begin by labelling the vertices of E as a sequence {vl}_{l=1}^{∞}. We can form idempotents tk := ∑_{l≤k} vl for each k ∈ N. Note that for any finite subset X ⊆ LK(E) there exists a tk such that tk x = x = x tk for all x ∈ X (see the proof of Lemma 2.1.12), and so {tk : k ∈ N} forms a set of local units for LK(E). (Note that if E⁰ is finite then tk is simply the identity for LK(E) for all k ≥ |E⁰|, by Lemma 2.1.12.)

Let t = tk for an arbitrary k ∈ N. We show that tLK(E)t ≅ tLK(F)t. Recall the monomorphism φ : LK(E) → LK(F) from Proposition 2.4.4, and consider the restriction map φ|_{tLK(E)t} : tLK(E)t → LK(F). Since φ(v) = v for each v ∈ E⁰, we have φ(t) = t, and thus for any x = txt ∈ tLK(E)t we have φ(txt) = tφ(x)t. Now if tφ(x)t = tφ(y)t for some y ∈ tLK(E)t, then clearly φ(x) = φ(y) and so x = y, since φ is a monomorphism. Thus φ|_{tLK(E)t} is a monomorphism from tLK(E)t to tLK(F)t.

To show that φ|_{tLK(E)t} is an epimorphism, consider an arbitrary element x ∈ tLK(F)t. Then x = ∑_{i=1}^{n} ki pi qi∗, where ki ∈ K and pi, qi are paths in F with r(pi) = r(qi) and s(pi), s(qi) ∈ {vl : l ≤ k} for each i ∈ {1, . . . , n}. Suppose that p is a path in F with s(p) ∈ {vl : l ≤ k}. If p = p1 . . . pn, where each pi is an edge from the original graph E, then p = φ(p1 . . . pn). If p = f1 f2 . . . fj−1 gj (where the fi and gj are as defined in Definition 2.4.1), then p = φ(ej), where ej ∈ s⁻¹(v0) for some infinite emitter v0 ∈ E⁰. Furthermore, if p is a concatenation of two such paths, then clearly p ∈ Im(φ). For all three cases above, clearly we also have p∗ ∈ Im(φ).

The final possible form for p is p = p1 . . . pn f1 . . . fj (with n ≥ 0), where the fi form part of an infinite tail from either a sink or an infinite emitter, and each pi is an edge from the original graph E. Let s(f1) = v0 and r(fj) = vj. By the desingularisation definition, any path q in F with r(q) = vj must be of the form q1 . . . qm f1 . . . fj (with m ≥ 0), where each qi is an edge from the original graph E. Thus pq∗ = φ(p1 . . . pn)(f1 . . . fj)(fj∗ . . . f1∗)φ(qm∗ . . . q1∗), and so it suffices to show that (f1 . . . fj)(fj∗ . . . f1∗) is in the image of φ.

If v0 is a sink, then each vi along the infinite tail based at v0 emits precisely one edge, namely fi+1. Thus applying the (CK2) relation at vi gives vi = fi+1 fi+1∗, and so

(f1 . . . fj−1 fj)(fj∗ fj−1∗ . . . f1∗) = f1 . . . fj−1 vj−1 fj−1∗ . . . f1∗
  = (f1 . . . fj−1)(fj−1∗ . . . f1∗)
  = f1 . . . fj−2 vj−2 fj−2∗ . . . f1∗
  ⋮
  = f1 f1∗
  = v0,

and v0 = φ(v0), as required.


Now suppose that v0 is an infinite emitter. Then each vi along the infinite tail based at v0 emits precisely two edges, namely fi+1 and gi+1. Thus applying the (CK2) relation at vi gives vi = fi+1 fi+1∗ + gi+1 gi+1∗, and so

(f1 . . . fj−1 fj)(fj∗ fj−1∗ . . . f1∗) = f1 . . . fj−1 (vj−1 − gj gj∗) fj−1∗ . . . f1∗
  = (f1 . . . fj−1)(fj−1∗ . . . f1∗) − (f1 . . . fj−1 gj)(gj∗ fj−1∗ . . . f1∗).

Repeating this expansion eventually gives

(f1 . . . fj−1 fj)(fj∗ fj−1∗ . . . f1∗) = f1 f1∗ − ∑_{i=2}^{j} (f1 . . . fi−1 gi)(gi∗ fi−1∗ . . . f1∗)
  = v0 − g1 g1∗ − ∑_{i=2}^{j} (f1 . . . fi−1 gi)(gi∗ fi−1∗ . . . f1∗)
  = φ( v0 − e1 e1∗ − ∑_{i=2}^{j} ei ei∗ )

and we are done. (Note that the inverse images of all of these paths also have source in the set {vl : l ≤ k}, and thus are indeed contained in tLK(E)t.) Therefore φ|_{tLK(E)t} is an isomorphism of K-algebras and so tLK(E)t ≅ tLK(F)t.

From the definition of tk , we can view tk LK (E)tk as the set of all elements in
LK (E) generated by paths p with s(p) ∈ {vl : l ≤ k}. Thus we have tk LK (E)tk ⊆
tk+1 LK (E)tk+1 (and tk LK (F )tk ⊆ tk+1 LK (F )tk+1 ) for each k ∈ N. For every pair

i, j ∈ N with i ≤ j, let ϕij be the inclusion map from ti LK (E)ti to tj LK (E)tj and
let ϕ̄ij be the inclusion map from ti LK (F )ti to tj LK (F )tj . For such a pair i, j it
is easy to see that tj ti = ti = ti tj , and so for any x = ti xti ∈ ti LK (E)ti we have
tj xtj = tj (ti xti )tj = ti xti = x. Thus we can view the inclusion map ϕij as mapping
ti xti 7→ tj xtj (and similarly for ϕ̄ij ). For ease of notation, let φ|tk LK (E)tk = φk for all
k ∈ N. Thus for any x = ti xti ∈ ti LK (E)ti we have

ϕ̄ij φi (ti xti ) = ϕ̄ij (ti φi (x)ti ) = tj φi (x)tj = φj (tj xtj ) = φj ϕij (ti xti ),

and so ϕ̄ij φi = φj ϕij; that is, the following diagram commutes for all i, j ∈ N with i ≤ j:

  ti LK(E)ti --φi--> ti LK(F)ti
      |                  |
     ϕij               ϕ̄ij
      ↓                  ↓
  tj LK(E)tj --φj--> tj LK(F)tj

(Similarly, since φi is an isomorphism for all i ∈ N, we also have ϕij φi⁻¹ = φj⁻¹ ϕ̄ij.)

Clearly (ti LK(E)ti, ϕij)_{i∈N} and (ti LK(F)ti, ϕ̄ij)_{i∈N} are direct systems of rings. Since these are both ascending chains of rings, the direct limits lim→_{i∈N} ti LK(E)ti and lim→_{i∈N} ti LK(F)ti exist (see Appendix A). For ease of notation, we set

RE = lim→_{i∈N} ti LK(E)ti   and   RF = lim→_{i∈N} ti LK(F)ti.

For each i ∈ N, let ϕi be the map from ti LK(E)ti to RE and let ϕ̄i be the map from ti LK(F)ti to RF as defined in Definition A.1.1.

Now, for each i ∈ N there exists a ring homomorphism ϕ̄i φi : ti LK (E)ti → RF ,


and furthermore, for i ≤ j,

(ϕ̄j φj )ϕij = ϕ̄j (φj ϕij ) = ϕ̄j (ϕ̄ij φi ) = (ϕ̄j ϕ̄ij )φi = ϕ̄i φi .

Thus, by condition (ii) of Definition A.1.1, there exists a unique ring homomorphism
µ : RE → RF for which ϕ̄i φi = µϕi for all i ∈ N. By a similar argument, there exists
a unique ring homomorphism µ′ : RF → RE for which ϕi φi⁻¹ = µ′ ϕ̄i for all i ∈ N. This situation is illustrated in the following commutative diagram, which holds for each pair i, j ∈ N with i ≤ j:

[Diagram: the top row shows ϕij : ti LK(E)ti → tj LK(E)tj and the bottom row shows ϕ̄ij : ti LK(F)ti → tj LK(F)tj; the maps ϕi, ϕj into RE and ϕ̄i, ϕ̄j into RF, the vertical isomorphisms φi, φj (with inverses φi⁻¹, φj⁻¹), and the homomorphisms µ : RE → RF and µ′ : RF → RE all commute.]

In summary, there exist unique ring homomorphisms µ : RE → RF and µ′ : RF → RE that satisfy the following equations:

ϕ̄i φi = µ ϕi   and   ϕi φi⁻¹ = µ′ ϕ̄i.

From the second equation we have ϕi = µ′ ϕ̄i φi = µ′µ ϕi (substituting from the first equation) for all i ∈ N, and so, by appealing to the uniqueness given in Definition A.1.1 (ii), we have µ′µ = 1_{RE}. Similarly, the first equation gives ϕ̄i = µ ϕi φi⁻¹ = µµ′ ϕ̄i, and so µµ′ = 1_{RF}. Thus µ′ = µ⁻¹ and we have RE ≅ RF. However, as noted above, the set {tk : k ∈ N} forms a set of local units for LK(E), and furthermore, for each pair i, j ∈ N with i ≤ j we have ti ∈ tj LK(E)tj. Thus, by Lemma A.1.2 we have RE = LK(E), and so RF = lim→_{k∈N} tk LK(F)tk ≅ LK(E).
Now suppose that w0 is a singular vertex in E and let wi be any vertex in F contained in the 'infinite tail' added at w0 in the desingularisation process. Furthermore, let pi denote the path f1 . . . fi from w0 to wi in F∗. Define πi : LK(F)w0 → LK(F)wi by x ↦ xpi. It is easy to see that πi is a left LK(F)-module homomorphism. Furthermore, LK(F)wi is projective in LK(F)-Mod, by Proposition 1.2.13. For an arbitrary y ∈ LK(F)wi, we have y = ywi = ypi∗pi = πi(ypi∗), and so πi is an epimorphism. Thus, by Lemma 1.2.10, LK(F)wi is isomorphic to a direct summand of LK(F)w0 as left LK(F)-modules.
By Lemma 2.1.9, we have LK(F) = ⊕_{v∈F⁰} LK(F)v. Furthermore, LK(F) is a generator for LK(F)-Mod (see Definition 1.3.10). Now, from the above paragraph we have

LK(F) = ⊕_{v∈F⁰} LK(F)v = (⊕_{v∈E⁰} LK(F)v) ⊕ (⊕_{wi} LK(F)wi) ≅ (⊕_{v∈E⁰} LK(F)v) ⊕ (⊕_{wi} Awi),

where each wi ∈ F⁰ is contained in an infinite tail based at w0 (for some singular vertex w0) and Awi is a direct summand of LK(F)w0. For ease of notation, let H = ⊕_{v∈E⁰} LK(F)v. Then we have that ⊕_{wi} Awi is a direct summand of H, since Awi is a direct summand of LK(F)w0 for each singular vertex w0 ∈ E⁰. From the above equation, we have an isomorphism between a direct summand of H ⊕ H and LK(F), which implies we have an epimorphism from H ⊕ H to LK(F). Since LK(F) is a generator for LK(F)-Mod, for any M ∈ LK(F)-Mod there exists an index set I and an epimorphism τ : LK(F)^(I) → M. This induces an epimorphism η : H^(2I) = H^(I) ⊕ H^(I) → M, and so H is a generator for LK(F)-Mod.
L
Now, note that we have LK(F)tk = LK(F)(v1 + · · · + vk) = ⊕_{i≤k} LK(F)vi for each k ∈ N. Thus it is easy to see that lim→_{k∈N} LK(F)tk = ⊕_{v∈E⁰} LK(F)v = H. Note that each LK(F)tk is projective (by Proposition 1.2.13), is finitely generated (with generating set {tk}) and is a direct summand of H. Thus H is a locally projective generator for LK(F)-Mod (see Definition 1.3.14), and so by Proposition 1.3.15 any ring that is isomorphic to lim→_{k∈N} End(LK(F)tk) must be Morita equivalent to LK(F).

Finally, by Lemma 1.2.2 we have End(LK(F)tk) ≅ tk LK(F)tk, and so

lim→_{k∈N} End(LK(F)tk) ≅ lim→_{k∈N} tk LK(F)tk ≅ LK(E).

Thus LK(F) and LK(E) are Morita equivalent, completing the proof.


Chapter 3

Socle Theory of Leavitt Path Algebras

In this chapter we define the notion of a socle and give a precise description of the
socle of an arbitrary Leavitt path algebra in Section 3.2. Furthermore, we expand
this definition to a socle series in Section 3.4, and again describe the socle series
of a Leavitt path algebra, applying the concept of a quotient graph introduced in
Section 3.3. To begin, we introduce some preliminary ring-theoretic definitions and
results.

3.1 Preliminary Results


Definition 3.1.1. Let R be a ring. Recall that L is a minimal left ideal of R if
L 6= 0 and there exists no left ideal K of R such that 0 ⊂ K ⊂ L. The left socle of
R, denoted socl (R), is defined to be the sum of the family of minimal left ideals of
R (or the zero ideal, if R contains no minimal left ideals). We can define the right
socle of R, denoted socr (R), similarly.

It is clear from the definition that socl (R) is a left ideal of R. However, what is
slightly less obvious is that it is also a right ideal of R, as the following proposition
shows.

CHAPTER 3. SOCLE THEORY OF LEAVITT PATH ALGEBRAS 82

Proposition 3.1.2. For any ring R, socl (R) is a two-sided ideal of R.

Proof. Since socl (R) is clearly a left ideal of R (since it is the sum of left ideals), it
suffices to show that socl (R) is also a right ideal of R. Take an arbitrary nonzero
element s ∈ socl (R) and an arbitrary nonzero r ∈ R. Since s ∈ socl (R), we can
write s = l1 + . . . + ln , where each li ∈ Li and Li is a minimal left ideal of R. Thus
sr = l1 r + . . . + ln r, and so it suffices to show that li r ∈ socl (R) for each i.
Take an arbitrary minimal left ideal Li of R and define φ : Li → R by φ(x) = xr,
for all x ∈ Li . It is easy to see that φ is an R-module homomorphism: clearly φ is
additive, and for any r0 ∈ R and x ∈ Li we have φ(r0 x) = (r0 x)r = r0 (xr) = r0 φ(x).
Since ker(φ) is a left ideal contained in Li and Li is minimal, then either ker(φ) =
Li or ker(φ) = {0}. In the former case, this gives φ(Li ) = {0}. In the latter case,
φ is a monomorphism, and so φ : Li → φ(Li ) is an isomorphism of left R-modules.
Specifically, φ(Li ) is a minimal left ideal of R. In either case, φ(Li ) ⊆ socl (R), and
thus xr ∈ socl (R) for every x ∈ Li . In particular, li r ∈ socl (R) and we are done.

A similar proof shows that socr (R) is also a two-sided ideal of R.

For a given ring R, a left R-module is semisimple if it is the direct sum of


simple submodules. If we view R as a left module over itself, then R is semisimple
if it is the direct sum of minimal left ideals. Thus we have that socl (R) = R if and
only if R is semisimple.

An ideal I is said to be nilpotent if there exists a k ∈ N such that

I^k := { ∑_{i=1}^{n} x_{i,1} · · · x_{i,k} : x_{i,j} ∈ I for all i, j, and n ∈ N } = 0.

A ring R is said to be semiprime if it contains no nonzero two-sided nilpotent


ideals. Furthermore, a ring R is said to be nondegenerate if aRa = 0 for some
a ∈ R implies that a = 0. The following proposition shows that these two concepts
are equivalent.

Proposition 3.1.3. Let R be a ring. Then R is semiprime if and only if R is


nondegenerate.
CHAPTER 3. SOCLE THEORY OF LEAVITT PATH ALGEBRAS 83
Proof. Suppose that R is nondegenerate. Let I be a nonzero two-sided ideal of R such that I^n = 0 for some n ∈ N. Let n_0 be the minimum such n and set J = I^{n_0 − 1}. Thus J is a nonzero two-sided ideal and J^2 = 0. Let a be an arbitrary element of J. Then, since Ra ⊆ J, we have aRa ⊆ J^2 = 0. Since R is nondegenerate, a = 0 and so J = 0, a contradiction. Thus R is semiprime.
Conversely, suppose that R is semiprime and that aRa = 0 for some a ∈ R. Recall that RaR is the two-sided ideal given by

RaR = { ∑_{i=1}^{n} r_i a s_i : r_i, s_i ∈ R, n ∈ N }.
Then (RaR)^2 = (RaR)(RaR) ⊆ R(aRa)R = 0, and so RaR = 0, since R is semiprime. Now let J be the two-sided ideal generated by a, so that

J = { ∑_i r_i a s_i + ∑_j r′_j a + ∑_k a s′_k + ma : r_i, s_i, r′_j, s′_k ∈ R, m ∈ Z }.
Then any element of J^3 must be a sum of elements of the form xay, where x, y ∈ R, and so J^3 ⊆ RaR and thus J^3 = 0. Since R is semiprime, we have that J = 0 and so a = 0, since a ∈ J. Thus R is nondegenerate.
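The equivalence can be sanity-checked numerically for the commutative rings Z/nZ, where both conditions fail exactly when n is not squarefree. A small Python sketch (ours, not part of the thesis); it uses the standard fact that a commutative ring has a nonzero nilpotent ideal precisely when it has a nonzero nilpotent element:

```python
def is_nondegenerate(n):
    """Does aRa = 0 force a = 0 in R = Z/nZ?"""
    return all(any((a * r * a) % n for r in range(n)) for a in range(1, n))

def is_semiprime(n):
    """In the commutative ring Z/nZ, a nonzero nilpotent two-sided ideal
    exists iff some nonzero element is nilpotent."""
    return not any(any(pow(a, k, n) == 0 for k in range(1, n + 1))
                   for a in range(1, n))

# Proposition 3.1.3: the two notions agree.
for n in range(2, 40):
    assert is_semiprime(n) == is_nondegenerate(n)
print("semiprime == nondegenerate for Z/nZ, 2 <= n < 40")
```

For instance in Z/4Z the element a = 2 satisfies a·r·a = 4r ≡ 0 for every r, so Z/4Z is degenerate; correspondingly the ideal {0, 2} squares to zero.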
The following proposition shows, somewhat surprisingly, that if R contains no nonzero two-sided nilpotent ideals then it cannot contain any nonzero left or right nilpotent ideals either.

Proposition 3.1.4. Let R be a ring. Then R is semiprime if and only if R contains no nonzero left (or right) nilpotent ideals.
Proof. Clearly if R contains no nonzero left (or right) nilpotent ideals then it contains no nonzero two-sided nilpotent ideals and must therefore be semiprime. To prove the converse, suppose that R is semiprime and let I be a nonzero left ideal of R such that I^n = 0 for some n ∈ N. As in the proof of Proposition 3.1.3, we can find a left ideal J such that J is nonzero and J^2 = 0. Take an arbitrary nonzero element x ∈ J and let L be the two-sided ideal generated by x, so that

L = { ∑_i r_i x s_i + ∑_j r′_j x + ∑_k x s′_k + mx : r_i, s_i, r′_j, s′_k ∈ R, m ∈ Z }.
Since L^3 ⊆ RxR and Rx ⊆ J, we have

L^6 ⊆ (RxR)(RxR) ⊆ RxRxR ⊆ J^2 R = 0.
Thus, since R is semiprime we have that L = 0, and so x = 0. Since x was an arbitrary element of J, we have that J = 0, a contradiction. Thus R contains no nonzero nilpotent left ideals. Similarly, we can show that R contains no nonzero nilpotent right ideals.
We now move on to describing the general form of minimal left ideals. The
following proposition is from [J2, Proposition 3.9.1].

Proposition 3.1.5. Let D be a minimal left ideal of a ring R. Then either D^2 = 0 or D contains an idempotent e such that D = Re = {re : r ∈ R}.
Proof. Suppose that D^2 ≠ 0. Then there exists b ∈ D such that Db ≠ 0. Since Db is a nonzero left ideal contained in D and D is minimal, we have Db = D. Now let J be the left annihilator of b in R; that is, J = {r ∈ R : rb = 0}. It is clear that J is a left ideal of R and, furthermore, J ∩ D ≠ D, since otherwise we would have Db = 0. Since J ∩ D is a left ideal contained in D we must therefore have J ∩ D = 0. Now, Db = D implies that eb = b for some e ∈ D. Thus b = eb = e^2 b and so (e − e^2)b = 0. Therefore e − e^2 ∈ J ∩ D = 0 and so e = e^2. Since b is nonzero, e is nonzero, and so Re is a nonzero left ideal contained in D. Thus Re = D, as required. Finally, note that Re = {re : r ∈ R} since e is an idempotent.
Proposition 3.1.5 leads immediately to the following corollary.

Corollary 3.1.6. Every minimal left ideal of a semiprime ring R is of the form Re, where e is an idempotent in R.
We can show similarly that every minimal right ideal of a semiprime ring R is
of the form eR, where e is an idempotent. Note that the converse is not necessarily
true: for a given idempotent e in a semiprime ring R, Re and eR may not be minimal
left or right ideals. However, the following proposition from [L1, Lemma 1.19] shows
that if one of these is minimal then both are.
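As a concrete illustration (our own, using the minimality criterion of Lemma 1.1.6 cited later: Re is minimal precisely when e ∈ Ra for every nonzero a ∈ Re), one can verify by brute force that Re is a minimal left ideal for R = M_2(F_2) and the idempotent e = E_11:

```python
import itertools

def mmul(a, b):
    """Multiply 2x2 matrices over the field F_2."""
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2)) % 2
                       for j in range(2)) for i in range(2))

R = [((w, x), (y, z))
     for w, x, y, z in itertools.product(range(2), repeat=4)]
ZERO = ((0, 0), (0, 0))
e = ((1, 0), (0, 0))              # the matrix unit E_11, an idempotent
assert mmul(e, e) == e

Re = {mmul(r, e) for r in R}      # the left ideal Re: zero second column
# Criterion: Re is minimal iff e ∈ Ra for every nonzero a ∈ Re.
for a in Re:
    if a != ZERO:
        assert any(mmul(r, a) == e for r in R)
print("Re is a minimal left ideal of M_2(F_2), with e an idempotent")
```

The same enumeration with eR (right multiples of e) confirms the statement of Proposition 3.1.7 below in this tiny example.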

Proposition 3.1.7. Let R be a semiprime ring and let e be an idempotent in R. Then Re is a minimal left ideal if and only if eR is a minimal right ideal.

Proof. Suppose that Re is a minimal left ideal of R. To prove that eR is a minimal right ideal of R it suffices to show that e ∈ aR for any nonzero a ∈ eR (by Lemma 1.1.6). Now, if a ∈ eR then a = et = e^2 t = ea (for some t ∈ R), and thus aR = eaR. Since R is semiprime it must be nondegenerate, and so ea ≠ 0 implies that eaRea ≠ 0. Thus easea ≠ 0 for some s ∈ R. Let φ : Re → Re be the R-homomorphism defined by φ(x) = xase. Noting that e = e^2 ∈ Re, we have φ(e) = ease ≠ 0, and so Im(φ) ≠ 0. Thus, since Re is a minimal left ideal we have Im(φ) = Re. Similarly, φ(e) ≠ 0 implies that ker(φ) ≠ Re and so ker(φ) = 0. Thus φ is an isomorphism of left R-modules. Therefore e = φ^{−1}φ(e) = φ^{−1}(ease) = ea·φ^{−1}(se) ∈ eaR = aR, and so eR is a minimal right ideal of R. A similar argument shows the converse.
Finally, we have this useful result from [J2, Theorem 4.3.1].

Proposition 3.1.8. Let R be a ring. If R is semiprime, then socl (R) = socr (R).
Proof. Since R is semiprime, Corollary 3.1.6 tells us that the left socle of R is the sum of minimal left ideals of the form Re, where e is an idempotent in R. Furthermore, by Proposition 3.1.7 we know that Re is a minimal left ideal if and only if eR is a minimal right ideal. Thus, if socl (R) = ∑_i Re_i, then ∑_i e_i R ⊆ socr (R). Therefore each e_i ∈ e_i R ⊆ socr (R) and so, since socr (R) is a two-sided ideal, socl (R) = ∑_i Re_i ⊆ socr (R). Using a similar argument, we also have that socr (R) ⊆ socl (R), and so socr (R) = socl (R).
3.2 The Socle of a Leavitt Path Algebra
In this section we show that the socle of a Leavitt path algebra LK (E) is closely
related to the line points of the associated graph E. Indeed, in Theorem 3.2.11 we
show that for any graph E we have soc(LK (E)) = I(Pl (E)), the ideal generated by

the line points of E. We begin with the following proposition, shown in [AMMS2,
Proposition 3.4].

Proposition 3.2.1. For an arbitrary graph E, the Leavitt path algebra LK (E) is
semiprime.

Proof. Suppose that LK (E) is not semiprime, so that there exists a nonzero ideal I such that I^2 = 0. Take a nonzero x ∈ I. By Proposition 2.2.11, there exist y, z ∈ LK (E) such that either yxz = kv for some nonzero k ∈ K and some v ∈ E^0, or yxz = ∑_{i=−m}^{n} k_i c^i for some k_i ∈ K (not all zero) and some c ∈ E^*, where c is a cycle without exits in E. Now I cannot contain a vertex v, since v = v^2 ∈ I^2 = 0, a contradiction. So we must have ∑_{i=−m}^{n} k_i c^i ∈ I. Let p = ∑_{i=−m}^{n} k_i c^i and let k be the (nonzero) coefficient of the term of maximum degree in p. Since p^2 = 0, we have k^2 = 0 and so k = 0, a contradiction. Thus LK (E) must be semiprime.
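Proposition 3.1.3 lets us read semiprimeness as nondegeneracy, which is easy to test in a finite model. The sketch below (ours, not from the thesis) checks it by brute force for M_2(F_2), the matrix model of the two-vertex line graph:

```python
import itertools

def mmul(a, b):
    """Multiply 2x2 matrices over F_2."""
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2)) % 2
                       for j in range(2)) for i in range(2))

M2 = [((w, x), (y, z))
      for w, x, y, z in itertools.product(range(2), repeat=4)]
ZERO = ((0, 0), (0, 0))

# Nondegeneracy: every nonzero a admits some r with a·r·a != 0.
for a in M2:
    if a != ZERO:
        assert any(mmul(mmul(a, r), a) != ZERO for r in M2)
print("M_2(F_2) is nondegenerate, hence semiprime by Proposition 3.1.3")
```

For example, for the square-zero element a = E_12 the witness r = E_21 gives a·r·a = E_12 ≠ 0.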

Proposition 3.1.8 and Proposition 3.2.1 lead immediately to the following corollary.

Corollary 3.2.2. Let E be an arbitrary graph. Then socl (LK (E)) = socr (LK (E)).

In light of this result, we will drop the terms ‘left’ and ‘right’ and simply refer
to the ‘socle’ of a Leavitt path algebra LK (E), which we denote by soc(LK (E)).

Recall that a vertex is a bifurcation if it emits two or more edges, and that a
vertex v is a line point if there are no bifurcations or cycles based at any vertex
w ∈ T (v). We say that a path p contains a bifurcation if the set p^0 \ {r(p)}
contains a bifurcation. The following related lemma is from [AMMS1, Lemma 2.2],
and though it is given there in a row-finite context, the proof remains valid for the
arbitrary case.

Lemma 3.2.3. Let E be an arbitrary graph and let u, v be in E^0, with v ∈ T (u). If there is only one path joining u and v and it contains no bifurcations, then LK (E)u ≅ LK (E)v as left LK (E)-modules.

Proof. Let p be the unique path for which s(p) = u and r(p) = v. By Lemma 2.1.10 we have that p^*p = v. Furthermore, since p contains no bifurcations, for each edge e_i in p we have s(e_i) = e_i e_i^* (by the (CK2) relation). Using the same logic as in the proof of Proposition 2.2.11, page 59, this gives pp^* = u.

Define a map φ_p : LK (E)u → LK (E)v by φ_p(x) = xp. Similarly, define a map φ_{p^*} : LK (E)v → LK (E)u by φ_{p^*}(y) = yp^*. These maps are easily seen to be left LK (E)-module homomorphisms. Furthermore, we have φ_{p^*}φ_p(x) = xpp^* = xu = x and φ_p φ_{p^*}(y) = yp^*p = yv = y. Thus φ_p and φ_{p^*} are mutual inverses, and so LK (E)u ≅ LK (E)v as left LK (E)-modules, as required.
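For a concrete model of this proof (our own check, not from the thesis), take the two-vertex line graph u →e v, where the Leavitt path algebra is the 2 × 2 matrix algebra with vertices and edges sent to matrix units; the path p = e then satisfies exactly the identities p^*p = v and pp^* = u used above:

```python
import numpy as np

# Line graph with two vertices u --e--> v.  Under the matrix-unit model
# one standard choice is u -> E_11, v -> E_22, e -> E_12, e* -> E_21.
u  = np.array([[1, 0], [0, 0]])
v  = np.array([[0, 0], [0, 1]])
e  = np.array([[0, 1], [0, 0]])
es = np.array([[0, 0], [1, 0]])   # e*

# (CK1): e*e = r(e) = v
assert (es @ e == v).all()
# (CK2): u = e e*, since u is regular with s^{-1}(u) = {e}
assert (e @ es == u).all()
# Hence right multiplication by p = e and by p* are mutually inverse
# between L_K(E)u and L_K(E)v, as in the proof above.
assert (e @ es @ e == e).all()
print("CK relations verified in the matrix-unit model")
```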

We now embark on a series of results concerning left ideals and minimal left ideals
of a Leavitt path algebra LK (E), building towards our main result in Theorem 3.2.11.
The following proposition is from [AMMS1, Proposition 2.3], and though it is given
in a row-finite context, it is easily adapted to the arbitrary case by requiring that u
is a regular vertex rather than simply ‘not a sink’.

Proposition 3.2.4. Let E be an arbitrary graph and u ∈ E^0 be a regular vertex with s^{−1}(u) = {f_1, . . . , f_n}. Then LK (E)u = ⊕_{i=1}^{n} LK (E)f_i f_i^*. Furthermore, if r(f_i) ≠ r(f_j) for i ≠ j and we let v_i = r(f_i), then LK (E)u ≅ ⊕_{i=1}^{n} LK (E)v_i.

Proof. By the (CK2) relation, we know that u = ∑_{i=1}^{n} f_i f_i^*, and so LK (E)u = ∑_{i=1}^{n} LK (E)f_i f_i^*. To show that this sum is direct, note that the f_i f_i^* are orthogonal idempotents by the (CK1) relation: (f_i f_i^*)(f_i f_i^*) = f_i(f_i^* f_i)f_i^* = f_i r(f_i)f_i^* = f_i f_i^*, while (f_i f_i^*)(f_j f_j^*) = f_i(f_i^* f_j)f_j^* = 0 for i ≠ j. Thus, if x_i f_i f_i^* = ∑_{j=1, j≠i}^{n} x_j f_j f_j^* for some x_i, x_j ∈ LK (E), multiplication on the right by f_i f_i^* gives x_i f_i f_i^* = 0, and so the sum is direct.

To prove the second assertion, we define a map φ : LK (E)u → ⊕_{i=1}^{n} LK (E)v_i by φ(x) = ∑_i xf_i. It is clear that this map is a left LK (E)-module homomorphism. Now suppose that φ(x) = ∑_{i=1}^{n} xf_i = 0 for some x ∈ LK (E)u. This gives 0 = (∑_{i=1}^{n} xf_i) r(f_j) = xf_j for each j ∈ {1, . . . , n} (since r(f_i) ≠ r(f_j) for i ≠ j), and so x = xu = ∑_{j=1}^{n} xf_j f_j^* = 0. Thus ker(φ) = {0} and so φ is a monomorphism. Now consider an arbitrary element y = ∑_{i=1}^{n} y_i ∈ ⊕_{i=1}^{n} LK (E)v_i. Then ∑_{i=1}^{n} y_i f_i^* ∈ LK (E)u and

φ(∑_{i=1}^{n} y_i f_i^*) = ∑_{i=1}^{n} φ(y_i f_i^*) = ∑_{i=1}^{n} ∑_{j=1}^{n} y_i f_i^* f_j = ∑_{i=1}^{n} (y_i f_i^* f_i) = y,

since y_i f_i^* f_i = y_i v_i = y_i for each i ∈ {1, . . . , n}. Thus φ is an epimorphism, completing the proof.

The following proposition from [AMMS2, Lemma 4.3] considers the case in which
u is an infinite emitter.

Proposition 3.2.5. Let E be an arbitrary graph and let u ∈ E^0 be an infinite emitter with s^{−1}(u) = {f_i}_{i∈I} (where I is an infinite index set). Then ⊕_{i∈I} LK (E)f_i f_i^* ⊂ LK (E)u.

Proof. First, note that the sum ∑_{i∈I} LK (E)f_i f_i^* is direct since {f_i f_i^*}_{i∈I} is a set of mutually orthogonal idempotents (by the (CK1) relation). Now, since r(f_i^*) = u for each i, we have the inclusion ⊕_{i∈I} LK (E)f_i f_i^* ⊆ LK (E)u. Suppose the converse containment holds, so that u ∈ ⊕_{i∈I} LK (E)f_i f_i^* (since u = u^2 ∈ LK (E)u). Then u = ∑_j x_j f_j f_j^*, where {f_j} is a finite subset of s^{−1}(u) and each x_j ∈ LK (E). Since u is an infinite emitter, there exists a g ∈ s^{−1}(u) such that g ≠ f_j for each j. Thus g = ug = ∑_j x_j f_j f_j^* g = 0 by the (CK1) relation, a contradiction. Thus ⊕_{i∈I} LK (E)f_i f_i^* is properly contained in LK (E)u, as required.

The previous two results lead to the following corollary.

Corollary 3.2.6. Let E be an arbitrary graph and let u ∈ E^0. If T (u) contains a bifurcation then LK (E)u is not a minimal left ideal.

Proof. Let v ∈ T (u) be a bifurcation, and let p be a path from u to v. Let v_0 be the first bifurcation occurring in p, so that there are no bifurcations between u and v_0. Whether v_0 is a regular vertex or an infinite emitter, Proposition 3.2.4 and Proposition 3.2.5 give that LK (E)v_0 is not a minimal left ideal, since for any f_i ∈ s^{−1}(v_0) we have that LK (E)f_i f_i^* is a left ideal properly contained in LK (E)v_0. By Lemma 3.2.3, we have LK (E)u ≅ LK (E)v_0 as left LK (E)-modules, and thus LK (E)u is not a minimal left ideal.

From Corollary 3.2.6 we can begin to see a relationship forming between minimal left ideals and line points. The following proposition from [AMMS1, Corollary 2.4] reinforces this notion. Though their proof is given in a row-finite setting, it holds for arbitrary graphs as well.

Proposition 3.2.7. Let E be an arbitrary graph and let u ∈ E^0. If there is a closed path based at u, then LK (E)u is not a minimal left ideal.

Proof. Let µ be a closed path based at u and suppose that LK (E)u is a minimal left ideal. By Corollary 3.2.6, there cannot be a bifurcation at any vertex in T (u). In particular, µ cannot contain any bifurcations and so must be a cycle without exits. Consider the left ideal LK (E)(u + µ). This ideal is nonzero, since u + µ = u(u + µ) ∈ LK (E)(u + µ). Furthermore, it is contained in LK (E)u since r(µ) = u. Thus, by the minimality of LK (E)u, we have LK (E)(u + µ) = LK (E)u. Specifically, we have u ∈ LK (E)(u + µ).

Thus we can write u = ∑_{i=1}^{n} k_i α_i (u + µ), where the α_i are monomials in LK (E) and k_i ∈ K. Using a similar argument to the one found in the proof of Proposition 2.2.11, each α_i must begin and end in u and is therefore either a power of µ or µ^* (since µ is a cycle without exits). Thus we can write u = P(µ, µ^*)(u + µ), where P is a polynomial with coefficients in K; that is,

P(µ, µ^*) = l_{−m}(µ^*)^m + · · · + l_0 u + · · · + l_n µ^n,

where each l_i ∈ K and m, n ∈ N. Using the same argument found in the proof of Theorem 2.3.1, we can deduce that l_{−i} = 0 = l_i for all i > 0. Thus u = l_0 u(u + µ) = l_0 (u + µ), which is impossible, and so LK (E)u cannot be minimal.

The following proposition was first given in [AMMS1, Theorem 2.9] and then
generalised to the arbitrary case in [AMMS2, Theorem 4.12]. However, a far simpler
proof is given in [ARM1, Proposition 1.9], and it is this proof that we present below.

Proposition 3.2.8. Let E be an arbitrary graph and let v ∈ E^0. Then LK (E)v is minimal if and only if v ∈ Pl (E).

Proof. Suppose that v is a line point in E. We begin by showing that every nonzero LK (E)-endomorphism of LK (E)v is an automorphism. By Lemma 1.2.2 we have that End(LK (E)v) ≅ (vLK (E)v)^{Op}. Take an arbitrary element x ∈ (vLK (E)v)^{Op}. Then x = v(∑_{i=1}^{n} k_i p_i q_i^*)v = ∑_{i=1}^{n} k_i (vp_i q_i^* v), where each p_i, q_i ∈ E^* and n ∈ N. If vp_i q_i^* v ≠ 0 for some i ∈ {1, . . . , n}, then s(p_i) = s(q_i) = v and r(p_i) = r(q_i). Thus p_i and q_i are both paths from v to r(p_i). Since v is a line point there can only be one such path and so p_i = q_i. Furthermore, since p_i contains no bifurcations we have vp_i q_i^* v = v (see the proof of Lemma 3.2.3). Thus x = (∑ k_i)v and so End(LK (E)v) ≅ (vLK (E)v)^{Op} = Kv. Since Kv is a field with identity element v, every nonzero element of End(LK (E)v) is invertible and thus is an automorphism.

Now let a be an arbitrary nonzero element in LK (E)v. Since LK (E) has local units, LK (E)a ≠ 0. Furthermore, since LK (E) is semiprime we have (LK (E)a)^2 ≠ 0, and so there exist b, c ∈ LK (E) such that (ca)(ba) ≠ 0. Define φ : LK (E)v → LK (E)v by φ(x) = x(ba). Then φ(a) = aba ≠ 0 and so φ is a nonzero endomorphism, and therefore an automorphism. Thus, since v ∈ LK (E)v, we must have v = d(ba) for some d ∈ LK (E). Therefore v ∈ LK (E)a and so, by Lemma 1.1.6, LK (E)v is minimal.

Conversely, suppose that LK (E)v is minimal. Suppose by way of contradiction that T (v) contains vertices with bifurcations, and choose a bifurcation vertex u ∈ T (v) such that the path p connecting u and v is of the shortest length possible. Since p contains no bifurcations, by Lemma 3.2.3 we have LK (E)u ≅ LK (E)v, and so LK (E)u is minimal. By Proposition 3.2.7, there cannot be a cycle based at u. Let e be an edge in E^1 with s(e) = u. We claim that LK (E)u = LK (E)ee^* ⊕ C, where C = {x − xee^* : x ∈ LK (E)u}. To show this, first take y ∈ LK (E)u. Then y = yee^* + y − yee^* ∈ LK (E)ee^* + C. Now take z ∈ LK (E)ee^* + C. Then, for some r, s ∈ LK (E), z = ree^* + su − suee^* = ree^*u + su − suee^*u ∈ LK (E)u, and so LK (E)u = LK (E)ee^* + C. To show the sum is direct, suppose that z ∈ LK (E)ee^* ∩ C, so that z = t_1 ee^* = t_2 u − t_2 uee^* for some t_1, t_2 ∈ LK (E). Then t_2 u = t_1 ee^* + t_2 uee^*, and so multiplying on the right by e gives t_2 ue = t_1 e + t_2 ue. Thus t_1 e = 0 and therefore z = 0, showing the sum is direct.

Suppose that C = 0. Then, taking x = u in the definition of C, we must have u − ee^* = 0. Now, since u is a bifurcation, there must exist an edge f ∈ E^1 such that s(f) = u but e ≠ f. Thus f = uf = ee^*f = 0, which is absurd. Therefore C is nonzero, and thus LK (E)u is not minimal, a contradiction. Thus v must be a line point.

Proposition 3.2.8 leads to the following lemma from [AMMS1, Proposition 4.1].

Lemma 3.2.9. Let E be an arbitrary graph. Then

∑_{u∈Pl (E)} LK (E)u ⊆ soc(LK (E)).

The reverse containment does not hold in general.

Proof. By Proposition 3.2.8, we know that LK (E)u is a minimal left ideal for any vertex u ∈ Pl (E) and is therefore contained in the socle. To show that the converse containment is not true, we give the following counterexample. Let E be the graph consisting of vertices v, w, z and edges e : z → v and f : z → w.

By Lemma 2.2.9, LK (E) ≅ M2 (K) ⊕ M2 (K). By Lemma 1.1.10, M2 (K) is simple, and since M2 (K) is moreover semisimple we have soc(LK (E)) ≅ M2 (K) ⊕ M2 (K), so that LK (E) coincides with its socle. However, soc(LK (E)) = LK (E) ≠ ∑_{u∈Pl (E)} LK (E)u = LK (E)v + LK (E)w, since for instance z ∉ LK (E)v + LK (E)w. To see this, suppose that z = xv + yw for some x, y ∈ LK (E). Then z = z^2 = xvz + ywz = 0, a contradiction.

So far we have shown that any principal left ideal of LK (E) generated by a line
point u is contained in the socle of LK (E), but we have not quite given a precise
formulation of the socle. The following theorem, from [AMMS1, Theorem 3.4],
brings us one step closer to doing so. Though the original proof is given for the
row-finite case, it is easily generalised to the arbitrary case by applying the relevant
generalised results.

Theorem 3.2.10. Let E be an arbitrary graph and let x be an element of LK (E) such that LK (E)x is a minimal left ideal. Then there exists a vertex v ∈ Pl (E) such that LK (E)x ≅ LK (E)v as left LK (E)-modules.

Proof. Consider x ∈ LK (E). By Proposition 2.2.11 we have two cases; we show that the second case is not possible.

Suppose that there exist elements y, z ∈ LK (E) such that yxz is a nonzero element in

wLK (E)w = { ∑_{i=−m}^{n} k_i c^i : m, n ∈ N and k_i ∈ K },

where c is a cycle without exits in E based at a vertex w ∈ E^0. For ease of notation, let λ = yxz ∈ wLK (E)w. Since LK (E)yx is a nonzero left ideal contained in LK (E)x and LK (E)x is minimal, we must have LK (E)yx = LK (E)x. Furthermore, we can define a map φ_z : LK (E)x → LK (E)xz by φ_z(a) = az for all a ∈ LK (E)x. Clearly φ_z is a nonzero epimorphism. Also, since LK (E)x is minimal and ker(φ_z) ≠ LK (E)x (since 0 ≠ yxz ∈ Im(φ_z)), we have ker(φ_z) = {0} and so φ_z is a monomorphism. Therefore LK (E)x ≅ LK (E)xz = LK (E)yxz = LK (E)λ, and so LK (E)λ is a minimal left ideal of LK (E).

We now show that (wLK (E)w)λ is a minimal left ideal in the subring wLK (E)w. By Lemma 1.1.6 it suffices to show that, for any nonzero a ∈ (wLK (E)w)λ, we have λ ∈ (wLK (E)w)a. Since a ∈ LK (E)λ and LK (E)λ is minimal in LK (E), we have LK (E)a = LK (E)λ, and so λ ∈ LK (E)a. Therefore λ = wλ ∈ wLK (E)a = (wLK (E)w)a, as required.

It is straightforward to see that the function φ : wLK (E)w → K[t, t^{−1}] given by φ(w) = 1, φ(c) = t and φ(c^*) = t^{−1} (and expanded linearly) is an isomorphism. This implies that φ((wLK (E)w)λ) is minimal in K[t, t^{−1}]. However, K[t, t^{−1}] has no minimal left ideals. To see this, suppose that f(t) = ∑_{i=k}^{l} a_i t^i and g(t) = ∑_{j=m}^{n} b_j t^j are two nonzero elements of R = K[t, t^{−1}]. Without loss of generality, we can suppose that a_k ≠ 0 and b_m ≠ 0, so that f(t)g(t) = a_k b_m t^{k+m} + higher powers ≠ 0. Thus R is an integral domain. Now suppose that R contains a minimal left ideal I and let x be a nonzero element of I. Since x^2 ∈ I and I is minimal, I = Rx^2, and so x = yx^2 for some y ∈ R. Since R is an integral domain, this gives 1 = yx ∈ I and so I = R. Thus R is a field. However, this is a contradiction, since it is easy to see that not all elements in R have an inverse (for example, 1 + t). Thus K[t, t^{−1}] has no minimal left ideals, and so the second case of Proposition 2.2.11 is not possible, as claimed.

Therefore we must be in the first case of Proposition 2.2.11, and so there exist elements y, z ∈ LK (E) such that yxz = kv ≠ 0, for some v ∈ E^0 and k ∈ K. Now LK (E)v = LK (E)k^{−1}kv ⊆ LK (E)kv and so LK (E)v = LK (E)kv. Using the same argument as in the second paragraph of the proof, we have LK (E)x ≅ LK (E)yxz = LK (E)kv and so LK (E)x ≅ LK (E)v as left LK (E)-modules, as required. Finally, since LK (E)v is therefore minimal, by Proposition 3.2.8 we have v ∈ Pl (E).

Now we come to the main result of this section, where we describe precisely the
structure of the socle of a Leavitt path algebra.

Theorem 3.2.11. Let E be an arbitrary graph. Then soc(LK (E)) = I(Pl (E)) =
I(H), where H is the hereditary saturated closure of Pl (E).

Proof. First, we show that soc(LK (E)) ⊆ I(Pl (E)). Let I be a minimal left ideal of LK (E). Since LK (E) is semiprime, by Corollary 3.1.6 there exists an idempotent α ∈ LK (E) such that I = LK (E)α. Furthermore, by Theorem 3.2.10 we have LK (E)α ≅ LK (E)u for some u ∈ Pl (E). Thus there exists a left LK (E)-module isomorphism φ : LK (E)α → LK (E)u and we can find elements x, y ∈ LK (E) such that φ(α) = xu and φ^{−1}(u) = yα, giving

α = φ^{−1}φ(α) = φ^{−1}(xu) = xu·φ^{−1}(u) = xuyα.

Thus α = x(u)yα ∈ I(Pl (E)), and so I = LK (E)α ⊆ I(Pl (E)) and therefore soc(LK (E)) ⊆ I(Pl (E)).

For the converse containment, take a vertex v ∈ Pl (E). By Lemma 3.2.9, we have LK (E)v ⊆ soc(LK (E)) and so, since soc(LK (E)) is a two-sided ideal, LK (E)vLK (E) ⊆ soc(LK (E)). Since this is true for all v ∈ Pl (E), we have LK (E)Pl (E)LK (E) = I(Pl (E)) ⊆ soc(LK (E)), and so soc(LK (E)) = I(Pl (E)). Finally, Lemma 2.2.2 gives I(Pl (E)) = I(H), where H is the hereditary saturated closure of Pl (E).

Theorem 3.2.11 leads immediately to the following useful corollary.

Corollary 3.2.12. For an arbitrary graph E, the Leavitt path algebra LK (E) has nonzero socle if and only if Pl (E) ≠ ∅.

Example 3.2.13. We now use Theorem 3.2.11 to compute the socle of some familiar Leavitt path algebras.

(i) The finite line graph Mn. Every vertex in Mn is a line point, and so by Theorem 3.2.11 we have soc(LK (Mn)) = I(Pl (Mn)) = I((Mn)^0) = LK (Mn). Thus, since LK (Mn) ≅ Mn(K), we also have that soc(Mn(K)) = Mn(K) for all n ∈ N.

(ii) The rose with n leaves Rn. The graph Rn contains a single vertex v that is the base of n cycles; in particular, v is not a line point. Thus Pl (Rn) = ∅ and so soc(LK (Rn)) = 0. Thus, since LK (Rn) ≅ L(1, n), we also have that soc(L(1, n)) = 0 for all n ∈ N.

(iii) The infinite clock graph C∞. In this case, the line points of C∞ are the radial vertices v_i, so that Pl (C∞) = {v_i}_{i=1}^{∞}. Thus we have soc(LK (C∞)) = I({v_i}_{i=1}^{∞}). Recall from Example 2.1.7 the isomorphism φ : LK (C∞) → ⊕_{i=1}^{∞} M2 (K) ⊕ KI22 that maps each vertex v_i to (E_11)_i, the element of ⊕_{i=1}^{∞} M2 (K) with E_11 in the ith component and zeros elsewhere. Thus soc(⊕_{i=1}^{∞} M2 (K) ⊕ KI22) is the two-sided ideal generated by the set {(E_11)_i}_{i=1}^{∞}. Note that this ideal contains any matrix unit (E_mn)_j, since (E_mn)_j = (E_m1)_j (E_11)_j (E_1n)_j, and since such matrix units generate ⊕_{i=1}^{∞} M2 (K) we have

soc(⊕_{i=1}^{∞} M2 (K) ⊕ KI22) = ⊕_{i=1}^{∞} M2 (K).
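For finite graphs the set of line points, and hence by Theorem 3.2.11 the generators of the socle, can be computed mechanically. The following Python sketch (ours; the graph representation and function names are our own) checks examples (i) and (ii) above, together with a finite truncation of the clock graph:

```python
def line_points(vertices, edges):
    """Line points of a finite graph; edges is a list of (source, range)
    pairs.  A vertex v is a line point if no vertex of its tree T(v)
    is a bifurcation or the base of a closed path."""
    out = {v: [r for s, r in edges if s == v] for v in vertices}

    def tree(v):                      # all vertices reachable from v
        seen, stack = set(), [v]
        while stack:
            w = stack.pop()
            if w not in seen:
                seen.add(w)
                stack.extend(out[w])
        return seen

    def on_cycle(w):                  # is w the base of a closed path?
        return any(w in tree(u) for u in out[w])

    return {v for v in vertices
            if all(len(out[w]) <= 1 and not on_cycle(w) for w in tree(v))}

# (i) the line graph M_3: every vertex is a line point
assert line_points({'v1', 'v2', 'v3'},
                   [('v1', 'v2'), ('v2', 'v3')]) == {'v1', 'v2', 'v3'}
# (ii) the rose R_2: the unique vertex sits on cycles, so Pl(R_2) is empty
assert line_points({'v'}, [('v', 'v'), ('v', 'v')]) == set()
# a finite truncation of the clock graph: the radial vertices are the
# line points, while the centre is a bifurcation
assert line_points({'u', 'v1', 'v2', 'v3'},
                   [('u', 'v1'), ('u', 'v2'), ('u', 'v3')]) == {'v1', 'v2', 'v3'}
print("Pl(E) computed; soc(L_K(E)) = I(Pl(E)) by Theorem 3.2.11")
```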

3.3 Quotient Graphs and Graded Ideals
In Section 3.4 we will examine the socle series of a Leavitt path algebra, a concept
that naturally extends the socle. In doing so we will need to consider quotient rings
of the form LK (E)/I, where I is a graded ideal of LK (E). Thus, in this section we
will examine some properties of graded ideals I of LK (E) and quotient rings of the
form LK (E)/I. In particular, we show in Theorem 3.3.8 that, for any graded ideal I
of LK (E), LK (E)/I is isomorphic to the Leavitt path algebra of a ‘quotient graph’
of E, a concept we define below.

Many of the results in this section are thanks to Tomforde, whose paper [To]
gives many valuable results regarding the ideal structure of a Leavitt path algebra.
We begin with the following definitions.

Definition 3.3.1. Let E be a graph and let H be a hereditary saturated subset of E^0. The set of breaking vertices of H, denoted BH, is defined to be the set

BH = {v ∈ E^0 \ H : v is an infinite emitter and 0 < |s^{−1}(v) ∩ r^{−1}(E^0 \ H)| < ∞}.

In other words, a breaking vertex is an infinite emitter that emits an infinite number of edges into H, while emitting only a finite number of edges into the rest of the graph. Note that if E is row-finite then BH is always empty.

Furthermore, we say that (H, S) is an admissible pair of E if H is a hereditary saturated subset of E^0 and S ⊆ BH.

We use these definitions to define the quotient graph E\(H, S).

Definition 3.3.2. Let E be an arbitrary graph and let (H, S) be an admissible pair of E. The quotient graph E\(H, S) is defined as follows. Let BH′ be a set of duplicates of BH, and write BH′ = {v′ : v ∈ BH}. Let S′ = {v′ ∈ BH′ : v ∈ S}. We define

(E\(H, S))^0 = (E^0 \ H) ∪ (BH′ \ S′) and

(E\(H, S))^1 = {e ∈ E^1 : r(e) ∉ H} ∪ {e′ : e ∈ E^1 with r(e) ∈ BH \ S}.

Furthermore, the source and range functions s_{E\(H,S)} and r_{E\(H,S)} coincide with s_E and r_E when applied to {e ∈ E^1 : r(e) ∉ H}, while we define s_{E\(H,S)}(e′) = s_E(e) and r_{E\(H,S)}(e′) = (r_E(e))′. If S = ∅, we often write E\(H, S) as simply E|H.

Thus, to form the quotient graph E\(H, S) we first remove all vertices u ∈ H and all edges e ∈ E^1 with r(e) ∈ H. Then, for each breaking vertex v ∈ BH \ S, we add a new vertex v′ to the graph. Furthermore, for each edge e with r(e) = v, we add a new edge e′ to the graph, running from s(e) to v′. Note that this construction implies that every v′ ∈ BH′ \ S′ is a sink.
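Definition 3.3.2 is entirely combinatorial and can be implemented directly for finite data. The sketch below is our own: BH is passed in explicitly (recognising infinite emitters requires infinite data), and only one representative of each infinite edge family is listed, which is harmless here because those edges have range in H and are deleted anyway. It reproduces the first quotient of Example 3.3.3 below:

```python
def quotient_graph(E0, E1, s, r, H, BH, S):
    """Construct the quotient graph E\\(H, S) of Definition 3.3.2.
    E0: set of vertices; E1: set of edge names; s, r: dicts giving the
    source and range of each edge; H hereditary saturated; S a subset
    of the breaking vertices BH.  Duplicates carry a trailing prime."""
    Q0 = (E0 - H) | {v + "'" for v in BH - S}
    kept = {e for e in E1 if r[e] not in H}          # edges with r(e) not in H
    dup = {e for e in E1 if r[e] in BH - S}          # edges to duplicate
    Q1 = kept | {e + "'" for e in dup}
    qs = {e: s[e] for e in kept}
    qr = {e: r[e] for e in kept}
    for e in dup:
        qs[e + "'"] = s[e]           # s(e') = s(e)
        qr[e + "'"] = r[e] + "'"     # r(e') = (r(e))'
    return Q0, Q1, qs, qr

# Shape of Example 3.3.3: e2 : v1 -> v2, e1 : v2 -> v1, and (infinitely
# many) edges from v1, v2 into H = {u1, u2, u3}; g1, g2 stand in for one
# representative of each infinite family, h1, h2 are the edges out of u2.
E0 = {"v1", "v2", "u1", "u2", "u3"}
E1 = {"e1", "e2", "g1", "g2", "h1", "h2"}
s = {"e1": "v2", "e2": "v1", "g1": "v1", "g2": "v2", "h1": "u2", "h2": "u2"}
r = {"e1": "v1", "e2": "v2", "g1": "u1", "g2": "u2", "h1": "u1", "h2": "u3"}
H, BH = {"u1", "u2", "u3"}, {"v1", "v2"}

Q0, Q1, qs, qr = quotient_graph(E0, E1, s, r, H, BH, S={"v2"})
assert Q0 == {"v1", "v2", "v1'"}
assert Q1 == {"e1", "e2", "e1'"}     # e1 is duplicated since r(e1) = v1
assert qs["e1'"] == "v2" and qr["e1'"] == "v1'"
print("quotient graph E\\(H, {v2}) computed")
```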

Example 3.3.3. Consider the graph E with vertices v1, v2, u1, u2 and u3, with edges e2 : v1 → v2 and e1 : v2 → v1, with infinitely many edges (denoted (∞)) from each of v1 and v2 into the set {u1, u2, u3}, and with edges u2 → u1 and u2 → u3.

Let H = {u1, u2, u3}. Then H is clearly a hereditary saturated subset of E^0. Furthermore, both v1 and v2 emit an infinite number of edges into H and a single edge into E^0 \ H. Thus BH = {v1, v2}.

If we choose S = {v2}, then the quotient graph E\(H, S) has vertices v1, v2 and v1′, and edges e2 : v1 → v2, e1 : v2 → v1 and e1′ : v2 → v1′.

Furthermore, if we choose S = ∅ then E\(H, S) = E|H has vertices v1, v2, v1′ and v2′, and edges e2 : v1 → v2, e1 : v2 → v1, e1′ : v2 → v1′ and e2′ : v1 → v2′.

Definition 3.3.4. Let E be a graph and let H be a hereditary saturated subset of E^0. For any v ∈ BH, we define

v^H := v − ∑_{s(e)=v, r(e)∉H} ee^*.

Note that, by the definition of a breaking vertex, this sum must be finite and is therefore well-defined. Using the fact that e_i e_i^* e_j e_j^* = δ_{ij} e_i e_i^* (by the (CK1) relation), it is easy to see that v^H is an idempotent.
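Expanding the "easy to see" step (our own expansion):

```latex
\text{Write } q := \sum_{s(e)=v,\; r(e)\notin H} ee^*, \text{ so that } v^H = v - q. \text{ Then}\\
q^2 = \sum_{i,j} e_i e_i^* e_j e_j^* = \sum_i e_i e_i^* = q,
\qquad vq = q = qv,\\
(v^H)^2 = (v - q)^2 = v^2 - vq - qv + q^2 = v - q - q + q = v - q = v^H.
```

Here vq = q = qv holds because every edge e appearing in q satisfies s(e) = v, so ve = e and e^*v = e^*.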

Definition 3.3.5. Let E be an arbitrary graph. For any admissible pair (H, S) of E,
we denote by I(H,S) the two-sided ideal in LK (E) generated by the sets {u : u ∈ H}
and {v H : v ∈ S}. Note that if S is empty then I(H,S) = I(H).

The following proposition is from [To, Lemma 5.6] and describes the structure
of an ideal of the form I(H,S) . Here we have greatly expanded the proof for clarity.

Proposition 3.3.6. Let E be an arbitrary graph. For any admissible pair (H, S) of E we have

I(H,S) = span({αβ^* : r(α) = r(β) ∈ H} ∪ {αw^H β^* : r(α) = r(β) = w ∈ S}),

where each α, β ∈ E^*. Furthermore, I(H,S) is a graded ideal of LK (E).

Proof. Let J denote the right-hand side of the above equation. It is clear that every element in J is in the ideal generated by {v : v ∈ H} ∪ {w^H : w ∈ S}; that is, J ⊆ I(H,S). To show the converse containment, let x ∈ I(H,S), so that

x = ∑_i a_i v_i b_i + ∑_j c_j w_j^H d_j,

where each a_i, b_i, c_j, d_j ∈ LK (E), each v_i ∈ H, each w_j ∈ S and the sums are finite. By Lemma 2.1.8 we know that every element in LK (E) is of the form ∑_i k_i p_i q_i^*, where each p_i, q_i ∈ E^* and each k_i ∈ K. Thus, omitting the scalars k_i for ease of notation, we can write the above expression as

x = ∑_i (p_{1i} q_{1i}^*) v_i (p_{2i} q_{2i}^*) + ∑_j (p_{1j} q_{1j}^*) w_j^H (p_{2j} q_{2j}^*),

where each p, q ∈ E^*.

Take a nonzero term y = (p_1 q_1^*)v(p_2 q_2^*) from the first sum. Since y is nonzero, we must have s(q_1) = s(p_2) = v, and so y = p_1 q_1^* p_2 q_2^*. Since q_1^* p_2 ≠ 0, Lemma 2.1.10 tells us that either p_2 = q_1 γ or q_1 = p_2 τ for some paths γ, τ in E. For the former case, we have y = p_1 q_1^*(q_1 γ)q_2^* = p_1 γ q_2^*. Since s(p_2) = v and r(p_2) = r(γ), we have r(γ) ∈ T(v). Thus r(γ) ∈ H, by the hereditary nature of H. So, taking α = p_1 γ and β = q_2, we have y = αr(γ)β^* ∈ J. For the latter case, we have y = p_1 τ^* q_2^*, and a similar argument shows that again y ∈ J.

Now take a nonzero term z = (p_1 q_1^*) w^H (p_2 q_2^*) from the second sum above. Letting M = {e ∈ E^1 : s(e) = w, r(e) ∉ H}, we can write z = (p_1 q_1^*)(w − ∑_{e∈M} ee^*)(p_2 q_2^*) (from the definition of w^H). Again, since z is nonzero we must have s(q_1) = s(p_2) = w and so

z = p_1 q_1^* p_2 q_2^* − ∑_{e∈M} p_1 q_1^* ee^* p_2 q_2^*.

We now consider three different cases.

Case 1: l(q_1) = l(p_2) = 0. In this case, since z is nonzero we must have q_1 = p_2 = w, and so z = p_1 w q_2^* − ∑_{e∈M} p_1 ee^* q_2^*. Thus, letting α = p_1 and β = q_2 we have z = αw^H β^* ∈ J.
Case 2: l(q_1) = 0, l(p_2) > 0. Let f be the initial edge of p_2, so that p_2 = f p_2′. Thus z = p_1 f p_2′ q_2^* − ∑_{e∈M} p_1 ee^* f p_2′ q_2^*. If r(f) ∉ H then f ∈ M (since s(f) = w), and so using the fact that e^* f = 0 for all e ∈ M such that e ≠ f, we have

z = p_1 f p_2′ q_2^* − p_1 f f^* f p_2′ q_2^* = p_1 f p_2′ q_2^* − p_1 f p_2′ q_2^* = 0,

contradicting the fact that z is nonzero. Thus r(f) ∈ H and so f ∉ M. Therefore e^* f = 0 for all e ∈ M and so z = p_1 f p_2′ q_2^*. Since r(f) = s(p_2′) ∈ H, we have r(p_2′) ∈ H, by the hereditary nature of H. Thus, letting α = p_1 f p_2′ and β = q_2, we have z = αr(p_2′)β^* ∈ J. Using a similar argument, we can see that z ∈ J for the case that l(q_1) > 0 and l(p_2) = 0.
Case 3: l(q_1) > 0 and l(p_2) > 0. Let p_2 = f p_2′ and q_1 = g q_1′, where f, g ∈ E^1. If f ≠ g, then z = p_1(q_1′)^* g^* f p_2′ q_2^* − ∑_{e∈M} p_1(q_1′)^* g^* ee^* f p_2′ q_2^* = 0 (since g^* e = 0 and/or e^* f = 0 for all e ∈ M), a contradiction. Thus f = g and so z = p_1(q_1′)^* p_2′ q_2^* − ∑_{e∈M} p_1(q_1′)^* f^* ee^* f p_2′ q_2^*. As in Case 2, if r(f) ∉ H then z = p_1(q_1′)^* p_2′ q_2^* − p_1(q_1′)^* p_2′ q_2^* = 0, a contradiction. So r(f) ∈ H, f ∉ M and we have z = (p_1(q_1′)^*) r(f) (p_2′ q_2^*). Thus, using the same argument as in Case 2, we have z ∈ J.

Thus x ∈ J, and so I(H,S) ⊆ J, as required.

To see that I(H,S) is graded, note that each term αβ^*, where r(α) = r(β) ∈ H, is homogeneous of degree |α| − |β|. Furthermore, for any v ∈ S, v^H is by definition an element of degree 0, so again we have that each term αv^H β^* is homogeneous of degree |α| − |β|. Thus each element in I(H,S) can be expressed as the sum of homogeneous elements of the form αβ^* or αv^H β^*. Since each of these homogeneous elements is also in I(H,S), by definition, I(H,S) is therefore a graded ideal.

If H is a hereditary and saturated subset of E^0, then taking S = ∅ and applying Proposition 3.3.6 we get

I(H) = span({αβ∗ : α, β ∈ E^∗, r(α) = r(β) ∈ H}).

Thus Proposition 3.3.6 allows us to describe precisely the elements of an ideal I(H,S)
(or I(H)) in a relatively simple way. This will prove valuable in future results.
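This spanning-set description lends itself to direct computation on small graphs. The following Python sketch is our own illustration (the graph and names are hypothetical, not from the thesis): for a finite acyclic graph it enumerates every path α with r(α) ∈ H, and hence the spanning monomials αβ∗ of I(H).

```python
# A rough illustration (ours, not the thesis's): enumerate every path alpha
# with r(alpha) in H, and hence the spanning set
# {alpha beta* : r(alpha) = r(beta) in H} of the ideal I(H).
# Hypothetical graph: v1 --e--> v2 --f--> v3, with H = {v2, v3}.

edges = [("e", "v1", "v2"), ("f", "v2", "v3")]   # (name, source, range)
H = {"v2", "v3"}

def paths_ending_in(H, edges):
    """All paths (src, edge_names, rng) with rng in H; () is a trivial path."""
    paths = [(v, (), v) for v in H]
    frontier = list(paths)
    while frontier:                               # extend each path on the left
        new = [(s, (n,) + names, rng)
               for (src, names, rng) in frontier
               for (n, s, t) in edges if t == src]
        paths += new
        frontier = new
    return paths

alphas = paths_ending_in(H, edges)
# Each pair (alpha, beta) with a common range in H spans one element alpha beta*.
spanning = [(a, b) for a in alphas for b in alphas if a[2] == b[2]]
```

For this graph there are five such paths (the trivial paths at v2 and v3, together with e, f and ef), giving 2² + 3² = 13 spanning monomials αβ∗.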

If (H, S) is an admissible pair in E then, by definition, I(H,S) is generated by the set of vertices u ∈ H and the set of elements v^H for which v ∈ S. It is natural to ask whether I(H,S) also contains vertices that are not in H, or elements v^H for which v ∉ S. The following proposition, which has been adapted from the beginning of the proof of [To, Theorem 5.7], shows that this is in fact not possible.

Proposition 3.3.7. Let E be an arbitrary graph and let (H, S) be an admissible pair of E. Then I(H,S) ∩ E^0 = H and {v ∈ BH : v^H ∈ I(H,S)} = S.

Proof. We begin this proof by setting up a homomorphism between LK(E) and LK(E\(H, S)) that we will refer to again in later proofs, particularly the proof of Theorem 3.3.12. Define φ : LK(E) → LK(E\(H, S)) on the generators of LK(E) as follows:

φ(v) = v if v ∈ (E^0\H)\(BH\S);  φ(v) = v + v′ if v ∈ BH\S;  φ(v) = 0 if v ∈ H,

φ(e) = e if r(e) ∈ (E^0\H)\(BH\S);  φ(e) = e + e′ if r(e) ∈ BH\S;  φ(e) = 0 if r(e) ∈ H,

and

φ(e∗) = e∗ if r(e) ∈ (E^0\H)\(BH\S);  φ(e∗) = e∗ + (e′)∗ if r(e) ∈ BH\S;  φ(e∗) = 0 if r(e) ∈ H.

Extend φ linearly and multiplicatively. To begin, we must check that φ preserves the Leavitt path algebra relations on LK(E), a rather technical and tedious procedure. However, for the sake of completeness we will show this process in full for this particular proof. For ease of notation, we set T = (E^0\H)\(BH\S).

First, we check that the (A1) relation holds, i.e. that φ(vi)φ(vj) = δij φ(vi) for all vi, vj ∈ E^0. We must examine several different cases:
Case 1: vi, vj ∈ T. Then φ(vi)φ(vj) = vivj = δij vi = δij φ(vi).
Case 2: vi ∈ T, vj ∈ BH\S. Then φ(vi)φ(vj) = vi(vj + v′j) = δij vi = δij φ(vi) (we know that vi ≠ v′j since v′j ∉ LK(E)). A similar argument shows the relation holds for vi ∈ BH\S, vj ∈ T.
Case 3: vi, vj ∈ BH\S. Then φ(vi)φ(vj) = (vi + v′i)(vj + v′j) = vivj + v′iv′j = δij(vi + v′i) = δij φ(vi).
Case 4: Either vi or vj ∈ H. Then φ(vi)φ(vj) = 0 = δij φ(vi).

Next, we check that the (A2) relations hold. First, we check that φ(s(e))φ(e) = φ(e) for all e ∈ E^1.
Case 1: s(e), r(e) ∈ T. Then φ(s(e))φ(e) = s(e)e = e = φ(e).
Case 2: s(e) ∈ T, r(e) ∈ BH\S. Then φ(s(e))φ(e) = s(e)(e + e′) = e + e′ = φ(e), since s(e′) = s(e).
Case 3: s(e) ∈ BH\S, r(e) ∈ T. Then φ(s(e))φ(e) = (s(e) + s(e)′)e = s(e)e = e = φ(e) (we know that s(e)′e = 0 since every v′ ∈ B′H is a sink).
Case 4: s(e), r(e) ∈ BH\S. Then φ(s(e))φ(e) = (s(e) + s(e)′)(e + e′) = s(e)e + s(e)e′ = e + e′ = φ(e).
Case 5: s(e) ∈ H. Then, since H is hereditary, r(e) ∈ H and so φ(s(e))φ(e) = 0 = φ(e).
Case 6: s(e) ∈ E^0\H, r(e) ∈ H. Then φ(s(e))φ(e) = 0 = φ(e).

Next, we check that φ(e)φ(r(e)) = φ(e) for all e ∈ E^1.
Case 1: r(e) ∈ T. Then φ(e)φ(r(e)) = er(e) = e = φ(e).
Case 2: r(e) ∈ BH\S. Then φ(e)φ(r(e)) = (e + e′)(r(e) + r(e)′) = er(e) + e′r(e′) = e + e′ = φ(e).
Case 3: r(e) ∈ H. Then φ(e)φ(r(e)) = 0 = φ(e).
By very similar arguments, we can show that φ(r(e))φ(e∗) = φ(e∗) and that φ(e∗)φ(s(e)) = φ(e∗) for all e ∈ E^1.

Next we check that the (CK1) relation holds, i.e. that φ(e∗i)φ(ej) = δij φ(r(ei)) for all ei, ej ∈ E^1.
Case 1: r(ei), r(ej) ∈ T. Then φ(e∗i)φ(ej) = e∗iej = δij r(ei) = δij φ(r(ei)).
Case 2: r(ei) ∈ T, r(ej) ∈ BH\S. Then φ(e∗i)φ(ej) = e∗i(ej + e′j) = δij r(ei) = δij φ(r(ei)) (we know that ei ≠ e′j since e′j ∉ LK(E)). A similar argument shows that the relation holds for r(ei) ∈ BH\S, r(ej) ∈ T.
Case 3: r(ei), r(ej) ∈ BH\S. Then φ(e∗i)φ(ej) = (e∗i + (e′i)∗)(ej + e′j) = e∗iej + (e′i)∗e′j = δij(r(ei) + r(ei)′) = δij φ(r(ei)).
Case 4: Either r(ei) or r(ej) ∈ H. Then φ(e∗i)φ(ej) = 0 = δij φ(r(ei)).

Finally, we check that the (CK2) relation holds, i.e. that φ(v − Σ_{s_E(e)=v} ee∗) = 0 for all regular vertices v ∈ E^0. (Note that since v is regular, v is not a breaking vertex.)
If v ∈ H, then s(e) = v implies that r(e) ∈ H, since H is hereditary. So

φ(v − Σ_{s_E(e)=v} ee∗) = φ(v) − Σ_{s_E(e)=v} φ(e)φ(e∗) = 0.
Otherwise, we can assume that v ∈ T. Thus we have

φ(v − Σ_{s_E(e)=v} ee∗)
 = φ(v) − Σ_{s_E(e)=v, r(e)∈T} φ(ee∗) − Σ_{s_E(e)=v, r(e)∈BH\S} φ(ee∗) − Σ_{s_E(e)=v, r(e)∈H} φ(ee∗)
 = v − Σ_{s_E(e)=v, r(e)∈T} ee∗ − Σ_{s_E(e)=v, r(e)∈BH\S} (e + e′)(e∗ + (e′)∗)
 = v − Σ_{s_E(e)=v, r(e)∈T} ee∗ − Σ_{s_E(e)=v, r(e)∈BH\S} (ee∗ + e′(e′)∗)
 = v − Σ_{s_{E\(H,S)}(e)=v} ee∗
 = 0,

for the following reason. We know that v must emit at least one edge e with r(e) ∉ H, because otherwise the saturated property of H would imply that v ∈ H. Thus v is not a sink in E\(H, S). Furthermore, since v is not an infinite emitter in E, and since v emits only a finite number of new edges e′ in E\(H, S), v is not an infinite emitter in E\(H, S). Thus v is a regular vertex in E\(H, S) and so we are able to apply the (CK2) relation in the final step above. Thus φ preserves the Leavitt path algebra relations on E and is therefore a K-algebra homomorphism.

We now show that I(H,S) ⊆ ker(φ). By definition, I(H,S) is generated by the sets {v : v ∈ H} and {v^H : v ∈ S}, so it suffices to show that all such generating elements are mapped to 0 under φ. We know that φ(v) = 0 for all v ∈ H. Now consider an element v^H, where v ∈ S. Then, using the same argument as we did when checking the (CK2) relation, we have

φ(v^H) = φ(v − Σ_{s_E(e)=v, r(e)∉H} ee∗) = v − Σ_{s_E(e)=v, r(e)∈T} ee∗ − Σ_{s_E(e)=v, r(e)∈BH\S} (ee∗ + e′(e′)∗),

noting that φ(v) = v since v ∈ T = (E^0\H)\(BH\S). Note that since v ∈ S ⊆ BH, v must be a regular vertex in E\(H, S), by the definition of a breaking vertex. Furthermore, we have

Σ_{s_E(e)=v, r(e)∈T} ee∗ + Σ_{s_E(e)=v, r(e)∈BH\S} (ee∗ + e′(e′)∗) = Σ_{s_{E\(H,S)}(e)=v} ee∗,

and so by the (CK2) relation φ(v^H) = 0, as required. Thus I(H,S) ⊆ ker(φ).

Now suppose there exists w ∈ I(H,S) ∩ E^0 such that w ∉ H. Then either φ(w) = w or φ(w) = w + w′ (by the definition of φ), a contradiction since I(H,S) ⊆ ker(φ). Thus I(H,S) ∩ E^0 = H. Similarly, suppose there exists v ∈ BH\S such that v^H ∈ I(H,S). Then

φ(v^H) = φ(v − Σ_{s_E(e)=v, r(e)∉H} ee∗) = (v + v′) − Σ_{s_{E\(H,S)}(e)=v} ee∗ = v′

(following the same argument as above). Once again, this contradicts that I(H,S) ⊆ ker(φ), and so {v ∈ BH : v^H ∈ I(H,S)} = S, completing the proof.

Note that if we take S to be the empty set, the statement of Proposition 3.3.7
simplifies to I(H) ∩ E 0 = H for all hereditary saturated subsets H of E 0 .
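In the row-finite setting this quotient construction can be carried out mechanically. The sketch below is our own illustration (the graph is hypothetical, not from the thesis): with no breaking vertices, E\(H, ∅) = E|H is obtained simply by deleting H together with every edge whose range lies in H.

```python
# A sketch (ours): the quotient graph E|H = E\(H, ∅) in the row-finite case,
# obtained by deleting the vertices of H and every edge with range in H.
# Hypothetical graph: u -g-> v1, u -h-> w, v1 -e-> v2; H = {v1, v2} is
# hereditary (v1 only reaches v2) and saturated (u also emits h into w).

vertices = {"u", "w", "v1", "v2"}
edges = {"g": ("u", "v1"), "h": ("u", "w"), "e": ("v1", "v2")}
H = {"v1", "v2"}

def quotient_graph(vertices, edges, H):
    """Delete H and all edges into H; the sources of surviving edges lie
    outside H automatically, because H is hereditary."""
    qv = vertices - H
    qe = {n: (s, t) for n, (s, t) in edges.items() if t not in H}
    return qv, qe

qv, qe = quotient_graph(vertices, edges, H)
```

Here the quotient retains only u, w and the edge h, in line with the isomorphism LK(E)/I(H) ≅ LK(E|H) obtained from Theorem 3.3.8 with S = ∅.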

Now we come to perhaps the most important result of this section, which shows
that, for any admissible pair (H, S) of a graph E, the quotient ring LK (E)/I(H,S) is
in fact isomorphic to the Leavitt path algebra of the quotient graph E\(H, S). This
powerful result is from [To, Theorem 5.7(2)]. Here we have greatly expanded the
proof for clarity.

Theorem 3.3.8. Let E be an arbitrary graph and let (H, S) be an admissible pair of E. Then

LK(E)/I(H,S) ≅ LK(E\(H, S)).

Proof. Define ϕ : LK(E\(H, S)) → LK(E) on the generators of LK(E\(H, S)) as follows:

ϕ(v) = v if v ∈ (E^0\H)\(BH\S);  ϕ(v) = Σ_{s(e)=v, r(e)∉H} ee∗ if v ∈ BH\S;  ϕ(v′) = v^H if v′ ∈ B′H\S′;

ϕ(e) = e if r(e) ∈ (E^0\H)\(BH\S);  ϕ(e) = e ϕ(r(e)) if r(e) ∈ BH\S;  ϕ(e′) = e ϕ(r(e)′) for each new edge e′;

and

ϕ(e∗) = e∗ if r(e) ∈ (E^0\H)\(BH\S);  ϕ(e∗) = ϕ(r(e)) e∗ if r(e) ∈ BH\S;  ϕ((e′)∗) = ϕ(r(e)′) e∗ for each new edge e′.

Extend ϕ linearly and multiplicatively. Furthermore, define ϕ∗ : LK(E\(H, S)) → LK(E)/I(H,S) by ϕ∗(x) = ϕ(x) + I(H,S). It can be verified that ϕ, and therefore ϕ∗, preserves the Leavitt path algebra relations on LK(E\(H, S)). This is a straightforward but tedious process, with several subcases for each relation, so here we will just provide a sample calculation: we show that the (CK1) relation e∗iej = δij r(ei) is preserved in the case r(ei), r(ej) ∈ BH\S. In this case we have

ϕ(e∗i)ϕ(ej) = ϕ(r(ei)) e∗iej ϕ(r(ej))
 = δij ϕ(r(ei)) r(ei) ϕ(r(ei))
 = δij (Σ_{s(fi)=r(ei), r(fi)∉H} fif∗i) r(ei) (Σ_{s(fi)=r(ei), r(fi)∉H} fif∗i)
 = δij Σ_{s(fi)=r(ei), r(fi)∉H} fif∗i
 = δij ϕ(r(ei)),

as required. Checking that these relations are preserved ensures that ϕ∗ is indeed a K-homomorphism.

To show that ϕ∗ is a monomorphism, we will apply the Graded Uniqueness Theorem (Theorem 2.2.13). We know that I(H,S) is Z-graded (by Proposition 3.3.6), and so LK(E)/I(H,S) is Z-graded. Furthermore, ϕ (and therefore ϕ∗) is a graded homomorphism, since it takes generating elements to elements of equal degree. To show that ϕ∗(v) ≠ 0 for all v ∈ (E\(H, S))^0, it suffices to show that ϕ(v) ∉ I(H,S) for all v ∈ (E\(H, S))^0. Suppose that v ∈ (E^0\H)\(BH\S). Then ϕ(v) = v ∉ I(H,S), since I(H,S) ∩ E^0 = H and v ∉ H (by Proposition 3.3.7). Now suppose that v ∈ BH\S. Then ϕ(v) = Σ_{s(e)=v, r(e)∉H} ee∗. Suppose that ϕ(v) ∈ I(H,S) and choose a fixed edge f for which s(f) = v and r(f) ∉ H. Then f∗ϕ(v)f = Σ_{s(e)=v, r(e)∉H} f∗ee∗f = r(f) ∈ I(H,S), since I(H,S) is a two-sided ideal. However, this implies r(f) ∈ H, a contradiction, and so ϕ(v) ∉ I(H,S). Furthermore, if v′ ∈ B′H\S′ then ϕ(v′) = v^H ∉ I(H,S), since v^H ∈ I(H,S) implies v ∈ S (again by Proposition 3.3.7), a contradiction since v ∈ BH\S. Thus ϕ(v) ∉ I(H,S) for all v ∈ (E\(H, S))^0, so we can apply the Graded Uniqueness Theorem to obtain that ϕ∗ is indeed a monomorphism.

To show that ϕ∗ is an epimorphism, note that LK(E)/I(H,S) is generated by elements of the form αβ∗ + I(H,S), where α, β ∈ E^∗, r(α) = r(β) and αβ∗ ∉ I(H,S). Suppose |α| = |β| = 0, so that αβ∗ = v for some v ∈ E^0. Now if v ∈ H then v ∈ I(H,S), a contradiction. So v ∉ H. Now suppose |α| > 0. If α contains an edge e such that r(e) ∈ H, then r(α) ∈ H, since H is hereditary, and so αβ∗ ∈ I(H,S), a contradiction. Thus α, and similarly β, contains no edge e with r(e) ∈ H. Thus LK(E)/I(H,S) is generated by the set

{v + I(H,S) : v ∉ H} ∪ {e + I(H,S) : r(e) ∉ H} ∪ {e∗ + I(H,S) : r(e) ∉ H}.

Since ϕ∗(x) = ϕ(x) + I(H,S), it suffices to show that the set {v : v ∉ H} ∪ {e : r(e) ∉ H} ∪ {e∗ : r(e) ∉ H} is in the image of ϕ. If v ∈ (E^0\H)\(BH\S), then v = ϕ(v). If v ∈ BH\S, then

v = Σ_{s(e)=v, r(e)∉H} ee∗ + v^H = ϕ(v) + ϕ(v′).

Similarly, if r(e) ∈ (E^0\H)\(BH\S), then e = ϕ(e) (and e∗ = ϕ(e∗)). If r(e) ∈ BH\S, then by the above equation we have

e = er(e) = e(ϕ(r(e)) + ϕ(r(e)′)) = eϕ(r(e)) + eϕ(r(e)′) = ϕ(e) + ϕ(e′)

(and e∗ = ϕ(e∗) + ϕ((e′)∗)). Thus ϕ∗ is an epimorphism. Therefore ϕ∗ is an isomorphism, and so LK(E)/I(H,S) ≅ LK(E\(H, S)), as required.

Note that if we take S to be the empty set then Theorem 3.3.8 simplifies to LK(E)/I(H) ≅ LK(E|H).

So far we have been exclusively considering graded ideals of the form I(H,S) .
However, as the following theorem shows, any graded ideal of LK (E) is in fact of
the form I(H,S) for some admissible pair (H, S) of E. This result has been adapted
from [To, Theorem 5.7(1)].

Theorem 3.3.9. Let E be an arbitrary graph and let I be a graded ideal of LK(E). If we let H = I ∩ E^0 and S = {w ∈ BH : w^H ∈ I}, then I = I(H,S).

Proof. Let I be a graded ideal of LK(E) and let H and S be the two sets described above. Clearly we have I(H,S) ⊆ I, from the definition of I(H,S). By Theorem 3.3.8, there exists an isomorphism ϕ∗ : LK(E\(H, S)) → LK(E)/I(H,S). Let π : LK(E)/I(H,S) → LK(E)/I be the quotient map, so that π(x + I(H,S)) = x + I. Note that this map is well-defined, since I(H,S) ⊆ I. Consider πϕ∗ : LK(E\(H, S)) → LK(E)/I. Now, since I is graded, so too is LK(E)/I. Furthermore, both π and ϕ∗ are graded (by definition), and so πϕ∗ is also graded.
We wish to show that πϕ∗(v) ≠ 0 for any v ∈ (E\(H, S))^0. Note that πϕ∗(v) = π(ϕ(v) + I(H,S)) = ϕ(v) + I, so it suffices to show that ϕ(v) ∉ I for all v ∈ (E\(H, S))^0. We proceed in a similar fashion to the proof of Theorem 3.3.8. Suppose v ∈ (E^0\H)\(BH\S). Then ϕ(v) = v ∉ I, since H = I ∩ E^0 (by definition) and v ∉ H. Now suppose v ∈ BH\S. Then ϕ(v) = Σ_{s(e)=v, r(e)∉H} ee∗. Suppose that ϕ(v) ∈ I and choose a fixed edge f for which s(f) = v and r(f) ∉ H. Then f∗ϕ(v)f = Σ_{s(e)=v, r(e)∉H} f∗ee∗f = r(f) ∈ I, since I is a two-sided ideal. However, this implies r(f) ∈ H, a contradiction, and so ϕ(v) ∉ I. Furthermore, if v′ ∈ B′H\S′ then ϕ(v′) = v^H ∉ I, since v^H ∈ I implies v ∈ S (by the definition of S), a contradiction as v ∈ BH\S. Therefore we have πϕ∗(v) ≠ 0 for any v ∈ (E\(H, S))^0, as required.
Thus, since πϕ∗ is a graded homomorphism between two graded rings, we can apply Theorem 2.2.13 to give that πϕ∗ is injective. Since ϕ∗ is an isomorphism, this implies that π is injective. Thus π must be the identity map, and so LK(E)/I(H,S) = LK(E)/I and therefore I(H,S) = I, as required.

Note that if E is a row-finite graph, then E 0 cannot contain any breaking vertices
and so the set S in the statement of Theorem 3.3.9 will always be empty. Thus in
the row-finite case we have that I = I(H) for any graded ideal I of LK (E), where
H = I ∩ E 0.

Since Theorem 3.3.9 tells us that all graded ideals of LK (E) are of the form I(H,S)
for some admissible pair (H, S) of E, and Proposition 3.3.6 describes the structure
of such an ideal, we can now describe the structure of any graded ideal of LK (E).
We state this explicitly in the following corollary.
Corollary 3.3.10. Let I be a graded ideal of LK(E). Then

I = span({αβ∗ : α, β ∈ E^∗, r(α) = r(β) ∈ H} ∪ {αw^Hβ∗ : r(α) = r(β) = w ∈ S}),

where H = I ∩ E^0 and S = {w ∈ BH : w^H ∈ I}.

Theorem 3.3.9 also leads to the following useful corollary.

Corollary 3.3.11. For an arbitrary graph E, the Jacobson radical J(LK (E)) = 0.

Proof. We know that LK(E) is Z-graded and that E^0 is a set of local units for LK(E), with each element of E^0 homogeneous. Thus, by Lemma 1.1.8 we have that J = J(LK(E)) is a graded ideal. Furthermore, Theorem 3.3.9 tells us that J = I(H,S), where H = J ∩ E^0 and S = {w ∈ BH : w^H ∈ J}. However, by Lemma 1.1.7 we know that the Jacobson radical cannot contain any nonzero idempotents, and so H = ∅. By the definition of BH, we must also have S = ∅, and thus J(LK(E)) = 0.

We finish this section with a result that will prove useful when examining the socle series of a Leavitt path algebra in Section 3.4. This proof is based on the homomorphism φ : LK(E) → LK(E\(H, S)) that we defined in the proof of Proposition 3.3.7, as well as the isomorphism given in Theorem 3.3.8. This result is stated in a simpler form in [ARM1, Theorem 1.7(ii)] and the reader is referred to Tomforde’s [To, Theorem 5.7]. However, Tomforde does not prove this result explicitly, and so we provide details of the proof here.

Theorem 3.3.12. Let E be an arbitrary graph and let (H, S) be an admissible pair of E. Then there is an algebra epimorphism φ : LK(E) → LK(E\(H, S)) for which ker(φ) = I(H,S).

Proof. Recall the homomorphism φ : LK(E) → LK(E\(H, S)) from the proof of Proposition 3.3.7. To show that φ is an epimorphism, it suffices to show that φ maps onto the set of generators of LK(E\(H, S)); that is, each vertex, edge and ghost edge of LK(E\(H, S)) is in the image of φ. We begin by checking the vertices.
Case 1: v ∉ BH\S. Then φ(v) = v.
Case 2: v′ ∈ B′H\S′. Then we have φ(v^H) = v′ (see the final paragraph of the proof of Proposition 3.3.7).
Case 3: v ∈ BH\S. Then φ(v − v^H) = (v + v′) − v′ = v.
Next, we check the edges.
Case 1: r(e) ∉ BH\S. Then φ(e) = e.
Case 2: r(e′) = v′ ∈ B′H\S′. Then φ(ev^H) = (e + e′)v′ = e′.
Case 3: r(e) = v ∈ BH\S. Then φ(e − ev^H) = (e + e′) − e′ = e.

Similar arguments show that the ghost edges of E\(H, S) are also in the image
of φ. Thus φ is an epimorphism, as required.

Since φ is an epimorphism, we have that LK(E)/ker(φ) ≅ LK(E\(H, S)). We denote this isomorphism by φ̄ : LK(E)/ker(φ) → LK(E\(H, S)), where φ̄(x + ker(φ)) = φ(x). To complete the proof, we must show that ker(φ) = I(H,S). From the proof of Proposition 3.3.7, we know that I(H,S) ⊆ ker(φ). To show that we have equality, we first show that the isomorphism

ϕ∗φ̄ : LK(E)/ker(φ) → LK(E\(H, S)) → LK(E)/I(H,S)

sends x + ker(φ) to x + I(H,S) for all x ∈ LK(E), where ϕ∗ is the isomorphism defined in the proof of Theorem 3.3.8. To show this is true, it suffices to show it
for the generators of LK(E), that is, the set E^0 ∪ E^1 ∪ (E^1)∗. For v ∈ H, we have v ∈ I(H,S) ⊆ ker(φ), so trivially ϕ∗φ̄(v + ker(φ)) = v + I(H,S), since v + ker(φ) and v + I(H,S) are the zero elements of LK(E)/ker(φ) and LK(E)/I(H,S), respectively. The same is true for all e ∈ E^1 and e∗ ∈ (E^1)∗ with r(e) ∈ H.
For a generating element y that is not contained in I(H,S), it suffices to show that ϕφ(y) = y, since in that case we have ϕ∗φ̄(y + ker(φ)) = ϕ∗(φ(y)) = ϕφ(y) + I(H,S) = y + I(H,S). Consider v ∈ (E^0\H)\(BH\S). Then ϕφ(v) = ϕ(v) = v. Similarly, we have ϕφ(e) = ϕ(e) = e for all e ∈ E^1 (and ϕφ(e∗) = ϕ(e∗) = e∗ for all e∗ ∈ (E^1)∗) with r(e) ∈ (E^0\H)\(BH\S).
Now consider v ∈ BH\S. Then ϕφ(v) = ϕ(v + v′) = Σ_{s(e)=v, r(e)∉H} ee∗ + v^H = v, by the definition of v^H. Thus, for any e ∈ E^1 with r(e) ∈ BH\S we have ϕφ(e) = ϕ(e + e′) = e(ϕ(r(e)) + ϕ(r(e)′)) = er(e) = e. Similarly, for any e∗ ∈ (E^1)∗ with r(e) ∈ BH\S we have ϕφ(e∗) = e∗.
r(e) ∈ BH \S we have ϕφ(e∗ ) = e∗ .
Thus ϕ∗(φ̄(x + ker(φ))) = x + I(H,S) for all x ∈ LK(E), as required. Suppose that ker(φ) ≠ I(H,S), so that there exists some a ∈ ker(φ) for which a ∉ I(H,S). In that case ϕ∗(φ̄(a + ker(φ))) = a + I(H,S), which is impossible since a + ker(φ) is zero in LK(E)/ker(φ) while a + I(H,S) is a nonzero element of LK(E)/I(H,S). Thus ker(φ) = I(H,S), as required.

Note that this proof relies on the fact that we already know LK(E)/I(H,S) ≅ LK(E\(H, S)) from Theorem 3.3.8. If we could show that ker(φ) = I(H,S) directly, then this would also prove LK(E)/I(H,S) ≅ LK(E\(H, S)), making Theorem 3.3.8 redundant. However, while we can easily show that I(H,S) ⊆ ker(φ), it is not clear how to show that ker(φ) ⊆ I(H,S) without appealing to Theorem 3.3.8.

3.4 The Socle Series of a Leavitt Path Algebra

Definition 3.4.1. Let R be any ring and let τ = 2^|R|. The Loewy left ascending socle series, or simply left socle series, of R is the well-ordered ascending chain of two-sided ideals

0 = S0 ≤ S1 ≤ · · · ≤ Sα ≤ Sα+1 ≤ · · · (α < τ)

where, for each α < τ,

Sα+1/Sα = socl(R/Sα) if γ = α + 1 is not a limit ordinal, and
Sγ = ∪_{α<γ} Sα if γ is a limit ordinal.

For each α < τ, Sα is called the α-th left socle of R (and in particular, S1 = socl(R)). The least ordinal λ for which Sλ = Sλ+1 is called the left Loewy length of R, denoted l(R). If R = Sα for some α, then R is said to be a left Loewy ring (of length α).
Starting with the right socle of R, we can define the right socle series of R (and related terms) similarly.

Although the left and right socle series may differ in general, we will show in Corollary 3.4.8 that they coincide for Leavitt path algebras. (Note that we already know socl(LK(E)) = socr(LK(E)) by Corollary 3.2.2.) Thus, since we will henceforth only be concerned with the socle series of Leavitt path algebras, there is no need to specify ‘left’ or ‘right’ when using terms related to the socle series.

In this section we give several results regarding the socle series of an arbitrary
Leavitt path algebra LK (E). In Theorem 3.4.7 we describe the α-th socle of LK (E)
for all ordinals α, and describe precisely when LK (E) is a Loewy ring of length λ.
Furthermore, in Theorem 3.4.12 we show that for any ordinal λ there exists a graph
E for which LK (E) is a Loewy ring of length λ.

Example 3.4.2. We begin by examining the socle series of some familiar Leavitt
path algebras.

(i) The finite line graph Mn . We saw in Example 3.2.13 that soc(LK (Mn )) =
S1 = LK (Mn ). Thus LK (Mn ), and therefore Mn (K), is a Loewy ring of length 1
(for all n ∈ N).

(ii) The rose with n leaves Rn . In Example 3.2.13 we showed that soc(LK (Rn )) =
S1 = 0. By definition S2 /S1 = soc(LK (Rn )/S1 ), and so S2 = soc(LK (Rn )) = 0.
Thus Sα = 0 for all ordinals α, and in particular LK (Rn ), and therefore L(1, n), is
certainly not a Loewy ring for any n ∈ N.

(iii) The infinite clock graph C∞. Recall that C∞ consists of a single vertex u emitting infinitely many edges, one to each of the sinks v1, v2, v3, v4, . . . .

[Diagram: the infinite clock graph C∞, with central vertex u and infinitely many edges u → vi.]

We saw in Example 3.2.13 that soc(LK(C∞)) = I(H), where H = {vi}_{i=1}^∞. Since H is a hereditary saturated subset of E^0 (recalling that the saturated condition does not apply at infinite emitters), we can apply Theorem 3.3.8 to get LK(C∞)/I(H) ≅ LK(C∞|H) = LK({u}) ≅ K. Now, since the only ideals of K are {0} and K, we have soc(K) = K, and so S2/I(H) = soc(LK(C∞)/I(H)) = LK(C∞)/I(H). Thus S2 = LK(C∞) and so LK(C∞) is a Loewy ring with Loewy length 2.

We now look at a new example that will be integral to the proof of Theorem 3.4.12. This example is a combination of Examples 2.1, 2.5, 2.6 and 2.7 from [ARM1].

Example 3.4.3. We define a sequence of graphs Pn as follows. First, let P0 be the graph consisting of a single vertex and no edges:

P0 : •v

Next, let P1 be the ‘infinite line’ graph

P1 : •v1,1 → •v1,2 → •v1,3 → •v1,4 → · · ·

Now we construct the graph P2 from P1 by adding a second row of vertices {v2,j : j ∈ N} and edges from v2,j to v2,j+1 for each j ∈ N, effectively adding a second ‘infinite line’ graph. We then connect the two rows of vertices by adding an edge from v2,j to v1,1 for each j ∈ N.

[Diagram: the graph P2, with a second infinite row v2,1 → v2,2 → v2,3 → · · · beneath P1 and an edge from each v2,j to v1,1.]

In general, we construct the graph Pi+1 from the graph Pi by adding vertices {vi+1,j : j ∈ N} and, for each j ∈ N, an edge from vi+1,j to vi+1,j+1 and an edge from vi+1,j to vi,1.

Now, LK(P0) ≅ K, and so soc(LK(P0)) = LK(P0). Thus LK(P0) is a Loewy ring with l(LK(P0)) = 1. In the graph P1, every vertex is a line point, and so by Theorem 3.2.11 we have soc(LK(P1)) = I((P1)^0) = LK(P1). Thus LK(P1) is also a Loewy ring with l(LK(P1)) = 1.

For the graph P2, the set of line points is the top row of vertices H = {v1,j : j ∈ N}. Note that H is both hereditary and saturated. Thus soc(LK(P2)) = I(H). Furthermore, note that the quotient graph P2|H consists of the ‘bottom row’ of vertices and edges and is clearly isomorphic as a graph to P1. Thus, by Theorem 3.3.8 we have

LK(P2)/I(H) ≅ LK(P2|H) ≅ LK(P1),

and since LK (P1 ) is a Loewy ring with l(LK (P1 )) = 1, LK (P2 ) is therefore a Loewy
ring with l(LK (P2 )) = 2.

Using induction, it is easy to see that LK (Pn ) is a Loewy ring with l(LK (Pn )) = n
for all n ∈ N: to begin, we know this statement is true for n = 1 and n = 2. Now
assume it is true for n = i and consider the graph Pi+1 . The line points of Pi+1 are
again the set H = {v1,j : j ∈ N}, and Pi+1 |H is isomorphic to Pi . Thus, as above,
LK (Pi+1 |H) ∼
= LK (Pi ), and since LK (Pi ) is a Loewy ring with l(LK (Pi )) = i (by
our assumption), LK (Pi+1 ) is therefore a Loewy ring with l(LK (Pi+1 )) = i + 1, as
required.
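The line-point computation underlying this induction can be checked mechanically on a finite truncation of P2. The sketch below is our own illustration, not code from the thesis; note the truncation artifact flagged in the comments.

```python
# A finite truncation of P2 with rows of length N = 4 (our own illustration).
# Each bottom-row vertex emits an edge along its row and an edge to v11, so it
# bifurcates and is not a line point; the top row is a line of line points.
# (Truncation artifact: the last bottom vertex v24 loses its row edge, emits
# only one edge, and so counts as a line point here, unlike in the infinite P2.)

N = 4
top = [f"v1{j}" for j in range(1, N + 1)]
bot = [f"v2{j}" for j in range(1, N + 1)]
out_edges = {v: [] for v in top + bot}
for j in range(N - 1):
    out_edges[top[j]].append(top[j + 1])   # v1,j -> v1,j+1
    out_edges[bot[j]].append(bot[j + 1])   # v2,j -> v2,j+1
for v in bot:
    out_edges[v].append(top[0])            # v2,j -> v1,1

def tree(v):
    """T(v): every vertex reachable from v, including v itself."""
    seen, stack = {v}, [v]
    while stack:
        for w in out_edges[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen

def is_line_point(v):
    # The truncation is acyclic, so only bifurcations need to be ruled out.
    return all(len(out_edges[w]) < 2 for w in tree(v))

line_points = {v for v in out_edges if is_line_point(v)}
```

Up to the truncation artifact at v24, this recovers the description above: the line points are exactly the top row.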

If we view Pi as being contained in Pi+1 for each i ∈ N, then {Pi}_{i∈N} is an ascending chain of graphs. Thus, with ω denoting the first infinite ordinal, we can form the graph

Pω = ∪_{i<ω} Pi.

Then, using the same argument as above, Pω|Pl(Pω) ≅ Pω, with S1 ⊂ S2 ⊂ S3 ⊂ · · · ⊂ ∪_{i<ω} Si = LK(Pω), and LK(Pω) is a Loewy ring with Loewy length ω.

We now define a sequence of graphs Qn that are very similar to the graphs
Pn , except for one subtle but important difference. This example is from [ARM1,
Example 2.8].

Example 3.4.4. Let Q1 be the infinite line graph:

Q1 : •w1,1 → •w1,2 → •w1,3 → •w1,4 → · · ·

As in the previous example, we now add a second ‘infinite line’ graph, but this time we connect the two rows of vertices from the lower to the upper, by adding an edge from w1,j to w2,1 for each j ∈ N.

[Diagram: the graph Q2, with a second infinite row w2,1 → w2,2 → w2,3 → · · · above Q1 and an edge from each w1,j to w2,1.]

In general, we construct the graph Qi+1 from the graph Qi by adding vertices {wi+1,j : j ∈ N} and, for each j ∈ N, an edge from wi+1,j to wi+1,j+1 and an edge from wi,j to wi+1,1. Contrast this with the construction of Pi+1, in which we add an edge from vi+1,j to vi,1 for each j ∈ N. Despite this difference, it is clear that the graph Qi is isomorphic to the graph Pi for each i ∈ N. Thus the Leavitt path algebra LK(Qn) is a Loewy ring with l(LK(Qn)) = n for all n ∈ N.

Once again, viewing Qi as being contained in Qi+1 for each i ∈ N, we can form the graph Qω = ∪_{i<ω} Qi. This is where the two examples diverge. For each i ∈ N, Pl(Pi) = {v1,j : j ∈ N} (which is independent of i) and so Pl(Pω) = {v1,j : j ∈ N}. However, Pl(Qi) = {wi,j : j ∈ N} for each i ∈ N, and so Qω has no line points. Therefore soc(LK(Qω)) = {0}, and so Sα = {0} for each α. Thus, while LK(Pω) is a Loewy ring, its counterpart LK(Qω) is not.

We now give a definition that is an integral part of Theorem 3.4.7. This definition
is from [ARM1, Definition 3.1], although this version differs from the published
version for reasons that will be explained after the proof of Theorem 3.4.7.

Definition 3.4.5. Let E be an arbitrary graph and let LK(E) be its associated Leavitt path algebra. Recall the definitions of the quotient graph E\(H, S) and v^H from Section 3.3. For each ordinal γ, we define transfinitely a subset Vγ of E^0 as follows.

(i) V1 is the hereditary saturated closure of the set Pl(E).

Suppose γ > 1 is any ordinal and that the sets Vα have been defined for all α < γ. Let Sα denote the α-th socle of LK(E) and define Bα := {w ∈ BVα : w^Vα ∈ Sα}.

(ii) If γ = α + 1 is a non-limit ordinal, then Vγ = E^0 ∩ I(V′α+1), defining

V′α+1 = Vα ∪ Wα ∪ Zα,

where

Wα = {w ∈ E^0\Vα : w is a line point in the quotient graph E\(Vα, Bα)}

and

Zα = {v − Σ_{s(e)=v, r(e)∉Vα} ee∗ : v ∈ BVα\Bα}.

(iii) If γ is a limit ordinal, then Vγ = ∪_{α<γ} Vα.

Lemma 3.4.6. Each subset Vγ (as defined in Definition 3.4.5) is a hereditary saturated subset of E^0.

Proof. We know that the set of line points of E must be a hereditary subset of E^0 since, given a vertex v ∈ Pl(E), every vertex w ∈ T(v) must also be a line point, by definition. Thus V1, the hereditary saturated closure of Pl(E), must be a hereditary saturated subset of E^0.
If γ = α + 1 is a non-limit ordinal, then Vγ = E^0 ∩ I(V′α+1), where I(V′α+1) is as defined above. Since I(V′α+1) is an ideal, Vγ must be a hereditary saturated subset, by Lemma 2.2.1.
For the case where γ is a limit ordinal, take a vertex v ∈ Vγ and a vertex w ∈ T(v). Since Vγ = ∪_{α<γ} Vα, we must have v ∈ Vα for some α < γ, and since Vα is hereditary, we have w ∈ Vα and so w ∈ Vγ. Now suppose that u is a regular vertex in E^0 such that, for each ei ∈ s^{-1}(u), we have r(ei) ∈ Vγ. Since Vα ⊆ Vα+1 for each α < γ, there must exist some α < γ for which r(ei) ∈ Vα for all ei ∈ s^{-1}(u). Then, since Vα is saturated, we must have u ∈ Vα and thus u ∈ Vγ, as required.

We now come to the main result of this section. This theorem is from [ARM1, Theorem 3.2], although it differs from the published version, which the author found to be incorrect for a number of reasons. After correspondence with one of the authors of the paper, the theorem was adjusted to the current version below. The differences between versions, and why the changes were made, will be discussed after the proof. The proof has also been expanded to clarify some of the arguments used.

Theorem 3.4.7. Let E be an arbitrary graph and let LK(E) be its associated Leavitt path algebra. For each ordinal α, let Sα denote the α-th socle of LK(E), and let Vα and Bα be the subsets of E^0 and BVα, respectively, defined in Definition 3.4.5. Then

(i) Sα is a graded ideal for each α;

(ii) Vα = E 0 ∩ Sα for each α;

(iii) Sα = I(Vα ,Bα ) for each α;

(iv) LK(E)/Sα ≅ LK(E\(Vα, Bα)) as graded K-algebras for each α; and

(v) LK (E) is a Loewy ring of length λ if and only if λ is the smallest ordinal such
that E 0 = Vλ .

Proof. We prove (i)-(iv) simultaneously by transfinite induction. For γ = 1, V1 has been defined as the hereditary saturated closure of the set of line points of E. Thus, by Theorem 3.2.11, we have S1 = soc(LK(E)) = I(V1). By Proposition 3.3.6, S1 is a graded ideal (proving (i)), and by Proposition 3.3.7 we have S1 ∩ E^0 = V1 (proving (ii)). Since S1 = I(V1) = I(V1,∅), Proposition 3.3.7 gives B1 = {w ∈ BV1 : w^V1 ∈ S1} = ∅ and thus S1 = I(V1,B1), proving (iii). Now (iv) follows directly from (iii) and Theorem 3.3.8, and so we have shown that (i)-(iv) hold for the case γ = 1.
Now suppose that γ > 1 and that properties (i)-(iv) hold for all α < γ. Suppose that γ is a non-limit ordinal, so that γ = α + 1 for some α, and suppose that Vα ≠ E^0 (so that Sα ≠ LK(E)). Recall (from Definition 3.3.2) the sets BVα, B′Vα and the quotient graph E\(Vα, Bα). Let φ : LK(E) → LK(E\(Vα, Bα)) be the epimorphism defined in the proof of Proposition 3.3.7 and let V′α+1 be the set defined in Definition 3.4.5. Consider φ(V′α+1) = φ(Vα) ∪ φ(Wα) ∪ φ(Zα), a subset of LK(E\(Vα, Bα)). By the definition of φ, we have φ(Vα) = {0}.
Let Wα = {v1, v2, . . . , w1, w2, . . .}, where each vi ∈ (E^0\Vα)\(BVα\Bα) and each wi ∈ BVα\Bα. Thus

φ(Wα) = {v1, v2, . . . , w1 + w′1, w2 + w′2, . . .}.

Recalling from the proof of Proposition 3.3.7 that φ(ui − Σ_{s(e)=ui, r(e)∉Vα} ee∗) = u′i for each ui ∈ BVα\Bα, we also have φ(Zα) = B′Vα\B′α. Thus

φ(V′α+1) = {v1, v2, . . . , w1 + w′1, w2 + w′2, . . .} ∪ (B′Vα\B′α).
CHAPTER 3. SOCLE THEORY OF LEAVITT PATH ALGEBRAS 116

0
Now, since each wi0 ∈ BV0 α \Bα0 , each wi = (wi + wi0 ) − wi0 ∈ I(φ(Vα+1 )), and so

0
I(φ(Vα+1 )) = I({v1 , v2 , . . . , w1 , w2 , . . .} ∪ (BV0 α \Bα0 )) = I(Wα ∪ (BV0 α \Bα0 )).

By definition, Wα is the set of all line points in E\(Vα , Bα ) that are also vertices in
the original graph E. Furthermore, the only new vertices introduced into E\(Vα , Bα )
are the set BV0 α \Bα0 , which are sinks (and therefore line points) by definition. Thus
Pl (E\(Vα , Bα )) = Wα ∪ (BV0 α \Bα0 ) and so, by Theorem 3.2.11,

soc(LK (E\(Vα , Bα )) = I(Pl (E\(Vα , Bα ))) = I(Wα ∪ (BV0 α \Bα0 )) = I(φ(Vα+1


0
)).

Now, by our induction hypothesis we have I(Vα ,Bα ) = Sα , and so by Theorem 3.3.8
we have LK (E)/Sα ∼= LK (E\(Vα , Bα )). Specifically, the function φ̄ : LK (E)/Sα →
LK (E\(Vα , Bα )) with φ̄(x + Sα ) = φ(x) is an isomorphism. Thus, from the socle
series definition we have

Sα+1 /Sα = soc(LK (E)/Sα ) ∼ 0


= soc(LK (E\(Vα , Bα ))) = I(φ(Vα+1 0
)) = φ(I(Vα+1 )).

0
Thus φ̄(Sα+1 /Sα ) = φ(I(Vα+1 )), and so

Sα+1 /Sα = φ̄−1 (φ(I(Vα+1


0 0
))) = I(Vα+1 )/Sα ,

0 0
giving Sα+1 = I(Vα+1 ). Thus Vα+1 = I(Vα+1 ) ∩ E 0 = Sα+1 ∩ E 0 , proving (ii).

By our inductive hypothesis, Sα and soc(LK(E\(Vα, Bα))) are graded. Furthermore,
Sα+1/Sα ≅ soc(LK(E\(Vα, Bα))) and so Sα+1 is graded, proving (i). Since
Sα+1 is graded, by Theorem 3.3.9 Sα+1 = I(H,S), where H = Sα+1 ∩ E0 and
S = {w ∈ BH : wH ∈ Sα+1}. From above, we have H = Vα+1 and so S = Bα+1 (by
the definition of Bα+1). Thus Sα+1 = I(Vα+1,Bα+1), proving (iii). Again, (iv) follows
directly from (iii) and Theorem 3.3.8.

Thus we have shown properties (i)-(iv) for when γ is not a limit ordinal. If γ is a
limit ordinal, then by definition Sγ = ⋃_{α<γ} Sα and Vγ = ⋃_{α<γ} Vα. Since each Sα is
graded, Sγ is also graded, proving (i). Furthermore, if Vα = Sα ∩ E0 for each α < γ
then it follows that Vγ = Sγ ∩ E0, proving (ii). As above, the fact that Sγ = I(Vγ,Bγ)
follows from (i) and (ii) and the definition of Bγ, proving (iii), and (iv) follows directly
from (iii) and Theorem 3.3.8. Thus we have established (i)-(iv) for all γ.

Finally, note that LK (E) is a Loewy ring of length λ if and only if λ is the smallest
ordinal for which Sλ = LK (E), by definition. By Lemma 2.2.3, Sλ = LK (E) if and
only if Sλ ∩ E 0 = E 0 , that is, Vλ = E 0 (by (ii)). Thus LK (E) is a Loewy ring of
length λ if and only if λ is the smallest ordinal for which Vλ = E 0 , proving (v).

The primary error in the original proof of [ARM1, Theorem 3.2] was the assumption
that Sα = I(Vα) rather than Sα = I(Vα,Bα). While the property Sα = I(Vα) was
not stated explicitly in the theorem itself, the assumption is implied when [ARM1,
Theorem 1.7(ii)] is invoked to give LK(E)/Sα ≅ LK(E|Vα) during the induction
process. As shown in the proof above, the fact that Vα = E0 ∩ Sα (together with
Theorem 3.3.9) implies directly that Sα = I(Vα,Bα), and I(Vα,Bα) ≠ I(Vα) unless
Bα = ∅, which is not true in general. Thus we have changed the proof of Theorem 3.4.7
accordingly and have added the statement Sα = I(Vα,Bα) as property (iii) for
clarity. Furthermore, [ARM1, Theorem 3.2(3)] states that 'LK(E)/Sα ≅ LK(E|Vα)
as graded K-algebras for each α'; here we have changed that to 'LK(E)/Sα ≅
LK(E\(Vα, Bα)) as graded K-algebras for each α' in property (iv).

Furthermore, recall that we define Wα in Definition 3.4.5 as

Wα = {w ∈ E0\Vα : w is a line point in the quotient graph E\(Vα, Bα)},

and that this definition allows us to conclude in the proof of Theorem 3.4.7 that
Pl(E\(Vα, Bα)) = Wα ∪ (B′Vα\B′α), an equality that is central to the proof. In
[ARM1, Definition 3.1], the corresponding set is defined as

{w ∈ E0\Vα : every bifurcation vertex u ∈ TE(w)\Vα has at most one edge e
with s(e) = u and r(e) ∉ Vα}.

However, such vertices will not necessarily be line points in the quotient graph
E\(Vα, Bα), since there is the possibility that a new edge e′ with s(e′) = u ∈
TE(w)\Vα will be added in the construction of E\(Vα, Bα), making u a bifurcation
in the quotient graph. Hence we have modified the definition in our version.

As promised at the beginning of this section, we now show that the left and right
socle series of a Leavitt path algebra coincide.

Corollary 3.4.8. Let E be an arbitrary graph. For any ordinal α < 2^|LK(E)|, the
α-th left socle of LK(E) is equal to the α-th right socle of LK(E).

Proof. We proceed using transfinite induction. For ease of notation we will denote
the α-th left socle of LK(E) by Sα and the α-th right socle of LK(E) by Tα. For the
case α = 1, we have S1 = socl(LK(E)) = socr(LK(E)) = T1, by Corollary 3.2.2.
Now let 1 < α < 2^|LK(E)| and suppose that Sβ = Tβ for all β < α. Suppose
first that α = β + 1 for some ordinal β. Then, applying Corollary 3.2.2
and Theorem 3.4.7 (iv), we have

Sα/Sβ = socl(LK(E)/Sβ) ≅ socl(LK(E\(Vβ, Bβ)))
= socr(LK(E\(Vβ, Bβ))) ≅ socr(LK(E)/Tβ) = Tα/Tβ,

and since Sβ = Tβ we therefore have Sα = Tα.


Now suppose that α is a limit ordinal. Then we have Sα = ⋃_{β<α} Sβ = ⋃_{β<α} Tβ =
Tα, completing the proof.

We now proceed with several ring-theoretic results related to the socle series of
a Leavitt path algebra. Because some of these results rely on Theorem 3.4.7, the
proofs have had to be subtly adjusted. However, these adjustments have not led to
any changes in the results themselves. The first result is from [ARM1, Proposition
3.3].

Proposition 3.4.9. Let E be an arbitrary graph and let Sα be the α-th socle of
LK (E). Each Sα is a von Neumann regular ring.

Proof. It is known (see for example [J2, pages 65, 90]) that for a semiprime ring R,
soc(R) is a direct sum of simple rings Ti and that each Ti is the directed union of
full matrix rings over division rings. By the remark on p.67 of [L1], a matrix ring
over a division ring is von Neumann regular, and thus a directed union of matrix
rings over division rings must be von Neumann regular. Since soc(R) is a direct sum
of such rings, it must also be von Neumann regular. Now we know that LK (E) is
semiprime by Proposition 3.2.1, and so S1 = soc(LK (E)) is von Neumann regular.

We proceed by transfinite induction. Suppose that γ > 1 and assume that Sα
is von Neumann regular for each α < γ. Suppose γ is not a limit ordinal, so that
γ = α + 1 for some α. By Theorem 3.4.7 (iv), we have LK(E)/Sα ≅ LK(E\(Vα, Bα)),
and so Sα+1/Sα = soc(LK(E)/Sα) ≅ soc(LK(E\(Vα, Bα))). Since the socle of a
Leavitt path algebra is von Neumann regular (by the paragraph above), we have
that Sα+1/Sα is von Neumann regular. Furthermore, Sα is von Neumann regular by
our inductive hypothesis and so Sα+1 is von Neumann regular (by Lemma 1.1.9).

If γ is a limit ordinal, then Sγ = ⋃_{α<γ} Sα. Take a ∈ Sγ. Then a ∈ Sα for some
α < γ, and since Sα is von Neumann regular by our inductive hypothesis there exists
x ∈ Sα ⊆ Sγ for which a = axa. Thus Sγ is von Neumann regular, completing the
proof.

Proposition 3.4.9, together with the yet-to-come Theorem 4.2.3, leads to the
following corollary.

Corollary 3.4.10. Let E be an arbitrary graph. If LK(E) is a Loewy ring, then E
is acyclic and LK(E) is locally K-matricial and von Neumann regular.

Proof. If LK (E) is a Loewy ring then LK (E) = Sα for some α. Thus LK (E) is von
Neumann regular (by Proposition 3.4.9) and so, by Theorem 4.2.3, E is acyclic and
LK (E) is locally K-matricial.

Note that the converse is not true: recall the graph Qω from Example 3.4.3,
which is acyclic even though LK(Qω) is not a Loewy ring, since Sα = {0} for all α.
However, the following corollary (from [ARM1, Corollary 3.5]) shows that we have
equivalence when E0 is finite.

Corollary 3.4.11. Let E be a graph for which E 0 is finite. The following statements
are equivalent.

(i) LK(E) is a Loewy ring;

(ii) E is acyclic;

(iii) LK(E) is von Neumann regular.



Furthermore, if E 1 is also finite, then the previous conditions are equivalent to

(iv) LK (E) is semisimple (in this case, we have l(LK (E)) = 1).

Proof. Theorem 4.2.3 gives (ii) ⇐⇒ (iii), while Corollary 3.4.10 gives (i) ⇒ (ii). To
show (ii) ⇒ (i), suppose that E is acyclic. Since E0 is finite, this implies that E must
contain at least one sink, and so Pl(E) ≠ ∅. Recall from Definition 3.4.5 that V1 =
Pl(E) ≠ ∅. If V1 = E0 then LK(E) is a Loewy ring (of length 1) by Theorem 3.4.7
(v). If not, then we can form the quotient graph E\(V1, B1). This graph must also
be acyclic, since the only edges added in the construction of the quotient graph end
in sinks, by definition. Now, if the added vertices v′ ∈ B′V1\B′1 were the only sinks in
E\(V1, B1), then the graph obtained from E\(V1, B1) by deleting the vertices B′V1\B′1
and the edges ending in them would contain no sinks, a contradiction since this is also
a finite and acyclic graph. Thus E\(V1, B1)
must contain a vertex from the original graph E that is a sink in E\(V1, B1).

Now, by definition V2 contains the sets V1 and

W1 = {w ∈ E 0 \V1 : w is a line point in the quotient graph E\(V1 , B1 )}.

By our observation above, W1 6= ∅ and so V1 ⊂ V2 , giving |V2 | > |V1 |. Again, either
V2 = E 0 , in which case we are done, or we can repeat the above argument to show
that |V3 | > |V2 |, and so on. Since E 0 is finite, this ascending chain of subsets of E 0
must stop, eventually giving Vn = E 0 for some n ∈ N. Thus LK (E) is a Loewy ring
by Theorem 3.4.7 (v).

Now suppose that E 1 is finite and E is acyclic. Then, by Lemma 2.2.9, LK (E) is
isomorphic to a direct sum of matrix rings over K. Since each matrix ring is simple
(by Lemma 1.1.10), LK (E) is therefore the direct sum of simple left ideals and so is
semisimple, showing (ii)⇒(iv). If LK (E) is semisimple, then soc(LK (E)) = LK (E)
and so l(LK (E)) = 1, as required. Thus LK (E) is a Loewy ring, showing (iv)⇒(i)
and completing the proof.
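The iterative argument above is effectively an algorithm for finite graphs: compute the line points, close them up under the hereditary and saturated conditions, and repeat on the quotient. The following Python sketch computes the two ingredients of that first step, under the assumptions (not restated in this section) that a line point is a vertex whose tree contains neither bifurcations nor cycles, and that V1 consists of the vertices of I(Pl(E)), i.e. the hereditary saturated closure of Pl(E); the encoding of a graph as a vertex list and a list of (source, range) pairs is illustrative only:

```python
def line_points(vertices, edges):
    """Vertices v whose tree T(v) contains no bifurcations and no cycles."""
    out = {v: [r for (s, r) in edges if s == v] for v in vertices}

    def tree(v):  # all vertices reachable from v, including v itself
        seen, stack = set(), [v]
        while stack:
            u = stack.pop()
            if u not in seen:
                seen.add(u)
                stack.extend(out[u])
        return seen

    def on_cycle(u):  # u lies on a cycle iff u is reachable from a successor
        return any(u in tree(w) for w in out[u])

    return {v for v in vertices
            if all(len(out[u]) <= 1 and not on_cycle(u) for u in tree(v))}


def hereditary_saturated_closure(H, vertices, edges):
    """Close H under: ranges of edges leaving H (hereditary), and
    emitting vertices all of whose edges end in H (saturated)."""
    out = {v: [r for (s, r) in edges if s == v] for v in vertices}
    H, changed = set(H), True
    while changed:
        changed = False
        for v in vertices:
            if v in H:
                new = [w for w in out[v] if w not in H]
            elif out[v] and all(w in H for w in out[v]):
                new = [v]
            else:
                new = []
            if new:
                H.update(new)
                changed = True
    return H
```

For the graph with edges v → w1 and v → w2, the line points are the two sinks w1 and w2, and the closure step adds v (it emits only into the closure), so V1 = E0 and the Loewy length is 1, in line with Corollary 3.4.11; for a single loop there are no line points at all, matching the failure of acyclicity.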

We now come to the second main result of this section, which is from [ARM1,
Theorem 4.1].

Theorem 3.4.12. For every ordinal λ and any field K, there exists an acyclic graph
Pλ for which LK (Pλ ) is a Loewy ring of length λ.

Proof. We construct a series of graphs Pα that transfinitely extends the series introduced
in Example 3.4.3. For λ = 1, we choose E = P1, the 'infinite line' graph

P1 : •v1,1 → •v1,2 → •v1,3 → •v1,4 → · · ·

P1 is clearly acyclic.

Now suppose that λ ≥ 2 is any ordinal, and suppose that the graphs Pα have
been defined for all α < λ and that each Pα has Loewy length α. There are three
possibilities for λ. First, suppose that λ = α + 1, where α is not a limit ordinal.
Then, in a similar manner to Example 3.4.3, we construct the graph Pα+1 from Pα
by adding vertices {vα+1,j : j ∈ N} and, for each j ∈ N, an edge from vα+1,j to
vα+1,j+1 and an edge from vα+1,j to vα,1. That is, Pα+1 consists of Pα together with
a new infinite line

•vα+1,1 → •vα+1,2 → •vα+1,3 → •vα+1,4 → · · ·

and an edge from each vα+1,j to the vertex vα,1 of Pα.

Secondly, if λ is a limit ordinal then we define

Pλ = ⋃_{α<λ} Pα.

Finally, suppose that λ = α + 1, where α is a limit ordinal. Then we construct
the graph Pα+1 from Pα by adding a single vertex vα+1,1 and, for each β < α, an
edge from vα+1,1 to vβ,1. That is, Pα+1 consists of Pα together with the vertex
vα+1,1, which emits infinitely many edges, one to each vβ,1 with β < α.

Note that in each case Pλ is acyclic (as required) and Pα is a subgraph of Pλ for
all α < λ, giving a chain of subgraphs P1 ⊂ P2 ⊂ · · · ⊂ Pλ−1 ⊂ Pλ .

We now show by transfinite induction that l(LK (Pα )) = α for each ordinal α.
We do this by showing that α is the smallest ordinal for which Vα = Pα0 and then
applying Theorem 3.4.7 (v). For α = 1, we have V1 = Pl (P1 ) = P10 , since every
vertex in P1 is a line point.
Now let λ be any ordinal greater than 1 and suppose that LK (Pα ) is a Loewy
ring of length α for all α < λ. Since each Pα can be viewed as a subgraph of Pλ , this
is equivalent to assuming that Vα = Pα0 for each α < λ, where each Vα is a subset of
Pλ0 .
Suppose that λ = β + 1, where β is not a limit ordinal. Now Vβ = Pβ0 , and since
there are no infinite emitters going into Vβ we have BVβ = ∅. Thus it is easy to see
that the definition of Vλ simplifies to Vλ = E 0 ∩ I(Vβ ∪ Wβ ), where Wβ is the set

Wβ = {w ∈ Pλ0 \Vβ : w is a line point in Pλ |Vβ }.

Now, since BVβ = ∅ we have Pλ|Vβ ≅ P1, and so every vertex in the set (Pλ|Vβ)0 =
{vβ+1,j : j ∈ N} is a line point. Thus Vλ = E0 ∩ I(Pβ0 ∪ {vβ+1,j : j ∈ N}) =
E0 ∩ I(Pλ0) = Pλ0. Furthermore, since Vβ ≠ Pλ0, λ is indeed the smallest ordinal for
which Vλ = Pλ0, as required.
Now suppose that λ = β + 1, where β is a limit ordinal. Since vβ+1,1 emits
an infinite number of edges into Vβ = Pβ0 and no edges into the rest of the graph,
we again have BVβ = ∅. Furthermore, every vertex in the graph Pλ|Vβ ≅ P0¹ is a
line point, and so, by the same argument as above, we have that λ is the smallest
ordinal for which Vλ = Pλ0, as required.
Finally, suppose that λ is a limit ordinal. Then Vλ = ⋃_{α<λ} Vα = ⋃_{α<λ} Pα0 = Pλ0,
completing the proof.

We finish this section with a result from [ARM1, Theorem 4.2] that shows there
exists an upper bound on the Loewy length of the Leavitt path algebra of a row-
finite graph. The proof of this theorem refers to the set Wα from Definition 3.4.5.
¹ Recall from Example 3.4.3 that P0 is the graph consisting of a single vertex and no edges.

As noted earlier, this definition is different from the one seen in [ARM1, Definition
3.1] and so we have had to modify the following proof accordingly.

Theorem 3.4.13. Suppose that the Leavitt path algebra LK (E) is a Loewy ring. If E
is a row-finite graph then LK (E) must have Loewy length ≤ ω1 , the first uncountable
ordinal.

Proof. Suppose that LK (E) has Loewy length greater than ω1 . Let Sω1 be the ω1 -st
socle of LK (E) and let Vω1 be the set of vertices defined in Definition 3.4.5. Recall
from the definition that Vω1 +1 contains the set

Wω1 = {w ∈ E 0 \Vω1 : w is a line point in E|Vω1 },

noting that Bω1 = ∅ since E is row-finite. Consider a fixed w ∈ Wω1 . Now w is


not a line point in E, for otherwise we would have w ∈ V1 ⊂ Vω1 . Furthermore,
since LK (E) is a Loewy ring then E must be acyclic (by Corollary 3.4.10), and in
particular there cannot be a cycle based at any vertex in TE (w). Thus TE (w) must
contain at least one bifurcation.

Let U = {u1, u2, . . .} be the set of bifurcation vertices in TE(w) that are also
contained in TE|Vω1(w) (though indeed they are not bifurcations in E|Vω1, since w
is a line point in E|Vω1). Since E is row-finite, for a fixed positive integer n the
number of paths of length n with source w is finite, and so the number of vertices
in TE(w) is at most countable; thus |U| is at most countable. Furthermore, U
is not empty. To see this, suppose that U is empty and let p be a path of minimum
length in E from w to a bifurcation u ∈ TE(w). Since the only vertices removed
in the construction of E|Vω1 are in the set Vω1, if u ∉ TE|Vω1(w) then there must
be a vertex v ∈ p0 for which v ∈ Vω1 (noting that we may have v = u). However,
since there are no bifurcations between w and v, the saturated nature of Vω1 implies
that w ∈ Vω1, a contradiction. Thus U is not empty. Note that, by definition, each
ui ∉ Vω1.

As noted above, each ui ∈ U cannot be a bifurcation in E|Vω1. Thus each ui
must emit at most one edge into E0\Vω1 and so, since it is a bifurcation in E, ui
emits at least one edge into Vω1. For each ui ∈ U, let s−1(ui) = {ei1, . . . , eik(i)} (a
finite set since E is row-finite), and define

Ji = {eij ∈ s−1(ui) : r(eij) ∈ Vω1}.

Note that each of these sets is nonempty since each ui emits at least one edge into
Vω1, as explained above. From the definition of Ji we have r(Ji) ⊆ Vω1. Thus, since
Vω1 = ⋃_{α<ω1} Vα, for each ui ∈ U we have r(Ji) ⊆ Vαi for some αi < ω1.
S
Let γ = sup{αi : i = 1, 2, . . .}, so that ui ∈U r(Ji ) ⊆ Vγ (noting that γ < ω1 ,
since U is countable). Thus the quotient graph E|Vγ contains none of the edges in
0
S
ui ∈U Ji . Since each ui emits at most one edge into E \Vω1 , and therefore at most

one edge into E 0 \Vγ , each ui must be a line point in E|Vγ . Thus, by Theorem 3.2.11,
each ui ∈ soc(LK (E|Vγ ). Recall the definition of φ : LK (E) → LK (E|Vγ ) from
Proposition 3.3.7. Since each ui ∈
/ Vω1 we have ui ∈
/ Vγ and so φ(ui ) = ui . Letting
φ̄ : LK (E)/Sγ → LK (E|Vγ ) be the isomorphism defined by φ̄(x + Sγ ) = φ(x), we
therefore have φ̄−1 (ui ) = ui + Sγ . Thus we have

Sγ+1 /Sγ = soc(LK (E)/Sγ ) ∼


= soc(LK (E|Vγ ),

and specifically φ̄−1 soc(LK (E|Vγ )) = Sγ+1 /Sγ .


 

Since each ui ∈ soc(LK (E|Vγ )), we have ui +Sγ ∈ Sγ+1 /Sγ . Thus each ui ∈ Sγ+1 ,
and so ui ∈ Sγ+1 ∩ E 0 = Vγ+1 ⊆ Vω1 , contradicting the fact that each ui ∈
/ Vω1 . Thus
LK (E) has Loewy length ≤ ω1 , as required.

One may be tempted to think that ω, the first countable ordinal, would be
an upper bound for the Loewy length of LK (E) in the case that E is row-finite.
However, [ARM1, Example 4.3] constructs a series of row-finite graphs Pα for which
LK(Pα) has Loewy length α for each α < ω1, thus showing that ω1 is indeed
the best possible upper bound.
Chapter 4

Regular and Self-Injective Leavitt Path Algebras

In this chapter we define various notions of ‘regularity’ for a ring and examine Leavitt
path algebras with these properties in Sections 4.2 and 4.3. Furthermore, in Section
4.4 we examine Leavitt path algebras that are self-injective; that is, injective as
left (or right) modules over themselves. To begin, we define the construction of a
particular K-subalgebra of a Leavitt path algebra that will be integral to proving
our main result in Section 4.2.

4.1 The Subalgebra Construction


In this section, we define a subalgebra B(X) of LK (E) for a given graph E and finite
subset of nonzero elements X ⊆ LK (E). Furthermore, we show in Proposition 4.1.7
that LK (E) is in fact the directed union of such algebras. This subalgebra con-
struction is given by Abrams and Rangaswamy in [AR], and we follow their work
closely for the majority of this section. To begin, we introduce the concept of a
graph EF ; this definition is given in [AR, Definition 2], which in turn is based on
an idea presented by Raeburn and Szymański in [RS].

Definition 4.1.1. Let E be a graph, and let F be a finite subset of E 1 . We define


s(F ) to be the set of all vertices in E 0 that are the source of at least one edge in F ,


and similarly r(F ) to be the set of all vertices that are the range of at least one edge
in F . We construct a new graph, EF , in two parts. First, we define the vertices:

EF0 = F ∪ WF ∪ ZF ,

where
WF = r(F) ∩ s(F) ∩ s(E1\F) and ZF = r(F)\s(F).

In other words, each edge in F becomes a vertex in our new graph. In addition, we
include all vertices which are both the source and range of at least one edge in F as
well as the source of at least one edge that is not in F (the set WF ), as well as all
vertices that are the range of at least one edge in F but not the source of any edge
in F (the set ZF ). Now we define the edges of EF :

EF1 = {(f, x) ∈ F × EF0 : r(f ) = s(x)},

following the convention that s(v) = v when x = v is a vertex from our original
graph E (i.e. when x ∈ WF ∪ ZF ⊆ EF0 ). In other words, EF1 is the set of ordered
pairs (f, x) of edges f ∈ F and vertices x ∈ EF0 for which either f x forms a path in
our original graph E (if x ∈ F ), or x is the range vertex for f in E (if x ∈ WF ∪ ZF ).

Finally, we define the source and range functions of EF:

s((f, x)) = f and r((f, x)) = x for all (f, x) ∈ EF1.

Note that, since F is finite, the graph EF must also be finite. Also, any vertices
in the sets WF or ZF become sinks in our new graph. We illustrate the construction
of EF with the following example.

Example 4.1.2. Let E be the graph with vertices v0, v1, v2, v3 and edges
e1 : v0 → v1, e2 : v1 → v2 and e3 : v1 → v3,

and let F be the set of edges {e1, e2}. Then WF = {v1} and ZF = {v2}, and so
EF0 = {e1, e2, v1, v2}. Thus EF1 = {(e1, e2), (e1, v1), (e2, v2)}, and so EF is the
graph with vertices e1, e2, v1, v2 and edges

(e1, e2) : e1 → e2,  (e1, v1) : e1 → v1  and  (e2, v2) : e2 → v2.
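Since the construction of EF is purely combinatorial, it can be automated. A small Python sketch of Definition 4.1.1, where the encoding of E by edge-to-vertex dictionaries (and the helper name build_EF) is an assumption of this sketch rather than notation from the text:

```python
def build_EF(E1, s, r, F):
    """Construct the graph E_F of Definition 4.1.1.
    E1: set of edge names; s, r: dicts mapping each edge to its
    source/range vertex; F: a finite subset of E1."""
    sF = {s[e] for e in F}                 # sources of edges in F
    rF = {r[e] for e in F}                 # ranges of edges in F
    WF = rF & sF & {s[e] for e in E1 - F}  # range & source of F, source of E1\F
    ZF = rF - sF                           # range of F, source of no edge in F
    EF0 = set(F) | WF | ZF                 # every edge of F becomes a vertex

    def src(x):  # convention: s(v) = v when x = v is a vertex of E
        return s[x] if x in F else x

    EF1 = {(f, x) for f in F for x in EF0 if r[f] == src(x)}
    return EF0, EF1
```

Running this on Example 4.1.2 (F = {e1, e2}) returns EF0 = {e1, e2, v1, v2} and EF1 = {(e1, e2), (e1, v1), (e2, v2)}, matching the computation above.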

The following lemma is from [AR, Lemma 1].

Lemma 4.1.3. Let E be an acyclic graph. Then, for any finite subset F of E 1 , the
graph EF is acyclic.

Proof. By definition, any vertex v ∈ EF0 is a sink unless it is of the form v = e ∈ F.
Since r((x, y)) = y (where (x, y) ∈ EF1), any cycle in EF must be of the form
(e1, e2)(e2, e3) · · · (en, e1), where e1, e2, e3, . . . , en ∈ F. However, (e, f) is an edge in
EF only if r(e) = s(f) in E, by definition. Thus e1e2e3 . . . en must be a cycle in E.
Therefore, if E is acyclic then EF must also be acyclic.

The homomorphism φ : LK (EF ) → LK (E) defined in the following proposition


(from [AR, Proposition 1]) is integral to our definition of B(X).

Proposition 4.1.4. Let E be an arbitrary graph and let F be a finite subset of E 1 .


Then there is an algebra homomorphism φ : LK (EF ) → LK (E) with the following
properties:

(i) F ∪ F∗ ⊆ Im(φ) (where F∗ = {e∗ : e ∈ F});

(ii) r(F) ⊆ Im(φ); and

(iii) if w is not a sink in E and s−1E(w) ⊆ F, then w ∈ Im(φ).

Proof. We begin by defining φ : LK (EF ) → LK (E) on the generators of EF as


follows:



 ee P

 if w = e ∈ F
φ(w) = w − f ∈F,s(f )=w f f ∗ if w ∈ WF


w if w ∈ ZF ,



 ef f P

 if h = (e, f ), f ∈ F
φ(h) = e − f ∈F,s(f )=r(e) ef f ∗ if h = (e, r(e)), r(e) ∈ WF


e if h = (e, r(e)), r(e) ∈ ZF ,

and
φ(h∗ ) = (φ(h))∗ for all h∗ ∈ (E 1 )∗

Extend φ linearly and multiplicatively. It is a straightforward yet tedious process
to check that φ preserves the defining relations of LK(EF). As a sample of
the calculations required, we will check that the (CK1) relation holds for hi, hj ∈ EF1,
in the case hi = (ei, r(ei)), hj = (ej, r(ej)) with r(ei), r(ej) ∈ WF. We want to
check that φ(hi∗)φ(hj) = δijφ(r(hi)) = δijφ(r(ei)), since r(hi) = r(ei) by definition.
Thus
φ(hi∗)φ(hj) = (ei∗ − Σ_{fi∈F, s(fi)=r(ei)} fifi∗ei∗)(ej − Σ_{fj∈F, s(fj)=r(ej)} ejfjfj∗)

= ei∗ej − Σ_{fj∈F, s(fj)=r(ej)} ei∗ejfjfj∗ − Σ_{fi∈F, s(fi)=r(ei)} fifi∗ei∗ej
  + (Σ_{fi∈F, s(fi)=r(ei)} fifi∗) ei∗ej (Σ_{fj∈F, s(fj)=r(ej)} fjfj∗).

Note that ei∗ej appears in every term in the above expression, and it simplifies
to δij r(ei) by the (CK1) relation in LK(E). Note also that the (CK1) relation
simplifies the last term in the above expression to

δij (Σ_{fi∈F, s(fi)=r(ei)} fifi∗)(Σ_{fi∈F, s(fi)=r(ei)} fifi∗) = δij Σ_{fi∈F, s(fi)=r(ei)} fifi∗.

Thus the above expression simplifies to

φ(hi∗)φ(hj) = δij r(ei) − 2δij Σ_{fi∈F, s(fi)=r(ei)} fifi∗ + δij Σ_{fi∈F, s(fi)=r(ei)} fifi∗
= δij (r(ei) − Σ_{fi∈F, s(fi)=r(ei)} fifi∗)
= δij φ(r(ei))

as required. Similar calculations can be made for the other Leavitt path algebra
relations, and for each subcase contained within. We are now ready to show that
properties (i), (ii) and (iii) hold for our definition of φ.

To show that F ∪ F∗ ⊆ Im(φ), it suffices to show that every f ∈ F is contained
in Im(φ), since if f ∈ Im(φ) then f∗ ∈ Im(φ), by definition. If r(f) is a sink,
then r(f) ∈ r(F)\s(F) = ZF, and so f = φ((f, r(f))). Now suppose that ∅ ≠
s−1E(r(f)) ⊆ F, so that r(f) emits edges only into F. Let s−1E(r(f)) = {g1, . . . , gn},
which is a finite set since each gi ∈ F and F is finite. For each gi ∈ s−1E(r(f)) we
have fgigi∗ = φ((f, gi)), and so, applying the (CK2) relation, we have

f = f r(f) = f (Σi gigi∗) = Σi fgigi∗ = Σi φ((f, gi)) ∈ Im(φ).

Now suppose that r(f) is not a sink and emits edges only into E1\F. This
again implies that r(f) ∈ r(F)\s(F) = ZF and so f = φ((f, r(f))). Thus the
only remaining case is that r(f) emits edges into both F and E1\F. In this case,
r(f) ∈ r(F) ∩ s(F) ∩ s(E1\F) = WF. Let {g1, . . . , gm} be the subset of edges in F
for which s(gi) = r(f). As above, we have fgigi∗ = φ((f, gi)) for each gi. Thus

f = Σi fgigi∗ + (f − Σi fgigi∗) = Σi φ((f, gi)) + φ((f, r(f))) ∈ Im(φ)

since r(f) ∈ WF. Thus we have established property (i).


Now suppose v = r(f ) for some f ∈ F . Then v = f ∗ f ∈ Im(φ) by (i), estab-
lishing property (ii). Finally, suppose that w is a vertex that is not a sink in E and
s−1 ∗
E (w) = {f1 , . . . , fn } ⊆ F . Then fi fi ∈ Im(φ) for each fi , by (i), and so the (CK2)

relation gives w = i fi fi∗ ∈ Im(φ), establishing property (iii).


P

We can apply Theorem 2.2.15 to prove the following lemma.



Lemma 4.1.5. Let E be a graph, let F be a finite subset of E 1 and let φ : LK (EF ) →
LK (E) be the homomorphism defined in Proposition 4.1.4. If E is acyclic, then φ
is a monomorphism.

Proof. Recall that, for each w ∈ EF0, we have φ(w) = ee∗ if w = e ∈ F, φ(w) = w if
w ∈ ZF and φ(w) = w − Σ_{f∈F, s(f)=w} ff∗ if w ∈ WF. For the former two cases, it is
clear that φ(w) ≠ 0; for the latter case, recall that WF = r(F) ∩ s(F) ∩ s(E1\F),
so that w emits at least one edge that is in E1\F and thus φ(w) ≠ 0 by the
(CK2) relation. Therefore φ(v) ≠ 0 for every vertex v ∈ EF0. If E is acyclic, then
Lemma 4.1.3 gives that EF is acyclic, so it is trivially true that φ maps each cycle
without exits to a non-nilpotent homogeneous element of nonzero degree. Thus, by
Theorem 2.2.15, φ is a monomorphism.

Having defined our homomorphism φ, we are almost ready to construct the
K-subalgebra B(X) of LK(E) defined in Definition 4.1.6, which will play an important
role in the proof of Theorem 4.2.3. We first have a few preliminary definitions.

Let E be an arbitrary graph and let X = {a1, . . . , an} be a finite subset of
nonzero elements of LK(E). By Lemma 2.1.8, we can write each ar in the form

ar = Σ_{i=1}^{s(r)} kri vri + Σ_{j=1}^{t(r)} lrj prj qrj∗,

where each kri, lrj is a nonzero element of K, each vri ∈ E0 and each prj, qrj ∈ E∗.
Additionally, for each j ∈ {1, . . . , t(r)}, at least one of prj or qrj has length 1 or
greater (since the case in which both paths have zero length is covered in the first
sum).

Let F denote the set of edges that appear in the representation of some prj or
qrj for 1 ≤ j ≤ t(r), 1 ≤ r ≤ n. Furthermore, let S be the set of vertices

S = {vr1, . . . , vrs(r) : 1 ≤ r ≤ n}.

Thus F , F ∗ and S are the sets of all edges and vertices, respectively, that appear in
the representation of our elements in X. Note that both F and S must be finite.

We now partition S into four disjoint subsets as follows:

S1 = S ∩ r(F),

and, defining T = S\S1,

S2 = {v ∈ T : s−1E(v) ⊆ F and s−1E(v) ≠ ∅},
S3 = {v ∈ T : s−1E(v) ∩ F = ∅}, and
S4 = {v ∈ T : s−1E(v) ∩ F ≠ ∅ and s−1E(v) ∩ (E1\F) ≠ ∅}.

In other words, S1 is the set of all vertices in S that are the range of some edge in
F . For those vertices in S that are not the range of some edge in F , we then have
three cases: vertices that emit edges only into F , vertices that emit no edges into
F , and vertices that emit edges into both F and E 1 \F ; these three cases make up
the subsets S2 , S3 and S4 , respectively.
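The partition of S is likewise mechanical to compute. A Python sketch, using an illustrative dictionary encoding of the graph (the helper name partition_S is not from the text):

```python
def partition_S(S, E1, s, r, F):
    """Split S into S1, S2, S3, S4 as above.
    s, r: dicts mapping each edge in E1 to its source/range vertex."""
    rF = {r[e] for e in F}

    def emitted(v, edges):  # edges in `edges` with source v, i.e. s^{-1}(v) ∩ edges
        return {e for e in edges if s[e] == v}

    S1 = {v for v in S if v in rF}                         # ranges of edges in F
    T = S - S1
    S2 = {v for v in T if emitted(v, E1) and emitted(v, E1) <= F}
    S3 = {v for v in T if not emitted(v, F)}               # includes sinks
    S4 = {v for v in T if emitted(v, F) and emitted(v, E1 - F)}
    return S1, S2, S3, S4
```

On the graph of Example 4.1.2 with F = {e1, e2} and S = {v0, v1, v3}, this gives S1 = {v1}, S2 = {v0} (v0 emits only e1 ∈ F), S3 = {v3} (a sink) and S4 = ∅.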
Finally, let EF be the graph corresponding to our set of edges F , as defined in
Definition 4.1.1, and let φ : LK (EF ) → LK (E) be the homomorphism defined in the
proof of Proposition 4.1.4. We are now ready to construct our subalgebra B(X) of
LK (E), as defined in [AR, Definition 3].

Definition 4.1.6. Let E be an arbitrary graph and let X = {a1 , . . . , an } be any


finite subset of nonzero elements of LK (E). Let S3 , S4 and φ be as defined above.
Define B(X) to be the K-subalgebra of LK (E) generated by the set Im(φ) ∪ S3 ∪ S4 ,
so that
B(X) = ⟨Im(φ), S3, S4⟩.

We finish this section by describing several important properties of our subalgebra
B(X), in a result from [AR, Proposition 2]. In particular, we show that LK(E)
is the directed union of such subalgebras.

Proposition 4.1.7. Let E be an arbitrary graph and let X = {a1 , . . . , an } be any


finite subset of nonzero elements of LK (E). Let F, S3 , S4 and φ be as defined above.
For w ∈ S4, let uw denote the (nonzero) element w − Σ_{f∈F, s(f)=w} ff∗. Then

(i) X ⊆ B(X);

(ii) B(X) = Im(φ) ⊕ (⊕_{vi∈S3} Kvi) ⊕ (⊕_{wj∈S4} Kuwj);

(iii) The collection {B(X) : X ⊆ LK(E), X finite} is an upward-directed set of
subalgebras of LK(E); and

(iv) LK(E) = lim−→_{X⊆LK(E), X finite} B(X).

Proof. To prove (i), recall that the set X is generated by the subsets F , F ∗ and
S = S1 ∪ S2 ∪ S3 ∪ S4 , as defined above. By Proposition 4.1.4, we have F ∪ F ∗ ⊆
Im(φ) ⊆ B(X) (property (i)), S1 ⊆ r(F ) ⊆ Im(φ) ⊆ B(X) (property (ii)) and
S2 ⊆ Im(φ) ⊆ B(X) (property (iii)). Finally, S3 ∪ S4 ⊆ B(X), by definition, and so
X ⊆ B(X), as required.

To prove (ii), first note that since S3 ⊆ E0, it is a set of pairwise orthogonal
idempotents, and so Σ_{vi∈S3} Kvi = ⊕_{vi∈S3} Kvi. Furthermore, the set {uwj : wj ∈
S4} is also a set of pairwise orthogonal idempotents, as follows: let wi, wj ∈ S4.
Then

uwi uwj = (wi − Σ_{fi∈F, s(fi)=wi} fifi∗)(wj − Σ_{fj∈F, s(fj)=wj} fjfj∗)
= δij wi − 2δij Σ_{fi∈F, s(fi)=wi} fifi∗ + δij (Σ_{fi∈F, s(fi)=wi} fifi∗)(Σ_{fi∈F, s(fi)=wi} fifi∗)
= δij (wi − Σ_{fi∈F, s(fi)=wi} fifi∗)
= δij uwi,

as required. Thus Σ_{wj∈S4} Kuwj = ⊕_{wj∈S4} Kuwj.
We now show that the sum Im(φ) + (⊕_{vi∈S3} Kvi) + (⊕_{wj∈S4} Kuwj) is direct.
We begin by showing that (⊕_{vi∈S3} Kvi) ∩ Im(φ) = {0}. Let v ∈ S3. By the
definition of S3 we have v ∉ r(F) ∪ s(F), and so v ∉ WF ∪ ZF. We now show that
v is orthogonal to each element φ(x), where x ∈ EF0 ∪ EF1 ∪ (EF1)∗. If x = e ∈ F,
then v · φ(x) = vee∗ = 0, since v ∉ s(F). If x = w ∈ WF, then v ≠ w (since
v ∉ WF) and so v · φ(x) = v(w − Σ_{f∈F, s(f)=w} ff∗) = 0. If x = w ∈ ZF, then again
v ≠ w (since v ∉ ZF) and so v · φ(x) = vw = 0. Similarly, it is easy to see that
φ(x) · v = 0 for each of the above three cases. Thus v is orthogonal to each element
in φ(EF0). Now suppose h = (x, y) ∈ EF1. Then φ(h) = φ(xhy) = φ(x)φ(h)φ(y) and
φ(h∗) = φ(yh∗x) = φ(y)φ(h∗)φ(x). Since x, y ∈ EF0, v is therefore orthogonal to
φ(h) and φ(h∗). Therefore v is orthogonal to each generator of Im(φ), and so Kvi is
orthogonal to Im(φ) for each vi ∈ S3. Since each vi is an idempotent, we therefore
have (⊕_{vi∈S3} Kvi) ∩ Im(φ) = {0}, as required.
Now we show that (⊕_{wj∈S4} Kuwj) ∩ Im(φ) = {0}. Let w ∈ S4. By the definition
of S4 we have w ∉ r(F), and so again w ∉ WF ∪ ZF. Again, we must show that uw
is orthogonal to each element φ(x), where x ∈ EF0 ∪ EF1 ∪ (EF1)∗. If x = e ∈ F,
then uw · φ(x) = (w − Σ_{f∈F, s(f)=w} ff∗)ee∗ = δw,s(e) ee∗ − δw,s(e) ee∗ee∗ = 0, using the
(CK1) relation. If x = w′ ∈ WF, then w′ ≠ w (since w ∉ WF), and so uw · φ(w′) =
(w − Σ_{f∈F, s(f)=w} ff∗)(w′ − Σ_{f′∈F, s(f′)=w′} f′(f′)∗) = 0. If x = w′ ∈ ZF, then again
w ≠ w′ (since w ∉ ZF) and so uw · φ(x) = (w − Σ_{f∈F, s(f)=w} ff∗)w′ = 0. Similarly,
it is easy to see that φ(x) · uw = 0 for each of the above three cases. Thus, using the
same logic as above, we have that uw is orthogonal to each generator of Im(φ), and
thus Kuwj is orthogonal to Im(φ) for each wj ∈ S4. As shown above, {uwj : wj ∈ S4}
is a set of pairwise orthogonal idempotents, and so (⊕_{wj∈S4} Kuwj) ∩ Im(φ) = {0}.

Now take v ∈ S3 and w ∈ S4. Since S3 ∩ S4 = ∅, we have v ≠ w and so v · uw =
v(w − Σ_{f∈F, s(f)=w} ff∗) = 0 = (w − Σ_{f∈F, s(f)=w} ff∗)v = uw · v. Thus (⊕_{vi∈S3} Kvi) ∩
(⊕_{wj∈S4} Kuwj) = {0}. Therefore the three sets are mutually orthogonal, and so

Im(φ) + (⊕_{vi∈S3} Kvi) + (⊕_{wj∈S4} Kuwj) = Im(φ) ⊕ (⊕_{vi∈S3} Kvi) ⊕ (⊕_{wj∈S4} Kuwj),

as required.
Now we need to show that this direct sum is indeed equal to B(X). For ease
of notation, let A = Im(φ) ⊕ (⊕_{vi∈S3} Kvi) ⊕ (⊕_{wj∈S4} Kuwj). It is clear that
Im(φ) ⊆ B(X) and ⊕_{vi∈S3} Kvi ⊆ B(X), by definition. Let w ∈ S4. Then for
each f ∈ F with s(f) = w we have ff∗ = φ(f) ∈ Im(φ), and so uw = w −
Σ_{f∈F, s(f)=w} ff∗ ∈ B(X). Thus A ⊆ B(X). To show that B(X) ⊆ A, it suffices to
show that each of its generating elements is contained in A. It is clear that Im(φ) ⊆ A
and S3 ⊆ A. Furthermore, if w ∈ S4, then w = uw + Σ_{f∈F, s(f)=w} ff∗ ∈ A, since
Σ_{f∈F, s(f)=w} ff∗ ∈ Im(φ), as shown above. Thus B(X) = A, as required.

To show that the collection Z = {B(X) : X ⊆ LK(E), X finite} is an upward-directed
set of subalgebras of LK(E), we need to show that every pair of elements in
Z has an upper bound; that is, for every pair of finite subsets X1, X2 ⊆ LK(E), we
can find a finite subset X3 ⊆ LK(E) such that B(X1) ⊆ B(X3) and B(X2) ⊆ B(X3)
(see Appendix A). Now, if X is finite then the sets F, S3 and S4 are finite, by
construction. Then, as noted earlier, EF is finite, and so LK(EF) is a finitely-generated
K-algebra. Thus Im(φ), and therefore B(X), is a finitely-generated K-algebra
for each finite subset X of LK(E). Let T1, T2 be finite generating sets for B(X1)
and B(X2) respectively, and let T = T1 ∪ T2. Then for each generating element t ∈ T1,
we have t ∈ T ⊆ B(T) (by (i)) and so B(X1) ⊆ B(T). Similarly, B(X2) ⊆ B(T), as
required.

Finally, let M = lim→ B(X) (for ease of notation), where the direct limit is taken
over {X ⊆ LK (E) : X finite}, and suppose that LK (E) ≠ M. Then there must exist
a finite subset X ⊆ LK (E) such that X ⊈ M. However, since X ⊆ B(X) (by (i))
and M is the limit of the upward-directed set of all such subalgebras B(X) (by (iii)),
this is impossible. Thus we have LK (E) = M, as required, completing the proof.

4.2 Regularity Conditions for Leavitt Path Algebras

Recall that a ring R is said to be von Neumann regular if, for every x ∈ R, there
exists y ∈ R such that x = xyx. We now introduce the concept of ‘π-regularity’ and
related variations on this definition.

Definition 4.2.1. Let R be a ring.

(i) R is said to be π-regular if, for every x ∈ R, there exist y ∈ R and n ∈ N
such that x^n = x^n y x^n.

(ii) R is said to be left π-regular (resp. right π-regular) if, for every x ∈ R, there
exist y ∈ R and n ∈ N such that x^n = y x^{n+1} (resp. x^n = x^{n+1} y).

(iii) R is said to be strongly π-regular if it is both left and right π-regular.

It is clear that any ring R that is von Neumann regular is also π-regular since,
taking n = 1, for every x ∈ R there exists a y ∈ R such that x^n = x^n y x^n. However,
the converse is not true. Consider, for example, the ring R = Z/4Z. Now R is
π-regular, since 2̄² = 0̄ = 2̄² · 1̄ · 2̄² and 3̄² = 1̄ = 3̄² · 1̄ · 3̄². However, 2̄ has no
von Neumann regular inverse, since 2̄ȳ2̄ = 4̄ȳ = 0̄ ≠ 2̄ for every ȳ ∈ R, and so R
is not von Neumann regular.
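Because Z/4Z is finite, both definitions can be checked exhaustively. The following is a small illustrative sketch (the encoding of Z/4Z as the residues 0–3 is ours, not part of the thesis):

```python
R = range(4)  # Z/4Z, with all arithmetic taken mod 4

def von_neumann_regular(x):
    # x is von Neumann regular iff x = x*y*x for some y
    return any(x == x * y * x % 4 for y in R)

def pi_regular(x, max_n=4):
    # x satisfies the pi-regularity condition iff x^n = x^n * y * x^n for some n, y
    return any(pow(x, n, 4) == pow(x, n, 4) * y * pow(x, n, 4) % 4
               for n in range(1, max_n + 1) for y in R)

print([x for x in R if not von_neumann_regular(x)])  # [2]: only 2 fails
print(all(pi_regular(x) for x in R))                 # True: Z/4Z is pi-regular
```

The search confirms that 2̄ is the only non-regular element, while taking n = 2 repairs it, exactly as in the computation above.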

Furthermore, if R is a unital strongly π-regular ring then [CY, Lemma 6] tells
us that for every element x ∈ R there exist y ∈ R and n ∈ N such that xy = yx
and x^{n+1} y = x^n = y x^{n+1}. It is then straightforward to show that if R is strongly
π-regular then R is also π-regular (see the proof of Theorem 4.2.3 (v)⇒(ii)). On
the other hand, consider the ring R = EndK (V), where V is a vector space over a
field K with infinite basis {x_i}_{i=1}^∞. It is well-known that R is von Neumann regular
(see for example [Ri, Example (c), p.131]) and is therefore π-regular. However, if we
let f : V → V be the shift transformation defined by f(x_1) = 0 and f(x_{i+1}) = x_i
for i ≥ 1, then we have ker(f) = Kx_1, ker(f²) = Kx_1 ⊕ Kx_2 and in general
ker(f^n) = ⊕_{i=1}^{n} Kx_i. If there were to exist a g ∈ R for which f^n = g f^{n+1}, we
would have ker(f^n) = ker(g f^{n+1}) ⊇ ker(f^{n+1}) = ⊕_{i=1}^{n+1} Kx_i ⊋ ker(f^n), which is impossible.
Thus R is not strongly π-regular, and so in general the property π-regular does not
necessarily imply strongly π-regular.
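The strictly growing kernels ker(f) ⊊ ker(f²) ⊊ · · · that drive this argument can be seen numerically. The sketch below truncates the shift to the span of x_1, …, x_8 purely as an illustration (a finite truncation is nilpotent, so its endomorphism ring is strongly π-regular; the point is only that dim ker(f^n) = n grows without bound, so no single n can serve in f^n = g f^{n+1} for the full infinite-dimensional shift). The coefficient-tuple encoding is our own:

```python
N = 8  # work in span(x_1, ..., x_N); vec[i] holds the coefficient of x_{i+1}

def f(vec):
    # the shift: f(x_1) = 0 and f(x_{i+1}) = x_i, so coefficients move down one slot
    return vec[1:] + (0,)

def f_power(vec, n):
    for _ in range(n):
        vec = f(vec)
    return vec

def kernel_dim(n):
    # for this particular f, ker(f^n) is spanned by basis vectors,
    # so checking basis membership suffices
    basis = [tuple(int(j == i) for j in range(N)) for i in range(N)]
    return sum(all(c == 0 for c in f_power(b, n)) for b in basis)

print([kernel_dim(n) for n in range(6)])  # [0, 1, 2, 3, 4, 5]: dim ker(f^n) = n
```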

The following lemma (from [AR, Lemma 2]) is useful in the context of Leavitt
path algebras.

Lemma 4.2.2. Let R be a ring with local units. Then R is strongly π-regular if and
only if the subring eRe is strongly π-regular, for every nonzero idempotent e ∈ R.

Proof. Suppose that R is strongly π-regular and let x ∈ eRe for some idempotent
e ∈ R. Since x is an element of R, there exist y, z ∈ R such that x^n = y x^{n+1} and
x^m = x^{m+1} z, for some m, n ∈ N. Furthermore, since x ∈ eRe we have x = xe = ex,
and so x^n = e x^n = e(y x^{n+1}) = (eye) x^{n+1}. Thus there exists an element y′ = eye ∈ eRe
for which x^n = y′ x^{n+1}. Similarly, we can find an element z′ = eze ∈ eRe such that
x^m = x^{m+1} z′, and so eRe is strongly π-regular.

Conversely, suppose that eRe is strongly π-regular for every idempotent e ∈ R
and let x ∈ R. Since R has local units, there exists an idempotent f ∈ R such
that x ∈ fRf. Since fRf is strongly π-regular, there exist y, z ∈ fRf for which
x^n = y x^{n+1} and x^m = x^{m+1} z, for some m, n ∈ N. However, since y, z are elements
of R, this implies that R is also strongly π-regular, completing the proof.
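Both directions of the lemma can be observed concretely by brute force in the finite ring R = M₂(GF(2)) with the corner idempotent e = E₁₁; any finite ring is strongly π-regular, which guarantees the searches below terminate. This test rig is our own sketch, not part of the thesis:

```python
from itertools import product

def mul(a, b):  # 2x2 matrix product over GF(2)
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2)) % 2
                       for j in range(2)) for i in range(2))

def power(x, n):
    p = ((1, 0), (0, 1))
    for _ in range(n):
        p = mul(p, x)
    return p

R = [((a, b), (c, d)) for a, b, c, d in product((0, 1), repeat=4)]

def strongly_pi_regular(ring):
    # every x needs n, y with x^n = y x^{n+1} and m, z with x^m = x^{m+1} z
    return all(
        any(power(x, n) == mul(y, power(x, n + 1))
            for n in range(1, 4) for y in ring) and
        any(power(x, m) == mul(power(x, m + 1), z)
            for m in range(1, 4) for z in ring)
        for x in ring)

e = ((1, 0), (0, 0))                     # the idempotent e = E_11
corner = {mul(mul(e, x), e) for x in R}  # the corner ring eRe
print(strongly_pi_regular(R), strongly_pi_regular(corner))  # True True
```

Here the corner eRe has just two elements, 0 and e, mirroring the fact that E₁₁ M₂(K) E₁₁ ≅ K.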

We now proceed to our main result for this section (from [AR, Theorem 1]),
which shows, perhaps surprisingly, that the properties von Neumann regular, π-
regular and strongly π-regular are equivalent for Leavitt path algebras. We also
finally show that LK (E) is locally matricial if and only if E is acyclic, a result
first mentioned in Section 2.2 (see page 56). Here we utilise the subalgebra B(X)
introduced in Section 4.1.

Theorem 4.2.3. Let E be an arbitrary graph. The following statements are equiv-
alent:

(i) LK (E) is von Neumann regular

(ii) LK (E) is π-regular

(iii) E is acyclic

(iv) LK (E) is locally matricial

(v) LK (E) is strongly π-regular.

Proof. (i)⇒(ii): This is immediate, since any von Neumann regular ring is π-regular.

(ii)⇒(iii): Suppose that LK (E) is π-regular and that there exists a cycle c based
at a vertex v in E. Let x = v + c ∈ LK (E). Since LK (E) is π-regular, there exist
y ∈ LK (E) and n ∈ N such that x^n y x^n = x^n. Note that xv = x = vx and so, letting
a = vyv, we have x^n a x^n = x^n (vyv) x^n = x^n y x^n = x^n. Now break a into its graded
components, so that

a = ∑_{i=s}^{t} a_i,

where s, t ∈ Z, a_s ≠ 0, a_t ≠ 0 and deg(a_i) = i for all s ≤ i ≤ t. Now
vav = v(vyv)v = vyv = a, and so ∑_{i=s}^{t} v a_i v = ∑_{i=s}^{t} a_i. Since deg(v) = 0, equating graded
components gives v a_i v = a_i for each s ≤ i ≤ t.


Now, applying the binomial expansion and using the fact that v is an idempotent
and cv = c = vc, we have

x^n = (v + c)^n = ∑_{k=0}^{n} \binom{n}{k} v^{n−k} c^k = v + ∑_{k=1}^{n} \binom{n}{k} c^k,

and so x^n a x^n = x^n expands to

( v + ∑_{k=1}^{n} \binom{n}{k} c^k ) ( ∑_{i=s}^{t} a_i ) ( v + ∑_{k=1}^{n} \binom{n}{k} c^k ) = v + ∑_{k=1}^{n} \binom{n}{k} c^k.   (∗)

Since deg(c) > 0, we have deg(c^k) > 0 for all 1 ≤ k ≤ n, and so the lowest-degree
term on the left-hand side is v a_s v. Since the term of lowest degree on the right-hand
side is v, we have v a_s v = v and thus a_s = v. This implies s = 0, and so we can
write a = ∑_{i=0}^{t} a_i, with a_0 = v. Now suppose that c is a cycle of length m, so that
deg(c^k) = km. With the exception of the first term, every term on the right-hand
side contains a power of c, and so every term on the right-hand side is of degree km,
where 0 ≤ k ≤ n. Note that on the left-hand side, the leftmost terms of each bracket
multiply to give v (∑_{i=0}^{t} a_i) v = ∑_{i=0}^{t} a_i, and so each a_i appears in the expansion
of the left-hand side. Thus, equating terms of equal degree on both sides, we have
that a_i ≠ 0 only if i = km for some 0 ≤ k ≤ n.
We now use induction to establish that a_{km} = f_k(c) for each 0 ≤ k ≤ n, where
f_k(c) is a polynomial in c with integer coefficients. For k = 0, we know that
a_0 = v = c^0, as required. For k = 1, we equate components of degree m on both sides of
(∗), giving

v a_m v + \binom{n}{1} c a_0 + \binom{n}{1} a_0 c = \binom{n}{1} c

and so, since a_0 = v, we have a_m + nc + nc = nc. Thus a_m = −nc, which is
certainly a polynomial in c with integer coefficients. Now suppose l > 1 and suppose
that a_{km} = f_k(c), where f_k(c) is a polynomial in c with integer coefficients, for all
0 ≤ k < l. We now equate terms of degree lm on both sides of (∗), giving

a_{lm} + ∑_{k=1}^{l} \binom{n}{k} ( c^k a_{(l−k)m} + a_{(l−k)m} c^k )
   + ∑_{j,k≥1, j+k≤l} \binom{n}{j} \binom{n}{k} c^j a_{(l−j−k)m} c^k = \binom{n}{l} c^l.

By our induction hypothesis, a_m, . . . , a_{(l−1)m} are all polynomials in c with integer
coefficients and so, rearranging the above equation for a_{lm}, it is clear that a_{lm} is a
polynomial in c with integer coefficients.
So we can conclude that for every nonzero homogeneous component a_i of a, we
have a_i c = c a_i, and so ac = ca. Thus

(v + c)^n = (v + c)^n a (v + c)^n = a (v + c)^n (v + c)^n = a (v + c)^{2n}.

Let i be maximal with respect to the property a_i (v + c)^{2n} ≠ 0. (We know such
an i exists, since a_0 (v + c)^{2n} = (v + c)^{2n} ≠ 0.) Thus the term of maximum degree
of a(v + c)^{2n} is a_i c^{2n}, with degree i + 2nm, while the term of maximum degree of
(v + c)^n is c^n, with degree nm. This contradiction shows that c cannot exist, and so
E must be acyclic.

(iii)⇒(iv): Recall from Definition 2.2.10 that LK (E) is locally matricial if it is
the direct limit of an upward-directed set of subalgebras, each of which is isomorphic
to a finite direct sum of finite-dimensional matrix rings over K. Let {B(X) : X ⊆
LK (E), X finite} be the upward-directed set of subalgebras of LK (E) defined in
Proposition 4.1.7 (iii). By Proposition 4.1.7 (iv), we know the direct limit of this
set is LK (E). Thus, by Proposition 4.1.7 (ii) it suffices to show that
B(X) = Im(φ) ⊕ (⊕_{v_i∈S3} Kv_i) ⊕ (⊕_{w_j∈S4} Ku_{w_j}) is isomorphic to a finite direct sum of
finite-dimensional matrix rings over K for each finite subset X ⊆ LK (E).

First, note that if E is acyclic then EF must be acyclic, by Lemma 4.1.3. Furthermore,
note that the only vertices in EF that are not sinks are those of the form
e ∈ F, and that these vertices only emit edges to their range vertices r(e) or to other
vertices of the form f ∈ F (in the case that ef forms a path in E). Since F is finite,
EF must therefore be row-finite (and finite, as noted earlier). Thus, by Lemma 2.2.9
we have LK (EF) ≅ ⊕_{i=1}^{l} M_{m_i}(K) for some m_1, . . . , m_l ∈ N. Now, by Lemma 4.1.5,
the restricted homomorphism φ̄ : LK (EF) → Im(φ) is an isomorphism, and thus
Im(φ) ≅ ⊕_{i=1}^{l} M_{m_i}(K).

Furthermore, each term in the direct sum ⊕_{v_i∈S3} Kv_i is isomorphic to K ≅
M_1(K), and similarly each term in the direct sum ⊕_{w_j∈S4} Ku_{w_j} is isomorphic to
M_1(K). Thus B(X) is isomorphic to a finite direct sum of finite-dimensional matrix
rings over K for each finite subset X ⊆ LK (E), as required.

(iv)⇒(i): If LK (E) is locally matricial, then every element of LK (E) is contained
in a subring S ≅ ⊕_{i=1}^{l} M_{m_i}(K). It is well known that any ring of this form is von
Neumann regular (see for example [L1], Proposition 4.27), and so every x ∈ LK (E)
has a von Neumann regular inverse.

(iv)⇒(v): As above, if LK (E) is locally matricial then every element x ∈ LK (E)
is contained in a subring S ≅ ⊕_{i=1}^{l} M_{m_i}(K). Now, any ring of this form is a unital
left (and right) artinian ring, and so considering the descending chain of left ideals
Sx ⊇ Sx² ⊇ Sx³ ⊇ . . ., we must have S x^n = S x^{n+1} for some n ∈ N. Thus, since S
is unital, we have x^n ∈ S x^n = S x^{n+1}, and so x^n = y x^{n+1} for some y ∈ S ⊆ LK (E).
Since S is also right artinian, we can similarly show that there exists z ∈ LK (E) such
that x^m = x^{m+1} z for some m ∈ N. Thus LK (E) is strongly π-regular, as required.

(v)⇒(ii): Let x ∈ LK (E). Since LK (E) has local units, x ∈ e LK (E) e for some
idempotent e ∈ LK (E). If LK (E) is strongly π-regular, then by Lemma 4.2.2 we
have that e LK (E) e is strongly π-regular. Since e LK (E) e is unital, we can apply [CY,
Lemma 6], and so there exists an element y ∈ e LK (E) e and n ∈ N such that xy = yx
and x^{n+1} y = x^n = y x^{n+1}. Thus x^n = x^{n+1} y = (x^n) xy = (x^{n+1} y) xy = x^{n+2} y², since
x and y commute. Repeating this process, we get

x^n = x^{n+2} y² = x^{n+3} y³ = . . . = x^{2n} y^n

and so, using xy = yx again, we have x^n = (x^n x^n) y^n = x^n y^n x^n. Since y^n ∈ LK (E),
we have that LK (E) is π-regular, as required.

Example 4.2.4. We now apply Theorem 4.2.3 to our familiar examples of Leavitt
path algebras.

(i) The finite line graph Mn. Since Mn is acyclic, LK (Mn) ≅ M_n(K) is von
Neumann regular, π-regular and strongly π-regular for all n ∈ N. As we are already
aware, Theorem 4.2.3 also confirms that LK (Mn) is locally matricial.

(ii) The rose with n leaves Rn. Since Rn contains n cycles, LK (Rn) ≅ L(1, n) is
not von Neumann regular, π-regular, strongly π-regular or locally matricial for any
n ∈ N.

(iii) The infinite clock graph C∞. Since C∞ is acyclic, LK (C∞) ≅ ⊕_{i=1}^{∞} M_2(K) ⊕
KI22 is von Neumann regular, π-regular, strongly π-regular and, of course, locally
matricial.
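Condition (iii) of Theorem 4.2.3 is purely graph-theoretic, so for a finite graph E the (von Neumann) regularity of LK (E) can be decided by an ordinary depth-first cycle search. A minimal sketch (the adjacency-dictionary encoding of E is our own choice):

```python
def is_acyclic(adj):
    """adj maps each vertex of E to the list of ranges of its outgoing edges."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {v: WHITE for v in adj}

    def dfs(v):
        color[v] = GRAY                  # v is on the current path
        for w in adj[v]:
            if color[w] == GRAY:         # reached an ancestor: closed path found
                return False
            if color[w] == WHITE and not dfs(w):
                return False
        color[v] = BLACK
        return True

    return all(dfs(v) for v in adj if color[v] == WHITE)

# the line graph M_3 versus the rose R_2 (two loops at one vertex):
print(is_acyclic({1: [2], 2: [3], 3: []}), is_acyclic({'v': ['v', 'v']}))  # True False
```

Applied to Example 4.2.4, the line graph passes and the rose fails, matching the theorem.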

4.3 Weakly Regular Leavitt Path Algebras

A ring R is said to be right weakly regular (resp. left weakly regular) if I² =
I for every right (resp. left) ideal I of R. This concept was first introduced by
Ramamurthi in [Ram]. We begin with some general properties of weakly regular
rings before moving on to look at weakly regular Leavitt path algebras. This first
proposition is from [Ram, Proposition 5].

Proposition 4.3.1. Let R be a ring with local units. If R is right weakly regular,
then every two-sided ideal I of R is right weakly regular and the quotient R/I is
right weakly regular. On the other hand, if R contains a two-sided ideal I such that
both I and R/I are right weakly regular, then R is also right weakly regular.

Proof. Suppose that R is right weakly regular. Let I be a two-sided ideal of R and
let J be a right ideal of I. Clearly J² ⊆ J, so it suffices to show that a ∈ J² for any
a ∈ J. Now, aR is a right ideal of R, and so aR = (aR)² = aRaR ⊆ aI, since I is a
two-sided ideal. Furthermore, since R has local units, a = ae for some idempotent
e ∈ R. Thus a = ae ∈ aR = (aR)² = (aR)⁴ ⊆ (aI)² ⊆ J², as required. Now, any
right ideal of R/I is of the form M/I, where M is a right ideal of R containing I.
Thus (M/I)² = M²/I = M/I, and so R/I is right weakly regular.

Now suppose that R contains a two-sided ideal I such that both I and R/I
are right weakly regular. Let J be a right ideal of R and let a ∈ J. Again, let
e be a local unit for a, so that a = ae ∈ aR. Since R/I is right weakly regular
we have (aR/I)² = aR/I, and so there must exist b ∈ (aR)² such that
b + I = a + I, i.e. a − b ∈ I. Since (a − b)R ⊆ I and I is right weakly regular, we
have (a − b) ∈ (a − b)R = ((a − b)R)². Furthermore, since b ∈ (aR)² = aRaR we
have b = ag for some g ∈ RaR, and thus (a − b)R = (ae − ag)R = a(e − g)R ⊆ aR.
Thus a = (a − b) + b ∈ ((a − b)R)² + (aR)² ⊆ (aR)² + (aR)² ⊆ (aR)² ⊆ J². Thus
J ⊆ J² and so R is right weakly regular.
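As a concrete non-example of the definition, the finite commutative ring Z/4Z fails weak regularity: its ideal {0̄, 2̄} satisfies I² = {0̄} ≠ I. The exhaustive check below (with ideals represented as subsets of {0, 1, 2, 3}) is our own sketch:

```python
def ideals_mod4():
    # list every subset of Z/4Z that is an ideal
    elems = range(4)
    found = []
    for mask in range(16):
        s = {x for x in elems if mask >> x & 1}
        if 0 in s and all((a + b) % 4 in s for a in s for b in s) \
                and all(a * r % 4 in s for a in s for r in elems):
            found.append(frozenset(s))
    return found

def square(I):
    # I^2: additive closure of the set of products {ab : a, b in I}
    gens = {a * b % 4 for a in I for b in I}
    J = set(gens)
    changed = True
    while changed:
        changed = False
        for a in list(J):
            for b in gens:
                if (a + b) % 4 not in J:
                    J.add((a + b) % 4)
                    changed = True
    return frozenset(J)

print([sorted(I) for I in ideals_mod4() if square(I) != I])  # [[0, 2]]
```

Only the ideal {0, 2} violates I² = I, so Z/4Z is neither right nor left weakly regular.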

The following proposition gives two useful equivalences to the property that R
is right weakly regular. The equivalence (i) ⇐⇒ (ii) is from [Ram, Proposition 1],
while the equivalence (ii) ⇐⇒ (iii) is from [ARM2, Theorem 3.1].

Proposition 4.3.2. Let R be a ring with local units. The following statements are
equivalent:

(i) R is right weakly regular.

(ii) For all a ∈ R there exists x ∈ RaR such that a = ax.

(iii) For every two-sided ideal I of R, the left R-module R/I is flat.

Proof. (i)⇒(ii): Suppose that R is right weakly regular. Then, for any a ∈ R, we
have aR = aRaR. Since R has local units, a ∈ aR = aRaR, and so there exists
x ∈ RaR such that a = ax.

(ii)⇒(iii): Let I be a two-sided ideal of R. Since R has local units, R is flat as
a left R-module (by Corollary 1.2.16). Thus, viewing I as a submodule of R, by
Proposition 1.2.17 it suffices to show that if Y is a right ideal of R then I ∩ Y R = Y I.
Now Y I ⊆ I and Y I ⊆ Y R, so Y I ⊆ I ∩ Y R. Next suppose that y ∈ I ∩ Y R. By (ii),
there exists x ∈ RyR such that y = yx. Since y ∈ Y R, we have y = yx ∈ Y RRyR.
Since Y is a right ideal, Y RR ⊆ Y. Furthermore, yR ⊆ I, since y ∈ I. Thus
Y RRyR ⊆ Y I and so y ∈ Y I. Therefore I ∩ Y R = Y I, as required.

(iii)⇒(ii): Let a ∈ R. Then RaR is a two-sided ideal of R, and so by (iii) R/RaR
is flat as a left R-module. Thus, by Proposition 1.2.17, we have
RaR ∩ Y R = Y (RaR) for every right ideal Y of R. Specifically, taking
Y = aR, we have RaR ∩ aRR = aRRaR. Since R has local units, there exists
an idempotent e ∈ R for which eae = a = ae². Thus a ∈ RaR ∩ aRR, and so
a ∈ aRRaR ⊆ aRaR. Therefore there exist r_i, s_i ∈ R such that
a = ∑_{i=1}^{n} a r_i a s_i = a ( ∑_{i=1}^{n} r_i a s_i ). Letting x = ∑_{i=1}^{n} r_i a s_i, we have a = ax for x ∈ RaR, as
required.

(ii)⇒(i): Assume that for all a ∈ R there exists x ∈ RaR such that a = ax, and
let I be a right ideal of R. Then, for any b ∈ I, there exist r_i, s_i ∈ R such that
b = b ( ∑_{i=1}^{n} r_i b s_i ) = ∑_{i=1}^{n} b r_i b s_i and so b ∈ I². Since I² ⊆ I, we have I² = I, as
required.
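Criterion (ii) can be watched in action in a small von Neumann regular ring such as M₂(GF(2)), where every a satisfies a = ax for some x in the two-sided ideal RaR (computed here as the additive closure of {ras : r, s ∈ R}). This brute-force rig is our own sketch, not part of the thesis:

```python
from itertools import product

def mul(a, b):  # 2x2 matrix product over GF(2)
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2)) % 2
                       for j in range(2)) for i in range(2))

def add(a, b):
    return tuple(tuple((a[i][j] + b[i][j]) % 2 for j in range(2)) for i in range(2))

R = [((p, q), (r, s)) for p, q, r, s in product((0, 1), repeat=4)]
ZERO = ((0, 0), (0, 0))

def RaR(a):
    # the additive closure of {r*a*s : r, s in R} is the two-sided ideal RaR
    gens = {mul(mul(r, a), s) for r in R for s in R}
    ideal, frontier = {ZERO}, {ZERO}
    while frontier:
        frontier = {add(x, g) for x in frontier for g in gens} - ideal
        ideal |= frontier
    return ideal

print(all(any(a == mul(a, x) for x in RaR(a)) for a in R))  # True
```

Since M₂(GF(2)) is simple, RaR is the whole ring for every nonzero a, so a suitable x always exists; for a = 0 the choice x = 0 works.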

The following proposition from [ARM2, Proposition 3.11] shows that the prop-
erty of being right weakly regular is preserved by subrings eRe and matrix rings.

Proposition 4.3.3. Let R be a ring with local units. The following statements are
equivalent:

(i) R is right weakly regular.

(ii) The subring eRe is right weakly regular for all idempotents e ∈ R.

(iii) The matrix ring Mn (R) is right weakly regular for all n ∈ N.

Proof. (i)⇒(ii): Suppose that R is right weakly regular and that e ∈ R is an
idempotent. Let eae ∈ eRe, where a ∈ R. By Proposition 4.3.2 there exists x ∈ R(eae)R
for which eae = (eae)x. Let x = ∑_{i=1}^{n} b_i (eae) c_i, where each b_i, c_i ∈ R. Since e is an
idempotent, we have eae = (eae)e = ( (eae) ∑_{i=1}^{n} b_i (eae) c_i ) e = (eae) ∑_{i=1}^{n} (e b_i e)(eae)(e c_i e).
Let y = ∑_{i=1}^{n} (e b_i e)(eae)(e c_i e). Thus we have found an element y ∈ (eRe)(eae)(eRe)
for which eae = (eae)y, and so eRe is right weakly regular (by Proposition 4.3.2).

(ii)⇒(i): Let a ∈ R. Since R has local units, a ∈ eRe for some idempotent e ∈ R.
By our assumption, eRe is right weakly regular, and so there exists x ∈ (eRe)a(eRe)
for which a = ax. However, (eRe)a(eRe) ⊆ RaR, so that x ∈ RaR and thus R is
right weakly regular (again by Proposition 4.3.2).

(i)⇒(iii): This follows from the analogous result for unital rings in [Tu, Proposition
20.4(ii)]. We can generalise it to rings with local units by applying Proposition 4.3.2.

(iii)⇒(i): Taking n = 1, we have M_1(R) ≅ R, and so R must be right
weakly regular by our assumption.

Proposition 4.3.3 leads to the following theorem from [ARM2, Theorem 3.12],
which shows that the property ‘right weakly regular’ is Morita invariant.

Theorem 4.3.4. Let R and S be rings with local units that are Morita equivalent.
Then R is right weakly regular if and only if S is right weakly regular.

Proof. Suppose that R is right weakly regular. It suffices to show that eSe is right
weakly regular for every idempotent e ∈ S, since S is then right weakly regular
by Proposition 4.3.3. By Theorem 1.3.7, there exists a surjective Morita context
(R, S, N, M). Since e ∈ S = MN, we have e = ∑_{i=1}^{n} x_i y_i, where each x_i ∈ M
and each y_i ∈ N. Define x = (x_1, . . . , x_n) and y = (y_1, . . . , y_n) so that, in matrix
notation with t denoting transpose, e = x y^t. Note that the element u = y^t x y^t x is
an idempotent in M_n(R), since

u² = (y^t x y^t x)(y^t x y^t x) = y^t (x y^t)(x y^t)(x y^t) x = y^t e³ x = y^t e x = y^t x y^t x = u.

Define the map φ : u M_n(R) u → eSe by φ(uAu) = e (x A y^t) e. (Note that
x A y^t ∈ MRN ⊆ MN = S, since M is a right R-module.) First we must check
that φ is well-defined. Suppose that A, B ∈ M_n(R) with uAu = uBu. Then
φ(uAu) = e(x A y^t)e = e²(x A y^t)e² = x y^t x y^t x A y^t x y^t x y^t = x(uAu)y^t = x(uBu)y^t
= e(x B y^t)e = φ(uBu), as required. Now we show that φ is a ring homomorphism.
Clearly φ is additive. To check the multiplicative property, consider
uAu, uBu ∈ u M_n(R) u. Then

φ(uAu) φ(uBu) = (e x A y^t e)(e x B y^t e)
= e x A y^t e x B y^t e
= e x A (y^t x y^t x) B y^t e
= e x A u B y^t e
= φ(u(AuB)u)
= φ((uAu)(uBu))

as required.

Now we show that φ is injective. Suppose φ(uAu) = e x A y^t e = 0 for some
uAu ∈ u M_n(R) u. Then uAu = (y^t x y^t x) A (y^t x y^t x) = y^t (e x A y^t e) x = 0, and so
ker(φ) = {0}, as required.

Finally, we show that φ is surjective. Consider ese = x y^t s x y^t ∈ eSe, where
s ∈ S. Note that y^t s x is an n × n matrix, and each y_i s x_j ∈ NSM ⊆ NM = R,
since N is a right S-module. Thus y^t s x ∈ M_n(R). Letting C = y^t s x, we have

ese = e(ese)e = e x y^t s x y^t e = e(x C y^t)e = φ(uCu)

and so φ is surjective. Thus φ is an isomorphism, and since M_n(R) is right weakly
regular (by Proposition 4.3.3), eSe is right weakly regular, as required.

We now start to examine weakly regular rings in the context of Leavitt path
algebras. We begin by showing that, for any Leavitt path algebra, the properties
‘right weakly regular’ and ‘left weakly regular’ are in fact equivalent. The proof here
expands on the proof given in [ARM2, Theorem 3.15], (i) ⇐⇒ (iii).

Lemma 4.3.5. Let E be an arbitrary graph. Then LK (E) is right weakly regular if
and only if it is left weakly regular.

Proof. For any element α = k_1 p_1 q_1∗ + · · · + k_n p_n q_n∗ ∈ LK (E), where each k_i ∈ K and
each p_i, q_i ∈ E∗, denote by α∗ the element

α∗ := k_1 q_1 p_1∗ + · · · + k_n q_n p_n∗.

It is easy to see that for any α, β ∈ LK (E) we have (αβ)∗ = β∗α∗. Let I be a right
ideal of LK (E) and define I∗ := {α∗ : α ∈ I}. If a, b ∈ I then a∗ − b∗ = (a − b)∗ ∈ I∗,
since a − b ∈ I. Furthermore, if a ∈ I and x ∈ LK (E) then x a∗ = (a x∗)∗ ∈ I∗, since
a x∗ ∈ I. Thus I∗ is a left ideal of LK (E). Similarly, if I is a left ideal of LK (E)
then I∗ is a right ideal of LK (E).

Suppose that LK (E) is right weakly regular, and consider a left ideal J of LK (E).
Then J∗ is a right ideal of LK (E), and so (J∗)² = J∗. Take an arbitrary element
a ∈ J. Then a∗ = ∑_{i=1}^{n} x_i∗ y_i∗, where each x_i, y_i ∈ J. Thus a = (a∗)∗ = ∑_{i=1}^{n} (x_i∗ y_i∗)∗ =
∑_{i=1}^{n} y_i x_i ∈ J², and so J ⊆ J². Therefore J² = J and so LK (E) is left weakly
regular. A similar argument shows the reverse implication.

We now give an example of a Leavitt path algebra that is right weakly regular.
This example is from [ARM2, Example 3.2(ii)].

Example 4.3.6. Consider the following graph E:

E :   ⟲⟲ •u ──→ •v

that is, E has two vertices u and v, two loops based at u, and an edge from u to v.

Since E satisfies Condition (K), [G2, Theorem 4.2] tells us that every ideal of LK (E)
is graded. Since E is row-finite, for any graded ideal I of LK (E) we have I = I(H),
where H = I ∩ E^0 (by Theorem 3.3.9). Furthermore, H is a hereditary saturated
subset of E^0 (by Lemma 2.2.1), and so the only ideals in LK (E) are those generated
by hereditary saturated subsets of E^0. Specifically, we have precisely three ideals:
0, LK (E) and I = I({v}).

Clearly LK (E)/LK (E) is flat as a left LK (E)-module. Furthermore, LK (E)/0 =
LK (E) is flat by Corollary 1.2.16. Finally, note that P_l(E) = {v}, and so by
Theorem 3.2.11 we have soc(LK (E)) = I. Now, [ARM2, Corollary 2.24] states that
if R is a semiprime ring with local units then R/soc(R) is flat as a left R-module.
Since LK (E) is semiprime (by Proposition 3.2.1), LK (E)/I is flat. Thus we can
apply Proposition 4.3.2 (iii)⇒(i) to obtain that LK (E) is right weakly regular.

Not every Leavitt path algebra is right weakly regular, as the following examples
(from [ARM2, Example 3.3]) illustrate.

Example 4.3.7. Consider the graph

E :   ⟲ •u

consisting of a single vertex u with one loop. We know that LK (E) ≅ K[x, x⁻¹] from
Example 2.1.6. Let J = ⟨1 + x⟩ be the two-sided ideal generated by 1 + x in K[x, x⁻¹].
Now, if J = J² then we would have 1 + x = f(x)(1 + x)² for some f(x) ∈ K[x, x⁻¹].
Since K[x, x⁻¹] is an integral domain, this would give 1 = f(x)(1 + x), which is
impossible because the units of K[x, x⁻¹] are precisely the nonzero scalar multiples
of powers of x. Thus LK (E) is not right (or left) weakly regular.

Now consider the graph

F :   ⟲ •u ──→ •v

obtained by adding a vertex v and an edge from u to v. Letting H = {v}, the
quotient graph satisfies F/H ≅ E, and so LK (F)/I(H) ≅ LK (E) by Theorem 3.3.8.
From above, we know that LK (E) is not right weakly regular, and so
LK (F)/I(H) is not right weakly regular. Thus, by Proposition 4.3.1, LK (F) is not
right weakly regular.

We now begin to work our way towards Proposition 4.3.10, which shows that
any graded ideal of a Leavitt path algebra is itself isomorphic to a Leavitt path
algebra. This result, while being interesting in its own right, will also be useful
when determining which Leavitt path algebras are right weakly regular. To begin,
we need the following definition.

Definition 4.3.8. Let E be an arbitrary graph, let H be a nonempty, hereditary
saturated subset of E^0 and let S ⊆ B_H. We denote by F̃_E(H, S) the collection of all
finite paths α = e_1 . . . e_n (where each e_i ∈ E^1) such that s(α) ∉ H, r(α) ∈ H ∪ S,
and r(e_i) ∉ H ∪ S for i = 1, . . . , n − 1. Informally, F̃_E(H, S) is the set of all finite
paths in E that begin outside H and end in H ∪ S (with only the final edge entering
H ∪ S). Now we define

F_E(H, S) = F̃_E(H, S) \ {e ∈ E^1 : s(e) ∈ S, r(e) ∈ H}.

In other words, F_E(H, S) is the set F̃_E(H, S) with all paths of length one going
directly from S to H removed.
We can use the set F_E(H, S) to construct a new graph H ES. First, we create
a copy of F_E(H, S) and denote this by F̄_E(H, S) = {ᾱ : α ∈ F_E(H, S)}. Then we
define the graph H ES = ((H ES)^0, (H ES)^1, s′, r′) as follows:

(H ES)^0 := H ∪ S ∪ F_E(H, S).

(H ES)^1 := {e ∈ E^1 : s(e) ∈ H} ∪ {e ∈ E^1 : s(e) ∈ S and r(e) ∈ H} ∪ F̄_E(H, S).

For every ᾱ ∈ F̄_E(H, S), s′(ᾱ) = α and r′(ᾱ) = r(α). For the other edges in
(H ES)^1, s′(e) = s(e) and r′(e) = r(e).

Note that for any ᾱ ∈ F̄_E(H, S) we have s′(ᾱ) = α ∈ F_E(H, S) ⊆ (H ES)^0 and
r′(ᾱ) = r(α) ∈ H ∪ S ⊆ (H ES)^0. Similarly, for any other edge e ∈ (H ES)^1 we have
s′(e) ∈ H ∪ S ⊆ (H ES)^0 and r′(e) ∈ H ⊆ (H ES)^0, and so the source and range functions
are well-defined.

We now note some properties of the graph H ES. First, note that H ES contains
the restriction graph

E_H := (H, {e ∈ E^1 : s(e) ∈ H}, r|_{(E_H)^1}, s|_{(E_H)^1}).

Note also that every vertex in S ⊆ (H ES)^0 is an infinite emitter, emitting an infinite
number of edges into H and no other edges. On the other hand, each vertex α ∈
F_E(H, S) ⊆ (H ES)^0 is by definition a source that emits exactly one edge ᾱ, with range
in H ∪ S. Moreover, since H is hereditary, if a cycle c in H ES contains a vertex in
H then all vertices of c must be in H. Thus any cycle in the graph H ES must come
from the restriction graph E_H. These properties will prove useful in the proof of
Proposition 4.3.10. However, we first give an example to illustrate the construction
of H ES.

Example 4.3.9. Consider the following graph E:

E :   e1 : u2 → u1,   e2 : u1 → u2,   u1 ⟹(∞) v,   u2 ⟹(∞) v

where the (∞) symbol indicates that there are infinitely many edges from u1 to v
and from u2 to v. Let H = {v} (which is clearly a hereditary and saturated subset
of E^0), giving B_H = {u1, u2}. Furthermore, let S = B_H. Then F_E(H, S) = {e1, e2},
and so H ES is the graph

H ES :   ē1 : e1 → u1,   ē2 : e2 → u2,   u1 ⟹(∞) v,   u2 ⟹(∞) v

with new source vertices e1 and e2.
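The set F_E(H, S) of Definition 4.3.8 can be computed mechanically for this example by exhaustive path search. In the sketch below (our own encoding), a single representative edge f1, f2 stands in for each of the two infinite families of edges into v; this does not affect which edge names land in F_E(H, S):

```python
edges = {'e1': ('u2', 'u1'), 'e2': ('u1', 'u2'),
         'f1': ('u1', 'v'), 'f2': ('u2', 'v')}   # name -> (source, range)
H, S = {'v'}, {'u1', 'u2'}

def F_E(edges, H, S, max_len=6):
    complete = []
    def dfs(path, end):
        if len(path) == max_len:
            return
        for name, (s, r) in edges.items():
            if s != end:
                continue
            if r in H | S:
                complete.append(path + [name])  # only the final edge enters H u S
            else:
                dfs(path + [name], r)
    for v0 in {s for s, _ in edges.values()}:
        if v0 not in H:                         # s(alpha) must lie outside H
            dfs([], v0)
    # tilde-F minus the length-one paths going from S straight into H
    return sorted(''.join(p) for p in complete
                  if not (len(p) == 1 and edges[p[0]][0] in S
                          and edges[p[0]][1] in H))

print(F_E(edges, H, S))  # ['e1', 'e2']
```

The search returns ['e1', 'e2'], agreeing with F_E(H, S) = {e1, e2} above: the paths from S straight into H are filtered out, and longer candidates never arise because every interior vertex of this graph already lies in H ∪ S.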

Recall from Theorem 3.3.9 that any graded ideal I of LK (E) is generated by
the hereditary saturated subset H = I ∩ E^0 and the set {v^H : v ∈ S}, where
S = {w ∈ B_H : w^H ∈ I}. We denote this by I = I(H,S).

The following proposition is from [ARM2, Proposition 3.7], which is the algebraic
analogue of [DHS, Lemma 1.6] and a generalisation of [AP, Lemma 1.2] to arbitrary
graphs. However, when examining this proposition the author discovered an error
that leaves the proof incomplete. Furthermore, the proof of [DHS, Lemma 1.6] was
discovered to contain a similar error. At the time of writing, these errors are yet
to be resolved. We will mention these problems when they arise in the proof and
show that they can be avoided in the row-finite case (so that [AP, Lemma 1.2] is
still valid).

Proposition 4.3.10. Let E be an arbitrary graph. For any graded ideal I = I(H,S) of
the Leavitt path algebra LK (E), there exists an isomorphism φ : LK (H ES ) → I(H,S) .

Proof. Define φ : LK (H ES) → I(H,S) on the generators of LK (H ES) as follows:

φ(v) = v, if v ∈ H;
φ(v) = v^H, if v ∈ S;
φ(v) = αα∗, if v = α ∈ F_E(H, S) and r(α) ∈ H;
φ(v) = α r(α)^H α∗, if v = α ∈ F_E(H, S) and r(α) ∈ S;

φ(e) = e, if s(e) ∈ H;
φ(e) = e, if s(e) ∈ S and r(e) ∈ H;
φ(e) = α, if e = ᾱ ∈ F̄_E(H, S) and r(α) ∈ H;
φ(e) = α r(α)^H, if e = ᾱ ∈ F̄_E(H, S) and r(α) ∈ S;

φ(e∗) = e∗, if s(e) ∈ H;
φ(e∗) = e∗, if s(e) ∈ S and r(e) ∈ H;
φ(e∗) = α∗, if e = ᾱ ∈ F̄_E(H, S) and r(α) ∈ H;
φ(e∗) = r(α)^H α∗, if e = ᾱ ∈ F̄_E(H, S) and r(α) ∈ S.

Note that, by Proposition 3.3.6, Im(φ) is indeed contained in I(H,S). Extend φ
linearly and multiplicatively. As usual, it can be shown that φ preserves the Leavitt
path algebra relations on LK (H ES). We will check the (CK2) relation, i.e. that
φ(v − ∑_{s′(e)=v} ee∗) = 0 for all regular vertices v ∈ (H ES)^0, as an example. Note that
if v ∈ S then v is an infinite emitter and so the (CK2) relation does not apply.

Case 1: v ∈ H. Note that every edge emitted by v in E^1 is contained in the
restriction graph E_H and is therefore in (H ES)^1. Thus ∑_{s′(e)=v} ee∗ = ∑_{s(e)=v} ee∗, and
so φ(v − ∑_{s′(e)=v} ee∗) = v − ∑_{s(e)=v} ee∗ = 0, by the (CK2) relation in LK (E).

Case 2: v = α ∈ F_E(H, S) with r(α) ∈ H. Then α only emits the edge ᾱ, and
so φ(α − ᾱᾱ∗) = αα∗ − αα∗ = 0.

Case 3: v = α ∈ F_E(H, S) with r(α) ∈ S. Again, α only emits the edge ᾱ, and
so φ(α − ᾱᾱ∗) = α r(α)^H α∗ − (α r(α)^H)(r(α)^H α∗) = 0, since r(α)^H is an idempotent.

Thus the (CK2) relation is preserved by φ.

To show that φ is a monomorphism we apply Theorem 2.2.15. From the definition
of φ it is clear that φ(v) ≠ 0 for each v ∈ (H ES)^0. Furthermore, the only cycles in
H ES come from the restriction graph E_H, as noted above. From the definition of
φ we see that generating elements from E_H are mapped to themselves, so that any
cycle without exits c in LK (H ES) is mapped to itself (but seen as an element in
I(H,S)). Since c is a non-nilpotent homogeneous element of nonzero degree in I(H,S),
φ is therefore a monomorphism by Theorem 2.2.15.

Now we show that φ is an epimorphism. Recall from Proposition 3.3.6 that

I(H,S) = span({αβ∗ : r(α) = r(β) ∈ H} ∪ {α w^H β∗ : r(α) = r(β) = w ∈ S}),

where each α, β ∈ E∗. Thus, to show the surjectivity of φ it is enough to find inverse
images for these generators. Note that for any x ∈ (H ES)^1 we have φ(x∗) = [φ(x)]∗
and so, for any α ∈ E∗, if we can find y ∈ LK (H ES) for which φ(y) = α then
φ(y∗) = α∗. Thus it suffices to find inverse images for elements of the form α and
β r(β)^H with r(α) ∈ H and r(β) ∈ S (noting that, if r(β1) = r(β2) = w ∈ S, then
β1 w^H (β2 w^H)∗ = β1 w^H w^H β2∗ = β1 w^H β2∗, since w^H is an idempotent). Note also that
these α and β are paths in our original graph E, rather than our constructed graph
H ES.

We begin with an arbitrary path α ∈ E∗ with r(α) ∈ H. Let α = f1 . . . fm,
where each fi ∈ E^1. Suppose that s(α) ∈ H, so that s(fi) ∈ H for each i (by the
hereditary nature of H). Thus each fi ∈ (H ES)^1 with φ(fi) = fi (by definition) and so
α = φ(f1) . . . φ(fm) = φ(α).

Now suppose that s(α) ∉ H and suppose r(f1) ∈ H. Then, as above, we have
φ(fi) = fi for i = 2, . . . , m. If s(f1) ∈ S, then f1 ∈ (H ES)^1 and φ(f1) = f1, again
giving α = φ(α). If s(f1) ∉ S, then f1 is a path of length 1 contained in F_E(H, S)
and so f̄1 ∈ (H ES)^1 with φ(f̄1) = f1. Thus α = φ(f̄1 f2 . . . fm). Note that f̄1 f2 . . . fm is
a nonzero element of LK (H ES) since r′(f̄1) = r(f1) = s(f2).

Now we suppose that r(f1) ∉ H. Let n be the smallest integer such that 1 <
n ≤ m and r(fn) ∈ H. (We know that such an n exists since r(fm) ∈ H.) As above,
we have φ(fi) = fi for i = n + 1, . . . , m. However, it is finding the inverse image for
f1 . . . fn that poses a problem. If s(fn) ∈ S then φ(fn) = fn as above, but beyond
this it is not clear how to proceed. In the proof of [ARM2, Proposition 3.7] it is
stated that any edge from a vertex in S must end in a vertex in H, which is true for
the graph H ES but not necessarily true for the original graph E. (See for example
the graph E in Example 4.3.9, where the edges e1, e2 have both source and range in
S = {u1, u2}.) The reliance on this fact renders the remainder of the proof invalid.

On the other hand, the proof of [DHS, Lemma 1.6] appears to get around this
problem by writing f1 . . . fn as a concatenation of subpaths α1 . . . αk , where the
CHAPTER 4. REGULAR AND SELF-INJECTIVE LPAS 151

final edge (and only the final edge) of each α1 , . . . , αk−1 has range in S. Since
s(fi ) ∈/ H for any i = 1, . . . , n (by the minimality of n), each αi ∈ FE (H, S) for
i = 1, . . . , k − 1. Furthermore, either αk ∈ FE (H, S) or αk is a single edge from S to
H, in which case φ(αk ) = αk . The proof asserts that we therefore have either α = φ(ᾱ1 ) . . . φ(ᾱk )φ(fn+1 ) . . . φ(fm ) (in the former case) or α = φ(ᾱ1 ) . . . φ(ᾱk−1 )φ(αk )φ(fn+1 ) . . . φ(fm ) (in the latter case). Aside from the fact that φ(ᾱi ) = αi r(αi )H rather than simply αi , the most significant problem is that ᾱ1 . . . ᾱk is not a nonzero element in LK (H ES ), since it is impossible for two edges β̄1 , β̄2 ∈ F̄E (H, S) ⊆ (H ES )1 to be adjacent. (Recall that for any edge β̄ ∈ F̄E (H, S), we define s(β̄) = β, which
is a source in our graph H ES by definition.) We refer again to Example 4.3.9, in
which e1 , e2 are adjacent edges in our graph E, while e¯1 , e¯2 are not:
[Diagram: The graph E has vertices u1 , u2 and v, with an edge e2 from u1 to u2 , an edge e1 from u2 to u1 , and infinitely many edges (∞) from each of u1 and u2 to v. The graph H ES has vertices u1 , u2 and v together with two new source vertices e1 and e2 , edges ē1 from e1 to u1 and ē2 from e2 to u2 , and infinitely many edges (∞) from each of u1 and u2 to v.]

Indeed, if we let α = e2 e1 f in the above example, where f is one of the (infinite number of) edges from u1 to v, it is not clear what the inverse image of α is. A
similar problem arises when we attempt to find the inverse image of an element of
the form βr(β)H , where r(β) ∈ S.

However, in the case that E is row-finite the proof simplifies greatly and it is possible to show that φ is an epimorphism, as we now show. Note that if E is row-finite there are no breaking vertices and so S = ∅. Thus the set FE (H, S) is simply the set of all paths of positive length α = e1 . . . en for which each ei ∈ E 1 , r(α) ∈ H and s(ei ) ∈/ H for each i = 1, . . . , n. Furthermore, I(H,S) = I(H), which is generated by elements of the form αβ ∗ , with r(α) = r(β) ∈ H. As above, to show that φ is an epimorphism it suffices to find an inverse image for α = f1 . . . fm with r(α) ∈ H. If s(α) ∈ H, then α = φ(α), as was shown in the more general case. Suppose s(α) ∈/ H and let n be the smallest integer such that 1 < n ≤ m and r(fn ) ∈ H. If n < m, then α1 = f1 . . . fn ∈ FE (H, S), while s(fi ) ∈ H for each i = n + 1, . . . , m.

Thus α = α1 fn+1 . . . fm = φ(ᾱ1 )φ(fn+1 ) . . . φ(fm ). If n = m, then α ∈ FE (H, S) and so α = φ(ᾱ). Thus φ is an epimorphism, and therefore an isomorphism, as
required.

While we have only proved that Proposition 4.3.10 holds in the case that E
is row-finite, we will proceed as in [ARM2] and assume that the following results,
some of which rely on Proposition 4.3.10, hold for an arbitrary graph E (unless
stated otherwise). As a side note, [ARM2, Proposition 3.7] states that φ is a graded
isomorphism, which is not necessarily true. To see this, recall that φ(ᾱ) = α for
all ᾱ ∈ F̄E (H, S) with r(α) ∈ H. Now, ᾱ is an element of degree 1 in LK (H ES ), since ᾱ ∈ (H ES )1 , whereas α is an element of degree l(α) in LK (E), and l(α) is not
necessarily 1. However, this observation does not affect any subsequent results.

We now proceed to work our way toward the main theorem of this section, The-
orem 4.3.15. To begin, we give the following useful theorem, which is a combination
of results from Tomforde [To] and Goodearl [G2]. Recall that a ring R is said to be
an exchange ring if, given any element x ∈ R, there exists an idempotent e ∈ xR
such that e = x + s − xs for some s ∈ R. Note that if R is unital then we have
1 − e = 1 − (x + s − xs) = (1 − x)(1 − s) ∈ (1 − x)R, and so this definition is
consistent with the more familiar unital definition.
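To make the definition concrete, the unital form of the condition can be checked exhaustively in a small finite ring. The following sketch is illustrative only (the ring Z/6Z is our own example, not one taken from the thesis); it verifies, for every x, that some idempotent e ∈ xR satisfies 1 − e ∈ (1 − x)R:

```python
# Brute-force check of the (unital) exchange condition in Z/nZ:
# for every x there is an idempotent e in xR with 1 - e in (1 - x)R.
# Illustrative example only; Z/6Z is not drawn from the thesis.

def is_exchange_witness(n, x):
    R = range(n)
    for e in R:
        if (e * e) % n == e \
                and any((x * r) % n == e for r in R) \
                and any((((1 - x) % n) * r) % n == (1 - e) % n for r in R):
            return True
    return False

n = 6
assert all(is_exchange_witness(n, x) for x in range(n))
print("Z/%dZ satisfies the exchange condition" % n)
```

For x = 2 in Z/6Z, for instance, e = 4 is an idempotent in 2R, and 1 − e = 3 lies in (1 − 2)R = 5R.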

Theorem 4.3.11. Let E be an arbitrary graph. The following statements are equiv-
alent:

(i) Every ideal of LK (E) is graded;

(ii) LK (E) is an exchange algebra; and

(iii) E satisfies Condition (K).

Proof. (i) ⇐⇒ (iii) is from [To, Theorem 6.16], while (ii) ⇐⇒ (iii) is from [G2,
Theorem 4.2].

The following proposition is from [ARM2, Proposition 3.8].

Proposition 4.3.12. Let E be an arbitrary graph. If E satisfies Condition (K), then the Leavitt path algebra LK (E) is right weakly regular.

Proof. Let I be a two-sided ideal of LK (E). Since E satisfies Condition (K), we know that I is a graded ideal by Theorem 4.3.11. By Proposition 4.3.10, I is
isomorphic to a Leavitt path algebra, and in particular I has local units. We proceed
by showing that I satisfies condition (ii) of Theorem 1.2.19; that is, for all x ∈ I
there exists f ∈ HomLK (E) (LK (E), I) such that f (x) = x. (Note that we can apply
Theorem 1.2.19 since every Leavitt path algebra is an E 0 -free left LK (E)-module
with basis E 0 – see page 52.)
Fix a ∈ I. Since I has local units, there exists an idempotent e ∈ I for which
a = ae. Define ρe : LK (E) → I by ρe (x) = xe. Clearly this is a homomorphism
of left LK (E)-modules, and furthermore ρe (a) = ae = a, as required. Thus we can
apply Theorem 1.2.19 to give that LK (E)/I is a flat LK (E)-module. Finally, by
Proposition 4.3.2 we have that LK (E) is right weakly regular.

The following proposition from [ARM2, Proposition 3.9] shows that the converse
of Proposition 4.3.12 is true in the row-finite case.

Proposition 4.3.13. Let E be a row-finite graph. If the Leavitt path algebra LK (E)
is right weakly regular, then the graph E satisfies Condition (K).

Proof. We begin by showing that if LK (E) is right weakly regular then every cycle in
E has an exit. Suppose, by way of contradiction, that there exists a cycle c without
exits in E, and let H be the hereditary saturated closure of the vertices of c. By
[AAPS, Proposition 3.6(iii)] we have I(H) ∼= Mn (K[x, x−1 ]) for some n ∈ N ∪ {∞}.
Now, since LK (E) is right weakly regular, so too is I(H) (by Proposition 4.3.1),
and thus Mn (K[x, x−1 ]) is right weakly regular. Consider E11 ∈ Mn (K[x, x−1 ]),
the matrix unit with 1 in the (1, 1) position and zeros elsewhere. Since E11 is an
idempotent, we have that E11 Mn (K[x, x−1 ])E11 is right weakly regular by Propo-
sition 4.3.3. Note that E11 Mn (K[x, x−1 ])E11 consists of those matrices for which
the only nonzero entry is in the (1, 1) position, and so is isomorphic to K[x, x−1 ].
However, we know that K[x, x−1 ] is not right weakly regular (see Example 4.3.7), a
contradiction, and so E contains no cycles without exits.

Now we show that if LK (E) is right weakly regular then E must satisfy Condition
(K). We proceed in a similar manner to the proof of Lemma 2.3.4: suppose, by way

of contradiction, that there exists a v ∈ E 0 such that CSP (v) = {p}. If p is not a
cycle, it is easy to see that there exists a cycle based at v whose edges are a subset
of the edges of p, contradicting the fact that CSP (v) = {p}. Thus p is a cycle and
so, by the above paragraph, there must exist exits e1 , . . . , em for p.
Let A be the set of all vertices in p. Now r(ei ) ∈/ A for any i = 1, . . . , m, for
otherwise we would have another closed simple path based at v distinct from p. Let
X = {r(ei ) : i = 1, . . . , m} and let H be the hereditary saturated closure of X.
Recall the definition of Gn (X) from Lemma 1.4.9. Suppose that A ∩ H ≠ ∅, and let n be the minimum natural number for which A ∩ Gn (X) ≠ ∅.
Let w ∈ A ∩ Gn (X) and suppose that n > 0. By the minimality of n, we have
w ∈/ Gn−1 (X). Thus, by the definition of Gn (X), w must be a regular vertex and
r(s−1 (w)) ⊆ Gn−1 (X), so that w only emits edges into Gn−1 (X). Since w is a
vertex in p, there must exist an edge f such that s(f ) = w and r(f ) ∈ A. Thus
r(f ) ∈ A ∩ Gn−1 (X), contradicting the minimality of n. Therefore we must have
n = 0, and so w ∈ G0 (X) = T (X) (by definition). Thus, for some i = 1, . . . , m,
there is a path q from r(ei ) to w. Since w is in the cycle p, and ei is an exit for p,
there must also be a path p0 from w to r(ei ), and so p0 q is a closed path based at w.
However, this implies that |CSP (v)| ≥ 2, a contradiction.
Thus H ∩ A = ∅, and in particular H ≠ E 0 . Since E is row-finite, BH = ∅ and so we have

(E|H)0 = E 0 \H, and (E|H)1 = {e ∈ E 1 : r(e) ∈/ H}.

Since H ∩ A = ∅, we have A ⊆ (E|H)0 . Let p = f1 . . . fk . Since r(fj ) ∈ A for each fj (by definition), we have r(fj ) ∈/ H and so {f1 , . . . , fk } ⊆ (E|H)1 . Thus p can be viewed as a cycle in E|H. Furthermore, for each exit ei of p we have r(ei ) ∈ X ⊆ H by definition, and so ei ∈/ (E|H)1 . Thus p is a cycle without exits in E|H. By
Theorem 3.3.8 we have LK (E|H) ∼= LK (E)/I(H), and since LK (E) is right weakly
regular then so too is LK (E|H), by Proposition 4.3.1. However, this implies that
every cycle in E|H has an exit (from the first part of this proof), a contradiction.
Thus LK (E) satisfies Condition (K), as required.

Using the fact that right weakly regular is a Morita invariant property, we can

use the desingularisation process to extend Proposition 4.3.13 to countable graphs,


as shown in [ARM2, Proposition 3.14].

Proposition 4.3.14. Let E be a countable but not necessarily row-finite graph. Then E satisfies Condition (K) if and only if the Leavitt path algebra LK (E) is right weakly regular.

Proof. Suppose that E satisfies Condition (K). Then by Proposition 4.3.12, LK (E)
is right weakly regular.
Conversely, suppose that LK (E) is right weakly regular. Since E is countable, we
can apply the desingularisation process (see Definition 2.4.1) to obtain a row-finite
desingularisation F of E. By Theorem 2.4.5, LK (E) and LK (F ) are Morita equiva-
lent, and so, by Theorem 4.3.4, we have that LK (F ) is right weakly regular. Since
F is row-finite, this implies that F satisfies Condition (K) (by Proposition 4.3.13).
Thus LK (F ) is an exchange ring by Theorem 4.3.11, and since the exchange prop-
erty is a Morita invariant for rings with local units (see [AGS, Theorem 2.1]), LK (E)
is also an exchange ring. Finally, this implies that E satisfies Condition (K), by
Theorem 4.3.11.

Proposition 4.3.14 is furthermore generalised to arbitrary graphs in [ARM2, Theorem 3.15], following the proof of [G2, Theorem 4.2]. However, this proof requires a
large amount of background theory regarding direct limits of Leavitt path algebras
and so we will omit it here.

We now come to the main theorem of this section (from [ARM2, Theorem 3.15]),
which summarises the results we have seen thus far.

Theorem 4.3.15. Let E be an arbitrary graph. The following statements are equiv-
alent:

(i) The Leavitt path algebra LK (E) is a right weakly regular ring.

(ii) The graph E satisfies Condition (K).

(iii) The Leavitt path algebra LK (E) is a left weakly regular ring.

(iv) The Leavitt path algebra LK (E) is an exchange ring.

(v) Every ideal of LK (E) is graded.

(vi) Every ideal of LK (E) is isomorphic to a Leavitt path algebra.

(vii) Every ideal of LK (E) has local units.

Proof. The equivalences (i) ⇐⇒ (iv) ⇐⇒ (v) are from Theorem 4.3.11, while
(i) ⇐⇒ (iii) comes from Lemma 4.3.5.
(i)⇒(ii): This generalisation of Proposition 4.3.14 comes from [ARM2, Theorem
3.15], as mentioned above.
(ii)⇒(vi): If E satisfies Condition (K) then by [G2, Theorem 3.8] every ideal
of LK (E) is graded. Thus every ideal of LK (E) is isomorphic to a Leavitt path
algebra, by Proposition 4.3.10.
(vi)⇒(vii): This is immediate, since every Leavitt path algebra has local units.
(vii)⇒(i): Suppose that every ideal of LK (E) has local units and consider an
arbitrary element a ∈ LK (E). Since LK (E) has local units, a = eae for some
idempotent e ∈ LK (E), and so a ∈ LK (E)aLK (E). Since LK (E)aLK (E) is a two-
sided ideal, it has local units, and so there exists u ∈ LK (E)aLK (E) for which
a = au. Thus, by Proposition 4.3.2, LK (E) is right weakly regular, as required.

Example 4.3.16. We now apply Theorem 4.3.15 to our familiar examples of Leavitt
path algebras to determine if they are weakly regular.

(i) The finite line graph Mn . Since Mn is acyclic, it satisfies Condition (K), and
so LK (Mn ) ∼= Mn (K) is both left and right weakly regular for all n ∈ N.

(ii) The rose with n leaves Rn . For n = 1, the vertex v in R1 is the base of
exactly one closed simple path, and so R1 does not satisfy Condition (K). Thus
LK (R1 ) ∼= K[x, x−1 ] is not left or right weakly regular, confirming what we saw in
Example 4.3.7. However, for n > 1 the graph Rn does satisfy Condition (K), and
so LK (Rn ) ∼= L(1, n) is both left and right weakly regular.

(iii) The infinite clock graph C∞ . Since C∞ is acyclic, LK (C∞ ) ∼= (⊕∞i=1 M2 (K)) ⊕ KI22 is both left and right weakly regular.
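Since Condition (K) is a purely combinatorial condition, the verdicts in this example can be reproduced mechanically for small finite graphs. The sketch below is illustrative only (the encoding of a graph as a list of (source, range) pairs is our own): it counts closed simple paths based at each vertex, capping path lengths at the number of edges, which is harmless for graphs as small as the rose graphs.

```python
# Hypothetical helper (not from the thesis): enumerate the closed simple
# paths CSP(v) of a finite graph -- closed paths based at v that return to v
# only at the final step.  Condition (K) requires |CSP(v)| != 1 for every v.

def csp(edges, v, cap=None):
    cap = len(edges) if cap is None else cap
    found, stack = [], [((i,), r) for i, (s, r) in enumerate(edges) if s == v]
    while stack:
        path, here = stack.pop()
        if here == v:
            found.append(path)  # first return to v: a closed simple path
        elif len(path) < cap:
            stack.extend((path + (i,), r)
                         for i, (s, r) in enumerate(edges) if s == here)
    return found

def satisfies_condition_K(edges, vertices):
    return all(len(csp(edges, v)) != 1 for v in vertices)

rose = lambda n: [('v', 'v')] * n              # the rose Rn: n loops at v
print(satisfies_condition_K(rose(1), ['v']))   # R1: False (exactly one CSP)
print(satisfies_condition_K(rose(2), ['v']))   # R2: True
line = [('v1', 'v2'), ('v2', 'v3')]            # M3 is acyclic
print(satisfies_condition_K(line, ['v1', 'v2', 'v3']))  # True (vacuously)
```

The length cap can in principle miss long closed simple paths in larger graphs, so this is a sketch rather than a general decision procedure.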

4.4 Self-Injective Leavitt Path Algebras


Recall from Lemma 2.2.4 that every Leavitt path algebra is projective as a left (and
right) module over itself. However, the same is not true of injectivity. Thus it is
natural to ask when a Leavitt path algebra is injective as a left (or right) module over
itself; that is, when it is left (or right) self-injective. In this section we build towards
Theorem 4.4.7, which shows that for any Leavitt path algebra the properties ‘left
self-injective’ and ‘right self-injective’ are equivalent, and furthermore gives graph
theoretic conditions on E that are equivalent to LK (E) being left (and right) self-
injective.

Our first result is from [ARM2, Proposition 4.1].

Proposition 4.4.1. Let E be an arbitrary graph. If LK (E) is left (or right) self-
injective then LK (E) is von Neumann regular and the graph E is acyclic.

Proof. Let e ∈ LK (E) be an idempotent, and recall that LK (E)e is a direct sum-
mand of LK (E) (by Lemma 1.2.3 (i)). Since LK (E) is injective as a left LK (E)-
module, so too is LK (E)e (by Lemma 1.2.12) and thus, by [L2, Theorem 13.1],
we have that EndLK (E) (LK (E)e) is left self-injective. Since EndLK (E) (LK (E)e) ∼= (eLK (E)e)Op (by Lemma 1.2.2), (eLK (E)e)Op is therefore left self-injective. Thus,
by [L2, Corollary 13.2(2)] we have that (eLK (E)e)Op /J((eLK (E)e)Op ) is von Neu-
mann regular. Note that if a ring ROp is von Neumann regular then, for any a ∈ R,
there exists an x ∈ R such that a = a · x · a = axa, and so R is also von Neumann
regular. In particular, we have that eLK (E)e/J(eLK (E)e) is von Neumann regular.
Now, by [J2, Proposition 3.7.1], we have J(eLK (E)e) = eJ(LK (E))e. However,
J(LK (E)) = {0} (by Corollary 3.3.11) and so J(eLK (E)e) = {0}. Thus we have
eLK (E)e/J(eLK (E)e) = eLK (E)e and so eLK (E)e is von Neumann regular for any
idempotent e ∈ LK (E).
Let x ∈ LK (E). Since LK (E) has local units, there exists an idempotent f ∈ LK (E)
such that x ∈ f LK (E)f . Since f LK (E)f is von Neumann regular, there exists
y ∈ f LK (E)f such that x = yxy, and so LK (E) is von Neumann regular. Finally,
by Theorem 4.2.3, E must be acyclic.

In Proposition 4.4.4 we give the somewhat surprising result that if a Leavitt path
algebra LK (E) is left (or right) self-injective then the corresponding graph E must
be row-finite. This is the first time in this thesis we have seen a property of LK (E)
imply row-finiteness on E. To set up this proposition, we first give two preliminary
results. The first of these results requires the following definition.

Suppose that V is a left vector space over a division ring D. The dual vec-
tor space of V , denoted V ∗ , is the set of homomorphisms from V to D; that
is, HomD (V, D). Furthermore, V ∗ is a right vector space over D. The following
theorem, known as the ‘Erdös-Kaplansky Theorem’, gives a formulation for the di-
mension of V ∗ . This theorem is given as Theorem 2 on p. 237 of Jacobson’s [J1] and
Exercise 7.3(d) of Bourbaki’s [Bo].

Theorem 4.4.2 (The Erdös-Kaplansky Theorem). Let V be a left vector space with
infinite basis {bi : i ∈ I} over a division ring D. Then the dimension of V ∗ as a
right vector space over D is given by

dim(V ∗ ) = card(V ∗ ) = card(D)card(I) .

The Erdös-Kaplansky Theorem has the following useful application, which we


will use in the proof of Proposition 4.4.4. Suppose that K is a field and I is an infinite
index set. Using the fact that HomK (K, K) ∼ = K and applying Proposition 1.2.4,
we have K I ∼= HomK (K, K)I ∼= HomK (K (I) , K). Since we can view K (I) as a left
vector space over K (with an infinite basis indexed by I), we have HomK (K (I) , K) =
(K (I) )∗ . Thus, applying the Erdös-Kaplansky Theorem we have

dim(K I ) = dim((K (I) )∗ ) = card(K)card(I) .
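As a concrete instance of this estimate (an illustrative example, not one worked in the thesis), take K = F2 and I countably infinite; then the direct sum and the direct product differ in dimension:

```latex
% Illustrative instance (not from the thesis): K = \mathbb{F}_2, I countable.
\dim_K K^{(I)} = \operatorname{card}(I) = \aleph_0,
\qquad
\dim_K K^{I} = \dim_K \bigl(K^{(I)}\bigr)^{*}
             = \operatorname{card}(K)^{\operatorname{card}(I)}
             = 2^{\aleph_0} > \aleph_0 .
```

In particular the direct sum K (I) is always a proper subspace of the product K I when I is infinite.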

Now let E be an arbitrary graph and let X be a collection of paths in E. We


say that X is a set of independent paths if no path in X is an initial subpath
of any other path in X. The following related lemma has been adapted from the
proof of [ARM2, Proposition 4.4].

Lemma 4.4.3. Let E be an arbitrary graph and let X be a set of independent paths
in E. Then the set of left ideals {LK (E)pp∗ : p ∈ X} is LK (E)-independent – that

is, LK (E)pp∗ ∩ Σq∈X,q≠p LK (E)qq ∗ = {0} for all p ∈ X or, equivalently, that the sum of these left ideals is a direct sum.

Proof. Suppose that rpp∗ = Σq∈X,q≠p rq qq ∗ for some p ∈ X and r, rq ∈ LK (E) (with only a finite number of rq nonzero). Since no path in X is an initial subpath of any other path in X, we have q ∗ p = 0 for all q ∈ X, q ≠ p (by Lemma 2.1.10). Thus rp = rpp∗ p = Σq∈X,q≠p rq qq ∗ p = 0, and so rpp∗ = (rp)p∗ = 0, as required.
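Identifying a path with its sequence of edge labels, the independence condition of Lemma 4.4.3 is exactly the requirement that X be prefix-free, which is easy to test mechanically. The sketch below is illustrative only, and the edge labels are hypothetical:

```python
# Independence of a set of paths (no path is an initial subpath of another)
# is a prefix-freeness condition on the corresponding edge-label strings.
# Illustrative sketch; the tuples below are hypothetical paths.

def is_independent(paths):
    """True if no path in `paths` is a prefix of another (distinct) path."""
    return not any(p != q and q[:len(p)] == p for p in paths for q in paths)

X = [('e1', 'f'), ('e1', 'g'), ('e2',)]    # pairwise prefix-free
Y = [('e1',), ('e1', 'f')]                 # ('e1',) is a prefix of ('e1','f')
print(is_independent(X))  # True
print(is_independent(Y))  # False
```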

With these two preliminary results established we can now prove the following
result from [ARM2, Proposition 4.4].

Proposition 4.4.4. If a Leavitt path algebra LK (E) is left (or right) self-injective,
then the graph E must be row-finite.

Proof. Suppose by way of contradiction that v ∈ E 0 is an infinite emitter. For each


n ∈ N, define Yn = {p ∈ E ∗ : s(p) = v, l(p) = n}, and let αn be the cardinality of Yn .
Note that Y1 = s−1 (v), which has infinite cardinality since v is an infinite emitter.
Let Y = ∪n∈N Yn , so that Y is the set of all paths in E with source v. Then Y has
infinite cardinality σ = sup{αn : n ∈ N}.
Now, elements of vLK (E)v are of the form Σni=1 ki pi qi∗ , where each pi , qi ∈ E ∗
with s(pi ) = v = s(qi ) and each ki ∈ K. Thus each pi , qi ∈ Y , and so, since
the cardinality of the set of all finite subsets of Y is again σ, the K-dimension of
vLK (E)v must be ≤ σ. This observation will prove useful later in the proof.

For the first part of this proof, we wish to find a subset X of Y with cardinality
σ such that the set of left ideals {LK (E)pp∗ : p ∈ X} is LK (E)-independent. First
note that, for each n ∈ N, the set {LK (E)pp∗ : p ∈ Yn } is LK (E)-independent.
To see this, note that all paths in Yn are of length n, so that no path in Yn is an
initial subpath of any other path in Yn . Thus the result follows from Lemma 4.4.3.
Therefore, if αn = |Yn | = σ for some n ∈ N, we can choose X = Yn .
If not, then we must have αn < σ for all n ∈ N. Note that it is not always
the case that αn+1 > αn , since not every path in Yn is necessarily a subpath of a
path in Yn+1 . Thus we define a strictly increasing subsequence {αin : n < ω} as
follows: let αi1 = α1 , and define i2 to be the smallest integer for which αi2 > αi1 .

In general, if αin is chosen for some n, we define in+1 to be the smallest integer for
which αin+1 > αin . Note that, since αi1 = α1 is infinite, this is a sequence of strictly
increasing infinite cardinalities.

We now construct a sequence of sets Tn of independent paths. First, we define


T1 = s−1 (v) = Y1 , which is a set of independent paths since it is a set of distinct
edges. By the minimality of i2 , the number of paths of length i2 − 1, i.e. αi2 −1 ,
must be less than αi2 . Thus, remembering that αi2 is an infinite cardinal, there
must exist a path p2 ∈ Yi2 −1 such that r(p2 ) emits αi2 edges. Let r(p2 ) = v2 and let
s−1 (v2 ) = {e(2)β : β < αi2 }. Now we define

T2 = {p2 e(2)β : β < αi2 } ∪ (T1 \{q : q is an initial subpath of p2 }).

Note that the removal of the set {q : q is an initial subpath of p2 } ensures that T2
is also a set of independent paths.
Now let k ∈ N and suppose that Ti has been defined (and is a set of independent
paths of length at most ij ) for all j ≤ k. As above, there must exist a path
pk+1 ∈ Yik+1 −1 such that r(pk+1 ) emits αik+1 edges. Again, let r(pk+1 ) = vk+1
and let s−1 (vk+1 ) = {e(k+1)β : β < αik+1 }. Now we define

Tk+1 = {pk+1 e(k+1)β : β < αik+1 } ∪ (Tk \{q : q is an initial subpath of pk+1 }).

Again, the removal of the set {q : q is an initial subpath of pk+1 } ensures that Tk+1
is a set of independent paths. Thus Tn is defined (and is a set of independent paths)
for all n ∈ N. Furthermore, for any n ∈ N, {LK (E)pp∗ : p ∈ Tn } is an LK (E)-
independent set of left ideals, by Lemma 4.4.3. Note also that each Tn is a set of
paths of length in or less.
However, it may not necessarily be the case that T = ∪n<ω Tn is a set of independent paths, since for example a path in T2 may still be an initial subpath
of p4 . Thus, for each n ∈ N, we define

Wn = Tn \{q : q is an initial subpath of pm for some m = 2, 3, . . .},


which ensures that W = ∪n<ω Wn is a set of independent paths. To see this, let
qi , qj be two paths in W and let m, n be the smallest integers for which qi ∈ Tm

and qj ∈ Tn , respectively. Suppose, without loss of generality, that m ≤ n. If


m = n, then qi , qj ∈ Tm , which we know is a set of independent paths. So suppose
that m < n. Now qj must be of the form qj = pn e(n)β (for some β < αin ), since otherwise qj ∈ Tn−1 (by the construction of Tn ), contradicting the minimality of n. In particular, qj has length in . Similarly, the minimality of m gives that qi = pm e(m)γ (for some γ < αim ) and so qi has length im . Thus qj cannot be an initial subpath of qi , since m < n implies im < in . Conversely, qi cannot be an initial subpath of qj , since this would imply that qi is an initial subpath of pn , which is impossible by our construction of Wn . Thus W is a set of independent paths.
Note that by construction every path in W has source v, so by Lemma 4.4.3
we have that {LK (E)qq ∗ : q ∈ W } is an LK (E)-independent family of left ideals
contained in LK (E)v. Note also that each Tn , and thus each Wn , has cardinality
αin , and so the cardinality of W = sup{αin : n ∈ N} = σ. Thus letting W = X,
we have found a subset X of Y with cardinality σ such that the set of left ideals
{LK (E)pp∗ : p ∈ X} is LK (E)-independent, as required.

Now, define

S = Σp∈X LK (E)pp∗ = ⊕p∈X LK (E)pp∗ ⊆ LK (E)v.

We know that LK (E)v is a direct summand of LK (E) (by Lemma 2.1.9), and so since
LK (E) is injective as a left LK (E)-module, so too is LK (E)v (by Lemma 1.2.12).
Consider the inclusion map φ : S → LK (E)v and let f ∈ HomLK (E) (S, LK (E)v).
Since LK (E)v is injective, there exists h ∈ HomLK (E) (LK (E)v, LK (E)v) such that
the following diagram commutes:
[Diagram: the exact row 0 → S →φ LK (E)v, with f : S → LK (E)v and h : LK (E)v → LK (E)v making the triangle commute.]

That is, hφ = f . Thus, if we define

φ∗ : HomLK (E) (LK (E)v, LK (E)v) → HomLK (E) (S, LK (E)v)



by φ∗ (g) = gφ for all g ∈ HomLK (E) (LK (E)v, LK (E)v), then φ∗ is an epimorphism.
Then we have

HomLK (E) (S, LK (E)v) ⊇ HomLK (E) (S, S)
= HomLK (E) (⊕p∈X LK (E)pp∗ , ⊕p∈X LK (E)pp∗ )
∼= Πp∈X HomLK (E) (LK (E)pp∗ , ⊕p∈X LK (E)pp∗ ),

the final isomorphism coming from Proposition 1.2.4. Now, for each k ∈ K and a fixed i ∈ X, we can define λ(i)k ∈ HomLK (E) (LK (E)pp∗ , ⊕p∈X LK (E)pp∗ ) by λ(i)k (x) = (wj )j∈X , where wi = kx and wj = 0 for j ≠ i. Thus, setting F (i) = {λ(i)k : k ∈ K}, we have F (i) ∼= K. Therefore

Πp∈X HomLK (E) (LK (E)pp∗ , ⊕p∈X LK (E)pp∗ ) ⊇ Πp∈X F (p) ∼= Πp∈X Kp ,

where each Kp = K. Now, by the Erdös-Kaplansky Theorem, Πp∈X Kp has K-dimension card(K)card(X) = card(K)σ and so, by the above inequalities, HomLK (E) (S, LK (E)v) has K-dimension ≥ card(K)σ . However, by Lemma 1.2.2, HomLK (E) (LK (E)v, LK (E)v) ∼= vLK (E)v, which has K-dimension ≤ σ <
and so E must be row-finite.

Let R be a ring and let n ∈ N. An R-module M is said to have uniform dimension n if M contains a direct sum of n nonzero submodules, but no direct sum of n + 1 nonzero submodules. This notion features in the proof of the following proposition, which
is from [ARM2, Proposition 4.5].

Proposition 4.4.5. Let LK (E) be a left (resp. right) self-injective Leavitt path alge-
bra, and let a be an arbitrary element of LK (E). Then the left ideal LK (E)a (resp.
right ideal aLK (E)) cannot contain an infinite set of LK (E)-independent left (resp.
right) ideals of LK (E).

Proof. If LK (E) is left self-injective, then by Proposition 4.4.4 the graph E must be
row-finite. Let a ∈ LK (E). Write a = Σnj=1 kj pj qj∗ , where pj , qj ∈ E ∗ and kj ∈ K, and let V = {s(pj ), s(qj ) : j = 1, . . . , n}. By Lemma 2.1.12, e = Σv∈V v is a local
unit for a, and in particular we have LK (E)a ⊆ LK (E)e. We show that LK (E)e has
finite uniform dimension.
By way of contradiction, suppose that LK (E)e contains an infinite family of
independent submodules {Ai : i ∈ I}, where I is an infinite index set, and let
S = ⊕i∈I Ai . Note that every element of eLK (E)e is of the form Σmj=1 lj aj b∗j , where s(aj ), s(bj ) ∈ V for each j = 1, . . . , m. Thus eLK (E)e = ⊕v∈V vLK (E)v. For any
v ∈ V , the cardinality of the set of paths of a fixed length n beginning with v must
be finite (since E is row-finite), so the cardinality of the set of all paths of finite
length beginning with v is at most countably infinite. Since vLK (E)v is generated
by finite paths beginning with v, the K-dimension of vLK (E)v is at most countable,
and thus the K-dimension of eLK (E)e is at most countable.
We now proceed as in the proof of Proposition 4.4.4. Using a similar argument,
we can show HomLK (E) (S, LK (E)e) ⊇ Πi∈I Fi , where each Fi ∼= K. Furthermore,
since LK (E) is left self-injective, the direct summand LK (E)e is an injective left
LK (E)-module, and so again we have an epimorphism

φ∗ : HomLK (E) (LK (E)e, LK (E)e) → HomLK (E) (S, LK (E)e).

However, as noted above, HomLK (E) (LK (E)e, LK (E)e) ∼= eLK (E)e has countable K-dimension, while Πi∈I Fi has K-dimension card(K)card(I) (by the Erdös-Kaplansky Theorem), which is uncountably infinite since I is infinite. Thus we have a contradiction, and so LK (E)e, and therefore LK (E)a, has finite uniform dimension.

The following proposition is from [ARM2, Proposition 4.6].

Proposition 4.4.6. For any graph E, if the Leavitt path algebra LK (E) is left (or
right) self-injective, then every infinite path in E contains a line point.

Proof. Suppose that γ is an infinite path in E that contains no line points. Now,
since LK (E) is left self-injective, E must be acyclic, by Proposition 4.4.1. Thus γ
must contain an infinite number of bifurcation vertices {vi : i = 1, 2, 3, . . .}, and so
we can write γ as a concatenation of a series of countably many paths γ1 γ2 γ3 . . .,
where r(γi ) = vi for each i = 1, 2, 3, . . .. Furthermore, let s(γ) = v.

For each n ∈ N, let pn = γ1 γ2 . . . γn γn∗ . . . γ2∗ γ1∗ . Note that pn is an idempotent


in LK (E)v. Suppose that pn = xpn+1 for some x ∈ LK (E) and some n ∈ N. Since
vn is a bifurcation, there must exist an edge fn with s(fn ) = vn such that fn is not

equal to the initial edge of γn+1 . Thus γn+1∗ fn = 0, and so

0 ≠ γ1 γ2 . . . γn fn = pn γ1 γ2 . . . γn fn = xpn+1 γ1 γ2 . . . γn fn = xγ1 γ2 . . . γn+1 γn+1∗ fn = 0,

a contradiction. Thus, in particular we have pn ≠ vpn+1 = pn+1 , so that pn − pn+1 ≠ 0


for all n ≥ 1.
We now show that {pn − pn+1 : n = 1, 2, 3, . . .} is a set of mutually orthogonal
idempotents in LK (E)v. First, consider pj pi with j > i. Then

pj pi = (γ1 γ2 . . . γj γj∗ . . . γ2∗ γ1∗ )(γ1 γ2 . . . γi γi∗ . . . γ2∗ γ1∗ )
= γ1 γ2 . . . γj γj∗ . . . γi+1∗ (γi∗ . . . γ2∗ γ1∗ γ1 γ2 . . . γi )γi∗ . . . γ2∗ γ1∗
= γ1 γ2 . . . γj γj∗ . . . γi+1∗ (vi )γi∗ . . . γ2∗ γ1∗
= pj .

Similarly, if j < i then we have pj pi = pi . In particular, pi+1 pi = pi+1 . Note also


that pi pi = pi for any i ≥ 1. Thus

(pi − pi+1 )2 = pi pi − pi pi+1 − pi+1 pi + pi+1 pi+1 = pi − 2pi+1 + pi+1 = pi − pi+1 ,

while for j > i,

(pj − pj+1 )(pi − pi+1 ) = pj pi − pj pi+1 − pj+1 pi + pj+1 pi+1 = pj − pj − pj+1 + pj+1 = 0.

Note that if j = i + 1, we still have pj pi+1 = pj pj = pj in the expression above, as


required. Similarly, (pj − pj+1 )(pi − pi+1 ) = 0 for j < i.
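The computations above reduce to the single multiplication rule pj pi = pmax(i,j) , so the orthogonality of the differences pn − pn+1 can also be verified purely formally by modelling each pn by its index. The following sketch is illustrative and not part of the proof:

```python
# Formal check (illustrative sketch) that the rule p_j p_i = p_max(i,j)
# makes the differences q(n) = p_n - p_{n+1} mutually orthogonal idempotents.
# Linear combinations of the p_n are modelled as {index: coefficient} dicts.
from itertools import product

def mult(a, b):
    out = {}
    for (i, ci), (j, cj) in product(a.items(), b.items()):
        k = max(i, j)                      # p_j p_i = p_max(i,j)
        out[k] = out.get(k, 0) + ci * cj
    return {k: c for k, c in out.items() if c != 0}

def q(n):                                  # q(n) = p_n - p_{n+1}
    return {n: 1, n + 1: -1}

for i, j in product(range(1, 6), repeat=2):
    expected = q(i) if i == j else {}
    assert mult(q(i), q(j)) == expected
print("q(1), ..., q(5) are mutually orthogonal idempotents")
```

Together with the idempotency of each pn , this is exactly the computation carried out above.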
Thus {pn − pn+1 : n = 1, 2, 3, . . .} is a set of nonzero, mutually orthogonal
idempotents in LK (E)v, and so {LK (E)(pn − pn+1 ) : n = 1, 2, 3, . . .} is a count-
ably infinite independent family of left ideals in LK (E)v. However, this contradicts
Proposition 4.4.5, and so every infinite path in E must contain a line point.

We now come to the main result of this section, which is from [ARM2, Theorem
4.7].

Theorem 4.4.7. Let E be an arbitrary graph and let K be any field. The following
statements are equivalent:

(i) LK (E) is left self-injective.

(ii) LK (E) is right self-injective.

(iii) The graph E is row-finite, acyclic and every infinite path contains a line point.

(iv) LK (E) is semisimple.

Proof. (i)⇒(iii): This follows directly from Propositions 4.4.1, 4.4.4 and 4.4.6.

(iii)⇒(iv): We begin by showing that Pl (E) = E 0 . Suppose, by way of contradiction, that there exists v ∈ E 0 such that v ∈/ Pl (E). Since v is not a line point, v cannot be a sink, and so s−1 (v) ≠ ∅. Now if r(s−1 (v)) ⊆ Pl (E), then the saturated
property of Pl (E) would imply that v ∈ Pl (E), a contradiction. Thus there must
exist some edge e1 ∈ s−1 (v) for which w = r(e1 ) ∈/ Pl (E). Repeating this argument, we can find an edge e2 ∈ s−1 (w) for which x = r(e2 ) ∈/ Pl (E), and so on. Since E is acyclic, we can create an infinite path γ = e1 e2 e3 . . . for which r(ei ) ∈/ Pl (E)
for each ei . However, this contradicts the fact that every infinite path in E must
contain a line point. Thus Pl (E) = E 0 , and so we have I(Pl (E)) = LK (E). By
Theorem 3.2.11, this implies that socl (LK (E)) = LK (E), and so LK (E) is the direct
sum of minimal left ideals. Thus LK (E) is semisimple.

(iv)⇒(i): If LK (E) is semisimple then it is a direct sum of minimal left ideals,


and so socl (LK (E)) = LK (E). Thus, by [L1, Theorem 2.8], every left LK (E)-module
is injective. In particular, LK (E) is left self-injective.

Similarly, we can show that (ii)⇒(iii)⇒(iv)⇒(ii), since Propositions 4.4.1, 4.4.4


and 4.4.6 also hold for when LK (E) is right self-injective. Furthermore, if LK (E) is
semisimple, then socl (LK (E)) = LK (E) = socr (LK (E)) (by Corollary 3.2.2), and so
we can apply [L1, Theorem 2.8] again to yield that LK (E) is right self-injective.

Example 4.4.8. We now apply Theorem 4.4.7 to our familiar examples of Leavitt
path algebras to determine if they are self-injective.

(i) The finite line graph Mn . Since Mn is row-finite, acyclic and contains no
infinite paths, LK (Mn ) ∼= Mn (K) is both left and right self-injective (and also
semisimple) for all n ∈ N.

(ii) The rose with n leaves Rn . For each n ∈ N, Rn contains n cycles and so
LK (Rn ) ∼= L(1, n) is neither left nor right self-injective.

(iii) The infinite clock graph C∞ . Since C∞ is not row-finite, we have that
LK (C∞ ) ∼= (⊕∞i=1 M2 (K)) ⊕ KI22 is neither left nor right self-injective.
Appendix A

Direct Limits

A.1 Direct Limits


A set A is said to be an upward-directed set if there is a partial ordering ≤ on A
such that, for any pair a, b ∈ A, there exists c ∈ A such that a ≤ c and b ≤ c.

Let I be an upward-directed index set and let {Ri : i ∈ I} be a family of


(not necessarily unital) rings. Furthermore, for each pair i, j ∈ I with i ≤ j, let
ϕij : Ri → Rj be a ring homomorphism. We say that (Ri , ϕij )I is a direct system
of rings, indexed by I if, for all i, j, k ∈ I with i ≤ j ≤ k, we have ϕik = ϕjk ϕij ;
that is, the following diagram commutes:
          ϕij
    Ri --------> Rj
       \        /
   ϕik  \      /  ϕjk
         v    v
           Rk

Definition A.1.1. Let (Ri , ϕij )I be a direct system of rings and let R be a ring for
which there exists a ring homomorphism ϕi : Ri → R for each i ∈ I. We say that
(R, ϕi ), or simply R, is a direct limit of the system if the following two conditions
are satisfied:

(i) For each pair i, j ∈ I with i ≤ j, we have ϕi = ϕj ϕij ; that is, the following
diagram commutes:

          ϕij
    Ri --------> Rj
       \        /
    ϕi  \      /  ϕj
         v    v
            R

(ii) If S is a ring for which there exist ring homomorphisms µi : Ri → S such


that µi = µj ϕij for all i, j ∈ I with i ≤ j, then there exists a unique ring
homomorphism µ : R → S such that µi = µϕi for each i ∈ I; that is, the
following diagram commutes:

          ϕi
    Ri --------> R
       \        /
    µi  \      /  µ
         v    v
            S

Now suppose that (R̄, ϕ̄i ) is another ring and set of ring homomorphisms that
satisfy conditions (i) and (ii). Then there exists a unique ring homomorphism
µ : R → R̄ such that ϕ̄i = µϕi for all i ∈ I. Similarly, there exists a unique ring
homomorphism µ′ : R̄ → R such that ϕi = µ′ϕ̄i for all i ∈ I. Thus we have
ϕi = µ′µϕi , giving (by the uniqueness) µ′µ = 1R , and ϕ̄i = µµ′ϕ̄i , giving µµ′ = 1R̄ .
Thus µ is an isomorphism and so R ≅ R̄. A direct limit is therefore unique up to
isomorphism, and so we can unambiguously denote it by lim→(Ri , ϕij ).
Note that if I is an upward-directed index set and {Ri : i ∈ I} is an ascending
chain of rings – that is, Ri ⊆ Ri+1 for each i ∈ I – then defining ϕij to be the
inclusion map from Ri to Rj (for each pair i, j ∈ I with i ≤ j), we have that
(Ri , ϕij )I is a direct system. In this case we usually drop the ϕij from the notation
and write the direct limit of the family as simply lim→ i∈I Ri . It is straightforward
to show that lim→ i∈I Ri = ∪i∈I Ri , the directed union of the family.
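As a concrete case (our illustration, not part of the original text), embedding each matrix ring Mn(K) into Mn+1(K) as the upper-left corner produces an ascending chain whose directed union is the ring of finitely supported infinite matrices:

```latex
% Corner embeddings give an ascending chain M_1(K) \subseteq M_2(K) \subseteq \cdots
\varphi_{n,n+1} : M_n(K) \longrightarrow M_{n+1}(K), \qquad
A \longmapsto \begin{pmatrix} A & 0 \\ 0 & 0 \end{pmatrix}.
% The direct limit is then the directed union:
\varinjlim_{n \in \mathbb{N}} M_n(K)
  \;=\; \bigcup_{n \in \mathbb{N}} M_n(K)
  \;=\; M_\infty(K),
% the (non-unital) ring of \mathbb{N} \times \mathbb{N} matrices over K
% having only finitely many nonzero entries.
```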
We illustrate the concept of a direct limit with the following useful example.
Let R be a ring with local units, so that there exists a set of idempotents I ⊆ R
for which, given any finite subset {x1 , . . . , xn } ⊆ R, there exists e ∈ I such that
xi ∈ eRe for each i = 1, . . . , n. We define a partial ordering ≤ on I by writing
e ≤ f if e ∈ f Rf . (Note that e ≤ f is equivalent to eRe ⊆ f Rf .) Furthermore, I

is an upward-directed set: given any pair e, f ∈ I, there must exist g ∈ I such that
e, f ∈ gRg (by the definition of local units), so that e ≤ g and f ≤ g.

Lemma A.1.2. Let R be a ring with local units. Let I be the set of local units and
let ≤ be the partial ordering defined above. For each pair e, f ∈ I with e ≤ f , define
ϕef : eRe → f Rf and ϕe : eRe → R to be the inclusion ring homomorphisms. Then
R = lim→(eRe, ϕef ).

Proof. For each e, f, g ∈ I with e ≤ f ≤ g we clearly have ϕeg = ϕf g ϕef , and so


(eRe, ϕef )I is a direct system of rings. Furthermore, for each pair e, f ∈ I with
e ≤ f we clearly have ϕe = ϕf ϕef , so that the following diagram commutes:
           ϕef
    eRe --------> fRf
        \        /
     ϕe  \      /  ϕf
          v    v
            R

Thus we have satisfied condition (i) of the direct limit definition.

Now suppose there exists a ring S and ring homomorphisms µe : eRe → S such
that µe = µf ϕef for all e, f ∈ I with e ≤ f . For any x ∈ R, choose e ∈ I such that
x ∈ eRe (such an element exists since I is a set of local units), and let µ(x) = µe (x),
thus defining a map µ : R → S. Note that our choice of e is not unique, so we must
check that this map is well-defined. Suppose there exists f ∈ I with f ≠ e such
that x ∈ f Rf . Since I is an upward-directed set, there exists g ∈ I such that e ≤ g
and f ≤ g, and so

µe (x) = µg (ϕeg (x)) = µg (x) = µg (ϕf g (x)) = µf (x)

and thus µ is well-defined. Furthermore, given x, y ∈ R there exists e ∈ I for which


x+y ∈ eRe and xy ∈ eRe, and so µ(x+y) = µe (x+y) = µe (x)+µe (y) = µ(x)+µ(y)
(since µe is a ring homomorphism) and similarly µ(xy) = µ(x)µ(y). Thus µ is a ring
homomorphism.
Now, for each e ∈ I we have µ(ϕe (x)) = µ(x) = µe (x) for all x ∈ eRe, so that

the following diagram commutes:

           ϕe
    eRe --------> R
        \        /
     µe  \      /  µ
          v    v
            S

Finally, to show that µ is unique, suppose that ν : R → S is also a ring homomor-


phism with νϕe = µe for all e ∈ I. Let x ∈ R and choose f ∈ I such that x ∈ f Rf .
Then
ν(x) = ν(ϕf (x)) = µf (x) = µ(x)

and so ν = µ. Thus we have satisfied condition (ii) of the direct limit definition and
so R = lim→(eRe, ϕef ), up to isomorphism.
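To see Lemma A.1.2 in action (a sketch we supply, not in the original text), take R = M∞(K), the non-unital ring of N × N matrices over K with only finitely many nonzero entries. Its diagonal idempotents form a set of local units whose corners are the finite matrix rings:

```latex
% Local units in R = M_\infty(K): the diagonal idempotents
e_n = \operatorname{diag}(\underbrace{1, \ldots, 1}_{n}, 0, 0, \ldots) \in R,
\qquad e_m \le e_n \text{ whenever } m \le n.
% Each corner is a finite matrix ring:
e_n R e_n \;\cong\; M_n(K),
% and every finitely supported matrix lies in some corner, so Lemma A.1.2 gives
R \;=\; \varinjlim\,(e_n R e_n, \varphi_{e_m e_n})
  \;\cong\; \varinjlim_{n} M_n(K).
```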
Bibliography

[AA1] Abrams, G., and Aranda Pino, G., The Leavitt path algebra of a graph,
J. Algebra 293(2) (2005), 319–334.

[AA2] Abrams, G., and Aranda Pino, G., Purely infinite simple Leavitt path
algebras, J. Pure Appl. Algebra 207(3) (2006), 553–563.

[AA3] Abrams, G., and Aranda Pino, G., The Leavitt path algebras of arbitrary
graphs, Houston J. Math 34(2) (2008), 423–442.

[AAPS] Abrams, G., Aranda Pino, G., Perera, F. and Siles Molina, M., Chain
conditions for Leavitt path algebras, Forum Math. 22(1) (2010), 95–114.

[AAS] Abrams, G., Aranda Pino, G., and Siles Molina, M., Finite-dimensional
Leavitt path algebras, J. Pure Appl. Algebra 209(3) (2007), 753–762.

[AR] Abrams, G., and Rangaswamy, K. M., Regularity conditions for arbitrary
Leavitt path algebras, Algebr. Represent. Theory 13 (2010), 319–334.

[ARM1] Abrams, G., Rangaswamy, K. M., and Siles Molina, M., The socle series
of a Leavitt path algebra, Israel J. Math. (to appear).

[AAMMS] Alberca Bjerregaard, P., Aranda Pino, G., Martı́n Barquero, D., Martı́n
González, C., and Siles Molina, M., Atlas of Leavitt path algebras of small
graphs (preprint).

[AM] Ánh, P.N., and Márki, L., Morita equivalence for rings without identity,
Tsukuba J. Math 11(1) (1987), 1–16.


[AGS] Ara, P., Gómez Lozano, M., and Siles Molina, M., Local rings of exchange
rings, Comm. Algebra 26(12) (1998), 4191–4205.

[AGP] Ara, P., Goodearl, K. R., and Pardo, E., K0 of purely infinite simple
regular rings, K-Theory 26 (2002), 69–100.

[AMP] Ara, P., Moreno, M. A., and Pardo, E., Nonstable K-theory for graph
algebras, Algebr. Represent. Theory 10(2) (2007), 157–178.

[AP] Ara, P., and Pardo, E., Stable rank for graph algebras, Proc. Amer. Math.
Soc. 136(7) (2008), 2375–2386.

[A] Aranda Pino, G., On maximal left quotient systems and Leavitt path alge-
bras, Doctoral Thesis, Department of Algebra, Geometry and Topology,
University of Malaga (2005).

[AMMS1] Aranda Pino, G., Martı́n Barquero, D., Martı́n González, C., and Siles
Molina, M., The socle of a Leavitt path algebra, J. Pure Appl. Algebra
212 (2008), 500–509.

[AMMS2] Aranda Pino, G., Martı́n Barquero, D., Martı́n González, C., and Siles
Molina, M., Socle theory for Leavitt path algebras of arbitrary graphs,
Rev. Mat. Iberoamericana 26(2) (2010), 611–638.

[APS] Aranda Pino, G., Pardo, E., and Siles Molina, M., Exchange Leavitt path
algebras and stable rank, J. Algebra 305(2) (2006), 912–936.

[ARM2] Aranda Pino, G., Rangaswamy, K. M., and Siles Molina, M., Weakly reg-
ular and self-injective Leavitt path algebras over arbitrary graphs, Algebr.
Represent. Theory (to appear).

[BPRS] Bates, T., Pask, D., Raeburn, I., and Szymański, W., The C ∗ -algebras
of row-finite graphs, New York J. Math. 6 (2000), 307–324.

[Be] Bergman, G., On Jacobson radicals of graded rings, unpublished notes.

[Bo] Bourbaki, N., Algèbre Linéaire. Troisième édition, Hermann (1962).



[CY] Camillo, V., and Yu, H.P., Stable range one for rings with many idem-
potents, Trans. Amer. Math. Soc. 347(8) (1995), 3141–3147.

[DHS] Deicke, K., Hong, J. H., and Szymański, W., Stable rank of graph al-
gebras: Type I graph algebras and their limits, Indiana Univ. Math. J.
52(4) (2003), 963–979.

[D] Divinsky, N. J., Rings and Radicals, George Allen and Unwin, London
(1965).

[GS] Garcı́a, J. L., and Simón, J. J., Morita equivalence for idempotent rings,
J. Pure Appl. Algebra 76 (1991), 39–56.

[G1] Goodearl, K. R., Von Neumann Regular Rings, Pitman, London (1979).

[G2] Goodearl, K. R., Leavitt path algebras and direct limits, Contemp. Math.
480 (2009), 165–187.

[J1] Jacobson, N., Lectures in Abstract Algebra, vol. II, Linear Algebra, van
Nostrand (1953).

[J2] Jacobson, N., Structure of Rings, Amer. Math. Soc., Providence, RI


(1964).

[L1] Lam, T. Y., A First Course in Noncommutative Rings, Springer-Verlag,


New York (1991).

[L2] Lam, T. Y., Lectures on Modules and Rings, Springer-Verlag, New York
(1999).

[Le1] Leavitt, W. G., Modules without invariant basis number, Proc. Amer.
Math. Soc. 8 (1957), 322–328.

[Le2] Leavitt, W. G., The module type of a ring, Trans. Amer. Math. Soc. 103
(1962), 113–130.

[M] McCoy, N. H., The Theory of Rings, Macmillan (1964).



[NV] Nǎstǎsescu, C., and Van Oystaeyen, F., Graded and Filtered Rings and
Modules, Springer-Verlag, New York (1979).

[O] Osborne, M. S., Basic Homological Algebra, Springer-Verlag, New York


(2000).

[Ram] Ramamurthi, V. S., Weakly regular rings, Canad. Math. Bull., 16 (1973),
317–321.

[Rae] Raeburn, I., Graph Algebras, CBMS Reg. Conf. Ser. Math. vol. 103,
Amer. Math. Soc., Providence, RI (2005).

[Ri] Ribenboim, P., Rings and Modules, Interscience Publishers, New York
(1969).

[Ro] Rotman, J., An Introduction to Homological Algebra, Second Edition,


Universitext, Springer, New York (2009).

[RS] Raeburn, I., and Szymański, W., Cuntz-Krieger algebras of infinite


graphs and matrices, Trans. Amer. Math. Soc. 356(1) (2004), 39–59.

[To] Tomforde, M., Uniqueness theorems and ideal structure for Leavitt path
algebras, J. Algebra 318 (2007), 270–299.

[Tu] Tuganbaev, A., Rings Close to Regular, Mathematics and its Applica-
tions, 545, Kluwer Academic Publishers, Dordrecht (2002).
Index

acyclic graph, 34
admissible pair, 95
algebra, 9
  Leavitt path, 40
  path, 39
bifurcation, 35
bilinear map, 18
bimodule, 9
  homomorphism, 9
breaking vertex, 95
category, 22
closed path, 52
closed simple path, 52
cofinal
  graph, 35
  vertex, 35
Condition (K), 66
contravariant functor, 23
covariant functor, 23
Cuntz-Krieger relations, 41
Cuntz-Krieger Uniqueness Theorem, 61
cycle, 34
degree, 49
desingularisation, 72
direct
  limit, 167
  product, 10
  sum, 10
  summand, 11
  system of rings, 167
directed graph, 32
directly infinite module, 13
dual vector space, 158
edge, 32
  adjacent, 32
  ghost, 40
endomorphism ring, 8
Erdös-Kaplansky Theorem, 158
exact sequence, 15
exchange ring, 152
exit, 34
extended graph, 40
external direct sum, 10
finite
  graph, 33
  line graph, 42
flat module, 19
free module, 20
functor
  contravariant, 23
  covariant, 23
generator, 26
ghost
  edge, 40
  path, 40
graded
  homomorphism, 4
  ideal, 4
  ring, 3
Graded Uniqueness Theorem, 60
graph, 32
  acyclic, 34
  cofinal, 35
  countable, 33
  directed, 32
  extended, 40
  finite, 33
  finite line, 42
  infinite clock, 44
  quotient, 95
  rose, 43
  row-finite, 33
  single loop, 43
hereditary
  saturated closure, 36
  subset, 35
homomorphism, 8
  bimodule, 9
  graded, 4
ideal
  generated by x, 2
  graded, 4
  nilpotent, 82
idempotent ring, 25
independent paths, 158
infinite
  clock graph, 44
  emitter, 33
  idempotent, 13
initial subpath, 33
injective module, 16
internal direct sum, 11
Jacobson radical, 5
Leavitt path algebra, 40
left
  π-regular ring, 135
  ideal
    maximal, 5
    minimal, 5
    principal, 2
  Loewy length, 109
  Loewy ring, 109
  socle, 81
  socle series, 109
line point, 35
local units, 1
locally matricial, 56
locally projective module, 28
Loewy left ascending socle series, 109
Loewy length, 109
Loewy ring, 109
maximal left ideal, 5
minimal left ideal, 5
module, 7
  directly infinite, 13
  flat, 19
  free, 20
  injective, 16
  locally projective, 28
  nondegenerate, 8
  projective, 15
  self-injective, 16
  semisimple, 82
  U-free, 20
  unital, 8
Morita
  context, 25
    surjective, 25
  equivalent, 24
  invariant, 24
morphisms, 22
natural
  equivalence, 24
  isomorphism, 24
  transformation, 24
nilpotent ideal, 82
nondegenerate
  module, 8
  ring, 82
objects, 22
path, 33
  algebra, 39
  closed, 52
  closed simple, 52
  ghost, 40
π-regular ring, 134
principal left ideal, 2
progenerator, 26
projective module, 15
purely infinite ring, 13
quotient graph, 95
range, 32
  index, 54
regular vertex, 33
right weakly regular ring, 140
ring
  Z-graded, 3
  π-regular, 134
  endomorphism, 8
  exchange, 152
  idempotent, 25
  left π-regular, 135
  Loewy, 109
  nondegenerate, 82
  purely infinite, 13
  right weakly regular, 140
  semiprime, 82
  simple, 2
  strongly π-regular, 135
  unital, 1
  von Neumann regular, 6
rose graph, 43
row-finite graph, 33
saturated subset, 35
self-injective module, 16
semiprime ring, 82
semisimple module, 82
short exact sequence, 15
simple ring, 2
single loop graph, 43
singular vertex, 33
sink, 33
socle, 81
socle series, 109
source
  function, 32
  vertex, 33
strongly π-regular ring, 135
submodule, 7
tensor product, 18
trace, 26
tree, 34
U-basis, 20
U-free module, 20
uniform dimension, 162
unital module, 8
upward-directed set, 167
vertex, 32
  breaking, 95
  cofinal, 35
  regular, 33
  singular, 33
von Neumann regular ring, 6
weakly regular ring, 140