Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
Pull networking fixes from David Miller:
"Just a few fixes trickling in at this point.
1) If we see an attached socket on an skb in the ipv4 forwarding path,
bail. This can happen due to races with FIB rule addition and
deletion, and we should just drop such frames. From Sebastian
Pöhn.
2) pppoe receive should only accept packets destined for this host's
MAC address. From Joakim Tjernlund.
3) Handle checksum unwrapping properly in ppp receive when
it's encapsulated in UDP in some way, fix from Tom Herbert.
4) Fix some bugs in mv88e6xxx DSA driver resulting from the conversion
from register offset constants to mnemonic macros. From Vivien
Didelot.
5) Fix handling of HCA max message size in mlx4 adapters, from Eran
Ben Elisha"
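
For reference, the check behind fix 1 is tiny. The sketch below is
illustrative only: the helper name is hypothetical and the guard is an
assumed shape of the fix, not the verbatim upstream patch. The idea is
that an skb on the IPv4 forwarding path should never carry an attached
socket, so ip_forward() can simply test skb->sk and drop the frame early:

    #include <linux/skbuff.h>

    /* Hypothetical helper, illustrative sketch only (not the upstream
     * patch). A forwarded skb should never have a socket attached; if
     * it does (e.g. after a race with FIB rule add/delete), the safe
     * action is to drop the frame instead of forwarding it.
     */
    static bool forwarded_skb_should_drop(const struct sk_buff *skb)
    {
        return unlikely(skb->sk != NULL);
    }
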
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net:
net/mlx4_core: Fix reading HCA max message size in mlx4_QUERY_DEV_CAP
tcp: add memory barriers to write space paths
altera tse: Error-Bit on tx-avalon-stream always set.
net: dsa: mv88e6xxx: use PORT_DEFAULT_VLAN
net: dsa: mv88e6xxx: fix setup of port control 1
ppp: call skb_checksum_complete_unset in ppp_receive_frame
net: add skb_checksum_complete_unset
pppoe: Lacks DST MAC address check
ip_forward: Drop frames with attached skb->sk
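
The two ppp entries in the shortlog above ("net: add
skb_checksum_complete_unset" and "ppp: call skb_checksum_complete_unset
in ppp_receive_frame") add and use a small helper. A minimal sketch of
what such a helper plausibly looks like, assuming the semantics described
in fix 3 (a device-supplied CHECKSUM_COMPLETE value becomes stale once
the payload is rewritten, e.g. when a PPP frame carried over UDP is
decompressed), is:

    #include <linux/skbuff.h>

    /* Sketch under the assumption above: if the device provided a full
     * packet checksum (CHECKSUM_COMPLETE), invalidate it before the
     * payload is modified so the stack re-verifies the checksum later
     * rather than trusting a now-meaningless value.
     */
    static inline void checksum_complete_unset_sketch(struct sk_buff *skb)
    {
        if (skb->ip_summed == CHECKSUM_COMPLETE)
            skb->ip_summed = CHECKSUM_NONE;
    }

Presumably ppp_receive_frame() calls the real helper just before handing
the frame to the decompressor, which is the point where the original
checksum stops being valid.
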
diff --git a/Documentation/ABI/testing/sysfs-block-dm b/Documentation/ABI/testing/sysfs-block-dm
index 87ca569..f9f2339 100644
--- a/Documentation/ABI/testing/sysfs-block-dm
+++ b/Documentation/ABI/testing/sysfs-block-dm
@@ -23,3 +23,25 @@
Contains the value 1 while the device is suspended.
Otherwise it contains 0. Read-only attribute.
Users: util-linux, device-mapper udev rules
+
+What: /sys/block/dm-<num>/dm/rq_based_seq_io_merge_deadline
+Date: March 2015
+KernelVersion: 4.1
+Contact: dm-devel@redhat.com
+Description: Allow control over how long a request that is a
+ reasonable merge candidate can be queued on the request
+ queue. The resolution of this deadline is in
+ microseconds (ranging from 1 to 100000 usecs).
+ Setting this attribute to 0 (the default) will disable
+ request-based DM's merge heuristic and associated extra
+ accounting. This attribute is not applicable to
+ bio-based DM devices so it will only ever report 0 for
+ them.
+
+What: /sys/block/dm-<num>/dm/use_blk_mq
+Date: March 2015
+KernelVersion: 4.1
+Contact: dm-devel@redhat.com
+Description: Request-based Device-mapper blk-mq I/O path mode.
+ Contains the value 1 if the device is using blk-mq.
+ Otherwise it contains 0. Read-only attribute.
diff --git a/Documentation/CodingStyle b/Documentation/CodingStyle
index 4d4f06d..f4b78ea 100644
--- a/Documentation/CodingStyle
+++ b/Documentation/CodingStyle
@@ -13,7 +13,7 @@
Anyway, here goes:
- Chapter 1: Indentation
+ Chapter 1: Indentation
Tabs are 8 characters, and thus indentations are also 8 characters.
There are heretic movements that try to make indentations 4 (or even 2!)
@@ -56,7 +56,6 @@
break;
}
-
Don't put multiple statements on a single line unless you have
something to hide:
@@ -156,25 +155,25 @@
Do not unnecessarily use braces where a single statement will do.
-if (condition)
- action();
+ if (condition)
+ action();
and
-if (condition)
- do_this();
-else
- do_that();
+ if (condition)
+ do_this();
+ else
+ do_that();
This does not apply if only one branch of a conditional statement is a single
statement; in the latter case use braces in both branches:
-if (condition) {
- do_this();
- do_that();
-} else {
- otherwise();
-}
+ if (condition) {
+ do_this();
+ do_that();
+ } else {
+ otherwise();
+ }
3.1: Spaces
@@ -186,8 +185,11 @@
"struct fileinfo info;" is declared).
So use a space after these keywords:
+
if, switch, case, for, do, while
+
but not with sizeof, typeof, alignof, or __attribute__. E.g.,
+
s = sizeof(struct file);
Do not add spaces around (inside) parenthesized expressions. This example is
@@ -209,12 +211,15 @@
= + - < > * / % | & ^ <= >= == != ? :
but no space after unary operators:
+
& * + - ~ ! sizeof typeof alignof __attribute__ defined
no space before the postfix increment & decrement unary operators:
+
++ --
no space after the prefix increment & decrement unary operators:
+
++ --
and no space around the '.' and "->" structure member operators.
@@ -268,13 +273,11 @@
Chapter 5: Typedefs
Please don't use things like "vps_t".
-
It's a _mistake_ to use typedef for structures and pointers. When you see a
vps_t a;
in the source, what does it mean?
-
In contrast, if it says
struct virtual_container *a;
@@ -372,11 +375,11 @@
exported, the EXPORT* macro for it should follow immediately after the closing
function brace line. E.g.:
-int system_is_up(void)
-{
- return system_state == SYSTEM_RUNNING;
-}
-EXPORT_SYMBOL(system_is_up);
+ int system_is_up(void)
+ {
+ return system_state == SYSTEM_RUNNING;
+ }
+ EXPORT_SYMBOL(system_is_up);
In function prototypes, include parameter names with their data types.
Although this is not required by the C language, it is preferred in Linux
@@ -405,34 +408,34 @@
modifications are prevented
- saves the compiler work to optimize redundant code away ;)
-int fun(int a)
-{
- int result = 0;
- char *buffer;
+ int fun(int a)
+ {
+ int result = 0;
+ char *buffer;
- buffer = kmalloc(SIZE, GFP_KERNEL);
- if (!buffer)
- return -ENOMEM;
+ buffer = kmalloc(SIZE, GFP_KERNEL);
+ if (!buffer)
+ return -ENOMEM;
- if (condition1) {
- while (loop1) {
- ...
+ if (condition1) {
+ while (loop1) {
+ ...
+ }
+ result = 1;
+ goto out_buffer;
}
- result = 1;
- goto out_buffer;
+ ...
+ out_buffer:
+ kfree(buffer);
+ return result;
}
- ...
-out_buffer:
- kfree(buffer);
- return result;
-}
A common type of bug to be aware of it "one err bugs" which look like this:
-err:
- kfree(foo->bar);
- kfree(foo);
- return ret;
+ err:
+ kfree(foo->bar);
+ kfree(foo);
+ return ret;
The bug in this code is that on some exit paths "foo" is NULL. Normally the
fix for this is to split it up into two error labels "err_bar:" and "err_foo:".
@@ -503,9 +506,9 @@
(defun c-lineup-arglist-tabs-only (ignored)
"Line up argument lists by tabs, not spaces"
(let* ((anchor (c-langelem-pos c-syntactic-element))
- (column (c-langelem-2nd-pos c-syntactic-element))
- (offset (- (1+ column) anchor))
- (steps (floor offset c-basic-offset)))
+ (column (c-langelem-2nd-pos c-syntactic-element))
+ (offset (- (1+ column) anchor))
+ (steps (floor offset c-basic-offset)))
(* (max steps 1)
c-basic-offset)))
@@ -612,7 +615,7 @@
Names of macros defining constants and labels in enums are capitalized.
-#define CONSTANT 0x12345
+ #define CONSTANT 0x12345
Enums are preferred when defining several related constants.
@@ -623,28 +626,28 @@
Macros with multiple statements should be enclosed in a do - while block:
-#define macrofun(a, b, c) \
- do { \
- if (a == 5) \
- do_this(b, c); \
- } while (0)
+ #define macrofun(a, b, c) \
+ do { \
+ if (a == 5) \
+ do_this(b, c); \
+ } while (0)
Things to avoid when using macros:
1) macros that affect control flow:
-#define FOO(x) \
- do { \
- if (blah(x) < 0) \
- return -EBUGGERED; \
- } while(0)
+ #define FOO(x) \
+ do { \
+ if (blah(x) < 0) \
+ return -EBUGGERED; \
+ } while(0)
is a _very_ bad idea. It looks like a function call but exits the "calling"
function; don't break the internal parsers of those who will read the code.
2) macros that depend on having a local variable with a magic name:
-#define FOO(val) bar(index, val)
+ #define FOO(val) bar(index, val)
might look like a good thing, but it's confusing as hell when one reads the
code and it's prone to breakage from seemingly innocent changes.
@@ -656,8 +659,8 @@
must enclose the expression in parentheses. Beware of similar issues with
macros using parameters.
-#define CONSTANT 0x4000
-#define CONSTEXP (CONSTANT | 3)
+ #define CONSTANT 0x4000
+ #define CONSTEXP (CONSTANT | 3)
5) namespace collisions when defining local variables in macros resembling
functions:
@@ -809,11 +812,11 @@
For example, if you need to calculate the length of an array, take advantage
of the macro
- #define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
+ #define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
Similarly, if you need to calculate the size of some structure member, use
- #define FIELD_SIZEOF(t, f) (sizeof(((t*)0)->f))
+ #define FIELD_SIZEOF(t, f) (sizeof(((t*)0)->f))
There are also min() and max() macros that do strict type checking if you
need them. Feel free to peruse that header file to see what else is already
@@ -826,19 +829,19 @@
indicated with special markers. For example, emacs interprets lines marked
like this:
--*- mode: c -*-
+ -*- mode: c -*-
Or like this:
-/*
-Local Variables:
-compile-command: "gcc -DMAGIC_DEBUG_FLAG foo.c"
-End:
-*/
+ /*
+ Local Variables:
+ compile-command: "gcc -DMAGIC_DEBUG_FLAG foo.c"
+ End:
+ */
Vim interprets markers that look like this:
-/* vim:set sw=8 noet */
+ /* vim:set sw=8 noet */
Do not include any of these in source files. People have their own personal
editor configurations, and your source files should not override them. This
@@ -915,9 +918,9 @@
place a comment after the #endif on the same line, noting the conditional
expression used. For instance:
-#ifdef CONFIG_SOMETHING
-...
-#endif /* CONFIG_SOMETHING */
+ #ifdef CONFIG_SOMETHING
+ ...
+ #endif /* CONFIG_SOMETHING */
Appendix I: References
diff --git a/Documentation/DocBook/drm.tmpl b/Documentation/DocBook/drm.tmpl
index 03f1985..9765a4c 100644
--- a/Documentation/DocBook/drm.tmpl
+++ b/Documentation/DocBook/drm.tmpl
@@ -1293,7 +1293,7 @@
</para>
<para>
If a page flip can be successfully scheduled the driver must set the
- <code>drm_crtc-<fb</code> field to the new framebuffer pointed to
+ <code>drm_crtc->fb</code> field to the new framebuffer pointed to
by <code>fb</code>. This is important so that the reference counting
on framebuffers stays balanced.
</para>
@@ -3979,6 +3979,11 @@
!Fdrivers/gpu/drm/i915/i915_irq.c intel_runtime_pm_disable_interrupts
!Fdrivers/gpu/drm/i915/i915_irq.c intel_runtime_pm_enable_interrupts
</sect2>
+ <sect2>
+ <title>Intel GVT-g Guest Support(vGPU)</title>
+!Pdrivers/gpu/drm/i915/i915_vgpu.c Intel GVT-g guest support
+!Idrivers/gpu/drm/i915/i915_vgpu.c
+ </sect2>
</sect1>
<sect1>
<title>Display Hardware Handling</title>
@@ -4048,6 +4053,17 @@
!Idrivers/gpu/drm/i915/intel_fbc.c
</sect2>
<sect2>
+ <title>Display Refresh Rate Switching (DRRS)</title>
+!Pdrivers/gpu/drm/i915/intel_dp.c Display Refresh Rate Switching (DRRS)
+!Fdrivers/gpu/drm/i915/intel_dp.c intel_dp_set_drrs_state
+!Fdrivers/gpu/drm/i915/intel_dp.c intel_edp_drrs_enable
+!Fdrivers/gpu/drm/i915/intel_dp.c intel_edp_drrs_disable
+!Fdrivers/gpu/drm/i915/intel_dp.c intel_edp_drrs_invalidate
+!Fdrivers/gpu/drm/i915/intel_dp.c intel_edp_drrs_flush
+!Fdrivers/gpu/drm/i915/intel_dp.c intel_dp_drrs_init
+
+ </sect2>
+ <sect2>
<title>DPIO</title>
!Pdrivers/gpu/drm/i915/i915_reg.h DPIO
<table id="dpiox2">
@@ -4168,7 +4184,7 @@
<sect2>
<title>Buffer Object Eviction</title>
<para>
- This section documents the interface function for evicting buffer
+ This section documents the interface functions for evicting buffer
objects to make space available in the virtual gpu address spaces.
Note that this is mostly orthogonal to shrinking buffer objects
caches, which has the goal to make main memory (shared with the gpu
@@ -4176,6 +4192,17 @@
</para>
!Idrivers/gpu/drm/i915/i915_gem_evict.c
</sect2>
+ <sect2>
+ <title>Buffer Object Memory Shrinking</title>
+ <para>
+ This section documents the interface function for shrinking memory
+ usage of buffer object caches. Shrinking is used to make main memory
+ available. Note that this is mostly orthogonal to evicting buffer
+ objects, which has the goal to make space in gpu virtual address
+ spaces.
+ </para>
+!Idrivers/gpu/drm/i915/i915_gem_shrinker.c
+ </sect2>
</sect1>
<sect1>
diff --git a/Documentation/DocBook/media/v4l/biblio.xml b/Documentation/DocBook/media/v4l/biblio.xml
index 7ff01a2..fdee6b3 100644
--- a/Documentation/DocBook/media/v4l/biblio.xml
+++ b/Documentation/DocBook/media/v4l/biblio.xml
@@ -1,14 +1,13 @@
<bibliography>
<title>References</title>
- <biblioentry id="eia608">
- <abbrev>EIA 608-B</abbrev>
+ <biblioentry id="cea608">
+ <abbrev>CEA 608-E</abbrev>
<authorgroup>
- <corpauthor>Electronic Industries Alliance (<ulink
-url="http://www.eia.org">http://www.eia.org</ulink>)</corpauthor>
+ <corpauthor>Consumer Electronics Association (<ulink
+url="http://www.ce.org">http://www.ce.org</ulink>)</corpauthor>
</authorgroup>
- <title>EIA 608-B "Recommended Practice for Line 21 Data
-Service"</title>
+ <title>CEA-608-E R-2014 "Line 21 Data Services"</title>
</biblioentry>
<biblioentry id="en300294">
diff --git a/Documentation/DocBook/media/v4l/compat.xml b/Documentation/DocBook/media/v4l/compat.xml
index 350dfb3..a0aef85 100644
--- a/Documentation/DocBook/media/v4l/compat.xml
+++ b/Documentation/DocBook/media/v4l/compat.xml
@@ -2491,7 +2491,7 @@
</listitem>
<listitem>
<para>Added <constant>V4L2_EVENT_CTRL_CH_RANGE</constant> control event
- changes flag. See <xref linkend="changes-flags"/>.</para>
+ changes flag. See <xref linkend="ctrl-changes-flags"/>.</para>
</listitem>
</orderedlist>
</section>
diff --git a/Documentation/DocBook/media/v4l/dev-sliced-vbi.xml b/Documentation/DocBook/media/v4l/dev-sliced-vbi.xml
index 7a8bf30..0aec62e 100644
--- a/Documentation/DocBook/media/v4l/dev-sliced-vbi.xml
+++ b/Documentation/DocBook/media/v4l/dev-sliced-vbi.xml
@@ -254,7 +254,7 @@
<row>
<entry><constant>V4L2_SLICED_CAPTION_525</constant></entry>
<entry>0x1000</entry>
- <entry><xref linkend="eia608" /></entry>
+ <entry><xref linkend="cea608" /></entry>
<entry>NTSC line 21, 284 (second field 21)</entry>
<entry>Two bytes in transmission order, including parity
bit, lsb first transmitted.</entry>
diff --git a/Documentation/DocBook/media/v4l/media-ioc-enum-entities.xml b/Documentation/DocBook/media/v4l/media-ioc-enum-entities.xml
index 116c301..5872f8bb 100644
--- a/Documentation/DocBook/media/v4l/media-ioc-enum-entities.xml
+++ b/Documentation/DocBook/media/v4l/media-ioc-enum-entities.xml
@@ -143,86 +143,28 @@
<row>
<entry></entry>
<entry>struct</entry>
- <entry><structfield>v4l</structfield></entry>
+ <entry><structfield>dev</structfield></entry>
<entry></entry>
- <entry>Valid for V4L sub-devices and nodes only.</entry>
+ <entry>Valid for (sub-)devices that create a single device node.</entry>
</row>
<row>
<entry></entry>
<entry></entry>
<entry>__u32</entry>
<entry><structfield>major</structfield></entry>
- <entry>V4L device node major number. For V4L sub-devices with no
- device node, set by the driver to 0.</entry>
+ <entry>Device node major number.</entry>
</row>
<row>
<entry></entry>
<entry></entry>
<entry>__u32</entry>
<entry><structfield>minor</structfield></entry>
- <entry>V4L device node minor number. For V4L sub-devices with no
- device node, set by the driver to 0.</entry>
- </row>
- <row>
- <entry></entry>
- <entry>struct</entry>
- <entry><structfield>fb</structfield></entry>
- <entry></entry>
- <entry>Valid for frame buffer nodes only.</entry>
- </row>
- <row>
- <entry></entry>
- <entry></entry>
- <entry>__u32</entry>
- <entry><structfield>major</structfield></entry>
- <entry>Frame buffer device node major number.</entry>
- </row>
- <row>
- <entry></entry>
- <entry></entry>
- <entry>__u32</entry>
- <entry><structfield>minor</structfield></entry>
- <entry>Frame buffer device node minor number.</entry>
- </row>
- <row>
- <entry></entry>
- <entry>struct</entry>
- <entry><structfield>alsa</structfield></entry>
- <entry></entry>
- <entry>Valid for ALSA devices only.</entry>
- </row>
- <row>
- <entry></entry>
- <entry></entry>
- <entry>__u32</entry>
- <entry><structfield>card</structfield></entry>
- <entry>ALSA card number</entry>
- </row>
- <row>
- <entry></entry>
- <entry></entry>
- <entry>__u32</entry>
- <entry><structfield>device</structfield></entry>
- <entry>ALSA device number</entry>
- </row>
- <row>
- <entry></entry>
- <entry></entry>
- <entry>__u32</entry>
- <entry><structfield>subdevice</structfield></entry>
- <entry>ALSA sub-device number</entry>
- </row>
- <row>
- <entry></entry>
- <entry>int</entry>
- <entry><structfield>dvb</structfield></entry>
- <entry></entry>
- <entry>DVB card number</entry>
+ <entry>Device node minor number.</entry>
</row>
<row>
<entry></entry>
<entry>__u8</entry>
- <entry><structfield>raw</structfield>[180]</entry>
+ <entry><structfield>raw</structfield>[184]</entry>
<entry></entry>
<entry></entry>
</row>
@@ -253,8 +195,24 @@
<entry>ALSA card</entry>
</row>
<row>
- <entry><constant>MEDIA_ENT_T_DEVNODE_DVB</constant></entry>
- <entry>DVB card</entry>
+ <entry><constant>MEDIA_ENT_T_DEVNODE_DVB_FE</constant></entry>
+ <entry>DVB frontend devnode</entry>
+ </row>
+ <row>
+ <entry><constant>MEDIA_ENT_T_DEVNODE_DVB_DEMUX</constant></entry>
+ <entry>DVB demux devnode</entry>
+ </row>
+ <row>
+ <entry><constant>MEDIA_ENT_T_DEVNODE_DVB_DVR</constant></entry>
+ <entry>DVB DVR devnode</entry>
+ </row>
+ <row>
+ <entry><constant>MEDIA_ENT_T_DEVNODE_DVB_CA</constant></entry>
+ <entry>DVB CAM devnode</entry>
+ </row>
+ <row>
+ <entry><constant>MEDIA_ENT_T_DEVNODE_DVB_NET</constant></entry>
+ <entry>DVB network devnode</entry>
</row>
<row>
<entry><constant>MEDIA_ENT_T_V4L2_SUBDEV</constant></entry>
@@ -282,6 +240,10 @@
it in some digital video standard, with appropriate embedded timing
signals.</entry>
</row>
+ <row>
+ <entry><constant>MEDIA_ENT_T_V4L2_SUBDEV_TUNER</constant></entry>
+ <entry>TV and/or radio tuner</entry>
+ </row>
</tbody>
</tgroup>
</table>
diff --git a/Documentation/DocBook/media/v4l/pixfmt-packed-rgb.xml b/Documentation/DocBook/media/v4l/pixfmt-packed-rgb.xml
index 6ab4f0f..b60fb93 100644
--- a/Documentation/DocBook/media/v4l/pixfmt-packed-rgb.xml
+++ b/Documentation/DocBook/media/v4l/pixfmt-packed-rgb.xml
@@ -303,45 +303,6 @@
<entry>b<subscript>1</subscript></entry>
<entry>b<subscript>0</subscript></entry>
</row>
- <row id="V4L2-PIX-FMT-BGR666">
- <entry><constant>V4L2_PIX_FMT_BGR666</constant></entry>
- <entry>'BGRH'</entry>
- <entry></entry>
- <entry>b<subscript>5</subscript></entry>
- <entry>b<subscript>4</subscript></entry>
- <entry>b<subscript>3</subscript></entry>
- <entry>b<subscript>2</subscript></entry>
- <entry>b<subscript>1</subscript></entry>
- <entry>b<subscript>0</subscript></entry>
- <entry>g<subscript>5</subscript></entry>
- <entry>g<subscript>4</subscript></entry>
- <entry></entry>
- <entry>g<subscript>3</subscript></entry>
- <entry>g<subscript>2</subscript></entry>
- <entry>g<subscript>1</subscript></entry>
- <entry>g<subscript>0</subscript></entry>
- <entry>r<subscript>5</subscript></entry>
- <entry>r<subscript>4</subscript></entry>
- <entry>r<subscript>3</subscript></entry>
- <entry>r<subscript>2</subscript></entry>
- <entry></entry>
- <entry>r<subscript>1</subscript></entry>
- <entry>r<subscript>0</subscript></entry>
- <entry></entry>
- <entry></entry>
- <entry></entry>
- <entry></entry>
- <entry></entry>
- <entry></entry>
- <entry></entry>
- <entry></entry>
- <entry></entry>
- <entry></entry>
- <entry></entry>
- <entry></entry>
- <entry></entry>
- <entry></entry>
- </row>
<row id="V4L2-PIX-FMT-BGR24">
<entry><constant>V4L2_PIX_FMT_BGR24</constant></entry>
<entry>'BGR3'</entry>
@@ -404,6 +365,46 @@
<entry>b<subscript>1</subscript></entry>
<entry>b<subscript>0</subscript></entry>
</row>
+ <row id="V4L2-PIX-FMT-BGR666">
+ <entry><constant>V4L2_PIX_FMT_BGR666</constant></entry>
+ <entry>'BGRH'</entry>
+ <entry></entry>
+ <entry>b<subscript>5</subscript></entry>
+ <entry>b<subscript>4</subscript></entry>
+ <entry>b<subscript>3</subscript></entry>
+ <entry>b<subscript>2</subscript></entry>
+ <entry>b<subscript>1</subscript></entry>
+ <entry>b<subscript>0</subscript></entry>
+ <entry>g<subscript>5</subscript></entry>
+ <entry>g<subscript>4</subscript></entry>
+ <entry></entry>
+ <entry>g<subscript>3</subscript></entry>
+ <entry>g<subscript>2</subscript></entry>
+ <entry>g<subscript>1</subscript></entry>
+ <entry>g<subscript>0</subscript></entry>
+ <entry>r<subscript>5</subscript></entry>
+ <entry>r<subscript>4</subscript></entry>
+ <entry>r<subscript>3</subscript></entry>
+ <entry>r<subscript>2</subscript></entry>
+ <entry></entry>
+ <entry>r<subscript>1</subscript></entry>
+ <entry>r<subscript>0</subscript></entry>
+ <entry>-</entry>
+ <entry>-</entry>
+ <entry>-</entry>
+ <entry>-</entry>
+ <entry>-</entry>
+ <entry>-</entry>
+ <entry></entry>
+ <entry>-</entry>
+ <entry>-</entry>
+ <entry>-</entry>
+ <entry>-</entry>
+ <entry>-</entry>
+ <entry>-</entry>
+ <entry>-</entry>
+ <entry>-</entry>
+ </row>
<row id="V4L2-PIX-FMT-ABGR32">
<entry><constant>V4L2_PIX_FMT_ABGR32</constant></entry>
<entry>'AR24'</entry>
diff --git a/Documentation/DocBook/media/v4l/pixfmt-sgrbg8.xml b/Documentation/DocBook/media/v4l/pixfmt-sgrbg8.xml
index 19727ab..7803b8c 100644
--- a/Documentation/DocBook/media/v4l/pixfmt-sgrbg8.xml
+++ b/Documentation/DocBook/media/v4l/pixfmt-sgrbg8.xml
@@ -38,10 +38,10 @@
</row>
<row>
<entry>start + 4:</entry>
- <entry>R<subscript>10</subscript></entry>
- <entry>B<subscript>11</subscript></entry>
- <entry>R<subscript>12</subscript></entry>
- <entry>B<subscript>13</subscript></entry>
+ <entry>B<subscript>10</subscript></entry>
+ <entry>G<subscript>11</subscript></entry>
+ <entry>B<subscript>12</subscript></entry>
+ <entry>G<subscript>13</subscript></entry>
</row>
<row>
<entry>start + 8:</entry>
@@ -52,10 +52,10 @@
</row>
<row>
<entry>start + 12:</entry>
- <entry>R<subscript>30</subscript></entry>
- <entry>B<subscript>31</subscript></entry>
- <entry>R<subscript>32</subscript></entry>
- <entry>B<subscript>33</subscript></entry>
+ <entry>B<subscript>30</subscript></entry>
+ <entry>G<subscript>31</subscript></entry>
+ <entry>B<subscript>32</subscript></entry>
+ <entry>G<subscript>33</subscript></entry>
</row>
</tbody>
</tgroup>
diff --git a/Documentation/DocBook/media/v4l/pixfmt-srggb10p.xml b/Documentation/DocBook/media/v4l/pixfmt-srggb10p.xml
index 30aa635..a8cc102 100644
--- a/Documentation/DocBook/media/v4l/pixfmt-srggb10p.xml
+++ b/Documentation/DocBook/media/v4l/pixfmt-srggb10p.xml
@@ -38,7 +38,7 @@
<title>Byte Order.</title>
<para>Each cell is one byte.
<informaltable frame="topbot" colsep="1" rowsep="1">
- <tgroup cols="5" align="center" border="1">
+ <tgroup cols="5" align="center">
<colspec align="left" colwidth="2*" />
<tbody valign="top">
<row>
diff --git a/Documentation/DocBook/media/v4l/pixfmt-yuv420m.xml b/Documentation/DocBook/media/v4l/pixfmt-yuv420m.xml
index 60308f1..e781cc6 100644
--- a/Documentation/DocBook/media/v4l/pixfmt-yuv420m.xml
+++ b/Documentation/DocBook/media/v4l/pixfmt-yuv420m.xml
@@ -29,12 +29,12 @@
words, two Cx rows (including padding) is exactly as long as one Y row
(including padding).</para>
- <para><constant>V4L2_PIX_FMT_NV12M</constant> is intended to be
+ <para><constant>V4L2_PIX_FMT_YUV420M</constant> is intended to be
used only in drivers and applications that support the multi-planar API,
described in <xref linkend="planar-apis"/>. </para>
<example>
- <title><constant>V4L2_PIX_FMT_YVU420M</constant> 4 × 4
+ <title><constant>V4L2_PIX_FMT_YUV420M</constant> 4 × 4
pixel image</title>
<formalpara>
diff --git a/Documentation/DocBook/media/v4l/pixfmt.xml b/Documentation/DocBook/media/v4l/pixfmt.xml
index 5e0352c..fcde4e2 100644
--- a/Documentation/DocBook/media/v4l/pixfmt.xml
+++ b/Documentation/DocBook/media/v4l/pixfmt.xml
@@ -80,9 +80,9 @@
boundary. Input devices may write padding bytes, the value is
undefined. Output devices ignore the contents of padding
bytes.</para><para>When the image format is planar the
-<structfield>bytesperline</structfield> value applies to the largest
+<structfield>bytesperline</structfield> value applies to the first
plane and is divided by the same factor as the
-<structfield>width</structfield> field for any smaller planes. For
+<structfield>width</structfield> field for the other planes. For
example the Cb and Cr planes of a YUV 4:2:0 image have half as many
padding bytes following each line as the Y plane. To avoid ambiguities
drivers must return a <structfield>bytesperline</structfield> value
@@ -182,14 +182,14 @@
</entry>
</row>
<row>
- <entry>__u16</entry>
+ <entry>__u32</entry>
<entry><structfield>bytesperline</structfield></entry>
<entry>Distance in bytes between the leftmost pixels in two adjacent
lines. See &v4l2-pix-format;.</entry>
</row>
<row>
<entry>__u16</entry>
- <entry><structfield>reserved[7]</structfield></entry>
+ <entry><structfield>reserved[6]</structfield></entry>
<entry>Reserved for future extensions. Should be zeroed by the
application.</entry>
</row>
@@ -483,8 +483,8 @@
Y'CbCr encodings and the third is the quantization identifier (&v4l2-quantization;)
to specify non-standard quantization methods. Most of the time only the colorspace
field of &v4l2-pix-format; or &v4l2-pix-format-mplane; needs to be filled in. Note
-that the default R'G'B' quantization is always full range for all colorspaces,
-so this won't be mentioned explicitly for each colorspace description.</para>
+that the default R'G'B' quantization is full range for all colorspaces except for
+BT.2020 which uses limited range R'G'B' quantization.</para>
<table pgwide="1" frame="none" id="v4l2-colorspace">
<title>V4L2 Colorspaces</title>
@@ -598,7 +598,8 @@
<row>
<entry><constant>V4L2_QUANTIZATION_DEFAULT</constant></entry>
<entry>Use the default quantization encoding as defined by the colorspace.
-This is always full range for R'G'B' and usually limited range for Y'CbCr.</entry>
+This is always full range for R'G'B' (except for the BT.2020 colorspace) and usually
+limited range for Y'CbCr.</entry>
</row>
<row>
<entry><constant>V4L2_QUANTIZATION_FULL_RANGE</constant></entry>
@@ -620,8 +621,8 @@
<section>
<title>Detailed Colorspace Descriptions</title>
- <section>
- <title id="col-smpte-170m">Colorspace SMPTE 170M (<constant>V4L2_COLORSPACE_SMPTE170M</constant>)</title>
+ <section id="col-smpte-170m">
+ <title>Colorspace SMPTE 170M (<constant>V4L2_COLORSPACE_SMPTE170M</constant>)</title>
<para>The <xref linkend="smpte170m" /> standard defines the colorspace used by NTSC and PAL and by SDTV
in general. The default Y'CbCr encoding is <constant>V4L2_YCBCR_ENC_601</constant>.
The default Y'CbCr quantization is limited range. The chromaticities of the primary colors and
@@ -666,8 +667,7 @@
<variablelist>
<varlistentry>
<term>The transfer function defined for SMPTE 170M is the same as the
-one defined in Rec. 709. Normally L is in the range [0…1], but for the extended
-gamut xvYCC encoding values outside that range are allowed.</term>
+one defined in Rec. 709.</term>
<listitem>
<para>L' = -1.099(-L)<superscript>0.45</superscript> + 0.099 for L ≤ -0.018</para>
<para>L' = 4.5L for -0.018 < L < 0.018</para>
@@ -702,29 +702,10 @@
though BT.601 does not mention any color primaries.</para>
<para>The default quantization is limited range, but full range is possible although
rarely seen.</para>
- <para>The <constant>V4L2_YCBCR_ENC_601</constant> encoding as described above is the
-default for this colorspace, but it can be overridden with <constant>V4L2_YCBCR_ENC_709</constant>,
-in which case the Rec. 709 Y'CbCr encoding is used.</para>
- <variablelist>
- <varlistentry>
- <term>The xvYCC 601 encoding (<constant>V4L2_YCBCR_ENC_XV601</constant>, <xref linkend="xvycc" />) is similar
-to the BT.601 encoding, but it allows for R', G' and B' values that are outside the range
-[0…1]. The resulting Y', Cb and Cr values are scaled and offset:</term>
- <listitem>
- <para>Y' = (219 / 255) * (0.299R' + 0.587G' + 0.114B') + (16 / 255)</para>
- <para>Cb = (224 / 255) * (-0.169R' - 0.331G' + 0.5B')</para>
- <para>Cr = (224 / 255) * (0.5R' - 0.419G' - 0.081B')</para>
- </listitem>
- </varlistentry>
- </variablelist>
- <para>Y' is clamped to the range [0…1] and Cb and Cr are clamped
-to the range [-0.5…0.5]. The non-standard xvYCC 709 encoding can also be used by selecting
-<constant>V4L2_YCBCR_ENC_XV709</constant>. The xvYCC encodings always use full range
-quantization.</para>
</section>
- <section>
- <title id="col-rec709">Colorspace Rec. 709 (<constant>V4L2_COLORSPACE_REC709</constant>)</title>
+ <section id="col-rec709">
+ <title>Colorspace Rec. 709 (<constant>V4L2_COLORSPACE_REC709</constant>)</title>
<para>The <xref linkend="itu709" /> standard defines the colorspace used by HDTV in general. The default
Y'CbCr encoding is <constant>V4L2_YCBCR_ENC_709</constant>. The default Y'CbCr quantization is
limited range. The chromaticities of the primary colors and the white reference are:</para>
@@ -803,26 +784,39 @@
<para>The <constant>V4L2_YCBCR_ENC_709</constant> encoding described above is the default
for this colorspace, but it can be overridden with <constant>V4L2_YCBCR_ENC_601</constant>, in which
case the BT.601 Y'CbCr encoding is used.</para>
+ <para>Two additional extended gamut Y'CbCr encodings are also possible with this colorspace:</para>
<variablelist>
<varlistentry>
<term>The xvYCC 709 encoding (<constant>V4L2_YCBCR_ENC_XV709</constant>, <xref linkend="xvycc" />)
is similar to the Rec. 709 encoding, but it allows for R', G' and B' values that are outside the range
[0…1]. The resulting Y', Cb and Cr values are scaled and offset:</term>
<listitem>
- <para>Y' = (219 / 255) * (0.2126R' + 0.7152G' + 0.0722B') + (16 / 255)</para>
- <para>Cb = (224 / 255) * (-0.1146R' - 0.3854G' + 0.5B')</para>
- <para>Cr = (224 / 255) * (0.5R' - 0.4542G' - 0.0458B')</para>
+ <para>Y' = (219 / 256) * (0.2126R' + 0.7152G' + 0.0722B') + (16 / 256)</para>
+ <para>Cb = (224 / 256) * (-0.1146R' - 0.3854G' + 0.5B')</para>
+ <para>Cr = (224 / 256) * (0.5R' - 0.4542G' - 0.0458B')</para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ <variablelist>
+ <varlistentry>
+ <term>The xvYCC 601 encoding (<constant>V4L2_YCBCR_ENC_XV601</constant>, <xref linkend="xvycc" />) is similar
+to the BT.601 encoding, but it allows for R', G' and B' values that are outside the range
+[0…1]. The resulting Y', Cb and Cr values are scaled and offset:</term>
+ <listitem>
+ <para>Y' = (219 / 256) * (0.299R' + 0.587G' + 0.114B') + (16 / 256)</para>
+ <para>Cb = (224 / 256) * (-0.169R' - 0.331G' + 0.5B')</para>
+ <para>Cr = (224 / 256) * (0.5R' - 0.419G' - 0.081B')</para>
</listitem>
</varlistentry>
</variablelist>
<para>Y' is clamped to the range [0…1] and Cb and Cr are clamped
-to the range [-0.5…0.5]. The non-standard xvYCC 601 encoding can also be used by
-selecting <constant>V4L2_YCBCR_ENC_XV601</constant>. The xvYCC encodings always use full
-range quantization.</para>
+to the range [-0.5…0.5]. The non-standard xvYCC 709 or xvYCC 601 encodings can be used by
+selecting <constant>V4L2_YCBCR_ENC_XV709</constant> or <constant>V4L2_YCBCR_ENC_XV601</constant>.
+The xvYCC encodings always use full range quantization.</para>
</section>
- <section>
- <title id="col-srgb">Colorspace sRGB (<constant>V4L2_COLORSPACE_SRGB</constant>)</title>
+ <section id="col-srgb">
+ <title>Colorspace sRGB (<constant>V4L2_COLORSPACE_SRGB</constant>)</title>
<para>The <xref linkend="srgb" /> standard defines the colorspace used by most webcams and computer graphics. The
default Y'CbCr encoding is <constant>V4L2_YCBCR_ENC_SYCC</constant>. The default Y'CbCr quantization
is full range. The chromaticities of the primary colors and the white reference are:</para>
@@ -898,8 +892,8 @@
values before quantization, but this encoding does not do that.</para>
</section>
- <section>
- <title id="col-adobergb">Colorspace Adobe RGB (<constant>V4L2_COLORSPACE_ADOBERGB</constant>)</title>
+ <section id="col-adobergb">
+ <title>Colorspace Adobe RGB (<constant>V4L2_COLORSPACE_ADOBERGB</constant>)</title>
<para>The <xref linkend="adobergb" /> standard defines the colorspace used by computer graphics
that use the AdobeRGB colorspace. This is also known as the <xref linkend="oprgb" /> standard.
The default Y'CbCr encoding is <constant>V4L2_YCBCR_ENC_601</constant>. The default Y'CbCr
@@ -970,12 +964,12 @@
SMPTE 170M/BT.601. The Y'CbCr quantization is limited range.</para>
</section>
- <section>
- <title id="col-bt2020">Colorspace BT.2020 (<constant>V4L2_COLORSPACE_BT2020</constant>)</title>
+ <section id="col-bt2020">
+ <title>Colorspace BT.2020 (<constant>V4L2_COLORSPACE_BT2020</constant>)</title>
<para>The <xref linkend="itu2020" /> standard defines the colorspace used by Ultra-high definition
television (UHDTV). The default Y'CbCr encoding is <constant>V4L2_YCBCR_ENC_BT2020</constant>.
-The default Y'CbCr quantization is limited range. The chromaticities of the primary colors and
-the white reference are:</para>
+The default R'G'B' quantization is limited range (!), and so is the default Y'CbCr quantization.
+The chromaticities of the primary colors and the white reference are:</para>
<table frame="none">
<title>BT.2020 Chromaticities</title>
<tgroup cols="3" align="left">
@@ -1032,7 +1026,7 @@
<term>The luminance (Y') and color difference (Cb and Cr) are obtained with the
following <constant>V4L2_YCBCR_ENC_BT2020</constant> encoding:</term>
<listitem>
- <para>Y' = 0.2627R' + 0.6789G' + 0.0593B'</para>
+ <para>Y' = 0.2627R' + 0.6780G' + 0.0593B'</para>
<para>Cb = -0.1396R' - 0.3604G' + 0.5B'</para>
<para>Cr = 0.5R' - 0.4598G' - 0.0402B'</para>
</listitem>
@@ -1046,7 +1040,7 @@
<varlistentry>
<term>Luma:</term>
<listitem>
- <para>Yc' = (0.2627R + 0.6789G + 0.0593B)'</para>
+ <para>Yc' = (0.2627R + 0.6780G + 0.0593B)'</para>
</listitem>
</varlistentry>
</variablelist>
@@ -1054,7 +1048,7 @@
<varlistentry>
<term>B' - Yc' ≤ 0:</term>
<listitem>
- <para>Cbc = (B' - Y') / 1.9404</para>
+ <para>Cbc = (B' - Yc') / 1.9404</para>
</listitem>
</varlistentry>
</variablelist>
@@ -1062,7 +1056,7 @@
<varlistentry>
<term>B' - Yc' > 0:</term>
<listitem>
- <para>Cbc = (B' - Y') / 1.5816</para>
+ <para>Cbc = (B' - Yc') / 1.5816</para>
</listitem>
</varlistentry>
</variablelist>
@@ -1086,8 +1080,8 @@
clamped to the range [-0.5…0.5]. The Yc'CbcCrc quantization is limited range.</para>
</section>
- <section>
- <title id="col-smpte-240m">Colorspace SMPTE 240M (<constant>V4L2_COLORSPACE_SMPTE240M</constant>)</title>
+ <section id="col-smpte-240m">
+ <title>Colorspace SMPTE 240M (<constant>V4L2_COLORSPACE_SMPTE240M</constant>)</title>
<para>The <xref linkend="smpte240m" /> standard was an interim standard used during the early days of HDTV (1988-1998).
It has been superseded by Rec. 709. The default Y'CbCr encoding is <constant>V4L2_YCBCR_ENC_SMPTE240M</constant>.
The default Y'CbCr quantization is limited range. The chromaticities of the primary colors and the
@@ -1159,8 +1153,8 @@
clamped to the range [-0.5…0.5]. The Y'CbCr quantization is limited range.</para>
</section>
- <section>
- <title id="col-sysm">Colorspace NTSC 1953 (<constant>V4L2_COLORSPACE_470_SYSTEM_M</constant>)</title>
+ <section id="col-sysm">
+ <title>Colorspace NTSC 1953 (<constant>V4L2_COLORSPACE_470_SYSTEM_M</constant>)</title>
<para>This standard defines the colorspace used by NTSC in 1953. In practice this
colorspace is obsolete and SMPTE 170M should be used instead. The default Y'CbCr encoding
is <constant>V4L2_YCBCR_ENC_601</constant>. The default Y'CbCr quantization is limited range.
@@ -1237,8 +1231,8 @@
This transform is identical to one defined in SMPTE 170M/BT.601.</para>
</section>
- <section>
- <title id="col-sysbg">Colorspace EBU Tech. 3213 (<constant>V4L2_COLORSPACE_470_SYSTEM_BG</constant>)</title>
+ <section id="col-sysbg">
+ <title>Colorspace EBU Tech. 3213 (<constant>V4L2_COLORSPACE_470_SYSTEM_BG</constant>)</title>
<para>The <xref linkend="tech3213" /> standard defines the colorspace used by PAL/SECAM in 1975. In practice this
colorspace is obsolete and SMPTE 170M should be used instead. The default Y'CbCr encoding
is <constant>V4L2_YCBCR_ENC_601</constant>. The default Y'CbCr quantization is limited range.
@@ -1311,8 +1305,8 @@
This transform is identical to one defined in SMPTE 170M/BT.601.</para>
</section>
- <section>
- <title id="col-jpeg">Colorspace JPEG (<constant>V4L2_COLORSPACE_JPEG</constant>)</title>
+ <section id="col-jpeg">
+ <title>Colorspace JPEG (<constant>V4L2_COLORSPACE_JPEG</constant>)</title>
<para>This colorspace defines the colorspace used by most (Motion-)JPEG formats. The chromaticities
of the primary colors and the white reference are identical to sRGB. The Y'CbCr encoding is
<constant>V4L2_YCBCR_ENC_601</constant> with full range quantization where
diff --git a/Documentation/DocBook/media/v4l/subdev-formats.xml b/Documentation/DocBook/media/v4l/subdev-formats.xml
index c5ea868..2588ad7 100644
--- a/Documentation/DocBook/media/v4l/subdev-formats.xml
+++ b/Documentation/DocBook/media/v4l/subdev-formats.xml
@@ -91,7 +91,9 @@
<listitem><para>For formats where the total number of bits per pixel is smaller
than the number of bus samples per pixel times the bus width, a padding
value stating if the bytes are padded in their most high order bits
- (PADHI) or low order bits (PADLO).</para></listitem>
+ (PADHI) or low order bits (PADLO). A "C" prefix is used for component-wise
+ padding in the most high order bits (CPADHI) or low order bits (CPADLO)
+ of each separate component.</para></listitem>
<listitem><para>For formats where the number of bus samples per pixel is larger
than 1, an endianness value stating if the pixel is transferred MSB first
(BE) or LSB first (LE).</para></listitem>
@@ -192,6 +194,24 @@
</row>
</thead>
<tbody valign="top">
+ <row id="MEDIA-BUS-FMT-RGB444-1X12">
+ <entry>MEDIA_BUS_FMT_RGB444_1X12</entry>
+ <entry>0x1016</entry>
+ <entry></entry>
+ &dash-ent-20;
+ <entry>r<subscript>3</subscript></entry>
+ <entry>r<subscript>2</subscript></entry>
+ <entry>r<subscript>1</subscript></entry>
+ <entry>r<subscript>0</subscript></entry>
+ <entry>g<subscript>3</subscript></entry>
+ <entry>g<subscript>2</subscript></entry>
+ <entry>g<subscript>1</subscript></entry>
+ <entry>g<subscript>0</subscript></entry>
+ <entry>b<subscript>3</subscript></entry>
+ <entry>b<subscript>2</subscript></entry>
+ <entry>b<subscript>1</subscript></entry>
+ <entry>b<subscript>0</subscript></entry>
+ </row>
<row id="MEDIA-BUS-FMT-RGB444-2X8-PADHI-BE">
<entry>MEDIA_BUS_FMT_RGB444_2X8_PADHI_BE</entry>
<entry>0x1001</entry>
@@ -304,6 +324,28 @@
<entry>g<subscript>4</subscript></entry>
<entry>g<subscript>3</subscript></entry>
</row>
+ <row id="MEDIA-BUS-FMT-RGB565-1X16">
+ <entry>MEDIA_BUS_FMT_RGB565_1X16</entry>
+ <entry>0x1017</entry>
+ <entry></entry>
+ &dash-ent-16;
+ <entry>r<subscript>4</subscript></entry>
+ <entry>r<subscript>3</subscript></entry>
+ <entry>r<subscript>2</subscript></entry>
+ <entry>r<subscript>1</subscript></entry>
+ <entry>r<subscript>0</subscript></entry>
+ <entry>g<subscript>5</subscript></entry>
+ <entry>g<subscript>4</subscript></entry>
+ <entry>g<subscript>3</subscript></entry>
+ <entry>g<subscript>2</subscript></entry>
+ <entry>g<subscript>1</subscript></entry>
+ <entry>g<subscript>0</subscript></entry>
+ <entry>b<subscript>4</subscript></entry>
+ <entry>b<subscript>3</subscript></entry>
+ <entry>b<subscript>2</subscript></entry>
+ <entry>b<subscript>1</subscript></entry>
+ <entry>b<subscript>0</subscript></entry>
+ </row>
<row id="MEDIA-BUS-FMT-BGR565-2X8-BE">
<entry>MEDIA_BUS_FMT_BGR565_2X8_BE</entry>
<entry>0x1005</entry>
@@ -440,6 +482,126 @@
<entry>b<subscript>1</subscript></entry>
<entry>b<subscript>0</subscript></entry>
</row>
+ <row id="MEDIA-BUS-FMT-RBG888-1X24">
+ <entry>MEDIA_BUS_FMT_RBG888_1X24</entry>
+ <entry>0x100e</entry>
+ <entry></entry>
+ &dash-ent-8;
+ <entry>r<subscript>7</subscript></entry>
+ <entry>r<subscript>6</subscript></entry>
+ <entry>r<subscript>5</subscript></entry>
+ <entry>r<subscript>4</subscript></entry>
+ <entry>r<subscript>3</subscript></entry>
+ <entry>r<subscript>2</subscript></entry>
+ <entry>r<subscript>1</subscript></entry>
+ <entry>r<subscript>0</subscript></entry>
+ <entry>b<subscript>7</subscript></entry>
+ <entry>b<subscript>6</subscript></entry>
+ <entry>b<subscript>5</subscript></entry>
+ <entry>b<subscript>4</subscript></entry>
+ <entry>b<subscript>3</subscript></entry>
+ <entry>b<subscript>2</subscript></entry>
+ <entry>b<subscript>1</subscript></entry>
+ <entry>b<subscript>0</subscript></entry>
+ <entry>g<subscript>7</subscript></entry>
+ <entry>g<subscript>6</subscript></entry>
+ <entry>g<subscript>5</subscript></entry>
+ <entry>g<subscript>4</subscript></entry>
+ <entry>g<subscript>3</subscript></entry>
+ <entry>g<subscript>2</subscript></entry>
+ <entry>g<subscript>1</subscript></entry>
+ <entry>g<subscript>0</subscript></entry>
+ </row>
+ <row id="MEDIA-BUS-FMT-RGB666-1X24_CPADHI">
+ <entry>MEDIA_BUS_FMT_RGB666_1X24_CPADHI</entry>
+ <entry>0x1015</entry>
+ <entry></entry>
+ &dash-ent-8;
+ <entry>0</entry>
+ <entry>0</entry>
+ <entry>r<subscript>5</subscript></entry>
+ <entry>r<subscript>4</subscript></entry>
+ <entry>r<subscript>3</subscript></entry>
+ <entry>r<subscript>2</subscript></entry>
+ <entry>r<subscript>1</subscript></entry>
+ <entry>r<subscript>0</subscript></entry>
+ <entry>0</entry>
+ <entry>0</entry>
+ <entry>g<subscript>5</subscript></entry>
+ <entry>g<subscript>4</subscript></entry>
+ <entry>g<subscript>3</subscript></entry>
+ <entry>g<subscript>2</subscript></entry>
+ <entry>g<subscript>1</subscript></entry>
+ <entry>g<subscript>0</subscript></entry>
+ <entry>0</entry>
+ <entry>0</entry>
+ <entry>b<subscript>5</subscript></entry>
+ <entry>b<subscript>4</subscript></entry>
+ <entry>b<subscript>3</subscript></entry>
+ <entry>b<subscript>2</subscript></entry>
+ <entry>b<subscript>1</subscript></entry>
+ <entry>b<subscript>0</subscript></entry>
+ </row>
+ <row id="MEDIA-BUS-FMT-BGR888-1X24">
+ <entry>MEDIA_BUS_FMT_BGR888_1X24</entry>
+ <entry>0x1013</entry>
+ <entry></entry>
+ &dash-ent-8;
+ <entry>b<subscript>7</subscript></entry>
+ <entry>b<subscript>6</subscript></entry>
+ <entry>b<subscript>5</subscript></entry>
+ <entry>b<subscript>4</subscript></entry>
+ <entry>b<subscript>3</subscript></entry>
+ <entry>b<subscript>2</subscript></entry>
+ <entry>b<subscript>1</subscript></entry>
+ <entry>b<subscript>0</subscript></entry>
+ <entry>g<subscript>7</subscript></entry>
+ <entry>g<subscript>6</subscript></entry>
+ <entry>g<subscript>5</subscript></entry>
+ <entry>g<subscript>4</subscript></entry>
+ <entry>g<subscript>3</subscript></entry>
+ <entry>g<subscript>2</subscript></entry>
+ <entry>g<subscript>1</subscript></entry>
+ <entry>g<subscript>0</subscript></entry>
+ <entry>r<subscript>7</subscript></entry>
+ <entry>r<subscript>6</subscript></entry>
+ <entry>r<subscript>5</subscript></entry>
+ <entry>r<subscript>4</subscript></entry>
+ <entry>r<subscript>3</subscript></entry>
+ <entry>r<subscript>2</subscript></entry>
+ <entry>r<subscript>1</subscript></entry>
+ <entry>r<subscript>0</subscript></entry>
+ </row>
+ <row id="MEDIA-BUS-FMT-GBR888-1X24">
+ <entry>MEDIA_BUS_FMT_GBR888_1X24</entry>
+ <entry>0x1014</entry>
+ <entry></entry>
+ &dash-ent-8;
+ <entry>g<subscript>7</subscript></entry>
+ <entry>g<subscript>6</subscript></entry>
+ <entry>g<subscript>5</subscript></entry>
+ <entry>g<subscript>4</subscript></entry>
+ <entry>g<subscript>3</subscript></entry>
+ <entry>g<subscript>2</subscript></entry>
+ <entry>g<subscript>1</subscript></entry>
+ <entry>g<subscript>0</subscript></entry>
+ <entry>b<subscript>7</subscript></entry>
+ <entry>b<subscript>6</subscript></entry>
+ <entry>b<subscript>5</subscript></entry>
+ <entry>b<subscript>4</subscript></entry>
+ <entry>b<subscript>3</subscript></entry>
+ <entry>b<subscript>2</subscript></entry>
+ <entry>b<subscript>1</subscript></entry>
+ <entry>b<subscript>0</subscript></entry>
+ <entry>r<subscript>7</subscript></entry>
+ <entry>r<subscript>6</subscript></entry>
+ <entry>r<subscript>5</subscript></entry>
+ <entry>r<subscript>4</subscript></entry>
+ <entry>r<subscript>3</subscript></entry>
+ <entry>r<subscript>2</subscript></entry>
+ <entry>r<subscript>1</subscript></entry>
+ <entry>r<subscript>0</subscript></entry>
+ </row>
<row id="MEDIA-BUS-FMT-RGB888-1X24">
<entry>MEDIA_BUS_FMT_RGB888_1X24</entry>
<entry>0x100a</entry>
@@ -579,6 +741,298 @@
<entry>b<subscript>1</subscript></entry>
<entry>b<subscript>0</subscript></entry>
</row>
+ <row id="MEDIA-BUS-FMT-RGB888-1X32-PADHI">
+ <entry>MEDIA_BUS_FMT_RGB888_1X32_PADHI</entry>
+ <entry>0x100f</entry>
+ <entry></entry>
+ <entry>0</entry>
+ <entry>0</entry>
+ <entry>0</entry>
+ <entry>0</entry>
+ <entry>0</entry>
+ <entry>0</entry>
+ <entry>0</entry>
+ <entry>0</entry>
+ <entry>r<subscript>7</subscript></entry>
+ <entry>r<subscript>6</subscript></entry>
+ <entry>r<subscript>5</subscript></entry>
+ <entry>r<subscript>4</subscript></entry>
+ <entry>r<subscript>3</subscript></entry>
+ <entry>r<subscript>2</subscript></entry>
+ <entry>r<subscript>1</subscript></entry>
+ <entry>r<subscript>0</subscript></entry>
+ <entry>g<subscript>7</subscript></entry>
+ <entry>g<subscript>6</subscript></entry>
+ <entry>g<subscript>5</subscript></entry>
+ <entry>g<subscript>4</subscript></entry>
+ <entry>g<subscript>3</subscript></entry>
+ <entry>g<subscript>2</subscript></entry>
+ <entry>g<subscript>1</subscript></entry>
+ <entry>g<subscript>0</subscript></entry>
+ <entry>b<subscript>7</subscript></entry>
+ <entry>b<subscript>6</subscript></entry>
+ <entry>b<subscript>5</subscript></entry>
+ <entry>b<subscript>4</subscript></entry>
+ <entry>b<subscript>3</subscript></entry>
+ <entry>b<subscript>2</subscript></entry>
+ <entry>b<subscript>1</subscript></entry>
+ <entry>b<subscript>0</subscript></entry>
+ </row>
+ </tbody>
+ </tgroup>
+ </table>
+
+ <para>On LVDS buses, usually each sample is transferred serialized in
+ seven time slots per pixel clock, on three (18-bit) or four (24-bit)
+ differential data pairs at the same time. The remaining bits are used for
+ control signals as defined by SPWG/PSWG/VESA or JEIDA standards.
+ The 24-bit RGB format serialized in seven time slots on four lanes using
+ JEIDA defined bit mapping will be named
+ <constant>MEDIA_BUS_FMT_RGB888_1X7X4_JEIDA</constant>, for example.
+ </para>
+
+ <table pgwide="0" frame="none" id="v4l2-mbus-pixelcode-rgb-lvds">
+ <title>LVDS RGB formats</title>
+ <tgroup cols="8">
+ <colspec colname="id" align="left" />
+ <colspec colname="code" align="center" />
+ <colspec colname="slot" align="center" />
+ <colspec colname="lane" />
+ <colspec colnum="5" colname="l03" align="center" />
+ <colspec colnum="6" colname="l02" align="center" />
+ <colspec colnum="7" colname="l01" align="center" />
+ <colspec colnum="8" colname="l00" align="center" />
+ <spanspec namest="l03" nameend="l00" spanname="l0" />
+ <thead>
+ <row>
+ <entry>Identifier</entry>
+ <entry>Code</entry>
+ <entry></entry>
+ <entry></entry>
+ <entry spanname="l0">Data organization</entry>
+ </row>
+ <row>
+ <entry></entry>
+ <entry></entry>
+ <entry>Timeslot</entry>
+ <entry>Lane</entry>
+ <entry>3</entry>
+ <entry>2</entry>
+ <entry>1</entry>
+ <entry>0</entry>
+ </row>
+ </thead>
+ <tbody valign="top">
+ <row id="MEDIA-BUS-FMT-RGB666-1X7X3-SPWG">
+ <entry>MEDIA_BUS_FMT_RGB666_1X7X3_SPWG</entry>
+ <entry>0x1010</entry>
+ <entry>0</entry>
+ <entry></entry>
+ <entry>-</entry>
+ <entry>d</entry>
+ <entry>b<subscript>1</subscript></entry>
+ <entry>g<subscript>0</subscript></entry>
+ </row>
+ <row>
+ <entry></entry>
+ <entry></entry>
+ <entry>1</entry>
+ <entry></entry>
+ <entry>-</entry>
+ <entry>d</entry>
+ <entry>b<subscript>0</subscript></entry>
+ <entry>r<subscript>5</subscript></entry>
+ </row>
+ <row>
+ <entry></entry>
+ <entry></entry>
+ <entry>2</entry>
+ <entry></entry>
+ <entry>-</entry>
+ <entry>d</entry>
+ <entry>g<subscript>5</subscript></entry>
+ <entry>r<subscript>4</subscript></entry>
+ </row>
+ <row>
+ <entry></entry>
+ <entry></entry>
+ <entry>3</entry>
+ <entry></entry>
+ <entry>-</entry>
+ <entry>b<subscript>5</subscript></entry>
+ <entry>g<subscript>4</subscript></entry>
+ <entry>r<subscript>3</subscript></entry>
+ </row>
+ <row>
+ <entry></entry>
+ <entry></entry>
+ <entry>4</entry>
+ <entry></entry>
+ <entry>-</entry>
+ <entry>b<subscript>4</subscript></entry>
+ <entry>g<subscript>3</subscript></entry>
+ <entry>r<subscript>2</subscript></entry>
+ </row>
+ <row>
+ <entry></entry>
+ <entry></entry>
+ <entry>5</entry>
+ <entry></entry>
+ <entry>-</entry>
+ <entry>b<subscript>3</subscript></entry>
+ <entry>g<subscript>2</subscript></entry>
+ <entry>r<subscript>1</subscript></entry>
+ </row>
+ <row>
+ <entry></entry>
+ <entry></entry>
+ <entry>6</entry>
+ <entry></entry>
+ <entry>-</entry>
+ <entry>b<subscript>2</subscript></entry>
+ <entry>g<subscript>1</subscript></entry>
+ <entry>r<subscript>0</subscript></entry>
+ </row>
+ <row id="MEDIA-BUS-FMT-RGB888-1X7X4-SPWG">
+ <entry>MEDIA_BUS_FMT_RGB888_1X7X4_SPWG</entry>
+ <entry>0x1011</entry>
+ <entry>0</entry>
+ <entry></entry>
+ <entry>d</entry>
+ <entry>d</entry>
+ <entry>b<subscript>1</subscript></entry>
+ <entry>g<subscript>0</subscript></entry>
+ </row>
+ <row>
+ <entry></entry>
+ <entry></entry>
+ <entry>1</entry>
+ <entry></entry>
+ <entry>b<subscript>7</subscript></entry>
+ <entry>d</entry>
+ <entry>b<subscript>0</subscript></entry>
+ <entry>r<subscript>5</subscript></entry>
+ </row>
+ <row>
+ <entry></entry>
+ <entry></entry>
+ <entry>2</entry>
+ <entry></entry>
+ <entry>b<subscript>6</subscript></entry>
+ <entry>d</entry>
+ <entry>g<subscript>5</subscript></entry>
+ <entry>r<subscript>4</subscript></entry>
+ </row>
+ <row>
+ <entry></entry>
+ <entry></entry>
+ <entry>3</entry>
+ <entry></entry>
+ <entry>g<subscript>7</subscript></entry>
+ <entry>b<subscript>5</subscript></entry>
+ <entry>g<subscript>4</subscript></entry>
+ <entry>r<subscript>3</subscript></entry>
+ </row>
+ <row>
+ <entry></entry>
+ <entry></entry>
+ <entry>4</entry>
+ <entry></entry>
+ <entry>g<subscript>6</subscript></entry>
+ <entry>b<subscript>4</subscript></entry>
+ <entry>g<subscript>3</subscript></entry>
+ <entry>r<subscript>2</subscript></entry>
+ </row>
+ <row>
+ <entry></entry>
+ <entry></entry>
+ <entry>5</entry>
+ <entry></entry>
+ <entry>r<subscript>7</subscript></entry>
+ <entry>b<subscript>3</subscript></entry>
+ <entry>g<subscript>2</subscript></entry>
+ <entry>r<subscript>1</subscript></entry>
+ </row>
+ <row>
+ <entry></entry>
+ <entry></entry>
+ <entry>6</entry>
+ <entry></entry>
+ <entry>r<subscript>6</subscript></entry>
+ <entry>b<subscript>2</subscript></entry>
+ <entry>g<subscript>1</subscript></entry>
+ <entry>r<subscript>0</subscript></entry>
+ </row>
+ <row id="MEDIA-BUS-FMT-RGB888-1X7X4-JEIDA">
+ <entry>MEDIA_BUS_FMT_RGB888_1X7X4_JEIDA</entry>
+ <entry>0x1012</entry>
+ <entry>0</entry>
+ <entry></entry>
+ <entry>d</entry>
+ <entry>d</entry>
+ <entry>b<subscript>3</subscript></entry>
+ <entry>g<subscript>2</subscript></entry>
+ </row>
+ <row>
+ <entry></entry>
+ <entry></entry>
+ <entry>1</entry>
+ <entry></entry>
+ <entry>b<subscript>1</subscript></entry>
+ <entry>d</entry>
+ <entry>b<subscript>2</subscript></entry>
+ <entry>r<subscript>7</subscript></entry>
+ </row>
+ <row>
+ <entry></entry>
+ <entry></entry>
+ <entry>2</entry>
+ <entry></entry>
+ <entry>b<subscript>0</subscript></entry>
+ <entry>d</entry>
+ <entry>g<subscript>7</subscript></entry>
+ <entry>r<subscript>6</subscript></entry>
+ </row>
+ <row>
+ <entry></entry>
+ <entry></entry>
+ <entry>3</entry>
+ <entry></entry>
+ <entry>g<subscript>1</subscript></entry>
+ <entry>b<subscript>7</subscript></entry>
+ <entry>g<subscript>6</subscript></entry>
+ <entry>r<subscript>5</subscript></entry>
+ </row>
+ <row>
+ <entry></entry>
+ <entry></entry>
+ <entry>4</entry>
+ <entry></entry>
+ <entry>g<subscript>0</subscript></entry>
+ <entry>b<subscript>6</subscript></entry>
+ <entry>g<subscript>5</subscript></entry>
+ <entry>r<subscript>4</subscript></entry>
+ </row>
+ <row>
+ <entry></entry>
+ <entry></entry>
+ <entry>5</entry>
+ <entry></entry>
+ <entry>r<subscript>1</subscript></entry>
+ <entry>b<subscript>5</subscript></entry>
+ <entry>g<subscript>4</subscript></entry>
+ <entry>r<subscript>3</subscript></entry>
+ </row>
+ <row>
+ <entry></entry>
+ <entry></entry>
+ <entry>6</entry>
+ <entry></entry>
+ <entry>r<subscript>0</subscript></entry>
+ <entry>b<subscript>4</subscript></entry>
+ <entry>g<subscript>3</subscript></entry>
+ <entry>r<subscript>2</subscript></entry>
+ </row>
</tbody>
</tgroup>
</table>
@@ -2188,6 +2642,294 @@
<entry>y<subscript>1</subscript></entry>
<entry>y<subscript>0</subscript></entry>
</row>
+ <row id="MEDIA-BUS-FMT-UYVY12-2X12">
+ <entry>MEDIA_BUS_FMT_UYVY12_2X12</entry>
+ <entry>0x201c</entry>
+ <entry></entry>
+ &dash-ent-20;
+ <entry>u<subscript>11</subscript></entry>
+ <entry>u<subscript>10</subscript></entry>
+ <entry>u<subscript>9</subscript></entry>
+ <entry>u<subscript>8</subscript></entry>
+ <entry>u<subscript>7</subscript></entry>
+ <entry>u<subscript>6</subscript></entry>
+ <entry>u<subscript>5</subscript></entry>
+ <entry>u<subscript>4</subscript></entry>
+ <entry>u<subscript>3</subscript></entry>
+ <entry>u<subscript>2</subscript></entry>
+ <entry>u<subscript>1</subscript></entry>
+ <entry>u<subscript>0</subscript></entry>
+ </row>
+ <row>
+ <entry></entry>
+ <entry></entry>
+ <entry></entry>
+ &dash-ent-20;
+ <entry>y<subscript>11</subscript></entry>
+ <entry>y<subscript>10</subscript></entry>
+ <entry>y<subscript>9</subscript></entry>
+ <entry>y<subscript>8</subscript></entry>
+ <entry>y<subscript>7</subscript></entry>
+ <entry>y<subscript>6</subscript></entry>
+ <entry>y<subscript>5</subscript></entry>
+ <entry>y<subscript>4</subscript></entry>
+ <entry>y<subscript>3</subscript></entry>
+ <entry>y<subscript>2</subscript></entry>
+ <entry>y<subscript>1</subscript></entry>
+ <entry>y<subscript>0</subscript></entry>
+ </row>
+ <row>
+ <entry></entry>
+ <entry></entry>
+ <entry></entry>
+ &dash-ent-20;
+ <entry>v<subscript>11</subscript></entry>
+ <entry>v<subscript>10</subscript></entry>
+ <entry>v<subscript>9</subscript></entry>
+ <entry>v<subscript>8</subscript></entry>
+ <entry>v<subscript>7</subscript></entry>
+ <entry>v<subscript>6</subscript></entry>
+ <entry>v<subscript>5</subscript></entry>
+ <entry>v<subscript>4</subscript></entry>
+ <entry>v<subscript>3</subscript></entry>
+ <entry>v<subscript>2</subscript></entry>
+ <entry>v<subscript>1</subscript></entry>
+ <entry>v<subscript>0</subscript></entry>
+ </row>
+ <row>
+ <entry></entry>
+ <entry></entry>
+ <entry></entry>
+ &dash-ent-20;
+ <entry>y<subscript>11</subscript></entry>
+ <entry>y<subscript>10</subscript></entry>
+ <entry>y<subscript>9</subscript></entry>
+ <entry>y<subscript>8</subscript></entry>
+ <entry>y<subscript>7</subscript></entry>
+ <entry>y<subscript>6</subscript></entry>
+ <entry>y<subscript>5</subscript></entry>
+ <entry>y<subscript>4</subscript></entry>
+ <entry>y<subscript>3</subscript></entry>
+ <entry>y<subscript>2</subscript></entry>
+ <entry>y<subscript>1</subscript></entry>
+ <entry>y<subscript>0</subscript></entry>
+ </row>
+ <row id="MEDIA-BUS-FMT-VYUY12-2X12">
+ <entry>MEDIA_BUS_FMT_VYUY12_2X12</entry>
+ <entry>0x201d</entry>
+ <entry></entry>
+ &dash-ent-20;
+ <entry>v<subscript>11</subscript></entry>
+ <entry>v<subscript>10</subscript></entry>
+ <entry>v<subscript>9</subscript></entry>
+ <entry>v<subscript>8</subscript></entry>
+ <entry>v<subscript>7</subscript></entry>
+ <entry>v<subscript>6</subscript></entry>
+ <entry>v<subscript>5</subscript></entry>
+ <entry>v<subscript>4</subscript></entry>
+ <entry>v<subscript>3</subscript></entry>
+ <entry>v<subscript>2</subscript></entry>
+ <entry>v<subscript>1</subscript></entry>
+ <entry>v<subscript>0</subscript></entry>
+ </row>
+ <row>
+ <entry></entry>
+ <entry></entry>
+ <entry></entry>
+ &dash-ent-20;
+ <entry>y<subscript>11</subscript></entry>
+ <entry>y<subscript>10</subscript></entry>
+ <entry>y<subscript>9</subscript></entry>
+ <entry>y<subscript>8</subscript></entry>
+ <entry>y<subscript>7</subscript></entry>
+ <entry>y<subscript>6</subscript></entry>
+ <entry>y<subscript>5</subscript></entry>
+ <entry>y<subscript>4</subscript></entry>
+ <entry>y<subscript>3</subscript></entry>
+ <entry>y<subscript>2</subscript></entry>
+ <entry>y<subscript>1</subscript></entry>
+ <entry>y<subscript>0</subscript></entry>
+ </row>
+ <row>
+ <entry></entry>
+ <entry></entry>
+ <entry></entry>
+ &dash-ent-20;
+ <entry>u<subscript>11</subscript></entry>
+ <entry>u<subscript>10</subscript></entry>
+ <entry>u<subscript>9</subscript></entry>
+ <entry>u<subscript>8</subscript></entry>
+ <entry>u<subscript>7</subscript></entry>
+ <entry>u<subscript>6</subscript></entry>
+ <entry>u<subscript>5</subscript></entry>
+ <entry>u<subscript>4</subscript></entry>
+ <entry>u<subscript>3</subscript></entry>
+ <entry>u<subscript>2</subscript></entry>
+ <entry>u<subscript>1</subscript></entry>
+ <entry>u<subscript>0</subscript></entry>
+ </row>
+ <row>
+ <entry></entry>
+ <entry></entry>
+ <entry></entry>
+ &dash-ent-20;
+ <entry>y<subscript>11</subscript></entry>
+ <entry>y<subscript>10</subscript></entry>
+ <entry>y<subscript>9</subscript></entry>
+ <entry>y<subscript>8</subscript></entry>
+ <entry>y<subscript>7</subscript></entry>
+ <entry>y<subscript>6</subscript></entry>
+ <entry>y<subscript>5</subscript></entry>
+ <entry>y<subscript>4</subscript></entry>
+ <entry>y<subscript>3</subscript></entry>
+ <entry>y<subscript>2</subscript></entry>
+ <entry>y<subscript>1</subscript></entry>
+ <entry>y<subscript>0</subscript></entry>
+ </row>
+ <row id="MEDIA-BUS-FMT-YUYV12-2X12">
+ <entry>MEDIA_BUS_FMT_YUYV12_2X12</entry>
+ <entry>0x201e</entry>
+ <entry></entry>
+ &dash-ent-20;
+ <entry>y<subscript>11</subscript></entry>
+ <entry>y<subscript>10</subscript></entry>
+ <entry>y<subscript>9</subscript></entry>
+ <entry>y<subscript>8</subscript></entry>
+ <entry>y<subscript>7</subscript></entry>
+ <entry>y<subscript>6</subscript></entry>
+ <entry>y<subscript>5</subscript></entry>
+ <entry>y<subscript>4</subscript></entry>
+ <entry>y<subscript>3</subscript></entry>
+ <entry>y<subscript>2</subscript></entry>
+ <entry>y<subscript>1</subscript></entry>
+ <entry>y<subscript>0</subscript></entry>
+ </row>
+ <row>
+ <entry></entry>
+ <entry></entry>
+ <entry></entry>
+ &dash-ent-20;
+ <entry>u<subscript>11</subscript></entry>
+ <entry>u<subscript>10</subscript></entry>
+ <entry>u<subscript>9</subscript></entry>
+ <entry>u<subscript>8</subscript></entry>
+ <entry>u<subscript>7</subscript></entry>
+ <entry>u<subscript>6</subscript></entry>
+ <entry>u<subscript>5</subscript></entry>
+ <entry>u<subscript>4</subscript></entry>
+ <entry>u<subscript>3</subscript></entry>
+ <entry>u<subscript>2</subscript></entry>
+ <entry>u<subscript>1</subscript></entry>
+ <entry>u<subscript>0</subscript></entry>
+ </row>
+ <row>
+ <entry></entry>
+ <entry></entry>
+ <entry></entry>
+ &dash-ent-20;
+ <entry>y<subscript>11</subscript></entry>
+ <entry>y<subscript>10</subscript></entry>
+ <entry>y<subscript>9</subscript></entry>
+ <entry>y<subscript>8</subscript></entry>
+ <entry>y<subscript>7</subscript></entry>
+ <entry>y<subscript>6</subscript></entry>
+ <entry>y<subscript>5</subscript></entry>
+ <entry>y<subscript>4</subscript></entry>
+ <entry>y<subscript>3</subscript></entry>
+ <entry>y<subscript>2</subscript></entry>
+ <entry>y<subscript>1</subscript></entry>
+ <entry>y<subscript>0</subscript></entry>
+ </row>
+ <row>
+ <entry></entry>
+ <entry></entry>
+ <entry></entry>
+ &dash-ent-20;
+ <entry>v<subscript>11</subscript></entry>
+ <entry>v<subscript>10</subscript></entry>
+ <entry>v<subscript>9</subscript></entry>
+ <entry>v<subscript>8</subscript></entry>
+ <entry>v<subscript>7</subscript></entry>
+ <entry>v<subscript>6</subscript></entry>
+ <entry>v<subscript>5</subscript></entry>
+ <entry>v<subscript>4</subscript></entry>
+ <entry>v<subscript>3</subscript></entry>
+ <entry>v<subscript>2</subscript></entry>
+ <entry>v<subscript>1</subscript></entry>
+ <entry>v<subscript>0</subscript></entry>
+ </row>
+ <row id="MEDIA-BUS-FMT-YVYU12-2X12">
+ <entry>MEDIA_BUS_FMT_YVYU12_2X12</entry>
+ <entry>0x201f</entry>
+ <entry></entry>
+ &dash-ent-20;
+ <entry>y<subscript>11</subscript></entry>
+ <entry>y<subscript>10</subscript></entry>
+ <entry>y<subscript>9</subscript></entry>
+ <entry>y<subscript>8</subscript></entry>
+ <entry>y<subscript>7</subscript></entry>
+ <entry>y<subscript>6</subscript></entry>
+ <entry>y<subscript>5</subscript></entry>
+ <entry>y<subscript>4</subscript></entry>
+ <entry>y<subscript>3</subscript></entry>
+ <entry>y<subscript>2</subscript></entry>
+ <entry>y<subscript>1</subscript></entry>
+ <entry>y<subscript>0</subscript></entry>
+ </row>
+ <row>
+ <entry></entry>
+ <entry></entry>
+ <entry></entry>
+ &dash-ent-20;
+ <entry>v<subscript>11</subscript></entry>
+ <entry>v<subscript>10</subscript></entry>
+ <entry>v<subscript>9</subscript></entry>
+ <entry>v<subscript>8</subscript></entry>
+ <entry>v<subscript>7</subscript></entry>
+ <entry>v<subscript>6</subscript></entry>
+ <entry>v<subscript>5</subscript></entry>
+ <entry>v<subscript>4</subscript></entry>
+ <entry>v<subscript>3</subscript></entry>
+ <entry>v<subscript>2</subscript></entry>
+ <entry>v<subscript>1</subscript></entry>
+ <entry>v<subscript>0</subscript></entry>
+ </row>
+ <row>
+ <entry></entry>
+ <entry></entry>
+ <entry></entry>
+ &dash-ent-20;
+ <entry>y<subscript>11</subscript></entry>
+ <entry>y<subscript>10</subscript></entry>
+ <entry>y<subscript>9</subscript></entry>
+ <entry>y<subscript>8</subscript></entry>
+ <entry>y<subscript>7</subscript></entry>
+ <entry>y<subscript>6</subscript></entry>
+ <entry>y<subscript>5</subscript></entry>
+ <entry>y<subscript>4</subscript></entry>
+ <entry>y<subscript>3</subscript></entry>
+ <entry>y<subscript>2</subscript></entry>
+ <entry>y<subscript>1</subscript></entry>
+ <entry>y<subscript>0</subscript></entry>
+ </row>
+ <row>
+ <entry></entry>
+ <entry></entry>
+ <entry></entry>
+ &dash-ent-20;
+ <entry>u<subscript>11</subscript></entry>
+ <entry>u<subscript>10</subscript></entry>
+ <entry>u<subscript>9</subscript></entry>
+ <entry>u<subscript>8</subscript></entry>
+ <entry>u<subscript>7</subscript></entry>
+ <entry>u<subscript>6</subscript></entry>
+ <entry>u<subscript>5</subscript></entry>
+ <entry>u<subscript>4</subscript></entry>
+ <entry>u<subscript>3</subscript></entry>
+ <entry>u<subscript>2</subscript></entry>
+ <entry>u<subscript>1</subscript></entry>
+ <entry>u<subscript>0</subscript></entry>
+ </row>
<row id="MEDIA-BUS-FMT-UYVY8-1X16">
<entry>MEDIA_BUS_FMT_UYVY8_1X16</entry>
<entry>0x200f</entry>
@@ -2660,55 +3402,48 @@
<entry>u<subscript>1</subscript></entry>
<entry>u<subscript>0</subscript></entry>
</row>
- <row id="MEDIA-BUS-FMT-YUV10-1X30">
- <entry>MEDIA_BUS_FMT_YUV10_1X30</entry>
- <entry>0x2016</entry>
+ <row id="MEDIA-BUS-FMT-VUY8-1X24">
+ <entry>MEDIA_BUS_FMT_VUY8_1X24</entry>
+ <entry>0x201a</entry>
+ <entry></entry>
+ &dash-ent-8;
+ <entry>v<subscript>7</subscript></entry>
+ <entry>v<subscript>6</subscript></entry>
+ <entry>v<subscript>5</subscript></entry>
+ <entry>v<subscript>4</subscript></entry>
+ <entry>v<subscript>3</subscript></entry>
+ <entry>v<subscript>2</subscript></entry>
+ <entry>v<subscript>1</subscript></entry>
+ <entry>v<subscript>0</subscript></entry>
+ <entry>u<subscript>7</subscript></entry>
+ <entry>u<subscript>6</subscript></entry>
+ <entry>u<subscript>5</subscript></entry>
+ <entry>u<subscript>4</subscript></entry>
+ <entry>u<subscript>3</subscript></entry>
+ <entry>u<subscript>2</subscript></entry>
+ <entry>u<subscript>1</subscript></entry>
+ <entry>u<subscript>0</subscript></entry>
+ <entry>y<subscript>7</subscript></entry>
+ <entry>y<subscript>6</subscript></entry>
+ <entry>y<subscript>5</subscript></entry>
+ <entry>y<subscript>4</subscript></entry>
+ <entry>y<subscript>3</subscript></entry>
+ <entry>y<subscript>2</subscript></entry>
+ <entry>y<subscript>1</subscript></entry>
+ <entry>y<subscript>0</subscript></entry>
+ </row>
+ <row id="MEDIA-BUS-FMT-YUV8-1X24">
+ <entry>MEDIA_BUS_FMT_YUV8_1X24</entry>
+ <entry>0x2025</entry>
<entry></entry>
<entry>-</entry>
<entry>-</entry>
- <entry>y<subscript>9</subscript></entry>
- <entry>y<subscript>8</subscript></entry>
- <entry>y<subscript>7</subscript></entry>
- <entry>y<subscript>6</subscript></entry>
- <entry>y<subscript>5</subscript></entry>
- <entry>y<subscript>4</subscript></entry>
- <entry>y<subscript>3</subscript></entry>
- <entry>y<subscript>2</subscript></entry>
- <entry>y<subscript>1</subscript></entry>
- <entry>y<subscript>0</subscript></entry>
- <entry>u<subscript>9</subscript></entry>
- <entry>u<subscript>8</subscript></entry>
- <entry>u<subscript>7</subscript></entry>
- <entry>u<subscript>6</subscript></entry>
- <entry>u<subscript>5</subscript></entry>
- <entry>u<subscript>4</subscript></entry>
- <entry>u<subscript>3</subscript></entry>
- <entry>u<subscript>2</subscript></entry>
- <entry>u<subscript>1</subscript></entry>
- <entry>u<subscript>0</subscript></entry>
- <entry>v<subscript>9</subscript></entry>
- <entry>v<subscript>8</subscript></entry>
- <entry>v<subscript>7</subscript></entry>
- <entry>v<subscript>6</subscript></entry>
- <entry>v<subscript>5</subscript></entry>
- <entry>v<subscript>4</subscript></entry>
- <entry>v<subscript>3</subscript></entry>
- <entry>v<subscript>2</subscript></entry>
- <entry>v<subscript>1</subscript></entry>
- <entry>v<subscript>0</subscript></entry>
- </row>
- <row id="MEDIA-BUS-FMT-AYUV8-1X32">
- <entry>MEDIA_BUS_FMT_AYUV8_1X32</entry>
- <entry>0x2017</entry>
- <entry></entry>
- <entry>a<subscript>7</subscript></entry>
- <entry>a<subscript>6</subscript></entry>
- <entry>a<subscript>5</subscript></entry>
- <entry>a<subscript>4</subscript></entry>
- <entry>a<subscript>3</subscript></entry>
- <entry>a<subscript>2</subscript></entry>
- <entry>a<subscript>1</subscript></entry>
- <entry>a<subscript>0</subscript></entry>
+ <entry>-</entry>
+ <entry>-</entry>
+ <entry>-</entry>
+ <entry>-</entry>
+ <entry>-</entry>
+ <entry>-</entry>
<entry>y<subscript>7</subscript></entry>
<entry>y<subscript>6</subscript></entry>
<entry>y<subscript>5</subscript></entry>
@@ -2734,294 +3469,6 @@
<entry>v<subscript>1</subscript></entry>
<entry>v<subscript>0</subscript></entry>
</row>
- <row id="MEDIA-BUS-FMT-UYVY12-2X12">
- <entry>MEDIA_BUS_FMT_UYVY12_2X12</entry>
- <entry>0x201c</entry>
- <entry></entry>
- &dash-ent-20;
- <entry>u<subscript>11</subscript></entry>
- <entry>u<subscript>10</subscript></entry>
- <entry>u<subscript>9</subscript></entry>
- <entry>u<subscript>8</subscript></entry>
- <entry>u<subscript>7</subscript></entry>
- <entry>u<subscript>6</subscript></entry>
- <entry>u<subscript>5</subscript></entry>
- <entry>u<subscript>4</subscript></entry>
- <entry>u<subscript>3</subscript></entry>
- <entry>u<subscript>2</subscript></entry>
- <entry>u<subscript>1</subscript></entry>
- <entry>u<subscript>0</subscript></entry>
- </row>
- <row>
- <entry></entry>
- <entry></entry>
- <entry></entry>
- &dash-ent-20;
- <entry>y<subscript>11</subscript></entry>
- <entry>y<subscript>10</subscript></entry>
- <entry>y<subscript>9</subscript></entry>
- <entry>y<subscript>8</subscript></entry>
- <entry>y<subscript>7</subscript></entry>
- <entry>y<subscript>6</subscript></entry>
- <entry>y<subscript>5</subscript></entry>
- <entry>y<subscript>4</subscript></entry>
- <entry>y<subscript>3</subscript></entry>
- <entry>y<subscript>2</subscript></entry>
- <entry>y<subscript>1</subscript></entry>
- <entry>y<subscript>0</subscript></entry>
- </row>
- <row>
- <entry></entry>
- <entry></entry>
- <entry></entry>
- &dash-ent-20;
- <entry>v<subscript>11</subscript></entry>
- <entry>v<subscript>10</subscript></entry>
- <entry>v<subscript>9</subscript></entry>
- <entry>v<subscript>8</subscript></entry>
- <entry>v<subscript>7</subscript></entry>
- <entry>v<subscript>6</subscript></entry>
- <entry>v<subscript>5</subscript></entry>
- <entry>v<subscript>4</subscript></entry>
- <entry>v<subscript>3</subscript></entry>
- <entry>v<subscript>2</subscript></entry>
- <entry>v<subscript>1</subscript></entry>
- <entry>v<subscript>0</subscript></entry>
- </row>
- <row>
- <entry></entry>
- <entry></entry>
- <entry></entry>
- &dash-ent-20;
- <entry>y<subscript>11</subscript></entry>
- <entry>y<subscript>10</subscript></entry>
- <entry>y<subscript>9</subscript></entry>
- <entry>y<subscript>8</subscript></entry>
- <entry>y<subscript>7</subscript></entry>
- <entry>y<subscript>6</subscript></entry>
- <entry>y<subscript>5</subscript></entry>
- <entry>y<subscript>4</subscript></entry>
- <entry>y<subscript>3</subscript></entry>
- <entry>y<subscript>2</subscript></entry>
- <entry>y<subscript>1</subscript></entry>
- <entry>y<subscript>0</subscript></entry>
- </row>
- <row id="MEDIA-BUS-FMT-VYUY12-2X12">
- <entry>MEDIA_BUS_FMT_VYUY12_2X12</entry>
- <entry>0x201d</entry>
- <entry></entry>
- &dash-ent-20;
- <entry>v<subscript>11</subscript></entry>
- <entry>v<subscript>10</subscript></entry>
- <entry>v<subscript>9</subscript></entry>
- <entry>v<subscript>8</subscript></entry>
- <entry>v<subscript>7</subscript></entry>
- <entry>v<subscript>6</subscript></entry>
- <entry>v<subscript>5</subscript></entry>
- <entry>v<subscript>4</subscript></entry>
- <entry>v<subscript>3</subscript></entry>
- <entry>v<subscript>2</subscript></entry>
- <entry>v<subscript>1</subscript></entry>
- <entry>v<subscript>0</subscript></entry>
- </row>
- <row>
- <entry></entry>
- <entry></entry>
- <entry></entry>
- &dash-ent-20;
- <entry>y<subscript>11</subscript></entry>
- <entry>y<subscript>10</subscript></entry>
- <entry>y<subscript>9</subscript></entry>
- <entry>y<subscript>8</subscript></entry>
- <entry>y<subscript>7</subscript></entry>
- <entry>y<subscript>6</subscript></entry>
- <entry>y<subscript>5</subscript></entry>
- <entry>y<subscript>4</subscript></entry>
- <entry>y<subscript>3</subscript></entry>
- <entry>y<subscript>2</subscript></entry>
- <entry>y<subscript>1</subscript></entry>
- <entry>y<subscript>0</subscript></entry>
- </row>
- <row>
- <entry></entry>
- <entry></entry>
- <entry></entry>
- &dash-ent-20;
- <entry>u<subscript>11</subscript></entry>
- <entry>u<subscript>10</subscript></entry>
- <entry>u<subscript>9</subscript></entry>
- <entry>u<subscript>8</subscript></entry>
- <entry>u<subscript>7</subscript></entry>
- <entry>u<subscript>6</subscript></entry>
- <entry>u<subscript>5</subscript></entry>
- <entry>u<subscript>4</subscript></entry>
- <entry>u<subscript>3</subscript></entry>
- <entry>u<subscript>2</subscript></entry>
- <entry>u<subscript>1</subscript></entry>
- <entry>u<subscript>0</subscript></entry>
- </row>
- <row>
- <entry></entry>
- <entry></entry>
- <entry></entry>
- &dash-ent-20;
- <entry>y<subscript>11</subscript></entry>
- <entry>y<subscript>10</subscript></entry>
- <entry>y<subscript>9</subscript></entry>
- <entry>y<subscript>8</subscript></entry>
- <entry>y<subscript>7</subscript></entry>
- <entry>y<subscript>6</subscript></entry>
- <entry>y<subscript>5</subscript></entry>
- <entry>y<subscript>4</subscript></entry>
- <entry>y<subscript>3</subscript></entry>
- <entry>y<subscript>2</subscript></entry>
- <entry>y<subscript>1</subscript></entry>
- <entry>y<subscript>0</subscript></entry>
- </row>
- <row id="MEDIA-BUS-FMT-YUYV12-2X12">
- <entry>MEDIA_BUS_FMT_YUYV12_2X12</entry>
- <entry>0x201e</entry>
- <entry></entry>
- &dash-ent-20;
- <entry>y<subscript>11</subscript></entry>
- <entry>y<subscript>10</subscript></entry>
- <entry>y<subscript>9</subscript></entry>
- <entry>y<subscript>8</subscript></entry>
- <entry>y<subscript>7</subscript></entry>
- <entry>y<subscript>6</subscript></entry>
- <entry>y<subscript>5</subscript></entry>
- <entry>y<subscript>4</subscript></entry>
- <entry>y<subscript>3</subscript></entry>
- <entry>y<subscript>2</subscript></entry>
- <entry>y<subscript>1</subscript></entry>
- <entry>y<subscript>0</subscript></entry>
- </row>
- <row>
- <entry></entry>
- <entry></entry>
- <entry></entry>
- &dash-ent-20;
- <entry>u<subscript>11</subscript></entry>
- <entry>u<subscript>10</subscript></entry>
- <entry>u<subscript>9</subscript></entry>
- <entry>u<subscript>8</subscript></entry>
- <entry>u<subscript>7</subscript></entry>
- <entry>u<subscript>6</subscript></entry>
- <entry>u<subscript>5</subscript></entry>
- <entry>u<subscript>4</subscript></entry>
- <entry>u<subscript>3</subscript></entry>
- <entry>u<subscript>2</subscript></entry>
- <entry>u<subscript>1</subscript></entry>
- <entry>u<subscript>0</subscript></entry>
- </row>
- <row>
- <entry></entry>
- <entry></entry>
- <entry></entry>
- &dash-ent-20;
- <entry>y<subscript>11</subscript></entry>
- <entry>y<subscript>10</subscript></entry>
- <entry>y<subscript>9</subscript></entry>
- <entry>y<subscript>8</subscript></entry>
- <entry>y<subscript>7</subscript></entry>
- <entry>y<subscript>6</subscript></entry>
- <entry>y<subscript>5</subscript></entry>
- <entry>y<subscript>4</subscript></entry>
- <entry>y<subscript>3</subscript></entry>
- <entry>y<subscript>2</subscript></entry>
- <entry>y<subscript>1</subscript></entry>
- <entry>y<subscript>0</subscript></entry>
- </row>
- <row>
- <entry></entry>
- <entry></entry>
- <entry></entry>
- &dash-ent-20;
- <entry>v<subscript>11</subscript></entry>
- <entry>v<subscript>10</subscript></entry>
- <entry>v<subscript>9</subscript></entry>
- <entry>v<subscript>8</subscript></entry>
- <entry>v<subscript>7</subscript></entry>
- <entry>v<subscript>6</subscript></entry>
- <entry>v<subscript>5</subscript></entry>
- <entry>v<subscript>4</subscript></entry>
- <entry>v<subscript>3</subscript></entry>
- <entry>v<subscript>2</subscript></entry>
- <entry>v<subscript>1</subscript></entry>
- <entry>v<subscript>0</subscript></entry>
- </row>
- <row id="MEDIA-BUS-FMT-YVYU12-2X12">
- <entry>MEDIA_BUS_FMT_YVYU12_2X12</entry>
- <entry>0x201f</entry>
- <entry></entry>
- &dash-ent-20;
- <entry>y<subscript>11</subscript></entry>
- <entry>y<subscript>10</subscript></entry>
- <entry>y<subscript>9</subscript></entry>
- <entry>y<subscript>8</subscript></entry>
- <entry>y<subscript>7</subscript></entry>
- <entry>y<subscript>6</subscript></entry>
- <entry>y<subscript>5</subscript></entry>
- <entry>y<subscript>4</subscript></entry>
- <entry>y<subscript>3</subscript></entry>
- <entry>y<subscript>2</subscript></entry>
- <entry>y<subscript>1</subscript></entry>
- <entry>y<subscript>0</subscript></entry>
- </row>
- <row>
- <entry></entry>
- <entry></entry>
- <entry></entry>
- &dash-ent-20;
- <entry>v<subscript>11</subscript></entry>
- <entry>v<subscript>10</subscript></entry>
- <entry>v<subscript>9</subscript></entry>
- <entry>v<subscript>8</subscript></entry>
- <entry>v<subscript>7</subscript></entry>
- <entry>v<subscript>6</subscript></entry>
- <entry>v<subscript>5</subscript></entry>
- <entry>v<subscript>4</subscript></entry>
- <entry>v<subscript>3</subscript></entry>
- <entry>v<subscript>2</subscript></entry>
- <entry>v<subscript>1</subscript></entry>
- <entry>v<subscript>0</subscript></entry>
- </row>
- <row>
- <entry></entry>
- <entry></entry>
- <entry></entry>
- &dash-ent-20;
- <entry>y<subscript>11</subscript></entry>
- <entry>y<subscript>10</subscript></entry>
- <entry>y<subscript>9</subscript></entry>
- <entry>y<subscript>8</subscript></entry>
- <entry>y<subscript>7</subscript></entry>
- <entry>y<subscript>6</subscript></entry>
- <entry>y<subscript>5</subscript></entry>
- <entry>y<subscript>4</subscript></entry>
- <entry>y<subscript>3</subscript></entry>
- <entry>y<subscript>2</subscript></entry>
- <entry>y<subscript>1</subscript></entry>
- <entry>y<subscript>0</subscript></entry>
- </row>
- <row>
- <entry></entry>
- <entry></entry>
- <entry></entry>
- &dash-ent-20;
- <entry>u<subscript>11</subscript></entry>
- <entry>u<subscript>10</subscript></entry>
- <entry>u<subscript>9</subscript></entry>
- <entry>u<subscript>8</subscript></entry>
- <entry>u<subscript>7</subscript></entry>
- <entry>u<subscript>6</subscript></entry>
- <entry>u<subscript>5</subscript></entry>
- <entry>u<subscript>4</subscript></entry>
- <entry>u<subscript>3</subscript></entry>
- <entry>u<subscript>2</subscript></entry>
- <entry>u<subscript>1</subscript></entry>
- <entry>u<subscript>0</subscript></entry>
- </row>
<row id="MEDIA-BUS-FMT-UYVY12-1X24">
<entry>MEDIA_BUS_FMT_UYVY12_1X24</entry>
<entry>0x2020</entry>
@@ -3262,6 +3709,80 @@
<entry>u<subscript>1</subscript></entry>
<entry>u<subscript>0</subscript></entry>
</row>
+ <row id="MEDIA-BUS-FMT-YUV10-1X30">
+ <entry>MEDIA_BUS_FMT_YUV10_1X30</entry>
+ <entry>0x2016</entry>
+ <entry></entry>
+ <entry>-</entry>
+ <entry>-</entry>
+ <entry>y<subscript>9</subscript></entry>
+ <entry>y<subscript>8</subscript></entry>
+ <entry>y<subscript>7</subscript></entry>
+ <entry>y<subscript>6</subscript></entry>
+ <entry>y<subscript>5</subscript></entry>
+ <entry>y<subscript>4</subscript></entry>
+ <entry>y<subscript>3</subscript></entry>
+ <entry>y<subscript>2</subscript></entry>
+ <entry>y<subscript>1</subscript></entry>
+ <entry>y<subscript>0</subscript></entry>
+ <entry>u<subscript>9</subscript></entry>
+ <entry>u<subscript>8</subscript></entry>
+ <entry>u<subscript>7</subscript></entry>
+ <entry>u<subscript>6</subscript></entry>
+ <entry>u<subscript>5</subscript></entry>
+ <entry>u<subscript>4</subscript></entry>
+ <entry>u<subscript>3</subscript></entry>
+ <entry>u<subscript>2</subscript></entry>
+ <entry>u<subscript>1</subscript></entry>
+ <entry>u<subscript>0</subscript></entry>
+ <entry>v<subscript>9</subscript></entry>
+ <entry>v<subscript>8</subscript></entry>
+ <entry>v<subscript>7</subscript></entry>
+ <entry>v<subscript>6</subscript></entry>
+ <entry>v<subscript>5</subscript></entry>
+ <entry>v<subscript>4</subscript></entry>
+ <entry>v<subscript>3</subscript></entry>
+ <entry>v<subscript>2</subscript></entry>
+ <entry>v<subscript>1</subscript></entry>
+ <entry>v<subscript>0</subscript></entry>
+ </row>
+ <row id="MEDIA-BUS-FMT-AYUV8-1X32">
+ <entry>MEDIA_BUS_FMT_AYUV8_1X32</entry>
+ <entry>0x2017</entry>
+ <entry></entry>
+ <entry>a<subscript>7</subscript></entry>
+ <entry>a<subscript>6</subscript></entry>
+ <entry>a<subscript>5</subscript></entry>
+ <entry>a<subscript>4</subscript></entry>
+ <entry>a<subscript>3</subscript></entry>
+ <entry>a<subscript>2</subscript></entry>
+ <entry>a<subscript>1</subscript></entry>
+ <entry>a<subscript>0</subscript></entry>
+ <entry>y<subscript>7</subscript></entry>
+ <entry>y<subscript>6</subscript></entry>
+ <entry>y<subscript>5</subscript></entry>
+ <entry>y<subscript>4</subscript></entry>
+ <entry>y<subscript>3</subscript></entry>
+ <entry>y<subscript>2</subscript></entry>
+ <entry>y<subscript>1</subscript></entry>
+ <entry>y<subscript>0</subscript></entry>
+ <entry>u<subscript>7</subscript></entry>
+ <entry>u<subscript>6</subscript></entry>
+ <entry>u<subscript>5</subscript></entry>
+ <entry>u<subscript>4</subscript></entry>
+ <entry>u<subscript>3</subscript></entry>
+ <entry>u<subscript>2</subscript></entry>
+ <entry>u<subscript>1</subscript></entry>
+ <entry>u<subscript>0</subscript></entry>
+ <entry>v<subscript>7</subscript></entry>
+ <entry>v<subscript>6</subscript></entry>
+ <entry>v<subscript>5</subscript></entry>
+ <entry>v<subscript>4</subscript></entry>
+ <entry>v<subscript>3</subscript></entry>
+ <entry>v<subscript>2</subscript></entry>
+ <entry>v<subscript>1</subscript></entry>
+ <entry>v<subscript>0</subscript></entry>
+ </row>
</tbody>
</tgroup>
</table>
diff --git a/Documentation/DocBook/media/v4l/v4l2.xml b/Documentation/DocBook/media/v4l/v4l2.xml
index ac0f8d9..e98caa1 100644
--- a/Documentation/DocBook/media/v4l/v4l2.xml
+++ b/Documentation/DocBook/media/v4l/v4l2.xml
@@ -136,6 +136,7 @@
<year>2012</year>
<year>2013</year>
<year>2014</year>
+ <year>2015</year>
<holder>Bill Dirks, Michael H. Schimek, Hans Verkuil, Martin
Rubli, Andy Walls, Muralidharan Karicheri, Mauro Carvalho Chehab,
Pawel Osciak</holder>
@@ -152,6 +153,14 @@
applications. -->
<revision>
+ <revnumber>3.21</revnumber>
+ <date>2015-02-13</date>
+ <authorinitials>mcc</authorinitials>
+ <revremark>Fix documentation for media controller device nodes and add support for DVB device nodes.
+Add support for Tuner sub-device.
+ </revremark>
+ </revision>
+ <revision>
<revnumber>3.19</revnumber>
<date>2014-12-05</date>
<authorinitials>hv</authorinitials>
diff --git a/Documentation/DocBook/media/v4l/vidioc-cropcap.xml b/Documentation/DocBook/media/v4l/vidioc-cropcap.xml
index 1f5ed64..50cb940 100644
--- a/Documentation/DocBook/media/v4l/vidioc-cropcap.xml
+++ b/Documentation/DocBook/media/v4l/vidioc-cropcap.xml
@@ -59,6 +59,11 @@
switch can occur implicit when switching the video input or
output.</para>
+<para>Do not use the multiplanar buffer types. Use <constant>V4L2_BUF_TYPE_VIDEO_CAPTURE</constant>
+instead of <constant>V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE</constant>
+and use <constant>V4L2_BUF_TYPE_VIDEO_OUTPUT</constant> instead of
+<constant>V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE</constant>.</para>
+
<para>This ioctl must be implemented for video capture or output devices that
support cropping and/or scaling and/or have non-square pixels, and for overlay devices.</para>
@@ -73,9 +78,7 @@
<entry>Type of the data stream, set by the application.
Only these types are valid here:
<constant>V4L2_BUF_TYPE_VIDEO_CAPTURE</constant>,
-<constant>V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE</constant>,
-<constant>V4L2_BUF_TYPE_VIDEO_OUTPUT</constant>,
-<constant>V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE</constant> and
+<constant>V4L2_BUF_TYPE_VIDEO_OUTPUT</constant> and
<constant>V4L2_BUF_TYPE_VIDEO_OVERLAY</constant>. See <xref linkend="v4l2-buf-type" />.</entry>
</row>
<row>
diff --git a/Documentation/DocBook/media/v4l/vidioc-dqevent.xml b/Documentation/DocBook/media/v4l/vidioc-dqevent.xml
index b036f89..50ccd33 100644
--- a/Documentation/DocBook/media/v4l/vidioc-dqevent.xml
+++ b/Documentation/DocBook/media/v4l/vidioc-dqevent.xml
@@ -64,7 +64,7 @@
<entry>__u32</entry>
<entry><structfield>type</structfield></entry>
<entry></entry>
- <entry>Type of the event.</entry>
+ <entry>Type of the event, see <xref linkend="event-type" />.</entry>
</row>
<row>
<entry>union</entry>
@@ -154,6 +154,113 @@
</tgroup>
</table>
+ <table frame="none" pgwide="1" id="event-type">
+ <title>Event Types</title>
+ <tgroup cols="3">
+ &cs-def;
+ <tbody valign="top">
+ <row>
+ <entry><constant>V4L2_EVENT_ALL</constant></entry>
+ <entry>0</entry>
+ <entry>All events. V4L2_EVENT_ALL is valid only for
+ VIDIOC_UNSUBSCRIBE_EVENT for unsubscribing all events at once.
+ </entry>
+ </row>
+ <row>
+ <entry><constant>V4L2_EVENT_VSYNC</constant></entry>
+ <entry>1</entry>
+ <entry>This event is triggered on the vertical sync.
+ This event has a &v4l2-event-vsync; associated with it.
+ </entry>
+ </row>
+ <row>
+ <entry><constant>V4L2_EVENT_EOS</constant></entry>
+ <entry>2</entry>
+ <entry>This event is triggered when the end of a stream is reached.
+ This is typically used with MPEG decoders to report to the application
+ when the last of the MPEG stream has been decoded.
+ </entry>
+ </row>
+ <row>
+ <entry><constant>V4L2_EVENT_CTRL</constant></entry>
+ <entry>3</entry>
+ <entry><para>This event requires that the <structfield>id</structfield>
+ matches the control ID from which you want to receive events.
+ This event is triggered if the control's value changes, if a
+ button control is pressed or if the control's flags change.
+ This event has a &v4l2-event-ctrl; associated with it. This struct
+ contains much of the same information as &v4l2-queryctrl; and
+ &v4l2-control;.</para>
+
+ <para>If the event is generated due to a call to &VIDIOC-S-CTRL; or
+ &VIDIOC-S-EXT-CTRLS;, then the event will <emphasis>not</emphasis> be sent to
+ the file handle that called the ioctl function. This prevents
+ nasty feedback loops. If you <emphasis>do</emphasis> want to get the
+ event, then set the <constant>V4L2_EVENT_SUB_FL_ALLOW_FEEDBACK</constant>
+ flag.
+ </para>
+
+ <para>This event type will ensure that no information is lost when
+ more events are raised than there is room internally. In that
+ case the &v4l2-event-ctrl; of the second-oldest event is kept,
+ but the <structfield>changes</structfield> field of the
+ second-oldest event is ORed with the <structfield>changes</structfield>
+ field of the oldest event.</para>
+ </entry>
+ </row>
+ <row>
+ <entry><constant>V4L2_EVENT_FRAME_SYNC</constant></entry>
+ <entry>4</entry>
+ <entry>
+ <para>Triggered immediately when the reception of a
+ frame has begun. This event has a
+ &v4l2-event-frame-sync; associated with it.</para>
+
+ <para>If the hardware needs to be stopped in the case of a
+ buffer underrun it might not be able to generate this event.
+ In such cases the <structfield>frame_sequence</structfield>
+ field in &v4l2-event-frame-sync; will not be incremented. This
+ causes two consecutive frame sequence numbers to have n times
+ frame interval in between them.</para>
+ </entry>
+ </row>
+ <row>
+ <entry><constant>V4L2_EVENT_SOURCE_CHANGE</constant></entry>
+ <entry>5</entry>
+ <entry>
+ <para>This event is triggered when a source parameter change is
+ detected during runtime by the video device. It can be a
+ runtime resolution change triggered by a video decoder or the
+ format change happening on an input connector.
+ This event requires that the <structfield>id</structfield>
+ matches the input index (when used with a video device node)
+ or the pad index (when used with a subdevice node) from which
+ you want to receive events.</para>
+
+ <para>This event has a &v4l2-event-src-change; associated
+ with it. The <structfield>changes</structfield> bitfield denotes
+ what has changed for the subscribed pad. If multiple events
+ occurred before application could dequeue them, then the changes
+ will have the ORed value of all the events generated.</para>
+ </entry>
+ </row>
+ <row>
+ <entry><constant>V4L2_EVENT_MOTION_DET</constant></entry>
+ <entry>6</entry>
+ <entry>
+ <para>Triggered whenever the motion detection state for one or more of the regions
+ changes. This event has a &v4l2-event-motion-det; associated with it.</para>
+ </entry>
+ </row>
+ <row>
+ <entry><constant>V4L2_EVENT_PRIVATE_START</constant></entry>
+ <entry>0x08000000</entry>
+ <entry>Base event number for driver-private events.</entry>
+ </row>
+ </tbody>
+ </tgroup>
+ </table>
+
<table frame="none" pgwide="1" id="v4l2-event-vsync">
<title>struct <structname>v4l2_event_vsync</structname></title>
<tgroup cols="3">
@@ -177,7 +284,7 @@
<entry>__u32</entry>
<entry><structfield>changes</structfield></entry>
<entry></entry>
- <entry>A bitmask that tells what has changed. See <xref linkend="changes-flags" />.</entry>
+ <entry>A bitmask that tells what has changed. See <xref linkend="ctrl-changes-flags" />.</entry>
</row>
<row>
<entry>__u32</entry>
@@ -309,8 +416,8 @@
</tgroup>
</table>
- <table pgwide="1" frame="none" id="changes-flags">
- <title>Changes</title>
+ <table pgwide="1" frame="none" id="ctrl-changes-flags">
+ <title>Control Changes</title>
<tgroup cols="3">
&cs-def;
<tbody valign="top">
@@ -318,9 +425,9 @@
<entry><constant>V4L2_EVENT_CTRL_CH_VALUE</constant></entry>
<entry>0x0001</entry>
<entry>This control event was triggered because the value of the control
- changed. Special case: if a button control is pressed, then this
- event is sent as well, even though there is not explicit value
- associated with a button control.</entry>
+ changed. Special cases: volatile controls do not generate this event;
+ if a control has the <constant>V4L2_CTRL_FLAG_EXECUTE_ON_WRITE</constant>
+ flag set, then this event is sent as well, regardless of its value.</entry>
</row>
<row>
<entry><constant>V4L2_EVENT_CTRL_CH_FLAGS</constant></entry>
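For illustration only (not part of the patch), a minimal userspace sketch of the control-event flow documented in the table above might look like the following; the device node and control ID are placeholder assumptions and error handling is omitted:

	#include <fcntl.h>
	#include <stdio.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <linux/videodev2.h>

	int main(void)
	{
		int fd = open("/dev/video0", O_RDWR);	/* placeholder device node */
		struct v4l2_event_subscription sub;
		struct v4l2_event ev;

		memset(&sub, 0, sizeof(sub));
		sub.type = V4L2_EVENT_CTRL;			/* see the Event Types table */
		sub.id = V4L2_CID_BRIGHTNESS;			/* example control, assumed present */
		sub.flags = V4L2_EVENT_SUB_FL_ALLOW_FEEDBACK;	/* also see this handle's own S_CTRL calls */
		ioctl(fd, VIDIOC_SUBSCRIBE_EVENT, &sub);

		ioctl(fd, VIDIOC_DQEVENT, &ev);	/* blocks until an event is queued */
		if (ev.type == V4L2_EVENT_CTRL &&
		    (ev.u.ctrl.changes & V4L2_EVENT_CTRL_CH_VALUE))
			printf("new control value: %d\n", ev.u.ctrl.value);

		return 0;
	}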
diff --git a/Documentation/DocBook/media/v4l/vidioc-g-crop.xml b/Documentation/DocBook/media/v4l/vidioc-g-crop.xml
index 75c6a93..e6c4efb 100644
--- a/Documentation/DocBook/media/v4l/vidioc-g-crop.xml
+++ b/Documentation/DocBook/media/v4l/vidioc-g-crop.xml
@@ -70,6 +70,11 @@
<constant>VIDIOC_S_CROP</constant> ioctl with a pointer to this
structure.</para>
+<para>Do not use the multiplanar buffer types. Use <constant>V4L2_BUF_TYPE_VIDEO_CAPTURE</constant>
+instead of <constant>V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE</constant>
+and use <constant>V4L2_BUF_TYPE_VIDEO_OUTPUT</constant> instead of
+<constant>V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE</constant>.</para>
+
<para>The driver first adjusts the requested dimensions against
hardware limits, &ie; the bounds given by the capture/output window,
and it rounds to the closest possible values of horizontal and
diff --git a/Documentation/DocBook/media/v4l/vidioc-g-dv-timings.xml b/Documentation/DocBook/media/v4l/vidioc-g-dv-timings.xml
index c433657..764b635 100644
--- a/Documentation/DocBook/media/v4l/vidioc-g-dv-timings.xml
+++ b/Documentation/DocBook/media/v4l/vidioc-g-dv-timings.xml
@@ -318,10 +318,20 @@
</row>
<row>
<entry>V4L2_DV_FL_HALF_LINE</entry>
- <entry>Specific to interlaced formats: if set, then field 1 (aka the odd field)
-is really one half-line longer and field 2 (aka the even field) is really one half-line
-shorter, so each field has exactly the same number of half-lines. Whether half-lines can be
-detected or used depends on the hardware.
+ <entry>Specific to interlaced formats: if set, then the vertical frontporch
+of field 1 (aka the odd field) is really one half-line longer and the vertical backporch
+of field 2 (aka the even field) is really one half-line shorter, so each field has exactly
+the same number of half-lines. Whether half-lines can be detected or used depends on
+the hardware.
+ </entry>
+ </row>
+ <row>
+ <entry>V4L2_DV_FL_IS_CE_VIDEO</entry>
+ <entry>If set, then this is a Consumer Electronics (CE) video format.
+Such formats differ from other formats (commonly called IT formats) in that if
+R'G'B' encoding is used then by default the R'G'B' values use limited range
+(i.e. 16-235) as opposed to full range (i.e. 0-255). All formats defined in CEA-861
+except for the 640x480p59.94 format are CE formats.
</entry>
</row>
</tbody>
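As a rough sketch of how an application might act on the new flag, assuming the file descriptor refers to a video node that supports VIDIOC_QUERY_DV_TIMINGS:

	#include <string.h>
	#include <sys/ioctl.h>
	#include <linux/videodev2.h>

	/* Returns nonzero if the detected timings are a CE video format,
	 * which defaults to limited range R'G'B' (16-235). */
	static int default_range_is_limited(int fd)
	{
		struct v4l2_dv_timings t;

		memset(&t, 0, sizeof(t));
		if (ioctl(fd, VIDIOC_QUERY_DV_TIMINGS, &t) < 0)
			return 0;	/* assume full range if the query fails */
		return !!(t.bt.flags & V4L2_DV_FL_IS_CE_VIDEO);
	}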
diff --git a/Documentation/DocBook/media/v4l/vidioc-g-fbuf.xml b/Documentation/DocBook/media/v4l/vidioc-g-fbuf.xml
index 2046073..77607cc 100644
--- a/Documentation/DocBook/media/v4l/vidioc-g-fbuf.xml
+++ b/Documentation/DocBook/media/v4l/vidioc-g-fbuf.xml
@@ -240,9 +240,9 @@
page boundary. Capture devices may write padding bytes, the value is
undefined. Output devices ignore the contents of padding
bytes.</para><para>When the image format is planar the
-<structfield>bytesperline</structfield> value applies to the largest
+<structfield>bytesperline</structfield> value applies to the first
plane and is divided by the same factor as the
-<structfield>width</structfield> field for any smaller planes. For
+<structfield>width</structfield> field for the other planes. For
example the Cb and Cr planes of a YUV 4:2:0 image have half as many
padding bytes following each line as the Y plane. To avoid ambiguities
drivers must return a <structfield>bytesperline</structfield> value
diff --git a/Documentation/DocBook/media/v4l/vidioc-g-selection.xml b/Documentation/DocBook/media/v4l/vidioc-g-selection.xml
index 9c04ac8..0bb5c06 100644
--- a/Documentation/DocBook/media/v4l/vidioc-g-selection.xml
+++ b/Documentation/DocBook/media/v4l/vidioc-g-selection.xml
@@ -60,8 +60,8 @@
<para>To query the cropping (composing) rectangle set &v4l2-selection;
<structfield> type </structfield> field to the respective buffer type.
-Do not use multiplanar buffers. Use <constant>V4L2_BUF_TYPE_VIDEO_CAPTURE</constant>
-instead of <constant>V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE</constant>. Use
+Do not use the multiplanar buffer types. Use <constant>V4L2_BUF_TYPE_VIDEO_CAPTURE</constant>
+instead of <constant>V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE</constant> and use
<constant>V4L2_BUF_TYPE_VIDEO_OUTPUT</constant> instead of
<constant>V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE</constant>. The next step is
setting the value of &v4l2-selection; <structfield>target</structfield> field
diff --git a/Documentation/DocBook/media/v4l/vidioc-g-sliced-vbi-cap.xml b/Documentation/DocBook/media/v4l/vidioc-g-sliced-vbi-cap.xml
index bd015d1..d05623c 100644
--- a/Documentation/DocBook/media/v4l/vidioc-g-sliced-vbi-cap.xml
+++ b/Documentation/DocBook/media/v4l/vidioc-g-sliced-vbi-cap.xml
@@ -205,7 +205,7 @@
<row>
<entry><constant>V4L2_SLICED_CAPTION_525</constant></entry>
<entry>0x1000</entry>
- <entry><xref linkend="eia608" /></entry>
+ <entry><xref linkend="cea608" /></entry>
<entry>NTSC line 21, 284 (second field 21)</entry>
<entry>Two bytes in transmission order, including parity
bit, lsb first transmitted.</entry>
diff --git a/Documentation/DocBook/media/v4l/vidioc-querycap.xml b/Documentation/DocBook/media/v4l/vidioc-querycap.xml
index d0c5e60..20fda75 100644
--- a/Documentation/DocBook/media/v4l/vidioc-querycap.xml
+++ b/Documentation/DocBook/media/v4l/vidioc-querycap.xml
@@ -102,10 +102,10 @@
<entry>__u32</entry>
<entry><structfield>version</structfield></entry>
<entry><para>Version number of the driver.</para>
-<para>Starting on kernel 3.1, the version reported is provided per
-V4L2 subsystem, following the same Kernel numberation scheme. However, it
-should not always return the same version as the kernel, if, for example,
-an stable or distribution-modified kernel uses the V4L2 stack from a
+<para>Starting with kernel 3.1, the version reported is provided by the
+V4L2 subsystem following the kernel numbering scheme. However, it
+may not always return the same version as the kernel if, for example,
+a stable or distribution-modified kernel uses the V4L2 stack from a
newer kernel.</para>
<para>The version number is formatted using the
<constant>KERNEL_VERSION()</constant> macro:</para></entry>
diff --git a/Documentation/DocBook/media/v4l/vidioc-queryctrl.xml b/Documentation/DocBook/media/v4l/vidioc-queryctrl.xml
index 2bd98fd..dc83ad7 100644
--- a/Documentation/DocBook/media/v4l/vidioc-queryctrl.xml
+++ b/Documentation/DocBook/media/v4l/vidioc-queryctrl.xml
@@ -600,7 +600,9 @@
changes continuously. A typical example would be the current gain value if the device
is in auto-gain mode. In such a case the hardware calculates the gain value based on
the lighting conditions which can change over time. Note that setting a new value for
-a volatile control will have no effect. The new value will just be ignored.</entry>
+a volatile control will have no effect and no <constant>V4L2_EVENT_CTRL_CH_VALUE</constant>
+event will be sent, unless the <constant>V4L2_CTRL_FLAG_EXECUTE_ON_WRITE</constant> flag
+(see below) is also set. Otherwise the new value will just be ignored.</entry>
</row>
<row>
<entry><constant>V4L2_CTRL_FLAG_HAS_PAYLOAD</constant></entry>
@@ -610,6 +612,14 @@
that are an array, string, or have a compound type. In all cases you have to set a
pointer to memory containing the payload of the control.</entry>
</row>
+ <row>
+ <entry><constant>V4L2_CTRL_FLAG_EXECUTE_ON_WRITE</constant></entry>
+ <entry>0x0200</entry>
+ <entry>The value provided to the control will be propagated to the driver
+even if it remains constant. This is required when the control represents an action
+on the hardware. For example: clearing an error flag or triggering the flash. All
+controls of type <constant>V4L2_CTRL_TYPE_BUTTON</constant> have this flag set.</entry>
+ </row>
</tbody>
</tgroup>
</table>
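A minimal sketch of how userspace could test for the new flag when enumerating controls; the control ID below is only an example:

	#include <string.h>
	#include <sys/ioctl.h>
	#include <linux/videodev2.h>

	/* Returns nonzero if writes to the control always reach the driver,
	 * even when the written value equals the current one. */
	static int executes_on_write(int fd, unsigned int ctrl_id)
	{
		struct v4l2_queryctrl qc;

		memset(&qc, 0, sizeof(qc));
		qc.id = ctrl_id;	/* e.g. V4L2_CID_DO_WHITE_BALANCE, a button control */
		if (ioctl(fd, VIDIOC_QUERYCTRL, &qc) < 0)
			return 0;
		return !!(qc.flags & V4L2_CTRL_FLAG_EXECUTE_ON_WRITE);
	}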
diff --git a/Documentation/DocBook/media/v4l/vidioc-subdev-enum-frame-interval.xml b/Documentation/DocBook/media/v4l/vidioc-subdev-enum-frame-interval.xml
index 2f8f4f0..cff59f5 100644
--- a/Documentation/DocBook/media/v4l/vidioc-subdev-enum-frame-interval.xml
+++ b/Documentation/DocBook/media/v4l/vidioc-subdev-enum-frame-interval.xml
@@ -67,9 +67,9 @@
<para>To enumerate frame intervals applications initialize the
<structfield>index</structfield>, <structfield>pad</structfield>,
- <structfield>code</structfield>, <structfield>width</structfield> and
- <structfield>height</structfield> fields of
- &v4l2-subdev-frame-interval-enum; and call the
+ <structfield>which</structfield>, <structfield>code</structfield>,
+ <structfield>width</structfield> and <structfield>height</structfield>
+ fields of &v4l2-subdev-frame-interval-enum; and call the
<constant>VIDIOC_SUBDEV_ENUM_FRAME_INTERVAL</constant> ioctl with a pointer
to this structure. Drivers fill the rest of the structure or return
an &EINVAL; if one of the input fields is invalid. All frame intervals are
@@ -123,7 +123,12 @@
</row>
<row>
<entry>__u32</entry>
- <entry><structfield>reserved</structfield>[9]</entry>
+ <entry><structfield>which</structfield></entry>
+ <entry>Frame intervals to be enumerated, from &v4l2-subdev-format-whence;.</entry>
+ </row>
+ <row>
+ <entry>__u32</entry>
+ <entry><structfield>reserved</structfield>[8]</entry>
<entry>Reserved for future extensions. Applications and drivers must
set the array to zero.</entry>
</row>
diff --git a/Documentation/DocBook/media/v4l/vidioc-subdev-enum-frame-size.xml b/Documentation/DocBook/media/v4l/vidioc-subdev-enum-frame-size.xml
index 79ce42b..abd545e 100644
--- a/Documentation/DocBook/media/v4l/vidioc-subdev-enum-frame-size.xml
+++ b/Documentation/DocBook/media/v4l/vidioc-subdev-enum-frame-size.xml
@@ -61,9 +61,9 @@
ioctl.</para>
<para>To enumerate frame sizes applications initialize the
- <structfield>pad</structfield>, <structfield>code</structfield> and
- <structfield>index</structfield> fields of the
- &v4l2-subdev-mbus-code-enum; and call the
+ <structfield>pad</structfield>, <structfield>which</structfield>,
+ <structfield>code</structfield> and <structfield>index</structfield>
+ fields of the &v4l2-subdev-mbus-code-enum; and call the
<constant>VIDIOC_SUBDEV_ENUM_FRAME_SIZE</constant> ioctl with a pointer to
the structure. Drivers fill the minimum and maximum frame sizes or return
an &EINVAL; if one of the input parameters is invalid.</para>
@@ -127,7 +127,12 @@
</row>
<row>
<entry>__u32</entry>
- <entry><structfield>reserved</structfield>[9]</entry>
+ <entry><structfield>which</structfield></entry>
+ <entry>Frame sizes to be enumerated, from &v4l2-subdev-format-whence;.</entry>
+ </row>
+ <row>
+ <entry>__u32</entry>
+ <entry><structfield>reserved</structfield>[8]</entry>
<entry>Reserved for future extensions. Applications and drivers must
set the array to zero.</entry>
</row>
diff --git a/Documentation/DocBook/media/v4l/vidioc-subdev-enum-mbus-code.xml b/Documentation/DocBook/media/v4l/vidioc-subdev-enum-mbus-code.xml
index a6b3432..0bcb278 100644
--- a/Documentation/DocBook/media/v4l/vidioc-subdev-enum-mbus-code.xml
+++ b/Documentation/DocBook/media/v4l/vidioc-subdev-enum-mbus-code.xml
@@ -56,8 +56,8 @@
</note>
<para>To enumerate media bus formats available at a given sub-device pad
- applications initialize the <structfield>pad</structfield> and
- <structfield>index</structfield> fields of &v4l2-subdev-mbus-code-enum; and
+ applications initialize the <structfield>pad</structfield>, <structfield>which</structfield>
+ and <structfield>index</structfield> fields of &v4l2-subdev-mbus-code-enum; and
call the <constant>VIDIOC_SUBDEV_ENUM_MBUS_CODE</constant> ioctl with a
pointer to this structure. Drivers fill the rest of the structure or return
an &EINVAL; if either the <structfield>pad</structfield> or
@@ -93,7 +93,12 @@
</row>
<row>
<entry>__u32</entry>
- <entry><structfield>reserved</structfield>[9]</entry>
+ <entry><structfield>which</structfield></entry>
+ <entry>Media bus format codes to be enumerated, from &v4l2-subdev-format-whence;.</entry>
+ </row>
+ <row>
+ <entry>__u32</entry>
+ <entry><structfield>reserved</structfield>[8]</entry>
<entry>Reserved for future extensions. Applications and drivers must
set the array to zero.</entry>
</row>
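A minimal sketch of enumerating media bus codes with the new <structfield>which</structfield> field, assuming an open sub-device node and the patched interface:

	#include <stdio.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <linux/v4l2-subdev.h>

	/* Enumerate the media bus codes of pad 0 for the active configuration. */
	static void enum_active_codes(int fd)
	{
		struct v4l2_subdev_mbus_code_enum code;

		memset(&code, 0, sizeof(code));
		code.pad = 0;
		code.which = V4L2_SUBDEV_FORMAT_ACTIVE;	/* field added by this patch */

		for (code.index = 0;
		     ioctl(fd, VIDIOC_SUBDEV_ENUM_MBUS_CODE, &code) == 0;
		     code.index++)
			printf("pad 0 supports media bus code 0x%x\n", code.code);
	}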
diff --git a/Documentation/DocBook/media/v4l/vidioc-subscribe-event.xml b/Documentation/DocBook/media/v4l/vidioc-subscribe-event.xml
index d7c9365..d0332f6 100644
--- a/Documentation/DocBook/media/v4l/vidioc-subscribe-event.xml
+++ b/Documentation/DocBook/media/v4l/vidioc-subscribe-event.xml
@@ -60,7 +60,9 @@
<row>
<entry>__u32</entry>
<entry><structfield>type</structfield></entry>
- <entry>Type of the event.</entry>
+ <entry>Type of the event, see <xref linkend="event-type" />. Note that
+<constant>V4L2_EVENT_ALL</constant> can be used with VIDIOC_UNSUBSCRIBE_EVENT
+for unsubscribing all events at once.</entry>
</row>
<row>
<entry>__u32</entry>
@@ -84,113 +86,6 @@
</tgroup>
</table>
- <table frame="none" pgwide="1" id="event-type">
- <title>Event Types</title>
- <tgroup cols="3">
- &cs-def;
- <tbody valign="top">
- <row>
- <entry><constant>V4L2_EVENT_ALL</constant></entry>
- <entry>0</entry>
- <entry>All events. V4L2_EVENT_ALL is valid only for
- VIDIOC_UNSUBSCRIBE_EVENT for unsubscribing all events at once.
- </entry>
- </row>
- <row>
- <entry><constant>V4L2_EVENT_VSYNC</constant></entry>
- <entry>1</entry>
- <entry>This event is triggered on the vertical sync.
- This event has a &v4l2-event-vsync; associated with it.
- </entry>
- </row>
- <row>
- <entry><constant>V4L2_EVENT_EOS</constant></entry>
- <entry>2</entry>
- <entry>This event is triggered when the end of a stream is reached.
- This is typically used with MPEG decoders to report to the application
- when the last of the MPEG stream has been decoded.
- </entry>
- </row>
- <row>
- <entry><constant>V4L2_EVENT_CTRL</constant></entry>
- <entry>3</entry>
- <entry><para>This event requires that the <structfield>id</structfield>
- matches the control ID from which you want to receive events.
- This event is triggered if the control's value changes, if a
- button control is pressed or if the control's flags change.
- This event has a &v4l2-event-ctrl; associated with it. This struct
- contains much of the same information as &v4l2-queryctrl; and
- &v4l2-control;.</para>
-
- <para>If the event is generated due to a call to &VIDIOC-S-CTRL; or
- &VIDIOC-S-EXT-CTRLS;, then the event will <emphasis>not</emphasis> be sent to
- the file handle that called the ioctl function. This prevents
- nasty feedback loops. If you <emphasis>do</emphasis> want to get the
- event, then set the <constant>V4L2_EVENT_SUB_FL_ALLOW_FEEDBACK</constant>
- flag.
- </para>
-
- <para>This event type will ensure that no information is lost when
- more events are raised than there is room internally. In that
- case the &v4l2-event-ctrl; of the second-oldest event is kept,
- but the <structfield>changes</structfield> field of the
- second-oldest event is ORed with the <structfield>changes</structfield>
- field of the oldest event.</para>
- </entry>
- </row>
- <row>
- <entry><constant>V4L2_EVENT_FRAME_SYNC</constant></entry>
- <entry>4</entry>
- <entry>
- <para>Triggered immediately when the reception of a
- frame has begun. This event has a
- &v4l2-event-frame-sync; associated with it.</para>
-
- <para>If the hardware needs to be stopped in the case of a
- buffer underrun it might not be able to generate this event.
- In such cases the <structfield>frame_sequence</structfield>
- field in &v4l2-event-frame-sync; will not be incremented. This
- causes two consecutive frame sequence numbers to have n times
- frame interval in between them.</para>
- </entry>
- </row>
- <row>
- <entry><constant>V4L2_EVENT_SOURCE_CHANGE</constant></entry>
- <entry>5</entry>
- <entry>
- <para>This event is triggered when a source parameter change is
- detected during runtime by the video device. It can be a
- runtime resolution change triggered by a video decoder or the
- format change happening on an input connector.
- This event requires that the <structfield>id</structfield>
- matches the input index (when used with a video device node)
- or the pad index (when used with a subdevice node) from which
- you want to receive events.</para>
-
- <para>This event has a &v4l2-event-src-change; associated
- with it. The <structfield>changes</structfield> bitfield denotes
- what has changed for the subscribed pad. If multiple events
- occurred before application could dequeue them, then the changes
- will have the ORed value of all the events generated.</para>
- </entry>
- </row>
- <row>
- <entry><constant>V4L2_EVENT_MOTION_DET</constant></entry>
- <entry>6</entry>
- <entry>
- <para>Triggered whenever the motion detection state for one or more of the regions
- changes. This event has a &v4l2-event-motion-det; associated with it.</para>
- </entry>
- </row>
- <row>
- <entry><constant>V4L2_EVENT_PRIVATE_START</constant></entry>
- <entry>0x08000000</entry>
- <entry>Base event number for driver-private events.</entry>
- </row>
- </tbody>
- </tgroup>
- </table>
-
<table pgwide="1" frame="none" id="event-flags">
<title>Event Flags</title>
<tgroup cols="3">
diff --git a/Documentation/PCI/MSI-HOWTO.txt b/Documentation/PCI/MSI-HOWTO.txt
index 0d920d5..1179850 100644
--- a/Documentation/PCI/MSI-HOWTO.txt
+++ b/Documentation/PCI/MSI-HOWTO.txt
@@ -353,7 +353,7 @@
rc = pci_enable_msix_range(adapter->pdev, adapter->msix_entries,
maxvec, maxvec);
/*
- * -ENOSPC is the only error code allowed to be analized
+ * -ENOSPC is the only error code allowed to be analyzed
*/
if (rc == -ENOSPC) {
if (maxvec == 1)
@@ -370,7 +370,7 @@
return rc;
}
-Note how pci_enable_msix_range() return value is analized for a fallback -
+Note how pci_enable_msix_range() return value is analyzed for a fallback -
any error code other than -ENOSPC indicates a fatal error and should not
be retried.
@@ -486,7 +486,7 @@
If your device supports both MSI-X and MSI capabilities, you should use
the MSI-X facilities in preference to the MSI facilities. As mentioned
above, MSI-X supports any number of interrupts between 1 and 2048.
-In constrast, MSI is restricted to a maximum of 32 interrupts (and
+In contrast, MSI is restricted to a maximum of 32 interrupts (and
must be a power of two). In addition, the MSI interrupt vectors must
be allocated consecutively, so the system might not be able to allocate
as many vectors for MSI as it could for MSI-X. On some platforms, MSI
@@ -501,18 +501,9 @@
not be re-entered). If a device uses multiple interrupts, the driver
must disable interrupts while the lock is held. If the device sends
a different interrupt, the driver will deadlock trying to recursively
-acquire the spinlock.
-
-There are two solutions. The first is to take the lock with
-spin_lock_irqsave() or spin_lock_irq() (see
-Documentation/DocBook/kernel-locking). The second is to specify
-IRQF_DISABLED to request_irq() so that the kernel runs the entire
-interrupt routine with interrupts disabled.
-
-If your MSI interrupt routine does not hold the lock for the whole time
-it is running, the first solution may be best. The second solution is
-normally preferred as it avoids making two transitions from interrupt
-disabled to enabled and back again.
+acquire the spinlock. Such deadlocks can be avoided by using
+spin_lock_irqsave() or spin_lock_irq() which disable local interrupts
+and acquire the lock (see Documentation/DocBook/kernel-locking).
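A minimal sketch of the recommended locking pattern; the device structure and servicing routine are hypothetical:

	#include <linux/interrupt.h>
	#include <linux/spinlock.h>

	struct foo_dev {			/* hypothetical per-device structure */
		spinlock_t lock;		/* shared by all of the device's MSI vectors */
	};

	static irqreturn_t foo_msi_handler(int irq, void *data)
	{
		struct foo_dev *foo = data;
		unsigned long flags;

		/*
		 * Take the lock with local interrupts disabled so that another
		 * MSI vector from the same device, delivered to this CPU, cannot
		 * try to acquire the same lock and deadlock.
		 */
		spin_lock_irqsave(&foo->lock, flags);
		/* ... service the device (hypothetical) ... */
		spin_unlock_irqrestore(&foo->lock, flags);

		return IRQ_HANDLED;
	}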
4.6 How to tell whether MSI/MSI-X is enabled on a device
diff --git a/Documentation/PCI/pci-error-recovery.txt b/Documentation/PCI/pci-error-recovery.txt
index 898ded2..ac26869 100644
--- a/Documentation/PCI/pci-error-recovery.txt
+++ b/Documentation/PCI/pci-error-recovery.txt
@@ -256,7 +256,7 @@
------------------
In response to a return value of PCI_ERS_RESULT_NEED_RESET, the
-the platform will peform a slot reset on the requesting PCI device(s).
+the platform will perform a slot reset on the requesting PCI device(s).
The actual steps taken by a platform to perform a slot reset
will be platform-dependent. Upon completion of slot reset, the
platform will call the device slot_reset() callback.
diff --git a/Documentation/PCI/pcieaer-howto.txt b/Documentation/PCI/pcieaer-howto.txt
index 26d3d94..b4987c0 100644
--- a/Documentation/PCI/pcieaer-howto.txt
+++ b/Documentation/PCI/pcieaer-howto.txt
@@ -66,8 +66,8 @@
source ID. nosourceid=n by default.
2.3 AER error output
-When a PCI-E AER error is captured, an error message will be outputed to
-console. If it's a correctable error, it is outputed as a warning.
+When a PCI-E AER error is captured, an error message will be outputted to
+console. If it's a correctable error, it is outputted as a warning.
Otherwise, it is printed as an error. So users could choose different
log level to filter out correctable error messages.
diff --git a/Documentation/arm/Booting b/Documentation/arm/Booting
index 371814a..83c1df2 100644
--- a/Documentation/arm/Booting
+++ b/Documentation/arm/Booting
@@ -58,13 +58,18 @@
--------------------------
Existing boot loaders: OPTIONAL
-New boot loaders: MANDATORY
+New boot loaders: MANDATORY except for DT-only platforms
The boot loader should detect the machine type its running on by some
method. Whether this is a hard coded value or some algorithm that
looks at the connected hardware is beyond the scope of this document.
The boot loader must ultimately be able to provide a MACH_TYPE_xxx
-value to the kernel. (see linux/arch/arm/tools/mach-types).
+value to the kernel. (see linux/arch/arm/tools/mach-types). This
+should be passed to the kernel in register r1.
+
+For DT-only platforms, the machine type will be determined by device
+tree. Set the machine type to all ones (~0). This is not strictly
+necessary, but ensures that it will not match any existing types.
4. Setup boot data
------------------
diff --git a/Documentation/arm/README b/Documentation/arm/README
index aea3409..9d1e5b2 100644
--- a/Documentation/arm/README
+++ b/Documentation/arm/README
@@ -185,13 +185,20 @@
board devices are used, or the device is setup, and provides that
machine specific "personality."
- This fine-grained machine specific selection is controlled by the machine
- type ID, which acts both as a run-time and a compile-time code selection
- method.
+ For platforms that support device tree (DT), the machine selection is
+ controlled at runtime by passing the device tree blob to the kernel. At
+ compile-time, support for the machine type must be selected. This allows for
+ a single multiplatform kernel build to be used for several machine types.
- You can register a new machine via the web site at:
+ For platforms that do not use device tree, this machine selection is
+ controlled by the machine type ID, which acts both as a run-time and a
+ compile-time code selection method. You can register a new machine via the
+ web site at:
<http://www.arm.linux.org.uk/developer/machines/>
+ Note: Please do not register a machine type for DT-only platforms. If your
+ platform is DT-only, you do not need a registered machine type.
+
---
Russell King (15/03/2004)
diff --git a/Documentation/blackfin/Makefile b/Documentation/blackfin/Makefile
index 03f7805..6782c58 100644
--- a/Documentation/blackfin/Makefile
+++ b/Documentation/blackfin/Makefile
@@ -1,5 +1,5 @@
ifneq ($(CONFIG_BLACKFIN),)
-ifneq ($(CONFIG_BFIN_GPTIMERS,)
+ifneq ($(CONFIG_BFIN_GPTIMERS),)
obj-m := gptimers-example.o
endif
endif
diff --git a/Documentation/block/biodoc.txt b/Documentation/block/biodoc.txt
index 5aabc08..fd12c0d 100644
--- a/Documentation/block/biodoc.txt
+++ b/Documentation/block/biodoc.txt
@@ -48,8 +48,7 @@
- Highmem I/O support
- I/O scheduler modularization
1.2 Tuning based on high level requirements/capabilities
- 1.2.1 I/O Barriers
- 1.2.2 Request Priority/Latency
+ 1.2.1 Request Priority/Latency
1.3 Direct access/bypass to lower layers for diagnostics and special
device operations
1.3.1 Pre-built commands
@@ -255,29 +254,12 @@
What kind of support exists at the generic block layer for this ?
The flags and rw fields in the bio structure can be used for some tuning
-from above e.g indicating that an i/o is just a readahead request, or for
-marking barrier requests (discussed next), or priority settings (currently
-unused). As far as user applications are concerned they would need an
-additional mechanism either via open flags or ioctls, or some other upper
-level mechanism to communicate such settings to block.
+from above e.g indicating that an i/o is just a readahead request, or priority
+settings (currently unused). As far as user applications are concerned they
+would need an additional mechanism either via open flags or ioctls, or some
+other upper level mechanism to communicate such settings to block.
-1.2.1 I/O Barriers
-
-There is a way to enforce strict ordering for i/os through barriers.
-All requests before a barrier point must be serviced before the barrier
-request and any other requests arriving after the barrier will not be
-serviced until after the barrier has completed. This is useful for higher
-level control on write ordering, e.g flushing a log of committed updates
-to disk before the corresponding updates themselves.
-
-A flag in the bio structure, BIO_BARRIER is used to identify a barrier i/o.
-The generic i/o scheduler would make sure that it places the barrier request and
-all other requests coming after it after all the previous requests in the
-queue. Barriers may be implemented in different ways depending on the
-driver. For more details regarding I/O barriers, please read barrier.txt
-in this directory.
-
-1.2.2 Request Priority/Latency
+1.2.1 Request Priority/Latency
Todo/Under discussion:
Arjan's proposed request priority scheme allows higher levels some broad
@@ -906,8 +888,8 @@
to refer to both parts and I/O scheduler to specific I/O schedulers.
Block layer implements generic dispatch queue in block/*.c.
-The generic dispatch queue is responsible for properly ordering barrier
-requests, requeueing, handling non-fs requests and all other subtleties.
+The generic dispatch queue is responsible for requeueing, handling non-fs
+requests and all other subtleties.
Specific I/O schedulers are responsible for ordering normal filesystem
requests. They can also choose to delay certain requests to improve
diff --git a/Documentation/cgroups/memory.txt b/Documentation/cgroups/memory.txt
index a22df3a..f456b43 100644
--- a/Documentation/cgroups/memory.txt
+++ b/Documentation/cgroups/memory.txt
@@ -275,11 +275,6 @@
2.7 Kernel Memory Extension (CONFIG_MEMCG_KMEM)
-WARNING: Current implementation lacks reclaim support. That means allocation
- attempts will fail when close to the limit even if there are plenty of
- kmem available for reclaim. That makes this option unusable in real
- life so DO NOT SELECT IT unless for development purposes.
-
With the Kernel memory extension, the Memory Controller is able to limit
the amount of kernel memory used by the system. Kernel memory is fundamentally
different than user memory, since it can't be swapped out, which makes it
@@ -345,6 +340,9 @@
In this case, the admin could set up K so that the sum of all groups is
never greater than the total memory, and freely set U at the cost of his
QoS.
+ WARNING: In the current implementation, memory reclaim will NOT be
+ triggered for a cgroup when it hits K while staying below U, which makes
+ this setup impractical.
U != 0, K >= U:
Since kmem charges will also be fed to the user counter and reclaim will be
diff --git a/Documentation/cpu-hotplug.txt b/Documentation/cpu-hotplug.txt
index a0b005d..f9ad5e0 100644
--- a/Documentation/cpu-hotplug.txt
+++ b/Documentation/cpu-hotplug.txt
@@ -108,7 +108,7 @@
for_each_possible_cpu - Iterate over cpu_possible_mask
for_each_online_cpu - Iterate over cpu_online_mask
for_each_present_cpu - Iterate over cpu_present_mask
- for_each_cpu_mask(x,mask) - Iterate over some random collection of cpu mask.
+ for_each_cpu(x,mask) - Iterate over some random collection of cpu mask.
#include <linux/cpu.h>
get_online_cpus() and put_online_cpus():
diff --git a/Documentation/device-mapper/dm-crypt.txt b/Documentation/device-mapper/dm-crypt.txt
index ad69778..692171f 100644
--- a/Documentation/device-mapper/dm-crypt.txt
+++ b/Documentation/device-mapper/dm-crypt.txt
@@ -5,7 +5,7 @@
using the kernel crypto API.
For a more detailed description of supported parameters see:
-http://code.google.com/p/cryptsetup/wiki/DMCrypt
+https://gitlab.com/cryptsetup/cryptsetup/wikis/DMCrypt
Parameters: <cipher> <key> <iv_offset> <device path> \
<offset> [<#opt_params> <opt_params>]
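
For illustration, a minimal sketch of loading such a table with dmsetup,
assuming /dev/sdb as the backing device, a freshly generated 256-bit key and
zero iv_offset/offset:

  # Table line: <start> <size> crypt <cipher> <key> <iv_offset> <device path> <offset>
  KEY=$(head -c 32 /dev/urandom | xxd -p -c 64)   # 256-bit key, hex encoded
  SIZE=$(blockdev --getsz /dev/sdb)               # mapping size in 512-byte sectors
  dmsetup create plain-crypt --table \
      "0 $SIZE crypt aes-cbc-essiv:sha256 $KEY 0 /dev/sdb 0"
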
@@ -80,7 +80,7 @@
===============
LUKS (Linux Unified Key Setup) is now the preferred way to set up disk
encryption with dm-crypt using the 'cryptsetup' utility, see
-http://code.google.com/p/cryptsetup/
+https://gitlab.com/cryptsetup/cryptsetup
[[
#!/bin/sh
diff --git a/Documentation/device-mapper/log-writes.txt b/Documentation/device-mapper/log-writes.txt
new file mode 100644
index 0000000..c10f30c
--- /dev/null
+++ b/Documentation/device-mapper/log-writes.txt
@@ -0,0 +1,140 @@
+dm-log-writes
+=============
+
+This target takes 2 devices, one to pass all IO to normally, and one to log all
+of the write operations to. This is intended for file system developers wishing
+to verify the integrity of metadata or data as the file system is written to.
+There is a log_write_entry written for every WRITE request and the target is
+able to take arbitrary data from userspace to insert into the log. The data
+that is in the WRITE requests is copied into the log to make the replay happen
+exactly as it happened originally.
+
+Log Ordering
+============
+
+We log things in order of completion once we are sure the write is no longer in
+cache. This means that normal WRITE requests are not actually logged until the
+next REQ_FLUSH request. This is to make it easier for userspace to replay the
+log in a way that correlates to what is on disk and not what is in cache, to
+make it easier to detect improper waiting/flushing.
+
+This works by attaching all WRITE requests to a list once the write completes.
+Once we see a REQ_FLUSH request we splice this list onto the request and once
+the FLUSH request completes we log all of the WRITEs and then the FLUSH. Only
+completed WRITEs, at the time the REQ_FLUSH is issued, are added in order to
+simulate the worst case scenario with regard to power failures. Consider the
+following example (W means write, C means complete):
+
+W1,W2,W3,C3,C2,Wflush,C1,Cflush
+
+The log would show the following
+
+W3,W2,flush,W1....
+
+Again, this is to simulate what is actually on disk; it allows us to detect
+cases where a power failure at a particular point in time would create an
+inconsistent file system.
+
+Any REQ_FUA requests bypass this flushing mechanism and are logged as soon as
+they complete as those requests will obviously bypass the device cache.
+
+Any REQ_DISCARD requests are treated like WRITE requests. Otherwise we would
+have all the DISCARD requests, and then the WRITE requests and then the FLUSH
+request. Consider the following example:
+
+WRITE block 1, DISCARD block 1, FLUSH
+
+If we logged DISCARD when it completed, the replay would look like this
+
+DISCARD 1, WRITE 1, FLUSH
+
+which isn't quite what happened and wouldn't be caught during the log replay.
+
+Target interface
+================
+
+i) Constructor
+
+ log-writes <dev_path> <log_dev_path>
+
+ dev_path : Device that all of the IO will go to normally.
+ log_dev_path : Device where the log entries are written to.
+
+ii) Status
+
+ <#logged entries> <highest allocated sector>
+
+ #logged entries : Number of logged entries
+ highest allocated sector : Highest allocated sector
+
+iii) Messages
+
+ mark <description>
+
+ You can use a dmsetup message to set an arbitrary mark in a log.
+ For example, say you want to fsck a file system after every
+ write, but first you need to replay up to the mkfs mark to make
+ sure we're fsck'ing something reasonable. You would do something
+ like this:
+
+ mkfs.btrfs -f /dev/mapper/log
+ dmsetup message log 0 mark mkfs
+ <run test>
+
+ This would allow you to replay the log up to the mkfs mark and
+ then replay from that point on doing the fsck check in the
+ interval that you want.
+
+ Every log has a mark at the end labeled "dm-log-writes-end".
+
+Userspace component
+===================
+
+There is a userspace tool that will replay the log for you in various ways.
+It can be found here: https://github.com/josefbacik/log-writes
+
+Example usage
+=============
+
+Say you want to test fsync on your file system. You would do something like
+this:
+
+TABLE="0 $(blockdev --getsz /dev/sdb) log-writes /dev/sdb /dev/sdc"
+dmsetup create log --table "$TABLE"
+mkfs.btrfs -f /dev/mapper/log
+dmsetup message log 0 mark mkfs
+
+mount /dev/mapper/log /mnt/btrfs-test
+<some test that does fsync at the end>
+dmsetup message log 0 mark fsync
+md5sum /mnt/btrfs-test/foo
+umount /mnt/btrfs-test
+
+dmsetup remove log
+replay-log --log /dev/sdc --replay /dev/sdb --end-mark fsync
+mount /dev/sdb /mnt/btrfs-test
+md5sum /mnt/btrfs-test/foo
+<verify md5sum's are correct>
+
+Another option is to do a complicated file system operation and verify the file
+system is consistent during the entire operation. You could do this with:
+
+TABLE="0 $(blockdev --getsz /dev/sdb) log-writes /dev/sdb /dev/sdc"
+dmsetup create log --table "$TABLE"
+mkfs.btrfs -f /dev/mapper/log
+dmsetup message log 0 mark mkfs
+
+mount /dev/mapper/log /mnt/btrfs-test
+<fsstress to dirty the fs>
+btrfs filesystem balance /mnt/btrfs-test
+umount /mnt/btrfs-test
+dmsetup remove log
+
+replay-log --log /dev/sdc --replay /dev/sdb --end-mark mkfs
+btrfsck /dev/sdb
+replay-log --log /dev/sdc --replay /dev/sdb --start-mark mkfs \
+ --fsck "btrfsck /dev/sdb" --check fua
+
+And that will replay the log until it sees a FUA request, run the fsck command
+and, if the fsck passes, replay to the next FUA, until the log is completed or
+the fsck command exits abnormally.
diff --git a/Documentation/device-mapper/switch.txt b/Documentation/device-mapper/switch.txt
index 8897d04..424835e 100644
--- a/Documentation/device-mapper/switch.txt
+++ b/Documentation/device-mapper/switch.txt
@@ -47,8 +47,8 @@
Using this device-mapper switch target we can now build a two-layer
device hierarchy:
- Upper Tier – Determine which array member the I/O should be sent to.
- Lower Tier – Load balance amongst paths to a particular member.
+ Upper Tier - Determine which array member the I/O should be sent to.
+ Lower Tier - Load balance amongst paths to a particular member.
The lower tier consists of a single dm multipath device for each member.
Each of these multipath devices contains the set of paths directly to
diff --git a/Documentation/device-mapper/thin-provisioning.txt b/Documentation/device-mapper/thin-provisioning.txt
index 2f51735..4f67578 100644
--- a/Documentation/device-mapper/thin-provisioning.txt
+++ b/Documentation/device-mapper/thin-provisioning.txt
@@ -380,9 +380,6 @@
load a target that is bigger than before, then extra blocks will be
provisioned as and when needed.
-If you wish to reduce the size of your thin device and potentially
-regain some space then send the 'trim' message to the pool.
-
ii) Status
<nr mapped sectors> <highest mapped sector>
diff --git a/Documentation/device-mapper/verity.txt b/Documentation/device-mapper/verity.txt
index 9884681..e15bc1a 100644
--- a/Documentation/device-mapper/verity.txt
+++ b/Documentation/device-mapper/verity.txt
@@ -11,6 +11,7 @@
<data_block_size> <hash_block_size>
<num_data_blocks> <hash_start_block>
<algorithm> <digest> <salt>
+ [<#opt_params> <opt_params>]
<version>
This is the type of the on-disk hash format.
@@ -62,6 +63,22 @@
<salt>
The hexadecimal encoding of the salt value.
+<#opt_params>
+ Number of optional parameters. If there are no optional parameters,
+ the optional parameters section can be skipped or #opt_params can be zero.
+ Otherwise #opt_params is the number of following arguments.
+
+ Example of optional parameters section:
+ 1 ignore_corruption
+
+ignore_corruption
+ Log corrupted blocks, but allow read operations to proceed normally.
+
+restart_on_corruption
+ Restart the system when a corrupted block is discovered. This option is
+ not compatible with ignore_corruption and requires user space support to
+ avoid restart loops.
+
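
As a rough sketch, a table using one optional parameter could be loaded as
below, assuming a 1 GiB data device on /dev/sdb, its hash tree on /dev/sdc,
and <digest>/<salt> taken from the veritysetup format output:

  # <start> <size> verity <version> <dev> <hash_dev> <data_block_size>
  #   <hash_block_size> <num_data_blocks> <hash_start_block> <algorithm>
  #   <digest> <salt> <#opt_params> <opt_params>
  dmsetup create vroot --table \
      "0 2097152 verity 1 /dev/sdb /dev/sdc 4096 4096 262144 1 sha256 <digest> <salt> 1 ignore_corruption"
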
Theory of operation
===================
@@ -125,7 +142,7 @@
The full specification of kernel parameters and on-disk metadata format
is available at the cryptsetup project's wiki page
- http://code.google.com/p/cryptsetup/wiki/DMVerity
+ https://gitlab.com/cryptsetup/cryptsetup/wikis/DMVerity
Status
======
@@ -142,7 +159,7 @@
A command line tool veritysetup is available to compute or verify
the hash tree or activate the kernel device. This is available from
-the cryptsetup upstream repository http://code.google.com/p/cryptsetup/
+the cryptsetup upstream repository https://gitlab.com/cryptsetup/cryptsetup/
(as a libcryptsetup extension).
Create hash on the device:
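
A minimal sketch of that step and the subsequent activation, assuming the data
device is /dev/sdb, the hash device is /dev/sdc, and <root_hash> is the root
hash printed by the format step (activation syntax may differ between
veritysetup versions):

  # Build the hash tree; the root hash is printed on success.
  veritysetup format /dev/sdb /dev/sdc

  # Activate the verified, read-only mapping under /dev/mapper/vroot.
  veritysetup create vroot /dev/sdb /dev/sdc <root_hash>
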
diff --git a/Documentation/devicetree/bindings/arm/bcm/brcm,bcm11351-cpu-method b/Documentation/devicetree/bindings/arm/bcm/brcm,bcm11351-cpu-method.txt
similarity index 100%
rename from Documentation/devicetree/bindings/arm/bcm/brcm,bcm11351-cpu-method
rename to Documentation/devicetree/bindings/arm/bcm/brcm,bcm11351-cpu-method.txt
diff --git a/Documentation/devicetree/bindings/arm/bcm/bcm11351.txt b/Documentation/devicetree/bindings/arm/bcm/brcm,bcm11351.txt
similarity index 100%
rename from Documentation/devicetree/bindings/arm/bcm/bcm11351.txt
rename to Documentation/devicetree/bindings/arm/bcm/brcm,bcm11351.txt
diff --git a/Documentation/devicetree/bindings/arm/bcm/bcm21664.txt b/Documentation/devicetree/bindings/arm/bcm/brcm,bcm21664.txt
similarity index 100%
rename from Documentation/devicetree/bindings/arm/bcm/bcm21664.txt
rename to Documentation/devicetree/bindings/arm/bcm/brcm,bcm21664.txt
diff --git a/Documentation/devicetree/bindings/arm/bcm2835.txt b/Documentation/devicetree/bindings/arm/bcm/brcm,bcm2835.txt
similarity index 100%
rename from Documentation/devicetree/bindings/arm/bcm2835.txt
rename to Documentation/devicetree/bindings/arm/bcm/brcm,bcm2835.txt
diff --git a/Documentation/devicetree/bindings/arm/bcm4708.txt b/Documentation/devicetree/bindings/arm/bcm/brcm,bcm4708.txt
similarity index 100%
rename from Documentation/devicetree/bindings/arm/bcm4708.txt
rename to Documentation/devicetree/bindings/arm/bcm/brcm,bcm4708.txt
diff --git a/Documentation/devicetree/bindings/arm/bcm/bcm63138.txt b/Documentation/devicetree/bindings/arm/bcm/brcm,bcm63138.txt
similarity index 100%
rename from Documentation/devicetree/bindings/arm/bcm/bcm63138.txt
rename to Documentation/devicetree/bindings/arm/bcm/brcm,bcm63138.txt
diff --git a/Documentation/devicetree/bindings/arm/brcm-brcmstb.txt b/Documentation/devicetree/bindings/arm/bcm/brcm,brcmstb.txt
similarity index 100%
rename from Documentation/devicetree/bindings/arm/brcm-brcmstb.txt
rename to Documentation/devicetree/bindings/arm/bcm/brcm,brcmstb.txt
diff --git a/Documentation/devicetree/bindings/arm/bcm/cygnus.txt b/Documentation/devicetree/bindings/arm/bcm/brcm,cygnus.txt
similarity index 100%
rename from Documentation/devicetree/bindings/arm/bcm/cygnus.txt
rename to Documentation/devicetree/bindings/arm/bcm/brcm,cygnus.txt
diff --git a/Documentation/devicetree/bindings/arm/coresight.txt b/Documentation/devicetree/bindings/arm/coresight.txt
index a308935..88602b7 100644
--- a/Documentation/devicetree/bindings/arm/coresight.txt
+++ b/Documentation/devicetree/bindings/arm/coresight.txt
@@ -61,7 +61,6 @@
compatible = "arm,coresight-etb10", "arm,primecell";
reg = <0 0x20010000 0 0x1000>;
- coresight-default-sink;
clocks = <&oscclk6a>;
clock-names = "apb_pclk";
port {
diff --git a/Documentation/devicetree/bindings/bus/bcma.txt b/Documentation/devicetree/bindings/bus/brcm,bus-axi.txt
similarity index 100%
rename from Documentation/devicetree/bindings/bus/bcma.txt
rename to Documentation/devicetree/bindings/bus/brcm,bus-axi.txt
diff --git a/Documentation/devicetree/bindings/clock/bcm-kona-clock.txt b/Documentation/devicetree/bindings/clock/brcm,kona-ccu.txt
similarity index 100%
rename from Documentation/devicetree/bindings/clock/bcm-kona-clock.txt
rename to Documentation/devicetree/bindings/clock/brcm,kona-ccu.txt
diff --git a/Documentation/devicetree/bindings/clock/exynos3250-clock.txt b/Documentation/devicetree/bindings/clock/exynos3250-clock.txt
index f57d9dd..f1738b8 100644
--- a/Documentation/devicetree/bindings/clock/exynos3250-clock.txt
+++ b/Documentation/devicetree/bindings/clock/exynos3250-clock.txt
@@ -9,6 +9,8 @@
- "samsung,exynos3250-cmu" - controller compatible with Exynos3250 SoC.
- "samsung,exynos3250-cmu-dmc" - controller compatible with
Exynos3250 SoC for Dynamic Memory Controller domain.
+ - "samsung,exynos3250-cmu-isp" - ISP block clock controller compatible
+ with Exynos3250 SoC.
- reg: physical base address of the controller and length of memory mapped
region.
@@ -36,6 +38,12 @@
#clock-cells = <1>;
};
+ cmu_isp: clock-controller@10048000 {
+ compatible = "samsung,exynos3250-cmu-isp";
+ reg = <0x10048000 0x1000>;
+ #clock-cells = <1>;
+ };
+
Example 2: UART controller node that consumes the clock generated by the clock
controller. Refer to the standard clock bindings for information
about 'clocks' and 'clock-names' property.
diff --git a/Documentation/devicetree/bindings/clock/exynos5433-clock.txt b/Documentation/devicetree/bindings/clock/exynos5433-clock.txt
new file mode 100644
index 0000000..63379b0
--- /dev/null
+++ b/Documentation/devicetree/bindings/clock/exynos5433-clock.txt
@@ -0,0 +1,462 @@
+* Samsung Exynos5433 CMU (Clock Management Units)
+
+The Exynos5433 clock controller generates and supplies clock to various
+controllers within the Exynos5433 SoC.
+
+Required Properties:
+
+- compatible: should be one of the following.
+ - "samsung,exynos5433-cmu-top" - clock controller compatible for CMU_TOP
+ which generates clocks for IMEM/FSYS/G3D/GSCL/HEVC/MSCL/G2D/MFC/PERIC/PERIS
+ domains and bus clocks.
+ - "samsung,exynos5433-cmu-cpif" - clock controller compatible for CMU_CPIF
+ which generates clocks for LLI (Low Latency Interface) IP.
+ - "samsung,exynos5433-cmu-mif" - clock controller compatible for CMU_MIF
+ which generates clocks for DRAM Memory Controller domain.
+ - "samsung,exynos5433-cmu-peric" - clock controller compatible for CMU_PERIC
+ which generates clocks for UART/I2C/SPI/I2S/PCM/SPDIF/PWM/SLIMBUS IPs.
+ - "samsung,exynos5433-cmu-peris" - clock controller compatible for CMU_PERIS
+ which generates clocks for PMU/TMU/MCT/WDT/RTC/SECKEY/TZPC IPs.
+ - "samsung,exynos5433-cmu-fsys" - clock controller compatible for CMU_FSYS
+ which generates clocks for USB/UFS/SDMMC/TSI/PDMA IPs.
+ - "samsung,exynos5433-cmu-g2d" - clock controller compatible for CMU_G2D
+ which generates clocks for G2D/MDMA IPs.
+ - "samsung,exynos5433-cmu-disp" - clock controller compatible for CMU_DISP
+ which generates clocks for Display (DECON/HDMI/DSIM/MIXER) IPs.
+ - "samsung,exynos5433-cmu-aud" - clock controller compatible for CMU_AUD
+ which generates clocks for Cortex-A5/BUS/AUDIO clocks.
+ - "samsung,exynos5433-cmu-bus0", "samsung,exynos5433-cmu-bus1"
+ and "samsung,exynos5433-cmu-bus2" - clock controller compatible for CMU_BUS
+ which generates global data buses clock and global peripheral buses clock.
+ - "samsung,exynos5433-cmu-g3d" - clock controller compatible for CMU_G3D
+ which generates clocks for 3D Graphics Engine IP.
+ - "samsung,exynos5433-cmu-gscl" - clock controller compatible for CMU_GSCL
+ which generates clocks for GSCALER IPs.
+ - "samsung,exynos5433-cmu-apollo"- clock controller compatible for CMU_APOLLO
+ which generates clocks for Cortex-A53 Quad-core processor.
+ - "samsung,exynos5433-cmu-atlas" - clock controller compatible for CMU_ATLAS
+ which generates clocks for Cortex-A57 Quad-core processor, CoreSight and
+ L2 cache controller.
+ - "samsung,exynos5433-cmu-mscl" - clock controller compatible for CMU_MSCL
+ which generates clocks for M2M (Memory to Memory) scaler and JPEG IPs.
+ - "samsung,exynos5433-cmu-mfc" - clock controller compatible for CMU_MFC
+ which generates clocks for MFC(Multi-Format Codec) IP.
+ - "samsung,exynos5433-cmu-hevc" - clock controller compatible for CMU_HEVC
+ which generates clocks for HEVC(High Efficiency Video Codec) decoder IP.
+ - "samsung,exynos5433-cmu-isp" - clock controller compatible for CMU_ISP
+ which generates clocks for FIMC-ISP/DRC/SCLC/DIS/3DNR IPs.
+ - "samsung,exynos5433-cmu-cam0" - clock controller compatible for CMU_CAM0
+ which generates clocks for MIPI_CSIS{0|1}/FIMC_LITE_{A|B|D}/FIMC_3AA{0|1}
+ IPs.
+ - "samsung,exynos5433-cmu-cam1" - clock controller compatible for CMU_CAM1
+ which generates clocks for Cortex-A5/MIPI_CSIS2/FIMC-LITE_C/FIMC-FD IPs.
+
+- reg: physical base address of the controller and length of memory mapped
+ region.
+
+- #clock-cells: should be 1.
+
+- clocks: list of the clock controller input clock identifiers,
+ from common clock bindings. Please refer to the next section
+ to find the input clocks for a given controller.
+
+- clock-names: list of the clock controller input clock names,
+ as described in clock-bindings.txt.
+
+ Input clocks for top clock controller:
+ - oscclk
+ - sclk_mphy_pll
+ - sclk_mfc_pll
+ - sclk_bus_pll
+
+ Input clocks for cpif clock controller:
+ - oscclk
+
+ Input clocks for mif clock controller:
+ - oscclk
+ - sclk_mphy_pll
+
+ Input clocks for fsys clock controller:
+ - oscclk
+ - sclk_ufs_mphy
+ - div_aclk_fsys_200
+ - sclk_pcie_100_fsys
+ - sclk_ufsunipro_fsys
+ - sclk_mmc2_fsys
+ - sclk_mmc1_fsys
+ - sclk_mmc0_fsys
+ - sclk_usbhost30_fsys
+ - sclk_usbdrd30_fsys
+
+ Input clocks for g2d clock controller:
+ - oscclk
+ - aclk_g2d_266
+ - aclk_g2d_400
+
+ Input clocks for disp clock controller:
+ - oscclk
+ - sclk_dsim1_disp
+ - sclk_dsim0_disp
+ - sclk_dsd_disp
+ - sclk_decon_tv_eclk_disp
+ - sclk_decon_vclk_disp
+ - sclk_decon_eclk_disp
+ - sclk_decon_tv_vclk_disp
+ - aclk_disp_333
+
+ Input clocks for bus0 clock controller:
+ - aclk_bus0_400
+
+ Input clocks for bus1 clock controller:
+ - aclk_bus1_400
+
+ Input clocks for bus2 clock controller:
+ - oscclk
+ - aclk_bus2_400
+
+ Input clocks for g3d clock controller:
+ - oscclk
+ - aclk_g3d_400
+
+ Input clocks for gscl clock controller:
+ - oscclk
+ - aclk_gscl_111
+ - aclk_gscl_333
+
+ Input clocks for apollo clock controller:
+ - oscclk
+ - sclk_bus_pll_apollo
+
+ Input clocks for atlas clock controller:
+ - oscclk
+ - sclk_bus_pll_atlas
+
+ Input clocks for mscl clock controller:
+ - oscclk
+ - sclk_jpeg_mscl
+ - aclk_mscl_400
+
+ Input clocks for mfc clock controller:
+ - oscclk
+ - aclk_mfc_400
+
+ Input clocks for hevc clock controller:
+ - oscclk
+ - aclk_hevc_400
+
+ Input clocks for isp clock controller:
+ - oscclk
+ - aclk_isp_dis_400
+ - aclk_isp_400
+
+ Input clocks for cam0 clock controller:
+ - oscclk
+ - aclk_cam0_333
+ - aclk_cam0_400
+ - aclk_cam0_552
+
+ Input clocks for cam1 clock controller:
+ - oscclk
+ - sclk_isp_uart_cam1
+ - sclk_isp_spi1_cam1
+ - sclk_isp_spi0_cam1
+ - aclk_cam1_333
+ - aclk_cam1_400
+ - aclk_cam1_552
+
+Each clock is assigned an identifier and client nodes can use this identifier
+to specify the clock which they consume.
+
+All available clocks are defined as preprocessor macros in
+dt-bindings/clock/exynos5433.h header and can be used in device
+tree sources.
+
+Example 1: Examples of 'oscclk' source clock node are listed below.
+
+ xxti: xxti {
+ compatible = "fixed-clock";
+ clock-output-names = "oscclk";
+ #clock-cells = <0>;
+ };
+
+Example 2: Examples of clock controller nodes are listed below.
+
+ cmu_top: clock-controller@10030000 {
+ compatible = "samsung,exynos5433-cmu-top";
+ reg = <0x10030000 0x0c04>;
+ #clock-cells = <1>;
+
+ clock-names = "oscclk",
+ "sclk_mphy_pll",
+ "sclk_mfc_pll",
+ "sclk_bus_pll";
+ clocks = <&xxti>,
+ <&cmu_cpif CLK_SCLK_MPHY_PLL>,
+ <&cmu_mif CLK_SCLK_MFC_PLL>,
+ <&cmu_mif CLK_SCLK_BUS_PLL>;
+ };
+
+ cmu_cpif: clock-controller@10fc0000 {
+ compatible = "samsung,exynos5433-cmu-cpif";
+ reg = <0x10fc0000 0x0c04>;
+ #clock-cells = <1>;
+
+ clock-names = "oscclk";
+ clocks = <&xxti>;
+ };
+
+ cmu_mif: clock-controller@105b0000 {
+ compatible = "samsung,exynos5433-cmu-mif";
+ reg = <0x105b0000 0x100c>;
+ #clock-cells = <1>;
+
+ clock-names = "oscclk",
+ "sclk_mphy_pll";
+ clocks = <&xxti>,
+ <&cmu_cpif CLK_SCLK_MPHY_PLL>;
+ };
+
+ cmu_peric: clock-controller@14c80000 {
+ compatible = "samsung,exynos5433-cmu-peric";
+ reg = <0x14c80000 0x0b08>;
+ #clock-cells = <1>;
+ };
+
+ cmu_peris: clock-controller@10040000 {
+ compatible = "samsung,exynos5433-cmu-peris";
+ reg = <0x10040000 0x0b20>;
+ #clock-cells = <1>;
+ };
+
+ cmu_fsys: clock-controller@156e0000 {
+ compatible = "samsung,exynos5433-cmu-fsys";
+ reg = <0x156e0000 0x0b04>;
+ #clock-cells = <1>;
+
+ clock-names = "oscclk",
+ "sclk_ufs_mphy",
+ "div_aclk_fsys_200",
+ "sclk_pcie_100_fsys",
+ "sclk_ufsunipro_fsys",
+ "sclk_mmc2_fsys",
+ "sclk_mmc1_fsys",
+ "sclk_mmc0_fsys",
+ "sclk_usbhost30_fsys",
+ "sclk_usbdrd30_fsys";
+ clocks = <&xxti>,
+ <&cmu_cpif CLK_SCLK_UFS_MPHY>,
+ <&cmu_top CLK_DIV_ACLK_FSYS_200>,
+ <&cmu_top CLK_SCLK_PCIE_100_FSYS>,
+ <&cmu_top CLK_SCLK_UFSUNIPRO_FSYS>,
+ <&cmu_top CLK_SCLK_MMC2_FSYS>,
+ <&cmu_top CLK_SCLK_MMC1_FSYS>,
+ <&cmu_top CLK_SCLK_MMC0_FSYS>,
+ <&cmu_top CLK_SCLK_USBHOST30_FSYS>,
+ <&cmu_top CLK_SCLK_USBDRD30_FSYS>;
+ };
+
+ cmu_g2d: clock-controller@12460000 {
+ compatible = "samsung,exynos5433-cmu-g2d";
+ reg = <0x12460000 0x0b08>;
+ #clock-cells = <1>;
+
+ clock-names = "oscclk",
+ "aclk_g2d_266",
+ "aclk_g2d_400";
+ clocks = <&xxti>,
+ <&cmu_top CLK_ACLK_G2D_266>,
+ <&cmu_top CLK_ACLK_G2D_400>;
+ };
+
+ cmu_disp: clock-controller@13b90000 {
+ compatible = "samsung,exynos5433-cmu-disp";
+ reg = <0x13b90000 0x0c04>;
+ #clock-cells = <1>;
+
+ clock-names = "oscclk",
+ "sclk_dsim1_disp",
+ "sclk_dsim0_disp",
+ "sclk_dsd_disp",
+ "sclk_decon_tv_eclk_disp",
+ "sclk_decon_vclk_disp",
+ "sclk_decon_eclk_disp",
+ "sclk_decon_tv_vclk_disp",
+ "aclk_disp_333";
+ clocks = <&xxti>,
+ <&cmu_mif CLK_SCLK_DSIM1_DISP>,
+ <&cmu_mif CLK_SCLK_DSIM0_DISP>,
+ <&cmu_mif CLK_SCLK_DSD_DISP>,
+ <&cmu_mif CLK_SCLK_DECON_TV_ECLK_DISP>,
+ <&cmu_mif CLK_SCLK_DECON_VCLK_DISP>,
+ <&cmu_mif CLK_SCLK_DECON_ECLK_DISP>,
+ <&cmu_mif CLK_SCLK_DECON_TV_VCLK_DISP>,
+ <&cmu_mif CLK_ACLK_DISP_333>;
+ };
+
+ cmu_aud: clock-controller@114c0000 {
+ compatible = "samsung,exynos5433-cmu-aud";
+ reg = <0x114c0000 0x0b04>;
+ #clock-cells = <1>;
+ };
+
+ cmu_bus0: clock-controller@13600000 {
+ compatible = "samsung,exynos5433-cmu-bus0";
+ reg = <0x13600000 0x0b04>;
+ #clock-cells = <1>;
+
+ clock-names = "aclk_bus0_400";
+ clocks = <&cmu_top CLK_ACLK_BUS0_400>;
+ };
+
+ cmu_bus1: clock-controller@14800000 {
+ compatible = "samsung,exynos5433-cmu-bus1";
+ reg = <0x14800000 0x0b04>;
+ #clock-cells = <1>;
+
+ clock-names = "aclk_bus1_400";
+ clocks = <&cmu_top CLK_ACLK_BUS1_400>;
+ };
+
+ cmu_bus2: clock-controller@13400000 {
+ compatible = "samsung,exynos5433-cmu-bus2";
+ reg = <0x13400000 0x0b04>;
+ #clock-cells = <1>;
+
+ clock-names = "oscclk", "aclk_bus2_400";
+ clocks = <&xxti>, <&cmu_mif CLK_ACLK_BUS2_400>;
+ };
+
+ cmu_g3d: clock-controller@14aa0000 {
+ compatible = "samsung,exynos5433-cmu-g3d";
+ reg = <0x14aa0000 0x1000>;
+ #clock-cells = <1>;
+
+ clock-names = "oscclk", "aclk_g3d_400";
+ clocks = <&xxti>, <&cmu_top CLK_ACLK_G3D_400>;
+ };
+
+ cmu_gscl: clock-controller@13cf0000 {
+ compatible = "samsung,exynos5433-cmu-gscl";
+ reg = <0x13cf0000 0x0b10>;
+ #clock-cells = <1>;
+
+ clock-names = "oscclk",
+ "aclk_gscl_111",
+ "aclk_gscl_333";
+ clocks = <&xxti>,
+ <&cmu_top CLK_ACLK_GSCL_111>,
+ <&cmu_top CLK_ACLK_GSCL_333>;
+ };
+
+ cmu_apollo: clock-controller@11900000 {
+ compatible = "samsung,exynos5433-cmu-apollo";
+ reg = <0x11900000 0x1088>;
+ #clock-cells = <1>;
+
+ clock-names = "oscclk", "sclk_bus_pll_apollo";
+ clocks = <&xxti>, <&cmu_mif CLK_SCLK_BUS_PLL_APOLLO>;
+ };
+
+ cmu_atlas: clock-controller@11800000 {
+ compatible = "samsung,exynos5433-cmu-atlas";
+ reg = <0x11800000 0x1088>;
+ #clock-cells = <1>;
+
+ clock-names = "oscclk", "sclk_bus_pll_atlas";
+ clocks = <&xxti>, <&cmu_mif CLK_SCLK_BUS_PLL_ATLAS>;
+ };
+
+ cmu_mscl: clock-controller@105d0000 {
+ compatible = "samsung,exynos5433-cmu-mscl";
+ reg = <0x105d0000 0x0b10>;
+ #clock-cells = <1>;
+
+ clock-names = "oscclk",
+ "sclk_jpeg_mscl",
+ "aclk_mscl_400";
+ clocks = <&xxti>,
+ <&cmu_top CLK_SCLK_JPEG_MSCL>,
+ <&cmu_top CLK_ACLK_MSCL_400>;
+ };
+
+ cmu_mfc: clock-controller@15280000 {
+ compatible = "samsung,exynos5433-cmu-mfc";
+ reg = <0x15280000 0x0b08>;
+ #clock-cells = <1>;
+
+ clock-names = "oscclk", "aclk_mfc_400";
+ clocks = <&xxti>, <&cmu_top CLK_ACLK_MFC_400>;
+ };
+
+ cmu_hevc: clock-controller@14f80000 {
+ compatible = "samsung,exynos5433-cmu-hevc";
+ reg = <0x14f80000 0x0b08>;
+ #clock-cells = <1>;
+
+ clock-names = "oscclk", "aclk_hevc_400";
+ clocks = <&xxti>, <&cmu_top CLK_ACLK_HEVC_400>;
+ };
+
+ cmu_isp: clock-controller@146d0000 {
+ compatible = "samsung,exynos5433-cmu-isp";
+ reg = <0x146d0000 0x0b0c>;
+ #clock-cells = <1>;
+
+ clock-names = "oscclk",
+ "aclk_isp_dis_400",
+ "aclk_isp_400";
+ clocks = <&xxti>,
+ <&cmu_top CLK_ACLK_ISP_DIS_400>,
+ <&cmu_top CLK_ACLK_ISP_400>;
+ };
+
+ cmu_cam0: clock-controller@120d0000 {
+ compatible = "samsung,exynos5433-cmu-cam0";
+ reg = <0x120d0000 0x0b0c>;
+ #clock-cells = <1>;
+
+ clock-names = "oscclk",
+ "aclk_cam0_333",
+ "aclk_cam0_400",
+ "aclk_cam0_552";
+ clocks = <&xxti>,
+ <&cmu_top CLK_ACLK_CAM0_333>,
+ <&cmu_top CLK_ACLK_CAM0_400>,
+ <&cmu_top CLK_ACLK_CAM0_552>;
+ };
+
+ cmu_cam1: clock-controller@145d0000 {
+ compatible = "samsung,exynos5433-cmu-cam1";
+ reg = <0x145d0000 0x0b08>;
+ #clock-cells = <1>;
+
+ clock-names = "oscclk",
+ "sclk_isp_uart_cam1",
+ "sclk_isp_spi1_cam1",
+ "sclk_isp_spi0_cam1",
+ "aclk_cam1_333",
+ "aclk_cam1_400",
+ "aclk_cam1_552";
+ clocks = <&xxti>,
+ <&cmu_top CLK_SCLK_ISP_UART_CAM1>,
+ <&cmu_top CLK_SCLK_ISP_SPI1_CAM1>,
+ <&cmu_top CLK_SCLK_ISP_SPI0_CAM1>,
+ <&cmu_top CLK_ACLK_CAM1_333>,
+ <&cmu_top CLK_ACLK_CAM1_400>,
+ <&cmu_top CLK_ACLK_CAM1_552>;
+ };
+
+Example 3: UART controller node that consumes the clock generated by the clock
+ controller.
+
+ serial_0: serial@14C10000 {
+ compatible = "samsung,exynos5433-uart";
+ reg = <0x14C10000 0x100>;
+ interrupts = <0 421 0>;
+ clocks = <&cmu_peric CLK_PCLK_UART0>,
+ <&cmu_peric CLK_SCLK_UART0>;
+ clock-names = "uart", "clk_uart_baud0";
+ pinctrl-names = "default";
+ pinctrl-0 = <&uart0_bus>;
+ status = "disabled";
+ };
diff --git a/Documentation/devicetree/bindings/clock/fujitsu,mb86s70-crg11.txt b/Documentation/devicetree/bindings/clock/fujitsu,mb86s70-crg11.txt
new file mode 100644
index 0000000..3323962
--- /dev/null
+++ b/Documentation/devicetree/bindings/clock/fujitsu,mb86s70-crg11.txt
@@ -0,0 +1,26 @@
+Fujitsu CRG11 clock driver bindings
+-----------------------------------
+
+Required properties :
+- compatible : Shall contain "fujitsu,mb86s70-crg11"
+- #clock-cells : Shall be 3 {cntrlr domain port}
+
+The consumer specifies the desired clock pointing to its phandle.
+
+Example:
+
+ clock: crg11 {
+ compatible = "fujitsu,mb86s70-crg11";
+ #clock-cells = <3>;
+ };
+
+ mhu: mhu0@2b1f0000 {
+ #mbox-cells = <1>;
+ compatible = "arm,mhu";
+ reg = <0 0x2B1F0000 0x1000>;
+ interrupts = <0 36 4>, /* LP Non-Sec */
+ <0 35 4>, /* HP Non-Sec */
+ <0 37 4>; /* Secure */
+ clocks = <&clock 0 2 1>; /* Cntrlr:0 Domain:2 Port:1 */
+ clock-names = "clk";
+ };
diff --git a/Documentation/devicetree/bindings/clock/mvebu-core-clock.txt b/Documentation/devicetree/bindings/clock/mvebu-core-clock.txt
index dc5ea5b..670c2af 100644
--- a/Documentation/devicetree/bindings/clock/mvebu-core-clock.txt
+++ b/Documentation/devicetree/bindings/clock/mvebu-core-clock.txt
@@ -23,6 +23,14 @@
2 = l2clk (L2 Cache clock)
3 = ddrclk (DDR clock)
+The following is a list of provided IDs and clock names on Armada 39x:
+ 0 = tclk (Internal Bus clock)
+ 1 = cpuclk (CPU clock)
+ 2 = nbclk (Coherent Fabric clock)
+ 3 = hclk (SDRAM Controller Internal Clock)
+ 4 = dclk (SDRAM Interface Clock)
+ 5 = refclk (Reference Clock)
+
The following is a list of provided IDs and clock names on Kirkwood and Dove:
0 = tclk (Internal Bus clock)
1 = cpuclk (CPU0 clock)
@@ -39,6 +47,7 @@
"marvell,armada-370-core-clock" - For Armada 370 SoC core clocks
"marvell,armada-375-core-clock" - For Armada 375 SoC core clocks
"marvell,armada-380-core-clock" - For Armada 380/385 SoC core clocks
+ "marvell,armada-390-core-clock" - For Armada 39x SoC core clocks
"marvell,armada-xp-core-clock" - For Armada XP SoC core clocks
"marvell,dove-core-clock" - for Dove SoC core clocks
"marvell,kirkwood-core-clock" - for Kirkwood SoC (except mv88f6180)
diff --git a/Documentation/devicetree/bindings/clock/mvebu-gated-clock.txt b/Documentation/devicetree/bindings/clock/mvebu-gated-clock.txt
index 76477be..31c7c0c 100644
--- a/Documentation/devicetree/bindings/clock/mvebu-gated-clock.txt
+++ b/Documentation/devicetree/bindings/clock/mvebu-gated-clock.txt
@@ -1,6 +1,6 @@
* Gated Clock bindings for Marvell EBU SoCs
-Marvell Armada 370/375/380/385/XP, Dove and Kirkwood allow some
+Marvell Armada 370/375/380/385/39x/XP, Dove and Kirkwood allow some
peripheral clocks to be gated to save some power. The clock consumer
should specify the desired clock by having the clock ID in its
"clocks" phandle cell. The clock ID is directly mapped to the
@@ -77,6 +77,18 @@
28 xor1 XOR 1
30 sata1 SATA 1
+The following is a list of provided IDs for Armada 39x:
+ID Clock Peripheral
+-----------------------------------
+5 pex1 PCIe 1
+6 pex2 PCIe 2
+7 pex3 PCIe 3
+8 pex0 PCIe 0
+9 usb3h0 USB3 Host 0
+17 sdio SDIO
+22 xor0 XOR 0
+28 xor1 XOR 1
+
The following is a list of provided IDs for Armada XP:
ID Clock Peripheral
-----------------------------------
@@ -152,6 +164,7 @@
"marvell,armada-370-gating-clock" - for Armada 370 SoC clock gating
"marvell,armada-375-gating-clock" - for Armada 375 SoC clock gating
"marvell,armada-380-gating-clock" - for Armada 380/385 SoC clock gating
+ "marvell,armada-390-gating-clock" - for Armada 39x SoC clock gating
"marvell,armada-xp-gating-clock" - for Armada XP SoC clock gating
"marvell,dove-gating-clock" - for Dove SoC clock gating
"marvell,kirkwood-gating-clock" - for Kirkwood SoC clock gating
diff --git a/Documentation/devicetree/bindings/clock/pwm-clock.txt b/Documentation/devicetree/bindings/clock/pwm-clock.txt
new file mode 100644
index 0000000..83db876
--- /dev/null
+++ b/Documentation/devicetree/bindings/clock/pwm-clock.txt
@@ -0,0 +1,26 @@
+Binding for an external clock signal driven by a PWM pin.
+
+This binding uses the common clock binding[1] and the common PWM binding[2].
+
+[1] Documentation/devicetree/bindings/clock/clock-bindings.txt
+[2] Documentation/devicetree/bindings/pwm/pwm.txt
+
+Required properties:
+- compatible : shall be "pwm-clock".
+- #clock-cells : from common clock binding; shall be set to 0.
+- pwms : from common PWM binding; this determines the clock frequency
+ via the period given in the PWM specifier.
+
+Optional properties:
+- clock-output-names : From common clock binding.
+- clock-frequency : Exact output frequency, in case the PWM period
+ is not exact but was rounded to nanoseconds.
+
+Example:
+ clock {
+ compatible = "pwm-clock";
+ #clock-cells = <0>;
+ clock-frequency = <25000000>;
+ clock-output-names = "mipi_mclk";
+ pwms = <&pwm2 0 40>; /* 1 / 40 ns = 25 MHz */
+ };
diff --git a/Documentation/devicetree/bindings/clock/qcom,gcc.txt b/Documentation/devicetree/bindings/clock/qcom,gcc.txt
index aba3d25..54c23f3 100644
--- a/Documentation/devicetree/bindings/clock/qcom,gcc.txt
+++ b/Documentation/devicetree/bindings/clock/qcom,gcc.txt
@@ -8,6 +8,7 @@
"qcom,gcc-apq8084"
"qcom,gcc-ipq8064"
"qcom,gcc-msm8660"
+ "qcom,gcc-msm8916"
"qcom,gcc-msm8960"
"qcom,gcc-msm8974"
"qcom,gcc-msm8974pro"
diff --git a/Documentation/devicetree/bindings/clock/sunxi.txt b/Documentation/devicetree/bindings/clock/sunxi.txt
index 60b4428..4fa11af 100644
--- a/Documentation/devicetree/bindings/clock/sunxi.txt
+++ b/Documentation/devicetree/bindings/clock/sunxi.txt
@@ -20,6 +20,7 @@
"allwinner,sun8i-a23-axi-clk" - for the AXI clock on A23
"allwinner,sun4i-a10-axi-gates-clk" - for the AXI gates
"allwinner,sun4i-a10-ahb-clk" - for the AHB clock
+ "allwinner,sun5i-a13-ahb-clk" - for the AHB clock on A13
"allwinner,sun9i-a80-ahb-clk" - for the AHB bus clocks on A80
"allwinner,sun4i-a10-ahb-gates-clk" - for the AHB gates on A10
"allwinner,sun5i-a13-ahb-gates-clk" - for the AHB gates on A13
@@ -66,6 +67,8 @@
"allwinner,sun4i-a10-usb-clk" - for usb gates + resets on A10 / A20
"allwinner,sun5i-a13-usb-clk" - for usb gates + resets on A13
"allwinner,sun6i-a31-usb-clk" - for usb gates + resets on A31
+ "allwinner,sun9i-a80-usb-mod-clk" - for usb gates + resets on A80
+ "allwinner,sun9i-a80-usb-phy-clk" - for usb phy gates + resets on A80
Required properties for all clocks:
- reg : shall be the control register address for the clock.
diff --git a/Documentation/devicetree/bindings/dma/bcm2835-dma.txt b/Documentation/devicetree/bindings/dma/brcm,bcm2835-dma.txt
similarity index 100%
rename from Documentation/devicetree/bindings/dma/bcm2835-dma.txt
rename to Documentation/devicetree/bindings/dma/brcm,bcm2835-dma.txt
diff --git a/Documentation/devicetree/bindings/drm/imx/ldb.txt b/Documentation/devicetree/bindings/drm/imx/ldb.txt
index 443bcb6..9a21366 100644
--- a/Documentation/devicetree/bindings/drm/imx/ldb.txt
+++ b/Documentation/devicetree/bindings/drm/imx/ldb.txt
@@ -44,23 +44,30 @@
LVDS Channel
============
-Each LVDS Channel has to contain a display-timings node that describes the
-video timings for the connected LVDS display. For detailed information, also
-have a look at Documentation/devicetree/bindings/video/display-timing.txt.
+Each LVDS Channel has to contain either an of graph link to a panel device node
+or a display-timings node that describes the video timings for the connected
+LVDS display as well as the fsl,data-mapping and fsl,data-width properties.
Required properties:
- reg : should be <0> or <1>
+ - port: Input and output port nodes with endpoint definitions as defined in
+ Documentation/devicetree/bindings/graph.txt.
+ On i.MX5, the internal two-input-multiplexer is used. Due to hardware
+ limitations, only one input port (port@[0,1]) can be used for each channel
+ (lvds-channel@[0,1], respectively).
+ On i.MX6, there should be four input ports (port@[0-3]) that correspond
+ to the four LVDS multiplexer inputs.
+ A single output port (port@2 on i.MX5, port@4 on i.MX6) must be connected
+ to a panel input port. Optionally, the output port can be left out if
+ display-timings are used instead.
+
+Optional properties (required if display-timings are used):
+ - display-timings : A node that describes the display timings as defined in
+ Documentation/devicetree/bindings/video/display-timing.txt.
- fsl,data-mapping : should be "spwg" or "jeida"
This describes how the color bits are laid out in the
serialized LVDS signal.
- fsl,data-width : should be <18> or <24>
- - port: A port node with endpoint definitions as defined in
- Documentation/devicetree/bindings/media/video-interfaces.txt.
- On i.MX5, the internal two-input-multiplexer is used.
- Due to hardware limitations, only one port (port@[0,1])
- can be used for each channel (lvds-channel@[0,1], respectively)
- On i.MX6, there should be four ports (port@[0-3]) that correspond
- to the four LVDS multiplexer inputs.
example:
@@ -73,23 +80,21 @@
#size-cells = <0>;
compatible = "fsl,imx53-ldb";
gpr = <&gpr>;
- clocks = <&clks 122>, <&clks 120>,
- <&clks 115>, <&clks 116>,
- <&clks 123>, <&clks 85>;
+ clocks = <&clks IMX5_CLK_LDB_DI0_SEL>,
+ <&clks IMX5_CLK_LDB_DI1_SEL>,
+ <&clks IMX5_CLK_IPU_DI0_SEL>,
+ <&clks IMX5_CLK_IPU_DI1_SEL>,
+ <&clks IMX5_CLK_LDB_DI0_GATE>,
+ <&clks IMX5_CLK_LDB_DI1_GATE>;
clock-names = "di0_pll", "di1_pll",
"di0_sel", "di1_sel",
"di0", "di1";
+ /* Using an of-graph endpoint link to connect the panel */
lvds-channel@0 {
#address-cells = <1>;
#size-cells = <0>;
reg = <0>;
- fsl,data-mapping = "spwg";
- fsl,data-width = <24>;
-
- display-timings {
- /* ... */
- };
port@0 {
reg = <0>;
@@ -98,8 +103,17 @@
remote-endpoint = <&ipu_di0_lvds0>;
};
};
+
+ port@2 {
+ reg = <2>;
+
+ lvds0_out: endpoint {
+ remote-endpoint = <&panel_in>;
+ };
+ };
};
+ /* Using display-timings and fsl,data-mapping/width instead */
lvds-channel@1 {
#address-cells = <1>;
#size-cells = <0>;
@@ -120,3 +134,13 @@
};
};
};
+
+panel: lvds-panel {
+ /* ... */
+
+ port {
+ panel_in: endpoint {
+ remote-endpoint = <&lvds0_out>;
+ };
+ };
+};
diff --git a/Documentation/devicetree/bindings/extcon/extcon-usb-gpio.txt b/Documentation/devicetree/bindings/extcon/extcon-usb-gpio.txt
new file mode 100644
index 0000000..af0b903
--- /dev/null
+++ b/Documentation/devicetree/bindings/extcon/extcon-usb-gpio.txt
@@ -0,0 +1,18 @@
+USB GPIO Extcon device
+
+This is a virtual device used to generate USB cable states from the USB ID pin
+connected to a GPIO pin.
+
+Required properties:
+- compatible: Should be "linux,extcon-usb-gpio"
+- id-gpio: gpio for USB ID pin. See gpio binding.
+
+Example: An extcon-usb-gpio node as used in dra7-evm.dts is listed below:
+ extcon_usb1 {
+ compatible = "linux,extcon-usb-gpio";
+ id-gpio = <&gpio6 1 GPIO_ACTIVE_HIGH>;
+ };
+
+ &omap_dwc3_1 {
+ extcon = <&extcon_usb1>;
+ };
diff --git a/Documentation/devicetree/bindings/gpio/gpio-bcm-kona.txt b/Documentation/devicetree/bindings/gpio/brcm,kona-gpio.txt
similarity index 100%
rename from Documentation/devicetree/bindings/gpio/gpio-bcm-kona.txt
rename to Documentation/devicetree/bindings/gpio/brcm,kona-gpio.txt
diff --git a/Documentation/devicetree/bindings/gpio/gpio-altera.txt b/Documentation/devicetree/bindings/gpio/gpio-altera.txt
new file mode 100644
index 0000000..12f5014
--- /dev/null
+++ b/Documentation/devicetree/bindings/gpio/gpio-altera.txt
@@ -0,0 +1,43 @@
+Altera GPIO controller bindings
+
+Required properties:
+- compatible:
+ - "altr,pio-1.0"
+- reg: Physical base address and length of the controller's registers.
+- #gpio-cells : Should be 2
+ - The first cell is the gpio offset number.
+ - The second cell is reserved and is currently unused.
+- gpio-controller : Marks the device node as a GPIO controller.
+- interrupt-controller: Mark the device node as an interrupt controller
+- #interrupt-cells : Should be 1. The interrupt type is fixed in the hardware.
+ - The first cell is the GPIO offset number within the GPIO controller.
+- interrupts: Specify the interrupt.
+- altr,interrupt-trigger: Specifies the interrupt trigger type the GPIO
+ hardware is synthesized with. This field is required if the Altera GPIO
+ controller used has IRQ enabled, as the interrupt type is not software
+ controlled but hardware synthesized. Required if GPIO is used as an interrupt
+ controller. The value is defined in <dt-bindings/interrupt-controller/irq.h>
+ Only the following flags are supported:
+ IRQ_TYPE_EDGE_RISING
+ IRQ_TYPE_EDGE_FALLING
+ IRQ_TYPE_EDGE_BOTH
+ IRQ_TYPE_LEVEL_HIGH
+
+Optional properties:
+- altr,ngpio: Width of the GPIO bank. This defines how many pins the
+ GPIO device has. Ranges between 1-32. Optional and defaults to 32 if not
+ specified.
+
+Example:
+
+gpio_altr: gpio@0xff200000 {
+ compatible = "altr,pio-1.0";
+ reg = <0xff200000 0x10>;
+ interrupts = <0 45 4>;
+ altr,ngpio = <32>;
+ altr,interrupt-trigger = <IRQ_TYPE_EDGE_RISING>;
+ #gpio-cells = <2>;
+ gpio-controller;
+ #interrupt-cells = <1>;
+ interrupt-controller;
+};
diff --git a/Documentation/devicetree/bindings/gpio/gpio.txt b/Documentation/devicetree/bindings/gpio/gpio.txt
index f7a158d..5788d5c 100644
--- a/Documentation/devicetree/bindings/gpio/gpio.txt
+++ b/Documentation/devicetree/bindings/gpio/gpio.txt
@@ -116,6 +116,29 @@
property, and a #gpio-cells integer property, which indicates the number of
cells in a gpio-specifier.
+The GPIO chip may contain GPIO hog definitions. GPIO hogging is a mechanism
+providing automatic GPIO request and configuration as part of the
+gpio-controller's driver probe function.
+
+Each GPIO hog definition is represented as a child node of the GPIO controller.
+Required properties:
+- gpio-hog: A property specifying that this child node represents a GPIO hog.
+- gpios: Store the GPIO information (id, flags, ...). Shall contain the
+ number of cells specified in its parent node (GPIO controller
+ node).
+Only one of the following properties is used; they are scanned in the order
+shown below. This means that when multiple properties are present they will be
+searched in the order presented below and the first match is taken as the
+intended configuration.
+- input: A property specifying to set the GPIO direction as input.
+- output-low: A property specifying to set the GPIO direction as output with
+ the value low.
+- output-high: A property specifying to set the GPIO direction as output with
+ the value high.
+
+Optional properties:
+- line-name: The GPIO label name. If not present the node name is used.
+
Example of two SOC GPIO banks defined as gpio-controller nodes:
qe_pio_a: gpio-controller@1400 {
@@ -123,6 +146,13 @@
reg = <0x1400 0x18>;
gpio-controller;
#gpio-cells = <2>;
+
+ line_b {
+ gpio-hog;
+ gpios = <6 0>;
+ output-low;
+ line-name = "foo-bar-gpio";
+ };
};
qe_pio_e: gpio-controller@1460 {
diff --git a/Documentation/devicetree/bindings/gpio/mrvl-gpio.txt b/Documentation/devicetree/bindings/gpio/mrvl-gpio.txt
index 67a2e4e..98d1983 100644
--- a/Documentation/devicetree/bindings/gpio/mrvl-gpio.txt
+++ b/Documentation/devicetree/bindings/gpio/mrvl-gpio.txt
@@ -12,7 +12,7 @@
gpio_mux.
- interrupt-names : Should be the names of irq resources. Each interrupt
uses its own interrupt name, so there should be as many interrupt names
- as referenced interrups.
+ as referenced interrupts.
- interrupt-controller : Identifies the node as an interrupt controller.
- #interrupt-cells: Specifies the number of cells needed to encode an
interrupt source.
diff --git a/Documentation/devicetree/bindings/i2c/i2c-bcm-kona.txt b/Documentation/devicetree/bindings/i2c/brcm,kona-i2c.txt
similarity index 100%
rename from Documentation/devicetree/bindings/i2c/i2c-bcm-kona.txt
rename to Documentation/devicetree/bindings/i2c/brcm,kona-i2c.txt
diff --git a/Documentation/devicetree/bindings/media/exynos-jpeg-codec.txt b/Documentation/devicetree/bindings/media/exynos-jpeg-codec.txt
index bf52ed4..4ef4563 100644
--- a/Documentation/devicetree/bindings/media/exynos-jpeg-codec.txt
+++ b/Documentation/devicetree/bindings/media/exynos-jpeg-codec.txt
@@ -4,7 +4,7 @@
- compatible : should be one of:
"samsung,s5pv210-jpeg", "samsung,exynos4210-jpeg",
- "samsung,exynos3250-jpeg";
+ "samsung,exynos3250-jpeg", "samsung,exynos5420-jpeg";
- reg : address and length of the JPEG codec IP register set;
- interrupts : specifies the JPEG codec IP interrupt;
- clock-names : should contain:
diff --git a/Documentation/devicetree/bindings/media/i2c/mt9v032.txt b/Documentation/devicetree/bindings/media/i2c/mt9v032.txt
new file mode 100644
index 0000000..2025653
--- /dev/null
+++ b/Documentation/devicetree/bindings/media/i2c/mt9v032.txt
@@ -0,0 +1,39 @@
+* Aptina 1/3-Inch WVGA CMOS Digital Image Sensor
+
+The Aptina MT9V032 is a 1/3-inch CMOS active pixel digital image sensor with
+an active array size of 752H x 480V. It is programmable through a simple
+two-wire serial interface.
+
+Required Properties:
+
+- compatible: value should be either one among the following
+ (a) "aptina,mt9v022" for MT9V022 color sensor
+ (b) "aptina,mt9v022m" for MT9V022 monochrome sensor
+ (c) "aptina,mt9v024" for MT9V024 color sensor
+ (d) "aptina,mt9v024m" for MT9V024 monochrome sensor
+ (e) "aptina,mt9v032" for MT9V032 color sensor
+ (f) "aptina,mt9v032m" for MT9V032 monochrome sensor
+ (g) "aptina,mt9v034" for MT9V034 color sensor
+ (h) "aptina,mt9v034m" for MT9V034 monochrome sensor
+
+Optional Properties:
+
+- link-frequencies: List of allowed link frequencies in Hz. Each frequency is
+ expressed as a 64-bit big-endian integer.
+
+For further reading on port node refer to
+Documentation/devicetree/bindings/media/video-interfaces.txt.
+
+Example:
+
+ mt9v032@5c {
+ compatible = "aptina,mt9v032";
+ reg = <0x5c>;
+
+ port {
+ mt9v032_out: endpoint {
+ link-frequencies = /bits/ 64
+ <13000000 26600000 27000000>;
+ };
+ };
+ };
diff --git a/Documentation/devicetree/bindings/media/i2c/ov2640.txt b/Documentation/devicetree/bindings/media/i2c/ov2640.txt
new file mode 100644
index 0000000..c429b5b
--- /dev/null
+++ b/Documentation/devicetree/bindings/media/i2c/ov2640.txt
@@ -0,0 +1,46 @@
+* Omnivision OV2640 CMOS sensor
+
+The Omnivision OV2640 sensor supports multiple resolution outputs, such as
+CIF, SVGA and UXGA. It can also output YUV422/420, RGB565/555 or raw RGB
+formats.
+
+Required Properties:
+- compatible: should be "ovti,ov2640"
+- clocks: reference to the xvclk input clock.
+- clock-names: should be "xvclk".
+
+Optional Properties:
+- resetb-gpios: reference to the GPIO connected to the resetb pin, if any.
+- pwdn-gpios: reference to the GPIO connected to the pwdn pin, if any.
+
+The device node must contain one 'port' child node for its digital output
+video port, in accordance with the video interface bindings defined in
+Documentation/devicetree/bindings/media/video-interfaces.txt.
+
+Example:
+
+ i2c1: i2c@f0018000 {
+ ov2640: camera@0x30 {
+ compatible = "ovti,ov2640";
+ reg = <0x30>;
+
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_pck1 &pinctrl_ov2640_pwdn &pinctrl_ov2640_resetb>;
+
+ resetb-gpios = <&pioE 24 GPIO_ACTIVE_LOW>;
+ pwdn-gpios = <&pioE 29 GPIO_ACTIVE_HIGH>;
+
+ clocks = <&pck1>;
+ clock-names = "xvclk";
+
+ assigned-clocks = <&pck1>;
+ assigned-clock-rates = <25000000>;
+
+ port {
+ ov2640_0: endpoint {
+ remote-endpoint = <&isi_0>;
+ bus-width = <8>;
+ };
+ };
+ };
+ };
diff --git a/Documentation/devicetree/bindings/media/i2c/ov2659.txt b/Documentation/devicetree/bindings/media/i2c/ov2659.txt
new file mode 100644
index 0000000..cabc7d8
--- /dev/null
+++ b/Documentation/devicetree/bindings/media/i2c/ov2659.txt
@@ -0,0 +1,38 @@
+* OV2659 1/5-Inch 2Mp SOC Camera
+
+The Omnivision OV2659 is a 1/5-inch SOC camera, with an active array size of
+1632H x 1212V. It is programmable through an SCCB interface. The OV2659 sensor
+supports multiple resolution outputs, such as UXGA, SVGA and 720p. It can also
+output YUV422, RGB565/555 or raw RGB formats.
+
+Required Properties:
+- compatible: Must be "ovti,ov2659"
+- reg: I2C slave address
+- clocks: reference to the xvclk input clock.
+- clock-names: should be "xvclk".
+- link-frequencies: target pixel clock frequency.
+
+For further reading on port node refer to
+Documentation/devicetree/bindings/media/video-interfaces.txt.
+
+Example:
+
+ i2c0@1c22000 {
+ ...
+ ...
+ ov2659@30 {
+ compatible = "ovti,ov2659";
+ reg = <0x30>;
+
+ clocks = <&clk_ov2659 0>;
+ clock-names = "xvclk";
+
+ port {
+ ov2659_0: endpoint {
+ remote-endpoint = <&vpfe_ep>;
+ link-frequencies = /bits/ 64 <70000000>;
+ };
+ };
+ };
+ ...
+ };
diff --git a/Documentation/devicetree/bindings/media/video-interfaces.txt b/Documentation/devicetree/bindings/media/video-interfaces.txt
index 571b4c6..9cd2a36 100644
--- a/Documentation/devicetree/bindings/media/video-interfaces.txt
+++ b/Documentation/devicetree/bindings/media/video-interfaces.txt
@@ -106,6 +106,12 @@
- link-frequencies: Allowed data bus frequencies. For MIPI CSI-2, for
instance, this is the actual frequency of the bus, not bits per clock per
lane value. An array of 64-bit unsigned integers.
+- lane-polarities: an array of polarities of the lanes starting from the clock
+ lane and followed by the data lanes in the same order as in data-lanes.
+ Valid values are 0 (normal) and 1 (inverted). The length of the array
+ should be the combined length of data-lanes and clock-lanes properties.
+ If the lane-polarities property is omitted, the value must be interpreted
+ as 0 (normal). This property is valid for serial busses only.
Example
diff --git a/Documentation/devicetree/bindings/media/xilinx/video.txt b/Documentation/devicetree/bindings/media/xilinx/video.txt
new file mode 100644
index 0000000..cbd46fa
--- /dev/null
+++ b/Documentation/devicetree/bindings/media/xilinx/video.txt
@@ -0,0 +1,35 @@
+DT bindings for Xilinx video IP cores
+-------------------------------------
+
+Xilinx video IP cores process video streams by acting as video sinks and/or
+sources. They are connected by links through their input and output ports,
+creating a video pipeline.
+
+Each video IP core is represented by an AMBA bus child node in the device
+tree using bindings documented in this directory. Connections between the IP
+cores are represented as defined in ../video-interfaces.txt.
+
+The whole pipeline is represented by an AMBA bus child node in the device
+tree using bindings documented in ./xlnx,video.txt.
+
+Common properties
+-----------------
+
+The following properties are common to all Xilinx video IP cores.
+
+- xlnx,video-format: This property represents a video format transmitted on an
+ AXI bus between video IP cores, using its VF code as defined in "AXI4-Stream
+ Video IP and System Design Guide" [UG934]. How the format relates to the IP
+ core is described in the IP core bindings documentation.
+
+- xlnx,video-width: This property qualifies the video format with the sample
+ width expressed as a number of bits per pixel component. All components must
+ use the same width.
+
+- xlnx,cfa-pattern: When the video format is set to Mono/Sensor, this property
+ describes the sensor's color filter array pattern. Supported values are
+ "bggr", "gbrg", "grbg", "rggb" and "mono". If not specified, the pattern
+ defaults to "mono".
+
+
+[UG934] http://www.xilinx.com/support/documentation/ip_documentation/axi_videoip/v1_0/ug934_axi_videoIP.pdf
diff --git a/Documentation/devicetree/bindings/media/xilinx/xlnx,v-tc.txt b/Documentation/devicetree/bindings/media/xilinx/xlnx,v-tc.txt
new file mode 100644
index 0000000..2aed3b4
--- /dev/null
+++ b/Documentation/devicetree/bindings/media/xilinx/xlnx,v-tc.txt
@@ -0,0 +1,33 @@
+Xilinx Video Timing Controller (VTC)
+------------------------------------
+
+The Video Timing Controller is a general purpose video timing generator and
+detector.
+
+Required properties:
+
+ - compatible: Must be "xlnx,v-tc-6.1".
+
+ - reg: Physical base address and length of the registers set for the device.
+
+ - clocks: Must contain a clock specifier for the VTC core and timing
+ interfaces clock.
+
+Optional properties:
+
+ - xlnx,detector: The VTC has a timing detector
+ - xlnx,generator: The VTC has a timing generator
+
+ At least one of the xlnx,detector and xlnx,generator properties must be
+ specified.
+
+
+Example:
+
+ vtc: vtc@43c40000 {
+ compatible = "xlnx,v-tc-6.1";
+ reg = <0x43c40000 0x10000>;
+
+ clocks = <&clkc 15>;
+ xlnx,generator;
+ };
diff --git a/Documentation/devicetree/bindings/media/xilinx/xlnx,v-tpg.txt b/Documentation/devicetree/bindings/media/xilinx/xlnx,v-tpg.txt
new file mode 100644
index 0000000..9dd86b3
--- /dev/null
+++ b/Documentation/devicetree/bindings/media/xilinx/xlnx,v-tpg.txt
@@ -0,0 +1,71 @@
+Xilinx Video Test Pattern Generator (TPG)
+-----------------------------------------
+
+Required properties:
+
+- compatible: Must contain at least one of
+
+ "xlnx,v-tpg-5.0" (TPG version 5.0)
+ "xlnx,v-tpg-6.0" (TPG version 6.0)
+
+ TPG versions backward-compatible with previous versions should list all
+ compatible versions in the newer to older order.
+
+- reg: Physical base address and length of the registers set for the device.
+
+- clocks: Reference to the video core clock.
+
+- xlnx,video-format, xlnx,video-width: Video format and width, as defined in
+ video.txt.
+
+- port: Video port, using the DT bindings defined in ../video-interfaces.txt.
+ The TPG has a single output port numbered 0.
+
+Optional properties:
+
+- xlnx,vtc: A phandle referencing the Video Timing Controller that generates
+ video timings for the TPG test patterns.
+
+- timing-gpios: Specifier for a GPIO that controls the timing mux at the TPG
+ input. The GPIO active level corresponds to the selection of VTC-generated
+ video timings.
+
+The xlnx,vtc and timing-gpios properties are mandatory when the TPG is
+synthesized with two ports and forbidden when synthesized with one port.
+
+Example:
+
+ tpg_0: tpg@40050000 {
+ compatible = "xlnx,v-tpg-6.0", "xlnx,v-tpg-5.0";
+ reg = <0x40050000 0x10000>;
+ clocks = <&clkc 15>;
+
+ xlnx,vtc = <&vtc_3>;
+ timing-gpios = <&ps7_gpio_0 55 GPIO_ACTIVE_LOW>;
+
+ ports {
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ port@0 {
+ reg = <0>;
+
+ xlnx,video-format = <XVIP_VF_YUV_422>;
+ xlnx,video-width = <8>;
+
+ tpg_in: endpoint {
+ remote-endpoint = <&adv7611_out>;
+ };
+ };
+ port@1 {
+ reg = <1>;
+
+ xlnx,video-format = <XVIP_VF_YUV_422>;
+ xlnx,video-width = <8>;
+
+ tpg1_out: endpoint {
+ remote-endpoint = <&switch_in0>;
+ };
+ };
+ };
+ };
diff --git a/Documentation/devicetree/bindings/media/xilinx/xlnx,video.txt b/Documentation/devicetree/bindings/media/xilinx/xlnx,video.txt
new file mode 100644
index 0000000..5a02270
--- /dev/null
+++ b/Documentation/devicetree/bindings/media/xilinx/xlnx,video.txt
@@ -0,0 +1,55 @@
+Xilinx Video IP Pipeline (VIPP)
+-------------------------------
+
+General concept
+---------------
+
+A Xilinx video IP pipeline processes video streams through one or more Xilinx
+video IP cores. Each video IP core is represented as documented in video.txt
+and in IP core specific documentation, xlnx,v-*.txt, in this directory. The DT
+node of the VIPP is the top level node of the pipeline and defines the
+mappings between DMAs and the video IP cores.
+
+Required properties:
+
+- compatible: Must be "xlnx,video".
+
+- dmas, dma-names: List of one DMA specifier and identifier string (as defined
+ in Documentation/devicetree/bindings/dma/dma.txt) per port. Each port
+ requires a DMA channel with the identifier string set to "port" followed by
+ the port index.
+
+- ports: Video port, using the DT bindings defined in ../video-interfaces.txt.
+
+Required port properties:
+
+- direction: should be either "input" or "output" depending on the direction
+ of the stream.
+
+Example:
+
+ video_cap {
+ compatible = "xlnx,video";
+ dmas = <&vdma_1 1>, <&vdma_3 1>;
+ dma-names = "port0", "port1";
+
+ ports {
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ port@0 {
+ reg = <0>;
+ direction = "input";
+ vcap0_in0: endpoint {
+ remote-endpoint = <&scaler0_out>;
+ };
+ };
+ port@1 {
+ reg = <1>;
+ direction = "input";
+ vcap0_in1: endpoint {
+ remote-endpoint = <&switch_out1>;
+ };
+ };
+ };
+ };
diff --git a/Documentation/devicetree/bindings/memory-controllers/ingenic,jz4780-nemc.txt b/Documentation/devicetree/bindings/memory-controllers/ingenic,jz4780-nemc.txt
new file mode 100644
index 0000000..f936b55
--- /dev/null
+++ b/Documentation/devicetree/bindings/memory-controllers/ingenic,jz4780-nemc.txt
@@ -0,0 +1,75 @@
+* Ingenic JZ4780 NAND/external memory controller (NEMC)
+
+This file documents the device tree bindings for the NEMC external memory
+controller in the Ingenic JZ4780 SoC.
+
+Required properties:
+- compatible: Should be set to one of:
+ "ingenic,jz4780-nemc" (JZ4780)
+- reg: Should specify the NEMC controller registers location and length.
+- clocks: Clock for the NEMC controller.
+- #address-cells: Must be set to 2.
+- #size-cells: Must be set to 1.
+- ranges: A set of ranges for each bank describing the physical memory layout.
+ Each should specify the following 4 integer values:
+
+ <cs number> 0 <physical address of mapping> <size of mapping>
+
+Each child of the NEMC node describes a device connected to the NEMC.
+
+Required child node properties:
+- reg: Should contain at least one register specifier, given in the following
+ format:
+
+ <cs number> <offset> <size>
+
+ Multiple registers can be specified across multiple banks. This is needed,
+ for example, for packaged NAND devices with multiple dies. Such devices
+ should be grouped into a single node.
+
+Optional child node properties:
+- ingenic,nemc-bus-width: Specifies the bus width in bits. Defaults to 8 bits.
+- ingenic,nemc-tAS: Address setup time in nanoseconds.
+- ingenic,nemc-tAH: Address hold time in nanoseconds.
+- ingenic,nemc-tBP: Burst pitch time in nanoseconds.
+- ingenic,nemc-tAW: Access wait time in nanoseconds.
+- ingenic,nemc-tSTRV: Static memory recovery time in nanoseconds.
+
+If a child node references multiple banks in its "reg" property, the same value
+for all optional parameters will be configured for all banks. If any optional
+parameters are omitted, they will be left unchanged from whatever they are
+configured to when the NEMC device is probed (which may be the reset value as
+given in the hardware reference manual, or a value configured by the boot
+loader).
+
+Example (NEMC node with a NAND child device attached at CS1):
+
+nemc: nemc@13410000 {
+ compatible = "ingenic,jz4780-nemc";
+ reg = <0x13410000 0x10000>;
+
+ #address-cells = <2>;
+ #size-cells = <1>;
+
+ ranges = <1 0 0x1b000000 0x1000000
+ 2 0 0x1a000000 0x1000000
+ 3 0 0x19000000 0x1000000
+ 4 0 0x18000000 0x1000000
+ 5 0 0x17000000 0x1000000
+ 6 0 0x16000000 0x1000000>;
+
+ clocks = <&cgu JZ4780_CLK_NEMC>;
+
+ nand: nand@1 {
+ compatible = "ingenic,jz4780-nand";
+ reg = <1 0 0x1000000>;
+
+ ingenic,nemc-tAS = <10>;
+ ingenic,nemc-tAH = <5>;
+ ingenic,nemc-tBP = <10>;
+ ingenic,nemc-tAW = <15>;
+ ingenic,nemc-tSTRV = <100>;
+
+ ...
+ };
+};
diff --git a/Documentation/devicetree/bindings/mfd/bcm590xx.txt b/Documentation/devicetree/bindings/mfd/brcm,bcm59056.txt
similarity index 100%
rename from Documentation/devicetree/bindings/mfd/bcm590xx.txt
rename to Documentation/devicetree/bindings/mfd/brcm,bcm59056.txt
diff --git a/Documentation/devicetree/bindings/mips/brcm/bmips.txt b/Documentation/devicetree/bindings/mips/brcm/brcm,bmips.txt
similarity index 100%
rename from Documentation/devicetree/bindings/mips/brcm/bmips.txt
rename to Documentation/devicetree/bindings/mips/brcm/brcm,bmips.txt
diff --git a/Documentation/devicetree/bindings/misc/smc.txt b/Documentation/devicetree/bindings/misc/brcm,kona-smc.txt
similarity index 100%
rename from Documentation/devicetree/bindings/misc/smc.txt
rename to Documentation/devicetree/bindings/misc/brcm,kona-smc.txt
diff --git a/Documentation/devicetree/bindings/misc/lis302.txt b/Documentation/devicetree/bindings/misc/lis302.txt
index 6def86f..2a19bff 100644
--- a/Documentation/devicetree/bindings/misc/lis302.txt
+++ b/Documentation/devicetree/bindings/misc/lis302.txt
@@ -46,11 +46,18 @@
interrupt 2
- st,wakeup-{x,y,z}-{lo,hi}: set wakeup condition on x/y/z axis for
upper/lower limit
+ - st,wakeup-threshold: set wakeup threshold
+ - st,wakeup2-{x,y,z}-{lo,hi}: set wakeup condition on x/y/z axis for
+ upper/lower limit for second wakeup
+ engine.
+ - st,wakeup2-threshold: set wakeup threshold for second wakeup
+ engine.
- st,highpass-cutoff-hz=: 1, 2, 4 or 8 for 1Hz, 2Hz, 4Hz or 8Hz of
highpass cut-off frequency
- st,hipass{1,2}-disable: disable highpass 1/2.
- st,default-rate=: set the default rate
- - st,axis-{x,y,z}=: set the axis to map to the three coordinates
+ - st,axis-{x,y,z}=: set the axis to map to the three coordinates.
+ Negative values can be used for inverted axis.
- st,{min,max}-limit-{x,y,z} set the min/max limits for x/y/z axis
(used by self-test)
diff --git a/Documentation/devicetree/bindings/mmc/kona-sdhci.txt b/Documentation/devicetree/bindings/mmc/brcm,kona-sdhci.txt
similarity index 100%
rename from Documentation/devicetree/bindings/mmc/kona-sdhci.txt
rename to Documentation/devicetree/bindings/mmc/brcm,kona-sdhci.txt
diff --git a/Documentation/devicetree/bindings/net/broadcom-sf2.txt b/Documentation/devicetree/bindings/net/brcm,bcm7445-switch-v4.0.txt
similarity index 100%
rename from Documentation/devicetree/bindings/net/broadcom-sf2.txt
rename to Documentation/devicetree/bindings/net/brcm,bcm7445-switch-v4.0.txt
diff --git a/Documentation/devicetree/bindings/net/broadcom-bcmgenet.txt b/Documentation/devicetree/bindings/net/brcm,bcmgenet.txt
similarity index 100%
rename from Documentation/devicetree/bindings/net/broadcom-bcmgenet.txt
rename to Documentation/devicetree/bindings/net/brcm,bcmgenet.txt
diff --git a/Documentation/devicetree/bindings/net/broadcom-systemport.txt b/Documentation/devicetree/bindings/net/brcm,systemport.txt
similarity index 100%
rename from Documentation/devicetree/bindings/net/broadcom-systemport.txt
rename to Documentation/devicetree/bindings/net/brcm,systemport.txt
diff --git a/Documentation/devicetree/bindings/net/broadcom-mdio-unimac.txt b/Documentation/devicetree/bindings/net/brcm,unimac-mdio.txt
similarity index 100%
rename from Documentation/devicetree/bindings/net/broadcom-mdio-unimac.txt
rename to Documentation/devicetree/bindings/net/brcm,unimac-mdio.txt
diff --git a/Documentation/devicetree/bindings/panel/ampire,am800480r3tmqwa1h.txt b/Documentation/devicetree/bindings/panel/ampire,am800480r3tmqwa1h.txt
new file mode 100644
index 0000000..83e2cae
--- /dev/null
+++ b/Documentation/devicetree/bindings/panel/ampire,am800480r3tmqwa1h.txt
@@ -0,0 +1,7 @@
+Ampire AM-800480R3TMQW-A1H 7.0" WVGA TFT LCD panel
+
+Required properties:
+- compatible: should be "ampire,am800480r3tmqwa1h"
+
+This binding is compatible with the simple-panel binding, which is specified
+in simple-panel.txt in this directory.
diff --git a/Documentation/devicetree/bindings/panel/auo,b101ean01.txt b/Documentation/devicetree/bindings/panel/auo,b101ean01.txt
new file mode 100644
index 0000000..3590b07
--- /dev/null
+++ b/Documentation/devicetree/bindings/panel/auo,b101ean01.txt
@@ -0,0 +1,7 @@
+AU Optronics Corporation 10.1" WSVGA TFT LCD panel
+
+Required properties:
+- compatible: should be "auo,b101ean01"
+
+This binding is compatible with the simple-panel binding, which is specified
+in simple-panel.txt in this directory.
diff --git a/Documentation/devicetree/bindings/panel/innolux,at043tn24.txt b/Documentation/devicetree/bindings/panel/innolux,at043tn24.txt
new file mode 100644
index 0000000..4104226
--- /dev/null
+++ b/Documentation/devicetree/bindings/panel/innolux,at043tn24.txt
@@ -0,0 +1,7 @@
+Innolux AT043TN24 4.3" WQVGA TFT LCD panel
+
+Required properties:
+- compatible: should be "innolux,at043tn24"
+
+This binding is compatible with the simple-panel binding, which is specified
+in simple-panel.txt in this directory.
diff --git a/Documentation/devicetree/bindings/panel/innolux,zj070na-01p.txt b/Documentation/devicetree/bindings/panel/innolux,zj070na-01p.txt
new file mode 100644
index 0000000..824f87f
--- /dev/null
+++ b/Documentation/devicetree/bindings/panel/innolux,zj070na-01p.txt
@@ -0,0 +1,7 @@
+Innolux Corporation 7.0" WSVGA (1024x600) TFT LCD panel
+
+Required properties:
+- compatible: should be "innolux,zj070na-01p"
+
+This binding is compatible with the simple-panel binding, which is specified
+in simple-panel.txt in this directory.
diff --git a/Documentation/devicetree/bindings/panel/ortustech,com43h4m85ulc.txt b/Documentation/devicetree/bindings/panel/ortustech,com43h4m85ulc.txt
new file mode 100644
index 0000000..de19e93
--- /dev/null
+++ b/Documentation/devicetree/bindings/panel/ortustech,com43h4m85ulc.txt
@@ -0,0 +1,7 @@
+OrtusTech COM43H4M85ULC Blanview 3.7" TFT-LCD panel
+
+Required properties:
+- compatible: should be "ortustech,com43h4m85ulc"
+
+This binding is compatible with the simple-panel binding, which is specified
+in simple-panel.txt in this directory.
diff --git a/Documentation/devicetree/bindings/panel/samsung,ltn140at29-301.txt b/Documentation/devicetree/bindings/panel/samsung,ltn140at29-301.txt
new file mode 100644
index 0000000..e7f969d
--- /dev/null
+++ b/Documentation/devicetree/bindings/panel/samsung,ltn140at29-301.txt
@@ -0,0 +1,7 @@
+Samsung Electronics 14" WXGA (1366x768) TFT LCD panel
+
+Required properties:
+- compatible: should be "samsung,ltn140at29-301"
+
+This binding is compatible with the simple-panel binding, which is specified
+in simple-panel.txt in this directory.
diff --git a/Documentation/devicetree/bindings/panel/shelly,sca07010-bfn-lnn.txt b/Documentation/devicetree/bindings/panel/shelly,sca07010-bfn-lnn.txt
new file mode 100644
index 0000000..fc1ea9e
--- /dev/null
+++ b/Documentation/devicetree/bindings/panel/shelly,sca07010-bfn-lnn.txt
@@ -0,0 +1,7 @@
+Shelly SCA07010-BFN-LNN 7.0" WVGA TFT LCD panel
+
+Required properties:
+- compatible: should be "shelly,sca07010-bfn-lnn"
+
+This binding is compatible with the simple-panel binding, which is specified
+in simple-panel.txt in this directory.
diff --git a/Documentation/devicetree/bindings/phy/bcm-phy.txt b/Documentation/devicetree/bindings/phy/brcm,kona-usb2-phy.txt
similarity index 100%
rename from Documentation/devicetree/bindings/phy/bcm-phy.txt
rename to Documentation/devicetree/bindings/phy/brcm,kona-usb2-phy.txt
diff --git a/Documentation/devicetree/bindings/pwm/bcm-kona-pwm.txt b/Documentation/devicetree/bindings/pwm/brcm,kona-pwm.txt
similarity index 100%
rename from Documentation/devicetree/bindings/pwm/bcm-kona-pwm.txt
rename to Documentation/devicetree/bindings/pwm/brcm,kona-pwm.txt
diff --git a/Documentation/devicetree/bindings/arm/bcm/kona-resetmgr.txt b/Documentation/devicetree/bindings/reset/brcm,bcm21664-resetmgr.txt
similarity index 100%
rename from Documentation/devicetree/bindings/arm/bcm/kona-resetmgr.txt
rename to Documentation/devicetree/bindings/reset/brcm,bcm21664-resetmgr.txt
diff --git a/Documentation/devicetree/bindings/serial/bcm63xx-uart.txt b/Documentation/devicetree/bindings/serial/brcm,bcm6345-uart.txt
similarity index 100%
rename from Documentation/devicetree/bindings/serial/bcm63xx-uart.txt
rename to Documentation/devicetree/bindings/serial/brcm,bcm6345-uart.txt
diff --git a/Documentation/devicetree/bindings/sound/bcm2835-i2s.txt b/Documentation/devicetree/bindings/sound/brcm,bcm2835-i2s.txt
similarity index 100%
rename from Documentation/devicetree/bindings/sound/bcm2835-i2s.txt
rename to Documentation/devicetree/bindings/sound/brcm,bcm2835-i2s.txt
diff --git a/Documentation/devicetree/bindings/spmi/qcom,spmi-pmic-arb.txt b/Documentation/devicetree/bindings/spmi/qcom,spmi-pmic-arb.txt
index 715d099..e16b9b5 100644
--- a/Documentation/devicetree/bindings/spmi/qcom,spmi-pmic-arb.txt
+++ b/Documentation/devicetree/bindings/spmi/qcom,spmi-pmic-arb.txt
@@ -1,6 +1,6 @@
Qualcomm SPMI Controller (PMIC Arbiter)
-The SPMI PMIC Arbiter is found on the Snapdragon 800 Series. It is an SPMI
+The SPMI PMIC Arbiter is found on Snapdragon chipsets. It is an SPMI
controller with wrapping arbitration logic to allow for multiple on-chip
devices to control a single SPMI master.
@@ -19,6 +19,10 @@
"core" - core registers
"intr" - interrupt controller registers
"cnfg" - configuration registers
+ Registers used only for V2 PMIC Arbiter:
+ "chnls" - tx-channel per virtual slave registers.
+ "obsrvr" - rx-channel (called observer) per virtual slave registers.
+
- reg : address + size pairs describing the PMIC arb register sets; order must
correspond with the order of entries in reg-names
- #address-cells : must be set to 2
diff --git a/Documentation/devicetree/bindings/arm/bcm/kona-timer.txt b/Documentation/devicetree/bindings/timer/brcm,kona-timer.txt
similarity index 100%
rename from Documentation/devicetree/bindings/arm/bcm/kona-timer.txt
rename to Documentation/devicetree/bindings/timer/brcm,kona-timer.txt
diff --git a/Documentation/devicetree/bindings/unittest.txt b/Documentation/devicetree/bindings/unittest.txt
index 8933211..3bf58c2 100644
--- a/Documentation/devicetree/bindings/unittest.txt
+++ b/Documentation/devicetree/bindings/unittest.txt
@@ -1,60 +1,60 @@
-1) OF selftest platform device
+1) OF unittest platform device
-** selftest
+** unittest
Required properties:
-- compatible: must be "selftest"
+- compatible: must be "unittest"
All other properties are optional.
Example:
- selftest {
- compatible = "selftest";
+ unittest {
+ compatible = "unittest";
status = "okay";
};
-2) OF selftest i2c adapter platform device
+2) OF unittest i2c adapter platform device
** platform device unittest adapter
Required properties:
-- compatible: must be selftest-i2c-bus
+- compatible: must be unittest-i2c-bus
-Children nodes contain selftest i2c devices.
+Child nodes contain unittest i2c devices.
Example:
- selftest-i2c-bus {
- compatible = "selftest-i2c-bus";
+ unittest-i2c-bus {
+ compatible = "unittest-i2c-bus";
status = "okay";
};
-3) OF selftest i2c device
+3) OF unittest i2c device
-** I2C selftest device
+** I2C unittest device
Required properties:
-- compatible: must be selftest-i2c-dev
+- compatible: must be unittest-i2c-dev
All other properties are optional
Example:
- selftest-i2c-dev {
- compatible = "selftest-i2c-dev";
+ unittest-i2c-dev {
+ compatible = "unittest-i2c-dev";
status = "okay";
};
-4) OF selftest i2c mux device
+4) OF unittest i2c mux device
-** I2C selftest mux
+** I2C unittest mux
Required properties:
-- compatible: must be selftest-i2c-mux
+- compatible: must be unittest-i2c-mux
-Children nodes contain selftest i2c bus nodes per channel.
+Child nodes contain unittest i2c bus nodes per channel.
Example:
- selftest-i2c-mux {
- compatible = "selftest-i2c-mux";
+ unittest-i2c-mux {
+ compatible = "unittest-i2c-mux";
status = "okay";
#address-cells = <1>;
#size-cells = <0>;
@@ -64,7 +64,7 @@
#size-cells = <0>;
i2c-dev {
reg = <8>;
- compatible = "selftest-i2c-dev";
+ compatible = "unittest-i2c-dev";
status = "okay";
};
};
diff --git a/Documentation/devicetree/bindings/mips/brcm/usb.txt b/Documentation/devicetree/bindings/usb/brcm,bcm3384-usb.txt
similarity index 100%
rename from Documentation/devicetree/bindings/mips/brcm/usb.txt
rename to Documentation/devicetree/bindings/usb/brcm,bcm3384-usb.txt
diff --git a/Documentation/devicetree/bindings/vendor-prefixes.txt b/Documentation/devicetree/bindings/vendor-prefixes.txt
index b13aa55..aecec35 100644
--- a/Documentation/devicetree/bindings/vendor-prefixes.txt
+++ b/Documentation/devicetree/bindings/vendor-prefixes.txt
@@ -17,9 +17,11 @@
amcc Applied Micro Circuits Corporation (APM, formally AMCC)
amd Advanced Micro Devices (AMD), Inc.
amlogic Amlogic, Inc.
+ampire Ampire Co., Ltd.
ams AMS AG
amstaos AMS-Taos Inc.
apm Applied Micro Circuits Corporation (APM)
+aptina Aptina Imaging
arasan Arasan Chip Systems
arm ARM Ltd.
armadeus ARMadeus Systems SARL
@@ -135,6 +137,7 @@
nxp NXP Semiconductors
onnn ON Semiconductor Corp.
opencores OpenCores.org
+ortustech Ortus Technology Co., Ltd.
ovti OmniVision Technologies
panasonic Panasonic Corporation
parade Parade Technologies Inc.
diff --git a/Documentation/devicetree/bindings/video/ti,omap-dss.txt b/Documentation/devicetree/bindings/video/ti,omap-dss.txt
index d5f1a3f..e1ef295 100644
--- a/Documentation/devicetree/bindings/video/ti,omap-dss.txt
+++ b/Documentation/devicetree/bindings/video/ti,omap-dss.txt
@@ -25,8 +25,8 @@
-----------
The DSS Core and the encoders have video port outputs. The structure of the
-video ports is described in Documentation/devicetree/bindings/video/video-
-ports.txt, and the properties for the ports and endpoints for each encoder are
+video ports is described in Documentation/devicetree/bindings/graph.txt,
+and the properties for the ports and endpoints for each encoder are
described in the SoC's DSS binding documentation.
The video ports are used to describe the connections to external hardware, like
diff --git a/Documentation/devicetree/bindings/arm/bcm/kona-wdt.txt b/Documentation/devicetree/bindings/watchdog/brcm,kona-wdt.txt
similarity index 100%
rename from Documentation/devicetree/bindings/arm/bcm/kona-wdt.txt
rename to Documentation/devicetree/bindings/watchdog/brcm,kona-wdt.txt
diff --git a/Documentation/devicetree/of_selftest.txt b/Documentation/devicetree/of_unittest.txt
similarity index 90%
rename from Documentation/devicetree/of_selftest.txt
rename to Documentation/devicetree/of_unittest.txt
index 57a808b..3e4e7d4 100644
--- a/Documentation/devicetree/of_selftest.txt
+++ b/Documentation/devicetree/of_unittest.txt
@@ -1,11 +1,11 @@
-Open Firmware Device Tree Selftest
+Open Firmware Device Tree Unittest
----------------------------------
Author: Gaurav Minocha <gaurav.minocha.os@gmail.com>
1. Introduction
-This document explains how the test data required for executing OF selftest
+This document explains how the test data required for executing OF unittest
is attached to the live tree dynamically, independent of the machine's
architecture.
@@ -22,31 +22,31 @@
2. Test-data
-The Device Tree Source file (drivers/of/testcase-data/testcases.dts) contains
+The Device Tree Source file (drivers/of/unittest-data/testcases.dts) contains
the test data required for executing the unit tests automated in
-drivers/of/selftests.c. Currently, following Device Tree Source Include files
-(.dtsi) are included in testcase.dts:
+drivers/of/unittest.c. Currently, the following Device Tree Source Include files
+(.dtsi) are included in testcases.dts:
-drivers/of/testcase-data/tests-interrupts.dtsi
-drivers/of/testcase-data/tests-platform.dtsi
-drivers/of/testcase-data/tests-phandle.dtsi
-drivers/of/testcase-data/tests-match.dtsi
+drivers/of/unittest-data/tests-interrupts.dtsi
+drivers/of/unittest-data/tests-platform.dtsi
+drivers/of/unittest-data/tests-phandle.dtsi
+drivers/of/unittest-data/tests-match.dtsi
When the kernel is build with OF_SELFTEST enabled, then the following make rule
$(obj)/%.dtb: $(src)/%.dts FORCE
$(call if_changed_dep, dtc)
-is used to compile the DT source file (testcase.dts) into a binary blob
-(testcase.dtb), also referred as flattened DT.
+is used to compile the DT source file (testcases.dts) into a binary blob
+(testcases.dtb), also referred to as a flattened DT.
After that, using the following rule the binary blob above is wrapped as an
-assembly file (testcase.dtb.S).
+assembly file (testcases.dtb.S).
$(obj)/%.dtb.S: $(obj)/%.dtb
$(call cmd, dt_S_dtb)
-The assembly file is compiled into an object file (testcase.dtb.o), and is
+The assembly file is compiled into an object file (testcases.dtb.o), and is
linked into the kernel image.
@@ -98,7 +98,7 @@
Figure 1: Generic structure of un-flattened device tree
-Before executing OF selftest, it is required to attach the test data to
+Before executing OF unittest, it is required to attach the test data to
machine's device tree (if present). So, when selftest_data_add() is called,
at first it reads the flattened device tree data linked into the kernel image
via the following kernel symbols:
diff --git a/Documentation/driver-model/devres.txt b/Documentation/driver-model/devres.txt
index e1e2bbd..831a536 100644
--- a/Documentation/driver-model/devres.txt
+++ b/Documentation/driver-model/devres.txt
@@ -276,6 +276,7 @@
devm_ioport_unmap()
devm_ioremap()
devm_ioremap_nocache()
+ devm_ioremap_wc()
devm_ioremap_resource() : checks resource, requests memory region, ioremaps
devm_iounmap()
pcim_iomap()
diff --git a/Documentation/email-clients.txt b/Documentation/email-clients.txt
index eede608..c7d49b8 100644
--- a/Documentation/email-clients.txt
+++ b/Documentation/email-clients.txt
@@ -211,7 +211,7 @@
Thunderbird is an Outlook clone that likes to mangle text, but there are ways
to coerce it into behaving.
-- Allows use of an external editor:
+- Allow use of an external editor:
The easiest thing to do with Thunderbird and patches is to use an
"external editor" extension and then just use your favorite $EDITOR
for reading/merging patches into the body text. To do this, download
@@ -219,6 +219,15 @@
View->Toolbars->Customize... and finally just click on it when in the
Compose dialog.
+  Please note that "external editor" requires that your editor not fork; in
+  other words, the editor must not return before closing.
+ You may have to pass additional flags or change the settings of your
+ editor. Most notably if you are using gvim then you must pass the -f
+ option to gvim by putting "/usr/bin/gvim -f" (if the binary is in
+ /usr/bin) to the text editor field in "external editor" settings. If you
+ are using some other editor then please read its manual to find out how
+ to do this.
+
To beat some sense out of the internal editor, do this:
- Edit your Thunderbird config settings so that it won't use format=flowed.
diff --git a/Documentation/filesystems/f2fs.txt b/Documentation/filesystems/f2fs.txt
index dac11d7..e9e750e 100644
--- a/Documentation/filesystems/f2fs.txt
+++ b/Documentation/filesystems/f2fs.txt
@@ -140,6 +140,12 @@
fastboot This option is used when a system wants to reduce mount
time as much as possible, even though normal performance
can be sacrificed.
+extent_cache           Enable an extent cache based on an rb-tree; it can cache
+                       as many extent mappings between contiguous logical and
+                       physical addresses per inode as possible, increasing the
+                       cache hit ratio.
+noinline_data          Disable the inline data feature; the inline data feature
+                       is enabled by default.
================================================================================
DEBUGFS ENTRIES
diff --git a/Documentation/filesystems/proc.txt b/Documentation/filesystems/proc.txt
index 8e36c7e..c3b6b30 100644
--- a/Documentation/filesystems/proc.txt
+++ b/Documentation/filesystems/proc.txt
@@ -1260,9 +1260,9 @@
since the system first booted. For a quick look, simply cat the file:
> cat /proc/stat
- cpu 2255 34 2290 22625563 6290 127 456 0 0
- cpu0 1132 34 1441 11311718 3675 127 438 0 0
- cpu1 1123 0 849 11313845 2614 0 18 0 0
+ cpu 2255 34 2290 22625563 6290 127 456 0 0 0
+ cpu0 1132 34 1441 11311718 3675 127 438 0 0 0
+ cpu1 1123 0 849 11313845 2614 0 18 0 0 0
intr 114930548 113199788 3 0 5 263 0 4 [... lots more numbers ...]
ctxt 1990473
btime 1062191376
diff --git a/Documentation/gpio/board.txt b/Documentation/gpio/board.txt
index 8b35f51..b80606d 100644
--- a/Documentation/gpio/board.txt
+++ b/Documentation/gpio/board.txt
@@ -50,10 +50,43 @@
ACPI
----
-ACPI does not support function names for GPIOs. Therefore, only the "idx"
-argument of gpiod_get_index() is useful to discriminate between GPIOs assigned
-to a device. The "con_id" argument can still be set for debugging purposes (it
-will appear under error messages as well as debug and sysfs nodes).
+ACPI also supports function names for GPIOs in a similar fashion to DT.
+The above DT example can be converted to an equivalent ACPI description
+with the help of _DSD (Device Specific Data), introduced in ACPI 5.1:
+
+ Device (FOO) {
+ Name (_CRS, ResourceTemplate () {
+ GpioIo (Exclusive, ..., IoRestrictionOutputOnly,
+ "\\_SB.GPI0") {15} // red
+ GpioIo (Exclusive, ..., IoRestrictionOutputOnly,
+ "\\_SB.GPI0") {16} // green
+ GpioIo (Exclusive, ..., IoRestrictionOutputOnly,
+ "\\_SB.GPI0") {17} // blue
+ GpioIo (Exclusive, ..., IoRestrictionOutputOnly,
+ "\\_SB.GPI0") {1} // power
+ })
+
+ Name (_DSD, Package () {
+ ToUUID("daffd814-6eba-4d8c-8a91-bc9bbf4aa301"),
+ Package () {
+ Package () {
+ "led-gpios",
+ Package () {
+ ^FOO, 0, 0, 1,
+ ^FOO, 1, 0, 1,
+ ^FOO, 2, 0, 1,
+ }
+ },
+ Package () {
+ "power-gpios",
+ Package () {^FOO, 3, 0, 0},
+ },
+ }
+ })
+ }
+
+For more information about the ACPI GPIO bindings see
+Documentation/acpi/gpio-properties.txt.
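+
+From a driver bound to the "FOO" device above, the GPIOs can then be looked up
+by function name just as with DT. A minimal, illustrative sketch follows; the
+struct device pointer, the chosen flags and the error handling are assumptions
+of the example, not part of the binding:
+
+	struct gpio_desc *red, *power;
+
+	red = gpiod_get_index(dev, "led", 0, GPIOD_OUT_LOW);	/* pin 15 */
+	power = gpiod_get(dev, "power", GPIOD_OUT_HIGH);	/* pin 1 */
+	if (IS_ERR(red) || IS_ERR(power))
+		return -ENODEV;		/* or propagate the PTR_ERR() value */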
Platform Data
-------------
diff --git a/Documentation/gpio/consumer.txt b/Documentation/gpio/consumer.txt
index d85fbae..c21c131 100644
--- a/Documentation/gpio/consumer.txt
+++ b/Documentation/gpio/consumer.txt
@@ -58,7 +58,6 @@
gpiod_get_index_optional() functions can be used. These functions return NULL
instead of -ENOENT if no GPIO has been assigned to the requested function:
-
struct gpio_desc *gpiod_get_optional(struct device *dev,
const char *con_id,
enum gpiod_flags flags)
@@ -68,6 +67,27 @@
unsigned int index,
enum gpiod_flags flags)
+For a function using multiple GPIOs all of those can be obtained with one call:
+
+ struct gpio_descs *gpiod_get_array(struct device *dev,
+ const char *con_id,
+ enum gpiod_flags flags)
+
+This function returns a struct gpio_descs which contains an array of
+descriptors:
+
+ struct gpio_descs {
+ unsigned int ndescs;
+ struct gpio_desc *desc[];
+ }
+
+The following function returns NULL instead of -ENOENT if no GPIOs have been
+assigned to the requested function:
+
+ struct gpio_descs *gpiod_get_array_optional(struct device *dev,
+ const char *con_id,
+ enum gpiod_flags flags)
+
Device-managed variants of these functions are also defined:
struct gpio_desc *devm_gpiod_get(struct device *dev, const char *con_id,
@@ -82,20 +102,37 @@
const char *con_id,
enum gpiod_flags flags)
- struct gpio_desc * devm_gpiod_get_index_optional(struct device *dev,
+ struct gpio_desc *devm_gpiod_get_index_optional(struct device *dev,
const char *con_id,
unsigned int index,
enum gpiod_flags flags)
+ struct gpio_descs *devm_gpiod_get_array(struct device *dev,
+ const char *con_id,
+ enum gpiod_flags flags)
+
+ struct gpio_descs *devm_gpiod_get_array_optional(struct device *dev,
+ const char *con_id,
+ enum gpiod_flags flags)
+
A GPIO descriptor can be disposed of using the gpiod_put() function:
void gpiod_put(struct gpio_desc *desc)
-It is strictly forbidden to use a descriptor after calling this function. The
-device-managed variant is, unsurprisingly:
+For an array of GPIOs this function can be used:
+
+ void gpiod_put_array(struct gpio_descs *descs)
+
+It is strictly forbidden to use a descriptor after calling these functions.
+It is also not allowed to individually release descriptors (using gpiod_put())
+from an array acquired with gpiod_get_array().
+
+The device-managed variants are, unsurprisingly:
void devm_gpiod_put(struct device *dev, struct gpio_desc *desc)
+ void devm_gpiod_put_array(struct device *dev, struct gpio_descs *descs)
+
Using GPIOs
===========
@@ -222,6 +259,26 @@
corresponding chip driver. In that case a significantly improved performance
can be expected. If simultaneous setting is not possible the GPIOs will be set
sequentially.
+
+The gpiod_set_array() functions take three arguments:
+ * array_size - the number of array elements
+ * desc_array - an array of GPIO descriptors
+ * value_array - an array of values to assign to the GPIOs
+
+The descriptor array can be obtained using the gpiod_get_array() function
+or one of its variants. If the group of descriptors returned by that function
+matches the desired group of GPIOs, those GPIOs can be set by simply using
+the struct gpio_descs returned by gpiod_get_array():
+
+ struct gpio_descs *my_gpio_descs = gpiod_get_array(...);
+ gpiod_set_array(my_gpio_descs->ndescs, my_gpio_descs->desc,
+ my_gpio_values);
+
+It is also possible to set a completely arbitrary array of descriptors. The
+descriptors may be obtained using any combination of gpiod_get() and
+gpiod_get_array(). Afterwards the array of descriptors has to be setup
+manually before it can be used with gpiod_set_array().
+
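+A hedged sketch of such a manually assembled array (the "led" and "enable"
+functions, the device pointer and the values are hypothetical, and error
+handling is omitted):
+
+	struct gpio_desc *desc[3];
+	int values[3] = { 1, 0, 1 };
+	struct gpio_descs *leds = gpiod_get_array(dev, "led", GPIOD_OUT_LOW);
+
+	/* combine two descriptors from the array with one obtained separately */
+	desc[0] = leds->desc[0];
+	desc[1] = leds->desc[1];
+	desc[2] = gpiod_get(dev, "enable", GPIOD_OUT_LOW);
+
+	gpiod_set_array(3, desc, values);
+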
Note that for optimal performance GPIOs belonging to the same chip should be
contiguous within the array of descriptors.
diff --git a/Documentation/i2o/README b/Documentation/i2o/README
deleted file mode 100644
index ee91e26..0000000
--- a/Documentation/i2o/README
+++ /dev/null
@@ -1,63 +0,0 @@
-
- Linux I2O Support (c) Copyright 1999 Red Hat Software
- and others.
-
- This program is free software; you can redistribute it and/or
- modify it under the terms of the GNU General Public License
- as published by the Free Software Foundation; either version
- 2 of the License, or (at your option) any later version.
-
-AUTHORS (so far)
-
-Alan Cox, Building Number Three Ltd.
- Core code, SCSI and Block OSMs
-
-Steve Ralston, LSI Logic Corp.
- Debugging SCSI and Block OSM
-
-Deepak Saxena, Intel Corp.
- Various core/block extensions
- /proc interface, bug fixes
- Ioctl interfaces for control
- Debugging LAN OSM
-
-Philip Rumpf
- Fixed assorted dumb SMP locking bugs
-
-Juha Sievanen, University of Helsinki Finland
- LAN OSM code
- /proc interface to LAN class
- Bug fixes
- Core code extensions
-
-Auvo Häkkinen, University of Helsinki Finland
- LAN OSM code
- /Proc interface to LAN class
- Bug fixes
- Core code extensions
-
-Taneli Vähäkangas, University of Helsinki Finland
- Fixes to i2o_config
-
-CREDITS
-
- This work was made possible by
-
-Red Hat Software
- Funding for the Building #3 part of the project
-
-Symbios Logic (Now LSI)
- Host adapters, hints, known to work platforms when I hit
- compatibility problems
-
-BoxHill Corporation
- Loan of initial FibreChannel disk array used for development work.
-
-European Commission
- Funding the work done by the University of Helsinki
-
-SysKonnect
- Loan of FDDI and Gigabit Ethernet cards
-
-ASUSTeK
- Loan of I2O motherboard
diff --git a/Documentation/i2o/ioctl b/Documentation/i2o/ioctl
deleted file mode 100644
index 27c3c54..0000000
--- a/Documentation/i2o/ioctl
+++ /dev/null
@@ -1,394 +0,0 @@
-
-Linux I2O User Space Interface
-rev 0.3 - 04/20/99
-
-=============================================================================
-Originally written by Deepak Saxena(deepak@plexity.net)
-Currently maintained by Deepak Saxena(deepak@plexity.net)
-=============================================================================
-
-I. Introduction
-
-The Linux I2O subsystem provides a set of ioctl() commands that can be
-utilized by user space applications to communicate with IOPs and devices
-on individual IOPs. This document defines the specific ioctl() commands
-that are available to the user and provides examples of their uses.
-
-This document assumes the reader is familiar with or has access to the
-I2O specification as no I2O message parameters are outlined. For information
-on the specification, see http://www.i2osig.org
-
-This document and the I2O user space interface are currently maintained
-by Deepak Saxena. Please send all comments, errata, and bug fixes to
-deepak@csociety.purdue.edu
-
-II. IOP Access
-
-Access to the I2O subsystem is provided through the device file named
-/dev/i2o/ctl. This file is a character file with major number 10 and minor
-number 166. It can be created through the following command:
-
- mknod /dev/i2o/ctl c 10 166
-
-III. Determining the IOP Count
-
- SYNOPSIS
-
- ioctl(fd, I2OGETIOPS, int *count);
-
- u8 count[MAX_I2O_CONTROLLERS];
-
- DESCRIPTION
-
- This function returns the system's active IOP table. count should
- point to a buffer containing MAX_I2O_CONTROLLERS entries. Upon
- returning, each entry will contain a non-zero value if the given
- IOP unit is active, and NULL if it is inactive or non-existent.
-
- RETURN VALUE.
-
- Returns 0 if no errors occur, and -1 otherwise. If an error occurs,
- errno is set appropriately:
-
- EFAULT Invalid user space pointer was passed
-
-IV. Getting Hardware Resource Table
-
- SYNOPSIS
-
- ioctl(fd, I2OHRTGET, struct i2o_cmd_hrt *hrt);
-
- struct i2o_cmd_hrtlct
- {
- u32 iop; /* IOP unit number */
- void *resbuf; /* Buffer for result */
- u32 *reslen; /* Buffer length in bytes */
- };
-
- DESCRIPTION
-
- This function returns the Hardware Resource Table of the IOP specified
- by hrt->iop in the buffer pointed to by hrt->resbuf. The actual size of
- the data is written into *(hrt->reslen).
-
- RETURNS
-
- This function returns 0 if no errors occur. If an error occurs, -1
- is returned and errno is set appropriately:
-
- EFAULT Invalid user space pointer was passed
- ENXIO Invalid IOP number
- ENOBUFS Buffer not large enough. If this occurs, the required
- buffer length is written into *(hrt->reslen)
-
-V. Getting Logical Configuration Table
-
- SYNOPSIS
-
- ioctl(fd, I2OLCTGET, struct i2o_cmd_lct *lct);
-
- struct i2o_cmd_hrtlct
- {
- u32 iop; /* IOP unit number */
- void *resbuf; /* Buffer for result */
- u32 *reslen; /* Buffer length in bytes */
- };
-
- DESCRIPTION
-
- This function returns the Logical Configuration Table of the IOP specified
- by lct->iop in the buffer pointed to by lct->resbuf. The actual size of
- the data is written into *(lct->reslen).
-
- RETURNS
-
- This function returns 0 if no errors occur. If an error occurs, -1
- is returned and errno is set appropriately:
-
- EFAULT Invalid user space pointer was passed
- ENXIO Invalid IOP number
- ENOBUFS Buffer not large enough. If this occurs, the required
- buffer length is written into *(lct->reslen)
-
-VI. Setting Parameters
-
- SYNOPSIS
-
- ioctl(fd, I2OPARMSET, struct i2o_parm_setget *ops);
-
- struct i2o_cmd_psetget
- {
- u32 iop; /* IOP unit number */
- u32 tid; /* Target device TID */
- void *opbuf; /* Operation List buffer */
- u32 oplen; /* Operation List buffer length in bytes */
- void *resbuf; /* Result List buffer */
- u32 *reslen; /* Result List buffer length in bytes */
- };
-
- DESCRIPTION
-
- This function posts a UtilParamsSet message to the device identified
- by ops->iop and ops->tid. The operation list for the message is
- sent through the ops->opbuf buffer, and the result list is written
- into the buffer pointed to by ops->resbuf. The number of bytes
- written is placed into *(ops->reslen).
-
- RETURNS
-
- The return value is the size in bytes of the data written into
- ops->resbuf if no errors occur. If an error occurs, -1 is returned
- and errno is set appropriately:
-
- EFAULT Invalid user space pointer was passed
- ENXIO Invalid IOP number
- ENOBUFS Buffer not large enough. If this occurs, the required
- buffer length is written into *(ops->reslen)
- ETIMEDOUT Timeout waiting for reply message
- ENOMEM Kernel memory allocation error
-
- A return value of 0 does not mean that the value was actually
- changed properly on the IOP. The user should check the result
- list to determine the specific status of the transaction.
-
-VII. Getting Parameters
-
- SYNOPSIS
-
- ioctl(fd, I2OPARMGET, struct i2o_parm_setget *ops);
-
- struct i2o_parm_setget
- {
- u32 iop; /* IOP unit number */
- u32 tid; /* Target device TID */
- void *opbuf; /* Operation List buffer */
- u32 oplen; /* Operation List buffer length in bytes */
- void *resbuf; /* Result List buffer */
- u32 *reslen; /* Result List buffer length in bytes */
- };
-
- DESCRIPTION
-
- This function posts a UtilParamsGet message to the device identified
- by ops->iop and ops->tid. The operation list for the message is
- sent through the ops->opbuf buffer, and the result list is written
- into the buffer pointed to by ops->resbuf. The actual size of data
- written is placed into *(ops->reslen).
-
- RETURNS
-
- EFAULT Invalid user space pointer was passed
- ENXIO Invalid IOP number
- ENOBUFS Buffer not large enough. If this occurs, the required
- buffer length is written into *(ops->reslen)
- ETIMEDOUT Timeout waiting for reply message
- ENOMEM Kernel memory allocation error
-
- A return value of 0 does not mean that the value was actually
- properly retrieved. The user should check the result list
- to determine the specific status of the transaction.
-
-VIII. Downloading Software
-
- SYNOPSIS
-
- ioctl(fd, I2OSWDL, struct i2o_sw_xfer *sw);
-
- struct i2o_sw_xfer
- {
- u32 iop; /* IOP unit number */
- u8 flags; /* DownloadFlags field */
- u8 sw_type; /* Software type */
- u32 sw_id; /* Software ID */
- void *buf; /* Pointer to software buffer */
- u32 *swlen; /* Length of software buffer */
- u32 *maxfrag; /* Number of fragments */
- u32 *curfrag; /* Current fragment number */
- };
-
- DESCRIPTION
-
- This function downloads a software fragment pointed by sw->buf
- to the iop identified by sw->iop. The DownloadFlags, SwID, SwType
- and SwSize fields of the ExecSwDownload message are filled in with
- the values of sw->flags, sw->sw_id, sw->sw_type and *(sw->swlen).
-
- The fragments _must_ be sent in order and be 8K in size. The last
- fragment _may_ be shorter, however. The kernel will compute its
- size based on information in the sw->swlen field.
-
- Please note that SW transfers can take a long time.
-
- RETURNS
-
- This function returns 0 no errors occur. If an error occurs, -1
- is returned and errno is set appropriately:
-
- EFAULT Invalid user space pointer was passed
- ENXIO Invalid IOP number
- ETIMEDOUT Timeout waiting for reply message
- ENOMEM Kernel memory allocation error
-
-IX. Uploading Software
-
- SYNOPSIS
-
- ioctl(fd, I2OSWUL, struct i2o_sw_xfer *sw);
-
- struct i2o_sw_xfer
- {
- u32 iop; /* IOP unit number */
- u8 flags; /* UploadFlags */
- u8 sw_type; /* Software type */
- u32 sw_id; /* Software ID */
- void *buf; /* Pointer to software buffer */
- u32 *swlen; /* Length of software buffer */
- u32 *maxfrag; /* Number of fragments */
- u32 *curfrag; /* Current fragment number */
- };
-
- DESCRIPTION
-
- This function uploads a software fragment from the IOP identified
- by sw->iop, sw->sw_type, sw->sw_id and optionally sw->swlen fields.
- The UploadFlags, SwID, SwType and SwSize fields of the ExecSwUpload
- message are filled in with the values of sw->flags, sw->sw_id,
- sw->sw_type and *(sw->swlen).
-
- The fragments _must_ be requested in order and be 8K in size. The
- user is responsible for allocating memory pointed by sw->buf. The
- last fragment _may_ be shorter.
-
- Please note that SW transfers can take a long time.
-
- RETURNS
-
- This function returns 0 if no errors occur. If an error occurs, -1
- is returned and errno is set appropriately:
-
- EFAULT Invalid user space pointer was passed
- ENXIO Invalid IOP number
- ETIMEDOUT Timeout waiting for reply message
- ENOMEM Kernel memory allocation error
-
-X. Removing Software
-
- SYNOPSIS
-
- ioctl(fd, I2OSWDEL, struct i2o_sw_xfer *sw);
-
- struct i2o_sw_xfer
- {
- u32 iop; /* IOP unit number */
- u8 flags; /* RemoveFlags */
- u8 sw_type; /* Software type */
- u32 sw_id; /* Software ID */
- void *buf; /* Unused */
- u32 *swlen; /* Length of the software data */
- u32 *maxfrag; /* Unused */
- u32 *curfrag; /* Unused */
- };
-
- DESCRIPTION
-
- This function removes software from the IOP identified by sw->iop.
- The RemoveFlags, SwID, SwType and SwSize fields of the ExecSwRemove message
- are filled in with the values of sw->flags, sw->sw_id, sw->sw_type and
- *(sw->swlen). Give zero in *(sw->len) if the value is unknown. IOP uses
- *(sw->swlen) value to verify correct identication of the module to remove.
- The actual size of the module is written into *(sw->swlen).
-
- RETURNS
-
- This function returns 0 if no errors occur. If an error occurs, -1
- is returned and errno is set appropriately:
-
- EFAULT Invalid user space pointer was passed
- ENXIO Invalid IOP number
- ETIMEDOUT Timeout waiting for reply message
- ENOMEM Kernel memory allocation error
-
-X. Validating Configuration
-
- SYNOPSIS
-
- ioctl(fd, I2OVALIDATE, int *iop);
- u32 iop;
-
- DESCRIPTION
-
- This function posts an ExecConfigValidate message to the controller
- identified by iop. This message indicates that the current
- configuration is accepted. The iop changes the status of suspect drivers
- to valid and may delete old drivers from its store.
-
- RETURNS
-
- This function returns 0 if no erro occur. If an error occurs, -1 is
- returned and errno is set appropriately:
-
- ETIMEDOUT Timeout waiting for reply message
- ENXIO Invalid IOP number
-
-XI. Configuration Dialog
-
- SYNOPSIS
-
- ioctl(fd, I2OHTML, struct i2o_html *htquery);
- struct i2o_html
- {
- u32 iop; /* IOP unit number */
- u32 tid; /* Target device ID */
- u32 page; /* HTML page */
- void *resbuf; /* Buffer for reply HTML page */
- u32 *reslen; /* Length in bytes of reply buffer */
- void *qbuf; /* Pointer to HTTP query string */
- u32 qlen; /* Length in bytes of query string buffer */
- };
-
- DESCRIPTION
-
- This function posts an UtilConfigDialog message to the device identified
- by htquery->iop and htquery->tid. The requested HTML page number is
- provided by the htquery->page field, and the resultant data is stored
- in the buffer pointed to by htquery->resbuf. If there is an HTTP query
- string that is to be sent to the device, it should be sent in the buffer
- pointed to by htquery->qbuf. If there is no query string, this field
- should be set to NULL. The actual size of the reply received is written
- into *(htquery->reslen).
-
- RETURNS
-
- This function returns 0 if no error occur. If an error occurs, -1
- is returned and errno is set appropriately:
-
- EFAULT Invalid user space pointer was passed
- ENXIO Invalid IOP number
- ENOBUFS Buffer not large enough. If this occurs, the required
- buffer length is written into *(ops->reslen)
- ETIMEDOUT Timeout waiting for reply message
- ENOMEM Kernel memory allocation error
-
-XII. Events
-
- In the process of determining this. Current idea is to have use
- the select() interface to allow user apps to periodically poll
- the /dev/i2o/ctl device for events. When select() notifies the user
- that an event is available, the user would call read() to retrieve
- a list of all the events that are pending for the specific device.
-
-=============================================================================
-Revision History
-=============================================================================
-
-Rev 0.1 - 04/01/99
-- Initial revision
-
-Rev 0.2 - 04/06/99
-- Changed return values to match UNIX ioctl() standard. Only return values
- are 0 and -1. All errors are reported through errno.
-- Added summary of proposed possible event interfaces
-
-Rev 0.3 - 04/20/99
-- Changed all ioctls() to use pointers to user data instead of actual data
-- Updated error values to match the code
diff --git a/Documentation/input/alps.txt b/Documentation/input/alps.txt
index 92ae734..c86f2f1 100644
--- a/Documentation/input/alps.txt
+++ b/Documentation/input/alps.txt
@@ -58,7 +58,7 @@
While in command mode, register addresses can be set by first sending a
specific command, either EC for v3 devices or F5 for v4 devices. Then the
address is sent one nibble at a time, where each nibble is encoded as a
-command with optional data. This enoding differs slightly between the v3 and
+command with optional data. This encoding differs slightly between the v3 and
v4 protocols.
Once an address has been set, the addressed register can be read by sending
@@ -94,6 +94,10 @@
Note that the device never signals overflow condition.
+For protocol version 2 devices when the trackpoint is used, and no fingers
+are on the touchpad, the M R L bits signal the combined status of both the
+pointingstick and touchpad buttons.
+
ALPS Absolute Mode - Protocol Version 1
--------------------------------------
@@ -107,7 +111,7 @@
ALPS Absolute Mode - Protocol Version 2
---------------------------------------
- byte 0: 1 ? ? ? 1 ? ? ?
+ byte 0: 1 ? ? ? 1 PSM PSR PSL
byte 1: 0 x6 x5 x4 x3 x2 x1 x0
byte 2: 0 x10 x9 x8 x7 ? fin ges
byte 3: 0 y9 y8 y7 1 M R L
@@ -115,7 +119,8 @@
byte 5: 0 z6 z5 z4 z3 z2 z1 z0
Protocol Version 2 DualPoint devices send standard PS/2 mouse packets for
-the DualPoint Stick.
+the DualPoint Stick. For non-interleaved DualPoint devices the pointingstick
+buttons get reported separately in the PSM, PSR and PSL bits.
Dualpoint device -- interleaved packet format
---------------------------------------------
@@ -139,7 +144,7 @@
---------------------------------------
ALPS protocol version 3 has three different packet formats. The first two are
-associated with touchpad events, and the third is associatd with trackstick
+associated with touchpad events, and the third is associated with trackstick
events.
The first type is the touchpad position packet.
diff --git a/Documentation/input/event-codes.txt b/Documentation/input/event-codes.txt
index 9670561..3f0f5ce 100644
--- a/Documentation/input/event-codes.txt
+++ b/Documentation/input/event-codes.txt
@@ -229,7 +229,7 @@
EV_PWR:
----------
EV_PWR events are a special type of event used specifically for power
-mangement. Its usage is not well defined. To be addressed later.
+management. Its usage is not well defined. To be addressed later.
Device properties:
=================
diff --git a/Documentation/input/gpio-tilt.txt b/Documentation/input/gpio-tilt.txt
index 06d60c3..2cdfd9b 100644
--- a/Documentation/input/gpio-tilt.txt
+++ b/Documentation/input/gpio-tilt.txt
@@ -28,7 +28,7 @@
--------
Example configuration for a single TS1003 tilt switch that rotates around
-one axis in 4 steps and emitts the current tilt via two GPIOs.
+one axis in 4 steps and emits the current tilt via two GPIOs.
static int sg060_tilt_enable(struct device *dev) {
/* code to enable the sensors */
diff --git a/Documentation/input/iforce-protocol.txt b/Documentation/input/iforce-protocol.txt
index 2d5fbfd..6628715 100644
--- a/Documentation/input/iforce-protocol.txt
+++ b/Documentation/input/iforce-protocol.txt
@@ -97,7 +97,7 @@
*** Attack and fade ***
OP= 02
LEN= 08
-00-01 Address where to store the parameteres
+00-01 Address where to store the parameters
02-03 Duration of attack (little endian encoding, in ms)
04 Level at end of attack. Signed byte.
05-06 Duration of fade.
diff --git a/Documentation/input/walkera0701.txt b/Documentation/input/walkera0701.txt
index 561385d..49e3ac6 100644
--- a/Documentation/input/walkera0701.txt
+++ b/Documentation/input/walkera0701.txt
@@ -91,7 +91,7 @@
first ten nibbles.
Next nibbles 12 .. 21 represents four channels (not all channels can be
-directly controlled from TX). Binary representations ar the same as in first
+directly controlled from TX). Binary representations are the same as in first
four channels. In nibbles 22 and 23 is a special magic number. Nibble 24 is
checksum for nibbles 12..23.
diff --git a/Documentation/input/yealink.txt b/Documentation/input/yealink.txt
index 5360e43..8277b76 100644
--- a/Documentation/input/yealink.txt
+++ b/Documentation/input/yealink.txt
@@ -93,7 +93,7 @@
Format specifier
'8' : Generic 7 segment digit with individual addressable segments
- Reduced capability 7 segm digit, when segments are hard wired together.
+ Reduced capability 7 segment digit, when segments are hard wired together.
'1' : 2 segments digit only able to produce a 1.
'e' : Most significant day of the month digit,
able to produce at least 1 2 3.
diff --git a/Documentation/kernel-parameters.txt b/Documentation/kernel-parameters.txt
index 11a76df..84960c6 100644
--- a/Documentation/kernel-parameters.txt
+++ b/Documentation/kernel-parameters.txt
@@ -713,10 +713,18 @@
uart[8250],io,<addr>[,options]
uart[8250],mmio,<addr>[,options]
+ uart[8250],mmio32,<addr>[,options]
+ uart[8250],0x<addr>[,options]
Start an early, polled-mode console on the 8250/16550
UART at the specified I/O port or MMIO address,
- switching to the matching ttyS device later. The
- options are the same as for ttyS, above.
+ switching to the matching ttyS device later.
+ MMIO inter-register address stride is either 8-bit
+ (mmio) or 32-bit (mmio32).
+ If none of [io|mmio|mmio32], <addr> is assumed to be
+ equivalent to 'mmio'. 'options' are specified in the
+ same format described for ttyS above; if unspecified,
+ the h/w is not re-initialized.
+
hvc<n> Use the hypervisor console device <n>. This is for
both Xen and PowerPC hypervisors.
@@ -928,6 +936,12 @@
Enable debug messages at boot time. See
Documentation/dynamic-debug-howto.txt for details.
+ eagerfpu= [X86]
+ on enable eager fpu restore
+ off disable eager fpu restore
+ auto selects the default scheme, which automatically
+ enables eagerfpu restore for xsaveopt.
+
early_ioremap_debug [KNL]
Enable debug messages in early_ioremap support. This
is useful for tracking down temporary early mappings
@@ -944,11 +958,15 @@
uart[8250],io,<addr>[,options]
uart[8250],mmio,<addr>[,options]
uart[8250],mmio32,<addr>[,options]
+ uart[8250],0x<addr>[,options]
Start an early, polled-mode console on the 8250/16550
UART at the specified I/O port or MMIO address.
MMIO inter-register address stride is either 8-bit
(mmio) or 32-bit (mmio32).
- The options are the same as for ttyS, above.
+ If none of [io|mmio|mmio32], <addr> is assumed to be
+ equivalent to 'mmio'. 'options' are specified in the
+ same format described for "console=ttyS<n>"; if
+ unspecified, the h/w is not initialized.
pl011,<addr>
Start an early, polled-mode console on a pl011 serial
@@ -1966,6 +1984,12 @@
or
memmap=0x10000$0x18690000
+ memmap=nn[KMG]!ss[KMG]
+ [KNL,X86] Mark specific memory as protected.
+ Region of memory to be used, from ss to ss+nn.
+ The memory region may be marked as e820 type 12 (0xc)
+ and is NVDIMM or ADR memory.
+
memory_corruption_check=0/1 [X86]
Some BIOSes seem to corrupt the first 64k of
memory when doing things like suspend/resume.
@@ -2344,12 +2368,6 @@
parameter, xsave area per process might occupy more
memory on xsaves enabled systems.
- eagerfpu= [X86]
- on enable eager fpu restore
- off disable eager fpu restore
- auto selects the default scheme, which automatically
- enables eagerfpu restore for xsaveopt.
-
nohlt [BUGS=ARM,SH] Tells the kernel that the sleep(SH) or
wfi(ARM) instruction doesn't work correctly and not to
use it. This is also useful when using JTAG debugger.
diff --git a/Documentation/kmemcheck.txt b/Documentation/kmemcheck.txt
index a41bdeb..80aae85 100644
--- a/Documentation/kmemcheck.txt
+++ b/Documentation/kmemcheck.txt
@@ -82,8 +82,8 @@
o CONFIG_DEBUG_PAGEALLOC=n
- This option is located under "Kernel hacking" / "Debug page memory
- allocations".
+ This option is located under "Kernel hacking" / "Memory Debugging"
+ / "Debug page memory allocations".
In addition, I highly recommend turning on CONFIG_DEBUG_INFO=y. This is also
located under "Kernel hacking". With this, you will be able to get line number
diff --git a/Documentation/kprobes.txt b/Documentation/kprobes.txt
index 1488b65..1f9b3e2 100644
--- a/Documentation/kprobes.txt
+++ b/Documentation/kprobes.txt
@@ -305,8 +305,8 @@
3. Configuring Kprobes
When configuring the kernel using make menuconfig/xconfig/oldconfig,
-ensure that CONFIG_KPROBES is set to "y". Under "Instrumentation
-Support", look for "Kprobes".
+ensure that CONFIG_KPROBES is set to "y". Under "General setup", look
+for "Kprobes".
So that you can load and unload Kprobes-based instrumentation modules,
make sure "Loadable module support" (CONFIG_MODULES) and "Module
diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt
index 6974f1c..f957461 100644
--- a/Documentation/memory-barriers.txt
+++ b/Documentation/memory-barriers.txt
@@ -1727,7 +1727,7 @@
}
The dma_rmb() allows us guarantee the device has released ownership
- before we read the data from the descriptor, and he dma_wmb() allows
+ before we read the data from the descriptor, and the dma_wmb() allows
us to guarantee the data is written to the descriptor before the device
can see it now has ownership. The wmb() is needed to guarantee that the
cache coherent memory writes have completed before attempting a write to
diff --git a/Documentation/memory-hotplug.txt b/Documentation/memory-hotplug.txt
index ea03abf..ce2cfcf 100644
--- a/Documentation/memory-hotplug.txt
+++ b/Documentation/memory-hotplug.txt
@@ -149,7 +149,7 @@
(0x100000000 / 1Gib = 4)
This device covers address range [0x100000000 ... 0x140000000)
-Under each memory block, you can see 4 files:
+Under each memory block, you can see 5 files:
/sys/devices/system/memory/memoryXXX/phys_index
/sys/devices/system/memory/memoryXXX/phys_device
@@ -359,38 +359,51 @@
--------------------------------
8. Memory hotplug event notifier
--------------------------------
-Memory hotplug has event notifier. There are 6 types of notification.
+Hotplugging events are sent to a notification queue.
-MEMORY_GOING_ONLINE
+There are six types of notification defined in include/linux/memory.h:
+
+MEM_GOING_ONLINE
Generated before new memory becomes available in order to be able to
prepare subsystems to handle memory. The page allocator is still unable
to allocate from the new memory.
-MEMORY_CANCEL_ONLINE
+MEM_CANCEL_ONLINE
Generated if MEMORY_GOING_ONLINE fails.
-MEMORY_ONLINE
+MEM_ONLINE
Generated when memory has successfully brought online. The callback may
allocate pages from the new memory.
-MEMORY_GOING_OFFLINE
+MEM_GOING_OFFLINE
Generated to begin the process of offlining memory. Allocations are no
longer possible from the memory but some of the memory to be offlined
is still in use. The callback can be used to free memory known to a
subsystem from the indicated memory block.
-MEMORY_CANCEL_OFFLINE
+MEM_CANCEL_OFFLINE
Generated if MEMORY_GOING_OFFLINE fails. Memory is available again from
the memory block that we attempted to offline.
-MEMORY_OFFLINE
+MEM_OFFLINE
Generated after offlining memory is complete.
-A callback routine can be registered by
+A callback routine can be registered by calling
+
hotplug_memory_notifier(callback_func, priority)
-The second argument of callback function (action) is event types of above.
-The third argument is passed by pointer of struct memory_notify.
+Callback functions with higher values of priority are called before callback
+functions with lower values.
+
+A callback function must have the following prototype:
+
+ int callback_func(
+ struct notifier_block *self, unsigned long action, void *arg);
+
+The first argument of the callback function (self) is a pointer to the block
+of the notifier chain that points to the callback function itself.
+The second argument (action) is one of the event types described above.
+The third argument (arg) passes a pointer of struct memory_notify.
struct memory_notify {
unsigned long start_pfn;
@@ -412,6 +425,18 @@
If status_changed_nid* >= 0, callback should create/discard structures for the
node if necessary.
+The callback routine shall return one of the values
+NOTIFY_DONE, NOTIFY_OK, NOTIFY_BAD, NOTIFY_STOP
+defined in include/linux/notifier.h
+
+NOTIFY_DONE and NOTIFY_OK have no effect on the further processing.
+
+NOTIFY_BAD is used as response to the MEM_GOING_ONLINE, MEM_GOING_OFFLINE,
+MEM_ONLINE, or MEM_OFFLINE action to cancel hotplugging. It stops
+further processing of the notification queue.
+
+NOTIFY_STOP stops further processing of the notification queue.
+
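+As a minimal sketch (the callback name and priority below are illustrative
+only), a subsystem could register and handle these events like this:
+
+	static int example_mem_notify(struct notifier_block *self,
+				      unsigned long action, void *arg)
+	{
+		struct memory_notify *mn = arg;
+
+		switch (action) {
+		case MEM_GOING_ONLINE:
+			pr_debug("onlining %lu pages at pfn %lu\n",
+				 mn->nr_pages, mn->start_pfn);
+			break;
+		case MEM_OFFLINE:
+			/* release any per-block state for this range */
+			break;
+		default:
+			break;
+		}
+		return NOTIFY_OK;
+	}
+
+	hotplug_memory_notifier(example_mem_notify, 0);
+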
--------------
9. Future Work
--------------
diff --git a/Documentation/printk-formats.txt b/Documentation/printk-formats.txt
index cb6a596..2216eb1 100644
--- a/Documentation/printk-formats.txt
+++ b/Documentation/printk-formats.txt
@@ -228,7 +228,7 @@
lower ('l') or upper case ('L') hex characters - and big endian order
in lower ('b') or upper case ('B') hex characters.
- Where no additional specifiers are used the default little endian
+ Where no additional specifiers are used the default big endian
order with lower case hex characters will be printed.
Passed by reference.
@@ -273,6 +273,16 @@
Passed by reference.
+bitmap and its derivatives such as cpumask and nodemask:
+
+ %*pb 0779
+ %*pbl 0,3-6,8-10
+
+	For printing a bitmap and its derivatives such as cpumask and nodemask,
+	%*pb outputs the bitmap with the field width as the number of bits, and
+	%*pbl outputs the bitmap as a range list with the field width as the
+	number of bits.
+
+ Passed by reference.
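+
+	A minimal sketch (the bitmap name and size here are only illustrative):
+
+		DECLARE_BITMAP(mask, 64);
+
+		bitmap_zero(mask, 64);
+		bitmap_set(mask, 3, 4);			/* sets bits 3..6 */
+		pr_info("mask: %*pb\n", 64, mask);	/* hex representation */
+		pr_info("mask: %*pbl\n", 64, mask);	/* range list, here "3-6" */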
Thank you for your cooperation and attention.
diff --git a/Documentation/scheduler/completion.txt b/Documentation/scheduler/completion.txt
index f77651e..2622bc7 100644
--- a/Documentation/scheduler/completion.txt
+++ b/Documentation/scheduler/completion.txt
@@ -7,24 +7,24 @@
-------------
If you have one or more threads of execution that must wait for some process
-to have reached a point or a specific state, completions can provide a race
-free solution to this problem. Semantically they are somewhat like a
-pthread_barriers and have similar use-cases.
+to have reached a point or a specific state, completions can provide a
+race-free solution to this problem. Semantically they are somewhat like a
+pthread_barrier and have similar use-cases.
-Completions are a code synchronization mechanism that is preferable to any
+Completions are a code synchronization mechanism which is preferable to any
misuse of locks. Any time you think of using yield() or some quirky
-msleep(1); loop to allow something else to proceed, you probably want to
+msleep(1) loop to allow something else to proceed, you probably want to
look into using one of the wait_for_completion*() calls instead. The
-advantage of using completions is clear intent of the code but also more
+advantage of using completions is clear intent of the code, but also more
efficient code as both threads can continue until the result is actually
needed.
Completions are built on top of the generic event infrastructure in Linux,
-with the event reduced to a simple flag appropriately called "done" in
-struct completion, that tells the waiting threads of execution if they
+with the event reduced to a simple flag (appropriately called "done") in
+struct completion that tells the waiting threads of execution if they
can continue safely.
-As completions are scheduling related the code is found in
+As completions are scheduling related, the code is found in
kernel/sched/completion.c - for details on completion design and
implementation see completions-design.txt
@@ -32,9 +32,9 @@
Usage:
------
-There are three parts to the using completions, the initialization of the
+There are three parts to using completions, the initialization of the
struct completion, the waiting part through a call to one of the variants of
-wait_for_completion() and the signaling side through a call to complete(),
+wait_for_completion() and the signaling side through a call to complete()
or complete_all(). Further there are some helper functions for checking the
state of completions.
@@ -50,7 +50,7 @@
providing the wait queue to place tasks on for waiting and the flag for
indicating the state of affairs.
-Completions should be named to convey the intent of the waiter. A good
+Completions should be named to convey the intent of the waiter. A good
example is:
wait_for_completion(&early_console_added);
@@ -73,7 +73,7 @@
The re-initialization function, reinit_completion(), simply resets the
done element to "not available", thus again to 0, without touching the
-wait queue. Calling init_completion() on the same completions object is
+wait queue. Calling init_completion() twice on the same completion object is
most likely a bug as it re-initializes the queue to an empty queue and
enqueued tasks could get "lost" - use reinit_completion() in that case.
@@ -87,10 +87,17 @@
DECLARE_COMPLETION_ONSTACK(setup_done)
suitable for automatic/local variables on the stack and will make lockdep
-happy. Note also that one needs to making *sure* the completion passt to
+happy. Note also that one needs to make *sure* the completion passed to
work threads remains in-scope, and no references remain to on-stack data
when the initiating function returns.
+Using on-stack completions for code that calls any of the _timeout or
+_interruptible/_killable variants is not advisable as they will require
+additional synchronization to prevent the on-stack completion object in
+the timeout/signal cases from going out of scope. Consider using dynamically
+allocated completions when intending to use the _interruptible/_killable
+or _timeout variants of wait_for_completion().
+
Waiting for completions:
------------------------
@@ -99,34 +106,38 @@
calls wait_for_completion() on the initialized completion structure.
A typical usage scenario is:
- structure completion setup_done;
+ struct completion setup_done;
init_completion(&setup_done);
- initialze_work(...,&setup_done,...)
+ initialize_work(...,&setup_done,...)
/* run non-dependent code */ /* do setup */
- wait_for_completion(&seupt_done); complete(setup_done)
+	wait_for_completion(&setup_done);	complete(&setup_done)
-This is not implying any temporal order of wait_for_completion() and the
+This is not implying any temporal order on wait_for_completion() and the
call to complete() - if the call to complete() happened before the call
to wait_for_completion() then the waiting side simply will continue
-immediately as all dependencies are satisfied.
+immediately as all dependencies are satisfied; if not, it will block until
+the completion is signaled by complete().
-Note that wait_for_completion() is calling spin_lock_irq/spin_unlock_irq
+Note that wait_for_completion() is calling spin_lock_irq()/spin_unlock_irq(),
so it can only be called safely when you know that interrupts are enabled.
-Calling it from hard-irq context will result in hard to detect spurious
-enabling of interrupts.
+Calling it from hard-irq or irqs-off atomic contexts will result in
+hard-to-detect spurious enabling of interrupts.
wait_for_completion():
void wait_for_completion(struct completion *done):
-The default behavior is to wait without a timeout and mark the task as
+The default behavior is to wait without a timeout and to mark the task as
uninterruptible. wait_for_completion() and its variants are only safe
-in soft-interrupt or process context but not in hard-irq context.
+in process context (as they can sleep) but not in atomic context,
+interrupt context, with disabled irqs, or with preemption disabled - see also
+try_wait_for_completion() below for handling completion in atomic/interrupt
+context.
+
As all variants of wait_for_completion() can (obviously) block for a long
-time, you probably don't want to call this with held locks - see also
-try_wait_for_completion() below.
+time, you probably don't want to call this with held mutexes.
Variants available:
@@ -141,43 +152,44 @@
so care should be taken with assigning return-values to variables of proper
type. Checking for the specific meaning of return values also has been found
to be quite inaccurate e.g. constructs like
-if(!wait_for_completion_interruptible_timeout(...)) would execute the same
+if (!wait_for_completion_interruptible_timeout(...)) would execute the same
code path for successful completion and for the interrupted case - which is
probably not what you want.
int wait_for_completion_interruptible(struct completion *done)
-marking the task TASK_INTERRUPTIBLE. If a signal was received while waiting.
-It will return -ERESTARTSYS and 0 otherwise.
+This function marks the task TASK_INTERRUPTIBLE. If a signal was received
+while waiting it will return -ERESTARTSYS; 0 otherwise.
unsigned long wait_for_completion_timeout(struct completion *done,
unsigned long timeout)
-The task is marked as TASK_UNINTERRUPTIBLE and will wait at most timeout
-(in jiffies). If timeout occurs it return 0 else the remaining time in
-jiffies (but at least 1). Timeouts are preferably passed by msecs_to_jiffies()
-or usecs_to_jiffies(). If the returned timeout value is deliberately ignored
-a comment should probably explain why (e.g. see drivers/mfd/wm8350-core.c
-wm8350_read_auxadc())
+The task is marked as TASK_UNINTERRUPTIBLE and will wait at most 'timeout'
+(in jiffies). If timeout occurs it returns 0 else the remaining time in
+jiffies (but at least 1). Timeouts are preferably calculated with
+msecs_to_jiffies() or usecs_to_jiffies(). If the returned timeout value is
+deliberately ignored a comment should probably explain why (e.g. see
+drivers/mfd/wm8350-core.c wm8350_read_auxadc())
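+
+A minimal sketch of checking the return value (the 100ms budget and the reuse
+of the setup_done completion from the example above are only illustrative):
+
+	if (!wait_for_completion_timeout(&setup_done, msecs_to_jiffies(100)))
+		pr_warn("setup did not complete within 100ms\n");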
long wait_for_completion_interruptible_timeout(
struct completion *done, unsigned long timeout)
-passing a timeout in jiffies and marking the task as TASK_INTERRUPTIBLE. If a
-signal was received it will return -ERESTARTSYS, 0 if completion timed-out and
-the remaining time in jiffies if completion occurred.
+This function passes a timeout in jiffies and marks the task as
+TASK_INTERRUPTIBLE. If a signal was received it will return -ERESTARTSYS;
+otherwise it returns 0 if the completion timed out or the remaining time in
+jiffies if completion occurred.
-Further variants include _killable which passes TASK_KILLABLE as the
-designated tasks state and will return a -ERESTARTSYS if interrupted or
-else 0 if completions was achieved as well as a _timeout variant.
+Further variants include _killable which uses TASK_KILLABLE as the
+designated task state and will return -ERESTARTSYS if it is interrupted or
+else 0 if completion was achieved. There is a _timeout variant as well:
long wait_for_completion_killable(struct completion *done)
long wait_for_completion_killable_timeout(struct completion *done,
unsigned long timeout)
-The _io variants wait_for_completion_io behave the same as the non-_io
+The _io variants wait_for_completion_io() behave the same as the non-_io
variants, except for accounting waiting time as waiting on IO, which has
-an impact on how scheduling is calculated.
+an impact on how the task is accounted in scheduling stats.
void wait_for_completion_io(struct completion *done)
unsigned long wait_for_completion_io_timeout(struct completion *done
@@ -187,13 +199,13 @@
Signaling completions:
----------------------
-A thread of execution that wants to signal that the conditions for
-continuation have been achieved calls complete() to signal exactly one
-of the waiters that it can continue.
+A thread that wants to signal that the conditions for continuation have been
+achieved calls complete() to signal exactly one of the waiters that it can
+continue.
void complete(struct completion *done)
-or calls complete_all to signal all current and future waiters.
+or calls complete_all() to signal all current and future waiters.
void complete_all(struct completion *done)
@@ -205,32 +217,32 @@
If complete() is called multiple times then this will allow for that number
of waiters to continue - each call to complete() will simply increment the
done element. Calling complete_all() multiple times is a bug though. Both
-complete() and complete_all() can be called in hard-irq context safely.
+complete() and complete_all() can be called in hard-irq/atomic context safely.
There only can be one thread calling complete() or complete_all() on a
-particular struct completions at any time - serialized through the wait
+particular struct completion at any time - serialized through the wait
queue spinlock. Any such concurrent calls to complete() or complete_all()
probably are a design bug.
Signaling completion from hard-irq context is fine as it will appropriately
-lock with spin_lock_irqsave/spin_unlock_irqrestore.
+lock with spin_lock_irqsave/spin_unlock_irqrestore and it will never sleep.
try_wait_for_completion()/completion_done():
--------------------------------------------
-The try_wait_for_completion will not put the thread on the wait queue but
-rather returns false if it would need to enqueue (block) the thread, else it
-consumes any posted completions and returns true.
+The try_wait_for_completion() function will not put the thread on the wait
+queue but rather returns false if it would need to enqueue (block) the thread,
+else it consumes one posted completion and returns true.
- bool try_wait_for_completion(struct completion *done)
+ bool try_wait_for_completion(struct completion *done)
-Finally to check state of a completions without changing it in any way is
-provided by completion_done() returning false if there are any posted
-completion that was not yet consumed by waiters implying that there are
-waiters and true otherwise;
+Finally, to check the state of a completion without changing it in any way,
+call completion_done(), which returns false if there are no posted
+completions that were not yet consumed by waiters (implying that there are
+waiters) and true otherwise.
- bool completion_done(struct completion *done)
+ bool completion_done(struct completion *done)
Both try_wait_for_completion() and completion_done() are safe to be called in
-hard-irq context.
+hard-irq or atomic context.
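+
+A minimal sketch of consuming a completion from atomic context (foo->done is
+a struct completion embedded in a hypothetical driver structure, and
+finish_request() is a hypothetical helper):
+
+	/* e.g. from an interrupt handler */
+	if (try_wait_for_completion(&foo->done))
+		finish_request(foo);
+	/* else: nothing posted yet - do not block here, poll again later */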
diff --git a/Documentation/trace/coresight.txt b/Documentation/trace/coresight.txt
index 0236155..77d14d5 100644
--- a/Documentation/trace/coresight.txt
+++ b/Documentation/trace/coresight.txt
@@ -14,7 +14,7 @@
HW assisted tracing is becoming increasingly useful when dealing with systems
that have many SoCs and other components like GPU and DMA engines. ARM has
developed a HW assisted tracing solution by means of different components, each
-being added to a design at systhesis time to cater to specific tracing needs.
+being added to a design at synthesis time to cater to specific tracing needs.
Components are generally categorised as sources, links and sinks and are
(usually) discovered using the AMBA bus.
diff --git a/Documentation/video4linux/v4l2-controls.txt b/Documentation/video4linux/v4l2-controls.txt
index 0f84ce8..5517db6 100644
--- a/Documentation/video4linux/v4l2-controls.txt
+++ b/Documentation/video4linux/v4l2-controls.txt
@@ -344,7 +344,9 @@
}
Note that you use the 'new value' union as well in g_volatile_ctrl. In general
-controls that need to implement g_volatile_ctrl are read-only controls.
+controls that need to implement g_volatile_ctrl are read-only controls. If they
+are not, a V4L2_EVENT_CTRL_CH_VALUE will not be generated when the control
+changes.
To mark a control as volatile you have to set V4L2_CTRL_FLAG_VOLATILE:
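+
+A minimal sketch (the 'foo' state structure and foo_ctrl_ops are illustrative):
+
+	ctrl = v4l2_ctrl_new_std(&foo->ctrl_handler, &foo_ctrl_ops,
+				 V4L2_CID_BRIGHTNESS, 0, 255, 1, 128);
+	if (ctrl)
+		ctrl->flags |= V4L2_CTRL_FLAG_VOLATILE;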
diff --git a/Documentation/video4linux/v4l2-framework.txt b/Documentation/video4linux/v4l2-framework.txt
index f586e29..59e619f 100644
--- a/Documentation/video4linux/v4l2-framework.txt
+++ b/Documentation/video4linux/v4l2-framework.txt
@@ -793,8 +793,8 @@
Whenever a device node is created some attributes are also created for you.
If you look in /sys/class/video4linux you see the devices. Go into e.g.
-video0 and you will see 'name', 'debug' and 'index' attributes. The 'name'
-attribute is the 'name' field of the video_device struct. The 'debug' attribute
+video0 and you will see 'name', 'dev_debug' and 'index' attributes. The 'name'
+attribute is the 'name' field of the video_device struct. The 'dev_debug' attribute
can be used to enable core debugging. See the next section for more detailed
information on this.
@@ -821,7 +821,7 @@
video device debugging
----------------------
-The 'debug' attribute that is created for each video, vbi, radio or swradio
+The 'dev_debug' attribute that is created for each video, vbi, radio or swradio
device in /sys/class/video4linux/<devX>/ allows you to enable logging of
file operations.
diff --git a/Documentation/video4linux/vivid.txt b/Documentation/video4linux/vivid.txt
index 6cfc854..cd4b5a1 100644
--- a/Documentation/video4linux/vivid.txt
+++ b/Documentation/video4linux/vivid.txt
@@ -912,6 +912,11 @@
sequence and field counting in struct v4l2_buffer on the capture side may not
be 100% accurate.
+- field settings V4L2_FIELD_SEQ_TB/BT are not supported. While it is possible to
+ implement this, it would mean a lot of work to get this right. Since these
+ field values are rarely used the decision was made not to implement this for
+ now.
+
- on the input side the "Standard Signal Mode" for the S-Video input or the
"DV Timings Signal Mode" for the HDMI input should be configured so that a
valid signal is passed to the video input.
diff --git a/Documentation/vm/pagemap.txt b/Documentation/vm/pagemap.txt
index 6fbd55e..6bfbc17 100644
--- a/Documentation/vm/pagemap.txt
+++ b/Documentation/vm/pagemap.txt
@@ -131,7 +131,8 @@
13. SWAPCACHE page is mapped to swap space, ie. has an associated swap entry
14. SWAPBACKED page is backed by swap/RAM
-The page-types tool in this directory can be used to query the above flags.
+The page-types tool in the tools/vm directory can be used to query the
+above flags.
Using pagemap to do something useful:
diff --git a/Documentation/vm/transhuge.txt b/Documentation/vm/transhuge.txt
index 6b31cfb..8143b9e 100644
--- a/Documentation/vm/transhuge.txt
+++ b/Documentation/vm/transhuge.txt
@@ -159,6 +159,17 @@
/sys/kernel/mm/transparent_hugepage/khugepaged/full_scans
+max_ptes_none specifies how many extra small pages (that are
+not already mapped) can be allocated when collapsing a group
+of small pages into one large page.
+
+/sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_none
+
+A higher value allows programs to use additional memory.
+A lower value reduces the THP performance gain. The CPU cost of
+max_ptes_none itself is negligible, so it can be ignored.
+
== Boot parameter ==
You can change the sysfs boot time defaults of Transparent Hugepage
diff --git a/Documentation/zh_CN/arm64/booting.txt b/Documentation/zh_CN/arm64/booting.txt
index 6f6d956..7cd36af 100644
--- a/Documentation/zh_CN/arm64/booting.txt
+++ b/Documentation/zh_CN/arm64/booting.txt
@@ -15,6 +15,8 @@
交流有困难的话,也可以向中文版维护者求助。如果本翻译更新不及时或者翻
译存在问题,请联系中文版维护者。
+本文翻译提交时的 Git 检出点为: bc465aa9d045feb0e13b4a8f32cc33c1943f62d6
+
英文版维护者: Will Deacon <will.deacon@arm.com>
中文版维护者: 傅炜 Fu Wei <wefu@redhat.com>
中文版翻译者: 傅炜 Fu Wei <wefu@redhat.com>
@@ -88,22 +90,44 @@
u32 code0; /* 可执行代码 */
u32 code1; /* 可执行代码 */
- u64 text_offset; /* 映像装载偏移 */
- u64 res0 = 0; /* 保留 */
- u64 res1 = 0; /* 保留 */
+ u64 text_offset; /* 映像装载偏移,小端模式 */
+ u64 image_size; /* 映像实际大小, 小端模式 */
+	u64 flags;			/* 内核旗标, 小端模式 */
u64 res2 = 0; /* 保留 */
u64 res3 = 0; /* 保留 */
u64 res4 = 0; /* 保留 */
u32 magic = 0x644d5241; /* 魔数, 小端, "ARM\x64" */
- u32 res5 = 0; /* 保留 */
+ u32 res5; /* 保留 (用于 PE COFF 偏移) */
映像头注释:
+- 自 v3.17 起,除非另有说明,所有域都是小端模式。
+
- code0/code1 负责跳转到 stext.
-映像必须位于系统 RAM 起始处的特定偏移(当前是 0x80000)。系统 RAM
-的起始地址必须是以 2MB 对齐的。
+- 当通过 EFI 启动时, 最初 code0/code1 被跳过。
+ res5 是到 PE 文件头的偏移,而 PE 文件头含有 EFI 的启动入口点 (efi_stub_entry)。
+ 当 stub 代码完成了它的使命,它会跳转到 code0 继续正常的启动流程。
+
+- v3.17 之前,未明确指定 text_offset 的字节序。此时,image_size 为零,
+ 且 text_offset 依照内核字节序为 0x80000。
+ 当 image_size 非零,text_offset 为小端模式且是有效值,应被引导加载程序使用。
+ 当 image_size 为零,text_offset 可假定为 0x80000。
+
+- flags 域 (v3.17 引入) 为 64 位小端模式,其编码如下:
+ 位 0: 内核字节序。 1 表示大端模式,0 表示小端模式。
+ 位 1-63: 保留。
+
+- 当 image_size 为零时,引导装载程序应该试图在内核映像末尾之后尽可能多地保留空闲内存
+ 供内核直接使用。对内存空间的需求量因所选定的内核特性而异, 且无实际限制。
+
+内核映像必须被放置在靠近可用系统内存起始的 2MB 对齐为基址的 text_offset 字节处,并从那里被调用。
+当前,对 Linux 来说在此基址以下的内存是无法使用的,因此强烈建议将系统内存的起始作为这个基址。
+从映像起始地址算起,最少必须为内核释放出 image_size 字节的空间。
+
+任何提供给内核的内存(甚至在 2MB 对齐的基地址之前),若未从内核中标记为保留
+(如在设备树(dtb)的 memreserve 区域),都将被认为对内核是可用。
在跳转入内核前,必须符合以下状态:
@@ -124,8 +148,12 @@
- 高速缓存、MMU
MMU 必须关闭。
指令缓存开启或关闭都可以。
- 数据缓存必须关闭且无效。
- 外部高速缓存(如果存在)必须配置并禁用。
+ 已载入的内核映像的相应内存区必须被清理,以达到缓存一致性点(PoC)。
+ 当存在系统缓存或其他使能缓存的一致性主控器时,通常需使用虚拟地址维护其缓存,而非 set/way 操作。
+ 遵从通过虚拟地址操作维护构架缓存的系统缓存必须被配置,并可以被使能。
+ 而不通过虚拟地址操作维护构架缓存的系统缓存(不推荐),必须被配置且禁用。
+
+ *译者注:对于 PoC 以及缓存相关内容,请参考 ARMv8 构架参考手册 ARM DDI 0487A
- 架构计时器
CNTFRQ 必须设定为计时器的频率,且 CNTVOFF 必须设定为对所有 CPU
@@ -141,6 +169,14 @@
在进入内核映像的异常级中,所有构架中可写的系统寄存器必须通过软件
在一个更高的异常级别下初始化,以防止在 未知 状态下运行。
+ 对于拥有 GICv3 中断控制器的系统:
+ - 若当前在 EL3 :
+ ICC_SRE_EL3.Enable (位 3) 必须初始化为 0b1。
+ ICC_SRE_EL3.SRE (位 0) 必须初始化为 0b1。
+ - 若内核运行在 EL1:
+ ICC_SRE_EL2.Enable (位 3) 必须初始化为 0b1。
+ ICC_SRE_EL2.SRE (位 0) 必须初始化为 0b1。
+
以上对于 CPU 模式、高速缓存、MMU、架构计时器、一致性、系统寄存器的
必要条件描述适用于所有 CPU。所有 CPU 必须在同一异常级别跳入内核。
@@ -170,7 +206,7 @@
ARM DEN 0022A:用于 ARM 上的电源状态协调接口系统软件)中描述的
CPU_ON 调用来将 CPU 带入内核。
- *译者注:到文档翻译时,此文档已更新为 ARM DEN 0022B。
+ *译者注: ARM DEN 0022A 已更新到 ARM DEN 0022C。
设备树必须包含一个 ‘psci’ 节点,请参考以下文档:
Documentation/devicetree/bindings/arm/psci.txt
diff --git a/Documentation/zh_CN/arm64/legacy_instructions.txt b/Documentation/zh_CN/arm64/legacy_instructions.txt
new file mode 100644
index 0000000..68362a1
--- /dev/null
+++ b/Documentation/zh_CN/arm64/legacy_instructions.txt
@@ -0,0 +1,72 @@
+Chinese translated version of Documentation/arm64/legacy_instructions.txt
+
+If you have any comment or update to the content, please contact the
+original document maintainer directly. However, if you have a problem
+communicating in English you can also ask the Chinese maintainer for
+help. Contact the Chinese maintainer if this translation is outdated
+or if there is a problem with the translation.
+
+Maintainer: Punit Agrawal <punit.agrawal@arm.com>
+ Suzuki K. Poulose <suzuki.poulose@arm.com>
+Chinese maintainer: Fu Wei <wefu@redhat.com>
+---------------------------------------------------------------------
+Documentation/arm64/legacy_instructions.txt 的中文翻译
+
+如果想评论或更新本文的内容,请直接联系原文档的维护者。如果你使用英文
+交流有困难的话,也可以向中文版维护者求助。如果本翻译更新不及时或者翻
+译存在问题,请联系中文版维护者。
+
+本文翻译提交时的 Git 检出点为: bc465aa9d045feb0e13b4a8f32cc33c1943f62d6
+
+英文版维护者: Punit Agrawal <punit.agrawal@arm.com>
+ Suzuki K. Poulose <suzuki.poulose@arm.com>
+中文版维护者: 傅炜 Fu Wei <wefu@redhat.com>
+中文版翻译者: 傅炜 Fu Wei <wefu@redhat.com>
+中文版校译者: 傅炜 Fu Wei <wefu@redhat.com>
+
+以下为正文
+---------------------------------------------------------------------
+Linux 内核在 arm64 上的移植提供了一个基础框架,以支持构架中正在被淘汰或已废弃指令的模拟执行。
+这个基础框架的代码使用未定义指令钩子(hooks)来支持模拟。如果指令存在,它也允许在硬件中启用该指令。
+
+模拟模式可通过写 sysctl 节点(/proc/sys/abi)来控制。
+不同的执行方式及 sysctl 节点的相应值,解释如下:
+
+* Undef(未定义)
+ 值: 0
+ 产生未定义指令终止异常。它是那些构架中已废弃的指令,如 SWP,的默认处理方式。
+
+* Emulate(模拟)
+ 值: 1
+ 使用软件模拟方式。为解决软件迁移问题,这种模拟指令模式的使用是被跟踪的,并会发出速率限制警告。
+ 它是那些构架中正在被淘汰的指令,如 CP15 barriers(隔离指令),的默认处理方式。
+
+* Hardware Execution(硬件执行)
+ 值: 2
+ 虽然标记为正在被淘汰,但一些实现可能提供硬件执行这些指令的使能/禁用操作。
+ 使用硬件执行一般会有更好的性能,但将无法收集运行时对正被淘汰指令的使用统计数据。
+
+默认执行模式依赖于指令在构架中状态。正在被淘汰的指令应该以模拟(Emulate)作为默认模式,
+而已废弃的指令必须默认使用未定义(Undef)模式
+
+注意:指令模拟可能无法应对所有情况。更多详情请参考单独的指令注释。
+
+受支持的遗留指令
+-------------
+* SWP{B}
+节点: /proc/sys/abi/swp
+状态: 已废弃
+默认执行方式: Undef (0)
+
+* CP15 Barriers
+节点: /proc/sys/abi/cp15_barrier
+状态: 正被淘汰,不推荐使用
+默认执行方式: Emulate (1)
+
+* SETEND
+节点: /proc/sys/abi/setend
+状态: 正被淘汰,不推荐使用
+默认执行方式: Emulate (1)*
+注:为了使能这个特性,系统中的所有 CPU 必须在 EL0 支持混合字节序。
+如果一个新的 CPU (不支持混合字节序) 在使能这个特性后被热插入系统,
+在应用中可能会出现不可预期的结果。
diff --git a/Documentation/zh_CN/arm64/memory.txt b/Documentation/zh_CN/arm64/memory.txt
index a782704..19b3a52 100644
--- a/Documentation/zh_CN/arm64/memory.txt
+++ b/Documentation/zh_CN/arm64/memory.txt
@@ -15,6 +15,8 @@
交流有困难的话,也可以向中文版维护者求助。如果本翻译更新不及时或者翻
译存在问题,请联系中文版维护者。
+本文翻译提交时的 Git 检出点为: bc465aa9d045feb0e13b4a8f32cc33c1943f62d6
+
英文版维护者: Catalin Marinas <catalin.marinas@arm.com>
中文版维护者: 傅炜 Fu Wei <wefu@redhat.com>
中文版翻译者: 傅炜 Fu Wei <wefu@redhat.com>
@@ -26,69 +28,53 @@
===========================
作者: Catalin Marinas <catalin.marinas@arm.com>
-日期: 2012 年 02 月 20 日
本文档描述 AArch64 Linux 内核所使用的虚拟内存布局。此构架可以实现
页大小为 4KB 的 4 级转换表和页大小为 64KB 的 3 级转换表。
-AArch64 Linux 使用页大小为 4KB 的 3 级转换表配置,对于用户和内核
-都有 39-bit (512GB) 的虚拟地址空间。对于页大小为 64KB的配置,仅
-使用 2 级转换表,但内存布局相同。
+AArch64 Linux 使用 3 级或 4 级转换表,其页大小配置为 4KB,对于用户和内核
+分别都有 39-bit (512GB) 或 48-bit (256TB) 的虚拟地址空间。
+对于页大小为 64KB的配置,仅使用 2 级转换表,有 42-bit (4TB) 的虚拟地址空间,但内存布局相同。
-用户地址空间的 63:39 位为 0,而内核地址空间的相应位为 1。TTBRx 的
+用户地址空间的 63:48 位为 0,而内核地址空间的相应位为 1。TTBRx 的
选择由虚拟地址的 63 位给出。swapper_pg_dir 仅包含内核(全局)映射,
-而用户 pgd 仅包含用户(非全局)映射。swapper_pgd_dir 地址被写入
+而用户 pgd 仅包含用户(非全局)映射。swapper_pg_dir 地址被写入
TTBR1 中,且从不写入 TTBR0。
-AArch64 Linux 在页大小为 4KB 时的内存布局:
+AArch64 Linux 在页大小为 4KB,并使用 3 级转换表时的内存布局:
起始地址 结束地址 大小 用途
-----------------------------------------------------------------------
0000000000000000 0000007fffffffff 512GB 用户空间
-
-ffffff8000000000 ffffffbbfffeffff ~240GB vmalloc
-
-ffffffbbffff0000 ffffffbbffffffff 64KB [防护页]
-
-ffffffbc00000000 ffffffbdffffffff 8GB vmemmap
-
-ffffffbe00000000 ffffffbffbbfffff ~8GB [防护页,未来用于 vmmemap]
-
-ffffffbffbc00000 ffffffbffbdfffff 2MB earlyprintk 设备
-
-ffffffbffbe00000 ffffffbffbe0ffff 64KB PCI I/O 空间
-
-ffffffbffbe10000 ffffffbcffffffff ~2MB [防护页]
-
-ffffffbffc000000 ffffffbfffffffff 64MB 模块
-
-ffffffc000000000 ffffffffffffffff 256GB 内核逻辑内存映射
+ffffff8000000000 ffffffffffffffff 512GB 内核空间
-AArch64 Linux 在页大小为 64KB 时的内存布局:
+AArch64 Linux 在页大小为 4KB,并使用 4 级转换表时的内存布局:
+
+起始地址 结束地址 大小 用途
+-----------------------------------------------------------------------
+0000000000000000 0000ffffffffffff 256TB 用户空间
+ffff000000000000 ffffffffffffffff 256TB 内核空间
+
+
+AArch64 Linux 在页大小为 64KB,并使用 2 级转换表时的内存布局:
起始地址 结束地址 大小 用途
-----------------------------------------------------------------------
0000000000000000 000003ffffffffff 4TB 用户空间
+fffffc0000000000 ffffffffffffffff 4TB 内核空间
-fffffc0000000000 fffffdfbfffeffff ~2TB vmalloc
-fffffdfbffff0000 fffffdfbffffffff 64KB [防护页]
+AArch64 Linux 在页大小为 64KB,并使用 3 级转换表时的内存布局:
-fffffdfc00000000 fffffdfdffffffff 8GB vmemmap
+起始地址 结束地址 大小 用途
+-----------------------------------------------------------------------
+0000000000000000 0000ffffffffffff 256TB 用户空间
+ffff000000000000 ffffffffffffffff 256TB 内核空间
-fffffdfe00000000 fffffdfffbbfffff ~8GB [防护页,未来用于 vmmemap]
-fffffdfffbc00000 fffffdfffbdfffff 2MB earlyprintk 设备
-
-fffffdfffbe00000 fffffdfffbe0ffff 64KB PCI I/O 空间
-
-fffffdfffbe10000 fffffdfffbffffff ~2MB [防护页]
-
-fffffdfffc000000 fffffdffffffffff 64MB 模块
-
-fffffe0000000000 ffffffffffffffff 2TB 内核逻辑内存映射
+更详细的内核虚拟内存布局,请参阅内核启动信息。
4KB 页大小的转换表查找:
@@ -102,7 +88,7 @@
| | | | +-> [20:12] L3 索引
| | | +-----------> [29:21] L2 索引
| | +---------------------> [38:30] L1 索引
- | +-------------------------------> [47:39] L0 索引 (未使用)
+ | +-------------------------------> [47:39] L0 索引
+-------------------------------------------------> [63] TTBR0/1
@@ -115,10 +101,11 @@
| | | | v
| | | | [15:0] 页内偏移
| | | +----------> [28:16] L3 索引
- | | +--------------------------> [41:29] L2 索引 (仅使用 38:29 )
- | +-------------------------------> [47:42] L1 索引 (未使用)
+ | | +--------------------------> [41:29] L2 索引
+ | +-------------------------------> [47:42] L1 索引
+-------------------------------------------------> [63] TTBR0/1
+
当使用 KVM 时, 管理程序(hypervisor)在 EL2 中通过相对内核虚拟地址的
一个固定偏移来映射内核页(内核虚拟地址的高 24 位设为零):
diff --git a/MAINTAINERS b/MAINTAINERS
index f7bbaec..b4b131a 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -569,6 +569,12 @@
S: Maintained
F: drivers/mailbox/mailbox-altera.c
+ALTERA PIO DRIVER
+M: Tien Hock Loh <thloh@altera.com>
+L: linux-gpio@vger.kernel.org
+S: Maintained
+F: drivers/gpio/gpio-altera.c
+
ALTERA TRIPLE SPEED ETHERNET DRIVER
M: Vince Bridgers <vbridger@opensource.altera.com>
L: netdev@vger.kernel.org
@@ -952,7 +958,7 @@
M: Mathieu Poirier <mathieu.poirier@linaro.org>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
-F: drivers/coresight/*
+F: drivers/hwtracing/coresight/*
F: Documentation/trace/coresight.txt
F: Documentation/devicetree/bindings/arm/coresight.txt
F: Documentation/ABI/testing/sysfs-bus-coresight-devices-*
@@ -1822,7 +1828,7 @@
F: drivers/spi/spi-atmel.*
ATMEL SSC DRIVER
-M: Bo Shen <voice.shen@atmel.com>
+M: Nicolas Ferre <nicolas.ferre@atmel.com>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Supported
F: drivers/misc/atmel-ssc.c
@@ -2571,6 +2577,7 @@
CLK API
M: Russell King <linux@arm.linux.org.uk>
+L: linux-clk@vger.kernel.org
S: Maintained
F: include/linux/clk.h
@@ -2631,7 +2638,7 @@
COMMON CLK FRAMEWORK
M: Mike Turquette <mturquette@linaro.org>
M: Stephen Boyd <sboyd@codeaurora.org>
-L: linux-kernel@vger.kernel.org
+L: linux-clk@vger.kernel.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/clk/linux.git
S: Maintained
F: drivers/clk/
@@ -3285,7 +3292,9 @@
F: Documentation/
X: Documentation/ABI/
X: Documentation/devicetree/
-X: Documentation/[a-z][a-z]_[A-Z][A-Z]/
+X: Documentation/acpi
+X: Documentation/power
+X: Documentation/spi
T: git git://git.lwn.net/linux-2.6.git docs-next
DOUBLETALK DRIVER
@@ -3409,7 +3418,6 @@
S: Supported
F: drivers/gpu/drm/rcar-du/
F: drivers/gpu/drm/shmobile/
-F: include/linux/platform_data/rcar-du.h
F: include/linux/platform_data/shmob_drm.h
DSBR100 USB FM RADIO DRIVER
@@ -4466,7 +4474,7 @@
F: block/partitions/efi.*
STK1160 USB VIDEO CAPTURE DRIVER
-M: Ezequiel Garcia <elezegarcia@gmail.com>
+M: Ezequiel Garcia <ezequiel@vanguardiasur.com.ar>
L: linux-media@vger.kernel.org
T: git git://linuxtv.org/media_tree.git
S: Maintained
@@ -6158,16 +6166,6 @@
S: Maintained
F: drivers/media/dvb-frontends/m88rs2000*
-M88TS2022 MEDIA DRIVER
-M: Antti Palosaari <crope@iki.fi>
-L: linux-media@vger.kernel.org
-W: http://linuxtv.org/
-W: http://palosaari.fi/linux/
-Q: http://patchwork.linuxtv.org/project/linux-media/list/
-T: git git://linuxtv.org/anttip/media_tree.git
-S: Maintained
-F: drivers/media/tuners/m88ts2022*
-
MA901 MASTERKIT USB FM RADIO DRIVER
M: Alexey Klimov <klimov.linux@gmail.com>
L: linux-media@vger.kernel.org
@@ -6591,6 +6589,7 @@
L: linux-media@vger.kernel.org
T: git git://linuxtv.org/media_tree.git
S: Maintained
+F: Documentation/devicetree/bindings/media/i2c/mt9v032.txt
F: drivers/media/i2c/mt9v032.c
F: include/media/mt9v032.h
@@ -7237,6 +7236,15 @@
F: arch/*/boot/dts/
F: include/dt-bindings/
+OPEN FIRMWARE AND DEVICE TREE OVERLAYS
+M: Pantelis Antoniou <pantelis.antoniou@konsulko.com>
+L: devicetree@vger.kernel.org
+S: Maintained
+F: Documentation/devicetree/dynamic-resolution-notes.txt
+F: Documentation/devicetree/overlay-notes.txt
+F: drivers/of/overlay.c
+F: drivers/of/resolver.c
+
OPENRISC ARCHITECTURE
M: Jonas Bonn <jonas@southpole.se>
W: http://openrisc.net
@@ -8113,6 +8121,12 @@
F: Documentation/blockdev/ramdisk.txt
F: drivers/block/brd.c
+PERSISTENT MEMORY DRIVER
+M: Ross Zwisler <ross.zwisler@linux.intel.com>
+L: linux-nvdimm@lists.01.org
+S: Supported
+F: drivers/block/pmem.c
+
RANDOM NUMBER DRIVER
M: "Theodore Ts'o" <tytso@mit.edu>
S: Maintained
@@ -8969,6 +8983,16 @@
S: Maintained
F: drivers/media/platform/am437x/
+OV2659 OMNIVISION SENSOR DRIVER
+M: Lad, Prabhakar <prabhakar.csengg@gmail.com>
+L: linux-media@vger.kernel.org
+W: http://linuxtv.org/
+Q: http://patchwork.linuxtv.org/project/linux-media/list/
+T: git git://linuxtv.org/mhadli/v4l-dvb-davinci_devices.git
+S: Maintained
+F: drivers/media/i2c/ov2659.c
+F: include/media/ov2659.h
+
SIS 190 ETHERNET DRIVER
M: Francois Romieu <romieu@fr.zoreil.com>
L: netdev@vger.kernel.org
@@ -10573,6 +10597,14 @@
S: Maintained
F: drivers/misc/vmw_balloon.c
+VMWARE VMMOUSE SUBDRIVER
+M: "VMware Graphics" <linux-graphics-maintainer@vmware.com>
+M: "VMware, Inc." <pv-drivers@vmware.com>
+L: linux-input@vger.kernel.org
+S: Maintained
+F: drivers/input/mouse/vmmouse.c
+F: drivers/input/mouse/vmmouse.h
+
VMWARE VMXNET3 ETHERNET DRIVER
M: Shreyas Bhatewara <sbhatewara@vmware.com>
M: "VMware, Inc." <pv-drivers@vmware.com>
@@ -10898,6 +10930,16 @@
S: Maintained
F: drivers/tty/serial/uartlite.c
+XILINX VIDEO IP CORES
+M: Hyun Kwon <hyun.kwon@xilinx.com>
+M: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
+L: linux-media@vger.kernel.org
+T: git git://linuxtv.org/media_tree.git
+S: Supported
+F: Documentation/devicetree/bindings/media/xilinx/
+F: drivers/media/platform/xilinx/
+F: include/uapi/linux/xilinx-v4l2-controls.h
+
XILLYBUS DRIVER
M: Eli Billauer <eli.billauer@gmail.com>
L: linux-kernel@vger.kernel.org
diff --git a/README b/README
index a24ec89..69c68fb 100644
--- a/README
+++ b/README
@@ -1,6 +1,6 @@
- Linux kernel release 3.x <http://kernel.org/>
+ Linux kernel release 4.x <http://kernel.org/>
-These are the release notes for Linux version 3. Read them carefully,
+These are the release notes for Linux version 4. Read them carefully,
as they tell you what this is all about, explain how to install the
kernel, and what to do if something goes wrong.
@@ -62,11 +62,7 @@
directory where you have permissions (eg. your home directory) and
unpack it:
- gzip -cd linux-3.X.tar.gz | tar xvf -
-
- or
-
- bzip2 -dc linux-3.X.tar.bz2 | tar xvf -
+ xz -cd linux-4.X.tar.xz | tar xvf -
Replace "X" with the version number of the latest kernel.
@@ -75,16 +71,12 @@
files. They should match the library, and not get messed up by
whatever the kernel-du-jour happens to be.
- - You can also upgrade between 3.x releases by patching. Patches are
- distributed in the traditional gzip and the newer bzip2 format. To
- install by patching, get all the newer patch files, enter the
- top level directory of the kernel source (linux-3.X) and execute:
+ - You can also upgrade between 4.x releases by patching. Patches are
+ distributed in the xz format. To install by patching, get all the
+ newer patch files, enter the top level directory of the kernel source
+ (linux-4.X) and execute:
- gzip -cd ../patch-3.x.gz | patch -p1
-
- or
-
- bzip2 -dc ../patch-3.x.bz2 | patch -p1
+ xz -cd ../patch-4.x.xz | patch -p1
Replace "x" for all versions bigger than the version "X" of your current
source tree, _in_order_, and you should be ok. You may want to remove
@@ -92,13 +84,13 @@
that there are no failed patches (some-file-name# or some-file-name.rej).
If there are, either you or I have made a mistake.
- Unlike patches for the 3.x kernels, patches for the 3.x.y kernels
+ Unlike patches for the 4.x kernels, patches for the 4.x.y kernels
(also known as the -stable kernels) are not incremental but instead apply
- directly to the base 3.x kernel. For example, if your base kernel is 3.0
- and you want to apply the 3.0.3 patch, you must not first apply the 3.0.1
- and 3.0.2 patches. Similarly, if you are running kernel version 3.0.2 and
- want to jump to 3.0.3, you must first reverse the 3.0.2 patch (that is,
- patch -R) _before_ applying the 3.0.3 patch. You can read more on this in
+ directly to the base 4.x kernel. For example, if your base kernel is 4.0
+ and you want to apply the 4.0.3 patch, you must not first apply the 4.0.1
+ and 4.0.2 patches. Similarly, if you are running kernel version 4.0.2 and
+ want to jump to 4.0.3, you must first reverse the 4.0.2 patch (that is,
+ patch -R) _before_ applying the 4.0.3 patch. You can read more on this in
Documentation/applying-patches.txt
Alternatively, the script patch-kernel can be used to automate this
@@ -120,7 +112,7 @@
SOFTWARE REQUIREMENTS
- Compiling and running the 3.x kernels requires up-to-date
+ Compiling and running the 4.x kernels requires up-to-date
versions of various software packages. Consult
Documentation/Changes for the minimum version numbers required
and how to get updates for these packages. Beware that using
@@ -137,12 +129,12 @@
place for the output files (including .config).
Example:
- kernel source code: /usr/src/linux-3.X
+ kernel source code: /usr/src/linux-4.X
build directory: /home/name/build/kernel
To configure and build the kernel, use:
- cd /usr/src/linux-3.X
+ cd /usr/src/linux-4.X
make O=/home/name/build/kernel menuconfig
make O=/home/name/build/kernel
sudo make O=/home/name/build/kernel modules_install install
diff --git a/arch/arm/Kconfig.debug b/arch/arm/Kconfig.debug
index 970de75..8b0183a 100644
--- a/arch/arm/Kconfig.debug
+++ b/arch/arm/Kconfig.debug
@@ -1610,59 +1610,6 @@
against certain classes of kernel exploits.
If in doubt, say "N".
-menuconfig CORESIGHT
- bool "CoreSight Tracing Support"
- select ARM_AMBA
- help
- This framework provides a kernel interface for the CoreSight debug
- and trace drivers to register themselves with. It's intended to build
- a topological view of the CoreSight components based on a DT
- specification and configure the right serie of components when a
- trace source gets enabled.
+source "drivers/hwtracing/coresight/Kconfig"
-if CORESIGHT
-config CORESIGHT_LINKS_AND_SINKS
- bool "CoreSight Link and Sink drivers"
- help
- This enables support for CoreSight link and sink drivers that are
- responsible for transporting and collecting the trace data
- respectively. Link and sinks are dynamically aggregated with a trace
- entity at run time to form a complete trace path.
-
-config CORESIGHT_LINK_AND_SINK_TMC
- bool "Coresight generic TMC driver"
- depends on CORESIGHT_LINKS_AND_SINKS
- help
- This enables support for the Trace Memory Controller driver. Depending
- on its configuration the device can act as a link (embedded trace router
- - ETR) or sink (embedded trace FIFO). The driver complies with the
- generic implementation of the component without special enhancement or
- added features.
-
-config CORESIGHT_SINK_TPIU
- bool "Coresight generic TPIU driver"
- depends on CORESIGHT_LINKS_AND_SINKS
- help
- This enables support for the Trace Port Interface Unit driver, responsible
- for bridging the gap between the on-chip coresight components and a trace
- port collection engine, typically connected to an external host for use
- case capturing more traces than the on-board coresight memory can handle.
-
-config CORESIGHT_SINK_ETBV10
- bool "Coresight ETBv1.0 driver"
- depends on CORESIGHT_LINKS_AND_SINKS
- help
- This enables support for the Embedded Trace Buffer version 1.0 driver
- that complies with the generic implementation of the component without
- special enhancement or added features.
-
-config CORESIGHT_SOURCE_ETM3X
- bool "CoreSight Embedded Trace Macrocell 3.x driver"
- select CORESIGHT_LINKS_AND_SINKS
- help
- This driver provides support for processor ETM3.x and PTM1.x modules,
- which allows tracing the instructions that a processor is executing
- This is primarily useful for instruction level tracing. Depending
- the ETM version data tracing may also be available.
-endif
endmenu
diff --git a/arch/arm/boot/dts/hip04.dtsi b/arch/arm/boot/dts/hip04.dtsi
index 2388145..44044f2 100644
--- a/arch/arm/boot/dts/hip04.dtsi
+++ b/arch/arm/boot/dts/hip04.dtsi
@@ -275,7 +275,6 @@
compatible = "arm,coresight-etb10", "arm,primecell";
reg = <0 0xe3c42000 0 0x1000>;
- coresight-default-sink;
clocks = <&clk_375m>;
clock-names = "apb_pclk";
port {
diff --git a/arch/arm/boot/dts/omap3-beagle-xm.dts b/arch/arm/boot/dts/omap3-beagle-xm.dts
index 25f7b0a..8cdca51 100644
--- a/arch/arm/boot/dts/omap3-beagle-xm.dts
+++ b/arch/arm/boot/dts/omap3-beagle-xm.dts
@@ -150,7 +150,6 @@
compatible = "arm,coresight-etb10", "arm,primecell";
reg = <0x5401b000 0x1000>;
- coresight-default-sink;
clocks = <&emu_src_ck>;
clock-names = "apb_pclk";
port {
diff --git a/arch/arm/boot/dts/omap3-beagle.dts b/arch/arm/boot/dts/omap3-beagle.dts
index c792391..6d4c46b 100644
--- a/arch/arm/boot/dts/omap3-beagle.dts
+++ b/arch/arm/boot/dts/omap3-beagle.dts
@@ -145,7 +145,6 @@
compatible = "arm,coresight-etb10", "arm,primecell";
reg = <0x5401b000 0x1000>;
- coresight-default-sink;
clocks = <&emu_src_ck>;
clock-names = "apb_pclk";
port {
diff --git a/arch/arm/boot/dts/omap3-n900.dts b/arch/arm/boot/dts/omap3-n900.dts
index db80f9d..2cab149 100644
--- a/arch/arm/boot/dts/omap3-n900.dts
+++ b/arch/arm/boot/dts/omap3-n900.dts
@@ -609,6 +609,58 @@
pinctrl-0 = <&i2c3_pins>;
clock-frequency = <400000>;
+
+ lis302dl: lis3lv02d@1d {
+ compatible = "st,lis3lv02d";
+ reg = <0x1d>;
+
+ Vdd-supply = <&vaux1>;
+ Vdd_IO-supply = <&vio>;
+
+ interrupt-parent = <&gpio6>;
+ interrupts = <21 20>; /* 181 and 180 */
+
+ /* click flags */
+ st,click-single-x;
+ st,click-single-y;
+ st,click-single-z;
+
+ /* Limits are 0.5g * value */
+ st,click-threshold-x = <8>;
+ st,click-threshold-y = <8>;
+ st,click-threshold-z = <10>;
+
+ /* Click must be longer than time limit */
+ st,click-time-limit = <9>;
+
+ /* Kind of debounce filter */
+ st,click-latency = <50>;
+
+ /* Interrupt line 2 for click detection */
+ st,irq2-click;
+
+ st,wakeup-x-hi;
+ st,wakeup-y-hi;
+ st,wakeup-threshold = <(800/18)>; /* millig-value / 18 to get HW values */
+
+ st,wakeup2-z-hi;
+ st,wakeup2-threshold = <(900/18)>; /* millig-value / 18 to get HW values */
+
+ st,hipass1-disable;
+ st,hipass2-disable;
+
+ st,axis-x = <1>; /* LIS3_DEV_X */
+ st,axis-y = <(-2)>; /* LIS3_INV_DEV_Y */
+ st,axis-z = <(-3)>; /* LIS3_INV_DEV_Z */
+
+ st,min-limit-x = <(-32)>;
+ st,min-limit-y = <3>;
+ st,min-limit-z = <3>;
+
+ st,max-limit-x = <(-3)>;
+ st,max-limit-y = <32>;
+ st,max-limit-z = <32>;
+ };
};
&mmc1 {
diff --git a/arch/arm/boot/dts/vexpress-v2p-ca15_a7.dts b/arch/arm/boot/dts/vexpress-v2p-ca15_a7.dts
index 33920df..7a2aeac 100644
--- a/arch/arm/boot/dts/vexpress-v2p-ca15_a7.dts
+++ b/arch/arm/boot/dts/vexpress-v2p-ca15_a7.dts
@@ -362,7 +362,6 @@
compatible = "arm,coresight-etb10", "arm,primecell";
reg = <0 0x20010000 0 0x1000>;
- coresight-default-sink;
clocks = <&oscclk6a>;
clock-names = "apb_pclk";
port {
diff --git a/arch/arm/mach-omap2/board-cm-t35.c b/arch/arm/mach-omap2/board-cm-t35.c
index 91738a1..b5dfbc1 100644
--- a/arch/arm/mach-omap2/board-cm-t35.c
+++ b/arch/arm/mach-omap2/board-cm-t35.c
@@ -492,51 +492,36 @@
#include <media/omap3isp.h>
#include "devices.h"
-static struct i2c_board_info cm_t35_isp_i2c_boardinfo[] = {
+static struct isp_platform_subdev cm_t35_isp_subdevs[] = {
{
- I2C_BOARD_INFO("mt9t001", 0x5d),
- },
- {
- I2C_BOARD_INFO("tvp5150", 0x5c),
- },
-};
-
-static struct isp_subdev_i2c_board_info cm_t35_isp_primary_subdevs[] = {
- {
- .board_info = &cm_t35_isp_i2c_boardinfo[0],
+ .board_info = &(struct i2c_board_info){
+ I2C_BOARD_INFO("mt9t001", 0x5d)
+ },
.i2c_adapter_id = 3,
- },
- { NULL, 0, },
-};
-
-static struct isp_subdev_i2c_board_info cm_t35_isp_secondary_subdevs[] = {
- {
- .board_info = &cm_t35_isp_i2c_boardinfo[1],
- .i2c_adapter_id = 3,
- },
- { NULL, 0, },
-};
-
-static struct isp_v4l2_subdevs_group cm_t35_isp_subdevs[] = {
- {
- .subdevs = cm_t35_isp_primary_subdevs,
- .interface = ISP_INTERFACE_PARALLEL,
- .bus = {
- .parallel = {
- .clk_pol = 1,
+ .bus = &(struct isp_bus_cfg){
+ .interface = ISP_INTERFACE_PARALLEL,
+ .bus = {
+ .parallel = {
+ .clk_pol = 1,
+ },
},
},
},
{
- .subdevs = cm_t35_isp_secondary_subdevs,
- .interface = ISP_INTERFACE_PARALLEL,
- .bus = {
- .parallel = {
- .clk_pol = 0,
+ .board_info = &(struct i2c_board_info){
+ I2C_BOARD_INFO("tvp5150", 0x5c),
+ },
+ .i2c_adapter_id = 3,
+ .bus = &(struct isp_bus_cfg){
+ .interface = ISP_INTERFACE_PARALLEL,
+ .bus = {
+ .parallel = {
+ .clk_pol = 0,
+ },
},
},
},
- { NULL, 0, },
+ { 0 },
};
static struct isp_platform_data cm_t35_isp_pdata = {
diff --git a/arch/arm/mach-omap2/devices.c b/arch/arm/mach-omap2/devices.c
index 1afb50d..990338f 100644
--- a/arch/arm/mach-omap2/devices.c
+++ b/arch/arm/mach-omap2/devices.c
@@ -74,82 +74,12 @@
static struct resource omap3isp_resources[] = {
{
.start = OMAP3430_ISP_BASE,
- .end = OMAP3430_ISP_END,
+ .end = OMAP3430_ISP_BASE + 0x12fc,
.flags = IORESOURCE_MEM,
},
{
- .start = OMAP3430_ISP_CCP2_BASE,
- .end = OMAP3430_ISP_CCP2_END,
- .flags = IORESOURCE_MEM,
- },
- {
- .start = OMAP3430_ISP_CCDC_BASE,
- .end = OMAP3430_ISP_CCDC_END,
- .flags = IORESOURCE_MEM,
- },
- {
- .start = OMAP3430_ISP_HIST_BASE,
- .end = OMAP3430_ISP_HIST_END,
- .flags = IORESOURCE_MEM,
- },
- {
- .start = OMAP3430_ISP_H3A_BASE,
- .end = OMAP3430_ISP_H3A_END,
- .flags = IORESOURCE_MEM,
- },
- {
- .start = OMAP3430_ISP_PREV_BASE,
- .end = OMAP3430_ISP_PREV_END,
- .flags = IORESOURCE_MEM,
- },
- {
- .start = OMAP3430_ISP_RESZ_BASE,
- .end = OMAP3430_ISP_RESZ_END,
- .flags = IORESOURCE_MEM,
- },
- {
- .start = OMAP3430_ISP_SBL_BASE,
- .end = OMAP3430_ISP_SBL_END,
- .flags = IORESOURCE_MEM,
- },
- {
- .start = OMAP3430_ISP_CSI2A_REGS1_BASE,
- .end = OMAP3430_ISP_CSI2A_REGS1_END,
- .flags = IORESOURCE_MEM,
- },
- {
- .start = OMAP3430_ISP_CSIPHY2_BASE,
- .end = OMAP3430_ISP_CSIPHY2_END,
- .flags = IORESOURCE_MEM,
- },
- {
- .start = OMAP3630_ISP_CSI2A_REGS2_BASE,
- .end = OMAP3630_ISP_CSI2A_REGS2_END,
- .flags = IORESOURCE_MEM,
- },
- {
- .start = OMAP3630_ISP_CSI2C_REGS1_BASE,
- .end = OMAP3630_ISP_CSI2C_REGS1_END,
- .flags = IORESOURCE_MEM,
- },
- {
- .start = OMAP3630_ISP_CSIPHY1_BASE,
- .end = OMAP3630_ISP_CSIPHY1_END,
- .flags = IORESOURCE_MEM,
- },
- {
- .start = OMAP3630_ISP_CSI2C_REGS2_BASE,
- .end = OMAP3630_ISP_CSI2C_REGS2_END,
- .flags = IORESOURCE_MEM,
- },
- {
- .start = OMAP343X_CTRL_BASE + OMAP343X_CONTROL_CSIRXFE,
- .end = OMAP343X_CTRL_BASE + OMAP343X_CONTROL_CSIRXFE + 3,
- .flags = IORESOURCE_MEM,
- },
- {
- .start = OMAP343X_CTRL_BASE + OMAP3630_CONTROL_CAMERA_PHY_CTRL,
- .end = OMAP343X_CTRL_BASE + OMAP3630_CONTROL_CAMERA_PHY_CTRL + 3,
+ .start = OMAP3430_ISP_BASE2,
+ .end = OMAP3430_ISP_BASE2 + 0x0600,
.flags = IORESOURCE_MEM,
},
{
diff --git a/arch/arm/mach-omap2/omap34xx.h b/arch/arm/mach-omap2/omap34xx.h
index c0d1b4b..ed0024d 100644
--- a/arch/arm/mach-omap2/omap34xx.h
+++ b/arch/arm/mach-omap2/omap34xx.h
@@ -46,39 +46,9 @@
#define OMAP34XX_IC_BASE 0x48200000
-#define OMAP3430_ISP_BASE (L4_34XX_BASE + 0xBC000)
-#define OMAP3430_ISP_CBUFF_BASE (OMAP3430_ISP_BASE + 0x0100)
-#define OMAP3430_ISP_CCP2_BASE (OMAP3430_ISP_BASE + 0x0400)
-#define OMAP3430_ISP_CCDC_BASE (OMAP3430_ISP_BASE + 0x0600)
-#define OMAP3430_ISP_HIST_BASE (OMAP3430_ISP_BASE + 0x0A00)
-#define OMAP3430_ISP_H3A_BASE (OMAP3430_ISP_BASE + 0x0C00)
-#define OMAP3430_ISP_PREV_BASE (OMAP3430_ISP_BASE + 0x0E00)
-#define OMAP3430_ISP_RESZ_BASE (OMAP3430_ISP_BASE + 0x1000)
-#define OMAP3430_ISP_SBL_BASE (OMAP3430_ISP_BASE + 0x1200)
-#define OMAP3430_ISP_MMU_BASE (OMAP3430_ISP_BASE + 0x1400)
-#define OMAP3430_ISP_CSI2A_REGS1_BASE (OMAP3430_ISP_BASE + 0x1800)
-#define OMAP3430_ISP_CSIPHY2_BASE (OMAP3430_ISP_BASE + 0x1970)
-#define OMAP3630_ISP_CSI2A_REGS2_BASE (OMAP3430_ISP_BASE + 0x19C0)
-#define OMAP3630_ISP_CSI2C_REGS1_BASE (OMAP3430_ISP_BASE + 0x1C00)
-#define OMAP3630_ISP_CSIPHY1_BASE (OMAP3430_ISP_BASE + 0x1D70)
-#define OMAP3630_ISP_CSI2C_REGS2_BASE (OMAP3430_ISP_BASE + 0x1DC0)
-
-#define OMAP3430_ISP_END (OMAP3430_ISP_BASE + 0x06F)
-#define OMAP3430_ISP_CBUFF_END (OMAP3430_ISP_CBUFF_BASE + 0x077)
-#define OMAP3430_ISP_CCP2_END (OMAP3430_ISP_CCP2_BASE + 0x1EF)
-#define OMAP3430_ISP_CCDC_END (OMAP3430_ISP_CCDC_BASE + 0x0A7)
-#define OMAP3430_ISP_HIST_END (OMAP3430_ISP_HIST_BASE + 0x047)
-#define OMAP3430_ISP_H3A_END (OMAP3430_ISP_H3A_BASE + 0x05F)
-#define OMAP3430_ISP_PREV_END (OMAP3430_ISP_PREV_BASE + 0x09F)
-#define OMAP3430_ISP_RESZ_END (OMAP3430_ISP_RESZ_BASE + 0x0AB)
-#define OMAP3430_ISP_SBL_END (OMAP3430_ISP_SBL_BASE + 0x0FB)
-#define OMAP3430_ISP_MMU_END (OMAP3430_ISP_MMU_BASE + 0x06F)
-#define OMAP3430_ISP_CSI2A_REGS1_END (OMAP3430_ISP_CSI2A_REGS1_BASE + 0x16F)
-#define OMAP3430_ISP_CSIPHY2_END (OMAP3430_ISP_CSIPHY2_BASE + 0x00B)
-#define OMAP3630_ISP_CSI2A_REGS2_END (OMAP3630_ISP_CSI2A_REGS2_BASE + 0x3F)
-#define OMAP3630_ISP_CSI2C_REGS1_END (OMAP3630_ISP_CSI2C_REGS1_BASE + 0x16F)
-#define OMAP3630_ISP_CSIPHY1_END (OMAP3630_ISP_CSIPHY1_BASE + 0x00B)
-#define OMAP3630_ISP_CSI2C_REGS2_END (OMAP3630_ISP_CSI2C_REGS2_BASE + 0x3F)
+#define OMAP3430_ISP_BASE (L4_34XX_BASE + 0xBC000)
+#define OMAP3430_ISP_MMU_BASE (OMAP3430_ISP_BASE + 0x1400)
+#define OMAP3430_ISP_BASE2 (OMAP3430_ISP_BASE + 0x1800)
#define OMAP34XX_HSUSB_OTG_BASE (L4_34XX_BASE + 0xAB000)
#define OMAP34XX_USBTLL_BASE (L4_34XX_BASE + 0x62000)
diff --git a/arch/arm64/Kconfig.debug b/arch/arm64/Kconfig.debug
index 4a87410..d6285ef 100644
--- a/arch/arm64/Kconfig.debug
+++ b/arch/arm64/Kconfig.debug
@@ -89,4 +89,6 @@
If in doubt, say N
+source "drivers/hwtracing/coresight/Kconfig"
+
endmenu
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index ffe8e1b..714411f 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -636,7 +636,7 @@
cpumask_t mask;
cpumask_copy(&mask, cpu_online_mask);
- cpu_clear(smp_processor_id(), mask);
+ cpumask_clear_cpu(smp_processor_id(), &mask);
smp_cross_call(&mask, IPI_CPU_STOP);
}
diff --git a/arch/blackfin/mach-bf561/smp.c b/arch/blackfin/mach-bf561/smp.c
index 11789be..8c0c80f 100644
--- a/arch/blackfin/mach-bf561/smp.c
+++ b/arch/blackfin/mach-bf561/smp.c
@@ -124,7 +124,7 @@
unsigned int cpu;
int offset = (irq == IRQ_SUPPLE_0) ? 6 : 8;
- for_each_cpu_mask(cpu, callmap) {
+ for_each_cpu(cpu, &callmap) {
BUG_ON(cpu >= 2);
SSYNC();
bfin_write_SICB_SYSCR(bfin_read_SICB_SYSCR() | (1 << (offset + cpu)));
diff --git a/arch/ia64/include/asm/acpi.h b/arch/ia64/include/asm/acpi.h
index a1d91ab..aa0fdf1 100644
--- a/arch/ia64/include/asm/acpi.h
+++ b/arch/ia64/include/asm/acpi.h
@@ -117,7 +117,7 @@
#ifdef CONFIG_ACPI_NUMA
extern cpumask_t early_cpu_possible_map;
#define for_each_possible_early_cpu(cpu) \
- for_each_cpu_mask((cpu), early_cpu_possible_map)
+ for_each_cpu((cpu), &early_cpu_possible_map)
static inline void per_cpu_scan_finalize(int min_cpus, int reserve_cpus)
{
@@ -125,13 +125,13 @@
int cpu;
int next_nid = 0;
- low_cpu = cpus_weight(early_cpu_possible_map);
+ low_cpu = cpumask_weight(&early_cpu_possible_map);
high_cpu = max(low_cpu, min_cpus);
high_cpu = min(high_cpu + reserve_cpus, NR_CPUS);
for (cpu = low_cpu; cpu < high_cpu; cpu++) {
- cpu_set(cpu, early_cpu_possible_map);
+ cpumask_set_cpu(cpu, &early_cpu_possible_map);
if (node_cpuid[cpu].nid == NUMA_NO_NODE) {
node_cpuid[cpu].nid = next_nid;
next_nid++;
diff --git a/arch/ia64/kernel/acpi.c b/arch/ia64/kernel/acpi.c
index 2c44989..35bf22c 100644
--- a/arch/ia64/kernel/acpi.c
+++ b/arch/ia64/kernel/acpi.c
@@ -483,7 +483,7 @@
(pa->apic_id << 8) | (pa->local_sapic_eid);
/* nid should be overridden as logical node id later */
node_cpuid[srat_num_cpus].nid = pxm;
- cpu_set(srat_num_cpus, early_cpu_possible_map);
+ cpumask_set_cpu(srat_num_cpus, &early_cpu_possible_map);
srat_num_cpus++;
}
diff --git a/arch/ia64/kernel/iosapic.c b/arch/ia64/kernel/iosapic.c
index cd44a57..bc9501e 100644
--- a/arch/ia64/kernel/iosapic.c
+++ b/arch/ia64/kernel/iosapic.c
@@ -690,7 +690,7 @@
do {
if (++cpu >= nr_cpu_ids)
cpu = 0;
- } while (!cpu_online(cpu) || !cpu_isset(cpu, domain));
+ } while (!cpu_online(cpu) || !cpumask_test_cpu(cpu, &domain));
return cpu_physical_id(cpu);
#else /* CONFIG_SMP */
diff --git a/arch/ia64/kernel/irq_ia64.c b/arch/ia64/kernel/irq_ia64.c
index 698d8fe..eaa3199 100644
--- a/arch/ia64/kernel/irq_ia64.c
+++ b/arch/ia64/kernel/irq_ia64.c
@@ -109,13 +109,13 @@
int pos, vector;
cpumask_and(&mask, &domain, cpu_online_mask);
- if (cpus_empty(mask))
+ if (cpumask_empty(&mask))
return -EINVAL;
for (pos = 0; pos < IA64_NUM_DEVICE_VECTORS; pos++) {
vector = IA64_FIRST_DEVICE_VECTOR + pos;
- cpus_and(mask, domain, vector_table[vector]);
- if (!cpus_empty(mask))
+ cpumask_and(&mask, &domain, &vector_table[vector]);
+ if (!cpumask_empty(&mask))
continue;
return vector;
}
@@ -132,18 +132,18 @@
BUG_ON((unsigned)vector >= IA64_NUM_VECTORS);
cpumask_and(&mask, &domain, cpu_online_mask);
- if (cpus_empty(mask))
+ if (cpumask_empty(&mask))
return -EINVAL;
- if ((cfg->vector == vector) && cpus_equal(cfg->domain, domain))
+ if ((cfg->vector == vector) && cpumask_equal(&cfg->domain, &domain))
return 0;
if (cfg->vector != IRQ_VECTOR_UNASSIGNED)
return -EBUSY;
- for_each_cpu_mask(cpu, mask)
+ for_each_cpu(cpu, &mask)
per_cpu(vector_irq, cpu)[vector] = irq;
cfg->vector = vector;
cfg->domain = domain;
irq_status[irq] = IRQ_USED;
- cpus_or(vector_table[vector], vector_table[vector], domain);
+ cpumask_or(&vector_table[vector], &vector_table[vector], &domain);
return 0;
}
@@ -161,7 +161,6 @@
static void __clear_irq_vector(int irq)
{
int vector, cpu;
- cpumask_t mask;
cpumask_t domain;
struct irq_cfg *cfg = &irq_cfg[irq];
@@ -169,13 +168,12 @@
BUG_ON(cfg->vector == IRQ_VECTOR_UNASSIGNED);
vector = cfg->vector;
domain = cfg->domain;
- cpumask_and(&mask, &cfg->domain, cpu_online_mask);
- for_each_cpu_mask(cpu, mask)
+ for_each_cpu_and(cpu, &cfg->domain, cpu_online_mask)
per_cpu(vector_irq, cpu)[vector] = -1;
cfg->vector = IRQ_VECTOR_UNASSIGNED;
cfg->domain = CPU_MASK_NONE;
irq_status[irq] = IRQ_UNUSED;
- cpus_andnot(vector_table[vector], vector_table[vector], domain);
+ cpumask_andnot(&vector_table[vector], &vector_table[vector], &domain);
}
static void clear_irq_vector(int irq)
@@ -244,7 +242,7 @@
per_cpu(vector_irq, cpu)[vector] = -1;
/* Mark the inuse vectors */
for (irq = 0; irq < NR_IRQS; ++irq) {
- if (!cpu_isset(cpu, irq_cfg[irq].domain))
+ if (!cpumask_test_cpu(cpu, &irq_cfg[irq].domain))
continue;
vector = irq_to_vector(irq);
per_cpu(vector_irq, cpu)[vector] = irq;
@@ -261,7 +259,7 @@
static cpumask_t vector_allocation_domain(int cpu)
{
if (vector_domain_type == VECTOR_DOMAIN_PERCPU)
- return cpumask_of_cpu(cpu);
+ return *cpumask_of(cpu);
return CPU_MASK_ALL;
}
@@ -275,7 +273,7 @@
return -EBUSY;
if (cfg->vector == IRQ_VECTOR_UNASSIGNED || !cpu_online(cpu))
return -EINVAL;
- if (cpu_isset(cpu, cfg->domain))
+ if (cpumask_test_cpu(cpu, &cfg->domain))
return 0;
domain = vector_allocation_domain(cpu);
vector = find_unassigned_vector(domain);
@@ -309,12 +307,12 @@
if (likely(!cfg->move_in_progress))
return;
- if (unlikely(cpu_isset(smp_processor_id(), cfg->old_domain)))
+ if (unlikely(cpumask_test_cpu(smp_processor_id(), &cfg->old_domain)))
return;
cpumask_and(&cleanup_mask, &cfg->old_domain, cpu_online_mask);
- cfg->move_cleanup_count = cpus_weight(cleanup_mask);
- for_each_cpu_mask(i, cleanup_mask)
+ cfg->move_cleanup_count = cpumask_weight(&cleanup_mask);
+ for_each_cpu(i, &cleanup_mask)
platform_send_ipi(i, IA64_IRQ_MOVE_VECTOR, IA64_IPI_DM_INT, 0);
cfg->move_in_progress = 0;
}
@@ -340,12 +338,12 @@
if (!cfg->move_cleanup_count)
goto unlock;
- if (!cpu_isset(me, cfg->old_domain))
+ if (!cpumask_test_cpu(me, &cfg->old_domain))
goto unlock;
spin_lock_irqsave(&vector_lock, flags);
__this_cpu_write(vector_irq[vector], -1);
- cpu_clear(me, vector_table[vector]);
+ cpumask_clear_cpu(me, &vector_table[vector]);
spin_unlock_irqrestore(&vector_lock, flags);
cfg->move_cleanup_count--;
unlock:
diff --git a/arch/ia64/kernel/mca.c b/arch/ia64/kernel/mca.c
index 8bfd36a..dd5801e 100644
--- a/arch/ia64/kernel/mca.c
+++ b/arch/ia64/kernel/mca.c
@@ -1293,7 +1293,7 @@
monarch_cpu = cpu;
sos->monarch = 1;
} else {
- cpu_set(cpu, mca_cpu);
+ cpumask_set_cpu(cpu, &mca_cpu);
sos->monarch = 0;
}
mprintk(KERN_INFO "Entered OS MCA handler. PSP=%lx cpu=%d "
@@ -1316,7 +1316,7 @@
*/
ia64_mca_wakeup_all();
} else {
- while (cpu_isset(cpu, mca_cpu))
+ while (cpumask_test_cpu(cpu, &mca_cpu))
cpu_relax(); /* spin until monarch wakes us */
}
@@ -1355,9 +1355,9 @@
* and put this cpu in the rendez loop.
*/
for_each_online_cpu(i) {
- if (cpu_isset(i, mca_cpu)) {
+ if (cpumask_test_cpu(i, &mca_cpu)) {
monarch_cpu = i;
- cpu_clear(i, mca_cpu); /* wake next cpu */
+ cpumask_clear_cpu(i, &mca_cpu); /* wake next cpu */
while (monarch_cpu != -1)
cpu_relax(); /* spin until last cpu leaves */
set_curr_task(cpu, previous_current);
@@ -1822,7 +1822,7 @@
ti->cpu = cpu;
p->stack = ti;
p->state = TASK_UNINTERRUPTIBLE;
- cpu_set(cpu, p->cpus_allowed);
+ cpumask_set_cpu(cpu, &p->cpus_allowed);
INIT_LIST_HEAD(&p->tasks);
p->parent = p->real_parent = p->group_leader = p;
INIT_LIST_HEAD(&p->children);
diff --git a/arch/ia64/kernel/msi_ia64.c b/arch/ia64/kernel/msi_ia64.c
index 8ae36ea..9dd7464 100644
--- a/arch/ia64/kernel/msi_ia64.c
+++ b/arch/ia64/kernel/msi_ia64.c
@@ -47,15 +47,14 @@
struct msi_msg msg;
unsigned long dest_phys_id;
int irq, vector;
- cpumask_t mask;
irq = create_irq();
if (irq < 0)
return irq;
irq_set_msi_desc(irq, desc);
- cpumask_and(&mask, &(irq_to_domain(irq)), cpu_online_mask);
- dest_phys_id = cpu_physical_id(first_cpu(mask));
+ dest_phys_id = cpu_physical_id(cpumask_any_and(&(irq_to_domain(irq)),
+ cpu_online_mask));
vector = irq_to_vector(irq);
msg.address_hi = 0;
@@ -171,10 +170,9 @@
{
struct irq_cfg *cfg = irq_cfg + irq;
unsigned dest;
- cpumask_t mask;
- cpumask_and(&mask, &(irq_to_domain(irq)), cpu_online_mask);
- dest = cpu_physical_id(first_cpu(mask));
+ dest = cpu_physical_id(cpumask_first_and(&(irq_to_domain(irq)),
+ cpu_online_mask));
msg->address_hi = 0;
msg->address_lo =
diff --git a/arch/ia64/kernel/numa.c b/arch/ia64/kernel/numa.c
index d288cde..92c3762 100644
--- a/arch/ia64/kernel/numa.c
+++ b/arch/ia64/kernel/numa.c
@@ -39,7 +39,7 @@
}
/* sanity check first */
oldnid = cpu_to_node_map[cpu];
- if (cpu_isset(cpu, node_to_cpu_mask[oldnid])) {
+ if (cpumask_test_cpu(cpu, &node_to_cpu_mask[oldnid])) {
return; /* nothing to do */
}
/* we don't have cpu-driven node hot add yet...
@@ -47,16 +47,16 @@
if (!node_online(nid))
nid = first_online_node;
cpu_to_node_map[cpu] = nid;
- cpu_set(cpu, node_to_cpu_mask[nid]);
+ cpumask_set_cpu(cpu, &node_to_cpu_mask[nid]);
return;
}
void unmap_cpu_from_node(int cpu, int nid)
{
- WARN_ON(!cpu_isset(cpu, node_to_cpu_mask[nid]));
+ WARN_ON(!cpumask_test_cpu(cpu, &node_to_cpu_mask[nid]));
WARN_ON(cpu_to_node_map[cpu] != nid);
cpu_to_node_map[cpu] = 0;
- cpu_clear(cpu, node_to_cpu_mask[nid]);
+ cpumask_clear_cpu(cpu, &node_to_cpu_mask[nid]);
}
@@ -71,7 +71,7 @@
int cpu, i, node;
for(node=0; node < MAX_NUMNODES; node++)
- cpus_clear(node_to_cpu_mask[node]);
+ cpumask_clear(&node_to_cpu_mask[node]);
for_each_possible_early_cpu(cpu) {
node = -1;
diff --git a/arch/ia64/kernel/salinfo.c b/arch/ia64/kernel/salinfo.c
index ee9719e..1eeffb7 100644
--- a/arch/ia64/kernel/salinfo.c
+++ b/arch/ia64/kernel/salinfo.c
@@ -256,7 +256,7 @@
data_saved->buffer = buffer;
}
}
- cpu_set(smp_processor_id(), data->cpu_event);
+ cpumask_set_cpu(smp_processor_id(), &data->cpu_event);
if (irqsafe) {
salinfo_work_to_do(data);
spin_unlock_irqrestore(&data_saved_lock, flags);
@@ -274,7 +274,7 @@
unsigned long flags;
if (!data->open)
return;
- if (!cpus_empty(data->cpu_event)) {
+ if (!cpumask_empty(&data->cpu_event)) {
spin_lock_irqsave(&data_saved_lock, flags);
salinfo_work_to_do(data);
spin_unlock_irqrestore(&data_saved_lock, flags);
@@ -308,7 +308,7 @@
int i, n, cpu = -1;
retry:
- if (cpus_empty(data->cpu_event) && down_trylock(&data->mutex)) {
+ if (cpumask_empty(&data->cpu_event) && down_trylock(&data->mutex)) {
if (file->f_flags & O_NONBLOCK)
return -EAGAIN;
if (down_interruptible(&data->mutex))
@@ -317,9 +317,9 @@
n = data->cpu_check;
for (i = 0; i < nr_cpu_ids; i++) {
- if (cpu_isset(n, data->cpu_event)) {
+ if (cpumask_test_cpu(n, &data->cpu_event)) {
if (!cpu_online(n)) {
- cpu_clear(n, data->cpu_event);
+ cpumask_clear_cpu(n, &data->cpu_event);
continue;
}
cpu = n;
@@ -451,7 +451,7 @@
call_on_cpu(cpu, salinfo_log_read_cpu, data);
if (!data->log_size) {
data->state = STATE_NO_DATA;
- cpu_clear(cpu, data->cpu_event);
+ cpumask_clear_cpu(cpu, &data->cpu_event);
} else {
data->state = STATE_LOG_RECORD;
}
@@ -491,11 +491,11 @@
unsigned long flags;
spin_lock_irqsave(&data_saved_lock, flags);
data->state = STATE_NO_DATA;
- if (!cpu_isset(cpu, data->cpu_event)) {
+ if (!cpumask_test_cpu(cpu, &data->cpu_event)) {
spin_unlock_irqrestore(&data_saved_lock, flags);
return 0;
}
- cpu_clear(cpu, data->cpu_event);
+ cpumask_clear_cpu(cpu, &data->cpu_event);
if (data->saved_num) {
shift1_data_saved(data, data->saved_num - 1);
data->saved_num = 0;
@@ -509,7 +509,7 @@
salinfo_log_new_read(cpu, data);
if (data->state == STATE_LOG_RECORD) {
spin_lock_irqsave(&data_saved_lock, flags);
- cpu_set(cpu, data->cpu_event);
+ cpumask_set_cpu(cpu, &data->cpu_event);
salinfo_work_to_do(data);
spin_unlock_irqrestore(&data_saved_lock, flags);
}
@@ -581,7 +581,7 @@
for (i = 0, data = salinfo_data;
i < ARRAY_SIZE(salinfo_data);
++i, ++data) {
- cpu_set(cpu, data->cpu_event);
+ cpumask_set_cpu(cpu, &data->cpu_event);
salinfo_work_to_do(data);
}
spin_unlock_irqrestore(&data_saved_lock, flags);
@@ -601,7 +601,7 @@
shift1_data_saved(data, j);
}
}
- cpu_clear(cpu, data->cpu_event);
+ cpumask_clear_cpu(cpu, &data->cpu_event);
}
spin_unlock_irqrestore(&data_saved_lock, flags);
break;
@@ -659,7 +659,7 @@
/* we missed any events before now */
for_each_online_cpu(j)
- cpu_set(j, data->cpu_event);
+ cpumask_set_cpu(j, &data->cpu_event);
*sdir++ = dir;
}
diff --git a/arch/ia64/kernel/setup.c b/arch/ia64/kernel/setup.c
index d86669b..b976138 100644
--- a/arch/ia64/kernel/setup.c
+++ b/arch/ia64/kernel/setup.c
@@ -562,8 +562,8 @@
# ifdef CONFIG_ACPI_HOTPLUG_CPU
prefill_possible_map();
# endif
- per_cpu_scan_finalize((cpus_weight(early_cpu_possible_map) == 0 ?
- 32 : cpus_weight(early_cpu_possible_map)),
+ per_cpu_scan_finalize((cpumask_weight(&early_cpu_possible_map) == 0 ?
+ 32 : cpumask_weight(&early_cpu_possible_map)),
additional_cpus > 0 ? additional_cpus : 0);
# endif
#endif /* CONFIG_APCI_BOOT */
@@ -702,7 +702,8 @@
c->itc_freq / 1000000, c->itc_freq % 1000000,
lpj*HZ/500000, (lpj*HZ/5000) % 100);
#ifdef CONFIG_SMP
- seq_printf(m, "siblings : %u\n", cpus_weight(cpu_core_map[cpunum]));
+ seq_printf(m, "siblings : %u\n",
+ cpumask_weight(&cpu_core_map[cpunum]));
if (c->socket_id != -1)
seq_printf(m, "physical id: %u\n", c->socket_id);
if (c->threads_per_core > 1 || c->cores_per_socket > 1)
@@ -933,8 +934,8 @@
* (must be done after per_cpu area is setup)
*/
if (smp_processor_id() == 0) {
- cpu_set(0, per_cpu(cpu_sibling_map, 0));
- cpu_set(0, cpu_core_map[0]);
+ cpumask_set_cpu(0, &per_cpu(cpu_sibling_map, 0));
+ cpumask_set_cpu(0, &cpu_core_map[0]);
} else {
/*
* Set ar.k3 so that assembly code in MCA handler can compute
diff --git a/arch/ia64/kernel/smp.c b/arch/ia64/kernel/smp.c
index 9fcd4e6..7f706d4 100644
--- a/arch/ia64/kernel/smp.c
+++ b/arch/ia64/kernel/smp.c
@@ -262,11 +262,11 @@
preempt_disable();
mycpu = smp_processor_id();
- for_each_cpu_mask(cpu, cpumask)
+ for_each_cpu(cpu, &cpumask)
counts[cpu] = local_tlb_flush_counts[cpu].count & 0xffff;
mb();
- for_each_cpu_mask(cpu, cpumask) {
+ for_each_cpu(cpu, &cpumask) {
if (cpu == mycpu)
flush_mycpu = 1;
else
@@ -276,7 +276,7 @@
if (flush_mycpu)
smp_local_flush_tlb();
- for_each_cpu_mask(cpu, cpumask)
+ for_each_cpu(cpu, &cpumask)
while(counts[cpu] == (local_tlb_flush_counts[cpu].count & 0xffff))
udelay(FLUSH_DELAY);
diff --git a/arch/ia64/kernel/smpboot.c b/arch/ia64/kernel/smpboot.c
index 547a48d..15051e9 100644
--- a/arch/ia64/kernel/smpboot.c
+++ b/arch/ia64/kernel/smpboot.c
@@ -434,7 +434,7 @@
/*
* Allow the master to continue.
*/
- cpu_set(cpuid, cpu_callin_map);
+ cpumask_set_cpu(cpuid, &cpu_callin_map);
Dprintk("Stack on CPU %d at about %p\n",cpuid, &cpuid);
}
@@ -475,13 +475,13 @@
*/
Dprintk("Waiting on callin_map ...");
for (timeout = 0; timeout < 100000; timeout++) {
- if (cpu_isset(cpu, cpu_callin_map))
+ if (cpumask_test_cpu(cpu, &cpu_callin_map))
break; /* It has booted */
udelay(100);
}
Dprintk("\n");
- if (!cpu_isset(cpu, cpu_callin_map)) {
+ if (!cpumask_test_cpu(cpu, &cpu_callin_map)) {
printk(KERN_ERR "Processor 0x%x/0x%x is stuck.\n", cpu, sapicid);
ia64_cpu_to_sapicid[cpu] = -1;
set_cpu_online(cpu, false); /* was set in smp_callin() */
@@ -541,7 +541,7 @@
smp_setup_percpu_timer();
- cpu_set(0, cpu_callin_map);
+ cpumask_set_cpu(0, &cpu_callin_map);
local_cpu_data->loops_per_jiffy = loops_per_jiffy;
ia64_cpu_to_sapicid[0] = boot_cpu_id;
@@ -565,7 +565,7 @@
void smp_prepare_boot_cpu(void)
{
set_cpu_online(smp_processor_id(), true);
- cpu_set(smp_processor_id(), cpu_callin_map);
+ cpumask_set_cpu(smp_processor_id(), &cpu_callin_map);
set_numa_node(cpu_to_node_map[smp_processor_id()]);
per_cpu(cpu_state, smp_processor_id()) = CPU_ONLINE;
paravirt_post_smp_prepare_boot_cpu();
@@ -577,10 +577,10 @@
{
int i;
- for_each_cpu_mask(i, per_cpu(cpu_sibling_map, cpu))
- cpu_clear(cpu, per_cpu(cpu_sibling_map, i));
- for_each_cpu_mask(i, cpu_core_map[cpu])
- cpu_clear(cpu, cpu_core_map[i]);
+ for_each_cpu(i, &per_cpu(cpu_sibling_map, cpu))
+ cpumask_clear_cpu(cpu, &per_cpu(cpu_sibling_map, i));
+ for_each_cpu(i, &cpu_core_map[cpu])
+ cpumask_clear_cpu(cpu, &cpu_core_map[i]);
per_cpu(cpu_sibling_map, cpu) = cpu_core_map[cpu] = CPU_MASK_NONE;
}
@@ -592,12 +592,12 @@
if (cpu_data(cpu)->threads_per_core == 1 &&
cpu_data(cpu)->cores_per_socket == 1) {
- cpu_clear(cpu, cpu_core_map[cpu]);
- cpu_clear(cpu, per_cpu(cpu_sibling_map, cpu));
+ cpumask_clear_cpu(cpu, &cpu_core_map[cpu]);
+ cpumask_clear_cpu(cpu, &per_cpu(cpu_sibling_map, cpu));
return;
}
- last = (cpus_weight(cpu_core_map[cpu]) == 1 ? 1 : 0);
+ last = (cpumask_weight(&cpu_core_map[cpu]) == 1 ? 1 : 0);
/* remove it from all sibling map's */
clear_cpu_sibling_map(cpu);
@@ -673,7 +673,7 @@
remove_siblinginfo(cpu);
fixup_irqs();
local_flush_tlb_all();
- cpu_clear(cpu, cpu_callin_map);
+ cpumask_clear_cpu(cpu, &cpu_callin_map);
return 0;
}
@@ -718,11 +718,13 @@
for_each_online_cpu(i) {
if ((cpu_data(cpu)->socket_id == cpu_data(i)->socket_id)) {
- cpu_set(i, cpu_core_map[cpu]);
- cpu_set(cpu, cpu_core_map[i]);
+ cpumask_set_cpu(i, &cpu_core_map[cpu]);
+ cpumask_set_cpu(cpu, &cpu_core_map[i]);
if (cpu_data(cpu)->core_id == cpu_data(i)->core_id) {
- cpu_set(i, per_cpu(cpu_sibling_map, cpu));
- cpu_set(cpu, per_cpu(cpu_sibling_map, i));
+ cpumask_set_cpu(i,
+ &per_cpu(cpu_sibling_map, cpu));
+ cpumask_set_cpu(cpu,
+ &per_cpu(cpu_sibling_map, i));
}
}
}
@@ -742,7 +744,7 @@
* Already booted cpu? not valid anymore since we dont
* do idle loop tightspin anymore.
*/
- if (cpu_isset(cpu, cpu_callin_map))
+ if (cpumask_test_cpu(cpu, &cpu_callin_map))
return -EINVAL;
per_cpu(cpu_state, cpu) = CPU_UP_PREPARE;
@@ -753,8 +755,8 @@
if (cpu_data(cpu)->threads_per_core == 1 &&
cpu_data(cpu)->cores_per_socket == 1) {
- cpu_set(cpu, per_cpu(cpu_sibling_map, cpu));
- cpu_set(cpu, cpu_core_map[cpu]);
+ cpumask_set_cpu(cpu, &per_cpu(cpu_sibling_map, cpu));
+ cpumask_set_cpu(cpu, &cpu_core_map[cpu]);
return 0;
}
diff --git a/arch/ia64/kernel/topology.c b/arch/ia64/kernel/topology.c
index 965ab42..c01fe89 100644
--- a/arch/ia64/kernel/topology.c
+++ b/arch/ia64/kernel/topology.c
@@ -148,7 +148,7 @@
if (cpu_data(cpu)->threads_per_core <= 1 &&
cpu_data(cpu)->cores_per_socket <= 1) {
- cpu_set(cpu, this_leaf->shared_cpu_map);
+ cpumask_set_cpu(cpu, &this_leaf->shared_cpu_map);
return;
}
@@ -164,7 +164,7 @@
if (cpu_data(cpu)->socket_id == cpu_data(j)->socket_id
&& cpu_data(j)->core_id == csi.log1_cid
&& cpu_data(j)->thread_id == csi.log1_tid)
- cpu_set(j, this_leaf->shared_cpu_map);
+ cpumask_set_cpu(j, &this_leaf->shared_cpu_map);
i++;
} while (i < num_shared &&
@@ -177,7 +177,7 @@
static void cache_shared_cpu_map_setup(unsigned int cpu,
struct cache_info * this_leaf)
{
- cpu_set(cpu, this_leaf->shared_cpu_map);
+ cpumask_set_cpu(cpu, &this_leaf->shared_cpu_map);
return;
}
#endif
diff --git a/arch/m32r/include/asm/io.h b/arch/m32r/include/asm/io.h
index 6e7787f..9cc00db 100644
--- a/arch/m32r/include/asm/io.h
+++ b/arch/m32r/include/asm/io.h
@@ -67,6 +67,7 @@
extern void iounmap(volatile void __iomem *addr);
#define ioremap_nocache(off,size) ioremap(off,size)
+#define ioremap_wc ioremap_nocache
/*
* IO bus memory addresses are also 1:1 with the physical address
diff --git a/arch/m32r/kernel/smpboot.c b/arch/m32r/kernel/smpboot.c
index bb21f4f..a468467 100644
--- a/arch/m32r/kernel/smpboot.c
+++ b/arch/m32r/kernel/smpboot.c
@@ -376,7 +376,7 @@
if (!cpumask_equal(&cpu_callin_map, cpu_online_mask))
BUG();
- for (cpu_id = 0 ; cpu_id < num_online_cpus() ; cpu_id++)
+ for_each_online_cpu(cpu_id)
show_cpu_info(cpu_id);
/*
diff --git a/arch/m68k/coldfire/m527x.c b/arch/m68k/coldfire/m527x.c
index 2ba4707..c0b3e28 100644
--- a/arch/m68k/coldfire/m527x.c
+++ b/arch/m68k/coldfire/m527x.c
@@ -92,7 +92,6 @@
static void __init m527x_fec_init(void)
{
- u16 par;
u8 v;
/* Set multi-function pins to ethernet mode for fec0 */
@@ -100,6 +99,8 @@
v = readb(MCFGPIO_PAR_FECI2C);
writeb(v | 0xf0, MCFGPIO_PAR_FECI2C);
#else
+ u16 par;
+
par = readw(MCFGPIO_PAR_FECI2C);
writew(par | 0xf00, MCFGPIO_PAR_FECI2C);
v = readb(MCFGPIO_PAR_FEC0HL);
diff --git a/arch/m68k/include/asm/m527xsim.h b/arch/m68k/include/asm/m527xsim.h
index 1bebbe7..2c648a0 100644
--- a/arch/m68k/include/asm/m527xsim.h
+++ b/arch/m68k/include/asm/m527xsim.h
@@ -103,8 +103,10 @@
*/
#define MCFFEC_BASE0 (MCF_IPSBAR + 0x1000)
#define MCFFEC_SIZE0 0x800
+#ifdef CONFIG_M5275
#define MCFFEC_BASE1 (MCF_IPSBAR + 0x1800)
#define MCFFEC_SIZE1 0x800
+#endif
/*
* QSPI module.
diff --git a/arch/m68k/include/asm/m68360_pram.h b/arch/m68k/include/asm/m68360_pram.h
index e6088bb..c0cbd96 100644
--- a/arch/m68k/include/asm/m68360_pram.h
+++ b/arch/m68k/include/asm/m68360_pram.h
@@ -170,7 +170,7 @@
unsigned short frmer; /* Rx framing error counter */
unsigned short nosec; /* Rx noise counter */
unsigned short brkec; /* Rx break character counter */
- unsigned short brkln; /* Reaceive break length */
+ unsigned short brkln; /* Receive break length */
unsigned short uaddr1; /* address character 1 */
unsigned short uaddr2; /* address character 2 */
@@ -338,7 +338,7 @@
unsigned long c_pres; /* preset CRC */
unsigned long c_mask; /* constant mask for CRC */
unsigned long crcec; /* CRC error counter */
- unsigned long alec; /* alighnment error counter */
+ unsigned long alec; /* alignment error counter */
unsigned long disfc; /* discard frame counter */
unsigned short pads; /* short frame PAD characters */
unsigned short ret_lim; /* retry limit threshold */
diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
index 2198837..f501665 100644
--- a/arch/mips/Kconfig
+++ b/arch/mips/Kconfig
@@ -1288,6 +1288,7 @@
select CPU_SUPPORTS_HUGEPAGES
select WEAK_ORDERING
select WEAK_REORDERING_BEYOND_LLSC
+ select ARCH_REQUIRE_GPIOLIB
help
The Loongson 3 processor implements the MIPS64R2 instruction
set with many extensions.
diff --git a/arch/mips/bcm63xx/irq.c b/arch/mips/bcm63xx/irq.c
index b94bf44d..e3e808a 100644
--- a/arch/mips/bcm63xx/irq.c
+++ b/arch/mips/bcm63xx/irq.c
@@ -58,9 +58,9 @@
#ifdef CONFIG_SMP
if (m)
- enable &= cpu_isset(cpu, *m);
+ enable &= cpumask_test_cpu(cpu, m);
else if (irqd_affinity_was_set(d))
- enable &= cpu_isset(cpu, *d->affinity);
+ enable &= cpumask_test_cpu(cpu, d->affinity);
#endif
return enable;
}
diff --git a/arch/mips/cavium-octeon/smp.c b/arch/mips/cavium-octeon/smp.c
index 8b1eeff..56f5d08 100644
--- a/arch/mips/cavium-octeon/smp.c
+++ b/arch/mips/cavium-octeon/smp.c
@@ -72,7 +72,7 @@
{
unsigned int i;
- for_each_cpu_mask(i, *mask)
+ for_each_cpu(i, mask)
octeon_send_ipi_single(i, action);
}
@@ -239,7 +239,7 @@
return -ENOTSUPP;
set_cpu_online(cpu, false);
- cpu_clear(cpu, cpu_callin_map);
+ cpumask_clear_cpu(cpu, &cpu_callin_map);
octeon_fixup_irqs();
flush_cache_all();
diff --git a/arch/mips/configs/lemote2f_defconfig b/arch/mips/configs/lemote2f_defconfig
index e51aad9..0cbc986 100644
--- a/arch/mips/configs/lemote2f_defconfig
+++ b/arch/mips/configs/lemote2f_defconfig
@@ -171,6 +171,7 @@
CONFIG_LEGACY_PTY_COUNT=16
CONFIG_HW_RANDOM=y
CONFIG_RTC=y
+CONFIG_GPIO_LOONGSON=y
CONFIG_THERMAL=y
CONFIG_MEDIA_SUPPORT=m
CONFIG_VIDEO_DEV=m
diff --git a/arch/mips/configs/loongson3_defconfig b/arch/mips/configs/loongson3_defconfig
index 7eabcd2..c844299 100644
--- a/arch/mips/configs/loongson3_defconfig
+++ b/arch/mips/configs/loongson3_defconfig
@@ -243,6 +243,7 @@
CONFIG_RAW_DRIVER=m
CONFIG_I2C_CHARDEV=y
CONFIG_I2C_PIIX4=y
+CONFIG_GPIO_LOONGSON=y
CONFIG_SENSORS_LM75=m
CONFIG_SENSORS_LM93=m
CONFIG_SENSORS_W83627HF=m
diff --git a/arch/mips/include/asm/mach-loongson/gpio.h b/arch/mips/include/asm/mach-loongson/gpio.h
index 211a7b7..b3b2169 100644
--- a/arch/mips/include/asm/mach-loongson/gpio.h
+++ b/arch/mips/include/asm/mach-loongson/gpio.h
@@ -1,8 +1,9 @@
/*
- * STLS2F GPIO Support
+ * Loongson GPIO Support
*
* Copyright (c) 2008 Richard Liu, STMicroelectronics <richard.liu@st.com>
* Copyright (c) 2008-2010 Arnaud Patard <apatard@mandriva.com>
+ * Copyright (c) 2014 Huacai Chen <chenhc@lemote.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -10,14 +11,14 @@
* (at your option) any later version.
*/
-#ifndef __STLS2F_GPIO_H
-#define __STLS2F_GPIO_H
+#ifndef __LOONGSON_GPIO_H
+#define __LOONGSON_GPIO_H
#include <asm-generic/gpio.h>
-extern void gpio_set_value(unsigned gpio, int value);
-extern int gpio_get_value(unsigned gpio);
-extern int gpio_cansleep(unsigned gpio);
+#define gpio_get_value __gpio_get_value
+#define gpio_set_value __gpio_set_value
+#define gpio_cansleep __gpio_cansleep
/* The chip can do interrupt
* but it has not been tested and doc not clear
@@ -32,4 +33,4 @@
return -EINVAL;
}
-#endif /* __STLS2F_GPIO_H */
+#endif /* __LOONGSON_GPIO_H */
diff --git a/arch/mips/include/asm/smp.h b/arch/mips/include/asm/smp.h
index eacf865..bb02fac 100644
--- a/arch/mips/include/asm/smp.h
+++ b/arch/mips/include/asm/smp.h
@@ -88,7 +88,7 @@
{
extern struct plat_smp_ops *mp_ops; /* private */
- mp_ops->send_ipi_mask(&cpumask_of_cpu(cpu), SMP_CALL_FUNCTION);
+ mp_ops->send_ipi_mask(cpumask_of(cpu), SMP_CALL_FUNCTION);
}
static inline void arch_send_call_function_ipi_mask(const struct cpumask *mask)
diff --git a/arch/mips/kernel/crash.c b/arch/mips/kernel/crash.c
index d212646..d434d5d 100644
--- a/arch/mips/kernel/crash.c
+++ b/arch/mips/kernel/crash.c
@@ -25,9 +25,9 @@
return;
local_irq_disable();
- if (!cpu_isset(cpu, cpus_in_crash))
+ if (!cpumask_test_cpu(cpu, &cpus_in_crash))
crash_save_cpu(regs, cpu);
- cpu_set(cpu, cpus_in_crash);
+ cpumask_set_cpu(cpu, &cpus_in_crash);
while (!atomic_read(&kexec_ready_to_reboot))
cpu_relax();
@@ -50,7 +50,7 @@
*/
pr_emerg("Sending IPI to other cpus...\n");
msecs = 10000;
- while ((cpus_weight(cpus_in_crash) < ncpus) && (--msecs > 0)) {
+ while ((cpumask_weight(&cpus_in_crash) < ncpus) && (--msecs > 0)) {
cpu_relax();
mdelay(1);
}
@@ -66,5 +66,5 @@
crashing_cpu = smp_processor_id();
crash_save_cpu(regs, crashing_cpu);
crash_kexec_prepare_cpus();
- cpu_set(crashing_cpu, cpus_in_crash);
+ cpumask_set_cpu(crashing_cpu, &cpus_in_crash);
}
diff --git a/arch/mips/kernel/mips-mt-fpaff.c b/arch/mips/kernel/mips-mt-fpaff.c
index 362bb37..3e4491a 100644
--- a/arch/mips/kernel/mips-mt-fpaff.c
+++ b/arch/mips/kernel/mips-mt-fpaff.c
@@ -114,8 +114,8 @@
/* Compute new global allowed CPU set if necessary */
ti = task_thread_info(p);
if (test_ti_thread_flag(ti, TIF_FPUBOUND) &&
- cpus_intersects(*new_mask, mt_fpu_cpumask)) {
- cpus_and(*effective_mask, *new_mask, mt_fpu_cpumask);
+ cpumask_intersects(new_mask, &mt_fpu_cpumask)) {
+ cpumask_and(effective_mask, new_mask, &mt_fpu_cpumask);
retval = set_cpus_allowed_ptr(p, effective_mask);
} else {
cpumask_copy(effective_mask, new_mask);
diff --git a/arch/mips/kernel/process.c b/arch/mips/kernel/process.c
index d295bd1..f2975d4 100644
--- a/arch/mips/kernel/process.c
+++ b/arch/mips/kernel/process.c
@@ -49,7 +49,7 @@
void arch_cpu_idle_dead(void)
{
/* What the heck is this check doing ? */
- if (!cpu_isset(smp_processor_id(), cpu_callin_map))
+ if (!cpumask_test_cpu(smp_processor_id(), &cpu_callin_map))
play_dead();
}
#endif
diff --git a/arch/mips/kernel/smp-bmips.c b/arch/mips/kernel/smp-bmips.c
index b8bd934..fd528d7 100644
--- a/arch/mips/kernel/smp-bmips.c
+++ b/arch/mips/kernel/smp-bmips.c
@@ -362,7 +362,7 @@
pr_info("SMP: CPU%d is offline\n", cpu);
set_cpu_online(cpu, false);
- cpu_clear(cpu, cpu_callin_map);
+ cpumask_clear_cpu(cpu, &cpu_callin_map);
clear_c0_status(IE_IRQ5);
local_flush_tlb_all();
diff --git a/arch/mips/kernel/smp-cmp.c b/arch/mips/kernel/smp-cmp.c
index e36a859..d5e0f94 100644
--- a/arch/mips/kernel/smp-cmp.c
+++ b/arch/mips/kernel/smp-cmp.c
@@ -66,7 +66,7 @@
#ifdef CONFIG_MIPS_MT_FPAFF
/* If we have an FPU, enroll ourselves in the FPU-full mask */
if (cpu_has_fpu)
- cpu_set(smp_processor_id(), mt_fpu_cpumask);
+ cpumask_set_cpu(smp_processor_id(), &mt_fpu_cpumask);
#endif /* CONFIG_MIPS_MT_FPAFF */
local_irq_enable();
@@ -110,7 +110,7 @@
#ifdef CONFIG_MIPS_MT_FPAFF
/* If we have an FPU, enroll ourselves in the FPU-full mask */
if (cpu_has_fpu)
- cpu_set(0, mt_fpu_cpumask);
+ cpumask_set_cpu(0, &mt_fpu_cpumask);
#endif /* CONFIG_MIPS_MT_FPAFF */
for (i = 1; i < NR_CPUS; i++) {
diff --git a/arch/mips/kernel/smp-cps.c b/arch/mips/kernel/smp-cps.c
index d5589be..7e011f9 100644
--- a/arch/mips/kernel/smp-cps.c
+++ b/arch/mips/kernel/smp-cps.c
@@ -290,7 +290,7 @@
#ifdef CONFIG_MIPS_MT_FPAFF
/* If we have an FPU, enroll ourselves in the FPU-full mask */
if (cpu_has_fpu)
- cpu_set(smp_processor_id(), mt_fpu_cpumask);
+ cpumask_set_cpu(smp_processor_id(), &mt_fpu_cpumask);
#endif /* CONFIG_MIPS_MT_FPAFF */
local_irq_enable();
@@ -313,7 +313,7 @@
atomic_sub(1 << cpu_vpe_id(&current_cpu_data), &core_cfg->vpe_mask);
smp_mb__after_atomic();
set_cpu_online(cpu, false);
- cpu_clear(cpu, cpu_callin_map);
+ cpumask_clear_cpu(cpu, &cpu_callin_map);
return 0;
}
diff --git a/arch/mips/kernel/smp-mt.c b/arch/mips/kernel/smp-mt.c
index 17ea705..86311a1 100644
--- a/arch/mips/kernel/smp-mt.c
+++ b/arch/mips/kernel/smp-mt.c
@@ -178,7 +178,7 @@
#ifdef CONFIG_MIPS_MT_FPAFF
/* If we have an FPU, enroll ourselves in the FPU-full mask */
if (cpu_has_fpu)
- cpu_set(smp_processor_id(), mt_fpu_cpumask);
+ cpumask_set_cpu(smp_processor_id(), &mt_fpu_cpumask);
#endif /* CONFIG_MIPS_MT_FPAFF */
local_irq_enable();
@@ -239,7 +239,7 @@
#ifdef CONFIG_MIPS_MT_FPAFF
/* If we have an FPU, enroll ourselves in the FPU-full mask */
if (cpu_has_fpu)
- cpu_set(0, mt_fpu_cpumask);
+ cpumask_set_cpu(0, &mt_fpu_cpumask);
#endif /* CONFIG_MIPS_MT_FPAFF */
if (!cpu_has_mipsmt)
return;
diff --git a/arch/mips/kernel/smp.c b/arch/mips/kernel/smp.c
index 5b020bd..193ace7 100644
--- a/arch/mips/kernel/smp.c
+++ b/arch/mips/kernel/smp.c
@@ -75,30 +75,30 @@
{
int i;
- cpu_set(cpu, cpu_sibling_setup_map);
+ cpumask_set_cpu(cpu, &cpu_sibling_setup_map);
if (smp_num_siblings > 1) {
- for_each_cpu_mask(i, cpu_sibling_setup_map) {
+ for_each_cpu(i, &cpu_sibling_setup_map) {
if (cpu_data[cpu].package == cpu_data[i].package &&
cpu_data[cpu].core == cpu_data[i].core) {
- cpu_set(i, cpu_sibling_map[cpu]);
- cpu_set(cpu, cpu_sibling_map[i]);
+ cpumask_set_cpu(i, &cpu_sibling_map[cpu]);
+ cpumask_set_cpu(cpu, &cpu_sibling_map[i]);
}
}
} else
- cpu_set(cpu, cpu_sibling_map[cpu]);
+ cpumask_set_cpu(cpu, &cpu_sibling_map[cpu]);
}
static inline void set_cpu_core_map(int cpu)
{
int i;
- cpu_set(cpu, cpu_core_setup_map);
+ cpumask_set_cpu(cpu, &cpu_core_setup_map);
- for_each_cpu_mask(i, cpu_core_setup_map) {
+ for_each_cpu(i, &cpu_core_setup_map) {
if (cpu_data[cpu].package == cpu_data[i].package) {
- cpu_set(i, cpu_core_map[cpu]);
- cpu_set(cpu, cpu_core_map[i]);
+ cpumask_set_cpu(i, &cpu_core_map[cpu]);
+ cpumask_set_cpu(cpu, &cpu_core_map[i]);
}
}
}
@@ -138,7 +138,7 @@
cpu = smp_processor_id();
cpu_data[cpu].udelay_val = loops_per_jiffy;
- cpu_set(cpu, cpu_coherent_mask);
+ cpumask_set_cpu(cpu, &cpu_coherent_mask);
notify_cpu_starting(cpu);
set_cpu_online(cpu, true);
@@ -146,7 +146,7 @@
set_cpu_sibling_map(cpu);
set_cpu_core_map(cpu);
- cpu_set(cpu, cpu_callin_map);
+ cpumask_set_cpu(cpu, &cpu_callin_map);
synchronise_count_slave(cpu);
@@ -208,7 +208,7 @@
{
set_cpu_possible(0, true);
set_cpu_online(0, true);
- cpu_set(0, cpu_callin_map);
+ cpumask_set_cpu(0, &cpu_callin_map);
}
int __cpu_up(unsigned int cpu, struct task_struct *tidle)
@@ -218,7 +218,7 @@
/*
* Trust is futile. We should really have timeouts ...
*/
- while (!cpu_isset(cpu, cpu_callin_map))
+ while (!cpumask_test_cpu(cpu, &cpu_callin_map))
udelay(100);
synchronise_count_master(cpu);
diff --git a/arch/mips/kernel/traps.c b/arch/mips/kernel/traps.c
index e334c64..ba32e48 100644
--- a/arch/mips/kernel/traps.c
+++ b/arch/mips/kernel/traps.c
@@ -1153,13 +1153,13 @@
* restricted the allowed set to exclude any CPUs with FPUs,
* we'll skip the procedure.
*/
- if (cpus_intersects(current->cpus_allowed, mt_fpu_cpumask)) {
+ if (cpumask_intersects(&current->cpus_allowed, &mt_fpu_cpumask)) {
cpumask_t tmask;
current->thread.user_cpus_allowed
= current->cpus_allowed;
- cpus_and(tmask, current->cpus_allowed,
- mt_fpu_cpumask);
+ cpumask_and(&tmask, &current->cpus_allowed,
+ &mt_fpu_cpumask);
set_cpus_allowed_ptr(current, &tmask);
set_thread_flag(TIF_FPUBOUND);
}
diff --git a/arch/mips/loongson/common/Makefile b/arch/mips/loongson/common/Makefile
index d87e033..e70c33f 100644
--- a/arch/mips/loongson/common/Makefile
+++ b/arch/mips/loongson/common/Makefile
@@ -4,7 +4,6 @@
obj-y += setup.o init.o cmdline.o env.o time.o reset.o irq.o \
bonito-irq.o mem.o machtype.o platform.o
-obj-$(CONFIG_GPIOLIB) += gpio.o
obj-$(CONFIG_PCI) += pci.o
#
diff --git a/arch/mips/loongson/common/gpio.c b/arch/mips/loongson/common/gpio.c
deleted file mode 100644
index 29dbaa2..0000000
--- a/arch/mips/loongson/common/gpio.c
+++ /dev/null
@@ -1,139 +0,0 @@
-/*
- * STLS2F GPIO Support
- *
- * Copyright (c) 2008 Richard Liu, STMicroelectronics <richard.liu@st.com>
- * Copyright (c) 2008-2010 Arnaud Patard <apatard@mandriva.com>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- */
-
-#include <linux/kernel.h>
-#include <linux/init.h>
-#include <linux/module.h>
-#include <linux/spinlock.h>
-#include <linux/err.h>
-#include <asm/types.h>
-#include <loongson.h>
-#include <linux/gpio.h>
-
-#define STLS2F_N_GPIO 4
-#define STLS2F_GPIO_IN_OFFSET 16
-
-static DEFINE_SPINLOCK(gpio_lock);
-
-int gpio_get_value(unsigned gpio)
-{
- u32 val;
- u32 mask;
-
- if (gpio >= STLS2F_N_GPIO)
- return __gpio_get_value(gpio);
-
- mask = 1 << (gpio + STLS2F_GPIO_IN_OFFSET);
- spin_lock(&gpio_lock);
- val = LOONGSON_GPIODATA;
- spin_unlock(&gpio_lock);
-
- return (val & mask) != 0;
-}
-EXPORT_SYMBOL(gpio_get_value);
-
-void gpio_set_value(unsigned gpio, int state)
-{
- u32 val;
- u32 mask;
-
- if (gpio >= STLS2F_N_GPIO) {
- __gpio_set_value(gpio, state);
- return ;
- }
-
- mask = 1 << gpio;
-
- spin_lock(&gpio_lock);
- val = LOONGSON_GPIODATA;
- if (state)
- val |= mask;
- else
- val &= (~mask);
- LOONGSON_GPIODATA = val;
- spin_unlock(&gpio_lock);
-}
-EXPORT_SYMBOL(gpio_set_value);
-
-int gpio_cansleep(unsigned gpio)
-{
- if (gpio < STLS2F_N_GPIO)
- return 0;
- else
- return __gpio_cansleep(gpio);
-}
-EXPORT_SYMBOL(gpio_cansleep);
-
-static int ls2f_gpio_direction_input(struct gpio_chip *chip, unsigned gpio)
-{
- u32 temp;
- u32 mask;
-
- if (gpio >= STLS2F_N_GPIO)
- return -EINVAL;
-
- spin_lock(&gpio_lock);
- mask = 1 << gpio;
- temp = LOONGSON_GPIOIE;
- temp |= mask;
- LOONGSON_GPIOIE = temp;
- spin_unlock(&gpio_lock);
-
- return 0;
-}
-
-static int ls2f_gpio_direction_output(struct gpio_chip *chip,
- unsigned gpio, int level)
-{
- u32 temp;
- u32 mask;
-
- if (gpio >= STLS2F_N_GPIO)
- return -EINVAL;
-
- gpio_set_value(gpio, level);
- spin_lock(&gpio_lock);
- mask = 1 << gpio;
- temp = LOONGSON_GPIOIE;
- temp &= (~mask);
- LOONGSON_GPIOIE = temp;
- spin_unlock(&gpio_lock);
-
- return 0;
-}
-
-static int ls2f_gpio_get_value(struct gpio_chip *chip, unsigned gpio)
-{
- return gpio_get_value(gpio);
-}
-
-static void ls2f_gpio_set_value(struct gpio_chip *chip,
- unsigned gpio, int value)
-{
- gpio_set_value(gpio, value);
-}
-
-static struct gpio_chip ls2f_chip = {
- .label = "ls2f",
- .direction_input = ls2f_gpio_direction_input,
- .get = ls2f_gpio_get_value,
- .direction_output = ls2f_gpio_direction_output,
- .set = ls2f_gpio_set_value,
- .base = 0,
- .ngpio = STLS2F_N_GPIO,
-};
-
-static int __init ls2f_gpio_setup(void)
-{
- return gpiochip_add(&ls2f_chip);
-}
-arch_initcall(ls2f_gpio_setup);
diff --git a/arch/mips/loongson/loongson-3/numa.c b/arch/mips/loongson/loongson-3/numa.c
index 6cae0e7..12d14ed 100644
--- a/arch/mips/loongson/loongson-3/numa.c
+++ b/arch/mips/loongson/loongson-3/numa.c
@@ -233,7 +233,7 @@
if (node_online(node)) {
szmem(node);
node_mem_init(node);
- cpus_clear(__node_data[(node)]->cpumask);
+ cpumask_clear(&__node_data[(node)]->cpumask);
}
}
for (cpu = 0; cpu < loongson_sysconf.nr_cpus; cpu++) {
@@ -244,7 +244,7 @@
if (loongson_sysconf.reserved_cpus_mask & (1<<cpu))
continue;
- cpu_set(active_cpu, __node_data[(node)]->cpumask);
+ cpumask_set_cpu(active_cpu, &__node_data[(node)]->cpumask);
pr_info("NUMA: set cpumask cpu %d on node %d\n", active_cpu, node);
active_cpu++;
diff --git a/arch/mips/loongson/loongson-3/smp.c b/arch/mips/loongson/loongson-3/smp.c
index e2eb688b..e3c68b5 100644
--- a/arch/mips/loongson/loongson-3/smp.c
+++ b/arch/mips/loongson/loongson-3/smp.c
@@ -408,7 +408,7 @@
return -EBUSY;
set_cpu_online(cpu, false);
- cpu_clear(cpu, cpu_callin_map);
+ cpumask_clear_cpu(cpu, &cpu_callin_map);
local_irq_save(flags);
fixup_irqs();
local_irq_restore(flags);
diff --git a/arch/mips/mti-malta/malta-init.c b/arch/mips/mti-malta/malta-init.c
index 6849f53..cec3e18 100644
--- a/arch/mips/mti-malta/malta-init.c
+++ b/arch/mips/mti-malta/malta-init.c
@@ -14,7 +14,7 @@
#include <linux/init.h>
#include <linux/string.h>
#include <linux/kernel.h>
-#include <linux/serial_8250.h>
+#include <linux/serial_core.h>
#include <asm/cacheflush.h>
#include <asm/smp-ops.h>
@@ -75,7 +75,7 @@
if ((strstr(fw_getcmdline(), "earlycon=")) == NULL) {
sprintf(console_string, "uart8250,io,0x3f8,%d%c%c", baud,
parity, bits);
- setup_early_serial8250_console(console_string);
+ setup_earlycon(console_string);
}
if ((strstr(fw_getcmdline(), "console=")) == NULL) {
diff --git a/arch/mips/paravirt/paravirt-smp.c b/arch/mips/paravirt/paravirt-smp.c
index 0164b0c..42181c7 100644
--- a/arch/mips/paravirt/paravirt-smp.c
+++ b/arch/mips/paravirt/paravirt-smp.c
@@ -75,7 +75,7 @@
{
unsigned int cpu;
- for_each_cpu_mask(cpu, *mask)
+ for_each_cpu(cpu, mask)
paravirt_send_ipi_single(cpu, action);
}
diff --git a/arch/mips/sgi-ip27/ip27-init.c b/arch/mips/sgi-ip27/ip27-init.c
index ee736bd..570098b 100644
--- a/arch/mips/sgi-ip27/ip27-init.c
+++ b/arch/mips/sgi-ip27/ip27-init.c
@@ -60,7 +60,7 @@
nasid_t nasid = COMPACT_TO_NASID_NODEID(cnode);
int i;
- cpu_set(smp_processor_id(), hub->h_cpus);
+ cpumask_set_cpu(smp_processor_id(), &hub->h_cpus);
if (test_and_set_bit(cnode, hub_init_mask))
return;
diff --git a/arch/mips/sgi-ip27/ip27-klnuma.c b/arch/mips/sgi-ip27/ip27-klnuma.c
index ecbb62f..bda90cf 100644
--- a/arch/mips/sgi-ip27/ip27-klnuma.c
+++ b/arch/mips/sgi-ip27/ip27-klnuma.c
@@ -29,8 +29,8 @@
void __init setup_replication_mask(void)
{
/* Set only the master cnode's bit. The master cnode is always 0. */
- cpus_clear(ktext_repmask);
- cpu_set(0, ktext_repmask);
+ cpumask_clear(&ktext_repmask);
+ cpumask_set_cpu(0, &ktext_repmask);
#ifdef CONFIG_REPLICATE_KTEXT
#ifndef CONFIG_MAPPED_KERNEL
@@ -43,7 +43,7 @@
if (cnode == 0)
continue;
/* Advertise that we have a copy of the kernel */
- cpu_set(cnode, ktext_repmask);
+ cpumask_set_cpu(cnode, &ktext_repmask);
}
}
#endif
@@ -99,7 +99,7 @@
client_nasid = COMPACT_TO_NASID_NODEID(cnode);
/* Check if this node should get a copy of the kernel */
- if (cpu_isset(cnode, ktext_repmask)) {
+ if (cpumask_test_cpu(cnode, &ktext_repmask)) {
server_nasid = client_nasid;
copy_kernel(server_nasid);
}
@@ -124,7 +124,7 @@
loadbase += 16777216;
#endif
offset = PAGE_ALIGN((unsigned long)(&_end)) - loadbase;
- if ((cnode == 0) || (cpu_isset(cnode, ktext_repmask)))
+ if ((cnode == 0) || (cpumask_test_cpu(cnode, &ktext_repmask)))
return TO_NODE(nasid, offset) >> PAGE_SHIFT;
else
return KDM_TO_PHYS(PAGE_ALIGN(SYMMON_STK_ADDR(nasid, 0))) >> PAGE_SHIFT;
diff --git a/arch/mips/sgi-ip27/ip27-memory.c b/arch/mips/sgi-ip27/ip27-memory.c
index 0b68469..8d0eb26 100644
--- a/arch/mips/sgi-ip27/ip27-memory.c
+++ b/arch/mips/sgi-ip27/ip27-memory.c
@@ -404,7 +404,7 @@
NODE_DATA(node)->node_start_pfn = start_pfn;
NODE_DATA(node)->node_spanned_pages = end_pfn - start_pfn;
- cpus_clear(hub_data(node)->h_cpus);
+ cpumask_clear(&hub_data(node)->h_cpus);
slot_freepfn += PFN_UP(sizeof(struct pglist_data) +
sizeof(struct hub_data));
diff --git a/arch/parisc/include/asm/Kbuild b/arch/parisc/include/asm/Kbuild
index 12b341d..7a4bcc3 100644
--- a/arch/parisc/include/asm/Kbuild
+++ b/arch/parisc/include/asm/Kbuild
@@ -20,6 +20,7 @@
generic-y += percpu.h
generic-y += poll.h
generic-y += preempt.h
+generic-y += scatterlist.h
generic-y += seccomp.h
generic-y += segment.h
generic-y += topology.h
diff --git a/arch/parisc/include/asm/pgalloc.h b/arch/parisc/include/asm/pgalloc.h
index 1ba2936..3a08eae 100644
--- a/arch/parisc/include/asm/pgalloc.h
+++ b/arch/parisc/include/asm/pgalloc.h
@@ -26,7 +26,7 @@
if (likely(pgd != NULL)) {
memset(pgd, 0, PAGE_SIZE<<PGD_ALLOC_ORDER);
-#if PT_NLEVELS == 3
+#if CONFIG_PGTABLE_LEVELS == 3
actual_pgd += PTRS_PER_PGD;
/* Populate first pmd with allocated memory. We mark it
* with PxD_FLAG_ATTACHED as a signal to the system that this
@@ -45,7 +45,7 @@
static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
{
-#if PT_NLEVELS == 3
+#if CONFIG_PGTABLE_LEVELS == 3
pgd -= PTRS_PER_PGD;
#endif
free_pages((unsigned long)pgd, PGD_ALLOC_ORDER);
@@ -102,7 +102,7 @@
static inline void
pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd, pte_t *pte)
{
-#if PT_NLEVELS == 3
+#if CONFIG_PGTABLE_LEVELS == 3
/* preserve the gateway marker if this is the beginning of
* the permanent pmd */
if(pmd_flag(*pmd) & PxD_FLAG_ATTACHED)
diff --git a/arch/parisc/include/asm/scatterlist.h b/arch/parisc/include/asm/scatterlist.h
deleted file mode 100644
index 8bf1f0d..0000000
--- a/arch/parisc/include/asm/scatterlist.h
+++ /dev/null
@@ -1,10 +0,0 @@
-#ifndef _ASM_PARISC_SCATTERLIST_H
-#define _ASM_PARISC_SCATTERLIST_H
-
-#include <asm/page.h>
-#include <asm/types.h>
-#include <asm-generic/scatterlist.h>
-
-#define sg_virt_addr(sg) ((unsigned long)sg_virt(sg))
-
-#endif /* _ASM_PARISC_SCATTERLIST_H */
diff --git a/arch/parisc/kernel/irq.c b/arch/parisc/kernel/irq.c
index cfe056f..f3191db 100644
--- a/arch/parisc/kernel/irq.c
+++ b/arch/parisc/kernel/irq.c
@@ -525,8 +525,8 @@
desc = irq_to_desc(irq);
cpumask_copy(&dest, desc->irq_data.affinity);
if (irqd_is_per_cpu(&desc->irq_data) &&
- !cpu_isset(smp_processor_id(), dest)) {
- int cpu = first_cpu(dest);
+ !cpumask_test_cpu(smp_processor_id(), &dest)) {
+ int cpu = cpumask_first(&dest);
printk(KERN_DEBUG "redirecting irq %d from CPU %d to %d\n",
irq, smp_processor_id(), cpu);
diff --git a/arch/parisc/kernel/pci-dma.c b/arch/parisc/kernel/pci-dma.c
index d87d1c4..ff834fd 100644
--- a/arch/parisc/kernel/pci-dma.c
+++ b/arch/parisc/kernel/pci-dma.c
@@ -482,7 +482,7 @@
BUG_ON(direction == DMA_NONE);
for (i = 0; i < nents; i++, sglist++ ) {
- unsigned long vaddr = sg_virt_addr(sglist);
+ unsigned long vaddr = (unsigned long)sg_virt(sglist);
sg_dma_address(sglist) = (dma_addr_t) virt_to_phys(vaddr);
sg_dma_len(sglist) = sglist->length;
flush_kernel_dcache_range(vaddr, sglist->length);
@@ -502,7 +502,7 @@
/* once we do combining we'll need to use phys_to_virt(sg_dma_address(sglist)) */
for (i = 0; i < nents; i++, sglist++ )
- flush_kernel_dcache_range(sg_virt_addr(sglist), sglist->length);
+ flush_kernel_vmap_range(sg_virt(sglist), sglist->length);
return;
}
@@ -527,7 +527,7 @@
/* once we do combining we'll need to use phys_to_virt(sg_dma_address(sglist)) */
for (i = 0; i < nents; i++, sglist++ )
- flush_kernel_dcache_range(sg_virt_addr(sglist), sglist->length);
+ flush_kernel_vmap_range(sg_virt(sglist), sglist->length);
}
static void pa11_dma_sync_sg_for_device(struct device *dev, struct scatterlist *sglist, int nents, enum dma_data_direction direction)
@@ -537,7 +537,7 @@
/* once we do combining we'll need to use phys_to_virt(sg_dma_address(sglist)) */
for (i = 0; i < nents; i++, sglist++ )
- flush_kernel_dcache_range(sg_virt_addr(sglist), sglist->length);
+ flush_kernel_vmap_range(sg_virt(sglist), sglist->length);
}
struct hppa_dma_ops pcxl_dma_ops = {
diff --git a/arch/powerpc/include/asm/cputhreads.h b/arch/powerpc/include/asm/cputhreads.h
index 4c8ad59..5be6c47 100644
--- a/arch/powerpc/include/asm/cputhreads.h
+++ b/arch/powerpc/include/asm/cputhreads.h
@@ -25,7 +25,7 @@
#define threads_per_core 1
#define threads_per_subcore 1
#define threads_shift 0
-#define threads_core_mask (CPU_MASK_CPU0)
+#define threads_core_mask (*get_cpu_mask(0))
#endif
/* cpu_thread_mask_to_cores - Return a cpumask of one per cores
diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
index de2726a..8e58c61 100644
--- a/arch/s390/Kconfig
+++ b/arch/s390/Kconfig
@@ -115,7 +115,7 @@
select HAVE_ARCH_SECCOMP_FILTER
select HAVE_ARCH_TRACEHOOK
select HAVE_ARCH_TRANSPARENT_HUGEPAGE
- select HAVE_BPF_JIT if PACK_STACK
+ select HAVE_BPF_JIT if PACK_STACK && HAVE_MARCH_Z9_109_FEATURES
select HAVE_CMPXCHG_DOUBLE
select HAVE_CMPXCHG_LOCAL
select HAVE_DEBUG_KMEMLEAK
diff --git a/arch/s390/include/asm/dma-mapping.h b/arch/s390/include/asm/dma-mapping.h
index 709955d..9d39596 100644
--- a/arch/s390/include/asm/dma-mapping.h
+++ b/arch/s390/include/asm/dma-mapping.h
@@ -42,7 +42,7 @@
static inline bool dma_capable(struct device *dev, dma_addr_t addr, size_t size)
{
if (!dev->dma_mask)
- return 0;
+ return false;
return addr + size - 1 <= *dev->dma_mask;
}
diff --git a/arch/s390/include/asm/pci.h b/arch/s390/include/asm/pci.h
index ef803c2..a648338 100644
--- a/arch/s390/include/asm/pci.h
+++ b/arch/s390/include/asm/pci.h
@@ -7,6 +7,7 @@
#define PCI_BAR_COUNT 6
#include <linux/pci.h>
+#include <linux/mutex.h>
#include <asm-generic/pci.h>
#include <asm-generic/pci-dma-compat.h>
#include <asm/pci_clp.h>
@@ -44,10 +45,6 @@
u64 rpcit_ops;
u64 dma_rbytes;
u64 dma_wbytes;
- /* software counters */
- atomic64_t allocated_pages;
- atomic64_t mapped_pages;
- atomic64_t unmapped_pages;
} __packed __aligned(16);
enum zpci_state {
@@ -80,6 +77,7 @@
u8 pft; /* pci function type */
u16 domain;
+ struct mutex lock;
u8 pfip[CLP_PFIP_NR_SEGMENTS]; /* pci function internal path */
u32 uid; /* user defined id */
u8 util_str[CLP_UTIL_STR_LEN]; /* utility string */
@@ -111,6 +109,10 @@
/* Function measurement block */
struct zpci_fmb *fmb;
u16 fmb_update; /* update interval */
+ /* software counters */
+ atomic64_t allocated_pages;
+ atomic64_t mapped_pages;
+ atomic64_t unmapped_pages;
enum pci_bus_speed max_bus_speed;
diff --git a/arch/s390/net/bpf_jit.S b/arch/s390/net/bpf_jit.S
index ba44c9f..a1c917d 100644
--- a/arch/s390/net/bpf_jit.S
+++ b/arch/s390/net/bpf_jit.S
@@ -1,134 +1,115 @@
/*
* BPF Jit compiler for s390, help functions.
*
- * Copyright IBM Corp. 2012
+ * Copyright IBM Corp. 2012,2015
*
* Author(s): Martin Schwidefsky <schwidefsky@de.ibm.com>
+ * Michael Holzheu <holzheu@linux.vnet.ibm.com>
*/
+
#include <linux/linkage.h>
+#include "bpf_jit.h"
/*
* Calling convention:
- * registers %r2, %r6-%r8, %r10-%r11, %r13, %r15 are call saved
- * %r2: skb pointer
- * %r3: offset parameter
- * %r5: BPF A accumulator
- * %r8: return address
- * %r9: save register for skb pointer
- * %r10: skb->data
- * %r11: skb->len - skb->data_len (headlen)
- * %r12: BPF X accumulator
+ * registers %r7-%r10, %r11,%r13, and %r15 are call saved
+ *
+ * Input (64 bit):
+ * %r3 (%b2) = offset into skb data
+ * %r6 (%b5) = return address
+ * %r7 (%b6) = skb pointer
+ * %r12 = skb data pointer
+ *
+ * Output:
+ * %r14= %b0 = return value (read skb value)
+ *
+ * Work registers: %r2,%r4,%r5,%r14
*
* skb_copy_bits takes 4 parameters:
* %r2 = skb pointer
* %r3 = offset into skb data
* %r4 = pointer to temp buffer
* %r5 = length to copy
+ * Return value in %r2: 0 = ok
+ *
+ * bpf_internal_load_pointer_neg_helper takes 3 parameters:
+ * %r2 = skb pointer
+ * %r3 = offset into data
+ * %r4 = length to copy
+ * Return value in %r2: Pointer to data
*/
-#define SKBDATA %r8
- /* A = *(u32 *) (skb->data+K+X) */
-ENTRY(sk_load_word_ind)
- ar %r3,%r12 # offset += X
- bmr %r8 # < 0 -> return with cc
+#define SKF_MAX_NEG_OFF -0x200000 /* SKF_LL_OFF from filter.h */
- /* A = *(u32 *) (skb->data+K) */
-ENTRY(sk_load_word)
- llgfr %r1,%r3 # extend offset
- ahi %r3,4 # offset + 4
- clr %r11,%r3 # hlen <= offset + 4 ?
- jl sk_load_word_slow
- l %r5,0(%r1,%r10) # get word from skb
- xr %r1,%r1 # set cc to zero
- br %r8
+/*
+ * Load SIZE bytes from SKB
+ */
+#define sk_load_common(NAME, SIZE, LOAD) \
+ENTRY(sk_load_##NAME); \
+ ltgr %r3,%r3; /* Is offset negative? */ \
+ jl sk_load_##NAME##_slow_neg; \
+ENTRY(sk_load_##NAME##_pos); \
+ aghi %r3,SIZE; /* Offset + SIZE */ \
+ clg %r3,STK_OFF_HLEN(%r15); /* Offset + SIZE > hlen? */ \
+ jh sk_load_##NAME##_slow; \
+ LOAD %r14,-SIZE(%r3,%r12); /* Get data from skb */ \
+ b OFF_OK(%r6); /* Return */ \
+ \
+sk_load_##NAME##_slow:; \
+ lgr %r2,%r7; /* Arg1 = skb pointer */ \
+ aghi %r3,-SIZE; /* Arg2 = offset */ \
+ la %r4,STK_OFF_TMP(%r15); /* Arg3 = temp bufffer */ \
+ lghi %r5,SIZE; /* Arg4 = size */ \
+ brasl %r14,skb_copy_bits; /* Get data from skb */ \
+ LOAD %r14,STK_OFF_TMP(%r15); /* Load from temp bufffer */ \
+ ltgr %r2,%r2; /* Set cc to (%r2 != 0) */ \
+ br %r6; /* Return */
-sk_load_word_slow:
- lgr %r9,%r2 # save %r2
- lgr %r3,%r1 # offset
- la %r4,160(%r15) # pointer to temp buffer
- lghi %r5,4 # 4 bytes
- brasl %r14,skb_copy_bits # get data from skb
- l %r5,160(%r15) # load result from temp buffer
- ltgr %r2,%r2 # set cc to (%r2 != 0)
- lgr %r2,%r9 # restore %r2
- br %r8
+sk_load_common(word, 4, llgf) /* r14 = *(u32 *) (skb->data+offset) */
+sk_load_common(half, 2, llgh) /* r14 = *(u16 *) (skb->data+offset) */
- /* A = *(u16 *) (skb->data+K+X) */
-ENTRY(sk_load_half_ind)
- ar %r3,%r12 # offset += X
- bmr %r8 # < 0 -> return with cc
-
- /* A = *(u16 *) (skb->data+K) */
-ENTRY(sk_load_half)
- llgfr %r1,%r3 # extend offset
- ahi %r3,2 # offset + 2
- clr %r11,%r3 # hlen <= offset + 2 ?
- jl sk_load_half_slow
- llgh %r5,0(%r1,%r10) # get half from skb
- xr %r1,%r1 # set cc to zero
- br %r8
-
-sk_load_half_slow:
- lgr %r9,%r2 # save %r2
- lgr %r3,%r1 # offset
- la %r4,162(%r15) # pointer to temp buffer
- lghi %r5,2 # 2 bytes
- brasl %r14,skb_copy_bits # get data from skb
- xc 160(2,%r15),160(%r15)
- l %r5,160(%r15) # load result from temp buffer
- ltgr %r2,%r2 # set cc to (%r2 != 0)
- lgr %r2,%r9 # restore %r2
- br %r8
-
- /* A = *(u8 *) (skb->data+K+X) */
-ENTRY(sk_load_byte_ind)
- ar %r3,%r12 # offset += X
- bmr %r8 # < 0 -> return with cc
-
- /* A = *(u8 *) (skb->data+K) */
+/*
+ * Load 1 byte from SKB (optimized version)
+ */
+ /* r14 = *(u8 *) (skb->data+offset) */
ENTRY(sk_load_byte)
- llgfr %r1,%r3 # extend offset
- clr %r11,%r3 # hlen < offset ?
- jle sk_load_byte_slow
- lhi %r5,0
- ic %r5,0(%r1,%r10) # get byte from skb
- xr %r1,%r1 # set cc to zero
- br %r8
+ ltgr %r3,%r3 # Is offset negative?
+ jl sk_load_byte_slow_neg
+ENTRY(sk_load_byte_pos)
+ clg %r3,STK_OFF_HLEN(%r15) # Offset >= hlen?
+ jnl sk_load_byte_slow
+ llgc %r14,0(%r3,%r12) # Get byte from skb
+ b OFF_OK(%r6) # Return OK
sk_load_byte_slow:
- lgr %r9,%r2 # save %r2
- lgr %r3,%r1 # offset
- la %r4,163(%r15) # pointer to temp buffer
- lghi %r5,1 # 1 byte
- brasl %r14,skb_copy_bits # get data from skb
- xc 160(3,%r15),160(%r15)
- l %r5,160(%r15) # load result from temp buffer
- ltgr %r2,%r2 # set cc to (%r2 != 0)
- lgr %r2,%r9 # restore %r2
- br %r8
+ lgr %r2,%r7 # Arg1 = skb pointer
+ # Arg2 = offset
+ la %r4,STK_OFF_TMP(%r15) # Arg3 = pointer to temp buffer
+ lghi %r5,1 # Arg4 = size (1 byte)
+ brasl %r14,skb_copy_bits # Get data from skb
+ llgc %r14,STK_OFF_TMP(%r15) # Load result from temp buffer
+ ltgr %r2,%r2 # Set cc to (%r2 != 0)
+ br %r6 # Return cc
- /* X = (*(u8 *)(skb->data+K) & 0xf) << 2 */
-ENTRY(sk_load_byte_msh)
- llgfr %r1,%r3 # extend offset
- clr %r11,%r3 # hlen < offset ?
- jle sk_load_byte_msh_slow
- lhi %r12,0
- ic %r12,0(%r1,%r10) # get byte from skb
- nill %r12,0x0f
- sll %r12,2
- xr %r1,%r1 # set cc to zero
- br %r8
+#define sk_negative_common(NAME, SIZE, LOAD) \
+sk_load_##NAME##_slow_neg:; \
+ cgfi %r3,SKF_MAX_NEG_OFF; \
+ jl bpf_error; \
+ lgr %r2,%r7; /* Arg1 = skb pointer */ \
+ /* Arg2 = offset */ \
+ lghi %r4,SIZE; /* Arg3 = size */ \
+ brasl %r14,bpf_internal_load_pointer_neg_helper; \
+ ltgr %r2,%r2; \
+ jz bpf_error; \
+ LOAD %r14,0(%r2); /* Get data from pointer */ \
+ xr %r3,%r3; /* Set cc to zero */ \
+ br %r6; /* Return cc */
-sk_load_byte_msh_slow:
- lgr %r9,%r2 # save %r2
- lgr %r3,%r1 # offset
- la %r4,163(%r15) # pointer to temp buffer
- lghi %r5,1 # 1 byte
- brasl %r14,skb_copy_bits # get data from skb
- xc 160(3,%r15),160(%r15)
- l %r12,160(%r15) # load result from temp buffer
- nill %r12,0x0f
- sll %r12,2
- ltgr %r2,%r2 # set cc to (%r2 != 0)
- lgr %r2,%r9 # restore %r2
- br %r8
+sk_negative_common(word, 4, llgf)
+sk_negative_common(half, 2, llgh)
+sk_negative_common(byte, 1, llgc)
+
+bpf_error:
+# force a return 0 from jit handler
+ ltgr %r15,%r15 # Set condition code
+ br %r6
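In C terms, the positive-offset fast/slow paths generated by sk_load_common() above behave roughly like the sketch below. The helper name and error convention are illustrative only; skb_headlen() and skb_copy_bits() are the real kernel functions involved:

#include <linux/skbuff.h>
#include <linux/string.h>

/* Illustrative C equivalent of sk_load_word's positive-offset path */
static u32 sk_load_word_equiv(const struct sk_buff *skb, int offset, int *err)
{
	u32 val;

	if (offset + 4 <= skb_headlen(skb)) {		/* fast path: word lies in linear data */
		memcpy(&val, skb->data + offset, 4);
		*err = 0;
		return val;
	}
	*err = skb_copy_bits(skb, offset, &val, 4);	/* slow path: copy via helper */
	return val;
}

Negative offsets take a third path (sk_load_*_slow_neg) that calls bpf_internal_load_pointer_neg_helper() and reports failure through the condition code set by bpf_error.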
diff --git a/arch/s390/net/bpf_jit.h b/arch/s390/net/bpf_jit.h
new file mode 100644
index 0000000..ba8593a5
--- /dev/null
+++ b/arch/s390/net/bpf_jit.h
@@ -0,0 +1,58 @@
+/*
+ * BPF Jit compiler defines
+ *
+ * Copyright IBM Corp. 2012,2015
+ *
+ * Author(s): Martin Schwidefsky <schwidefsky@de.ibm.com>
+ * Michael Holzheu <holzheu@linux.vnet.ibm.com>
+ */
+
+#ifndef __ARCH_S390_NET_BPF_JIT_H
+#define __ARCH_S390_NET_BPF_JIT_H
+
+#ifndef __ASSEMBLY__
+
+#include <linux/filter.h>
+#include <linux/types.h>
+
+extern u8 sk_load_word_pos[], sk_load_half_pos[], sk_load_byte_pos[];
+extern u8 sk_load_word[], sk_load_half[], sk_load_byte[];
+
+#endif /* __ASSEMBLY__ */
+
+/*
+ * Stackframe layout (packed stack):
+ *
+ * ^ high
+ * +---------------+ |
+ * | old backchain | |
+ * +---------------+ |
+ * | r15 - r6 | |
+ * BFP -> +===============+ |
+ * | | |
+ * | BPF stack | |
+ * | | |
+ * +---------------+ |
+ * | 8 byte hlen | |
+ * R15+168 -> +---------------+ |
+ * | 4 byte align | |
+ * +---------------+ |
+ * | 4 byte temp | |
+ * | for bpf_jit.S | |
+ * R15+160 -> +---------------+ |
+ * | new backchain | |
+ * R15+152 -> +---------------+ |
+ * | + 152 byte SA | |
+ * R15 -> +---------------+ + low
+ *
+ * We get 160 bytes stack space from calling function, but only use
+ * 11 * 8 byte (old backchain + r15 - r6) for storing registers.
+ */
+#define STK_OFF (MAX_BPF_STACK + 8 + 4 + 4 + (160 - 11 * 8))
+#define STK_OFF_TMP 160 /* Offset of tmp buffer on stack */
+#define STK_OFF_HLEN 168 /* Offset of SKB header length on stack */
+
+/* Offset to skip condition code check */
+#define OFF_OK 4
+
+#endif /* __ARCH_S390_NET_BPF_JIT_H */
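Assuming MAX_BPF_STACK is 512 bytes (its value when this series went in), the STK_OFF formula above works out as:

	STK_OFF = MAX_BPF_STACK + 8 + 4 + 4 + (160 - 11 * 8)
	        = 512 + 8 (hlen) + 4 (align) + 4 (tmp) + 72 (unused part of the caller-provided 160-byte area)
	        = 600 bytes

so the prologue lowers %r15 by 600 bytes whenever the program touches the BPF stack, calls C helpers, or accesses skb data (SEEN_STACK).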
diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
index bbd1981..7690dc8 100644
--- a/arch/s390/net/bpf_jit_comp.c
+++ b/arch/s390/net/bpf_jit_comp.c
@@ -1,817 +1,1209 @@
/*
* BPF Jit compiler for s390.
*
- * Copyright IBM Corp. 2012
+ * Minimum build requirements:
+ *
+ * - HAVE_MARCH_Z196_FEATURES: laal, laalg
+ * - HAVE_MARCH_Z10_FEATURES: msfi, cgrj, clgrj
+ * - HAVE_MARCH_Z9_109_FEATURES: alfi, llilf, clfi, oilf, nilf
+ * - PACK_STACK
+ * - 64BIT
+ *
+ * Copyright IBM Corp. 2012,2015
*
* Author(s): Martin Schwidefsky <schwidefsky@de.ibm.com>
+ * Michael Holzheu <holzheu@linux.vnet.ibm.com>
*/
+
+#define KMSG_COMPONENT "bpf_jit"
+#define pr_fmt(fmt) KMSG_COMPONENT ": " fmt
+
#include <linux/netdevice.h>
-#include <linux/if_vlan.h>
#include <linux/filter.h>
#include <linux/init.h>
#include <asm/cacheflush.h>
-#include <asm/facility.h>
#include <asm/dis.h>
+#include "bpf_jit.h"
-/*
- * Conventions:
- * %r2 = skb pointer
- * %r3 = offset parameter
- * %r4 = scratch register / length parameter
- * %r5 = BPF A accumulator
- * %r8 = return address
- * %r9 = save register for skb pointer
- * %r10 = skb->data
- * %r11 = skb->len - skb->data_len (headlen)
- * %r12 = BPF X accumulator
- * %r13 = literal pool pointer
- * 0(%r15) - 63(%r15) scratch memory array with BPF_MEMWORDS
- */
int bpf_jit_enable __read_mostly;
-/*
- * assembly code in arch/x86/net/bpf_jit.S
- */
-extern u8 sk_load_word[], sk_load_half[], sk_load_byte[], sk_load_byte_msh[];
-extern u8 sk_load_word_ind[], sk_load_half_ind[], sk_load_byte_ind[];
-
struct bpf_jit {
- unsigned int seen;
- u8 *start;
- u8 *prg;
- u8 *mid;
- u8 *lit;
- u8 *end;
- u8 *base_ip;
- u8 *ret0_ip;
- u8 *exit_ip;
- unsigned int off_load_word;
- unsigned int off_load_half;
- unsigned int off_load_byte;
- unsigned int off_load_bmsh;
- unsigned int off_load_iword;
- unsigned int off_load_ihalf;
- unsigned int off_load_ibyte;
+ u32 seen; /* Flags to remember seen eBPF instructions */
+ u32 seen_reg[16]; /* Array to remember which registers are used */
+ u32 *addrs; /* Array with relative instruction addresses */
+ u8 *prg_buf; /* Start of program */
+ int size; /* Size of program and literal pool */
+ int size_prg; /* Size of program */
+ int prg; /* Current position in program */
+ int lit_start; /* Start of literal pool */
+ int lit; /* Current position in literal pool */
+ int base_ip; /* Base address for literal pool */
+ int ret0_ip; /* Address of return 0 */
+ int exit_ip; /* Address of exit */
};
#define BPF_SIZE_MAX 4096 /* Max size for program */
-#define SEEN_DATAREF 1 /* might call external helpers */
-#define SEEN_XREG 2 /* ebx is used */
-#define SEEN_MEM 4 /* use mem[] for temporary storage */
-#define SEEN_RET0 8 /* pc_ret0 points to a valid return 0 */
-#define SEEN_LITERAL 16 /* code uses literals */
-#define SEEN_LOAD_WORD 32 /* code uses sk_load_word */
-#define SEEN_LOAD_HALF 64 /* code uses sk_load_half */
-#define SEEN_LOAD_BYTE 128 /* code uses sk_load_byte */
-#define SEEN_LOAD_BMSH 256 /* code uses sk_load_byte_msh */
-#define SEEN_LOAD_IWORD 512 /* code uses sk_load_word_ind */
-#define SEEN_LOAD_IHALF 1024 /* code uses sk_load_half_ind */
-#define SEEN_LOAD_IBYTE 2048 /* code uses sk_load_byte_ind */
+#define SEEN_SKB 1 /* skb access */
+#define SEEN_MEM 2 /* use mem[] for temporary storage */
+#define SEEN_RET0 4 /* ret0_ip points to a valid return 0 */
+#define SEEN_LITERAL 8 /* code uses literals */
+#define SEEN_FUNC 16 /* calls C functions */
+#define SEEN_STACK (SEEN_FUNC | SEEN_MEM | SEEN_SKB)
-#define EMIT2(op) \
-({ \
- if (jit->prg + 2 <= jit->mid) \
- *(u16 *) jit->prg = op; \
- jit->prg += 2; \
-})
+/*
+ * s390 registers
+ */
+#define REG_W0 (__MAX_BPF_REG+0) /* Work register 1 (even) */
+#define REG_W1 (__MAX_BPF_REG+1) /* Work register 2 (odd) */
+#define REG_SKB_DATA (__MAX_BPF_REG+2) /* SKB data register */
+#define REG_L (__MAX_BPF_REG+3) /* Literal pool register */
+#define REG_15 (__MAX_BPF_REG+4) /* Register 15 */
+#define REG_0 REG_W0 /* Register 0 */
+#define REG_2 BPF_REG_1 /* Register 2 */
+#define REG_14 BPF_REG_0 /* Register 14 */
-#define EMIT4(op) \
-({ \
- if (jit->prg + 4 <= jit->mid) \
- *(u32 *) jit->prg = op; \
- jit->prg += 4; \
-})
+/*
+ * Mapping of BPF registers to s390 registers
+ */
+static const int reg2hex[] = {
+ /* Return code */
+ [BPF_REG_0] = 14,
+ /* Function parameters */
+ [BPF_REG_1] = 2,
+ [BPF_REG_2] = 3,
+ [BPF_REG_3] = 4,
+ [BPF_REG_4] = 5,
+ [BPF_REG_5] = 6,
+ /* Call saved registers */
+ [BPF_REG_6] = 7,
+ [BPF_REG_7] = 8,
+ [BPF_REG_8] = 9,
+ [BPF_REG_9] = 10,
+ /* BPF stack pointer */
+ [BPF_REG_FP] = 13,
+ /* SKB data pointer */
+ [REG_SKB_DATA] = 12,
+ /* Work registers for s390x backend */
+ [REG_W0] = 0,
+ [REG_W1] = 1,
+ [REG_L] = 11,
+ [REG_15] = 15,
+};
-#define EMIT4_DISP(op, disp) \
-({ \
- unsigned int __disp = (disp) & 0xfff; \
- EMIT4(op | __disp); \
-})
-
-#define EMIT4_IMM(op, imm) \
-({ \
- unsigned int __imm = (imm) & 0xffff; \
- EMIT4(op | __imm); \
-})
-
-#define EMIT4_PCREL(op, pcrel) \
-({ \
- long __pcrel = ((pcrel) >> 1) & 0xffff; \
- EMIT4(op | __pcrel); \
-})
-
-#define EMIT6(op1, op2) \
-({ \
- if (jit->prg + 6 <= jit->mid) { \
- *(u32 *) jit->prg = op1; \
- *(u16 *) (jit->prg + 4) = op2; \
- } \
- jit->prg += 6; \
-})
-
-#define EMIT6_DISP(op1, op2, disp) \
-({ \
- unsigned int __disp = (disp) & 0xfff; \
- EMIT6(op1 | __disp, op2); \
-})
-
-#define EMIT6_IMM(op, imm) \
-({ \
- unsigned int __imm = (imm); \
- EMIT6(op | (__imm >> 16), __imm & 0xffff); \
-})
-
-#define EMIT_CONST(val) \
-({ \
- unsigned int ret; \
- ret = (unsigned int) (jit->lit - jit->base_ip); \
- jit->seen |= SEEN_LITERAL; \
- if (jit->lit + 4 <= jit->end) \
- *(u32 *) jit->lit = val; \
- jit->lit += 4; \
- ret; \
-})
-
-#define EMIT_FN_CONST(bit, fn) \
-({ \
- unsigned int ret; \
- ret = (unsigned int) (jit->lit - jit->base_ip); \
- if (jit->seen & bit) { \
- jit->seen |= SEEN_LITERAL; \
- if (jit->lit + 8 <= jit->end) \
- *(void **) jit->lit = fn; \
- jit->lit += 8; \
- } \
- ret; \
-})
-
-static void bpf_jit_fill_hole(void *area, unsigned int size)
+static inline u32 reg(u32 dst_reg, u32 src_reg)
{
- /* Fill whole space with illegal instructions */
+ return reg2hex[dst_reg] << 4 | reg2hex[src_reg];
+}
+
+static inline u32 reg_high(u32 reg)
+{
+ return reg2hex[reg] << 4;
+}
+
+static inline void reg_set_seen(struct bpf_jit *jit, u32 b1)
+{
+ u32 r1 = reg2hex[b1];
+
+ if (!jit->seen_reg[r1] && r1 >= 6 && r1 <= 15)
+ jit->seen_reg[r1] = 1;
+}
+
+#define REG_SET_SEEN(b1) \
+({ \
+ reg_set_seen(jit, b1); \
+})
+
+#define REG_SEEN(b1) jit->seen_reg[reg2hex[(b1)]]
+
+/*
+ * EMIT macros for code generation
+ */
+
+#define _EMIT2(op) \
+({ \
+ if (jit->prg_buf) \
+ *(u16 *) (jit->prg_buf + jit->prg) = op; \
+ jit->prg += 2; \
+})
+
+#define EMIT2(op, b1, b2) \
+({ \
+ _EMIT2(op | reg(b1, b2)); \
+ REG_SET_SEEN(b1); \
+ REG_SET_SEEN(b2); \
+})
+
+#define _EMIT4(op) \
+({ \
+ if (jit->prg_buf) \
+ *(u32 *) (jit->prg_buf + jit->prg) = op; \
+ jit->prg += 4; \
+})
+
+#define EMIT4(op, b1, b2) \
+({ \
+ _EMIT4(op | reg(b1, b2)); \
+ REG_SET_SEEN(b1); \
+ REG_SET_SEEN(b2); \
+})
+
+#define EMIT4_RRF(op, b1, b2, b3) \
+({ \
+ _EMIT4(op | reg_high(b3) << 8 | reg(b1, b2)); \
+ REG_SET_SEEN(b1); \
+ REG_SET_SEEN(b2); \
+ REG_SET_SEEN(b3); \
+})
+
+#define _EMIT4_DISP(op, disp) \
+({ \
+ unsigned int __disp = (disp) & 0xfff; \
+ _EMIT4(op | __disp); \
+})
+
+#define EMIT4_DISP(op, b1, b2, disp) \
+({ \
+ _EMIT4_DISP(op | reg_high(b1) << 16 | \
+ reg_high(b2) << 8, disp); \
+ REG_SET_SEEN(b1); \
+ REG_SET_SEEN(b2); \
+})
+
+#define EMIT4_IMM(op, b1, imm) \
+({ \
+ unsigned int __imm = (imm) & 0xffff; \
+ _EMIT4(op | reg_high(b1) << 16 | __imm); \
+ REG_SET_SEEN(b1); \
+})
+
+#define EMIT4_PCREL(op, pcrel) \
+({ \
+ long __pcrel = ((pcrel) >> 1) & 0xffff; \
+ _EMIT4(op | __pcrel); \
+})
+
+#define _EMIT6(op1, op2) \
+({ \
+ if (jit->prg_buf) { \
+ *(u32 *) (jit->prg_buf + jit->prg) = op1; \
+ *(u16 *) (jit->prg_buf + jit->prg + 4) = op2; \
+ } \
+ jit->prg += 6; \
+})
+
+#define _EMIT6_DISP(op1, op2, disp) \
+({ \
+ unsigned int __disp = (disp) & 0xfff; \
+ _EMIT6(op1 | __disp, op2); \
+})
+
+#define EMIT6_DISP(op1, op2, b1, b2, b3, disp) \
+({ \
+ _EMIT6_DISP(op1 | reg(b1, b2) << 16 | \
+ reg_high(b3) << 8, op2, disp); \
+ REG_SET_SEEN(b1); \
+ REG_SET_SEEN(b2); \
+ REG_SET_SEEN(b3); \
+})
+
+#define _EMIT6_DISP_LH(op1, op2, disp) \
+({ \
+ unsigned int __disp_h = ((u32)disp) & 0xff000; \
+ unsigned int __disp_l = ((u32)disp) & 0x00fff; \
+ _EMIT6(op1 | __disp_l, op2 | __disp_h >> 4); \
+})
+
+#define EMIT6_DISP_LH(op1, op2, b1, b2, b3, disp) \
+({ \
+ _EMIT6_DISP_LH(op1 | reg(b1, b2) << 16 | \
+ reg_high(b3) << 8, op2, disp); \
+ REG_SET_SEEN(b1); \
+ REG_SET_SEEN(b2); \
+ REG_SET_SEEN(b3); \
+})
+
+#define EMIT6_PCREL(op1, op2, b1, b2, i, off, mask) \
+({ \
+ /* Branch instruction needs 6 bytes */ \
+ int rel = (addrs[i + off + 1] - (addrs[i + 1] - 6)) / 2;\
+ _EMIT6(op1 | reg(b1, b2) << 16 | rel, op2 | mask); \
+ REG_SET_SEEN(b1); \
+ REG_SET_SEEN(b2); \
+})
+
+#define _EMIT6_IMM(op, imm) \
+({ \
+ unsigned int __imm = (imm); \
+ _EMIT6(op | (__imm >> 16), __imm & 0xffff); \
+})
+
+#define EMIT6_IMM(op, b1, imm) \
+({ \
+ _EMIT6_IMM(op | reg_high(b1) << 16, imm); \
+ REG_SET_SEEN(b1); \
+})
+
+#define EMIT_CONST_U32(val) \
+({ \
+ unsigned int ret; \
+ ret = jit->lit - jit->base_ip; \
+ jit->seen |= SEEN_LITERAL; \
+ if (jit->prg_buf) \
+ *(u32 *) (jit->prg_buf + jit->lit) = (u32) val; \
+ jit->lit += 4; \
+ ret; \
+})
+
+#define EMIT_CONST_U64(val) \
+({ \
+ unsigned int ret; \
+ ret = jit->lit - jit->base_ip; \
+ jit->seen |= SEEN_LITERAL; \
+ if (jit->prg_buf) \
+ *(u64 *) (jit->prg_buf + jit->lit) = (u64) val; \
+ jit->lit += 8; \
+ ret; \
+})
+
+#define EMIT_ZERO(b1) \
+({ \
+ /* llgfr %dst,%dst (zero extend to 64 bit) */ \
+ EMIT4(0xb9160000, b1, b1); \
+ REG_SET_SEEN(b1); \
+})
+
+/*
+ * Fill whole space with illegal instructions
+ */
+static void jit_fill_hole(void *area, unsigned int size)
+{
memset(area, 0, size);
}
+/*
+ * Save registers from "rs" (register start) to "re" (register end) on stack
+ */
+static void save_regs(struct bpf_jit *jit, u32 rs, u32 re)
+{
+ u32 off = 72 + (rs - 6) * 8;
+
+ if (rs == re)
+ /* stg %rs,off(%r15) */
+ _EMIT6(0xe300f000 | rs << 20 | off, 0x0024);
+ else
+ /* stmg %rs,%re,off(%r15) */
+ _EMIT6_DISP(0xeb00f000 | rs << 20 | re << 16, 0x0024, off);
+}
+
+/*
+ * Restore registers from "rs" (register start) to "re" (register end) on stack
+ */
+static void restore_regs(struct bpf_jit *jit, u32 rs, u32 re)
+{
+ u32 off = 72 + (rs - 6) * 8;
+
+ if (jit->seen & SEEN_STACK)
+ off += STK_OFF;
+
+ if (rs == re)
+ /* lg %rs,off(%r15) */
+ _EMIT6(0xe300f000 | rs << 20 | off, 0x0004);
+ else
+ /* lmg %rs,%re,off(%r15) */
+ _EMIT6_DISP(0xeb00f000 | rs << 20 | re << 16, 0x0004, off);
+}
+
+/*
+ * Return first seen register (from start)
+ */
+static int get_start(struct bpf_jit *jit, int start)
+{
+ int i;
+
+ for (i = start; i <= 15; i++) {
+ if (jit->seen_reg[i])
+ return i;
+ }
+ return 0;
+}
+
+/*
+ * Return last seen register (from start) (gap >= 2)
+ */
+static int get_end(struct bpf_jit *jit, int start)
+{
+ int i;
+
+ for (i = start; i < 15; i++) {
+ if (!jit->seen_reg[i] && !jit->seen_reg[i + 1])
+ return i - 1;
+ }
+ return jit->seen_reg[15] ? 15 : 14;
+}
+
+#define REGS_SAVE 1
+#define REGS_RESTORE 0
+/*
+ * Save and restore clobbered registers (6-15) on stack.
+ * We save/restore registers in chunks with gap >= 2 registers.
+ */
+static void save_restore_regs(struct bpf_jit *jit, int op)
+{
+
+ int re = 6, rs;
+
+ do {
+ rs = get_start(jit, re);
+ if (!rs)
+ break;
+ re = get_end(jit, rs + 1);
+ if (op == REGS_SAVE)
+ save_regs(jit, rs, re);
+ else
+ restore_regs(jit, rs, re);
+ re++;
+ } while (re <= 15);
+}
+
+/*
+ * Emit function prologue
+ *
+ * Save registers and create stack frame if necessary.
+ * See stack frame layout desription in "bpf_jit.h"!
+ */
static void bpf_jit_prologue(struct bpf_jit *jit)
{
- /* Save registers and create stack frame if necessary */
- if (jit->seen & SEEN_DATAREF) {
- /* stmg %r8,%r15,88(%r15) */
- EMIT6(0xeb8ff058, 0x0024);
- /* lgr %r14,%r15 */
- EMIT4(0xb90400ef);
- /* aghi %r15,<offset> */
- EMIT4_IMM(0xa7fb0000, (jit->seen & SEEN_MEM) ? -112 : -80);
- /* stg %r14,152(%r15) */
- EMIT6(0xe3e0f098, 0x0024);
- } else if ((jit->seen & SEEN_XREG) && (jit->seen & SEEN_LITERAL))
- /* stmg %r12,%r13,120(%r15) */
- EMIT6(0xebcdf078, 0x0024);
- else if (jit->seen & SEEN_XREG)
- /* stg %r12,120(%r15) */
- EMIT6(0xe3c0f078, 0x0024);
- else if (jit->seen & SEEN_LITERAL)
- /* stg %r13,128(%r15) */
- EMIT6(0xe3d0f080, 0x0024);
-
+ /* Save registers */
+ save_restore_regs(jit, REGS_SAVE);
/* Setup literal pool */
if (jit->seen & SEEN_LITERAL) {
/* basr %r13,0 */
- EMIT2(0x0dd0);
+ EMIT2(0x0d00, REG_L, REG_0);
jit->base_ip = jit->prg;
}
- jit->off_load_word = EMIT_FN_CONST(SEEN_LOAD_WORD, sk_load_word);
- jit->off_load_half = EMIT_FN_CONST(SEEN_LOAD_HALF, sk_load_half);
- jit->off_load_byte = EMIT_FN_CONST(SEEN_LOAD_BYTE, sk_load_byte);
- jit->off_load_bmsh = EMIT_FN_CONST(SEEN_LOAD_BMSH, sk_load_byte_msh);
- jit->off_load_iword = EMIT_FN_CONST(SEEN_LOAD_IWORD, sk_load_word_ind);
- jit->off_load_ihalf = EMIT_FN_CONST(SEEN_LOAD_IHALF, sk_load_half_ind);
- jit->off_load_ibyte = EMIT_FN_CONST(SEEN_LOAD_IBYTE, sk_load_byte_ind);
-
- /* Filter needs to access skb data */
- if (jit->seen & SEEN_DATAREF) {
- /* l %r11,<len>(%r2) */
- EMIT4_DISP(0x58b02000, offsetof(struct sk_buff, len));
- /* s %r11,<data_len>(%r2) */
- EMIT4_DISP(0x5bb02000, offsetof(struct sk_buff, data_len));
- /* lg %r10,<data>(%r2) */
- EMIT6_DISP(0xe3a02000, 0x0004,
- offsetof(struct sk_buff, data));
+ /* Setup stack and backchain */
+ if (jit->seen & SEEN_STACK) {
+ /* lgr %bfp,%r15 (BPF frame pointer) */
+ EMIT4(0xb9040000, BPF_REG_FP, REG_15);
+ /* aghi %r15,-STK_OFF */
+ EMIT4_IMM(0xa70b0000, REG_15, -STK_OFF);
+ if (jit->seen & SEEN_FUNC)
+ /* stg %bfp,152(%r15) (backchain) */
+ EMIT6_DISP_LH(0xe3000000, 0x0024, BPF_REG_FP, REG_0,
+ REG_15, 152);
}
+ /*
+ * For SKB access %b1 contains the SKB pointer. For "bpf_jit.S"
+ * we store the SKB header length on the stack and the SKB data
+ * pointer in REG_SKB_DATA.
+ */
+ if (jit->seen & SEEN_SKB) {
+ /* Header length: llgf %w1,<len>(%b1) */
+ EMIT6_DISP_LH(0xe3000000, 0x0016, REG_W1, REG_0, BPF_REG_1,
+ offsetof(struct sk_buff, len));
+ /* s %w1,<data_len>(%b1) */
+ EMIT4_DISP(0x5b000000, REG_W1, BPF_REG_1,
+ offsetof(struct sk_buff, data_len));
+ /* stg %w1,STK_OFF_HLEN(%r0,%r15) */
+ EMIT6_DISP_LH(0xe3000000, 0x0024, REG_W1, REG_0, REG_15,
+ STK_OFF_HLEN);
+ /* lg %skb_data,data_off(%b1) */
+ EMIT6_DISP_LH(0xe3000000, 0x0004, REG_SKB_DATA, REG_0,
+ BPF_REG_1, offsetof(struct sk_buff, data));
+ }
+ /* BPF compatibility: clear A (%b7) and X (%b8) registers */
+ if (REG_SEEN(BPF_REG_7))
+ /* lghi %b7,0 */
+ EMIT4_IMM(0xa7090000, BPF_REG_7, 0);
+ if (REG_SEEN(BPF_REG_8))
+ /* lghi %b8,0 */
+ EMIT4_IMM(0xa7090000, BPF_REG_8, 0);
}
+/*
+ * Function epilogue
+ */
static void bpf_jit_epilogue(struct bpf_jit *jit)
{
/* Return 0 */
if (jit->seen & SEEN_RET0) {
jit->ret0_ip = jit->prg;
- /* lghi %r2,0 */
- EMIT4(0xa7290000);
+ /* lghi %b0,0 */
+ EMIT4_IMM(0xa7090000, BPF_REG_0, 0);
}
jit->exit_ip = jit->prg;
+ /* Load exit code: lgr %r2,%b0 */
+ EMIT4(0xb9040000, REG_2, BPF_REG_0);
/* Restore registers */
- if (jit->seen & SEEN_DATAREF)
- /* lmg %r8,%r15,<offset>(%r15) */
- EMIT6_DISP(0xeb8ff000, 0x0004,
- (jit->seen & SEEN_MEM) ? 200 : 168);
- else if ((jit->seen & SEEN_XREG) && (jit->seen & SEEN_LITERAL))
- /* lmg %r12,%r13,120(%r15) */
- EMIT6(0xebcdf078, 0x0004);
- else if (jit->seen & SEEN_XREG)
- /* lg %r12,120(%r15) */
- EMIT6(0xe3c0f078, 0x0004);
- else if (jit->seen & SEEN_LITERAL)
- /* lg %r13,128(%r15) */
- EMIT6(0xe3d0f080, 0x0004);
+ save_restore_regs(jit, REGS_RESTORE);
/* br %r14 */
- EMIT2(0x07fe);
+ _EMIT2(0x07fe);
}
/*
- * make sure we dont leak kernel information to user
+ * Compile one eBPF instruction into s390x code
*/
-static void bpf_jit_noleaks(struct bpf_jit *jit, struct sock_filter *filter)
+static int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp, int i)
{
- /* Clear temporary memory if (seen & SEEN_MEM) */
- if (jit->seen & SEEN_MEM)
- /* xc 0(64,%r15),0(%r15) */
- EMIT6(0xd73ff000, 0xf000);
- /* Clear X if (seen & SEEN_XREG) */
- if (jit->seen & SEEN_XREG)
- /* lhi %r12,0 */
- EMIT4(0xa7c80000);
- /* Clear A if the first register does not set it. */
- switch (filter[0].code) {
- case BPF_LD | BPF_W | BPF_ABS:
- case BPF_LD | BPF_H | BPF_ABS:
- case BPF_LD | BPF_B | BPF_ABS:
- case BPF_LD | BPF_W | BPF_LEN:
- case BPF_LD | BPF_W | BPF_IND:
- case BPF_LD | BPF_H | BPF_IND:
- case BPF_LD | BPF_B | BPF_IND:
- case BPF_LD | BPF_IMM:
- case BPF_LD | BPF_MEM:
- case BPF_MISC | BPF_TXA:
- case BPF_RET | BPF_K:
- /* first instruction sets A register */
+ struct bpf_insn *insn = &fp->insnsi[i];
+ int jmp_off, last, insn_count = 1;
+ unsigned int func_addr, mask;
+ u32 dst_reg = insn->dst_reg;
+ u32 src_reg = insn->src_reg;
+ u32 *addrs = jit->addrs;
+ s32 imm = insn->imm;
+ s16 off = insn->off;
+
+ switch (insn->code) {
+ /*
+ * BPF_MOV
+ */
+ case BPF_ALU | BPF_MOV | BPF_X: /* dst = (u32) src */
+ /* llgfr %dst,%src */
+ EMIT4(0xb9160000, dst_reg, src_reg);
break;
- default: /* A = 0 */
- /* lhi %r5,0 */
- EMIT4(0xa7580000);
+ case BPF_ALU64 | BPF_MOV | BPF_X: /* dst = src */
+ /* lgr %dst,%src */
+ EMIT4(0xb9040000, dst_reg, src_reg);
+ break;
+ case BPF_ALU | BPF_MOV | BPF_K: /* dst = (u32) imm */
+ /* llilf %dst,imm */
+ EMIT6_IMM(0xc00f0000, dst_reg, imm);
+ break;
+ case BPF_ALU64 | BPF_MOV | BPF_K: /* dst = imm */
+ /* lgfi %dst,imm */
+ EMIT6_IMM(0xc0010000, dst_reg, imm);
+ break;
+ /*
+ * BPF_LD 64
+ */
+ case BPF_LD | BPF_IMM | BPF_DW: /* dst = (u64) imm */
+ {
+ /* 16 byte instruction that uses two 'struct bpf_insn' */
+ u64 imm64;
+
+ imm64 = (u64)(u32) insn[0].imm | ((u64)(u32) insn[1].imm) << 32;
+ /* lg %dst,<d(imm)>(%l) */
+ EMIT6_DISP_LH(0xe3000000, 0x0004, dst_reg, REG_0, REG_L,
+ EMIT_CONST_U64(imm64));
+ insn_count = 2;
+ break;
}
-}
+ /*
+ * BPF_ADD
+ */
+ case BPF_ALU | BPF_ADD | BPF_X: /* dst = (u32) dst + (u32) src */
+ /* ar %dst,%src */
+ EMIT2(0x1a00, dst_reg, src_reg);
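+ /* EMIT_ZERO zero-extends the 32-bit result in %dst to 64 bits */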
+ EMIT_ZERO(dst_reg);
+ break;
+ case BPF_ALU64 | BPF_ADD | BPF_X: /* dst = dst + src */
+ /* agr %dst,%src */
+ EMIT4(0xb9080000, dst_reg, src_reg);
+ break;
+ case BPF_ALU | BPF_ADD | BPF_K: /* dst = (u32) dst + (u32) imm */
+ if (!imm)
+ break;
+ /* alfi %dst,imm */
+ EMIT6_IMM(0xc20b0000, dst_reg, imm);
+ EMIT_ZERO(dst_reg);
+ break;
+ case BPF_ALU64 | BPF_ADD | BPF_K: /* dst = dst + imm */
+ if (!imm)
+ break;
+ /* agfi %dst,imm */
+ EMIT6_IMM(0xc2080000, dst_reg, imm);
+ break;
+ /*
+ * BPF_SUB
+ */
+ case BPF_ALU | BPF_SUB | BPF_X: /* dst = (u32) dst - (u32) src */
+ /* sr %dst,%src */
+ EMIT2(0x1b00, dst_reg, src_reg);
+ EMIT_ZERO(dst_reg);
+ break;
+ case BPF_ALU64 | BPF_SUB | BPF_X: /* dst = dst - src */
+ /* sgr %dst,%src */
+ EMIT4(0xb9090000, dst_reg, src_reg);
+ break;
+ case BPF_ALU | BPF_SUB | BPF_K: /* dst = (u32) dst - (u32) imm */
+ if (!imm)
+ break;
+ /* alfi %dst,-imm */
+ EMIT6_IMM(0xc20b0000, dst_reg, -imm);
+ EMIT_ZERO(dst_reg);
+ break;
+ case BPF_ALU64 | BPF_SUB | BPF_K: /* dst = dst - imm */
+ if (!imm)
+ break;
+ /* agfi %dst,-imm */
+ EMIT6_IMM(0xc2080000, dst_reg, -imm);
+ break;
+ /*
+ * BPF_MUL
+ */
+ case BPF_ALU | BPF_MUL | BPF_X: /* dst = (u32) dst * (u32) src */
+ /* msr %dst,%src */
+ EMIT4(0xb2520000, dst_reg, src_reg);
+ EMIT_ZERO(dst_reg);
+ break;
+ case BPF_ALU64 | BPF_MUL | BPF_X: /* dst = dst * src */
+ /* msgr %dst,%src */
+ EMIT4(0xb90c0000, dst_reg, src_reg);
+ break;
+ case BPF_ALU | BPF_MUL | BPF_K: /* dst = (u32) dst * (u32) imm */
+ if (imm == 1)
+ break;
+ /* msfi %dst,imm */
+ EMIT6_IMM(0xc2010000, dst_reg, imm);
+ EMIT_ZERO(dst_reg);
+ break;
+ case BPF_ALU64 | BPF_MUL | BPF_K: /* dst = dst * imm */
+ if (imm == 1)
+ break;
+ /* msgfi %dst,imm */
+ EMIT6_IMM(0xc2000000, dst_reg, imm);
+ break;
+ /*
+ * BPF_DIV / BPF_MOD
+ */
+ case BPF_ALU | BPF_DIV | BPF_X: /* dst = (u32) dst / (u32) src */
+ case BPF_ALU | BPF_MOD | BPF_X: /* dst = (u32) dst % (u32) src */
+ {
+ int rc_reg = BPF_OP(insn->code) == BPF_DIV ? REG_W1 : REG_W0;
-static int bpf_jit_insn(struct bpf_jit *jit, struct sock_filter *filter,
- unsigned int *addrs, int i, int last)
-{
- unsigned int K;
- int offset;
- unsigned int mask;
- u16 code;
-
- K = filter->k;
- code = bpf_anc_helper(filter);
-
- switch (code) {
- case BPF_ALU | BPF_ADD | BPF_X: /* A += X */
- jit->seen |= SEEN_XREG;
- /* ar %r5,%r12 */
- EMIT2(0x1a5c);
- break;
- case BPF_ALU | BPF_ADD | BPF_K: /* A += K */
- if (!K)
- break;
- if (K <= 16383)
- /* ahi %r5,<K> */
- EMIT4_IMM(0xa75a0000, K);
- else if (test_facility(21))
- /* alfi %r5,<K> */
- EMIT6_IMM(0xc25b0000, K);
- else
- /* a %r5,<d(K)>(%r13) */
- EMIT4_DISP(0x5a50d000, EMIT_CONST(K));
- break;
- case BPF_ALU | BPF_SUB | BPF_X: /* A -= X */
- jit->seen |= SEEN_XREG;
- /* sr %r5,%r12 */
- EMIT2(0x1b5c);
- break;
- case BPF_ALU | BPF_SUB | BPF_K: /* A -= K */
- if (!K)
- break;
- if (K <= 16384)
- /* ahi %r5,-K */
- EMIT4_IMM(0xa75a0000, -K);
- else if (test_facility(21))
- /* alfi %r5,-K */
- EMIT6_IMM(0xc25b0000, -K);
- else
- /* s %r5,<d(K)>(%r13) */
- EMIT4_DISP(0x5b50d000, EMIT_CONST(K));
- break;
- case BPF_ALU | BPF_MUL | BPF_X: /* A *= X */
- jit->seen |= SEEN_XREG;
- /* msr %r5,%r12 */
- EMIT4(0xb252005c);
- break;
- case BPF_ALU | BPF_MUL | BPF_K: /* A *= K */
- if (K <= 16383)
- /* mhi %r5,K */
- EMIT4_IMM(0xa75c0000, K);
- else if (test_facility(34))
- /* msfi %r5,<K> */
- EMIT6_IMM(0xc2510000, K);
- else
- /* ms %r5,<d(K)>(%r13) */
- EMIT4_DISP(0x7150d000, EMIT_CONST(K));
- break;
- case BPF_ALU | BPF_DIV | BPF_X: /* A /= X */
- jit->seen |= SEEN_XREG | SEEN_RET0;
- /* ltr %r12,%r12 */
- EMIT2(0x12cc);
- /* jz <ret0> */
- EMIT4_PCREL(0xa7840000, (jit->ret0_ip - jit->prg));
- /* lhi %r4,0 */
- EMIT4(0xa7480000);
- /* dlr %r4,%r12 */
- EMIT4(0xb997004c);
- break;
- case BPF_ALU | BPF_DIV | BPF_K: /* A /= K */
- if (K == 1)
- break;
- /* lhi %r4,0 */
- EMIT4(0xa7480000);
- /* dl %r4,<d(K)>(%r13) */
- EMIT6_DISP(0xe340d000, 0x0097, EMIT_CONST(K));
- break;
- case BPF_ALU | BPF_MOD | BPF_X: /* A %= X */
- jit->seen |= SEEN_XREG | SEEN_RET0;
- /* ltr %r12,%r12 */
- EMIT2(0x12cc);
- /* jz <ret0> */
- EMIT4_PCREL(0xa7840000, (jit->ret0_ip - jit->prg));
- /* lhi %r4,0 */
- EMIT4(0xa7480000);
- /* dlr %r4,%r12 */
- EMIT4(0xb997004c);
- /* lr %r5,%r4 */
- EMIT2(0x1854);
- break;
- case BPF_ALU | BPF_MOD | BPF_K: /* A %= K */
- if (K == 1) {
- /* lhi %r5,0 */
- EMIT4(0xa7580000);
- break;
- }
- /* lhi %r4,0 */
- EMIT4(0xa7480000);
- /* dl %r4,<d(K)>(%r13) */
- EMIT6_DISP(0xe340d000, 0x0097, EMIT_CONST(K));
- /* lr %r5,%r4 */
- EMIT2(0x1854);
- break;
- case BPF_ALU | BPF_AND | BPF_X: /* A &= X */
- jit->seen |= SEEN_XREG;
- /* nr %r5,%r12 */
- EMIT2(0x145c);
- break;
- case BPF_ALU | BPF_AND | BPF_K: /* A &= K */
- if (test_facility(21))
- /* nilf %r5,<K> */
- EMIT6_IMM(0xc05b0000, K);
- else
- /* n %r5,<d(K)>(%r13) */
- EMIT4_DISP(0x5450d000, EMIT_CONST(K));
- break;
- case BPF_ALU | BPF_OR | BPF_X: /* A |= X */
- jit->seen |= SEEN_XREG;
- /* or %r5,%r12 */
- EMIT2(0x165c);
- break;
- case BPF_ALU | BPF_OR | BPF_K: /* A |= K */
- if (test_facility(21))
- /* oilf %r5,<K> */
- EMIT6_IMM(0xc05d0000, K);
- else
- /* o %r5,<d(K)>(%r13) */
- EMIT4_DISP(0x5650d000, EMIT_CONST(K));
- break;
- case BPF_ANC | SKF_AD_ALU_XOR_X: /* A ^= X; */
- case BPF_ALU | BPF_XOR | BPF_X:
- jit->seen |= SEEN_XREG;
- /* xr %r5,%r12 */
- EMIT2(0x175c);
- break;
- case BPF_ALU | BPF_XOR | BPF_K: /* A ^= K */
- if (!K)
- break;
- /* x %r5,<d(K)>(%r13) */
- EMIT4_DISP(0x5750d000, EMIT_CONST(K));
- break;
- case BPF_ALU | BPF_LSH | BPF_X: /* A <<= X; */
- jit->seen |= SEEN_XREG;
- /* sll %r5,0(%r12) */
- EMIT4(0x8950c000);
- break;
- case BPF_ALU | BPF_LSH | BPF_K: /* A <<= K */
- if (K == 0)
- break;
- /* sll %r5,K */
- EMIT4_DISP(0x89500000, K);
- break;
- case BPF_ALU | BPF_RSH | BPF_X: /* A >>= X; */
- jit->seen |= SEEN_XREG;
- /* srl %r5,0(%r12) */
- EMIT4(0x8850c000);
- break;
- case BPF_ALU | BPF_RSH | BPF_K: /* A >>= K; */
- if (K == 0)
- break;
- /* srl %r5,K */
- EMIT4_DISP(0x88500000, K);
- break;
- case BPF_ALU | BPF_NEG: /* A = -A */
- /* lcr %r5,%r5 */
- EMIT2(0x1355);
- break;
- case BPF_JMP | BPF_JA: /* ip += K */
- offset = addrs[i + K] + jit->start - jit->prg;
- EMIT4_PCREL(0xa7f40000, offset);
- break;
- case BPF_JMP | BPF_JGT | BPF_K: /* ip += (A > K) ? jt : jf */
- mask = 0x200000; /* jh */
- goto kbranch;
- case BPF_JMP | BPF_JGE | BPF_K: /* ip += (A >= K) ? jt : jf */
- mask = 0xa00000; /* jhe */
- goto kbranch;
- case BPF_JMP | BPF_JEQ | BPF_K: /* ip += (A == K) ? jt : jf */
- mask = 0x800000; /* je */
-kbranch: /* Emit compare if the branch targets are different */
- if (filter->jt != filter->jf) {
- if (test_facility(21))
- /* clfi %r5,<K> */
- EMIT6_IMM(0xc25f0000, K);
- else
- /* cl %r5,<d(K)>(%r13) */
- EMIT4_DISP(0x5550d000, EMIT_CONST(K));
- }
-branch: if (filter->jt == filter->jf) {
- if (filter->jt == 0)
- break;
- /* j <jt> */
- offset = addrs[i + filter->jt] + jit->start - jit->prg;
- EMIT4_PCREL(0xa7f40000, offset);
- break;
- }
- if (filter->jt != 0) {
- /* brc <mask>,<jt> */
- offset = addrs[i + filter->jt] + jit->start - jit->prg;
- EMIT4_PCREL(0xa7040000 | mask, offset);
- }
- if (filter->jf != 0) {
- /* brc <mask^15>,<jf> */
- offset = addrs[i + filter->jf] + jit->start - jit->prg;
- EMIT4_PCREL(0xa7040000 | (mask ^ 0xf00000), offset);
- }
- break;
- case BPF_JMP | BPF_JSET | BPF_K: /* ip += (A & K) ? jt : jf */
- mask = 0x700000; /* jnz */
- /* Emit test if the branch targets are different */
- if (filter->jt != filter->jf) {
- if (K > 65535) {
- /* lr %r4,%r5 */
- EMIT2(0x1845);
- /* n %r4,<d(K)>(%r13) */
- EMIT4_DISP(0x5440d000, EMIT_CONST(K));
- } else
- /* tmll %r5,K */
- EMIT4_IMM(0xa7510000, K);
- }
- goto branch;
- case BPF_JMP | BPF_JGT | BPF_X: /* ip += (A > X) ? jt : jf */
- mask = 0x200000; /* jh */
- goto xbranch;
- case BPF_JMP | BPF_JGE | BPF_X: /* ip += (A >= X) ? jt : jf */
- mask = 0xa00000; /* jhe */
- goto xbranch;
- case BPF_JMP | BPF_JEQ | BPF_X: /* ip += (A == X) ? jt : jf */
- mask = 0x800000; /* je */
-xbranch: /* Emit compare if the branch targets are different */
- if (filter->jt != filter->jf) {
- jit->seen |= SEEN_XREG;
- /* clr %r5,%r12 */
- EMIT2(0x155c);
- }
- goto branch;
- case BPF_JMP | BPF_JSET | BPF_X: /* ip += (A & X) ? jt : jf */
- mask = 0x700000; /* jnz */
- /* Emit test if the branch targets are different */
- if (filter->jt != filter->jf) {
- jit->seen |= SEEN_XREG;
- /* lr %r4,%r5 */
- EMIT2(0x1845);
- /* nr %r4,%r12 */
- EMIT2(0x144c);
- }
- goto branch;
- case BPF_LD | BPF_W | BPF_ABS: /* A = *(u32 *) (skb->data+K) */
- jit->seen |= SEEN_DATAREF | SEEN_RET0 | SEEN_LOAD_WORD;
- offset = jit->off_load_word;
- goto load_abs;
- case BPF_LD | BPF_H | BPF_ABS: /* A = *(u16 *) (skb->data+K) */
- jit->seen |= SEEN_DATAREF | SEEN_RET0 | SEEN_LOAD_HALF;
- offset = jit->off_load_half;
- goto load_abs;
- case BPF_LD | BPF_B | BPF_ABS: /* A = *(u8 *) (skb->data+K) */
- jit->seen |= SEEN_DATAREF | SEEN_RET0 | SEEN_LOAD_BYTE;
- offset = jit->off_load_byte;
-load_abs: if ((int) K < 0)
- goto out;
-call_fn: /* lg %r1,<d(function)>(%r13) */
- EMIT6_DISP(0xe310d000, 0x0004, offset);
- /* l %r3,<d(K)>(%r13) */
- EMIT4_DISP(0x5830d000, EMIT_CONST(K));
- /* basr %r8,%r1 */
- EMIT2(0x0d81);
- /* jnz <ret0> */
- EMIT4_PCREL(0xa7740000, (jit->ret0_ip - jit->prg));
- break;
- case BPF_LD | BPF_W | BPF_IND: /* A = *(u32 *) (skb->data+K+X) */
- jit->seen |= SEEN_DATAREF | SEEN_RET0 | SEEN_LOAD_IWORD;
- offset = jit->off_load_iword;
- goto call_fn;
- case BPF_LD | BPF_H | BPF_IND: /* A = *(u16 *) (skb->data+K+X) */
- jit->seen |= SEEN_DATAREF | SEEN_RET0 | SEEN_LOAD_IHALF;
- offset = jit->off_load_ihalf;
- goto call_fn;
- case BPF_LD | BPF_B | BPF_IND: /* A = *(u8 *) (skb->data+K+X) */
- jit->seen |= SEEN_DATAREF | SEEN_RET0 | SEEN_LOAD_IBYTE;
- offset = jit->off_load_ibyte;
- goto call_fn;
- case BPF_LDX | BPF_B | BPF_MSH:
- /* X = (*(u8 *)(skb->data+K) & 0xf) << 2 */
jit->seen |= SEEN_RET0;
- if ((int) K < 0) {
- /* j <ret0> */
- EMIT4_PCREL(0xa7f40000, (jit->ret0_ip - jit->prg));
+ /* ltr %src,%src (if src == 0 goto fail) */
+ EMIT2(0x1200, src_reg, src_reg);
+ /* jz <ret0> */
+ EMIT4_PCREL(0xa7840000, jit->ret0_ip - jit->prg);
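+ /*
+ * dlr divides the %w0:%w1 pair by %src; the remainder ends up in %w0
+ * and the quotient in %w1, so rc_reg selects the result wanted by
+ * BPF_DIV or BPF_MOD.
+ */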
+ /* lhi %w0,0 */
+ EMIT4_IMM(0xa7080000, REG_W0, 0);
+ /* lr %w1,%dst */
+ EMIT2(0x1800, REG_W1, dst_reg);
+ /* dlr %w0,%src */
+ EMIT4(0xb9970000, REG_W0, src_reg);
+ /* llgfr %dst,%rc */
+ EMIT4(0xb9160000, dst_reg, rc_reg);
+ break;
+ }
+ case BPF_ALU64 | BPF_DIV | BPF_X: /* dst = dst / (u32) src */
+ case BPF_ALU64 | BPF_MOD | BPF_X: /* dst = dst % (u32) src */
+ {
+ int rc_reg = BPF_OP(insn->code) == BPF_DIV ? REG_W1 : REG_W0;
+
+ jit->seen |= SEEN_RET0;
+ /* ltgr %src,%src (if src == 0 goto fail) */
+ EMIT4(0xb9020000, src_reg, src_reg);
+ /* jz <ret0> */
+ EMIT4_PCREL(0xa7840000, jit->ret0_ip - jit->prg);
+ /* lghi %w0,0 */
+ EMIT4_IMM(0xa7090000, REG_W0, 0);
+ /* lgr %w1,%dst */
+ EMIT4(0xb9040000, REG_W1, dst_reg);
+ /* llgfr %dst,%src (u32 cast) */
+ EMIT4(0xb9160000, dst_reg, src_reg);
+ /* dlgr %w0,%dst */
+ EMIT4(0xb9870000, REG_W0, dst_reg);
+ /* lgr %dst,%rc */
+ EMIT4(0xb9040000, dst_reg, rc_reg);
+ break;
+ }
+ case BPF_ALU | BPF_DIV | BPF_K: /* dst = (u32) dst / (u32) imm */
+ case BPF_ALU | BPF_MOD | BPF_K: /* dst = (u32) dst % (u32) imm */
+ {
+ int rc_reg = BPF_OP(insn->code) == BPF_DIV ? REG_W1 : REG_W0;
+
+ if (imm == 1) {
+ if (BPF_OP(insn->code) == BPF_MOD)
+ /* lghi %dst,0 */
+ EMIT4_IMM(0xa7090000, dst_reg, 0);
break;
}
- jit->seen |= SEEN_DATAREF | SEEN_LOAD_BMSH;
- offset = jit->off_load_bmsh;
- goto call_fn;
- case BPF_LD | BPF_W | BPF_LEN: /* A = skb->len; */
- BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff, len) != 4);
- /* l %r5,<d(len)>(%r2) */
- EMIT4_DISP(0x58502000, offsetof(struct sk_buff, len));
+ /* lhi %w0,0 */
+ EMIT4_IMM(0xa7080000, REG_W0, 0);
+ /* lr %w1,%dst */
+ EMIT2(0x1800, REG_W1, dst_reg);
+ /* dl %w0,<d(imm)>(%l) */
+ EMIT6_DISP_LH(0xe3000000, 0x0097, REG_W0, REG_0, REG_L,
+ EMIT_CONST_U32(imm));
+ /* llgfr %dst,%rc */
+ EMIT4(0xb9160000, dst_reg, rc_reg);
break;
- case BPF_LDX | BPF_W | BPF_LEN: /* X = skb->len; */
- jit->seen |= SEEN_XREG;
- /* l %r12,<d(len)>(%r2) */
- EMIT4_DISP(0x58c02000, offsetof(struct sk_buff, len));
+ }
+ case BPF_ALU64 | BPF_DIV | BPF_K: /* dst = dst / (u32) imm */
+ case BPF_ALU64 | BPF_MOD | BPF_K: /* dst = dst % (u32) imm */
+ {
+ int rc_reg = BPF_OP(insn->code) == BPF_DIV ? REG_W1 : REG_W0;
+
+ if (imm == 1) {
+ if (BPF_OP(insn->code) == BPF_MOD)
+ /* lghi %dst,0 */
+ EMIT4_IMM(0xa7090000, dst_reg, 0);
+ break;
+ }
+ /* lghi %w0,0 */
+ EMIT4_IMM(0xa7090000, REG_W0, 0);
+ /* lgr %w1,%dst */
+ EMIT4(0xb9040000, REG_W1, dst_reg);
+ /* dlg %w0,<d(imm)>(%l) */
+ EMIT6_DISP_LH(0xe3000000, 0x0087, REG_W0, REG_0, REG_L,
+ EMIT_CONST_U64((u32) imm));
+ /* lgr %dst,%rc */
+ EMIT4(0xb9040000, dst_reg, rc_reg);
break;
- case BPF_LD | BPF_IMM: /* A = K */
- if (K <= 16383)
- /* lhi %r5,K */
- EMIT4_IMM(0xa7580000, K);
- else if (test_facility(21))
- /* llilf %r5,<K> */
- EMIT6_IMM(0xc05f0000, K);
- else
- /* l %r5,<d(K)>(%r13) */
- EMIT4_DISP(0x5850d000, EMIT_CONST(K));
+ }
+ /*
+ * BPF_AND
+ */
+ case BPF_ALU | BPF_AND | BPF_X: /* dst = (u32) dst & (u32) src */
+ /* nr %dst,%src */
+ EMIT2(0x1400, dst_reg, src_reg);
+ EMIT_ZERO(dst_reg);
break;
- case BPF_LDX | BPF_IMM: /* X = K */
- jit->seen |= SEEN_XREG;
- if (K <= 16383)
- /* lhi %r12,<K> */
- EMIT4_IMM(0xa7c80000, K);
- else if (test_facility(21))
- /* llilf %r12,<K> */
- EMIT6_IMM(0xc0cf0000, K);
- else
- /* l %r12,<d(K)>(%r13) */
- EMIT4_DISP(0x58c0d000, EMIT_CONST(K));
+ case BPF_ALU64 | BPF_AND | BPF_X: /* dst = dst & src */
+ /* ngr %dst,%src */
+ EMIT4(0xb9800000, dst_reg, src_reg);
break;
- case BPF_LD | BPF_MEM: /* A = mem[K] */
- jit->seen |= SEEN_MEM;
- /* l %r5,<K>(%r15) */
- EMIT4_DISP(0x5850f000,
- (jit->seen & SEEN_DATAREF) ? 160 + K*4 : K*4);
+ case BPF_ALU | BPF_AND | BPF_K: /* dst = (u32) dst & (u32) imm */
+ /* nilf %dst,imm */
+ EMIT6_IMM(0xc00b0000, dst_reg, imm);
+ EMIT_ZERO(dst_reg);
break;
- case BPF_LDX | BPF_MEM: /* X = mem[K] */
- jit->seen |= SEEN_XREG | SEEN_MEM;
- /* l %r12,<K>(%r15) */
- EMIT4_DISP(0x58c0f000,
- (jit->seen & SEEN_DATAREF) ? 160 + K*4 : K*4);
+ case BPF_ALU64 | BPF_AND | BPF_K: /* dst = dst & imm */
+ /* ng %dst,<d(imm)>(%l) */
+ EMIT6_DISP_LH(0xe3000000, 0x0080, dst_reg, REG_0, REG_L,
+ EMIT_CONST_U64(imm));
break;
- case BPF_MISC | BPF_TAX: /* X = A */
- jit->seen |= SEEN_XREG;
- /* lr %r12,%r5 */
- EMIT2(0x18c5);
+ /*
+ * BPF_OR
+ */
+ case BPF_ALU | BPF_OR | BPF_X: /* dst = (u32) dst | (u32) src */
+ /* or %dst,%src */
+ EMIT2(0x1600, dst_reg, src_reg);
+ EMIT_ZERO(dst_reg);
break;
- case BPF_MISC | BPF_TXA: /* A = X */
- jit->seen |= SEEN_XREG;
- /* lr %r5,%r12 */
- EMIT2(0x185c);
+ case BPF_ALU64 | BPF_OR | BPF_X: /* dst = dst | src */
+ /* ogr %dst,%src */
+ EMIT4(0xb9810000, dst_reg, src_reg);
break;
- case BPF_RET | BPF_K:
- if (K == 0) {
- jit->seen |= SEEN_RET0;
- if (last)
- break;
- /* j <ret0> */
- EMIT4_PCREL(0xa7f40000, jit->ret0_ip - jit->prg);
- } else {
- if (K <= 16383)
- /* lghi %r2,K */
- EMIT4_IMM(0xa7290000, K);
- else
- /* llgf %r2,<K>(%r13) */
- EMIT6_DISP(0xe320d000, 0x0016, EMIT_CONST(K));
- /* j <exit> */
- if (last && !(jit->seen & SEEN_RET0))
- break;
- EMIT4_PCREL(0xa7f40000, jit->exit_ip - jit->prg);
+ case BPF_ALU | BPF_OR | BPF_K: /* dst = (u32) dst | (u32) imm */
+ /* oilf %dst,imm */
+ EMIT6_IMM(0xc00d0000, dst_reg, imm);
+ EMIT_ZERO(dst_reg);
+ break;
+ case BPF_ALU64 | BPF_OR | BPF_K: /* dst = dst | imm */
+ /* og %dst,<d(imm)>(%l) */
+ EMIT6_DISP_LH(0xe3000000, 0x0081, dst_reg, REG_0, REG_L,
+ EMIT_CONST_U64(imm));
+ break;
+ /*
+ * BPF_XOR
+ */
+ case BPF_ALU | BPF_XOR | BPF_X: /* dst = (u32) dst ^ (u32) src */
+ /* xr %dst,%src */
+ EMIT2(0x1700, dst_reg, src_reg);
+ EMIT_ZERO(dst_reg);
+ break;
+ case BPF_ALU64 | BPF_XOR | BPF_X: /* dst = dst ^ src */
+ /* xgr %dst,%src */
+ EMIT4(0xb9820000, dst_reg, src_reg);
+ break;
+ case BPF_ALU | BPF_XOR | BPF_K: /* dst = (u32) dst ^ (u32) imm */
+ if (!imm)
+ break;
+ /* xilf %dst,imm */
+ EMIT6_IMM(0xc0070000, dst_reg, imm);
+ EMIT_ZERO(dst_reg);
+ break;
+ case BPF_ALU64 | BPF_XOR | BPF_K: /* dst = dst ^ imm */
+ /* xg %dst,<d(imm)>(%l) */
+ EMIT6_DISP_LH(0xe3000000, 0x0082, dst_reg, REG_0, REG_L,
+ EMIT_CONST_U64(imm));
+ break;
+ /*
+ * BPF_LSH
+ */
+ case BPF_ALU | BPF_LSH | BPF_X: /* dst = (u32) dst << (u32) src */
+ /* sll %dst,0(%src) */
+ EMIT4_DISP(0x89000000, dst_reg, src_reg, 0);
+ EMIT_ZERO(dst_reg);
+ break;
+ case BPF_ALU64 | BPF_LSH | BPF_X: /* dst = dst << src */
+ /* sllg %dst,%dst,0(%src) */
+ EMIT6_DISP_LH(0xeb000000, 0x000d, dst_reg, dst_reg, src_reg, 0);
+ break;
+ case BPF_ALU | BPF_LSH | BPF_K: /* dst = (u32) dst << (u32) imm */
+ if (imm == 0)
+ break;
+ /* sll %dst,imm(%r0) */
+ EMIT4_DISP(0x89000000, dst_reg, REG_0, imm);
+ EMIT_ZERO(dst_reg);
+ break;
+ case BPF_ALU64 | BPF_LSH | BPF_K: /* dst = dst << imm */
+ if (imm == 0)
+ break;
+ /* sllg %dst,%dst,imm(%r0) */
+ EMIT6_DISP_LH(0xeb000000, 0x000d, dst_reg, dst_reg, REG_0, imm);
+ break;
+ /*
+ * BPF_RSH
+ */
+ case BPF_ALU | BPF_RSH | BPF_X: /* dst = (u32) dst >> (u32) src */
+ /* srl %dst,0(%src) */
+ EMIT4_DISP(0x88000000, dst_reg, src_reg, 0);
+ EMIT_ZERO(dst_reg);
+ break;
+ case BPF_ALU64 | BPF_RSH | BPF_X: /* dst = dst >> src */
+ /* srlg %dst,%dst,0(%src) */
+ EMIT6_DISP_LH(0xeb000000, 0x000c, dst_reg, dst_reg, src_reg, 0);
+ break;
+ case BPF_ALU | BPF_RSH | BPF_K: /* dst = (u32) dst >> (u32) imm */
+ if (imm == 0)
+ break;
+ /* srl %dst,imm(%r0) */
+ EMIT4_DISP(0x88000000, dst_reg, REG_0, imm);
+ EMIT_ZERO(dst_reg);
+ break;
+ case BPF_ALU64 | BPF_RSH | BPF_K: /* dst = dst >> imm */
+ if (imm == 0)
+ break;
+ /* srlg %dst,%dst,imm(%r0) */
+ EMIT6_DISP_LH(0xeb000000, 0x000c, dst_reg, dst_reg, REG_0, imm);
+ break;
+ /*
+ * BPF_ARSH
+ */
+ case BPF_ALU64 | BPF_ARSH | BPF_X: /* ((s64) dst) >>= src */
+ /* srag %dst,%dst,0(%src) */
+ EMIT6_DISP_LH(0xeb000000, 0x000a, dst_reg, dst_reg, src_reg, 0);
+ break;
+ case BPF_ALU64 | BPF_ARSH | BPF_K: /* ((s64) dst) >>= imm */
+ if (imm == 0)
+ break;
+ /* srag %dst,%dst,imm(%r0) */
+ EMIT6_DISP_LH(0xeb000000, 0x000a, dst_reg, dst_reg, REG_0, imm);
+ break;
+ /*
+ * BPF_NEG
+ */
+ case BPF_ALU | BPF_NEG: /* dst = (u32) -dst */
+ /* lcr %dst,%dst */
+ EMIT2(0x1300, dst_reg, dst_reg);
+ EMIT_ZERO(dst_reg);
+ break;
+ case BPF_ALU64 | BPF_NEG: /* dst = -dst */
+ /* lcgr %dst,%dst */
+ EMIT4(0xb9130000, dst_reg, dst_reg);
+ break;
+ /*
+ * BPF_FROM_BE/LE
+ */
+ case BPF_ALU | BPF_END | BPF_FROM_BE:
+ /* s390 is big endian, therefore only clear high order bytes */
+ switch (imm) {
+ case 16: /* dst = (u16) cpu_to_be16(dst) */
+ /* llghr %dst,%dst */
+ EMIT4(0xb9850000, dst_reg, dst_reg);
+ break;
+ case 32: /* dst = (u32) cpu_to_be32(dst) */
+ /* llgfr %dst,%dst */
+ EMIT4(0xb9160000, dst_reg, dst_reg);
+ break;
+ case 64: /* dst = (u64) cpu_to_be64(dst) */
+ break;
}
break;
- case BPF_RET | BPF_A:
- /* llgfr %r2,%r5 */
- EMIT4(0xb9160025);
+ case BPF_ALU | BPF_END | BPF_FROM_LE:
+ switch (imm) {
+ case 16: /* dst = (u16) cpu_to_le16(dst) */
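+ /*
+ * Byte-swap the low 32 bits, shift the swapped halfword back
+ * down and zero-extend to 64 bits.
+ */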
+ /* lrvr %dst,%dst */
+ EMIT4(0xb91f0000, dst_reg, dst_reg);
+ /* srl %dst,16(%r0) */
+ EMIT4_DISP(0x88000000, dst_reg, REG_0, 16);
+ /* llghr %dst,%dst */
+ EMIT4(0xb9850000, dst_reg, dst_reg);
+ break;
+ case 32: /* dst = (u32) cpu_to_le32(dst) */
+ /* lrvr %dst,%dst */
+ EMIT4(0xb91f0000, dst_reg, dst_reg);
+ /* llgfr %dst,%dst */
+ EMIT4(0xb9160000, dst_reg, dst_reg);
+ break;
+ case 64: /* dst = (u64) cpu_to_le64(dst) */
+ /* lrvgr %dst,%dst */
+ EMIT4(0xb90f0000, dst_reg, dst_reg);
+ break;
+ }
+ break;
+ /*
+ * BPF_ST(X)
+ */
+ case BPF_STX | BPF_MEM | BPF_B: /* *(u8 *)(dst + off) = src_reg */
+ /* stcy %src,off(%dst) */
+ EMIT6_DISP_LH(0xe3000000, 0x0072, src_reg, dst_reg, REG_0, off);
+ jit->seen |= SEEN_MEM;
+ break;
+ case BPF_STX | BPF_MEM | BPF_H: /* *(u16 *)(dst + off) = src */
+ /* sthy %src,off(%dst) */
+ EMIT6_DISP_LH(0xe3000000, 0x0070, src_reg, dst_reg, REG_0, off);
+ jit->seen |= SEEN_MEM;
+ break;
+ case BPF_STX | BPF_MEM | BPF_W: /* *(u32 *)(dst + off) = src */
+ /* sty %src,off(%dst) */
+ EMIT6_DISP_LH(0xe3000000, 0x0050, src_reg, dst_reg, REG_0, off);
+ jit->seen |= SEEN_MEM;
+ break;
+ case BPF_STX | BPF_MEM | BPF_DW: /* *(u64 *)(dst + off) = src */
+ /* stg %src,off(%dst) */
+ EMIT6_DISP_LH(0xe3000000, 0x0024, src_reg, dst_reg, REG_0, off);
+ jit->seen |= SEEN_MEM;
+ break;
+ case BPF_ST | BPF_MEM | BPF_B: /* *(u8 *)(dst + off) = imm */
+ /* lhi %w0,imm */
+ EMIT4_IMM(0xa7080000, REG_W0, (u8) imm);
+ /* stcy %w0,off(%dst) */
+ EMIT6_DISP_LH(0xe3000000, 0x0072, REG_W0, dst_reg, REG_0, off);
+ jit->seen |= SEEN_MEM;
+ break;
+ case BPF_ST | BPF_MEM | BPF_H: /* *(u16 *)(dst + off) = imm */
+ /* lhi %w0,imm */
+ EMIT4_IMM(0xa7080000, REG_W0, (u16) imm);
+ /* sthy %w0,off(%dst) */
+ EMIT6_DISP_LH(0xe3000000, 0x0070, REG_W0, dst_reg, REG_0, off);
+ jit->seen |= SEEN_MEM;
+ break;
+ case BPF_ST | BPF_MEM | BPF_W: /* *(u32 *)(dst + off) = imm */
+ /* llilf %w0,imm */
+ EMIT6_IMM(0xc00f0000, REG_W0, (u32) imm);
+ /* sty %w0,off(%dst) */
+ EMIT6_DISP_LH(0xe3000000, 0x0050, REG_W0, dst_reg, REG_0, off);
+ jit->seen |= SEEN_MEM;
+ break;
+ case BPF_ST | BPF_MEM | BPF_DW: /* *(u64 *)(dst + off) = imm */
+ /* lgfi %w0,imm */
+ EMIT6_IMM(0xc0010000, REG_W0, imm);
+ /* stg %w0,off(%dst) */
+ EMIT6_DISP_LH(0xe3000000, 0x0024, REG_W0, dst_reg, REG_0, off);
+ jit->seen |= SEEN_MEM;
+ break;
+ /*
+ * BPF_STX XADD (atomic_add)
+ */
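+ /*
+ * laal/laalg add %src to the memory operand as an interlocked update;
+ * the previous value is returned in %w0 and ignored.
+ */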
+ case BPF_STX | BPF_XADD | BPF_W: /* *(u32 *)(dst + off) += src */
+ /* laal %w0,%src,off(%dst) */
+ EMIT6_DISP_LH(0xeb000000, 0x00fa, REG_W0, src_reg,
+ dst_reg, off);
+ jit->seen |= SEEN_MEM;
+ break;
+ case BPF_STX | BPF_XADD | BPF_DW: /* *(u64 *)(dst + off) += src */
+ /* laalg %w0,%src,off(%dst) */
+ EMIT6_DISP_LH(0xeb000000, 0x00ea, REG_W0, src_reg,
+ dst_reg, off);
+ jit->seen |= SEEN_MEM;
+ break;
+ /*
+ * BPF_LDX
+ */
+ case BPF_LDX | BPF_MEM | BPF_B: /* dst = *(u8 *)(ul) (src + off) */
+ /* llgc %dst,0(off,%src) */
+ EMIT6_DISP_LH(0xe3000000, 0x0090, dst_reg, src_reg, REG_0, off);
+ jit->seen |= SEEN_MEM;
+ break;
+ case BPF_LDX | BPF_MEM | BPF_H: /* dst = *(u16 *)(ul) (src + off) */
+ /* llgh %dst,0(off,%src) */
+ EMIT6_DISP_LH(0xe3000000, 0x0091, dst_reg, src_reg, REG_0, off);
+ jit->seen |= SEEN_MEM;
+ break;
+ case BPF_LDX | BPF_MEM | BPF_W: /* dst = *(u32 *)(ul) (src + off) */
+ /* llgf %dst,off(%src) */
+ jit->seen |= SEEN_MEM;
+ EMIT6_DISP_LH(0xe3000000, 0x0016, dst_reg, src_reg, REG_0, off);
+ break;
+ case BPF_LDX | BPF_MEM | BPF_DW: /* dst = *(u64 *)(ul) (src + off) */
+ /* lg %dst,0(off,%src) */
+ jit->seen |= SEEN_MEM;
+ EMIT6_DISP_LH(0xe3000000, 0x0004, dst_reg, src_reg, REG_0, off);
+ break;
+ /*
+ * BPF_JMP / CALL
+ */
+ case BPF_JMP | BPF_CALL:
+ {
+ /*
+ * b0 = (__bpf_call_base + imm)(b1, b2, b3, b4, b5)
+ */
+ const u64 func = (u64)__bpf_call_base + imm;
+
+ REG_SET_SEEN(BPF_REG_5);
+ jit->seen |= SEEN_FUNC;
+ /* lg %w1,<d(imm)>(%l) */
+ EMIT6_DISP_LH(0xe3000000, 0x0004, REG_W1, REG_0, REG_L,
+ EMIT_CONST_U64(func));
+ /* basr %r14,%w1 */
+ EMIT2(0x0d00, REG_14, REG_W1);
+ /* lgr %b0,%r2: load return value into %b0 */
+ EMIT4(0xb9040000, BPF_REG_0, REG_2);
+ break;
+ }
+ case BPF_JMP | BPF_EXIT: /* return b0 */
+ last = (i == fp->len - 1) ? 1 : 0;
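+ /*
+ * The last instruction can fall through to the epilogue unless
+ * a ret0 stub is emitted in front of it.
+ */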
+ if (last && !(jit->seen & SEEN_RET0))
+ break;
/* j <exit> */
EMIT4_PCREL(0xa7f40000, jit->exit_ip - jit->prg);
break;
- case BPF_ST: /* mem[K] = A */
- jit->seen |= SEEN_MEM;
- /* st %r5,<K>(%r15) */
- EMIT4_DISP(0x5050f000,
- (jit->seen & SEEN_DATAREF) ? 160 + K*4 : K*4);
+ /*
+ * Branch relative (number of skipped instructions) to offset on
+ * condition.
+ *
+ * Condition code to mask mapping:
+ *
+ * CC | Description | Mask
+ * ------------------------------
+ * 0 | Operands equal | 8
+ * 1 | First operand low | 4
+ * 2 | First operand high | 2
+ * 3 | Unused | 1
+ *
+ * For s390x relative branches: ip = ip + off_bytes
+ * For BPF relative branches: insn = insn + off_insns + 1
+ *
+ * For example for s390x with offset 0 we jump to the branch
+ * instruction itself (loop) and for BPF with offset 0 we
+ * branch to the instruction behind the branch.
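+ *
+ * The mask values used below encode these bits as 0xM000, e.g.
+ * "jhe" is (8 | 2) = 0xa000 and "jne" is (4 | 2 | 1) = 0x7000.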
+ */
+ case BPF_JMP | BPF_JA: /* if (true) */
+ mask = 0xf000; /* j */
+ goto branch_oc;
+ case BPF_JMP | BPF_JSGT | BPF_K: /* ((s64) dst > (s64) imm) */
+ mask = 0x2000; /* jh */
+ goto branch_ks;
+ case BPF_JMP | BPF_JSGE | BPF_K: /* ((s64) dst >= (s64) imm) */
+ mask = 0xa000; /* jhe */
+ goto branch_ks;
+ case BPF_JMP | BPF_JGT | BPF_K: /* (dst_reg > imm) */
+ mask = 0x2000; /* jh */
+ goto branch_ku;
+ case BPF_JMP | BPF_JGE | BPF_K: /* (dst_reg >= imm) */
+ mask = 0xa000; /* jhe */
+ goto branch_ku;
+ case BPF_JMP | BPF_JNE | BPF_K: /* (dst_reg != imm) */
+ mask = 0x7000; /* jne */
+ goto branch_ku;
+ case BPF_JMP | BPF_JEQ | BPF_K: /* (dst_reg == imm) */
+ mask = 0x8000; /* je */
+ goto branch_ku;
+ case BPF_JMP | BPF_JSET | BPF_K: /* (dst_reg & imm) */
+ mask = 0x7000; /* jnz */
+ /* lgfi %w1,imm (load sign extend imm) */
+ EMIT6_IMM(0xc0010000, REG_W1, imm);
+ /* ngr %w1,%dst */
+ EMIT4(0xb9800000, REG_W1, dst_reg);
+ goto branch_oc;
+
+ case BPF_JMP | BPF_JSGT | BPF_X: /* ((s64) dst > (s64) src) */
+ mask = 0x2000; /* jh */
+ goto branch_xs;
+ case BPF_JMP | BPF_JSGE | BPF_X: /* ((s64) dst >= (s64) src) */
+ mask = 0xa000; /* jhe */
+ goto branch_xs;
+ case BPF_JMP | BPF_JGT | BPF_X: /* (dst > src) */
+ mask = 0x2000; /* jh */
+ goto branch_xu;
+ case BPF_JMP | BPF_JGE | BPF_X: /* (dst >= src) */
+ mask = 0xa000; /* jhe */
+ goto branch_xu;
+ case BPF_JMP | BPF_JNE | BPF_X: /* (dst != src) */
+ mask = 0x7000; /* jne */
+ goto branch_xu;
+ case BPF_JMP | BPF_JEQ | BPF_X: /* (dst == src) */
+ mask = 0x8000; /* je */
+ goto branch_xu;
+ case BPF_JMP | BPF_JSET | BPF_X: /* (dst & src) */
+ mask = 0x7000; /* jnz */
+ /* ngrk %w1,%dst,%src */
+ EMIT4_RRF(0xb9e40000, REG_W1, dst_reg, src_reg);
+ goto branch_oc;
+branch_ks:
+ /* lgfi %w1,imm (load sign extend imm) */
+ EMIT6_IMM(0xc0010000, REG_W1, imm);
+ /* cgrj %dst,%w1,mask,off */
+ EMIT6_PCREL(0xec000000, 0x0064, dst_reg, REG_W1, i, off, mask);
break;
- case BPF_STX: /* mem[K] = X : mov %ebx,off8(%rbp) */
- jit->seen |= SEEN_XREG | SEEN_MEM;
- /* st %r12,<K>(%r15) */
- EMIT4_DISP(0x50c0f000,
- (jit->seen & SEEN_DATAREF) ? 160 + K*4 : K*4);
+branch_ku:
+ /* lgfi %w1,imm (load sign extend imm) */
+ EMIT6_IMM(0xc0010000, REG_W1, imm);
+ /* clgrj %dst,%w1,mask,off */
+ EMIT6_PCREL(0xec000000, 0x0065, dst_reg, REG_W1, i, off, mask);
break;
- case BPF_ANC | SKF_AD_PROTOCOL: /* A = ntohs(skb->protocol); */
- BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff, protocol) != 2);
- /* lhi %r5,0 */
- EMIT4(0xa7580000);
- /* icm %r5,3,<d(protocol)>(%r2) */
- EMIT4_DISP(0xbf532000, offsetof(struct sk_buff, protocol));
+branch_xs:
+ /* cgrj %dst,%src,mask,off */
+ EMIT6_PCREL(0xec000000, 0x0064, dst_reg, src_reg, i, off, mask);
break;
- case BPF_ANC | SKF_AD_IFINDEX: /* if (!skb->dev) return 0;
- * A = skb->dev->ifindex */
- BUILD_BUG_ON(FIELD_SIZEOF(struct net_device, ifindex) != 4);
- jit->seen |= SEEN_RET0;
- /* lg %r1,<d(dev)>(%r2) */
- EMIT6_DISP(0xe3102000, 0x0004, offsetof(struct sk_buff, dev));
- /* ltgr %r1,%r1 */
- EMIT4(0xb9020011);
- /* jz <ret0> */
- EMIT4_PCREL(0xa7840000, jit->ret0_ip - jit->prg);
- /* l %r5,<d(ifindex)>(%r1) */
- EMIT4_DISP(0x58501000, offsetof(struct net_device, ifindex));
+branch_xu:
+ /* clgrj %dst,%src,mask,off */
+ EMIT6_PCREL(0xec000000, 0x0065, dst_reg, src_reg, i, off, mask);
break;
- case BPF_ANC | SKF_AD_MARK: /* A = skb->mark */
- BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff, mark) != 4);
- /* l %r5,<d(mark)>(%r2) */
- EMIT4_DISP(0x58502000, offsetof(struct sk_buff, mark));
+branch_oc:
+ /* brc mask,jmp_off (branch instruction needs 4 bytes) */
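+ /*
+ * addrs[i + 1] is the offset of the code for the next BPF instruction,
+ * so the brc emitted below starts at addrs[i + 1] - 4 and the
+ * PC-relative target is addrs[i + off + 1].
+ */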
+ jmp_off = addrs[i + off + 1] - (addrs[i + 1] - 4);
+ EMIT4_PCREL(0xa7040000 | mask << 8, jmp_off);
break;
- case BPF_ANC | SKF_AD_QUEUE: /* A = skb->queue_mapping */
- BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff, queue_mapping) != 2);
- /* lhi %r5,0 */
- EMIT4(0xa7580000);
- /* icm %r5,3,<d(queue_mapping)>(%r2) */
- EMIT4_DISP(0xbf532000, offsetof(struct sk_buff, queue_mapping));
- break;
- case BPF_ANC | SKF_AD_HATYPE: /* if (!skb->dev) return 0;
- * A = skb->dev->type */
- BUILD_BUG_ON(FIELD_SIZEOF(struct net_device, type) != 2);
- jit->seen |= SEEN_RET0;
- /* lg %r1,<d(dev)>(%r2) */
- EMIT6_DISP(0xe3102000, 0x0004, offsetof(struct sk_buff, dev));
- /* ltgr %r1,%r1 */
- EMIT4(0xb9020011);
- /* jz <ret0> */
- EMIT4_PCREL(0xa7840000, jit->ret0_ip - jit->prg);
- /* lhi %r5,0 */
- EMIT4(0xa7580000);
- /* icm %r5,3,<d(type)>(%r1) */
- EMIT4_DISP(0xbf531000, offsetof(struct net_device, type));
- break;
- case BPF_ANC | SKF_AD_RXHASH: /* A = skb->hash */
- BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff, hash) != 4);
- /* l %r5,<d(hash)>(%r2) */
- EMIT4_DISP(0x58502000, offsetof(struct sk_buff, hash));
- break;
- case BPF_ANC | SKF_AD_VLAN_TAG:
- case BPF_ANC | SKF_AD_VLAN_TAG_PRESENT:
- BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff, vlan_tci) != 2);
- BUILD_BUG_ON(VLAN_TAG_PRESENT != 0x1000);
- /* lhi %r5,0 */
- EMIT4(0xa7580000);
- /* icm %r5,3,<d(vlan_tci)>(%r2) */
- EMIT4_DISP(0xbf532000, offsetof(struct sk_buff, vlan_tci));
- if (code == (BPF_ANC | SKF_AD_VLAN_TAG)) {
- /* nill %r5,0xefff */
- EMIT4_IMM(0xa5570000, ~VLAN_TAG_PRESENT);
- } else {
- /* nill %r5,0x1000 */
- EMIT4_IMM(0xa5570000, VLAN_TAG_PRESENT);
- /* srl %r5,12 */
- EMIT4_DISP(0x88500000, 12);
- }
- break;
- case BPF_ANC | SKF_AD_PKTTYPE:
- /* lhi %r5,0 */
- EMIT4(0xa7580000);
- /* ic %r5,<d(pkt_type_offset)>(%r2) */
- EMIT4_DISP(0x43502000, PKT_TYPE_OFFSET());
- /* srl %r5,5 */
- EMIT4_DISP(0x88500000, 5);
- break;
- case BPF_ANC | SKF_AD_CPU: /* A = smp_processor_id() */
-#ifdef CONFIG_SMP
- /* l %r5,<d(cpu_nr)> */
- EMIT4_DISP(0x58500000, offsetof(struct _lowcore, cpu_nr));
-#else
- /* lhi %r5,0 */
- EMIT4(0xa7580000);
-#endif
+ /*
+ * BPF_LD
+ */
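+ /*
+ * A constant, non-negative offset selects the *_pos helper entry
+ * points; negative or runtime-computed offsets use the generic
+ * helpers.
+ */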
+ case BPF_LD | BPF_ABS | BPF_B: /* b0 = *(u8 *) (skb->data+imm) */
+ case BPF_LD | BPF_IND | BPF_B: /* b0 = *(u8 *) (skb->data+imm+src) */
+ if ((BPF_MODE(insn->code) == BPF_ABS) && (imm >= 0))
+ func_addr = __pa(sk_load_byte_pos);
+ else
+ func_addr = __pa(sk_load_byte);
+ goto call_fn;
+ case BPF_LD | BPF_ABS | BPF_H: /* b0 = *(u16 *) (skb->data+imm) */
+ case BPF_LD | BPF_IND | BPF_H: /* b0 = *(u16 *) (skb->data+imm+src) */
+ if ((BPF_MODE(insn->code) == BPF_ABS) && (imm >= 0))
+ func_addr = __pa(sk_load_half_pos);
+ else
+ func_addr = __pa(sk_load_half);
+ goto call_fn;
+ case BPF_LD | BPF_ABS | BPF_W: /* b0 = *(u32 *) (skb->data+imm) */
+ case BPF_LD | BPF_IND | BPF_W: /* b0 = *(u32 *) (skb->data+imm+src) */
+ if ((BPF_MODE(insn->code) == BPF_ABS) && (imm >= 0))
+ func_addr = __pa(sk_load_word_pos);
+ else
+ func_addr = __pa(sk_load_word);
+ goto call_fn;
+call_fn:
+ jit->seen |= SEEN_SKB | SEEN_RET0 | SEEN_FUNC;
+ REG_SET_SEEN(REG_14); /* Return address of possible func call */
+
+ /*
+ * Implicit input:
+ * BPF_REG_6 (R7) : skb pointer
+ * REG_SKB_DATA (R12): skb data pointer
+ *
+ * Calculated input:
+ * BPF_REG_2 (R3) : offset of byte(s) to fetch in skb
+ * BPF_REG_5 (R6) : return address
+ *
+ * Output:
+ * BPF_REG_0 (R14): data read from skb
+ *
+ * Scratch registers (BPF_REG_1-5)
+ */
+
+ /* Call function: llilf %w1,func_addr */
+ EMIT6_IMM(0xc00f0000, REG_W1, func_addr);
+
+ /* Offset: lgfi %b2,imm */
+ EMIT6_IMM(0xc0010000, BPF_REG_2, imm);
+ if (BPF_MODE(insn->code) == BPF_IND)
+ /* agfr %b2,%src (%src is s32 here) */
+ EMIT4(0xb9180000, BPF_REG_2, src_reg);
+
+ /* basr %b5,%w1 (%b5 is call saved) */
+ EMIT2(0x0d00, BPF_REG_5, REG_W1);
+
+ /*
+ * Note: For fast access we jump directly after the
+ * jnz instruction from bpf_jit.S
+ */
+ /* jnz <ret0> */
+ EMIT4_PCREL(0xa7740000, jit->ret0_ip - jit->prg);
break;
default: /* too complex, give up */
- goto out;
+ pr_err("Unknown opcode %02x\n", insn->code);
+ return -1;
}
- addrs[i] = jit->prg - jit->start;
- return 0;
-out:
- return -1;
+ return insn_count;
}
+/*
+ * Compile eBPF program into s390x code
+ */
+static int bpf_jit_prog(struct bpf_jit *jit, struct bpf_prog *fp)
+{
+ int i, insn_count;
+
+ jit->lit = jit->lit_start;
+ jit->prg = 0;
+
+ bpf_jit_prologue(jit);
+ for (i = 0; i < fp->len; i += insn_count) {
+ insn_count = bpf_jit_insn(jit, fp, i);
+ if (insn_count < 0)
+ return -1;
+ jit->addrs[i + 1] = jit->prg; /* Next instruction address */
+ }
+ bpf_jit_epilogue(jit);
+
+ jit->lit_start = jit->prg;
+ jit->size = jit->lit;
+ jit->size_prg = jit->prg;
+ return 0;
+}
+
+/*
+ * Classic BPF function stub. BPF programs will be converted into
+ * eBPF and then bpf_int_jit_compile() will be called.
+ */
void bpf_jit_compile(struct bpf_prog *fp)
{
- struct bpf_binary_header *header = NULL;
- unsigned long size, prg_len, lit_len;
- struct bpf_jit jit, cjit;
- unsigned int *addrs;
- int pass, i;
+}
+
+/*
+ * Compile eBPF program "fp"
+ */
+void bpf_int_jit_compile(struct bpf_prog *fp)
+{
+ struct bpf_binary_header *header;
+ struct bpf_jit jit;
+ int pass;
if (!bpf_jit_enable)
return;
- addrs = kcalloc(fp->len, sizeof(*addrs), GFP_KERNEL);
- if (addrs == NULL)
+ memset(&jit, 0, sizeof(jit));
+ jit.addrs = kcalloc(fp->len + 1, sizeof(*jit.addrs), GFP_KERNEL);
+ if (jit.addrs == NULL)
return;
- memset(&jit, 0, sizeof(cjit));
- memset(&cjit, 0, sizeof(cjit));
-
- for (pass = 0; pass < 10; pass++) {
- jit.prg = jit.start;
- jit.lit = jit.mid;
-
- bpf_jit_prologue(&jit);
- bpf_jit_noleaks(&jit, fp->insns);
- for (i = 0; i < fp->len; i++) {
- if (bpf_jit_insn(&jit, fp->insns + i, addrs, i,
- i == fp->len - 1))
- goto out;
- }
- bpf_jit_epilogue(&jit);
- if (jit.start) {
- WARN_ON(jit.prg > cjit.prg || jit.lit > cjit.lit);
- if (memcmp(&jit, &cjit, sizeof(jit)) == 0)
- break;
- } else if (jit.prg == cjit.prg && jit.lit == cjit.lit) {
- prg_len = jit.prg - jit.start;
- lit_len = jit.lit - jit.mid;
- size = prg_len + lit_len;
- if (size >= BPF_SIZE_MAX)
- goto out;
- header = bpf_jit_binary_alloc(size, &jit.start,
- 2, bpf_jit_fill_hole);
- if (!header)
- goto out;
- jit.prg = jit.mid = jit.start + prg_len;
- jit.lit = jit.end = jit.start + prg_len + lit_len;
- jit.base_ip += (unsigned long) jit.start;
- jit.exit_ip += (unsigned long) jit.start;
- jit.ret0_ip += (unsigned long) jit.start;
- }
- cjit = jit;
+ /*
+ * Three initial passes:
+ * - 1/2: Determine clobbered registers
+ * - 3: Calculate program size and addrs array
+ */
+ for (pass = 1; pass <= 3; pass++) {
+ if (bpf_jit_prog(&jit, fp))
+ goto free_addrs;
}
+ /*
+ * Final pass: Allocate and generate program
+ */
+ if (jit.size >= BPF_SIZE_MAX)
+ goto free_addrs;
+ header = bpf_jit_binary_alloc(jit.size, &jit.prg_buf, 2, jit_fill_hole);
+ if (!header)
+ goto free_addrs;
+ if (bpf_jit_prog(&jit, fp))
+ goto free_addrs;
if (bpf_jit_enable > 1) {
- bpf_jit_dump(fp->len, jit.end - jit.start, pass, jit.start);
- if (jit.start)
- print_fn_code(jit.start, jit.mid - jit.start);
+ bpf_jit_dump(fp->len, jit.size, pass, jit.prg_buf);
+ if (jit.prg_buf)
+ print_fn_code(jit.prg_buf, jit.size_prg);
}
- if (jit.start) {
+ if (jit.prg_buf) {
set_memory_ro((unsigned long)header, header->pages);
- fp->bpf_func = (void *) jit.start;
+ fp->bpf_func = (void *) jit.prg_buf;
fp->jited = true;
}
-out:
- kfree(addrs);
+free_addrs:
+ kfree(jit.addrs);
}
+/*
+ * Free eBPF program
+ */
void bpf_jit_free(struct bpf_prog *fp)
{
unsigned long addr = (unsigned long)fp->bpf_func & PAGE_MASK;
diff --git a/arch/s390/pci/pci.c b/arch/s390/pci/pci.c
index 9833620..598f023 100644
--- a/arch/s390/pci/pci.c
+++ b/arch/s390/pci/pci.c
@@ -190,6 +190,11 @@
return -ENOMEM;
WARN_ON((u64) zdev->fmb & 0xf);
+ /* reset software counters */
+ atomic64_set(&zdev->allocated_pages, 0);
+ atomic64_set(&zdev->mapped_pages, 0);
+ atomic64_set(&zdev->unmapped_pages, 0);
+
args.fmb_addr = virt_to_phys(zdev->fmb);
return mod_pci(zdev, ZPCI_MOD_FC_SET_MEASURE, 0, &args);
}
@@ -822,6 +827,7 @@
if (rc)
goto out;
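+ /* zdev->lock serializes FMB enable/disable against debugfs access */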
+ mutex_init(&zdev->lock);
if (zdev->state == ZPCI_FN_STATE_CONFIGURED) {
rc = zpci_enable_device(zdev);
if (rc)
diff --git a/arch/s390/pci/pci_debug.c b/arch/s390/pci/pci_debug.c
index c22d440..4129b0a 100644
--- a/arch/s390/pci/pci_debug.c
+++ b/arch/s390/pci/pci_debug.c
@@ -31,12 +31,25 @@
"Refresh operations",
"DMA read bytes",
"DMA write bytes",
- /* software counters */
+};
+
+static char *pci_sw_names[] = {
"Allocated pages",
"Mapped pages",
"Unmapped pages",
};
+static void pci_sw_counter_show(struct seq_file *m)
+{
+ struct zpci_dev *zdev = m->private;
+ atomic64_t *counter = &zdev->allocated_pages;
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(pci_sw_names); i++, counter++)
+ seq_printf(m, "%26s:\t%llu\n", pci_sw_names[i],
+ atomic64_read(counter));
+}
+
static int pci_perf_show(struct seq_file *m, void *v)
{
struct zpci_dev *zdev = m->private;
@@ -45,7 +58,10 @@
if (!zdev)
return 0;
+
+ mutex_lock(&zdev->lock);
if (!zdev->fmb) {
+ mutex_unlock(&zdev->lock);
seq_puts(m, "FMB statistics disabled\n");
return 0;
}
@@ -65,12 +81,9 @@
for (i = 4; i < 6; i++)
seq_printf(m, "%26s:\t%llu\n",
pci_perf_names[i], *(stat + i));
- /* software counters */
- for (i = 6; i < ARRAY_SIZE(pci_perf_names); i++)
- seq_printf(m, "%26s:\t%llu\n",
- pci_perf_names[i],
- atomic64_read((atomic64_t *) (stat + i)));
+ pci_sw_counter_show(m);
+ mutex_unlock(&zdev->lock);
return 0;
}
@@ -88,19 +101,17 @@
if (rc)
return rc;
+ mutex_lock(&zdev->lock);
switch (val) {
case 0:
rc = zpci_fmb_disable_device(zdev);
- if (rc)
- return rc;
break;
case 1:
rc = zpci_fmb_enable_device(zdev);
- if (rc)
- return rc;
break;
}
- return count;
+ mutex_unlock(&zdev->lock);
+ return rc ? rc : count;
}
static int pci_perf_seq_open(struct inode *inode, struct file *filp)
diff --git a/arch/s390/pci/pci_dma.c b/arch/s390/pci/pci_dma.c
index 4cbb29a..6fd8d58 100644
--- a/arch/s390/pci/pci_dma.c
+++ b/arch/s390/pci/pci_dma.c
@@ -300,7 +300,7 @@
flags |= ZPCI_TABLE_PROTECTED;
if (!dma_update_trans(zdev, pa, dma_addr, size, flags)) {
- atomic64_add(nr_pages, &zdev->fmb->mapped_pages);
+ atomic64_add(nr_pages, &zdev->mapped_pages);
return dma_addr + (offset & ~PAGE_MASK);
}
@@ -328,7 +328,7 @@
zpci_err_hex(&dma_addr, sizeof(dma_addr));
}
- atomic64_add(npages, &zdev->fmb->unmapped_pages);
+ atomic64_add(npages, &zdev->unmapped_pages);
iommu_page_index = (dma_addr - zdev->start_dma) >> PAGE_SHIFT;
dma_free_iommu(zdev, iommu_page_index, npages);
}
@@ -357,7 +357,7 @@
return NULL;
}
- atomic64_add(size / PAGE_SIZE, &zdev->fmb->allocated_pages);
+ atomic64_add(size / PAGE_SIZE, &zdev->allocated_pages);
if (dma_handle)
*dma_handle = map;
return (void *) pa;
@@ -370,7 +370,7 @@
struct zpci_dev *zdev = get_zdev(to_pci_dev(dev));
size = PAGE_ALIGN(size);
- atomic64_sub(size / PAGE_SIZE, &zdev->fmb->allocated_pages);
+ atomic64_sub(size / PAGE_SIZE, &zdev->allocated_pages);
s390_dma_unmap_pages(dev, dma_handle, size, DMA_BIDIRECTIONAL, NULL);
free_pages((unsigned long) pa, get_order(size));
}
diff --git a/arch/sh/include/asm/mmu_context.h b/arch/sh/include/asm/mmu_context.h
index b9d9489..9f417fe 100644
--- a/arch/sh/include/asm/mmu_context.h
+++ b/arch/sh/include/asm/mmu_context.h
@@ -99,7 +99,7 @@
{
int i;
- for (i = 0; i < num_online_cpus(); i++)
+ for_each_online_cpu(i)
cpu_context(i, mm) = NO_CONTEXT;
return 0;
diff --git a/arch/sh/kernel/smp.c b/arch/sh/kernel/smp.c
index fc5acfc..de6be00 100644
--- a/arch/sh/kernel/smp.c
+++ b/arch/sh/kernel/smp.c
@@ -363,7 +363,7 @@
smp_call_function(flush_tlb_mm_ipi, (void *)mm, 1);
} else {
int i;
- for (i = 0; i < num_online_cpus(); i++)
+ for_each_online_cpu(i)
if (smp_processor_id() != i)
cpu_context(i, mm) = 0;
}
@@ -400,7 +400,7 @@
smp_call_function(flush_tlb_range_ipi, (void *)&fd, 1);
} else {
int i;
- for (i = 0; i < num_online_cpus(); i++)
+ for_each_online_cpu(i)
if (smp_processor_id() != i)
cpu_context(i, mm) = 0;
}
@@ -443,7 +443,7 @@
smp_call_function(flush_tlb_page_ipi, (void *)&fd, 1);
} else {
int i;
- for (i = 0; i < num_online_cpus(); i++)
+ for_each_online_cpu(i)
if (smp_processor_id() != i)
cpu_context(i, vma->vm_mm) = 0;
}
diff --git a/arch/sparc/include/asm/iommu_64.h b/arch/sparc/include/asm/iommu_64.h
index e3cd449..cd0d69f 100644
--- a/arch/sparc/include/asm/iommu_64.h
+++ b/arch/sparc/include/asm/iommu_64.h
@@ -25,7 +25,7 @@
};
struct iommu {
- struct iommu_table tbl;
+ struct iommu_map_table tbl;
spinlock_t lock;
u32 dma_addr_mask;
iopte_t *page_table;
diff --git a/arch/sparc/kernel/iommu.c b/arch/sparc/kernel/iommu.c
index 9b16b34..5320689 100644
--- a/arch/sparc/kernel/iommu.c
+++ b/arch/sparc/kernel/iommu.c
@@ -13,15 +13,12 @@
#include <linux/errno.h>
#include <linux/iommu-helper.h>
#include <linux/bitmap.h>
-#include <linux/hash.h>
#include <linux/iommu-common.h>
#ifdef CONFIG_PCI
#include <linux/pci.h>
#endif
-static DEFINE_PER_CPU(unsigned int, iommu_pool_hash);
-
#include <asm/iommu.h>
#include "iommu_common.h"
@@ -49,9 +46,9 @@
"i" (ASI_PHYS_BYPASS_EC_E))
/* Must be invoked under the IOMMU lock. */
-static void iommu_flushall(struct iommu_table *iommu_table)
+static void iommu_flushall(struct iommu_map_table *iommu_map_table)
{
- struct iommu *iommu = container_of(iommu_table, struct iommu, tbl);
+ struct iommu *iommu = container_of(iommu_map_table, struct iommu, tbl);
if (iommu->iommu_flushinv) {
iommu_write(iommu->iommu_flushinv, ~(u64)0);
} else {
@@ -92,23 +89,6 @@
iopte_val(*iopte) = val;
}
-static struct iommu_tbl_ops iommu_sparc_ops = {
- .reset = iommu_flushall
-};
-
-static void setup_iommu_pool_hash(void)
-{
- unsigned int i;
- static bool do_once;
-
- if (do_once)
- return;
- do_once = true;
- for_each_possible_cpu(i)
- per_cpu(iommu_pool_hash, i) = hash_32(i, IOMMU_POOL_HASHBITS);
-}
-
-
int iommu_table_init(struct iommu *iommu, int tsbsize,
u32 dma_offset, u32 dma_addr_mask,
int numa_node)
@@ -121,7 +101,7 @@
/* Setup initial software IOMMU state. */
spin_lock_init(&iommu->lock);
iommu->ctx_lowest_free = 1;
- iommu->tbl.page_table_map_base = dma_offset;
+ iommu->tbl.table_map_base = dma_offset;
iommu->dma_addr_mask = dma_addr_mask;
/* Allocate and initialize the free area map. */
@@ -131,12 +111,10 @@
if (!iommu->tbl.map)
return -ENOMEM;
memset(iommu->tbl.map, 0, sz);
- if (tlb_type != hypervisor)
- iommu_sparc_ops.reset = NULL; /* not needed on on sun4v */
- setup_iommu_pool_hash();
iommu_tbl_pool_init(&iommu->tbl, num_tsb_entries, IO_PAGE_SHIFT,
- &iommu_sparc_ops, false, 1);
+ (tlb_type != hypervisor ? iommu_flushall : NULL),
+ false, 1, false);
/* Allocate and initialize the dummy page which we
* set inactive IO PTEs to point to.
@@ -182,7 +160,7 @@
unsigned long entry;
entry = iommu_tbl_range_alloc(dev, &iommu->tbl, npages, NULL,
- __this_cpu_read(iommu_pool_hash));
+ (unsigned long)(-1), 0);
if (unlikely(entry == DMA_ERROR_CODE))
return NULL;
@@ -249,7 +227,7 @@
return NULL;
}
- *dma_addrp = (iommu->tbl.page_table_map_base +
+ *dma_addrp = (iommu->tbl.table_map_base +
((iopte - iommu->page_table) << IO_PAGE_SHIFT));
ret = (void *) first_page;
npages = size >> IO_PAGE_SHIFT;
@@ -275,7 +253,7 @@
npages = IO_PAGE_ALIGN(size) >> IO_PAGE_SHIFT;
iommu = dev->archdata.iommu;
- iommu_tbl_range_free(&iommu->tbl, dvma, npages, false, NULL);
+ iommu_tbl_range_free(&iommu->tbl, dvma, npages, DMA_ERROR_CODE);
order = get_order(size);
if (order < 10)
@@ -315,7 +293,7 @@
if (unlikely(!base))
goto bad;
- bus_addr = (iommu->tbl.page_table_map_base +
+ bus_addr = (iommu->tbl.table_map_base +
((base - iommu->page_table) << IO_PAGE_SHIFT));
ret = bus_addr | (oaddr & ~IO_PAGE_MASK);
base_paddr = __pa(oaddr & IO_PAGE_MASK);
@@ -426,7 +404,7 @@
npages = IO_PAGE_ALIGN(bus_addr + sz) - (bus_addr & IO_PAGE_MASK);
npages >>= IO_PAGE_SHIFT;
base = iommu->page_table +
- ((bus_addr - iommu->tbl.page_table_map_base) >> IO_PAGE_SHIFT);
+ ((bus_addr - iommu->tbl.table_map_base) >> IO_PAGE_SHIFT);
bus_addr &= IO_PAGE_MASK;
spin_lock_irqsave(&iommu->lock, flags);
@@ -448,8 +426,7 @@
iommu_free_ctx(iommu, ctx);
spin_unlock_irqrestore(&iommu->lock, flags);
- iommu_tbl_range_free(&iommu->tbl, bus_addr, npages,
- false, NULL);
+ iommu_tbl_range_free(&iommu->tbl, bus_addr, npages, DMA_ERROR_CODE);
}
static int dma_4u_map_sg(struct device *dev, struct scatterlist *sglist,
@@ -497,7 +474,7 @@
max_seg_size = dma_get_max_seg_size(dev);
seg_boundary_size = ALIGN(dma_get_seg_boundary(dev) + 1,
IO_PAGE_SIZE) >> IO_PAGE_SHIFT;
- base_shift = iommu->tbl.page_table_map_base >> IO_PAGE_SHIFT;
+ base_shift = iommu->tbl.table_map_base >> IO_PAGE_SHIFT;
for_each_sg(sglist, s, nelems, i) {
unsigned long paddr, npages, entry, out_entry = 0, slen;
iopte_t *base;
@@ -511,8 +488,8 @@
/* Allocate iommu entries for that segment */
paddr = (unsigned long) SG_ENT_PHYS_ADDRESS(s);
npages = iommu_num_pages(paddr, slen, IO_PAGE_SIZE);
- entry = iommu_tbl_range_alloc(dev, &iommu->tbl, npages, &handle,
- __this_cpu_read(iommu_pool_hash));
+ entry = iommu_tbl_range_alloc(dev, &iommu->tbl, npages,
+ &handle, (unsigned long)(-1), 0);
/* Handle failure */
if (unlikely(entry == DMA_ERROR_CODE)) {
@@ -525,7 +502,7 @@
base = iommu->page_table + entry;
/* Convert entry to a dma_addr_t */
- dma_addr = iommu->tbl.page_table_map_base +
+ dma_addr = iommu->tbl.table_map_base +
(entry << IO_PAGE_SHIFT);
dma_addr |= (s->offset & ~IO_PAGE_MASK);
@@ -586,7 +563,7 @@
npages = iommu_num_pages(s->dma_address, s->dma_length,
IO_PAGE_SIZE);
- entry = (vaddr - iommu->tbl.page_table_map_base)
+ entry = (vaddr - iommu->tbl.table_map_base)
>> IO_PAGE_SHIFT;
base = iommu->page_table + entry;
@@ -594,7 +571,7 @@
iopte_make_dummy(iommu, base + j);
iommu_tbl_range_free(&iommu->tbl, vaddr, npages,
- false, NULL);
+ DMA_ERROR_CODE);
s->dma_address = DMA_ERROR_CODE;
s->dma_length = 0;
@@ -610,19 +587,18 @@
/* If contexts are being used, they are the same in all of the mappings
* we make for a particular SG.
*/
-static unsigned long fetch_sg_ctx(struct iommu *iommu,
- struct scatterlist *sg)
+static unsigned long fetch_sg_ctx(struct iommu *iommu, struct scatterlist *sg)
{
unsigned long ctx = 0;
if (iommu->iommu_ctxflush) {
iopte_t *base;
u32 bus_addr;
- struct iommu_table *tbl = &iommu->tbl;
+ struct iommu_map_table *tbl = &iommu->tbl;
bus_addr = sg->dma_address & IO_PAGE_MASK;
base = iommu->page_table +
- ((bus_addr - tbl->page_table_map_base) >> IO_PAGE_SHIFT);
+ ((bus_addr - tbl->table_map_base) >> IO_PAGE_SHIFT);
ctx = (iopte_val(*base) & IOPTE_CONTEXT) >> 47UL;
}
@@ -659,7 +635,7 @@
break;
npages = iommu_num_pages(dma_handle, len, IO_PAGE_SIZE);
- entry = ((dma_handle - iommu->tbl.page_table_map_base)
+ entry = ((dma_handle - iommu->tbl.table_map_base)
>> IO_PAGE_SHIFT);
base = iommu->page_table + entry;
@@ -671,8 +647,8 @@
for (i = 0; i < npages; i++)
iopte_make_dummy(iommu, base + i);
- iommu_tbl_range_free(&iommu->tbl, dma_handle, npages, false,
- NULL);
+ iommu_tbl_range_free(&iommu->tbl, dma_handle, npages,
+ DMA_ERROR_CODE);
sg = sg_next(sg);
}
@@ -706,10 +682,10 @@
if (iommu->iommu_ctxflush &&
strbuf->strbuf_ctxflush) {
iopte_t *iopte;
- struct iommu_table *tbl = &iommu->tbl;
+ struct iommu_map_table *tbl = &iommu->tbl;
iopte = iommu->page_table +
- ((bus_addr - tbl->page_table_map_base)>>IO_PAGE_SHIFT);
+ ((bus_addr - tbl->table_map_base)>>IO_PAGE_SHIFT);
ctx = (iopte_val(*iopte) & IOPTE_CONTEXT) >> 47UL;
}
@@ -742,10 +718,10 @@
if (iommu->iommu_ctxflush &&
strbuf->strbuf_ctxflush) {
iopte_t *iopte;
- struct iommu_table *tbl = &iommu->tbl;
+ struct iommu_map_table *tbl = &iommu->tbl;
iopte = iommu->page_table + ((sglist[0].dma_address -
- tbl->page_table_map_base) >> IO_PAGE_SHIFT);
+ tbl->table_map_base) >> IO_PAGE_SHIFT);
ctx = (iopte_val(*iopte) & IOPTE_CONTEXT) >> 47UL;
}
diff --git a/arch/sparc/kernel/ldc.c b/arch/sparc/kernel/ldc.c
index d485697..d2ae0f7 100644
--- a/arch/sparc/kernel/ldc.c
+++ b/arch/sparc/kernel/ldc.c
@@ -15,7 +15,6 @@
#include <linux/list.h>
#include <linux/init.h>
#include <linux/bitmap.h>
-#include <linux/hash.h>
#include <linux/iommu-common.h>
#include <asm/hypervisor.h>
@@ -32,7 +31,6 @@
#define COOKIE_PGSZ_CODE 0xf000000000000000ULL
#define COOKIE_PGSZ_CODE_SHIFT 60ULL
-static DEFINE_PER_CPU(unsigned int, ldc_pool_hash);
static char version[] =
DRV_MODULE_NAME ".c:v" DRV_MODULE_VERSION " (" DRV_MODULE_RELDATE ")\n";
@@ -108,7 +106,7 @@
/* Protects ldc_unmap. */
spinlock_t lock;
struct ldc_mtable_entry *page_table;
- struct iommu_table iommu_table;
+ struct iommu_map_table iommu_map_table;
};
struct ldc_channel {
@@ -1015,18 +1013,9 @@
return (cookie >> (13ULL + (szcode * 3ULL)));
}
-struct ldc_demap_arg {
- struct ldc_iommu *ldc_iommu;
- u64 cookie;
- unsigned long id;
-};
-
-static void ldc_demap(void *arg, unsigned long entry, unsigned long npages)
+static void ldc_demap(struct ldc_iommu *iommu, unsigned long id, u64 cookie,
+ unsigned long entry, unsigned long npages)
{
- struct ldc_demap_arg *ldc_demap_arg = arg;
- struct ldc_iommu *iommu = ldc_demap_arg->ldc_iommu;
- unsigned long id = ldc_demap_arg->id;
- u64 cookie = ldc_demap_arg->cookie;
struct ldc_mtable_entry *base;
unsigned long i, shift;
@@ -1043,36 +1032,17 @@
/* XXX Make this configurable... XXX */
#define LDC_IOTABLE_SIZE (8 * 1024)
-struct iommu_tbl_ops ldc_iommu_ops = {
- .cookie_to_index = ldc_cookie_to_index,
- .demap = ldc_demap,
-};
-
-static void setup_ldc_pool_hash(void)
-{
- unsigned int i;
- static bool do_once;
-
- if (do_once)
- return;
- do_once = true;
- for_each_possible_cpu(i)
- per_cpu(ldc_pool_hash, i) = hash_32(i, IOMMU_POOL_HASHBITS);
-}
-
-
static int ldc_iommu_init(const char *name, struct ldc_channel *lp)
{
unsigned long sz, num_tsb_entries, tsbsize, order;
struct ldc_iommu *ldc_iommu = &lp->iommu;
- struct iommu_table *iommu = &ldc_iommu->iommu_table;
+ struct iommu_map_table *iommu = &ldc_iommu->iommu_map_table;
struct ldc_mtable_entry *table;
unsigned long hv_err;
int err;
num_tsb_entries = LDC_IOTABLE_SIZE;
tsbsize = num_tsb_entries * sizeof(struct ldc_mtable_entry);
- setup_ldc_pool_hash();
spin_lock_init(&ldc_iommu->lock);
sz = num_tsb_entries / 8;
@@ -1083,7 +1053,9 @@
return -ENOMEM;
}
iommu_tbl_pool_init(iommu, num_tsb_entries, PAGE_SHIFT,
- &ldc_iommu_ops, false, 1);
+ NULL, false /* no large pool */,
+ 1 /* npools */,
+ true /* skip span boundary check */);
order = get_order(tsbsize);
@@ -1122,7 +1094,7 @@
static void ldc_iommu_release(struct ldc_channel *lp)
{
struct ldc_iommu *ldc_iommu = &lp->iommu;
- struct iommu_table *iommu = &ldc_iommu->iommu_table;
+ struct iommu_map_table *iommu = &ldc_iommu->iommu_map_table;
unsigned long num_tsb_entries, tsbsize, order;
(void) sun4v_ldc_set_map_table(lp->id, 0, 0);
@@ -1979,8 +1951,8 @@
{
long entry;
- entry = iommu_tbl_range_alloc(NULL, &iommu->iommu_table, npages,
- NULL, __this_cpu_read(ldc_pool_hash));
+ entry = iommu_tbl_range_alloc(NULL, &iommu->iommu_map_table,
+ npages, NULL, (unsigned long)-1, 0);
if (unlikely(entry < 0))
return NULL;
@@ -2191,17 +2163,13 @@
static void free_npages(unsigned long id, struct ldc_iommu *iommu,
u64 cookie, u64 size)
{
- unsigned long npages;
- struct ldc_demap_arg demap_arg;
-
- demap_arg.ldc_iommu = iommu;
- demap_arg.cookie = cookie;
- demap_arg.id = id;
+ unsigned long npages, entry;
npages = PAGE_ALIGN(((cookie & ~PAGE_MASK) + size)) >> PAGE_SHIFT;
- iommu_tbl_range_free(&iommu->iommu_table, cookie, npages, true,
- &demap_arg);
+ entry = ldc_cookie_to_index(cookie, iommu);
+ ldc_demap(iommu, id, cookie, entry, npages);
+ iommu_tbl_range_free(&iommu->iommu_map_table, cookie, npages, entry);
}
void ldc_unmap(struct ldc_channel *lp, struct ldc_trans_cookie *cookies,
diff --git a/arch/sparc/kernel/pci_sun4v.c b/arch/sparc/kernel/pci_sun4v.c
index 9b76b9d..d2fe57d 100644
--- a/arch/sparc/kernel/pci_sun4v.c
+++ b/arch/sparc/kernel/pci_sun4v.c
@@ -15,7 +15,6 @@
#include <linux/export.h>
#include <linux/log2.h>
#include <linux/of_device.h>
-#include <linux/hash.h>
#include <linux/iommu-common.h>
#include <asm/iommu.h>
@@ -30,7 +29,6 @@
#define DRIVER_NAME "pci_sun4v"
#define PFX DRIVER_NAME ": "
-static DEFINE_PER_CPU(unsigned int, iommu_pool_hash);
static unsigned long vpci_major = 1;
static unsigned long vpci_minor = 1;
@@ -159,13 +157,12 @@
iommu = dev->archdata.iommu;
entry = iommu_tbl_range_alloc(dev, &iommu->tbl, npages, NULL,
- __this_cpu_read(iommu_pool_hash));
+ (unsigned long)(-1), 0);
if (unlikely(entry == DMA_ERROR_CODE))
goto range_alloc_fail;
- *dma_addrp = (iommu->tbl.page_table_map_base +
- (entry << IO_PAGE_SHIFT));
+ *dma_addrp = (iommu->tbl.table_map_base + (entry << IO_PAGE_SHIFT));
ret = (void *) first_page;
first_page = __pa(first_page);
@@ -190,7 +187,7 @@
return ret;
iommu_map_fail:
- iommu_tbl_range_free(&iommu->tbl, *dma_addrp, npages, false, NULL);
+ iommu_tbl_range_free(&iommu->tbl, *dma_addrp, npages, DMA_ERROR_CODE);
range_alloc_fail:
free_pages(first_page, order);
@@ -227,9 +224,9 @@
iommu = dev->archdata.iommu;
pbm = dev->archdata.host_controller;
devhandle = pbm->devhandle;
- entry = ((dvma - iommu->tbl.page_table_map_base) >> IO_PAGE_SHIFT);
+ entry = ((dvma - iommu->tbl.table_map_base) >> IO_PAGE_SHIFT);
dma_4v_iommu_demap(&devhandle, entry, npages);
- iommu_tbl_range_free(&iommu->tbl, dvma, npages, false, NULL);
+ iommu_tbl_range_free(&iommu->tbl, dvma, npages, DMA_ERROR_CODE);
order = get_order(size);
if (order < 10)
free_pages((unsigned long)cpu, order);
@@ -257,13 +254,12 @@
npages >>= IO_PAGE_SHIFT;
entry = iommu_tbl_range_alloc(dev, &iommu->tbl, npages, NULL,
- __this_cpu_read(iommu_pool_hash));
+ (unsigned long)(-1), 0);
if (unlikely(entry == DMA_ERROR_CODE))
goto bad;
- bus_addr = (iommu->tbl.page_table_map_base +
- (entry << IO_PAGE_SHIFT));
+ bus_addr = (iommu->tbl.table_map_base + (entry << IO_PAGE_SHIFT));
ret = bus_addr | (oaddr & ~IO_PAGE_MASK);
base_paddr = __pa(oaddr & IO_PAGE_MASK);
prot = HV_PCI_MAP_ATTR_READ;
@@ -292,7 +288,7 @@
return DMA_ERROR_CODE;
iommu_map_fail:
- iommu_tbl_range_free(&iommu->tbl, bus_addr, npages, false, NULL);
+ iommu_tbl_range_free(&iommu->tbl, bus_addr, npages, DMA_ERROR_CODE);
return DMA_ERROR_CODE;
}
@@ -319,9 +315,9 @@
npages = IO_PAGE_ALIGN(bus_addr + sz) - (bus_addr & IO_PAGE_MASK);
npages >>= IO_PAGE_SHIFT;
bus_addr &= IO_PAGE_MASK;
- entry = (bus_addr - iommu->tbl.page_table_map_base) >> IO_PAGE_SHIFT;
+ entry = (bus_addr - iommu->tbl.table_map_base) >> IO_PAGE_SHIFT;
dma_4v_iommu_demap(&devhandle, entry, npages);
- iommu_tbl_range_free(&iommu->tbl, bus_addr, npages, false, NULL);
+ iommu_tbl_range_free(&iommu->tbl, bus_addr, npages, DMA_ERROR_CODE);
}
static int dma_4v_map_sg(struct device *dev, struct scatterlist *sglist,
@@ -363,7 +359,7 @@
max_seg_size = dma_get_max_seg_size(dev);
seg_boundary_size = ALIGN(dma_get_seg_boundary(dev) + 1,
IO_PAGE_SIZE) >> IO_PAGE_SHIFT;
- base_shift = iommu->tbl.page_table_map_base >> IO_PAGE_SHIFT;
+ base_shift = iommu->tbl.table_map_base >> IO_PAGE_SHIFT;
for_each_sg(sglist, s, nelems, i) {
unsigned long paddr, npages, entry, out_entry = 0, slen;
@@ -376,8 +372,8 @@
/* Allocate iommu entries for that segment */
paddr = (unsigned long) SG_ENT_PHYS_ADDRESS(s);
npages = iommu_num_pages(paddr, slen, IO_PAGE_SIZE);
- entry = iommu_tbl_range_alloc(dev, &iommu->tbl, npages, &handle,
- __this_cpu_read(iommu_pool_hash));
+ entry = iommu_tbl_range_alloc(dev, &iommu->tbl, npages,
+ &handle, (unsigned long)(-1), 0);
/* Handle failure */
if (unlikely(entry == DMA_ERROR_CODE)) {
@@ -390,8 +386,7 @@
iommu_batch_new_entry(entry);
/* Convert entry to a dma_addr_t */
- dma_addr = iommu->tbl.page_table_map_base +
- (entry << IO_PAGE_SHIFT);
+ dma_addr = iommu->tbl.table_map_base + (entry << IO_PAGE_SHIFT);
dma_addr |= (s->offset & ~IO_PAGE_MASK);
/* Insert into HW table */
@@ -456,7 +451,7 @@
npages = iommu_num_pages(s->dma_address, s->dma_length,
IO_PAGE_SIZE);
iommu_tbl_range_free(&iommu->tbl, vaddr, npages,
- false, NULL);
+ DMA_ERROR_CODE);
/* XXX demap? XXX */
s->dma_address = DMA_ERROR_CODE;
s->dma_length = 0;
@@ -492,16 +487,16 @@
dma_addr_t dma_handle = sg->dma_address;
unsigned int len = sg->dma_length;
unsigned long npages;
- struct iommu_table *tbl = &iommu->tbl;
+ struct iommu_map_table *tbl = &iommu->tbl;
unsigned long shift = IO_PAGE_SHIFT;
if (!len)
break;
npages = iommu_num_pages(dma_handle, len, IO_PAGE_SIZE);
- entry = ((dma_handle - tbl->page_table_map_base) >> shift);
+ entry = ((dma_handle - tbl->table_map_base) >> shift);
dma_4v_iommu_demap(&devhandle, entry, npages);
iommu_tbl_range_free(&iommu->tbl, dma_handle, npages,
- false, NULL);
+ DMA_ERROR_CODE);
sg = sg_next(sg);
}
@@ -517,8 +512,6 @@
.unmap_sg = dma_4v_unmap_sg,
};
-static struct iommu_tbl_ops dma_4v_iommu_ops;
-
static void pci_sun4v_scan_bus(struct pci_pbm_info *pbm, struct device *parent)
{
struct property *prop;
@@ -533,7 +526,7 @@
}
static unsigned long probe_existing_entries(struct pci_pbm_info *pbm,
- struct iommu_table *iommu)
+ struct iommu_map_table *iommu)
{
struct iommu_pool *pool;
unsigned long i, pool_nr, cnt = 0;
@@ -541,7 +534,7 @@
devhandle = pbm->devhandle;
for (pool_nr = 0; pool_nr < iommu->nr_pools; pool_nr++) {
- pool = &(iommu->arena_pool[pool_nr]);
+ pool = &(iommu->pools[pool_nr]);
for (i = pool->start; i <= pool->end; i++) {
unsigned long ret, io_attrs, ra;
@@ -587,8 +580,9 @@
dma_offset = vdma[0];
/* Setup initial software IOMMU state. */
+ spin_lock_init(&iommu->lock);
iommu->ctx_lowest_free = 1;
- iommu->tbl.page_table_map_base = dma_offset;
+ iommu->tbl.table_map_base = dma_offset;
iommu->dma_addr_mask = dma_mask;
/* Allocate and initialize the free area map. */
@@ -600,8 +594,9 @@
return -ENOMEM;
}
iommu_tbl_pool_init(&iommu->tbl, num_tsb_entries, IO_PAGE_SHIFT,
- &dma_4v_iommu_ops, false /* no large_pool */,
- 0 /* default npools */);
+ NULL, false /* no large_pool */,
+ 0 /* default npools */,
+ false /* want span boundary checking */);
sz = probe_existing_entries(pbm, &iommu->tbl);
if (sz)
printk("%s: Imported %lu TSB entries from OBP\n",
@@ -1001,17 +996,8 @@
.probe = pci_sun4v_probe,
};
-static void setup_iommu_pool_hash(void)
-{
- unsigned int i;
-
- for_each_possible_cpu(i)
- per_cpu(iommu_pool_hash, i) = hash_32(i, IOMMU_POOL_HASHBITS);
-}
-
static int __init pci_sun4v_init(void)
{
- setup_iommu_pool_hash();
return platform_driver_register(&pci_sun4v_driver);
}
diff --git a/arch/sparc/kernel/time_32.c b/arch/sparc/kernel/time_32.c
index 18147a5..8caf45e 100644
--- a/arch/sparc/kernel/time_32.c
+++ b/arch/sparc/kernel/time_32.c
@@ -194,7 +194,7 @@
static void percpu_ce_setup(enum clock_event_mode mode,
struct clock_event_device *evt)
{
- int cpu = __first_cpu(evt->cpumask);
+ int cpu = cpumask_first(evt->cpumask);
switch (mode) {
case CLOCK_EVT_MODE_PERIODIC:
@@ -214,7 +214,7 @@
static int percpu_ce_set_next_event(unsigned long delta,
struct clock_event_device *evt)
{
- int cpu = __first_cpu(evt->cpumask);
+ int cpu = cpumask_first(evt->cpumask);
unsigned int next = (unsigned int)delta;
sparc_config.load_profile_irq(cpu, next);
diff --git a/arch/tile/kernel/setup.c b/arch/tile/kernel/setup.c
index 7833b2c..6873f00 100644
--- a/arch/tile/kernel/setup.c
+++ b/arch/tile/kernel/setup.c
@@ -774,7 +774,7 @@
* though, there'll be no lowmem, so we just alloc_bootmem
* the memmap. There will be no percpu memory either.
*/
- if (i != 0 && cpu_isset(i, isolnodes)) {
+ if (i != 0 && cpumask_test_cpu(i, &isolnodes)) {
node_memmap_pfn[i] =
alloc_bootmem_pfn(0, memmap_size, 0);
BUG_ON(node_percpu[i] != 0);
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index d43e7e1..6049d58 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -178,7 +178,7 @@
config NEED_DMA_MAP_STATE
def_bool y
- depends on X86_64 || INTEL_IOMMU || DMA_API_DEBUG
+ depends on X86_64 || INTEL_IOMMU || DMA_API_DEBUG || SWIOTLB
config NEED_SG_DMA_LENGTH
def_bool y
@@ -1421,6 +1421,16 @@
source "mm/Kconfig"
+config X86_PMEM_LEGACY
+ bool "Support non-standard NVDIMMs and ADR protected memory"
+ help
+ Treat memory marked using the non-standard e820 type of 12 as used
+ by the Intel Sandy Bridge-EP reference BIOS as protected memory.
+ The kernel will offer these regions to the 'pmem' driver so
+ they can be used for persistent storage.
+
+ Say Y if unsure.
+
config HIGHPTE
bool "Allocate 3rd-level pagetables from highmem"
depends on HIGHMEM
diff --git a/arch/x86/Kconfig.debug b/arch/x86/Kconfig.debug
index 20028da..72484a6 100644
--- a/arch/x86/Kconfig.debug
+++ b/arch/x86/Kconfig.debug
@@ -43,10 +43,6 @@
with klogd/syslogd or the X server. You should normally N here,
unless you want to debug such a crash.
-config EARLY_PRINTK_INTEL_MID
- bool "Early printk for Intel MID platform support"
- depends on EARLY_PRINTK && X86_INTEL_MID
-
config EARLY_PRINTK_DBGP
bool "Early printk via EHCI debug port"
depends on EARLY_PRINTK && PCI
diff --git a/arch/x86/include/asm/intel-mid.h b/arch/x86/include/asm/intel-mid.h
index 705d357..7c5af12 100644
--- a/arch/x86/include/asm/intel-mid.h
+++ b/arch/x86/include/asm/intel-mid.h
@@ -136,9 +136,6 @@
#define SFI_MTMR_MAX_NUM 8
#define SFI_MRTC_MAX 8
-extern struct console early_hsu_console;
-extern void hsu_early_console_init(const char *);
-
extern void intel_scu_devices_create(void);
extern void intel_scu_devices_destroy(void);
diff --git a/arch/x86/include/asm/serial.h b/arch/x86/include/asm/serial.h
index 460b84f..8378b8c 100644
--- a/arch/x86/include/asm/serial.h
+++ b/arch/x86/include/asm/serial.h
@@ -12,11 +12,11 @@
/* Standard COM flags (except for COM4, because of the 8514 problem) */
#ifdef CONFIG_SERIAL_DETECT_IRQ
-# define STD_COMX_FLAGS (ASYNC_BOOT_AUTOCONF | ASYNC_SKIP_TEST | ASYNC_AUTO_IRQ)
-# define STD_COM4_FLAGS (ASYNC_BOOT_AUTOCONF | 0 | ASYNC_AUTO_IRQ)
+# define STD_COMX_FLAGS (UPF_BOOT_AUTOCONF | UPF_SKIP_TEST | UPF_AUTO_IRQ)
+# define STD_COM4_FLAGS (UPF_BOOT_AUTOCONF | 0 | UPF_AUTO_IRQ)
#else
-# define STD_COMX_FLAGS (ASYNC_BOOT_AUTOCONF | ASYNC_SKIP_TEST | 0 )
-# define STD_COM4_FLAGS (ASYNC_BOOT_AUTOCONF | 0 | 0 )
+# define STD_COMX_FLAGS (UPF_BOOT_AUTOCONF | UPF_SKIP_TEST | 0 )
+# define STD_COM4_FLAGS (UPF_BOOT_AUTOCONF | 0 | 0 )
#endif
#define SERIAL_PORT_DFNS \
diff --git a/arch/x86/include/uapi/asm/e820.h b/arch/x86/include/uapi/asm/e820.h
index d993e33..960a8a9 100644
--- a/arch/x86/include/uapi/asm/e820.h
+++ b/arch/x86/include/uapi/asm/e820.h
@@ -33,6 +33,16 @@
#define E820_NVS 4
#define E820_UNUSABLE 5
+/*
+ * This is a non-standardized way to represent ADR or NVDIMM regions that
+ * persist over a reboot. The kernel will ignore their special capabilities
+ * unless the CONFIG_X86_PMEM_LEGACY=y option is set.
+ *
+ * ( Note that older platforms also used 6 for the same type of memory,
+ * but newer versions switched to 12 as 6 was assigned differently. Some
+ * time they will learn... )
+ */
+#define E820_PRAM 12
/*
* reserved RAM used by kernel itself
diff --git a/arch/x86/include/uapi/asm/hyperv.h b/arch/x86/include/uapi/asm/hyperv.h
index 90c458e..ce6068d 100644
--- a/arch/x86/include/uapi/asm/hyperv.h
+++ b/arch/x86/include/uapi/asm/hyperv.h
@@ -225,6 +225,8 @@
#define HV_STATUS_INVALID_HYPERCALL_CODE 2
#define HV_STATUS_INVALID_HYPERCALL_INPUT 3
#define HV_STATUS_INVALID_ALIGNMENT 4
+#define HV_STATUS_INSUFFICIENT_MEMORY 11
+#define HV_STATUS_INVALID_CONNECTION_ID 18
#define HV_STATUS_INSUFFICIENT_BUFFERS 19
typedef struct _HV_REFERENCE_TSC_PAGE {
diff --git a/arch/x86/include/uapi/asm/msr-index.h b/arch/x86/include/uapi/asm/msr-index.h
index 1a4eae6..c469490 100644
--- a/arch/x86/include/uapi/asm/msr-index.h
+++ b/arch/x86/include/uapi/asm/msr-index.h
@@ -61,6 +61,9 @@
#define MSR_OFFCORE_RSP_1 0x000001a7
#define MSR_NHM_TURBO_RATIO_LIMIT 0x000001ad
#define MSR_IVT_TURBO_RATIO_LIMIT 0x000001ae
+#define MSR_TURBO_RATIO_LIMIT 0x000001ad
+#define MSR_TURBO_RATIO_LIMIT1 0x000001ae
+#define MSR_TURBO_RATIO_LIMIT2 0x000001af
#define MSR_LBR_SELECT 0x000001c8
#define MSR_LBR_TOS 0x000001c9
@@ -165,6 +168,11 @@
#define MSR_PP1_ENERGY_STATUS 0x00000641
#define MSR_PP1_POLICY 0x00000642
+#define MSR_PKG_WEIGHTED_CORE_C0_RES 0x00000658
+#define MSR_PKG_ANY_CORE_C0_RES 0x00000659
+#define MSR_PKG_ANY_GFXE_C0_RES 0x0000065A
+#define MSR_PKG_BOTH_CORE_GFXE_C0_RES 0x0000065B
+
#define MSR_CORE_C1_RES 0x00000660
#define MSR_CC6_DEMOTION_POLICY_CONFIG 0x00000668
diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index c887cd9..9bcd0b5 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -95,6 +95,7 @@
obj-$(CONFIG_PARAVIRT) += paravirt.o paravirt_patch_$(BITS).o
obj-$(CONFIG_PARAVIRT_SPINLOCKS)+= paravirt-spinlocks.o
obj-$(CONFIG_PARAVIRT_CLOCK) += pvclock.o
+obj-$(CONFIG_X86_PMEM_LEGACY) += pmem.o
obj-$(CONFIG_PCSPKR_PLATFORM) += pcspeaker.o
diff --git a/arch/x86/kernel/apic/x2apic_cluster.c b/arch/x86/kernel/apic/x2apic_cluster.c
index d9d0bd2..ab3219b 100644
--- a/arch/x86/kernel/apic/x2apic_cluster.c
+++ b/arch/x86/kernel/apic/x2apic_cluster.c
@@ -171,8 +171,8 @@
for_each_online_cpu(cpu) {
if (x2apic_cluster(this_cpu) != x2apic_cluster(cpu))
continue;
- __cpu_clear(this_cpu, per_cpu(cpus_in_cluster, cpu));
- __cpu_clear(cpu, per_cpu(cpus_in_cluster, this_cpu));
+ cpumask_clear_cpu(this_cpu, per_cpu(cpus_in_cluster, cpu));
+ cpumask_clear_cpu(cpu, per_cpu(cpus_in_cluster, this_cpu));
}
free_cpumask_var(per_cpu(cpus_in_cluster, this_cpu));
free_cpumask_var(per_cpu(ipi_mask, this_cpu));
diff --git a/arch/x86/kernel/cpu/perf_event.h b/arch/x86/kernel/cpu/perf_event.h
index 329f035..6ac5cb7 100644
--- a/arch/x86/kernel/cpu/perf_event.h
+++ b/arch/x86/kernel/cpu/perf_event.h
@@ -65,15 +65,15 @@
/*
* struct hw_perf_event.flags flags
*/
-#define PERF_X86_EVENT_PEBS_LDLAT 0x1 /* ld+ldlat data address sampling */
-#define PERF_X86_EVENT_PEBS_ST 0x2 /* st data address sampling */
-#define PERF_X86_EVENT_PEBS_ST_HSW 0x4 /* haswell style datala, store */
-#define PERF_X86_EVENT_COMMITTED 0x8 /* event passed commit_txn */
-#define PERF_X86_EVENT_PEBS_LD_HSW 0x10 /* haswell style datala, load */
-#define PERF_X86_EVENT_PEBS_NA_HSW 0x20 /* haswell style datala, unknown */
-#define PERF_X86_EVENT_EXCL 0x40 /* HT exclusivity on counter */
-#define PERF_X86_EVENT_DYNAMIC 0x80 /* dynamic alloc'd constraint */
-#define PERF_X86_EVENT_RDPMC_ALLOWED 0x40 /* grant rdpmc permission */
+#define PERF_X86_EVENT_PEBS_LDLAT 0x0001 /* ld+ldlat data address sampling */
+#define PERF_X86_EVENT_PEBS_ST 0x0002 /* st data address sampling */
+#define PERF_X86_EVENT_PEBS_ST_HSW 0x0004 /* haswell style datala, store */
+#define PERF_X86_EVENT_COMMITTED 0x0008 /* event passed commit_txn */
+#define PERF_X86_EVENT_PEBS_LD_HSW 0x0010 /* haswell style datala, load */
+#define PERF_X86_EVENT_PEBS_NA_HSW 0x0020 /* haswell style datala, unknown */
+#define PERF_X86_EVENT_EXCL 0x0040 /* HT exclusivity on counter */
+#define PERF_X86_EVENT_DYNAMIC 0x0080 /* dynamic alloc'd constraint */
+#define PERF_X86_EVENT_RDPMC_ALLOWED 0x0100 /* grant rdpmc permission */
struct amd_nb {
diff --git a/arch/x86/kernel/cpu/perf_event_intel.c b/arch/x86/kernel/cpu/perf_event_intel.c
index 9da2400..219d3fb 100644
--- a/arch/x86/kernel/cpu/perf_event_intel.c
+++ b/arch/x86/kernel/cpu/perf_event_intel.c
@@ -3275,7 +3275,7 @@
hw_cache_extra_regs[C(NODE)][C(OP_WRITE)][C(RESULT_ACCESS)] = HSW_DEMAND_WRITE|
BDW_L3_MISS_LOCAL|HSW_SNOOP_DRAM;
- intel_pmu_lbr_init_snb();
+ intel_pmu_lbr_init_hsw();
x86_pmu.event_constraints = intel_bdw_event_constraints;
x86_pmu.pebs_constraints = intel_hsw_pebs_event_constraints;
diff --git a/arch/x86/kernel/cpu/perf_event_intel_ds.c b/arch/x86/kernel/cpu/perf_event_intel_ds.c
index ca69ea5..813f75d 100644
--- a/arch/x86/kernel/cpu/perf_event_intel_ds.c
+++ b/arch/x86/kernel/cpu/perf_event_intel_ds.c
@@ -558,6 +558,8 @@
INTEL_FLAGS_UEVENT_CONSTRAINT(0x00c5, 0x1), /* BR_INST_RETIRED.MISPRED */
INTEL_FLAGS_UEVENT_CONSTRAINT(0x1fc7, 0x1), /* SIMD_INST_RETURED.ANY */
INTEL_FLAGS_EVENT_CONSTRAINT(0xcb, 0x1), /* MEM_LOAD_RETIRED.* */
+ /* INST_RETIRED.ANY_P, inv=1, cmask=16 (cycles:p). */
+ INTEL_FLAGS_EVENT_CONSTRAINT(0x108000c0, 0x01),
EVENT_CONSTRAINT_END
};
@@ -565,6 +567,8 @@
INTEL_FLAGS_UEVENT_CONSTRAINT(0x00c0, 0x1), /* INST_RETIRED.ANY */
INTEL_FLAGS_UEVENT_CONSTRAINT(0x00c5, 0x1), /* MISPREDICTED_BRANCH_RETIRED */
INTEL_FLAGS_EVENT_CONSTRAINT(0xcb, 0x1), /* MEM_LOAD_RETIRED.* */
+ /* INST_RETIRED.ANY_P, inv=1, cmask=16 (cycles:p). */
+ INTEL_FLAGS_EVENT_CONSTRAINT(0x108000c0, 0x01),
EVENT_CONSTRAINT_END
};
@@ -588,6 +592,8 @@
INTEL_FLAGS_UEVENT_CONSTRAINT(0x20c8, 0xf), /* ITLB_MISS_RETIRED */
INTEL_FLAGS_EVENT_CONSTRAINT(0xcb, 0xf), /* MEM_LOAD_RETIRED.* */
INTEL_FLAGS_EVENT_CONSTRAINT(0xf7, 0xf), /* FP_ASSIST.* */
+ /* INST_RETIRED.ANY_P, inv=1, cmask=16 (cycles:p). */
+ INTEL_FLAGS_EVENT_CONSTRAINT(0x108000c0, 0x0f),
EVENT_CONSTRAINT_END
};
@@ -603,6 +609,8 @@
INTEL_FLAGS_UEVENT_CONSTRAINT(0x20c8, 0xf), /* ITLB_MISS_RETIRED */
INTEL_FLAGS_EVENT_CONSTRAINT(0xcb, 0xf), /* MEM_LOAD_RETIRED.* */
INTEL_FLAGS_EVENT_CONSTRAINT(0xf7, 0xf), /* FP_ASSIST.* */
+ /* INST_RETIRED.ANY_P, inv=1, cmask=16 (cycles:p). */
+ INTEL_FLAGS_EVENT_CONSTRAINT(0x108000c0, 0x0f),
EVENT_CONSTRAINT_END
};
diff --git a/arch/x86/kernel/cpu/perf_event_intel_pt.c b/arch/x86/kernel/cpu/perf_event_intel_pt.c
index f277064..ffe666c 100644
--- a/arch/x86/kernel/cpu/perf_event_intel_pt.c
+++ b/arch/x86/kernel/cpu/perf_event_intel_pt.c
@@ -988,39 +988,36 @@
int ret = -EBUSY;
if (pt->handle.event)
- goto out;
+ goto fail;
buf = perf_aux_output_begin(&pt->handle, event);
- if (!buf) {
- ret = -EINVAL;
- goto out;
- }
+ ret = -EINVAL;
+ if (!buf)
+ goto fail_stop;
pt_buffer_reset_offsets(buf, pt->handle.head);
if (!buf->snapshot) {
ret = pt_buffer_reset_markers(buf, &pt->handle);
- if (ret) {
- perf_aux_output_end(&pt->handle, 0, true);
- goto out;
- }
+ if (ret)
+ goto fail_end_stop;
}
if (mode & PERF_EF_START) {
pt_event_start(event, 0);
- if (hwc->state == PERF_HES_STOPPED) {
- pt_event_del(event, 0);
- ret = -EBUSY;
- }
+ ret = -EBUSY;
+ if (hwc->state == PERF_HES_STOPPED)
+ goto fail_end_stop;
} else {
hwc->state = PERF_HES_STOPPED;
}
- ret = 0;
-out:
+ return 0;
- if (ret)
- hwc->state = PERF_HES_STOPPED;
-
+fail_end_stop:
+ perf_aux_output_end(&pt->handle, 0, true);
+fail_stop:
+ hwc->state = PERF_HES_STOPPED;
+fail:
return ret;
}
diff --git a/arch/x86/kernel/cpu/perf_event_intel_rapl.c b/arch/x86/kernel/cpu/perf_event_intel_rapl.c
index c4bb8b8..999289b9 100644
--- a/arch/x86/kernel/cpu/perf_event_intel_rapl.c
+++ b/arch/x86/kernel/cpu/perf_event_intel_rapl.c
@@ -62,6 +62,14 @@
#define RAPL_IDX_PP1_NRG_STAT 3 /* gpu */
#define INTEL_RAPL_PP1 0x4 /* pseudo-encoding */
+#define NR_RAPL_DOMAINS 0x4
+static const char *rapl_domain_names[NR_RAPL_DOMAINS] __initconst = {
+ "pp0-core",
+ "package",
+ "dram",
+ "pp1-gpu",
+};
+
/* Clients have PP0, PKG */
#define RAPL_IDX_CLN (1<<RAPL_IDX_PP0_NRG_STAT|\
1<<RAPL_IDX_PKG_NRG_STAT|\
@@ -112,7 +120,6 @@
struct rapl_pmu {
spinlock_t lock;
- int hw_unit; /* 1/2^hw_unit Joule */
int n_active; /* number of active events */
struct list_head active_list;
struct pmu *pmu; /* pointer to rapl_pmu_class */
@@ -120,6 +127,7 @@
struct hrtimer hrtimer;
};
+static int rapl_hw_unit[NR_RAPL_DOMAINS] __read_mostly; /* 1/2^hw_unit Joule */
static struct pmu rapl_pmu_class;
static cpumask_t rapl_cpu_mask;
static int rapl_cntr_mask;
@@ -127,6 +135,7 @@
static DEFINE_PER_CPU(struct rapl_pmu *, rapl_pmu);
static DEFINE_PER_CPU(struct rapl_pmu *, rapl_pmu_to_free);
+static struct x86_pmu_quirk *rapl_quirks;
static inline u64 rapl_read_counter(struct perf_event *event)
{
u64 raw;
@@ -134,15 +143,28 @@
return raw;
}
-static inline u64 rapl_scale(u64 v)
+#define rapl_add_quirk(func_) \
+do { \
+ static struct x86_pmu_quirk __quirk __initdata = { \
+ .func = func_, \
+ }; \
+ __quirk.next = rapl_quirks; \
+ rapl_quirks = &__quirk; \
+} while (0)
+
+static inline u64 rapl_scale(u64 v, int cfg)
{
+ if (cfg > NR_RAPL_DOMAINS) {
+ pr_warn("invalid domain %d, failed to scale data\n", cfg);
+ return v;
+ }
/*
* scale delta to smallest unit (1/2^32)
* users must then scale back: count * 1/(1e9*2^32) to get Joules
* or use ldexp(count, -32).
* Watts = Joules/Time delta
*/
- return v << (32 - __this_cpu_read(rapl_pmu)->hw_unit);
+ return v << (32 - rapl_hw_unit[cfg - 1]);
}
static u64 rapl_event_update(struct perf_event *event)
@@ -173,7 +195,7 @@
delta = (new_raw_count << shift) - (prev_raw_count << shift);
delta >>= shift;
- sdelta = rapl_scale(delta);
+ sdelta = rapl_scale(delta, event->hw.config);
local64_add(sdelta, &event->count);
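The scaling above is easier to follow with numbers. A minimal sketch, not part of the patch, assuming a hypothetical energy-unit exponent of 14 (i.e. 1/2^14 Joule per raw count) read from MSR_RAPL_POWER_UNIT:

/*
 * Sketch only: mirrors the arithmetic of rapl_scale().  hw_unit_exp = 14
 * is an assumed example value, not taken from the patch.
 */
static u64 example_rapl_scale(u64 raw_delta, int hw_unit_exp)
{
        return raw_delta << (32 - hw_unit_exp); /* now in units of 2^-32 J */
}

/*
 * example_rapl_scale(32768, 14) == 32768 << 18 == 2 * 2^32, i.e. 2 Joules
 * once the consumer applies the fixed 2^-32 scale.  Shifting every domain
 * into the same 1/2^32 unit is what lets rapl_hw_unit[] differ per domain
 * (e.g. the Haswell server DRAM quirk forcing an exponent of 16) without
 * changing the exported event scale.
 */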
@@ -546,12 +568,22 @@
cpumask_set_cpu(cpu, &rapl_cpu_mask);
}
+static __init void rapl_hsw_server_quirk(void)
+{
+ /*
+ * DRAM domain on HSW server has fixed energy unit which can be
+ * different than the unit from power unit MSR.
+ * "Intel Xeon Processor E5-1600 and E5-2600 v3 Product Families, V2
+ * of 2. Datasheet, September 2014, Reference Number: 330784-001 "
+ */
+ rapl_hw_unit[RAPL_IDX_RAM_NRG_STAT] = 16;
+}
+
static int rapl_cpu_prepare(int cpu)
{
struct rapl_pmu *pmu = per_cpu(rapl_pmu, cpu);
int phys_id = topology_physical_package_id(cpu);
u64 ms;
- u64 msr_rapl_power_unit_bits;
if (pmu)
return 0;
@@ -559,24 +591,13 @@
if (phys_id < 0)
return -1;
- /* protect rdmsrl() to handle virtualization */
- if (rdmsrl_safe(MSR_RAPL_POWER_UNIT, &msr_rapl_power_unit_bits))
- return -1;
-
pmu = kzalloc_node(sizeof(*pmu), GFP_KERNEL, cpu_to_node(cpu));
if (!pmu)
return -1;
-
spin_lock_init(&pmu->lock);
INIT_LIST_HEAD(&pmu->active_list);
- /*
- * grab power unit as: 1/2^unit Joules
- *
- * we cache in local PMU instance
- */
- pmu->hw_unit = (msr_rapl_power_unit_bits >> 8) & 0x1FULL;
pmu->pmu = &rapl_pmu_class;
/*
@@ -586,8 +607,8 @@
* divide interval by 2 to avoid lockstep (2 * 100)
* if hw unit is 32, then we use 2 ms 1/200/2
*/
- if (pmu->hw_unit < 32)
- ms = (1000 / (2 * 100)) * (1ULL << (32 - pmu->hw_unit - 1));
+ if (rapl_hw_unit[0] < 32)
+ ms = (1000 / (2 * 100)) * (1ULL << (32 - rapl_hw_unit[0] - 1));
else
ms = 2;
@@ -655,6 +676,20 @@
return NOTIFY_OK;
}
+static int rapl_check_hw_unit(void)
+{
+ u64 msr_rapl_power_unit_bits;
+ int i;
+
+ /* protect rdmsrl() to handle virtualization */
+ if (rdmsrl_safe(MSR_RAPL_POWER_UNIT, &msr_rapl_power_unit_bits))
+ return -1;
+ for (i = 0; i < NR_RAPL_DOMAINS; i++)
+ rapl_hw_unit[i] = (msr_rapl_power_unit_bits >> 8) & 0x1FULL;
+
+ return 0;
+}
+
static const struct x86_cpu_id rapl_cpu_match[] = {
[0] = { .vendor = X86_VENDOR_INTEL, .family = 6 },
[1] = {},
@@ -664,6 +699,8 @@
{
struct rapl_pmu *pmu;
int cpu, ret;
+ struct x86_pmu_quirk *quirk;
+ int i;
/*
* check for Intel processor family 6
@@ -678,6 +715,11 @@
rapl_cntr_mask = RAPL_IDX_CLN;
rapl_pmu_events_group.attrs = rapl_events_cln_attr;
break;
+ case 63: /* Haswell-Server */
+ rapl_add_quirk(rapl_hsw_server_quirk);
+ rapl_cntr_mask = RAPL_IDX_SRV;
+ rapl_pmu_events_group.attrs = rapl_events_srv_attr;
+ break;
case 60: /* Haswell */
case 69: /* Haswell-Celeron */
rapl_cntr_mask = RAPL_IDX_HSW;
@@ -693,7 +735,13 @@
/* unsupported */
return 0;
}
+ ret = rapl_check_hw_unit();
+ if (ret)
+ return ret;
+ /* run cpu model quirks */
+ for (quirk = rapl_quirks; quirk; quirk = quirk->next)
+ quirk->func();
cpu_notifier_register_begin();
for_each_online_cpu(cpu) {
@@ -714,14 +762,18 @@
pmu = __this_cpu_read(rapl_pmu);
- pr_info("RAPL PMU detected, hw unit 2^-%d Joules,"
+ pr_info("RAPL PMU detected,"
" API unit is 2^-32 Joules,"
" %d fixed counters"
" %llu ms ovfl timer\n",
- pmu->hw_unit,
hweight32(rapl_cntr_mask),
ktime_to_ms(pmu->timer_interval));
-
+ for (i = 0; i < NR_RAPL_DOMAINS; i++) {
+ if (rapl_cntr_mask & (1 << i)) {
+ pr_info("hw unit of domain %s 2^-%d Joules\n",
+ rapl_domain_names[i], rapl_hw_unit[i]);
+ }
+ }
out:
cpu_notifier_register_done();
diff --git a/arch/x86/kernel/e820.c b/arch/x86/kernel/e820.c
index 7d46bb2..e2ce85d 100644
--- a/arch/x86/kernel/e820.c
+++ b/arch/x86/kernel/e820.c
@@ -149,6 +149,9 @@
case E820_UNUSABLE:
printk(KERN_CONT "unusable");
break;
+ case E820_PRAM:
+ printk(KERN_CONT "persistent (type %u)", type);
+ break;
default:
printk(KERN_CONT "type %u", type);
break;
@@ -343,7 +346,7 @@
* continue building up new bios map based on this
* information
*/
- if (current_type != last_type) {
+ if (current_type != last_type || current_type == E820_PRAM) {
if (last_type != 0) {
new_bios[new_bios_entry].size =
change_point[chgidx]->addr - last_addr;
@@ -688,6 +691,7 @@
register_nosave_region(pfn, PFN_UP(ei->addr));
pfn = PFN_DOWN(ei->addr + ei->size);
+
if (ei->type != E820_RAM && ei->type != E820_RESERVED_KERN)
register_nosave_region(PFN_UP(ei->addr), pfn);
@@ -748,7 +752,7 @@
/*
* Find the highest page frame number we have available
*/
-static unsigned long __init e820_end_pfn(unsigned long limit_pfn, unsigned type)
+static unsigned long __init e820_end_pfn(unsigned long limit_pfn)
{
int i;
unsigned long last_pfn = 0;
@@ -759,7 +763,11 @@
unsigned long start_pfn;
unsigned long end_pfn;
- if (ei->type != type)
+ /*
+ * Persistent memory is accounted as ram for purposes of
+ * establishing max_pfn and mem_map.
+ */
+ if (ei->type != E820_RAM && ei->type != E820_PRAM)
continue;
start_pfn = ei->addr >> PAGE_SHIFT;
@@ -784,12 +792,12 @@
}
unsigned long __init e820_end_of_ram_pfn(void)
{
- return e820_end_pfn(MAX_ARCH_PFN, E820_RAM);
+ return e820_end_pfn(MAX_ARCH_PFN);
}
unsigned long __init e820_end_of_low_ram_pfn(void)
{
- return e820_end_pfn(1UL<<(32 - PAGE_SHIFT), E820_RAM);
+ return e820_end_pfn(1UL << (32-PAGE_SHIFT));
}
static void early_panic(char *msg)
@@ -866,6 +874,9 @@
} else if (*p == '$') {
start_at = memparse(p+1, &p);
e820_add_region(start_at, mem_size, E820_RESERVED);
+ } else if (*p == '!') {
+ start_at = memparse(p+1, &p);
+ e820_add_region(start_at, mem_size, E820_PRAM);
} else
e820_remove_range(mem_size, ULLONG_MAX - mem_size, E820_RAM, 1);
@@ -907,6 +918,7 @@
case E820_ACPI: return "ACPI Tables";
case E820_NVS: return "ACPI Non-volatile Storage";
case E820_UNUSABLE: return "Unusable memory";
+ case E820_PRAM: return "Persistent RAM";
default: return "reserved";
}
}
@@ -940,7 +952,9 @@
* pci device BAR resource and insert them later in
* pcibios_resource_survey()
*/
- if (e820.map[i].type != E820_RESERVED || res->start < (1ULL<<20)) {
+ if (((e820.map[i].type != E820_RESERVED) &&
+ (e820.map[i].type != E820_PRAM)) ||
+ res->start < (1ULL<<20)) {
res->flags |= IORESOURCE_BUSY;
insert_resource(&iomem_resource, res);
}
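The new '!' suffix handled above gives the memmap= boot option a way to force a range to the new E820_PRAM type, alongside the existing '$' (reserved) suffix. A minimal usage sketch, with illustrative sizes only:

        memmap=8G!16G

This should claim 8 GiB of address space starting at the 16 GiB physical address as "Persistent RAM"; register_pmem_devices() in arch/x86/kernel/pmem.c (added below) then hands each such range to the 'pmem' block driver as a platform device resource.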
diff --git a/arch/x86/kernel/early_printk.c b/arch/x86/kernel/early_printk.c
index 49ff55e..89427d8 100644
--- a/arch/x86/kernel/early_printk.c
+++ b/arch/x86/kernel/early_printk.c
@@ -375,12 +375,6 @@
if (!strncmp(buf, "xen", 3))
early_console_register(&xenboot_console, keep);
#endif
-#ifdef CONFIG_EARLY_PRINTK_INTEL_MID
- if (!strncmp(buf, "hsu", 3)) {
- hsu_early_console_init(buf + 3);
- early_console_register(&early_hsu_console, keep);
- }
-#endif
#ifdef CONFIG_EARLY_PRINTK_EFI
if (!strncmp(buf, "efi", 3))
early_console_register(&early_efi_console, keep);
diff --git a/arch/x86/kernel/i387.c b/arch/x86/kernel/i387.c
index 367f39d..00918327 100644
--- a/arch/x86/kernel/i387.c
+++ b/arch/x86/kernel/i387.c
@@ -341,7 +341,7 @@
unsigned int pos, unsigned int count,
void *kbuf, void __user *ubuf)
{
- struct xsave_struct *xsave = &target->thread.fpu.state->xsave;
+ struct xsave_struct *xsave;
int ret;
if (!cpu_has_xsave)
@@ -351,6 +351,8 @@
if (ret)
return ret;
+ xsave = &target->thread.fpu.state->xsave;
+
/*
* Copy the 48bytes defined by the software first into the xstate
* memory layout in the thread struct, so that we can copy the entire
@@ -369,7 +371,7 @@
unsigned int pos, unsigned int count,
const void *kbuf, const void __user *ubuf)
{
- struct xsave_struct *xsave = &target->thread.fpu.state->xsave;
+ struct xsave_struct *xsave;
int ret;
if (!cpu_has_xsave)
@@ -379,6 +381,8 @@
if (ret)
return ret;
+ xsave = &target->thread.fpu.state->xsave;
+
ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, xsave, 0, -1);
/*
* mxcsr reserved bits must be masked to zero for security reasons.
diff --git a/arch/x86/kernel/pmem.c b/arch/x86/kernel/pmem.c
new file mode 100644
index 0000000..3420c87
--- /dev/null
+++ b/arch/x86/kernel/pmem.c
@@ -0,0 +1,53 @@
+/*
+ * Copyright (c) 2015, Christoph Hellwig.
+ */
+#include <linux/memblock.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+#include <asm/e820.h>
+#include <asm/page_types.h>
+#include <asm/setup.h>
+
+static __init void register_pmem_device(struct resource *res)
+{
+ struct platform_device *pdev;
+ int error;
+
+ pdev = platform_device_alloc("pmem", PLATFORM_DEVID_AUTO);
+ if (!pdev)
+ return;
+
+ error = platform_device_add_resources(pdev, res, 1);
+ if (error)
+ goto out_put_pdev;
+
+ error = platform_device_add(pdev);
+ if (error)
+ goto out_put_pdev;
+ return;
+
+out_put_pdev:
+ dev_warn(&pdev->dev, "failed to add 'pmem' (persistent memory) device!\n");
+ platform_device_put(pdev);
+}
+
+static __init int register_pmem_devices(void)
+{
+ int i;
+
+ for (i = 0; i < e820.nr_map; i++) {
+ struct e820entry *ei = &e820.map[i];
+
+ if (ei->type == E820_PRAM) {
+ struct resource res = {
+ .flags = IORESOURCE_MEM,
+ .start = ei->addr,
+ .end = ei->addr + ei->size - 1,
+ };
+ register_pmem_device(&res);
+ }
+ }
+
+ return 0;
+}
+device_initcall(register_pmem_devices);
diff --git a/arch/x86/kernel/signal.c b/arch/x86/kernel/signal.c
index f980408..1ea14fd 100644
--- a/arch/x86/kernel/signal.c
+++ b/arch/x86/kernel/signal.c
@@ -616,7 +616,8 @@
static void
handle_signal(struct ksignal *ksig, struct pt_regs *regs)
{
- bool failed;
+ bool stepping, failed;
+
/* Are we from a system call? */
if (syscall_get_nr(current, regs) >= 0) {
/* If so, check system call restarting.. */
@@ -640,12 +641,13 @@
}
/*
- * If TF is set due to a debugger (TIF_FORCED_TF), clear the TF
- * flag so that register information in the sigcontext is correct.
+ * If TF is set due to a debugger (TIF_FORCED_TF), clear TF now
+ * so that register information in the sigcontext is correct and
+ * then notify the tracer before entering the signal handler.
*/
- if (unlikely(regs->flags & X86_EFLAGS_TF) &&
- likely(test_and_clear_thread_flag(TIF_FORCED_TF)))
- regs->flags &= ~X86_EFLAGS_TF;
+ stepping = test_thread_flag(TIF_SINGLESTEP);
+ if (stepping)
+ user_disable_single_step(current);
failed = (setup_rt_frame(ksig, regs) < 0);
if (!failed) {
@@ -656,10 +658,8 @@
* it might disable possible debug exception from the
* signal handler.
*
- * Clear TF when entering the signal handler, but
- * notify any tracer that was single-stepping it.
- * The tracer may want to single-step inside the
- * handler too.
+ * Clear TF for the case when it wasn't set by debugger to
+ * avoid the recursive send_sigtrap() in SIGTRAP handler.
*/
regs->flags &= ~(X86_EFLAGS_DF|X86_EFLAGS_RF|X86_EFLAGS_TF);
/*
@@ -668,7 +668,7 @@
if (used_math())
fpu_reset_state(current);
}
- signal_setup_done(failed, ksig, test_thread_flag(TIF_SINGLESTEP));
+ signal_setup_done(failed, ksig, stepping);
}
#ifdef CONFIG_X86_32
diff --git a/arch/x86/platform/intel-mid/Makefile b/arch/x86/platform/intel-mid/Makefile
index 0a8ee70..0ce1b19 100644
--- a/arch/x86/platform/intel-mid/Makefile
+++ b/arch/x86/platform/intel-mid/Makefile
@@ -1,5 +1,4 @@
obj-$(CONFIG_X86_INTEL_MID) += intel-mid.o intel_mid_vrtc.o mfld.o mrfl.o
-obj-$(CONFIG_EARLY_PRINTK_INTEL_MID) += early_printk_intel_mid.o
# SFI specific code
ifdef CONFIG_X86_INTEL_MID
diff --git a/arch/x86/platform/intel-mid/early_printk_intel_mid.c b/arch/x86/platform/intel-mid/early_printk_intel_mid.c
deleted file mode 100644
index 4e72082..0000000
--- a/arch/x86/platform/intel-mid/early_printk_intel_mid.c
+++ /dev/null
@@ -1,112 +0,0 @@
-/*
- * early_printk_intel_mid.c - early consoles for Intel MID platforms
- *
- * Copyright (c) 2008-2010, Intel Corporation
- *
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; version 2
- * of the License.
- */
-
-/*
- * This file implements early console named hsu.
- * hsu is based on a High Speed UART device which only exists in the Medfield
- * platform
- */
-
-#include <linux/serial_reg.h>
-#include <linux/serial_mfd.h>
-#include <linux/console.h>
-#include <linux/kernel.h>
-#include <linux/delay.h>
-#include <linux/io.h>
-
-#include <asm/fixmap.h>
-#include <asm/pgtable.h>
-#include <asm/intel-mid.h>
-
-/*
- * Following is the early console based on Medfield HSU (High
- * Speed UART) device.
- */
-#define HSU_PORT_BASE 0xffa28080
-
-static void __iomem *phsu;
-
-void hsu_early_console_init(const char *s)
-{
- unsigned long paddr, port = 0;
- u8 lcr;
-
- /*
- * Select the early HSU console port if specified by user in the
- * kernel command line.
- */
- if (*s && !kstrtoul(s, 10, &port))
- port = clamp_val(port, 0, 2);
-
- paddr = HSU_PORT_BASE + port * 0x80;
- phsu = (void __iomem *)set_fixmap_offset_nocache(FIX_EARLYCON_MEM_BASE, paddr);
-
- /* Disable FIFO */
- writeb(0x0, phsu + UART_FCR);
-
- /* Set to default 115200 bps, 8n1 */
- lcr = readb(phsu + UART_LCR);
- writeb((0x80 | lcr), phsu + UART_LCR);
- writeb(0x18, phsu + UART_DLL);
- writeb(lcr, phsu + UART_LCR);
- writel(0x3600, phsu + UART_MUL*4);
-
- writeb(0x8, phsu + UART_MCR);
- writeb(0x7, phsu + UART_FCR);
- writeb(0x3, phsu + UART_LCR);
-
- /* Clear IRQ status */
- readb(phsu + UART_LSR);
- readb(phsu + UART_RX);
- readb(phsu + UART_IIR);
- readb(phsu + UART_MSR);
-
- /* Enable FIFO */
- writeb(0x7, phsu + UART_FCR);
-}
-
-#define BOTH_EMPTY (UART_LSR_TEMT | UART_LSR_THRE)
-
-static void early_hsu_putc(char ch)
-{
- unsigned int timeout = 10000; /* 10ms */
- u8 status;
-
- while (--timeout) {
- status = readb(phsu + UART_LSR);
- if (status & BOTH_EMPTY)
- break;
- udelay(1);
- }
-
- /* Only write the char when there was no timeout */
- if (timeout)
- writeb(ch, phsu + UART_TX);
-}
-
-static void early_hsu_write(struct console *con, const char *str, unsigned n)
-{
- int i;
-
- for (i = 0; i < n && *str; i++) {
- if (*str == '\n')
- early_hsu_putc('\r');
- early_hsu_putc(*str);
- str++;
- }
-}
-
-struct console early_hsu_console = {
- .name = "earlyhsu",
- .write = early_hsu_write,
- .flags = CON_PRINTBUFFER,
- .index = -1,
-};
diff --git a/drivers/Makefile b/drivers/Makefile
index 527a6da..46d2554 100644
--- a/drivers/Makefile
+++ b/drivers/Makefile
@@ -163,5 +163,5 @@
obj-$(CONFIG_MCB) += mcb/
obj-$(CONFIG_RAS) += ras/
obj-$(CONFIG_THUNDERBOLT) += thunderbolt/
-obj-$(CONFIG_CORESIGHT) += coresight/
+obj-$(CONFIG_CORESIGHT) += hwtracing/coresight/
obj-$(CONFIG_ANDROID) += android/
diff --git a/drivers/block/Kconfig b/drivers/block/Kconfig
index 1b8094d..eb1fed5 100644
--- a/drivers/block/Kconfig
+++ b/drivers/block/Kconfig
@@ -404,6 +404,17 @@
and will prevent RAM block device backing store memory from being
allocated from highmem (only a problem for highmem systems).
+config BLK_DEV_PMEM
+ tristate "Persistent memory block device support"
+ help
+ Saying Y here will allow you to use a contiguous range of reserved
+ memory as one or more persistent block devices.
+
+ To compile this driver as a module, choose M here: the module will be
+ called 'pmem'.
+
+ If unsure, say N.
+
config CDROM_PKTCDVD
tristate "Packet writing on CD/DVD media"
depends on !UML
diff --git a/drivers/block/Makefile b/drivers/block/Makefile
index 02b688d..9cc6c18 100644
--- a/drivers/block/Makefile
+++ b/drivers/block/Makefile
@@ -14,6 +14,7 @@
obj-$(CONFIG_ATARI_FLOPPY) += ataflop.o
obj-$(CONFIG_AMIGA_Z2RAM) += z2ram.o
obj-$(CONFIG_BLK_DEV_RAM) += brd.o
+obj-$(CONFIG_BLK_DEV_PMEM) += pmem.o
obj-$(CONFIG_BLK_DEV_LOOP) += loop.o
obj-$(CONFIG_BLK_CPQ_DA) += cpqarray.o
obj-$(CONFIG_BLK_CPQ_CISS_DA) += cciss.o
diff --git a/drivers/block/pmem.c b/drivers/block/pmem.c
new file mode 100644
index 0000000..eabf4a8
--- /dev/null
+++ b/drivers/block/pmem.c
@@ -0,0 +1,262 @@
+/*
+ * Persistent Memory Driver
+ *
+ * Copyright (c) 2014, Intel Corporation.
+ * Copyright (c) 2015, Christoph Hellwig <hch@lst.de>.
+ * Copyright (c) 2015, Boaz Harrosh <boaz@plexistor.com>.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ */
+
+#include <asm/cacheflush.h>
+#include <linux/blkdev.h>
+#include <linux/hdreg.h>
+#include <linux/init.h>
+#include <linux/platform_device.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/slab.h>
+
+#define PMEM_MINORS 16
+
+struct pmem_device {
+ struct request_queue *pmem_queue;
+ struct gendisk *pmem_disk;
+
+ /* One contiguous memory region per device */
+ phys_addr_t phys_addr;
+ void *virt_addr;
+ size_t size;
+};
+
+static int pmem_major;
+static atomic_t pmem_index;
+
+static void pmem_do_bvec(struct pmem_device *pmem, struct page *page,
+ unsigned int len, unsigned int off, int rw,
+ sector_t sector)
+{
+ void *mem = kmap_atomic(page);
+ size_t pmem_off = sector << 9;
+
+ if (rw == READ) {
+ memcpy(mem + off, pmem->virt_addr + pmem_off, len);
+ flush_dcache_page(page);
+ } else {
+ flush_dcache_page(page);
+ memcpy(pmem->virt_addr + pmem_off, mem + off, len);
+ }
+
+ kunmap_atomic(mem);
+}
+
+static void pmem_make_request(struct request_queue *q, struct bio *bio)
+{
+ struct block_device *bdev = bio->bi_bdev;
+ struct pmem_device *pmem = bdev->bd_disk->private_data;
+ int rw;
+ struct bio_vec bvec;
+ sector_t sector;
+ struct bvec_iter iter;
+ int err = 0;
+
+ if (bio_end_sector(bio) > get_capacity(bdev->bd_disk)) {
+ err = -EIO;
+ goto out;
+ }
+
+ BUG_ON(bio->bi_rw & REQ_DISCARD);
+
+ rw = bio_data_dir(bio);
+ sector = bio->bi_iter.bi_sector;
+ bio_for_each_segment(bvec, bio, iter) {
+ pmem_do_bvec(pmem, bvec.bv_page, bvec.bv_len, bvec.bv_offset,
+ rw, sector);
+ sector += bvec.bv_len >> 9;
+ }
+
+out:
+ bio_endio(bio, err);
+}
+
+static int pmem_rw_page(struct block_device *bdev, sector_t sector,
+ struct page *page, int rw)
+{
+ struct pmem_device *pmem = bdev->bd_disk->private_data;
+
+ pmem_do_bvec(pmem, page, PAGE_CACHE_SIZE, 0, rw, sector);
+ page_endio(page, rw & WRITE, 0);
+
+ return 0;
+}
+
+static long pmem_direct_access(struct block_device *bdev, sector_t sector,
+ void **kaddr, unsigned long *pfn, long size)
+{
+ struct pmem_device *pmem = bdev->bd_disk->private_data;
+ size_t offset = sector << 9;
+
+ if (!pmem)
+ return -ENODEV;
+
+ *kaddr = pmem->virt_addr + offset;
+ *pfn = (pmem->phys_addr + offset) >> PAGE_SHIFT;
+
+ return pmem->size - offset;
+}
+
+static const struct block_device_operations pmem_fops = {
+ .owner = THIS_MODULE,
+ .rw_page = pmem_rw_page,
+ .direct_access = pmem_direct_access,
+};
+
+static struct pmem_device *pmem_alloc(struct device *dev, struct resource *res)
+{
+ struct pmem_device *pmem;
+ struct gendisk *disk;
+ int idx, err;
+
+ err = -ENOMEM;
+ pmem = kzalloc(sizeof(*pmem), GFP_KERNEL);
+ if (!pmem)
+ goto out;
+
+ pmem->phys_addr = res->start;
+ pmem->size = resource_size(res);
+
+ err = -EINVAL;
+ if (!request_mem_region(pmem->phys_addr, pmem->size, "pmem")) {
+ dev_warn(dev, "could not reserve region [0x%pa:0x%zx]\n", &pmem->phys_addr, pmem->size);
+ goto out_free_dev;
+ }
+
+ /*
+ * Map the memory as non-cachable, as we can't write back the contents
+ * of the CPU caches in case of a crash.
+ */
+ err = -ENOMEM;
+ pmem->virt_addr = ioremap_nocache(pmem->phys_addr, pmem->size);
+ if (!pmem->virt_addr)
+ goto out_release_region;
+
+ pmem->pmem_queue = blk_alloc_queue(GFP_KERNEL);
+ if (!pmem->pmem_queue)
+ goto out_unmap;
+
+ blk_queue_make_request(pmem->pmem_queue, pmem_make_request);
+ blk_queue_max_hw_sectors(pmem->pmem_queue, 1024);
+ blk_queue_bounce_limit(pmem->pmem_queue, BLK_BOUNCE_ANY);
+
+ disk = alloc_disk(PMEM_MINORS);
+ if (!disk)
+ goto out_free_queue;
+
+ idx = atomic_inc_return(&pmem_index) - 1;
+
+ disk->major = pmem_major;
+ disk->first_minor = PMEM_MINORS * idx;
+ disk->fops = &pmem_fops;
+ disk->private_data = pmem;
+ disk->queue = pmem->pmem_queue;
+ disk->flags = GENHD_FL_EXT_DEVT;
+ sprintf(disk->disk_name, "pmem%d", idx);
+ disk->driverfs_dev = dev;
+ set_capacity(disk, pmem->size >> 9);
+ pmem->pmem_disk = disk;
+
+ add_disk(disk);
+
+ return pmem;
+
+out_free_queue:
+ blk_cleanup_queue(pmem->pmem_queue);
+out_unmap:
+ iounmap(pmem->virt_addr);
+out_release_region:
+ release_mem_region(pmem->phys_addr, pmem->size);
+out_free_dev:
+ kfree(pmem);
+out:
+ return ERR_PTR(err);
+}
+
+static void pmem_free(struct pmem_device *pmem)
+{
+ del_gendisk(pmem->pmem_disk);
+ put_disk(pmem->pmem_disk);
+ blk_cleanup_queue(pmem->pmem_queue);
+ iounmap(pmem->virt_addr);
+ release_mem_region(pmem->phys_addr, pmem->size);
+ kfree(pmem);
+}
+
+static int pmem_probe(struct platform_device *pdev)
+{
+ struct pmem_device *pmem;
+ struct resource *res;
+
+ if (WARN_ON(pdev->num_resources > 1))
+ return -ENXIO;
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ if (!res)
+ return -ENXIO;
+
+ pmem = pmem_alloc(&pdev->dev, res);
+ if (IS_ERR(pmem))
+ return PTR_ERR(pmem);
+
+ platform_set_drvdata(pdev, pmem);
+
+ return 0;
+}
+
+static int pmem_remove(struct platform_device *pdev)
+{
+ struct pmem_device *pmem = platform_get_drvdata(pdev);
+
+ pmem_free(pmem);
+ return 0;
+}
+
+static struct platform_driver pmem_driver = {
+ .probe = pmem_probe,
+ .remove = pmem_remove,
+ .driver = {
+ .owner = THIS_MODULE,
+ .name = "pmem",
+ },
+};
+
+static int __init pmem_init(void)
+{
+ int error;
+
+ pmem_major = register_blkdev(0, "pmem");
+ if (pmem_major < 0)
+ return pmem_major;
+
+ error = platform_driver_register(&pmem_driver);
+ if (error)
+ unregister_blkdev(pmem_major, "pmem");
+ return error;
+}
+module_init(pmem_init);
+
+static void pmem_exit(void)
+{
+ platform_driver_unregister(&pmem_driver);
+ unregister_blkdev(pmem_major, "pmem");
+}
+module_exit(pmem_exit);
+
+MODULE_AUTHOR("Ross Zwisler <ross.zwisler@linux.intel.com>");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/char/hw_random/core.c b/drivers/char/hw_random/core.c
index 571ef61..da8faf7 100644
--- a/drivers/char/hw_random/core.c
+++ b/drivers/char/hw_random/core.c
@@ -300,11 +300,14 @@
.llseek = noop_llseek,
};
+static const struct attribute_group *rng_dev_groups[];
+
static struct miscdevice rng_miscdev = {
.minor = RNG_MISCDEV_MINOR,
.name = RNG_MODULE_NAME,
.nodename = "hwrng",
.fops = &rng_chrdev_ops,
+ .groups = rng_dev_groups,
};
@@ -377,37 +380,22 @@
hwrng_attr_available_show,
NULL);
+static struct attribute *rng_dev_attrs[] = {
+ &dev_attr_rng_current.attr,
+ &dev_attr_rng_available.attr,
+ NULL
+};
+
+ATTRIBUTE_GROUPS(rng_dev);
static void __exit unregister_miscdev(void)
{
- device_remove_file(rng_miscdev.this_device, &dev_attr_rng_available);
- device_remove_file(rng_miscdev.this_device, &dev_attr_rng_current);
misc_deregister(&rng_miscdev);
}
static int __init register_miscdev(void)
{
- int err;
-
- err = misc_register(&rng_miscdev);
- if (err)
- goto out;
- err = device_create_file(rng_miscdev.this_device,
- &dev_attr_rng_current);
- if (err)
- goto err_misc_dereg;
- err = device_create_file(rng_miscdev.this_device,
- &dev_attr_rng_available);
- if (err)
- goto err_remove_current;
-out:
- return err;
-
-err_remove_current:
- device_remove_file(rng_miscdev.this_device, &dev_attr_rng_current);
-err_misc_dereg:
- misc_deregister(&rng_miscdev);
- goto out;
+ return misc_register(&rng_miscdev);
}
static int hwrng_fillfn(void *unused)
diff --git a/drivers/char/hw_random/pasemi-rng.c b/drivers/char/hw_random/pasemi-rng.c
index 3eb7bdd..51cb1d5 100644
--- a/drivers/char/hw_random/pasemi-rng.c
+++ b/drivers/char/hw_random/pasemi-rng.c
@@ -133,7 +133,7 @@
return 0;
}
-static struct of_device_id rng_match[] = {
+static const struct of_device_id rng_match[] = {
{ .compatible = "1682m-rng", },
{ .compatible = "pasemi,pwrficient-rng", },
{ },
diff --git a/drivers/char/hw_random/powernv-rng.c b/drivers/char/hw_random/powernv-rng.c
index 3f4f632..263a5bb 100644
--- a/drivers/char/hw_random/powernv-rng.c
+++ b/drivers/char/hw_random/powernv-rng.c
@@ -61,7 +61,7 @@
return 0;
}
-static struct of_device_id powernv_rng_match[] = {
+static const struct of_device_id powernv_rng_match[] = {
{ .compatible = "ibm,power-rng",},
{},
};
diff --git a/drivers/char/hw_random/ppc4xx-rng.c b/drivers/char/hw_random/ppc4xx-rng.c
index c85d31a..b2cfda0 100644
--- a/drivers/char/hw_random/ppc4xx-rng.c
+++ b/drivers/char/hw_random/ppc4xx-rng.c
@@ -123,7 +123,7 @@
return 0;
}
-static struct of_device_id ppc4xx_rng_match[] = {
+static const struct of_device_id ppc4xx_rng_match[] = {
{ .compatible = "ppc4xx-rng", },
{ .compatible = "amcc,ppc460ex-rng", },
{ .compatible = "amcc,ppc440epx-rng", },
diff --git a/drivers/char/i8k.c b/drivers/char/i8k.c
index 24cc4ed..a43048b 100644
--- a/drivers/char/i8k.c
+++ b/drivers/char/i8k.c
@@ -510,13 +510,15 @@
* 9) AC power
* 10) Fn Key status
*/
- return seq_printf(seq, "%s %s %s %d %d %d %d %d %d %d\n",
- I8K_PROC_FMT,
- bios_version,
- i8k_get_dmi_data(DMI_PRODUCT_SERIAL),
- cpu_temp,
- left_fan, right_fan, left_speed, right_speed,
- ac_power, fn_key);
+ seq_printf(seq, "%s %s %s %d %d %d %d %d %d %d\n",
+ I8K_PROC_FMT,
+ bios_version,
+ i8k_get_dmi_data(DMI_PRODUCT_SERIAL),
+ cpu_temp,
+ left_fan, right_fan, left_speed, right_speed,
+ ac_power, fn_key);
+
+ return 0;
}
static int i8k_open_fs(struct inode *inode, struct file *file)
diff --git a/drivers/char/ipmi/ipmi_si_intf.c b/drivers/char/ipmi/ipmi_si_intf.c
index 518585c..5e90a18 100644
--- a/drivers/char/ipmi/ipmi_si_intf.c
+++ b/drivers/char/ipmi/ipmi_si_intf.c
@@ -2667,7 +2667,7 @@
};
#endif /* CONFIG_PCI */
-static struct of_device_id ipmi_match[];
+static const struct of_device_id ipmi_match[];
static int ipmi_probe(struct platform_device *dev)
{
#ifdef CONFIG_OF
@@ -2764,7 +2764,7 @@
return 0;
}
-static struct of_device_id ipmi_match[] =
+static const struct of_device_id ipmi_match[] =
{
{ .type = "ipmi", .compatible = "ipmi-kcs",
.data = (void *)(unsigned long) SI_KCS },
diff --git a/drivers/char/misc.c b/drivers/char/misc.c
index ffa97d2..9fd5a91 100644
--- a/drivers/char/misc.c
+++ b/drivers/char/misc.c
@@ -140,12 +140,17 @@
goto fail;
}
+ /*
+ * Place the miscdevice in the file's
+ * private_data so it can be used by the
+ * file operations, including f_op->open below
+ */
+ file->private_data = c;
+
err = 0;
replace_fops(file, new_fops);
- if (file->f_op->open) {
- file->private_data = c;
+ if (file->f_op->open)
err = file->f_op->open(inode,file);
- }
fail:
mutex_unlock(&misc_mtx);
return err;
@@ -169,7 +174,9 @@
* the minor number requested is used.
*
* The structure passed is linked into the kernel and may not be
- * destroyed until it has been unregistered.
+ * destroyed until it has been unregistered. By default, an open()
+ * syscall to the device sets file->private_data to point to the
+ * structure. Drivers don't need open in fops for this.
*
* A zero is returned on success and a negative errno code for
* failure.
@@ -205,8 +212,9 @@
dev = MKDEV(MISC_MAJOR, misc->minor);
- misc->this_device = device_create(misc_class, misc->parent, dev,
- misc, "%s", misc->name);
+ misc->this_device =
+ device_create_with_groups(misc_class, misc->parent, dev,
+ misc, misc->groups, "%s", misc->name);
if (IS_ERR(misc->this_device)) {
int i = DYNAMIC_MINORS - misc->minor - 1;
if (i < DYNAMIC_MINORS && i >= 0)
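With misc_open() now filling in file->private_data and device_create_with_groups() publishing misc->groups, a simple misc driver needs neither an open() handler nor manual device_create_file() calls. A minimal sketch, not from this patch; every example_* name is invented:

#include <linux/fs.h>
#include <linux/miscdevice.h>
#include <linux/module.h>

struct example_dev {
        struct miscdevice miscdev;      /* embedded so container_of() works */
        u32 counter;
};

static ssize_t example_read(struct file *file, char __user *buf,
                            size_t len, loff_t *ppos)
{
        /* set by misc_open() above; no open() method required */
        struct miscdevice *m = file->private_data;
        struct example_dev *ed = container_of(m, struct example_dev, miscdev);

        /* ... produce output derived from ed->counter ... */
        return 0;
}

static const struct file_operations example_fops = {
        .owner = THIS_MODULE,
        .read  = example_read,
};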
diff --git a/drivers/char/virtio_console.c b/drivers/char/virtio_console.c
index 72d7028..50754d20 100644
--- a/drivers/char/virtio_console.c
+++ b/drivers/char/virtio_console.c
@@ -355,7 +355,7 @@
* early_init
*/
if (!portdev->vdev)
- return 0;
+ return false;
return __virtio_test_bit(portdev->vdev, VIRTIO_CONSOLE_F_MULTIPORT);
}
diff --git a/drivers/char/xillybus/xillybus_core.c b/drivers/char/xillybus/xillybus_core.c
index b827fa0..77d6c12 100644
--- a/drivers/char/xillybus/xillybus_core.c
+++ b/drivers/char/xillybus/xillybus_core.c
@@ -1237,6 +1237,8 @@
unsigned char *tail;
int i;
+ howmany = 0;
+
end_offset_plus1 = bufpos >>
channel->log2_element_size;
diff --git a/drivers/char/xillybus/xillybus_of.c b/drivers/char/xillybus/xillybus_of.c
index 2002a3a..7818650 100644
--- a/drivers/char/xillybus/xillybus_of.c
+++ b/drivers/char/xillybus/xillybus_of.c
@@ -31,7 +31,7 @@
static const char xillyname[] = "xillybus_of";
/* Match table for of_platform binding */
-static struct of_device_id xillybus_of_match[] = {
+static const struct of_device_id xillybus_of_match[] = {
{ .compatible = "xillybus,xillybus-1.00.a", },
{ .compatible = "xlnx,xillybus-1.00.a", }, /* Deprecated */
{}
diff --git a/drivers/clk/Kconfig b/drivers/clk/Kconfig
index 0b474a0..9897f35 100644
--- a/drivers/clk/Kconfig
+++ b/drivers/clk/Kconfig
@@ -130,6 +130,13 @@
This driver supports TI Palmas devices 32KHz output KG and KG_AUDIO
using common clock framework.
+config COMMON_CLK_PWM
+ tristate "Clock driver for PWMs used as clock outputs"
+ depends on PWM
+ ---help---
+ Adapter driver so that any PWM output can be (mis)used as clock signal
+ at 50% duty cycle.
+
config COMMON_CLK_PXA
def_bool COMMON_CLK && ARCH_PXA
---help---
diff --git a/drivers/clk/Makefile b/drivers/clk/Makefile
index e43ff53..3d00c25 100644
--- a/drivers/clk/Makefile
+++ b/drivers/clk/Makefile
@@ -28,6 +28,7 @@
obj-$(CONFIG_COMMON_CLK_MAX_GEN) += clk-max-gen.o
obj-$(CONFIG_COMMON_CLK_MAX77686) += clk-max77686.o
obj-$(CONFIG_COMMON_CLK_MAX77802) += clk-max77802.o
+obj-$(CONFIG_ARCH_MB86S7X) += clk-mb86s7x.o
obj-$(CONFIG_ARCH_MOXART) += clk-moxart.o
obj-$(CONFIG_ARCH_NOMADIK) += clk-nomadik.o
obj-$(CONFIG_ARCH_NSPIRE) += clk-nspire.o
@@ -42,6 +43,7 @@
obj-$(CONFIG_ARCH_VT8500) += clk-vt8500.o
obj-$(CONFIG_COMMON_CLK_WM831X) += clk-wm831x.o
obj-$(CONFIG_COMMON_CLK_XGENE) += clk-xgene.o
+obj-$(CONFIG_COMMON_CLK_PWM) += clk-pwm.o
obj-$(CONFIG_COMMON_CLK_AT91) += at91/
obj-$(CONFIG_ARCH_BCM_MOBILE) += bcm/
obj-$(CONFIG_ARCH_BERLIN) += berlin/
diff --git a/drivers/clk/at91/clk-usb.c b/drivers/clk/at91/clk-usb.c
index a23ac0c..0b7c3e8 100644
--- a/drivers/clk/at91/clk-usb.c
+++ b/drivers/clk/at91/clk-usb.c
@@ -56,22 +56,55 @@
return DIV_ROUND_CLOSEST(parent_rate, (usbdiv + 1));
}
-static long at91sam9x5_clk_usb_round_rate(struct clk_hw *hw, unsigned long rate,
- unsigned long *parent_rate)
+static long at91sam9x5_clk_usb_determine_rate(struct clk_hw *hw,
+ unsigned long rate,
+ unsigned long min_rate,
+ unsigned long max_rate,
+ unsigned long *best_parent_rate,
+ struct clk_hw **best_parent_hw)
{
- unsigned long div;
+ struct clk *parent = NULL;
+ long best_rate = -EINVAL;
+ unsigned long tmp_rate;
+ int best_diff = -1;
+ int tmp_diff;
+ int i;
- if (!rate)
- return -EINVAL;
+ for (i = 0; i < __clk_get_num_parents(hw->clk); i++) {
+ int div;
- if (rate >= *parent_rate)
- return *parent_rate;
+ parent = clk_get_parent_by_index(hw->clk, i);
+ if (!parent)
+ continue;
- div = DIV_ROUND_CLOSEST(*parent_rate, rate);
- if (div > SAM9X5_USB_MAX_DIV + 1)
- div = SAM9X5_USB_MAX_DIV + 1;
+ for (div = 1; div < SAM9X5_USB_MAX_DIV + 2; div++) {
+ unsigned long tmp_parent_rate;
- return DIV_ROUND_CLOSEST(*parent_rate, div);
+ tmp_parent_rate = rate * div;
+ tmp_parent_rate = __clk_round_rate(parent,
+ tmp_parent_rate);
+ tmp_rate = DIV_ROUND_CLOSEST(tmp_parent_rate, div);
+ if (tmp_rate < rate)
+ tmp_diff = rate - tmp_rate;
+ else
+ tmp_diff = tmp_rate - rate;
+
+ if (best_diff < 0 || best_diff > tmp_diff) {
+ best_rate = tmp_rate;
+ best_diff = tmp_diff;
+ *best_parent_rate = tmp_parent_rate;
+ *best_parent_hw = __clk_get_hw(parent);
+ }
+
+ if (!best_diff || tmp_rate < rate)
+ break;
+ }
+
+ if (!best_diff)
+ break;
+ }
+
+ return best_rate;
}
static int at91sam9x5_clk_usb_set_parent(struct clk_hw *hw, u8 index)
@@ -121,7 +154,7 @@
static const struct clk_ops at91sam9x5_usb_ops = {
.recalc_rate = at91sam9x5_clk_usb_recalc_rate,
- .round_rate = at91sam9x5_clk_usb_round_rate,
+ .determine_rate = at91sam9x5_clk_usb_determine_rate,
.get_parent = at91sam9x5_clk_usb_get_parent,
.set_parent = at91sam9x5_clk_usb_set_parent,
.set_rate = at91sam9x5_clk_usb_set_rate,
@@ -159,7 +192,7 @@
.disable = at91sam9n12_clk_usb_disable,
.is_enabled = at91sam9n12_clk_usb_is_enabled,
.recalc_rate = at91sam9x5_clk_usb_recalc_rate,
- .round_rate = at91sam9x5_clk_usb_round_rate,
+ .determine_rate = at91sam9x5_clk_usb_determine_rate,
.set_rate = at91sam9x5_clk_usb_set_rate,
};
@@ -179,7 +212,8 @@
init.ops = &at91sam9x5_usb_ops;
init.parent_names = parent_names;
init.num_parents = num_parents;
- init.flags = CLK_SET_RATE_GATE | CLK_SET_PARENT_GATE;
+ init.flags = CLK_SET_RATE_GATE | CLK_SET_PARENT_GATE |
+ CLK_SET_RATE_PARENT;
usb->hw.init = &init;
usb->pmc = pmc;
@@ -207,7 +241,7 @@
init.ops = &at91sam9n12_usb_ops;
init.parent_names = &parent_name;
init.num_parents = 1;
- init.flags = CLK_SET_RATE_GATE;
+ init.flags = CLK_SET_RATE_GATE | CLK_SET_RATE_PARENT;
usb->hw.init = &init;
usb->pmc = pmc;
diff --git a/drivers/clk/clk-cdce706.c b/drivers/clk/clk-cdce706.c
index c386ad2..b8e4f8a 100644
--- a/drivers/clk/clk-cdce706.c
+++ b/drivers/clk/clk-cdce706.c
@@ -58,7 +58,7 @@
#define CDCE706_CLKOUT_DIVIDER_MASK 0x7
#define CDCE706_CLKOUT_ENABLE_MASK 0x8
-static struct regmap_config cdce706_regmap_config = {
+static const struct regmap_config cdce706_regmap_config = {
.reg_bits = 8,
.val_bits = 8,
.val_format_endian = REGMAP_ENDIAN_NATIVE,
diff --git a/drivers/clk/clk-conf.c b/drivers/clk/clk-conf.c
index aad4796..48a65b2 100644
--- a/drivers/clk/clk-conf.c
+++ b/drivers/clk/clk-conf.c
@@ -13,7 +13,6 @@
#include <linux/device.h>
#include <linux/of.h>
#include <linux/printk.h>
-#include "clk.h"
static int __set_clk_parents(struct device_node *node, bool clk_supplier)
{
@@ -39,7 +38,7 @@
}
if (clkspec.np == node && !clk_supplier)
return 0;
- pclk = of_clk_get_by_clkspec(&clkspec);
+ pclk = of_clk_get_from_provider(&clkspec);
if (IS_ERR(pclk)) {
pr_warn("clk: couldn't get parent clock %d for %s\n",
index, node->full_name);
@@ -54,7 +53,7 @@
rc = 0;
goto err;
}
- clk = of_clk_get_by_clkspec(&clkspec);
+ clk = of_clk_get_from_provider(&clkspec);
if (IS_ERR(clk)) {
pr_warn("clk: couldn't get parent clock %d for %s\n",
index, node->full_name);
@@ -98,7 +97,7 @@
if (clkspec.np == node && !clk_supplier)
return 0;
- clk = of_clk_get_by_clkspec(&clkspec);
+ clk = of_clk_get_from_provider(&clkspec);
if (IS_ERR(clk)) {
pr_warn("clk: couldn't get clock %d for %s\n",
index, node->full_name);
diff --git a/drivers/clk/clk-fractional-divider.c b/drivers/clk/clk-fractional-divider.c
index 82a59d0..6aa72d9 100644
--- a/drivers/clk/clk-fractional-divider.c
+++ b/drivers/clk/clk-fractional-divider.c
@@ -36,6 +36,9 @@
m = (val & fd->mmask) >> fd->mshift;
n = (val & fd->nmask) >> fd->nshift;
+ if (!n || !m)
+ return parent_rate;
+
ret = (u64)parent_rate * m;
do_div(ret, n);
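Without the guard added above, a divider register still at its reset value (m == n == 0) would feed zero into do_div(). A minimal sketch of the recalc arithmetic, not from the patch, with illustrative values:

#include <linux/math64.h>

/*
 * Sketch only: same formula as the driver, rate = parent_rate * m / n,
 * with the new zero guard falling back to the parent rate.
 */
static unsigned long example_fd_recalc(unsigned long parent_rate,
                                       unsigned long m, unsigned long n)
{
        if (!m || !n)
                return parent_rate;
        return (unsigned long)div_u64((u64)parent_rate * m, n);
}

/* example_fd_recalc(48000000, 1, 3) == 16000000 */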
diff --git a/drivers/clk/clk-gpio-gate.c b/drivers/clk/clk-gpio-gate.c
index 08e4322..a71cabe 100644
--- a/drivers/clk/clk-gpio-gate.c
+++ b/drivers/clk/clk-gpio-gate.c
@@ -65,10 +65,12 @@
* @dev: device that is registering this clock
* @name: name of this clock
* @parent_name: name of this clock's parent
- * @gpiod: gpio descriptor to gate this clock
+ * @gpio: gpio number to gate this clock
+ * @active_low: true if gpio should be set to 0 to enable clock
+ * @flags: clock flags
*/
struct clk *clk_register_gpio_gate(struct device *dev, const char *name,
- const char *parent_name, struct gpio_desc *gpiod,
+ const char *parent_name, unsigned gpio, bool active_low,
unsigned long flags)
{
struct clk_gpio *clk_gpio = NULL;
@@ -77,20 +79,19 @@
unsigned long gpio_flags;
int err;
- if (gpiod_is_active_low(gpiod))
- gpio_flags = GPIOF_OUT_INIT_HIGH;
+ if (active_low)
+ gpio_flags = GPIOF_ACTIVE_LOW | GPIOF_OUT_INIT_HIGH;
else
gpio_flags = GPIOF_OUT_INIT_LOW;
if (dev)
- err = devm_gpio_request_one(dev, desc_to_gpio(gpiod),
- gpio_flags, name);
+ err = devm_gpio_request_one(dev, gpio, gpio_flags, name);
else
- err = gpio_request_one(desc_to_gpio(gpiod), gpio_flags, name);
+ err = gpio_request_one(gpio, gpio_flags, name);
if (err) {
pr_err("%s: %s: Error requesting clock control gpio %u\n",
- __func__, name, desc_to_gpio(gpiod));
+ __func__, name, gpio);
return ERR_PTR(err);
}
@@ -111,7 +112,7 @@
init.parent_names = (parent_name ? &parent_name : NULL);
init.num_parents = (parent_name ? 1 : 0);
- clk_gpio->gpiod = gpiod;
+ clk_gpio->gpiod = gpio_to_desc(gpio);
clk_gpio->hw.init = &init;
clk = clk_register(dev, &clk_gpio->hw);
@@ -123,7 +124,8 @@
kfree(clk_gpio);
clk_register_gpio_gate_err:
- gpiod_put(gpiod);
+ if (!dev)
+ gpio_free(gpio);
return clk;
}
@@ -149,8 +151,8 @@
struct clk *clk;
const char *clk_name = data->node->name;
const char *parent_name;
- struct gpio_desc *gpiod;
int gpio;
+ enum of_gpio_flags of_flags;
mutex_lock(&data->lock);
@@ -159,7 +161,8 @@
return data->clk;
}
- gpio = of_get_named_gpio_flags(data->node, "enable-gpios", 0, NULL);
+ gpio = of_get_named_gpio_flags(data->node, "enable-gpios", 0,
+ &of_flags);
if (gpio < 0) {
mutex_unlock(&data->lock);
if (gpio != -EPROBE_DEFER)
@@ -167,11 +170,11 @@
__func__, clk_name);
return ERR_PTR(gpio);
}
- gpiod = gpio_to_desc(gpio);
parent_name = of_clk_get_parent_name(data->node, 0);
- clk = clk_register_gpio_gate(NULL, clk_name, parent_name, gpiod, 0);
+ clk = clk_register_gpio_gate(NULL, clk_name, parent_name, gpio,
+ of_flags & OF_GPIO_ACTIVE_LOW, 0);
if (IS_ERR(clk)) {
mutex_unlock(&data->lock);
return clk;
diff --git a/drivers/clk/clk-mb86s7x.c b/drivers/clk/clk-mb86s7x.c
new file mode 100644
index 0000000..f39c25a
--- /dev/null
+++ b/drivers/clk/clk-mb86s7x.c
@@ -0,0 +1,386 @@
+/*
+ * Copyright (C) 2013-2015 FUJITSU SEMICONDUCTOR LIMITED
+ * Copyright (C) 2015 Linaro Ltd.
+ *
+ * This program is free software: you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation, version 2 of the License.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/clkdev.h>
+#include <linux/err.h>
+#include <linux/io.h>
+#include <linux/of.h>
+#include <linux/cpu.h>
+#include <linux/clk-provider.h>
+#include <linux/spinlock.h>
+#include <linux/module.h>
+#include <linux/topology.h>
+#include <linux/mailbox_client.h>
+#include <linux/platform_device.h>
+
+#include <soc/mb86s7x/scb_mhu.h>
+
+#define to_crg_clk(p) container_of(p, struct crg_clk, hw)
+#define to_clc_clk(p) container_of(p, struct cl_clk, hw)
+
+struct mb86s7x_peri_clk {
+ u32 payload_size;
+ u32 cntrlr;
+ u32 domain;
+ u32 port;
+ u32 en;
+ u64 frequency;
+} __packed __aligned(4);
+
+struct hack_rate {
+ unsigned clk_id;
+ unsigned long rate;
+ int gated;
+};
+
+struct crg_clk {
+ struct clk_hw hw;
+ u8 cntrlr, domain, port;
+};
+
+static int crg_gate_control(struct clk_hw *hw, int en)
+{
+ struct crg_clk *crgclk = to_crg_clk(hw);
+ struct mb86s7x_peri_clk cmd;
+ int ret;
+
+ cmd.payload_size = sizeof(cmd);
+ cmd.cntrlr = crgclk->cntrlr;
+ cmd.domain = crgclk->domain;
+ cmd.port = crgclk->port;
+ cmd.en = en;
+
+ /* Port is UngatedCLK */
+ if (cmd.port == 8)
+ return en ? 0 : -EINVAL;
+
+ pr_debug("%s:%d CMD Cntrlr-%u Dom-%u Port-%u En-%u}\n",
+ __func__, __LINE__, cmd.cntrlr,
+ cmd.domain, cmd.port, cmd.en);
+
+ ret = mb86s7x_send_packet(CMD_PERI_CLOCK_GATE_SET_REQ,
+ &cmd, sizeof(cmd));
+ if (ret < 0) {
+ pr_err("%s:%d failed!\n", __func__, __LINE__);
+ return ret;
+ }
+
+ pr_debug("%s:%d REP Cntrlr-%u Dom-%u Port-%u En-%u}\n",
+ __func__, __LINE__, cmd.cntrlr,
+ cmd.domain, cmd.port, cmd.en);
+
+ /* If the request was rejected */
+ if (cmd.en != en)
+ ret = -EINVAL;
+ else
+ ret = 0;
+
+ return ret;
+}
+
+static int crg_port_prepare(struct clk_hw *hw)
+{
+ return crg_gate_control(hw, 1);
+}
+
+static void crg_port_unprepare(struct clk_hw *hw)
+{
+ crg_gate_control(hw, 0);
+}
+
+static int
+crg_rate_control(struct clk_hw *hw, int set, unsigned long *rate)
+{
+ struct crg_clk *crgclk = to_crg_clk(hw);
+ struct mb86s7x_peri_clk cmd;
+ int code, ret;
+
+ cmd.payload_size = sizeof(cmd);
+ cmd.cntrlr = crgclk->cntrlr;
+ cmd.domain = crgclk->domain;
+ cmd.port = crgclk->port;
+ cmd.frequency = *rate;
+
+ if (set) {
+ code = CMD_PERI_CLOCK_RATE_SET_REQ;
+ pr_debug("%s:%d CMD Cntrlr-%u Dom-%u Port-%u Rate-SET %lluHz}\n",
+ __func__, __LINE__, cmd.cntrlr,
+ cmd.domain, cmd.port, cmd.frequency);
+ } else {
+ code = CMD_PERI_CLOCK_RATE_GET_REQ;
+ pr_debug("%s:%d CMD Cntrlr-%u Dom-%u Port-%u Rate-GET}\n",
+ __func__, __LINE__, cmd.cntrlr,
+ cmd.domain, cmd.port);
+ }
+
+ ret = mb86s7x_send_packet(code, &cmd, sizeof(cmd));
+ if (ret < 0) {
+ pr_err("%s:%d failed!\n", __func__, __LINE__);
+ return ret;
+ }
+
+ if (set)
+ pr_debug("%s:%d REP Cntrlr-%u Dom-%u Port-%u Rate-SET %lluHz}\n",
+ __func__, __LINE__, cmd.cntrlr,
+ cmd.domain, cmd.port, cmd.frequency);
+ else
+ pr_debug("%s:%d REP Cntrlr-%u Dom-%u Port-%u Rate-GOT %lluHz}\n",
+ __func__, __LINE__, cmd.cntrlr,
+ cmd.domain, cmd.port, cmd.frequency);
+
+ *rate = cmd.frequency;
+ return 0;
+}
+
+static unsigned long
+crg_port_recalc_rate(struct clk_hw *hw, unsigned long parent_rate)
+{
+ unsigned long rate;
+
+ crg_rate_control(hw, 0, &rate);
+
+ return rate;
+}
+
+static long
+crg_port_round_rate(struct clk_hw *hw,
+ unsigned long rate, unsigned long *pr)
+{
+ return rate;
+}
+
+static int
+crg_port_set_rate(struct clk_hw *hw,
+ unsigned long rate, unsigned long parent_rate)
+{
+ return crg_rate_control(hw, 1, &rate);
+}
+
+const struct clk_ops crg_port_ops = {
+ .prepare = crg_port_prepare,
+ .unprepare = crg_port_unprepare,
+ .recalc_rate = crg_port_recalc_rate,
+ .round_rate = crg_port_round_rate,
+ .set_rate = crg_port_set_rate,
+};
+
+struct mb86s70_crg11 {
+ struct mutex lock; /* protects CLK populating and searching */
+};
+
+static struct clk *crg11_get(struct of_phandle_args *clkspec, void *data)
+{
+ struct mb86s70_crg11 *crg11 = data;
+ struct clk_init_data init;
+ u32 cntrlr, domain, port;
+ struct crg_clk *crgclk;
+ struct clk *clk;
+ char clkp[20];
+
+ if (clkspec->args_count != 3)
+ return ERR_PTR(-EINVAL);
+
+ cntrlr = clkspec->args[0];
+ domain = clkspec->args[1];
+ port = clkspec->args[2];
+
+ if (port > 7)
+ snprintf(clkp, 20, "UngatedCLK%d_%X", cntrlr, domain);
+ else
+ snprintf(clkp, 20, "CLK%d_%X_%d", cntrlr, domain, port);
+
+ mutex_lock(&crg11->lock);
+
+ clk = __clk_lookup(clkp);
+ if (clk) {
+ mutex_unlock(&crg11->lock);
+ return clk;
+ }
+
+ crgclk = kzalloc(sizeof(*crgclk), GFP_KERNEL);
+ if (!crgclk) {
+ mutex_unlock(&crg11->lock);
+ return ERR_PTR(-ENOMEM);
+ }
+
+ init.name = clkp;
+ init.num_parents = 0;
+ init.ops = &crg_port_ops;
+ init.flags = CLK_IS_ROOT;
+ crgclk->hw.init = &init;
+ crgclk->cntrlr = cntrlr;
+ crgclk->domain = domain;
+ crgclk->port = port;
+ clk = clk_register(NULL, &crgclk->hw);
+ if (IS_ERR(clk))
+ pr_err("%s:%d Error!\n", __func__, __LINE__);
+ else
+ pr_debug("Registered %s\n", clkp);
+
+ clk_register_clkdev(clk, clkp, NULL);
+ mutex_unlock(&crg11->lock);
+ return clk;
+}
+
+static void __init crg_port_init(struct device_node *node)
+{
+ struct mb86s70_crg11 *crg11;
+
+ crg11 = kzalloc(sizeof(*crg11), GFP_KERNEL);
+ if (!crg11)
+ return;
+
+ mutex_init(&crg11->lock);
+
+ of_clk_add_provider(node, crg11_get, crg11);
+}
+CLK_OF_DECLARE(crg11_gate, "fujitsu,mb86s70-crg11", crg_port_init);
+
+struct cl_clk {
+ struct clk_hw hw;
+ int cluster;
+};
+
+struct mb86s7x_cpu_freq {
+ u32 payload_size;
+ u32 cluster_class;
+ u32 cluster_id;
+ u32 cpu_id;
+ u64 frequency;
+};
+
+static void mhu_cluster_rate(struct clk_hw *hw, unsigned long *rate, int get)
+{
+ struct cl_clk *clc = to_clc_clk(hw);
+ struct mb86s7x_cpu_freq cmd;
+ int code, ret;
+
+ cmd.payload_size = sizeof(cmd);
+ cmd.cluster_class = 0;
+ cmd.cluster_id = clc->cluster;
+ cmd.cpu_id = 0;
+ cmd.frequency = *rate;
+
+ if (get)
+ code = CMD_CPU_CLOCK_RATE_GET_REQ;
+ else
+ code = CMD_CPU_CLOCK_RATE_SET_REQ;
+
+ pr_debug("%s:%d CMD Cl_Class-%u CL_ID-%u CPU_ID-%u Freq-%llu}\n",
+ __func__, __LINE__, cmd.cluster_class,
+ cmd.cluster_id, cmd.cpu_id, cmd.frequency);
+
+ ret = mb86s7x_send_packet(code, &cmd, sizeof(cmd));
+ if (ret < 0) {
+ pr_err("%s:%d failed!\n", __func__, __LINE__);
+ return;
+ }
+
+ pr_debug("%s:%d REP Cl_Class-%u CL_ID-%u CPU_ID-%u Freq-%llu}\n",
+ __func__, __LINE__, cmd.cluster_class,
+ cmd.cluster_id, cmd.cpu_id, cmd.frequency);
+
+ *rate = cmd.frequency;
+}
+
+static unsigned long
+clc_recalc_rate(struct clk_hw *hw, unsigned long unused)
+{
+ unsigned long rate;
+
+ mhu_cluster_rate(hw, &rate, 1);
+ return rate;
+}
+
+static long
+clc_round_rate(struct clk_hw *hw, unsigned long rate,
+ unsigned long *unused)
+{
+ return rate;
+}
+
+static int
+clc_set_rate(struct clk_hw *hw, unsigned long rate,
+ unsigned long unused)
+{
+ unsigned long res = rate;
+
+ mhu_cluster_rate(hw, &res, 0);
+
+ return (res == rate) ? 0 : -EINVAL;
+}
+
+static struct clk_ops clk_clc_ops = {
+ .recalc_rate = clc_recalc_rate,
+ .round_rate = clc_round_rate,
+ .set_rate = clc_set_rate,
+};
+
+struct clk *mb86s7x_clclk_register(struct device *cpu_dev)
+{
+ struct clk_init_data init;
+ struct cl_clk *clc;
+
+ clc = kzalloc(sizeof(*clc), GFP_KERNEL);
+ if (!clc)
+ return ERR_PTR(-ENOMEM);
+
+ clc->hw.init = &init;
+ clc->cluster = topology_physical_package_id(cpu_dev->id);
+
+ init.name = dev_name(cpu_dev);
+ init.ops = &clk_clc_ops;
+ init.flags = CLK_IS_ROOT | CLK_GET_RATE_NOCACHE;
+ init.num_parents = 0;
+
+ return devm_clk_register(cpu_dev, &clc->hw);
+}
+
+static int mb86s7x_clclk_of_init(void)
+{
+ int cpu, ret = -ENODEV;
+ struct device_node *np;
+ struct clk *clk;
+
+ np = of_find_compatible_node(NULL, NULL, "fujitsu,mb86s70-scb-1.0");
+ if (!np || !of_device_is_available(np))
+ goto exit;
+
+ for_each_possible_cpu(cpu) {
+ struct device *cpu_dev = get_cpu_device(cpu);
+
+ if (!cpu_dev) {
+ pr_err("failed to get cpu%d device\n", cpu);
+ continue;
+ }
+
+ clk = mb86s7x_clclk_register(cpu_dev);
+ if (IS_ERR(clk)) {
+ pr_err("failed to register cpu%d clock\n", cpu);
+ continue;
+ }
+ if (clk_register_clkdev(clk, NULL, dev_name(cpu_dev))) {
+ pr_err("failed to register cpu%d clock lookup\n", cpu);
+ continue;
+ }
+ pr_debug("registered clk for %s\n", dev_name(cpu_dev));
+ }
+ ret = 0;
+
+ platform_device_register_simple("arm-bL-cpufreq-dt", -1, NULL, 0);
+exit:
+ of_node_put(np);
+ return ret;
+}
+module_init(mb86s7x_clclk_of_init);
diff --git a/drivers/clk/clk-palmas.c b/drivers/clk/clk-palmas.c
index 8d45992..45a535a 100644
--- a/drivers/clk/clk-palmas.c
+++ b/drivers/clk/clk-palmas.c
@@ -161,7 +161,7 @@
},
};
-static struct of_device_id palmas_clks_of_match[] = {
+static const struct of_device_id palmas_clks_of_match[] = {
{
.compatible = "ti,palmas-clk32kg",
.data = &palmas_of_clk32kg,
diff --git a/drivers/clk/clk-pwm.c b/drivers/clk/clk-pwm.c
new file mode 100644
index 0000000..328fcfc
--- /dev/null
+++ b/drivers/clk/clk-pwm.c
@@ -0,0 +1,136 @@
+/*
+ * Copyright (C) 2014 Philipp Zabel, Pengutronix
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * PWM (mis)used as clock output
+ */
+#include <linux/clk-provider.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/platform_device.h>
+#include <linux/pwm.h>
+
+struct clk_pwm {
+ struct clk_hw hw;
+ struct pwm_device *pwm;
+ u32 fixed_rate;
+};
+
+static inline struct clk_pwm *to_clk_pwm(struct clk_hw *hw)
+{
+ return container_of(hw, struct clk_pwm, hw);
+}
+
+static int clk_pwm_prepare(struct clk_hw *hw)
+{
+ struct clk_pwm *clk_pwm = to_clk_pwm(hw);
+
+ return pwm_enable(clk_pwm->pwm);
+}
+
+static void clk_pwm_unprepare(struct clk_hw *hw)
+{
+ struct clk_pwm *clk_pwm = to_clk_pwm(hw);
+
+ pwm_disable(clk_pwm->pwm);
+}
+
+static unsigned long clk_pwm_recalc_rate(struct clk_hw *hw,
+ unsigned long parent_rate)
+{
+ struct clk_pwm *clk_pwm = to_clk_pwm(hw);
+
+ return clk_pwm->fixed_rate;
+}
+
+static const struct clk_ops clk_pwm_ops = {
+ .prepare = clk_pwm_prepare,
+ .unprepare = clk_pwm_unprepare,
+ .recalc_rate = clk_pwm_recalc_rate,
+};
+
+static int clk_pwm_probe(struct platform_device *pdev)
+{
+ struct device_node *node = pdev->dev.of_node;
+ struct clk_init_data init;
+ struct clk_pwm *clk_pwm;
+ struct pwm_device *pwm;
+ const char *clk_name;
+ struct clk *clk;
+ int ret;
+
+ clk_pwm = devm_kzalloc(&pdev->dev, sizeof(*clk_pwm), GFP_KERNEL);
+ if (!clk_pwm)
+ return -ENOMEM;
+
+ pwm = devm_pwm_get(&pdev->dev, NULL);
+ if (IS_ERR(pwm))
+ return PTR_ERR(pwm);
+
+ if (!pwm->period) {
+ dev_err(&pdev->dev, "invalid PWM period\n");
+ return -EINVAL;
+ }
+
+ if (of_property_read_u32(node, "clock-frequency", &clk_pwm->fixed_rate))
+ clk_pwm->fixed_rate = NSEC_PER_SEC / pwm->period;
+
+ if (pwm->period != NSEC_PER_SEC / clk_pwm->fixed_rate &&
+ pwm->period != DIV_ROUND_UP(NSEC_PER_SEC, clk_pwm->fixed_rate)) {
+ dev_err(&pdev->dev,
+ "clock-frequency does not match PWM period\n");
+ return -EINVAL;
+ }
+
+ ret = pwm_config(pwm, (pwm->period + 1) >> 1, pwm->period);
+ if (ret < 0)
+ return ret;
+
+ clk_name = node->name;
+ of_property_read_string(node, "clock-output-names", &clk_name);
+
+ init.name = clk_name;
+ init.ops = &clk_pwm_ops;
+ init.flags = CLK_IS_BASIC | CLK_IS_ROOT;
+ init.num_parents = 0;
+
+ clk_pwm->pwm = pwm;
+ clk_pwm->hw.init = &init;
+ clk = devm_clk_register(&pdev->dev, &clk_pwm->hw);
+ if (IS_ERR(clk))
+ return PTR_ERR(clk);
+
+ return of_clk_add_provider(node, of_clk_src_simple_get, clk);
+}
+
+static int clk_pwm_remove(struct platform_device *pdev)
+{
+ of_clk_del_provider(pdev->dev.of_node);
+
+ return 0;
+}
+
+static const struct of_device_id clk_pwm_dt_ids[] = {
+ { .compatible = "pwm-clock" },
+ { }
+};
+MODULE_DEVICE_TABLE(of, clk_pwm_dt_ids);
+
+static struct platform_driver clk_pwm_driver = {
+ .probe = clk_pwm_probe,
+ .remove = clk_pwm_remove,
+ .driver = {
+ .name = "pwm-clock",
+ .of_match_table = of_match_ptr(clk_pwm_dt_ids),
+ },
+};
+
+module_platform_driver(clk_pwm_driver);
+
+MODULE_AUTHOR("Philipp Zabel <p.zabel@pengutronix.de>");
+MODULE_DESCRIPTION("PWM clock driver");
+MODULE_LICENSE("GPL");
diff --git a/drivers/clk/clk-si5351.c b/drivers/clk/clk-si5351.c
index 3b2a66f..44ea107 100644
--- a/drivers/clk/clk-si5351.c
+++ b/drivers/clk/clk-si5351.c
@@ -68,16 +68,16 @@
struct si5351_hw_data *clkout;
};
-static const char const *si5351_input_names[] = {
+static const char * const si5351_input_names[] = {
"xtal", "clkin"
};
-static const char const *si5351_pll_names[] = {
+static const char * const si5351_pll_names[] = {
"plla", "pllb", "vxco"
};
-static const char const *si5351_msynth_names[] = {
+static const char * const si5351_msynth_names[] = {
"ms0", "ms1", "ms2", "ms3", "ms4", "ms5", "ms6", "ms7"
};
-static const char const *si5351_clkout_names[] = {
+static const char * const si5351_clkout_names[] = {
"clk0", "clk1", "clk2", "clk3", "clk4", "clk5", "clk6", "clk7"
};
@@ -207,7 +207,7 @@
return true;
}
-static struct regmap_config si5351_regmap_config = {
+static const struct regmap_config si5351_regmap_config = {
.reg_bits = 8,
.val_bits = 8,
.cache_type = REGCACHE_RBTREE,
diff --git a/drivers/clk/clk-si570.c b/drivers/clk/clk-si570.c
index fc167b3..20a5aec 100644
--- a/drivers/clk/clk-si570.c
+++ b/drivers/clk/clk-si570.c
@@ -393,7 +393,7 @@
}
}
-static struct regmap_config si570_regmap_config = {
+static const struct regmap_config si570_regmap_config = {
.reg_bits = 8,
.val_bits = 8,
.cache_type = REGCACHE_RBTREE,
diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
index 237f23f..459ce9d 100644
--- a/drivers/clk/clk.c
+++ b/drivers/clk/clk.c
@@ -77,13 +77,16 @@
struct kref ref;
};
+#define CREATE_TRACE_POINTS
+#include <trace/events/clk.h>
+
struct clk {
struct clk_core *core;
const char *dev_id;
const char *con_id;
unsigned long min_rate;
unsigned long max_rate;
- struct hlist_node child_node;
+ struct hlist_node clks_node;
};
/*** locking ***/
@@ -480,6 +483,8 @@
{
struct clk_core *child;
+ lockdep_assert_held(&prepare_lock);
+
hlist_for_each_entry(child, &clk->children, child_node)
clk_unprepare_unused_subtree(child);
@@ -490,10 +495,12 @@
return;
if (clk_core_is_prepared(clk)) {
+ trace_clk_unprepare(clk);
if (clk->ops->unprepare_unused)
clk->ops->unprepare_unused(clk->hw);
else if (clk->ops->unprepare)
clk->ops->unprepare(clk->hw);
+ trace_clk_unprepare_complete(clk);
}
}
@@ -503,6 +510,8 @@
struct clk_core *child;
unsigned long flags;
+ lockdep_assert_held(&prepare_lock);
+
hlist_for_each_entry(child, &clk->children, child_node)
clk_disable_unused_subtree(child);
@@ -520,10 +529,12 @@
* back to .disable
*/
if (clk_core_is_enabled(clk)) {
+ trace_clk_disable(clk);
if (clk->ops->disable_unused)
clk->ops->disable_unused(clk->hw);
else if (clk->ops->disable)
clk->ops->disable(clk->hw);
+ trace_clk_disable_complete(clk);
}
unlock_out:
@@ -851,10 +862,10 @@
*min_rate = 0;
*max_rate = ULONG_MAX;
- hlist_for_each_entry(clk_user, &clk->clks, child_node)
+ hlist_for_each_entry(clk_user, &clk->clks, clks_node)
*min_rate = max(*min_rate, clk_user->min_rate);
- hlist_for_each_entry(clk_user, &clk->clks, child_node)
+ hlist_for_each_entry(clk_user, &clk->clks, clks_node)
*max_rate = min(*max_rate, clk_user->max_rate);
}
@@ -903,9 +914,12 @@
WARN_ON(clk->enable_count > 0);
+ trace_clk_unprepare(clk);
+
if (clk->ops->unprepare)
clk->ops->unprepare(clk->hw);
+ trace_clk_unprepare_complete(clk);
clk_core_unprepare(clk->parent);
}
@@ -943,12 +957,16 @@
if (ret)
return ret;
- if (clk->ops->prepare) {
+ trace_clk_prepare(clk);
+
+ if (clk->ops->prepare)
ret = clk->ops->prepare(clk->hw);
- if (ret) {
- clk_core_unprepare(clk->parent);
- return ret;
- }
+
+ trace_clk_prepare_complete(clk);
+
+ if (ret) {
+ clk_core_unprepare(clk->parent);
+ return ret;
}
}
@@ -995,9 +1013,13 @@
if (--clk->enable_count > 0)
return;
+ trace_clk_disable(clk);
+
if (clk->ops->disable)
clk->ops->disable(clk->hw);
+ trace_clk_disable_complete(clk);
+
clk_core_disable(clk->parent);
}
@@ -1050,12 +1072,16 @@
if (ret)
return ret;
- if (clk->ops->enable) {
+ trace_clk_enable(clk);
+
+ if (clk->ops->enable)
ret = clk->ops->enable(clk->hw);
- if (ret) {
- clk_core_disable(clk->parent);
- return ret;
- }
+
+ trace_clk_enable_complete(clk);
+
+ if (ret) {
+ clk_core_disable(clk->parent);
+ return ret;
}
}
@@ -1106,6 +1132,8 @@
struct clk_core *parent;
struct clk_hw *parent_hw;
+ lockdep_assert_held(&prepare_lock);
+
if (!clk)
return 0;
@@ -1245,6 +1273,8 @@
unsigned long parent_accuracy = 0;
struct clk_core *child;
+ lockdep_assert_held(&prepare_lock);
+
if (clk->parent)
parent_accuracy = clk->parent->accuracy;
@@ -1318,6 +1348,8 @@
unsigned long parent_rate = 0;
struct clk_core *child;
+ lockdep_assert_held(&prepare_lock);
+
old_rate = clk->rate;
if (clk->parent)
@@ -1479,10 +1511,14 @@
old_parent = __clk_set_parent_before(clk, parent);
+ trace_clk_set_parent(clk, parent);
+
/* change clock input source */
if (parent && clk->ops->set_parent)
ret = clk->ops->set_parent(clk->hw, p_index);
+ trace_clk_set_parent_complete(clk, parent);
+
if (ret) {
flags = clk_enable_lock();
clk_reparent(clk, old_parent);
@@ -1524,6 +1560,8 @@
unsigned long new_rate;
int ret = NOTIFY_DONE;
+ lockdep_assert_held(&prepare_lock);
+
new_rate = clk_recalc(clk, parent_rate);
/* abort rate change if a driver returns NOTIFY_BAD or NOTIFY_STOP */
@@ -1580,6 +1618,7 @@
unsigned long min_rate;
unsigned long max_rate;
int p_index = 0;
+ long ret;
/* sanity */
if (IS_ERR_OR_NULL(clk))
@@ -1595,15 +1634,23 @@
/* find the closest rate and parent clk/rate */
if (clk->ops->determine_rate) {
parent_hw = parent ? parent->hw : NULL;
- new_rate = clk->ops->determine_rate(clk->hw, rate,
- min_rate,
- max_rate,
- &best_parent_rate,
- &parent_hw);
+ ret = clk->ops->determine_rate(clk->hw, rate,
+ min_rate,
+ max_rate,
+ &best_parent_rate,
+ &parent_hw);
+ if (ret < 0)
+ return NULL;
+
+ new_rate = ret;
parent = parent_hw ? parent_hw->core : NULL;
} else if (clk->ops->round_rate) {
- new_rate = clk->ops->round_rate(clk->hw, rate,
- &best_parent_rate);
+ ret = clk->ops->round_rate(clk->hw, rate,
+ &best_parent_rate);
+ if (ret < 0)
+ return NULL;
+
+ new_rate = ret;
if (new_rate < min_rate || new_rate > max_rate)
return NULL;
} else if (!parent || !(clk->flags & CLK_SET_RATE_PARENT)) {
@@ -1706,6 +1753,7 @@
if (clk->new_parent && clk->new_parent != clk->parent) {
old_parent = __clk_set_parent_before(clk, clk->new_parent);
+ trace_clk_set_parent(clk, clk->new_parent);
if (clk->ops->set_rate_and_parent) {
skip_set_rate = true;
@@ -1716,12 +1764,17 @@
clk->ops->set_parent(clk->hw, clk->new_parent_index);
}
+ trace_clk_set_parent_complete(clk, clk->new_parent);
__clk_set_parent_after(clk, clk->new_parent, old_parent);
}
+ trace_clk_set_rate(clk, clk->new_rate);
+
if (!skip_set_rate && clk->ops->set_rate)
clk->ops->set_rate(clk->hw, clk->new_rate, best_parent_rate);
+ trace_clk_set_rate_complete(clk, clk->new_rate);
+
clk->rate = clk_recalc(clk, best_parent_rate);
if (clk->notifier_count && old_rate != clk->rate)
@@ -2010,16 +2063,18 @@
if (!clk)
return 0;
- /* verify ops for for multi-parent clks */
- if ((clk->num_parents > 1) && (!clk->ops->set_parent))
- return -ENOSYS;
-
/* prevent racing with updates to the clock topology */
clk_prepare_lock();
if (clk->parent == parent)
goto out;
+ /* verify ops for for multi-parent clks */
+ if ((clk->num_parents > 1) && (!clk->ops->set_parent)) {
+ ret = -ENOSYS;
+ goto out;
+ }
+
/* check that we are allowed to re-parent if the clock is in use */
if ((clk->flags & CLK_SET_PARENT_GATE) && clk->prepare_count) {
ret = -EBUSY;
@@ -2110,10 +2165,10 @@
*/
int clk_set_phase(struct clk *clk, int degrees)
{
- int ret = 0;
+ int ret = -EINVAL;
if (!clk)
- goto out;
+ return 0;
/* sanity check degrees */
degrees %= 360;
@@ -2122,18 +2177,18 @@
clk_prepare_lock();
- if (!clk->core->ops->set_phase)
- goto out_unlock;
+ trace_clk_set_phase(clk->core, degrees);
- ret = clk->core->ops->set_phase(clk->core->hw, degrees);
+ if (clk->core->ops->set_phase)
+ ret = clk->core->ops->set_phase(clk->core->hw, degrees);
+
+ trace_clk_set_phase_complete(clk->core, degrees);
if (!ret)
clk->core->phase = degrees;
-out_unlock:
clk_prepare_unlock();
-out:
return ret;
}
EXPORT_SYMBOL_GPL(clk_set_phase);
@@ -2401,7 +2456,7 @@
clk->max_rate = ULONG_MAX;
clk_prepare_lock();
- hlist_add_head(&clk->child_node, &hw->core->clks);
+ hlist_add_head(&clk->clks_node, &hw->core->clks);
clk_prepare_unlock();
return clk;
@@ -2410,7 +2465,7 @@
void __clk_free_clk(struct clk *clk)
{
clk_prepare_lock();
- hlist_del(&clk->child_node);
+ hlist_del(&clk->clks_node);
clk_prepare_unlock();
kfree(clk);
@@ -2513,6 +2568,8 @@
struct clk_core *clk = container_of(ref, struct clk_core, ref);
int i = clk->num_parents;
+ lockdep_assert_held(&prepare_lock);
+
kfree(clk->parents);
while (--i >= 0)
kfree_const(clk->parent_names[i]);
@@ -2688,7 +2745,7 @@
clk_prepare_lock();
- hlist_del(&clk->child_node);
+ hlist_del(&clk->clks_node);
if (clk->min_rate > clk->core->req_rate ||
clk->max_rate < clk->core->req_rate)
clk_core_set_rate_nolock(clk->core, clk->core->req_rate);
@@ -2834,17 +2891,6 @@
static LIST_HEAD(of_clk_providers);
static DEFINE_MUTEX(of_clk_mutex);
-/* of_clk_provider list locking helpers */
-void of_clk_lock(void)
-{
- mutex_lock(&of_clk_mutex);
-}
-
-void of_clk_unlock(void)
-{
- mutex_unlock(&of_clk_mutex);
-}
-
struct clk *of_clk_src_simple_get(struct of_phandle_args *clkspec,
void *data)
{
@@ -2928,7 +2974,11 @@
struct of_clk_provider *provider;
struct clk *clk = ERR_PTR(-EPROBE_DEFER);
+ if (!clkspec)
+ return ERR_PTR(-EINVAL);
+
/* Check if we have such a provider in our array */
+ mutex_lock(&of_clk_mutex);
list_for_each_entry(provider, &of_clk_providers, link) {
if (provider->node == clkspec->np)
clk = provider->get(clkspec, provider->data);
@@ -2944,19 +2994,22 @@
break;
}
}
+ mutex_unlock(&of_clk_mutex);
return clk;
}
+/**
+ * of_clk_get_from_provider() - Lookup a clock from a clock provider
+ * @clkspec: pointer to a clock specifier data structure
+ *
+ * This function looks up a struct clk from the registered list of clock
+ * providers, an input is a clock specifier data structure as returned
+ * from the of_parse_phandle_with_args() function call.
+ */
struct clk *of_clk_get_from_provider(struct of_phandle_args *clkspec)
{
- struct clk *clk;
-
- mutex_lock(&of_clk_mutex);
- clk = __of_clk_get_from_provider(clkspec, NULL, __func__);
- mutex_unlock(&of_clk_mutex);
-
- return clk;
+ return __of_clk_get_from_provider(clkspec, NULL, __func__);
}
int of_clk_get_parent_count(struct device_node *np)
diff --git a/drivers/clk/clk.h b/drivers/clk/clk.h
index ba84540..00b35a1 100644
--- a/drivers/clk/clk.h
+++ b/drivers/clk/clk.h
@@ -12,11 +12,8 @@
struct clk_hw;
#if defined(CONFIG_OF) && defined(CONFIG_COMMON_CLK)
-struct clk *of_clk_get_by_clkspec(struct of_phandle_args *clkspec);
struct clk *__of_clk_get_from_provider(struct of_phandle_args *clkspec,
const char *dev_id, const char *con_id);
-void of_clk_lock(void);
-void of_clk_unlock(void);
#endif
#ifdef CONFIG_COMMON_CLK
diff --git a/drivers/clk/clkdev.c b/drivers/clk/clkdev.c
index 043fd36..1fcb6ef 100644
--- a/drivers/clk/clkdev.c
+++ b/drivers/clk/clkdev.c
@@ -28,34 +28,6 @@
static DEFINE_MUTEX(clocks_mutex);
#if defined(CONFIG_OF) && defined(CONFIG_COMMON_CLK)
-
-static struct clk *__of_clk_get_by_clkspec(struct of_phandle_args *clkspec,
- const char *dev_id, const char *con_id)
-{
- struct clk *clk;
-
- if (!clkspec)
- return ERR_PTR(-EINVAL);
-
- of_clk_lock();
- clk = __of_clk_get_from_provider(clkspec, dev_id, con_id);
- of_clk_unlock();
- return clk;
-}
-
-/**
- * of_clk_get_by_clkspec() - Lookup a clock form a clock provider
- * @clkspec: pointer to a clock specifier data structure
- *
- * This function looks up a struct clk from the registered list of clock
- * providers, an input is a clock specifier data structure as returned
- * from the of_parse_phandle_with_args() function call.
- */
-struct clk *of_clk_get_by_clkspec(struct of_phandle_args *clkspec)
-{
- return __of_clk_get_by_clkspec(clkspec, NULL, __func__);
-}
-
static struct clk *__of_clk_get(struct device_node *np, int index,
const char *dev_id, const char *con_id)
{
@@ -71,7 +43,7 @@
if (rc)
return ERR_PTR(rc);
- clk = __of_clk_get_by_clkspec(&clkspec, dev_id, con_id);
+ clk = __of_clk_get_from_provider(&clkspec, dev_id, con_id);
of_node_put(clkspec.np);
return clk;
diff --git a/drivers/clk/hisilicon/clk-hi3620.c b/drivers/clk/hisilicon/clk-hi3620.c
index 2e4f6d4..472dd2c 100644
--- a/drivers/clk/hisilicon/clk-hi3620.c
+++ b/drivers/clk/hisilicon/clk-hi3620.c
@@ -38,44 +38,44 @@
#include "clk.h"
/* clock parent list */
-static const char *timer0_mux_p[] __initconst = { "osc32k", "timerclk01", };
-static const char *timer1_mux_p[] __initconst = { "osc32k", "timerclk01", };
-static const char *timer2_mux_p[] __initconst = { "osc32k", "timerclk23", };
-static const char *timer3_mux_p[] __initconst = { "osc32k", "timerclk23", };
-static const char *timer4_mux_p[] __initconst = { "osc32k", "timerclk45", };
-static const char *timer5_mux_p[] __initconst = { "osc32k", "timerclk45", };
-static const char *timer6_mux_p[] __initconst = { "osc32k", "timerclk67", };
-static const char *timer7_mux_p[] __initconst = { "osc32k", "timerclk67", };
-static const char *timer8_mux_p[] __initconst = { "osc32k", "timerclk89", };
-static const char *timer9_mux_p[] __initconst = { "osc32k", "timerclk89", };
-static const char *uart0_mux_p[] __initconst = { "osc26m", "pclk", };
-static const char *uart1_mux_p[] __initconst = { "osc26m", "pclk", };
-static const char *uart2_mux_p[] __initconst = { "osc26m", "pclk", };
-static const char *uart3_mux_p[] __initconst = { "osc26m", "pclk", };
-static const char *uart4_mux_p[] __initconst = { "osc26m", "pclk", };
-static const char *spi0_mux_p[] __initconst = { "osc26m", "rclk_cfgaxi", };
-static const char *spi1_mux_p[] __initconst = { "osc26m", "rclk_cfgaxi", };
-static const char *spi2_mux_p[] __initconst = { "osc26m", "rclk_cfgaxi", };
+static const char *timer0_mux_p[] __initdata = { "osc32k", "timerclk01", };
+static const char *timer1_mux_p[] __initdata = { "osc32k", "timerclk01", };
+static const char *timer2_mux_p[] __initdata = { "osc32k", "timerclk23", };
+static const char *timer3_mux_p[] __initdata = { "osc32k", "timerclk23", };
+static const char *timer4_mux_p[] __initdata = { "osc32k", "timerclk45", };
+static const char *timer5_mux_p[] __initdata = { "osc32k", "timerclk45", };
+static const char *timer6_mux_p[] __initdata = { "osc32k", "timerclk67", };
+static const char *timer7_mux_p[] __initdata = { "osc32k", "timerclk67", };
+static const char *timer8_mux_p[] __initdata = { "osc32k", "timerclk89", };
+static const char *timer9_mux_p[] __initdata = { "osc32k", "timerclk89", };
+static const char *uart0_mux_p[] __initdata = { "osc26m", "pclk", };
+static const char *uart1_mux_p[] __initdata = { "osc26m", "pclk", };
+static const char *uart2_mux_p[] __initdata = { "osc26m", "pclk", };
+static const char *uart3_mux_p[] __initdata = { "osc26m", "pclk", };
+static const char *uart4_mux_p[] __initdata = { "osc26m", "pclk", };
+static const char *spi0_mux_p[] __initdata = { "osc26m", "rclk_cfgaxi", };
+static const char *spi1_mux_p[] __initdata = { "osc26m", "rclk_cfgaxi", };
+static const char *spi2_mux_p[] __initdata = { "osc26m", "rclk_cfgaxi", };
/* share axi parent */
-static const char *saxi_mux_p[] __initconst = { "armpll3", "armpll2", };
-static const char *pwm0_mux_p[] __initconst = { "osc32k", "osc26m", };
-static const char *pwm1_mux_p[] __initconst = { "osc32k", "osc26m", };
-static const char *sd_mux_p[] __initconst = { "armpll2", "armpll3", };
-static const char *mmc1_mux_p[] __initconst = { "armpll2", "armpll3", };
-static const char *mmc1_mux2_p[] __initconst = { "osc26m", "mmc1_div", };
-static const char *g2d_mux_p[] __initconst = { "armpll2", "armpll3", };
-static const char *venc_mux_p[] __initconst = { "armpll2", "armpll3", };
-static const char *vdec_mux_p[] __initconst = { "armpll2", "armpll3", };
-static const char *vpp_mux_p[] __initconst = { "armpll2", "armpll3", };
-static const char *edc0_mux_p[] __initconst = { "armpll2", "armpll3", };
-static const char *ldi0_mux_p[] __initconst = { "armpll2", "armpll4",
+static const char *saxi_mux_p[] __initdata = { "armpll3", "armpll2", };
+static const char *pwm0_mux_p[] __initdata = { "osc32k", "osc26m", };
+static const char *pwm1_mux_p[] __initdata = { "osc32k", "osc26m", };
+static const char *sd_mux_p[] __initdata = { "armpll2", "armpll3", };
+static const char *mmc1_mux_p[] __initdata = { "armpll2", "armpll3", };
+static const char *mmc1_mux2_p[] __initdata = { "osc26m", "mmc1_div", };
+static const char *g2d_mux_p[] __initdata = { "armpll2", "armpll3", };
+static const char *venc_mux_p[] __initdata = { "armpll2", "armpll3", };
+static const char *vdec_mux_p[] __initdata = { "armpll2", "armpll3", };
+static const char *vpp_mux_p[] __initdata = { "armpll2", "armpll3", };
+static const char *edc0_mux_p[] __initdata = { "armpll2", "armpll3", };
+static const char *ldi0_mux_p[] __initdata = { "armpll2", "armpll4",
"armpll3", "armpll5", };
-static const char *edc1_mux_p[] __initconst = { "armpll2", "armpll3", };
-static const char *ldi1_mux_p[] __initconst = { "armpll2", "armpll4",
+static const char *edc1_mux_p[] __initdata = { "armpll2", "armpll3", };
+static const char *ldi1_mux_p[] __initdata = { "armpll2", "armpll4",
"armpll3", "armpll5", };
-static const char *rclk_hsic_p[] __initconst = { "armpll3", "armpll2", };
-static const char *mmc2_mux_p[] __initconst = { "armpll2", "armpll3", };
-static const char *mmc3_mux_p[] __initconst = { "armpll2", "armpll3", };
+static const char *rclk_hsic_p[] __initdata = { "armpll3", "armpll2", };
+static const char *mmc2_mux_p[] __initdata = { "armpll2", "armpll3", };
+static const char *mmc3_mux_p[] __initdata = { "armpll2", "armpll3", };
/* fixed rate clocks */
diff --git a/drivers/clk/hisilicon/clk-hix5hd2.c b/drivers/clk/hisilicon/clk-hix5hd2.c
index 3f369c6..f1d2394 100644
--- a/drivers/clk/hisilicon/clk-hix5hd2.c
+++ b/drivers/clk/hisilicon/clk-hix5hd2.c
@@ -46,15 +46,15 @@
{ HIX5HD2_FIXED_83M, "83m", NULL, CLK_IS_ROOT, 83333333, },
};
-static const char *sfc_mux_p[] __initconst = {
+static const char *sfc_mux_p[] __initdata = {
"24m", "150m", "200m", "100m", "75m", };
static u32 sfc_mux_table[] = {0, 4, 5, 6, 7};
-static const char *sdio_mux_p[] __initconst = {
+static const char *sdio_mux_p[] __initdata = {
"75m", "100m", "50m", "15m", };
static u32 sdio_mux_table[] = {0, 1, 2, 3};
-static const char *fephy_mux_p[] __initconst = { "25m", "125m"};
+static const char *fephy_mux_p[] __initdata = { "25m", "125m"};
static u32 fephy_mux_table[] = {0, 1};
diff --git a/drivers/clk/mvebu/Kconfig b/drivers/clk/mvebu/Kconfig
index 3b34dba..2769625 100644
--- a/drivers/clk/mvebu/Kconfig
+++ b/drivers/clk/mvebu/Kconfig
@@ -21,6 +21,10 @@
bool
select MVEBU_CLK_COMMON
+config ARMADA_39X_CLK
+ bool
+ select MVEBU_CLK_COMMON
+
config ARMADA_XP_CLK
bool
select MVEBU_CLK_COMMON
diff --git a/drivers/clk/mvebu/Makefile b/drivers/clk/mvebu/Makefile
index a9a56fc..645ac7e 100644
--- a/drivers/clk/mvebu/Makefile
+++ b/drivers/clk/mvebu/Makefile
@@ -5,6 +5,7 @@
obj-$(CONFIG_ARMADA_370_CLK) += armada-370.o
obj-$(CONFIG_ARMADA_375_CLK) += armada-375.o
obj-$(CONFIG_ARMADA_38X_CLK) += armada-38x.o
+obj-$(CONFIG_ARMADA_39X_CLK) += armada-39x.o
obj-$(CONFIG_ARMADA_XP_CLK) += armada-xp.o
obj-$(CONFIG_DOVE_CLK) += dove.o
obj-$(CONFIG_KIRKWOOD_CLK) += kirkwood.o
diff --git a/drivers/clk/mvebu/armada-39x.c b/drivers/clk/mvebu/armada-39x.c
new file mode 100644
index 0000000..efb974d
--- /dev/null
+++ b/drivers/clk/mvebu/armada-39x.c
@@ -0,0 +1,156 @@
+/*
+ * Marvell Armada 39x SoC clocks
+ *
+ * Copyright (C) 2015 Marvell
+ *
+ * Gregory CLEMENT <gregory.clement@free-electrons.com>
+ * Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
+ * Andrew Lunn <andrew@lunn.ch>
+ * Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
+ *
+ * This file is licensed under the terms of the GNU General Public
+ * License version 2. This program is licensed "as is" without any
+ * warranty of any kind, whether express or implied.
+ */
+
+#include <linux/kernel.h>
+#include <linux/clk-provider.h>
+#include <linux/io.h>
+#include <linux/of.h>
+#include "common.h"
+
+/*
+ * SARL[14:10] : Ratios between CPU, NBCLK, HCLK and DCLK.
+ *
+ * SARL[15] : TCLK frequency
+ * 0 = 250 MHz
+ * 1 = 200 MHz
+ *
+ * SARH[0] : Reference clock frequency
+ * 0 = 25 Mhz
+ * 1 = 40 Mhz
+ */
+
+#define SARL 0
+#define SARL_A390_TCLK_FREQ_OPT 15
+#define SARL_A390_TCLK_FREQ_OPT_MASK 0x1
+#define SARL_A390_CPU_DDR_L2_FREQ_OPT 10
+#define SARL_A390_CPU_DDR_L2_FREQ_OPT_MASK 0x1F
+#define SARH 4
+#define SARH_A390_REFCLK_FREQ BIT(0)
+
+static const u32 armada_39x_tclk_frequencies[] __initconst = {
+ 250000000,
+ 200000000,
+};
+
+static u32 __init armada_39x_get_tclk_freq(void __iomem *sar)
+{
+ u8 tclk_freq_select;
+
+ tclk_freq_select = ((readl(sar + SARL) >> SARL_A390_TCLK_FREQ_OPT) &
+ SARL_A390_TCLK_FREQ_OPT_MASK);
+ return armada_39x_tclk_frequencies[tclk_freq_select];
+}
+
+static const u32 armada_39x_cpu_frequencies[] __initconst = {
+ [0x0] = 666 * 1000 * 1000,
+ [0x2] = 800 * 1000 * 1000,
+ [0x3] = 800 * 1000 * 1000,
+ [0x4] = 1066 * 1000 * 1000,
+ [0x5] = 1066 * 1000 * 1000,
+ [0x6] = 1200 * 1000 * 1000,
+ [0x8] = 1332 * 1000 * 1000,
+ [0xB] = 1600 * 1000 * 1000,
+ [0xC] = 1600 * 1000 * 1000,
+ [0x12] = 1800 * 1000 * 1000,
+ [0x1E] = 1800 * 1000 * 1000,
+};
+
+static u32 __init armada_39x_get_cpu_freq(void __iomem *sar)
+{
+ u8 cpu_freq_select;
+
+ cpu_freq_select = ((readl(sar + SARL) >> SARL_A390_CPU_DDR_L2_FREQ_OPT) &
+ SARL_A390_CPU_DDR_L2_FREQ_OPT_MASK);
+ if (cpu_freq_select >= ARRAY_SIZE(armada_39x_cpu_frequencies)) {
+ pr_err("Selected CPU frequency (%d) unsupported\n",
+ cpu_freq_select);
+ return 0;
+ }
+
+ return armada_39x_cpu_frequencies[cpu_freq_select];
+}
+
+enum { A390_CPU_TO_NBCLK, A390_CPU_TO_HCLK, A390_CPU_TO_DCLK };
+
+static const struct coreclk_ratio armada_39x_coreclk_ratios[] __initconst = {
+ { .id = A390_CPU_TO_NBCLK, .name = "nbclk" },
+ { .id = A390_CPU_TO_HCLK, .name = "hclk" },
+ { .id = A390_CPU_TO_DCLK, .name = "dclk" },
+};
+
+static void __init armada_39x_get_clk_ratio(
+ void __iomem *sar, int id, int *mult, int *div)
+{
+ switch (id) {
+ case A390_CPU_TO_NBCLK:
+ *mult = 1;
+ *div = 2;
+ break;
+ case A390_CPU_TO_HCLK:
+ *mult = 1;
+ *div = 4;
+ break;
+ case A390_CPU_TO_DCLK:
+ *mult = 1;
+ *div = 2;
+ break;
+ }
+}
+
+static u32 __init armada_39x_refclk_ratio(void __iomem *sar)
+{
+ if (readl(sar + SARH) & SARH_A390_REFCLK_FREQ)
+ return 40 * 1000 * 1000;
+ else
+ return 25 * 1000 * 1000;
+}
+
+static const struct coreclk_soc_desc armada_39x_coreclks = {
+ .get_tclk_freq = armada_39x_get_tclk_freq,
+ .get_cpu_freq = armada_39x_get_cpu_freq,
+ .get_clk_ratio = armada_39x_get_clk_ratio,
+ .get_refclk_freq = armada_39x_refclk_ratio,
+ .ratios = armada_39x_coreclk_ratios,
+ .num_ratios = ARRAY_SIZE(armada_39x_coreclk_ratios),
+};
+
+static void __init armada_39x_coreclk_init(struct device_node *np)
+{
+ mvebu_coreclk_setup(np, &armada_39x_coreclks);
+}
+CLK_OF_DECLARE(armada_39x_core_clk, "marvell,armada-390-core-clock",
+ armada_39x_coreclk_init);
+
+/*
+ * Clock Gating Control
+ */
+static const struct clk_gating_soc_desc armada_39x_gating_desc[] __initconst = {
+ { "pex1", NULL, 5 },
+ { "pex2", NULL, 6 },
+ { "pex3", NULL, 7 },
+ { "pex0", NULL, 8 },
+ { "usb3h0", NULL, 9 },
+ { "sdio", NULL, 17 },
+ { "xor0", NULL, 22 },
+ { "xor1", NULL, 28 },
+ { }
+};
+
+static void __init armada_39x_clk_gating_init(struct device_node *np)
+{
+ mvebu_clk_gating_setup(np, armada_39x_gating_desc);
+}
+CLK_OF_DECLARE(armada_39x_clk_gating, "marvell,armada-390-gating-clock",
+ armada_39x_clk_gating_init);
diff --git a/drivers/clk/mvebu/common.c b/drivers/clk/mvebu/common.c
index 0d4d121..15b370f 100644
--- a/drivers/clk/mvebu/common.c
+++ b/drivers/clk/mvebu/common.c
@@ -121,6 +121,11 @@
/* Allocate struct for TCLK, cpu clk, and core ratio clocks */
clk_data.clk_num = 2 + desc->num_ratios;
+
+ /* One more clock for the optional refclk */
+ if (desc->get_refclk_freq)
+ clk_data.clk_num += 1;
+
clk_data.clks = kzalloc(clk_data.clk_num * sizeof(struct clk *),
GFP_KERNEL);
if (WARN_ON(!clk_data.clks)) {
@@ -162,6 +167,18 @@
WARN_ON(IS_ERR(clk_data.clks[2+n]));
};
+ /* Register optional refclk */
+ if (desc->get_refclk_freq) {
+ const char *name = "refclk";
+ of_property_read_string_index(np, "clock-output-names",
+ 2 + desc->num_ratios, &name);
+ rate = desc->get_refclk_freq(base);
+ clk_data.clks[2 + desc->num_ratios] =
+ clk_register_fixed_rate(NULL, name, NULL,
+ CLK_IS_ROOT, rate);
+ WARN_ON(IS_ERR(clk_data.clks[2 + desc->num_ratios]));
+ }
+
/* SAR register isn't needed anymore */
iounmap(base);
diff --git a/drivers/clk/mvebu/common.h b/drivers/clk/mvebu/common.h
index 783b563..f0de6c8 100644
--- a/drivers/clk/mvebu/common.h
+++ b/drivers/clk/mvebu/common.h
@@ -30,6 +30,7 @@
u32 (*get_tclk_freq)(void __iomem *sar);
u32 (*get_cpu_freq)(void __iomem *sar);
void (*get_clk_ratio)(void __iomem *sar, int id, int *mult, int *div);
+ u32 (*get_refclk_freq)(void __iomem *sar);
bool (*is_sscg_enabled)(void __iomem *sar);
u32 (*fix_sscg_deviation)(u32 system_clk);
const struct coreclk_ratio *ratios;
diff --git a/drivers/clk/mxs/clk-imx23.c b/drivers/clk/mxs/clk-imx23.c
index 9fc9359..22d136aa 100644
--- a/drivers/clk/mxs/clk-imx23.c
+++ b/drivers/clk/mxs/clk-imx23.c
@@ -77,12 +77,12 @@
writel_relaxed(30 << BP_FRAC_IOFRAC, FRAC + SET);
}
-static const char *sel_pll[] __initconst = { "pll", "ref_xtal", };
-static const char *sel_cpu[] __initconst = { "ref_cpu", "ref_xtal", };
-static const char *sel_pix[] __initconst = { "ref_pix", "ref_xtal", };
-static const char *sel_io[] __initconst = { "ref_io", "ref_xtal", };
-static const char *cpu_sels[] __initconst = { "cpu_pll", "cpu_xtal", };
-static const char *emi_sels[] __initconst = { "emi_pll", "emi_xtal", };
+static const char *sel_pll[] __initdata = { "pll", "ref_xtal", };
+static const char *sel_cpu[] __initdata = { "ref_cpu", "ref_xtal", };
+static const char *sel_pix[] __initdata = { "ref_pix", "ref_xtal", };
+static const char *sel_io[] __initdata = { "ref_io", "ref_xtal", };
+static const char *cpu_sels[] __initdata = { "cpu_pll", "cpu_xtal", };
+static const char *emi_sels[] __initdata = { "emi_pll", "emi_xtal", };
enum imx23_clk {
ref_xtal, pll, ref_cpu, ref_emi, ref_pix, ref_io, saif_sel,
diff --git a/drivers/clk/mxs/clk-imx28.c b/drivers/clk/mxs/clk-imx28.c
index a6c3501..b1be374 100644
--- a/drivers/clk/mxs/clk-imx28.c
+++ b/drivers/clk/mxs/clk-imx28.c
@@ -125,15 +125,15 @@
writel_relaxed(val, FRAC0);
}
-static const char *sel_cpu[] __initconst = { "ref_cpu", "ref_xtal", };
-static const char *sel_io0[] __initconst = { "ref_io0", "ref_xtal", };
-static const char *sel_io1[] __initconst = { "ref_io1", "ref_xtal", };
-static const char *sel_pix[] __initconst = { "ref_pix", "ref_xtal", };
-static const char *sel_gpmi[] __initconst = { "ref_gpmi", "ref_xtal", };
-static const char *sel_pll0[] __initconst = { "pll0", "ref_xtal", };
-static const char *cpu_sels[] __initconst = { "cpu_pll", "cpu_xtal", };
-static const char *emi_sels[] __initconst = { "emi_pll", "emi_xtal", };
-static const char *ptp_sels[] __initconst = { "ref_xtal", "pll0", };
+static const char *sel_cpu[] __initdata = { "ref_cpu", "ref_xtal", };
+static const char *sel_io0[] __initdata = { "ref_io0", "ref_xtal", };
+static const char *sel_io1[] __initdata = { "ref_io1", "ref_xtal", };
+static const char *sel_pix[] __initdata = { "ref_pix", "ref_xtal", };
+static const char *sel_gpmi[] __initdata = { "ref_gpmi", "ref_xtal", };
+static const char *sel_pll0[] __initdata = { "pll0", "ref_xtal", };
+static const char *cpu_sels[] __initdata = { "cpu_pll", "cpu_xtal", };
+static const char *emi_sels[] __initdata = { "emi_pll", "emi_xtal", };
+static const char *ptp_sels[] __initdata = { "ref_xtal", "pll0", };
enum imx28_clk {
ref_xtal, pll0, pll1, pll2, ref_cpu, ref_emi, ref_io0, ref_io1,
diff --git a/drivers/clk/pxa/clk-pxa.h b/drivers/clk/pxa/clk-pxa.h
index 3239654..b04c5b9 100644
--- a/drivers/clk/pxa/clk-pxa.h
+++ b/drivers/clk/pxa/clk-pxa.h
@@ -14,7 +14,7 @@
#define _CLK_PXA_
#define PARENTS(name) \
- static const char *name ## _parents[] __initconst
+ static const char *name ## _parents[] __initdata
#define MUX_RO_RATE_RO_OPS(name, clk_name) \
static struct clk_hw name ## _mux_hw; \
static struct clk_hw name ## _rate_hw; \
diff --git a/drivers/clk/pxa/clk-pxa3xx.c b/drivers/clk/pxa/clk-pxa3xx.c
index 39f891b..4b93a1e 100644
--- a/drivers/clk/pxa/clk-pxa3xx.c
+++ b/drivers/clk/pxa/clk-pxa3xx.c
@@ -336,6 +336,9 @@
clk_register_clk_pxa3xx_smemc();
clk_register_gate(NULL, "CLK_POUT", "osc_13mhz", 0,
(void __iomem *)&OSCC, 11, 0, NULL);
+ clkdev_pxa_register(CLK_OSTIMER, "OSTIMER0", NULL,
+ clk_register_fixed_factor(NULL, "os-timer0",
+ "osc_13mhz", 0, 1, 4));
}
int __init pxa3xx_clocks_init(void)
diff --git a/drivers/clk/qcom/Kconfig b/drivers/clk/qcom/Kconfig
index 0d7ab52..59d1666 100644
--- a/drivers/clk/qcom/Kconfig
+++ b/drivers/clk/qcom/Kconfig
@@ -1,6 +1,7 @@
config COMMON_CLK_QCOM
tristate "Support for Qualcomm's clock controllers"
depends on OF
+ depends on ARCH_QCOM || COMPILE_TEST
select REGMAP_MMIO
select RESET_CONTROLLER
@@ -46,6 +47,14 @@
Say Y if you want to use peripheral devices such as UART, SPI,
i2c, USB, SD/eMMC, etc.
+config MSM_GCC_8916
+ tristate "MSM8916 Global Clock Controller"
+ depends on COMMON_CLK_QCOM
+ help
+ Support for the global clock controller on msm8916 devices.
+ Say Y if you want to use devices such as UART, SPI i2c, USB,
+ SD/eMMC, display, graphics, camera etc.
+
config MSM_GCC_8960
tristate "APQ8064/MSM8960 Global Clock Controller"
depends on COMMON_CLK_QCOM
diff --git a/drivers/clk/qcom/Makefile b/drivers/clk/qcom/Makefile
index 6178264..50b337a 100644
--- a/drivers/clk/qcom/Makefile
+++ b/drivers/clk/qcom/Makefile
@@ -15,6 +15,7 @@
obj-$(CONFIG_IPQ_GCC_806X) += gcc-ipq806x.o
obj-$(CONFIG_IPQ_LCC_806X) += lcc-ipq806x.o
obj-$(CONFIG_MSM_GCC_8660) += gcc-msm8660.o
+obj-$(CONFIG_MSM_GCC_8916) += gcc-msm8916.o
obj-$(CONFIG_MSM_GCC_8960) += gcc-msm8960.o
obj-$(CONFIG_MSM_LCC_8960) += lcc-msm8960.o
obj-$(CONFIG_MSM_GCC_8974) += gcc-msm8974.o
diff --git a/drivers/clk/qcom/clk-pll.c b/drivers/clk/qcom/clk-pll.c
index b4325f6..245d506 100644
--- a/drivers/clk/qcom/clk-pll.c
+++ b/drivers/clk/qcom/clk-pll.c
@@ -71,12 +71,8 @@
udelay(50);
/* Enable PLL output. */
- ret = regmap_update_bits(pll->clkr.regmap, pll->mode_reg, PLL_OUTCTRL,
+ return regmap_update_bits(pll->clkr.regmap, pll->mode_reg, PLL_OUTCTRL,
PLL_OUTCTRL);
- if (ret)
- return ret;
-
- return 0;
}
static void clk_pll_disable(struct clk_hw *hw)
diff --git a/drivers/clk/qcom/clk-rcg.c b/drivers/clk/qcom/clk-rcg.c
index 0039bd7..7b3d626 100644
--- a/drivers/clk/qcom/clk-rcg.c
+++ b/drivers/clk/qcom/clk-rcg.c
@@ -47,15 +47,20 @@
struct clk_rcg *rcg = to_clk_rcg(hw);
int num_parents = __clk_get_num_parents(hw->clk);
u32 ns;
- int i;
+ int i, ret;
- regmap_read(rcg->clkr.regmap, rcg->ns_reg, &ns);
+ ret = regmap_read(rcg->clkr.regmap, rcg->ns_reg, &ns);
+ if (ret)
+ goto err;
ns = ns_to_src(&rcg->s, ns);
for (i = 0; i < num_parents; i++)
- if (ns == rcg->s.parent_map[i])
+ if (ns == rcg->s.parent_map[i].cfg)
return i;
- return -EINVAL;
+err:
+ pr_debug("%s: Clock %s has invalid parent, using default.\n",
+ __func__, __clk_get_name(hw->clk));
+ return 0;
}
static int reg_to_bank(struct clk_dyn_rcg *rcg, u32 bank)
@@ -70,21 +75,28 @@
int num_parents = __clk_get_num_parents(hw->clk);
u32 ns, reg;
int bank;
- int i;
+ int i, ret;
struct src_sel *s;
- regmap_read(rcg->clkr.regmap, rcg->bank_reg, &reg);
+ ret = regmap_read(rcg->clkr.regmap, rcg->bank_reg, &reg);
+ if (ret)
+ goto err;
bank = reg_to_bank(rcg, reg);
s = &rcg->s[bank];
- regmap_read(rcg->clkr.regmap, rcg->ns_reg[bank], &ns);
+ ret = regmap_read(rcg->clkr.regmap, rcg->ns_reg[bank], &ns);
+ if (ret)
+ goto err;
ns = ns_to_src(s, ns);
for (i = 0; i < num_parents; i++)
- if (ns == s->parent_map[i])
+ if (ns == s->parent_map[i].cfg)
return i;
- return -EINVAL;
+err:
+ pr_debug("%s: Clock %s has invalid parent, using default.\n",
+ __func__, __clk_get_name(hw->clk));
+ return 0;
}
static int clk_rcg_set_parent(struct clk_hw *hw, u8 index)
@@ -93,7 +105,7 @@
u32 ns;
regmap_read(rcg->clkr.regmap, rcg->ns_reg, &ns);
- ns = src_to_ns(&rcg->s, rcg->s.parent_map[index], ns);
+ ns = src_to_ns(&rcg->s, rcg->s.parent_map[index].cfg, ns);
regmap_write(rcg->clkr.regmap, rcg->ns_reg, ns);
return 0;
@@ -191,10 +203,10 @@
return val;
}
-static void configure_bank(struct clk_dyn_rcg *rcg, const struct freq_tbl *f)
+static int configure_bank(struct clk_dyn_rcg *rcg, const struct freq_tbl *f)
{
u32 ns, md, reg;
- int bank, new_bank;
+ int bank, new_bank, ret, index;
struct mn *mn;
struct pre_div *p;
struct src_sel *s;
@@ -206,38 +218,56 @@
enabled = __clk_is_enabled(hw->clk);
- regmap_read(rcg->clkr.regmap, rcg->bank_reg, &reg);
+ ret = regmap_read(rcg->clkr.regmap, rcg->bank_reg, &reg);
+ if (ret)
+ return ret;
bank = reg_to_bank(rcg, reg);
new_bank = enabled ? !bank : bank;
ns_reg = rcg->ns_reg[new_bank];
- regmap_read(rcg->clkr.regmap, ns_reg, &ns);
+ ret = regmap_read(rcg->clkr.regmap, ns_reg, &ns);
+ if (ret)
+ return ret;
if (banked_mn) {
mn = &rcg->mn[new_bank];
md_reg = rcg->md_reg[new_bank];
ns |= BIT(mn->mnctr_reset_bit);
- regmap_write(rcg->clkr.regmap, ns_reg, ns);
+ ret = regmap_write(rcg->clkr.regmap, ns_reg, ns);
+ if (ret)
+ return ret;
- regmap_read(rcg->clkr.regmap, md_reg, &md);
+ ret = regmap_read(rcg->clkr.regmap, md_reg, &md);
+ if (ret)
+ return ret;
md = mn_to_md(mn, f->m, f->n, md);
- regmap_write(rcg->clkr.regmap, md_reg, md);
-
+ ret = regmap_write(rcg->clkr.regmap, md_reg, md);
+ if (ret)
+ return ret;
ns = mn_to_ns(mn, f->m, f->n, ns);
- regmap_write(rcg->clkr.regmap, ns_reg, ns);
+ ret = regmap_write(rcg->clkr.regmap, ns_reg, ns);
+ if (ret)
+ return ret;
/* Two NS registers means mode control is in NS register */
if (rcg->ns_reg[0] != rcg->ns_reg[1]) {
ns = mn_to_reg(mn, f->m, f->n, ns);
- regmap_write(rcg->clkr.regmap, ns_reg, ns);
+ ret = regmap_write(rcg->clkr.regmap, ns_reg, ns);
+ if (ret)
+ return ret;
} else {
reg = mn_to_reg(mn, f->m, f->n, reg);
- regmap_write(rcg->clkr.regmap, rcg->bank_reg, reg);
+ ret = regmap_write(rcg->clkr.regmap, rcg->bank_reg,
+ reg);
+ if (ret)
+ return ret;
}
ns &= ~BIT(mn->mnctr_reset_bit);
- regmap_write(rcg->clkr.regmap, ns_reg, ns);
+ ret = regmap_write(rcg->clkr.regmap, ns_reg, ns);
+ if (ret)
+ return ret;
}
if (banked_p) {
@@ -246,14 +276,24 @@
}
s = &rcg->s[new_bank];
- ns = src_to_ns(s, s->parent_map[f->src], ns);
- regmap_write(rcg->clkr.regmap, ns_reg, ns);
+ index = qcom_find_src_index(hw, s->parent_map, f->src);
+ if (index < 0)
+ return index;
+ ns = src_to_ns(s, s->parent_map[index].cfg, ns);
+ ret = regmap_write(rcg->clkr.regmap, ns_reg, ns);
+ if (ret)
+ return ret;
if (enabled) {
- regmap_read(rcg->clkr.regmap, rcg->bank_reg, &reg);
+ ret = regmap_read(rcg->clkr.regmap, rcg->bank_reg, &reg);
+ if (ret)
+ return ret;
reg ^= BIT(rcg->mux_sel_bit);
- regmap_write(rcg->clkr.regmap, rcg->bank_reg, reg);
+ ret = regmap_write(rcg->clkr.regmap, rcg->bank_reg, reg);
+ if (ret)
+ return ret;
}
+ return 0;
}
static int clk_dyn_rcg_set_parent(struct clk_hw *hw, u8 index)
@@ -279,10 +319,8 @@
if (banked_p)
f.pre_div = ns_to_pre_div(&rcg->p[bank], ns) + 1;
- f.src = index;
- configure_bank(rcg, &f);
-
- return 0;
+ f.src = qcom_find_src_index(hw, rcg->s[bank].parent_map, index);
+ return configure_bank(rcg, &f);
}
/*
@@ -369,17 +407,23 @@
static long _freq_tbl_determine_rate(struct clk_hw *hw,
const struct freq_tbl *f, unsigned long rate,
unsigned long min_rate, unsigned long max_rate,
- unsigned long *p_rate, struct clk_hw **p_hw)
+ unsigned long *p_rate, struct clk_hw **p_hw,
+ const struct parent_map *parent_map)
{
unsigned long clk_flags;
struct clk *p;
+ int index;
f = qcom_find_freq(f, rate);
if (!f)
return -EINVAL;
+ index = qcom_find_src_index(hw, parent_map, f->src);
+ if (index < 0)
+ return index;
+
clk_flags = __clk_get_flags(hw->clk);
- p = clk_get_parent_by_index(hw->clk, f->src);
+ p = clk_get_parent_by_index(hw->clk, index);
if (clk_flags & CLK_SET_RATE_PARENT) {
rate = rate * f->pre_div;
if (f->n) {
@@ -404,7 +448,7 @@
struct clk_rcg *rcg = to_clk_rcg(hw);
return _freq_tbl_determine_rate(hw, rcg->freq_tbl, rate, min_rate,
- max_rate, p_rate, p);
+ max_rate, p_rate, p, rcg->s.parent_map);
}
static long clk_dyn_rcg_determine_rate(struct clk_hw *hw, unsigned long rate,
@@ -412,9 +456,16 @@
unsigned long *p_rate, struct clk_hw **p)
{
struct clk_dyn_rcg *rcg = to_clk_dyn_rcg(hw);
+ u32 reg;
+ int bank;
+ struct src_sel *s;
+
+ regmap_read(rcg->clkr.regmap, rcg->bank_reg, &reg);
+ bank = reg_to_bank(rcg, reg);
+ s = &rcg->s[bank];
return _freq_tbl_determine_rate(hw, rcg->freq_tbl, rate, min_rate,
- max_rate, p_rate, p);
+ max_rate, p_rate, p, s->parent_map);
}
static long clk_rcg_bypass_determine_rate(struct clk_hw *hw, unsigned long rate,
@@ -424,8 +475,9 @@
struct clk_rcg *rcg = to_clk_rcg(hw);
const struct freq_tbl *f = rcg->freq_tbl;
struct clk *p;
+ int index = qcom_find_src_index(hw, rcg->s.parent_map, f->src);
- p = clk_get_parent_by_index(hw->clk, f->src);
+ p = clk_get_parent_by_index(hw->clk, index);
*p_hw = __clk_get_hw(p);
*p_rate = __clk_round_rate(p, rate);
@@ -495,6 +547,57 @@
return __clk_rcg_set_rate(rcg, rcg->freq_tbl);
}
+/*
+ * This type of clock has a glitch-free mux that switches between the output of
+ * the M/N counter and an always on clock source (XO). When clk_set_rate() is
+ * called we need to make sure that we don't switch to the M/N counter if it
+ * isn't clocking because the mux will get stuck and the clock will stop
+ * outputting a clock. This can happen if the framework isn't aware that this
+ * clock is on and so clk_set_rate() doesn't turn on the new parent. To fix
+ * this we switch the mux in the enable/disable ops and reprogram the M/N
+ * counter in the set_rate op. We also make sure to switch away from the M/N
+ * counter in set_rate if software thinks the clock is off.
+ */
+static int clk_rcg_lcc_set_rate(struct clk_hw *hw, unsigned long rate,
+ unsigned long parent_rate)
+{
+ struct clk_rcg *rcg = to_clk_rcg(hw);
+ const struct freq_tbl *f;
+ int ret;
+ u32 gfm = BIT(10);
+
+ f = qcom_find_freq(rcg->freq_tbl, rate);
+ if (!f)
+ return -EINVAL;
+
+ /* Switch to XO to avoid glitches */
+ regmap_update_bits(rcg->clkr.regmap, rcg->ns_reg, gfm, 0);
+ ret = __clk_rcg_set_rate(rcg, f);
+ /* Switch back to M/N if it's clocking */
+ if (__clk_is_enabled(hw->clk))
+ regmap_update_bits(rcg->clkr.regmap, rcg->ns_reg, gfm, gfm);
+
+ return ret;
+}
+
+static int clk_rcg_lcc_enable(struct clk_hw *hw)
+{
+ struct clk_rcg *rcg = to_clk_rcg(hw);
+ u32 gfm = BIT(10);
+
+ /* Use M/N */
+ return regmap_update_bits(rcg->clkr.regmap, rcg->ns_reg, gfm, gfm);
+}
+
+static void clk_rcg_lcc_disable(struct clk_hw *hw)
+{
+ struct clk_rcg *rcg = to_clk_rcg(hw);
+ u32 gfm = BIT(10);
+
+ /* Use XO */
+ regmap_update_bits(rcg->clkr.regmap, rcg->ns_reg, gfm, 0);
+}
+
static int __clk_dyn_rcg_set_rate(struct clk_hw *hw, unsigned long rate)
{
struct clk_dyn_rcg *rcg = to_clk_dyn_rcg(hw);
@@ -504,9 +607,7 @@
if (!f)
return -EINVAL;
- configure_bank(rcg, f);
-
- return 0;
+ return configure_bank(rcg, f);
}
static int clk_dyn_rcg_set_rate(struct clk_hw *hw, unsigned long rate,
@@ -543,6 +644,17 @@
};
EXPORT_SYMBOL_GPL(clk_rcg_bypass_ops);
+const struct clk_ops clk_rcg_lcc_ops = {
+ .enable = clk_rcg_lcc_enable,
+ .disable = clk_rcg_lcc_disable,
+ .get_parent = clk_rcg_get_parent,
+ .set_parent = clk_rcg_set_parent,
+ .recalc_rate = clk_rcg_recalc_rate,
+ .determine_rate = clk_rcg_determine_rate,
+ .set_rate = clk_rcg_lcc_set_rate,
+};
+EXPORT_SYMBOL_GPL(clk_rcg_lcc_ops);
+
const struct clk_ops clk_dyn_rcg_ops = {
.enable = clk_enable_regmap,
.is_enabled = clk_is_enabled_regmap,
diff --git a/drivers/clk/qcom/clk-rcg.h b/drivers/clk/qcom/clk-rcg.h
index 687e41f..56028bb 100644
--- a/drivers/clk/qcom/clk-rcg.h
+++ b/drivers/clk/qcom/clk-rcg.h
@@ -26,6 +26,16 @@
};
/**
+ * struct parent_map - map table for PLL source select configuration values
+ * @src: source PLL
+ * @cfg: configuration value
+ */
+struct parent_map {
+ u8 src;
+ u8 cfg;
+};
+
+/**
* struct mn - M/N:D counter
* @mnctr_en_bit: bit to enable mn counter
* @mnctr_reset_bit: bit to assert mn counter reset
@@ -65,7 +75,7 @@
struct src_sel {
u8 src_sel_shift;
#define SRC_SEL_MASK 0x7
- const u8 *parent_map;
+ const struct parent_map *parent_map;
};
/**
@@ -96,6 +106,7 @@
extern const struct clk_ops clk_rcg_ops;
extern const struct clk_ops clk_rcg_bypass_ops;
+extern const struct clk_ops clk_rcg_lcc_ops;
#define to_clk_rcg(_hw) container_of(to_clk_regmap(_hw), struct clk_rcg, clkr)
@@ -150,7 +161,7 @@
u32 cmd_rcgr;
u8 mnd_width;
u8 hid_width;
- const u8 *parent_map;
+ const struct parent_map *parent_map;
const struct freq_tbl *freq_tbl;
struct clk_regmap clkr;
};
diff --git a/drivers/clk/qcom/clk-rcg2.c b/drivers/clk/qcom/clk-rcg2.c
index 742acfa..b95d17f 100644
--- a/drivers/clk/qcom/clk-rcg2.c
+++ b/drivers/clk/qcom/clk-rcg2.c
@@ -69,16 +69,19 @@
ret = regmap_read(rcg->clkr.regmap, rcg->cmd_rcgr + CFG_REG, &cfg);
if (ret)
- return ret;
+ goto err;
cfg &= CFG_SRC_SEL_MASK;
cfg >>= CFG_SRC_SEL_SHIFT;
for (i = 0; i < num_parents; i++)
- if (cfg == rcg->parent_map[i])
+ if (cfg == rcg->parent_map[i].cfg)
return i;
- return -EINVAL;
+err:
+ pr_debug("%s: Clock %s has invalid parent, using default.\n",
+ __func__, __clk_get_name(hw->clk));
+ return 0;
}
static int update_config(struct clk_rcg2 *rcg)
@@ -111,10 +114,10 @@
{
struct clk_rcg2 *rcg = to_clk_rcg2(hw);
int ret;
+ u32 cfg = rcg->parent_map[index].cfg << CFG_SRC_SEL_SHIFT;
ret = regmap_update_bits(rcg->clkr.regmap, rcg->cmd_rcgr + CFG_REG,
- CFG_SRC_SEL_MASK,
- rcg->parent_map[index] << CFG_SRC_SEL_SHIFT);
+ CFG_SRC_SEL_MASK, cfg);
if (ret)
return ret;
@@ -179,13 +182,19 @@
{
unsigned long clk_flags;
struct clk *p;
+ struct clk_rcg2 *rcg = to_clk_rcg2(hw);
+ int index;
f = qcom_find_freq(f, rate);
if (!f)
return -EINVAL;
+ index = qcom_find_src_index(hw, rcg->parent_map, f->src);
+ if (index < 0)
+ return index;
+
clk_flags = __clk_get_flags(hw->clk);
- p = clk_get_parent_by_index(hw->clk, f->src);
+ p = clk_get_parent_by_index(hw->clk, index);
if (clk_flags & CLK_SET_RATE_PARENT) {
if (f->pre_div) {
rate /= 2;
@@ -219,7 +228,11 @@
static int clk_rcg2_configure(struct clk_rcg2 *rcg, const struct freq_tbl *f)
{
u32 cfg, mask;
- int ret;
+ struct clk_hw *hw = &rcg->clkr.hw;
+ int ret, index = qcom_find_src_index(hw, rcg->parent_map, f->src);
+
+ if (index < 0)
+ return index;
if (rcg->mnd_width && f->n) {
mask = BIT(rcg->mnd_width) - 1;
@@ -242,8 +255,8 @@
mask = BIT(rcg->hid_width) - 1;
mask |= CFG_SRC_SEL_MASK | CFG_MODE_MASK;
cfg = f->pre_div << CFG_SRC_DIV_SHIFT;
- cfg |= rcg->parent_map[f->src] << CFG_SRC_SEL_SHIFT;
- if (rcg->mnd_width && f->n)
+ cfg |= rcg->parent_map[index].cfg << CFG_SRC_SEL_SHIFT;
+ if (rcg->mnd_width && f->n && (f->m != f->n))
cfg |= CFG_MODE_DUAL_EDGE;
ret = regmap_update_bits(rcg->clkr.regmap,
rcg->cmd_rcgr + CFG_REG, mask, cfg);
@@ -374,9 +387,10 @@
s64 request;
u32 mask = BIT(rcg->hid_width) - 1;
u32 hid_div;
+ int index = qcom_find_src_index(hw, rcg->parent_map, f->src);
/* Force the correct parent */
- *p = __clk_get_hw(clk_get_parent_by_index(hw->clk, f->src));
+ *p = __clk_get_hw(clk_get_parent_by_index(hw->clk, index));
if (src_rate == 810000000)
frac = frac_table_810m;
@@ -420,6 +434,7 @@
{
struct clk_rcg2 *rcg = to_clk_rcg2(hw);
const struct freq_tbl *f = rcg->freq_tbl;
+ int index = qcom_find_src_index(hw, rcg->parent_map, f->src);
unsigned long parent_rate, div;
u32 mask = BIT(rcg->hid_width) - 1;
struct clk *p;
@@ -427,7 +442,7 @@
if (rate == 0)
return -EINVAL;
- p = clk_get_parent_by_index(hw->clk, f->src);
+ p = clk_get_parent_by_index(hw->clk, index);
*p_hw = __clk_get_hw(p);
*p_rate = parent_rate = __clk_round_rate(p, rate);
@@ -489,7 +504,8 @@
int delta = 100000;
const struct freq_tbl *f = rcg->freq_tbl;
const struct frac_entry *frac = frac_table_pixel;
- struct clk *parent = clk_get_parent_by_index(hw->clk, f->src);
+ int index = qcom_find_src_index(hw, rcg->parent_map, f->src);
+ struct clk *parent = clk_get_parent_by_index(hw->clk, index);
*p = __clk_get_hw(parent);
@@ -518,7 +534,8 @@
int delta = 100000;
u32 mask = BIT(rcg->hid_width) - 1;
u32 hid_div;
- struct clk *parent = clk_get_parent_by_index(hw->clk, f.src);
+ int index = qcom_find_src_index(hw, rcg->parent_map, f.src);
+ struct clk *parent = clk_get_parent_by_index(hw->clk, index);
for (; frac->num; frac++) {
request = (rate * frac->den) / frac->num;
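Note on the clk-rcg2.c hunks above: the frequency table's src value is no longer used directly as an index into the clock's parent list. Each RCG now carries a parent_map table pairing a logical source identifier with the hardware mux value written to CFG_REG, and qcom_find_src_index() translates src into a parent index (get_parent now falls back to parent 0 with a pr_debug instead of returning -EINVAL when the register holds an unmapped value). Below is a minimal stand-alone model of that translation; the types and values are simplified for illustration and are not the kernel definitions.

/* Stand-alone model of the parent_map lookup (simplified types, not kernel code). */
#include <stdio.h>

struct parent_map { unsigned char src; unsigned char cfg; };

enum { P_XO, P_GPLL0, P_GPLL4 };                /* logical source identifiers */

static const struct parent_map xo_gpll0_gpll4_map[] = {
	{ P_XO,    0 },                         /* mux value 0 selects xo    */
	{ P_GPLL0, 1 },                         /* mux value 1 selects gpll0 */
	{ P_GPLL4, 5 },                         /* mux value 5 selects gpll4 */
};

/* Mirrors qcom_find_src_index(): logical src -> position in the parent list. */
static int find_src_index(const struct parent_map *map, int n, unsigned char src)
{
	int i;

	for (i = 0; i < n; i++)
		if (map[i].src == src)
			return i;
	return -1;                              /* the kernel helper returns -ENOENT */
}

int main(void)
{
	int idx = find_src_index(xo_gpll0_gpll4_map, 3, P_GPLL4);

	if (idx < 0)
		return 1;

	/* Parent index is 2, but the CFG_REG source field gets cfg == 5. */
	printf("index=%d cfg=%u\n", idx, xo_gpll0_gpll4_map[idx].cfg);
	return 0;
}

The same index is what clk_get_parent_by_index() receives, so the order of entries in a parent_map must match the order of the clock's parent_names array.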
diff --git a/drivers/clk/qcom/common.c b/drivers/clk/qcom/common.c
index e20d947..f7101e3 100644
--- a/drivers/clk/qcom/common.c
+++ b/drivers/clk/qcom/common.c
@@ -43,6 +43,18 @@
}
EXPORT_SYMBOL_GPL(qcom_find_freq);
+int qcom_find_src_index(struct clk_hw *hw, const struct parent_map *map, u8 src)
+{
+ int i, num_parents = __clk_get_num_parents(hw->clk);
+
+ for (i = 0; i < num_parents; i++)
+ if (src == map[i].src)
+ return i;
+
+ return -ENOENT;
+}
+EXPORT_SYMBOL_GPL(qcom_find_src_index);
+
struct regmap *
qcom_cc_map(struct platform_device *pdev, const struct qcom_cc_desc *desc)
{
diff --git a/drivers/clk/qcom/common.h b/drivers/clk/qcom/common.h
index f519322..7a0e737 100644
--- a/drivers/clk/qcom/common.h
+++ b/drivers/clk/qcom/common.h
@@ -19,6 +19,8 @@
struct qcom_reset_map;
struct regmap;
struct freq_tbl;
+struct clk_hw;
+struct parent_map;
struct qcom_cc_desc {
const struct regmap_config *config;
@@ -30,6 +32,8 @@
extern const struct freq_tbl *qcom_find_freq(const struct freq_tbl *f,
unsigned long rate);
+extern int qcom_find_src_index(struct clk_hw *hw, const struct parent_map *map,
+ u8 src);
extern struct regmap *qcom_cc_map(struct platform_device *pdev,
const struct qcom_cc_desc *desc);
diff --git a/drivers/clk/qcom/gcc-apq8084.c b/drivers/clk/qcom/gcc-apq8084.c
index e3ef902..54a756b9 100644
--- a/drivers/clk/qcom/gcc-apq8084.c
+++ b/drivers/clk/qcom/gcc-apq8084.c
@@ -32,18 +32,20 @@
#include "clk-branch.h"
#include "reset.h"
-#define P_XO 0
-#define P_GPLL0 1
-#define P_GPLL1 1
-#define P_GPLL4 2
-#define P_PCIE_0_1_PIPE_CLK 1
-#define P_SATA_ASIC0_CLK 1
-#define P_SATA_RX_CLK 1
-#define P_SLEEP_CLK 1
+enum {
+ P_XO,
+ P_GPLL0,
+ P_GPLL1,
+ P_GPLL4,
+ P_PCIE_0_1_PIPE_CLK,
+ P_SATA_ASIC0_CLK,
+ P_SATA_RX_CLK,
+ P_SLEEP_CLK,
+};
-static const u8 gcc_xo_gpll0_map[] = {
- [P_XO] = 0,
- [P_GPLL0] = 1,
+static const struct parent_map gcc_xo_gpll0_map[] = {
+ { P_XO, 0 },
+ { P_GPLL0, 1 }
};
static const char *gcc_xo_gpll0[] = {
@@ -51,10 +53,10 @@
"gpll0_vote",
};
-static const u8 gcc_xo_gpll0_gpll4_map[] = {
- [P_XO] = 0,
- [P_GPLL0] = 1,
- [P_GPLL4] = 5,
+static const struct parent_map gcc_xo_gpll0_gpll4_map[] = {
+ { P_XO, 0 },
+ { P_GPLL0, 1 },
+ { P_GPLL4, 5 }
};
static const char *gcc_xo_gpll0_gpll4[] = {
@@ -63,9 +65,9 @@
"gpll4_vote",
};
-static const u8 gcc_xo_sata_asic0_map[] = {
- [P_XO] = 0,
- [P_SATA_ASIC0_CLK] = 2,
+static const struct parent_map gcc_xo_sata_asic0_map[] = {
+ { P_XO, 0 },
+ { P_SATA_ASIC0_CLK, 2 }
};
static const char *gcc_xo_sata_asic0[] = {
@@ -73,9 +75,9 @@
"sata_asic0_clk",
};
-static const u8 gcc_xo_sata_rx_map[] = {
- [P_XO] = 0,
- [P_SATA_RX_CLK] = 2,
+static const struct parent_map gcc_xo_sata_rx_map[] = {
+ { P_XO, 0 },
+ { P_SATA_RX_CLK, 2}
};
static const char *gcc_xo_sata_rx[] = {
@@ -83,9 +85,9 @@
"sata_rx_clk",
};
-static const u8 gcc_xo_pcie_map[] = {
- [P_XO] = 0,
- [P_PCIE_0_1_PIPE_CLK] = 2,
+static const struct parent_map gcc_xo_pcie_map[] = {
+ { P_XO, 0 },
+ { P_PCIE_0_1_PIPE_CLK, 2 }
};
static const char *gcc_xo_pcie[] = {
@@ -93,9 +95,9 @@
"pcie_pipe",
};
-static const u8 gcc_xo_pcie_sleep_map[] = {
- [P_XO] = 0,
- [P_SLEEP_CLK] = 6,
+static const struct parent_map gcc_xo_pcie_sleep_map[] = {
+ { P_XO, 0 },
+ { P_SLEEP_CLK, 6 }
};
static const char *gcc_xo_pcie_sleep[] = {
@@ -1263,9 +1265,9 @@
{ }
};
-static u8 usb_hsic_clk_src_map[] = {
- [P_XO] = 0,
- [P_GPLL1] = 4,
+static const struct parent_map usb_hsic_clk_src_map[] = {
+ { P_XO, 0 },
+ { P_GPLL1, 4 }
};
static struct clk_rcg2 usb_hsic_clk_src = {
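In gcc-apq8084.c the per-clock #define blocks collapse into a single enum and each map becomes an array of { src, cfg } pairs. One practical effect, illustrated by the stand-alone snippet below (an illustration, not code from the patch): in the old scheme several logical sources shared the same numeric value (P_GPLL1 and P_PCIE_0_1_PIPE_CLK were both 1), which only worked because no single table ever named two of them at once; designated initializers silently collapse such entries if a table ever does.

/* Illustration of the hazard with value-sharing #defines (not from the patch). */
#include <stdio.h>

#define P_GPLL1          1
#define P_PCIE_0_1_PIPE  1          /* shared the value 1 in the old scheme */

static const unsigned char old_style_map[] = {
	[P_GPLL1]         = 4,
	[P_PCIE_0_1_PIPE] = 2,      /* overrides the P_GPLL1 entry above */
};

int main(void)
{
	/* Prints 2: the P_GPLL1 mux value has been silently lost. */
	printf("old_style_map[1] = %u\n", old_style_map[1]);
	return 0;
}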
diff --git a/drivers/clk/qcom/gcc-ipq806x.c b/drivers/clk/qcom/gcc-ipq806x.c
index cbdc31d..a50936a 100644
--- a/drivers/clk/qcom/gcc-ipq806x.c
+++ b/drivers/clk/qcom/gcc-ipq806x.c
@@ -140,15 +140,17 @@
},
};
-#define P_PXO 0
-#define P_PLL8 1
-#define P_PLL3 1
-#define P_PLL0 2
-#define P_CXO 2
+enum {
+ P_PXO,
+ P_PLL8,
+ P_PLL3,
+ P_PLL0,
+ P_CXO,
+};
-static const u8 gcc_pxo_pll8_map[] = {
- [P_PXO] = 0,
- [P_PLL8] = 3,
+static const struct parent_map gcc_pxo_pll8_map[] = {
+ { P_PXO, 0 },
+ { P_PLL8, 3 }
};
static const char *gcc_pxo_pll8[] = {
@@ -156,10 +158,10 @@
"pll8_vote",
};
-static const u8 gcc_pxo_pll8_cxo_map[] = {
- [P_PXO] = 0,
- [P_PLL8] = 3,
- [P_CXO] = 5,
+static const struct parent_map gcc_pxo_pll8_cxo_map[] = {
+ { P_PXO, 0 },
+ { P_PLL8, 3 },
+ { P_CXO, 5 }
};
static const char *gcc_pxo_pll8_cxo[] = {
@@ -168,14 +170,14 @@
"cxo",
};
-static const u8 gcc_pxo_pll3_map[] = {
- [P_PXO] = 0,
- [P_PLL3] = 1,
+static const struct parent_map gcc_pxo_pll3_map[] = {
+ { P_PXO, 0 },
+ { P_PLL3, 1 }
};
-static const u8 gcc_pxo_pll3_sata_map[] = {
- [P_PXO] = 0,
- [P_PLL3] = 6,
+static const struct parent_map gcc_pxo_pll3_sata_map[] = {
+ { P_PXO, 0 },
+ { P_PLL3, 6 }
};
static const char *gcc_pxo_pll3[] = {
@@ -183,10 +185,10 @@
"pll3",
};
-static const u8 gcc_pxo_pll8_pll0[] = {
- [P_PXO] = 0,
- [P_PLL8] = 3,
- [P_PLL0] = 2,
+static const struct parent_map gcc_pxo_pll8_pll0[] = {
+ { P_PXO, 0 },
+ { P_PLL8, 3 },
+ { P_PLL0, 2 }
};
static const char *gcc_pxo_pll8_pll0_map[] = {
@@ -525,8 +527,8 @@
{ 10800000, P_PXO, 1, 2, 5 },
{ 15060000, P_PLL8, 1, 2, 51 },
{ 24000000, P_PLL8, 4, 1, 4 },
+ { 25000000, P_PXO, 1, 0, 0 },
{ 25600000, P_PLL8, 1, 1, 15 },
- { 27000000, P_PXO, 1, 0, 0 },
{ 48000000, P_PLL8, 4, 1, 2 },
{ 51200000, P_PLL8, 1, 2, 15 },
{ }
@@ -2170,6 +2172,36 @@
},
};
+static struct clk_branch ebi2_clk = {
+ .hwcg_reg = 0x3b00,
+ .hwcg_bit = 6,
+ .halt_reg = 0x2fcc,
+ .halt_bit = 1,
+ .clkr = {
+ .enable_reg = 0x3b00,
+ .enable_mask = BIT(4),
+ .hw.init = &(struct clk_init_data){
+ .name = "ebi2_clk",
+ .ops = &clk_branch_ops,
+ .flags = CLK_IS_ROOT,
+ },
+ },
+};
+
+static struct clk_branch ebi2_aon_clk = {
+ .halt_reg = 0x2fcc,
+ .halt_bit = 0,
+ .clkr = {
+ .enable_reg = 0x3b00,
+ .enable_mask = BIT(8),
+ .hw.init = &(struct clk_init_data){
+ .name = "ebi2_always_on_clk",
+ .ops = &clk_branch_ops,
+ .flags = CLK_IS_ROOT,
+ },
+ },
+};
+
static struct clk_regmap *gcc_ipq806x_clks[] = {
[PLL0] = &pll0.clkr,
[PLL0_VOTE] = &pll0_vote,
@@ -2273,6 +2305,8 @@
[USB_FS1_XCVR_SRC] = &usb_fs1_xcvr_clk_src.clkr,
[USB_FS1_XCVR_CLK] = &usb_fs1_xcvr_clk.clkr,
[USB_FS1_SYSTEM_CLK] = &usb_fs1_sys_clk.clkr,
+ [EBI2_CLK] = &ebi2_clk.clkr,
+ [EBI2_AON_CLK] = &ebi2_aon_clk.clkr,
};
static const struct qcom_reset_map gcc_ipq806x_resets[] = {
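The gcc-ipq806x.c hunk also adds two EBI2 branch clocks and replaces a 27 MHz PXO entry with 25 MHz in a frequency table. Branch clocks of this kind are plain gates: an enable bit in enable_reg plus a status bit in halt_reg that is polled after toggling. The stand-alone sketch below models that handshake with a fake register file; the helper name and the simulated hardware response are illustrative only, the real driver goes through regmap and clk_branch2_ops.

/* Toy model of the clk_branch enable/halt handshake (simulated registers). */
#include <stdio.h>
#include <stdbool.h>

static unsigned int regs[0x10000];                 /* fake register space */

static bool branch_enable(unsigned int enable_reg, unsigned int enable_mask,
			  unsigned int halt_reg, unsigned int halt_bit)
{
	regs[enable_reg] |= enable_mask;           /* set the enable bit */

	/* Pretend the hardware reacts: the halt bit clears once the branch runs. */
	regs[halt_reg] &= ~(1u << halt_bit);

	/* The driver polls this bit; 0 means the clock is ticking. */
	return !(regs[halt_reg] & (1u << halt_bit));
}

int main(void)
{
	/* Values taken from ebi2_clk above: enable 0x3b00 bit 4, halt 0x2fcc bit 1. */
	printf("ebi2_clk running: %s\n",
	       branch_enable(0x3b00, 1u << 4, 0x2fcc, 1) ? "yes" : "no");
	return 0;
}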
diff --git a/drivers/clk/qcom/gcc-msm8660.c b/drivers/clk/qcom/gcc-msm8660.c
index f366e68..fc6b12d 100644
--- a/drivers/clk/qcom/gcc-msm8660.c
+++ b/drivers/clk/qcom/gcc-msm8660.c
@@ -59,13 +59,15 @@
},
};
-#define P_PXO 0
-#define P_PLL8 1
-#define P_CXO 2
+enum {
+ P_PXO,
+ P_PLL8,
+ P_CXO,
+};
-static const u8 gcc_pxo_pll8_map[] = {
- [P_PXO] = 0,
- [P_PLL8] = 3,
+static const struct parent_map gcc_pxo_pll8_map[] = {
+ { P_PXO, 0 },
+ { P_PLL8, 3 }
};
static const char *gcc_pxo_pll8[] = {
@@ -73,10 +75,10 @@
"pll8_vote",
};
-static const u8 gcc_pxo_pll8_cxo_map[] = {
- [P_PXO] = 0,
- [P_PLL8] = 3,
- [P_CXO] = 5,
+static const struct parent_map gcc_pxo_pll8_cxo_map[] = {
+ { P_PXO, 0 },
+ { P_PLL8, 3 },
+ { P_CXO, 5 }
};
static const char *gcc_pxo_pll8_cxo[] = {
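The remainder of the series adds a new driver, gcc-msm8916.c, built from the same pieces: PLL and vote clocks, RCGs with parent_map tables, and branch gates. Its frequency tables use the F() macro defined near the top of the new file, which stores the pre-divider as 2*h - 1 so that half-integer dividers such as 4.5 fit an integer field; the RCG code then divides by (stored + 1) / 2. A worked, stand-alone example of that encoding, using an int field rather than the kernel's freq_tbl layout:

/* Worked example of the F() divider encoding used in gcc-msm8916.c. */
#include <stdio.h>

#define F(f, s, h, m, n) { (f), (s), (2 * (h) - 1), (m), (n) }

struct freq_tbl { unsigned long freq; int src; int pre_div; int m; int n; };

int main(void)
{
	/* 177.78 MHz from the 800 MHz GPLL0 with a divider of 4.5. */
	struct freq_tbl e = F(177780000, 1 /* P_GPLL0 */, 4.5, 0, 0);

	/* Stored value is 2 * 4.5 - 1 = 8; effective divider is (8 + 1) / 2 = 4.5. */
	printf("freq=%lu pre_div=%d divider=%.1f\n",
	       e.freq, e.pre_div, (e.pre_div + 1) / 2.0);
	return 0;
}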
diff --git a/drivers/clk/qcom/gcc-msm8916.c b/drivers/clk/qcom/gcc-msm8916.c
new file mode 100644
index 0000000..d345847
--- /dev/null
+++ b/drivers/clk/qcom/gcc-msm8916.c
@@ -0,0 +1,2868 @@
+/*
+ * Copyright 2015 Linaro Limited
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/kernel.h>
+#include <linux/bitops.h>
+#include <linux/err.h>
+#include <linux/platform_device.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/of_device.h>
+#include <linux/clk-provider.h>
+#include <linux/regmap.h>
+#include <linux/reset-controller.h>
+
+#include <dt-bindings/clock/qcom,gcc-msm8916.h>
+#include <dt-bindings/reset/qcom,gcc-msm8916.h>
+
+#include "common.h"
+#include "clk-regmap.h"
+#include "clk-pll.h"
+#include "clk-rcg.h"
+#include "clk-branch.h"
+#include "reset.h"
+
+enum {
+ P_XO,
+ P_GPLL0,
+ P_GPLL0_AUX,
+ P_BIMC,
+ P_GPLL1,
+ P_GPLL1_AUX,
+ P_GPLL2,
+ P_GPLL2_AUX,
+ P_SLEEP_CLK,
+ P_DSI0_PHYPLL_BYTE,
+ P_DSI0_PHYPLL_DSI,
+};
+
+static const struct parent_map gcc_xo_gpll0_map[] = {
+ { P_XO, 0 },
+ { P_GPLL0, 1 },
+};
+
+static const char *gcc_xo_gpll0[] = {
+ "xo",
+ "gpll0_vote",
+};
+
+static const struct parent_map gcc_xo_gpll0_bimc_map[] = {
+ { P_XO, 0 },
+ { P_GPLL0, 1 },
+ { P_BIMC, 2 },
+};
+
+static const char *gcc_xo_gpll0_bimc[] = {
+ "xo",
+ "gpll0_vote",
+ "bimc_pll_vote",
+};
+
+static const struct parent_map gcc_xo_gpll0a_gpll1_gpll2a_map[] = {
+ { P_XO, 0 },
+ { P_GPLL0_AUX, 3 },
+ { P_GPLL2_AUX, 2 },
+ { P_GPLL1, 1 },
+};
+
+static const char *gcc_xo_gpll0a_gpll1_gpll2a[] = {
+ "xo",
+ "gpll0_vote",
+ "gpll1_vote",
+ "gpll2_vote",
+};
+
+static const struct parent_map gcc_xo_gpll0_gpll2_map[] = {
+ { P_XO, 0 },
+ { P_GPLL0, 1 },
+ { P_GPLL2, 2 },
+};
+
+static const char *gcc_xo_gpll0_gpll2[] = {
+ "xo",
+ "gpll0_vote",
+ "gpll2_vote",
+};
+
+static const struct parent_map gcc_xo_gpll0a_map[] = {
+ { P_XO, 0 },
+ { P_GPLL0_AUX, 2 },
+};
+
+static const char *gcc_xo_gpll0a[] = {
+ "xo",
+ "gpll0_vote",
+};
+
+static const struct parent_map gcc_xo_gpll0_gpll1a_sleep_map[] = {
+ { P_XO, 0 },
+ { P_GPLL0, 1 },
+ { P_GPLL1_AUX, 2 },
+ { P_SLEEP_CLK, 6 },
+};
+
+static const char *gcc_xo_gpll0_gpll1a_sleep[] = {
+ "xo",
+ "gpll0_vote",
+ "gpll1_vote",
+ "sleep_clk",
+};
+
+static const struct parent_map gcc_xo_gpll0_gpll1a_map[] = {
+ { P_XO, 0 },
+ { P_GPLL0, 1 },
+ { P_GPLL1_AUX, 2 },
+};
+
+static const char *gcc_xo_gpll0_gpll1a[] = {
+ "xo",
+ "gpll0_vote",
+ "gpll1_vote",
+};
+
+static const struct parent_map gcc_xo_dsibyte_map[] = {
+ { P_XO, 0, },
+ { P_DSI0_PHYPLL_BYTE, 2 },
+};
+
+static const char *gcc_xo_dsibyte[] = {
+ "xo",
+ "dsi0pllbyte",
+};
+
+static const struct parent_map gcc_xo_gpll0a_dsibyte_map[] = {
+ { P_XO, 0 },
+ { P_GPLL0_AUX, 2 },
+ { P_DSI0_PHYPLL_BYTE, 1 },
+};
+
+static const char *gcc_xo_gpll0a_dsibyte[] = {
+ "xo",
+ "gpll0_vote",
+ "dsi0pllbyte",
+};
+
+static const struct parent_map gcc_xo_gpll0_dsiphy_map[] = {
+ { P_XO, 0 },
+ { P_GPLL0, 1 },
+ { P_DSI0_PHYPLL_DSI, 2 },
+};
+
+static const char *gcc_xo_gpll0_dsiphy[] = {
+ "xo",
+ "gpll0_vote",
+ "dsi0pll",
+};
+
+static const struct parent_map gcc_xo_gpll0a_dsiphy_map[] = {
+ { P_XO, 0 },
+ { P_GPLL0_AUX, 2 },
+ { P_DSI0_PHYPLL_DSI, 1 },
+};
+
+static const char *gcc_xo_gpll0a_dsiphy[] = {
+ "xo",
+ "gpll0_vote",
+ "dsi0pll",
+};
+
+static const struct parent_map gcc_xo_gpll0a_gpll1_gpll2_map[] = {
+ { P_XO, 0 },
+ { P_GPLL0_AUX, 1 },
+ { P_GPLL1, 3 },
+ { P_GPLL2, 2 },
+};
+
+static const char *gcc_xo_gpll0a_gpll1_gpll2[] = {
+ "xo",
+ "gpll0_vote",
+ "gpll1_vote",
+ "gpll2_vote",
+};
+
+#define F(f, s, h, m, n) { (f), (s), (2 * (h) - 1), (m), (n) }
+
+static struct clk_pll gpll0 = {
+ .l_reg = 0x21004,
+ .m_reg = 0x21008,
+ .n_reg = 0x2100c,
+ .config_reg = 0x21014,
+ .mode_reg = 0x21000,
+ .status_reg = 0x2101c,
+ .status_bit = 17,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "gpll0",
+ .parent_names = (const char *[]){ "xo" },
+ .num_parents = 1,
+ .ops = &clk_pll_ops,
+ },
+};
+
+static struct clk_regmap gpll0_vote = {
+ .enable_reg = 0x45000,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gpll0_vote",
+ .parent_names = (const char *[]){ "gpll0" },
+ .num_parents = 1,
+ .ops = &clk_pll_vote_ops,
+ },
+};
+
+static struct clk_pll gpll1 = {
+ .l_reg = 0x20004,
+ .m_reg = 0x20008,
+ .n_reg = 0x2000c,
+ .config_reg = 0x20014,
+ .mode_reg = 0x20000,
+ .status_reg = 0x2001c,
+ .status_bit = 17,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "gpll1",
+ .parent_names = (const char *[]){ "xo" },
+ .num_parents = 1,
+ .ops = &clk_pll_ops,
+ },
+};
+
+static struct clk_regmap gpll1_vote = {
+ .enable_reg = 0x45000,
+ .enable_mask = BIT(1),
+ .hw.init = &(struct clk_init_data){
+ .name = "gpll1_vote",
+ .parent_names = (const char *[]){ "gpll1" },
+ .num_parents = 1,
+ .ops = &clk_pll_vote_ops,
+ },
+};
+
+static struct clk_pll gpll2 = {
+ .l_reg = 0x4a004,
+ .m_reg = 0x4a008,
+ .n_reg = 0x4a00c,
+ .config_reg = 0x4a014,
+ .mode_reg = 0x4a000,
+ .status_reg = 0x4a01c,
+ .status_bit = 17,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "gpll2",
+ .parent_names = (const char *[]){ "xo" },
+ .num_parents = 1,
+ .ops = &clk_pll_ops,
+ },
+};
+
+static struct clk_regmap gpll2_vote = {
+ .enable_reg = 0x45000,
+ .enable_mask = BIT(2),
+ .hw.init = &(struct clk_init_data){
+ .name = "gpll2_vote",
+ .parent_names = (const char *[]){ "gpll2" },
+ .num_parents = 1,
+ .ops = &clk_pll_vote_ops,
+ },
+};
+
+static struct clk_pll bimc_pll = {
+ .l_reg = 0x23004,
+ .m_reg = 0x23008,
+ .n_reg = 0x2300c,
+ .config_reg = 0x23014,
+ .mode_reg = 0x23000,
+ .status_reg = 0x2301c,
+ .status_bit = 17,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "bimc_pll",
+ .parent_names = (const char *[]){ "xo" },
+ .num_parents = 1,
+ .ops = &clk_pll_ops,
+ },
+};
+
+static struct clk_regmap bimc_pll_vote = {
+ .enable_reg = 0x45000,
+ .enable_mask = BIT(3),
+ .hw.init = &(struct clk_init_data){
+ .name = "bimc_pll_vote",
+ .parent_names = (const char *[]){ "bimc_pll" },
+ .num_parents = 1,
+ .ops = &clk_pll_vote_ops,
+ },
+};
+
+static struct clk_rcg2 pcnoc_bfdcd_clk_src = {
+ .cmd_rcgr = 0x27000,
+ .hid_width = 5,
+ .parent_map = gcc_xo_gpll0_bimc_map,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "pcnoc_bfdcd_clk_src",
+ .parent_names = gcc_xo_gpll0_bimc,
+ .num_parents = 3,
+ .ops = &clk_rcg2_ops,
+ },
+};
+
+static struct clk_rcg2 system_noc_bfdcd_clk_src = {
+ .cmd_rcgr = 0x26004,
+ .hid_width = 5,
+ .parent_map = gcc_xo_gpll0_bimc_map,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "system_noc_bfdcd_clk_src",
+ .parent_names = gcc_xo_gpll0_bimc,
+ .num_parents = 3,
+ .ops = &clk_rcg2_ops,
+ },
+};
+
+static const struct freq_tbl ftbl_gcc_camss_ahb_clk[] = {
+ F(40000000, P_GPLL0, 10, 1, 2),
+ F(80000000, P_GPLL0, 10, 0, 0),
+ { }
+};
+
+static struct clk_rcg2 camss_ahb_clk_src = {
+ .cmd_rcgr = 0x5a000,
+ .mnd_width = 8,
+ .hid_width = 5,
+ .parent_map = gcc_xo_gpll0_map,
+ .freq_tbl = ftbl_gcc_camss_ahb_clk,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "camss_ahb_clk_src",
+ .parent_names = gcc_xo_gpll0,
+ .num_parents = 2,
+ .ops = &clk_rcg2_ops,
+ },
+};
+
+static const struct freq_tbl ftbl_apss_ahb_clk[] = {
+ F(19200000, P_XO, 1, 0, 0),
+ F(50000000, P_GPLL0, 16, 0, 0),
+ F(100000000, P_GPLL0, 8, 0, 0),
+ F(133330000, P_GPLL0, 6, 0, 0),
+ { }
+};
+
+static struct clk_rcg2 apss_ahb_clk_src = {
+ .cmd_rcgr = 0x46000,
+ .hid_width = 5,
+ .parent_map = gcc_xo_gpll0_map,
+ .freq_tbl = ftbl_apss_ahb_clk,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "apss_ahb_clk_src",
+ .parent_names = gcc_xo_gpll0,
+ .num_parents = 2,
+ .ops = &clk_rcg2_ops,
+ },
+};
+
+static const struct freq_tbl ftbl_gcc_camss_csi0_1_clk[] = {
+ F(100000000, P_GPLL0, 8, 0, 0),
+ F(200000000, P_GPLL0, 4, 0, 0),
+ { }
+};
+
+static struct clk_rcg2 csi0_clk_src = {
+ .cmd_rcgr = 0x4e020,
+ .hid_width = 5,
+ .parent_map = gcc_xo_gpll0_map,
+ .freq_tbl = ftbl_gcc_camss_csi0_1_clk,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "csi0_clk_src",
+ .parent_names = gcc_xo_gpll0,
+ .num_parents = 2,
+ .ops = &clk_rcg2_ops,
+ },
+};
+
+static struct clk_rcg2 csi1_clk_src = {
+ .cmd_rcgr = 0x4f020,
+ .hid_width = 5,
+ .parent_map = gcc_xo_gpll0_map,
+ .freq_tbl = ftbl_gcc_camss_csi0_1_clk,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "csi1_clk_src",
+ .parent_names = gcc_xo_gpll0,
+ .num_parents = 2,
+ .ops = &clk_rcg2_ops,
+ },
+};
+
+static const struct freq_tbl ftbl_gcc_oxili_gfx3d_clk[] = {
+ F(19200000, P_XO, 1, 0, 0),
+ F(50000000, P_GPLL0_AUX, 16, 0, 0),
+ F(80000000, P_GPLL0_AUX, 10, 0, 0),
+ F(100000000, P_GPLL0_AUX, 8, 0, 0),
+ F(160000000, P_GPLL0_AUX, 5, 0, 0),
+ F(177780000, P_GPLL0_AUX, 4.5, 0, 0),
+ F(200000000, P_GPLL0_AUX, 4, 0, 0),
+ F(266670000, P_GPLL0_AUX, 3, 0, 0),
+ F(294912000, P_GPLL1, 3, 0, 0),
+ F(310000000, P_GPLL2, 3, 0, 0),
+ F(400000000, P_GPLL0_AUX, 2, 0, 0),
+ { }
+};
+
+static struct clk_rcg2 gfx3d_clk_src = {
+ .cmd_rcgr = 0x59000,
+ .hid_width = 5,
+ .parent_map = gcc_xo_gpll0a_gpll1_gpll2a_map,
+ .freq_tbl = ftbl_gcc_oxili_gfx3d_clk,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "gfx3d_clk_src",
+ .parent_names = gcc_xo_gpll0a_gpll1_gpll2a,
+ .num_parents = 4,
+ .ops = &clk_rcg2_ops,
+ },
+};
+
+static const struct freq_tbl ftbl_gcc_camss_vfe0_clk[] = {
+ F(50000000, P_GPLL0, 16, 0, 0),
+ F(80000000, P_GPLL0, 10, 0, 0),
+ F(100000000, P_GPLL0, 8, 0, 0),
+ F(160000000, P_GPLL0, 5, 0, 0),
+ F(177780000, P_GPLL0, 4.5, 0, 0),
+ F(200000000, P_GPLL0, 4, 0, 0),
+ F(266670000, P_GPLL0, 3, 0, 0),
+ F(320000000, P_GPLL0, 2.5, 0, 0),
+ F(400000000, P_GPLL0, 2, 0, 0),
+ F(465000000, P_GPLL2, 2, 0, 0),
+ { }
+};
+
+static struct clk_rcg2 vfe0_clk_src = {
+ .cmd_rcgr = 0x58000,
+ .hid_width = 5,
+ .parent_map = gcc_xo_gpll0_gpll2_map,
+ .freq_tbl = ftbl_gcc_camss_vfe0_clk,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "vfe0_clk_src",
+ .parent_names = gcc_xo_gpll0_gpll2,
+ .num_parents = 3,
+ .ops = &clk_rcg2_ops,
+ },
+};
+
+static const struct freq_tbl ftbl_gcc_blsp1_qup1_6_i2c_apps_clk[] = {
+ F(19200000, P_XO, 1, 0, 0),
+ F(50000000, P_GPLL0, 16, 0, 0),
+ { }
+};
+
+static struct clk_rcg2 blsp1_qup1_i2c_apps_clk_src = {
+ .cmd_rcgr = 0x0200c,
+ .hid_width = 5,
+ .parent_map = gcc_xo_gpll0_map,
+ .freq_tbl = ftbl_gcc_blsp1_qup1_6_i2c_apps_clk,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "blsp1_qup1_i2c_apps_clk_src",
+ .parent_names = gcc_xo_gpll0,
+ .num_parents = 2,
+ .ops = &clk_rcg2_ops,
+ },
+};
+
+static const struct freq_tbl ftbl_gcc_blsp1_qup1_6_spi_apps_clk[] = {
+ F(960000, P_XO, 10, 1, 2),
+ F(4800000, P_XO, 4, 0, 0),
+ F(9600000, P_XO, 2, 0, 0),
+ F(16000000, P_GPLL0, 10, 1, 5),
+ F(19200000, P_XO, 1, 0, 0),
+ F(25000000, P_GPLL0, 16, 1, 2),
+ F(50000000, P_GPLL0, 16, 0, 0),
+ { }
+};
+
+static struct clk_rcg2 blsp1_qup1_spi_apps_clk_src = {
+ .cmd_rcgr = 0x02024,
+ .mnd_width = 8,
+ .hid_width = 5,
+ .parent_map = gcc_xo_gpll0_map,
+ .freq_tbl = ftbl_gcc_blsp1_qup1_6_spi_apps_clk,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "blsp1_qup1_spi_apps_clk_src",
+ .parent_names = gcc_xo_gpll0,
+ .num_parents = 2,
+ .ops = &clk_rcg2_ops,
+ },
+};
+
+static struct clk_rcg2 blsp1_qup2_i2c_apps_clk_src = {
+ .cmd_rcgr = 0x03000,
+ .hid_width = 5,
+ .parent_map = gcc_xo_gpll0_map,
+ .freq_tbl = ftbl_gcc_blsp1_qup1_6_i2c_apps_clk,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "blsp1_qup2_i2c_apps_clk_src",
+ .parent_names = gcc_xo_gpll0,
+ .num_parents = 2,
+ .ops = &clk_rcg2_ops,
+ },
+};
+
+static struct clk_rcg2 blsp1_qup2_spi_apps_clk_src = {
+ .cmd_rcgr = 0x03014,
+ .mnd_width = 8,
+ .hid_width = 5,
+ .parent_map = gcc_xo_gpll0_map,
+ .freq_tbl = ftbl_gcc_blsp1_qup1_6_spi_apps_clk,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "blsp1_qup2_spi_apps_clk_src",
+ .parent_names = gcc_xo_gpll0,
+ .num_parents = 2,
+ .ops = &clk_rcg2_ops,
+ },
+};
+
+static struct clk_rcg2 blsp1_qup3_i2c_apps_clk_src = {
+ .cmd_rcgr = 0x04000,
+ .hid_width = 5,
+ .parent_map = gcc_xo_gpll0_map,
+ .freq_tbl = ftbl_gcc_blsp1_qup1_6_i2c_apps_clk,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "blsp1_qup3_i2c_apps_clk_src",
+ .parent_names = gcc_xo_gpll0,
+ .num_parents = 2,
+ .ops = &clk_rcg2_ops,
+ },
+};
+
+static struct clk_rcg2 blsp1_qup3_spi_apps_clk_src = {
+ .cmd_rcgr = 0x04024,
+ .mnd_width = 8,
+ .hid_width = 5,
+ .parent_map = gcc_xo_gpll0_map,
+ .freq_tbl = ftbl_gcc_blsp1_qup1_6_spi_apps_clk,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "blsp1_qup3_spi_apps_clk_src",
+ .parent_names = gcc_xo_gpll0,
+ .num_parents = 2,
+ .ops = &clk_rcg2_ops,
+ },
+};
+
+static struct clk_rcg2 blsp1_qup4_i2c_apps_clk_src = {
+ .cmd_rcgr = 0x05000,
+ .hid_width = 5,
+ .parent_map = gcc_xo_gpll0_map,
+ .freq_tbl = ftbl_gcc_blsp1_qup1_6_i2c_apps_clk,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "blsp1_qup4_i2c_apps_clk_src",
+ .parent_names = gcc_xo_gpll0,
+ .num_parents = 2,
+ .ops = &clk_rcg2_ops,
+ },
+};
+
+static struct clk_rcg2 blsp1_qup4_spi_apps_clk_src = {
+ .cmd_rcgr = 0x05024,
+ .mnd_width = 8,
+ .hid_width = 5,
+ .parent_map = gcc_xo_gpll0_map,
+ .freq_tbl = ftbl_gcc_blsp1_qup1_6_spi_apps_clk,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "blsp1_qup4_spi_apps_clk_src",
+ .parent_names = gcc_xo_gpll0,
+ .num_parents = 2,
+ .ops = &clk_rcg2_ops,
+ },
+};
+
+static struct clk_rcg2 blsp1_qup5_i2c_apps_clk_src = {
+ .cmd_rcgr = 0x06000,
+ .hid_width = 5,
+ .parent_map = gcc_xo_gpll0_map,
+ .freq_tbl = ftbl_gcc_blsp1_qup1_6_i2c_apps_clk,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "blsp1_qup5_i2c_apps_clk_src",
+ .parent_names = gcc_xo_gpll0,
+ .num_parents = 2,
+ .ops = &clk_rcg2_ops,
+ },
+};
+
+static struct clk_rcg2 blsp1_qup5_spi_apps_clk_src = {
+ .cmd_rcgr = 0x06024,
+ .mnd_width = 8,
+ .hid_width = 5,
+ .parent_map = gcc_xo_gpll0_map,
+ .freq_tbl = ftbl_gcc_blsp1_qup1_6_spi_apps_clk,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "blsp1_qup5_spi_apps_clk_src",
+ .parent_names = gcc_xo_gpll0,
+ .num_parents = 2,
+ .ops = &clk_rcg2_ops,
+ },
+};
+
+static struct clk_rcg2 blsp1_qup6_i2c_apps_clk_src = {
+ .cmd_rcgr = 0x07000,
+ .hid_width = 5,
+ .parent_map = gcc_xo_gpll0_map,
+ .freq_tbl = ftbl_gcc_blsp1_qup1_6_i2c_apps_clk,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "blsp1_qup6_i2c_apps_clk_src",
+ .parent_names = gcc_xo_gpll0,
+ .num_parents = 2,
+ .ops = &clk_rcg2_ops,
+ },
+};
+
+static struct clk_rcg2 blsp1_qup6_spi_apps_clk_src = {
+ .cmd_rcgr = 0x07024,
+ .mnd_width = 8,
+ .hid_width = 5,
+ .parent_map = gcc_xo_gpll0_map,
+ .freq_tbl = ftbl_gcc_blsp1_qup1_6_spi_apps_clk,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "blsp1_qup6_spi_apps_clk_src",
+ .parent_names = gcc_xo_gpll0,
+ .num_parents = 2,
+ .ops = &clk_rcg2_ops,
+ },
+};
+
+static const struct freq_tbl ftbl_gcc_blsp1_uart1_6_apps_clk[] = {
+ F(3686400, P_GPLL0, 1, 72, 15625),
+ F(7372800, P_GPLL0, 1, 144, 15625),
+ F(14745600, P_GPLL0, 1, 288, 15625),
+ F(16000000, P_GPLL0, 10, 1, 5),
+ F(19200000, P_XO, 1, 0, 0),
+ F(24000000, P_GPLL0, 1, 3, 100),
+ F(25000000, P_GPLL0, 16, 1, 2),
+ F(32000000, P_GPLL0, 1, 1, 25),
+ F(40000000, P_GPLL0, 1, 1, 20),
+ F(46400000, P_GPLL0, 1, 29, 500),
+ F(48000000, P_GPLL0, 1, 3, 50),
+ F(51200000, P_GPLL0, 1, 8, 125),
+ F(56000000, P_GPLL0, 1, 7, 100),
+ F(58982400, P_GPLL0, 1, 1152, 15625),
+ F(60000000, P_GPLL0, 1, 3, 40),
+ { }
+};
+
+static struct clk_rcg2 blsp1_uart1_apps_clk_src = {
+ .cmd_rcgr = 0x02044,
+ .mnd_width = 16,
+ .hid_width = 5,
+ .parent_map = gcc_xo_gpll0_map,
+ .freq_tbl = ftbl_gcc_blsp1_uart1_6_apps_clk,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "blsp1_uart1_apps_clk_src",
+ .parent_names = gcc_xo_gpll0,
+ .num_parents = 2,
+ .ops = &clk_rcg2_ops,
+ },
+};
+
+static struct clk_rcg2 blsp1_uart2_apps_clk_src = {
+ .cmd_rcgr = 0x03034,
+ .mnd_width = 16,
+ .hid_width = 5,
+ .parent_map = gcc_xo_gpll0_map,
+ .freq_tbl = ftbl_gcc_blsp1_uart1_6_apps_clk,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "blsp1_uart2_apps_clk_src",
+ .parent_names = gcc_xo_gpll0,
+ .num_parents = 2,
+ .ops = &clk_rcg2_ops,
+ },
+};
+
+static const struct freq_tbl ftbl_gcc_camss_cci_clk[] = {
+ F(19200000, P_XO, 1, 0, 0),
+ { }
+};
+
+static struct clk_rcg2 cci_clk_src = {
+ .cmd_rcgr = 0x51000,
+ .mnd_width = 8,
+ .hid_width = 5,
+ .parent_map = gcc_xo_gpll0a_map,
+ .freq_tbl = ftbl_gcc_camss_cci_clk,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "cci_clk_src",
+ .parent_names = gcc_xo_gpll0a,
+ .num_parents = 2,
+ .ops = &clk_rcg2_ops,
+ },
+};
+
+static const struct freq_tbl ftbl_gcc_camss_gp0_1_clk[] = {
+ F(100000000, P_GPLL0, 8, 0, 0),
+ F(200000000, P_GPLL0, 4, 0, 0),
+ { }
+};
+
+static struct clk_rcg2 camss_gp0_clk_src = {
+ .cmd_rcgr = 0x54000,
+ .mnd_width = 8,
+ .hid_width = 5,
+ .parent_map = gcc_xo_gpll0_gpll1a_sleep_map,
+ .freq_tbl = ftbl_gcc_camss_gp0_1_clk,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "camss_gp0_clk_src",
+ .parent_names = gcc_xo_gpll0_gpll1a_sleep,
+ .num_parents = 4,
+ .ops = &clk_rcg2_ops,
+ },
+};
+
+static struct clk_rcg2 camss_gp1_clk_src = {
+ .cmd_rcgr = 0x55000,
+ .mnd_width = 8,
+ .hid_width = 5,
+ .parent_map = gcc_xo_gpll0_gpll1a_sleep_map,
+ .freq_tbl = ftbl_gcc_camss_gp0_1_clk,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "camss_gp1_clk_src",
+ .parent_names = gcc_xo_gpll0_gpll1a_sleep,
+ .num_parents = 4,
+ .ops = &clk_rcg2_ops,
+ },
+};
+
+static const struct freq_tbl ftbl_gcc_camss_jpeg0_clk[] = {
+ F(133330000, P_GPLL0, 6, 0, 0),
+ F(266670000, P_GPLL0, 3, 0, 0),
+ F(320000000, P_GPLL0, 2.5, 0, 0),
+ { }
+};
+
+static struct clk_rcg2 jpeg0_clk_src = {
+ .cmd_rcgr = 0x57000,
+ .hid_width = 5,
+ .parent_map = gcc_xo_gpll0_map,
+ .freq_tbl = ftbl_gcc_camss_jpeg0_clk,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "jpeg0_clk_src",
+ .parent_names = gcc_xo_gpll0,
+ .num_parents = 2,
+ .ops = &clk_rcg2_ops,
+ },
+};
+
+static const struct freq_tbl ftbl_gcc_camss_mclk0_1_clk[] = {
+ F(9600000, P_XO, 2, 0, 0),
+ F(23880000, P_GPLL0, 1, 2, 67),
+ F(66670000, P_GPLL0, 12, 0, 0),
+ { }
+};
+
+static struct clk_rcg2 mclk0_clk_src = {
+ .cmd_rcgr = 0x52000,
+ .mnd_width = 8,
+ .hid_width = 5,
+ .parent_map = gcc_xo_gpll0_gpll1a_sleep_map,
+ .freq_tbl = ftbl_gcc_camss_mclk0_1_clk,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "mclk0_clk_src",
+ .parent_names = gcc_xo_gpll0_gpll1a_sleep,
+ .num_parents = 4,
+ .ops = &clk_rcg2_ops,
+ },
+};
+
+static struct clk_rcg2 mclk1_clk_src = {
+ .cmd_rcgr = 0x53000,
+ .mnd_width = 8,
+ .hid_width = 5,
+ .parent_map = gcc_xo_gpll0_gpll1a_sleep_map,
+ .freq_tbl = ftbl_gcc_camss_mclk0_1_clk,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "mclk1_clk_src",
+ .parent_names = gcc_xo_gpll0_gpll1a_sleep,
+ .num_parents = 4,
+ .ops = &clk_rcg2_ops,
+ },
+};
+
+static const struct freq_tbl ftbl_gcc_camss_csi0_1phytimer_clk[] = {
+ F(100000000, P_GPLL0, 8, 0, 0),
+ F(200000000, P_GPLL0, 4, 0, 0),
+ { }
+};
+
+static struct clk_rcg2 csi0phytimer_clk_src = {
+ .cmd_rcgr = 0x4e000,
+ .hid_width = 5,
+ .parent_map = gcc_xo_gpll0_gpll1a_map,
+ .freq_tbl = ftbl_gcc_camss_csi0_1phytimer_clk,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "csi0phytimer_clk_src",
+ .parent_names = gcc_xo_gpll0_gpll1a,
+ .num_parents = 3,
+ .ops = &clk_rcg2_ops,
+ },
+};
+
+static struct clk_rcg2 csi1phytimer_clk_src = {
+ .cmd_rcgr = 0x4f000,
+ .hid_width = 5,
+ .parent_map = gcc_xo_gpll0_gpll1a_map,
+ .freq_tbl = ftbl_gcc_camss_csi0_1phytimer_clk,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "csi1phytimer_clk_src",
+ .parent_names = gcc_xo_gpll0_gpll1a,
+ .num_parents = 3,
+ .ops = &clk_rcg2_ops,
+ },
+};
+
+static const struct freq_tbl ftbl_gcc_camss_cpp_clk[] = {
+ F(160000000, P_GPLL0, 5, 0, 0),
+ F(320000000, P_GPLL0, 2.5, 0, 0),
+ F(465000000, P_GPLL2, 2, 0, 0),
+ { }
+};
+
+static struct clk_rcg2 cpp_clk_src = {
+ .cmd_rcgr = 0x58018,
+ .hid_width = 5,
+ .parent_map = gcc_xo_gpll0_gpll2_map,
+ .freq_tbl = ftbl_gcc_camss_cpp_clk,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "cpp_clk_src",
+ .parent_names = gcc_xo_gpll0_gpll2,
+ .num_parents = 3,
+ .ops = &clk_rcg2_ops,
+ },
+};
+
+static const struct freq_tbl ftbl_gcc_crypto_clk[] = {
+ F(50000000, P_GPLL0, 16, 0, 0),
+ F(80000000, P_GPLL0, 10, 0, 0),
+ F(100000000, P_GPLL0, 8, 0, 0),
+ F(160000000, P_GPLL0, 5, 0, 0),
+ { }
+};
+
+static struct clk_rcg2 crypto_clk_src = {
+ .cmd_rcgr = 0x16004,
+ .hid_width = 5,
+ .parent_map = gcc_xo_gpll0_map,
+ .freq_tbl = ftbl_gcc_crypto_clk,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "crypto_clk_src",
+ .parent_names = gcc_xo_gpll0,
+ .num_parents = 2,
+ .ops = &clk_rcg2_ops,
+ },
+};
+
+static const struct freq_tbl ftbl_gcc_gp1_3_clk[] = {
+ F(19200000, P_XO, 1, 0, 0),
+ { }
+};
+
+static struct clk_rcg2 gp1_clk_src = {
+ .cmd_rcgr = 0x08004,
+ .mnd_width = 8,
+ .hid_width = 5,
+ .parent_map = gcc_xo_gpll0_gpll1a_sleep_map,
+ .freq_tbl = ftbl_gcc_gp1_3_clk,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "gp1_clk_src",
+ .parent_names = gcc_xo_gpll0_gpll1a_sleep,
+ .num_parents = 3,
+ .ops = &clk_rcg2_ops,
+ },
+};
+
+static struct clk_rcg2 gp2_clk_src = {
+ .cmd_rcgr = 0x09004,
+ .mnd_width = 8,
+ .hid_width = 5,
+ .parent_map = gcc_xo_gpll0_gpll1a_sleep_map,
+ .freq_tbl = ftbl_gcc_gp1_3_clk,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "gp2_clk_src",
+ .parent_names = gcc_xo_gpll0_gpll1a_sleep,
+ .num_parents = 3,
+ .ops = &clk_rcg2_ops,
+ },
+};
+
+static struct clk_rcg2 gp3_clk_src = {
+ .cmd_rcgr = 0x0a004,
+ .mnd_width = 8,
+ .hid_width = 5,
+ .parent_map = gcc_xo_gpll0_gpll1a_sleep_map,
+ .freq_tbl = ftbl_gcc_gp1_3_clk,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "gp3_clk_src",
+ .parent_names = gcc_xo_gpll0_gpll1a_sleep,
+ .num_parents = 3,
+ .ops = &clk_rcg2_ops,
+ },
+};
+
+static struct freq_tbl ftbl_gcc_mdss_byte0_clk[] = {
+ { .src = P_DSI0_PHYPLL_BYTE },
+ { }
+};
+
+static struct clk_rcg2 byte0_clk_src = {
+ .cmd_rcgr = 0x4d044,
+ .hid_width = 5,
+ .parent_map = gcc_xo_gpll0a_dsibyte_map,
+ .freq_tbl = ftbl_gcc_mdss_byte0_clk,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "byte0_clk_src",
+ .parent_names = gcc_xo_gpll0a_dsibyte,
+ .num_parents = 3,
+ .ops = &clk_byte_ops,
+ .flags = CLK_SET_RATE_PARENT,
+ },
+};
+
+static const struct freq_tbl ftbl_gcc_mdss_esc0_clk[] = {
+ F(19200000, P_XO, 1, 0, 0),
+ { }
+};
+
+static struct clk_rcg2 esc0_clk_src = {
+ .cmd_rcgr = 0x4d05c,
+ .hid_width = 5,
+ .parent_map = gcc_xo_dsibyte_map,
+ .freq_tbl = ftbl_gcc_mdss_esc0_clk,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "esc0_clk_src",
+ .parent_names = gcc_xo_dsibyte,
+ .num_parents = 2,
+ .ops = &clk_rcg2_ops,
+ },
+};
+
+static const struct freq_tbl ftbl_gcc_mdss_mdp_clk[] = {
+ F(50000000, P_GPLL0, 16, 0, 0),
+ F(80000000, P_GPLL0, 10, 0, 0),
+ F(100000000, P_GPLL0, 8, 0, 0),
+ F(160000000, P_GPLL0, 5, 0, 0),
+ F(177780000, P_GPLL0, 4.5, 0, 0),
+ F(200000000, P_GPLL0, 4, 0, 0),
+ F(266670000, P_GPLL0, 3, 0, 0),
+ F(320000000, P_GPLL0, 2.5, 0, 0),
+ { }
+};
+
+static struct clk_rcg2 mdp_clk_src = {
+ .cmd_rcgr = 0x4d014,
+ .hid_width = 5,
+ .parent_map = gcc_xo_gpll0_dsiphy_map,
+ .freq_tbl = ftbl_gcc_mdss_mdp_clk,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "mdp_clk_src",
+ .parent_names = gcc_xo_gpll0_dsiphy,
+ .num_parents = 3,
+ .ops = &clk_rcg2_ops,
+ },
+};
+
+static struct freq_tbl ftbl_gcc_mdss_pclk[] = {
+ { .src = P_DSI0_PHYPLL_DSI },
+ { }
+};
+
+static struct clk_rcg2 pclk0_clk_src = {
+ .cmd_rcgr = 0x4d000,
+ .mnd_width = 8,
+ .hid_width = 5,
+ .parent_map = gcc_xo_gpll0a_dsiphy_map,
+ .freq_tbl = ftbl_gcc_mdss_pclk,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "pclk0_clk_src",
+ .parent_names = gcc_xo_gpll0a_dsiphy,
+ .num_parents = 3,
+ .ops = &clk_pixel_ops,
+ .flags = CLK_SET_RATE_PARENT,
+ },
+};
+
+static const struct freq_tbl ftbl_gcc_mdss_vsync_clk[] = {
+ F(19200000, P_XO, 1, 0, 0),
+ { }
+};
+
+static struct clk_rcg2 vsync_clk_src = {
+ .cmd_rcgr = 0x4d02c,
+ .hid_width = 5,
+ .parent_map = gcc_xo_gpll0a_map,
+ .freq_tbl = ftbl_gcc_mdss_vsync_clk,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "vsync_clk_src",
+ .parent_names = gcc_xo_gpll0a,
+ .num_parents = 2,
+ .ops = &clk_rcg2_ops,
+ },
+};
+
+static const struct freq_tbl ftbl_gcc_pdm2_clk[] = {
+ F(64000000, P_GPLL0, 12.5, 0, 0),
+ { }
+};
+
+static struct clk_rcg2 pdm2_clk_src = {
+ .cmd_rcgr = 0x44010,
+ .hid_width = 5,
+ .parent_map = gcc_xo_gpll0_map,
+ .freq_tbl = ftbl_gcc_pdm2_clk,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "pdm2_clk_src",
+ .parent_names = gcc_xo_gpll0,
+ .num_parents = 2,
+ .ops = &clk_rcg2_ops,
+ },
+};
+
+static const struct freq_tbl ftbl_gcc_sdcc1_apps_clk[] = {
+ F(144000, P_XO, 16, 3, 25),
+ F(400000, P_XO, 12, 1, 4),
+ F(20000000, P_GPLL0, 10, 1, 4),
+ F(25000000, P_GPLL0, 16, 1, 2),
+ F(50000000, P_GPLL0, 16, 0, 0),
+ F(100000000, P_GPLL0, 8, 0, 0),
+ F(177770000, P_GPLL0, 4.5, 0, 0),
+ { }
+};
+
+static struct clk_rcg2 sdcc1_apps_clk_src = {
+ .cmd_rcgr = 0x42004,
+ .mnd_width = 8,
+ .hid_width = 5,
+ .parent_map = gcc_xo_gpll0_map,
+ .freq_tbl = ftbl_gcc_sdcc1_apps_clk,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "sdcc1_apps_clk_src",
+ .parent_names = gcc_xo_gpll0,
+ .num_parents = 2,
+ .ops = &clk_rcg2_ops,
+ },
+};
+
+static const struct freq_tbl ftbl_gcc_sdcc2_apps_clk[] = {
+ F(144000, P_XO, 16, 3, 25),
+ F(400000, P_XO, 12, 1, 4),
+ F(20000000, P_GPLL0, 10, 1, 4),
+ F(25000000, P_GPLL0, 16, 1, 2),
+ F(50000000, P_GPLL0, 16, 0, 0),
+ F(100000000, P_GPLL0, 8, 0, 0),
+ F(200000000, P_GPLL0, 4, 0, 0),
+ { }
+};
+
+static struct clk_rcg2 sdcc2_apps_clk_src = {
+ .cmd_rcgr = 0x43004,
+ .mnd_width = 8,
+ .hid_width = 5,
+ .parent_map = gcc_xo_gpll0_map,
+ .freq_tbl = ftbl_gcc_sdcc2_apps_clk,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "sdcc2_apps_clk_src",
+ .parent_names = gcc_xo_gpll0,
+ .num_parents = 2,
+ .ops = &clk_rcg2_ops,
+ },
+};
+
+static const struct freq_tbl ftbl_gcc_apss_tcu_clk[] = {
+ F(155000000, P_GPLL2, 6, 0, 0),
+ F(310000000, P_GPLL2, 3, 0, 0),
+ F(400000000, P_GPLL0, 2, 0, 0),
+ { }
+};
+
+static struct clk_rcg2 apss_tcu_clk_src = {
+ .cmd_rcgr = 0x1207c,
+ .hid_width = 5,
+ .parent_map = gcc_xo_gpll0a_gpll1_gpll2_map,
+ .freq_tbl = ftbl_gcc_apss_tcu_clk,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "apss_tcu_clk_src",
+ .parent_names = gcc_xo_gpll0a_gpll1_gpll2,
+ .num_parents = 4,
+ .ops = &clk_rcg2_ops,
+ },
+};
+
+static const struct freq_tbl ftbl_gcc_usb_hs_system_clk[] = {
+ F(80000000, P_GPLL0, 10, 0, 0),
+ { }
+};
+
+static struct clk_rcg2 usb_hs_system_clk_src = {
+ .cmd_rcgr = 0x41010,
+ .hid_width = 5,
+ .parent_map = gcc_xo_gpll0_map,
+ .freq_tbl = ftbl_gcc_usb_hs_system_clk,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "usb_hs_system_clk_src",
+ .parent_names = gcc_xo_gpll0,
+ .num_parents = 2,
+ .ops = &clk_rcg2_ops,
+ },
+};
+
+static const struct freq_tbl ftbl_gcc_venus0_vcodec0_clk[] = {
+ F(100000000, P_GPLL0, 8, 0, 0),
+ F(160000000, P_GPLL0, 5, 0, 0),
+ F(228570000, P_GPLL0, 3.5, 0, 0),
+ { }
+};
+
+static struct clk_rcg2 vcodec0_clk_src = {
+ .cmd_rcgr = 0x4C000,
+ .mnd_width = 8,
+ .hid_width = 5,
+ .parent_map = gcc_xo_gpll0_map,
+ .freq_tbl = ftbl_gcc_venus0_vcodec0_clk,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "vcodec0_clk_src",
+ .parent_names = gcc_xo_gpll0,
+ .num_parents = 2,
+ .ops = &clk_rcg2_ops,
+ },
+};
+
+static struct clk_branch gcc_blsp1_ahb_clk = {
+ .halt_reg = 0x01008,
+ .halt_check = BRANCH_HALT_VOTED,
+ .clkr = {
+ .enable_reg = 0x45004,
+ .enable_mask = BIT(10),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_blsp1_ahb_clk",
+ .parent_names = (const char *[]){
+ "pcnoc_bfdcd_clk_src",
+ },
+ .num_parents = 1,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_blsp1_sleep_clk = {
+ .halt_reg = 0x01004,
+ .clkr = {
+ .enable_reg = 0x01004,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_blsp1_sleep_clk",
+ .parent_names = (const char *[]){
+ "sleep_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_blsp1_qup1_i2c_apps_clk = {
+ .halt_reg = 0x02008,
+ .clkr = {
+ .enable_reg = 0x02008,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_blsp1_qup1_i2c_apps_clk",
+ .parent_names = (const char *[]){
+ "blsp1_qup1_i2c_apps_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_blsp1_qup1_spi_apps_clk = {
+ .halt_reg = 0x02004,
+ .clkr = {
+ .enable_reg = 0x02004,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_blsp1_qup1_spi_apps_clk",
+ .parent_names = (const char *[]){
+ "blsp1_qup1_spi_apps_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_blsp1_qup2_i2c_apps_clk = {
+ .halt_reg = 0x03010,
+ .clkr = {
+ .enable_reg = 0x03010,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_blsp1_qup2_i2c_apps_clk",
+ .parent_names = (const char *[]){
+ "blsp1_qup2_i2c_apps_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_blsp1_qup2_spi_apps_clk = {
+ .halt_reg = 0x0300c,
+ .clkr = {
+ .enable_reg = 0x0300c,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_blsp1_qup2_spi_apps_clk",
+ .parent_names = (const char *[]){
+ "blsp1_qup2_spi_apps_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_blsp1_qup3_i2c_apps_clk = {
+ .halt_reg = 0x04020,
+ .clkr = {
+ .enable_reg = 0x04020,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_blsp1_qup3_i2c_apps_clk",
+ .parent_names = (const char *[]){
+ "blsp1_qup3_i2c_apps_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_blsp1_qup3_spi_apps_clk = {
+ .halt_reg = 0x0401c,
+ .clkr = {
+ .enable_reg = 0x0401c,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_blsp1_qup3_spi_apps_clk",
+ .parent_names = (const char *[]){
+ "blsp1_qup3_spi_apps_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_blsp1_qup4_i2c_apps_clk = {
+ .halt_reg = 0x05020,
+ .clkr = {
+ .enable_reg = 0x05020,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_blsp1_qup4_i2c_apps_clk",
+ .parent_names = (const char *[]){
+ "blsp1_qup4_i2c_apps_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_blsp1_qup4_spi_apps_clk = {
+ .halt_reg = 0x0501c,
+ .clkr = {
+ .enable_reg = 0x0501c,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_blsp1_qup4_spi_apps_clk",
+ .parent_names = (const char *[]){
+ "blsp1_qup4_spi_apps_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_blsp1_qup5_i2c_apps_clk = {
+ .halt_reg = 0x06020,
+ .clkr = {
+ .enable_reg = 0x06020,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_blsp1_qup5_i2c_apps_clk",
+ .parent_names = (const char *[]){
+ "blsp1_qup5_i2c_apps_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_blsp1_qup5_spi_apps_clk = {
+ .halt_reg = 0x0601c,
+ .clkr = {
+ .enable_reg = 0x0601c,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_blsp1_qup5_spi_apps_clk",
+ .parent_names = (const char *[]){
+ "blsp1_qup5_spi_apps_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_blsp1_qup6_i2c_apps_clk = {
+ .halt_reg = 0x07020,
+ .clkr = {
+ .enable_reg = 0x07020,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_blsp1_qup6_i2c_apps_clk",
+ .parent_names = (const char *[]){
+ "blsp1_qup6_i2c_apps_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_blsp1_qup6_spi_apps_clk = {
+ .halt_reg = 0x0701c,
+ .clkr = {
+ .enable_reg = 0x0701c,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_blsp1_qup6_spi_apps_clk",
+ .parent_names = (const char *[]){
+ "blsp1_qup6_spi_apps_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_blsp1_uart1_apps_clk = {
+ .halt_reg = 0x0203c,
+ .clkr = {
+ .enable_reg = 0x0203c,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_blsp1_uart1_apps_clk",
+ .parent_names = (const char *[]){
+ "blsp1_uart1_apps_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_blsp1_uart2_apps_clk = {
+ .halt_reg = 0x0302c,
+ .clkr = {
+ .enable_reg = 0x0302c,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_blsp1_uart2_apps_clk",
+ .parent_names = (const char *[]){
+ "blsp1_uart2_apps_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_boot_rom_ahb_clk = {
+ .halt_reg = 0x1300c,
+ .halt_check = BRANCH_HALT_VOTED,
+ .clkr = {
+ .enable_reg = 0x45004,
+ .enable_mask = BIT(7),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_boot_rom_ahb_clk",
+ .parent_names = (const char *[]){
+ "pcnoc_bfdcd_clk_src",
+ },
+ .num_parents = 1,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_camss_cci_ahb_clk = {
+ .halt_reg = 0x5101c,
+ .clkr = {
+ .enable_reg = 0x5101c,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_camss_cci_ahb_clk",
+ .parent_names = (const char *[]){
+ "camss_ahb_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_camss_cci_clk = {
+ .halt_reg = 0x51018,
+ .clkr = {
+ .enable_reg = 0x51018,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_camss_cci_clk",
+ .parent_names = (const char *[]){
+ "cci_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_camss_csi0_ahb_clk = {
+ .halt_reg = 0x4e040,
+ .clkr = {
+ .enable_reg = 0x4e040,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_camss_csi0_ahb_clk",
+ .parent_names = (const char *[]){
+ "camss_ahb_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_camss_csi0_clk = {
+ .halt_reg = 0x4e03c,
+ .clkr = {
+ .enable_reg = 0x4e03c,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_camss_csi0_clk",
+ .parent_names = (const char *[]){
+ "csi0_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_camss_csi0phy_clk = {
+ .halt_reg = 0x4e048,
+ .clkr = {
+ .enable_reg = 0x4e048,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_camss_csi0phy_clk",
+ .parent_names = (const char *[]){
+ "csi0_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_camss_csi0pix_clk = {
+ .halt_reg = 0x4e058,
+ .clkr = {
+ .enable_reg = 0x4e058,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_camss_csi0pix_clk",
+ .parent_names = (const char *[]){
+ "csi0_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_camss_csi0rdi_clk = {
+ .halt_reg = 0x4e050,
+ .clkr = {
+ .enable_reg = 0x4e050,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_camss_csi0rdi_clk",
+ .parent_names = (const char *[]){
+ "csi0_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_camss_csi1_ahb_clk = {
+ .halt_reg = 0x4f040,
+ .clkr = {
+ .enable_reg = 0x4f040,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_camss_csi1_ahb_clk",
+ .parent_names = (const char *[]){
+ "camss_ahb_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_camss_csi1_clk = {
+ .halt_reg = 0x4f03c,
+ .clkr = {
+ .enable_reg = 0x4f03c,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_camss_csi1_clk",
+ .parent_names = (const char *[]){
+ "csi1_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_camss_csi1phy_clk = {
+ .halt_reg = 0x4f048,
+ .clkr = {
+ .enable_reg = 0x4f048,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_camss_csi1phy_clk",
+ .parent_names = (const char *[]){
+ "csi1_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_camss_csi1pix_clk = {
+ .halt_reg = 0x4f058,
+ .clkr = {
+ .enable_reg = 0x4f058,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_camss_csi1pix_clk",
+ .parent_names = (const char *[]){
+ "csi1_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_camss_csi1rdi_clk = {
+ .halt_reg = 0x4f050,
+ .clkr = {
+ .enable_reg = 0x4f050,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_camss_csi1rdi_clk",
+ .parent_names = (const char *[]){
+ "csi1_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_camss_csi_vfe0_clk = {
+ .halt_reg = 0x58050,
+ .clkr = {
+ .enable_reg = 0x58050,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_camss_csi_vfe0_clk",
+ .parent_names = (const char *[]){
+ "vfe0_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_camss_gp0_clk = {
+ .halt_reg = 0x54018,
+ .clkr = {
+ .enable_reg = 0x54018,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_camss_gp0_clk",
+ .parent_names = (const char *[]){
+ "camss_gp0_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_camss_gp1_clk = {
+ .halt_reg = 0x55018,
+ .clkr = {
+ .enable_reg = 0x55018,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_camss_gp1_clk",
+ .parent_names = (const char *[]){
+ "camss_gp1_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_camss_ispif_ahb_clk = {
+ .halt_reg = 0x50004,
+ .clkr = {
+ .enable_reg = 0x50004,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_camss_ispif_ahb_clk",
+ .parent_names = (const char *[]){
+ "camss_ahb_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_camss_jpeg0_clk = {
+ .halt_reg = 0x57020,
+ .clkr = {
+ .enable_reg = 0x57020,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_camss_jpeg0_clk",
+ .parent_names = (const char *[]){
+ "jpeg0_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_camss_jpeg_ahb_clk = {
+ .halt_reg = 0x57024,
+ .clkr = {
+ .enable_reg = 0x57024,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_camss_jpeg_ahb_clk",
+ .parent_names = (const char *[]){
+ "camss_ahb_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_camss_jpeg_axi_clk = {
+ .halt_reg = 0x57028,
+ .clkr = {
+ .enable_reg = 0x57028,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_camss_jpeg_axi_clk",
+ .parent_names = (const char *[]){
+ "system_noc_bfdcd_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_camss_mclk0_clk = {
+ .halt_reg = 0x52018,
+ .clkr = {
+ .enable_reg = 0x52018,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_camss_mclk0_clk",
+ .parent_names = (const char *[]){
+ "mclk0_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_camss_mclk1_clk = {
+ .halt_reg = 0x53018,
+ .clkr = {
+ .enable_reg = 0x53018,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_camss_mclk1_clk",
+ .parent_names = (const char *[]){
+ "mclk1_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_camss_micro_ahb_clk = {
+ .halt_reg = 0x5600c,
+ .clkr = {
+ .enable_reg = 0x5600c,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_camss_micro_ahb_clk",
+ .parent_names = (const char *[]){
+ "camss_ahb_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_camss_csi0phytimer_clk = {
+ .halt_reg = 0x4e01c,
+ .clkr = {
+ .enable_reg = 0x4e01c,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_camss_csi0phytimer_clk",
+ .parent_names = (const char *[]){
+ "csi0phytimer_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_camss_csi1phytimer_clk = {
+ .halt_reg = 0x4f01c,
+ .clkr = {
+ .enable_reg = 0x4f01c,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_camss_csi1phytimer_clk",
+ .parent_names = (const char *[]){
+ "csi1phytimer_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_camss_ahb_clk = {
+ .halt_reg = 0x5a014,
+ .clkr = {
+ .enable_reg = 0x5a014,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_camss_ahb_clk",
+ .parent_names = (const char *[]){
+ "camss_ahb_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_camss_top_ahb_clk = {
+ .halt_reg = 0x56004,
+ .clkr = {
+ .enable_reg = 0x56004,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_camss_top_ahb_clk",
+ .parent_names = (const char *[]){
+ "pcnoc_bfdcd_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_camss_cpp_ahb_clk = {
+ .halt_reg = 0x58040,
+ .clkr = {
+ .enable_reg = 0x58040,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_camss_cpp_ahb_clk",
+ .parent_names = (const char *[]){
+ "camss_ahb_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_camss_cpp_clk = {
+ .halt_reg = 0x5803c,
+ .clkr = {
+ .enable_reg = 0x5803c,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_camss_cpp_clk",
+ .parent_names = (const char *[]){
+ "cpp_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_camss_vfe0_clk = {
+ .halt_reg = 0x58038,
+ .clkr = {
+ .enable_reg = 0x58038,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_camss_vfe0_clk",
+ .parent_names = (const char *[]){
+ "vfe0_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_camss_vfe_ahb_clk = {
+ .halt_reg = 0x58044,
+ .clkr = {
+ .enable_reg = 0x58044,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_camss_vfe_ahb_clk",
+ .parent_names = (const char *[]){
+ "camss_ahb_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_camss_vfe_axi_clk = {
+ .halt_reg = 0x58048,
+ .clkr = {
+ .enable_reg = 0x58048,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_camss_vfe_axi_clk",
+ .parent_names = (const char *[]){
+ "system_noc_bfdcd_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_crypto_ahb_clk = {
+ .halt_reg = 0x16024,
+ .halt_check = BRANCH_HALT_VOTED,
+ .clkr = {
+ .enable_reg = 0x45004,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_crypto_ahb_clk",
+ .parent_names = (const char *[]){
+ "pcnoc_bfdcd_clk_src",
+ },
+ .num_parents = 1,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_crypto_axi_clk = {
+ .halt_reg = 0x16020,
+ .halt_check = BRANCH_HALT_VOTED,
+ .clkr = {
+ .enable_reg = 0x45004,
+ .enable_mask = BIT(1),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_crypto_axi_clk",
+ .parent_names = (const char *[]){
+ "pcnoc_bfdcd_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_crypto_clk = {
+ .halt_reg = 0x1601c,
+ .halt_check = BRANCH_HALT_VOTED,
+ .clkr = {
+ .enable_reg = 0x45004,
+ .enable_mask = BIT(2),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_crypto_clk",
+ .parent_names = (const char *[]){
+ "crypto_clk_src",
+ },
+ .num_parents = 1,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_oxili_gmem_clk = {
+ .halt_reg = 0x59024,
+ .clkr = {
+ .enable_reg = 0x59024,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_oxili_gmem_clk",
+ .parent_names = (const char *[]){
+ "gfx3d_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_gp1_clk = {
+ .halt_reg = 0x08000,
+ .clkr = {
+ .enable_reg = 0x08000,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_gp1_clk",
+ .parent_names = (const char *[]){
+ "gp1_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_gp2_clk = {
+ .halt_reg = 0x09000,
+ .clkr = {
+ .enable_reg = 0x09000,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_gp2_clk",
+ .parent_names = (const char *[]){
+ "gp2_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_gp3_clk = {
+ .halt_reg = 0x0a000,
+ .clkr = {
+ .enable_reg = 0x0a000,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_gp3_clk",
+ .parent_names = (const char *[]){
+ "gp3_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_mdss_ahb_clk = {
+ .halt_reg = 0x4d07c,
+ .clkr = {
+ .enable_reg = 0x4d07c,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_mdss_ahb_clk",
+ .parent_names = (const char *[]){
+ "pcnoc_bfdcd_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_mdss_axi_clk = {
+ .halt_reg = 0x4d080,
+ .clkr = {
+ .enable_reg = 0x4d080,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_mdss_axi_clk",
+ .parent_names = (const char *[]){
+ "system_noc_bfdcd_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_mdss_byte0_clk = {
+ .halt_reg = 0x4d094,
+ .clkr = {
+ .enable_reg = 0x4d094,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_mdss_byte0_clk",
+ .parent_names = (const char *[]){
+ "byte0_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_mdss_esc0_clk = {
+ .halt_reg = 0x4d098,
+ .clkr = {
+ .enable_reg = 0x4d098,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_mdss_esc0_clk",
+ .parent_names = (const char *[]){
+ "esc0_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_mdss_mdp_clk = {
+ .halt_reg = 0x4d088,
+ .clkr = {
+ .enable_reg = 0x4d088,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_mdss_mdp_clk",
+ .parent_names = (const char *[]){
+ "mdp_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_mdss_pclk0_clk = {
+ .halt_reg = 0x4d084,
+ .clkr = {
+ .enable_reg = 0x4d084,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_mdss_pclk0_clk",
+ .parent_names = (const char *[]){
+ "pclk0_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_mdss_vsync_clk = {
+ .halt_reg = 0x4d090,
+ .clkr = {
+ .enable_reg = 0x4d090,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_mdss_vsync_clk",
+ .parent_names = (const char *[]){
+ "vsync_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_mss_cfg_ahb_clk = {
+ .halt_reg = 0x49000,
+ .clkr = {
+ .enable_reg = 0x49000,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_mss_cfg_ahb_clk",
+ .parent_names = (const char *[]){
+ "pcnoc_bfdcd_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_oxili_ahb_clk = {
+ .halt_reg = 0x59028,
+ .clkr = {
+ .enable_reg = 0x59028,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_oxili_ahb_clk",
+ .parent_names = (const char *[]){
+ "pcnoc_bfdcd_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_oxili_gfx3d_clk = {
+ .halt_reg = 0x59020,
+ .clkr = {
+ .enable_reg = 0x59020,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_oxili_gfx3d_clk",
+ .parent_names = (const char *[]){
+ "gfx3d_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_pdm2_clk = {
+ .halt_reg = 0x4400c,
+ .clkr = {
+ .enable_reg = 0x4400c,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_pdm2_clk",
+ .parent_names = (const char *[]){
+ "pdm2_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_pdm_ahb_clk = {
+ .halt_reg = 0x44004,
+ .clkr = {
+ .enable_reg = 0x44004,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_pdm_ahb_clk",
+ .parent_names = (const char *[]){
+ "pcnoc_bfdcd_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_prng_ahb_clk = {
+ .halt_reg = 0x13004,
+ .halt_check = BRANCH_HALT_VOTED,
+ .clkr = {
+ .enable_reg = 0x45004,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_prng_ahb_clk",
+ .parent_names = (const char *[]){
+ "pcnoc_bfdcd_clk_src",
+ },
+ .num_parents = 1,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_sdcc1_ahb_clk = {
+ .halt_reg = 0x4201c,
+ .clkr = {
+ .enable_reg = 0x4201c,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_sdcc1_ahb_clk",
+ .parent_names = (const char *[]){
+ "pcnoc_bfdcd_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_sdcc1_apps_clk = {
+ .halt_reg = 0x42018,
+ .clkr = {
+ .enable_reg = 0x42018,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_sdcc1_apps_clk",
+ .parent_names = (const char *[]){
+ "sdcc1_apps_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_sdcc2_ahb_clk = {
+ .halt_reg = 0x4301c,
+ .clkr = {
+ .enable_reg = 0x4301c,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_sdcc2_ahb_clk",
+ .parent_names = (const char *[]){
+ "pcnoc_bfdcd_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_sdcc2_apps_clk = {
+ .halt_reg = 0x43018,
+ .clkr = {
+ .enable_reg = 0x43018,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_sdcc2_apps_clk",
+ .parent_names = (const char *[]){
+ "sdcc2_apps_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_gtcu_ahb_clk = {
+ .halt_reg = 0x12044,
+ .clkr = {
+ .enable_reg = 0x4500c,
+ .enable_mask = BIT(13),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_gtcu_ahb_clk",
+ .parent_names = (const char *[]){
+ "pcnoc_bfdcd_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_jpeg_tbu_clk = {
+ .halt_reg = 0x12034,
+ .clkr = {
+ .enable_reg = 0x4500c,
+ .enable_mask = BIT(10),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_jpeg_tbu_clk",
+ .parent_names = (const char *[]){
+ "system_noc_bfdcd_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_mdp_tbu_clk = {
+ .halt_reg = 0x1201c,
+ .clkr = {
+ .enable_reg = 0x4500c,
+ .enable_mask = BIT(4),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_mdp_tbu_clk",
+ .parent_names = (const char *[]){
+ "system_noc_bfdcd_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_smmu_cfg_clk = {
+ .halt_reg = 0x12038,
+ .clkr = {
+ .enable_reg = 0x4500c,
+ .enable_mask = BIT(12),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_smmu_cfg_clk",
+ .parent_names = (const char *[]){
+ "pcnoc_bfdcd_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_venus_tbu_clk = {
+ .halt_reg = 0x12014,
+ .clkr = {
+ .enable_reg = 0x4500c,
+ .enable_mask = BIT(5),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_venus_tbu_clk",
+ .parent_names = (const char *[]){
+ "system_noc_bfdcd_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_vfe_tbu_clk = {
+ .halt_reg = 0x1203c,
+ .clkr = {
+ .enable_reg = 0x4500c,
+ .enable_mask = BIT(9),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_vfe_tbu_clk",
+ .parent_names = (const char *[]){
+ "system_noc_bfdcd_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_usb2a_phy_sleep_clk = {
+ .halt_reg = 0x4102c,
+ .clkr = {
+ .enable_reg = 0x4102c,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_usb2a_phy_sleep_clk",
+ .parent_names = (const char *[]){
+ "sleep_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_usb_hs_ahb_clk = {
+ .halt_reg = 0x41008,
+ .clkr = {
+ .enable_reg = 0x41008,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_usb_hs_ahb_clk",
+ .parent_names = (const char *[]){
+ "pcnoc_bfdcd_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_usb_hs_system_clk = {
+ .halt_reg = 0x41004,
+ .clkr = {
+ .enable_reg = 0x41004,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_usb_hs_system_clk",
+ .parent_names = (const char *[]){
+ "usb_hs_system_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_venus0_ahb_clk = {
+ .halt_reg = 0x4c020,
+ .clkr = {
+ .enable_reg = 0x4c020,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_venus0_ahb_clk",
+ .parent_names = (const char *[]){
+ "pcnoc_bfdcd_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_venus0_axi_clk = {
+ .halt_reg = 0x4c024,
+ .clkr = {
+ .enable_reg = 0x4c024,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_venus0_axi_clk",
+ .parent_names = (const char *[]){
+ "system_noc_bfdcd_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_branch gcc_venus0_vcodec0_clk = {
+ .halt_reg = 0x4c01c,
+ .clkr = {
+ .enable_reg = 0x4c01c,
+ .enable_mask = BIT(0),
+ .hw.init = &(struct clk_init_data){
+ .name = "gcc_venus0_vcodec0_clk",
+ .parent_names = (const char *[]){
+ "vcodec0_clk_src",
+ },
+ .num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
+ .ops = &clk_branch2_ops,
+ },
+ },
+};
+
+static struct clk_regmap *gcc_msm8916_clocks[] = {
+ [GPLL0] = &gpll0.clkr,
+ [GPLL0_VOTE] = &gpll0_vote,
+ [BIMC_PLL] = &bimc_pll.clkr,
+ [BIMC_PLL_VOTE] = &bimc_pll_vote,
+ [GPLL1] = &gpll1.clkr,
+ [GPLL1_VOTE] = &gpll1_vote,
+ [GPLL2] = &gpll2.clkr,
+ [GPLL2_VOTE] = &gpll2_vote,
+ [PCNOC_BFDCD_CLK_SRC] = &pcnoc_bfdcd_clk_src.clkr,
+ [SYSTEM_NOC_BFDCD_CLK_SRC] = &system_noc_bfdcd_clk_src.clkr,
+ [CAMSS_AHB_CLK_SRC] = &camss_ahb_clk_src.clkr,
+ [APSS_AHB_CLK_SRC] = &apss_ahb_clk_src.clkr,
+ [CSI0_CLK_SRC] = &csi0_clk_src.clkr,
+ [CSI1_CLK_SRC] = &csi1_clk_src.clkr,
+ [GFX3D_CLK_SRC] = &gfx3d_clk_src.clkr,
+ [VFE0_CLK_SRC] = &vfe0_clk_src.clkr,
+ [BLSP1_QUP1_I2C_APPS_CLK_SRC] = &blsp1_qup1_i2c_apps_clk_src.clkr,
+ [BLSP1_QUP1_SPI_APPS_CLK_SRC] = &blsp1_qup1_spi_apps_clk_src.clkr,
+ [BLSP1_QUP2_I2C_APPS_CLK_SRC] = &blsp1_qup2_i2c_apps_clk_src.clkr,
+ [BLSP1_QUP2_SPI_APPS_CLK_SRC] = &blsp1_qup2_spi_apps_clk_src.clkr,
+ [BLSP1_QUP3_I2C_APPS_CLK_SRC] = &blsp1_qup3_i2c_apps_clk_src.clkr,
+ [BLSP1_QUP3_SPI_APPS_CLK_SRC] = &blsp1_qup3_spi_apps_clk_src.clkr,
+ [BLSP1_QUP4_I2C_APPS_CLK_SRC] = &blsp1_qup4_i2c_apps_clk_src.clkr,
+ [BLSP1_QUP4_SPI_APPS_CLK_SRC] = &blsp1_qup4_spi_apps_clk_src.clkr,
+ [BLSP1_QUP5_I2C_APPS_CLK_SRC] = &blsp1_qup5_i2c_apps_clk_src.clkr,
+ [BLSP1_QUP5_SPI_APPS_CLK_SRC] = &blsp1_qup5_spi_apps_clk_src.clkr,
+ [BLSP1_QUP6_I2C_APPS_CLK_SRC] = &blsp1_qup6_i2c_apps_clk_src.clkr,
+ [BLSP1_QUP6_SPI_APPS_CLK_SRC] = &blsp1_qup6_spi_apps_clk_src.clkr,
+ [BLSP1_UART1_APPS_CLK_SRC] = &blsp1_uart1_apps_clk_src.clkr,
+ [BLSP1_UART2_APPS_CLK_SRC] = &blsp1_uart2_apps_clk_src.clkr,
+ [CCI_CLK_SRC] = &cci_clk_src.clkr,
+ [CAMSS_GP0_CLK_SRC] = &camss_gp0_clk_src.clkr,
+ [CAMSS_GP1_CLK_SRC] = &camss_gp1_clk_src.clkr,
+ [JPEG0_CLK_SRC] = &jpeg0_clk_src.clkr,
+ [MCLK0_CLK_SRC] = &mclk0_clk_src.clkr,
+ [MCLK1_CLK_SRC] = &mclk1_clk_src.clkr,
+ [CSI0PHYTIMER_CLK_SRC] = &csi0phytimer_clk_src.clkr,
+ [CSI1PHYTIMER_CLK_SRC] = &csi1phytimer_clk_src.clkr,
+ [CPP_CLK_SRC] = &cpp_clk_src.clkr,
+ [CRYPTO_CLK_SRC] = &crypto_clk_src.clkr,
+ [GP1_CLK_SRC] = &gp1_clk_src.clkr,
+ [GP2_CLK_SRC] = &gp2_clk_src.clkr,
+ [GP3_CLK_SRC] = &gp3_clk_src.clkr,
+ [BYTE0_CLK_SRC] = &byte0_clk_src.clkr,
+ [ESC0_CLK_SRC] = &esc0_clk_src.clkr,
+ [MDP_CLK_SRC] = &mdp_clk_src.clkr,
+ [PCLK0_CLK_SRC] = &pclk0_clk_src.clkr,
+ [VSYNC_CLK_SRC] = &vsync_clk_src.clkr,
+ [PDM2_CLK_SRC] = &pdm2_clk_src.clkr,
+ [SDCC1_APPS_CLK_SRC] = &sdcc1_apps_clk_src.clkr,
+ [SDCC2_APPS_CLK_SRC] = &sdcc2_apps_clk_src.clkr,
+ [APSS_TCU_CLK_SRC] = &apss_tcu_clk_src.clkr,
+ [USB_HS_SYSTEM_CLK_SRC] = &usb_hs_system_clk_src.clkr,
+ [VCODEC0_CLK_SRC] = &vcodec0_clk_src.clkr,
+ [GCC_BLSP1_AHB_CLK] = &gcc_blsp1_ahb_clk.clkr,
+ [GCC_BLSP1_SLEEP_CLK] = &gcc_blsp1_sleep_clk.clkr,
+ [GCC_BLSP1_QUP1_I2C_APPS_CLK] = &gcc_blsp1_qup1_i2c_apps_clk.clkr,
+ [GCC_BLSP1_QUP1_SPI_APPS_CLK] = &gcc_blsp1_qup1_spi_apps_clk.clkr,
+ [GCC_BLSP1_QUP2_I2C_APPS_CLK] = &gcc_blsp1_qup2_i2c_apps_clk.clkr,
+ [GCC_BLSP1_QUP2_SPI_APPS_CLK] = &gcc_blsp1_qup2_spi_apps_clk.clkr,
+ [GCC_BLSP1_QUP3_I2C_APPS_CLK] = &gcc_blsp1_qup3_i2c_apps_clk.clkr,
+ [GCC_BLSP1_QUP3_SPI_APPS_CLK] = &gcc_blsp1_qup3_spi_apps_clk.clkr,
+ [GCC_BLSP1_QUP4_I2C_APPS_CLK] = &gcc_blsp1_qup4_i2c_apps_clk.clkr,
+ [GCC_BLSP1_QUP4_SPI_APPS_CLK] = &gcc_blsp1_qup4_spi_apps_clk.clkr,
+ [GCC_BLSP1_QUP5_I2C_APPS_CLK] = &gcc_blsp1_qup5_i2c_apps_clk.clkr,
+ [GCC_BLSP1_QUP5_SPI_APPS_CLK] = &gcc_blsp1_qup5_spi_apps_clk.clkr,
+ [GCC_BLSP1_QUP6_I2C_APPS_CLK] = &gcc_blsp1_qup6_i2c_apps_clk.clkr,
+ [GCC_BLSP1_QUP6_SPI_APPS_CLK] = &gcc_blsp1_qup6_spi_apps_clk.clkr,
+ [GCC_BLSP1_UART1_APPS_CLK] = &gcc_blsp1_uart1_apps_clk.clkr,
+ [GCC_BLSP1_UART2_APPS_CLK] = &gcc_blsp1_uart2_apps_clk.clkr,
+ [GCC_BOOT_ROM_AHB_CLK] = &gcc_boot_rom_ahb_clk.clkr,
+ [GCC_CAMSS_CCI_AHB_CLK] = &gcc_camss_cci_ahb_clk.clkr,
+ [GCC_CAMSS_CCI_CLK] = &gcc_camss_cci_clk.clkr,
+ [GCC_CAMSS_CSI0_AHB_CLK] = &gcc_camss_csi0_ahb_clk.clkr,
+ [GCC_CAMSS_CSI0_CLK] = &gcc_camss_csi0_clk.clkr,
+ [GCC_CAMSS_CSI0PHY_CLK] = &gcc_camss_csi0phy_clk.clkr,
+ [GCC_CAMSS_CSI0PIX_CLK] = &gcc_camss_csi0pix_clk.clkr,
+ [GCC_CAMSS_CSI0RDI_CLK] = &gcc_camss_csi0rdi_clk.clkr,
+ [GCC_CAMSS_CSI1_AHB_CLK] = &gcc_camss_csi1_ahb_clk.clkr,
+ [GCC_CAMSS_CSI1_CLK] = &gcc_camss_csi1_clk.clkr,
+ [GCC_CAMSS_CSI1PHY_CLK] = &gcc_camss_csi1phy_clk.clkr,
+ [GCC_CAMSS_CSI1PIX_CLK] = &gcc_camss_csi1pix_clk.clkr,
+ [GCC_CAMSS_CSI1RDI_CLK] = &gcc_camss_csi1rdi_clk.clkr,
+ [GCC_CAMSS_CSI_VFE0_CLK] = &gcc_camss_csi_vfe0_clk.clkr,
+ [GCC_CAMSS_GP0_CLK] = &gcc_camss_gp0_clk.clkr,
+ [GCC_CAMSS_GP1_CLK] = &gcc_camss_gp1_clk.clkr,
+ [GCC_CAMSS_ISPIF_AHB_CLK] = &gcc_camss_ispif_ahb_clk.clkr,
+ [GCC_CAMSS_JPEG0_CLK] = &gcc_camss_jpeg0_clk.clkr,
+ [GCC_CAMSS_JPEG_AHB_CLK] = &gcc_camss_jpeg_ahb_clk.clkr,
+ [GCC_CAMSS_JPEG_AXI_CLK] = &gcc_camss_jpeg_axi_clk.clkr,
+ [GCC_CAMSS_MCLK0_CLK] = &gcc_camss_mclk0_clk.clkr,
+ [GCC_CAMSS_MCLK1_CLK] = &gcc_camss_mclk1_clk.clkr,
+ [GCC_CAMSS_MICRO_AHB_CLK] = &gcc_camss_micro_ahb_clk.clkr,
+ [GCC_CAMSS_CSI0PHYTIMER_CLK] = &gcc_camss_csi0phytimer_clk.clkr,
+ [GCC_CAMSS_CSI1PHYTIMER_CLK] = &gcc_camss_csi1phytimer_clk.clkr,
+ [GCC_CAMSS_AHB_CLK] = &gcc_camss_ahb_clk.clkr,
+ [GCC_CAMSS_TOP_AHB_CLK] = &gcc_camss_top_ahb_clk.clkr,
+ [GCC_CAMSS_CPP_AHB_CLK] = &gcc_camss_cpp_ahb_clk.clkr,
+ [GCC_CAMSS_CPP_CLK] = &gcc_camss_cpp_clk.clkr,
+ [GCC_CAMSS_VFE0_CLK] = &gcc_camss_vfe0_clk.clkr,
+ [GCC_CAMSS_VFE_AHB_CLK] = &gcc_camss_vfe_ahb_clk.clkr,
+ [GCC_CAMSS_VFE_AXI_CLK] = &gcc_camss_vfe_axi_clk.clkr,
+ [GCC_CRYPTO_AHB_CLK] = &gcc_crypto_ahb_clk.clkr,
+ [GCC_CRYPTO_AXI_CLK] = &gcc_crypto_axi_clk.clkr,
+ [GCC_CRYPTO_CLK] = &gcc_crypto_clk.clkr,
+ [GCC_OXILI_GMEM_CLK] = &gcc_oxili_gmem_clk.clkr,
+ [GCC_GP1_CLK] = &gcc_gp1_clk.clkr,
+ [GCC_GP2_CLK] = &gcc_gp2_clk.clkr,
+ [GCC_GP3_CLK] = &gcc_gp3_clk.clkr,
+ [GCC_MDSS_AHB_CLK] = &gcc_mdss_ahb_clk.clkr,
+ [GCC_MDSS_AXI_CLK] = &gcc_mdss_axi_clk.clkr,
+ [GCC_MDSS_BYTE0_CLK] = &gcc_mdss_byte0_clk.clkr,
+ [GCC_MDSS_ESC0_CLK] = &gcc_mdss_esc0_clk.clkr,
+ [GCC_MDSS_MDP_CLK] = &gcc_mdss_mdp_clk.clkr,
+ [GCC_MDSS_PCLK0_CLK] = &gcc_mdss_pclk0_clk.clkr,
+ [GCC_MDSS_VSYNC_CLK] = &gcc_mdss_vsync_clk.clkr,
+ [GCC_MSS_CFG_AHB_CLK] = &gcc_mss_cfg_ahb_clk.clkr,
+ [GCC_OXILI_AHB_CLK] = &gcc_oxili_ahb_clk.clkr,
+ [GCC_OXILI_GFX3D_CLK] = &gcc_oxili_gfx3d_clk.clkr,
+ [GCC_PDM2_CLK] = &gcc_pdm2_clk.clkr,
+ [GCC_PDM_AHB_CLK] = &gcc_pdm_ahb_clk.clkr,
+ [GCC_PRNG_AHB_CLK] = &gcc_prng_ahb_clk.clkr,
+ [GCC_SDCC1_AHB_CLK] = &gcc_sdcc1_ahb_clk.clkr,
+ [GCC_SDCC1_APPS_CLK] = &gcc_sdcc1_apps_clk.clkr,
+ [GCC_SDCC2_AHB_CLK] = &gcc_sdcc2_ahb_clk.clkr,
+ [GCC_SDCC2_APPS_CLK] = &gcc_sdcc2_apps_clk.clkr,
+ [GCC_GTCU_AHB_CLK] = &gcc_gtcu_ahb_clk.clkr,
+ [GCC_JPEG_TBU_CLK] = &gcc_jpeg_tbu_clk.clkr,
+ [GCC_MDP_TBU_CLK] = &gcc_mdp_tbu_clk.clkr,
+ [GCC_SMMU_CFG_CLK] = &gcc_smmu_cfg_clk.clkr,
+ [GCC_VENUS_TBU_CLK] = &gcc_venus_tbu_clk.clkr,
+ [GCC_VFE_TBU_CLK] = &gcc_vfe_tbu_clk.clkr,
+ [GCC_USB2A_PHY_SLEEP_CLK] = &gcc_usb2a_phy_sleep_clk.clkr,
+ [GCC_USB_HS_AHB_CLK] = &gcc_usb_hs_ahb_clk.clkr,
+ [GCC_USB_HS_SYSTEM_CLK] = &gcc_usb_hs_system_clk.clkr,
+ [GCC_VENUS0_AHB_CLK] = &gcc_venus0_ahb_clk.clkr,
+ [GCC_VENUS0_AXI_CLK] = &gcc_venus0_axi_clk.clkr,
+ [GCC_VENUS0_VCODEC0_CLK] = &gcc_venus0_vcodec0_clk.clkr,
+};
+
+static const struct qcom_reset_map gcc_msm8916_resets[] = {
+ [GCC_BLSP1_BCR] = { 0x01000 },
+ [GCC_BLSP1_QUP1_BCR] = { 0x02000 },
+ [GCC_BLSP1_UART1_BCR] = { 0x02038 },
+ [GCC_BLSP1_QUP2_BCR] = { 0x03008 },
+ [GCC_BLSP1_UART2_BCR] = { 0x03028 },
+ [GCC_BLSP1_QUP3_BCR] = { 0x04018 },
+ [GCC_BLSP1_QUP4_BCR] = { 0x05018 },
+ [GCC_BLSP1_QUP5_BCR] = { 0x06018 },
+ [GCC_BLSP1_QUP6_BCR] = { 0x07018 },
+ [GCC_IMEM_BCR] = { 0x0e000 },
+ [GCC_SMMU_BCR] = { 0x12000 },
+ [GCC_APSS_TCU_BCR] = { 0x12050 },
+ [GCC_SMMU_XPU_BCR] = { 0x12054 },
+ [GCC_PCNOC_TBU_BCR] = { 0x12058 },
+ [GCC_PRNG_BCR] = { 0x13000 },
+ [GCC_BOOT_ROM_BCR] = { 0x13008 },
+ [GCC_CRYPTO_BCR] = { 0x16000 },
+ [GCC_SEC_CTRL_BCR] = { 0x1a000 },
+ [GCC_AUDIO_CORE_BCR] = { 0x1c008 },
+ [GCC_ULT_AUDIO_BCR] = { 0x1c0b4 },
+ [GCC_DEHR_BCR] = { 0x1f000 },
+ [GCC_SYSTEM_NOC_BCR] = { 0x26000 },
+ [GCC_PCNOC_BCR] = { 0x27018 },
+ [GCC_TCSR_BCR] = { 0x28000 },
+ [GCC_QDSS_BCR] = { 0x29000 },
+ [GCC_DCD_BCR] = { 0x2a000 },
+ [GCC_MSG_RAM_BCR] = { 0x2b000 },
+ [GCC_MPM_BCR] = { 0x2c000 },
+ [GCC_SPMI_BCR] = { 0x2e000 },
+ [GCC_SPDM_BCR] = { 0x2f000 },
+ [GCC_MM_SPDM_BCR] = { 0x2f024 },
+ [GCC_BIMC_BCR] = { 0x31000 },
+ [GCC_RBCPR_BCR] = { 0x33000 },
+ [GCC_TLMM_BCR] = { 0x34000 },
+ [GCC_USB_HS_BCR] = { 0x41000 },
+ [GCC_USB2A_PHY_BCR] = { 0x41028 },
+ [GCC_SDCC1_BCR] = { 0x42000 },
+ [GCC_SDCC2_BCR] = { 0x43000 },
+ [GCC_PDM_BCR] = { 0x44000 },
+ [GCC_SNOC_BUS_TIMEOUT0_BCR] = { 0x47000 },
+ [GCC_PCNOC_BUS_TIMEOUT0_BCR] = { 0x48000 },
+ [GCC_PCNOC_BUS_TIMEOUT1_BCR] = { 0x48008 },
+ [GCC_PCNOC_BUS_TIMEOUT2_BCR] = { 0x48010 },
+ [GCC_PCNOC_BUS_TIMEOUT3_BCR] = { 0x48018 },
+ [GCC_PCNOC_BUS_TIMEOUT4_BCR] = { 0x48020 },
+ [GCC_PCNOC_BUS_TIMEOUT5_BCR] = { 0x48028 },
+ [GCC_PCNOC_BUS_TIMEOUT6_BCR] = { 0x48030 },
+ [GCC_PCNOC_BUS_TIMEOUT7_BCR] = { 0x48038 },
+ [GCC_PCNOC_BUS_TIMEOUT8_BCR] = { 0x48040 },
+ [GCC_PCNOC_BUS_TIMEOUT9_BCR] = { 0x48048 },
+ [GCC_MMSS_BCR] = { 0x4b000 },
+ [GCC_VENUS0_BCR] = { 0x4c014 },
+ [GCC_MDSS_BCR] = { 0x4d074 },
+ [GCC_CAMSS_PHY0_BCR] = { 0x4e018 },
+ [GCC_CAMSS_CSI0_BCR] = { 0x4e038 },
+ [GCC_CAMSS_CSI0PHY_BCR] = { 0x4e044 },
+ [GCC_CAMSS_CSI0RDI_BCR] = { 0x4e04c },
+ [GCC_CAMSS_CSI0PIX_BCR] = { 0x4e054 },
+ [GCC_CAMSS_PHY1_BCR] = { 0x4f018 },
+ [GCC_CAMSS_CSI1_BCR] = { 0x4f038 },
+ [GCC_CAMSS_CSI1PHY_BCR] = { 0x4f044 },
+ [GCC_CAMSS_CSI1RDI_BCR] = { 0x4f04c },
+ [GCC_CAMSS_CSI1PIX_BCR] = { 0x4f054 },
+ [GCC_CAMSS_ISPIF_BCR] = { 0x50000 },
+ [GCC_CAMSS_CCI_BCR] = { 0x51014 },
+ [GCC_CAMSS_MCLK0_BCR] = { 0x52014 },
+ [GCC_CAMSS_MCLK1_BCR] = { 0x53014 },
+ [GCC_CAMSS_GP0_BCR] = { 0x54014 },
+ [GCC_CAMSS_GP1_BCR] = { 0x55014 },
+ [GCC_CAMSS_TOP_BCR] = { 0x56000 },
+ [GCC_CAMSS_MICRO_BCR] = { 0x56008 },
+ [GCC_CAMSS_JPEG_BCR] = { 0x57018 },
+ [GCC_CAMSS_VFE_BCR] = { 0x58030 },
+ [GCC_CAMSS_CSI_VFE0_BCR] = { 0x5804c },
+ [GCC_OXILI_BCR] = { 0x59018 },
+ [GCC_GMEM_BCR] = { 0x5902c },
+ [GCC_CAMSS_AHB_BCR] = { 0x5a018 },
+ [GCC_MDP_TBU_BCR] = { 0x62000 },
+ [GCC_GFX_TBU_BCR] = { 0x63000 },
+ [GCC_GFX_TCU_BCR] = { 0x64000 },
+ [GCC_MSS_TBU_AXI_BCR] = { 0x65000 },
+ [GCC_MSS_TBU_GSS_AXI_BCR] = { 0x66000 },
+ [GCC_MSS_TBU_Q6_AXI_BCR] = { 0x67000 },
+ [GCC_GTCU_AHB_BCR] = { 0x68000 },
+ [GCC_SMMU_CFG_BCR] = { 0x69000 },
+ [GCC_VFE_TBU_BCR] = { 0x6a000 },
+ [GCC_VENUS_TBU_BCR] = { 0x6b000 },
+ [GCC_JPEG_TBU_BCR] = { 0x6c000 },
+ [GCC_PRONTO_TBU_BCR] = { 0x6d000 },
+ [GCC_SMMU_CATS_BCR] = { 0x7c000 },
+};
+
+static const struct regmap_config gcc_msm8916_regmap_config = {
+ .reg_bits = 32,
+ .reg_stride = 4,
+ .val_bits = 32,
+ .max_register = 0x80000,
+ .fast_io = true,
+};
+
+static const struct qcom_cc_desc gcc_msm8916_desc = {
+ .config = &gcc_msm8916_regmap_config,
+ .clks = gcc_msm8916_clocks,
+ .num_clks = ARRAY_SIZE(gcc_msm8916_clocks),
+ .resets = gcc_msm8916_resets,
+ .num_resets = ARRAY_SIZE(gcc_msm8916_resets),
+};
+
+static const struct of_device_id gcc_msm8916_match_table[] = {
+ { .compatible = "qcom,gcc-msm8916" },
+ { }
+};
+MODULE_DEVICE_TABLE(of, gcc_msm8916_match_table);
+
+static int gcc_msm8916_probe(struct platform_device *pdev)
+{
+ struct clk *clk;
+ struct device *dev = &pdev->dev;
+
+ /* Temporary until RPM clocks supported */
+ clk = clk_register_fixed_rate(dev, "xo", NULL, CLK_IS_ROOT, 19200000);
+ if (IS_ERR(clk))
+ return PTR_ERR(clk);
+
+ clk = clk_register_fixed_rate(dev, "sleep_clk_src", NULL,
+ CLK_IS_ROOT, 32768);
+ if (IS_ERR(clk))
+ return PTR_ERR(clk);
+
+ return qcom_cc_probe(pdev, &gcc_msm8916_desc);
+}
+
+static int gcc_msm8916_remove(struct platform_device *pdev)
+{
+ qcom_cc_remove(pdev);
+ return 0;
+}
+
+static struct platform_driver gcc_msm8916_driver = {
+ .probe = gcc_msm8916_probe,
+ .remove = gcc_msm8916_remove,
+ .driver = {
+ .name = "gcc-msm8916",
+ .of_match_table = gcc_msm8916_match_table,
+ },
+};
+
+static int __init gcc_msm8916_init(void)
+{
+ return platform_driver_register(&gcc_msm8916_driver);
+}
+core_initcall(gcc_msm8916_init);
+
+static void __exit gcc_msm8916_exit(void)
+{
+ platform_driver_unregister(&gcc_msm8916_driver);
+}
+module_exit(gcc_msm8916_exit);
+
+MODULE_DESCRIPTION("Qualcomm GCC MSM8916 Driver");
+MODULE_LICENSE("GPL v2");
+MODULE_ALIAS("platform:gcc-msm8916");
diff --git a/drivers/clk/qcom/gcc-msm8960.c b/drivers/clk/qcom/gcc-msm8960.c
index e60feff..eb6a4f9 100644
--- a/drivers/clk/qcom/gcc-msm8960.c
+++ b/drivers/clk/qcom/gcc-msm8960.c
@@ -113,14 +113,16 @@
},
};
-#define P_PXO 0
-#define P_PLL8 1
-#define P_PLL3 2
-#define P_CXO 2
+enum {
+ P_PXO,
+ P_PLL8,
+ P_PLL3,
+ P_CXO,
+};
-static const u8 gcc_pxo_pll8_map[] = {
- [P_PXO] = 0,
- [P_PLL8] = 3,
+static const struct parent_map gcc_pxo_pll8_map[] = {
+ { P_PXO, 0 },
+ { P_PLL8, 3 }
};
static const char *gcc_pxo_pll8[] = {
@@ -128,10 +130,10 @@
"pll8_vote",
};
-static const u8 gcc_pxo_pll8_cxo_map[] = {
- [P_PXO] = 0,
- [P_PLL8] = 3,
- [P_CXO] = 5,
+static const struct parent_map gcc_pxo_pll8_cxo_map[] = {
+ { P_PXO, 0 },
+ { P_PLL8, 3 },
+ { P_CXO, 5 }
};
static const char *gcc_pxo_pll8_cxo[] = {
@@ -140,10 +142,10 @@
"cxo",
};
-static const u8 gcc_pxo_pll8_pll3_map[] = {
- [P_PXO] = 0,
- [P_PLL8] = 3,
- [P_PLL3] = 6,
+static const struct parent_map gcc_pxo_pll8_pll3_map[] = {
+ { P_PXO, 0 },
+ { P_PLL8, 3 },
+ { P_PLL3, 6 }
};
static const char *gcc_pxo_pll8_pll3[] = {
diff --git a/drivers/clk/qcom/gcc-msm8974.c b/drivers/clk/qcom/gcc-msm8974.c
index a6937fe..c39d098 100644
--- a/drivers/clk/qcom/gcc-msm8974.c
+++ b/drivers/clk/qcom/gcc-msm8974.c
@@ -32,14 +32,16 @@
#include "clk-branch.h"
#include "reset.h"
-#define P_XO 0
-#define P_GPLL0 1
-#define P_GPLL1 1
-#define P_GPLL4 2
+enum {
+ P_XO,
+ P_GPLL0,
+ P_GPLL1,
+ P_GPLL4,
+};
-static const u8 gcc_xo_gpll0_map[] = {
- [P_XO] = 0,
- [P_GPLL0] = 1,
+static const struct parent_map gcc_xo_gpll0_map[] = {
+ { P_XO, 0 },
+ { P_GPLL0, 1 }
};
static const char *gcc_xo_gpll0[] = {
@@ -47,10 +49,10 @@
"gpll0_vote",
};
-static const u8 gcc_xo_gpll0_gpll4_map[] = {
- [P_XO] = 0,
- [P_GPLL0] = 1,
- [P_GPLL4] = 5,
+static const struct parent_map gcc_xo_gpll0_gpll4_map[] = {
+ { P_XO, 0 },
+ { P_GPLL0, 1 },
+ { P_GPLL4, 5 }
};
static const char *gcc_xo_gpll0_gpll4[] = {
@@ -984,9 +986,9 @@
{ }
};
-static u8 usb_hsic_clk_src_map[] = {
- [P_XO] = 0,
- [P_GPLL1] = 4,
+static const struct parent_map usb_hsic_clk_src_map[] = {
+ { P_XO, 0 },
+ { P_GPLL1, 4 }
};
static struct clk_rcg2 usb_hsic_clk_src = {
diff --git a/drivers/clk/qcom/lcc-ipq806x.c b/drivers/clk/qcom/lcc-ipq806x.c
index c9ff27b..47f0ac1 100644
--- a/drivers/clk/qcom/lcc-ipq806x.c
+++ b/drivers/clk/qcom/lcc-ipq806x.c
@@ -61,12 +61,14 @@
.main_output_mask = BIT(23),
};
-#define P_PXO 0
-#define P_PLL4 1
+enum {
+ P_PXO,
+ P_PLL4,
+};
-static const u8 lcc_pxo_pll4_map[] = {
- [P_PXO] = 0,
- [P_PLL4] = 2,
+static const struct parent_map lcc_pxo_pll4_map[] = {
+ { P_PXO, 0 },
+ { P_PLL4, 2 }
};
static const char *lcc_pxo_pll4[] = {
@@ -294,14 +296,14 @@
};
static struct freq_tbl clk_tbl_aif_osr[] = {
- { 22050, P_PLL4, 1, 147, 20480 },
- { 32000, P_PLL4, 1, 1, 96 },
- { 44100, P_PLL4, 1, 147, 10240 },
- { 48000, P_PLL4, 1, 1, 64 },
- { 88200, P_PLL4, 1, 147, 5120 },
- { 96000, P_PLL4, 1, 1, 32 },
- { 176400, P_PLL4, 1, 147, 2560 },
- { 192000, P_PLL4, 1, 1, 16 },
+ { 2822400, P_PLL4, 1, 147, 20480 },
+ { 4096000, P_PLL4, 1, 1, 96 },
+ { 5644800, P_PLL4, 1, 147, 10240 },
+ { 6144000, P_PLL4, 1, 1, 64 },
+ { 11289600, P_PLL4, 1, 147, 5120 },
+ { 12288000, P_PLL4, 1, 1, 32 },
+ { 22579200, P_PLL4, 1, 147, 2560 },
+ { 24576000, P_PLL4, 1, 1, 16 },
{ },
};
@@ -360,7 +362,7 @@
};
static struct freq_tbl clk_tbl_ahbix[] = {
- { 131072, P_PLL4, 1, 1, 3 },
+ { 131072000, P_PLL4, 1, 1, 3 },
{ },
};
@@ -386,13 +388,12 @@
.freq_tbl = clk_tbl_ahbix,
.clkr = {
.enable_reg = 0x38,
- .enable_mask = BIT(10), /* toggle the gfmux to select mn/pxo */
+ .enable_mask = BIT(11),
.hw.init = &(struct clk_init_data){
.name = "ahbix",
.parent_names = lcc_pxo_pll4,
.num_parents = 2,
- .ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
+ .ops = &clk_rcg_lcc_ops,
},
},
};
diff --git a/drivers/clk/qcom/lcc-msm8960.c b/drivers/clk/qcom/lcc-msm8960.c
index e2c8632..d0df9d5 100644
--- a/drivers/clk/qcom/lcc-msm8960.c
+++ b/drivers/clk/qcom/lcc-msm8960.c
@@ -47,12 +47,14 @@
},
};
-#define P_PXO 0
-#define P_PLL4 1
+enum {
+ P_PXO,
+ P_PLL4,
+};
-static const u8 lcc_pxo_pll4_map[] = {
- [P_PXO] = 0,
- [P_PLL4] = 2,
+static const struct parent_map lcc_pxo_pll4_map[] = {
+ { P_PXO, 0 },
+ { P_PLL4, 2 }
};
static const char *lcc_pxo_pll4[] = {
diff --git a/drivers/clk/qcom/mmcc-apq8084.c b/drivers/clk/qcom/mmcc-apq8084.c
index 157139a..1b17df2 100644
--- a/drivers/clk/qcom/mmcc-apq8084.c
+++ b/drivers/clk/qcom/mmcc-apq8084.c
@@ -27,28 +27,30 @@
#include "clk-branch.h"
#include "reset.h"
-#define P_XO 0
-#define P_MMPLL0 1
-#define P_EDPLINK 1
-#define P_MMPLL1 2
-#define P_HDMIPLL 2
-#define P_GPLL0 3
-#define P_EDPVCO 3
-#define P_MMPLL4 4
-#define P_DSI0PLL 4
-#define P_DSI0PLL_BYTE 4
-#define P_MMPLL2 4
-#define P_MMPLL3 4
-#define P_GPLL1 5
-#define P_DSI1PLL 5
-#define P_DSI1PLL_BYTE 5
-#define P_MMSLEEP 6
+enum {
+ P_XO,
+ P_MMPLL0,
+ P_EDPLINK,
+ P_MMPLL1,
+ P_HDMIPLL,
+ P_GPLL0,
+ P_EDPVCO,
+ P_MMPLL4,
+ P_DSI0PLL,
+ P_DSI0PLL_BYTE,
+ P_MMPLL2,
+ P_MMPLL3,
+ P_GPLL1,
+ P_DSI1PLL,
+ P_DSI1PLL_BYTE,
+ P_MMSLEEP,
+};
-static const u8 mmcc_xo_mmpll0_mmpll1_gpll0_map[] = {
- [P_XO] = 0,
- [P_MMPLL0] = 1,
- [P_MMPLL1] = 2,
- [P_GPLL0] = 5,
+static const struct parent_map mmcc_xo_mmpll0_mmpll1_gpll0_map[] = {
+ { P_XO, 0 },
+ { P_MMPLL0, 1 },
+ { P_MMPLL1, 2 },
+ { P_GPLL0, 5 }
};
static const char *mmcc_xo_mmpll0_mmpll1_gpll0[] = {
@@ -58,13 +60,13 @@
"mmss_gpll0_vote",
};
-static const u8 mmcc_xo_mmpll0_dsi_hdmi_gpll0_map[] = {
- [P_XO] = 0,
- [P_MMPLL0] = 1,
- [P_HDMIPLL] = 4,
- [P_GPLL0] = 5,
- [P_DSI0PLL] = 2,
- [P_DSI1PLL] = 3,
+static const struct parent_map mmcc_xo_mmpll0_dsi_hdmi_gpll0_map[] = {
+ { P_XO, 0 },
+ { P_MMPLL0, 1 },
+ { P_HDMIPLL, 4 },
+ { P_GPLL0, 5 },
+ { P_DSI0PLL, 2 },
+ { P_DSI1PLL, 3 }
};
static const char *mmcc_xo_mmpll0_dsi_hdmi_gpll0[] = {
@@ -76,12 +78,12 @@
"dsi1pll",
};
-static const u8 mmcc_xo_mmpll0_1_2_gpll0_map[] = {
- [P_XO] = 0,
- [P_MMPLL0] = 1,
- [P_MMPLL1] = 2,
- [P_GPLL0] = 5,
- [P_MMPLL2] = 3,
+static const struct parent_map mmcc_xo_mmpll0_1_2_gpll0_map[] = {
+ { P_XO, 0 },
+ { P_MMPLL0, 1 },
+ { P_MMPLL1, 2 },
+ { P_GPLL0, 5 },
+ { P_MMPLL2, 3 }
};
static const char *mmcc_xo_mmpll0_1_2_gpll0[] = {
@@ -92,12 +94,12 @@
"mmpll2",
};
-static const u8 mmcc_xo_mmpll0_1_3_gpll0_map[] = {
- [P_XO] = 0,
- [P_MMPLL0] = 1,
- [P_MMPLL1] = 2,
- [P_GPLL0] = 5,
- [P_MMPLL3] = 3,
+static const struct parent_map mmcc_xo_mmpll0_1_3_gpll0_map[] = {
+ { P_XO, 0 },
+ { P_MMPLL0, 1 },
+ { P_MMPLL1, 2 },
+ { P_GPLL0, 5 },
+ { P_MMPLL3, 3 }
};
static const char *mmcc_xo_mmpll0_1_3_gpll0[] = {
@@ -108,13 +110,13 @@
"mmpll3",
};
-static const u8 mmcc_xo_dsi_hdmi_edp_map[] = {
- [P_XO] = 0,
- [P_EDPLINK] = 4,
- [P_HDMIPLL] = 3,
- [P_EDPVCO] = 5,
- [P_DSI0PLL] = 1,
- [P_DSI1PLL] = 2,
+static const struct parent_map mmcc_xo_dsi_hdmi_edp_map[] = {
+ { P_XO, 0 },
+ { P_EDPLINK, 4 },
+ { P_HDMIPLL, 3 },
+ { P_EDPVCO, 5 },
+ { P_DSI0PLL, 1 },
+ { P_DSI1PLL, 2 }
};
static const char *mmcc_xo_dsi_hdmi_edp[] = {
@@ -126,13 +128,13 @@
"dsi1pll",
};
-static const u8 mmcc_xo_dsi_hdmi_edp_gpll0_map[] = {
- [P_XO] = 0,
- [P_EDPLINK] = 4,
- [P_HDMIPLL] = 3,
- [P_GPLL0] = 5,
- [P_DSI0PLL] = 1,
- [P_DSI1PLL] = 2,
+static const struct parent_map mmcc_xo_dsi_hdmi_edp_gpll0_map[] = {
+ { P_XO, 0 },
+ { P_EDPLINK, 4 },
+ { P_HDMIPLL, 3 },
+ { P_GPLL0, 5 },
+ { P_DSI0PLL, 1 },
+ { P_DSI1PLL, 2 }
};
static const char *mmcc_xo_dsi_hdmi_edp_gpll0[] = {
@@ -144,13 +146,13 @@
"dsi1pll",
};
-static const u8 mmcc_xo_dsibyte_hdmi_edp_gpll0_map[] = {
- [P_XO] = 0,
- [P_EDPLINK] = 4,
- [P_HDMIPLL] = 3,
- [P_GPLL0] = 5,
- [P_DSI0PLL_BYTE] = 1,
- [P_DSI1PLL_BYTE] = 2,
+static const struct parent_map mmcc_xo_dsibyte_hdmi_edp_gpll0_map[] = {
+ { P_XO, 0 },
+ { P_EDPLINK, 4 },
+ { P_HDMIPLL, 3 },
+ { P_GPLL0, 5 },
+ { P_DSI0PLL_BYTE, 1 },
+ { P_DSI1PLL_BYTE, 2 }
};
static const char *mmcc_xo_dsibyte_hdmi_edp_gpll0[] = {
@@ -162,12 +164,12 @@
"dsi1pllbyte",
};
-static const u8 mmcc_xo_mmpll0_1_4_gpll0_map[] = {
- [P_XO] = 0,
- [P_MMPLL0] = 1,
- [P_MMPLL1] = 2,
- [P_GPLL0] = 5,
- [P_MMPLL4] = 3,
+static const struct parent_map mmcc_xo_mmpll0_1_4_gpll0_map[] = {
+ { P_XO, 0 },
+ { P_MMPLL0, 1 },
+ { P_MMPLL1, 2 },
+ { P_GPLL0, 5 },
+ { P_MMPLL4, 3 }
};
static const char *mmcc_xo_mmpll0_1_4_gpll0[] = {
@@ -178,13 +180,13 @@
"gpll0",
};
-static const u8 mmcc_xo_mmpll0_1_4_gpll1_0_map[] = {
- [P_XO] = 0,
- [P_MMPLL0] = 1,
- [P_MMPLL1] = 2,
- [P_MMPLL4] = 3,
- [P_GPLL0] = 5,
- [P_GPLL1] = 4,
+static const struct parent_map mmcc_xo_mmpll0_1_4_gpll1_0_map[] = {
+ { P_XO, 0 },
+ { P_MMPLL0, 1 },
+ { P_MMPLL1, 2 },
+ { P_MMPLL4, 3 },
+ { P_GPLL0, 5 },
+ { P_GPLL1, 4 }
};
static const char *mmcc_xo_mmpll0_1_4_gpll1_0[] = {
@@ -196,14 +198,14 @@
"gpll0",
};
-static const u8 mmcc_xo_mmpll0_1_4_gpll1_0_sleep_map[] = {
- [P_XO] = 0,
- [P_MMPLL0] = 1,
- [P_MMPLL1] = 2,
- [P_MMPLL4] = 3,
- [P_GPLL0] = 5,
- [P_GPLL1] = 4,
- [P_MMSLEEP] = 6,
+static const struct parent_map mmcc_xo_mmpll0_1_4_gpll1_0_sleep_map[] = {
+ { P_XO, 0 },
+ { P_MMPLL0, 1 },
+ { P_MMPLL1, 2 },
+ { P_MMPLL4, 3 },
+ { P_GPLL0, 5 },
+ { P_GPLL1, 4 },
+ { P_MMSLEEP, 6 }
};
static const char *mmcc_xo_mmpll0_1_4_gpll1_0_sleep[] = {
diff --git a/drivers/clk/qcom/mmcc-msm8960.c b/drivers/clk/qcom/mmcc-msm8960.c
index e8b33bb..9711bca 100644
--- a/drivers/clk/qcom/mmcc-msm8960.c
+++ b/drivers/clk/qcom/mmcc-msm8960.c
@@ -33,18 +33,21 @@
#include "clk-branch.h"
#include "reset.h"
-#define P_PXO 0
-#define P_PLL8 1
-#define P_PLL2 2
-#define P_PLL3 3
-#define P_PLL15 3
+enum {
+ P_PXO,
+ P_PLL8,
+ P_PLL2,
+ P_PLL3,
+ P_PLL15,
+ P_HDMI_PLL,
+};
#define F_MN(f, s, _m, _n) { .freq = f, .src = s, .m = _m, .n = _n }
-static u8 mmcc_pxo_pll8_pll2_map[] = {
- [P_PXO] = 0,
- [P_PLL8] = 2,
- [P_PLL2] = 1,
+static const struct parent_map mmcc_pxo_pll8_pll2_map[] = {
+ { P_PXO, 0 },
+ { P_PLL8, 2 },
+ { P_PLL2, 1 }
};
static const char *mmcc_pxo_pll8_pll2[] = {
@@ -53,11 +56,11 @@
"pll2",
};
-static u8 mmcc_pxo_pll8_pll2_pll3_map[] = {
- [P_PXO] = 0,
- [P_PLL8] = 2,
- [P_PLL2] = 1,
- [P_PLL3] = 3,
+static const struct parent_map mmcc_pxo_pll8_pll2_pll3_map[] = {
+ { P_PXO, 0 },
+ { P_PLL8, 2 },
+ { P_PLL2, 1 },
+ { P_PLL3, 3 }
};
static const char *mmcc_pxo_pll8_pll2_pll15[] = {
@@ -67,11 +70,11 @@
"pll15",
};
-static u8 mmcc_pxo_pll8_pll2_pll15_map[] = {
- [P_PXO] = 0,
- [P_PLL8] = 2,
- [P_PLL2] = 1,
- [P_PLL15] = 3,
+static const struct parent_map mmcc_pxo_pll8_pll2_pll15_map[] = {
+ { P_PXO, 0 },
+ { P_PLL8, 2 },
+ { P_PLL2, 1 },
+ { P_PLL15, 3 }
};
static const char *mmcc_pxo_pll8_pll2_pll3[] = {
@@ -1377,11 +1380,9 @@
},
};
-#define P_HDMI_PLL 1
-
-static u8 mmcc_pxo_hdmi_map[] = {
- [P_PXO] = 0,
- [P_HDMI_PLL] = 3,
+static const struct parent_map mmcc_pxo_hdmi_map[] = {
+ { P_PXO, 0 },
+ { P_HDMI_PLL, 3 }
};
static const char *mmcc_pxo_hdmi[] = {
diff --git a/drivers/clk/qcom/mmcc-msm8974.c b/drivers/clk/qcom/mmcc-msm8974.c
index be94c54..07f4cc1 100644
--- a/drivers/clk/qcom/mmcc-msm8974.c
+++ b/drivers/clk/qcom/mmcc-msm8974.c
@@ -32,26 +32,28 @@
#include "clk-branch.h"
#include "reset.h"
-#define P_XO 0
-#define P_MMPLL0 1
-#define P_EDPLINK 1
-#define P_MMPLL1 2
-#define P_HDMIPLL 2
-#define P_GPLL0 3
-#define P_EDPVCO 3
-#define P_GPLL1 4
-#define P_DSI0PLL 4
-#define P_DSI0PLL_BYTE 4
-#define P_MMPLL2 4
-#define P_MMPLL3 4
-#define P_DSI1PLL 5
-#define P_DSI1PLL_BYTE 5
+enum {
+ P_XO,
+ P_MMPLL0,
+ P_EDPLINK,
+ P_MMPLL1,
+ P_HDMIPLL,
+ P_GPLL0,
+ P_EDPVCO,
+ P_GPLL1,
+ P_DSI0PLL,
+ P_DSI0PLL_BYTE,
+ P_MMPLL2,
+ P_MMPLL3,
+ P_DSI1PLL,
+ P_DSI1PLL_BYTE,
+};
-static const u8 mmcc_xo_mmpll0_mmpll1_gpll0_map[] = {
- [P_XO] = 0,
- [P_MMPLL0] = 1,
- [P_MMPLL1] = 2,
- [P_GPLL0] = 5,
+static const struct parent_map mmcc_xo_mmpll0_mmpll1_gpll0_map[] = {
+ { P_XO, 0 },
+ { P_MMPLL0, 1 },
+ { P_MMPLL1, 2 },
+ { P_GPLL0, 5 }
};
static const char *mmcc_xo_mmpll0_mmpll1_gpll0[] = {
@@ -61,13 +63,13 @@
"mmss_gpll0_vote",
};
-static const u8 mmcc_xo_mmpll0_dsi_hdmi_gpll0_map[] = {
- [P_XO] = 0,
- [P_MMPLL0] = 1,
- [P_HDMIPLL] = 4,
- [P_GPLL0] = 5,
- [P_DSI0PLL] = 2,
- [P_DSI1PLL] = 3,
+static const struct parent_map mmcc_xo_mmpll0_dsi_hdmi_gpll0_map[] = {
+ { P_XO, 0 },
+ { P_MMPLL0, 1 },
+ { P_HDMIPLL, 4 },
+ { P_GPLL0, 5 },
+ { P_DSI0PLL, 2 },
+ { P_DSI1PLL, 3 }
};
static const char *mmcc_xo_mmpll0_dsi_hdmi_gpll0[] = {
@@ -79,12 +81,12 @@
"dsi1pll",
};
-static const u8 mmcc_xo_mmpll0_1_2_gpll0_map[] = {
- [P_XO] = 0,
- [P_MMPLL0] = 1,
- [P_MMPLL1] = 2,
- [P_GPLL0] = 5,
- [P_MMPLL2] = 3,
+static const struct parent_map mmcc_xo_mmpll0_1_2_gpll0_map[] = {
+ { P_XO, 0 },
+ { P_MMPLL0, 1 },
+ { P_MMPLL1, 2 },
+ { P_GPLL0, 5 },
+ { P_MMPLL2, 3 }
};
static const char *mmcc_xo_mmpll0_1_2_gpll0[] = {
@@ -95,12 +97,12 @@
"mmpll2",
};
-static const u8 mmcc_xo_mmpll0_1_3_gpll0_map[] = {
- [P_XO] = 0,
- [P_MMPLL0] = 1,
- [P_MMPLL1] = 2,
- [P_GPLL0] = 5,
- [P_MMPLL3] = 3,
+static const struct parent_map mmcc_xo_mmpll0_1_3_gpll0_map[] = {
+ { P_XO, 0 },
+ { P_MMPLL0, 1 },
+ { P_MMPLL1, 2 },
+ { P_GPLL0, 5 },
+ { P_MMPLL3, 3 }
};
static const char *mmcc_xo_mmpll0_1_3_gpll0[] = {
@@ -111,12 +113,12 @@
"mmpll3",
};
-static const u8 mmcc_xo_mmpll0_1_gpll1_0_map[] = {
- [P_XO] = 0,
- [P_MMPLL0] = 1,
- [P_MMPLL1] = 2,
- [P_GPLL0] = 5,
- [P_GPLL1] = 4,
+static const struct parent_map mmcc_xo_mmpll0_1_gpll1_0_map[] = {
+ { P_XO, 0 },
+ { P_MMPLL0, 1 },
+ { P_MMPLL1, 2 },
+ { P_GPLL0, 5 },
+ { P_GPLL1, 4 }
};
static const char *mmcc_xo_mmpll0_1_gpll1_0[] = {
@@ -127,13 +129,13 @@
"gpll1_vote",
};
-static const u8 mmcc_xo_dsi_hdmi_edp_map[] = {
- [P_XO] = 0,
- [P_EDPLINK] = 4,
- [P_HDMIPLL] = 3,
- [P_EDPVCO] = 5,
- [P_DSI0PLL] = 1,
- [P_DSI1PLL] = 2,
+static const struct parent_map mmcc_xo_dsi_hdmi_edp_map[] = {
+ { P_XO, 0 },
+ { P_EDPLINK, 4 },
+ { P_HDMIPLL, 3 },
+ { P_EDPVCO, 5 },
+ { P_DSI0PLL, 1 },
+ { P_DSI1PLL, 2 }
};
static const char *mmcc_xo_dsi_hdmi_edp[] = {
@@ -145,13 +147,13 @@
"dsi1pll",
};
-static const u8 mmcc_xo_dsi_hdmi_edp_gpll0_map[] = {
- [P_XO] = 0,
- [P_EDPLINK] = 4,
- [P_HDMIPLL] = 3,
- [P_GPLL0] = 5,
- [P_DSI0PLL] = 1,
- [P_DSI1PLL] = 2,
+static const struct parent_map mmcc_xo_dsi_hdmi_edp_gpll0_map[] = {
+ { P_XO, 0 },
+ { P_EDPLINK, 4 },
+ { P_HDMIPLL, 3 },
+ { P_GPLL0, 5 },
+ { P_DSI0PLL, 1 },
+ { P_DSI1PLL, 2 }
};
static const char *mmcc_xo_dsi_hdmi_edp_gpll0[] = {
@@ -163,13 +165,13 @@
"dsi1pll",
};
-static const u8 mmcc_xo_dsibyte_hdmi_edp_gpll0_map[] = {
- [P_XO] = 0,
- [P_EDPLINK] = 4,
- [P_HDMIPLL] = 3,
- [P_GPLL0] = 5,
- [P_DSI0PLL_BYTE] = 1,
- [P_DSI1PLL_BYTE] = 2,
+static const struct parent_map mmcc_xo_dsibyte_hdmi_edp_gpll0_map[] = {
+ { P_XO, 0 },
+ { P_EDPLINK, 4 },
+ { P_HDMIPLL, 3 },
+ { P_GPLL0, 5 },
+ { P_DSI0PLL_BYTE, 1 },
+ { P_DSI1PLL_BYTE, 2 }
};
static const char *mmcc_xo_dsibyte_hdmi_edp_gpll0[] = {
diff --git a/drivers/clk/rockchip/clk-rk3188.c b/drivers/clk/rockchip/clk-rk3188.c
index 7eb684c..556ce04 100644
--- a/drivers/clk/rockchip/clk-rk3188.c
+++ b/drivers/clk/rockchip/clk-rk3188.c
@@ -704,7 +704,7 @@
GATE(ACLK_GPS, "aclk_gps", "aclk_peri", 0, RK2928_CLKGATE_CON(8), 13, GFLAGS),
};
-static const char *rk3188_critical_clocks[] __initconst = {
+static const char *const rk3188_critical_clocks[] __initconst = {
"aclk_cpu",
"aclk_peri",
"hclk_peri",
diff --git a/drivers/clk/rockchip/clk-rk3288.c b/drivers/clk/rockchip/clk-rk3288.c
index 05d7a0b..d17eb45 100644
--- a/drivers/clk/rockchip/clk-rk3288.c
+++ b/drivers/clk/rockchip/clk-rk3288.c
@@ -771,7 +771,7 @@
GATE(0, "pclk_isp_in", "ext_isp", 0, RK3288_CLKGATE_CON(16), 3, GFLAGS),
};
-static const char *rk3288_critical_clocks[] __initconst = {
+static const char *const rk3288_critical_clocks[] __initconst = {
"aclk_cpu",
"aclk_peri",
"hclk_peri",
diff --git a/drivers/clk/rockchip/clk.c b/drivers/clk/rockchip/clk.c
index 20e05bb..edb5d48 100644
--- a/drivers/clk/rockchip/clk.c
+++ b/drivers/clk/rockchip/clk.c
@@ -317,7 +317,8 @@
rockchip_clk_add_lookup(clk, lookup_id);
}
-void __init rockchip_clk_protect_critical(const char *clocks[], int nclocks)
+void __init rockchip_clk_protect_critical(const char *const clocks[],
+ int nclocks)
{
int i;
diff --git a/drivers/clk/rockchip/clk.h b/drivers/clk/rockchip/clk.h
index 58d2e3b..e63cafe 100644
--- a/drivers/clk/rockchip/clk.h
+++ b/drivers/clk/rockchip/clk.h
@@ -182,7 +182,7 @@
const char **parent_names, u8 num_parents,
void __iomem *reg, int shift);
-#define PNAME(x) static const char *x[] __initconst
+#define PNAME(x) static const char *x[] __initdata
enum rockchip_clk_branch_type {
branch_composite,
@@ -407,7 +407,7 @@
const struct rockchip_cpuclk_reg_data *reg_data,
const struct rockchip_cpuclk_rate_table *rates,
int nrates);
-void rockchip_clk_protect_critical(const char *clocks[], int nclocks);
+void rockchip_clk_protect_critical(const char *const clocks[], int nclocks);
void rockchip_register_restart_notifier(unsigned int reg);
#define ROCKCHIP_SOFTRST_HIWORD_MASK BIT(0)
diff --git a/drivers/clk/samsung/Makefile b/drivers/clk/samsung/Makefile
index 006c6f2..17e9af7 100644
--- a/drivers/clk/samsung/Makefile
+++ b/drivers/clk/samsung/Makefile
@@ -10,6 +10,7 @@
obj-$(CONFIG_SOC_EXYNOS5260) += clk-exynos5260.o
obj-$(CONFIG_SOC_EXYNOS5410) += clk-exynos5410.o
obj-$(CONFIG_SOC_EXYNOS5420) += clk-exynos5420.o
+obj-$(CONFIG_ARCH_EXYNOS5433) += clk-exynos5433.o
obj-$(CONFIG_SOC_EXYNOS5440) += clk-exynos5440.o
obj-$(CONFIG_ARCH_EXYNOS) += clk-exynos-audss.o
obj-$(CONFIG_ARCH_EXYNOS) += clk-exynos-clkout.o
diff --git a/drivers/clk/samsung/clk-exynos-clkout.c b/drivers/clk/samsung/clk-exynos-clkout.c
index 3a7cb25..03a5222 100644
--- a/drivers/clk/samsung/clk-exynos-clkout.c
+++ b/drivers/clk/samsung/clk-exynos-clkout.c
@@ -142,6 +142,8 @@
exynos4_clkout_init);
CLK_OF_DECLARE(exynos4412_clkout, "samsung,exynos4412-pmu",
exynos4_clkout_init);
+CLK_OF_DECLARE(exynos3250_clkout, "samsung,exynos3250-pmu",
+ exynos4_clkout_init);
static void __init exynos5_clkout_init(struct device_node *node)
{
@@ -151,3 +153,5 @@
exynos5_clkout_init);
CLK_OF_DECLARE(exynos5420_clkout, "samsung,exynos5420-pmu",
exynos5_clkout_init);
+CLK_OF_DECLARE(exynos5433_clkout, "samsung,exynos5433-pmu",
+ exynos5_clkout_init);
diff --git a/drivers/clk/samsung/clk-exynos3250.c b/drivers/clk/samsung/clk-exynos3250.c
index cc4c348..538de66 100644
--- a/drivers/clk/samsung/clk-exynos3250.c
+++ b/drivers/clk/samsung/clk-exynos3250.c
@@ -894,3 +894,166 @@
}
CLK_OF_DECLARE(exynos3250_cmu_dmc, "samsung,exynos3250-cmu-dmc",
exynos3250_cmu_dmc_init);
+
+
+/*
+ * CMU ISP
+ */
+
+#define DIV_ISP0 0x300
+#define DIV_ISP1 0x304
+#define GATE_IP_ISP0 0x800
+#define GATE_IP_ISP1 0x804
+#define GATE_SCLK_ISP 0x900
+
+static struct samsung_div_clock isp_div_clks[] __initdata = {
+ /*
+ * NOTE: Following table is sorted by register address in ascending
+ * order and then bitfield shift in descending order, as it is done
+ * in the User's Manual. When adding new entries, please make sure
+ * that the order is preserved, to avoid merge conflicts and make
+ * further work with defined data easier.
+ */
+ /* DIV_ISP0 */
+ DIV(CLK_DIV_ISP1, "div_isp1", "mout_aclk_266_sub", DIV_ISP0, 4, 3),
+ DIV(CLK_DIV_ISP0, "div_isp0", "mout_aclk_266_sub", DIV_ISP0, 0, 3),
+
+ /* DIV_ISP1 */
+ DIV(CLK_DIV_MCUISP1, "div_mcuisp1", "mout_aclk_400_mcuisp_sub",
+ DIV_ISP1, 8, 3),
+ DIV(CLK_DIV_MCUISP0, "div_mcuisp0", "mout_aclk_400_mcuisp_sub",
+ DIV_ISP1, 4, 3),
+ DIV(CLK_DIV_MPWM, "div_mpwm", "div_isp1", DIV_ISP1, 0, 3),
+};
+
+static struct samsung_gate_clock isp_gate_clks[] __initdata = {
+ /*
+ * NOTE: Following table is sorted by register address in ascending
+ * order and then bitfield shift in descending order, as it is done
+ * in the User's Manual. When adding new entries, please make sure
+ * that the order is preserved, to avoid merge conflicts and make
+ * further work with defined data easier.
+ */
+
+ /* GATE_IP_ISP0 */
+ GATE(CLK_UART_ISP, "uart_isp", "uart_isp_top",
+ GATE_IP_ISP0, 31, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_WDT_ISP, "wdt_isp", "mout_aclk_266_sub",
+ GATE_IP_ISP0, 30, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PWM_ISP, "pwm_isp", "mout_aclk_266_sub",
+ GATE_IP_ISP0, 28, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_I2C1_ISP, "i2c1_isp", "mout_aclk_266_sub",
+ GATE_IP_ISP0, 26, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_I2C0_ISP, "i2c0_isp", "mout_aclk_266_sub",
+ GATE_IP_ISP0, 25, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_MPWM_ISP, "mpwm_isp", "mout_aclk_266_sub",
+ GATE_IP_ISP0, 24, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_MCUCTL_ISP, "mcuctl_isp", "mout_aclk_266_sub",
+ GATE_IP_ISP0, 23, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PPMUISPX, "ppmuispx", "mout_aclk_266_sub",
+ GATE_IP_ISP0, 21, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PPMUISPMX, "ppmuispmx", "mout_aclk_266_sub",
+ GATE_IP_ISP0, 20, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_QE_LITE1, "qe_lite1", "mout_aclk_266_sub",
+ GATE_IP_ISP0, 18, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_QE_LITE0, "qe_lite0", "mout_aclk_266_sub",
+ GATE_IP_ISP0, 17, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_QE_FD, "qe_fd", "mout_aclk_266_sub",
+ GATE_IP_ISP0, 16, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_QE_DRC, "qe_drc", "mout_aclk_266_sub",
+ GATE_IP_ISP0, 15, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_QE_ISP, "qe_isp", "mout_aclk_266_sub",
+ GATE_IP_ISP0, 14, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_CSIS1, "csis1", "mout_aclk_266_sub",
+ GATE_IP_ISP0, 13, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_SMMU_LITE1, "smmu_lite1", "mout_aclk_266_sub",
+ GATE_IP_ISP0, 12, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_SMMU_LITE0, "smmu_lite0", "mout_aclk_266_sub",
+ GATE_IP_ISP0, 11, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_SMMU_FD, "smmu_fd", "mout_aclk_266_sub",
+ GATE_IP_ISP0, 10, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_SMMU_DRC, "smmu_drc", "mout_aclk_266_sub",
+ GATE_IP_ISP0, 9, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_SMMU_ISP, "smmu_isp", "mout_aclk_266_sub",
+ GATE_IP_ISP0, 8, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_GICISP, "gicisp", "mout_aclk_266_sub",
+ GATE_IP_ISP0, 7, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_CSIS0, "csis0", "mout_aclk_266_sub",
+ GATE_IP_ISP0, 6, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_MCUISP, "mcuisp", "mout_aclk_266_sub",
+ GATE_IP_ISP0, 5, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_LITE1, "lite1", "mout_aclk_266_sub",
+ GATE_IP_ISP0, 4, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_LITE0, "lite0", "mout_aclk_266_sub",
+ GATE_IP_ISP0, 3, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_FD, "fd", "mout_aclk_266_sub",
+ GATE_IP_ISP0, 2, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_DRC, "drc", "mout_aclk_266_sub",
+ GATE_IP_ISP0, 1, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ISP, "isp", "mout_aclk_266_sub",
+ GATE_IP_ISP0, 0, CLK_IGNORE_UNUSED, 0),
+
+ /* GATE_IP_ISP1 */
+ GATE(CLK_QE_ISPCX, "qe_ispcx", "uart_isp_top",
+ GATE_IP_ISP0, 21, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_QE_SCALERP, "qe_scalerp", "uart_isp_top",
+ GATE_IP_ISP0, 20, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_QE_SCALERC, "qe_scalerc", "uart_isp_top",
+ GATE_IP_ISP0, 19, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_SMMU_SCALERP, "smmu_scalerp", "uart_isp_top",
+ GATE_IP_ISP0, 18, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_SMMU_SCALERC, "smmu_scalerc", "uart_isp_top",
+ GATE_IP_ISP0, 17, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_SCALERP, "scalerp", "uart_isp_top",
+ GATE_IP_ISP0, 16, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_SCALERC, "scalerc", "uart_isp_top",
+ GATE_IP_ISP0, 15, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_SPI1_ISP, "spi1_isp", "uart_isp_top",
+ GATE_IP_ISP0, 13, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_SPI0_ISP, "spi0_isp", "uart_isp_top",
+ GATE_IP_ISP0, 12, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_SMMU_ISPCX, "smmu_ispcx", "uart_isp_top",
+ GATE_IP_ISP0, 4, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ASYNCAXIM, "asyncaxim", "uart_isp_top",
+ GATE_IP_ISP0, 0, CLK_IGNORE_UNUSED, 0),
+
+ /* GATE_SCLK_ISP */
+ GATE(CLK_SCLK_MPWM_ISP, "sclk_mpwm_isp", "div_mpwm",
+ GATE_SCLK_ISP, 0, CLK_IGNORE_UNUSED, 0),
+};
+
+static struct samsung_cmu_info isp_cmu_info __initdata = {
+ .div_clks = isp_div_clks,
+ .nr_div_clks = ARRAY_SIZE(isp_div_clks),
+ .gate_clks = isp_gate_clks,
+ .nr_gate_clks = ARRAY_SIZE(isp_gate_clks),
+ .nr_clk_ids = NR_CLKS_ISP,
+};
+
+static int __init exynos3250_cmu_isp_probe(struct platform_device *pdev)
+{
+ struct device_node *np = pdev->dev.of_node;
+
+ samsung_cmu_register_one(np, &isp_cmu_info);
+ return 0;
+}
+
+static const struct of_device_id exynos3250_cmu_isp_of_match[] = {
+ { .compatible = "samsung,exynos3250-cmu-isp", },
+ { /* sentinel */ }
+};
+
+static struct platform_driver exynos3250_cmu_isp_driver = {
+ .driver = {
+ .name = "exynos3250-cmu-isp",
+ .of_match_table = exynos3250_cmu_isp_of_match,
+ },
+};
+
+static int __init exynos3250_cmu_platform_init(void)
+{
+ return platform_driver_probe(&exynos3250_cmu_isp_driver,
+ exynos3250_cmu_isp_probe);
+}
+subsys_initcall(exynos3250_cmu_platform_init);
+
diff --git a/drivers/clk/samsung/clk-exynos4.c b/drivers/clk/samsung/clk-exynos4.c
index 51462e8..714d6ba 100644
--- a/drivers/clk/samsung/clk-exynos4.c
+++ b/drivers/clk/samsung/clk-exynos4.c
@@ -1354,7 +1354,7 @@
VPLL_LOCK, VPLL_CON0, NULL),
};
-static void __init exynos4_core_down_clock(enum exynos4_soc soc)
+static void __init exynos4x12_core_down_clock(void)
{
unsigned int tmp;
@@ -1373,11 +1373,9 @@
__raw_writel(tmp, reg_base + PWR_CTRL1);
/*
- * Disable the clock up feature on Exynos4x12, in case it was
- * enabled by bootloader.
+ * Disable the clock up feature in case it was enabled by bootloader.
*/
- if (exynos4_soc == EXYNOS4X12)
- __raw_writel(0x0, reg_base + E4X12_PWR_CTRL2);
+ __raw_writel(0x0, reg_base + E4X12_PWR_CTRL2);
}
/* register exynos4 clocks */
@@ -1474,7 +1472,8 @@
samsung_clk_register_alias(ctx, exynos4_aliases,
ARRAY_SIZE(exynos4_aliases));
- exynos4_core_down_clock(soc);
+ if (soc == EXYNOS4X12)
+ exynos4x12_core_down_clock();
exynos4_clk_sleep_init();
samsung_clk_of_add_provider(np, ctx);
diff --git a/drivers/clk/samsung/clk-exynos5433.c b/drivers/clk/samsung/clk-exynos5433.c
new file mode 100644
index 0000000..387e3e3
--- /dev/null
+++ b/drivers/clk/samsung/clk-exynos5433.c
@@ -0,0 +1,5423 @@
+/*
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Chanwoo Choi <cw00.choi@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * Common Clock Framework support for Exynos5433 SoC.
+ */
+
+#include <linux/clk.h>
+#include <linux/clkdev.h>
+#include <linux/clk-provider.h>
+#include <linux/of.h>
+
+#include <dt-bindings/clock/exynos5433.h>
+
+#include "clk.h"
+#include "clk-pll.h"
+
+/*
+ * Register offset definitions for CMU_TOP
+ */
+#define ISP_PLL_LOCK 0x0000
+#define AUD_PLL_LOCK 0x0004
+#define ISP_PLL_CON0 0x0100
+#define ISP_PLL_CON1 0x0104
+#define ISP_PLL_FREQ_DET 0x0108
+#define AUD_PLL_CON0 0x0110
+#define AUD_PLL_CON1 0x0114
+#define AUD_PLL_CON2 0x0118
+#define AUD_PLL_FREQ_DET 0x011c
+#define MUX_SEL_TOP0 0x0200
+#define MUX_SEL_TOP1 0x0204
+#define MUX_SEL_TOP2 0x0208
+#define MUX_SEL_TOP3 0x020c
+#define MUX_SEL_TOP4 0x0210
+#define MUX_SEL_TOP_MSCL 0x0220
+#define MUX_SEL_TOP_CAM1 0x0224
+#define MUX_SEL_TOP_DISP 0x0228
+#define MUX_SEL_TOP_FSYS0 0x0230
+#define MUX_SEL_TOP_FSYS1 0x0234
+#define MUX_SEL_TOP_PERIC0 0x0238
+#define MUX_SEL_TOP_PERIC1 0x023c
+#define MUX_ENABLE_TOP0 0x0300
+#define MUX_ENABLE_TOP1 0x0304
+#define MUX_ENABLE_TOP2 0x0308
+#define MUX_ENABLE_TOP3 0x030c
+#define MUX_ENABLE_TOP4 0x0310
+#define MUX_ENABLE_TOP_MSCL 0x0320
+#define MUX_ENABLE_TOP_CAM1 0x0324
+#define MUX_ENABLE_TOP_DISP 0x0328
+#define MUX_ENABLE_TOP_FSYS0 0x0330
+#define MUX_ENABLE_TOP_FSYS1 0x0334
+#define MUX_ENABLE_TOP_PERIC0 0x0338
+#define MUX_ENABLE_TOP_PERIC1 0x033c
+#define MUX_STAT_TOP0 0x0400
+#define MUX_STAT_TOP1 0x0404
+#define MUX_STAT_TOP2 0x0408
+#define MUX_STAT_TOP3 0x040c
+#define MUX_STAT_TOP4 0x0410
+#define MUX_STAT_TOP_MSCL 0x0420
+#define MUX_STAT_TOP_CAM1 0x0424
+#define MUX_STAT_TOP_FSYS0 0x0430
+#define MUX_STAT_TOP_FSYS1 0x0434
+#define MUX_STAT_TOP_PERIC0 0x0438
+#define MUX_STAT_TOP_PERIC1 0x043c
+#define DIV_TOP0 0x0600
+#define DIV_TOP1 0x0604
+#define DIV_TOP2 0x0608
+#define DIV_TOP3 0x060c
+#define DIV_TOP4 0x0610
+#define DIV_TOP_MSCL 0x0618
+#define DIV_TOP_CAM10 0x061c
+#define DIV_TOP_CAM11 0x0620
+#define DIV_TOP_FSYS0 0x062c
+#define DIV_TOP_FSYS1 0x0630
+#define DIV_TOP_FSYS2 0x0634
+#define DIV_TOP_PERIC0 0x0638
+#define DIV_TOP_PERIC1 0x063c
+#define DIV_TOP_PERIC2 0x0640
+#define DIV_TOP_PERIC3 0x0644
+#define DIV_TOP_PERIC4 0x0648
+#define DIV_TOP_PLL_FREQ_DET 0x064c
+#define DIV_STAT_TOP0 0x0700
+#define DIV_STAT_TOP1 0x0704
+#define DIV_STAT_TOP2 0x0708
+#define DIV_STAT_TOP3 0x070c
+#define DIV_STAT_TOP4 0x0710
+#define DIV_STAT_TOP_MSCL 0x0718
+#define DIV_STAT_TOP_CAM10 0x071c
+#define DIV_STAT_TOP_CAM11 0x0720
+#define DIV_STAT_TOP_FSYS0 0x072c
+#define DIV_STAT_TOP_FSYS1 0x0730
+#define DIV_STAT_TOP_FSYS2 0x0734
+#define DIV_STAT_TOP_PERIC0 0x0738
+#define DIV_STAT_TOP_PERIC1 0x073c
+#define DIV_STAT_TOP_PERIC2 0x0740
+#define DIV_STAT_TOP_PERIC3 0x0744
+#define DIV_STAT_TOP_PLL_FREQ_DET 0x074c
+#define ENABLE_ACLK_TOP 0x0800
+#define ENABLE_SCLK_TOP 0x0a00
+#define ENABLE_SCLK_TOP_MSCL 0x0a04
+#define ENABLE_SCLK_TOP_CAM1 0x0a08
+#define ENABLE_SCLK_TOP_DISP 0x0a0c
+#define ENABLE_SCLK_TOP_FSYS 0x0a10
+#define ENABLE_SCLK_TOP_PERIC 0x0a14
+#define ENABLE_IP_TOP 0x0b00
+#define ENABLE_CMU_TOP 0x0c00
+#define ENABLE_CMU_TOP_DIV_STAT 0x0c04
+
+static unsigned long top_clk_regs[] __initdata = {
+ ISP_PLL_LOCK,
+ AUD_PLL_LOCK,
+ ISP_PLL_CON0,
+ ISP_PLL_CON1,
+ ISP_PLL_FREQ_DET,
+ AUD_PLL_CON0,
+ AUD_PLL_CON1,
+ AUD_PLL_CON2,
+ AUD_PLL_FREQ_DET,
+ MUX_SEL_TOP0,
+ MUX_SEL_TOP1,
+ MUX_SEL_TOP2,
+ MUX_SEL_TOP3,
+ MUX_SEL_TOP4,
+ MUX_SEL_TOP_MSCL,
+ MUX_SEL_TOP_CAM1,
+ MUX_SEL_TOP_DISP,
+ MUX_SEL_TOP_FSYS0,
+ MUX_SEL_TOP_FSYS1,
+ MUX_SEL_TOP_PERIC0,
+ MUX_SEL_TOP_PERIC1,
+ MUX_ENABLE_TOP0,
+ MUX_ENABLE_TOP1,
+ MUX_ENABLE_TOP2,
+ MUX_ENABLE_TOP3,
+ MUX_ENABLE_TOP4,
+ MUX_ENABLE_TOP_MSCL,
+ MUX_ENABLE_TOP_CAM1,
+ MUX_ENABLE_TOP_DISP,
+ MUX_ENABLE_TOP_FSYS0,
+ MUX_ENABLE_TOP_FSYS1,
+ MUX_ENABLE_TOP_PERIC0,
+ MUX_ENABLE_TOP_PERIC1,
+ MUX_STAT_TOP0,
+ MUX_STAT_TOP1,
+ MUX_STAT_TOP2,
+ MUX_STAT_TOP3,
+ MUX_STAT_TOP4,
+ MUX_STAT_TOP_MSCL,
+ MUX_STAT_TOP_CAM1,
+ MUX_STAT_TOP_FSYS0,
+ MUX_STAT_TOP_FSYS1,
+ MUX_STAT_TOP_PERIC0,
+ MUX_STAT_TOP_PERIC1,
+ DIV_TOP0,
+ DIV_TOP1,
+ DIV_TOP2,
+ DIV_TOP3,
+ DIV_TOP4,
+ DIV_TOP_MSCL,
+ DIV_TOP_CAM10,
+ DIV_TOP_CAM11,
+ DIV_TOP_FSYS0,
+ DIV_TOP_FSYS1,
+ DIV_TOP_FSYS2,
+ DIV_TOP_PERIC0,
+ DIV_TOP_PERIC1,
+ DIV_TOP_PERIC2,
+ DIV_TOP_PERIC3,
+ DIV_TOP_PERIC4,
+ DIV_TOP_PLL_FREQ_DET,
+ DIV_STAT_TOP0,
+ DIV_STAT_TOP1,
+ DIV_STAT_TOP2,
+ DIV_STAT_TOP3,
+ DIV_STAT_TOP4,
+ DIV_STAT_TOP_MSCL,
+ DIV_STAT_TOP_CAM10,
+ DIV_STAT_TOP_CAM11,
+ DIV_STAT_TOP_FSYS0,
+ DIV_STAT_TOP_FSYS1,
+ DIV_STAT_TOP_FSYS2,
+ DIV_STAT_TOP_PERIC0,
+ DIV_STAT_TOP_PERIC1,
+ DIV_STAT_TOP_PERIC2,
+ DIV_STAT_TOP_PERIC3,
+ DIV_STAT_TOP_PLL_FREQ_DET,
+ ENABLE_ACLK_TOP,
+ ENABLE_SCLK_TOP,
+ ENABLE_SCLK_TOP_MSCL,
+ ENABLE_SCLK_TOP_CAM1,
+ ENABLE_SCLK_TOP_DISP,
+ ENABLE_SCLK_TOP_FSYS,
+ ENABLE_SCLK_TOP_PERIC,
+ ENABLE_IP_TOP,
+ ENABLE_CMU_TOP,
+ ENABLE_CMU_TOP_DIV_STAT,
+};
+
+/* list of all parent clocks */
+PNAME(mout_aud_pll_p) = { "oscclk", "fout_aud_pll", };
+PNAME(mout_isp_pll_p) = { "oscclk", "fout_isp_pll", };
+PNAME(mout_aud_pll_user_p) = { "oscclk", "mout_aud_pll", };
+PNAME(mout_mphy_pll_user_p) = { "oscclk", "sclk_mphy_pll", };
+PNAME(mout_mfc_pll_user_p) = { "oscclk", "sclk_mfc_pll", };
+PNAME(mout_bus_pll_user_p) = { "oscclk", "sclk_bus_pll", };
+PNAME(mout_bus_pll_user_t_p) = { "oscclk", "mout_bus_pll_user", };
+PNAME(mout_mphy_pll_user_t_p) = { "oscclk", "mout_mphy_pll_user", };
+
+PNAME(mout_bus_mfc_pll_user_p) = { "mout_bus_pll_user", "mout_mfc_pll_user",};
+PNAME(mout_mfc_bus_pll_user_p) = { "mout_mfc_pll_user", "mout_bus_pll_user",};
+PNAME(mout_aclk_cam1_552_b_p) = { "mout_aclk_cam1_552_a",
+ "mout_mfc_pll_user", };
+PNAME(mout_aclk_cam1_552_a_p) = { "mout_isp_pll", "mout_bus_pll_user", };
+
+PNAME(mout_aclk_mfc_400_c_p) = { "mout_aclk_mfc_400_b",
+ "mout_mphy_pll_user", };
+PNAME(mout_aclk_mfc_400_b_p) = { "mout_aclk_mfc_400_a",
+ "mout_bus_pll_user", };
+PNAME(mout_aclk_mfc_400_a_p) = { "mout_mfc_pll_user", "mout_isp_pll", };
+
+PNAME(mout_bus_mphy_pll_user_p) = { "mout_bus_pll_user",
+ "mout_mphy_pll_user", };
+PNAME(mout_aclk_mscl_b_p) = { "mout_aclk_mscl_400_a",
+ "mout_mphy_pll_user", };
+PNAME(mout_aclk_g2d_400_b_p) = { "mout_aclk_g2d_400_a",
+ "mout_mphy_pll_user", };
+
+PNAME(mout_sclk_jpeg_c_p) = { "mout_sclk_jpeg_b", "mout_mphy_pll_user",};
+PNAME(mout_sclk_jpeg_b_p) = { "mout_sclk_jpeg_a", "mout_mfc_pll_user", };
+
+PNAME(mout_sclk_mmc2_b_p) = { "mout_sclk_mmc2_a", "mout_mfc_pll_user",};
+PNAME(mout_sclk_mmc1_b_p) = { "mout_sclk_mmc1_a", "mout_mfc_pll_user",};
+PNAME(mout_sclk_mmc0_d_p) = { "mout_sclk_mmc0_c", "mout_isp_pll", };
+PNAME(mout_sclk_mmc0_c_p) = { "mout_sclk_mmc0_b", "mout_mphy_pll_user",};
+PNAME(mout_sclk_mmc0_b_p) = { "mout_sclk_mmc0_a", "mout_mfc_pll_user", };
+
+PNAME(mout_sclk_spdif_p) = { "sclk_audio0", "sclk_audio1",
+ "oscclk", "ioclk_spdif_extclk", };
+PNAME(mout_sclk_audio1_p) = { "ioclk_audiocdclk1", "oscclk",
+ "mout_aud_pll_user_t",};
+PNAME(mout_sclk_audio0_p) = { "ioclk_audiocdclk0", "oscclk",
+ "mout_aud_pll_user_t",};
+
+PNAME(mout_sclk_hdmi_spdif_p) = { "sclk_audio1", "ioclk_spdif_extclk", };
+
+static struct samsung_fixed_factor_clock top_fixed_factor_clks[] __initdata = {
+ FFACTOR(0, "oscclk_efuse_common", "oscclk", 1, 1, 0),
+};
+
+static struct samsung_fixed_rate_clock top_fixed_clks[] __initdata = {
+ /* Xi2s{0|1}CDCLK input clock for I2S/PCM */
+ FRATE(0, "ioclk_audiocdclk1", NULL, CLK_IS_ROOT, 100000000),
+ FRATE(0, "ioclk_audiocdclk0", NULL, CLK_IS_ROOT, 100000000),
+ /* Xi2s1SDI input clock for SPDIF */
+ FRATE(0, "ioclk_spdif_extclk", NULL, CLK_IS_ROOT, 100000000),
+ /* XspiCLK[4:0] input clock for SPI */
+ FRATE(0, "ioclk_spi4_clk_in", NULL, CLK_IS_ROOT, 50000000),
+ FRATE(0, "ioclk_spi3_clk_in", NULL, CLK_IS_ROOT, 50000000),
+ FRATE(0, "ioclk_spi2_clk_in", NULL, CLK_IS_ROOT, 50000000),
+ FRATE(0, "ioclk_spi1_clk_in", NULL, CLK_IS_ROOT, 50000000),
+ FRATE(0, "ioclk_spi0_clk_in", NULL, CLK_IS_ROOT, 50000000),
+ /* Xi2s1SCLK input clock for I2S1_BCLK */
+ FRATE(0, "ioclk_i2s1_bclk_in", NULL, CLK_IS_ROOT, 12288000),
+};
+
+static struct samsung_mux_clock top_mux_clks[] __initdata = {
+ /* MUX_SEL_TOP0 */
+ MUX(CLK_MOUT_AUD_PLL, "mout_aud_pll", mout_aud_pll_p, MUX_SEL_TOP0,
+ 4, 1),
+ MUX(CLK_MOUT_ISP_PLL, "mout_isp_pll", mout_isp_pll_p, MUX_SEL_TOP0,
+ 0, 1),
+
+ /* MUX_SEL_TOP1 */
+ MUX(CLK_MOUT_AUD_PLL_USER_T, "mout_aud_pll_user_t",
+ mout_aud_pll_user_p, MUX_SEL_TOP1, 12, 1),
+ MUX(CLK_MOUT_MPHY_PLL_USER, "mout_mphy_pll_user", mout_mphy_pll_user_p,
+ MUX_SEL_TOP1, 8, 1),
+ MUX(CLK_MOUT_MFC_PLL_USER, "mout_mfc_pll_user", mout_mfc_pll_user_p,
+ MUX_SEL_TOP1, 4, 1),
+ MUX(CLK_MOUT_BUS_PLL_USER, "mout_bus_pll_user", mout_bus_pll_user_p,
+ MUX_SEL_TOP1, 0, 1),
+
+ /* MUX_SEL_TOP2 */
+ MUX(CLK_MOUT_ACLK_HEVC_400, "mout_aclk_hevc_400",
+ mout_bus_mfc_pll_user_p, MUX_SEL_TOP2, 28, 1),
+ MUX(CLK_MOUT_ACLK_CAM1_333, "mout_aclk_cam1_333",
+ mout_mfc_bus_pll_user_p, MUX_SEL_TOP2, 16, 1),
+ MUX(CLK_MOUT_ACLK_CAM1_552_B, "mout_aclk_cam1_552_b",
+ mout_aclk_cam1_552_b_p, MUX_SEL_TOP2, 12, 1),
+ MUX(CLK_MOUT_ACLK_CAM1_552_A, "mout_aclk_cam1_552_a",
+ mout_aclk_cam1_552_a_p, MUX_SEL_TOP2, 8, 1),
+ MUX(CLK_MOUT_ACLK_ISP_DIS_400, "mout_aclk_isp_dis_400",
+ mout_bus_mfc_pll_user_p, MUX_SEL_TOP2, 4, 1),
+ MUX(CLK_MOUT_ACLK_ISP_400, "mout_aclk_isp_400",
+ mout_bus_mfc_pll_user_p, MUX_SEL_TOP2, 0, 1),
+
+ /* MUX_SEL_TOP3 */
+ MUX(CLK_MOUT_ACLK_BUS0_400, "mout_aclk_bus0_400",
+ mout_bus_mphy_pll_user_p, MUX_SEL_TOP3, 20, 1),
+ MUX(CLK_MOUT_ACLK_MSCL_400_B, "mout_aclk_mscl_400_b",
+ mout_aclk_mscl_b_p, MUX_SEL_TOP3, 16, 1),
+ MUX(CLK_MOUT_ACLK_MSCL_400_A, "mout_aclk_mscl_400_a",
+ mout_bus_mfc_pll_user_p, MUX_SEL_TOP3, 12, 1),
+ MUX(CLK_MOUT_ACLK_GSCL_333, "mout_aclk_gscl_333",
+ mout_mfc_bus_pll_user_p, MUX_SEL_TOP3, 8, 1),
+ MUX(CLK_MOUT_ACLK_G2D_400_B, "mout_aclk_g2d_400_b",
+ mout_aclk_g2d_400_b_p, MUX_SEL_TOP3, 4, 1),
+ MUX(CLK_MOUT_ACLK_G2D_400_A, "mout_aclk_g2d_400_a",
+ mout_bus_mfc_pll_user_p, MUX_SEL_TOP3, 0, 1),
+
+ /* MUX_SEL_TOP4 */
+ MUX(CLK_MOUT_ACLK_MFC_400_C, "mout_aclk_mfc_400_c",
+ mout_aclk_mfc_400_c_p, MUX_SEL_TOP4, 8, 1),
+ MUX(CLK_MOUT_ACLK_MFC_400_B, "mout_aclk_mfc_400_b",
+ mout_aclk_mfc_400_b_p, MUX_SEL_TOP4, 4, 1),
+ MUX(CLK_MOUT_ACLK_MFC_400_A, "mout_aclk_mfc_400_a",
+ mout_aclk_mfc_400_a_p, MUX_SEL_TOP4, 0, 1),
+
+ /* MUX_SEL_TOP_MSCL */
+ MUX(CLK_MOUT_SCLK_JPEG_C, "mout_sclk_jpeg_c", mout_sclk_jpeg_c_p,
+ MUX_SEL_TOP_MSCL, 8, 1),
+ MUX(CLK_MOUT_SCLK_JPEG_B, "mout_sclk_jpeg_b", mout_sclk_jpeg_b_p,
+ MUX_SEL_TOP_MSCL, 4, 1),
+ MUX(CLK_MOUT_SCLK_JPEG_A, "mout_sclk_jpeg_a", mout_bus_pll_user_t_p,
+ MUX_SEL_TOP_MSCL, 0, 1),
+
+ /* MUX_SEL_TOP_CAM1 */
+ MUX(CLK_MOUT_SCLK_ISP_SENSOR2, "mout_sclk_isp_sensor2",
+ mout_bus_pll_user_t_p, MUX_SEL_TOP_CAM1, 24, 1),
+ MUX(CLK_MOUT_SCLK_ISP_SENSOR1, "mout_sclk_isp_sensor1",
+ mout_bus_pll_user_t_p, MUX_SEL_TOP_CAM1, 20, 1),
+ MUX(CLK_MOUT_SCLK_ISP_SENSOR0, "mout_sclk_isp_sensor0",
+ mout_bus_pll_user_t_p, MUX_SEL_TOP_CAM1, 16, 1),
+ MUX(CLK_MOUT_SCLK_ISP_UART, "mout_sclk_isp_uart",
+ mout_bus_pll_user_t_p, MUX_SEL_TOP_CAM1, 8, 1),
+ MUX(CLK_MOUT_SCLK_ISP_SPI1, "mout_sclk_isp_spi1",
+ mout_bus_pll_user_t_p, MUX_SEL_TOP_CAM1, 4, 1),
+ MUX(CLK_MOUT_SCLK_ISP_SPI0, "mout_sclk_isp_spi0",
+ mout_bus_pll_user_t_p, MUX_SEL_TOP_CAM1, 0, 1),
+
+ /* MUX_SEL_TOP_FSYS0 */
+ MUX(CLK_MOUT_SCLK_MMC2_B, "mout_sclk_mmc2_b", mout_sclk_mmc2_b_p,
+ MUX_SEL_TOP_FSYS0, 28, 1),
+ MUX(CLK_MOUT_SCLK_MMC2_A, "mout_sclk_mmc2_a", mout_bus_pll_user_t_p,
+ MUX_SEL_TOP_FSYS0, 24, 1),
+ MUX(CLK_MOUT_SCLK_MMC1_B, "mout_sclk_mmc1_b", mout_sclk_mmc1_b_p,
+ MUX_SEL_TOP_FSYS0, 20, 1),
+ MUX(CLK_MOUT_SCLK_MMC1_A, "mout_sclk_mmc1_a", mout_bus_pll_user_t_p,
+ MUX_SEL_TOP_FSYS0, 16, 1),
+ MUX(CLK_MOUT_SCLK_MMC0_D, "mout_sclk_mmc0_d", mout_sclk_mmc0_d_p,
+ MUX_SEL_TOP_FSYS0, 12, 1),
+ MUX(CLK_MOUT_SCLK_MMC0_C, "mout_sclk_mmc0_c", mout_sclk_mmc0_c_p,
+ MUX_SEL_TOP_FSYS0, 8, 1),
+ MUX(CLK_MOUT_SCLK_MMC0_B, "mout_sclk_mmc0_b", mout_sclk_mmc0_b_p,
+ MUX_SEL_TOP_FSYS0, 4, 1),
+ MUX(CLK_MOUT_SCLK_MMC0_A, "mout_sclk_mmc0_a", mout_bus_pll_user_t_p,
+ MUX_SEL_TOP_FSYS0, 0, 1),
+
+ /* MUX_SEL_TOP_FSYS1 */
+ MUX(CLK_MOUT_SCLK_PCIE_100, "mout_sclk_pcie_100", mout_bus_pll_user_t_p,
+ MUX_SEL_TOP_FSYS1, 12, 1),
+ MUX(CLK_MOUT_SCLK_UFSUNIPRO, "mout_sclk_ufsunipro",
+ mout_mphy_pll_user_t_p, MUX_SEL_TOP_FSYS1, 8, 1),
+ MUX(CLK_MOUT_SCLK_USBHOST30, "mout_sclk_usbhost30",
+ mout_bus_pll_user_t_p, MUX_SEL_TOP_FSYS1, 4, 1),
+ MUX(CLK_MOUT_SCLK_USBDRD30, "mout_sclk_usbdrd30",
+ mout_bus_pll_user_t_p, MUX_SEL_TOP_FSYS1, 0, 1),
+
+ /* MUX_SEL_TOP_PERIC0 */
+ MUX(CLK_MOUT_SCLK_SPI4, "mout_sclk_spi4", mout_bus_pll_user_t_p,
+ MUX_SEL_TOP_PERIC0, 28, 1),
+ MUX(CLK_MOUT_SCLK_SPI3, "mout_sclk_spi3", mout_bus_pll_user_t_p,
+ MUX_SEL_TOP_PERIC0, 24, 1),
+ MUX(CLK_MOUT_SCLK_UART2, "mout_sclk_uart2", mout_bus_pll_user_t_p,
+ MUX_SEL_TOP_PERIC0, 20, 1),
+ MUX(CLK_MOUT_SCLK_UART1, "mout_sclk_uart1", mout_bus_pll_user_t_p,
+ MUX_SEL_TOP_PERIC0, 16, 1),
+ MUX(CLK_MOUT_SCLK_UART0, "mout_sclk_uart0", mout_bus_pll_user_t_p,
+ MUX_SEL_TOP_PERIC0, 12, 1),
+ MUX(CLK_MOUT_SCLK_SPI2, "mout_sclk_spi2", mout_bus_pll_user_t_p,
+ MUX_SEL_TOP_PERIC0, 8, 1),
+ MUX(CLK_MOUT_SCLK_SPI1, "mout_sclk_spi1", mout_bus_pll_user_t_p,
+ MUX_SEL_TOP_PERIC0, 4, 1),
+ MUX(CLK_MOUT_SCLK_SPI0, "mout_sclk_spi0", mout_bus_pll_user_t_p,
+ MUX_SEL_TOP_PERIC0, 0, 1),
+
+ /* MUX_SEL_TOP_PERIC1 */
+ MUX(CLK_MOUT_SCLK_SLIMBUS, "mout_sclk_slimbus", mout_aud_pll_user_p,
+ MUX_SEL_TOP_PERIC1, 16, 1),
+ MUX(CLK_MOUT_SCLK_SPDIF, "mout_sclk_spdif", mout_sclk_spdif_p,
+ MUX_SEL_TOP_PERIC1, 12, 2),
+ MUX(CLK_MOUT_SCLK_AUDIO1, "mout_sclk_audio1", mout_sclk_audio1_p,
+ MUX_SEL_TOP_PERIC1, 4, 2),
+ MUX(CLK_MOUT_SCLK_AUDIO0, "mout_sclk_audio0", mout_sclk_audio0_p,
+ MUX_SEL_TOP_PERIC1, 0, 2),
+
+ /* MUX_SEL_TOP_DISP */
+ MUX(CLK_MOUT_SCLK_HDMI_SPDIF, "mout_sclk_hdmi_spdif",
+ mout_sclk_hdmi_spdif_p, MUX_SEL_TOP_DISP, 0, 1),
+};
+
+static struct samsung_div_clock top_div_clks[] __initdata = {
+ /* DIV_TOP0 */
+ DIV(CLK_DIV_ACLK_CAM1_333, "div_aclk_cam1_333", "mout_aclk_cam1_333",
+ DIV_TOP0, 28, 3),
+ DIV(CLK_DIV_ACLK_CAM1_400, "div_aclk_cam1_400", "mout_bus_pll_user",
+ DIV_TOP0, 24, 3),
+ DIV(CLK_DIV_ACLK_CAM1_552, "div_aclk_cam1_552", "mout_aclk_cam1_552_b",
+ DIV_TOP0, 20, 3),
+ DIV(CLK_DIV_ACLK_CAM0_333, "div_aclk_cam0_333", "mout_mfc_pll_user",
+ DIV_TOP0, 16, 3),
+ DIV(CLK_DIV_ACLK_CAM0_400, "div_aclk_cam0_400", "mout_bus_pll_user",
+ DIV_TOP0, 12, 3),
+ DIV(CLK_DIV_ACLK_CAM0_552, "div_aclk_cam0_552", "mout_isp_pll",
+ DIV_TOP0, 8, 3),
+ DIV(CLK_DIV_ACLK_ISP_DIS_400, "div_aclk_isp_dis_400",
+ "mout_aclk_isp_dis_400", DIV_TOP0, 4, 4),
+ DIV(CLK_DIV_ACLK_ISP_400, "div_aclk_isp_400",
+ "mout_aclk_isp_400", DIV_TOP0, 0, 4),
+
+ /* DIV_TOP1 */
+ DIV(CLK_DIV_ACLK_GSCL_111, "div_aclk_gscl_111", "mout_aclk_gscl_333",
+ DIV_TOP1, 28, 3),
+ DIV(CLK_DIV_ACLK_GSCL_333, "div_aclk_gscl_333", "mout_aclk_gscl_333",
+ DIV_TOP1, 24, 3),
+ DIV(CLK_DIV_ACLK_HEVC_400, "div_aclk_hevc_400", "mout_aclk_hevc_400",
+ DIV_TOP1, 20, 3),
+ DIV(CLK_DIV_ACLK_MFC_400, "div_aclk_mfc_400", "mout_aclk_mfc_400_c",
+ DIV_TOP1, 12, 3),
+ DIV(CLK_DIV_ACLK_G2D_266, "div_aclk_g2d_266", "mout_bus_pll_user",
+ DIV_TOP1, 8, 3),
+ DIV(CLK_DIV_ACLK_G2D_400, "div_aclk_g2d_400", "mout_aclk_g2d_400_b",
+ DIV_TOP1, 0, 3),
+
+ /* DIV_TOP2 */
+ DIV(CLK_DIV_ACLK_MSCL_400, "div_aclk_mscl_400", "mout_aclk_mscl_400_b",
+ DIV_TOP2, 4, 3),
+ DIV(CLK_DIV_ACLK_FSYS_200, "div_aclk_fsys_200", "mout_bus_pll_user",
+ DIV_TOP2, 0, 3),
+
+ /* DIV_TOP3 */
+ DIV(CLK_DIV_ACLK_IMEM_SSSX_266, "div_aclk_imem_sssx_266",
+ "mout_bus_pll_user", DIV_TOP3, 24, 3),
+ DIV(CLK_DIV_ACLK_IMEM_200, "div_aclk_imem_200",
+ "mout_bus_pll_user", DIV_TOP3, 20, 3),
+ DIV(CLK_DIV_ACLK_IMEM_266, "div_aclk_imem_266",
+ "mout_bus_pll_user", DIV_TOP3, 16, 3),
+ DIV(CLK_DIV_ACLK_PERIC_66_B, "div_aclk_peric_66_b",
+ "div_aclk_peric_66_a", DIV_TOP3, 12, 3),
+ DIV(CLK_DIV_ACLK_PERIC_66_A, "div_aclk_peric_66_a",
+ "mout_bus_pll_user", DIV_TOP3, 8, 3),
+ DIV(CLK_DIV_ACLK_PERIS_66_B, "div_aclk_peris_66_b",
+ "div_aclk_peris_66_a", DIV_TOP3, 4, 3),
+ DIV(CLK_DIV_ACLK_PERIS_66_A, "div_aclk_peris_66_a",
+ "mout_bus_pll_user", DIV_TOP3, 0, 3),
+
+ /* DIV_TOP4 */
+ DIV(CLK_DIV_ACLK_G3D_400, "div_aclk_g3d_400", "mout_bus_pll_user",
+ DIV_TOP4, 8, 3),
+ DIV(CLK_DIV_ACLK_BUS0_400, "div_aclk_bus0_400", "mout_aclk_bus0_400",
+ DIV_TOP4, 4, 3),
+ DIV(CLK_DIV_ACLK_BUS1_400, "div_aclk_bus1_400", "mout_bus_pll_user",
+ DIV_TOP4, 0, 3),
+
+ /* DIV_TOP_MSCL */
+ DIV(CLK_DIV_SCLK_JPEG, "div_sclk_jpeg", "mout_sclk_jpeg_c",
+ DIV_TOP_MSCL, 0, 4),
+
+ /* DIV_TOP_CAM10 */
+ DIV(CLK_DIV_SCLK_ISP_UART, "div_sclk_isp_uart", "mout_sclk_isp_uart",
+ DIV_TOP_CAM10, 24, 5),
+ DIV(CLK_DIV_SCLK_ISP_SPI1_B, "div_sclk_isp_spi1_b",
+ "div_sclk_isp_spi1_a", DIV_TOP_CAM10, 16, 8),
+ DIV(CLK_DIV_SCLK_ISP_SPI1_A, "div_sclk_isp_spi1_a",
+ "mout_sclk_isp_spi1", DIV_TOP_CAM10, 12, 4),
+ DIV(CLK_DIV_SCLK_ISP_SPI0_B, "div_sclk_isp_spi0_b",
+ "div_sclk_isp_spi0_a", DIV_TOP_CAM10, 4, 8),
+ DIV(CLK_DIV_SCLK_ISP_SPI0_A, "div_sclk_isp_spi0_a",
+ "mout_sclk_isp_spi0", DIV_TOP_CAM10, 0, 4),
+
+ /* DIV_TOP_CAM11 */
+ DIV(CLK_DIV_SCLK_ISP_SENSOR2_B, "div_sclk_isp_sensor2_b",
+ "div_sclk_isp_sensor2_a", DIV_TOP_CAM11, 20, 4),
+ DIV(CLK_DIV_SCLK_ISP_SENSOR2_A, "div_sclk_isp_sensor2_a",
+ "mout_sclk_isp_sensor2", DIV_TOP_CAM11, 16, 4),
+ DIV(CLK_DIV_SCLK_ISP_SENSOR1_B, "div_sclk_isp_sensor1_b",
+ "div_sclk_isp_sensor1_a", DIV_TOP_CAM11, 12, 4),
+ DIV(CLK_DIV_SCLK_ISP_SENSOR1_A, "div_sclk_isp_sensor1_a",
+ "mout_sclk_isp_sensor1", DIV_TOP_CAM11, 8, 4),
+ DIV(CLK_DIV_SCLK_ISP_SENSOR0_B, "div_sclk_isp_sensor0_b",
+ "div_sclk_isp_sensor0_a", DIV_TOP_CAM11, 4, 4),
+ DIV(CLK_DIV_SCLK_ISP_SENSOR0_A, "div_sclk_isp_sensor0_a",
+ "mout_sclk_isp_sensor0", DIV_TOP_CAM11, 0, 4),
+
+ /* DIV_TOP_FSYS0 */
+ DIV(CLK_DIV_SCLK_MMC1_B, "div_sclk_mmc1_b", "div_sclk_mmc1_a",
+ DIV_TOP_FSYS0, 16, 8),
+ DIV(CLK_DIV_SCLK_MMC1_A, "div_sclk_mmc1_a", "mout_sclk_mmc1_b",
+ DIV_TOP_FSYS0, 12, 4),
+ DIV_F(CLK_DIV_SCLK_MMC0_B, "div_sclk_mmc0_b", "div_sclk_mmc0_a",
+ DIV_TOP_FSYS0, 4, 8, CLK_SET_RATE_PARENT, 0),
+ DIV_F(CLK_DIV_SCLK_MMC0_A, "div_sclk_mmc0_a", "mout_sclk_mmc0_d",
+ DIV_TOP_FSYS0, 0, 4, CLK_SET_RATE_PARENT, 0),
+
+ /* DIV_TOP_FSYS1 */
+ DIV(CLK_DIV_SCLK_MMC2_B, "div_sclk_mmc2_b", "div_sclk_mmc2_a",
+ DIV_TOP_FSYS1, 4, 8),
+ DIV(CLK_DIV_SCLK_MMC2_A, "div_sclk_mmc2_a", "mout_sclk_mmc2_b",
+ DIV_TOP_FSYS1, 0, 4),
+
+ /* DIV_TOP_FSYS2 */
+ DIV(CLK_DIV_SCLK_PCIE_100, "div_sclk_pcie_100", "mout_sclk_pcie_100",
+ DIV_TOP_FSYS2, 12, 3),
+ DIV(CLK_DIV_SCLK_USBHOST30, "div_sclk_usbhost30",
+ "mout_sclk_usbhost30", DIV_TOP_FSYS2, 8, 4),
+ DIV(CLK_DIV_SCLK_UFSUNIPRO, "div_sclk_ufsunipro",
+ "mout_sclk_ufsunipro", DIV_TOP_FSYS2, 4, 4),
+ DIV(CLK_DIV_SCLK_USBDRD30, "div_sclk_usbdrd30", "mout_sclk_usbdrd30",
+ DIV_TOP_FSYS2, 0, 4),
+
+ /* DIV_TOP_PERIC0 */
+ DIV(CLK_DIV_SCLK_SPI1_B, "div_sclk_spi1_b", "div_sclk_spi1_a",
+ DIV_TOP_PERIC0, 16, 8),
+ DIV(CLK_DIV_SCLK_SPI1_A, "div_sclk_spi1_a", "mout_sclk_spi1",
+ DIV_TOP_PERIC0, 12, 4),
+ DIV(CLK_DIV_SCLK_SPI0_B, "div_sclk_spi0_b", "div_sclk_spi0_a",
+ DIV_TOP_PERIC0, 4, 8),
+ DIV(CLK_DIV_SCLK_SPI0_A, "div_sclk_spi0_a", "mout_sclk_spi0",
+ DIV_TOP_PERIC0, 0, 4),
+
+ /* DIV_TOP_PERIC1 */
+ DIV(CLK_DIV_SCLK_SPI2_B, "div_sclk_spi2_b", "div_sclk_spi2_a",
+ DIV_TOP_PERIC1, 4, 8),
+ DIV(CLK_DIV_SCLK_SPI2_A, "div_sclk_spi2_a", "mout_sclk_spi2",
+ DIV_TOP_PERIC1, 0, 4),
+
+ /* DIV_TOP_PERIC2 */
+ DIV(CLK_DIV_SCLK_UART2, "div_sclk_uart2", "mout_sclk_uart2",
+ DIV_TOP_PERIC2, 8, 4),
+ DIV(CLK_DIV_SCLK_UART1, "div_sclk_uart1", "mout_sclk_uart0",
+ DIV_TOP_PERIC2, 4, 4),
+ DIV(CLK_DIV_SCLK_UART0, "div_sclk_uart0", "mout_sclk_uart1",
+ DIV_TOP_PERIC2, 0, 4),
+
+ /* DIV_TOP_PERIC3 */
+ DIV(CLK_DIV_SCLK_I2S1, "div_sclk_i2s1", "sclk_audio1",
+ DIV_TOP_PERIC3, 16, 6),
+ DIV(CLK_DIV_SCLK_PCM1, "div_sclk_pcm1", "sclk_audio1",
+ DIV_TOP_PERIC3, 8, 8),
+ DIV(CLK_DIV_SCLK_AUDIO1, "div_sclk_audio1", "mout_sclk_audio1",
+ DIV_TOP_PERIC3, 4, 4),
+ DIV(CLK_DIV_SCLK_AUDIO0, "div_sclk_audio0", "mout_sclk_audio0",
+ DIV_TOP_PERIC3, 0, 4),
+
+ /* DIV_TOP_PERIC4 */
+ DIV(CLK_DIV_SCLK_SPI4_B, "div_sclk_spi4_b", "div_sclk_spi4_a",
+ DIV_TOP_PERIC4, 16, 8),
+ DIV(CLK_DIV_SCLK_SPI4_A, "div_sclk_spi4_a", "mout_sclk_spi4",
+ DIV_TOP_PERIC4, 12, 4),
+ DIV(CLK_DIV_SCLK_SPI3_B, "div_sclk_spi3_b", "div_sclk_spi3_a",
+ DIV_TOP_PERIC4, 4, 8),
+ DIV(CLK_DIV_SCLK_SPI3_A, "div_sclk_spi3_a", "mout_sclk_spi3",
+ DIV_TOP_PERIC4, 0, 4),
+};
+
+static struct samsung_gate_clock top_gate_clks[] __initdata = {
+ /* ENABLE_ACLK_TOP */
+ GATE(CLK_ACLK_G3D_400, "aclk_g3d_400", "div_aclk_g3d_400",
+ ENABLE_ACLK_TOP, 30, 0, 0),
+ GATE(CLK_ACLK_IMEM_SSX_266, "aclk_imem_ssx_266",
+ "div_aclk_imem_sssx_266", ENABLE_ACLK_TOP,
+ 29, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_BUS0_400, "aclk_bus0_400", "div_aclk_bus0_400",
+ ENABLE_ACLK_TOP, 26,
+ CLK_IGNORE_UNUSED | CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_ACLK_BUS1_400, "aclk_bus1_400", "div_aclk_bus1_400",
+ ENABLE_ACLK_TOP, 25,
+ CLK_IGNORE_UNUSED | CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_ACLK_IMEM_200, "aclk_imem_200", "div_aclk_imem_266",
+ ENABLE_ACLK_TOP, 24,
+ CLK_IGNORE_UNUSED | CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_ACLK_IMEM_266, "aclk_imem_266", "div_aclk_imem_200",
+ ENABLE_ACLK_TOP, 23,
+ CLK_IGNORE_UNUSED | CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_ACLK_PERIC_66, "aclk_peric_66", "div_aclk_peric_66_b",
+ ENABLE_ACLK_TOP, 22,
+ CLK_SET_RATE_PARENT | CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_PERIS_66, "aclk_peris_66", "div_aclk_peris_66_b",
+ ENABLE_ACLK_TOP, 21,
+ CLK_SET_RATE_PARENT | CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_MSCL_400, "aclk_mscl_400", "div_aclk_mscl_400",
+ ENABLE_ACLK_TOP, 19,
+ CLK_SET_RATE_PARENT | CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_FSYS_200, "aclk_fsys_200", "div_aclk_fsys_200",
+ ENABLE_ACLK_TOP, 18,
+ CLK_SET_RATE_PARENT | CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_GSCL_111, "aclk_gscl_111", "div_aclk_gscl_111",
+ ENABLE_ACLK_TOP, 15,
+ CLK_SET_RATE_PARENT | CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_GSCL_333, "aclk_gscl_333", "div_aclk_gscl_333",
+ ENABLE_ACLK_TOP, 14,
+ CLK_SET_RATE_PARENT | CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_CAM1_333, "aclk_cam1_333", "div_aclk_cam1_333",
+ ENABLE_ACLK_TOP, 13,
+ CLK_SET_RATE_PARENT | CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_CAM1_400, "aclk_cam1_400", "div_aclk_cam1_400",
+ ENABLE_ACLK_TOP, 12,
+ CLK_SET_RATE_PARENT | CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_CAM1_552, "aclk_cam1_552", "div_aclk_cam1_552",
+ ENABLE_ACLK_TOP, 11,
+ CLK_SET_RATE_PARENT | CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_CAM0_333, "aclk_cam0_333", "div_aclk_cam0_333",
+ ENABLE_ACLK_TOP, 10,
+ CLK_SET_RATE_PARENT | CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_CAM0_400, "aclk_cam0_400", "div_aclk_cam0_400",
+ ENABLE_ACLK_TOP, 9,
+ CLK_SET_RATE_PARENT | CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_CAM0_552, "aclk_cam0_552", "div_aclk_cam0_552",
+ ENABLE_ACLK_TOP, 8,
+ CLK_SET_RATE_PARENT | CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ISP_DIS_400, "aclk_isp_dis_400", "div_aclk_isp_dis_400",
+ ENABLE_ACLK_TOP, 7,
+ CLK_SET_RATE_PARENT | CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ISP_400, "aclk_isp_400", "div_aclk_isp_400",
+ ENABLE_ACLK_TOP, 6,
+ CLK_SET_RATE_PARENT | CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_HEVC_400, "aclk_hevc_400", "div_aclk_hevc_400",
+ ENABLE_ACLK_TOP, 5,
+ CLK_SET_RATE_PARENT | CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_MFC_400, "aclk_mfc_400", "div_aclk_mfc_400",
+ ENABLE_ACLK_TOP, 3,
+ CLK_SET_RATE_PARENT | CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_G2D_266, "aclk_g2d_266", "div_aclk_g2d_266",
+ ENABLE_ACLK_TOP, 2,
+ CLK_SET_RATE_PARENT | CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_G2D_400, "aclk_g2d_400", "div_aclk_g2d_400",
+ ENABLE_ACLK_TOP, 0,
+ CLK_SET_RATE_PARENT | CLK_IGNORE_UNUSED, 0),
+
+ /* ENABLE_SCLK_TOP_MSCL */
+ GATE(CLK_SCLK_JPEG_MSCL, "sclk_jpeg_mscl", "div_sclk_jpeg",
+ ENABLE_SCLK_TOP_MSCL, 0, 0, 0),
+
+ /* ENABLE_SCLK_TOP_CAM1 */
+ GATE(CLK_SCLK_ISP_SENSOR2, "sclk_isp_sensor2", "div_sclk_isp_sensor2_b",
+ ENABLE_SCLK_TOP_CAM1, 7, 0, 0),
+ GATE(CLK_SCLK_ISP_SENSOR1, "sclk_isp_sensor1", "div_sclk_isp_sensor1_b",
+ ENABLE_SCLK_TOP_CAM1, 6, 0, 0),
+ GATE(CLK_SCLK_ISP_SENSOR0, "sclk_isp_sensor0", "div_sclk_isp_sensor0_b",
+ ENABLE_SCLK_TOP_CAM1, 5, 0, 0),
+ GATE(CLK_SCLK_ISP_MCTADC_CAM1, "sclk_isp_mctadc_cam1", "oscclk",
+ ENABLE_SCLK_TOP_CAM1, 4, 0, 0),
+ GATE(CLK_SCLK_ISP_UART_CAM1, "sclk_isp_uart_cam1", "div_sclk_isp_uart",
+ ENABLE_SCLK_TOP_CAM1, 2, 0, 0),
+ GATE(CLK_SCLK_ISP_SPI1_CAM1, "sclk_isp_spi1_cam1", "div_sclk_isp_spi1_b",
+ ENABLE_SCLK_TOP_CAM1, 1, 0, 0),
+ GATE(CLK_SCLK_ISP_SPI0_CAM1, "sclk_isp_spi0_cam1", "div_sclk_isp_spi0_b",
+ ENABLE_SCLK_TOP_CAM1, 0, 0, 0),
+
+ /* ENABLE_SCLK_TOP_DISP */
+ GATE(CLK_SCLK_HDMI_SPDIF_DISP, "sclk_hdmi_spdif_disp",
+ "mout_sclk_hdmi_spdif", ENABLE_SCLK_TOP_DISP, 0,
+ CLK_IGNORE_UNUSED, 0),
+
+ /* ENABLE_SCLK_TOP_FSYS */
+ GATE(CLK_SCLK_PCIE_100_FSYS, "sclk_pcie_100_fsys", "div_sclk_pcie_100",
+ ENABLE_SCLK_TOP_FSYS, 7, 0, 0),
+ GATE(CLK_SCLK_MMC2_FSYS, "sclk_mmc2_fsys", "div_sclk_mmc2_b",
+ ENABLE_SCLK_TOP_FSYS, 6, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_SCLK_MMC1_FSYS, "sclk_mmc1_fsys", "div_sclk_mmc1_b",
+ ENABLE_SCLK_TOP_FSYS, 5, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_SCLK_MMC0_FSYS, "sclk_mmc0_fsys", "div_sclk_mmc0_b",
+ ENABLE_SCLK_TOP_FSYS, 4, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_SCLK_UFSUNIPRO_FSYS, "sclk_ufsunipro_fsys",
+ "div_sclk_ufsunipro", ENABLE_SCLK_TOP_FSYS,
+ 3, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_SCLK_USBHOST30_FSYS, "sclk_usbhost30_fsys",
+ "div_sclk_usbhost30", ENABLE_SCLK_TOP_FSYS,
+ 1, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_SCLK_USBDRD30_FSYS, "sclk_usbdrd30_fsys",
+ "div_sclk_usbdrd30", ENABLE_SCLK_TOP_FSYS,
+ 0, CLK_SET_RATE_PARENT, 0),
+
+ /* ENABLE_SCLK_TOP_PERIC */
+ GATE(CLK_SCLK_SPI4_PERIC, "sclk_spi4_peric", "div_sclk_spi4_b",
+ ENABLE_SCLK_TOP_PERIC, 12, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_SCLK_SPI3_PERIC, "sclk_spi3_peric", "div_sclk_spi3_b",
+ ENABLE_SCLK_TOP_PERIC, 11, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_SCLK_SPDIF_PERIC, "sclk_spdif_peric", "mout_sclk_spdif",
+ ENABLE_SCLK_TOP_PERIC, 9, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_SCLK_I2S1_PERIC, "sclk_i2s1_peric", "div_sclk_i2s1",
+ ENABLE_SCLK_TOP_PERIC, 8, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_SCLK_PCM1_PERIC, "sclk_pcm1_peric", "div_sclk_pcm1",
+ ENABLE_SCLK_TOP_PERIC, 7, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_SCLK_UART2_PERIC, "sclk_uart2_peric", "div_sclk_uart2",
+ ENABLE_SCLK_TOP_PERIC, 5, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_SCLK_UART1_PERIC, "sclk_uart1_peric", "div_sclk_uart1",
+ ENABLE_SCLK_TOP_PERIC, 4, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_SCLK_UART0_PERIC, "sclk_uart0_peric", "div_sclk_uart0",
+ ENABLE_SCLK_TOP_PERIC, 3, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_SCLK_SPI2_PERIC, "sclk_spi2_peric", "div_sclk_spi2_b",
+ ENABLE_SCLK_TOP_PERIC, 2, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_SCLK_SPI1_PERIC, "sclk_spi1_peric", "div_sclk_spi1_b",
+ ENABLE_SCLK_TOP_PERIC, 1, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_SCLK_SPI0_PERIC, "sclk_spi0_peric", "div_sclk_spi0_b",
+ ENABLE_SCLK_TOP_PERIC, 0, CLK_SET_RATE_PARENT, 0),
+
+ /* MUX_ENABLE_TOP_PERIC1 */
+ GATE(CLK_SCLK_SLIMBUS, "sclk_slimbus", "mout_sclk_slimbus",
+ MUX_ENABLE_TOP_PERIC1, 16, 0, 0),
+ GATE(CLK_SCLK_AUDIO1, "sclk_audio1", "div_sclk_audio1",
+ MUX_ENABLE_TOP_PERIC1, 4, 0, 0),
+ GATE(CLK_SCLK_AUDIO0, "sclk_audio0", "div_sclk_audio0",
+ MUX_ENABLE_TOP_PERIC1, 0, 0, 0),
+};
+
+/*
+ * ATLAS_PLL & APOLLO_PLL & MEM0_PLL & MEM1_PLL & BUS_PLL & MFC_PLL
+ * & MPHY_PLL & G3D_PLL & DISP_PLL & ISP_PLL
+ */
+static struct samsung_pll_rate_table exynos5443_pll_rates[] = {
+ PLL_35XX_RATE(2500000000U, 625, 6, 0),
+ PLL_35XX_RATE(2400000000U, 500, 5, 0),
+ PLL_35XX_RATE(2300000000U, 575, 6, 0),
+ PLL_35XX_RATE(2200000000U, 550, 6, 0),
+ PLL_35XX_RATE(2100000000U, 350, 4, 0),
+ PLL_35XX_RATE(2000000000U, 500, 6, 0),
+ PLL_35XX_RATE(1900000000U, 475, 6, 0),
+ PLL_35XX_RATE(1800000000U, 375, 5, 0),
+ PLL_35XX_RATE(1700000000U, 425, 6, 0),
+ PLL_35XX_RATE(1600000000U, 400, 6, 0),
+ PLL_35XX_RATE(1500000000U, 250, 4, 0),
+ PLL_35XX_RATE(1400000000U, 350, 6, 0),
+ PLL_35XX_RATE(1332000000U, 222, 4, 0),
+ PLL_35XX_RATE(1300000000U, 325, 6, 0),
+ PLL_35XX_RATE(1200000000U, 500, 5, 1),
+ PLL_35XX_RATE(1100000000U, 550, 6, 1),
+ PLL_35XX_RATE(1086000000U, 362, 4, 1),
+ PLL_35XX_RATE(1066000000U, 533, 6, 1),
+ PLL_35XX_RATE(1000000000U, 500, 6, 1),
+ PLL_35XX_RATE(933000000U, 311, 4, 1),
+ PLL_35XX_RATE(921000000U, 307, 4, 1),
+ PLL_35XX_RATE(900000000U, 375, 5, 1),
+ PLL_35XX_RATE(825000000U, 275, 4, 1),
+ PLL_35XX_RATE(800000000U, 400, 6, 1),
+ PLL_35XX_RATE(733000000U, 733, 12, 1),
+ PLL_35XX_RATE(700000000U, 360, 6, 1),
+ PLL_35XX_RATE(667000000U, 222, 4, 1),
+ PLL_35XX_RATE(633000000U, 211, 4, 1),
+ PLL_35XX_RATE(600000000U, 500, 5, 2),
+ PLL_35XX_RATE(552000000U, 460, 5, 2),
+ PLL_35XX_RATE(550000000U, 550, 6, 2),
+ PLL_35XX_RATE(543000000U, 362, 4, 2),
+ PLL_35XX_RATE(533000000U, 533, 6, 2),
+ PLL_35XX_RATE(500000000U, 500, 6, 2),
+ PLL_35XX_RATE(444000000U, 370, 5, 2),
+ PLL_35XX_RATE(420000000U, 350, 5, 2),
+ PLL_35XX_RATE(400000000U, 400, 6, 2),
+ PLL_35XX_RATE(350000000U, 360, 6, 2),
+ PLL_35XX_RATE(333000000U, 222, 4, 2),
+ PLL_35XX_RATE(300000000U, 500, 5, 3),
+ PLL_35XX_RATE(266000000U, 532, 6, 3),
+ PLL_35XX_RATE(200000000U, 400, 6, 3),
+ PLL_35XX_RATE(166000000U, 332, 6, 3),
+ PLL_35XX_RATE(160000000U, 320, 6, 3),
+ PLL_35XX_RATE(133000000U, 552, 6, 4),
+ PLL_35XX_RATE(100000000U, 400, 6, 4),
+ { /* sentinel */ }
+};
+
+/* AUD_PLL */
+static struct samsung_pll_rate_table exynos5443_aud_pll_rates[] = {
+ PLL_36XX_RATE(400000000U, 200, 3, 2, 0),
+ PLL_36XX_RATE(393216000U, 197, 3, 2, -25690),
+ PLL_36XX_RATE(384000000U, 128, 2, 2, 0),
+ PLL_36XX_RATE(368640000U, 246, 4, 2, -15729),
+ PLL_36XX_RATE(361507200U, 181, 3, 2, -16148),
+ PLL_36XX_RATE(338688000U, 113, 2, 2, -6816),
+ PLL_36XX_RATE(294912000U, 98, 1, 3, 19923),
+ PLL_36XX_RATE(288000000U, 96, 1, 3, 0),
+ PLL_36XX_RATE(252000000U, 84, 1, 3, 0),
+ { /* sentinel */ }
+};
+
+static struct samsung_pll_clock top_pll_clks[] __initdata = {
+ PLL(pll_35xx, CLK_FOUT_ISP_PLL, "fout_isp_pll", "oscclk",
+ ISP_PLL_LOCK, ISP_PLL_CON0, exynos5443_pll_rates),
+ PLL(pll_36xx, CLK_FOUT_AUD_PLL, "fout_aud_pll", "oscclk",
+ AUD_PLL_LOCK, AUD_PLL_CON0, exynos5443_aud_pll_rates),
+};
+
+static struct samsung_cmu_info top_cmu_info __initdata = {
+ .pll_clks = top_pll_clks,
+ .nr_pll_clks = ARRAY_SIZE(top_pll_clks),
+ .mux_clks = top_mux_clks,
+ .nr_mux_clks = ARRAY_SIZE(top_mux_clks),
+ .div_clks = top_div_clks,
+ .nr_div_clks = ARRAY_SIZE(top_div_clks),
+ .gate_clks = top_gate_clks,
+ .nr_gate_clks = ARRAY_SIZE(top_gate_clks),
+ .fixed_clks = top_fixed_clks,
+ .nr_fixed_clks = ARRAY_SIZE(top_fixed_clks),
+ .fixed_factor_clks = top_fixed_factor_clks,
+ .nr_fixed_factor_clks = ARRAY_SIZE(top_fixed_factor_clks),
+ .nr_clk_ids = TOP_NR_CLK,
+ .clk_regs = top_clk_regs,
+ .nr_clk_regs = ARRAY_SIZE(top_clk_regs),
+};
+
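+/*
+ * Editorial note: every CMU block in this file follows the same pattern as
+ * below -- the PLL/mux/divider/gate tables defined above are collected into a
+ * samsung_cmu_info descriptor and registered in one step from the matching
+ * device-tree node via samsung_cmu_register_one() at early boot.
+ */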
+static void __init exynos5433_cmu_top_init(struct device_node *np)
+{
+ samsung_cmu_register_one(np, &top_cmu_info);
+}
+CLK_OF_DECLARE(exynos5433_cmu_top, "samsung,exynos5433-cmu-top",
+ exynos5433_cmu_top_init);
+
+/*
+ * Register offset definitions for CMU_CPIF
+ */
+#define MPHY_PLL_LOCK 0x0000
+#define MPHY_PLL_CON0 0x0100
+#define MPHY_PLL_CON1 0x0104
+#define MPHY_PLL_FREQ_DET 0x010c
+#define MUX_SEL_CPIF0 0x0200
+#define DIV_CPIF 0x0600
+#define ENABLE_SCLK_CPIF 0x0a00
+
+static unsigned long cpif_clk_regs[] __initdata = {
+ MPHY_PLL_LOCK,
+ MPHY_PLL_CON0,
+ MPHY_PLL_CON1,
+ MPHY_PLL_FREQ_DET,
+ MUX_SEL_CPIF0,
+ ENABLE_SCLK_CPIF,
+};
+
+/* list of all parent clocks */
+PNAME(mout_mphy_pll_p) = { "oscclk", "fout_mphy_pll", };
+
+static struct samsung_pll_clock cpif_pll_clks[] __initdata = {
+ PLL(pll_35xx, CLK_FOUT_MPHY_PLL, "fout_mphy_pll", "oscclk",
+ MPHY_PLL_LOCK, MPHY_PLL_CON0, exynos5443_pll_rates),
+};
+
+static struct samsung_mux_clock cpif_mux_clks[] __initdata = {
+ /* MUX_SEL_CPIF0 */
+ MUX(CLK_MOUT_MPHY_PLL, "mout_mphy_pll", mout_mphy_pll_p, MUX_SEL_CPIF0,
+ 0, 1),
+};
+
+static struct samsung_div_clock cpif_div_clks[] __initdata = {
+ /* DIV_CPIF */
+ DIV(CLK_DIV_SCLK_MPHY, "div_sclk_mphy", "mout_mphy_pll", DIV_CPIF,
+ 0, 6),
+};
+
+static struct samsung_gate_clock cpif_gate_clks[] __initdata = {
+ /* ENABLE_SCLK_CPIF */
+ GATE(CLK_SCLK_MPHY_PLL, "sclk_mphy_pll", "mout_mphy_pll",
+ ENABLE_SCLK_CPIF, 9, 0, 0),
+ GATE(CLK_SCLK_UFS_MPHY, "sclk_ufs_mphy", "div_sclk_mphy",
+ ENABLE_SCLK_CPIF, 4, 0, 0),
+};
+
+static struct samsung_cmu_info cpif_cmu_info __initdata = {
+ .pll_clks = cpif_pll_clks,
+ .nr_pll_clks = ARRAY_SIZE(cpif_pll_clks),
+ .mux_clks = cpif_mux_clks,
+ .nr_mux_clks = ARRAY_SIZE(cpif_mux_clks),
+ .div_clks = cpif_div_clks,
+ .nr_div_clks = ARRAY_SIZE(cpif_div_clks),
+ .gate_clks = cpif_gate_clks,
+ .nr_gate_clks = ARRAY_SIZE(cpif_gate_clks),
+ .nr_clk_ids = CPIF_NR_CLK,
+ .clk_regs = cpif_clk_regs,
+ .nr_clk_regs = ARRAY_SIZE(cpif_clk_regs),
+};
+
+static void __init exynos5433_cmu_cpif_init(struct device_node *np)
+{
+ samsung_cmu_register_one(np, &cpif_cmu_info);
+}
+CLK_OF_DECLARE(exynos5433_cmu_cpif, "samsung,exynos5433-cmu-cpif",
+ exynos5433_cmu_cpif_init);
+
+/*
+ * Register offset definitions for CMU_MIF
+ */
+#define MEM0_PLL_LOCK 0x0000
+#define MEM1_PLL_LOCK 0x0004
+#define BUS_PLL_LOCK 0x0008
+#define MFC_PLL_LOCK 0x000c
+#define MEM0_PLL_CON0 0x0100
+#define MEM0_PLL_CON1 0x0104
+#define MEM0_PLL_FREQ_DET 0x010c
+#define MEM1_PLL_CON0 0x0110
+#define MEM1_PLL_CON1 0x0114
+#define MEM1_PLL_FREQ_DET 0x011c
+#define BUS_PLL_CON0 0x0120
+#define BUS_PLL_CON1 0x0124
+#define BUS_PLL_FREQ_DET 0x012c
+#define MFC_PLL_CON0 0x0130
+#define MFC_PLL_CON1 0x0134
+#define MFC_PLL_FREQ_DET 0x013c
+#define MUX_SEL_MIF0 0x0200
+#define MUX_SEL_MIF1 0x0204
+#define MUX_SEL_MIF2 0x0208
+#define MUX_SEL_MIF3 0x020c
+#define MUX_SEL_MIF4 0x0210
+#define MUX_SEL_MIF5 0x0214
+#define MUX_SEL_MIF6 0x0218
+#define MUX_SEL_MIF7 0x021c
+#define MUX_ENABLE_MIF0 0x0300
+#define MUX_ENABLE_MIF1 0x0304
+#define MUX_ENABLE_MIF2 0x0308
+#define MUX_ENABLE_MIF3 0x030c
+#define MUX_ENABLE_MIF4 0x0310
+#define MUX_ENABLE_MIF5 0x0314
+#define MUX_ENABLE_MIF6 0x0318
+#define MUX_ENABLE_MIF7 0x031c
+#define MUX_STAT_MIF0 0x0400
+#define MUX_STAT_MIF1 0x0404
+#define MUX_STAT_MIF2 0x0408
+#define MUX_STAT_MIF3 0x040c
+#define MUX_STAT_MIF4 0x0410
+#define MUX_STAT_MIF5 0x0414
+#define MUX_STAT_MIF6 0x0418
+#define MUX_STAT_MIF7 0x041c
+#define DIV_MIF1 0x0604
+#define DIV_MIF2 0x0608
+#define DIV_MIF3 0x060c
+#define DIV_MIF4 0x0610
+#define DIV_MIF5 0x0614
+#define DIV_MIF_PLL_FREQ_DET 0x0618
+#define DIV_STAT_MIF1 0x0704
+#define DIV_STAT_MIF2 0x0708
+#define DIV_STAT_MIF3 0x070c
+#define DIV_STAT_MIF4 0x0710
+#define DIV_STAT_MIF5 0x0714
+#define DIV_STAT_MIF_PLL_FREQ_DET 0x0718
+#define ENABLE_ACLK_MIF0 0x0800
+#define ENABLE_ACLK_MIF1 0x0804
+#define ENABLE_ACLK_MIF2 0x0808
+#define ENABLE_ACLK_MIF3 0x080c
+#define ENABLE_PCLK_MIF 0x0900
+#define ENABLE_PCLK_MIF_SECURE_DREX0_TZ 0x0904
+#define ENABLE_PCLK_MIF_SECURE_DREX1_TZ 0x0908
+#define ENABLE_PCLK_MIF_SECURE_MONOTONIC_CNT 0x090c
+#define ENABLE_PCLK_MIF_SECURE_RTC 0x0910
+#define ENABLE_SCLK_MIF 0x0a00
+#define ENABLE_IP_MIF0 0x0b00
+#define ENABLE_IP_MIF1 0x0b04
+#define ENABLE_IP_MIF2 0x0b08
+#define ENABLE_IP_MIF3 0x0b0c
+#define ENABLE_IP_MIF_SECURE_DREX0_TZ 0x0b10
+#define ENABLE_IP_MIF_SECURE_DREX1_TZ 0x0b14
+#define ENABLE_IP_MIF_SECURE_MONOTONIC_CNT 0x0b18
+#define ENABLE_IP_MIF_SECURE_RTC 0x0b1c
+#define CLKOUT_CMU_MIF 0x0c00
+#define CLKOUT_CMU_MIF_DIV_STAT 0x0c04
+#define DREX_FREQ_CTRL0 0x1000
+#define DREX_FREQ_CTRL1 0x1004
+#define PAUSE 0x1008
+#define DDRPHY_LOCK_CTRL 0x100c
+
+static unsigned long mif_clk_regs[] __initdata = {
+ MEM0_PLL_LOCK,
+ MEM1_PLL_LOCK,
+ BUS_PLL_LOCK,
+ MFC_PLL_LOCK,
+ MEM0_PLL_CON0,
+ MEM0_PLL_CON1,
+ MEM0_PLL_FREQ_DET,
+ MEM1_PLL_CON0,
+ MEM1_PLL_CON1,
+ MEM1_PLL_FREQ_DET,
+ BUS_PLL_CON0,
+ BUS_PLL_CON1,
+ BUS_PLL_FREQ_DET,
+ MFC_PLL_CON0,
+ MFC_PLL_CON1,
+ MFC_PLL_FREQ_DET,
+ MUX_SEL_MIF0,
+ MUX_SEL_MIF1,
+ MUX_SEL_MIF2,
+ MUX_SEL_MIF3,
+ MUX_SEL_MIF4,
+ MUX_SEL_MIF5,
+ MUX_SEL_MIF6,
+ MUX_SEL_MIF7,
+ MUX_ENABLE_MIF0,
+ MUX_ENABLE_MIF1,
+ MUX_ENABLE_MIF2,
+ MUX_ENABLE_MIF3,
+ MUX_ENABLE_MIF4,
+ MUX_ENABLE_MIF5,
+ MUX_ENABLE_MIF6,
+ MUX_ENABLE_MIF7,
+ MUX_STAT_MIF0,
+ MUX_STAT_MIF1,
+ MUX_STAT_MIF2,
+ MUX_STAT_MIF3,
+ MUX_STAT_MIF4,
+ MUX_STAT_MIF5,
+ MUX_STAT_MIF6,
+ MUX_STAT_MIF7,
+ DIV_MIF1,
+ DIV_MIF2,
+ DIV_MIF3,
+ DIV_MIF4,
+ DIV_MIF5,
+ DIV_MIF_PLL_FREQ_DET,
+ DIV_STAT_MIF1,
+ DIV_STAT_MIF2,
+ DIV_STAT_MIF3,
+ DIV_STAT_MIF4,
+ DIV_STAT_MIF5,
+ DIV_STAT_MIF_PLL_FREQ_DET,
+ ENABLE_ACLK_MIF0,
+ ENABLE_ACLK_MIF1,
+ ENABLE_ACLK_MIF2,
+ ENABLE_ACLK_MIF3,
+ ENABLE_PCLK_MIF,
+ ENABLE_PCLK_MIF_SECURE_DREX0_TZ,
+ ENABLE_PCLK_MIF_SECURE_DREX1_TZ,
+ ENABLE_PCLK_MIF_SECURE_MONOTONIC_CNT,
+ ENABLE_PCLK_MIF_SECURE_RTC,
+ ENABLE_SCLK_MIF,
+ ENABLE_IP_MIF0,
+ ENABLE_IP_MIF1,
+ ENABLE_IP_MIF2,
+ ENABLE_IP_MIF3,
+ ENABLE_IP_MIF_SECURE_DREX0_TZ,
+ ENABLE_IP_MIF_SECURE_DREX1_TZ,
+ ENABLE_IP_MIF_SECURE_MONOTONIC_CNT,
+ ENABLE_IP_MIF_SECURE_RTC,
+ CLKOUT_CMU_MIF,
+ CLKOUT_CMU_MIF_DIV_STAT,
+ DREX_FREQ_CTRL0,
+ DREX_FREQ_CTRL1,
+ PAUSE,
+ DDRPHY_LOCK_CTRL,
+};
+
+static struct samsung_pll_clock mif_pll_clks[] __initdata = {
+ PLL(pll_35xx, CLK_FOUT_MEM0_PLL, "fout_mem0_pll", "oscclk",
+ MEM0_PLL_LOCK, MEM0_PLL_CON0, exynos5443_pll_rates),
+ PLL(pll_35xx, CLK_FOUT_MEM1_PLL, "fout_mem1_pll", "oscclk",
+ MEM1_PLL_LOCK, MEM1_PLL_CON0, exynos5443_pll_rates),
+ PLL(pll_35xx, CLK_FOUT_BUS_PLL, "fout_bus_pll", "oscclk",
+ BUS_PLL_LOCK, BUS_PLL_CON0, exynos5443_pll_rates),
+ PLL(pll_35xx, CLK_FOUT_MFC_PLL, "fout_mfc_pll", "oscclk",
+ MFC_PLL_LOCK, MFC_PLL_CON0, exynos5443_pll_rates),
+};
+
+/* list of all parent clocks */
+PNAME(mout_mfc_pll_div2_p) = { "mout_mfc_pll", "dout_mfc_pll", };
+PNAME(mout_bus_pll_div2_p) = { "mout_bus_pll", "dout_bus_pll", };
+PNAME(mout_mem1_pll_div2_p) = { "mout_mem1_pll", "dout_mem1_pll", };
+PNAME(mout_mem0_pll_div2_p) = { "mout_mem0_pll", "dout_mem0_pll", };
+PNAME(mout_mfc_pll_p) = { "oscclk", "fout_mfc_pll", };
+PNAME(mout_bus_pll_p) = { "oscclk", "fout_bus_pll", };
+PNAME(mout_mem1_pll_p) = { "oscclk", "fout_mem1_pll", };
+PNAME(mout_mem0_pll_p) = { "oscclk", "fout_mem0_pll", };
+
+PNAME(mout_clk2x_phy_c_p) = { "mout_mem0_pll_div2", "mout_clkm_phy_b", };
+PNAME(mout_clk2x_phy_b_p) = { "mout_bus_pll_div2", "mout_clkm_phy_a", };
+PNAME(mout_clk2x_phy_a_p) = { "mout_bus_pll_div2", "mout_mfc_pll_div2", };
+PNAME(mout_clkm_phy_b_p) = { "mout_mem1_pll_div2", "mout_clkm_phy_a", };
+
+PNAME(mout_aclk_mifnm_200_p) = { "mout_mem0_pll_div2", "div_mif_pre", };
+PNAME(mout_aclk_mifnm_400_p) = { "mout_mem1_pll_div2", "mout_bus_pll_div2",};
+
+PNAME(mout_aclk_disp_333_b_p) = { "mout_aclk_disp_333_a",
+ "mout_bus_pll_div2", };
+PNAME(mout_aclk_disp_333_a_p) = { "mout_mfc_pll_div2", "sclk_mphy_pll", };
+
+PNAME(mout_sclk_decon_vclk_c_p) = { "mout_sclk_decon_vclk_b",
+ "sclk_mphy_pll", };
+PNAME(mout_sclk_decon_vclk_b_p) = { "mout_sclk_decon_vclk_a",
+ "mout_mfc_pll_div2", };
+PNAME(mout_sclk_decon_p) = { "oscclk", "mout_bus_pll_div2", };
+PNAME(mout_sclk_decon_eclk_c_p) = { "mout_sclk_decon_eclk_b",
+ "sclk_mphy_pll", };
+PNAME(mout_sclk_decon_eclk_b_p) = { "mout_sclk_decon_eclk_a",
+ "mout_mfc_pll_div2", };
+
+PNAME(mout_sclk_decon_tv_eclk_c_p) = { "mout_sclk_decon_tv_eclk_b",
+ "sclk_mphy_pll", };
+PNAME(mout_sclk_decon_tv_eclk_b_p) = { "mout_sclk_decon_tv_eclk_a",
+ "mout_mfc_pll_div2", };
+PNAME(mout_sclk_dsd_c_p) = { "mout_sclk_dsd_b", "mout_bus_pll_div2", };
+PNAME(mout_sclk_dsd_b_p) = { "mout_sclk_dsd_a", "sclk_mphy_pll", };
+PNAME(mout_sclk_dsd_a_p) = { "oscclk", "mout_mfc_pll_div2", };
+
+PNAME(mout_sclk_dsim0_c_p) = { "mout_sclk_dsim0_b", "sclk_mphy_pll", };
+PNAME(mout_sclk_dsim0_b_p) = { "mout_sclk_dsim0_a", "mout_mfc_pll_div2" };
+
+PNAME(mout_sclk_decon_tv_vclk_c_p) = { "mout_sclk_decon_tv_vclk_b",
+ "sclk_mphy_pll", };
+PNAME(mout_sclk_decon_tv_vclk_b_p) = { "mout_sclk_decon_tv_vclk_a",
+ "mout_mfc_pll_div2", };
+PNAME(mout_sclk_dsim1_c_p) = { "mout_sclk_dsim1_b", "sclk_mphy_pll", };
+PNAME(mout_sclk_dsim1_b_p) = { "mout_sclk_dsim1_a", "mout_mfc_pll_div2",};
+
+static struct samsung_fixed_factor_clock mif_fixed_factor_clks[] __initdata = {
+ /* dout_{mfc|bus|mem1|mem0}_pll is half fixed rate from parent mux */
+ FFACTOR(CLK_DOUT_MFC_PLL, "dout_mfc_pll", "mout_mfc_pll", 1, 1, 0),
+ FFACTOR(CLK_DOUT_BUS_PLL, "dout_bus_pll", "mout_bus_pll", 1, 1, 0),
+ FFACTOR(CLK_DOUT_MEM1_PLL, "dout_mem1_pll", "mout_mem1_pll", 1, 1, 0),
+ FFACTOR(CLK_DOUT_MEM0_PLL, "dout_mem0_pll", "mout_mem0_pll", 1, 1, 0),
+};
+
+static struct samsung_mux_clock mif_mux_clks[] __initdata = {
+ /* MUX_SEL_MIF0 */
+ MUX(CLK_MOUT_MFC_PLL_DIV2, "mout_mfc_pll_div2", mout_mfc_pll_div2_p,
+ MUX_SEL_MIF0, 28, 1),
+ MUX(CLK_MOUT_BUS_PLL_DIV2, "mout_bus_pll_div2", mout_bus_pll_div2_p,
+ MUX_SEL_MIF0, 24, 1),
+ MUX(CLK_MOUT_MEM1_PLL_DIV2, "mout_mem1_pll_div2", mout_mem1_pll_div2_p,
+ MUX_SEL_MIF0, 20, 1),
+ MUX(CLK_MOUT_MEM0_PLL_DIV2, "mout_mem0_pll_div2", mout_mem0_pll_div2_p,
+ MUX_SEL_MIF0, 16, 1),
+ MUX(CLK_MOUT_MFC_PLL, "mout_mfc_pll", mout_mfc_pll_p, MUX_SEL_MIF0,
+ 12, 1),
+ MUX(CLK_MOUT_BUS_PLL, "mout_bus_pll", mout_bus_pll_p, MUX_SEL_MIF0,
+ 8, 1),
+ MUX(CLK_MOUT_MEM1_PLL, "mout_mem1_pll", mout_mem1_pll_p, MUX_SEL_MIF0,
+ 4, 1),
+ MUX(CLK_MOUT_MEM0_PLL, "mout_mem0_pll", mout_mem0_pll_p, MUX_SEL_MIF0,
+ 0, 1),
+
+ /* MUX_SEL_MIF1 */
+ MUX(CLK_MOUT_CLK2X_PHY_C, "mout_clk2x_phy_c", mout_clk2x_phy_c_p,
+ MUX_SEL_MIF1, 24, 1),
+ MUX(CLK_MOUT_CLK2X_PHY_B, "mout_clk2x_phy_b", mout_clk2x_phy_b_p,
+ MUX_SEL_MIF1, 20, 1),
+ MUX(CLK_MOUT_CLK2X_PHY_A, "mout_clk2x_phy_a", mout_clk2x_phy_a_p,
+ MUX_SEL_MIF1, 16, 1),
+ MUX(CLK_MOUT_CLKM_PHY_C, "mout_clkm_phy_c", mout_clk2x_phy_c_p,
+ MUX_SEL_MIF1, 12, 1),
+ MUX(CLK_MOUT_CLKM_PHY_B, "mout_clkm_phy_b", mout_clkm_phy_b_p,
+ MUX_SEL_MIF1, 8, 1),
+ MUX(CLK_MOUT_CLKM_PHY_A, "mout_clkm_phy_a", mout_clk2x_phy_a_p,
+ MUX_SEL_MIF1, 4, 1),
+
+ /* MUX_SEL_MIF2 */
+ MUX(CLK_MOUT_ACLK_MIFNM_200, "mout_aclk_mifnm_200",
+ mout_aclk_mifnm_200_p, MUX_SEL_MIF2, 8, 1),
+ MUX(CLK_MOUT_ACLK_MIFNM_400, "mout_aclk_mifnm_400",
+ mout_aclk_mifnm_400_p, MUX_SEL_MIF2, 0, 1),
+
+ /* MUX_SEL_MIF3 */
+ MUX(CLK_MOUT_ACLK_DISP_333_B, "mout_aclk_disp_333_b",
+ mout_aclk_disp_333_b_p, MUX_SEL_MIF3, 4, 1),
+ MUX(CLK_MOUT_ACLK_DISP_333_A, "mout_aclk_disp_333_a",
+ mout_aclk_disp_333_a_p, MUX_SEL_MIF3, 0, 1),
+
+ /* MUX_SEL_MIF4 */
+ MUX(CLK_MOUT_SCLK_DECON_VCLK_C, "mout_sclk_decon_vclk_c",
+ mout_sclk_decon_vclk_c_p, MUX_SEL_MIF4, 24, 1),
+ MUX(CLK_MOUT_SCLK_DECON_VCLK_B, "mout_sclk_decon_vclk_b",
+ mout_sclk_decon_vclk_b_p, MUX_SEL_MIF4, 20, 1),
+ MUX(CLK_MOUT_SCLK_DECON_VCLK_A, "mout_sclk_decon_vclk_a",
+ mout_sclk_decon_p, MUX_SEL_MIF4, 16, 1),
+ MUX(CLK_MOUT_SCLK_DECON_ECLK_C, "mout_sclk_decon_eclk_c",
+ mout_sclk_decon_eclk_c_p, MUX_SEL_MIF4, 8, 1),
+ MUX(CLK_MOUT_SCLK_DECON_ECLK_B, "mout_sclk_decon_eclk_b",
+ mout_sclk_decon_eclk_b_p, MUX_SEL_MIF4, 4, 1),
+ MUX(CLK_MOUT_SCLK_DECON_ECLK_A, "mout_sclk_decon_eclk_a",
+ mout_sclk_decon_p, MUX_SEL_MIF4, 0, 1),
+
+ /* MUX_SEL_MIF5 */
+ MUX(CLK_MOUT_SCLK_DECON_TV_ECLK_C, "mout_sclk_decon_tv_eclk_c",
+ mout_sclk_decon_tv_eclk_c_p, MUX_SEL_MIF5, 24, 1),
+ MUX(CLK_MOUT_SCLK_DECON_TV_ECLK_B, "mout_sclk_decon_tv_eclk_b",
+ mout_sclk_decon_tv_eclk_b_p, MUX_SEL_MIF5, 20, 1),
+ MUX(CLK_MOUT_SCLK_DECON_TV_ECLK_A, "mout_sclk_decon_tv_eclk_a",
+ mout_sclk_decon_p, MUX_SEL_MIF5, 16, 1),
+ MUX(CLK_MOUT_SCLK_DSD_C, "mout_sclk_dsd_c", mout_sclk_dsd_c_p,
+ MUX_SEL_MIF5, 8, 1),
+ MUX(CLK_MOUT_SCLK_DSD_B, "mout_sclk_dsd_b", mout_sclk_dsd_b_p,
+ MUX_SEL_MIF5, 4, 1),
+ MUX(CLK_MOUT_SCLK_DSD_A, "mout_sclk_dsd_a", mout_sclk_dsd_a_p,
+ MUX_SEL_MIF5, 0, 1),
+
+ /* MUX_SEL_MIF6 */
+ MUX(CLK_MOUT_SCLK_DSIM0_C, "mout_sclk_dsim0_c", mout_sclk_dsim0_c_p,
+ MUX_SEL_MIF6, 8, 1),
+ MUX(CLK_MOUT_SCLK_DSIM0_B, "mout_sclk_dsim0_b", mout_sclk_dsim0_b_p,
+ MUX_SEL_MIF6, 4, 1),
+ MUX(CLK_MOUT_SCLK_DSIM0_A, "mout_sclk_dsim0_a", mout_sclk_decon_p,
+ MUX_SEL_MIF6, 0, 1),
+
+ /* MUX_SEL_MIF7 */
+ MUX(CLK_MOUT_SCLK_DECON_TV_VCLK_C, "mout_sclk_decon_tv_vclk_c",
+ mout_sclk_decon_tv_vclk_c_p, MUX_SEL_MIF7, 24, 1),
+ MUX(CLK_MOUT_SCLK_DECON_TV_VCLK_B, "mout_sclk_decon_tv_vclk_b",
+ mout_sclk_decon_tv_vclk_b_p, MUX_SEL_MIF7, 20, 1),
+ MUX(CLK_MOUT_SCLK_DECON_TV_VCLK_A, "mout_sclk_decon_tv_vclk_a",
+ mout_sclk_decon_p, MUX_SEL_MIF7, 16, 1),
+ MUX(CLK_MOUT_SCLK_DSIM1_C, "mout_sclk_dsim1_c", mout_sclk_dsim1_c_p,
+ MUX_SEL_MIF7, 8, 1),
+ MUX(CLK_MOUT_SCLK_DSIM1_B, "mout_sclk_dsim1_b", mout_sclk_dsim1_b_p,
+ MUX_SEL_MIF7, 4, 1),
+ MUX(CLK_MOUT_SCLK_DSIM1_A, "mout_sclk_dsim1_a", mout_sclk_decon_p,
+ MUX_SEL_MIF7, 0, 1),
+};
+
+static struct samsung_div_clock mif_div_clks[] __initdata = {
+ /* DIV_MIF1 */
+ DIV(CLK_DIV_SCLK_HPM_MIF, "div_sclk_hpm_mif", "div_clk2x_phy",
+ DIV_MIF1, 16, 2),
+ DIV(CLK_DIV_ACLK_DREX1, "div_aclk_drex1", "div_clk2x_phy", DIV_MIF1,
+ 12, 2),
+ DIV(CLK_DIV_ACLK_DREX0, "div_aclk_drex0", "div_clk2x_phy", DIV_MIF1,
+ 8, 2),
+ DIV(CLK_DIV_CLK2XPHY, "div_clk2x_phy", "mout_clk2x_phy_c", DIV_MIF1,
+ 4, 4),
+
+ /* DIV_MIF2 */
+ DIV(CLK_DIV_ACLK_MIF_266, "div_aclk_mif_266", "mout_bus_pll_div2",
+ DIV_MIF2, 20, 3),
+ DIV(CLK_DIV_ACLK_MIFND_133, "div_aclk_mifnd_133", "div_mif_pre",
+ DIV_MIF2, 16, 4),
+ DIV(CLK_DIV_ACLK_MIF_133, "div_aclk_mif_133", "div_mif_pre",
+ DIV_MIF2, 12, 4),
+ DIV(CLK_DIV_ACLK_MIFNM_200, "div_aclk_mifnm_200",
+ "mout_aclk_mifnm_200", DIV_MIF2, 8, 3),
+ DIV(CLK_DIV_ACLK_MIF_200, "div_aclk_mif_200", "div_aclk_mif_400",
+ DIV_MIF2, 4, 2),
+ DIV(CLK_DIV_ACLK_MIF_400, "div_aclk_mif_400", "mout_aclk_mifnm_400",
+ DIV_MIF2, 0, 3),
+
+ /* DIV_MIF3 */
+ DIV(CLK_DIV_ACLK_BUS2_400, "div_aclk_bus2_400", "div_mif_pre",
+ DIV_MIF3, 16, 4),
+ DIV(CLK_DIV_ACLK_DISP_333, "div_aclk_disp_333", "mout_aclk_disp_333_b",
+ DIV_MIF3, 4, 3),
+ DIV(CLK_DIV_ACLK_CPIF_200, "div_aclk_cpif_200", "mout_aclk_mifnm_200",
+ DIV_MIF3, 0, 3),
+
+ /* DIV_MIF4 */
+ DIV(CLK_DIV_SCLK_DSIM1, "div_sclk_dsim1", "mout_sclk_dsim1_c",
+ DIV_MIF4, 24, 4),
+ DIV(CLK_DIV_SCLK_DECON_TV_VCLK, "div_sclk_decon_tv_vclk",
+ "mout_sclk_decon_tv_vclk_c", DIV_MIF4, 20, 4),
+ DIV(CLK_DIV_SCLK_DSIM0, "div_sclk_dsim0", "mout_sclk_dsim0_c",
+ DIV_MIF4, 16, 4),
+ DIV(CLK_DIV_SCLK_DSD, "div_sclk_dsd", "mout_sclk_dsd_c",
+ DIV_MIF4, 12, 4),
+ DIV(CLK_DIV_SCLK_DECON_TV_ECLK, "div_sclk_decon_tv_eclk",
+ "mout_sclk_decon_tv_eclk_c", DIV_MIF4, 8, 4),
+ DIV(CLK_DIV_SCLK_DECON_VCLK, "div_sclk_decon_vclk",
+ "mout_sclk_decon_vclk_c", DIV_MIF4, 4, 4),
+ DIV(CLK_DIV_SCLK_DECON_ECLK, "div_sclk_decon_eclk",
+ "mout_sclk_decon_eclk_c", DIV_MIF4, 0, 4),
+
+ /* DIV_MIF5 */
+ DIV(CLK_DIV_MIF_PRE, "div_mif_pre", "mout_bus_pll_div2", DIV_MIF5,
+ 0, 3),
+};
+
+static struct samsung_gate_clock mif_gate_clks[] __initdata = {
+ /* ENABLE_ACLK_MIF0 */
+ GATE(CLK_CLK2X_PHY1, "clk2x_phy1", "div_clk2x_phy", ENABLE_ACLK_MIF0,
+ 19, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_CLK2X_PHY0, "clk2x_phy0", "div_clk2x_phy", ENABLE_ACLK_MIF0,
+ 18, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_CLKM_PHY1, "clkm_phy1", "mout_clkm_phy_c", ENABLE_ACLK_MIF0,
+ 17, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_CLKM_PHY0, "clkm_phy0", "mout_clkm_phy_c", ENABLE_ACLK_MIF0,
+ 16, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_RCLK_DREX1, "rclk_drex1", "oscclk", ENABLE_ACLK_MIF0,
+ 15, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_RCLK_DREX0, "rclk_drex0", "oscclk", ENABLE_ACLK_MIF0,
+ 14, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_DREX1_TZ, "aclk_drex1_tz", "div_aclk_drex1",
+ ENABLE_ACLK_MIF0, 13, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_DREX0_TZ, "aclk_drex0_tz", "div_aclk_drex0",
+ ENABLE_ACLK_MIF0, 12, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_DREX1_PEREV, "aclk_drex1_perev", "div_aclk_drex1",
+ ENABLE_ACLK_MIF0, 11, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_DREX0_PEREV, "aclk_drex0_perev", "div_aclk_drex0",
+ ENABLE_ACLK_MIF0, 10, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_DREX1_MEMIF, "aclk_drex1_memif", "div_aclk_drex1",
+ ENABLE_ACLK_MIF0, 9, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_DREX0_MEMIF, "aclk_drex0_memif", "div_aclk_drex0",
+ ENABLE_ACLK_MIF0, 8, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_DREX1_SCH, "aclk_drex1_sch", "div_aclk_drex1",
+ ENABLE_ACLK_MIF0, 7, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_DREX0_SCH, "aclk_drex0_sch", "div_aclk_drex0",
+ ENABLE_ACLK_MIF0, 6, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_DREX1_BUSIF, "aclk_drex1_busif", "div_aclk_drex1",
+ ENABLE_ACLK_MIF0, 5, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_DREX0_BUSIF, "aclk_drex0_busif", "div_aclk_drex0",
+ ENABLE_ACLK_MIF0, 4, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_DREX1_BUSIF_RD, "aclk_drex1_busif_rd", "div_aclk_drex1",
+ ENABLE_ACLK_MIF0, 3, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_DREX0_BUSIF_RD, "aclk_drex0_busif_rd", "div_aclk_drex0",
+ ENABLE_ACLK_MIF0, 2, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_DREX1, "aclk_drex1", "div_aclk_drex1",
+ ENABLE_ACLK_MIF0, 1, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_DREX0, "aclk_drex0", "div_aclk_drex0",
+ ENABLE_ACLK_MIF0, 0, CLK_IGNORE_UNUSED, 0),
+
+ /* ENABLE_ACLK_MIF1 */
+ GATE(CLK_ACLK_ASYNCAXIS_MIF_IMEM, "aclk_asyncaxis_mif_imem",
+ "div_aclk_mif_200", ENABLE_ACLK_MIF1, 28,
+ CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAXIS_NOC_P_CCI, "aclk_asyncaxis_noc_p_cci",
+ "div_aclk_mif_200", ENABLE_ACLK_MIF1,
+ 27, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAXIM_NOC_P_CCI, "aclk_asyncaxim_noc_p_cci",
+ "div_aclk_mif_133", ENABLE_ACLK_MIF1,
+ 26, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAXIS_CP1, "aclk_asyncaxis_cp1",
+ "div_aclk_mifnm_200", ENABLE_ACLK_MIF1,
+ 25, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAXIM_CP1, "aclk_asyncaxim_cp1",
+ "div_aclk_drex1", ENABLE_ACLK_MIF1,
+ 24, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAXIS_CP0, "aclk_asyncaxis_cp0",
+ "div_aclk_mifnm_200", ENABLE_ACLK_MIF1,
+ 23, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAXIM_CP0, "aclk_asyncaxim_cp0",
+ "div_aclk_drex0", ENABLE_ACLK_MIF1,
+ 22, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAXIS_DREX1_3, "aclk_asyncaxis_drex1_3",
+ "div_aclk_mif_133", ENABLE_ACLK_MIF1,
+ 21, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAXIM_DREX1_3, "aclk_asyncaxim_drex1_3",
+ "div_aclk_drex1", ENABLE_ACLK_MIF1,
+ 20, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAXIS_DREX1_1, "aclk_asyncaxis_drex1_1",
+ "div_aclk_mif_133", ENABLE_ACLK_MIF1,
+ 19, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAXIM_DREX1_1, "aclk_asyncaxim_drex1_1",
+ "div_aclk_drex1", ENABLE_ACLK_MIF1,
+ 18, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAXIS_DREX1_0, "aclk_asyncaxis_drex1_0",
+ "div_aclk_mif_133", ENABLE_ACLK_MIF1,
+ 17, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAXIM_DREX1_0, "aclk_asyncaxim_drex1_0",
+ "div_aclk_drex1", ENABLE_ACLK_MIF1,
+ 16, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAXIS_DREX0_3, "aclk_asyncaxis_drex0_3",
+ "div_aclk_mif_133", ENABLE_ACLK_MIF1,
+ 15, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAXIM_DREX0_3, "aclk_asyncaxim_drex0_3",
+ "div_aclk_drex0", ENABLE_ACLK_MIF1,
+ 14, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAXIS_DREX0_1, "aclk_asyncaxis_drex0_1",
+ "div_aclk_mif_133", ENABLE_ACLK_MIF1,
+ 13, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAXIM_DREX0_1, "aclk_asyncaxim_drex0_1",
+ "div_aclk_drex0", ENABLE_ACLK_MIF1,
+ 12, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAXIS_DREX0_0, "aclk_asyncaxis_drex0_0",
+ "div_aclk_mif_133", ENABLE_ACLK_MIF1,
+ 11, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAXIM_DREX0_0, "aclk_asyncaxim_drex0_0",
+ "div_aclk_drex0", ENABLE_ACLK_MIF1,
+ 10, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_AHB2APB_MIF2P, "aclk_ahb2apb_mif2p", "div_aclk_mif_133",
+ ENABLE_ACLK_MIF1, 9, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_AHB2APB_MIF1P, "aclk_ahb2apb_mif1p", "div_aclk_mif_133",
+ ENABLE_ACLK_MIF1, 8, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_AHB2APB_MIF0P, "aclk_ahb2apb_mif0p", "div_aclk_mif_133",
+ ENABLE_ACLK_MIF1, 7, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_IXIU_CCI, "aclk_ixiu_cci", "div_aclk_mif_400",
+ ENABLE_ACLK_MIF1, 6, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_XIU_MIFSFRX, "aclk_xiu_mifsfrx", "div_aclk_mif_200",
+ ENABLE_ACLK_MIF1, 5, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_MIFNP_133, "aclk_mifnp_133", "div_aclk_mif_133",
+ ENABLE_ACLK_MIF1, 4, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_MIFNM_200, "aclk_mifnm_200", "div_aclk_mifnm_200",
+ ENABLE_ACLK_MIF1, 3, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_MIFND_133, "aclk_mifnd_133", "div_aclk_mifnd_133",
+ ENABLE_ACLK_MIF1, 2, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_MIFND_400, "aclk_mifnd_400", "div_aclk_mif_400",
+ ENABLE_ACLK_MIF1, 1, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_CCI, "aclk_cci", "div_aclk_mif_400", ENABLE_ACLK_MIF1,
+ 0, CLK_IGNORE_UNUSED, 0),
+
+ /* ENABLE_ACLK_MIF2 */
+ GATE(CLK_ACLK_MIFND_266, "aclk_mifnd_266", "div_aclk_mif_266",
+ ENABLE_ACLK_MIF2, 20, 0, 0),
+ GATE(CLK_ACLK_PPMU_DREX1S3, "aclk_ppmu_drex1s3", "div_aclk_drex1",
+ ENABLE_ACLK_MIF2, 17, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_PPMU_DREX1S1, "aclk_ppmu_drex1s1", "div_aclk_drex1",
+ ENABLE_ACLK_MIF2, 16, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_PPMU_DREX1S0, "aclk_ppmu_drex1s0", "div_aclk_drex1",
+ ENABLE_ACLK_MIF2, 15, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_PPMU_DREX0S3, "aclk_ppmu_drex0s3", "div_aclk_drex0",
+ ENABLE_ACLK_MIF2, 14, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_PPMU_DREX0S1, "aclk_ppmu_drex0s1", "div_aclk_drex0",
+ ENABLE_ACLK_MIF2, 13, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_PPMU_DREX0S0, "aclk_ppmu_drex0s0", "div_aclk_drex0",
+ ENABLE_ACLK_MIF2, 12, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_AXIDS_CCI_MIFSFRX, "aclk_axids_cci_mifsfrx",
+ "div_aclk_mif_200", ENABLE_ACLK_MIF2, 7,
+ CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_AXISYNCDNS_CCI, "aclk_axisyncdns_cci",
+ "div_aclk_mif_400", ENABLE_ACLK_MIF2,
+ 5, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_AXISYNCDN_CCI, "aclk_axisyncdn_cci", "div_aclk_mif_400",
+ ENABLE_ACLK_MIF2, 4, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_AXISYNCDN_NOC_D, "aclk_axisyncdn_noc_d",
+ "div_aclk_mif_200", ENABLE_ACLK_MIF2,
+ 3, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAPBS_MIF_CSSYS, "aclk_asyncapbs_mif_cssys",
+ "div_aclk_mifnd_133", ENABLE_ACLK_MIF2, 0, 0, 0),
+
+ /* ENABLE_ACLK_MIF3 */
+ GATE(CLK_ACLK_BUS2_400, "aclk_bus2_400", "div_aclk_bus2_400",
+ ENABLE_ACLK_MIF3, 4,
+ CLK_IGNORE_UNUSED | CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_ACLK_DISP_333, "aclk_disp_333", "div_aclk_disp_333",
+ ENABLE_ACLK_MIF3, 1,
+ CLK_IGNORE_UNUSED | CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_ACLK_CPIF_200, "aclk_cpif_200", "div_aclk_cpif_200",
+ ENABLE_ACLK_MIF3, 0,
+ CLK_IGNORE_UNUSED | CLK_SET_RATE_PARENT, 0),
+
+ /* ENABLE_PCLK_MIF */
+ GATE(CLK_PCLK_PPMU_DREX1S3, "pclk_ppmu_drex1s3", "div_aclk_drex1",
+ ENABLE_PCLK_MIF, 29, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_PPMU_DREX1S1, "pclk_ppmu_drex1s1", "div_aclk_drex1",
+ ENABLE_PCLK_MIF, 28, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_PPMU_DREX1S0, "pclk_ppmu_drex1s0", "div_aclk_drex1",
+ ENABLE_PCLK_MIF, 27, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_PPMU_DREX0S3, "pclk_ppmu_drex0s3", "div_aclk_drex0",
+ ENABLE_PCLK_MIF, 26, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_PPMU_DREX0S1, "pclk_ppmu_drex0s1", "div_aclk_drex0",
+ ENABLE_PCLK_MIF, 25, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_PPMU_DREX0S0, "pclk_ppmu_drex0s0", "div_aclk_drex0",
+ ENABLE_PCLK_MIF, 24, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_ASYNCAXI_NOC_P_CCI, "pclk_asyncaxi_noc_p_cci",
+ "div_aclk_mif_133", ENABLE_PCLK_MIF, 21,
+ CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_ASYNCAXI_CP1, "pclk_asyncaxi_cp1", "div_aclk_mif_133",
+ ENABLE_PCLK_MIF, 19, 0, 0),
+ GATE(CLK_PCLK_ASYNCAXI_CP0, "pclk_asyncaxi_cp0", "div_aclk_mif_133",
+ ENABLE_PCLK_MIF, 18, 0, 0),
+ GATE(CLK_PCLK_ASYNCAXI_DREX1_3, "pclk_asyncaxi_drex1_3",
+ "div_aclk_mif_133", ENABLE_PCLK_MIF, 17, 0, 0),
+ GATE(CLK_PCLK_ASYNCAXI_DREX1_1, "pclk_asyncaxi_drex1_1",
+ "div_aclk_mif_133", ENABLE_PCLK_MIF, 16, 0, 0),
+ GATE(CLK_PCLK_ASYNCAXI_DREX1_0, "pclk_asyncaxi_drex1_0",
+ "div_aclk_mif_133", ENABLE_PCLK_MIF, 15, 0, 0),
+ GATE(CLK_PCLK_ASYNCAXI_DREX0_3, "pclk_asyncaxi_drex0_3",
+ "div_aclk_mif_133", ENABLE_PCLK_MIF, 14, 0, 0),
+ GATE(CLK_PCLK_ASYNCAXI_DREX0_1, "pclk_asyncaxi_drex0_1",
+ "div_aclk_mif_133", ENABLE_PCLK_MIF, 13, 0, 0),
+ GATE(CLK_PCLK_ASYNCAXI_DREX0_0, "pclk_asyncaxi_drex0_0",
+ "div_aclk_mif_133", ENABLE_PCLK_MIF, 12, 0, 0),
+ GATE(CLK_PCLK_MIFSRVND_133, "pclk_mifsrvnd_133", "div_aclk_mif_133",
+ ENABLE_PCLK_MIF, 11, 0, 0),
+ GATE(CLK_PCLK_PMU_MIF, "pclk_pmu_mif", "div_aclk_mif_133",
+ ENABLE_PCLK_MIF, 10, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_SYSREG_MIF, "pclk_sysreg_mif", "div_aclk_mif_133",
+ ENABLE_PCLK_MIF, 9, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_GPIO_ALIVE, "pclk_gpio_alive", "div_aclk_mif_133",
+ ENABLE_PCLK_MIF, 8, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_ABB, "pclk_abb", "div_aclk_mif_133",
+ ENABLE_PCLK_MIF, 7, 0, 0),
+ GATE(CLK_PCLK_PMU_APBIF, "pclk_pmu_apbif", "div_aclk_mif_133",
+ ENABLE_PCLK_MIF, 6, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_DDR_PHY1, "pclk_ddr_phy1", "div_aclk_mif_133",
+ ENABLE_PCLK_MIF, 5, 0, 0),
+ GATE(CLK_PCLK_DREX1, "pclk_drex1", "div_aclk_mif_133",
+ ENABLE_PCLK_MIF, 3, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_DDR_PHY0, "pclk_ddr_phy0", "div_aclk_mif_133",
+ ENABLE_PCLK_MIF, 2, 0, 0),
+ GATE(CLK_PCLK_DREX0, "pclk_drex0", "div_aclk_mif_133",
+ ENABLE_PCLK_MIF, 0, CLK_IGNORE_UNUSED, 0),
+
+ /* ENABLE_PCLK_MIF_SECURE_DREX0_TZ */
+ GATE(CLK_PCLK_DREX0_TZ, "pclk_drex0_tz", "div_aclk_mif_133",
+ ENABLE_PCLK_MIF_SECURE_DREX0_TZ, 0, 0, 0),
+
+ /* ENABLE_PCLK_MIF_SECURE_DREX1_TZ */
+ GATE(CLK_PCLK_DREX1_TZ, "pclk_drex1_tz", "div_aclk_mif_133",
+ ENABLE_PCLK_MIF_SECURE_DREX1_TZ, 0, 0, 0),
+
+ /* ENABLE_PCLK_MIF_SECURE_MONOTONIC_CNT */
+ GATE(CLK_PCLK_MONOTONIC_CNT, "pclk_monotonic_cnt", "div_aclk_mif_133",
+ ENABLE_PCLK_MIF_SECURE_MONOTONIC_CNT, 0, 0, 0),
+
+ /* ENABLE_PCLK_MIF_SECURE_RTC */
+ GATE(CLK_PCLK_RTC, "pclk_rtc", "div_aclk_mif_133",
+ ENABLE_PCLK_MIF_SECURE_RTC, 0, 0, 0),
+
+ /* ENABLE_SCLK_MIF */
+ GATE(CLK_SCLK_DSIM1_DISP, "sclk_dsim1_disp", "div_sclk_dsim1",
+ ENABLE_SCLK_MIF, 15, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_SCLK_DECON_TV_VCLK_DISP, "sclk_decon_tv_vclk_disp",
+ "div_sclk_decon_tv_vclk", ENABLE_SCLK_MIF,
+ 14, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_SCLK_DSIM0_DISP, "sclk_dsim0_disp", "div_sclk_dsim0",
+ ENABLE_SCLK_MIF, 9, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_SCLK_DSD_DISP, "sclk_dsd_disp", "div_sclk_dsd",
+ ENABLE_SCLK_MIF, 8, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_SCLK_DECON_TV_ECLK_DISP, "sclk_decon_tv_eclk_disp",
+ "div_sclk_decon_tv_eclk", ENABLE_SCLK_MIF,
+ 7, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_SCLK_DECON_VCLK_DISP, "sclk_decon_vclk_disp",
+ "div_sclk_decon_vclk", ENABLE_SCLK_MIF,
+ 6, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_SCLK_DECON_ECLK_DISP, "sclk_decon_eclk_disp",
+ "div_sclk_decon_eclk", ENABLE_SCLK_MIF,
+ 5, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_SCLK_HPM_MIF, "sclk_hpm_mif", "div_sclk_hpm_mif",
+ ENABLE_SCLK_MIF, 4,
+ CLK_IGNORE_UNUSED | CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_SCLK_MFC_PLL, "sclk_mfc_pll", "mout_mfc_pll_div2",
+ ENABLE_SCLK_MIF, 3, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_SCLK_BUS_PLL, "sclk_bus_pll", "mout_bus_pll_div2",
+ ENABLE_SCLK_MIF, 2, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_SCLK_BUS_PLL_APOLLO, "sclk_bus_pll_apollo", "sclk_bus_pll",
+ ENABLE_SCLK_MIF, 1, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_SCLK_BUS_PLL_ATLAS, "sclk_bus_pll_atlas", "sclk_bus_pll",
+ ENABLE_SCLK_MIF, 0, CLK_IGNORE_UNUSED, 0),
+};
+
+static struct samsung_cmu_info mif_cmu_info __initdata = {
+ .pll_clks = mif_pll_clks,
+ .nr_pll_clks = ARRAY_SIZE(mif_pll_clks),
+ .mux_clks = mif_mux_clks,
+ .nr_mux_clks = ARRAY_SIZE(mif_mux_clks),
+ .div_clks = mif_div_clks,
+ .nr_div_clks = ARRAY_SIZE(mif_div_clks),
+ .gate_clks = mif_gate_clks,
+ .nr_gate_clks = ARRAY_SIZE(mif_gate_clks),
+ .fixed_factor_clks = mif_fixed_factor_clks,
+ .nr_fixed_factor_clks = ARRAY_SIZE(mif_fixed_factor_clks),
+ .nr_clk_ids = MIF_NR_CLK,
+ .clk_regs = mif_clk_regs,
+ .nr_clk_regs = ARRAY_SIZE(mif_clk_regs),
+};
+
+static void __init exynos5433_cmu_mif_init(struct device_node *np)
+{
+ samsung_cmu_register_one(np, &mif_cmu_info);
+}
+CLK_OF_DECLARE(exynos5433_cmu_mif, "samsung,exynos5433-cmu-mif",
+ exynos5433_cmu_mif_init);
+
+/*
+ * Register offset definitions for CMU_PERIC
+ */
+#define DIV_PERIC 0x0600
+#define DIV_STAT_PERIC 0x0700
+#define ENABLE_ACLK_PERIC 0x0800
+#define ENABLE_PCLK_PERIC0 0x0900
+#define ENABLE_PCLK_PERIC1 0x0904
+#define ENABLE_SCLK_PERIC 0x0a00
+#define ENABLE_IP_PERIC0 0x0b00
+#define ENABLE_IP_PERIC1 0x0b04
+#define ENABLE_IP_PERIC2 0x0b08
+
+static unsigned long peric_clk_regs[] __initdata = {
+ DIV_PERIC,
+ DIV_STAT_PERIC,
+ ENABLE_ACLK_PERIC,
+ ENABLE_PCLK_PERIC0,
+ ENABLE_PCLK_PERIC1,
+ ENABLE_SCLK_PERIC,
+ ENABLE_IP_PERIC0,
+ ENABLE_IP_PERIC1,
+ ENABLE_IP_PERIC2,
+};
+
+static struct samsung_div_clock peric_div_clks[] __initdata = {
+ /* DIV_PERIC */
+ DIV(CLK_DIV_SCLK_SCI, "div_sclk_sci", "oscclk", DIV_PERIC, 4, 4),
+ DIV(CLK_DIV_SCLK_SC_IN, "div_sclk_sc_in", "oscclk", DIV_PERIC, 0, 4),
+};
+
+static struct samsung_gate_clock peric_gate_clks[] __initdata = {
+ /* ENABLE_ACLK_PERIC */
+ GATE(CLK_ACLK_AHB2APB_PERIC2P, "aclk_ahb2apb_peric2p", "aclk_peric_66",
+ ENABLE_ACLK_PERIC, 3, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_AHB2APB_PERIC1P, "aclk_ahb2apb_peric1p", "aclk_peric_66",
+ ENABLE_ACLK_PERIC, 2, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_AHB2APB_PERIC0P, "aclk_ahb2apb_peric0p", "aclk_peric_66",
+ ENABLE_ACLK_PERIC, 1, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_PERICNP_66, "aclk_pericnp_66", "aclk_peric_66",
+ ENABLE_ACLK_PERIC, 0, CLK_IGNORE_UNUSED, 0),
+
+ /* ENABLE_PCLK_PERIC0 */
+ GATE(CLK_PCLK_SCI, "pclk_sci", "aclk_peric_66", ENABLE_PCLK_PERIC0,
+ 31, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_PCLK_GPIO_FINGER, "pclk_gpio_finger", "aclk_peric_66",
+ ENABLE_PCLK_PERIC0, 30, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_GPIO_ESE, "pclk_gpio_ese", "aclk_peric_66",
+ ENABLE_PCLK_PERIC0, 29, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_PWM, "pclk_pwm", "aclk_peric_66", ENABLE_PCLK_PERIC0,
+ 28, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_PCLK_SPDIF, "pclk_spdif", "aclk_peric_66", ENABLE_PCLK_PERIC0,
+ 26, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_PCLK_PCM1, "pclk_pcm1", "aclk_peric_66", ENABLE_PCLK_PERIC0,
+ 25, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_PCLK_I2S1, "pclk_i2s", "aclk_peric_66", ENABLE_PCLK_PERIC0,
+ 24, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_PCLK_SPI2, "pclk_spi2", "aclk_peric_66", ENABLE_PCLK_PERIC0,
+ 23, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_PCLK_SPI1, "pclk_spi1", "aclk_peric_66", ENABLE_PCLK_PERIC0,
+ 22, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_PCLK_SPI0, "pclk_spi0", "aclk_peric_66", ENABLE_PCLK_PERIC0,
+ 21, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_PCLK_ADCIF, "pclk_adcif", "aclk_peric_66", ENABLE_PCLK_PERIC0,
+ 20, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_PCLK_GPIO_TOUCH, "pclk_gpio_touch", "aclk_peric_66",
+ ENABLE_PCLK_PERIC0, 19, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_GPIO_NFC, "pclk_gpio_nfc", "aclk_peric_66",
+ ENABLE_PCLK_PERIC0, 18, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_GPIO_PERIC, "pclk_gpio_peric", "aclk_peric_66",
+ ENABLE_PCLK_PERIC0, 17, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_PMU_PERIC, "pclk_pmu_peric", "aclk_peric_66",
+ ENABLE_PCLK_PERIC0, 16, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_PCLK_SYSREG_PERIC, "pclk_sysreg_peric", "aclk_peric_66",
+ ENABLE_PCLK_PERIC0, 15,
+ CLK_SET_RATE_PARENT | CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_UART2, "pclk_uart2", "aclk_peric_66", ENABLE_PCLK_PERIC0,
+ 14, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_PCLK_UART1, "pclk_uart1", "aclk_peric_66", ENABLE_PCLK_PERIC0,
+ 13, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_PCLK_UART0, "pclk_uart0", "aclk_peric_66", ENABLE_PCLK_PERIC0,
+ 12, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_PCLK_HSI2C3, "pclk_hsi2c3", "aclk_peric_66",
+ ENABLE_PCLK_PERIC0, 11, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_PCLK_HSI2C2, "pclk_hsi2c2", "aclk_peric_66",
+ ENABLE_PCLK_PERIC0, 10, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_PCLK_HSI2C1, "pclk_hsi2c1", "aclk_peric_66",
+ ENABLE_PCLK_PERIC0, 9, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_PCLK_HSI2C0, "pclk_hsi2c0", "aclk_peric_66",
+ ENABLE_PCLK_PERIC0, 8, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_PCLK_I2C7, "pclk_i2c7", "aclk_peric_66", ENABLE_PCLK_PERIC0,
+ 7, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_PCLK_I2C6, "pclk_i2c6", "aclk_peric_66", ENABLE_PCLK_PERIC0,
+ 6, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_PCLK_I2C5, "pclk_i2c5", "aclk_peric_66", ENABLE_PCLK_PERIC0,
+ 5, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_PCLK_I2C4, "pclk_i2c4", "aclk_peric_66", ENABLE_PCLK_PERIC0,
+ 4, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_PCLK_I2C3, "pclk_i2c3", "aclk_peric_66", ENABLE_PCLK_PERIC0,
+ 3, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_PCLK_I2C2, "pclk_i2c2", "aclk_peric_66", ENABLE_PCLK_PERIC0,
+ 2, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_PCLK_I2C1, "pclk_i2c1", "aclk_peric_66", ENABLE_PCLK_PERIC0,
+ 1, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_PCLK_I2C0, "pclk_i2c0", "aclk_peric_66", ENABLE_PCLK_PERIC0,
+ 0, CLK_SET_RATE_PARENT, 0),
+
+ /* ENABLE_PCLK_PERIC1 */
+ GATE(CLK_PCLK_SPI4, "pclk_spi4", "aclk_peric_66", ENABLE_PCLK_PERIC1,
+ 9, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_PCLK_SPI3, "pclk_spi3", "aclk_peric_66", ENABLE_PCLK_PERIC1,
+ 8, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_PCLK_HSI2C11, "pclk_hsi2c11", "aclk_peric_66",
+ ENABLE_PCLK_PERIC1, 7, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_PCLK_HSI2C10, "pclk_hsi2c10", "aclk_peric_66",
+ ENABLE_PCLK_PERIC1, 6, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_PCLK_HSI2C9, "pclk_hsi2c9", "aclk_peric_66",
+ ENABLE_PCLK_PERIC1, 5, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_PCLK_HSI2C8, "pclk_hsi2c8", "aclk_peric_66",
+ ENABLE_PCLK_PERIC1, 4, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_PCLK_HSI2C7, "pclk_hsi2c7", "aclk_peric_66",
+ ENABLE_PCLK_PERIC1, 3, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_PCLK_HSI2C6, "pclk_hsi2c6", "aclk_peric_66",
+ ENABLE_PCLK_PERIC1, 2, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_PCLK_HSI2C5, "pclk_hsi2c5", "aclk_peric_66",
+ ENABLE_PCLK_PERIC1, 1, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_PCLK_HSI2C4, "pclk_hsi2c4", "aclk_peric_66",
+ ENABLE_PCLK_PERIC1, 0, CLK_SET_RATE_PARENT, 0),
+
+ /* ENABLE_SCLK_PERIC */
+ GATE(CLK_SCLK_IOCLK_SPI4, "sclk_ioclk_spi4", "ioclk_spi4_clk_in",
+ ENABLE_SCLK_PERIC, 21, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_SCLK_IOCLK_SPI3, "sclk_ioclk_spi3", "ioclk_spi3_clk_in",
+ ENABLE_SCLK_PERIC, 20, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_SCLK_SPI4, "sclk_spi4", "sclk_spi4_peric", ENABLE_SCLK_PERIC,
+ 19, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_SCLK_SPI3, "sclk_spi3", "sclk_spi3_peric", ENABLE_SCLK_PERIC,
+ 18, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_SCLK_SCI, "sclk_sci", "div_sclk_sci", ENABLE_SCLK_PERIC,
+ 17, 0, 0),
+ GATE(CLK_SCLK_SC_IN, "sclk_sc_in", "div_sclk_sc_in", ENABLE_SCLK_PERIC,
+ 16, 0, 0),
+ GATE(CLK_SCLK_PWM, "sclk_pwm", "oscclk", ENABLE_SCLK_PERIC, 15, 0, 0),
+ GATE(CLK_SCLK_IOCLK_SPI2, "sclk_ioclk_spi2", "ioclk_spi2_clk_in",
+ ENABLE_SCLK_PERIC, 13, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_SCLK_IOCLK_SPI1, "sclk_ioclk_spi1", "ioclk_spi1_clk_in",
+ ENABLE_SCLK_PERIC, 12,
+ CLK_IGNORE_UNUSED | CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_SCLK_IOCLK_SPI0, "sclk_ioclk_spi0", "ioclk_spi0_clk_in",
+ ENABLE_SCLK_PERIC, 11, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_SCLK_IOCLK_I2S1_BCLK, "sclk_ioclk_i2s1_bclk",
+ "ioclk_i2s1_bclk_in", ENABLE_SCLK_PERIC, 10,
+ CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_SCLK_SPDIF, "sclk_spdif", "sclk_spdif_peric",
+ ENABLE_SCLK_PERIC, 8, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_SCLK_PCM1, "sclk_pcm1", "sclk_pcm1_peric",
+ ENABLE_SCLK_PERIC, 7, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_SCLK_I2S1, "sclk_i2s1", "sclk_i2s1_peric",
+ ENABLE_SCLK_PERIC, 6, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_SCLK_SPI2, "sclk_spi2", "sclk_spi2_peric", ENABLE_SCLK_PERIC,
+ 5, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_SCLK_SPI1, "sclk_spi1", "sclk_spi1_peric", ENABLE_SCLK_PERIC,
+ 4, CLK_IGNORE_UNUSED | CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_SCLK_SPI0, "sclk_spi0", "sclk_spi0_peric", ENABLE_SCLK_PERIC,
+ 3, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_SCLK_UART2, "sclk_uart2", "sclk_uart2_peric",
+ ENABLE_SCLK_PERIC, 2, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_SCLK_UART1, "sclk_uart1", "sclk_uart1_peric",
+ ENABLE_SCLK_PERIC, 1, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_SCLK_UART0, "sclk_uart0", "sclk_uart0_peric",
+ ENABLE_SCLK_PERIC, 0, CLK_SET_RATE_PARENT, 0),
+};
+
+static struct samsung_cmu_info peric_cmu_info __initdata = {
+ .div_clks = peric_div_clks,
+ .nr_div_clks = ARRAY_SIZE(peric_div_clks),
+ .gate_clks = peric_gate_clks,
+ .nr_gate_clks = ARRAY_SIZE(peric_gate_clks),
+ .nr_clk_ids = PERIC_NR_CLK,
+ .clk_regs = peric_clk_regs,
+ .nr_clk_regs = ARRAY_SIZE(peric_clk_regs),
+};
+
+static void __init exynos5433_cmu_peric_init(struct device_node *np)
+{
+ samsung_cmu_register_one(np, &peric_cmu_info);
+}
+
+CLK_OF_DECLARE(exynos5433_cmu_peric, "samsung,exynos5433-cmu-peric",
+ exynos5433_cmu_peric_init);
+
+/*
+ * Register offset definitions for CMU_PERIS
+ */
+#define ENABLE_ACLK_PERIS 0x0800
+#define ENABLE_PCLK_PERIS 0x0900
+#define ENABLE_PCLK_PERIS_SECURE_TZPC 0x0904
+#define ENABLE_PCLK_PERIS_SECURE_SECKEY_APBIF 0x0908
+#define ENABLE_PCLK_PERIS_SECURE_CHIPID_APBIF 0x090c
+#define ENABLE_PCLK_PERIS_SECURE_TOPRTC 0x0910
+#define ENABLE_PCLK_PERIS_SECURE_CUSTOM_EFUSE_APBIF 0x0914
+#define ENABLE_PCLK_PERIS_SECURE_ANTIRBK_CNT_APBIF 0x0918
+#define ENABLE_PCLK_PERIS_SECURE_OTP_CON_APBIF 0x091c
+#define ENABLE_SCLK_PERIS 0x0a00
+#define ENABLE_SCLK_PERIS_SECURE_SECKEY 0x0a04
+#define ENABLE_SCLK_PERIS_SECURE_CHIPID 0x0a08
+#define ENABLE_SCLK_PERIS_SECURE_TOPRTC 0x0a0c
+#define ENABLE_SCLK_PERIS_SECURE_CUSTOM_EFUSE 0x0a10
+#define ENABLE_SCLK_PERIS_SECURE_ANTIRBK_CNT 0x0a14
+#define ENABLE_SCLK_PERIS_SECURE_OTP_CON 0x0a18
+#define ENABLE_IP_PERIS0 0x0b00
+#define ENABLE_IP_PERIS1 0x0b04
+#define ENABLE_IP_PERIS_SECURE_TZPC 0x0b08
+#define ENABLE_IP_PERIS_SECURE_SECKEY 0x0b0c
+#define ENABLE_IP_PERIS_SECURE_CHIPID 0x0b10
+#define ENABLE_IP_PERIS_SECURE_TOPRTC 0x0b14
+#define ENABLE_IP_PERIS_SECURE_CUSTOM_EFUSE 0x0b18
+#define ENABLE_IP_PERIS_SECURE_ANTIRBK_CNT 0x0b1c
+#define ENABLE_IP_PERIS_SECURE_OTP_CON 0x0b20
+
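+/*
+ * Offsets of the CMU_PERIS registers that are handed to the common Samsung
+ * CMU code so it can save and restore them across suspend/resume.
+ */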
+static unsigned long peris_clk_regs[] __initdata = {
+ ENABLE_ACLK_PERIS,
+ ENABLE_PCLK_PERIS,
+ ENABLE_PCLK_PERIS_SECURE_TZPC,
+ ENABLE_PCLK_PERIS_SECURE_SECKEY_APBIF,
+ ENABLE_PCLK_PERIS_SECURE_CHIPID_APBIF,
+ ENABLE_PCLK_PERIS_SECURE_TOPRTC,
+ ENABLE_PCLK_PERIS_SECURE_CUSTOM_EFUSE_APBIF,
+ ENABLE_PCLK_PERIS_SECURE_ANTIRBK_CNT_APBIF,
+ ENABLE_PCLK_PERIS_SECURE_OTP_CON_APBIF,
+ ENABLE_SCLK_PERIS,
+ ENABLE_SCLK_PERIS_SECURE_SECKEY,
+ ENABLE_SCLK_PERIS_SECURE_CHIPID,
+ ENABLE_SCLK_PERIS_SECURE_TOPRTC,
+ ENABLE_SCLK_PERIS_SECURE_CUSTOM_EFUSE,
+ ENABLE_SCLK_PERIS_SECURE_ANTIRBK_CNT,
+ ENABLE_SCLK_PERIS_SECURE_OTP_CON,
+ ENABLE_IP_PERIS0,
+ ENABLE_IP_PERIS1,
+ ENABLE_IP_PERIS_SECURE_TZPC,
+ ENABLE_IP_PERIS_SECURE_SECKEY,
+ ENABLE_IP_PERIS_SECURE_CHIPID,
+ ENABLE_IP_PERIS_SECURE_TOPRTC,
+ ENABLE_IP_PERIS_SECURE_CUSTOM_EFUSE,
+ ENABLE_IP_PERIS_SECURE_ANTIRBK_CNT,
+ ENABLE_IP_PERIS_SECURE_OTP_CON,
+};
+
+static struct samsung_gate_clock peris_gate_clks[] __initdata = {
+ /* ENABLE_ACLK_PERIS */
+ GATE(CLK_ACLK_AHB2APB_PERIS1P, "aclk_ahb2apb_peris1p", "aclk_peris_66",
+ ENABLE_ACLK_PERIS, 2, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_AHB2APB_PERIS0P, "aclk_ahb2apb_peris0p", "aclk_peris_66",
+ ENABLE_ACLK_PERIS, 1, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_PERISNP_66, "aclk_perisnp_66", "aclk_peris_66",
+ ENABLE_ACLK_PERIS, 0, CLK_IGNORE_UNUSED, 0),
+
+ /* ENABLE_PCLK_PERIS */
+ GATE(CLK_PCLK_HPM_APBIF, "pclk_hpm_apbif", "aclk_peris_66",
+ ENABLE_PCLK_PERIS, 30, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_TMU1_APBIF, "pclk_tmu1_apbif", "aclk_peris_66",
+ ENABLE_PCLK_PERIS, 24, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_TMU0_APBIF, "pclk_tmu0_apbif", "aclk_peris_66",
+ ENABLE_PCLK_PERIS, 23, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_PMU_PERIS, "pclk_pmu_peris", "aclk_peris_66",
+ ENABLE_PCLK_PERIS, 22, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_SYSREG_PERIS, "pclk_sysreg_peris", "aclk_peris_66",
+ ENABLE_PCLK_PERIS, 21, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_CMU_TOP_APBIF, "pclk_cmu_top_apbif", "aclk_peris_66",
+ ENABLE_PCLK_PERIS, 20, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_WDT_APOLLO, "pclk_wdt_apollo", "aclk_peris_66",
+ ENABLE_PCLK_PERIS, 17, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_WDT_ATLAS, "pclk_wdt_atlas", "aclk_peris_66",
+ ENABLE_PCLK_PERIS, 16, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_MCT, "pclk_mct", "aclk_peris_66",
+ ENABLE_PCLK_PERIS, 15, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_HDMI_CEC, "pclk_hdmi_cec", "aclk_peris_66",
+ ENABLE_PCLK_PERIS, 14, CLK_IGNORE_UNUSED, 0),
+
+ /* ENABLE_PCLK_PERIS_SECURE_TZPC */
+ GATE(CLK_PCLK_TZPC12, "pclk_tzpc12", "aclk_peris_66",
+ ENABLE_PCLK_PERIS_SECURE_TZPC, 12, 0, 0),
+ GATE(CLK_PCLK_TZPC11, "pclk_tzpc11", "aclk_peris_66",
+ ENABLE_PCLK_PERIS_SECURE_TZPC, 11, 0, 0),
+ GATE(CLK_PCLK_TZPC10, "pclk_tzpc10", "aclk_peris_66",
+ ENABLE_PCLK_PERIS_SECURE_TZPC, 10, 0, 0),
+ GATE(CLK_PCLK_TZPC9, "pclk_tzpc9", "aclk_peris_66",
+ ENABLE_PCLK_PERIS_SECURE_TZPC, 9, 0, 0),
+ GATE(CLK_PCLK_TZPC8, "pclk_tzpc8", "aclk_peris_66",
+ ENABLE_PCLK_PERIS_SECURE_TZPC, 8, 0, 0),
+ GATE(CLK_PCLK_TZPC7, "pclk_tzpc7", "aclk_peris_66",
+ ENABLE_PCLK_PERIS_SECURE_TZPC, 7, 0, 0),
+ GATE(CLK_PCLK_TZPC6, "pclk_tzpc6", "aclk_peris_66",
+ ENABLE_PCLK_PERIS_SECURE_TZPC, 6, 0, 0),
+ GATE(CLK_PCLK_TZPC5, "pclk_tzpc5", "aclk_peris_66",
+ ENABLE_PCLK_PERIS_SECURE_TZPC, 5, 0, 0),
+ GATE(CLK_PCLK_TZPC4, "pclk_tzpc4", "aclk_peris_66",
+ ENABLE_PCLK_PERIS_SECURE_TZPC, 4, 0, 0),
+ GATE(CLK_PCLK_TZPC3, "pclk_tzpc3", "aclk_peris_66",
+ ENABLE_PCLK_PERIS_SECURE_TZPC, 3, 0, 0),
+ GATE(CLK_PCLK_TZPC2, "pclk_tzpc2", "aclk_peris_66",
+ ENABLE_PCLK_PERIS_SECURE_TZPC, 2, 0, 0),
+ GATE(CLK_PCLK_TZPC1, "pclk_tzpc1", "aclk_peris_66",
+ ENABLE_PCLK_PERIS_SECURE_TZPC, 1, 0, 0),
+ GATE(CLK_PCLK_TZPC0, "pclk_tzpc0", "aclk_peris_66",
+ ENABLE_PCLK_PERIS_SECURE_TZPC, 0, 0, 0),
+
+ /* ENABLE_PCLK_PERIS_SECURE_SECKEY_APBIF */
+ GATE(CLK_PCLK_SECKEY_APBIF, "pclk_seckey_apbif", "aclk_peris_66",
+ ENABLE_PCLK_PERIS_SECURE_SECKEY_APBIF, 0, 0, 0),
+
+ /* ENABLE_PCLK_PERIS_SECURE_CHIPID_APBIF */
+ GATE(CLK_PCLK_CHIPID_APBIF, "pclk_chipid_apbif", "aclk_peris_66",
+ ENABLE_PCLK_PERIS_SECURE_CHIPID_APBIF, 0, 0, 0),
+
+ /* ENABLE_PCLK_PERIS_SECURE_TOPRTC */
+ GATE(CLK_PCLK_TOPRTC, "pclk_toprtc", "aclk_peris_66",
+ ENABLE_PCLK_PERIS_SECURE_TOPRTC, 0, 0, 0),
+
+ /* ENABLE_PCLK_PERIS_SECURE_CUSTOM_EFUSE_APBIF */
+ GATE(CLK_PCLK_CUSTOM_EFUSE_APBIF, "pclk_custom_efuse_apbif",
+ "aclk_peris_66",
+ ENABLE_PCLK_PERIS_SECURE_CUSTOM_EFUSE_APBIF, 0, 0, 0),
+
+ /* ENABLE_PCLK_PERIS_SECURE_ANTIRBK_CNT_APBIF */
+ GATE(CLK_PCLK_ANTIRBK_CNT_APBIF, "pclk_antirbk_cnt_apbif",
+ "aclk_peris_66",
+ ENABLE_PCLK_PERIS_SECURE_ANTIRBK_CNT_APBIF, 0, 0, 0),
+
+ /* ENABLE_PCLK_PERIS_SECURE_OTP_CON_APBIF */
+ GATE(CLK_PCLK_OTP_CON_APBIF, "pclk_otp_con_apbif",
+ "aclk_peris_66",
+ ENABLE_PCLK_PERIS_SECURE_OTP_CON_APBIF, 0, 0, 0),
+
+ /* ENABLE_SCLK_PERIS */
+ GATE(CLK_SCLK_ASV_TB, "sclk_asv_tb", "oscclk_efuse_common",
+ ENABLE_SCLK_PERIS, 10, 0, 0),
+ GATE(CLK_SCLK_TMU1, "sclk_tmu1", "oscclk_efuse_common",
+ ENABLE_SCLK_PERIS, 4, 0, 0),
+ GATE(CLK_SCLK_TMU0, "sclk_tmu0", "oscclk_efuse_common",
+ ENABLE_SCLK_PERIS, 3, 0, 0),
+
+ /* ENABLE_SCLK_PERIS_SECURE_SECKEY */
+ GATE(CLK_SCLK_SECKEY, "sclk_seckey", "oscclk_efuse_common",
+ ENABLE_SCLK_PERIS_SECURE_SECKEY, 0, 0, 0),
+
+ /* ENABLE_SCLK_PERIS_SECURE_CHIPID */
+ GATE(CLK_SCLK_CHIPID, "sclk_chipid", "oscclk_efuse_common",
+ ENABLE_SCLK_PERIS_SECURE_CHIPID, 0, 0, 0),
+
+ /* ENABLE_SCLK_PERIS_SECURE_TOPRTC */
+ GATE(CLK_SCLK_TOPRTC, "sclk_toprtc", "oscclk_efuse_common",
+ ENABLE_SCLK_PERIS_SECURE_TOPRTC, 0, 0, 0),
+
+ /* ENABLE_SCLK_PERIS_SECURE_CUSTOM_EFUSE */
+ GATE(CLK_SCLK_CUSTOM_EFUSE, "sclk_custom_efuse", "oscclk_efuse_common",
+ ENABLE_SCLK_PERIS_SECURE_CUSTOM_EFUSE, 0, 0, 0),
+
+ /* ENABLE_SCLK_PERIS_SECURE_ANTIRBK_CNT */
+ GATE(CLK_SCLK_ANTIRBK_CNT, "sclk_antirbk_cnt", "oscclk_efuse_common",
+ ENABLE_SCLK_PERIS_SECURE_ANTIRBK_CNT, 0, 0, 0),
+
+ /* ENABLE_SCLK_PERIS_SECURE_OTP_CON */
+ GATE(CLK_SCLK_OTP_CON, "sclk_otp_con", "oscclk_efuse_common",
+ ENABLE_SCLK_PERIS_SECURE_OTP_CON, 0, 0, 0),
+};
+
+static struct samsung_cmu_info peris_cmu_info __initdata = {
+ .gate_clks = peris_gate_clks,
+ .nr_gate_clks = ARRAY_SIZE(peris_gate_clks),
+ .nr_clk_ids = PERIS_NR_CLK,
+ .clk_regs = peris_clk_regs,
+ .nr_clk_regs = ARRAY_SIZE(peris_clk_regs),
+};
+
+static void __init exynos5433_cmu_peris_init(struct device_node *np)
+{
+ samsung_cmu_register_one(np, &peris_cmu_info);
+}
+
+CLK_OF_DECLARE(exynos5433_cmu_peris, "samsung,exynos5433-cmu-peris",
+ exynos5433_cmu_peris_init);
+
+/*
+ * Register offset definitions for CMU_FSYS
+ */
+#define MUX_SEL_FSYS0 0x0200
+#define MUX_SEL_FSYS1 0x0204
+#define MUX_SEL_FSYS2 0x0208
+#define MUX_SEL_FSYS3 0x020c
+#define MUX_SEL_FSYS4 0x0210
+#define MUX_ENABLE_FSYS0 0x0300
+#define MUX_ENABLE_FSYS1 0x0304
+#define MUX_ENABLE_FSYS2 0x0308
+#define MUX_ENABLE_FSYS3 0x030c
+#define MUX_ENABLE_FSYS4 0x0310
+#define MUX_STAT_FSYS0 0x0400
+#define MUX_STAT_FSYS1 0x0404
+#define MUX_STAT_FSYS2 0x0408
+#define MUX_STAT_FSYS3 0x040c
+#define MUX_STAT_FSYS4 0x0410
+#define MUX_IGNORE_FSYS2 0x0508
+#define MUX_IGNORE_FSYS3 0x050c
+#define ENABLE_ACLK_FSYS0 0x0800
+#define ENABLE_ACLK_FSYS1 0x0804
+#define ENABLE_PCLK_FSYS 0x0900
+#define ENABLE_SCLK_FSYS 0x0a00
+#define ENABLE_IP_FSYS0 0x0b00
+#define ENABLE_IP_FSYS1 0x0b04
+
+/* list of parent clocks */
+PNAME(mout_sclk_ufs_mphy_user_p) = { "oscclk", "sclk_ufs_mphy", };
+PNAME(mout_aclk_fsys_200_user_p) = { "oscclk", "div_aclk_fsys_200", };
+PNAME(mout_sclk_pcie_100_user_p) = { "oscclk", "sclk_pcie_100_fsys", };
+PNAME(mout_sclk_ufsunipro_user_p) = { "oscclk", "sclk_ufsunipro_fsys", };
+PNAME(mout_sclk_mmc2_user_p) = { "oscclk", "sclk_mmc2_fsys", };
+PNAME(mout_sclk_mmc1_user_p) = { "oscclk", "sclk_mmc1_fsys", };
+PNAME(mout_sclk_mmc0_user_p) = { "oscclk", "sclk_mmc0_fsys", };
+PNAME(mout_sclk_usbhost30_user_p) = { "oscclk", "sclk_usbhost30_fsys", };
+PNAME(mout_sclk_usbdrd30_user_p) = { "oscclk", "sclk_usbdrd30_fsys", };
+
+PNAME(mout_phyclk_usbhost30_uhost30_pipe_pclk_user_p)
+ = { "oscclk", "phyclk_usbhost30_uhost30_pipe_pclk_phy", };
+PNAME(mout_phyclk_usbhost30_uhost30_phyclock_user_p)
+ = { "oscclk", "phyclk_usbhost30_uhost30_phyclock_phy", };
+PNAME(mout_phyclk_usbhost20_phy_hsic1_p)
+ = { "oscclk", "phyclk_usbhost20_phy_hsic1_phy", };
+PNAME(mout_phyclk_usbhost20_phy_clk48mohci_user_p)
+ = { "oscclk", "phyclk_usbhost20_phy_clk48mohci_phy", };
+PNAME(mout_phyclk_usbhost20_phy_phyclock_user_p)
+ = { "oscclk", "phyclk_usbhost20_phy_phyclock_phy", };
+PNAME(mout_phyclk_usbhost20_phy_freeclk_user_p)
+ = { "oscclk", "phyclk_usbhost20_phy_freeclk_phy", };
+PNAME(mout_phyclk_usbdrd30_udrd30_pipe_pclk_p)
+ = { "oscclk", "phyclk_usbdrd30_udrd30_pipe_pclk_phy", };
+PNAME(mout_phyclk_usbdrd30_udrd30_phyclock_user_p)
+ = { "oscclk", "phyclk_usbdrd30_udrd30_phyclock_phy", };
+PNAME(mout_phyclk_ufs_rx1_symbol_user_p)
+ = { "oscclk", "phyclk_ufs_rx1_symbol_phy", };
+PNAME(mout_phyclk_ufs_rx0_symbol_user_p)
+ = { "oscclk", "phyclk_ufs_rx0_symbol_phy", };
+PNAME(mout_phyclk_ufs_tx1_symbol_user_p)
+ = { "oscclk", "phyclk_ufs_tx1_symbol_phy", };
+PNAME(mout_phyclk_ufs_tx0_symbol_user_p)
+ = { "oscclk", "phyclk_ufs_tx0_symbol_phy", };
+PNAME(mout_phyclk_lli_mphy_to_ufs_user_p)
+ = { "oscclk", "phyclk_lli_mphy_to_ufs_phy", };
+PNAME(mout_sclk_mphy_p)
+ = { "mout_sclk_ufs_mphy_user",
+ "mout_phyclk_lli_mphy_to_ufs_user", };
+
+static unsigned long fsys_clk_regs[] __initdata = {
+ MUX_SEL_FSYS0,
+ MUX_SEL_FSYS1,
+ MUX_SEL_FSYS2,
+ MUX_SEL_FSYS3,
+ MUX_SEL_FSYS4,
+ MUX_ENABLE_FSYS0,
+ MUX_ENABLE_FSYS1,
+ MUX_ENABLE_FSYS2,
+ MUX_ENABLE_FSYS3,
+ MUX_ENABLE_FSYS4,
+ MUX_STAT_FSYS0,
+ MUX_STAT_FSYS1,
+ MUX_STAT_FSYS2,
+ MUX_STAT_FSYS3,
+ MUX_STAT_FSYS4,
+ MUX_IGNORE_FSYS2,
+ MUX_IGNORE_FSYS3,
+ ENABLE_ACLK_FSYS0,
+ ENABLE_ACLK_FSYS1,
+ ENABLE_PCLK_FSYS,
+ ENABLE_SCLK_FSYS,
+ ENABLE_IP_FSYS0,
+ ENABLE_IP_FSYS1,
+};
+
+static struct samsung_fixed_rate_clock fsys_fixed_clks[] __initdata = {
+ /* PHY clocks from USBDRD30_PHY */
+ FRATE(CLK_PHYCLK_USBDRD30_UDRD30_PHYCLOCK_PHY,
+ "phyclk_usbdrd30_udrd30_phyclock_phy", NULL,
+ CLK_IS_ROOT, 60000000),
+ FRATE(CLK_PHYCLK_USBDRD30_UDRD30_PIPE_PCLK_PHY,
+ "phyclk_usbdrd30_udrd30_pipe_pclk_phy", NULL,
+ CLK_IS_ROOT, 125000000),
+ /* PHY clocks from USBHOST30_PHY */
+ FRATE(CLK_PHYCLK_USBHOST30_UHOST30_PHYCLOCK_PHY,
+ "phyclk_usbhost30_uhost30_phyclock_phy", NULL,
+ CLK_IS_ROOT, 60000000),
+ FRATE(CLK_PHYCLK_USBHOST30_UHOST30_PIPE_PCLK_PHY,
+ "phyclk_usbhost30_uhost30_pipe_pclk_phy", NULL,
+ CLK_IS_ROOT, 125000000),
+ /* PHY clocks from USBHOST20_PHY */
+ FRATE(CLK_PHYCLK_USBHOST20_PHY_FREECLK_PHY,
+ "phyclk_usbhost20_phy_freeclk_phy", NULL, CLK_IS_ROOT,
+ 60000000),
+ FRATE(CLK_PHYCLK_USBHOST20_PHY_PHYCLOCK_PHY,
+ "phyclk_usbhost20_phy_phyclock_phy", NULL, CLK_IS_ROOT,
+ 60000000),
+ FRATE(CLK_PHYCLK_USBHOST20_PHY_CLK48MOHCI_PHY,
+ "phyclk_usbhost20_phy_clk48mohci_phy", NULL,
+ CLK_IS_ROOT, 48000000),
+ FRATE(CLK_PHYCLK_USBHOST20_PHY_HSIC1_PHY,
+ "phyclk_usbhost20_phy_hsic1_phy", NULL, CLK_IS_ROOT,
+ 60000000),
+ /* PHY clocks from UFS_PHY */
+ FRATE(CLK_PHYCLK_UFS_TX0_SYMBOL_PHY, "phyclk_ufs_tx0_symbol_phy",
+ NULL, CLK_IS_ROOT, 300000000),
+ FRATE(CLK_PHYCLK_UFS_RX0_SYMBOL_PHY, "phyclk_ufs_rx0_symbol_phy",
+ NULL, CLK_IS_ROOT, 300000000),
+ FRATE(CLK_PHYCLK_UFS_TX1_SYMBOL_PHY, "phyclk_ufs_tx1_symbol_phy",
+ NULL, CLK_IS_ROOT, 300000000),
+ FRATE(CLK_PHYCLK_UFS_RX1_SYMBOL_PHY, "phyclk_ufs_rx1_symbol_phy",
+ NULL, CLK_IS_ROOT, 300000000),
+ /* PHY clocks from LLI_PHY */
+ FRATE(CLK_PHYCLK_LLI_MPHY_TO_UFS_PHY, "phyclk_lli_mphy_to_ufs_phy",
+ NULL, CLK_IS_ROOT, 26000000),
+};
+
+static struct samsung_mux_clock fsys_mux_clks[] __initdata = {
+ /* MUX_SEL_FSYS0 */
+ MUX(CLK_MOUT_SCLK_UFS_MPHY_USER, "mout_sclk_ufs_mphy_user",
+ mout_sclk_ufs_mphy_user_p, MUX_SEL_FSYS0, 4, 1),
+ MUX(CLK_MOUT_ACLK_FSYS_200_USER, "mout_aclk_fsys_200_user",
+ mout_aclk_fsys_200_user_p, MUX_SEL_FSYS0, 0, 1),
+
+ /* MUX_SEL_FSYS1 */
+ MUX(CLK_MOUT_SCLK_PCIE_100_USER, "mout_sclk_pcie_100_user",
+ mout_sclk_pcie_100_user_p, MUX_SEL_FSYS1, 28, 1),
+ MUX(CLK_MOUT_SCLK_UFSUNIPRO_USER, "mout_sclk_ufsunipro_user",
+ mout_sclk_ufsunipro_user_p, MUX_SEL_FSYS1, 24, 1),
+ MUX(CLK_MOUT_SCLK_MMC2_USER, "mout_sclk_mmc2_user",
+ mout_sclk_mmc2_user_p, MUX_SEL_FSYS1, 20, 1),
+ MUX(CLK_MOUT_SCLK_MMC1_USER, "mout_sclk_mmc1_user",
+ mout_sclk_mmc1_user_p, MUX_SEL_FSYS1, 16, 1),
+ MUX(CLK_MOUT_SCLK_MMC0_USER, "mout_sclk_mmc0_user",
+ mout_sclk_mmc0_user_p, MUX_SEL_FSYS1, 12, 1),
+ MUX(CLK_MOUT_SCLK_USBHOST30_USER, "mout_sclk_usbhost30_user",
+ mout_sclk_usbhost30_user_p, MUX_SEL_FSYS1, 4, 1),
+ MUX(CLK_MOUT_SCLK_USBDRD30_USER, "mout_sclk_usbdrd30_user",
+ mout_sclk_usbdrd30_user_p, MUX_SEL_FSYS1, 0, 1),
+
+ /* MUX_SEL_FSYS2 */
+ MUX(CLK_MOUT_PHYCLK_USBHOST30_UHOST30_PIPE_PCLK_USER,
+ "mout_phyclk_usbhost30_uhost30_pipe_pclk_user",
+ mout_phyclk_usbhost30_uhost30_pipe_pclk_user_p,
+ MUX_SEL_FSYS2, 28, 1),
+ MUX(CLK_MOUT_PHYCLK_USBHOST30_UHOST30_PHYCLOCK_USER,
+ "mout_phyclk_usbhost30_uhost30_phyclock_user",
+ mout_phyclk_usbhost30_uhost30_phyclock_user_p,
+ MUX_SEL_FSYS2, 24, 1),
+ MUX(CLK_MOUT_PHYCLK_USBHOST20_PHY_HSIC1_USER,
+ "mout_phyclk_usbhost20_phy_hsic1",
+ mout_phyclk_usbhost20_phy_hsic1_p,
+ MUX_SEL_FSYS2, 20, 1),
+ MUX(CLK_MOUT_PHYCLK_USBHOST20_PHY_CLK48MOHCI_USER,
+ "mout_phyclk_usbhost20_phy_clk48mohci_user",
+ mout_phyclk_usbhost20_phy_clk48mohci_user_p,
+ MUX_SEL_FSYS2, 16, 1),
+ MUX(CLK_MOUT_PHYCLK_USBHOST20_PHY_PHYCLOCK_USER,
+ "mout_phyclk_usbhost20_phy_phyclock_user",
+ mout_phyclk_usbhost20_phy_phyclock_user_p,
+ MUX_SEL_FSYS2, 12, 1),
+ MUX(CLK_MOUT_PHYCLK_USBHOST20_PHY_PHY_FREECLK_USER,
+ "mout_phyclk_usbhost20_phy_freeclk_user",
+ mout_phyclk_usbhost20_phy_freeclk_user_p,
+ MUX_SEL_FSYS2, 8, 1),
+ MUX(CLK_MOUT_PHYCLK_USBDRD30_UDRD30_PIPE_PCLK_USER,
+ "mout_phyclk_usbdrd30_udrd30_pipe_pclk_user",
+ mout_phyclk_usbdrd30_udrd30_pipe_pclk_p,
+ MUX_SEL_FSYS2, 4, 1),
+ MUX(CLK_MOUT_PHYCLK_USBDRD30_UDRD30_PHYCLOCK_USER,
+ "mout_phyclk_usbdrd30_udrd30_phyclock_user",
+ mout_phyclk_usbdrd30_udrd30_phyclock_user_p,
+ MUX_SEL_FSYS2, 0, 1),
+
+ /* MUX_SEL_FSYS3 */
+ MUX(CLK_MOUT_PHYCLK_UFS_RX1_SYMBOL_USER,
+ "mout_phyclk_ufs_rx1_symbol_user",
+ mout_phyclk_ufs_rx1_symbol_user_p,
+ MUX_SEL_FSYS3, 16, 1),
+ MUX(CLK_MOUT_PHYCLK_UFS_RX0_SYMBOL_USER,
+ "mout_phyclk_ufs_rx0_symbol_user",
+ mout_phyclk_ufs_rx0_symbol_user_p,
+ MUX_SEL_FSYS3, 12, 1),
+ MUX(CLK_MOUT_PHYCLK_UFS_TX1_SYMBOL_USER,
+ "mout_phyclk_ufs_tx1_symbol_user",
+ mout_phyclk_ufs_tx1_symbol_user_p,
+ MUX_SEL_FSYS3, 8, 1),
+ MUX(CLK_MOUT_PHYCLK_UFS_TX0_SYMBOL_USER,
+ "mout_phyclk_ufs_tx0_symbol_user",
+ mout_phyclk_ufs_tx0_symbol_user_p,
+ MUX_SEL_FSYS3, 4, 1),
+ MUX(CLK_MOUT_PHYCLK_LLI_MPHY_TO_UFS_USER,
+ "mout_phyclk_lli_mphy_to_ufs_user",
+ mout_phyclk_lli_mphy_to_ufs_user_p,
+ MUX_SEL_FSYS3, 0, 1),
+
+ /* MUX_SEL_FSYS4 */
+ MUX(CLK_MOUT_SCLK_MPHY, "mout_sclk_mphy", mout_sclk_mphy_p,
+ MUX_SEL_FSYS4, 0, 1),
+};
+
+static struct samsung_gate_clock fsys_gate_clks[] __initdata = {
+ /* ENABLE_ACLK_FSYS0 */
+ GATE(CLK_ACLK_PCIE, "aclk_pcie", "mout_aclk_fsys_200_user",
+ ENABLE_ACLK_FSYS0, 13, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_PDMA1, "aclk_pdma1", "mout_aclk_fsys_200_user",
+ ENABLE_ACLK_FSYS0, 11, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_TSI, "aclk_tsi", "mout_aclk_fsys_200_user",
+ ENABLE_ACLK_FSYS0, 10, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_MMC2, "aclk_mmc2", "mout_aclk_fsys_200_user",
+ ENABLE_ACLK_FSYS0, 8, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_MMC1, "aclk_mmc1", "mout_aclk_fsys_200_user",
+ ENABLE_ACLK_FSYS0, 7, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_MMC0, "aclk_mmc0", "mout_aclk_fsys_200_user",
+ ENABLE_ACLK_FSYS0, 6, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_UFS, "aclk_ufs", "mout_aclk_fsys_200_user",
+ ENABLE_ACLK_FSYS0, 5, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_USBHOST20, "aclk_usbhost20", "mout_aclk_fsys_200_user",
+ ENABLE_ACLK_FSYS0, 3, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_USBHOST30, "aclk_usbhost30", "mout_aclk_fsys_200_user",
+ ENABLE_ACLK_FSYS0, 2, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_USBDRD30, "aclk_usbdrd30", "mout_aclk_fsys_200_user",
+ ENABLE_ACLK_FSYS0, 1, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_PDMA0, "aclk_pdma0", "mout_aclk_fsys_200_user",
+ ENABLE_ACLK_FSYS0, 0, CLK_IGNORE_UNUSED, 0),
+
+ /* ENABLE_ACLK_FSYS1 */
+ GATE(CLK_ACLK_XIU_FSYSPX, "aclk_xiu_fsyspx", "mout_aclk_fsys_200_user",
+ ENABLE_ACLK_FSYS1, 27, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_AHB_USBLINKH1, "aclk_ahb_usblinkh1",
+ "mout_aclk_fsys_200_user", ENABLE_ACLK_FSYS1,
+ 26, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_SMMU_PDMA1, "aclk_smmu_pdma1", "mout_aclk_fsys_200_user",
+ ENABLE_ACLK_FSYS1, 25, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_BTS_PCIE, "aclk_bts_pcie", "mout_aclk_fsys_200_user",
+ ENABLE_ACLK_FSYS1, 24, 0, 0),
+ GATE(CLK_ACLK_AXIUS_PDMA1, "aclk_axius_pdma1",
+ "mout_aclk_fsys_200_user", ENABLE_ACLK_FSYS1,
+ 22, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_SMMU_PDMA0, "aclk_smmu_pdma0", "mout_aclk_fsys_200_user",
+ ENABLE_ACLK_FSYS1, 17, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_BTS_UFS, "aclk_bts_ufs", "mout_aclk_fsys_200_user",
+ ENABLE_ACLK_FSYS1, 14, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_BTS_USBHOST30, "aclk_bts_usbhost30",
+ "mout_aclk_fsys_200_user", ENABLE_ACLK_FSYS1,
+ 13, 0, 0),
+ GATE(CLK_ACLK_BTS_USBDRD30, "aclk_bts_usbdrd30",
+ "mout_aclk_fsys_200_user", ENABLE_ACLK_FSYS1,
+ 12, 0, 0),
+ GATE(CLK_ACLK_AXIUS_PDMA0, "aclk_axius_pdma0",
+ "mout_aclk_fsys_200_user", ENABLE_ACLK_FSYS1,
+ 11, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_AXIUS_USBHS, "aclk_axius_usbhs",
+ "mout_aclk_fsys_200_user", ENABLE_ACLK_FSYS1,
+ 10, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_AXIUS_FSYSSX, "aclk_axius_fsyssx",
+ "mout_aclk_fsys_200_user", ENABLE_ACLK_FSYS1,
+ 9, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_AHB2APB_FSYSP, "aclk_ahb2apb_fsysp",
+ "mout_aclk_fsys_200_user", ENABLE_ACLK_FSYS1,
+ 8, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_AHB2AXI_USBHS, "aclk_ahb2axi_usbhs",
+ "mout_aclk_fsys_200_user", ENABLE_ACLK_FSYS1,
+ 7, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_AHB_USBLINKH0, "aclk_ahb_usblinkh0",
+ "mout_aclk_fsys_200_user", ENABLE_ACLK_FSYS1,
+ 6, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_AHB_USBHS, "aclk_ahb_usbhs", "mout_aclk_fsys_200_user",
+ ENABLE_ACLK_FSYS1, 5, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_AHB_FSYSH, "aclk_ahb_fsysh", "mout_aclk_fsys_200_user",
+ ENABLE_ACLK_FSYS1, 4, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_XIU_FSYSX, "aclk_xiu_fsysx", "mout_aclk_fsys_200_user",
+ ENABLE_ACLK_FSYS1, 3, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_XIU_FSYSSX, "aclk_xiu_fsyssx", "mout_aclk_fsys_200_user",
+ ENABLE_ACLK_FSYS1, 2, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_FSYSNP_200, "aclk_fsysnp_200", "mout_aclk_fsys_200_user",
+ ENABLE_ACLK_FSYS1, 1, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_FSYSND_200, "aclk_fsysnd_200", "mout_aclk_fsys_200_user",
+ ENABLE_ACLK_FSYS1, 0, CLK_IGNORE_UNUSED, 0),
+
+ /* ENABLE_PCLK_FSYS */
+ GATE(CLK_PCLK_PCIE_CTRL, "pclk_pcie_ctrl", "mout_aclk_fsys_200_user",
+ ENABLE_PCLK_FSYS, 17, 0, 0),
+ GATE(CLK_PCLK_SMMU_PDMA1, "pclk_smmu_pdma1", "mout_aclk_fsys_200_user",
+ ENABLE_PCLK_FSYS, 16, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_PCIE_PHY, "pclk_pcie_phy", "mout_aclk_fsys_200_user",
+ ENABLE_PCLK_FSYS, 14, 0, 0),
+ GATE(CLK_PCLK_BTS_PCIE, "pclk_bts_pcie", "mout_aclk_fsys_200_user",
+ ENABLE_PCLK_FSYS, 13, 0, 0),
+ GATE(CLK_PCLK_SMMU_PDMA0, "pclk_smmu_pdma0", "mout_aclk_fsys_200_user",
+ ENABLE_PCLK_FSYS, 8, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_BTS_UFS, "pclk_bts_ufs", "mout_aclk_fsys_200_user",
+ ENABLE_PCLK_FSYS, 5, 0, 0),
+ GATE(CLK_PCLK_BTS_USBHOST30, "pclk_bts_usbhost30",
+ "mout_aclk_fsys_200_user", ENABLE_PCLK_FSYS, 4, 0, 0),
+ GATE(CLK_PCLK_BTS_USBDRD30, "pclk_bts_usbdrd30",
+ "mout_aclk_fsys_200_user", ENABLE_PCLK_FSYS, 3, 0, 0),
+ GATE(CLK_PCLK_GPIO_FSYS, "pclk_gpio_fsys", "mout_aclk_fsys_200_user",
+ ENABLE_PCLK_FSYS, 2, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_PMU_FSYS, "pclk_pmu_fsys", "mout_aclk_fsys_200_user",
+ ENABLE_PCLK_FSYS, 1, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_SYSREG_FSYS, "pclk_sysreg_fsys",
+ "mout_aclk_fsys_200_user", ENABLE_PCLK_FSYS,
+ 0, CLK_IGNORE_UNUSED, 0),
+
+ /* ENABLE_SCLK_FSYS */
+ GATE(CLK_SCLK_PCIE_100, "sclk_pcie_100", "mout_sclk_pcie_100_user",
+ ENABLE_SCLK_FSYS, 21, 0, 0),
+ GATE(CLK_PHYCLK_USBHOST30_UHOST30_PIPE_PCLK,
+ "phyclk_usbhost30_uhost30_pipe_pclk",
+ "mout_phyclk_usbhost30_uhost30_pipe_pclk_user",
+ ENABLE_SCLK_FSYS, 18, 0, 0),
+ GATE(CLK_PHYCLK_USBHOST30_UHOST30_PHYCLOCK,
+ "phyclk_usbhost30_uhost30_phyclock",
+ "mout_phyclk_usbhost30_uhost30_phyclock_user",
+ ENABLE_SCLK_FSYS, 17, 0, 0),
+ GATE(CLK_PHYCLK_UFS_RX1_SYMBOL, "phyclk_ufs_rx1_symbol",
+ "mout_phyclk_ufs_rx1_symbol_user", ENABLE_SCLK_FSYS,
+ 16, 0, 0),
+ GATE(CLK_PHYCLK_UFS_RX0_SYMBOL, "phyclk_ufs_rx0_symbol",
+ "mout_phyclk_ufs_rx0_symbol_user", ENABLE_SCLK_FSYS,
+ 15, 0, 0),
+ GATE(CLK_PHYCLK_UFS_TX1_SYMBOL, "phyclk_ufs_tx1_symbol",
+ "mout_phyclk_ufs_tx1_symbol_user", ENABLE_SCLK_FSYS,
+ 14, 0, 0),
+ GATE(CLK_PHYCLK_UFS_TX0_SYMBOL, "phyclk_ufs_tx0_symbol",
+ "mout_phyclk_ufs_tx0_symbol_user", ENABLE_SCLK_FSYS,
+ 13, 0, 0),
+ GATE(CLK_PHYCLK_USBHOST20_PHY_HSIC1, "phyclk_usbhost20_phy_hsic1",
+ "mout_phyclk_usbhost20_phy_hsic1", ENABLE_SCLK_FSYS,
+ 12, 0, 0),
+ GATE(CLK_PHYCLK_USBHOST20_PHY_CLK48MOHCI,
+ "phyclk_usbhost20_phy_clk48mohci",
+ "mout_phyclk_usbhost20_phy_clk48mohci_user",
+ ENABLE_SCLK_FSYS, 11, 0, 0),
+ GATE(CLK_PHYCLK_USBHOST20_PHY_PHYCLOCK,
+ "phyclk_usbhost20_phy_phyclock",
+ "mout_phyclk_usbhost20_phy_phyclock_user",
+ ENABLE_SCLK_FSYS, 10, 0, 0),
+ GATE(CLK_PHYCLK_USBHOST20_PHY_FREECLK,
+ "phyclk_usbhost20_phy_freeclk",
+ "mout_phyclk_usbhost20_phy_freeclk_user",
+ ENABLE_SCLK_FSYS, 9, 0, 0),
+ GATE(CLK_PHYCLK_USBDRD30_UDRD30_PIPE_PCLK,
+ "phyclk_usbdrd30_udrd30_pipe_pclk",
+ "mout_phyclk_usbdrd30_udrd30_pipe_pclk_user",
+ ENABLE_SCLK_FSYS, 8, 0, 0),
+ GATE(CLK_PHYCLK_USBDRD30_UDRD30_PHYCLOCK,
+ "phyclk_usbdrd30_udrd30_phyclock",
+ "mout_phyclk_usbdrd30_udrd30_phyclock_user",
+ ENABLE_SCLK_FSYS, 7, 0, 0),
+ GATE(CLK_SCLK_MPHY, "sclk_mphy", "mout_sclk_mphy",
+ ENABLE_SCLK_FSYS, 6, 0, 0),
+ GATE(CLK_SCLK_UFSUNIPRO, "sclk_ufsunipro", "mout_sclk_ufsunipro_user",
+ ENABLE_SCLK_FSYS, 5, 0, 0),
+ GATE(CLK_SCLK_MMC2, "sclk_mmc2", "mout_sclk_mmc2_user",
+ ENABLE_SCLK_FSYS, 4, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_SCLK_MMC1, "sclk_mmc1", "mout_sclk_mmc1_user",
+ ENABLE_SCLK_FSYS, 3, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_SCLK_MMC0, "sclk_mmc0", "mout_sclk_mmc0_user",
+ ENABLE_SCLK_FSYS, 2, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_SCLK_USBHOST30, "sclk_usbhost30", "mout_sclk_usbhost30_user",
+ ENABLE_SCLK_FSYS, 1, 0, 0),
+ GATE(CLK_SCLK_USBDRD30, "sclk_usbdrd30", "mout_sclk_usbdrd30_user",
+ ENABLE_SCLK_FSYS, 0, 0, 0),
+
+ /* ENABLE_IP_FSYS0 */
+ GATE(CLK_PDMA1, "pdma1", "aclk_pdma1", ENABLE_IP_FSYS0, 15, 0, 0),
+ GATE(CLK_PDMA0, "pdma0", "aclk_pdma0", ENABLE_IP_FSYS0, 0, 0, 0),
+};
+
+static struct samsung_cmu_info fsys_cmu_info __initdata = {
+ .mux_clks = fsys_mux_clks,
+ .nr_mux_clks = ARRAY_SIZE(fsys_mux_clks),
+ .gate_clks = fsys_gate_clks,
+ .nr_gate_clks = ARRAY_SIZE(fsys_gate_clks),
+ .fixed_clks = fsys_fixed_clks,
+ .nr_fixed_clks = ARRAY_SIZE(fsys_fixed_clks),
+ .nr_clk_ids = FSYS_NR_CLK,
+ .clk_regs = fsys_clk_regs,
+ .nr_clk_regs = ARRAY_SIZE(fsys_clk_regs),
+};
+
+static void __init exynos5433_cmu_fsys_init(struct device_node *np)
+{
+ samsung_cmu_register_one(np, &fsys_cmu_info);
+}
+
+CLK_OF_DECLARE(exynos5433_cmu_fsys, "samsung,exynos5433-cmu-fsys",
+ exynos5433_cmu_fsys_init);
+
+/*
+ * Register offset definitions for CMU_G2D
+ */
+#define MUX_SEL_G2D0 0x0200
+#define MUX_SEL_ENABLE_G2D0 0x0300
+#define MUX_SEL_STAT_G2D0 0x0400
+#define DIV_G2D 0x0600
+#define DIV_STAT_G2D 0x0700
+#define DIV_ENABLE_ACLK_G2D 0x0800
+#define DIV_ENABLE_ACLK_G2D_SECURE_SMMU_G2D 0x0804
+#define DIV_ENABLE_PCLK_G2D 0x0900
+#define DIV_ENABLE_PCLK_G2D_SECURE_SMMU_G2D 0x0904
+#define DIV_ENABLE_IP_G2D0 0x0b00
+#define DIV_ENABLE_IP_G2D1 0x0b04
+#define DIV_ENABLE_IP_G2D_SECURE_SMMU_G2D 0x0b08
+
+static unsigned long g2d_clk_regs[] __initdata = {
+ MUX_SEL_G2D0,
+ MUX_SEL_ENABLE_G2D0,
+ MUX_SEL_STAT_G2D0,
+ DIV_G2D,
+ DIV_STAT_G2D,
+ DIV_ENABLE_ACLK_G2D,
+ DIV_ENABLE_ACLK_G2D_SECURE_SMMU_G2D,
+ DIV_ENABLE_PCLK_G2D,
+ DIV_ENABLE_PCLK_G2D_SECURE_SMMU_G2D,
+ DIV_ENABLE_IP_G2D0,
+ DIV_ENABLE_IP_G2D1,
+ DIV_ENABLE_IP_G2D_SECURE_SMMU_G2D,
+};
+
+/* list of parent clocks */
+PNAME(mout_aclk_g2d_266_user_p) = { "oscclk", "aclk_g2d_266", };
+PNAME(mout_aclk_g2d_400_user_p) = { "oscclk", "aclk_g2d_400", };
+
+static struct samsung_mux_clock g2d_mux_clks[] __initdata = {
+ /* MUX_SEL_G2D0 */
+ MUX(CLK_MUX_ACLK_G2D_266_USER, "mout_aclk_g2d_266_user",
+ mout_aclk_g2d_266_user_p, MUX_SEL_G2D0, 4, 1),
+ MUX(CLK_MUX_ACLK_G2D_400_USER, "mout_aclk_g2d_400_user",
+ mout_aclk_g2d_400_user_p, MUX_SEL_G2D0, 0, 1),
+};
+
+static struct samsung_div_clock g2d_div_clks[] __initdata = {
+ /* DIV_G2D */
+ DIV(CLK_DIV_PCLK_G2D, "div_pclk_g2d", "mout_aclk_g2d_266_user",
+ DIV_G2D, 0, 2),
+};
+
+static struct samsung_gate_clock g2d_gate_clks[] __initdata = {
+ /* DIV_ENABLE_ACLK_G2D */
+ GATE(CLK_ACLK_SMMU_MDMA1, "aclk_smmu_mdma1", "mout_aclk_g2d_266_user",
+ DIV_ENABLE_ACLK_G2D, 12, 0, 0),
+ GATE(CLK_ACLK_BTS_MDMA1, "aclk_bts_mdma1", "mout_aclk_g2d_266_user",
+ DIV_ENABLE_ACLK_G2D, 11, 0, 0),
+ GATE(CLK_ACLK_BTS_G2D, "aclk_bts_g2d", "mout_aclk_g2d_400_user",
+ DIV_ENABLE_ACLK_G2D, 10, 0, 0),
+ GATE(CLK_ACLK_ALB_G2D, "aclk_alb_g2d", "mout_aclk_g2d_400_user",
+ DIV_ENABLE_ACLK_G2D, 9, 0, 0),
+ GATE(CLK_ACLK_AXIUS_G2DX, "aclk_axius_g2dx", "mout_aclk_g2d_400_user",
+ DIV_ENABLE_ACLK_G2D, 8, 0, 0),
+ GATE(CLK_ACLK_ASYNCAXI_SYSX, "aclk_asyncaxi_sysx",
+ "mout_aclk_g2d_400_user", DIV_ENABLE_ACLK_G2D,
+ 7, 0, 0),
+ GATE(CLK_ACLK_AHB2APB_G2D1P, "aclk_ahb2apb_g2d1p", "div_pclk_g2d",
+ DIV_ENABLE_ACLK_G2D, 6, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_AHB2APB_G2D0P, "aclk_ahb2apb_g2d0p", "div_pclk_g2d",
+ DIV_ENABLE_ACLK_G2D, 5, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_XIU_G2DX, "aclk_xiu_g2dx", "mout_aclk_g2d_400_user",
+ DIV_ENABLE_ACLK_G2D, 4, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_G2DNP_133, "aclk_g2dnp_133", "div_pclk_g2d",
+ DIV_ENABLE_ACLK_G2D, 3, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_G2DND_400, "aclk_g2dnd_400", "mout_aclk_g2d_400_user",
+ DIV_ENABLE_ACLK_G2D, 2, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_MDMA1, "aclk_mdma1", "mout_aclk_g2d_266_user",
+ DIV_ENABLE_ACLK_G2D, 1, 0, 0),
+ GATE(CLK_ACLK_G2D, "aclk_g2d", "mout_aclk_g2d_400_user",
+ DIV_ENABLE_ACLK_G2D, 0, 0, 0),
+
+ /* DIV_ENABLE_ACLK_G2D_SECURE_SMMU_G2D */
+ GATE(CLK_ACLK_SMMU_G2D, "aclk_smmu_g2d", "mout_aclk_g2d_400_user",
+ DIV_ENABLE_ACLK_G2D_SECURE_SMMU_G2D, 0, 0, 0),
+
+ /* DIV_ENABLE_PCLK_G2D */
+ GATE(CLK_PCLK_SMMU_MDMA1, "pclk_smmu_mdma1", "div_pclk_g2d",
+ DIV_ENABLE_PCLK_G2D, 7, 0, 0),
+ GATE(CLK_PCLK_BTS_MDMA1, "pclk_bts_mdma1", "div_pclk_g2d",
+ DIV_ENABLE_PCLK_G2D, 6, 0, 0),
+ GATE(CLK_PCLK_BTS_G2D, "pclk_bts_g2d", "div_pclk_g2d",
+ DIV_ENABLE_PCLK_G2D, 5, 0, 0),
+ GATE(CLK_PCLK_ALB_G2D, "pclk_alb_g2d", "div_pclk_g2d",
+ DIV_ENABLE_PCLK_G2D, 4, 0, 0),
+ GATE(CLK_PCLK_ASYNCAXI_SYSX, "pclk_asyncaxi_sysx", "div_pclk_g2d",
+ DIV_ENABLE_PCLK_G2D, 3, 0, 0),
+ GATE(CLK_PCLK_PMU_G2D, "pclk_pmu_g2d", "div_pclk_g2d",
+ DIV_ENABLE_PCLK_G2D, 2, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_SYSREG_G2D, "pclk_sysreg_g2d", "div_pclk_g2d",
+ DIV_ENABLE_PCLK_G2D, 1, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_G2D, "pclk_g2d", "div_pclk_g2d", DIV_ENABLE_PCLK_G2D,
+ 0, 0, 0),
+
+ /* DIV_ENABLE_PCLK_G2D_SECURE_SMMU_G2D */
+ GATE(CLK_PCLK_SMMU_G2D, "pclk_smmu_g2d", "div_pclk_g2d",
+ DIV_ENABLE_PCLK_G2D_SECURE_SMMU_G2D, 0, 0, 0),
+};
+
+static struct samsung_cmu_info g2d_cmu_info __initdata = {
+ .mux_clks = g2d_mux_clks,
+ .nr_mux_clks = ARRAY_SIZE(g2d_mux_clks),
+ .div_clks = g2d_div_clks,
+ .nr_div_clks = ARRAY_SIZE(g2d_div_clks),
+ .gate_clks = g2d_gate_clks,
+ .nr_gate_clks = ARRAY_SIZE(g2d_gate_clks),
+ .nr_clk_ids = G2D_NR_CLK,
+ .clk_regs = g2d_clk_regs,
+ .nr_clk_regs = ARRAY_SIZE(g2d_clk_regs),
+};
+
+static void __init exynos5433_cmu_g2d_init(struct device_node *np)
+{
+ samsung_cmu_register_one(np, &g2d_cmu_info);
+}
+
+CLK_OF_DECLARE(exynos5433_cmu_g2d, "samsung,exynos5433-cmu-g2d",
+ exynos5433_cmu_g2d_init);
+
+/*
+ * Register offset definitions for CMU_DISP
+ */
+#define DISP_PLL_LOCK 0x0000
+#define DISP_PLL_CON0 0x0100
+#define DISP_PLL_CON1 0x0104
+#define DISP_PLL_FREQ_DET 0x0108
+#define MUX_SEL_DISP0 0x0200
+#define MUX_SEL_DISP1 0x0204
+#define MUX_SEL_DISP2 0x0208
+#define MUX_SEL_DISP3 0x020c
+#define MUX_SEL_DISP4 0x0210
+#define MUX_ENABLE_DISP0 0x0300
+#define MUX_ENABLE_DISP1 0x0304
+#define MUX_ENABLE_DISP2 0x0308
+#define MUX_ENABLE_DISP3 0x030c
+#define MUX_ENABLE_DISP4 0x0310
+#define MUX_STAT_DISP0 0x0400
+#define MUX_STAT_DISP1 0x0404
+#define MUX_STAT_DISP2 0x0408
+#define MUX_STAT_DISP3 0x040c
+#define MUX_STAT_DISP4 0x0410
+#define MUX_IGNORE_DISP2 0x0508
+#define DIV_DISP 0x0600
+#define DIV_DISP_PLL_FREQ_DET 0x0604
+#define DIV_STAT_DISP 0x0700
+#define DIV_STAT_DISP_PLL_FREQ_DET 0x0704
+#define ENABLE_ACLK_DISP0 0x0800
+#define ENABLE_ACLK_DISP1 0x0804
+#define ENABLE_PCLK_DISP 0x0900
+#define ENABLE_SCLK_DISP 0x0a00
+#define ENABLE_IP_DISP0 0x0b00
+#define ENABLE_IP_DISP1 0x0b04
+#define CLKOUT_CMU_DISP 0x0c00
+#define CLKOUT_CMU_DISP_DIV_STAT 0x0c04
+
+static unsigned long disp_clk_regs[] __initdata = {
+ DISP_PLL_LOCK,
+ DISP_PLL_CON0,
+ DISP_PLL_CON1,
+ DISP_PLL_FREQ_DET,
+ MUX_SEL_DISP0,
+ MUX_SEL_DISP1,
+ MUX_SEL_DISP2,
+ MUX_SEL_DISP3,
+ MUX_SEL_DISP4,
+ MUX_ENABLE_DISP0,
+ MUX_ENABLE_DISP1,
+ MUX_ENABLE_DISP2,
+ MUX_ENABLE_DISP3,
+ MUX_ENABLE_DISP4,
+ MUX_STAT_DISP0,
+ MUX_STAT_DISP1,
+ MUX_STAT_DISP2,
+ MUX_STAT_DISP3,
+ MUX_STAT_DISP4,
+ MUX_IGNORE_DISP2,
+ DIV_DISP,
+ DIV_DISP_PLL_FREQ_DET,
+ DIV_STAT_DISP,
+ DIV_STAT_DISP_PLL_FREQ_DET,
+ ENABLE_ACLK_DISP0,
+ ENABLE_ACLK_DISP1,
+ ENABLE_PCLK_DISP,
+ ENABLE_SCLK_DISP,
+ ENABLE_IP_DISP0,
+ ENABLE_IP_DISP1,
+ CLKOUT_CMU_DISP,
+ CLKOUT_CMU_DISP_DIV_STAT,
+};
+
+/* list of parent clocks */
+PNAME(mout_disp_pll_p) = { "oscclk", "fout_disp_pll", };
+PNAME(mout_sclk_dsim1_user_p) = { "oscclk", "sclk_dsim1_disp", };
+PNAME(mout_sclk_dsim0_user_p) = { "oscclk", "sclk_dsim0_disp", };
+PNAME(mout_sclk_dsd_user_p) = { "oscclk", "sclk_dsd_disp", };
+PNAME(mout_sclk_decon_tv_eclk_user_p) = { "oscclk",
+ "sclk_decon_tv_eclk_disp", };
+PNAME(mout_sclk_decon_vclk_user_p) = { "oscclk",
+ "sclk_decon_vclk_disp", };
+PNAME(mout_sclk_decon_eclk_user_p) = { "oscclk",
+ "sclk_decon_eclk_disp", };
+PNAME(mout_sclk_decon_tv_vclk_user_p) = { "oscclk",
+ "sclk_decon_tv_vclk_disp", };
+PNAME(mout_aclk_disp_333_user_p) = { "oscclk", "aclk_disp_333", };
+
+PNAME(mout_phyclk_mipidphy1_bitclkdiv8_user_p) = { "oscclk",
+ "phyclk_mipidphy1_bitclkdiv8_phy", };
+PNAME(mout_phyclk_mipidphy1_rxclkesc0_user_p) = { "oscclk",
+ "phyclk_mipidphy1_rxclkesc0_phy", };
+PNAME(mout_phyclk_mipidphy0_bitclkdiv8_user_p) = { "oscclk",
+ "phyclk_mipidphy0_bitclkdiv8_phy", };
+PNAME(mout_phyclk_mipidphy0_rxclkesc0_user_p) = { "oscclk",
+ "phyclk_mipidphy0_rxclkesc0_phy", };
+PNAME(mout_phyclk_hdmiphy_tmds_clko_user_p) = { "oscclk",
+ "phyclk_hdmiphy_tmds_clko_phy", };
+PNAME(mout_phyclk_hdmiphy_pixel_clko_user_p) = { "oscclk",
+ "phyclk_hdmiphy_pixel_clko_phy", };
+
+PNAME(mout_sclk_dsim0_p) = { "mout_disp_pll",
+ "mout_sclk_dsim0_user", };
+PNAME(mout_sclk_decon_tv_eclk_p) = { "mout_disp_pll",
+ "mout_sclk_decon_tv_eclk_user", };
+PNAME(mout_sclk_decon_vclk_p) = { "mout_disp_pll",
+ "mout_sclk_decon_vclk_user", };
+PNAME(mout_sclk_decon_eclk_p) = { "mout_disp_pll",
+ "mout_sclk_decon_eclk_user", };
+
+PNAME(mout_sclk_dsim1_b_disp_p) = { "mout_sclk_dsim1_a_disp",
+ "mout_sclk_dsim1_user", };
+PNAME(mout_sclk_decon_tv_vclk_c_disp_p) = {
+ "mout_phyclk_hdmiphy_pixel_clko_user",
+ "mout_sclk_decon_tv_vclk_b_disp", };
+PNAME(mout_sclk_decon_tv_vclk_b_disp_p) = { "mout_sclk_decon_tv_vclk_a_disp",
+ "mout_sclk_decon_tv_vclk_user", };
+
+static struct samsung_pll_clock disp_pll_clks[] __initdata = {
+ PLL(pll_35xx, CLK_FOUT_DISP_PLL, "fout_disp_pll", "oscclk",
+ DISP_PLL_LOCK, DISP_PLL_CON0, exynos5443_pll_rates),
+};
+
+static struct samsung_fixed_factor_clock disp_fixed_factor_clks[] __initdata = {
+ /*
+ * sclk_rgb_{vclk|tv_vclk} runs at half the rate of
+ * sclk_decon_{vclk|tv_vclk}: a fixed divide-by-2 sits between
+ * sclk_rgb_{vclk|tv_vclk} and sclk_decon_{vclk|tv_vclk}.
+ */
+ FFACTOR(CLK_SCLK_RGB_VCLK, "sclk_rgb_vclk", "sclk_decon_vclk",
+ 1, 2, 0),
+ FFACTOR(CLK_SCLK_RGB_TV_VCLK, "sclk_rgb_tv_vclk", "sclk_decon_tv_vclk",
+ 1, 2, 0),
+};
+
+static struct samsung_fixed_rate_clock disp_fixed_clks[] __initdata = {
+ /* PHY clocks from MIPI_DPHY1 */
+ FRATE(0, "phyclk_mipidphy1_bitclkdiv8_phy", NULL, CLK_IS_ROOT,
+ 188000000),
+ FRATE(0, "phyclk_mipidphy1_rxclkesc0_phy", NULL, CLK_IS_ROOT,
+ 100000000),
+ /* PHY clocks from MIPI_DPHY0 */
+ FRATE(0, "phyclk_mipidphy0_bitclkdiv8_phy", NULL, CLK_IS_ROOT,
+ 188000000),
+ FRATE(0, "phyclk_mipidphy0_rxclkesc0_phy", NULL, CLK_IS_ROOT,
+ 100000000),
+ /* PHY clocks from HDMI_PHY */
+ FRATE(0, "phyclk_hdmiphy_tmds_clko_phy", NULL, CLK_IS_ROOT, 300000000),
+ FRATE(0, "phyclk_hdmiphy_pixel_clko_phy", NULL, CLK_IS_ROOT, 166000000),
+};
+
+static struct samsung_mux_clock disp_mux_clks[] __initdata = {
+ /* MUX_SEL_DISP0 */
+ MUX(CLK_MOUT_DISP_PLL, "mout_disp_pll", mout_disp_pll_p, MUX_SEL_DISP0,
+ 0, 1),
+
+ /* MUX_SEL_DISP1 */
+ MUX(CLK_MOUT_SCLK_DSIM1_USER, "mout_sclk_dsim1_user",
+ mout_sclk_dsim1_user_p, MUX_SEL_DISP1, 28, 1),
+ MUX(CLK_MOUT_SCLK_DSIM0_USER, "mout_sclk_dsim0_user",
+ mout_sclk_dsim0_user_p, MUX_SEL_DISP1, 24, 1),
+ MUX(CLK_MOUT_SCLK_DSD_USER, "mout_sclk_dsd_user", mout_sclk_dsd_user_p,
+ MUX_SEL_DISP1, 20, 1),
+ MUX(CLK_MOUT_SCLK_DECON_TV_ECLK_USER, "mout_sclk_decon_tv_eclk_user",
+ mout_sclk_decon_tv_eclk_user_p, MUX_SEL_DISP1, 16, 1),
+ MUX(CLK_MOUT_SCLK_DECON_VCLK_USER, "mout_sclk_decon_vclk_user",
+ mout_sclk_decon_vclk_user_p, MUX_SEL_DISP1, 12, 1),
+ MUX(CLK_MOUT_SCLK_DECON_ECLK_USER, "mout_sclk_decon_eclk_user",
+ mout_sclk_decon_eclk_user_p, MUX_SEL_DISP1, 8, 1),
+ MUX(CLK_MOUT_SCLK_DECON_TV_VCLK_USER, "mout_sclk_decon_tv_vclk_user",
+ mout_sclk_decon_tv_vclk_user_p, MUX_SEL_DISP1, 4, 1),
+ MUX(CLK_MOUT_ACLK_DISP_333_USER, "mout_aclk_disp_333_user",
+ mout_aclk_disp_333_user_p, MUX_SEL_DISP1, 0, 1),
+
+ /* MUX_SEL_DISP2 */
+ MUX(CLK_MOUT_PHYCLK_MIPIDPHY1_BITCLKDIV8_USER,
+ "mout_phyclk_mipidphy1_bitclkdiv8_user",
+ mout_phyclk_mipidphy1_bitclkdiv8_user_p, MUX_SEL_DISP2,
+ 20, 1),
+ MUX(CLK_MOUT_PHYCLK_MIPIDPHY1_RXCLKESC0_USER,
+ "mout_phyclk_mipidphy1_rxclkesc0_user",
+ mout_phyclk_mipidphy1_rxclkesc0_user_p, MUX_SEL_DISP2,
+ 16, 1),
+ MUX(CLK_MOUT_PHYCLK_MIPIDPHY0_BITCLKDIV8_USER,
+ "mout_phyclk_mipidphy0_bitclkdiv8_user",
+ mout_phyclk_mipidphy0_bitclkdiv8_user_p, MUX_SEL_DISP2,
+ 12, 1),
+ MUX(CLK_MOUT_PHYCLK_MIPIDPHY0_RXCLKESC0_USER,
+ "mout_phyclk_mipidphy0_rxclkesc0_user",
+ mout_phyclk_mipidphy0_rxclkesc0_user_p, MUX_SEL_DISP2,
+ 8, 1),
+ MUX(CLK_MOUT_PHYCLK_HDMIPHY_TMDS_CLKO_USER,
+ "mout_phyclk_hdmiphy_tmds_clko_user",
+ mout_phyclk_hdmiphy_tmds_clko_user_p, MUX_SEL_DISP2,
+ 4, 1),
+ MUX(CLK_MOUT_PHYCLK_HDMIPHY_PIXEL_CLKO_USER,
+ "mout_phyclk_hdmiphy_pixel_clko_user",
+ mout_phyclk_hdmiphy_pixel_clko_user_p, MUX_SEL_DISP2,
+ 0, 1),
+
+ /* MUX_SEL_DISP3 */
+ MUX(CLK_MOUT_SCLK_DSIM0, "mout_sclk_dsim0", mout_sclk_dsim0_p,
+ MUX_SEL_DISP3, 12, 1),
+ MUX(CLK_MOUT_SCLK_DECON_TV_ECLK, "mout_sclk_decon_tv_eclk",
+ mout_sclk_decon_tv_eclk_p, MUX_SEL_DISP3, 8, 1),
+ MUX(CLK_MOUT_SCLK_DECON_VCLK, "mout_sclk_decon_vclk",
+ mout_sclk_decon_vclk_p, MUX_SEL_DISP3, 4, 1),
+ MUX(CLK_MOUT_SCLK_DECON_ECLK, "mout_sclk_decon_eclk",
+ mout_sclk_decon_eclk_p, MUX_SEL_DISP3, 0, 1),
+
+ /* MUX_SEL_DISP4 */
+ MUX(CLK_MOUT_SCLK_DSIM1_B_DISP, "mout_sclk_dsim1_b_disp",
+ mout_sclk_dsim1_b_disp_p, MUX_SEL_DISP4, 16, 1),
+ MUX(CLK_MOUT_SCLK_DSIM1_A_DISP, "mout_sclk_dsim1_a_disp",
+ mout_sclk_dsim0_p, MUX_SEL_DISP4, 12, 1),
+ MUX(CLK_MOUT_SCLK_DECON_TV_VCLK_C_DISP,
+ "mout_sclk_decon_tv_vclk_c_disp",
+ mout_sclk_decon_tv_vclk_c_disp_p, MUX_SEL_DISP4, 8, 1),
+ MUX(CLK_MOUT_SCLK_DECON_TV_VCLK_B_DISP,
+ "mout_sclk_decon_tv_vclk_b_disp",
+ mout_sclk_decon_tv_vclk_b_disp_p, MUX_SEL_DISP4, 4, 1),
+ MUX(CLK_MOUT_SCLK_DECON_TV_VCLK_A_DISP,
+ "mout_sclk_decon_tv_vclk_a_disp",
+ mout_sclk_decon_vclk_p, MUX_SEL_DISP4, 0, 1),
+};
+
+static struct samsung_div_clock disp_div_clks[] __initdata = {
+ /* DIV_DISP */
+ DIV(CLK_DIV_SCLK_DSIM1_DISP, "div_sclk_dsim1_disp",
+ "mout_sclk_dsim1_b_disp", DIV_DISP, 24, 3),
+ DIV(CLK_DIV_SCLK_DECON_TV_VCLK_DISP, "div_sclk_decon_tv_vclk_disp",
+ "mout_sclk_decon_tv_vclk_c_disp", DIV_DISP, 20, 3),
+ DIV(CLK_DIV_SCLK_DSIM0_DISP, "div_sclk_dsim0_disp", "mout_sclk_dsim0",
+ DIV_DISP, 16, 3),
+ DIV(CLK_DIV_SCLK_DECON_TV_ECLK_DISP, "div_sclk_decon_tv_eclk_disp",
+ "mout_sclk_decon_tv_eclk", DIV_DISP, 12, 3),
+ DIV(CLK_DIV_SCLK_DECON_VCLK_DISP, "div_sclk_decon_vclk_disp",
+ "mout_sclk_decon_vclk", DIV_DISP, 8, 3),
+ DIV(CLK_DIV_SCLK_DECON_ECLK_DISP, "div_sclk_decon_eclk_disp",
+ "mout_sclk_decon_eclk", DIV_DISP, 4, 3),
+ DIV(CLK_DIV_PCLK_DISP, "div_pclk_disp", "mout_aclk_disp_333_user",
+ DIV_DISP, 0, 2),
+};
+
+static struct samsung_gate_clock disp_gate_clks[] __initdata = {
+ /* ENABLE_ACLK_DISP0 */
+ GATE(CLK_ACLK_DECON_TV, "aclk_decon_tv", "mout_aclk_disp_333_user",
+ ENABLE_ACLK_DISP0, 2, 0, 0),
+ GATE(CLK_ACLK_DECON, "aclk_decon", "mout_aclk_disp_333_user",
+ ENABLE_ACLK_DISP0, 0, 0, 0),
+
+ /* ENABLE_ACLK_DISP1 */
+ GATE(CLK_ACLK_SMMU_TV1X, "aclk_smmu_tv1x", "mout_aclk_disp_333_user",
+ ENABLE_ACLK_DISP1, 25, 0, 0),
+ GATE(CLK_ACLK_SMMU_TV0X, "aclk_smmu_tv0x", "mout_aclk_disp_333_user",
+ ENABLE_ACLK_DISP1, 24, 0, 0),
+ GATE(CLK_ACLK_SMMU_DECON1X, "aclk_smmu_decon1x",
+ "mout_aclk_disp_333_user", ENABLE_ACLK_DISP1, 23, 0, 0),
+ GATE(CLK_ACLK_SMMU_DECON0X, "aclk_smmu_decon0x",
+ "mout_aclk_disp_333_user", ENABLE_ACLK_DISP1, 22, 0, 0),
+ GATE(CLK_ACLK_BTS_DECON_TV_M3, "aclk_bts_decon_tv_m3",
+ "mout_aclk_disp_333_user", ENABLE_ACLK_DISP1, 21, 0, 0),
+ GATE(CLK_ACLK_BTS_DECON_TV_M2, "aclk_bts_decon_tv_m2",
+ "mout_aclk_disp_333_user", ENABLE_ACLK_DISP1, 20, 0, 0),
+ GATE(CLK_ACLK_BTS_DECON_TV_M1, "aclk_bts_decon_tv_m1",
+ "mout_aclk_disp_333_user", ENABLE_ACLK_DISP1, 19, 0, 0),
+ GATE(CLK_ACLK_BTS_DECON_TV_M0, "aclk_bts_decon_tv_m0",
+ "mout_aclk_disp_333_user", ENABLE_ACLK_DISP1, 18, 0, 0),
+ GATE(CLK_ACLK_BTS_DECON_NM4, "aclk_bts_decon_nm4",
+ "mout_aclk_disp_333_user", ENABLE_ACLK_DISP1, 17, 0, 0),
+ GATE(CLK_ACLK_BTS_DECON_NM3, "aclk_bts_decon_nm3",
+ "mout_aclk_disp_333_user", ENABLE_ACLK_DISP1, 16, 0, 0),
+ GATE(CLK_ACLK_BTS_DECON_NM2, "aclk_bts_decon_nm2",
+ "mout_aclk_disp_333_user", ENABLE_ACLK_DISP1, 15, 0, 0),
+ GATE(CLK_ACLK_BTS_DECON_NM1, "aclk_bts_decon_nm1",
+ "mout_aclk_disp_333_user", ENABLE_ACLK_DISP1, 14, 0, 0),
+ GATE(CLK_ACLK_BTS_DECON_NM0, "aclk_bts_decon_nm0",
+ "mout_aclk_disp_333_user", ENABLE_ACLK_DISP1, 13, 0, 0),
+ GATE(CLK_ACLK_AHB2APB_DISPSFR2P, "aclk_ahb2apb_dispsfr2p",
+ "div_pclk_disp", ENABLE_ACLK_DISP1,
+ 12, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_AHB2APB_DISPSFR1P, "aclk_ahb2apb_dispsfr1p",
+ "div_pclk_disp", ENABLE_ACLK_DISP1,
+ 11, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_AHB2APB_DISPSFR0P, "aclk_ahb2apb_dispsfr0p",
+ "div_pclk_disp", ENABLE_ACLK_DISP1,
+ 10, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_AHB_DISPH, "aclk_ahb_disph", "div_pclk_disp",
+ ENABLE_ACLK_DISP1, 8, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_XIU_TV1X, "aclk_xiu_tv1x", "mout_aclk_disp_333_user",
+ ENABLE_ACLK_DISP1, 7, 0, 0),
+ GATE(CLK_ACLK_XIU_TV0X, "aclk_xiu_tv0x", "mout_aclk_disp_333_user",
+ ENABLE_ACLK_DISP1, 6, 0, 0),
+ GATE(CLK_ACLK_XIU_DECON1X, "aclk_xiu_decon1x",
+ "mout_aclk_disp_333_user", ENABLE_ACLK_DISP1, 5, 0, 0),
+ GATE(CLK_ACLK_XIU_DECON0X, "aclk_xiu_decon0x",
+ "mout_aclk_disp_333_user", ENABLE_ACLK_DISP1, 4, 0, 0),
+ GATE(CLK_ACLK_XIU_DISP1X, "aclk_xiu_disp1x", "mout_aclk_disp_333_user",
+ ENABLE_ACLK_DISP1, 3, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_XIU_DISPNP_100, "aclk_xiu_dispnp_100", "div_pclk_disp",
+ ENABLE_ACLK_DISP1, 2, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_DISP1ND_333, "aclk_disp1nd_333",
+ "mout_aclk_disp_333_user", ENABLE_ACLK_DISP1, 1,
+ CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_DISP0ND_333, "aclk_disp0nd_333",
+ "mout_aclk_disp_333_user", ENABLE_ACLK_DISP1,
+ 0, CLK_IGNORE_UNUSED, 0),
+
+ /* ENABLE_PCLK_DISP */
+ GATE(CLK_PCLK_SMMU_TV1X, "pclk_smmu_tv1x", "div_pclk_disp",
+ ENABLE_PCLK_DISP, 23, 0, 0),
+ GATE(CLK_PCLK_SMMU_TV0X, "pclk_smmu_tv0x", "div_pclk_disp",
+ ENABLE_PCLK_DISP, 22, 0, 0),
+ GATE(CLK_PCLK_SMMU_DECON1X, "pclk_smmu_decon1x", "div_pclk_disp",
+ ENABLE_PCLK_DISP, 21, 0, 0),
+ GATE(CLK_PCLK_SMMU_DECON0X, "pclk_smmu_decon0x", "div_pclk_disp",
+ ENABLE_PCLK_DISP, 20, 0, 0),
+ GATE(CLK_PCLK_BTS_DECON_TV_M3, "pclk_bts_decon_tv_m3", "div_pclk_disp",
+ ENABLE_PCLK_DISP, 19, 0, 0),
+ GATE(CLK_PCLK_BTS_DECON_TV_M2, "pclk_bts_decon_tv_m2", "div_pclk_disp",
+ ENABLE_PCLK_DISP, 18, 0, 0),
+ GATE(CLK_PCLK_BTS_DECON_TV_M1, "pclk_bts_decon_tv_m1", "div_pclk_disp",
+ ENABLE_PCLK_DISP, 17, 0, 0),
+ GATE(CLK_PCLK_BTS_DECON_TV_M0, "pclk_bts_decon_tv_m0", "div_pclk_disp",
+ ENABLE_PCLK_DISP, 16, 0, 0),
+ GATE(CLK_PCLK_BTS_DECONM4, "pclk_bts_deconm4", "div_pclk_disp",
+ ENABLE_PCLK_DISP, 15, 0, 0),
+ GATE(CLK_PCLK_BTS_DECONM3, "pclk_bts_deconm3", "div_pclk_disp",
+ ENABLE_PCLK_DISP, 14, 0, 0),
+ GATE(CLK_PCLK_BTS_DECONM2, "pclk_bts_deconm2", "div_pclk_disp",
+ ENABLE_PCLK_DISP, 13, 0, 0),
+ GATE(CLK_PCLK_BTS_DECONM1, "pclk_bts_deconm1", "div_pclk_disp",
+ ENABLE_PCLK_DISP, 12, 0, 0),
+ GATE(CLK_PCLK_BTS_DECONM0, "pclk_bts_deconm0", "div_pclk_disp",
+ ENABLE_PCLK_DISP, 11, 0, 0),
+ GATE(CLK_PCLK_MIC1, "pclk_mic1", "div_pclk_disp",
+ ENABLE_PCLK_DISP, 10, 0, 0),
+ GATE(CLK_PCLK_PMU_DISP, "pclk_pmu_disp", "div_pclk_disp",
+ ENABLE_PCLK_DISP, 9, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_SYSREG_DISP, "pclk_sysreg_disp", "div_pclk_disp",
+ ENABLE_PCLK_DISP, 8, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_HDMIPHY, "pclk_hdmiphy", "div_pclk_disp",
+ ENABLE_PCLK_DISP, 7, 0, 0),
+ GATE(CLK_PCLK_HDMI, "pclk_hdmi", "div_pclk_disp",
+ ENABLE_PCLK_DISP, 6, 0, 0),
+ GATE(CLK_PCLK_MIC0, "pclk_mic0", "div_pclk_disp",
+ ENABLE_PCLK_DISP, 5, 0, 0),
+ GATE(CLK_PCLK_DSIM1, "pclk_dsim1", "div_pclk_disp",
+ ENABLE_PCLK_DISP, 3, 0, 0),
+ GATE(CLK_PCLK_DSIM0, "pclk_dsim0", "div_pclk_disp",
+ ENABLE_PCLK_DISP, 2, 0, 0),
+ GATE(CLK_PCLK_DECON_TV, "pclk_decon_tv", "div_pclk_disp",
+ ENABLE_PCLK_DISP, 1, 0, 0),
+
+ /* ENABLE_SCLK_DISP */
+ GATE(CLK_PHYCLK_MIPIDPHY1_BITCLKDIV8, "phyclk_mipidphy1_bitclkdiv8",
+ "mout_phyclk_mipidphy1_bitclkdiv8_user",
+ ENABLE_SCLK_DISP, 26, 0, 0),
+ GATE(CLK_PHYCLK_MIPIDPHY1_RXCLKESC0, "phyclk_mipidphy1_rxclkesc0",
+ "mout_phyclk_mipidphy1_rxclkesc0_user",
+ ENABLE_SCLK_DISP, 25, 0, 0),
+ GATE(CLK_SCLK_RGB_TV_VCLK_TO_DSIM1, "sclk_rgb_tv_vclk_to_dsim1",
+ "sclk_rgb_tv_vclk", ENABLE_SCLK_DISP, 24, 0, 0),
+ GATE(CLK_SCLK_RGB_TV_VCLK_TO_MIC1, "sclk_rgb_tv_vclk_to_mic1",
+ "sclk_rgb_tv_vclk", ENABLE_SCLK_DISP, 23, 0, 0),
+ GATE(CLK_SCLK_DSIM1, "sclk_dsim1", "div_sclk_dsim1_disp",
+ ENABLE_SCLK_DISP, 22, 0, 0),
+ GATE(CLK_SCLK_DECON_TV_VCLK, "sclk_decon_tv_vclk",
+ "div_sclk_decon_tv_vclk_disp",
+ ENABLE_SCLK_DISP, 21, 0, 0),
+ GATE(CLK_PHYCLK_MIPIDPHY0_BITCLKDIV8, "phyclk_mipidphy0_bitclkdiv8",
+ "mout_phyclk_mipidphy0_bitclkdiv8_user",
+ ENABLE_SCLK_DISP, 15, 0, 0),
+ GATE(CLK_PHYCLK_MIPIDPHY0_RXCLKESC0, "phyclk_mipidphy0_rxclkesc0",
+ "mout_phyclk_mipidphy0_rxclkesc0_user",
+ ENABLE_SCLK_DISP, 14, 0, 0),
+ GATE(CLK_PHYCLK_HDMIPHY_TMDS_CLKO, "phyclk_hdmiphy_tmds_clko",
+ "mout_phyclk_hdmiphy_tmds_clko_user",
+ ENABLE_SCLK_DISP, 13, 0, 0),
+ GATE(CLK_PHYCLK_HDMI_PIXEL, "phyclk_hdmi_pixel",
+ "sclk_rgb_tv_vclk", ENABLE_SCLK_DISP, 12, 0, 0),
+ GATE(CLK_SCLK_RGB_VCLK_TO_SMIES, "sclk_rgb_vclk_to_smies",
+ "sclk_rgb_vclk", ENABLE_SCLK_DISP, 11, 0, 0),
+ GATE(CLK_SCLK_RGB_VCLK_TO_DSIM0, "sclk_rgb_vclk_to_dsim0",
+ "sclk_rgb_vclk", ENABLE_SCLK_DISP, 9, 0, 0),
+ GATE(CLK_SCLK_RGB_VCLK_TO_MIC0, "sclk_rgb_vclk_to_mic0",
+ "sclk_rgb_vclk", ENABLE_SCLK_DISP, 8, 0, 0),
+ GATE(CLK_SCLK_DSD, "sclk_dsd", "mout_sclk_dsd_user",
+ ENABLE_SCLK_DISP, 7, 0, 0),
+ GATE(CLK_SCLK_HDMI_SPDIF, "sclk_hdmi_spdif", "sclk_hdmi_spdif_disp",
+ ENABLE_SCLK_DISP, 6, 0, 0),
+ GATE(CLK_SCLK_DSIM0, "sclk_dsim0", "div_sclk_dsim0_disp",
+ ENABLE_SCLK_DISP, 5, 0, 0),
+ GATE(CLK_SCLK_DECON_TV_ECLK, "sclk_decon_tv_eclk",
+ "div_sclk_decon_tv_eclk_disp",
+ ENABLE_SCLK_DISP, 4, 0, 0),
+ GATE(CLK_SCLK_DECON_VCLK, "sclk_decon_vclk",
+ "div_sclk_decon_vclk_disp", ENABLE_SCLK_DISP, 3, 0, 0),
+ GATE(CLK_SCLK_DECON_ECLK, "sclk_decon_eclk",
+ "div_sclk_decon_eclk_disp", ENABLE_SCLK_DISP, 2, 0, 0),
+};
+
+static struct samsung_cmu_info disp_cmu_info __initdata = {
+ .pll_clks = disp_pll_clks,
+ .nr_pll_clks = ARRAY_SIZE(disp_pll_clks),
+ .mux_clks = disp_mux_clks,
+ .nr_mux_clks = ARRAY_SIZE(disp_mux_clks),
+ .div_clks = disp_div_clks,
+ .nr_div_clks = ARRAY_SIZE(disp_div_clks),
+ .gate_clks = disp_gate_clks,
+ .nr_gate_clks = ARRAY_SIZE(disp_gate_clks),
+ .fixed_clks = disp_fixed_clks,
+ .nr_fixed_clks = ARRAY_SIZE(disp_fixed_clks),
+ .fixed_factor_clks = disp_fixed_factor_clks,
+ .nr_fixed_factor_clks = ARRAY_SIZE(disp_fixed_factor_clks),
+ .nr_clk_ids = DISP_NR_CLK,
+ .clk_regs = disp_clk_regs,
+ .nr_clk_regs = ARRAY_SIZE(disp_clk_regs),
+};
+
+static void __init exynos5433_cmu_disp_init(struct device_node *np)
+{
+ samsung_cmu_register_one(np, &disp_cmu_info);
+}
+
+CLK_OF_DECLARE(exynos5433_cmu_disp, "samsung,exynos5433-cmu-disp",
+ exynos5433_cmu_disp_init);
+
+/*
+ * Register offset definitions for CMU_AUD
+ */
+#define MUX_SEL_AUD0 0x0200
+#define MUX_SEL_AUD1 0x0204
+#define MUX_ENABLE_AUD0 0x0300
+#define MUX_ENABLE_AUD1 0x0304
+#define MUX_STAT_AUD0 0x0400
+#define DIV_AUD0 0x0600
+#define DIV_AUD1 0x0604
+#define DIV_STAT_AUD0 0x0700
+#define DIV_STAT_AUD1 0x0704
+#define ENABLE_ACLK_AUD 0x0800
+#define ENABLE_PCLK_AUD 0x0900
+#define ENABLE_SCLK_AUD0 0x0a00
+#define ENABLE_SCLK_AUD1 0x0a04
+#define ENABLE_IP_AUD0 0x0b00
+#define ENABLE_IP_AUD1 0x0b04
+
+static unsigned long aud_clk_regs[] __initdata = {
+ MUX_SEL_AUD0,
+ MUX_SEL_AUD1,
+ MUX_ENABLE_AUD0,
+ MUX_ENABLE_AUD1,
+ MUX_STAT_AUD0,
+ DIV_AUD0,
+ DIV_AUD1,
+ DIV_STAT_AUD0,
+ DIV_STAT_AUD1,
+ ENABLE_ACLK_AUD,
+ ENABLE_PCLK_AUD,
+ ENABLE_SCLK_AUD0,
+ ENABLE_SCLK_AUD1,
+ ENABLE_IP_AUD0,
+ ENABLE_IP_AUD1,
+};
+
+/* list of parent clocks */
+PNAME(mout_aud_pll_user_aud_p) = { "oscclk", "fout_aud_pll", };
+PNAME(mout_sclk_aud_pcm_p) = { "mout_aud_pll_user", "ioclk_audiocdclk0", };
+
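+/* external I/O clocks feeding the audio block, modelled as fixed-rate root clocks */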
+static struct samsung_fixed_rate_clock aud_fixed_clks[] __initdata = {
+ FRATE(0, "ioclk_jtag_tclk", NULL, CLK_IS_ROOT, 33000000),
+ FRATE(0, "ioclk_slimbus_clk", NULL, CLK_IS_ROOT, 25000000),
+ FRATE(0, "ioclk_i2s_bclk", NULL, CLK_IS_ROOT, 50000000),
+};
+
+static struct samsung_mux_clock aud_mux_clks[] __initdata = {
+ /* MUX_SEL_AUD0 */
+ MUX(CLK_MOUT_AUD_PLL_USER, "mout_aud_pll_user",
+ mout_aud_pll_user_aud_p, MUX_SEL_AUD0, 0, 1),
+
+ /* MUX_SEL_AUD1 */
+ MUX(CLK_MOUT_SCLK_AUD_PCM, "mout_sclk_aud_pcm", mout_sclk_aud_pcm_p,
+ MUX_SEL_AUD1, 8, 1),
+ MUX(CLK_MOUT_SCLK_AUD_I2S, "mout_sclk_aud_i2s", mout_sclk_aud_pcm_p,
+ MUX_SEL_AUD1, 0, 1),
+};
+
+static struct samsung_div_clock aud_div_clks[] __initdata = {
+ /* DIV_AUD0 */
+ DIV(CLK_DIV_ATCLK_AUD, "div_atclk_aud", "div_aud_ca5", DIV_AUD0,
+ 12, 4),
+ DIV(CLK_DIV_PCLK_DBG_AUD, "div_pclk_dbg_aud", "div_aud_ca5", DIV_AUD0,
+ 8, 4),
+ DIV(CLK_DIV_ACLK_AUD, "div_aclk_aud", "div_aud_ca5", DIV_AUD0,
+ 4, 4),
+ DIV(CLK_DIV_AUD_CA5, "div_aud_ca5", "mout_aud_pll_user", DIV_AUD0,
+ 0, 4),
+
+ /* DIV_AUD1 */
+ DIV(CLK_DIV_SCLK_AUD_SLIMBUS, "div_sclk_aud_slimbus",
+ "mout_aud_pll_user", DIV_AUD1, 16, 5),
+ DIV(CLK_DIV_SCLK_AUD_UART, "div_sclk_aud_uart", "mout_aud_pll_user",
+ DIV_AUD1, 12, 4),
+ DIV(CLK_DIV_SCLK_AUD_PCM, "div_sclk_aud_pcm", "mout_sclk_aud_pcm",
+ DIV_AUD1, 4, 8),
+ DIV(CLK_DIV_SCLK_AUD_I2S, "div_sclk_aud_i2s", "mout_sclk_aud_i2s",
+ DIV_AUD1, 0, 4),
+};
+
+static struct samsung_gate_clock aud_gate_clks[] __initdata = {
+ /* ENABLE_ACLK_AUD */
+ GATE(CLK_ACLK_INTR_CTRL, "aclk_intr_ctrl", "div_aclk_aud",
+ ENABLE_ACLK_AUD, 12, 0, 0),
+ GATE(CLK_ACLK_SMMU_LPASSX, "aclk_smmu_lpassx", "div_aclk_aud",
+ ENABLE_ACLK_AUD, 7, 0, 0),
+ GATE(CLK_ACLK_XIU_LPASSX, "aclk_xiu_lpassx", "div_aclk_aud",
+ ENABLE_ACLK_AUD, 4, 0, 0),
+ GATE(CLK_ACLK_AUDNP_133, "aclk_audnp_133", "div_aclk_aud",
+ ENABLE_ACLK_AUD, 3, 0, 0),
+ GATE(CLK_ACLK_AUDND_133, "aclk_audnd_133", "div_aclk_aud",
+ ENABLE_ACLK_AUD, 2, 0, 0),
+ GATE(CLK_ACLK_SRAMC, "aclk_sramc", "div_aclk_aud", ENABLE_ACLK_AUD,
+ 1, 0, 0),
+ GATE(CLK_ACLK_DMAC, "aclk_dmac", "div_aclk_aud", ENABLE_ACLK_AUD,
+ 0, CLK_IGNORE_UNUSED, 0),
+
+ /* ENABLE_PCLK_AUD */
+ GATE(CLK_PCLK_WDT1, "pclk_wdt1", "div_aclk_aud", ENABLE_PCLK_AUD,
+ 13, 0, 0),
+ GATE(CLK_PCLK_WDT0, "pclk_wdt0", "div_aclk_aud", ENABLE_PCLK_AUD,
+ 12, 0, 0),
+ GATE(CLK_PCLK_SFR1, "pclk_sfr1", "div_aclk_aud", ENABLE_PCLK_AUD,
+ 11, 0, 0),
+ GATE(CLK_PCLK_SMMU_LPASSX, "pclk_smmu_lpassx", "div_aclk_aud",
+ ENABLE_PCLK_AUD, 10, 0, 0),
+ GATE(CLK_PCLK_GPIO_AUD, "pclk_gpio_aud", "div_aclk_aud",
+ ENABLE_PCLK_AUD, 9, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_PMU_AUD, "pclk_pmu_aud", "div_aclk_aud",
+ ENABLE_PCLK_AUD, 8, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_SYSREG_AUD, "pclk_sysreg_aud", "div_aclk_aud",
+ ENABLE_PCLK_AUD, 7, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_AUD_SLIMBUS, "pclk_aud_slimbus", "div_aclk_aud",
+ ENABLE_PCLK_AUD, 6, 0, 0),
+ GATE(CLK_PCLK_AUD_UART, "pclk_aud_uart", "div_aclk_aud",
+ ENABLE_PCLK_AUD, 5, 0, 0),
+ GATE(CLK_PCLK_AUD_PCM, "pclk_aud_pcm", "div_aclk_aud",
+ ENABLE_PCLK_AUD, 4, 0, 0),
+ GATE(CLK_PCLK_AUD_I2S, "pclk_aud_i2s", "div_aclk_aud",
+ ENABLE_PCLK_AUD, 3, 0, 0),
+ GATE(CLK_PCLK_TIMER, "pclk_timer", "div_aclk_aud", ENABLE_PCLK_AUD,
+ 2, 0, 0),
+ GATE(CLK_PCLK_SFR0_CTRL, "pclk_sfr0_ctrl", "div_aclk_aud",
+ ENABLE_PCLK_AUD, 0, 0, 0),
+
+ /* ENABLE_SCLK_AUD0 */
+ GATE(CLK_ATCLK_AUD, "atclk_aud", "div_atclk_aud", ENABLE_SCLK_AUD0,
+ 2, 0, 0),
+ GATE(CLK_PCLK_DBG_AUD, "pclk_dbg_aud", "div_pclk_dbg_aud",
+ ENABLE_SCLK_AUD0, 1, 0, 0),
+ GATE(CLK_SCLK_AUD_CA5, "sclk_aud_ca5", "div_aud_ca5", ENABLE_SCLK_AUD0,
+ 0, 0, 0),
+
+ /* ENABLE_SCLK_AUD1 */
+ GATE(CLK_SCLK_JTAG_TCK, "sclk_jtag_tck", "ioclk_jtag_tclk",
+ ENABLE_SCLK_AUD1, 6, 0, 0),
+ GATE(CLK_SCLK_SLIMBUS_CLKIN, "sclk_slimbus_clkin", "ioclk_slimbus_clk",
+ ENABLE_SCLK_AUD1, 5, 0, 0),
+ GATE(CLK_SCLK_AUD_SLIMBUS, "sclk_aud_slimbus", "div_sclk_aud_slimbus",
+ ENABLE_SCLK_AUD1, 4, 0, 0),
+ GATE(CLK_SCLK_AUD_UART, "sclk_aud_uart", "div_sclk_aud_uart",
+ ENABLE_SCLK_AUD1, 3, 0, 0),
+ GATE(CLK_SCLK_AUD_PCM, "sclk_aud_pcm", "div_sclk_aud_pcm",
+ ENABLE_SCLK_AUD1, 2, 0, 0),
+ GATE(CLK_SCLK_I2S_BCLK, "sclk_i2s_bclk", "ioclk_i2s_bclk",
+ ENABLE_SCLK_AUD1, 1, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_SCLK_AUD_I2S, "sclk_aud_i2s", "div_sclk_aud_i2s",
+ ENABLE_SCLK_AUD1, 0, CLK_IGNORE_UNUSED, 0),
+};
+
+static struct samsung_cmu_info aud_cmu_info __initdata = {
+ .mux_clks = aud_mux_clks,
+ .nr_mux_clks = ARRAY_SIZE(aud_mux_clks),
+ .div_clks = aud_div_clks,
+ .nr_div_clks = ARRAY_SIZE(aud_div_clks),
+ .gate_clks = aud_gate_clks,
+ .nr_gate_clks = ARRAY_SIZE(aud_gate_clks),
+ .fixed_clks = aud_fixed_clks,
+ .nr_fixed_clks = ARRAY_SIZE(aud_fixed_clks),
+ .nr_clk_ids = AUD_NR_CLK,
+ .clk_regs = aud_clk_regs,
+ .nr_clk_regs = ARRAY_SIZE(aud_clk_regs),
+};
+
+static void __init exynos5433_cmu_aud_init(struct device_node *np)
+{
+ samsung_cmu_register_one(np, &aud_cmu_info);
+}
+
+CLK_OF_DECLARE(exynos5433_cmu_aud, "samsung,exynos5433-cmu-aud",
+ exynos5433_cmu_aud_init);
+
+/*
+ * Register offset definitions for CMU_BUS{0|1|2}
+ */
+#define DIV_BUS 0x0600
+#define DIV_STAT_BUS 0x0700
+#define ENABLE_ACLK_BUS 0x0800
+#define ENABLE_PCLK_BUS 0x0900
+#define ENABLE_IP_BUS0 0x0b00
+#define ENABLE_IP_BUS1 0x0b04
+
+#define MUX_SEL_BUS2 0x0200 /* Only for CMU_BUS2 */
+#define MUX_ENABLE_BUS2 0x0300 /* Only for CMU_BUS2 */
+#define MUX_STAT_BUS2 0x0400 /* Only for CMU_BUS2 */
+
+/* list of parent clocks */
+PNAME(mout_aclk_bus2_400_p) = { "oscclk", "aclk_bus2_400", };
+
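+/* register offsets shared by CMU_BUS0, CMU_BUS1 and CMU_BUS2 */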
+#define CMU_BUS_COMMON_CLK_REGS \
+ DIV_BUS, \
+ DIV_STAT_BUS, \
+ ENABLE_ACLK_BUS, \
+ ENABLE_PCLK_BUS, \
+ ENABLE_IP_BUS0, \
+ ENABLE_IP_BUS1
+
+static unsigned long bus01_clk_regs[] __initdata = {
+ CMU_BUS_COMMON_CLK_REGS,
+};
+
+static unsigned long bus2_clk_regs[] __initdata = {
+ MUX_SEL_BUS2,
+ MUX_ENABLE_BUS2,
+ MUX_STAT_BUS2,
+ CMU_BUS_COMMON_CLK_REGS,
+};
+
+static struct samsung_div_clock bus0_div_clks[] __initdata = {
+ /* DIV_BUS0 */
+ DIV(CLK_DIV_PCLK_BUS_133, "div_pclk_bus0_133", "aclk_bus0_400",
+ DIV_BUS, 0, 3),
+};
+
+/* CMU_BUS0 clocks */
+static struct samsung_gate_clock bus0_gate_clks[] __initdata = {
+ /* ENABLE_ACLK_BUS0 */
+ GATE(CLK_ACLK_AHB2APB_BUSP, "aclk_ahb2apb_bus0p", "div_pclk_bus0_133",
+ ENABLE_ACLK_BUS, 4, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_BUSNP_133, "aclk_bus0np_133", "div_pclk_bus0_133",
+ ENABLE_ACLK_BUS, 2, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_BUSND_400, "aclk_bus0nd_400", "aclk_bus0_400",
+ ENABLE_ACLK_BUS, 0, CLK_IGNORE_UNUSED, 0),
+
+ /* ENABLE_PCLK_BUS0 */
+ GATE(CLK_PCLK_BUSSRVND_133, "pclk_bus0srvnd_133", "div_pclk_bus0_133",
+ ENABLE_PCLK_BUS, 2, 0, 0),
+ GATE(CLK_PCLK_PMU_BUS, "pclk_pmu_bus0", "div_pclk_bus0_133",
+ ENABLE_PCLK_BUS, 1, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_SYSREG_BUS, "pclk_sysreg_bus0", "div_pclk_bus0_133",
+ ENABLE_PCLK_BUS, 0, CLK_IGNORE_UNUSED, 0),
+};
+
+/* CMU_BUS1 clocks */
+static struct samsung_div_clock bus1_div_clks[] __initdata = {
+ /* DIV_BUS1 */
+ DIV(CLK_DIV_PCLK_BUS_133, "div_pclk_bus1_133", "aclk_bus1_400",
+ DIV_BUS, 0, 3),
+};
+
+static struct samsung_gate_clock bus1_gate_clks[] __initdata = {
+ /* ENABLE_ACLK_BUS1 */
+ GATE(CLK_ACLK_AHB2APB_BUSP, "aclk_ahb2apb_bus1p", "div_pclk_bus1_133",
+ ENABLE_ACLK_BUS, 4, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_BUSNP_133, "aclk_bus1np_133", "div_pclk_bus1_133",
+ ENABLE_ACLK_BUS, 2, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_BUSND_400, "aclk_bus1nd_400", "aclk_bus1_400",
+ ENABLE_ACLK_BUS, 0, CLK_IGNORE_UNUSED, 0),
+
+ /* ENABLE_PCLK_BUS1 */
+ GATE(CLK_PCLK_BUSSRVND_133, "pclk_bus1srvnd_133", "div_pclk_bus1_133",
+ ENABLE_PCLK_BUS, 2, 0, 0),
+ GATE(CLK_PCLK_PMU_BUS, "pclk_pmu_bus1", "div_pclk_bus1_133",
+ ENABLE_PCLK_BUS, 1, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_SYSREG_BUS, "pclk_sysreg_bus1", "div_pclk_bus1_133",
+ ENABLE_PCLK_BUS, 0, CLK_IGNORE_UNUSED, 0),
+};
+
+/* CMU_BUS2 clocks */
+static struct samsung_mux_clock bus2_mux_clks[] __initdata = {
+ /* MUX_SEL_BUS2 */
+ MUX(CLK_MOUT_ACLK_BUS2_400_USER, "mout_aclk_bus2_400_user",
+ mout_aclk_bus2_400_p, MUX_SEL_BUS2, 0, 1),
+};
+
+static struct samsung_div_clock bus2_div_clks[] __initdata = {
+ /* DIV_BUS2 */
+ DIV(CLK_DIV_PCLK_BUS_133, "div_pclk_bus2_133",
+ "mout_aclk_bus2_400_user", DIV_BUS, 0, 3),
+};
+
+static struct samsung_gate_clock bus2_gate_clks[] __initdata = {
+ /* ENABLE_ACLK_BUS2 */
+ GATE(CLK_ACLK_AHB2APB_BUSP, "aclk_ahb2apb_bus2p", "div_pclk_bus2_133",
+ ENABLE_ACLK_BUS, 3, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_BUSNP_133, "aclk_bus2np_133", "div_pclk_bus2_133",
+ ENABLE_ACLK_BUS, 2, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_BUS2BEND_400, "aclk_bus2bend_400",
+ "mout_aclk_bus2_400_user", ENABLE_ACLK_BUS,
+ 1, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_BUS2RTND_400, "aclk_bus2rtnd_400",
+ "mout_aclk_bus2_400_user", ENABLE_ACLK_BUS,
+ 0, CLK_IGNORE_UNUSED, 0),
+
+ /* ENABLE_PCLK_BUS2 */
+ GATE(CLK_PCLK_BUSSRVND_133, "pclk_bus2srvnd_133", "div_pclk_bus2_133",
+ ENABLE_PCLK_BUS, 2, 0, 0),
+ GATE(CLK_PCLK_PMU_BUS, "pclk_pmu_bus2", "div_pclk_bus2_133",
+ ENABLE_PCLK_BUS, 1, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_SYSREG_BUS, "pclk_sysreg_bus2", "div_pclk_bus2_133",
+ ENABLE_PCLK_BUS, 0, CLK_IGNORE_UNUSED, 0),
+};
+
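+/*
+ * CMU_BUS0/1/2 differ only in their clock tables (plus CMU_BUS2's extra
+ * mux), so the common samsung_cmu_info fields are filled in through this
+ * helper macro.
+ */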
+#define CMU_BUS_INFO_CLKS(id) \
+ .div_clks = bus##id##_div_clks, \
+ .nr_div_clks = ARRAY_SIZE(bus##id##_div_clks), \
+ .gate_clks = bus##id##_gate_clks, \
+ .nr_gate_clks = ARRAY_SIZE(bus##id##_gate_clks), \
+ .nr_clk_ids = BUSx_NR_CLK
+
+static struct samsung_cmu_info bus0_cmu_info __initdata = {
+ CMU_BUS_INFO_CLKS(0),
+ .clk_regs = bus01_clk_regs,
+ .nr_clk_regs = ARRAY_SIZE(bus01_clk_regs),
+};
+
+static struct samsung_cmu_info bus1_cmu_info __initdata = {
+ CMU_BUS_INFO_CLKS(1),
+ .clk_regs = bus01_clk_regs,
+ .nr_clk_regs = ARRAY_SIZE(bus01_clk_regs),
+};
+
+static struct samsung_cmu_info bus2_cmu_info __initdata = {
+ CMU_BUS_INFO_CLKS(2),
+ .mux_clks = bus2_mux_clks,
+ .nr_mux_clks = ARRAY_SIZE(bus2_mux_clks),
+ .clk_regs = bus2_clk_regs,
+ .nr_clk_regs = ARRAY_SIZE(bus2_clk_regs),
+};
+
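+/*
+ * Generate one early init function and its CLK_OF_DECLARE() entry per bus
+ * instance; each of them simply registers the matching bus##id##_cmu_info
+ * defined above.
+ */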
+#define exynos5433_cmu_bus_init(id) \
+static void __init exynos5433_cmu_bus##id##_init(struct device_node *np)\
+{ \
+ samsung_cmu_register_one(np, &bus##id##_cmu_info); \
+} \
+CLK_OF_DECLARE(exynos5433_cmu_bus##id, \
+ "samsung,exynos5433-cmu-bus"#id, \
+ exynos5433_cmu_bus##id##_init)
+
+exynos5433_cmu_bus_init(0);
+exynos5433_cmu_bus_init(1);
+exynos5433_cmu_bus_init(2);
+
+/*
+ * Register offset definitions for CMU_G3D
+ */
+#define G3D_PLL_LOCK 0x0000
+#define G3D_PLL_CON0 0x0100
+#define G3D_PLL_CON1 0x0104
+#define G3D_PLL_FREQ_DET 0x010c
+#define MUX_SEL_G3D 0x0200
+#define MUX_ENABLE_G3D 0x0300
+#define MUX_STAT_G3D 0x0400
+#define DIV_G3D 0x0600
+#define DIV_G3D_PLL_FREQ_DET 0x0604
+#define DIV_STAT_G3D 0x0700
+#define DIV_STAT_G3D_PLL_FREQ_DET 0x0704
+#define ENABLE_ACLK_G3D 0x0800
+#define ENABLE_PCLK_G3D 0x0900
+#define ENABLE_SCLK_G3D 0x0a00
+#define ENABLE_IP_G3D0 0x0b00
+#define ENABLE_IP_G3D1 0x0b04
+#define CLKOUT_CMU_G3D 0x0c00
+#define CLKOUT_CMU_G3D_DIV_STAT 0x0c04
+#define CLK_STOPCTRL 0x1000
+
+static unsigned long g3d_clk_regs[] __initdata = {
+ G3D_PLL_LOCK,
+ G3D_PLL_CON0,
+ G3D_PLL_CON1,
+ G3D_PLL_FREQ_DET,
+ MUX_SEL_G3D,
+ MUX_ENABLE_G3D,
+ MUX_STAT_G3D,
+ DIV_G3D,
+ DIV_G3D_PLL_FREQ_DET,
+ DIV_STAT_G3D,
+ DIV_STAT_G3D_PLL_FREQ_DET,
+ ENABLE_ACLK_G3D,
+ ENABLE_PCLK_G3D,
+ ENABLE_SCLK_G3D,
+ ENABLE_IP_G3D0,
+ ENABLE_IP_G3D1,
+ CLKOUT_CMU_G3D,
+ CLKOUT_CMU_G3D_DIV_STAT,
+ CLK_STOPCTRL,
+};
+
+/* list of all parent clocks */
+PNAME(mout_aclk_g3d_400_p) = { "mout_g3d_pll", "aclk_g3d_400", };
+PNAME(mout_g3d_pll_p) = { "oscclk", "fout_g3d_pll", };
+
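+/*
+ * G3D_PLL is a pll_35xx type PLL fed directly by oscclk; it reuses the
+ * exynos5443_pll_rates table shared with the other PLLs in this driver.
+ */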
+static struct samsung_pll_clock g3d_pll_clks[] __initdata = {
+ PLL(pll_35xx, CLK_FOUT_G3D_PLL, "fout_g3d_pll", "oscclk",
+ G3D_PLL_LOCK, G3D_PLL_CON0, exynos5443_pll_rates),
+};
+
+static struct samsung_mux_clock g3d_mux_clks[] __initdata = {
+ /* MUX_SEL_G3D */
+ MUX(CLK_MOUT_ACLK_G3D_400, "mout_aclk_g3d_400", mout_aclk_g3d_400_p,
+ MUX_SEL_G3D, 8, 1),
+ MUX(CLK_MOUT_G3D_PLL, "mout_g3d_pll", mout_g3d_pll_p,
+ MUX_SEL_G3D, 0, 1),
+};
+
+static struct samsung_div_clock g3d_div_clks[] __initdata = {
+ /* DIV_G3D */
+ DIV(CLK_DIV_SCLK_HPM_G3D, "div_sclk_hpm_g3d", "mout_g3d_pll", DIV_G3D,
+ 8, 2),
+ DIV(CLK_DIV_PCLK_G3D, "div_pclk_g3d", "div_aclk_g3d", DIV_G3D,
+ 4, 3),
+ DIV(CLK_DIV_ACLK_G3D, "div_aclk_g3d", "mout_aclk_g3d_400", DIV_G3D,
+ 0, 3),
+};
+
+static struct samsung_gate_clock g3d_gate_clks[] __initdata = {
+ /* ENABLE_ACLK_G3D */
+ GATE(CLK_ACLK_BTS_G3D1, "aclk_bts_g3d1", "div_aclk_g3d",
+ ENABLE_ACLK_G3D, 7, 0, 0),
+ GATE(CLK_ACLK_BTS_G3D0, "aclk_bts_g3d0", "div_aclk_g3d",
+ ENABLE_ACLK_G3D, 6, 0, 0),
+ GATE(CLK_ACLK_ASYNCAPBS_G3D, "aclk_asyncapbs_g3d", "div_pclk_g3d",
+ ENABLE_ACLK_G3D, 5, 0, 0),
+ GATE(CLK_ACLK_ASYNCAPBM_G3D, "aclk_asyncapbm_g3d", "div_aclk_g3d",
+ ENABLE_ACLK_G3D, 4, 0, 0),
+ GATE(CLK_ACLK_AHB2APB_G3DP, "aclk_ahb2apb_g3dp", "div_pclk_g3d",
+ ENABLE_ACLK_G3D, 3, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_G3DNP_150, "aclk_g3dnp_150", "div_pclk_g3d",
+ ENABLE_ACLK_G3D, 2, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_G3DND_600, "aclk_g3dnd_600", "div_aclk_g3d",
+ ENABLE_ACLK_G3D, 1, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_G3D, "aclk_g3d", "div_aclk_g3d",
+ ENABLE_ACLK_G3D, 0, 0, 0),
+
+ /* ENABLE_PCLK_G3D */
+ GATE(CLK_PCLK_BTS_G3D1, "pclk_bts_g3d1", "div_pclk_g3d",
+ ENABLE_PCLK_G3D, 3, 0, 0),
+ GATE(CLK_PCLK_BTS_G3D0, "pclk_bts_g3d0", "div_pclk_g3d",
+ ENABLE_PCLK_G3D, 2, 0, 0),
+ GATE(CLK_PCLK_PMU_G3D, "pclk_pmu_g3d", "div_pclk_g3d",
+ ENABLE_PCLK_G3D, 1, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_SYSREG_G3D, "pclk_sysreg_g3d", "div_pclk_g3d",
+ ENABLE_PCLK_G3D, 0, CLK_IGNORE_UNUSED, 0),
+
+ /* ENABLE_SCLK_G3D */
+ GATE(CLK_SCLK_HPM_G3D, "sclk_hpm_g3d", "div_sclk_hpm_g3d",
+ ENABLE_SCLK_G3D, 0, 0, 0),
+};
+
+static struct samsung_cmu_info g3d_cmu_info __initdata = {
+ .pll_clks = g3d_pll_clks,
+ .nr_pll_clks = ARRAY_SIZE(g3d_pll_clks),
+ .mux_clks = g3d_mux_clks,
+ .nr_mux_clks = ARRAY_SIZE(g3d_mux_clks),
+ .div_clks = g3d_div_clks,
+ .nr_div_clks = ARRAY_SIZE(g3d_div_clks),
+ .gate_clks = g3d_gate_clks,
+ .nr_gate_clks = ARRAY_SIZE(g3d_gate_clks),
+ .nr_clk_ids = G3D_NR_CLK,
+ .clk_regs = g3d_clk_regs,
+ .nr_clk_regs = ARRAY_SIZE(g3d_clk_regs),
+};
+
+static void __init exynos5433_cmu_g3d_init(struct device_node *np)
+{
+ samsung_cmu_register_one(np, &g3d_cmu_info);
+}
+CLK_OF_DECLARE(exynos5433_cmu_g3d, "samsung,exynos5433-cmu-g3d",
+ exynos5433_cmu_g3d_init);
+
+/*
+ * Register offset definitions for CMU_GSCL
+ */
+#define MUX_SEL_GSCL 0x0200
+#define MUX_ENABLE_GSCL 0x0300
+#define MUX_STAT_GSCL 0x0400
+#define ENABLE_ACLK_GSCL 0x0800
+#define ENABLE_ACLK_GSCL_SECURE_SMMU_GSCL0 0x0804
+#define ENABLE_ACLK_GSCL_SECURE_SMMU_GSCL1 0x0808
+#define ENABLE_ACLK_GSCL_SECURE_SMMU_GSCL2 0x080c
+#define ENABLE_PCLK_GSCL 0x0900
+#define ENABLE_PCLK_GSCL_SECURE_SMMU_GSCL0 0x0904
+#define ENABLE_PCLK_GSCL_SECURE_SMMU_GSCL1 0x0908
+#define ENABLE_PCLK_GSCL_SECURE_SMMU_GSCL2 0x090c
+#define ENABLE_IP_GSCL0 0x0b00
+#define ENABLE_IP_GSCL1 0x0b04
+#define ENABLE_IP_GSCL_SECURE_SMMU_GSCL0 0x0b08
+#define ENABLE_IP_GSCL_SECURE_SMMU_GSCL1 0x0b0c
+#define ENABLE_IP_GSCL_SECURE_SMMU_GSCL2 0x0b10
+
+static unsigned long gscl_clk_regs[] __initdata = {
+ MUX_SEL_GSCL,
+ MUX_ENABLE_GSCL,
+ MUX_STAT_GSCL,
+ ENABLE_ACLK_GSCL,
+ ENABLE_ACLK_GSCL_SECURE_SMMU_GSCL0,
+ ENABLE_ACLK_GSCL_SECURE_SMMU_GSCL1,
+ ENABLE_ACLK_GSCL_SECURE_SMMU_GSCL2,
+ ENABLE_PCLK_GSCL,
+ ENABLE_PCLK_GSCL_SECURE_SMMU_GSCL0,
+ ENABLE_PCLK_GSCL_SECURE_SMMU_GSCL1,
+ ENABLE_PCLK_GSCL_SECURE_SMMU_GSCL2,
+ ENABLE_IP_GSCL0,
+ ENABLE_IP_GSCL1,
+ ENABLE_IP_GSCL_SECURE_SMMU_GSCL0,
+ ENABLE_IP_GSCL_SECURE_SMMU_GSCL1,
+ ENABLE_IP_GSCL_SECURE_SMMU_GSCL2,
+};
+
+/* list of all parent clocks */
+PNAME(aclk_gscl_111_user_p) = { "oscclk", "aclk_gscl_111", };
+PNAME(aclk_gscl_333_user_p) = { "oscclk", "aclk_gscl_333", };
+
+static struct samsung_mux_clock gscl_mux_clks[] __initdata = {
+ /* MUX_SEL_GSCL */
+ MUX(CLK_MOUT_ACLK_GSCL_111_USER, "mout_aclk_gscl_111_user",
+ aclk_gscl_111_user_p, MUX_SEL_GSCL, 4, 1),
+ MUX(CLK_MOUT_ACLK_GSCL_333_USER, "mout_aclk_gscl_333_user",
+ aclk_gscl_333_user_p, MUX_SEL_GSCL, 0, 1),
+};
+
+static struct samsung_gate_clock gscl_gate_clks[] __initdata = {
+ /* ENABLE_ACLK_GSCL */
+ GATE(CLK_ACLK_BTS_GSCL2, "aclk_bts_gscl2", "mout_aclk_gscl_333_user",
+ ENABLE_ACLK_GSCL, 11, 0, 0),
+ GATE(CLK_ACLK_BTS_GSCL1, "aclk_bts_gscl1", "mout_aclk_gscl_333_user",
+ ENABLE_ACLK_GSCL, 10, 0, 0),
+ GATE(CLK_ACLK_BTS_GSCL0, "aclk_bts_gscl0", "mout_aclk_gscl_333_user",
+ ENABLE_ACLK_GSCL, 9, 0, 0),
+ GATE(CLK_ACLK_AHB2APB_GSCLP, "aclk_ahb2apb_gsclp",
+ "mout_aclk_gscl_111_user", ENABLE_ACLK_GSCL,
+ 8, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_XIU_GSCLX, "aclk_xiu_gsclx", "mout_aclk_gscl_333_user",
+ ENABLE_ACLK_GSCL, 7, 0, 0),
+ GATE(CLK_ACLK_GSCLNP_111, "aclk_gsclnp_111", "mout_aclk_gscl_111_user",
+ ENABLE_ACLK_GSCL, 6, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_GSCLRTND_333, "aclk_gsclrtnd_333",
+ "mout_aclk_gscl_333_user", ENABLE_ACLK_GSCL, 5, 0, 0),
+ GATE(CLK_ACLK_GSCLBEND_333, "aclk_gsclbend_333",
+ "mout_aclk_gscl_333_user", ENABLE_ACLK_GSCL, 4, 0, 0),
+ GATE(CLK_ACLK_GSD, "aclk_gsd", "mout_aclk_gscl_333_user",
+ ENABLE_ACLK_GSCL, 3, 0, 0),
+ GATE(CLK_ACLK_GSCL2, "aclk_gscl2", "mout_aclk_gscl_333_user",
+ ENABLE_ACLK_GSCL, 2, 0, 0),
+ GATE(CLK_ACLK_GSCL1, "aclk_gscl1", "mout_aclk_gscl_333_user",
+ ENABLE_ACLK_GSCL, 1, 0, 0),
+ GATE(CLK_ACLK_GSCL0, "aclk_gscl0", "mout_aclk_gscl_333_user",
+ ENABLE_ACLK_GSCL, 0, 0, 0),
+
+ /* ENABLE_ACLK_GSCL_SECURE_SMMU_GSCL0 */
+ GATE(CLK_ACLK_SMMU_GSCL0, "aclk_smmu_gscl0", "mout_aclk_gscl_333_user",
+ ENABLE_ACLK_GSCL_SECURE_SMMU_GSCL0, 0, 0, 0),
+
+ /* ENABLE_ACLK_GSCL_SECURE_SMMU_GSCL1 */
+ GATE(CLK_ACLK_SMMU_GSCL1, "aclk_smmu_gscl1", "mout_aclk_gscl_333_user",
+ ENABLE_ACLK_GSCL_SECURE_SMMU_GSCL1, 0, 0, 0),
+
+ /* ENABLE_ACLK_GSCL_SECURE_SMMU_GSCL2 */
+ GATE(CLK_ACLK_SMMU_GSCL2, "aclk_smmu_gscl2", "mout_aclk_gscl_333_user",
+ ENABLE_ACLK_GSCL_SECURE_SMMU_GSCL2, 0, 0, 0),
+
+ /* ENABLE_PCLK_GSCL */
+ GATE(CLK_PCLK_BTS_GSCL2, "pclk_bts_gscl2", "mout_aclk_gscl_111_user",
+ ENABLE_PCLK_GSCL, 7, 0, 0),
+ GATE(CLK_PCLK_BTS_GSCL1, "pclk_bts_gscl1", "mout_aclk_gscl_111_user",
+ ENABLE_PCLK_GSCL, 6, 0, 0),
+ GATE(CLK_PCLK_BTS_GSCL0, "pclk_bts_gscl0", "mout_aclk_gscl_111_user",
+ ENABLE_PCLK_GSCL, 5, 0, 0),
+ GATE(CLK_PCLK_PMU_GSCL, "pclk_pmu_gscl", "mout_aclk_gscl_111_user",
+ ENABLE_PCLK_GSCL, 4, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_SYSREG_GSCL, "pclk_sysreg_gscl",
+ "mout_aclk_gscl_111_user", ENABLE_PCLK_GSCL,
+ 3, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_GSCL2, "pclk_gscl2", "mout_aclk_gscl_111_user",
+ ENABLE_PCLK_GSCL, 2, 0, 0),
+ GATE(CLK_PCLK_GSCL1, "pclk_gscl1", "mout_aclk_gscl_111_user",
+ ENABLE_PCLK_GSCL, 1, 0, 0),
+ GATE(CLK_PCLK_GSCL0, "pclk_gscl0", "mout_aclk_gscl_111_user",
+ ENABLE_PCLK_GSCL, 0, 0, 0),
+
+ /* ENABLE_PCLK_GSCL_SECURE_SMMU_GSCL0 */
+ GATE(CLK_PCLK_SMMU_GSCL0, "pclk_smmu_gscl0", "mout_aclk_gscl_111_user",
+ ENABLE_PCLK_GSCL_SECURE_SMMU_GSCL0, 0, 0, 0),
+
+ /* ENABLE_PCLK_GSCL_SECURE_SMMU_GSCL1 */
+ GATE(CLK_PCLK_SMMU_GSCL1, "pclk_smmu_gscl1", "mout_aclk_gscl_111_user",
+ ENABLE_PCLK_GSCL_SECURE_SMMU_GSCL1, 0, 0, 0),
+
+ /* ENABLE_PCLK_GSCL_SECURE_SMMU_GSCL2 */
+ GATE(CLK_PCLK_SMMU_GSCL2, "pclk_smmu_gscl2", "mout_aclk_gscl_111_user",
+ ENABLE_PCLK_GSCL_SECURE_SMMU_GSCL2, 0, 0, 0),
+};
+
+static struct samsung_cmu_info gscl_cmu_info __initdata = {
+ .mux_clks = gscl_mux_clks,
+ .nr_mux_clks = ARRAY_SIZE(gscl_mux_clks),
+ .gate_clks = gscl_gate_clks,
+ .nr_gate_clks = ARRAY_SIZE(gscl_gate_clks),
+ .nr_clk_ids = GSCL_NR_CLK,
+ .clk_regs = gscl_clk_regs,
+ .nr_clk_regs = ARRAY_SIZE(gscl_clk_regs),
+};
+
+static void __init exynos5433_cmu_gscl_init(struct device_node *np)
+{
+ samsung_cmu_register_one(np, &gscl_cmu_info);
+}
+CLK_OF_DECLARE(exynos5433_cmu_gscl, "samsung,exynos5433-cmu-gscl",
+ exynos5433_cmu_gscl_init);
+
+/*
+ * Register offset definitions for CMU_APOLLO
+ */
+#define APOLLO_PLL_LOCK 0x0000
+#define APOLLO_PLL_CON0 0x0100
+#define APOLLO_PLL_CON1 0x0104
+#define APOLLO_PLL_FREQ_DET 0x010c
+#define MUX_SEL_APOLLO0 0x0200
+#define MUX_SEL_APOLLO1 0x0204
+#define MUX_SEL_APOLLO2 0x0208
+#define MUX_ENABLE_APOLLO0 0x0300
+#define MUX_ENABLE_APOLLO1 0x0304
+#define MUX_ENABLE_APOLLO2 0x0308
+#define MUX_STAT_APOLLO0 0x0400
+#define MUX_STAT_APOLLO1 0x0404
+#define MUX_STAT_APOLLO2 0x0408
+#define DIV_APOLLO0 0x0600
+#define DIV_APOLLO1 0x0604
+#define DIV_APOLLO_PLL_FREQ_DET 0x0608
+#define DIV_STAT_APOLLO0 0x0700
+#define DIV_STAT_APOLLO1 0x0704
+#define DIV_STAT_APOLLO_PLL_FREQ_DET 0x0708
+#define ENABLE_ACLK_APOLLO 0x0800
+#define ENABLE_PCLK_APOLLO 0x0900
+#define ENABLE_SCLK_APOLLO 0x0a00
+#define ENABLE_IP_APOLLO0 0x0b00
+#define ENABLE_IP_APOLLO1 0x0b04
+#define CLKOUT_CMU_APOLLO 0x0c00
+#define CLKOUT_CMU_APOLLO_DIV_STAT 0x0c04
+#define ARMCLK_STOPCTRL 0x1000
+#define APOLLO_PWR_CTRL 0x1020
+#define APOLLO_PWR_CTRL2 0x1024
+#define APOLLO_INTR_SPREAD_ENABLE 0x1080
+#define APOLLO_INTR_SPREAD_USE_STANDBYWFI 0x1084
+#define APOLLO_INTR_SPREAD_BLOCKING_DURATION 0x1088
+
+static unsigned long apollo_clk_regs[] __initdata = {
+ APOLLO_PLL_LOCK,
+ APOLLO_PLL_CON0,
+ APOLLO_PLL_CON1,
+ APOLLO_PLL_FREQ_DET,
+ MUX_SEL_APOLLO0,
+ MUX_SEL_APOLLO1,
+ MUX_SEL_APOLLO2,
+ MUX_ENABLE_APOLLO0,
+ MUX_ENABLE_APOLLO1,
+ MUX_ENABLE_APOLLO2,
+ MUX_STAT_APOLLO0,
+ MUX_STAT_APOLLO1,
+ MUX_STAT_APOLLO2,
+ DIV_APOLLO0,
+ DIV_APOLLO1,
+ DIV_APOLLO_PLL_FREQ_DET,
+ DIV_STAT_APOLLO0,
+ DIV_STAT_APOLLO1,
+ DIV_STAT_APOLLO_PLL_FREQ_DET,
+ ENABLE_ACLK_APOLLO,
+ ENABLE_PCLK_APOLLO,
+ ENABLE_SCLK_APOLLO,
+ ENABLE_IP_APOLLO0,
+ ENABLE_IP_APOLLO1,
+ CLKOUT_CMU_APOLLO,
+ CLKOUT_CMU_APOLLO_DIV_STAT,
+ ARMCLK_STOPCTRL,
+ APOLLO_PWR_CTRL,
+ APOLLO_PWR_CTRL2,
+ APOLLO_INTR_SPREAD_ENABLE,
+ APOLLO_INTR_SPREAD_USE_STANDBYWFI,
+ APOLLO_INTR_SPREAD_BLOCKING_DURATION,
+};
+
+/* list of all parent clocks */
+PNAME(mout_apollo_pll_p) = { "oscclk", "fout_apollo_pll", };
+PNAME(mout_bus_pll_apollo_user_p) = { "oscclk", "sclk_bus_pll_apollo", };
+PNAME(mout_apollo_p) = { "mout_apollo_pll",
+ "mout_bus_pll_apollo_user", };
+
+static struct samsung_pll_clock apollo_pll_clks[] __initdata = {
+ PLL(pll_35xx, CLK_FOUT_APOLLO_PLL, "fout_apollo_pll", "oscclk",
+ APOLLO_PLL_LOCK, APOLLO_PLL_CON0, exynos5443_pll_rates),
+};
+
+static struct samsung_mux_clock apollo_mux_clks[] __initdata = {
+ /* MUX_SEL_APOLLO0 */
+ MUX_F(CLK_MOUT_APOLLO_PLL, "mout_apollo_pll", mout_apollo_pll_p,
+ MUX_SEL_APOLLO0, 0, 1, 0, CLK_MUX_READ_ONLY),
+
+ /* MUX_SEL_APOLLO1 */
+ MUX(CLK_MOUT_BUS_PLL_APOLLO_USER, "mout_bus_pll_apollo_user",
+ mout_bus_pll_apollo_user_p, MUX_SEL_APOLLO1, 0, 1),
+
+ /* MUX_SEL_APOLLO2 */
+ MUX_F(CLK_MOUT_APOLLO, "mout_apollo", mout_apollo_p, MUX_SEL_APOLLO2,
+ 0, 1, 0, CLK_MUX_READ_ONLY),
+};
+
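+/*
+ * The CPU cluster dividers below are registered read-only and with
+ * CLK_GET_RATE_NOCACHE: their settings are expected to be changed outside
+ * the clock framework (e.g. by CPU frequency scaling code), so rates must
+ * always be re-read from the hardware.
+ */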
+static struct samsung_div_clock apollo_div_clks[] __initdata = {
+ /* DIV_APOLLO0 */
+ DIV_F(CLK_DIV_CNTCLK_APOLLO, "div_cntclk_apollo", "div_apollo2",
+ DIV_APOLLO0, 24, 3, CLK_GET_RATE_NOCACHE,
+ CLK_DIVIDER_READ_ONLY),
+ DIV_F(CLK_DIV_PCLK_DBG_APOLLO, "div_pclk_dbg_apollo", "div_apollo2",
+ DIV_APOLLO0, 20, 3, CLK_GET_RATE_NOCACHE,
+ CLK_DIVIDER_READ_ONLY),
+ DIV_F(CLK_DIV_ATCLK_APOLLO, "div_atclk_apollo", "div_apollo2",
+ DIV_APOLLO0, 16, 3, CLK_GET_RATE_NOCACHE,
+ CLK_DIVIDER_READ_ONLY),
+ DIV_F(CLK_DIV_PCLK_APOLLO, "div_pclk_apollo", "div_apollo2",
+ DIV_APOLLO0, 12, 3, CLK_GET_RATE_NOCACHE,
+ CLK_DIVIDER_READ_ONLY),
+ DIV_F(CLK_DIV_ACLK_APOLLO, "div_aclk_apollo", "div_apollo2",
+ DIV_APOLLO0, 8, 3, CLK_GET_RATE_NOCACHE,
+ CLK_DIVIDER_READ_ONLY),
+ DIV_F(CLK_DIV_APOLLO2, "div_apollo2", "div_apollo1",
+ DIV_APOLLO0, 4, 3, CLK_GET_RATE_NOCACHE,
+ CLK_DIVIDER_READ_ONLY),
+ DIV_F(CLK_DIV_APOLLO1, "div_apollo1", "mout_apollo",
+ DIV_APOLLO0, 0, 3, CLK_GET_RATE_NOCACHE,
+ CLK_DIVIDER_READ_ONLY),
+
+ /* DIV_APOLLO1 */
+ DIV_F(CLK_DIV_SCLK_HPM_APOLLO, "div_sclk_hpm_apollo", "mout_apollo",
+ DIV_APOLLO1, 4, 3, CLK_GET_RATE_NOCACHE,
+ CLK_DIVIDER_READ_ONLY),
+ DIV_F(CLK_DIV_APOLLO_PLL, "div_apollo_pll", "mout_apollo",
+ DIV_APOLLO1, 0, 3, CLK_GET_RATE_NOCACHE,
+ CLK_DIVIDER_READ_ONLY),
+};
+
+static struct samsung_gate_clock apollo_gate_clks[] __initdata = {
+ /* ENABLE_ACLK_APOLLO */
+ GATE(CLK_ACLK_ASATBSLV_APOLLO_3_CSSYS, "aclk_asatbslv_apollo_3_cssys",
+ "div_atclk_apollo", ENABLE_ACLK_APOLLO,
+ 6, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASATBSLV_APOLLO_2_CSSYS, "aclk_asatbslv_apollo_2_cssys",
+ "div_atclk_apollo", ENABLE_ACLK_APOLLO,
+ 5, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASATBSLV_APOLLO_1_CSSYS, "aclk_asatbslv_apollo_1_cssys",
+ "div_atclk_apollo", ENABLE_ACLK_APOLLO,
+ 4, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASATBSLV_APOLLO_0_CSSYS, "aclk_asatbslv_apollo_0_cssys",
+ "div_atclk_apollo", ENABLE_ACLK_APOLLO,
+ 3, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCACES_APOLLO_CCI, "aclk_asyncaces_apollo_cci",
+ "div_aclk_apollo", ENABLE_ACLK_APOLLO,
+ 2, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_AHB2APB_APOLLOP, "aclk_ahb2apb_apollop",
+ "div_pclk_apollo", ENABLE_ACLK_APOLLO,
+ 1, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_APOLLONP_200, "aclk_apollonp_200",
+ "div_pclk_apollo", ENABLE_ACLK_APOLLO,
+ 0, CLK_IGNORE_UNUSED, 0),
+
+ /* ENABLE_PCLK_APOLLO */
+ GATE(CLK_PCLK_ASAPBMST_CSSYS_APOLLO, "pclk_asapbmst_cssys_apollo",
+ "div_pclk_dbg_apollo", ENABLE_PCLK_APOLLO,
+ 2, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_PMU_APOLLO, "pclk_pmu_apollo", "div_pclk_apollo",
+ ENABLE_PCLK_APOLLO, 1, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_SYSREG_APOLLO, "pclk_sysreg_apollo",
+ "div_pclk_apollo", ENABLE_PCLK_APOLLO,
+ 0, CLK_IGNORE_UNUSED, 0),
+
+ /* ENABLE_SCLK_APOLLO */
+ GATE(CLK_CNTCLK_APOLLO, "cntclk_apollo", "div_cntclk_apollo",
+ ENABLE_SCLK_APOLLO, 3, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_SCLK_HPM_APOLLO, "sclk_hpm_apollo", "div_sclk_hpm_apollo",
+ ENABLE_SCLK_APOLLO, 1, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_SCLK_APOLLO, "sclk_apollo", "div_apollo_pll",
+ ENABLE_SCLK_APOLLO, 0, CLK_IGNORE_UNUSED, 0),
+};
+
+static struct samsung_cmu_info apollo_cmu_info __initdata = {
+ .pll_clks = apollo_pll_clks,
+ .nr_pll_clks = ARRAY_SIZE(apollo_pll_clks),
+ .mux_clks = apollo_mux_clks,
+ .nr_mux_clks = ARRAY_SIZE(apollo_mux_clks),
+ .div_clks = apollo_div_clks,
+ .nr_div_clks = ARRAY_SIZE(apollo_div_clks),
+ .gate_clks = apollo_gate_clks,
+ .nr_gate_clks = ARRAY_SIZE(apollo_gate_clks),
+ .nr_clk_ids = APOLLO_NR_CLK,
+ .clk_regs = apollo_clk_regs,
+ .nr_clk_regs = ARRAY_SIZE(apollo_clk_regs),
+};
+
+static void __init exynos5433_cmu_apollo_init(struct device_node *np)
+{
+ samsung_cmu_register_one(np, &apollo_cmu_info);
+}
+CLK_OF_DECLARE(exynos5433_cmu_apollo, "samsung,exynos5433-cmu-apollo",
+ exynos5433_cmu_apollo_init);
+
+/*
+ * Register offset definitions for CMU_ATLAS
+ */
+#define ATLAS_PLL_LOCK 0x0000
+#define ATLAS_PLL_CON0 0x0100
+#define ATLAS_PLL_CON1 0x0104
+#define ATLAS_PLL_FREQ_DET 0x010c
+#define MUX_SEL_ATLAS0 0x0200
+#define MUX_SEL_ATLAS1 0x0204
+#define MUX_SEL_ATLAS2 0x0208
+#define MUX_ENABLE_ATLAS0 0x0300
+#define MUX_ENABLE_ATLAS1 0x0304
+#define MUX_ENABLE_ATLAS2 0x0308
+#define MUX_STAT_ATLAS0 0x0400
+#define MUX_STAT_ATLAS1 0x0404
+#define MUX_STAT_ATLAS2 0x0408
+#define DIV_ATLAS0 0x0600
+#define DIV_ATLAS1 0x0604
+#define DIV_ATLAS_PLL_FREQ_DET 0x0608
+#define DIV_STAT_ATLAS0 0x0700
+#define DIV_STAT_ATLAS1 0x0704
+#define DIV_STAT_ATLAS_PLL_FREQ_DET 0x0708
+#define ENABLE_ACLK_ATLAS 0x0800
+#define ENABLE_PCLK_ATLAS 0x0900
+#define ENABLE_SCLK_ATLAS 0x0a00
+#define ENABLE_IP_ATLAS0 0x0b00
+#define ENABLE_IP_ATLAS1 0x0b04
+#define CLKOUT_CMU_ATLAS 0x0c00
+#define CLKOUT_CMU_ATLAS_DIV_STAT 0x0c04
+#define ARMCLK_STOPCTRL 0x1000
+#define ATLAS_PWR_CTRL 0x1020
+#define ATLAS_PWR_CTRL2 0x1024
+#define ATLAS_INTR_SPREAD_ENABLE 0x1080
+#define ATLAS_INTR_SPREAD_USE_STANDBYWFI 0x1084
+#define ATLAS_INTR_SPREAD_BLOCKING_DURATION 0x1088
+
+static unsigned long atlas_clk_regs[] __initdata = {
+ ATLAS_PLL_LOCK,
+ ATLAS_PLL_CON0,
+ ATLAS_PLL_CON1,
+ ATLAS_PLL_FREQ_DET,
+ MUX_SEL_ATLAS0,
+ MUX_SEL_ATLAS1,
+ MUX_SEL_ATLAS2,
+ MUX_ENABLE_ATLAS0,
+ MUX_ENABLE_ATLAS1,
+ MUX_ENABLE_ATLAS2,
+ MUX_STAT_ATLAS0,
+ MUX_STAT_ATLAS1,
+ MUX_STAT_ATLAS2,
+ DIV_ATLAS0,
+ DIV_ATLAS1,
+ DIV_ATLAS_PLL_FREQ_DET,
+ DIV_STAT_ATLAS0,
+ DIV_STAT_ATLAS1,
+ DIV_STAT_ATLAS_PLL_FREQ_DET,
+ ENABLE_ACLK_ATLAS,
+ ENABLE_PCLK_ATLAS,
+ ENABLE_SCLK_ATLAS,
+ ENABLE_IP_ATLAS0,
+ ENABLE_IP_ATLAS1,
+ CLKOUT_CMU_ATLAS,
+ CLKOUT_CMU_ATLAS_DIV_STAT,
+ ARMCLK_STOPCTRL,
+ ATLAS_PWR_CTRL,
+ ATLAS_PWR_CTRL2,
+ ATLAS_INTR_SPREAD_ENABLE,
+ ATLAS_INTR_SPREAD_USE_STANDBYWFI,
+ ATLAS_INTR_SPREAD_BLOCKING_DURATION,
+};
+
+/* list of all parent clocks */
+PNAME(mout_atlas_pll_p) = { "oscclk", "fout_atlas_pll", };
+PNAME(mout_bus_pll_atlas_user_p) = { "oscclk", "sclk_bus_pll_atlas", };
+PNAME(mout_atlas_p) = { "mout_atlas_pll",
+ "mout_bus_pll_atlas_user", };
+
+static struct samsung_pll_clock atlas_pll_clks[] __initdata = {
+ PLL(pll_35xx, CLK_FOUT_ATLAS_PLL, "fout_atlas_pll", "oscclk",
+ ATLAS_PLL_LOCK, ATLAS_PLL_CON0, exynos5443_pll_rates),
+};
+
+static struct samsung_mux_clock atlas_mux_clks[] __initdata = {
+ /* MUX_SEL_ATLAS0 */
+ MUX_F(CLK_MOUT_ATLAS_PLL, "mout_atlas_pll", mout_atlas_pll_p,
+ MUX_SEL_ATLAS0, 0, 1, 0, CLK_MUX_READ_ONLY),
+
+ /* MUX_SEL_ATLAS1 */
+ MUX(CLK_MOUT_BUS_PLL_ATLAS_USER, "mout_bus_pll_atlas_user",
+ mout_bus_pll_atlas_user_p, MUX_SEL_ATLAS1, 0, 1),
+
+ /* MUX_SEL_ATLAS2 */
+ MUX_F(CLK_MOUT_ATLAS, "mout_atlas", mout_atlas_p, MUX_SEL_ATLAS2,
+ 0, 1, 0, CLK_MUX_READ_ONLY),
+};
+
+static struct samsung_div_clock atlas_div_clks[] __initdata = {
+ /* DIV_ATLAS0 */
+ DIV_F(CLK_DIV_CNTCLK_ATLAS, "div_cntclk_atlas", "div_atlas2",
+ DIV_ATLAS0, 24, 3, CLK_GET_RATE_NOCACHE,
+ CLK_DIVIDER_READ_ONLY),
+ DIV_F(CLK_DIV_PCLK_DBG_ATLAS, "div_pclk_dbg_atlas", "div_atclk_atlas",
+ DIV_ATLAS0, 20, 3, CLK_GET_RATE_NOCACHE,
+ CLK_DIVIDER_READ_ONLY),
+ DIV_F(CLK_DIV_ATCLK_ATLASO, "div_atclk_atlas", "div_atlas2",
+ DIV_ATLAS0, 16, 3, CLK_GET_RATE_NOCACHE,
+ CLK_DIVIDER_READ_ONLY),
+ DIV_F(CLK_DIV_PCLK_ATLAS, "div_pclk_atlas", "div_atlas2",
+ DIV_ATLAS0, 12, 3, CLK_GET_RATE_NOCACHE,
+ CLK_DIVIDER_READ_ONLY),
+ DIV_F(CLK_DIV_ACLK_ATLAS, "div_aclk_atlas", "div_atlas2",
+ DIV_ATLAS0, 8, 3, CLK_GET_RATE_NOCACHE,
+ CLK_DIVIDER_READ_ONLY),
+ DIV_F(CLK_DIV_ATLAS2, "div_atlas2", "div_atlas1",
+ DIV_ATLAS0, 4, 3, CLK_GET_RATE_NOCACHE,
+ CLK_DIVIDER_READ_ONLY),
+ DIV_F(CLK_DIV_ATLAS1, "div_atlas1", "mout_atlas",
+ DIV_ATLAS0, 0, 3, CLK_GET_RATE_NOCACHE,
+ CLK_DIVIDER_READ_ONLY),
+
+ /* DIV_ATLAS1 */
+ DIV_F(CLK_DIV_SCLK_HPM_ATLAS, "div_sclk_hpm_atlas", "mout_atlas",
+ DIV_ATLAS1, 4, 3, CLK_GET_RATE_NOCACHE,
+ CLK_DIVIDER_READ_ONLY),
+ DIV_F(CLK_DIV_ATLAS_PLL, "div_atlas_pll", "mout_atlas",
+ DIV_ATLAS1, 0, 3, CLK_GET_RATE_NOCACHE,
+ CLK_DIVIDER_READ_ONLY),
+};
+
+static struct samsung_gate_clock atlas_gate_clks[] __initdata = {
+ /* ENABLE_ACLK_ATLAS */
+ GATE(CLK_ACLK_ATB_AUD_CSSYS, "aclk_atb_aud_cssys",
+ "div_atclk_atlas", ENABLE_ACLK_ATLAS,
+ 9, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ATB_APOLLO3_CSSYS, "aclk_atb_apollo3_cssys",
+ "div_atclk_atlas", ENABLE_ACLK_ATLAS,
+ 8, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ATB_APOLLO2_CSSYS, "aclk_atb_apollo2_cssys",
+ "div_atclk_atlas", ENABLE_ACLK_ATLAS,
+ 7, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ATB_APOLLO1_CSSYS, "aclk_atb_apollo1_cssys",
+ "div_atclk_atlas", ENABLE_ACLK_ATLAS,
+ 6, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ATB_APOLLO0_CSSYS, "aclk_atb_apollo0_cssys",
+ "div_atclk_atlas", ENABLE_ACLK_ATLAS,
+ 5, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAHBS_CSSYS_SSS, "aclk_asyncahbs_cssys_sss",
+ "div_atclk_atlas", ENABLE_ACLK_ATLAS,
+ 4, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAXIS_CSSYS_CCIX, "aclk_asyncaxis_cssys_ccix",
+ "div_pclk_dbg_atlas", ENABLE_ACLK_ATLAS,
+ 3, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCACES_ATLAS_CCI, "aclk_asyncaces_atlas_cci",
+ "div_aclk_atlas", ENABLE_ACLK_ATLAS,
+ 2, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_AHB2APB_ATLASP, "aclk_ahb2apb_atlasp", "div_pclk_atlas",
+ ENABLE_ACLK_ATLAS, 1, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ATLASNP_200, "aclk_atlasnp_200", "div_pclk_atlas",
+ ENABLE_ACLK_ATLAS, 0, CLK_IGNORE_UNUSED, 0),
+
+ /* ENABLE_PCLK_ATLAS */
+ GATE(CLK_PCLK_ASYNCAPB_AUD_CSSYS, "pclk_asyncapb_aud_cssys",
+ "div_pclk_dbg_atlas", ENABLE_PCLK_ATLAS,
+ 5, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_ASYNCAPB_ISP_CSSYS, "pclk_asyncapb_isp_cssys",
+ "div_pclk_dbg_atlas", ENABLE_PCLK_ATLAS,
+ 4, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_ASYNCAPB_APOLLO_CSSYS, "pclk_asyncapb_apollo_cssys",
+ "div_pclk_dbg_atlas", ENABLE_PCLK_ATLAS,
+ 3, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_PMU_ATLAS, "pclk_pmu_atlas", "div_pclk_atlas",
+ ENABLE_PCLK_ATLAS, 2, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_SYSREG_ATLAS, "pclk_sysreg_atlas", "div_pclk_atlas",
+ ENABLE_PCLK_ATLAS, 1, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_SECJTAG, "pclk_secjtag", "div_pclk_dbg_atlas",
+ ENABLE_PCLK_ATLAS, 0, CLK_IGNORE_UNUSED, 0),
+
+ /* ENABLE_SCLK_ATLAS */
+ GATE(CLK_CNTCLK_ATLAS, "cntclk_atlas", "div_cntclk_atlas",
+ ENABLE_SCLK_ATLAS, 10, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_SCLK_HPM_ATLAS, "sclk_hpm_atlas", "div_sclk_hpm_atlas",
+ ENABLE_SCLK_ATLAS, 7, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_TRACECLK, "traceclk", "div_atclk_atlas",
+ ENABLE_SCLK_ATLAS, 6, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_CTMCLK, "ctmclk", "div_atclk_atlas",
+ ENABLE_SCLK_ATLAS, 5, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_HCLK_CSSYS, "hclk_cssys", "div_atclk_atlas",
+ ENABLE_SCLK_ATLAS, 4, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_DBG_CSSYS, "pclk_dbg_cssys", "div_pclk_dbg_atlas",
+ ENABLE_SCLK_ATLAS, 3, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_DBG, "pclk_dbg", "div_pclk_dbg_atlas",
+ ENABLE_SCLK_ATLAS, 2, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ATCLK, "atclk", "div_atclk_atlas",
+ ENABLE_SCLK_ATLAS, 1, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_SCLK_ATLAS, "sclk_atlas", "div_atlas2",
+ ENABLE_SCLK_ATLAS, 0, CLK_IGNORE_UNUSED, 0),
+};
+
+static struct samsung_cmu_info atlas_cmu_info __initdata = {
+ .pll_clks = atlas_pll_clks,
+ .nr_pll_clks = ARRAY_SIZE(atlas_pll_clks),
+ .mux_clks = atlas_mux_clks,
+ .nr_mux_clks = ARRAY_SIZE(atlas_mux_clks),
+ .div_clks = atlas_div_clks,
+ .nr_div_clks = ARRAY_SIZE(atlas_div_clks),
+ .gate_clks = atlas_gate_clks,
+ .nr_gate_clks = ARRAY_SIZE(atlas_gate_clks),
+ .nr_clk_ids = ATLAS_NR_CLK,
+ .clk_regs = atlas_clk_regs,
+ .nr_clk_regs = ARRAY_SIZE(atlas_clk_regs),
+};
+
+static void __init exynos5433_cmu_atlas_init(struct device_node *np)
+{
+ samsung_cmu_register_one(np, &atlas_cmu_info);
+}
+CLK_OF_DECLARE(exynos5433_cmu_atlas, "samsung,exynos5433-cmu-atlas",
+ exynos5433_cmu_atlas_init);
+
+/*
+ * Register offset definitions for CMU_MSCL
+ */
+#define MUX_SEL_MSCL0 0x0200
+#define MUX_SEL_MSCL1 0x0204
+#define MUX_ENABLE_MSCL0 0x0300
+#define MUX_ENABLE_MSCL1 0x0304
+#define MUX_STAT_MSCL0 0x0400
+#define MUX_STAT_MSCL1 0x0404
+#define DIV_MSCL 0x0600
+#define DIV_STAT_MSCL 0x0700
+#define ENABLE_ACLK_MSCL 0x0800
+#define ENABLE_ACLK_MSCL_SECURE_SMMU_M2MSCALER0 0x0804
+#define ENABLE_ACLK_MSCL_SECURE_SMMU_M2MSCALER1 0x0808
+#define ENABLE_ACLK_MSCL_SECURE_SMMU_JPEG 0x080c
+#define ENABLE_PCLK_MSCL 0x0900
+#define ENABLE_PCLK_MSCL_SECURE_SMMU_M2MSCALER0 0x0904
+#define ENABLE_PCLK_MSCL_SECURE_SMMU_M2MSCALER1 0x0908
+#define ENABLE_PCLK_MSCL_SECURE_SMMU_JPEG 0x090c
+#define ENABLE_SCLK_MSCL 0x0a00
+#define ENABLE_IP_MSCL0 0x0b00
+#define ENABLE_IP_MSCL1 0x0b04
+#define ENABLE_IP_MSCL_SECURE_SMMU_M2MSCALER0 0x0b08
+#define ENABLE_IP_MSCL_SECURE_SMMU_M2MSCALER1 0x0b0c
+#define ENABLE_IP_MSCL_SECURE_SMMU_JPEG 0x0b10
+
+static unsigned long mscl_clk_regs[] __initdata = {
+ MUX_SEL_MSCL0,
+ MUX_SEL_MSCL1,
+ MUX_ENABLE_MSCL0,
+ MUX_ENABLE_MSCL1,
+ MUX_STAT_MSCL0,
+ MUX_STAT_MSCL1,
+ DIV_MSCL,
+ DIV_STAT_MSCL,
+ ENABLE_ACLK_MSCL,
+ ENABLE_ACLK_MSCL_SECURE_SMMU_M2MSCALER0,
+ ENABLE_ACLK_MSCL_SECURE_SMMU_M2MSCALER1,
+ ENABLE_ACLK_MSCL_SECURE_SMMU_JPEG,
+ ENABLE_PCLK_MSCL,
+ ENABLE_PCLK_MSCL_SECURE_SMMU_M2MSCALER0,
+ ENABLE_PCLK_MSCL_SECURE_SMMU_M2MSCALER1,
+ ENABLE_PCLK_MSCL_SECURE_SMMU_JPEG,
+ ENABLE_SCLK_MSCL,
+ ENABLE_IP_MSCL0,
+ ENABLE_IP_MSCL1,
+ ENABLE_IP_MSCL_SECURE_SMMU_M2MSCALER0,
+ ENABLE_IP_MSCL_SECURE_SMMU_M2MSCALER1,
+ ENABLE_IP_MSCL_SECURE_SMMU_JPEG,
+};
+
+/* list of all parent clocks */
+PNAME(mout_sclk_jpeg_user_p) = { "oscclk", "sclk_jpeg_mscl", };
+PNAME(mout_aclk_mscl_400_user_p) = { "oscclk", "aclk_mscl_400", };
+PNAME(mout_sclk_jpeg_p) = { "mout_sclk_jpeg_user",
+ "mout_aclk_mscl_400_user", };
+
+static struct samsung_mux_clock mscl_mux_clks[] __initdata = {
+ /* MUX_SEL_MSCL0 */
+ MUX(CLK_MOUT_SCLK_JPEG_USER, "mout_sclk_jpeg_user",
+ mout_sclk_jpeg_user_p, MUX_SEL_MSCL0, 4, 1),
+ MUX(CLK_MOUT_ACLK_MSCL_400_USER, "mout_aclk_mscl_400_user",
+ mout_aclk_mscl_400_user_p, MUX_SEL_MSCL0, 0, 1),
+
+ /* MUX_SEL_MSCL1 */
+ MUX(CLK_MOUT_SCLK_JPEG, "mout_sclk_jpeg", mout_sclk_jpeg_p,
+ MUX_SEL_MSCL1, 0, 1),
+};
+
+static struct samsung_div_clock mscl_div_clks[] __initdata = {
+ /* DIV_MSCL */
+ DIV(CLK_DIV_PCLK_MSCL, "div_pclk_mscl", "mout_aclk_mscl_400_user",
+ DIV_MSCL, 0, 3),
+};
+
+static struct samsung_gate_clock mscl_gate_clks[] __initdata = {
+ /* ENABLE_ACLK_MSCL */
+ GATE(CLK_ACLK_BTS_JPEG, "aclk_bts_jpeg", "mout_aclk_mscl_400_user",
+ ENABLE_ACLK_MSCL, 9, 0, 0),
+ GATE(CLK_ACLK_BTS_M2MSCALER1, "aclk_bts_m2mscaler1",
+ "mout_aclk_mscl_400_user", ENABLE_ACLK_MSCL, 8, 0, 0),
+ GATE(CLK_ACLK_BTS_M2MSCALER0, "aclk_bts_m2mscaler0",
+ "mout_aclk_mscl_400_user", ENABLE_ACLK_MSCL, 7, 0, 0),
+ GATE(CLK_ACLK_AHB2APB_MSCL0P, "aclk_ahb2apb_mscl0p", "div_pclk_mscl",
+ ENABLE_ACLK_MSCL, 6, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_XIU_MSCLX, "aclk_xiu_msclx", "mout_aclk_mscl_400_user",
+ ENABLE_ACLK_MSCL, 5, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_MSCLNP_100, "aclk_msclnp_100", "div_pclk_mscl",
+ ENABLE_ACLK_MSCL, 4, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_MSCLND_400, "aclk_msclnd_400", "mout_aclk_mscl_400_user",
+ ENABLE_ACLK_MSCL, 3, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_JPEG, "aclk_jpeg", "mout_aclk_mscl_400_user",
+ ENABLE_ACLK_MSCL, 2, 0, 0),
+ GATE(CLK_ACLK_M2MSCALER1, "aclk_m2mscaler1", "mout_aclk_mscl_400_user",
+ ENABLE_ACLK_MSCL, 1, 0, 0),
+ GATE(CLK_ACLK_M2MSCALER0, "aclk_m2mscaler0", "mout_aclk_mscl_400_user",
+ ENABLE_ACLK_MSCL, 0, 0, 0),
+
+ /* ENABLE_ACLK_MSCL_SECURE_SMMU_M2MSCALER0 */
+ GATE(CLK_ACLK_SMMU_M2MSCALER0, "aclk_smmu_m2mscaler0",
+ "mout_aclk_mscl_400_user",
+ ENABLE_ACLK_MSCL_SECURE_SMMU_M2MSCALER0,
+ 0, CLK_IGNORE_UNUSED, 0),
+
+ /* ENABLE_ACLK_MSCL_SECURE_SMMU_M2MSCALER1 */
+ GATE(CLK_ACLK_SMMU_M2MSCALER1, "aclk_smmu_m2mscaler1",
+ "mout_aclk_mscl_400_user",
+ ENABLE_ACLK_MSCL_SECURE_SMMU_M2MSCALER1,
+ 0, CLK_IGNORE_UNUSED, 0),
+
+ /* ENABLE_ACLK_MSCL_SECURE_SMMU_JPEG */
+ GATE(CLK_ACLK_SMMU_JPEG, "aclk_smmu_jpeg", "mout_aclk_mscl_400_user",
+ ENABLE_ACLK_MSCL_SECURE_SMMU_JPEG,
+ 0, CLK_IGNORE_UNUSED, 0),
+
+ /* ENABLE_PCLK_MSCL */
+ GATE(CLK_PCLK_BTS_JPEG, "pclk_bts_jpeg", "div_pclk_mscl",
+ ENABLE_PCLK_MSCL, 7, 0, 0),
+ GATE(CLK_PCLK_BTS_M2MSCALER1, "pclk_bts_m2mscaler1", "div_pclk_mscl",
+ ENABLE_PCLK_MSCL, 6, 0, 0),
+ GATE(CLK_PCLK_BTS_M2MSCALER0, "pclk_bts_m2mscaler0", "div_pclk_mscl",
+ ENABLE_PCLK_MSCL, 5, 0, 0),
+ GATE(CLK_PCLK_PMU_MSCL, "pclk_pmu_mscl", "div_pclk_mscl",
+ ENABLE_PCLK_MSCL, 4, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_SYSREG_MSCL, "pclk_sysreg_mscl", "div_pclk_mscl",
+ ENABLE_PCLK_MSCL, 3, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_JPEG, "pclk_jpeg", "div_pclk_mscl",
+ ENABLE_PCLK_MSCL, 2, 0, 0),
+ GATE(CLK_PCLK_M2MSCALER1, "pclk_m2mscaler1", "div_pclk_mscl",
+ ENABLE_PCLK_MSCL, 1, 0, 0),
+ GATE(CLK_PCLK_M2MSCALER0, "pclk_m2mscaler0", "div_pclk_mscl",
+ ENABLE_PCLK_MSCL, 0, 0, 0),
+
+ /* ENABLE_PCLK_MSCL_SECURE_SMMU_M2MSCALER0 */
+ GATE(CLK_PCLK_SMMU_M2MSCALER0, "pclk_smmu_m2mscaler0", "div_pclk_mscl",
+ ENABLE_PCLK_MSCL_SECURE_SMMU_M2MSCALER0,
+ 0, CLK_IGNORE_UNUSED, 0),
+
+ /* ENABLE_PCLK_MSCL_SECURE_SMMU_M2MSCALER1 */
+ GATE(CLK_PCLK_SMMU_M2MSCALER1, "pclk_smmu_m2mscaler1", "div_pclk_mscl",
+ ENABLE_PCLK_MSCL_SECURE_SMMU_M2MSCALER1,
+ 0, CLK_IGNORE_UNUSED, 0),
+
+ /* ENABLE_PCLK_MSCL_SECURE_SMMU_JPEG */
+ GATE(CLK_PCLK_SMMU_JPEG, "pclk_smmu_jpeg", "div_pclk_mscl",
+ ENABLE_PCLK_MSCL_SECURE_SMMU_JPEG,
+ 0, CLK_IGNORE_UNUSED, 0),
+
+ /* ENABLE_SCLK_MSCL */
+ GATE(CLK_SCLK_JPEG, "sclk_jpeg", "mout_sclk_jpeg", ENABLE_SCLK_MSCL, 0,
+ CLK_IGNORE_UNUSED | CLK_SET_RATE_PARENT, 0),
+};
+
+static struct samsung_cmu_info mscl_cmu_info __initdata = {
+ .mux_clks = mscl_mux_clks,
+ .nr_mux_clks = ARRAY_SIZE(mscl_mux_clks),
+ .div_clks = mscl_div_clks,
+ .nr_div_clks = ARRAY_SIZE(mscl_div_clks),
+ .gate_clks = mscl_gate_clks,
+ .nr_gate_clks = ARRAY_SIZE(mscl_gate_clks),
+ .nr_clk_ids = MSCL_NR_CLK,
+ .clk_regs = mscl_clk_regs,
+ .nr_clk_regs = ARRAY_SIZE(mscl_clk_regs),
+};
+
+static void __init exynos5433_cmu_mscl_init(struct device_node *np)
+{
+ samsung_cmu_register_one(np, &mscl_cmu_info);
+}
+CLK_OF_DECLARE(exynos5433_cmu_mscl, "samsung,exynos5433-cmu-mscl",
+ exynos5433_cmu_mscl_init);
+
+/*
+ * Register offset definitions for CMU_MFC
+ */
+#define MUX_SEL_MFC 0x0200
+#define MUX_ENABLE_MFC 0x0300
+#define MUX_STAT_MFC 0x0400
+#define DIV_MFC 0x0600
+#define DIV_STAT_MFC 0x0700
+#define ENABLE_ACLK_MFC 0x0800
+#define ENABLE_ACLK_MFC_SECURE_SMMU_MFC 0x0804
+#define ENABLE_PCLK_MFC 0x0900
+#define ENABLE_PCLK_MFC_SECURE_SMMU_MFC 0x0904
+#define ENABLE_IP_MFC0 0x0b00
+#define ENABLE_IP_MFC1 0x0b04
+#define ENABLE_IP_MFC_SECURE_SMMU_MFC 0x0b08
+
+static unsigned long mfc_clk_regs[] __initdata = {
+ MUX_SEL_MFC,
+ MUX_ENABLE_MFC,
+ MUX_STAT_MFC,
+ DIV_MFC,
+ DIV_STAT_MFC,
+ ENABLE_ACLK_MFC,
+ ENABLE_ACLK_MFC_SECURE_SMMU_MFC,
+ ENABLE_PCLK_MFC,
+ ENABLE_PCLK_MFC_SECURE_SMMU_MFC,
+ ENABLE_IP_MFC0,
+ ENABLE_IP_MFC1,
+ ENABLE_IP_MFC_SECURE_SMMU_MFC,
+};
+
+PNAME(mout_aclk_mfc_400_user_p) = { "oscclk", "aclk_mfc_400", };
+
+static struct samsung_mux_clock mfc_mux_clks[] __initdata = {
+ /* MUX_SEL_MFC */
+ MUX(CLK_MOUT_ACLK_MFC_400_USER, "mout_aclk_mfc_400_user",
+ mout_aclk_mfc_400_user_p, MUX_SEL_MFC, 0, 1),
+};
+
+static struct samsung_div_clock mfc_div_clks[] __initdata = {
+ /* DIV_MFC */
+ DIV(CLK_DIV_PCLK_MFC, "div_pclk_mfc", "mout_aclk_mfc_400_user",
+ DIV_MFC, 0, 2),
+};
+
+static struct samsung_gate_clock mfc_gate_clks[] __initdata = {
+ /* ENABLE_ACLK_MFC */
+ GATE(CLK_ACLK_BTS_MFC_1, "aclk_bts_mfc_1", "mout_aclk_mfc_400_user",
+ ENABLE_ACLK_MFC, 6, 0, 0),
+ GATE(CLK_ACLK_BTS_MFC_0, "aclk_bts_mfc_0", "mout_aclk_mfc_400_user",
+ ENABLE_ACLK_MFC, 5, 0, 0),
+ GATE(CLK_ACLK_AHB2APB_MFCP, "aclk_ahb2apb_mfcp", "div_pclk_mfc",
+ ENABLE_ACLK_MFC, 4, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_XIU_MFCX, "aclk_xiu_mfcx", "mout_aclk_mfc_400_user",
+ ENABLE_ACLK_MFC, 3, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_MFCNP_100, "aclk_mfcnp_100", "div_pclk_mfc",
+ ENABLE_ACLK_MFC, 2, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_MFCND_400, "aclk_mfcnd_400", "mout_aclk_mfc_400_user",
+ ENABLE_ACLK_MFC, 1, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_MFC, "aclk_mfc", "mout_aclk_mfc_400_user",
+ ENABLE_ACLK_MFC, 0, 0, 0),
+
+ /* ENABLE_ACLK_MFC_SECURE_SMMU_MFC */
+ GATE(CLK_ACLK_SMMU_MFC_1, "aclk_smmu_mfc_1", "mout_aclk_mfc_400_user",
+ ENABLE_ACLK_MFC_SECURE_SMMU_MFC,
+ 1, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_SMMU_MFC_0, "aclk_smmu_mfc_0", "mout_aclk_mfc_400_user",
+ ENABLE_ACLK_MFC_SECURE_SMMU_MFC,
+ 0, CLK_IGNORE_UNUSED, 0),
+
+ /* ENABLE_PCLK_MFC */
+ GATE(CLK_PCLK_BTS_MFC_1, "pclk_bts_mfc_1", "div_pclk_mfc",
+ ENABLE_PCLK_MFC, 4, 0, 0),
+ GATE(CLK_PCLK_BTS_MFC_0, "pclk_bts_mfc_0", "div_pclk_mfc",
+ ENABLE_PCLK_MFC, 3, 0, 0),
+ GATE(CLK_PCLK_PMU_MFC, "pclk_pmu_mfc", "div_pclk_mfc",
+ ENABLE_PCLK_MFC, 2, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_SYSREG_MFC, "pclk_sysreg_mfc", "div_pclk_mfc",
+ ENABLE_PCLK_MFC, 1, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_MFC, "pclk_mfc", "div_pclk_mfc",
+ ENABLE_PCLK_MFC, 0, CLK_IGNORE_UNUSED, 0),
+
+ /* ENABLE_PCLK_MFC_SECURE_SMMU_MFC */
+ GATE(CLK_PCLK_SMMU_MFC_1, "pclk_smmu_mfc_1", "div_pclk_mfc",
+ ENABLE_PCLK_MFC_SECURE_SMMU_MFC,
+ 1, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_SMMU_MFC_0, "pclk_smmu_mfc_0", "div_pclk_mfc",
+ ENABLE_PCLK_MFC_SECURE_SMMU_MFC,
+ 0, CLK_IGNORE_UNUSED, 0),
+};
+
+static struct samsung_cmu_info mfc_cmu_info __initdata = {
+ .mux_clks = mfc_mux_clks,
+ .nr_mux_clks = ARRAY_SIZE(mfc_mux_clks),
+ .div_clks = mfc_div_clks,
+ .nr_div_clks = ARRAY_SIZE(mfc_div_clks),
+ .gate_clks = mfc_gate_clks,
+ .nr_gate_clks = ARRAY_SIZE(mfc_gate_clks),
+ .nr_clk_ids = MFC_NR_CLK,
+ .clk_regs = mfc_clk_regs,
+ .nr_clk_regs = ARRAY_SIZE(mfc_clk_regs),
+};
+
+static void __init exynos5433_cmu_mfc_init(struct device_node *np)
+{
+ samsung_cmu_register_one(np, &mfc_cmu_info);
+}
+CLK_OF_DECLARE(exynos5433_cmu_mfc, "samsung,exynos5433-cmu-mfc",
+ exynos5433_cmu_mfc_init);
+
+/*
+ * Register offset definitions for CMU_HEVC
+ */
+#define MUX_SEL_HEVC 0x0200
+#define MUX_ENABLE_HEVC 0x0300
+#define MUX_STAT_HEVC 0x0400
+#define DIV_HEVC 0x0600
+#define DIV_STAT_HEVC 0x0700
+#define ENABLE_ACLK_HEVC 0x0800
+#define ENABLE_ACLK_HEVC_SECURE_SMMU_HEVC 0x0804
+#define ENABLE_PCLK_HEVC 0x0900
+#define ENABLE_PCLK_HEVC_SECURE_SMMU_HEVC 0x0904
+#define ENABLE_IP_HEVC0 0x0b00
+#define ENABLE_IP_HEVC1 0x0b04
+#define ENABLE_IP_HEVC_SECURE_SMMU_HEVC 0x0b08
+
+static unsigned long hevc_clk_regs[] __initdata = {
+ MUX_SEL_HEVC,
+ MUX_ENABLE_HEVC,
+ MUX_STAT_HEVC,
+ DIV_HEVC,
+ DIV_STAT_HEVC,
+ ENABLE_ACLK_HEVC,
+ ENABLE_ACLK_HEVC_SECURE_SMMU_HEVC,
+ ENABLE_PCLK_HEVC,
+ ENABLE_PCLK_HEVC_SECURE_SMMU_HEVC,
+ ENABLE_IP_HEVC0,
+ ENABLE_IP_HEVC1,
+ ENABLE_IP_HEVC_SECURE_SMMU_HEVC,
+};
+
+PNAME(mout_aclk_hevc_400_user_p) = { "oscclk", "aclk_hevc_400", };
+
+static struct samsung_mux_clock hevc_mux_clks[] __initdata = {
+ /* MUX_SEL_HEVC */
+ MUX(CLK_MOUT_ACLK_HEVC_400_USER, "mout_aclk_hevc_400_user",
+ mout_aclk_hevc_400_user_p, MUX_SEL_HEVC, 0, 1),
+};
+
+static struct samsung_div_clock hevc_div_clks[] __initdata = {
+ /* DIV_HEVC */
+ DIV(CLK_DIV_PCLK_HEVC, "div_pclk_hevc", "mout_aclk_hevc_400_user",
+ DIV_HEVC, 0, 2),
+};
+
+static struct samsung_gate_clock hevc_gate_clks[] __initdata = {
+ /* ENABLE_ACLK_HEVC */
+ GATE(CLK_ACLK_BTS_HEVC_1, "aclk_bts_hevc_1", "mout_aclk_hevc_400_user",
+ ENABLE_ACLK_HEVC, 6, 0, 0),
+ GATE(CLK_ACLK_BTS_HEVC_0, "aclk_bts_hevc_0", "mout_aclk_hevc_400_user",
+ ENABLE_ACLK_HEVC, 5, 0, 0),
+ GATE(CLK_ACLK_AHB2APB_HEVCP, "aclk_ahb2apb_hevcp", "div_pclk_hevc",
+ ENABLE_ACLK_HEVC, 4, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_XIU_HEVCX, "aclk_xiu_hevcx", "mout_aclk_hevc_400_user",
+ ENABLE_ACLK_HEVC, 3, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_HEVCNP_100, "aclk_hevcnp_100", "div_pclk_hevc",
+ ENABLE_ACLK_HEVC, 2, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_HEVCND_400, "aclk_hevcnd_400", "mout_aclk_hevc_400_user",
+ ENABLE_ACLK_HEVC, 1, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_HEVC, "aclk_hevc", "mout_aclk_hevc_400_user",
+ ENABLE_ACLK_HEVC, 0, 0, 0),
+
+ /* ENABLE_ACLK_HEVC_SECURE_SMMU_HEVC */
+ GATE(CLK_ACLK_SMMU_HEVC_1, "aclk_smmu_hevc_1",
+ "mout_aclk_hevc_400_user",
+ ENABLE_ACLK_HEVC_SECURE_SMMU_HEVC,
+ 1, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_SMMU_HEVC_0, "aclk_smmu_hevc_0",
+ "mout_aclk_hevc_400_user",
+ ENABLE_ACLK_HEVC_SECURE_SMMU_HEVC,
+ 0, CLK_IGNORE_UNUSED, 0),
+
+ /* ENABLE_PCLK_HEVC */
+ GATE(CLK_PCLK_BTS_HEVC_1, "pclk_bts_hevc_1", "div_pclk_hevc",
+ ENABLE_PCLK_HEVC, 4, 0, 0),
+ GATE(CLK_PCLK_BTS_HEVC_0, "pclk_bts_hevc_0", "div_pclk_hevc",
+ ENABLE_PCLK_HEVC, 3, 0, 0),
+ GATE(CLK_PCLK_PMU_HEVC, "pclk_pmu_hevc", "div_pclk_hevc",
+ ENABLE_PCLK_HEVC, 2, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_SYSREG_HEVC, "pclk_sysreg_hevc", "div_pclk_hevc",
+ ENABLE_PCLK_HEVC, 1, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_HEVC, "pclk_hevc", "div_pclk_hevc",
+ ENABLE_PCLK_HEVC, 0, CLK_IGNORE_UNUSED, 0),
+
+ /* ENABLE_PCLK_HEVC_SECURE_SMMU_HEVC */
+ GATE(CLK_PCLK_SMMU_HEVC_1, "pclk_smmu_hevc_1", "div_pclk_hevc",
+ ENABLE_PCLK_HEVC_SECURE_SMMU_HEVC,
+ 1, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_SMMU_HEVC_0, "pclk_smmu_hevc_0", "div_pclk_hevc",
+ ENABLE_PCLK_HEVC_SECURE_SMMU_HEVC,
+ 0, CLK_IGNORE_UNUSED, 0),
+};
+
+static struct samsung_cmu_info hevc_cmu_info __initdata = {
+ .mux_clks = hevc_mux_clks,
+ .nr_mux_clks = ARRAY_SIZE(hevc_mux_clks),
+ .div_clks = hevc_div_clks,
+ .nr_div_clks = ARRAY_SIZE(hevc_div_clks),
+ .gate_clks = hevc_gate_clks,
+ .nr_gate_clks = ARRAY_SIZE(hevc_gate_clks),
+ .nr_clk_ids = HEVC_NR_CLK,
+ .clk_regs = hevc_clk_regs,
+ .nr_clk_regs = ARRAY_SIZE(hevc_clk_regs),
+};
+
+static void __init exynos5433_cmu_hevc_init(struct device_node *np)
+{
+ samsung_cmu_register_one(np, &hevc_cmu_info);
+}
+CLK_OF_DECLARE(exynos5433_cmu_hevc, "samsung,exynos5433-cmu-hevc",
+ exynos5433_cmu_hevc_init);
+
+/*
+ * Register offset definitions for CMU_ISP
+ */
+#define MUX_SEL_ISP 0x0200
+#define MUX_ENABLE_ISP 0x0300
+#define MUX_STAT_ISP 0x0400
+#define DIV_ISP 0x0600
+#define DIV_STAT_ISP 0x0700
+#define ENABLE_ACLK_ISP0 0x0800
+#define ENABLE_ACLK_ISP1 0x0804
+#define ENABLE_ACLK_ISP2 0x0808
+#define ENABLE_PCLK_ISP 0x0900
+#define ENABLE_SCLK_ISP 0x0a00
+#define ENABLE_IP_ISP0 0x0b00
+#define ENABLE_IP_ISP1 0x0b04
+#define ENABLE_IP_ISP2 0x0b08
+#define ENABLE_IP_ISP3 0x0b0c
+
+static unsigned long isp_clk_regs[] __initdata = {
+ MUX_SEL_ISP,
+ MUX_ENABLE_ISP,
+ MUX_STAT_ISP,
+ DIV_ISP,
+ DIV_STAT_ISP,
+ ENABLE_ACLK_ISP0,
+ ENABLE_ACLK_ISP1,
+ ENABLE_ACLK_ISP2,
+ ENABLE_PCLK_ISP,
+ ENABLE_SCLK_ISP,
+ ENABLE_IP_ISP0,
+ ENABLE_IP_ISP1,
+ ENABLE_IP_ISP2,
+ ENABLE_IP_ISP3,
+};
+
+PNAME(mout_aclk_isp_dis_400_user_p) = { "oscclk", "aclk_isp_dis_400", };
+PNAME(mout_aclk_isp_400_user_p) = { "oscclk", "aclk_isp_400", };
+
+static struct samsung_mux_clock isp_mux_clks[] __initdata = {
+ /* MUX_SEL_ISP */
+ MUX(CLK_MOUT_ACLK_ISP_DIS_400_USER, "mout_aclk_isp_dis_400_user",
+ mout_aclk_isp_dis_400_user_p, MUX_SEL_ISP, 4, 1),
+ MUX(CLK_MOUT_ACLK_ISP_400_USER, "mout_aclk_isp_400_user",
+ mout_aclk_isp_400_user_p, MUX_SEL_ISP, 0, 1),
+};
+
+static struct samsung_div_clock isp_div_clks[] __initdata = {
+ /* DIV_ISP */
+ DIV(CLK_DIV_PCLK_ISP_DIS, "div_pclk_isp_dis",
+ "mout_aclk_isp_dis_400_user", DIV_ISP, 12, 3),
+ DIV(CLK_DIV_PCLK_ISP, "div_pclk_isp", "mout_aclk_isp_400_user",
+ DIV_ISP, 8, 3),
+ DIV(CLK_DIV_ACLK_ISP_D_200, "div_aclk_isp_d_200",
+ "mout_aclk_isp_400_user", DIV_ISP, 4, 3),
+ DIV(CLK_DIV_ACLK_ISP_C_200, "div_aclk_isp_c_200",
+ "mout_aclk_isp_400_user", DIV_ISP, 0, 3),
+};
+
+static struct samsung_gate_clock isp_gate_clks[] __initdata = {
+ /* ENABLE_ACLK_ISP0 */
+ GATE(CLK_ACLK_ISP_D_GLUE, "aclk_isp_d_glue", "mout_aclk_isp_400_user",
+ ENABLE_ACLK_ISP0, 6, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_SCALERP, "aclk_scalerp", "mout_aclk_isp_400_user",
+ ENABLE_ACLK_ISP0, 5, 0, 0),
+ GATE(CLK_ACLK_3DNR, "aclk_3dnr", "mout_aclk_isp_400_user",
+ ENABLE_ACLK_ISP0, 4, 0, 0),
+ GATE(CLK_ACLK_DIS, "aclk_dis", "mout_aclk_isp_dis_400_user",
+ ENABLE_ACLK_ISP0, 3, 0, 0),
+ GATE(CLK_ACLK_SCALERC, "aclk_scalerc", "mout_aclk_isp_400_user",
+ ENABLE_ACLK_ISP0, 2, 0, 0),
+ GATE(CLK_ACLK_DRC, "aclk_drc", "mout_aclk_isp_400_user",
+ ENABLE_ACLK_ISP0, 1, 0, 0),
+ GATE(CLK_ACLK_ISP, "aclk_isp", "mout_aclk_isp_400_user",
+ ENABLE_ACLK_ISP0, 0, 0, 0),
+
+ /* ENABLE_ACLK_ISP1 */
+ GATE(CLK_ACLK_AXIUS_SCALERP, "aclk_axius_scalerp",
+ "mout_aclk_isp_400_user", ENABLE_ACLK_ISP1,
+ 17, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_AXIUS_SCALERC, "aclk_axius_scalerc",
+ "mout_aclk_isp_400_user", ENABLE_ACLK_ISP1,
+ 16, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_AXIUS_DRC, "aclk_axius_drc",
+ "mout_aclk_isp_400_user", ENABLE_ACLK_ISP1,
+ 15, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAHBM_ISP2P, "aclk_asyncahbm_isp2p",
+ "div_pclk_isp", ENABLE_ACLK_ISP1,
+ 14, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAHBM_ISP1P, "aclk_asyncahbm_isp1p",
+ "div_pclk_isp", ENABLE_ACLK_ISP1,
+ 13, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAXIS_DIS1, "aclk_asyncaxis_dis1",
+ "mout_aclk_isp_dis_400_user", ENABLE_ACLK_ISP1,
+ 12, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAXIS_DIS0, "aclk_asyncaxis_dis0",
+ "mout_aclk_isp_dis_400_user", ENABLE_ACLK_ISP1,
+ 11, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAXIM_DIS1, "aclk_asyncaxim_dis1",
+ "mout_aclk_isp_400_user", ENABLE_ACLK_ISP1,
+ 10, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAXIM_DIS0, "aclk_asyncaxim_dis0",
+ "mout_aclk_isp_400_user", ENABLE_ACLK_ISP1,
+ 9, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAXIM_ISP2P, "aclk_asyncaxim_isp2p",
+ "div_aclk_isp_d_200", ENABLE_ACLK_ISP1,
+ 8, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAXIM_ISP1P, "aclk_asyncaxim_isp1p",
+ "div_aclk_isp_c_200", ENABLE_ACLK_ISP1,
+ 7, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_AHB2APB_ISP2P, "aclk_ahb2apb_isp2p", "div_pclk_isp",
+ ENABLE_ACLK_ISP1, 6, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_AHB2APB_ISP1P, "aclk_ahb2apb_isp1p", "div_pclk_isp",
+ ENABLE_ACLK_ISP1, 5, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_AXI2APB_ISP2P, "aclk_axi2apb_isp2p",
+ "div_aclk_isp_d_200", ENABLE_ACLK_ISP1,
+ 4, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_AXI2APB_ISP1P, "aclk_axi2apb_isp1p",
+ "div_aclk_isp_c_200", ENABLE_ACLK_ISP1,
+ 3, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_XIU_ISPEX1, "aclk_xiu_ispex1", "mout_aclk_isp_400_user",
+ ENABLE_ACLK_ISP1, 2, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_XIU_ISPEX0, "aclk_xiu_ispex0", "mout_aclk_isp_400_user",
+ ENABLE_ACLK_ISP1, 1, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ISPND_400, "aclk_ispnd_400", "mout_aclk_isp_400_user",
+ ENABLE_ACLK_ISP1, 0, CLK_IGNORE_UNUSED, 0),
+
+ /* ENABLE_ACLK_ISP2 */
+ GATE(CLK_ACLK_SMMU_SCALERP, "aclk_smmu_scalerp",
+ "mout_aclk_isp_400_user", ENABLE_ACLK_ISP2,
+ 13, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_SMMU_3DNR, "aclk_smmu_3dnr", "mout_aclk_isp_400_user",
+ ENABLE_ACLK_ISP2, 12, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_SMMU_DIS1, "aclk_smmu_dis1", "mout_aclk_isp_400_user",
+ ENABLE_ACLK_ISP2, 11, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_SMMU_DIS0, "aclk_smmu_dis0", "mout_aclk_isp_400_user",
+ ENABLE_ACLK_ISP2, 10, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_SMMU_SCALERC, "aclk_smmu_scalerc",
+ "mout_aclk_isp_400_user", ENABLE_ACLK_ISP2,
+ 9, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_SMMU_DRC, "aclk_smmu_drc", "mout_aclk_isp_400_user",
+ ENABLE_ACLK_ISP2, 8, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_SMMU_ISP, "aclk_smmu_isp", "mout_aclk_isp_400_user",
+ ENABLE_ACLK_ISP2, 7, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_BTS_SCALERP, "aclk_bts_scalerp",
+ "mout_aclk_isp_400_user", ENABLE_ACLK_ISP2,
+ 6, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_BTS_3DR, "aclk_bts_3dnr", "mout_aclk_isp_400_user",
+ ENABLE_ACLK_ISP2, 5, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_BTS_DIS1, "aclk_bts_dis1", "mout_aclk_isp_400_user",
+ ENABLE_ACLK_ISP2, 4, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_BTS_DIS0, "aclk_bts_dis0", "mout_aclk_isp_400_user",
+ ENABLE_ACLK_ISP2, 3, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_BTS_SCALERC, "aclk_bts_scalerc",
+ "mout_aclk_isp_400_user", ENABLE_ACLK_ISP2,
+ 2, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_BTS_DRC, "aclk_bts_drc", "mout_aclk_isp_400_user",
+ ENABLE_ACLK_ISP2, 1, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_BTS_ISP, "aclk_bts_isp", "mout_aclk_isp_400_user",
+ ENABLE_ACLK_ISP2, 0, CLK_IGNORE_UNUSED, 0),
+
+ /* ENABLE_PCLK_ISP */
+ GATE(CLK_PCLK_SMMU_SCALERP, "pclk_smmu_scalerp", "div_aclk_isp_d_200",
+ ENABLE_PCLK_ISP, 25, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_SMMU_3DNR, "pclk_smmu_3dnr", "div_aclk_isp_d_200",
+ ENABLE_PCLK_ISP, 24, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_SMMU_DIS1, "pclk_smmu_dis1", "div_aclk_isp_d_200",
+ ENABLE_PCLK_ISP, 23, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_SMMU_DIS0, "pclk_smmu_dis0", "div_aclk_isp_d_200",
+ ENABLE_PCLK_ISP, 22, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_SMMU_SCALERC, "pclk_smmu_scalerc", "div_aclk_isp_c_200",
+ ENABLE_PCLK_ISP, 21, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_SMMU_DRC, "pclk_smmu_drc", "div_aclk_isp_c_200",
+ ENABLE_PCLK_ISP, 20, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_SMMU_ISP, "pclk_smmu_isp", "div_aclk_isp_c_200",
+ ENABLE_PCLK_ISP, 19, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_BTS_SCALERP, "pclk_bts_scalerp", "div_pclk_isp",
+ ENABLE_PCLK_ISP, 18, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_BTS_3DNR, "pclk_bts_3dnr", "div_pclk_isp",
+ ENABLE_PCLK_ISP, 17, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_BTS_DIS1, "pclk_bts_dis1", "div_pclk_isp",
+ ENABLE_PCLK_ISP, 16, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_BTS_DIS0, "pclk_bts_dis0", "div_pclk_isp",
+ ENABLE_PCLK_ISP, 15, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_BTS_SCALERC, "pclk_bts_scalerc", "div_pclk_isp",
+ ENABLE_PCLK_ISP, 14, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_BTS_DRC, "pclk_bts_drc", "div_pclk_isp",
+ ENABLE_PCLK_ISP, 13, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_BTS_ISP, "pclk_bts_isp", "div_pclk_isp",
+ ENABLE_PCLK_ISP, 12, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_ASYNCAXI_DIS1, "pclk_asyncaxi_dis1", "div_pclk_isp",
+ ENABLE_PCLK_ISP, 11, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_ASYNCAXI_DIS0, "pclk_asyncaxi_dis0", "div_pclk_isp",
+ ENABLE_PCLK_ISP, 10, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_PMU_ISP, "pclk_pmu_isp", "div_pclk_isp",
+ ENABLE_PCLK_ISP, 9, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_SYSREG_ISP, "pclk_sysreg_isp", "div_pclk_isp",
+ ENABLE_PCLK_ISP, 8, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_CMU_ISP_LOCAL, "pclk_cmu_isp_local",
+ "div_aclk_isp_c_200", ENABLE_PCLK_ISP,
+ 7, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_SCALERP, "pclk_scalerp", "div_aclk_isp_d_200",
+ ENABLE_PCLK_ISP, 6, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_3DNR, "pclk_3dnr", "div_aclk_isp_d_200",
+ ENABLE_PCLK_ISP, 5, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_DIS_CORE, "pclk_dis_core", "div_pclk_isp_dis",
+ ENABLE_PCLK_ISP, 4, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_DIS, "pclk_dis", "div_aclk_isp_d_200",
+ ENABLE_PCLK_ISP, 3, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_SCALERC, "pclk_scalerc", "div_aclk_isp_c_200",
+ ENABLE_PCLK_ISP, 2, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_DRC, "pclk_drc", "div_aclk_isp_c_200",
+ ENABLE_PCLK_ISP, 1, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_ISP, "pclk_isp", "div_aclk_isp_c_200",
+ ENABLE_PCLK_ISP, 0, CLK_IGNORE_UNUSED, 0),
+
+ /* ENABLE_SCLK_ISP */
+ GATE(CLK_SCLK_PIXELASYNCS_DIS, "sclk_pixelasyncs_dis",
+ "mout_aclk_isp_dis_400_user", ENABLE_SCLK_ISP,
+ 5, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_SCLK_PIXELASYNCM_DIS, "sclk_pixelasyncm_dis",
+ "mout_aclk_isp_dis_400_user", ENABLE_SCLK_ISP,
+ 4, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_SCLK_PIXELASYNCS_SCALERP, "sclk_pixelasyncs_scalerp",
+ "mout_aclk_isp_400_user", ENABLE_SCLK_ISP,
+ 3, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_SCLK_PIXELASYNCM_ISPD, "sclk_pixelasyncm_ispd",
+ "mout_aclk_isp_400_user", ENABLE_SCLK_ISP,
+ 2, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_SCLK_PIXELASYNCS_ISPC, "sclk_pixelasyncs_ispc",
+ "mout_aclk_isp_400_user", ENABLE_SCLK_ISP,
+ 1, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_SCLK_PIXELASYNCM_ISPC, "sclk_pixelasyncm_ispc",
+ "mout_aclk_isp_400_user", ENABLE_SCLK_ISP,
+ 0, CLK_IGNORE_UNUSED, 0),
+};
+
+static struct samsung_cmu_info isp_cmu_info __initdata = {
+ .mux_clks = isp_mux_clks,
+ .nr_mux_clks = ARRAY_SIZE(isp_mux_clks),
+ .div_clks = isp_div_clks,
+ .nr_div_clks = ARRAY_SIZE(isp_div_clks),
+ .gate_clks = isp_gate_clks,
+ .nr_gate_clks = ARRAY_SIZE(isp_gate_clks),
+ .nr_clk_ids = ISP_NR_CLK,
+ .clk_regs = isp_clk_regs,
+ .nr_clk_regs = ARRAY_SIZE(isp_clk_regs),
+};
+
+static void __init exynos5433_cmu_isp_init(struct device_node *np)
+{
+ samsung_cmu_register_one(np, &isp_cmu_info);
+}
+CLK_OF_DECLARE(exynos5433_cmu_isp, "samsung,exynos5433-cmu-isp",
+ exynos5433_cmu_isp_init);
+
+/*
+ * Register offset definitions for CMU_CAM0
+ */
+#define MUX_SEL_CAM00 0x0200
+#define MUX_SEL_CAM01 0x0204
+#define MUX_SEL_CAM02 0x0208
+#define MUX_SEL_CAM03 0x020c
+#define MUX_SEL_CAM04 0x0210
+#define MUX_ENABLE_CAM00 0x0300
+#define MUX_ENABLE_CAM01 0x0304
+#define MUX_ENABLE_CAM02 0x0308
+#define MUX_ENABLE_CAM03 0x030c
+#define MUX_ENABLE_CAM04 0x0310
+#define MUX_STAT_CAM00 0x0400
+#define MUX_STAT_CAM01 0x0404
+#define MUX_STAT_CAM02 0x0408
+#define MUX_STAT_CAM03 0x040c
+#define MUX_STAT_CAM04 0x0410
+#define MUX_IGNORE_CAM01 0x0504
+#define DIV_CAM00 0x0600
+#define DIV_CAM01 0x0604
+#define DIV_CAM02 0x0608
+#define DIV_CAM03 0x060c
+#define DIV_STAT_CAM00 0x0700
+#define DIV_STAT_CAM01 0x0704
+#define DIV_STAT_CAM02 0x0708
+#define DIV_STAT_CAM03 0x070c
+#define ENABLE_ACLK_CAM00 0x0800
+#define ENABLE_ACLK_CAM01 0x0804
+#define ENABLE_ACLK_CAM02 0x0808
+#define ENABLE_PCLK_CAM0 0x0900
+#define ENABLE_SCLK_CAM0 0x0a00
+#define ENABLE_IP_CAM00 0x0b00
+#define ENABLE_IP_CAM01 0x0b04
+#define ENABLE_IP_CAM02 0x0b08
+#define ENABLE_IP_CAM03 0x0b0c
+
+static unsigned long cam0_clk_regs[] __initdata = {
+ MUX_SEL_CAM00,
+ MUX_SEL_CAM01,
+ MUX_SEL_CAM02,
+ MUX_SEL_CAM03,
+ MUX_SEL_CAM04,
+ MUX_ENABLE_CAM00,
+ MUX_ENABLE_CAM01,
+ MUX_ENABLE_CAM02,
+ MUX_ENABLE_CAM03,
+ MUX_ENABLE_CAM04,
+ MUX_STAT_CAM00,
+ MUX_STAT_CAM01,
+ MUX_STAT_CAM02,
+ MUX_STAT_CAM03,
+ MUX_STAT_CAM04,
+ MUX_IGNORE_CAM01,
+ DIV_CAM00,
+ DIV_CAM01,
+ DIV_CAM02,
+ DIV_CAM03,
+ DIV_STAT_CAM00,
+ DIV_STAT_CAM01,
+ DIV_STAT_CAM02,
+ DIV_STAT_CAM03,
+ ENABLE_ACLK_CAM00,
+ ENABLE_ACLK_CAM01,
+ ENABLE_ACLK_CAM02,
+ ENABLE_PCLK_CAM0,
+ ENABLE_SCLK_CAM0,
+ ENABLE_IP_CAM00,
+ ENABLE_IP_CAM01,
+ ENABLE_IP_CAM02,
+ ENABLE_IP_CAM03,
+};
+
+PNAME(mout_aclk_cam0_333_user_p) = { "oscclk", "aclk_cam0_333", };
+PNAME(mout_aclk_cam0_400_user_p) = { "oscclk", "aclk_cam0_400", };
+PNAME(mout_aclk_cam0_552_user_p) = { "oscclk", "aclk_cam0_552", };
+
+PNAME(mout_phyclk_rxbyteclkhs0_s4_user_p) = { "oscclk",
+ "phyclk_rxbyteclkhs0_s4_phy", };
+PNAME(mout_phyclk_rxbyteclkhs0_s2a_user_p) = { "oscclk",
+ "phyclk_rxbyteclkhs0_s2a_phy", };
+
+PNAME(mout_aclk_lite_d_b_p) = { "mout_aclk_lite_d_a",
+ "mout_aclk_cam0_333_user", };
+PNAME(mout_aclk_lite_d_a_p) = { "mout_aclk_cam0_552_user",
+ "mout_aclk_cam0_400_user", };
+PNAME(mout_aclk_lite_b_b_p) = { "mout_aclk_lite_b_a",
+ "mout_aclk_cam0_333_user", };
+PNAME(mout_aclk_lite_b_a_p) = { "mout_aclk_cam0_552_user",
+ "mout_aclk_cam0_400_user", };
+PNAME(mout_aclk_lite_a_b_p) = { "mout_aclk_lite_a_a",
+ "mout_aclk_cam0_333_user", };
+PNAME(mout_aclk_lite_a_a_p) = { "mout_aclk_cam0_552_user",
+ "mout_aclk_cam0_400_user", };
+PNAME(mout_aclk_cam0_400_p) = { "mout_aclk_cam0_400_user",
+ "mout_aclk_cam0_333_user", };
+
+PNAME(mout_aclk_csis1_b_p) = { "mout_aclk_csis1_a",
+ "mout_aclk_cam0_333_user", };
+PNAME(mout_aclk_csis1_a_p) = { "mout_aclk_cam0_552_user",
+ "mout_aclk_cam0_400_user", };
+PNAME(mout_aclk_csis0_b_p) = { "mout_aclk_csis0_a",
+ "mout_aclk_cam0_333_user", };
+PNAME(mout_aclk_csis0_a_p) = { "mout_aclk_cam0_552_user",
+ "mout_aclk_cam0_400_user", };
+PNAME(mout_aclk_3aa1_b_p) = { "mout_aclk_3aa1_a",
+ "mout_aclk_cam0_333_user", };
+PNAME(mout_aclk_3aa1_a_p) = { "mout_aclk_cam0_552_user",
+ "mout_aclk_cam0_400_user", };
+PNAME(mout_aclk_3aa0_b_p) = { "mout_aclk_3aa0_a",
+ "mout_aclk_cam0_333_user", };
+PNAME(mout_aclk_3aa0_a_p) = { "mout_aclk_cam0_552_user",
+ "mout_aclk_cam0_400_user", };
+
+PNAME(mout_sclk_lite_freecnt_c_p) = { "mout_sclk_lite_freecnt_b",
+ "div_pclk_lite_d", };
+PNAME(mout_sclk_lite_freecnt_b_p) = { "mout_sclk_lite_freecnt_a",
+ "div_pclk_pixelasync_lite_c", };
+PNAME(mout_sclk_lite_freecnt_a_p) = { "div_pclk_lite_a",
+ "div_pclk_lite_b", };
+PNAME(mout_sclk_pixelasync_lite_c_b_p) = { "mout_sclk_pixelasync_lite_c_a",
+ "mout_aclk_cam0_333_user", };
+PNAME(mout_sclk_pixelasync_lite_c_a_p) = { "mout_aclk_cam0_552_user",
+ "mout_aclk_cam0_400_user", };
+PNAME(mout_sclk_pixelasync_lite_c_init_b_p) = {
+ "mout_sclk_pixelasync_lite_c_init_a",
+ "mout_aclk_cam0_400_user", };
+PNAME(mout_sclk_pixelasync_lite_c_init_a_p) = {
+ "mout_aclk_cam0_552_user",
+ "mout_aclk_cam0_400_user", };
+
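+/*
+ * The byte clocks returned by the MIPI CSIS PHYs are modelled here as
+ * fixed-rate root clocks; the 100 MHz value is only a nominal placeholder.
+ */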
+static struct samsung_fixed_rate_clock cam0_fixed_clks[] __initdata = {
+ FRATE(CLK_PHYCLK_RXBYTEECLKHS0_S4_PHY, "phyclk_rxbyteclkhs0_s4_phy",
+ NULL, CLK_IS_ROOT, 100000000),
+ FRATE(CLK_PHYCLK_RXBYTEECLKHS0_S2A_PHY, "phyclk_rxbyteclkhs0_s2a_phy",
+ NULL, CLK_IS_ROOT, 100000000),
+};
+
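+/*
+ * Note: in the MUX() entries below the last three arguments are the
+ * register offset, the bit shift of the select field and its width,
+ * so e.g. MUX_SEL_CAM00 packs three one-bit user muxes at bits 0, 4
+ * and 8.
+ */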
+static struct samsung_mux_clock cam0_mux_clks[] __initdata = {
+ /* MUX_SEL_CAM00 */
+ MUX(CLK_MOUT_ACLK_CAM0_333_USER, "mout_aclk_cam0_333_user",
+ mout_aclk_cam0_333_user_p, MUX_SEL_CAM00, 8, 1),
+ MUX(CLK_MOUT_ACLK_CAM0_400_USER, "mout_aclk_cam0_400_user",
+ mout_aclk_cam0_400_user_p, MUX_SEL_CAM00, 4, 1),
+ MUX(CLK_MOUT_ACLK_CAM0_552_USER, "mout_aclk_cam0_552_user",
+ mout_aclk_cam0_552_user_p, MUX_SEL_CAM00, 0, 1),
+
+ /* MUX_SEL_CAM01 */
+ MUX(CLK_MOUT_PHYCLK_RXBYTECLKHS0_S4_USER,
+ "mout_phyclk_rxbyteclkhs0_s4_user",
+ mout_phyclk_rxbyteclkhs0_s4_user_p,
+ MUX_SEL_CAM01, 4, 1),
+ MUX(CLK_MOUT_PHYCLK_RXBYTECLKHS0_S2A_USER,
+ "mout_phyclk_rxbyteclkhs0_s2a_user",
+ mout_phyclk_rxbyteclkhs0_s2a_user_p,
+ MUX_SEL_CAM01, 0, 1),
+
+ /* MUX_SEL_CAM02 */
+ MUX(CLK_MOUT_ACLK_LITE_D_B, "mout_aclk_lite_d_b", mout_aclk_lite_d_b_p,
+ MUX_SEL_CAM02, 24, 1),
+ MUX(CLK_MOUT_ACLK_LITE_D_A, "mout_aclk_lite_d_a", mout_aclk_lite_d_a_p,
+ MUX_SEL_CAM02, 20, 1),
+ MUX(CLK_MOUT_ACLK_LITE_B_B, "mout_aclk_lite_b_b", mout_aclk_lite_b_b_p,
+ MUX_SEL_CAM02, 16, 1),
+ MUX(CLK_MOUT_ACLK_LITE_B_A, "mout_aclk_lite_b_a", mout_aclk_lite_b_a_p,
+ MUX_SEL_CAM02, 12, 1),
+ MUX(CLK_MOUT_ACLK_LITE_A_B, "mout_aclk_lite_a_b", mout_aclk_lite_a_b_p,
+ MUX_SEL_CAM02, 8, 1),
+ MUX(CLK_MOUT_ACLK_LITE_A_A, "mout_aclk_lite_a_a", mout_aclk_lite_a_a_p,
+ MUX_SEL_CAM02, 4, 1),
+ MUX(CLK_MOUT_ACLK_CAM0_400, "mout_aclk_cam0_400", mout_aclk_cam0_400_p,
+ MUX_SEL_CAM02, 0, 1),
+
+ /* MUX_SEL_CAM03 */
+ MUX(CLK_MOUT_ACLK_CSIS1_B, "mout_aclk_csis1_b", mout_aclk_csis1_b_p,
+ MUX_SEL_CAM03, 28, 1),
+ MUX(CLK_MOUT_ACLK_CSIS1_A, "mout_aclk_csis1_a", mout_aclk_csis1_a_p,
+ MUX_SEL_CAM03, 24, 1),
+ MUX(CLK_MOUT_ACLK_CSIS0_B, "mout_aclk_csis0_b", mout_aclk_csis0_b_p,
+ MUX_SEL_CAM03, 20, 1),
+ MUX(CLK_MOUT_ACLK_CSIS0_A, "mout_aclk_csis0_a", mout_aclk_csis0_a_p,
+ MUX_SEL_CAM03, 16, 1),
+ MUX(CLK_MOUT_ACLK_3AA1_B, "mout_aclk_3aa1_b", mout_aclk_3aa1_b_p,
+ MUX_SEL_CAM03, 12, 1),
+ MUX(CLK_MOUT_ACLK_3AA1_A, "mout_aclk_3aa1_a", mout_aclk_3aa1_a_p,
+ MUX_SEL_CAM03, 8, 1),
+ MUX(CLK_MOUT_ACLK_3AA0_B, "mout_aclk_3aa0_b", mout_aclk_3aa0_b_p,
+ MUX_SEL_CAM03, 4, 1),
+ MUX(CLK_MOUT_ACLK_3AA0_A, "mout_aclk_3aa0_a", mout_aclk_3aa0_a_p,
+ MUX_SEL_CAM03, 0, 1),
+
+ /* MUX_SEL_CAM04 */
+ MUX(CLK_MOUT_SCLK_LITE_FREECNT_C, "mout_sclk_lite_freecnt_c",
+ mout_sclk_lite_freecnt_c_p, MUX_SEL_CAM04, 24, 1),
+ MUX(CLK_MOUT_SCLK_LITE_FREECNT_B, "mout_sclk_lite_freecnt_b",
+ mout_sclk_lite_freecnt_b_p, MUX_SEL_CAM04, 20, 1),
+ MUX(CLK_MOUT_SCLK_LITE_FREECNT_A, "mout_sclk_lite_freecnt_a",
+ mout_sclk_lite_freecnt_a_p, MUX_SEL_CAM04, 16, 1),
+ MUX(CLK_MOUT_SCLK_PIXELASYNC_LITE_C_B, "mout_sclk_pixelasync_lite_c_b",
+ mout_sclk_pixelasync_lite_c_b_p, MUX_SEL_CAM04, 12, 1),
+ MUX(CLK_MOUT_SCLK_PIXELASYNC_LITE_C_A, "mout_sclk_pixelasync_lite_c_a",
+ mout_sclk_pixelasync_lite_c_a_p, MUX_SEL_CAM04, 8, 1),
+ MUX(CLK_MOUT_SCLK_PIXELASYNC_LITE_C_INIT_B,
+ "mout_sclk_pixelasync_lite_c_init_b",
+ mout_sclk_pixelasync_lite_c_init_b_p,
+ MUX_SEL_CAM04, 4, 1),
+ MUX(CLK_MOUT_SCLK_PIXELASYNC_LITE_C_INIT_A,
+ "mout_sclk_pixelasync_lite_c_init_a",
+ mout_sclk_pixelasync_lite_c_init_a_p,
+ MUX_SEL_CAM04, 0, 1),
+};
+
+static struct samsung_div_clock cam0_div_clks[] __initdata = {
+ /* DIV_CAM00 */
+ DIV(CLK_DIV_PCLK_CAM0_50, "div_pclk_cam0_50", "div_aclk_cam0_200",
+ DIV_CAM00, 8, 2),
+ DIV(CLK_DIV_ACLK_CAM0_200, "div_aclk_cam0_200", "mout_aclk_cam0_400",
+ DIV_CAM00, 4, 3),
+ DIV(CLK_DIV_ACLK_CAM0_BUS_400, "div_aclk_cam0_bus_400",
+ "mout_aclk_cam0_400", DIV_CAM00, 0, 3),
+
+ /* DIV_CAM01 */
+ DIV(CLK_DIV_PCLK_LITE_D, "div_pclk_lite_d", "div_aclk_lite_d",
+ DIV_CAM01, 20, 2),
+ DIV(CLK_DIV_ACLK_LITE_D, "div_aclk_lite_d", "mout_aclk_lite_d_b",
+ DIV_CAM01, 16, 3),
+ DIV(CLK_DIV_PCLK_LITE_B, "div_pclk_lite_b", "div_aclk_lite_b",
+ DIV_CAM01, 12, 2),
+ DIV(CLK_DIV_ACLK_LITE_B, "div_aclk_lite_b", "mout_aclk_lite_b_b",
+ DIV_CAM01, 8, 3),
+ DIV(CLK_DIV_PCLK_LITE_A, "div_pclk_lite_a", "div_aclk_lite_a",
+ DIV_CAM01, 4, 2),
+ DIV(CLK_DIV_ACLK_LITE_A, "div_aclk_lite_a", "mout_aclk_lite_a_b",
+ DIV_CAM01, 0, 3),
+
+ /* DIV_CAM02 */
+ DIV(CLK_DIV_ACLK_CSIS1, "div_aclk_csis1", "mout_aclk_csis1_b",
+ DIV_CAM02, 20, 3),
+ DIV(CLK_DIV_ACLK_CSIS0, "div_aclk_csis0", "mout_aclk_csis0_b",
+ DIV_CAM02, 16, 3),
+ DIV(CLK_DIV_PCLK_3AA1, "div_pclk_3aa1", "div_aclk_3aa1",
+ DIV_CAM02, 12, 2),
+ DIV(CLK_DIV_ACLK_3AA1, "div_aclk_3aa1", "mout_aclk_3aa1_b",
+ DIV_CAM02, 8, 3),
+ DIV(CLK_DIV_PCLK_3AA0, "div_pclk_3aa0", "div_aclk_3aa0",
+ DIV_CAM02, 4, 2),
+ DIV(CLK_DIV_ACLK_3AA0, "div_aclk_3aa0", "mout_aclk_3aa0_b",
+ DIV_CAM02, 0, 3),
+
+ /* DIV_CAM03 */
+ DIV(CLK_DIV_SCLK_PIXELASYNC_LITE_C, "div_sclk_pixelasync_lite_c",
+ "mout_sclk_pixelasync_lite_c_b", DIV_CAM03, 8, 3),
+ DIV(CLK_DIV_PCLK_PIXELASYNC_LITE_C, "div_pclk_pixelasync_lite_c",
+ "div_sclk_pixelasync_lite_c_init", DIV_CAM03, 4, 2),
+ DIV(CLK_DIV_SCLK_PIXELASYNC_LITE_C_INIT,
+ "div_sclk_pixelasync_lite_c_init",
+ "mout_sclk_pixelasync_lite_c_init_b", DIV_CAM03, 0, 3),
+};
+
+static struct samsung_gate_clock cam0_gate_clks[] __initdata = {
+ /* ENABLE_ACLK_CAM00 */
+ GATE(CLK_ACLK_CSIS1, "aclk_csis1", "div_aclk_csis1", ENABLE_ACLK_CAM00,
+ 6, 0, 0),
+ GATE(CLK_ACLK_CSIS0, "aclk_csis0", "div_aclk_csis0", ENABLE_ACLK_CAM00,
+ 5, 0, 0),
+ GATE(CLK_ACLK_3AA1, "aclk_3aa1", "div_aclk_3aa1", ENABLE_ACLK_CAM00,
+ 4, 0, 0),
+ GATE(CLK_ACLK_3AA0, "aclk_3aa0", "div_aclk_3aa0", ENABLE_ACLK_CAM00,
+ 3, 0, 0),
+ GATE(CLK_ACLK_LITE_D, "aclk_lite_d", "div_aclk_lite_d",
+ ENABLE_ACLK_CAM00, 2, 0, 0),
+ GATE(CLK_ACLK_LITE_B, "aclk_lite_b", "div_aclk_lite_b",
+ ENABLE_ACLK_CAM00, 1, 0, 0),
+ GATE(CLK_ACLK_LITE_A, "aclk_lite_a", "div_aclk_lite_a",
+ ENABLE_ACLK_CAM00, 0, 0, 0),
+
+ /* ENABLE_ACLK_CAM01 */
+ GATE(CLK_ACLK_AHBSYNCDN, "aclk_ahbsyncdn", "div_aclk_cam0_200",
+ ENABLE_ACLK_CAM01, 31, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_AXIUS_LITE_D, "aclk_axius_lite_d", "div_aclk_cam0_bus_400",
+ ENABLE_ACLK_CAM01, 30, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_AXIUS_LITE_B, "aclk_axius_lite_b", "div_aclk_cam0_bus_400",
+ ENABLE_ACLK_CAM01, 29, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_AXIUS_LITE_A, "aclk_axius_lite_a", "div_aclk_cam0_bus_400",
+ ENABLE_ACLK_CAM01, 28, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAPBM_3AA1, "aclk_asyncapbm_3aa1", "div_pclk_3aa1",
+ ENABLE_ACLK_CAM01, 27, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAPBS_3AA1, "aclk_asyncapbs_3aa1", "div_aclk_3aa1",
+ ENABLE_ACLK_CAM01, 26, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAPBM_3AA0, "aclk_asyncapbm_3aa0", "div_pclk_3aa0",
+ ENABLE_ACLK_CAM01, 25, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAPBS_3AA0, "aclk_asyncapbs_3aa0", "div_aclk_3aa0",
+ ENABLE_ACLK_CAM01, 24, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAPBM_LITE_D, "aclk_asyncapbm_lite_d",
+ "div_pclk_lite_d", ENABLE_ACLK_CAM01,
+ 23, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAPBS_LITE_D, "aclk_asyncapbs_lite_d",
+ "div_aclk_cam0_200", ENABLE_ACLK_CAM01,
+ 22, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAPBM_LITE_B, "aclk_asyncapbm_lite_b",
+ "div_pclk_lite_b", ENABLE_ACLK_CAM01,
+ 21, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAPBS_LITE_B, "aclk_asyncapbs_lite_b",
+ "div_aclk_cam0_200", ENABLE_ACLK_CAM01,
+ 20, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAPBM_LITE_A, "aclk_asyncapbm_lite_a",
+ "div_pclk_lite_a", ENABLE_ACLK_CAM01,
+ 19, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAPBS_LITE_A, "aclk_asyncapbs_lite_a",
+ "div_aclk_cam0_200", ENABLE_ACLK_CAM01,
+ 18, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAXIM_ISP0P, "aclk_asyncaxim_isp0p",
+ "div_aclk_cam0_200", ENABLE_ACLK_CAM01,
+ 17, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAXIM_3AA1, "aclk_asyncaxim_3aa1",
+ "div_aclk_cam0_bus_400", ENABLE_ACLK_CAM01,
+ 16, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAXIS_3AA1, "aclk_asyncaxis_3aa1",
+ "div_aclk_3aa1", ENABLE_ACLK_CAM01,
+ 15, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAXIM_3AA0, "aclk_asyncaxim_3aa0",
+ "div_aclk_cam0_bus_400", ENABLE_ACLK_CAM01,
+ 14, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAXIS_3AA0, "aclk_asyncaxis_3aa0",
+ "div_aclk_3aa0", ENABLE_ACLK_CAM01,
+ 13, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAXIM_LITE_D, "aclk_asyncaxim_lite_d",
+ "div_aclk_cam0_bus_400", ENABLE_ACLK_CAM01,
+ 12, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAXIS_LITE_D, "aclk_asyncaxis_lite_d",
+ "div_aclk_lite_d", ENABLE_ACLK_CAM01,
+ 11, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAXIM_LITE_B, "aclk_asyncaxim_lite_b",
+ "div_aclk_cam0_bus_400", ENABLE_ACLK_CAM01,
+ 10, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAXIS_LITE_B, "aclk_asyncaxis_lite_b",
+ "div_aclk_lite_b", ENABLE_ACLK_CAM01,
+ 9, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAXIM_LITE_A, "aclk_asyncaxim_lite_a",
+ "div_aclk_cam0_bus_400", ENABLE_ACLK_CAM01,
+ 8, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAXIS_LITE_A, "aclk_asyncaxis_lite_a",
+ "div_aclk_lite_a", ENABLE_ACLK_CAM01,
+ 7, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_AHB2APB_ISPSFRP, "aclk_ahb2apb_ispsfrp",
+ "div_pclk_cam0_50", ENABLE_ACLK_CAM01,
+ 6, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_AXI2APB_ISP0P, "aclk_axi2apb_isp0p", "div_aclk_cam0_200",
+ ENABLE_ACLK_CAM01, 5, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_AXI2AHB_ISP0P, "aclk_axi2ahb_isp0p", "div_aclk_cam0_200",
+ ENABLE_ACLK_CAM01, 4, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_XIU_IS0X, "aclk_xiu_is0x", "div_aclk_cam0_200",
+ ENABLE_ACLK_CAM01, 3, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_XIU_ISP0EX, "aclk_xiu_isp0ex", "div_aclk_cam0_bus_400",
+ ENABLE_ACLK_CAM01, 2, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_CAM0NP_276, "aclk_cam0np_276", "div_aclk_cam0_200",
+ ENABLE_ACLK_CAM01, 1, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_CAM0ND_400, "aclk_cam0nd_400", "div_aclk_cam0_bus_400",
+ ENABLE_ACLK_CAM01, 0, CLK_IGNORE_UNUSED, 0),
+
+ /* ENABLE_ACLK_CAM02 */
+ GATE(CLK_ACLK_SMMU_3AA1, "aclk_smmu_3aa1", "div_aclk_cam0_bus_400",
+ ENABLE_ACLK_CAM02, 9, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_SMMU_3AA0, "aclk_smmu_3aa0", "div_aclk_cam0_bus_400",
+ ENABLE_ACLK_CAM02, 8, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_SMMU_LITE_D, "aclk_smmu_lite_d", "div_aclk_cam0_bus_400",
+ ENABLE_ACLK_CAM02, 7, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_SMMU_LITE_B, "aclk_smmu_lite_b", "div_aclk_cam0_bus_400",
+ ENABLE_ACLK_CAM02, 6, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_SMMU_LITE_A, "aclk_smmu_lite_a", "div_aclk_cam0_bus_400",
+ ENABLE_ACLK_CAM02, 5, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_BTS_3AA1, "aclk_bts_3aa1", "div_aclk_cam0_bus_400",
+ ENABLE_ACLK_CAM02, 4, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_BTS_3AA0, "aclk_bts_3aa0", "div_aclk_cam0_bus_400",
+ ENABLE_ACLK_CAM02, 3, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_BTS_LITE_D, "aclk_bts_lite_d", "div_aclk_cam0_bus_400",
+ ENABLE_ACLK_CAM02, 2, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_BTS_LITE_B, "aclk_bts_lite_b", "div_aclk_cam0_bus_400",
+ ENABLE_ACLK_CAM02, 1, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_BTS_LITE_A, "aclk_bts_lite_a", "div_aclk_cam0_bus_400",
+ ENABLE_ACLK_CAM02, 0, CLK_IGNORE_UNUSED, 0),
+
+ /* ENABLE_PCLK_CAM0 */
+ GATE(CLK_PCLK_SMMU_3AA1, "pclk_smmu_3aa1", "div_aclk_cam0_200",
+ ENABLE_PCLK_CAM0, 25, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_SMMU_3AA0, "pclk_smmu_3aa0", "div_aclk_cam0_200",
+ ENABLE_PCLK_CAM0, 24, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_SMMU_LITE_D, "pclk_smmu_lite_d", "div_aclk_cam0_200",
+ ENABLE_PCLK_CAM0, 23, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_SMMU_LITE_B, "pclk_smmu_lite_b", "div_aclk_cam0_200",
+ ENABLE_PCLK_CAM0, 22, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_SMMU_LITE_A, "pclk_smmu_lite_a", "div_aclk_cam0_200",
+ ENABLE_PCLK_CAM0, 21, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_BTS_3AA1, "pclk_bts_3aa1", "div_pclk_cam0_50",
+ ENABLE_PCLK_CAM0, 20, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_BTS_3AA0, "pclk_bts_3aa0", "div_pclk_cam0_50",
+ ENABLE_PCLK_CAM0, 19, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_BTS_LITE_D, "pclk_bts_lite_d", "div_pclk_cam0_50",
+ ENABLE_PCLK_CAM0, 18, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_BTS_LITE_B, "pclk_bts_lite_b", "div_pclk_cam0_50",
+ ENABLE_PCLK_CAM0, 17, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_BTS_LITE_A, "pclk_bts_lite_a", "div_pclk_cam0_50",
+ ENABLE_PCLK_CAM0, 16, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_ASYNCAXI_CAM1, "pclk_asyncaxi_cam1", "div_pclk_cam0_50",
+ ENABLE_PCLK_CAM0, 15, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_ASYNCAXI_3AA1, "pclk_asyncaxi_3aa1", "div_pclk_cam0_50",
+ ENABLE_PCLK_CAM0, 14, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_ASYNCAXI_3AA0, "pclk_asyncaxi_3aa0", "div_pclk_cam0_50",
+ ENABLE_PCLK_CAM0, 13, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_ASYNCAXI_LITE_D, "pclk_asyncaxi_lite_d",
+ "div_pclk_cam0_50", ENABLE_PCLK_CAM0,
+ 12, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_ASYNCAXI_LITE_B, "pclk_asyncaxi_lite_b",
+ "div_pclk_cam0_50", ENABLE_PCLK_CAM0,
+ 11, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_ASYNCAXI_LITE_A, "pclk_asyncaxi_lite_a",
+ "div_pclk_cam0_50", ENABLE_PCLK_CAM0,
+ 10, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_PMU_CAM0, "pclk_pmu_cam0", "div_pclk_cam0_50",
+ ENABLE_PCLK_CAM0, 9, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_SYSREG_CAM0, "pclk_sysreg_cam0", "div_pclk_cam0_50",
+ ENABLE_PCLK_CAM0, 8, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_CMU_CAM0_LOCAL, "pclk_cmu_cam0_local",
+ "div_aclk_cam0_200", ENABLE_PCLK_CAM0,
+ 7, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_CSIS1, "pclk_csis1", "div_aclk_cam0_200",
+ ENABLE_PCLK_CAM0, 6, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_CSIS0, "pclk_csis0", "div_aclk_cam0_200",
+ ENABLE_PCLK_CAM0, 5, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_3AA1, "pclk_3aa1", "div_pclk_3aa1",
+ ENABLE_PCLK_CAM0, 4, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_3AA0, "pclk_3aa0", "div_pclk_3aa0",
+ ENABLE_PCLK_CAM0, 3, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_LITE_D, "pclk_lite_d", "div_pclk_lite_d",
+ ENABLE_PCLK_CAM0, 2, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_LITE_B, "pclk_lite_b", "div_pclk_lite_b",
+ ENABLE_PCLK_CAM0, 1, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_LITE_A, "pclk_lite_a", "div_pclk_lite_a",
+ ENABLE_PCLK_CAM0, 0, CLK_IGNORE_UNUSED, 0),
+
+ /* ENABLE_SCLK_CAM0 */
+ GATE(CLK_PHYCLK_RXBYTECLKHS0_S4, "phyclk_rxbyteclkhs0_s4",
+ "mout_phyclk_rxbyteclkhs0_s4_user",
+ ENABLE_SCLK_CAM0, 8, 0, 0),
+ GATE(CLK_PHYCLK_RXBYTECLKHS0_S2A, "phyclk_rxbyteclkhs0_s2a",
+ "mout_phyclk_rxbyteclkhs0_s2a_user",
+ ENABLE_SCLK_CAM0, 7, 0, 0),
+ GATE(CLK_SCLK_LITE_FREECNT, "sclk_lite_freecnt",
+ "mout_sclk_lite_freecnt_c", ENABLE_SCLK_CAM0, 6, 0, 0),
+ GATE(CLK_SCLK_PIXELASYNCM_3AA1, "sclk_pixelasyncm_3aa1",
+ "div_aclk_3aa1", ENABLE_SCLK_CAM0, 5, 0, 0),
+ GATE(CLK_SCLK_PIXELASYNCM_3AA0, "sclk_pixelasyncm_3aa0",
+ "div_aclk_3aa0", ENABLE_SCLK_CAM0, 4, 0, 0),
+ GATE(CLK_SCLK_PIXELASYNCS_3AA0, "sclk_pixelasyncs_3aa0",
+ "div_aclk_3aa0", ENABLE_SCLK_CAM0, 3, 0, 0),
+ GATE(CLK_SCLK_PIXELASYNCM_LITE_C, "sclk_pixelasyncm_lite_c",
+ "div_sclk_pixelasync_lite_c",
+ ENABLE_SCLK_CAM0, 2, 0, 0),
+ GATE(CLK_SCLK_PIXELASYNCM_LITE_C_INIT, "sclk_pixelasyncm_lite_c_init",
+ "div_sclk_pixelasync_lite_c_init",
+ ENABLE_SCLK_CAM0, 1, 0, 0),
+ GATE(CLK_SCLK_PIXELASYNCS_LITE_C_INIT, "sclk_pixelasyncs_lite_c_init",
+ "div_sclk_pixelasync_lite_c",
+ ENABLE_SCLK_CAM0, 0, 0, 0),
+};
+
+static struct samsung_cmu_info cam0_cmu_info __initdata = {
+ .mux_clks = cam0_mux_clks,
+ .nr_mux_clks = ARRAY_SIZE(cam0_mux_clks),
+ .div_clks = cam0_div_clks,
+ .nr_div_clks = ARRAY_SIZE(cam0_div_clks),
+ .gate_clks = cam0_gate_clks,
+ .nr_gate_clks = ARRAY_SIZE(cam0_gate_clks),
+ .fixed_clks = cam0_fixed_clks,
+ .nr_fixed_clks = ARRAY_SIZE(cam0_fixed_clks),
+ .nr_clk_ids = CAM0_NR_CLK,
+ .clk_regs = cam0_clk_regs,
+ .nr_clk_regs = ARRAY_SIZE(cam0_clk_regs),
+};
+
+static void __init exynos5433_cmu_cam0_init(struct device_node *np)
+{
+ samsung_cmu_register_one(np, &cam0_cmu_info);
+}
+CLK_OF_DECLARE(exynos5433_cmu_cam0, "samsung,exynos5433-cmu-cam0",
+ exynos5433_cmu_cam0_init);
+
+/*
+ * Register offset definitions for CMU_CAM1
+ */
+#define MUX_SEL_CAM10 0x0200
+#define MUX_SEL_CAM11 0x0204
+#define MUX_SEL_CAM12 0x0208
+#define MUX_ENABLE_CAM10 0x0300
+#define MUX_ENABLE_CAM11 0x0304
+#define MUX_ENABLE_CAM12 0x0308
+#define MUX_STAT_CAM10 0x0400
+#define MUX_STAT_CAM11 0x0404
+#define MUX_STAT_CAM12 0x0408
+#define MUX_IGNORE_CAM11 0x0504
+#define DIV_CAM10 0x0600
+#define DIV_CAM11 0x0604
+#define DIV_STAT_CAM10 0x0700
+#define DIV_STAT_CAM11 0x0704
+#define ENABLE_ACLK_CAM10 0x0800
+#define ENABLE_ACLK_CAM11 0x0804
+#define ENABLE_ACLK_CAM12 0x0808
+#define ENABLE_PCLK_CAM1 0x0900
+#define ENABLE_SCLK_CAM1 0x0a00
+#define ENABLE_IP_CAM10 0x0b00
+#define ENABLE_IP_CAM11 0x0b04
+#define ENABLE_IP_CAM12 0x0b08
+
+static unsigned long cam1_clk_regs[] __initdata = {
+ MUX_SEL_CAM10,
+ MUX_SEL_CAM11,
+ MUX_SEL_CAM12,
+ MUX_ENABLE_CAM10,
+ MUX_ENABLE_CAM11,
+ MUX_ENABLE_CAM12,
+ MUX_STAT_CAM10,
+ MUX_STAT_CAM11,
+ MUX_STAT_CAM12,
+ MUX_IGNORE_CAM11,
+ DIV_CAM10,
+ DIV_CAM11,
+ DIV_STAT_CAM10,
+ DIV_STAT_CAM11,
+ ENABLE_ACLK_CAM10,
+ ENABLE_ACLK_CAM11,
+ ENABLE_ACLK_CAM12,
+ ENABLE_PCLK_CAM1,
+ ENABLE_SCLK_CAM1,
+ ENABLE_IP_CAM10,
+ ENABLE_IP_CAM11,
+ ENABLE_IP_CAM12,
+};
+
+PNAME(mout_sclk_isp_uart_user_p) = { "oscclk", "sclk_isp_uart_cam1", };
+PNAME(mout_sclk_isp_spi1_user_p) = { "oscclk", "sclk_isp_spi1_cam1", };
+PNAME(mout_sclk_isp_spi0_user_p) = { "oscclk", "sclk_isp_spi0_cam1", };
+
+PNAME(mout_aclk_cam1_333_user_p) = { "oscclk", "aclk_cam1_333", };
+PNAME(mout_aclk_cam1_400_user_p) = { "oscclk", "aclk_cam1_400", };
+PNAME(mout_aclk_cam1_552_user_p) = { "oscclk", "aclk_cam1_552", };
+
+PNAME(mout_phyclk_rxbyteclkhs0_s2b_user_p) = { "oscclk",
+ "phyclk_rxbyteclkhs0_s2b_phy", };
+
+PNAME(mout_aclk_csis2_b_p) = { "mout_aclk_csis2_a",
+ "mout_aclk_cam1_333_user", };
+PNAME(mout_aclk_csis2_a_p) = { "mout_aclk_cam1_552_user",
+ "mout_aclk_cam1_400_user", };
+
+PNAME(mout_aclk_fd_b_p) = { "mout_aclk_fd_a",
+ "mout_aclk_cam1_333_user", };
+PNAME(mout_aclk_fd_a_p) = { "mout_aclk_cam1_552_user",
+ "mout_aclk_cam1_400_user", };
+
+PNAME(mout_aclk_lite_c_b_p) = { "mout_aclk_lite_c_a",
+ "mout_aclk_cam1_333_user", };
+PNAME(mout_aclk_lite_c_a_p) = { "mout_aclk_cam1_552_user",
+ "mout_aclk_cam1_400_user", };
+
+static struct samsung_fixed_rate_clock cam1_fixed_clks[] __initdata = {
+ FRATE(CLK_PHYCLK_RXBYTEECLKHS0_S2B, "phyclk_rxbyteclkhs0_s2b_phy", NULL,
+ CLK_IS_ROOT, 100000000),
+};
+
+static struct samsung_mux_clock cam1_mux_clks[] __initdata = {
+ /* MUX_SEL_CAM10 */
+ MUX(CLK_MOUT_SCLK_ISP_UART_USER, "mout_sclk_isp_uart_user",
+ mout_sclk_isp_uart_user_p, MUX_SEL_CAM10, 20, 1),
+ MUX(CLK_MOUT_SCLK_ISP_SPI1_USER, "mout_sclk_isp_spi1_user",
+ mout_sclk_isp_spi1_user_p, MUX_SEL_CAM10, 16, 1),
+ MUX(CLK_MOUT_SCLK_ISP_SPI0_USER, "mout_sclk_isp_spi0_user",
+ mout_sclk_isp_spi0_user_p, MUX_SEL_CAM10, 12, 1),
+ MUX(CLK_MOUT_ACLK_CAM1_333_USER, "mout_aclk_cam1_333_user",
+ mout_aclk_cam1_333_user_p, MUX_SEL_CAM10, 8, 1),
+ MUX(CLK_MOUT_ACLK_CAM1_400_USER, "mout_aclk_cam1_400_user",
+ mout_aclk_cam1_400_user_p, MUX_SEL_CAM10, 4, 1),
+ MUX(CLK_MOUT_ACLK_CAM1_552_USER, "mout_aclk_cam1_552_user",
+ mout_aclk_cam1_552_user_p, MUX_SEL_CAM10, 0, 1),
+
+ /* MUX_SEL_CAM11 */
+ MUX(CLK_MOUT_PHYCLK_RXBYTECLKHS0_S2B_USER,
+ "mout_phyclk_rxbyteclkhs0_s2b_user",
+ mout_phyclk_rxbyteclkhs0_s2b_user_p,
+ MUX_SEL_CAM11, 0, 1),
+
+ /* MUX_SEL_CAM12 */
+ MUX(CLK_MOUT_ACLK_CSIS2_B, "mout_aclk_csis2_b", mout_aclk_csis2_b_p,
+ MUX_SEL_CAM12, 20, 1),
+ MUX(CLK_MOUT_ACLK_CSIS2_A, "mout_aclk_csis2_a", mout_aclk_csis2_a_p,
+ MUX_SEL_CAM12, 16, 1),
+ MUX(CLK_MOUT_ACLK_FD_B, "mout_aclk_fd_b", mout_aclk_fd_b_p,
+ MUX_SEL_CAM12, 12, 1),
+ MUX(CLK_MOUT_ACLK_FD_A, "mout_aclk_fd_a", mout_aclk_fd_a_p,
+ MUX_SEL_CAM12, 8, 1),
+ MUX(CLK_MOUT_ACLK_LITE_C_B, "mout_aclk_lite_c_b", mout_aclk_lite_c_b_p,
+ MUX_SEL_CAM12, 4, 1),
+ MUX(CLK_MOUT_ACLK_LITE_C_A, "mout_aclk_lite_c_a", mout_aclk_lite_c_a_p,
+ MUX_SEL_CAM12, 0, 1),
+};
+
+static struct samsung_div_clock cam1_div_clks[] __initdata = {
+ /* DIV_CAM10 */
+ DIV(CLK_DIV_SCLK_ISP_WPWM, "div_sclk_isp_wpwm",
+ "div_pclk_cam1_83", DIV_CAM10, 16, 2),
+ DIV(CLK_DIV_PCLK_CAM1_83, "div_pclk_cam1_83",
+ "mout_aclk_cam1_333_user", DIV_CAM10, 12, 2),
+ DIV(CLK_DIV_PCLK_CAM1_166, "div_pclk_cam1_166",
+ "mout_aclk_cam1_333_user", DIV_CAM10, 8, 2),
+ DIV(CLK_DIV_PCLK_DBG_CAM1, "div_pclk_dbg_cam1",
+ "mout_aclk_cam1_552_user", DIV_CAM10, 4, 3),
+ DIV(CLK_DIV_ATCLK_CAM1, "div_atclk_cam1", "mout_aclk_cam1_552_user",
+ DIV_CAM10, 0, 3),
+
+ /* DIV_CAM11 */
+ DIV(CLK_DIV_ACLK_CSIS2, "div_aclk_csis2", "mout_aclk_csis2_b",
+ DIV_CAM11, 16, 3),
+ DIV(CLK_DIV_PCLK_FD, "div_pclk_fd", "div_aclk_fd", DIV_CAM11, 12, 2),
+ DIV(CLK_DIV_ACLK_FD, "div_aclk_fd", "mout_aclk_fd_b", DIV_CAM11, 8, 3),
+ DIV(CLK_DIV_PCLK_LITE_C, "div_pclk_lite_c", "div_aclk_lite_c",
+ DIV_CAM11, 4, 2),
+ DIV(CLK_DIV_ACLK_LITE_C, "div_aclk_lite_c", "mout_aclk_lite_c_b",
+ DIV_CAM11, 0, 3),
+};
+
+static struct samsung_gate_clock cam1_gate_clks[] __initdata = {
+ /* ENABLE_ACLK_CAM10 */
+ GATE(CLK_ACLK_ISP_GIC, "aclk_isp_gic", "mout_aclk_cam1_333_user",
+ ENABLE_ACLK_CAM10, 4, 0, 0),
+ GATE(CLK_ACLK_FD, "aclk_fd", "div_aclk_fd",
+ ENABLE_ACLK_CAM10, 3, 0, 0),
+ GATE(CLK_ACLK_LITE_C, "aclk_lite_c", "div_aclk_lite_c",
+ ENABLE_ACLK_CAM10, 1, 0, 0),
+ GATE(CLK_ACLK_CSIS2, "aclk_csis2", "div_aclk_csis2",
+ ENABLE_ACLK_CAM10, 0, 0, 0),
+
+ /* ENABLE_ACLK_CAM11 */
+ GATE(CLK_ACLK_ASYNCAPBM_FD, "aclk_asyncapbm_fd", "div_pclk_fd",
+ ENABLE_ACLK_CAM11, 29, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAPBS_FD, "aclk_asyncapbs_fd", "div_pclk_cam1_166",
+ ENABLE_ACLK_CAM11, 28, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAPBM_LITE_C, "aclk_asyncapbm_lite_c",
+ "div_pclk_lite_c", ENABLE_ACLK_CAM11,
+ 27, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAPBS_LITE_C, "aclk_asyncapbs_lite_c",
+ "div_pclk_cam1_166", ENABLE_ACLK_CAM11,
+ 26, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAHBS_SFRISP2H2, "aclk_asyncahbs_sfrisp2h2",
+ "div_pclk_cam1_83", ENABLE_ACLK_CAM11,
+ 25, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAHBS_SFRISP2H1, "aclk_asyncahbs_sfrisp2h1",
+ "div_pclk_cam1_83", ENABLE_ACLK_CAM11,
+ 24, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAXIM_CA5, "aclk_asyncaxim_ca5",
+ "mout_aclk_cam1_333_user", ENABLE_ACLK_CAM11,
+ 23, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAXIS_CA5, "aclk_asyncaxis_ca5",
+ "mout_aclk_cam1_552_user", ENABLE_ACLK_CAM11,
+ 22, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAXIS_ISPX2, "aclk_asyncaxis_ispx2",
+ "mout_aclk_cam1_333_user", ENABLE_ACLK_CAM11,
+ 21, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAXIS_ISPX1, "aclk_asyncaxis_ispx1",
+ "mout_aclk_cam1_333_user", ENABLE_ACLK_CAM11,
+ 20, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAXIS_ISPX0, "aclk_asyncaxis_ispx0",
+ "mout_aclk_cam1_333_user", ENABLE_ACLK_CAM11,
+ 19, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAXIM_ISPEX, "aclk_asyncaxim_ispex",
+ "mout_aclk_cam1_400_user", ENABLE_ACLK_CAM11,
+ 18, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAXIM_ISP3P, "aclk_asyncaxim_isp3p",
+ "mout_aclk_cam1_400_user", ENABLE_ACLK_CAM11,
+ 17, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAXIS_ISP3P, "aclk_asyncaxis_isp3p",
+ "mout_aclk_cam1_333_user", ENABLE_ACLK_CAM11,
+ 16, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAXIM_FD, "aclk_asyncaxim_fd",
+ "mout_aclk_cam1_400_user", ENABLE_ACLK_CAM11,
+ 15, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAXIS_FD, "aclk_asyncaxis_fd", "div_aclk_fd",
+ ENABLE_ACLK_CAM11, 14, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAXIM_LITE_C, "aclk_asyncaxim_lite_c",
+ "mout_aclk_cam1_400_user", ENABLE_ACLK_CAM11,
+ 13, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_ASYNCAXIS_LITE_C, "aclk_asyncaxis_lite_c",
+ "div_aclk_lite_c", ENABLE_ACLK_CAM11,
+ 12, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_AHB2APB_ISP5P, "aclk_ahb2apb_isp5p", "div_pclk_cam1_83",
+ ENABLE_ACLK_CAM11, 11, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_AHB2APB_ISP3P, "aclk_ahb2apb_isp3p", "div_pclk_cam1_83",
+ ENABLE_ACLK_CAM11, 10, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_AXI2APB_ISP3P, "aclk_axi2apb_isp3p",
+ "mout_aclk_cam1_333_user", ENABLE_ACLK_CAM11,
+ 9, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_AHB_SFRISP2H, "aclk_ahb_sfrisp2h", "div_pclk_cam1_83",
+ ENABLE_ACLK_CAM11, 8, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_AXI_ISP_HX_R, "aclk_axi_isp_hx_r", "div_pclk_cam1_166",
+ ENABLE_ACLK_CAM11, 7, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_AXI_ISP_CX_R, "aclk_axi_isp_cx_r", "div_pclk_cam1_166",
+ ENABLE_ACLK_CAM11, 6, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_AXI_ISP_HX, "aclk_axi_isp_hx", "mout_aclk_cam1_333_user",
+ ENABLE_ACLK_CAM11, 5, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_AXI_ISP_CX, "aclk_axi_isp_cx", "mout_aclk_cam1_333_user",
+ ENABLE_ACLK_CAM11, 4, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_XIU_ISPX, "aclk_xiu_ispx", "mout_aclk_cam1_333_user",
+ ENABLE_ACLK_CAM11, 3, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_XIU_ISPEX, "aclk_xiu_ispex", "mout_aclk_cam1_400_user",
+ ENABLE_ACLK_CAM11, 2, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_CAM1NP_333, "aclk_cam1np_333", "mout_aclk_cam1_333_user",
+ ENABLE_ACLK_CAM11, 1, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_CAM1ND_400, "aclk_cam1nd_400", "mout_aclk_cam1_400_user",
+ ENABLE_ACLK_CAM11, 0, CLK_IGNORE_UNUSED, 0),
+
+ /* ENABLE_ACLK_CAM12 */
+ GATE(CLK_ACLK_SMMU_ISPCPU, "aclk_smmu_ispcpu",
+ "mout_aclk_cam1_400_user", ENABLE_ACLK_CAM12,
+ 10, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_SMMU_FD, "aclk_smmu_fd", "mout_aclk_cam1_400_user",
+ ENABLE_ACLK_CAM12, 9, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_SMMU_LITE_C, "aclk_smmu_lite_c",
+ "mout_aclk_cam1_400_user", ENABLE_ACLK_CAM12,
+ 8, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_BTS_ISP3P, "aclk_bts_isp3p", "mout_aclk_cam1_400_user",
+ ENABLE_ACLK_CAM12, 7, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_BTS_FD, "aclk_bts_fd", "mout_aclk_cam1_400_user",
+ ENABLE_ACLK_CAM12, 6, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_BTS_LITE_C, "aclk_bts_lite_c", "mout_aclk_cam1_400_user",
+ ENABLE_ACLK_CAM12, 5, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_AHBDN_SFRISP2H, "aclk_ahbdn_sfrisp2h",
+ "mout_aclk_cam1_333_user", ENABLE_ACLK_CAM12,
+ 4, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_AHBDN_ISP5P, "aclk_ahbdn_isp5p",
+ "mout_aclk_cam1_333_user", ENABLE_ACLK_CAM12,
+ 3, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_AXIUS_ISP3P, "aclk_axius_isp3p",
+ "mout_aclk_cam1_400_user", ENABLE_ACLK_CAM12,
+ 2, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_AXIUS_FD, "aclk_axius_fd", "mout_aclk_cam1_400_user",
+ ENABLE_ACLK_CAM12, 1, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_ACLK_AXIUS_LITE_C, "aclk_axius_lite_c",
+ "mout_aclk_cam1_400_user", ENABLE_ACLK_CAM12,
+ 0, CLK_IGNORE_UNUSED, 0),
+
+ /* ENABLE_PCLK_CAM1 */
+ GATE(CLK_PCLK_SMMU_ISPCPU, "pclk_smmu_ispcpu", "div_pclk_cam1_166",
+ ENABLE_PCLK_CAM1, 27, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_SMMU_FD, "pclk_smmu_fd", "div_pclk_cam1_166",
+ ENABLE_PCLK_CAM1, 26, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_SMMU_LITE_C, "pclk_smmu_lite_c", "div_pclk_cam1_166",
+ ENABLE_PCLK_CAM1, 25, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_BTS_ISP3P, "pclk_bts_isp3p", "div_pclk_cam1_83",
+ ENABLE_PCLK_CAM1, 24, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_BTS_FD, "pclk_bts_fd", "div_pclk_cam1_83",
+ ENABLE_PCLK_CAM1, 23, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_BTS_LITE_C, "pclk_bts_lite_c", "div_pclk_cam1_83",
+ ENABLE_PCLK_CAM1, 22, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_ASYNCAXIM_CA5, "pclk_asyncaxim_ca5", "div_pclk_cam1_166",
+ ENABLE_PCLK_CAM1, 21, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_ASYNCAXIM_ISPEX, "pclk_asyncaxim_ispex",
+ "div_pclk_cam1_83", ENABLE_PCLK_CAM1,
+ 20, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_ASYNCAXIM_ISP3P, "pclk_asyncaxim_isp3p",
+ "div_pclk_cam1_83", ENABLE_PCLK_CAM1,
+ 19, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_ASYNCAXIM_FD, "pclk_asyncaxim_fd", "div_pclk_cam1_83",
+ ENABLE_PCLK_CAM1, 18, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_ASYNCAXIM_LITE_C, "pclk_asyncaxim_lite_c",
+ "div_pclk_cam1_83", ENABLE_PCLK_CAM1,
+ 17, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_PMU_CAM1, "pclk_pmu_cam1", "div_pclk_cam1_83",
+ ENABLE_PCLK_CAM1, 16, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_SYSREG_CAM1, "pclk_sysreg_cam1", "div_pclk_cam1_83",
+ ENABLE_PCLK_CAM1, 15, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_CMU_CAM1_LOCAL, "pclk_cmu_cam1_local",
+ "div_pclk_cam1_166", ENABLE_PCLK_CAM1,
+ 14, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_ISP_MCTADC, "pclk_isp_mctadc", "div_pclk_cam1_83",
+ ENABLE_PCLK_CAM1, 13, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_ISP_WDT, "pclk_isp_wdt", "div_pclk_cam1_83",
+ ENABLE_PCLK_CAM1, 12, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_ISP_PWM, "pclk_isp_pwm", "div_pclk_cam1_83",
+ ENABLE_PCLK_CAM1, 11, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_ISP_UART, "pclk_isp_uart", "div_pclk_cam1_83",
+ ENABLE_PCLK_CAM1, 10, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_ISP_MCUCTL, "pclk_isp_mcuctl", "div_pclk_cam1_83",
+ ENABLE_PCLK_CAM1, 9, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_ISP_SPI1, "pclk_isp_spi1", "div_pclk_cam1_83",
+ ENABLE_PCLK_CAM1, 8, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_ISP_SPI0, "pclk_isp_spi0", "div_pclk_cam1_83",
+ ENABLE_PCLK_CAM1, 7, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_ISP_I2C2, "pclk_isp_i2c2", "div_pclk_cam1_83",
+ ENABLE_PCLK_CAM1, 6, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_ISP_I2C1, "pclk_isp_i2c1", "div_pclk_cam1_83",
+ ENABLE_PCLK_CAM1, 5, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_ISP_I2C0, "pclk_isp_i2c0", "div_pclk_cam1_83",
+ ENABLE_PCLK_CAM1, 4, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_ISP_MPWM, "pclk_isp_wpwm", "div_pclk_cam1_83",
+ ENABLE_PCLK_CAM1, 3, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_FD, "pclk_fd", "div_pclk_fd",
+ ENABLE_PCLK_CAM1, 2, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_LITE_C, "pclk_lite_c", "div_pclk_lite_c",
+ ENABLE_PCLK_CAM1, 1, CLK_IGNORE_UNUSED, 0),
+ GATE(CLK_PCLK_CSIS2, "pclk_csis2", "div_pclk_cam1_166",
+ ENABLE_PCLK_CAM1, 0, CLK_IGNORE_UNUSED, 0),
+
+ /* ENABLE_SCLK_CAM1 */
+ GATE(CLK_SCLK_ISP_I2C2, "sclk_isp_i2c2", "oscclk", ENABLE_SCLK_CAM1,
+ 15, 0, 0),
+ GATE(CLK_SCLK_ISP_I2C1, "sclk_isp_i2c1", "oscclk", ENABLE_SCLK_CAM1,
+ 14, 0, 0),
+ GATE(CLK_SCLK_ISP_I2C0, "sclk_isp_i2c0", "oscclk", ENABLE_SCLK_CAM1,
+ 13, 0, 0),
+ GATE(CLK_SCLK_ISP_PWM, "sclk_isp_pwm", "oscclk", ENABLE_SCLK_CAM1,
+ 12, 0, 0),
+ GATE(CLK_PHYCLK_RXBYTECLKHS0_S2B, "phyclk_rxbyteclkhs0_s2b",
+ "mout_phyclk_rxbyteclkhs0_s2b_user",
+ ENABLE_SCLK_CAM1, 11, 0, 0),
+ GATE(CLK_SCLK_LITE_C_FREECNT, "sclk_lite_c_freecnt", "div_pclk_lite_c",
+ ENABLE_SCLK_CAM1, 10, 0, 0),
+ GATE(CLK_SCLK_PIXELASYNCM_FD, "sclk_pixelasyncm_fd", "div_aclk_fd",
+ ENABLE_SCLK_CAM1, 9, 0, 0),
+ GATE(CLK_SCLK_ISP_MCTADC, "sclk_isp_mctadc", "sclk_isp_mctadc_cam1",
+ ENABLE_SCLK_CAM1, 7, 0, 0),
+ GATE(CLK_SCLK_ISP_UART, "sclk_isp_uart", "mout_sclk_isp_uart_user",
+ ENABLE_SCLK_CAM1, 6, 0, 0),
+ GATE(CLK_SCLK_ISP_SPI1, "sclk_isp_spi1", "mout_sclk_isp_spi1_user",
+ ENABLE_SCLK_CAM1, 5, 0, 0),
+ GATE(CLK_SCLK_ISP_SPI0, "sclk_isp_spi0", "mout_sclk_isp_spi0_user",
+ ENABLE_SCLK_CAM1, 4, 0, 0),
+ GATE(CLK_SCLK_ISP_MPWM, "sclk_isp_wpwm", "div_sclk_isp_wpwm",
+ ENABLE_SCLK_CAM1, 3, 0, 0),
+ GATE(CLK_PCLK_DBG_ISP, "sclk_dbg_isp", "div_pclk_dbg_cam1",
+ ENABLE_SCLK_CAM1, 2, 0, 0),
+ GATE(CLK_ATCLK_ISP, "atclk_isp", "div_atclk_cam1",
+ ENABLE_SCLK_CAM1, 1, 0, 0),
+ GATE(CLK_SCLK_ISP_CA5, "sclk_isp_ca5", "mout_aclk_cam1_552_user",
+ ENABLE_SCLK_CAM1, 0, 0, 0),
+};
+
+static struct samsung_cmu_info cam1_cmu_info __initdata = {
+ .mux_clks = cam1_mux_clks,
+ .nr_mux_clks = ARRAY_SIZE(cam1_mux_clks),
+ .div_clks = cam1_div_clks,
+ .nr_div_clks = ARRAY_SIZE(cam1_div_clks),
+ .gate_clks = cam1_gate_clks,
+ .nr_gate_clks = ARRAY_SIZE(cam1_gate_clks),
+ .fixed_clks = cam1_fixed_clks,
+ .nr_fixed_clks = ARRAY_SIZE(cam1_fixed_clks),
+ .nr_clk_ids = CAM1_NR_CLK,
+ .clk_regs = cam1_clk_regs,
+ .nr_clk_regs = ARRAY_SIZE(cam1_clk_regs),
+};
+
+static void __init exynos5433_cmu_cam1_init(struct device_node *np)
+{
+ samsung_cmu_register_one(np, &cam1_cmu_info);
+}
+CLK_OF_DECLARE(exynos5433_cmu_cam1, "samsung,exynos5433-cmu-cam1",
+ exynos5433_cmu_cam1_init);
diff --git a/drivers/clk/samsung/clk-s5pv210.c b/drivers/clk/samsung/clk-s5pv210.c
index d270a20..e668e47 100644
--- a/drivers/clk/samsung/clk-s5pv210.c
+++ b/drivers/clk/samsung/clk-s5pv210.c
@@ -169,44 +169,44 @@
#endif
/* Mux parent lists. */
-static const char *fin_pll_p[] __initconst = {
+static const char *fin_pll_p[] __initdata = {
"xxti",
"xusbxti"
};
-static const char *mout_apll_p[] __initconst = {
+static const char *mout_apll_p[] __initdata = {
"fin_pll",
"fout_apll"
};
-static const char *mout_mpll_p[] __initconst = {
+static const char *mout_mpll_p[] __initdata = {
"fin_pll",
"fout_mpll"
};
-static const char *mout_epll_p[] __initconst = {
+static const char *mout_epll_p[] __initdata = {
"fin_pll",
"fout_epll"
};
-static const char *mout_vpllsrc_p[] __initconst = {
+static const char *mout_vpllsrc_p[] __initdata = {
"fin_pll",
"sclk_hdmi27m"
};
-static const char *mout_vpll_p[] __initconst = {
+static const char *mout_vpll_p[] __initdata = {
"mout_vpllsrc",
"fout_vpll"
};
-static const char *mout_group1_p[] __initconst = {
+static const char *mout_group1_p[] __initdata = {
"dout_a2m",
"mout_mpll",
"mout_epll",
"mout_vpll"
};
-static const char *mout_group2_p[] __initconst = {
+static const char *mout_group2_p[] __initdata = {
"xxti",
"xusbxti",
"sclk_hdmi27m",
@@ -218,7 +218,7 @@
"mout_vpll",
};
-static const char *mout_audio0_p[] __initconst = {
+static const char *mout_audio0_p[] __initdata = {
"xxti",
"pcmcdclk0",
"sclk_hdmi27m",
@@ -230,7 +230,7 @@
"mout_vpll",
};
-static const char *mout_audio1_p[] __initconst = {
+static const char *mout_audio1_p[] __initdata = {
"i2scdclk1",
"pcmcdclk1",
"sclk_hdmi27m",
@@ -242,7 +242,7 @@
"mout_vpll",
};
-static const char *mout_audio2_p[] __initconst = {
+static const char *mout_audio2_p[] __initdata = {
"i2scdclk2",
"pcmcdclk2",
"sclk_hdmi27m",
@@ -254,63 +254,63 @@
"mout_vpll",
};
-static const char *mout_spdif_p[] __initconst = {
+static const char *mout_spdif_p[] __initdata = {
"dout_audio0",
"dout_audio1",
"dout_audio3",
};
-static const char *mout_group3_p[] __initconst = {
+static const char *mout_group3_p[] __initdata = {
"mout_apll",
"mout_mpll"
};
-static const char *mout_group4_p[] __initconst = {
+static const char *mout_group4_p[] __initdata = {
"mout_mpll",
"dout_a2m"
};
-static const char *mout_flash_p[] __initconst = {
+static const char *mout_flash_p[] __initdata = {
"dout_hclkd",
"dout_hclkp"
};
-static const char *mout_dac_p[] __initconst = {
+static const char *mout_dac_p[] __initdata = {
"mout_vpll",
"sclk_hdmiphy"
};
-static const char *mout_hdmi_p[] __initconst = {
+static const char *mout_hdmi_p[] __initdata = {
"sclk_hdmiphy",
"dout_tblk"
};
-static const char *mout_mixer_p[] __initconst = {
+static const char *mout_mixer_p[] __initdata = {
"mout_dac",
"mout_hdmi"
};
-static const char *mout_vpll_6442_p[] __initconst = {
+static const char *mout_vpll_6442_p[] __initdata = {
"fin_pll",
"fout_vpll"
};
-static const char *mout_mixer_6442_p[] __initconst = {
+static const char *mout_mixer_6442_p[] __initdata = {
"mout_vpll",
"dout_mixer"
};
-static const char *mout_d0sync_6442_p[] __initconst = {
+static const char *mout_d0sync_6442_p[] __initdata = {
"mout_dsys",
"div_apll"
};
-static const char *mout_d1sync_6442_p[] __initconst = {
+static const char *mout_d1sync_6442_p[] __initdata = {
"mout_psys",
"div_apll"
};
-static const char *mout_group2_6442_p[] __initconst = {
+static const char *mout_group2_6442_p[] __initdata = {
"fin_pll",
"none",
"none",
@@ -322,7 +322,7 @@
"mout_vpll",
};
-static const char *mout_audio0_6442_p[] __initconst = {
+static const char *mout_audio0_6442_p[] __initdata = {
"fin_pll",
"pcmcdclk0",
"none",
@@ -334,7 +334,7 @@
"mout_vpll",
};
-static const char *mout_audio1_6442_p[] __initconst = {
+static const char *mout_audio1_6442_p[] __initdata = {
"i2scdclk1",
"pcmcdclk1",
"none",
@@ -347,7 +347,7 @@
"fin_pll",
};
-static const char *mout_clksel_p[] __initconst = {
+static const char *mout_clksel_p[] __initdata = {
"fout_apll_clkout",
"fout_mpll_clkout",
"fout_epll",
@@ -370,7 +370,7 @@
"div_dclk"
};
-static const char *mout_clksel_6442_p[] __initconst = {
+static const char *mout_clksel_6442_p[] __initdata = {
"fout_apll_clkout",
"fout_mpll_clkout",
"fout_epll",
@@ -393,7 +393,7 @@
"div_dclk"
};
-static const char *mout_clkout_p[] __initconst = {
+static const char *mout_clkout_p[] __initdata = {
"dout_clkout",
"none",
"xxti",
diff --git a/drivers/clk/st/clkgen-fsyn.c b/drivers/clk/st/clkgen-fsyn.c
index af94ed8..a917c4c 100644
--- a/drivers/clk/st/clkgen-fsyn.c
+++ b/drivers/clk/st/clkgen-fsyn.c
@@ -1057,7 +1057,7 @@
return clk;
}
-static struct of_device_id quadfs_of_match[] = {
+static const struct of_device_id quadfs_of_match[] = {
{
.compatible = "st,stih416-quadfs216",
.data = &st_fs216c65_416
diff --git a/drivers/clk/st/clkgen-mux.c b/drivers/clk/st/clkgen-mux.c
index 9a15ec3..fdcff10 100644
--- a/drivers/clk/st/clkgen-mux.c
+++ b/drivers/clk/st/clkgen-mux.c
@@ -341,7 +341,7 @@
.fb_start_bit_idx = 24,
};
-static struct of_device_id clkgena_divmux_of_match[] = {
+static const struct of_device_id clkgena_divmux_of_match[] = {
{
.compatible = "st,clkgena-divmux-c65-hs",
.data = &st_divmux_c65hs,
@@ -479,7 +479,7 @@
.table = prediv_table16,
};
-static struct of_device_id clkgena_prediv_of_match[] = {
+static const struct of_device_id clkgena_prediv_of_match[] = {
{ .compatible = "st,clkgena-prediv-c65", .data = &prediv_c65_data },
{ .compatible = "st,clkgena-prediv-c32", .data = &prediv_c32_data },
{}
@@ -586,7 +586,7 @@
.width = 2,
};
-static struct of_device_id mux_of_match[] = {
+static const struct of_device_id mux_of_match[] = {
{
.compatible = "st,stih416-clkgenc-vcc-hd",
.data = &clkgen_mux_c_vcc_hd_416,
@@ -693,7 +693,7 @@
.lock = &clkgenf_lock,
};
-static struct of_device_id vcc_of_match[] = {
+static const struct of_device_id vcc_of_match[] = {
{ .compatible = "st,stih416-clkgenc", .data = &st_clkgenc_vcc_416 },
{ .compatible = "st,stih416-clkgenf", .data = &st_clkgenf_vcc_416 },
{}
diff --git a/drivers/clk/st/clkgen-pll.c b/drivers/clk/st/clkgen-pll.c
index 29769d7..d204ba8 100644
--- a/drivers/clk/st/clkgen-pll.c
+++ b/drivers/clk/st/clkgen-pll.c
@@ -593,7 +593,7 @@
return clk;
}
-static struct of_device_id c32_pll_of_match[] = {
+static const struct of_device_id c32_pll_of_match[] = {
{
.compatible = "st,plls-c32-a1x-0",
.data = &st_pll3200c32_a1x_0,
@@ -708,7 +708,7 @@
}
CLK_OF_DECLARE(clkgen_c32_pll, "st,clkgen-plls-c32", clkgen_c32_pll_setup);
-static struct of_device_id c32_gpu_pll_of_match[] = {
+static const struct of_device_id c32_gpu_pll_of_match[] = {
{
.compatible = "st,stih415-gpu-pll-c32",
.data = &st_pll1200c32_gpu_415,
diff --git a/drivers/clk/sunxi/Makefile b/drivers/clk/sunxi/Makefile
index 3a5292e..058f273 100644
--- a/drivers/clk/sunxi/Makefile
+++ b/drivers/clk/sunxi/Makefile
@@ -9,6 +9,7 @@
obj-y += clk-sun8i-mbus.o
obj-y += clk-sun9i-core.o
obj-y += clk-sun9i-mmc.o
+obj-y += clk-usb.o
obj-$(CONFIG_MFD_SUN6I_PRCM) += \
clk-sun6i-ar100.o clk-sun6i-apb0.o clk-sun6i-apb0-gates.o \
diff --git a/drivers/clk/sunxi/clk-sunxi.c b/drivers/clk/sunxi/clk-sunxi.c
index 379324e..7e1e2bd 100644
--- a/drivers/clk/sunxi/clk-sunxi.c
+++ b/drivers/clk/sunxi/clk-sunxi.c
@@ -482,6 +482,45 @@
}
/**
+ * sun5i_a13_get_ahb_factors() - calculates p factor for AHB
+ * AHB rate is calculated as follows
+ * rate = parent_rate >> p
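+ *
+ * A worked example (illustrative numbers, not from the manual): for
+ * parent_rate = 600 MHz and a requested rate of 200 MHz,
+ * DIV_ROUND_UP(600M, 200M) = 3 and order_base_2(3) = 2, so the clock
+ * is rounded to 600 MHz >> 2 = 150 MHz and p is set to 2.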
+ */
+
+static void sun5i_a13_get_ahb_factors(u32 *freq, u32 parent_rate,
+ u8 *n, u8 *k, u8 *m, u8 *p)
+{
+ u32 div;
+
+ /* divide only */
+ if (parent_rate < *freq)
+ *freq = parent_rate;
+
+ /*
+ * user manual says valid speed is 8k ~ 276M, but tests show it
+ * can work at speeds up to 300M, just after reparenting to pll6
+ */
+ if (*freq < 8000)
+ *freq = 8000;
+ if (*freq > 300000000)
+ *freq = 300000000;
+
+ div = order_base_2(DIV_ROUND_UP(parent_rate, *freq));
+
+ /* p = 0 ~ 3 */
+ if (div > 3)
+ div = 3;
+
+ *freq = parent_rate >> div;
+
+ /* we were called to round the frequency, we can now return */
+ if (p == NULL)
+ return;
+
+ *p = div;
+}
+
+/**
* sun4i_get_apb1_factors() - calculates m, p factors for APB1
* APB1 rate is calculated as follows
* rate = (parent_rate >> p) / (m + 1);
@@ -616,6 +655,11 @@
.n_start = 1,
};
+static struct clk_factors_config sun5i_a13_ahb_config = {
+ .pshift = 4,
+ .pwidth = 2,
+};
+
static struct clk_factors_config sun4i_apb1_config = {
.mshift = 0,
.mwidth = 5,
@@ -676,6 +720,13 @@
.name = "pll6x2",
};
+static const struct factors_data sun5i_a13_ahb_data __initconst = {
+ .mux = 6,
+ .muxmask = BIT(1) | BIT(0),
+ .table = &sun5i_a13_ahb_config,
+ .getter = sun5i_a13_get_ahb_factors,
+};
+
static const struct factors_data sun4i_apb1_data __initconst = {
.mux = 24,
.muxmask = BIT(1) | BIT(0),
@@ -838,59 +889,6 @@
/**
- * sunxi_gates_reset... - reset bits in leaf gate clk registers handling
- */
-
-struct gates_reset_data {
- void __iomem *reg;
- spinlock_t *lock;
- struct reset_controller_dev rcdev;
-};
-
-static int sunxi_gates_reset_assert(struct reset_controller_dev *rcdev,
- unsigned long id)
-{
- struct gates_reset_data *data = container_of(rcdev,
- struct gates_reset_data,
- rcdev);
- unsigned long flags;
- u32 reg;
-
- spin_lock_irqsave(data->lock, flags);
-
- reg = readl(data->reg);
- writel(reg & ~BIT(id), data->reg);
-
- spin_unlock_irqrestore(data->lock, flags);
-
- return 0;
-}
-
-static int sunxi_gates_reset_deassert(struct reset_controller_dev *rcdev,
- unsigned long id)
-{
- struct gates_reset_data *data = container_of(rcdev,
- struct gates_reset_data,
- rcdev);
- unsigned long flags;
- u32 reg;
-
- spin_lock_irqsave(data->lock, flags);
-
- reg = readl(data->reg);
- writel(reg | BIT(id), data->reg);
-
- spin_unlock_irqrestore(data->lock, flags);
-
- return 0;
-}
-
-static struct reset_control_ops sunxi_gates_reset_ops = {
- .assert = sunxi_gates_reset_assert,
- .deassert = sunxi_gates_reset_deassert,
-};
-
-/**
* sunxi_gates_clk_setup() - Setup function for leaf gates on clocks
*/
@@ -898,7 +896,6 @@
struct gates_data {
DECLARE_BITMAP(mask, SUNXI_GATES_MAX_SIZE);
- u32 reset_mask;
};
static const struct gates_data sun4i_axi_gates_data __initconst = {
@@ -997,26 +994,10 @@
.mask = {0x1F0007},
};
-static const struct gates_data sun4i_a10_usb_gates_data __initconst = {
- .mask = {0x1C0},
- .reset_mask = 0x07,
-};
-
-static const struct gates_data sun5i_a13_usb_gates_data __initconst = {
- .mask = {0x140},
- .reset_mask = 0x03,
-};
-
-static const struct gates_data sun6i_a31_usb_gates_data __initconst = {
- .mask = { BIT(18) | BIT(17) | BIT(16) | BIT(10) | BIT(9) | BIT(8) },
- .reset_mask = BIT(2) | BIT(1) | BIT(0),
-};
-
static void __init sunxi_gates_clk_setup(struct device_node *node,
struct gates_data *data)
{
struct clk_onecell_data *clk_data;
- struct gates_reset_data *reset_data;
const char *clk_parent;
const char *clk_name;
void __iomem *reg;
@@ -1057,21 +1038,6 @@
clk_data->clk_num = i;
of_clk_add_provider(node, of_clk_src_onecell_get, clk_data);
-
- /* Register a reset controler for gates with reset bits */
- if (data->reset_mask == 0)
- return;
-
- reset_data = kzalloc(sizeof(*reset_data), GFP_KERNEL);
- if (!reset_data)
- return;
-
- reset_data->reg = reg;
- reset_data->lock = &clk_lock;
- reset_data->rcdev.nr_resets = __fls(data->reset_mask) + 1;
- reset_data->rcdev.ops = &sunxi_gates_reset_ops;
- reset_data->rcdev.of_node = node;
- reset_controller_register(&reset_data->rcdev);
}
@@ -1080,13 +1046,20 @@
* sunxi_divs_clk_setup() helper data
*/
-#define SUNXI_DIVS_MAX_QTY 2
+#define SUNXI_DIVS_MAX_QTY 4
#define SUNXI_DIVISOR_WIDTH 2
struct divs_data {
const struct factors_data *factors; /* data for the factor clock */
- int ndivs; /* number of children */
+ int ndivs; /* number of outputs */
+ /*
+ * List of outputs. Refer to the diagram for sunxi_divs_clk_setup():
+ * self or base factor clock refers to the output from the pll
+ * itself. The remaining refer to fixed or configurable divider
+ * outputs.
+ */
struct {
+ u8 self; /* is it the base factor clock? (only one) */
u8 fixed; /* is it a fixed divisor? if not... */
struct clk_div_table *table; /* is it a table based divisor? */
u8 shift; /* otherwise it's a normal divisor with this shift */
@@ -1109,23 +1082,27 @@
.div = {
{ .shift = 0, .pow = 0, }, /* M, DDR */
{ .shift = 16, .pow = 1, }, /* P, other */
+ /* No output for the base factor clock */
}
};
static const struct divs_data pll6_divs_data __initconst = {
.factors = &sun4i_pll6_data,
- .ndivs = 2,
+ .ndivs = 4,
.div = {
{ .shift = 0, .table = pll6_sata_tbl, .gate = 14 }, /* M, SATA */
{ .fixed = 2 }, /* P, other */
+ { .self = 1 }, /* base factor clock, 2x */
+ { .fixed = 4 }, /* pll6 / 4, used as ahb input */
}
};
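+/*
+ * Note: each .div entry above is matched by index to an entry of the
+ * "clock-output-names" DT property, so pll6 now exposes four outputs:
+ * the SATA divider (gated by bit 14), a fixed /2, the undivided 2x
+ * factor clock itself (.self) and a fixed /4 used as the AHB input.
+ */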
static const struct divs_data sun6i_a31_pll6_divs_data __initconst = {
.factors = &sun6i_a31_pll6_data,
- .ndivs = 1,
+ .ndivs = 2,
.div = {
{ .fixed = 2 }, /* normal output */
+ { .self = 1 }, /* base factor clock, 2x */
}
};
@@ -1156,6 +1133,10 @@
int ndivs = SUNXI_DIVS_MAX_QTY, i = 0;
int flags, clkflags;
+ /* if number of children known, use it */
+ if (data->ndivs)
+ ndivs = data->ndivs;
+
/* Set up factor clock that we will be dividing */
pclk = sunxi_factors_clk_setup(node, data->factors);
parent = __clk_get_name(pclk);
@@ -1166,7 +1147,7 @@
if (!clk_data)
return;
- clks = kzalloc((SUNXI_DIVS_MAX_QTY+1) * sizeof(*clks), GFP_KERNEL);
+ clks = kcalloc(ndivs, sizeof(*clks), GFP_KERNEL);
if (!clks)
goto free_clkdata;
@@ -1176,15 +1157,17 @@
* our RAM clock! */
clkflags = !strcmp("pll5", parent) ? 0 : CLK_SET_RATE_PARENT;
- /* if number of children known, use it */
- if (data->ndivs)
- ndivs = data->ndivs;
-
for (i = 0; i < ndivs; i++) {
if (of_property_read_string_index(node, "clock-output-names",
i, &clk_name) != 0)
break;
+ /* If this is the base factor clock, only update clks */
+ if (data->div[i].self) {
+ clk_data->clks[i] = pclk;
+ continue;
+ }
+
gate_hw = NULL;
rate_hw = NULL;
rate_ops = NULL;
@@ -1243,9 +1226,6 @@
clk_register_clkdev(clks[i], clk_name, NULL);
}
- /* The last clock available on the getter is the parent */
- clks[i++] = pclk;
-
/* Adjust to the real max */
clk_data->clk_num = i;
@@ -1269,6 +1249,7 @@
{.compatible = "allwinner,sun6i-a31-pll1-clk", .data = &sun6i_a31_pll1_data,},
{.compatible = "allwinner,sun8i-a23-pll1-clk", .data = &sun8i_a23_pll1_data,},
{.compatible = "allwinner,sun7i-a20-pll4-clk", .data = &sun7i_a20_pll4_data,},
+ {.compatible = "allwinner,sun5i-a13-ahb-clk", .data = &sun5i_a13_ahb_data,},
{.compatible = "allwinner,sun4i-a10-apb1-clk", .data = &sun4i_apb1_data,},
{.compatible = "allwinner,sun7i-a20-out-clk", .data = &sun7i_a20_out_data,},
{}
@@ -1324,9 +1305,6 @@
{.compatible = "allwinner,sun9i-a80-apb1-gates-clk", .data = &sun9i_a80_apb1_gates_data,},
{.compatible = "allwinner,sun6i-a31-apb2-gates-clk", .data = &sun6i_a31_apb2_gates_data,},
{.compatible = "allwinner,sun8i-a23-apb2-gates-clk", .data = &sun8i_a23_apb2_gates_data,},
- {.compatible = "allwinner,sun4i-a10-usb-clk", .data = &sun4i_a10_usb_gates_data,},
- {.compatible = "allwinner,sun5i-a13-usb-clk", .data = &sun5i_a13_usb_gates_data,},
- {.compatible = "allwinner,sun6i-a31-usb-clk", .data = &sun6i_a31_usb_gates_data,},
{}
};
@@ -1348,15 +1326,15 @@
{
unsigned int i;
+ /* Register divided output clocks */
+ of_sunxi_table_clock_setup(clk_divs_match, sunxi_divs_clk_setup);
+
/* Register factor clocks */
of_sunxi_table_clock_setup(clk_factors_match, sunxi_factors_clk_setup);
/* Register divider clocks */
of_sunxi_table_clock_setup(clk_div_match, sunxi_divider_clk_setup);
- /* Register divided output clocks */
- of_sunxi_table_clock_setup(clk_divs_match, sunxi_divs_clk_setup);
-
/* Register mux clocks */
of_sunxi_table_clock_setup(clk_mux_match, sunxi_mux_clk_setup);
@@ -1385,6 +1363,7 @@
CLK_OF_DECLARE(sun4i_a10_clk_init, "allwinner,sun4i-a10", sun4i_a10_init_clocks);
static const char *sun5i_critical_clocks[] __initdata = {
+ "cpu",
"pll5_ddr",
"ahb_sdram",
};
diff --git a/drivers/clk/sunxi/clk-usb.c b/drivers/clk/sunxi/clk-usb.c
new file mode 100644
index 0000000..a86ed2f
--- /dev/null
+++ b/drivers/clk/sunxi/clk-usb.c
@@ -0,0 +1,233 @@
+/*
+ * Copyright 2013-2015 Emilio López
+ *
+ * Emilio López <emilio@elopez.com.ar>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/clk-provider.h>
+#include <linux/clkdev.h>
+#include <linux/of.h>
+#include <linux/of_address.h>
+#include <linux/reset-controller.h>
+#include <linux/spinlock.h>
+
+
+/**
+ * sunxi_usb_reset... - reset bits in usb clk registers handling
+ */
+
+struct usb_reset_data {
+ void __iomem *reg;
+ spinlock_t *lock;
+ struct clk *clk;
+ struct reset_controller_dev rcdev;
+};
+
+static int sunxi_usb_reset_assert(struct reset_controller_dev *rcdev,
+ unsigned long id)
+{
+ struct usb_reset_data *data = container_of(rcdev,
+ struct usb_reset_data,
+ rcdev);
+ unsigned long flags;
+ u32 reg;
+
+ clk_prepare_enable(data->clk);
+ spin_lock_irqsave(data->lock, flags);
+
+ reg = readl(data->reg);
+ writel(reg & ~BIT(id), data->reg);
+
+ spin_unlock_irqrestore(data->lock, flags);
+ clk_disable_unprepare(data->clk);
+
+ return 0;
+}
+
+static int sunxi_usb_reset_deassert(struct reset_controller_dev *rcdev,
+ unsigned long id)
+{
+ struct usb_reset_data *data = container_of(rcdev,
+ struct usb_reset_data,
+ rcdev);
+ unsigned long flags;
+ u32 reg;
+
+ clk_prepare_enable(data->clk);
+ spin_lock_irqsave(data->lock, flags);
+
+ reg = readl(data->reg);
+ writel(reg | BIT(id), data->reg);
+
+ spin_unlock_irqrestore(data->lock, flags);
+ clk_disable_unprepare(data->clk);
+
+ return 0;
+}
+
+static struct reset_control_ops sunxi_usb_reset_ops = {
+ .assert = sunxi_usb_reset_assert,
+ .deassert = sunxi_usb_reset_deassert,
+};
+
+/**
+ * sunxi_usb_clk_setup() - Setup function for usb gate clocks
+ */
+
+#define SUNXI_USB_MAX_SIZE 32
+
+struct usb_clk_data {
+ u32 clk_mask;
+ u32 reset_mask;
+ bool reset_needs_clk;
+};
+
+static void __init sunxi_usb_clk_setup(struct device_node *node,
+ const struct usb_clk_data *data,
+ spinlock_t *lock)
+{
+ struct clk_onecell_data *clk_data;
+ struct usb_reset_data *reset_data;
+ const char *clk_parent;
+ const char *clk_name;
+ void __iomem *reg;
+ int qty;
+ int i = 0;
+ int j = 0;
+
+ reg = of_io_request_and_map(node, 0, of_node_full_name(node));
+ if (IS_ERR(reg))
+ return;
+
+ clk_parent = of_clk_get_parent_name(node, 0);
+ if (!clk_parent)
+ return;
+
+ /* Worst-case size approximation and memory allocation */
+ qty = find_last_bit((unsigned long *)&data->clk_mask,
+ SUNXI_USB_MAX_SIZE);
+
+ clk_data = kmalloc(sizeof(struct clk_onecell_data), GFP_KERNEL);
+ if (!clk_data)
+ return;
+
+ clk_data->clks = kzalloc((qty+1) * sizeof(struct clk *), GFP_KERNEL);
+ if (!clk_data->clks) {
+ kfree(clk_data);
+ return;
+ }
+
+ for_each_set_bit(i, (unsigned long *)&data->clk_mask,
+ SUNXI_USB_MAX_SIZE) {
+ of_property_read_string_index(node, "clock-output-names",
+ j, &clk_name);
+ clk_data->clks[i] = clk_register_gate(NULL, clk_name,
+ clk_parent, 0,
+ reg, i, 0, lock);
+ WARN_ON(IS_ERR(clk_data->clks[i]));
+
+ j++;
+ }
+
+ /* Adjust to the real max */
+ clk_data->clk_num = i;
+
+ of_clk_add_provider(node, of_clk_src_onecell_get, clk_data);
+
+ /* Register a reset controller for usb with reset bits */
+ if (data->reset_mask == 0)
+ return;
+
+ reset_data = kzalloc(sizeof(*reset_data), GFP_KERNEL);
+ if (!reset_data)
+ return;
+
+ if (data->reset_needs_clk) {
+ reset_data->clk = of_clk_get(node, 0);
+ if (IS_ERR(reset_data->clk)) {
+ pr_err("Could not get clock for reset controls\n");
+ kfree(reset_data);
+ return;
+ }
+ }
+
+ reset_data->reg = reg;
+ reset_data->lock = lock;
+ reset_data->rcdev.nr_resets = __fls(data->reset_mask) + 1;
+ reset_data->rcdev.ops = &sunxi_usb_reset_ops;
+ reset_data->rcdev.of_node = node;
+ reset_controller_register(&reset_data->rcdev);
+}
+
+static const struct usb_clk_data sun4i_a10_usb_clk_data __initconst = {
+ .clk_mask = BIT(8) | BIT(7) | BIT(6),
+ .reset_mask = BIT(2) | BIT(1) | BIT(0),
+};
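+
+/*
+ * Note: for the A10 the mask above registers three gate clocks at bits
+ * 6, 7 and 8 of the register, named after "clock-output-names" entries
+ * 0, 1 and 2, while bits 0-2 drive the three USB reset lines handled
+ * by sunxi_usb_reset_ops.
+ */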
+
+static DEFINE_SPINLOCK(sun4i_a10_usb_lock);
+
+static void __init sun4i_a10_usb_setup(struct device_node *node)
+{
+ sunxi_usb_clk_setup(node, &sun4i_a10_usb_clk_data, &sun4i_a10_usb_lock);
+}
+CLK_OF_DECLARE(sun4i_a10_usb, "allwinner,sun4i-a10-usb-clk", sun4i_a10_usb_setup);
+
+static const struct usb_clk_data sun5i_a13_usb_clk_data __initconst = {
+ .clk_mask = BIT(8) | BIT(6),
+ .reset_mask = BIT(1) | BIT(0),
+};
+
+static void __init sun5i_a13_usb_setup(struct device_node *node)
+{
+ sunxi_usb_clk_setup(node, &sun5i_a13_usb_clk_data, &sun4i_a10_usb_lock);
+}
+CLK_OF_DECLARE(sun5i_a13_usb, "allwinner,sun5i-a13-usb-clk", sun5i_a13_usb_setup);
+
+static const struct usb_clk_data sun6i_a31_usb_clk_data __initconst = {
+ .clk_mask = BIT(18) | BIT(17) | BIT(16) | BIT(10) | BIT(9) | BIT(8),
+ .reset_mask = BIT(2) | BIT(1) | BIT(0),
+};
+
+static void __init sun6i_a31_usb_setup(struct device_node *node)
+{
+ sunxi_usb_clk_setup(node, &sun6i_a31_usb_clk_data, &sun4i_a10_usb_lock);
+}
+CLK_OF_DECLARE(sun6i_a31_usb, "allwinner,sun6i-a31-usb-clk", sun6i_a31_usb_setup);
+
+static const struct usb_clk_data sun9i_a80_usb_mod_data __initconst = {
+ .clk_mask = BIT(6) | BIT(5) | BIT(4) | BIT(3) | BIT(2) | BIT(1),
+ .reset_mask = BIT(19) | BIT(18) | BIT(17),
+ .reset_needs_clk = 1,
+};
+
+static DEFINE_SPINLOCK(a80_usb_mod_lock);
+
+static void __init sun9i_a80_usb_mod_setup(struct device_node *node)
+{
+ sunxi_usb_clk_setup(node, &sun9i_a80_usb_mod_data, &a80_usb_mod_lock);
+}
+CLK_OF_DECLARE(sun9i_a80_usb_mod, "allwinner,sun9i-a80-usb-mod-clk", sun9i_a80_usb_mod_setup);
+
+static const struct usb_clk_data sun9i_a80_usb_phy_data __initconst = {
+ .clk_mask = BIT(10) | BIT(5) | BIT(4) | BIT(3) | BIT(2) | BIT(1),
+ .reset_mask = BIT(21) | BIT(20) | BIT(19) | BIT(18) | BIT(17),
+ .reset_needs_clk = 1,
+};
+
+static DEFINE_SPINLOCK(a80_usb_phy_lock);
+
+static void __init sun9i_a80_usb_phy_setup(struct device_node *node)
+{
+ sunxi_usb_clk_setup(node, &sun9i_a80_usb_phy_data, &a80_usb_phy_lock);
+}
+CLK_OF_DECLARE(sun9i_a80_usb_phy, "allwinner,sun9i-a80-usb-phy-clk", sun9i_a80_usb_phy_setup);
diff --git a/drivers/clk/tegra/clk-pll.c b/drivers/clk/tegra/clk-pll.c
index bfef9ab..05c6d08 100644
--- a/drivers/clk/tegra/clk-pll.c
+++ b/drivers/clk/tegra/clk-pll.c
@@ -981,7 +981,7 @@
struct tegra_clk_pll *pll = to_clk_pll(hw);
struct tegra_clk_pll_freq_table cfg, old_cfg;
unsigned long flags = 0;
- int ret = 0;
+ int ret;
ret = _pll_ramp_calc_pll(hw, &cfg, rate, parent_rate);
if (ret < 0)
@@ -1005,7 +1005,7 @@
unsigned long *prate)
{
struct tegra_clk_pll_freq_table cfg;
- int ret = 0, p_div;
+ int ret, p_div;
u64 output_rate = *prate;
ret = _pll_ramp_calc_pll(hw, &cfg, rate, *prate);
@@ -1073,7 +1073,7 @@
{
struct tegra_clk_pll *pll = to_clk_pll(hw);
u32 val;
- int ret = 0;
+ int ret;
unsigned long flags = 0;
if (pll->lock)
@@ -1223,6 +1223,7 @@
return output_rate;
}
+
static int clk_pllre_set_rate(struct clk_hw *hw, unsigned long rate,
unsigned long parent_rate)
{
diff --git a/drivers/clk/tegra/clk-tegra-fixed.c b/drivers/clk/tegra/clk-tegra-fixed.c
index f3b7738..605676d 100644
--- a/drivers/clk/tegra/clk-tegra-fixed.c
+++ b/drivers/clk/tegra/clk-tegra-fixed.c
@@ -30,13 +30,12 @@
#define OSC_CTRL_OSC_FREQ_SHIFT 28
#define OSC_CTRL_PLL_REF_DIV_SHIFT 26
-int __init tegra_osc_clk_init(void __iomem *clk_base,
- struct tegra_clk *tegra_clks,
- unsigned long *input_freqs, int num,
- unsigned long *osc_freq,
- unsigned long *pll_ref_freq)
+int __init tegra_osc_clk_init(void __iomem *clk_base, struct tegra_clk *clks,
+ unsigned long *input_freqs, unsigned int num,
+ unsigned int clk_m_div, unsigned long *osc_freq,
+ unsigned long *pll_ref_freq)
{
- struct clk *clk;
+ struct clk *clk, *osc;
struct clk **dt_clk;
u32 val, pll_ref_div;
unsigned osc_idx;
@@ -54,22 +53,25 @@
return -EINVAL;
}
- dt_clk = tegra_lookup_dt_id(tegra_clk_clk_m, tegra_clks);
+ osc = clk_register_fixed_rate(NULL, "osc", NULL, CLK_IS_ROOT,
+ *osc_freq);
+
+ dt_clk = tegra_lookup_dt_id(tegra_clk_clk_m, clks);
if (!dt_clk)
return 0;
- clk = clk_register_fixed_rate(NULL, "clk_m", NULL, CLK_IS_ROOT,
- *osc_freq);
+ clk = clk_register_fixed_factor(NULL, "clk_m", "osc",
+ 0, 1, clk_m_div);
*dt_clk = clk;
/* pll_ref */
val = (val >> OSC_CTRL_PLL_REF_DIV_SHIFT) & 3;
pll_ref_div = 1 << val;
- dt_clk = tegra_lookup_dt_id(tegra_clk_pll_ref, tegra_clks);
+ dt_clk = tegra_lookup_dt_id(tegra_clk_pll_ref, clks);
if (!dt_clk)
return 0;
- clk = clk_register_fixed_factor(NULL, "pll_ref", "clk_m",
+ clk = clk_register_fixed_factor(NULL, "pll_ref", "osc",
0, 1, pll_ref_div);
*dt_clk = clk;
diff --git a/drivers/clk/tegra/clk-tegra-periph.c b/drivers/clk/tegra/clk-tegra-periph.c
index cef0727..46af924 100644
--- a/drivers/clk/tegra/clk-tegra-periph.c
+++ b/drivers/clk/tegra/clk-tegra-periph.c
@@ -218,7 +218,7 @@
.clk_id = _clk_id, \
.p.parent_name = _parent_name, \
.periph = TEGRA_CLK_PERIPH(0, 0, 0, 0, 0, 0, 0, \
- _clk_num, _gate_flags, 0, NULL), \
+ _clk_num, _gate_flags, NULL, NULL), \
.flags = _flags \
}
diff --git a/drivers/clk/tegra/clk-tegra114.c b/drivers/clk/tegra/clk-tegra114.c
index d076642..8237d16 100644
--- a/drivers/clk/tegra/clk-tegra114.c
+++ b/drivers/clk/tegra/clk-tegra114.c
@@ -940,36 +940,6 @@
static unsigned long osc_freq;
static unsigned long pll_ref_freq;
-static int __init tegra114_osc_clk_init(void __iomem *clk_base)
-{
- struct clk *clk;
- u32 val, pll_ref_div;
-
- val = readl_relaxed(clk_base + OSC_CTRL);
-
- osc_freq = tegra114_input_freq[val >> OSC_CTRL_OSC_FREQ_SHIFT];
- if (!osc_freq) {
- WARN_ON(1);
- return -EINVAL;
- }
-
- /* clk_m */
- clk = clk_register_fixed_rate(NULL, "clk_m", NULL, CLK_IS_ROOT,
- osc_freq);
- clks[TEGRA114_CLK_CLK_M] = clk;
-
- /* pll_ref */
- val = (val >> OSC_CTRL_PLL_REF_DIV_SHIFT) & 3;
- pll_ref_div = 1 << val;
- clk = clk_register_fixed_factor(NULL, "pll_ref", "clk_m",
- CLK_SET_RATE_PARENT, 1, pll_ref_div);
- clks[TEGRA114_CLK_PLL_REF] = clk;
-
- pll_ref_freq = osc_freq / pll_ref_div;
-
- return 0;
-}
-
static void __init tegra114_fixed_clk_init(void __iomem *clk_base)
{
struct clk *clk;
@@ -1263,6 +1233,7 @@
cpu_relax();
} while (!(reg & (1 << cpu))); /* check CPU been reset or not */
}
+
static void tegra114_disable_cpu_clock(u32 cpu)
{
/* flow controller would take care in the power sequence. */
@@ -1351,7 +1322,6 @@
tegra_init_from_table(init_table, clks, TEGRA114_CLK_CLK_MAX);
}
-
/**
* tegra114_car_barrier - wait for pending writes to the CAR to complete
*
@@ -1505,7 +1475,9 @@
if (!clks)
return;
- if (tegra114_osc_clk_init(clk_base) < 0)
+ if (tegra_osc_clk_init(clk_base, tegra114_clks, tegra114_input_freq,
+ ARRAY_SIZE(tegra114_input_freq), 1, &osc_freq,
+ &pll_ref_freq) < 0)
return;
tegra114_fixed_clk_init(clk_base);
diff --git a/drivers/clk/tegra/clk-tegra124.c b/drivers/clk/tegra/clk-tegra124.c
index 9a893f2..11f857c 100644
--- a/drivers/clk/tegra/clk-tegra124.c
+++ b/drivers/clk/tegra/clk-tegra124.c
@@ -1014,6 +1014,9 @@
{ .con_id = "fuse", .dt_id = TEGRA124_CLK_FUSE },
{ .dev_id = "rtc-tegra", .dt_id = TEGRA124_CLK_RTC },
{ .dev_id = "timer", .dt_id = TEGRA124_CLK_TIMER },
+ { .con_id = "hda", .dt_id = TEGRA124_CLK_HDA },
+ { .con_id = "hda2codec_2x", .dt_id = TEGRA124_CLK_HDA2CODEC_2X },
+ { .con_id = "hda2hdmi", .dt_id = TEGRA124_CLK_HDA2HDMI },
};
static struct clk **clks;
@@ -1110,16 +1113,18 @@
1, 2);
clks[TEGRA124_CLK_XUSB_SS_DIV2] = clk;
- clk = clk_register_gate(NULL, "plld_dsi", "plld_out0", 0,
+ clk = clk_register_gate(NULL, "pll_d_dsi_out", "pll_d_out0", 0,
clk_base + PLLD_MISC, 30, 0, &pll_d_lock);
- clks[TEGRA124_CLK_PLLD_DSI] = clk;
+ clks[TEGRA124_CLK_PLL_D_DSI_OUT] = clk;
- clk = tegra_clk_register_periph_gate("dsia", "plld_dsi", 0, clk_base,
- 0, 48, periph_clk_enb_refcnt);
+ clk = tegra_clk_register_periph_gate("dsia", "pll_d_dsi_out", 0,
+ clk_base, 0, 48,
+ periph_clk_enb_refcnt);
clks[TEGRA124_CLK_DSIA] = clk;
- clk = tegra_clk_register_periph_gate("dsib", "plld_dsi", 0, clk_base,
- 0, 82, periph_clk_enb_refcnt);
+ clk = tegra_clk_register_periph_gate("dsib", "pll_d_dsi_out", 0,
+ clk_base, 0, 82,
+ periph_clk_enb_refcnt);
clks[TEGRA124_CLK_DSIB] = clk;
/* emc mux */
@@ -1395,6 +1400,8 @@
static struct tegra_clk_init_table tegra124_init_table[] __initdata = {
{TEGRA124_CLK_SOC_THERM, TEGRA124_CLK_PLL_P, 51000000, 0},
{TEGRA124_CLK_CCLK_G, TEGRA124_CLK_CLK_MAX, 0, 1},
+ {TEGRA124_CLK_HDA, TEGRA124_CLK_PLL_P, 102000000, 0},
+ {TEGRA124_CLK_HDA2CODEC_2X, TEGRA124_CLK_PLL_P, 48000000, 0},
/* This MUST be the last entry. */
{TEGRA124_CLK_CLK_MAX, TEGRA124_CLK_CLK_MAX, 0, 0},
};
@@ -1475,7 +1482,8 @@
return;
if (tegra_osc_clk_init(clk_base, tegra124_clks, tegra124_input_freq,
- ARRAY_SIZE(tegra124_input_freq), &osc_freq, &pll_ref_freq) < 0)
+ ARRAY_SIZE(tegra124_input_freq), 1, &osc_freq,
+ &pll_ref_freq) < 0)
return;
tegra_fixed_clk_init(tegra124_clks);
diff --git a/drivers/clk/tegra/clk-tegra30.c b/drivers/clk/tegra/clk-tegra30.c
index 4b9d8bd..4b26509 100644
--- a/drivers/clk/tegra/clk-tegra30.c
+++ b/drivers/clk/tegra/clk-tegra30.c
@@ -657,16 +657,16 @@
{ .con_id = "fuse_burn", .dev_id = "fuse-tegra", .dt_id = TEGRA30_CLK_FUSE_BURN },
{ .con_id = "apbif", .dev_id = "tegra30-ahub", .dt_id = TEGRA30_CLK_APBIF },
{ .con_id = "hda2hdmi", .dev_id = "tegra30-hda", .dt_id = TEGRA30_CLK_HDA2HDMI },
- { .dev_id = "tegra-apbdma", .dt_id = TEGRA30_CLK_APBDMA },
- { .dev_id = "rtc-tegra", .dt_id = TEGRA30_CLK_RTC },
- { .dev_id = "timer", .dt_id = TEGRA30_CLK_TIMER },
- { .dev_id = "tegra-kbc", .dt_id = TEGRA30_CLK_KBC },
- { .dev_id = "fsl-tegra-udc", .dt_id = TEGRA30_CLK_USBD },
- { .dev_id = "tegra-ehci.1", .dt_id = TEGRA30_CLK_USB2 },
- { .dev_id = "tegra-ehci.2", .dt_id = TEGRA30_CLK_USB2 },
- { .dev_id = "kfuse-tegra", .dt_id = TEGRA30_CLK_KFUSE },
- { .dev_id = "tegra_sata_cold", .dt_id = TEGRA30_CLK_SATA_COLD },
- { .dev_id = "dtv", .dt_id = TEGRA30_CLK_DTV },
+ { .dev_id = "tegra-apbdma", .dt_id = TEGRA30_CLK_APBDMA },
+ { .dev_id = "rtc-tegra", .dt_id = TEGRA30_CLK_RTC },
+ { .dev_id = "timer", .dt_id = TEGRA30_CLK_TIMER },
+ { .dev_id = "tegra-kbc", .dt_id = TEGRA30_CLK_KBC },
+ { .dev_id = "fsl-tegra-udc", .dt_id = TEGRA30_CLK_USBD },
+ { .dev_id = "tegra-ehci.1", .dt_id = TEGRA30_CLK_USB2 },
+ { .dev_id = "tegra-ehci.2", .dt_id = TEGRA30_CLK_USB2 },
+ { .dev_id = "kfuse-tegra", .dt_id = TEGRA30_CLK_KFUSE },
+ { .dev_id = "tegra_sata_cold", .dt_id = TEGRA30_CLK_SATA_COLD },
+ { .dev_id = "dtv", .dt_id = TEGRA30_CLK_DTV },
{ .dev_id = "tegra30-i2s.0", .dt_id = TEGRA30_CLK_I2S0 },
{ .dev_id = "tegra30-i2s.1", .dt_id = TEGRA30_CLK_I2S1 },
{ .dev_id = "tegra30-i2s.2", .dt_id = TEGRA30_CLK_I2S2 },
@@ -1434,7 +1434,8 @@
return;
if (tegra_osc_clk_init(clk_base, tegra30_clks, tegra30_input_freq,
- ARRAY_SIZE(tegra30_input_freq), &input_freq, NULL) < 0)
+ ARRAY_SIZE(tegra30_input_freq), 1, &input_freq,
+ NULL) < 0)
return;
diff --git a/drivers/clk/tegra/clk.c b/drivers/clk/tegra/clk.c
index 9ddb754..41cd87c 100644
--- a/drivers/clk/tegra/clk.c
+++ b/drivers/clk/tegra/clk.c
@@ -30,6 +30,7 @@
#define CLK_OUT_ENB_V 0x360
#define CLK_OUT_ENB_W 0x364
#define CLK_OUT_ENB_X 0x280
+#define CLK_OUT_ENB_Y 0x298
#define CLK_OUT_ENB_SET_L 0x320
#define CLK_OUT_ENB_CLR_L 0x324
#define CLK_OUT_ENB_SET_H 0x328
@@ -42,6 +43,8 @@
#define CLK_OUT_ENB_CLR_W 0x44c
#define CLK_OUT_ENB_SET_X 0x284
#define CLK_OUT_ENB_CLR_X 0x288
+#define CLK_OUT_ENB_SET_Y 0x29c
+#define CLK_OUT_ENB_CLR_Y 0x2a0
#define RST_DEVICES_L 0x004
#define RST_DEVICES_H 0x008
@@ -50,6 +53,7 @@
#define RST_DEVICES_V 0x358
#define RST_DEVICES_W 0x35C
#define RST_DEVICES_X 0x28C
+#define RST_DEVICES_Y 0x2a4
#define RST_DEVICES_SET_L 0x300
#define RST_DEVICES_CLR_L 0x304
#define RST_DEVICES_SET_H 0x308
@@ -62,6 +66,8 @@
#define RST_DEVICES_CLR_W 0x43c
#define RST_DEVICES_SET_X 0x290
#define RST_DEVICES_CLR_X 0x294
+#define RST_DEVICES_SET_Y 0x2a8
+#define RST_DEVICES_CLR_Y 0x2ac
/* Global data of Tegra CPU CAR ops */
static struct tegra_cpu_car_ops dummy_car_ops;
@@ -122,6 +128,14 @@
.rst_set_reg = RST_DEVICES_SET_X,
.rst_clr_reg = RST_DEVICES_CLR_X,
},
+ [6] = {
+ .enb_reg = CLK_OUT_ENB_Y,
+ .enb_set_reg = CLK_OUT_ENB_SET_Y,
+ .enb_clr_reg = CLK_OUT_ENB_CLR_Y,
+ .rst_reg = RST_DEVICES_Y,
+ .rst_set_reg = RST_DEVICES_SET_Y,
+ .rst_clr_reg = RST_DEVICES_CLR_Y,
+ },
};
static void __iomem *clk_base;
@@ -272,7 +286,7 @@
of_clk_add_provider(np, of_clk_src_onecell_get, &clk_data);
rst_ctlr.of_node = np;
- rst_ctlr.nr_resets = clk_num * 32;
+ rst_ctlr.nr_resets = periph_banks * 32;
reset_controller_register(&rst_ctlr);
}
diff --git a/drivers/clk/tegra/clk.h b/drivers/clk/tegra/clk.h
index 4e458aa..d6ac006 100644
--- a/drivers/clk/tegra/clk.h
+++ b/drivers/clk/tegra/clk.h
@@ -548,7 +548,7 @@
u8 width, u8 pllx_index, u8 div2_index, spinlock_t *lock);
/**
- * struct clk_init_tabel - clock initialization table
+ * struct clk_init_table - clock initialization table
* @clk_id: clock id as mentioned in device tree bindings
* @parent_id: parent clock id as mentioned in device tree bindings
* @rate: rate to set
@@ -615,10 +615,10 @@
void tegra_pmc_clk_init(void __iomem *pmc_base, struct tegra_clk *tegra_clks);
void tegra_fixed_clk_init(struct tegra_clk *tegra_clks);
-int tegra_osc_clk_init(void __iomem *clk_base, struct tegra_clk *tegra_clks,
- unsigned long *input_freqs, int num,
- unsigned long *osc_freq,
- unsigned long *pll_ref_freq);
+int tegra_osc_clk_init(void __iomem *clk_base, struct tegra_clk *clks,
+ unsigned long *input_freqs, unsigned int num,
+ unsigned int clk_m_div, unsigned long *osc_freq,
+ unsigned long *pll_ref_freq);
void tegra_super_clk_gen4_init(void __iomem *clk_base,
void __iomem *pmc_base, struct tegra_clk *tegra_clks,
struct tegra_clk_pll_params *pll_params);
diff --git a/drivers/clk/ti/apll.c b/drivers/clk/ti/apll.c
index 72d9727..49baf38 100644
--- a/drivers/clk/ti/apll.c
+++ b/drivers/clk/ti/apll.c
@@ -203,7 +203,7 @@
ad->control_reg = ti_clk_get_reg_addr(node, 0);
ad->idlest_reg = ti_clk_get_reg_addr(node, 1);
- if (!ad->control_reg || !ad->idlest_reg)
+ if (IS_ERR(ad->control_reg) || IS_ERR(ad->idlest_reg))
goto cleanup;
ad->idlest_mask = 0x1;
@@ -384,7 +384,8 @@
ad->autoidle_reg = ti_clk_get_reg_addr(node, 1);
ad->idlest_reg = ti_clk_get_reg_addr(node, 2);
- if (!ad->control_reg || !ad->autoidle_reg || !ad->idlest_reg)
+ if (IS_ERR(ad->control_reg) || IS_ERR(ad->autoidle_reg) ||
+ IS_ERR(ad->idlest_reg))
goto cleanup;
clk = clk_register(NULL, &clk_hw->hw);
diff --git a/drivers/clk/ti/autoidle.c b/drivers/clk/ti/autoidle.c
index 8912ff8..e75c64c9 100644
--- a/drivers/clk/ti/autoidle.c
+++ b/drivers/clk/ti/autoidle.c
@@ -119,7 +119,7 @@
clk->name = node->name;
clk->reg = ti_clk_get_reg_addr(node, 0);
- if (!clk->reg) {
+ if (IS_ERR(clk->reg)) {
kfree(clk);
return -EINVAL;
}
diff --git a/drivers/clk/ti/clk-3xxx-legacy.c b/drivers/clk/ti/clk-3xxx-legacy.c
index e0732a4..0b61548 100644
--- a/drivers/clk/ti/clk-3xxx-legacy.c
+++ b/drivers/clk/ti/clk-3xxx-legacy.c
@@ -4320,7 +4320,6 @@
CLK(NULL, "dpll3_m3x2_ck", &dpll3_m3x2_ck),
CLK("etb", "emu_core_alwon_ck", &emu_core_alwon_ck),
CLK(NULL, "sys_altclk", &sys_altclk),
- CLK(NULL, "mcbsp_clks", &mcbsp_clks),
CLK(NULL, "sys_clkout1", &sys_clkout1),
CLK(NULL, "dpll3_m2_ck", &dpll3_m2_ck),
CLK(NULL, "core_ck", &core_ck),
@@ -4369,8 +4368,6 @@
CLK(NULL, "i2c3_fck", &i2c3_fck),
CLK(NULL, "i2c2_fck", &i2c2_fck),
CLK(NULL, "i2c1_fck", &i2c1_fck),
- CLK(NULL, "mcbsp5_fck", &mcbsp5_fck),
- CLK(NULL, "mcbsp1_fck", &mcbsp1_fck),
CLK(NULL, "core_48m_fck", &core_48m_fck),
CLK(NULL, "mcspi4_fck", &mcspi4_fck),
CLK(NULL, "mcspi3_fck", &mcspi3_fck),
@@ -4409,8 +4406,6 @@
CLK(NULL, "uart1_ick", &uart1_ick),
CLK(NULL, "gpt11_ick", &gpt11_ick),
CLK(NULL, "gpt10_ick", &gpt10_ick),
- CLK("omap-mcbsp.5", "ick", &mcbsp5_ick),
- CLK("omap-mcbsp.1", "ick", &mcbsp1_ick),
CLK(NULL, "mcbsp5_ick", &mcbsp5_ick),
CLK(NULL, "mcbsp1_ick", &mcbsp1_ick),
CLK(NULL, "omapctrl_ick", &omapctrl_ick),
@@ -4467,15 +4462,22 @@
CLK(NULL, "gpt4_ick", &gpt4_ick),
CLK(NULL, "gpt3_ick", &gpt3_ick),
CLK(NULL, "gpt2_ick", &gpt2_ick),
+ CLK(NULL, "mcbsp_clks", &mcbsp_clks),
+ CLK("omap-mcbsp.1", "ick", &mcbsp1_ick),
CLK("omap-mcbsp.2", "ick", &mcbsp2_ick),
CLK("omap-mcbsp.3", "ick", &mcbsp3_ick),
CLK("omap-mcbsp.4", "ick", &mcbsp4_ick),
- CLK(NULL, "mcbsp4_ick", &mcbsp2_ick),
+ CLK("omap-mcbsp.5", "ick", &mcbsp5_ick),
+ CLK(NULL, "mcbsp1_ick", &mcbsp1_ick),
+ CLK(NULL, "mcbsp2_ick", &mcbsp2_ick),
CLK(NULL, "mcbsp3_ick", &mcbsp3_ick),
- CLK(NULL, "mcbsp2_ick", &mcbsp4_ick),
+ CLK(NULL, "mcbsp4_ick", &mcbsp4_ick),
+ CLK(NULL, "mcbsp5_ick", &mcbsp5_ick),
+ CLK(NULL, "mcbsp1_fck", &mcbsp1_fck),
CLK(NULL, "mcbsp2_fck", &mcbsp2_fck),
CLK(NULL, "mcbsp3_fck", &mcbsp3_fck),
CLK(NULL, "mcbsp4_fck", &mcbsp4_fck),
+ CLK(NULL, "mcbsp5_fck", &mcbsp5_fck),
CLK(NULL, "emu_src_mux_ck", &emu_src_mux_ck),
CLK("etb", "emu_src_ck", &emu_src_ck),
CLK(NULL, "emu_src_mux_ck", &emu_src_mux_ck),
diff --git a/drivers/clk/ti/clk-3xxx.c b/drivers/clk/ti/clk-3xxx.c
index 383a06e..757636d 100644
--- a/drivers/clk/ti/clk-3xxx.c
+++ b/drivers/clk/ti/clk-3xxx.c
@@ -34,7 +34,6 @@
DT_CLK(NULL, "omap_96m_alwon_fck", "omap_96m_alwon_fck"),
DT_CLK("etb", "emu_core_alwon_ck", "emu_core_alwon_ck"),
DT_CLK(NULL, "sys_altclk", "sys_altclk"),
- DT_CLK(NULL, "mcbsp_clks", "mcbsp_clks"),
DT_CLK(NULL, "sys_clkout1", "sys_clkout1"),
DT_CLK(NULL, "dpll1_ck", "dpll1_ck"),
DT_CLK(NULL, "dpll1_x2_ck", "dpll1_x2_ck"),
@@ -82,8 +81,6 @@
DT_CLK(NULL, "i2c3_fck", "i2c3_fck"),
DT_CLK(NULL, "i2c2_fck", "i2c2_fck"),
DT_CLK(NULL, "i2c1_fck", "i2c1_fck"),
- DT_CLK(NULL, "mcbsp5_fck", "mcbsp5_fck"),
- DT_CLK(NULL, "mcbsp1_fck", "mcbsp1_fck"),
DT_CLK(NULL, "core_48m_fck", "core_48m_fck"),
DT_CLK(NULL, "mcspi4_fck", "mcspi4_fck"),
DT_CLK(NULL, "mcspi3_fck", "mcspi3_fck"),
@@ -122,10 +119,6 @@
DT_CLK(NULL, "uart1_ick", "uart1_ick"),
DT_CLK(NULL, "gpt11_ick", "gpt11_ick"),
DT_CLK(NULL, "gpt10_ick", "gpt10_ick"),
- DT_CLK("omap-mcbsp.5", "ick", "mcbsp5_ick"),
- DT_CLK("omap-mcbsp.1", "ick", "mcbsp1_ick"),
- DT_CLK(NULL, "mcbsp5_ick", "mcbsp5_ick"),
- DT_CLK(NULL, "mcbsp1_ick", "mcbsp1_ick"),
DT_CLK(NULL, "omapctrl_ick", "omapctrl_ick"),
DT_CLK(NULL, "dss_tv_fck", "dss_tv_fck"),
DT_CLK(NULL, "dss_96m_fck", "dss_96m_fck"),
@@ -179,15 +172,17 @@
DT_CLK(NULL, "gpt4_ick", "gpt4_ick"),
DT_CLK(NULL, "gpt3_ick", "gpt3_ick"),
DT_CLK(NULL, "gpt2_ick", "gpt2_ick"),
- DT_CLK("omap-mcbsp.2", "ick", "mcbsp2_ick"),
- DT_CLK("omap-mcbsp.3", "ick", "mcbsp3_ick"),
- DT_CLK("omap-mcbsp.4", "ick", "mcbsp4_ick"),
- DT_CLK(NULL, "mcbsp4_ick", "mcbsp2_ick"),
+ DT_CLK(NULL, "mcbsp_clks", "mcbsp_clks"),
+ DT_CLK(NULL, "mcbsp1_ick", "mcbsp1_ick"),
+ DT_CLK(NULL, "mcbsp2_ick", "mcbsp2_ick"),
DT_CLK(NULL, "mcbsp3_ick", "mcbsp3_ick"),
- DT_CLK(NULL, "mcbsp2_ick", "mcbsp4_ick"),
+ DT_CLK(NULL, "mcbsp4_ick", "mcbsp4_ick"),
+ DT_CLK(NULL, "mcbsp5_ick", "mcbsp5_ick"),
+ DT_CLK(NULL, "mcbsp1_fck", "mcbsp1_fck"),
DT_CLK(NULL, "mcbsp2_fck", "mcbsp2_fck"),
DT_CLK(NULL, "mcbsp3_fck", "mcbsp3_fck"),
DT_CLK(NULL, "mcbsp4_fck", "mcbsp4_fck"),
+ DT_CLK(NULL, "mcbsp5_fck", "mcbsp5_fck"),
DT_CLK("etb", "emu_src_ck", "emu_src_ck"),
DT_CLK(NULL, "emu_src_ck", "emu_src_ck"),
DT_CLK(NULL, "pclk_fck", "pclk_fck"),
diff --git a/drivers/clk/ti/clk-44xx.c b/drivers/clk/ti/clk-44xx.c
index 4f4c877..581db77 100644
--- a/drivers/clk/ti/clk-44xx.c
+++ b/drivers/clk/ti/clk-44xx.c
@@ -249,17 +249,6 @@
DT_CLK("usbhs_tll", "usbtll_fck", "dummy_ck"),
DT_CLK("omap_wdt", "ick", "dummy_ck"),
DT_CLK(NULL, "timer_32k_ck", "sys_32k_ck"),
- DT_CLK("omap_timer.1", "timer_sys_ck", "sys_clkin_ck"),
- DT_CLK("omap_timer.2", "timer_sys_ck", "sys_clkin_ck"),
- DT_CLK("omap_timer.3", "timer_sys_ck", "sys_clkin_ck"),
- DT_CLK("omap_timer.4", "timer_sys_ck", "sys_clkin_ck"),
- DT_CLK("omap_timer.9", "timer_sys_ck", "sys_clkin_ck"),
- DT_CLK("omap_timer.10", "timer_sys_ck", "sys_clkin_ck"),
- DT_CLK("omap_timer.11", "timer_sys_ck", "sys_clkin_ck"),
- DT_CLK("omap_timer.5", "timer_sys_ck", "syc_clk_div_ck"),
- DT_CLK("omap_timer.6", "timer_sys_ck", "syc_clk_div_ck"),
- DT_CLK("omap_timer.7", "timer_sys_ck", "syc_clk_div_ck"),
- DT_CLK("omap_timer.8", "timer_sys_ck", "syc_clk_div_ck"),
DT_CLK("4a318000.timer", "timer_sys_ck", "sys_clkin_ck"),
DT_CLK("48032000.timer", "timer_sys_ck", "sys_clkin_ck"),
DT_CLK("48034000.timer", "timer_sys_ck", "sys_clkin_ck"),
diff --git a/drivers/clk/ti/clk-54xx.c b/drivers/clk/ti/clk-54xx.c
index 14160b2..96c69a3 100644
--- a/drivers/clk/ti/clk-54xx.c
+++ b/drivers/clk/ti/clk-54xx.c
@@ -208,17 +208,17 @@
DT_CLK("usbhs_omap", "usbtll_fck", "dummy_ck"),
DT_CLK("omap_wdt", "ick", "dummy_ck"),
DT_CLK(NULL, "timer_32k_ck", "sys_32k_ck"),
- DT_CLK("omap_timer.1", "sys_ck", "sys_clkin"),
- DT_CLK("omap_timer.2", "sys_ck", "sys_clkin"),
- DT_CLK("omap_timer.3", "sys_ck", "sys_clkin"),
- DT_CLK("omap_timer.4", "sys_ck", "sys_clkin"),
- DT_CLK("omap_timer.9", "sys_ck", "sys_clkin"),
- DT_CLK("omap_timer.10", "sys_ck", "sys_clkin"),
- DT_CLK("omap_timer.11", "sys_ck", "sys_clkin"),
- DT_CLK("omap_timer.5", "sys_ck", "dss_syc_gfclk_div"),
- DT_CLK("omap_timer.6", "sys_ck", "dss_syc_gfclk_div"),
- DT_CLK("omap_timer.7", "sys_ck", "dss_syc_gfclk_div"),
- DT_CLK("omap_timer.8", "sys_ck", "dss_syc_gfclk_div"),
+ DT_CLK("4ae18000.timer", "timer_sys_ck", "sys_clkin"),
+ DT_CLK("48032000.timer", "timer_sys_ck", "sys_clkin"),
+ DT_CLK("48034000.timer", "timer_sys_ck", "sys_clkin"),
+ DT_CLK("48036000.timer", "timer_sys_ck", "sys_clkin"),
+ DT_CLK("4803e000.timer", "timer_sys_ck", "sys_clkin"),
+ DT_CLK("48086000.timer", "timer_sys_ck", "sys_clkin"),
+ DT_CLK("48088000.timer", "timer_sys_ck", "sys_clkin"),
+ DT_CLK("40138000.timer", "timer_sys_ck", "dss_syc_gfclk_div"),
+ DT_CLK("4013a000.timer", "timer_sys_ck", "dss_syc_gfclk_div"),
+ DT_CLK("4013c000.timer", "timer_sys_ck", "dss_syc_gfclk_div"),
+ DT_CLK("4013e000.timer", "timer_sys_ck", "dss_syc_gfclk_div"),
{ .node_name = NULL },
};
diff --git a/drivers/clk/ti/clk-7xx.c b/drivers/clk/ti/clk-7xx.c
index ee32f4de..5d2217a 100644
--- a/drivers/clk/ti/clk-7xx.c
+++ b/drivers/clk/ti/clk-7xx.c
@@ -289,17 +289,21 @@
DT_CLK("usbhs_omap", "usbtll_fck", "dummy_ck"),
DT_CLK("omap_wdt", "ick", "dummy_ck"),
DT_CLK(NULL, "timer_32k_ck", "sys_32k_ck"),
- DT_CLK("4ae18000.timer", "timer_sys_ck", "sys_clkin2"),
- DT_CLK("48032000.timer", "timer_sys_ck", "sys_clkin2"),
- DT_CLK("48034000.timer", "timer_sys_ck", "sys_clkin2"),
- DT_CLK("48036000.timer", "timer_sys_ck", "sys_clkin2"),
- DT_CLK("4803e000.timer", "timer_sys_ck", "sys_clkin2"),
- DT_CLK("48086000.timer", "timer_sys_ck", "sys_clkin2"),
- DT_CLK("48088000.timer", "timer_sys_ck", "sys_clkin2"),
+ DT_CLK("4ae18000.timer", "timer_sys_ck", "timer_sys_clk_div"),
+ DT_CLK("48032000.timer", "timer_sys_ck", "timer_sys_clk_div"),
+ DT_CLK("48034000.timer", "timer_sys_ck", "timer_sys_clk_div"),
+ DT_CLK("48036000.timer", "timer_sys_ck", "timer_sys_clk_div"),
+ DT_CLK("4803e000.timer", "timer_sys_ck", "timer_sys_clk_div"),
+ DT_CLK("48086000.timer", "timer_sys_ck", "timer_sys_clk_div"),
+ DT_CLK("48088000.timer", "timer_sys_ck", "timer_sys_clk_div"),
DT_CLK("48820000.timer", "timer_sys_ck", "timer_sys_clk_div"),
DT_CLK("48822000.timer", "timer_sys_ck", "timer_sys_clk_div"),
DT_CLK("48824000.timer", "timer_sys_ck", "timer_sys_clk_div"),
DT_CLK("48826000.timer", "timer_sys_ck", "timer_sys_clk_div"),
+ DT_CLK("48828000.timer", "timer_sys_ck", "timer_sys_clk_div"),
+ DT_CLK("4882a000.timer", "timer_sys_ck", "timer_sys_clk_div"),
+ DT_CLK("4882c000.timer", "timer_sys_ck", "timer_sys_clk_div"),
+ DT_CLK("4882e000.timer", "timer_sys_ck", "timer_sys_clk_div"),
DT_CLK(NULL, "sys_clkin", "sys_clkin1"),
{ .node_name = NULL },
};
diff --git a/drivers/clk/ti/clk-dra7-atl.c b/drivers/clk/ti/clk-dra7-atl.c
index 59bb4b3..d86bc46 100644
--- a/drivers/clk/ti/clk-dra7-atl.c
+++ b/drivers/clk/ti/clk-dra7-atl.c
@@ -294,7 +294,7 @@
return 0;
}
-static struct of_device_id of_dra7_atl_clk_match_tbl[] = {
+static const struct of_device_id of_dra7_atl_clk_match_tbl[] = {
{ .compatible = "ti,dra7-atl", },
{},
};
diff --git a/drivers/clk/ti/clk.c b/drivers/clk/ti/clk.c
index e22b956..0ebe5c5 100644
--- a/drivers/clk/ti/clk.c
+++ b/drivers/clk/ti/clk.c
@@ -103,7 +103,8 @@
* @index: register index from the clock node
*
* Builds clock register address from device tree information. This
- * is a struct of type clk_omap_reg.
+ * is a struct of type clk_omap_reg. Returns a pointer to the register
+ * address, or an error pointer value on failure.
*/
void __iomem *ti_clk_get_reg_addr(struct device_node *node, int index)
{
@@ -121,14 +122,14 @@
if (i == CLK_MAX_MEMMAPS) {
pr_err("clk-provider not found for %s!\n", node->name);
- return NULL;
+ return ERR_PTR(-ENOENT);
}
reg->index = i;
if (of_property_read_u32_index(node, "reg", index, &val)) {
pr_err("%s must have reg[%d]!\n", node->name, index);
- return NULL;
+ return ERR_PTR(-EINVAL);
}
reg->offset = val;
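
With this change ti_clk_get_reg_addr() reports failure with an error pointer instead of NULL, which is why the call sites touched elsewhere in this series switch from NULL checks to IS_ERR(). A minimal caller-side sketch of that pattern (the function and message here are hypothetical, not part of the patch):

    /* Hypothetical caller; only the IS_ERR()/PTR_ERR() pattern is the point. */
    static void __init example_clk_setup(struct device_node *node)
    {
            void __iomem *reg;

            reg = ti_clk_get_reg_addr(node, 0);
            if (IS_ERR(reg)) {
                    pr_err("%s: no usable reg for %s (%ld)\n",
                           __func__, node->name, PTR_ERR(reg));
                    return;
            }

            /* ... go on to register the clock using "reg" ... */
    }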
diff --git a/drivers/clk/ti/clockdomain.c b/drivers/clk/ti/clockdomain.c
index b4c5fac..35fe108 100644
--- a/drivers/clk/ti/clockdomain.c
+++ b/drivers/clk/ti/clockdomain.c
@@ -52,7 +52,7 @@
}
}
-static struct of_device_id ti_clkdm_match_table[] __initdata = {
+static const struct of_device_id ti_clkdm_match_table[] __initconst = {
{ .compatible = "ti,clockdomain" },
{ }
};
diff --git a/drivers/clk/ti/composite.c b/drivers/clk/ti/composite.c
index 3654f61..96f83ce 100644
--- a/drivers/clk/ti/composite.c
+++ b/drivers/clk/ti/composite.c
@@ -69,7 +69,7 @@
struct list_head link;
};
-static const char * __initconst component_clk_types[] = {
+static const char * const component_clk_types[] __initconst = {
"gate", "divider", "mux"
};
diff --git a/drivers/clk/ti/divider.c b/drivers/clk/ti/divider.c
index 6211893..ff5f117 100644
--- a/drivers/clk/ti/divider.c
+++ b/drivers/clk/ti/divider.c
@@ -530,8 +530,8 @@
u32 val;
*reg = ti_clk_get_reg_addr(node, 0);
- if (!*reg)
- return -EINVAL;
+ if (IS_ERR(*reg))
+ return PTR_ERR(*reg);
if (!of_property_read_u32(node, "ti,bit-shift", &val))
*shift = val;
diff --git a/drivers/clk/ti/dpll.c b/drivers/clk/ti/dpll.c
index 81dc469..11478a5 100644
--- a/drivers/clk/ti/dpll.c
+++ b/drivers/clk/ti/dpll.c
@@ -390,18 +390,18 @@
#endif
} else {
dd->idlest_reg = ti_clk_get_reg_addr(node, 1);
- if (!dd->idlest_reg)
+ if (IS_ERR(dd->idlest_reg))
goto cleanup;
dd->mult_div1_reg = ti_clk_get_reg_addr(node, 2);
}
- if (!dd->control_reg || !dd->mult_div1_reg)
+ if (IS_ERR(dd->control_reg) || IS_ERR(dd->mult_div1_reg))
goto cleanup;
if (dd->autoidle_mask) {
dd->autoidle_reg = ti_clk_get_reg_addr(node, 3);
- if (!dd->autoidle_reg)
+ if (IS_ERR(dd->autoidle_reg))
goto cleanup;
}
diff --git a/drivers/clk/ti/fapll.c b/drivers/clk/ti/fapll.c
index d216406..ffcd8e0 100644
--- a/drivers/clk/ti/fapll.c
+++ b/drivers/clk/ti/fapll.c
@@ -11,19 +11,27 @@
#include <linux/clk-provider.h>
#include <linux/delay.h>
-#include <linux/slab.h>
#include <linux/err.h>
+#include <linux/math64.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/clk/ti.h>
-#include <asm/div64.h>
/* FAPLL Control Register PLL_CTRL */
+#define FAPLL_MAIN_MULT_N_SHIFT 16
+#define FAPLL_MAIN_DIV_P_SHIFT 8
#define FAPLL_MAIN_LOCK BIT(7)
#define FAPLL_MAIN_PLLEN BIT(3)
#define FAPLL_MAIN_BP BIT(2)
#define FAPLL_MAIN_LOC_CTL BIT(0)
+#define FAPLL_MAIN_MAX_MULT_N 0xffff
+#define FAPLL_MAIN_MAX_DIV_P 0xff
+#define FAPLL_MAIN_CLEAR_MASK \
+ ((FAPLL_MAIN_MAX_MULT_N << FAPLL_MAIN_MULT_N_SHIFT) | \
+ (FAPLL_MAIN_DIV_P_SHIFT << FAPLL_MAIN_DIV_P_SHIFT) | \
+ FAPLL_MAIN_LOC_CTL)
+
/* FAPLL powerdown register PWD */
#define FAPLL_PWD_OFFSET 4
@@ -49,6 +57,10 @@
/* Synthesizer frequency register */
#define SYNTH_LDFREQ BIT(31)
+#define SYNTH_PHASE_K 8
+#define SYNTH_MAX_INT_DIV 0xf
+#define SYNTH_MAX_DIV_M 0xff
+
struct fapll_data {
struct clk_hw hw;
void __iomem *base;
@@ -79,6 +91,48 @@
return !!(v & FAPLL_MAIN_BP);
}
+static void ti_fapll_set_bypass(struct fapll_data *fd)
+{
+ u32 v = readl_relaxed(fd->base);
+
+ if (fd->bypass_bit_inverted)
+ v &= ~FAPLL_MAIN_BP;
+ else
+ v |= FAPLL_MAIN_BP;
+ writel_relaxed(v, fd->base);
+}
+
+static void ti_fapll_clear_bypass(struct fapll_data *fd)
+{
+ u32 v = readl_relaxed(fd->base);
+
+ if (fd->bypass_bit_inverted)
+ v |= FAPLL_MAIN_BP;
+ else
+ v &= ~FAPLL_MAIN_BP;
+ writel_relaxed(v, fd->base);
+}
+
+static int ti_fapll_wait_lock(struct fapll_data *fd)
+{
+ int retries = FAPLL_MAX_RETRIES;
+ u32 v;
+
+ while ((v = readl_relaxed(fd->base))) {
+ if (v & FAPLL_MAIN_LOCK)
+ return 0;
+
+ if (retries-- <= 0)
+ break;
+
+ udelay(1);
+ }
+
+ pr_err("%s failed to lock\n", fd->name);
+
+ return -ETIMEDOUT;
+}
+
static int ti_fapll_enable(struct clk_hw *hw)
{
struct fapll_data *fd = to_fapll(hw);
@@ -86,6 +140,7 @@
v |= FAPLL_MAIN_PLLEN;
writel_relaxed(v, fd->base);
+ ti_fapll_wait_lock(fd);
return 0;
}
@@ -141,12 +196,85 @@
return 0;
}
+static int ti_fapll_set_div_mult(unsigned long rate,
+ unsigned long parent_rate,
+ u32 *pre_div_p, u32 *mult_n)
+{
+ /*
+ * So far no luck getting decent clock with PLL divider,
+ * PLL does not seem to lock and the signal does not look
+ * right. It seems the divider can only be used together
+ * with the multiplier?
+ */
+ if (rate < parent_rate) {
+ pr_warn("FAPLL main divider rates unsupported\n");
+ return -EINVAL;
+ }
+
+ *mult_n = rate / parent_rate;
+ if (*mult_n > FAPLL_MAIN_MAX_MULT_N)
+ return -EINVAL;
+ *pre_div_p = 1;
+
+ return 0;
+}
+
+static long ti_fapll_round_rate(struct clk_hw *hw, unsigned long rate,
+ unsigned long *parent_rate)
+{
+ u32 pre_div_p, mult_n;
+ int error;
+
+ if (!rate)
+ return -EINVAL;
+
+ error = ti_fapll_set_div_mult(rate, *parent_rate,
+ &pre_div_p, &mult_n);
+ if (error)
+ return error;
+
+ rate = *parent_rate / pre_div_p;
+ rate *= mult_n;
+
+ return rate;
+}
+
+static int ti_fapll_set_rate(struct clk_hw *hw, unsigned long rate,
+ unsigned long parent_rate)
+{
+ struct fapll_data *fd = to_fapll(hw);
+ u32 pre_div_p, mult_n, v;
+ int error;
+
+ if (!rate)
+ return -EINVAL;
+
+ error = ti_fapll_set_div_mult(rate, parent_rate,
+ &pre_div_p, &mult_n);
+ if (error)
+ return error;
+
+ ti_fapll_set_bypass(fd);
+ v = readl_relaxed(fd->base);
+ v &= ~FAPLL_MAIN_CLEAR_MASK;
+ v |= pre_div_p << FAPLL_MAIN_DIV_P_SHIFT;
+ v |= mult_n << FAPLL_MAIN_MULT_N_SHIFT;
+ writel_relaxed(v, fd->base);
+ if (ti_fapll_is_enabled(hw))
+ ti_fapll_wait_lock(fd);
+ ti_fapll_clear_bypass(fd);
+
+ return 0;
+}
+
static struct clk_ops ti_fapll_ops = {
.enable = ti_fapll_enable,
.disable = ti_fapll_disable,
.is_enabled = ti_fapll_is_enabled,
.recalc_rate = ti_fapll_recalc_rate,
.get_parent = ti_fapll_get_parent,
+ .round_rate = ti_fapll_round_rate,
+ .set_rate = ti_fapll_set_rate,
};
static int ti_fapll_synth_enable(struct clk_hw *hw)
@@ -204,7 +332,7 @@
/*
* Synth frequency integer and fractional divider.
* Note that the phase output K is 8, so the result needs
- * to be multiplied by 8.
+ * to be multiplied by SYNTH_PHASE_K.
*/
if (synth->freq) {
u32 v, synth_int_div, synth_frac_div, synth_div_freq;
@@ -215,14 +343,138 @@
synth_div_freq = (synth_int_div * 10000000) + synth_frac_div;
rate *= 10000000;
do_div(rate, synth_div_freq);
- rate *= 8;
+ rate *= SYNTH_PHASE_K;
}
- /* Synth ost-divider M */
- synth_div_m = readl_relaxed(synth->div) & 0xff;
- do_div(rate, synth_div_m);
+ /* Synth post-divider M */
+ synth_div_m = readl_relaxed(synth->div) & SYNTH_MAX_DIV_M;
- return rate;
+ return DIV_ROUND_UP_ULL(rate, synth_div_m);
+}
+
+static unsigned long ti_fapll_synth_get_frac_rate(struct clk_hw *hw,
+ unsigned long parent_rate)
+{
+ struct fapll_synth *synth = to_synth(hw);
+ unsigned long current_rate, frac_rate;
+ u32 post_div_m;
+
+ current_rate = ti_fapll_synth_recalc_rate(hw, parent_rate);
+ post_div_m = readl_relaxed(synth->div) & SYNTH_MAX_DIV_M;
+ frac_rate = current_rate * post_div_m;
+
+ return frac_rate;
+}
+
+static u32 ti_fapll_synth_set_frac_rate(struct fapll_synth *synth,
+ unsigned long rate,
+ unsigned long parent_rate)
+{
+ u32 post_div_m, synth_int_div = 0, synth_frac_div = 0, v;
+
+ post_div_m = DIV_ROUND_UP_ULL((u64)parent_rate * SYNTH_PHASE_K, rate);
+ post_div_m = post_div_m / SYNTH_MAX_INT_DIV;
+ if (post_div_m > SYNTH_MAX_DIV_M)
+ return -EINVAL;
+ if (!post_div_m)
+ post_div_m = 1;
+
+ for (; post_div_m < SYNTH_MAX_DIV_M; post_div_m++) {
+ synth_int_div = DIV_ROUND_UP_ULL((u64)parent_rate *
+ SYNTH_PHASE_K *
+ 10000000,
+ rate * post_div_m);
+ synth_frac_div = synth_int_div % 10000000;
+ synth_int_div /= 10000000;
+
+ if (synth_int_div <= SYNTH_MAX_INT_DIV)
+ break;
+ }
+
+ if (synth_int_div > SYNTH_MAX_INT_DIV)
+ return -EINVAL;
+
+ v = readl_relaxed(synth->freq);
+ v &= ~0x1fffffff;
+ v |= (synth_int_div & SYNTH_MAX_INT_DIV) << 24;
+ v |= (synth_frac_div & 0xffffff);
+ v |= SYNTH_LDFREQ;
+ writel_relaxed(v, synth->freq);
+
+ return post_div_m;
+}
+
+static long ti_fapll_synth_round_rate(struct clk_hw *hw, unsigned long rate,
+ unsigned long *parent_rate)
+{
+ struct fapll_synth *synth = to_synth(hw);
+ struct fapll_data *fd = synth->fd;
+ unsigned long r;
+
+ if (ti_fapll_clock_is_bypass(fd) || !synth->div || !rate)
+ return -EINVAL;
+
+ /* Only post divider m available with no fractional divider? */
+ if (!synth->freq) {
+ unsigned long frac_rate;
+ u32 synth_post_div_m;
+
+ frac_rate = ti_fapll_synth_get_frac_rate(hw, *parent_rate);
+ synth_post_div_m = DIV_ROUND_UP(frac_rate, rate);
+ r = DIV_ROUND_UP(frac_rate, synth_post_div_m);
+ goto out;
+ }
+
+ r = *parent_rate * SYNTH_PHASE_K;
+ if (rate > r)
+ goto out;
+
+ r = DIV_ROUND_UP_ULL(r, SYNTH_MAX_INT_DIV * SYNTH_MAX_DIV_M);
+ if (rate < r)
+ goto out;
+
+ r = rate;
+out:
+ return r;
+}
+
+static int ti_fapll_synth_set_rate(struct clk_hw *hw, unsigned long rate,
+ unsigned long parent_rate)
+{
+ struct fapll_synth *synth = to_synth(hw);
+ struct fapll_data *fd = synth->fd;
+ unsigned long frac_rate, post_rate = 0;
+ u32 post_div_m = 0, v;
+
+ if (ti_fapll_clock_is_bypass(fd) || !synth->div || !rate)
+ return -EINVAL;
+
+ /* Produce the rate with just post divider M? */
+ frac_rate = ti_fapll_synth_get_frac_rate(hw, parent_rate);
+ if (frac_rate < rate) {
+ if (!synth->freq)
+ return -EINVAL;
+ } else {
+ post_div_m = DIV_ROUND_UP(frac_rate, rate);
+ if (post_div_m && (post_div_m <= SYNTH_MAX_DIV_M))
+ post_rate = DIV_ROUND_UP(frac_rate, post_div_m);
+ if (!synth->freq && !post_rate)
+ return -EINVAL;
+ }
+
+ /* Need to recalculate the fractional divider? */
+ if ((post_rate != rate) && synth->freq)
+ post_div_m = ti_fapll_synth_set_frac_rate(synth,
+ rate,
+ parent_rate);
+
+ v = readl_relaxed(synth->div);
+ v &= ~SYNTH_MAX_DIV_M;
+ v |= post_div_m;
+ v |= SYNTH_LDMDIV1;
+ writel_relaxed(v, synth->div);
+
+ return 0;
}
static struct clk_ops ti_fapll_synt_ops = {
@@ -230,6 +482,8 @@
.disable = ti_fapll_synth_disable,
.is_enabled = ti_fapll_synth_is_enabled,
.recalc_rate = ti_fapll_synth_recalc_rate,
+ .round_rate = ti_fapll_synth_round_rate,
+ .set_rate = ti_fapll_synth_set_rate,
};
static struct clk * __init ti_fapll_synth_setup(struct fapll_data *fd,
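
The synthesizer rate computed in ti_fapll_synth_recalc_rate() above reduces to parent * SYNTH_PHASE_K / (int + frac / 10^7) / M. A worked sketch of that arithmetic with made-up register values (not taken from any real board):

    /* Worked example of the synth rate formula; values are illustrative only. */
    static unsigned long long example_synth_rate(void)
    {
            unsigned long long parent = 1920000000ULL;          /* FAPLL rate */
            unsigned long long int_div = 4, frac_div = 8000000; /* divider 4.8 */
            unsigned long long post_div_m = 2;
            unsigned long long rate;

            rate = parent * 10000000ULL;
            rate /= int_div * 10000000ULL + frac_div;    /* 400000000 */
            rate *= 8;                                   /* SYNTH_PHASE_K */
            return (rate + post_div_m - 1) / post_div_m; /* 1600000000 */
    }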
diff --git a/drivers/clk/ti/gate.c b/drivers/clk/ti/gate.c
index d493307..0c6fdfc 100644
--- a/drivers/clk/ti/gate.c
+++ b/drivers/clk/ti/gate.c
@@ -225,7 +225,7 @@
if (ops != &omap_gate_clkdm_clk_ops) {
reg = ti_clk_get_reg_addr(node, 0);
- if (!reg)
+ if (IS_ERR(reg))
return;
if (!of_property_read_u32(node, "ti,bit-shift", &val))
@@ -264,7 +264,7 @@
return;
gate->enable_reg = ti_clk_get_reg_addr(node, 0);
- if (!gate->enable_reg)
+ if (IS_ERR(gate->enable_reg))
goto cleanup;
of_property_read_u32(node, "ti,bit-shift", &val);
diff --git a/drivers/clk/ti/interface.c b/drivers/clk/ti/interface.c
index 265d91f..c76230d 100644
--- a/drivers/clk/ti/interface.c
+++ b/drivers/clk/ti/interface.c
@@ -111,7 +111,7 @@
u32 val;
reg = ti_clk_get_reg_addr(node, 0);
- if (!reg)
+ if (IS_ERR(reg))
return;
if (!of_property_read_u32(node, "ti,bit-shift", &val))
diff --git a/drivers/clk/ti/mux.c b/drivers/clk/ti/mux.c
index 728e253..5cdeed5 100644
--- a/drivers/clk/ti/mux.c
+++ b/drivers/clk/ti/mux.c
@@ -210,7 +210,7 @@
reg = ti_clk_get_reg_addr(node, 0);
- if (!reg)
+ if (IS_ERR(reg))
goto cleanup;
of_property_read_u32(node, "ti,bit-shift", &shift);
@@ -283,7 +283,7 @@
mux->reg = ti_clk_get_reg_addr(node, 0);
- if (!mux->reg)
+ if (IS_ERR(mux->reg))
goto cleanup;
if (!of_property_read_u32(node, "ti,bit-shift", &val))
diff --git a/drivers/clk/versatile/clk-versatile.c b/drivers/clk/versatile/clk-versatile.c
index a76981e..7a4f863 100644
--- a/drivers/clk/versatile/clk-versatile.c
+++ b/drivers/clk/versatile/clk-versatile.c
@@ -69,7 +69,7 @@
struct device_node *parent;
parent = of_get_parent(np);
- if (!np) {
+ if (!parent) {
pr_err("no parent on core module clock\n");
return;
}
diff --git a/drivers/clk/versatile/clk-vexpress-osc.c b/drivers/clk/versatile/clk-vexpress-osc.c
index 765f1e0..89c0609 100644
--- a/drivers/clk/versatile/clk-vexpress-osc.c
+++ b/drivers/clk/versatile/clk-vexpress-osc.c
@@ -110,7 +110,7 @@
return 0;
}
-static struct of_device_id vexpress_osc_of_match[] = {
+static const struct of_device_id vexpress_osc_of_match[] = {
{ .compatible = "arm,vexpress-osc", },
{}
};
diff --git a/drivers/clk/zynq/clkc.c b/drivers/clk/zynq/clkc.c
index f870aad..40cb113 100644
--- a/drivers/clk/zynq/clkc.c
+++ b/drivers/clk/zynq/clkc.c
@@ -85,22 +85,22 @@
static DEFINE_SPINLOCK(dbgclk_lock);
static DEFINE_SPINLOCK(aperclk_lock);
-static const char *armpll_parents[] __initconst = {"armpll_int", "ps_clk"};
-static const char *ddrpll_parents[] __initconst = {"ddrpll_int", "ps_clk"};
-static const char *iopll_parents[] __initconst = {"iopll_int", "ps_clk"};
-static const char *gem0_mux_parents[] __initconst = {"gem0_div1", "dummy_name"};
-static const char *gem1_mux_parents[] __initconst = {"gem1_div1", "dummy_name"};
-static const char *can0_mio_mux2_parents[] __initconst = {"can0_gate",
+static const char *armpll_parents[] __initdata = {"armpll_int", "ps_clk"};
+static const char *ddrpll_parents[] __initdata = {"ddrpll_int", "ps_clk"};
+static const char *iopll_parents[] __initdata = {"iopll_int", "ps_clk"};
+static const char *gem0_mux_parents[] __initdata = {"gem0_div1", "dummy_name"};
+static const char *gem1_mux_parents[] __initdata = {"gem1_div1", "dummy_name"};
+static const char *can0_mio_mux2_parents[] __initdata = {"can0_gate",
"can0_mio_mux"};
-static const char *can1_mio_mux2_parents[] __initconst = {"can1_gate",
+static const char *can1_mio_mux2_parents[] __initdata = {"can1_gate",
"can1_mio_mux"};
-static const char *dbg_emio_mux_parents[] __initconst = {"dbg_div",
+static const char *dbg_emio_mux_parents[] __initdata = {"dbg_div",
"dummy_name"};
-static const char *dbgtrc_emio_input_names[] __initconst = {"trace_emio_clk"};
-static const char *gem0_emio_input_names[] __initconst = {"gem0_emio_clk"};
-static const char *gem1_emio_input_names[] __initconst = {"gem1_emio_clk"};
-static const char *swdt_ext_clk_input_names[] __initconst = {"swdt_ext_clk"};
+static const char *dbgtrc_emio_input_names[] __initdata = {"trace_emio_clk"};
+static const char *gem0_emio_input_names[] __initdata = {"gem0_emio_clk"};
+static const char *gem1_emio_input_names[] __initdata = {"gem1_emio_clk"};
+static const char *swdt_ext_clk_input_names[] __initdata = {"swdt_ext_clk"};
static void __init zynq_clk_register_fclk(enum zynq_clk fclk,
const char *clk_name, void __iomem *fclk_ctrl_reg,
diff --git a/drivers/clocksource/dw_apb_timer.c b/drivers/clocksource/dw_apb_timer.c
index f3656a6..35a8809 100644
--- a/drivers/clocksource/dw_apb_timer.c
+++ b/drivers/clocksource/dw_apb_timer.c
@@ -117,7 +117,8 @@
unsigned long period;
struct dw_apb_clock_event_device *dw_ced = ced_to_dw_apb_ced(evt);
- pr_debug("%s CPU %d mode=%d\n", __func__, first_cpu(*evt->cpumask),
+ pr_debug("%s CPU %d mode=%d\n", __func__,
+ cpumask_first(evt->cpumask),
mode);
switch (mode) {
diff --git a/drivers/cpuidle/coupled.c b/drivers/cpuidle/coupled.c
index 73fe2f8..7936dce 100644
--- a/drivers/cpuidle/coupled.c
+++ b/drivers/cpuidle/coupled.c
@@ -292,7 +292,7 @@
*/
smp_rmb();
- for_each_cpu_mask(i, coupled->coupled_cpus)
+ for_each_cpu(i, &coupled->coupled_cpus)
if (cpu_online(i) && coupled->requested_state[i] < state)
state = coupled->requested_state[i];
@@ -338,7 +338,7 @@
{
int cpu;
- for_each_cpu_mask(cpu, coupled->coupled_cpus)
+ for_each_cpu(cpu, &coupled->coupled_cpus)
if (cpu != this_cpu && cpu_online(cpu))
cpuidle_coupled_poke(cpu);
}
@@ -638,7 +638,7 @@
if (cpumask_empty(&dev->coupled_cpus))
return 0;
- for_each_cpu_mask(cpu, dev->coupled_cpus) {
+ for_each_cpu(cpu, &dev->coupled_cpus) {
other_dev = per_cpu(cpuidle_devices, cpu);
if (other_dev && other_dev->coupled) {
coupled = other_dev->coupled;
diff --git a/drivers/crypto/n2_core.c b/drivers/crypto/n2_core.c
index afd136b..10a9aef 100644
--- a/drivers/crypto/n2_core.c
+++ b/drivers/crypto/n2_core.c
@@ -1754,7 +1754,7 @@
dev->dev.of_node->full_name);
return -EINVAL;
}
- cpu_set(*id, p->sharing);
+ cpumask_set_cpu(*id, &p->sharing);
table[*id] = p;
}
return 0;
@@ -1776,7 +1776,7 @@
return -ENOMEM;
}
- cpus_clear(p->sharing);
+ cpumask_clear(&p->sharing);
spin_lock_init(&p->lock);
p->q_type = q_type;
INIT_LIST_HEAD(&p->jobs);
diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
index 942ca54..91eced0 100644
--- a/drivers/dma/Kconfig
+++ b/drivers/dma/Kconfig
@@ -112,6 +112,8 @@
EloPlus is on mpc85xx and mpc86xx and Pxxx parts, and the Elo3 is on
some Txxx and Bxxx parts.
+source "drivers/dma/hsu/Kconfig"
+
config MPC512X_DMA
tristate "Freescale MPC512x built-in DMA engine support"
depends on PPC_MPC512x || PPC_MPC831x
diff --git a/drivers/dma/Makefile b/drivers/dma/Makefile
index 539d482..7e8301c 100644
--- a/drivers/dma/Makefile
+++ b/drivers/dma/Makefile
@@ -10,6 +10,7 @@
obj-$(CONFIG_INTEL_IOATDMA) += ioat/
obj-$(CONFIG_INTEL_IOP_ADMA) += iop-adma.o
obj-$(CONFIG_FSL_DMA) += fsldma.o
+obj-$(CONFIG_HSU_DMA) += hsu/
obj-$(CONFIG_MPC512X_DMA) += mpc512x_dma.o
obj-$(CONFIG_PPC_BESTCOMM) += bestcomm/
obj-$(CONFIG_MV_XOR) += mv_xor.o
diff --git a/drivers/dma/hsu/Kconfig b/drivers/dma/hsu/Kconfig
new file mode 100644
index 0000000..7e98eff
--- /dev/null
+++ b/drivers/dma/hsu/Kconfig
@@ -0,0 +1,14 @@
+# DMA engine configuration for hsu
+config HSU_DMA
+ tristate "High Speed UART DMA support"
+ select DMA_ENGINE
+ select DMA_VIRTUAL_CHANNELS
+
+config HSU_DMA_PCI
+ tristate "High Speed UART DMA PCI driver"
+ depends on PCI
+ select HSU_DMA
+ help
+ Support the High Speed UART DMA on the platforms that
+ enumerate it as a PCI device. For example, Intel Medfield
+ has integrated this HSU DMA controller.
diff --git a/drivers/dma/hsu/Makefile b/drivers/dma/hsu/Makefile
new file mode 100644
index 0000000..b8f9af0
--- /dev/null
+++ b/drivers/dma/hsu/Makefile
@@ -0,0 +1,5 @@
+obj-$(CONFIG_HSU_DMA) += hsu_dma.o
+hsu_dma-objs := hsu.o
+
+obj-$(CONFIG_HSU_DMA_PCI) += hsu_dma_pci.o
+hsu_dma_pci-objs := pci.o
diff --git a/drivers/dma/hsu/hsu.c b/drivers/dma/hsu/hsu.c
new file mode 100644
index 0000000..9b84def
--- /dev/null
+++ b/drivers/dma/hsu/hsu.c
@@ -0,0 +1,495 @@
+/*
+ * Core driver for the High Speed UART DMA
+ *
+ * Copyright (C) 2015 Intel Corporation
+ * Author: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
+ *
+ * Partially based on the bits found in drivers/tty/serial/mfd.c.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+/*
+ * DMA channel allocation:
+ * 1. Even number chans are used for DMA Read (UART TX), odd chans for DMA
+ * Write (UART RX).
+ * 2. 0/1 channel are assigned to port 0, 2/3 chan to port 1, 4/5 chan to
+ * port 2, and so on.
+ */
+
+#include <linux/delay.h>
+#include <linux/dmaengine.h>
+#include <linux/dma-mapping.h>
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+
+#include "hsu.h"
+
+#define HSU_DMA_BUSWIDTHS \
+ BIT(DMA_SLAVE_BUSWIDTH_UNDEFINED) | \
+ BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) | \
+ BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) | \
+ BIT(DMA_SLAVE_BUSWIDTH_3_BYTES) | \
+ BIT(DMA_SLAVE_BUSWIDTH_4_BYTES) | \
+ BIT(DMA_SLAVE_BUSWIDTH_8_BYTES) | \
+ BIT(DMA_SLAVE_BUSWIDTH_16_BYTES)
+
+static inline void hsu_chan_disable(struct hsu_dma_chan *hsuc)
+{
+ hsu_chan_writel(hsuc, HSU_CH_CR, 0);
+}
+
+static inline void hsu_chan_enable(struct hsu_dma_chan *hsuc)
+{
+ u32 cr = HSU_CH_CR_CHA;
+
+ if (hsuc->direction == DMA_MEM_TO_DEV)
+ cr &= ~HSU_CH_CR_CHD;
+ else if (hsuc->direction == DMA_DEV_TO_MEM)
+ cr |= HSU_CH_CR_CHD;
+
+ hsu_chan_writel(hsuc, HSU_CH_CR, cr);
+}
+
+static void hsu_dma_chan_start(struct hsu_dma_chan *hsuc)
+{
+ struct dma_slave_config *config = &hsuc->config;
+ struct hsu_dma_desc *desc = hsuc->desc;
+ u32 bsr = 0, mtsr = 0; /* to shut the compiler up */
+ u32 dcr = HSU_CH_DCR_CHSOE | HSU_CH_DCR_CHEI;
+ unsigned int i, count;
+
+ if (hsuc->direction == DMA_MEM_TO_DEV) {
+ bsr = config->dst_maxburst;
+ mtsr = config->dst_addr_width;
+ } else if (hsuc->direction == DMA_DEV_TO_MEM) {
+ bsr = config->src_maxburst;
+ mtsr = config->src_addr_width;
+ }
+
+ hsu_chan_disable(hsuc);
+
+ hsu_chan_writel(hsuc, HSU_CH_DCR, 0);
+ hsu_chan_writel(hsuc, HSU_CH_BSR, bsr);
+ hsu_chan_writel(hsuc, HSU_CH_MTSR, mtsr);
+
+ /* Set descriptors */
+ count = (desc->nents - desc->active) % HSU_DMA_CHAN_NR_DESC;
+ for (i = 0; i < count; i++) {
+ hsu_chan_writel(hsuc, HSU_CH_DxSAR(i), desc->sg[i].addr);
+ hsu_chan_writel(hsuc, HSU_CH_DxTSR(i), desc->sg[i].len);
+
+ /* Prepare value for DCR */
+ dcr |= HSU_CH_DCR_DESCA(i);
+ dcr |= HSU_CH_DCR_CHTOI(i); /* timeout bit, see HSU Errata 1 */
+
+ desc->active++;
+ }
+ /* Only for the last descriptor in the chain */
+ dcr |= HSU_CH_DCR_CHSOD(count - 1);
+ dcr |= HSU_CH_DCR_CHDI(count - 1);
+
+ hsu_chan_writel(hsuc, HSU_CH_DCR, dcr);
+
+ hsu_chan_enable(hsuc);
+}
+
+static void hsu_dma_stop_channel(struct hsu_dma_chan *hsuc)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&hsuc->lock, flags);
+ hsu_chan_disable(hsuc);
+ hsu_chan_writel(hsuc, HSU_CH_DCR, 0);
+ spin_unlock_irqrestore(&hsuc->lock, flags);
+}
+
+static void hsu_dma_start_channel(struct hsu_dma_chan *hsuc)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&hsuc->lock, flags);
+ hsu_dma_chan_start(hsuc);
+ spin_unlock_irqrestore(&hsuc->lock, flags);
+}
+
+static void hsu_dma_start_transfer(struct hsu_dma_chan *hsuc)
+{
+ struct virt_dma_desc *vdesc;
+
+ /* Get the next descriptor */
+ vdesc = vchan_next_desc(&hsuc->vchan);
+ if (!vdesc) {
+ hsuc->desc = NULL;
+ return;
+ }
+
+ list_del(&vdesc->node);
+ hsuc->desc = to_hsu_dma_desc(vdesc);
+
+ /* Start the channel with a new descriptor */
+ hsu_dma_start_channel(hsuc);
+}
+
+static u32 hsu_dma_chan_get_sr(struct hsu_dma_chan *hsuc)
+{
+ unsigned long flags;
+ u32 sr;
+
+ spin_lock_irqsave(&hsuc->lock, flags);
+ sr = hsu_chan_readl(hsuc, HSU_CH_SR);
+ spin_unlock_irqrestore(&hsuc->lock, flags);
+
+ return sr;
+}
+
+irqreturn_t hsu_dma_irq(struct hsu_dma_chip *chip, unsigned short nr)
+{
+ struct hsu_dma_chan *hsuc;
+ struct hsu_dma_desc *desc;
+ unsigned long flags;
+ u32 sr;
+
+ /* Sanity check */
+ if (nr >= chip->pdata->nr_channels)
+ return IRQ_NONE;
+
+ hsuc = &chip->hsu->chan[nr];
+
+ /*
+ * No matter the situation, the IRQ status needs to be read to clear it;
+ * there is a bug, see Errata 5, HSD 2900918
+ */
+ sr = hsu_dma_chan_get_sr(hsuc);
+ if (!sr)
+ return IRQ_NONE;
+
+ /* Timeout IRQ, need to wait a while, see Errata 2 */
+ if (hsuc->direction == DMA_DEV_TO_MEM && (sr & HSU_CH_SR_DESCTO_ANY))
+ udelay(2);
+
+ sr &= ~HSU_CH_SR_DESCTO_ANY;
+ if (!sr)
+ return IRQ_HANDLED;
+
+ spin_lock_irqsave(&hsuc->vchan.lock, flags);
+ desc = hsuc->desc;
+ if (desc) {
+ if (sr & HSU_CH_SR_CHE) {
+ desc->status = DMA_ERROR;
+ } else if (desc->active < desc->nents) {
+ hsu_dma_start_channel(hsuc);
+ } else {
+ vchan_cookie_complete(&desc->vdesc);
+ desc->status = DMA_COMPLETE;
+ hsu_dma_start_transfer(hsuc);
+ }
+ }
+ spin_unlock_irqrestore(&hsuc->vchan.lock, flags);
+
+ return IRQ_HANDLED;
+}
+EXPORT_SYMBOL_GPL(hsu_dma_irq);
+
+static struct hsu_dma_desc *hsu_dma_alloc_desc(unsigned int nents)
+{
+ struct hsu_dma_desc *desc;
+
+ desc = kzalloc(sizeof(*desc), GFP_NOWAIT);
+ if (!desc)
+ return NULL;
+
+ desc->sg = kcalloc(nents, sizeof(*desc->sg), GFP_NOWAIT);
+ if (!desc->sg) {
+ kfree(desc);
+ return NULL;
+ }
+
+ return desc;
+}
+
+static void hsu_dma_desc_free(struct virt_dma_desc *vdesc)
+{
+ struct hsu_dma_desc *desc = to_hsu_dma_desc(vdesc);
+
+ kfree(desc->sg);
+ kfree(desc);
+}
+
+static struct dma_async_tx_descriptor *hsu_dma_prep_slave_sg(
+ struct dma_chan *chan, struct scatterlist *sgl,
+ unsigned int sg_len, enum dma_transfer_direction direction,
+ unsigned long flags, void *context)
+{
+ struct hsu_dma_chan *hsuc = to_hsu_dma_chan(chan);
+ struct hsu_dma_desc *desc;
+ struct scatterlist *sg;
+ unsigned int i;
+
+ desc = hsu_dma_alloc_desc(sg_len);
+ if (!desc)
+ return NULL;
+
+ for_each_sg(sgl, sg, sg_len, i) {
+ desc->sg[i].addr = sg_dma_address(sg);
+ desc->sg[i].len = sg_dma_len(sg);
+ }
+
+ desc->nents = sg_len;
+ desc->direction = direction;
+ /* desc->active = 0 by kzalloc */
+ desc->status = DMA_IN_PROGRESS;
+
+ return vchan_tx_prep(&hsuc->vchan, &desc->vdesc, flags);
+}
+
+static void hsu_dma_issue_pending(struct dma_chan *chan)
+{
+ struct hsu_dma_chan *hsuc = to_hsu_dma_chan(chan);
+ unsigned long flags;
+
+ spin_lock_irqsave(&hsuc->vchan.lock, flags);
+ if (vchan_issue_pending(&hsuc->vchan) && !hsuc->desc)
+ hsu_dma_start_transfer(hsuc);
+ spin_unlock_irqrestore(&hsuc->vchan.lock, flags);
+}
+
+static size_t hsu_dma_desc_size(struct hsu_dma_desc *desc)
+{
+ size_t bytes = 0;
+ unsigned int i;
+
+ for (i = desc->active; i < desc->nents; i++)
+ bytes += desc->sg[i].len;
+
+ return bytes;
+}
+
+static size_t hsu_dma_active_desc_size(struct hsu_dma_chan *hsuc)
+{
+ struct hsu_dma_desc *desc = hsuc->desc;
+ size_t bytes = hsu_dma_desc_size(desc);
+ int i;
+ unsigned long flags;
+
+ spin_lock_irqsave(&hsuc->lock, flags);
+ i = desc->active % HSU_DMA_CHAN_NR_DESC;
+ do {
+ bytes += hsu_chan_readl(hsuc, HSU_CH_DxTSR(i));
+ } while (--i >= 0);
+ spin_unlock_irqrestore(&hsuc->lock, flags);
+
+ return bytes;
+}
+
+static enum dma_status hsu_dma_tx_status(struct dma_chan *chan,
+ dma_cookie_t cookie, struct dma_tx_state *state)
+{
+ struct hsu_dma_chan *hsuc = to_hsu_dma_chan(chan);
+ struct virt_dma_desc *vdesc;
+ enum dma_status status;
+ size_t bytes;
+ unsigned long flags;
+
+ status = dma_cookie_status(chan, cookie, state);
+ if (status == DMA_COMPLETE)
+ return status;
+
+ spin_lock_irqsave(&hsuc->vchan.lock, flags);
+ vdesc = vchan_find_desc(&hsuc->vchan, cookie);
+ if (hsuc->desc && cookie == hsuc->desc->vdesc.tx.cookie) {
+ bytes = hsu_dma_active_desc_size(hsuc);
+ dma_set_residue(state, bytes);
+ status = hsuc->desc->status;
+ } else if (vdesc) {
+ bytes = hsu_dma_desc_size(to_hsu_dma_desc(vdesc));
+ dma_set_residue(state, bytes);
+ }
+ spin_unlock_irqrestore(&hsuc->vchan.lock, flags);
+
+ return status;
+}
+
+static int hsu_dma_slave_config(struct dma_chan *chan,
+ struct dma_slave_config *config)
+{
+ struct hsu_dma_chan *hsuc = to_hsu_dma_chan(chan);
+
+ /* Check if chan will be configured for slave transfers */
+ if (!is_slave_direction(config->direction))
+ return -EINVAL;
+
+ memcpy(&hsuc->config, config, sizeof(hsuc->config));
+
+ return 0;
+}
+
+static void hsu_dma_chan_deactivate(struct hsu_dma_chan *hsuc)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&hsuc->lock, flags);
+ hsu_chan_disable(hsuc);
+ spin_unlock_irqrestore(&hsuc->lock, flags);
+}
+
+static void hsu_dma_chan_activate(struct hsu_dma_chan *hsuc)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&hsuc->lock, flags);
+ hsu_chan_enable(hsuc);
+ spin_unlock_irqrestore(&hsuc->lock, flags);
+}
+
+static int hsu_dma_pause(struct dma_chan *chan)
+{
+ struct hsu_dma_chan *hsuc = to_hsu_dma_chan(chan);
+ unsigned long flags;
+
+ spin_lock_irqsave(&hsuc->vchan.lock, flags);
+ if (hsuc->desc && hsuc->desc->status == DMA_IN_PROGRESS) {
+ hsu_dma_chan_deactivate(hsuc);
+ hsuc->desc->status = DMA_PAUSED;
+ }
+ spin_unlock_irqrestore(&hsuc->vchan.lock, flags);
+
+ return 0;
+}
+
+static int hsu_dma_resume(struct dma_chan *chan)
+{
+ struct hsu_dma_chan *hsuc = to_hsu_dma_chan(chan);
+ unsigned long flags;
+
+ spin_lock_irqsave(&hsuc->vchan.lock, flags);
+ if (hsuc->desc && hsuc->desc->status == DMA_PAUSED) {
+ hsuc->desc->status = DMA_IN_PROGRESS;
+ hsu_dma_chan_activate(hsuc);
+ }
+ spin_unlock_irqrestore(&hsuc->vchan.lock, flags);
+
+ return 0;
+}
+
+static int hsu_dma_terminate_all(struct dma_chan *chan)
+{
+ struct hsu_dma_chan *hsuc = to_hsu_dma_chan(chan);
+ unsigned long flags;
+ LIST_HEAD(head);
+
+ spin_lock_irqsave(&hsuc->vchan.lock, flags);
+
+ hsu_dma_stop_channel(hsuc);
+ hsuc->desc = NULL;
+
+ vchan_get_all_descriptors(&hsuc->vchan, &head);
+ spin_unlock_irqrestore(&hsuc->vchan.lock, flags);
+ vchan_dma_desc_free_list(&hsuc->vchan, &head);
+
+ return 0;
+}
+
+static void hsu_dma_free_chan_resources(struct dma_chan *chan)
+{
+ vchan_free_chan_resources(to_virt_chan(chan));
+}
+
+int hsu_dma_probe(struct hsu_dma_chip *chip)
+{
+ struct hsu_dma *hsu;
+ struct hsu_dma_platform_data *pdata = chip->pdata;
+ void __iomem *addr = chip->regs + chip->offset;
+ unsigned short i;
+ int ret;
+
+ hsu = devm_kzalloc(chip->dev, sizeof(*hsu), GFP_KERNEL);
+ if (!hsu)
+ return -ENOMEM;
+
+ chip->hsu = hsu;
+
+ if (!pdata) {
+ pdata = devm_kzalloc(chip->dev, sizeof(*pdata), GFP_KERNEL);
+ if (!pdata)
+ return -ENOMEM;
+
+ chip->pdata = pdata;
+
+ /* Guess nr_channels from the IO space length */
+ pdata->nr_channels = (chip->length - chip->offset) /
+ HSU_DMA_CHAN_LENGTH;
+ }
+
+ hsu->chan = devm_kcalloc(chip->dev, pdata->nr_channels,
+ sizeof(*hsu->chan), GFP_KERNEL);
+ if (!hsu->chan)
+ return -ENOMEM;
+
+ INIT_LIST_HEAD(&hsu->dma.channels);
+ for (i = 0; i < pdata->nr_channels; i++) {
+ struct hsu_dma_chan *hsuc = &hsu->chan[i];
+
+ hsuc->vchan.desc_free = hsu_dma_desc_free;
+ vchan_init(&hsuc->vchan, &hsu->dma);
+
+ hsuc->direction = (i & 0x1) ? DMA_DEV_TO_MEM : DMA_MEM_TO_DEV;
+ hsuc->reg = addr + i * HSU_DMA_CHAN_LENGTH;
+
+ spin_lock_init(&hsuc->lock);
+ }
+
+ dma_cap_set(DMA_SLAVE, hsu->dma.cap_mask);
+ dma_cap_set(DMA_PRIVATE, hsu->dma.cap_mask);
+
+ hsu->dma.device_free_chan_resources = hsu_dma_free_chan_resources;
+
+ hsu->dma.device_prep_slave_sg = hsu_dma_prep_slave_sg;
+
+ hsu->dma.device_issue_pending = hsu_dma_issue_pending;
+ hsu->dma.device_tx_status = hsu_dma_tx_status;
+
+ hsu->dma.device_config = hsu_dma_slave_config;
+ hsu->dma.device_pause = hsu_dma_pause;
+ hsu->dma.device_resume = hsu_dma_resume;
+ hsu->dma.device_terminate_all = hsu_dma_terminate_all;
+
+ hsu->dma.src_addr_widths = HSU_DMA_BUSWIDTHS;
+ hsu->dma.dst_addr_widths = HSU_DMA_BUSWIDTHS;
+ hsu->dma.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
+ hsu->dma.residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
+
+ hsu->dma.dev = chip->dev;
+
+ ret = dma_async_device_register(&hsu->dma);
+ if (ret)
+ return ret;
+
+ dev_info(chip->dev, "Found HSU DMA, %d channels\n", pdata->nr_channels);
+ return 0;
+}
+EXPORT_SYMBOL_GPL(hsu_dma_probe);
+
+int hsu_dma_remove(struct hsu_dma_chip *chip)
+{
+ struct hsu_dma *hsu = chip->hsu;
+ unsigned short i;
+
+ dma_async_device_unregister(&hsu->dma);
+
+ for (i = 0; i < chip->pdata->nr_channels; i++) {
+ struct hsu_dma_chan *hsuc = &hsu->chan[i];
+
+ tasklet_kill(&hsuc->vchan.task);
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(hsu_dma_remove);
+
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("High Speed UART DMA core driver");
+MODULE_AUTHOR("Andy Shevchenko <andriy.shevchenko@linux.intel.com>");
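
The new core driver plugs into the generic dmaengine slave framework, so a serial driver consumes it through the stock dmaengine API rather than anything HSU-specific (even channels carry UART TX/DMA read, odd channels UART RX/DMA write, two per port). A rough consumer-side sketch for one RX transfer; the dmaengine calls are the standard API, while the filter, scatterlist and callback names are hypothetical:

    /* Hypothetical RX setup fragment; my_filter, uart_rx_fifo_phys, rx_sg,
     * rx_nents and rx_done are not part of this patch. */
    struct dma_slave_config cfg = {
            .direction      = DMA_DEV_TO_MEM,
            .src_addr       = uart_rx_fifo_phys,            /* hypothetical */
            .src_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE,
            .src_maxburst   = 32,
    };
    struct dma_async_tx_descriptor *desc;
    dma_cap_mask_t mask;
    struct dma_chan *chan;

    dma_cap_zero(mask);
    dma_cap_set(DMA_SLAVE, mask);
    chan = dma_request_channel(mask, my_filter, NULL);      /* an odd (RX) channel */
    if (!chan)
            return;

    dmaengine_slave_config(chan, &cfg);
    desc = dmaengine_prep_slave_sg(chan, rx_sg, rx_nents,
                                   DMA_DEV_TO_MEM, DMA_PREP_INTERRUPT);
    if (!desc)
            return;
    desc->callback = rx_done;
    dmaengine_submit(desc);
    dma_async_issue_pending(chan);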
diff --git a/drivers/dma/hsu/hsu.h b/drivers/dma/hsu/hsu.h
new file mode 100644
index 0000000..0275233
--- /dev/null
+++ b/drivers/dma/hsu/hsu.h
@@ -0,0 +1,118 @@
+/*
+ * Driver for the High Speed UART DMA
+ *
+ * Copyright (C) 2015 Intel Corporation
+ *
+ * Partially based on the bits found in drivers/tty/serial/mfd.c.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef __DMA_HSU_H__
+#define __DMA_HSU_H__
+
+#include <linux/spinlock.h>
+#include <linux/dma/hsu.h>
+
+#include "../virt-dma.h"
+
+#define HSU_CH_SR 0x00 /* channel status */
+#define HSU_CH_CR 0x04 /* channel control */
+#define HSU_CH_DCR 0x08 /* descriptor control */
+#define HSU_CH_BSR 0x10 /* FIFO buffer size */
+#define HSU_CH_MTSR 0x14 /* minimum transfer size */
+#define HSU_CH_DxSAR(x) (0x20 + 8 * (x)) /* desc start addr */
+#define HSU_CH_DxTSR(x) (0x24 + 8 * (x)) /* desc transfer size */
+#define HSU_CH_D0SAR 0x20 /* desc 0 start addr */
+#define HSU_CH_D0TSR 0x24 /* desc 0 transfer size */
+#define HSU_CH_D1SAR 0x28
+#define HSU_CH_D1TSR 0x2c
+#define HSU_CH_D2SAR 0x30
+#define HSU_CH_D2TSR 0x34
+#define HSU_CH_D3SAR 0x38
+#define HSU_CH_D3TSR 0x3c
+
+#define HSU_DMA_CHAN_NR_DESC 4
+#define HSU_DMA_CHAN_LENGTH 0x40
+
+/* Bits in HSU_CH_SR */
+#define HSU_CH_SR_DESCTO(x) BIT(8 + (x))
+#define HSU_CH_SR_DESCTO_ANY (BIT(11) | BIT(10) | BIT(9) | BIT(8))
+#define HSU_CH_SR_CHE BIT(15)
+
+/* Bits in HSU_CH_CR */
+#define HSU_CH_CR_CHA BIT(0)
+#define HSU_CH_CR_CHD BIT(1)
+
+/* Bits in HSU_CH_DCR */
+#define HSU_CH_DCR_DESCA(x) BIT(0 + (x))
+#define HSU_CH_DCR_CHSOD(x) BIT(8 + (x))
+#define HSU_CH_DCR_CHSOTO BIT(14)
+#define HSU_CH_DCR_CHSOE BIT(15)
+#define HSU_CH_DCR_CHDI(x) BIT(16 + (x))
+#define HSU_CH_DCR_CHEI BIT(23)
+#define HSU_CH_DCR_CHTOI(x) BIT(24 + (x))
+
+struct hsu_dma_sg {
+ dma_addr_t addr;
+ unsigned int len;
+};
+
+struct hsu_dma_desc {
+ struct virt_dma_desc vdesc;
+ enum dma_transfer_direction direction;
+ struct hsu_dma_sg *sg;
+ unsigned int nents;
+ unsigned int active;
+ enum dma_status status;
+};
+
+static inline struct hsu_dma_desc *to_hsu_dma_desc(struct virt_dma_desc *vdesc)
+{
+ return container_of(vdesc, struct hsu_dma_desc, vdesc);
+}
+
+struct hsu_dma_chan {
+ struct virt_dma_chan vchan;
+
+ void __iomem *reg;
+ spinlock_t lock;
+
+ /* hardware configuration */
+ enum dma_transfer_direction direction;
+ struct dma_slave_config config;
+
+ struct hsu_dma_desc *desc;
+};
+
+static inline struct hsu_dma_chan *to_hsu_dma_chan(struct dma_chan *chan)
+{
+ return container_of(chan, struct hsu_dma_chan, vchan.chan);
+}
+
+static inline u32 hsu_chan_readl(struct hsu_dma_chan *hsuc, int offset)
+{
+ return readl(hsuc->reg + offset);
+}
+
+static inline void hsu_chan_writel(struct hsu_dma_chan *hsuc, int offset,
+ u32 value)
+{
+ writel(value, hsuc->reg + offset);
+}
+
+struct hsu_dma {
+ struct dma_device dma;
+
+ /* channels */
+ struct hsu_dma_chan *chan;
+};
+
+static inline struct hsu_dma *to_hsu_dma(struct dma_device *ddev)
+{
+ return container_of(ddev, struct hsu_dma, dma);
+}
+
+#endif /* __DMA_HSU_H__ */
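
The parameterized HSU_CH_DxSAR()/HSU_CH_DxTSR() macros above simply index the four fixed descriptor register pairs, for instance (illustration only, assuming a function context with linux/bug.h available):

    /* 0x20 + 8 * 2 == 0x30 and 0x24 + 8 * 3 == 0x3c, i.e. the same registers
     * as the fixed-offset names, just indexed. Illustrative checks only. */
    BUILD_BUG_ON(HSU_CH_DxSAR(2) != HSU_CH_D2SAR);
    BUILD_BUG_ON(HSU_CH_DxTSR(3) != HSU_CH_D3TSR);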
diff --git a/drivers/dma/hsu/pci.c b/drivers/dma/hsu/pci.c
new file mode 100644
index 0000000..77879e6
--- /dev/null
+++ b/drivers/dma/hsu/pci.c
@@ -0,0 +1,124 @@
+/*
+ * PCI driver for the High Speed UART DMA
+ *
+ * Copyright (C) 2015 Intel Corporation
+ * Author: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
+ *
+ * Partially based on the bits found in drivers/tty/serial/mfd.c.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/bitops.h>
+#include <linux/device.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+
+#include "hsu.h"
+
+#define HSU_PCI_DMASR 0x00
+#define HSU_PCI_DMAISR 0x04
+
+#define HSU_PCI_CHAN_OFFSET 0x100
+
+static irqreturn_t hsu_pci_irq(int irq, void *dev)
+{
+ struct hsu_dma_chip *chip = dev;
+ u32 dmaisr;
+ unsigned short i;
+ irqreturn_t ret = IRQ_NONE;
+
+ dmaisr = readl(chip->regs + HSU_PCI_DMAISR);
+ for (i = 0; i < chip->pdata->nr_channels; i++) {
+ if (dmaisr & 0x1)
+ ret |= hsu_dma_irq(chip, i);
+ dmaisr >>= 1;
+ }
+
+ return ret;
+}
+
+static int hsu_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+{
+ struct hsu_dma_chip *chip;
+ int ret;
+
+ ret = pcim_enable_device(pdev);
+ if (ret)
+ return ret;
+
+ ret = pcim_iomap_regions(pdev, BIT(0), pci_name(pdev));
+ if (ret) {
+ dev_err(&pdev->dev, "I/O memory remapping failed\n");
+ return ret;
+ }
+
+ pci_set_master(pdev);
+ pci_try_set_mwi(pdev);
+
+ ret = pci_set_dma_mask(pdev, DMA_BIT_MASK(32));
+ if (ret)
+ return ret;
+
+ ret = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32));
+ if (ret)
+ return ret;
+
+ chip = devm_kzalloc(&pdev->dev, sizeof(*chip), GFP_KERNEL);
+ if (!chip)
+ return -ENOMEM;
+
+ chip->dev = &pdev->dev;
+ chip->regs = pcim_iomap_table(pdev)[0];
+ chip->length = pci_resource_len(pdev, 0);
+ chip->offset = HSU_PCI_CHAN_OFFSET;
+ chip->irq = pdev->irq;
+
+ pci_enable_msi(pdev);
+
+ ret = hsu_dma_probe(chip);
+ if (ret)
+ return ret;
+
+ ret = request_irq(chip->irq, hsu_pci_irq, 0, "hsu_dma_pci", chip);
+ if (ret)
+ goto err_register_irq;
+
+ pci_set_drvdata(pdev, chip);
+
+ return 0;
+
+err_register_irq:
+ hsu_dma_remove(chip);
+ return ret;
+}
+
+static void hsu_pci_remove(struct pci_dev *pdev)
+{
+ struct hsu_dma_chip *chip = pci_get_drvdata(pdev);
+
+ free_irq(chip->irq, chip);
+ hsu_dma_remove(chip);
+}
+
+static const struct pci_device_id hsu_pci_id_table[] = {
+ { PCI_VDEVICE(INTEL, 0x081e), 0 },
+ { PCI_VDEVICE(INTEL, 0x1192), 0 },
+ { }
+};
+MODULE_DEVICE_TABLE(pci, hsu_pci_id_table);
+
+static struct pci_driver hsu_pci_driver = {
+ .name = "hsu_dma_pci",
+ .id_table = hsu_pci_id_table,
+ .probe = hsu_pci_probe,
+ .remove = hsu_pci_remove,
+};
+
+module_pci_driver(hsu_pci_driver);
+
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("High Speed UART DMA PCI driver");
+MODULE_AUTHOR("Andy Shevchenko <andriy.shevchenko@linux.intel.com>");
diff --git a/drivers/extcon/Kconfig b/drivers/extcon/Kconfig
index 6a1f7de..fdc0bf0 100644
--- a/drivers/extcon/Kconfig
+++ b/drivers/extcon/Kconfig
@@ -55,6 +55,16 @@
Maxim MAX77693 PMIC. The MAX77693 MUIC is a USB port accessory
detector and switch.
+config EXTCON_MAX77843
+ tristate "MAX77843 EXTCON Support"
+ depends on MFD_MAX77843
+ select IRQ_DOMAIN
+ select REGMAP_I2C
+ help
+ If you say yes here you get support for the MUIC device of
+ Maxim MAX77843. The MAX77843 MUIC is a USB port accessory
+	  detector and switch.
+
config EXTCON_MAX8997
tristate "MAX8997 EXTCON Support"
depends on MFD_MAX8997 && IRQ_DOMAIN
@@ -93,4 +103,11 @@
Silicon Mitus SM5502. The SM5502 is a USB port accessory
detector and switch.
+config EXTCON_USB_GPIO
+ tristate "USB GPIO extcon support"
+ depends on GPIOLIB
+ help
+ Say Y here to enable GPIO based USB cable detection extcon support.
+	  Typically used when a GPIO is used for USB ID pin detection.
+
endif # MULTISTATE_SWITCH
diff --git a/drivers/extcon/Makefile b/drivers/extcon/Makefile
index 0370b42..9204114 100644
--- a/drivers/extcon/Makefile
+++ b/drivers/extcon/Makefile
@@ -2,13 +2,15 @@
# Makefile for external connector class (extcon) devices
#
-obj-$(CONFIG_EXTCON) += extcon-class.o
+obj-$(CONFIG_EXTCON) += extcon.o
obj-$(CONFIG_EXTCON_ADC_JACK) += extcon-adc-jack.o
obj-$(CONFIG_EXTCON_ARIZONA) += extcon-arizona.o
obj-$(CONFIG_EXTCON_GPIO) += extcon-gpio.o
obj-$(CONFIG_EXTCON_MAX14577) += extcon-max14577.o
obj-$(CONFIG_EXTCON_MAX77693) += extcon-max77693.o
+obj-$(CONFIG_EXTCON_MAX77843) += extcon-max77843.o
obj-$(CONFIG_EXTCON_MAX8997) += extcon-max8997.o
obj-$(CONFIG_EXTCON_PALMAS) += extcon-palmas.o
obj-$(CONFIG_EXTCON_RT8973A) += extcon-rt8973a.o
obj-$(CONFIG_EXTCON_SM5502) += extcon-sm5502.o
+obj-$(CONFIG_EXTCON_USB_GPIO) += extcon-usb-gpio.o
diff --git a/drivers/extcon/extcon-arizona.c b/drivers/extcon/extcon-arizona.c
index 6b5e795..a0ed35b 100644
--- a/drivers/extcon/extcon-arizona.c
+++ b/drivers/extcon/extcon-arizona.c
@@ -136,18 +136,35 @@
static void arizona_start_hpdet_acc_id(struct arizona_extcon_info *info);
-static void arizona_extcon_do_magic(struct arizona_extcon_info *info,
- unsigned int magic)
+static void arizona_extcon_hp_clamp(struct arizona_extcon_info *info,
+ bool clamp)
{
struct arizona *arizona = info->arizona;
+ unsigned int mask = 0, val = 0;
int ret;
+ switch (arizona->type) {
+ case WM5110:
+ mask = ARIZONA_HP1L_SHRTO | ARIZONA_HP1L_FLWR |
+ ARIZONA_HP1L_SHRTI;
+ if (clamp)
+ val = ARIZONA_HP1L_SHRTO;
+ else
+ val = ARIZONA_HP1L_FLWR | ARIZONA_HP1L_SHRTI;
+ break;
+ default:
+ mask = ARIZONA_RMV_SHRT_HP1L;
+ if (clamp)
+ val = ARIZONA_RMV_SHRT_HP1L;
+ break;
+ };
+
mutex_lock(&arizona->dapm->card->dapm_mutex);
- arizona->hpdet_magic = magic;
+ arizona->hpdet_clamp = clamp;
- /* Keep the HP output stages disabled while doing the magic */
- if (magic) {
+ /* Keep the HP output stages disabled while doing the clamp */
+ if (clamp) {
ret = regmap_update_bits(arizona->regmap,
ARIZONA_OUTPUT_ENABLES_1,
ARIZONA_OUT1L_ENA |
@@ -158,20 +175,20 @@
ret);
}
- ret = regmap_update_bits(arizona->regmap, 0x225, 0x4000,
- magic);
+ ret = regmap_update_bits(arizona->regmap, ARIZONA_HP_CTRL_1L,
+ mask, val);
if (ret != 0)
- dev_warn(arizona->dev, "Failed to do magic: %d\n",
+ dev_warn(arizona->dev, "Failed to do clamp: %d\n",
ret);
- ret = regmap_update_bits(arizona->regmap, 0x226, 0x4000,
- magic);
+ ret = regmap_update_bits(arizona->regmap, ARIZONA_HP_CTRL_1R,
+ mask, val);
if (ret != 0)
- dev_warn(arizona->dev, "Failed to do magic: %d\n",
+ dev_warn(arizona->dev, "Failed to do clamp: %d\n",
ret);
- /* Restore the desired state while not doing the magic */
- if (!magic) {
+ /* Restore the desired state while not doing the clamp */
+ if (!clamp) {
ret = regmap_update_bits(arizona->regmap,
ARIZONA_OUTPUT_ENABLES_1,
ARIZONA_OUT1L_ENA |
@@ -603,7 +620,7 @@
ARIZONA_HP_IMPEDANCE_RANGE_MASK | ARIZONA_HP_POLL,
0);
- arizona_extcon_do_magic(info, 0);
+ arizona_extcon_hp_clamp(info, false);
if (id_gpio)
gpio_set_value_cansleep(id_gpio, 0);
@@ -648,7 +665,7 @@
if (info->mic)
arizona_stop_mic(info);
- arizona_extcon_do_magic(info, 0x4000);
+ arizona_extcon_hp_clamp(info, true);
ret = regmap_update_bits(arizona->regmap,
ARIZONA_ACCESSORY_DETECT_MODE_1,
@@ -699,7 +716,7 @@
info->hpdet_active = true;
- arizona_extcon_do_magic(info, 0x4000);
+ arizona_extcon_hp_clamp(info, true);
ret = regmap_update_bits(arizona->regmap,
ARIZONA_ACCESSORY_DETECT_MODE_1,
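The reworked arizona_extcon_hp_clamp() above replaces the old raw 0x4000 write with a mask/value pair picked per chip type before the regmap update. A small standalone sketch of that selection logic follows; the bit values are placeholders, not the real Arizona register definitions.

#include <stdbool.h>
#include <stdio.h>

enum chip_type { WM5110, OTHER_ARIZONA };

/* Placeholder bit values; the real ones live in the Arizona headers. */
#define HP1L_SHRTO      0x4
#define HP1L_FLWR       0x2
#define HP1L_SHRTI      0x1
#define RMV_SHRT_HP1L   0x8

static void pick_clamp_bits(enum chip_type type, bool clamp,
                            unsigned int *mask, unsigned int *val)
{
        switch (type) {
        case WM5110:
                *mask = HP1L_SHRTO | HP1L_FLWR | HP1L_SHRTI;
                *val = clamp ? HP1L_SHRTO : (HP1L_FLWR | HP1L_SHRTI);
                break;
        default:
                *mask = RMV_SHRT_HP1L;
                *val = clamp ? RMV_SHRT_HP1L : 0;
                break;
        }
}

int main(void)
{
        unsigned int mask, val;

        pick_clamp_bits(WM5110, true, &mask, &val);
        printf("WM5110 clamp:    mask=0x%x val=0x%x\n", mask, val);
        pick_clamp_bits(OTHER_ARIZONA, false, &mask, &val);
        printf("default release: mask=0x%x val=0x%x\n", mask, val);
        return 0;
}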
diff --git a/drivers/extcon/extcon-max14577.c b/drivers/extcon/extcon-max14577.c
index c1bf0cf..3823aa4 100644
--- a/drivers/extcon/extcon-max14577.c
+++ b/drivers/extcon/extcon-max14577.c
@@ -539,8 +539,6 @@
dev_err(info->dev, "failed to handle MUIC interrupt\n");
mutex_unlock(&info->mutex);
-
- return;
}
/*
@@ -730,8 +728,7 @@
muic_irq->name, info);
if (ret) {
dev_err(&pdev->dev,
- "failed: irq request (IRQ: %d,"
- " error :%d)\n",
+ "failed: irq request (IRQ: %d, error :%d)\n",
muic_irq->irq, ret);
return ret;
}
diff --git a/drivers/extcon/extcon-max77693.c b/drivers/extcon/extcon-max77693.c
index af165fd..a66bec8 100644
--- a/drivers/extcon/extcon-max77693.c
+++ b/drivers/extcon/extcon-max77693.c
@@ -190,8 +190,8 @@
/* The below accessories have same ADC value so ADCLow and
ADC1K bit is used to separate specific accessory */
/* ADC|VBVolot|ADCLow|ADC1K| */
- MAX77693_MUIC_GND_USB_OTG = 0x100, /* 0x0| 0| 0| 0| */
- MAX77693_MUIC_GND_USB_OTG_VB = 0x104, /* 0x0| 1| 0| 0| */
+ MAX77693_MUIC_GND_USB_HOST = 0x100, /* 0x0| 0| 0| 0| */
+ MAX77693_MUIC_GND_USB_HOST_VB = 0x104, /* 0x0| 1| 0| 0| */
MAX77693_MUIC_GND_AV_CABLE_LOAD = 0x102,/* 0x0| 0| 1| 0| */
MAX77693_MUIC_GND_MHL = 0x103, /* 0x0| 0| 1| 1| */
MAX77693_MUIC_GND_MHL_VB = 0x107, /* 0x0| 1| 1| 1| */
@@ -228,7 +228,7 @@
[EXTCON_CABLE_SLOW_CHARGER] = "Slow-charger",
[EXTCON_CABLE_CHARGE_DOWNSTREAM] = "Charge-downstream",
[EXTCON_CABLE_MHL] = "MHL",
- [EXTCON_CABLE_MHL_TA] = "MHL_TA",
+ [EXTCON_CABLE_MHL_TA] = "MHL-TA",
[EXTCON_CABLE_JIG_USB_ON] = "JIG-USB-ON",
[EXTCON_CABLE_JIG_USB_OFF] = "JIG-USB-OFF",
[EXTCON_CABLE_JIG_UART_OFF] = "JIG-UART-OFF",
@@ -403,8 +403,8 @@
/**
* [0x1|VBVolt|ADCLow|ADC1K]
- * [0x1| 0| 0| 0] USB_OTG
- * [0x1| 1| 0| 0] USB_OTG_VB
+ * [0x1| 0| 0| 0] USB_HOST
+ * [0x1|     1|     0|    0] USB_HOST_VB
* [0x1| 0| 1| 0] Audio Video cable with load
* [0x1| 0| 1| 1] MHL without charging cable
* [0x1| 1| 1| 1] MHL with charging cable
@@ -523,7 +523,7 @@
* - Support charging and data connection through micro-usb port
* if USB cable is connected between target and host
* device.
- * - Support OTG device (Mouse/Keyboard)
+	 *	  - Support OTG (On-The-Go) devices (e.g. mouse/keyboard)
*/
ret = max77693_muic_set_path(info, info->path_usb, attached);
if (ret < 0)
@@ -609,9 +609,9 @@
MAX77693_CABLE_GROUP_ADC_GND, &attached);
switch (cable_type_gnd) {
- case MAX77693_MUIC_GND_USB_OTG:
- case MAX77693_MUIC_GND_USB_OTG_VB:
- /* USB_OTG, PATH: AP_USB */
+ case MAX77693_MUIC_GND_USB_HOST:
+ case MAX77693_MUIC_GND_USB_HOST_VB:
+ /* USB_HOST, PATH: AP_USB */
ret = max77693_muic_set_path(info, CONTROL1_SW_USB, attached);
if (ret < 0)
return ret;
@@ -704,7 +704,7 @@
switch (cable_type) {
case MAX77693_MUIC_ADC_GROUND:
- /* USB_OTG/MHL/Audio */
+ /* USB_HOST/MHL/Audio */
max77693_muic_adc_ground_handler(info);
break;
case MAX77693_MUIC_ADC_FACTORY_MODE_USB_OFF:
@@ -823,19 +823,19 @@
case MAX77693_MUIC_GND_MHL:
case MAX77693_MUIC_GND_MHL_VB:
/*
- * MHL cable with MHL_TA(USB/TA) cable
+ * MHL cable with MHL-TA(USB/TA) cable
* - MHL cable include two port(HDMI line and separate
* micro-usb port. When the target connect MHL cable,
- * extcon driver check whether MHL_TA(USB/TA) cable is
- * connected. If MHL_TA cable is connected, extcon
+ * extcon driver check whether MHL-TA(USB/TA) cable is
+ * connected. If MHL-TA cable is connected, extcon
* driver notify state to notifiee for charging battery.
*
- * Features of 'MHL_TA(USB/TA) with MHL cable'
+ * Features of 'MHL-TA(USB/TA) with MHL cable'
* - Support MHL
* - Support charging through micro-usb port without
* data connection
*/
- extcon_set_cable_state(info->edev, "MHL_TA", attached);
+ extcon_set_cable_state(info->edev, "MHL-TA", attached);
if (!cable_attached)
extcon_set_cable_state(info->edev,
"MHL", cable_attached);
@@ -886,7 +886,7 @@
* - Support charging and data connection through micro-
* usb port if USB cable is connected between target
* and host device
- * - Support OTG device (Mouse/Keyboard)
+		 *	  - Support OTG (On-The-Go) devices (e.g. mouse/keyboard)
*/
ret = max77693_muic_set_path(info, info->path_usb,
attached);
@@ -1019,8 +1019,6 @@
dev_err(info->dev, "failed to handle MUIC interrupt\n");
mutex_unlock(&info->mutex);
-
- return;
}
static irqreturn_t max77693_muic_irq_handler(int irq, void *data)
@@ -1171,8 +1169,7 @@
muic_irq->name, info);
if (ret) {
dev_err(&pdev->dev,
- "failed: irq request (IRQ: %d,"
- " error :%d)\n",
+ "failed: irq request (IRQ: %d, error :%d)\n",
muic_irq->irq, ret);
return ret;
}
diff --git a/drivers/extcon/extcon-max77843.c b/drivers/extcon/extcon-max77843.c
new file mode 100644
index 0000000..8db6a92
--- /dev/null
+++ b/drivers/extcon/extcon-max77843.c
@@ -0,0 +1,881 @@
+/*
+ * extcon-max77843.c - Maxim MAX77843 extcon driver to support
+ * MUIC(Micro USB Interface Controller)
+ *
+ * Copyright (C) 2015 Samsung Electronics
+ * Author: Jaewon Kim <jaewon02.kim@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+
+#include <linux/extcon.h>
+#include <linux/i2c.h>
+#include <linux/interrupt.h>
+#include <linux/kernel.h>
+#include <linux/mfd/max77843-private.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <linux/workqueue.h>
+
+#define DELAY_MS_DEFAULT 15000 /* unit: millisecond */
+
+enum max77843_muic_status {
+ MAX77843_MUIC_STATUS1 = 0,
+ MAX77843_MUIC_STATUS2,
+ MAX77843_MUIC_STATUS3,
+
+ MAX77843_MUIC_STATUS_NUM,
+};
+
+struct max77843_muic_info {
+ struct device *dev;
+ struct max77843 *max77843;
+ struct extcon_dev *edev;
+
+ struct mutex mutex;
+ struct work_struct irq_work;
+ struct delayed_work wq_detcable;
+
+ u8 status[MAX77843_MUIC_STATUS_NUM];
+ int prev_cable_type;
+ int prev_chg_type;
+ int prev_gnd_type;
+
+ bool irq_adc;
+ bool irq_chg;
+};
+
+enum max77843_muic_cable_group {
+ MAX77843_CABLE_GROUP_ADC = 0,
+ MAX77843_CABLE_GROUP_ADC_GND,
+ MAX77843_CABLE_GROUP_CHG,
+};
+
+enum max77843_muic_adc_debounce_time {
+ MAX77843_DEBOUNCE_TIME_5MS = 0,
+ MAX77843_DEBOUNCE_TIME_10MS,
+ MAX77843_DEBOUNCE_TIME_25MS,
+ MAX77843_DEBOUNCE_TIME_38_62MS,
+};
+
+/* Define accessory cable type */
+enum max77843_muic_accessory_type {
+ MAX77843_MUIC_ADC_GROUND = 0,
+ MAX77843_MUIC_ADC_SEND_END_BUTTON,
+ MAX77843_MUIC_ADC_REMOTE_S1_BUTTON,
+ MAX77843_MUIC_ADC_REMOTE_S2_BUTTON,
+ MAX77843_MUIC_ADC_REMOTE_S3_BUTTON,
+ MAX77843_MUIC_ADC_REMOTE_S4_BUTTON,
+ MAX77843_MUIC_ADC_REMOTE_S5_BUTTON,
+ MAX77843_MUIC_ADC_REMOTE_S6_BUTTON,
+ MAX77843_MUIC_ADC_REMOTE_S7_BUTTON,
+ MAX77843_MUIC_ADC_REMOTE_S8_BUTTON,
+ MAX77843_MUIC_ADC_REMOTE_S9_BUTTON,
+ MAX77843_MUIC_ADC_REMOTE_S10_BUTTON,
+ MAX77843_MUIC_ADC_REMOTE_S11_BUTTON,
+ MAX77843_MUIC_ADC_REMOTE_S12_BUTTON,
+ MAX77843_MUIC_ADC_RESERVED_ACC_1,
+ MAX77843_MUIC_ADC_RESERVED_ACC_2,
+ MAX77843_MUIC_ADC_RESERVED_ACC_3,
+ MAX77843_MUIC_ADC_RESERVED_ACC_4,
+ MAX77843_MUIC_ADC_RESERVED_ACC_5,
+ MAX77843_MUIC_ADC_AUDIO_DEVICE_TYPE2,
+ MAX77843_MUIC_ADC_PHONE_POWERED_DEV,
+ MAX77843_MUIC_ADC_TTY_CONVERTER,
+ MAX77843_MUIC_ADC_UART_CABLE,
+ MAX77843_MUIC_ADC_CEA936A_TYPE1_CHG,
+ MAX77843_MUIC_ADC_FACTORY_MODE_USB_OFF,
+ MAX77843_MUIC_ADC_FACTORY_MODE_USB_ON,
+ MAX77843_MUIC_ADC_AV_CABLE_NOLOAD,
+ MAX77843_MUIC_ADC_CEA936A_TYPE2_CHG,
+ MAX77843_MUIC_ADC_FACTORY_MODE_UART_OFF,
+ MAX77843_MUIC_ADC_FACTORY_MODE_UART_ON,
+ MAX77843_MUIC_ADC_AUDIO_DEVICE_TYPE1,
+ MAX77843_MUIC_ADC_OPEN,
+
+	/* The below accessories should check
+	   not only the ADC value but also the ADC1K and VBVolt values. */
+ /* Offset|ADC1K|VBVolt| */
+ MAX77843_MUIC_GND_USB_HOST = 0x100, /* 0x1| 0| 0| */
+ MAX77843_MUIC_GND_USB_HOST_VB = 0x101, /* 0x1| 0| 1| */
+ MAX77843_MUIC_GND_MHL = 0x102, /* 0x1| 1| 0| */
+ MAX77843_MUIC_GND_MHL_VB = 0x103, /* 0x1| 1| 1| */
+};
+
+/* Define charger cable type */
+enum max77843_muic_charger_type {
+ MAX77843_MUIC_CHG_NONE = 0,
+ MAX77843_MUIC_CHG_USB,
+ MAX77843_MUIC_CHG_DOWNSTREAM,
+ MAX77843_MUIC_CHG_DEDICATED,
+ MAX77843_MUIC_CHG_SPECIAL_500MA,
+ MAX77843_MUIC_CHG_SPECIAL_1A,
+ MAX77843_MUIC_CHG_SPECIAL_BIAS,
+ MAX77843_MUIC_CHG_RESERVED,
+ MAX77843_MUIC_CHG_GND,
+};
+
+enum {
+ MAX77843_CABLE_USB = 0,
+ MAX77843_CABLE_USB_HOST,
+ MAX77843_CABLE_TA,
+ MAX77843_CABLE_CHARGE_DOWNSTREAM,
+ MAX77843_CABLE_FAST_CHARGER,
+ MAX77843_CABLE_SLOW_CHARGER,
+ MAX77843_CABLE_MHL,
+ MAX77843_CABLE_MHL_TA,
+ MAX77843_CABLE_JIG_USB_ON,
+ MAX77843_CABLE_JIG_USB_OFF,
+ MAX77843_CABLE_JIG_UART_ON,
+ MAX77843_CABLE_JIG_UART_OFF,
+
+ MAX77843_CABLE_NUM,
+};
+
+static const char *max77843_extcon_cable[] = {
+ [MAX77843_CABLE_USB] = "USB",
+ [MAX77843_CABLE_USB_HOST] = "USB-HOST",
+ [MAX77843_CABLE_TA] = "TA",
+ [MAX77843_CABLE_CHARGE_DOWNSTREAM] = "CHARGER-DOWNSTREAM",
+ [MAX77843_CABLE_FAST_CHARGER] = "FAST-CHARGER",
+ [MAX77843_CABLE_SLOW_CHARGER] = "SLOW-CHARGER",
+ [MAX77843_CABLE_MHL] = "MHL",
+ [MAX77843_CABLE_MHL_TA] = "MHL-TA",
+ [MAX77843_CABLE_JIG_USB_ON] = "JIG-USB-ON",
+ [MAX77843_CABLE_JIG_USB_OFF] = "JIG-USB-OFF",
+ [MAX77843_CABLE_JIG_UART_ON] = "JIG-UART-ON",
+ [MAX77843_CABLE_JIG_UART_OFF] = "JIG-UART-OFF",
+};
+
+struct max77843_muic_irq {
+ unsigned int irq;
+ const char *name;
+ unsigned int virq;
+};
+
+static struct max77843_muic_irq max77843_muic_irqs[] = {
+ { MAX77843_MUIC_IRQ_INT1_ADC, "MUIC-ADC" },
+ { MAX77843_MUIC_IRQ_INT1_ADCERROR, "MUIC-ADC_ERROR" },
+ { MAX77843_MUIC_IRQ_INT1_ADC1K, "MUIC-ADC1K" },
+ { MAX77843_MUIC_IRQ_INT2_CHGTYP, "MUIC-CHGTYP" },
+ { MAX77843_MUIC_IRQ_INT2_CHGDETRUN, "MUIC-CHGDETRUN" },
+ { MAX77843_MUIC_IRQ_INT2_DCDTMR, "MUIC-DCDTMR" },
+ { MAX77843_MUIC_IRQ_INT2_DXOVP, "MUIC-DXOVP" },
+ { MAX77843_MUIC_IRQ_INT2_VBVOLT, "MUIC-VBVOLT" },
+ { MAX77843_MUIC_IRQ_INT3_VBADC, "MUIC-VBADC" },
+ { MAX77843_MUIC_IRQ_INT3_VDNMON, "MUIC-VDNMON" },
+ { MAX77843_MUIC_IRQ_INT3_DNRES, "MUIC-DNRES" },
+ { MAX77843_MUIC_IRQ_INT3_MPNACK, "MUIC-MPNACK"},
+ { MAX77843_MUIC_IRQ_INT3_MRXBUFOW, "MUIC-MRXBUFOW"},
+ { MAX77843_MUIC_IRQ_INT3_MRXTRF, "MUIC-MRXTRF"},
+ { MAX77843_MUIC_IRQ_INT3_MRXPERR, "MUIC-MRXPERR"},
+ { MAX77843_MUIC_IRQ_INT3_MRXRDY, "MUIC-MRXRDY"},
+};
+
+static const struct regmap_config max77843_muic_regmap_config = {
+ .reg_bits = 8,
+ .val_bits = 8,
+ .max_register = MAX77843_MUIC_REG_END,
+};
+
+static const struct regmap_irq max77843_muic_irq[] = {
+ /* INT1 interrupt */
+ { .reg_offset = 0, .mask = MAX77843_MUIC_ADC, },
+ { .reg_offset = 0, .mask = MAX77843_MUIC_ADCERROR, },
+ { .reg_offset = 0, .mask = MAX77843_MUIC_ADC1K, },
+
+ /* INT2 interrupt */
+ { .reg_offset = 1, .mask = MAX77843_MUIC_CHGTYP, },
+ { .reg_offset = 1, .mask = MAX77843_MUIC_CHGDETRUN, },
+ { .reg_offset = 1, .mask = MAX77843_MUIC_DCDTMR, },
+ { .reg_offset = 1, .mask = MAX77843_MUIC_DXOVP, },
+ { .reg_offset = 1, .mask = MAX77843_MUIC_VBVOLT, },
+
+ /* INT3 interrupt */
+ { .reg_offset = 2, .mask = MAX77843_MUIC_VBADC, },
+ { .reg_offset = 2, .mask = MAX77843_MUIC_VDNMON, },
+ { .reg_offset = 2, .mask = MAX77843_MUIC_DNRES, },
+ { .reg_offset = 2, .mask = MAX77843_MUIC_MPNACK, },
+ { .reg_offset = 2, .mask = MAX77843_MUIC_MRXBUFOW, },
+ { .reg_offset = 2, .mask = MAX77843_MUIC_MRXTRF, },
+ { .reg_offset = 2, .mask = MAX77843_MUIC_MRXPERR, },
+ { .reg_offset = 2, .mask = MAX77843_MUIC_MRXRDY, },
+};
+
+static const struct regmap_irq_chip max77843_muic_irq_chip = {
+ .name = "max77843-muic",
+ .status_base = MAX77843_MUIC_REG_INT1,
+ .mask_base = MAX77843_MUIC_REG_INTMASK1,
+ .mask_invert = true,
+ .num_regs = 3,
+ .irqs = max77843_muic_irq,
+ .num_irqs = ARRAY_SIZE(max77843_muic_irq),
+};
+
+static int max77843_muic_set_path(struct max77843_muic_info *info,
+ u8 val, bool attached)
+{
+ struct max77843 *max77843 = info->max77843;
+ int ret = 0;
+ unsigned int ctrl1, ctrl2;
+
+ if (attached)
+ ctrl1 = val;
+ else
+ ctrl1 = CONTROL1_SW_OPEN;
+
+ ret = regmap_update_bits(max77843->regmap_muic,
+ MAX77843_MUIC_REG_CONTROL1,
+ CONTROL1_COM_SW, ctrl1);
+ if (ret < 0) {
+ dev_err(info->dev, "Cannot switch MUIC port\n");
+ return ret;
+ }
+
+ if (attached)
+ ctrl2 = MAX77843_MUIC_CONTROL2_CPEN_MASK;
+ else
+ ctrl2 = MAX77843_MUIC_CONTROL2_LOWPWR_MASK;
+
+ ret = regmap_update_bits(max77843->regmap_muic,
+ MAX77843_MUIC_REG_CONTROL2,
+ MAX77843_MUIC_CONTROL2_LOWPWR_MASK |
+ MAX77843_MUIC_CONTROL2_CPEN_MASK, ctrl2);
+ if (ret < 0) {
+ dev_err(info->dev, "Cannot update lowpower mode\n");
+ return ret;
+ }
+
+ dev_dbg(info->dev,
+ "CONTROL1 : 0x%02x, CONTROL2 : 0x%02x, state : %s\n",
+ ctrl1, ctrl2, attached ? "attached" : "detached");
+
+ return 0;
+}
+
+static int max77843_muic_get_cable_type(struct max77843_muic_info *info,
+ enum max77843_muic_cable_group group, bool *attached)
+{
+ int adc, chg_type, cable_type, gnd_type;
+
+ adc = info->status[MAX77843_MUIC_STATUS1] &
+ MAX77843_MUIC_STATUS1_ADC_MASK;
+ adc >>= STATUS1_ADC_SHIFT;
+
+ switch (group) {
+ case MAX77843_CABLE_GROUP_ADC:
+ if (adc == MAX77843_MUIC_ADC_OPEN) {
+ *attached = false;
+ cable_type = info->prev_cable_type;
+ info->prev_cable_type = MAX77843_MUIC_ADC_OPEN;
+ } else {
+ *attached = true;
+ cable_type = info->prev_cable_type = adc;
+ }
+ break;
+ case MAX77843_CABLE_GROUP_CHG:
+ chg_type = info->status[MAX77843_MUIC_STATUS2] &
+ MAX77843_MUIC_STATUS2_CHGTYP_MASK;
+
+ /* Check GROUND accessory with charger cable */
+ if (adc == MAX77843_MUIC_ADC_GROUND) {
+ if (chg_type == MAX77843_MUIC_CHG_NONE) {
+				/* This state occurs when the charger cable
+				 * is disconnected but the GROUND accessory is
+				 * still connected */
+ *attached = false;
+ cable_type = info->prev_chg_type;
+ info->prev_chg_type = MAX77843_MUIC_CHG_NONE;
+ } else {
+
+				/* This state occurs when the charger cable
+				 * is connected on the GROUND accessory */
+ *attached = true;
+ cable_type = MAX77843_MUIC_CHG_GND;
+ info->prev_chg_type = MAX77843_MUIC_CHG_GND;
+ }
+ break;
+ }
+
+ if (chg_type == MAX77843_MUIC_CHG_NONE) {
+ *attached = false;
+ cable_type = info->prev_chg_type;
+ info->prev_chg_type = MAX77843_MUIC_CHG_NONE;
+ } else {
+ *attached = true;
+ cable_type = info->prev_chg_type = chg_type;
+ }
+ break;
+ case MAX77843_CABLE_GROUP_ADC_GND:
+ if (adc == MAX77843_MUIC_ADC_OPEN) {
+ *attached = false;
+ cable_type = info->prev_gnd_type;
+ info->prev_gnd_type = MAX77843_MUIC_ADC_OPEN;
+ } else {
+ *attached = true;
+
+ /* Offset|ADC1K|VBVolt|
+ * 0x1| 0| 0| USB-HOST
+ * 0x1| 0| 1| USB-HOST with VB
+ * 0x1| 1| 0| MHL
+ * 0x1| 1| 1| MHL with VB */
+ /* Get ADC1K register bit */
+ gnd_type = (info->status[MAX77843_MUIC_STATUS1] &
+ MAX77843_MUIC_STATUS1_ADC1K_MASK);
+
+ /* Get VBVolt register bit */
+ gnd_type |= (info->status[MAX77843_MUIC_STATUS2] &
+ MAX77843_MUIC_STATUS2_VBVOLT_MASK);
+ gnd_type >>= STATUS2_VBVOLT_SHIFT;
+
+ /* Offset of GND cable */
+ gnd_type |= MAX77843_MUIC_GND_USB_HOST;
+ cable_type = info->prev_gnd_type = gnd_type;
+ }
+ break;
+ default:
+ dev_err(info->dev, "Unknown cable group (%d)\n", group);
+ cable_type = -EINVAL;
+ break;
+ }
+
+ return cable_type;
+}
+
+static int max77843_muic_adc_gnd_handler(struct max77843_muic_info *info)
+{
+ int ret, gnd_cable_type;
+ bool attached;
+
+ gnd_cable_type = max77843_muic_get_cable_type(info,
+ MAX77843_CABLE_GROUP_ADC_GND, &attached);
+ dev_dbg(info->dev, "external connector is %s (gnd:0x%02x)\n",
+ attached ? "attached" : "detached", gnd_cable_type);
+
+ switch (gnd_cable_type) {
+ case MAX77843_MUIC_GND_USB_HOST:
+ case MAX77843_MUIC_GND_USB_HOST_VB:
+ ret = max77843_muic_set_path(info, CONTROL1_SW_USB, attached);
+ if (ret < 0)
+ return ret;
+
+ extcon_set_cable_state(info->edev, "USB-HOST", attached);
+ break;
+ case MAX77843_MUIC_GND_MHL_VB:
+ case MAX77843_MUIC_GND_MHL:
+ ret = max77843_muic_set_path(info, CONTROL1_SW_OPEN, attached);
+ if (ret < 0)
+ return ret;
+
+ extcon_set_cable_state(info->edev, "MHL", attached);
+ break;
+ default:
+ dev_err(info->dev, "failed to detect %s accessory(gnd:0x%x)\n",
+ attached ? "attached" : "detached", gnd_cable_type);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int max77843_muic_jig_handler(struct max77843_muic_info *info,
+ int cable_type, bool attached)
+{
+ int ret;
+
+ dev_dbg(info->dev, "external connector is %s (adc:0x%02x)\n",
+ attached ? "attached" : "detached", cable_type);
+
+ switch (cable_type) {
+ case MAX77843_MUIC_ADC_FACTORY_MODE_USB_OFF:
+ ret = max77843_muic_set_path(info, CONTROL1_SW_USB, attached);
+ if (ret < 0)
+ return ret;
+ extcon_set_cable_state(info->edev, "JIG-USB-OFF", attached);
+ break;
+ case MAX77843_MUIC_ADC_FACTORY_MODE_USB_ON:
+ ret = max77843_muic_set_path(info, CONTROL1_SW_USB, attached);
+ if (ret < 0)
+ return ret;
+ extcon_set_cable_state(info->edev, "JIG-USB-ON", attached);
+ break;
+ case MAX77843_MUIC_ADC_FACTORY_MODE_UART_OFF:
+ ret = max77843_muic_set_path(info, CONTROL1_SW_UART, attached);
+ if (ret < 0)
+ return ret;
+ extcon_set_cable_state(info->edev, "JIG-UART-OFF", attached);
+ break;
+ default:
+ ret = max77843_muic_set_path(info, CONTROL1_SW_OPEN, attached);
+ if (ret < 0)
+ return ret;
+ break;
+ }
+
+ return 0;
+}
+
+static int max77843_muic_adc_handler(struct max77843_muic_info *info)
+{
+ int ret, cable_type;
+ bool attached;
+
+ cable_type = max77843_muic_get_cable_type(info,
+ MAX77843_CABLE_GROUP_ADC, &attached);
+
+ dev_dbg(info->dev,
+ "external connector is %s (adc:0x%02x, prev_adc:0x%x)\n",
+ attached ? "attached" : "detached", cable_type,
+ info->prev_cable_type);
+
+ switch (cable_type) {
+ case MAX77843_MUIC_ADC_GROUND:
+ ret = max77843_muic_adc_gnd_handler(info);
+ if (ret < 0)
+ return ret;
+ break;
+ case MAX77843_MUIC_ADC_FACTORY_MODE_USB_OFF:
+ case MAX77843_MUIC_ADC_FACTORY_MODE_USB_ON:
+ case MAX77843_MUIC_ADC_FACTORY_MODE_UART_OFF:
+ ret = max77843_muic_jig_handler(info, cable_type, attached);
+ if (ret < 0)
+ return ret;
+ break;
+ case MAX77843_MUIC_ADC_SEND_END_BUTTON:
+ case MAX77843_MUIC_ADC_REMOTE_S1_BUTTON:
+ case MAX77843_MUIC_ADC_REMOTE_S2_BUTTON:
+ case MAX77843_MUIC_ADC_REMOTE_S3_BUTTON:
+ case MAX77843_MUIC_ADC_REMOTE_S4_BUTTON:
+ case MAX77843_MUIC_ADC_REMOTE_S5_BUTTON:
+ case MAX77843_MUIC_ADC_REMOTE_S6_BUTTON:
+ case MAX77843_MUIC_ADC_REMOTE_S7_BUTTON:
+ case MAX77843_MUIC_ADC_REMOTE_S8_BUTTON:
+ case MAX77843_MUIC_ADC_REMOTE_S9_BUTTON:
+ case MAX77843_MUIC_ADC_REMOTE_S10_BUTTON:
+ case MAX77843_MUIC_ADC_REMOTE_S11_BUTTON:
+ case MAX77843_MUIC_ADC_REMOTE_S12_BUTTON:
+ case MAX77843_MUIC_ADC_RESERVED_ACC_1:
+ case MAX77843_MUIC_ADC_RESERVED_ACC_2:
+ case MAX77843_MUIC_ADC_RESERVED_ACC_3:
+ case MAX77843_MUIC_ADC_RESERVED_ACC_4:
+ case MAX77843_MUIC_ADC_RESERVED_ACC_5:
+ case MAX77843_MUIC_ADC_AUDIO_DEVICE_TYPE2:
+ case MAX77843_MUIC_ADC_PHONE_POWERED_DEV:
+ case MAX77843_MUIC_ADC_TTY_CONVERTER:
+ case MAX77843_MUIC_ADC_UART_CABLE:
+ case MAX77843_MUIC_ADC_CEA936A_TYPE1_CHG:
+ case MAX77843_MUIC_ADC_AV_CABLE_NOLOAD:
+ case MAX77843_MUIC_ADC_CEA936A_TYPE2_CHG:
+ case MAX77843_MUIC_ADC_FACTORY_MODE_UART_ON:
+ case MAX77843_MUIC_ADC_AUDIO_DEVICE_TYPE1:
+ case MAX77843_MUIC_ADC_OPEN:
+ dev_err(info->dev,
+ "accessory is %s but it isn't used (adc:0x%x)\n",
+ attached ? "attached" : "detached", cable_type);
+ return -EAGAIN;
+ default:
+ dev_err(info->dev,
+ "failed to detect %s accessory (adc:0x%x)\n",
+ attached ? "attached" : "detached", cable_type);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int max77843_muic_chg_handler(struct max77843_muic_info *info)
+{
+ int ret, chg_type, gnd_type;
+ bool attached;
+
+ chg_type = max77843_muic_get_cable_type(info,
+ MAX77843_CABLE_GROUP_CHG, &attached);
+
+ dev_dbg(info->dev,
+ "external connector is %s(chg_type:0x%x, prev_chg_type:0x%x)\n",
+ attached ? "attached" : "detached",
+ chg_type, info->prev_chg_type);
+
+ switch (chg_type) {
+ case MAX77843_MUIC_CHG_USB:
+ ret = max77843_muic_set_path(info, CONTROL1_SW_USB, attached);
+ if (ret < 0)
+ return ret;
+
+ extcon_set_cable_state(info->edev, "USB", attached);
+ break;
+ case MAX77843_MUIC_CHG_DOWNSTREAM:
+ ret = max77843_muic_set_path(info, CONTROL1_SW_OPEN, attached);
+ if (ret < 0)
+ return ret;
+
+ extcon_set_cable_state(info->edev,
+ "CHARGER-DOWNSTREAM", attached);
+ break;
+ case MAX77843_MUIC_CHG_DEDICATED:
+ ret = max77843_muic_set_path(info, CONTROL1_SW_OPEN, attached);
+ if (ret < 0)
+ return ret;
+
+ extcon_set_cable_state(info->edev, "TA", attached);
+ break;
+ case MAX77843_MUIC_CHG_SPECIAL_500MA:
+ ret = max77843_muic_set_path(info, CONTROL1_SW_OPEN, attached);
+ if (ret < 0)
+ return ret;
+
+ extcon_set_cable_state(info->edev, "SLOW-CHAREGER", attached);
+ break;
+ case MAX77843_MUIC_CHG_SPECIAL_1A:
+ ret = max77843_muic_set_path(info, CONTROL1_SW_OPEN, attached);
+ if (ret < 0)
+ return ret;
+
+ extcon_set_cable_state(info->edev, "FAST-CHARGER", attached);
+ break;
+ case MAX77843_MUIC_CHG_GND:
+ gnd_type = max77843_muic_get_cable_type(info,
+ MAX77843_CABLE_GROUP_ADC_GND, &attached);
+
+		/* Charger cable on the MHL accessory was attached or detached */
+ if (gnd_type == MAX77843_MUIC_GND_MHL_VB)
+ extcon_set_cable_state(info->edev, "MHL-TA", true);
+ else if (gnd_type == MAX77843_MUIC_GND_MHL)
+ extcon_set_cable_state(info->edev, "MHL-TA", false);
+ break;
+ case MAX77843_MUIC_CHG_NONE:
+ break;
+ default:
+ dev_err(info->dev,
+ "failed to detect %s accessory (chg_type:0x%x)\n",
+ attached ? "attached" : "detached", chg_type);
+
+ max77843_muic_set_path(info, CONTROL1_SW_OPEN, attached);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static void max77843_muic_irq_work(struct work_struct *work)
+{
+ struct max77843_muic_info *info = container_of(work,
+ struct max77843_muic_info, irq_work);
+ struct max77843 *max77843 = info->max77843;
+ int ret = 0;
+
+ mutex_lock(&info->mutex);
+
+ ret = regmap_bulk_read(max77843->regmap_muic,
+ MAX77843_MUIC_REG_STATUS1, info->status,
+ MAX77843_MUIC_STATUS_NUM);
+ if (ret) {
+ dev_err(info->dev, "Cannot read STATUS registers\n");
+ mutex_unlock(&info->mutex);
+ return;
+ }
+
+ if (info->irq_adc) {
+ ret = max77843_muic_adc_handler(info);
+ if (ret)
+ dev_err(info->dev, "Unknown cable type\n");
+ info->irq_adc = false;
+ }
+
+ if (info->irq_chg) {
+ ret = max77843_muic_chg_handler(info);
+ if (ret)
+ dev_err(info->dev, "Unknown charger type\n");
+ info->irq_chg = false;
+ }
+
+ mutex_unlock(&info->mutex);
+}
+
+static irqreturn_t max77843_muic_irq_handler(int irq, void *data)
+{
+ struct max77843_muic_info *info = data;
+ int i, irq_type = -1;
+
+ for (i = 0; i < ARRAY_SIZE(max77843_muic_irqs); i++)
+ if (irq == max77843_muic_irqs[i].virq)
+ irq_type = max77843_muic_irqs[i].irq;
+
+ switch (irq_type) {
+ case MAX77843_MUIC_IRQ_INT1_ADC:
+ case MAX77843_MUIC_IRQ_INT1_ADCERROR:
+ case MAX77843_MUIC_IRQ_INT1_ADC1K:
+ info->irq_adc = true;
+ break;
+ case MAX77843_MUIC_IRQ_INT2_CHGTYP:
+ case MAX77843_MUIC_IRQ_INT2_CHGDETRUN:
+ case MAX77843_MUIC_IRQ_INT2_DCDTMR:
+ case MAX77843_MUIC_IRQ_INT2_DXOVP:
+ case MAX77843_MUIC_IRQ_INT2_VBVOLT:
+ info->irq_chg = true;
+ break;
+ case MAX77843_MUIC_IRQ_INT3_VBADC:
+ case MAX77843_MUIC_IRQ_INT3_VDNMON:
+ case MAX77843_MUIC_IRQ_INT3_DNRES:
+ case MAX77843_MUIC_IRQ_INT3_MPNACK:
+ case MAX77843_MUIC_IRQ_INT3_MRXBUFOW:
+ case MAX77843_MUIC_IRQ_INT3_MRXTRF:
+ case MAX77843_MUIC_IRQ_INT3_MRXPERR:
+ case MAX77843_MUIC_IRQ_INT3_MRXRDY:
+ break;
+ default:
+ dev_err(info->dev, "Cannot recognize IRQ(%d)\n", irq_type);
+ break;
+ }
+
+ schedule_work(&info->irq_work);
+
+ return IRQ_HANDLED;
+}
+
+static void max77843_muic_detect_cable_wq(struct work_struct *work)
+{
+ struct max77843_muic_info *info = container_of(to_delayed_work(work),
+ struct max77843_muic_info, wq_detcable);
+ struct max77843 *max77843 = info->max77843;
+ int chg_type, adc, ret;
+ bool attached;
+
+ mutex_lock(&info->mutex);
+
+ ret = regmap_bulk_read(max77843->regmap_muic,
+ MAX77843_MUIC_REG_STATUS1, info->status,
+ MAX77843_MUIC_STATUS_NUM);
+ if (ret) {
+ dev_err(info->dev, "Cannot read STATUS registers\n");
+ goto err_cable_wq;
+ }
+
+ adc = max77843_muic_get_cable_type(info,
+ MAX77843_CABLE_GROUP_ADC, &attached);
+ if (attached && adc != MAX77843_MUIC_ADC_OPEN) {
+ ret = max77843_muic_adc_handler(info);
+ if (ret < 0) {
+ dev_err(info->dev, "Cannot detect accessory\n");
+ goto err_cable_wq;
+ }
+ }
+
+ chg_type = max77843_muic_get_cable_type(info,
+ MAX77843_CABLE_GROUP_CHG, &attached);
+ if (attached && chg_type != MAX77843_MUIC_CHG_NONE) {
+ ret = max77843_muic_chg_handler(info);
+ if (ret < 0) {
+ dev_err(info->dev, "Cannot detect charger accessory\n");
+ goto err_cable_wq;
+ }
+ }
+
+err_cable_wq:
+ mutex_unlock(&info->mutex);
+}
+
+static int max77843_muic_set_debounce_time(struct max77843_muic_info *info,
+ enum max77843_muic_adc_debounce_time time)
+{
+ struct max77843 *max77843 = info->max77843;
+ int ret;
+
+ switch (time) {
+ case MAX77843_DEBOUNCE_TIME_5MS:
+ case MAX77843_DEBOUNCE_TIME_10MS:
+ case MAX77843_DEBOUNCE_TIME_25MS:
+ case MAX77843_DEBOUNCE_TIME_38_62MS:
+ ret = regmap_update_bits(max77843->regmap_muic,
+ MAX77843_MUIC_REG_CONTROL4,
+ MAX77843_MUIC_CONTROL4_ADCDBSET_MASK,
+ time << CONTROL4_ADCDBSET_SHIFT);
+ if (ret < 0) {
+ dev_err(info->dev, "Cannot write MUIC regmap\n");
+ return ret;
+ }
+ break;
+ default:
+ dev_err(info->dev, "Invalid ADC debounce time\n");
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int max77843_init_muic_regmap(struct max77843 *max77843)
+{
+ int ret;
+
+ max77843->i2c_muic = i2c_new_dummy(max77843->i2c->adapter,
+ I2C_ADDR_MUIC);
+ if (!max77843->i2c_muic) {
+ dev_err(&max77843->i2c->dev,
+ "Cannot allocate I2C device for MUIC\n");
+ return -ENOMEM;
+ }
+
+ i2c_set_clientdata(max77843->i2c_muic, max77843);
+
+ max77843->regmap_muic = devm_regmap_init_i2c(max77843->i2c_muic,
+ &max77843_muic_regmap_config);
+ if (IS_ERR(max77843->regmap_muic)) {
+ ret = PTR_ERR(max77843->regmap_muic);
+ goto err_muic_i2c;
+ }
+
+ ret = regmap_add_irq_chip(max77843->regmap_muic, max77843->irq,
+ IRQF_TRIGGER_LOW | IRQF_ONESHOT | IRQF_SHARED,
+ 0, &max77843_muic_irq_chip, &max77843->irq_data_muic);
+ if (ret < 0) {
+ dev_err(&max77843->i2c->dev, "Cannot add MUIC IRQ chip\n");
+ goto err_muic_i2c;
+ }
+
+ return 0;
+
+err_muic_i2c:
+ i2c_unregister_device(max77843->i2c_muic);
+
+ return ret;
+}
+
+static int max77843_muic_probe(struct platform_device *pdev)
+{
+ struct max77843 *max77843 = dev_get_drvdata(pdev->dev.parent);
+ struct max77843_muic_info *info;
+ unsigned int id;
+ int i, ret;
+
+ info = devm_kzalloc(&pdev->dev, sizeof(*info), GFP_KERNEL);
+ if (!info)
+ return -ENOMEM;
+
+ info->dev = &pdev->dev;
+ info->max77843 = max77843;
+
+ platform_set_drvdata(pdev, info);
+ mutex_init(&info->mutex);
+
+ /* Initialize i2c and regmap */
+ ret = max77843_init_muic_regmap(max77843);
+ if (ret) {
+ dev_err(&pdev->dev, "Failed to init MUIC regmap\n");
+ return ret;
+ }
+
+ /* Turn off auto detection configuration */
+ ret = regmap_update_bits(max77843->regmap_muic,
+ MAX77843_MUIC_REG_CONTROL4,
+ MAX77843_MUIC_CONTROL4_USBAUTO_MASK |
+ MAX77843_MUIC_CONTROL4_FCTAUTO_MASK,
+ CONTROL4_AUTO_DISABLE);
+
+ /* Initialize extcon device */
+ info->edev = devm_extcon_dev_allocate(&pdev->dev,
+ max77843_extcon_cable);
+ if (IS_ERR(info->edev)) {
+ dev_err(&pdev->dev, "Failed to allocate memory for extcon\n");
+ ret = -ENODEV;
+ goto err_muic_irq;
+ }
+
+ ret = devm_extcon_dev_register(&pdev->dev, info->edev);
+ if (ret) {
+ dev_err(&pdev->dev, "Failed to register extcon device\n");
+ goto err_muic_irq;
+ }
+
+ /* Set ADC debounce time */
+ max77843_muic_set_debounce_time(info, MAX77843_DEBOUNCE_TIME_25MS);
+
+ /* Set initial path for UART */
+ max77843_muic_set_path(info, CONTROL1_SW_UART, true);
+
+ /* Check revision number of MUIC device */
+ ret = regmap_read(max77843->regmap_muic, MAX77843_MUIC_REG_ID, &id);
+ if (ret < 0) {
+ dev_err(&pdev->dev, "Failed to read revision number\n");
+ goto err_muic_irq;
+ }
+ dev_info(info->dev, "MUIC device ID : 0x%x\n", id);
+
+ /* Support virtual irq domain for max77843 MUIC device */
+ INIT_WORK(&info->irq_work, max77843_muic_irq_work);
+
+ for (i = 0; i < ARRAY_SIZE(max77843_muic_irqs); i++) {
+ struct max77843_muic_irq *muic_irq = &max77843_muic_irqs[i];
+ unsigned int virq = 0;
+
+ virq = regmap_irq_get_virq(max77843->irq_data_muic,
+ muic_irq->irq);
+ if (virq <= 0) {
+ ret = -EINVAL;
+ goto err_muic_irq;
+ }
+ muic_irq->virq = virq;
+
+ ret = devm_request_threaded_irq(&pdev->dev, virq, NULL,
+ max77843_muic_irq_handler, IRQF_NO_SUSPEND,
+ muic_irq->name, info);
+ if (ret) {
+ dev_err(&pdev->dev,
+ "Failed to request irq (IRQ: %d, error: %d)\n",
+ muic_irq->irq, ret);
+ goto err_muic_irq;
+ }
+ }
+
+ /* Detect accessory after completing the initialization of platform */
+ INIT_DELAYED_WORK(&info->wq_detcable, max77843_muic_detect_cable_wq);
+ queue_delayed_work(system_power_efficient_wq,
+ &info->wq_detcable, msecs_to_jiffies(DELAY_MS_DEFAULT));
+
+ return 0;
+
+err_muic_irq:
+ regmap_del_irq_chip(max77843->irq, max77843->irq_data_muic);
+ i2c_unregister_device(max77843->i2c_muic);
+
+ return ret;
+}
+
+static int max77843_muic_remove(struct platform_device *pdev)
+{
+ struct max77843_muic_info *info = platform_get_drvdata(pdev);
+ struct max77843 *max77843 = info->max77843;
+
+ cancel_work_sync(&info->irq_work);
+ regmap_del_irq_chip(max77843->irq, max77843->irq_data_muic);
+ i2c_unregister_device(max77843->i2c_muic);
+
+ return 0;
+}
+
+static const struct platform_device_id max77843_muic_id[] = {
+ { "max77843-muic", },
+ { /* sentinel */ },
+};
+MODULE_DEVICE_TABLE(platform, max77843_muic_id);
+
+static struct platform_driver max77843_muic_driver = {
+ .driver = {
+ .name = "max77843-muic",
+ },
+ .probe = max77843_muic_probe,
+ .remove = max77843_muic_remove,
+ .id_table = max77843_muic_id,
+};
+
+static int __init max77843_muic_init(void)
+{
+ return platform_driver_register(&max77843_muic_driver);
+}
+subsys_initcall(max77843_muic_init);
+
+MODULE_DESCRIPTION("Maxim MAX77843 Extcon driver");
+MODULE_AUTHOR("Jaewon Kim <jaewon02.kim@samsung.com>");
+MODULE_LICENSE("GPL");
diff --git a/drivers/extcon/extcon-max8997.c b/drivers/extcon/extcon-max8997.c
index fc1678f..5774e56 100644
--- a/drivers/extcon/extcon-max8997.c
+++ b/drivers/extcon/extcon-max8997.c
@@ -579,8 +579,6 @@
dev_err(info->dev, "failed to handle MUIC interrupt\n");
mutex_unlock(&info->mutex);
-
- return;
}
static irqreturn_t max8997_muic_irq_handler(int irq, void *data)
@@ -689,8 +687,7 @@
muic_irq->name, info);
if (ret) {
dev_err(&pdev->dev,
- "failed: irq request (IRQ: %d,"
- " error :%d)\n",
+ "failed: irq request (IRQ: %d, error :%d)\n",
muic_irq->irq, ret);
goto err_irq;
}
diff --git a/drivers/extcon/extcon-rt8973a.c b/drivers/extcon/extcon-rt8973a.c
index a784b2d..9ccd5af 100644
--- a/drivers/extcon/extcon-rt8973a.c
+++ b/drivers/extcon/extcon-rt8973a.c
@@ -582,10 +582,8 @@
return -EINVAL;
info = devm_kzalloc(&i2c->dev, sizeof(*info), GFP_KERNEL);
- if (!info) {
- dev_err(&i2c->dev, "failed to allocate memory\n");
+ if (!info)
return -ENOMEM;
- }
i2c_set_clientdata(i2c, info);
info->dev = &i2c->dev;
@@ -681,7 +679,7 @@
return 0;
}
-static struct of_device_id rt8973a_dt_match[] = {
+static const struct of_device_id rt8973a_dt_match[] = {
{ .compatible = "richtek,rt8973a-muic" },
{ },
};
diff --git a/drivers/extcon/extcon-sm5502.c b/drivers/extcon/extcon-sm5502.c
index b0f7bd8..2f93cf3 100644
--- a/drivers/extcon/extcon-sm5502.c
+++ b/drivers/extcon/extcon-sm5502.c
@@ -359,8 +359,8 @@
break;
default:
dev_dbg(info->dev,
- "cannot identify the cable type: adc(0x%x) "
- "dev_type1(0x%x)\n", adc, dev_type1);
+ "cannot identify the cable type: adc(0x%x)\n",
+ adc);
return -EINVAL;
};
break;
@@ -659,7 +659,7 @@
return 0;
}
-static struct of_device_id sm5502_dt_match[] = {
+static const struct of_device_id sm5502_dt_match[] = {
{ .compatible = "siliconmitus,sm5502-muic" },
{ },
};
diff --git a/drivers/extcon/extcon-usb-gpio.c b/drivers/extcon/extcon-usb-gpio.c
new file mode 100644
index 0000000..de67fce
--- /dev/null
+++ b/drivers/extcon/extcon-usb-gpio.c
@@ -0,0 +1,237 @@
+/**
+ * drivers/extcon/extcon-usb-gpio.c - USB GPIO extcon driver
+ *
+ * Copyright (C) 2015 Texas Instruments Incorporated - http://www.ti.com
+ * Author: Roger Quadros <rogerq@ti.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/extcon.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/irq.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/of_gpio.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+#include <linux/workqueue.h>
+
+#define USB_GPIO_DEBOUNCE_MS 20 /* ms */
+
+struct usb_extcon_info {
+ struct device *dev;
+ struct extcon_dev *edev;
+
+ struct gpio_desc *id_gpiod;
+ int id_irq;
+
+ unsigned long debounce_jiffies;
+ struct delayed_work wq_detcable;
+};
+
+/* List of detectable cables */
+enum {
+ EXTCON_CABLE_USB = 0,
+ EXTCON_CABLE_USB_HOST,
+
+ EXTCON_CABLE_END,
+};
+
+static const char *usb_extcon_cable[] = {
+ [EXTCON_CABLE_USB] = "USB",
+ [EXTCON_CABLE_USB_HOST] = "USB-HOST",
+ NULL,
+};
+
+static void usb_extcon_detect_cable(struct work_struct *work)
+{
+ int id;
+ struct usb_extcon_info *info = container_of(to_delayed_work(work),
+ struct usb_extcon_info,
+ wq_detcable);
+
+ /* check ID and update cable state */
+ id = gpiod_get_value_cansleep(info->id_gpiod);
+ if (id) {
+ /*
+ * ID = 1 means USB HOST cable detached.
+		 * As there is no event for USB peripheral cable attach,
+ * we simulate USB peripheral attach here.
+ */
+ extcon_set_cable_state(info->edev,
+ usb_extcon_cable[EXTCON_CABLE_USB_HOST],
+ false);
+ extcon_set_cable_state(info->edev,
+ usb_extcon_cable[EXTCON_CABLE_USB],
+ true);
+ } else {
+ /*
+ * ID = 0 means USB HOST cable attached.
+		 * As there is no event for USB peripheral cable detach,
+ * we simulate USB peripheral detach here.
+ */
+ extcon_set_cable_state(info->edev,
+ usb_extcon_cable[EXTCON_CABLE_USB],
+ false);
+ extcon_set_cable_state(info->edev,
+ usb_extcon_cable[EXTCON_CABLE_USB_HOST],
+ true);
+ }
+}
+
+static irqreturn_t usb_irq_handler(int irq, void *dev_id)
+{
+ struct usb_extcon_info *info = dev_id;
+
+ queue_delayed_work(system_power_efficient_wq, &info->wq_detcable,
+ info->debounce_jiffies);
+
+ return IRQ_HANDLED;
+}
+
+static int usb_extcon_probe(struct platform_device *pdev)
+{
+ struct device *dev = &pdev->dev;
+ struct device_node *np = dev->of_node;
+ struct usb_extcon_info *info;
+ int ret;
+
+ if (!np)
+ return -EINVAL;
+
+ info = devm_kzalloc(&pdev->dev, sizeof(*info), GFP_KERNEL);
+ if (!info)
+ return -ENOMEM;
+
+ info->dev = dev;
+ info->id_gpiod = devm_gpiod_get(&pdev->dev, "id");
+ if (IS_ERR(info->id_gpiod)) {
+ dev_err(dev, "failed to get ID GPIO\n");
+ return PTR_ERR(info->id_gpiod);
+ }
+
+ ret = gpiod_set_debounce(info->id_gpiod,
+ USB_GPIO_DEBOUNCE_MS * 1000);
+ if (ret < 0)
+ info->debounce_jiffies = msecs_to_jiffies(USB_GPIO_DEBOUNCE_MS);
+
+ INIT_DELAYED_WORK(&info->wq_detcable, usb_extcon_detect_cable);
+
+ info->id_irq = gpiod_to_irq(info->id_gpiod);
+ if (info->id_irq < 0) {
+ dev_err(dev, "failed to get ID IRQ\n");
+ return info->id_irq;
+ }
+
+ ret = devm_request_threaded_irq(dev, info->id_irq, NULL,
+ usb_irq_handler,
+ IRQF_TRIGGER_RISING |
+ IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
+ pdev->name, info);
+ if (ret < 0) {
+ dev_err(dev, "failed to request handler for ID IRQ\n");
+ return ret;
+ }
+
+ info->edev = devm_extcon_dev_allocate(dev, usb_extcon_cable);
+ if (IS_ERR(info->edev)) {
+ dev_err(dev, "failed to allocate extcon device\n");
+ return -ENOMEM;
+ }
+
+ ret = devm_extcon_dev_register(dev, info->edev);
+ if (ret < 0) {
+ dev_err(dev, "failed to register extcon device\n");
+ return ret;
+ }
+
+ platform_set_drvdata(pdev, info);
+ device_init_wakeup(dev, 1);
+
+ /* Perform initial detection */
+ usb_extcon_detect_cable(&info->wq_detcable.work);
+
+ return 0;
+}
+
+static int usb_extcon_remove(struct platform_device *pdev)
+{
+ struct usb_extcon_info *info = platform_get_drvdata(pdev);
+
+ cancel_delayed_work_sync(&info->wq_detcable);
+
+ return 0;
+}
+
+#ifdef CONFIG_PM_SLEEP
+static int usb_extcon_suspend(struct device *dev)
+{
+ struct usb_extcon_info *info = dev_get_drvdata(dev);
+ int ret = 0;
+
+ if (device_may_wakeup(dev)) {
+ ret = enable_irq_wake(info->id_irq);
+ if (ret)
+ return ret;
+ }
+
+ /*
+ * We don't want to process any IRQs after this point
+ * as GPIOs used behind I2C subsystem might not be
+ * accessible until resume completes. So disable IRQ.
+ */
+ disable_irq(info->id_irq);
+
+ return ret;
+}
+
+static int usb_extcon_resume(struct device *dev)
+{
+ struct usb_extcon_info *info = dev_get_drvdata(dev);
+ int ret = 0;
+
+ if (device_may_wakeup(dev)) {
+ ret = disable_irq_wake(info->id_irq);
+ if (ret)
+ return ret;
+ }
+
+ enable_irq(info->id_irq);
+
+ return ret;
+}
+#endif
+
+static SIMPLE_DEV_PM_OPS(usb_extcon_pm_ops,
+ usb_extcon_suspend, usb_extcon_resume);
+
+static const struct of_device_id usb_extcon_dt_match[] = {
+ { .compatible = "linux,extcon-usb-gpio", },
+ { /* sentinel */ }
+};
+MODULE_DEVICE_TABLE(of, usb_extcon_dt_match);
+
+static struct platform_driver usb_extcon_driver = {
+ .probe = usb_extcon_probe,
+ .remove = usb_extcon_remove,
+ .driver = {
+ .name = "extcon-usb-gpio",
+ .pm = &usb_extcon_pm_ops,
+ .of_match_table = usb_extcon_dt_match,
+ },
+};
+
+module_platform_driver(usb_extcon_driver);
+
+MODULE_AUTHOR("Roger Quadros <rogerq@ti.com>");
+MODULE_DESCRIPTION("USB GPIO extcon driver");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/extcon/extcon-class.c b/drivers/extcon/extcon.c
similarity index 96%
rename from drivers/extcon/extcon-class.c
rename to drivers/extcon/extcon.c
index 8319f25..4c9f165 100644
--- a/drivers/extcon/extcon-class.c
+++ b/drivers/extcon/extcon.c
@@ -158,6 +158,7 @@
/* Optional callback given by the user */
if (edev->print_name) {
int ret = edev->print_name(edev, buf);
+
if (ret >= 0)
return ret;
}
@@ -444,6 +445,9 @@
const char *extcon_name, const char *cable_name,
struct notifier_block *nb)
{
+ unsigned long flags;
+ int ret;
+
if (!obj || !cable_name || !nb)
return -EINVAL;
@@ -461,8 +465,11 @@
obj->internal_nb.notifier_call = _call_per_cable;
- return raw_notifier_chain_register(&obj->edev->nh,
+ spin_lock_irqsave(&obj->edev->lock, flags);
+ ret = raw_notifier_chain_register(&obj->edev->nh,
&obj->internal_nb);
+ spin_unlock_irqrestore(&obj->edev->lock, flags);
+ return ret;
} else {
struct class_dev_iter iter;
struct extcon_dev *extd;
@@ -495,10 +502,17 @@
*/
int extcon_unregister_interest(struct extcon_specific_cable_nb *obj)
{
+ unsigned long flags;
+ int ret;
+
if (!obj)
return -EINVAL;
- return raw_notifier_chain_unregister(&obj->edev->nh, &obj->internal_nb);
+ spin_lock_irqsave(&obj->edev->lock, flags);
+ ret = raw_notifier_chain_unregister(&obj->edev->nh, &obj->internal_nb);
+ spin_unlock_irqrestore(&obj->edev->lock, flags);
+
+ return ret;
}
EXPORT_SYMBOL_GPL(extcon_unregister_interest);
@@ -515,7 +529,14 @@
int extcon_register_notifier(struct extcon_dev *edev,
struct notifier_block *nb)
{
- return raw_notifier_chain_register(&edev->nh, nb);
+ unsigned long flags;
+ int ret;
+
+ spin_lock_irqsave(&edev->lock, flags);
+ ret = raw_notifier_chain_register(&edev->nh, nb);
+ spin_unlock_irqrestore(&edev->lock, flags);
+
+ return ret;
}
EXPORT_SYMBOL_GPL(extcon_register_notifier);
@@ -527,7 +548,14 @@
int extcon_unregister_notifier(struct extcon_dev *edev,
struct notifier_block *nb)
{
- return raw_notifier_chain_unregister(&edev->nh, nb);
+ unsigned long flags;
+ int ret;
+
+ spin_lock_irqsave(&edev->lock, flags);
+ ret = raw_notifier_chain_unregister(&edev->nh, nb);
+ spin_unlock_irqrestore(&edev->lock, flags);
+
+ return ret;
}
EXPORT_SYMBOL_GPL(extcon_unregister_notifier);
diff --git a/drivers/firmware/pcdp.c b/drivers/firmware/pcdp.c
index a330492..75273a25 100644
--- a/drivers/firmware/pcdp.c
+++ b/drivers/firmware/pcdp.c
@@ -15,7 +15,7 @@
#include <linux/console.h>
#include <linux/efi.h>
#include <linux/serial.h>
-#include <linux/serial_8250.h>
+#include <linux/serial_core.h>
#include <asm/vga.h>
#include "pcdp.h"
@@ -43,7 +43,7 @@
}
add_preferred_console("uart", 8250, &options[9]);
- return setup_early_serial8250_console(options);
+ return setup_earlycon(options);
#else
return -ENODEV;
#endif
diff --git a/drivers/gpio/Kconfig b/drivers/gpio/Kconfig
index dc1aaa8..38d875d 100644
--- a/drivers/gpio/Kconfig
+++ b/drivers/gpio/Kconfig
@@ -90,27 +90,11 @@
# put drivers in the right section, in alphabetical order
-config GPIO_DA9052
- tristate "Dialog DA9052 GPIO"
- depends on PMIC_DA9052
- help
- Say yes here to enable the GPIO driver for the DA9052 chip.
-
-config GPIO_DA9055
- tristate "Dialog Semiconductor DA9055 GPIO"
- depends on MFD_DA9055
- help
- Say yes here to enable the GPIO driver for the DA9055 chip.
-
- The Dialog DA9055 PMIC chip has 3 GPIO pins that can be
- be controller by this driver.
-
- If driver is built as a module it will be called gpio-da9055.
-
+# This symbol is selected by both I2C and SPI expanders
config GPIO_MAX730X
tristate
-comment "Memory mapped GPIO drivers:"
+menu "Memory mapped GPIO drivers"
config GPIO_74XX_MMIO
tristate "GPIO driver for 74xx-ICs with MMIO access"
@@ -126,6 +110,22 @@
8 bits: 74244 (Input), 74273 (Output)
16 bits: 741624 (Input), 7416374 (Output)
+config GPIO_ALTERA
+ tristate "Altera GPIO"
+ depends on OF_GPIO
+ select GPIO_GENERIC
+ select GPIOLIB_IRQCHIP
+ help
+ Say Y or M here to build support for the Altera PIO device.
+
+ If driver is built as a module it will be called gpio-altera.
+
+config GPIO_BCM_KONA
+ bool "Broadcom Kona GPIO"
+ depends on OF_GPIO && (ARCH_BCM_MOBILE || COMPILE_TEST)
+ help
+ Turn on GPIO support for Broadcom "Kona" chips.
+
config GPIO_CLPS711X
tristate "CLPS711X GPIO support"
depends on ARCH_CLPS711X || COMPILE_TEST
@@ -140,28 +140,14 @@
help
Say yes here to enable GPIO support for TI Davinci/Keystone SoCs.
-config GPIO_GENERIC_PLATFORM
- tristate "Generic memory-mapped GPIO controller support (MMIO platform device)"
- select GPIO_GENERIC
- help
- Say yes here to support basic platform_device memory-mapped GPIO controllers.
-
config GPIO_DWAPB
tristate "Synopsys DesignWare APB GPIO driver"
- depends on ARM
- depends on OF_GPIO
select GPIO_GENERIC
select GENERIC_IRQ_CHIP
help
Say Y or M here to build support for the Synopsys DesignWare APB
GPIO block.
-config GPIO_IT8761E
- tristate "IT8761E GPIO support"
- depends on X86 # unconditional access to IO space.
- help
- Say yes here to support GPIO functionality of IT8761E super I/O chip.
-
config GPIO_EM
tristate "Emma Mobile GPIO"
depends on ARM && OF_GPIO
@@ -173,11 +159,90 @@
depends on ARCH_EP93XX
select GPIO_GENERIC
-config GPIO_ZEVIO
- bool "LSI ZEVIO SoC memory mapped GPIOs"
- depends on ARM && OF_GPIO
+config GPIO_F7188X
+ tristate "F71869, F71869A, F71882FG and F71889F GPIO support"
+ depends on X86
help
- Say yes here to support the GPIO controller in LSI ZEVIO SoCs.
+ This option enables support for GPIOs found on Fintek Super-I/O
+ chips F71869, F71869A, F71882FG and F71889F.
+
+ To compile this driver as a module, choose M here: the module will
+ be called f7188x-gpio.
+
+config GPIO_GE_FPGA
+ bool "GE FPGA based GPIO"
+ depends on GE_FPGA
+ select GPIO_GENERIC
+ help
+ Support for common GPIO functionality provided on some GE Single Board
+ Computers.
+
+ This driver provides basic support (configure as input or output, read
+ and write pin state) for GPIO implemented in a number of GE single
+ board computers.
+
+config GPIO_GENERIC_PLATFORM
+ tristate "Generic memory-mapped GPIO controller support (MMIO platform device)"
+ select GPIO_GENERIC
+ help
+ Say yes here to support basic platform_device memory-mapped GPIO controllers.
+
+config GPIO_GRGPIO
+ tristate "Aeroflex Gaisler GRGPIO support"
+ depends on OF
+ select GPIO_GENERIC
+ select IRQ_DOMAIN
+ help
+ Select this to support Aeroflex Gaisler GRGPIO cores from the GRLIB
+ VHDL IP core library.
+
+config GPIO_ICH
+ tristate "Intel ICH GPIO"
+ depends on PCI && X86
+ select MFD_CORE
+ select LPC_ICH
+ help
+ Say yes here to support the GPIO functionality of a number of Intel
+ ICH-based chipsets. Currently supported devices: ICH6, ICH7, ICH8
+ ICH9, ICH10, Series 5/3400 (eg Ibex Peak), Series 6/C200 (eg
+ Cougar Point), NM10 (Tiger Point), and 3100 (Whitmore Lake).
+
+ If unsure, say N.
+
+config GPIO_IOP
+ tristate "Intel IOP GPIO"
+ depends on ARM && (ARCH_IOP32X || ARCH_IOP33X)
+ help
+ Say yes here to support the GPIO functionality of a number of Intel
+ IOP32X or IOP33X.
+
+ If unsure, say N.
+
+config GPIO_IT8761E
+ tristate "IT8761E GPIO support"
+ depends on X86 # unconditional access to IO space.
+ help
+ Say yes here to support GPIO functionality of IT8761E super I/O chip.
+
+config GPIO_LOONGSON
+ bool "Loongson-2/3 GPIO support"
+ depends on CPU_LOONGSON2 || CPU_LOONGSON3
+ help
+ driver for GPIO functionality on Loongson-2F/3A/3B processors.
+
+config GPIO_LYNXPOINT
+ tristate "Intel Lynxpoint GPIO support"
+ depends on ACPI && X86
+ select GPIOLIB_IRQCHIP
+ help
+ driver for GPIO functionality on Intel Lynxpoint PCH chipset
+ Requires ACPI device enumeration code to set up a platform device.
+
+config GPIO_MB86S7X
+ bool "GPIO support for Fujitsu MB86S7x Platforms"
+ depends on ARCH_MB86S7X
+ help
+ Say yes here to support the GPIO controller in Fujitsu MB86S70 SoCs.
config GPIO_MM_LANTIQ
bool "Lantiq Memory mapped GPIOs"
@@ -187,22 +252,6 @@
(EBU) found on Lantiq SoCs. The gpios are output only as they are
created by attaching a 16bit latch to the bus.
-config GPIO_F7188X
- tristate "F71882FG and F71889F GPIO support"
- depends on X86
- help
- This option enables support for GPIOs found on Fintek Super-I/O
- chips F71882FG and F71889F.
-
- To compile this driver as a module, choose M here: the module will
- be called f7188x-gpio.
-
-config GPIO_MB86S7X
- bool "GPIO support for Fujitsu MB86S7x Platforms"
- depends on ARCH_MB86S7X
- help
- Say yes here to support the GPIO controller in Fujitsu MB86S70 SoCs.
-
config GPIO_MOXART
bool "MOXART GPIO support"
depends on ARCH_MOXART
@@ -303,6 +352,33 @@
Legacy GPIO support. Use only for platforms without support for
pinctrl.
+config GPIO_SCH
+ tristate "Intel SCH/TunnelCreek/Centerton/Quark X1000 GPIO"
+ depends on PCI && X86
+ select MFD_CORE
+ select LPC_SCH
+ help
+ Say yes here to support GPIO interface on Intel Poulsbo SCH,
+ Intel Tunnel Creek processor, Intel Centerton processor or
+ Intel Quark X1000 SoC.
+
+ The Intel SCH contains a total of 14 GPIO pins. Ten GPIOs are
+ powered by the core power rail and are turned off during sleep
+ modes (S3 and higher). The remaining four GPIOs are powered by
+ the Intel SCH suspend power supply. These GPIOs remain
+ active during S3. The suspend powered GPIOs can be used to wake the
+ system from the Suspend-to-RAM state.
+
+ The Intel Tunnel Creek processor has 5 GPIOs powered by the
+ core power rail and 9 from suspend power supply.
+
+ The Intel Centerton processor has a total of 30 GPIO pins.
+ Twenty-one are powered by the core power rail and 9 from the
+ suspend power supply.
+
+ The Intel Quark X1000 SoC has 2 GPIOs powered by the core
+ power well and 6 from the suspend power well.
+
config GPIO_SCH311X
tristate "SMSC SCH311x SuperI/O GPIO"
help
@@ -327,12 +403,27 @@
Say yes here to support the STA2x11/ConneXt GPIO device.
The GPIO module has 128 GPIO pins with alternate functions.
+config GPIO_STP_XWAY
+ bool "XWAY STP GPIOs"
+ depends on SOC_XWAY
+ help
+	  This enables support for the Serial To Parallel (STP) unit found on
+	  XWAY SoCs. The STP allows the SoC to drive a shift register cascade
+	  of up to 24 bits. This peripheral is aimed at driving LEDs.
+	  Some of the GPIOs/LEDs can be auto-updated by the SoC with DSL and
+	  PHY status.
+
config GPIO_SYSCON
tristate "GPIO based on SYSCON"
depends on MFD_SYSCON && OF
help
Say yes here to support GPIO functionality though SYSCON driver.
+config GPIO_TB10X
+ bool
+ select GENERIC_IRQ_CHIP
+ select OF_GPIO
+
config GPIO_TS5500
tristate "TS-5500 DIO blocks and compatibles"
depends on TS5500 || COMPILE_TEST
@@ -364,6 +455,24 @@
help
Say yes here to support Vybrid vf610 GPIOs.
+config GPIO_VR41XX
+ tristate "NEC VR4100 series General-purpose I/O Uint support"
+ depends on CPU_VR41XX
+ help
+	  Say yes here to support the NEC VR4100 series General-purpose I/O Unit
+
+config GPIO_VX855
+ tristate "VIA VX855/VX875 GPIO"
+ depends on PCI
+ select MFD_CORE
+ select MFD_VX855
+ help
+ Support access to the VX855/VX875 GPIO lines through the gpio library.
+
+ This driver provides common support for accessing the device,
+ additional drivers must be enabled in order to use the
+ functionality of the device.
+
config GPIO_XGENE
bool "APM X-Gene GPIO controller support"
depends on ARM64 && OF_GPIO
@@ -387,13 +496,6 @@
help
Say yes here to support the Xilinx FPGA GPIO device
-config GPIO_ZYNQ
- tristate "Xilinx Zynq GPIO support"
- depends on ARCH_ZYNQ
- select GPIOLIB_IRQCHIP
- help
- Say yes here to support Xilinx Zynq GPIO controller.
-
config GPIO_XTENSA
bool "Xtensa GPIO32 support"
depends on XTENSA
@@ -403,135 +505,49 @@
Say yes here to support the Xtensa internal GPIO32 IMPWIRE (input)
and EXPSTATE (output) ports
-config GPIO_VR41XX
- tristate "NEC VR4100 series General-purpose I/O Uint support"
- depends on CPU_VR41XX
+config GPIO_ZEVIO
+ bool "LSI ZEVIO SoC memory mapped GPIOs"
+ depends on ARM && OF_GPIO
help
- Say yes here to support the NEC VR4100 series General-purpose I/O Uint
+ Say yes here to support the GPIO controller in LSI ZEVIO SoCs.
-config GPIO_SCH
- tristate "Intel SCH/TunnelCreek/Centerton/Quark X1000 GPIO"
- depends on PCI && X86
- select MFD_CORE
- select LPC_SCH
- help
- Say yes here to support GPIO interface on Intel Poulsbo SCH,
- Intel Tunnel Creek processor, Intel Centerton processor or
- Intel Quark X1000 SoC.
-
- The Intel SCH contains a total of 14 GPIO pins. Ten GPIOs are
- powered by the core power rail and are turned off during sleep
- modes (S3 and higher). The remaining four GPIOs are powered by
- the Intel SCH suspend power supply. These GPIOs remain
- active during S3. The suspend powered GPIOs can be used to wake the
- system from the Suspend-to-RAM state.
-
- The Intel Tunnel Creek processor has 5 GPIOs powered by the
- core power rail and 9 from suspend power supply.
-
- The Intel Centerton processor has a total of 30 GPIO pins.
- Twenty-one are powered by the core power rail and 9 from the
- suspend power supply.
-
- The Intel Quark X1000 SoC has 2 GPIOs powered by the core
- power well and 6 from the suspend power well.
-
-config GPIO_ICH
- tristate "Intel ICH GPIO"
- depends on PCI && X86
- select MFD_CORE
- select LPC_ICH
- help
- Say yes here to support the GPIO functionality of a number of Intel
- ICH-based chipsets. Currently supported devices: ICH6, ICH7, ICH8
- ICH9, ICH10, Series 5/3400 (eg Ibex Peak), Series 6/C200 (eg
- Cougar Point), NM10 (Tiger Point), and 3100 (Whitmore Lake).
-
- If unsure, say N.
-
-config GPIO_IOP
- tristate "Intel IOP GPIO"
- depends on ARM && (ARCH_IOP32X || ARCH_IOP33X)
- help
- Say yes here to support the GPIO functionality of a number of Intel
- IOP32X or IOP33X.
-
- If unsure, say N.
-
-config GPIO_VX855
- tristate "VIA VX855/VX875 GPIO"
- depends on PCI
- select MFD_CORE
- select MFD_VX855
- help
- Support access to the VX855/VX875 GPIO lines through the gpio library.
-
- This driver provides common support for accessing the device,
- additional drivers must be enabled in order to use the
- functionality of the device.
-
-config GPIO_GE_FPGA
- bool "GE FPGA based GPIO"
- depends on GE_FPGA
- select GPIO_GENERIC
- help
- Support for common GPIO functionality provided on some GE Single Board
- Computers.
-
- This driver provides basic support (configure as input or output, read
- and write pin state) for GPIO implemented in a number of GE single
- board computers.
-
-config GPIO_LYNXPOINT
- tristate "Intel Lynxpoint GPIO support"
- depends on ACPI && X86
+config GPIO_ZYNQ
+ tristate "Xilinx Zynq GPIO support"
+ depends on ARCH_ZYNQ
select GPIOLIB_IRQCHIP
help
- driver for GPIO functionality on Intel Lynxpoint PCH chipset
- Requires ACPI device enumeration code to set up a platform device.
+ Say yes here to support Xilinx Zynq GPIO controller.
-config GPIO_GRGPIO
- tristate "Aeroflex Gaisler GRGPIO support"
- depends on OF
- select GPIO_GENERIC
- select IRQ_DOMAIN
+endmenu
+
+menu "I2C GPIO expanders"
+ depends on I2C
+
+config GPIO_ADP5588
+ tristate "ADP5588 I2C GPIO expander"
+ depends on I2C
help
- Select this to support Aeroflex Gaisler GRGPIO cores from the GRLIB
- VHDL IP core library.
+ This option enables support for 18 GPIOs found
+ on Analog Devices ADP5588 GPIO Expanders.
-config GPIO_TB10X
- bool
- select GENERIC_IRQ_CHIP
- select OF_GPIO
-
-comment "I2C GPIO expanders:"
-
-config GPIO_ARIZONA
- tristate "Wolfson Microelectronics Arizona class devices"
- depends on MFD_ARIZONA
+config GPIO_ADP5588_IRQ
+ bool "Interrupt controller support for ADP5588"
+ depends on GPIO_ADP5588=y
help
- Support for GPIOs on Wolfson Arizona class devices.
+ Say yes here to enable the adp5588 to be used as an interrupt
+ controller. It requires the driver to be built in the kernel.
-config GPIO_CRYSTAL_COVE
- tristate "GPIO support for Crystal Cove PMIC"
- depends on INTEL_SOC_PMIC
+config GPIO_ADNP
+ tristate "Avionic Design N-bit GPIO expander"
+ depends on I2C && OF_GPIO
select GPIOLIB_IRQCHIP
help
- Support for GPIO pins on Crystal Cove PMIC.
-
- Say Yes if you have a Intel SoC based tablet with Crystal Cove PMIC
- inside.
-
- This driver can also be built as a module. If so, the module will be
- called gpio-crystalcove.
-
-config GPIO_LP3943
- tristate "TI/National Semiconductor LP3943 GPIO expander"
- depends on MFD_LP3943
- help
- GPIO driver for LP3943 MFD.
- LP3943 can be used as a GPIO expander which provides up to 16 GPIOs.
- Open drain outputs are required for this usage.
+ This option enables support for N GPIOs found on Avionic Design
+ I2C GPIO expanders. The register space will be extended by powers
+ of two, so the controller will need to accommodate for that. For
+ example: if a controller provides 48 pins, 6 registers will be
+ enough to represent all pins, but the driver will assume a
+ register layout for 64 pins (8 registers).
config GPIO_MAX7300
tristate "Maxim MAX7300 GPIO expander"
@@ -543,7 +559,6 @@
config GPIO_MAX732X
tristate "MAX7319, MAX7320-7327 I2C Port Expanders"
depends on I2C
- select IRQ_DOMAIN
help
Say yes here to support the MAX7319, MAX7320-7327 series of I2C
Port Expanders. Each IO port on these chips has a fixed role of
@@ -563,6 +578,7 @@
config GPIO_MAX732X_IRQ
bool "Interrupt controller support for MAX732x"
depends on GPIO_MAX732X=y
+ select GPIOLIB_IRQCHIP
help
Say yes here to enable the max732x to be used as an interrupt
controller. It requires the driver to be built in the kernel.
@@ -604,6 +620,7 @@
config GPIO_PCF857X
tristate "PCF857x, PCA{85,96}7x, and MAX732[89] I2C GPIO expanders"
depends on I2C
+ select GPIOLIB_IRQCHIP
select IRQ_DOMAIN
help
Say yes here to provide access to most "quasi-bidirectional" I2C
@@ -626,15 +643,6 @@
This driver provides an in-kernel interface to those GPIOs using
platform-neutral GPIO calls.
-config GPIO_RC5T583
- bool "RICOH RC5T583 GPIO"
- depends on MFD_RC5T583
- help
- Select this option to enable GPIO driver for the Ricoh RC5T583
- chip family.
- This driver provides the support for driving/reading the gpio pins
- of RC5T583 device through standard gpio library.
-
config GPIO_SX150X
bool "Semtech SX150x I2C GPIO expander"
depends on I2C=y
@@ -647,6 +655,124 @@
8 bits: sx1508q
16 bits: sx1509q
+endmenu
+
+menu "MFD GPIO expanders"
+
+config GPIO_ADP5520
+ tristate "GPIO Support for ADP5520 PMIC"
+ depends on PMIC_ADP5520
+ help
+ This option enables support for on-chip GPIO found
+ on Analog Devices ADP5520 PMICs.
+
+config GPIO_ARIZONA
+ tristate "Wolfson Microelectronics Arizona class devices"
+ depends on MFD_ARIZONA
+ help
+ Support for GPIOs on Wolfson Arizona class devices.
+
+config GPIO_CRYSTAL_COVE
+ tristate "GPIO support for Crystal Cove PMIC"
+ depends on INTEL_SOC_PMIC
+ select GPIOLIB_IRQCHIP
+ help
+ Support for GPIO pins on Crystal Cove PMIC.
+
+ Say Yes if you have an Intel SoC based tablet with Crystal Cove PMIC
+ inside.
+
+ This driver can also be built as a module. If so, the module will be
+ called gpio-crystalcove.
+
+config GPIO_CS5535
+ tristate "AMD CS5535/CS5536 GPIO support"
+ depends on MFD_CS5535
+ help
+ The AMD CS5535 and CS5536 southbridges support 28 GPIO pins that
+ can be used for quite a number of things. The CS5535/6 is found on
+ AMD Geode and Lemote Yeeloong devices.
+
+ If unsure, say N.
+
+config GPIO_DA9052
+ tristate "Dialog DA9052 GPIO"
+ depends on PMIC_DA9052
+ help
+ Say yes here to enable the GPIO driver for the DA9052 chip.
+
+config GPIO_DA9055
+ tristate "Dialog Semiconductor DA9055 GPIO"
+ depends on MFD_DA9055
+ help
+ Say yes here to enable the GPIO driver for the DA9055 chip.
+
+ The Dialog DA9055 PMIC chip has 3 GPIO pins that can be
+ controlled by this driver.
+
+ If driver is built as a module it will be called gpio-da9055.
+
+config GPIO_DLN2
+ tristate "Diolan DLN2 GPIO support"
+ depends on MFD_DLN2
+ select GPIOLIB_IRQCHIP
+
+ help
+ Select this option to enable GPIO driver for the Diolan DLN2
+ board.
+
+ This driver can also be built as a module. If so, the module
+ will be called gpio-dln2.
+
+config GPIO_JANZ_TTL
+ tristate "Janz VMOD-TTL Digital IO Module"
+ depends on MFD_JANZ_CMODIO
+ help
+ This enables support for the Janz VMOD-TTL Digital IO module.
+ This driver provides support for driving the pins in output
+ mode only. Input mode is not supported.
+
+config GPIO_KEMPLD
+ tristate "Kontron ETX / COMexpress GPIO"
+ depends on MFD_KEMPLD
+ help
+ This enables support for the PLD GPIO interface on some Kontron ETX
+ and COMexpress (ETXexpress) modules.
+
+ This driver can also be built as a module. If so, the module will be
+ called gpio-kempld.
+
+config GPIO_LP3943
+ tristate "TI/National Semiconductor LP3943 GPIO expander"
+ depends on MFD_LP3943
+ help
+ GPIO driver for LP3943 MFD.
+ LP3943 can be used as a GPIO expander which provides up to 16 GPIOs.
+ Open drain outputs are required for this usage.
+
+config GPIO_MSIC
+ bool "Intel MSIC mixed signal gpio support"
+ depends on MFD_INTEL_MSIC
+ help
+ Enable support for GPIO on intel MSIC controllers found in
+ intel MID devices
+
+config GPIO_PALMAS
+ bool "TI PALMAS series PMICs GPIO"
+ depends on MFD_PALMAS
+ help
+ Select this option to enable GPIO driver for the TI PALMAS
+ series chip family.
+
+config GPIO_RC5T583
+ bool "RICOH RC5T583 GPIO"
+ depends on MFD_RC5T583
+ help
+ Select this option to enable GPIO driver for the Ricoh RC5T583
+ chip family.
+ This driver provides the support for driving/reading the gpio pins
+ of RC5T583 device through standard gpio library.
+
config GPIO_STMPE
bool "STMPE GPIOs"
depends on MFD_STMPE
@@ -656,16 +782,6 @@
This enables support for the GPIOs found on the STMPE I/O
Expanders.
-config GPIO_STP_XWAY
- bool "XWAY STP GPIOs"
- depends on SOC_XWAY
- help
- This enables support for the Serial To Parallel (STP) unit found on
- XWAY SoC. The STP allows the SoC to drive a shift registers cascade,
- that can be up to 24 bit. This peripheral is aimed at driving leds.
- Some of the gpios/leds can be auto updated by the soc with dsl and
- phy status.
-
config GPIO_TC3589X
bool "TC3589X GPIOs"
depends on MFD_TC3589X
@@ -675,6 +791,26 @@
This enables support for the GPIOs found on the TC3589X
I/O Expander.
+config GPIO_TIMBERDALE
+ bool "Support for timberdale GPIO IP"
+ depends on MFD_TIMBERDALE
+ ---help---
+ Add support for the GPIO IP in the timberdale FPGA.
+
+config GPIO_TPS6586X
+ bool "TPS6586X GPIO"
+ depends on MFD_TPS6586X
+ help
+ Select this option to enable GPIO driver for the TPS6586X
+ chip family.
+
+config GPIO_TPS65910
+ bool "TPS65910 GPIO"
+ depends on MFD_TPS65910
+ help
+ Select this option to enable GPIO driver for the TPS65910
+ chip family.
+
config GPIO_TPS65912
tristate "TI TPS65912 GPIO"
depends on (MFD_TPS65912_I2C || MFD_TPS65912_SPI)
@@ -695,6 +831,13 @@
Say yes here to access the GPO signals of twl6040
audio chip from Texas Instruments.
+config GPIO_UCB1400
+ tristate "Philips UCB1400 GPIO"
+ depends on UCB1400_CORE
+ help
+ This enables support for the Philips UCB1400 GPIO pins.
+ The UCB1400 is an AC97 audio codec.
+
config GPIO_WM831X
tristate "WM831x GPIOs"
depends on MFD_WM831X
@@ -716,50 +859,22 @@
Say yes here to access the GPIO signals of WM8994 audio hub
CODECs from Wolfson Microelectronics.
-config GPIO_ADP5520
- tristate "GPIO Support for ADP5520 PMIC"
- depends on PMIC_ADP5520
+endmenu
+
+menu "PCI GPIO expanders"
+ depends on PCI
+
+config GPIO_AMD8111
+ tristate "AMD 8111 GPIO driver"
+ depends on PCI
help
- This option enables support for on-chip GPIO found
- on Analog Devices ADP5520 PMICs.
+ The AMD 8111 south bridge contains 32 GPIO pins which can be used.
-config GPIO_ADP5588
- tristate "ADP5588 I2C GPIO expander"
- depends on I2C
- help
- This option enables support for 18 GPIOs found
- on Analog Devices ADP5588 GPIO Expanders.
+ Note that usually system firmware/ACPI handles GPIO pins on their
+ own and users might easily break their systems with careless usage
+ of this driver!
-config GPIO_ADP5588_IRQ
- bool "Interrupt controller support for ADP5588"
- depends on GPIO_ADP5588=y
- help
- Say yes here to enable the adp5588 to be used as an interrupt
- controller. It requires the driver to be built in the kernel.
-
-config GPIO_ADNP
- tristate "Avionic Design N-bit GPIO expander"
- depends on I2C && OF_GPIO
- select GPIOLIB_IRQCHIP
- help
- This option enables support for N GPIOs found on Avionic Design
- I2C GPIO expanders. The register space will be extended by powers
- of two, so the controller will need to accommodate for that. For
- example: if a controller provides 48 pins, 6 registers will be
- enough to represent all pins, but the driver will assume a
- register layout for 64 pins (8 registers).
-
-comment "PCI GPIO expanders:"
-
-config GPIO_CS5535
- tristate "AMD CS5535/CS5536 GPIO support"
- depends on MFD_CS5535
- help
- The AMD CS5535 and CS5536 southbridges support 28 GPIO pins that
- can be used for quite a number of things. The CS5535/6 is found on
- AMD Geode and Lemote Yeeloong devices.
-
- If unsure, say N.
+ If unsure, say N
config GPIO_BT8XX
tristate "BT8XX GPIO abuser"
@@ -777,18 +892,6 @@
If unsure, say N.
-config GPIO_AMD8111
- tristate "AMD 8111 GPIO driver"
- depends on PCI
- help
- The AMD 8111 south bridge contains 32 GPIO pins which can be used.
-
- Note, that usually system firmware/ACPI handles GPIO pins on their
- own and users might easily break their systems with uncarefull usage
- of this driver!
-
- If unsure, say N
-
config GPIO_INTEL_MID
bool "Intel Mid GPIO support"
depends on PCI && X86
@@ -796,6 +899,16 @@
help
Say Y here to support Intel Mid GPIO.
+config GPIO_ML_IOH
+ tristate "OKI SEMICONDUCTOR ML7213 IOH GPIO support"
+ depends on PCI
+ select GENERIC_IRQ_CHIP
+ help
+ ML7213 is companion chip for Intel Atom E6xx series.
+ This driver can be used for OKI SEMICONDUCTOR ML7213 IOH(Input/Output
+ Hub) which is for IVI(In-Vehicle Infotainment) use.
+ This driver can access the IOH's GPIO device.
+
config GPIO_PCH
tristate "Intel EG20T PCH/LAPIS Semiconductor IOH(ML7223/ML7831) GPIO"
depends on PCI && (X86_32 || COMPILE_TEST)
@@ -812,30 +925,6 @@
ML7223/ML7831 is companion chip for Intel Atom E6xx series.
ML7223/ML7831 is completely compatible for Intel EG20T PCH.
-config GPIO_ML_IOH
- tristate "OKI SEMICONDUCTOR ML7213 IOH GPIO support"
- depends on PCI
- select GENERIC_IRQ_CHIP
- help
- ML7213 is companion chip for Intel Atom E6xx series.
- This driver can be used for OKI SEMICONDUCTOR ML7213 IOH(Input/Output
- Hub) which is for IVI(In-Vehicle Infotainment) use.
- This driver can access the IOH's GPIO device.
-
-config GPIO_SODAVILLE
- bool "Intel Sodaville GPIO support"
- depends on X86 && PCI && OF
- select GPIO_GENERIC
- select GENERIC_IRQ_CHIP
- help
- Say Y here to support Intel Sodaville GPIO.
-
-config GPIO_TIMBERDALE
- bool "Support for timberdale GPIO IP"
- depends on MFD_TIMBERDALE
- ---help---
- Add support for the GPIO IP in the timberdale FPGA.
-
config GPIO_RDC321X
tristate "RDC R-321x GPIO support"
depends on PCI
@@ -845,7 +934,26 @@
Support for the RDC R321x SoC GPIOs over southbridge
PCI configuration space.
-comment "SPI GPIO expanders:"
+config GPIO_SODAVILLE
+ bool "Intel Sodaville GPIO support"
+ depends on X86 && PCI && OF
+ select GPIO_GENERIC
+ select GENERIC_IRQ_CHIP
+ help
+ Say Y here to support Intel Sodaville GPIO.
+
+endmenu
+
+menu "SPI GPIO expanders"
+ depends on SPI_MASTER
+
+config GPIO_74X164
+ tristate "74x164 serial-in/parallel-out 8-bits shift register"
+ depends on SPI_MASTER && OF
+ help
+ Driver for 74x164 compatible serial-in/parallel-out 8-outputs
+ shift registers. This driver can be used to provide access
+ to more gpio outputs.
config GPIO_MAX7301
tristate "Maxim MAX7301 GPIO expander"
@@ -870,80 +978,10 @@
SPI driver for Freescale MC33880 high-side/low-side switch.
This provides GPIO interface supporting inputs and outputs.
-config GPIO_74X164
- tristate "74x164 serial-in/parallel-out 8-bits shift register"
- depends on SPI_MASTER && OF
- help
- Driver for 74x164 compatible serial-in/parallel-out 8-outputs
- shift registers. This driver can be used to provide access
- to more gpio outputs.
+endmenu
-comment "AC97 GPIO expanders:"
-
-config GPIO_UCB1400
- tristate "Philips UCB1400 GPIO"
- depends on UCB1400_CORE
- help
- This enables support for the Philips UCB1400 GPIO pins.
- The UCB1400 is an AC97 audio codec.
-
-comment "LPC GPIO expanders:"
-
-config GPIO_KEMPLD
- tristate "Kontron ETX / COMexpress GPIO"
- depends on MFD_KEMPLD
- help
- This enables support for the PLD GPIO interface on some Kontron ETX
- and COMexpress (ETXexpress) modules.
-
- This driver can also be built as a module. If so, the module will be
- called gpio-kempld.
-
-comment "MODULbus GPIO expanders:"
-
-config GPIO_JANZ_TTL
- tristate "Janz VMOD-TTL Digital IO Module"
- depends on MFD_JANZ_CMODIO
- help
- This enables support for the Janz VMOD-TTL Digital IO module.
- This driver provides support for driving the pins in output
- mode only. Input mode is not supported.
-
-config GPIO_PALMAS
- bool "TI PALMAS series PMICs GPIO"
- depends on MFD_PALMAS
- help
- Select this option to enable GPIO driver for the TI PALMAS
- series chip family.
-
-config GPIO_TPS6586X
- bool "TPS6586X GPIO"
- depends on MFD_TPS6586X
- help
- Select this option to enable GPIO driver for the TPS6586X
- chip family.
-
-config GPIO_TPS65910
- bool "TPS65910 GPIO"
- depends on MFD_TPS65910
- help
- Select this option to enable GPIO driver for the TPS65910
- chip family.
-
-config GPIO_MSIC
- bool "Intel MSIC mixed signal gpio support"
- depends on MFD_INTEL_MSIC
- help
- Enable support for GPIO on intel MSIC controllers found in
- intel MID devices
-
-config GPIO_BCM_KONA
- bool "Broadcom Kona GPIO"
- depends on OF_GPIO && (ARCH_BCM_MOBILE || COMPILE_TEST)
- help
- Turn on GPIO support for Broadcom "Kona" chips.
-
-comment "USB GPIO expanders:"
+menu "USB GPIO expanders"
+ depends on USB
config GPIO_VIPERBOARD
tristate "Viperboard GPIO a & b support"
@@ -956,16 +994,6 @@
River Tech's viperboard.h for detailed meaning
of the module parameters.
-config GPIO_DLN2
- tristate "Diolan DLN2 GPIO support"
- depends on MFD_DLN2
- select GPIOLIB_IRQCHIP
-
- help
- Select this option to enable GPIO driver for the Diolan DLN2
- board.
-
- This driver can also be built as a module. If so, the module
- will be called gpio-dln2.
+endmenu
endif
diff --git a/drivers/gpio/Makefile b/drivers/gpio/Makefile
index bdda6a9..07b816b 100644
--- a/drivers/gpio/Makefile
+++ b/drivers/gpio/Makefile
@@ -17,6 +17,7 @@
obj-$(CONFIG_GPIO_ADNP) += gpio-adnp.o
obj-$(CONFIG_GPIO_ADP5520) += gpio-adp5520.o
obj-$(CONFIG_GPIO_ADP5588) += gpio-adp5588.o
+obj-$(CONFIG_GPIO_ALTERA) += gpio-altera.o
obj-$(CONFIG_GPIO_AMD8111) += gpio-amd8111.o
obj-$(CONFIG_GPIO_ARIZONA) += gpio-arizona.o
obj-$(CONFIG_GPIO_BCM_KONA) += gpio-bcm-kona.o
@@ -41,6 +42,7 @@
obj-$(CONFIG_GPIO_KEMPLD) += gpio-kempld.o
obj-$(CONFIG_ARCH_KS8695) += gpio-ks8695.o
obj-$(CONFIG_GPIO_INTEL_MID) += gpio-intel-mid.o
+obj-$(CONFIG_GPIO_LOONGSON) += gpio-loongson.o
obj-$(CONFIG_GPIO_LP3943) += gpio-lp3943.o
obj-$(CONFIG_ARCH_LPC32XX) += gpio-lpc32xx.o
obj-$(CONFIG_GPIO_LYNXPOINT) += gpio-lynxpoint.o
diff --git a/drivers/gpio/devres.c b/drivers/gpio/devres.c
index 13dbd3d..07ba823 100644
--- a/drivers/gpio/devres.c
+++ b/drivers/gpio/devres.c
@@ -35,6 +35,20 @@
return *this == *gpio;
}
+static void devm_gpiod_release_array(struct device *dev, void *res)
+{
+ struct gpio_descs **descs = res;
+
+ gpiod_put_array(*descs);
+}
+
+static int devm_gpiod_match_array(struct device *dev, void *res, void *data)
+{
+ struct gpio_descs **this = res, **gpios = data;
+
+ return *this == *gpios;
+}
+
/**
* devm_gpiod_get - Resource-managed gpiod_get()
* @dev: GPIO consumer
@@ -111,23 +125,39 @@
/**
* devm_get_gpiod_from_child - get a GPIO descriptor from a device's child node
* @dev: GPIO consumer
+ * @con_id: function within the GPIO consumer
* @child: firmware node (child of @dev)
*
* GPIO descriptors returned from this function are automatically disposed on
* driver detach.
*/
struct gpio_desc *devm_get_gpiod_from_child(struct device *dev,
+ const char *con_id,
struct fwnode_handle *child)
{
+ static const char * const suffixes[] = { "gpios", "gpio" };
+ char prop_name[32]; /* 32 is max size of property name */
struct gpio_desc **dr;
struct gpio_desc *desc;
+ unsigned int i;
dr = devres_alloc(devm_gpiod_release, sizeof(struct gpio_desc *),
GFP_KERNEL);
if (!dr)
return ERR_PTR(-ENOMEM);
- desc = fwnode_get_named_gpiod(child, "gpios");
+ for (i = 0; i < ARRAY_SIZE(suffixes); i++) {
+ if (con_id)
+ snprintf(prop_name, sizeof(prop_name), "%s-%s",
+ con_id, suffixes[i]);
+ else
+ snprintf(prop_name, sizeof(prop_name), "%s",
+ suffixes[i]);
+
+ desc = fwnode_get_named_gpiod(child, prop_name);
+ if (!IS_ERR(desc) || (PTR_ERR(desc) == -EPROBE_DEFER))
+ break;
+ }
if (IS_ERR(desc)) {
devres_free(dr);
return desc;
@@ -170,6 +200,66 @@
EXPORT_SYMBOL(__devm_gpiod_get_index_optional);
/**
+ * devm_gpiod_get_array - Resource-managed gpiod_get_array()
+ * @dev: GPIO consumer
+ * @con_id: function within the GPIO consumer
+ * @flags: optional GPIO initialization flags
+ *
+ * Managed gpiod_get_array(). GPIO descriptors returned from this function are
+ * automatically disposed on driver detach. See gpiod_get_array() for detailed
+ * information about behavior and return values.
+ */
+struct gpio_descs *__must_check devm_gpiod_get_array(struct device *dev,
+ const char *con_id,
+ enum gpiod_flags flags)
+{
+ struct gpio_descs **dr;
+ struct gpio_descs *descs;
+
+ dr = devres_alloc(devm_gpiod_release_array,
+ sizeof(struct gpio_descs *), GFP_KERNEL);
+ if (!dr)
+ return ERR_PTR(-ENOMEM);
+
+ descs = gpiod_get_array(dev, con_id, flags);
+ if (IS_ERR(descs)) {
+ devres_free(dr);
+ return descs;
+ }
+
+ *dr = descs;
+ devres_add(dev, dr);
+
+ return descs;
+}
+EXPORT_SYMBOL(devm_gpiod_get_array);
+
+/**
+ * devm_gpiod_get_array_optional - Resource-managed gpiod_get_array_optional()
+ * @dev: GPIO consumer
+ * @con_id: function within the GPIO consumer
+ * @flags: optional GPIO initialization flags
+ *
+ * Managed gpiod_get_array_optional(). GPIO descriptors returned from this
+ * function are automatically disposed on driver detach.
+ * See gpiod_get_array_optional() for detailed information about behavior and
+ * return values.
+ */
+struct gpio_descs *__must_check
+devm_gpiod_get_array_optional(struct device *dev, const char *con_id,
+ enum gpiod_flags flags)
+{
+ struct gpio_descs *descs;
+
+ descs = devm_gpiod_get_array(dev, con_id, flags);
+ if (IS_ERR(descs) && (PTR_ERR(descs) == -ENOENT))
+ return NULL;
+
+ return descs;
+}
+EXPORT_SYMBOL(devm_gpiod_get_array_optional);
+
+/**
* devm_gpiod_put - Resource-managed gpiod_put()
* @desc: GPIO descriptor to dispose of
*
@@ -184,6 +274,21 @@
}
EXPORT_SYMBOL(devm_gpiod_put);
+/**
+ * devm_gpiod_put_array - Resource-managed gpiod_put_array()
+ * @descs: GPIO descriptor array to dispose of
+ *
+ * Dispose of an array of GPIO descriptors obtained with devm_gpiod_get_array().
+ * Normally this function will not be called as the GPIOs will be disposed of
+ * by the resource management code.
+ */
+void devm_gpiod_put_array(struct device *dev, struct gpio_descs *descs)
+{
+ WARN_ON(devres_release(dev, devm_gpiod_release_array,
+ devm_gpiod_match_array, &descs));
+}
+EXPORT_SYMBOL(devm_gpiod_put_array);
+
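For illustration, a minimal consumer sketch of the managed array helpers added above; the foo_probe() driver and the "led" con_id are hypothetical and not part of this patch:

	#include <linux/gpio/consumer.h>
	#include <linux/platform_device.h>

	static int foo_probe(struct platform_device *pdev)
	{
		struct gpio_descs *leds;
		unsigned int i;

		/* Grab all "led-gpios" at once; released automatically on driver detach */
		leds = devm_gpiod_get_array_optional(&pdev->dev, "led", GPIOD_OUT_LOW);
		if (IS_ERR(leds))
			return PTR_ERR(leds);
		if (!leds)
			return 0;	/* property absent: nothing to drive */

		for (i = 0; i < leds->ndescs; i++)
			gpiod_set_value_cansleep(leds->desc[i], 1);

		return 0;
	}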
diff --git a/drivers/gpio/gpio-adp5588.c b/drivers/gpio/gpio-adp5588.c
index 3beed6e..d3fe6a6 100644
--- a/drivers/gpio/gpio-adp5588.c
+++ b/drivers/gpio/gpio-adp5588.c
@@ -367,7 +367,7 @@
struct gpio_chip *gc;
int ret, i, revid;
- if (pdata == NULL) {
+ if (!pdata) {
dev_err(&client->dev, "missing platform data\n");
return -ENODEV;
}
@@ -378,8 +378,8 @@
return -EIO;
}
- dev = kzalloc(sizeof(*dev), GFP_KERNEL);
- if (dev == NULL)
+ dev = devm_kzalloc(&client->dev, sizeof(*dev), GFP_KERNEL);
+ if (!dev)
return -ENOMEM;
dev->client = client;
@@ -446,7 +446,6 @@
err_irq:
adp5588_irq_teardown(dev);
err:
- kfree(dev);
return ret;
}
@@ -472,7 +471,6 @@
gpiochip_remove(&dev->gpio_chip);
- kfree(dev);
return 0;
}
diff --git a/drivers/gpio/gpio-altera.c b/drivers/gpio/gpio-altera.c
new file mode 100644
index 0000000..449fb46
--- /dev/null
+++ b/drivers/gpio/gpio-altera.c
@@ -0,0 +1,374 @@
+/*
+ * Copyright (C) 2013 Altera Corporation
+ * Based on gpio-mpc8xxx.c
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/io.h>
+#include <linux/of_gpio.h>
+#include <linux/platform_device.h>
+
+#define ALTERA_GPIO_MAX_NGPIO 32
+#define ALTERA_GPIO_DATA 0x0
+#define ALTERA_GPIO_DIR 0x4
+#define ALTERA_GPIO_IRQ_MASK 0x8
+#define ALTERA_GPIO_EDGE_CAP 0xc
+
+/**
+* struct altera_gpio_chip
+* @mmchip : memory mapped chip structure.
+* @gpio_lock : synchronization lock so that new irq/set/get requests
+ will be blocked until the current one completes.
+* @interrupt_trigger : specifies the hardware configured IRQ trigger type
+ (rising, falling, both, high)
+* @mapped_irq : kernel mapped irq number.
+*/
+struct altera_gpio_chip {
+ struct of_mm_gpio_chip mmchip;
+ spinlock_t gpio_lock;
+ int interrupt_trigger;
+ int mapped_irq;
+};
+
+static void altera_gpio_irq_unmask(struct irq_data *d)
+{
+ struct altera_gpio_chip *altera_gc;
+ struct of_mm_gpio_chip *mm_gc;
+ unsigned long flags;
+ u32 intmask;
+
+ altera_gc = irq_data_get_irq_chip_data(d);
+ mm_gc = &altera_gc->mmchip;
+
+ spin_lock_irqsave(&altera_gc->gpio_lock, flags);
+ intmask = readl(mm_gc->regs + ALTERA_GPIO_IRQ_MASK);
+ /* Set ALTERA_GPIO_IRQ_MASK bit to unmask */
+ intmask |= BIT(irqd_to_hwirq(d));
+ writel(intmask, mm_gc->regs + ALTERA_GPIO_IRQ_MASK);
+ spin_unlock_irqrestore(&altera_gc->gpio_lock, flags);
+}
+
+static void altera_gpio_irq_mask(struct irq_data *d)
+{
+ struct altera_gpio_chip *altera_gc;
+ struct of_mm_gpio_chip *mm_gc;
+ unsigned long flags;
+ u32 intmask;
+
+ altera_gc = irq_data_get_irq_chip_data(d);
+ mm_gc = &altera_gc->mmchip;
+
+ spin_lock_irqsave(&altera_gc->gpio_lock, flags);
+ intmask = readl(mm_gc->regs + ALTERA_GPIO_IRQ_MASK);
+ /* Clear ALTERA_GPIO_IRQ_MASK bit to mask */
+ intmask &= ~BIT(irqd_to_hwirq(d));
+ writel(intmask, mm_gc->regs + ALTERA_GPIO_IRQ_MASK);
+ spin_unlock_irqrestore(&altera_gc->gpio_lock, flags);
+}
+
+/**
+ * This controller's IRQ type is synthesized in hardware, so this function
+ * just checks if the requested set_type matches the synthesized IRQ type
+ */
+static int altera_gpio_irq_set_type(struct irq_data *d,
+ unsigned int type)
+{
+ struct altera_gpio_chip *altera_gc;
+
+ altera_gc = irq_data_get_irq_chip_data(d);
+
+ if (type == IRQ_TYPE_NONE)
+ return 0;
+ if (type == IRQ_TYPE_LEVEL_HIGH &&
+ altera_gc->interrupt_trigger == IRQ_TYPE_LEVEL_HIGH)
+ return 0;
+ if (type == IRQ_TYPE_EDGE_RISING &&
+ altera_gc->interrupt_trigger == IRQ_TYPE_EDGE_RISING)
+ return 0;
+ if (type == IRQ_TYPE_EDGE_FALLING &&
+ altera_gc->interrupt_trigger == IRQ_TYPE_EDGE_FALLING)
+ return 0;
+ if (type == IRQ_TYPE_EDGE_BOTH &&
+ altera_gc->interrupt_trigger == IRQ_TYPE_EDGE_BOTH)
+ return 0;
+
+ return -EINVAL;
+}
+
+static unsigned int altera_gpio_irq_startup(struct irq_data *d) {
+ altera_gpio_irq_unmask(d);
+
+ return 0;
+}
+
+static struct irq_chip altera_irq_chip = {
+ .name = "altera-gpio",
+ .irq_mask = altera_gpio_irq_mask,
+ .irq_unmask = altera_gpio_irq_unmask,
+ .irq_set_type = altera_gpio_irq_set_type,
+ .irq_startup = altera_gpio_irq_startup,
+ .irq_shutdown = altera_gpio_irq_mask,
+};
+
+static int altera_gpio_get(struct gpio_chip *gc, unsigned offset)
+{
+ struct of_mm_gpio_chip *mm_gc;
+
+ mm_gc = to_of_mm_gpio_chip(gc);
+
+ return !!(readl(mm_gc->regs + ALTERA_GPIO_DATA) & BIT(offset));
+}
+
+static void altera_gpio_set(struct gpio_chip *gc, unsigned offset, int value)
+{
+ struct of_mm_gpio_chip *mm_gc;
+ struct altera_gpio_chip *chip;
+ unsigned long flags;
+ unsigned int data_reg;
+
+ mm_gc = to_of_mm_gpio_chip(gc);
+ chip = container_of(mm_gc, struct altera_gpio_chip, mmchip);
+
+ spin_lock_irqsave(&chip->gpio_lock, flags);
+ data_reg = readl(mm_gc->regs + ALTERA_GPIO_DATA);
+ if (value)
+ data_reg |= BIT(offset);
+ else
+ data_reg &= ~BIT(offset);
+ writel(data_reg, mm_gc->regs + ALTERA_GPIO_DATA);
+ spin_unlock_irqrestore(&chip->gpio_lock, flags);
+}
+
+static int altera_gpio_direction_input(struct gpio_chip *gc, unsigned offset)
+{
+ struct of_mm_gpio_chip *mm_gc;
+ struct altera_gpio_chip *chip;
+ unsigned long flags;
+ unsigned int gpio_ddr;
+
+ mm_gc = to_of_mm_gpio_chip(gc);
+ chip = container_of(mm_gc, struct altera_gpio_chip, mmchip);
+
+ spin_lock_irqsave(&chip->gpio_lock, flags);
+ /* Set pin as input, assumes software controlled IP */
+ gpio_ddr = readl(mm_gc->regs + ALTERA_GPIO_DIR);
+ gpio_ddr &= ~BIT(offset);
+ writel(gpio_ddr, mm_gc->regs + ALTERA_GPIO_DIR);
+ spin_unlock_irqrestore(&chip->gpio_lock, flags);
+
+ return 0;
+}
+
+static int altera_gpio_direction_output(struct gpio_chip *gc,
+ unsigned offset, int value)
+{
+ struct of_mm_gpio_chip *mm_gc;
+ struct altera_gpio_chip *chip;
+ unsigned long flags;
+ unsigned int data_reg, gpio_ddr;
+
+ mm_gc = to_of_mm_gpio_chip(gc);
+ chip = container_of(mm_gc, struct altera_gpio_chip, mmchip);
+
+ spin_lock_irqsave(&chip->gpio_lock, flags);
+ /* Sets the GPIO value */
+ data_reg = readl(mm_gc->regs + ALTERA_GPIO_DATA);
+ if (value)
+ data_reg |= BIT(offset);
+ else
+ data_reg &= ~BIT(offset);
+ writel(data_reg, mm_gc->regs + ALTERA_GPIO_DATA);
+
+ /* Set pin as output, assumes software controlled IP */
+ gpio_ddr = readl(mm_gc->regs + ALTERA_GPIO_DIR);
+ gpio_ddr |= BIT(offset);
+ writel(gpio_ddr, mm_gc->regs + ALTERA_GPIO_DIR);
+ spin_unlock_irqrestore(&chip->gpio_lock, flags);
+
+ return 0;
+}
+
+static void altera_gpio_irq_edge_handler(unsigned int irq,
+ struct irq_desc *desc)
+{
+ struct altera_gpio_chip *altera_gc;
+ struct irq_chip *chip;
+ struct of_mm_gpio_chip *mm_gc;
+ struct irq_domain *irqdomain;
+ unsigned long status;
+ int i;
+
+ altera_gc = irq_desc_get_handler_data(desc);
+ chip = irq_desc_get_chip(desc);
+ mm_gc = &altera_gc->mmchip;
+ irqdomain = altera_gc->mmchip.gc.irqdomain;
+
+ chained_irq_enter(chip, desc);
+
+ while ((status =
+ (readl(mm_gc->regs + ALTERA_GPIO_EDGE_CAP) &
+ readl(mm_gc->regs + ALTERA_GPIO_IRQ_MASK)))) {
+ writel(status, mm_gc->regs + ALTERA_GPIO_EDGE_CAP);
+ for_each_set_bit(i, &status, mm_gc->gc.ngpio) {
+ generic_handle_irq(irq_find_mapping(irqdomain, i));
+ }
+ }
+
+ chained_irq_exit(chip, desc);
+}
+
+
+static void altera_gpio_irq_leveL_high_handler(unsigned int irq,
+ struct irq_desc *desc)
+{
+ struct altera_gpio_chip *altera_gc;
+ struct irq_chip *chip;
+ struct of_mm_gpio_chip *mm_gc;
+ struct irq_domain *irqdomain;
+ unsigned long status;
+ int i;
+
+ altera_gc = irq_desc_get_handler_data(desc);
+ chip = irq_desc_get_chip(desc);
+ mm_gc = &altera_gc->mmchip;
+ irqdomain = altera_gc->mmchip.gc.irqdomain;
+
+ chained_irq_enter(chip, desc);
+
+ status = readl(mm_gc->regs + ALTERA_GPIO_DATA);
+ status &= readl(mm_gc->regs + ALTERA_GPIO_IRQ_MASK);
+
+ for_each_set_bit(i, &status, mm_gc->gc.ngpio) {
+ generic_handle_irq(irq_find_mapping(irqdomain, i));
+ }
+ chained_irq_exit(chip, desc);
+}
+
+static int altera_gpio_probe(struct platform_device *pdev)
+{
+ struct device_node *node = pdev->dev.of_node;
+ int reg, ret;
+ struct altera_gpio_chip *altera_gc;
+
+ altera_gc = devm_kzalloc(&pdev->dev, sizeof(*altera_gc), GFP_KERNEL);
+ if (!altera_gc)
+ return -ENOMEM;
+
+ spin_lock_init(&altera_gc->gpio_lock);
+
+ if (of_property_read_u32(node, "altr,ngpio", &reg))
+ /* By default assume maximum ngpio */
+ altera_gc->mmchip.gc.ngpio = ALTERA_GPIO_MAX_NGPIO;
+ else
+ altera_gc->mmchip.gc.ngpio = reg;
+
+ if (altera_gc->mmchip.gc.ngpio > ALTERA_GPIO_MAX_NGPIO) {
+ dev_warn(&pdev->dev,
+ "ngpio is greater than %d, defaulting to %d\n",
+ ALTERA_GPIO_MAX_NGPIO, ALTERA_GPIO_MAX_NGPIO);
+ altera_gc->mmchip.gc.ngpio = ALTERA_GPIO_MAX_NGPIO;
+ }
+
+ altera_gc->mmchip.gc.direction_input = altera_gpio_direction_input;
+ altera_gc->mmchip.gc.direction_output = altera_gpio_direction_output;
+ altera_gc->mmchip.gc.get = altera_gpio_get;
+ altera_gc->mmchip.gc.set = altera_gpio_set;
+ altera_gc->mmchip.gc.owner = THIS_MODULE;
+ altera_gc->mmchip.gc.dev = &pdev->dev;
+
+ ret = of_mm_gpiochip_add(node, &altera_gc->mmchip);
+ if (ret) {
+ dev_err(&pdev->dev, "Failed adding memory mapped gpiochip\n");
+ return ret;
+ }
+
+ platform_set_drvdata(pdev, altera_gc);
+
+ altera_gc->mapped_irq = platform_get_irq(pdev, 0);
+
+ if (altera_gc->mapped_irq < 0)
+ goto skip_irq;
+
+ if (of_property_read_u32(node, "altr,interrupt-type", &reg)) {
+ ret = -EINVAL;
+ dev_err(&pdev->dev,
+ "altr,interrupt-type value not set in device tree\n");
+ goto teardown;
+ }
+ altera_gc->interrupt_trigger = reg;
+
+ ret = gpiochip_irqchip_add(&altera_gc->mmchip.gc, &altera_irq_chip, 0,
+ handle_simple_irq, IRQ_TYPE_NONE);
+
+ if (ret) {
+ dev_info(&pdev->dev, "could not add irqchip\n");
+ return ret;
+ }
+
+ gpiochip_set_chained_irqchip(&altera_gc->mmchip.gc,
+ &altera_irq_chip,
+ altera_gc->mapped_irq,
+ altera_gc->interrupt_trigger == IRQ_TYPE_LEVEL_HIGH ?
+ altera_gpio_irq_leveL_high_handler :
+ altera_gpio_irq_edge_handler);
+
+skip_irq:
+ return 0;
+teardown:
+ pr_err("%s: registration failed with status %d\n",
+ node->full_name, ret);
+
+ return ret;
+}
+
+static int altera_gpio_remove(struct platform_device *pdev)
+{
+ struct altera_gpio_chip *altera_gc = platform_get_drvdata(pdev);
+
+ gpiochip_remove(&altera_gc->mmchip.gc);
+
+ return -EIO;
+}
+
+static const struct of_device_id altera_gpio_of_match[] = {
+ { .compatible = "altr,pio-1.0", },
+ {},
+};
+MODULE_DEVICE_TABLE(of, altera_gpio_of_match);
+
+static struct platform_driver altera_gpio_driver = {
+ .driver = {
+ .name = "altera_gpio",
+ .of_match_table = of_match_ptr(altera_gpio_of_match),
+ },
+ .probe = altera_gpio_probe,
+ .remove = altera_gpio_remove,
+};
+
+static int __init altera_gpio_init(void)
+{
+ return platform_driver_register(&altera_gpio_driver);
+}
+subsys_initcall(altera_gpio_init);
+
+static void __exit altera_gpio_exit(void)
+{
+ platform_driver_unregister(&altera_gpio_driver);
+}
+module_exit(altera_gpio_exit);
+
+MODULE_AUTHOR("Tien Hock Loh <thloh@altera.com>");
+MODULE_DESCRIPTION("Altera GPIO driver");
+MODULE_LICENSE("GPL");
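Since the trigger type is fixed in the synthesized PIO IP, a consumer would request the line's interrupt with the matching type. A hypothetical sketch (foo_* names are not part of this patch), assuming the IP was built for rising-edge triggering:

	#include <linux/gpio/consumer.h>
	#include <linux/interrupt.h>

	static irqreturn_t foo_isr(int irq, void *dev_id)
	{
		return IRQ_HANDLED;
	}

	static int foo_hook_irq(struct device *dev, struct gpio_desc *gpio)
	{
		int irq = gpiod_to_irq(gpio);

		if (irq < 0)
			return irq;

		/* Trigger must match the "altr,interrupt-type" the IP was built with */
		return devm_request_irq(dev, irq, foo_isr, IRQF_TRIGGER_RISING,
					"foo", NULL);
	}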
diff --git a/drivers/gpio/gpio-arizona.c b/drivers/gpio/gpio-arizona.c
index 9665d0a..052fbc8 100644
--- a/drivers/gpio/gpio-arizona.c
+++ b/drivers/gpio/gpio-arizona.c
@@ -103,7 +103,7 @@
arizona_gpio = devm_kzalloc(&pdev->dev, sizeof(*arizona_gpio),
GFP_KERNEL);
- if (arizona_gpio == NULL)
+ if (!arizona_gpio)
return -ENOMEM;
arizona_gpio->arizona = arizona;
@@ -156,7 +156,6 @@
static struct platform_driver arizona_gpio_driver = {
.driver.name = "arizona-gpio",
- .driver.owner = THIS_MODULE,
.probe = arizona_gpio_probe,
.remove = arizona_gpio_remove,
};
diff --git a/drivers/gpio/gpio-crystalcove.c b/drivers/gpio/gpio-crystalcove.c
index 3d9e08f..91a7ffe 100644
--- a/drivers/gpio/gpio-crystalcove.c
+++ b/drivers/gpio/gpio-crystalcove.c
@@ -24,7 +24,7 @@
#include <linux/mfd/intel_soc_pmic.h>
#define CRYSTALCOVE_GPIO_NUM 16
-#define CRYSTALCOVE_VGPIO_NUM 94
+#define CRYSTALCOVE_VGPIO_NUM 95
#define UPDATE_IRQ_TYPE BIT(0)
#define UPDATE_IRQ_MASK BIT(1)
@@ -39,6 +39,7 @@
#define GPIO0P0CTLI 0x33
#define GPIO1P0CTLO 0x3b
#define GPIO1P0CTLI 0x43
+#define GPIOPANELCTL 0x52
#define CTLI_INTCNT_DIS (0)
#define CTLI_INTCNT_NE (1 << 1)
@@ -93,6 +94,10 @@
{
int reg;
+ if (gpio == 94) {
+ return GPIOPANELCTL;
+ }
+
if (reg_type == CTRL_IN) {
if (gpio < 8)
reg = GPIO0P0CTLI;
diff --git a/drivers/gpio/gpio-da9052.c b/drivers/gpio/gpio-da9052.c
index 389a4d2..2e9578e 100644
--- a/drivers/gpio/gpio-da9052.c
+++ b/drivers/gpio/gpio-da9052.c
@@ -212,7 +212,7 @@
int ret;
gpio = devm_kzalloc(&pdev->dev, sizeof(*gpio), GFP_KERNEL);
- if (gpio == NULL)
+ if (!gpio)
return -ENOMEM;
gpio->da9052 = dev_get_drvdata(pdev->dev.parent);
diff --git a/drivers/gpio/gpio-da9055.c b/drivers/gpio/gpio-da9055.c
index b8d7570..7227e6e 100644
--- a/drivers/gpio/gpio-da9055.c
+++ b/drivers/gpio/gpio-da9055.c
@@ -146,7 +146,7 @@
int ret;
gpio = devm_kzalloc(&pdev->dev, sizeof(*gpio), GFP_KERNEL);
- if (gpio == NULL)
+ if (!gpio)
return -ENOMEM;
gpio->da9055 = dev_get_drvdata(pdev->dev.parent);
diff --git a/drivers/gpio/gpio-f7188x.c b/drivers/gpio/gpio-f7188x.c
index 1be291a..dbda843 100644
--- a/drivers/gpio/gpio-f7188x.c
+++ b/drivers/gpio/gpio-f7188x.c
@@ -1,5 +1,5 @@
/*
- * GPIO driver for Fintek Super-I/O F71882 and F71889
+ * GPIO driver for Fintek Super-I/O F71869, F71869A, F71882 and F71889
*
* Copyright (C) 2010-2013 LaCie
*
@@ -32,12 +32,16 @@
#define SIO_LOCK_KEY 0xAA /* Key to disable Super-I/O */
#define SIO_FINTEK_ID 0x1934 /* Manufacturer ID */
+#define SIO_F71869_ID 0x0814 /* F71869 chipset ID */
+#define SIO_F71869A_ID 0x1007 /* F71869A chipset ID */
#define SIO_F71882_ID 0x0541 /* F71882 chipset ID */
#define SIO_F71889_ID 0x0909 /* F71889 chipset ID */
-enum chips { f71882fg, f71889f };
+enum chips { f71869, f71869a, f71882fg, f71889f };
static const char * const f7188x_names[] = {
+ "f71869",
+ "f71869a",
"f71882fg",
"f71889f",
};
@@ -146,6 +150,27 @@
/* Output mode register (0:open drain 1:push-pull). */
#define gpio_out_mode(base) (base + 3)
+static struct f7188x_gpio_bank f71869_gpio_bank[] = {
+ F7188X_GPIO_BANK(0, 6, 0xF0),
+ F7188X_GPIO_BANK(10, 8, 0xE0),
+ F7188X_GPIO_BANK(20, 8, 0xD0),
+ F7188X_GPIO_BANK(30, 8, 0xC0),
+ F7188X_GPIO_BANK(40, 8, 0xB0),
+ F7188X_GPIO_BANK(50, 5, 0xA0),
+ F7188X_GPIO_BANK(60, 6, 0x90),
+};
+
+static struct f7188x_gpio_bank f71869a_gpio_bank[] = {
+ F7188X_GPIO_BANK(0, 6, 0xF0),
+ F7188X_GPIO_BANK(10, 8, 0xE0),
+ F7188X_GPIO_BANK(20, 8, 0xD0),
+ F7188X_GPIO_BANK(30, 8, 0xC0),
+ F7188X_GPIO_BANK(40, 8, 0xB0),
+ F7188X_GPIO_BANK(50, 5, 0xA0),
+ F7188X_GPIO_BANK(60, 8, 0x90),
+ F7188X_GPIO_BANK(70, 8, 0x80),
+};
+
static struct f7188x_gpio_bank f71882_gpio_bank[] = {
F7188X_GPIO_BANK(0 , 8, 0xF0),
F7188X_GPIO_BANK(10, 8, 0xE0),
@@ -281,6 +306,14 @@
return -ENOMEM;
switch (sio->type) {
+ case f71869:
+ data->nr_bank = ARRAY_SIZE(f71869_gpio_bank);
+ data->bank = f71869_gpio_bank;
+ break;
+ case f71869a:
+ data->nr_bank = ARRAY_SIZE(f71869a_gpio_bank);
+ data->bank = f71869a_gpio_bank;
+ break;
case f71882fg:
data->nr_bank = ARRAY_SIZE(f71882_gpio_bank);
data->bank = f71882_gpio_bank;
@@ -354,6 +387,12 @@
devid = superio_inw(addr, SIO_DEVID);
switch (devid) {
+ case SIO_F71869_ID:
+ sio->type = f71869;
+ break;
+ case SIO_F71869A_ID:
+ sio->type = f71869a;
+ break;
case SIO_F71882_ID:
sio->type = f71882fg;
break;
@@ -410,7 +449,7 @@
}
/*
- * Try to match a supported Fintech device by reading the (hard-wired)
+ * Try to match a supported Fintek device by reading the (hard-wired)
* configuration I/O ports. If available, then register both the platform
* device and driver to support the GPIOs.
*/
@@ -450,6 +489,6 @@
}
module_exit(f7188x_gpio_exit);
-MODULE_DESCRIPTION("GPIO driver for Super-I/O chips F71882FG and F71889F");
+MODULE_DESCRIPTION("GPIO driver for Super-I/O chips F71869, F71869A, F71882FG and F71889F");
MODULE_AUTHOR("Simon Guinot <simon.guinot@sequanux.org>");
MODULE_LICENSE("GPL");
diff --git a/drivers/gpio/gpio-ich.c b/drivers/gpio/gpio-ich.c
index 7818cd1..4ba7ed5 100644
--- a/drivers/gpio/gpio-ich.c
+++ b/drivers/gpio/gpio-ich.c
@@ -173,6 +173,11 @@
return !!(ichx_priv.use_gpio & (1 << (nr / 32)));
}
+static int ichx_gpio_get_direction(struct gpio_chip *gpio, unsigned nr)
+{
+ return ichx_read_bit(GPIO_IO_SEL, nr) ? GPIOF_DIR_IN : GPIOF_DIR_OUT;
+}
+
static int ichx_gpio_direction_input(struct gpio_chip *gpio, unsigned nr)
{
/*
@@ -286,6 +291,7 @@
ichx_priv.desc->get : ichx_gpio_get;
chip->set = ichx_gpio_set;
+ chip->get_direction = ichx_gpio_get_direction;
chip->direction_input = ichx_gpio_direction_input;
chip->direction_output = ichx_gpio_direction_output;
chip->base = modparam_gpiobase;
diff --git a/drivers/gpio/gpio-kempld.c b/drivers/gpio/gpio-kempld.c
index 443518f..6b8115f 100644
--- a/drivers/gpio/gpio-kempld.c
+++ b/drivers/gpio/gpio-kempld.c
@@ -156,7 +156,7 @@
}
gpio = devm_kzalloc(dev, sizeof(*gpio), GFP_KERNEL);
- if (gpio == NULL)
+ if (!gpio)
return -ENOMEM;
gpio->pld = pld;
diff --git a/drivers/gpio/gpio-loongson.c b/drivers/gpio/gpio-loongson.c
new file mode 100644
index 0000000..ccc65a1
--- /dev/null
+++ b/drivers/gpio/gpio-loongson.c
@@ -0,0 +1,115 @@
+/*
+ * Loongson-2F/3A/3B GPIO Support
+ *
+ * Copyright (c) 2008 Richard Liu, STMicroelectronics <richard.liu@st.com>
+ * Copyright (c) 2008-2010 Arnaud Patard <apatard@mandriva.com>
+ * Copyright (c) 2013 Hongbing Hu <huhb@lemote.com>
+ * Copyright (c) 2014 Huacai Chen <chenhc@lemote.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/spinlock.h>
+#include <linux/err.h>
+#include <asm/types.h>
+#include <loongson.h>
+#include <linux/gpio.h>
+
+#define STLS2F_N_GPIO 4
+#define STLS3A_N_GPIO 16
+
+#ifdef CONFIG_CPU_LOONGSON3
+#define LOONGSON_N_GPIO STLS3A_N_GPIO
+#else
+#define LOONGSON_N_GPIO STLS2F_N_GPIO
+#endif
+
+#define LOONGSON_GPIO_IN_OFFSET 16
+
+static DEFINE_SPINLOCK(gpio_lock);
+
+static int loongson_gpio_direction_input(struct gpio_chip *chip, unsigned gpio)
+{
+ u32 temp;
+ u32 mask;
+
+ spin_lock(&gpio_lock);
+ mask = 1 << gpio;
+ temp = LOONGSON_GPIOIE;
+ temp |= mask;
+ LOONGSON_GPIOIE = temp;
+ spin_unlock(&gpio_lock);
+
+ return 0;
+}
+
+static int loongson_gpio_direction_output(struct gpio_chip *chip,
+ unsigned gpio, int level)
+{
+ u32 temp;
+ u32 mask;
+
+ gpio_set_value(gpio, level);
+ spin_lock(&gpio_lock);
+ mask = 1 << gpio;
+ temp = LOONGSON_GPIOIE;
+ temp &= (~mask);
+ LOONGSON_GPIOIE = temp;
+ spin_unlock(&gpio_lock);
+
+ return 0;
+}
+
+static int loongson_gpio_get_value(struct gpio_chip *chip, unsigned gpio)
+{
+ u32 val;
+ u32 mask;
+
+ mask = 1 << (gpio + LOONGSON_GPIO_IN_OFFSET);
+ spin_lock(&gpio_lock);
+ val = LOONGSON_GPIODATA;
+ spin_unlock(&gpio_lock);
+
+ return (val & mask) != 0;
+}
+
+static void loongson_gpio_set_value(struct gpio_chip *chip,
+ unsigned gpio, int value)
+{
+ u32 val;
+ u32 mask;
+
+ mask = 1 << gpio;
+
+ spin_lock(&gpio_lock);
+ val = LOONGSON_GPIODATA;
+ if (value)
+ val |= mask;
+ else
+ val &= (~mask);
+ LOONGSON_GPIODATA = val;
+ spin_unlock(&gpio_lock);
+}
+
+static struct gpio_chip loongson_chip = {
+ .label = "Loongson-gpio-chip",
+ .direction_input = loongson_gpio_direction_input,
+ .get = loongson_gpio_get_value,
+ .direction_output = loongson_gpio_direction_output,
+ .set = loongson_gpio_set_value,
+ .base = 0,
+ .ngpio = LOONGSON_N_GPIO,
+ .can_sleep = false,
+};
+
+static int __init loongson_gpio_setup(void)
+{
+ return gpiochip_add(&loongson_chip);
+}
+postcore_initcall(loongson_gpio_setup);
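The chip is registered with base 0 through gpiochip_add(), so board code could drive the first line with the legacy integer GPIO API. A hypothetical sketch (names not part of this patch):

	#include <linux/gpio.h>

	static int __init foo_board_init(void)
	{
		int ret;

		ret = gpio_request(0, "foo-led");	/* first Loongson GPIO */
		if (ret)
			return ret;

		gpio_direction_output(0, 1);		/* configure as output, drive high */
		gpio_set_value(0, 0);			/* ...then low */
		gpio_free(0);

		return 0;
	}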
diff --git a/drivers/gpio/gpio-max7300.c b/drivers/gpio/gpio-max7300.c
index 40ab6df..0cc2c27 100644
--- a/drivers/gpio/gpio-max7300.c
+++ b/drivers/gpio/gpio-max7300.c
@@ -35,7 +35,6 @@
const struct i2c_device_id *id)
{
struct max7301 *ts;
- int ret;
if (!i2c_check_functionality(client->adapter,
I2C_FUNC_SMBUS_BYTE_DATA))
@@ -49,8 +48,7 @@
ts->write = max7300_i2c_write;
ts->dev = &client->dev;
- ret = __max730x_probe(ts);
- return ret;
+ return __max730x_probe(ts);
}
static int max7300_remove(struct i2c_client *client)
diff --git a/drivers/gpio/gpio-max732x.c b/drivers/gpio/gpio-max732x.c
index a095b23..0fa4543c 100644
--- a/drivers/gpio/gpio-max732x.c
+++ b/drivers/gpio/gpio-max732x.c
@@ -4,6 +4,7 @@
* Copyright (C) 2007 Marvell International Ltd.
* Copyright (C) 2008 Jack Ren <jack.ren@marvell.com>
* Copyright (C) 2008 Eric Miao <eric.miao@marvell.com>
+ * Copyright (C) 2015 Linus Walleij <linus.walleij@linaro.org>
*
* Derived from drivers/gpio/pca953x.c
*
@@ -16,10 +17,8 @@
#include <linux/init.h>
#include <linux/slab.h>
#include <linux/string.h>
-#include <linux/gpio.h>
+#include <linux/gpio/driver.h>
#include <linux/interrupt.h>
-#include <linux/irq.h>
-#include <linux/irqdomain.h>
#include <linux/i2c.h>
#include <linux/i2c/max732x.h>
#include <linux/of.h>
@@ -150,9 +149,7 @@
uint8_t reg_out[2];
#ifdef CONFIG_GPIO_MAX732X_IRQ
- struct irq_domain *irq_domain;
struct mutex irq_lock;
- int irq_base;
uint8_t irq_mask;
uint8_t irq_mask_cur;
uint8_t irq_trig_raise;
@@ -356,35 +353,26 @@
mutex_unlock(&chip->lock);
}
-static int max732x_gpio_to_irq(struct gpio_chip *gc, unsigned off)
-{
- struct max732x_chip *chip = to_max732x(gc);
-
- if (chip->irq_domain) {
- return irq_create_mapping(chip->irq_domain,
- chip->irq_base + off);
- } else {
- return -ENXIO;
- }
-}
-
static void max732x_irq_mask(struct irq_data *d)
{
- struct max732x_chip *chip = irq_data_get_irq_chip_data(d);
+ struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
+ struct max732x_chip *chip = to_max732x(gc);
chip->irq_mask_cur &= ~(1 << d->hwirq);
}
static void max732x_irq_unmask(struct irq_data *d)
{
- struct max732x_chip *chip = irq_data_get_irq_chip_data(d);
+ struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
+ struct max732x_chip *chip = to_max732x(gc);
chip->irq_mask_cur |= 1 << d->hwirq;
}
static void max732x_irq_bus_lock(struct irq_data *d)
{
- struct max732x_chip *chip = irq_data_get_irq_chip_data(d);
+ struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
+ struct max732x_chip *chip = to_max732x(gc);
mutex_lock(&chip->irq_lock);
chip->irq_mask_cur = chip->irq_mask;
@@ -392,7 +380,8 @@
static void max732x_irq_bus_sync_unlock(struct irq_data *d)
{
- struct max732x_chip *chip = irq_data_get_irq_chip_data(d);
+ struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
+ struct max732x_chip *chip = to_max732x(gc);
uint16_t new_irqs;
uint16_t level;
@@ -410,7 +399,8 @@
static int max732x_irq_set_type(struct irq_data *d, unsigned int type)
{
- struct max732x_chip *chip = irq_data_get_irq_chip_data(d);
+ struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
+ struct max732x_chip *chip = to_max732x(gc);
uint16_t off = d->hwirq;
uint16_t mask = 1 << off;
@@ -492,7 +482,8 @@
do {
level = __ffs(pending);
- handle_nested_irq(irq_find_mapping(chip->irq_domain, level));
+ handle_nested_irq(irq_find_mapping(chip->gpio_chip.irqdomain,
+ level));
pending &= ~(1 << level);
} while (pending);
@@ -500,86 +491,50 @@
return IRQ_HANDLED;
}
-static int max732x_irq_map(struct irq_domain *h, unsigned int virq,
- irq_hw_number_t hw)
-{
- struct max732x_chip *chip = h->host_data;
-
- if (!(chip->dir_input & (1 << hw))) {
- dev_err(&chip->client->dev,
- "Attempt to map output line as IRQ line: %lu\n",
- hw);
- return -EPERM;
- }
-
- irq_set_chip_data(virq, chip);
- irq_set_chip_and_handler(virq, &max732x_irq_chip,
- handle_edge_irq);
- irq_set_nested_thread(virq, 1);
-#ifdef CONFIG_ARM
- /* ARM needs us to explicitly flag the IRQ as valid
- * and will set them noprobe when we do so. */
- set_irq_flags(virq, IRQF_VALID);
-#else
- irq_set_noprobe(virq);
-#endif
-
- return 0;
-}
-
-static struct irq_domain_ops max732x_irq_domain_ops = {
- .map = max732x_irq_map,
- .xlate = irq_domain_xlate_twocell,
-};
-
-static void max732x_irq_teardown(struct max732x_chip *chip)
-{
- if (chip->client->irq && chip->irq_domain)
- irq_domain_remove(chip->irq_domain);
-}
-
static int max732x_irq_setup(struct max732x_chip *chip,
const struct i2c_device_id *id)
{
struct i2c_client *client = chip->client;
struct max732x_platform_data *pdata = dev_get_platdata(&client->dev);
int has_irq = max732x_features[id->driver_data] >> 32;
+ int irq_base = 0;
int ret;
if (((pdata && pdata->irq_base) || client->irq)
&& has_irq != INT_NONE) {
if (pdata)
- chip->irq_base = pdata->irq_base;
+ irq_base = pdata->irq_base;
chip->irq_features = has_irq;
mutex_init(&chip->irq_lock);
- chip->irq_domain = irq_domain_add_simple(client->dev.of_node,
- chip->gpio_chip.ngpio, chip->irq_base,
- &max732x_irq_domain_ops, chip);
- if (!chip->irq_domain) {
- dev_err(&client->dev, "Failed to create IRQ domain\n");
- return -ENOMEM;
- }
-
- ret = request_threaded_irq(client->irq,
- NULL,
- max732x_irq_handler,
- IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
- dev_name(&client->dev), chip);
+ ret = devm_request_threaded_irq(&client->dev,
+ client->irq,
+ NULL,
+ max732x_irq_handler,
+ IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
+ dev_name(&client->dev), chip);
if (ret) {
dev_err(&client->dev, "failed to request irq %d\n",
client->irq);
- goto out_failed;
+ return ret;
}
-
- chip->gpio_chip.to_irq = max732x_gpio_to_irq;
+ ret = gpiochip_irqchip_add(&chip->gpio_chip,
+ &max732x_irq_chip,
+ irq_base,
+ handle_edge_irq,
+ IRQ_TYPE_NONE);
+ if (ret) {
+ dev_err(&client->dev,
+ "could not connect irqchip to gpiochip\n");
+ return ret;
+ }
+ gpiochip_set_chained_irqchip(&chip->gpio_chip,
+ &max732x_irq_chip,
+ client->irq,
+ NULL);
}
return 0;
-
-out_failed:
- max732x_irq_teardown(chip);
- return ret;
}
#else /* CONFIG_GPIO_MAX732X_IRQ */
@@ -595,10 +550,6 @@
return 0;
}
-
-static void max732x_irq_teardown(struct max732x_chip *chip)
-{
-}
#endif
static int max732x_setup_gpio(struct max732x_chip *chip,
@@ -730,14 +681,16 @@
if (nr_port > 8)
max732x_readb(chip, is_group_a(chip, 8), &chip->reg_out[1]);
- ret = max732x_irq_setup(chip, id);
- if (ret)
- goto out_failed;
-
ret = gpiochip_add(&chip->gpio_chip);
if (ret)
goto out_failed;
+ ret = max732x_irq_setup(chip, id);
+ if (ret) {
+ gpiochip_remove(&chip->gpio_chip);
+ goto out_failed;
+ }
+
if (pdata && pdata->setup) {
ret = pdata->setup(client, chip->gpio_chip.base,
chip->gpio_chip.ngpio, pdata->context);
@@ -751,7 +704,6 @@
out_failed:
if (chip->client_dummy)
i2c_unregister_device(chip->client_dummy);
- max732x_irq_teardown(chip);
return ret;
}
@@ -774,8 +726,6 @@
gpiochip_remove(&chip->gpio_chip);
- max732x_irq_teardown(chip);
-
/* unregister any dummy i2c_client */
if (chip->client_dummy)
i2c_unregister_device(chip->client_dummy);
diff --git a/drivers/gpio/gpio-mb86s7x.c b/drivers/gpio/gpio-mb86s7x.c
index 21b1ce5..ee93c0a 100644
--- a/drivers/gpio/gpio-mb86s7x.c
+++ b/drivers/gpio/gpio-mb86s7x.c
@@ -58,6 +58,11 @@
spin_lock_irqsave(&gchip->lock, flags);
val = readl(gchip->base + PFR(gpio));
+ if (!(val & OFFSET(gpio))) {
+ spin_unlock_irqrestore(&gchip->lock, flags);
+ return -EINVAL;
+ }
+
val &= ~OFFSET(gpio);
writel(val, gchip->base + PFR(gpio));
diff --git a/drivers/gpio/gpio-mc33880.c b/drivers/gpio/gpio-mc33880.c
index 4e3e160..a431604 100644
--- a/drivers/gpio/gpio-mc33880.c
+++ b/drivers/gpio/gpio-mc33880.c
@@ -151,7 +151,7 @@
struct mc33880 *mc;
mc = spi_get_drvdata(spi);
- if (mc == NULL)
+ if (!mc)
return -ENODEV;
gpiochip_remove(&mc->chip);
diff --git a/drivers/gpio/gpio-mcp23s08.c b/drivers/gpio/gpio-mcp23s08.c
index eea5d7e..2fc7ff8 100644
--- a/drivers/gpio/gpio-mcp23s08.c
+++ b/drivers/gpio/gpio-mcp23s08.c
@@ -949,10 +949,12 @@
if (!chips)
return -ENODEV;
- data = kzalloc(sizeof(*data) + chips * sizeof(struct mcp23s08),
- GFP_KERNEL);
+ data = devm_kzalloc(&spi->dev,
+ sizeof(*data) + chips * sizeof(struct mcp23s08),
+ GFP_KERNEL);
if (!data)
return -ENOMEM;
+
spi_set_drvdata(spi, data);
spi->irq = irq_of_parse_and_map(spi->dev.of_node, 0);
@@ -989,7 +991,6 @@
continue;
gpiochip_remove(&data->mcp[addr]->chip);
}
- kfree(data);
return status;
}
@@ -1007,7 +1008,7 @@
mcp23s08_irq_teardown(data->mcp[addr]);
gpiochip_remove(&data->mcp[addr]->chip);
}
- kfree(data);
+
return 0;
}
diff --git a/drivers/gpio/gpio-mvebu.c b/drivers/gpio/gpio-mvebu.c
index d0bc123..1a54205 100644
--- a/drivers/gpio/gpio-mvebu.c
+++ b/drivers/gpio/gpio-mvebu.c
@@ -320,11 +320,13 @@
{
struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d);
struct mvebu_gpio_chip *mvchip = gc->private;
+ struct irq_chip_type *ct = irq_data_get_chip_type(d);
u32 mask = 1 << (d->irq - gc->irq_base);
irq_gc_lock(gc);
- gc->mask_cache &= ~mask;
- writel_relaxed(gc->mask_cache, mvebu_gpioreg_edge_mask(mvchip));
+ ct->mask_cache_priv &= ~mask;
+
+ writel_relaxed(ct->mask_cache_priv, mvebu_gpioreg_edge_mask(mvchip));
irq_gc_unlock(gc);
}
@@ -332,11 +334,13 @@
{
struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d);
struct mvebu_gpio_chip *mvchip = gc->private;
+ struct irq_chip_type *ct = irq_data_get_chip_type(d);
+
u32 mask = 1 << (d->irq - gc->irq_base);
irq_gc_lock(gc);
- gc->mask_cache |= mask;
- writel_relaxed(gc->mask_cache, mvebu_gpioreg_edge_mask(mvchip));
+ ct->mask_cache_priv |= mask;
+ writel_relaxed(ct->mask_cache_priv, mvebu_gpioreg_edge_mask(mvchip));
irq_gc_unlock(gc);
}
@@ -344,11 +348,13 @@
{
struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d);
struct mvebu_gpio_chip *mvchip = gc->private;
+ struct irq_chip_type *ct = irq_data_get_chip_type(d);
+
u32 mask = 1 << (d->irq - gc->irq_base);
irq_gc_lock(gc);
- gc->mask_cache &= ~mask;
- writel_relaxed(gc->mask_cache, mvebu_gpioreg_level_mask(mvchip));
+ ct->mask_cache_priv &= ~mask;
+ writel_relaxed(ct->mask_cache_priv, mvebu_gpioreg_level_mask(mvchip));
irq_gc_unlock(gc);
}
@@ -356,11 +362,13 @@
{
struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d);
struct mvebu_gpio_chip *mvchip = gc->private;
+ struct irq_chip_type *ct = irq_data_get_chip_type(d);
+
u32 mask = 1 << (d->irq - gc->irq_base);
irq_gc_lock(gc);
- gc->mask_cache |= mask;
- writel_relaxed(gc->mask_cache, mvebu_gpioreg_level_mask(mvchip));
+ ct->mask_cache_priv |= mask;
+ writel_relaxed(ct->mask_cache_priv, mvebu_gpioreg_level_mask(mvchip));
irq_gc_unlock(gc);
}
diff --git a/drivers/gpio/gpio-omap.c b/drivers/gpio/gpio-omap.c
index f476ae2..cd1d5bf 100644
--- a/drivers/gpio/gpio-omap.c
+++ b/drivers/gpio/gpio-omap.c
@@ -75,14 +75,12 @@
int power_mode;
bool workaround_enabled;
- void (*set_dataout)(struct gpio_bank *bank, int gpio, int enable);
+ void (*set_dataout)(struct gpio_bank *bank, unsigned gpio, int enable);
int (*get_context_loss_count)(struct device *dev);
struct omap_gpio_reg_offs *regs;
};
-#define GPIO_INDEX(bank, gpio) (gpio % bank->width)
-#define GPIO_BIT(bank, gpio) (BIT(GPIO_INDEX(bank, gpio)))
#define GPIO_MOD_CTRL_BIT BIT(0)
#define BANK_USED(bank) (bank->mod_usage || bank->irq_usage)
@@ -90,11 +88,6 @@
static void omap_gpio_unmask_irq(struct irq_data *d);
-static int omap_irq_to_gpio(struct gpio_bank *bank, unsigned int gpio_irq)
-{
- return bank->chip.base + gpio_irq;
-}
-
static inline struct gpio_bank *omap_irq_data_get_bank(struct irq_data *d)
{
struct gpio_chip *chip = irq_data_get_irq_chip_data(d);
@@ -119,11 +112,11 @@
/* set data out value using dedicate set/clear register */
-static void omap_set_gpio_dataout_reg(struct gpio_bank *bank, int gpio,
+static void omap_set_gpio_dataout_reg(struct gpio_bank *bank, unsigned offset,
int enable)
{
void __iomem *reg = bank->base;
- u32 l = GPIO_BIT(bank, gpio);
+ u32 l = BIT(offset);
if (enable) {
reg += bank->regs->set_dataout;
@@ -137,11 +130,11 @@
}
/* set data out value using mask register */
-static void omap_set_gpio_dataout_mask(struct gpio_bank *bank, int gpio,
+static void omap_set_gpio_dataout_mask(struct gpio_bank *bank, unsigned offset,
int enable)
{
void __iomem *reg = bank->base + bank->regs->dataout;
- u32 gpio_bit = GPIO_BIT(bank, gpio);
+ u32 gpio_bit = BIT(offset);
u32 l;
l = readl_relaxed(reg);
@@ -208,13 +201,13 @@
/**
* omap2_set_gpio_debounce - low level gpio debounce time
* @bank: the gpio bank we're acting upon
- * @gpio: the gpio number on this @gpio
+ * @offset: the gpio number on this @bank
* @debounce: debounce time to use
*
* OMAP's debounce time is in 31us steps so we need
* to convert and round up to the closest unit.
*/
-static void omap2_set_gpio_debounce(struct gpio_bank *bank, unsigned gpio,
+static void omap2_set_gpio_debounce(struct gpio_bank *bank, unsigned offset,
unsigned debounce)
{
void __iomem *reg;
@@ -231,7 +224,7 @@
else
debounce = (debounce / 0x1f) - 1;
- l = GPIO_BIT(bank, gpio);
+ l = BIT(offset);
clk_prepare_enable(bank->dbck);
reg = bank->base + bank->regs->debounce;
@@ -266,16 +259,16 @@
/**
* omap_clear_gpio_debounce - clear debounce settings for a gpio
* @bank: the gpio bank we're acting upon
- * @gpio: the gpio number on this @gpio
+ * @offset: the gpio number on this @bank
*
* If a gpio is using debounce, then clear the debounce enable bit and if
* this is the only gpio in this bank using debounce, then clear the debounce
* time too. The debounce clock will also be disabled when calling this function
* if this is the only gpio in the bank using debounce.
*/
-static void omap_clear_gpio_debounce(struct gpio_bank *bank, unsigned gpio)
+static void omap_clear_gpio_debounce(struct gpio_bank *bank, unsigned offset)
{
- u32 gpio_bit = GPIO_BIT(bank, gpio);
+ u32 gpio_bit = BIT(offset);
if (!bank->dbck_flag)
return;
@@ -472,42 +465,32 @@
}
}
-static int omap_gpio_is_input(struct gpio_bank *bank, int mask)
+static int omap_gpio_is_input(struct gpio_bank *bank, unsigned offset)
{
void __iomem *reg = bank->base + bank->regs->direction;
- return readl_relaxed(reg) & mask;
+ return readl_relaxed(reg) & BIT(offset);
}
-static void omap_gpio_init_irq(struct gpio_bank *bank, unsigned gpio,
- unsigned offset)
+static void omap_gpio_init_irq(struct gpio_bank *bank, unsigned offset)
{
if (!LINE_USED(bank->mod_usage, offset)) {
omap_enable_gpio_module(bank, offset);
omap_set_gpio_direction(bank, offset, 1);
}
- bank->irq_usage |= BIT(GPIO_INDEX(bank, gpio));
+ bank->irq_usage |= BIT(offset);
}
static int omap_gpio_irq_type(struct irq_data *d, unsigned type)
{
struct gpio_bank *bank = omap_irq_data_get_bank(d);
- unsigned gpio = 0;
int retval;
unsigned long flags;
- unsigned offset;
+ unsigned offset = d->hwirq;
if (!BANK_USED(bank))
pm_runtime_get_sync(bank->dev);
-#ifdef CONFIG_ARCH_OMAP1
- if (d->irq > IH_MPUIO_BASE)
- gpio = OMAP_MPUIO(d->irq - IH_MPUIO_BASE);
-#endif
-
- if (!gpio)
- gpio = omap_irq_to_gpio(bank, d->hwirq);
-
if (type & ~IRQ_TYPE_SENSE_MASK)
return -EINVAL;
@@ -516,10 +499,9 @@
return -EINVAL;
spin_lock_irqsave(&bank->lock, flags);
- offset = GPIO_INDEX(bank, gpio);
retval = omap_set_gpio_triggering(bank, offset, type);
- omap_gpio_init_irq(bank, gpio, offset);
- if (!omap_gpio_is_input(bank, BIT(offset))) {
+ omap_gpio_init_irq(bank, offset);
+ if (!omap_gpio_is_input(bank, offset)) {
spin_unlock_irqrestore(&bank->lock, flags);
return -EINVAL;
}
@@ -550,9 +532,10 @@
readl_relaxed(reg);
}
-static inline void omap_clear_gpio_irqstatus(struct gpio_bank *bank, int gpio)
+static inline void omap_clear_gpio_irqstatus(struct gpio_bank *bank,
+ unsigned offset)
{
- omap_clear_gpio_irqbank(bank, GPIO_BIT(bank, gpio));
+ omap_clear_gpio_irqbank(bank, BIT(offset));
}
static u32 omap_get_gpio_irqbank_mask(struct gpio_bank *bank)
@@ -613,13 +596,13 @@
writel_relaxed(l, reg);
}
-static inline void omap_set_gpio_irqenable(struct gpio_bank *bank, int gpio,
- int enable)
+static inline void omap_set_gpio_irqenable(struct gpio_bank *bank,
+ unsigned offset, int enable)
{
if (enable)
- omap_enable_gpio_irqbank(bank, GPIO_BIT(bank, gpio));
+ omap_enable_gpio_irqbank(bank, BIT(offset));
else
- omap_disable_gpio_irqbank(bank, GPIO_BIT(bank, gpio));
+ omap_disable_gpio_irqbank(bank, BIT(offset));
}
/*
@@ -630,14 +613,16 @@
* enabled. When system is suspended, only selected GPIO interrupts need
* to have wake-up enabled.
*/
-static int omap_set_gpio_wakeup(struct gpio_bank *bank, int gpio, int enable)
+static int omap_set_gpio_wakeup(struct gpio_bank *bank, unsigned offset,
+ int enable)
{
- u32 gpio_bit = GPIO_BIT(bank, gpio);
+ u32 gpio_bit = BIT(offset);
unsigned long flags;
if (bank->non_wakeup_gpios & gpio_bit) {
dev_err(bank->dev,
- "Unable to modify wakeup on non-wakeup GPIO%d\n", gpio);
+ "Unable to modify wakeup on non-wakeup GPIO%d\n",
+ offset);
return -EINVAL;
}
@@ -653,22 +638,22 @@
return 0;
}
-static void omap_reset_gpio(struct gpio_bank *bank, int gpio)
+static void omap_reset_gpio(struct gpio_bank *bank, unsigned offset)
{
- omap_set_gpio_direction(bank, GPIO_INDEX(bank, gpio), 1);
- omap_set_gpio_irqenable(bank, gpio, 0);
- omap_clear_gpio_irqstatus(bank, gpio);
- omap_set_gpio_triggering(bank, GPIO_INDEX(bank, gpio), IRQ_TYPE_NONE);
- omap_clear_gpio_debounce(bank, gpio);
+ omap_set_gpio_direction(bank, offset, 1);
+ omap_set_gpio_irqenable(bank, offset, 0);
+ omap_clear_gpio_irqstatus(bank, offset);
+ omap_set_gpio_triggering(bank, offset, IRQ_TYPE_NONE);
+ omap_clear_gpio_debounce(bank, offset);
}
/* Use disable_irq_wake() and enable_irq_wake() functions from drivers */
static int omap_gpio_wake_enable(struct irq_data *d, unsigned int enable)
{
struct gpio_bank *bank = omap_irq_data_get_bank(d);
- unsigned int gpio = omap_irq_to_gpio(bank, d->hwirq);
+ unsigned offset = d->hwirq;
- return omap_set_gpio_wakeup(bank, gpio, enable);
+ return omap_set_gpio_wakeup(bank, offset, enable);
}
static int omap_gpio_request(struct gpio_chip *chip, unsigned offset)
@@ -706,7 +691,7 @@
spin_lock_irqsave(&bank->lock, flags);
bank->mod_usage &= ~(BIT(offset));
omap_disable_gpio_module(bank, offset);
- omap_reset_gpio(bank, bank->chip.base + offset);
+ omap_reset_gpio(bank, offset);
spin_unlock_irqrestore(&bank->lock, flags);
/*
@@ -803,15 +788,14 @@
static unsigned int omap_gpio_irq_startup(struct irq_data *d)
{
struct gpio_bank *bank = omap_irq_data_get_bank(d);
- unsigned int gpio = omap_irq_to_gpio(bank, d->hwirq);
unsigned long flags;
- unsigned offset = GPIO_INDEX(bank, gpio);
+ unsigned offset = d->hwirq;
if (!BANK_USED(bank))
pm_runtime_get_sync(bank->dev);
spin_lock_irqsave(&bank->lock, flags);
- omap_gpio_init_irq(bank, gpio, offset);
+ omap_gpio_init_irq(bank, offset);
spin_unlock_irqrestore(&bank->lock, flags);
omap_gpio_unmask_irq(d);
@@ -821,15 +805,13 @@
static void omap_gpio_irq_shutdown(struct irq_data *d)
{
struct gpio_bank *bank = omap_irq_data_get_bank(d);
- unsigned int gpio = omap_irq_to_gpio(bank, d->hwirq);
unsigned long flags;
- unsigned offset = GPIO_INDEX(bank, gpio);
+ unsigned offset = d->hwirq;
spin_lock_irqsave(&bank->lock, flags);
- gpiochip_unlock_as_irq(&bank->chip, offset);
bank->irq_usage &= ~(BIT(offset));
omap_disable_gpio_module(bank, offset);
- omap_reset_gpio(bank, gpio);
+ omap_reset_gpio(bank, offset);
spin_unlock_irqrestore(&bank->lock, flags);
/*
@@ -843,43 +825,42 @@
static void omap_gpio_ack_irq(struct irq_data *d)
{
struct gpio_bank *bank = omap_irq_data_get_bank(d);
- unsigned int gpio = omap_irq_to_gpio(bank, d->hwirq);
+ unsigned offset = d->hwirq;
- omap_clear_gpio_irqstatus(bank, gpio);
+ omap_clear_gpio_irqstatus(bank, offset);
}
static void omap_gpio_mask_irq(struct irq_data *d)
{
struct gpio_bank *bank = omap_irq_data_get_bank(d);
- unsigned int gpio = omap_irq_to_gpio(bank, d->hwirq);
+ unsigned offset = d->hwirq;
unsigned long flags;
spin_lock_irqsave(&bank->lock, flags);
- omap_set_gpio_irqenable(bank, gpio, 0);
- omap_set_gpio_triggering(bank, GPIO_INDEX(bank, gpio), IRQ_TYPE_NONE);
+ omap_set_gpio_irqenable(bank, offset, 0);
+ omap_set_gpio_triggering(bank, offset, IRQ_TYPE_NONE);
spin_unlock_irqrestore(&bank->lock, flags);
}
static void omap_gpio_unmask_irq(struct irq_data *d)
{
struct gpio_bank *bank = omap_irq_data_get_bank(d);
- unsigned int gpio = omap_irq_to_gpio(bank, d->hwirq);
- unsigned int irq_mask = GPIO_BIT(bank, gpio);
+ unsigned offset = d->hwirq;
u32 trigger = irqd_get_trigger_type(d);
unsigned long flags;
spin_lock_irqsave(&bank->lock, flags);
if (trigger)
- omap_set_gpio_triggering(bank, GPIO_INDEX(bank, gpio), trigger);
+ omap_set_gpio_triggering(bank, offset, trigger);
/* For level-triggered GPIOs, the clearing must be done after
* the HW source is cleared, thus after the handler has run */
- if (bank->level_mask & irq_mask) {
- omap_set_gpio_irqenable(bank, gpio, 0);
- omap_clear_gpio_irqstatus(bank, gpio);
+ if (bank->level_mask & BIT(offset)) {
+ omap_set_gpio_irqenable(bank, offset, 0);
+ omap_clear_gpio_irqstatus(bank, offset);
}
- omap_set_gpio_irqenable(bank, gpio, 1);
+ omap_set_gpio_irqenable(bank, offset, 1);
spin_unlock_irqrestore(&bank->lock, flags);
}
@@ -977,12 +958,10 @@
static int omap_gpio_get(struct gpio_chip *chip, unsigned offset)
{
struct gpio_bank *bank;
- u32 mask;
bank = container_of(chip, struct gpio_bank, chip);
- mask = (BIT(offset));
- if (omap_gpio_is_input(bank, mask))
+ if (omap_gpio_is_input(bank, offset))
return omap_get_gpio_datain(bank, offset);
else
return omap_get_gpio_dataout(bank, offset);
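The conversion above drops every translation through GPIO_INDEX()/GPIO_BIT(): each helper now takes the bank-relative offset carried in irq_data->hwirq, the register bit is simply BIT(offset), and the global Linux GPIO number is only needed as chip.base + offset. A standalone illustration of that arithmetic (plain C, not part of the patch; the bank base of 32 is assumed):

#include <stdio.h>

#define BIT(nr)	(1UL << (nr))

int main(void)
{
	unsigned int chip_base = 32;	/* assumed base of the second bank */
	unsigned int offset = 5;	/* d->hwirq, the bank-relative line */

	printf("register mask  = 0x%08lx\n", BIT(offset));
	printf("linux gpio num = %u\n", chip_base + offset);
	return 0;
}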
diff --git a/drivers/gpio/gpio-pcf857x.c b/drivers/gpio/gpio-pcf857x.c
index 236708a..945f0cd 100644
--- a/drivers/gpio/gpio-pcf857x.c
+++ b/drivers/gpio/gpio-pcf857x.c
@@ -88,11 +88,9 @@
struct gpio_chip chip;
struct i2c_client *client;
struct mutex lock; /* protect 'out' */
- struct irq_domain *irq_domain; /* for irq demux */
spinlock_t slock; /* protect irq demux */
unsigned out; /* software latch */
unsigned status; /* current status */
- unsigned irq_mapped; /* mapped gpio irqs */
int (*write)(struct i2c_client *client, unsigned data);
int (*read)(struct i2c_client *client);
@@ -182,18 +180,6 @@
/*-------------------------------------------------------------------------*/
-static int pcf857x_to_irq(struct gpio_chip *chip, unsigned offset)
-{
- struct pcf857x *gpio = container_of(chip, struct pcf857x, chip);
- int ret;
-
- ret = irq_create_mapping(gpio->irq_domain, offset);
- if (ret > 0)
- gpio->irq_mapped |= (1 << offset);
-
- return ret;
-}
-
static irqreturn_t pcf857x_irq(int irq, void *data)
{
struct pcf857x *gpio = data;
@@ -208,9 +194,9 @@
* interrupt source, just to avoid bad irqs
*/
- change = ((gpio->status ^ status) & gpio->irq_mapped);
+ change = (gpio->status ^ status);
for_each_set_bit(i, &change, gpio->chip.ngpio)
- generic_handle_irq(irq_find_mapping(gpio->irq_domain, i));
+ handle_nested_irq(irq_find_mapping(gpio->chip.irqdomain, i));
gpio->status = status;
spin_unlock_irqrestore(&gpio->slock, flags);
@@ -218,66 +204,36 @@
return IRQ_HANDLED;
}
-static int pcf857x_irq_domain_map(struct irq_domain *domain, unsigned int irq,
- irq_hw_number_t hw)
+/*
+ * NOP functions
+ */
+static void noop(struct irq_data *data) { }
+
+static unsigned int noop_ret(struct irq_data *data)
{
- struct pcf857x *gpio = domain->host_data;
-
- irq_set_chip_and_handler(irq,
- &dummy_irq_chip,
- handle_level_irq);
-#ifdef CONFIG_ARM
- set_irq_flags(irq, IRQF_VALID);
-#else
- irq_set_noprobe(irq);
-#endif
- gpio->irq_mapped |= (1 << hw);
-
return 0;
}
-static struct irq_domain_ops pcf857x_irq_domain_ops = {
- .map = pcf857x_irq_domain_map,
+static int pcf857x_irq_set_wake(struct irq_data *data, unsigned int on)
+{
+ struct pcf857x *gpio = irq_data_get_irq_chip_data(data);
+
+ irq_set_irq_wake(gpio->client->irq, on);
+ return 0;
+}
+
+static struct irq_chip pcf857x_irq_chip = {
+ .name = "pcf857x",
+ .irq_startup = noop_ret,
+ .irq_shutdown = noop,
+ .irq_enable = noop,
+ .irq_disable = noop,
+ .irq_ack = noop,
+ .irq_mask = noop,
+ .irq_unmask = noop,
+ .irq_set_wake = pcf857x_irq_set_wake,
};
-static void pcf857x_irq_domain_cleanup(struct pcf857x *gpio)
-{
- if (gpio->irq_domain)
- irq_domain_remove(gpio->irq_domain);
-
-}
-
-static int pcf857x_irq_domain_init(struct pcf857x *gpio,
- struct i2c_client *client)
-{
- int status;
-
- gpio->irq_domain = irq_domain_add_linear(client->dev.of_node,
- gpio->chip.ngpio,
- &pcf857x_irq_domain_ops,
- gpio);
- if (!gpio->irq_domain)
- goto fail;
-
- /* enable real irq */
- status = devm_request_threaded_irq(&client->dev, client->irq,
- NULL, pcf857x_irq, IRQF_ONESHOT |
- IRQF_TRIGGER_FALLING | IRQF_SHARED,
- dev_name(&client->dev), gpio);
-
- if (status)
- goto fail;
-
- /* enable gpio_to_irq() */
- gpio->chip.to_irq = pcf857x_to_irq;
-
- return 0;
-
-fail:
- pcf857x_irq_domain_cleanup(gpio);
- return -EINVAL;
-}
-
/*-------------------------------------------------------------------------*/
static int pcf857x_probe(struct i2c_client *client,
@@ -314,15 +270,6 @@
gpio->chip.direction_output = pcf857x_output;
gpio->chip.ngpio = id->driver_data;
- /* enable gpio_to_irq() if platform has settings */
- if (client->irq) {
- status = pcf857x_irq_domain_init(gpio, client);
- if (status < 0) {
- dev_err(&client->dev, "irq_domain init failed\n");
- goto fail_irq_domain;
- }
- }
-
/* NOTE: the OnSemi jlc1562b is also largely compatible with
* these parts, notably for output. It has a low-resolution
* DAC instead of pin change IRQs; and its inputs can be the
@@ -398,6 +345,27 @@
if (status < 0)
goto fail;
+ /* Enable irqchip if we have an interrupt */
+ if (client->irq) {
+ status = gpiochip_irqchip_add(&gpio->chip, &pcf857x_irq_chip,
+ 0, handle_level_irq,
+ IRQ_TYPE_NONE);
+ if (status) {
+ dev_err(&client->dev, "cannot add irqchip\n");
+ goto fail_irq;
+ }
+
+ status = devm_request_threaded_irq(&client->dev, client->irq,
+ NULL, pcf857x_irq, IRQF_ONESHOT |
+ IRQF_TRIGGER_FALLING | IRQF_SHARED,
+ dev_name(&client->dev), gpio);
+ if (status)
+ goto fail_irq;
+
+ gpiochip_set_chained_irqchip(&gpio->chip, &pcf857x_irq_chip,
+ client->irq, NULL);
+ }
+
/* Let platform code set up the GPIOs and their users.
* Now is the first time anyone could use them.
*/
@@ -413,13 +381,12 @@
return 0;
-fail:
- if (client->irq)
- pcf857x_irq_domain_cleanup(gpio);
+fail_irq:
+ gpiochip_remove(&gpio->chip);
-fail_irq_domain:
- dev_dbg(&client->dev, "probe error %d for '%s'\n",
- status, client->name);
+fail:
+ dev_dbg(&client->dev, "probe error %d for '%s'\n", status,
+ client->name);
return status;
}
@@ -441,9 +408,6 @@
}
}
- if (client->irq)
- pcf857x_irq_domain_cleanup(gpio);
-
gpiochip_remove(&gpio->chip);
return status;
}
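With the driver-private irq_domain gone, the pcf857x lines get their interrupt mapping from gpiolib's irqchip helpers, so consumers can translate a descriptor to an IRQ without any driver-specific hook. A hedged consumer-side sketch (the "wakeup" con_id, handler and device name are illustrative, not from the patch):

#include <linux/err.h>
#include <linux/gpio/consumer.h>
#include <linux/interrupt.h>

static irqreturn_t demo_handler(int irq, void *data)
{
	return IRQ_HANDLED;
}

static int demo_request_expander_irq(struct device *dev)
{
	struct gpio_desc *gpiod;
	int irq;

	gpiod = gpiod_get(dev, "wakeup", GPIOD_IN);
	if (IS_ERR(gpiod))
		return PTR_ERR(gpiod);

	irq = gpiod_to_irq(gpiod);	/* served by chip->irqdomain now */
	if (irq < 0)
		return irq;

	return devm_request_threaded_irq(dev, irq, NULL, demo_handler,
					 IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
					 "demo", NULL);
}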
diff --git a/drivers/gpio/gpio-pxa.c b/drivers/gpio/gpio-pxa.c
index 2fdb04b..cdbbcf0 100644
--- a/drivers/gpio/gpio-pxa.c
+++ b/drivers/gpio/gpio-pxa.c
@@ -59,8 +59,7 @@
#define GAFR_OFFSET 0x54
#define ED_MASK_OFFSET 0x9C /* GPIO edge detection for AP side */
-#define BANK_OFF(n) (((n) < 3) ? (n) << 2 : ((n) > 5 ? 0x200 : 0x100) \
- + (((n) % 3) << 2))
+#define BANK_OFF(n) (((n) / 3) << 8) + (((n) % 3) << 2)
int pxa_last_gpio;
static int irq_base;
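The simplified BANK_OFF() relies on the PXA banks sitting in groups of three, each group 0x100 apart and each bank 4 bytes apart within a group. A standalone check (plain C, fully parenthesised here for safe use outside the driver) that it matches the expression it replaces for banks 0-8:

#include <assert.h>
#include <stdio.h>

#define OLD_BANK_OFF(n) (((n) < 3) ? (n) << 2 : ((n) > 5 ? 0x200 : 0x100) \
			 + (((n) % 3) << 2))
#define NEW_BANK_OFF(n) ((((n) / 3) << 8) + (((n) % 3) << 2))

int main(void)
{
	int n;

	for (n = 0; n < 9; n++) {
		printf("bank %d: old 0x%03x, new 0x%03x\n",
		       n, OLD_BANK_OFF(n), NEW_BANK_OFF(n));
		assert(OLD_BANK_OFF(n) == NEW_BANK_OFF(n));
	}
	return 0;
}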
diff --git a/drivers/gpio/gpio-rcar.c b/drivers/gpio/gpio-rcar.c
index c49522e..fd39774 100644
--- a/drivers/gpio/gpio-rcar.c
+++ b/drivers/gpio/gpio-rcar.c
@@ -14,6 +14,7 @@
* GNU General Public License for more details.
*/
+#include <linux/clk.h>
#include <linux/err.h>
#include <linux/gpio.h>
#include <linux/init.h>
@@ -37,20 +38,22 @@
struct platform_device *pdev;
struct gpio_chip gpio_chip;
struct irq_chip irq_chip;
+ unsigned int irq_parent;
+ struct clk *clk;
};
-#define IOINTSEL 0x00
-#define INOUTSEL 0x04
-#define OUTDT 0x08
-#define INDT 0x0c
-#define INTDT 0x10
-#define INTCLR 0x14
-#define INTMSK 0x18
-#define MSKCLR 0x1c
-#define POSNEG 0x20
-#define EDGLEVEL 0x24
-#define FILONOFF 0x28
-#define BOTHEDGE 0x4c
+#define IOINTSEL 0x00 /* General IO/Interrupt Switching Register */
+#define INOUTSEL 0x04 /* General Input/Output Switching Register */
+#define OUTDT 0x08 /* General Output Register */
+#define INDT 0x0c /* General Input Register */
+#define INTDT 0x10 /* Interrupt Display Register */
+#define INTCLR 0x14 /* Interrupt Clear Register */
+#define INTMSK 0x18 /* Interrupt Mask Register */
+#define MSKCLR 0x1c /* Interrupt Mask Clear Register */
+#define POSNEG 0x20 /* Positive/Negative Logic Select Register */
+#define EDGLEVEL 0x24 /* Edge/level Select Register */
+#define FILONOFF 0x28 /* Chattering Prevention On/Off Register */
+#define BOTHEDGE 0x4c /* One Edge/Both Edge Select Register */
#define RCAR_MAX_GPIO_PER_BANK 32
@@ -169,6 +172,25 @@
return 0;
}
+static int gpio_rcar_irq_set_wake(struct irq_data *d, unsigned int on)
+{
+ struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
+ struct gpio_rcar_priv *p = container_of(gc, struct gpio_rcar_priv,
+ gpio_chip);
+
+ irq_set_irq_wake(p->irq_parent, on);
+
+ if (!p->clk)
+ return 0;
+
+ if (on)
+ clk_enable(p->clk);
+ else
+ clk_disable(p->clk);
+
+ return 0;
+}
+
static irqreturn_t gpio_rcar_irq_handler(int irq, void *dev_id)
{
struct gpio_rcar_priv *p = dev_id;
@@ -367,6 +389,12 @@
platform_set_drvdata(pdev, p);
+ p->clk = devm_clk_get(dev, NULL);
+ if (IS_ERR(p->clk)) {
+ dev_warn(dev, "unable to get clock\n");
+ p->clk = NULL;
+ }
+
pm_runtime_enable(dev);
pm_runtime_get_sync(dev);
@@ -404,8 +432,8 @@
irq_chip->irq_mask = gpio_rcar_irq_disable;
irq_chip->irq_unmask = gpio_rcar_irq_enable;
irq_chip->irq_set_type = gpio_rcar_irq_set_type;
- irq_chip->flags = IRQCHIP_SKIP_SET_WAKE | IRQCHIP_SET_TYPE_MASKED
- | IRQCHIP_MASK_ON_SUSPEND;
+ irq_chip->irq_set_wake = gpio_rcar_irq_set_wake;
+ irq_chip->flags = IRQCHIP_SET_TYPE_MASKED | IRQCHIP_MASK_ON_SUSPEND;
ret = gpiochip_add(gpio_chip);
if (ret) {
@@ -413,13 +441,14 @@
goto err0;
}
- ret = gpiochip_irqchip_add(&p->gpio_chip, irq_chip, p->config.irq_base,
+ ret = gpiochip_irqchip_add(gpio_chip, irq_chip, p->config.irq_base,
handle_level_irq, IRQ_TYPE_NONE);
if (ret) {
dev_err(dev, "cannot add irqchip\n");
goto err1;
}
+ p->irq_parent = irq->start;
if (devm_request_irq(dev, irq->start, gpio_rcar_irq_handler,
IRQF_SHARED, name, p)) {
dev_err(dev, "failed to request IRQ\n");
@@ -431,7 +460,7 @@
/* warn in case of mismatch if irq base is specified */
if (p->config.irq_base) {
- ret = irq_find_mapping(p->gpio_chip.irqdomain, 0);
+ ret = irq_find_mapping(gpio_chip->irqdomain, 0);
if (p->config.irq_base != ret)
dev_warn(dev, "irq base mismatch (%u/%u)\n",
p->config.irq_base, ret);
@@ -447,7 +476,7 @@
return 0;
err1:
- gpiochip_remove(&p->gpio_chip);
+ gpiochip_remove(gpio_chip);
err0:
pm_runtime_put(dev);
pm_runtime_disable(dev);
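The clock obtained in probe is treated as optional: a failed lookup is only a warning and the pointer is cleared so later users (such as the new irq_set_wake hook) can test for NULL before gating it. A minimal sketch of that idiom, with an assumed helper name:

#include <linux/clk.h>
#include <linux/device.h>
#include <linux/err.h>

static struct clk *demo_get_optional_clk(struct device *dev)
{
	struct clk *clk = devm_clk_get(dev, NULL);

	if (IS_ERR(clk)) {
		dev_warn(dev, "unable to get clock\n");
		return NULL;		/* treat the clock as absent */
	}
	return clk;
}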
diff --git a/drivers/gpio/gpio-tb10x.c b/drivers/gpio/gpio-tb10x.c
index 62ab9f4..46b8961 100644
--- a/drivers/gpio/gpio-tb10x.c
+++ b/drivers/gpio/gpio-tb10x.c
@@ -283,7 +283,7 @@
return ret;
}
-static int __exit tb10x_gpio_remove(struct platform_device *pdev)
+static int tb10x_gpio_remove(struct platform_device *pdev)
{
struct tb10x_gpio *tb10x_gpio = platform_get_drvdata(pdev);
diff --git a/drivers/gpio/gpio-vf610.c b/drivers/gpio/gpio-vf610.c
index 971c739..7bd9f20 100644
--- a/drivers/gpio/gpio-vf610.c
+++ b/drivers/gpio/gpio-vf610.c
@@ -244,16 +244,16 @@
gc = &port->gc;
gc->of_node = np;
gc->dev = dev;
- gc->label = "vf610-gpio",
- gc->ngpio = VF610_GPIO_PER_PORT,
+ gc->label = "vf610-gpio";
+ gc->ngpio = VF610_GPIO_PER_PORT;
gc->base = of_alias_get_id(np, "gpio") * VF610_GPIO_PER_PORT;
- gc->request = vf610_gpio_request,
- gc->free = vf610_gpio_free,
- gc->direction_input = vf610_gpio_direction_input,
- gc->get = vf610_gpio_get,
- gc->direction_output = vf610_gpio_direction_output,
- gc->set = vf610_gpio_set,
+ gc->request = vf610_gpio_request;
+ gc->free = vf610_gpio_free;
+ gc->direction_input = vf610_gpio_direction_input;
+ gc->get = vf610_gpio_get;
+ gc->direction_output = vf610_gpio_direction_output;
+ gc->set = vf610_gpio_set;
ret = gpiochip_add(gc);
if (ret < 0)
diff --git a/drivers/gpio/gpio-xgene-sb.c b/drivers/gpio/gpio-xgene-sb.c
index b6a15c3..fb9d29a 100644
--- a/drivers/gpio/gpio-xgene-sb.c
+++ b/drivers/gpio/gpio-xgene-sb.c
@@ -93,7 +93,7 @@
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
regs = devm_ioremap_resource(&pdev->dev, res);
- if (!regs)
+ if (IS_ERR(regs))
return PTR_ERR(regs);
ret = bgpio_init(&priv->bgc, &pdev->dev, 4,
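devm_ioremap_resource() never returns NULL on failure; it returns an ERR_PTR-encoded errno, which is why the check above becomes IS_ERR(). A minimal probe-side sketch of the corrected idiom (names are illustrative):

#include <linux/err.h>
#include <linux/io.h>
#include <linux/platform_device.h>

static int demo_probe(struct platform_device *pdev)
{
	struct resource *res;
	void __iomem *regs;

	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	regs = devm_ioremap_resource(&pdev->dev, res);
	if (IS_ERR(regs))
		return PTR_ERR(regs);	/* failure is an ERR_PTR, never NULL */

	/* registers mapped; continue with controller setup */
	return 0;
}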
diff --git a/drivers/gpio/gpiolib-acpi.c b/drivers/gpio/gpiolib-acpi.c
index df990f2..d2303d5 100644
--- a/drivers/gpio/gpiolib-acpi.c
+++ b/drivers/gpio/gpiolib-acpi.c
@@ -304,7 +304,7 @@
return;
INIT_LIST_HEAD(&acpi_gpio->events);
- acpi_walk_resources(ACPI_HANDLE(chip->dev), "_AEI",
+ acpi_walk_resources(handle, "_AEI",
acpi_gpiochip_request_interrupt, acpi_gpio);
}
@@ -722,3 +722,87 @@
acpi_detach_data(handle, acpi_gpio_chip_dh);
kfree(acpi_gpio);
}
+
+static unsigned int acpi_gpio_package_count(const union acpi_object *obj)
+{
+ const union acpi_object *element = obj->package.elements;
+ const union acpi_object *end = element + obj->package.count;
+ unsigned int count = 0;
+
+ while (element < end) {
+ if (element->type == ACPI_TYPE_LOCAL_REFERENCE)
+ count++;
+
+ element++;
+ }
+ return count;
+}
+
+static int acpi_find_gpio_count(struct acpi_resource *ares, void *data)
+{
+ unsigned int *count = data;
+
+ if (ares->type == ACPI_RESOURCE_TYPE_GPIO)
+ *count += ares->data.gpio.pin_table_length;
+
+ return 1;
+}
+
+/**
+ * acpi_gpio_count - return the number of GPIOs associated with a
+ * device / function or -ENOENT if no GPIO has been
+ * assigned to the requested function.
+ * @dev: GPIO consumer, can be NULL for system-global GPIOs
+ * @con_id: function within the GPIO consumer
+ */
+int acpi_gpio_count(struct device *dev, const char *con_id)
+{
+ struct acpi_device *adev = ACPI_COMPANION(dev);
+ const union acpi_object *obj;
+ const struct acpi_gpio_mapping *gm;
+ int count = -ENOENT;
+ int ret;
+ char propname[32];
+ unsigned int i;
+
+ /* Try first from _DSD */
+ for (i = 0; i < ARRAY_SIZE(gpio_suffixes); i++) {
+ if (con_id && strcmp(con_id, "gpios"))
+ snprintf(propname, sizeof(propname), "%s-%s",
+ con_id, gpio_suffixes[i]);
+ else
+ snprintf(propname, sizeof(propname), "%s",
+ gpio_suffixes[i]);
+
+ ret = acpi_dev_get_property(adev, propname, ACPI_TYPE_ANY,
+ &obj);
+ if (ret == 0) {
+ if (obj->type == ACPI_TYPE_LOCAL_REFERENCE)
+ count = 1;
+ else if (obj->type == ACPI_TYPE_PACKAGE)
+ count = acpi_gpio_package_count(obj);
+ } else if (adev->driver_gpios) {
+ for (gm = adev->driver_gpios; gm->name; gm++)
+ if (strcmp(propname, gm->name) == 0) {
+ count = gm->size;
+ break;
+ }
+ }
+ if (count >= 0)
+ break;
+ }
+
+ /* Then from plain _CRS GPIOs */
+ if (count < 0) {
+ struct list_head resource_list;
+ unsigned int crs_count = 0;
+
+ INIT_LIST_HEAD(&resource_list);
+ acpi_dev_get_resources(adev, &resource_list,
+ acpi_find_gpio_count, &crs_count);
+ acpi_dev_free_resource_list(&resource_list);
+ if (crs_count > 0)
+ count = crs_count;
+ }
+ return count;
+}
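Both the ACPI and device-tree lookups build property names from the consumer id plus the shared gpio_suffixes[] table declared in gpiolib.h below. A standalone illustration (plain C) of the names tried for a hypothetical "reset" consumer:

#include <stdio.h>

static const char * const gpio_suffixes[] = { "gpios", "gpio" };

int main(void)
{
	const char *con_id = "reset";	/* hypothetical consumer id */
	char propname[32];
	unsigned int i;

	for (i = 0; i < sizeof(gpio_suffixes) / sizeof(gpio_suffixes[0]); i++) {
		snprintf(propname, sizeof(propname), "%s-%s",
			 con_id, gpio_suffixes[i]);
		printf("%s\n", propname);	/* "reset-gpios", then "reset-gpio" */
	}
	return 0;
}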
diff --git a/drivers/gpio/gpiolib-of.c b/drivers/gpio/gpiolib-of.c
index 4650bf8..a6c67c6 100644
--- a/drivers/gpio/gpiolib-of.c
+++ b/drivers/gpio/gpiolib-of.c
@@ -22,6 +22,7 @@
#include <linux/of_gpio.h>
#include <linux/pinctrl/pinctrl.h>
#include <linux/slab.h>
+#include <linux/gpio/machine.h>
#include "gpiolib.h"
@@ -118,6 +119,114 @@
EXPORT_SYMBOL(of_get_named_gpio_flags);
/**
+ * of_get_gpio_hog() - Get a GPIO hog descriptor, names and flags for GPIO API
+ * @np: device node to get GPIO from
+ * @name: GPIO line name
+ * @lflags: gpio_lookup_flags - returned from of_find_gpio() or
+ * of_get_gpio_hog()
+ * @dflags: gpiod_flags - optional GPIO initialization flags
+ *
+ * Returns a GPIO descriptor to use with the Linux GPIO API, or an
+ * ERR_PTR-encoded errno on failure.
+ */
+static struct gpio_desc *of_get_gpio_hog(struct device_node *np,
+ const char **name,
+ enum gpio_lookup_flags *lflags,
+ enum gpiod_flags *dflags)
+{
+ struct device_node *chip_np;
+ enum of_gpio_flags xlate_flags;
+ struct gpio_desc *desc;
+ struct gg_data gg_data = {
+ .flags = &xlate_flags,
+ };
+ u32 tmp;
+ int i, ret;
+
+ chip_np = np->parent;
+ if (!chip_np)
+ return ERR_PTR(-EINVAL);
+
+ xlate_flags = 0;
+ *lflags = 0;
+ *dflags = 0;
+
+ ret = of_property_read_u32(chip_np, "#gpio-cells", &tmp);
+ if (ret)
+ return ERR_PTR(ret);
+
+ if (tmp > MAX_PHANDLE_ARGS)
+ return ERR_PTR(-EINVAL);
+
+ gg_data.gpiospec.args_count = tmp;
+ gg_data.gpiospec.np = chip_np;
+ for (i = 0; i < tmp; i++) {
+ ret = of_property_read_u32_index(np, "gpios", i,
+ &gg_data.gpiospec.args[i]);
+ if (ret)
+ return ERR_PTR(ret);
+ }
+
+ gpiochip_find(&gg_data, of_gpiochip_find_and_xlate);
+ if (!gg_data.out_gpio) {
+ if (np->parent == np)
+ return ERR_PTR(-ENXIO);
+ else
+ return ERR_PTR(-EINVAL);
+ }
+
+ if (xlate_flags & OF_GPIO_ACTIVE_LOW)
+ *lflags |= GPIO_ACTIVE_LOW;
+
+ if (of_property_read_bool(np, "input"))
+ *dflags |= GPIOD_IN;
+ else if (of_property_read_bool(np, "output-low"))
+ *dflags |= GPIOD_OUT_LOW;
+ else if (of_property_read_bool(np, "output-high"))
+ *dflags |= GPIOD_OUT_HIGH;
+ else {
+ pr_warn("GPIO line %d (%s): no hogging state specified, bailing out\n",
+ desc_to_gpio(gg_data.out_gpio), np->name);
+ return ERR_PTR(-EINVAL);
+ }
+
+ if (name && of_property_read_string(np, "line-name", name))
+ *name = np->name;
+
+ desc = gg_data.out_gpio;
+
+ return desc;
+}
+
+/**
+ * of_gpiochip_scan_hogs - Scan gpio-controller and apply GPIO hog as requested
+ * @chip: gpio chip to act on
+ *
+ * This is only used by of_gpiochip_add to request/set GPIO initial
+ * configuration.
+ */
+static void of_gpiochip_scan_hogs(struct gpio_chip *chip)
+{
+ struct gpio_desc *desc = NULL;
+ struct device_node *np;
+ const char *name;
+ enum gpio_lookup_flags lflags;
+ enum gpiod_flags dflags;
+
+ for_each_child_of_node(chip->of_node, np) {
+ if (!of_property_read_bool(np, "gpio-hog"))
+ continue;
+
+ desc = of_get_gpio_hog(np, &name, &lflags, &dflags);
+ if (IS_ERR(desc))
+ continue;
+
+ if (gpiod_hog(desc, name, lflags, dflags))
+ continue;
+ }
+}
+
+/**
* of_gpio_simple_xlate - translate gpio_spec to the GPIO number and flags
* @gc: pointer to the gpio_chip structure
* @np: device node of the GPIO chip
@@ -326,6 +435,8 @@
of_gpiochip_add_pin_range(chip);
of_node_get(chip->of_node);
+
+ of_gpiochip_scan_hogs(chip);
}
void of_gpiochip_remove(struct gpio_chip *chip)
diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
index 1ca9295..59eaa23 100644
--- a/drivers/gpio/gpiolib.c
+++ b/drivers/gpio/gpiolib.c
@@ -315,6 +315,7 @@
/* Forward-declaration */
static void gpiochip_irqchip_remove(struct gpio_chip *gpiochip);
+static void gpiochip_free_hogs(struct gpio_chip *chip);
/**
* gpiochip_remove() - unregister a gpio_chip
@@ -333,6 +334,7 @@
acpi_gpiochip_remove(chip);
gpiochip_remove_pin_ranges(chip);
+ gpiochip_free_hogs(chip);
of_gpiochip_remove(chip);
spin_lock_irqsave(&gpio_lock, flags);
@@ -866,6 +868,7 @@
clear_bit(FLAG_REQUESTED, &desc->flags);
clear_bit(FLAG_OPEN_DRAIN, &desc->flags);
clear_bit(FLAG_OPEN_SOURCE, &desc->flags);
+ clear_bit(FLAG_IS_HOGGED, &desc->flags);
ret = true;
}
@@ -1659,19 +1662,18 @@
unsigned int idx,
enum gpio_lookup_flags *flags)
{
- static const char * const suffixes[] = { "gpios", "gpio" };
char prop_name[32]; /* 32 is max size of property name */
enum of_gpio_flags of_flags;
struct gpio_desc *desc;
unsigned int i;
- for (i = 0; i < ARRAY_SIZE(suffixes); i++) {
+ for (i = 0; i < ARRAY_SIZE(gpio_suffixes); i++) {
if (con_id)
snprintf(prop_name, sizeof(prop_name), "%s-%s", con_id,
- suffixes[i]);
+ gpio_suffixes[i]);
else
snprintf(prop_name, sizeof(prop_name), "%s",
- suffixes[i]);
+ gpio_suffixes[i]);
desc = of_get_named_gpiod_flags(dev->of_node, prop_name, idx,
&of_flags);
@@ -1692,7 +1694,6 @@
unsigned int idx,
enum gpio_lookup_flags *flags)
{
- static const char * const suffixes[] = { "gpios", "gpio" };
struct acpi_device *adev = ACPI_COMPANION(dev);
struct acpi_gpio_info info;
struct gpio_desc *desc;
@@ -1700,13 +1701,13 @@
int i;
/* Try first from _DSD */
- for (i = 0; i < ARRAY_SIZE(suffixes); i++) {
+ for (i = 0; i < ARRAY_SIZE(gpio_suffixes); i++) {
if (con_id && strcmp(con_id, "gpios")) {
snprintf(propname, sizeof(propname), "%s-%s",
- con_id, suffixes[i]);
+ con_id, gpio_suffixes[i]);
} else {
snprintf(propname, sizeof(propname), "%s",
- suffixes[i]);
+ gpio_suffixes[i]);
}
desc = acpi_get_gpiod_by_index(adev, propname, idx, &info);
@@ -1805,6 +1806,70 @@
return desc;
}
+static int dt_gpio_count(struct device *dev, const char *con_id)
+{
+ int ret;
+ char propname[32];
+ unsigned int i;
+
+ for (i = 0; i < ARRAY_SIZE(gpio_suffixes); i++) {
+ if (con_id)
+ snprintf(propname, sizeof(propname), "%s-%s",
+ con_id, gpio_suffixes[i]);
+ else
+ snprintf(propname, sizeof(propname), "%s",
+ gpio_suffixes[i]);
+
+ ret = of_gpio_named_count(dev->of_node, propname);
+ if (ret >= 0)
+ break;
+ }
+ return ret;
+}
+
+static int platform_gpio_count(struct device *dev, const char *con_id)
+{
+ struct gpiod_lookup_table *table;
+ struct gpiod_lookup *p;
+ unsigned int count = 0;
+
+ table = gpiod_find_lookup_table(dev);
+ if (!table)
+ return -ENOENT;
+
+ for (p = &table->table[0]; p->chip_label; p++) {
+ if ((con_id && p->con_id && !strcmp(con_id, p->con_id)) ||
+ (!con_id && !p->con_id))
+ count++;
+ }
+ if (!count)
+ return -ENOENT;
+
+ return count;
+}
+
+/**
+ * gpiod_count - return the number of GPIOs associated with a device / function
+ * or -ENOENT if no GPIO has been assigned to the requested function
+ * @dev: GPIO consumer, can be NULL for system-global GPIOs
+ * @con_id: function within the GPIO consumer
+ */
+int gpiod_count(struct device *dev, const char *con_id)
+{
+ int count = -ENOENT;
+
+ if (IS_ENABLED(CONFIG_OF) && dev && dev->of_node)
+ count = dt_gpio_count(dev, con_id);
+ else if (IS_ENABLED(CONFIG_ACPI) && dev && ACPI_HANDLE(dev))
+ count = acpi_gpio_count(dev, con_id);
+
+ if (count < 0)
+ count = platform_gpio_count(dev, con_id);
+
+ return count;
+}
+EXPORT_SYMBOL_GPL(gpiod_count);
+
/**
* gpiod_get - obtain a GPIO for a given GPIO function
* @dev: GPIO consumer, can be NULL for system-global GPIOs
@@ -1840,6 +1905,47 @@
}
EXPORT_SYMBOL_GPL(__gpiod_get_optional);
+
+/**
+ * gpiod_configure_flags - helper function to configure a given GPIO
+ * @desc: gpio whose value will be assigned
+ * @con_id: function within the GPIO consumer
+ * @lflags: gpio_lookup_flags - returned from of_find_gpio() or
+ * of_get_gpio_hog()
+ * @dflags: gpiod_flags - optional GPIO initialization flags
+ *
+ * Return 0 on success, -ENOENT if no GPIO has been assigned to the
+ * requested function and/or index, or another IS_ERR() code if an error
+ * occurred while trying to acquire the GPIO.
+ */
+static int gpiod_configure_flags(struct gpio_desc *desc, const char *con_id,
+ unsigned long lflags, enum gpiod_flags dflags)
+{
+ int status;
+
+ if (lflags & GPIO_ACTIVE_LOW)
+ set_bit(FLAG_ACTIVE_LOW, &desc->flags);
+ if (lflags & GPIO_OPEN_DRAIN)
+ set_bit(FLAG_OPEN_DRAIN, &desc->flags);
+ if (lflags & GPIO_OPEN_SOURCE)
+ set_bit(FLAG_OPEN_SOURCE, &desc->flags);
+
+ /* No particular flag request, return here... */
+ if (!(dflags & GPIOD_FLAGS_BIT_DIR_SET)) {
+ pr_debug("no flags found for %s\n", con_id);
+ return 0;
+ }
+
+ /* Process flags */
+ if (dflags & GPIOD_FLAGS_BIT_DIR_OUT)
+ status = gpiod_direction_output(desc,
+ dflags & GPIOD_FLAGS_BIT_DIR_VAL);
+ else
+ status = gpiod_direction_input(desc);
+
+ return status;
+}
+
/**
* gpiod_get_index - obtain a GPIO from a multi-index GPIO function
* @dev: GPIO consumer, can be NULL for system-global GPIOs
@@ -1865,13 +1971,15 @@
dev_dbg(dev, "GPIO lookup for consumer %s\n", con_id);
- /* Using device tree? */
- if (IS_ENABLED(CONFIG_OF) && dev && dev->of_node) {
- dev_dbg(dev, "using device tree for GPIO lookup\n");
- desc = of_find_gpio(dev, con_id, idx, &lookupflags);
- } else if (IS_ENABLED(CONFIG_ACPI) && dev && ACPI_HANDLE(dev)) {
- dev_dbg(dev, "using ACPI for GPIO lookup\n");
- desc = acpi_find_gpio(dev, con_id, idx, &lookupflags);
+ if (dev) {
+ /* Using device tree? */
+ if (IS_ENABLED(CONFIG_OF) && dev->of_node) {
+ dev_dbg(dev, "using device tree for GPIO lookup\n");
+ desc = of_find_gpio(dev, con_id, idx, &lookupflags);
+ } else if (ACPI_COMPANION(dev)) {
+ dev_dbg(dev, "using ACPI for GPIO lookup\n");
+ desc = acpi_find_gpio(dev, con_id, idx, &lookupflags);
+ }
}
/*
@@ -1889,28 +1997,10 @@
}
status = gpiod_request(desc, con_id);
-
if (status < 0)
return ERR_PTR(status);
- if (lookupflags & GPIO_ACTIVE_LOW)
- set_bit(FLAG_ACTIVE_LOW, &desc->flags);
- if (lookupflags & GPIO_OPEN_DRAIN)
- set_bit(FLAG_OPEN_DRAIN, &desc->flags);
- if (lookupflags & GPIO_OPEN_SOURCE)
- set_bit(FLAG_OPEN_SOURCE, &desc->flags);
-
- /* No particular flag request, return here... */
- if (!(flags & GPIOD_FLAGS_BIT_DIR_SET))
- return desc;
-
- /* Process flags */
- if (flags & GPIOD_FLAGS_BIT_DIR_OUT)
- status = gpiod_direction_output(desc,
- flags & GPIOD_FLAGS_BIT_DIR_VAL);
- else
- status = gpiod_direction_input(desc);
-
+ status = gpiod_configure_flags(desc, con_id, lookupflags, flags);
if (status < 0) {
dev_dbg(dev, "setup of GPIO %s failed\n", con_id);
gpiod_put(desc);
@@ -2006,6 +2096,132 @@
EXPORT_SYMBOL_GPL(__gpiod_get_index_optional);
/**
+ * gpiod_hog - Hog the specified GPIO desc given the provided flags
+ * @desc: gpio whose value will be assigned
+ * @name: gpio line name
+ * @lflags: gpio_lookup_flags - returned from of_find_gpio() or
+ * of_get_gpio_hog()
+ * @dflags: gpiod_flags - optional GPIO initialization flags
+ */
+int gpiod_hog(struct gpio_desc *desc, const char *name,
+ unsigned long lflags, enum gpiod_flags dflags)
+{
+ struct gpio_chip *chip;
+ struct gpio_desc *local_desc;
+ int hwnum;
+ int status;
+
+ chip = gpiod_to_chip(desc);
+ hwnum = gpio_chip_hwgpio(desc);
+
+ local_desc = gpiochip_request_own_desc(chip, hwnum, name);
+ if (IS_ERR(local_desc)) {
+ pr_debug("requesting own GPIO %s failed\n", name);
+ return PTR_ERR(local_desc);
+ }
+
+ status = gpiod_configure_flags(desc, name, lflags, dflags);
+ if (status < 0) {
+ pr_debug("setup of GPIO %s failed\n", name);
+ gpiochip_free_own_desc(desc);
+ return status;
+ }
+
+ /* Mark GPIO as hogged so it can be identified and removed later */
+ set_bit(FLAG_IS_HOGGED, &desc->flags);
+
+ pr_info("GPIO line %d (%s) hogged as %s%s\n",
+ desc_to_gpio(desc), name,
+ (dflags&GPIOD_FLAGS_BIT_DIR_OUT) ? "output" : "input",
+ (dflags&GPIOD_FLAGS_BIT_DIR_OUT) ?
+ (dflags&GPIOD_FLAGS_BIT_DIR_VAL) ? "/high" : "/low":"");
+
+ return 0;
+}
+
+/**
+ * gpiochip_free_hogs - Scan gpio-controller chip and release GPIO hog
+ * @chip: gpio chip to act on
+ *
+ * This is only used by gpiochip_remove() to free hogged gpios
+ */
+static void gpiochip_free_hogs(struct gpio_chip *chip)
+{
+ int id;
+
+ for (id = 0; id < chip->ngpio; id++) {
+ if (test_bit(FLAG_IS_HOGGED, &chip->desc[id].flags))
+ gpiochip_free_own_desc(&chip->desc[id]);
+ }
+}
+
+/**
+ * gpiod_get_array - obtain multiple GPIOs from a multi-index GPIO function
+ * @dev: GPIO consumer, can be NULL for system-global GPIOs
+ * @con_id: function within the GPIO consumer
+ * @flags: optional GPIO initialization flags
+ *
+ * This function acquires all the GPIOs defined under a given function.
+ *
+ * Return a struct gpio_descs containing an array of descriptors, -ENOENT if
+ * no GPIO has been assigned to the requested function, or another IS_ERR()
+ * code if an error occurred while trying to acquire the GPIOs.
+ */
+struct gpio_descs *__must_check gpiod_get_array(struct device *dev,
+ const char *con_id,
+ enum gpiod_flags flags)
+{
+ struct gpio_desc *desc;
+ struct gpio_descs *descs;
+ int count;
+
+ count = gpiod_count(dev, con_id);
+ if (count < 0)
+ return ERR_PTR(count);
+
+ descs = kzalloc(sizeof(*descs) + sizeof(descs->desc[0]) * count,
+ GFP_KERNEL);
+ if (!descs)
+ return ERR_PTR(-ENOMEM);
+
+ for (descs->ndescs = 0; descs->ndescs < count; ) {
+ desc = gpiod_get_index(dev, con_id, descs->ndescs, flags);
+ if (IS_ERR(desc)) {
+ gpiod_put_array(descs);
+ return ERR_CAST(desc);
+ }
+ descs->desc[descs->ndescs] = desc;
+ descs->ndescs++;
+ }
+ return descs;
+}
+EXPORT_SYMBOL_GPL(gpiod_get_array);
+
+/**
+ * gpiod_get_array_optional - obtain multiple GPIOs from a multi-index GPIO
+ * function
+ * @dev: GPIO consumer, can be NULL for system-global GPIOs
+ * @con_id: function within the GPIO consumer
+ * @flags: optional GPIO initialization flags
+ *
+ * This is equivalent to gpiod_get_array(), except that when no GPIO was
+ * assigned to the requested function it will return NULL.
+ */
+struct gpio_descs *__must_check gpiod_get_array_optional(struct device *dev,
+ const char *con_id,
+ enum gpiod_flags flags)
+{
+ struct gpio_descs *descs;
+
+ descs = gpiod_get_array(dev, con_id, flags);
+ if (IS_ERR(descs) && (PTR_ERR(descs) == -ENOENT))
+ return NULL;
+
+ return descs;
+}
+EXPORT_SYMBOL_GPL(gpiod_get_array_optional);
+
+/**
* gpiod_put - dispose of a GPIO descriptor
* @desc: GPIO descriptor to dispose of
*
@@ -2017,6 +2233,21 @@
}
EXPORT_SYMBOL_GPL(gpiod_put);
+/**
+ * gpiod_put_array - dispose of multiple GPIO descriptors
+ * @descs: struct gpio_descs containing an array of descriptors
+ */
+void gpiod_put_array(struct gpio_descs *descs)
+{
+ unsigned int i;
+
+ for (i = 0; i < descs->ndescs; i++)
+ gpiod_put(descs->desc[i]);
+
+ kfree(descs);
+}
+EXPORT_SYMBOL_GPL(gpiod_put_array);
+
#ifdef CONFIG_DEBUG_FS
static void gpiolib_dbg_show(struct seq_file *s, struct gpio_chip *chip)
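The gpiod_count()/gpiod_get_array()/gpiod_put_array() trio added above gives consumers a one-call way to claim every GPIO of a given function. A hedged consumer-side sketch (the "led" function name is illustrative):

#include <linux/err.h>
#include <linux/gpio/consumer.h>

static int demo_claim_all_leds(struct device *dev)
{
	struct gpio_descs *leds;
	unsigned int i;

	leds = gpiod_get_array(dev, "led", GPIOD_OUT_LOW);
	if (IS_ERR(leds))
		return PTR_ERR(leds);	/* -ENOENT if none are described */

	for (i = 0; i < leds->ndescs; i++)
		gpiod_set_value(leds->desc[i], 1);

	gpiod_put_array(leds);
	return 0;
}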
diff --git a/drivers/gpio/gpiolib.h b/drivers/gpio/gpiolib.h
index ab892be..594b179 100644
--- a/drivers/gpio/gpiolib.h
+++ b/drivers/gpio/gpiolib.h
@@ -29,6 +29,9 @@
bool active_low;
};
+/* gpio suffixes used for ACPI and device tree lookup */
+static const char * const gpio_suffixes[] = { "gpios", "gpio" };
+
#ifdef CONFIG_ACPI
void acpi_gpiochip_add(struct gpio_chip *chip);
void acpi_gpiochip_remove(struct gpio_chip *chip);
@@ -39,6 +42,8 @@
struct gpio_desc *acpi_get_gpiod_by_index(struct acpi_device *adev,
const char *propname, int index,
struct acpi_gpio_info *info);
+
+int acpi_gpio_count(struct device *dev, const char *con_id);
#else
static inline void acpi_gpiochip_add(struct gpio_chip *chip) { }
static inline void acpi_gpiochip_remove(struct gpio_chip *chip) { }
@@ -55,6 +60,11 @@
{
return ERR_PTR(-ENOSYS);
}
+
+static inline int acpi_gpio_count(struct device *dev, const char *con_id)
+{
+ return -ENODEV;
+}
#endif
struct gpio_desc *of_get_named_gpiod_flags(struct device_node *np,
@@ -80,6 +90,7 @@
#define FLAG_OPEN_SOURCE 8 /* Gpio is open source type */
#define FLAG_USED_AS_IRQ 9 /* GPIO is connected to an IRQ */
#define FLAG_SYSFS_DIR 10 /* show sysfs direction attribute */
+#define FLAG_IS_HOGGED 11 /* GPIO is hogged */
#define ID_SHIFT 16 /* add new flags before this one */
@@ -91,6 +102,8 @@
int gpiod_request(struct gpio_desc *desc, const char *label);
void gpiod_free(struct gpio_desc *desc);
+int gpiod_hog(struct gpio_desc *desc, const char *name,
+ unsigned long lflags, enum gpiod_flags dflags);
/*
* Return the GPIO number of the passed descriptor relative to its chip
diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
index 151a050..47f2ce8 100644
--- a/drivers/gpu/drm/Kconfig
+++ b/drivers/gpu/drm/Kconfig
@@ -165,6 +165,15 @@
Choose this option if you have a Savage3D/4/SuperSavage/Pro/Twister
chipset. If M is selected the module will be called savage.
+config DRM_VGEM
+ tristate "Virtual GEM provider"
+ depends on DRM
+ help
+ Choose this option to get a virtual graphics memory manager,
+ as used by Mesa's software renderer for enhanced performance.
+ If M is selected the module will be called vgem.
+
+
source "drivers/gpu/drm/exynos/Kconfig"
source "drivers/gpu/drm/rockchip/Kconfig"
diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile
index 2c239b9..7d4944e 100644
--- a/drivers/gpu/drm/Makefile
+++ b/drivers/gpu/drm/Makefile
@@ -48,6 +48,7 @@
obj-$(CONFIG_DRM_SAVAGE)+= savage/
obj-$(CONFIG_DRM_VMWGFX)+= vmwgfx/
obj-$(CONFIG_DRM_VIA) +=via/
+obj-$(CONFIG_DRM_VGEM) += vgem/
obj-$(CONFIG_DRM_NOUVEAU) +=nouveau/
obj-$(CONFIG_DRM_EXYNOS) +=exynos/
obj-$(CONFIG_DRM_ROCKCHIP) +=rockchip/
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
index 5c50aa8..19a4fba 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
@@ -435,21 +435,22 @@
{
struct kfd_ioctl_get_clock_counters_args *args = data;
struct kfd_dev *dev;
- struct timespec time;
+ struct timespec64 time;
dev = kfd_device_by_id(args->gpu_id);
if (dev == NULL)
return -EINVAL;
/* Reading GPU clock counter from KGD */
- args->gpu_clock_counter = kfd2kgd->get_gpu_clock_counter(dev->kgd);
+ args->gpu_clock_counter =
+ dev->kfd2kgd->get_gpu_clock_counter(dev->kgd);
/* No access to rdtsc. Using raw monotonic time */
- getrawmonotonic(&time);
- args->cpu_clock_counter = (uint64_t)timespec_to_ns(&time);
+ getrawmonotonic64(&time);
+ args->cpu_clock_counter = (uint64_t)timespec64_to_ns(&time);
- get_monotonic_boottime(&time);
- args->system_clock_counter = (uint64_t)timespec_to_ns(&time);
+ get_monotonic_boottime64(&time);
+ args->system_clock_counter = (uint64_t)timespec64_to_ns(&time);
/* Since the counter is in nano-seconds we use 1GHz frequency */
args->system_clock_freq = 1000000000;
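The switch from struct timespec to struct timespec64 keeps these nanosecond counters valid past 2038 on 32-bit hosts. A minimal sketch of the pattern (assumed helper name):

#include <linux/time64.h>
#include <linux/timekeeping.h>
#include <linux/types.h>

static u64 demo_raw_monotonic_ns(void)
{
	struct timespec64 ts;

	getrawmonotonic64(&ts);
	return (u64)timespec64_to_ns(&ts);
}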
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device.c b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
index 5bc32c2..ca7f2d3 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
@@ -94,7 +94,8 @@
return NULL;
}
-struct kfd_dev *kgd2kfd_probe(struct kgd_dev *kgd, struct pci_dev *pdev)
+struct kfd_dev *kgd2kfd_probe(struct kgd_dev *kgd,
+ struct pci_dev *pdev, const struct kfd2kgd_calls *f2g)
{
struct kfd_dev *kfd;
@@ -112,6 +113,11 @@
kfd->device_info = device_info;
kfd->pdev = pdev;
kfd->init_complete = false;
+ kfd->kfd2kgd = f2g;
+
+ mutex_init(&kfd->doorbell_mutex);
+ memset(&kfd->doorbell_available_index, 0,
+ sizeof(kfd->doorbell_available_index));
return kfd;
}
@@ -200,8 +206,9 @@
/* add another 512KB for all other allocations on gart (HPD, fences) */
size += 512 * 1024;
- if (kfd2kgd->init_gtt_mem_allocation(kfd->kgd, size, &kfd->gtt_mem,
- &kfd->gtt_start_gpu_addr, &kfd->gtt_start_cpu_ptr)) {
+ if (kfd->kfd2kgd->init_gtt_mem_allocation(
+ kfd->kgd, size, &kfd->gtt_mem,
+ &kfd->gtt_start_gpu_addr, &kfd->gtt_start_cpu_ptr)){
dev_err(kfd_device,
"Could not allocate %d bytes for device (%x:%x)\n",
size, kfd->pdev->vendor, kfd->pdev->device);
@@ -270,7 +277,7 @@
kfd_topology_add_device_error:
kfd_gtt_sa_fini(kfd);
kfd_gtt_sa_init_error:
- kfd2kgd->free_gtt_mem(kfd->kgd, kfd->gtt_mem);
+ kfd->kfd2kgd->free_gtt_mem(kfd->kgd, kfd->gtt_mem);
dev_err(kfd_device,
"device (%x:%x) NOT added due to errors\n",
kfd->pdev->vendor, kfd->pdev->device);
@@ -285,7 +292,7 @@
amd_iommu_free_device(kfd->pdev);
kfd_topology_remove_device(kfd);
kfd_gtt_sa_fini(kfd);
- kfd2kgd->free_gtt_mem(kfd->kgd, kfd->gtt_mem);
+ kfd->kfd2kgd->free_gtt_mem(kfd->kgd, kfd->gtt_mem);
}
kfree(kfd);
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
index d8135ad..69af73f 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
@@ -82,7 +82,8 @@
void program_sh_mem_settings(struct device_queue_manager *dqm,
struct qcm_process_device *qpd)
{
- return kfd2kgd->program_sh_mem_settings(dqm->dev->kgd, qpd->vmid,
+ return dqm->dev->kfd2kgd->program_sh_mem_settings(
+ dqm->dev->kgd, qpd->vmid,
qpd->sh_mem_config,
qpd->sh_mem_ape1_base,
qpd->sh_mem_ape1_limit,
@@ -457,9 +458,12 @@
{
uint32_t pasid_mapping;
- pasid_mapping = (pasid == 0) ? 0 : (uint32_t)pasid |
- ATC_VMID_PASID_MAPPING_VALID;
- return kfd2kgd->set_pasid_vmid_mapping(dqm->dev->kgd, pasid_mapping,
+ pasid_mapping = (pasid == 0) ? 0 :
+ (uint32_t)pasid |
+ ATC_VMID_PASID_MAPPING_VALID;
+
+ return dqm->dev->kfd2kgd->set_pasid_vmid_mapping(
+ dqm->dev->kgd, pasid_mapping,
vmid);
}
@@ -511,7 +515,7 @@
pipe_hpd_addr = dqm->pipelines_addr + i * CIK_HPD_EOP_BYTES;
pr_debug("kfd: pipeline address %llX\n", pipe_hpd_addr);
/* = log2(bytes/4)-1 */
- kfd2kgd->init_pipeline(dqm->dev->kgd, inx,
+ dqm->dev->kfd2kgd->init_pipeline(dqm->dev->kgd, inx,
CIK_HPD_EOP_BYTES_LOG2 - 3, pipe_hpd_addr);
}
@@ -905,7 +909,7 @@
return retval;
}
-static int fence_wait_timeout(unsigned int *fence_addr,
+static int amdkfd_fence_wait_timeout(unsigned int *fence_addr,
unsigned int fence_value,
unsigned long timeout)
{
@@ -961,7 +965,7 @@
pm_send_query_status(&dqm->packets, dqm->fence_gpu_addr,
KFD_FENCE_COMPLETED);
/* should be timed out */
- fence_wait_timeout(dqm->fence_addr, KFD_FENCE_COMPLETED,
+ amdkfd_fence_wait_timeout(dqm->fence_addr, KFD_FENCE_COMPLETED,
QUEUE_PREEMPT_DEFAULT_TIMEOUT_MS);
pm_release_ib(&dqm->packets);
dqm->active_runlist = false;
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_doorbell.c b/drivers/gpu/drm/amd/amdkfd/kfd_doorbell.c
index 1a9b355..17e56dc 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_doorbell.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_doorbell.c
@@ -32,9 +32,6 @@
* and that's assures that any user process won't get access to the
* kernel doorbells page
*/
-static DEFINE_MUTEX(doorbell_mutex);
-static unsigned long doorbell_available_index[
- DIV_ROUND_UP(KFD_MAX_NUM_OF_QUEUES_PER_PROCESS, BITS_PER_LONG)] = { 0 };
#define KERNEL_DOORBELL_PASID 1
#define KFD_SIZE_OF_DOORBELL_IN_BYTES 4
@@ -170,12 +167,12 @@
BUG_ON(!kfd || !doorbell_off);
- mutex_lock(&doorbell_mutex);
- inx = find_first_zero_bit(doorbell_available_index,
+ mutex_lock(&kfd->doorbell_mutex);
+ inx = find_first_zero_bit(kfd->doorbell_available_index,
KFD_MAX_NUM_OF_QUEUES_PER_PROCESS);
- __set_bit(inx, doorbell_available_index);
- mutex_unlock(&doorbell_mutex);
+ __set_bit(inx, kfd->doorbell_available_index);
+ mutex_unlock(&kfd->doorbell_mutex);
if (inx >= KFD_MAX_NUM_OF_QUEUES_PER_PROCESS)
return NULL;
@@ -203,9 +200,9 @@
inx = (unsigned int)(db_addr - kfd->doorbell_kernel_ptr);
- mutex_lock(&doorbell_mutex);
- __clear_bit(inx, doorbell_available_index);
- mutex_unlock(&doorbell_mutex);
+ mutex_lock(&kfd->doorbell_mutex);
+ __clear_bit(inx, kfd->doorbell_available_index);
+ mutex_unlock(&kfd->doorbell_mutex);
}
inline void write_kernel_doorbell(u32 __iomem *db, u32 value)
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_module.c b/drivers/gpu/drm/amd/amdkfd/kfd_module.c
index 3f34ae1..4e0a68f 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_module.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_module.c
@@ -34,7 +34,6 @@
#define KFD_DRIVER_MINOR 7
#define KFD_DRIVER_PATCHLEVEL 1
-const struct kfd2kgd_calls *kfd2kgd;
static const struct kgd2kfd_calls kgd2kfd = {
.exit = kgd2kfd_exit,
.probe = kgd2kfd_probe,
@@ -55,9 +54,7 @@
MODULE_PARM_DESC(max_num_of_queues_per_device,
"Maximum number of supported queues per device (1 = Minimum, 4096 = default)");
-bool kgd2kfd_init(unsigned interface_version,
- const struct kfd2kgd_calls *f2g,
- const struct kgd2kfd_calls **g2f)
+bool kgd2kfd_init(unsigned interface_version, const struct kgd2kfd_calls **g2f)
{
/*
* Only one interface version is supported,
@@ -66,11 +63,6 @@
if (interface_version != KFD_INTERFACE_VERSION)
return false;
- /* Protection against multiple amd kgd loads */
- if (kfd2kgd)
- return true;
-
- kfd2kgd = f2g;
*g2f = &kgd2kfd;
return true;
@@ -85,8 +77,6 @@
{
int err;
- kfd2kgd = NULL;
-
/* Verify module parameters */
if ((sched_policy < KFD_SCHED_POLICY_HWS) ||
(sched_policy > KFD_SCHED_POLICY_NO_HWS)) {
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_cik.c b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_cik.c
index a09e18a..4349794 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_cik.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_cik.c
@@ -151,14 +151,15 @@
static int load_mqd(struct mqd_manager *mm, void *mqd, uint32_t pipe_id,
uint32_t queue_id, uint32_t __user *wptr)
{
- return kfd2kgd->hqd_load(mm->dev->kgd, mqd, pipe_id, queue_id, wptr);
+ return mm->dev->kfd2kgd->hqd_load
+ (mm->dev->kgd, mqd, pipe_id, queue_id, wptr);
}
static int load_mqd_sdma(struct mqd_manager *mm, void *mqd,
uint32_t pipe_id, uint32_t queue_id,
uint32_t __user *wptr)
{
- return kfd2kgd->hqd_sdma_load(mm->dev->kgd, mqd);
+ return mm->dev->kfd2kgd->hqd_sdma_load(mm->dev->kgd, mqd);
}
static int update_mqd(struct mqd_manager *mm, void *mqd,
@@ -245,7 +246,7 @@
unsigned int timeout, uint32_t pipe_id,
uint32_t queue_id)
{
- return kfd2kgd->hqd_destroy(mm->dev->kgd, type, timeout,
+ return mm->dev->kfd2kgd->hqd_destroy(mm->dev->kgd, type, timeout,
pipe_id, queue_id);
}
@@ -258,7 +259,7 @@
unsigned int timeout, uint32_t pipe_id,
uint32_t queue_id)
{
- return kfd2kgd->hqd_sdma_destroy(mm->dev->kgd, mqd, timeout);
+ return mm->dev->kfd2kgd->hqd_sdma_destroy(mm->dev->kgd, mqd, timeout);
}
static bool is_occupied(struct mqd_manager *mm, void *mqd,
@@ -266,7 +267,7 @@
uint32_t queue_id)
{
- return kfd2kgd->hqd_is_occupied(mm->dev->kgd, queue_address,
+ return mm->dev->kfd2kgd->hqd_is_occupied(mm->dev->kgd, queue_address,
pipe_id, queue_id);
}
@@ -275,7 +276,7 @@
uint64_t queue_address, uint32_t pipe_id,
uint32_t queue_id)
{
- return kfd2kgd->hqd_sdma_is_occupied(mm->dev->kgd, mqd);
+ return mm->dev->kfd2kgd->hqd_sdma_is_occupied(mm->dev->kgd, mqd);
}
/*
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
index 5a44f2f..f21fcce 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
@@ -148,6 +148,11 @@
struct kgd2kfd_shared_resources shared_resources;
+ const struct kfd2kgd_calls *kfd2kgd;
+ struct mutex doorbell_mutex;
+ unsigned long doorbell_available_index[DIV_ROUND_UP(
+ KFD_MAX_NUM_OF_QUEUES_PER_PROCESS, BITS_PER_LONG)];
+
void *gtt_mem;
uint64_t gtt_start_gpu_addr;
void *gtt_start_cpu_ptr;
@@ -164,13 +169,12 @@
/* KGD2KFD callbacks */
void kgd2kfd_exit(void);
-struct kfd_dev *kgd2kfd_probe(struct kgd_dev *kgd, struct pci_dev *pdev);
+struct kfd_dev *kgd2kfd_probe(struct kgd_dev *kgd,
+ struct pci_dev *pdev, const struct kfd2kgd_calls *f2g);
bool kgd2kfd_device_init(struct kfd_dev *kfd,
- const struct kgd2kfd_shared_resources *gpu_resources);
+ const struct kgd2kfd_shared_resources *gpu_resources);
void kgd2kfd_device_exit(struct kfd_dev *kfd);
-extern const struct kfd2kgd_calls *kfd2kgd;
-
enum kfd_mempool {
KFD_MEMPOOL_SYSTEM_CACHEABLE = 1,
KFD_MEMPOOL_SYSTEM_WRITECOMBINE = 2,
@@ -378,8 +382,6 @@
/* The Device Queue Manager that owns this data */
struct device_queue_manager *dqm;
struct process_queue_manager *pqm;
- /* Device Queue Manager lock */
- struct mutex *lock;
/* Queues list */
struct list_head queues_list;
struct list_head priv_queue_list;
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process.c b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
index a369c14..945d622 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_process.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
@@ -162,10 +162,16 @@
p = my_work->p;
+ pr_debug("Releasing process (pasid %d) in workqueue\n",
+ p->pasid);
+
mutex_lock(&p->mutex);
list_for_each_entry_safe(pdd, temp, &p->per_device_data,
per_device_list) {
+ pr_debug("Releasing pdd (topology id %d) for process (pasid %d) in workqueue\n",
+ pdd->dev->id, p->pasid);
+
amd_iommu_unbind_pasid(pdd->dev->pdev, p->pasid);
list_del(&pdd->per_device_list);
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_topology.c b/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
index 4983993..661c660 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
@@ -726,13 +726,14 @@
}
sysfs_show_32bit_prop(buffer, "max_engine_clk_fcompute",
- kfd2kgd->get_max_engine_clock_in_mhz(
+ dev->gpu->kfd2kgd->get_max_engine_clock_in_mhz(
dev->gpu->kgd));
sysfs_show_64bit_prop(buffer, "local_mem_size",
- kfd2kgd->get_vmem_size(dev->gpu->kgd));
+ dev->gpu->kfd2kgd->get_vmem_size(
+ dev->gpu->kgd));
sysfs_show_32bit_prop(buffer, "fw_version",
- kfd2kgd->get_fw_version(
+ dev->gpu->kfd2kgd->get_fw_version(
dev->gpu->kgd,
KGD_ENGINE_MEC1));
}
@@ -1099,8 +1100,9 @@
buf[2] = gpu->pdev->subsystem_device;
buf[3] = gpu->pdev->device;
buf[4] = gpu->pdev->bus->number;
- buf[5] = (uint32_t)(kfd2kgd->get_vmem_size(gpu->kgd) & 0xffffffff);
- buf[6] = (uint32_t)(kfd2kgd->get_vmem_size(gpu->kgd) >> 32);
+ buf[5] = (uint32_t)(gpu->kfd2kgd->get_vmem_size(gpu->kgd)
+ & 0xffffffff);
+ buf[6] = (uint32_t)(gpu->kfd2kgd->get_vmem_size(gpu->kgd) >> 32);
for (i = 0, hashout = 0; i < 7; i++)
hashout ^= hash_32(buf[i], KFD_GPU_ID_HASH_WIDTH);
diff --git a/drivers/gpu/drm/amd/include/kgd_kfd_interface.h b/drivers/gpu/drm/amd/include/kgd_kfd_interface.h
index 239bc16..dabd944 100644
--- a/drivers/gpu/drm/amd/include/kgd_kfd_interface.h
+++ b/drivers/gpu/drm/amd/include/kgd_kfd_interface.h
@@ -77,37 +77,6 @@
};
/**
- * struct kgd2kfd_calls
- *
- * @exit: Notifies amdkfd that kgd module is unloaded
- *
- * @probe: Notifies amdkfd about a probe done on a device in the kgd driver.
- *
- * @device_init: Initialize the newly probed device (if it is a device that
- * amdkfd supports)
- *
- * @device_exit: Notifies amdkfd about a removal of a kgd device
- *
- * @suspend: Notifies amdkfd about a suspend action done to a kgd device
- *
- * @resume: Notifies amdkfd about a resume action done to a kgd device
- *
- * This structure contains function callback pointers so the kgd driver
- * will notify to the amdkfd about certain status changes.
- *
- */
-struct kgd2kfd_calls {
- void (*exit)(void);
- struct kfd_dev* (*probe)(struct kgd_dev *kgd, struct pci_dev *pdev);
- bool (*device_init)(struct kfd_dev *kfd,
- const struct kgd2kfd_shared_resources *gpu_resources);
- void (*device_exit)(struct kfd_dev *kfd);
- void (*interrupt)(struct kfd_dev *kfd, const void *ih_ring_entry);
- void (*suspend)(struct kfd_dev *kfd);
- int (*resume)(struct kfd_dev *kfd);
-};
-
-/**
* struct kfd2kgd_calls
*
* @init_gtt_mem_allocation: Allocate a buffer on the gart aperture.
@@ -196,8 +165,39 @@
enum kgd_engine_type type);
};
+/**
+ * struct kgd2kfd_calls
+ *
+ * @exit: Notifies amdkfd that kgd module is unloaded
+ *
+ * @probe: Notifies amdkfd about a probe done on a device in the kgd driver.
+ *
+ * @device_init: Initialize the newly probed device (if it is a device that
+ * amdkfd supports)
+ *
+ * @device_exit: Notifies amdkfd about a removal of a kgd device
+ *
+ * @suspend: Notifies amdkfd about a suspend action done to a kgd device
+ *
+ * @resume: Notifies amdkfd about a resume action done to a kgd device
+ *
+ * This structure contains function callback pointers so the kgd driver
+ * will notify to the amdkfd about certain status changes.
+ *
+ */
+struct kgd2kfd_calls {
+ void (*exit)(void);
+ struct kfd_dev* (*probe)(struct kgd_dev *kgd, struct pci_dev *pdev,
+ const struct kfd2kgd_calls *f2g);
+ bool (*device_init)(struct kfd_dev *kfd,
+ const struct kgd2kfd_shared_resources *gpu_resources);
+ void (*device_exit)(struct kfd_dev *kfd);
+ void (*interrupt)(struct kfd_dev *kfd, const void *ih_ring_entry);
+ void (*suspend)(struct kfd_dev *kfd);
+ int (*resume)(struct kfd_dev *kfd);
+};
+
bool kgd2kfd_init(unsigned interface_version,
- const struct kfd2kgd_calls *f2g,
const struct kgd2kfd_calls **g2f);
#endif /* KGD_KFD_INTERFACE_H_INCLUDED */
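With kfd2kgd no longer passed at kgd2kfd_init() time, a kgd driver registers once for the kgd2kfd vtable and then hands its own callback table to every device it probes. A hedged sketch of that flow on the kgd side (function names and the include path are illustrative):

#include "kgd_kfd_interface.h"

static const struct kgd2kfd_calls *kgd2kfd;
static const struct kfd2kgd_calls demo_kfd2kgd;	/* filled in by the kgd driver */

static bool demo_amdkfd_register(void)
{
	return kgd2kfd_init(KFD_INTERFACE_VERSION, &kgd2kfd);
}

static struct kfd_dev *demo_amdkfd_probe_device(struct kgd_dev *kgd,
						struct pci_dev *pdev)
{
	return kgd2kfd->probe(kgd, pdev, &demo_kfd2kgd);
}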
diff --git a/drivers/gpu/drm/armada/armada_output.h b/drivers/gpu/drm/armada/armada_output.h
index 4126d43..3c4023e 100644
--- a/drivers/gpu/drm/armada/armada_output.h
+++ b/drivers/gpu/drm/armada/armada_output.h
@@ -9,7 +9,7 @@
#define ARMADA_CONNETOR_H
#define encoder_helper_funcs(encoder) \
- ((struct drm_encoder_helper_funcs *)encoder->helper_private)
+ ((const struct drm_encoder_helper_funcs *)encoder->helper_private)
struct armada_output_type {
int connector_type;
diff --git a/drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_crtc.c b/drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_crtc.c
index b3e3068..f69b925 100644
--- a/drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_crtc.c
+++ b/drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_crtc.c
@@ -21,6 +21,7 @@
#include <linux/clk.h>
#include <linux/pm.h>
#include <linux/pm_runtime.h>
+#include <linux/pinctrl/consumer.h>
#include <drm/drm_crtc.h>
#include <drm/drm_crtc_helper.h>
@@ -37,14 +38,14 @@
* @hlcdc: pointer to the atmel_hlcdc structure provided by the MFD device
* @event: pointer to the current page flip event
* @id: CRTC id (returned by drm_crtc_index)
- * @dpms: DPMS mode
+ * @enabled: CRTC state
*/
struct atmel_hlcdc_crtc {
struct drm_crtc base;
struct atmel_hlcdc_dc *dc;
struct drm_pending_vblank_event *event;
int id;
- int dpms;
+ bool enabled;
};
static inline struct atmel_hlcdc_crtc *
@@ -53,86 +54,17 @@
return container_of(crtc, struct atmel_hlcdc_crtc, base);
}
-static void atmel_hlcdc_crtc_dpms(struct drm_crtc *c, int mode)
-{
- struct drm_device *dev = c->dev;
- struct atmel_hlcdc_crtc *crtc = drm_crtc_to_atmel_hlcdc_crtc(c);
- struct regmap *regmap = crtc->dc->hlcdc->regmap;
- unsigned int status;
-
- if (mode != DRM_MODE_DPMS_ON)
- mode = DRM_MODE_DPMS_OFF;
-
- if (crtc->dpms == mode)
- return;
-
- pm_runtime_get_sync(dev->dev);
-
- if (mode != DRM_MODE_DPMS_ON) {
- regmap_write(regmap, ATMEL_HLCDC_DIS, ATMEL_HLCDC_DISP);
- while (!regmap_read(regmap, ATMEL_HLCDC_SR, &status) &&
- (status & ATMEL_HLCDC_DISP))
- cpu_relax();
-
- regmap_write(regmap, ATMEL_HLCDC_DIS, ATMEL_HLCDC_SYNC);
- while (!regmap_read(regmap, ATMEL_HLCDC_SR, &status) &&
- (status & ATMEL_HLCDC_SYNC))
- cpu_relax();
-
- regmap_write(regmap, ATMEL_HLCDC_DIS, ATMEL_HLCDC_PIXEL_CLK);
- while (!regmap_read(regmap, ATMEL_HLCDC_SR, &status) &&
- (status & ATMEL_HLCDC_PIXEL_CLK))
- cpu_relax();
-
- clk_disable_unprepare(crtc->dc->hlcdc->sys_clk);
-
- pm_runtime_allow(dev->dev);
- } else {
- pm_runtime_forbid(dev->dev);
-
- clk_prepare_enable(crtc->dc->hlcdc->sys_clk);
-
- regmap_write(regmap, ATMEL_HLCDC_EN, ATMEL_HLCDC_PIXEL_CLK);
- while (!regmap_read(regmap, ATMEL_HLCDC_SR, &status) &&
- !(status & ATMEL_HLCDC_PIXEL_CLK))
- cpu_relax();
-
-
- regmap_write(regmap, ATMEL_HLCDC_EN, ATMEL_HLCDC_SYNC);
- while (!regmap_read(regmap, ATMEL_HLCDC_SR, &status) &&
- !(status & ATMEL_HLCDC_SYNC))
- cpu_relax();
-
- regmap_write(regmap, ATMEL_HLCDC_EN, ATMEL_HLCDC_DISP);
- while (!regmap_read(regmap, ATMEL_HLCDC_SR, &status) &&
- !(status & ATMEL_HLCDC_DISP))
- cpu_relax();
- }
-
- pm_runtime_put_sync(dev->dev);
-
- crtc->dpms = mode;
-}
-
-static int atmel_hlcdc_crtc_mode_set(struct drm_crtc *c,
- struct drm_display_mode *mode,
- struct drm_display_mode *adj,
- int x, int y,
- struct drm_framebuffer *old_fb)
+static void atmel_hlcdc_crtc_mode_set_nofb(struct drm_crtc *c)
{
struct atmel_hlcdc_crtc *crtc = drm_crtc_to_atmel_hlcdc_crtc(c);
struct regmap *regmap = crtc->dc->hlcdc->regmap;
- struct drm_plane *plane = c->primary;
- struct drm_framebuffer *fb;
+ struct drm_display_mode *adj = &c->state->adjusted_mode;
unsigned long mode_rate;
struct videomode vm;
unsigned long prate;
unsigned int cfg;
int div;
- if (atmel_hlcdc_dc_mode_valid(crtc->dc, adj) != MODE_OK)
- return -EINVAL;
-
vm.vfront_porch = adj->crtc_vsync_start - adj->crtc_vdisplay;
vm.vback_porch = adj->crtc_vtotal - adj->crtc_vsync_end;
vm.vsync_len = adj->crtc_vsync_end - adj->crtc_vsync_start;
@@ -156,7 +88,7 @@
cfg = 0;
prate = clk_get_rate(crtc->dc->hlcdc->sys_clk);
- mode_rate = mode->crtc_clock * 1000;
+ mode_rate = adj->crtc_clock * 1000;
if ((prate / 2) < mode_rate) {
prate *= 2;
cfg |= ATMEL_HLCDC_CLKSEL;
@@ -174,10 +106,10 @@
cfg = 0;
- if (mode->flags & DRM_MODE_FLAG_NVSYNC)
+ if (adj->flags & DRM_MODE_FLAG_NVSYNC)
cfg |= ATMEL_HLCDC_VSPOL;
- if (mode->flags & DRM_MODE_FLAG_NHSYNC)
+ if (adj->flags & DRM_MODE_FLAG_NHSYNC)
cfg |= ATMEL_HLCDC_HSPOL;
regmap_update_bits(regmap, ATMEL_HLCDC_CFG(5),
@@ -187,44 +119,6 @@
ATMEL_HLCDC_VSPSU | ATMEL_HLCDC_VSPHO |
ATMEL_HLCDC_GUARDTIME_MASK,
cfg);
-
- fb = plane->fb;
- plane->fb = old_fb;
-
- return atmel_hlcdc_plane_update_with_mode(plane, c, fb, 0, 0,
- adj->hdisplay, adj->vdisplay,
- x << 16, y << 16,
- adj->hdisplay << 16,
- adj->vdisplay << 16,
- adj);
-}
-
-int atmel_hlcdc_crtc_mode_set_base(struct drm_crtc *c, int x, int y,
- struct drm_framebuffer *old_fb)
-{
- struct drm_plane *plane = c->primary;
- struct drm_framebuffer *fb = plane->fb;
- struct drm_display_mode *mode = &c->hwmode;
-
- plane->fb = old_fb;
-
- return plane->funcs->update_plane(plane, c, fb,
- 0, 0,
- mode->hdisplay,
- mode->vdisplay,
- x << 16, y << 16,
- mode->hdisplay << 16,
- mode->vdisplay << 16);
-}
-
-static void atmel_hlcdc_crtc_prepare(struct drm_crtc *crtc)
-{
- atmel_hlcdc_crtc_dpms(crtc, DRM_MODE_DPMS_OFF);
-}
-
-static void atmel_hlcdc_crtc_commit(struct drm_crtc *crtc)
-{
- atmel_hlcdc_crtc_dpms(crtc, DRM_MODE_DPMS_ON);
}
static bool atmel_hlcdc_crtc_mode_fixup(struct drm_crtc *crtc,
@@ -234,30 +128,146 @@
return true;
}
-static void atmel_hlcdc_crtc_disable(struct drm_crtc *crtc)
+static void atmel_hlcdc_crtc_disable(struct drm_crtc *c)
{
- struct drm_plane *plane;
+ struct drm_device *dev = c->dev;
+ struct atmel_hlcdc_crtc *crtc = drm_crtc_to_atmel_hlcdc_crtc(c);
+ struct regmap *regmap = crtc->dc->hlcdc->regmap;
+ unsigned int status;
- atmel_hlcdc_crtc_dpms(crtc, DRM_MODE_DPMS_OFF);
- crtc->primary->funcs->disable_plane(crtc->primary);
+ if (!crtc->enabled)
+ return;
- drm_for_each_legacy_plane(plane, &crtc->dev->mode_config.plane_list) {
- if (plane->crtc != crtc)
- continue;
+ drm_crtc_vblank_off(c);
- plane->funcs->disable_plane(crtc->primary);
- plane->crtc = NULL;
+ pm_runtime_get_sync(dev->dev);
+
+ regmap_write(regmap, ATMEL_HLCDC_DIS, ATMEL_HLCDC_DISP);
+ while (!regmap_read(regmap, ATMEL_HLCDC_SR, &status) &&
+ (status & ATMEL_HLCDC_DISP))
+ cpu_relax();
+
+ regmap_write(regmap, ATMEL_HLCDC_DIS, ATMEL_HLCDC_SYNC);
+ while (!regmap_read(regmap, ATMEL_HLCDC_SR, &status) &&
+ (status & ATMEL_HLCDC_SYNC))
+ cpu_relax();
+
+ regmap_write(regmap, ATMEL_HLCDC_DIS, ATMEL_HLCDC_PIXEL_CLK);
+ while (!regmap_read(regmap, ATMEL_HLCDC_SR, &status) &&
+ (status & ATMEL_HLCDC_PIXEL_CLK))
+ cpu_relax();
+
+ clk_disable_unprepare(crtc->dc->hlcdc->sys_clk);
+ pinctrl_pm_select_sleep_state(dev->dev);
+
+ pm_runtime_allow(dev->dev);
+
+ pm_runtime_put_sync(dev->dev);
+
+ crtc->enabled = false;
+}
+
+static void atmel_hlcdc_crtc_enable(struct drm_crtc *c)
+{
+ struct drm_device *dev = c->dev;
+ struct atmel_hlcdc_crtc *crtc = drm_crtc_to_atmel_hlcdc_crtc(c);
+ struct regmap *regmap = crtc->dc->hlcdc->regmap;
+ unsigned int status;
+
+ if (crtc->enabled)
+ return;
+
+ pm_runtime_get_sync(dev->dev);
+
+ pm_runtime_forbid(dev->dev);
+
+ pinctrl_pm_select_default_state(dev->dev);
+ clk_prepare_enable(crtc->dc->hlcdc->sys_clk);
+
+ regmap_write(regmap, ATMEL_HLCDC_EN, ATMEL_HLCDC_PIXEL_CLK);
+ while (!regmap_read(regmap, ATMEL_HLCDC_SR, &status) &&
+ !(status & ATMEL_HLCDC_PIXEL_CLK))
+ cpu_relax();
+
+
+ regmap_write(regmap, ATMEL_HLCDC_EN, ATMEL_HLCDC_SYNC);
+ while (!regmap_read(regmap, ATMEL_HLCDC_SR, &status) &&
+ !(status & ATMEL_HLCDC_SYNC))
+ cpu_relax();
+
+ regmap_write(regmap, ATMEL_HLCDC_EN, ATMEL_HLCDC_DISP);
+ while (!regmap_read(regmap, ATMEL_HLCDC_SR, &status) &&
+ !(status & ATMEL_HLCDC_DISP))
+ cpu_relax();
+
+ pm_runtime_put_sync(dev->dev);
+
+ drm_crtc_vblank_on(c);
+
+ crtc->enabled = true;
+}
+
+void atmel_hlcdc_crtc_suspend(struct drm_crtc *c)
+{
+ struct atmel_hlcdc_crtc *crtc = drm_crtc_to_atmel_hlcdc_crtc(c);
+
+ if (crtc->enabled) {
+ atmel_hlcdc_crtc_disable(c);
+ /* save enable state for resume */
+ crtc->enabled = true;
}
}
+void atmel_hlcdc_crtc_resume(struct drm_crtc *c)
+{
+ struct atmel_hlcdc_crtc *crtc = drm_crtc_to_atmel_hlcdc_crtc(c);
+
+ if (crtc->enabled) {
+ crtc->enabled = false;
+ atmel_hlcdc_crtc_enable(c);
+ }
+}
+
+static int atmel_hlcdc_crtc_atomic_check(struct drm_crtc *c,
+ struct drm_crtc_state *s)
+{
+ struct atmel_hlcdc_crtc *crtc = drm_crtc_to_atmel_hlcdc_crtc(c);
+
+ if (atmel_hlcdc_dc_mode_valid(crtc->dc, &s->adjusted_mode) != MODE_OK)
+ return -EINVAL;
+
+ return atmel_hlcdc_plane_prepare_disc_area(s);
+}
+
+static void atmel_hlcdc_crtc_atomic_begin(struct drm_crtc *c)
+{
+ struct atmel_hlcdc_crtc *crtc = drm_crtc_to_atmel_hlcdc_crtc(c);
+
+ if (c->state->event) {
+ c->state->event->pipe = drm_crtc_index(c);
+
+ WARN_ON(drm_crtc_vblank_get(c) != 0);
+
+ crtc->event = c->state->event;
+ c->state->event = NULL;
+ }
+}
+
+static void atmel_hlcdc_crtc_atomic_flush(struct drm_crtc *crtc)
+{
+ /* TODO: write common plane control register if available */
+}
+
static const struct drm_crtc_helper_funcs lcdc_crtc_helper_funcs = {
.mode_fixup = atmel_hlcdc_crtc_mode_fixup,
- .dpms = atmel_hlcdc_crtc_dpms,
- .mode_set = atmel_hlcdc_crtc_mode_set,
- .mode_set_base = atmel_hlcdc_crtc_mode_set_base,
- .prepare = atmel_hlcdc_crtc_prepare,
- .commit = atmel_hlcdc_crtc_commit,
+ .mode_set = drm_helper_crtc_mode_set,
+ .mode_set_nofb = atmel_hlcdc_crtc_mode_set_nofb,
+ .mode_set_base = drm_helper_crtc_mode_set_base,
.disable = atmel_hlcdc_crtc_disable,
+ .enable = atmel_hlcdc_crtc_enable,
+ .atomic_check = atmel_hlcdc_crtc_atomic_check,
+ .atomic_begin = atmel_hlcdc_crtc_atomic_begin,
+ .atomic_flush = atmel_hlcdc_crtc_atomic_flush,
};
static void atmel_hlcdc_crtc_destroy(struct drm_crtc *c)
@@ -306,61 +316,13 @@
atmel_hlcdc_crtc_finish_page_flip(drm_crtc_to_atmel_hlcdc_crtc(c));
}
-static int atmel_hlcdc_crtc_page_flip(struct drm_crtc *c,
- struct drm_framebuffer *fb,
- struct drm_pending_vblank_event *event,
- uint32_t page_flip_flags)
-{
- struct atmel_hlcdc_crtc *crtc = drm_crtc_to_atmel_hlcdc_crtc(c);
- struct atmel_hlcdc_plane_update_req req;
- struct drm_plane *plane = c->primary;
- struct drm_device *dev = c->dev;
- unsigned long flags;
- int ret = 0;
-
- spin_lock_irqsave(&dev->event_lock, flags);
- if (crtc->event)
- ret = -EBUSY;
- spin_unlock_irqrestore(&dev->event_lock, flags);
-
- if (ret)
- return ret;
-
- memset(&req, 0, sizeof(req));
- req.crtc_x = 0;
- req.crtc_y = 0;
- req.crtc_h = c->mode.crtc_vdisplay;
- req.crtc_w = c->mode.crtc_hdisplay;
- req.src_x = c->x << 16;
- req.src_y = c->y << 16;
- req.src_w = req.crtc_w << 16;
- req.src_h = req.crtc_h << 16;
- req.fb = fb;
-
- ret = atmel_hlcdc_plane_prepare_update_req(plane, &req, &c->hwmode);
- if (ret)
- return ret;
-
- if (event) {
- drm_vblank_get(c->dev, crtc->id);
- spin_lock_irqsave(&dev->event_lock, flags);
- crtc->event = event;
- spin_unlock_irqrestore(&dev->event_lock, flags);
- }
-
- ret = atmel_hlcdc_plane_apply_update_req(plane, &req);
- if (ret)
- crtc->event = NULL;
- else
- plane->fb = fb;
-
- return ret;
-}
-
static const struct drm_crtc_funcs atmel_hlcdc_crtc_funcs = {
- .page_flip = atmel_hlcdc_crtc_page_flip,
- .set_config = drm_crtc_helper_set_config,
+ .page_flip = drm_atomic_helper_page_flip,
+ .set_config = drm_atomic_helper_set_config,
.destroy = atmel_hlcdc_crtc_destroy,
+ .reset = drm_atomic_helper_crtc_reset,
+ .atomic_duplicate_state = drm_atomic_helper_crtc_duplicate_state,
+ .atomic_destroy_state = drm_atomic_helper_crtc_destroy_state,
};
int atmel_hlcdc_crtc_create(struct drm_device *dev)
@@ -375,7 +337,6 @@
if (!crtc)
return -ENOMEM;
- crtc->dpms = DRM_MODE_DPMS_OFF;
crtc->dc = dc;
ret = drm_crtc_init_with_planes(dev, &crtc->base,
diff --git a/drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_dc.c b/drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_dc.c
index c1cb174..60b0c13 100644
--- a/drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_dc.c
+++ b/drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_dc.c
@@ -222,6 +222,8 @@
static const struct drm_mode_config_funcs mode_config_funcs = {
.fb_create = atmel_hlcdc_fb_create,
.output_poll_changed = atmel_hlcdc_fb_output_poll_changed,
+ .atomic_check = drm_atomic_helper_check,
+ .atomic_commit = drm_atomic_helper_commit,
};
static int atmel_hlcdc_dc_modeset_init(struct drm_device *dev)
@@ -317,6 +319,8 @@
goto err_periph_clk_disable;
}
+ drm_mode_config_reset(dev);
+
ret = drm_vblank_init(dev, 1);
if (ret < 0) {
dev_err(dev->dev, "failed to initialize vblank\n");
@@ -555,6 +559,41 @@
return 0;
}
+#ifdef CONFIG_PM
+static int atmel_hlcdc_dc_drm_suspend(struct device *dev)
+{
+ struct drm_device *drm_dev = dev_get_drvdata(dev);
+ struct drm_crtc *crtc;
+
+ if (pm_runtime_suspended(dev))
+ return 0;
+
+ drm_modeset_lock_all(drm_dev);
+ list_for_each_entry(crtc, &drm_dev->mode_config.crtc_list, head)
+ atmel_hlcdc_crtc_suspend(crtc);
+ drm_modeset_unlock_all(drm_dev);
+ return 0;
+}
+
+static int atmel_hlcdc_dc_drm_resume(struct device *dev)
+{
+ struct drm_device *drm_dev = dev_get_drvdata(dev);
+ struct drm_crtc *crtc;
+
+ if (pm_runtime_suspended(dev))
+ return 0;
+
+ drm_modeset_lock_all(drm_dev);
+ list_for_each_entry(crtc, &drm_dev->mode_config.crtc_list, head)
+ atmel_hlcdc_crtc_resume(crtc);
+ drm_modeset_unlock_all(drm_dev);
+ return 0;
+}
+#endif
+
+static SIMPLE_DEV_PM_OPS(atmel_hlcdc_dc_drm_pm_ops,
+ atmel_hlcdc_dc_drm_suspend, atmel_hlcdc_dc_drm_resume);
+
static const struct of_device_id atmel_hlcdc_dc_of_match[] = {
{ .compatible = "atmel,hlcdc-display-controller" },
{ },
@@ -565,6 +604,7 @@
.remove = atmel_hlcdc_dc_drm_remove,
.driver = {
.name = "atmel-hlcdc-display-controller",
+ .pm = &atmel_hlcdc_dc_drm_pm_ops,
.of_match_table = atmel_hlcdc_dc_of_match,
},
};
diff --git a/drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_dc.h b/drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_dc.h
index 7bc96af..cf6b375 100644
--- a/drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_dc.h
+++ b/drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_dc.h
@@ -26,11 +26,14 @@
#include <linux/irqdomain.h>
#include <linux/pwm.h>
+#include <drm/drm_atomic.h>
+#include <drm/drm_atomic_helper.h>
#include <drm/drm_crtc.h>
#include <drm/drm_crtc_helper.h>
#include <drm/drm_fb_cma_helper.h>
#include <drm/drm_gem_cma_helper.h>
#include <drm/drm_panel.h>
+#include <drm/drm_plane_helper.h>
#include <drm/drmP.h>
#include "atmel_hlcdc_layer.h"
@@ -69,7 +72,6 @@
*/
struct atmel_hlcdc_plane_properties {
struct drm_property *alpha;
- struct drm_property *rotation;
};
/**
@@ -84,7 +86,6 @@
struct drm_plane base;
struct atmel_hlcdc_layer layer;
struct atmel_hlcdc_plane_properties *properties;
- unsigned int rotation;
};
static inline struct atmel_hlcdc_plane *
@@ -100,43 +101,6 @@
}
/**
- * Atmel HLCDC Plane update request structure.
- *
- * @crtc_x: x position of the plane relative to the CRTC
- * @crtc_y: y position of the plane relative to the CRTC
- * @crtc_w: visible width of the plane
- * @crtc_h: visible height of the plane
- * @src_x: x buffer position
- * @src_y: y buffer position
- * @src_w: buffer width
- * @src_h: buffer height
- * @fb: framebuffer object
- * @bpp: bytes per pixel deduced from pixel_format
- * @offsets: offsets to apply to the GEM buffers
- * @xstride: value to add to the pixel pointer between each line
- * @pstride: value to add to the pixel pointer between each pixel
- * @nplanes: number of planes (deduced from pixel_format)
- */
-struct atmel_hlcdc_plane_update_req {
- int crtc_x;
- int crtc_y;
- unsigned int crtc_w;
- unsigned int crtc_h;
- uint32_t src_x;
- uint32_t src_y;
- uint32_t src_w;
- uint32_t src_h;
- struct drm_framebuffer *fb;
-
- /* These fields are private and should not be touched */
- int bpp[ATMEL_HLCDC_MAX_PLANES];
- unsigned int offsets[ATMEL_HLCDC_MAX_PLANES];
- int xstride[ATMEL_HLCDC_MAX_PLANES];
- int pstride[ATMEL_HLCDC_MAX_PLANES];
- int nplanes;
-};
-
-/**
* Atmel HLCDC Planes.
*
* This structure stores the instantiated HLCDC Planes and can be accessed by
@@ -184,28 +148,16 @@
struct atmel_hlcdc_planes *
atmel_hlcdc_create_planes(struct drm_device *dev);
-int atmel_hlcdc_plane_prepare_update_req(struct drm_plane *p,
- struct atmel_hlcdc_plane_update_req *req,
- const struct drm_display_mode *mode);
-
-int atmel_hlcdc_plane_apply_update_req(struct drm_plane *p,
- struct atmel_hlcdc_plane_update_req *req);
-
-int atmel_hlcdc_plane_update_with_mode(struct drm_plane *p,
- struct drm_crtc *crtc,
- struct drm_framebuffer *fb,
- int crtc_x, int crtc_y,
- unsigned int crtc_w,
- unsigned int crtc_h,
- uint32_t src_x, uint32_t src_y,
- uint32_t src_w, uint32_t src_h,
- const struct drm_display_mode *mode);
+int atmel_hlcdc_plane_prepare_disc_area(struct drm_crtc_state *c_state);
void atmel_hlcdc_crtc_irq(struct drm_crtc *c);
void atmel_hlcdc_crtc_cancel_page_flip(struct drm_crtc *crtc,
struct drm_file *file);
+void atmel_hlcdc_crtc_suspend(struct drm_crtc *crtc);
+void atmel_hlcdc_crtc_resume(struct drm_crtc *crtc);
+
int atmel_hlcdc_crtc_create(struct drm_device *dev);
int atmel_hlcdc_create_outputs(struct drm_device *dev);
diff --git a/drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_layer.c b/drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_layer.c
index e79bd9b..377e43c 100644
--- a/drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_layer.c
+++ b/drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_layer.c
@@ -298,7 +298,7 @@
spin_unlock_irqrestore(&layer->lock, flags);
}
-int atmel_hlcdc_layer_disable(struct atmel_hlcdc_layer *layer)
+void atmel_hlcdc_layer_disable(struct atmel_hlcdc_layer *layer)
{
struct atmel_hlcdc_layer_dma_channel *dma = &layer->dma;
struct atmel_hlcdc_layer_update *upd = &layer->update;
@@ -341,8 +341,6 @@
dma->status = ATMEL_HLCDC_LAYER_DISABLED;
spin_unlock_irqrestore(&layer->lock, flags);
-
- return 0;
}
int atmel_hlcdc_layer_update_start(struct atmel_hlcdc_layer *layer)
diff --git a/drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_layer.h b/drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_layer.h
index 27e56c0..9beabc9 100644
--- a/drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_layer.h
+++ b/drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_layer.h
@@ -120,6 +120,7 @@
#define ATMEL_HLCDC_LAYER_DISCEN BIT(11)
#define ATMEL_HLCDC_LAYER_GA_SHIFT 16
#define ATMEL_HLCDC_LAYER_GA_MASK GENMASK(23, ATMEL_HLCDC_LAYER_GA_SHIFT)
+#define ATMEL_HLCDC_LAYER_GA(x) ((x) << ATMEL_HLCDC_LAYER_GA_SHIFT)
#define ATMEL_HLCDC_LAYER_CSC_CFG(p, o) ATMEL_HLCDC_LAYER_CFG(p, (p)->desc->layout.csc + o)
@@ -376,7 +377,7 @@
void atmel_hlcdc_layer_cleanup(struct drm_device *dev,
struct atmel_hlcdc_layer *layer);
-int atmel_hlcdc_layer_disable(struct atmel_hlcdc_layer *layer);
+void atmel_hlcdc_layer_disable(struct atmel_hlcdc_layer *layer);
int atmel_hlcdc_layer_update_start(struct atmel_hlcdc_layer *layer);
diff --git a/drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_output.c b/drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_output.c
index c402192..9c45130 100644
--- a/drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_output.c
+++ b/drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_output.c
@@ -86,25 +86,22 @@
return container_of(output, struct atmel_hlcdc_panel, base);
}
-static void atmel_hlcdc_panel_encoder_dpms(struct drm_encoder *encoder,
- int mode)
+static void atmel_hlcdc_panel_encoder_enable(struct drm_encoder *encoder)
{
struct atmel_hlcdc_rgb_output *rgb =
drm_encoder_to_atmel_hlcdc_rgb_output(encoder);
struct atmel_hlcdc_panel *panel = atmel_hlcdc_rgb_output_to_panel(rgb);
- if (mode != DRM_MODE_DPMS_ON)
- mode = DRM_MODE_DPMS_OFF;
+ drm_panel_enable(panel->panel);
+}
- if (mode == rgb->dpms)
- return;
+static void atmel_hlcdc_panel_encoder_disable(struct drm_encoder *encoder)
+{
+ struct atmel_hlcdc_rgb_output *rgb =
+ drm_encoder_to_atmel_hlcdc_rgb_output(encoder);
+ struct atmel_hlcdc_panel *panel = atmel_hlcdc_rgb_output_to_panel(rgb);
- if (mode != DRM_MODE_DPMS_ON)
- drm_panel_disable(panel->panel);
- else
- drm_panel_enable(panel->panel);
-
- rgb->dpms = mode;
+ drm_panel_disable(panel->panel);
}
static bool
@@ -115,16 +112,6 @@
return true;
}
-static void atmel_hlcdc_panel_encoder_prepare(struct drm_encoder *encoder)
-{
- atmel_hlcdc_panel_encoder_dpms(encoder, DRM_MODE_DPMS_OFF);
-}
-
-static void atmel_hlcdc_panel_encoder_commit(struct drm_encoder *encoder)
-{
- atmel_hlcdc_panel_encoder_dpms(encoder, DRM_MODE_DPMS_ON);
-}
-
static void
atmel_hlcdc_rgb_encoder_mode_set(struct drm_encoder *encoder,
struct drm_display_mode *mode,
@@ -156,11 +143,10 @@
}
static struct drm_encoder_helper_funcs atmel_hlcdc_panel_encoder_helper_funcs = {
- .dpms = atmel_hlcdc_panel_encoder_dpms,
.mode_fixup = atmel_hlcdc_panel_encoder_mode_fixup,
- .prepare = atmel_hlcdc_panel_encoder_prepare,
- .commit = atmel_hlcdc_panel_encoder_commit,
.mode_set = atmel_hlcdc_rgb_encoder_mode_set,
+ .disable = atmel_hlcdc_panel_encoder_disable,
+ .enable = atmel_hlcdc_panel_encoder_enable,
};
static void atmel_hlcdc_rgb_encoder_destroy(struct drm_encoder *encoder)
@@ -226,10 +212,13 @@
}
static const struct drm_connector_funcs atmel_hlcdc_panel_connector_funcs = {
- .dpms = drm_helper_connector_dpms,
+ .dpms = drm_atomic_helper_connector_dpms,
.detect = atmel_hlcdc_panel_connector_detect,
.fill_modes = drm_helper_probe_single_connector_modes,
.destroy = atmel_hlcdc_panel_connector_destroy,
+ .reset = drm_atomic_helper_connector_reset,
+ .atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
+ .atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
};
static int atmel_hlcdc_create_panel_output(struct drm_device *dev,
diff --git a/drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_plane.c b/drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_plane.c
index c5892dc..be9fa82 100644
--- a/drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_plane.c
+++ b/drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_plane.c
@@ -19,6 +19,59 @@
#include "atmel_hlcdc_dc.h"
+/**
+ * Atmel HLCDC Plane state structure.
+ *
+ * @base: DRM plane state
+ * @crtc_x: x position of the plane relative to the CRTC
+ * @crtc_y: y position of the plane relative to the CRTC
+ * @crtc_w: visible width of the plane
+ * @crtc_h: visible height of the plane
+ * @src_x: x buffer position
+ * @src_y: y buffer position
+ * @src_w: buffer width
+ * @src_h: buffer height
+ * @alpha: alpha blending of the plane
+ * @bpp: bytes per pixel deduced from pixel_format
+ * @offsets: offsets to apply to the GEM buffers
+ * @xstride: value to add to the pixel pointer between each line
+ * @pstride: value to add to the pixel pointer between each pixel
+ * @nplanes: number of planes (deduced from pixel_format)
+ */
+struct atmel_hlcdc_plane_state {
+ struct drm_plane_state base;
+ int crtc_x;
+ int crtc_y;
+ unsigned int crtc_w;
+ unsigned int crtc_h;
+ uint32_t src_x;
+ uint32_t src_y;
+ uint32_t src_w;
+ uint32_t src_h;
+
+ u8 alpha;
+
+ bool disc_updated;
+
+ int disc_x;
+ int disc_y;
+ int disc_w;
+ int disc_h;
+
+ /* These fields are private and should not be touched */
+ int bpp[ATMEL_HLCDC_MAX_PLANES];
+ unsigned int offsets[ATMEL_HLCDC_MAX_PLANES];
+ int xstride[ATMEL_HLCDC_MAX_PLANES];
+ int pstride[ATMEL_HLCDC_MAX_PLANES];
+ int nplanes;
+};
+
+static inline struct atmel_hlcdc_plane_state *
+drm_plane_state_to_atmel_hlcdc_plane_state(struct drm_plane_state *s)
+{
+ return container_of(s, struct atmel_hlcdc_plane_state, base);
+}
+
#define SUBPIXEL_MASK 0xffff
static uint32_t rgb_formats[] = {
@@ -128,7 +181,7 @@
return 0;
}
-static bool atmel_hlcdc_format_embedds_alpha(u32 format)
+static bool atmel_hlcdc_format_embeds_alpha(u32 format)
{
int i;
@@ -204,7 +257,7 @@
static void
atmel_hlcdc_plane_update_pos_and_size(struct atmel_hlcdc_plane *plane,
- struct atmel_hlcdc_plane_update_req *req)
+ struct atmel_hlcdc_plane_state *state)
{
const struct atmel_hlcdc_layer_cfg_layout *layout =
&plane->layer.desc->layout;
@@ -213,69 +266,69 @@
atmel_hlcdc_layer_update_cfg(&plane->layer,
layout->size,
0xffffffff,
- (req->crtc_w - 1) |
- ((req->crtc_h - 1) << 16));
+ (state->crtc_w - 1) |
+ ((state->crtc_h - 1) << 16));
if (layout->memsize)
atmel_hlcdc_layer_update_cfg(&plane->layer,
layout->memsize,
0xffffffff,
- (req->src_w - 1) |
- ((req->src_h - 1) << 16));
+ (state->src_w - 1) |
+ ((state->src_h - 1) << 16));
if (layout->pos)
atmel_hlcdc_layer_update_cfg(&plane->layer,
layout->pos,
0xffffffff,
- req->crtc_x |
- (req->crtc_y << 16));
+ state->crtc_x |
+ (state->crtc_y << 16));
/* TODO: rework the rescaling part */
- if (req->crtc_w != req->src_w || req->crtc_h != req->src_h) {
+ if (state->crtc_w != state->src_w || state->crtc_h != state->src_h) {
u32 factor_reg = 0;
- if (req->crtc_w != req->src_w) {
+ if (state->crtc_w != state->src_w) {
int i;
u32 factor;
u32 *coeff_tab = heo_upscaling_xcoef;
u32 max_memsize;
- if (req->crtc_w < req->src_w)
+ if (state->crtc_w < state->src_w)
coeff_tab = heo_downscaling_xcoef;
for (i = 0; i < ARRAY_SIZE(heo_upscaling_xcoef); i++)
atmel_hlcdc_layer_update_cfg(&plane->layer,
17 + i,
0xffffffff,
coeff_tab[i]);
- factor = ((8 * 256 * req->src_w) - (256 * 4)) /
- req->crtc_w;
+ factor = ((8 * 256 * state->src_w) - (256 * 4)) /
+ state->crtc_w;
factor++;
- max_memsize = ((factor * req->crtc_w) + (256 * 4)) /
+ max_memsize = ((factor * state->crtc_w) + (256 * 4)) /
2048;
- if (max_memsize > req->src_w)
+ if (max_memsize > state->src_w)
factor--;
factor_reg |= factor | 0x80000000;
}
- if (req->crtc_h != req->src_h) {
+ if (state->crtc_h != state->src_h) {
int i;
u32 factor;
u32 *coeff_tab = heo_upscaling_ycoef;
u32 max_memsize;
- if (req->crtc_w < req->src_w)
+ if (state->crtc_w < state->src_w)
coeff_tab = heo_downscaling_ycoef;
for (i = 0; i < ARRAY_SIZE(heo_upscaling_ycoef); i++)
atmel_hlcdc_layer_update_cfg(&plane->layer,
33 + i,
0xffffffff,
coeff_tab[i]);
- factor = ((8 * 256 * req->src_w) - (256 * 4)) /
- req->crtc_w;
+ factor = ((8 * 256 * state->src_w) - (256 * 4)) /
+ state->crtc_w;
factor++;
- max_memsize = ((factor * req->crtc_w) + (256 * 4)) /
+ max_memsize = ((factor * state->crtc_w) + (256 * 4)) /
2048;
- if (max_memsize > req->src_w)
+ if (max_memsize > state->src_w)
factor--;
factor_reg |= (factor << 16) | 0x80000000;
}
@@ -287,7 +340,7 @@
static void
atmel_hlcdc_plane_update_general_settings(struct atmel_hlcdc_plane *plane,
- struct atmel_hlcdc_plane_update_req *req)
+ struct atmel_hlcdc_plane_state *state)
{
const struct atmel_hlcdc_layer_cfg_layout *layout =
&plane->layer.desc->layout;
@@ -297,10 +350,11 @@
cfg |= ATMEL_HLCDC_LAYER_OVR | ATMEL_HLCDC_LAYER_ITER2BL |
ATMEL_HLCDC_LAYER_ITER;
- if (atmel_hlcdc_format_embedds_alpha(req->fb->pixel_format))
+ if (atmel_hlcdc_format_embeds_alpha(state->base.fb->pixel_format))
cfg |= ATMEL_HLCDC_LAYER_LAEN;
else
- cfg |= ATMEL_HLCDC_LAYER_GAEN;
+ cfg |= ATMEL_HLCDC_LAYER_GAEN |
+ ATMEL_HLCDC_LAYER_GA(state->alpha);
}
atmel_hlcdc_layer_update_cfg(&plane->layer,
@@ -312,24 +366,26 @@
ATMEL_HLCDC_LAYER_ITER2BL |
ATMEL_HLCDC_LAYER_ITER |
ATMEL_HLCDC_LAYER_GAEN |
+ ATMEL_HLCDC_LAYER_GA_MASK |
ATMEL_HLCDC_LAYER_LAEN |
ATMEL_HLCDC_LAYER_OVR |
ATMEL_HLCDC_LAYER_DMA, cfg);
}
static void atmel_hlcdc_plane_update_format(struct atmel_hlcdc_plane *plane,
- struct atmel_hlcdc_plane_update_req *req)
+ struct atmel_hlcdc_plane_state *state)
{
u32 cfg;
int ret;
- ret = atmel_hlcdc_format_to_plane_mode(req->fb->pixel_format, &cfg);
+ ret = atmel_hlcdc_format_to_plane_mode(state->base.fb->pixel_format,
+ &cfg);
if (ret)
return;
- if ((req->fb->pixel_format == DRM_FORMAT_YUV422 ||
- req->fb->pixel_format == DRM_FORMAT_NV61) &&
- (plane->rotation & (BIT(DRM_ROTATE_90) | BIT(DRM_ROTATE_270))))
+ if ((state->base.fb->pixel_format == DRM_FORMAT_YUV422 ||
+ state->base.fb->pixel_format == DRM_FORMAT_NV61) &&
+ (state->base.rotation & (BIT(DRM_ROTATE_90) | BIT(DRM_ROTATE_270))))
cfg |= ATMEL_HLCDC_YUV422ROT;
atmel_hlcdc_layer_update_cfg(&plane->layer,
@@ -341,7 +397,7 @@
* Rotation optimization is not working on RGB888 (rotation is still
* working but without any optimization).
*/
- if (req->fb->pixel_format == DRM_FORMAT_RGB888)
+ if (state->base.fb->pixel_format == DRM_FORMAT_RGB888)
cfg = ATMEL_HLCDC_LAYER_DMA_ROTDIS;
else
cfg = 0;
@@ -352,73 +408,142 @@
}
static void atmel_hlcdc_plane_update_buffers(struct atmel_hlcdc_plane *plane,
- struct atmel_hlcdc_plane_update_req *req)
+ struct atmel_hlcdc_plane_state *state)
{
struct atmel_hlcdc_layer *layer = &plane->layer;
const struct atmel_hlcdc_layer_cfg_layout *layout =
&layer->desc->layout;
int i;
- atmel_hlcdc_layer_update_set_fb(&plane->layer, req->fb, req->offsets);
+ atmel_hlcdc_layer_update_set_fb(&plane->layer, state->base.fb,
+ state->offsets);
- for (i = 0; i < req->nplanes; i++) {
+ for (i = 0; i < state->nplanes; i++) {
if (layout->xstride[i]) {
atmel_hlcdc_layer_update_cfg(&plane->layer,
layout->xstride[i],
0xffffffff,
- req->xstride[i]);
+ state->xstride[i]);
}
if (layout->pstride[i]) {
atmel_hlcdc_layer_update_cfg(&plane->layer,
layout->pstride[i],
0xffffffff,
- req->pstride[i]);
+ state->pstride[i]);
}
}
}
-static int atmel_hlcdc_plane_check_update_req(struct drm_plane *p,
- struct atmel_hlcdc_plane_update_req *req,
- const struct drm_display_mode *mode)
+int
+atmel_hlcdc_plane_prepare_disc_area(struct drm_crtc_state *c_state)
{
- struct atmel_hlcdc_plane *plane = drm_plane_to_atmel_hlcdc_plane(p);
- const struct atmel_hlcdc_layer_cfg_layout *layout =
- &plane->layer.desc->layout;
+ int disc_x = 0, disc_y = 0, disc_w = 0, disc_h = 0;
+ const struct atmel_hlcdc_layer_cfg_layout *layout;
+ struct atmel_hlcdc_plane_state *primary_state;
+ struct drm_plane_state *primary_s;
+ struct atmel_hlcdc_plane *primary;
+ struct drm_plane *ovl;
- if (!layout->size &&
- (mode->hdisplay != req->crtc_w ||
- mode->vdisplay != req->crtc_h))
- return -EINVAL;
+ primary = drm_plane_to_atmel_hlcdc_plane(c_state->crtc->primary);
+ layout = &primary->layer.desc->layout;
+ if (!layout->disc_pos || !layout->disc_size)
+ return 0;
- if (plane->layer.desc->max_height &&
- req->crtc_h > plane->layer.desc->max_height)
- return -EINVAL;
+ primary_s = drm_atomic_get_plane_state(c_state->state,
+ &primary->base);
+ if (IS_ERR(primary_s))
+ return PTR_ERR(primary_s);
- if (plane->layer.desc->max_width &&
- req->crtc_w > plane->layer.desc->max_width)
- return -EINVAL;
+ primary_state = drm_plane_state_to_atmel_hlcdc_plane_state(primary_s);
- if ((req->crtc_h != req->src_h || req->crtc_w != req->src_w) &&
- (!layout->memsize ||
- atmel_hlcdc_format_embedds_alpha(req->fb->pixel_format)))
- return -EINVAL;
+ drm_atomic_crtc_state_for_each_plane(ovl, c_state) {
+ struct atmel_hlcdc_plane_state *ovl_state;
+ struct drm_plane_state *ovl_s;
- if (req->crtc_x < 0 || req->crtc_y < 0)
- return -EINVAL;
+ if (ovl == c_state->crtc->primary)
+ continue;
- if (req->crtc_w + req->crtc_x > mode->hdisplay ||
- req->crtc_h + req->crtc_y > mode->vdisplay)
- return -EINVAL;
+ ovl_s = drm_atomic_get_plane_state(c_state->state, ovl);
+ if (IS_ERR(ovl_s))
+ return PTR_ERR(ovl_s);
+
+ ovl_state = drm_plane_state_to_atmel_hlcdc_plane_state(ovl_s);
+
+ if (!ovl_s->fb ||
+ atmel_hlcdc_format_embeds_alpha(ovl_s->fb->pixel_format) ||
+ ovl_state->alpha != 255)
+ continue;
+
+ /* TODO: implement a smarter hidden area detection */
+ if (ovl_state->crtc_h * ovl_state->crtc_w < disc_h * disc_w)
+ continue;
+
+ disc_x = ovl_state->crtc_x;
+ disc_y = ovl_state->crtc_y;
+ disc_h = ovl_state->crtc_h;
+ disc_w = ovl_state->crtc_w;
+ }
+
+ if (disc_x == primary_state->disc_x &&
+ disc_y == primary_state->disc_y &&
+ disc_w == primary_state->disc_w &&
+ disc_h == primary_state->disc_h)
+ return 0;
+
+
+ primary_state->disc_x = disc_x;
+ primary_state->disc_y = disc_y;
+ primary_state->disc_w = disc_w;
+ primary_state->disc_h = disc_h;
+ primary_state->disc_updated = true;
return 0;
}
-int atmel_hlcdc_plane_prepare_update_req(struct drm_plane *p,
- struct atmel_hlcdc_plane_update_req *req,
- const struct drm_display_mode *mode)
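+/*
+ * Program the layer's discard window (DISCEN, disc_pos, disc_size) from the
+ * values computed in atmel_hlcdc_plane_prepare_disc_area(); the hardware can
+ * then presumably skip fetching the part of the primary plane hidden behind
+ * a fully opaque overlay.
+ */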
+static void
+atmel_hlcdc_plane_update_disc_area(struct atmel_hlcdc_plane *plane,
+ struct atmel_hlcdc_plane_state *state)
+{
+ const struct atmel_hlcdc_layer_cfg_layout *layout =
+ &plane->layer.desc->layout;
+ int disc_surface = 0;
+
+ if (!state->disc_updated)
+ return;
+
+ disc_surface = state->disc_h * state->disc_w;
+
+ atmel_hlcdc_layer_update_cfg(&plane->layer, layout->general_config,
+ ATMEL_HLCDC_LAYER_DISCEN,
+ disc_surface ? ATMEL_HLCDC_LAYER_DISCEN : 0);
+
+ if (!disc_surface)
+ return;
+
+ atmel_hlcdc_layer_update_cfg(&plane->layer,
+ layout->disc_pos,
+ 0xffffffff,
+ state->disc_x | (state->disc_y << 16));
+
+ atmel_hlcdc_layer_update_cfg(&plane->layer,
+ layout->disc_size,
+ 0xffffffff,
+ (state->disc_w - 1) |
+ ((state->disc_h - 1) << 16));
+}
+
+static int atmel_hlcdc_plane_atomic_check(struct drm_plane *p,
+ struct drm_plane_state *s)
{
struct atmel_hlcdc_plane *plane = drm_plane_to_atmel_hlcdc_plane(p);
+ struct atmel_hlcdc_plane_state *state =
+ drm_plane_state_to_atmel_hlcdc_plane_state(s);
+ const struct atmel_hlcdc_layer_cfg_layout *layout =
+ &plane->layer.desc->layout;
+ struct drm_framebuffer *fb = state->base.fb;
+ const struct drm_display_mode *mode;
+ struct drm_crtc_state *crtc_state;
unsigned int patched_crtc_w;
unsigned int patched_crtc_h;
unsigned int patched_src_w;
@@ -430,196 +555,196 @@
int vsub = 1;
int i;
- if ((req->src_x | req->src_y | req->src_w | req->src_h) &
+ if (!state->base.crtc || !fb)
+ return 0;
+
+ crtc_state = s->state->crtc_states[drm_crtc_index(s->crtc)];
+ mode = &crtc_state->adjusted_mode;
+
+ state->src_x = s->src_x;
+ state->src_y = s->src_y;
+ state->src_h = s->src_h;
+ state->src_w = s->src_w;
+ state->crtc_x = s->crtc_x;
+ state->crtc_y = s->crtc_y;
+ state->crtc_h = s->crtc_h;
+ state->crtc_w = s->crtc_w;
+ if ((state->src_x | state->src_y | state->src_w | state->src_h) &
SUBPIXEL_MASK)
return -EINVAL;
- req->src_x >>= 16;
- req->src_y >>= 16;
- req->src_w >>= 16;
- req->src_h >>= 16;
+ state->src_x >>= 16;
+ state->src_y >>= 16;
+ state->src_w >>= 16;
+ state->src_h >>= 16;
- req->nplanes = drm_format_num_planes(req->fb->pixel_format);
- if (req->nplanes > ATMEL_HLCDC_MAX_PLANES)
+ state->nplanes = drm_format_num_planes(fb->pixel_format);
+ if (state->nplanes > ATMEL_HLCDC_MAX_PLANES)
return -EINVAL;
/*
* Swap width and size in case of 90 or 270 degrees rotation
*/
- if (plane->rotation & (BIT(DRM_ROTATE_90) | BIT(DRM_ROTATE_270))) {
- tmp = req->crtc_w;
- req->crtc_w = req->crtc_h;
- req->crtc_h = tmp;
- tmp = req->src_w;
- req->src_w = req->src_h;
- req->src_h = tmp;
+ if (state->base.rotation & (BIT(DRM_ROTATE_90) | BIT(DRM_ROTATE_270))) {
+ tmp = state->crtc_w;
+ state->crtc_w = state->crtc_h;
+ state->crtc_h = tmp;
+ tmp = state->src_w;
+ state->src_w = state->src_h;
+ state->src_h = tmp;
}
- if (req->crtc_x + req->crtc_w > mode->hdisplay)
- patched_crtc_w = mode->hdisplay - req->crtc_x;
+ if (state->crtc_x + state->crtc_w > mode->hdisplay)
+ patched_crtc_w = mode->hdisplay - state->crtc_x;
else
- patched_crtc_w = req->crtc_w;
+ patched_crtc_w = state->crtc_w;
- if (req->crtc_x < 0) {
- patched_crtc_w += req->crtc_x;
- x_offset = -req->crtc_x;
- req->crtc_x = 0;
+ if (state->crtc_x < 0) {
+ patched_crtc_w += state->crtc_x;
+ x_offset = -state->crtc_x;
+ state->crtc_x = 0;
}
- if (req->crtc_y + req->crtc_h > mode->vdisplay)
- patched_crtc_h = mode->vdisplay - req->crtc_y;
+ if (state->crtc_y + state->crtc_h > mode->vdisplay)
+ patched_crtc_h = mode->vdisplay - state->crtc_y;
else
- patched_crtc_h = req->crtc_h;
+ patched_crtc_h = state->crtc_h;
- if (req->crtc_y < 0) {
- patched_crtc_h += req->crtc_y;
- y_offset = -req->crtc_y;
- req->crtc_y = 0;
+ if (state->crtc_y < 0) {
+ patched_crtc_h += state->crtc_y;
+ y_offset = -state->crtc_y;
+ state->crtc_y = 0;
}
- patched_src_w = DIV_ROUND_CLOSEST(patched_crtc_w * req->src_w,
- req->crtc_w);
- patched_src_h = DIV_ROUND_CLOSEST(patched_crtc_h * req->src_h,
- req->crtc_h);
+ patched_src_w = DIV_ROUND_CLOSEST(patched_crtc_w * state->src_w,
+ state->crtc_w);
+ patched_src_h = DIV_ROUND_CLOSEST(patched_crtc_h * state->src_h,
+ state->crtc_h);
- hsub = drm_format_horz_chroma_subsampling(req->fb->pixel_format);
- vsub = drm_format_vert_chroma_subsampling(req->fb->pixel_format);
+ hsub = drm_format_horz_chroma_subsampling(fb->pixel_format);
+ vsub = drm_format_vert_chroma_subsampling(fb->pixel_format);
- for (i = 0; i < req->nplanes; i++) {
+ for (i = 0; i < state->nplanes; i++) {
unsigned int offset = 0;
int xdiv = i ? hsub : 1;
int ydiv = i ? vsub : 1;
- req->bpp[i] = drm_format_plane_cpp(req->fb->pixel_format, i);
- if (!req->bpp[i])
+ state->bpp[i] = drm_format_plane_cpp(fb->pixel_format, i);
+ if (!state->bpp[i])
return -EINVAL;
- switch (plane->rotation & 0xf) {
+ switch (state->base.rotation & 0xf) {
case BIT(DRM_ROTATE_90):
- offset = ((y_offset + req->src_y + patched_src_w - 1) /
- ydiv) * req->fb->pitches[i];
- offset += ((x_offset + req->src_x) / xdiv) *
- req->bpp[i];
- req->xstride[i] = ((patched_src_w - 1) / ydiv) *
- req->fb->pitches[i];
- req->pstride[i] = -req->fb->pitches[i] - req->bpp[i];
+ offset = ((y_offset + state->src_y + patched_src_w - 1) /
+ ydiv) * fb->pitches[i];
+ offset += ((x_offset + state->src_x) / xdiv) *
+ state->bpp[i];
+ state->xstride[i] = ((patched_src_w - 1) / ydiv) *
+ fb->pitches[i];
+ state->pstride[i] = -fb->pitches[i] - state->bpp[i];
break;
case BIT(DRM_ROTATE_180):
- offset = ((y_offset + req->src_y + patched_src_h - 1) /
- ydiv) * req->fb->pitches[i];
- offset += ((x_offset + req->src_x + patched_src_w - 1) /
- xdiv) * req->bpp[i];
- req->xstride[i] = ((((patched_src_w - 1) / xdiv) - 1) *
- req->bpp[i]) - req->fb->pitches[i];
- req->pstride[i] = -2 * req->bpp[i];
+ offset = ((y_offset + state->src_y + patched_src_h - 1) /
+ ydiv) * fb->pitches[i];
+ offset += ((x_offset + state->src_x + patched_src_w - 1) /
+ xdiv) * state->bpp[i];
+ state->xstride[i] = ((((patched_src_w - 1) / xdiv) - 1) *
+ state->bpp[i]) - fb->pitches[i];
+ state->pstride[i] = -2 * state->bpp[i];
break;
case BIT(DRM_ROTATE_270):
- offset = ((y_offset + req->src_y) / ydiv) *
- req->fb->pitches[i];
- offset += ((x_offset + req->src_x + patched_src_h - 1) /
- xdiv) * req->bpp[i];
- req->xstride[i] = -(((patched_src_w - 1) / ydiv) *
- req->fb->pitches[i]) -
- (2 * req->bpp[i]);
- req->pstride[i] = req->fb->pitches[i] - req->bpp[i];
+ offset = ((y_offset + state->src_y) / ydiv) *
+ fb->pitches[i];
+ offset += ((x_offset + state->src_x + patched_src_h - 1) /
+ xdiv) * state->bpp[i];
+ state->xstride[i] = -(((patched_src_w - 1) / ydiv) *
+ fb->pitches[i]) -
+ (2 * state->bpp[i]);
+ state->pstride[i] = fb->pitches[i] - state->bpp[i];
break;
case BIT(DRM_ROTATE_0):
default:
- offset = ((y_offset + req->src_y) / ydiv) *
- req->fb->pitches[i];
- offset += ((x_offset + req->src_x) / xdiv) *
- req->bpp[i];
- req->xstride[i] = req->fb->pitches[i] -
+ offset = ((y_offset + state->src_y) / ydiv) *
+ fb->pitches[i];
+ offset += ((x_offset + state->src_x) / xdiv) *
+ state->bpp[i];
+ state->xstride[i] = fb->pitches[i] -
((patched_src_w / xdiv) *
- req->bpp[i]);
- req->pstride[i] = 0;
+ state->bpp[i]);
+ state->pstride[i] = 0;
break;
}
- req->offsets[i] = offset + req->fb->offsets[i];
+ state->offsets[i] = offset + fb->offsets[i];
}
- req->src_w = patched_src_w;
- req->src_h = patched_src_h;
- req->crtc_w = patched_crtc_w;
- req->crtc_h = patched_crtc_h;
+ state->src_w = patched_src_w;
+ state->src_h = patched_src_h;
+ state->crtc_w = patched_crtc_w;
+ state->crtc_h = patched_crtc_h;
- return atmel_hlcdc_plane_check_update_req(p, req, mode);
-}
+ if (!layout->size &&
+ (mode->hdisplay != state->crtc_w ||
+ mode->vdisplay != state->crtc_h))
+ return -EINVAL;
-int atmel_hlcdc_plane_apply_update_req(struct drm_plane *p,
- struct atmel_hlcdc_plane_update_req *req)
-{
- struct atmel_hlcdc_plane *plane = drm_plane_to_atmel_hlcdc_plane(p);
- int ret;
+ if (plane->layer.desc->max_height &&
+ state->crtc_h > plane->layer.desc->max_height)
+ return -EINVAL;
- ret = atmel_hlcdc_layer_update_start(&plane->layer);
- if (ret)
- return ret;
+ if (plane->layer.desc->max_width &&
+ state->crtc_w > plane->layer.desc->max_width)
+ return -EINVAL;
- atmel_hlcdc_plane_update_pos_and_size(plane, req);
- atmel_hlcdc_plane_update_general_settings(plane, req);
- atmel_hlcdc_plane_update_format(plane, req);
- atmel_hlcdc_plane_update_buffers(plane, req);
+ if ((state->crtc_h != state->src_h || state->crtc_w != state->src_w) &&
+ (!layout->memsize ||
+ atmel_hlcdc_format_embeds_alpha(state->base.fb->pixel_format)))
+ return -EINVAL;
- atmel_hlcdc_layer_update_commit(&plane->layer);
+ if (state->crtc_x < 0 || state->crtc_y < 0)
+ return -EINVAL;
+
+ if (state->crtc_w + state->crtc_x > mode->hdisplay ||
+ state->crtc_h + state->crtc_y > mode->vdisplay)
+ return -EINVAL;
return 0;
}
-int atmel_hlcdc_plane_update_with_mode(struct drm_plane *p,
- struct drm_crtc *crtc,
- struct drm_framebuffer *fb,
- int crtc_x, int crtc_y,
- unsigned int crtc_w,
- unsigned int crtc_h,
- uint32_t src_x, uint32_t src_y,
- uint32_t src_w, uint32_t src_h,
- const struct drm_display_mode *mode)
-{
- struct atmel_hlcdc_plane *plane = drm_plane_to_atmel_hlcdc_plane(p);
- struct atmel_hlcdc_plane_update_req req;
- int ret = 0;
-
- memset(&req, 0, sizeof(req));
- req.crtc_x = crtc_x;
- req.crtc_y = crtc_y;
- req.crtc_w = crtc_w;
- req.crtc_h = crtc_h;
- req.src_x = src_x;
- req.src_y = src_y;
- req.src_w = src_w;
- req.src_h = src_h;
- req.fb = fb;
-
- ret = atmel_hlcdc_plane_prepare_update_req(&plane->base, &req, mode);
- if (ret)
- return ret;
-
- if (!req.crtc_h || !req.crtc_w)
- return atmel_hlcdc_layer_disable(&plane->layer);
-
- return atmel_hlcdc_plane_apply_update_req(&plane->base, &req);
-}
-
-static int atmel_hlcdc_plane_update(struct drm_plane *p,
- struct drm_crtc *crtc,
- struct drm_framebuffer *fb,
- int crtc_x, int crtc_y,
- unsigned int crtc_w, unsigned int crtc_h,
- uint32_t src_x, uint32_t src_y,
- uint32_t src_w, uint32_t src_h)
-{
- return atmel_hlcdc_plane_update_with_mode(p, crtc, fb, crtc_x, crtc_y,
- crtc_w, crtc_h, src_x, src_y,
- src_w, src_h, &crtc->hwmode);
-}
-
-static int atmel_hlcdc_plane_disable(struct drm_plane *p)
+static int atmel_hlcdc_plane_prepare_fb(struct drm_plane *p,
+ struct drm_framebuffer *fb,
+ const struct drm_plane_state *new_state)
{
struct atmel_hlcdc_plane *plane = drm_plane_to_atmel_hlcdc_plane(p);
- return atmel_hlcdc_layer_disable(&plane->layer);
+ return atmel_hlcdc_layer_update_start(&plane->layer);
+}
+
+static void atmel_hlcdc_plane_atomic_update(struct drm_plane *p,
+ struct drm_plane_state *old_s)
+{
+ struct atmel_hlcdc_plane *plane = drm_plane_to_atmel_hlcdc_plane(p);
+ struct atmel_hlcdc_plane_state *state =
+ drm_plane_state_to_atmel_hlcdc_plane_state(p->state);
+
+ if (!p->state->crtc || !p->state->fb)
+ return;
+
+ atmel_hlcdc_plane_update_pos_and_size(plane, state);
+ atmel_hlcdc_plane_update_general_settings(plane, state);
+ atmel_hlcdc_plane_update_format(plane, state);
+ atmel_hlcdc_plane_update_buffers(plane, state);
+ atmel_hlcdc_plane_update_disc_area(plane, state);
+
+ atmel_hlcdc_layer_update_commit(&plane->layer);
+}
+
+static void atmel_hlcdc_plane_atomic_disable(struct drm_plane *p,
+ struct drm_plane_state *old_state)
+{
+ struct atmel_hlcdc_plane *plane = drm_plane_to_atmel_hlcdc_plane(p);
+
+ atmel_hlcdc_layer_disable(&plane->layer);
}
static void atmel_hlcdc_plane_destroy(struct drm_plane *p)
@@ -635,38 +760,36 @@
devm_kfree(p->dev->dev, plane);
}
-static int atmel_hlcdc_plane_set_alpha(struct atmel_hlcdc_plane *plane,
- u8 alpha)
-{
- atmel_hlcdc_layer_update_start(&plane->layer);
- atmel_hlcdc_layer_update_cfg(&plane->layer,
- plane->layer.desc->layout.general_config,
- ATMEL_HLCDC_LAYER_GA_MASK,
- alpha << ATMEL_HLCDC_LAYER_GA_SHIFT);
- atmel_hlcdc_layer_update_commit(&plane->layer);
-
- return 0;
-}
-
-static int atmel_hlcdc_plane_set_rotation(struct atmel_hlcdc_plane *plane,
- unsigned int rotation)
-{
- plane->rotation = rotation;
-
- return 0;
-}
-
-static int atmel_hlcdc_plane_set_property(struct drm_plane *p,
- struct drm_property *property,
- uint64_t value)
+static int atmel_hlcdc_plane_atomic_set_property(struct drm_plane *p,
+ struct drm_plane_state *s,
+ struct drm_property *property,
+ uint64_t val)
{
struct atmel_hlcdc_plane *plane = drm_plane_to_atmel_hlcdc_plane(p);
struct atmel_hlcdc_plane_properties *props = plane->properties;
+ struct atmel_hlcdc_plane_state *state =
+ drm_plane_state_to_atmel_hlcdc_plane_state(s);
if (property == props->alpha)
- atmel_hlcdc_plane_set_alpha(plane, value);
- else if (property == props->rotation)
- atmel_hlcdc_plane_set_rotation(plane, value);
+ state->alpha = val;
+ else
+ return -EINVAL;
+
+ return 0;
+}
+
+static int atmel_hlcdc_plane_atomic_get_property(struct drm_plane *p,
+ const struct drm_plane_state *s,
+ struct drm_property *property,
+ uint64_t *val)
+{
+ struct atmel_hlcdc_plane *plane = drm_plane_to_atmel_hlcdc_plane(p);
+ struct atmel_hlcdc_plane_properties *props = plane->properties;
+ const struct atmel_hlcdc_plane_state *state =
+ container_of(s, const struct atmel_hlcdc_plane_state, base);
+
+ if (property == props->alpha)
+ *val = state->alpha;
else
return -EINVAL;
@@ -694,8 +817,8 @@
if (desc->layout.xstride && desc->layout.pstride)
drm_object_attach_property(&plane->base.base,
- props->rotation,
- BIT(DRM_ROTATE_0));
+ plane->base.dev->mode_config.rotation_property,
+ BIT(DRM_ROTATE_0));
if (desc->layout.csc) {
/*
@@ -717,11 +840,76 @@
}
}
+static struct drm_plane_helper_funcs atmel_hlcdc_layer_plane_helper_funcs = {
+ .prepare_fb = atmel_hlcdc_plane_prepare_fb,
+ .atomic_check = atmel_hlcdc_plane_atomic_check,
+ .atomic_update = atmel_hlcdc_plane_atomic_update,
+ .atomic_disable = atmel_hlcdc_plane_atomic_disable,
+};
+
+static void atmel_hlcdc_plane_reset(struct drm_plane *p)
+{
+ struct atmel_hlcdc_plane_state *state;
+
+ if (p->state) {
+ state = drm_plane_state_to_atmel_hlcdc_plane_state(p->state);
+
+ if (state->base.fb)
+ drm_framebuffer_unreference(state->base.fb);
+
+ kfree(state);
+ p->state = NULL;
+ }
+
+ state = kzalloc(sizeof(*state), GFP_KERNEL);
+ if (state) {
+ state->alpha = 255;
+ p->state = &state->base;
+ p->state->plane = p;
+ }
+}
+
+static struct drm_plane_state *
+atmel_hlcdc_plane_atomic_duplicate_state(struct drm_plane *p)
+{
+ struct atmel_hlcdc_plane_state *state =
+ drm_plane_state_to_atmel_hlcdc_plane_state(p->state);
+ struct atmel_hlcdc_plane_state *copy;
+
+ copy = kmemdup(state, sizeof(*state), GFP_KERNEL);
+ if (!copy)
+ return NULL;
+
+ copy->disc_updated = false;
+
+ if (copy->base.fb)
+ drm_framebuffer_reference(copy->base.fb);
+
+ return &copy->base;
+}
+
+static void atmel_hlcdc_plane_atomic_destroy_state(struct drm_plane *plane,
+ struct drm_plane_state *s)
+{
+ struct atmel_hlcdc_plane_state *state =
+ drm_plane_state_to_atmel_hlcdc_plane_state(s);
+
+ if (s->fb)
+ drm_framebuffer_unreference(s->fb);
+
+ kfree(state);
+}
+
static struct drm_plane_funcs layer_plane_funcs = {
- .update_plane = atmel_hlcdc_plane_update,
- .disable_plane = atmel_hlcdc_plane_disable,
- .set_property = atmel_hlcdc_plane_set_property,
+ .update_plane = drm_atomic_helper_update_plane,
+ .disable_plane = drm_atomic_helper_disable_plane,
+ .set_property = drm_atomic_helper_plane_set_property,
.destroy = atmel_hlcdc_plane_destroy,
+ .reset = atmel_hlcdc_plane_reset,
+ .atomic_duplicate_state = atmel_hlcdc_plane_atomic_duplicate_state,
+ .atomic_destroy_state = atmel_hlcdc_plane_atomic_destroy_state,
+ .atomic_set_property = atmel_hlcdc_plane_atomic_set_property,
+ .atomic_get_property = atmel_hlcdc_plane_atomic_get_property,
};
static struct atmel_hlcdc_plane *
@@ -755,6 +943,9 @@
if (ret)
return ERR_PTR(ret);
+ drm_plane_helper_add(&plane->base,
+ &atmel_hlcdc_layer_plane_helper_funcs);
+
/* Set default property values*/
atmel_hlcdc_plane_init_properties(plane, desc, props);
@@ -774,12 +965,13 @@
if (!props->alpha)
return ERR_PTR(-ENOMEM);
- props->rotation = drm_mode_create_rotation_property(dev,
- BIT(DRM_ROTATE_0) |
- BIT(DRM_ROTATE_90) |
- BIT(DRM_ROTATE_180) |
- BIT(DRM_ROTATE_270));
- if (!props->rotation)
+ dev->mode_config.rotation_property =
+ drm_mode_create_rotation_property(dev,
+ BIT(DRM_ROTATE_0) |
+ BIT(DRM_ROTATE_90) |
+ BIT(DRM_ROTATE_180) |
+ BIT(DRM_ROTATE_270));
+ if (!dev->mode_config.rotation_property)
return ERR_PTR(-ENOMEM);
return props;
diff --git a/drivers/gpu/drm/bochs/bochs_hw.c b/drivers/gpu/drm/bochs/bochs_hw.c
index 4603897..a39b034 100644
--- a/drivers/gpu/drm/bochs/bochs_hw.c
+++ b/drivers/gpu/drm/bochs/bochs_hw.c
@@ -164,6 +164,7 @@
bochs_vga_writeb(bochs, 0x3c0, 0x20); /* unblank */
+ bochs_dispi_write(bochs, VBE_DISPI_INDEX_ENABLE, 0);
bochs_dispi_write(bochs, VBE_DISPI_INDEX_BPP, bochs->bpp);
bochs_dispi_write(bochs, VBE_DISPI_INDEX_XRES, bochs->xres);
bochs_dispi_write(bochs, VBE_DISPI_INDEX_YRES, bochs->yres);
diff --git a/drivers/gpu/drm/bridge/Kconfig b/drivers/gpu/drm/bridge/Kconfig
index f38bbcd..acef322 100644
--- a/drivers/gpu/drm/bridge/Kconfig
+++ b/drivers/gpu/drm/bridge/Kconfig
@@ -11,3 +11,14 @@
select DRM_PANEL
---help---
ptn3460 eDP-LVDS bridge chip driver.
+
+config DRM_PS8622
+ tristate "Parade eDP/LVDS bridge"
+ depends on DRM
+ depends on OF
+ select DRM_PANEL
+ select DRM_KMS_HELPER
+ select BACKLIGHT_LCD_SUPPORT
+ select BACKLIGHT_CLASS_DEVICE
+ ---help---
+ Parade eDP-LVDS bridge chip driver.
diff --git a/drivers/gpu/drm/bridge/Makefile b/drivers/gpu/drm/bridge/Makefile
index d8a8cfd..8dfebd9 100644
--- a/drivers/gpu/drm/bridge/Makefile
+++ b/drivers/gpu/drm/bridge/Makefile
@@ -1,4 +1,5 @@
ccflags-y := -Iinclude/drm
+obj-$(CONFIG_DRM_PS8622) += ps8622.o
obj-$(CONFIG_DRM_PTN3460) += ptn3460.o
obj-$(CONFIG_DRM_DW_HDMI) += dw_hdmi.o
diff --git a/drivers/gpu/drm/bridge/dw_hdmi.c b/drivers/gpu/drm/bridge/dw_hdmi.c
index cd6a706..49cafb6 100644
--- a/drivers/gpu/drm/bridge/dw_hdmi.c
+++ b/drivers/gpu/drm/bridge/dw_hdmi.c
@@ -16,6 +16,7 @@
#include <linux/err.h>
#include <linux/clk.h>
#include <linux/hdmi.h>
+#include <linux/mutex.h>
#include <linux/of_device.h>
#include <drm/drm_of.h>
@@ -126,6 +127,7 @@
struct i2c_adapter *ddc;
void __iomem *regs;
+ struct mutex audio_mutex;
unsigned int sample_rate;
int ratio;
@@ -177,26 +179,23 @@
hdmi_modb(hdmi, data << shift, mask, reg);
}
-static void hdmi_set_clock_regenerator_n(struct dw_hdmi *hdmi,
- unsigned int value)
-{
- hdmi_writeb(hdmi, value & 0xff, HDMI_AUD_N1);
- hdmi_writeb(hdmi, (value >> 8) & 0xff, HDMI_AUD_N2);
- hdmi_writeb(hdmi, (value >> 16) & 0x0f, HDMI_AUD_N3);
-
- /* nshift factor = 0 */
- hdmi_modb(hdmi, 0, HDMI_AUD_CTS3_N_SHIFT_MASK, HDMI_AUD_CTS3);
-}
-
-static void hdmi_regenerate_cts(struct dw_hdmi *hdmi, unsigned int cts)
+static void hdmi_set_cts_n(struct dw_hdmi *hdmi, unsigned int cts,
+ unsigned int n)
{
/* Must be set/cleared first */
hdmi_modb(hdmi, 0, HDMI_AUD_CTS3_CTS_MANUAL, HDMI_AUD_CTS3);
- hdmi_writeb(hdmi, cts & 0xff, HDMI_AUD_CTS1);
- hdmi_writeb(hdmi, (cts >> 8) & 0xff, HDMI_AUD_CTS2);
+ /* nshift factor = 0 */
+ hdmi_modb(hdmi, 0, HDMI_AUD_CTS3_N_SHIFT_MASK, HDMI_AUD_CTS3);
+
hdmi_writeb(hdmi, ((cts >> 16) & HDMI_AUD_CTS3_AUDCTS19_16_MASK) |
HDMI_AUD_CTS3_CTS_MANUAL, HDMI_AUD_CTS3);
+ hdmi_writeb(hdmi, (cts >> 8) & 0xff, HDMI_AUD_CTS2);
+ hdmi_writeb(hdmi, cts & 0xff, HDMI_AUD_CTS1);
+
+ hdmi_writeb(hdmi, (n >> 16) & 0x0f, HDMI_AUD_N3);
+ hdmi_writeb(hdmi, (n >> 8) & 0xff, HDMI_AUD_N2);
+ hdmi_writeb(hdmi, n & 0xff, HDMI_AUD_N1);
}
static unsigned int hdmi_compute_n(unsigned int freq, unsigned long pixel_clk,
@@ -355,18 +354,21 @@
__func__, hdmi->sample_rate, hdmi->ratio,
pixel_clk, clk_n, clk_cts);
- hdmi_set_clock_regenerator_n(hdmi, clk_n);
- hdmi_regenerate_cts(hdmi, clk_cts);
+ hdmi_set_cts_n(hdmi, clk_cts, clk_n);
}
static void hdmi_init_clk_regenerator(struct dw_hdmi *hdmi)
{
+ mutex_lock(&hdmi->audio_mutex);
hdmi_set_clk_regenerator(hdmi, 74250000);
+ mutex_unlock(&hdmi->audio_mutex);
}
static void hdmi_clk_regenerator_update_pixel_clock(struct dw_hdmi *hdmi)
{
+ mutex_lock(&hdmi->audio_mutex);
hdmi_set_clk_regenerator(hdmi, hdmi->hdmi_data.video_mode.mpixelclock);
+ mutex_unlock(&hdmi->audio_mutex);
}
/*
@@ -753,10 +755,10 @@
{
unsigned res_idx, i;
u8 val, msec;
- const struct dw_hdmi_mpll_config *mpll_config =
- hdmi->plat_data->mpll_cfg;
- const struct dw_hdmi_curr_ctrl *curr_ctrl = hdmi->plat_data->cur_ctr;
- const struct dw_hdmi_sym_term *sym_term = hdmi->plat_data->sym_term;
+ const struct dw_hdmi_plat_data *plat_data = hdmi->plat_data;
+ const struct dw_hdmi_mpll_config *mpll_config = plat_data->mpll_cfg;
+ const struct dw_hdmi_curr_ctrl *curr_ctrl = plat_data->cur_ctr;
+ const struct dw_hdmi_phy_config *phy_config = plat_data->phy_config;
if (prep)
return -EINVAL;
@@ -827,18 +829,18 @@
hdmi_phy_i2c_write(hdmi, 0x0000, 0x13); /* PLLPHBYCTRL */
hdmi_phy_i2c_write(hdmi, 0x0006, 0x17);
- for (i = 0; sym_term[i].mpixelclock != (~0UL); i++)
+ for (i = 0; phy_config[i].mpixelclock != (~0UL); i++)
if (hdmi->hdmi_data.video_mode.mpixelclock <=
- sym_term[i].mpixelclock)
+ phy_config[i].mpixelclock)
break;
/* RESISTANCE TERM 133Ohm Cfg */
- hdmi_phy_i2c_write(hdmi, sym_term[i].term, 0x19); /* TXTERM */
+ hdmi_phy_i2c_write(hdmi, phy_config[i].term, 0x19); /* TXTERM */
/* PREEMP Cgf 0.00 */
- hdmi_phy_i2c_write(hdmi, sym_term[i].sym_ctr, 0x09); /* CKSYMTXCTRL */
-
+ hdmi_phy_i2c_write(hdmi, phy_config[i].sym_ctr, 0x09); /* CKSYMTXCTRL */
/* TX/CK LVL 10 */
- hdmi_phy_i2c_write(hdmi, 0x01ad, 0x0E); /* VLEVCTRL */
+ hdmi_phy_i2c_write(hdmi, phy_config[i].vlev_ctr, 0x0E); /* VLEVCTRL */
+
/* REMOVE CLK TERM */
hdmi_phy_i2c_write(hdmi, 0x8000, 0x05); /* CKCALCTRL */
@@ -1569,6 +1571,8 @@
hdmi->ratio = 100;
hdmi->encoder = encoder;
+ mutex_init(&hdmi->audio_mutex);
+
of_property_read_u32(np, "reg-io-width", &val);
switch (val) {
diff --git a/drivers/gpu/drm/bridge/ps8622.c b/drivers/gpu/drm/bridge/ps8622.c
new file mode 100644
index 0000000..e895aa7
--- /dev/null
+++ b/drivers/gpu/drm/bridge/ps8622.c
@@ -0,0 +1,684 @@
+/*
+ * Parade PS8622 eDP/LVDS bridge driver
+ *
+ * Copyright (C) 2014 Google, Inc.
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/backlight.h>
+#include <linux/delay.h>
+#include <linux/err.h>
+#include <linux/fb.h>
+#include <linux/gpio.h>
+#include <linux/i2c.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/of_device.h>
+#include <linux/of_graph.h>
+#include <linux/pm.h>
+#include <linux/regulator/consumer.h>
+
+#include <drm/drm_panel.h>
+
+#include "drmP.h"
+#include "drm_crtc.h"
+#include "drm_crtc_helper.h"
+
+/* Brightness scale on the Parade chip */
+#define PS8622_MAX_BRIGHTNESS 0xff
+
+/* Timings taken from the version 1.7 datasheet for the PS8622/PS8625 */
+#define PS8622_POWER_RISE_T1_MIN_US 10
+#define PS8622_POWER_RISE_T1_MAX_US 10000
+#define PS8622_RST_HIGH_T2_MIN_US 3000
+#define PS8622_RST_HIGH_T2_MAX_US 30000
+#define PS8622_PWMO_END_T12_MS 200
+#define PS8622_POWER_FALL_T16_MAX_US 10000
+#define PS8622_POWER_OFF_T17_MS 500
+
+#if ((PS8622_RST_HIGH_T2_MIN_US + PS8622_POWER_RISE_T1_MAX_US) > \
+ (PS8622_RST_HIGH_T2_MAX_US + PS8622_POWER_RISE_T1_MIN_US))
+#error "T2.min + T1.max must be less than T2.max + T1.min"
+#endif
+
+struct ps8622_bridge {
+ struct drm_connector connector;
+ struct i2c_client *client;
+ struct drm_bridge bridge;
+ struct drm_panel *panel;
+ struct regulator *v12;
+ struct backlight_device *bl;
+
+ struct gpio_desc *gpio_slp;
+ struct gpio_desc *gpio_rst;
+
+ u32 max_lane_count;
+ u32 lane_count;
+
+ bool enabled;
+};
+
+static inline struct ps8622_bridge *
+ bridge_to_ps8622(struct drm_bridge *bridge)
+{
+ return container_of(bridge, struct ps8622_bridge, bridge);
+}
+
+static inline struct ps8622_bridge *
+ connector_to_ps8622(struct drm_connector *connector)
+{
+ return container_of(connector, struct ps8622_bridge, connector);
+}
+
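+/*
+ * Judging from the addressing below, the PS8622 register map is split into
+ * pages, each reachable at a consecutive I2C slave address (client->addr +
+ * page); a register write is then a plain two-byte {reg, val} transfer.
+ */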
+static int ps8622_set(struct i2c_client *client, u8 page, u8 reg, u8 val)
+{
+ int ret;
+ struct i2c_adapter *adap = client->adapter;
+ struct i2c_msg msg;
+ u8 data[] = {reg, val};
+
+ msg.addr = client->addr + page;
+ msg.flags = 0;
+ msg.len = sizeof(data);
+ msg.buf = data;
+
+ ret = i2c_transfer(adap, &msg, 1);
+ if (ret != 1)
+ pr_warn("PS8622 I2C write (0x%02x,0x%02x,0x%02x) failed: %d\n",
+ client->addr + page, reg, val, ret);
+ return !(ret == 1);
+}
+
+static int ps8622_send_config(struct ps8622_bridge *ps8622)
+{
+ struct i2c_client *cl = ps8622->client;
+ int err = 0;
+
+ /* HPD low */
+ err = ps8622_set(cl, 0x02, 0xa1, 0x01);
+ if (err)
+ goto error;
+
+ /* SW setting: [1:0] SW output 1.2V voltage is lower to 96% */
+ err = ps8622_set(cl, 0x04, 0x14, 0x01);
+ if (err)
+ goto error;
+
+ /* RCO SS setting: [5:4] = b01 0.5%, b10 1%, b11 1.5% */
+ err = ps8622_set(cl, 0x04, 0xe3, 0x20);
+ if (err)
+ goto error;
+
+ /* [7] RCO SS enable */
+ err = ps8622_set(cl, 0x04, 0xe2, 0x80);
+ if (err)
+ goto error;
+
+ /* RPHY Setting
+ * [3:2] CDR tune wait cycle before measure for fine tune
+ * b00: 1us b01: 0.5us b10:2us, b11: 4us
+ */
+ err = ps8622_set(cl, 0x04, 0x8a, 0x0c);
+ if (err)
+ goto error;
+
+ /* [3] RFD always on */
+ err = ps8622_set(cl, 0x04, 0x89, 0x08);
+ if (err)
+ goto error;
+
+ /* CTN lock in/out: 20000ppm/80000ppm. Lock out 2 times. */
+ err = ps8622_set(cl, 0x04, 0x71, 0x2d);
+ if (err)
+ goto error;
+
+ /* 2.7G CDR settings: NOF=40LSB for HBR CDR setting */
+ err = ps8622_set(cl, 0x04, 0x7d, 0x07);
+ if (err)
+ goto error;
+
+ /* [1:0] Fmin=+4bands */
+ err = ps8622_set(cl, 0x04, 0x7b, 0x00);
+ if (err)
+ goto error;
+
+ /* [7:5] DCO_FTRNG=+-40% */
+ err = ps8622_set(cl, 0x04, 0x7a, 0xfd);
+ if (err)
+ goto error;
+
+ /* 1.62G CDR settings: [5:2]NOF=64LSB [1:0]DCO scale is 2/5 */
+ err = ps8622_set(cl, 0x04, 0xc0, 0x12);
+ if (err)
+ goto error;
+
+ /* Gitune=-37% */
+ err = ps8622_set(cl, 0x04, 0xc1, 0x92);
+ if (err)
+ goto error;
+
+ /* Fbstep=100% */
+ err = ps8622_set(cl, 0x04, 0xc2, 0x1c);
+ if (err)
+ goto error;
+
+ /* [7] LOS signal disable */
+ err = ps8622_set(cl, 0x04, 0x32, 0x80);
+ if (err)
+ goto error;
+
+ /* RPIO Setting: [7:4] LVDS driver bias current : 75% (250mV swing) */
+ err = ps8622_set(cl, 0x04, 0x00, 0xb0);
+ if (err)
+ goto error;
+
+ /* [7:6] Right-bar GPIO output strength is 8mA */
+ err = ps8622_set(cl, 0x04, 0x15, 0x40);
+ if (err)
+ goto error;
+
+ /* EQ Training State Machine Setting, RCO calibration start */
+ err = ps8622_set(cl, 0x04, 0x54, 0x10);
+ if (err)
+ goto error;
+
+ /* Logic, needs more than 10 I2C commands */
+ /* [4:0] MAX_LANE_COUNT set to max supported lanes */
+ err = ps8622_set(cl, 0x01, 0x02, 0x80 | ps8622->max_lane_count);
+ if (err)
+ goto error;
+
+ /* [4:0] LANE_COUNT_SET set to chosen lane count */
+ err = ps8622_set(cl, 0x01, 0x21, 0x80 | ps8622->lane_count);
+ if (err)
+ goto error;
+
+ err = ps8622_set(cl, 0x00, 0x52, 0x20);
+ if (err)
+ goto error;
+
+ /* HPD CP toggle enable */
+ err = ps8622_set(cl, 0x00, 0xf1, 0x03);
+ if (err)
+ goto error;
+
+ err = ps8622_set(cl, 0x00, 0x62, 0x41);
+ if (err)
+ goto error;
+
+ /* Counter number, add 1ms counter delay */
+ err = ps8622_set(cl, 0x00, 0xf6, 0x01);
+ if (err)
+ goto error;
+
+ /* [6]PWM function control by DPCD0040f[7], default is PWM block */
+ err = ps8622_set(cl, 0x00, 0x77, 0x06);
+ if (err)
+ goto error;
+
+ /* 04h Adjust VTotal tolerance to fix the 30Hz no display issue */
+ err = ps8622_set(cl, 0x00, 0x4c, 0x04);
+ if (err)
+ goto error;
+
+ /* DPCD00400='h00, Parade OUI ='h001cf8 */
+ err = ps8622_set(cl, 0x01, 0xc0, 0x00);
+ if (err)
+ goto error;
+
+ /* DPCD00401='h1c */
+ err = ps8622_set(cl, 0x01, 0xc1, 0x1c);
+ if (err)
+ goto error;
+
+ /* DPCD00402='hf8 */
+ err = ps8622_set(cl, 0x01, 0xc2, 0xf8);
+ if (err)
+ goto error;
+
+ /* DPCD403~408 = ASCII code, D2SLV5='h4432534c5635 */
+ err = ps8622_set(cl, 0x01, 0xc3, 0x44);
+ if (err)
+ goto error;
+
+ /* DPCD404 */
+ err = ps8622_set(cl, 0x01, 0xc4, 0x32);
+ if (err)
+ goto error;
+
+ /* DPCD405 */
+ err = ps8622_set(cl, 0x01, 0xc5, 0x53);
+ if (err)
+ goto error;
+
+ /* DPCD406 */
+ err = ps8622_set(cl, 0x01, 0xc6, 0x4c);
+ if (err)
+ goto error;
+
+ /* DPCD407 */
+ err = ps8622_set(cl, 0x01, 0xc7, 0x56);
+ if (err)
+ goto error;
+
+ /* DPCD408 */
+ err = ps8622_set(cl, 0x01, 0xc8, 0x35);
+ if (err)
+ goto error;
+
+ /* DPCD40A, Initial Code major revision '01' */
+ err = ps8622_set(cl, 0x01, 0xca, 0x01);
+ if (err)
+ goto error;
+
+ /* DPCD40B, Initial Code minor revision '05' */
+ err = ps8622_set(cl, 0x01, 0xcb, 0x05);
+ if (err)
+ goto error;
+
+
+ if (ps8622->bl) {
+ /* DPCD720, internal PWM */
+ err = ps8622_set(cl, 0x01, 0xa5, 0xa0);
+ if (err)
+ goto error;
+
+ /* FFh for 100% brightness, 0h for 0% brightness */
+ err = ps8622_set(cl, 0x01, 0xa7,
+ ps8622->bl->props.brightness);
+ if (err)
+ goto error;
+ } else {
+ /* DPCD720, external PWM */
+ err = ps8622_set(cl, 0x01, 0xa5, 0x80);
+ if (err)
+ goto error;
+ }
+
+ /* Set LVDS output as 6bit-VESA mapping, single LVDS channel */
+ err = ps8622_set(cl, 0x01, 0xcc, 0x13);
+ if (err)
+ goto error;
+
+ /* Enable SSC set by register */
+ err = ps8622_set(cl, 0x02, 0xb1, 0x20);
+ if (err)
+ goto error;
+
+ /* Set SSC enabled and +/-1% central spreading */
+ err = ps8622_set(cl, 0x04, 0x10, 0x16);
+ if (err)
+ goto error;
+
+ /* Logic end */
+ /* MPU Clock source: LC => RCO */
+ err = ps8622_set(cl, 0x04, 0x59, 0x60);
+ if (err)
+ goto error;
+
+ /* LC -> RCO */
+ err = ps8622_set(cl, 0x04, 0x54, 0x14);
+ if (err)
+ goto error;
+
+ /* HPD high */
+ err = ps8622_set(cl, 0x02, 0xa1, 0x91);
+
+error:
+ return err ? -EIO : 0;
+}
+
+static int ps8622_backlight_update(struct backlight_device *bl)
+{
+ struct ps8622_bridge *ps8622 = dev_get_drvdata(&bl->dev);
+ int ret, brightness = bl->props.brightness;
+
+ if (bl->props.power != FB_BLANK_UNBLANK ||
+ bl->props.state & (BL_CORE_SUSPENDED | BL_CORE_FBBLANK))
+ brightness = 0;
+
+ if (!ps8622->enabled)
+ return -EINVAL;
+
+ ret = ps8622_set(ps8622->client, 0x01, 0xa7, brightness);
+
+ return ret;
+}
+
+static const struct backlight_ops ps8622_backlight_ops = {
+ .update_status = ps8622_backlight_update,
+};
+
+static void ps8622_pre_enable(struct drm_bridge *bridge)
+{
+ struct ps8622_bridge *ps8622 = bridge_to_ps8622(bridge);
+ int ret;
+
+ if (ps8622->enabled)
+ return;
+
+ gpiod_set_value(ps8622->gpio_rst, 0);
+
+ if (ps8622->v12) {
+ ret = regulator_enable(ps8622->v12);
+ if (ret)
+ DRM_ERROR("fails to enable ps8622->v12");
+ }
+
+ if (drm_panel_prepare(ps8622->panel)) {
+ DRM_ERROR("failed to prepare panel\n");
+ return;
+ }
+
+ gpiod_set_value(ps8622->gpio_slp, 1);
+
+ /*
+ * T1 is the range of time that it takes for the power to rise after we
+ * enable the lcd/ps8622 fet. T2 is the range of time in which the
+ * data sheet specifies we should deassert the reset pin.
+ *
+ * If it takes T1.max for the power to rise, we need to wait at least
+ * T2.min before deasserting the reset pin. If it takes T1.min for the
+ * power to rise, we need to wait at most T2.max before deasserting the
+ * reset pin.
+ */
+ usleep_range(PS8622_RST_HIGH_T2_MIN_US + PS8622_POWER_RISE_T1_MAX_US,
+ PS8622_RST_HIGH_T2_MAX_US + PS8622_POWER_RISE_T1_MIN_US);
+
+ gpiod_set_value(ps8622->gpio_rst, 1);
+
+ /* wait 20ms after RST high */
+ usleep_range(20000, 30000);
+
+ ret = ps8622_send_config(ps8622);
+ if (ret) {
+ DRM_ERROR("Failed to send config to bridge (%d)\n", ret);
+ return;
+ }
+
+ ps8622->enabled = true;
+}
+
+static void ps8622_enable(struct drm_bridge *bridge)
+{
+ struct ps8622_bridge *ps8622 = bridge_to_ps8622(bridge);
+
+ if (drm_panel_enable(ps8622->panel)) {
+ DRM_ERROR("failed to enable panel\n");
+ return;
+ }
+}
+
+static void ps8622_disable(struct drm_bridge *bridge)
+{
+ struct ps8622_bridge *ps8622 = bridge_to_ps8622(bridge);
+
+ if (drm_panel_disable(ps8622->panel)) {
+ DRM_ERROR("failed to disable panel\n");
+ return;
+ }
+ msleep(PS8622_PWMO_END_T12_MS);
+}
+
+static void ps8622_post_disable(struct drm_bridge *bridge)
+{
+ struct ps8622_bridge *ps8622 = bridge_to_ps8622(bridge);
+
+ if (!ps8622->enabled)
+ return;
+
+ ps8622->enabled = false;
+
+ /*
+ * This doesn't matter if the regulators are turned off, but something
+ * else might keep them on. In that case, we want to assert the slp gpio
+ * to lower power.
+ */
+ gpiod_set_value(ps8622->gpio_slp, 0);
+
+ if (drm_panel_unprepare(ps8622->panel)) {
+ DRM_ERROR("failed to unprepare panel\n");
+ return;
+ }
+
+ if (ps8622->v12)
+ regulator_disable(ps8622->v12);
+
+ /*
+ * Sleep for at least the amount of time it takes the power rail to
+ * fall, so that asserting the rst gpio afterwards has no effect.
+ */
+ usleep_range(PS8622_POWER_FALL_T16_MAX_US,
+ 2 * PS8622_POWER_FALL_T16_MAX_US);
+ gpiod_set_value(ps8622->gpio_rst, 0);
+
+ msleep(PS8622_POWER_OFF_T17_MS);
+}
+
+static int ps8622_get_modes(struct drm_connector *connector)
+{
+ struct ps8622_bridge *ps8622;
+
+ ps8622 = connector_to_ps8622(connector);
+
+ return drm_panel_get_modes(ps8622->panel);
+}
+
+static struct drm_encoder *ps8622_best_encoder(struct drm_connector *connector)
+{
+ struct ps8622_bridge *ps8622;
+
+ ps8622 = connector_to_ps8622(connector);
+
+ return ps8622->bridge.encoder;
+}
+
+static const struct drm_connector_helper_funcs ps8622_connector_helper_funcs = {
+ .get_modes = ps8622_get_modes,
+ .best_encoder = ps8622_best_encoder,
+};
+
+static enum drm_connector_status ps8622_detect(struct drm_connector *connector,
+ bool force)
+{
+ return connector_status_connected;
+}
+
+static void ps8622_connector_destroy(struct drm_connector *connector)
+{
+ drm_connector_cleanup(connector);
+}
+
+static const struct drm_connector_funcs ps8622_connector_funcs = {
+ .dpms = drm_helper_connector_dpms,
+ .fill_modes = drm_helper_probe_single_connector_modes,
+ .detect = ps8622_detect,
+ .destroy = ps8622_connector_destroy,
+};
+
+static int ps8622_attach(struct drm_bridge *bridge)
+{
+ struct ps8622_bridge *ps8622 = bridge_to_ps8622(bridge);
+ int ret;
+
+ if (!bridge->encoder) {
+ DRM_ERROR("Parent encoder object not found");
+ return -ENODEV;
+ }
+
+ ps8622->connector.polled = DRM_CONNECTOR_POLL_HPD;
+ ret = drm_connector_init(bridge->dev, &ps8622->connector,
+ &ps8622_connector_funcs, DRM_MODE_CONNECTOR_LVDS);
+ if (ret) {
+ DRM_ERROR("Failed to initialize connector with drm\n");
+ return ret;
+ }
+ drm_connector_helper_add(&ps8622->connector,
+ &ps8622_connector_helper_funcs);
+ drm_connector_register(&ps8622->connector);
+ drm_mode_connector_attach_encoder(&ps8622->connector,
+ bridge->encoder);
+
+ if (ps8622->panel)
+ drm_panel_attach(ps8622->panel, &ps8622->connector);
+
+ drm_helper_hpd_irq_event(ps8622->connector.dev);
+
+ return ret;
+}
+
+static const struct drm_bridge_funcs ps8622_bridge_funcs = {
+ .pre_enable = ps8622_pre_enable,
+ .enable = ps8622_enable,
+ .disable = ps8622_disable,
+ .post_disable = ps8622_post_disable,
+ .attach = ps8622_attach,
+};
+
+static const struct of_device_id ps8622_devices[] = {
+ {.compatible = "parade,ps8622",},
+ {.compatible = "parade,ps8625",},
+ {}
+};
+MODULE_DEVICE_TABLE(of, ps8622_devices);
+
+static int ps8622_probe(struct i2c_client *client,
+ const struct i2c_device_id *id)
+{
+ struct device *dev = &client->dev;
+ struct device_node *endpoint, *panel_node;
+ struct ps8622_bridge *ps8622;
+ int ret;
+
+ ps8622 = devm_kzalloc(dev, sizeof(*ps8622), GFP_KERNEL);
+ if (!ps8622)
+ return -ENOMEM;
+
+ endpoint = of_graph_get_next_endpoint(dev->of_node, NULL);
+ if (endpoint) {
+ panel_node = of_graph_get_remote_port_parent(endpoint);
+ if (panel_node) {
+ ps8622->panel = of_drm_find_panel(panel_node);
+ of_node_put(panel_node);
+ if (!ps8622->panel)
+ return -EPROBE_DEFER;
+ }
+ }
+
+ ps8622->client = client;
+
+ ps8622->v12 = devm_regulator_get(dev, "vdd12");
+ if (IS_ERR(ps8622->v12)) {
+ dev_info(dev, "no 1.2v regulator found for PS8622\n");
+ ps8622->v12 = NULL;
+ }
+
+ ps8622->gpio_slp = devm_gpiod_get(dev, "sleep");
+ if (IS_ERR(ps8622->gpio_slp)) {
+ ret = PTR_ERR(ps8622->gpio_slp);
+ dev_err(dev, "cannot get gpio_slp %d\n", ret);
+ return ret;
+ }
+ ret = gpiod_direction_output(ps8622->gpio_slp, 1);
+ if (ret) {
+ dev_err(dev, "cannot configure gpio_slp\n");
+ return ret;
+ }
+
+ ps8622->gpio_rst = devm_gpiod_get(dev, "reset");
+ if (IS_ERR(ps8622->gpio_rst)) {
+ ret = PTR_ERR(ps8622->gpio_rst);
+ dev_err(dev, "cannot get gpio_rst %d\n", ret);
+ return ret;
+ }
+ /*
+ * Assert the reset pin high to avoid the bridge being
+ * initialized prematurely
+ */
+ ret = gpiod_direction_output(ps8622->gpio_rst, 1);
+ if (ret) {
+ dev_err(dev, "cannot configure gpio_rst\n");
+ return ret;
+ }
+
+ ps8622->max_lane_count = id->driver_data;
+
+ if (of_property_read_u32(dev->of_node, "lane-count",
+ &ps8622->lane_count)) {
+ ps8622->lane_count = ps8622->max_lane_count;
+ } else if (ps8622->lane_count > ps8622->max_lane_count) {
+ dev_info(dev, "lane-count property is too high,"
+ "using max_lane_count\n");
+ ps8622->lane_count = ps8622->max_lane_count;
+ }
+
+ if (!of_find_property(dev->of_node, "use-external-pwm", NULL)) {
+ ps8622->bl = backlight_device_register("ps8622-backlight",
+ dev, ps8622, &ps8622_backlight_ops,
+ NULL);
+ if (IS_ERR(ps8622->bl)) {
+ DRM_ERROR("failed to register backlight\n");
+ ret = PTR_ERR(ps8622->bl);
+ ps8622->bl = NULL;
+ return ret;
+ }
+ ps8622->bl->props.max_brightness = PS8622_MAX_BRIGHTNESS;
+ ps8622->bl->props.brightness = PS8622_MAX_BRIGHTNESS;
+ }
+
+ ps8622->bridge.funcs = &ps8622_bridge_funcs;
+ ps8622->bridge.of_node = dev->of_node;
+ ret = drm_bridge_add(&ps8622->bridge);
+ if (ret) {
+ DRM_ERROR("Failed to add bridge\n");
+ return ret;
+ }
+
+ i2c_set_clientdata(client, ps8622);
+
+ return 0;
+}
+
+static int ps8622_remove(struct i2c_client *client)
+{
+ struct ps8622_bridge *ps8622 = i2c_get_clientdata(client);
+
+ if (ps8622->bl)
+ backlight_device_unregister(ps8622->bl);
+
+ drm_bridge_remove(&ps8622->bridge);
+
+ return 0;
+}
+
+static const struct i2c_device_id ps8622_i2c_table[] = {
+ /* Device type, max_lane_count */
+ {"ps8622", 1},
+ {"ps8625", 2},
+ {},
+};
+MODULE_DEVICE_TABLE(i2c, ps8622_i2c_table);
+
+static struct i2c_driver ps8622_driver = {
+ .id_table = ps8622_i2c_table,
+ .probe = ps8622_probe,
+ .remove = ps8622_remove,
+ .driver = {
+ .name = "ps8622",
+ .owner = THIS_MODULE,
+ .of_match_table = ps8622_devices,
+ },
+};
+module_i2c_driver(ps8622_driver);
+
+MODULE_AUTHOR("Vincent Palatin <vpalatin@chromium.org>");
+MODULE_DESCRIPTION("Parade ps8622/ps8625 eDP-LVDS converter driver");
+MODULE_LICENSE("GPL v2");
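A side note on ps8622_set(), which every step in ps8622_send_config() above goes through: it is defined earlier in this file, and judging by its (client, page, register, value) arguments it performs a single I2C write with the page selecting the slave-address bank. A rough, hypothetical sketch of such a helper follows; the addressing scheme and error handling are assumptions for illustration, not taken from this patch (assumes <linux/i2c.h> is already included, as it is in the driver).

/*
 * Illustrative sketch only, not part of this patch. The real ps8622_set()
 * lives earlier in ps8622.c; this just shows the kind of one-message I2C
 * write the config sequence above relies on.
 */
static int ps8622_set_sketch(struct i2c_client *client, u8 page, u8 reg, u8 val)
{
	u8 data[] = { reg, val };
	struct i2c_msg msg = {
		.addr	= client->addr + page,	/* assumed page-to-address mapping */
		.flags	= 0,			/* plain write */
		.len	= sizeof(data),
		.buf	= data,
	};

	/* one message requested, so anything other than 1 is a failure */
	return i2c_transfer(client->adapter, &msg, 1) == 1 ? 0 : -EIO;
}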
diff --git a/drivers/gpu/drm/bridge/ptn3460.c b/drivers/gpu/drm/bridge/ptn3460.c
index 826833e..9d2f053 100644
--- a/drivers/gpu/drm/bridge/ptn3460.c
+++ b/drivers/gpu/drm/bridge/ptn3460.c
@@ -265,7 +265,7 @@
.destroy = ptn3460_connector_destroy,
};
-int ptn3460_bridge_attach(struct drm_bridge *bridge)
+static int ptn3460_bridge_attach(struct drm_bridge *bridge)
{
struct ptn3460_bridge *ptn_bridge = bridge_to_ptn3460(bridge);
int ret;
diff --git a/drivers/gpu/drm/drm_atomic.c b/drivers/gpu/drm/drm_atomic.c
index c2e9c52..6e3b78e 100644
--- a/drivers/gpu/drm/drm_atomic.c
+++ b/drivers/gpu/drm/drm_atomic.c
@@ -92,7 +92,7 @@
state->dev = dev;
- DRM_DEBUG_KMS("Allocate atomic state %p\n", state);
+ DRM_DEBUG_ATOMIC("Allocate atomic state %p\n", state);
return state;
fail:
@@ -122,7 +122,7 @@
struct drm_mode_config *config = &dev->mode_config;
int i;
- DRM_DEBUG_KMS("Clearing atomic state %p\n", state);
+ DRM_DEBUG_ATOMIC("Clearing atomic state %p\n", state);
for (i = 0; i < state->num_connector; i++) {
struct drm_connector *connector = state->connectors[i];
@@ -134,6 +134,7 @@
connector->funcs->atomic_destroy_state(connector,
state->connector_states[i]);
+ state->connectors[i] = NULL;
state->connector_states[i] = NULL;
}
@@ -145,6 +146,7 @@
crtc->funcs->atomic_destroy_state(crtc,
state->crtc_states[i]);
+ state->crtcs[i] = NULL;
state->crtc_states[i] = NULL;
}
@@ -156,6 +158,7 @@
plane->funcs->atomic_destroy_state(plane,
state->plane_states[i]);
+ state->planes[i] = NULL;
state->plane_states[i] = NULL;
}
}
@@ -170,9 +173,12 @@
*/
void drm_atomic_state_free(struct drm_atomic_state *state)
{
+ if (!state)
+ return;
+
drm_atomic_state_clear(state);
- DRM_DEBUG_KMS("Freeing atomic state %p\n", state);
+ DRM_DEBUG_ATOMIC("Freeing atomic state %p\n", state);
kfree_state(state);
}
@@ -217,8 +223,8 @@
state->crtcs[index] = crtc;
crtc_state->state = state;
- DRM_DEBUG_KMS("Added [CRTC:%d] %p state to %p\n",
- crtc->base.id, crtc_state, state);
+ DRM_DEBUG_ATOMIC("Added [CRTC:%d] %p state to %p\n",
+ crtc->base.id, crtc_state, state);
return crtc_state;
}
@@ -248,11 +254,14 @@
struct drm_mode_config *config = &dev->mode_config;
/* FIXME: Mode prop is missing, which also controls ->enable. */
- if (property == config->prop_active) {
+ if (property == config->prop_active)
state->active = val;
- } else if (crtc->funcs->atomic_set_property)
+ else if (crtc->funcs->atomic_set_property)
return crtc->funcs->atomic_set_property(crtc, state, property, val);
- return -EINVAL;
+ else
+ return -EINVAL;
+
+ return 0;
}
EXPORT_SYMBOL(drm_atomic_crtc_set_property);
@@ -266,9 +275,17 @@
const struct drm_crtc_state *state,
struct drm_property *property, uint64_t *val)
{
- if (crtc->funcs->atomic_get_property)
+ struct drm_device *dev = crtc->dev;
+ struct drm_mode_config *config = &dev->mode_config;
+
+ if (property == config->prop_active)
+ *val = state->active;
+ else if (crtc->funcs->atomic_get_property)
return crtc->funcs->atomic_get_property(crtc, state, property, val);
- return -EINVAL;
+ else
+ return -EINVAL;
+
+ return 0;
}
/**
@@ -293,8 +310,8 @@
*/
if (state->active && !state->enable) {
- DRM_DEBUG_KMS("[CRTC:%d] active without enabled\n",
- crtc->base.id);
+ DRM_DEBUG_ATOMIC("[CRTC:%d] active without enabled\n",
+ crtc->base.id);
return -EINVAL;
}
@@ -340,8 +357,8 @@
state->planes[index] = plane;
plane_state->state = state;
- DRM_DEBUG_KMS("Added [PLANE:%d] %p state to %p\n",
- plane->base.id, plane_state, state);
+ DRM_DEBUG_ATOMIC("Added [PLANE:%d] %p state to %p\n",
+ plane->base.id, plane_state, state);
if (plane_state->crtc) {
struct drm_crtc_state *crtc_state;
@@ -450,6 +467,8 @@
*val = state->src_w;
} else if (property == config->prop_src_h) {
*val = state->src_h;
+ } else if (property == config->rotation_property) {
+ *val = state->rotation;
} else if (plane->funcs->atomic_get_property) {
return plane->funcs->atomic_get_property(plane, state, property, val);
} else {
@@ -473,14 +492,14 @@
struct drm_plane_state *state)
{
unsigned int fb_width, fb_height;
- unsigned int i;
+ int ret;
/* either *both* CRTC and FB must be set, or neither */
if (WARN_ON(state->crtc && !state->fb)) {
- DRM_DEBUG_KMS("CRTC set but no FB\n");
+ DRM_DEBUG_ATOMIC("CRTC set but no FB\n");
return -EINVAL;
} else if (WARN_ON(state->fb && !state->crtc)) {
- DRM_DEBUG_KMS("FB set but no CRTC\n");
+ DRM_DEBUG_ATOMIC("FB set but no CRTC\n");
return -EINVAL;
}
@@ -490,18 +509,16 @@
/* Check whether this plane is usable on this CRTC */
if (!(plane->possible_crtcs & drm_crtc_mask(state->crtc))) {
- DRM_DEBUG_KMS("Invalid crtc for plane\n");
+ DRM_DEBUG_ATOMIC("Invalid crtc for plane\n");
return -EINVAL;
}
/* Check whether this plane supports the fb pixel format. */
- for (i = 0; i < plane->format_count; i++)
- if (state->fb->pixel_format == plane->format_types[i])
- break;
- if (i == plane->format_count) {
- DRM_DEBUG_KMS("Invalid pixel format %s\n",
- drm_get_format_name(state->fb->pixel_format));
- return -EINVAL;
+ ret = drm_plane_check_pixel_format(plane, state->fb->pixel_format);
+ if (ret) {
+ DRM_DEBUG_ATOMIC("Invalid pixel format %s\n",
+ drm_get_format_name(state->fb->pixel_format));
+ return ret;
}
/* Give drivers some help against integer overflows */
@@ -509,9 +526,9 @@
state->crtc_x > INT_MAX - (int32_t) state->crtc_w ||
state->crtc_h > INT_MAX ||
state->crtc_y > INT_MAX - (int32_t) state->crtc_h) {
- DRM_DEBUG_KMS("Invalid CRTC coordinates %ux%u+%d+%d\n",
- state->crtc_w, state->crtc_h,
- state->crtc_x, state->crtc_y);
+ DRM_DEBUG_ATOMIC("Invalid CRTC coordinates %ux%u+%d+%d\n",
+ state->crtc_w, state->crtc_h,
+ state->crtc_x, state->crtc_y);
return -ERANGE;
}
@@ -523,12 +540,12 @@
state->src_x > fb_width - state->src_w ||
state->src_h > fb_height ||
state->src_y > fb_height - state->src_h) {
- DRM_DEBUG_KMS("Invalid source coordinates "
- "%u.%06ux%u.%06u+%u.%06u+%u.%06u\n",
- state->src_w >> 16, ((state->src_w & 0xffff) * 15625) >> 10,
- state->src_h >> 16, ((state->src_h & 0xffff) * 15625) >> 10,
- state->src_x >> 16, ((state->src_x & 0xffff) * 15625) >> 10,
- state->src_y >> 16, ((state->src_y & 0xffff) * 15625) >> 10);
+ DRM_DEBUG_ATOMIC("Invalid source coordinates "
+ "%u.%06ux%u.%06u+%u.%06u+%u.%06u\n",
+ state->src_w >> 16, ((state->src_w & 0xffff) * 15625) >> 10,
+ state->src_h >> 16, ((state->src_h & 0xffff) * 15625) >> 10,
+ state->src_x >> 16, ((state->src_x & 0xffff) * 15625) >> 10,
+ state->src_y >> 16, ((state->src_y & 0xffff) * 15625) >> 10);
return -ENOSPC;
}
@@ -575,7 +592,7 @@
* at most the array is a bit too large.
*/
if (index >= state->num_connector) {
- DRM_DEBUG_KMS("Hot-added connector would overflow state array, restarting\n");
+ DRM_DEBUG_ATOMIC("Hot-added connector would overflow state array, restarting\n");
return ERR_PTR(-EAGAIN);
}
@@ -590,8 +607,8 @@
state->connectors[index] = connector;
connector_state->state = state;
- DRM_DEBUG_KMS("Added [CONNECTOR:%d] %p state to %p\n",
- connector->base.id, connector_state, state);
+ DRM_DEBUG_ATOMIC("Added [CONNECTOR:%d] %p state to %p\n",
+ connector->base.id, connector_state, state);
if (connector_state->crtc) {
struct drm_crtc_state *crtc_state;
@@ -752,17 +769,18 @@
}
if (crtc)
- DRM_DEBUG_KMS("Link plane state %p to [CRTC:%d]\n",
- plane_state, crtc->base.id);
+ DRM_DEBUG_ATOMIC("Link plane state %p to [CRTC:%d]\n",
+ plane_state, crtc->base.id);
else
- DRM_DEBUG_KMS("Link plane state %p to [NOCRTC]\n", plane_state);
+ DRM_DEBUG_ATOMIC("Link plane state %p to [NOCRTC]\n",
+ plane_state);
return 0;
}
EXPORT_SYMBOL(drm_atomic_set_crtc_for_plane);
/**
- * drm_atomic_set_fb_for_plane - set crtc for plane
+ * drm_atomic_set_fb_for_plane - set framebuffer for plane
* @plane_state: atomic state object for the plane
* @fb: fb to use for the plane
*
@@ -782,10 +800,11 @@
plane_state->fb = fb;
if (fb)
- DRM_DEBUG_KMS("Set [FB:%d] for plane state %p\n",
- fb->base.id, plane_state);
+ DRM_DEBUG_ATOMIC("Set [FB:%d] for plane state %p\n",
+ fb->base.id, plane_state);
else
- DRM_DEBUG_KMS("Set [NOFB] for plane state %p\n", plane_state);
+ DRM_DEBUG_ATOMIC("Set [NOFB] for plane state %p\n",
+ plane_state);
}
EXPORT_SYMBOL(drm_atomic_set_fb_for_plane);
@@ -818,11 +837,11 @@
conn_state->crtc = crtc;
if (crtc)
- DRM_DEBUG_KMS("Link connector state %p to [CRTC:%d]\n",
- conn_state, crtc->base.id);
+ DRM_DEBUG_ATOMIC("Link connector state %p to [CRTC:%d]\n",
+ conn_state, crtc->base.id);
else
- DRM_DEBUG_KMS("Link connector state %p to [NOCRTC]\n",
- conn_state);
+ DRM_DEBUG_ATOMIC("Link connector state %p to [NOCRTC]\n",
+ conn_state);
return 0;
}
@@ -858,8 +877,8 @@
if (ret)
return ret;
- DRM_DEBUG_KMS("Adding all current connectors for [CRTC:%d] to %p\n",
- crtc->base.id, state);
+ DRM_DEBUG_ATOMIC("Adding all current connectors for [CRTC:%d] to %p\n",
+ crtc->base.id, state);
/*
* Changed connectors are already in @state, so only need to look at the
@@ -890,19 +909,18 @@
drm_atomic_connectors_for_crtc(struct drm_atomic_state *state,
struct drm_crtc *crtc)
{
+ struct drm_connector *connector;
+ struct drm_connector_state *conn_state;
+
int i, num_connected_connectors = 0;
- for (i = 0; i < state->num_connector; i++) {
- struct drm_connector_state *conn_state;
-
- conn_state = state->connector_states[i];
-
- if (conn_state && conn_state->crtc == crtc)
+ for_each_connector_in_state(state, connector, conn_state, i) {
+ if (conn_state->crtc == crtc)
num_connected_connectors++;
}
- DRM_DEBUG_KMS("State %p has %i connectors for [CRTC:%d]\n",
- state, num_connected_connectors, crtc->base.id);
+ DRM_DEBUG_ATOMIC("State %p has %i connectors for [CRTC:%d]\n",
+ state, num_connected_connectors, crtc->base.id);
return num_connected_connectors;
}
@@ -914,7 +932,7 @@
*
* This function should be used by legacy entry points which don't understand
* -EDEADLK semantics. For simplicity this one will grab all modeset locks after
- * the slowpath completed.
+ * the slowpath completed.
*/
void drm_atomic_legacy_backoff(struct drm_atomic_state *state)
{
@@ -949,36 +967,28 @@
{
struct drm_device *dev = state->dev;
struct drm_mode_config *config = &dev->mode_config;
- int nplanes = config->num_total_plane;
- int ncrtcs = config->num_crtc;
+ struct drm_plane *plane;
+ struct drm_plane_state *plane_state;
+ struct drm_crtc *crtc;
+ struct drm_crtc_state *crtc_state;
int i, ret = 0;
- DRM_DEBUG_KMS("checking %p\n", state);
+ DRM_DEBUG_ATOMIC("checking %p\n", state);
- for (i = 0; i < nplanes; i++) {
- struct drm_plane *plane = state->planes[i];
-
- if (!plane)
- continue;
-
- ret = drm_atomic_plane_check(plane, state->plane_states[i]);
+ for_each_plane_in_state(state, plane, plane_state, i) {
+ ret = drm_atomic_plane_check(plane, plane_state);
if (ret) {
- DRM_DEBUG_KMS("[PLANE:%d] atomic core check failed\n",
- plane->base.id);
+ DRM_DEBUG_ATOMIC("[PLANE:%d] atomic core check failed\n",
+ plane->base.id);
return ret;
}
}
- for (i = 0; i < ncrtcs; i++) {
- struct drm_crtc *crtc = state->crtcs[i];
-
- if (!crtc)
- continue;
-
- ret = drm_atomic_crtc_check(crtc, state->crtc_states[i]);
+ for_each_crtc_in_state(state, crtc, crtc_state, i) {
+ ret = drm_atomic_crtc_check(crtc, crtc_state);
if (ret) {
- DRM_DEBUG_KMS("[CRTC:%d] atomic core check failed\n",
- crtc->base.id);
+ DRM_DEBUG_ATOMIC("[CRTC:%d] atomic core check failed\n",
+ crtc->base.id);
return ret;
}
}
@@ -987,17 +997,11 @@
ret = config->funcs->atomic_check(state->dev, state);
if (!state->allow_modeset) {
- for (i = 0; i < ncrtcs; i++) {
- struct drm_crtc *crtc = state->crtcs[i];
- struct drm_crtc_state *crtc_state = state->crtc_states[i];
-
- if (!crtc)
- continue;
-
+ for_each_crtc_in_state(state, crtc, crtc_state, i) {
if (crtc_state->mode_changed ||
crtc_state->active_changed) {
- DRM_DEBUG_KMS("[CRTC:%d] requires full modeset\n",
- crtc->base.id);
+ DRM_DEBUG_ATOMIC("[CRTC:%d] requires full modeset\n",
+ crtc->base.id);
return -EINVAL;
}
}
@@ -1032,7 +1036,7 @@
if (ret)
return ret;
- DRM_DEBUG_KMS("commiting %p\n", state);
+ DRM_DEBUG_ATOMIC("commiting %p\n", state);
return config->funcs->atomic_commit(state->dev, state, false);
}
@@ -1063,7 +1067,7 @@
if (ret)
return ret;
- DRM_DEBUG_KMS("commiting %p asynchronously\n", state);
+ DRM_DEBUG_ATOMIC("commiting %p asynchronously\n", state);
return config->funcs->atomic_commit(state->dev, state, true);
}
@@ -1191,6 +1195,8 @@
struct drm_atomic_state *state;
struct drm_modeset_acquire_ctx ctx;
struct drm_plane *plane;
+ struct drm_crtc *crtc;
+ struct drm_crtc_state *crtc_state;
unsigned plane_mask = 0;
int ret = 0;
unsigned int i, j;
@@ -1294,15 +1300,9 @@
}
if (arg->flags & DRM_MODE_PAGE_FLIP_EVENT) {
- int ncrtcs = dev->mode_config.num_crtc;
-
- for (i = 0; i < ncrtcs; i++) {
- struct drm_crtc_state *crtc_state = state->crtc_states[i];
+ for_each_crtc_in_state(state, crtc, crtc_state, i) {
struct drm_pending_vblank_event *e;
- if (!crtc_state)
- continue;
-
e = create_vblank_event(dev, file_priv, arg->user_data);
if (!e) {
ret = -ENOMEM;
@@ -1354,14 +1354,7 @@
goto backoff;
if (arg->flags & DRM_MODE_PAGE_FLIP_EVENT) {
- int ncrtcs = dev->mode_config.num_crtc;
-
- for (i = 0; i < ncrtcs; i++) {
- struct drm_crtc_state *crtc_state = state->crtc_states[i];
-
- if (!crtc_state)
- continue;
-
+ for_each_crtc_in_state(state, crtc, crtc_state, i) {
destroy_vblank_event(dev, file_priv, crtc_state->event);
crtc_state->event = NULL;
}
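The loop conversions in this file lean on the for_each_plane_in_state()/for_each_crtc_in_state()/for_each_connector_in_state() iterators, which only visit slots that actually carry state, so the explicit NULL checks of the open-coded loops disappear. A minimal sketch of driver-side usage, with hypothetical foo_* naming, might look like this:

/* Illustrative sketch, not part of this patch. */
static int foo_atomic_check(struct drm_device *dev,
			    struct drm_atomic_state *state)
{
	struct drm_crtc *crtc;
	struct drm_crtc_state *crtc_state;
	int i;

	for_each_crtc_in_state(state, crtc, crtc_state, i) {
		/* crtc and crtc_state are guaranteed to be non-NULL here */
		if (crtc_state->mode_changed)
			DRM_DEBUG_ATOMIC("[CRTC:%d] needs a full modeset\n",
					 crtc->base.id);
	}

	return drm_atomic_helper_check(dev, state);
}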
diff --git a/drivers/gpu/drm/drm_atomic_helper.c b/drivers/gpu/drm/drm_atomic_helper.c
index 7e3a52b..1d2ca52 100644
--- a/drivers/gpu/drm/drm_atomic_helper.c
+++ b/drivers/gpu/drm/drm_atomic_helper.c
@@ -116,9 +116,9 @@
*/
WARN_ON(!drm_modeset_is_locked(&config->connection_mutex));
- DRM_DEBUG_KMS("[ENCODER:%d:%s] in use on [CRTC:%d], stealing it\n",
- encoder->base.id, encoder->name,
- encoder_crtc->base.id);
+ DRM_DEBUG_ATOMIC("[ENCODER:%d:%s] in use on [CRTC:%d], stealing it\n",
+ encoder->base.id, encoder->name,
+ encoder_crtc->base.id);
crtc_state = drm_atomic_get_crtc_state(state, encoder_crtc);
if (IS_ERR(crtc_state))
@@ -130,9 +130,9 @@
if (connector->state->best_encoder != encoder)
continue;
- DRM_DEBUG_KMS("Stealing encoder from [CONNECTOR:%d:%s]\n",
- connector->base.id,
- connector->name);
+ DRM_DEBUG_ATOMIC("Stealing encoder from [CONNECTOR:%d:%s]\n",
+ connector->base.id,
+ connector->name);
connector_state = drm_atomic_get_connector_state(state,
connector);
@@ -151,7 +151,7 @@
static int
update_connector_routing(struct drm_atomic_state *state, int conn_idx)
{
- struct drm_connector_helper_funcs *funcs;
+ const struct drm_connector_helper_funcs *funcs;
struct drm_encoder *new_encoder;
struct drm_crtc *encoder_crtc;
struct drm_connector *connector;
@@ -165,9 +165,9 @@
if (!connector)
return 0;
- DRM_DEBUG_KMS("Updating routing for [CONNECTOR:%d:%s]\n",
- connector->base.id,
- connector->name);
+ DRM_DEBUG_ATOMIC("Updating routing for [CONNECTOR:%d:%s]\n",
+ connector->base.id,
+ connector->name);
if (connector->state->crtc != connector_state->crtc) {
if (connector->state->crtc) {
@@ -186,7 +186,7 @@
}
if (!connector_state->crtc) {
- DRM_DEBUG_KMS("Disabling [CONNECTOR:%d:%s]\n",
+ DRM_DEBUG_ATOMIC("Disabling [CONNECTOR:%d:%s]\n",
connector->base.id,
connector->name);
@@ -199,19 +199,19 @@
new_encoder = funcs->best_encoder(connector);
if (!new_encoder) {
- DRM_DEBUG_KMS("No suitable encoder found for [CONNECTOR:%d:%s]\n",
- connector->base.id,
- connector->name);
+ DRM_DEBUG_ATOMIC("No suitable encoder found for [CONNECTOR:%d:%s]\n",
+ connector->base.id,
+ connector->name);
return -EINVAL;
}
if (new_encoder == connector_state->best_encoder) {
- DRM_DEBUG_KMS("[CONNECTOR:%d:%s] keeps [ENCODER:%d:%s], now on [CRTC:%d]\n",
- connector->base.id,
- connector->name,
- new_encoder->base.id,
- new_encoder->name,
- connector_state->crtc->base.id);
+ DRM_DEBUG_ATOMIC("[CONNECTOR:%d:%s] keeps [ENCODER:%d:%s], now on [CRTC:%d]\n",
+ connector->base.id,
+ connector->name,
+ new_encoder->base.id,
+ new_encoder->name,
+ connector_state->crtc->base.id);
return 0;
}
@@ -222,9 +222,9 @@
if (encoder_crtc) {
ret = steal_encoder(state, new_encoder, encoder_crtc);
if (ret) {
- DRM_DEBUG_KMS("Encoder stealing failed for [CONNECTOR:%d:%s]\n",
- connector->base.id,
- connector->name);
+ DRM_DEBUG_ATOMIC("Encoder stealing failed for [CONNECTOR:%d:%s]\n",
+ connector->base.id,
+ connector->name);
return ret;
}
}
@@ -235,12 +235,12 @@
crtc_state = state->crtc_states[idx];
crtc_state->mode_changed = true;
- DRM_DEBUG_KMS("[CONNECTOR:%d:%s] using [ENCODER:%d:%s] on [CRTC:%d]\n",
- connector->base.id,
- connector->name,
- new_encoder->base.id,
- new_encoder->name,
- connector_state->crtc->base.id);
+ DRM_DEBUG_ATOMIC("[CONNECTOR:%d:%s] using [ENCODER:%d:%s] on [CRTC:%d]\n",
+ connector->base.id,
+ connector->name,
+ new_encoder->base.id,
+ new_encoder->name,
+ connector_state->crtc->base.id);
return 0;
}
@@ -248,30 +248,24 @@
static int
mode_fixup(struct drm_atomic_state *state)
{
- int ncrtcs = state->dev->mode_config.num_crtc;
+ struct drm_crtc *crtc;
struct drm_crtc_state *crtc_state;
+ struct drm_connector *connector;
struct drm_connector_state *conn_state;
int i;
bool ret;
- for (i = 0; i < ncrtcs; i++) {
- crtc_state = state->crtc_states[i];
-
- if (!crtc_state || !crtc_state->mode_changed)
+ for_each_crtc_in_state(state, crtc, crtc_state, i) {
+ if (!crtc_state->mode_changed)
continue;
drm_mode_copy(&crtc_state->adjusted_mode, &crtc_state->mode);
}
- for (i = 0; i < state->num_connector; i++) {
- struct drm_encoder_helper_funcs *funcs;
+ for_each_connector_in_state(state, connector, conn_state, i) {
+ const struct drm_encoder_helper_funcs *funcs;
struct drm_encoder *encoder;
- conn_state = state->connector_states[i];
-
- if (!conn_state)
- continue;
-
WARN_ON(!!conn_state->best_encoder != !!conn_state->crtc);
if (!conn_state->crtc || !conn_state->best_encoder)
@@ -292,7 +286,7 @@
encoder->bridge, &crtc_state->mode,
&crtc_state->adjusted_mode);
if (!ret) {
- DRM_DEBUG_KMS("Bridge fixup failed\n");
+ DRM_DEBUG_ATOMIC("Bridge fixup failed\n");
return -EINVAL;
}
}
@@ -301,37 +295,33 @@
ret = funcs->atomic_check(encoder, crtc_state,
conn_state);
if (ret) {
- DRM_DEBUG_KMS("[ENCODER:%d:%s] check failed\n",
- encoder->base.id, encoder->name);
+ DRM_DEBUG_ATOMIC("[ENCODER:%d:%s] check failed\n",
+ encoder->base.id, encoder->name);
return ret;
}
} else {
ret = funcs->mode_fixup(encoder, &crtc_state->mode,
&crtc_state->adjusted_mode);
if (!ret) {
- DRM_DEBUG_KMS("[ENCODER:%d:%s] fixup failed\n",
- encoder->base.id, encoder->name);
+ DRM_DEBUG_ATOMIC("[ENCODER:%d:%s] fixup failed\n",
+ encoder->base.id, encoder->name);
return -EINVAL;
}
}
}
- for (i = 0; i < ncrtcs; i++) {
- struct drm_crtc_helper_funcs *funcs;
- struct drm_crtc *crtc;
+ for_each_crtc_in_state(state, crtc, crtc_state, i) {
+ const struct drm_crtc_helper_funcs *funcs;
- crtc_state = state->crtc_states[i];
- crtc = state->crtcs[i];
-
- if (!crtc_state || !crtc_state->mode_changed)
+ if (!crtc_state->mode_changed)
continue;
funcs = crtc->helper_private;
ret = funcs->mode_fixup(crtc, &crtc_state->mode,
&crtc_state->adjusted_mode);
if (!ret) {
- DRM_DEBUG_KMS("[CRTC:%d] fixup failed\n",
- crtc->base.id);
+ DRM_DEBUG_ATOMIC("[CRTC:%d] fixup failed\n",
+ crtc->base.id);
return -EINVAL;
}
}
@@ -346,7 +336,7 @@
}
/**
- * drm_atomic_helper_check - validate state object for modeset changes
+ * drm_atomic_helper_check_modeset - validate state object for modeset changes
* @dev: DRM device
* @state: the driver state object
*
@@ -371,32 +361,27 @@
drm_atomic_helper_check_modeset(struct drm_device *dev,
struct drm_atomic_state *state)
{
- int ncrtcs = dev->mode_config.num_crtc;
struct drm_crtc *crtc;
struct drm_crtc_state *crtc_state;
+ struct drm_connector *connector;
+ struct drm_connector_state *connector_state;
int i, ret;
- for (i = 0; i < ncrtcs; i++) {
- crtc = state->crtcs[i];
- crtc_state = state->crtc_states[i];
-
- if (!crtc)
- continue;
-
+ for_each_crtc_in_state(state, crtc, crtc_state, i) {
if (!drm_mode_equal(&crtc->state->mode, &crtc_state->mode)) {
- DRM_DEBUG_KMS("[CRTC:%d] mode changed\n",
- crtc->base.id);
+ DRM_DEBUG_ATOMIC("[CRTC:%d] mode changed\n",
+ crtc->base.id);
crtc_state->mode_changed = true;
}
if (crtc->state->enable != crtc_state->enable) {
- DRM_DEBUG_KMS("[CRTC:%d] enable changed\n",
- crtc->base.id);
+ DRM_DEBUG_ATOMIC("[CRTC:%d] enable changed\n",
+ crtc->base.id);
crtc_state->mode_changed = true;
}
}
- for (i = 0; i < state->num_connector; i++) {
+ for_each_connector_in_state(state, connector, connector_state, i) {
/*
* This only sets crtc->mode_changed for routing changes,
* drivers must set crtc->mode_changed themselves when connector
@@ -413,32 +398,26 @@
* configuration. This must be done before calling mode_fixup in case a
* crtc only changed its mode but has the same set of connectors.
*/
- for (i = 0; i < ncrtcs; i++) {
+ for_each_crtc_in_state(state, crtc, crtc_state, i) {
int num_connectors;
- crtc = state->crtcs[i];
- crtc_state = state->crtc_states[i];
-
- if (!crtc)
- continue;
-
/*
* We must set ->active_changed after walking connectors for
* otherwise an update that only changes active would result in
* a full modeset because update_connector_routing force that.
*/
if (crtc->state->active != crtc_state->active) {
- DRM_DEBUG_KMS("[CRTC:%d] active changed\n",
- crtc->base.id);
+ DRM_DEBUG_ATOMIC("[CRTC:%d] active changed\n",
+ crtc->base.id);
crtc_state->active_changed = true;
}
if (!needs_modeset(crtc_state))
continue;
- DRM_DEBUG_KMS("[CRTC:%d] needs all connectors, enable: %c, active: %c\n",
- crtc->base.id,
- crtc_state->enable ? 'y' : 'n',
+ DRM_DEBUG_ATOMIC("[CRTC:%d] needs all connectors, enable: %c, active: %c\n",
+ crtc->base.id,
+ crtc_state->enable ? 'y' : 'n',
crtc_state->active ? 'y' : 'n');
ret = drm_atomic_add_affected_connectors(state, crtc);
@@ -449,8 +428,8 @@
crtc);
if (crtc_state->enable != !!num_connectors) {
- DRM_DEBUG_KMS("[CRTC:%d] enabled/connectors mismatch\n",
- crtc->base.id);
+ DRM_DEBUG_ATOMIC("[CRTC:%d] enabled/connectors mismatch\n",
+ crtc->base.id);
return -EINVAL;
}
@@ -461,7 +440,7 @@
EXPORT_SYMBOL(drm_atomic_helper_check_modeset);
/**
- * drm_atomic_helper_check - validate state object for modeset changes
+ * drm_atomic_helper_check_planes - validate state object for planes changes
* @dev: DRM device
* @state: the driver state object
*
@@ -476,17 +455,14 @@
drm_atomic_helper_check_planes(struct drm_device *dev,
struct drm_atomic_state *state)
{
- int nplanes = dev->mode_config.num_total_plane;
- int ncrtcs = dev->mode_config.num_crtc;
+ struct drm_crtc *crtc;
+ struct drm_crtc_state *crtc_state;
+ struct drm_plane *plane;
+ struct drm_plane_state *plane_state;
int i, ret = 0;
- for (i = 0; i < nplanes; i++) {
- struct drm_plane_helper_funcs *funcs;
- struct drm_plane *plane = state->planes[i];
- struct drm_plane_state *plane_state = state->plane_states[i];
-
- if (!plane)
- continue;
+ for_each_plane_in_state(state, plane, plane_state, i) {
+ const struct drm_plane_helper_funcs *funcs;
funcs = plane->helper_private;
@@ -497,18 +473,14 @@
ret = funcs->atomic_check(plane, plane_state);
if (ret) {
- DRM_DEBUG_KMS("[PLANE:%d] atomic driver check failed\n",
- plane->base.id);
+ DRM_DEBUG_ATOMIC("[PLANE:%d] atomic driver check failed\n",
+ plane->base.id);
return ret;
}
}
- for (i = 0; i < ncrtcs; i++) {
- struct drm_crtc_helper_funcs *funcs;
- struct drm_crtc *crtc = state->crtcs[i];
-
- if (!crtc)
- continue;
+ for_each_crtc_in_state(state, crtc, crtc_state, i) {
+ const struct drm_crtc_helper_funcs *funcs;
funcs = crtc->helper_private;
@@ -517,8 +489,8 @@
ret = funcs->atomic_check(crtc, state->crtc_states[i]);
if (ret) {
- DRM_DEBUG_KMS("[CRTC:%d] atomic driver check failed\n",
- crtc->base.id);
+ DRM_DEBUG_ATOMIC("[CRTC:%d] atomic driver check failed\n",
+ crtc->base.id);
return ret;
}
}
@@ -567,27 +539,26 @@
static void
disable_outputs(struct drm_device *dev, struct drm_atomic_state *old_state)
{
- int ncrtcs = old_state->dev->mode_config.num_crtc;
+ struct drm_connector *connector;
+ struct drm_connector_state *old_conn_state;
+ struct drm_crtc *crtc;
+ struct drm_crtc_state *old_crtc_state;
int i;
- for (i = 0; i < old_state->num_connector; i++) {
- struct drm_connector_state *old_conn_state;
- struct drm_connector *connector;
- struct drm_encoder_helper_funcs *funcs;
+ for_each_connector_in_state(old_state, connector, old_conn_state, i) {
+ const struct drm_encoder_helper_funcs *funcs;
struct drm_encoder *encoder;
struct drm_crtc_state *old_crtc_state;
- old_conn_state = old_state->connector_states[i];
- connector = old_state->connectors[i];
-
/* Shut down everything that's in the changeset and currently
* still on. So need to check the old, saved state. */
- if (!old_conn_state || !old_conn_state->crtc)
+ if (!old_conn_state->crtc)
continue;
old_crtc_state = old_state->crtc_states[drm_crtc_index(old_conn_state->crtc)];
- if (!old_crtc_state->active)
+ if (!old_crtc_state->active ||
+ !needs_modeset(old_conn_state->crtc->state))
continue;
encoder = old_conn_state->best_encoder;
@@ -600,12 +571,12 @@
funcs = encoder->helper_private;
- DRM_DEBUG_KMS("disabling [ENCODER:%d:%s]\n",
- encoder->base.id, encoder->name);
+ DRM_DEBUG_ATOMIC("disabling [ENCODER:%d:%s]\n",
+ encoder->base.id, encoder->name);
/*
* Each encoder has at most one connector (since we always steal
- * it away), so we won't call call disable hooks twice.
+ * it away), so we won't call disable hooks twice.
*/
if (encoder->bridge)
encoder->bridge->funcs->disable(encoder->bridge);
@@ -622,16 +593,11 @@
encoder->bridge->funcs->post_disable(encoder->bridge);
}
- for (i = 0; i < ncrtcs; i++) {
- struct drm_crtc_helper_funcs *funcs;
- struct drm_crtc *crtc;
- struct drm_crtc_state *old_crtc_state;
-
- crtc = old_state->crtcs[i];
- old_crtc_state = old_state->crtc_states[i];
+ for_each_crtc_in_state(old_state, crtc, old_crtc_state, i) {
+ const struct drm_crtc_helper_funcs *funcs;
/* Shut down everything that needs a full modeset. */
- if (!crtc || !needs_modeset(crtc->state))
+ if (!needs_modeset(crtc->state))
continue;
if (!old_crtc_state->active)
@@ -639,8 +605,8 @@
funcs = crtc->helper_private;
- DRM_DEBUG_KMS("disabling [CRTC:%d]\n",
- crtc->base.id);
+ DRM_DEBUG_ATOMIC("disabling [CRTC:%d]\n",
+ crtc->base.id);
/* Right function depends upon target state. */
@@ -656,16 +622,15 @@
static void
set_routing_links(struct drm_device *dev, struct drm_atomic_state *old_state)
{
- int ncrtcs = old_state->dev->mode_config.num_crtc;
+ struct drm_connector *connector;
+ struct drm_connector_state *old_conn_state;
+ struct drm_crtc *crtc;
+ struct drm_crtc_state *old_crtc_state;
int i;
/* clear out existing links */
- for (i = 0; i < old_state->num_connector; i++) {
- struct drm_connector *connector;
-
- connector = old_state->connectors[i];
-
- if (!connector || !connector->encoder)
+ for_each_connector_in_state(old_state, connector, old_conn_state, i) {
+ if (!connector->encoder)
continue;
WARN_ON(!connector->encoder->crtc);
@@ -675,12 +640,8 @@
}
/* set new links */
- for (i = 0; i < old_state->num_connector; i++) {
- struct drm_connector *connector;
-
- connector = old_state->connectors[i];
-
- if (!connector || !connector->state->crtc)
+ for_each_connector_in_state(old_state, connector, old_conn_state, i) {
+ if (!connector->state->crtc)
continue;
if (WARN_ON(!connector->state->best_encoder))
@@ -691,14 +652,7 @@
}
/* set legacy state in the crtc structure */
- for (i = 0; i < ncrtcs; i++) {
- struct drm_crtc *crtc;
-
- crtc = old_state->crtcs[i];
-
- if (!crtc)
- continue;
-
+ for_each_crtc_in_state(old_state, crtc, old_crtc_state, i) {
crtc->mode = crtc->state->mode;
crtc->enabled = crtc->state->enable;
crtc->x = crtc->primary->state->src_x >> 16;
@@ -709,38 +663,35 @@
static void
crtc_set_mode(struct drm_device *dev, struct drm_atomic_state *old_state)
{
- int ncrtcs = old_state->dev->mode_config.num_crtc;
+ struct drm_crtc *crtc;
+ struct drm_crtc_state *old_crtc_state;
+ struct drm_connector *connector;
+ struct drm_connector_state *old_conn_state;
int i;
- for (i = 0; i < ncrtcs; i++) {
- struct drm_crtc_helper_funcs *funcs;
- struct drm_crtc *crtc;
+ for_each_crtc_in_state(old_state, crtc, old_crtc_state, i) {
+ const struct drm_crtc_helper_funcs *funcs;
- crtc = old_state->crtcs[i];
-
- if (!crtc || !crtc->state->mode_changed)
+ if (!crtc->state->mode_changed)
continue;
funcs = crtc->helper_private;
- if (crtc->state->enable) {
- DRM_DEBUG_KMS("modeset on [CRTC:%d]\n",
- crtc->base.id);
+ if (crtc->state->enable && funcs->mode_set_nofb) {
+ DRM_DEBUG_ATOMIC("modeset on [CRTC:%d]\n",
+ crtc->base.id);
funcs->mode_set_nofb(crtc);
}
}
- for (i = 0; i < old_state->num_connector; i++) {
- struct drm_connector *connector;
+ for_each_connector_in_state(old_state, connector, old_conn_state, i) {
+ const struct drm_encoder_helper_funcs *funcs;
struct drm_crtc_state *new_crtc_state;
- struct drm_encoder_helper_funcs *funcs;
struct drm_encoder *encoder;
struct drm_display_mode *mode, *adjusted_mode;
- connector = old_state->connectors[i];
-
- if (!connector || !connector->state->best_encoder)
+ if (!connector->state->best_encoder)
continue;
encoder = connector->state->best_encoder;
@@ -752,14 +703,15 @@
if (!new_crtc_state->mode_changed)
continue;
- DRM_DEBUG_KMS("modeset on [ENCODER:%d:%s]\n",
- encoder->base.id, encoder->name);
+ DRM_DEBUG_ATOMIC("modeset on [ENCODER:%d:%s]\n",
+ encoder->base.id, encoder->name);
/*
* Each encoder has at most one connector (since we always steal
- * it away), so we won't call call mode_set hooks twice.
+ * it away), so we won't call mode_set hooks twice.
*/
- funcs->mode_set(encoder, mode, adjusted_mode);
+ if (funcs->mode_set)
+ funcs->mode_set(encoder, mode, adjusted_mode);
if (encoder->bridge && encoder->bridge->funcs->mode_set)
encoder->bridge->funcs->mode_set(encoder->bridge,
@@ -768,46 +720,56 @@
}
/**
- * drm_atomic_helper_commit_pre_planes - modeset commit before plane updates
- * @dev: DRM device
- * @state: atomic state
- *
- * This function commits the modeset changes that need to be committed before
- * updating planes. It shuts down all the outputs that need to be shut down and
- * prepares them (if required) with the new mode.
- */
-void drm_atomic_helper_commit_pre_planes(struct drm_device *dev,
- struct drm_atomic_state *state)
-{
- disable_outputs(dev, state);
- set_routing_links(dev, state);
- crtc_set_mode(dev, state);
-}
-EXPORT_SYMBOL(drm_atomic_helper_commit_pre_planes);
-
-/**
- * drm_atomic_helper_commit_post_planes - modeset commit after plane updates
+ * drm_atomic_helper_commit_modeset_disables - modeset commit to disable outputs
* @dev: DRM device
* @old_state: atomic state object with old state structures
*
- * This function commits the modeset changes that need to be committed after
- * updating planes: It enables all the outputs with the new configuration which
- * had to be turned off for the update.
+ * This function shuts down all the outputs that need to be shut down and
+ * prepares them (if required) with the new mode.
+ *
+ * For compatibility with legacy crtc helpers this should be called before
+ * drm_atomic_helper_commit_planes(), which is what the default commit function
+ * does. But drivers with different needs can group the modeset commits together
+ * and do the plane commits at the end. This is useful for drivers doing runtime
+ * PM since plane updates then only happen when the CRTC is actually enabled.
*/
-void drm_atomic_helper_commit_post_planes(struct drm_device *dev,
- struct drm_atomic_state *old_state)
+void drm_atomic_helper_commit_modeset_disables(struct drm_device *dev,
+ struct drm_atomic_state *old_state)
{
- int ncrtcs = old_state->dev->mode_config.num_crtc;
+ disable_outputs(dev, old_state);
+ set_routing_links(dev, old_state);
+ crtc_set_mode(dev, old_state);
+}
+EXPORT_SYMBOL(drm_atomic_helper_commit_modeset_disables);
+
+/**
+ * drm_atomic_helper_commit_modeset_enables - modeset commit to enable outputs
+ * @dev: DRM device
+ * @old_state: atomic state object with old state structures
+ *
+ * This function enables all the outputs with the new configuration which had to
+ * be turned off for the update.
+ *
+ * For compatibility with legacy crtc helpers this should be called after
+ * drm_atomic_helper_commit_planes(), which is what the default commit function
+ * does. But drivers with different needs can group the modeset commits together
+ * and do the plane commits at the end. This is useful for drivers doing runtime
+ * PM since plane updates then only happen when the CRTC is actually enabled.
+ */
+void drm_atomic_helper_commit_modeset_enables(struct drm_device *dev,
+ struct drm_atomic_state *old_state)
+{
+ struct drm_crtc *crtc;
+ struct drm_crtc_state *old_crtc_state;
+ struct drm_connector *connector;
+ struct drm_connector_state *old_conn_state;
int i;
- for (i = 0; i < ncrtcs; i++) {
- struct drm_crtc_helper_funcs *funcs;
- struct drm_crtc *crtc;
-
- crtc = old_state->crtcs[i];
+ for_each_crtc_in_state(old_state, crtc, old_crtc_state, i) {
+ const struct drm_crtc_helper_funcs *funcs;
/* Need to filter out CRTCs where only planes change. */
- if (!crtc || !needs_modeset(crtc->state))
+ if (!needs_modeset(crtc->state))
continue;
if (!crtc->state->active)
@@ -816,8 +778,8 @@
funcs = crtc->helper_private;
if (crtc->state->enable) {
- DRM_DEBUG_KMS("enabling [CRTC:%d]\n",
- crtc->base.id);
+ DRM_DEBUG_ATOMIC("enabling [CRTC:%d]\n",
+ crtc->base.id);
if (funcs->enable)
funcs->enable(crtc);
@@ -826,28 +788,26 @@
}
}
- for (i = 0; i < old_state->num_connector; i++) {
- struct drm_connector *connector;
- struct drm_encoder_helper_funcs *funcs;
+ for_each_connector_in_state(old_state, connector, old_conn_state, i) {
+ const struct drm_encoder_helper_funcs *funcs;
struct drm_encoder *encoder;
- connector = old_state->connectors[i];
-
- if (!connector || !connector->state->best_encoder)
+ if (!connector->state->best_encoder)
continue;
- if (!connector->state->crtc->state->active)
+ if (!connector->state->crtc->state->active ||
+ !needs_modeset(connector->state->crtc->state))
continue;
encoder = connector->state->best_encoder;
funcs = encoder->helper_private;
- DRM_DEBUG_KMS("enabling [ENCODER:%d:%s]\n",
- encoder->base.id, encoder->name);
+ DRM_DEBUG_ATOMIC("enabling [ENCODER:%d:%s]\n",
+ encoder->base.id, encoder->name);
/*
* Each encoder has at most one connector (since we always steal
- * it away), so we won't call call enable hooks twice.
+ * it away), so we won't call enable hooks twice.
*/
if (encoder->bridge)
encoder->bridge->funcs->pre_enable(encoder->bridge);
@@ -861,18 +821,17 @@
encoder->bridge->funcs->enable(encoder->bridge);
}
}
-EXPORT_SYMBOL(drm_atomic_helper_commit_post_planes);
+EXPORT_SYMBOL(drm_atomic_helper_commit_modeset_enables);
static void wait_for_fences(struct drm_device *dev,
struct drm_atomic_state *state)
{
- int nplanes = dev->mode_config.num_total_plane;
+ struct drm_plane *plane;
+ struct drm_plane_state *plane_state;
int i;
- for (i = 0; i < nplanes; i++) {
- struct drm_plane *plane = state->planes[i];
-
- if (!plane || !plane->state->fence)
+ for_each_plane_in_state(state, plane, plane_state, i) {
+ if (!plane->state->fence)
continue;
WARN_ON(!plane->state->fb);
@@ -889,16 +848,9 @@
{
struct drm_plane *plane;
struct drm_plane_state *old_plane_state;
- int nplanes = old_state->dev->mode_config.num_total_plane;
int i;
- for (i = 0; i < nplanes; i++) {
- plane = old_state->planes[i];
- old_plane_state = old_state->plane_states[i];
-
- if (!plane)
- continue;
-
+ for_each_plane_in_state(old_state, plane, old_plane_state, i) {
if (plane->state->crtc != crtc &&
old_plane_state->crtc != crtc)
continue;
@@ -927,16 +879,9 @@
{
struct drm_crtc *crtc;
struct drm_crtc_state *old_crtc_state;
- int ncrtcs = old_state->dev->mode_config.num_crtc;
int i, ret;
- for (i = 0; i < ncrtcs; i++) {
- crtc = old_state->crtcs[i];
- old_crtc_state = old_state->crtc_states[i];
-
- if (!crtc)
- continue;
-
+ for_each_crtc_in_state(old_state, crtc, old_crtc_state, i) {
/* No one cares about the old state, so abuse it for tracking
* and store whether we hold a vblank reference (and should do a
* vblank wait) in the ->enable boolean. */
@@ -961,11 +906,8 @@
old_crtc_state->last_vblank_count = drm_vblank_count(dev, i);
}
- for (i = 0; i < ncrtcs; i++) {
- crtc = old_state->crtcs[i];
- old_crtc_state = old_state->crtc_states[i];
-
- if (!crtc || !old_crtc_state->enable)
+ for_each_crtc_in_state(old_state, crtc, old_crtc_state, i) {
+ if (!old_crtc_state->enable)
continue;
ret = wait_event_timeout(dev->vblank[i].queue,
@@ -1014,7 +956,7 @@
/*
* Everything below can be run asynchronously without the need to grab
- * any modeset locks at all under one conditions: It must be guaranteed
+ * any modeset locks at all under one condition: It must be guaranteed
* that the asynchronous work has either been cancelled (if the driver
* supports it, which at least requires that the framebuffers get
* cleaned up with drm_atomic_helper_cleanup_planes()) or completed
@@ -1030,11 +972,11 @@
wait_for_fences(dev, state);
- drm_atomic_helper_commit_pre_planes(dev, state);
+ drm_atomic_helper_commit_modeset_disables(dev, state);
drm_atomic_helper_commit_planes(dev, state);
- drm_atomic_helper_commit_post_planes(dev, state);
+ drm_atomic_helper_commit_modeset_enables(dev, state);
drm_atomic_helper_wait_for_vblanks(dev, state);
@@ -1085,9 +1027,9 @@
*/
/**
- * drm_atomic_helper_prepare_planes - prepare plane resources after commit
+ * drm_atomic_helper_prepare_planes - prepare plane resources before commit
* @dev: DRM device
- * @state: atomic state object with old state structures
+ * @state: atomic state object with new state structures
*
* This function prepares plane state, specifically framebuffers, for the new
* configuration. If any failure is encountered this function will call
@@ -1103,8 +1045,9 @@
int ret, i;
for (i = 0; i < nplanes; i++) {
- struct drm_plane_helper_funcs *funcs;
+ const struct drm_plane_helper_funcs *funcs;
struct drm_plane *plane = state->planes[i];
+ struct drm_plane_state *plane_state = state->plane_states[i];
struct drm_framebuffer *fb;
if (!plane)
@@ -1112,10 +1055,10 @@
funcs = plane->helper_private;
- fb = state->plane_states[i]->fb;
+ fb = plane_state->fb;
if (fb && funcs->prepare_fb) {
- ret = funcs->prepare_fb(plane, fb);
+ ret = funcs->prepare_fb(plane, fb, plane_state);
if (ret)
goto fail;
}
@@ -1125,8 +1068,9 @@
fail:
for (i--; i >= 0; i--) {
- struct drm_plane_helper_funcs *funcs;
+ const struct drm_plane_helper_funcs *funcs;
struct drm_plane *plane = state->planes[i];
+ struct drm_plane_state *plane_state = state->plane_states[i];
struct drm_framebuffer *fb;
if (!plane)
@@ -1137,7 +1081,7 @@
fb = state->plane_states[i]->fb;
if (fb && funcs->cleanup_fb)
- funcs->cleanup_fb(plane, fb);
+ funcs->cleanup_fb(plane, fb, plane_state);
}
@@ -1161,16 +1105,14 @@
void drm_atomic_helper_commit_planes(struct drm_device *dev,
struct drm_atomic_state *old_state)
{
- int nplanes = dev->mode_config.num_total_plane;
- int ncrtcs = dev->mode_config.num_crtc;
+ struct drm_crtc *crtc;
+ struct drm_crtc_state *old_crtc_state;
+ struct drm_plane *plane;
+ struct drm_plane_state *old_plane_state;
int i;
- for (i = 0; i < ncrtcs; i++) {
- struct drm_crtc_helper_funcs *funcs;
- struct drm_crtc *crtc = old_state->crtcs[i];
-
- if (!crtc)
- continue;
+ for_each_crtc_in_state(old_state, crtc, old_crtc_state, i) {
+ const struct drm_crtc_helper_funcs *funcs;
funcs = crtc->helper_private;
@@ -1180,13 +1122,8 @@
funcs->atomic_begin(crtc);
}
- for (i = 0; i < nplanes; i++) {
- struct drm_plane_helper_funcs *funcs;
- struct drm_plane *plane = old_state->planes[i];
- struct drm_plane_state *old_plane_state;
-
- if (!plane)
- continue;
+ for_each_plane_in_state(old_state, plane, old_plane_state, i) {
+ const struct drm_plane_helper_funcs *funcs;
funcs = plane->helper_private;
@@ -1205,12 +1142,8 @@
funcs->atomic_update(plane, old_plane_state);
}
- for (i = 0; i < ncrtcs; i++) {
- struct drm_crtc_helper_funcs *funcs;
- struct drm_crtc *crtc = old_state->crtcs[i];
-
- if (!crtc)
- continue;
+ for_each_crtc_in_state(old_state, crtc, old_crtc_state, i) {
+ const struct drm_crtc_helper_funcs *funcs;
funcs = crtc->helper_private;
@@ -1237,23 +1170,20 @@
void drm_atomic_helper_cleanup_planes(struct drm_device *dev,
struct drm_atomic_state *old_state)
{
- int nplanes = dev->mode_config.num_total_plane;
+ struct drm_plane *plane;
+ struct drm_plane_state *plane_state;
int i;
- for (i = 0; i < nplanes; i++) {
- struct drm_plane_helper_funcs *funcs;
- struct drm_plane *plane = old_state->planes[i];
+ for_each_plane_in_state(old_state, plane, plane_state, i) {
+ const struct drm_plane_helper_funcs *funcs;
struct drm_framebuffer *old_fb;
- if (!plane)
- continue;
-
funcs = plane->helper_private;
- old_fb = old_state->plane_states[i]->fb;
+ old_fb = plane_state->fb;
if (old_fb && funcs->cleanup_fb)
- funcs->cleanup_fb(plane, old_fb);
+ funcs->cleanup_fb(plane, old_fb, plane_state);
}
}
EXPORT_SYMBOL(drm_atomic_helper_cleanup_planes);
@@ -1496,8 +1426,10 @@
struct drm_mode_set *set)
{
struct drm_device *dev = set->crtc->dev;
+ struct drm_crtc *crtc;
+ struct drm_crtc_state *crtc_state;
+ struct drm_connector *connector;
struct drm_connector_state *conn_state;
- int ncrtcs = state->dev->mode_config.num_crtc;
int ret, i, j;
ret = drm_modeset_lock(&dev->mode_config.connection_mutex,
@@ -1513,27 +1445,14 @@
return PTR_ERR(conn_state);
}
- for (i = 0; i < ncrtcs; i++) {
- struct drm_crtc *crtc = state->crtcs[i];
-
- if (!crtc)
- continue;
-
+ for_each_crtc_in_state(state, crtc, crtc_state, i) {
ret = drm_atomic_add_affected_connectors(state, crtc);
if (ret)
return ret;
}
/* Then recompute connector->crtc links and crtc enabling state. */
- for (i = 0; i < state->num_connector; i++) {
- struct drm_connector *connector;
-
- connector = state->connectors[i];
- conn_state = state->connector_states[i];
-
- if (!connector)
- continue;
-
+ for_each_connector_in_state(state, connector, conn_state, i) {
if (conn_state->crtc == set->crtc) {
ret = drm_atomic_set_crtc_for_connector(conn_state,
NULL);
@@ -1552,13 +1471,7 @@
}
}
- for (i = 0; i < ncrtcs; i++) {
- struct drm_crtc *crtc = state->crtcs[i];
- struct drm_crtc_state *crtc_state = state->crtc_states[i];
-
- if (!crtc)
- continue;
-
+ for_each_crtc_in_state(state, crtc, crtc_state, i) {
/* Don't update ->enable for the CRTC in the set_config request,
* since a mismatch would indicate a bug in the upper layers.
* The actual modeset code later on will catch any
@@ -1678,12 +1591,13 @@
EXPORT_SYMBOL(drm_atomic_helper_set_config);
/**
- * drm_atomic_helper_crtc_set_property - helper for crtc prorties
+ * drm_atomic_helper_crtc_set_property - helper for crtc properties
* @crtc: DRM crtc
* @property: DRM property
* @val: value of property
*
- * Provides a default plane disablle handler using the atomic driver interface.
+ * Provides a default crtc set_property handler using the atomic driver
+ * interface.
*
* RETURNS:
* Zero on success, error code on failure
@@ -1737,12 +1651,13 @@
EXPORT_SYMBOL(drm_atomic_helper_crtc_set_property);
/**
- * drm_atomic_helper_plane_set_property - helper for plane prorties
+ * drm_atomic_helper_plane_set_property - helper for plane properties
* @plane: DRM plane
* @property: DRM property
* @val: value of property
*
- * Provides a default plane disable handler using the atomic driver interface.
+ * Provides a default plane set_property handler using the atomic driver
+ * interface.
*
* RETURNS:
* Zero on success, error code on failure
@@ -1796,12 +1711,13 @@
EXPORT_SYMBOL(drm_atomic_helper_plane_set_property);
/**
- * drm_atomic_helper_connector_set_property - helper for connector prorties
+ * drm_atomic_helper_connector_set_property - helper for connector properties
* @connector: DRM connector
* @property: DRM property
* @val: value of property
*
- * Provides a default plane disablle handler using the atomic driver interface.
+ * Provides a default connector set_property handler using the atomic driver
+ * interface.
*
* RETURNS:
* Zero on success, error code on failure
@@ -1984,10 +1900,10 @@
WARN_ON(!drm_modeset_is_locked(&config->connection_mutex));
list_for_each_entry(tmp_connector, &config->connector_list, head) {
- if (connector->state->crtc != crtc)
+ if (tmp_connector->state->crtc != crtc)
continue;
- if (connector->dpms == DRM_MODE_DPMS_ON) {
+ if (tmp_connector->dpms == DRM_MODE_DPMS_ON) {
active = true;
break;
}
@@ -2050,6 +1966,26 @@
EXPORT_SYMBOL(drm_atomic_helper_crtc_reset);
/**
+ * __drm_atomic_helper_crtc_duplicate_state - copy atomic CRTC state
+ * @crtc: CRTC object
+ * @state: atomic CRTC state
+ *
+ * Copies atomic state from a CRTC's current state and resets inferred values.
+ * This is useful for drivers that subclass the CRTC state.
+ */
+void __drm_atomic_helper_crtc_duplicate_state(struct drm_crtc *crtc,
+ struct drm_crtc_state *state)
+{
+ memcpy(state, crtc->state, sizeof(*state));
+
+ state->mode_changed = false;
+ state->active_changed = false;
+ state->planes_changed = false;
+ state->event = NULL;
+}
+EXPORT_SYMBOL(__drm_atomic_helper_crtc_duplicate_state);
+
+/**
* drm_atomic_helper_crtc_duplicate_state - default state duplicate hook
* @crtc: drm CRTC
*
@@ -2064,20 +2000,35 @@
if (WARN_ON(!crtc->state))
return NULL;
- state = kmemdup(crtc->state, sizeof(*crtc->state), GFP_KERNEL);
-
- if (state) {
- state->mode_changed = false;
- state->active_changed = false;
- state->planes_changed = false;
- state->event = NULL;
- }
+ state = kmalloc(sizeof(*state), GFP_KERNEL);
+ if (state)
+ __drm_atomic_helper_crtc_duplicate_state(crtc, state);
return state;
}
EXPORT_SYMBOL(drm_atomic_helper_crtc_duplicate_state);
/**
+ * __drm_atomic_helper_crtc_destroy_state - release CRTC state
+ * @crtc: CRTC object
+ * @state: CRTC state object to release
+ *
+ * Releases all resources stored in the CRTC state without actually freeing
+ * the memory of the CRTC state. This is useful for drivers that subclass the
+ * CRTC state.
+ */
+void __drm_atomic_helper_crtc_destroy_state(struct drm_crtc *crtc,
+ struct drm_crtc_state *state)
+{
+ /*
+ * This is currently a placeholder so that drivers that subclass the
+ * state will automatically do the right thing if code is ever added
+ * to this function.
+ */
+}
+EXPORT_SYMBOL(__drm_atomic_helper_crtc_destroy_state);
+
+/**
* drm_atomic_helper_crtc_destroy_state - default state destroy hook
* @crtc: drm CRTC
* @state: CRTC state object to release
@@ -2088,6 +2039,7 @@
void drm_atomic_helper_crtc_destroy_state(struct drm_crtc *crtc,
struct drm_crtc_state *state)
{
+ __drm_atomic_helper_crtc_destroy_state(crtc, state);
kfree(state);
}
EXPORT_SYMBOL(drm_atomic_helper_crtc_destroy_state);
@@ -2113,6 +2065,24 @@
EXPORT_SYMBOL(drm_atomic_helper_plane_reset);
/**
+ * __drm_atomic_helper_plane_duplicate_state - copy atomic plane state
+ * @plane: plane object
+ * @state: atomic plane state
+ *
+ * Copies atomic state from a plane's current state. This is useful for
+ * drivers that subclass the plane state.
+ */
+void __drm_atomic_helper_plane_duplicate_state(struct drm_plane *plane,
+ struct drm_plane_state *state)
+{
+ memcpy(state, plane->state, sizeof(*state));
+
+ if (state->fb)
+ drm_framebuffer_reference(state->fb);
+}
+EXPORT_SYMBOL(__drm_atomic_helper_plane_duplicate_state);
+
+/**
* drm_atomic_helper_plane_duplicate_state - default state duplicate hook
* @plane: drm plane
*
@@ -2127,16 +2097,32 @@
if (WARN_ON(!plane->state))
return NULL;
- state = kmemdup(plane->state, sizeof(*plane->state), GFP_KERNEL);
-
- if (state && state->fb)
- drm_framebuffer_reference(state->fb);
+ state = kmalloc(sizeof(*state), GFP_KERNEL);
+ if (state)
+ __drm_atomic_helper_plane_duplicate_state(plane, state);
return state;
}
EXPORT_SYMBOL(drm_atomic_helper_plane_duplicate_state);
/**
+ * __drm_atomic_helper_plane_destroy_state - release plane state
+ * @plane: plane object
+ * @state: plane state object to release
+ *
+ * Releases all resources stored in the plane state without actually freeing
+ * the memory of the plane state. This is useful for drivers that subclass the
+ * plane state.
+ */
+void __drm_atomic_helper_plane_destroy_state(struct drm_plane *plane,
+ struct drm_plane_state *state)
+{
+ if (state->fb)
+ drm_framebuffer_unreference(state->fb);
+}
+EXPORT_SYMBOL(__drm_atomic_helper_plane_destroy_state);
+
+/**
* drm_atomic_helper_plane_destroy_state - default state destroy hook
* @plane: drm plane
* @state: plane state object to release
@@ -2147,9 +2133,7 @@
void drm_atomic_helper_plane_destroy_state(struct drm_plane *plane,
struct drm_plane_state *state)
{
- if (state->fb)
- drm_framebuffer_unreference(state->fb);
-
+ __drm_atomic_helper_plane_destroy_state(plane, state);
kfree(state);
}
EXPORT_SYMBOL(drm_atomic_helper_plane_destroy_state);
@@ -2173,6 +2157,22 @@
EXPORT_SYMBOL(drm_atomic_helper_connector_reset);
/**
+ * __drm_atomic_helper_connector_duplicate_state - copy atomic connector state
+ * @connector: connector object
+ * @state: atomic connector state
+ *
+ * Copies atomic state from a connector's current state. This is useful for
+ * drivers that subclass the connector state.
+ */
+void
+__drm_atomic_helper_connector_duplicate_state(struct drm_connector *connector,
+ struct drm_connector_state *state)
+{
+ memcpy(state, connector->state, sizeof(*state));
+}
+EXPORT_SYMBOL(__drm_atomic_helper_connector_duplicate_state);
+
+/**
* drm_atomic_helper_connector_duplicate_state - default state duplicate hook
* @connector: drm connector
*
@@ -2182,14 +2182,41 @@
struct drm_connector_state *
drm_atomic_helper_connector_duplicate_state(struct drm_connector *connector)
{
+ struct drm_connector_state *state;
+
if (WARN_ON(!connector->state))
return NULL;
- return kmemdup(connector->state, sizeof(*connector->state), GFP_KERNEL);
+ state = kmalloc(sizeof(*state), GFP_KERNEL);
+ if (state)
+ __drm_atomic_helper_connector_duplicate_state(connector, state);
+
+ return state;
}
EXPORT_SYMBOL(drm_atomic_helper_connector_duplicate_state);
/**
+ * __drm_atomic_helper_connector_destroy_state - release connector state
+ * @connector: connector object
+ * @state: connector state object to release
+ *
+ * Releases all resources stored in the connector state without actually
+ * freeing the memory of the connector state. This is useful for drivers that
+ * subclass the connector state.
+ */
+void
+__drm_atomic_helper_connector_destroy_state(struct drm_connector *connector,
+ struct drm_connector_state *state)
+{
+ /*
+ * This is currently a placeholder so that drivers that subclass the
+ * state will automatically do the right thing if code is ever added
+ * to this function.
+ */
+}
+EXPORT_SYMBOL(__drm_atomic_helper_connector_destroy_state);
+
+/**
* drm_atomic_helper_connector_destroy_state - default state destroy hook
* @connector: drm connector
* @state: connector state object to release
@@ -2200,6 +2227,7 @@
void drm_atomic_helper_connector_destroy_state(struct drm_connector *connector,
struct drm_connector_state *state)
{
+ __drm_atomic_helper_connector_destroy_state(connector, state);
kfree(state);
}
EXPORT_SYMBOL(drm_atomic_helper_connector_destroy_state);
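The new __drm_atomic_helper_*() variants above exist so that drivers which embed the core state in a larger, driver-private structure can still reuse the common copy and release logic from their own .atomic_duplicate_state/.atomic_destroy_state hooks. A minimal sketch of such a subclassing driver follows; the foo_* names and the extra watermark field are assumptions for illustration, only the __drm_atomic_helper_plane_*() calls come from this patch.

#include <drm/drm_atomic_helper.h>

/* Hypothetical driver-private plane state embedding the core state. */
struct foo_plane_state {
	struct drm_plane_state base;	/* core DRM plane state */
	u32 watermark;			/* driver-specific derived value */
};

static inline struct foo_plane_state *
to_foo_plane_state(struct drm_plane_state *state)
{
	return container_of(state, struct foo_plane_state, base);
}

static struct drm_plane_state *foo_plane_duplicate_state(struct drm_plane *plane)
{
	struct foo_plane_state *state;

	state = kzalloc(sizeof(*state), GFP_KERNEL);
	if (!state)
		return NULL;

	/* Copies the core fields and takes the framebuffer reference. */
	__drm_atomic_helper_plane_duplicate_state(plane, &state->base);
	state->watermark = to_foo_plane_state(plane->state)->watermark;

	return &state->base;
}

static void foo_plane_destroy_state(struct drm_plane *plane,
				    struct drm_plane_state *state)
{
	/* Drops the framebuffer reference taken by the duplicate helper. */
	__drm_atomic_helper_plane_destroy_state(plane, state);
	kfree(to_foo_plane_state(state));
}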
diff --git a/drivers/gpu/drm/drm_bridge.c b/drivers/gpu/drm/drm_bridge.c
index d1187e5..eaa5790 100644
--- a/drivers/gpu/drm/drm_bridge.c
+++ b/drivers/gpu/drm/drm_bridge.c
@@ -49,7 +49,7 @@
}
EXPORT_SYMBOL(drm_bridge_remove);
-extern int drm_bridge_attach(struct drm_device *dev, struct drm_bridge *bridge)
+int drm_bridge_attach(struct drm_device *dev, struct drm_bridge *bridge)
{
if (!dev || !bridge)
return -EINVAL;
diff --git a/drivers/gpu/drm/drm_crtc.c b/drivers/gpu/drm/drm_crtc.c
index b6f076b..3007b44 100644
--- a/drivers/gpu/drm/drm_crtc.c
+++ b/drivers/gpu/drm/drm_crtc.c
@@ -660,6 +660,9 @@
struct drm_mode_config *config = &dev->mode_config;
int ret;
+ WARN_ON(primary && primary->type != DRM_PLANE_TYPE_PRIMARY);
+ WARN_ON(cursor && cursor->type != DRM_PLANE_TYPE_CURSOR);
+
crtc->dev = dev;
crtc->funcs = funcs;
crtc->invert_dimensions = false;
@@ -1999,21 +2002,32 @@
return -ENOENT;
drm_modeset_lock_crtc(crtc, crtc->primary);
- crtc_resp->x = crtc->x;
- crtc_resp->y = crtc->y;
crtc_resp->gamma_size = crtc->gamma_size;
if (crtc->primary->fb)
crtc_resp->fb_id = crtc->primary->fb->base.id;
else
crtc_resp->fb_id = 0;
- if (crtc->enabled) {
+ if (crtc->state) {
+ crtc_resp->x = crtc->primary->state->src_x >> 16;
+ crtc_resp->y = crtc->primary->state->src_y >> 16;
+ if (crtc->state->enable) {
+ drm_crtc_convert_to_umode(&crtc_resp->mode, &crtc->state->mode);
+ crtc_resp->mode_valid = 1;
- drm_crtc_convert_to_umode(&crtc_resp->mode, &crtc->mode);
- crtc_resp->mode_valid = 1;
-
+ } else {
+ crtc_resp->mode_valid = 0;
+ }
} else {
- crtc_resp->mode_valid = 0;
+ crtc_resp->x = crtc->x;
+ crtc_resp->y = crtc->y;
+ if (crtc->enabled) {
+ drm_crtc_convert_to_umode(&crtc_resp->mode, &crtc->mode);
+ crtc_resp->mode_valid = 1;
+
+ } else {
+ crtc_resp->mode_valid = 0;
+ }
}
drm_modeset_unlock_crtc(crtc);
@@ -2266,8 +2280,6 @@
crtc = drm_encoder_get_crtc(encoder);
if (crtc)
enc_resp->crtc_id = crtc->base.id;
- else if (encoder->crtc)
- enc_resp->crtc_id = encoder->crtc->base.id;
else
enc_resp->crtc_id = 0;
drm_modeset_unlock(&dev->mode_config.connection_mutex);
@@ -2402,6 +2414,27 @@
return 0;
}
+/**
+ * drm_plane_check_pixel_format - Check if the plane supports the pixel format
+ * @plane: plane to check for format support
+ * @format: the pixel format
+ *
+ * Returns:
+ * Zero if @plane has @format in its list of supported pixel formats, -EINVAL
+ * otherwise.
+ */
+int drm_plane_check_pixel_format(const struct drm_plane *plane, u32 format)
+{
+ unsigned int i;
+
+ for (i = 0; i < plane->format_count; i++) {
+ if (format == plane->format_types[i])
+ return 0;
+ }
+
+ return -EINVAL;
+}
+
/*
* setplane_internal - setplane handler for internal callers
*
@@ -2422,7 +2455,6 @@
{
int ret = 0;
unsigned int fb_width, fb_height;
- unsigned int i;
/* No fb means shut it down */
if (!fb) {
@@ -2445,16 +2477,24 @@
}
/* Check whether this plane supports the fb pixel format. */
- for (i = 0; i < plane->format_count; i++)
- if (fb->pixel_format == plane->format_types[i])
- break;
- if (i == plane->format_count) {
+ ret = drm_plane_check_pixel_format(plane, fb->pixel_format);
+ if (ret) {
DRM_DEBUG_KMS("Invalid pixel format %s\n",
drm_get_format_name(fb->pixel_format));
- ret = -EINVAL;
goto out;
}
+ /* Give drivers some help against integer overflows */
+ if (crtc_w > INT_MAX ||
+ crtc_x > INT_MAX - (int32_t) crtc_w ||
+ crtc_h > INT_MAX ||
+ crtc_y > INT_MAX - (int32_t) crtc_h) {
+ DRM_DEBUG_KMS("Invalid CRTC coordinates %ux%u+%d+%d\n",
+ crtc_w, crtc_h, crtc_x, crtc_y);
+ return -ERANGE;
+ }
+
+
fb_width = fb->width << 16;
fb_height = fb->height << 16;
@@ -2539,17 +2579,6 @@
if (!drm_core_check_feature(dev, DRIVER_MODESET))
return -EINVAL;
- /* Give drivers some help against integer overflows */
- if (plane_req->crtc_w > INT_MAX ||
- plane_req->crtc_x > INT_MAX - (int32_t) plane_req->crtc_w ||
- plane_req->crtc_h > INT_MAX ||
- plane_req->crtc_y > INT_MAX - (int32_t) plane_req->crtc_h) {
- DRM_DEBUG_KMS("Invalid CRTC coordinates %ux%u+%d+%d\n",
- plane_req->crtc_w, plane_req->crtc_h,
- plane_req->crtc_x, plane_req->crtc_y);
- return -ERANGE;
- }
-
/*
* First, find the plane, crtc, and fb objects. If not available,
* we don't bother to call the driver.
@@ -2775,6 +2804,23 @@
drm_mode_set_crtcinfo(mode, CRTC_INTERLACE_HALVE_V);
+ /*
+ * Check whether the primary plane supports the fb pixel format.
+ * Drivers not implementing the universal planes API use a
+ * default formats list provided by the DRM core which doesn't
+ * match real hardware capabilities. Skip the check in that
+ * case.
+ */
+ if (!crtc->primary->format_default) {
+ ret = drm_plane_check_pixel_format(crtc->primary,
+ fb->pixel_format);
+ if (ret) {
+ DRM_DEBUG_KMS("Invalid pixel format %s\n",
+ drm_get_format_name(fb->pixel_format));
+ goto out;
+ }
+ }
+
ret = drm_crtc_check_viewport(crtc, crtc_req->x, crtc_req->y,
mode, fb);
if (ret)
@@ -3252,6 +3298,12 @@
DRM_DEBUG_KMS("bad pitch %u for plane %d\n", r->pitches[i], i);
return -EINVAL;
}
+
+ if (r->modifier[i] && !(r->flags & DRM_MODE_FB_MODIFIERS)) {
+ DRM_DEBUG_KMS("bad fb modifier %llu for plane %d\n",
+ r->modifier[i], i);
+ return -EINVAL;
+ }
}
return 0;
@@ -3266,7 +3318,7 @@
struct drm_framebuffer *fb;
int ret;
- if (r->flags & ~DRM_MODE_FB_INTERLACED) {
+ if (r->flags & ~(DRM_MODE_FB_INTERLACED | DRM_MODE_FB_MODIFIERS)) {
DRM_DEBUG_KMS("bad framebuffer flags 0x%08x\n", r->flags);
return ERR_PTR(-EINVAL);
}
@@ -3282,6 +3334,12 @@
return ERR_PTR(-EINVAL);
}
+ if (r->flags & DRM_MODE_FB_MODIFIERS &&
+ !dev->mode_config.allow_fb_modifiers) {
+ DRM_DEBUG_KMS("driver does not support fb modifiers\n");
+ return ERR_PTR(-EINVAL);
+ }
+
ret = framebuffer_check(r);
if (ret)
return ERR_PTR(ret);
@@ -5543,6 +5601,7 @@
mutex_unlock(&dev->mode_config.idr_mutex);
return NULL;
}
+EXPORT_SYMBOL(drm_mode_get_tile_group);
/**
* drm_mode_create_tile_group - create a tile group from a displayid description
@@ -5581,3 +5640,4 @@
mutex_unlock(&dev->mode_config.idr_mutex);
return tg;
}
+EXPORT_SYMBOL(drm_mode_create_tile_group);
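The DRM_MODE_FB_MODIFIERS path added above is opt-in: the addfb2 path only accepts per-plane modifiers once the driver has set mode_config.allow_fb_modifiers, and conversely rejects stray modifier values when the flag is absent. A sketch of the driver-side opt-in, with the foo_modeset_init() name assumed for illustration; only the allow_fb_modifiers field itself comes from this patch.

#include <drm/drm_crtc.h>

static void foo_modeset_init(struct drm_device *dev)
{
	drm_mode_config_init(dev);

	/*
	 * Declare that this driver interprets fb->modifier[], so the
	 * addfb2 path stops rejecting DRM_MODE_FB_MODIFIERS requests
	 * from userspace.
	 */
	dev->mode_config.allow_fb_modifiers = true;
}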
diff --git a/drivers/gpu/drm/drm_crtc_helper.c b/drivers/gpu/drm/drm_crtc_helper.c
index b1979e7..ab00286 100644
--- a/drivers/gpu/drm/drm_crtc_helper.c
+++ b/drivers/gpu/drm/drm_crtc_helper.c
@@ -161,7 +161,7 @@
static void
drm_encoder_disable(struct drm_encoder *encoder)
{
- struct drm_encoder_helper_funcs *encoder_funcs = encoder->helper_private;
+ const struct drm_encoder_helper_funcs *encoder_funcs = encoder->helper_private;
if (encoder->bridge)
encoder->bridge->funcs->disable(encoder->bridge);
@@ -191,7 +191,7 @@
}
list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
- struct drm_crtc_helper_funcs *crtc_funcs = crtc->helper_private;
+ const struct drm_crtc_helper_funcs *crtc_funcs = crtc->helper_private;
crtc->enabled = drm_helper_crtc_in_use(crtc);
if (!crtc->enabled) {
if (crtc_funcs->disable)
@@ -229,7 +229,7 @@
static void
drm_crtc_prepare_encoders(struct drm_device *dev)
{
- struct drm_encoder_helper_funcs *encoder_funcs;
+ const struct drm_encoder_helper_funcs *encoder_funcs;
struct drm_encoder *encoder;
list_for_each_entry(encoder, &dev->mode_config.encoder_list, head) {
@@ -270,9 +270,9 @@
struct drm_framebuffer *old_fb)
{
struct drm_device *dev = crtc->dev;
- struct drm_display_mode *adjusted_mode, saved_mode;
- struct drm_crtc_helper_funcs *crtc_funcs = crtc->helper_private;
- struct drm_encoder_helper_funcs *encoder_funcs;
+ struct drm_display_mode *adjusted_mode, saved_mode, saved_hwmode;
+ const struct drm_crtc_helper_funcs *crtc_funcs = crtc->helper_private;
+ const struct drm_encoder_helper_funcs *encoder_funcs;
int saved_x, saved_y;
bool saved_enabled;
struct drm_encoder *encoder;
@@ -292,6 +292,7 @@
}
saved_mode = crtc->mode;
+ saved_hwmode = crtc->hwmode;
saved_x = crtc->x;
saved_y = crtc->y;
@@ -334,6 +335,8 @@
}
DRM_DEBUG_KMS("[CRTC:%d]\n", crtc->base.id);
+ crtc->hwmode = *adjusted_mode;
+
/* Prepare the encoders and CRTCs before setting the mode. */
list_for_each_entry(encoder, &dev->mode_config.encoder_list, head) {
@@ -396,9 +399,6 @@
encoder->bridge->funcs->enable(encoder->bridge);
}
- /* Store real post-adjustment hardware mode. */
- crtc->hwmode = *adjusted_mode;
-
/* Calculate and store various constants which
* are later needed by vblank and swap-completion
* timestamping. They are derived from true hwmode.
@@ -411,6 +411,7 @@
if (!ret) {
crtc->enabled = saved_enabled;
crtc->mode = saved_mode;
+ crtc->hwmode = saved_hwmode;
crtc->x = saved_x;
crtc->y = saved_y;
}
@@ -472,7 +473,7 @@
bool fb_changed = false; /* if true and !mode_changed just do a flip */
struct drm_connector *save_connectors, *connector;
int count = 0, ro, fail = 0;
- struct drm_crtc_helper_funcs *crtc_funcs;
+ const struct drm_crtc_helper_funcs *crtc_funcs;
struct drm_mode_set save_set;
int ret;
int i;
@@ -572,7 +573,7 @@
/* a) traverse passed in connector list and get encoders for them */
count = 0;
list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
- struct drm_connector_helper_funcs *connector_funcs =
+ const struct drm_connector_helper_funcs *connector_funcs =
connector->helper_private;
new_encoder = connector->encoder;
for (ro = 0; ro < set->num_connectors; ro++) {
@@ -732,7 +733,7 @@
static void drm_helper_encoder_dpms(struct drm_encoder *encoder, int mode)
{
struct drm_bridge *bridge = encoder->bridge;
- struct drm_encoder_helper_funcs *encoder_funcs;
+ const struct drm_encoder_helper_funcs *encoder_funcs;
if (bridge) {
if (mode == DRM_MODE_DPMS_ON)
@@ -794,7 +795,7 @@
/* from off to on, do crtc then encoder */
if (mode < old_dpms) {
if (crtc) {
- struct drm_crtc_helper_funcs *crtc_funcs = crtc->helper_private;
+ const struct drm_crtc_helper_funcs *crtc_funcs = crtc->helper_private;
if (crtc_funcs->dpms)
(*crtc_funcs->dpms) (crtc,
drm_helper_choose_crtc_dpms(crtc));
@@ -808,7 +809,7 @@
if (encoder)
drm_helper_encoder_dpms(encoder, encoder_dpms);
if (crtc) {
- struct drm_crtc_helper_funcs *crtc_funcs = crtc->helper_private;
+ const struct drm_crtc_helper_funcs *crtc_funcs = crtc->helper_private;
if (crtc_funcs->dpms)
(*crtc_funcs->dpms) (crtc,
drm_helper_choose_crtc_dpms(crtc));
@@ -837,6 +838,7 @@
for (i = 0; i < 4; i++) {
fb->pitches[i] = mode_cmd->pitches[i];
fb->offsets[i] = mode_cmd->offsets[i];
+ fb->modifier[i] = mode_cmd->modifier[i];
}
drm_fb_get_bpp_depth(mode_cmd->pixel_format, &fb->depth,
&fb->bits_per_pixel);
@@ -869,7 +871,7 @@
{
struct drm_crtc *crtc;
struct drm_encoder *encoder;
- struct drm_crtc_helper_funcs *crtc_funcs;
+ const struct drm_crtc_helper_funcs *crtc_funcs;
int encoder_dpms;
bool ret;
@@ -934,7 +936,7 @@
struct drm_framebuffer *old_fb)
{
struct drm_crtc_state *crtc_state;
- struct drm_crtc_helper_funcs *crtc_funcs = crtc->helper_private;
+ const struct drm_crtc_helper_funcs *crtc_funcs = crtc->helper_private;
int ret;
if (crtc->funcs->atomic_duplicate_state)
diff --git a/drivers/gpu/drm/drm_dp_helper.c b/drivers/gpu/drm/drm_dp_helper.c
index f128387..71dcbc6 100644
--- a/drivers/gpu/drm/drm_dp_helper.c
+++ b/drivers/gpu/drm/drm_dp_helper.c
@@ -427,11 +427,13 @@
* retrying the transaction as appropriate. It is assumed that the
* aux->transfer function does not modify anything in the msg other than the
* reply field.
+ *
+ * Returns bytes transferred on success, or a negative error code on failure.
*/
static int drm_dp_i2c_do_msg(struct drm_dp_aux *aux, struct drm_dp_aux_msg *msg)
{
unsigned int retry;
- int err;
+ int ret;
/*
* DP1.2 sections 2.7.7.1.5.6.1 and 2.7.7.1.6.6.1: A DP Source device
@@ -440,14 +442,14 @@
*/
for (retry = 0; retry < 7; retry++) {
mutex_lock(&aux->hw_mutex);
- err = aux->transfer(aux, msg);
+ ret = aux->transfer(aux, msg);
mutex_unlock(&aux->hw_mutex);
- if (err < 0) {
- if (err == -EBUSY)
+ if (ret < 0) {
+ if (ret == -EBUSY)
continue;
- DRM_DEBUG_KMS("transaction failed: %d\n", err);
- return err;
+ DRM_DEBUG_KMS("transaction failed: %d\n", ret);
+ return ret;
}
@@ -460,7 +462,7 @@
break;
case DP_AUX_NATIVE_REPLY_NACK:
- DRM_DEBUG_KMS("native nack\n");
+ DRM_DEBUG_KMS("native nack (result=%d, size=%zu)\n", ret, msg->size);
return -EREMOTEIO;
case DP_AUX_NATIVE_REPLY_DEFER:
@@ -488,12 +490,10 @@
* Both native ACK and I2C ACK replies received. We
* can assume the transfer was successful.
*/
- if (err < msg->size)
- return -EPROTO;
- return 0;
+ return ret;
case DP_AUX_I2C_REPLY_NACK:
- DRM_DEBUG_KMS("I2C nack\n");
+ DRM_DEBUG_KMS("I2C nack (result=%d, size=%zu\n", ret, msg->size);
aux->i2c_nack_count++;
return -EREMOTEIO;
@@ -513,14 +513,55 @@
return -EREMOTEIO;
}
+/*
+ * Keep retrying drm_dp_i2c_do_msg until all data has been transferred.
+ *
+ * Returns an error code on failure, or a recommended transfer size on success.
+ */
+static int drm_dp_i2c_drain_msg(struct drm_dp_aux *aux, struct drm_dp_aux_msg *orig_msg)
+{
+ int err, ret = orig_msg->size;
+ struct drm_dp_aux_msg msg = *orig_msg;
+
+ while (msg.size > 0) {
+ err = drm_dp_i2c_do_msg(aux, &msg);
+ if (err <= 0)
+ return err == 0 ? -EPROTO : err;
+
+ if (err < msg.size && err < ret) {
+ DRM_DEBUG_KMS("Partial I2C reply: requested %zu bytes got %d bytes\n",
+ msg.size, err);
+ ret = err;
+ }
+
+ msg.size -= err;
+ msg.buffer += err;
+ }
+
+ return ret;
+}
+
+/*
+ * Bizlink designed DP->DVI-D Dual Link adapters require the I2C over AUX
+ * packets to be as large as possible. If not, the I2C transactions never
+ * succeed. Hence the default is maximum.
+ */
+static int dp_aux_i2c_transfer_size __read_mostly = DP_AUX_MAX_PAYLOAD_BYTES;
+module_param_unsafe(dp_aux_i2c_transfer_size, int, 0644);
+MODULE_PARM_DESC(dp_aux_i2c_transfer_size,
+ "Number of bytes to transfer in a single I2C over DP AUX CH message, (1-16, default 16)");
+
static int drm_dp_i2c_xfer(struct i2c_adapter *adapter, struct i2c_msg *msgs,
int num)
{
struct drm_dp_aux *aux = adapter->algo_data;
unsigned int i, j;
+ unsigned transfer_size;
struct drm_dp_aux_msg msg;
int err = 0;
+ dp_aux_i2c_transfer_size = clamp(dp_aux_i2c_transfer_size, 1, DP_AUX_MAX_PAYLOAD_BYTES);
+
memset(&msg, 0, sizeof(msg));
for (i = 0; i < num; i++) {
@@ -538,20 +579,19 @@
err = drm_dp_i2c_do_msg(aux, &msg);
if (err < 0)
break;
- /*
- * Many hardware implementations support FIFOs larger than a
- * single byte, but it has been empirically determined that
- * transferring data in larger chunks can actually lead to
- * decreased performance. Therefore each message is simply
- * transferred byte-by-byte.
+ /* We want each transaction to be as large as possible, but
+ * we'll go to smaller sizes if the hardware gives us a
+ * short reply.
*/
- for (j = 0; j < msgs[i].len; j++) {
+ transfer_size = dp_aux_i2c_transfer_size;
+ for (j = 0; j < msgs[i].len; j += msg.size) {
msg.buffer = msgs[i].buf + j;
- msg.size = 1;
+ msg.size = min(transfer_size, msgs[i].len - j);
- err = drm_dp_i2c_do_msg(aux, &msg);
+ err = drm_dp_i2c_drain_msg(aux, &msg);
if (err < 0)
break;
+ transfer_size = err;
}
if (err < 0)
break;
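The transfer sizing change above relies on the aux->transfer hook honouring the documented contract: return the number of bytes actually moved (short replies included) or a negative errno, and drm_dp_i2c_drain_msg() keeps resubmitting until the whole chunk has gone through. A sketch of the driver side, where foo_aux_transfer() and struct foo_dp are assumptions:

#include <drm/drm_dp_helper.h>

struct foo_dp {
	struct device *dev;
	struct drm_dp_aux aux;
};

static ssize_t foo_aux_transfer(struct drm_dp_aux *aux,
				struct drm_dp_aux_msg *msg)
{
	/*
	 * Push msg->buffer / msg->size out on the AUX channel, fill in
	 * msg->reply, and return how many bytes really went through
	 * (short transfers are fine), or a negative errno.
	 */
	return -EIO;	/* placeholder for the hardware access */
}

static int foo_aux_init(struct foo_dp *fdp)
{
	fdp->aux.name = "foo DP AUX";
	fdp->aux.dev = fdp->dev;
	fdp->aux.transfer = foo_aux_transfer;

	return drm_dp_aux_register(&fdp->aux);
}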
diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c
index 379ab45..132581c 100644
--- a/drivers/gpu/drm/drm_dp_mst_topology.c
+++ b/drivers/gpu/drm/drm_dp_mst_topology.c
@@ -2324,6 +2324,19 @@
}
EXPORT_SYMBOL(drm_dp_mst_allocate_vcpi);
+int drm_dp_mst_get_vcpi_slots(struct drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_port *port)
+{
+ int slots = 0;
+ port = drm_dp_get_validated_port_ref(mgr, port);
+ if (!port)
+ return slots;
+
+ slots = port->vcpi.num_slots;
+ drm_dp_put_port(port);
+ return slots;
+}
+EXPORT_SYMBOL(drm_dp_mst_get_vcpi_slots);
+
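drm_dp_mst_get_vcpi_slots() gives drivers a validated way to read back how many timeslots were allocated for a port; it returns 0 when the port reference can no longer be taken. A hedged usage sketch, where the foo_* wrapper is an assumption:

#include <drm/drm_dp_mst_helper.h>

static bool foo_mst_stream_fits(struct drm_dp_mst_topology_mgr *mgr,
				struct drm_dp_mst_port *port,
				int required_slots)
{
	int slots = drm_dp_mst_get_vcpi_slots(mgr, port);

	/* slots == 0 also covers a port that has already gone away. */
	return slots >= required_slots;
}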
/**
* drm_dp_mst_reset_vcpi_slots() - Reset number of slots to 0 for VCPI
* @mgr: manager for this port
diff --git a/drivers/gpu/drm/drm_drv.c b/drivers/gpu/drm/drm_drv.c
index d512134..48f7359 100644
--- a/drivers/gpu/drm/drm_drv.c
+++ b/drivers/gpu/drm/drm_drv.c
@@ -70,7 +70,7 @@
vaf.fmt = format;
vaf.va = &args;
- printk(KERN_ERR "[" DRM_NAME ":%pf] *ERROR* %pV",
+ printk(KERN_ERR "[" DRM_NAME ":%ps] *ERROR* %pV",
__builtin_return_address(0), &vaf);
va_end(args);
diff --git a/drivers/gpu/drm/drm_fb_cma_helper.c b/drivers/gpu/drm/drm_fb_cma_helper.c
index cc0ae04..5c1aca4 100644
--- a/drivers/gpu/drm/drm_fb_cma_helper.c
+++ b/drivers/gpu/drm/drm_fb_cma_helper.c
@@ -304,7 +304,7 @@
}
drm_fb_helper_fill_fix(fbi, fb->pitches[0], fb->depth);
- drm_fb_helper_fill_var(fbi, helper, fb->width, fb->height);
+ drm_fb_helper_fill_var(fbi, helper, sizes->fb_width, sizes->fb_height);
offset = fbi->var.xoffset * bytes_per_pixel;
offset += fbi->var.yoffset * fb->pitches[0];
diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
index 1e6a0c7..cac4229 100644
--- a/drivers/gpu/drm/drm_fb_helper.c
+++ b/drivers/gpu/drm/drm_fb_helper.c
@@ -238,7 +238,7 @@
int drm_fb_helper_debug_enter(struct fb_info *info)
{
struct drm_fb_helper *helper = info->par;
- struct drm_crtc_helper_funcs *funcs;
+ const struct drm_crtc_helper_funcs *funcs;
int i;
list_for_each_entry(helper, &kernel_fb_helper_list, kernel_fb_list) {
@@ -285,7 +285,7 @@
{
struct drm_fb_helper *helper = info->par;
struct drm_crtc *crtc;
- struct drm_crtc_helper_funcs *funcs;
+ const struct drm_crtc_helper_funcs *funcs;
struct drm_framebuffer *fb;
int i;
@@ -765,7 +765,7 @@
{
struct drm_fb_helper *fb_helper = info->par;
struct drm_device *dev = fb_helper->dev;
- struct drm_crtc_helper_funcs *crtc_funcs;
+ const struct drm_crtc_helper_funcs *crtc_funcs;
u16 *red, *green, *blue, *transp;
struct drm_crtc *crtc;
int i, j, rc = 0;
@@ -1034,23 +1034,45 @@
crtc_count = 0;
for (i = 0; i < fb_helper->crtc_count; i++) {
struct drm_display_mode *desired_mode;
- int x, y;
+ struct drm_mode_set *mode_set;
+ int x, y, j;
+ /* In case of a tile group, are we the last tile vertically or horizontally?
+ * If there is no tile group, you are always the last one both vertically
+ * and horizontally.
+ */
+ bool lastv = true, lasth = true;
+
desired_mode = fb_helper->crtc_info[i].desired_mode;
+ mode_set = &fb_helper->crtc_info[i].mode_set;
+
+ if (!desired_mode)
+ continue;
+
+ crtc_count++;
+
x = fb_helper->crtc_info[i].x;
y = fb_helper->crtc_info[i].y;
- if (desired_mode) {
- if (gamma_size == 0)
- gamma_size = fb_helper->crtc_info[i].mode_set.crtc->gamma_size;
- if (desired_mode->hdisplay + x < sizes.fb_width)
- sizes.fb_width = desired_mode->hdisplay + x;
- if (desired_mode->vdisplay + y < sizes.fb_height)
- sizes.fb_height = desired_mode->vdisplay + y;
- if (desired_mode->hdisplay + x > sizes.surface_width)
- sizes.surface_width = desired_mode->hdisplay + x;
- if (desired_mode->vdisplay + y > sizes.surface_height)
- sizes.surface_height = desired_mode->vdisplay + y;
- crtc_count++;
+
+ if (gamma_size == 0)
+ gamma_size = fb_helper->crtc_info[i].mode_set.crtc->gamma_size;
+
+ sizes.surface_width = max_t(u32, desired_mode->hdisplay + x, sizes.surface_width);
+ sizes.surface_height = max_t(u32, desired_mode->vdisplay + y, sizes.surface_height);
+
+ for (j = 0; j < mode_set->num_connectors; j++) {
+ struct drm_connector *connector = mode_set->connectors[j];
+ if (connector->has_tile) {
+ lasth = (connector->tile_h_loc == (connector->num_h_tile - 1));
+ lastv = (connector->tile_v_loc == (connector->num_v_tile - 1));
+ /* cloning to multiple tiles is just crazy-talk, so: */
+ break;
+ }
}
+
+ if (lasth)
+ sizes.fb_width = min_t(u32, desired_mode->hdisplay + x, sizes.fb_width);
+ if (lastv)
+ sizes.fb_height = min_t(u32, desired_mode->vdisplay + y, sizes.fb_height);
}
if (crtc_count == 0 || sizes.fb_width == -1 || sizes.fb_height == -1) {
@@ -1261,12 +1283,12 @@
int width, int height)
{
struct drm_cmdline_mode *cmdline_mode;
- struct drm_display_mode *mode = NULL;
+ struct drm_display_mode *mode;
bool prefer_non_interlace;
cmdline_mode = &fb_helper_conn->connector->cmdline_mode;
if (cmdline_mode->specified == false)
- return mode;
+ return NULL;
/* attempt to find a matching mode in the list of modes
* we have gotten so far, if not add a CVT mode that conforms
@@ -1275,7 +1297,7 @@
goto create_mode;
prefer_non_interlace = !cmdline_mode->interlace;
- again:
+again:
list_for_each_entry(mode, &fb_helper_conn->connector->modes, head) {
/* check width/height */
if (mode->hdisplay != cmdline_mode->xres ||
@@ -1529,7 +1551,7 @@
int c, o;
struct drm_device *dev = fb_helper->dev;
struct drm_connector *connector;
- struct drm_connector_helper_funcs *connector_funcs;
+ const struct drm_connector_helper_funcs *connector_funcs;
struct drm_encoder *encoder;
int my_score, best_score, score;
struct drm_fb_helper_crtc **crtcs, *crtc;
diff --git a/drivers/gpu/drm/drm_info.c b/drivers/gpu/drm/drm_info.c
index f1b32f9..cbb4fc0 100644
--- a/drivers/gpu/drm/drm_info.c
+++ b/drivers/gpu/drm/drm_info.c
@@ -37,6 +37,7 @@
#include <drm/drmP.h>
#include <drm/drm_gem.h>
+#include "drm_internal.h"
#include "drm_legacy.h"
/**
diff --git a/drivers/gpu/drm/drm_ioc32.c b/drivers/gpu/drm/drm_ioc32.c
index 2f4c4343..aa8bbb4 100644
--- a/drivers/gpu/drm/drm_ioc32.c
+++ b/drivers/gpu/drm/drm_ioc32.c
@@ -1016,7 +1016,7 @@
return 0;
}
-drm_ioctl_compat_t *drm_compat_ioctls[] = {
+static drm_ioctl_compat_t *drm_compat_ioctls[] = {
[DRM_IOCTL_NR(DRM_IOCTL_VERSION32)] = compat_drm_version,
[DRM_IOCTL_NR(DRM_IOCTL_GET_UNIQUE32)] = compat_drm_getunique,
[DRM_IOCTL_NR(DRM_IOCTL_GET_MAP32)] = compat_drm_getmap,
diff --git a/drivers/gpu/drm/drm_ioctl.c b/drivers/gpu/drm/drm_ioctl.c
index 3785d66..266dcd6 100644
--- a/drivers/gpu/drm/drm_ioctl.c
+++ b/drivers/gpu/drm/drm_ioctl.c
@@ -321,6 +321,9 @@
else
req->value = 64;
break;
+ case DRM_CAP_ADDFB2_MODIFIERS:
+ req->value = dev->mode_config.allow_fb_modifiers;
+ break;
default:
return -EINVAL;
}
@@ -521,8 +524,13 @@
return 0;
}
-#define DRM_IOCTL_DEF(ioctl, _func, _flags) \
- [DRM_IOCTL_NR(ioctl)] = {.cmd = ioctl, .func = _func, .flags = _flags, .cmd_drv = 0, .name = #ioctl}
+#define DRM_IOCTL_DEF(ioctl, _func, _flags) \
+ [DRM_IOCTL_NR(ioctl)] = { \
+ .cmd = ioctl, \
+ .func = _func, \
+ .flags = _flags, \
+ .name = #ioctl \
+ }
/** Ioctl table */
static const struct drm_ioctl_desc drm_ioctls[] = {
@@ -660,39 +668,29 @@
int retcode = -EINVAL;
char stack_kdata[128];
char *kdata = NULL;
- unsigned int usize, asize;
+ unsigned int usize, asize, drv_size;
dev = file_priv->minor->dev;
if (drm_device_is_unplugged(dev))
return -ENODEV;
- if ((nr >= DRM_CORE_IOCTL_COUNT) &&
- ((nr < DRM_COMMAND_BASE) || (nr >= DRM_COMMAND_END)))
- goto err_i1;
- if ((nr >= DRM_COMMAND_BASE) && (nr < DRM_COMMAND_END) &&
- (nr < DRM_COMMAND_BASE + dev->driver->num_ioctls)) {
- u32 drv_size;
+ if (nr >= DRM_COMMAND_BASE && nr < DRM_COMMAND_END) {
+ /* driver ioctl */
+ if (nr - DRM_COMMAND_BASE >= dev->driver->num_ioctls)
+ goto err_i1;
ioctl = &dev->driver->ioctls[nr - DRM_COMMAND_BASE];
- drv_size = _IOC_SIZE(ioctl->cmd_drv);
- usize = asize = _IOC_SIZE(cmd);
- if (drv_size > asize)
- asize = drv_size;
- cmd = ioctl->cmd_drv;
- }
- else if ((nr >= DRM_COMMAND_END) || (nr < DRM_COMMAND_BASE)) {
- u32 drv_size;
-
+ } else {
+ /* core ioctl */
+ if (nr >= DRM_CORE_IOCTL_COUNT)
+ goto err_i1;
ioctl = &drm_ioctls[nr];
+ }
- drv_size = _IOC_SIZE(ioctl->cmd);
- usize = asize = _IOC_SIZE(cmd);
- if (drv_size > asize)
- asize = drv_size;
-
- cmd = ioctl->cmd;
- } else
- goto err_i1;
+ drv_size = _IOC_SIZE(ioctl->cmd);
+ usize = _IOC_SIZE(cmd);
+ asize = max(usize, drv_size);
+ cmd = ioctl->cmd;
DRM_DEBUG("pid=%d, dev=0x%lx, auth=%d, %s\n",
task_pid_nr(current),
@@ -773,12 +771,13 @@
*/
bool drm_ioctl_flags(unsigned int nr, unsigned int *flags)
{
- if ((nr >= DRM_COMMAND_END && nr < DRM_CORE_IOCTL_COUNT) ||
- (nr < DRM_COMMAND_BASE)) {
- *flags = drm_ioctls[nr].flags;
- return true;
- }
+ if (nr >= DRM_COMMAND_BASE && nr < DRM_COMMAND_END)
+ return false;
- return false;
+ if (nr >= DRM_CORE_IOCTL_COUNT)
+ return false;
+
+ *flags = drm_ioctls[nr].flags;
+ return true;
}
EXPORT_SYMBOL(drm_ioctl_flags);
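On the userspace side, the new DRM_CAP_ADDFB2_MODIFIERS capability is the advertised way to probe for modifier support before setting DRM_MODE_FB_MODIFIERS in an ADDFB2 request. A minimal sketch, assuming libdrm's drmGetCap():

#include <stdint.h>
#include <xf86drm.h>

static int supports_fb_modifiers(int drm_fd)
{
	uint64_t value = 0;

	if (drmGetCap(drm_fd, DRM_CAP_ADDFB2_MODIFIERS, &value))
		return 0;	/* old kernel: treat as unsupported */

	return value != 0;
}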
diff --git a/drivers/gpu/drm/drm_irq.c b/drivers/gpu/drm/drm_irq.c
index 10574a0..c8a3447 100644
--- a/drivers/gpu/drm/drm_irq.c
+++ b/drivers/gpu/drm/drm_irq.c
@@ -276,7 +276,6 @@
void drm_vblank_cleanup(struct drm_device *dev)
{
int crtc;
- unsigned long irqflags;
/* Bail if the driver didn't call drm_vblank_init() */
if (dev->num_crtcs == 0)
@@ -285,11 +284,10 @@
for (crtc = 0; crtc < dev->num_crtcs; crtc++) {
struct drm_vblank_crtc *vblank = &dev->vblank[crtc];
- del_timer_sync(&vblank->disable_timer);
+ WARN_ON(vblank->enabled &&
+ drm_core_check_feature(dev, DRIVER_MODESET));
- spin_lock_irqsave(&dev->vbl_lock, irqflags);
- vblank_disable_and_save(dev, crtc);
- spin_unlock_irqrestore(&dev->vbl_lock, irqflags);
+ del_timer_sync(&vblank->disable_timer);
}
kfree(dev->vblank);
@@ -475,17 +473,23 @@
dev->irq_enabled = false;
/*
- * Wake up any waiters so they don't hang.
+ * Wake up any waiters so they don't hang. This is just to paper over
+ * issues for UMS drivers which aren't in full control of their
+ * vblank/irq handling. KMS drivers must ensure that vblanks are all
+ * disabled when uninstalling the irq handler.
*/
if (dev->num_crtcs) {
spin_lock_irqsave(&dev->vbl_lock, irqflags);
for (i = 0; i < dev->num_crtcs; i++) {
struct drm_vblank_crtc *vblank = &dev->vblank[i];
+ if (!vblank->enabled)
+ continue;
+
+ WARN_ON(drm_core_check_feature(dev, DRIVER_MODESET));
+
+ vblank_disable_and_save(dev, i);
wake_up(&vblank->queue);
- vblank->enabled = false;
- vblank->last =
- dev->driver->get_vblank_counter(dev, i);
}
spin_unlock_irqrestore(&dev->vbl_lock, irqflags);
}
@@ -1052,7 +1056,7 @@
* Acquire a reference count on vblank events to avoid having them disabled
* while in use.
*
- * This is the native kms version of drm_vblank_off().
+ * This is the native kms version of drm_vblank_get().
*
* Returns:
* Zero on success, nonzero on failure.
@@ -1233,6 +1237,38 @@
EXPORT_SYMBOL(drm_crtc_vblank_off);
/**
+ * drm_crtc_vblank_reset - reset vblank state to off on a CRTC
+ * @crtc: CRTC in question
+ *
+ * Drivers can use this function to reset the vblank state to off at load time.
+ * Drivers should use this together with the drm_crtc_vblank_off() and
+ * drm_crtc_vblank_on() functions. The difference compared to
+ * drm_crtc_vblank_off() is that this function doesn't save the vblank counter
+ * and hence doesn't need to call any driver hooks.
+ */
+void drm_crtc_vblank_reset(struct drm_crtc *drm_crtc)
+{
+ struct drm_device *dev = drm_crtc->dev;
+ unsigned long irqflags;
+ int crtc = drm_crtc_index(drm_crtc);
+ struct drm_vblank_crtc *vblank = &dev->vblank[crtc];
+
+ spin_lock_irqsave(&dev->vbl_lock, irqflags);
+ /*
+ * Prevent subsequent drm_vblank_get() from enabling the vblank
+ * interrupt by bumping the refcount.
+ */
+ if (!vblank->inmodeset) {
+ atomic_inc(&vblank->refcount);
+ vblank->inmodeset = 1;
+ }
+ spin_unlock_irqrestore(&dev->vbl_lock, irqflags);
+
+ WARN_ON(!list_empty(&dev->vblank_event_list));
+}
+EXPORT_SYMBOL(drm_crtc_vblank_reset);
+
+/**
* drm_vblank_on - enable vblank events on a CRTC
* @dev: DRM device
* @crtc: CRTC in question
@@ -1653,7 +1689,7 @@
struct timeval tvblank;
unsigned long irqflags;
- if (!dev->num_crtcs)
+ if (WARN_ON_ONCE(!dev->num_crtcs))
return false;
if (WARN_ON(crtc >= dev->num_crtcs))
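drm_crtc_vblank_reset() is meant for the driver load path: after drm_vblank_init() it marks the vblank machinery as off without calling any driver hooks, so a later drm_crtc_vblank_on() starts from a consistent state. A sketch under that assumption; foo_vblank_init() is a made-up name.

#include <drm/drmP.h>

static int foo_vblank_init(struct drm_device *dev)
{
	struct drm_crtc *crtc;
	int ret;

	ret = drm_vblank_init(dev, dev->mode_config.num_crtc);
	if (ret)
		return ret;

	/* Hardware comes up with vblank interrupts disabled. */
	list_for_each_entry(crtc, &dev->mode_config.crtc_list, head)
		drm_crtc_vblank_reset(crtc);

	return 0;
}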
diff --git a/drivers/gpu/drm/drm_modes.c b/drivers/gpu/drm/drm_modes.c
index 487d0e3..213b11e 100644
--- a/drivers/gpu/drm/drm_modes.c
+++ b/drivers/gpu/drm/drm_modes.c
@@ -278,7 +278,7 @@
hblank = drm_mode->hdisplay * hblank_percentage /
(100 * HV_FACTOR - hblank_percentage);
hblank -= hblank % (2 * CVT_H_GRANULARITY);
- /* 14. find the total pixes per line */
+ /* 14. find the total pixels per line */
drm_mode->htotal = drm_mode->hdisplay + hblank;
drm_mode->hsync_end = drm_mode->hdisplay + hblank / 2;
drm_mode->hsync_start = drm_mode->hsync_end -
@@ -903,6 +903,12 @@
*/
bool drm_mode_equal(const struct drm_display_mode *mode1, const struct drm_display_mode *mode2)
{
+ if (!mode1 && !mode2)
+ return true;
+
+ if (!mode1 || !mode2)
+ return false;
+
/* do clock check convert to PICOS so fb modes get matched
* the same */
if (mode1->clock && mode2->clock) {
@@ -1148,7 +1154,7 @@
/**
* drm_mode_connector_list_update - update the mode list for the connector
* @connector: the connector to update
- * @merge_type_bits: whether to merge or overright type bits.
+ * @merge_type_bits: whether to merge or overwrite type bits
*
* This moves the modes from the @connector probed_modes list
* to the actual mode list. It compares the probed mode against the current
@@ -1209,7 +1215,7 @@
* <xres>x<yres>[M][R][-<bpp>][@<refresh>][i][m][eDd]
*
* The intermediate drm_cmdline_mode structure is required to store additional
- * options from the command line modline like the force-enabel/disable flag.
+ * options from the command line modeline like the force-enable/disable flag.
*
* Returns:
* True if a valid modeline has been parsed, false otherwise.
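With the NULL handling added to drm_mode_equal(), callers comparing an old and a new mode no longer need to special-case missing modes themselves: two NULL modes compare equal, one NULL and one real mode do not. A small sketch, where foo_mode_changed() is a made-up helper:

#include <drm/drm_modes.h>

static bool foo_mode_changed(const struct drm_display_mode *old_mode,
			     const struct drm_display_mode *new_mode)
{
	return !drm_mode_equal(old_mode, new_mode);
}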
diff --git a/drivers/gpu/drm/drm_of.c b/drivers/gpu/drm/drm_of.c
index 16150a0..aaa1307 100644
--- a/drivers/gpu/drm/drm_of.c
+++ b/drivers/gpu/drm/drm_of.c
@@ -43,14 +43,10 @@
uint32_t drm_of_find_possible_crtcs(struct drm_device *dev,
struct device_node *port)
{
- struct device_node *remote_port, *ep = NULL;
+ struct device_node *remote_port, *ep;
uint32_t possible_crtcs = 0;
- do {
- ep = of_graph_get_next_endpoint(port, ep);
- if (!ep)
- break;
-
+ for_each_endpoint_of_node(port, ep) {
remote_port = of_graph_get_remote_port(ep);
if (!remote_port) {
of_node_put(ep);
@@ -60,7 +56,7 @@
possible_crtcs |= drm_crtc_port_mask(dev, remote_port);
of_node_put(remote_port);
- } while (1);
+ }
return possible_crtcs;
}
diff --git a/drivers/gpu/drm/drm_pci.c b/drivers/gpu/drm/drm_pci.c
index fd29f03..1b1bd42 100644
--- a/drivers/gpu/drm/drm_pci.c
+++ b/drivers/gpu/drm/drm_pci.c
@@ -27,6 +27,7 @@
#include <linux/dma-mapping.h>
#include <linux/export.h>
#include <drm/drmP.h>
+#include "drm_internal.h"
#include "drm_legacy.h"
/**
diff --git a/drivers/gpu/drm/drm_plane_helper.c b/drivers/gpu/drm/drm_plane_helper.c
index 5ba5792..40c1db9 100644
--- a/drivers/gpu/drm/drm_plane_helper.c
+++ b/drivers/gpu/drm/drm_plane_helper.c
@@ -344,20 +344,7 @@
};
EXPORT_SYMBOL(drm_primary_helper_funcs);
-/**
- * drm_primary_helper_create_plane() - Create a generic primary plane
- * @dev: drm device
- * @formats: pixel formats supported, or NULL for a default safe list
- * @num_formats: size of @formats; ignored if @formats is NULL
- *
- * Allocates and initializes a primary plane that can be used with the primary
- * plane helpers. Drivers that wish to use driver-specific plane structures or
- * provide custom handler functions may perform their own allocation and
- * initialization rather than calling this function.
- */
-struct drm_plane *drm_primary_helper_create_plane(struct drm_device *dev,
- const uint32_t *formats,
- int num_formats)
+static struct drm_plane *create_primary_plane(struct drm_device *dev)
{
struct drm_plane *primary;
int ret;
@@ -368,15 +355,17 @@
return NULL;
}
- if (formats == NULL) {
- formats = safe_modeset_formats;
- num_formats = ARRAY_SIZE(safe_modeset_formats);
- }
+ /*
+ * Remove the format_default field from drm_plane when dropping
+ * this helper.
+ */
+ primary->format_default = true;
/* possible_crtc's will be filled in later by crtc_init */
ret = drm_universal_plane_init(dev, primary, 0,
&drm_primary_helper_funcs,
- formats, num_formats,
+ safe_modeset_formats,
+ ARRAY_SIZE(safe_modeset_formats),
DRM_PLANE_TYPE_PRIMARY);
if (ret) {
kfree(primary);
@@ -385,7 +374,6 @@
return primary;
}
-EXPORT_SYMBOL(drm_primary_helper_create_plane);
/**
* drm_crtc_init - Legacy CRTC initialization function
@@ -404,7 +392,7 @@
{
struct drm_plane *primary;
- primary = drm_primary_helper_create_plane(dev, NULL, 0);
+ primary = create_primary_plane(dev);
return drm_crtc_init_with_planes(dev, crtc, primary, NULL, funcs);
}
EXPORT_SYMBOL(drm_crtc_init);
@@ -413,9 +401,9 @@
struct drm_plane_state *plane_state,
struct drm_framebuffer *old_fb)
{
- struct drm_plane_helper_funcs *plane_funcs;
+ const struct drm_plane_helper_funcs *plane_funcs;
struct drm_crtc *crtc[2];
- struct drm_crtc_helper_funcs *crtc_funcs[2];
+ const struct drm_crtc_helper_funcs *crtc_funcs[2];
int i, ret = 0;
plane_funcs = plane->helper_private;
@@ -437,7 +425,8 @@
if (plane_funcs->prepare_fb && plane_state->fb &&
plane_state->fb != old_fb) {
- ret = plane_funcs->prepare_fb(plane, plane_state->fb);
+ ret = plane_funcs->prepare_fb(plane, plane_state->fb,
+ plane_state);
if (ret)
goto out;
}
@@ -487,7 +476,7 @@
}
if (plane_funcs->cleanup_fb && old_fb)
- plane_funcs->cleanup_fb(plane, old_fb);
+ plane_funcs->cleanup_fb(plane, old_fb, plane_state);
out:
if (plane_state) {
if (plane->funcs->atomic_destroy_state)
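With this series prepare_fb()/cleanup_fb() receive the plane state as an extra argument, so drivers can look at the full commit context while pinning buffers. A sketch of a driver adapting to the new hooks; the foo_* names are assumptions and the exact prototype is the one in the plane helper header:

#include <drm/drm_plane_helper.h>

static int foo_plane_prepare_fb(struct drm_plane *plane,
				struct drm_framebuffer *fb,
				const struct drm_plane_state *new_state)
{
	/* Pin/map the buffers backing @fb for scanout here. */
	return 0;
}

static void foo_plane_cleanup_fb(struct drm_plane *plane,
				 struct drm_framebuffer *fb,
				 const struct drm_plane_state *old_state)
{
	/* Undo whatever prepare_fb() did for @fb. */
}

static const struct drm_plane_helper_funcs foo_plane_helper_funcs = {
	.prepare_fb = foo_plane_prepare_fb,
	.cleanup_fb = foo_plane_cleanup_fb,
};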
diff --git a/drivers/gpu/drm/drm_probe_helper.c b/drivers/gpu/drm/drm_probe_helper.c
index 3fee587..6350387 100644
--- a/drivers/gpu/drm/drm_probe_helper.c
+++ b/drivers/gpu/drm/drm_probe_helper.c
@@ -98,7 +98,7 @@
{
struct drm_device *dev = connector->dev;
struct drm_display_mode *mode;
- struct drm_connector_helper_funcs *connector_funcs =
+ const struct drm_connector_helper_funcs *connector_funcs =
connector->helper_private;
int count = 0;
int mode_flags = 0;
diff --git a/drivers/gpu/drm/drm_sysfs.c b/drivers/gpu/drm/drm_sysfs.c
index 5c99d37..ffc305f 100644
--- a/drivers/gpu/drm/drm_sysfs.c
+++ b/drivers/gpu/drm/drm_sysfs.c
@@ -166,23 +166,68 @@
/*
* Connector properties
*/
+static ssize_t status_store(struct device *device,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct drm_connector *connector = to_drm_connector(device);
+ struct drm_device *dev = connector->dev;
+ enum drm_connector_status old_status;
+ int ret;
+
+ ret = mutex_lock_interruptible(&dev->mode_config.mutex);
+ if (ret)
+ return ret;
+
+ old_status = connector->status;
+
+ if (sysfs_streq(buf, "detect")) {
+ connector->force = 0;
+ connector->status = connector->funcs->detect(connector, true);
+ } else if (sysfs_streq(buf, "on")) {
+ connector->force = DRM_FORCE_ON;
+ } else if (sysfs_streq(buf, "on-digital")) {
+ connector->force = DRM_FORCE_ON_DIGITAL;
+ } else if (sysfs_streq(buf, "off")) {
+ connector->force = DRM_FORCE_OFF;
+ } else
+ ret = -EINVAL;
+
+ if (ret == 0 && connector->force) {
+ if (connector->force == DRM_FORCE_ON ||
+ connector->force == DRM_FORCE_ON_DIGITAL)
+ connector->status = connector_status_connected;
+ else
+ connector->status = connector_status_disconnected;
+ if (connector->funcs->force)
+ connector->funcs->force(connector);
+ }
+
+ if (old_status != connector->status) {
+ DRM_DEBUG_KMS("[CONNECTOR:%d:%s] status updated from %d to %d\n",
+ connector->base.id,
+ connector->name,
+ old_status, connector->status);
+
+ dev->mode_config.delayed_event = true;
+ if (dev->mode_config.poll_enabled)
+ schedule_delayed_work(&dev->mode_config.output_poll_work,
+ 0);
+ }
+
+ mutex_unlock(&dev->mode_config.mutex);
+
+ return ret;
+}
+
static ssize_t status_show(struct device *device,
struct device_attribute *attr,
char *buf)
{
struct drm_connector *connector = to_drm_connector(device);
- enum drm_connector_status status;
- int ret;
-
- ret = mutex_lock_interruptible(&connector->dev->mode_config.mutex);
- if (ret)
- return ret;
-
- status = connector->funcs->detect(connector, true);
- mutex_unlock(&connector->dev->mode_config.mutex);
return snprintf(buf, PAGE_SIZE, "%s\n",
- drm_get_connector_status_name(status));
+ drm_get_connector_status_name(connector->status));
}
static ssize_t dpms_show(struct device *device,
@@ -339,7 +384,7 @@
drm_get_dvi_i_select_name((int)subconnector));
}
-static DEVICE_ATTR_RO(status);
+static DEVICE_ATTR_RW(status);
static DEVICE_ATTR_RO(enabled);
static DEVICE_ATTR_RO(dpms);
static DEVICE_ATTR_RO(modes);
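The status attribute is now writable, which lets userspace force a connector state or trigger a re-detect without a modeset ioctl; the values accepted by the handler above are "detect", "on", "on-digital" and "off". A userspace sketch in C, where the sysfs path is only an example:

#include <fcntl.h>
#include <unistd.h>

static int force_connector_on(const char *path)
{
	/* e.g. path = "/sys/class/drm/card0-HDMI-A-1/status" */
	int fd = open(path, O_WRONLY);
	ssize_t n;

	if (fd < 0)
		return -1;

	n = write(fd, "on", 2);
	close(fd);

	return n == 2 ? 0 : -1;
}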
diff --git a/drivers/gpu/drm/drm_vm.c b/drivers/gpu/drm/drm_vm.c
index 4a2c328..aab49ee 100644
--- a/drivers/gpu/drm/drm_vm.c
+++ b/drivers/gpu/drm/drm_vm.c
@@ -41,6 +41,7 @@
#include <linux/slab.h>
#endif
#include <asm/pgtable.h>
+#include "drm_internal.h"
#include "drm_legacy.h"
struct drm_vma_entry {
diff --git a/drivers/gpu/drm/exynos/exynos7_drm_decon.c b/drivers/gpu/drm/exynos/exynos7_drm_decon.c
index 9700461..1f7e33f 100644
--- a/drivers/gpu/drm/exynos/exynos7_drm_decon.c
+++ b/drivers/gpu/drm/exynos/exynos7_drm_decon.c
@@ -28,6 +28,7 @@
#include <video/exynos7_decon.h>
#include "exynos_drm_crtc.h"
+#include "exynos_drm_plane.h"
#include "exynos_drm_drv.h"
#include "exynos_drm_fbdev.h"
#include "exynos_drm_iommu.h"
@@ -41,32 +42,16 @@
#define WINDOWS_NR 2
-struct decon_win_data {
- unsigned int ovl_x;
- unsigned int ovl_y;
- unsigned int offset_x;
- unsigned int offset_y;
- unsigned int ovl_width;
- unsigned int ovl_height;
- unsigned int fb_width;
- unsigned int fb_height;
- unsigned int bpp;
- unsigned int pixel_format;
- dma_addr_t dma_addr;
- bool enabled;
- bool resume;
-};
-
struct decon_context {
struct device *dev;
struct drm_device *drm_dev;
struct exynos_drm_crtc *crtc;
+ struct exynos_drm_plane planes[WINDOWS_NR];
struct clk *pclk;
struct clk *aclk;
struct clk *eclk;
struct clk *vclk;
void __iomem *regs;
- struct decon_win_data win_data[WINDOWS_NR];
unsigned int default_win;
unsigned long irq_flags;
bool i80_if;
@@ -296,59 +281,16 @@
}
}
-static void decon_win_mode_set(struct exynos_drm_crtc *crtc,
- struct exynos_drm_plane *plane)
-{
- struct decon_context *ctx = crtc->ctx;
- struct decon_win_data *win_data;
- int win, padding;
-
- if (!plane) {
- DRM_ERROR("plane is NULL\n");
- return;
- }
-
- win = plane->zpos;
- if (win == DEFAULT_ZPOS)
- win = ctx->default_win;
-
- if (win < 0 || win >= WINDOWS_NR)
- return;
-
-
- win_data = &ctx->win_data[win];
-
- padding = (plane->pitch / (plane->bpp >> 3)) - plane->fb_width;
- win_data->offset_x = plane->fb_x;
- win_data->offset_y = plane->fb_y;
- win_data->fb_width = plane->fb_width + padding;
- win_data->fb_height = plane->fb_height;
- win_data->ovl_x = plane->crtc_x;
- win_data->ovl_y = plane->crtc_y;
- win_data->ovl_width = plane->crtc_width;
- win_data->ovl_height = plane->crtc_height;
- win_data->dma_addr = plane->dma_addr[0];
- win_data->bpp = plane->bpp;
- win_data->pixel_format = plane->pixel_format;
-
- DRM_DEBUG_KMS("offset_x = %d, offset_y = %d\n",
- win_data->offset_x, win_data->offset_y);
- DRM_DEBUG_KMS("ovl_width = %d, ovl_height = %d\n",
- win_data->ovl_width, win_data->ovl_height);
- DRM_DEBUG_KMS("paddr = 0x%lx\n", (unsigned long)win_data->dma_addr);
- DRM_DEBUG_KMS("fb_width = %d, crtc_width = %d\n",
- plane->fb_width, plane->crtc_width);
-}
-
static void decon_win_set_pixfmt(struct decon_context *ctx, unsigned int win)
{
- struct decon_win_data *win_data = &ctx->win_data[win];
+ struct exynos_drm_plane *plane = &ctx->planes[win];
unsigned long val;
+ int padding;
val = readl(ctx->regs + WINCON(win));
val &= ~WINCONx_BPPMODE_MASK;
- switch (win_data->pixel_format) {
+ switch (plane->pixel_format) {
case DRM_FORMAT_RGB565:
val |= WINCONx_BPPMODE_16BPP_565;
val |= WINCONx_BURSTLEN_16WORD;
@@ -397,7 +339,7 @@
break;
}
- DRM_DEBUG_KMS("bpp = %d\n", win_data->bpp);
+ DRM_DEBUG_KMS("bpp = %d\n", plane->bpp);
/*
* In case of exynos, setting dma-burst to 16Word causes permanent
@@ -407,7 +349,8 @@
* movement causes unstable DMA which results into iommu crash/tear.
*/
- if (win_data->fb_width < MIN_FB_WIDTH_FOR_16WORD_BURST) {
+ padding = (plane->pitch / (plane->bpp >> 3)) - plane->fb_width;
+ if (plane->fb_width + padding < MIN_FB_WIDTH_FOR_16WORD_BURST) {
val &= ~WINCONx_BURSTLEN_MASK;
val |= WINCONx_BURSTLEN_8WORD;
}
@@ -435,7 +378,7 @@
* @protect: 1 to protect (disable updates)
*/
static void decon_shadow_protect_win(struct decon_context *ctx,
- int win, bool protect)
+ unsigned int win, bool protect)
{
u32 bits, val;
@@ -449,12 +392,12 @@
writel(val, ctx->regs + SHADOWCON);
}
-static void decon_win_commit(struct exynos_drm_crtc *crtc, int zpos)
+static void decon_win_commit(struct exynos_drm_crtc *crtc, unsigned int win)
{
struct decon_context *ctx = crtc->ctx;
struct drm_display_mode *mode = &crtc->base.mode;
- struct decon_win_data *win_data;
- int win = zpos;
+ struct exynos_drm_plane *plane;
+ int padding;
unsigned long val, alpha;
unsigned int last_x;
unsigned int last_y;
@@ -462,17 +405,14 @@
if (ctx->suspended)
return;
- if (win == DEFAULT_ZPOS)
- win = ctx->default_win;
-
if (win < 0 || win >= WINDOWS_NR)
return;
- win_data = &ctx->win_data[win];
+ plane = &ctx->planes[win];
/* If suspended, enable this on resume */
if (ctx->suspended) {
- win_data->resume = true;
+ plane->resume = true;
return;
}
@@ -490,39 +430,41 @@
decon_shadow_protect_win(ctx, win, true);
/* buffer start address */
- val = (unsigned long)win_data->dma_addr;
+ val = (unsigned long)plane->dma_addr[0];
writel(val, ctx->regs + VIDW_BUF_START(win));
+ padding = (plane->pitch / (plane->bpp >> 3)) - plane->fb_width;
+
/* buffer size */
- writel(win_data->fb_width, ctx->regs + VIDW_WHOLE_X(win));
- writel(win_data->fb_height, ctx->regs + VIDW_WHOLE_Y(win));
+ writel(plane->fb_width + padding, ctx->regs + VIDW_WHOLE_X(win));
+ writel(plane->fb_height, ctx->regs + VIDW_WHOLE_Y(win));
/* offset from the start of the buffer to read */
- writel(win_data->offset_x, ctx->regs + VIDW_OFFSET_X(win));
- writel(win_data->offset_y, ctx->regs + VIDW_OFFSET_Y(win));
+ writel(plane->src_x, ctx->regs + VIDW_OFFSET_X(win));
+ writel(plane->src_y, ctx->regs + VIDW_OFFSET_Y(win));
DRM_DEBUG_KMS("start addr = 0x%lx\n",
- (unsigned long)win_data->dma_addr);
+ (unsigned long)val);
DRM_DEBUG_KMS("ovl_width = %d, ovl_height = %d\n",
- win_data->ovl_width, win_data->ovl_height);
+ plane->crtc_width, plane->crtc_height);
/*
* OSD position.
* In case the window layout goes beyond the LCD layout, DECON fails.
*/
- if ((win_data->ovl_x + win_data->ovl_width) > mode->hdisplay)
- win_data->ovl_x = mode->hdisplay - win_data->ovl_width;
- if ((win_data->ovl_y + win_data->ovl_height) > mode->vdisplay)
- win_data->ovl_y = mode->vdisplay - win_data->ovl_height;
+ if ((plane->crtc_x + plane->crtc_width) > mode->hdisplay)
+ plane->crtc_x = mode->hdisplay - plane->crtc_width;
+ if ((plane->crtc_y + plane->crtc_height) > mode->vdisplay)
+ plane->crtc_y = mode->vdisplay - plane->crtc_height;
- val = VIDOSDxA_TOPLEFT_X(win_data->ovl_x) |
- VIDOSDxA_TOPLEFT_Y(win_data->ovl_y);
+ val = VIDOSDxA_TOPLEFT_X(plane->crtc_x) |
+ VIDOSDxA_TOPLEFT_Y(plane->crtc_y);
writel(val, ctx->regs + VIDOSD_A(win));
- last_x = win_data->ovl_x + win_data->ovl_width;
+ last_x = plane->crtc_x + plane->crtc_width;
if (last_x)
last_x--;
- last_y = win_data->ovl_y + win_data->ovl_height;
+ last_y = plane->crtc_y + plane->crtc_height;
if (last_y)
last_y--;
@@ -531,7 +473,7 @@
writel(val, ctx->regs + VIDOSD_B(win));
DRM_DEBUG_KMS("osd pos: tx = %d, ty = %d, bx = %d, by = %d\n",
- win_data->ovl_x, win_data->ovl_y, last_x, last_y);
+ plane->crtc_x, plane->crtc_y, last_x, last_y);
/* OSD alpha */
alpha = VIDOSDxC_ALPHA0_R_F(0x0) |
@@ -565,27 +507,23 @@
val |= DECON_UPDATE_STANDALONE_F;
writel(val, ctx->regs + DECON_UPDATE);
- win_data->enabled = true;
+ plane->enabled = true;
}
-static void decon_win_disable(struct exynos_drm_crtc *crtc, int zpos)
+static void decon_win_disable(struct exynos_drm_crtc *crtc, unsigned int win)
{
struct decon_context *ctx = crtc->ctx;
- struct decon_win_data *win_data;
- int win = zpos;
+ struct exynos_drm_plane *plane;
u32 val;
- if (win == DEFAULT_ZPOS)
- win = ctx->default_win;
-
if (win < 0 || win >= WINDOWS_NR)
return;
- win_data = &ctx->win_data[win];
+ plane = &ctx->planes[win];
if (ctx->suspended) {
/* do not resume this window*/
- win_data->resume = false;
+ plane->resume = false;
return;
}
@@ -604,42 +542,42 @@
val |= DECON_UPDATE_STANDALONE_F;
writel(val, ctx->regs + DECON_UPDATE);
- win_data->enabled = false;
+ plane->enabled = false;
}
static void decon_window_suspend(struct decon_context *ctx)
{
- struct decon_win_data *win_data;
+ struct exynos_drm_plane *plane;
int i;
for (i = 0; i < WINDOWS_NR; i++) {
- win_data = &ctx->win_data[i];
- win_data->resume = win_data->enabled;
- if (win_data->enabled)
+ plane = &ctx->planes[i];
+ plane->resume = plane->enabled;
+ if (plane->enabled)
decon_win_disable(ctx->crtc, i);
}
}
static void decon_window_resume(struct decon_context *ctx)
{
- struct decon_win_data *win_data;
+ struct exynos_drm_plane *plane;
int i;
for (i = 0; i < WINDOWS_NR; i++) {
- win_data = &ctx->win_data[i];
- win_data->enabled = win_data->resume;
- win_data->resume = false;
+ plane = &ctx->planes[i];
+ plane->enabled = plane->resume;
+ plane->resume = false;
}
}
static void decon_apply(struct decon_context *ctx)
{
- struct decon_win_data *win_data;
+ struct exynos_drm_plane *plane;
int i;
for (i = 0; i < WINDOWS_NR; i++) {
- win_data = &ctx->win_data[i];
- if (win_data->enabled)
+ plane = &ctx->planes[i];
+ if (plane->enabled)
decon_win_commit(ctx->crtc, i);
else
decon_win_disable(ctx->crtc, i);
@@ -779,7 +717,6 @@
.enable_vblank = decon_enable_vblank,
.disable_vblank = decon_disable_vblank,
.wait_for_vblank = decon_wait_for_vblank,
- .win_mode_set = decon_win_mode_set,
.win_commit = decon_win_commit,
.win_disable = decon_win_disable,
};
@@ -818,6 +755,9 @@
{
struct decon_context *ctx = dev_get_drvdata(dev);
struct drm_device *drm_dev = data;
+ struct exynos_drm_plane *exynos_plane;
+ enum drm_plane_type type;
+ unsigned int zpos;
int ret;
ret = decon_ctx_initialize(ctx, drm_dev);
@@ -826,8 +766,18 @@
return ret;
}
- ctx->crtc = exynos_drm_crtc_create(drm_dev, ctx->pipe,
- EXYNOS_DISPLAY_TYPE_LCD,
+ for (zpos = 0; zpos < WINDOWS_NR; zpos++) {
+ type = (zpos == ctx->default_win) ? DRM_PLANE_TYPE_PRIMARY :
+ DRM_PLANE_TYPE_OVERLAY;
+ ret = exynos_plane_init(drm_dev, &ctx->planes[zpos],
+ 1 << ctx->pipe, type, zpos);
+ if (ret)
+ return ret;
+ }
+
+ exynos_plane = &ctx->planes[ctx->default_win];
+ ctx->crtc = exynos_drm_crtc_create(drm_dev, &exynos_plane->base,
+ ctx->pipe, EXYNOS_DISPLAY_TYPE_LCD,
&decon_crtc_ops, ctx);
if (IS_ERR(ctx->crtc)) {
decon_ctx_remove(ctx);
diff --git a/drivers/gpu/drm/exynos/exynos_dp_core.c b/drivers/gpu/drm/exynos/exynos_dp_core.c
index bf17a60..1dbfba5 100644
--- a/drivers/gpu/drm/exynos/exynos_dp_core.c
+++ b/drivers/gpu/drm/exynos/exynos_dp_core.c
@@ -32,10 +32,16 @@
#include <drm/bridge/ptn3460.h>
#include "exynos_dp_core.h"
+#include "exynos_drm_fimd.h"
#define ctx_from_connector(c) container_of(c, struct exynos_dp_device, \
connector)
+static inline struct exynos_drm_crtc *dp_to_crtc(struct exynos_dp_device *dp)
+{
+ return to_exynos_crtc(dp->encoder->crtc);
+}
+
static inline struct exynos_dp_device *
display_to_dp(struct exynos_drm_display *d)
{
@@ -1070,6 +1076,8 @@
}
}
+ fimd_dp_clock_enable(dp_to_crtc(dp), true);
+
clk_prepare_enable(dp->clock);
exynos_dp_phy_init(dp);
exynos_dp_init_dp(dp);
@@ -1094,6 +1102,8 @@
exynos_dp_phy_exit(dp);
clk_disable_unprepare(dp->clock);
+ fimd_dp_clock_enable(dp_to_crtc(dp), false);
+
if (dp->panel) {
if (drm_panel_unprepare(dp->panel))
DRM_ERROR("failed to turnoff the panel\n");
diff --git a/drivers/gpu/drm/exynos/exynos_drm_crtc.c b/drivers/gpu/drm/exynos/exynos_drm_crtc.c
index 48ccab7..eb49195 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_crtc.c
+++ b/drivers/gpu/drm/exynos/exynos_drm_crtc.c
@@ -34,9 +34,8 @@
if (mode > DRM_MODE_DPMS_ON) {
/* wait for the completion of page flip. */
if (!wait_event_timeout(exynos_crtc->pending_flip_queue,
- !atomic_read(&exynos_crtc->pending_flip),
- HZ/20))
- atomic_set(&exynos_crtc->pending_flip, 0);
+ (exynos_crtc->event == NULL), HZ/20))
+ exynos_crtc->event = NULL;
drm_crtc_vblank_off(crtc);
}
@@ -164,11 +163,10 @@
uint32_t page_flip_flags)
{
struct drm_device *dev = crtc->dev;
- struct exynos_drm_private *dev_priv = dev->dev_private;
struct exynos_drm_crtc *exynos_crtc = to_exynos_crtc(crtc);
struct drm_framebuffer *old_fb = crtc->primary->fb;
unsigned int crtc_w, crtc_h;
- int ret = -EINVAL;
+ int ret;
/* when the page flip is requested, crtc's dpms should be on */
if (exynos_crtc->dpms > DRM_MODE_DPMS_ON) {
@@ -176,48 +174,49 @@
return -EINVAL;
}
- mutex_lock(&dev->struct_mutex);
+ if (!event)
+ return -EINVAL;
- if (event) {
- /*
- * the pipe from user always is 0 so we can set pipe number
- * of current owner to event.
- */
- event->pipe = exynos_crtc->pipe;
-
- ret = drm_vblank_get(dev, exynos_crtc->pipe);
- if (ret) {
- DRM_DEBUG("failed to acquire vblank counter\n");
-
- goto out;
- }
-
- spin_lock_irq(&dev->event_lock);
- list_add_tail(&event->base.link,
- &dev_priv->pageflip_event_list);
- atomic_set(&exynos_crtc->pending_flip, 1);
- spin_unlock_irq(&dev->event_lock);
-
- crtc->primary->fb = fb;
- crtc_w = fb->width - crtc->x;
- crtc_h = fb->height - crtc->y;
- ret = exynos_update_plane(crtc->primary, crtc, fb, 0, 0,
- crtc_w, crtc_h, crtc->x, crtc->y,
- crtc_w, crtc_h);
- if (ret) {
- crtc->primary->fb = old_fb;
-
- spin_lock_irq(&dev->event_lock);
- drm_vblank_put(dev, exynos_crtc->pipe);
- list_del(&event->base.link);
- atomic_set(&exynos_crtc->pending_flip, 0);
- spin_unlock_irq(&dev->event_lock);
-
- goto out;
- }
+ spin_lock_irq(&dev->event_lock);
+ if (exynos_crtc->event) {
+ ret = -EBUSY;
+ goto out;
}
+
+ ret = drm_vblank_get(dev, exynos_crtc->pipe);
+ if (ret) {
+ DRM_DEBUG("failed to acquire vblank counter\n");
+ goto out;
+ }
+
+ exynos_crtc->event = event;
+ spin_unlock_irq(&dev->event_lock);
+
+ /*
+ * the pipe from user always is 0 so we can set pipe number
+ * of current owner to event.
+ */
+ event->pipe = exynos_crtc->pipe;
+
+ crtc->primary->fb = fb;
+ crtc_w = fb->width - crtc->x;
+ crtc_h = fb->height - crtc->y;
+ ret = exynos_update_plane(crtc->primary, crtc, fb, 0, 0,
+ crtc_w, crtc_h, crtc->x, crtc->y,
+ crtc_w, crtc_h);
+ if (ret) {
+ crtc->primary->fb = old_fb;
+ spin_lock_irq(&dev->event_lock);
+ exynos_crtc->event = NULL;
+ drm_vblank_put(dev, exynos_crtc->pipe);
+ spin_unlock_irq(&dev->event_lock);
+ return ret;
+ }
+
+ return 0;
+
out:
- mutex_unlock(&dev->struct_mutex);
+ spin_unlock_irq(&dev->event_lock);
return ret;
}
@@ -239,13 +238,13 @@
};
struct exynos_drm_crtc *exynos_drm_crtc_create(struct drm_device *drm_dev,
+ struct drm_plane *plane,
int pipe,
enum exynos_drm_output_type type,
struct exynos_drm_crtc_ops *ops,
void *ctx)
{
struct exynos_drm_crtc *exynos_crtc;
- struct drm_plane *plane;
struct exynos_drm_private *private = drm_dev->dev_private;
struct drm_crtc *crtc;
int ret;
@@ -255,19 +254,12 @@
return ERR_PTR(-ENOMEM);
init_waitqueue_head(&exynos_crtc->pending_flip_queue);
- atomic_set(&exynos_crtc->pending_flip, 0);
exynos_crtc->dpms = DRM_MODE_DPMS_OFF;
exynos_crtc->pipe = pipe;
exynos_crtc->type = type;
exynos_crtc->ops = ops;
exynos_crtc->ctx = ctx;
- plane = exynos_plane_init(drm_dev, 1 << pipe,
- DRM_PLANE_TYPE_PRIMARY);
- if (IS_ERR(plane)) {
- ret = PTR_ERR(plane);
- goto err_plane;
- }
crtc = &exynos_crtc->base;
@@ -284,7 +276,6 @@
err_crtc:
plane->funcs->destroy(plane);
-err_plane:
kfree(exynos_crtc);
return ERR_PTR(ret);
}
@@ -320,26 +311,20 @@
void exynos_drm_crtc_finish_pageflip(struct drm_device *dev, int pipe)
{
struct exynos_drm_private *dev_priv = dev->dev_private;
- struct drm_pending_vblank_event *e, *t;
struct drm_crtc *drm_crtc = dev_priv->crtc[pipe];
struct exynos_drm_crtc *exynos_crtc = to_exynos_crtc(drm_crtc);
unsigned long flags;
spin_lock_irqsave(&dev->event_lock, flags);
+ if (exynos_crtc->event) {
- list_for_each_entry_safe(e, t, &dev_priv->pageflip_event_list,
- base.link) {
- /* if event's pipe isn't same as crtc then ignore it. */
- if (pipe != e->pipe)
- continue;
-
- list_del(&e->base.link);
- drm_send_vblank_event(dev, -1, e);
+ drm_send_vblank_event(dev, -1, exynos_crtc->event);
drm_vblank_put(dev, pipe);
- atomic_set(&exynos_crtc->pending_flip, 0);
wake_up(&exynos_crtc->pending_flip_queue);
+
}
+ exynos_crtc->event = NULL;
spin_unlock_irqrestore(&dev->event_lock, flags);
}
diff --git a/drivers/gpu/drm/exynos/exynos_drm_crtc.h b/drivers/gpu/drm/exynos/exynos_drm_crtc.h
index 6258b80..0ecd8fc 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_crtc.h
+++ b/drivers/gpu/drm/exynos/exynos_drm_crtc.h
@@ -18,6 +18,7 @@
#include "exynos_drm_drv.h"
struct exynos_drm_crtc *exynos_drm_crtc_create(struct drm_device *drm_dev,
+ struct drm_plane *plane,
int pipe,
enum exynos_drm_output_type type,
struct exynos_drm_crtc_ops *ops,
@@ -27,12 +28,6 @@
void exynos_drm_crtc_finish_pageflip(struct drm_device *dev, int pipe);
void exynos_drm_crtc_complete_scanout(struct drm_framebuffer *fb);
-void exynos_drm_crtc_plane_mode_set(struct drm_crtc *crtc,
- struct exynos_drm_plane *plane);
-void exynos_drm_crtc_plane_commit(struct drm_crtc *crtc, int zpos);
-void exynos_drm_crtc_plane_enable(struct drm_crtc *crtc, int zpos);
-void exynos_drm_crtc_plane_disable(struct drm_crtc *crtc, int zpos);
-
/* This function gets pipe value to crtc device matched with out_type. */
int exynos_drm_crtc_get_pipe_from_type(struct drm_device *drm_dev,
unsigned int out_type);
diff --git a/drivers/gpu/drm/exynos/exynos_drm_drv.c b/drivers/gpu/drm/exynos/exynos_drm_drv.c
index 90168d7..8ac4652 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_drv.c
+++ b/drivers/gpu/drm/exynos/exynos_drm_drv.c
@@ -55,13 +55,11 @@
{
struct exynos_drm_private *private;
int ret;
- int nr;
private = kzalloc(sizeof(struct exynos_drm_private), GFP_KERNEL);
if (!private)
return -ENOMEM;
- INIT_LIST_HEAD(&private->pageflip_event_list);
dev_set_drvdata(dev->dev, dev);
dev->dev_private = (void *)private;
@@ -81,19 +79,6 @@
exynos_drm_mode_config_init(dev);
- for (nr = 0; nr < MAX_PLANE; nr++) {
- struct drm_plane *plane;
- unsigned long possible_crtcs = (1 << MAX_CRTC) - 1;
-
- plane = exynos_plane_init(dev, possible_crtcs,
- DRM_PLANE_TYPE_OVERLAY);
- if (!IS_ERR(plane))
- continue;
-
- ret = PTR_ERR(plane);
- goto err_mode_config_cleanup;
- }
-
/* setup possible_clones. */
exynos_drm_encoder_setup(dev);
@@ -237,25 +222,13 @@
static void exynos_drm_postclose(struct drm_device *dev, struct drm_file *file)
{
- struct exynos_drm_private *private = dev->dev_private;
- struct drm_pending_vblank_event *v, *vt;
struct drm_pending_event *e, *et;
unsigned long flags;
if (!file->driver_priv)
return;
- /* Release all events not unhandled by page flip handler. */
spin_lock_irqsave(&dev->event_lock, flags);
- list_for_each_entry_safe(v, vt, &private->pageflip_event_list,
- base.link) {
- if (v->base.file_priv == file) {
- list_del(&v->base.link);
- drm_vblank_put(dev, v->pipe);
- v->base.destroy(&v->base);
- }
- }
-
/* Release all events handled by page flip handler but not freed. */
list_for_each_entry_safe(e, et, &file->event_list, link) {
list_del(&e->link);
diff --git a/drivers/gpu/drm/exynos/exynos_drm_drv.h b/drivers/gpu/drm/exynos/exynos_drm_drv.h
index 9afd390..e12ecb5 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_drv.h
+++ b/drivers/gpu/drm/exynos/exynos_drm_drv.h
@@ -21,7 +21,6 @@
#define MAX_CRTC 3
#define MAX_PLANE 5
#define MAX_FB_BUFFER 4
-#define DEFAULT_ZPOS -1
#define to_exynos_crtc(x) container_of(x, struct exynos_drm_crtc, base)
#define to_exynos_plane(x) container_of(x, struct exynos_drm_plane, base)
@@ -48,20 +47,22 @@
* Exynos drm common overlay structure.
*
* @base: plane object
- * @fb_x: offset x on a framebuffer to be displayed.
+ * @src_x: offset x on a framebuffer to be displayed.
* - the unit is screen coordinates.
- * @fb_y: offset y on a framebuffer to be displayed.
+ * @src_y: offset y on a framebuffer to be displayed.
* - the unit is screen coordinates.
- * @fb_width: width of a framebuffer.
- * @fb_height: height of a framebuffer.
* @src_width: width of a partial image to be displayed from framebuffer.
* @src_height: height of a partial image to be displayed from framebuffer.
+ * @fb_width: width of a framebuffer.
+ * @fb_height: height of a framebuffer.
* @crtc_x: offset x on hardware screen.
* @crtc_y: offset y on hardware screen.
* @crtc_width: window width to be displayed (hardware screen).
* @crtc_height: window height to be displayed (hardware screen).
* @mode_width: width of screen mode.
* @mode_height: height of screen mode.
+ * @h_ratio: horizontal scaling ratio, 16.16 fixed point
+ * @v_ratio: vertical scaling ratio, 16.16 fixed point
* @refresh: refresh rate.
* @scan_flag: interlace or progressive way.
* (it could be DRM_MODE_FLAG_*)
@@ -78,6 +79,7 @@
* @transparency: transparency on or off.
* @activated: activated or not.
* @enabled: enabled or not.
+ * @resume: whether this overlay should be re-enabled on resume.
*
* this structure is common to exynos SoC and its contents would be copied
* to hardware specific overlay info.
@@ -85,25 +87,27 @@
struct exynos_drm_plane {
struct drm_plane base;
- unsigned int fb_x;
- unsigned int fb_y;
- unsigned int fb_width;
- unsigned int fb_height;
+ unsigned int src_x;
+ unsigned int src_y;
unsigned int src_width;
unsigned int src_height;
+ unsigned int fb_width;
+ unsigned int fb_height;
unsigned int crtc_x;
unsigned int crtc_y;
unsigned int crtc_width;
unsigned int crtc_height;
unsigned int mode_width;
unsigned int mode_height;
+ unsigned int h_ratio;
+ unsigned int v_ratio;
unsigned int refresh;
unsigned int scan_flag;
unsigned int bpp;
unsigned int pitch;
uint32_t pixel_format;
dma_addr_t dma_addr[MAX_FB_BUFFER];
- int zpos;
+ unsigned int zpos;
unsigned int index_color;
bool default_win:1;
@@ -112,6 +116,7 @@
bool transparency:1;
bool activated:1;
bool enabled:1;
+ bool resume:1;
};
/*
@@ -172,9 +177,7 @@
* @disable_vblank: specific driver callback for disabling vblank interrupt.
* @wait_for_vblank: wait for vblank interrupt to make sure that
* hardware overlay is updated.
- * @win_mode_set: copy drm overlay info to hw specific overlay info.
* @win_commit: apply hardware specific overlay data to registers.
- * @win_enable: enable hardware specific overlay.
* @win_disable: disable hardware specific overlay.
* @te_handler: trigger to transfer video image at the tearing effect
* synchronization signal if there is a page flip request.
@@ -189,11 +192,8 @@
int (*enable_vblank)(struct exynos_drm_crtc *crtc);
void (*disable_vblank)(struct exynos_drm_crtc *crtc);
void (*wait_for_vblank)(struct exynos_drm_crtc *crtc);
- void (*win_mode_set)(struct exynos_drm_crtc *crtc,
- struct exynos_drm_plane *plane);
- void (*win_commit)(struct exynos_drm_crtc *crtc, int zpos);
- void (*win_enable)(struct exynos_drm_crtc *crtc, int zpos);
- void (*win_disable)(struct exynos_drm_crtc *crtc, int zpos);
+ void (*win_commit)(struct exynos_drm_crtc *crtc, unsigned int zpos);
+ void (*win_disable)(struct exynos_drm_crtc *crtc, unsigned int zpos);
void (*te_handler)(struct exynos_drm_crtc *crtc);
};
@@ -210,6 +210,7 @@
* we can refer to the crtc to current hardware interrupt occurred through
* this pipe value.
* @dpms: store the crtc dpms value
+ * @event: vblank event that is currently queued for flip
* @ops: pointer to callbacks for exynos drm specific functionality
* @ctx: A pointer to the crtc's implementation specific context
*/
@@ -219,7 +220,7 @@
unsigned int pipe;
unsigned int dpms;
wait_queue_head_t pending_flip_queue;
- atomic_t pending_flip;
+ struct drm_pending_vblank_event *event;
struct exynos_drm_crtc_ops *ops;
void *ctx;
};
@@ -249,9 +250,6 @@
struct exynos_drm_private {
struct drm_fb_helper *fb_helper;
- /* list head for new event to be added. */
- struct list_head pageflip_event_list;
-
/*
* created crtc object would be contained at this array and
* this array is used to be aware of which crtc did it request vblank.
diff --git a/drivers/gpu/drm/exynos/exynos_drm_dsi.c b/drivers/gpu/drm/exynos/exynos_drm_dsi.c
index 05fe93d..0492715 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_dsi.c
+++ b/drivers/gpu/drm/exynos/exynos_drm_dsi.c
@@ -1473,12 +1473,6 @@
return 0;
}
-static int exynos_dsi_mode_valid(struct drm_connector *connector,
- struct drm_display_mode *mode)
-{
- return MODE_OK;
-}
-
static struct drm_encoder *
exynos_dsi_best_encoder(struct drm_connector *connector)
{
@@ -1489,7 +1483,6 @@
static struct drm_connector_helper_funcs exynos_dsi_connector_helper_funcs = {
.get_modes = exynos_dsi_get_modes,
- .mode_valid = exynos_dsi_mode_valid,
.best_encoder = exynos_dsi_best_encoder,
};
diff --git a/drivers/gpu/drm/exynos/exynos_drm_fb.c b/drivers/gpu/drm/exynos/exynos_drm_fb.c
index d346d1e..929cb03 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_fb.c
+++ b/drivers/gpu/drm/exynos/exynos_drm_fb.c
@@ -151,10 +151,8 @@
exynos_gem_obj = to_exynos_gem_obj(obj);
ret = check_fb_gem_memory_type(dev, exynos_gem_obj);
- if (ret < 0) {
- DRM_ERROR("cannot use this gem memory type for fb.\n");
- return ERR_PTR(-EINVAL);
- }
+ if (ret < 0)
+ return ERR_PTR(ret);
exynos_fb = kzalloc(sizeof(*exynos_fb), GFP_KERNEL);
if (!exynos_fb)
@@ -250,10 +248,8 @@
exynos_fb->exynos_gem_obj[i] = exynos_gem_obj;
ret = check_fb_gem_memory_type(dev, exynos_gem_obj);
- if (ret < 0) {
- DRM_ERROR("cannot use this gem memory type for fb.\n");
+ if (ret < 0)
goto err_unreference;
- }
}
ret = drm_framebuffer_init(dev, &exynos_fb->fb, &exynos_drm_fb_funcs);
diff --git a/drivers/gpu/drm/exynos/exynos_drm_fbdev.c b/drivers/gpu/drm/exynos/exynos_drm_fbdev.c
index 84f8dfe..e71e331 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_fbdev.c
+++ b/drivers/gpu/drm/exynos/exynos_drm_fbdev.c
@@ -76,6 +76,7 @@
};
static int exynos_drm_fbdev_update(struct drm_fb_helper *helper,
+ struct drm_fb_helper_surface_size *sizes,
struct drm_framebuffer *fb)
{
struct fb_info *fbi = helper->fbdev;
@@ -85,7 +86,7 @@
unsigned long offset;
drm_fb_helper_fill_fix(fbi, fb->pitches[0], fb->depth);
- drm_fb_helper_fill_var(fbi, helper, fb->width, fb->height);
+ drm_fb_helper_fill_var(fbi, helper, sizes->fb_width, sizes->fb_height);
/* RGB formats use only one buffer */
buffer = exynos_drm_fb_buffer(fb, 0);
@@ -189,7 +190,7 @@
goto err_destroy_framebuffer;
}
- ret = exynos_drm_fbdev_update(helper, helper->fb);
+ ret = exynos_drm_fbdev_update(helper, sizes, helper->fb);
if (ret < 0)
goto err_dealloc_cmap;
diff --git a/drivers/gpu/drm/exynos/exynos_drm_fimd.c b/drivers/gpu/drm/exynos/exynos_drm_fimd.c
index 33a10ce..9819fa6 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_fimd.c
+++ b/drivers/gpu/drm/exynos/exynos_drm_fimd.c
@@ -31,7 +31,9 @@
#include "exynos_drm_drv.h"
#include "exynos_drm_fbdev.h"
#include "exynos_drm_crtc.h"
+#include "exynos_drm_plane.h"
#include "exynos_drm_iommu.h"
+#include "exynos_drm_fimd.h"
/*
* FIMD stands for Fully Interactive Mobile Display and
@@ -54,6 +56,9 @@
/* size control register for hardware windows 1 ~ 2. */
#define VIDOSD_D(win) (VIDOSD_BASE + 0x0C + (win) * 16)
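+/* per-window alpha value registers */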
+#define VIDWnALPHA0(win) (VIDW_ALPHA + 0x00 + (win) * 8)
+#define VIDWnALPHA1(win) (VIDW_ALPHA + 0x04 + (win) * 8)
+
#define VIDWx_BUF_START(win, buf) (VIDW_BUF_START(buf) + (win) * 8)
#define VIDWx_BUF_END(win, buf) (VIDW_BUF_END(buf) + (win) * 8)
#define VIDWx_BUF_SIZE(win, buf) (VIDW_BUF_SIZE(buf) + (win) * 4)
@@ -140,32 +145,15 @@
.has_vtsel = 1,
};
-struct fimd_win_data {
- unsigned int offset_x;
- unsigned int offset_y;
- unsigned int ovl_width;
- unsigned int ovl_height;
- unsigned int fb_width;
- unsigned int fb_height;
- unsigned int fb_pitch;
- unsigned int bpp;
- unsigned int pixel_format;
- dma_addr_t dma_addr;
- unsigned int buf_offsize;
- unsigned int line_size; /* bytes */
- bool enabled;
- bool resume;
-};
-
struct fimd_context {
struct device *dev;
struct drm_device *drm_dev;
struct exynos_drm_crtc *crtc;
+ struct exynos_drm_plane planes[WINDOWS_NR];
struct clk *bus_clk;
struct clk *lcd_clk;
void __iomem *regs;
struct regmap *sysreg;
- struct fimd_win_data win_data[WINDOWS_NR];
unsigned int default_win;
unsigned long irq_flags;
u32 vidcon0;
@@ -502,59 +490,9 @@
}
}
-static void fimd_win_mode_set(struct exynos_drm_crtc *crtc,
- struct exynos_drm_plane *plane)
-{
- struct fimd_context *ctx = crtc->ctx;
- struct fimd_win_data *win_data;
- int win;
- unsigned long offset;
-
- if (!plane) {
- DRM_ERROR("plane is NULL\n");
- return;
- }
-
- win = plane->zpos;
- if (win == DEFAULT_ZPOS)
- win = ctx->default_win;
-
- if (win < 0 || win >= WINDOWS_NR)
- return;
-
- offset = plane->fb_x * (plane->bpp >> 3);
- offset += plane->fb_y * plane->pitch;
-
- DRM_DEBUG_KMS("offset = 0x%lx, pitch = %x\n", offset, plane->pitch);
-
- win_data = &ctx->win_data[win];
-
- win_data->offset_x = plane->crtc_x;
- win_data->offset_y = plane->crtc_y;
- win_data->ovl_width = plane->crtc_width;
- win_data->ovl_height = plane->crtc_height;
- win_data->fb_pitch = plane->pitch;
- win_data->fb_width = plane->fb_width;
- win_data->fb_height = plane->fb_height;
- win_data->dma_addr = plane->dma_addr[0] + offset;
- win_data->bpp = plane->bpp;
- win_data->pixel_format = plane->pixel_format;
- win_data->buf_offsize =
- plane->pitch - (plane->crtc_width * (plane->bpp >> 3));
- win_data->line_size = plane->crtc_width * (plane->bpp >> 3);
-
- DRM_DEBUG_KMS("offset_x = %d, offset_y = %d\n",
- win_data->offset_x, win_data->offset_y);
- DRM_DEBUG_KMS("ovl_width = %d, ovl_height = %d\n",
- win_data->ovl_width, win_data->ovl_height);
- DRM_DEBUG_KMS("paddr = 0x%lx\n", (unsigned long)win_data->dma_addr);
- DRM_DEBUG_KMS("fb_width = %d, crtc_width = %d\n",
- plane->fb_width, plane->crtc_width);
-}
-
static void fimd_win_set_pixfmt(struct fimd_context *ctx, unsigned int win)
{
- struct fimd_win_data *win_data = &ctx->win_data[win];
+ struct exynos_drm_plane *plane = &ctx->planes[win];
unsigned long val;
val = WINCONx_ENWIN;
@@ -564,11 +502,11 @@
* So the request format is ARGB8888 then change it to XRGB8888.
*/
if (ctx->driver_data->has_limited_fmt && !win) {
- if (win_data->pixel_format == DRM_FORMAT_ARGB8888)
- win_data->pixel_format = DRM_FORMAT_XRGB8888;
+ if (plane->pixel_format == DRM_FORMAT_ARGB8888)
+ plane->pixel_format = DRM_FORMAT_XRGB8888;
}
- switch (win_data->pixel_format) {
+ switch (plane->pixel_format) {
case DRM_FORMAT_C8:
val |= WINCON0_BPPMODE_8BPP_PALETTE;
val |= WINCONx_BURSTLEN_8WORD;
@@ -604,7 +542,7 @@
break;
}
- DRM_DEBUG_KMS("bpp = %d\n", win_data->bpp);
+ DRM_DEBUG_KMS("bpp = %d\n", plane->bpp);
/*
* In case of exynos, setting dma-burst to 16Word causes permanent
@@ -614,12 +552,30 @@
* movement causes unstable DMA which results into iommu crash/tear.
*/
- if (win_data->fb_width < MIN_FB_WIDTH_FOR_16WORD_BURST) {
+ if (plane->fb_width < MIN_FB_WIDTH_FOR_16WORD_BURST) {
val &= ~WINCONx_BURSTLEN_MASK;
val |= WINCONx_BURSTLEN_4WORD;
}
writel(val, ctx->regs + WINCON(win));
+
+ /* hardware window 0 doesn't support alpha channel. */
+ if (win != 0) {
+ /* OSD alpha */
+ val = VIDISD14C_ALPHA0_R(0xf) |
+ VIDISD14C_ALPHA0_G(0xf) |
+ VIDISD14C_ALPHA0_B(0xf) |
+ VIDISD14C_ALPHA1_R(0xf) |
+ VIDISD14C_ALPHA1_G(0xf) |
+ VIDISD14C_ALPHA1_B(0xf);
+
+ writel(val, ctx->regs + VIDOSD_C(win));
+
+ val = VIDW_ALPHA_R(0xf) | VIDW_ALPHA_G(0xf) |
+ VIDW_ALPHA_B(0xf);
+ writel(val, ctx->regs + VIDWnALPHA0(win));
+ writel(val, ctx->regs + VIDWnALPHA1(win));
+ }
}
static void fimd_win_set_colkey(struct fimd_context *ctx, unsigned int win)
@@ -642,7 +598,7 @@
* @protect: 1 to protect (disable updates)
*/
static void fimd_shadow_protect_win(struct fimd_context *ctx,
- int win, bool protect)
+ unsigned int win, bool protect)
{
u32 reg, bits, val;
@@ -662,29 +618,25 @@
writel(val, ctx->regs + reg);
}
-static void fimd_win_commit(struct exynos_drm_crtc *crtc, int zpos)
+static void fimd_win_commit(struct exynos_drm_crtc *crtc, unsigned int win)
{
struct fimd_context *ctx = crtc->ctx;
- struct fimd_win_data *win_data;
- int win = zpos;
- unsigned long val, alpha, size;
- unsigned int last_x;
- unsigned int last_y;
+ struct exynos_drm_plane *plane;
+ dma_addr_t dma_addr;
+ unsigned long val, size, offset;
+ unsigned int last_x, last_y, buf_offsize, line_size;
if (ctx->suspended)
return;
- if (win == DEFAULT_ZPOS)
- win = ctx->default_win;
-
if (win < 0 || win >= WINDOWS_NR)
return;
- win_data = &ctx->win_data[win];
+ plane = &ctx->planes[win];
/* If suspended, enable this on resume */
if (ctx->suspended) {
- win_data->resume = true;
+ plane->resume = true;
return;
}
@@ -701,38 +653,45 @@
/* protect windows */
fimd_shadow_protect_win(ctx, win, true);
+
+ offset = plane->src_x * (plane->bpp >> 3);
+ offset += plane->src_y * plane->pitch;
+
/* buffer start address */
- val = (unsigned long)win_data->dma_addr;
+ dma_addr = plane->dma_addr[0] + offset;
+ val = (unsigned long)dma_addr;
writel(val, ctx->regs + VIDWx_BUF_START(win, 0));
/* buffer end address */
- size = win_data->fb_pitch * win_data->ovl_height * (win_data->bpp >> 3);
- val = (unsigned long)(win_data->dma_addr + size);
+ size = plane->pitch * plane->crtc_height;
+ val = (unsigned long)(dma_addr + size);
writel(val, ctx->regs + VIDWx_BUF_END(win, 0));
DRM_DEBUG_KMS("start addr = 0x%lx, end addr = 0x%lx, size = 0x%lx\n",
- (unsigned long)win_data->dma_addr, val, size);
+ (unsigned long)dma_addr, val, size);
DRM_DEBUG_KMS("ovl_width = %d, ovl_height = %d\n",
- win_data->ovl_width, win_data->ovl_height);
+ plane->crtc_width, plane->crtc_height);
/* buffer size */
- val = VIDW_BUF_SIZE_OFFSET(win_data->buf_offsize) |
- VIDW_BUF_SIZE_PAGEWIDTH(win_data->line_size) |
- VIDW_BUF_SIZE_OFFSET_E(win_data->buf_offsize) |
- VIDW_BUF_SIZE_PAGEWIDTH_E(win_data->line_size);
+ buf_offsize = plane->pitch - (plane->crtc_width * (plane->bpp >> 3));
+ line_size = plane->crtc_width * (plane->bpp >> 3);
+ val = VIDW_BUF_SIZE_OFFSET(buf_offsize) |
+ VIDW_BUF_SIZE_PAGEWIDTH(line_size) |
+ VIDW_BUF_SIZE_OFFSET_E(buf_offsize) |
+ VIDW_BUF_SIZE_PAGEWIDTH_E(line_size);
writel(val, ctx->regs + VIDWx_BUF_SIZE(win, 0));
/* OSD position */
- val = VIDOSDxA_TOPLEFT_X(win_data->offset_x) |
- VIDOSDxA_TOPLEFT_Y(win_data->offset_y) |
- VIDOSDxA_TOPLEFT_X_E(win_data->offset_x) |
- VIDOSDxA_TOPLEFT_Y_E(win_data->offset_y);
+ val = VIDOSDxA_TOPLEFT_X(plane->crtc_x) |
+ VIDOSDxA_TOPLEFT_Y(plane->crtc_y) |
+ VIDOSDxA_TOPLEFT_X_E(plane->crtc_x) |
+ VIDOSDxA_TOPLEFT_Y_E(plane->crtc_y);
writel(val, ctx->regs + VIDOSD_A(win));
- last_x = win_data->offset_x + win_data->ovl_width;
+ last_x = plane->crtc_x + plane->crtc_width;
if (last_x)
last_x--;
- last_y = win_data->offset_y + win_data->ovl_height;
+ last_y = plane->crtc_y + plane->crtc_height;
if (last_y)
last_y--;
@@ -742,24 +701,14 @@
writel(val, ctx->regs + VIDOSD_B(win));
DRM_DEBUG_KMS("osd pos: tx = %d, ty = %d, bx = %d, by = %d\n",
- win_data->offset_x, win_data->offset_y, last_x, last_y);
-
- /* hardware window 0 doesn't support alpha channel. */
- if (win != 0) {
- /* OSD alpha */
- alpha = VIDISD14C_ALPHA1_R(0xf) |
- VIDISD14C_ALPHA1_G(0xf) |
- VIDISD14C_ALPHA1_B(0xf);
-
- writel(alpha, ctx->regs + VIDOSD_C(win));
- }
+ plane->crtc_x, plane->crtc_y, last_x, last_y);
/* OSD size */
if (win != 3 && win != 4) {
u32 offset = VIDOSD_D(win);
if (win == 0)
offset = VIDOSD_C(win);
- val = win_data->ovl_width * win_data->ovl_height;
+ val = plane->crtc_width * plane->crtc_height;
writel(val, ctx->regs + offset);
DRM_DEBUG_KMS("osd size = 0x%x\n", (unsigned int)val);
@@ -779,29 +728,25 @@
/* Enable DMA channel and unprotect windows */
fimd_shadow_protect_win(ctx, win, false);
- win_data->enabled = true;
+ plane->enabled = true;
if (ctx->i80_if)
atomic_set(&ctx->win_updated, 1);
}
-static void fimd_win_disable(struct exynos_drm_crtc *crtc, int zpos)
+static void fimd_win_disable(struct exynos_drm_crtc *crtc, unsigned int win)
{
struct fimd_context *ctx = crtc->ctx;
- struct fimd_win_data *win_data;
- int win = zpos;
-
- if (win == DEFAULT_ZPOS)
- win = ctx->default_win;
+ struct exynos_drm_plane *plane;
if (win < 0 || win >= WINDOWS_NR)
return;
- win_data = &ctx->win_data[win];
+ plane = &ctx->planes[win];
if (ctx->suspended) {
/* do not resume this window*/
- win_data->resume = false;
+ plane->resume = false;
return;
}
@@ -816,42 +761,42 @@
/* unprotect windows */
fimd_shadow_protect_win(ctx, win, false);
- win_data->enabled = false;
+ plane->enabled = false;
}
static void fimd_window_suspend(struct fimd_context *ctx)
{
- struct fimd_win_data *win_data;
+ struct exynos_drm_plane *plane;
int i;
for (i = 0; i < WINDOWS_NR; i++) {
- win_data = &ctx->win_data[i];
- win_data->resume = win_data->enabled;
- if (win_data->enabled)
+ plane = &ctx->planes[i];
+ plane->resume = plane->enabled;
+ if (plane->enabled)
fimd_win_disable(ctx->crtc, i);
}
}
static void fimd_window_resume(struct fimd_context *ctx)
{
- struct fimd_win_data *win_data;
+ struct exynos_drm_plane *plane;
int i;
for (i = 0; i < WINDOWS_NR; i++) {
- win_data = &ctx->win_data[i];
- win_data->enabled = win_data->resume;
- win_data->resume = false;
+ plane = &ctx->planes[i];
+ plane->enabled = plane->resume;
+ plane->resume = false;
}
}
static void fimd_apply(struct fimd_context *ctx)
{
- struct fimd_win_data *win_data;
+ struct exynos_drm_plane *plane;
int i;
for (i = 0; i < WINDOWS_NR; i++) {
- win_data = &ctx->win_data[i];
- if (win_data->enabled)
+ plane = &ctx->planes[i];
+ if (plane->enabled)
fimd_win_commit(ctx->crtc, i);
else
fimd_win_disable(ctx->crtc, i);
@@ -1008,7 +953,6 @@
.enable_vblank = fimd_enable_vblank,
.disable_vblank = fimd_disable_vblank,
.wait_for_vblank = fimd_wait_for_vblank,
- .win_mode_set = fimd_win_mode_set,
.win_commit = fimd_win_commit,
.win_disable = fimd_win_disable,
.te_handler = fimd_te_handler,
@@ -1054,14 +998,29 @@
struct fimd_context *ctx = dev_get_drvdata(dev);
struct drm_device *drm_dev = data;
struct exynos_drm_private *priv = drm_dev->dev_private;
+ struct exynos_drm_plane *exynos_plane;
+ enum drm_plane_type type;
+ unsigned int zpos;
int ret;
ctx->drm_dev = drm_dev;
ctx->pipe = priv->pipe++;
- ctx->crtc = exynos_drm_crtc_create(drm_dev, ctx->pipe,
- EXYNOS_DISPLAY_TYPE_LCD,
+ for (zpos = 0; zpos < WINDOWS_NR; zpos++) {
+ type = (zpos == ctx->default_win) ? DRM_PLANE_TYPE_PRIMARY :
+ DRM_PLANE_TYPE_OVERLAY;
+ ret = exynos_plane_init(drm_dev, &ctx->planes[zpos],
+ 1 << ctx->pipe, type, zpos);
+ if (ret)
+ return ret;
+ }
+
+ exynos_plane = &ctx->planes[ctx->default_win];
+ ctx->crtc = exynos_drm_crtc_create(drm_dev, &exynos_plane->base,
+ ctx->pipe, EXYNOS_DISPLAY_TYPE_LCD,
&fimd_crtc_ops, ctx);
+ if (IS_ERR(ctx->crtc))
+ return PTR_ERR(ctx->crtc);
if (ctx->display)
exynos_drm_create_enc_conn(drm_dev, ctx->display);
@@ -1233,6 +1192,24 @@
return 0;
}
+void fimd_dp_clock_enable(struct exynos_drm_crtc *crtc, bool enable)
+{
+ struct fimd_context *ctx = crtc->ctx;
+ u32 val;
+
+ /*
+ * Only Exynos 5250, 5260, 5410 and 542x require enabling the DP/MIE
+ * clock. On these SoCs the bootloader may enable it, but any
+ * power domain off/on will reset it to the disabled state.
+ */
+ if (ctx->driver_data != &exynos5_fimd_driver_data)
+ return;
+
+ val = enable ? DP_MIE_CLK_DP_ENABLE : DP_MIE_CLK_DISABLE;
+ writel(val, ctx->regs + DP_MIE_CLKCON);
+}
+EXPORT_SYMBOL_GPL(fimd_dp_clock_enable);
+
struct platform_driver fimd_driver = {
.probe = fimd_probe,
.remove = fimd_remove,
diff --git a/drivers/gpu/drm/exynos/exynos_drm_fimd.h b/drivers/gpu/drm/exynos/exynos_drm_fimd.h
new file mode 100644
index 0000000..b4fcaa5
--- /dev/null
+++ b/drivers/gpu/drm/exynos/exynos_drm_fimd.h
@@ -0,0 +1,15 @@
+/*
+ * Copyright (c) 2015 Samsung Electronics Co., Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#ifndef _EXYNOS_DRM_FIMD_H_
+#define _EXYNOS_DRM_FIMD_H_
+
+extern void fimd_dp_clock_enable(struct exynos_drm_crtc *crtc, bool enable);
+
+#endif /* _EXYNOS_DRM_FIMD_H_ */
diff --git a/drivers/gpu/drm/exynos/exynos_drm_ipp.c b/drivers/gpu/drm/exynos/exynos_drm_ipp.c
index d5ad17d..b7f1cbc 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_ipp.c
+++ b/drivers/gpu/drm/exynos/exynos_drm_ipp.c
@@ -476,6 +476,45 @@
return ret;
}
+static int ipp_validate_mem_node(struct drm_device *drm_dev,
+ struct drm_exynos_ipp_mem_node *m_node,
+ struct drm_exynos_ipp_cmd_node *c_node)
+{
+ struct drm_exynos_ipp_config *ipp_cfg;
+ unsigned int num_plane;
+ unsigned long min_size, size;
+ unsigned int bpp;
+ int i;
+
+ /* The property id should already be verified */
+ ipp_cfg = &c_node->property.config[m_node->prop_id];
+ num_plane = drm_format_num_planes(ipp_cfg->fmt);
+
+ /*
+ * This is a rather simplified validation of a memory node.
+ * It only verifies the provided gem object handles and checks
+ * the buffer sizes against the current configuration, which is
+ * not exhaustive but sufficient here.
+ */
+ for (i = 0; i < num_plane; ++i) {
+ if (!m_node->buf_info.handles[i]) {
+ DRM_ERROR("invalid handle for plane %d\n", i);
+ return -EINVAL;
+ }
+ bpp = drm_format_plane_cpp(ipp_cfg->fmt, i);
+ min_size = (ipp_cfg->sz.hsize * ipp_cfg->sz.vsize * bpp) >> 3;
+ size = exynos_drm_gem_get_size(drm_dev,
+ m_node->buf_info.handles[i],
+ c_node->filp);
+ if (min_size > size) {
+ DRM_ERROR("invalid size for plane %d\n", i);
+ return -EINVAL;
+ }
+ }
+ return 0;
+}
+
static int ipp_put_mem_node(struct drm_device *drm_dev,
struct drm_exynos_ipp_cmd_node *c_node,
struct drm_exynos_ipp_mem_node *m_node)
@@ -552,6 +591,11 @@
}
mutex_lock(&c_node->mem_lock);
+ if (ipp_validate_mem_node(drm_dev, m_node, c_node)) {
+ ipp_put_mem_node(drm_dev, c_node, m_node);
+ mutex_unlock(&c_node->mem_lock);
+ return ERR_PTR(-EFAULT);
+ }
list_add_tail(&m_node->list, &c_node->mem_list[qbuf->ops_id]);
mutex_unlock(&c_node->mem_lock);
diff --git a/drivers/gpu/drm/exynos/exynos_drm_plane.c b/drivers/gpu/drm/exynos/exynos_drm_plane.c
index 8ad5b72..13ea334 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_plane.c
+++ b/drivers/gpu/drm/exynos/exynos_drm_plane.c
@@ -92,7 +92,6 @@
uint32_t src_w, uint32_t src_h)
{
struct exynos_drm_plane *exynos_plane = to_exynos_plane(plane);
- struct exynos_drm_crtc *exynos_crtc = to_exynos_crtc(crtc);
unsigned int actual_w;
unsigned int actual_h;
@@ -111,13 +110,17 @@
crtc_y = 0;
}
+ /* set ratio */
+ exynos_plane->h_ratio = (src_w << 16) / crtc_w;
+ exynos_plane->v_ratio = (src_h << 16) / crtc_h;
+
/* set drm framebuffer data. */
- exynos_plane->fb_x = src_x;
- exynos_plane->fb_y = src_y;
+ exynos_plane->src_x = src_x;
+ exynos_plane->src_y = src_y;
+ exynos_plane->src_width = (actual_w * exynos_plane->h_ratio) >> 16;
+ exynos_plane->src_height = (actual_h * exynos_plane->v_ratio) >> 16;
exynos_plane->fb_width = fb->width;
exynos_plane->fb_height = fb->height;
- exynos_plane->src_width = src_w;
- exynos_plane->src_height = src_h;
exynos_plane->bpp = fb->bits_per_pixel;
exynos_plane->pitch = fb->pitches[0];
exynos_plane->pixel_format = fb->pixel_format;
@@ -139,9 +142,6 @@
exynos_plane->crtc_width, exynos_plane->crtc_height);
plane->crtc = crtc;
-
- if (exynos_crtc->ops->win_mode_set)
- exynos_crtc->ops->win_mode_set(exynos_crtc, exynos_plane);
}
int
@@ -182,39 +182,14 @@
return 0;
}
-static void exynos_plane_destroy(struct drm_plane *plane)
-{
- struct exynos_drm_plane *exynos_plane = to_exynos_plane(plane);
-
- exynos_disable_plane(plane);
- drm_plane_cleanup(plane);
- kfree(exynos_plane);
-}
-
-static int exynos_plane_set_property(struct drm_plane *plane,
- struct drm_property *property,
- uint64_t val)
-{
- struct drm_device *dev = plane->dev;
- struct exynos_drm_plane *exynos_plane = to_exynos_plane(plane);
- struct exynos_drm_private *dev_priv = dev->dev_private;
-
- if (property == dev_priv->plane_zpos_property) {
- exynos_plane->zpos = val;
- return 0;
- }
-
- return -EINVAL;
-}
-
static struct drm_plane_funcs exynos_plane_funcs = {
.update_plane = exynos_update_plane,
.disable_plane = exynos_disable_plane,
- .destroy = exynos_plane_destroy,
- .set_property = exynos_plane_set_property,
+ .destroy = drm_plane_cleanup,
};
-static void exynos_plane_attach_zpos_property(struct drm_plane *plane)
+static void exynos_plane_attach_zpos_property(struct drm_plane *plane,
+ unsigned int zpos)
{
struct drm_device *dev = plane->dev;
struct exynos_drm_private *dev_priv = dev->dev_private;
@@ -222,41 +197,36 @@
prop = dev_priv->plane_zpos_property;
if (!prop) {
- prop = drm_property_create_range(dev, 0, "zpos", 0,
- MAX_PLANE - 1);
+ prop = drm_property_create_range(dev, DRM_MODE_PROP_IMMUTABLE,
+ "zpos", 0, MAX_PLANE - 1);
if (!prop)
return;
dev_priv->plane_zpos_property = prop;
}
- drm_object_attach_property(&plane->base, prop, 0);
+ drm_object_attach_property(&plane->base, prop, zpos);
}
-struct drm_plane *exynos_plane_init(struct drm_device *dev,
- unsigned long possible_crtcs,
- enum drm_plane_type type)
+int exynos_plane_init(struct drm_device *dev,
+ struct exynos_drm_plane *exynos_plane,
+ unsigned long possible_crtcs, enum drm_plane_type type,
+ unsigned int zpos)
{
- struct exynos_drm_plane *exynos_plane;
int err;
- exynos_plane = kzalloc(sizeof(struct exynos_drm_plane), GFP_KERNEL);
- if (!exynos_plane)
- return ERR_PTR(-ENOMEM);
-
err = drm_universal_plane_init(dev, &exynos_plane->base, possible_crtcs,
&exynos_plane_funcs, formats,
ARRAY_SIZE(formats), type);
if (err) {
DRM_ERROR("failed to initialize plane\n");
- kfree(exynos_plane);
- return ERR_PTR(err);
+ return err;
}
- if (type == DRM_PLANE_TYPE_PRIMARY)
- exynos_plane->zpos = DEFAULT_ZPOS;
- else
- exynos_plane_attach_zpos_property(&exynos_plane->base);
+ exynos_plane->zpos = zpos;
- return &exynos_plane->base;
+ if (type == DRM_PLANE_TYPE_OVERLAY)
+ exynos_plane_attach_zpos_property(&exynos_plane->base, zpos);
+
+ return 0;
}
diff --git a/drivers/gpu/drm/exynos/exynos_drm_plane.h b/drivers/gpu/drm/exynos/exynos_drm_plane.h
index 9d3c374..f360590 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_plane.h
+++ b/drivers/gpu/drm/exynos/exynos_drm_plane.h
@@ -20,6 +20,7 @@
unsigned int crtc_w, unsigned int crtc_h,
uint32_t src_x, uint32_t src_y,
uint32_t src_w, uint32_t src_h);
-struct drm_plane *exynos_plane_init(struct drm_device *dev,
- unsigned long possible_crtcs,
- enum drm_plane_type type);
+int exynos_plane_init(struct drm_device *dev,
+ struct exynos_drm_plane *exynos_plane,
+ unsigned long possible_crtcs, enum drm_plane_type type,
+ unsigned int zpos);
diff --git a/drivers/gpu/drm/exynos/exynos_drm_vidi.c b/drivers/gpu/drm/exynos/exynos_drm_vidi.c
index b886972..27e84ec 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_vidi.c
+++ b/drivers/gpu/drm/exynos/exynos_drm_vidi.c
@@ -23,6 +23,7 @@
#include "exynos_drm_drv.h"
#include "exynos_drm_crtc.h"
+#include "exynos_drm_plane.h"
#include "exynos_drm_encoder.h"
#include "exynos_drm_vidi.h"
@@ -32,20 +33,6 @@
#define ctx_from_connector(c) container_of(c, struct vidi_context, \
connector)
-struct vidi_win_data {
- unsigned int offset_x;
- unsigned int offset_y;
- unsigned int ovl_width;
- unsigned int ovl_height;
- unsigned int fb_width;
- unsigned int fb_height;
- unsigned int bpp;
- dma_addr_t dma_addr;
- unsigned int buf_offsize;
- unsigned int line_size; /* bytes */
- bool enabled;
-};
-
struct vidi_context {
struct exynos_drm_display display;
struct platform_device *pdev;
@@ -53,7 +40,7 @@
struct exynos_drm_crtc *crtc;
struct drm_encoder *encoder;
struct drm_connector connector;
- struct vidi_win_data win_data[WINDOWS_NR];
+ struct exynos_drm_plane planes[WINDOWS_NR];
struct edid *raw_edid;
unsigned int clkdiv;
unsigned int default_win;
@@ -97,19 +84,6 @@
0x00, 0x00, 0x00, 0x06
};
-static void vidi_apply(struct vidi_context *ctx)
-{
- struct exynos_drm_crtc_ops *crtc_ops = ctx->crtc->ops;
- struct vidi_win_data *win_data;
- int i;
-
- for (i = 0; i < WINDOWS_NR; i++) {
- win_data = &ctx->win_data[i];
- if (win_data->enabled && (crtc_ops && crtc_ops->win_commit))
- crtc_ops->win_commit(ctx->crtc, i);
- }
-}
-
static int vidi_enable_vblank(struct exynos_drm_crtc *crtc)
{
struct vidi_context *ctx = crtc->ctx;
@@ -143,104 +117,46 @@
ctx->vblank_on = false;
}
-static void vidi_win_mode_set(struct exynos_drm_crtc *crtc,
- struct exynos_drm_plane *plane)
+static void vidi_win_commit(struct exynos_drm_crtc *crtc, unsigned int win)
{
struct vidi_context *ctx = crtc->ctx;
- struct vidi_win_data *win_data;
- int win;
- unsigned long offset;
-
- if (!plane) {
- DRM_ERROR("plane is NULL\n");
- return;
- }
-
- win = plane->zpos;
- if (win == DEFAULT_ZPOS)
- win = ctx->default_win;
-
- if (win < 0 || win >= WINDOWS_NR)
- return;
-
- offset = plane->fb_x * (plane->bpp >> 3);
- offset += plane->fb_y * plane->pitch;
-
- DRM_DEBUG_KMS("offset = 0x%lx, pitch = %x\n", offset, plane->pitch);
-
- win_data = &ctx->win_data[win];
-
- win_data->offset_x = plane->crtc_x;
- win_data->offset_y = plane->crtc_y;
- win_data->ovl_width = plane->crtc_width;
- win_data->ovl_height = plane->crtc_height;
- win_data->fb_width = plane->fb_width;
- win_data->fb_height = plane->fb_height;
- win_data->dma_addr = plane->dma_addr[0] + offset;
- win_data->bpp = plane->bpp;
- win_data->buf_offsize = (plane->fb_width - plane->crtc_width) *
- (plane->bpp >> 3);
- win_data->line_size = plane->crtc_width * (plane->bpp >> 3);
-
- /*
- * some parts of win_data should be transferred to user side
- * through specific ioctl.
- */
-
- DRM_DEBUG_KMS("offset_x = %d, offset_y = %d\n",
- win_data->offset_x, win_data->offset_y);
- DRM_DEBUG_KMS("ovl_width = %d, ovl_height = %d\n",
- win_data->ovl_width, win_data->ovl_height);
- DRM_DEBUG_KMS("paddr = 0x%lx\n", (unsigned long)win_data->dma_addr);
- DRM_DEBUG_KMS("fb_width = %d, crtc_width = %d\n",
- plane->fb_width, plane->crtc_width);
-}
-
-static void vidi_win_commit(struct exynos_drm_crtc *crtc, int zpos)
-{
- struct vidi_context *ctx = crtc->ctx;
- struct vidi_win_data *win_data;
- int win = zpos;
+ struct exynos_drm_plane *plane;
if (ctx->suspended)
return;
- if (win == DEFAULT_ZPOS)
- win = ctx->default_win;
-
if (win < 0 || win >= WINDOWS_NR)
return;
- win_data = &ctx->win_data[win];
+ plane = &ctx->planes[win];
- win_data->enabled = true;
+ plane->enabled = true;
- DRM_DEBUG_KMS("dma_addr = %pad\n", &win_data->dma_addr);
+ DRM_DEBUG_KMS("dma_addr = %pad\n", plane->dma_addr);
if (ctx->vblank_on)
schedule_work(&ctx->work);
}
-static void vidi_win_disable(struct exynos_drm_crtc *crtc, int zpos)
+static void vidi_win_disable(struct exynos_drm_crtc *crtc, unsigned int win)
{
struct vidi_context *ctx = crtc->ctx;
- struct vidi_win_data *win_data;
- int win = zpos;
-
- if (win == DEFAULT_ZPOS)
- win = ctx->default_win;
+ struct exynos_drm_plane *plane;
if (win < 0 || win >= WINDOWS_NR)
return;
- win_data = &ctx->win_data[win];
- win_data->enabled = false;
+ plane = &ctx->planes[win];
+ plane->enabled = false;
/* TODO. */
}
static int vidi_power_on(struct vidi_context *ctx, bool enable)
{
+ struct exynos_drm_plane *plane;
+ int i;
+
DRM_DEBUG_KMS("%s\n", __FILE__);
if (enable != false && enable != true)
@@ -253,7 +169,11 @@
if (test_and_clear_bit(0, &ctx->irq_flags))
vidi_enable_vblank(ctx->crtc);
- vidi_apply(ctx);
+ for (i = 0; i < WINDOWS_NR; i++) {
+ plane = &ctx->planes[i];
+ if (plane->enabled)
+ vidi_win_commit(ctx->crtc, i);
+ }
} else {
ctx->suspended = true;
}
@@ -301,7 +221,6 @@
.dpms = vidi_dpms,
.enable_vblank = vidi_enable_vblank,
.disable_vblank = vidi_disable_vblank,
- .win_mode_set = vidi_win_mode_set,
.win_commit = vidi_win_commit,
.win_disable = vidi_win_disable,
};
@@ -543,12 +462,25 @@
{
struct vidi_context *ctx = dev_get_drvdata(dev);
struct drm_device *drm_dev = data;
+ struct exynos_drm_plane *exynos_plane;
+ enum drm_plane_type type;
+ unsigned int zpos;
int ret;
vidi_ctx_initialize(ctx, drm_dev);
- ctx->crtc = exynos_drm_crtc_create(drm_dev, ctx->pipe,
- EXYNOS_DISPLAY_TYPE_VIDI,
+ for (zpos = 0; zpos < WINDOWS_NR; zpos++) {
+ type = (zpos == ctx->default_win) ? DRM_PLANE_TYPE_PRIMARY :
+ DRM_PLANE_TYPE_OVERLAY;
+ ret = exynos_plane_init(drm_dev, &ctx->planes[zpos],
+ 1 << ctx->pipe, type, zpos);
+ if (ret)
+ return ret;
+ }
+
+ exynos_plane = &ctx->planes[ctx->default_win];
+ ctx->crtc = exynos_drm_crtc_create(drm_dev, &exynos_plane->base,
+ ctx->pipe, EXYNOS_DISPLAY_TYPE_VIDI,
&vidi_crtc_ops, ctx);
if (IS_ERR(ctx->crtc)) {
DRM_ERROR("failed to create crtc.\n");
diff --git a/drivers/gpu/drm/exynos/exynos_hdmi.c b/drivers/gpu/drm/exynos/exynos_hdmi.c
index 229b361..5eba971 100644
--- a/drivers/gpu/drm/exynos/exynos_hdmi.c
+++ b/drivers/gpu/drm/exynos/exynos_hdmi.c
@@ -2007,7 +2007,7 @@
DRM_DEBUG_KMS("xres=%d, yres=%d, refresh=%d, intl=%s\n",
m->hdisplay, m->vdisplay,
m->vrefresh, (m->flags & DRM_MODE_FLAG_INTERLACE) ?
- "INTERLACED" : "PROGERESSIVE");
+ "INTERLACED" : "PROGRESSIVE");
/* preserve mode information for later use. */
drm_mode_copy(&hdata->current_mode, mode);
@@ -2101,7 +2101,7 @@
struct hdmi_context *hdata = display_to_hdmi(display);
struct drm_encoder *encoder = hdata->encoder;
struct drm_crtc *crtc = encoder->crtc;
- struct drm_crtc_helper_funcs *funcs = NULL;
+ const struct drm_crtc_helper_funcs *funcs = NULL;
DRM_DEBUG_KMS("mode %d\n", mode);
diff --git a/drivers/gpu/drm/exynos/exynos_mixer.c b/drivers/gpu/drm/exynos/exynos_mixer.c
index 2e3bc57..fbec750 100644
--- a/drivers/gpu/drm/exynos/exynos_mixer.c
+++ b/drivers/gpu/drm/exynos/exynos_mixer.c
@@ -37,35 +37,13 @@
#include "exynos_drm_drv.h"
#include "exynos_drm_crtc.h"
+#include "exynos_drm_plane.h"
#include "exynos_drm_iommu.h"
#include "exynos_mixer.h"
#define MIXER_WIN_NR 3
#define MIXER_DEFAULT_WIN 0
-struct hdmi_win_data {
- dma_addr_t dma_addr;
- dma_addr_t chroma_dma_addr;
- uint32_t pixel_format;
- unsigned int bpp;
- unsigned int crtc_x;
- unsigned int crtc_y;
- unsigned int crtc_width;
- unsigned int crtc_height;
- unsigned int fb_x;
- unsigned int fb_y;
- unsigned int fb_width;
- unsigned int fb_pitch;
- unsigned int fb_height;
- unsigned int src_width;
- unsigned int src_height;
- unsigned int mode_width;
- unsigned int mode_height;
- unsigned int scan_flags;
- bool enabled;
- bool resume;
-};
-
struct mixer_resources {
int irq;
void __iomem *mixer_regs;
@@ -90,6 +68,7 @@
struct device *dev;
struct drm_device *drm_dev;
struct exynos_drm_crtc *crtc;
+ struct exynos_drm_plane planes[MIXER_WIN_NR];
int pipe;
bool interlace;
bool powered;
@@ -99,7 +78,6 @@
struct mutex mixer_mutex;
struct mixer_resources mixer_res;
- struct hdmi_win_data win_data[MIXER_WIN_NR];
enum mixer_version_id mxr_ver;
wait_queue_head_t wait_vsync_queue;
atomic_t wait_vsync_event;
@@ -289,7 +267,7 @@
/* choosing between interlace and progressive mode */
val = (ctx->interlace ? MXR_CFG_SCAN_INTERLACE :
- MXR_CFG_SCAN_PROGRASSIVE);
+ MXR_CFG_SCAN_PROGRESSIVE);
if (ctx->mxr_ver != MXR_VER_128_0_0_184) {
/* choosing between proper HD and SD mode */
@@ -403,17 +381,16 @@
{
struct mixer_resources *res = &ctx->mixer_res;
unsigned long flags;
- struct hdmi_win_data *win_data;
- unsigned int x_ratio, y_ratio;
+ struct exynos_drm_plane *plane;
unsigned int buf_num = 1;
dma_addr_t luma_addr[2], chroma_addr[2];
bool tiled_mode = false;
bool crcb_mode = false;
u32 val;
- win_data = &ctx->win_data[win];
+ plane = &ctx->planes[win];
- switch (win_data->pixel_format) {
+ switch (plane->pixel_format) {
case DRM_FORMAT_NV12:
crcb_mode = false;
buf_num = 2;
@@ -421,35 +398,31 @@
/* TODO: single buffer format NV12, NV21 */
default:
/* ignore pixel format at disable time */
- if (!win_data->dma_addr)
+ if (!plane->dma_addr[0])
break;
DRM_ERROR("pixel format for vp is wrong [%d].\n",
- win_data->pixel_format);
+ plane->pixel_format);
return;
}
- /* scaling feature: (src << 16) / dst */
- x_ratio = (win_data->src_width << 16) / win_data->crtc_width;
- y_ratio = (win_data->src_height << 16) / win_data->crtc_height;
-
if (buf_num == 2) {
- luma_addr[0] = win_data->dma_addr;
- chroma_addr[0] = win_data->chroma_dma_addr;
+ luma_addr[0] = plane->dma_addr[0];
+ chroma_addr[0] = plane->dma_addr[1];
} else {
- luma_addr[0] = win_data->dma_addr;
- chroma_addr[0] = win_data->dma_addr
- + (win_data->fb_pitch * win_data->fb_height);
+ luma_addr[0] = plane->dma_addr[0];
+ chroma_addr[0] = plane->dma_addr[0]
+ + (plane->pitch * plane->fb_height);
}
- if (win_data->scan_flags & DRM_MODE_FLAG_INTERLACE) {
+ if (plane->scan_flag & DRM_MODE_FLAG_INTERLACE) {
ctx->interlace = true;
if (tiled_mode) {
luma_addr[1] = luma_addr[0] + 0x40;
chroma_addr[1] = chroma_addr[0] + 0x40;
} else {
- luma_addr[1] = luma_addr[0] + win_data->fb_pitch;
- chroma_addr[1] = chroma_addr[0] + win_data->fb_pitch;
+ luma_addr[1] = luma_addr[0] + plane->pitch;
+ chroma_addr[1] = chroma_addr[0] + plane->pitch;
}
} else {
ctx->interlace = false;
@@ -470,30 +443,30 @@
vp_reg_writemask(res, VP_MODE, val, VP_MODE_FMT_MASK);
/* setting size of input image */
- vp_reg_write(res, VP_IMG_SIZE_Y, VP_IMG_HSIZE(win_data->fb_pitch) |
- VP_IMG_VSIZE(win_data->fb_height));
+ vp_reg_write(res, VP_IMG_SIZE_Y, VP_IMG_HSIZE(plane->pitch) |
+ VP_IMG_VSIZE(plane->fb_height));
/* chroma height has to reduced by 2 to avoid chroma distorions */
- vp_reg_write(res, VP_IMG_SIZE_C, VP_IMG_HSIZE(win_data->fb_pitch) |
- VP_IMG_VSIZE(win_data->fb_height / 2));
+ vp_reg_write(res, VP_IMG_SIZE_C, VP_IMG_HSIZE(plane->pitch) |
+ VP_IMG_VSIZE(plane->fb_height / 2));
- vp_reg_write(res, VP_SRC_WIDTH, win_data->src_width);
- vp_reg_write(res, VP_SRC_HEIGHT, win_data->src_height);
+ vp_reg_write(res, VP_SRC_WIDTH, plane->src_width);
+ vp_reg_write(res, VP_SRC_HEIGHT, plane->src_height);
vp_reg_write(res, VP_SRC_H_POSITION,
- VP_SRC_H_POSITION_VAL(win_data->fb_x));
- vp_reg_write(res, VP_SRC_V_POSITION, win_data->fb_y);
+ VP_SRC_H_POSITION_VAL(plane->src_x));
+ vp_reg_write(res, VP_SRC_V_POSITION, plane->src_y);
- vp_reg_write(res, VP_DST_WIDTH, win_data->crtc_width);
- vp_reg_write(res, VP_DST_H_POSITION, win_data->crtc_x);
+ vp_reg_write(res, VP_DST_WIDTH, plane->crtc_width);
+ vp_reg_write(res, VP_DST_H_POSITION, plane->crtc_x);
if (ctx->interlace) {
- vp_reg_write(res, VP_DST_HEIGHT, win_data->crtc_height / 2);
- vp_reg_write(res, VP_DST_V_POSITION, win_data->crtc_y / 2);
+ vp_reg_write(res, VP_DST_HEIGHT, plane->crtc_height / 2);
+ vp_reg_write(res, VP_DST_V_POSITION, plane->crtc_y / 2);
} else {
- vp_reg_write(res, VP_DST_HEIGHT, win_data->crtc_height);
- vp_reg_write(res, VP_DST_V_POSITION, win_data->crtc_y);
+ vp_reg_write(res, VP_DST_HEIGHT, plane->crtc_height);
+ vp_reg_write(res, VP_DST_V_POSITION, plane->crtc_y);
}
- vp_reg_write(res, VP_H_RATIO, x_ratio);
- vp_reg_write(res, VP_V_RATIO, y_ratio);
+ vp_reg_write(res, VP_H_RATIO, plane->h_ratio);
+ vp_reg_write(res, VP_V_RATIO, plane->v_ratio);
vp_reg_write(res, VP_ENDIAN_MODE, VP_ENDIAN_MODE_LITTLE);
@@ -503,8 +476,8 @@
vp_reg_write(res, VP_TOP_C_PTR, chroma_addr[0]);
vp_reg_write(res, VP_BOT_C_PTR, chroma_addr[1]);
- mixer_cfg_scan(ctx, win_data->mode_height);
- mixer_cfg_rgb_fmt(ctx, win_data->mode_height);
+ mixer_cfg_scan(ctx, plane->mode_height);
+ mixer_cfg_rgb_fmt(ctx, plane->mode_height);
mixer_cfg_layer(ctx, win, true);
mixer_run(ctx);
@@ -521,25 +494,49 @@
mixer_reg_writemask(res, MXR_CFG, ~0, MXR_CFG_LAYER_UPDATE);
}
+static int mixer_setup_scale(const struct exynos_drm_plane *plane,
+ unsigned int *x_ratio, unsigned int *y_ratio)
+{
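+ /* the mixer graphics window supports only 1:1 or exact 2:1 up-scaling */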
+ if (plane->crtc_width != plane->src_width) {
+ if (plane->crtc_width == 2 * plane->src_width)
+ *x_ratio = 1;
+ else
+ goto fail;
+ }
+
+ if (plane->crtc_height != plane->src_height) {
+ if (plane->crtc_height == 2 * plane->src_height)
+ *y_ratio = 1;
+ else
+ goto fail;
+ }
+
+ return 0;
+
+fail:
+ DRM_DEBUG_KMS("only 2x width/height scaling of plane supported\n");
+ return -ENOTSUPP;
+}
+
static void mixer_graph_buffer(struct mixer_context *ctx, int win)
{
struct mixer_resources *res = &ctx->mixer_res;
unsigned long flags;
- struct hdmi_win_data *win_data;
- unsigned int x_ratio, y_ratio;
+ struct exynos_drm_plane *plane;
+ unsigned int x_ratio = 0, y_ratio = 0;
unsigned int src_x_offset, src_y_offset, dst_x_offset, dst_y_offset;
dma_addr_t dma_addr;
unsigned int fmt;
u32 val;
- win_data = &ctx->win_data[win];
+ plane = &ctx->planes[win];
#define RGB565 4
#define ARGB1555 5
#define ARGB4444 6
#define ARGB8888 7
- switch (win_data->bpp) {
+ switch (plane->bpp) {
case 16:
fmt = ARGB4444;
break;
@@ -550,21 +547,21 @@
fmt = ARGB8888;
}
- /* 2x scaling feature */
- x_ratio = 0;
- y_ratio = 0;
+ /* check if mixer supports requested scaling setup */
+ if (mixer_setup_scale(plane, &x_ratio, &y_ratio))
+ return;
- dst_x_offset = win_data->crtc_x;
- dst_y_offset = win_data->crtc_y;
+ dst_x_offset = plane->crtc_x;
+ dst_y_offset = plane->crtc_y;
/* converting dma address base and source offset */
- dma_addr = win_data->dma_addr
- + (win_data->fb_x * win_data->bpp >> 3)
- + (win_data->fb_y * win_data->fb_pitch);
+ dma_addr = plane->dma_addr[0]
+ + (plane->src_x * plane->bpp >> 3)
+ + (plane->src_y * plane->pitch);
src_x_offset = 0;
src_y_offset = 0;
- if (win_data->scan_flags & DRM_MODE_FLAG_INTERLACE)
+ if (plane->scan_flag & DRM_MODE_FLAG_INTERLACE)
ctx->interlace = true;
else
ctx->interlace = false;
@@ -578,18 +575,18 @@
/* setup geometry */
mixer_reg_write(res, MXR_GRAPHIC_SPAN(win),
- win_data->fb_pitch / (win_data->bpp >> 3));
+ plane->pitch / (plane->bpp >> 3));
/* setup display size */
if (ctx->mxr_ver == MXR_VER_128_0_0_184 &&
win == MIXER_DEFAULT_WIN) {
- val = MXR_MXR_RES_HEIGHT(win_data->mode_height);
- val |= MXR_MXR_RES_WIDTH(win_data->mode_width);
+ val = MXR_MXR_RES_HEIGHT(plane->mode_height);
+ val |= MXR_MXR_RES_WIDTH(plane->mode_width);
mixer_reg_write(res, MXR_RESOLUTION, val);
}
- val = MXR_GRP_WH_WIDTH(win_data->crtc_width);
- val |= MXR_GRP_WH_HEIGHT(win_data->crtc_height);
+ val = MXR_GRP_WH_WIDTH(plane->src_width);
+ val |= MXR_GRP_WH_HEIGHT(plane->src_height);
val |= MXR_GRP_WH_H_SCALE(x_ratio);
val |= MXR_GRP_WH_V_SCALE(y_ratio);
mixer_reg_write(res, MXR_GRAPHIC_WH(win), val);
@@ -607,8 +604,8 @@
/* set buffer address to mixer */
mixer_reg_write(res, MXR_GRAPHIC_BASE(win), dma_addr);
- mixer_cfg_scan(ctx, win_data->mode_height);
- mixer_cfg_rgb_fmt(ctx, win_data->mode_height);
+ mixer_cfg_scan(ctx, plane->mode_height);
+ mixer_cfg_rgb_fmt(ctx, plane->mode_height);
mixer_cfg_layer(ctx, win, true);
/* layer update mandatory for mixer 16.0.33.0 */
@@ -920,63 +917,9 @@
mixer_reg_writemask(res, MXR_INT_EN, 0, MXR_INT_EN_VSYNC);
}
-static void mixer_win_mode_set(struct exynos_drm_crtc *crtc,
- struct exynos_drm_plane *plane)
+static void mixer_win_commit(struct exynos_drm_crtc *crtc, unsigned int win)
{
struct mixer_context *mixer_ctx = crtc->ctx;
- struct hdmi_win_data *win_data;
- int win;
-
- if (!plane) {
- DRM_ERROR("plane is NULL\n");
- return;
- }
-
- DRM_DEBUG_KMS("set [%d]x[%d] at (%d,%d) to [%d]x[%d] at (%d,%d)\n",
- plane->fb_width, plane->fb_height,
- plane->fb_x, plane->fb_y,
- plane->crtc_width, plane->crtc_height,
- plane->crtc_x, plane->crtc_y);
-
- win = plane->zpos;
- if (win == DEFAULT_ZPOS)
- win = MIXER_DEFAULT_WIN;
-
- if (win < 0 || win >= MIXER_WIN_NR) {
- DRM_ERROR("mixer window[%d] is wrong\n", win);
- return;
- }
-
- win_data = &mixer_ctx->win_data[win];
-
- win_data->dma_addr = plane->dma_addr[0];
- win_data->chroma_dma_addr = plane->dma_addr[1];
- win_data->pixel_format = plane->pixel_format;
- win_data->bpp = plane->bpp;
-
- win_data->crtc_x = plane->crtc_x;
- win_data->crtc_y = plane->crtc_y;
- win_data->crtc_width = plane->crtc_width;
- win_data->crtc_height = plane->crtc_height;
-
- win_data->fb_x = plane->fb_x;
- win_data->fb_y = plane->fb_y;
- win_data->fb_width = plane->fb_width;
- win_data->fb_height = plane->fb_height;
- win_data->fb_pitch = plane->pitch;
- win_data->src_width = plane->src_width;
- win_data->src_height = plane->src_height;
-
- win_data->mode_width = plane->mode_width;
- win_data->mode_height = plane->mode_height;
-
- win_data->scan_flags = plane->scan_flag;
-}
-
-static void mixer_win_commit(struct exynos_drm_crtc *crtc, int zpos)
-{
- struct mixer_context *mixer_ctx = crtc->ctx;
- int win = zpos == DEFAULT_ZPOS ? MIXER_DEFAULT_WIN : zpos;
DRM_DEBUG_KMS("win: %d\n", win);
@@ -992,14 +935,13 @@
else
mixer_graph_buffer(mixer_ctx, win);
- mixer_ctx->win_data[win].enabled = true;
+ mixer_ctx->planes[win].enabled = true;
}
-static void mixer_win_disable(struct exynos_drm_crtc *crtc, int zpos)
+static void mixer_win_disable(struct exynos_drm_crtc *crtc, unsigned int win)
{
struct mixer_context *mixer_ctx = crtc->ctx;
struct mixer_resources *res = &mixer_ctx->mixer_res;
- int win = zpos == DEFAULT_ZPOS ? MIXER_DEFAULT_WIN : zpos;
unsigned long flags;
DRM_DEBUG_KMS("win: %d\n", win);
@@ -1007,7 +949,7 @@
mutex_lock(&mixer_ctx->mixer_mutex);
if (!mixer_ctx->powered) {
mutex_unlock(&mixer_ctx->mixer_mutex);
- mixer_ctx->win_data[win].resume = false;
+ mixer_ctx->planes[win].resume = false;
return;
}
mutex_unlock(&mixer_ctx->mixer_mutex);
@@ -1020,7 +962,7 @@
mixer_vsync_set_update(mixer_ctx, true);
spin_unlock_irqrestore(&res->reg_slock, flags);
- mixer_ctx->win_data[win].enabled = false;
+ mixer_ctx->planes[win].enabled = false;
}
static void mixer_wait_for_vblank(struct exynos_drm_crtc *crtc)
@@ -1057,12 +999,12 @@
static void mixer_window_suspend(struct mixer_context *ctx)
{
- struct hdmi_win_data *win_data;
+ struct exynos_drm_plane *plane;
int i;
for (i = 0; i < MIXER_WIN_NR; i++) {
- win_data = &ctx->win_data[i];
- win_data->resume = win_data->enabled;
+ plane = &ctx->planes[i];
+ plane->resume = plane->enabled;
mixer_win_disable(ctx->crtc, i);
}
mixer_wait_for_vblank(ctx->crtc);
@@ -1070,14 +1012,14 @@
static void mixer_window_resume(struct mixer_context *ctx)
{
- struct hdmi_win_data *win_data;
+ struct exynos_drm_plane *plane;
int i;
for (i = 0; i < MIXER_WIN_NR; i++) {
- win_data = &ctx->win_data[i];
- win_data->enabled = win_data->resume;
- win_data->resume = false;
- if (win_data->enabled)
+ plane = &ctx->planes[i];
+ plane->enabled = plane->resume;
+ plane->resume = false;
+ if (plane->enabled)
mixer_win_commit(ctx->crtc, i);
}
}
@@ -1189,7 +1131,6 @@
.enable_vblank = mixer_enable_vblank,
.disable_vblank = mixer_disable_vblank,
.wait_for_vblank = mixer_wait_for_vblank,
- .win_mode_set = mixer_win_mode_set,
.win_commit = mixer_win_commit,
.win_disable = mixer_win_disable,
};
@@ -1253,15 +1194,28 @@
{
struct mixer_context *ctx = dev_get_drvdata(dev);
struct drm_device *drm_dev = data;
+ struct exynos_drm_plane *exynos_plane;
+ enum drm_plane_type type;
+ unsigned int zpos;
int ret;
ret = mixer_initialize(ctx, drm_dev);
if (ret)
return ret;
- ctx->crtc = exynos_drm_crtc_create(drm_dev, ctx->pipe,
- EXYNOS_DISPLAY_TYPE_HDMI,
- &mixer_crtc_ops, ctx);
+ for (zpos = 0; zpos < MIXER_WIN_NR; zpos++) {
+ type = (zpos == MIXER_DEFAULT_WIN) ? DRM_PLANE_TYPE_PRIMARY :
+ DRM_PLANE_TYPE_OVERLAY;
+ ret = exynos_plane_init(drm_dev, &ctx->planes[zpos],
+ 1 << ctx->pipe, type, zpos);
+ if (ret)
+ return ret;
+ }
+
+ exynos_plane = &ctx->planes[MIXER_DEFAULT_WIN];
+ ctx->crtc = exynos_drm_crtc_create(drm_dev, &exynos_plane->base,
+ ctx->pipe, EXYNOS_DISPLAY_TYPE_HDMI,
+ &mixer_crtc_ops, ctx);
if (IS_ERR(ctx->crtc)) {
mixer_ctx_remove(ctx);
ret = PTR_ERR(ctx->crtc);
diff --git a/drivers/gpu/drm/exynos/regs-mixer.h b/drivers/gpu/drm/exynos/regs-mixer.h
index 5f32e1a..ac60260 100644
--- a/drivers/gpu/drm/exynos/regs-mixer.h
+++ b/drivers/gpu/drm/exynos/regs-mixer.h
@@ -101,7 +101,7 @@
#define MXR_CFG_GRP0_ENABLE (1 << 4)
#define MXR_CFG_VP_ENABLE (1 << 3)
#define MXR_CFG_SCAN_INTERLACE (0 << 2)
-#define MXR_CFG_SCAN_PROGRASSIVE (1 << 2)
+#define MXR_CFG_SCAN_PROGRESSIVE (1 << 2)
#define MXR_CFG_SCAN_NTSC (0 << 1)
#define MXR_CFG_SCAN_PAL (1 << 1)
#define MXR_CFG_SCAN_SD (0 << 0)
diff --git a/drivers/gpu/drm/gma500/cdv_intel_display.c b/drivers/gpu/drm/gma500/cdv_intel_display.c
index 6672732..7d47b3d 100644
--- a/drivers/gpu/drm/gma500/cdv_intel_display.c
+++ b/drivers/gpu/drm/gma500/cdv_intel_display.c
@@ -823,7 +823,7 @@
/* Flush the plane changes */
{
- struct drm_crtc_helper_funcs *crtc_funcs =
+ const struct drm_crtc_helper_funcs *crtc_funcs =
crtc->helper_private;
crtc_funcs->mode_set_base(crtc, x, y, old_fb);
}
diff --git a/drivers/gpu/drm/gma500/cdv_intel_hdmi.c b/drivers/gpu/drm/gma500/cdv_intel_hdmi.c
index 4268bf2..6b1d334 100644
--- a/drivers/gpu/drm/gma500/cdv_intel_hdmi.c
+++ b/drivers/gpu/drm/gma500/cdv_intel_hdmi.c
@@ -195,7 +195,7 @@
encoder->crtc->x, encoder->crtc->y, encoder->crtc->primary->fb))
return -1;
} else {
- struct drm_encoder_helper_funcs *helpers
+ const struct drm_encoder_helper_funcs *helpers
= encoder->helper_private;
helpers->mode_set(encoder, &crtc->saved_mode,
&crtc->saved_adjusted_mode);
diff --git a/drivers/gpu/drm/gma500/cdv_intel_lvds.c b/drivers/gpu/drm/gma500/cdv_intel_lvds.c
index 0b77039..211069b 100644
--- a/drivers/gpu/drm/gma500/cdv_intel_lvds.c
+++ b/drivers/gpu/drm/gma500/cdv_intel_lvds.c
@@ -505,7 +505,7 @@
else
gma_backlight_set(encoder->dev, value);
} else if (!strcmp(property->name, "DPMS") && encoder) {
- struct drm_encoder_helper_funcs *helpers =
+ const struct drm_encoder_helper_funcs *helpers =
encoder->helper_private;
helpers->dpms(encoder, value);
}
diff --git a/drivers/gpu/drm/gma500/gma_display.c b/drivers/gpu/drm/gma500/gma_display.c
index 9bb9bdd..001b450 100644
--- a/drivers/gpu/drm/gma500/gma_display.c
+++ b/drivers/gpu/drm/gma500/gma_display.c
@@ -501,20 +501,20 @@
void gma_crtc_prepare(struct drm_crtc *crtc)
{
- struct drm_crtc_helper_funcs *crtc_funcs = crtc->helper_private;
+ const struct drm_crtc_helper_funcs *crtc_funcs = crtc->helper_private;
crtc_funcs->dpms(crtc, DRM_MODE_DPMS_OFF);
}
void gma_crtc_commit(struct drm_crtc *crtc)
{
- struct drm_crtc_helper_funcs *crtc_funcs = crtc->helper_private;
+ const struct drm_crtc_helper_funcs *crtc_funcs = crtc->helper_private;
crtc_funcs->dpms(crtc, DRM_MODE_DPMS_ON);
}
void gma_crtc_disable(struct drm_crtc *crtc)
{
struct gtt_range *gt;
- struct drm_crtc_helper_funcs *crtc_funcs = crtc->helper_private;
+ const struct drm_crtc_helper_funcs *crtc_funcs = crtc->helper_private;
crtc_funcs->dpms(crtc, DRM_MODE_DPMS_OFF);
@@ -656,7 +656,7 @@
void gma_encoder_prepare(struct drm_encoder *encoder)
{
- struct drm_encoder_helper_funcs *encoder_funcs =
+ const struct drm_encoder_helper_funcs *encoder_funcs =
encoder->helper_private;
/* lvds has its own version of prepare see psb_intel_lvds_prepare */
encoder_funcs->dpms(encoder, DRM_MODE_DPMS_OFF);
@@ -664,7 +664,7 @@
void gma_encoder_commit(struct drm_encoder *encoder)
{
- struct drm_encoder_helper_funcs *encoder_funcs =
+ const struct drm_encoder_helper_funcs *encoder_funcs =
encoder->helper_private;
/* lvds has its own version of commit see psb_intel_lvds_commit */
encoder_funcs->dpms(encoder, DRM_MODE_DPMS_ON);
diff --git a/drivers/gpu/drm/gma500/mdfld_dsi_output.c b/drivers/gpu/drm/gma500/mdfld_dsi_output.c
index abf2248..89f705c 100644
--- a/drivers/gpu/drm/gma500/mdfld_dsi_output.c
+++ b/drivers/gpu/drm/gma500/mdfld_dsi_output.c
@@ -290,7 +290,7 @@
encoder->crtc->primary->fb))
goto set_prop_error;
} else {
- struct drm_encoder_helper_funcs *funcs =
+ const struct drm_encoder_helper_funcs *funcs =
encoder->helper_private;
funcs->mode_set(encoder,
&gma_crtc->saved_mode,
diff --git a/drivers/gpu/drm/gma500/mdfld_intel_display.c b/drivers/gpu/drm/gma500/mdfld_intel_display.c
index 8cc8a5a..acd3834 100644
--- a/drivers/gpu/drm/gma500/mdfld_intel_display.c
+++ b/drivers/gpu/drm/gma500/mdfld_intel_display.c
@@ -849,7 +849,7 @@
/* Flush the plane changes */
{
- struct drm_crtc_helper_funcs *crtc_funcs =
+ const struct drm_crtc_helper_funcs *crtc_funcs =
crtc->helper_private;
crtc_funcs->mode_set_base(crtc, x, y, old_fb);
}
diff --git a/drivers/gpu/drm/gma500/oaktrail_crtc.c b/drivers/gpu/drm/gma500/oaktrail_crtc.c
index 2de216c..1048f0c 100644
--- a/drivers/gpu/drm/gma500/oaktrail_crtc.c
+++ b/drivers/gpu/drm/gma500/oaktrail_crtc.c
@@ -483,7 +483,7 @@
/* Flush the plane changes */
{
- struct drm_crtc_helper_funcs *crtc_funcs =
+ const struct drm_crtc_helper_funcs *crtc_funcs =
crtc->helper_private;
crtc_funcs->mode_set_base(crtc, x, y, old_fb);
}
diff --git a/drivers/gpu/drm/gma500/oaktrail_hdmi.c b/drivers/gpu/drm/gma500/oaktrail_hdmi.c
index 54f73f5..2310d87 100644
--- a/drivers/gpu/drm/gma500/oaktrail_hdmi.c
+++ b/drivers/gpu/drm/gma500/oaktrail_hdmi.c
@@ -347,7 +347,7 @@
/* Flush the plane changes */
{
- struct drm_crtc_helper_funcs *crtc_funcs = crtc->helper_private;
+ const struct drm_crtc_helper_funcs *crtc_funcs = crtc->helper_private;
crtc_funcs->mode_set_base(crtc, x, y, old_fb);
}
diff --git a/drivers/gpu/drm/gma500/psb_intel_display.c b/drivers/gpu/drm/gma500/psb_intel_display.c
index b21a094..6659da8 100644
--- a/drivers/gpu/drm/gma500/psb_intel_display.c
+++ b/drivers/gpu/drm/gma500/psb_intel_display.c
@@ -108,7 +108,7 @@
struct drm_device *dev = crtc->dev;
struct drm_psb_private *dev_priv = dev->dev_private;
struct gma_crtc *gma_crtc = to_gma_crtc(crtc);
- struct drm_crtc_helper_funcs *crtc_funcs = crtc->helper_private;
+ const struct drm_crtc_helper_funcs *crtc_funcs = crtc->helper_private;
int pipe = gma_crtc->pipe;
const struct psb_offset *map = &dev_priv->regmap[pipe];
int refclk;
diff --git a/drivers/gpu/drm/gma500/psb_intel_lvds.c b/drivers/gpu/drm/gma500/psb_intel_lvds.c
index 88aad95..ce0645d 100644
--- a/drivers/gpu/drm/gma500/psb_intel_lvds.c
+++ b/drivers/gpu/drm/gma500/psb_intel_lvds.c
@@ -625,7 +625,7 @@
else
gma_backlight_set(encoder->dev, value);
} else if (!strcmp(property->name, "DPMS")) {
- struct drm_encoder_helper_funcs *hfuncs
+ const struct drm_encoder_helper_funcs *hfuncs
= encoder->helper_private;
hfuncs->dpms(encoder, value);
}
diff --git a/drivers/gpu/drm/i2c/adv7511.c b/drivers/gpu/drm/i2c/adv7511.c
index fa140e0..b728523 100644
--- a/drivers/gpu/drm/i2c/adv7511.c
+++ b/drivers/gpu/drm/i2c/adv7511.c
@@ -27,12 +27,13 @@
struct regmap *regmap;
struct regmap *packet_memory_regmap;
enum drm_connector_status status;
- int dpms_mode;
+ bool powered;
unsigned int f_tmds;
unsigned int current_edid_segment;
uint8_t edid_buf[256];
+ bool edid_read;
wait_queue_head_t wq;
struct drm_encoder *encoder;
@@ -357,6 +358,48 @@
adv7511->rgb = config->input_colorspace == HDMI_COLORSPACE_RGB;
}
+static void adv7511_power_on(struct adv7511 *adv7511)
+{
+ adv7511->current_edid_segment = -1;
+
+ regmap_write(adv7511->regmap, ADV7511_REG_INT(0),
+ ADV7511_INT0_EDID_READY);
+ regmap_write(adv7511->regmap, ADV7511_REG_INT(1),
+ ADV7511_INT1_DDC_ERROR);
+ regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER,
+ ADV7511_POWER_POWER_DOWN, 0);
+
+ /*
+ * Per spec it is allowed to pulse the HPD signal to indicate that the
+ * EDID information has changed. Some monitors do this when they wake up
+ * from standby or are enabled. When HPD goes low the adv7511 is
+ * reset and the outputs are disabled, which might cause the monitor to
+ * go to standby again. To avoid this we ignore the HPD pin for the
+ * first few seconds after enabling the output.
+ */
+ regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER2,
+ ADV7511_REG_POWER2_HDP_SRC_MASK,
+ ADV7511_REG_POWER2_HDP_SRC_NONE);
+
+ /*
+ * Most of the registers are reset during power down or when HPD is low.
+ */
+ regcache_sync(adv7511->regmap);
+
+ adv7511->powered = true;
+}
+
+static void adv7511_power_off(struct adv7511 *adv7511)
+{
+ /* TODO: setup additional power down modes */
+ regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER,
+ ADV7511_POWER_POWER_DOWN,
+ ADV7511_POWER_POWER_DOWN);
+ regcache_mark_dirty(adv7511->regmap);
+
+ adv7511->powered = false;
+}
+
/* -----------------------------------------------------------------------------
* Interrupt and hotplug detection
*/
@@ -379,69 +422,71 @@
return false;
}
-static irqreturn_t adv7511_irq_handler(int irq, void *devid)
-{
- struct adv7511 *adv7511 = devid;
-
- if (adv7511_hpd(adv7511))
- drm_helper_hpd_irq_event(adv7511->encoder->dev);
-
- wake_up_all(&adv7511->wq);
-
- return IRQ_HANDLED;
-}
-
-static unsigned int adv7511_is_interrupt_pending(struct adv7511 *adv7511,
- unsigned int irq)
+static int adv7511_irq_process(struct adv7511 *adv7511)
{
unsigned int irq0, irq1;
- unsigned int pending;
int ret;
ret = regmap_read(adv7511->regmap, ADV7511_REG_INT(0), &irq0);
if (ret < 0)
- return 0;
+ return ret;
+
ret = regmap_read(adv7511->regmap, ADV7511_REG_INT(1), &irq1);
if (ret < 0)
- return 0;
+ return ret;
- pending = (irq1 << 8) | irq0;
+ regmap_write(adv7511->regmap, ADV7511_REG_INT(0), irq0);
+ regmap_write(adv7511->regmap, ADV7511_REG_INT(1), irq1);
- return pending & irq;
-}
+ if (irq0 & ADV7511_INT0_HDP)
+ drm_helper_hpd_irq_event(adv7511->encoder->dev);
-static int adv7511_wait_for_interrupt(struct adv7511 *adv7511, int irq,
- int timeout)
-{
- unsigned int pending;
- int ret;
+ if (irq0 & ADV7511_INT0_EDID_READY || irq1 & ADV7511_INT1_DDC_ERROR) {
+ adv7511->edid_read = true;
- if (adv7511->i2c_main->irq) {
- ret = wait_event_interruptible_timeout(adv7511->wq,
- adv7511_is_interrupt_pending(adv7511, irq),
- msecs_to_jiffies(timeout));
- if (ret <= 0)
- return 0;
- pending = adv7511_is_interrupt_pending(adv7511, irq);
- } else {
- if (timeout < 25)
- timeout = 25;
- do {
- pending = adv7511_is_interrupt_pending(adv7511, irq);
- if (pending)
- break;
- msleep(25);
- timeout -= 25;
- } while (timeout >= 25);
+ if (adv7511->i2c_main->irq)
+ wake_up_all(&adv7511->wq);
}
- return pending;
+ return 0;
+}
+
+static irqreturn_t adv7511_irq_handler(int irq, void *devid)
+{
+ struct adv7511 *adv7511 = devid;
+ int ret;
+
+ ret = adv7511_irq_process(adv7511);
+ return ret < 0 ? IRQ_NONE : IRQ_HANDLED;
}
/* -----------------------------------------------------------------------------
* EDID retrieval
*/
+static int adv7511_wait_for_edid(struct adv7511 *adv7511, int timeout)
+{
+ int ret;
+
+ if (adv7511->i2c_main->irq) {
+ ret = wait_event_interruptible_timeout(adv7511->wq,
+ adv7511->edid_read, msecs_to_jiffies(timeout));
+ } else {
+ for (; timeout > 0; timeout -= 25) {
+ ret = adv7511_irq_process(adv7511);
+ if (ret < 0)
+ break;
+
+ if (adv7511->edid_read)
+ break;
+
+ msleep(25);
+ }
+ }
+
+ return adv7511->edid_read ? 0 : -EIO;
+}
+
static int adv7511_get_edid_block(void *data, u8 *buf, unsigned int block,
size_t len)
{
@@ -463,19 +508,14 @@
return ret;
if (status != 2) {
+ adv7511->edid_read = false;
regmap_write(adv7511->regmap, ADV7511_REG_EDID_SEGMENT,
block);
- ret = adv7511_wait_for_interrupt(adv7511,
- ADV7511_INT0_EDID_READY |
- ADV7511_INT1_DDC_ERROR, 200);
-
- if (!(ret & ADV7511_INT0_EDID_READY))
- return -EIO;
+ ret = adv7511_wait_for_edid(adv7511, 200);
+ if (ret < 0)
+ return ret;
}
- regmap_write(adv7511->regmap, ADV7511_REG_INT(0),
- ADV7511_INT0_EDID_READY | ADV7511_INT1_DDC_ERROR);
-
/* Break this apart, hopefully more I2C controllers will
* support 64 byte transfers than 256 byte transfers
*/
@@ -526,9 +566,11 @@
unsigned int count;
/* Reading the EDID only works if the device is powered */
- if (adv7511->dpms_mode != DRM_MODE_DPMS_ON) {
+ if (!adv7511->powered) {
regmap_write(adv7511->regmap, ADV7511_REG_INT(0),
- ADV7511_INT0_EDID_READY | ADV7511_INT1_DDC_ERROR);
+ ADV7511_INT0_EDID_READY);
+ regmap_write(adv7511->regmap, ADV7511_REG_INT(1),
+ ADV7511_INT1_DDC_ERROR);
regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER,
ADV7511_POWER_POWER_DOWN, 0);
adv7511->current_edid_segment = -1;
@@ -536,7 +578,7 @@
edid = drm_do_get_edid(connector, adv7511_get_edid_block, adv7511);
- if (adv7511->dpms_mode != DRM_MODE_DPMS_ON)
+ if (!adv7511->powered)
regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER,
ADV7511_POWER_POWER_DOWN,
ADV7511_POWER_POWER_DOWN);
@@ -558,41 +600,10 @@
{
struct adv7511 *adv7511 = encoder_to_adv7511(encoder);
- switch (mode) {
- case DRM_MODE_DPMS_ON:
- adv7511->current_edid_segment = -1;
-
- regmap_write(adv7511->regmap, ADV7511_REG_INT(0),
- ADV7511_INT0_EDID_READY | ADV7511_INT1_DDC_ERROR);
- regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER,
- ADV7511_POWER_POWER_DOWN, 0);
- /*
- * Per spec it is allowed to pulse the HDP signal to indicate
- * that the EDID information has changed. Some monitors do this
- * when they wakeup from standby or are enabled. When the HDP
- * goes low the adv7511 is reset and the outputs are disabled
- * which might cause the monitor to go to standby again. To
- * avoid this we ignore the HDP pin for the first few seconds
- * after enabeling the output.
- */
- regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER2,
- ADV7511_REG_POWER2_HDP_SRC_MASK,
- ADV7511_REG_POWER2_HDP_SRC_NONE);
- /* Most of the registers are reset during power down or
- * when HPD is low
- */
- regcache_sync(adv7511->regmap);
- break;
- default:
- /* TODO: setup additional power down modes */
- regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER,
- ADV7511_POWER_POWER_DOWN,
- ADV7511_POWER_POWER_DOWN);
- regcache_mark_dirty(adv7511->regmap);
- break;
- }
-
- adv7511->dpms_mode = mode;
+ if (mode == DRM_MODE_DPMS_ON)
+ adv7511_power_on(adv7511);
+ else
+ adv7511_power_off(adv7511);
}
static enum drm_connector_status
@@ -620,10 +631,9 @@
* there is a pending HPD interrupt and the cable is connected there was
* at least one transition from disconnected to connected and the chip
* has to be reinitialized. */
- if (status == connector_status_connected && hpd &&
- adv7511->dpms_mode == DRM_MODE_DPMS_ON) {
+ if (status == connector_status_connected && hpd && adv7511->powered) {
regcache_mark_dirty(adv7511->regmap);
- adv7511_encoder_dpms(encoder, adv7511->dpms_mode);
+ adv7511_power_on(adv7511);
adv7511_get_modes(encoder, connector);
if (adv7511->status == connector_status_connected)
status = connector_status_disconnected;
@@ -858,7 +868,7 @@
if (!adv7511)
return -ENOMEM;
- adv7511->dpms_mode = DRM_MODE_DPMS_OFF;
+ adv7511->powered = false;
adv7511->status = connector_status_disconnected;
ret = adv7511_parse_dt(dev->of_node, &link_config);
@@ -918,10 +928,7 @@
regmap_write(adv7511->regmap, ADV7511_REG_CEC_CTRL,
ADV7511_CEC_CTRL_POWER_DOWN);
- regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER,
- ADV7511_POWER_POWER_DOWN, ADV7511_POWER_POWER_DOWN);
-
- adv7511->current_edid_segment = -1;
+ adv7511_power_off(adv7511);
i2c_set_clientdata(i2c, adv7511);
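
For reference, the EDID wait rework above either sleeps on the driver's wait queue (when an interrupt line is wired up) or polls the interrupt-status path every 25 ms until the timeout expires. A stand-alone sketch of that polling fallback, with hypothetical poll_once()/done() callbacks standing in for adv7511_irq_process() and the edid_read flag (not part of the patch):

	#include <linux/delay.h>
	#include <linux/errno.h>
	#include <linux/types.h>

	static int wait_polled(int timeout_ms,
			       int (*poll_once)(void *ctx),
			       bool (*done)(void *ctx),
			       void *ctx)
	{
		for (; timeout_ms > 0; timeout_ms -= 25) {
			int ret = poll_once(ctx);	/* ack and dispatch pending IRQ bits */

			if (ret < 0)
				return ret;
			if (done(ctx))			/* e.g. the EDID block has arrived */
				return 0;
			msleep(25);
		}

		return -EIO;
	}
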
diff --git a/drivers/gpu/drm/i2c/tda998x_drv.c b/drivers/gpu/drm/i2c/tda998x_drv.c
index a9041d1..5febffd 100644
--- a/drivers/gpu/drm/i2c/tda998x_drv.c
+++ b/drivers/gpu/drm/i2c/tda998x_drv.c
@@ -25,6 +25,7 @@
#include <drm/drm_crtc_helper.h>
#include <drm/drm_encoder_slave.h>
#include <drm/drm_edid.h>
+#include <drm/drm_of.h>
#include <drm/i2c/tda998x.h>
#define DBG(fmt, ...) DRM_DEBUG(fmt"\n", ##__VA_ARGS__)
@@ -387,7 +388,7 @@
};
int ret = i2c_master_send(client, buf, sizeof(buf));
if (ret < 0) {
- dev_err(&client->dev, "setpage %04x err %d\n",
+ dev_err(&client->dev, "%s %04x err %d\n", __func__,
reg, ret);
return ret;
}
@@ -1035,8 +1036,9 @@
connector_status_disconnected;
}
-static int read_edid_block(struct tda998x_priv *priv, uint8_t *buf, int blk)
+static int read_edid_block(void *data, u8 *buf, unsigned int blk, size_t length)
{
+ struct tda998x_priv *priv = data;
uint8_t offset, segptr;
int ret, i;
@@ -1080,8 +1082,8 @@
return -ETIMEDOUT;
}
- ret = reg_read_range(priv, REG_EDID_DATA_0, buf, EDID_LENGTH);
- if (ret != EDID_LENGTH) {
+ ret = reg_read_range(priv, REG_EDID_DATA_0, buf, length);
+ if (ret != length) {
dev_err(&priv->hdmi->dev, "failed to read edid block %d: %d\n",
blk, ret);
return ret;
@@ -1090,82 +1092,31 @@
return 0;
}
-static uint8_t *do_get_edid(struct tda998x_priv *priv)
-{
- int j, valid_extensions = 0;
- uint8_t *block, *new;
- bool print_bad_edid = drm_debug & DRM_UT_KMS;
-
- if ((block = kmalloc(EDID_LENGTH, GFP_KERNEL)) == NULL)
- return NULL;
-
- if (priv->rev == TDA19988)
- reg_clear(priv, REG_TX4, TX4_PD_RAM);
-
- /* base block fetch */
- if (read_edid_block(priv, block, 0))
- goto fail;
-
- if (!drm_edid_block_valid(block, 0, print_bad_edid))
- goto fail;
-
- /* if there's no extensions, we're done */
- if (block[0x7e] == 0)
- goto done;
-
- new = krealloc(block, (block[0x7e] + 1) * EDID_LENGTH, GFP_KERNEL);
- if (!new)
- goto fail;
- block = new;
-
- for (j = 1; j <= block[0x7e]; j++) {
- uint8_t *ext_block = block + (valid_extensions + 1) * EDID_LENGTH;
- if (read_edid_block(priv, ext_block, j))
- goto fail;
-
- if (!drm_edid_block_valid(ext_block, j, print_bad_edid))
- goto fail;
-
- valid_extensions++;
- }
-
- if (valid_extensions != block[0x7e]) {
- block[EDID_LENGTH-1] += block[0x7e] - valid_extensions;
- block[0x7e] = valid_extensions;
- new = krealloc(block, (valid_extensions + 1) * EDID_LENGTH, GFP_KERNEL);
- if (!new)
- goto fail;
- block = new;
- }
-
-done:
- if (priv->rev == TDA19988)
- reg_set(priv, REG_TX4, TX4_PD_RAM);
-
- return block;
-
-fail:
- if (priv->rev == TDA19988)
- reg_set(priv, REG_TX4, TX4_PD_RAM);
- dev_warn(&priv->hdmi->dev, "failed to read EDID\n");
- kfree(block);
- return NULL;
-}
-
static int
tda998x_encoder_get_modes(struct tda998x_priv *priv,
struct drm_connector *connector)
{
- struct edid *edid = (struct edid *)do_get_edid(priv);
- int n = 0;
+ struct edid *edid;
+ int n;
- if (edid) {
- drm_mode_connector_update_edid_property(connector, edid);
- n = drm_add_edid_modes(connector, edid);
- priv->is_hdmi_sink = drm_detect_hdmi_monitor(edid);
- kfree(edid);
+ if (priv->rev == TDA19988)
+ reg_clear(priv, REG_TX4, TX4_PD_RAM);
+
+ edid = drm_do_get_edid(connector, read_edid_block, priv);
+
+ if (priv->rev == TDA19988)
+ reg_set(priv, REG_TX4, TX4_PD_RAM);
+
+ if (!edid) {
+ dev_warn(&priv->hdmi->dev, "failed to read EDID\n");
+ return 0;
}
+ drm_mode_connector_update_edid_property(connector, edid);
+ n = drm_add_edid_modes(connector, edid);
+ priv->is_hdmi_sink = drm_detect_hdmi_monitor(edid);
+ kfree(edid);
+
return n;
}
@@ -1547,6 +1498,7 @@
struct i2c_client *client = to_i2c_client(dev);
struct drm_device *drm = data;
struct tda998x_priv2 *priv;
+ uint32_t crtcs = 0;
int ret;
priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
@@ -1555,9 +1507,18 @@
dev_set_drvdata(dev, priv);
+ if (dev->of_node)
+ crtcs = drm_of_find_possible_crtcs(drm, dev->of_node);
+
+ /* If no CRTCs were found, fall back to our old behaviour */
+ if (crtcs == 0) {
+ dev_warn(dev, "Falling back to first CRTC\n");
+ crtcs = 1 << 0;
+ }
+
priv->base.encoder = &priv->encoder;
priv->connector.interlace_allowed = 1;
- priv->encoder.possible_crtcs = 1 << 0;
+ priv->encoder.possible_crtcs = crtcs;
ret = tda998x_create(client, &priv->base);
if (ret)
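
For reference, possible_crtcs is a bitmask with one bit per CRTC index, so the DT-based lookup added above reduces to the following sketch (helper name invented for illustration, not part of the patch):

	#include <linux/types.h>

	static u32 pick_possible_crtcs(u32 crtcs_from_of_graph)
	{
		/* No usable OF graph data: fall back to the first CRTC only. */
		if (crtcs_from_of_graph == 0)
			return 1 << 0;

		/* e.g. (1 << 0) | (1 << 2) binds the encoder to CRTCs 0 and 2 */
		return crtcs_from_of_graph;
	}
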
diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
index f019225..a69002e 100644
--- a/drivers/gpu/drm/i915/Makefile
+++ b/drivers/gpu/drm/i915/Makefile
@@ -28,6 +28,7 @@
i915_gem_execbuffer.o \
i915_gem_gtt.o \
i915_gem.o \
+ i915_gem_shrinker.o \
i915_gem_stolen.o \
i915_gem_tiling.o \
i915_gem_userptr.o \
@@ -83,9 +84,11 @@
intel_sdvo.o \
intel_tv.o
+# virtual gpu code
+i915-y += i915_vgpu.o
+
# legacy horrors
-i915-y += i915_dma.o \
- i915_ums.o
+i915-y += i915_dma.o
obj-$(CONFIG_DRM_I915) += i915.o
diff --git a/drivers/gpu/drm/i915/i915_cmd_parser.c b/drivers/gpu/drm/i915/i915_cmd_parser.c
index 806e812..61ae8ff 100644
--- a/drivers/gpu/drm/i915/i915_cmd_parser.c
+++ b/drivers/gpu/drm/i915/i915_cmd_parser.c
@@ -818,23 +818,28 @@
return false;
}
-static u32 *vmap_batch(struct drm_i915_gem_object *obj)
+static u32 *vmap_batch(struct drm_i915_gem_object *obj,
+ unsigned start, unsigned len)
{
int i;
void *addr = NULL;
struct sg_page_iter sg_iter;
+ int first_page = start >> PAGE_SHIFT;
+ int last_page = (len + start + 4095) >> PAGE_SHIFT;
+ int npages = last_page - first_page;
struct page **pages;
- pages = drm_malloc_ab(obj->base.size >> PAGE_SHIFT, sizeof(*pages));
+ pages = drm_malloc_ab(npages, sizeof(*pages));
if (pages == NULL) {
DRM_DEBUG_DRIVER("Failed to get space for pages\n");
goto finish;
}
i = 0;
- for_each_sg_page(obj->pages->sgl, &sg_iter, obj->pages->nents, 0) {
- pages[i] = sg_page_iter_page(&sg_iter);
- i++;
+ for_each_sg_page(obj->pages->sgl, &sg_iter, obj->pages->nents, first_page) {
+ pages[i++] = sg_page_iter_page(&sg_iter);
+ if (i == npages)
+ break;
}
addr = vmap(pages, i, 0, PAGE_KERNEL);
@@ -855,61 +860,61 @@
u32 batch_start_offset,
u32 batch_len)
{
- int ret = 0;
int needs_clflush = 0;
- u32 *src_base, *dest_base = NULL;
- u32 *src_addr, *dest_addr;
- u32 offset = batch_start_offset / sizeof(*dest_addr);
- u32 end = batch_start_offset + batch_len;
+ void *src_base, *src;
+ void *dst = NULL;
+ int ret;
- if (end > dest_obj->base.size || end > src_obj->base.size)
+ if (batch_len > dest_obj->base.size ||
+ batch_len + batch_start_offset > src_obj->base.size)
return ERR_PTR(-E2BIG);
ret = i915_gem_obj_prepare_shmem_read(src_obj, &needs_clflush);
if (ret) {
- DRM_DEBUG_DRIVER("CMD: failed to prep read\n");
+ DRM_DEBUG_DRIVER("CMD: failed to prepare shadow batch\n");
return ERR_PTR(ret);
}
- src_base = vmap_batch(src_obj);
+ src_base = vmap_batch(src_obj, batch_start_offset, batch_len);
if (!src_base) {
DRM_DEBUG_DRIVER("CMD: Failed to vmap batch\n");
ret = -ENOMEM;
goto unpin_src;
}
- src_addr = src_base + offset;
-
- if (needs_clflush)
- drm_clflush_virt_range((char *)src_addr, batch_len);
+ ret = i915_gem_object_get_pages(dest_obj);
+ if (ret) {
+ DRM_DEBUG_DRIVER("CMD: Failed to get pages for shadow batch\n");
+ goto unmap_src;
+ }
+ i915_gem_object_pin_pages(dest_obj);
ret = i915_gem_object_set_to_cpu_domain(dest_obj, true);
if (ret) {
- DRM_DEBUG_DRIVER("CMD: Failed to set batch CPU domain\n");
+ DRM_DEBUG_DRIVER("CMD: Failed to set shadow batch to CPU\n");
goto unmap_src;
}
- dest_base = vmap_batch(dest_obj);
- if (!dest_base) {
+ dst = vmap_batch(dest_obj, 0, batch_len);
+ if (!dst) {
DRM_DEBUG_DRIVER("CMD: Failed to vmap shadow batch\n");
+ i915_gem_object_unpin_pages(dest_obj);
ret = -ENOMEM;
goto unmap_src;
}
- dest_addr = dest_base + offset;
+ src = src_base + offset_in_page(batch_start_offset);
+ if (needs_clflush)
+ drm_clflush_virt_range(src, batch_len);
- if (batch_start_offset != 0)
- memset((u8 *)dest_base, 0, batch_start_offset);
-
- memcpy(dest_addr, src_addr, batch_len);
- memset((u8 *)dest_addr + batch_len, 0, dest_obj->base.size - end);
+ memcpy(dst, src, batch_len);
unmap_src:
vunmap(src_base);
unpin_src:
i915_gem_object_unpin_pages(src_obj);
- return ret ? ERR_PTR(ret) : dest_base;
+ return ret ? ERR_PTR(ret) : dst;
}
/**
@@ -1046,34 +1051,26 @@
u32 batch_len,
bool is_master)
{
- int ret = 0;
u32 *cmd, *batch_base, *batch_end;
struct drm_i915_cmd_descriptor default_desc = { 0 };
bool oacontrol_set = false; /* OACONTROL tracking. See check_cmd() */
-
- ret = i915_gem_obj_ggtt_pin(shadow_batch_obj, 4096, 0);
- if (ret) {
- DRM_DEBUG_DRIVER("CMD: Failed to pin shadow batch\n");
- return -1;
- }
+ int ret = 0;
batch_base = copy_batch(shadow_batch_obj, batch_obj,
batch_start_offset, batch_len);
if (IS_ERR(batch_base)) {
DRM_DEBUG_DRIVER("CMD: Failed to copy batch\n");
- i915_gem_object_ggtt_unpin(shadow_batch_obj);
return PTR_ERR(batch_base);
}
- cmd = batch_base + (batch_start_offset / sizeof(*cmd));
-
/*
* We use the batch length as size because the shadow object is as
* large or larger and copy_batch() will write MI_NOPs to the extra
* space. Parsing should be faster in some cases this way.
*/
- batch_end = cmd + (batch_len / sizeof(*batch_end));
+ batch_end = batch_base + (batch_len / sizeof(*batch_end));
+ cmd = batch_base;
while (cmd < batch_end) {
const struct drm_i915_cmd_descriptor *desc;
u32 length;
@@ -1132,7 +1129,7 @@
}
vunmap(batch_base);
- i915_gem_object_ggtt_unpin(shadow_batch_obj);
+ i915_gem_object_unpin_pages(shadow_batch_obj);
return ret;
}
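
For reference, vmap_batch() now maps only the pages that actually back the byte range [start, start + len) instead of the whole object. With the usual 4 KiB pages the page count works out as in this stand-alone sketch (constants redefined locally for illustration, not part of the patch):

	#define EXAMPLE_PAGE_SHIFT	12
	#define EXAMPLE_PAGE_SIZE	(1u << EXAMPLE_PAGE_SHIFT)	/* 4096 */

	static unsigned int pages_spanned(unsigned int start, unsigned int len)
	{
		unsigned int first_page = start >> EXAMPLE_PAGE_SHIFT;
		unsigned int last_page  = (start + len + EXAMPLE_PAGE_SIZE - 1)
						>> EXAMPLE_PAGE_SHIFT;

		/* e.g. start = 4000, len = 200 spans pages 0 and 1, so returns 2 */
		return last_page - first_page;
	}
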
diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index e8b18e5..007c7d7 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -139,10 +139,11 @@
obj->madv == I915_MADV_DONTNEED ? " purgeable" : "");
if (obj->base.name)
seq_printf(m, " (name: %d)", obj->base.name);
- list_for_each_entry(vma, &obj->vma_list, vma_link)
+ list_for_each_entry(vma, &obj->vma_list, vma_link) {
if (vma->pin_count > 0)
pin_count++;
- seq_printf(m, " (pinned x %d)", pin_count);
+ }
+ seq_printf(m, " (pinned x %d)", pin_count);
if (obj->pin_display)
seq_printf(m, " (display)");
if (obj->fence_reg != I915_FENCE_REG_NONE)
@@ -580,7 +581,7 @@
seq_printf(m, "Flip queued on frame %d, (was ready on frame %d), now %d\n",
work->flip_queued_vblank,
work->flip_ready_vblank,
- drm_vblank_count(dev, crtc->pipe));
+ drm_crtc_vblank_count(&crtc->base));
if (work->enable_stall_check)
seq_puts(m, "Stall check enabled, ");
else
@@ -1089,7 +1090,7 @@
seq_printf(m, "Current P-state: %d\n",
(rgvstat & MEMSTAT_PSTATE_MASK) >> MEMSTAT_PSTATE_SHIFT);
} else if (IS_GEN6(dev) || (IS_GEN7(dev) && !IS_VALLEYVIEW(dev)) ||
- IS_BROADWELL(dev)) {
+ IS_BROADWELL(dev) || IS_GEN9(dev)) {
u32 gt_perf_status = I915_READ(GEN6_GT_PERF_STATUS);
u32 rp_state_limits = I915_READ(GEN6_RP_STATE_LIMITS);
u32 rp_state_cap = I915_READ(GEN6_RP_STATE_CAP);
@@ -1108,11 +1109,15 @@
intel_uncore_forcewake_get(dev_priv, FORCEWAKE_ALL);
reqf = I915_READ(GEN6_RPNSWREQ);
- reqf &= ~GEN6_TURBO_DISABLE;
- if (IS_HASWELL(dev) || IS_BROADWELL(dev))
- reqf >>= 24;
- else
- reqf >>= 25;
+ if (IS_GEN9(dev))
+ reqf >>= 23;
+ else {
+ reqf &= ~GEN6_TURBO_DISABLE;
+ if (IS_HASWELL(dev) || IS_BROADWELL(dev))
+ reqf >>= 24;
+ else
+ reqf >>= 25;
+ }
reqf = intel_gpu_freq(dev_priv, reqf);
rpmodectl = I915_READ(GEN6_RP_CONTROL);
@@ -1126,7 +1131,9 @@
rpdownei = I915_READ(GEN6_RP_CUR_DOWN_EI);
rpcurdown = I915_READ(GEN6_RP_CUR_DOWN);
rpprevdown = I915_READ(GEN6_RP_PREV_DOWN);
- if (IS_HASWELL(dev) || IS_BROADWELL(dev))
+ if (IS_GEN9(dev))
+ cagf = (rpstat & GEN9_CAGF_MASK) >> GEN9_CAGF_SHIFT;
+ else if (IS_HASWELL(dev) || IS_BROADWELL(dev))
cagf = (rpstat & HSW_CAGF_MASK) >> HSW_CAGF_SHIFT;
else
cagf = (rpstat & GEN6_CAGF_MASK) >> GEN6_CAGF_SHIFT;
@@ -1152,7 +1159,7 @@
pm_ier, pm_imr, pm_isr, pm_iir, pm_mask);
seq_printf(m, "GT_PERF_STATUS: 0x%08x\n", gt_perf_status);
seq_printf(m, "Render p-state ratio: %d\n",
- (gt_perf_status & 0xff00) >> 8);
+ (gt_perf_status & (IS_GEN9(dev) ? 0x1ff00 : 0xff00)) >> 8);
seq_printf(m, "Render p-state VID: %d\n",
gt_perf_status & 0xff);
seq_printf(m, "Render p-state limit: %d\n",
@@ -1177,19 +1184,25 @@
GEN6_CURBSYTAVG_MASK);
max_freq = (rp_state_cap & 0xff0000) >> 16;
+ max_freq *= (IS_SKYLAKE(dev) ? GEN9_FREQ_SCALER : 1);
seq_printf(m, "Lowest (RPN) frequency: %dMHz\n",
intel_gpu_freq(dev_priv, max_freq));
max_freq = (rp_state_cap & 0xff00) >> 8;
+ max_freq *= (IS_SKYLAKE(dev) ? GEN9_FREQ_SCALER : 1);
seq_printf(m, "Nominal (RP1) frequency: %dMHz\n",
intel_gpu_freq(dev_priv, max_freq));
max_freq = rp_state_cap & 0xff;
+ max_freq *= (IS_SKYLAKE(dev) ? GEN9_FREQ_SCALER : 1);
seq_printf(m, "Max non-overclocked (RP0) frequency: %dMHz\n",
intel_gpu_freq(dev_priv, max_freq));
seq_printf(m, "Max overclocked frequency: %dMHz\n",
intel_gpu_freq(dev_priv, dev_priv->rps.max_freq));
+
+ seq_printf(m, "Idle freq: %d MHz\n",
+ intel_gpu_freq(dev_priv, dev_priv->rps.idle_freq));
} else if (IS_VALLEYVIEW(dev)) {
u32 freq_sts;
@@ -1204,6 +1217,9 @@
seq_printf(m, "min GPU freq: %d MHz\n",
intel_gpu_freq(dev_priv, dev_priv->rps.min_freq));
+ seq_printf(m, "idle GPU freq: %d MHz\n",
+ intel_gpu_freq(dev_priv, dev_priv->rps.idle_freq));
+
seq_printf(m,
"efficient (RPe) frequency: %d MHz\n",
intel_gpu_freq(dev_priv, dev_priv->rps.efficient_freq));
@@ -1778,11 +1794,12 @@
ifbdev = dev_priv->fbdev;
fb = to_intel_framebuffer(ifbdev->helper.fb);
- seq_printf(m, "fbcon size: %d x %d, depth %d, %d bpp, refcount %d, obj ",
+ seq_printf(m, "fbcon size: %d x %d, depth %d, %d bpp, modifier 0x%llx, refcount %d, obj ",
fb->base.width,
fb->base.height,
fb->base.depth,
fb->base.bits_per_pixel,
+ fb->base.modifier[0],
atomic_read(&fb->base.refcount.refcount));
describe_obj(m, fb->obj);
seq_putc(m, '\n');
@@ -1793,11 +1810,12 @@
if (ifbdev && &fb->base == ifbdev->helper.fb)
continue;
- seq_printf(m, "user size: %d x %d, depth %d, %d bpp, refcount %d, obj ",
+ seq_printf(m, "user size: %d x %d, depth %d, %d bpp, modifier 0x%llx, refcount %d, obj ",
fb->base.width,
fb->base.height,
fb->base.depth,
fb->base.bits_per_pixel,
+ fb->base.modifier[0],
atomic_read(&fb->base.refcount.refcount));
describe_obj(m, fb->obj);
seq_putc(m, '\n');
@@ -1828,18 +1846,6 @@
if (ret)
return ret;
- if (dev_priv->ips.pwrctx) {
- seq_puts(m, "power context ");
- describe_obj(m, dev_priv->ips.pwrctx);
- seq_putc(m, '\n');
- }
-
- if (dev_priv->ips.renderctx) {
- seq_puts(m, "render context ");
- describe_obj(m, dev_priv->ips.renderctx);
- seq_putc(m, '\n');
- }
-
list_for_each_entry(ctx, &dev_priv->context_list, link) {
if (!i915.enable_execlists &&
ctx->legacy_hw_ctx.rcs_state == NULL)
@@ -2183,7 +2189,7 @@
struct i915_hw_ppgtt *ppgtt = dev_priv->mm.aliasing_ppgtt;
seq_puts(m, "aliasing PPGTT:\n");
- seq_printf(m, "pd gtt offset: 0x%08x\n", ppgtt->pd_offset);
+ seq_printf(m, "pd gtt offset: 0x%08x\n", ppgtt->pd.pd_offset);
ppgtt->debug_dump(ppgtt, m);
}
@@ -2243,6 +2249,11 @@
enum pipe pipe;
bool enabled = false;
+ if (!HAS_PSR(dev)) {
+ seq_puts(m, "PSR not supported\n");
+ return 0;
+ }
+
intel_runtime_pm_get(dev_priv);
mutex_lock(&dev_priv->psr.lock);
@@ -2255,17 +2266,15 @@
seq_printf(m, "Re-enable work scheduled: %s\n",
yesno(work_busy(&dev_priv->psr.work.work)));
- if (HAS_PSR(dev)) {
- if (HAS_DDI(dev))
- enabled = I915_READ(EDP_PSR_CTL(dev)) & EDP_PSR_ENABLE;
- else {
- for_each_pipe(dev_priv, pipe) {
- stat[pipe] = I915_READ(VLV_PSRSTAT(pipe)) &
- VLV_EDP_PSR_CURR_STATE_MASK;
- if ((stat[pipe] == VLV_EDP_PSR_ACTIVE_NORFB_UP) ||
- (stat[pipe] == VLV_EDP_PSR_ACTIVE_SF_UPDATE))
- enabled = true;
- }
+ if (HAS_DDI(dev))
+ enabled = I915_READ(EDP_PSR_CTL(dev)) & EDP_PSR_ENABLE;
+ else {
+ for_each_pipe(dev_priv, pipe) {
+ stat[pipe] = I915_READ(VLV_PSRSTAT(pipe)) &
+ VLV_EDP_PSR_CURR_STATE_MASK;
+ if ((stat[pipe] == VLV_EDP_PSR_ACTIVE_NORFB_UP) ||
+ (stat[pipe] == VLV_EDP_PSR_ACTIVE_SF_UPDATE))
+ enabled = true;
}
}
seq_printf(m, "HW Enabled & Active bit: %s", yesno(enabled));
@@ -2282,7 +2291,7 @@
yesno((bool)dev_priv->psr.link_standby));
/* CHV PSR has no kind of performance counter */
- if (HAS_PSR(dev) && HAS_DDI(dev)) {
+ if (HAS_DDI(dev)) {
psrperf = I915_READ(EDP_PSR_PERF_CNT(dev)) &
EDP_PSR_PERF_CNT_MASK;
@@ -2305,8 +2314,7 @@
u8 crc[6];
drm_modeset_lock_all(dev);
- list_for_each_entry(connector, &dev->mode_config.connector_list,
- base.head) {
+ for_each_intel_connector(dev, connector) {
if (connector->base.dpms != DRM_MODE_DPMS_ON)
continue;
@@ -2674,7 +2682,8 @@
active = cursor_position(dev, crtc->pipe, &x, &y);
seq_printf(m, "\tcursor visible? %s, position (%d, %d), size %dx%d, addr 0x%08x, active? %s\n",
yesno(crtc->cursor_base),
- x, y, crtc->cursor_width, crtc->cursor_height,
+ x, y, crtc->base.cursor->state->crtc_w,
+ crtc->base.cursor->state->crtc_h,
crtc->cursor_addr, yesno(active));
}
@@ -2850,7 +2859,7 @@
for_each_pipe(dev_priv, pipe) {
seq_printf(m, "Pipe %c\n", pipe_name(pipe));
- for_each_plane(pipe, plane) {
+ for_each_plane(dev_priv, pipe, plane) {
entry = &ddb->plane[pipe][plane];
seq_printf(m, " Plane%-8d%8u%8u%8u\n", plane + 1,
entry->start, entry->end,
@@ -2867,6 +2876,115 @@
return 0;
}
+static void drrs_status_per_crtc(struct seq_file *m,
+ struct drm_device *dev, struct intel_crtc *intel_crtc)
+{
+ struct intel_encoder *intel_encoder;
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ struct i915_drrs *drrs = &dev_priv->drrs;
+ int vrefresh = 0;
+
+ for_each_encoder_on_crtc(dev, &intel_crtc->base, intel_encoder) {
+ /* Encoder connected on this CRTC */
+ switch (intel_encoder->type) {
+ case INTEL_OUTPUT_EDP:
+ seq_puts(m, "eDP:\n");
+ break;
+ case INTEL_OUTPUT_DSI:
+ seq_puts(m, "DSI:\n");
+ break;
+ case INTEL_OUTPUT_HDMI:
+ seq_puts(m, "HDMI:\n");
+ break;
+ case INTEL_OUTPUT_DISPLAYPORT:
+ seq_puts(m, "DP:\n");
+ break;
+ default:
+ seq_printf(m, "Other encoder (id=%d).\n",
+ intel_encoder->type);
+ return;
+ }
+ }
+
+ if (dev_priv->vbt.drrs_type == STATIC_DRRS_SUPPORT)
+ seq_puts(m, "\tVBT: DRRS_type: Static");
+ else if (dev_priv->vbt.drrs_type == SEAMLESS_DRRS_SUPPORT)
+ seq_puts(m, "\tVBT: DRRS_type: Seamless");
+ else if (dev_priv->vbt.drrs_type == DRRS_NOT_SUPPORTED)
+ seq_puts(m, "\tVBT: DRRS_type: None");
+ else
+ seq_puts(m, "\tVBT: DRRS_type: FIXME: Unrecognized Value");
+
+ seq_puts(m, "\n\n");
+
+ if (intel_crtc->config->has_drrs) {
+ struct intel_panel *panel;
+
+ mutex_lock(&drrs->mutex);
+ /* DRRS Supported */
+ seq_puts(m, "\tDRRS Supported: Yes\n");
+
+ /* disable_drrs() will make drrs->dp NULL */
+ if (!drrs->dp) {
+ seq_puts(m, "Idleness DRRS: Disabled");
+ mutex_unlock(&drrs->mutex);
+ return;
+ }
+
+ panel = &drrs->dp->attached_connector->panel;
+ seq_printf(m, "\t\tBusy_frontbuffer_bits: 0x%X",
+ drrs->busy_frontbuffer_bits);
+
+ seq_puts(m, "\n\t\t");
+ if (drrs->refresh_rate_type == DRRS_HIGH_RR) {
+ seq_puts(m, "DRRS_State: DRRS_HIGH_RR\n");
+ vrefresh = panel->fixed_mode->vrefresh;
+ } else if (drrs->refresh_rate_type == DRRS_LOW_RR) {
+ seq_puts(m, "DRRS_State: DRRS_LOW_RR\n");
+ vrefresh = panel->downclock_mode->vrefresh;
+ } else {
+ seq_printf(m, "DRRS_State: Unknown(%d)\n",
+ drrs->refresh_rate_type);
+ mutex_unlock(&drrs->mutex);
+ return;
+ }
+ seq_printf(m, "\t\tVrefresh: %d", vrefresh);
+
+ seq_puts(m, "\n\t\t");
+ mutex_unlock(&drrs->mutex);
+ } else {
+ /* DRRS not supported. Print the VBT parameter */
+ seq_puts(m, "\tDRRS Supported : No");
+ }
+ seq_puts(m, "\n");
+}
+
+static int i915_drrs_status(struct seq_file *m, void *unused)
+{
+ struct drm_info_node *node = m->private;
+ struct drm_device *dev = node->minor->dev;
+ struct intel_crtc *intel_crtc;
+ int active_crtc_cnt = 0;
+
+ for_each_intel_crtc(dev, intel_crtc) {
+ drm_modeset_lock(&intel_crtc->base.mutex, NULL);
+
+ if (intel_crtc->active) {
+ active_crtc_cnt++;
+ seq_printf(m, "\nCRTC %d: ", active_crtc_cnt);
+
+ drrs_status_per_crtc(m, dev, intel_crtc);
+ }
+
+ drm_modeset_unlock(&intel_crtc->base.mutex);
+ }
+
+ if (!active_crtc_cnt)
+ seq_puts(m, "No active crtc found\n");
+
+ return 0;
+}
+
struct pipe_crc_info {
const char *name;
struct drm_device *dev;
@@ -4189,7 +4307,7 @@
{
struct drm_device *dev = data;
struct drm_i915_private *dev_priv = dev->dev_private;
- u32 rp_state_cap, hw_max, hw_min;
+ u32 hw_max, hw_min;
int ret;
if (INTEL_INFO(dev)->gen < 6)
@@ -4206,18 +4324,10 @@
/*
* Turbo will still be enabled, but won't go above the set value.
*/
- if (IS_VALLEYVIEW(dev)) {
- val = intel_freq_opcode(dev_priv, val);
+ val = intel_freq_opcode(dev_priv, val);
- hw_max = dev_priv->rps.max_freq;
- hw_min = dev_priv->rps.min_freq;
- } else {
- val = intel_freq_opcode(dev_priv, val);
-
- rp_state_cap = I915_READ(GEN6_RP_STATE_CAP);
- hw_max = dev_priv->rps.max_freq;
- hw_min = (rp_state_cap >> 16) & 0xff;
- }
+ hw_max = dev_priv->rps.max_freq;
+ hw_min = dev_priv->rps.min_freq;
if (val < hw_min || val > hw_max || val < dev_priv->rps.min_freq_softlimit) {
mutex_unlock(&dev_priv->rps.hw_lock);
@@ -4226,10 +4336,7 @@
dev_priv->rps.max_freq_softlimit = val;
- if (IS_VALLEYVIEW(dev))
- valleyview_set_rps(dev, val);
- else
- gen6_set_rps(dev, val);
+ intel_set_rps(dev, val);
mutex_unlock(&dev_priv->rps.hw_lock);
@@ -4267,7 +4374,7 @@
{
struct drm_device *dev = data;
struct drm_i915_private *dev_priv = dev->dev_private;
- u32 rp_state_cap, hw_max, hw_min;
+ u32 hw_max, hw_min;
int ret;
if (INTEL_INFO(dev)->gen < 6)
@@ -4284,18 +4391,10 @@
/*
* Turbo will still be enabled, but won't go below the set value.
*/
- if (IS_VALLEYVIEW(dev)) {
- val = intel_freq_opcode(dev_priv, val);
+ val = intel_freq_opcode(dev_priv, val);
- hw_max = dev_priv->rps.max_freq;
- hw_min = dev_priv->rps.min_freq;
- } else {
- val = intel_freq_opcode(dev_priv, val);
-
- rp_state_cap = I915_READ(GEN6_RP_STATE_CAP);
- hw_max = dev_priv->rps.max_freq;
- hw_min = (rp_state_cap >> 16) & 0xff;
- }
+ hw_max = dev_priv->rps.max_freq;
+ hw_min = dev_priv->rps.min_freq;
if (val < hw_min || val > hw_max || val > dev_priv->rps.max_freq_softlimit) {
mutex_unlock(&dev_priv->rps.hw_lock);
@@ -4304,10 +4403,7 @@
dev_priv->rps.min_freq_softlimit = val;
- if (IS_VALLEYVIEW(dev))
- valleyview_set_rps(dev, val);
- else
- gen6_set_rps(dev, val);
+ intel_set_rps(dev, val);
mutex_unlock(&dev_priv->rps.hw_lock);
@@ -4374,6 +4470,112 @@
i915_cache_sharing_get, i915_cache_sharing_set,
"%llu\n");
+static int i915_sseu_status(struct seq_file *m, void *unused)
+{
+ struct drm_info_node *node = (struct drm_info_node *) m->private;
+ struct drm_device *dev = node->minor->dev;
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ unsigned int s_tot = 0, ss_tot = 0, ss_per = 0, eu_tot = 0, eu_per = 0;
+
+ if ((INTEL_INFO(dev)->gen < 8) || IS_BROADWELL(dev))
+ return -ENODEV;
+
+ seq_puts(m, "SSEU Device Info\n");
+ seq_printf(m, " Available Slice Total: %u\n",
+ INTEL_INFO(dev)->slice_total);
+ seq_printf(m, " Available Subslice Total: %u\n",
+ INTEL_INFO(dev)->subslice_total);
+ seq_printf(m, " Available Subslice Per Slice: %u\n",
+ INTEL_INFO(dev)->subslice_per_slice);
+ seq_printf(m, " Available EU Total: %u\n",
+ INTEL_INFO(dev)->eu_total);
+ seq_printf(m, " Available EU Per Subslice: %u\n",
+ INTEL_INFO(dev)->eu_per_subslice);
+ seq_printf(m, " Has Slice Power Gating: %s\n",
+ yesno(INTEL_INFO(dev)->has_slice_pg));
+ seq_printf(m, " Has Subslice Power Gating: %s\n",
+ yesno(INTEL_INFO(dev)->has_subslice_pg));
+ seq_printf(m, " Has EU Power Gating: %s\n",
+ yesno(INTEL_INFO(dev)->has_eu_pg));
+
+ seq_puts(m, "SSEU Device Status\n");
+ if (IS_CHERRYVIEW(dev)) {
+ const int ss_max = 2;
+ int ss;
+ u32 sig1[ss_max], sig2[ss_max];
+
+ sig1[0] = I915_READ(CHV_POWER_SS0_SIG1);
+ sig1[1] = I915_READ(CHV_POWER_SS1_SIG1);
+ sig2[0] = I915_READ(CHV_POWER_SS0_SIG2);
+ sig2[1] = I915_READ(CHV_POWER_SS1_SIG2);
+
+ for (ss = 0; ss < ss_max; ss++) {
+ unsigned int eu_cnt;
+
+ if (sig1[ss] & CHV_SS_PG_ENABLE)
+ /* skip disabled subslice */
+ continue;
+
+ s_tot = 1;
+ ss_per++;
+ eu_cnt = ((sig1[ss] & CHV_EU08_PG_ENABLE) ? 0 : 2) +
+ ((sig1[ss] & CHV_EU19_PG_ENABLE) ? 0 : 2) +
+ ((sig1[ss] & CHV_EU210_PG_ENABLE) ? 0 : 2) +
+ ((sig2[ss] & CHV_EU311_PG_ENABLE) ? 0 : 2);
+ eu_tot += eu_cnt;
+ eu_per = max(eu_per, eu_cnt);
+ }
+ ss_tot = ss_per;
+ } else if (IS_SKYLAKE(dev)) {
+ const int s_max = 3, ss_max = 4;
+ int s, ss;
+ u32 s_reg[s_max], eu_reg[2*s_max], eu_mask[2];
+
+ s_reg[0] = I915_READ(GEN9_SLICE0_PGCTL_ACK);
+ s_reg[1] = I915_READ(GEN9_SLICE1_PGCTL_ACK);
+ s_reg[2] = I915_READ(GEN9_SLICE2_PGCTL_ACK);
+ eu_reg[0] = I915_READ(GEN9_SLICE0_SS01_EU_PGCTL_ACK);
+ eu_reg[1] = I915_READ(GEN9_SLICE0_SS23_EU_PGCTL_ACK);
+ eu_reg[2] = I915_READ(GEN9_SLICE1_SS01_EU_PGCTL_ACK);
+ eu_reg[3] = I915_READ(GEN9_SLICE1_SS23_EU_PGCTL_ACK);
+ eu_reg[4] = I915_READ(GEN9_SLICE2_SS01_EU_PGCTL_ACK);
+ eu_reg[5] = I915_READ(GEN9_SLICE2_SS23_EU_PGCTL_ACK);
+ eu_mask[0] = GEN9_PGCTL_SSA_EU08_ACK |
+ GEN9_PGCTL_SSA_EU19_ACK |
+ GEN9_PGCTL_SSA_EU210_ACK |
+ GEN9_PGCTL_SSA_EU311_ACK;
+ eu_mask[1] = GEN9_PGCTL_SSB_EU08_ACK |
+ GEN9_PGCTL_SSB_EU19_ACK |
+ GEN9_PGCTL_SSB_EU210_ACK |
+ GEN9_PGCTL_SSB_EU311_ACK;
+
+ for (s = 0; s < s_max; s++) {
+ if ((s_reg[s] & GEN9_PGCTL_SLICE_ACK) == 0)
+ /* skip disabled slice */
+ continue;
+
+ s_tot++;
+ ss_per = INTEL_INFO(dev)->subslice_per_slice;
+ ss_tot += ss_per;
+ for (ss = 0; ss < ss_max; ss++) {
+ unsigned int eu_cnt;
+
+ eu_cnt = 2 * hweight32(eu_reg[2*s + ss/2] &
+ eu_mask[ss%2]);
+ eu_tot += eu_cnt;
+ eu_per = max(eu_per, eu_cnt);
+ }
+ }
+ }
+ seq_printf(m, " Enabled Slice Total: %u\n", s_tot);
+ seq_printf(m, " Enabled Subslice Total: %u\n", ss_tot);
+ seq_printf(m, " Enabled Subslice Per Slice: %u\n", ss_per);
+ seq_printf(m, " Enabled EU Total: %u\n", eu_tot);
+ seq_printf(m, " Enabled EU Per Subslice: %u\n", eu_per);
+
+ return 0;
+}
+
static int i915_forcewake_open(struct inode *inode, struct file *file)
{
struct drm_device *dev = inode->i_private;
@@ -4487,6 +4689,8 @@
{"i915_dp_mst_info", i915_dp_mst_info, 0},
{"i915_wa_registers", i915_wa_registers, 0},
{"i915_ddb_info", i915_ddb_info, 0},
+ {"i915_sseu_status", i915_sseu_status, 0},
+ {"i915_drrs_status", i915_drrs_status, 0},
};
#define I915_DEBUGFS_ENTRIES ARRAY_SIZE(i915_debugfs_list)
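
For reference, the new i915_sseu_status debugfs entry derives EU counts from power-gating acknowledge masks: on SKL each acknowledged bit stands for an enabled pair of EUs, so the per-subslice count is twice the population count of the acked bits (hweight32() in the kernel). A stand-alone sketch (not part of the patch):

	#include <linux/bitops.h>
	#include <linux/types.h>

	static unsigned int eu_pairs_enabled(u32 acked_eu_bits)
	{
		/* each set bit == one enabled EU pair, e.g. 0xf -> 8 EUs */
		return 2 * hweight32(acked_eu_bits);
	}
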
diff --git a/drivers/gpu/drm/i915/i915_dma.c b/drivers/gpu/drm/i915/i915_dma.c
index 1a46787..68e0c85 100644
--- a/drivers/gpu/drm/i915/i915_dma.c
+++ b/drivers/gpu/drm/i915/i915_dma.c
@@ -36,6 +36,7 @@
#include "intel_drv.h"
#include <drm/i915_drm.h>
#include "i915_drv.h"
+#include "i915_vgpu.h"
#include "i915_trace.h"
#include <linux/pci.h>
#include <linux/console.h>
@@ -67,6 +68,9 @@
case I915_PARAM_CHIPSET_ID:
value = dev->pdev->device;
break;
+ case I915_PARAM_REVISION:
+ value = dev->pdev->revision;
+ break;
case I915_PARAM_HAS_GEM:
value = 1;
break;
@@ -149,6 +153,16 @@
case I915_PARAM_MMAP_VERSION:
value = 1;
break;
+ case I915_PARAM_SUBSLICE_TOTAL:
+ value = INTEL_INFO(dev)->subslice_total;
+ if (!value)
+ return -ENODEV;
+ break;
+ case I915_PARAM_EU_TOTAL:
+ value = INTEL_INFO(dev)->eu_total;
+ if (!value)
+ return -ENODEV;
+ break;
default:
DRM_DEBUG("Unknown parameter %d\n", param->param);
return -EINVAL;
@@ -605,16 +619,128 @@
}
}
+ /* Initialize slice/subslice/EU info */
if (IS_CHERRYVIEW(dev)) {
- u32 fuse, mask_eu;
+ u32 fuse, eu_dis;
fuse = I915_READ(CHV_FUSE_GT);
- mask_eu = fuse & (CHV_FGT_EU_DIS_SS0_R0_MASK |
- CHV_FGT_EU_DIS_SS0_R1_MASK |
- CHV_FGT_EU_DIS_SS1_R0_MASK |
- CHV_FGT_EU_DIS_SS1_R1_MASK);
- info->eu_total = 16 - hweight32(mask_eu);
+
+ info->slice_total = 1;
+
+ if (!(fuse & CHV_FGT_DISABLE_SS0)) {
+ info->subslice_per_slice++;
+ eu_dis = fuse & (CHV_FGT_EU_DIS_SS0_R0_MASK |
+ CHV_FGT_EU_DIS_SS0_R1_MASK);
+ info->eu_total += 8 - hweight32(eu_dis);
+ }
+
+ if (!(fuse & CHV_FGT_DISABLE_SS1)) {
+ info->subslice_per_slice++;
+ eu_dis = fuse & (CHV_FGT_EU_DIS_SS1_R0_MASK |
+ CHV_FGT_EU_DIS_SS1_R1_MASK);
+ info->eu_total += 8 - hweight32(eu_dis);
+ }
+
+ info->subslice_total = info->subslice_per_slice;
+ /*
+ * CHV expected to always have a uniform distribution of EU
+ * across subslices.
+ */
+ info->eu_per_subslice = info->subslice_total ?
+ info->eu_total / info->subslice_total :
+ 0;
+ /*
+ * CHV supports subslice power gating on devices with more than
+ * one subslice, and supports EU power gating on devices with
+ * more than one EU pair per subslice.
+ */
+ info->has_slice_pg = 0;
+ info->has_subslice_pg = (info->subslice_total > 1);
+ info->has_eu_pg = (info->eu_per_subslice > 2);
+ } else if (IS_SKYLAKE(dev)) {
+ const int s_max = 3, ss_max = 4, eu_max = 8;
+ int s, ss;
+ u32 fuse2, eu_disable[s_max], s_enable, ss_disable;
+
+ fuse2 = I915_READ(GEN8_FUSE2);
+ s_enable = (fuse2 & GEN8_F2_S_ENA_MASK) >>
+ GEN8_F2_S_ENA_SHIFT;
+ ss_disable = (fuse2 & GEN9_F2_SS_DIS_MASK) >>
+ GEN9_F2_SS_DIS_SHIFT;
+
+ eu_disable[0] = I915_READ(GEN8_EU_DISABLE0);
+ eu_disable[1] = I915_READ(GEN8_EU_DISABLE1);
+ eu_disable[2] = I915_READ(GEN8_EU_DISABLE2);
+
+ info->slice_total = hweight32(s_enable);
+ /*
+ * The subslice disable field is global, i.e. it applies
+ * to each of the enabled slices.
+ */
+ info->subslice_per_slice = ss_max - hweight32(ss_disable);
+ info->subslice_total = info->slice_total *
+ info->subslice_per_slice;
+
+ /*
+ * Iterate through enabled slices and subslices to
+ * count the total enabled EU.
+ */
+ for (s = 0; s < s_max; s++) {
+ if (!(s_enable & (0x1 << s)))
+ /* skip disabled slice */
+ continue;
+
+ for (ss = 0; ss < ss_max; ss++) {
+ u32 n_disabled;
+
+ if (ss_disable & (0x1 << ss))
+ /* skip disabled subslice */
+ continue;
+
+ n_disabled = hweight8(eu_disable[s] >>
+ (ss * eu_max));
+
+ /*
+ * Record which subslice(s) has(have) 7 EUs. We
+ * can tune the hash used to spread work among
+ * subslices if they are unbalanced.
+ */
+ if (eu_max - n_disabled == 7)
+ info->subslice_7eu[s] |= 1 << ss;
+
+ info->eu_total += eu_max - n_disabled;
+ }
+ }
+
+ /*
+ * SKL is expected to always have a uniform distribution
+ * of EU across subslices with the exception that any one
+ * EU in any one subslice may be fused off for die
+ * recovery.
+ */
+ info->eu_per_subslice = info->subslice_total ?
+ DIV_ROUND_UP(info->eu_total,
+ info->subslice_total) : 0;
+ /*
+ * SKL supports slice power gating on devices with more than
+ * one slice, and supports EU power gating on devices with
+ * more than one EU pair per subslice.
+ */
+ info->has_slice_pg = (info->slice_total > 1) ? 1 : 0;
+ info->has_subslice_pg = 0;
+ info->has_eu_pg = (info->eu_per_subslice > 2) ? 1 : 0;
}
+ DRM_DEBUG_DRIVER("slice total: %u\n", info->slice_total);
+ DRM_DEBUG_DRIVER("subslice total: %u\n", info->subslice_total);
+ DRM_DEBUG_DRIVER("subslice per slice: %u\n", info->subslice_per_slice);
+ DRM_DEBUG_DRIVER("EU total: %u\n", info->eu_total);
+ DRM_DEBUG_DRIVER("EU per subslice: %u\n", info->eu_per_subslice);
+ DRM_DEBUG_DRIVER("has slice power gating: %s\n",
+ info->has_slice_pg ? "y" : "n");
+ DRM_DEBUG_DRIVER("has subslice power gating: %s\n",
+ info->has_subslice_pg ? "y" : "n");
+ DRM_DEBUG_DRIVER("has EU power gating: %s\n",
+ info->has_eu_pg ? "y" : "n");
}
/**
@@ -637,17 +763,6 @@
info = (struct intel_device_info *) flags;
- /* Refuse to load on gen6+ without kms enabled. */
- if (info->gen >= 6 && !drm_core_check_feature(dev, DRIVER_MODESET)) {
- DRM_INFO("Your hardware requires kernel modesetting (KMS)\n");
- DRM_INFO("See CONFIG_DRM_I915_KMS, nomodeset, and i915.modeset parameters\n");
- return -ENODEV;
- }
-
- /* UMS needs agp support. */
- if (!drm_core_check_feature(dev, DRIVER_MODESET) && !dev->agp)
- return -EINVAL;
-
dev_priv = kzalloc(sizeof(*dev_priv), GFP_KERNEL);
if (dev_priv == NULL)
return -ENOMEM;
@@ -717,20 +832,18 @@
if (ret)
goto out_regs;
- if (drm_core_check_feature(dev, DRIVER_MODESET)) {
- /* WARNING: Apparently we must kick fbdev drivers before vgacon,
- * otherwise the vga fbdev driver falls over. */
- ret = i915_kick_out_firmware_fb(dev_priv);
- if (ret) {
- DRM_ERROR("failed to remove conflicting framebuffer drivers\n");
- goto out_gtt;
- }
+ /* WARNING: Apparently we must kick fbdev drivers before vgacon,
+ * otherwise the vga fbdev driver falls over. */
+ ret = i915_kick_out_firmware_fb(dev_priv);
+ if (ret) {
+ DRM_ERROR("failed to remove conflicting framebuffer drivers\n");
+ goto out_gtt;
+ }
- ret = i915_kick_out_vgacon(dev_priv);
- if (ret) {
- DRM_ERROR("failed to remove conflicting VGA console\n");
- goto out_gtt;
- }
+ ret = i915_kick_out_vgacon(dev_priv);
+ if (ret) {
+ DRM_ERROR("failed to remove conflicting VGA console\n");
+ goto out_gtt;
}
pci_set_master(dev->pdev);
@@ -834,14 +947,19 @@
intel_power_domains_init(dev_priv);
- if (drm_core_check_feature(dev, DRIVER_MODESET)) {
- ret = i915_load_modeset_init(dev);
- if (ret < 0) {
- DRM_ERROR("failed to init modeset\n");
- goto out_power_well;
- }
+ ret = i915_load_modeset_init(dev);
+ if (ret < 0) {
+ DRM_ERROR("failed to init modeset\n");
+ goto out_power_well;
}
+ /*
+ * Notify a valid surface after modesetting,
+ * when running inside a VM.
+ */
+ if (intel_vgpu_active(dev))
+ I915_WRITE(vgtif_reg(display_ready), VGT_DRV_DISPLAY_READY);
+
i915_setup_sysfs(dev);
if (INTEL_INFO(dev)->num_pipes) {
@@ -921,28 +1039,25 @@
acpi_video_unregister();
- if (drm_core_check_feature(dev, DRIVER_MODESET))
- intel_fbdev_fini(dev);
+ intel_fbdev_fini(dev);
drm_vblank_cleanup(dev);
- if (drm_core_check_feature(dev, DRIVER_MODESET)) {
- intel_modeset_cleanup(dev);
+ intel_modeset_cleanup(dev);
- /*
- * free the memory space allocated for the child device
- * config parsed from VBT
- */
- if (dev_priv->vbt.child_dev && dev_priv->vbt.child_dev_num) {
- kfree(dev_priv->vbt.child_dev);
- dev_priv->vbt.child_dev = NULL;
- dev_priv->vbt.child_dev_num = 0;
- }
-
- vga_switcheroo_unregister_client(dev->pdev);
- vga_client_register(dev->pdev, NULL, NULL, NULL);
+ /*
+ * free the memory space allocated for the child device
+ * config parsed from VBT
+ */
+ if (dev_priv->vbt.child_dev && dev_priv->vbt.child_dev_num) {
+ kfree(dev_priv->vbt.child_dev);
+ dev_priv->vbt.child_dev = NULL;
+ dev_priv->vbt.child_dev_num = 0;
}
+ vga_switcheroo_unregister_client(dev->pdev);
+ vga_client_register(dev->pdev, NULL, NULL, NULL);
+
/* Free error state after interrupts are fully disabled. */
cancel_delayed_work_sync(&dev_priv->gpu_error.hangcheck_work);
i915_destroy_error_state(dev);
@@ -952,17 +1067,15 @@
intel_opregion_fini(dev);
- if (drm_core_check_feature(dev, DRIVER_MODESET)) {
- /* Flush any outstanding unpin_work. */
- flush_workqueue(dev_priv->wq);
+ /* Flush any outstanding unpin_work. */
+ flush_workqueue(dev_priv->wq);
- mutex_lock(&dev->struct_mutex);
- i915_gem_cleanup_ringbuffer(dev);
- i915_gem_batch_pool_fini(&dev_priv->mm.batch_pool);
- i915_gem_context_fini(dev);
- mutex_unlock(&dev->struct_mutex);
- i915_gem_cleanup_stolen(dev);
- }
+ mutex_lock(&dev->struct_mutex);
+ i915_gem_cleanup_ringbuffer(dev);
+ i915_gem_batch_pool_fini(&dev_priv->mm.batch_pool);
+ i915_gem_context_fini(dev);
+ mutex_unlock(&dev->struct_mutex);
+ i915_gem_cleanup_stolen(dev);
intel_teardown_gmbus(dev);
intel_teardown_mchbar(dev);
@@ -1023,8 +1136,7 @@
i915_gem_release(dev, file);
mutex_unlock(&dev->struct_mutex);
- if (drm_core_check_feature(dev, DRIVER_MODESET))
- intel_modeset_preclose(dev, file);
+ intel_modeset_preclose(dev, file);
}
void i915_driver_postclose(struct drm_device *dev, struct drm_file *file)
@@ -1087,7 +1199,7 @@
DRM_IOCTL_DEF_DRV(I915_OVERLAY_PUT_IMAGE, intel_overlay_put_image, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
DRM_IOCTL_DEF_DRV(I915_OVERLAY_ATTRS, intel_overlay_attrs, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
DRM_IOCTL_DEF_DRV(I915_SET_SPRITE_COLORKEY, intel_sprite_set_colorkey, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
- DRM_IOCTL_DEF_DRV(I915_GET_SPRITE_COLORKEY, intel_sprite_get_colorkey, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
+ DRM_IOCTL_DEF_DRV(I915_GET_SPRITE_COLORKEY, drm_noop, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
DRM_IOCTL_DEF_DRV(I915_GEM_WAIT, i915_gem_wait_ioctl, DRM_AUTH|DRM_UNLOCKED|DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(I915_GEM_CONTEXT_CREATE, i915_gem_context_create_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(I915_GEM_CONTEXT_DESTROY, i915_gem_context_destroy_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW),
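
For reference, the slice/subslice/EU bookkeeping added above follows simple fuse arithmetic: the subslice disable field is global, so every enabled slice contributes the same number of subslices, and the per-subslice EU figure is rounded up because a single EU per subslice may be fused off for die recovery. A sketch with made-up fuse values (not part of the patch):

	#include <linux/kernel.h>	/* DIV_ROUND_UP() */

	static void sseu_totals_example(void)
	{
		unsigned int slice_total        = 2;	/* hweight32(s_enable)            */
		unsigned int subslice_per_slice = 3;	/* ss_max - hweight32(ss_disable) */
		unsigned int eu_total           = 46;	/* summed from the EU disable map */

		unsigned int subslice_total = slice_total * subslice_per_slice;	/* 6 */
		unsigned int eu_per_subslice =
			DIV_ROUND_UP(eu_total, subslice_total);				/* 8 */

		(void)eu_per_subslice;
	}
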
diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
index 5c66b56..c24c3f1 100644
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -346,7 +346,6 @@
};
static const struct intel_device_info intel_cherryview_info = {
- .is_preliminary = 1,
.gen = 8, .num_pipes = 3,
.need_gfx_hws = 1, .has_hotplug = 1,
.ring_mask = RENDER_RING | BSD_RING | BLT_RING | VEBOX_RING,
@@ -369,6 +368,19 @@
IVB_CURSOR_OFFSETS,
};
+static const struct intel_device_info intel_skylake_gt3_info = {
+ .is_preliminary = 1,
+ .is_skylake = 1,
+ .gen = 9, .num_pipes = 3,
+ .need_gfx_hws = 1, .has_hotplug = 1,
+ .ring_mask = RENDER_RING | BSD_RING | BLT_RING | VEBOX_RING | BSD2_RING,
+ .has_llc = 1,
+ .has_ddi = 1,
+ .has_fbc = 1,
+ GEN_DEFAULT_PIPEOFFSETS,
+ IVB_CURSOR_OFFSETS,
+};
+
/*
* Make sure any device matches here are from most specific to most
* general. For example, since the Quanta match is based on the subsystem
@@ -406,7 +418,9 @@
INTEL_BDW_GT3M_IDS(&intel_broadwell_gt3m_info), \
INTEL_BDW_GT3D_IDS(&intel_broadwell_gt3d_info), \
INTEL_CHV_IDS(&intel_cherryview_info), \
- INTEL_SKL_IDS(&intel_skylake_info)
+ INTEL_SKL_GT1_IDS(&intel_skylake_info), \
+ INTEL_SKL_GT2_IDS(&intel_skylake_info), \
+ INTEL_SKL_GT3_IDS(&intel_skylake_gt3_info) \
static const struct pci_device_id pciidlist[] = { /* aka */
INTEL_PCI_IDS,
@@ -553,6 +567,7 @@
struct drm_i915_private *dev_priv = dev->dev_private;
struct drm_crtc *crtc;
pci_power_t opregion_target_state;
+ int error;
/* ignore lid events during suspend */
mutex_lock(&dev_priv->modeset_restore_lock);
@@ -567,38 +582,33 @@
pci_save_state(dev->pdev);
- /* If KMS is active, we do the leavevt stuff here */
- if (drm_core_check_feature(dev, DRIVER_MODESET)) {
- int error;
-
- error = i915_gem_suspend(dev);
- if (error) {
- dev_err(&dev->pdev->dev,
- "GEM idle failed, resume might fail\n");
- return error;
- }
-
- intel_suspend_gt_powersave(dev);
-
- /*
- * Disable CRTCs directly since we want to preserve sw state
- * for _thaw. Also, power gate the CRTC power wells.
- */
- drm_modeset_lock_all(dev);
- for_each_crtc(dev, crtc)
- intel_crtc_control(crtc, false);
- drm_modeset_unlock_all(dev);
-
- intel_dp_mst_suspend(dev);
-
- intel_runtime_pm_disable_interrupts(dev_priv);
- intel_hpd_cancel_work(dev_priv);
-
- intel_suspend_encoders(dev_priv);
-
- intel_suspend_hw(dev);
+ error = i915_gem_suspend(dev);
+ if (error) {
+ dev_err(&dev->pdev->dev,
+ "GEM idle failed, resume might fail\n");
+ return error;
}
+ intel_suspend_gt_powersave(dev);
+
+ /*
+ * Disable CRTCs directly since we want to preserve sw state
+ * for _thaw. Also, power gate the CRTC power wells.
+ */
+ drm_modeset_lock_all(dev);
+ for_each_crtc(dev, crtc)
+ intel_crtc_control(crtc, false);
+ drm_modeset_unlock_all(dev);
+
+ intel_dp_mst_suspend(dev);
+
+ intel_runtime_pm_disable_interrupts(dev_priv);
+ intel_hpd_cancel_work(dev_priv);
+
+ intel_suspend_encoders(dev_priv);
+
+ intel_suspend_hw(dev);
+
i915_gem_suspend_gtt_mappings(dev);
i915_save_state(dev);
@@ -679,53 +689,48 @@
{
struct drm_i915_private *dev_priv = dev->dev_private;
- if (drm_core_check_feature(dev, DRIVER_MODESET)) {
- mutex_lock(&dev->struct_mutex);
- i915_gem_restore_gtt_mappings(dev);
- mutex_unlock(&dev->struct_mutex);
- }
+ mutex_lock(&dev->struct_mutex);
+ i915_gem_restore_gtt_mappings(dev);
+ mutex_unlock(&dev->struct_mutex);
i915_restore_state(dev);
intel_opregion_setup(dev);
- /* KMS EnterVT equivalent */
- if (drm_core_check_feature(dev, DRIVER_MODESET)) {
- intel_init_pch_refclk(dev);
- drm_mode_config_reset(dev);
+ intel_init_pch_refclk(dev);
+ drm_mode_config_reset(dev);
- mutex_lock(&dev->struct_mutex);
- if (i915_gem_init_hw(dev)) {
- DRM_ERROR("failed to re-initialize GPU, declaring wedged!\n");
- atomic_set_mask(I915_WEDGED, &dev_priv->gpu_error.reset_counter);
- }
- mutex_unlock(&dev->struct_mutex);
-
- /* We need working interrupts for modeset enabling ... */
- intel_runtime_pm_enable_interrupts(dev_priv);
-
- intel_modeset_init_hw(dev);
-
- spin_lock_irq(&dev_priv->irq_lock);
- if (dev_priv->display.hpd_irq_setup)
- dev_priv->display.hpd_irq_setup(dev);
- spin_unlock_irq(&dev_priv->irq_lock);
-
- drm_modeset_lock_all(dev);
- intel_modeset_setup_hw_state(dev, true);
- drm_modeset_unlock_all(dev);
-
- intel_dp_mst_resume(dev);
-
- /*
- * ... but also need to make sure that hotplug processing
- * doesn't cause havoc. Like in the driver load code we don't
- * bother with the tiny race here where we might loose hotplug
- * notifications.
- * */
- intel_hpd_init(dev_priv);
- /* Config may have changed between suspend and resume */
- drm_helper_hpd_irq_event(dev);
+ mutex_lock(&dev->struct_mutex);
+ if (i915_gem_init_hw(dev)) {
+ DRM_ERROR("failed to re-initialize GPU, declaring wedged!\n");
+ atomic_set_mask(I915_WEDGED, &dev_priv->gpu_error.reset_counter);
}
+ mutex_unlock(&dev->struct_mutex);
+
+ /* We need working interrupts for modeset enabling ... */
+ intel_runtime_pm_enable_interrupts(dev_priv);
+
+ intel_modeset_init_hw(dev);
+
+ spin_lock_irq(&dev_priv->irq_lock);
+ if (dev_priv->display.hpd_irq_setup)
+ dev_priv->display.hpd_irq_setup(dev);
+ spin_unlock_irq(&dev_priv->irq_lock);
+
+ drm_modeset_lock_all(dev);
+ intel_modeset_setup_hw_state(dev, true);
+ drm_modeset_unlock_all(dev);
+
+ intel_dp_mst_resume(dev);
+
+ /*
+ * ... but also need to make sure that hotplug processing
+ * doesn't cause havoc. Like in the driver load code we don't
+ * bother with the tiny race here where we might lose hotplug
+ * notifications.
+ * */
+ intel_hpd_init(dev_priv);
+ /* Config may have changed between suspend and resume */
+ drm_helper_hpd_irq_event(dev);
intel_opregion_init(dev);
@@ -861,38 +866,29 @@
* was running at the time of the reset (i.e. we weren't VT
* switched away).
*/
- if (drm_core_check_feature(dev, DRIVER_MODESET)) {
- /* Used to prevent gem_check_wedged returning -EAGAIN during gpu reset */
- dev_priv->gpu_error.reload_in_reset = true;
- ret = i915_gem_init_hw(dev);
+ /* Used to prevent gem_check_wedged returning -EAGAIN during gpu reset */
+ dev_priv->gpu_error.reload_in_reset = true;
- dev_priv->gpu_error.reload_in_reset = false;
+ ret = i915_gem_init_hw(dev);
- mutex_unlock(&dev->struct_mutex);
- if (ret) {
- DRM_ERROR("Failed hw init on reset %d\n", ret);
- return ret;
- }
+ dev_priv->gpu_error.reload_in_reset = false;
- /*
- * FIXME: This races pretty badly against concurrent holders of
- * ring interrupts. This is possible since we've started to drop
- * dev->struct_mutex in select places when waiting for the gpu.
- */
-
- /*
- * rps/rc6 re-init is necessary to restore state lost after the
- * reset and the re-install of gt irqs. Skip for ironlake per
- * previous concerns that it doesn't respond well to some forms
- * of re-init after reset.
- */
- if (INTEL_INFO(dev)->gen > 5)
- intel_enable_gt_powersave(dev);
- } else {
- mutex_unlock(&dev->struct_mutex);
+ mutex_unlock(&dev->struct_mutex);
+ if (ret) {
+ DRM_ERROR("Failed hw init on reset %d\n", ret);
+ return ret;
}
+ /*
+ * rps/rc6 re-init is necessary to restore state lost after the
+ * reset and the re-install of gt irqs. Skip for ironlake per
+ * previous concerns that it doesn't respond well to some forms
+ * of re-init after reset.
+ */
+ if (INTEL_INFO(dev)->gen > 5)
+ intel_enable_gt_powersave(dev);
+
return 0;
}
@@ -1640,11 +1636,9 @@
if (!(driver.driver_features & DRIVER_MODESET)) {
driver.get_vblank_timestamp = NULL;
-#ifndef CONFIG_DRM_I915_UMS
/* Silently fail loading to not upset userspace. */
DRM_DEBUG_DRIVER("KMS and UMS disabled.\n");
return 0;
-#endif
}
/*
@@ -1660,10 +1654,8 @@
static void __exit i915_exit(void)
{
-#ifndef CONFIG_DRM_I915_UMS
if (!(driver.driver_features & DRIVER_MODESET))
return; /* Never loaded a driver. */
-#endif
drm_pci_exit(&driver, &i915_pci_driver);
}
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index b4faa2d..8ae6f7f 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -31,6 +31,7 @@
#define _I915_DRV_H_
#include <uapi/drm/i915_drm.h>
+#include <uapi/drm/drm_fourcc.h>
#include "i915_reg.h"
#include "intel_bios.h"
@@ -55,7 +56,7 @@
#define DRIVER_NAME "i915"
#define DRIVER_DESC "Intel Graphics"
-#define DRIVER_DATE "20150130"
+#define DRIVER_DATE "20150327"
#undef WARN_ON
/* Many gcc seem to no see through this and fall over :( */
@@ -69,6 +70,9 @@
#define WARN_ON(x) WARN((x), "WARN_ON(" #x ")")
#endif
+#undef WARN_ON_ONCE
+#define WARN_ON_ONCE(x) WARN_ONCE((x), "WARN_ON_ONCE(" #x ")")
+
#define MISSING_CASE(x) WARN(1, "Missing switch case (%lu) in %s\n", \
(long) (x), __func__);
@@ -222,9 +226,14 @@
#define for_each_pipe(__dev_priv, __p) \
for ((__p) = 0; (__p) < INTEL_INFO(__dev_priv)->num_pipes; (__p)++)
-#define for_each_plane(pipe, p) \
- for ((p) = 0; (p) < INTEL_INFO(dev)->num_sprites[(pipe)] + 1; (p)++)
-#define for_each_sprite(p, s) for ((s) = 0; (s) < INTEL_INFO(dev)->num_sprites[(p)]; (s)++)
+#define for_each_plane(__dev_priv, __pipe, __p) \
+ for ((__p) = 0; \
+ (__p) < INTEL_INFO(__dev_priv)->num_sprites[(__pipe)] + 1; \
+ (__p)++)
+#define for_each_sprite(__dev_priv, __p, __s) \
+ for ((__s) = 0; \
+ (__s) < INTEL_INFO(__dev_priv)->num_sprites[(__p)]; \
+ (__s)++)
#define for_each_crtc(dev, crtc) \
list_for_each_entry(crtc, &dev->mode_config.crtc_list, head)
@@ -237,6 +246,12 @@
&(dev)->mode_config.encoder_list, \
base.head)
+#define for_each_intel_connector(dev, intel_connector) \
+ list_for_each_entry(intel_connector, \
+ &dev->mode_config.connector_list, \
+ base.head)
+
+
#define for_each_encoder_on_crtc(dev, __crtc, intel_encoder) \
list_for_each_entry((intel_encoder), &(dev)->mode_config.encoder_list, base.head) \
if ((intel_encoder)->base.crtc == (__crtc))
@@ -412,6 +427,8 @@
u32 forcewake;
u32 error; /* gen6+ */
u32 err_int; /* gen7 */
+ u32 fault_data0; /* gen8, gen9 */
+ u32 fault_data1; /* gen8, gen9 */
u32 done_reg;
u32 gac_eco;
u32 gam_ecochk;
@@ -529,7 +546,7 @@
* Returns true on success, false on failure.
*/
bool (*find_dpll)(const struct intel_limit *limit,
- struct intel_crtc *crtc,
+ struct intel_crtc_state *crtc_state,
int target, int refclk,
struct dpll *match_clock,
struct dpll *best_clock);
@@ -538,7 +555,7 @@
struct drm_crtc *crtc,
uint32_t sprite_width, uint32_t sprite_height,
int pixel_size, bool enable, bool scaled);
- void (*modeset_global_resources)(struct drm_device *dev);
+ void (*modeset_global_resources)(struct drm_atomic_state *state);
/* Returns the active state of the crtc, and if the crtc is active,
* fills out the pipe-config with the hw state. */
bool (*get_pipe_config)(struct intel_crtc *,
@@ -692,7 +709,18 @@
int trans_offsets[I915_MAX_TRANSCODERS];
int palette_offsets[I915_MAX_PIPES];
int cursor_offsets[I915_MAX_PIPES];
- unsigned int eu_total;
+
+ /* Slice/subslice/EU info */
+ u8 slice_total;
+ u8 subslice_total;
+ u8 subslice_per_slice;
+ u8 eu_total;
+ u8 eu_per_subslice;
+ /* For each slice, which subslice(s) has(have) 7 EUs (bitfield)? */
+ u8 subslice_7eu[3];
+ u8 has_slice_pg:1;
+ u8 has_subslice_pg:1;
+ u8 has_eu_pg:1;
};
#undef DEFINE_FLAG
@@ -771,11 +799,20 @@
struct list_head link;
};
+enum fb_op_origin {
+ ORIGIN_GTT,
+ ORIGIN_CPU,
+ ORIGIN_CS,
+ ORIGIN_FLIP,
+};
+
struct i915_fbc {
- unsigned long size;
+ unsigned long uncompressed_size;
unsigned threshold;
unsigned int fb_id;
- enum plane plane;
+ unsigned int possible_framebuffer_bits;
+ unsigned int busy_bits;
+ struct intel_crtc *crtc;
int y;
struct drm_mm_node compressed_fb;
@@ -787,14 +824,6 @@
* possible. */
bool enabled;
- /* On gen8 some rings cannont perform fbc clean operation so for now
- * we are doing this on SW with mmio.
- * This variable works in the opposite information direction
- * of ring->fbc_dirty telling software on frontbuffer tracking
- * to perform the cache clean on sw side.
- */
- bool need_sw_cache_clean;
-
struct intel_fbc_work {
struct delayed_work work;
struct drm_crtc *crtc;
@@ -888,150 +917,21 @@
};
struct i915_suspend_saved_registers {
- u8 saveLBB;
- u32 saveDSPACNTR;
- u32 saveDSPBCNTR;
u32 saveDSPARB;
- u32 savePIPEACONF;
- u32 savePIPEBCONF;
- u32 savePIPEASRC;
- u32 savePIPEBSRC;
- u32 saveFPA0;
- u32 saveFPA1;
- u32 saveDPLL_A;
- u32 saveDPLL_A_MD;
- u32 saveHTOTAL_A;
- u32 saveHBLANK_A;
- u32 saveHSYNC_A;
- u32 saveVTOTAL_A;
- u32 saveVBLANK_A;
- u32 saveVSYNC_A;
- u32 saveBCLRPAT_A;
- u32 saveTRANSACONF;
- u32 saveTRANS_HTOTAL_A;
- u32 saveTRANS_HBLANK_A;
- u32 saveTRANS_HSYNC_A;
- u32 saveTRANS_VTOTAL_A;
- u32 saveTRANS_VBLANK_A;
- u32 saveTRANS_VSYNC_A;
- u32 savePIPEASTAT;
- u32 saveDSPASTRIDE;
- u32 saveDSPASIZE;
- u32 saveDSPAPOS;
- u32 saveDSPAADDR;
- u32 saveDSPASURF;
- u32 saveDSPATILEOFF;
- u32 savePFIT_PGM_RATIOS;
- u32 saveBLC_HIST_CTL;
- u32 saveBLC_PWM_CTL;
- u32 saveBLC_PWM_CTL2;
- u32 saveBLC_CPU_PWM_CTL;
- u32 saveBLC_CPU_PWM_CTL2;
- u32 saveFPB0;
- u32 saveFPB1;
- u32 saveDPLL_B;
- u32 saveDPLL_B_MD;
- u32 saveHTOTAL_B;
- u32 saveHBLANK_B;
- u32 saveHSYNC_B;
- u32 saveVTOTAL_B;
- u32 saveVBLANK_B;
- u32 saveVSYNC_B;
- u32 saveBCLRPAT_B;
- u32 saveTRANSBCONF;
- u32 saveTRANS_HTOTAL_B;
- u32 saveTRANS_HBLANK_B;
- u32 saveTRANS_HSYNC_B;
- u32 saveTRANS_VTOTAL_B;
- u32 saveTRANS_VBLANK_B;
- u32 saveTRANS_VSYNC_B;
- u32 savePIPEBSTAT;
- u32 saveDSPBSTRIDE;
- u32 saveDSPBSIZE;
- u32 saveDSPBPOS;
- u32 saveDSPBADDR;
- u32 saveDSPBSURF;
- u32 saveDSPBTILEOFF;
- u32 saveVGA0;
- u32 saveVGA1;
- u32 saveVGA_PD;
- u32 saveVGACNTRL;
- u32 saveADPA;
u32 saveLVDS;
u32 savePP_ON_DELAYS;
u32 savePP_OFF_DELAYS;
- u32 saveDVOA;
- u32 saveDVOB;
- u32 saveDVOC;
u32 savePP_ON;
u32 savePP_OFF;
u32 savePP_CONTROL;
u32 savePP_DIVISOR;
- u32 savePFIT_CONTROL;
- u32 save_palette_a[256];
- u32 save_palette_b[256];
u32 saveFBC_CONTROL;
- u32 saveIER;
- u32 saveIIR;
- u32 saveIMR;
- u32 saveDEIER;
- u32 saveDEIMR;
- u32 saveGTIER;
- u32 saveGTIMR;
- u32 saveFDI_RXA_IMR;
- u32 saveFDI_RXB_IMR;
u32 saveCACHE_MODE_0;
u32 saveMI_ARB_STATE;
u32 saveSWF0[16];
u32 saveSWF1[16];
u32 saveSWF2[3];
- u8 saveMSR;
- u8 saveSR[8];
- u8 saveGR[25];
- u8 saveAR_INDEX;
- u8 saveAR[21];
- u8 saveDACMASK;
- u8 saveCR[37];
uint64_t saveFENCE[I915_MAX_NUM_FENCES];
- u32 saveCURACNTR;
- u32 saveCURAPOS;
- u32 saveCURABASE;
- u32 saveCURBCNTR;
- u32 saveCURBPOS;
- u32 saveCURBBASE;
- u32 saveCURSIZE;
- u32 saveDP_B;
- u32 saveDP_C;
- u32 saveDP_D;
- u32 savePIPEA_GMCH_DATA_M;
- u32 savePIPEB_GMCH_DATA_M;
- u32 savePIPEA_GMCH_DATA_N;
- u32 savePIPEB_GMCH_DATA_N;
- u32 savePIPEA_DP_LINK_M;
- u32 savePIPEB_DP_LINK_M;
- u32 savePIPEA_DP_LINK_N;
- u32 savePIPEB_DP_LINK_N;
- u32 saveFDI_RXA_CTL;
- u32 saveFDI_TXA_CTL;
- u32 saveFDI_RXB_CTL;
- u32 saveFDI_TXB_CTL;
- u32 savePFA_CTL_1;
- u32 savePFB_CTL_1;
- u32 savePFA_WIN_SZ;
- u32 savePFB_WIN_SZ;
- u32 savePFA_WIN_POS;
- u32 savePFB_WIN_POS;
- u32 savePCH_DREF_CONTROL;
- u32 saveDISP_ARB_CTL;
- u32 savePIPEA_DATA_M1;
- u32 savePIPEA_DATA_N1;
- u32 savePIPEA_LINK_M1;
- u32 savePIPEA_LINK_N1;
- u32 savePIPEB_DATA_M1;
- u32 savePIPEB_DATA_N1;
- u32 savePIPEB_LINK_M1;
- u32 savePIPEB_LINK_N1;
- u32 saveMCHBAR_RENDER_STANDBY;
u32 savePCH_PORT_HOTPLUG;
u16 saveGCDGMBUS;
};
@@ -1128,13 +1028,12 @@
u8 max_freq_softlimit; /* Max frequency permitted by the driver */
u8 max_freq; /* Maximum frequency, RP0 if not overclocking */
u8 min_freq; /* AKA RPn. Minimum frequency */
+ u8 idle_freq; /* Frequency to request when we are idle */
u8 efficient_freq; /* AKA RPe. Pre-determined balanced frequency */
u8 rp1_freq; /* "less than" RP0 power/freqency */
u8 rp0_freq; /* Non-overclocked max frequency. */
u32 cz_freq;
- u32 ei_interrupt_count;
-
int last_adj;
enum { LOW_POWER, BETWEEN, HIGH_POWER } power;
@@ -1171,9 +1070,6 @@
int c_m;
int r_t;
-
- struct drm_i915_gem_object *pwrctx;
- struct drm_i915_gem_object *renderctx;
};
struct drm_i915_private;
@@ -1455,6 +1351,7 @@
bool edp_initialized;
bool edp_support;
int edp_bpp;
+ bool edp_low_vswing;
struct edp_power_seq edp_pps;
struct {
@@ -1515,6 +1412,25 @@
enum intel_ddb_partitioning partitioning;
};
+struct vlv_wm_values {
+ struct {
+ uint16_t primary;
+ uint16_t sprite[2];
+ uint8_t cursor;
+ } pipe[3];
+
+ struct {
+ uint16_t plane;
+ uint8_t cursor;
+ } sr;
+
+ struct {
+ uint8_t cursor;
+ uint8_t sprite[2];
+ uint8_t primary;
+ } ddl[3];
+};
+
struct skl_ddb_entry {
uint16_t start, end; /* in number of blocks, 'end' is exclusive */
};
@@ -1641,6 +1557,10 @@
u32 count;
};
+struct i915_virtual_gpu {
+ bool active;
+};
+
struct drm_i915_private {
struct drm_device *dev;
struct kmem_cache *slab;
@@ -1653,6 +1573,8 @@
struct intel_uncore uncore;
+ struct i915_virtual_gpu vgpu;
+
struct intel_gmbus gmbus[GMBUS_NUM_PORTS];
@@ -1871,6 +1793,7 @@
union {
struct ilk_wm_values hw;
struct skl_wm_values skl_hw;
+ struct vlv_wm_values vlv;
};
} wm;
@@ -2142,7 +2065,7 @@
u32 tail;
/**
- * Context related to this request
+ * Context and ring buffer related to this request
* Contexts are refcounted, so when this request is associated with a
* context, we must increment the context's refcount, to guarantee that
* it persists while any request is linked to it. Requests themselves
@@ -2152,6 +2075,7 @@
* context.
*/
struct intel_context *ctx;
+ struct intel_ringbuffer *ringbuf;
/** Batch buffer related to this request if any */
struct drm_i915_gem_object *batch_obj;
@@ -2166,6 +2090,9 @@
/** file_priv list entry for this request */
struct list_head client_list;
+ /** process identifier submitting this request */
+ struct pid *pid;
+
uint32_t uniq;
/**
@@ -2352,6 +2279,7 @@
})
#define INTEL_INFO(p) (&__I915__(p)->info)
#define INTEL_DEVID(p) (INTEL_INFO(p)->device_id)
+#define INTEL_REVID(p) (__I915__(p)->dev->pdev->revision)
#define IS_I830(dev) (INTEL_DEVID(dev) == 0x3577)
#define IS_845G(dev) (INTEL_DEVID(dev) == 0x2562)
@@ -2374,9 +2302,6 @@
#define IS_IVB_GT1(dev) (INTEL_DEVID(dev) == 0x0156 || \
INTEL_DEVID(dev) == 0x0152 || \
INTEL_DEVID(dev) == 0x015a)
-#define IS_SNB_GT1(dev) (INTEL_DEVID(dev) == 0x0102 || \
- INTEL_DEVID(dev) == 0x0106 || \
- INTEL_DEVID(dev) == 0x010A)
#define IS_VALLEYVIEW(dev) (INTEL_INFO(dev)->is_valleyview)
#define IS_CHERRYVIEW(dev) (INTEL_INFO(dev)->is_valleyview && IS_GEN8(dev))
#define IS_HASWELL(dev) (INTEL_INFO(dev)->is_haswell)
@@ -2400,6 +2325,12 @@
INTEL_DEVID(dev) == 0x0A1E)
#define IS_PRELIMINARY_HW(intel_info) ((intel_info)->is_preliminary)
+#define SKL_REVID_A0 (0x0)
+#define SKL_REVID_B0 (0x1)
+#define SKL_REVID_C0 (0x2)
+#define SKL_REVID_D0 (0x3)
+#define SKL_REVID_E0 (0x4)
+
/*
* The genX designation typically refers to the render engine, so render
* capability related checks should use IS_GEN, while display and other checks
@@ -2499,6 +2430,7 @@
#define NUM_L3_SLICES(dev) (IS_HSW_GT3(dev) ? 2 : HAS_L3_DPF(dev))
#define GT_FREQUENCY_MULTIPLIER 50
+#define GEN9_FREQ_SCALER 3
#include "i915_trace.h"
@@ -2507,14 +2439,11 @@
extern int i915_suspend_legacy(struct drm_device *dev, pm_message_t state);
extern int i915_resume_legacy(struct drm_device *dev);
-extern int i915_master_create(struct drm_device *dev, struct drm_master *master);
-extern void i915_master_destroy(struct drm_device *dev, struct drm_master *master);
/* i915_params.c */
struct i915_params {
int modeset;
int panel_ignore_lid;
- unsigned int powersave;
int semaphores;
unsigned int lvds_downclock;
int lvds_channel_mode;
@@ -2534,11 +2463,12 @@
bool enable_hangcheck;
bool fastboot;
bool prefault_disable;
+ bool load_detect_test;
bool reset;
bool disable_display;
bool disable_vtd_wa;
int use_mmio_flip;
- bool mmio_debug;
+ int mmio_debug;
bool verbose_state_checks;
bool nuclear_pageflip;
};
@@ -2591,6 +2521,10 @@
void intel_uncore_forcewake_put(struct drm_i915_private *dev_priv,
enum forcewake_domains domains);
void assert_forcewakes_inactive(struct drm_i915_private *dev_priv);
+static inline bool intel_vgpu_active(struct drm_device *dev)
+{
+ return to_i915(dev)->vgpu.active;
+}
void
i915_enable_pipestat(struct drm_i915_private *dev_priv, enum pipe pipe,
@@ -2669,12 +2603,6 @@
int i915_gem_wait_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv);
void i915_gem_load(struct drm_device *dev);
-unsigned long i915_gem_shrink(struct drm_i915_private *dev_priv,
- long target,
- unsigned flags);
-#define I915_SHRINK_PURGEABLE 0x1
-#define I915_SHRINK_UNBOUND 0x2
-#define I915_SHRINK_BOUND 0x4
void *i915_gem_object_alloc(struct drm_device *dev);
void i915_gem_object_free(struct drm_i915_gem_object *obj);
void i915_gem_object_init(struct drm_i915_gem_object *obj,
@@ -2691,20 +2619,16 @@
#define PIN_GLOBAL 0x4
#define PIN_OFFSET_BIAS 0x8
#define PIN_OFFSET_MASK (~4095)
-int __must_check i915_gem_object_pin_view(struct drm_i915_gem_object *obj,
- struct i915_address_space *vm,
- uint32_t alignment,
- uint64_t flags,
- const struct i915_ggtt_view *view);
-static inline
-int __must_check i915_gem_object_pin(struct drm_i915_gem_object *obj,
- struct i915_address_space *vm,
- uint32_t alignment,
- uint64_t flags)
-{
- return i915_gem_object_pin_view(obj, vm, alignment, flags,
- &i915_ggtt_view_normal);
-}
+int __must_check
+i915_gem_object_pin(struct drm_i915_gem_object *obj,
+ struct i915_address_space *vm,
+ uint32_t alignment,
+ uint64_t flags);
+int __must_check
+i915_gem_object_ggtt_pin(struct drm_i915_gem_object *obj,
+ const struct i915_ggtt_view *view,
+ uint32_t alignment,
+ uint64_t flags);
int i915_vma_bind(struct i915_vma *vma, enum i915_cache_level cache_level,
u32 flags);
@@ -2844,8 +2768,10 @@
int __must_check
i915_gem_object_pin_to_display_plane(struct drm_i915_gem_object *obj,
u32 alignment,
- struct intel_engine_cs *pipelined);
-void i915_gem_object_unpin_from_display_plane(struct drm_i915_gem_object *obj);
+ struct intel_engine_cs *pipelined,
+ const struct i915_ggtt_view *view);
+void i915_gem_object_unpin_from_display_plane(struct drm_i915_gem_object *obj,
+ const struct i915_ggtt_view *view);
int i915_gem_object_attach_phys(struct drm_i915_gem_object *obj,
int align);
int i915_gem_open(struct drm_device *dev, struct drm_file *file);
@@ -2868,60 +2794,46 @@
void i915_gem_restore_fences(struct drm_device *dev);
-unsigned long i915_gem_obj_offset_view(struct drm_i915_gem_object *o,
- struct i915_address_space *vm,
- enum i915_ggtt_view_type view);
-static inline
-unsigned long i915_gem_obj_offset(struct drm_i915_gem_object *o,
- struct i915_address_space *vm)
+unsigned long
+i915_gem_obj_ggtt_offset_view(struct drm_i915_gem_object *o,
+ const struct i915_ggtt_view *view);
+unsigned long
+i915_gem_obj_offset(struct drm_i915_gem_object *o,
+ struct i915_address_space *vm);
+static inline unsigned long
+i915_gem_obj_ggtt_offset(struct drm_i915_gem_object *o)
{
- return i915_gem_obj_offset_view(o, vm, I915_GGTT_VIEW_NORMAL);
+ return i915_gem_obj_ggtt_offset_view(o, &i915_ggtt_view_normal);
}
+
bool i915_gem_obj_bound_any(struct drm_i915_gem_object *o);
-bool i915_gem_obj_bound_view(struct drm_i915_gem_object *o,
- struct i915_address_space *vm,
- enum i915_ggtt_view_type view);
-static inline
+bool i915_gem_obj_ggtt_bound_view(struct drm_i915_gem_object *o,
+ const struct i915_ggtt_view *view);
bool i915_gem_obj_bound(struct drm_i915_gem_object *o,
- struct i915_address_space *vm)
-{
- return i915_gem_obj_bound_view(o, vm, I915_GGTT_VIEW_NORMAL);
-}
+ struct i915_address_space *vm);
unsigned long i915_gem_obj_size(struct drm_i915_gem_object *o,
struct i915_address_space *vm);
-struct i915_vma *i915_gem_obj_to_vma_view(struct drm_i915_gem_object *obj,
- struct i915_address_space *vm,
- const struct i915_ggtt_view *view);
-static inline
-struct i915_vma *i915_gem_obj_to_vma(struct drm_i915_gem_object *obj,
- struct i915_address_space *vm)
-{
- return i915_gem_obj_to_vma_view(obj, vm, &i915_ggtt_view_normal);
-}
-
struct i915_vma *
-i915_gem_obj_lookup_or_create_vma_view(struct drm_i915_gem_object *obj,
- struct i915_address_space *vm,
- const struct i915_ggtt_view *view);
+i915_gem_obj_to_vma(struct drm_i915_gem_object *obj,
+ struct i915_address_space *vm);
+struct i915_vma *
+i915_gem_obj_to_ggtt_view(struct drm_i915_gem_object *obj,
+ const struct i915_ggtt_view *view);
-static inline
struct i915_vma *
i915_gem_obj_lookup_or_create_vma(struct drm_i915_gem_object *obj,
- struct i915_address_space *vm)
-{
- return i915_gem_obj_lookup_or_create_vma_view(obj, vm,
- &i915_ggtt_view_normal);
-}
+ struct i915_address_space *vm);
+struct i915_vma *
+i915_gem_obj_lookup_or_create_ggtt_vma(struct drm_i915_gem_object *obj,
+ const struct i915_ggtt_view *view);
-struct i915_vma *i915_gem_obj_to_ggtt(struct drm_i915_gem_object *obj);
-static inline bool i915_gem_obj_is_pinned(struct drm_i915_gem_object *obj) {
- struct i915_vma *vma;
- list_for_each_entry(vma, &obj->vma_list, vma_link)
- if (vma->pin_count > 0)
- return true;
- return false;
+static inline struct i915_vma *
+i915_gem_obj_to_ggtt(struct drm_i915_gem_object *obj)
+{
+ return i915_gem_obj_to_ggtt_view(obj, &i915_ggtt_view_normal);
}
+bool i915_gem_obj_is_pinned(struct drm_i915_gem_object *obj);
/* Some GGTT VM helpers */
#define i915_obj_to_ggtt(obj) \
@@ -2944,13 +2856,7 @@
static inline bool i915_gem_obj_ggtt_bound(struct drm_i915_gem_object *obj)
{
- return i915_gem_obj_bound(obj, i915_obj_to_ggtt(obj));
-}
-
-static inline unsigned long
-i915_gem_obj_ggtt_offset(struct drm_i915_gem_object *obj)
-{
- return i915_gem_obj_offset(obj, i915_obj_to_ggtt(obj));
+ return i915_gem_obj_ggtt_bound_view(obj, &i915_ggtt_view_normal);
}
static inline unsigned long
@@ -2974,7 +2880,13 @@
return i915_vma_unbind(i915_gem_obj_to_ggtt(obj));
}
-void i915_gem_object_ggtt_unpin(struct drm_i915_gem_object *obj);
+void i915_gem_object_ggtt_unpin_view(struct drm_i915_gem_object *obj,
+ const struct i915_ggtt_view *view);
+static inline void
+i915_gem_object_ggtt_unpin(struct drm_i915_gem_object *obj)
+{
+ i915_gem_object_ggtt_unpin_view(obj, &i915_ggtt_view_normal);
+}
/* i915_gem_context.c */
int __must_check i915_gem_context_init(struct drm_device *dev);
@@ -3046,6 +2958,17 @@
u32 gtt_offset,
u32 size);
+/* i915_gem_shrinker.c */
+unsigned long i915_gem_shrink(struct drm_i915_private *dev_priv,
+ long target,
+ unsigned flags);
+#define I915_SHRINK_PURGEABLE 0x1
+#define I915_SHRINK_UNBOUND 0x2
+#define I915_SHRINK_BOUND 0x4
+unsigned long i915_gem_shrink_all(struct drm_i915_private *dev_priv);
+void i915_gem_shrinker_init(struct drm_i915_private *dev_priv);
+
+
/* i915_gem_tiling.c */
static inline bool i915_gem_object_needs_bit17_swizzle(struct drm_i915_gem_object *obj)
{
@@ -3121,10 +3044,6 @@
extern int i915_save_state(struct drm_device *dev);
extern int i915_restore_state(struct drm_device *dev);
-/* i915_ums.c */
-void i915_save_display_reg(struct drm_device *dev);
-void i915_restore_display_reg(struct drm_device *dev);
-
/* i915_sysfs.c */
void i915_setup_sysfs(struct drm_device *dev_priv);
void i915_teardown_sysfs(struct drm_device *dev_priv);
@@ -3196,8 +3115,7 @@
extern void i915_redisable_vga_power_on(struct drm_device *dev);
extern bool ironlake_set_drps(struct drm_device *dev, u8 val);
extern void intel_init_pch_refclk(struct drm_device *dev);
-extern void gen6_set_rps(struct drm_device *dev, u8 val);
-extern void valleyview_set_rps(struct drm_device *dev, u8 val);
+extern void intel_set_rps(struct drm_device *dev, u8 val);
extern void intel_set_memory_cxsr(struct drm_i915_private *dev_priv,
bool enable);
extern void intel_detect_pch(struct drm_device *dev);
@@ -3210,8 +3128,6 @@
int i915_get_reset_stats_ioctl(struct drm_device *dev, void *data,
struct drm_file *file);
-void intel_notify_mmio_flip(struct intel_engine_cs *ring);
-
/* overlay */
extern struct intel_overlay_error_state *intel_overlay_capture_error_state(struct drm_device *dev);
extern void intel_overlay_print_error_state(struct drm_i915_error_state_buf *e,
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 27ea6bd..d07c0b1 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1,5 +1,5 @@
/*
- * Copyright © 2008 Intel Corporation
+ * Copyright © 2008-2015 Intel Corporation
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -29,9 +29,9 @@
#include <drm/drm_vma_manager.h>
#include <drm/i915_drm.h>
#include "i915_drv.h"
+#include "i915_vgpu.h"
#include "i915_trace.h"
#include "intel_drv.h"
-#include <linux/oom.h>
#include <linux/shmem_fs.h>
#include <linux/slab.h>
#include <linux/swap.h>
@@ -52,15 +52,6 @@
struct drm_i915_fence_reg *fence,
bool enable);
-static unsigned long i915_gem_shrinker_count(struct shrinker *shrinker,
- struct shrink_control *sc);
-static unsigned long i915_gem_shrinker_scan(struct shrinker *shrinker,
- struct shrink_control *sc);
-static int i915_gem_shrinker_oom(struct notifier_block *nb,
- unsigned long event,
- void *ptr);
-static unsigned long i915_gem_shrink_all(struct drm_i915_private *dev_priv);
-
static bool cpu_cache_is_coherent(struct drm_device *dev,
enum i915_cache_level level)
{
@@ -350,7 +341,7 @@
struct drm_device *dev = obj->base.dev;
void *vaddr = obj->phys_handle->vaddr + args->offset;
char __user *user_data = to_user_ptr(args->data_ptr);
- int ret;
+ int ret = 0;
/* We manually control the domain here and pretend that it
* remains coherent i.e. in the GTT domain, like shmem_pwrite.
@@ -359,6 +350,7 @@
if (ret)
return ret;
+ intel_fb_obj_invalidate(obj, NULL, ORIGIN_CPU);
if (__copy_from_user_inatomic_nocache(vaddr, user_data, args->size)) {
unsigned long unwritten;
@@ -369,13 +361,18 @@
mutex_unlock(&dev->struct_mutex);
unwritten = copy_from_user(vaddr, user_data, args->size);
mutex_lock(&dev->struct_mutex);
- if (unwritten)
- return -EFAULT;
+ if (unwritten) {
+ ret = -EFAULT;
+ goto out;
+ }
}
drm_clflush_virt_range(vaddr, args->size);
i915_gem_chipset_flush(dev);
- return 0;
+
+out:
+ intel_fb_obj_flush(obj, false);
+ return ret;
}
void *i915_gem_object_alloc(struct drm_device *dev)
@@ -809,6 +806,8 @@
offset = i915_gem_obj_ggtt_offset(obj) + args->offset;
+ intel_fb_obj_invalidate(obj, NULL, ORIGIN_GTT);
+
while (remain > 0) {
/* Operation in this page
*
@@ -829,7 +828,7 @@
if (fast_user_write(dev_priv->gtt.mappable, page_base,
page_offset, user_data, page_length)) {
ret = -EFAULT;
- goto out_unpin;
+ goto out_flush;
}
remain -= page_length;
@@ -837,6 +836,8 @@
offset += page_length;
}
+out_flush:
+ intel_fb_obj_flush(obj, false);
out_unpin:
i915_gem_object_ggtt_unpin(obj);
out:
@@ -951,6 +952,8 @@
if (ret)
return ret;
+ intel_fb_obj_invalidate(obj, NULL, ORIGIN_CPU);
+
i915_gem_object_pin_pages(obj);
offset = args->offset;
@@ -1029,6 +1032,7 @@
if (needs_clflush_after)
i915_gem_chipset_flush(dev);
+ intel_fb_obj_flush(obj, false);
return ret;
}
@@ -1922,12 +1926,6 @@
return i915_gem_mmap_gtt(file, dev, args->handle, &args->offset);
}
-static inline int
-i915_gem_object_is_purgeable(struct drm_i915_gem_object *obj)
-{
- return obj->madv == I915_MADV_DONTNEED;
-}
-
/* Immediately discard the backing storage */
static void
i915_gem_object_truncate(struct drm_i915_gem_object *obj)
@@ -2033,85 +2031,6 @@
return 0;
}
-unsigned long
-i915_gem_shrink(struct drm_i915_private *dev_priv,
- long target, unsigned flags)
-{
- const struct {
- struct list_head *list;
- unsigned int bit;
- } phases[] = {
- { &dev_priv->mm.unbound_list, I915_SHRINK_UNBOUND },
- { &dev_priv->mm.bound_list, I915_SHRINK_BOUND },
- { NULL, 0 },
- }, *phase;
- unsigned long count = 0;
-
- /*
- * As we may completely rewrite the (un)bound list whilst unbinding
- * (due to retiring requests) we have to strictly process only
- * one element of the list at the time, and recheck the list
- * on every iteration.
- *
- * In particular, we must hold a reference whilst removing the
- * object as we may end up waiting for and/or retiring the objects.
- * This might release the final reference (held by the active list)
- * and result in the object being freed from under us. This is
- * similar to the precautions the eviction code must take whilst
- * removing objects.
- *
- * Also note that although these lists do not hold a reference to
- * the object we can safely grab one here: The final object
- * unreferencing and the bound_list are both protected by the
- * dev->struct_mutex and so we won't ever be able to observe an
- * object on the bound_list with a reference count equals 0.
- */
- for (phase = phases; phase->list; phase++) {
- struct list_head still_in_list;
-
- if ((flags & phase->bit) == 0)
- continue;
-
- INIT_LIST_HEAD(&still_in_list);
- while (count < target && !list_empty(phase->list)) {
- struct drm_i915_gem_object *obj;
- struct i915_vma *vma, *v;
-
- obj = list_first_entry(phase->list,
- typeof(*obj), global_list);
- list_move_tail(&obj->global_list, &still_in_list);
-
- if (flags & I915_SHRINK_PURGEABLE &&
- !i915_gem_object_is_purgeable(obj))
- continue;
-
- drm_gem_object_reference(&obj->base);
-
- /* For the unbound phase, this should be a no-op! */
- list_for_each_entry_safe(vma, v,
- &obj->vma_list, vma_link)
- if (i915_vma_unbind(vma))
- break;
-
- if (i915_gem_object_put_pages(obj) == 0)
- count += obj->base.size >> PAGE_SHIFT;
-
- drm_gem_object_unreference(&obj->base);
- }
- list_splice(&still_in_list, phase->list);
- }
-
- return count;
-}
-
-static unsigned long
-i915_gem_shrink_all(struct drm_i915_private *dev_priv)
-{
- i915_gem_evict_everything(dev_priv->dev);
- return i915_gem_shrink(dev_priv, LONG_MAX,
- I915_SHRINK_BOUND | I915_SHRINK_UNBOUND);
-}
-
static int
i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
{
@@ -2492,6 +2411,8 @@
list_add_tail(&request->client_list,
&file_priv->mm.request_list);
spin_unlock(&file_priv->mm.lock);
+
+ request->pid = get_pid(task_pid(current));
}
trace_i915_gem_request_add(request);
@@ -2572,6 +2493,8 @@
list_del(&request->list);
i915_gem_request_remove_from_client(request);
+ put_pid(request->pid);
+
i915_gem_request_unreference(request);
}
@@ -2744,7 +2667,6 @@
*/
while (!list_empty(&ring->request_list)) {
struct drm_i915_gem_request *request;
- struct intel_ringbuffer *ringbuf;
request = list_first_entry(&ring->request_list,
struct drm_i915_gem_request,
@@ -2755,23 +2677,12 @@
trace_i915_gem_request_retire(request);
- /* This is one of the few common intersection points
- * between legacy ringbuffer submission and execlists:
- * we need to tell them apart in order to find the correct
- * ringbuffer to which the request belongs to.
- */
- if (i915.enable_execlists) {
- struct intel_context *ctx = request->ctx;
- ringbuf = ctx->engine[ring->id].ringbuf;
- } else
- ringbuf = ring->buffer;
-
/* We know the GPU must have read the request to have
* sent us the seqno + interrupt, so use the position
* of tail of the request to update the last known position
* of the GPU head.
*/
- ringbuf->last_retired_head = request->postfix;
+ request->ringbuf->last_retired_head = request->postfix;
i915_gem_free_request(request);
}
@@ -3516,9 +3427,9 @@
static struct i915_vma *
i915_gem_object_bind_to_vm(struct drm_i915_gem_object *obj,
struct i915_address_space *vm,
+ const struct i915_ggtt_view *ggtt_view,
unsigned alignment,
- uint64_t flags,
- const struct i915_ggtt_view *view)
+ uint64_t flags)
{
struct drm_device *dev = obj->base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
@@ -3530,6 +3441,9 @@
struct i915_vma *vma;
int ret;
+ if (WARN_ON(i915_is_ggtt(vm) != !!ggtt_view))
+ return ERR_PTR(-EINVAL);
+
fence_size = i915_gem_get_gtt_size(dev,
obj->base.size,
obj->tiling_mode);
@@ -3568,7 +3482,9 @@
i915_gem_object_pin_pages(obj);
- vma = i915_gem_obj_lookup_or_create_vma_view(obj, vm, view);
+ vma = ggtt_view ? i915_gem_obj_lookup_or_create_ggtt_vma(obj, ggtt_view) :
+ i915_gem_obj_lookup_or_create_vma(obj, vm);
+
if (IS_ERR(vma))
goto err_unpin;
@@ -3598,6 +3514,17 @@
if (ret)
goto err_remove_node;
+ /* allocate before insert / bind */
+ if (vma->vm->allocate_va_range) {
+ trace_i915_va_alloc(vma->vm, vma->node.start, vma->node.size,
+ VM_TO_TRACE_NAME(vma->vm));
+ ret = vma->vm->allocate_va_range(vma->vm,
+ vma->node.start,
+ vma->node.size);
+ if (ret)
+ goto err_remove_node;
+ }
+
trace_i915_vma_bind(vma, flags);
ret = i915_vma_bind(vma, obj->cache_level,
flags & PIN_GLOBAL ? GLOBAL_BIND : 0);
@@ -3768,7 +3695,7 @@
}
if (write)
- intel_fb_obj_invalidate(obj, NULL);
+ intel_fb_obj_invalidate(obj, NULL, ORIGIN_GTT);
trace_i915_gem_object_change_domain(obj,
old_read_domains,
@@ -3950,7 +3877,8 @@
int
i915_gem_object_pin_to_display_plane(struct drm_i915_gem_object *obj,
u32 alignment,
- struct intel_engine_cs *pipelined)
+ struct intel_engine_cs *pipelined,
+ const struct i915_ggtt_view *view)
{
u32 old_read_domains, old_write_domain;
bool was_pin_display;
@@ -3986,7 +3914,9 @@
* (e.g. libkms for the bootup splash), we have to ensure that we
* always use map_and_fenceable for all scanout buffers.
*/
- ret = i915_gem_obj_ggtt_pin(obj, alignment, PIN_MAPPABLE);
+ ret = i915_gem_object_ggtt_pin(obj, view, alignment,
+ view->type == I915_GGTT_VIEW_NORMAL ?
+ PIN_MAPPABLE : 0);
if (ret)
goto err_unpin_display;
@@ -4014,9 +3944,11 @@
}
void
-i915_gem_object_unpin_from_display_plane(struct drm_i915_gem_object *obj)
+i915_gem_object_unpin_from_display_plane(struct drm_i915_gem_object *obj,
+ const struct i915_ggtt_view *view)
{
- i915_gem_object_ggtt_unpin(obj);
+ i915_gem_object_ggtt_unpin_view(obj, view);
+
obj->pin_display = is_pin_display(obj);
}
@@ -4083,7 +4015,7 @@
}
if (write)
- intel_fb_obj_invalidate(obj, NULL);
+ intel_fb_obj_invalidate(obj, NULL, ORIGIN_CPU);
trace_i915_gem_object_change_domain(obj,
old_read_domains,
@@ -4165,12 +4097,12 @@
return false;
}
-int
-i915_gem_object_pin_view(struct drm_i915_gem_object *obj,
- struct i915_address_space *vm,
- uint32_t alignment,
- uint64_t flags,
- const struct i915_ggtt_view *view)
+static int
+i915_gem_object_do_pin(struct drm_i915_gem_object *obj,
+ struct i915_address_space *vm,
+ const struct i915_ggtt_view *ggtt_view,
+ uint32_t alignment,
+ uint64_t flags)
{
struct drm_i915_private *dev_priv = obj->base.dev->dev_private;
struct i915_vma *vma;
@@ -4186,17 +4118,29 @@
if (WARN_ON((flags & (PIN_MAPPABLE | PIN_GLOBAL)) == PIN_MAPPABLE))
return -EINVAL;
- vma = i915_gem_obj_to_vma_view(obj, vm, view);
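+ /* A GGTT pin must come with a GGTT view; a PPGTT pin must not. */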
+ if (WARN_ON(i915_is_ggtt(vm) != !!ggtt_view))
+ return -EINVAL;
+
+ vma = ggtt_view ? i915_gem_obj_to_ggtt_view(obj, ggtt_view) :
+ i915_gem_obj_to_vma(obj, vm);
+
+ if (IS_ERR(vma))
+ return PTR_ERR(vma);
+
if (vma) {
if (WARN_ON(vma->pin_count == DRM_I915_GEM_OBJECT_MAX_PIN_COUNT))
return -EBUSY;
if (i915_vma_misplaced(vma, alignment, flags)) {
+ unsigned long offset;
+ offset = ggtt_view ? i915_gem_obj_ggtt_offset_view(obj, ggtt_view) :
+ i915_gem_obj_offset(obj, vm);
WARN(vma->pin_count,
- "bo is already pinned with incorrect alignment:"
+ "bo is already pinned in %s with incorrect alignment:"
" offset=%lx, req.alignment=%x, req.map_and_fenceable=%d,"
" obj->map_and_fenceable=%d\n",
- i915_gem_obj_offset_view(obj, vm, view->type),
+ ggtt_view ? "ggtt" : "ppgtt",
+ offset,
alignment,
!!(flags & PIN_MAPPABLE),
obj->map_and_fenceable);
@@ -4210,8 +4154,12 @@
bound = vma ? vma->bound : 0;
if (vma == NULL || !drm_mm_node_allocated(&vma->node)) {
- vma = i915_gem_object_bind_to_vm(obj, vm, alignment,
- flags, view);
+ /* In true PPGTT, binding may have changed the PDEs, which
+ * means we must do a context switch before the GPU can
+ * accurately read some of the VMAs.
+ */
+ vma = i915_gem_object_bind_to_vm(obj, vm, ggtt_view, alignment,
+ flags);
if (IS_ERR(vma))
return PTR_ERR(vma);
}
@@ -4237,7 +4185,7 @@
fenceable = (vma->node.size == fence_size &&
(vma->node.start & (fence_alignment - 1)) == 0);
- mappable = (vma->node.start + obj->base.size <=
+ mappable = (vma->node.start + fence_size <=
dev_priv->gtt.mappable_end);
obj->map_and_fenceable = mappable && fenceable;
@@ -4252,16 +4200,41 @@
return 0;
}
-void
-i915_gem_object_ggtt_unpin(struct drm_i915_gem_object *obj)
+int
+i915_gem_object_pin(struct drm_i915_gem_object *obj,
+ struct i915_address_space *vm,
+ uint32_t alignment,
+ uint64_t flags)
{
- struct i915_vma *vma = i915_gem_obj_to_ggtt(obj);
+ return i915_gem_object_do_pin(obj, vm,
+ i915_is_ggtt(vm) ? &i915_ggtt_view_normal : NULL,
+ alignment, flags);
+}
+
+int
+i915_gem_object_ggtt_pin(struct drm_i915_gem_object *obj,
+ const struct i915_ggtt_view *view,
+ uint32_t alignment,
+ uint64_t flags)
+{
+ if (WARN_ONCE(!view, "no view specified"))
+ return -EINVAL;
+
+ return i915_gem_object_do_pin(obj, i915_obj_to_ggtt(obj), view,
+ alignment, flags | PIN_GLOBAL);
+}
+
+void
+i915_gem_object_ggtt_unpin_view(struct drm_i915_gem_object *obj,
+ const struct i915_ggtt_view *view)
+{
+ struct i915_vma *vma = i915_gem_obj_to_ggtt_view(obj, view);
BUG_ON(!vma);
- BUG_ON(vma->pin_count == 0);
- BUG_ON(!i915_gem_obj_ggtt_bound(obj));
+ WARN_ON(vma->pin_count == 0);
+ WARN_ON(!i915_gem_obj_ggtt_bound_view(obj, view));
- if (--vma->pin_count == 0)
+ if (--vma->pin_count == 0 && view->type == I915_GGTT_VIEW_NORMAL)
obj->pin_mappable = false;
}
@@ -4382,7 +4355,7 @@
obj->madv = args->madv;
/* if the object is no longer attached, discard its backing storage */
- if (i915_gem_object_is_purgeable(obj) && obj->pages == NULL)
+ if (obj->madv == I915_MADV_DONTNEED && obj->pages == NULL)
i915_gem_object_truncate(obj);
args->retained = obj->madv != __I915_MADV_PURGED;
@@ -4557,15 +4530,33 @@
intel_runtime_pm_put(dev_priv);
}
-struct i915_vma *i915_gem_obj_to_vma_view(struct drm_i915_gem_object *obj,
- struct i915_address_space *vm,
- const struct i915_ggtt_view *view)
+struct i915_vma *i915_gem_obj_to_vma(struct drm_i915_gem_object *obj,
+ struct i915_address_space *vm)
{
struct i915_vma *vma;
- list_for_each_entry(vma, &obj->vma_list, vma_link)
- if (vma->vm == vm && vma->ggtt_view.type == view->type)
+ list_for_each_entry(vma, &obj->vma_list, vma_link) {
+ if (i915_is_ggtt(vma->vm) &&
+ vma->ggtt_view.type != I915_GGTT_VIEW_NORMAL)
+ continue;
+ if (vma->vm == vm)
return vma;
+ }
+ return NULL;
+}
+
+struct i915_vma *i915_gem_obj_to_ggtt_view(struct drm_i915_gem_object *obj,
+ const struct i915_ggtt_view *view)
+{
+ struct i915_address_space *ggtt = i915_obj_to_ggtt(obj);
+ struct i915_vma *vma;
+
+ if (WARN_ONCE(!view, "no view specified"))
+ return ERR_PTR(-EINVAL);
+
+ list_for_each_entry(vma, &obj->vma_list, vma_link)
+ if (vma->vm == ggtt &&
+ i915_ggtt_view_equal(&vma->ggtt_view, view))
+ return vma;
return NULL;
}
@@ -4612,10 +4603,6 @@
i915_gem_retire_requests(dev);
- /* Under UMS, be paranoid and evict. */
- if (!drm_core_check_feature(dev, DRIVER_MODESET))
- i915_gem_evict_everything(dev);
-
i915_gem_stop_ringbuffers(dev);
mutex_unlock(&dev->struct_mutex);
@@ -4986,18 +4973,8 @@
i915_gem_idle_work_handler);
init_waitqueue_head(&dev_priv->gpu_error.reset_queue);
- /* On GEN3 we really need to make sure the ARB C3 LP bit is set */
- if (!drm_core_check_feature(dev, DRIVER_MODESET) && IS_GEN3(dev)) {
- I915_WRITE(MI_ARB_STATE,
- _MASKED_BIT_ENABLE(MI_ARB_C3_LP_WRITE_ENABLE));
- }
-
dev_priv->relative_constants_mode = I915_EXEC_CONSTANTS_REL_GENERAL;
- /* Old X drivers will take 0-2 for front, back, depth buffers */
- if (!drm_core_check_feature(dev, DRIVER_MODESET))
- dev_priv->fence_reg_start = 3;
-
if (INTEL_INFO(dev)->gen >= 7 && !IS_VALLEYVIEW(dev))
dev_priv->num_fence_regs = 32;
else if (INTEL_INFO(dev)->gen >= 4 || IS_I945G(dev) || IS_I945GM(dev) || IS_G33(dev))
@@ -5005,6 +4982,10 @@
else
dev_priv->num_fence_regs = 8;
+ if (intel_vgpu_active(dev))
+ dev_priv->num_fence_regs =
+ I915_READ(vgtif_reg(avail_rs.fence_num));
+
/* Initialize fence registers to zero */
INIT_LIST_HEAD(&dev_priv->mm.fence_list);
i915_gem_restore_fences(dev);
@@ -5014,13 +4995,7 @@
dev_priv->mm.interruptible = true;
- dev_priv->mm.shrinker.scan_objects = i915_gem_shrinker_scan;
- dev_priv->mm.shrinker.count_objects = i915_gem_shrinker_count;
- dev_priv->mm.shrinker.seeks = DEFAULT_SEEKS;
- register_shrinker(&dev_priv->mm.shrinker);
-
- dev_priv->mm.oom_notifier.notifier_call = i915_gem_shrinker_oom;
- register_oom_notifier(&dev_priv->mm.oom_notifier);
+ i915_gem_shrinker_init(dev_priv);
i915_gem_batch_pool_init(dev, &dev_priv->mm.batch_pool);
@@ -5112,81 +5087,10 @@
}
}
-static bool mutex_is_locked_by(struct mutex *mutex, struct task_struct *task)
-{
- if (!mutex_is_locked(mutex))
- return false;
-
-#if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_MUTEXES)
- return mutex->owner == task;
-#else
- /* Since UP may be pre-empted, we cannot assume that we own the lock */
- return false;
-#endif
-}
-
-static bool i915_gem_shrinker_lock(struct drm_device *dev, bool *unlock)
-{
- if (!mutex_trylock(&dev->struct_mutex)) {
- if (!mutex_is_locked_by(&dev->struct_mutex, current))
- return false;
-
- if (to_i915(dev)->mm.shrinker_no_lock_stealing)
- return false;
-
- *unlock = false;
- } else
- *unlock = true;
-
- return true;
-}
-
-static int num_vma_bound(struct drm_i915_gem_object *obj)
-{
- struct i915_vma *vma;
- int count = 0;
-
- list_for_each_entry(vma, &obj->vma_list, vma_link)
- if (drm_mm_node_allocated(&vma->node))
- count++;
-
- return count;
-}
-
-static unsigned long
-i915_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc)
-{
- struct drm_i915_private *dev_priv =
- container_of(shrinker, struct drm_i915_private, mm.shrinker);
- struct drm_device *dev = dev_priv->dev;
- struct drm_i915_gem_object *obj;
- unsigned long count;
- bool unlock;
-
- if (!i915_gem_shrinker_lock(dev, &unlock))
- return 0;
-
- count = 0;
- list_for_each_entry(obj, &dev_priv->mm.unbound_list, global_list)
- if (obj->pages_pin_count == 0)
- count += obj->base.size >> PAGE_SHIFT;
-
- list_for_each_entry(obj, &dev_priv->mm.bound_list, global_list) {
- if (!i915_gem_obj_is_pinned(obj) &&
- obj->pages_pin_count == num_vma_bound(obj))
- count += obj->base.size >> PAGE_SHIFT;
- }
-
- if (unlock)
- mutex_unlock(&dev->struct_mutex);
-
- return count;
-}
-
/* All the new VM stuff */
-unsigned long i915_gem_obj_offset_view(struct drm_i915_gem_object *o,
- struct i915_address_space *vm,
- enum i915_ggtt_view_type view)
+unsigned long
+i915_gem_obj_offset(struct drm_i915_gem_object *o,
+ struct i915_address_space *vm)
{
struct drm_i915_private *dev_priv = o->base.dev->dev_private;
struct i915_vma *vma;
@@ -5194,24 +5098,59 @@
WARN_ON(vm == &dev_priv->mm.aliasing_ppgtt->base);
list_for_each_entry(vma, &o->vma_list, vma_link) {
- if (vma->vm == vm && vma->ggtt_view.type == view)
+ if (i915_is_ggtt(vma->vm) &&
+ vma->ggtt_view.type != I915_GGTT_VIEW_NORMAL)
+ continue;
+ if (vma->vm == vm)
return vma->node.start;
-
}
+
WARN(1, "%s vma for this object not found.\n",
i915_is_ggtt(vm) ? "global" : "ppgtt");
return -1;
}
-bool i915_gem_obj_bound_view(struct drm_i915_gem_object *o,
- struct i915_address_space *vm,
- enum i915_ggtt_view_type view)
+unsigned long
+i915_gem_obj_ggtt_offset_view(struct drm_i915_gem_object *o,
+ const struct i915_ggtt_view *view)
{
+ struct i915_address_space *ggtt = i915_obj_to_ggtt(o);
struct i915_vma *vma;
list_for_each_entry(vma, &o->vma_list, vma_link)
- if (vma->vm == vm &&
- vma->ggtt_view.type == view &&
+ if (vma->vm == ggtt &&
+ i915_ggtt_view_equal(&vma->ggtt_view, view))
+ return vma->node.start;
+
+ WARN(1, "global vma for this object not found.\n");
+ return -1;
+}
+
+bool i915_gem_obj_bound(struct drm_i915_gem_object *o,
+ struct i915_address_space *vm)
+{
+ struct i915_vma *vma;
+
+ list_for_each_entry(vma, &o->vma_list, vma_link) {
+ if (i915_is_ggtt(vma->vm) &&
+ vma->ggtt_view.type != I915_GGTT_VIEW_NORMAL)
+ continue;
+ if (vma->vm == vm && drm_mm_node_allocated(&vma->node))
+ return true;
+ }
+
+ return false;
+}
+
+bool i915_gem_obj_ggtt_bound_view(struct drm_i915_gem_object *o,
+ const struct i915_ggtt_view *view)
+{
+ struct i915_address_space *ggtt = i915_obj_to_ggtt(o);
+ struct i915_vma *vma;
+
+ list_for_each_entry(vma, &o->vma_list, vma_link)
+ if (vma->vm == ggtt &&
+ i915_ggtt_view_equal(&vma->ggtt_view, view) &&
drm_mm_node_allocated(&vma->node))
return true;
@@ -5239,118 +5178,26 @@
BUG_ON(list_empty(&o->vma_list));
- list_for_each_entry(vma, &o->vma_list, vma_link)
+ list_for_each_entry(vma, &o->vma_list, vma_link) {
+ if (i915_is_ggtt(vma->vm) &&
+ vma->ggtt_view.type != I915_GGTT_VIEW_NORMAL)
+ continue;
if (vma->vm == vm)
return vma->node.size;
-
+ }
return 0;
}
-static unsigned long
-i915_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
+bool i915_gem_obj_is_pinned(struct drm_i915_gem_object *obj)
{
- struct drm_i915_private *dev_priv =
- container_of(shrinker, struct drm_i915_private, mm.shrinker);
- struct drm_device *dev = dev_priv->dev;
- unsigned long freed;
- bool unlock;
-
- if (!i915_gem_shrinker_lock(dev, &unlock))
- return SHRINK_STOP;
-
- freed = i915_gem_shrink(dev_priv,
- sc->nr_to_scan,
- I915_SHRINK_BOUND |
- I915_SHRINK_UNBOUND |
- I915_SHRINK_PURGEABLE);
- if (freed < sc->nr_to_scan)
- freed += i915_gem_shrink(dev_priv,
- sc->nr_to_scan - freed,
- I915_SHRINK_BOUND |
- I915_SHRINK_UNBOUND);
- if (unlock)
- mutex_unlock(&dev->struct_mutex);
-
- return freed;
-}
-
-static int
-i915_gem_shrinker_oom(struct notifier_block *nb, unsigned long event, void *ptr)
-{
- struct drm_i915_private *dev_priv =
- container_of(nb, struct drm_i915_private, mm.oom_notifier);
- struct drm_device *dev = dev_priv->dev;
- struct drm_i915_gem_object *obj;
- unsigned long timeout = msecs_to_jiffies(5000) + 1;
- unsigned long pinned, bound, unbound, freed_pages;
- bool was_interruptible;
- bool unlock;
-
- while (!i915_gem_shrinker_lock(dev, &unlock) && --timeout) {
- schedule_timeout_killable(1);
- if (fatal_signal_pending(current))
- return NOTIFY_DONE;
- }
- if (timeout == 0) {
- pr_err("Unable to purge GPU memory due lock contention.\n");
- return NOTIFY_DONE;
- }
-
- was_interruptible = dev_priv->mm.interruptible;
- dev_priv->mm.interruptible = false;
-
- freed_pages = i915_gem_shrink_all(dev_priv);
-
- dev_priv->mm.interruptible = was_interruptible;
-
- /* Because we may be allocating inside our own driver, we cannot
- * assert that there are no objects with pinned pages that are not
- * being pointed to by hardware.
- */
- unbound = bound = pinned = 0;
- list_for_each_entry(obj, &dev_priv->mm.unbound_list, global_list) {
- if (!obj->base.filp) /* not backed by a freeable object */
- continue;
-
- if (obj->pages_pin_count)
- pinned += obj->base.size;
- else
- unbound += obj->base.size;
- }
- list_for_each_entry(obj, &dev_priv->mm.bound_list, global_list) {
- if (!obj->base.filp)
- continue;
-
- if (obj->pages_pin_count)
- pinned += obj->base.size;
- else
- bound += obj->base.size;
- }
-
- if (unlock)
- mutex_unlock(&dev->struct_mutex);
-
- if (freed_pages || unbound || bound)
- pr_info("Purging GPU memory, %lu bytes freed, %lu bytes still pinned.\n",
- freed_pages << PAGE_SHIFT, pinned);
- if (unbound || bound)
- pr_err("%lu and %lu bytes still available in the "
- "bound and unbound GPU page lists.\n",
- bound, unbound);
-
- *(unsigned long *)ptr += freed_pages;
- return NOTIFY_DONE;
-}
-
-struct i915_vma *i915_gem_obj_to_ggtt(struct drm_i915_gem_object *obj)
-{
- struct i915_address_space *ggtt = i915_obj_to_ggtt(obj);
struct i915_vma *vma;
-
- list_for_each_entry(vma, &obj->vma_list, vma_link)
- if (vma->vm == ggtt &&
- vma->ggtt_view.type == I915_GGTT_VIEW_NORMAL)
- return vma;
-
- return NULL;
+ list_for_each_entry(vma, &obj->vma_list, vma_link) {
+ if (i915_is_ggtt(vma->vm) &&
+ vma->ggtt_view.type != I915_GGTT_VIEW_NORMAL)
+ continue;
+ if (vma->pin_count > 0)
+ return true;
+ }
+ return false;
}
+
diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index 8603bf4..f3e84c4 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -296,11 +296,15 @@
struct drm_i915_private *dev_priv = dev->dev_private;
int i;
- /* In execlists mode we will unreference the context when the execlist
- * queue is cleared and the requests destroyed.
- */
- if (i915.enable_execlists)
+ if (i915.enable_execlists) {
+ struct intel_context *ctx;
+
+ list_for_each_entry(ctx, &dev_priv->context_list, link) {
+ intel_lr_context_reset(dev, ctx);
+ }
+
return;
+ }
for (i = 0; i < I915_NUM_RINGS; i++) {
struct intel_engine_cs *ring = &dev_priv->ring[i];
@@ -565,6 +569,66 @@
return ret;
}
+static inline bool should_skip_switch(struct intel_engine_cs *ring,
+ struct intel_context *from,
+ struct intel_context *to)
+{
+ struct drm_i915_private *dev_priv = ring->dev->dev_private;
+
+ if (to->remap_slice)
+ return false;
+
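+ /* Re-using the same context can skip the switch as long as none of
+ * its page directories are dirty on this ring.
+ */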
+ if (to->ppgtt) {
+ if (from == to && !test_bit(ring->id,
+ &to->ppgtt->pd_dirty_rings))
+ return true;
+ } else if (dev_priv->mm.aliasing_ppgtt) {
+ if (from == to && !test_bit(ring->id,
+ &dev_priv->mm.aliasing_ppgtt->pd_dirty_rings))
+ return true;
+ }
+
+ return false;
+}
+
+static bool
+needs_pd_load_pre(struct intel_engine_cs *ring, struct intel_context *to)
+{
+ struct drm_i915_private *dev_priv = ring->dev->dev_private;
+
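+ /* Pre-GEN8 hardware and non-render rings want the page directory
+ * loaded before the context switch.
+ */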
+ if (!to->ppgtt)
+ return false;
+
+ if (INTEL_INFO(ring->dev)->gen < 8)
+ return true;
+
+ if (ring != &dev_priv->ring[RCS])
+ return true;
+
+ return false;
+}
+
+static bool
+needs_pd_load_post(struct intel_engine_cs *ring, struct intel_context *to,
+ u32 hw_flags)
+{
+ struct drm_i915_private *dev_priv = ring->dev->dev_private;
+
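+ /* On GEN8 RCS a restore-inhibited MI_SET_CONTEXT leaves the PDPs
+ * untouched, so the page directories must be loaded afterwards.
+ */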
+ if (!to->ppgtt)
+ return false;
+
+ if (!IS_GEN8(ring->dev))
+ return false;
+
+ if (ring != &dev_priv->ring[RCS])
+ return false;
+
+ if (hw_flags & MI_RESTORE_INHIBIT)
+ return true;
+
+ return false;
+}
+
static int do_switch(struct intel_engine_cs *ring,
struct intel_context *to)
{
@@ -580,7 +644,7 @@
BUG_ON(!i915_gem_obj_is_pinned(from->legacy_hw_ctx.rcs_state));
}
- if (from == to && !to->remap_slice)
+ if (should_skip_switch(ring, from, to))
return 0;
/* Trying to pin first makes error handling easier. */
@@ -598,11 +662,18 @@
*/
from = ring->last_context;
- if (to->ppgtt) {
+ if (needs_pd_load_pre(ring, to)) {
+ /* Older GENs and non-render rings still want the load first,
+ * "PP_DCLV followed by PP_DIR_BASE register through Load
+ * Register Immediate commands in Ring Buffer before submitting
+ * a context." */
trace_switch_mm(ring, to);
ret = to->ppgtt->switch_mm(to->ppgtt, ring);
if (ret)
goto unpin_out;
+
+ /* Doing a PD load always reloads the page dirs */
+ clear_bit(ring->id, &to->ppgtt->pd_dirty_rings);
}
if (ring != &dev_priv->ring[RCS]) {
@@ -633,13 +704,41 @@
goto unpin_out;
}
- if (!to->legacy_hw_ctx.initialized || i915_gem_context_is_default(to))
+ if (!to->legacy_hw_ctx.initialized) {
hw_flags |= MI_RESTORE_INHIBIT;
+ /* NB: If we inhibit the restore, the context is not allowed to
+ * die because future work may end up depending on valid address
+ * space. This means we must enforce that a page table load
+ * occurs whenever the restore is inhibited. */
+ } else if (to->ppgtt &&
+ test_and_clear_bit(ring->id, &to->ppgtt->pd_dirty_rings))
+ hw_flags |= MI_FORCE_RESTORE;
+
+ /* We should never emit switch_mm more than once */
+ WARN_ON(needs_pd_load_pre(ring, to) &&
+ needs_pd_load_post(ring, to, hw_flags));
ret = mi_set_context(ring, to, hw_flags);
if (ret)
goto unpin_out;
+ /* GEN8 does *not* require an explicit reload if the PDPs have been
+ * set up, and we do not wish to move them.
+ */
+ if (needs_pd_load_post(ring, to, hw_flags)) {
+ trace_switch_mm(ring, to);
+ ret = to->ppgtt->switch_mm(to->ppgtt, ring);
+ /* The hardware context switch is emitted, but we haven't
+ * actually changed the state - so it's probably safe to bail
+ * here. Still, let the user know something dangerous has
+ * happened.
+ */
+ if (ret) {
+ DRM_ERROR("Failed to change address space on context switch\n");
+ goto unpin_out;
+ }
+ }
+
for (i = 0; i < MAX_L3_SLICES; i++) {
if (!(to->remap_slice & (1<<i)))
continue;
@@ -677,7 +776,7 @@
i915_gem_context_unreference(from);
}
- uninitialized = !to->legacy_hw_ctx.initialized && from == NULL;
+ uninitialized = !to->legacy_hw_ctx.initialized;
to->legacy_hw_ctx.initialized = true;
done:
diff --git a/drivers/gpu/drm/i915/i915_gem_evict.c b/drivers/gpu/drm/i915/i915_gem_evict.c
index e3a49d9..d09e35e 100644
--- a/drivers/gpu/drm/i915/i915_gem_evict.c
+++ b/drivers/gpu/drm/i915/i915_gem_evict.c
@@ -63,6 +63,10 @@
*
* This function is used by the object/vma binding code.
*
+ * Since this function is only used to free up virtual address space, it only
+ * ignores pinned vmas, and not objects whose backing storage itself is
+ * pinned. Hence obj->pages_pin_count does not protect against eviction.
+ *
* To clarify: This is for freeing up virtual address space, not for freeing
* memory in e.g. the shrinker.
*/
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index 38a7425..a3190e79 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -251,7 +251,6 @@
{
return (HAS_LLC(obj->base.dev) ||
obj->base.write_domain == I915_GEM_DOMAIN_CPU ||
- !obj->map_and_fenceable ||
obj->cache_level != I915_CACHE_NONE);
}
@@ -337,6 +336,51 @@
return 0;
}
+static void
+clflush_write32(void *addr, uint32_t value)
+{
+ /* This is not a fast path, so KISS. */
+ drm_clflush_virt_range(addr, sizeof(uint32_t));
+ *(uint32_t *)addr = value;
+ drm_clflush_virt_range(addr, sizeof(uint32_t));
+}
+
+static int
+relocate_entry_clflush(struct drm_i915_gem_object *obj,
+ struct drm_i915_gem_relocation_entry *reloc,
+ uint64_t target_offset)
+{
+ struct drm_device *dev = obj->base.dev;
+ uint32_t page_offset = offset_in_page(reloc->offset);
+ uint64_t delta = (int)reloc->delta + target_offset;
+ char *vaddr;
+ int ret;
+
+ ret = i915_gem_object_set_to_gtt_domain(obj, true);
+ if (ret)
+ return ret;
+
+ vaddr = kmap_atomic(i915_gem_object_get_page(obj,
+ reloc->offset >> PAGE_SHIFT));
+ clflush_write32(vaddr + page_offset, lower_32_bits(delta));
+
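+ /* Gen8+ relocations are 64 bits wide; the upper dword may land on
+ * the following page and then needs a fresh mapping.
+ */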
+ if (INTEL_INFO(dev)->gen >= 8) {
+ page_offset = offset_in_page(page_offset + sizeof(uint32_t));
+
+ if (page_offset == 0) {
+ kunmap_atomic(vaddr);
+ vaddr = kmap_atomic(i915_gem_object_get_page(obj,
+ (reloc->offset + sizeof(uint32_t)) >> PAGE_SHIFT));
+ }
+
+ clflush_write32(vaddr + page_offset, upper_32_bits(delta));
+ }
+
+ kunmap_atomic(vaddr);
+
+ return 0;
+}
+
static int
i915_gem_execbuffer_relocate_entry(struct drm_i915_gem_object *obj,
struct eb_vmas *eb,
@@ -426,8 +470,14 @@
if (use_cpu_reloc(obj))
ret = relocate_entry_cpu(obj, reloc, target_offset);
- else
+ else if (obj->map_and_fenceable)
ret = relocate_entry_gtt(obj, reloc, target_offset);
+ else if (cpu_has_clflush)
+ ret = relocate_entry_clflush(obj, reloc, target_offset);
+ else {
+ WARN_ONCE(1, "Impossible case in relocation handling\n");
+ ret = -ENODEV;
+ }
if (ret)
return ret;
@@ -525,6 +575,12 @@
return ret;
}
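+/* True when the object needs the mappable GGTT only so relocations can be
+ * written, i.e. __EXEC_OBJECT_NEEDS_MAP without a fence requirement.
+ */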
+static bool only_mappable_for_reloc(unsigned int flags)
+{
+ return (flags & (EXEC_OBJECT_NEEDS_FENCE | __EXEC_OBJECT_NEEDS_MAP)) ==
+ __EXEC_OBJECT_NEEDS_MAP;
+}
+
static int
i915_gem_execbuffer_reserve_vma(struct i915_vma *vma,
struct intel_engine_cs *ring,
@@ -536,14 +592,21 @@
int ret;
flags = 0;
- if (entry->flags & __EXEC_OBJECT_NEEDS_MAP)
- flags |= PIN_GLOBAL | PIN_MAPPABLE;
- if (entry->flags & EXEC_OBJECT_NEEDS_GTT)
- flags |= PIN_GLOBAL;
- if (entry->flags & __EXEC_OBJECT_NEEDS_BIAS)
- flags |= BATCH_OFFSET_BIAS | PIN_OFFSET_BIAS;
+ if (!drm_mm_node_allocated(&vma->node)) {
+ if (entry->flags & __EXEC_OBJECT_NEEDS_MAP)
+ flags |= PIN_GLOBAL | PIN_MAPPABLE;
+ if (entry->flags & EXEC_OBJECT_NEEDS_GTT)
+ flags |= PIN_GLOBAL;
+ if (entry->flags & __EXEC_OBJECT_NEEDS_BIAS)
+ flags |= BATCH_OFFSET_BIAS | PIN_OFFSET_BIAS;
+ }
ret = i915_gem_object_pin(obj, vma->vm, entry->alignment, flags);
+ if ((ret == -ENOSPC || ret == -E2BIG) &&
+ only_mappable_for_reloc(entry->flags))
+ ret = i915_gem_object_pin(obj, vma->vm,
+ entry->alignment,
+ flags & ~(PIN_GLOBAL | PIN_MAPPABLE));
if (ret)
return ret;
@@ -605,13 +668,14 @@
vma->node.start & (entry->alignment - 1))
return true;
- if (entry->flags & __EXEC_OBJECT_NEEDS_MAP && !obj->map_and_fenceable)
- return true;
-
if (entry->flags & __EXEC_OBJECT_NEEDS_BIAS &&
vma->node.start < BATCH_OFFSET_BIAS)
return true;
+ /* avoid costly ping-pong once a batch bo ended up non-mappable */
+ if (entry->flags & __EXEC_OBJECT_NEEDS_MAP && !obj->map_and_fenceable)
+ return !only_mappable_for_reloc(entry->flags);
+
return false;
}
@@ -971,7 +1035,7 @@
obj->dirty = 1;
i915_gem_request_assign(&obj->last_write_req, req);
- intel_fb_obj_invalidate(obj, ring);
+ intel_fb_obj_invalidate(obj, ring, ORIGIN_CS);
/* update for the implicit flush after a batch */
obj->base.write_domain &= ~I915_GEM_GPU_DOMAINS;
@@ -1076,16 +1140,15 @@
struct drm_i915_gem_object *batch_obj,
u32 batch_start_offset,
u32 batch_len,
- bool is_master,
- u32 *flags)
+ bool is_master)
{
struct drm_i915_private *dev_priv = to_i915(batch_obj->base.dev);
struct drm_i915_gem_object *shadow_batch_obj;
- bool need_reloc = false;
+ struct i915_vma *vma;
int ret;
shadow_batch_obj = i915_gem_batch_pool_get(&dev_priv->mm.batch_pool,
- batch_obj->base.size);
+ PAGE_ALIGN(batch_len));
if (IS_ERR(shadow_batch_obj))
return shadow_batch_obj;
@@ -1095,40 +1158,30 @@
batch_start_offset,
batch_len,
is_master);
- if (ret) {
- if (ret == -EACCES)
- return batch_obj;
- } else {
- struct i915_vma *vma;
+ if (ret)
+ goto err;
- memset(shadow_exec_entry, 0, sizeof(*shadow_exec_entry));
+ ret = i915_gem_obj_ggtt_pin(shadow_batch_obj, 0, 0);
+ if (ret)
+ goto err;
- vma = i915_gem_obj_to_ggtt(shadow_batch_obj);
- vma->exec_entry = shadow_exec_entry;
- vma->exec_entry->flags = __EXEC_OBJECT_PURGEABLE;
- drm_gem_object_reference(&shadow_batch_obj->base);
- i915_gem_execbuffer_reserve_vma(vma, ring, &need_reloc);
- list_add_tail(&vma->exec_list, &eb->vmas);
+ memset(shadow_exec_entry, 0, sizeof(*shadow_exec_entry));
- shadow_batch_obj->base.pending_read_domains =
- batch_obj->base.pending_read_domains;
+ vma = i915_gem_obj_to_ggtt(shadow_batch_obj);
+ vma->exec_entry = shadow_exec_entry;
+ vma->exec_entry->flags = __EXEC_OBJECT_PURGEABLE | __EXEC_OBJECT_HAS_PIN;
+ drm_gem_object_reference(&shadow_batch_obj->base);
+ list_add_tail(&vma->exec_list, &eb->vmas);
- /*
- * Set the DISPATCH_SECURE bit to remove the NON_SECURE
- * bit from MI_BATCH_BUFFER_START commands issued in the
- * dispatch_execbuffer implementations. We specifically
- * don't want that set when the command parser is
- * enabled.
- *
- * FIXME: with aliasing ppgtt, buffers that should only
- * be in ggtt still end up in the aliasing ppgtt. remove
- * this check when that is fixed.
- */
- if (USES_FULL_PPGTT(dev))
- *flags |= I915_DISPATCH_SECURE;
- }
+ shadow_batch_obj->base.pending_read_domains = I915_GEM_DOMAIN_COMMAND;
- return ret ? ERR_PTR(ret) : shadow_batch_obj;
+ return shadow_batch_obj;
+
+err:
+ if (ret == -EACCES) /* unhandled chained batch */
+ return batch_obj;
+ else
+ return ERR_PTR(ret);
}
int
@@ -1138,7 +1191,7 @@
struct drm_i915_gem_execbuffer2 *args,
struct list_head *vmas,
struct drm_i915_gem_object *batch_obj,
- u64 exec_start, u32 flags)
+ u64 exec_start, u32 dispatch_flags)
{
struct drm_clip_rect *cliprects = NULL;
struct drm_i915_private *dev_priv = dev->dev_private;
@@ -1198,6 +1251,13 @@
if (ret)
goto error;
+ if (ctx->ppgtt)
+ WARN(ctx->ppgtt->pd_dirty_rings & (1<<ring->id),
+ "%s didn't clear reload\n", ring->name);
+ else if (dev_priv->mm.aliasing_ppgtt)
+ WARN(dev_priv->mm.aliasing_ppgtt->pd_dirty_rings &
+ (1<<ring->id), "%s didn't clear reload\n", ring->name);
+
instp_mode = args->flags & I915_EXEC_CONSTANTS_MASK;
instp_mask = I915_EXEC_CONSTANTS_MASK;
switch (instp_mode) {
@@ -1266,19 +1326,19 @@
ret = ring->dispatch_execbuffer(ring,
exec_start, exec_len,
- flags);
+ dispatch_flags);
if (ret)
goto error;
}
} else {
ret = ring->dispatch_execbuffer(ring,
exec_start, exec_len,
- flags);
+ dispatch_flags);
if (ret)
return ret;
}
- trace_i915_gem_ring_dispatch(intel_ring_get_request(ring), flags);
+ trace_i915_gem_ring_dispatch(intel_ring_get_request(ring), dispatch_flags);
i915_gem_execbuffer_move_to_active(vmas, ring);
i915_gem_execbuffer_retire_commands(dev, file, ring, batch_obj);
@@ -1353,7 +1413,7 @@
struct i915_address_space *vm;
const u32 ctx_id = i915_execbuffer2_get_context_id(*args);
u64 exec_start = args->batch_start_offset;
- u32 flags;
+ u32 dispatch_flags;
int ret;
bool need_relocs;
@@ -1364,15 +1424,15 @@
if (ret)
return ret;
- flags = 0;
+ dispatch_flags = 0;
if (args->flags & I915_EXEC_SECURE) {
if (!file->is_master || !capable(CAP_SYS_ADMIN))
return -EPERM;
- flags |= I915_DISPATCH_SECURE;
+ dispatch_flags |= I915_DISPATCH_SECURE;
}
if (args->flags & I915_EXEC_IS_PINNED)
- flags |= I915_DISPATCH_PINNED;
+ dispatch_flags |= I915_DISPATCH_PINNED;
if ((args->flags & I915_EXEC_RING_MASK) > LAST_USER_RING) {
DRM_DEBUG("execbuf with unknown ring: %d\n",
@@ -1494,12 +1554,27 @@
batch_obj,
args->batch_start_offset,
args->batch_len,
- file->is_master,
- &flags);
+ file->is_master);
if (IS_ERR(batch_obj)) {
ret = PTR_ERR(batch_obj);
goto err;
}
+
+ /*
+ * Set the DISPATCH_SECURE bit to remove the NON_SECURE
+ * bit from MI_BATCH_BUFFER_START commands issued in the
+ * dispatch_execbuffer implementations. We specifically
+ * don't want that set when the command parser is
+ * enabled.
+ *
+ * FIXME: with aliasing ppgtt, buffers that should only
+ * be in ggtt still end up in the aliasing ppgtt. remove
+ * this check when that is fixed.
+ */
+ if (USES_FULL_PPGTT(dev))
+ dispatch_flags |= I915_DISPATCH_SECURE;
+
+ exec_start = 0;
}
batch_obj->base.pending_read_domains |= I915_GEM_DOMAIN_COMMAND;
@@ -1507,14 +1582,14 @@
/* snb/ivb/vlv conflate the "batch in ppgtt" bit with the "non-secure
* batch" bit. Hence we need to pin secure batches into the global gtt.
* hsw should have this fixed, but bdw mucks it up again. */
- if (flags & I915_DISPATCH_SECURE) {
+ if (dispatch_flags & I915_DISPATCH_SECURE) {
/*
* So on first glance it looks freaky that we pin the batch here
* outside of the reservation loop. But:
* - The batch is already pinned into the relevant ppgtt, so we
* already have the backing storage fully allocated.
* - No other BO uses the global gtt (well contexts, but meh),
- * so we don't really have issues with mutliple objects not
+ * so we don't really have issues with multiple objects not
* fitting due to fragmentation.
* So this is actually safe.
*/
@@ -1527,7 +1602,8 @@
exec_start += i915_gem_obj_offset(batch_obj, vm);
ret = dev_priv->gt.do_execbuf(dev, file, ring, ctx, args,
- &eb->vmas, batch_obj, exec_start, flags);
+ &eb->vmas, batch_obj, exec_start,
+ dispatch_flags);
/*
* FIXME: We crucially rely upon the active tracking for the (ppgtt)
@@ -1535,7 +1611,7 @@
* needs to be adjusted to also track the ggtt batch vma properly as
* active.
*/
- if (flags & I915_DISPATCH_SECURE)
+ if (dispatch_flags & I915_DISPATCH_SECURE)
i915_gem_object_ggtt_unpin(batch_obj);
err:
/* the request owns the ref now */
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index dccdc8a..0239fbf 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -27,6 +27,7 @@
#include <drm/drmP.h>
#include <drm/i915_drm.h>
#include "i915_drv.h"
+#include "i915_vgpu.h"
#include "i915_trace.h"
#include "intel_drv.h"
@@ -66,8 +67,9 @@
* i915_ggtt_view_type and struct i915_ggtt_view.
*
* A new flavour of core GEM functions which work with GGTT bound objects were
- * added with the _view suffix. They take the struct i915_ggtt_view parameter
- * encapsulating all metadata required to implement a view.
+ * added with the _ggtt_ infix, and sometimes with _view postfix to avoid
+ * renaming in large amounts of code. They take the struct i915_ggtt_view
+ * parameter encapsulating all metadata required to implement a view.
*
* As a helper for callers which are only interested in the normal view,
* globally const i915_ggtt_view_normal singleton instance exists. All old core
@@ -91,6 +93,9 @@
*/
const struct i915_ggtt_view i915_ggtt_view_normal;
+const struct i915_ggtt_view i915_ggtt_view_rotated = {
+ .type = I915_GGTT_VIEW_ROTATED
+};
static void bdw_setup_private_ppat(struct drm_i915_private *dev_priv);
static void chv_setup_private_ppat(struct drm_i915_private *dev_priv);
@@ -103,6 +108,9 @@
has_aliasing_ppgtt = INTEL_INFO(dev)->gen >= 6;
has_full_ppgtt = INTEL_INFO(dev)->gen >= 7;
+ if (intel_vgpu_active(dev))
+ has_full_ppgtt = false; /* emulation is too hard */
+
/*
* We don't allow disabling PPGTT for gen9+ as it's a requirement for
* execlists, the sole mechanism available to submit work.
@@ -138,17 +146,16 @@
return has_aliasing_ppgtt ? 1 : 0;
}
-
static void ppgtt_bind_vma(struct i915_vma *vma,
enum i915_cache_level cache_level,
u32 flags);
static void ppgtt_unbind_vma(struct i915_vma *vma);
-static inline gen8_gtt_pte_t gen8_pte_encode(dma_addr_t addr,
- enum i915_cache_level level,
- bool valid)
+static inline gen8_pte_t gen8_pte_encode(dma_addr_t addr,
+ enum i915_cache_level level,
+ bool valid)
{
- gen8_gtt_pte_t pte = valid ? _PAGE_PRESENT | _PAGE_RW : 0;
+ gen8_pte_t pte = valid ? _PAGE_PRESENT | _PAGE_RW : 0;
pte |= addr;
switch (level) {
@@ -166,11 +173,11 @@
return pte;
}
-static inline gen8_ppgtt_pde_t gen8_pde_encode(struct drm_device *dev,
- dma_addr_t addr,
- enum i915_cache_level level)
+static inline gen8_pde_t gen8_pde_encode(struct drm_device *dev,
+ dma_addr_t addr,
+ enum i915_cache_level level)
{
- gen8_ppgtt_pde_t pde = _PAGE_PRESENT | _PAGE_RW;
+ gen8_pde_t pde = _PAGE_PRESENT | _PAGE_RW;
pde |= addr;
if (level != I915_CACHE_NONE)
pde |= PPAT_CACHED_PDE_INDEX;
@@ -179,11 +186,11 @@
return pde;
}
-static gen6_gtt_pte_t snb_pte_encode(dma_addr_t addr,
- enum i915_cache_level level,
- bool valid, u32 unused)
+static gen6_pte_t snb_pte_encode(dma_addr_t addr,
+ enum i915_cache_level level,
+ bool valid, u32 unused)
{
- gen6_gtt_pte_t pte = valid ? GEN6_PTE_VALID : 0;
+ gen6_pte_t pte = valid ? GEN6_PTE_VALID : 0;
pte |= GEN6_PTE_ADDR_ENCODE(addr);
switch (level) {
@@ -201,11 +208,11 @@
return pte;
}
-static gen6_gtt_pte_t ivb_pte_encode(dma_addr_t addr,
- enum i915_cache_level level,
- bool valid, u32 unused)
+static gen6_pte_t ivb_pte_encode(dma_addr_t addr,
+ enum i915_cache_level level,
+ bool valid, u32 unused)
{
- gen6_gtt_pte_t pte = valid ? GEN6_PTE_VALID : 0;
+ gen6_pte_t pte = valid ? GEN6_PTE_VALID : 0;
pte |= GEN6_PTE_ADDR_ENCODE(addr);
switch (level) {
@@ -225,11 +232,11 @@
return pte;
}
-static gen6_gtt_pte_t byt_pte_encode(dma_addr_t addr,
- enum i915_cache_level level,
- bool valid, u32 flags)
+static gen6_pte_t byt_pte_encode(dma_addr_t addr,
+ enum i915_cache_level level,
+ bool valid, u32 flags)
{
- gen6_gtt_pte_t pte = valid ? GEN6_PTE_VALID : 0;
+ gen6_pte_t pte = valid ? GEN6_PTE_VALID : 0;
pte |= GEN6_PTE_ADDR_ENCODE(addr);
if (!(flags & PTE_READ_ONLY))
@@ -241,11 +248,11 @@
return pte;
}
-static gen6_gtt_pte_t hsw_pte_encode(dma_addr_t addr,
- enum i915_cache_level level,
- bool valid, u32 unused)
+static gen6_pte_t hsw_pte_encode(dma_addr_t addr,
+ enum i915_cache_level level,
+ bool valid, u32 unused)
{
- gen6_gtt_pte_t pte = valid ? GEN6_PTE_VALID : 0;
+ gen6_pte_t pte = valid ? GEN6_PTE_VALID : 0;
pte |= HSW_PTE_ADDR_ENCODE(addr);
if (level != I915_CACHE_NONE)
@@ -254,11 +261,11 @@
return pte;
}
-static gen6_gtt_pte_t iris_pte_encode(dma_addr_t addr,
- enum i915_cache_level level,
- bool valid, u32 unused)
+static gen6_pte_t iris_pte_encode(dma_addr_t addr,
+ enum i915_cache_level level,
+ bool valid, u32 unused)
{
- gen6_gtt_pte_t pte = valid ? GEN6_PTE_VALID : 0;
+ gen6_pte_t pte = valid ? GEN6_PTE_VALID : 0;
pte |= HSW_PTE_ADDR_ENCODE(addr);
switch (level) {
@@ -275,6 +282,162 @@
return pte;
}
+#define i915_dma_unmap_single(px, dev) \
+ __i915_dma_unmap_single((px)->daddr, dev)
+
+static inline void __i915_dma_unmap_single(dma_addr_t daddr,
+ struct drm_device *dev)
+{
+ struct device *device = &dev->pdev->dev;
+
+ dma_unmap_page(device, daddr, 4096, PCI_DMA_BIDIRECTIONAL);
+}
+
+/**
+ * i915_dma_map_single() - Create a dma mapping for a page table/dir/etc.
+ * @px: Page table/dir/etc to get a DMA map for
+ * @dev: drm device
+ *
+ * Page table allocations are unified across all gens. They always require a
+ * single 4k allocation, as well as a DMA mapping. If we keep the structs
+ * symmetric here, the simple macro covers us for every page table type.
+ *
+ * Return: 0 if success.
+ */
+#define i915_dma_map_single(px, dev) \
+ i915_dma_map_page_single((px)->page, (dev), &(px)->daddr)
+
+static inline int i915_dma_map_page_single(struct page *page,
+ struct drm_device *dev,
+ dma_addr_t *daddr)
+{
+ struct device *device = &dev->pdev->dev;
+
+ *daddr = dma_map_page(device, page, 0, 4096, PCI_DMA_BIDIRECTIONAL);
+ if (dma_mapping_error(device, *daddr))
+ return -ENOMEM;
+
+ return 0;
+}
+
+static void unmap_and_free_pt(struct i915_page_table_entry *pt,
+ struct drm_device *dev)
+{
+ if (WARN_ON(!pt->page))
+ return;
+
+ i915_dma_unmap_single(pt, dev);
+ __free_page(pt->page);
+ kfree(pt->used_ptes);
+ kfree(pt);
+}
+
+static struct i915_page_table_entry *alloc_pt_single(struct drm_device *dev)
+{
+ struct i915_page_table_entry *pt;
+ const size_t count = INTEL_INFO(dev)->gen >= 8 ?
+ GEN8_PTES : GEN6_PTES;
+ int ret = -ENOMEM;
+
+ pt = kzalloc(sizeof(*pt), GFP_KERNEL);
+ if (!pt)
+ return ERR_PTR(-ENOMEM);
+
+ pt->used_ptes = kcalloc(BITS_TO_LONGS(count), sizeof(*pt->used_ptes),
+ GFP_KERNEL);
+
+ if (!pt->used_ptes)
+ goto fail_bitmap;
+
+ pt->page = alloc_page(GFP_KERNEL);
+ if (!pt->page)
+ goto fail_page;
+
+ ret = i915_dma_map_single(pt, dev);
+ if (ret)
+ goto fail_dma;
+
+ return pt;
+
+fail_dma:
+ __free_page(pt->page);
+fail_page:
+ kfree(pt->used_ptes);
+fail_bitmap:
+ kfree(pt);
+
+ return ERR_PTR(ret);
+}
+
+/**
+ * alloc_pt_range() - Allocate multiple page tables
+ * @pd: The page directory which will have at least @count entries
+ * available to point to the allocated page tables.
+ * @pde: First page directory entry for which we are allocating.
+ * @count: Number of pages to allocate.
+ * @dev: DRM device.
+ *
+ * Allocates multiple page table pages and sets the appropriate entries in the
+ * page table structure within the page directory. Function cleans up after
+ * itself on any failures.
+ *
+ * Return: 0 if allocation succeeded.
+ */
+static int alloc_pt_range(struct i915_page_directory_entry *pd, uint16_t pde, size_t count,
+ struct drm_device *dev)
+{
+ int i, ret;
+
+ /* 512 is the maximum number of page tables per page_directory on any platform. */
+ if (WARN_ON(pde + count > I915_PDES))
+ return -EINVAL;
+
+ for (i = pde; i < pde + count; i++) {
+ struct i915_page_table_entry *pt = alloc_pt_single(dev);
+
+ if (IS_ERR(pt)) {
+ ret = PTR_ERR(pt);
+ goto err_out;
+ }
+ WARN(pd->page_table[i],
+ "Leaking page directory entry %d (%p)\n",
+ i, pd->page_table[i]);
+ pd->page_table[i] = pt;
+ }
+
+ return 0;
+
+err_out:
+ while (i-- > pde)
+ unmap_and_free_pt(pd->page_table[i], dev);
+ return ret;
+}
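
The unwind loop above makes the multi-table allocation behave atomically: either every slot in [pde, pde + count) ends up populated, or the directory is left exactly as the caller handed it in. A minimal userspace sketch of the same pattern, with malloc() standing in for alloc_pt_single() (names and sizes here are purely illustrative, not driver code):

	#include <stdio.h>
	#include <stdlib.h>

	/* Allocate slots [first, first + count) of tbl, undoing everything this
	 * call did if any single allocation fails. */
	static int alloc_range(void **tbl, size_t first, size_t count, size_t limit)
	{
		size_t i;

		if (first + count > limit)
			return -1;

		for (i = first; i < first + count; i++) {
			tbl[i] = malloc(64);
			if (!tbl[i])
				goto err_out;
		}
		return 0;

	err_out:
		while (i-- > first) {	/* unwind only what this call allocated */
			free(tbl[i]);
			tbl[i] = NULL;
		}
		return -1;
	}

	int main(void)
	{
		void *tbl[8] = { NULL };

		printf("alloc_range: %d\n", alloc_range(tbl, 2, 4, 8));
		return 0;
	}
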
+
+static void unmap_and_free_pd(struct i915_page_directory_entry *pd)
+{
+ if (pd->page) {
+ __free_page(pd->page);
+ kfree(pd);
+ }
+}
+
+static struct i915_page_directory_entry *alloc_pd_single(void)
+{
+ struct i915_page_directory_entry *pd;
+
+ pd = kzalloc(sizeof(*pd), GFP_KERNEL);
+ if (!pd)
+ return ERR_PTR(-ENOMEM);
+
+ pd->page = alloc_page(GFP_KERNEL | __GFP_ZERO);
+ if (!pd->page) {
+ kfree(pd);
+ return ERR_PTR(-ENOMEM);
+ }
+
+ return pd;
+}
+
/* Broadwell Page Directory Pointer Descriptors */
static int gen8_write_pdp(struct intel_engine_cs *ring, unsigned entry,
uint64_t val)
@@ -304,10 +467,10 @@
int i, ret;
/* bit of a hack to find the actual last used pd */
- int used_pd = ppgtt->num_pd_entries / GEN8_PDES_PER_PAGE;
+ int used_pd = ppgtt->num_pd_entries / I915_PDES;
for (i = used_pd - 1; i >= 0; i--) {
- dma_addr_t addr = ppgtt->pd_dma_addr[i];
+ dma_addr_t addr = ppgtt->pdp.page_directory[i]->daddr;
ret = gen8_write_pdp(ring, i, addr);
if (ret)
return ret;
@@ -323,7 +486,7 @@
{
struct i915_hw_ppgtt *ppgtt =
container_of(vm, struct i915_hw_ppgtt, base);
- gen8_gtt_pte_t *pt_vaddr, scratch_pte;
+ gen8_pte_t *pt_vaddr, scratch_pte;
unsigned pdpe = start >> GEN8_PDPE_SHIFT & GEN8_PDPE_MASK;
unsigned pde = start >> GEN8_PDE_SHIFT & GEN8_PDE_MASK;
unsigned pte = start >> GEN8_PTE_SHIFT & GEN8_PTE_MASK;
@@ -334,11 +497,28 @@
I915_CACHE_LLC, use_scratch);
while (num_entries) {
- struct page *page_table = ppgtt->gen8_pt_pages[pdpe][pde];
+ struct i915_page_directory_entry *pd;
+ struct i915_page_table_entry *pt;
+ struct page *page_table;
+
+ if (WARN_ON(!ppgtt->pdp.page_directory[pdpe]))
+ continue;
+
+ pd = ppgtt->pdp.page_directory[pdpe];
+
+ if (WARN_ON(!pd->page_table[pde]))
+ continue;
+
+ pt = pd->page_table[pde];
+
+ if (WARN_ON(!pt->page))
+ continue;
+
+ page_table = pt->page;
last_pte = pte + num_entries;
- if (last_pte > GEN8_PTES_PER_PAGE)
- last_pte = GEN8_PTES_PER_PAGE;
+ if (last_pte > GEN8_PTES)
+ last_pte = GEN8_PTES;
pt_vaddr = kmap_atomic(page_table);
@@ -352,7 +532,7 @@
kunmap_atomic(pt_vaddr);
pte = 0;
- if (++pde == GEN8_PDES_PER_PAGE) {
+ if (++pde == I915_PDES) {
pdpe++;
pde = 0;
}
@@ -366,7 +546,7 @@
{
struct i915_hw_ppgtt *ppgtt =
container_of(vm, struct i915_hw_ppgtt, base);
- gen8_gtt_pte_t *pt_vaddr;
+ gen8_pte_t *pt_vaddr;
unsigned pdpe = start >> GEN8_PDPE_SHIFT & GEN8_PDPE_MASK;
unsigned pde = start >> GEN8_PDE_SHIFT & GEN8_PDE_MASK;
unsigned pte = start >> GEN8_PTE_SHIFT & GEN8_PTE_MASK;
@@ -375,21 +555,26 @@
pt_vaddr = NULL;
for_each_sg_page(pages->sgl, &sg_iter, pages->nents, 0) {
- if (WARN_ON(pdpe >= GEN8_LEGACY_PDPS))
+ if (WARN_ON(pdpe >= GEN8_LEGACY_PDPES))
break;
- if (pt_vaddr == NULL)
- pt_vaddr = kmap_atomic(ppgtt->gen8_pt_pages[pdpe][pde]);
+ if (pt_vaddr == NULL) {
+ struct i915_page_directory_entry *pd = ppgtt->pdp.page_directory[pdpe];
+ struct i915_page_table_entry *pt = pd->page_table[pde];
+ struct page *page_table = pt->page;
+
+ pt_vaddr = kmap_atomic(page_table);
+ }
pt_vaddr[pte] =
gen8_pte_encode(sg_page_iter_dma_address(&sg_iter),
cache_level, true);
- if (++pte == GEN8_PTES_PER_PAGE) {
+ if (++pte == GEN8_PTES) {
if (!HAS_LLC(ppgtt->base.dev))
drm_clflush_virt_range(pt_vaddr, PAGE_SIZE);
kunmap_atomic(pt_vaddr);
pt_vaddr = NULL;
- if (++pde == GEN8_PDES_PER_PAGE) {
+ if (++pde == I915_PDES) {
pdpe++;
pde = 0;
}
@@ -403,29 +588,33 @@
}
}
-static void gen8_free_page_tables(struct page **pt_pages)
+static void gen8_free_page_tables(struct i915_page_directory_entry *pd, struct drm_device *dev)
{
int i;
- if (pt_pages == NULL)
+ if (!pd->page)
return;
- for (i = 0; i < GEN8_PDES_PER_PAGE; i++)
- if (pt_pages[i])
- __free_pages(pt_pages[i], 0);
+ for (i = 0; i < I915_PDES; i++) {
+ if (WARN_ON(!pd->page_table[i]))
+ continue;
+
+ unmap_and_free_pt(pd->page_table[i], dev);
+ pd->page_table[i] = NULL;
+ }
}
-static void gen8_ppgtt_free(const struct i915_hw_ppgtt *ppgtt)
+static void gen8_ppgtt_free(struct i915_hw_ppgtt *ppgtt)
{
int i;
for (i = 0; i < ppgtt->num_pd_pages; i++) {
- gen8_free_page_tables(ppgtt->gen8_pt_pages[i]);
- kfree(ppgtt->gen8_pt_pages[i]);
- kfree(ppgtt->gen8_pt_dma_addr[i]);
- }
+ if (WARN_ON(!ppgtt->pdp.page_directory[i]))
+ continue;
- __free_pages(ppgtt->pd_pages, get_order(ppgtt->num_pd_pages << PAGE_SHIFT));
+ gen8_free_page_tables(ppgtt->pdp.page_directory[i], ppgtt->base.dev);
+ unmap_and_free_pd(ppgtt->pdp.page_directory[i]);
+ }
}
static void gen8_ppgtt_unmap_pages(struct i915_hw_ppgtt *ppgtt)
@@ -436,14 +625,23 @@
for (i = 0; i < ppgtt->num_pd_pages; i++) {
/* TODO: In the future we'll support sparse mappings, so this
* will have to change. */
- if (!ppgtt->pd_dma_addr[i])
+ if (!ppgtt->pdp.page_directory[i]->daddr)
continue;
- pci_unmap_page(hwdev, ppgtt->pd_dma_addr[i], PAGE_SIZE,
+ pci_unmap_page(hwdev, ppgtt->pdp.page_directory[i]->daddr, PAGE_SIZE,
PCI_DMA_BIDIRECTIONAL);
- for (j = 0; j < GEN8_PDES_PER_PAGE; j++) {
- dma_addr_t addr = ppgtt->gen8_pt_dma_addr[i][j];
+ for (j = 0; j < I915_PDES; j++) {
+ struct i915_page_directory_entry *pd = ppgtt->pdp.page_directory[i];
+ struct i915_page_table_entry *pt;
+ dma_addr_t addr;
+
+ if (WARN_ON(!pd->page_table[j]))
+ continue;
+
+ pt = pd->page_table[j];
+ addr = pt->daddr;
+
if (addr)
pci_unmap_page(hwdev, addr, PAGE_SIZE,
PCI_DMA_BIDIRECTIONAL);
@@ -460,86 +658,47 @@
gen8_ppgtt_free(ppgtt);
}
-static struct page **__gen8_alloc_page_tables(void)
+static int gen8_ppgtt_allocate_page_tables(struct i915_hw_ppgtt *ppgtt)
{
- struct page **pt_pages;
- int i;
-
- pt_pages = kcalloc(GEN8_PDES_PER_PAGE, sizeof(struct page *), GFP_KERNEL);
- if (!pt_pages)
- return ERR_PTR(-ENOMEM);
-
- for (i = 0; i < GEN8_PDES_PER_PAGE; i++) {
- pt_pages[i] = alloc_page(GFP_KERNEL);
- if (!pt_pages[i])
- goto bail;
- }
-
- return pt_pages;
-
-bail:
- gen8_free_page_tables(pt_pages);
- kfree(pt_pages);
- return ERR_PTR(-ENOMEM);
-}
-
-static int gen8_ppgtt_allocate_page_tables(struct i915_hw_ppgtt *ppgtt,
- const int max_pdp)
-{
- struct page **pt_pages[GEN8_LEGACY_PDPS];
int i, ret;
- for (i = 0; i < max_pdp; i++) {
- pt_pages[i] = __gen8_alloc_page_tables();
- if (IS_ERR(pt_pages[i])) {
- ret = PTR_ERR(pt_pages[i]);
+ for (i = 0; i < ppgtt->num_pd_pages; i++) {
+ ret = alloc_pt_range(ppgtt->pdp.page_directory[i],
+ 0, I915_PDES, ppgtt->base.dev);
+ if (ret)
goto unwind_out;
- }
}
- /* NB: Avoid touching gen8_pt_pages until last to keep the allocation,
- * "atomic" - for cleanup purposes.
- */
- for (i = 0; i < max_pdp; i++)
- ppgtt->gen8_pt_pages[i] = pt_pages[i];
-
return 0;
unwind_out:
- while (i--) {
- gen8_free_page_tables(pt_pages[i]);
- kfree(pt_pages[i]);
- }
+ while (i--)
+ gen8_free_page_tables(ppgtt->pdp.page_directory[i], ppgtt->base.dev);
- return ret;
-}
-
-static int gen8_ppgtt_allocate_dma(struct i915_hw_ppgtt *ppgtt)
-{
- int i;
-
- for (i = 0; i < ppgtt->num_pd_pages; i++) {
- ppgtt->gen8_pt_dma_addr[i] = kcalloc(GEN8_PDES_PER_PAGE,
- sizeof(dma_addr_t),
- GFP_KERNEL);
- if (!ppgtt->gen8_pt_dma_addr[i])
- return -ENOMEM;
- }
-
- return 0;
+ return -ENOMEM;
}
static int gen8_ppgtt_allocate_page_directories(struct i915_hw_ppgtt *ppgtt,
const int max_pdp)
{
- ppgtt->pd_pages = alloc_pages(GFP_KERNEL, get_order(max_pdp << PAGE_SHIFT));
- if (!ppgtt->pd_pages)
- return -ENOMEM;
+ int i;
- ppgtt->num_pd_pages = 1 << get_order(max_pdp << PAGE_SHIFT);
- BUG_ON(ppgtt->num_pd_pages > GEN8_LEGACY_PDPS);
+ for (i = 0; i < max_pdp; i++) {
+ ppgtt->pdp.page_directory[i] = alloc_pd_single();
+ if (IS_ERR(ppgtt->pdp.page_directory[i]))
+ goto unwind_out;
+ }
+
+ ppgtt->num_pd_pages = max_pdp;
+ BUG_ON(ppgtt->num_pd_pages > GEN8_LEGACY_PDPES);
return 0;
+
+unwind_out:
+ while (i--)
+ unmap_and_free_pd(ppgtt->pdp.page_directory[i]);
+
+ return -ENOMEM;
}
static int gen8_ppgtt_alloc(struct i915_hw_ppgtt *ppgtt,
@@ -551,18 +710,16 @@
if (ret)
return ret;
- ret = gen8_ppgtt_allocate_page_tables(ppgtt, max_pdp);
- if (ret) {
- __free_pages(ppgtt->pd_pages, get_order(max_pdp << PAGE_SHIFT));
- return ret;
- }
-
- ppgtt->num_pd_entries = max_pdp * GEN8_PDES_PER_PAGE;
-
- ret = gen8_ppgtt_allocate_dma(ppgtt);
+ ret = gen8_ppgtt_allocate_page_tables(ppgtt);
if (ret)
- gen8_ppgtt_free(ppgtt);
+ goto err_out;
+ ppgtt->num_pd_entries = max_pdp * I915_PDES;
+
+ return 0;
+
+err_out:
+ gen8_ppgtt_free(ppgtt);
return ret;
}
@@ -573,14 +730,14 @@
int ret;
pd_addr = pci_map_page(ppgtt->base.dev->pdev,
- &ppgtt->pd_pages[pd], 0,
+ ppgtt->pdp.page_directory[pd]->page, 0,
PAGE_SIZE, PCI_DMA_BIDIRECTIONAL);
ret = pci_dma_mapping_error(ppgtt->base.dev->pdev, pd_addr);
if (ret)
return ret;
- ppgtt->pd_dma_addr[pd] = pd_addr;
+ ppgtt->pdp.page_directory[pd]->daddr = pd_addr;
return 0;
}
@@ -590,22 +747,23 @@
const int pt)
{
dma_addr_t pt_addr;
- struct page *p;
+ struct i915_page_directory_entry *pdir = ppgtt->pdp.page_directory[pd];
+ struct i915_page_table_entry *ptab = pdir->page_table[pt];
+ struct page *p = ptab->page;
int ret;
- p = ppgtt->gen8_pt_pages[pd][pt];
pt_addr = pci_map_page(ppgtt->base.dev->pdev,
p, 0, PAGE_SIZE, PCI_DMA_BIDIRECTIONAL);
ret = pci_dma_mapping_error(ppgtt->base.dev->pdev, pt_addr);
if (ret)
return ret;
- ppgtt->gen8_pt_dma_addr[pd][pt] = pt_addr;
+ ptab->daddr = pt_addr;
return 0;
}
-/**
+/*
* GEN8 legacy ppgtt programming is accomplished through a max 4 PDP registers
* with a net effect resembling a 2-level page table in normal x86 terms. Each
* PDP represents 1GB of memory 4 * 512 * 512 * 4096 = 4GB legacy 32b address
@@ -618,26 +776,30 @@
static int gen8_ppgtt_init(struct i915_hw_ppgtt *ppgtt, uint64_t size)
{
const int max_pdp = DIV_ROUND_UP(size, 1 << 30);
- const int min_pt_pages = GEN8_PDES_PER_PAGE * max_pdp;
+ const int min_pt_pages = I915_PDES * max_pdp;
int i, j, ret;
if (size % (1<<30))
DRM_INFO("Pages will be wasted unless GTT size (%llu) is divisible by 1GB\n", size);
- /* 1. Do all our allocations for page directories and page tables. */
- ret = gen8_ppgtt_alloc(ppgtt, max_pdp);
+ /* 1. Do all our allocations for page directories and page tables.
+ * We allocate more than was asked so that we can point the unused parts
+ * to valid entries that point to the scratch page. Dynamic page tables
+ * will fix this eventually.
+ */
+ ret = gen8_ppgtt_alloc(ppgtt, GEN8_LEGACY_PDPES);
if (ret)
return ret;
/*
* 2. Create DMA mappings for the page directories and page tables.
*/
- for (i = 0; i < max_pdp; i++) {
+ for (i = 0; i < GEN8_LEGACY_PDPES; i++) {
ret = gen8_ppgtt_setup_page_directories(ppgtt, i);
if (ret)
goto bail;
- for (j = 0; j < GEN8_PDES_PER_PAGE; j++) {
+ for (j = 0; j < I915_PDES; j++) {
ret = gen8_ppgtt_setup_page_tables(ppgtt, i, j);
if (ret)
goto bail;
@@ -652,11 +814,13 @@
* plugged in correctly. So we do that now/here. For aliasing PPGTT, we
* will never need to touch the PDEs again.
*/
- for (i = 0; i < max_pdp; i++) {
- gen8_ppgtt_pde_t *pd_vaddr;
- pd_vaddr = kmap_atomic(&ppgtt->pd_pages[i]);
- for (j = 0; j < GEN8_PDES_PER_PAGE; j++) {
- dma_addr_t addr = ppgtt->gen8_pt_dma_addr[i][j];
+ for (i = 0; i < GEN8_LEGACY_PDPES; i++) {
+ struct i915_page_directory_entry *pd = ppgtt->pdp.page_directory[i];
+ gen8_pde_t *pd_vaddr;
+ pd_vaddr = kmap_atomic(ppgtt->pdp.page_directory[i]->page);
+ for (j = 0; j < I915_PDES; j++) {
+ struct i915_page_table_entry *pt = pd->page_table[j];
+ dma_addr_t addr = pt->daddr;
pd_vaddr[j] = gen8_pde_encode(ppgtt->base.dev, addr,
I915_CACHE_LLC);
}
@@ -670,9 +834,14 @@
ppgtt->base.insert_entries = gen8_ppgtt_insert_entries;
ppgtt->base.cleanup = gen8_ppgtt_cleanup;
ppgtt->base.start = 0;
- ppgtt->base.total = ppgtt->num_pd_entries * GEN8_PTES_PER_PAGE * PAGE_SIZE;
- ppgtt->base.clear_range(&ppgtt->base, 0, ppgtt->base.total, true);
+ /* This is the area that we advertise as usable for the caller */
+ ppgtt->base.total = max_pdp * I915_PDES * GEN8_PTES * PAGE_SIZE;
+
+ /* Set all ptes to a valid scratch page. Also above requested space */
+ ppgtt->base.clear_range(&ppgtt->base, 0,
+ ppgtt->num_pd_pages * GEN8_PTES * PAGE_SIZE,
+ true);
DRM_DEBUG_DRIVER("Allocated %d pages for page directories (%d wasted)\n",
ppgtt->num_pd_pages, ppgtt->num_pd_pages - max_pdp);
@@ -691,22 +860,23 @@
{
struct drm_i915_private *dev_priv = ppgtt->base.dev->dev_private;
struct i915_address_space *vm = &ppgtt->base;
- gen6_gtt_pte_t __iomem *pd_addr;
- gen6_gtt_pte_t scratch_pte;
+ gen6_pte_t __iomem *pd_addr;
+ gen6_pte_t scratch_pte;
uint32_t pd_entry;
int pte, pde;
scratch_pte = vm->pte_encode(vm->scratch.addr, I915_CACHE_LLC, true, 0);
- pd_addr = (gen6_gtt_pte_t __iomem *)dev_priv->gtt.gsm +
- ppgtt->pd_offset / sizeof(gen6_gtt_pte_t);
+ pd_addr = (gen6_pte_t __iomem *)dev_priv->gtt.gsm +
+ ppgtt->pd.pd_offset / sizeof(gen6_pte_t);
seq_printf(m, " VM %p (pd_offset %x-%x):\n", vm,
- ppgtt->pd_offset, ppgtt->pd_offset + ppgtt->num_pd_entries);
+ ppgtt->pd.pd_offset,
+ ppgtt->pd.pd_offset + ppgtt->num_pd_entries);
for (pde = 0; pde < ppgtt->num_pd_entries; pde++) {
u32 expected;
- gen6_gtt_pte_t *pt_vaddr;
- dma_addr_t pt_addr = ppgtt->pt_dma_addr[pde];
+ gen6_pte_t *pt_vaddr;
+ dma_addr_t pt_addr = ppgtt->pd.page_table[pde]->daddr;
pd_entry = readl(pd_addr + pde);
expected = (GEN6_PDE_ADDR_ENCODE(pt_addr) | GEN6_PDE_VALID);
@@ -717,10 +887,10 @@
expected);
seq_printf(m, "\tPDE: %x\n", pd_entry);
- pt_vaddr = kmap_atomic(ppgtt->pt_pages[pde]);
- for (pte = 0; pte < I915_PPGTT_PT_ENTRIES; pte+=4) {
+ pt_vaddr = kmap_atomic(ppgtt->pd.page_table[pde]->page);
+ for (pte = 0; pte < GEN6_PTES; pte+=4) {
unsigned long va =
- (pde * PAGE_SIZE * I915_PPGTT_PT_ENTRIES) +
+ (pde * PAGE_SIZE * GEN6_PTES) +
(pte * PAGE_SIZE);
int i;
bool found = false;
@@ -743,33 +913,43 @@
}
}
-static void gen6_write_pdes(struct i915_hw_ppgtt *ppgtt)
+/* Write the pde at index @pde of page directory @pd so that it points to page table @pt */
+static void gen6_write_pde(struct i915_page_directory_entry *pd,
+ const int pde, struct i915_page_table_entry *pt)
{
- struct drm_i915_private *dev_priv = ppgtt->base.dev->dev_private;
- gen6_gtt_pte_t __iomem *pd_addr;
- uint32_t pd_entry;
- int i;
+ /* Caller needs to make sure the write completes if necessary */
+ struct i915_hw_ppgtt *ppgtt =
+ container_of(pd, struct i915_hw_ppgtt, pd);
+ u32 pd_entry;
- WARN_ON(ppgtt->pd_offset & 0x3f);
- pd_addr = (gen6_gtt_pte_t __iomem*)dev_priv->gtt.gsm +
- ppgtt->pd_offset / sizeof(gen6_gtt_pte_t);
- for (i = 0; i < ppgtt->num_pd_entries; i++) {
- dma_addr_t pt_addr;
+ pd_entry = GEN6_PDE_ADDR_ENCODE(pt->daddr);
+ pd_entry |= GEN6_PDE_VALID;
- pt_addr = ppgtt->pt_dma_addr[i];
- pd_entry = GEN6_PDE_ADDR_ENCODE(pt_addr);
- pd_entry |= GEN6_PDE_VALID;
+ writel(pd_entry, ppgtt->pd_addr + pde);
+}
- writel(pd_entry, pd_addr + i);
- }
- readl(pd_addr);
+/* Write all the page tables found in the ppgtt structure to incrementing page
+ * directories. */
+static void gen6_write_page_range(struct drm_i915_private *dev_priv,
+ struct i915_page_directory_entry *pd,
+ uint32_t start, uint32_t length)
+{
+ struct i915_page_table_entry *pt;
+ uint32_t pde, temp;
+
+ gen6_for_each_pde(pt, pd, start, length, temp, pde)
+ gen6_write_pde(pd, pde, pt);
+
+ /* Make sure the write is complete before other code can use this page
+ * table. Also required for WC mapped PTEs */
+ readl(dev_priv->gtt.gsm);
}
static uint32_t get_pd_offset(struct i915_hw_ppgtt *ppgtt)
{
- BUG_ON(ppgtt->pd_offset & 0x3f);
+ BUG_ON(ppgtt->pd.pd_offset & 0x3f);
- return (ppgtt->pd_offset / 64) << 16;
+ return (ppgtt->pd.pd_offset / 64) << 16;
}
static int hsw_mm_switch(struct i915_hw_ppgtt *ppgtt,
@@ -797,6 +977,16 @@
return 0;
}
+static int vgpu_mm_switch(struct i915_hw_ppgtt *ppgtt,
+ struct intel_engine_cs *ring)
+{
+ struct drm_i915_private *dev_priv = to_i915(ppgtt->base.dev);
+
+ I915_WRITE(RING_PP_DIR_DCLV(ring), PP_DIR_DCLV_2G);
+ I915_WRITE(RING_PP_DIR_BASE(ring), get_pd_offset(ppgtt));
+ return 0;
+}
+
static int gen7_mm_switch(struct i915_hw_ppgtt *ppgtt,
struct intel_engine_cs *ring)
{
@@ -908,21 +1098,21 @@
{
struct i915_hw_ppgtt *ppgtt =
container_of(vm, struct i915_hw_ppgtt, base);
- gen6_gtt_pte_t *pt_vaddr, scratch_pte;
+ gen6_pte_t *pt_vaddr, scratch_pte;
unsigned first_entry = start >> PAGE_SHIFT;
unsigned num_entries = length >> PAGE_SHIFT;
- unsigned act_pt = first_entry / I915_PPGTT_PT_ENTRIES;
- unsigned first_pte = first_entry % I915_PPGTT_PT_ENTRIES;
+ unsigned act_pt = first_entry / GEN6_PTES;
+ unsigned first_pte = first_entry % GEN6_PTES;
unsigned last_pte, i;
scratch_pte = vm->pte_encode(vm->scratch.addr, I915_CACHE_LLC, true, 0);
while (num_entries) {
last_pte = first_pte + num_entries;
- if (last_pte > I915_PPGTT_PT_ENTRIES)
- last_pte = I915_PPGTT_PT_ENTRIES;
+ if (last_pte > GEN6_PTES)
+ last_pte = GEN6_PTES;
- pt_vaddr = kmap_atomic(ppgtt->pt_pages[act_pt]);
+ pt_vaddr = kmap_atomic(ppgtt->pd.page_table[act_pt]->page);
for (i = first_pte; i < last_pte; i++)
pt_vaddr[i] = scratch_pte;
@@ -942,22 +1132,22 @@
{
struct i915_hw_ppgtt *ppgtt =
container_of(vm, struct i915_hw_ppgtt, base);
- gen6_gtt_pte_t *pt_vaddr;
+ gen6_pte_t *pt_vaddr;
unsigned first_entry = start >> PAGE_SHIFT;
- unsigned act_pt = first_entry / I915_PPGTT_PT_ENTRIES;
- unsigned act_pte = first_entry % I915_PPGTT_PT_ENTRIES;
+ unsigned act_pt = first_entry / GEN6_PTES;
+ unsigned act_pte = first_entry % GEN6_PTES;
struct sg_page_iter sg_iter;
pt_vaddr = NULL;
for_each_sg_page(pages->sgl, &sg_iter, pages->nents, 0) {
if (pt_vaddr == NULL)
- pt_vaddr = kmap_atomic(ppgtt->pt_pages[act_pt]);
+ pt_vaddr = kmap_atomic(ppgtt->pd.page_table[act_pt]->page);
pt_vaddr[act_pte] =
vm->pte_encode(sg_page_iter_dma_address(&sg_iter),
cache_level, true, flags);
- if (++act_pte == I915_PPGTT_PT_ENTRIES) {
+ if (++act_pte == GEN6_PTES) {
kunmap_atomic(pt_vaddr);
pt_vaddr = NULL;
act_pt++;
@@ -968,26 +1158,134 @@
kunmap_atomic(pt_vaddr);
}
-static void gen6_ppgtt_unmap_pages(struct i915_hw_ppgtt *ppgtt)
+/* PDE TLBs are a pain to invalidate pre GEN8. It requires a context reload. If we
+ * are switching between contexts with the same LRCA, we also must do a force
+ * restore.
+ */
+static inline void mark_tlbs_dirty(struct i915_hw_ppgtt *ppgtt)
{
+ /* If current vm != vm, */
+ ppgtt->pd_dirty_rings = INTEL_INFO(ppgtt->base.dev)->ring_mask;
+}
+
+static void gen6_initialize_pt(struct i915_address_space *vm,
+ struct i915_page_table_entry *pt)
+{
+ gen6_pte_t *pt_vaddr, scratch_pte;
int i;
- if (ppgtt->pt_dma_addr) {
- for (i = 0; i < ppgtt->num_pd_entries; i++)
- pci_unmap_page(ppgtt->base.dev->pdev,
- ppgtt->pt_dma_addr[i],
- 4096, PCI_DMA_BIDIRECTIONAL);
+ WARN_ON(vm->scratch.addr == 0);
+
+ scratch_pte = vm->pte_encode(vm->scratch.addr,
+ I915_CACHE_LLC, true, 0);
+
+ pt_vaddr = kmap_atomic(pt->page);
+
+ for (i = 0; i < GEN6_PTES; i++)
+ pt_vaddr[i] = scratch_pte;
+
+ kunmap_atomic(pt_vaddr);
+}
+
+static int gen6_alloc_va_range(struct i915_address_space *vm,
+ uint64_t start, uint64_t length)
+{
+ DECLARE_BITMAP(new_page_tables, I915_PDES);
+ struct drm_device *dev = vm->dev;
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ struct i915_hw_ppgtt *ppgtt =
+ container_of(vm, struct i915_hw_ppgtt, base);
+ struct i915_page_table_entry *pt;
+ const uint32_t start_save = start, length_save = length;
+ uint32_t pde, temp;
+ int ret;
+
+ WARN_ON(upper_32_bits(start));
+
+ bitmap_zero(new_page_tables, I915_PDES);
+
+ /* The allocation is done in two stages so that we can bail out with
+ * a minimal amount of pain. The first stage finds new page tables that
+ * need allocation. The second stage marks the used ptes within the page
+ * tables.
+ */
+ gen6_for_each_pde(pt, &ppgtt->pd, start, length, temp, pde) {
+ if (pt != ppgtt->scratch_pt) {
+ WARN_ON(bitmap_empty(pt->used_ptes, GEN6_PTES));
+ continue;
+ }
+
+ /* We've already allocated a page table */
+ WARN_ON(!bitmap_empty(pt->used_ptes, GEN6_PTES));
+
+ pt = alloc_pt_single(dev);
+ if (IS_ERR(pt)) {
+ ret = PTR_ERR(pt);
+ goto unwind_out;
+ }
+
+ gen6_initialize_pt(vm, pt);
+
+ ppgtt->pd.page_table[pde] = pt;
+ set_bit(pde, new_page_tables);
+ trace_i915_page_table_entry_alloc(vm, pde, start, GEN6_PDE_SHIFT);
}
+
+ start = start_save;
+ length = length_save;
+
+ gen6_for_each_pde(pt, &ppgtt->pd, start, length, temp, pde) {
+ DECLARE_BITMAP(tmp_bitmap, GEN6_PTES);
+
+ bitmap_zero(tmp_bitmap, GEN6_PTES);
+ bitmap_set(tmp_bitmap, gen6_pte_index(start),
+ gen6_pte_count(start, length));
+
+ if (test_and_clear_bit(pde, new_page_tables))
+ gen6_write_pde(&ppgtt->pd, pde, pt);
+
+ trace_i915_page_table_entry_map(vm, pde, pt,
+ gen6_pte_index(start),
+ gen6_pte_count(start, length),
+ GEN6_PTES);
+ bitmap_or(pt->used_ptes, tmp_bitmap, pt->used_ptes,
+ GEN6_PTES);
+ }
+
+ WARN_ON(!bitmap_empty(new_page_tables, I915_PDES));
+
+ /* Make sure the write is complete before other code can use this page
+ * table. Also required for WC mapped PTEs */
+ readl(dev_priv->gtt.gsm);
+
+ mark_tlbs_dirty(ppgtt);
+ return 0;
+
+unwind_out:
+ for_each_set_bit(pde, new_page_tables, I915_PDES) {
+ struct i915_page_table_entry *pt = ppgtt->pd.page_table[pde];
+
+ ppgtt->pd.page_table[pde] = ppgtt->scratch_pt;
+ unmap_and_free_pt(pt, vm->dev);
+ }
+
+ mark_tlbs_dirty(ppgtt);
+ return ret;
}
static void gen6_ppgtt_free(struct i915_hw_ppgtt *ppgtt)
{
int i;
- kfree(ppgtt->pt_dma_addr);
- for (i = 0; i < ppgtt->num_pd_entries; i++)
- __free_page(ppgtt->pt_pages[i]);
- kfree(ppgtt->pt_pages);
+ for (i = 0; i < ppgtt->num_pd_entries; i++) {
+ struct i915_page_table_entry *pt = ppgtt->pd.page_table[i];
+
+ if (pt != ppgtt->scratch_pt)
+ unmap_and_free_pt(ppgtt->pd.page_table[i], ppgtt->base.dev);
+ }
+
+ unmap_and_free_pt(ppgtt->scratch_pt, ppgtt->base.dev);
+ unmap_and_free_pd(&ppgtt->pd);
}
static void gen6_ppgtt_cleanup(struct i915_address_space *vm)
@@ -997,7 +1295,6 @@
drm_mm_remove_node(&ppgtt->node);
- gen6_ppgtt_unmap_pages(ppgtt);
gen6_ppgtt_free(ppgtt);
}
@@ -1013,6 +1310,12 @@
* size. We allocate at the top of the GTT to avoid fragmentation.
*/
BUG_ON(!drm_mm_initialized(&dev_priv->gtt.base.mm));
+ ppgtt->scratch_pt = alloc_pt_single(ppgtt->base.dev);
+ if (IS_ERR(ppgtt->scratch_pt))
+ return PTR_ERR(ppgtt->scratch_pt);
+
+ gen6_initialize_pt(&ppgtt->base, ppgtt->scratch_pt);
+
alloc:
ret = drm_mm_insert_node_in_range_generic(&dev_priv->gtt.base.mm,
&ppgtt->node, GEN6_PD_SIZE,
@@ -1026,88 +1329,43 @@
0, dev_priv->gtt.base.total,
0);
if (ret)
- return ret;
+ goto err_out;
retried = true;
goto alloc;
}
+ if (ret)
+ goto err_out;
+
+
if (ppgtt->node.start < dev_priv->gtt.mappable_end)
DRM_DEBUG("Forced to use aperture for PDEs\n");
- ppgtt->num_pd_entries = GEN6_PPGTT_PD_ENTRIES;
- return ret;
-}
-
-static int gen6_ppgtt_allocate_page_tables(struct i915_hw_ppgtt *ppgtt)
-{
- int i;
-
- ppgtt->pt_pages = kcalloc(ppgtt->num_pd_entries, sizeof(struct page *),
- GFP_KERNEL);
-
- if (!ppgtt->pt_pages)
- return -ENOMEM;
-
- for (i = 0; i < ppgtt->num_pd_entries; i++) {
- ppgtt->pt_pages[i] = alloc_page(GFP_KERNEL);
- if (!ppgtt->pt_pages[i]) {
- gen6_ppgtt_free(ppgtt);
- return -ENOMEM;
- }
- }
-
+ ppgtt->num_pd_entries = I915_PDES;
return 0;
+
+err_out:
+ unmap_and_free_pt(ppgtt->scratch_pt, ppgtt->base.dev);
+ return ret;
}
static int gen6_ppgtt_alloc(struct i915_hw_ppgtt *ppgtt)
{
- int ret;
-
- ret = gen6_ppgtt_allocate_page_directories(ppgtt);
- if (ret)
- return ret;
-
- ret = gen6_ppgtt_allocate_page_tables(ppgtt);
- if (ret) {
- drm_mm_remove_node(&ppgtt->node);
- return ret;
- }
-
- ppgtt->pt_dma_addr = kcalloc(ppgtt->num_pd_entries, sizeof(dma_addr_t),
- GFP_KERNEL);
- if (!ppgtt->pt_dma_addr) {
- drm_mm_remove_node(&ppgtt->node);
- gen6_ppgtt_free(ppgtt);
- return -ENOMEM;
- }
-
- return 0;
+ return gen6_ppgtt_allocate_page_directories(ppgtt);
}
-static int gen6_ppgtt_setup_page_tables(struct i915_hw_ppgtt *ppgtt)
+static void gen6_scratch_va_range(struct i915_hw_ppgtt *ppgtt,
+ uint64_t start, uint64_t length)
{
- struct drm_device *dev = ppgtt->base.dev;
- int i;
+ struct i915_page_table_entry *unused;
+ uint32_t pde, temp;
- for (i = 0; i < ppgtt->num_pd_entries; i++) {
- dma_addr_t pt_addr;
-
- pt_addr = pci_map_page(dev->pdev, ppgtt->pt_pages[i], 0, 4096,
- PCI_DMA_BIDIRECTIONAL);
-
- if (pci_dma_mapping_error(dev->pdev, pt_addr)) {
- gen6_ppgtt_unmap_pages(ppgtt);
- return -EIO;
- }
-
- ppgtt->pt_dma_addr[i] = pt_addr;
- }
-
- return 0;
+ gen6_for_each_pde(unused, &ppgtt->pd, start, length, temp, pde)
+ ppgtt->pd.page_table[pde] = ppgtt->scratch_pt;
}
-static int gen6_ppgtt_init(struct i915_hw_ppgtt *ppgtt)
+static int gen6_ppgtt_init(struct i915_hw_ppgtt *ppgtt, bool aliasing)
{
struct drm_device *dev = ppgtt->base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
@@ -1123,40 +1381,57 @@
} else
BUG();
+ if (intel_vgpu_active(dev))
+ ppgtt->switch_mm = vgpu_mm_switch;
+
ret = gen6_ppgtt_alloc(ppgtt);
if (ret)
return ret;
- ret = gen6_ppgtt_setup_page_tables(ppgtt);
- if (ret) {
- gen6_ppgtt_free(ppgtt);
- return ret;
+ if (aliasing) {
+ /* preallocate all pts */
+ ret = alloc_pt_range(&ppgtt->pd, 0, ppgtt->num_pd_entries,
+ ppgtt->base.dev);
+
+ if (ret) {
+ gen6_ppgtt_cleanup(&ppgtt->base);
+ return ret;
+ }
}
+ ppgtt->base.allocate_va_range = gen6_alloc_va_range;
ppgtt->base.clear_range = gen6_ppgtt_clear_range;
ppgtt->base.insert_entries = gen6_ppgtt_insert_entries;
ppgtt->base.cleanup = gen6_ppgtt_cleanup;
ppgtt->base.start = 0;
- ppgtt->base.total = ppgtt->num_pd_entries * I915_PPGTT_PT_ENTRIES * PAGE_SIZE;
+ ppgtt->base.total = ppgtt->num_pd_entries * GEN6_PTES * PAGE_SIZE;
ppgtt->debug_dump = gen6_dump_ppgtt;
- ppgtt->pd_offset =
- ppgtt->node.start / PAGE_SIZE * sizeof(gen6_gtt_pte_t);
+ ppgtt->pd.pd_offset =
+ ppgtt->node.start / PAGE_SIZE * sizeof(gen6_pte_t);
- ppgtt->base.clear_range(&ppgtt->base, 0, ppgtt->base.total, true);
+ ppgtt->pd_addr = (gen6_pte_t __iomem *)dev_priv->gtt.gsm +
+ ppgtt->pd.pd_offset / sizeof(gen6_pte_t);
+
+ if (aliasing)
+ ppgtt->base.clear_range(&ppgtt->base, 0, ppgtt->base.total, true);
+ else
+ gen6_scratch_va_range(ppgtt, 0, ppgtt->base.total);
+
+ gen6_write_page_range(dev_priv, &ppgtt->pd, 0, ppgtt->base.total);
DRM_DEBUG_DRIVER("Allocated pde space (%lldM) at GTT entry: %llx\n",
ppgtt->node.size >> 20,
ppgtt->node.start / PAGE_SIZE);
- gen6_write_pdes(ppgtt);
DRM_DEBUG("Adding PPGTT at offset %x\n",
- ppgtt->pd_offset << 10);
+ ppgtt->pd.pd_offset << 10);
return 0;
}
-static int __hw_ppgtt_init(struct drm_device *dev, struct i915_hw_ppgtt *ppgtt)
+static int __hw_ppgtt_init(struct drm_device *dev, struct i915_hw_ppgtt *ppgtt,
+ bool aliasing)
{
struct drm_i915_private *dev_priv = dev->dev_private;
@@ -1164,7 +1439,7 @@
ppgtt->base.scratch = dev_priv->gtt.base.scratch;
if (INTEL_INFO(dev)->gen < 8)
- return gen6_ppgtt_init(ppgtt);
+ return gen6_ppgtt_init(ppgtt, aliasing);
else
return gen8_ppgtt_init(ppgtt, dev_priv->gtt.base.total);
}
@@ -1173,7 +1448,7 @@
struct drm_i915_private *dev_priv = dev->dev_private;
int ret = 0;
- ret = __hw_ppgtt_init(dev, ppgtt);
+ ret = __hw_ppgtt_init(dev, ppgtt, false);
if (ret == 0) {
kref_init(&ppgtt->ref);
drm_mm_init(&ppgtt->base.mm, ppgtt->base.start,
@@ -1420,15 +1695,20 @@
return;
}
- list_for_each_entry(vm, &dev_priv->vm_list, global_link) {
- /* TODO: Perhaps it shouldn't be gen6 specific */
- if (i915_is_ggtt(vm)) {
- if (dev_priv->mm.aliasing_ppgtt)
- gen6_write_pdes(dev_priv->mm.aliasing_ppgtt);
- continue;
- }
+ if (USES_PPGTT(dev)) {
+ list_for_each_entry(vm, &dev_priv->vm_list, global_link) {
+ /* TODO: Perhaps it shouldn't be gen6 specific */
- gen6_write_pdes(container_of(vm, struct i915_hw_ppgtt, base));
+ struct i915_hw_ppgtt *ppgtt =
+ container_of(vm, struct i915_hw_ppgtt,
+ base);
+
+ if (i915_is_ggtt(vm))
+ ppgtt = dev_priv->mm.aliasing_ppgtt;
+
+ gen6_write_page_range(dev_priv, &ppgtt->pd,
+ 0, ppgtt->base.total);
+ }
}
i915_ggtt_flush(dev_priv);
@@ -1447,7 +1727,7 @@
return 0;
}
-static inline void gen8_set_pte(void __iomem *addr, gen8_gtt_pte_t pte)
+static inline void gen8_set_pte(void __iomem *addr, gen8_pte_t pte)
{
#ifdef writeq
writeq(pte, addr);
@@ -1464,8 +1744,8 @@
{
struct drm_i915_private *dev_priv = vm->dev->dev_private;
unsigned first_entry = start >> PAGE_SHIFT;
- gen8_gtt_pte_t __iomem *gtt_entries =
- (gen8_gtt_pte_t __iomem *)dev_priv->gtt.gsm + first_entry;
+ gen8_pte_t __iomem *gtt_entries =
+ (gen8_pte_t __iomem *)dev_priv->gtt.gsm + first_entry;
int i = 0;
struct sg_page_iter sg_iter;
dma_addr_t addr = 0; /* shut up gcc */
@@ -1510,8 +1790,8 @@
{
struct drm_i915_private *dev_priv = vm->dev->dev_private;
unsigned first_entry = start >> PAGE_SHIFT;
- gen6_gtt_pte_t __iomem *gtt_entries =
- (gen6_gtt_pte_t __iomem *)dev_priv->gtt.gsm + first_entry;
+ gen6_pte_t __iomem *gtt_entries =
+ (gen6_pte_t __iomem *)dev_priv->gtt.gsm + first_entry;
int i = 0;
struct sg_page_iter sg_iter;
dma_addr_t addr = 0;
@@ -1549,8 +1829,8 @@
struct drm_i915_private *dev_priv = vm->dev->dev_private;
unsigned first_entry = start >> PAGE_SHIFT;
unsigned num_entries = length >> PAGE_SHIFT;
- gen8_gtt_pte_t scratch_pte, __iomem *gtt_base =
- (gen8_gtt_pte_t __iomem *) dev_priv->gtt.gsm + first_entry;
+ gen8_pte_t scratch_pte, __iomem *gtt_base =
+ (gen8_pte_t __iomem *) dev_priv->gtt.gsm + first_entry;
const int max_entries = gtt_total_entries(dev_priv->gtt) - first_entry;
int i;
@@ -1575,8 +1855,8 @@
struct drm_i915_private *dev_priv = vm->dev->dev_private;
unsigned first_entry = start >> PAGE_SHIFT;
unsigned num_entries = length >> PAGE_SHIFT;
- gen6_gtt_pte_t scratch_pte, __iomem *gtt_base =
- (gen6_gtt_pte_t __iomem *) dev_priv->gtt.gsm + first_entry;
+ gen6_pte_t scratch_pte, __iomem *gtt_base =
+ (gen6_pte_t __iomem *) dev_priv->gtt.gsm + first_entry;
const int max_entries = gtt_total_entries(dev_priv->gtt) - first_entry;
int i;
@@ -1633,11 +1913,15 @@
struct drm_device *dev = vma->vm->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct drm_i915_gem_object *obj = vma->obj;
+ struct sg_table *pages = obj->pages;
/* Currently applicable only to VLV */
if (obj->gt_ro)
flags |= PTE_READ_ONLY;
+ if (i915_is_ggtt(vma->vm))
+ pages = vma->ggtt_view.pages;
+
/* If there is no aliasing PPGTT, or the caller needs a global mapping,
* or we have a global mapping already but the cacheability flags have
* changed, set the global PTEs.
@@ -1652,7 +1936,7 @@
if (!dev_priv->mm.aliasing_ppgtt || flags & GLOBAL_BIND) {
if (!(vma->bound & GLOBAL_BIND) ||
(cache_level != obj->cache_level)) {
- vma->vm->insert_entries(vma->vm, vma->ggtt_view.pages,
+ vma->vm->insert_entries(vma->vm, pages,
vma->node.start,
cache_level, flags);
vma->bound |= GLOBAL_BIND;
@@ -1663,8 +1947,7 @@
(!(vma->bound & LOCAL_BIND) ||
(cache_level != obj->cache_level))) {
struct i915_hw_ppgtt *appgtt = dev_priv->mm.aliasing_ppgtt;
- appgtt->base.insert_entries(&appgtt->base,
- vma->ggtt_view.pages,
+ appgtt->base.insert_entries(&appgtt->base, pages,
vma->node.start,
cache_level, flags);
vma->bound |= LOCAL_BIND;
@@ -1753,6 +2036,16 @@
/* Subtract the guard page ... */
drm_mm_init(&ggtt_vm->mm, start, end - start - PAGE_SIZE);
+
+ dev_priv->gtt.base.start = start;
+ dev_priv->gtt.base.total = end - start;
+
+ if (intel_vgpu_active(dev)) {
+ ret = intel_vgt_balloon(dev);
+ if (ret)
+ return ret;
+ }
+
if (!HAS_LLC(dev))
dev_priv->gtt.base.mm.color_adjust = i915_gtt_color_adjust;
@@ -1772,9 +2065,6 @@
vma->bound |= GLOBAL_BIND;
}
- dev_priv->gtt.base.start = start;
- dev_priv->gtt.base.total = end - start;
-
/* Clear any non-preallocated blocks */
drm_mm_for_each_hole(entry, &ggtt_vm->mm, hole_start, hole_end) {
DRM_DEBUG_KMS("clearing unused GTT space: [%lx, %lx]\n",
@@ -1793,9 +2083,11 @@
if (!ppgtt)
return -ENOMEM;
- ret = __hw_ppgtt_init(dev, ppgtt);
- if (ret != 0)
+ ret = __hw_ppgtt_init(dev, ppgtt, true);
+ if (ret) {
+ kfree(ppgtt);
return ret;
+ }
dev_priv->mm.aliasing_ppgtt = ppgtt;
}
@@ -1826,6 +2118,9 @@
}
if (drm_mm_initialized(&vm->mm)) {
+ if (intel_vgpu_active(dev))
+ intel_vgt_deballoon();
+
drm_mm_takedown(&vm->mm);
list_del(&vm->global_link);
}
@@ -2078,7 +2373,7 @@
gtt_size = gen8_get_total_gtt_size(snb_gmch_ctl);
}
- *gtt_total = (gtt_size / sizeof(gen8_gtt_pte_t)) << PAGE_SHIFT;
+ *gtt_total = (gtt_size / sizeof(gen8_pte_t)) << PAGE_SHIFT;
if (IS_CHERRYVIEW(dev))
chv_setup_private_ppat(dev_priv);
@@ -2123,7 +2418,7 @@
*stolen = gen6_get_stolen_size(snb_gmch_ctl);
gtt_size = gen6_get_total_gtt_size(snb_gmch_ctl);
- *gtt_total = (gtt_size / sizeof(gen6_gtt_pte_t)) << PAGE_SHIFT;
+ *gtt_total = (gtt_size / sizeof(gen6_pte_t)) << PAGE_SHIFT;
ret = ggtt_probe_common(dev, gtt_size);
@@ -2228,11 +2523,16 @@
return 0;
}
-static struct i915_vma *__i915_gem_vma_create(struct drm_i915_gem_object *obj,
- struct i915_address_space *vm,
- const struct i915_ggtt_view *view)
+static struct i915_vma *
+__i915_gem_vma_create(struct drm_i915_gem_object *obj,
+ struct i915_address_space *vm,
+ const struct i915_ggtt_view *ggtt_view)
{
- struct i915_vma *vma = kzalloc(sizeof(*vma), GFP_KERNEL);
+ struct i915_vma *vma;
+
+ if (WARN_ON(i915_is_ggtt(vm) != !!ggtt_view))
+ return ERR_PTR(-EINVAL);
+ vma = kzalloc(sizeof(*vma), GFP_KERNEL);
if (vma == NULL)
return ERR_PTR(-ENOMEM);
@@ -2241,10 +2541,11 @@
INIT_LIST_HEAD(&vma->exec_list);
vma->vm = vm;
vma->obj = obj;
- vma->ggtt_view = *view;
if (INTEL_INFO(vm->dev)->gen >= 6) {
if (i915_is_ggtt(vm)) {
+ vma->ggtt_view = *ggtt_view;
+
vma->unbind_vma = ggtt_unbind_vma;
vma->bind_vma = ggtt_bind_vma;
} else {
@@ -2253,6 +2554,7 @@
}
} else {
BUG_ON(!i915_is_ggtt(vm));
+ vma->ggtt_view = *ggtt_view;
vma->unbind_vma = i915_ggtt_unbind_vma;
vma->bind_vma = i915_ggtt_bind_vma;
}
@@ -2265,38 +2567,170 @@
}
struct i915_vma *
-i915_gem_obj_lookup_or_create_vma_view(struct drm_i915_gem_object *obj,
- struct i915_address_space *vm,
- const struct i915_ggtt_view *view)
+i915_gem_obj_lookup_or_create_vma(struct drm_i915_gem_object *obj,
+ struct i915_address_space *vm)
{
struct i915_vma *vma;
- vma = i915_gem_obj_to_vma_view(obj, vm, view);
+ vma = i915_gem_obj_to_vma(obj, vm);
if (!vma)
- vma = __i915_gem_vma_create(obj, vm, view);
+ vma = __i915_gem_vma_create(obj, vm,
+ i915_is_ggtt(vm) ? &i915_ggtt_view_normal : NULL);
return vma;
}
-static inline
-int i915_get_vma_pages(struct i915_vma *vma)
+struct i915_vma *
+i915_gem_obj_lookup_or_create_ggtt_vma(struct drm_i915_gem_object *obj,
+ const struct i915_ggtt_view *view)
{
+ struct i915_address_space *ggtt = i915_obj_to_ggtt(obj);
+ struct i915_vma *vma;
+
+ if (WARN_ON(!view))
+ return ERR_PTR(-EINVAL);
+
+ vma = i915_gem_obj_to_ggtt_view(obj, view);
+
+ if (IS_ERR(vma))
+ return vma;
+
+ if (!vma)
+ vma = __i915_gem_vma_create(obj, ggtt, view);
+
+ return vma;
+
+}
+
+static void
+rotate_pages(dma_addr_t *in, unsigned int width, unsigned int height,
+ struct sg_table *st)
+{
+ unsigned int column, row;
+ unsigned int src_idx;
+ struct scatterlist *sg = st->sgl;
+
+ st->nents = 0;
+
+ for (column = 0; column < width; column++) {
+ src_idx = width * (height - 1) + column;
+ for (row = 0; row < height; row++) {
+ st->nents++;
+ /* We don't need the pages, but need to initialize
+ * the entries so the sg list can be happily traversed.
+ * All we need are the DMA addresses.
+ */
+ sg_set_page(sg, NULL, PAGE_SIZE, 0);
+ sg_dma_address(sg) = in[src_idx];
+ sg_dma_len(sg) = PAGE_SIZE;
+ sg = sg_next(sg);
+ src_idx -= width;
+ }
+ }
+}
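
rotate_pages() fills the destination scatterlist column by column while walking the source rows bottom-up, which amounts to rotating the page grid by 90 degrees. A tiny standalone illustration of that index walk (the 4x3 grid size is arbitrary):

	#include <stdio.h>

	/* Emit the source index for each destination slot, mirroring the loop in
	 * rotate_pages(): destinations are written column by column, sources are
	 * read bottom-up within each column. */
	static void rotate_indices(unsigned int width, unsigned int height)
	{
		unsigned int column, row, src_idx, dst = 0;

		for (column = 0; column < width; column++) {
			src_idx = width * (height - 1) + column;
			for (row = 0; row < height; row++) {
				printf("dst %2u <- src %2u\n", dst++, src_idx);
				src_idx -= width;
			}
		}
	}

	int main(void)
	{
		rotate_indices(4, 3);	/* e.g. a 4x3 grid of tile pages */
		return 0;
	}
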
+
+static struct sg_table *
+intel_rotate_fb_obj_pages(struct i915_ggtt_view *ggtt_view,
+ struct drm_i915_gem_object *obj)
+{
+ struct drm_device *dev = obj->base.dev;
+ struct intel_rotation_info *rot_info = &ggtt_view->rotation_info;
+ unsigned long size, pages, rot_pages;
+ struct sg_page_iter sg_iter;
+ unsigned long i;
+ dma_addr_t *page_addr_list;
+ struct sg_table *st;
+ unsigned int tile_pitch, tile_height;
+ unsigned int width_pages, height_pages;
+ int ret = -ENOMEM;
+
+ pages = obj->base.size / PAGE_SIZE;
+
+ /* Calculate tiling geometry. */
+ tile_height = intel_tile_height(dev, rot_info->pixel_format,
+ rot_info->fb_modifier);
+ tile_pitch = PAGE_SIZE / tile_height;
+ width_pages = DIV_ROUND_UP(rot_info->pitch, tile_pitch);
+ height_pages = DIV_ROUND_UP(rot_info->height, tile_height);
+ rot_pages = width_pages * height_pages;
+ size = rot_pages * PAGE_SIZE;
+
+ /* Allocate a temporary list of source pages for random access. */
+ page_addr_list = drm_malloc_ab(pages, sizeof(dma_addr_t));
+ if (!page_addr_list)
+ return ERR_PTR(ret);
+
+ /* Allocate target SG list. */
+ st = kmalloc(sizeof(*st), GFP_KERNEL);
+ if (!st)
+ goto err_st_alloc;
+
+ ret = sg_alloc_table(st, rot_pages, GFP_KERNEL);
+ if (ret)
+ goto err_sg_alloc;
+
+ /* Populate source page list from the object. */
+ i = 0;
+ for_each_sg_page(obj->pages->sgl, &sg_iter, obj->pages->nents, 0) {
+ page_addr_list[i] = sg_page_iter_dma_address(&sg_iter);
+ i++;
+ }
+
+ /* Rotate the pages. */
+ rotate_pages(page_addr_list, width_pages, height_pages, st);
+
+ DRM_DEBUG_KMS(
+ "Created rotated page mapping for object size %lu (pitch=%u, height=%u, pixel_format=0x%x, %ux%u tiles, %lu pages).\n",
+ size, rot_info->pitch, rot_info->height,
+ rot_info->pixel_format, width_pages, height_pages,
+ rot_pages);
+
+ drm_free_large(page_addr_list);
+
+ return st;
+
+err_sg_alloc:
+ kfree(st);
+err_st_alloc:
+ drm_free_large(page_addr_list);
+
+ DRM_DEBUG_KMS(
+ "Failed to create rotated mapping for object size %lu! (%d) (pitch=%u, height=%u, pixel_format=0x%x, %ux%u tiles, %lu pages)\n",
+ size, ret, rot_info->pitch, rot_info->height,
+ rot_info->pixel_format, width_pages, height_pages,
+ rot_pages);
+ return ERR_PTR(ret);
+}
+
+static inline int
+i915_get_ggtt_vma_pages(struct i915_vma *vma)
+{
+ int ret = 0;
+
if (vma->ggtt_view.pages)
return 0;
if (vma->ggtt_view.type == I915_GGTT_VIEW_NORMAL)
vma->ggtt_view.pages = vma->obj->pages;
+ else if (vma->ggtt_view.type == I915_GGTT_VIEW_ROTATED)
+ vma->ggtt_view.pages =
+ intel_rotate_fb_obj_pages(&vma->ggtt_view, vma->obj);
else
WARN_ONCE(1, "GGTT view %u not implemented!\n",
vma->ggtt_view.type);
if (!vma->ggtt_view.pages) {
- DRM_ERROR("Failed to get pages for VMA view type %u!\n",
+ DRM_ERROR("Failed to get pages for GGTT view type %u!\n",
vma->ggtt_view.type);
- return -EINVAL;
+ ret = -EINVAL;
+ } else if (IS_ERR(vma->ggtt_view.pages)) {
+ ret = PTR_ERR(vma->ggtt_view.pages);
+ vma->ggtt_view.pages = NULL;
+ DRM_ERROR("Failed to get pages for VMA view type %u (%d)!\n",
+ vma->ggtt_view.type, ret);
}
- return 0;
+ return ret;
}
/**
@@ -2312,10 +2746,12 @@
int i915_vma_bind(struct i915_vma *vma, enum i915_cache_level cache_level,
u32 flags)
{
- int ret = i915_get_vma_pages(vma);
+ if (i915_is_ggtt(vma->vm)) {
+ int ret = i915_get_ggtt_vma_pages(vma);
- if (ret)
- return ret;
+ if (ret)
+ return ret;
+ }
vma->bind_vma(vma, cache_level, flags);
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.h b/drivers/gpu/drm/i915/i915_gem_gtt.h
index e377c7d..fc03c99 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.h
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.h
@@ -36,13 +36,13 @@
struct drm_i915_file_private;
-typedef uint32_t gen6_gtt_pte_t;
-typedef uint64_t gen8_gtt_pte_t;
-typedef gen8_gtt_pte_t gen8_ppgtt_pde_t;
+typedef uint32_t gen6_pte_t;
+typedef uint64_t gen8_pte_t;
+typedef uint64_t gen8_pde_t;
#define gtt_total_entries(gtt) ((gtt).base.total >> PAGE_SHIFT)
-#define I915_PPGTT_PT_ENTRIES (PAGE_SIZE / sizeof(gen6_gtt_pte_t))
+
/* gen6-hsw has bit 11-4 for physical addr bit 39-32 */
#define GEN6_GTT_ADDR_ENCODE(addr) ((addr) | (((addr) >> 28) & 0xff0))
#define GEN6_PTE_ADDR_ENCODE(addr) GEN6_GTT_ADDR_ENCODE(addr)
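
The ADDR_ENCODE trick relies on every GTT page address being 4 KiB aligned: bits 11:4 of the pte are otherwise unused, so physical address bits 39:32 are folded down into them, per the comment above. A standalone restatement of that fold (the example address is arbitrary):

	#include <stdint.h>
	#include <stdio.h>

	/* Fold physical address bits 39:32 into pte bits 11:4, as
	 * GEN6_GTT_ADDR_ENCODE() does; a page-aligned address has those
	 * low bits free, so they can carry the high bits. */
	static uint32_t gtt_addr_encode(uint64_t addr)
	{
		return (uint32_t)(addr | ((addr >> 28) & 0xff0));
	}

	int main(void)
	{
		uint64_t addr = 0x1234567000ull;	/* page-aligned 40-bit address */

		printf("pte address field: 0x%08x\n", gtt_addr_encode(addr));
		return 0;
	}
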
@@ -51,9 +51,16 @@
#define GEN6_PTE_UNCACHED (1 << 1)
#define GEN6_PTE_VALID (1 << 0)
-#define GEN6_PPGTT_PD_ENTRIES 512
-#define GEN6_PD_SIZE (GEN6_PPGTT_PD_ENTRIES * PAGE_SIZE)
+#define I915_PTES(pte_len) (PAGE_SIZE / (pte_len))
+#define I915_PTE_MASK(pte_len) (I915_PTES(pte_len) - 1)
+#define I915_PDES 512
+#define I915_PDE_MASK (I915_PDES - 1)
+#define NUM_PTE(pde_shift) (1 << (pde_shift - PAGE_SHIFT))
+
+#define GEN6_PTES I915_PTES(sizeof(gen6_pte_t))
+#define GEN6_PD_SIZE (I915_PDES * PAGE_SIZE)
#define GEN6_PD_ALIGN (PAGE_SIZE * 16)
+#define GEN6_PDE_SHIFT 22
#define GEN6_PDE_VALID (1 << 0)
#define GEN7_PTE_CACHE_L3_LLC (3 << 1)
@@ -88,9 +95,8 @@
#define GEN8_PDE_MASK 0x1ff
#define GEN8_PTE_SHIFT 12
#define GEN8_PTE_MASK 0x1ff
-#define GEN8_LEGACY_PDPS 4
-#define GEN8_PTES_PER_PAGE (PAGE_SIZE / sizeof(gen8_gtt_pte_t))
-#define GEN8_PDES_PER_PAGE (PAGE_SIZE / sizeof(gen8_ppgtt_pde_t))
+#define GEN8_LEGACY_PDPES 4
+#define GEN8_PTES I915_PTES(sizeof(gen8_pte_t))
#define PPAT_UNCACHED_INDEX (_PAGE_PWT | _PAGE_PCD)
#define PPAT_CACHED_PDE_INDEX 0 /* WB LLC */
@@ -111,15 +117,28 @@
enum i915_ggtt_view_type {
I915_GGTT_VIEW_NORMAL = 0,
+ I915_GGTT_VIEW_ROTATED
+};
+
+struct intel_rotation_info {
+ unsigned int height;
+ unsigned int pitch;
+ uint32_t pixel_format;
+ uint64_t fb_modifier;
};
struct i915_ggtt_view {
enum i915_ggtt_view_type type;
struct sg_table *pages;
+
+ union {
+ struct intel_rotation_info rotation_info;
+ };
};
extern const struct i915_ggtt_view i915_ggtt_view_normal;
+extern const struct i915_ggtt_view i915_ggtt_view_rotated;
enum i915_cache_level;
@@ -187,6 +206,28 @@
u32 flags);
};
+struct i915_page_table_entry {
+ struct page *page;
+ dma_addr_t daddr;
+
+ unsigned long *used_ptes;
+};
+
+struct i915_page_directory_entry {
+ struct page *page; /* NULL for GEN6-GEN7 */
+ union {
+ uint32_t pd_offset;
+ dma_addr_t daddr;
+ };
+
+ struct i915_page_table_entry *page_table[I915_PDES]; /* PDEs */
+};
+
+struct i915_page_directory_pointer_entry {
+ /* struct page *page; */
+ struct i915_page_directory_entry *page_directory[GEN8_LEGACY_PDPES];
+};
+
struct i915_address_space {
struct drm_mm mm;
struct drm_device *dev;
@@ -223,9 +264,12 @@
struct list_head inactive_list;
/* FIXME: Need a more generic return type */
- gen6_gtt_pte_t (*pte_encode)(dma_addr_t addr,
- enum i915_cache_level level,
- bool valid, u32 flags); /* Create a valid PTE */
+ gen6_pte_t (*pte_encode)(dma_addr_t addr,
+ enum i915_cache_level level,
+ bool valid, u32 flags); /* Create a valid PTE */
+ int (*allocate_va_range)(struct i915_address_space *vm,
+ uint64_t start,
+ uint64_t length);
void (*clear_range)(struct i915_address_space *vm,
uint64_t start,
uint64_t length,
@@ -269,30 +313,90 @@
struct i915_address_space base;
struct kref ref;
struct drm_mm_node node;
+ unsigned long pd_dirty_rings;
unsigned num_pd_entries;
unsigned num_pd_pages; /* gen8+ */
union {
- struct page **pt_pages;
- struct page **gen8_pt_pages[GEN8_LEGACY_PDPS];
- };
- struct page *pd_pages;
- union {
- uint32_t pd_offset;
- dma_addr_t pd_dma_addr[GEN8_LEGACY_PDPS];
- };
- union {
- dma_addr_t *pt_dma_addr;
- dma_addr_t *gen8_pt_dma_addr[4];
+ struct i915_page_directory_pointer_entry pdp;
+ struct i915_page_directory_entry pd;
};
+ struct i915_page_table_entry *scratch_pt;
+
struct drm_i915_file_private *file_priv;
+ gen6_pte_t __iomem *pd_addr;
+
int (*enable)(struct i915_hw_ppgtt *ppgtt);
int (*switch_mm)(struct i915_hw_ppgtt *ppgtt,
struct intel_engine_cs *ring);
void (*debug_dump)(struct i915_hw_ppgtt *ppgtt, struct seq_file *m);
};
+/* For each pde, iterates over every pde from start until start + length.
+ * If start and start+length are not perfectly divisible, the macro will round
+ * down, and up as needed. The macro modifies pde, start, and length. Dev is
+ * only used to differentiate shift values. Temp is temp. On gen6/7, start = 0,
+ * and length = 2G effectively iterates over every PDE in the system.
+ *
+ * XXX: temp is not actually needed, but it saves doing the ALIGN operation.
+ */
+#define gen6_for_each_pde(pt, pd, start, length, temp, iter) \
+ for (iter = gen6_pde_index(start); \
+ pt = (pd)->page_table[iter], length > 0 && iter < I915_PDES; \
+ iter++, \
+ temp = ALIGN(start+1, 1 << GEN6_PDE_SHIFT) - start, \
+ temp = min_t(unsigned, temp, length), \
+ start += temp, length -= temp)
+
+static inline uint32_t i915_pte_index(uint64_t address, uint32_t pde_shift)
+{
+ const uint32_t mask = NUM_PTE(pde_shift) - 1;
+
+ return (address >> PAGE_SHIFT) & mask;
+}
+
+/* Helper to count the number of PTEs within the given length. This count
+ * does not cross a page table boundary, so the max value would be
+ * GEN6_PTES for GEN6, and GEN8_PTES for GEN8.
+*/
+static inline uint32_t i915_pte_count(uint64_t addr, size_t length,
+ uint32_t pde_shift)
+{
+ const uint64_t mask = ~((1 << pde_shift) - 1);
+ uint64_t end;
+
+ WARN_ON(length == 0);
+ WARN_ON(offset_in_page(addr|length));
+
+ end = addr + length;
+
+ if ((addr & mask) != (end & mask))
+ return NUM_PTE(pde_shift) - i915_pte_index(addr, pde_shift);
+
+ return i915_pte_index(end, pde_shift) - i915_pte_index(addr, pde_shift);
+}
+
+static inline uint32_t i915_pde_index(uint64_t addr, uint32_t shift)
+{
+ return (addr >> shift) & I915_PDE_MASK;
+}
+
+static inline uint32_t gen6_pte_index(uint32_t addr)
+{
+ return i915_pte_index(addr, GEN6_PDE_SHIFT);
+}
+
+static inline size_t gen6_pte_count(uint32_t addr, uint32_t length)
+{
+ return i915_pte_count(addr, length, GEN6_PDE_SHIFT);
+}
+
+static inline uint32_t gen6_pde_index(uint32_t addr)
+{
+ return i915_pde_index(addr, GEN6_PDE_SHIFT);
+}
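
Taken together, these helpers carve a linear GTT offset into a PDE index, a PTE index within that PDE, and a PTE count that never crosses a page-table boundary. A standalone restatement of the gen6 arithmetic, assuming the usual 4 KiB pages so each PDE covers 4 MiB and 1024 PTEs:

	#include <stdint.h>
	#include <stdio.h>

	#define PAGE_SHIFT	12
	#define PDE_SHIFT	22
	#define PTES_PER_PDE	(1u << (PDE_SHIFT - PAGE_SHIFT))	/* 1024, i.e. GEN6_PTES */

	static uint32_t pte_index(uint64_t addr)
	{
		return (addr >> PAGE_SHIFT) & (PTES_PER_PDE - 1);
	}

	static uint32_t pde_index(uint64_t addr)
	{
		return (addr >> PDE_SHIFT) & 511;	/* I915_PDE_MASK */
	}

	/* Count ptes for [addr, addr + length) without crossing a PDE boundary. */
	static uint32_t pte_count(uint64_t addr, uint64_t length)
	{
		const uint64_t mask = ~((1ull << PDE_SHIFT) - 1);
		uint64_t end = addr + length;

		if ((addr & mask) != (end & mask))
			return PTES_PER_PDE - pte_index(addr);
		return pte_index(end) - pte_index(addr);
	}

	int main(void)
	{
		/* 8 MiB starting 2 pages into pde 2: the first chunk is clamped
		 * to the 1022 ptes left in that pde. */
		uint64_t addr = (2ull << PDE_SHIFT) + 2 * 4096, length = 8ull << 20;

		printf("pde %u, first pte %u, ptes this pde %u\n",
		       pde_index(addr), pte_index(addr), pte_count(addr, length));
		return 0;
	}
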
+
int i915_gem_gtt_init(struct drm_device *dev);
void i915_gem_init_global_gtt(struct drm_device *dev);
void i915_global_gtt_cleanup(struct drm_device *dev);
@@ -321,4 +425,14 @@
int __must_check i915_gem_gtt_prepare_object(struct drm_i915_gem_object *obj);
void i915_gem_gtt_finish_object(struct drm_i915_gem_object *obj);
+static inline bool
+i915_ggtt_view_equal(const struct i915_ggtt_view *a,
+ const struct i915_ggtt_view *b)
+{
+ if (WARN_ON(!a || !b))
+ return false;
+
+ return a->type == b->type;
+}
+
#endif
diff --git a/drivers/gpu/drm/i915/i915_gem_shrinker.c b/drivers/gpu/drm/i915/i915_gem_shrinker.c
new file mode 100644
index 0000000..f7929e7
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_gem_shrinker.c
@@ -0,0 +1,335 @@
+/*
+ * Copyright © 2008-2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include <linux/oom.h>
+#include <linux/shmem_fs.h>
+#include <linux/slab.h>
+#include <linux/swap.h>
+#include <linux/pci.h>
+#include <linux/dma-buf.h>
+#include <drm/drmP.h>
+#include <drm/i915_drm.h>
+
+#include "i915_drv.h"
+#include "i915_trace.h"
+
+static bool mutex_is_locked_by(struct mutex *mutex, struct task_struct *task)
+{
+ if (!mutex_is_locked(mutex))
+ return false;
+
+#if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_MUTEXES)
+ return mutex->owner == task;
+#else
+ /* Since UP may be pre-empted, we cannot assume that we own the lock */
+ return false;
+#endif
+}
+
+/**
+ * i915_gem_shrink - Shrink buffer object caches
+ * @dev_priv: i915 device
+ * @target: amount of memory to make available, in pages
+ * @flags: control flags for selecting cache types
+ *
+ * This function is the main interface to the shrinker. It will try to release
+ * up to @target pages of main memory backing storage from buffer objects.
+ * Selection of the specific caches can be done with @flags. This is e.g. useful
+ * when purgeable objects should be removed from caches preferentially.
+ *
+ * Note that it's not guaranteed that the released amount is actually available as
+ * free system memory - the pages might still be in use due to other reasons
+ * (like cpu mmaps) or the mm core has reused them before we could grab them.
+ * Therefore code that needs to explicitly shrink buffer object caches (e.g. to
+ * avoid deadlocks in memory reclaim) must fall back to i915_gem_shrink_all().
+ *
+ * Also note that any kind of pinning (both per-vma address space pins and
+ * backing storage pins at the buffer object level) result in the shrinker code
+ * having to skip the object.
+ *
+ * Returns:
+ * The number of pages of backing storage actually released.
+ */
+unsigned long
+i915_gem_shrink(struct drm_i915_private *dev_priv,
+ long target, unsigned flags)
+{
+ const struct {
+ struct list_head *list;
+ unsigned int bit;
+ } phases[] = {
+ { &dev_priv->mm.unbound_list, I915_SHRINK_UNBOUND },
+ { &dev_priv->mm.bound_list, I915_SHRINK_BOUND },
+ { NULL, 0 },
+ }, *phase;
+ unsigned long count = 0;
+
+ /*
+ * As we may completely rewrite the (un)bound list whilst unbinding
+ * (due to retiring requests) we have to strictly process only
+ * one element of the list at the time, and recheck the list
+ * on every iteration.
+ *
+ * In particular, we must hold a reference whilst removing the
+ * object as we may end up waiting for and/or retiring the objects.
+ * This might release the final reference (held by the active list)
+ * and result in the object being freed from under us. This is
+ * similar to the precautions the eviction code must take whilst
+ * removing objects.
+ *
+ * Also note that although these lists do not hold a reference to
+ * the object we can safely grab one here: The final object
+ * unreferencing and the bound_list are both protected by the
+ * dev->struct_mutex and so we won't ever be able to observe an
+ * object on the bound_list with a reference count equals 0.
+ */
+ for (phase = phases; phase->list; phase++) {
+ struct list_head still_in_list;
+
+ if ((flags & phase->bit) == 0)
+ continue;
+
+ INIT_LIST_HEAD(&still_in_list);
+ while (count < target && !list_empty(phase->list)) {
+ struct drm_i915_gem_object *obj;
+ struct i915_vma *vma, *v;
+
+ obj = list_first_entry(phase->list,
+ typeof(*obj), global_list);
+ list_move_tail(&obj->global_list, &still_in_list);
+
+ if (flags & I915_SHRINK_PURGEABLE &&
+ obj->madv != I915_MADV_DONTNEED)
+ continue;
+
+ drm_gem_object_reference(&obj->base);
+
+ /* For the unbound phase, this should be a no-op! */
+ list_for_each_entry_safe(vma, v,
+ &obj->vma_list, vma_link)
+ if (i915_vma_unbind(vma))
+ break;
+
+ if (i915_gem_object_put_pages(obj) == 0)
+ count += obj->base.size >> PAGE_SHIFT;
+
+ drm_gem_object_unreference(&obj->base);
+ }
+ list_splice(&still_in_list, phase->list);
+ }
+
+ return count;
+}
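
The phases[] table above is what lets one loop serve both the unbound and bound lists: each phase names a list and the I915_SHRINK_* bit that enables it, and the walk simply skips phases whose bit is absent from @flags. A small userspace sketch of that table-driven selection (the flag names and page counts are illustrative only):

	#include <stdio.h>

	#define SHRINK_UNBOUND	(1u << 0)	/* illustrative stand-ins for the */
	#define SHRINK_BOUND	(1u << 1)	/* I915_SHRINK_* flag bits        */

	struct phase {
		const char *name;
		unsigned int bit;
	};

	/* Walk a terminator-ended phase table; a phase only runs when its bit
	 * is present in @flags, mirroring the unbound-then-bound walk above. */
	static unsigned long shrink(unsigned long target, unsigned int flags)
	{
		static const struct phase phases[] = {
			{ "unbound", SHRINK_UNBOUND },
			{ "bound",   SHRINK_BOUND },
			{ NULL, 0 },
		};
		const struct phase *phase;
		unsigned long count = 0;

		for (phase = phases; phase->name; phase++) {
			if ((flags & phase->bit) == 0)
				continue;
			/* Stand-in for releasing objects from this list. */
			printf("scanning %s list\n", phase->name);
			count += 16;
			if (count >= target)
				break;
		}
		return count;
	}

	int main(void)
	{
		printf("released %lu pages\n", shrink(8, SHRINK_BOUND | SHRINK_UNBOUND));
		return 0;
	}
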
+
+/**
+ * i915_gem_shrink_all - Shrink buffer object caches completely
+ * @dev_priv: i915 device
+ *
+ * This is a simple wrapper around i915_gem_shrink() to aggressively shrink all
+ * caches completely. It also first waits for and retires all outstanding
+ * requests to also be able to release backing storage for active objects.
+ *
+ * This should only be used in code to intentionally quiesce the gpu or as a
+ * last-ditch effort when memory seems to have run out.
+ *
+ * Returns:
+ * The number of pages of backing storage actually released.
+ */
+unsigned long i915_gem_shrink_all(struct drm_i915_private *dev_priv)
+{
+ i915_gem_evict_everything(dev_priv->dev);
+ return i915_gem_shrink(dev_priv, LONG_MAX,
+ I915_SHRINK_BOUND | I915_SHRINK_UNBOUND);
+}
+
+static bool i915_gem_shrinker_lock(struct drm_device *dev, bool *unlock)
+{
+ if (!mutex_trylock(&dev->struct_mutex)) {
+ if (!mutex_is_locked_by(&dev->struct_mutex, current))
+ return false;
+
+ if (to_i915(dev)->mm.shrinker_no_lock_stealing)
+ return false;
+
+ *unlock = false;
+ } else
+ *unlock = true;
+
+ return true;
+}
+
+static int num_vma_bound(struct drm_i915_gem_object *obj)
+{
+ struct i915_vma *vma;
+ int count = 0;
+
+ list_for_each_entry(vma, &obj->vma_list, vma_link)
+ if (drm_mm_node_allocated(&vma->node))
+ count++;
+
+ return count;
+}
+
+static unsigned long
+i915_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc)
+{
+ struct drm_i915_private *dev_priv =
+ container_of(shrinker, struct drm_i915_private, mm.shrinker);
+ struct drm_device *dev = dev_priv->dev;
+ struct drm_i915_gem_object *obj;
+ unsigned long count;
+ bool unlock;
+
+ if (!i915_gem_shrinker_lock(dev, &unlock))
+ return 0;
+
+ count = 0;
+ list_for_each_entry(obj, &dev_priv->mm.unbound_list, global_list)
+ if (obj->pages_pin_count == 0)
+ count += obj->base.size >> PAGE_SHIFT;
+
+ list_for_each_entry(obj, &dev_priv->mm.bound_list, global_list) {
+ if (!i915_gem_obj_is_pinned(obj) &&
+ obj->pages_pin_count == num_vma_bound(obj))
+ count += obj->base.size >> PAGE_SHIFT;
+ }
+
+ if (unlock)
+ mutex_unlock(&dev->struct_mutex);
+
+ return count;
+}
+
+static unsigned long
+i915_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
+{
+ struct drm_i915_private *dev_priv =
+ container_of(shrinker, struct drm_i915_private, mm.shrinker);
+ struct drm_device *dev = dev_priv->dev;
+ unsigned long freed;
+ bool unlock;
+
+ if (!i915_gem_shrinker_lock(dev, &unlock))
+ return SHRINK_STOP;
+
+ freed = i915_gem_shrink(dev_priv,
+ sc->nr_to_scan,
+ I915_SHRINK_BOUND |
+ I915_SHRINK_UNBOUND |
+ I915_SHRINK_PURGEABLE);
+ if (freed < sc->nr_to_scan)
+ freed += i915_gem_shrink(dev_priv,
+ sc->nr_to_scan - freed,
+ I915_SHRINK_BOUND |
+ I915_SHRINK_UNBOUND);
+ if (unlock)
+ mutex_unlock(&dev->struct_mutex);
+
+ return freed;
+}
+
+static int
+i915_gem_shrinker_oom(struct notifier_block *nb, unsigned long event, void *ptr)
+{
+ struct drm_i915_private *dev_priv =
+ container_of(nb, struct drm_i915_private, mm.oom_notifier);
+ struct drm_device *dev = dev_priv->dev;
+ struct drm_i915_gem_object *obj;
+ unsigned long timeout = msecs_to_jiffies(5000) + 1;
+ unsigned long pinned, bound, unbound, freed_pages;
+ bool was_interruptible;
+ bool unlock;
+
+ while (!i915_gem_shrinker_lock(dev, &unlock) && --timeout) {
+ schedule_timeout_killable(1);
+ if (fatal_signal_pending(current))
+ return NOTIFY_DONE;
+ }
+ if (timeout == 0) {
+ pr_err("Unable to purge GPU memory due to lock contention.\n");
+ return NOTIFY_DONE;
+ }
+
+ was_interruptible = dev_priv->mm.interruptible;
+ dev_priv->mm.interruptible = false;
+
+ freed_pages = i915_gem_shrink_all(dev_priv);
+
+ dev_priv->mm.interruptible = was_interruptible;
+
+ /* Because we may be allocating inside our own driver, we cannot
+ * assert that there are no objects with pinned pages that are not
+ * being pointed to by hardware.
+ */
+ unbound = bound = pinned = 0;
+ list_for_each_entry(obj, &dev_priv->mm.unbound_list, global_list) {
+ if (!obj->base.filp) /* not backed by a freeable object */
+ continue;
+
+ if (obj->pages_pin_count)
+ pinned += obj->base.size;
+ else
+ unbound += obj->base.size;
+ }
+ list_for_each_entry(obj, &dev_priv->mm.bound_list, global_list) {
+ if (!obj->base.filp)
+ continue;
+
+ if (obj->pages_pin_count)
+ pinned += obj->base.size;
+ else
+ bound += obj->base.size;
+ }
+
+ if (unlock)
+ mutex_unlock(&dev->struct_mutex);
+
+ if (freed_pages || unbound || bound)
+ pr_info("Purging GPU memory, %lu bytes freed, %lu bytes still pinned.\n",
+ freed_pages << PAGE_SHIFT, pinned);
+ if (unbound || bound)
+ pr_err("%lu and %lu bytes still available in the "
+ "bound and unbound GPU page lists.\n",
+ bound, unbound);
+
+ *(unsigned long *)ptr += freed_pages;
+ return NOTIFY_DONE;
+}
+
+/**
+ * i915_gem_shrinker_init - Initialize i915 shrinker
+ * @dev_priv: i915 device
+ *
+ * This function registers and sets up the i915 shrinker and OOM handler.
+ */
+void i915_gem_shrinker_init(struct drm_i915_private *dev_priv)
+{
+ dev_priv->mm.shrinker.scan_objects = i915_gem_shrinker_scan;
+ dev_priv->mm.shrinker.count_objects = i915_gem_shrinker_count;
+ dev_priv->mm.shrinker.seeks = DEFAULT_SEEKS;
+ register_shrinker(&dev_priv->mm.shrinker);
+
+ dev_priv->mm.oom_notifier.notifier_call = i915_gem_shrinker_oom;
+ register_oom_notifier(&dev_priv->mm.oom_notifier);
+}
diff --git a/drivers/gpu/drm/i915/i915_gem_stolen.c b/drivers/gpu/drm/i915/i915_gem_stolen.c
index 9c6f93e..f8da716 100644
--- a/drivers/gpu/drm/i915/i915_gem_stolen.c
+++ b/drivers/gpu/drm/i915/i915_gem_stolen.c
@@ -231,7 +231,7 @@
dev_priv->mm.stolen_base + compressed_llb->start);
}
- dev_priv->fbc.size = size / dev_priv->fbc.threshold;
+ dev_priv->fbc.uncompressed_size = size;
DRM_DEBUG_KMS("reserved %d bytes of contiguous stolen space for FBC\n",
size);
@@ -253,7 +253,7 @@
if (!drm_mm_initialized(&dev_priv->mm.stolen))
return -ENODEV;
- if (size < dev_priv->fbc.size)
+ if (size <= dev_priv->fbc.uncompressed_size)
return 0;
/* Release any current block */
@@ -266,7 +266,7 @@
{
struct drm_i915_private *dev_priv = dev->dev_private;
- if (dev_priv->fbc.size == 0)
+ if (dev_priv->fbc.uncompressed_size == 0)
return;
drm_mm_remove_node(&dev_priv->fbc.compressed_fb);
@@ -276,7 +276,7 @@
kfree(dev_priv->fbc.compressed_llb);
}
- dev_priv->fbc.size = 0;
+ dev_priv->fbc.uncompressed_size = 0;
}
void i915_gem_cleanup_stolen(struct drm_device *dev)
diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
index 48ddbf4..1d4e60d 100644
--- a/drivers/gpu/drm/i915/i915_gpu_error.c
+++ b/drivers/gpu/drm/i915/i915_gpu_error.c
@@ -386,6 +386,11 @@
if (INTEL_INFO(dev)->gen >= 6) {
err_printf(m, "ERROR: 0x%08x\n", error->error);
+
+ if (INTEL_INFO(dev)->gen >= 8)
+ err_printf(m, "FAULT_TLB_DATA: 0x%08x 0x%08x\n",
+ error->fault_data1, error->fault_data0);
+
err_printf(m, "DONE_REG: 0x%08x\n", error->done_reg);
}
@@ -555,7 +560,14 @@
}
i915_error_object_free(error->semaphore_obj);
+
+ for (i = 0; i < error->vm_count; i++)
+ kfree(error->active_bo[i]);
+
kfree(error->active_bo);
+ kfree(error->active_bo_count);
+ kfree(error->pinned_bo);
+ kfree(error->pinned_bo_count);
kfree(error->overlay);
kfree(error->display);
kfree(error);
@@ -994,12 +1006,11 @@
i915_error_ggtt_object_create(dev_priv,
ring->scratch.obj);
- if (request->file_priv) {
+ if (request->pid) {
struct task_struct *task;
rcu_read_lock();
- task = pid_task(request->file_priv->file->pid,
- PIDTYPE_PID);
+ task = pid_task(request->pid, PIDTYPE_PID);
if (task) {
strcpy(error->ring[i].comm, task->comm);
error->ring[i].pid = task->pid;
@@ -1165,6 +1176,11 @@
if (IS_GEN7(dev))
error->err_int = I915_READ(GEN7_ERR_INT);
+ if (INTEL_INFO(dev)->gen >= 8) {
+ error->fault_data0 = I915_READ(GEN8_FAULT_TLB_DATA0);
+ error->fault_data1 = I915_READ(GEN8_FAULT_TLB_DATA1);
+ }
+
if (IS_GEN6(dev)) {
error->forcewake = I915_READ(FORCEWAKE);
error->gab_ctl = I915_READ(GAB_CTL);
diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index ede5bbb..6d49443 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -277,6 +277,7 @@
I915_WRITE(reg, dev_priv->pm_rps_events);
I915_WRITE(reg, dev_priv->pm_rps_events);
POSTING_READ(reg);
+ dev_priv->rps.pm_iir = 0;
spin_unlock_irq(&dev_priv->irq_lock);
}
@@ -330,12 +331,10 @@
__gen6_disable_pm_irq(dev_priv, dev_priv->pm_rps_events);
I915_WRITE(gen6_pm_ier(dev_priv), I915_READ(gen6_pm_ier(dev_priv)) &
~dev_priv->pm_rps_events);
- I915_WRITE(gen6_pm_iir(dev_priv), dev_priv->pm_rps_events);
- I915_WRITE(gen6_pm_iir(dev_priv), dev_priv->pm_rps_events);
-
- dev_priv->rps.pm_iir = 0;
spin_unlock_irq(&dev_priv->irq_lock);
+
+ synchronize_irq(dev->irq);
}
/**
@@ -492,31 +491,6 @@
spin_unlock_irq(&dev_priv->irq_lock);
}
-/**
- * i915_pipe_enabled - check if a pipe is enabled
- * @dev: DRM device
- * @pipe: pipe to check
- *
- * Reading certain registers when the pipe is disabled can hang the chip.
- * Use this routine to make sure the PLL is running and the pipe is active
- * before reading such registers if unsure.
- */
-static int
-i915_pipe_enabled(struct drm_device *dev, int pipe)
-{
- struct drm_i915_private *dev_priv = dev->dev_private;
-
- if (drm_core_check_feature(dev, DRIVER_MODESET)) {
- /* Locking is horribly broken here, but whatever. */
- struct drm_crtc *crtc = dev_priv->pipe_to_crtc_mapping[pipe];
- struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
-
- return intel_crtc->active;
- } else {
- return I915_READ(PIPECONF(pipe)) & PIPECONF_ENABLE;
- }
-}
-
/*
* This timing diagram depicts the video signal in and
* around the vertical blanking period.
@@ -582,34 +556,16 @@
unsigned long high_frame;
unsigned long low_frame;
u32 high1, high2, low, pixel, vbl_start, hsync_start, htotal;
+ struct intel_crtc *intel_crtc =
+ to_intel_crtc(dev_priv->pipe_to_crtc_mapping[pipe]);
+ const struct drm_display_mode *mode =
+ &intel_crtc->config->base.adjusted_mode;
- if (!i915_pipe_enabled(dev, pipe)) {
- DRM_DEBUG_DRIVER("trying to get vblank count for disabled "
- "pipe %c\n", pipe_name(pipe));
- return 0;
- }
-
- if (drm_core_check_feature(dev, DRIVER_MODESET)) {
- struct intel_crtc *intel_crtc =
- to_intel_crtc(dev_priv->pipe_to_crtc_mapping[pipe]);
- const struct drm_display_mode *mode =
- &intel_crtc->config->base.adjusted_mode;
-
- htotal = mode->crtc_htotal;
- hsync_start = mode->crtc_hsync_start;
- vbl_start = mode->crtc_vblank_start;
- if (mode->flags & DRM_MODE_FLAG_INTERLACE)
- vbl_start = DIV_ROUND_UP(vbl_start, 2);
- } else {
- enum transcoder cpu_transcoder = (enum transcoder) pipe;
-
- htotal = ((I915_READ(HTOTAL(cpu_transcoder)) >> 16) & 0x1fff) + 1;
- hsync_start = (I915_READ(HSYNC(cpu_transcoder)) & 0x1fff) + 1;
- vbl_start = (I915_READ(VBLANK(cpu_transcoder)) & 0x1fff) + 1;
- if ((I915_READ(PIPECONF(cpu_transcoder)) &
- PIPECONF_INTERLACE_MASK) != PIPECONF_PROGRESSIVE)
- vbl_start = DIV_ROUND_UP(vbl_start, 2);
- }
+ htotal = mode->crtc_htotal;
+ hsync_start = mode->crtc_hsync_start;
+ vbl_start = mode->crtc_vblank_start;
+ if (mode->flags & DRM_MODE_FLAG_INTERLACE)
+ vbl_start = DIV_ROUND_UP(vbl_start, 2);
/* Convert to pixel count */
vbl_start *= htotal;
@@ -648,12 +604,6 @@
struct drm_i915_private *dev_priv = dev->dev_private;
int reg = PIPE_FRMCOUNT_GM45(pipe);
- if (!i915_pipe_enabled(dev, pipe)) {
- DRM_DEBUG_DRIVER("trying to get vblank count for disabled "
- "pipe %c\n", pipe_name(pipe));
- return 0;
- }
-
return I915_READ(reg);
}
@@ -840,7 +790,7 @@
return -EINVAL;
}
- if (!crtc->enabled) {
+ if (!crtc->state->enable) {
DRM_DEBUG_KMS("crtc %d is disabled\n", pipe);
return -EBUSY;
}
@@ -1046,129 +996,73 @@
wake_up_all(&ring->irq_queue);
}
-static u32 vlv_c0_residency(struct drm_i915_private *dev_priv,
- struct intel_rps_ei *rps_ei)
+static void vlv_c0_read(struct drm_i915_private *dev_priv,
+ struct intel_rps_ei *ei)
{
- u32 cz_ts, cz_freq_khz;
- u32 render_count, media_count;
- u32 elapsed_render, elapsed_media, elapsed_time;
- u32 residency = 0;
-
- cz_ts = vlv_punit_read(dev_priv, PUNIT_REG_CZ_TIMESTAMP);
- cz_freq_khz = DIV_ROUND_CLOSEST(dev_priv->mem_freq * 1000, 4);
-
- render_count = I915_READ(VLV_RENDER_C0_COUNT_REG);
- media_count = I915_READ(VLV_MEDIA_C0_COUNT_REG);
-
- if (rps_ei->cz_clock == 0) {
- rps_ei->cz_clock = cz_ts;
- rps_ei->render_c0 = render_count;
- rps_ei->media_c0 = media_count;
-
- return dev_priv->rps.cur_freq;
- }
-
- elapsed_time = cz_ts - rps_ei->cz_clock;
- rps_ei->cz_clock = cz_ts;
-
- elapsed_render = render_count - rps_ei->render_c0;
- rps_ei->render_c0 = render_count;
-
- elapsed_media = media_count - rps_ei->media_c0;
- rps_ei->media_c0 = media_count;
-
- /* Convert all the counters into common unit of milli sec */
- elapsed_time /= VLV_CZ_CLOCK_TO_MILLI_SEC;
- elapsed_render /= cz_freq_khz;
- elapsed_media /= cz_freq_khz;
-
- /*
- * Calculate overall C0 residency percentage
- * only if elapsed time is non zero
- */
- if (elapsed_time) {
- residency =
- ((max(elapsed_render, elapsed_media) * 100)
- / elapsed_time);
- }
-
- return residency;
+ ei->cz_clock = vlv_punit_read(dev_priv, PUNIT_REG_CZ_TIMESTAMP);
+ ei->render_c0 = I915_READ(VLV_RENDER_C0_COUNT);
+ ei->media_c0 = I915_READ(VLV_MEDIA_C0_COUNT);
}
-/**
- * vlv_calc_delay_from_C0_counters - Increase/Decrease freq based on GPU
- * busy-ness calculated from C0 counters of render & media power wells
- * @dev_priv: DRM device private
- *
- */
-static int vlv_calc_delay_from_C0_counters(struct drm_i915_private *dev_priv)
+static bool vlv_c0_above(struct drm_i915_private *dev_priv,
+ const struct intel_rps_ei *old,
+ const struct intel_rps_ei *now,
+ int threshold)
{
- u32 residency_C0_up = 0, residency_C0_down = 0;
- int new_delay, adj;
+ u64 time, c0;
- dev_priv->rps.ei_interrupt_count++;
+ if (old->cz_clock == 0)
+ return false;
- WARN_ON(!mutex_is_locked(&dev_priv->rps.hw_lock));
+ time = now->cz_clock - old->cz_clock;
+ time *= threshold * dev_priv->mem_freq;
-
- if (dev_priv->rps.up_ei.cz_clock == 0) {
- vlv_c0_residency(dev_priv, &dev_priv->rps.up_ei);
- vlv_c0_residency(dev_priv, &dev_priv->rps.down_ei);
- return dev_priv->rps.cur_freq;
- }
-
-
- /*
- * To down throttle, C0 residency should be less than down threshold
- * for continous EI intervals. So calculate down EI counters
- * once in VLV_INT_COUNT_FOR_DOWN_EI
+ /* Workload can be split between render + media, e.g. SwapBuffers
+ * being blitted in X after being rendered in mesa. To account for
+ * this we need to combine both engines into our activity counter.
*/
- if (dev_priv->rps.ei_interrupt_count == VLV_INT_COUNT_FOR_DOWN_EI) {
+ c0 = now->render_c0 - old->render_c0;
+ c0 += now->media_c0 - old->media_c0;
+ c0 *= 100 * VLV_CZ_CLOCK_TO_MILLI_SEC * 4 / 1000;
- dev_priv->rps.ei_interrupt_count = 0;
+ return c0 >= time;
+}
- residency_C0_down = vlv_c0_residency(dev_priv,
- &dev_priv->rps.down_ei);
- } else {
- residency_C0_up = vlv_c0_residency(dev_priv,
- &dev_priv->rps.up_ei);
+void gen6_rps_reset_ei(struct drm_i915_private *dev_priv)
+{
+ vlv_c0_read(dev_priv, &dev_priv->rps.down_ei);
+ dev_priv->rps.up_ei = dev_priv->rps.down_ei;
+}
+
+static u32 vlv_wa_c0_ei(struct drm_i915_private *dev_priv, u32 pm_iir)
+{
+ struct intel_rps_ei now;
+ u32 events = 0;
+
+ if ((pm_iir & (GEN6_PM_RP_DOWN_EI_EXPIRED | GEN6_PM_RP_UP_EI_EXPIRED)) == 0)
+ return 0;
+
+ vlv_c0_read(dev_priv, &now);
+ if (now.cz_clock == 0)
+ return 0;
+
+ if (pm_iir & GEN6_PM_RP_DOWN_EI_EXPIRED) {
+ if (!vlv_c0_above(dev_priv,
+ &dev_priv->rps.down_ei, &now,
+ VLV_RP_DOWN_EI_THRESHOLD))
+ events |= GEN6_PM_RP_DOWN_THRESHOLD;
+ dev_priv->rps.down_ei = now;
}
- new_delay = dev_priv->rps.cur_freq;
-
- adj = dev_priv->rps.last_adj;
- /* C0 residency is greater than UP threshold. Increase Frequency */
- if (residency_C0_up >= VLV_RP_UP_EI_THRESHOLD) {
- if (adj > 0)
- adj *= 2;
- else
- adj = 1;
-
- if (dev_priv->rps.cur_freq < dev_priv->rps.max_freq_softlimit)
- new_delay = dev_priv->rps.cur_freq + adj;
-
- /*
- * For better performance, jump directly
- * to RPe if we're below it.
- */
- if (new_delay < dev_priv->rps.efficient_freq)
- new_delay = dev_priv->rps.efficient_freq;
-
- } else if (!dev_priv->rps.ei_interrupt_count &&
- (residency_C0_down < VLV_RP_DOWN_EI_THRESHOLD)) {
- if (adj < 0)
- adj *= 2;
- else
- adj = -1;
- /*
- * This means, C0 residency is less than down threshold over
- * a period of VLV_INT_COUNT_FOR_DOWN_EI. So, reduce the freq
- */
- if (dev_priv->rps.cur_freq > dev_priv->rps.min_freq_softlimit)
- new_delay = dev_priv->rps.cur_freq + adj;
+ if (pm_iir & GEN6_PM_RP_UP_EI_EXPIRED) {
+ if (vlv_c0_above(dev_priv,
+ &dev_priv->rps.up_ei, &now,
+ VLV_RP_UP_EI_THRESHOLD))
+ events |= GEN6_PM_RP_UP_THRESHOLD;
+ dev_priv->rps.up_ei = now;
}
- return new_delay;
+ return events;
}
static void gen6_pm_rps_work(struct work_struct *work)
@@ -1198,6 +1092,8 @@
mutex_lock(&dev_priv->rps.hw_lock);
+ pm_iir |= vlv_wa_c0_ei(dev_priv, pm_iir);
+
adj = dev_priv->rps.last_adj;
if (pm_iir & GEN6_PM_RP_UP_THRESHOLD) {
if (adj > 0)
@@ -1220,8 +1116,6 @@
else
new_delay = dev_priv->rps.min_freq_softlimit;
adj = 0;
- } else if (pm_iir & GEN6_PM_RP_UP_EI_EXPIRED) {
- new_delay = vlv_calc_delay_from_C0_counters(dev_priv);
} else if (pm_iir & GEN6_PM_RP_DOWN_THRESHOLD) {
if (adj < 0)
adj *= 2;
@@ -1243,10 +1137,7 @@
dev_priv->rps.last_adj = new_delay - dev_priv->rps.cur_freq;
- if (IS_VALLEYVIEW(dev_priv->dev))
- valleyview_set_rps(dev_priv->dev, new_delay);
- else
- gen6_set_rps(dev_priv->dev, new_delay);
+ intel_set_rps(dev_priv->dev, new_delay);
mutex_unlock(&dev_priv->rps.hw_lock);
}
@@ -1748,11 +1639,6 @@
* the work queue. */
static void gen6_rps_irq_handler(struct drm_i915_private *dev_priv, u32 pm_iir)
{
- /* TODO: RPS on GEN9+ is not supported yet. */
- if (WARN_ONCE(INTEL_INFO(dev_priv)->gen >= 9,
- "GEN9+: unexpected RPS IRQ\n"))
- return;
-
if (pm_iir & dev_priv->pm_rps_events) {
spin_lock(&dev_priv->irq_lock);
gen6_disable_pm_irq(dev_priv, pm_iir & dev_priv->pm_rps_events);
@@ -2662,9 +2548,6 @@
struct drm_i915_private *dev_priv = dev->dev_private;
unsigned long irqflags;
- if (!i915_pipe_enabled(dev, pipe))
- return -EINVAL;
-
spin_lock_irqsave(&dev_priv->irq_lock, irqflags);
if (INTEL_INFO(dev)->gen >= 4)
i915_enable_pipestat(dev_priv, pipe,
@@ -2684,9 +2567,6 @@
uint32_t bit = (INTEL_INFO(dev)->gen >= 7) ? DE_PIPE_VBLANK_IVB(pipe) :
DE_PIPE_VBLANK(pipe);
- if (!i915_pipe_enabled(dev, pipe))
- return -EINVAL;
-
spin_lock_irqsave(&dev_priv->irq_lock, irqflags);
ironlake_enable_display_irq(dev_priv, bit);
spin_unlock_irqrestore(&dev_priv->irq_lock, irqflags);
@@ -2699,9 +2579,6 @@
struct drm_i915_private *dev_priv = dev->dev_private;
unsigned long irqflags;
- if (!i915_pipe_enabled(dev, pipe))
- return -EINVAL;
-
spin_lock_irqsave(&dev_priv->irq_lock, irqflags);
i915_enable_pipestat(dev_priv, pipe,
PIPE_START_VBLANK_INTERRUPT_STATUS);
@@ -2715,9 +2592,6 @@
struct drm_i915_private *dev_priv = dev->dev_private;
unsigned long irqflags;
- if (!i915_pipe_enabled(dev, pipe))
- return -EINVAL;
-
spin_lock_irqsave(&dev_priv->irq_lock, irqflags);
dev_priv->de_irq_mask[pipe] &= ~GEN8_PIPE_VBLANK;
I915_WRITE(GEN8_DE_PIPE_IMR(pipe), dev_priv->de_irq_mask[pipe]);
@@ -2769,9 +2643,6 @@
struct drm_i915_private *dev_priv = dev->dev_private;
unsigned long irqflags;
- if (!i915_pipe_enabled(dev, pipe))
- return;
-
spin_lock_irqsave(&dev_priv->irq_lock, irqflags);
dev_priv->de_irq_mask[pipe] |= GEN8_PIPE_VBLANK;
I915_WRITE(GEN8_DE_PIPE_IMR(pipe), dev_priv->de_irq_mask[pipe]);
@@ -3236,15 +3107,24 @@
ibx_irq_reset(dev);
}
-void gen8_irq_power_well_post_enable(struct drm_i915_private *dev_priv)
+void gen8_irq_power_well_post_enable(struct drm_i915_private *dev_priv,
+ unsigned int pipe_mask)
{
uint32_t extra_ier = GEN8_PIPE_VBLANK | GEN8_PIPE_FIFO_UNDERRUN;
spin_lock_irq(&dev_priv->irq_lock);
- GEN8_IRQ_INIT_NDX(DE_PIPE, PIPE_B, dev_priv->de_irq_mask[PIPE_B],
- ~dev_priv->de_irq_mask[PIPE_B] | extra_ier);
- GEN8_IRQ_INIT_NDX(DE_PIPE, PIPE_C, dev_priv->de_irq_mask[PIPE_C],
- ~dev_priv->de_irq_mask[PIPE_C] | extra_ier);
+ if (pipe_mask & 1 << PIPE_A)
+ GEN8_IRQ_INIT_NDX(DE_PIPE, PIPE_A,
+ dev_priv->de_irq_mask[PIPE_A],
+ ~dev_priv->de_irq_mask[PIPE_A] | extra_ier);
+ if (pipe_mask & 1 << PIPE_B)
+ GEN8_IRQ_INIT_NDX(DE_PIPE, PIPE_B,
+ dev_priv->de_irq_mask[PIPE_B],
+ ~dev_priv->de_irq_mask[PIPE_B] | extra_ier);
+ if (pipe_mask & 1 << PIPE_C)
+ GEN8_IRQ_INIT_NDX(DE_PIPE, PIPE_C,
+ dev_priv->de_irq_mask[PIPE_C],
+ ~dev_priv->de_irq_mask[PIPE_C] | extra_ier);
spin_unlock_irq(&dev_priv->irq_lock);
}
@@ -3718,14 +3598,12 @@
~(I915_DISPLAY_PIPE_A_EVENT_INTERRUPT |
I915_DISPLAY_PIPE_B_EVENT_INTERRUPT |
I915_DISPLAY_PLANE_A_FLIP_PENDING_INTERRUPT |
- I915_DISPLAY_PLANE_B_FLIP_PENDING_INTERRUPT |
- I915_RENDER_COMMAND_PARSER_ERROR_INTERRUPT);
+ I915_DISPLAY_PLANE_B_FLIP_PENDING_INTERRUPT);
I915_WRITE16(IMR, dev_priv->irq_mask);
I915_WRITE16(IER,
I915_DISPLAY_PIPE_A_EVENT_INTERRUPT |
I915_DISPLAY_PIPE_B_EVENT_INTERRUPT |
- I915_RENDER_COMMAND_PARSER_ERROR_INTERRUPT |
I915_USER_INTERRUPT);
POSTING_READ16(IER);
@@ -3887,14 +3765,12 @@
I915_DISPLAY_PIPE_A_EVENT_INTERRUPT |
I915_DISPLAY_PIPE_B_EVENT_INTERRUPT |
I915_DISPLAY_PLANE_A_FLIP_PENDING_INTERRUPT |
- I915_DISPLAY_PLANE_B_FLIP_PENDING_INTERRUPT |
- I915_RENDER_COMMAND_PARSER_ERROR_INTERRUPT);
+ I915_DISPLAY_PLANE_B_FLIP_PENDING_INTERRUPT);
enable_mask =
I915_ASLE_INTERRUPT |
I915_DISPLAY_PIPE_A_EVENT_INTERRUPT |
I915_DISPLAY_PIPE_B_EVENT_INTERRUPT |
- I915_RENDER_COMMAND_PARSER_ERROR_INTERRUPT |
I915_USER_INTERRUPT;
if (I915_HAS_HOTPLUG(dev)) {
@@ -4362,7 +4238,7 @@
/* Let's track the enabled rps events */
if (IS_VALLEYVIEW(dev_priv) && !IS_CHERRYVIEW(dev_priv))
/* WaGsvRC0ResidencyMethod:vlv */
- dev_priv->pm_rps_events = GEN6_PM_RP_UP_EI_EXPIRED;
+ dev_priv->pm_rps_events = GEN6_PM_RP_DOWN_EI_EXPIRED | GEN6_PM_RP_UP_EI_EXPIRED;
else
dev_priv->pm_rps_events = GEN6_PM_RPS_EVENTS;
@@ -4392,10 +4268,8 @@
if (!IS_GEN2(dev_priv))
dev->vblank_disable_immediate = true;
- if (drm_core_check_feature(dev, DRIVER_MODESET)) {
- dev->driver->get_vblank_timestamp = i915_get_vblank_timestamp;
- dev->driver->get_scanout_position = i915_get_crtc_scanoutpos;
- }
+ dev->driver->get_vblank_timestamp = i915_get_vblank_timestamp;
+ dev->driver->get_scanout_position = i915_get_crtc_scanoutpos;
if (IS_CHERRYVIEW(dev_priv)) {
dev->driver->irq_handler = cherryview_irq_handler;
diff --git a/drivers/gpu/drm/i915/i915_params.c b/drivers/gpu/drm/i915/i915_params.c
index 44f2262..bb64415 100644
--- a/drivers/gpu/drm/i915/i915_params.c
+++ b/drivers/gpu/drm/i915/i915_params.c
@@ -27,7 +27,6 @@
struct i915_params i915 __read_mostly = {
.modeset = -1,
.panel_ignore_lid = 1,
- .powersave = 1,
.semaphores = -1,
.lvds_downclock = 0,
.lvds_channel_mode = 0,
@@ -44,6 +43,7 @@
.enable_ips = 1,
.fastboot = 0,
.prefault_disable = 0,
+ .load_detect_test = 0,
.reset = true,
.invert_brightness = 0,
.disable_display = 0,
@@ -65,10 +65,6 @@
"Override lid status (0=autodetect, 1=autodetect disabled [default], "
"-1=force lid closed, -2=force lid open)");
-module_param_named(powersave, i915.powersave, int, 0600);
-MODULE_PARM_DESC(powersave,
- "Enable powersavings, fbc, downclocking, etc. (default: true)");
-
module_param_named_unsafe(semaphores, i915.semaphores, int, 0400);
MODULE_PARM_DESC(semaphores,
"Use semaphores for inter-ring sync "
@@ -144,11 +140,16 @@
MODULE_PARM_DESC(fastboot,
"Try to skip unnecessary mode sets at boot time (default: false)");
-module_param_named(prefault_disable, i915.prefault_disable, bool, 0600);
+module_param_named_unsafe(prefault_disable, i915.prefault_disable, bool, 0600);
MODULE_PARM_DESC(prefault_disable,
"Disable page prefaulting for pread/pwrite/reloc (default:false). "
"For developers only.");
+module_param_named_unsafe(load_detect_test, i915.load_detect_test, bool, 0600);
+MODULE_PARM_DESC(load_detect_test,
+ "Force-enable the VGA load detect code for testing (default:false). "
+ "For developers only.");
+
module_param_named(invert_brightness, i915.invert_brightness, int, 0600);
MODULE_PARM_DESC(invert_brightness,
"Invert backlight brightness "
@@ -171,10 +172,10 @@
MODULE_PARM_DESC(use_mmio_flip,
"use MMIO flips (-1=never, 0=driver discretion [default], 1=always)");
-module_param_named(mmio_debug, i915.mmio_debug, bool, 0600);
+module_param_named(mmio_debug, i915.mmio_debug, int, 0600);
MODULE_PARM_DESC(mmio_debug,
- "Enable the MMIO debug code (default: false). This may negatively "
- "affect performance.");
+ "Enable the MMIO debug code for the first N failures (default: off). "
+ "This may negatively affect performance.");
module_param_named(verbose_state_checks, i915.verbose_state_checks, bool, 0600);
MODULE_PARM_DESC(verbose_state_checks,
diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
index 33b3d0a2..b522eb6 100644
--- a/drivers/gpu/drm/i915/i915_reg.h
+++ b/drivers/gpu/drm/i915/i915_reg.h
@@ -139,7 +139,21 @@
#define GEN8_RING_PDP_UDW(ring, n) ((ring)->mmio_base+0x270 + ((n) * 8 + 4))
#define GEN8_RING_PDP_LDW(ring, n) ((ring)->mmio_base+0x270 + (n) * 8)
+#define GEN8_R_PWR_CLK_STATE 0x20C8
+#define GEN8_RPCS_ENABLE (1 << 31)
+#define GEN8_RPCS_S_CNT_ENABLE (1 << 18)
+#define GEN8_RPCS_S_CNT_SHIFT 15
+#define GEN8_RPCS_S_CNT_MASK (0x7 << GEN8_RPCS_S_CNT_SHIFT)
+#define GEN8_RPCS_SS_CNT_ENABLE (1 << 11)
+#define GEN8_RPCS_SS_CNT_SHIFT 8
+#define GEN8_RPCS_SS_CNT_MASK (0x7 << GEN8_RPCS_SS_CNT_SHIFT)
+#define GEN8_RPCS_EU_MAX_SHIFT 4
+#define GEN8_RPCS_EU_MAX_MASK (0xf << GEN8_RPCS_EU_MAX_SHIFT)
+#define GEN8_RPCS_EU_MIN_SHIFT 0
+#define GEN8_RPCS_EU_MIN_MASK (0xf << GEN8_RPCS_EU_MIN_SHIFT)
+
#define GAM_ECOCHK 0x4090
+#define BDW_DISABLE_HDC_INVALIDATION (1<<25)
#define ECOCHK_SNB_BIT (1<<10)
#define HSW_ECOCHK_ARB_PRIO_SOL (1<<6)
#define ECOCHK_PPGTT_CACHE64B (0x3<<3)
@@ -552,6 +566,9 @@
#define DSPFREQSTAT_MASK (0x3 << DSPFREQSTAT_SHIFT)
#define DSPFREQGUAR_SHIFT 14
#define DSPFREQGUAR_MASK (0x3 << DSPFREQGUAR_SHIFT)
+#define DSP_MAXFIFO_PM5_STATUS (1 << 22) /* chv */
+#define DSP_AUTO_CDCLK_GATE_DISABLE (1 << 7) /* chv */
+#define DSP_MAXFIFO_PM5_ENABLE (1 << 6) /* chv */
#define _DP_SSC(val, pipe) ((val) << (2 * (pipe)))
#define DP_SSC_MASK(pipe) _DP_SSC(0x3, (pipe))
#define DP_SSC_PWR_ON(pipe) _DP_SSC(0x0, (pipe))
@@ -586,6 +603,19 @@
PUNIT_POWER_WELL_NUM,
};
+enum skl_disp_power_wells {
+ SKL_DISP_PW_MISC_IO,
+ SKL_DISP_PW_DDI_A_E,
+ SKL_DISP_PW_DDI_B,
+ SKL_DISP_PW_DDI_C,
+ SKL_DISP_PW_DDI_D,
+ SKL_DISP_PW_1 = 14,
+ SKL_DISP_PW_2,
+};
+
+#define SKL_POWER_WELL_STATE(pw) (1 << ((pw) * 2))
+#define SKL_POWER_WELL_REQ(pw) (1 << (((pw) * 2) + 1))
+
#define PUNIT_REG_PWRGT_CTRL 0x60
#define PUNIT_REG_PWRGT_STATUS 0x61
#define PUNIT_PWRGT_MASK(power_well) (3 << ((power_well) * 2))
@@ -614,6 +644,11 @@
#define FB_GFX_FMIN_AT_VMIN_FUSE 0x137
#define FB_GFX_FMIN_AT_VMIN_FUSE_SHIFT 8
+#define PUNIT_REG_DDR_SETUP2 0x139
+#define FORCE_DDR_FREQ_REQ_ACK (1 << 8)
+#define FORCE_DDR_LOW_FREQ (1 << 1)
+#define FORCE_DDR_HIGH_FREQ (1 << 0)
+
#define PUNIT_GPU_STATUS_REG 0xdb
#define PUNIT_GPU_STATUS_MAX_FREQ_SHIFT 16
#define PUNIT_GPU_STATUS_MAX_FREQ_MASK 0xff
@@ -638,7 +673,6 @@
#define VLV_CZ_CLOCK_TO_MILLI_SEC 100000
#define VLV_RP_UP_EI_THRESHOLD 90
#define VLV_RP_DOWN_EI_THRESHOLD 70
-#define VLV_INT_COUNT_FOR_DOWN_EI 5
/* vlv2 north clock has */
#define CCK_FUSE_REG 0x8
@@ -1002,6 +1036,7 @@
#define DPIO_CHV_FIRST_MOD (0 << 8)
#define DPIO_CHV_SECOND_MOD (1 << 8)
#define DPIO_CHV_FEEDFWD_GAIN_SHIFT 0
+#define DPIO_CHV_FEEDFWD_GAIN_MASK (0xF << 0)
#define CHV_PLL_DW3(ch) _PIPE(ch, _CHV_PLL_DW3_CH0, _CHV_PLL_DW3_CH1)
#define _CHV_PLL_DW6_CH0 0x8018
@@ -1011,6 +1046,19 @@
#define DPIO_CHV_PROP_COEFF_SHIFT 0
#define CHV_PLL_DW6(ch) _PIPE(ch, _CHV_PLL_DW6_CH0, _CHV_PLL_DW6_CH1)
+#define _CHV_PLL_DW8_CH0 0x8020
+#define _CHV_PLL_DW8_CH1 0x81A0
+#define DPIO_CHV_TDC_TARGET_CNT_SHIFT 0
+#define DPIO_CHV_TDC_TARGET_CNT_MASK (0x3FF << 0)
+#define CHV_PLL_DW8(ch) _PIPE(ch, _CHV_PLL_DW8_CH0, _CHV_PLL_DW8_CH1)
+
+#define _CHV_PLL_DW9_CH0 0x8024
+#define _CHV_PLL_DW9_CH1 0x81A4
+#define DPIO_CHV_INT_LOCK_THRESHOLD_SHIFT 1 /* 3 bits */
+#define DPIO_CHV_INT_LOCK_THRESHOLD_MASK (7 << 1)
+#define DPIO_CHV_INT_LOCK_THRESHOLD_SEL_COARSE 1 /* 1: coarse & 0 : fine */
+#define CHV_PLL_DW9(ch) _PIPE(ch, _CHV_PLL_DW9_CH0, _CHV_PLL_DW9_CH1)
+
#define _CHV_CMN_DW5_CH0 0x8114
#define CHV_BUFRIGHTENA1_DISABLE (0 << 20)
#define CHV_BUFRIGHTENA1_NORMAL (1 << 20)
@@ -1258,6 +1306,9 @@
#define ERR_INT_FIFO_UNDERRUN_A (1<<0)
#define ERR_INT_FIFO_UNDERRUN(pipe) (1<<(pipe*3))
+#define GEN8_FAULT_TLB_DATA0 0x04b10
+#define GEN8_FAULT_TLB_DATA1 0x04b14
+
#define FPGA_DBG 0x42300
#define FPGA_DBG_RM_NOCLAIM (1<<31)
@@ -1314,6 +1365,8 @@
#define GEN6_WIZ_HASHING_16x4 GEN6_WIZ_HASHING(1, 0)
#define GEN6_WIZ_HASHING_MASK GEN6_WIZ_HASHING(1, 1)
#define GEN6_TD_FOUR_ROW_DISPATCH_DISABLE (1 << 5)
+#define GEN9_IZ_HASHING_MASK(slice) (0x3 << (slice * 2))
+#define GEN9_IZ_HASHING(slice, val) ((val) << (slice * 2))
#define GFX_MODE 0x02520
#define GFX_MODE_GEN7 0x0229c
@@ -1470,6 +1523,7 @@
#define CACHE_MODE_1 0x7004 /* IVB+ */
#define PIXEL_SUBSPAN_COLLECT_OPT_DISABLE (1<<6)
#define GEN8_4x4_STC_OPTIMIZATION_DISABLE (1<<6)
+#define GEN9_PARTIAL_RESOLVE_IN_VC_DISABLE (1<<1)
#define GEN6_BLITTER_ECOSKPD 0x221d0
#define GEN6_BLITTER_LOCK_SHIFT 16
@@ -1482,6 +1536,8 @@
/* Fuse readout registers for GT */
#define CHV_FUSE_GT (VLV_DISPLAY_BASE + 0x2168)
+#define CHV_FGT_DISABLE_SS0 (1 << 10)
+#define CHV_FGT_DISABLE_SS1 (1 << 11)
#define CHV_FGT_EU_DIS_SS0_R0_SHIFT 16
#define CHV_FGT_EU_DIS_SS0_R0_MASK (0xf << CHV_FGT_EU_DIS_SS0_R0_SHIFT)
#define CHV_FGT_EU_DIS_SS0_R1_SHIFT 20
@@ -1491,6 +1547,17 @@
#define CHV_FGT_EU_DIS_SS1_R1_SHIFT 28
#define CHV_FGT_EU_DIS_SS1_R1_MASK (0xf << CHV_FGT_EU_DIS_SS1_R1_SHIFT)
+#define GEN8_FUSE2 0x9120
+#define GEN8_F2_S_ENA_SHIFT 25
+#define GEN8_F2_S_ENA_MASK (0x7 << GEN8_F2_S_ENA_SHIFT)
+
+#define GEN9_F2_SS_DIS_SHIFT 20
+#define GEN9_F2_SS_DIS_MASK (0xf << GEN9_F2_SS_DIS_SHIFT)
+
+#define GEN8_EU_DISABLE0 0x9134
+#define GEN8_EU_DISABLE1 0x9138
+#define GEN8_EU_DISABLE2 0x913c
+
#define GEN6_BSD_SLEEP_PSMI_CONTROL 0x12050
#define GEN6_BSD_SLEEP_MSG_DISABLE (1 << 0)
#define GEN6_BSD_SLEEP_FLUSH_DISABLE (1 << 2)
@@ -2048,6 +2115,14 @@
#define CDCLK_FREQ_SHIFT 4
#define CDCLK_FREQ_MASK (0x1f << CDCLK_FREQ_SHIFT)
#define CZCLK_FREQ_MASK 0xf
+
+#define GCI_CONTROL (VLV_DISPLAY_BASE + 0x650C)
+#define PFI_CREDIT_63 (9 << 28) /* chv only */
+#define PFI_CREDIT_31 (8 << 28) /* chv only */
+#define PFI_CREDIT(x) (((x) - 8) << 28) /* 8-15 */
+#define PFI_CREDIT_RESEND (1 << 27)
+#define VGA_FAST_MODE_DISABLE (1 << 14)
+
#define GMBUSFREQ_VLV (VLV_DISPLAY_BASE + 0x6510)
/*
@@ -2376,6 +2451,12 @@
#define GEN6_RP_STATE_LIMITS (MCHBAR_MIRROR_BASE_SNB + 0x5994)
#define GEN6_RP_STATE_CAP (MCHBAR_MIRROR_BASE_SNB + 0x5998)
+#define INTERVAL_1_28_US(us) (((us) * 100) >> 7)
+#define INTERVAL_1_33_US(us) (((us) * 3) >> 2)
+#define GT_INTERVAL_FROM_US(dev_priv, us) (IS_GEN9(dev_priv) ? \
+ INTERVAL_1_33_US(us) : \
+ INTERVAL_1_28_US(us))
+
/*
* Logical Context regs
*/
@@ -2968,7 +3049,7 @@
/* Video Data Island Packet control */
#define VIDEO_DIP_DATA 0x61178
-/* Read the description of VIDEO_DIP_DATA (before Haswel) or VIDEO_DIP_ECC
+/* Read the description of VIDEO_DIP_DATA (before Haswell) or VIDEO_DIP_ECC
* (Haswell and newer) to see which VIDEO_DIP_DATA byte corresponds to each byte
* of the infoframe structure specified by CEA-861. */
#define VIDEO_DIP_DATA_SIZE 32
@@ -3865,6 +3946,7 @@
#define PIPECONF_INTERLACE_MODE_MASK (7 << 21)
#define PIPECONF_EDP_RR_MODE_SWITCH (1 << 20)
#define PIPECONF_CXSR_DOWNCLOCK (1<<16)
+#define PIPECONF_EDP_RR_MODE_SWITCH_VLV (1 << 14)
#define PIPECONF_COLOR_RANGE_SELECT (1 << 13)
#define PIPECONF_BPC_MASK (0x7 << 5)
#define PIPECONF_8BPC (0<<5)
@@ -4013,7 +4095,7 @@
#define DPINVGTT_STATUS_MASK 0xff
#define DPINVGTT_STATUS_MASK_CHV 0xfff
-#define DSPARB 0x70030
+#define DSPARB (dev_priv->info.display_mmio_offset + 0x70030)
#define DSPARB_CSTART_MASK (0x7f << 7)
#define DSPARB_CSTART_SHIFT 7
#define DSPARB_BSTART_MASK (0x7f)
@@ -4021,6 +4103,9 @@
#define DSPARB_BEND_SHIFT 9 /* on 855 */
#define DSPARB_AEND_SHIFT 0
+#define DSPARB2 (VLV_DISPLAY_BASE + 0x70060) /* vlv/chv */
+#define DSPARB3 (VLV_DISPLAY_BASE + 0x7006c) /* chv */
+
/* pnv/gen4/g4x/vlv/chv */
#define DSPFW1 (dev_priv->info.display_mmio_offset + 0x70034)
#define DSPFW_SR_SHIFT 23
@@ -4044,8 +4129,8 @@
#define DSPFW_SPRITEB_MASK_VLV (0xff<<16) /* vlv/chv */
#define DSPFW_CURSORA_SHIFT 8
#define DSPFW_CURSORA_MASK (0x3f<<8)
-#define DSPFW_PLANEC_SHIFT_OLD 0
-#define DSPFW_PLANEC_MASK_OLD (0x7f<<0) /* pre-gen4 sprite C */
+#define DSPFW_PLANEC_OLD_SHIFT 0
+#define DSPFW_PLANEC_OLD_MASK (0x7f<<0) /* pre-gen4 sprite C */
#define DSPFW_SPRITEA_SHIFT 0
#define DSPFW_SPRITEA_MASK (0x7f<<0) /* g4x */
#define DSPFW_SPRITEA_MASK_VLV (0xff<<0) /* vlv/chv */
@@ -4084,25 +4169,25 @@
#define DSPFW_SPRITED_WM1_SHIFT 24
#define DSPFW_SPRITED_WM1_MASK (0xff<<24)
#define DSPFW_SPRITED_SHIFT 16
-#define DSPFW_SPRITED_MASK (0xff<<16)
+#define DSPFW_SPRITED_MASK_VLV (0xff<<16)
#define DSPFW_SPRITEC_WM1_SHIFT 8
#define DSPFW_SPRITEC_WM1_MASK (0xff<<8)
#define DSPFW_SPRITEC_SHIFT 0
-#define DSPFW_SPRITEC_MASK (0xff<<0)
+#define DSPFW_SPRITEC_MASK_VLV (0xff<<0)
#define DSPFW8_CHV (VLV_DISPLAY_BASE + 0x700b8)
#define DSPFW_SPRITEF_WM1_SHIFT 24
#define DSPFW_SPRITEF_WM1_MASK (0xff<<24)
#define DSPFW_SPRITEF_SHIFT 16
-#define DSPFW_SPRITEF_MASK (0xff<<16)
+#define DSPFW_SPRITEF_MASK_VLV (0xff<<16)
#define DSPFW_SPRITEE_WM1_SHIFT 8
#define DSPFW_SPRITEE_WM1_MASK (0xff<<8)
#define DSPFW_SPRITEE_SHIFT 0
-#define DSPFW_SPRITEE_MASK (0xff<<0)
+#define DSPFW_SPRITEE_MASK_VLV (0xff<<0)
#define DSPFW9_CHV (VLV_DISPLAY_BASE + 0x7007c) /* wtf #2? */
#define DSPFW_PLANEC_WM1_SHIFT 24
#define DSPFW_PLANEC_WM1_MASK (0xff<<24)
#define DSPFW_PLANEC_SHIFT 16
-#define DSPFW_PLANEC_MASK (0xff<<16)
+#define DSPFW_PLANEC_MASK_VLV (0xff<<16)
#define DSPFW_CURSORC_WM1_SHIFT 8
#define DSPFW_CURSORC_WM1_MASK (0x3f<<16)
#define DSPFW_CURSORC_SHIFT 0
@@ -4111,7 +4196,7 @@
/* vlv/chv high order bits */
#define DSPHOWM (VLV_DISPLAY_BASE + 0x70064)
#define DSPFW_SR_HI_SHIFT 24
-#define DSPFW_SR_HI_MASK (1<<24)
+#define DSPFW_SR_HI_MASK (3<<24) /* 2 bits for chv, 1 for vlv */
#define DSPFW_SPRITEF_HI_SHIFT 23
#define DSPFW_SPRITEF_HI_MASK (1<<23)
#define DSPFW_SPRITEE_HI_SHIFT 22
@@ -4132,7 +4217,7 @@
#define DSPFW_PLANEA_HI_MASK (1<<0)
#define DSPHOWM1 (VLV_DISPLAY_BASE + 0x70068)
#define DSPFW_SR_WM1_HI_SHIFT 24
-#define DSPFW_SR_WM1_HI_MASK (1<<24)
+#define DSPFW_SR_WM1_HI_MASK (3<<24) /* 2 bits for chv, 1 for vlv */
#define DSPFW_SPRITEF_WM1_HI_SHIFT 23
#define DSPFW_SPRITEF_WM1_HI_MASK (1<<23)
#define DSPFW_SPRITEE_WM1_HI_SHIFT 22
@@ -4153,21 +4238,17 @@
#define DSPFW_PLANEA_WM1_HI_MASK (1<<0)
/* drain latency register values*/
-#define DRAIN_LATENCY_PRECISION_16 16
-#define DRAIN_LATENCY_PRECISION_32 32
-#define DRAIN_LATENCY_PRECISION_64 64
#define VLV_DDL(pipe) (VLV_DISPLAY_BASE + 0x70050 + 4 * (pipe))
-#define DDL_CURSOR_PRECISION_HIGH (1<<31)
-#define DDL_CURSOR_PRECISION_LOW (0<<31)
#define DDL_CURSOR_SHIFT 24
-#define DDL_SPRITE_PRECISION_HIGH(sprite) (1<<(15+8*(sprite)))
-#define DDL_SPRITE_PRECISION_LOW(sprite) (0<<(15+8*(sprite)))
#define DDL_SPRITE_SHIFT(sprite) (8+8*(sprite))
-#define DDL_PLANE_PRECISION_HIGH (1<<7)
-#define DDL_PLANE_PRECISION_LOW (0<<7)
#define DDL_PLANE_SHIFT 0
+#define DDL_PRECISION_HIGH (1<<7)
+#define DDL_PRECISION_LOW (0<<7)
#define DRAIN_LATENCY_MASK 0x7f
+#define CBR1_VLV (VLV_DISPLAY_BASE + 0x70400)
+#define CBR_PND_DEADLINE_DISABLE (1<<31)
+
/* FIFO watermark sizes etc */
#define G4X_FIFO_LINE_SIZE 64
#define I915_FIFO_LINE_SIZE 64
@@ -5221,14 +5302,22 @@
#define HSW_NDE_RSTWRN_OPT 0x46408
#define RESET_PCH_HANDSHAKE_ENABLE (1<<4)
+#define FF_SLICE_CS_CHICKEN2 0x02e4
+#define GEN9_TSG_BARRIER_ACK_DISABLE (1<<8)
+
/* GEN7 chicken */
#define GEN7_COMMON_SLICE_CHICKEN1 0x7010
# define GEN7_CSC1_RHWO_OPT_DISABLE_IN_RCC ((1<<10) | (1<<26))
+# define GEN9_RHWO_OPTIMIZATION_DISABLE (1<<14)
#define COMMON_SLICE_CHICKEN2 0x7014
# define GEN8_CSC2_SBE_VUE_CACHE_CONSERVATIVE (1<<0)
-#define HIZ_CHICKEN 0x7018
-# define CHV_HZ_8X8_MODE_IN_1X (1<<15)
+#define HIZ_CHICKEN 0x7018
+# define CHV_HZ_8X8_MODE_IN_1X (1<<15)
+# define BDW_HIZ_POWER_COMPILER_CLOCK_GATING_DISABLE (1<<3)
+
+#define GEN9_SLICE_COMMON_ECO_CHICKEN0 0x7308
+#define DISABLE_PIXEL_MASK_CAMMING (1<<14)
#define GEN7_L3SQCREG1 0xB010
#define VLV_B0_WA_L3SQCREG1_VALUE 0x00D30000
@@ -5245,11 +5334,16 @@
#define GEN7_L3SQCREG4 0xb034
#define L3SQ_URB_READ_CAM_MATCH_DISABLE (1<<27)
+#define GEN8_L3SQCREG4 0xb118
+#define GEN8_LQSC_RO_PERF_DIS (1<<27)
+
/* GEN8 chicken */
#define HDC_CHICKEN0 0x7300
-#define HDC_FORCE_NON_COHERENT (1<<4)
-#define HDC_DONOT_FETCH_MEM_WHEN_MASKED (1<<11)
#define HDC_FENCE_DEST_SLM_DISABLE (1<<14)
+#define HDC_DONOT_FETCH_MEM_WHEN_MASKED (1<<11)
+#define HDC_FORCE_CONTEXT_SAVE_RESTORE_NON_COHERENT (1<<5)
+#define HDC_FORCE_NON_COHERENT (1<<4)
+#define HDC_BARRIER_PERFORMANCE_DISABLE (1<<10)
/* WaCatErrorRejectionIssue */
#define GEN7_SQ_CHICKEN_MBCUNIT_CONFIG 0x9030
@@ -5258,6 +5352,9 @@
#define HSW_SCRATCH1 0xb038
#define HSW_SCRATCH1_L3_DATA_ATOMICS_DISABLE (1<<27)
+#define BDW_SCRATCH1 0xb11c
+#define GEN9_LBS_SLA_RETRY_TIMER_DECREMENT_ENABLE (1<<2)
+
/* PCH */
/* south display engine interrupt: IBX */
@@ -5980,6 +6077,7 @@
#define HSW_IDICR 0x9008
#define IDIHASHMSK(x) (((x) & 0x3f) << 16)
#define HSW_EDRAM_PRESENT 0x120010
+#define EDRAM_ENABLED 0x1
#define GEN6_UCGCTL1 0x9400
# define GEN6_EU_TCUNIT_CLOCK_GATE_DISABLE (1 << 16)
@@ -6003,6 +6101,7 @@
#define GEN6_RSTCTL 0x9420
#define GEN8_UCGCTL6 0x9430
+#define GEN8_GAPSUNIT_CLOCK_GATE_DISABLE (1<<24)
#define GEN8_SDEUNIT_CLOCK_GATE_DISABLE (1<<14)
#define GEN6_GFXPAUSE 0xA000
@@ -6010,6 +6109,7 @@
#define GEN6_TURBO_DISABLE (1<<31)
#define GEN6_FREQUENCY(x) ((x)<<25)
#define HSW_FREQUENCY(x) ((x)<<24)
+#define GEN9_FREQUENCY(x) ((x)<<23)
#define GEN6_OFFSET(x) ((x)<<19)
#define GEN6_AGGRESSIVE_TURBO (0<<15)
#define GEN6_RC_VIDEO_FREQ 0xA00C
@@ -6028,8 +6128,10 @@
#define GEN6_RPSTAT1 0xA01C
#define GEN6_CAGF_SHIFT 8
#define HSW_CAGF_SHIFT 7
+#define GEN9_CAGF_SHIFT 23
#define GEN6_CAGF_MASK (0x7f << GEN6_CAGF_SHIFT)
#define HSW_CAGF_MASK (0x7f << HSW_CAGF_SHIFT)
+#define GEN9_CAGF_MASK (0x1ff << GEN9_CAGF_SHIFT)
#define GEN6_RP_CONTROL 0xA024
#define GEN6_RP_MEDIA_TURBO (1<<11)
#define GEN6_RP_MEDIA_MODE_MASK (3<<9)
@@ -6120,8 +6222,8 @@
#define GEN6_GT_GFX_RC6p 0x13810C
#define GEN6_GT_GFX_RC6pp 0x138110
-#define VLV_RENDER_C0_COUNT_REG 0x138118
-#define VLV_MEDIA_C0_COUNT_REG 0x13811C
+#define VLV_RENDER_C0_COUNT 0x138118
+#define VLV_MEDIA_C0_COUNT 0x13811C
#define GEN6_PCODE_MAILBOX 0x138124
#define GEN6_PCODE_READY (1<<31)
@@ -6155,6 +6257,37 @@
#define GEN6_RC6 3
#define GEN6_RC7 4
+#define CHV_POWER_SS0_SIG1 0xa720
+#define CHV_POWER_SS1_SIG1 0xa728
+#define CHV_SS_PG_ENABLE (1<<1)
+#define CHV_EU08_PG_ENABLE (1<<9)
+#define CHV_EU19_PG_ENABLE (1<<17)
+#define CHV_EU210_PG_ENABLE (1<<25)
+
+#define CHV_POWER_SS0_SIG2 0xa724
+#define CHV_POWER_SS1_SIG2 0xa72c
+#define CHV_EU311_PG_ENABLE (1<<1)
+
+#define GEN9_SLICE0_PGCTL_ACK 0x804c
+#define GEN9_SLICE1_PGCTL_ACK 0x8050
+#define GEN9_SLICE2_PGCTL_ACK 0x8054
+#define GEN9_PGCTL_SLICE_ACK (1 << 0)
+
+#define GEN9_SLICE0_SS01_EU_PGCTL_ACK 0x805c
+#define GEN9_SLICE0_SS23_EU_PGCTL_ACK 0x8060
+#define GEN9_SLICE1_SS01_EU_PGCTL_ACK 0x8064
+#define GEN9_SLICE1_SS23_EU_PGCTL_ACK 0x8068
+#define GEN9_SLICE2_SS01_EU_PGCTL_ACK 0x806c
+#define GEN9_SLICE2_SS23_EU_PGCTL_ACK 0x8070
+#define GEN9_PGCTL_SSA_EU08_ACK (1 << 0)
+#define GEN9_PGCTL_SSA_EU19_ACK (1 << 2)
+#define GEN9_PGCTL_SSA_EU210_ACK (1 << 4)
+#define GEN9_PGCTL_SSA_EU311_ACK (1 << 6)
+#define GEN9_PGCTL_SSB_EU08_ACK (1 << 8)
+#define GEN9_PGCTL_SSB_EU19_ACK (1 << 10)
+#define GEN9_PGCTL_SSB_EU210_ACK (1 << 12)
+#define GEN9_PGCTL_SSB_EU311_ACK (1 << 14)
+
#define GEN7_MISCCPCTL (0x9424)
#define GEN7_DOP_CLOCK_GATE_ENABLE (1<<0)
@@ -6185,6 +6318,7 @@
#define GEN9_HALF_SLICE_CHICKEN5 0xe188
#define GEN9_DG_MIRROR_FIX_ENABLE (1<<5)
+#define GEN9_CCS_TLB_PREFETCH_ENABLE (1<<3)
#define GEN8_ROW_CHICKEN 0xe4f0
#define PARTIAL_INSTRUCTION_SHOOTDOWN_DISABLE (1<<8)
@@ -6200,8 +6334,12 @@
#define HALF_SLICE_CHICKEN3 0xe184
#define HSW_SAMPLE_C_PERFORMANCE (1<<9)
#define GEN8_CENTROID_PIXEL_OPT_DIS (1<<8)
+#define GEN9_DISABLE_OCL_OOB_SUPPRESS_LOGIC (1<<5)
#define GEN8_SAMPLER_POWER_BYPASS_DIS (1<<1)
+#define GEN9_HALF_SLICE_CHICKEN7 0xe194
+#define GEN9_ENABLE_YV12_BUGFIX (1<<4)
+
/* Audio */
#define G4X_AUD_VID_DID (dev_priv->info.display_mmio_offset + 0x62020)
#define INTEL_AUDIO_DEVCL 0x808629FB
@@ -6351,6 +6489,13 @@
#define HSW_PWR_WELL_FORCE_ON (1<<19)
#define HSW_PWR_WELL_CTL6 0x45414
+/* SKL Fuse Status */
+#define SKL_FUSE_STATUS 0x42000
+#define SKL_FUSE_DOWNLOAD_STATUS (1<<31)
+#define SKL_FUSE_PG0_DIST_STATUS (1<<27)
+#define SKL_FUSE_PG1_DIST_STATUS (1<<26)
+#define SKL_FUSE_PG2_DIST_STATUS (1<<25)
+
/* Per-pipe DDI Function Control */
#define TRANS_DDI_FUNC_CTL_A 0x60400
#define TRANS_DDI_FUNC_CTL_B 0x61400
diff --git a/drivers/gpu/drm/i915/i915_suspend.c b/drivers/gpu/drm/i915/i915_suspend.c
index 9f19ed3..cf67f82 100644
--- a/drivers/gpu/drm/i915/i915_suspend.c
+++ b/drivers/gpu/drm/i915/i915_suspend.c
@@ -29,166 +29,6 @@
#include "intel_drv.h"
#include "i915_reg.h"
-static u8 i915_read_indexed(struct drm_device *dev, u16 index_port, u16 data_port, u8 reg)
-{
- struct drm_i915_private *dev_priv = dev->dev_private;
-
- I915_WRITE8(index_port, reg);
- return I915_READ8(data_port);
-}
-
-static u8 i915_read_ar(struct drm_device *dev, u16 st01, u8 reg, u16 palette_enable)
-{
- struct drm_i915_private *dev_priv = dev->dev_private;
-
- I915_READ8(st01);
- I915_WRITE8(VGA_AR_INDEX, palette_enable | reg);
- return I915_READ8(VGA_AR_DATA_READ);
-}
-
-static void i915_write_ar(struct drm_device *dev, u16 st01, u8 reg, u8 val, u16 palette_enable)
-{
- struct drm_i915_private *dev_priv = dev->dev_private;
-
- I915_READ8(st01);
- I915_WRITE8(VGA_AR_INDEX, palette_enable | reg);
- I915_WRITE8(VGA_AR_DATA_WRITE, val);
-}
-
-static void i915_write_indexed(struct drm_device *dev, u16 index_port, u16 data_port, u8 reg, u8 val)
-{
- struct drm_i915_private *dev_priv = dev->dev_private;
-
- I915_WRITE8(index_port, reg);
- I915_WRITE8(data_port, val);
-}
-
-static void i915_save_vga(struct drm_device *dev)
-{
- struct drm_i915_private *dev_priv = dev->dev_private;
- int i;
- u16 cr_index, cr_data, st01;
-
- /* VGA state */
- dev_priv->regfile.saveVGA0 = I915_READ(VGA0);
- dev_priv->regfile.saveVGA1 = I915_READ(VGA1);
- dev_priv->regfile.saveVGA_PD = I915_READ(VGA_PD);
- dev_priv->regfile.saveVGACNTRL = I915_READ(i915_vgacntrl_reg(dev));
-
- /* VGA color palette registers */
- dev_priv->regfile.saveDACMASK = I915_READ8(VGA_DACMASK);
-
- /* MSR bits */
- dev_priv->regfile.saveMSR = I915_READ8(VGA_MSR_READ);
- if (dev_priv->regfile.saveMSR & VGA_MSR_CGA_MODE) {
- cr_index = VGA_CR_INDEX_CGA;
- cr_data = VGA_CR_DATA_CGA;
- st01 = VGA_ST01_CGA;
- } else {
- cr_index = VGA_CR_INDEX_MDA;
- cr_data = VGA_CR_DATA_MDA;
- st01 = VGA_ST01_MDA;
- }
-
- /* CRT controller regs */
- i915_write_indexed(dev, cr_index, cr_data, 0x11,
- i915_read_indexed(dev, cr_index, cr_data, 0x11) &
- (~0x80));
- for (i = 0; i <= 0x24; i++)
- dev_priv->regfile.saveCR[i] =
- i915_read_indexed(dev, cr_index, cr_data, i);
- /* Make sure we don't turn off CR group 0 writes */
- dev_priv->regfile.saveCR[0x11] &= ~0x80;
-
- /* Attribute controller registers */
- I915_READ8(st01);
- dev_priv->regfile.saveAR_INDEX = I915_READ8(VGA_AR_INDEX);
- for (i = 0; i <= 0x14; i++)
- dev_priv->regfile.saveAR[i] = i915_read_ar(dev, st01, i, 0);
- I915_READ8(st01);
- I915_WRITE8(VGA_AR_INDEX, dev_priv->regfile.saveAR_INDEX);
- I915_READ8(st01);
-
- /* Graphics controller registers */
- for (i = 0; i < 9; i++)
- dev_priv->regfile.saveGR[i] =
- i915_read_indexed(dev, VGA_GR_INDEX, VGA_GR_DATA, i);
-
- dev_priv->regfile.saveGR[0x10] =
- i915_read_indexed(dev, VGA_GR_INDEX, VGA_GR_DATA, 0x10);
- dev_priv->regfile.saveGR[0x11] =
- i915_read_indexed(dev, VGA_GR_INDEX, VGA_GR_DATA, 0x11);
- dev_priv->regfile.saveGR[0x18] =
- i915_read_indexed(dev, VGA_GR_INDEX, VGA_GR_DATA, 0x18);
-
- /* Sequencer registers */
- for (i = 0; i < 8; i++)
- dev_priv->regfile.saveSR[i] =
- i915_read_indexed(dev, VGA_SR_INDEX, VGA_SR_DATA, i);
-}
-
-static void i915_restore_vga(struct drm_device *dev)
-{
- struct drm_i915_private *dev_priv = dev->dev_private;
- int i;
- u16 cr_index, cr_data, st01;
-
- /* VGA state */
- I915_WRITE(i915_vgacntrl_reg(dev), dev_priv->regfile.saveVGACNTRL);
-
- I915_WRITE(VGA0, dev_priv->regfile.saveVGA0);
- I915_WRITE(VGA1, dev_priv->regfile.saveVGA1);
- I915_WRITE(VGA_PD, dev_priv->regfile.saveVGA_PD);
- POSTING_READ(VGA_PD);
- udelay(150);
-
- /* MSR bits */
- I915_WRITE8(VGA_MSR_WRITE, dev_priv->regfile.saveMSR);
- if (dev_priv->regfile.saveMSR & VGA_MSR_CGA_MODE) {
- cr_index = VGA_CR_INDEX_CGA;
- cr_data = VGA_CR_DATA_CGA;
- st01 = VGA_ST01_CGA;
- } else {
- cr_index = VGA_CR_INDEX_MDA;
- cr_data = VGA_CR_DATA_MDA;
- st01 = VGA_ST01_MDA;
- }
-
- /* Sequencer registers, don't write SR07 */
- for (i = 0; i < 7; i++)
- i915_write_indexed(dev, VGA_SR_INDEX, VGA_SR_DATA, i,
- dev_priv->regfile.saveSR[i]);
-
- /* CRT controller regs */
- /* Enable CR group 0 writes */
- i915_write_indexed(dev, cr_index, cr_data, 0x11, dev_priv->regfile.saveCR[0x11]);
- for (i = 0; i <= 0x24; i++)
- i915_write_indexed(dev, cr_index, cr_data, i, dev_priv->regfile.saveCR[i]);
-
- /* Graphics controller regs */
- for (i = 0; i < 9; i++)
- i915_write_indexed(dev, VGA_GR_INDEX, VGA_GR_DATA, i,
- dev_priv->regfile.saveGR[i]);
-
- i915_write_indexed(dev, VGA_GR_INDEX, VGA_GR_DATA, 0x10,
- dev_priv->regfile.saveGR[0x10]);
- i915_write_indexed(dev, VGA_GR_INDEX, VGA_GR_DATA, 0x11,
- dev_priv->regfile.saveGR[0x11]);
- i915_write_indexed(dev, VGA_GR_INDEX, VGA_GR_DATA, 0x18,
- dev_priv->regfile.saveGR[0x18]);
-
- /* Attribute controller registers */
- I915_READ8(st01); /* switch back to index mode */
- for (i = 0; i <= 0x14; i++)
- i915_write_ar(dev, st01, i, dev_priv->regfile.saveAR[i], 0);
- I915_READ8(st01); /* switch back to index mode */
- I915_WRITE8(VGA_AR_INDEX, dev_priv->regfile.saveAR_INDEX | 0x20);
- I915_READ8(st01);
-
- /* VGA color palette registers */
- I915_WRITE8(VGA_DACMASK, dev_priv->regfile.saveDACMASK);
-}
-
static void i915_save_display(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
@@ -197,11 +37,6 @@
if (INTEL_INFO(dev)->gen <= 4)
dev_priv->regfile.saveDSPARB = I915_READ(DSPARB);
- /* This is only meaningful in non-KMS mode */
- /* Don't regfile.save them in KMS mode */
- if (!drm_core_check_feature(dev, DRIVER_MODESET))
- i915_save_display_reg(dev);
-
/* LVDS state */
if (HAS_PCH_IBX(dev) || HAS_PCH_CPT(dev))
dev_priv->regfile.saveLVDS = I915_READ(PCH_LVDS);
@@ -224,9 +59,6 @@
/* save FBC interval */
if (HAS_FBC(dev) && INTEL_INFO(dev)->gen <= 4 && !IS_G4X(dev))
dev_priv->regfile.saveFBC_CONTROL = I915_READ(FBC_CONTROL);
-
- if (!drm_core_check_feature(dev, DRIVER_MODESET))
- i915_save_vga(dev);
}
static void i915_restore_display(struct drm_device *dev)
@@ -238,11 +70,7 @@
if (INTEL_INFO(dev)->gen <= 4)
I915_WRITE(DSPARB, dev_priv->regfile.saveDSPARB);
- if (!drm_core_check_feature(dev, DRIVER_MODESET))
- i915_restore_display_reg(dev);
-
- if (drm_core_check_feature(dev, DRIVER_MODESET))
- mask = ~LVDS_PORT_EN;
+ mask = ~LVDS_PORT_EN;
/* LVDS state */
if (HAS_PCH_IBX(dev) || HAS_PCH_CPT(dev))
@@ -270,10 +98,7 @@
if (HAS_FBC(dev) && INTEL_INFO(dev)->gen <= 4 && !IS_G4X(dev))
I915_WRITE(FBC_CONTROL, dev_priv->regfile.saveFBC_CONTROL);
- if (!drm_core_check_feature(dev, DRIVER_MODESET))
- i915_restore_vga(dev);
- else
- i915_redisable_vga(dev);
+ i915_redisable_vga(dev);
}
int i915_save_state(struct drm_device *dev)
@@ -285,24 +110,6 @@
i915_save_display(dev);
- if (!drm_core_check_feature(dev, DRIVER_MODESET)) {
- /* Interrupt state */
- if (HAS_PCH_SPLIT(dev)) {
- dev_priv->regfile.saveDEIER = I915_READ(DEIER);
- dev_priv->regfile.saveDEIMR = I915_READ(DEIMR);
- dev_priv->regfile.saveGTIER = I915_READ(GTIER);
- dev_priv->regfile.saveGTIMR = I915_READ(GTIMR);
- dev_priv->regfile.saveFDI_RXA_IMR = I915_READ(_FDI_RXA_IMR);
- dev_priv->regfile.saveFDI_RXB_IMR = I915_READ(_FDI_RXB_IMR);
- dev_priv->regfile.saveMCHBAR_RENDER_STANDBY =
- I915_READ(RSTDBYCTL);
- dev_priv->regfile.savePCH_PORT_HOTPLUG = I915_READ(PCH_PORT_HOTPLUG);
- } else {
- dev_priv->regfile.saveIER = I915_READ(IER);
- dev_priv->regfile.saveIMR = I915_READ(IMR);
- }
- }
-
if (IS_GEN4(dev))
pci_read_config_word(dev->pdev, GCDGMBUS,
&dev_priv->regfile.saveGCDGMBUS);
@@ -341,24 +148,6 @@
dev_priv->regfile.saveGCDGMBUS);
i915_restore_display(dev);
- if (!drm_core_check_feature(dev, DRIVER_MODESET)) {
- /* Interrupt state */
- if (HAS_PCH_SPLIT(dev)) {
- I915_WRITE(DEIER, dev_priv->regfile.saveDEIER);
- I915_WRITE(DEIMR, dev_priv->regfile.saveDEIMR);
- I915_WRITE(GTIER, dev_priv->regfile.saveGTIER);
- I915_WRITE(GTIMR, dev_priv->regfile.saveGTIMR);
- I915_WRITE(_FDI_RXA_IMR, dev_priv->regfile.saveFDI_RXA_IMR);
- I915_WRITE(_FDI_RXB_IMR, dev_priv->regfile.saveFDI_RXB_IMR);
- I915_WRITE(PCH_PORT_HOTPLUG, dev_priv->regfile.savePCH_PORT_HOTPLUG);
- I915_WRITE(RSTDBYCTL,
- dev_priv->regfile.saveMCHBAR_RENDER_STANDBY);
- } else {
- I915_WRITE(IER, dev_priv->regfile.saveIER);
- I915_WRITE(IMR, dev_priv->regfile.saveIMR);
- }
- }
-
/* Cache mode state */
if (INTEL_INFO(dev)->gen < 7)
I915_WRITE(CACHE_MODE_0, dev_priv->regfile.saveCACHE_MODE_0 |
diff --git a/drivers/gpu/drm/i915/i915_sysfs.c b/drivers/gpu/drm/i915/i915_sysfs.c
index 49f5ade..2476268 100644
--- a/drivers/gpu/drm/i915/i915_sysfs.c
+++ b/drivers/gpu/drm/i915/i915_sysfs.c
@@ -127,10 +127,19 @@
return snprintf(buf, PAGE_SIZE, "%u\n", rc6pp_residency);
}
+static ssize_t
+show_media_rc6_ms(struct device *kdev, struct device_attribute *attr, char *buf)
+{
+ struct drm_minor *dminor = dev_get_drvdata(kdev);
+ u32 rc6_residency = calc_residency(dminor->dev, VLV_GT_MEDIA_RC6);
+ return snprintf(buf, PAGE_SIZE, "%u\n", rc6_residency);
+}
+
static DEVICE_ATTR(rc6_enable, S_IRUGO, show_rc6_mask, NULL);
static DEVICE_ATTR(rc6_residency_ms, S_IRUGO, show_rc6_ms, NULL);
static DEVICE_ATTR(rc6p_residency_ms, S_IRUGO, show_rc6p_ms, NULL);
static DEVICE_ATTR(rc6pp_residency_ms, S_IRUGO, show_rc6pp_ms, NULL);
+static DEVICE_ATTR(media_rc6_residency_ms, S_IRUGO, show_media_rc6_ms, NULL);
static struct attribute *rc6_attrs[] = {
&dev_attr_rc6_enable.attr,
@@ -153,6 +162,16 @@
.name = power_group_name,
.attrs = rc6p_attrs
};
+
+static struct attribute *media_rc6_attrs[] = {
+ &dev_attr_media_rc6_residency_ms.attr,
+ NULL
+};
+
+static struct attribute_group media_rc6_attr_group = {
+ .name = power_group_name,
+ .attrs = media_rc6_attrs
+};
#endif
static int l3_access_valid(struct drm_device *dev, loff_t offset)
@@ -300,7 +319,9 @@
ret = intel_gpu_freq(dev_priv, (freq >> 8) & 0xff);
} else {
u32 rpstat = I915_READ(GEN6_RPSTAT1);
- if (IS_HASWELL(dev_priv) || IS_BROADWELL(dev_priv))
+ if (IS_GEN9(dev_priv))
+ ret = (rpstat & GEN9_CAGF_MASK) >> GEN9_CAGF_SHIFT;
+ else if (IS_HASWELL(dev_priv) || IS_BROADWELL(dev_priv))
ret = (rpstat & HSW_CAGF_MASK) >> HSW_CAGF_SHIFT;
else
ret = (rpstat & GEN6_CAGF_MASK) >> GEN6_CAGF_SHIFT;
@@ -402,10 +423,7 @@
/* We still need *_set_rps to process the new max_delay and
* update the interrupt limits and PMINTRMSK even though
* frequency request may be unchanged. */
- if (IS_VALLEYVIEW(dev))
- valleyview_set_rps(dev, val);
- else
- gen6_set_rps(dev, val);
+ intel_set_rps(dev, val);
mutex_unlock(&dev_priv->rps.hw_lock);
@@ -464,10 +482,7 @@
/* We still need *_set_rps to process the new min_delay and
* update the interrupt limits and PMINTRMSK even though
* frequency request may be unchanged. */
- if (IS_VALLEYVIEW(dev))
- valleyview_set_rps(dev, val);
- else
- gen6_set_rps(dev, val);
+ intel_set_rps(dev, val);
mutex_unlock(&dev_priv->rps.hw_lock);
@@ -493,38 +508,17 @@
struct drm_minor *minor = dev_to_drm_minor(kdev);
struct drm_device *dev = minor->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
- u32 val, rp_state_cap;
- ssize_t ret;
+ u32 val;
- ret = mutex_lock_interruptible(&dev->struct_mutex);
- if (ret)
- return ret;
- intel_runtime_pm_get(dev_priv);
- rp_state_cap = I915_READ(GEN6_RP_STATE_CAP);
- intel_runtime_pm_put(dev_priv);
- mutex_unlock(&dev->struct_mutex);
-
- if (attr == &dev_attr_gt_RP0_freq_mhz) {
- if (IS_VALLEYVIEW(dev))
- val = intel_gpu_freq(dev_priv, dev_priv->rps.rp0_freq);
- else
- val = intel_gpu_freq(dev_priv,
- ((rp_state_cap & 0x0000ff) >> 0));
- } else if (attr == &dev_attr_gt_RP1_freq_mhz) {
- if (IS_VALLEYVIEW(dev))
- val = intel_gpu_freq(dev_priv, dev_priv->rps.rp1_freq);
- else
- val = intel_gpu_freq(dev_priv,
- ((rp_state_cap & 0x00ff00) >> 8));
- } else if (attr == &dev_attr_gt_RPn_freq_mhz) {
- if (IS_VALLEYVIEW(dev))
- val = intel_gpu_freq(dev_priv, dev_priv->rps.min_freq);
- else
- val = intel_gpu_freq(dev_priv,
- ((rp_state_cap & 0xff0000) >> 16));
- } else {
+ if (attr == &dev_attr_gt_RP0_freq_mhz)
+ val = intel_gpu_freq(dev_priv, dev_priv->rps.rp0_freq);
+ else if (attr == &dev_attr_gt_RP1_freq_mhz)
+ val = intel_gpu_freq(dev_priv, dev_priv->rps.rp1_freq);
+ else if (attr == &dev_attr_gt_RPn_freq_mhz)
+ val = intel_gpu_freq(dev_priv, dev_priv->rps.min_freq);
+ else
BUG();
- }
+
return snprintf(buf, PAGE_SIZE, "%d\n", val);
}
@@ -633,6 +627,12 @@
if (ret)
DRM_ERROR("RC6p residency sysfs setup failed\n");
}
+ if (IS_VALLEYVIEW(dev)) {
+ ret = sysfs_merge_group(&dev->primary->kdev->kobj,
+ &media_rc6_attr_group);
+ if (ret)
+ DRM_ERROR("Media RC6 residency sysfs setup failed\n");
+ }
#endif
if (HAS_L3_DPF(dev)) {
ret = device_create_bin_file(dev->primary->kdev, &dpf_attrs);
diff --git a/drivers/gpu/drm/i915/i915_trace.h b/drivers/gpu/drm/i915/i915_trace.h
index d776621..5fda6c7 100644
--- a/drivers/gpu/drm/i915/i915_trace.h
+++ b/drivers/gpu/drm/i915/i915_trace.h
@@ -114,7 +114,7 @@
TP_STRUCT__entry(
__field(struct drm_i915_gem_object *, obj)
__field(struct i915_address_space *, vm)
- __field(u32, offset)
+ __field(u64, offset)
__field(u32, size)
__field(unsigned, flags)
),
@@ -127,7 +127,7 @@
__entry->flags = flags;
),
- TP_printk("obj=%p, offset=%08x size=%x%s vm=%p",
+ TP_printk("obj=%p, offset=%016llx size=%x%s vm=%p",
__entry->obj, __entry->offset, __entry->size,
__entry->flags & PIN_MAPPABLE ? ", mappable" : "",
__entry->vm)
@@ -140,7 +140,7 @@
TP_STRUCT__entry(
__field(struct drm_i915_gem_object *, obj)
__field(struct i915_address_space *, vm)
- __field(u32, offset)
+ __field(u64, offset)
__field(u32, size)
),
@@ -151,10 +151,109 @@
__entry->size = vma->node.size;
),
- TP_printk("obj=%p, offset=%08x size=%x vm=%p",
+ TP_printk("obj=%p, offset=%016llx size=%x vm=%p",
__entry->obj, __entry->offset, __entry->size, __entry->vm)
);
+#define VM_TO_TRACE_NAME(vm) \
+ (i915_is_ggtt(vm) ? "G" : \
+ "P")
+
+DECLARE_EVENT_CLASS(i915_va,
+ TP_PROTO(struct i915_address_space *vm, u64 start, u64 length, const char *name),
+ TP_ARGS(vm, start, length, name),
+
+ TP_STRUCT__entry(
+ __field(struct i915_address_space *, vm)
+ __field(u64, start)
+ __field(u64, end)
+ __string(name, name)
+ ),
+
+ TP_fast_assign(
+ __entry->vm = vm;
+ __entry->start = start;
+ __entry->end = start + length - 1;
+ __assign_str(name, name);
+ ),
+
+ TP_printk("vm=%p (%s), 0x%llx-0x%llx",
+ __entry->vm, __get_str(name), __entry->start, __entry->end)
+);
+
+DEFINE_EVENT(i915_va, i915_va_alloc,
+ TP_PROTO(struct i915_address_space *vm, u64 start, u64 length, const char *name),
+ TP_ARGS(vm, start, length, name)
+);
+
+DECLARE_EVENT_CLASS(i915_page_table_entry,
+ TP_PROTO(struct i915_address_space *vm, u32 pde, u64 start, u64 pde_shift),
+ TP_ARGS(vm, pde, start, pde_shift),
+
+ TP_STRUCT__entry(
+ __field(struct i915_address_space *, vm)
+ __field(u32, pde)
+ __field(u64, start)
+ __field(u64, end)
+ ),
+
+ TP_fast_assign(
+ __entry->vm = vm;
+ __entry->pde = pde;
+ __entry->start = start;
+ __entry->end = ((start + (1ULL << pde_shift)) & ~((1ULL << pde_shift)-1)) - 1;
+ ),
+
+ TP_printk("vm=%p, pde=%d (0x%llx-0x%llx)",
+ __entry->vm, __entry->pde, __entry->start, __entry->end)
+);
+
+DEFINE_EVENT(i915_page_table_entry, i915_page_table_entry_alloc,
+ TP_PROTO(struct i915_address_space *vm, u32 pde, u64 start, u64 pde_shift),
+ TP_ARGS(vm, pde, start, pde_shift)
+);
+
+/* Avoid extra math because we only support two sizes. The format is defined by
+ * bitmap_scnprintf. Each 32 bits is 8 HEX digits followed by a comma. */
+#define TRACE_PT_SIZE(bits) \
+ ((((bits) == 1024) ? 288 : 144) + 1)
+
+DECLARE_EVENT_CLASS(i915_page_table_entry_update,
+ TP_PROTO(struct i915_address_space *vm, u32 pde,
+ struct i915_page_table_entry *pt, u32 first, u32 count, u32 bits),
+ TP_ARGS(vm, pde, pt, first, count, bits),
+
+ TP_STRUCT__entry(
+ __field(struct i915_address_space *, vm)
+ __field(u32, pde)
+ __field(u32, first)
+ __field(u32, last)
+ __dynamic_array(char, cur_ptes, TRACE_PT_SIZE(bits))
+ ),
+
+ TP_fast_assign(
+ __entry->vm = vm;
+ __entry->pde = pde;
+ __entry->first = first;
+ __entry->last = first + count - 1;
+ scnprintf(__get_str(cur_ptes),
+ TRACE_PT_SIZE(bits),
+ "%*pb",
+ bits,
+ pt->used_ptes);
+ ),
+
+ TP_printk("vm=%p, pde=%d, updating %u:%u\t%s",
+ __entry->vm, __entry->pde, __entry->last, __entry->first,
+ __get_str(cur_ptes))
+);
+
+DEFINE_EVENT(i915_page_table_entry_update, i915_page_table_entry_map,
+ TP_PROTO(struct i915_address_space *vm, u32 pde,
+ struct i915_page_table_entry *pt, u32 first, u32 count, u32 bits),
+ TP_ARGS(vm, pde, pt, first, count, bits)
+);
+
TRACE_EVENT(i915_gem_object_change_domain,
TP_PROTO(struct drm_i915_gem_object *obj, u32 old_read, u32 old_write),
TP_ARGS(obj, old_read, old_write),
diff --git a/drivers/gpu/drm/i915/i915_ums.c b/drivers/gpu/drm/i915/i915_ums.c
deleted file mode 100644
index d10fe3e..0000000
--- a/drivers/gpu/drm/i915/i915_ums.c
+++ /dev/null
@@ -1,552 +0,0 @@
-/*
- *
- * Copyright 2008 (c) Intel Corporation
- * Jesse Barnes <jbarnes@virtuousgeek.org>
- * Copyright 2013 (c) Intel Corporation
- * Daniel Vetter <daniel.vetter@ffwll.ch>
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the
- * "Software"), to deal in the Software without restriction, including
- * without limitation the rights to use, copy, modify, merge, publish,
- * distribute, sub license, and/or sell copies of the Software, and to
- * permit persons to whom the Software is furnished to do so, subject to
- * the following conditions:
- *
- * The above copyright notice and this permission notice (including the
- * next paragraph) shall be included in all copies or substantial portions
- * of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
- * OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT.
- * IN NO EVENT SHALL TUNGSTEN GRAPHICS AND/OR ITS SUPPLIERS BE LIABLE FOR
- * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
- * TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
- * SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
- */
-
-#include <drm/drmP.h>
-#include <drm/i915_drm.h>
-#include "intel_drv.h"
-#include "i915_reg.h"
-
-static bool i915_pipe_enabled(struct drm_device *dev, enum pipe pipe)
-{
- struct drm_i915_private *dev_priv = dev->dev_private;
- u32 dpll_reg;
-
- /* On IVB, 3rd pipe shares PLL with another one */
- if (pipe > 1)
- return false;
-
- if (HAS_PCH_SPLIT(dev))
- dpll_reg = PCH_DPLL(pipe);
- else
- dpll_reg = (pipe == PIPE_A) ? _DPLL_A : _DPLL_B;
-
- return (I915_READ(dpll_reg) & DPLL_VCO_ENABLE);
-}
-
-static void i915_save_palette(struct drm_device *dev, enum pipe pipe)
-{
- struct drm_i915_private *dev_priv = dev->dev_private;
- unsigned long reg = (pipe == PIPE_A ? _PALETTE_A : _PALETTE_B);
- u32 *array;
- int i;
-
- if (!i915_pipe_enabled(dev, pipe))
- return;
-
- if (HAS_PCH_SPLIT(dev))
- reg = (pipe == PIPE_A) ? _LGC_PALETTE_A : _LGC_PALETTE_B;
-
- if (pipe == PIPE_A)
- array = dev_priv->regfile.save_palette_a;
- else
- array = dev_priv->regfile.save_palette_b;
-
- for (i = 0; i < 256; i++)
- array[i] = I915_READ(reg + (i << 2));
-}
-
-static void i915_restore_palette(struct drm_device *dev, enum pipe pipe)
-{
- struct drm_i915_private *dev_priv = dev->dev_private;
- unsigned long reg = (pipe == PIPE_A ? _PALETTE_A : _PALETTE_B);
- u32 *array;
- int i;
-
- if (!i915_pipe_enabled(dev, pipe))
- return;
-
- if (HAS_PCH_SPLIT(dev))
- reg = (pipe == PIPE_A) ? _LGC_PALETTE_A : _LGC_PALETTE_B;
-
- if (pipe == PIPE_A)
- array = dev_priv->regfile.save_palette_a;
- else
- array = dev_priv->regfile.save_palette_b;
-
- for (i = 0; i < 256; i++)
- I915_WRITE(reg + (i << 2), array[i]);
-}
-
-void i915_save_display_reg(struct drm_device *dev)
-{
- struct drm_i915_private *dev_priv = dev->dev_private;
- int i;
-
- /* Cursor state */
- dev_priv->regfile.saveCURACNTR = I915_READ(_CURACNTR);
- dev_priv->regfile.saveCURAPOS = I915_READ(_CURAPOS);
- dev_priv->regfile.saveCURABASE = I915_READ(_CURABASE);
- dev_priv->regfile.saveCURBCNTR = I915_READ(_CURBCNTR);
- dev_priv->regfile.saveCURBPOS = I915_READ(_CURBPOS);
- dev_priv->regfile.saveCURBBASE = I915_READ(_CURBBASE);
- if (IS_GEN2(dev))
- dev_priv->regfile.saveCURSIZE = I915_READ(CURSIZE);
-
- if (HAS_PCH_SPLIT(dev)) {
- dev_priv->regfile.savePCH_DREF_CONTROL = I915_READ(PCH_DREF_CONTROL);
- dev_priv->regfile.saveDISP_ARB_CTL = I915_READ(DISP_ARB_CTL);
- }
-
- /* Pipe & plane A info */
- dev_priv->regfile.savePIPEACONF = I915_READ(_PIPEACONF);
- dev_priv->regfile.savePIPEASRC = I915_READ(_PIPEASRC);
- if (HAS_PCH_SPLIT(dev)) {
- dev_priv->regfile.saveFPA0 = I915_READ(_PCH_FPA0);
- dev_priv->regfile.saveFPA1 = I915_READ(_PCH_FPA1);
- dev_priv->regfile.saveDPLL_A = I915_READ(_PCH_DPLL_A);
- } else {
- dev_priv->regfile.saveFPA0 = I915_READ(_FPA0);
- dev_priv->regfile.saveFPA1 = I915_READ(_FPA1);
- dev_priv->regfile.saveDPLL_A = I915_READ(_DPLL_A);
- }
- if (INTEL_INFO(dev)->gen >= 4 && !HAS_PCH_SPLIT(dev))
- dev_priv->regfile.saveDPLL_A_MD = I915_READ(_DPLL_A_MD);
- dev_priv->regfile.saveHTOTAL_A = I915_READ(_HTOTAL_A);
- dev_priv->regfile.saveHBLANK_A = I915_READ(_HBLANK_A);
- dev_priv->regfile.saveHSYNC_A = I915_READ(_HSYNC_A);
- dev_priv->regfile.saveVTOTAL_A = I915_READ(_VTOTAL_A);
- dev_priv->regfile.saveVBLANK_A = I915_READ(_VBLANK_A);
- dev_priv->regfile.saveVSYNC_A = I915_READ(_VSYNC_A);
- if (!HAS_PCH_SPLIT(dev))
- dev_priv->regfile.saveBCLRPAT_A = I915_READ(_BCLRPAT_A);
-
- if (HAS_PCH_SPLIT(dev)) {
- dev_priv->regfile.savePIPEA_DATA_M1 = I915_READ(_PIPEA_DATA_M1);
- dev_priv->regfile.savePIPEA_DATA_N1 = I915_READ(_PIPEA_DATA_N1);
- dev_priv->regfile.savePIPEA_LINK_M1 = I915_READ(_PIPEA_LINK_M1);
- dev_priv->regfile.savePIPEA_LINK_N1 = I915_READ(_PIPEA_LINK_N1);
-
- dev_priv->regfile.saveFDI_TXA_CTL = I915_READ(_FDI_TXA_CTL);
- dev_priv->regfile.saveFDI_RXA_CTL = I915_READ(_FDI_RXA_CTL);
-
- dev_priv->regfile.savePFA_CTL_1 = I915_READ(_PFA_CTL_1);
- dev_priv->regfile.savePFA_WIN_SZ = I915_READ(_PFA_WIN_SZ);
- dev_priv->regfile.savePFA_WIN_POS = I915_READ(_PFA_WIN_POS);
-
- dev_priv->regfile.saveTRANSACONF = I915_READ(_PCH_TRANSACONF);
- dev_priv->regfile.saveTRANS_HTOTAL_A = I915_READ(_PCH_TRANS_HTOTAL_A);
- dev_priv->regfile.saveTRANS_HBLANK_A = I915_READ(_PCH_TRANS_HBLANK_A);
- dev_priv->regfile.saveTRANS_HSYNC_A = I915_READ(_PCH_TRANS_HSYNC_A);
- dev_priv->regfile.saveTRANS_VTOTAL_A = I915_READ(_PCH_TRANS_VTOTAL_A);
- dev_priv->regfile.saveTRANS_VBLANK_A = I915_READ(_PCH_TRANS_VBLANK_A);
- dev_priv->regfile.saveTRANS_VSYNC_A = I915_READ(_PCH_TRANS_VSYNC_A);
- }
-
- dev_priv->regfile.saveDSPACNTR = I915_READ(_DSPACNTR);
- dev_priv->regfile.saveDSPASTRIDE = I915_READ(_DSPASTRIDE);
- dev_priv->regfile.saveDSPASIZE = I915_READ(_DSPASIZE);
- dev_priv->regfile.saveDSPAPOS = I915_READ(_DSPAPOS);
- dev_priv->regfile.saveDSPAADDR = I915_READ(_DSPAADDR);
- if (INTEL_INFO(dev)->gen >= 4) {
- dev_priv->regfile.saveDSPASURF = I915_READ(_DSPASURF);
- dev_priv->regfile.saveDSPATILEOFF = I915_READ(_DSPATILEOFF);
- }
- i915_save_palette(dev, PIPE_A);
- dev_priv->regfile.savePIPEASTAT = I915_READ(_PIPEASTAT);
-
- /* Pipe & plane B info */
- dev_priv->regfile.savePIPEBCONF = I915_READ(_PIPEBCONF);
- dev_priv->regfile.savePIPEBSRC = I915_READ(_PIPEBSRC);
- if (HAS_PCH_SPLIT(dev)) {
- dev_priv->regfile.saveFPB0 = I915_READ(_PCH_FPB0);
- dev_priv->regfile.saveFPB1 = I915_READ(_PCH_FPB1);
- dev_priv->regfile.saveDPLL_B = I915_READ(_PCH_DPLL_B);
- } else {
- dev_priv->regfile.saveFPB0 = I915_READ(_FPB0);
- dev_priv->regfile.saveFPB1 = I915_READ(_FPB1);
- dev_priv->regfile.saveDPLL_B = I915_READ(_DPLL_B);
- }
- if (INTEL_INFO(dev)->gen >= 4 && !HAS_PCH_SPLIT(dev))
- dev_priv->regfile.saveDPLL_B_MD = I915_READ(_DPLL_B_MD);
- dev_priv->regfile.saveHTOTAL_B = I915_READ(_HTOTAL_B);
- dev_priv->regfile.saveHBLANK_B = I915_READ(_HBLANK_B);
- dev_priv->regfile.saveHSYNC_B = I915_READ(_HSYNC_B);
- dev_priv->regfile.saveVTOTAL_B = I915_READ(_VTOTAL_B);
- dev_priv->regfile.saveVBLANK_B = I915_READ(_VBLANK_B);
- dev_priv->regfile.saveVSYNC_B = I915_READ(_VSYNC_B);
- if (!HAS_PCH_SPLIT(dev))
- dev_priv->regfile.saveBCLRPAT_B = I915_READ(_BCLRPAT_B);
-
- if (HAS_PCH_SPLIT(dev)) {
- dev_priv->regfile.savePIPEB_DATA_M1 = I915_READ(_PIPEB_DATA_M1);
- dev_priv->regfile.savePIPEB_DATA_N1 = I915_READ(_PIPEB_DATA_N1);
- dev_priv->regfile.savePIPEB_LINK_M1 = I915_READ(_PIPEB_LINK_M1);
- dev_priv->regfile.savePIPEB_LINK_N1 = I915_READ(_PIPEB_LINK_N1);
-
- dev_priv->regfile.saveFDI_TXB_CTL = I915_READ(_FDI_TXB_CTL);
- dev_priv->regfile.saveFDI_RXB_CTL = I915_READ(_FDI_RXB_CTL);
-
- dev_priv->regfile.savePFB_CTL_1 = I915_READ(_PFB_CTL_1);
- dev_priv->regfile.savePFB_WIN_SZ = I915_READ(_PFB_WIN_SZ);
- dev_priv->regfile.savePFB_WIN_POS = I915_READ(_PFB_WIN_POS);
-
- dev_priv->regfile.saveTRANSBCONF = I915_READ(_PCH_TRANSBCONF);
- dev_priv->regfile.saveTRANS_HTOTAL_B = I915_READ(_PCH_TRANS_HTOTAL_B);
- dev_priv->regfile.saveTRANS_HBLANK_B = I915_READ(_PCH_TRANS_HBLANK_B);
- dev_priv->regfile.saveTRANS_HSYNC_B = I915_READ(_PCH_TRANS_HSYNC_B);
- dev_priv->regfile.saveTRANS_VTOTAL_B = I915_READ(_PCH_TRANS_VTOTAL_B);
- dev_priv->regfile.saveTRANS_VBLANK_B = I915_READ(_PCH_TRANS_VBLANK_B);
- dev_priv->regfile.saveTRANS_VSYNC_B = I915_READ(_PCH_TRANS_VSYNC_B);
- }
-
- dev_priv->regfile.saveDSPBCNTR = I915_READ(_DSPBCNTR);
- dev_priv->regfile.saveDSPBSTRIDE = I915_READ(_DSPBSTRIDE);
- dev_priv->regfile.saveDSPBSIZE = I915_READ(_DSPBSIZE);
- dev_priv->regfile.saveDSPBPOS = I915_READ(_DSPBPOS);
- dev_priv->regfile.saveDSPBADDR = I915_READ(_DSPBADDR);
- if (INTEL_INFO(dev)->gen >= 4) {
- dev_priv->regfile.saveDSPBSURF = I915_READ(_DSPBSURF);
- dev_priv->regfile.saveDSPBTILEOFF = I915_READ(_DSPBTILEOFF);
- }
- i915_save_palette(dev, PIPE_B);
- dev_priv->regfile.savePIPEBSTAT = I915_READ(_PIPEBSTAT);
-
- /* Fences */
- switch (INTEL_INFO(dev)->gen) {
- case 7:
- case 6:
- for (i = 0; i < 16; i++)
- dev_priv->regfile.saveFENCE[i] = I915_READ64(FENCE_REG_SANDYBRIDGE_0 + (i * 8));
- break;
- case 5:
- case 4:
- for (i = 0; i < 16; i++)
- dev_priv->regfile.saveFENCE[i] = I915_READ64(FENCE_REG_965_0 + (i * 8));
- break;
- case 3:
- if (IS_I945G(dev) || IS_I945GM(dev) || IS_G33(dev))
- for (i = 0; i < 8; i++)
- dev_priv->regfile.saveFENCE[i+8] = I915_READ(FENCE_REG_945_8 + (i * 4));
- case 2:
- for (i = 0; i < 8; i++)
- dev_priv->regfile.saveFENCE[i] = I915_READ(FENCE_REG_830_0 + (i * 4));
- break;
- }
-
- /* CRT state */
- if (HAS_PCH_SPLIT(dev))
- dev_priv->regfile.saveADPA = I915_READ(PCH_ADPA);
- else
- dev_priv->regfile.saveADPA = I915_READ(ADPA);
-
- /* Display Port state */
- if (SUPPORTS_INTEGRATED_DP(dev)) {
- dev_priv->regfile.saveDP_B = I915_READ(DP_B);
- dev_priv->regfile.saveDP_C = I915_READ(DP_C);
- dev_priv->regfile.saveDP_D = I915_READ(DP_D);
- dev_priv->regfile.savePIPEA_GMCH_DATA_M = I915_READ(_PIPEA_DATA_M_G4X);
- dev_priv->regfile.savePIPEB_GMCH_DATA_M = I915_READ(_PIPEB_DATA_M_G4X);
- dev_priv->regfile.savePIPEA_GMCH_DATA_N = I915_READ(_PIPEA_DATA_N_G4X);
- dev_priv->regfile.savePIPEB_GMCH_DATA_N = I915_READ(_PIPEB_DATA_N_G4X);
- dev_priv->regfile.savePIPEA_DP_LINK_M = I915_READ(_PIPEA_LINK_M_G4X);
- dev_priv->regfile.savePIPEB_DP_LINK_M = I915_READ(_PIPEB_LINK_M_G4X);
- dev_priv->regfile.savePIPEA_DP_LINK_N = I915_READ(_PIPEA_LINK_N_G4X);
- dev_priv->regfile.savePIPEB_DP_LINK_N = I915_READ(_PIPEB_LINK_N_G4X);
- }
- /* FIXME: regfile.save TV & SDVO state */
-
- /* Panel fitter */
- if (!IS_I830(dev) && !IS_845G(dev) && !HAS_PCH_SPLIT(dev)) {
- dev_priv->regfile.savePFIT_CONTROL = I915_READ(PFIT_CONTROL);
- dev_priv->regfile.savePFIT_PGM_RATIOS = I915_READ(PFIT_PGM_RATIOS);
- }
-
- /* Backlight */
- if (INTEL_INFO(dev)->gen <= 4)
- pci_read_config_byte(dev->pdev, PCI_LBPC,
- &dev_priv->regfile.saveLBB);
-
- if (HAS_PCH_SPLIT(dev)) {
- dev_priv->regfile.saveBLC_PWM_CTL = I915_READ(BLC_PWM_PCH_CTL1);
- dev_priv->regfile.saveBLC_PWM_CTL2 = I915_READ(BLC_PWM_PCH_CTL2);
- dev_priv->regfile.saveBLC_CPU_PWM_CTL = I915_READ(BLC_PWM_CPU_CTL);
- dev_priv->regfile.saveBLC_CPU_PWM_CTL2 = I915_READ(BLC_PWM_CPU_CTL2);
- } else {
- dev_priv->regfile.saveBLC_PWM_CTL = I915_READ(BLC_PWM_CTL);
- if (INTEL_INFO(dev)->gen >= 4)
- dev_priv->regfile.saveBLC_PWM_CTL2 = I915_READ(BLC_PWM_CTL2);
- dev_priv->regfile.saveBLC_HIST_CTL = I915_READ(BLC_HIST_CTL);
- }
-
- return;
-}
-
-void i915_restore_display_reg(struct drm_device *dev)
-{
- struct drm_i915_private *dev_priv = dev->dev_private;
- int dpll_a_reg, fpa0_reg, fpa1_reg;
- int dpll_b_reg, fpb0_reg, fpb1_reg;
- int i;
-
- /* Backlight */
- if (INTEL_INFO(dev)->gen <= 4)
- pci_write_config_byte(dev->pdev, PCI_LBPC,
- dev_priv->regfile.saveLBB);
-
- if (HAS_PCH_SPLIT(dev)) {
- I915_WRITE(BLC_PWM_PCH_CTL1, dev_priv->regfile.saveBLC_PWM_CTL);
- I915_WRITE(BLC_PWM_PCH_CTL2, dev_priv->regfile.saveBLC_PWM_CTL2);
- /* NOTE: BLC_PWM_CPU_CTL must be written after BLC_PWM_CPU_CTL2;
- * otherwise we get blank eDP screen after S3 on some machines
- */
- I915_WRITE(BLC_PWM_CPU_CTL2, dev_priv->regfile.saveBLC_CPU_PWM_CTL2);
- I915_WRITE(BLC_PWM_CPU_CTL, dev_priv->regfile.saveBLC_CPU_PWM_CTL);
- } else {
- if (INTEL_INFO(dev)->gen >= 4)
- I915_WRITE(BLC_PWM_CTL2, dev_priv->regfile.saveBLC_PWM_CTL2);
- I915_WRITE(BLC_PWM_CTL, dev_priv->regfile.saveBLC_PWM_CTL);
- I915_WRITE(BLC_HIST_CTL, dev_priv->regfile.saveBLC_HIST_CTL);
- }
-
- /* Panel fitter */
- if (!IS_I830(dev) && !IS_845G(dev) && !HAS_PCH_SPLIT(dev)) {
- I915_WRITE(PFIT_PGM_RATIOS, dev_priv->regfile.savePFIT_PGM_RATIOS);
- I915_WRITE(PFIT_CONTROL, dev_priv->regfile.savePFIT_CONTROL);
- }
-
- /* Display port ratios (must be done before clock is set) */
- if (SUPPORTS_INTEGRATED_DP(dev)) {
- I915_WRITE(_PIPEA_DATA_M_G4X, dev_priv->regfile.savePIPEA_GMCH_DATA_M);
- I915_WRITE(_PIPEB_DATA_M_G4X, dev_priv->regfile.savePIPEB_GMCH_DATA_M);
- I915_WRITE(_PIPEA_DATA_N_G4X, dev_priv->regfile.savePIPEA_GMCH_DATA_N);
- I915_WRITE(_PIPEB_DATA_N_G4X, dev_priv->regfile.savePIPEB_GMCH_DATA_N);
- I915_WRITE(_PIPEA_LINK_M_G4X, dev_priv->regfile.savePIPEA_DP_LINK_M);
- I915_WRITE(_PIPEB_LINK_M_G4X, dev_priv->regfile.savePIPEB_DP_LINK_M);
- I915_WRITE(_PIPEA_LINK_N_G4X, dev_priv->regfile.savePIPEA_DP_LINK_N);
- I915_WRITE(_PIPEB_LINK_N_G4X, dev_priv->regfile.savePIPEB_DP_LINK_N);
- }
-
- /* Fences */
- switch (INTEL_INFO(dev)->gen) {
- case 7:
- case 6:
- for (i = 0; i < 16; i++)
- I915_WRITE64(FENCE_REG_SANDYBRIDGE_0 + (i * 8), dev_priv->regfile.saveFENCE[i]);
- break;
- case 5:
- case 4:
- for (i = 0; i < 16; i++)
- I915_WRITE64(FENCE_REG_965_0 + (i * 8), dev_priv->regfile.saveFENCE[i]);
- break;
- case 3:
- case 2:
- if (IS_I945G(dev) || IS_I945GM(dev) || IS_G33(dev))
- for (i = 0; i < 8; i++)
- I915_WRITE(FENCE_REG_945_8 + (i * 4), dev_priv->regfile.saveFENCE[i+8]);
- for (i = 0; i < 8; i++)
- I915_WRITE(FENCE_REG_830_0 + (i * 4), dev_priv->regfile.saveFENCE[i]);
- break;
- }
-
-
- if (HAS_PCH_SPLIT(dev)) {
- dpll_a_reg = _PCH_DPLL_A;
- dpll_b_reg = _PCH_DPLL_B;
- fpa0_reg = _PCH_FPA0;
- fpb0_reg = _PCH_FPB0;
- fpa1_reg = _PCH_FPA1;
- fpb1_reg = _PCH_FPB1;
- } else {
- dpll_a_reg = _DPLL_A;
- dpll_b_reg = _DPLL_B;
- fpa0_reg = _FPA0;
- fpb0_reg = _FPB0;
- fpa1_reg = _FPA1;
- fpb1_reg = _FPB1;
- }
-
- if (HAS_PCH_SPLIT(dev)) {
- I915_WRITE(PCH_DREF_CONTROL, dev_priv->regfile.savePCH_DREF_CONTROL);
- I915_WRITE(DISP_ARB_CTL, dev_priv->regfile.saveDISP_ARB_CTL);
- }
-
- /* Pipe & plane A info */
- /* Prime the clock */
- if (dev_priv->regfile.saveDPLL_A & DPLL_VCO_ENABLE) {
- I915_WRITE(dpll_a_reg, dev_priv->regfile.saveDPLL_A &
- ~DPLL_VCO_ENABLE);
- POSTING_READ(dpll_a_reg);
- udelay(150);
- }
- I915_WRITE(fpa0_reg, dev_priv->regfile.saveFPA0);
- I915_WRITE(fpa1_reg, dev_priv->regfile.saveFPA1);
- /* Actually enable it */
- I915_WRITE(dpll_a_reg, dev_priv->regfile.saveDPLL_A);
- POSTING_READ(dpll_a_reg);
- udelay(150);
- if (INTEL_INFO(dev)->gen >= 4 && !HAS_PCH_SPLIT(dev)) {
- I915_WRITE(_DPLL_A_MD, dev_priv->regfile.saveDPLL_A_MD);
- POSTING_READ(_DPLL_A_MD);
- }
- udelay(150);
-
- /* Restore mode */
- I915_WRITE(_HTOTAL_A, dev_priv->regfile.saveHTOTAL_A);
- I915_WRITE(_HBLANK_A, dev_priv->regfile.saveHBLANK_A);
- I915_WRITE(_HSYNC_A, dev_priv->regfile.saveHSYNC_A);
- I915_WRITE(_VTOTAL_A, dev_priv->regfile.saveVTOTAL_A);
- I915_WRITE(_VBLANK_A, dev_priv->regfile.saveVBLANK_A);
- I915_WRITE(_VSYNC_A, dev_priv->regfile.saveVSYNC_A);
- if (!HAS_PCH_SPLIT(dev))
- I915_WRITE(_BCLRPAT_A, dev_priv->regfile.saveBCLRPAT_A);
-
- if (HAS_PCH_SPLIT(dev)) {
- I915_WRITE(_PIPEA_DATA_M1, dev_priv->regfile.savePIPEA_DATA_M1);
- I915_WRITE(_PIPEA_DATA_N1, dev_priv->regfile.savePIPEA_DATA_N1);
- I915_WRITE(_PIPEA_LINK_M1, dev_priv->regfile.savePIPEA_LINK_M1);
- I915_WRITE(_PIPEA_LINK_N1, dev_priv->regfile.savePIPEA_LINK_N1);
-
- I915_WRITE(_FDI_RXA_CTL, dev_priv->regfile.saveFDI_RXA_CTL);
- I915_WRITE(_FDI_TXA_CTL, dev_priv->regfile.saveFDI_TXA_CTL);
-
- I915_WRITE(_PFA_CTL_1, dev_priv->regfile.savePFA_CTL_1);
- I915_WRITE(_PFA_WIN_SZ, dev_priv->regfile.savePFA_WIN_SZ);
- I915_WRITE(_PFA_WIN_POS, dev_priv->regfile.savePFA_WIN_POS);
-
- I915_WRITE(_PCH_TRANSACONF, dev_priv->regfile.saveTRANSACONF);
- I915_WRITE(_PCH_TRANS_HTOTAL_A, dev_priv->regfile.saveTRANS_HTOTAL_A);
- I915_WRITE(_PCH_TRANS_HBLANK_A, dev_priv->regfile.saveTRANS_HBLANK_A);
- I915_WRITE(_PCH_TRANS_HSYNC_A, dev_priv->regfile.saveTRANS_HSYNC_A);
- I915_WRITE(_PCH_TRANS_VTOTAL_A, dev_priv->regfile.saveTRANS_VTOTAL_A);
- I915_WRITE(_PCH_TRANS_VBLANK_A, dev_priv->regfile.saveTRANS_VBLANK_A);
- I915_WRITE(_PCH_TRANS_VSYNC_A, dev_priv->regfile.saveTRANS_VSYNC_A);
- }
-
- /* Restore plane info */
- I915_WRITE(_DSPASIZE, dev_priv->regfile.saveDSPASIZE);
- I915_WRITE(_DSPAPOS, dev_priv->regfile.saveDSPAPOS);
- I915_WRITE(_PIPEASRC, dev_priv->regfile.savePIPEASRC);
- I915_WRITE(_DSPAADDR, dev_priv->regfile.saveDSPAADDR);
- I915_WRITE(_DSPASTRIDE, dev_priv->regfile.saveDSPASTRIDE);
- if (INTEL_INFO(dev)->gen >= 4) {
- I915_WRITE(_DSPASURF, dev_priv->regfile.saveDSPASURF);
- I915_WRITE(_DSPATILEOFF, dev_priv->regfile.saveDSPATILEOFF);
- }
-
- I915_WRITE(_PIPEACONF, dev_priv->regfile.savePIPEACONF);
-
- i915_restore_palette(dev, PIPE_A);
- /* Enable the plane */
- I915_WRITE(_DSPACNTR, dev_priv->regfile.saveDSPACNTR);
- I915_WRITE(_DSPAADDR, I915_READ(_DSPAADDR));
-
- /* Pipe & plane B info */
- if (dev_priv->regfile.saveDPLL_B & DPLL_VCO_ENABLE) {
- I915_WRITE(dpll_b_reg, dev_priv->regfile.saveDPLL_B &
- ~DPLL_VCO_ENABLE);
- POSTING_READ(dpll_b_reg);
- udelay(150);
- }
- I915_WRITE(fpb0_reg, dev_priv->regfile.saveFPB0);
- I915_WRITE(fpb1_reg, dev_priv->regfile.saveFPB1);
- /* Actually enable it */
- I915_WRITE(dpll_b_reg, dev_priv->regfile.saveDPLL_B);
- POSTING_READ(dpll_b_reg);
- udelay(150);
- if (INTEL_INFO(dev)->gen >= 4 && !HAS_PCH_SPLIT(dev)) {
- I915_WRITE(_DPLL_B_MD, dev_priv->regfile.saveDPLL_B_MD);
- POSTING_READ(_DPLL_B_MD);
- }
- udelay(150);
-
- /* Restore mode */
- I915_WRITE(_HTOTAL_B, dev_priv->regfile.saveHTOTAL_B);
- I915_WRITE(_HBLANK_B, dev_priv->regfile.saveHBLANK_B);
- I915_WRITE(_HSYNC_B, dev_priv->regfile.saveHSYNC_B);
- I915_WRITE(_VTOTAL_B, dev_priv->regfile.saveVTOTAL_B);
- I915_WRITE(_VBLANK_B, dev_priv->regfile.saveVBLANK_B);
- I915_WRITE(_VSYNC_B, dev_priv->regfile.saveVSYNC_B);
- if (!HAS_PCH_SPLIT(dev))
- I915_WRITE(_BCLRPAT_B, dev_priv->regfile.saveBCLRPAT_B);
-
- if (HAS_PCH_SPLIT(dev)) {
- I915_WRITE(_PIPEB_DATA_M1, dev_priv->regfile.savePIPEB_DATA_M1);
- I915_WRITE(_PIPEB_DATA_N1, dev_priv->regfile.savePIPEB_DATA_N1);
- I915_WRITE(_PIPEB_LINK_M1, dev_priv->regfile.savePIPEB_LINK_M1);
- I915_WRITE(_PIPEB_LINK_N1, dev_priv->regfile.savePIPEB_LINK_N1);
-
- I915_WRITE(_FDI_RXB_CTL, dev_priv->regfile.saveFDI_RXB_CTL);
- I915_WRITE(_FDI_TXB_CTL, dev_priv->regfile.saveFDI_TXB_CTL);
-
- I915_WRITE(_PFB_CTL_1, dev_priv->regfile.savePFB_CTL_1);
- I915_WRITE(_PFB_WIN_SZ, dev_priv->regfile.savePFB_WIN_SZ);
- I915_WRITE(_PFB_WIN_POS, dev_priv->regfile.savePFB_WIN_POS);
-
- I915_WRITE(_PCH_TRANSBCONF, dev_priv->regfile.saveTRANSBCONF);
- I915_WRITE(_PCH_TRANS_HTOTAL_B, dev_priv->regfile.saveTRANS_HTOTAL_B);
- I915_WRITE(_PCH_TRANS_HBLANK_B, dev_priv->regfile.saveTRANS_HBLANK_B);
- I915_WRITE(_PCH_TRANS_HSYNC_B, dev_priv->regfile.saveTRANS_HSYNC_B);
- I915_WRITE(_PCH_TRANS_VTOTAL_B, dev_priv->regfile.saveTRANS_VTOTAL_B);
- I915_WRITE(_PCH_TRANS_VBLANK_B, dev_priv->regfile.saveTRANS_VBLANK_B);
- I915_WRITE(_PCH_TRANS_VSYNC_B, dev_priv->regfile.saveTRANS_VSYNC_B);
- }
-
- /* Restore plane info */
- I915_WRITE(_DSPBSIZE, dev_priv->regfile.saveDSPBSIZE);
- I915_WRITE(_DSPBPOS, dev_priv->regfile.saveDSPBPOS);
- I915_WRITE(_PIPEBSRC, dev_priv->regfile.savePIPEBSRC);
- I915_WRITE(_DSPBADDR, dev_priv->regfile.saveDSPBADDR);
- I915_WRITE(_DSPBSTRIDE, dev_priv->regfile.saveDSPBSTRIDE);
- if (INTEL_INFO(dev)->gen >= 4) {
- I915_WRITE(_DSPBSURF, dev_priv->regfile.saveDSPBSURF);
- I915_WRITE(_DSPBTILEOFF, dev_priv->regfile.saveDSPBTILEOFF);
- }
-
- I915_WRITE(_PIPEBCONF, dev_priv->regfile.savePIPEBCONF);
-
- i915_restore_palette(dev, PIPE_B);
- /* Enable the plane */
- I915_WRITE(_DSPBCNTR, dev_priv->regfile.saveDSPBCNTR);
- I915_WRITE(_DSPBADDR, I915_READ(_DSPBADDR));
-
- /* Cursor state */
- I915_WRITE(_CURAPOS, dev_priv->regfile.saveCURAPOS);
- I915_WRITE(_CURACNTR, dev_priv->regfile.saveCURACNTR);
- I915_WRITE(_CURABASE, dev_priv->regfile.saveCURABASE);
- I915_WRITE(_CURBPOS, dev_priv->regfile.saveCURBPOS);
- I915_WRITE(_CURBCNTR, dev_priv->regfile.saveCURBCNTR);
- I915_WRITE(_CURBBASE, dev_priv->regfile.saveCURBBASE);
- if (IS_GEN2(dev))
- I915_WRITE(CURSIZE, dev_priv->regfile.saveCURSIZE);
-
- /* CRT state */
- if (HAS_PCH_SPLIT(dev))
- I915_WRITE(PCH_ADPA, dev_priv->regfile.saveADPA);
- else
- I915_WRITE(ADPA, dev_priv->regfile.saveADPA);
-
- /* Display Port state */
- if (SUPPORTS_INTEGRATED_DP(dev)) {
- I915_WRITE(DP_B, dev_priv->regfile.saveDP_B);
- I915_WRITE(DP_C, dev_priv->regfile.saveDP_C);
- I915_WRITE(DP_D, dev_priv->regfile.saveDP_D);
- }
- /* FIXME: restore TV & SDVO state */
-
- return;
-}
diff --git a/drivers/gpu/drm/i915/i915_vgpu.c b/drivers/gpu/drm/i915/i915_vgpu.c
new file mode 100644
index 0000000..5eee75b
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_vgpu.c
@@ -0,0 +1,264 @@
+/*
+ * Copyright(c) 2011-2015 Intel Corporation. All rights reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#include "intel_drv.h"
+#include "i915_vgpu.h"
+
+/**
+ * DOC: Intel GVT-g guest support
+ *
+ * Intel GVT-g is a graphics virtualization technology which shares the
+ * GPU among multiple virtual machines on a time-sharing basis. Each
+ * virtual machine is presented with a virtual GPU (vGPU), which has features
+ * equivalent to the underlying physical GPU (pGPU), so the i915 driver can run
+ * seamlessly in a virtual machine. This file provides vGPU-specific
+ * optimizations when running in a virtual machine, to reduce the complexity
+ * of vGPU emulation and to improve the overall performance.
+ *
+ * A primary function introduced here is the so-called "address space
+ * ballooning" technique. Intel GVT-g partitions global graphics memory among
+ * multiple VMs, so each VM can directly access a portion of the memory without
+ * the hypervisor's intervention, e.g. for filling textures or queuing commands.
+ * However, with this partitioning an unmodified i915 driver would assume a
+ * smaller graphics memory starting at address ZERO, which would require the
+ * vGPU emulation module to translate graphics addresses between the 'guest
+ * view' and the 'host view' for all registers and command opcodes that contain
+ * a graphics memory address. To reduce the complexity, Intel GVT-g introduces
+ * "address space ballooning": each guest i915 driver is told the exact
+ * partitioning, and it then reserves the non-allocated portions so they are
+ * never handed out. The vGPU emulation module thus only needs to scan and
+ * validate graphics addresses, without the complexity of address translation.
+ *
+ */
+
+/**
+ * i915_check_vgpu - detect virtual GPU
+ * @dev: drm device *
+ *
+ * This function is called at the initialization stage to detect whether we
+ * are running on a vGPU.
+ */
+void i915_check_vgpu(struct drm_device *dev)
+{
+ struct drm_i915_private *dev_priv = to_i915(dev);
+ uint64_t magic;
+ uint32_t version;
+
+ BUILD_BUG_ON(sizeof(struct vgt_if) != VGT_PVINFO_SIZE);
+
+ if (!IS_HASWELL(dev))
+ return;
+
+ magic = readq(dev_priv->regs + vgtif_reg(magic));
+ if (magic != VGT_MAGIC)
+ return;
+
+ version = INTEL_VGT_IF_VERSION_ENCODE(
+ readw(dev_priv->regs + vgtif_reg(version_major)),
+ readw(dev_priv->regs + vgtif_reg(version_minor)));
+ if (version != INTEL_VGT_IF_VERSION) {
+ DRM_INFO("VGT interface version mismatch!\n");
+ return;
+ }
+
+ dev_priv->vgpu.active = true;
+ DRM_INFO("Virtual GPU for Intel GVT-g detected.\n");
+}
+
+struct _balloon_info_ {
+ /*
+ * There are up to 2 ballooned regions each for mappable and unmappable
+ * graphic memory. Here, index 0/1 is for mappable graphic memory,
+ * 2/3 for unmappable graphic memory.
+ */
+ struct drm_mm_node space[4];
+};
+
+static struct _balloon_info_ bl_info;
+
+/**
+ * intel_vgt_deballoon - deballoon reserved graphics address chunks
+ *
+ * This function is called to deallocate the ballooned-out graphic memory when
+ * the driver is unloaded or when ballooning fails.
+ */
+void intel_vgt_deballoon(void)
+{
+ int i;
+
+ DRM_DEBUG("VGT deballoon.\n");
+
+ for (i = 0; i < 4; i++) {
+ if (bl_info.space[i].allocated)
+ drm_mm_remove_node(&bl_info.space[i]);
+ }
+
+ memset(&bl_info, 0, sizeof(bl_info));
+}
+
+static int vgt_balloon_space(struct drm_mm *mm,
+ struct drm_mm_node *node,
+ unsigned long start, unsigned long end)
+{
+ unsigned long size = end - start;
+
+ if (start == end)
+ return -EINVAL;
+
+ DRM_INFO("balloon space: range [ 0x%lx - 0x%lx ] %lu KiB.\n",
+ start, end, size / 1024);
+
+ node->start = start;
+ node->size = size;
+
+ return drm_mm_reserve_node(mm, node);
+}
+
+/**
+ * intel_vgt_balloon - balloon out reserved graphics address chunks
+ * @dev: drm device
+ *
+ * This function is called at the initialization stage to balloon out the
+ * graphic address space allocated to other vGPUs, by marking these spaces as
+ * reserved. The ballooning-related knowledge (starting address and size of
+ * the mappable/unmappable graphic memory) is described in the vgt_if structure
+ * in a reserved mmio range.
+ *
+ * To give an example, the drawing below depicts one typical scenario after
+ * ballooning. Here vGPU1 has two pieces of graphic address space ballooned
+ * out, one for the mappable and one for the non-mappable part. From vGPU1's
+ * point of view, the total size is the same as the physical one, with the
+ * start address of its graphic space being zero. Yet some portions are
+ * ballooned out (the shaded parts), which the drm allocator marks as
+ * reserved. From the host point of view, the graphic address space is
+ * partitioned among multiple vGPUs in different VMs.
+ *
+ * vGPU1 view Host view
+ * 0 ------> +-----------+ +-----------+
+ * ^ |///////////| | vGPU3 |
+ * | |///////////| +-----------+
+ * | |///////////| | vGPU2 |
+ * | +-----------+ +-----------+
+ * mappable GM | available | ==> | vGPU1 |
+ * | +-----------+ +-----------+
+ * | |///////////| | |
+ * v |///////////| | Host |
+ * +=======+===========+ +===========+
+ * ^ |///////////| | vGPU3 |
+ * | |///////////| +-----------+
+ * | |///////////| | vGPU2 |
+ * | +-----------+ +-----------+
+ * unmappable GM | available | ==> | vGPU1 |
+ * | +-----------+ +-----------+
+ * | |///////////| | |
+ * | |///////////| | Host |
+ * v |///////////| | |
+ * total GM size ------> +-----------+ +-----------+
+ *
+ * Returns:
+ * zero on success, non-zero if configuration invalid or ballooning failed
+ */
+int intel_vgt_balloon(struct drm_device *dev)
+{
+ struct drm_i915_private *dev_priv = to_i915(dev);
+ struct i915_address_space *ggtt_vm = &dev_priv->gtt.base;
+ unsigned long ggtt_vm_end = ggtt_vm->start + ggtt_vm->total;
+
+ unsigned long mappable_base, mappable_size, mappable_end;
+ unsigned long unmappable_base, unmappable_size, unmappable_end;
+ int ret;
+
+ mappable_base = I915_READ(vgtif_reg(avail_rs.mappable_gmadr.base));
+ mappable_size = I915_READ(vgtif_reg(avail_rs.mappable_gmadr.size));
+ unmappable_base = I915_READ(vgtif_reg(avail_rs.nonmappable_gmadr.base));
+ unmappable_size = I915_READ(vgtif_reg(avail_rs.nonmappable_gmadr.size));
+
+ mappable_end = mappable_base + mappable_size;
+ unmappable_end = unmappable_base + unmappable_size;
+
+ DRM_INFO("VGT ballooning configuration:\n");
+ DRM_INFO("Mappable graphic memory: base 0x%lx size %ldKiB\n",
+ mappable_base, mappable_size / 1024);
+ DRM_INFO("Unmappable graphic memory: base 0x%lx size %ldKiB\n",
+ unmappable_base, unmappable_size / 1024);
+
+ if (mappable_base < ggtt_vm->start ||
+ mappable_end > dev_priv->gtt.mappable_end ||
+ unmappable_base < dev_priv->gtt.mappable_end ||
+ unmappable_end > ggtt_vm_end) {
+ DRM_ERROR("Invalid ballooning configuration!\n");
+ return -EINVAL;
+ }
+
+ /* Unmappable graphic memory ballooning */
+ if (unmappable_base > dev_priv->gtt.mappable_end) {
+ ret = vgt_balloon_space(&ggtt_vm->mm,
+ &bl_info.space[2],
+ dev_priv->gtt.mappable_end,
+ unmappable_base);
+
+ if (ret)
+ goto err;
+ }
+
+ /*
+ * No need to partition out the last physical page,
+ * because it is reserved as the guard page.
+ */
+ if (unmappable_end < ggtt_vm_end - PAGE_SIZE) {
+ ret = vgt_balloon_space(&ggtt_vm->mm,
+ &bl_info.space[3],
+ unmappable_end,
+ ggtt_vm_end - PAGE_SIZE);
+ if (ret)
+ goto err;
+ }
+
+ /* Mappable graphic memory ballooning */
+ if (mappable_base > ggtt_vm->start) {
+ ret = vgt_balloon_space(&ggtt_vm->mm,
+ &bl_info.space[0],
+ ggtt_vm->start, mappable_base);
+
+ if (ret)
+ goto err;
+ }
+
+ if (mappable_end < dev_priv->gtt.mappable_end) {
+ ret = vgt_balloon_space(&ggtt_vm->mm,
+ &bl_info.space[1],
+ mappable_end,
+ dev_priv->gtt.mappable_end);
+
+ if (ret)
+ goto err;
+ }
+
+ DRM_INFO("VGT balloon successfully\n");
+ return 0;
+
+err:
+ DRM_ERROR("VGT balloon fail\n");
+ intel_vgt_deballoon();
+ return ret;
+}
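
As a rough sketch of the ballooning arithmetic implemented by intel_vgt_balloon() above: the guest reserves up to four GGTT ranges, one below and one above its mappable slice, and one below and one above its unmappable slice, with the last physical page left out as the guard page. The standalone model below uses made-up values (the 2 GiB GGTT, 512 MiB aperture and the vGPU slice bounds are illustrative only) and plain arithmetic instead of drm_mm, so it approximates the kernel logic rather than reproducing the driver code.

    /*
     * Standalone model of the GGTT ballooning arithmetic; the GGTT size,
     * aperture size and vGPU slice below are made-up example values.
     */
    #include <stdio.h>

    #define GUARD_PAGE 4096UL

    static void balloon(const char *what, unsigned long start, unsigned long end)
    {
            if (start < end)        /* an empty range is simply not reserved */
                    printf("%-16s [0x%08lx - 0x%08lx] %lu KiB\n",
                           what, start, end, (end - start) / 1024);
    }

    int main(void)
    {
            unsigned long ggtt_start = 0, ggtt_end = 2UL << 30;   /* 2 GiB GGTT */
            unsigned long mappable_limit = 512UL << 20;           /* aperture end */
            /* slice assigned to this vGPU by the host (example values) */
            unsigned long mappable_base = 128UL << 20, mappable_size = 128UL << 20;
            unsigned long unmappable_base = 1UL << 30, unmappable_size = 512UL << 20;

            /* the four nodes of struct _balloon_info_, in index order 0..3 */
            balloon("mappable low", ggtt_start, mappable_base);
            balloon("mappable high", mappable_base + mappable_size, mappable_limit);
            balloon("unmappable low", mappable_limit, unmappable_base);
            balloon("unmappable high", unmappable_base + unmappable_size,
                    ggtt_end - GUARD_PAGE);  /* last page stays the guard page */
            return 0;
    }

An empty range is simply skipped, matching the start == end rejection in vgt_balloon_space().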
diff --git a/drivers/gpu/drm/i915/i915_vgpu.h b/drivers/gpu/drm/i915/i915_vgpu.h
new file mode 100644
index 0000000..97a88b5
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_vgpu.h
@@ -0,0 +1,91 @@
+/*
+ * Copyright(c) 2011-2015 Intel Corporation. All rights reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#ifndef _I915_VGPU_H_
+#define _I915_VGPU_H_
+
+/* The MMIO offset of the shared info between guest and host emulator */
+#define VGT_PVINFO_PAGE 0x78000
+#define VGT_PVINFO_SIZE 0x1000
+
+/*
+ * The following structure pages are defined in GEN MMIO space
+ * for virtualization. (One page for now)
+ */
+#define VGT_MAGIC 0x4776544776544776ULL /* 'vGTvGTvG' */
+#define VGT_VERSION_MAJOR 1
+#define VGT_VERSION_MINOR 0
+
+#define INTEL_VGT_IF_VERSION_ENCODE(major, minor) ((major) << 16 | (minor))
+#define INTEL_VGT_IF_VERSION \
+ INTEL_VGT_IF_VERSION_ENCODE(VGT_VERSION_MAJOR, VGT_VERSION_MINOR)
+
+struct vgt_if {
+ uint64_t magic; /* VGT_MAGIC */
+ uint16_t version_major;
+ uint16_t version_minor;
+ uint32_t vgt_id; /* ID of vGT instance */
+ uint32_t rsv1[12]; /* pad to offset 0x40 */
+ /*
+ * Data structure to describe the ballooning info of resources.
+ * Each VM can only have one contiguous area for now.
+ * (May support scattered resources in the future)
+ * (starting from offset 0x40)
+ */
+ struct {
+ /* Aperture register ballooning */
+ struct {
+ uint32_t base;
+ uint32_t size;
+ } mappable_gmadr; /* aperture */
+ /* GMADR register ballooning */
+ struct {
+ uint32_t base;
+ uint32_t size;
+ } nonmappable_gmadr; /* non aperture */
+ /* allowed fence registers */
+ uint32_t fence_num;
+ uint32_t rsv2[3];
+ } avail_rs; /* available/assigned resource */
+ uint32_t rsv3[0x200 - 24]; /* pad to half page */
+ /*
+ * The bottom half page is for responses from the Gfx driver to the
+ * hypervisor. These are reserved fields for now.
+ */
+ uint32_t rsv4;
+ uint32_t display_ready; /* ready for display owner switch */
+ uint32_t rsv5[0x200 - 2]; /* pad to one page */
+} __packed;
+
+#define vgtif_reg(x) \
+ (VGT_PVINFO_PAGE + (long)&((struct vgt_if *)NULL)->x)
+
+/* vGPU display status to be used by the host side */
+#define VGT_DRV_DISPLAY_NOT_READY 0
+#define VGT_DRV_DISPLAY_READY 1 /* ready for display switch */
+
+extern void i915_check_vgpu(struct drm_device *dev);
+extern int intel_vgt_balloon(struct drm_device *dev);
+extern void intel_vgt_deballoon(void);
+
+#endif /* _I915_VGPU_H_ */
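
The vgtif_reg() macro above computes an MMIO offset by adding a field's offset within struct vgt_if to VGT_PVINFO_PAGE; the `(long)&((struct vgt_if *)NULL)->x` expression is the classic hand-rolled offsetof(). A minimal userspace illustration, using offsetof() and a trimmed stand-in struct (only the first few fields, not the full layout above):

    /*
     * Userspace illustration of vgtif_reg(): MMIO offset = PVINFO page base +
     * field offset. The struct here is a trimmed stand-in for struct vgt_if.
     */
    #include <stdio.h>
    #include <stddef.h>
    #include <stdint.h>

    #define VGT_PVINFO_PAGE 0x78000
    #define vgtif_reg(x) (VGT_PVINFO_PAGE + offsetof(struct vgt_if, x))

    struct vgt_if {
            uint64_t magic;
            uint16_t version_major;
            uint16_t version_minor;
            uint32_t vgt_id;
    };

    int main(void)
    {
            printf("magic         -> 0x%zx\n", vgtif_reg(magic));
            printf("version_major -> 0x%zx\n", vgtif_reg(version_major));
            printf("version_minor -> 0x%zx\n", vgtif_reg(version_minor));
            return 0;
    }

With this layout, magic resolves to 0x78000 and the two version words to 0x78008 and 0x7800a, which matches what i915_check_vgpu() reads via readq()/readw().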
diff --git a/drivers/gpu/drm/i915/intel_atomic.c b/drivers/gpu/drm/i915/intel_atomic.c
index 19a9dd5..3903b90 100644
--- a/drivers/gpu/drm/i915/intel_atomic.c
+++ b/drivers/gpu/drm/i915/intel_atomic.c
@@ -134,9 +134,9 @@
* FIXME: The proper sequence here will eventually be:
*
* drm_atomic_helper_swap_state(dev, state)
- * drm_atomic_helper_commit_pre_planes(dev, state);
+ * drm_atomic_helper_commit_modeset_disables(dev, state);
* drm_atomic_helper_commit_planes(dev, state);
- * drm_atomic_helper_commit_post_planes(dev, state);
+ * drm_atomic_helper_commit_modeset_enables(dev, state);
* drm_atomic_helper_wait_for_vblanks(dev, state);
* drm_atomic_helper_cleanup_planes(dev, state);
* drm_atomic_state_free(state);
@@ -214,12 +214,18 @@
intel_crtc_duplicate_state(struct drm_crtc *crtc)
{
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
+ struct intel_crtc_state *crtc_state;
if (WARN_ON(!intel_crtc->config))
- return kzalloc(sizeof(*intel_crtc->config), GFP_KERNEL);
+ crtc_state = kzalloc(sizeof(*crtc_state), GFP_KERNEL);
+ else
+ crtc_state = kmemdup(intel_crtc->config,
+ sizeof(*intel_crtc->config), GFP_KERNEL);
- return kmemdup(intel_crtc->config, sizeof(*intel_crtc->config),
- GFP_KERNEL);
+ if (crtc_state)
+ crtc_state->base.crtc = crtc;
+
+ return &crtc_state->base;
}
/**
diff --git a/drivers/gpu/drm/i915/intel_atomic_plane.c b/drivers/gpu/drm/i915/intel_atomic_plane.c
index 9e6f727..976b891 100644
--- a/drivers/gpu/drm/i915/intel_atomic_plane.c
+++ b/drivers/gpu/drm/i915/intel_atomic_plane.c
@@ -203,16 +203,8 @@
struct drm_property *property,
uint64_t *val)
{
- struct drm_mode_config *config = &plane->dev->mode_config;
-
- if (property == config->rotation_property) {
- *val = state->rotation;
- } else {
- DRM_DEBUG_KMS("Unknown plane property '%s'\n", property->name);
- return -EINVAL;
- }
-
- return 0;
+ DRM_DEBUG_KMS("Unknown plane property '%s'\n", property->name);
+ return -EINVAL;
}
/**
@@ -233,14 +225,6 @@
struct drm_property *property,
uint64_t val)
{
- struct drm_mode_config *config = &plane->dev->mode_config;
-
- if (property == config->rotation_property) {
- state->rotation = val;
- } else {
- DRM_DEBUG_KMS("Unknown plane property '%s'\n", property->name);
- return -EINVAL;
- }
-
- return 0;
+ DRM_DEBUG_KMS("Unknown plane property '%s'\n", property->name);
+ return -EINVAL;
}
diff --git a/drivers/gpu/drm/i915/intel_bios.c b/drivers/gpu/drm/i915/intel_bios.c
index 3f17825..c684085 100644
--- a/drivers/gpu/drm/i915/intel_bios.c
+++ b/drivers/gpu/drm/i915/intel_bios.c
@@ -662,6 +662,13 @@
edp_link_params->vswing);
break;
}
+
+ if (bdb->version >= 173) {
+ uint8_t vswing;
+
+ vswing = (edp->edp_vswing_preemph >> (panel_type * 4)) & 0xF;
+ dev_priv->vbt.edp_low_vswing = vswing == 0;
+ }
}
static void
diff --git a/drivers/gpu/drm/i915/intel_bios.h b/drivers/gpu/drm/i915/intel_bios.h
index a6a8710..6afd5be 100644
--- a/drivers/gpu/drm/i915/intel_bios.h
+++ b/drivers/gpu/drm/i915/intel_bios.h
@@ -554,6 +554,7 @@
/* ith bit indicates enabled/disabled for (i+1)th panel */
u16 edp_s3d_feature;
u16 edp_t3_optimization;
+ u64 edp_vswing_preemph; /* v173 */
} __packed;
struct psr_table {
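
The edp_vswing_preemph field added above is consumed in intel_bios.c as a packed array of 4-bit entries, one nibble per panel, where entry value 0 selects the low-vswing eDP translation table. A small sketch of that nibble lookup (the 64-bit value below is a made-up example, not real VBT contents):

    /*
     * Illustration of the per-panel nibble lookup used for edp_vswing_preemph;
     * the 64-bit value below is a made-up example, not real VBT contents.
     */
    #include <stdio.h>
    #include <stdint.h>
    #include <stdbool.h>

    static bool edp_low_vswing(uint64_t edp_vswing_preemph, int panel_type)
    {
            uint8_t vswing = (edp_vswing_preemph >> (panel_type * 4)) & 0xF;

            return vswing == 0;     /* entry 0 selects the low-vswing eDP table */
    }

    int main(void)
    {
            uint64_t example = 0x10;   /* panel 0 -> entry 0, panel 1 -> entry 1 */

            printf("panel 0 low vswing: %d\n", edp_low_vswing(example, 0));
            printf("panel 1 low vswing: %d\n", edp_low_vswing(example, 1));
            return 0;
    }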
diff --git a/drivers/gpu/drm/i915/intel_crt.c b/drivers/gpu/drm/i915/intel_crt.c
index e66e17a..515d712 100644
--- a/drivers/gpu/drm/i915/intel_crt.c
+++ b/drivers/gpu/drm/i915/intel_crt.c
@@ -690,7 +690,7 @@
* broken monitor (without edid) to work behind a broken kvm (that fails
* to have the right resistors for HP detection) needs to fix this up.
* For now just bail out. */
- if (I915_HAS_HOTPLUG(dev)) {
+ if (I915_HAS_HOTPLUG(dev) && !i915.load_detect_test) {
status = connector_status_disconnected;
goto out;
}
@@ -706,9 +706,11 @@
if (intel_get_load_detect_pipe(connector, NULL, &tmp, &ctx)) {
if (intel_crt_detect_ddc(connector))
status = connector_status_connected;
- else
+ else if (INTEL_INFO(dev)->gen < 4)
status = intel_crt_load_detect(crt);
- intel_release_load_detect_pipe(connector, &tmp);
+ else
+ status = connector_status_unknown;
+ intel_release_load_detect_pipe(connector, &tmp, &ctx);
} else
status = connector_status_unknown;
@@ -794,6 +796,7 @@
.destroy = intel_crt_destroy,
.set_property = intel_crt_set_property,
.atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
+ .atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
.atomic_get_property = intel_connector_atomic_get_property,
};
@@ -848,7 +851,7 @@
if (!crt)
return;
- intel_connector = kzalloc(sizeof(*intel_connector), GFP_KERNEL);
+ intel_connector = intel_connector_alloc();
if (!intel_connector) {
kfree(crt);
return;
diff --git a/drivers/gpu/drm/i915/intel_ddi.c b/drivers/gpu/drm/i915/intel_ddi.c
index f14e8a2..3eb0efc 100644
--- a/drivers/gpu/drm/i915/intel_ddi.c
+++ b/drivers/gpu/drm/i915/intel_ddi.c
@@ -139,18 +139,24 @@
{ 0x00004014, 0x00000087 },
};
+/* eDP 1.4 low vswing translation parameters */
+static const struct ddi_buf_trans skl_ddi_translations_edp[] = {
+ { 0x00000018, 0x000000a8 },
+ { 0x00002016, 0x000000ab },
+ { 0x00006012, 0x000000a2 },
+ { 0x00008010, 0x00000088 },
+ { 0x00000018, 0x000000ab },
+ { 0x00004014, 0x000000a2 },
+ { 0x00006012, 0x000000a6 },
+ { 0x00000018, 0x000000a2 },
+ { 0x00005013, 0x0000009c },
+ { 0x00000018, 0x00000088 },
+};
+
+
static const struct ddi_buf_trans skl_ddi_translations_hdmi[] = {
/* Idx NT mV T mV db */
- { 0x00000018, 0x000000a0 }, /* 0: 400 400 0 */
- { 0x00004014, 0x00000098 }, /* 1: 400 600 3.5 */
- { 0x00006012, 0x00000088 }, /* 2: 400 800 6 */
- { 0x00000018, 0x0000003c }, /* 3: 450 450 0 */
- { 0x00000018, 0x00000098 }, /* 4: 600 600 0 */
- { 0x00003015, 0x00000088 }, /* 5: 600 800 2.5 */
- { 0x00005013, 0x00000080 }, /* 6: 600 1000 4.5 */
- { 0x00000018, 0x00000088 }, /* 7: 800 800 0 */
- { 0x00000096, 0x00000080 }, /* 8: 800 1000 2 */
- { 0x00000018, 0x00000080 }, /* 9: 1200 1200 0 */
+ { 0x00004014, 0x00000087 }, /* 0: 800 1000 2 */
};
enum port intel_ddi_get_encoder_port(struct intel_encoder *intel_encoder)
@@ -187,7 +193,8 @@
{
struct drm_i915_private *dev_priv = dev->dev_private;
u32 reg;
- int i, n_hdmi_entries, hdmi_800mV_0dB;
+ int i, n_hdmi_entries, n_dp_entries, n_edp_entries, hdmi_default_entry,
+ size;
int hdmi_level = dev_priv->vbt.ddi_port_info[port].hdmi_level_shift;
const struct ddi_buf_trans *ddi_translations_fdi;
const struct ddi_buf_trans *ddi_translations_dp;
@@ -198,60 +205,85 @@
if (IS_SKYLAKE(dev)) {
ddi_translations_fdi = NULL;
ddi_translations_dp = skl_ddi_translations_dp;
- ddi_translations_edp = skl_ddi_translations_dp;
+ n_dp_entries = ARRAY_SIZE(skl_ddi_translations_dp);
+ if (dev_priv->vbt.edp_low_vswing) {
+ ddi_translations_edp = skl_ddi_translations_edp;
+ n_edp_entries = ARRAY_SIZE(skl_ddi_translations_edp);
+ } else {
+ ddi_translations_edp = skl_ddi_translations_dp;
+ n_edp_entries = ARRAY_SIZE(skl_ddi_translations_dp);
+ }
+
+ /*
+ * On SKL, the recommendation from the hw team is to always use
+ * a certain type of level shifter (and thus the corresponding
+ * 800mV+2dB entry). Given that's the only validated entry, we
+ * override what is in the VBT, at least until further notice.
+ */
+ hdmi_level = 0;
ddi_translations_hdmi = skl_ddi_translations_hdmi;
n_hdmi_entries = ARRAY_SIZE(skl_ddi_translations_hdmi);
- hdmi_800mV_0dB = 7;
+ hdmi_default_entry = 0;
} else if (IS_BROADWELL(dev)) {
ddi_translations_fdi = bdw_ddi_translations_fdi;
ddi_translations_dp = bdw_ddi_translations_dp;
ddi_translations_edp = bdw_ddi_translations_edp;
ddi_translations_hdmi = bdw_ddi_translations_hdmi;
+ n_edp_entries = ARRAY_SIZE(bdw_ddi_translations_edp);
+ n_dp_entries = ARRAY_SIZE(bdw_ddi_translations_dp);
n_hdmi_entries = ARRAY_SIZE(bdw_ddi_translations_hdmi);
- hdmi_800mV_0dB = 7;
+ hdmi_default_entry = 7;
} else if (IS_HASWELL(dev)) {
ddi_translations_fdi = hsw_ddi_translations_fdi;
ddi_translations_dp = hsw_ddi_translations_dp;
ddi_translations_edp = hsw_ddi_translations_dp;
ddi_translations_hdmi = hsw_ddi_translations_hdmi;
+ n_dp_entries = n_edp_entries = ARRAY_SIZE(hsw_ddi_translations_dp);
n_hdmi_entries = ARRAY_SIZE(hsw_ddi_translations_hdmi);
- hdmi_800mV_0dB = 6;
+ hdmi_default_entry = 6;
} else {
WARN(1, "ddi translation table missing\n");
ddi_translations_edp = bdw_ddi_translations_dp;
ddi_translations_fdi = bdw_ddi_translations_fdi;
ddi_translations_dp = bdw_ddi_translations_dp;
ddi_translations_hdmi = bdw_ddi_translations_hdmi;
+ n_edp_entries = ARRAY_SIZE(bdw_ddi_translations_edp);
+ n_dp_entries = ARRAY_SIZE(bdw_ddi_translations_dp);
n_hdmi_entries = ARRAY_SIZE(bdw_ddi_translations_hdmi);
- hdmi_800mV_0dB = 7;
+ hdmi_default_entry = 7;
}
switch (port) {
case PORT_A:
ddi_translations = ddi_translations_edp;
+ size = n_edp_entries;
break;
case PORT_B:
case PORT_C:
ddi_translations = ddi_translations_dp;
+ size = n_dp_entries;
break;
case PORT_D:
- if (intel_dp_is_edp(dev, PORT_D))
+ if (intel_dp_is_edp(dev, PORT_D)) {
ddi_translations = ddi_translations_edp;
- else
+ size = n_edp_entries;
+ } else {
ddi_translations = ddi_translations_dp;
+ size = n_dp_entries;
+ }
break;
case PORT_E:
if (ddi_translations_fdi)
ddi_translations = ddi_translations_fdi;
else
ddi_translations = ddi_translations_dp;
+ size = n_dp_entries;
break;
default:
BUG();
}
- for (i = 0, reg = DDI_BUF_TRANS(port);
- i < ARRAY_SIZE(hsw_ddi_translations_fdi); i++) {
+ for (i = 0, reg = DDI_BUF_TRANS(port); i < size; i++) {
I915_WRITE(reg, ddi_translations[i].trans1);
reg += 4;
I915_WRITE(reg, ddi_translations[i].trans2);
@@ -261,7 +293,7 @@
/* Choose a good default if VBT is badly populated */
if (hdmi_level == HDMI_LEVEL_SHIFT_UNKNOWN ||
hdmi_level >= n_hdmi_entries)
- hdmi_level = hdmi_800mV_0dB;
+ hdmi_level = hdmi_default_entry;
/* Entry 9 is for HDMI: */
I915_WRITE(reg, ddi_translations_hdmi[hdmi_level].trans1);
@@ -460,17 +492,23 @@
}
static struct intel_encoder *
-intel_ddi_get_crtc_new_encoder(struct intel_crtc *crtc)
+intel_ddi_get_crtc_new_encoder(struct intel_crtc_state *crtc_state)
{
- struct drm_device *dev = crtc->base.dev;
- struct intel_encoder *intel_encoder, *ret = NULL;
+ struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
+ struct intel_encoder *ret = NULL;
+ struct drm_atomic_state *state;
int num_encoders = 0;
+ int i;
- for_each_intel_encoder(dev, intel_encoder) {
- if (intel_encoder->new_crtc == crtc) {
- ret = intel_encoder;
- num_encoders++;
- }
+ state = crtc_state->base.state;
+
+ for (i = 0; i < state->num_connector; i++) {
+ if (!state->connectors[i] ||
+ state->connector_states[i]->crtc != crtc_state->base.crtc)
+ continue;
+
+ ret = to_intel_encoder(state->connector_states[i]->best_encoder);
+ num_encoders++;
}
WARN(num_encoders != 1, "%d encoders on crtc for pipe %c\n", num_encoders,
@@ -752,9 +790,18 @@
case DPLL_CRTL1_LINK_RATE_810:
link_clock = 81000;
break;
+ case DPLL_CRTL1_LINK_RATE_1080:
+ link_clock = 108000;
+ break;
case DPLL_CRTL1_LINK_RATE_1350:
link_clock = 135000;
break;
+ case DPLL_CRTL1_LINK_RATE_1620:
+ link_clock = 162000;
+ break;
+ case DPLL_CRTL1_LINK_RATE_2160:
+ link_clock = 216000;
+ break;
case DPLL_CRTL1_LINK_RATE_2700:
link_clock = 270000;
break;
@@ -1175,7 +1222,7 @@
{
struct drm_device *dev = intel_crtc->base.dev;
struct intel_encoder *intel_encoder =
- intel_ddi_get_crtc_new_encoder(intel_crtc);
+ intel_ddi_get_crtc_new_encoder(crtc_state);
int clock = crtc_state->port_clock;
if (IS_SKYLAKE(dev))
@@ -2153,7 +2200,7 @@
struct intel_connector *connector;
enum port port = intel_dig_port->port;
- connector = kzalloc(sizeof(*connector), GFP_KERNEL);
+ connector = intel_connector_alloc();
if (!connector)
return NULL;
@@ -2172,7 +2219,7 @@
struct intel_connector *connector;
enum port port = intel_dig_port->port;
- connector = kzalloc(sizeof(*connector), GFP_KERNEL);
+ connector = intel_connector_alloc();
if (!connector)
return NULL;
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index f75173c..d547d9c8 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -83,7 +83,8 @@
struct intel_crtc_state *pipe_config);
static int intel_set_mode(struct drm_crtc *crtc, struct drm_display_mode *mode,
- int x, int y, struct drm_framebuffer *old_fb);
+ int x, int y, struct drm_framebuffer *old_fb,
+ struct drm_atomic_state *state);
static int intel_framebuffer_init(struct drm_device *dev,
struct intel_framebuffer *ifb,
struct drm_mode_fb_cmd2 *mode_cmd,
@@ -391,7 +392,7 @@
* them would make no difference.
*/
.dot = { .min = 25000 * 5, .max = 540000 * 5},
- .vco = { .min = 4860000, .max = 6700000 },
+ .vco = { .min = 4800000, .max = 6480000 },
.n = { .min = 1, .max = 1 },
.m1 = { .min = 2, .max = 2 },
.m2 = { .min = 24 << 22, .max = 175 << 22 },
@@ -430,25 +431,41 @@
* intel_pipe_has_type() but looking at encoder->new_crtc instead of
* encoder->crtc.
*/
-static bool intel_pipe_will_have_type(struct intel_crtc *crtc, int type)
+static bool intel_pipe_will_have_type(const struct intel_crtc_state *crtc_state,
+ int type)
{
- struct drm_device *dev = crtc->base.dev;
+ struct drm_atomic_state *state = crtc_state->base.state;
+ struct drm_connector_state *connector_state;
struct intel_encoder *encoder;
+ int i, num_connectors = 0;
- for_each_intel_encoder(dev, encoder)
- if (encoder->new_crtc == crtc && encoder->type == type)
+ for (i = 0; i < state->num_connector; i++) {
+ if (!state->connectors[i])
+ continue;
+
+ connector_state = state->connector_states[i];
+ if (connector_state->crtc != crtc_state->base.crtc)
+ continue;
+
+ num_connectors++;
+
+ encoder = to_intel_encoder(connector_state->best_encoder);
+ if (encoder->type == type)
return true;
+ }
+
+ WARN_ON(num_connectors == 0);
return false;
}
-static const intel_limit_t *intel_ironlake_limit(struct intel_crtc *crtc,
- int refclk)
+static const intel_limit_t *
+intel_ironlake_limit(struct intel_crtc_state *crtc_state, int refclk)
{
- struct drm_device *dev = crtc->base.dev;
+ struct drm_device *dev = crtc_state->base.crtc->dev;
const intel_limit_t *limit;
- if (intel_pipe_will_have_type(crtc, INTEL_OUTPUT_LVDS)) {
+ if (intel_pipe_will_have_type(crtc_state, INTEL_OUTPUT_LVDS)) {
if (intel_is_dual_link_lvds(dev)) {
if (refclk == 100000)
limit = &intel_limits_ironlake_dual_lvds_100m;
@@ -466,20 +483,21 @@
return limit;
}
-static const intel_limit_t *intel_g4x_limit(struct intel_crtc *crtc)
+static const intel_limit_t *
+intel_g4x_limit(struct intel_crtc_state *crtc_state)
{
- struct drm_device *dev = crtc->base.dev;
+ struct drm_device *dev = crtc_state->base.crtc->dev;
const intel_limit_t *limit;
- if (intel_pipe_will_have_type(crtc, INTEL_OUTPUT_LVDS)) {
+ if (intel_pipe_will_have_type(crtc_state, INTEL_OUTPUT_LVDS)) {
if (intel_is_dual_link_lvds(dev))
limit = &intel_limits_g4x_dual_channel_lvds;
else
limit = &intel_limits_g4x_single_channel_lvds;
- } else if (intel_pipe_will_have_type(crtc, INTEL_OUTPUT_HDMI) ||
- intel_pipe_will_have_type(crtc, INTEL_OUTPUT_ANALOG)) {
+ } else if (intel_pipe_will_have_type(crtc_state, INTEL_OUTPUT_HDMI) ||
+ intel_pipe_will_have_type(crtc_state, INTEL_OUTPUT_ANALOG)) {
limit = &intel_limits_g4x_hdmi;
- } else if (intel_pipe_will_have_type(crtc, INTEL_OUTPUT_SDVO)) {
+ } else if (intel_pipe_will_have_type(crtc_state, INTEL_OUTPUT_SDVO)) {
limit = &intel_limits_g4x_sdvo;
} else /* The option is for other outputs */
limit = &intel_limits_i9xx_sdvo;
@@ -487,17 +505,18 @@
return limit;
}
-static const intel_limit_t *intel_limit(struct intel_crtc *crtc, int refclk)
+static const intel_limit_t *
+intel_limit(struct intel_crtc_state *crtc_state, int refclk)
{
- struct drm_device *dev = crtc->base.dev;
+ struct drm_device *dev = crtc_state->base.crtc->dev;
const intel_limit_t *limit;
if (HAS_PCH_SPLIT(dev))
- limit = intel_ironlake_limit(crtc, refclk);
+ limit = intel_ironlake_limit(crtc_state, refclk);
else if (IS_G4X(dev)) {
- limit = intel_g4x_limit(crtc);
+ limit = intel_g4x_limit(crtc_state);
} else if (IS_PINEVIEW(dev)) {
- if (intel_pipe_will_have_type(crtc, INTEL_OUTPUT_LVDS))
+ if (intel_pipe_will_have_type(crtc_state, INTEL_OUTPUT_LVDS))
limit = &intel_limits_pineview_lvds;
else
limit = &intel_limits_pineview_sdvo;
@@ -506,14 +525,14 @@
} else if (IS_VALLEYVIEW(dev)) {
limit = &intel_limits_vlv;
} else if (!IS_GEN2(dev)) {
- if (intel_pipe_will_have_type(crtc, INTEL_OUTPUT_LVDS))
+ if (intel_pipe_will_have_type(crtc_state, INTEL_OUTPUT_LVDS))
limit = &intel_limits_i9xx_lvds;
else
limit = &intel_limits_i9xx_sdvo;
} else {
- if (intel_pipe_will_have_type(crtc, INTEL_OUTPUT_LVDS))
+ if (intel_pipe_will_have_type(crtc_state, INTEL_OUTPUT_LVDS))
limit = &intel_limits_i8xx_lvds;
- else if (intel_pipe_will_have_type(crtc, INTEL_OUTPUT_DVO))
+ else if (intel_pipe_will_have_type(crtc_state, INTEL_OUTPUT_DVO))
limit = &intel_limits_i8xx_dvo;
else
limit = &intel_limits_i8xx_dac;
@@ -600,15 +619,17 @@
}
static bool
-i9xx_find_best_dpll(const intel_limit_t *limit, struct intel_crtc *crtc,
+i9xx_find_best_dpll(const intel_limit_t *limit,
+ struct intel_crtc_state *crtc_state,
int target, int refclk, intel_clock_t *match_clock,
intel_clock_t *best_clock)
{
+ struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct drm_device *dev = crtc->base.dev;
intel_clock_t clock;
int err = target;
- if (intel_pipe_will_have_type(crtc, INTEL_OUTPUT_LVDS)) {
+ if (intel_pipe_will_have_type(crtc_state, INTEL_OUTPUT_LVDS)) {
/*
* For LVDS just rely on its current settings for dual-channel.
* We haven't figured out how to reliably set up different
@@ -661,15 +682,17 @@
}
static bool
-pnv_find_best_dpll(const intel_limit_t *limit, struct intel_crtc *crtc,
+pnv_find_best_dpll(const intel_limit_t *limit,
+ struct intel_crtc_state *crtc_state,
int target, int refclk, intel_clock_t *match_clock,
intel_clock_t *best_clock)
{
+ struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct drm_device *dev = crtc->base.dev;
intel_clock_t clock;
int err = target;
- if (intel_pipe_will_have_type(crtc, INTEL_OUTPUT_LVDS)) {
+ if (intel_pipe_will_have_type(crtc_state, INTEL_OUTPUT_LVDS)) {
/*
* For LVDS just rely on its current settings for dual-channel.
* We haven't figured out how to reliably set up different
@@ -720,10 +743,12 @@
}
static bool
-g4x_find_best_dpll(const intel_limit_t *limit, struct intel_crtc *crtc,
+g4x_find_best_dpll(const intel_limit_t *limit,
+ struct intel_crtc_state *crtc_state,
int target, int refclk, intel_clock_t *match_clock,
intel_clock_t *best_clock)
{
+ struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct drm_device *dev = crtc->base.dev;
intel_clock_t clock;
int max_n;
@@ -732,7 +757,7 @@
int err_most = (target >> 8) + (target >> 9);
found = false;
- if (intel_pipe_will_have_type(crtc, INTEL_OUTPUT_LVDS)) {
+ if (intel_pipe_will_have_type(crtc_state, INTEL_OUTPUT_LVDS)) {
if (intel_is_dual_link_lvds(dev))
clock.p2 = limit->p2.p2_fast;
else
@@ -776,11 +801,53 @@
return found;
}
+/*
+ * Check if the calculated PLL configuration is more optimal than the best
+ * configuration and error found so far. Return the calculated error.
+ */
+static bool vlv_PLL_is_optimal(struct drm_device *dev, int target_freq,
+ const intel_clock_t *calculated_clock,
+ const intel_clock_t *best_clock,
+ unsigned int best_error_ppm,
+ unsigned int *error_ppm)
+{
+ /*
+ * For CHV ignore the error and consider only the P value.
+ * Prefer a bigger P value based on HW requirements.
+ */
+ if (IS_CHERRYVIEW(dev)) {
+ *error_ppm = 0;
+
+ return calculated_clock->p > best_clock->p;
+ }
+
+ if (WARN_ON_ONCE(!target_freq))
+ return false;
+
+ *error_ppm = div_u64(1000000ULL *
+ abs(target_freq - calculated_clock->dot),
+ target_freq);
+ /*
+ * Prefer a better P value over a better (smaller) error if the error
+ * is small. Ensure this preference for future configurations too by
+ * setting the error to 0.
+ */
+ if (*error_ppm < 100 && calculated_clock->p > best_clock->p) {
+ *error_ppm = 0;
+
+ return true;
+ }
+
+ return *error_ppm + 10 < best_error_ppm;
+}
+
static bool
-vlv_find_best_dpll(const intel_limit_t *limit, struct intel_crtc *crtc,
+vlv_find_best_dpll(const intel_limit_t *limit,
+ struct intel_crtc_state *crtc_state,
int target, int refclk, intel_clock_t *match_clock,
intel_clock_t *best_clock)
{
+ struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct drm_device *dev = crtc->base.dev;
intel_clock_t clock;
unsigned int bestppm = 1000000;
@@ -800,7 +867,7 @@
clock.p = clock.p1 * clock.p2;
/* based on hardware requirement, prefer bigger m1,m2 values */
for (clock.m1 = limit->m1.min; clock.m1 <= limit->m1.max; clock.m1++) {
- unsigned int ppm, diff;
+ unsigned int ppm;
clock.m2 = DIV_ROUND_CLOSEST(target * clock.p * clock.n,
refclk * clock.m1);
@@ -811,20 +878,15 @@
&clock))
continue;
- diff = abs(clock.dot - target);
- ppm = div_u64(1000000ULL * diff, target);
+ if (!vlv_PLL_is_optimal(dev, target,
+ &clock,
+ best_clock,
+ bestppm, &ppm))
+ continue;
- if (ppm < 100 && clock.p > best_clock->p) {
- bestppm = 0;
- *best_clock = clock;
- found = true;
- }
-
- if (bestppm >= 10 && ppm < bestppm - 10) {
- bestppm = ppm;
- *best_clock = clock;
- found = true;
- }
+ *best_clock = clock;
+ bestppm = ppm;
+ found = true;
}
}
}
@@ -834,16 +896,20 @@
}
static bool
-chv_find_best_dpll(const intel_limit_t *limit, struct intel_crtc *crtc,
+chv_find_best_dpll(const intel_limit_t *limit,
+ struct intel_crtc_state *crtc_state,
int target, int refclk, intel_clock_t *match_clock,
intel_clock_t *best_clock)
{
+ struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct drm_device *dev = crtc->base.dev;
+ unsigned int best_error_ppm;
intel_clock_t clock;
uint64_t m2;
int found = false;
memset(best_clock, 0, sizeof(*best_clock));
+ best_error_ppm = 1000000;
/*
* Based on hardware doc, the n always set to 1, and m1 always
@@ -857,6 +923,7 @@
for (clock.p2 = limit->p2.p2_fast;
clock.p2 >= limit->p2.p2_slow;
clock.p2 -= clock.p2 > 10 ? 2 : 1) {
+ unsigned int error_ppm;
clock.p = clock.p1 * clock.p2;
@@ -873,12 +940,13 @@
if (!intel_PLL_is_valid(dev, limit, &clock))
continue;
- /* based on hardware requirement, prefer bigger p
- */
- if (clock.p > best_clock->p) {
- *best_clock = clock;
- found = true;
- }
+ if (!vlv_PLL_is_optimal(dev, target, &clock, best_clock,
+ best_error_ppm, &error_ppm))
+ continue;
+
+ *best_clock = clock;
+ best_error_ppm = error_ppm;
+ found = true;
}
}
@@ -897,8 +965,12 @@
*
* We can ditch the crtc->primary->fb check as soon as we can
* properly reconstruct framebuffers.
+ *
+ * FIXME: The intel_crtc->active here should be switched to
+ * crtc->state->active once we have proper CRTC states wired up
+ * for atomic.
*/
- return intel_crtc->active && crtc->primary->fb &&
+ return intel_crtc->active && crtc->primary->state->fb &&
intel_crtc->config->base.adjusted_mode.crtc_clock;
}
@@ -1301,14 +1373,14 @@
u32 val;
if (INTEL_INFO(dev)->gen >= 9) {
- for_each_sprite(pipe, sprite) {
+ for_each_sprite(dev_priv, pipe, sprite) {
val = I915_READ(PLANE_CTL(pipe, sprite));
I915_STATE_WARN(val & PLANE_CTL_ENABLE,
"plane %d assertion failure, should be off on pipe %c but is still active\n",
sprite, pipe_name(pipe));
}
} else if (IS_VALLEYVIEW(dev)) {
- for_each_sprite(pipe, sprite) {
+ for_each_sprite(dev_priv, pipe, sprite) {
reg = SPCNTR(pipe, sprite);
val = I915_READ(reg);
I915_STATE_WARN(val & SP_ENABLE,
@@ -2190,30 +2262,109 @@
return false;
}
-int
-intel_fb_align_height(struct drm_device *dev, int height, unsigned int tiling)
+unsigned int
+intel_tile_height(struct drm_device *dev, uint32_t pixel_format,
+ uint64_t fb_format_modifier)
{
- int tile_height;
+ unsigned int tile_height;
+ uint32_t pixel_bytes;
- tile_height = tiling ? (IS_GEN2(dev) ? 16 : 8) : 1;
- return ALIGN(height, tile_height);
+ switch (fb_format_modifier) {
+ case DRM_FORMAT_MOD_NONE:
+ tile_height = 1;
+ break;
+ case I915_FORMAT_MOD_X_TILED:
+ tile_height = IS_GEN2(dev) ? 16 : 8;
+ break;
+ case I915_FORMAT_MOD_Y_TILED:
+ tile_height = 32;
+ break;
+ case I915_FORMAT_MOD_Yf_TILED:
+ pixel_bytes = drm_format_plane_cpp(pixel_format, 0);
+ switch (pixel_bytes) {
+ default:
+ case 1:
+ tile_height = 64;
+ break;
+ case 2:
+ case 4:
+ tile_height = 32;
+ break;
+ case 8:
+ tile_height = 16;
+ break;
+ case 16:
+ WARN_ONCE(1,
+ "128-bit pixels are not supported for display!");
+ tile_height = 16;
+ break;
+ }
+ break;
+ default:
+ MISSING_CASE(fb_format_modifier);
+ tile_height = 1;
+ break;
+ }
+
+ return tile_height;
+}
+
+unsigned int
+intel_fb_align_height(struct drm_device *dev, unsigned int height,
+ uint32_t pixel_format, uint64_t fb_format_modifier)
+{
+ return ALIGN(height, intel_tile_height(dev, pixel_format,
+ fb_format_modifier));
+}
+
+static int
+intel_fill_fb_ggtt_view(struct i915_ggtt_view *view, struct drm_framebuffer *fb,
+ const struct drm_plane_state *plane_state)
+{
+ struct intel_rotation_info *info = &view->rotation_info;
+
+ *view = i915_ggtt_view_normal;
+
+ if (!plane_state)
+ return 0;
+
+ if (!intel_rotation_90_or_270(plane_state->rotation))
+ return 0;
+
+ *view = i915_ggtt_view_rotated;
+
+ info->height = fb->height;
+ info->pixel_format = fb->pixel_format;
+ info->pitch = fb->pitches[0];
+ info->fb_modifier = fb->modifier[0];
+
+ if (!(info->fb_modifier == I915_FORMAT_MOD_Y_TILED ||
+ info->fb_modifier == I915_FORMAT_MOD_Yf_TILED)) {
+ DRM_DEBUG_KMS(
+ "Y or Yf tiling is needed for 90/270 rotation!\n");
+ return -EINVAL;
+ }
+
+ return 0;
}
int
intel_pin_and_fence_fb_obj(struct drm_plane *plane,
struct drm_framebuffer *fb,
+ const struct drm_plane_state *plane_state,
struct intel_engine_cs *pipelined)
{
struct drm_device *dev = fb->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct drm_i915_gem_object *obj = intel_fb_obj(fb);
+ struct i915_ggtt_view view;
u32 alignment;
int ret;
WARN_ON(!mutex_is_locked(&dev->struct_mutex));
- switch (obj->tiling_mode) {
- case I915_TILING_NONE:
+ switch (fb->modifier[0]) {
+ case DRM_FORMAT_MOD_NONE:
if (INTEL_INFO(dev)->gen >= 9)
alignment = 256 * 1024;
else if (IS_BROADWATER(dev) || IS_CRESTLINE(dev))
@@ -2223,7 +2374,7 @@
else
alignment = 64 * 1024;
break;
- case I915_TILING_X:
+ case I915_FORMAT_MOD_X_TILED:
if (INTEL_INFO(dev)->gen >= 9)
alignment = 256 * 1024;
else {
@@ -2231,13 +2382,22 @@
alignment = 0;
}
break;
- case I915_TILING_Y:
- WARN(1, "Y tiled bo slipped through, driver bug!\n");
- return -EINVAL;
+ case I915_FORMAT_MOD_Y_TILED:
+ case I915_FORMAT_MOD_Yf_TILED:
+ if (WARN_ONCE(INTEL_INFO(dev)->gen < 9,
+ "Y tiling bo slipped through, driver bug!\n"))
+ return -EINVAL;
+ alignment = 1 * 1024 * 1024;
+ break;
default:
- BUG();
+ MISSING_CASE(fb->modifier[0]);
+ return -EINVAL;
}
+ ret = intel_fill_fb_ggtt_view(&view, fb, plane_state);
+ if (ret)
+ return ret;
+
/* Note that the w/a also requires 64 PTE of padding following the
* bo. We currently fill all unused PTE with the shadow page and so
* we should always have valid PTE following the scanout preventing
@@ -2256,7 +2416,8 @@
intel_runtime_pm_get(dev_priv);
dev_priv->mm.interruptible = false;
- ret = i915_gem_object_pin_to_display_plane(obj, alignment, pipelined);
+ ret = i915_gem_object_pin_to_display_plane(obj, alignment, pipelined,
+ &view);
if (ret)
goto err_interruptible;
@@ -2276,19 +2437,27 @@
return 0;
err_unpin:
- i915_gem_object_unpin_from_display_plane(obj);
+ i915_gem_object_unpin_from_display_plane(obj, &view);
err_interruptible:
dev_priv->mm.interruptible = true;
intel_runtime_pm_put(dev_priv);
return ret;
}
-void intel_unpin_fb_obj(struct drm_i915_gem_object *obj)
+static void intel_unpin_fb_obj(struct drm_framebuffer *fb,
+ const struct drm_plane_state *plane_state)
{
+ struct drm_i915_gem_object *obj = intel_fb_obj(fb);
+ struct i915_ggtt_view view;
+ int ret;
+
WARN_ON(!mutex_is_locked(&obj->base.dev->struct_mutex));
+ ret = intel_fill_fb_ggtt_view(&view, fb, plane_state);
+ WARN_ONCE(ret, "Couldn't get view from plane state!");
+
i915_gem_object_unpin_fence(obj);
- i915_gem_object_unpin_from_display_plane(obj);
+ i915_gem_object_unpin_from_display_plane(obj, &view);
}
/* Computes the linear offset to the base tile and adjusts x, y. bytes per pixel
@@ -2366,12 +2535,13 @@
}
static bool
-intel_alloc_plane_obj(struct intel_crtc *crtc,
- struct intel_initial_plane_config *plane_config)
+intel_alloc_initial_plane_obj(struct intel_crtc *crtc,
+ struct intel_initial_plane_config *plane_config)
{
struct drm_device *dev = crtc->base.dev;
struct drm_i915_gem_object *obj = NULL;
struct drm_mode_fb_cmd2 mode_cmd = { 0 };
+ struct drm_framebuffer *fb = &plane_config->fb->base;
u32 base_aligned = round_down(plane_config->base, PAGE_SIZE);
u32 size_aligned = round_up(plane_config->base + plane_config->size,
PAGE_SIZE);
@@ -2390,25 +2560,24 @@
obj->tiling_mode = plane_config->tiling;
if (obj->tiling_mode == I915_TILING_X)
- obj->stride = crtc->base.primary->fb->pitches[0];
+ obj->stride = fb->pitches[0];
- mode_cmd.pixel_format = crtc->base.primary->fb->pixel_format;
- mode_cmd.width = crtc->base.primary->fb->width;
- mode_cmd.height = crtc->base.primary->fb->height;
- mode_cmd.pitches[0] = crtc->base.primary->fb->pitches[0];
+ mode_cmd.pixel_format = fb->pixel_format;
+ mode_cmd.width = fb->width;
+ mode_cmd.height = fb->height;
+ mode_cmd.pitches[0] = fb->pitches[0];
+ mode_cmd.modifier[0] = fb->modifier[0];
+ mode_cmd.flags = DRM_MODE_FB_MODIFIERS;
mutex_lock(&dev->struct_mutex);
-
- if (intel_framebuffer_init(dev, to_intel_framebuffer(crtc->base.primary->fb),
+ if (intel_framebuffer_init(dev, to_intel_framebuffer(fb),
&mode_cmd, obj)) {
DRM_DEBUG_KMS("intel fb init failed\n");
goto out_unref_obj;
}
-
- obj->frontbuffer_bits = INTEL_FRONTBUFFER_PRIMARY(crtc->pipe);
mutex_unlock(&dev->struct_mutex);
- DRM_DEBUG_KMS("plane fb obj %p\n", obj);
+ DRM_DEBUG_KMS("initial plane fb obj %p\n", obj);
return true;
out_unref_obj:
@@ -2421,35 +2590,37 @@
static void
update_state_fb(struct drm_plane *plane)
{
- if (plane->fb != plane->state->fb)
- drm_atomic_set_fb_for_plane(plane->state, plane->fb);
+ if (plane->fb == plane->state->fb)
+ return;
+
+ if (plane->state->fb)
+ drm_framebuffer_unreference(plane->state->fb);
+ plane->state->fb = plane->fb;
+ if (plane->state->fb)
+ drm_framebuffer_reference(plane->state->fb);
}
static void
-intel_find_plane_obj(struct intel_crtc *intel_crtc,
- struct intel_initial_plane_config *plane_config)
+intel_find_initial_plane_obj(struct intel_crtc *intel_crtc,
+ struct intel_initial_plane_config *plane_config)
{
struct drm_device *dev = intel_crtc->base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct drm_crtc *c;
struct intel_crtc *i;
struct drm_i915_gem_object *obj;
+ struct drm_plane *primary = intel_crtc->base.primary;
+ struct drm_framebuffer *fb;
- if (!intel_crtc->base.primary->fb)
+ if (!plane_config->fb)
return;
- if (intel_alloc_plane_obj(intel_crtc, plane_config)) {
- struct drm_plane *primary = intel_crtc->base.primary;
-
- primary->state->crtc = &intel_crtc->base;
- primary->crtc = &intel_crtc->base;
- update_state_fb(primary);
-
- return;
+ if (intel_alloc_initial_plane_obj(intel_crtc, plane_config)) {
+ fb = &plane_config->fb->base;
+ goto valid_fb;
}
- kfree(intel_crtc->base.primary->fb);
- intel_crtc->base.primary->fb = NULL;
+ kfree(plane_config->fb);
/*
* Failed to alloc the obj, check to see if we should share
@@ -2464,26 +2635,29 @@
if (!i->active)
continue;
- obj = intel_fb_obj(c->primary->fb);
- if (obj == NULL)
+ fb = c->primary->fb;
+ if (!fb)
continue;
+ obj = intel_fb_obj(fb);
if (i915_gem_obj_ggtt_offset(obj) == plane_config->base) {
- struct drm_plane *primary = intel_crtc->base.primary;
-
- if (obj->tiling_mode != I915_TILING_NONE)
- dev_priv->preserve_bios_swizzle = true;
-
- drm_framebuffer_reference(c->primary->fb);
- primary->fb = c->primary->fb;
- primary->state->crtc = &intel_crtc->base;
- primary->crtc = &intel_crtc->base;
- obj->frontbuffer_bits |= INTEL_FRONTBUFFER_PRIMARY(intel_crtc->pipe);
- break;
+ drm_framebuffer_reference(fb);
+ goto valid_fb;
}
}
- update_state_fb(intel_crtc->base.primary);
+ return;
+
+valid_fb:
+ obj = intel_fb_obj(fb);
+ if (obj->tiling_mode != I915_TILING_NONE)
+ dev_priv->preserve_bios_swizzle = true;
+
+ primary->fb = fb;
+ primary->state->crtc = &intel_crtc->base;
+ primary->crtc = &intel_crtc->base;
+ update_state_fb(primary);
+ obj->frontbuffer_bits |= INTEL_FRONTBUFFER_PRIMARY(intel_crtc->pipe);
}
static void i9xx_update_primary_plane(struct drm_crtc *crtc,
@@ -2604,9 +2778,6 @@
I915_WRITE(reg, dspcntr);
- DRM_DEBUG_KMS("Writing base %08lX %08lX %d %d %d\n",
- i915_gem_obj_ggtt_offset(obj), linear_offset, x, y,
- fb->pitches[0]);
I915_WRITE(DSPSTRIDE(plane), fb->pitches[0]);
if (INTEL_INFO(dev)->gen >= 4) {
I915_WRITE(DSPSURF(plane),
@@ -2708,9 +2879,6 @@
I915_WRITE(reg, dspcntr);
- DRM_DEBUG_KMS("Writing base %08lX %08lX %d %d %d\n",
- i915_gem_obj_ggtt_offset(obj), linear_offset, x, y,
- fb->pitches[0]);
I915_WRITE(DSPSTRIDE(plane), fb->pitches[0]);
I915_WRITE(DSPSURF(plane),
i915_gem_obj_ggtt_offset(obj) + intel_crtc->dspaddr_offset);
@@ -2723,6 +2891,51 @@
POSTING_READ(reg);
}
+u32 intel_fb_stride_alignment(struct drm_device *dev, uint64_t fb_modifier,
+ uint32_t pixel_format)
+{
+ u32 bits_per_pixel = drm_format_plane_cpp(pixel_format, 0) * 8;
+
+ /*
+ * The stride is either expressed as a multiple of 64 byte
+ * chunks for linear buffers or as a number of tiles for tiled
+ * buffers.
+ */
+ switch (fb_modifier) {
+ case DRM_FORMAT_MOD_NONE:
+ return 64;
+ case I915_FORMAT_MOD_X_TILED:
+ if (INTEL_INFO(dev)->gen == 2)
+ return 128;
+ return 512;
+ case I915_FORMAT_MOD_Y_TILED:
+ /* No need to check for old gens and Y tiling since this is
+ * about the display engine and those will be blocked before
+ * we get here.
+ */
+ return 128;
+ case I915_FORMAT_MOD_Yf_TILED:
+ if (bits_per_pixel == 8)
+ return 64;
+ else
+ return 128;
+ default:
+ MISSING_CASE(fb_modifier);
+ return 64;
+ }
+}
+
+unsigned long intel_plane_obj_offset(struct intel_plane *intel_plane,
+ struct drm_i915_gem_object *obj)
+{
+ const struct i915_ggtt_view *view = &i915_ggtt_view_normal;
+
+ if (intel_rotation_90_or_270(intel_plane->base.state->rotation))
+ view = &i915_ggtt_view_rotated;
+
+ return i915_gem_obj_ggtt_offset_view(obj, view);
+}
+
static void skylake_update_primary_plane(struct drm_crtc *crtc,
struct drm_framebuffer *fb,
int x, int y)
@@ -2730,10 +2943,10 @@
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
- struct intel_framebuffer *intel_fb;
struct drm_i915_gem_object *obj;
int pipe = intel_crtc->pipe;
- u32 plane_ctl, stride;
+ u32 plane_ctl, stride_div;
+ unsigned long surf_addr;
if (!intel_crtc->primary_enabled) {
I915_WRITE(PLANE_CTL(pipe, 0), 0);
@@ -2777,43 +2990,39 @@
BUG();
}
- intel_fb = to_intel_framebuffer(fb);
- obj = intel_fb->obj;
-
- /*
- * The stride is either expressed as a multiple of 64 bytes chunks for
- * linear buffers or in number of tiles for tiled buffers.
- */
- switch (obj->tiling_mode) {
- case I915_TILING_NONE:
- stride = fb->pitches[0] >> 6;
+ switch (fb->modifier[0]) {
+ case DRM_FORMAT_MOD_NONE:
break;
- case I915_TILING_X:
+ case I915_FORMAT_MOD_X_TILED:
plane_ctl |= PLANE_CTL_TILED_X;
- stride = fb->pitches[0] >> 9;
+ break;
+ case I915_FORMAT_MOD_Y_TILED:
+ plane_ctl |= PLANE_CTL_TILED_Y;
+ break;
+ case I915_FORMAT_MOD_Yf_TILED:
+ plane_ctl |= PLANE_CTL_TILED_YF;
break;
default:
- BUG();
+ MISSING_CASE(fb->modifier[0]);
}
plane_ctl |= PLANE_CTL_PLANE_GAMMA_DISABLE;
if (crtc->primary->state->rotation == BIT(DRM_ROTATE_180))
plane_ctl |= PLANE_CTL_ROTATE_180;
+ obj = intel_fb_obj(fb);
+ stride_div = intel_fb_stride_alignment(dev, fb->modifier[0],
+ fb->pixel_format);
+ surf_addr = intel_plane_obj_offset(to_intel_plane(crtc->primary), obj);
+
I915_WRITE(PLANE_CTL(pipe, 0), plane_ctl);
-
- DRM_DEBUG_KMS("Writing base %08lX %d,%d,%d,%d pitch=%d\n",
- i915_gem_obj_ggtt_offset(obj),
- x, y, fb->width, fb->height,
- fb->pitches[0]);
-
I915_WRITE(PLANE_POS(pipe, 0), 0);
I915_WRITE(PLANE_OFFSET(pipe, 0), (y << 16) | x);
I915_WRITE(PLANE_SIZE(pipe, 0),
(intel_crtc->config->pipe_src_h - 1) << 16 |
(intel_crtc->config->pipe_src_w - 1));
- I915_WRITE(PLANE_STRIDE(pipe, 0), stride);
- I915_WRITE(PLANE_SURF(pipe, 0), i915_gem_obj_ggtt_offset(obj));
+ I915_WRITE(PLANE_STRIDE(pipe, 0), fb->pitches[0] / stride_div);
+ I915_WRITE(PLANE_SURF(pipe, 0), surf_addr);
POSTING_READ(PLANE_SURF(pipe, 0));
}
@@ -3064,38 +3273,6 @@
FDI_FE_ERRC_ENABLE);
}
-static bool pipe_has_enabled_pch(struct intel_crtc *crtc)
-{
- return crtc->base.enabled && crtc->active &&
- crtc->config->has_pch_encoder;
-}
-
-static void ivb_modeset_global_resources(struct drm_device *dev)
-{
- struct drm_i915_private *dev_priv = dev->dev_private;
- struct intel_crtc *pipe_B_crtc =
- to_intel_crtc(dev_priv->pipe_to_crtc_mapping[PIPE_B]);
- struct intel_crtc *pipe_C_crtc =
- to_intel_crtc(dev_priv->pipe_to_crtc_mapping[PIPE_C]);
- uint32_t temp;
-
- /*
- * When everything is off disable fdi C so that we could enable fdi B
- * with all lanes. Note that we don't care about enabled pipes without
- * an enabled pch encoder.
- */
- if (!pipe_has_enabled_pch(pipe_B_crtc) &&
- !pipe_has_enabled_pch(pipe_C_crtc)) {
- WARN_ON(I915_READ(FDI_RX_CTL(PIPE_B)) & FDI_RX_ENABLE);
- WARN_ON(I915_READ(FDI_RX_CTL(PIPE_C)) & FDI_RX_ENABLE);
-
- temp = I915_READ(SOUTH_CHICKEN1);
- temp &= ~FDI_BC_BIFURCATION_SELECT;
- DRM_DEBUG_KMS("disabling fdi C rx\n");
- I915_WRITE(SOUTH_CHICKEN1, temp);
- }
-}
-
/* The FDI link training functions for ILK/Ibexpeak. */
static void ironlake_fdi_link_train(struct drm_crtc *crtc)
{
@@ -3751,20 +3928,23 @@
I915_READ(VSYNCSHIFT(cpu_transcoder)));
}
-static void cpt_enable_fdi_bc_bifurcation(struct drm_device *dev)
+static void cpt_set_fdi_bc_bifurcation(struct drm_device *dev, bool enable)
{
struct drm_i915_private *dev_priv = dev->dev_private;
uint32_t temp;
temp = I915_READ(SOUTH_CHICKEN1);
- if (temp & FDI_BC_BIFURCATION_SELECT)
+ if (!!(temp & FDI_BC_BIFURCATION_SELECT) == enable)
return;
WARN_ON(I915_READ(FDI_RX_CTL(PIPE_B)) & FDI_RX_ENABLE);
WARN_ON(I915_READ(FDI_RX_CTL(PIPE_C)) & FDI_RX_ENABLE);
- temp |= FDI_BC_BIFURCATION_SELECT;
- DRM_DEBUG_KMS("enabling fdi C rx\n");
+ temp &= ~FDI_BC_BIFURCATION_SELECT;
+ if (enable)
+ temp |= FDI_BC_BIFURCATION_SELECT;
+
+ DRM_DEBUG_KMS("%sabling fdi C rx\n", enable ? "en" : "dis");
I915_WRITE(SOUTH_CHICKEN1, temp);
POSTING_READ(SOUTH_CHICKEN1);
}
@@ -3772,20 +3952,19 @@
static void ivybridge_update_fdi_bc_bifurcation(struct intel_crtc *intel_crtc)
{
struct drm_device *dev = intel_crtc->base.dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
switch (intel_crtc->pipe) {
case PIPE_A:
break;
case PIPE_B:
if (intel_crtc->config->fdi_lanes > 2)
- WARN_ON(I915_READ(SOUTH_CHICKEN1) & FDI_BC_BIFURCATION_SELECT);
+ cpt_set_fdi_bc_bifurcation(dev, false);
else
- cpt_enable_fdi_bc_bifurcation(dev);
+ cpt_set_fdi_bc_bifurcation(dev, true);
break;
case PIPE_C:
- cpt_enable_fdi_bc_bifurcation(dev);
+ cpt_set_fdi_bc_bifurcation(dev, true);
break;
default:
@@ -4120,6 +4299,24 @@
}
}
+/*
+ * Disable a plane internally without actually modifying the plane's state.
+ * This will allow us to easily restore the plane later by just reprogramming
+ * its state.
+ */
+static void disable_plane_internal(struct drm_plane *plane)
+{
+ struct intel_plane *intel_plane = to_intel_plane(plane);
+ struct drm_plane_state *state =
+ plane->funcs->atomic_duplicate_state(plane);
+ struct intel_plane_state *intel_state = to_intel_plane_state(state);
+
+ intel_state->visible = false;
+ intel_plane->commit_plane(plane, intel_state);
+
+ intel_plane_destroy_state(plane, state);
+}
+
static void intel_disable_sprite_planes(struct drm_crtc *crtc)
{
struct drm_device *dev = crtc->dev;
@@ -4129,8 +4326,8 @@
drm_for_each_legacy_plane(plane, &dev->mode_config.plane_list) {
intel_plane = to_intel_plane(plane);
- if (intel_plane->pipe == pipe)
- plane->funcs->disable_plane(plane);
+ if (plane->fb && intel_plane->pipe == pipe)
+ disable_plane_internal(plane);
}
}
@@ -4204,7 +4401,7 @@
bool reenable_ips = false;
/* The clocks have to be on to load the palette. */
- if (!crtc->enabled || !intel_crtc->active)
+ if (!crtc->state->enable || !intel_crtc->active)
return;
if (!HAS_PCH_SPLIT(dev_priv->dev)) {
@@ -4288,11 +4485,10 @@
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
int pipe = intel_crtc->pipe;
- int plane = intel_crtc->plane;
intel_crtc_wait_for_pending_flips(crtc);
- if (dev_priv->fbc.plane == plane)
+ if (dev_priv->fbc.crtc == intel_crtc)
intel_fbc_disable(dev);
hsw_disable_ips(intel_crtc);
@@ -4318,7 +4514,7 @@
struct intel_encoder *encoder;
int pipe = intel_crtc->pipe;
- WARN_ON(!crtc->enabled);
+ WARN_ON(!crtc->state->enable);
if (intel_crtc->active)
return;
@@ -4327,7 +4523,7 @@
intel_prepare_shared_dpll(intel_crtc);
if (intel_crtc->config->has_dp_encoder)
- intel_dp_set_m_n(intel_crtc);
+ intel_dp_set_m_n(intel_crtc, M1_N1);
intel_set_pipe_timings(intel_crtc);
@@ -4426,7 +4622,7 @@
struct intel_encoder *encoder;
int pipe = intel_crtc->pipe;
- WARN_ON(!crtc->enabled);
+ WARN_ON(!crtc->state->enable);
if (intel_crtc->active)
return;
@@ -4435,7 +4631,7 @@
intel_enable_shared_dpll(intel_crtc);
if (intel_crtc->config->has_dp_encoder)
- intel_dp_set_m_n(intel_crtc);
+ intel_dp_set_m_n(intel_crtc, M1_N1);
intel_set_pipe_timings(intel_crtc);
@@ -4760,8 +4956,9 @@
return mask;
}
-static void modeset_update_crtc_power_domains(struct drm_device *dev)
+static void modeset_update_crtc_power_domains(struct drm_atomic_state *state)
{
+ struct drm_device *dev = state->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
unsigned long pipe_domains[I915_MAX_PIPES] = { 0, };
struct intel_crtc *crtc;
@@ -4773,7 +4970,7 @@
for_each_intel_crtc(dev, crtc) {
enum intel_display_power_domain domain;
- if (!crtc->base.enabled)
+ if (!crtc->base.state->enable)
continue;
pipe_domains[crtc->pipe] = get_crtc_power_domains(&crtc->base);
@@ -4783,7 +4980,7 @@
}
if (dev_priv->display.modeset_global_resources)
- dev_priv->display.modeset_global_resources(dev);
+ dev_priv->display.modeset_global_resources(state);
for_each_intel_crtc(dev, crtc) {
enum intel_display_power_domain domain;
@@ -4900,24 +5097,23 @@
WARN_ON(dev_priv->display.get_display_clock_speed(dev) != dev_priv->vlv_cdclk_freq);
switch (cdclk) {
- case 400000:
- cmd = 3;
- break;
case 333333:
case 320000:
- cmd = 2;
- break;
case 266667:
- cmd = 1;
- break;
case 200000:
- cmd = 0;
break;
default:
MISSING_CASE(cdclk);
return;
}
+ /*
+ * Specs are full of misinformation, but testing on actual
+ * hardware has shown that we just need to write the desired
+ * CCK divider into the Punit register.
+ */
+ cmd = DIV_ROUND_CLOSEST(dev_priv->hpll_freq << 1, cdclk) - 1;
+
mutex_lock(&dev_priv->rps.hw_lock);
val = vlv_punit_read(dev_priv, PUNIT_REG_DSPFREQ);
val &= ~DSPFREQGUAR_MASK_CHV;
@@ -4937,27 +5133,25 @@
int max_pixclk)
{
int freq_320 = (dev_priv->hpll_freq << 1) % 320000 != 0 ? 333333 : 320000;
-
- /* FIXME: Punit isn't quite ready yet */
- if (IS_CHERRYVIEW(dev_priv->dev))
- return 400000;
+ int limit = IS_CHERRYVIEW(dev_priv) ? 95 : 90;
/*
* Really only a few cases to deal with, as only 4 CDclks are supported:
* 200MHz
* 267MHz
* 320/333MHz (depends on HPLL freq)
- * 400MHz
- * So we check to see whether we're above 90% of the lower bin and
- * adjust if needed.
+ * 400MHz (VLV only)
+ * So we check to see whether we're above 90% (VLV) or 95% (CHV)
+ * of the lower bin and adjust if needed.
*
* We seem to get an unstable or solid color picture at 200MHz.
* Not sure what's wrong. For now use 200MHz only when all pipes
* are off.
*/
- if (max_pixclk > freq_320*9/10)
+ if (!IS_CHERRYVIEW(dev_priv) &&
+ max_pixclk > freq_320*limit/100)
return 400000;
- else if (max_pixclk > 266667*9/10)
+ else if (max_pixclk > 266667*limit/100)
return freq_320;
else if (max_pixclk > 0)
return 266667;
@@ -4994,12 +5188,49 @@
/* disable/enable all currently active pipes while we change cdclk */
for_each_intel_crtc(dev, intel_crtc)
- if (intel_crtc->base.enabled)
+ if (intel_crtc->base.state->enable)
*prepare_pipes |= (1 << intel_crtc->pipe);
}
-static void valleyview_modeset_global_resources(struct drm_device *dev)
+static void vlv_program_pfi_credits(struct drm_i915_private *dev_priv)
{
+ unsigned int credits, default_credits;
+
+ if (IS_CHERRYVIEW(dev_priv))
+ default_credits = PFI_CREDIT(12);
+ else
+ default_credits = PFI_CREDIT(8);
+
+ if (DIV_ROUND_CLOSEST(dev_priv->vlv_cdclk_freq, 1000) >= dev_priv->rps.cz_freq) {
+ /* CHV suggested value is 31 or 63 */
+ if (IS_CHERRYVIEW(dev_priv))
+ credits = PFI_CREDIT_31;
+ else
+ credits = PFI_CREDIT(15);
+ } else {
+ credits = default_credits;
+ }
+
+ /*
+ * WA - write default credits before re-programming
+ * FIXME: should we also set the resend bit here?
+ */
+ I915_WRITE(GCI_CONTROL, VGA_FAST_MODE_DISABLE |
+ default_credits);
+
+ I915_WRITE(GCI_CONTROL, VGA_FAST_MODE_DISABLE |
+ credits | PFI_CREDIT_RESEND);
+
+ /*
+ * FIXME is this guaranteed to clear
+ * immediately or should we poll for it?
+ */
+ WARN_ON(I915_READ(GCI_CONTROL) & PFI_CREDIT_RESEND);
+}
+
+static void valleyview_modeset_global_resources(struct drm_atomic_state *state)
+{
+ struct drm_device *dev = state->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
int max_pixclk = intel_mode_max_pixclk(dev_priv);
int req_cdclk = valleyview_calc_cdclk(dev_priv, max_pixclk);
@@ -5021,6 +5252,8 @@
else
valleyview_set_cdclk(dev, req_cdclk);
+ vlv_program_pfi_credits(dev_priv);
+
intel_display_power_put(dev_priv, POWER_DOMAIN_PIPE_A);
}
}
@@ -5034,7 +5267,7 @@
int pipe = intel_crtc->pipe;
bool is_dsi;
- WARN_ON(!crtc->enabled);
+ WARN_ON(!crtc->state->enable);
if (intel_crtc->active)
return;
@@ -5049,7 +5282,7 @@
}
if (intel_crtc->config->has_dp_encoder)
- intel_dp_set_m_n(intel_crtc);
+ intel_dp_set_m_n(intel_crtc, M1_N1);
intel_set_pipe_timings(intel_crtc);
@@ -5117,7 +5350,7 @@
struct intel_encoder *encoder;
int pipe = intel_crtc->pipe;
- WARN_ON(!crtc->enabled);
+ WARN_ON(!crtc->state->enable);
if (intel_crtc->active)
return;
@@ -5125,7 +5358,7 @@
i9xx_set_pll_dividers(intel_crtc);
if (intel_crtc->config->has_dp_encoder)
- intel_dp_set_m_n(intel_crtc);
+ intel_dp_set_m_n(intel_crtc, M1_N1);
intel_set_pipe_timings(intel_crtc);
@@ -5316,7 +5549,7 @@
struct drm_i915_private *dev_priv = dev->dev_private;
/* crtc should still be enabled when we disable it. */
- WARN_ON(!crtc->enabled);
+ WARN_ON(!crtc->state->enable);
dev_priv->display.crtc_disable(crtc);
dev_priv->display.off(crtc);
@@ -5394,7 +5627,8 @@
crtc = encoder->base.crtc;
- I915_STATE_WARN(!crtc->enabled, "crtc not enabled\n");
+ I915_STATE_WARN(!crtc->state->enable,
+ "crtc not enabled\n");
I915_STATE_WARN(!to_intel_crtc(crtc)->active, "crtc not active\n");
I915_STATE_WARN(pipe != to_intel_crtc(crtc)->pipe,
"encoder active on the wrong pipe\n");
@@ -5402,6 +5636,34 @@
}
}
+int intel_connector_init(struct intel_connector *connector)
+{
+ struct drm_connector_state *connector_state;
+
+ connector_state = kzalloc(sizeof *connector_state, GFP_KERNEL);
+ if (!connector_state)
+ return -ENOMEM;
+
+ connector->base.state = connector_state;
+ return 0;
+}
+
+struct intel_connector *intel_connector_alloc(void)
+{
+ struct intel_connector *connector;
+
+ connector = kzalloc(sizeof *connector, GFP_KERNEL);
+ if (!connector)
+ return NULL;
+
+ if (intel_connector_init(connector) < 0) {
+ kfree(connector);
+ return NULL;
+ }
+
+ return connector;
+}
+
/* Even simpler default implementation, if there's really no special case to
* consider. */
void intel_connector_dpms(struct drm_connector *connector, int mode)
@@ -5433,13 +5695,21 @@
return encoder->get_hw_state(encoder, &pipe);
}
+static int pipe_required_fdi_lanes(struct drm_device *dev, enum pipe pipe)
+{
+ struct intel_crtc *crtc =
+ to_intel_crtc(intel_get_crtc_for_pipe(dev, pipe));
+
+ if (crtc->base.state->enable &&
+ crtc->config->has_pch_encoder)
+ return crtc->config->fdi_lanes;
+
+ return 0;
+}
+
static bool ironlake_check_fdi_lanes(struct drm_device *dev, enum pipe pipe,
struct intel_crtc_state *pipe_config)
{
- struct drm_i915_private *dev_priv = dev->dev_private;
- struct intel_crtc *pipe_B_crtc =
- to_intel_crtc(dev_priv->pipe_to_crtc_mapping[PIPE_B]);
-
DRM_DEBUG_KMS("checking fdi config on pipe %c, lanes %i\n",
pipe_name(pipe), pipe_config->fdi_lanes);
if (pipe_config->fdi_lanes > 4) {
@@ -5466,22 +5736,20 @@
case PIPE_A:
return true;
case PIPE_B:
- if (dev_priv->pipe_to_crtc_mapping[PIPE_C]->enabled &&
- pipe_config->fdi_lanes > 2) {
+ if (pipe_config->fdi_lanes > 2 &&
+ pipe_required_fdi_lanes(dev, PIPE_C) > 0) {
DRM_DEBUG_KMS("invalid shared fdi lane config on pipe %c: %i lanes\n",
pipe_name(pipe), pipe_config->fdi_lanes);
return false;
}
return true;
case PIPE_C:
- if (!pipe_has_enabled_pch(pipe_B_crtc) ||
- pipe_B_crtc->config->fdi_lanes <= 2) {
- if (pipe_config->fdi_lanes > 2) {
- DRM_DEBUG_KMS("invalid shared fdi lane config on pipe %c: %i lanes\n",
- pipe_name(pipe), pipe_config->fdi_lanes);
- return false;
- }
- } else {
+ if (pipe_config->fdi_lanes > 2) {
+ DRM_DEBUG_KMS("only 2 lanes on pipe %c: required %i lanes\n",
+ pipe_name(pipe), pipe_config->fdi_lanes);
+ return false;
+ }
+ if (pipe_required_fdi_lanes(dev, PIPE_B) > 2) {
DRM_DEBUG_KMS("fdi link B uses too many lanes to enable link C\n");
return false;
}
@@ -5581,7 +5849,7 @@
* - LVDS dual channel mode
* - Double wide pipe
*/
- if ((intel_pipe_has_type(crtc, INTEL_OUTPUT_LVDS) &&
+ if ((intel_pipe_will_have_type(pipe_config, INTEL_OUTPUT_LVDS) &&
intel_is_dual_link_lvds(dev)) || pipe_config->double_wide)
pipe_config->pipe_src_w &= ~1;
@@ -5615,10 +5883,6 @@
u32 val;
int divider;
- /* FIXME: Punit isn't quite ready yet */
- if (IS_CHERRYVIEW(dev))
- return 400000;
-
if (dev_priv->hpll_freq == 0)
dev_priv->hpll_freq = valleyview_get_vco(dev_priv);
@@ -5764,15 +6028,18 @@
&& !(dev_priv->quirks & QUIRK_LVDS_SSC_DISABLE);
}
-static int i9xx_get_refclk(struct intel_crtc *crtc, int num_connectors)
+static int i9xx_get_refclk(const struct intel_crtc_state *crtc_state,
+ int num_connectors)
{
- struct drm_device *dev = crtc->base.dev;
+ struct drm_device *dev = crtc_state->base.crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
int refclk;
+ WARN_ON(!crtc_state->base.state);
+
if (IS_VALLEYVIEW(dev)) {
refclk = 100000;
- } else if (intel_pipe_will_have_type(crtc, INTEL_OUTPUT_LVDS) &&
+ } else if (intel_pipe_will_have_type(crtc_state, INTEL_OUTPUT_LVDS) &&
intel_panel_use_ssc(dev_priv) && num_connectors < 2) {
refclk = dev_priv->vbt.lvds_ssc_freq;
DRM_DEBUG_KMS("using SSC reference clock of %d kHz\n", refclk);
@@ -5815,8 +6082,8 @@
crtc_state->dpll_hw_state.fp0 = fp;
crtc->lowfreq_avail = false;
- if (intel_pipe_will_have_type(crtc, INTEL_OUTPUT_LVDS) &&
- reduced_clock && i915.powersave) {
+ if (intel_pipe_will_have_type(crtc_state, INTEL_OUTPUT_LVDS) &&
+ reduced_clock) {
crtc_state->dpll_hw_state.fp1 = fp2;
crtc->lowfreq_avail = true;
} else {
@@ -5884,7 +6151,7 @@
* for gen < 8) and if DRRS is supported (to make sure the
* registers are not unnecessarily accessed).
*/
- if (m2_n2 && INTEL_INFO(dev)->gen < 8 &&
+ if (m2_n2 && (IS_CHERRYVIEW(dev) || INTEL_INFO(dev)->gen < 8) &&
crtc->config->has_drrs) {
I915_WRITE(PIPE_DATA_M2(transcoder),
TU_SIZE(m2_n2->tu) | m2_n2->gmch_m);
@@ -5900,13 +6167,29 @@
}
}
-void intel_dp_set_m_n(struct intel_crtc *crtc)
+void intel_dp_set_m_n(struct intel_crtc *crtc, enum link_m_n_set m_n)
{
+ struct intel_link_m_n *dp_m_n, *dp_m2_n2 = NULL;
+
+ if (m_n == M1_N1) {
+ dp_m_n = &crtc->config->dp_m_n;
+ dp_m2_n2 = &crtc->config->dp_m2_n2;
+ } else if (m_n == M2_N2) {
+
+ /*
+ * M2_N2 registers are not supported. Hence the m2_n2 divider value
+ * needs to be programmed into M1_N1.
+ */
+ dp_m_n = &crtc->config->dp_m2_n2;
+ } else {
+ DRM_ERROR("Unsupported divider value\n");
+ return;
+ }
+
if (crtc->config->has_pch_encoder)
intel_pch_transcoder_set_m_n(crtc, &crtc->config->dp_m_n);
else
- intel_cpu_transcoder_set_m_n(crtc, &crtc->config->dp_m_n,
- &crtc->config->dp_m2_n2);
+ intel_cpu_transcoder_set_m_n(crtc, dp_m_n, dp_m2_n2);
}
static void vlv_update_pll(struct intel_crtc *crtc,
@@ -6044,9 +6327,10 @@
int pipe = crtc->pipe;
int dpll_reg = DPLL(crtc->pipe);
enum dpio_channel port = vlv_pipe_to_channel(pipe);
- u32 loopfilter, intcoeff;
+ u32 loopfilter, tribuf_calcntr;
u32 bestn, bestm1, bestm2, bestp1, bestp2, bestm2_frac;
- int refclk;
+ u32 dpio_val;
+ int vco;
bestn = pipe_config->dpll.n;
bestm2_frac = pipe_config->dpll.m2 & 0x3fffff;
@@ -6054,6 +6338,9 @@
bestm2 = pipe_config->dpll.m2 >> 22;
bestp1 = pipe_config->dpll.p1;
bestp2 = pipe_config->dpll.p2;
+ vco = pipe_config->dpll.vco;
+ dpio_val = 0;
+ loopfilter = 0;
/*
* Enable Refclk and SSC
@@ -6079,26 +6366,56 @@
1 << DPIO_CHV_N_DIV_SHIFT);
/* M2 fraction division */
- vlv_dpio_write(dev_priv, pipe, CHV_PLL_DW2(port), bestm2_frac);
+ if (bestm2_frac)
+ vlv_dpio_write(dev_priv, pipe, CHV_PLL_DW2(port), bestm2_frac);
/* M2 fraction division enable */
- vlv_dpio_write(dev_priv, pipe, CHV_PLL_DW3(port),
- DPIO_CHV_FRAC_DIV_EN |
- (2 << DPIO_CHV_FEEDFWD_GAIN_SHIFT));
+ dpio_val = vlv_dpio_read(dev_priv, pipe, CHV_PLL_DW3(port));
+ dpio_val &= ~(DPIO_CHV_FEEDFWD_GAIN_MASK | DPIO_CHV_FRAC_DIV_EN);
+ dpio_val |= (2 << DPIO_CHV_FEEDFWD_GAIN_SHIFT);
+ if (bestm2_frac)
+ dpio_val |= DPIO_CHV_FRAC_DIV_EN;
+ vlv_dpio_write(dev_priv, pipe, CHV_PLL_DW3(port), dpio_val);
+
+ /* Program digital lock detect threshold */
+ dpio_val = vlv_dpio_read(dev_priv, pipe, CHV_PLL_DW9(port));
+ dpio_val &= ~(DPIO_CHV_INT_LOCK_THRESHOLD_MASK |
+ DPIO_CHV_INT_LOCK_THRESHOLD_SEL_COARSE);
+ dpio_val |= (0x5 << DPIO_CHV_INT_LOCK_THRESHOLD_SHIFT);
+ if (!bestm2_frac)
+ dpio_val |= DPIO_CHV_INT_LOCK_THRESHOLD_SEL_COARSE;
+ vlv_dpio_write(dev_priv, pipe, CHV_PLL_DW9(port), dpio_val);
/* Loop filter */
- refclk = i9xx_get_refclk(crtc, 0);
- loopfilter = 5 << DPIO_CHV_PROP_COEFF_SHIFT |
- 2 << DPIO_CHV_GAIN_CTRL_SHIFT;
- if (refclk == 100000)
- intcoeff = 11;
- else if (refclk == 38400)
- intcoeff = 10;
- else
- intcoeff = 9;
- loopfilter |= intcoeff << DPIO_CHV_INT_COEFF_SHIFT;
+ if (vco == 5400000) {
+ loopfilter |= (0x3 << DPIO_CHV_PROP_COEFF_SHIFT);
+ loopfilter |= (0x8 << DPIO_CHV_INT_COEFF_SHIFT);
+ loopfilter |= (0x1 << DPIO_CHV_GAIN_CTRL_SHIFT);
+ tribuf_calcntr = 0x9;
+ } else if (vco <= 6200000) {
+ loopfilter |= (0x5 << DPIO_CHV_PROP_COEFF_SHIFT);
+ loopfilter |= (0xB << DPIO_CHV_INT_COEFF_SHIFT);
+ loopfilter |= (0x3 << DPIO_CHV_GAIN_CTRL_SHIFT);
+ tribuf_calcntr = 0x9;
+ } else if (vco <= 6480000) {
+ loopfilter |= (0x4 << DPIO_CHV_PROP_COEFF_SHIFT);
+ loopfilter |= (0x9 << DPIO_CHV_INT_COEFF_SHIFT);
+ loopfilter |= (0x3 << DPIO_CHV_GAIN_CTRL_SHIFT);
+ tribuf_calcntr = 0x8;
+ } else {
+ /* Not supported. Apply the same limits as in the max case */
+ loopfilter |= (0x4 << DPIO_CHV_PROP_COEFF_SHIFT);
+ loopfilter |= (0x9 << DPIO_CHV_INT_COEFF_SHIFT);
+ loopfilter |= (0x3 << DPIO_CHV_GAIN_CTRL_SHIFT);
+ tribuf_calcntr = 0;
+ }
vlv_dpio_write(dev_priv, pipe, CHV_PLL_DW6(port), loopfilter);
+ dpio_val = vlv_dpio_read(dev_priv, pipe, CHV_PLL_DW8(port));
+ dpio_val &= ~DPIO_CHV_TDC_TARGET_CNT_MASK;
+ dpio_val |= (tribuf_calcntr << DPIO_CHV_TDC_TARGET_CNT_SHIFT);
+ vlv_dpio_write(dev_priv, pipe, CHV_PLL_DW8(port), dpio_val);
+
/* AFC Recal */
vlv_dpio_write(dev_priv, pipe, CHV_CMN_DW14(port),
vlv_dpio_read(dev_priv, pipe, CHV_CMN_DW14(port)) |
@@ -6123,6 +6440,7 @@
struct intel_crtc *crtc =
to_intel_crtc(intel_get_crtc_for_pipe(dev, pipe));
struct intel_crtc_state pipe_config = {
+ .base.crtc = &crtc->base,
.pixel_multiplier = 1,
.dpll = *dpll,
};
@@ -6167,12 +6485,12 @@
i9xx_update_pll_dividers(crtc, crtc_state, reduced_clock);
- is_sdvo = intel_pipe_will_have_type(crtc, INTEL_OUTPUT_SDVO) ||
- intel_pipe_will_have_type(crtc, INTEL_OUTPUT_HDMI);
+ is_sdvo = intel_pipe_will_have_type(crtc_state, INTEL_OUTPUT_SDVO) ||
+ intel_pipe_will_have_type(crtc_state, INTEL_OUTPUT_HDMI);
dpll = DPLL_VGA_MODE_DIS;
- if (intel_pipe_will_have_type(crtc, INTEL_OUTPUT_LVDS))
+ if (intel_pipe_will_have_type(crtc_state, INTEL_OUTPUT_LVDS))
dpll |= DPLLB_MODE_LVDS;
else
dpll |= DPLLB_MODE_DAC_SERIAL;
@@ -6215,7 +6533,7 @@
if (crtc_state->sdvo_tv_clock)
dpll |= PLL_REF_INPUT_TVCLKINBC;
- else if (intel_pipe_will_have_type(crtc, INTEL_OUTPUT_LVDS) &&
+ else if (intel_pipe_will_have_type(crtc_state, INTEL_OUTPUT_LVDS) &&
intel_panel_use_ssc(dev_priv) && num_connectors < 2)
dpll |= PLLB_REF_INPUT_SPREADSPECTRUMIN;
else
@@ -6245,7 +6563,7 @@
dpll = DPLL_VGA_MODE_DIS;
- if (intel_pipe_will_have_type(crtc, INTEL_OUTPUT_LVDS)) {
+ if (intel_pipe_will_have_type(crtc_state, INTEL_OUTPUT_LVDS)) {
dpll |= (1 << (clock->p1 - 1)) << DPLL_FPA01_P1_POST_DIV_SHIFT;
} else {
if (clock->p1 == 2)
@@ -6256,10 +6574,10 @@
dpll |= PLL_P2_DIVIDE_BY_4;
}
- if (!IS_I830(dev) && intel_pipe_will_have_type(crtc, INTEL_OUTPUT_DVO))
+ if (!IS_I830(dev) && intel_pipe_will_have_type(crtc_state, INTEL_OUTPUT_DVO))
dpll |= DPLL_DVO_2X_MODE;
- if (intel_pipe_will_have_type(crtc, INTEL_OUTPUT_LVDS) &&
+ if (intel_pipe_will_have_type(crtc_state, INTEL_OUTPUT_LVDS) &&
intel_panel_use_ssc(dev_priv) && num_connectors < 2)
dpll |= PLLB_REF_INPUT_SPREADSPECTRUMIN;
else
@@ -6473,11 +6791,20 @@
bool is_lvds = false, is_dsi = false;
struct intel_encoder *encoder;
const intel_limit_t *limit;
+ struct drm_atomic_state *state = crtc_state->base.state;
+ struct drm_connector_state *connector_state;
+ int i;
- for_each_intel_encoder(dev, encoder) {
- if (encoder->new_crtc != crtc)
+ for (i = 0; i < state->num_connector; i++) {
+ if (!state->connectors[i])
continue;
+ connector_state = state->connector_states[i];
+ if (connector_state->crtc != &crtc->base)
+ continue;
+
+ encoder = to_intel_encoder(connector_state->best_encoder);
+
switch (encoder->type) {
case INTEL_OUTPUT_LVDS:
is_lvds = true;
@@ -6496,7 +6823,7 @@
return 0;
if (!crtc_state->clock_set) {
- refclk = i9xx_get_refclk(crtc, num_connectors);
+ refclk = i9xx_get_refclk(crtc_state, num_connectors);
/*
* Returns a set of divisors for the desired target clock with
@@ -6504,8 +6831,8 @@
* the clock equation: reflck * (5 * (m1 + 2) + (m2 + 2)) / (n +
* 2) / p1 / p2.
*/
- limit = intel_limit(crtc, refclk);
- ok = dev_priv->display.find_dpll(limit, crtc,
+ limit = intel_limit(crtc_state, refclk);
+ ok = dev_priv->display.find_dpll(limit, crtc_state,
crtc_state->port_clock,
refclk, NULL, &clock);
if (!ok) {
@@ -6521,7 +6848,7 @@
* we will disable the LVDS downclock feature.
*/
has_reduced_clock =
- dev_priv->display.find_dpll(limit, crtc,
+ dev_priv->display.find_dpll(limit, crtc_state,
dev_priv->lvds_downclock,
refclk, &clock,
&reduced_clock);
@@ -6620,7 +6947,7 @@
u32 val, base, offset;
int pipe = crtc->pipe, plane = crtc->plane;
int fourcc, pixel_format;
- int aligned_height;
+ unsigned int aligned_height;
struct drm_framebuffer *fb;
struct intel_framebuffer *intel_fb;
@@ -6636,9 +6963,12 @@
fb = &intel_fb->base;
- if (INTEL_INFO(dev)->gen >= 4)
- if (val & DISPPLANE_TILED)
+ if (INTEL_INFO(dev)->gen >= 4) {
+ if (val & DISPPLANE_TILED) {
plane_config->tiling = I915_TILING_X;
+ fb->modifier[0] = I915_FORMAT_MOD_X_TILED;
+ }
+ }
pixel_format = val & DISPPLANE_PIXFORMAT_MASK;
fourcc = i9xx_format_to_fourcc(pixel_format);
@@ -6664,7 +6994,8 @@
fb->pitches[0] = val & 0xffffffc0;
aligned_height = intel_fb_align_height(dev, fb->height,
- plane_config->tiling);
+ fb->pixel_format,
+ fb->modifier[0]);
plane_config->size = fb->pitches[0] * aligned_height;
@@ -6673,7 +7004,7 @@
fb->bits_per_pixel, base, fb->pitches[0],
plane_config->size);
- crtc->base.primary->fb = fb;
+ plane_config->fb = intel_fb;
}
static void chv_crtc_clock_get(struct intel_crtc *crtc,
@@ -7147,18 +7478,26 @@
lpt_init_pch_refclk(dev);
}
-static int ironlake_get_refclk(struct drm_crtc *crtc)
+static int ironlake_get_refclk(struct intel_crtc_state *crtc_state)
{
- struct drm_device *dev = crtc->dev;
+ struct drm_device *dev = crtc_state->base.crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
+ struct drm_atomic_state *state = crtc_state->base.state;
+ struct drm_connector_state *connector_state;
struct intel_encoder *encoder;
- int num_connectors = 0;
+ int num_connectors = 0, i;
bool is_lvds = false;
- for_each_intel_encoder(dev, encoder) {
- if (encoder->new_crtc != to_intel_crtc(crtc))
+ for (i = 0; i < state->num_connector; i++) {
+ if (!state->connectors[i])
continue;
+ connector_state = state->connector_states[i];
+ if (connector_state->crtc != crtc_state->base.crtc)
+ continue;
+
+ encoder = to_intel_encoder(connector_state->best_encoder);
+
switch (encoder->type) {
case INTEL_OUTPUT_LVDS:
is_lvds = true;
@@ -7345,22 +7684,21 @@
{
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
- struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
int refclk;
const intel_limit_t *limit;
bool ret, is_lvds = false;
- is_lvds = intel_pipe_will_have_type(intel_crtc, INTEL_OUTPUT_LVDS);
+ is_lvds = intel_pipe_will_have_type(crtc_state, INTEL_OUTPUT_LVDS);
- refclk = ironlake_get_refclk(crtc);
+ refclk = ironlake_get_refclk(crtc_state);
/*
* Returns a set of divisors for the desired target clock with the given
* refclk, or FALSE. The returned values represent the clock equation:
* reflck * (5 * (m1 + 2) + (m2 + 2)) / (n + 2) / p1 / p2.
*/
- limit = intel_limit(intel_crtc, refclk);
- ret = dev_priv->display.find_dpll(limit, intel_crtc,
+ limit = intel_limit(crtc_state, refclk);
+ ret = dev_priv->display.find_dpll(limit, crtc_state,
crtc_state->port_clock,
refclk, NULL, clock);
if (!ret)
@@ -7374,7 +7712,7 @@
* downclock feature.
*/
*has_reduced_clock =
- dev_priv->display.find_dpll(limit, intel_crtc,
+ dev_priv->display.find_dpll(limit, crtc_state,
dev_priv->lvds_downclock,
refclk, clock,
reduced_clock);
@@ -7407,16 +7745,24 @@
struct drm_crtc *crtc = &intel_crtc->base;
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
- struct intel_encoder *intel_encoder;
+ struct drm_atomic_state *state = crtc_state->base.state;
+ struct drm_connector_state *connector_state;
+ struct intel_encoder *encoder;
uint32_t dpll;
- int factor, num_connectors = 0;
+ int factor, num_connectors = 0, i;
bool is_lvds = false, is_sdvo = false;
- for_each_intel_encoder(dev, intel_encoder) {
- if (intel_encoder->new_crtc != to_intel_crtc(crtc))
+ for (i = 0; i < state->num_connector; i++) {
+ if (!state->connectors[i])
continue;
- switch (intel_encoder->type) {
+ connector_state = state->connector_states[i];
+ if (connector_state->crtc != crtc_state->base.crtc)
+ continue;
+
+ encoder = to_intel_encoder(connector_state->best_encoder);
+
+ switch (encoder->type) {
case INTEL_OUTPUT_LVDS:
is_lvds = true;
break;
@@ -7545,7 +7891,7 @@
}
}
- if (is_lvds && has_reduced_clock && i915.powersave)
+ if (is_lvds && has_reduced_clock)
crtc->lowfreq_avail = true;
else
crtc->lowfreq_avail = false;
@@ -7651,10 +7997,10 @@
{
struct drm_device *dev = crtc->base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
- u32 val, base, offset, stride_mult;
+ u32 val, base, offset, stride_mult, tiling;
int pipe = crtc->pipe;
int fourcc, pixel_format;
- int aligned_height;
+ unsigned int aligned_height;
struct drm_framebuffer *fb;
struct intel_framebuffer *intel_fb;
@@ -7670,9 +8016,6 @@
if (!(val & PLANE_CTL_ENABLE))
goto error;
- if (val & PLANE_CTL_TILED_MASK)
- plane_config->tiling = I915_TILING_X;
-
pixel_format = val & PLANE_CTL_FORMAT_MASK;
fourcc = skl_format_to_fourcc(pixel_format,
val & PLANE_CTL_ORDER_RGBX,
@@ -7680,6 +8023,26 @@
fb->pixel_format = fourcc;
fb->bits_per_pixel = drm_format_plane_cpp(fourcc, 0) * 8;
+ tiling = val & PLANE_CTL_TILED_MASK;
+ switch (tiling) {
+ case PLANE_CTL_TILED_LINEAR:
+ fb->modifier[0] = DRM_FORMAT_MOD_NONE;
+ break;
+ case PLANE_CTL_TILED_X:
+ plane_config->tiling = I915_TILING_X;
+ fb->modifier[0] = I915_FORMAT_MOD_X_TILED;
+ break;
+ case PLANE_CTL_TILED_Y:
+ fb->modifier[0] = I915_FORMAT_MOD_Y_TILED;
+ break;
+ case PLANE_CTL_TILED_YF:
+ fb->modifier[0] = I915_FORMAT_MOD_Yf_TILED;
+ break;
+ default:
+ MISSING_CASE(tiling);
+ goto error;
+ }
+
base = I915_READ(PLANE_SURF(pipe, 0)) & 0xfffff000;
plane_config->base = base;
@@ -7690,21 +8053,13 @@
fb->width = ((val >> 0) & 0x1fff) + 1;
val = I915_READ(PLANE_STRIDE(pipe, 0));
- switch (plane_config->tiling) {
- case I915_TILING_NONE:
- stride_mult = 64;
- break;
- case I915_TILING_X:
- stride_mult = 512;
- break;
- default:
- MISSING_CASE(plane_config->tiling);
- goto error;
- }
+ stride_mult = intel_fb_stride_alignment(dev, fb->modifier[0],
+ fb->pixel_format);
fb->pitches[0] = (val & 0x3ff) * stride_mult;
aligned_height = intel_fb_align_height(dev, fb->height,
- plane_config->tiling);
+ fb->pixel_format,
+ fb->modifier[0]);
plane_config->size = fb->pitches[0] * aligned_height;
@@ -7713,7 +8068,7 @@
fb->bits_per_pixel, base, fb->pitches[0],
plane_config->size);
- crtc->base.primary->fb = fb;
+ plane_config->fb = intel_fb;
return;
error:
@@ -7753,7 +8108,7 @@
u32 val, base, offset;
int pipe = crtc->pipe;
int fourcc, pixel_format;
- int aligned_height;
+ unsigned int aligned_height;
struct drm_framebuffer *fb;
struct intel_framebuffer *intel_fb;
@@ -7769,9 +8124,12 @@
fb = &intel_fb->base;
- if (INTEL_INFO(dev)->gen >= 4)
- if (val & DISPPLANE_TILED)
+ if (INTEL_INFO(dev)->gen >= 4) {
+ if (val & DISPPLANE_TILED) {
plane_config->tiling = I915_TILING_X;
+ fb->modifier[0] = I915_FORMAT_MOD_X_TILED;
+ }
+ }
pixel_format = val & DISPPLANE_PIXFORMAT_MASK;
fourcc = i9xx_format_to_fourcc(pixel_format);
@@ -7797,7 +8155,8 @@
fb->pitches[0] = val & 0xffffffc0;
aligned_height = intel_fb_align_height(dev, fb->height,
- plane_config->tiling);
+ fb->pixel_format,
+ fb->modifier[0]);
plane_config->size = fb->pitches[0] * aligned_height;
@@ -7806,7 +8165,7 @@
fb->bits_per_pixel, base, fb->pitches[0],
plane_config->size);
- crtc->base.primary->fb = fb;
+ plane_config->fb = intel_fb;
}
static bool ironlake_get_pipe_config(struct intel_crtc *crtc,
@@ -8292,8 +8651,8 @@
uint32_t cntl = 0, size = 0;
if (base) {
- unsigned int width = intel_crtc->cursor_width;
- unsigned int height = intel_crtc->cursor_height;
+ unsigned int width = intel_crtc->base.cursor->state->crtc_w;
+ unsigned int height = intel_crtc->base.cursor->state->crtc_h;
unsigned int stride = roundup_pow_of_two(width) * 4;
switch (stride) {
@@ -8357,7 +8716,7 @@
cntl = 0;
if (base) {
cntl = MCURSOR_GAMMA_ENABLE;
- switch (intel_crtc->cursor_width) {
+ switch (intel_crtc->base.cursor->state->crtc_w) {
case 64:
cntl |= CURSOR_MODE_64_ARGB_AX;
break;
@@ -8368,7 +8727,7 @@
cntl |= CURSOR_MODE_256_ARGB_AX;
break;
default:
- MISSING_CASE(intel_crtc->cursor_width);
+ MISSING_CASE(intel_crtc->base.cursor->state->crtc_w);
return;
}
cntl |= pipe << 28; /* Connect to correct pipe */
@@ -8415,7 +8774,7 @@
base = 0;
if (x < 0) {
- if (x + intel_crtc->cursor_width <= 0)
+ if (x + intel_crtc->base.cursor->state->crtc_w <= 0)
base = 0;
pos |= CURSOR_POS_SIGN << CURSOR_X_SHIFT;
@@ -8424,7 +8783,7 @@
pos |= x << CURSOR_X_SHIFT;
if (y < 0) {
- if (y + intel_crtc->cursor_height <= 0)
+ if (y + intel_crtc->base.cursor->state->crtc_h <= 0)
base = 0;
pos |= CURSOR_POS_SIGN << CURSOR_Y_SHIFT;
@@ -8440,8 +8799,8 @@
/* ILK+ do this automagically */
if (HAS_GMCH_DISPLAY(dev) &&
crtc->cursor->state->rotation == BIT(DRM_ROTATE_180)) {
- base += (intel_crtc->cursor_height *
- intel_crtc->cursor_width - 1) * 4;
+ base += (intel_crtc->base.cursor->state->crtc_h *
+ intel_crtc->base.cursor->state->crtc_w - 1) * 4;
}
if (IS_845G(dev) || IS_I865G(dev))
@@ -8633,6 +8992,8 @@
struct drm_device *dev = encoder->dev;
struct drm_framebuffer *fb;
struct drm_mode_config *config = &dev->mode_config;
+ struct drm_atomic_state *state = NULL;
+ struct drm_connector_state *connector_state;
int ret, i = -1;
DRM_DEBUG_KMS("[CONNECTOR:%d:%s], [ENCODER:%d:%s]\n",
@@ -8680,7 +9041,7 @@
i++;
if (!(encoder->possible_crtcs & (1 << i)))
continue;
- if (possible_crtc->enabled)
+ if (possible_crtc->state->enable)
continue;
/* This can occur when applying the pipe A quirk on resume. */
if (to_intel_crtc(possible_crtc)->new_enabled)
@@ -8714,6 +9075,21 @@
old->load_detect_temp = true;
old->release_fb = NULL;
+ state = drm_atomic_state_alloc(dev);
+ if (!state)
+ return false;
+
+ state->acquire_ctx = ctx;
+
+ connector_state = drm_atomic_get_connector_state(state, connector);
+ if (IS_ERR(connector_state)) {
+ ret = PTR_ERR(connector_state);
+ goto fail;
+ }
+
+ connector_state->crtc = crtc;
+ connector_state->best_encoder = &intel_encoder->base;
+
if (!mode)
mode = &load_detect_mode;
@@ -8736,7 +9112,7 @@
goto fail;
}
- if (intel_set_mode(crtc, mode, 0, 0, fb)) {
+ if (intel_set_mode(crtc, mode, 0, 0, fb, state)) {
DRM_DEBUG_KMS("failed to set mode on load-detect pipe\n");
if (old->release_fb)
old->release_fb->funcs->destroy(old->release_fb);
@@ -8749,12 +9125,17 @@
return true;
fail:
- intel_crtc->new_enabled = crtc->enabled;
+ intel_crtc->new_enabled = crtc->state->enable;
if (intel_crtc->new_enabled)
intel_crtc->new_config = intel_crtc->config;
else
intel_crtc->new_config = NULL;
fail_unlock:
+ if (state) {
+ drm_atomic_state_free(state);
+ state = NULL;
+ }
+
if (ret == -EDEADLK) {
drm_modeset_backoff(ctx);
goto retry;
@@ -8764,24 +9145,44 @@
}
void intel_release_load_detect_pipe(struct drm_connector *connector,
- struct intel_load_detect_pipe *old)
+ struct intel_load_detect_pipe *old,
+ struct drm_modeset_acquire_ctx *ctx)
{
+ struct drm_device *dev = connector->dev;
struct intel_encoder *intel_encoder =
intel_attached_encoder(connector);
struct drm_encoder *encoder = &intel_encoder->base;
struct drm_crtc *crtc = encoder->crtc;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
+ struct drm_atomic_state *state;
+ struct drm_connector_state *connector_state;
DRM_DEBUG_KMS("[CONNECTOR:%d:%s], [ENCODER:%d:%s]\n",
connector->base.id, connector->name,
encoder->base.id, encoder->name);
if (old->load_detect_temp) {
+ state = drm_atomic_state_alloc(dev);
+ if (!state)
+ goto fail;
+
+ state->acquire_ctx = ctx;
+
+ connector_state = drm_atomic_get_connector_state(state, connector);
+ if (IS_ERR(connector_state))
+ goto fail;
+
to_intel_connector(connector)->new_encoder = NULL;
intel_encoder->new_crtc = NULL;
intel_crtc->new_enabled = false;
intel_crtc->new_config = NULL;
- intel_set_mode(crtc, NULL, 0, 0, NULL);
+
+ connector_state->best_encoder = NULL;
+ connector_state->crtc = NULL;
+
+ intel_set_mode(crtc, NULL, 0, 0, NULL, state);
+
+ drm_atomic_state_free(state);
if (old->release_fb) {
drm_framebuffer_unregister_private(old->release_fb);
@@ -8794,6 +9195,11 @@
/* Switch crtc and encoder back off if necessary */
if (old->dpms_mode != DRM_MODE_DPMS_ON)
connector->funcs->dpms(connector, old->dpms_mode);
+
+ return;
+fail:
+ DRM_DEBUG_KMS("Couldn't release load detect pipe.\n");
+ drm_atomic_state_free(state);
}
static int i9xx_pll_refclk(struct drm_device *dev,
@@ -9032,6 +9438,8 @@
intel_runtime_pm_get(dev_priv);
i915_update_gfx_val(dev_priv);
+ if (INTEL_INFO(dev)->gen >= 6)
+ gen6_rps_busy(dev_priv);
dev_priv->mm.busy = true;
}
@@ -9045,9 +9453,6 @@
dev_priv->mm.busy = false;
- if (!i915.powersave)
- goto out;
-
for_each_crtc(dev, crtc) {
if (!crtc->primary->fb)
continue;
@@ -9058,7 +9463,6 @@
if (INTEL_INFO(dev)->gen >= 6)
gen6_rps_idle(dev->dev_private);
-out:
intel_runtime_pm_put(dev_priv);
}
@@ -9100,9 +9504,8 @@
enum pipe pipe = to_intel_crtc(work->crtc)->pipe;
mutex_lock(&dev->struct_mutex);
- intel_unpin_fb_obj(work->old_fb_obj);
+ intel_unpin_fb_obj(work->old_fb, work->crtc->primary->state);
drm_gem_object_unreference(&work->pending_flip_obj->base);
- drm_gem_object_unreference(&work->old_fb_obj->base);
intel_fbc_update(dev);
@@ -9111,6 +9514,7 @@
mutex_unlock(&dev->struct_mutex);
intel_frontbuffer_flip_complete(dev, INTEL_FRONTBUFFER_PRIMARY(pipe));
+ drm_framebuffer_unreference(work->old_fb);
BUG_ON(atomic_read(&to_intel_crtc(work->crtc)->unpin_work_count) == 0);
atomic_dec(&to_intel_crtc(work->crtc)->unpin_work_count);
@@ -9627,69 +10031,6 @@
return 0;
}
-static int intel_gen9_queue_flip(struct drm_device *dev,
- struct drm_crtc *crtc,
- struct drm_framebuffer *fb,
- struct drm_i915_gem_object *obj,
- struct intel_engine_cs *ring,
- uint32_t flags)
-{
- struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
- uint32_t plane = 0, stride;
- int ret;
-
- switch(intel_crtc->pipe) {
- case PIPE_A:
- plane = MI_DISPLAY_FLIP_SKL_PLANE_1_A;
- break;
- case PIPE_B:
- plane = MI_DISPLAY_FLIP_SKL_PLANE_1_B;
- break;
- case PIPE_C:
- plane = MI_DISPLAY_FLIP_SKL_PLANE_1_C;
- break;
- default:
- WARN_ONCE(1, "unknown plane in flip command\n");
- return -ENODEV;
- }
-
- switch (obj->tiling_mode) {
- case I915_TILING_NONE:
- stride = fb->pitches[0] >> 6;
- break;
- case I915_TILING_X:
- stride = fb->pitches[0] >> 9;
- break;
- default:
- WARN_ONCE(1, "unknown tiling in flip command\n");
- return -ENODEV;
- }
-
- ret = intel_ring_begin(ring, 10);
- if (ret)
- return ret;
-
- intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(1));
- intel_ring_emit(ring, DERRMR);
- intel_ring_emit(ring, ~(DERRMR_PIPEA_PRI_FLIP_DONE |
- DERRMR_PIPEB_PRI_FLIP_DONE |
- DERRMR_PIPEC_PRI_FLIP_DONE));
- intel_ring_emit(ring, MI_STORE_REGISTER_MEM_GEN8(1) |
- MI_SRM_LRM_GLOBAL_GTT);
- intel_ring_emit(ring, DERRMR);
- intel_ring_emit(ring, ring->scratch.gtt_offset + 256);
- intel_ring_emit(ring, 0);
-
- intel_ring_emit(ring, MI_DISPLAY_FLIP_I915 | plane);
- intel_ring_emit(ring, stride << 6 | obj->tiling_mode);
- intel_ring_emit(ring, intel_crtc->unpin_work->gtt_offset);
-
- intel_mark_page_flip_active(intel_crtc);
- __intel_ring_advance(ring);
-
- return 0;
-}
-
static int intel_default_queue_flip(struct drm_device *dev,
struct drm_crtc *crtc,
struct drm_framebuffer *fb,
@@ -9719,10 +10060,10 @@
!i915_gem_request_completed(work->flip_queued_req, true))
return false;
- work->flip_ready_vblank = drm_vblank_count(dev, intel_crtc->pipe);
+ work->flip_ready_vblank = drm_crtc_vblank_count(crtc);
}
- if (drm_vblank_count(dev, intel_crtc->pipe) - work->flip_ready_vblank < 3)
+ if (drm_crtc_vblank_count(crtc) - work->flip_ready_vblank < 3)
return false;
/* Potential stall - if we see that the flip has happened,
@@ -9753,7 +10094,8 @@
spin_lock(&dev->event_lock);
if (intel_crtc->unpin_work && __intel_pageflip_stall_check(dev, crtc)) {
WARN_ONCE(1, "Kicking stuck page flip: queued at %d, now %d\n",
- intel_crtc->unpin_work->flip_queued_vblank, drm_vblank_count(dev, pipe));
+ intel_crtc->unpin_work->flip_queued_vblank,
+ drm_vblank_count(dev, pipe));
page_flip_completed(intel_crtc);
}
spin_unlock(&dev->event_lock);
@@ -9805,7 +10147,7 @@
work->event = event;
work->crtc = crtc;
- work->old_fb_obj = intel_fb_obj(old_fb);
+ work->old_fb = old_fb;
INIT_WORK(&work->work, intel_unpin_work_fn);
ret = drm_crtc_vblank_get(crtc);
@@ -9836,12 +10178,8 @@
if (atomic_read(&intel_crtc->unpin_work_count) >= 2)
flush_workqueue(dev_priv->wq);
- ret = i915_mutex_lock_interruptible(dev);
- if (ret)
- goto cleanup;
-
/* Reference the objects for the scheduled work. */
- drm_gem_object_reference(&work->old_fb_obj->base);
+ drm_framebuffer_reference(work->old_fb);
drm_gem_object_reference(&obj->base);
crtc->primary->fb = fb;
@@ -9849,6 +10187,10 @@
work->pending_flip_obj = obj;
+ ret = i915_mutex_lock_interruptible(dev);
+ if (ret)
+ goto cleanup;
+
atomic_inc(&intel_crtc->unpin_work_count);
intel_crtc->reset_counter = atomic_read(&dev_priv->gpu_error.reset_counter);
@@ -9857,7 +10199,7 @@
if (IS_VALLEYVIEW(dev)) {
ring = &dev_priv->ring[BCS];
- if (obj->tiling_mode != work->old_fb_obj->tiling_mode)
+ if (obj->tiling_mode != intel_fb_obj(work->old_fb)->tiling_mode)
/* vlv: DISPLAY_FLIP fails to change tiling */
ring = NULL;
} else if (IS_IVYBRIDGE(dev) || IS_HASWELL(dev)) {
@@ -9870,12 +10212,13 @@
ring = &dev_priv->ring[RCS];
}
- ret = intel_pin_and_fence_fb_obj(crtc->primary, fb, ring);
+ ret = intel_pin_and_fence_fb_obj(crtc->primary, fb,
+ crtc->primary->state, ring);
if (ret)
goto cleanup_pending;
- work->gtt_offset =
- i915_gem_obj_ggtt_offset(obj) + intel_crtc->dspaddr_offset;
+ work->gtt_offset = intel_plane_obj_offset(to_intel_plane(primary), obj)
+ + intel_crtc->dspaddr_offset;
if (use_mmio_flip(ring, obj)) {
ret = intel_queue_mmio_flip(dev, crtc, fb, obj, ring,
@@ -9895,10 +10238,10 @@
intel_ring_get_request(ring));
}
- work->flip_queued_vblank = drm_vblank_count(dev, intel_crtc->pipe);
+ work->flip_queued_vblank = drm_crtc_vblank_count(crtc);
work->enable_stall_check = true;
- i915_gem_track_fb(work->old_fb_obj, obj,
+ i915_gem_track_fb(intel_fb_obj(work->old_fb), obj,
INTEL_FRONTBUFFER_PRIMARY(pipe));
intel_fbc_disable(dev);
@@ -9910,16 +10253,17 @@
return 0;
cleanup_unpin:
- intel_unpin_fb_obj(obj);
+ intel_unpin_fb_obj(fb, crtc->primary->state);
cleanup_pending:
atomic_dec(&intel_crtc->unpin_work_count);
+ mutex_unlock(&dev->struct_mutex);
+cleanup:
crtc->primary->fb = old_fb;
update_state_fb(crtc->primary);
- drm_gem_object_unreference(&work->old_fb_obj->base);
- drm_gem_object_unreference(&obj->base);
- mutex_unlock(&dev->struct_mutex);
-cleanup:
+ drm_gem_object_unreference_unlocked(&obj->base);
+ drm_framebuffer_unreference(work->old_fb);
+
spin_lock_irq(&dev->event_lock);
intel_crtc->unpin_work = NULL;
spin_unlock_irq(&dev->event_lock);
@@ -9959,8 +10303,7 @@
struct intel_encoder *encoder;
struct intel_connector *connector;
- list_for_each_entry(connector, &dev->mode_config.connector_list,
- base.head) {
+ for_each_intel_connector(dev, connector) {
connector->new_encoder =
to_intel_encoder(connector->base.encoder);
}
@@ -9971,7 +10314,7 @@
}
for_each_intel_crtc(dev, crtc) {
- crtc->new_enabled = crtc->base.enabled;
+ crtc->new_enabled = crtc->base.state->enable;
if (crtc->new_enabled)
crtc->new_config = crtc->config;
@@ -9980,6 +10323,27 @@
}
}
+/* Transitional helper to copy current connector/encoder state to
+ * connector->state. This is needed so that code that is partially
+ * converted to atomic does the right thing.
+ */
+static void intel_modeset_update_connector_atomic_state(struct drm_device *dev)
+{
+ struct intel_connector *connector;
+
+ for_each_intel_connector(dev, connector) {
+ if (connector->base.encoder) {
+ connector->base.state->best_encoder =
+ connector->base.encoder;
+ connector->base.state->crtc =
+ connector->base.encoder->crtc;
+ } else {
+ connector->base.state->best_encoder = NULL;
+ connector->base.state->crtc = NULL;
+ }
+ }
+}
+
/**
* intel_modeset_commit_output_state
*
@@ -9991,8 +10355,7 @@
struct intel_encoder *encoder;
struct intel_connector *connector;
- list_for_each_entry(connector, &dev->mode_config.connector_list,
- base.head) {
+ for_each_intel_connector(dev, connector) {
connector->base.encoder = &connector->new_encoder->base;
}
@@ -10001,8 +10364,11 @@
}
for_each_intel_crtc(dev, crtc) {
+ crtc->base.state->enable = crtc->new_enabled;
crtc->base.enabled = crtc->new_enabled;
}
+
+ intel_modeset_update_connector_atomic_state(dev);
}
static void
@@ -10037,8 +10403,9 @@
struct intel_crtc_state *pipe_config)
{
struct drm_device *dev = crtc->base.dev;
+ struct drm_atomic_state *state;
struct intel_connector *connector;
- int bpp;
+ int bpp, i;
switch (fb->pixel_format) {
case DRM_FORMAT_C8:
@@ -10078,11 +10445,15 @@
pipe_config->pipe_bpp = bpp;
+ state = pipe_config->base.state;
+
/* Clamp display bpp to EDID value */
- list_for_each_entry(connector, &dev->mode_config.connector_list,
- base.head) {
- if (!connector->new_encoder ||
- connector->new_encoder->new_crtc != crtc)
+ for (i = 0; i < state->num_connector; i++) {
+ if (!state->connectors[i])
+ continue;
+
+ connector = to_intel_connector(state->connectors[i]);
+ if (state->connector_states[i]->crtc != &crtc->base)
continue;
connected_sink_compute_bpp(connector, pipe_config);
@@ -10207,8 +10578,7 @@
* list to detect the problem on ddi platforms
* where there's just one encoder per digital port.
*/
- list_for_each_entry(connector,
- &dev->mode_config.connector_list, base.head) {
+ for_each_intel_connector(dev, connector) {
struct intel_encoder *encoder = connector->new_encoder;
if (!encoder)
@@ -10239,15 +10609,30 @@
return true;
}
+static void
+clear_intel_crtc_state(struct intel_crtc_state *crtc_state)
+{
+ struct drm_crtc_state tmp_state;
+
+ /* Clear only the intel specific part of the crtc state */
+ tmp_state = crtc_state->base;
+ memset(crtc_state, 0, sizeof *crtc_state);
+ crtc_state->base = tmp_state;
+}
+
static struct intel_crtc_state *
intel_modeset_pipe_config(struct drm_crtc *crtc,
struct drm_framebuffer *fb,
- struct drm_display_mode *mode)
+ struct drm_display_mode *mode,
+ struct drm_atomic_state *state)
{
struct drm_device *dev = crtc->dev;
struct intel_encoder *encoder;
+ struct intel_connector *connector;
+ struct drm_connector_state *connector_state;
struct intel_crtc_state *pipe_config;
int plane_bpp, ret = -EINVAL;
+ int i;
bool retry = true;
if (!check_encoder_cloning(to_intel_crtc(crtc))) {
@@ -10260,10 +10645,13 @@
return ERR_PTR(-EINVAL);
}
- pipe_config = kzalloc(sizeof(*pipe_config), GFP_KERNEL);
- if (!pipe_config)
- return ERR_PTR(-ENOMEM);
+ pipe_config = intel_atomic_get_crtc_state(state, to_intel_crtc(crtc));
+ if (IS_ERR(pipe_config))
+ return pipe_config;
+ clear_intel_crtc_state(pipe_config);
+
+ pipe_config->base.crtc = crtc;
drm_mode_copy(&pipe_config->base.adjusted_mode, mode);
drm_mode_copy(&pipe_config->base.mode, mode);
@@ -10318,11 +10706,17 @@
* adjust it according to limitations or connector properties, and also
* a chance to reject the mode entirely.
*/
- for_each_intel_encoder(dev, encoder) {
-
- if (&encoder->new_crtc->base != crtc)
+ for (i = 0; i < state->num_connector; i++) {
+ connector = to_intel_connector(state->connectors[i]);
+ if (!connector)
continue;
+ connector_state = state->connector_states[i];
+ if (connector_state->crtc != crtc)
+ continue;
+
+ encoder = to_intel_encoder(connector_state->best_encoder);
+
if (!(encoder->compute_config(encoder, pipe_config))) {
DRM_DEBUG_KMS("Encoder config failure\n");
goto fail;
@@ -10358,7 +10752,6 @@
return pipe_config;
fail:
- kfree(pipe_config);
return ERR_PTR(ret);
}
@@ -10380,8 +10773,7 @@
* to be part of the prepare_pipes mask. We don't (yet) support global
* modeset across multiple crtcs, so modeset_pipes will only have one
* bit set at most. */
- list_for_each_entry(connector, &dev->mode_config.connector_list,
- base.head) {
+ for_each_intel_connector(dev, connector) {
if (connector->base.encoder == &connector->new_encoder->base)
continue;
@@ -10412,7 +10804,7 @@
/* Check for pipes that will be enabled/disabled ... */
for_each_intel_crtc(dev, intel_crtc) {
- if (intel_crtc->base.enabled == intel_crtc->new_enabled)
+ if (intel_crtc->base.state->enable == intel_crtc->new_enabled)
continue;
if (!intel_crtc->new_enabled)
@@ -10487,10 +10879,10 @@
/* Double check state. */
for_each_intel_crtc(dev, intel_crtc) {
- WARN_ON(intel_crtc->base.enabled != intel_crtc_in_use(&intel_crtc->base));
+ WARN_ON(intel_crtc->base.state->enable != intel_crtc_in_use(&intel_crtc->base));
WARN_ON(intel_crtc->new_config &&
intel_crtc->new_config != intel_crtc->config);
- WARN_ON(intel_crtc->base.enabled != !!intel_crtc->new_config);
+ WARN_ON(intel_crtc->base.state->enable != !!intel_crtc->new_config);
}
list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
@@ -10750,7 +11142,7 @@
continue;
/* planes */
- for_each_plane(pipe, plane) {
+ for_each_plane(dev_priv, pipe, plane) {
hw_entry = &hw_ddb.plane[pipe][plane];
sw_entry = &sw_ddb->plane[pipe][plane];
@@ -10784,8 +11176,7 @@
{
struct intel_connector *connector;
- list_for_each_entry(connector, &dev->mode_config.connector_list,
- base.head) {
+ for_each_intel_connector(dev, connector) {
/* This also checks the encoder/connector hw state with the
* ->get_hw_state callbacks. */
intel_connector_check_state(connector);
@@ -10815,8 +11206,7 @@
I915_STATE_WARN(encoder->connectors_active && !encoder->base.crtc,
"encoder's active_connectors set, but no crtc\n");
- list_for_each_entry(connector, &dev->mode_config.connector_list,
- base.head) {
+ for_each_intel_connector(dev, connector) {
if (connector->base.encoder != &encoder->base)
continue;
enabled = true;
@@ -10877,7 +11267,7 @@
DRM_DEBUG_KMS("[CRTC:%d]\n",
crtc->base.base.id);
- I915_STATE_WARN(crtc->active && !crtc->base.enabled,
+ I915_STATE_WARN(crtc->active && !crtc->base.state->enable,
"active crtc, but not enabled in sw tracking\n");
for_each_intel_encoder(dev, encoder) {
@@ -10891,9 +11281,10 @@
I915_STATE_WARN(active != crtc->active,
"crtc's computed active state doesn't match tracked active state "
"(expected %i, found %i)\n", active, crtc->active);
- I915_STATE_WARN(enabled != crtc->base.enabled,
+ I915_STATE_WARN(enabled != crtc->base.state->enable,
"crtc's computed enabled state doesn't match tracked enabled state "
- "(expected %i, found %i)\n", enabled, crtc->base.enabled);
+ "(expected %i, found %i)\n", enabled,
+ crtc->base.state->enable);
active = dev_priv->display.get_pipe_config(crtc,
&pipe_config);
@@ -10957,7 +11348,7 @@
pll->on, active);
for_each_intel_crtc(dev, crtc) {
- if (crtc->base.enabled && intel_crtc_to_shared_dpll(crtc) == pll)
+ if (crtc->base.state->enable && intel_crtc_to_shared_dpll(crtc) == pll)
enabled_crtcs++;
if (crtc->active && intel_crtc_to_shared_dpll(crtc) == pll)
active_crtcs++;
@@ -11039,17 +11430,30 @@
intel_modeset_compute_config(struct drm_crtc *crtc,
struct drm_display_mode *mode,
struct drm_framebuffer *fb,
+ struct drm_atomic_state *state,
unsigned *modeset_pipes,
unsigned *prepare_pipes,
unsigned *disable_pipes)
{
+ struct drm_device *dev = crtc->dev;
struct intel_crtc_state *pipe_config = NULL;
+ struct intel_crtc *intel_crtc;
+ int ret = 0;
+
+ ret = drm_atomic_add_affected_connectors(state, crtc);
+ if (ret)
+ return ERR_PTR(ret);
intel_modeset_affected_pipes(crtc, modeset_pipes,
prepare_pipes, disable_pipes);
- if ((*modeset_pipes) == 0)
- goto out;
+ for_each_intel_crtc_masked(dev, *disable_pipes, intel_crtc) {
+ pipe_config = intel_atomic_get_crtc_state(state, intel_crtc);
+ if (IS_ERR(pipe_config))
+ return pipe_config;
+
+ pipe_config->base.enable = false;
+ }
/*
* Note this needs changes when we start tracking multiple modes
@@ -11057,15 +11461,21 @@
* (i.e. one pipe_config for each crtc) rather than just the one
* for this crtc.
*/
- pipe_config = intel_modeset_pipe_config(crtc, fb, mode);
- if (IS_ERR(pipe_config)) {
- goto out;
- }
- intel_dump_pipe_config(to_intel_crtc(crtc), pipe_config,
- "[modeset]");
+ for_each_intel_crtc_masked(dev, *modeset_pipes, intel_crtc) {
+ /* FIXME: For now we still expect modeset_pipes to have at most
+ * one bit set. */
+ if (WARN_ON(&intel_crtc->base != crtc))
+ continue;
-out:
- return pipe_config;
+ pipe_config = intel_modeset_pipe_config(crtc, fb, mode, state);
+ if (IS_ERR(pipe_config))
+ return pipe_config;
+
+ intel_dump_pipe_config(to_intel_crtc(crtc), pipe_config,
+ "[modeset]");
+ }
+
+ return intel_atomic_get_crtc_state(state, to_intel_crtc(crtc));
}
static int __intel_set_mode_setup_plls(struct drm_device *dev,
@@ -11109,6 +11519,7 @@
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct drm_display_mode *saved_mode;
+ struct intel_crtc_state *crtc_state_copy = NULL;
struct intel_crtc *intel_crtc;
int ret = 0;
@@ -11116,6 +11527,12 @@
if (!saved_mode)
return -ENOMEM;
+ crtc_state_copy = kmalloc(sizeof(*crtc_state_copy), GFP_KERNEL);
+ if (!crtc_state_copy) {
+ ret = -ENOMEM;
+ goto done;
+ }
+
*saved_mode = crtc->mode;
if (modeset_pipes)
@@ -11143,7 +11560,7 @@
intel_crtc_disable(&intel_crtc->base);
for_each_intel_crtc_masked(dev, prepare_pipes, intel_crtc) {
- if (intel_crtc->base.enabled)
+ if (intel_crtc->base.state->enable)
dev_priv->display.crtc_disable(&intel_crtc->base);
}
@@ -11173,7 +11590,7 @@
* update the output configuration. */
intel_modeset_update_state(dev, prepare_pipes);
- modeset_update_crtc_power_domains(dev);
+ modeset_update_crtc_power_domains(pipe_config->base.state);
/* Set up the DPLL and any encoders state that needs to adjust or depend
* on the DPLL.
@@ -11199,9 +11616,25 @@
/* FIXME: add subpixel order */
done:
- if (ret && crtc->enabled)
+ if (ret && crtc->state->enable)
crtc->mode = *saved_mode;
+ if (ret == 0 && pipe_config) {
+ struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
+
+ /* The pipe_config will be freed with the atomic state, so
+ * make a copy. */
+ memcpy(crtc_state_copy, intel_crtc->config,
+ sizeof *crtc_state_copy);
+ intel_crtc->config = crtc_state_copy;
+ intel_crtc->base.state = &crtc_state_copy->base;
+
+ if (modeset_pipes)
+ intel_crtc->new_config = intel_crtc->config;
+ } else {
+ kfree(crtc_state_copy);
+ }
+
kfree(saved_mode);
return ret;
}
@@ -11227,27 +11660,81 @@
static int intel_set_mode(struct drm_crtc *crtc,
struct drm_display_mode *mode,
- int x, int y, struct drm_framebuffer *fb)
+ int x, int y, struct drm_framebuffer *fb,
+ struct drm_atomic_state *state)
{
struct intel_crtc_state *pipe_config;
unsigned modeset_pipes, prepare_pipes, disable_pipes;
+ int ret = 0;
- pipe_config = intel_modeset_compute_config(crtc, mode, fb,
+ pipe_config = intel_modeset_compute_config(crtc, mode, fb, state,
&modeset_pipes,
&prepare_pipes,
&disable_pipes);
- if (IS_ERR(pipe_config))
- return PTR_ERR(pipe_config);
+ if (IS_ERR(pipe_config)) {
+ ret = PTR_ERR(pipe_config);
+ goto out;
+ }
- return intel_set_mode_pipes(crtc, mode, x, y, fb, pipe_config,
- modeset_pipes, prepare_pipes,
- disable_pipes);
+ ret = intel_set_mode_pipes(crtc, mode, x, y, fb, pipe_config,
+ modeset_pipes, prepare_pipes,
+ disable_pipes);
+ if (ret)
+ goto out;
+
+out:
+ return ret;
}
void intel_crtc_restore_mode(struct drm_crtc *crtc)
{
- intel_set_mode(crtc, &crtc->mode, crtc->x, crtc->y, crtc->primary->fb);
+ struct drm_device *dev = crtc->dev;
+ struct drm_atomic_state *state;
+ struct intel_encoder *encoder;
+ struct intel_connector *connector;
+ struct drm_connector_state *connector_state;
+
+ state = drm_atomic_state_alloc(dev);
+ if (!state) {
+ DRM_DEBUG_KMS("[CRTC:%d] mode restore failed, out of memory",
+ crtc->base.id);
+ return;
+ }
+
+ state->acquire_ctx = dev->mode_config.acquire_ctx;
+
+ /* The force restore path in the HW readout code relies on the staged
+ * config still keeping the user requested config while the actual
+ * state has been overwritten by the configuration read from HW. We
+ * need to copy the staged config to the atomic state, otherwise the
+ * mode set will just reapply the state the HW is already in. */
+ for_each_intel_encoder(dev, encoder) {
+ if (&encoder->new_crtc->base != crtc)
+ continue;
+
+ for_each_intel_connector(dev, connector) {
+ if (connector->new_encoder != encoder)
+ continue;
+
+ connector_state = drm_atomic_get_connector_state(state, &connector->base);
+ if (IS_ERR(connector_state)) {
+ DRM_DEBUG_KMS("Failed to add [CONNECTOR:%d:%s] to state: %ld\n",
+ connector->base.base.id,
+ connector->base.name,
+ PTR_ERR(connector_state));
+ continue;
+ }
+
+ connector_state->crtc = crtc;
+ connector_state->best_encoder = &encoder->base;
+ }
+ }
+
+ intel_set_mode(crtc, &crtc->mode, crtc->x, crtc->y, crtc->primary->fb,
+ state);
+
+ drm_atomic_state_free(state);
}
#undef for_each_intel_crtc_masked
@@ -11295,7 +11782,7 @@
*/
count = 0;
for_each_crtc(dev, crtc) {
- config->save_crtc_enabled[count++] = crtc->enabled;
+ config->save_crtc_enabled[count++] = crtc->state->enable;
}
count = 0;
@@ -11336,7 +11823,7 @@
}
count = 0;
- list_for_each_entry(connector, &dev->mode_config.connector_list, base.head) {
+ for_each_intel_connector(dev, connector) {
connector->new_encoder =
to_intel_encoder(config->save_connector_encoders[count++]);
}
@@ -11416,9 +11903,11 @@
static int
intel_modeset_stage_output_state(struct drm_device *dev,
struct drm_mode_set *set,
- struct intel_set_config *config)
+ struct intel_set_config *config,
+ struct drm_atomic_state *state)
{
struct intel_connector *connector;
+ struct drm_connector_state *connector_state;
struct intel_encoder *encoder;
struct intel_crtc *crtc;
int ro;
@@ -11428,8 +11917,7 @@
WARN_ON(!set->fb && (set->num_connectors != 0));
WARN_ON(set->fb && (set->num_connectors == 0));
- list_for_each_entry(connector, &dev->mode_config.connector_list,
- base.head) {
+ for_each_intel_connector(dev, connector) {
/* Otherwise traverse passed in connector list and get encoders
* for them. */
for (ro = 0; ro < set->num_connectors; ro++) {
@@ -11454,15 +11942,16 @@
if (&connector->new_encoder->base != connector->base.encoder) {
- DRM_DEBUG_KMS("encoder changed, full mode switch\n");
+ DRM_DEBUG_KMS("[CONNECTOR:%d:%s] encoder changed, full mode switch\n",
+ connector->base.base.id,
+ connector->base.name);
config->mode_changed = true;
}
}
/* connector->new_encoder is now updated for all connectors. */
/* Update crtc of enabled connectors. */
- list_for_each_entry(connector, &dev->mode_config.connector_list,
- base.head) {
+ for_each_intel_connector(dev, connector) {
struct drm_crtc *new_crtc;
if (!connector->new_encoder)
@@ -11482,6 +11971,14 @@
}
connector->new_encoder->new_crtc = to_intel_crtc(new_crtc);
+ connector_state =
+ drm_atomic_get_connector_state(state, &connector->base);
+ if (IS_ERR(connector_state))
+ return PTR_ERR(connector_state);
+
+ connector_state->crtc = new_crtc;
+ connector_state->best_encoder = &connector->new_encoder->base;
+
DRM_DEBUG_KMS("[CONNECTOR:%d:%s] to [CRTC:%d]\n",
connector->base.base.id,
connector->base.name,
@@ -11491,9 +11988,7 @@
/* Check for any encoders that needs to be disabled. */
for_each_intel_encoder(dev, encoder) {
int num_connectors = 0;
- list_for_each_entry(connector,
- &dev->mode_config.connector_list,
- base.head) {
+ for_each_intel_connector(dev, connector) {
if (connector->new_encoder == encoder) {
WARN_ON(!connector->new_encoder->new_crtc);
num_connectors++;
@@ -11508,16 +12003,25 @@
/* Only now check for crtc changes so we don't miss encoders
* that will be disabled. */
if (&encoder->new_crtc->base != encoder->base.crtc) {
- DRM_DEBUG_KMS("crtc changed, full mode switch\n");
+ DRM_DEBUG_KMS("[ENCODER:%d:%s] crtc changed, full mode switch\n",
+ encoder->base.base.id,
+ encoder->base.name);
config->mode_changed = true;
}
}
/* Now we've also updated encoder->new_crtc for all encoders. */
- list_for_each_entry(connector, &dev->mode_config.connector_list,
- base.head) {
- if (connector->new_encoder)
+ for_each_intel_connector(dev, connector) {
+ connector_state =
+ drm_atomic_get_connector_state(state, &connector->base);
+ if (IS_ERR(connector_state))
+ return PTR_ERR(connector_state);
+
+ if (connector->new_encoder) {
if (connector->new_encoder != connector->encoder)
connector->encoder = connector->new_encoder;
+ } else {
+ connector_state->crtc = NULL;
+ }
}
for_each_intel_crtc(dev, crtc) {
crtc->new_enabled = false;
@@ -11529,8 +12033,9 @@
}
}
- if (crtc->new_enabled != crtc->base.enabled) {
- DRM_DEBUG_KMS("crtc %sabled, full mode switch\n",
+ if (crtc->new_enabled != crtc->base.state->enable) {
+ DRM_DEBUG_KMS("[CRTC:%d] %sabled, full mode switch\n",
+ crtc->base.base.id,
crtc->new_enabled ? "en" : "dis");
config->mode_changed = true;
}
@@ -11553,7 +12058,7 @@
DRM_DEBUG_KMS("Trying to restore without FB -> disabling pipe %c\n",
pipe_name(crtc->pipe));
- list_for_each_entry(connector, &dev->mode_config.connector_list, base.head) {
+ for_each_intel_connector(dev, connector) {
if (connector->new_encoder &&
connector->new_encoder->new_crtc == crtc)
connector->new_encoder = NULL;
@@ -11572,6 +12077,7 @@
{
struct drm_device *dev;
struct drm_mode_set save_set;
+ struct drm_atomic_state *state = NULL;
struct intel_set_config *config;
struct intel_crtc_state *pipe_config;
unsigned modeset_pipes, prepare_pipes, disable_pipes;
@@ -11616,12 +12122,20 @@
* such cases. */
intel_set_config_compute_mode_changes(set, config);
- ret = intel_modeset_stage_output_state(dev, set, config);
+ state = drm_atomic_state_alloc(dev);
+ if (!state) {
+ ret = -ENOMEM;
+ goto out_config;
+ }
+
+ state->acquire_ctx = dev->mode_config.acquire_ctx;
+
+ ret = intel_modeset_stage_output_state(dev, set, config, state);
if (ret)
goto fail;
pipe_config = intel_modeset_compute_config(set->crtc, set->mode,
- set->fb,
+ set->fb, state,
&modeset_pipes,
&prepare_pipes,
&disable_pipes);
@@ -11641,10 +12155,6 @@
*/
}
- /* set_mode will free it in the mode_changed case */
- if (!config->mode_changed)
- kfree(pipe_config);
-
intel_update_pipe_size(to_intel_crtc(set->crtc));
if (config->mode_changed) {
@@ -11690,6 +12200,8 @@
fail:
intel_set_config_restore_state(dev, config);
+ drm_atomic_state_clear(state);
+
/*
* HACK: if the pipe was on, but we didn't have a framebuffer,
* force the pipe off to avoid oopsing in the modeset code
@@ -11702,11 +12214,15 @@
/* Try to restore the config */
if (config->mode_changed &&
intel_set_mode(save_set.crtc, save_set.mode,
- save_set.x, save_set.y, save_set.fb))
+ save_set.x, save_set.y, save_set.fb,
+ state))
DRM_ERROR("failed to restore config after modeset failure\n");
}
out_config:
+ if (state)
+ drm_atomic_state_free(state);
+
intel_set_config_free(config);
return ret;
}
@@ -11821,6 +12337,28 @@
}
/**
+ * intel_wm_need_update - Check whether watermarks need updating
+ * @plane: drm plane
+ * @state: new plane state
+ *
+ * Check current plane state versus the new one to determine whether
+ * watermarks need to be recalculated.
+ *
+ * Returns true if the watermarks need to be recalculated, false otherwise.
+ */
+bool intel_wm_need_update(struct drm_plane *plane,
+ struct drm_plane_state *state)
+{
+ /* Update watermarks on tiling changes. */
+ if (!plane->state->fb || !state->fb ||
+ plane->state->fb->modifier[0] != state->fb->modifier[0] ||
+ plane->state->rotation != state->rotation)
+ return true;
+
+ return false;
+}
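/*
 * Illustrative note (not part of the diff): the check above is deliberately
 * coarse - a missing framebuffer on either side, a change of the tiling
 * modifier, or a change of rotation all force a watermark recalculation,
 * while planes whose state is unchanged in those respects skip it.
 */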
+
+/**
* intel_prepare_plane_fb - Prepare fb for usage on plane
* @plane: drm plane to prepare for
* @fb: framebuffer to prepare for presentation
@@ -11834,7 +12372,8 @@
*/
int
intel_prepare_plane_fb(struct drm_plane *plane,
- struct drm_framebuffer *fb)
+ struct drm_framebuffer *fb,
+ const struct drm_plane_state *new_state)
{
struct drm_device *dev = plane->dev;
struct intel_plane *intel_plane = to_intel_plane(plane);
@@ -11868,7 +12407,7 @@
if (ret)
DRM_DEBUG_KMS("failed to attach phys object\n");
} else {
- ret = intel_pin_and_fence_fb_obj(plane, fb, NULL);
+ ret = intel_pin_and_fence_fb_obj(plane, fb, new_state, NULL);
}
if (ret == 0)
@@ -11888,7 +12427,8 @@
*/
void
intel_cleanup_plane_fb(struct drm_plane *plane,
- struct drm_framebuffer *fb)
+ struct drm_framebuffer *fb,
+ const struct drm_plane_state *old_state)
{
struct drm_device *dev = plane->dev;
struct drm_i915_gem_object *obj = intel_fb_obj(fb);
@@ -11899,7 +12439,7 @@
if (plane->type != DRM_PLANE_TYPE_CURSOR ||
!INTEL_INFO(dev)->cursor_needs_physical) {
mutex_lock(&dev->struct_mutex);
- intel_unpin_fb_obj(obj);
+ intel_unpin_fb_obj(fb, old_state);
mutex_unlock(&dev->struct_mutex);
}
}
@@ -11944,7 +12484,7 @@
*/
if (intel_crtc->primary_enabled &&
INTEL_INFO(dev)->gen <= 4 && !IS_G4X(dev) &&
- dev_priv->fbc.plane == intel_crtc->plane &&
+ dev_priv->fbc.crtc == intel_crtc &&
state->base.rotation != BIT(DRM_ROTATE_0)) {
intel_crtc->atomic.disable_fbc = true;
}
@@ -11963,6 +12503,9 @@
INTEL_FRONTBUFFER_PRIMARY(intel_crtc->pipe);
intel_crtc->atomic.update_fbc = true;
+
+ if (intel_wm_need_update(plane, &state->base))
+ intel_crtc->atomic.update_wm = true;
}
return 0;
@@ -11977,8 +12520,6 @@
struct drm_device *dev = plane->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc;
- struct drm_i915_gem_object *obj = intel_fb_obj(fb);
- struct intel_plane *intel_plane = to_intel_plane(plane);
struct drm_rect *src = &state->src;
crtc = crtc ? crtc : plane->crtc;
@@ -11988,8 +12529,6 @@
crtc->x = src->x1 >> 16;
crtc->y = src->y1 >> 16;
- intel_plane->obj = obj;
-
if (intel_crtc->active) {
if (state->visible) {
/* FIXME: kill this fastboot hack */
@@ -12229,17 +12768,14 @@
return -ENOMEM;
}
- /* we only need to pin inside GTT if cursor is non-phy */
- mutex_lock(&dev->struct_mutex);
- if (!INTEL_INFO(dev)->cursor_needs_physical && obj->tiling_mode) {
+ if (fb->modifier[0] != DRM_FORMAT_MOD_NONE) {
DRM_DEBUG_KMS("cursor cannot be tiled\n");
ret = -EINVAL;
}
- mutex_unlock(&dev->struct_mutex);
finish:
if (intel_crtc->active) {
- if (intel_crtc->cursor_width != state->base.crtc_w)
+ if (plane->state->crtc_w != state->base.crtc_w)
intel_crtc->atomic.update_wm = true;
intel_crtc->atomic.fb_bits |=
@@ -12256,7 +12792,6 @@
struct drm_crtc *crtc = state->base.crtc;
struct drm_device *dev = plane->dev;
struct intel_crtc *intel_crtc;
- struct intel_plane *intel_plane = to_intel_plane(plane);
struct drm_i915_gem_object *obj = intel_fb_obj(state->base.fb);
uint32_t addr;
@@ -12267,8 +12802,6 @@
crtc->cursor_x = state->base.crtc_x;
crtc->cursor_y = state->base.crtc_y;
- intel_plane->obj = obj;
-
if (intel_crtc->cursor_bo == obj)
goto update;
@@ -12282,8 +12815,6 @@
intel_crtc->cursor_addr = addr;
intel_crtc->cursor_bo = obj;
update:
- intel_crtc->cursor_width = state->base.crtc_w;
- intel_crtc->cursor_height = state->base.crtc_h;
if (intel_crtc->active)
intel_crtc_update_cursor(crtc, state->visible);
@@ -12353,6 +12884,7 @@
if (!crtc_state)
goto fail;
intel_crtc_set_state(intel_crtc, crtc_state);
+ crtc_state->base.crtc = &intel_crtc->base;
primary = intel_primary_plane_create(dev, pipe);
if (!primary)
@@ -12430,9 +12962,6 @@
struct drm_crtc *drmmode_crtc;
struct intel_crtc *crtc;
- if (!drm_core_check_feature(dev, DRIVER_MODESET))
- return -ENODEV;
-
drmmode_crtc = drm_crtc_find(dev, pipe_from_crtc_id->crtc_id);
if (!drmmode_crtc) {
@@ -12502,7 +13031,6 @@
{
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_encoder *encoder;
- struct drm_connector *connector;
bool dpd_is_edp = false;
intel_lvds_init(dev);
@@ -12513,10 +13041,15 @@
if (HAS_DDI(dev)) {
int found;
- /* Haswell uses DDI functions to detect digital outputs */
+ /*
+ * Haswell uses DDI functions to detect digital outputs.
+ * On SKL pre-D0 the strap isn't connected, so we assume
+ * it's there.
+ */
found = I915_READ(DDI_BUF_CTL_A) & DDI_INIT_DISPLAY_DETECTED;
- /* DDI A only supports eDP */
- if (found)
+ /* WaIgnoreDDIAStrap: skl */
+ if (found ||
+ (IS_SKYLAKE(dev) && INTEL_REVID(dev) < SKL_REVID_D0))
intel_ddi_init(dev, PORT_A);
/* DDI B, C and D detection is indicated by the SFUSE_STRAP
@@ -12633,37 +13166,6 @@
if (SUPPORTS_TV(dev))
intel_tv_init(dev);
- /*
- * FIXME: We don't have full atomic support yet, but we want to be
- * able to enable/test plane updates via the atomic interface in the
- * meantime. However as soon as we flip DRIVER_ATOMIC on, the DRM core
- * will take some atomic codepaths to lookup properties during
- * drmModeGetConnector() that unconditionally dereference
- * connector->state.
- *
- * We create a dummy connector state here for each connector to ensure
- * the DRM core doesn't try to dereference a NULL connector->state.
- * The actual connector properties will never be updated or contain
- * useful information, but since we're doing this specifically for
- * testing/debug of the plane operations (and only when a specific
- * kernel module option is given), that shouldn't really matter.
- *
- * Once atomic support for crtc's + connectors lands, this loop should
- * be removed since we'll be setting up real connector state, which
- * will contain Intel-specific properties.
- */
- if (drm_core_check_feature(dev, DRIVER_ATOMIC)) {
- list_for_each_entry(connector,
- &dev->mode_config.connector_list,
- head) {
- if (!WARN_ON(connector->state)) {
- connector->state =
- kzalloc(sizeof(*connector->state),
- GFP_KERNEL);
- }
- }
- }
-
intel_psr_init(dev);
for_each_intel_encoder(dev, encoder) {
@@ -12705,52 +13207,100 @@
.create_handle = intel_user_framebuffer_create_handle,
};
+static
+u32 intel_fb_pitch_limit(struct drm_device *dev, uint64_t fb_modifier,
+ uint32_t pixel_format)
+{
+ u32 gen = INTEL_INFO(dev)->gen;
+
+ if (gen >= 9) {
+ /* "The stride in bytes must not exceed the of the size of 8K
+ * pixels and 32K bytes."
+ */
+ return min(8192*drm_format_plane_cpp(pixel_format, 0), 32768);
+ } else if (gen >= 5 && !IS_VALLEYVIEW(dev)) {
+ return 32*1024;
+ } else if (gen >= 4) {
+ if (fb_modifier == I915_FORMAT_MOD_X_TILED)
+ return 16*1024;
+ else
+ return 32*1024;
+ } else if (gen >= 3) {
+ if (fb_modifier == I915_FORMAT_MOD_X_TILED)
+ return 8*1024;
+ else
+ return 16*1024;
+ } else {
+ /* XXX DSPC is limited to 4k tiled */
+ return 8*1024;
+ }
+}
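/*
 * Worked example (not part of the diff): on gen9 with an XRGB8888
 * framebuffer, drm_format_plane_cpp() returns 4 bytes per pixel, so the
 * limit is min(8192 * 4, 32768) = 32768 bytes; for an 8 bpp C8 framebuffer
 * it is min(8192 * 1, 32768) = 8192 bytes, i.e. the 8K-pixel clause is the
 * one that bites for narrow formats.
 */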
+
static int intel_framebuffer_init(struct drm_device *dev,
struct intel_framebuffer *intel_fb,
struct drm_mode_fb_cmd2 *mode_cmd,
struct drm_i915_gem_object *obj)
{
- int aligned_height;
- int pitch_limit;
+ unsigned int aligned_height;
int ret;
+ u32 pitch_limit, stride_alignment;
WARN_ON(!mutex_is_locked(&dev->struct_mutex));
- if (obj->tiling_mode == I915_TILING_Y) {
- DRM_DEBUG("hardware does not support tiling Y\n");
+ if (mode_cmd->flags & DRM_MODE_FB_MODIFIERS) {
+ /* Enforce that fb modifier and tiling mode match, but only for
+ * X-tiled. This is needed for FBC. */
+ if (!!(obj->tiling_mode == I915_TILING_X) !=
+ !!(mode_cmd->modifier[0] == I915_FORMAT_MOD_X_TILED)) {
+ DRM_DEBUG("tiling_mode doesn't match fb modifier\n");
+ return -EINVAL;
+ }
+ } else {
+ if (obj->tiling_mode == I915_TILING_X)
+ mode_cmd->modifier[0] = I915_FORMAT_MOD_X_TILED;
+ else if (obj->tiling_mode == I915_TILING_Y) {
+ DRM_DEBUG("No Y tiling for legacy addfb\n");
+ return -EINVAL;
+ }
+ }
+
+ /* Passed in modifier sanity checking. */
+ switch (mode_cmd->modifier[0]) {
+ case I915_FORMAT_MOD_Y_TILED:
+ case I915_FORMAT_MOD_Yf_TILED:
+ if (INTEL_INFO(dev)->gen < 9) {
+ DRM_DEBUG("Unsupported tiling 0x%llx!\n",
+ mode_cmd->modifier[0]);
+ return -EINVAL;
+ }
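/* Editorial note: the gen9+ check above intentionally falls through to the
 * supported-modifier cases below, since Y/Yf tiling is valid there. */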
+ case DRM_FORMAT_MOD_NONE:
+ case I915_FORMAT_MOD_X_TILED:
+ break;
+ default:
+ DRM_DEBUG("Unsupported fb modifier 0x%llx!\n",
+ mode_cmd->modifier[0]);
return -EINVAL;
}
- if (mode_cmd->pitches[0] & 63) {
- DRM_DEBUG("pitch (%d) must be at least 64 byte aligned\n",
- mode_cmd->pitches[0]);
+ stride_alignment = intel_fb_stride_alignment(dev, mode_cmd->modifier[0],
+ mode_cmd->pixel_format);
+ if (mode_cmd->pitches[0] & (stride_alignment - 1)) {
+ DRM_DEBUG("pitch (%d) must be at least %u byte aligned\n",
+ mode_cmd->pitches[0], stride_alignment);
return -EINVAL;
}
- if (INTEL_INFO(dev)->gen >= 5 && !IS_VALLEYVIEW(dev)) {
- pitch_limit = 32*1024;
- } else if (INTEL_INFO(dev)->gen >= 4) {
- if (obj->tiling_mode)
- pitch_limit = 16*1024;
- else
- pitch_limit = 32*1024;
- } else if (INTEL_INFO(dev)->gen >= 3) {
- if (obj->tiling_mode)
- pitch_limit = 8*1024;
- else
- pitch_limit = 16*1024;
- } else
- /* XXX DSPC is limited to 4k tiled */
- pitch_limit = 8*1024;
-
+ pitch_limit = intel_fb_pitch_limit(dev, mode_cmd->modifier[0],
+ mode_cmd->pixel_format);
if (mode_cmd->pitches[0] > pitch_limit) {
- DRM_DEBUG("%s pitch (%d) must be at less than %d\n",
- obj->tiling_mode ? "tiled" : "linear",
+ DRM_DEBUG("%s pitch (%u) must be at less than %d\n",
+ mode_cmd->modifier[0] != DRM_FORMAT_MOD_NONE ?
+ "tiled" : "linear",
mode_cmd->pitches[0], pitch_limit);
return -EINVAL;
}
- if (obj->tiling_mode != I915_TILING_NONE &&
+ if (mode_cmd->modifier[0] == I915_FORMAT_MOD_X_TILED &&
mode_cmd->pitches[0] != obj->stride) {
DRM_DEBUG("pitch (%d) must match tiling stride (%d)\n",
mode_cmd->pitches[0], obj->stride);
@@ -12805,7 +13355,8 @@
return -EINVAL;
aligned_height = intel_fb_align_height(dev, mode_cmd->height,
- obj->tiling_mode);
+ mode_cmd->pixel_format,
+ mode_cmd->modifier[0]);
/* FIXME drm helper for size checks (especially planar formats)? */
if (obj->base.size < aligned_height * mode_cmd->pitches[0])
return -EINVAL;
@@ -12958,8 +13509,6 @@
} else if (IS_IVYBRIDGE(dev)) {
/* FIXME: detect B0+ stepping and use auto training */
dev_priv->display.fdi_link_train = ivb_manual_fdi_link_train;
- dev_priv->display.modeset_global_resources =
- ivb_modeset_global_resources;
} else if (IS_HASWELL(dev) || IS_BROADWELL(dev)) {
dev_priv->display.fdi_link_train = hsw_fdi_link_train;
} else if (IS_VALLEYVIEW(dev)) {
@@ -12967,9 +13516,6 @@
valleyview_modeset_global_resources;
}
- /* Default just returns -ENODEV to indicate unsupported */
- dev_priv->display.queue_flip = intel_default_queue_flip;
-
switch (INTEL_INFO(dev)->gen) {
case 2:
dev_priv->display.queue_flip = intel_gen2_queue_flip;
@@ -12992,8 +13538,10 @@
dev_priv->display.queue_flip = intel_gen7_queue_flip;
break;
case 9:
- dev_priv->display.queue_flip = intel_gen9_queue_flip;
- break;
+ /* Drop through - unsupported since execlist only. */
+ default:
+ /* Default just returns -ENODEV to indicate unsupported */
+ dev_priv->display.queue_flip = intel_default_queue_flip;
}
intel_panel_init_backlight_funcs(dev);
@@ -13212,6 +13760,8 @@
dev->mode_config.preferred_depth = 24;
dev->mode_config.prefer_shadow = 1;
+ dev->mode_config.allow_fb_modifiers = true;
+
dev->mode_config.funcs = &intel_mode_funcs;
intel_init_quirks(dev);
@@ -13254,7 +13804,7 @@
for_each_pipe(dev_priv, pipe) {
intel_crtc_init(dev, pipe);
- for_each_sprite(pipe, sprite) {
+ for_each_sprite(dev_priv, pipe, sprite) {
ret = intel_plane_init(dev, pipe, sprite);
if (ret)
DRM_DEBUG_KMS("pipe %c sprite %c init failed: %d\n",
@@ -13295,7 +13845,7 @@
* If the fb is shared between multiple heads, we'll
* just get the first one.
*/
- intel_find_plane_obj(crtc, &crtc->plane_config);
+ intel_find_initial_plane_obj(crtc, &crtc->plane_config);
}
}
}
@@ -13310,9 +13860,7 @@
/* We can't just switch on the pipe A, we need to set things up with a
* proper mode and output configuration. As a gross hack, enable pipe A
* by enabling the load detect pipe once. */
- list_for_each_entry(connector,
- &dev->mode_config.connector_list,
- base.head) {
+ for_each_intel_connector(dev, connector) {
if (connector->encoder->type == INTEL_OUTPUT_ANALOG) {
crt = &connector->base;
break;
@@ -13323,7 +13871,7 @@
return;
if (intel_get_load_detect_pipe(crt, NULL, &load_detect_temp, ctx))
- intel_release_load_detect_pipe(crt, &load_detect_temp);
+ intel_release_load_detect_pipe(crt, &load_detect_temp, ctx);
}
static bool
@@ -13357,11 +13905,11 @@
I915_WRITE(reg, I915_READ(reg) & ~PIPECONF_FRAME_START_DELAY_MASK);
/* restore vblank interrupts to correct state */
+ drm_crtc_vblank_reset(&crtc->base);
if (crtc->active) {
update_scanline_offset(crtc);
- drm_vblank_on(dev, crtc->pipe);
- } else
- drm_vblank_off(dev, crtc->pipe);
+ drm_crtc_vblank_on(&crtc->base);
+ }
/* We need to sanitize the plane -> pipe mapping first because this will
* disable the crtc (and hence change the state) if it is wrong. Note
@@ -13383,8 +13931,7 @@
crtc->plane = plane;
/* ... and break all links. */
- list_for_each_entry(connector, &dev->mode_config.connector_list,
- base.head) {
+ for_each_intel_connector(dev, connector) {
if (connector->encoder->base.crtc != &crtc->base)
continue;
@@ -13393,14 +13940,14 @@
}
/* multiple connectors may have the same encoder:
* handle them and break crtc link separately */
- list_for_each_entry(connector, &dev->mode_config.connector_list,
- base.head)
+ for_each_intel_connector(dev, connector)
if (connector->encoder->base.crtc == &crtc->base) {
connector->encoder->base.crtc = NULL;
connector->encoder->connectors_active = false;
}
WARN_ON(crtc->active);
+ crtc->base.state->enable = false;
crtc->base.enabled = false;
}
@@ -13417,7 +13964,7 @@
* have active connectors/encoders. */
intel_crtc_update_dpms(&crtc->base);
- if (crtc->active != crtc->base.enabled) {
+ if (crtc->active != crtc->base.state->enable) {
struct intel_encoder *encoder;
/* This can happen either due to bugs in the get_hw_state
@@ -13425,9 +13972,10 @@
* pipe A quirk. */
DRM_DEBUG_KMS("[CRTC:%d] hw state adjusted, was %s, now %s\n",
crtc->base.base.id,
- crtc->base.enabled ? "enabled" : "disabled",
+ crtc->base.state->enable ? "enabled" : "disabled",
crtc->active ? "enabled" : "disabled");
+ crtc->base.state->enable = crtc->active;
crtc->base.enabled = crtc->active;
/* Because we only establish the connector -> encoder ->
@@ -13496,9 +14044,7 @@
* a bug in one of the get_hw_state functions. Or someplace else
* in our code, like the register restore mess on resume. Clamp
* things to off as a safer default. */
- list_for_each_entry(connector,
- &dev->mode_config.connector_list,
- base.head) {
+ for_each_intel_connector(dev, connector) {
if (connector->encoder != encoder)
continue;
connector->base.dpms = DRM_MODE_DPMS_OFF;
@@ -13564,6 +14110,7 @@
crtc->active = dev_priv->display.get_pipe_config(crtc,
crtc->config);
+ crtc->base.state->enable = crtc->active;
crtc->base.enabled = crtc->active;
crtc->primary_enabled = primary_get_hw_state(crtc);
@@ -13612,8 +14159,7 @@
pipe_name(pipe));
}
- list_for_each_entry(connector, &dev->mode_config.connector_list,
- base.head) {
+ for_each_intel_connector(dev, connector) {
if (connector->get_hw_state(connector)) {
connector->base.dpms = DRM_MODE_DPMS_ON;
connector->encoder->connectors_active = true;
@@ -13669,6 +14215,8 @@
"[setup_hw_state]");
}
+ intel_modeset_update_connector_atomic_state(dev);
+
for (i = 0; i < dev_priv->num_shared_dpll; i++) {
struct intel_shared_dpll *pll = &dev_priv->shared_dplls[i];
@@ -13697,8 +14245,7 @@
struct drm_crtc *crtc =
dev_priv->pipe_to_crtc_mapping[pipe];
- intel_set_mode(crtc, &crtc->mode, crtc->x, crtc->y,
- crtc->primary->fb);
+ intel_crtc_restore_mode(crtc);
}
} else {
intel_modeset_update_staged_output_state(dev);
@@ -13712,6 +14259,7 @@
struct drm_i915_private *dev_priv = dev->dev_private;
struct drm_crtc *c;
struct drm_i915_gem_object *obj;
+ int ret;
mutex_lock(&dev->struct_mutex);
intel_init_gt_powersave(dev);
@@ -13736,15 +14284,18 @@
* pinned & fenced. When we do the allocation it's too early
* for this.
*/
- mutex_lock(&dev->struct_mutex);
for_each_crtc(dev, c) {
obj = intel_fb_obj(c->primary->fb);
if (obj == NULL)
continue;
- if (intel_pin_and_fence_fb_obj(c->primary,
- c->primary->fb,
- NULL)) {
+ mutex_lock(&dev->struct_mutex);
+ ret = intel_pin_and_fence_fb_obj(c->primary,
+ c->primary->fb,
+ c->primary->state,
+ NULL);
+ mutex_unlock(&dev->struct_mutex);
+ if (ret) {
DRM_ERROR("failed to pin boot fb on pipe %d\n",
to_intel_crtc(c)->pipe);
drm_framebuffer_unreference(c->primary->fb);
@@ -13752,7 +14303,6 @@
update_state_fb(c->primary);
}
}
- mutex_unlock(&dev->struct_mutex);
intel_backlight_register(dev);
}
@@ -13793,8 +14343,6 @@
intel_fbc_disable(dev);
- ironlake_teardown_rc6(dev);
-
mutex_unlock(&dev->struct_mutex);
/* flush any delayed tasks or pending work */
diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
index a74aaf9..d023710 100644
--- a/drivers/gpu/drm/i915/intel_dp.c
+++ b/drivers/gpu/drm/i915/intel_dp.c
@@ -84,6 +84,13 @@
{ DP_LINK_BW_5_4, /* m2_int = 27, m2_fraction = 0 */
{ .p1 = 2, .p2 = 1, .n = 1, .m1 = 2, .m2 = 0x6c00000 } }
};
+/* Skylake supports the following rates */
+static const int gen9_rates[] = { 162000, 216000, 270000,
+ 324000, 432000, 540000 };
+static const int chv_rates[] = { 162000, 202500, 210000, 216000,
+ 243000, 270000, 324000, 405000,
+ 420000, 432000, 540000 };
+static const int default_rates[] = { 162000, 270000, 540000 };
/**
* is_edp - is the given port attached to an eDP panel (either CPU or PCH)
@@ -118,23 +125,15 @@
static void vlv_steal_power_sequencer(struct drm_device *dev,
enum pipe pipe);
-int
-intel_dp_max_link_bw(struct intel_dp *intel_dp)
+static int
+intel_dp_max_link_bw(struct intel_dp *intel_dp)
{
int max_link_bw = intel_dp->dpcd[DP_MAX_LINK_RATE];
- struct drm_device *dev = intel_dp->attached_connector->base.dev;
switch (max_link_bw) {
case DP_LINK_BW_1_62:
case DP_LINK_BW_2_7:
- break;
- case DP_LINK_BW_5_4: /* 1.2 capable displays may advertise higher bw */
- if (((IS_HASWELL(dev) && !IS_HSW_ULX(dev)) ||
- INTEL_INFO(dev)->gen >= 8) &&
- intel_dp->dpcd[DP_DPCD_REV] >= 0x12)
- max_link_bw = DP_LINK_BW_5_4;
- else
- max_link_bw = DP_LINK_BW_2_7;
+ case DP_LINK_BW_5_4:
break;
default:
WARN(1, "invalid max DP link bw val %x, using 1.62Gbps\n",
@@ -210,7 +209,7 @@
target_clock = fixed_mode->clock;
}
- max_link_clock = drm_dp_bw_code_to_link_rate(intel_dp_max_link_bw(intel_dp));
+ max_link_clock = intel_dp_max_link_rate(intel_dp);
max_lanes = intel_dp_max_lane_count(intel_dp);
max_rate = intel_dp_max_data_rate(max_link_clock, max_lanes);
@@ -240,7 +239,7 @@
return v;
}
-void intel_dp_unpack_aux(uint32_t src, uint8_t *dst, int dst_bytes)
+static void intel_dp_unpack_aux(uint32_t src, uint8_t *dst, int dst_bytes)
{
int i;
if (dst_bytes > 4)
@@ -943,8 +942,9 @@
size_t txsize, rxsize;
int ret;
- txbuf[0] = msg->request << 4;
- txbuf[1] = msg->address >> 8;
+ txbuf[0] = (msg->request << 4) |
+ ((msg->address >> 16) & 0xf);
+ txbuf[1] = (msg->address >> 8) & 0xff;
txbuf[2] = msg->address & 0xff;
txbuf[3] = msg->size - 1;
@@ -952,7 +952,7 @@
case DP_AUX_NATIVE_WRITE:
case DP_AUX_I2C_WRITE:
txsize = msg->size ? HEADER_SIZE + msg->size : BARE_ADDRESS_SIZE;
- rxsize = 1;
+ rxsize = 2; /* 0 or 1 data bytes */
if (WARN_ON(txsize > 20))
return -E2BIG;
@@ -963,8 +963,13 @@
if (ret > 0) {
msg->reply = rxbuf[0] >> 4;
- /* Return payload size. */
- ret = msg->size;
+ if (ret > 1) {
+ /* Number of bytes written in a short write. */
+ ret = clamp_t(int, rxbuf[1], 0, msg->size);
+ } else {
+ /* Return payload size. */
+ ret = msg->size;
+ }
}
break;
@@ -1075,7 +1080,7 @@
}
static void
-skl_edp_set_pll_config(struct intel_crtc_state *pipe_config, int link_bw)
+skl_edp_set_pll_config(struct intel_crtc_state *pipe_config, int link_clock)
{
u32 ctrl1;
@@ -1084,19 +1089,35 @@
pipe_config->dpll_hw_state.cfgcr2 = 0;
ctrl1 = DPLL_CTRL1_OVERRIDE(SKL_DPLL0);
- switch (link_bw) {
- case DP_LINK_BW_1_62:
+ switch (link_clock / 2) {
+ case 81000:
ctrl1 |= DPLL_CRTL1_LINK_RATE(DPLL_CRTL1_LINK_RATE_810,
SKL_DPLL0);
break;
- case DP_LINK_BW_2_7:
+ case 135000:
ctrl1 |= DPLL_CRTL1_LINK_RATE(DPLL_CRTL1_LINK_RATE_1350,
SKL_DPLL0);
break;
- case DP_LINK_BW_5_4:
+ case 270000:
ctrl1 |= DPLL_CRTL1_LINK_RATE(DPLL_CRTL1_LINK_RATE_2700,
SKL_DPLL0);
break;
+ case 162000:
+ ctrl1 |= DPLL_CRTL1_LINK_RATE(DPLL_CRTL1_LINK_RATE_1620,
+ SKL_DPLL0);
+ break;
+ /* TBD: For DP link rates 2.16 GHz and 4.32 GHz, the VCO is 8640, which
+ * results in a CDCLK change. Need to handle the CDCLK change by
+ * disabling pipes and re-enabling them. */
+ case 108000:
+ ctrl1 |= DPLL_CRTL1_LINK_RATE(DPLL_CRTL1_LINK_RATE_1080,
+ SKL_DPLL0);
+ break;
+ case 216000:
+ ctrl1 |= DPLL_CRTL1_LINK_RATE(DPLL_CRTL1_LINK_RATE_2160,
+ SKL_DPLL0);
+ break;
+
}
pipe_config->dpll_hw_state.ctrl1 = ctrl1;
}
@@ -1117,6 +1138,42 @@
}
}
+static int
+intel_dp_sink_rates(struct intel_dp *intel_dp, const int **sink_rates)
+{
+ if (intel_dp->num_sink_rates) {
+ *sink_rates = intel_dp->sink_rates;
+ return intel_dp->num_sink_rates;
+ }
+
+ *sink_rates = default_rates;
+
+ return (intel_dp_max_link_bw(intel_dp) >> 3) + 1;
+}
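/*
 * Illustrative note (not part of the diff): the "(max_link_bw >> 3) + 1"
 * above relies on the DPCD link-bw codes - DP_LINK_BW_1_62 = 0x06,
 * DP_LINK_BW_2_7 = 0x0a and DP_LINK_BW_5_4 = 0x14 shift down to 0, 1 and 2
 * respectively, so the result is simply how many leading entries of
 * default_rates[] (162000, 270000, 540000 kHz) the sink supports.
 */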
+
+static int
+intel_dp_source_rates(struct drm_device *dev, const int **source_rates)
+{
+ if (INTEL_INFO(dev)->gen >= 9) {
+ *source_rates = gen9_rates;
+ return ARRAY_SIZE(gen9_rates);
+ } else if (IS_CHERRYVIEW(dev)) {
+ *source_rates = chv_rates;
+ return ARRAY_SIZE(chv_rates);
+ }
+
+ *source_rates = default_rates;
+
+ if (IS_SKYLAKE(dev) && INTEL_REVID(dev) <= SKL_REVID_B0)
+ /* WaDisableHBR2:skl */
+ return (DP_LINK_BW_2_7 >> 3) + 1;
+ else if (INTEL_INFO(dev)->gen >= 8 ||
+ (IS_HASWELL(dev) && !IS_HSW_ULX(dev)))
+ return (DP_LINK_BW_5_4 >> 3) + 1;
+ else
+ return (DP_LINK_BW_2_7 >> 3) + 1;
+}
+
static void
intel_dp_set_clock(struct intel_encoder *encoder,
struct intel_crtc_state *pipe_config, int link_bw)
@@ -1150,6 +1207,113 @@
}
}
+static int intersect_rates(const int *source_rates, int source_len,
+ const int *sink_rates, int sink_len,
+ int *common_rates)
+{
+ int i = 0, j = 0, k = 0;
+
+ while (i < source_len && j < sink_len) {
+ if (source_rates[i] == sink_rates[j]) {
+ if (WARN_ON(k >= DP_MAX_SUPPORTED_RATES))
+ return k;
+ common_rates[k] = source_rates[i];
+ ++k;
+ ++i;
+ ++j;
+ } else if (source_rates[i] < sink_rates[j]) {
+ ++i;
+ } else {
+ ++j;
+ }
+ }
+ return k;
+}
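/*
 * Illustrative sketch (not part of the diff): intersect_rates() above is a
 * plain two-pointer merge over two ascending lists. The standalone program
 * below demonstrates the same technique in userspace with a made-up sink
 * rate list; for these inputs it prints 162000 and 270000.
 */
#include <stdio.h>

static int intersect_sorted(const int *a, int a_len,
                            const int *b, int b_len, int *out)
{
        int i = 0, j = 0, k = 0;

        while (i < a_len && j < b_len) {
                if (a[i] == b[j]) {
                        out[k++] = a[i];        /* rate common to both lists */
                        i++;
                        j++;
                } else if (a[i] < b[j]) {
                        i++;                    /* advance the smaller side */
                } else {
                        j++;
                }
        }
        return k;                               /* number of common rates */
}

int main(void)
{
        const int source[] = { 162000, 270000, 540000 };  /* default_rates */
        const int sink[]   = { 162000, 216000, 270000 };  /* hypothetical sink list */
        int common[3];
        int n = intersect_sorted(source, 3, sink, 3, common);
        int i;

        for (i = 0; i < n; i++)
                printf("%d\n", common[i]);
        return 0;
}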
+
+static int intel_dp_common_rates(struct intel_dp *intel_dp,
+ int *common_rates)
+{
+ struct drm_device *dev = intel_dp_to_dev(intel_dp);
+ const int *source_rates, *sink_rates;
+ int source_len, sink_len;
+
+ sink_len = intel_dp_sink_rates(intel_dp, &sink_rates);
+ source_len = intel_dp_source_rates(dev, &source_rates);
+
+ return intersect_rates(source_rates, source_len,
+ sink_rates, sink_len,
+ common_rates);
+}
+
+static void snprintf_int_array(char *str, size_t len,
+ const int *array, int nelem)
+{
+ int i;
+
+ str[0] = '\0';
+
+ for (i = 0; i < nelem; i++) {
+ int r = snprintf(str, len, "%d,", array[i]);
+ if (r >= len)
+ return;
+ str += r;
+ len -= r;
+ }
+}
+
+static void intel_dp_print_rates(struct intel_dp *intel_dp)
+{
+ struct drm_device *dev = intel_dp_to_dev(intel_dp);
+ const int *source_rates, *sink_rates;
+ int source_len, sink_len, common_len;
+ int common_rates[DP_MAX_SUPPORTED_RATES];
+ char str[128]; /* FIXME: too big for stack? */
+
+ if ((drm_debug & DRM_UT_KMS) == 0)
+ return;
+
+ source_len = intel_dp_source_rates(dev, &source_rates);
+ snprintf_int_array(str, sizeof(str), source_rates, source_len);
+ DRM_DEBUG_KMS("source rates: %s\n", str);
+
+ sink_len = intel_dp_sink_rates(intel_dp, &sink_rates);
+ snprintf_int_array(str, sizeof(str), sink_rates, sink_len);
+ DRM_DEBUG_KMS("sink rates: %s\n", str);
+
+ common_len = intel_dp_common_rates(intel_dp, common_rates);
+ snprintf_int_array(str, sizeof(str), common_rates, common_len);
+ DRM_DEBUG_KMS("common rates: %s\n", str);
+}
+
+static int rate_to_index(int find, const int *rates)
+{
+ int i = 0;
+
+ for (i = 0; i < DP_MAX_SUPPORTED_RATES; ++i)
+ if (find == rates[i])
+ break;
+
+ return i;
+}
+
+int
+intel_dp_max_link_rate(struct intel_dp *intel_dp)
+{
+ int rates[DP_MAX_SUPPORTED_RATES] = {};
+ int len;
+
+ len = intel_dp_common_rates(intel_dp, rates);
+ if (WARN_ON(len <= 0))
+ return 162000;
+
+ return rates[rate_to_index(0, rates) - 1];
+}
+
+int intel_dp_rate_select(struct intel_dp *intel_dp, int rate)
+{
+ return rate_to_index(rate, intel_dp->sink_rates);
+}
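/*
 * Illustrative note (not part of the diff): rate_to_index() doubles as a
 * length finder. intel_dp_max_link_rate() zero-initialises rates[], fills
 * the first "len" entries from the common-rate intersection and then looks
 * up the first remaining 0; that index equals "len", so rates[index - 1]
 * is the highest rate both ends support. intel_dp_rate_select() uses the
 * same helper to map a rate back to the sink's DP_LINK_RATE_SET index.
 */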
+
bool
intel_dp_compute_config(struct intel_encoder *encoder,
struct intel_crtc_state *pipe_config)
@@ -1159,17 +1323,25 @@
struct drm_display_mode *adjusted_mode = &pipe_config->base.adjusted_mode;
struct intel_dp *intel_dp = enc_to_intel_dp(&encoder->base);
enum port port = dp_to_dig_port(intel_dp)->port;
- struct intel_crtc *intel_crtc = encoder->new_crtc;
+ struct intel_crtc *intel_crtc = to_intel_crtc(pipe_config->base.crtc);
struct intel_connector *intel_connector = intel_dp->attached_connector;
int lane_count, clock;
int min_lane_count = 1;
int max_lane_count = intel_dp_max_lane_count(intel_dp);
/* Conveniently, the link BW constants become indices with a shift...*/
int min_clock = 0;
- int max_clock = intel_dp_max_link_bw(intel_dp) >> 3;
+ int max_clock;
int bpp, mode_rate;
- static int bws[] = { DP_LINK_BW_1_62, DP_LINK_BW_2_7, DP_LINK_BW_5_4 };
int link_avail, link_clock;
+ int common_rates[DP_MAX_SUPPORTED_RATES] = {};
+ int common_len;
+
+ common_len = intel_dp_common_rates(intel_dp, common_rates);
+
+ /* No common link rates between source and sink */
+ WARN_ON(common_len <= 0);
+
+ max_clock = common_len - 1;
if (HAS_PCH_SPLIT(dev) && !HAS_DDI(dev) && port != PORT_A)
pipe_config->has_pch_encoder = true;
@@ -1193,8 +1365,8 @@
return false;
DRM_DEBUG_KMS("DP link computation with max lane count %i "
- "max bw %02x pixel clock %iKHz\n",
- max_lane_count, bws[max_clock],
+ "max bw %d pixel clock %iKHz\n",
+ max_lane_count, common_rates[max_clock],
adjusted_mode->crtc_clock);
/* Walk through all bpp values. Luckily they're all nicely spaced with 2
@@ -1223,8 +1395,11 @@
bpp);
for (clock = min_clock; clock <= max_clock; clock++) {
- for (lane_count = min_lane_count; lane_count <= max_lane_count; lane_count <<= 1) {
- link_clock = drm_dp_bw_code_to_link_rate(bws[clock]);
+ for (lane_count = min_lane_count;
+ lane_count <= max_lane_count;
+ lane_count <<= 1) {
+
+ link_clock = common_rates[clock];
link_avail = intel_dp_max_data_rate(link_clock,
lane_count);
@@ -1253,10 +1428,20 @@
if (intel_dp->color_range)
pipe_config->limited_color_range = true;
- intel_dp->link_bw = bws[clock];
intel_dp->lane_count = lane_count;
+
+ if (intel_dp->num_sink_rates) {
+ intel_dp->link_bw = 0;
+ intel_dp->rate_select =
+ intel_dp_rate_select(intel_dp, common_rates[clock]);
+ } else {
+ intel_dp->link_bw =
+ drm_dp_link_rate_to_bw_code(common_rates[clock]);
+ intel_dp->rate_select = 0;
+ }
+
pipe_config->pipe_bpp = bpp;
- pipe_config->port_clock = drm_dp_bw_code_to_link_rate(intel_dp->link_bw);
+ pipe_config->port_clock = common_rates[clock];
DRM_DEBUG_KMS("DP link bw %02x lane count %d clock %d bpp %d\n",
intel_dp->link_bw, intel_dp->lane_count,
@@ -1279,7 +1464,7 @@
}
if (IS_SKYLAKE(dev) && is_edp(intel_dp))
- skl_edp_set_pll_config(pipe_config, intel_dp->link_bw);
+ skl_edp_set_pll_config(pipe_config, common_rates[clock]);
else if (IS_HASWELL(dev) || IS_BROADWELL(dev))
hsw_dp_set_ddi_pll_sel(pipe_config, intel_dp->link_bw);
else
@@ -2557,11 +2742,6 @@
/* Program Tx lane latency optimal setting*/
for (i = 0; i < 4; i++) {
- /* Set the latency optimal bit */
- data = (i == 1) ? 0x0 : 0x6;
- vlv_dpio_write(dev_priv, pipe, CHV_TX_DW11(ch, i),
- data << DPIO_FRC_LATENCY_SHFIT);
-
/* Set the upar bit */
data = (i == 1) ? 0x0 : 0x1;
vlv_dpio_write(dev_priv, pipe, CHV_TX_DW14(ch, i),
@@ -2691,11 +2871,14 @@
intel_dp_voltage_max(struct intel_dp *intel_dp)
{
struct drm_device *dev = intel_dp_to_dev(intel_dp);
+ struct drm_i915_private *dev_priv = dev->dev_private;
enum port port = dp_to_dig_port(intel_dp)->port;
- if (INTEL_INFO(dev)->gen >= 9)
+ if (INTEL_INFO(dev)->gen >= 9) {
+ if (dev_priv->vbt.edp_low_vswing && port == PORT_A)
+ return DP_TRAIN_VOLTAGE_SWING_LEVEL_3;
return DP_TRAIN_VOLTAGE_SWING_LEVEL_2;
- else if (IS_VALLEYVIEW(dev))
+ } else if (IS_VALLEYVIEW(dev))
return DP_TRAIN_VOLTAGE_SWING_LEVEL_3;
else if (IS_GEN7(dev) && port == PORT_A)
return DP_TRAIN_VOLTAGE_SWING_LEVEL_2;
@@ -2719,6 +2902,8 @@
return DP_TRAIN_PRE_EMPH_LEVEL_2;
case DP_TRAIN_VOLTAGE_SWING_LEVEL_2:
return DP_TRAIN_PRE_EMPH_LEVEL_1;
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_3:
+ return DP_TRAIN_PRE_EMPH_LEVEL_0;
default:
return DP_TRAIN_PRE_EMPH_LEVEL_0;
}
@@ -3201,6 +3386,9 @@
return DDI_BUF_TRANS_SELECT(7);
case DP_TRAIN_VOLTAGE_SWING_LEVEL_2 | DP_TRAIN_PRE_EMPH_LEVEL_1:
return DDI_BUF_TRANS_SELECT(8);
+
+ case DP_TRAIN_VOLTAGE_SWING_LEVEL_3 | DP_TRAIN_PRE_EMPH_LEVEL_0:
+ return DDI_BUF_TRANS_SELECT(9);
default:
DRM_DEBUG_KMS("Unsupported voltage swing/pre-emphasis level:"
"0x%x\n", signal_levels);
@@ -3358,6 +3546,9 @@
if (drm_dp_enhanced_frame_cap(intel_dp->dpcd))
link_config[1] |= DP_LANE_COUNT_ENHANCED_FRAME_EN;
drm_dp_dpcd_write(&intel_dp->aux, DP_LINK_BW_SET, link_config, 2);
+ if (intel_dp->num_sink_rates)
+ drm_dp_dpcd_write(&intel_dp->aux, DP_LINK_RATE_SET,
+ &intel_dp->rate_select, 1);
link_config[0] = 0;
link_config[1] = DP_SET_ANSI_8B10B;
@@ -3570,6 +3761,7 @@
struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
struct drm_device *dev = dig_port->base.base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
+ uint8_t rev;
if (intel_dp_dpcd_read_wake(&intel_dp->aux, 0x000, intel_dp->dpcd,
sizeof(intel_dp->dpcd)) < 0)
@@ -3601,6 +3793,32 @@
} else
intel_dp->use_tps3 = false;
+ /* Intermediate frequency support */
+ if (is_edp(intel_dp) &&
+ (intel_dp->dpcd[DP_EDP_CONFIGURATION_CAP] & DP_DPCD_DISPLAY_CONTROL_CAPABLE) &&
+ (intel_dp_dpcd_read_wake(&intel_dp->aux, DP_EDP_DPCD_REV, &rev, 1) == 1) &&
+ (rev >= 0x03)) { /* eDP v1.4 or higher */
+ __le16 sink_rates[DP_MAX_SUPPORTED_RATES];
+ int i;
+
+ intel_dp_dpcd_read_wake(&intel_dp->aux,
+ DP_SUPPORTED_LINK_RATES,
+ sink_rates,
+ sizeof(sink_rates));
+
+ for (i = 0; i < ARRAY_SIZE(sink_rates); i++) {
+ int val = le16_to_cpu(sink_rates[i]);
+
+ if (val == 0)
+ break;
+
+ intel_dp->sink_rates[i] = val * 200;
+ }
+ intel_dp->num_sink_rates = i;
+ }
+
+ intel_dp_print_rates(intel_dp);
+
if (!(intel_dp->dpcd[DP_DOWNSTREAMPORT_PRESENT] &
DP_DWN_STRM_PORT_PRESENT))
return true; /* native DP sink */
@@ -3803,7 +4021,7 @@
* 3. Use Link Training from 2.5.3.3 and 3.5.1.3
* 4. Check link status on receipt of hot-plug interrupt
*/
-void
+static void
intel_dp_check_link_status(struct intel_dp *intel_dp)
{
struct drm_device *dev = intel_dp_to_dev(intel_dp);
@@ -4390,6 +4608,7 @@
.atomic_get_property = intel_connector_atomic_get_property,
.destroy = intel_dp_connector_destroy,
.atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
+ .atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
};
static const struct drm_connector_helper_funcs intel_dp_connector_helper_funcs = {
@@ -4736,6 +4955,18 @@
I915_READ(pp_div_reg));
}
+/**
+ * intel_dp_set_drrs_state - program registers for RR switch to take effect
+ * @dev: DRM device
+ * @refresh_rate: RR to be programmed
+ *
+ * This function gets called when refresh rate (RR) has to be changed from
+ * one frequency to another. Switches can be between high and low RR
+ * supported by the panel or to any other RR based on media playback (in
+ * this case, RR value needs to be passed from user space).
+ *
+ * The caller of this function needs to take a lock on dev_priv->drrs.
+ */
static void intel_dp_set_drrs_state(struct drm_device *dev, int refresh_rate)
{
struct drm_i915_private *dev_priv = dev->dev_private;
@@ -4764,7 +4995,7 @@
dig_port = dp_to_dig_port(intel_dp);
encoder = &dig_port->base;
- intel_crtc = encoder->new_crtc;
+ intel_crtc = to_intel_crtc(encoder->base.crtc);
if (!intel_crtc) {
DRM_DEBUG_KMS("DRRS: intel_crtc not initialized\n");
@@ -4793,14 +5024,32 @@
return;
}
- if (INTEL_INFO(dev)->gen > 6 && INTEL_INFO(dev)->gen < 8) {
+ if (INTEL_INFO(dev)->gen >= 8 && !IS_CHERRYVIEW(dev)) {
+ switch (index) {
+ case DRRS_HIGH_RR:
+ intel_dp_set_m_n(intel_crtc, M1_N1);
+ break;
+ case DRRS_LOW_RR:
+ intel_dp_set_m_n(intel_crtc, M2_N2);
+ break;
+ case DRRS_MAX_RR:
+ default:
+ DRM_ERROR("Unsupported refreshrate type\n");
+ }
+ } else if (INTEL_INFO(dev)->gen > 6) {
reg = PIPECONF(intel_crtc->config->cpu_transcoder);
val = I915_READ(reg);
+
if (index > DRRS_HIGH_RR) {
- val |= PIPECONF_EDP_RR_MODE_SWITCH;
- intel_dp_set_m_n(intel_crtc);
+ if (IS_VALLEYVIEW(dev))
+ val |= PIPECONF_EDP_RR_MODE_SWITCH_VLV;
+ else
+ val |= PIPECONF_EDP_RR_MODE_SWITCH;
} else {
- val &= ~PIPECONF_EDP_RR_MODE_SWITCH;
+ if (IS_VALLEYVIEW(dev))
+ val &= ~PIPECONF_EDP_RR_MODE_SWITCH_VLV;
+ else
+ val &= ~PIPECONF_EDP_RR_MODE_SWITCH;
}
I915_WRITE(reg, val);
}
@@ -4810,6 +5059,12 @@
DRM_DEBUG_KMS("eDP Refresh Rate set to : %dHz\n", refresh_rate);
}
+/**
+ * intel_edp_drrs_enable - init drrs struct if supported
+ * @intel_dp: DP struct
+ *
+ * Initializes frontbuffer_bits and drrs.dp
+ */
void intel_edp_drrs_enable(struct intel_dp *intel_dp)
{
struct drm_device *dev = intel_dp_to_dev(intel_dp);
@@ -4837,6 +5092,11 @@
mutex_unlock(&dev_priv->drrs.mutex);
}
+/**
+ * intel_edp_drrs_disable - Disable DRRS
+ * @intel_dp: DP struct
+ *
+ */
void intel_edp_drrs_disable(struct intel_dp *intel_dp)
{
struct drm_device *dev = intel_dp_to_dev(intel_dp);
@@ -4892,10 +5152,20 @@
downclock_mode->vrefresh);
unlock:
-
mutex_unlock(&dev_priv->drrs.mutex);
}
+/**
+ * intel_edp_drrs_invalidate - Invalidate DRRS
+ * @dev: DRM device
+ * @frontbuffer_bits: frontbuffer plane tracking bits
+ *
+ * When there is a disturbance on screen (due to cursor movement/time
+ * update etc.), DRRS needs to be invalidated, i.e. we need to switch to
+ * high RR.
+ *
+ * Dirty frontbuffers relevant to DRRS are tracked in busy_frontbuffer_bits.
+ */
void intel_edp_drrs_invalidate(struct drm_device *dev,
unsigned frontbuffer_bits)
{
@@ -4903,15 +5173,21 @@
struct drm_crtc *crtc;
enum pipe pipe;
- if (!dev_priv->drrs.dp)
+ if (dev_priv->drrs.type == DRRS_NOT_SUPPORTED)
return;
+ cancel_delayed_work(&dev_priv->drrs.work);
+
mutex_lock(&dev_priv->drrs.mutex);
+ if (!dev_priv->drrs.dp) {
+ mutex_unlock(&dev_priv->drrs.mutex);
+ return;
+ }
+
crtc = dp_to_dig_port(dev_priv->drrs.dp)->base.base.crtc;
pipe = to_intel_crtc(crtc)->pipe;
if (dev_priv->drrs.refresh_rate_type == DRRS_LOW_RR) {
- cancel_delayed_work_sync(&dev_priv->drrs.work);
intel_dp_set_drrs_state(dev_priv->dev,
dev_priv->drrs.dp->attached_connector->panel.
fixed_mode->vrefresh);
@@ -4923,6 +5199,17 @@
mutex_unlock(&dev_priv->drrs.mutex);
}
+/**
+ * intel_edp_drrs_flush - Flush DRRS
+ * @dev: DRM device
+ * @frontbuffer_bits: frontbuffer plane tracking bits
+ *
+ * When there is no movement on screen, DRRS work can be scheduled.
+ * This DRRS work is responsible for setting relevant registers after a
+ * timeout of 1 second.
+ *
+ * Dirty frontbuffers relevant to DRRS are tracked in busy_frontbuffer_bits.
+ */
void intel_edp_drrs_flush(struct drm_device *dev,
unsigned frontbuffer_bits)
{
@@ -4930,16 +5217,21 @@
struct drm_crtc *crtc;
enum pipe pipe;
- if (!dev_priv->drrs.dp)
+ if (dev_priv->drrs.type == DRRS_NOT_SUPPORTED)
return;
+ cancel_delayed_work(&dev_priv->drrs.work);
+
mutex_lock(&dev_priv->drrs.mutex);
+ if (!dev_priv->drrs.dp) {
+ mutex_unlock(&dev_priv->drrs.mutex);
+ return;
+ }
+
crtc = dp_to_dig_port(dev_priv->drrs.dp)->base.base.crtc;
pipe = to_intel_crtc(crtc)->pipe;
dev_priv->drrs.busy_frontbuffer_bits &= ~frontbuffer_bits;
- cancel_delayed_work_sync(&dev_priv->drrs.work);
-
if (dev_priv->drrs.refresh_rate_type != DRRS_LOW_RR &&
!dev_priv->drrs.busy_frontbuffer_bits)
schedule_delayed_work(&dev_priv->drrs.work,
@@ -4947,6 +5239,56 @@
mutex_unlock(&dev_priv->drrs.mutex);
}
+/**
+ * DOC: Display Refresh Rate Switching (DRRS)
+ *
+ * Display Refresh Rate Switching (DRRS) is a power conservation feature
+ * which enables switching between low and high refresh rates,
+ * dynamically, based on the usage scenario. This feature is applicable
+ * for internal panels.
+ *
+ * Indication that the panel supports DRRS is given by the panel EDID, which
+ * would list multiple refresh rates for one resolution.
+ *
+ * DRRS is of 2 types - static and seamless.
+ * Static DRRS involves changing refresh rate (RR) by doing a full modeset
+ * (may appear as a blink on screen) and is used in dock/undock scenarios.
+ * Seamless DRRS involves changing RR without any visual effect to the user
+ * and can be used during normal system usage. This is done by programming
+ * certain registers.
+ *
+ * Support for static/seamless DRRS may be indicated in the VBT based on
+ * inputs from the panel spec.
+ *
+ * DRRS saves power by switching to low RR based on usage scenarios.
+ *
+ * eDP DRRS:-
+ * The implementation is based on frontbuffer tracking implementation.
+ * When there is a disturbance on the screen triggered by user activity or a
+ * periodic system activity, DRRS is disabled (RR is changed to high RR).
+ * When there is no movement on screen, after a timeout of 1 second, a switch
+ * to low RR is made.
+ * For integration with frontbuffer tracking code,
+ * intel_edp_drrs_invalidate() and intel_edp_drrs_flush() are called.
+ *
+ * DRRS can be further extended to support other internal panels and also
+ * the scenario of video playback wherein RR is set based on the rate
+ * requested by userspace.
+ */
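/*
 * Illustrative sketch (not part of the diff): under the model described
 * above, the frontbuffer tracking code is expected to drive DRRS roughly
 * like this, where "bits" identifies the dirtied planes:
 *
 *	intel_edp_drrs_invalidate(dev, bits);	(activity: switch to high RR now)
 *	... rendering / page flip completes ...
 *	intel_edp_drrs_flush(dev, bits);	(idle: arm the 1 second downclock work)
 */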
+
+/**
+ * intel_dp_drrs_init - Init basic DRRS work and mutex.
+ * @intel_connector: eDP connector
+ * @fixed_mode: preferred mode of panel
+ *
+ * This function is called only once at driver load to initialize basic
+ * DRRS stuff.
+ *
+ * Returns:
+ * Downclock mode if the panel supports it, otherwise NULL.
+ * DRRS support is determined by the presence of downclock mode (apart
+ * from VBT setting).
+ */
static struct drm_display_mode *
intel_dp_drrs_init(struct intel_connector *intel_connector,
struct drm_display_mode *fixed_mode)
@@ -4956,6 +5298,9 @@
struct drm_i915_private *dev_priv = dev->dev_private;
struct drm_display_mode *downclock_mode = NULL;
+ INIT_DELAYED_WORK(&dev_priv->drrs.work, intel_edp_drrs_downclock_work);
+ mutex_init(&dev_priv->drrs.mutex);
+
if (INTEL_INFO(dev)->gen <= 6) {
DRM_DEBUG_KMS("DRRS supported for Gen7 and above\n");
return NULL;
@@ -4970,14 +5315,10 @@
(dev, fixed_mode, connector);
if (!downclock_mode) {
- DRM_DEBUG_KMS("DRRS not supported\n");
+ DRM_DEBUG_KMS("Downclock mode is not found. DRRS not supported\n");
return NULL;
}
- INIT_DELAYED_WORK(&dev_priv->drrs.work, intel_edp_drrs_downclock_work);
-
- mutex_init(&dev_priv->drrs.mutex);
-
dev_priv->drrs.type = dev_priv->vbt.drrs_type;
dev_priv->drrs.refresh_rate_type = DRRS_HIGH_RR;
@@ -5000,8 +5341,6 @@
struct edid *edid;
enum pipe pipe = INVALID_PIPE;
- dev_priv->drrs.type = DRRS_NOT_SUPPORTED;
-
if (!is_edp(intel_dp))
return true;
@@ -5251,7 +5590,7 @@
if (!intel_dig_port)
return;
- intel_connector = kzalloc(sizeof(*intel_connector), GFP_KERNEL);
+ intel_connector = intel_connector_alloc();
if (!intel_connector) {
kfree(intel_dig_port);
return;
diff --git a/drivers/gpu/drm/i915/intel_dp_mst.c b/drivers/gpu/drm/i915/intel_dp_mst.c
index 9f67a37..5cb4748 100644
--- a/drivers/gpu/drm/i915/intel_dp_mst.c
+++ b/drivers/gpu/drm/i915/intel_dp_mst.c
@@ -36,11 +36,11 @@
struct intel_dp_mst_encoder *intel_mst = enc_to_mst(&encoder->base);
struct intel_digital_port *intel_dig_port = intel_mst->primary;
struct intel_dp *intel_dp = &intel_dig_port->dp;
- struct drm_device *dev = encoder->base.dev;
- int bpp;
- int lane_count, slots;
+ struct drm_atomic_state *state;
+ int bpp, i;
+ int lane_count, slots, rate;
struct drm_display_mode *adjusted_mode = &pipe_config->base.adjusted_mode;
- struct intel_connector *found = NULL, *intel_connector;
+ struct intel_connector *found = NULL;
int mst_pbn;
pipe_config->dp_encoder_is_mst = true;
@@ -52,15 +52,30 @@
* seem to suggest we should do otherwise.
*/
lane_count = drm_dp_max_lane_count(intel_dp->dpcd);
- intel_dp->link_bw = intel_dp_max_link_bw(intel_dp);
+
+ rate = intel_dp_max_link_rate(intel_dp);
+
+ if (intel_dp->num_sink_rates) {
+ intel_dp->link_bw = 0;
+ intel_dp->rate_select = intel_dp_rate_select(intel_dp, rate);
+ } else {
+ intel_dp->link_bw = drm_dp_link_rate_to_bw_code(rate);
+ intel_dp->rate_select = 0;
+ }
+
intel_dp->lane_count = lane_count;
pipe_config->pipe_bpp = 24;
- pipe_config->port_clock = drm_dp_bw_code_to_link_rate(intel_dp->link_bw);
+ pipe_config->port_clock = rate;
- list_for_each_entry(intel_connector, &dev->mode_config.connector_list, base.head) {
- if (intel_connector->new_encoder == encoder) {
- found = intel_connector;
+ state = pipe_config->base.state;
+
+ for (i = 0; i < state->num_connector; i++) {
+ if (!state->connectors[i])
+ continue;
+
+ if (state->connector_states[i]->best_encoder == &encoder->base) {
+ found = to_intel_connector(state->connectors[i]);
break;
}
}
@@ -140,7 +155,7 @@
struct drm_crtc *crtc = encoder->base.crtc;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
- list_for_each_entry(intel_connector, &dev->mode_config.connector_list, base.head) {
+ for_each_intel_connector(dev, intel_connector) {
if (intel_connector->new_encoder == encoder) {
found = intel_connector;
break;
@@ -317,6 +332,7 @@
.atomic_get_property = intel_connector_atomic_get_property,
.destroy = intel_dp_mst_connector_destroy,
.atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
+ .atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
};
static int intel_dp_mst_get_modes(struct drm_connector *connector)
@@ -399,7 +415,7 @@
struct drm_connector *connector;
int i;
- intel_connector = kzalloc(sizeof(*intel_connector), GFP_KERNEL);
+ intel_connector = intel_connector_alloc();
if (!intel_connector)
return NULL;
diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
index ba243db..897f17d 100644
--- a/drivers/gpu/drm/i915/intel_drv.h
+++ b/drivers/gpu/drm/i915/intel_drv.h
@@ -35,6 +35,7 @@
#include <drm/drm_fb_helper.h>
#include <drm/drm_dp_mst_helper.h>
#include <drm/drm_rect.h>
+#include <drm/drm_atomic.h>
/**
* _wait_for - magic (register) wait macro
@@ -53,8 +54,8 @@
ret__ = -ETIMEDOUT; \
break; \
} \
- if (W && drm_can_sleep()) { \
- msleep(W); \
+ if ((W) && drm_can_sleep()) { \
+ usleep_range((W)*1000, (W)*2000); \
} else { \
cpu_relax(); \
} \
@@ -255,6 +256,7 @@
};
struct intel_initial_plane_config {
+ struct intel_framebuffer *fb;
unsigned int tiling;
int size;
u32 base;
@@ -460,7 +462,6 @@
struct drm_i915_gem_object *cursor_bo;
uint32_t cursor_addr;
- int16_t cursor_width, cursor_height;
uint32_t cursor_cntl;
uint32_t cursor_size;
uint32_t cursor_base;
@@ -497,16 +498,20 @@
uint8_t bytes_per_pixel;
bool enabled;
bool scaled;
+ u64 tiling;
+ unsigned int rotation;
};
struct intel_plane {
struct drm_plane base;
int plane;
enum pipe pipe;
- struct drm_i915_gem_object *obj;
bool can_scale;
int max_downscale;
+ /* FIXME convert to properties */
+ struct drm_intel_sprite_colorkey ckey;
+
/* Since we need to change the watermarks before/after
* enabling/disabling the planes, we need to store the parameters here
* as the other pieces of the struct may not reflect the values we want
@@ -523,7 +528,6 @@
void (*update_plane)(struct drm_plane *plane,
struct drm_crtc *crtc,
struct drm_framebuffer *fb,
- struct drm_i915_gem_object *obj,
int crtc_x, int crtc_y,
unsigned int crtc_w, unsigned int crtc_h,
uint32_t x, uint32_t y,
@@ -534,10 +538,6 @@
struct intel_plane_state *state);
void (*commit_plane)(struct drm_plane *plane,
struct intel_plane_state *state);
- int (*update_colorkey)(struct drm_plane *plane,
- struct drm_intel_sprite_colorkey *key);
- void (*get_colorkey)(struct drm_plane *plane,
- struct drm_intel_sprite_colorkey *key);
};
struct intel_watermark_params {
@@ -560,6 +560,7 @@
};
#define to_intel_crtc(x) container_of(x, struct intel_crtc, base)
+#define to_intel_crtc_state(x) container_of(x, struct intel_crtc_state, base)
#define to_intel_connector(x) container_of(x, struct intel_connector, base)
#define to_intel_encoder(x) container_of(x, struct intel_encoder, base)
#define to_intel_framebuffer(x) container_of(x, struct intel_framebuffer, base)
@@ -589,6 +590,26 @@
struct intel_dp_mst_encoder;
#define DP_MAX_DOWNSTREAM_PORTS 0x10
+/*
+ * enum link_m_n_set:
+ * When the platform provides two sets of M_N registers for dp, we can
+ * program both and switch between them in case of DRRS.
+ * But when only one such register is provided, we have to program the
+ * required divider value on that register itself based on the DRRS state.
+ *
+ * M1_N1 : Program dp_m_n on M1_N1 registers
+ * dp_m2_n2 on M2_N2 registers (If supported)
+ *
+ * M2_N2 : Program dp_m2_n2 on M1_N1 registers
+ * M2_N2 registers are not supported
+ */
+
+enum link_m_n_set {
+ /* Sets the m1_n1 and m2_n2 */
+ M1_N1 = 0,
+ M2_N2
+};
+
struct intel_dp {
uint32_t output_reg;
uint32_t aux_ch_ctl_reg;
@@ -598,10 +619,14 @@
uint32_t color_range;
bool color_range_auto;
uint8_t link_bw;
+ uint8_t rate_select;
uint8_t lane_count;
uint8_t dpcd[DP_RECEIVER_CAP_SIZE];
uint8_t psr_dpcd[EDP_PSR_RECEIVER_CAP_SIZE];
uint8_t downstream_ports[DP_MAX_DOWNSTREAM_PORTS];
+ /* sink rates as reported by DP_SUPPORTED_LINK_RATES */
+ uint8_t num_sink_rates;
+ int sink_rates[DP_MAX_SUPPORTED_RATES];
struct drm_dp_aux aux;
uint8_t train_set[4];
int panel_power_up_delay;
@@ -707,7 +732,7 @@
struct intel_unpin_work {
struct work_struct work;
struct drm_crtc *crtc;
- struct drm_i915_gem_object *old_fb_obj;
+ struct drm_framebuffer *old_fb;
struct drm_i915_gem_object *pending_flip_obj;
struct drm_pending_vblank_event *event;
atomic_t pending;
@@ -814,7 +839,8 @@
}
int intel_get_crtc_scanline(struct intel_crtc *crtc);
-void gen8_irq_power_well_post_enable(struct drm_i915_private *dev_priv);
+void gen8_irq_power_well_post_enable(struct drm_i915_private *dev_priv,
+ unsigned int pipe_mask);
/* intel_crt.c */
void intel_crt_init(struct drm_device *dev);
@@ -849,7 +875,8 @@
/* intel_frontbuffer.c */
void intel_fb_obj_invalidate(struct drm_i915_gem_object *obj,
- struct intel_engine_cs *ring);
+ struct intel_engine_cs *ring,
+ enum fb_op_origin origin);
void intel_frontbuffer_flip_prepare(struct drm_device *dev,
unsigned frontbuffer_bits);
void intel_frontbuffer_flip_complete(struct drm_device *dev,
@@ -874,10 +901,14 @@
intel_frontbuffer_flush(dev, frontbuffer_bits);
}
-int intel_fb_align_height(struct drm_device *dev, int height,
- unsigned int tiling);
+unsigned int intel_fb_align_height(struct drm_device *dev,
+ unsigned int height,
+ uint32_t pixel_format,
+ uint64_t fb_format_modifier);
void intel_fb_obj_flush(struct drm_i915_gem_object *obj, bool retire);
+u32 intel_fb_stride_alignment(struct drm_device *dev, uint64_t fb_modifier,
+ uint32_t pixel_format);
/* intel_audio.c */
void intel_init_audio(struct drm_device *dev);
@@ -896,6 +927,8 @@
void intel_crtc_control(struct drm_crtc *crtc, bool enable);
void intel_crtc_update_dpms(struct drm_crtc *crtc);
void intel_encoder_destroy(struct drm_encoder *encoder);
+int intel_connector_init(struct intel_connector *);
+struct intel_connector *intel_connector_alloc(void);
void intel_connector_dpms(struct drm_connector *, int mode);
bool intel_connector_get_hw_state(struct intel_connector *connector);
void intel_modeset_check_state(struct drm_device *dev);
@@ -925,11 +958,12 @@
struct intel_load_detect_pipe *old,
struct drm_modeset_acquire_ctx *ctx);
void intel_release_load_detect_pipe(struct drm_connector *connector,
- struct intel_load_detect_pipe *old);
+ struct intel_load_detect_pipe *old,
+ struct drm_modeset_acquire_ctx *ctx);
int intel_pin_and_fence_fb_obj(struct drm_plane *plane,
struct drm_framebuffer *fb,
+ const struct drm_plane_state *plane_state,
struct intel_engine_cs *pipelined);
-void intel_unpin_fb_obj(struct drm_i915_gem_object *obj);
struct drm_framebuffer *
__intel_framebuffer_create(struct drm_device *dev,
struct drm_mode_fb_cmd2 *mode_cmd,
@@ -939,9 +973,11 @@
void intel_finish_page_flip_plane(struct drm_device *dev, int plane);
void intel_check_page_flip(struct drm_device *dev, int pipe);
int intel_prepare_plane_fb(struct drm_plane *plane,
- struct drm_framebuffer *fb);
+ struct drm_framebuffer *fb,
+ const struct drm_plane_state *new_state);
void intel_cleanup_plane_fb(struct drm_plane *plane,
- struct drm_framebuffer *fb);
+ struct drm_framebuffer *fb,
+ const struct drm_plane_state *old_state);
int intel_plane_atomic_get_property(struct drm_plane *plane,
const struct drm_plane_state *state,
struct drm_property *property,
@@ -951,6 +987,19 @@
struct drm_property *property,
uint64_t val);
+unsigned int
+intel_tile_height(struct drm_device *dev, uint32_t pixel_format,
+ uint64_t fb_format_modifier);
+
+static inline bool
+intel_rotation_90_or_270(unsigned int rotation)
+{
+ return rotation & (BIT(DRM_ROTATE_90) | BIT(DRM_ROTATE_270));
+}
+
+bool intel_wm_need_update(struct drm_plane *plane,
+ struct drm_plane_state *state);
+
/* shared dpll functions */
struct intel_shared_dpll *intel_crtc_to_shared_dpll(struct intel_crtc *crtc);
void assert_shared_dpll(struct drm_i915_private *dev_priv,
@@ -990,7 +1039,7 @@
void hsw_disable_pc8(struct drm_i915_private *dev_priv);
void intel_dp_get_m_n(struct intel_crtc *crtc,
struct intel_crtc_state *pipe_config);
-void intel_dp_set_m_n(struct intel_crtc *crtc);
+void intel_dp_set_m_n(struct intel_crtc *crtc, enum link_m_n_set m_n);
int intel_dotclock_calculate(int link_freq, const struct intel_link_m_n *m_n);
void
ironlake_check_encoder_dotclock(const struct intel_crtc_state *pipe_config,
@@ -1005,6 +1054,9 @@
void intel_crtc_wait_for_pending_flips(struct drm_crtc *crtc);
void intel_modeset_preclose(struct drm_device *dev, struct drm_file *file);
+unsigned long intel_plane_obj_offset(struct intel_plane *intel_plane,
+ struct drm_i915_gem_object *obj);
+
/* intel_dp.c */
void intel_dp_init(struct drm_device *dev, int output_reg, enum port port);
bool intel_dp_init_connector(struct intel_digital_port *intel_dig_port,
@@ -1014,7 +1066,6 @@
void intel_dp_stop_link_train(struct intel_dp *intel_dp);
void intel_dp_sink_dpms(struct intel_dp *intel_dp, int mode);
void intel_dp_encoder_destroy(struct drm_encoder *encoder);
-void intel_dp_check_link_status(struct intel_dp *intel_dp);
int intel_dp_sink_crc(struct intel_dp *intel_dp, u8 *crc);
bool intel_dp_compute_config(struct intel_encoder *encoder,
struct intel_crtc_state *pipe_config);
@@ -1029,17 +1080,11 @@
void intel_dp_add_properties(struct intel_dp *intel_dp, struct drm_connector *connector);
void intel_dp_mst_suspend(struct drm_device *dev);
void intel_dp_mst_resume(struct drm_device *dev);
-int intel_dp_max_link_bw(struct intel_dp *intel_dp);
+int intel_dp_max_link_rate(struct intel_dp *intel_dp);
+int intel_dp_rate_select(struct intel_dp *intel_dp, int rate);
void intel_dp_hot_plug(struct intel_encoder *intel_encoder);
void vlv_power_sequencer_reset(struct drm_i915_private *dev_priv);
uint32_t intel_dp_pack_aux(const uint8_t *src, int src_bytes);
-void intel_dp_unpack_aux(uint32_t src, uint8_t *dst, int dst_bytes);
-int intel_update_plane(struct drm_plane *plane, struct drm_crtc *crtc,
- struct drm_framebuffer *fb, int crtc_x, int crtc_y,
- unsigned int crtc_w, unsigned int crtc_h,
- uint32_t src_x, uint32_t src_y,
- uint32_t src_w, uint32_t src_h);
-int intel_disable_plane(struct drm_plane *plane);
void intel_plane_destroy(struct drm_plane *plane);
void intel_edp_drrs_enable(struct intel_dp *intel_dp);
void intel_edp_drrs_disable(struct intel_dp *intel_dp);
@@ -1094,7 +1139,11 @@
void intel_fbc_update(struct drm_device *dev);
void intel_fbc_init(struct drm_i915_private *dev_priv);
void intel_fbc_disable(struct drm_device *dev);
-void bdw_fbc_sw_flush(struct drm_device *dev, u32 value);
+void intel_fbc_invalidate(struct drm_i915_private *dev_priv,
+ unsigned int frontbuffer_bits,
+ enum fb_op_origin origin);
+void intel_fbc_flush(struct drm_i915_private *dev_priv,
+ unsigned int frontbuffer_bits);
/* intel_hdmi.c */
void intel_hdmi_init(struct drm_device *dev, int hdmi_reg, enum port port);
@@ -1210,8 +1259,9 @@
void intel_disable_gt_powersave(struct drm_device *dev);
void intel_suspend_gt_powersave(struct drm_device *dev);
void intel_reset_gt_powersave(struct drm_device *dev);
-void ironlake_teardown_rc6(struct drm_device *dev);
void gen6_update_ring_freq(struct drm_device *dev);
+void gen6_rps_busy(struct drm_i915_private *dev_priv);
+void gen6_rps_reset_ei(struct drm_i915_private *dev_priv);
void gen6_rps_idle(struct drm_i915_private *dev_priv);
void gen6_rps_boost(struct drm_i915_private *dev_priv);
void ilk_wm_get_hw_state(struct drm_device *dev);
@@ -1228,14 +1278,9 @@
int intel_plane_init(struct drm_device *dev, enum pipe pipe, int plane);
void intel_flush_primary_plane(struct drm_i915_private *dev_priv,
enum plane plane);
-int intel_plane_set_property(struct drm_plane *plane,
- struct drm_property *prop,
- uint64_t val);
int intel_plane_restore(struct drm_plane *plane);
int intel_sprite_set_colorkey(struct drm_device *dev, void *data,
struct drm_file *file_priv);
-int intel_sprite_get_colorkey(struct drm_device *dev, void *data,
- struct drm_file *file_priv);
bool intel_pipe_update_start(struct intel_crtc *crtc,
uint32_t *start_vbl_count);
void intel_pipe_update_end(struct intel_crtc *crtc, u32 start_vbl_count);
@@ -1258,6 +1303,17 @@
struct drm_crtc_state *intel_crtc_duplicate_state(struct drm_crtc *crtc);
void intel_crtc_destroy_state(struct drm_crtc *crtc,
struct drm_crtc_state *state);
+static inline struct intel_crtc_state *
+intel_atomic_get_crtc_state(struct drm_atomic_state *state,
+ struct intel_crtc *crtc)
+{
+ struct drm_crtc_state *crtc_state;
+ crtc_state = drm_atomic_get_crtc_state(state, &crtc->base);
+ if (IS_ERR(crtc_state))
+ return ERR_PTR(PTR_ERR(crtc_state));
+
+ return to_intel_crtc_state(crtc_state);
+}
/* intel_atomic_plane.c */
struct intel_plane_state *intel_create_plane_state(struct drm_plane *plane);
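[Editorial aside, not part of the patch: the intel_atomic_get_crtc_state() helper added to intel_drv.h above returns ERR_PTR() on failure, so callers are expected to test with IS_ERR() and propagate PTR_ERR(). A minimal caller sketch, assuming the usual i915/drm headers; example_validate_crtc() is a made-up name for illustration only:

static int example_validate_crtc(struct drm_atomic_state *state,
				 struct intel_crtc *intel_crtc)
{
	struct intel_crtc_state *crtc_state;

	/* Look up (or duplicate) the CRTC state tracked by this atomic state. */
	crtc_state = intel_atomic_get_crtc_state(state, intel_crtc);
	if (IS_ERR(crtc_state))
		return PTR_ERR(crtc_state);

	/* ... inspect or adjust crtc_state here ... */
	return 0;
}
]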
diff --git a/drivers/gpu/drm/i915/intel_dsi.c b/drivers/gpu/drm/i915/intel_dsi.c
index 10ab684..5196642 100644
--- a/drivers/gpu/drm/i915/intel_dsi.c
+++ b/drivers/gpu/drm/i915/intel_dsi.c
@@ -854,7 +854,7 @@
/* recovery disables */
- I915_WRITE(MIPI_EOT_DISABLE(port), val);
+ I915_WRITE(MIPI_EOT_DISABLE(port), tmp);
/* in terms of low power clock */
I915_WRITE(MIPI_INIT_COUNT(port), intel_dsi->init_count);
@@ -975,6 +975,7 @@
.fill_modes = drm_helper_probe_single_connector_modes,
.atomic_get_property = intel_connector_atomic_get_property,
.atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
+ .atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
};
void intel_dsi_init(struct drm_device *dev)
@@ -1006,7 +1007,7 @@
if (!intel_dsi)
return;
- intel_connector = kzalloc(sizeof(*intel_connector), GFP_KERNEL);
+ intel_connector = intel_connector_alloc();
if (!intel_connector) {
kfree(intel_dsi);
return;
diff --git a/drivers/gpu/drm/i915/intel_dsi_cmd.h b/drivers/gpu/drm/i915/intel_dsi_cmd.h
deleted file mode 100644
index 8867790..0000000
--- a/drivers/gpu/drm/i915/intel_dsi_cmd.h
+++ /dev/null
@@ -1,39 +0,0 @@
-/*
- * Copyright © 2013 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
- * DEALINGS IN THE SOFTWARE.
- *
- * Author: Jani Nikula <jani.nikula@intel.com>
- */
-
-#ifndef _INTEL_DSI_DSI_H
-#define _INTEL_DSI_DSI_H
-
-#include <drm/drmP.h>
-#include <drm/drm_crtc.h>
-#include <video/mipi_display.h>
-#include "i915_drv.h"
-#include "intel_drv.h"
-#include "intel_dsi.h"
-
-void dsi_hs_mode_enable(struct intel_dsi *intel_dsi, bool enable,
- enum port port);
-
-#endif /* _INTEL_DSI_DSI_H */
diff --git a/drivers/gpu/drm/i915/intel_dvo.c b/drivers/gpu/drm/i915/intel_dvo.c
index d857951..770040f 100644
--- a/drivers/gpu/drm/i915/intel_dvo.c
+++ b/drivers/gpu/drm/i915/intel_dvo.c
@@ -393,6 +393,7 @@
.fill_modes = drm_helper_probe_single_connector_modes,
.atomic_get_property = intel_connector_atomic_get_property,
.atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
+ .atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
};
static const struct drm_connector_helper_funcs intel_dvo_connector_helper_funcs = {
@@ -468,7 +469,7 @@
if (!intel_dvo)
return;
- intel_connector = kzalloc(sizeof(*intel_connector), GFP_KERNEL);
+ intel_connector = intel_connector_alloc();
if (!intel_connector) {
kfree(intel_dvo);
return;
diff --git a/drivers/gpu/drm/i915/intel_fbc.c b/drivers/gpu/drm/i915/intel_fbc.c
index 624d1d9..4165ce0 100644
--- a/drivers/gpu/drm/i915/intel_fbc.c
+++ b/drivers/gpu/drm/i915/intel_fbc.c
@@ -78,7 +78,8 @@
dev_priv->fbc.enabled = true;
- cfb_pitch = dev_priv->fbc.size / FBC_LL_SIZE;
+ /* Note: fbc.threshold == 1 for i8xx */
+ cfb_pitch = dev_priv->fbc.uncompressed_size / FBC_LL_SIZE;
if (fb->pitches[0] < cfb_pitch)
cfb_pitch = fb->pitches[0];
@@ -173,29 +174,10 @@
return I915_READ(DPFC_CONTROL) & DPFC_CTL_EN;
}
-static void snb_fbc_blit_update(struct drm_device *dev)
+static void intel_fbc_nuke(struct drm_i915_private *dev_priv)
{
- struct drm_i915_private *dev_priv = dev->dev_private;
- u32 blt_ecoskpd;
-
- /* Make sure blitter notifies FBC of writes */
-
- /* Blitter is part of Media powerwell on VLV. No impact of
- * his param in other platforms for now */
- intel_uncore_forcewake_get(dev_priv, FORCEWAKE_MEDIA);
-
- blt_ecoskpd = I915_READ(GEN6_BLITTER_ECOSKPD);
- blt_ecoskpd |= GEN6_BLITTER_FBC_NOTIFY <<
- GEN6_BLITTER_LOCK_SHIFT;
- I915_WRITE(GEN6_BLITTER_ECOSKPD, blt_ecoskpd);
- blt_ecoskpd |= GEN6_BLITTER_FBC_NOTIFY;
- I915_WRITE(GEN6_BLITTER_ECOSKPD, blt_ecoskpd);
- blt_ecoskpd &= ~(GEN6_BLITTER_FBC_NOTIFY <<
- GEN6_BLITTER_LOCK_SHIFT);
- I915_WRITE(GEN6_BLITTER_ECOSKPD, blt_ecoskpd);
- POSTING_READ(GEN6_BLITTER_ECOSKPD);
-
- intel_uncore_forcewake_put(dev_priv, FORCEWAKE_MEDIA);
+ I915_WRITE(MSG_FBC_REND_STATE, FBC_REND_NUKE);
+ POSTING_READ(MSG_FBC_REND_STATE);
}
static void ilk_fbc_enable(struct drm_crtc *crtc)
@@ -238,9 +220,10 @@
I915_WRITE(SNB_DPFC_CTL_SA,
SNB_CPU_FENCE_ENABLE | obj->fence_reg);
I915_WRITE(DPFC_CPU_FENCE_OFFSET, crtc->y);
- snb_fbc_blit_update(dev);
}
+ intel_fbc_nuke(dev_priv);
+
DRM_DEBUG_KMS("enabled fbc on plane %c\n", plane_name(intel_crtc->plane));
}
@@ -319,7 +302,7 @@
SNB_CPU_FENCE_ENABLE | obj->fence_reg);
I915_WRITE(DPFC_CPU_FENCE_OFFSET, crtc->y);
- snb_fbc_blit_update(dev);
+ intel_fbc_nuke(dev_priv);
DRM_DEBUG_KMS("enabled fbc on plane %c\n", plane_name(intel_crtc->plane));
}
@@ -339,19 +322,6 @@
return dev_priv->fbc.enabled;
}
-void bdw_fbc_sw_flush(struct drm_device *dev, u32 value)
-{
- struct drm_i915_private *dev_priv = dev->dev_private;
-
- if (!IS_GEN8(dev))
- return;
-
- if (!intel_fbc_enabled(dev))
- return;
-
- I915_WRITE(MSG_FBC_REND_STATE, value);
-}
-
static void intel_fbc_work_fn(struct work_struct *__work)
{
struct intel_fbc_work *work =
@@ -368,7 +338,7 @@
if (work->crtc->primary->fb == work->fb) {
dev_priv->display.enable_fbc(work->crtc);
- dev_priv->fbc.plane = to_intel_crtc(work->crtc)->plane;
+ dev_priv->fbc.crtc = to_intel_crtc(work->crtc);
dev_priv->fbc.fb_id = work->crtc->primary->fb->base.id;
dev_priv->fbc.y = work->crtc->y;
}
@@ -459,7 +429,7 @@
return;
dev_priv->display.disable_fbc(dev);
- dev_priv->fbc.plane = -1;
+ dev_priv->fbc.crtc = NULL;
}
static bool set_no_fbc_reason(struct drm_i915_private *dev_priv,
@@ -472,6 +442,43 @@
return true;
}
+static struct drm_crtc *intel_fbc_find_crtc(struct drm_i915_private *dev_priv)
+{
+ struct drm_crtc *crtc = NULL, *tmp_crtc;
+ enum pipe pipe;
+ bool pipe_a_only = false, one_pipe_only = false;
+
+ if (IS_HASWELL(dev_priv) || INTEL_INFO(dev_priv)->gen >= 8)
+ pipe_a_only = true;
+ else if (INTEL_INFO(dev_priv)->gen <= 4)
+ one_pipe_only = true;
+
+ for_each_pipe(dev_priv, pipe) {
+ tmp_crtc = dev_priv->pipe_to_crtc_mapping[pipe];
+
+ if (intel_crtc_active(tmp_crtc) &&
+ to_intel_crtc(tmp_crtc)->primary_enabled) {
+ if (one_pipe_only && crtc) {
+ if (set_no_fbc_reason(dev_priv, FBC_MULTIPLE_PIPES))
+ DRM_DEBUG_KMS("more than one pipe active, disabling compression\n");
+ return NULL;
+ }
+ crtc = tmp_crtc;
+ }
+
+ if (pipe_a_only)
+ break;
+ }
+
+ if (!crtc || crtc->primary->fb == NULL) {
+ if (set_no_fbc_reason(dev_priv, FBC_NO_OUTPUT))
+ DRM_DEBUG_KMS("no output, disabling\n");
+ return NULL;
+ }
+
+ return crtc;
+}
+
/**
* intel_fbc_update - enable/disable FBC as needed
* @dev: the drm_device
@@ -494,22 +501,30 @@
void intel_fbc_update(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
- struct drm_crtc *crtc = NULL, *tmp_crtc;
+ struct drm_crtc *crtc = NULL;
struct intel_crtc *intel_crtc;
struct drm_framebuffer *fb;
struct drm_i915_gem_object *obj;
const struct drm_display_mode *adjusted_mode;
unsigned int max_width, max_height;
- if (!HAS_FBC(dev)) {
- set_no_fbc_reason(dev_priv, FBC_UNSUPPORTED);
+ if (!HAS_FBC(dev))
return;
+
+ /* disable framebuffer compression in vGPU */
+ if (intel_vgpu_active(dev))
+ i915.enable_fbc = 0;
+
+ if (i915.enable_fbc < 0) {
+ if (set_no_fbc_reason(dev_priv, FBC_CHIP_DEFAULT))
+ DRM_DEBUG_KMS("disabled per chip default\n");
+ goto out_disable;
}
- if (!i915.powersave) {
+ if (!i915.enable_fbc) {
if (set_no_fbc_reason(dev_priv, FBC_MODULE_PARAM))
DRM_DEBUG_KMS("fbc disabled per module param\n");
- return;
+ goto out_disable;
}
/*
@@ -521,39 +536,15 @@
* - new fb is too large to fit in compressed buffer
* - going to an unsupported config (interlace, pixel multiply, etc.)
*/
- for_each_crtc(dev, tmp_crtc) {
- if (intel_crtc_active(tmp_crtc) &&
- to_intel_crtc(tmp_crtc)->primary_enabled) {
- if (crtc) {
- if (set_no_fbc_reason(dev_priv, FBC_MULTIPLE_PIPES))
- DRM_DEBUG_KMS("more than one pipe active, disabling compression\n");
- goto out_disable;
- }
- crtc = tmp_crtc;
- }
- }
-
- if (!crtc || crtc->primary->fb == NULL) {
- if (set_no_fbc_reason(dev_priv, FBC_NO_OUTPUT))
- DRM_DEBUG_KMS("no output, disabling\n");
+ crtc = intel_fbc_find_crtc(dev_priv);
+ if (!crtc)
goto out_disable;
- }
intel_crtc = to_intel_crtc(crtc);
fb = crtc->primary->fb;
obj = intel_fb_obj(fb);
adjusted_mode = &intel_crtc->config->base.adjusted_mode;
- if (i915.enable_fbc < 0) {
- if (set_no_fbc_reason(dev_priv, FBC_CHIP_DEFAULT))
- DRM_DEBUG_KMS("disabled per chip default\n");
- goto out_disable;
- }
- if (!i915.enable_fbc) {
- if (set_no_fbc_reason(dev_priv, FBC_MODULE_PARAM))
- DRM_DEBUG_KMS("fbc disabled per module param\n");
- goto out_disable;
- }
if ((adjusted_mode->flags & DRM_MODE_FLAG_INTERLACE) ||
(adjusted_mode->flags & DRM_MODE_FLAG_DBLSCAN)) {
if (set_no_fbc_reason(dev_priv, FBC_UNSUPPORTED_MODE))
@@ -617,7 +608,7 @@
* cannot be unpinned (and have its GTT offset and fence revoked)
* without first being decoupled from the scanout and FBC disabled.
*/
- if (dev_priv->fbc.plane == intel_crtc->plane &&
+ if (dev_priv->fbc.crtc == intel_crtc &&
dev_priv->fbc.fb_id == fb->base.id &&
dev_priv->fbc.y == crtc->y)
return;
@@ -663,6 +654,44 @@
i915_gem_stolen_cleanup_compression(dev);
}
+void intel_fbc_invalidate(struct drm_i915_private *dev_priv,
+ unsigned int frontbuffer_bits,
+ enum fb_op_origin origin)
+{
+ struct drm_device *dev = dev_priv->dev;
+ unsigned int fbc_bits;
+
+ if (origin == ORIGIN_GTT)
+ return;
+
+ if (dev_priv->fbc.enabled)
+ fbc_bits = INTEL_FRONTBUFFER_PRIMARY(dev_priv->fbc.crtc->pipe);
+ else if (dev_priv->fbc.fbc_work)
+ fbc_bits = INTEL_FRONTBUFFER_PRIMARY(
+ to_intel_crtc(dev_priv->fbc.fbc_work->crtc)->pipe);
+ else
+ fbc_bits = dev_priv->fbc.possible_framebuffer_bits;
+
+ dev_priv->fbc.busy_bits |= (fbc_bits & frontbuffer_bits);
+
+ if (dev_priv->fbc.busy_bits)
+ intel_fbc_disable(dev);
+}
+
+void intel_fbc_flush(struct drm_i915_private *dev_priv,
+ unsigned int frontbuffer_bits)
+{
+ struct drm_device *dev = dev_priv->dev;
+
+ if (!dev_priv->fbc.busy_bits)
+ return;
+
+ dev_priv->fbc.busy_bits &= ~frontbuffer_bits;
+
+ if (!dev_priv->fbc.busy_bits)
+ intel_fbc_update(dev);
+}
+
/**
* intel_fbc_init - Initialize FBC
* @dev_priv: the i915 device
@@ -671,11 +700,22 @@
*/
void intel_fbc_init(struct drm_i915_private *dev_priv)
{
+ enum pipe pipe;
+
if (!HAS_FBC(dev_priv)) {
dev_priv->fbc.enabled = false;
+ dev_priv->fbc.no_fbc_reason = FBC_UNSUPPORTED;
return;
}
+ for_each_pipe(dev_priv, pipe) {
+ dev_priv->fbc.possible_framebuffer_bits |=
+ INTEL_FRONTBUFFER_PRIMARY(pipe);
+
+ if (IS_HASWELL(dev_priv) || INTEL_INFO(dev_priv)->gen >= 8)
+ break;
+ }
+
if (INTEL_INFO(dev_priv)->gen >= 7) {
dev_priv->display.fbc_enabled = ilk_fbc_enabled;
dev_priv->display.enable_fbc = gen7_fbc_enable;
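[Editorial aside, not part of the patch: the new intel_fbc_invalidate()/intel_fbc_flush() pair above reduces to a small bookkeeping pattern, where invalidate ORs the affected frontbuffer bits into busy_bits and disables compression, and flush clears them and re-enables once nothing is pending. A standalone toy sketch of that pattern, with made-up names (fbc_state, fbc_disable, fbc_update), not i915 symbols:

#include <stdbool.h>
#include <stdio.h>

struct fbc_state {
	unsigned int busy_bits;	/* frontbuffer bits with pending writes */
	bool enabled;
};

static void fbc_disable(struct fbc_state *s) { s->enabled = false; }
static void fbc_update(struct fbc_state *s)  { s->enabled = true; }

static void fbc_invalidate(struct fbc_state *s, unsigned int bits)
{
	s->busy_bits |= bits;		/* remember which planes were dirtied */
	if (s->busy_bits)
		fbc_disable(s);		/* stop compressing while writes are pending */
}

static void fbc_flush(struct fbc_state *s, unsigned int bits)
{
	if (!s->busy_bits)
		return;
	s->busy_bits &= ~bits;		/* these writes have landed */
	if (!s->busy_bits)
		fbc_update(s);		/* nothing pending, re-evaluate FBC */
}

int main(void)
{
	struct fbc_state s = { .busy_bits = 0, .enabled = true };

	fbc_invalidate(&s, 0x1);	/* a write to the primary plane begins */
	fbc_flush(&s, 0x1);		/* the write completes and is flushed */
	printf("enabled=%d\n", s.enabled);
	return 0;
}
]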
diff --git a/drivers/gpu/drm/i915/intel_fbdev.c b/drivers/gpu/drm/i915/intel_fbdev.c
index 3001a867..4e7e7da 100644
--- a/drivers/gpu/drm/i915/intel_fbdev.c
+++ b/drivers/gpu/drm/i915/intel_fbdev.c
@@ -71,6 +71,31 @@
return ret;
}
+static int intel_fbdev_blank(int blank, struct fb_info *info)
+{
+ struct drm_fb_helper *fb_helper = info->par;
+ struct intel_fbdev *ifbdev =
+ container_of(fb_helper, struct intel_fbdev, helper);
+ int ret;
+
+ ret = drm_fb_helper_blank(blank, info);
+
+ if (ret == 0) {
+ /*
+ * FIXME: fbdev presumes that all callbacks also work from
+ * atomic contexts and relies on that for emergency oops
+ * printing. KMS totally doesn't do that and the locking here is
+ * by far not the only place this goes wrong. Ignore this for
+ * now until we solve this for real.
+ */
+ mutex_lock(&fb_helper->dev->struct_mutex);
+ intel_fb_obj_invalidate(ifbdev->fb->obj, NULL, ORIGIN_GTT);
+ mutex_unlock(&fb_helper->dev->struct_mutex);
+ }
+
+ return ret;
+}
+
static struct fb_ops intelfb_ops = {
.owner = THIS_MODULE,
.fb_check_var = drm_fb_helper_check_var,
@@ -79,7 +104,7 @@
.fb_copyarea = cfb_copyarea,
.fb_imageblit = cfb_imageblit,
.fb_pan_display = drm_fb_helper_pan_display,
- .fb_blank = drm_fb_helper_blank,
+ .fb_blank = intel_fbdev_blank,
.fb_setcmap = drm_fb_helper_setcmap,
.fb_debug_enter = drm_fb_helper_debug_enter,
.fb_debug_leave = drm_fb_helper_debug_leave,
@@ -126,7 +151,7 @@
}
/* Flush everything out, we'll be doing GTT only from now on */
- ret = intel_pin_and_fence_fb_obj(NULL, fb, NULL);
+ ret = intel_pin_and_fence_fb_obj(NULL, fb, NULL, NULL);
if (ret) {
DRM_ERROR("failed to pin obj: %d\n", ret);
goto out_fb;
@@ -594,7 +619,8 @@
cur_size = intel_crtc->config->base.adjusted_mode.crtc_vdisplay;
cur_size = intel_fb_align_height(dev, cur_size,
- plane_config->tiling);
+ fb->base.pixel_format,
+ fb->base.modifier[0]);
cur_size *= fb->base.pitches[0];
DRM_DEBUG_KMS("pipe %c area: %dx%d, bpp: %d, size: %d\n",
pipe_name(intel_crtc->pipe),
diff --git a/drivers/gpu/drm/i915/intel_frontbuffer.c b/drivers/gpu/drm/i915/intel_frontbuffer.c
index 73cb6e0..a20cffb 100644
--- a/drivers/gpu/drm/i915/intel_frontbuffer.c
+++ b/drivers/gpu/drm/i915/intel_frontbuffer.c
@@ -110,16 +110,11 @@
struct drm_i915_private *dev_priv = dev->dev_private;
enum pipe pipe;
- if (!i915.powersave)
- return;
-
for_each_pipe(dev_priv, pipe) {
if (!(frontbuffer_bits & INTEL_FRONTBUFFER_ALL_MASK(pipe)))
continue;
intel_increase_pllclock(dev, pipe);
- if (ring && intel_fbc_enabled(dev))
- ring->fbc_dirty = true;
}
}
@@ -127,6 +122,7 @@
* intel_fb_obj_invalidate - invalidate frontbuffer object
* @obj: GEM object to invalidate
* @ring: set for asynchronous rendering
+ * @origin: which operation caused the invalidation
*
* This function gets called every time rendering on the given object starts and
* frontbuffer caching (fbc, low refresh rate for DRRS, panel self refresh) must
@@ -135,7 +131,8 @@
* scheduled.
*/
void intel_fb_obj_invalidate(struct drm_i915_gem_object *obj,
- struct intel_engine_cs *ring)
+ struct intel_engine_cs *ring,
+ enum fb_op_origin origin)
{
struct drm_device *dev = obj->base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
@@ -158,6 +155,7 @@
intel_psr_invalidate(dev, obj->frontbuffer_bits);
intel_edp_drrs_invalidate(dev, obj->frontbuffer_bits);
+ intel_fbc_invalidate(dev_priv, obj->frontbuffer_bits, origin);
}
/**
@@ -185,16 +183,7 @@
intel_edp_drrs_flush(dev, frontbuffer_bits);
intel_psr_flush(dev, frontbuffer_bits);
-
- /*
- * FIXME: Unconditional fbc flushing here is a rather gross hack and
- * needs to be reworked into a proper frontbuffer tracking scheme like
- * psr employs.
- */
- if (dev_priv->fbc.need_sw_cache_clean) {
- dev_priv->fbc.need_sw_cache_clean = false;
- bdw_fbc_sw_flush(dev, FBC_REND_CACHE_CLEAN);
- }
+ intel_fbc_flush(dev_priv, frontbuffer_bits);
}
/**
diff --git a/drivers/gpu/drm/i915/intel_hdmi.c b/drivers/gpu/drm/i915/intel_hdmi.c
index 995c5b2..bfbe07b 100644
--- a/drivers/gpu/drm/i915/intel_hdmi.c
+++ b/drivers/gpu/drm/i915/intel_hdmi.c
@@ -951,19 +951,30 @@
return MODE_OK;
}
-static bool hdmi_12bpc_possible(struct intel_crtc *crtc)
+static bool hdmi_12bpc_possible(struct intel_crtc_state *crtc_state)
{
- struct drm_device *dev = crtc->base.dev;
+ struct drm_device *dev = crtc_state->base.crtc->dev;
+ struct drm_atomic_state *state;
struct intel_encoder *encoder;
+ struct drm_connector_state *connector_state;
int count = 0, count_hdmi = 0;
+ int i;
if (HAS_GMCH_DISPLAY(dev))
return false;
- for_each_intel_encoder(dev, encoder) {
- if (encoder->new_crtc != crtc)
+ state = crtc_state->base.state;
+
+ for (i = 0; i < state->num_connector; i++) {
+ if (!state->connectors[i])
continue;
+ connector_state = state->connector_states[i];
+ if (connector_state->crtc != crtc_state->base.crtc)
+ continue;
+
+ encoder = to_intel_encoder(connector_state->best_encoder);
+
count_hdmi += encoder->type == INTEL_OUTPUT_HDMI;
count++;
}
@@ -1020,7 +1031,7 @@
*/
if (pipe_config->pipe_bpp > 8*3 && pipe_config->has_hdmi_sink &&
clock_12bpc <= portclock_limit &&
- hdmi_12bpc_possible(encoder->new_crtc)) {
+ hdmi_12bpc_possible(pipe_config)) {
DRM_DEBUG_KMS("picking bpc to 12 for HDMI output\n");
desired_bpp = 12*3;
@@ -1504,11 +1515,6 @@
/* Program Tx latency optimal setting */
for (i = 0; i < 4; i++) {
- /* Set the latency optimal bit */
- data = (i == 1) ? 0x0 : 0x6;
- vlv_dpio_write(dev_priv, pipe, CHV_TX_DW11(ch, i),
- data << DPIO_FRC_LATENCY_SHFIT);
-
/* Set the upar bit */
data = (i == 1) ? 0x0 : 0x1;
vlv_dpio_write(dev_priv, pipe, CHV_TX_DW14(ch, i),
@@ -1618,6 +1624,7 @@
.atomic_get_property = intel_connector_atomic_get_property,
.destroy = intel_hdmi_destroy,
.atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
+ .atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
};
static const struct drm_connector_helper_funcs intel_hdmi_connector_helper_funcs = {
@@ -1743,7 +1750,7 @@
if (!intel_dig_port)
return;
- intel_connector = kzalloc(sizeof(*intel_connector), GFP_KERNEL);
+ intel_connector = intel_connector_alloc();
if (!intel_connector) {
kfree(intel_dig_port);
return;
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index e8d3da9..fcb074b 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -254,8 +254,10 @@
return lrca >> 12;
}
-static uint64_t execlists_ctx_descriptor(struct drm_i915_gem_object *ctx_obj)
+static uint64_t execlists_ctx_descriptor(struct intel_engine_cs *ring,
+ struct drm_i915_gem_object *ctx_obj)
{
+ struct drm_device *dev = ring->dev;
uint64_t desc;
uint64_t lrca = i915_gem_obj_ggtt_offset(ctx_obj);
@@ -272,6 +274,13 @@
* signalling between Command Streamers */
/* desc |= GEN8_CTX_FORCE_RESTORE; */
+ /* WaEnableForceRestoreInCtxtDescForVCS:skl */
+ if (IS_GEN9(dev) &&
+ INTEL_REVID(dev) <= SKL_REVID_B0 &&
+ (ring->id == BCS || ring->id == VCS ||
+ ring->id == VECS || ring->id == VCS2))
+ desc |= GEN8_CTX_FORCE_RESTORE;
+
return desc;
}
@@ -286,13 +295,13 @@
/* XXX: You must always write both descriptors in the order below. */
if (ctx_obj1)
- temp = execlists_ctx_descriptor(ctx_obj1);
+ temp = execlists_ctx_descriptor(ring, ctx_obj1);
else
temp = 0;
desc[1] = (u32)(temp >> 32);
desc[0] = (u32)temp;
- temp = execlists_ctx_descriptor(ctx_obj0);
+ temp = execlists_ctx_descriptor(ring, ctx_obj0);
desc[3] = (u32)(temp >> 32);
desc[2] = (u32)temp;
@@ -612,7 +621,7 @@
* @vmas: list of vmas.
* @batch_obj: the batchbuffer to submit.
* @exec_start: batchbuffer start virtual address pointer.
- * @flags: translated execbuffer call flags.
+ * @dispatch_flags: translated execbuffer call flags.
*
* This is the evil twin version of i915_gem_ringbuffer_submission. It abstracts
* away the submission details of the execbuffer ioctl call.
@@ -625,7 +634,7 @@
struct drm_i915_gem_execbuffer2 *args,
struct list_head *vmas,
struct drm_i915_gem_object *batch_obj,
- u64 exec_start, u32 flags)
+ u64 exec_start, u32 dispatch_flags)
{
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_ringbuffer *ringbuf = ctx->engine[ring->id].ringbuf;
@@ -698,10 +707,12 @@
dev_priv->relative_constants_mode = instp_mode;
}
- ret = ring->emit_bb_start(ringbuf, ctx, exec_start, flags);
+ ret = ring->emit_bb_start(ringbuf, ctx, exec_start, dispatch_flags);
if (ret)
return ret;
+ trace_i915_gem_ring_dispatch(intel_ring_get_request(ring), dispatch_flags);
+
i915_gem_execbuffer_move_to_active(vmas, ring);
i915_gem_execbuffer_retire_commands(dev, file, ring, batch_obj);
@@ -776,7 +787,7 @@
return 0;
}
-/**
+/*
* intel_logical_ring_advance_and_submit() - advance the tail and submit the workload
* @ringbuf: Logical Ringbuffer to advance.
*
@@ -785,9 +796,10 @@
* on a queue waiting for the ELSP to be ready to accept a new context submission. At that
* point, the tail *inside* the context is updated and the ELSP written to.
*/
-void intel_logical_ring_advance_and_submit(struct intel_ringbuffer *ringbuf,
- struct intel_context *ctx,
- struct drm_i915_gem_request *request)
+static void
+intel_logical_ring_advance_and_submit(struct intel_ringbuffer *ringbuf,
+ struct intel_context *ctx,
+ struct drm_i915_gem_request *request)
{
struct intel_engine_cs *ring = ringbuf->ring;
@@ -876,12 +888,9 @@
return ret;
}
- /* Hold a reference to the context this request belongs to
- * (we will need it when the time comes to emit/retire the
- * request).
- */
request->ctx = ctx;
i915_gem_context_reference(request->ctx);
+ request->ringbuf = ctx->engine[ring->id].ringbuf;
ring->outstanding_lazy_request = request;
return 0;
@@ -1140,11 +1149,22 @@
return init_workarounds_ring(ring);
}
+static int gen9_init_render_ring(struct intel_engine_cs *ring)
+{
+ int ret;
+
+ ret = gen8_init_common_ring(ring);
+ if (ret)
+ return ret;
+
+ return init_workarounds_ring(ring);
+}
+
static int gen8_emit_bb_start(struct intel_ringbuffer *ringbuf,
struct intel_context *ctx,
- u64 offset, unsigned flags)
+ u64 offset, unsigned dispatch_flags)
{
- bool ppgtt = !(flags & I915_DISPATCH_SECURE);
+ bool ppgtt = !(dispatch_flags & I915_DISPATCH_SECURE);
int ret;
ret = intel_logical_ring_begin(ringbuf, ctx, 4);
@@ -1316,6 +1336,39 @@
return 0;
}
+static int intel_lr_context_render_state_init(struct intel_engine_cs *ring,
+ struct intel_context *ctx)
+{
+ struct intel_ringbuffer *ringbuf = ctx->engine[ring->id].ringbuf;
+ struct render_state so;
+ struct drm_i915_file_private *file_priv = ctx->file_priv;
+ struct drm_file *file = file_priv ? file_priv->file : NULL;
+ int ret;
+
+ ret = i915_gem_render_state_prepare(ring, &so);
+ if (ret)
+ return ret;
+
+ if (so.rodata == NULL)
+ return 0;
+
+ ret = ring->emit_bb_start(ringbuf,
+ ctx,
+ so.ggtt_offset,
+ I915_DISPATCH_SECURE);
+ if (ret)
+ goto out;
+
+ i915_vma_move_to_active(i915_gem_obj_to_ggtt(so.obj), ring);
+
+ ret = __i915_add_request(ring, file, so.obj);
+ /* intel_logical_ring_add_request moves object to inactive if it
+ * fails */
+out:
+ i915_gem_render_state_fini(&so);
+ return ret;
+}
+
static int gen8_init_rcs_context(struct intel_engine_cs *ring,
struct intel_context *ctx)
{
@@ -1399,7 +1452,10 @@
if (HAS_L3_DPF(dev))
ring->irq_keep_mask |= GT_RENDER_L3_PARITY_ERROR_INTERRUPT;
- ring->init_hw = gen8_init_render_ring;
+ if (INTEL_INFO(dev)->gen >= 9)
+ ring->init_hw = gen9_init_render_ring;
+ else
+ ring->init_hw = gen8_init_render_ring;
ring->init_context = gen8_init_rcs_context;
ring->cleanup = intel_fini_pipe_control;
ring->get_seqno = gen8_get_seqno;
@@ -1581,37 +1637,47 @@
return ret;
}
-int intel_lr_context_render_state_init(struct intel_engine_cs *ring,
- struct intel_context *ctx)
+static u32
+make_rpcs(struct drm_device *dev)
{
- struct intel_ringbuffer *ringbuf = ctx->engine[ring->id].ringbuf;
- struct render_state so;
- struct drm_i915_file_private *file_priv = ctx->file_priv;
- struct drm_file *file = file_priv ? file_priv->file : NULL;
- int ret;
+ u32 rpcs = 0;
- ret = i915_gem_render_state_prepare(ring, &so);
- if (ret)
- return ret;
-
- if (so.rodata == NULL)
+ /*
+ * No explicit RPCS request is needed to ensure full
+ * slice/subslice/EU enablement prior to Gen9.
+ */
+ if (INTEL_INFO(dev)->gen < 9)
return 0;
- ret = ring->emit_bb_start(ringbuf,
- ctx,
- so.ggtt_offset,
- I915_DISPATCH_SECURE);
- if (ret)
- goto out;
+ /*
+ * Starting in Gen9, render power gating can leave
+ * slice/subslice/EU in a partially enabled state. We
+ * must make an explicit request through RPCS for full
+ * enablement.
+ */
+ if (INTEL_INFO(dev)->has_slice_pg) {
+ rpcs |= GEN8_RPCS_S_CNT_ENABLE;
+ rpcs |= INTEL_INFO(dev)->slice_total <<
+ GEN8_RPCS_S_CNT_SHIFT;
+ rpcs |= GEN8_RPCS_ENABLE;
+ }
- i915_vma_move_to_active(i915_gem_obj_to_ggtt(so.obj), ring);
+ if (INTEL_INFO(dev)->has_subslice_pg) {
+ rpcs |= GEN8_RPCS_SS_CNT_ENABLE;
+ rpcs |= INTEL_INFO(dev)->subslice_per_slice <<
+ GEN8_RPCS_SS_CNT_SHIFT;
+ rpcs |= GEN8_RPCS_ENABLE;
+ }
- ret = __i915_add_request(ring, file, so.obj);
- /* intel_logical_ring_add_request moves object to inactive if it
- * fails */
-out:
- i915_gem_render_state_fini(&so);
- return ret;
+ if (INTEL_INFO(dev)->has_eu_pg) {
+ rpcs |= INTEL_INFO(dev)->eu_per_subslice <<
+ GEN8_RPCS_EU_MIN_SHIFT;
+ rpcs |= INTEL_INFO(dev)->eu_per_subslice <<
+ GEN8_RPCS_EU_MAX_SHIFT;
+ rpcs |= GEN8_RPCS_ENABLE;
+ }
+
+ return rpcs;
}
static int
@@ -1659,7 +1725,8 @@
reg_state[CTX_LRI_HEADER_0] |= MI_LRI_FORCE_POSTED;
reg_state[CTX_CONTEXT_CONTROL] = RING_CONTEXT_CONTROL(ring);
reg_state[CTX_CONTEXT_CONTROL+1] =
- _MASKED_BIT_ENABLE((1<<3) | MI_RESTORE_INHIBIT);
+ _MASKED_BIT_ENABLE(CTX_CTRL_INHIBIT_SYN_CTX_SWITCH |
+ CTX_CTRL_ENGINE_CTX_RESTORE_INHIBIT);
reg_state[CTX_RING_HEAD] = RING_HEAD(ring->mmio_base);
reg_state[CTX_RING_HEAD+1] = 0;
reg_state[CTX_RING_TAIL] = RING_TAIL(ring->mmio_base);
@@ -1706,18 +1773,18 @@
reg_state[CTX_PDP1_LDW] = GEN8_RING_PDP_LDW(ring, 1);
reg_state[CTX_PDP0_UDW] = GEN8_RING_PDP_UDW(ring, 0);
reg_state[CTX_PDP0_LDW] = GEN8_RING_PDP_LDW(ring, 0);
- reg_state[CTX_PDP3_UDW+1] = upper_32_bits(ppgtt->pd_dma_addr[3]);
- reg_state[CTX_PDP3_LDW+1] = lower_32_bits(ppgtt->pd_dma_addr[3]);
- reg_state[CTX_PDP2_UDW+1] = upper_32_bits(ppgtt->pd_dma_addr[2]);
- reg_state[CTX_PDP2_LDW+1] = lower_32_bits(ppgtt->pd_dma_addr[2]);
- reg_state[CTX_PDP1_UDW+1] = upper_32_bits(ppgtt->pd_dma_addr[1]);
- reg_state[CTX_PDP1_LDW+1] = lower_32_bits(ppgtt->pd_dma_addr[1]);
- reg_state[CTX_PDP0_UDW+1] = upper_32_bits(ppgtt->pd_dma_addr[0]);
- reg_state[CTX_PDP0_LDW+1] = lower_32_bits(ppgtt->pd_dma_addr[0]);
+ reg_state[CTX_PDP3_UDW+1] = upper_32_bits(ppgtt->pdp.page_directory[3]->daddr);
+ reg_state[CTX_PDP3_LDW+1] = lower_32_bits(ppgtt->pdp.page_directory[3]->daddr);
+ reg_state[CTX_PDP2_UDW+1] = upper_32_bits(ppgtt->pdp.page_directory[2]->daddr);
+ reg_state[CTX_PDP2_LDW+1] = lower_32_bits(ppgtt->pdp.page_directory[2]->daddr);
+ reg_state[CTX_PDP1_UDW+1] = upper_32_bits(ppgtt->pdp.page_directory[1]->daddr);
+ reg_state[CTX_PDP1_LDW+1] = lower_32_bits(ppgtt->pdp.page_directory[1]->daddr);
+ reg_state[CTX_PDP0_UDW+1] = upper_32_bits(ppgtt->pdp.page_directory[0]->daddr);
+ reg_state[CTX_PDP0_LDW+1] = lower_32_bits(ppgtt->pdp.page_directory[0]->daddr);
if (ring->id == RCS) {
reg_state[CTX_LRI_HEADER_2] = MI_LOAD_REGISTER_IMM(1);
- reg_state[CTX_R_PWR_CLK_STATE] = 0x20c8;
- reg_state[CTX_R_PWR_CLK_STATE+1] = 0;
+ reg_state[CTX_R_PWR_CLK_STATE] = GEN8_R_PWR_CLK_STATE;
+ reg_state[CTX_R_PWR_CLK_STATE+1] = make_rpcs(dev);
}
kunmap_atomic(reg_state);
@@ -1925,3 +1992,38 @@
drm_gem_object_unreference(&ctx_obj->base);
return ret;
}
+
+void intel_lr_context_reset(struct drm_device *dev,
+ struct intel_context *ctx)
+{
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ struct intel_engine_cs *ring;
+ int i;
+
+ for_each_ring(ring, dev_priv, i) {
+ struct drm_i915_gem_object *ctx_obj =
+ ctx->engine[ring->id].state;
+ struct intel_ringbuffer *ringbuf =
+ ctx->engine[ring->id].ringbuf;
+ uint32_t *reg_state;
+ struct page *page;
+
+ if (!ctx_obj)
+ continue;
+
+ if (i915_gem_object_get_pages(ctx_obj)) {
+ WARN(1, "Failed get_pages for context obj\n");
+ continue;
+ }
+ page = i915_gem_object_get_page(ctx_obj, 1);
+ reg_state = kmap_atomic(page);
+
+ reg_state[CTX_RING_HEAD+1] = 0;
+ reg_state[CTX_RING_TAIL+1] = 0;
+
+ kunmap_atomic(reg_state);
+
+ ringbuf->head = 0;
+ ringbuf->tail = 0;
+ }
+}
diff --git a/drivers/gpu/drm/i915/intel_lrc.h b/drivers/gpu/drm/i915/intel_lrc.h
index 6f2d7da..adb731e4 100644
--- a/drivers/gpu/drm/i915/intel_lrc.h
+++ b/drivers/gpu/drm/i915/intel_lrc.h
@@ -30,6 +30,8 @@
#define RING_ELSP(ring) ((ring)->mmio_base+0x230)
#define RING_EXECLIST_STATUS(ring) ((ring)->mmio_base+0x234)
#define RING_CONTEXT_CONTROL(ring) ((ring)->mmio_base+0x244)
+#define CTX_CTRL_INHIBIT_SYN_CTX_SWITCH (1 << 3)
+#define CTX_CTRL_ENGINE_CTX_RESTORE_INHIBIT (1 << 0)
#define RING_CONTEXT_STATUS_BUF(ring) ((ring)->mmio_base+0x370)
#define RING_CONTEXT_STATUS_PTR(ring) ((ring)->mmio_base+0x3a0)
@@ -40,10 +42,6 @@
int logical_ring_flush_all_caches(struct intel_ringbuffer *ringbuf,
struct intel_context *ctx);
-void intel_logical_ring_advance_and_submit(
- struct intel_ringbuffer *ringbuf,
- struct intel_context *ctx,
- struct drm_i915_gem_request *request);
/**
* intel_logical_ring_advance() - advance the ringbuffer tail
* @ringbuf: Ringbuffer to advance.
@@ -70,13 +68,13 @@
int num_dwords);
/* Logical Ring Contexts */
-int intel_lr_context_render_state_init(struct intel_engine_cs *ring,
- struct intel_context *ctx);
void intel_lr_context_free(struct intel_context *ctx);
int intel_lr_context_deferred_create(struct intel_context *ctx,
struct intel_engine_cs *ring);
void intel_lr_context_unpin(struct intel_engine_cs *ring,
struct intel_context *ctx);
+void intel_lr_context_reset(struct drm_device *dev,
+ struct intel_context *ctx);
/* Execlists */
int intel_sanitize_enable_execlists(struct drm_device *dev, int enable_execlists);
@@ -86,7 +84,7 @@
struct drm_i915_gem_execbuffer2 *args,
struct list_head *vmas,
struct drm_i915_gem_object *batch_obj,
- u64 exec_start, u32 flags);
+ u64 exec_start, u32 dispatch_flags);
u32 intel_execlists_ctx_id(struct drm_i915_gem_object *ctx_obj);
void intel_lrc_irq_handler(struct intel_engine_cs *ring);
diff --git a/drivers/gpu/drm/i915/intel_lvds.c b/drivers/gpu/drm/i915/intel_lvds.c
index 071b96d..5abda1d 100644
--- a/drivers/gpu/drm/i915/intel_lvds.c
+++ b/drivers/gpu/drm/i915/intel_lvds.c
@@ -286,7 +286,7 @@
struct intel_connector *intel_connector =
&lvds_encoder->attached_connector->base;
struct drm_display_mode *adjusted_mode = &pipe_config->base.adjusted_mode;
- struct intel_crtc *intel_crtc = lvds_encoder->base.new_crtc;
+ struct intel_crtc *intel_crtc = to_intel_crtc(pipe_config->base.crtc);
unsigned int lvds_bpp;
/* Should never happen!! */
@@ -509,7 +509,7 @@
intel_connector->panel.fitting_mode = value;
crtc = intel_attached_encoder(connector)->base.crtc;
- if (crtc && crtc->enabled) {
+ if (crtc && crtc->state->enable) {
/*
* If the CRTC is enabled, the display will be changed
* according to the new panel fitting mode.
@@ -535,6 +535,7 @@
.atomic_get_property = intel_connector_atomic_get_property,
.destroy = intel_lvds_destroy,
.atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
+ .atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
};
static const struct drm_encoder_funcs intel_lvds_enc_funcs = {
@@ -945,6 +946,12 @@
return;
}
+ if (intel_connector_init(&lvds_connector->base) < 0) {
+ kfree(lvds_connector);
+ kfree(lvds_encoder);
+ return;
+ }
+
lvds_encoder->attached_connector = lvds_connector;
intel_encoder = &lvds_encoder->base;
diff --git a/drivers/gpu/drm/i915/intel_opregion.c b/drivers/gpu/drm/i915/intel_opregion.c
index d8de1d5..71e87ab 100644
--- a/drivers/gpu/drm/i915/intel_opregion.c
+++ b/drivers/gpu/drm/i915/intel_opregion.c
@@ -744,10 +744,8 @@
return;
if (opregion->acpi) {
- if (drm_core_check_feature(dev, DRIVER_MODESET)) {
- intel_didl_outputs(dev);
- intel_setup_cadls(dev);
- }
+ intel_didl_outputs(dev);
+ intel_setup_cadls(dev);
/* Notify BIOS we are ready to handle ACPI video ext notifs.
* Right now, all the events are handled by the ACPI video module.
diff --git a/drivers/gpu/drm/i915/intel_overlay.c b/drivers/gpu/drm/i915/intel_overlay.c
index f93dfc1..dd92122 100644
--- a/drivers/gpu/drm/i915/intel_overlay.c
+++ b/drivers/gpu/drm/i915/intel_overlay.c
@@ -720,7 +720,8 @@
if (ret != 0)
return ret;
- ret = i915_gem_object_pin_to_display_plane(new_bo, 0, NULL);
+ ret = i915_gem_object_pin_to_display_plane(new_bo, 0, NULL,
+ &i915_ggtt_view_normal);
if (ret != 0)
return ret;
@@ -1065,7 +1066,6 @@
struct put_image_params *params;
int ret;
- /* No need to check for DRIVER_MODESET - we don't set it up then. */
overlay = dev_priv->overlay;
if (!overlay) {
DRM_DEBUG("userspace bug: no overlay\n");
@@ -1261,7 +1261,6 @@
struct overlay_registers __iomem *regs;
int ret;
- /* No need to check for DRIVER_MODESET - we don't set it up then. */
overlay = dev_priv->overlay;
if (!overlay) {
DRM_DEBUG("userspace bug: no overlay\n");
diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index 24d77dd..fa4ccb3 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -56,24 +56,42 @@
{
struct drm_i915_private *dev_priv = dev->dev_private;
- /*
- * WaDisableSDEUnitClockGating:skl
- * This seems to be a pre-production w/a.
- */
- I915_WRITE(GEN8_UCGCTL6, I915_READ(GEN8_UCGCTL6) |
- GEN8_SDEUNIT_CLOCK_GATE_DISABLE);
+ /* WaEnableLbsSlaRetryTimerDecrement:skl */
+ I915_WRITE(BDW_SCRATCH1, I915_READ(BDW_SCRATCH1) |
+ GEN9_LBS_SLA_RETRY_TIMER_DECREMENT_ENABLE);
+}
- /*
- * WaDisableDgMirrorFixInHalfSliceChicken5:skl
- * This is a pre-production w/a.
- */
- I915_WRITE(GEN9_HALF_SLICE_CHICKEN5,
- I915_READ(GEN9_HALF_SLICE_CHICKEN5) &
- ~GEN9_DG_MIRROR_FIX_ENABLE);
+static void skl_init_clock_gating(struct drm_device *dev)
+{
+ struct drm_i915_private *dev_priv = dev->dev_private;
- /* Wa4x4STCOptimizationDisable:skl */
- I915_WRITE(CACHE_MODE_1,
- _MASKED_BIT_ENABLE(GEN8_4x4_STC_OPTIMIZATION_DISABLE));
+ gen9_init_clock_gating(dev);
+
+ if (INTEL_REVID(dev) == SKL_REVID_A0) {
+ /*
+ * WaDisableSDEUnitClockGating:skl
+ * WaSetGAPSunitClckGateDisable:skl
+ */
+ I915_WRITE(GEN8_UCGCTL6, I915_READ(GEN8_UCGCTL6) |
+ GEN8_GAPSUNIT_CLOCK_GATE_DISABLE |
+ GEN8_SDEUNIT_CLOCK_GATE_DISABLE);
+ }
+
+ if (INTEL_REVID(dev) <= SKL_REVID_D0) {
+ /* WaDisableHDCInvalidation:skl */
+ I915_WRITE(GAM_ECOCHK, I915_READ(GAM_ECOCHK) |
+ BDW_DISABLE_HDC_INVALIDATION);
+
+ /* WaDisableChickenBitTSGBarrierAckForFFSliceCS:skl */
+ I915_WRITE(FF_SLICE_CS_CHICKEN2,
+ I915_READ(FF_SLICE_CS_CHICKEN2) |
+ GEN9_TSG_BARRIER_ACK_DISABLE);
+ }
+
+ if (INTEL_REVID(dev) <= SKL_REVID_E0)
+ /* WaDisableLSQCROPERFforOCL:skl */
+ I915_WRITE(GEN8_L3SQCREG4, I915_READ(GEN8_L3SQCREG4) |
+ GEN8_LQSC_RO_PERF_DIS);
}
static void i915_pineview_get_mem_freq(struct drm_device *dev)
@@ -245,6 +263,47 @@
return NULL;
}
+static void chv_set_memory_dvfs(struct drm_i915_private *dev_priv, bool enable)
+{
+ u32 val;
+
+ mutex_lock(&dev_priv->rps.hw_lock);
+
+ val = vlv_punit_read(dev_priv, PUNIT_REG_DDR_SETUP2);
+ if (enable)
+ val &= ~FORCE_DDR_HIGH_FREQ;
+ else
+ val |= FORCE_DDR_HIGH_FREQ;
+ val &= ~FORCE_DDR_LOW_FREQ;
+ val |= FORCE_DDR_FREQ_REQ_ACK;
+ vlv_punit_write(dev_priv, PUNIT_REG_DDR_SETUP2, val);
+
+ if (wait_for((vlv_punit_read(dev_priv, PUNIT_REG_DDR_SETUP2) &
+ FORCE_DDR_FREQ_REQ_ACK) == 0, 3))
+ DRM_ERROR("timed out waiting for Punit DDR DVFS request\n");
+
+ mutex_unlock(&dev_priv->rps.hw_lock);
+}
+
+static void chv_set_memory_pm5(struct drm_i915_private *dev_priv, bool enable)
+{
+ u32 val;
+
+ mutex_lock(&dev_priv->rps.hw_lock);
+
+ val = vlv_punit_read(dev_priv, PUNIT_REG_DSPFREQ);
+ if (enable)
+ val |= DSP_MAXFIFO_PM5_ENABLE;
+ else
+ val &= ~DSP_MAXFIFO_PM5_ENABLE;
+ vlv_punit_write(dev_priv, PUNIT_REG_DSPFREQ, val);
+
+ mutex_unlock(&dev_priv->rps.hw_lock);
+}
+
+#define FW_WM(value, plane) \
+ (((value) << DSPFW_ ## plane ## _SHIFT) & DSPFW_ ## plane ## _MASK)
+
void intel_set_memory_cxsr(struct drm_i915_private *dev_priv, bool enable)
{
struct drm_device *dev = dev_priv->dev;
@@ -252,6 +311,8 @@
if (IS_VALLEYVIEW(dev)) {
I915_WRITE(FW_BLC_SELF_VLV, enable ? FW_CSPWRDWNEN : 0);
+ if (IS_CHERRYVIEW(dev))
+ chv_set_memory_pm5(dev_priv, enable);
} else if (IS_G4X(dev) || IS_CRESTLINE(dev)) {
I915_WRITE(FW_BLC_SELF, enable ? FW_BLC_SELF_EN : 0);
} else if (IS_PINEVIEW(dev)) {
@@ -274,6 +335,7 @@
enable ? "enabled" : "disabled");
}
+
/*
* Latency for FIFO fetches is dependent on several factors:
* - memory configuration (speed, channels)
@@ -290,6 +352,61 @@
*/
static const int pessimal_latency_ns = 5000;
+#define VLV_FIFO_START(dsparb, dsparb2, lo_shift, hi_shift) \
+ ((((dsparb) >> (lo_shift)) & 0xff) | ((((dsparb2) >> (hi_shift)) & 0x1) << 8))
+
+static int vlv_get_fifo_size(struct drm_device *dev,
+ enum pipe pipe, int plane)
+{
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ int sprite0_start, sprite1_start, size;
+
+ switch (pipe) {
+ uint32_t dsparb, dsparb2, dsparb3;
+ case PIPE_A:
+ dsparb = I915_READ(DSPARB);
+ dsparb2 = I915_READ(DSPARB2);
+ sprite0_start = VLV_FIFO_START(dsparb, dsparb2, 0, 0);
+ sprite1_start = VLV_FIFO_START(dsparb, dsparb2, 8, 4);
+ break;
+ case PIPE_B:
+ dsparb = I915_READ(DSPARB);
+ dsparb2 = I915_READ(DSPARB2);
+ sprite0_start = VLV_FIFO_START(dsparb, dsparb2, 16, 8);
+ sprite1_start = VLV_FIFO_START(dsparb, dsparb2, 24, 12);
+ break;
+ case PIPE_C:
+ dsparb2 = I915_READ(DSPARB2);
+ dsparb3 = I915_READ(DSPARB3);
+ sprite0_start = VLV_FIFO_START(dsparb3, dsparb2, 0, 16);
+ sprite1_start = VLV_FIFO_START(dsparb3, dsparb2, 8, 20);
+ break;
+ default:
+ return 0;
+ }
+
+ switch (plane) {
+ case 0:
+ size = sprite0_start;
+ break;
+ case 1:
+ size = sprite1_start - sprite0_start;
+ break;
+ case 2:
+ size = 512 - 1 - sprite1_start;
+ break;
+ default:
+ return 0;
+ }
+
+ DRM_DEBUG_KMS("Pipe %c %s %c FIFO size: %d\n",
+ pipe_name(pipe), plane == 0 ? "primary" : "sprite",
+ plane == 0 ? plane_name(pipe) : sprite_name(pipe, plane - 1),
+ size);
+
+ return size;
+}
+
static int i9xx_get_fifo_size(struct drm_device *dev, int plane)
{
struct drm_i915_private *dev_priv = dev->dev_private;
@@ -535,7 +652,7 @@
crtc = single_enabled_crtc(dev);
if (crtc) {
const struct drm_display_mode *adjusted_mode;
- int pixel_size = crtc->primary->fb->bits_per_pixel / 8;
+ int pixel_size = crtc->primary->state->fb->bits_per_pixel / 8;
int clock;
adjusted_mode = &to_intel_crtc(crtc)->config->base.adjusted_mode;
@@ -547,7 +664,7 @@
pixel_size, latency->display_sr);
reg = I915_READ(DSPFW1);
reg &= ~DSPFW_SR_MASK;
- reg |= wm << DSPFW_SR_SHIFT;
+ reg |= FW_WM(wm, SR);
I915_WRITE(DSPFW1, reg);
DRM_DEBUG_KMS("DSPFW1 register is %x\n", reg);
@@ -557,7 +674,7 @@
pixel_size, latency->cursor_sr);
reg = I915_READ(DSPFW3);
reg &= ~DSPFW_CURSOR_SR_MASK;
- reg |= (wm & 0x3f) << DSPFW_CURSOR_SR_SHIFT;
+ reg |= FW_WM(wm, CURSOR_SR);
I915_WRITE(DSPFW3, reg);
/* Display HPLL off SR */
@@ -566,7 +683,7 @@
pixel_size, latency->display_hpll_disable);
reg = I915_READ(DSPFW3);
reg &= ~DSPFW_HPLL_SR_MASK;
- reg |= wm & DSPFW_HPLL_SR_MASK;
+ reg |= FW_WM(wm, HPLL_SR);
I915_WRITE(DSPFW3, reg);
/* cursor HPLL off SR */
@@ -575,7 +692,7 @@
pixel_size, latency->cursor_hpll_disable);
reg = I915_READ(DSPFW3);
reg &= ~DSPFW_HPLL_CURSOR_MASK;
- reg |= (wm & 0x3f) << DSPFW_HPLL_CURSOR_SHIFT;
+ reg |= FW_WM(wm, HPLL_CURSOR);
I915_WRITE(DSPFW3, reg);
DRM_DEBUG_KMS("DSPFW3 register is %x\n", reg);
@@ -611,7 +728,7 @@
clock = adjusted_mode->crtc_clock;
htotal = adjusted_mode->crtc_htotal;
hdisplay = to_intel_crtc(crtc)->config->pipe_src_w;
- pixel_size = crtc->primary->fb->bits_per_pixel / 8;
+ pixel_size = crtc->primary->state->fb->bits_per_pixel / 8;
/* Use the small buffer method to calculate plane watermark */
entries = ((clock * pixel_size / 1000) * display_latency_ns) / 1000;
@@ -626,7 +743,7 @@
/* Use the large buffer method to calculate cursor watermark */
line_time_us = max(htotal * 1000 / clock, 1);
line_count = (cursor_latency_ns / line_time_us + 1000) / 1000;
- entries = line_count * to_intel_crtc(crtc)->cursor_width * pixel_size;
+ entries = line_count * crtc->cursor->state->crtc_w * pixel_size;
tlb_miss = cursor->fifo_size*cursor->cacheline_size - hdisplay * 8;
if (tlb_miss > 0)
entries += tlb_miss;
@@ -698,7 +815,7 @@
clock = adjusted_mode->crtc_clock;
htotal = adjusted_mode->crtc_htotal;
hdisplay = to_intel_crtc(crtc)->config->pipe_src_w;
- pixel_size = crtc->primary->fb->bits_per_pixel / 8;
+ pixel_size = crtc->primary->state->fb->bits_per_pixel / 8;
line_time_us = max(htotal * 1000 / clock, 1);
line_count = (latency_ns / line_time_us + 1000) / 1000;
@@ -712,7 +829,7 @@
*display_wm = entries + display->guard_size;
/* calculate the self-refresh watermark for display cursor */
- entries = line_count * pixel_size * to_intel_crtc(crtc)->cursor_width;
+ entries = line_count * pixel_size * crtc->cursor->state->crtc_w;
entries = DIV_ROUND_UP(entries, cursor->cacheline_size);
*cursor_wm = entries + cursor->guard_size;
@@ -721,232 +838,234 @@
display, cursor);
}
-static bool vlv_compute_drain_latency(struct drm_crtc *crtc,
- int pixel_size,
- int *prec_mult,
- int *drain_latency)
+#define FW_WM_VLV(value, plane) \
+ (((value) << DSPFW_ ## plane ## _SHIFT) & DSPFW_ ## plane ## _MASK_VLV)
+
+static void vlv_write_wm_values(struct intel_crtc *crtc,
+ const struct vlv_wm_values *wm)
+{
+ struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
+ enum pipe pipe = crtc->pipe;
+
+ I915_WRITE(VLV_DDL(pipe),
+ (wm->ddl[pipe].cursor << DDL_CURSOR_SHIFT) |
+ (wm->ddl[pipe].sprite[1] << DDL_SPRITE_SHIFT(1)) |
+ (wm->ddl[pipe].sprite[0] << DDL_SPRITE_SHIFT(0)) |
+ (wm->ddl[pipe].primary << DDL_PLANE_SHIFT));
+
+ I915_WRITE(DSPFW1,
+ FW_WM(wm->sr.plane, SR) |
+ FW_WM(wm->pipe[PIPE_B].cursor, CURSORB) |
+ FW_WM_VLV(wm->pipe[PIPE_B].primary, PLANEB) |
+ FW_WM_VLV(wm->pipe[PIPE_A].primary, PLANEA));
+ I915_WRITE(DSPFW2,
+ FW_WM_VLV(wm->pipe[PIPE_A].sprite[1], SPRITEB) |
+ FW_WM(wm->pipe[PIPE_A].cursor, CURSORA) |
+ FW_WM_VLV(wm->pipe[PIPE_A].sprite[0], SPRITEA));
+ I915_WRITE(DSPFW3,
+ FW_WM(wm->sr.cursor, CURSOR_SR));
+
+ if (IS_CHERRYVIEW(dev_priv)) {
+ I915_WRITE(DSPFW7_CHV,
+ FW_WM_VLV(wm->pipe[PIPE_B].sprite[1], SPRITED) |
+ FW_WM_VLV(wm->pipe[PIPE_B].sprite[0], SPRITEC));
+ I915_WRITE(DSPFW8_CHV,
+ FW_WM_VLV(wm->pipe[PIPE_C].sprite[1], SPRITEF) |
+ FW_WM_VLV(wm->pipe[PIPE_C].sprite[0], SPRITEE));
+ I915_WRITE(DSPFW9_CHV,
+ FW_WM_VLV(wm->pipe[PIPE_C].primary, PLANEC) |
+ FW_WM(wm->pipe[PIPE_C].cursor, CURSORC));
+ I915_WRITE(DSPHOWM,
+ FW_WM(wm->sr.plane >> 9, SR_HI) |
+ FW_WM(wm->pipe[PIPE_C].sprite[1] >> 8, SPRITEF_HI) |
+ FW_WM(wm->pipe[PIPE_C].sprite[0] >> 8, SPRITEE_HI) |
+ FW_WM(wm->pipe[PIPE_C].primary >> 8, PLANEC_HI) |
+ FW_WM(wm->pipe[PIPE_B].sprite[1] >> 8, SPRITED_HI) |
+ FW_WM(wm->pipe[PIPE_B].sprite[0] >> 8, SPRITEC_HI) |
+ FW_WM(wm->pipe[PIPE_B].primary >> 8, PLANEB_HI) |
+ FW_WM(wm->pipe[PIPE_A].sprite[1] >> 8, SPRITEB_HI) |
+ FW_WM(wm->pipe[PIPE_A].sprite[0] >> 8, SPRITEA_HI) |
+ FW_WM(wm->pipe[PIPE_A].primary >> 8, PLANEA_HI));
+ } else {
+ I915_WRITE(DSPFW7,
+ FW_WM_VLV(wm->pipe[PIPE_B].sprite[1], SPRITED) |
+ FW_WM_VLV(wm->pipe[PIPE_B].sprite[0], SPRITEC));
+ I915_WRITE(DSPHOWM,
+ FW_WM(wm->sr.plane >> 9, SR_HI) |
+ FW_WM(wm->pipe[PIPE_B].sprite[1] >> 8, SPRITED_HI) |
+ FW_WM(wm->pipe[PIPE_B].sprite[0] >> 8, SPRITEC_HI) |
+ FW_WM(wm->pipe[PIPE_B].primary >> 8, PLANEB_HI) |
+ FW_WM(wm->pipe[PIPE_A].sprite[1] >> 8, SPRITEB_HI) |
+ FW_WM(wm->pipe[PIPE_A].sprite[0] >> 8, SPRITEA_HI) |
+ FW_WM(wm->pipe[PIPE_A].primary >> 8, PLANEA_HI));
+ }
+
+ POSTING_READ(DSPFW1);
+
+ dev_priv->wm.vlv = *wm;
+}
+
+#undef FW_WM_VLV
+
+static uint8_t vlv_compute_drain_latency(struct drm_crtc *crtc,
+ struct drm_plane *plane)
{
struct drm_device *dev = crtc->dev;
- int entries;
- int clock = to_intel_crtc(crtc)->config->base.adjusted_mode.crtc_clock;
+ struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
+ int entries, prec_mult, drain_latency, pixel_size;
+ int clock = intel_crtc->config->base.adjusted_mode.crtc_clock;
+ const int high_precision = IS_CHERRYVIEW(dev) ? 16 : 64;
+
+ /*
+ * FIXME the plane might have an fb
+ * but be invisible (e.g. due to clipping)
+ */
+ if (!intel_crtc->active || !plane->state->fb)
+ return 0;
if (WARN(clock == 0, "Pixel clock is zero!\n"))
- return false;
+ return 0;
+
+ pixel_size = drm_format_plane_cpp(plane->state->fb->pixel_format, 0);
if (WARN(pixel_size == 0, "Pixel size is zero!\n"))
- return false;
+ return 0;
entries = DIV_ROUND_UP(clock, 1000) * pixel_size;
- if (IS_CHERRYVIEW(dev))
- *prec_mult = (entries > 128) ? DRAIN_LATENCY_PRECISION_32 :
- DRAIN_LATENCY_PRECISION_16;
- else
- *prec_mult = (entries > 128) ? DRAIN_LATENCY_PRECISION_64 :
- DRAIN_LATENCY_PRECISION_32;
- *drain_latency = (64 * (*prec_mult) * 4) / entries;
- if (*drain_latency > DRAIN_LATENCY_MASK)
- *drain_latency = DRAIN_LATENCY_MASK;
+ prec_mult = high_precision;
+ drain_latency = 64 * prec_mult * 4 / entries;
+
+ if (drain_latency > DRAIN_LATENCY_MASK) {
+ prec_mult /= 2;
+ drain_latency = 64 * prec_mult * 4 / entries;
+ }
+
+ if (drain_latency > DRAIN_LATENCY_MASK)
+ drain_latency = DRAIN_LATENCY_MASK;
+
+ return drain_latency | (prec_mult == high_precision ?
+ DDL_PRECISION_HIGH : DDL_PRECISION_LOW);
+}
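/*
 * A worked example of the drain latency computation above (a sketch;
 * the field width is an assumption, with DRAIN_LATENCY_MASK taken to
 * be 0x3f).  For a VLV pipe running a 148500 kHz pixel clock with a
 * 4 byte-per-pixel plane:
 *
 *   entries       = DIV_ROUND_UP(148500, 1000) * 4 = 149 * 4 = 596
 *   drain_latency = 64 * 64 * 4 / 596              = 27
 *
 * 27 fits in the mask, so the function returns 27 | DDL_PRECISION_HIGH.
 * Had the first pass overflowed the mask, prec_mult would be halved
 * (64 -> 32) and the value recomputed once before the final clamp.
 */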
+
+static int vlv_compute_wm(struct intel_crtc *crtc,
+ struct intel_plane *plane,
+ int fifo_size)
+{
+ int clock, entries, pixel_size;
+
+ /*
+ * FIXME the plane might have an fb
+ * but be invisible (eg. due to clipping)
+ */
+ if (!crtc->active || !plane->base.state->fb)
+ return 0;
+
+ pixel_size = drm_format_plane_cpp(plane->base.state->fb->pixel_format, 0);
+ clock = crtc->config->base.adjusted_mode.crtc_clock;
+
+ entries = DIV_ROUND_UP(clock, 1000) * pixel_size;
+
+ /*
+ * Set up the watermark such that we don't start issuing memory
+ * requests until we are within PND's max deadline value (256us).
+ * Idea being to be idle as long as possible while still taking
+ * advantage of PND's deadline scheduling. The limit of 8
+ * cachelines (used when the FIFO will anyway drain in less time
+ * than 256us) should match what we would do if trickle
+ * feed were enabled.
+ */
+ return fifo_size - clamp(DIV_ROUND_UP(256 * entries, 64), 0, fifo_size - 8);
+}
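/*
 * Two illustrative evaluations of the formula above (a sketch with
 * assumed FIFO sizes).  With a 148500 kHz clock and 4 bpp,
 * entries = 596 and DIV_ROUND_UP(256 * 596, 64) = 2384 cachelines,
 * which exceeds any plausible fifo_size, so the clamp leaves the
 * minimum of 8 cachelines in the FIFO -- the trickle-feed-like limit
 * described in the comment.  With a 25175 kHz clock and 4 bpp,
 * entries = 104 and 256 * 104 / 64 = 416, so for an assumed
 * fifo_size of 511 the watermark becomes 511 - 416 = 95 cachelines.
 */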
+
+static bool vlv_compute_sr_wm(struct drm_device *dev,
+ struct vlv_wm_values *wm)
+{
+ struct drm_i915_private *dev_priv = to_i915(dev);
+ struct drm_crtc *crtc;
+ enum pipe pipe = INVALID_PIPE;
+ int num_planes = 0;
+ int fifo_size = 0;
+ struct intel_plane *plane;
+
+ wm->sr.cursor = wm->sr.plane = 0;
+
+ crtc = single_enabled_crtc(dev);
+ /* maxfifo not supported on pipe C */
+ if (crtc && to_intel_crtc(crtc)->pipe != PIPE_C) {
+ pipe = to_intel_crtc(crtc)->pipe;
+ num_planes = !!wm->pipe[pipe].primary +
+ !!wm->pipe[pipe].sprite[0] +
+ !!wm->pipe[pipe].sprite[1];
+ fifo_size = INTEL_INFO(dev_priv)->num_pipes * 512 - 1;
+ }
+
+ if (fifo_size == 0 || num_planes > 1)
+ return false;
+
+ wm->sr.cursor = vlv_compute_wm(to_intel_crtc(crtc),
+ to_intel_plane(crtc->cursor), 0x3f);
+
+ list_for_each_entry(plane, &dev->mode_config.plane_list, base.head) {
+ if (plane->base.type == DRM_PLANE_TYPE_CURSOR)
+ continue;
+
+ if (plane->pipe != pipe)
+ continue;
+
+ wm->sr.plane = vlv_compute_wm(to_intel_crtc(crtc),
+ plane, fifo_size);
+ if (wm->sr.plane != 0)
+ break;
+ }
return true;
}
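/*
 * For reference (a sketch): in maxfifo mode the whole display FIFO is
 * handed to the single remaining plane, so fifo_size above works out
 * to num_pipes * 512 - 1, i.e. 1023 cachelines on a two-pipe VLV and
 * 1535 on a three-pipe CHV, rather than the per-plane slice returned
 * by vlv_get_fifo_size() in the non-maxfifo paths.
 */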
-/*
- * Update drain latency registers of memory arbiter
- *
- * Valleyview SoC has a new memory arbiter and needs drain latency registers
- * to be programmed. Each plane has a drain latency multiplier and a drain
- * latency value.
- */
-
-static void vlv_update_drain_latency(struct drm_crtc *crtc)
+static void valleyview_update_wm(struct drm_crtc *crtc)
{
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
- int pixel_size;
- int drain_latency;
enum pipe pipe = intel_crtc->pipe;
- int plane_prec, prec_mult, plane_dl;
- const int high_precision = IS_CHERRYVIEW(dev) ?
- DRAIN_LATENCY_PRECISION_32 : DRAIN_LATENCY_PRECISION_64;
+ bool cxsr_enabled;
+ struct vlv_wm_values wm = dev_priv->wm.vlv;
- plane_dl = I915_READ(VLV_DDL(pipe)) & ~(DDL_PLANE_PRECISION_HIGH |
- DRAIN_LATENCY_MASK | DDL_CURSOR_PRECISION_HIGH |
- (DRAIN_LATENCY_MASK << DDL_CURSOR_SHIFT));
+ wm.ddl[pipe].primary = vlv_compute_drain_latency(crtc, crtc->primary);
+ wm.pipe[pipe].primary = vlv_compute_wm(intel_crtc,
+ to_intel_plane(crtc->primary),
+ vlv_get_fifo_size(dev, pipe, 0));
- if (!intel_crtc_active(crtc)) {
- I915_WRITE(VLV_DDL(pipe), plane_dl);
+ wm.ddl[pipe].cursor = vlv_compute_drain_latency(crtc, crtc->cursor);
+ wm.pipe[pipe].cursor = vlv_compute_wm(intel_crtc,
+ to_intel_plane(crtc->cursor),
+ 0x3f);
+
+ cxsr_enabled = vlv_compute_sr_wm(dev, &wm);
+
+ if (memcmp(&wm, &dev_priv->wm.vlv, sizeof(wm)) == 0)
return;
- }
- /* Primary plane Drain Latency */
- pixel_size = crtc->primary->fb->bits_per_pixel / 8; /* BPP */
- if (vlv_compute_drain_latency(crtc, pixel_size, &prec_mult, &drain_latency)) {
- plane_prec = (prec_mult == high_precision) ?
- DDL_PLANE_PRECISION_HIGH :
- DDL_PLANE_PRECISION_LOW;
- plane_dl |= plane_prec | drain_latency;
- }
+ DRM_DEBUG_KMS("Setting FIFO watermarks - %c: plane=%d, cursor=%d, "
+ "SR: plane=%d, cursor=%d\n", pipe_name(pipe),
+ wm.pipe[pipe].primary, wm.pipe[pipe].cursor,
+ wm.sr.plane, wm.sr.cursor);
- /* Cursor Drain Latency
- * BPP is always 4 for cursor
+ /*
+ * FIXME DDR DVFS introduces massive memory latencies which
+ * are not known to the system agent, so any deadline specified
+ * by the display may not be respected. To support DDR DVFS
+ * the watermark code needs to be rewritten to essentially
+ * bypass deadline mechanism and rely solely on the
+ * watermarks. For now disable DDR DVFS.
*/
- pixel_size = 4;
+ if (IS_CHERRYVIEW(dev_priv))
+ chv_set_memory_dvfs(dev_priv, false);
- /* Program cursor DL only if it is enabled */
- if (intel_crtc->cursor_base &&
- vlv_compute_drain_latency(crtc, pixel_size, &prec_mult, &drain_latency)) {
- plane_prec = (prec_mult == high_precision) ?
- DDL_CURSOR_PRECISION_HIGH :
- DDL_CURSOR_PRECISION_LOW;
- plane_dl |= plane_prec | (drain_latency << DDL_CURSOR_SHIFT);
- }
-
- I915_WRITE(VLV_DDL(pipe), plane_dl);
-}
-
-#define single_plane_enabled(mask) is_power_of_2(mask)
-
-static void valleyview_update_wm(struct drm_crtc *crtc)
-{
- struct drm_device *dev = crtc->dev;
- static const int sr_latency_ns = 12000;
- struct drm_i915_private *dev_priv = dev->dev_private;
- int planea_wm, planeb_wm, cursora_wm, cursorb_wm;
- int plane_sr, cursor_sr;
- int ignore_plane_sr, ignore_cursor_sr;
- unsigned int enabled = 0;
- bool cxsr_enabled;
-
- vlv_update_drain_latency(crtc);
-
- if (g4x_compute_wm0(dev, PIPE_A,
- &valleyview_wm_info, pessimal_latency_ns,
- &valleyview_cursor_wm_info, pessimal_latency_ns,
- &planea_wm, &cursora_wm))
- enabled |= 1 << PIPE_A;
-
- if (g4x_compute_wm0(dev, PIPE_B,
- &valleyview_wm_info, pessimal_latency_ns,
- &valleyview_cursor_wm_info, pessimal_latency_ns,
- &planeb_wm, &cursorb_wm))
- enabled |= 1 << PIPE_B;
-
- if (single_plane_enabled(enabled) &&
- g4x_compute_srwm(dev, ffs(enabled) - 1,
- sr_latency_ns,
- &valleyview_wm_info,
- &valleyview_cursor_wm_info,
- &plane_sr, &ignore_cursor_sr) &&
- g4x_compute_srwm(dev, ffs(enabled) - 1,
- 2*sr_latency_ns,
- &valleyview_wm_info,
- &valleyview_cursor_wm_info,
- &ignore_plane_sr, &cursor_sr)) {
- cxsr_enabled = true;
- } else {
- cxsr_enabled = false;
+ if (!cxsr_enabled)
intel_set_memory_cxsr(dev_priv, false);
- plane_sr = cursor_sr = 0;
- }
- DRM_DEBUG_KMS("Setting FIFO watermarks - A: plane=%d, cursor=%d, "
- "B: plane=%d, cursor=%d, SR: plane=%d, cursor=%d\n",
- planea_wm, cursora_wm,
- planeb_wm, cursorb_wm,
- plane_sr, cursor_sr);
-
- I915_WRITE(DSPFW1,
- (plane_sr << DSPFW_SR_SHIFT) |
- (cursorb_wm << DSPFW_CURSORB_SHIFT) |
- (planeb_wm << DSPFW_PLANEB_SHIFT) |
- (planea_wm << DSPFW_PLANEA_SHIFT));
- I915_WRITE(DSPFW2,
- (I915_READ(DSPFW2) & ~DSPFW_CURSORA_MASK) |
- (cursora_wm << DSPFW_CURSORA_SHIFT));
- I915_WRITE(DSPFW3,
- (I915_READ(DSPFW3) & ~DSPFW_CURSOR_SR_MASK) |
- (cursor_sr << DSPFW_CURSOR_SR_SHIFT));
-
- if (cxsr_enabled)
- intel_set_memory_cxsr(dev_priv, true);
-}
-
-static void cherryview_update_wm(struct drm_crtc *crtc)
-{
- struct drm_device *dev = crtc->dev;
- static const int sr_latency_ns = 12000;
- struct drm_i915_private *dev_priv = dev->dev_private;
- int planea_wm, planeb_wm, planec_wm;
- int cursora_wm, cursorb_wm, cursorc_wm;
- int plane_sr, cursor_sr;
- int ignore_plane_sr, ignore_cursor_sr;
- unsigned int enabled = 0;
- bool cxsr_enabled;
-
- vlv_update_drain_latency(crtc);
-
- if (g4x_compute_wm0(dev, PIPE_A,
- &valleyview_wm_info, pessimal_latency_ns,
- &valleyview_cursor_wm_info, pessimal_latency_ns,
- &planea_wm, &cursora_wm))
- enabled |= 1 << PIPE_A;
-
- if (g4x_compute_wm0(dev, PIPE_B,
- &valleyview_wm_info, pessimal_latency_ns,
- &valleyview_cursor_wm_info, pessimal_latency_ns,
- &planeb_wm, &cursorb_wm))
- enabled |= 1 << PIPE_B;
-
- if (g4x_compute_wm0(dev, PIPE_C,
- &valleyview_wm_info, pessimal_latency_ns,
- &valleyview_cursor_wm_info, pessimal_latency_ns,
- &planec_wm, &cursorc_wm))
- enabled |= 1 << PIPE_C;
-
- if (single_plane_enabled(enabled) &&
- g4x_compute_srwm(dev, ffs(enabled) - 1,
- sr_latency_ns,
- &valleyview_wm_info,
- &valleyview_cursor_wm_info,
- &plane_sr, &ignore_cursor_sr) &&
- g4x_compute_srwm(dev, ffs(enabled) - 1,
- 2*sr_latency_ns,
- &valleyview_wm_info,
- &valleyview_cursor_wm_info,
- &ignore_plane_sr, &cursor_sr)) {
- cxsr_enabled = true;
- } else {
- cxsr_enabled = false;
- intel_set_memory_cxsr(dev_priv, false);
- plane_sr = cursor_sr = 0;
- }
-
- DRM_DEBUG_KMS("Setting FIFO watermarks - A: plane=%d, cursor=%d, "
- "B: plane=%d, cursor=%d, C: plane=%d, cursor=%d, "
- "SR: plane=%d, cursor=%d\n",
- planea_wm, cursora_wm,
- planeb_wm, cursorb_wm,
- planec_wm, cursorc_wm,
- plane_sr, cursor_sr);
-
- I915_WRITE(DSPFW1,
- (plane_sr << DSPFW_SR_SHIFT) |
- (cursorb_wm << DSPFW_CURSORB_SHIFT) |
- (planeb_wm << DSPFW_PLANEB_SHIFT) |
- (planea_wm << DSPFW_PLANEA_SHIFT));
- I915_WRITE(DSPFW2,
- (I915_READ(DSPFW2) & ~DSPFW_CURSORA_MASK) |
- (cursora_wm << DSPFW_CURSORA_SHIFT));
- I915_WRITE(DSPFW3,
- (I915_READ(DSPFW3) & ~DSPFW_CURSOR_SR_MASK) |
- (cursor_sr << DSPFW_CURSOR_SR_SHIFT));
- I915_WRITE(DSPFW9_CHV,
- (I915_READ(DSPFW9_CHV) & ~(DSPFW_PLANEC_MASK |
- DSPFW_CURSORC_MASK)) |
- (planec_wm << DSPFW_PLANEC_SHIFT) |
- (cursorc_wm << DSPFW_CURSORC_SHIFT));
+ vlv_write_wm_values(intel_crtc, &wm);
if (cxsr_enabled)
intel_set_memory_cxsr(dev_priv, true);
@@ -961,30 +1080,47 @@
{
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
- int pipe = to_intel_plane(plane)->pipe;
+ struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
+ enum pipe pipe = intel_crtc->pipe;
int sprite = to_intel_plane(plane)->plane;
- int drain_latency;
- int plane_prec;
- int sprite_dl;
- int prec_mult;
- const int high_precision = IS_CHERRYVIEW(dev) ?
- DRAIN_LATENCY_PRECISION_32 : DRAIN_LATENCY_PRECISION_64;
+ bool cxsr_enabled;
+ struct vlv_wm_values wm = dev_priv->wm.vlv;
- sprite_dl = I915_READ(VLV_DDL(pipe)) & ~(DDL_SPRITE_PRECISION_HIGH(sprite) |
- (DRAIN_LATENCY_MASK << DDL_SPRITE_SHIFT(sprite)));
+ if (enabled) {
+ wm.ddl[pipe].sprite[sprite] =
+ vlv_compute_drain_latency(crtc, plane);
- if (enabled && vlv_compute_drain_latency(crtc, pixel_size, &prec_mult,
- &drain_latency)) {
- plane_prec = (prec_mult == high_precision) ?
- DDL_SPRITE_PRECISION_HIGH(sprite) :
- DDL_SPRITE_PRECISION_LOW(sprite);
- sprite_dl |= plane_prec |
- (drain_latency << DDL_SPRITE_SHIFT(sprite));
+ wm.pipe[pipe].sprite[sprite] =
+ vlv_compute_wm(intel_crtc,
+ to_intel_plane(plane),
+ vlv_get_fifo_size(dev, pipe, sprite+1));
+ } else {
+ wm.ddl[pipe].sprite[sprite] = 0;
+ wm.pipe[pipe].sprite[sprite] = 0;
}
- I915_WRITE(VLV_DDL(pipe), sprite_dl);
+ cxsr_enabled = vlv_compute_sr_wm(dev, &wm);
+
+ if (memcmp(&wm, &dev_priv->wm.vlv, sizeof(wm)) == 0)
+ return;
+
+ DRM_DEBUG_KMS("Setting FIFO watermarks - %c: sprite %c=%d, "
+ "SR: plane=%d, cursor=%d\n", pipe_name(pipe),
+ sprite_name(pipe, sprite),
+ wm.pipe[pipe].sprite[sprite],
+ wm.sr.plane, wm.sr.cursor);
+
+ if (!cxsr_enabled)
+ intel_set_memory_cxsr(dev_priv, false);
+
+ vlv_write_wm_values(intel_crtc, &wm);
+
+ if (cxsr_enabled)
+ intel_set_memory_cxsr(dev_priv, true);
}
+#define single_plane_enabled(mask) is_power_of_2(mask)
+
static void g4x_update_wm(struct drm_crtc *crtc)
{
struct drm_device *dev = crtc->dev;
@@ -1027,17 +1163,17 @@
plane_sr, cursor_sr);
I915_WRITE(DSPFW1,
- (plane_sr << DSPFW_SR_SHIFT) |
- (cursorb_wm << DSPFW_CURSORB_SHIFT) |
- (planeb_wm << DSPFW_PLANEB_SHIFT) |
- (planea_wm << DSPFW_PLANEA_SHIFT));
+ FW_WM(plane_sr, SR) |
+ FW_WM(cursorb_wm, CURSORB) |
+ FW_WM(planeb_wm, PLANEB) |
+ FW_WM(planea_wm, PLANEA));
I915_WRITE(DSPFW2,
(I915_READ(DSPFW2) & ~DSPFW_CURSORA_MASK) |
- (cursora_wm << DSPFW_CURSORA_SHIFT));
+ FW_WM(cursora_wm, CURSORA));
/* HPLL off in SR has some issues on G4x... disable it */
I915_WRITE(DSPFW3,
(I915_READ(DSPFW3) & ~(DSPFW_HPLL_SR_EN | DSPFW_CURSOR_SR_MASK)) |
- (cursor_sr << DSPFW_CURSOR_SR_SHIFT));
+ FW_WM(cursor_sr, CURSOR_SR));
if (cxsr_enabled)
intel_set_memory_cxsr(dev_priv, true);
@@ -1062,7 +1198,7 @@
int clock = adjusted_mode->crtc_clock;
int htotal = adjusted_mode->crtc_htotal;
int hdisplay = to_intel_crtc(crtc)->config->pipe_src_w;
- int pixel_size = crtc->primary->fb->bits_per_pixel / 8;
+ int pixel_size = crtc->primary->state->fb->bits_per_pixel / 8;
unsigned long line_time_us;
int entries;
@@ -1080,7 +1216,7 @@
entries, srwm);
entries = (((sr_latency_ns / line_time_us) + 1000) / 1000) *
- pixel_size * to_intel_crtc(crtc)->cursor_width;
+ pixel_size * crtc->cursor->state->crtc_w;
entries = DIV_ROUND_UP(entries,
i965_cursor_wm_info.cacheline_size);
cursor_sr = i965_cursor_wm_info.fifo_size -
@@ -1103,19 +1239,21 @@
srwm);
/* 965 has limitations... */
- I915_WRITE(DSPFW1, (srwm << DSPFW_SR_SHIFT) |
- (8 << DSPFW_CURSORB_SHIFT) |
- (8 << DSPFW_PLANEB_SHIFT) |
- (8 << DSPFW_PLANEA_SHIFT));
- I915_WRITE(DSPFW2, (8 << DSPFW_CURSORA_SHIFT) |
- (8 << DSPFW_PLANEC_SHIFT_OLD));
+ I915_WRITE(DSPFW1, FW_WM(srwm, SR) |
+ FW_WM(8, CURSORB) |
+ FW_WM(8, PLANEB) |
+ FW_WM(8, PLANEA));
+ I915_WRITE(DSPFW2, FW_WM(8, CURSORA) |
+ FW_WM(8, PLANEC_OLD));
/* update cursor SR watermark */
- I915_WRITE(DSPFW3, (cursor_sr << DSPFW_CURSOR_SR_SHIFT));
+ I915_WRITE(DSPFW3, FW_WM(cursor_sr, CURSOR_SR));
if (cxsr_enabled)
intel_set_memory_cxsr(dev_priv, true);
}
+#undef FW_WM
+
static void i9xx_update_wm(struct drm_crtc *unused_crtc)
{
struct drm_device *dev = unused_crtc->dev;
@@ -1139,7 +1277,7 @@
crtc = intel_get_crtc_for_plane(dev, 0);
if (intel_crtc_active(crtc)) {
const struct drm_display_mode *adjusted_mode;
- int cpp = crtc->primary->fb->bits_per_pixel / 8;
+ int cpp = crtc->primary->state->fb->bits_per_pixel / 8;
if (IS_GEN2(dev))
cpp = 4;
@@ -1161,7 +1299,7 @@
crtc = intel_get_crtc_for_plane(dev, 1);
if (intel_crtc_active(crtc)) {
const struct drm_display_mode *adjusted_mode;
- int cpp = crtc->primary->fb->bits_per_pixel / 8;
+ int cpp = crtc->primary->state->fb->bits_per_pixel / 8;
if (IS_GEN2(dev))
cpp = 4;
@@ -1184,7 +1322,7 @@
if (IS_I915GM(dev) && enabled) {
struct drm_i915_gem_object *obj;
- obj = intel_fb_obj(enabled->primary->fb);
+ obj = intel_fb_obj(enabled->primary->state->fb);
/* self-refresh seems busted with untiled */
if (obj->tiling_mode == I915_TILING_NONE)
@@ -1208,7 +1346,7 @@
int clock = adjusted_mode->crtc_clock;
int htotal = adjusted_mode->crtc_htotal;
int hdisplay = to_intel_crtc(enabled)->config->pipe_src_w;
- int pixel_size = enabled->primary->fb->bits_per_pixel / 8;
+ int pixel_size = enabled->primary->state->fb->bits_per_pixel / 8;
unsigned long line_time_us;
int entries;
@@ -1645,7 +1783,7 @@
struct drm_display_mode *mode = &intel_crtc->config->base.adjusted_mode;
u32 linetime, ips_linetime;
- if (!intel_crtc_active(crtc))
+ if (!intel_crtc->active)
return 0;
/* The WM are computed with base on how long it takes to fill a single
@@ -1711,6 +1849,8 @@
GEN9_MEM_LATENCY_LEVEL_MASK;
/*
+ * WaWmMemoryReadLatency:skl
+ *
* punit doesn't take into account the read latency so we need
* to add 2us to the various latency levels we retrieve from
* the punit.
@@ -1898,19 +2038,31 @@
enum pipe pipe = intel_crtc->pipe;
struct drm_plane *plane;
- if (!intel_crtc_active(crtc))
+ if (!intel_crtc->active)
return;
p->active = true;
p->pipe_htotal = intel_crtc->config->base.adjusted_mode.crtc_htotal;
p->pixel_rate = ilk_pipe_pixel_rate(dev, crtc);
- p->pri.bytes_per_pixel = crtc->primary->fb->bits_per_pixel / 8;
- p->cur.bytes_per_pixel = 4;
+
+ if (crtc->primary->state->fb) {
+ p->pri.enabled = true;
+ p->pri.bytes_per_pixel =
+ crtc->primary->state->fb->bits_per_pixel / 8;
+ } else {
+ p->pri.enabled = false;
+ p->pri.bytes_per_pixel = 0;
+ }
+
+ if (crtc->cursor->state->fb) {
+ p->cur.enabled = true;
+ p->cur.bytes_per_pixel = 4;
+ } else {
+ p->cur.enabled = false;
+ p->cur.bytes_per_pixel = 0;
+ }
p->pri.horiz_pixels = intel_crtc->config->pipe_src_w;
- p->cur.horiz_pixels = intel_crtc->cursor_width;
- /* TODO: for now, assume primary and cursor planes are always enabled. */
- p->pri.enabled = true;
- p->cur.enabled = true;
+ p->cur.horiz_pixels = intel_crtc->base.cursor->state->crtc_w;
drm_for_each_legacy_plane(plane, &dev->mode_config.plane_list) {
struct intel_plane *intel_plane = to_intel_plane(plane);
@@ -2410,7 +2562,7 @@
nth_active_pipe = 0;
for_each_crtc(dev, crtc) {
- if (!intel_crtc_active(crtc))
+ if (!to_intel_crtc(crtc)->active)
continue;
if (crtc == for_crtc)
@@ -2443,13 +2595,12 @@
void skl_ddb_get_hw_state(struct drm_i915_private *dev_priv,
struct skl_ddb_allocation *ddb /* out */)
{
- struct drm_device *dev = dev_priv->dev;
enum pipe pipe;
int plane;
u32 val;
for_each_pipe(dev_priv, pipe) {
- for_each_plane(pipe, plane) {
+ for_each_plane(dev_priv, pipe, plane) {
val = I915_READ(PLANE_BUF_CFG(pipe, plane));
skl_ddb_entry_init_from_hw(&ddb->plane[pipe][plane],
val);
@@ -2498,10 +2649,12 @@
struct skl_ddb_allocation *ddb /* out */)
{
struct drm_device *dev = crtc->dev;
+ struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
enum pipe pipe = intel_crtc->pipe;
struct skl_ddb_entry *alloc = &ddb->pipe[pipe];
uint16_t alloc_size, start, cursor_blocks;
+ uint16_t minimum[I915_MAX_PLANES];
unsigned int total_data_rate;
int plane;
@@ -2520,9 +2673,21 @@
alloc_size -= cursor_blocks;
alloc->end -= cursor_blocks;
+ /* 1. Allocate the minimum required blocks for each active plane */
+ for_each_plane(dev_priv, pipe, plane) {
+ const struct intel_plane_wm_parameters *p;
+
+ p = &params->plane[plane];
+ if (!p->enabled)
+ continue;
+
+ minimum[plane] = 8;
+ alloc_size -= minimum[plane];
+ }
+
/*
- * Each active plane get a portion of the remaining space, in
- * proportion to the amount of data they need to fetch from memory.
+ * 2. Distribute the remaining space in proportion to the amount of
+ * data each plane needs to fetch from memory.
*
* FIXME: we may not allocate every single block here.
*/
@@ -2544,8 +2709,9 @@
* promote the expression to 64 bits to avoid overflowing, the
* result is < available as data_rate / total_data_rate < 1
*/
- plane_blocks = div_u64((uint64_t)alloc_size * data_rate,
- total_data_rate);
+ plane_blocks = minimum[plane];
+ plane_blocks += div_u64((uint64_t)alloc_size * data_rate,
+ total_data_rate);
ddb->plane[pipe][plane].start = start;
ddb->plane[pipe][plane].end = start + plane_blocks;
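/*
 * A small worked example of the two-step allocation above (assumed
 * numbers): say 160 blocks remain after the cursor carve-out and two
 * planes are active with a 3:1 data-rate ratio.  Step 1 reserves the
 * 8-block minimum for each plane, leaving alloc_size = 160 - 16 = 144.
 * Step 2 then hands out 144 * 3/4 = 108 and 144 * 1/4 = 36 blocks, so
 * the final allocations are 8 + 108 = 116 and 8 + 36 = 44, which sum
 * back to the original 160.
 */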
@@ -2575,7 +2741,7 @@
if (latency == 0)
return UINT_MAX;
- wm_intermediate_val = latency * pixel_rate * bytes_per_pixel;
+ wm_intermediate_val = latency * pixel_rate * bytes_per_pixel / 512;
ret = DIV_ROUND_UP(wm_intermediate_val, 1000);
return ret;
@@ -2583,17 +2749,29 @@
static uint32_t skl_wm_method2(uint32_t pixel_rate, uint32_t pipe_htotal,
uint32_t horiz_pixels, uint8_t bytes_per_pixel,
- uint32_t latency)
+ uint64_t tiling, uint32_t latency)
{
- uint32_t ret, plane_bytes_per_line, wm_intermediate_val;
+ uint32_t ret;
+ uint32_t plane_bytes_per_line, plane_blocks_per_line;
+ uint32_t wm_intermediate_val;
if (latency == 0)
return UINT_MAX;
plane_bytes_per_line = horiz_pixels * bytes_per_pixel;
+
+ if (tiling == I915_FORMAT_MOD_Y_TILED ||
+ tiling == I915_FORMAT_MOD_Yf_TILED) {
+ plane_bytes_per_line *= 4;
+ plane_blocks_per_line = DIV_ROUND_UP(plane_bytes_per_line, 512);
+ plane_blocks_per_line /= 4;
+ } else {
+ plane_blocks_per_line = DIV_ROUND_UP(plane_bytes_per_line, 512);
+ }
+
wm_intermediate_val = latency * pixel_rate;
ret = DIV_ROUND_UP(wm_intermediate_val, pipe_htotal * 1000) *
- plane_bytes_per_line;
+ plane_blocks_per_line;
return ret;
}
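/*
 * The tiling adjustment above effectively computes blocks-per-line at
 * quarter-block granularity for Y/Yf tiles.  A sketch with assumed
 * numbers: for horiz_pixels = 1366 and 4 bytes per pixel,
 * plane_bytes_per_line = 5464.  The linear path gives
 * DIV_ROUND_UP(5464, 512) = 11 blocks per line, whereas the Y-tiled
 * path gives DIV_ROUND_UP(5464 * 4, 512) / 4 = 43 / 4 = 10, i.e. the
 * partial block is rounded at 1/4-block rather than whole-block
 * granularity before being multiplied by the line count.
 */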
@@ -2624,7 +2802,7 @@
struct drm_plane *plane;
list_for_each_entry(crtc, &dev->mode_config.crtc_list, head)
- config->num_pipes_active += intel_crtc_active(crtc);
+ config->num_pipes_active += to_intel_crtc(crtc)->active;
/* FIXME: I don't think we need those two global parameters on SKL */
list_for_each_entry(plane, &dev->mode_config.plane_list, head) {
@@ -2642,26 +2820,40 @@
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
enum pipe pipe = intel_crtc->pipe;
struct drm_plane *plane;
+ struct drm_framebuffer *fb;
int i = 1; /* Index for sprite planes start */
- p->active = intel_crtc_active(crtc);
+ p->active = intel_crtc->active;
if (p->active) {
p->pipe_htotal = intel_crtc->config->base.adjusted_mode.crtc_htotal;
p->pixel_rate = skl_pipe_pixel_rate(intel_crtc->config);
- /*
- * For now, assume primary and cursor planes are always enabled.
- */
- p->plane[0].enabled = true;
- p->plane[0].bytes_per_pixel =
- crtc->primary->fb->bits_per_pixel / 8;
+ fb = crtc->primary->state->fb;
+ if (fb) {
+ p->plane[0].enabled = true;
+ p->plane[0].bytes_per_pixel = fb->bits_per_pixel / 8;
+ p->plane[0].tiling = fb->modifier[0];
+ } else {
+ p->plane[0].enabled = false;
+ p->plane[0].bytes_per_pixel = 0;
+ p->plane[0].tiling = DRM_FORMAT_MOD_NONE;
+ }
p->plane[0].horiz_pixels = intel_crtc->config->pipe_src_w;
p->plane[0].vert_pixels = intel_crtc->config->pipe_src_h;
+ p->plane[0].rotation = crtc->primary->state->rotation;
- p->cursor.enabled = true;
- p->cursor.bytes_per_pixel = 4;
- p->cursor.horiz_pixels = intel_crtc->cursor_width ?
- intel_crtc->cursor_width : 64;
+ fb = crtc->cursor->state->fb;
+ if (fb) {
+ p->cursor.enabled = true;
+ p->cursor.bytes_per_pixel = fb->bits_per_pixel / 8;
+ p->cursor.horiz_pixels = crtc->cursor->state->crtc_w;
+ p->cursor.vert_pixels = crtc->cursor->state->crtc_h;
+ } else {
+ p->cursor.enabled = false;
+ p->cursor.bytes_per_pixel = 0;
+ p->cursor.horiz_pixels = 64;
+ p->cursor.vert_pixels = 64;
+ }
}
list_for_each_entry(plane, &dev->mode_config.plane_list, head) {
@@ -2673,41 +2865,74 @@
}
}
-static bool skl_compute_plane_wm(struct skl_pipe_wm_parameters *p,
+static bool skl_compute_plane_wm(const struct drm_i915_private *dev_priv,
+ struct skl_pipe_wm_parameters *p,
struct intel_plane_wm_parameters *p_params,
uint16_t ddb_allocation,
- uint32_t mem_value,
+ int level,
uint16_t *out_blocks, /* out */
uint8_t *out_lines /* out */)
{
- uint32_t method1, method2, plane_bytes_per_line, res_blocks, res_lines;
- uint32_t result_bytes;
+ uint32_t latency = dev_priv->wm.skl_latency[level];
+ uint32_t method1, method2;
+ uint32_t plane_bytes_per_line, plane_blocks_per_line;
+ uint32_t res_blocks, res_lines;
+ uint32_t selected_result;
- if (mem_value == 0 || !p->active || !p_params->enabled)
+ if (latency == 0 || !p->active || !p_params->enabled)
return false;
method1 = skl_wm_method1(p->pixel_rate,
p_params->bytes_per_pixel,
- mem_value);
+ latency);
method2 = skl_wm_method2(p->pixel_rate,
p->pipe_htotal,
p_params->horiz_pixels,
p_params->bytes_per_pixel,
- mem_value);
+ p_params->tiling,
+ latency);
plane_bytes_per_line = p_params->horiz_pixels *
p_params->bytes_per_pixel;
+ plane_blocks_per_line = DIV_ROUND_UP(plane_bytes_per_line, 512);
- /* For now xtile and linear */
- if (((ddb_allocation * 512) / plane_bytes_per_line) >= 1)
- result_bytes = min(method1, method2);
- else
- result_bytes = method1;
+ if (p_params->tiling == I915_FORMAT_MOD_Y_TILED ||
+ p_params->tiling == I915_FORMAT_MOD_Yf_TILED) {
+ uint32_t min_scanlines = 4;
+ uint32_t y_tile_minimum;
+ if (intel_rotation_90_or_270(p_params->rotation)) {
+ switch (p_params->bytes_per_pixel) {
+ case 1:
+ min_scanlines = 16;
+ break;
+ case 2:
+ min_scanlines = 8;
+ break;
+ case 8:
+ WARN(1, "Unsupported pixel depth for rotation");
+ }
+ }
+ y_tile_minimum = plane_blocks_per_line * min_scanlines;
+ selected_result = max(method2, y_tile_minimum);
+ } else {
+ if ((ddb_allocation / plane_blocks_per_line) >= 1)
+ selected_result = min(method1, method2);
+ else
+ selected_result = method1;
+ }
- res_blocks = DIV_ROUND_UP(result_bytes, 512) + 1;
- res_lines = DIV_ROUND_UP(result_bytes, plane_bytes_per_line);
+ res_blocks = selected_result + 1;
+ res_lines = DIV_ROUND_UP(selected_result, plane_blocks_per_line);
- if (res_blocks > ddb_allocation || res_lines > 31)
+ if (level >= 1 && level <= 7) {
+ if (p_params->tiling == I915_FORMAT_MOD_Y_TILED ||
+ p_params->tiling == I915_FORMAT_MOD_Yf_TILED)
+ res_lines += 4;
+ else
+ res_blocks++;
+ }
+
+ if (res_blocks >= ddb_allocation || res_lines > 31)
return false;
*out_blocks = res_blocks;
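/*
 * For the Y-tiled branch above, a sketch with assumed numbers: a
 * 90/270-rotated 2 bpp plane uses min_scanlines = 8, so with
 * plane_blocks_per_line = 10 the floor becomes
 * y_tile_minimum = 10 * 8 = 80 blocks, and selected_result is
 * whichever of method2 and 80 is larger.  Levels 1..7 then pad the
 * result by 4 lines (Y/Yf tiles) or 1 block (linear/X-tile) before
 * the ddb_allocation / 31-line feasibility check.
 */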
@@ -2724,30 +2949,31 @@
int num_planes,
struct skl_wm_level *result)
{
- uint16_t latency = dev_priv->wm.skl_latency[level];
uint16_t ddb_blocks;
int i;
for (i = 0; i < num_planes; i++) {
ddb_blocks = skl_ddb_entry_size(&ddb->plane[pipe][i]);
- result->plane_en[i] = skl_compute_plane_wm(p, &p->plane[i],
+ result->plane_en[i] = skl_compute_plane_wm(dev_priv,
+ p, &p->plane[i],
ddb_blocks,
- latency,
+ level,
&result->plane_res_b[i],
&result->plane_res_l[i]);
}
ddb_blocks = skl_ddb_entry_size(&ddb->cursor[pipe]);
- result->cursor_en = skl_compute_plane_wm(p, &p->cursor, ddb_blocks,
- latency, &result->cursor_res_b,
+ result->cursor_en = skl_compute_plane_wm(dev_priv, p, &p->cursor,
+ ddb_blocks, level,
+ &result->cursor_res_b,
&result->cursor_res_l);
}
static uint32_t
skl_compute_linetime_wm(struct drm_crtc *crtc, struct skl_pipe_wm_parameters *p)
{
- if (!intel_crtc_active(crtc))
+ if (!to_intel_crtc(crtc)->active)
return 0;
return DIV_ROUND_UP(8 * p->pipe_htotal * 1000, p->pixel_rate);
@@ -2921,12 +3147,11 @@
static void
skl_wm_flush_pipe(struct drm_i915_private *dev_priv, enum pipe pipe, int pass)
{
- struct drm_device *dev = dev_priv->dev;
int plane;
DRM_DEBUG_KMS("flush pipe %c (pass %d)\n", pipe_name(pipe), pass);
- for_each_plane(pipe, plane) {
+ for_each_plane(dev_priv, pipe, plane) {
I915_WRITE(PLANE_SURF(pipe, plane),
I915_READ(PLANE_SURF(pipe, plane)));
}
@@ -3133,12 +3358,21 @@
int pixel_size, bool enabled, bool scaled)
{
struct intel_plane *intel_plane = to_intel_plane(plane);
+ struct drm_framebuffer *fb = plane->state->fb;
intel_plane->wm.enabled = enabled;
intel_plane->wm.scaled = scaled;
intel_plane->wm.horiz_pixels = sprite_width;
intel_plane->wm.vert_pixels = sprite_height;
intel_plane->wm.bytes_per_pixel = pixel_size;
+ intel_plane->wm.tiling = DRM_FORMAT_MOD_NONE;
+ /*
+ * Framebuffer can be NULL on plane disable, but it does not
+ * matter for watermarks if we assume no tiling in that case.
+ */
+ if (fb)
+ intel_plane->wm.tiling = fb->modifier[0];
+ intel_plane->wm.rotation = plane->state->rotation;
skl_update_wm(crtc);
}
@@ -3287,7 +3521,7 @@
hw->plane_trans[pipe][i] = I915_READ(PLANE_WM_TRANS(pipe, i));
hw->cursor_trans[pipe] = I915_READ(CUR_WM_TRANS(pipe));
- if (!intel_crtc_active(crtc))
+ if (!intel_crtc->active)
return;
hw->dirty[pipe] = true;
@@ -3342,7 +3576,7 @@
if (IS_HASWELL(dev) || IS_BROADWELL(dev))
hw->wm_linetime[pipe] = I915_READ(PIPE_WM_LINETIME(pipe));
- active->pipe_enabled = intel_crtc_active(crtc);
+ active->pipe_enabled = intel_crtc->active;
if (active->pipe_enabled) {
u32 tmp = hw->wm_pipe[pipe];
@@ -3456,41 +3690,6 @@
pixel_size, enabled, scaled);
}
-static struct drm_i915_gem_object *
-intel_alloc_context_page(struct drm_device *dev)
-{
- struct drm_i915_gem_object *ctx;
- int ret;
-
- WARN_ON(!mutex_is_locked(&dev->struct_mutex));
-
- ctx = i915_gem_alloc_object(dev, 4096);
- if (!ctx) {
- DRM_DEBUG("failed to alloc power context, RC6 disabled\n");
- return NULL;
- }
-
- ret = i915_gem_obj_ggtt_pin(ctx, 4096, 0);
- if (ret) {
- DRM_ERROR("failed to pin power context: %d\n", ret);
- goto err_unref;
- }
-
- ret = i915_gem_object_set_to_gtt_domain(ctx, 1);
- if (ret) {
- DRM_ERROR("failed to set-domain on power context: %d\n", ret);
- goto err_unpin;
- }
-
- return ctx;
-
-err_unpin:
- i915_gem_object_ggtt_unpin(ctx);
-err_unref:
- drm_gem_object_unreference(&ctx->base);
- return NULL;
-}
-
/**
* Lock protecting IPS related data structures
*/
@@ -3623,7 +3822,7 @@
* ourselves, instead of doing a rmw cycle (which might result in us clearing
* all limits and the gpu stuck at whatever frequency it is at atm).
*/
-static u32 gen6_rps_limits(struct drm_i915_private *dev_priv, u8 val)
+static u32 intel_rps_limits(struct drm_i915_private *dev_priv, u8 val)
{
u32 limits;
@@ -3633,9 +3832,15 @@
* the hw runs at the minimal clock before selecting the desired
* frequency, if the down threshold expires in that window we will not
* receive a down interrupt. */
- limits = dev_priv->rps.max_freq_softlimit << 24;
- if (val <= dev_priv->rps.min_freq_softlimit)
- limits |= dev_priv->rps.min_freq_softlimit << 16;
+ if (IS_GEN9(dev_priv->dev)) {
+ limits = (dev_priv->rps.max_freq_softlimit) << 23;
+ if (val <= dev_priv->rps.min_freq_softlimit)
+ limits |= (dev_priv->rps.min_freq_softlimit) << 14;
+ } else {
+ limits = dev_priv->rps.max_freq_softlimit << 24;
+ if (val <= dev_priv->rps.min_freq_softlimit)
+ limits |= dev_priv->rps.min_freq_softlimit << 16;
+ }
return limits;
}
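/*
 * A packing sketch for the limits register written above: gen9 places
 * the softlimit-derived max at bit 23 and the conditional min at bit
 * 14, while gen6-8 use bits 24 and 16.  The values themselves are the
 * same softlimit codes used for RPNSWREQ (already in the platform's
 * native frequency units), so on gen9 a request at or below the
 * minimum produces (max << 23) | (min << 14).
 */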
@@ -3643,6 +3848,8 @@
static void gen6_set_rps_thresholds(struct drm_i915_private *dev_priv, u8 val)
{
int new_power;
+ u32 threshold_up = 0, threshold_down = 0; /* in % */
+ u32 ei_up = 0, ei_down = 0;
new_power = dev_priv->rps.power;
switch (dev_priv->rps.power) {
@@ -3664,9 +3871,9 @@
break;
}
/* Max/min bins are special */
- if (val == dev_priv->rps.min_freq_softlimit)
+ if (val <= dev_priv->rps.min_freq_softlimit)
new_power = LOW_POWER;
- if (val == dev_priv->rps.max_freq_softlimit)
+ if (val >= dev_priv->rps.max_freq_softlimit)
new_power = HIGH_POWER;
if (new_power == dev_priv->rps.power)
return;
@@ -3675,59 +3882,53 @@
switch (new_power) {
case LOW_POWER:
/* Upclock if more than 95% busy over 16ms */
- I915_WRITE(GEN6_RP_UP_EI, 12500);
- I915_WRITE(GEN6_RP_UP_THRESHOLD, 11800);
+ ei_up = 16000;
+ threshold_up = 95;
/* Downclock if less than 85% busy over 32ms */
- I915_WRITE(GEN6_RP_DOWN_EI, 25000);
- I915_WRITE(GEN6_RP_DOWN_THRESHOLD, 21250);
-
- I915_WRITE(GEN6_RP_CONTROL,
- GEN6_RP_MEDIA_TURBO |
- GEN6_RP_MEDIA_HW_NORMAL_MODE |
- GEN6_RP_MEDIA_IS_GFX |
- GEN6_RP_ENABLE |
- GEN6_RP_UP_BUSY_AVG |
- GEN6_RP_DOWN_IDLE_AVG);
+ ei_down = 32000;
+ threshold_down = 85;
break;
case BETWEEN:
/* Upclock if more than 90% busy over 13ms */
- I915_WRITE(GEN6_RP_UP_EI, 10250);
- I915_WRITE(GEN6_RP_UP_THRESHOLD, 9225);
+ ei_up = 13000;
+ threshold_up = 90;
/* Downclock if less than 75% busy over 32ms */
- I915_WRITE(GEN6_RP_DOWN_EI, 25000);
- I915_WRITE(GEN6_RP_DOWN_THRESHOLD, 18750);
-
- I915_WRITE(GEN6_RP_CONTROL,
- GEN6_RP_MEDIA_TURBO |
- GEN6_RP_MEDIA_HW_NORMAL_MODE |
- GEN6_RP_MEDIA_IS_GFX |
- GEN6_RP_ENABLE |
- GEN6_RP_UP_BUSY_AVG |
- GEN6_RP_DOWN_IDLE_AVG);
+ ei_down = 32000;
+ threshold_down = 75;
break;
case HIGH_POWER:
/* Upclock if more than 85% busy over 10ms */
- I915_WRITE(GEN6_RP_UP_EI, 8000);
- I915_WRITE(GEN6_RP_UP_THRESHOLD, 6800);
+ ei_up = 10000;
+ threshold_up = 85;
/* Downclock if less than 60% busy over 32ms */
- I915_WRITE(GEN6_RP_DOWN_EI, 25000);
- I915_WRITE(GEN6_RP_DOWN_THRESHOLD, 15000);
-
- I915_WRITE(GEN6_RP_CONTROL,
- GEN6_RP_MEDIA_TURBO |
- GEN6_RP_MEDIA_HW_NORMAL_MODE |
- GEN6_RP_MEDIA_IS_GFX |
- GEN6_RP_ENABLE |
- GEN6_RP_UP_BUSY_AVG |
- GEN6_RP_DOWN_IDLE_AVG);
+ ei_down = 32000;
+ threshold_down = 60;
break;
}
+ I915_WRITE(GEN6_RP_UP_EI,
+ GT_INTERVAL_FROM_US(dev_priv, ei_up));
+ I915_WRITE(GEN6_RP_UP_THRESHOLD,
+ GT_INTERVAL_FROM_US(dev_priv, (ei_up * threshold_up / 100)));
+
+ I915_WRITE(GEN6_RP_DOWN_EI,
+ GT_INTERVAL_FROM_US(dev_priv, ei_down));
+ I915_WRITE(GEN6_RP_DOWN_THRESHOLD,
+ GT_INTERVAL_FROM_US(dev_priv, (ei_down * threshold_down / 100)));
+
+ I915_WRITE(GEN6_RP_CONTROL,
+ GEN6_RP_MEDIA_TURBO |
+ GEN6_RP_MEDIA_HW_NORMAL_MODE |
+ GEN6_RP_MEDIA_IS_GFX |
+ GEN6_RP_ENABLE |
+ GEN6_RP_UP_BUSY_AVG |
+ GEN6_RP_DOWN_IDLE_AVG);
+
dev_priv->rps.power = new_power;
dev_priv->rps.last_adj = 0;
}
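/*
 * A sanity check on the conversion above (a sketch; the tick period is
 * an assumption): with the pre-gen9 GT interval of ~1.28us per tick,
 * GT_INTERVAL_FROM_US(dev_priv, 16000) ~= (16000 * 100) >> 7 = 12500
 * ticks, and the 95% threshold becomes
 * GT_INTERVAL_FROM_US(dev_priv, 15200) ~= 11875 ticks -- essentially
 * the raw 12500/11800 values the LOW_POWER case used to hard-code,
 * now derived from human-readable microseconds and percentages.
 */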
@@ -3737,11 +3938,10 @@
u32 mask = 0;
if (val > dev_priv->rps.min_freq_softlimit)
- mask |= GEN6_PM_RP_DOWN_THRESHOLD | GEN6_PM_RP_DOWN_TIMEOUT;
+ mask |= GEN6_PM_RP_DOWN_EI_EXPIRED | GEN6_PM_RP_DOWN_THRESHOLD | GEN6_PM_RP_DOWN_TIMEOUT;
if (val < dev_priv->rps.max_freq_softlimit)
- mask |= GEN6_PM_RP_UP_THRESHOLD;
+ mask |= GEN6_PM_RP_UP_EI_EXPIRED | GEN6_PM_RP_UP_THRESHOLD;
- mask |= dev_priv->pm_rps_events & (GEN6_PM_RP_DOWN_EI_EXPIRED | GEN6_PM_RP_UP_EI_EXPIRED);
mask &= dev_priv->pm_rps_events;
return gen6_sanitize_rps_pm_mask(dev_priv, ~mask);
@@ -3750,13 +3950,13 @@
/* gen6_set_rps is called to update the frequency request, but should also be
* called when the range (min_delay and max_delay) is modified so that we can
* update the GEN6_RP_INTERRUPT_LIMITS register accordingly. */
-void gen6_set_rps(struct drm_device *dev, u8 val)
+static void gen6_set_rps(struct drm_device *dev, u8 val)
{
struct drm_i915_private *dev_priv = dev->dev_private;
WARN_ON(!mutex_is_locked(&dev_priv->rps.hw_lock));
- WARN_ON(val > dev_priv->rps.max_freq_softlimit);
- WARN_ON(val < dev_priv->rps.min_freq_softlimit);
+ WARN_ON(val > dev_priv->rps.max_freq);
+ WARN_ON(val < dev_priv->rps.min_freq);
/* min/max delay may still have been modified so be sure to
* write the limits value.
@@ -3764,7 +3964,10 @@
if (val != dev_priv->rps.cur_freq) {
gen6_set_rps_thresholds(dev_priv, val);
- if (IS_HASWELL(dev) || IS_BROADWELL(dev))
+ if (IS_GEN9(dev))
+ I915_WRITE(GEN6_RPNSWREQ,
+ GEN9_FREQUENCY(val));
+ else if (IS_HASWELL(dev) || IS_BROADWELL(dev))
I915_WRITE(GEN6_RPNSWREQ,
HSW_FREQUENCY(val));
else
@@ -3777,7 +3980,7 @@
/* Make sure we continue to get interrupts
* until we hit the minimum or maximum frequencies.
*/
- I915_WRITE(GEN6_RP_INTERRUPT_LIMITS, gen6_rps_limits(dev_priv, val));
+ I915_WRITE(GEN6_RP_INTERRUPT_LIMITS, intel_rps_limits(dev_priv, val));
I915_WRITE(GEN6_PMINTRMSK, gen6_rps_pm_mask(dev_priv, val));
POSTING_READ(GEN6_RPNSWREQ);
@@ -3786,6 +3989,27 @@
trace_intel_gpu_freq_change(val * 50);
}
+static void valleyview_set_rps(struct drm_device *dev, u8 val)
+{
+ struct drm_i915_private *dev_priv = dev->dev_private;
+
+ WARN_ON(!mutex_is_locked(&dev_priv->rps.hw_lock));
+ WARN_ON(val > dev_priv->rps.max_freq);
+ WARN_ON(val < dev_priv->rps.min_freq);
+
+ if (WARN_ONCE(IS_CHERRYVIEW(dev) && (val & 1),
+ "Odd GPU freq value\n"))
+ val &= ~1;
+
+ if (val != dev_priv->rps.cur_freq)
+ vlv_punit_write(dev_priv, PUNIT_REG_GPU_FREQ_REQ, val);
+
+ I915_WRITE(GEN6_PMINTRMSK, gen6_rps_pm_mask(dev_priv, val));
+
+ dev_priv->rps.cur_freq = val;
+ trace_intel_gpu_freq_change(intel_gpu_freq(dev_priv, val));
+}
+
/* vlv_set_rps_idle: Set the frequency to Rpn if Gfx clocks are down
*
* * If Gfx is Idle, then
@@ -3798,10 +4022,11 @@
static void vlv_set_rps_idle(struct drm_i915_private *dev_priv)
{
struct drm_device *dev = dev_priv->dev;
+ u32 val = dev_priv->rps.idle_freq;
/* CHV and latest VLV don't need to force the gfx clock */
if (IS_CHERRYVIEW(dev) || dev->pdev->revision >= 0xd) {
- valleyview_set_rps(dev_priv->dev, dev_priv->rps.min_freq_softlimit);
+ valleyview_set_rps(dev_priv->dev, val);
return;
}
@@ -3809,7 +4034,7 @@
* When we are idle, drop to the minimum voltage state.
*/
- if (dev_priv->rps.cur_freq <= dev_priv->rps.min_freq_softlimit)
+ if (dev_priv->rps.cur_freq <= val)
return;
/* Mask turbo interrupt so that they will not come in between */
@@ -3818,10 +4043,9 @@
vlv_force_gfx_clock(dev_priv, true);
- dev_priv->rps.cur_freq = dev_priv->rps.min_freq_softlimit;
+ dev_priv->rps.cur_freq = val;
- vlv_punit_write(dev_priv, PUNIT_REG_GPU_FREQ_REQ,
- dev_priv->rps.min_freq_softlimit);
+ vlv_punit_write(dev_priv, PUNIT_REG_GPU_FREQ_REQ, val);
if (wait_for(((vlv_punit_read(dev_priv, PUNIT_REG_GPU_FREQ_STS))
& GENFREQSTATUS) == 0, 100))
@@ -3829,8 +4053,19 @@
vlv_force_gfx_clock(dev_priv, false);
- I915_WRITE(GEN6_PMINTRMSK,
- gen6_rps_pm_mask(dev_priv, dev_priv->rps.cur_freq));
+ I915_WRITE(GEN6_PMINTRMSK, gen6_rps_pm_mask(dev_priv, val));
+}
+
+void gen6_rps_busy(struct drm_i915_private *dev_priv)
+{
+ mutex_lock(&dev_priv->rps.hw_lock);
+ if (dev_priv->rps.enabled) {
+ if (dev_priv->pm_rps_events & (GEN6_PM_RP_DOWN_EI_EXPIRED | GEN6_PM_RP_UP_EI_EXPIRED))
+ gen6_rps_reset_ei(dev_priv);
+ I915_WRITE(GEN6_PMINTRMSK,
+ gen6_rps_pm_mask(dev_priv, dev_priv->rps.cur_freq));
+ }
+ mutex_unlock(&dev_priv->rps.hw_lock);
}
void gen6_rps_idle(struct drm_i915_private *dev_priv)
@@ -3842,46 +4077,34 @@
if (IS_VALLEYVIEW(dev))
vlv_set_rps_idle(dev_priv);
else
- gen6_set_rps(dev_priv->dev, dev_priv->rps.min_freq_softlimit);
+ gen6_set_rps(dev_priv->dev, dev_priv->rps.idle_freq);
dev_priv->rps.last_adj = 0;
+ I915_WRITE(GEN6_PMINTRMSK, 0xffffffff);
}
mutex_unlock(&dev_priv->rps.hw_lock);
}
void gen6_rps_boost(struct drm_i915_private *dev_priv)
{
- struct drm_device *dev = dev_priv->dev;
+ u32 val;
mutex_lock(&dev_priv->rps.hw_lock);
- if (dev_priv->rps.enabled) {
- if (IS_VALLEYVIEW(dev))
- valleyview_set_rps(dev_priv->dev, dev_priv->rps.max_freq_softlimit);
- else
- gen6_set_rps(dev_priv->dev, dev_priv->rps.max_freq_softlimit);
+ val = dev_priv->rps.max_freq_softlimit;
+ if (dev_priv->rps.enabled &&
+ dev_priv->mm.busy &&
+ dev_priv->rps.cur_freq < val) {
+ intel_set_rps(dev_priv->dev, val);
dev_priv->rps.last_adj = 0;
}
mutex_unlock(&dev_priv->rps.hw_lock);
}
-void valleyview_set_rps(struct drm_device *dev, u8 val)
+void intel_set_rps(struct drm_device *dev, u8 val)
{
- struct drm_i915_private *dev_priv = dev->dev_private;
-
- WARN_ON(!mutex_is_locked(&dev_priv->rps.hw_lock));
- WARN_ON(val > dev_priv->rps.max_freq_softlimit);
- WARN_ON(val < dev_priv->rps.min_freq_softlimit);
-
- if (WARN_ONCE(IS_CHERRYVIEW(dev) && (val & 1),
- "Odd GPU freq value\n"))
- val &= ~1;
-
- if (val != dev_priv->rps.cur_freq)
- vlv_punit_write(dev_priv, PUNIT_REG_GPU_FREQ_REQ, val);
-
- I915_WRITE(GEN6_PMINTRMSK, gen6_rps_pm_mask(dev_priv, val));
-
- dev_priv->rps.cur_freq = val;
- trace_intel_gpu_freq_change(intel_gpu_freq(dev_priv, val));
+ if (IS_VALLEYVIEW(dev))
+ valleyview_set_rps(dev, val);
+ else
+ gen6_set_rps(dev, val);
}
static void gen9_disable_rps(struct drm_device *dev)
@@ -3995,6 +4218,13 @@
dev_priv->rps.rp0_freq = (rp_state_cap >> 0) & 0xff;
dev_priv->rps.rp1_freq = (rp_state_cap >> 8) & 0xff;
dev_priv->rps.min_freq = (rp_state_cap >> 16) & 0xff;
+ if (IS_SKYLAKE(dev)) {
+ /* Store the frequency values in 16.66 MHz units, which is
+ * the natural hardware unit for SKL */
+ dev_priv->rps.rp0_freq *= GEN9_FREQ_SCALER;
+ dev_priv->rps.rp1_freq *= GEN9_FREQ_SCALER;
+ dev_priv->rps.min_freq *= GEN9_FREQ_SCALER;
+ }
/* hw_max = RP0 until we check for overclocking */
dev_priv->rps.max_freq = dev_priv->rps.rp0_freq;
@@ -4011,6 +4241,8 @@
dev_priv->rps.max_freq);
}
+ dev_priv->rps.idle_freq = dev_priv->rps.min_freq;
+
/* Preserve min/max settings in case of re-init */
if (dev_priv->rps.max_freq_softlimit == 0)
dev_priv->rps.max_freq_softlimit = dev_priv->rps.max_freq;
@@ -4035,23 +4267,21 @@
gen6_init_rps_frequencies(dev);
- I915_WRITE(GEN6_RPNSWREQ, 0xc800000);
- I915_WRITE(GEN6_RC_VIDEO_FREQ, 0xc800000);
+ /* Program defaults and thresholds for RPS */
+ I915_WRITE(GEN6_RC_VIDEO_FREQ,
+ GEN9_FREQUENCY(dev_priv->rps.rp1_freq));
- I915_WRITE(GEN6_RP_DOWN_TIMEOUT, 0xf4240);
- I915_WRITE(GEN6_RP_INTERRUPT_LIMITS, 0x12060000);
- I915_WRITE(GEN6_RP_UP_THRESHOLD, 0xe808);
- I915_WRITE(GEN6_RP_DOWN_THRESHOLD, 0x3bd08);
- I915_WRITE(GEN6_RP_UP_EI, 0x101d0);
- I915_WRITE(GEN6_RP_DOWN_EI, 0x55730);
+ /* 1 second timeout */
+ I915_WRITE(GEN6_RP_DOWN_TIMEOUT,
+ GT_INTERVAL_FROM_US(dev_priv, 1000000));
+
I915_WRITE(GEN6_RP_IDLE_HYSTERSIS, 0xa);
- I915_WRITE(GEN6_PMINTRMSK, 0x6);
- I915_WRITE(GEN6_RP_CONTROL, GEN6_RP_MEDIA_TURBO |
- GEN6_RP_MEDIA_HW_MODE | GEN6_RP_MEDIA_IS_GFX |
- GEN6_RP_ENABLE | GEN6_RP_UP_BUSY_AVG |
- GEN6_RP_DOWN_IDLE_AVG);
- gen6_enable_rps_interrupts(dev);
+ /* Lean on the gen6_set_rps() call below to program/set up the
+ * Up/Down EI & threshold registers, as well as the RP_CONTROL,
+ * RP_INTERRUPT_LIMITS & RPNSWREQ registers */
+ dev_priv->rps.power = HIGH_POWER; /* force a reset */
+ gen6_set_rps(dev_priv->dev, dev_priv->rps.min_freq_softlimit);
intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL);
}
@@ -4179,7 +4409,7 @@
/* 6: Ring frequency + overclocking (our driver does this later */
dev_priv->rps.power = HIGH_POWER; /* force a reset */
- gen6_set_rps(dev_priv->dev, dev_priv->rps.min_freq_softlimit);
+ gen6_set_rps(dev_priv->dev, dev_priv->rps.idle_freq);
intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL);
}
@@ -4273,7 +4503,7 @@
}
dev_priv->rps.power = HIGH_POWER; /* force a reset */
- gen6_set_rps(dev_priv->dev, dev_priv->rps.min_freq_softlimit);
+ gen6_set_rps(dev_priv->dev, dev_priv->rps.idle_freq);
rc6vids = 0;
ret = sandybridge_pcode_read(dev_priv, GEN6_PCODE_READ_RC6VIDS, &rc6vids);
@@ -4638,6 +4868,8 @@
intel_gpu_freq(dev_priv, dev_priv->rps.min_freq),
dev_priv->rps.min_freq);
+ dev_priv->rps.idle_freq = dev_priv->rps.min_freq;
+
/* Preserve min/max settings in case of re-init */
if (dev_priv->rps.max_freq_softlimit == 0)
dev_priv->rps.max_freq_softlimit = dev_priv->rps.max_freq;
@@ -4713,6 +4945,8 @@
dev_priv->rps.min_freq) & 1,
"Odd GPU freq values\n");
+ dev_priv->rps.idle_freq = dev_priv->rps.min_freq;
+
/* Preserve min/max settings in case of re-init */
if (dev_priv->rps.max_freq_softlimit == 0)
dev_priv->rps.max_freq_softlimit = dev_priv->rps.max_freq;
@@ -4904,124 +5138,6 @@
intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL);
}
-void ironlake_teardown_rc6(struct drm_device *dev)
-{
- struct drm_i915_private *dev_priv = dev->dev_private;
-
- if (dev_priv->ips.renderctx) {
- i915_gem_object_ggtt_unpin(dev_priv->ips.renderctx);
- drm_gem_object_unreference(&dev_priv->ips.renderctx->base);
- dev_priv->ips.renderctx = NULL;
- }
-
- if (dev_priv->ips.pwrctx) {
- i915_gem_object_ggtt_unpin(dev_priv->ips.pwrctx);
- drm_gem_object_unreference(&dev_priv->ips.pwrctx->base);
- dev_priv->ips.pwrctx = NULL;
- }
-}
-
-static void ironlake_disable_rc6(struct drm_device *dev)
-{
- struct drm_i915_private *dev_priv = dev->dev_private;
-
- if (I915_READ(PWRCTXA)) {
- /* Wake the GPU, prevent RC6, then restore RSTDBYCTL */
- I915_WRITE(RSTDBYCTL, I915_READ(RSTDBYCTL) | RCX_SW_EXIT);
- wait_for(((I915_READ(RSTDBYCTL) & RSX_STATUS_MASK) == RSX_STATUS_ON),
- 50);
-
- I915_WRITE(PWRCTXA, 0);
- POSTING_READ(PWRCTXA);
-
- I915_WRITE(RSTDBYCTL, I915_READ(RSTDBYCTL) & ~RCX_SW_EXIT);
- POSTING_READ(RSTDBYCTL);
- }
-}
-
-static int ironlake_setup_rc6(struct drm_device *dev)
-{
- struct drm_i915_private *dev_priv = dev->dev_private;
-
- if (dev_priv->ips.renderctx == NULL)
- dev_priv->ips.renderctx = intel_alloc_context_page(dev);
- if (!dev_priv->ips.renderctx)
- return -ENOMEM;
-
- if (dev_priv->ips.pwrctx == NULL)
- dev_priv->ips.pwrctx = intel_alloc_context_page(dev);
- if (!dev_priv->ips.pwrctx) {
- ironlake_teardown_rc6(dev);
- return -ENOMEM;
- }
-
- return 0;
-}
-
-static void ironlake_enable_rc6(struct drm_device *dev)
-{
- struct drm_i915_private *dev_priv = dev->dev_private;
- struct intel_engine_cs *ring = &dev_priv->ring[RCS];
- bool was_interruptible;
- int ret;
-
- /* rc6 disabled by default due to repeated reports of hanging during
- * boot and resume.
- */
- if (!intel_enable_rc6(dev))
- return;
-
- WARN_ON(!mutex_is_locked(&dev->struct_mutex));
-
- ret = ironlake_setup_rc6(dev);
- if (ret)
- return;
-
- was_interruptible = dev_priv->mm.interruptible;
- dev_priv->mm.interruptible = false;
-
- /*
- * GPU can automatically power down the render unit if given a page
- * to save state.
- */
- ret = intel_ring_begin(ring, 6);
- if (ret) {
- ironlake_teardown_rc6(dev);
- dev_priv->mm.interruptible = was_interruptible;
- return;
- }
-
- intel_ring_emit(ring, MI_SUSPEND_FLUSH | MI_SUSPEND_FLUSH_EN);
- intel_ring_emit(ring, MI_SET_CONTEXT);
- intel_ring_emit(ring, i915_gem_obj_ggtt_offset(dev_priv->ips.renderctx) |
- MI_MM_SPACE_GTT |
- MI_SAVE_EXT_STATE_EN |
- MI_RESTORE_EXT_STATE_EN |
- MI_RESTORE_INHIBIT);
- intel_ring_emit(ring, MI_SUSPEND_FLUSH);
- intel_ring_emit(ring, MI_NOOP);
- intel_ring_emit(ring, MI_FLUSH);
- intel_ring_advance(ring);
-
- /*
- * Wait for the command parser to advance past MI_SET_CONTEXT. The HW
- * does an implicit flush, combined with MI_FLUSH above, it should be
- * safe to assume that renderctx is valid
- */
- ret = intel_ring_idle(ring);
- dev_priv->mm.interruptible = was_interruptible;
- if (ret) {
- DRM_ERROR("failed to enable ironlake power savings\n");
- ironlake_teardown_rc6(dev);
- return;
- }
-
- I915_WRITE(PWRCTXA, i915_gem_obj_ggtt_offset(dev_priv->ips.pwrctx) | PWRCTX_EN);
- I915_WRITE(RSTDBYCTL, I915_READ(RSTDBYCTL) & ~RCX_SW_EXIT);
-
- intel_print_rc6_info(dev, GEN6_RC_CTL_RC6_ENABLE);
-}
-
static unsigned long intel_pxfreq(u32 vidfreq)
{
unsigned long freq;
@@ -5534,12 +5650,7 @@
flush_delayed_work(&dev_priv->rps.delayed_resume_work);
- /*
- * TODO: disable RPS interrupts on GEN9+ too once RPS support
- * is added for it.
- */
- if (INTEL_INFO(dev)->gen < 9)
- gen6_disable_rps_interrupts(dev);
+ gen6_disable_rps_interrupts(dev);
}
/**
@@ -5569,7 +5680,6 @@
if (IS_IRONLAKE_M(dev)) {
ironlake_disable_drps(dev);
- ironlake_disable_rc6(dev);
} else if (INTEL_INFO(dev)->gen >= 6) {
intel_suspend_gt_powersave(dev);
@@ -5597,12 +5707,7 @@
mutex_lock(&dev_priv->rps.hw_lock);
- /*
- * TODO: reset/enable RPS interrupts on GEN9+ too, once RPS support is
- * added for it.
- */
- if (INTEL_INFO(dev)->gen < 9)
- gen6_reset_rps_interrupts(dev);
+ gen6_reset_rps_interrupts(dev);
if (IS_CHERRYVIEW(dev)) {
cherryview_enable_rps(dev);
@@ -5619,10 +5724,16 @@
gen6_enable_rps(dev);
__gen6_update_ring_freq(dev);
}
+
+ WARN_ON(dev_priv->rps.max_freq < dev_priv->rps.min_freq);
+ WARN_ON(dev_priv->rps.idle_freq > dev_priv->rps.max_freq);
+
+ WARN_ON(dev_priv->rps.efficient_freq < dev_priv->rps.min_freq);
+ WARN_ON(dev_priv->rps.efficient_freq > dev_priv->rps.max_freq);
+
dev_priv->rps.enabled = true;
- if (INTEL_INFO(dev)->gen < 9)
- gen6_enable_rps_interrupts(dev);
+ gen6_enable_rps_interrupts(dev);
mutex_unlock(&dev_priv->rps.hw_lock);
@@ -5633,10 +5744,13 @@
{
struct drm_i915_private *dev_priv = dev->dev_private;
+ /* Powersaving is controlled by the host when inside a VM */
+ if (intel_vgpu_active(dev))
+ return;
+
if (IS_IRONLAKE_M(dev)) {
mutex_lock(&dev->struct_mutex);
ironlake_enable_drps(dev);
- ironlake_enable_rc6(dev);
intel_init_emon(dev);
mutex_unlock(&dev->struct_mutex);
} else if (INTEL_INFO(dev)->gen >= 6) {
@@ -6169,11 +6283,22 @@
gen6_check_mch_setup(dev);
}
+static void vlv_init_display_clock_gating(struct drm_i915_private *dev_priv)
+{
+ I915_WRITE(DSPCLK_GATE_D, VRHUNIT_CLOCK_GATE_DISABLE);
+
+ /*
+ * Disable trickle feed and enable pnd deadline calculation
+ */
+ I915_WRITE(MI_ARB_VLV, MI_ARB_DISPLAY_TRICKLE_FEED_DISABLE);
+ I915_WRITE(CBR1_VLV, 0);
+}
+
static void valleyview_init_clock_gating(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
- I915_WRITE(DSPCLK_GATE_D, VRHUNIT_CLOCK_GATE_DISABLE);
+ vlv_init_display_clock_gating(dev_priv);
/* WaDisableEarlyCull:vlv */
I915_WRITE(_3D_CHICKEN3,
@@ -6221,8 +6346,6 @@
I915_WRITE(GEN7_UCGCTL4,
I915_READ(GEN7_UCGCTL4) | GEN7_L3BANK2X_CLOCK_GATE_DISABLE);
- I915_WRITE(MI_ARB_VLV, MI_ARB_DISPLAY_TRICKLE_FEED_DISABLE);
-
/*
* BSpec says this must be set, even though
* WaDisable4x2SubspanOptimization isn't listed for VLV.
@@ -6259,9 +6382,7 @@
{
struct drm_i915_private *dev_priv = dev->dev_private;
- I915_WRITE(DSPCLK_GATE_D, VRHUNIT_CLOCK_GATE_DISABLE);
-
- I915_WRITE(MI_ARB_VLV, MI_ARB_DISPLAY_TRICKLE_FEED_DISABLE);
+ vlv_init_display_clock_gating(dev_priv);
/* WaVSRefCountFullforceMissDisable:chv */
/* WaDSRefCountFullforceMissDisable:chv */
@@ -6396,7 +6517,8 @@
{
struct drm_i915_private *dev_priv = dev->dev_private;
- dev_priv->display.init_clock_gating(dev);
+ if (dev_priv->display.init_clock_gating)
+ dev_priv->display.init_clock_gating(dev);
}
void intel_suspend_hw(struct drm_device *dev)
@@ -6422,7 +6544,7 @@
if (INTEL_INFO(dev)->gen >= 9) {
skl_setup_wm_latency(dev);
- dev_priv->display.init_clock_gating = gen9_init_clock_gating;
+ dev_priv->display.init_clock_gating = skl_init_clock_gating;
dev_priv->display.update_wm = skl_update_wm;
dev_priv->display.update_sprite_wm = skl_update_sprite_wm;
} else if (HAS_PCH_SPLIT(dev)) {
@@ -6450,7 +6572,7 @@
else if (INTEL_INFO(dev)->gen == 8)
dev_priv->display.init_clock_gating = broadwell_init_clock_gating;
} else if (IS_CHERRYVIEW(dev)) {
- dev_priv->display.update_wm = cherryview_update_wm;
+ dev_priv->display.update_wm = valleyview_update_wm;
dev_priv->display.update_sprite_wm = valleyview_update_sprite_wm;
dev_priv->display.init_clock_gating =
cherryview_init_clock_gating;
@@ -6618,7 +6740,9 @@
int intel_gpu_freq(struct drm_i915_private *dev_priv, int val)
{
- if (IS_CHERRYVIEW(dev_priv->dev))
+ if (IS_GEN9(dev_priv->dev))
+ return (val * GT_FREQUENCY_MULTIPLIER) / GEN9_FREQ_SCALER;
+ else if (IS_CHERRYVIEW(dev_priv->dev))
return chv_gpu_freq(dev_priv, val);
else if (IS_VALLEYVIEW(dev_priv->dev))
return byt_gpu_freq(dev_priv, val);
@@ -6628,7 +6752,9 @@
int intel_freq_opcode(struct drm_i915_private *dev_priv, int val)
{
- if (IS_CHERRYVIEW(dev_priv->dev))
+ if (IS_GEN9(dev_priv->dev))
+ return (val * GEN9_FREQ_SCALER) / GT_FREQUENCY_MULTIPLIER;
+ else if (IS_CHERRYVIEW(dev_priv->dev))
return chv_freq_opcode(dev_priv, val);
else if (IS_VALLEYVIEW(dev_priv->dev))
return byt_freq_opcode(dev_priv, val);
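/*
 * Round-trip sketch of the gen9 branches above (GEN9_FREQ_SCALER is
 * assumed to be 3 and GT_FREQUENCY_MULTIPLIER 50): a ratio of 66 in
 * 16.66 MHz units maps to intel_gpu_freq() = 66 * 50 / 3 = 1100 MHz,
 * and intel_freq_opcode(1100) = 1100 * 3 / 50 = 66 recovers the
 * original opcode, matching the GEN9_FREQ_SCALER scaling applied to
 * rp0/rp1/min in gen6_init_rps_frequencies().
 */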
diff --git a/drivers/gpu/drm/i915/intel_psr.c b/drivers/gpu/drm/i915/intel_psr.c
index b9f40c2..a8f9348 100644
--- a/drivers/gpu/drm/i915/intel_psr.c
+++ b/drivers/gpu/drm/i915/intel_psr.c
@@ -532,8 +532,6 @@
WARN_ON(!(val & EDP_PSR_ENABLE));
I915_WRITE(EDP_PSR_CTL(dev), val & ~EDP_PSR_ENABLE);
-
- dev_priv->psr.active = false;
} else {
val = I915_READ(VLV_PSRCTL(pipe));
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index e5b3c6d..441e250 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -317,29 +317,6 @@
return 0;
}
-static int gen7_ring_fbc_flush(struct intel_engine_cs *ring, u32 value)
-{
- int ret;
-
- if (!ring->fbc_dirty)
- return 0;
-
- ret = intel_ring_begin(ring, 6);
- if (ret)
- return ret;
- /* WaFbcNukeOn3DBlt:ivb/hsw */
- intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(1));
- intel_ring_emit(ring, MSG_FBC_REND_STATE);
- intel_ring_emit(ring, value);
- intel_ring_emit(ring, MI_STORE_REGISTER_MEM(1) | MI_SRM_LRM_GLOBAL_GTT);
- intel_ring_emit(ring, MSG_FBC_REND_STATE);
- intel_ring_emit(ring, ring->scratch.gtt_offset + 256);
- intel_ring_advance(ring);
-
- ring->fbc_dirty = false;
- return 0;
-}
-
static int
gen7_render_ring_flush(struct intel_engine_cs *ring,
u32 invalidate_domains, u32 flush_domains)
@@ -398,9 +375,6 @@
intel_ring_emit(ring, 0);
intel_ring_advance(ring);
- if (!invalidate_domains && flush_domains)
- return gen7_ring_fbc_flush(ring, FBC_REND_NUKE);
-
return 0;
}
@@ -458,14 +432,7 @@
return ret;
}
- ret = gen8_emit_pipe_control(ring, flags, scratch_addr);
- if (ret)
- return ret;
-
- if (!invalidate_domains && flush_domains)
- return gen7_ring_fbc_flush(ring, FBC_REND_NUKE);
-
- return 0;
+ return gen8_emit_pipe_control(ring, flags, scratch_addr);
}
static void ring_write_tail(struct intel_engine_cs *ring,
@@ -502,6 +469,68 @@
I915_WRITE(HWS_PGA, addr);
}
+static void intel_ring_setup_status_page(struct intel_engine_cs *ring)
+{
+ struct drm_device *dev = ring->dev;
+ struct drm_i915_private *dev_priv = ring->dev->dev_private;
+ u32 mmio = 0;
+
+ /* The ring status page addresses are no longer next to the rest of
+ * the ring registers as of gen7.
+ */
+ if (IS_GEN7(dev)) {
+ switch (ring->id) {
+ case RCS:
+ mmio = RENDER_HWS_PGA_GEN7;
+ break;
+ case BCS:
+ mmio = BLT_HWS_PGA_GEN7;
+ break;
+ /*
+ * VCS2 actually doesn't exist on Gen7. It is listed here only
+ * to shut up gcc's switch check warning.
+ */
+ case VCS2:
+ case VCS:
+ mmio = BSD_HWS_PGA_GEN7;
+ break;
+ case VECS:
+ mmio = VEBOX_HWS_PGA_GEN7;
+ break;
+ }
+ } else if (IS_GEN6(ring->dev)) {
+ mmio = RING_HWS_PGA_GEN6(ring->mmio_base);
+ } else {
+ /* XXX: gen8 returns to sanity */
+ mmio = RING_HWS_PGA(ring->mmio_base);
+ }
+
+ I915_WRITE(mmio, (u32)ring->status_page.gfx_addr);
+ POSTING_READ(mmio);
+
+ /*
+ * Flush the TLB for this page
+ *
+ * FIXME: These two bits have disappeared on gen8, so a question
+ * arises: do we still need this and if so how should we go about
+ * invalidating the TLB?
+ */
+ if (INTEL_INFO(dev)->gen >= 6 && INTEL_INFO(dev)->gen < 8) {
+ u32 reg = RING_INSTPM(ring->mmio_base);
+
+ /* ring should be idle before issuing a sync flush */
+ WARN_ON((I915_READ_MODE(ring) & MODE_IDLE) == 0);
+
+ I915_WRITE(reg,
+ _MASKED_BIT_ENABLE(INSTPM_TLB_INVALIDATE |
+ INSTPM_SYNC_FLUSH));
+ if (wait_for((I915_READ(reg) & INSTPM_SYNC_FLUSH) == 0,
+ 1000))
+ DRM_ERROR("%s: wait for SyncFlush to complete for TLB invalidation timed out\n",
+ ring->name);
+ }
+}
+
static bool stop_ring(struct intel_engine_cs *ring)
{
struct drm_i915_private *dev_priv = to_i915(ring->dev);
@@ -788,12 +817,14 @@
* workaround for a possible hang in the unlikely event a TLB
* invalidation occurs during a PSD flush.
*/
- /* WaForceEnableNonCoherent:bdw */
- /* WaHdcDisableFetchWhenMasked:bdw */
- /* WaDisableFenceDestinationToSLM:bdw (GT3 pre-production) */
WA_SET_BIT_MASKED(HDC_CHICKEN0,
+ /* WaForceEnableNonCoherent:bdw */
HDC_FORCE_NON_COHERENT |
+ /* WaForceContextSaveRestoreNonCoherent:bdw */
+ HDC_FORCE_CONTEXT_SAVE_RESTORE_NON_COHERENT |
+ /* WaHdcDisableFetchWhenMasked:bdw */
HDC_DONOT_FETCH_MEM_WHEN_MASKED |
+ /* WaDisableFenceDestinationToSLM:bdw (pre-prod) */
(IS_BDW_GT3(dev) ? HDC_FENCE_DEST_SLM_DISABLE : 0));
/* From the Haswell PRM, Command Reference: Registers, CACHE_MODE_0:
@@ -870,9 +901,132 @@
GEN6_WIZ_HASHING_MASK,
GEN6_WIZ_HASHING_16x4);
+ if (INTEL_REVID(dev) == SKL_REVID_C0 ||
+ INTEL_REVID(dev) == SKL_REVID_D0)
+ /* WaBarrierPerformanceFixDisable:skl */
+ WA_SET_BIT_MASKED(HDC_CHICKEN0,
+ HDC_FENCE_DEST_SLM_DISABLE |
+ HDC_BARRIER_PERFORMANCE_DISABLE);
+
return 0;
}
+static int gen9_init_workarounds(struct intel_engine_cs *ring)
+{
+ struct drm_device *dev = ring->dev;
+ struct drm_i915_private *dev_priv = dev->dev_private;
+
+ /* WaDisablePartialInstShootdown:skl */
+ WA_SET_BIT_MASKED(GEN8_ROW_CHICKEN,
+ PARTIAL_INSTRUCTION_SHOOTDOWN_DISABLE);
+
+ /* Syncing dependencies between camera and graphics */
+ WA_SET_BIT_MASKED(HALF_SLICE_CHICKEN3,
+ GEN9_DISABLE_OCL_OOB_SUPPRESS_LOGIC);
+
+ if (INTEL_REVID(dev) == SKL_REVID_A0 ||
+ INTEL_REVID(dev) == SKL_REVID_B0) {
+ /* WaDisableDgMirrorFixInHalfSliceChicken5:skl */
+ WA_CLR_BIT_MASKED(GEN9_HALF_SLICE_CHICKEN5,
+ GEN9_DG_MIRROR_FIX_ENABLE);
+ }
+
+ if (IS_SKYLAKE(dev) && INTEL_REVID(dev) <= SKL_REVID_B0) {
+ /* WaSetDisablePixMaskCammingAndRhwoInCommonSliceChicken:skl */
+ WA_SET_BIT_MASKED(GEN7_COMMON_SLICE_CHICKEN1,
+ GEN9_RHWO_OPTIMIZATION_DISABLE);
+ WA_SET_BIT_MASKED(GEN9_SLICE_COMMON_ECO_CHICKEN0,
+ DISABLE_PIXEL_MASK_CAMMING);
+ }
+
+ if (INTEL_REVID(dev) >= SKL_REVID_C0) {
+ /* WaEnableYV12BugFixInHalfSliceChicken7:skl */
+ WA_SET_BIT_MASKED(GEN9_HALF_SLICE_CHICKEN7,
+ GEN9_ENABLE_YV12_BUGFIX);
+ }
+
+ if (INTEL_REVID(dev) <= SKL_REVID_D0) {
+ /*
+ * Use Force Non-Coherent whenever executing a 3D context. This
+ * is a workaround for a possible hang in the unlikely event
+ * a TLB invalidation occurs during a PSD flush.
+ */
+ /* WaForceEnableNonCoherent:skl */
+ WA_SET_BIT_MASKED(HDC_CHICKEN0,
+ HDC_FORCE_NON_COHERENT);
+ }
+
+ /* Wa4x4STCOptimizationDisable:skl */
+ WA_SET_BIT_MASKED(CACHE_MODE_1, GEN8_4x4_STC_OPTIMIZATION_DISABLE);
+
+ /* WaDisablePartialResolveInVc:skl */
+ WA_SET_BIT_MASKED(CACHE_MODE_1, GEN9_PARTIAL_RESOLVE_IN_VC_DISABLE);
+
+ /* WaCcsTlbPrefetchDisable:skl */
+ WA_CLR_BIT_MASKED(GEN9_HALF_SLICE_CHICKEN5,
+ GEN9_CCS_TLB_PREFETCH_ENABLE);
+
+ return 0;
+}
+
+static int skl_tune_iz_hashing(struct intel_engine_cs *ring)
+{
+ struct drm_device *dev = ring->dev;
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ u8 vals[3] = { 0, 0, 0 };
+ unsigned int i;
+
+ for (i = 0; i < 3; i++) {
+ u8 ss;
+
+ /*
+ * Only consider slices where one, and only one, subslice has 7
+ * EUs
+ */
+ if (hweight8(dev_priv->info.subslice_7eu[i]) != 1)
+ continue;
+
+ /*
+ * subslice_7eu[i] != 0 (because of the check above) and
+ * ss_max == 4 (maximum number of subslices possible per slice)
+ *
+ * -> 0 <= ss <= 3;
+ */
+ ss = ffs(dev_priv->info.subslice_7eu[i]) - 1;
+ vals[i] = 3 - ss;
+ }
+
+ if (vals[0] == 0 && vals[1] == 0 && vals[2] == 0)
+ return 0;
+
+ /* Tune IZ hashing. See intel_device_info_runtime_init() */
+ WA_SET_FIELD_MASKED(GEN7_GT_MODE,
+ GEN9_IZ_HASHING_MASK(2) |
+ GEN9_IZ_HASHING_MASK(1) |
+ GEN9_IZ_HASHING_MASK(0),
+ GEN9_IZ_HASHING(2, vals[2]) |
+ GEN9_IZ_HASHING(1, vals[1]) |
+ GEN9_IZ_HASHING(0, vals[0]));
+
+ return 0;
+}
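/*
 * A worked example of the loop above (assumed fuse values): with
 * subslice_7eu = { 0x2, 0x0, 0x1 }, slice 0 has exactly one 7-EU
 * subslice at bit 1, so vals[0] = 3 - 1 = 2; slice 1 has none and is
 * skipped (vals[1] = 0); slice 2 has one at bit 0, so vals[2] = 3.
 * The WA_SET_FIELD_MASKED() call then programs
 * GEN9_IZ_HASHING(0, 2) | GEN9_IZ_HASHING(1, 0) | GEN9_IZ_HASHING(2, 3)
 * into GEN7_GT_MODE.
 */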
+
+static int skl_init_workarounds(struct intel_engine_cs *ring)
+{
+ struct drm_device *dev = ring->dev;
+ struct drm_i915_private *dev_priv = dev->dev_private;
+
+ gen9_init_workarounds(ring);
+
+ /* WaDisablePowerCompilerClockGating:skl */
+ if (INTEL_REVID(dev) == SKL_REVID_B0)
+ WA_SET_BIT_MASKED(HIZ_CHICKEN,
+ BDW_HIZ_POWER_COMPILER_CLOCK_GATING_DISABLE);
+
+ return skl_tune_iz_hashing(ring);
+}
+
int init_workarounds_ring(struct intel_engine_cs *ring)
{
struct drm_device *dev = ring->dev;
@@ -888,6 +1042,11 @@
if (IS_CHERRYVIEW(dev))
return chv_init_workarounds(ring);
+ if (IS_SKYLAKE(dev))
+ return skl_init_workarounds(ring);
+ else if (IS_GEN9(dev))
+ return gen9_init_workarounds(ring);
+
return 0;
}
@@ -1386,68 +1545,6 @@
spin_unlock_irqrestore(&dev_priv->irq_lock, flags);
}
-void intel_ring_setup_status_page(struct intel_engine_cs *ring)
-{
- struct drm_device *dev = ring->dev;
- struct drm_i915_private *dev_priv = ring->dev->dev_private;
- u32 mmio = 0;
-
- /* The ring status page addresses are no longer next to the rest of
- * the ring registers as of gen7.
- */
- if (IS_GEN7(dev)) {
- switch (ring->id) {
- case RCS:
- mmio = RENDER_HWS_PGA_GEN7;
- break;
- case BCS:
- mmio = BLT_HWS_PGA_GEN7;
- break;
- /*
- * VCS2 actually doesn't exist on Gen7. Only shut up
- * gcc switch check warning
- */
- case VCS2:
- case VCS:
- mmio = BSD_HWS_PGA_GEN7;
- break;
- case VECS:
- mmio = VEBOX_HWS_PGA_GEN7;
- break;
- }
- } else if (IS_GEN6(ring->dev)) {
- mmio = RING_HWS_PGA_GEN6(ring->mmio_base);
- } else {
- /* XXX: gen8 returns to sanity */
- mmio = RING_HWS_PGA(ring->mmio_base);
- }
-
- I915_WRITE(mmio, (u32)ring->status_page.gfx_addr);
- POSTING_READ(mmio);
-
- /*
- * Flush the TLB for this page
- *
- * FIXME: These two bits have disappeared on gen8, so a question
- * arises: do we still need this and if so how should we go about
- * invalidating the TLB?
- */
- if (INTEL_INFO(dev)->gen >= 6 && INTEL_INFO(dev)->gen < 8) {
- u32 reg = RING_INSTPM(ring->mmio_base);
-
- /* ring should be idle before issuing a sync flush*/
- WARN_ON((I915_READ_MODE(ring) & MODE_IDLE) == 0);
-
- I915_WRITE(reg,
- _MASKED_BIT_ENABLE(INSTPM_TLB_INVALIDATE |
- INSTPM_SYNC_FLUSH));
- if (wait_for((I915_READ(reg) & INSTPM_SYNC_FLUSH) == 0,
- 1000))
- DRM_ERROR("%s: wait for SyncFlush to complete for TLB invalidation timed out\n",
- ring->name);
- }
-}
-
static int
bsd_ring_flush(struct intel_engine_cs *ring,
u32 invalidate_domains,
@@ -1611,7 +1708,7 @@
static int
i965_dispatch_execbuffer(struct intel_engine_cs *ring,
u64 offset, u32 length,
- unsigned flags)
+ unsigned dispatch_flags)
{
int ret;
@@ -1622,7 +1719,8 @@
intel_ring_emit(ring,
MI_BATCH_BUFFER_START |
MI_BATCH_GTT |
- (flags & I915_DISPATCH_SECURE ? 0 : MI_BATCH_NON_SECURE_I965));
+ (dispatch_flags & I915_DISPATCH_SECURE ?
+ 0 : MI_BATCH_NON_SECURE_I965));
intel_ring_emit(ring, offset);
intel_ring_advance(ring);
@@ -1635,8 +1733,8 @@
#define I830_WA_SIZE max(I830_TLB_ENTRIES*4096, I830_BATCH_LIMIT)
static int
i830_dispatch_execbuffer(struct intel_engine_cs *ring,
- u64 offset, u32 len,
- unsigned flags)
+ u64 offset, u32 len,
+ unsigned dispatch_flags)
{
u32 cs_offset = ring->scratch.gtt_offset;
int ret;
@@ -1654,7 +1752,7 @@
intel_ring_emit(ring, MI_NOOP);
intel_ring_advance(ring);
- if ((flags & I915_DISPATCH_PINNED) == 0) {
+ if ((dispatch_flags & I915_DISPATCH_PINNED) == 0) {
if (len > I830_BATCH_LIMIT)
return -ENOSPC;
@@ -1686,7 +1784,8 @@
return ret;
intel_ring_emit(ring, MI_BATCH_BUFFER);
- intel_ring_emit(ring, offset | (flags & I915_DISPATCH_SECURE ? 0 : MI_BATCH_NON_SECURE));
+ intel_ring_emit(ring, offset | (dispatch_flags & I915_DISPATCH_SECURE ?
+ 0 : MI_BATCH_NON_SECURE));
intel_ring_emit(ring, offset + len - 8);
intel_ring_emit(ring, MI_NOOP);
intel_ring_advance(ring);
@@ -1697,7 +1796,7 @@
static int
i915_dispatch_execbuffer(struct intel_engine_cs *ring,
u64 offset, u32 len,
- unsigned flags)
+ unsigned dispatch_flags)
{
int ret;
@@ -1706,7 +1805,8 @@
return ret;
intel_ring_emit(ring, MI_BATCH_BUFFER_START | MI_BATCH_GTT);
- intel_ring_emit(ring, offset | (flags & I915_DISPATCH_SECURE ? 0 : MI_BATCH_NON_SECURE));
+ intel_ring_emit(ring, offset | (dispatch_flags & I915_DISPATCH_SECURE ?
+ 0 : MI_BATCH_NON_SECURE));
intel_ring_advance(ring);
return 0;
@@ -2097,6 +2197,7 @@
kref_init(&request->ref);
request->ring = ring;
+ request->ringbuf = ring->buffer;
request->uniq = dev_private->request_uniq++;
ret = i915_gem_get_seqno(ring->dev, &request->seqno);
@@ -2273,9 +2374,10 @@
static int
gen8_ring_dispatch_execbuffer(struct intel_engine_cs *ring,
u64 offset, u32 len,
- unsigned flags)
+ unsigned dispatch_flags)
{
- bool ppgtt = USES_PPGTT(ring->dev) && !(flags & I915_DISPATCH_SECURE);
+ bool ppgtt = USES_PPGTT(ring->dev) &&
+ !(dispatch_flags & I915_DISPATCH_SECURE);
int ret;
ret = intel_ring_begin(ring, 4);
@@ -2294,8 +2396,8 @@
static int
hsw_ring_dispatch_execbuffer(struct intel_engine_cs *ring,
- u64 offset, u32 len,
- unsigned flags)
+ u64 offset, u32 len,
+ unsigned dispatch_flags)
{
int ret;
@@ -2305,7 +2407,7 @@
intel_ring_emit(ring,
MI_BATCH_BUFFER_START |
- (flags & I915_DISPATCH_SECURE ?
+ (dispatch_flags & I915_DISPATCH_SECURE ?
0 : MI_BATCH_PPGTT_HSW | MI_BATCH_NON_SECURE_HSW));
/* bit0-7 is the length on GEN6+ */
intel_ring_emit(ring, offset);
@@ -2317,7 +2419,7 @@
static int
gen6_ring_dispatch_execbuffer(struct intel_engine_cs *ring,
u64 offset, u32 len,
- unsigned flags)
+ unsigned dispatch_flags)
{
int ret;
@@ -2327,7 +2429,8 @@
intel_ring_emit(ring,
MI_BATCH_BUFFER_START |
- (flags & I915_DISPATCH_SECURE ? 0 : MI_BATCH_NON_SECURE_I965));
+ (dispatch_flags & I915_DISPATCH_SECURE ?
+ 0 : MI_BATCH_NON_SECURE_I965));
/* bit0-7 is the length on GEN6+ */
intel_ring_emit(ring, offset);
intel_ring_advance(ring);
@@ -2341,7 +2444,6 @@
u32 invalidate, u32 flush)
{
struct drm_device *dev = ring->dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
uint32_t cmd;
int ret;
@@ -2350,7 +2452,7 @@
return ret;
cmd = MI_FLUSH_DW;
- if (INTEL_INFO(ring->dev)->gen >= 8)
+ if (INTEL_INFO(dev)->gen >= 8)
cmd += 1;
/* We always require a command barrier so that subsequent
@@ -2370,7 +2472,7 @@
cmd |= MI_INVALIDATE_TLB;
intel_ring_emit(ring, cmd);
intel_ring_emit(ring, I915_GEM_HWS_SCRATCH_ADDR | MI_FLUSH_DW_USE_GTT);
- if (INTEL_INFO(ring->dev)->gen >= 8) {
+ if (INTEL_INFO(dev)->gen >= 8) {
intel_ring_emit(ring, 0); /* upper addr */
intel_ring_emit(ring, 0); /* value */
} else {
@@ -2379,13 +2481,6 @@
}
intel_ring_advance(ring);
- if (!invalidate && flush) {
- if (IS_GEN7(dev))
- return gen7_ring_fbc_flush(ring, FBC_REND_CACHE_CLEAN);
- else if (IS_BROADWELL(dev))
- dev_priv->fbc.need_sw_cache_clean = true;
- }
-
return 0;
}
@@ -2612,19 +2707,13 @@
}
/**
- * Initialize the second BSD ring for Broadwell GT3.
- * It is noted that this only exists on Broadwell GT3.
+ * Initialize the second BSD ring (e.g. Broadwell GT3, Skylake GT3)
*/
int intel_init_bsd2_ring_buffer(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_engine_cs *ring = &dev_priv->ring[VCS2];
- if ((INTEL_INFO(dev)->gen != 8)) {
- DRM_ERROR("No dual-BSD ring on non-BDW machine\n");
- return -EINVAL;
- }
-
ring->name = "bsd2 ring";
ring->id = VCS2;
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index 714f3fd..c761fe0 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -164,7 +164,7 @@
u32 seqno);
int (*dispatch_execbuffer)(struct intel_engine_cs *ring,
u64 offset, u32 length,
- unsigned flags);
+ unsigned dispatch_flags);
#define I915_DISPATCH_SECURE 0x1
#define I915_DISPATCH_PINNED 0x2
void (*cleanup)(struct intel_engine_cs *ring);
@@ -242,7 +242,7 @@
u32 flush_domains);
int (*emit_bb_start)(struct intel_ringbuffer *ringbuf,
struct intel_context *ctx,
- u64 offset, unsigned flags);
+ u64 offset, unsigned dispatch_flags);
/**
* List of objects currently involved in rendering from the
@@ -267,7 +267,6 @@
*/
struct drm_i915_gem_request *outstanding_lazy_request;
bool gpu_caches_dirty;
- bool fbc_dirty;
wait_queue_head_t irq_queue;
@@ -373,11 +372,12 @@
* 0x06: ring 2 head pointer (915-class)
* 0x10-0x1b: Context status DWords (GM45)
* 0x1f: Last written status offset. (GM45)
+ * 0x20-0x2f: Reserved (Gen6+)
*
- * The area from dword 0x20 to 0x3ff is available for driver usage.
+ * The area from dword 0x30 to 0x3ff is available for driver usage.
*/
-#define I915_GEM_HWS_INDEX 0x20
-#define I915_GEM_HWS_SCRATCH_INDEX 0x30
+#define I915_GEM_HWS_INDEX 0x30
+#define I915_GEM_HWS_SCRATCH_INDEX 0x40
#define I915_GEM_HWS_SCRATCH_ADDR (I915_GEM_HWS_SCRATCH_INDEX << MI_STORE_DWORD_INDEX_SHIFT)
void intel_unpin_ringbuffer_obj(struct intel_ringbuffer *ringbuf);
@@ -425,7 +425,6 @@
int intel_init_vebox_ring_buffer(struct drm_device *dev);
u64 intel_ring_get_active_head(struct intel_engine_cs *ring);
-void intel_ring_setup_status_page(struct intel_engine_cs *ring);
int init_workarounds_ring(struct intel_engine_cs *ring);
diff --git a/drivers/gpu/drm/i915/intel_runtime_pm.c b/drivers/gpu/drm/i915/intel_runtime_pm.c
index 49695d7..ce00e69 100644
--- a/drivers/gpu/drm/i915/intel_runtime_pm.c
+++ b/drivers/gpu/drm/i915/intel_runtime_pm.c
@@ -194,8 +194,39 @@
outb(inb(VGA_MSR_READ), VGA_MSR_WRITE);
vga_put(dev->pdev, VGA_RSRC_LEGACY_IO);
- if (IS_BROADWELL(dev) || (INTEL_INFO(dev)->gen >= 9))
- gen8_irq_power_well_post_enable(dev_priv);
+ if (IS_BROADWELL(dev))
+ gen8_irq_power_well_post_enable(dev_priv,
+ 1 << PIPE_C | 1 << PIPE_B);
+}
+
+static void skl_power_well_post_enable(struct drm_i915_private *dev_priv,
+ struct i915_power_well *power_well)
+{
+ struct drm_device *dev = dev_priv->dev;
+
+ /*
+ * After we re-enable the power well, if we touch VGA register 0x3d5
+ * we'll get unclaimed register interrupts. This stops after we write
+ * anything to the VGA MSR register. The vgacon module uses this
+ * register all the time, so if we unbind our driver and, as a
+ * consequence, bind vgacon, we'll get stuck in an infinite loop at
+ * console_unlock(). So here we touch the VGA MSR register, making sure
+ * vgacon can keep working normally without triggering interrupts and
+ * error messages.
+ */
+ if (power_well->data == SKL_DISP_PW_2) {
+ vga_get_uninterruptible(dev->pdev, VGA_RSRC_LEGACY_IO);
+ outb(inb(VGA_MSR_READ), VGA_MSR_WRITE);
+ vga_put(dev->pdev, VGA_RSRC_LEGACY_IO);
+
+ gen8_irq_power_well_post_enable(dev_priv,
+ 1 << PIPE_C | 1 << PIPE_B);
+ }
+
+ if (power_well->data == SKL_DISP_PW_1) {
+ intel_prepare_ddi(dev);
+ gen8_irq_power_well_post_enable(dev_priv, 1 << PIPE_A);
+ }
}
static void hsw_set_power_well(struct drm_i915_private *dev_priv,
@@ -230,6 +261,141 @@
}
}
+#define SKL_DISPLAY_POWERWELL_2_POWER_DOMAINS ( \
+ BIT(POWER_DOMAIN_TRANSCODER_A) | \
+ BIT(POWER_DOMAIN_PIPE_B) | \
+ BIT(POWER_DOMAIN_TRANSCODER_B) | \
+ BIT(POWER_DOMAIN_PIPE_C) | \
+ BIT(POWER_DOMAIN_TRANSCODER_C) | \
+ BIT(POWER_DOMAIN_PIPE_B_PANEL_FITTER) | \
+ BIT(POWER_DOMAIN_PIPE_C_PANEL_FITTER) | \
+ BIT(POWER_DOMAIN_PORT_DDI_B_2_LANES) | \
+ BIT(POWER_DOMAIN_PORT_DDI_B_4_LANES) | \
+ BIT(POWER_DOMAIN_PORT_DDI_C_2_LANES) | \
+ BIT(POWER_DOMAIN_PORT_DDI_C_4_LANES) | \
+ BIT(POWER_DOMAIN_PORT_DDI_D_2_LANES) | \
+ BIT(POWER_DOMAIN_PORT_DDI_D_4_LANES) | \
+ BIT(POWER_DOMAIN_AUX_B) | \
+ BIT(POWER_DOMAIN_AUX_C) | \
+ BIT(POWER_DOMAIN_AUX_D) | \
+ BIT(POWER_DOMAIN_AUDIO) | \
+ BIT(POWER_DOMAIN_VGA) | \
+ BIT(POWER_DOMAIN_INIT))
+#define SKL_DISPLAY_POWERWELL_1_POWER_DOMAINS ( \
+ SKL_DISPLAY_POWERWELL_2_POWER_DOMAINS | \
+ BIT(POWER_DOMAIN_PLLS) | \
+ BIT(POWER_DOMAIN_PIPE_A) | \
+ BIT(POWER_DOMAIN_TRANSCODER_EDP) | \
+ BIT(POWER_DOMAIN_PIPE_A_PANEL_FITTER) | \
+ BIT(POWER_DOMAIN_PORT_DDI_A_2_LANES) | \
+ BIT(POWER_DOMAIN_PORT_DDI_A_4_LANES) | \
+ BIT(POWER_DOMAIN_AUX_A) | \
+ BIT(POWER_DOMAIN_INIT))
+#define SKL_DISPLAY_DDI_A_E_POWER_DOMAINS ( \
+ BIT(POWER_DOMAIN_PORT_DDI_A_2_LANES) | \
+ BIT(POWER_DOMAIN_PORT_DDI_A_4_LANES) | \
+ BIT(POWER_DOMAIN_INIT))
+#define SKL_DISPLAY_DDI_B_POWER_DOMAINS ( \
+ BIT(POWER_DOMAIN_PORT_DDI_B_2_LANES) | \
+ BIT(POWER_DOMAIN_PORT_DDI_B_4_LANES) | \
+ BIT(POWER_DOMAIN_INIT))
+#define SKL_DISPLAY_DDI_C_POWER_DOMAINS ( \
+ BIT(POWER_DOMAIN_PORT_DDI_C_2_LANES) | \
+ BIT(POWER_DOMAIN_PORT_DDI_C_4_LANES) | \
+ BIT(POWER_DOMAIN_INIT))
+#define SKL_DISPLAY_DDI_D_POWER_DOMAINS ( \
+ BIT(POWER_DOMAIN_PORT_DDI_D_2_LANES) | \
+ BIT(POWER_DOMAIN_PORT_DDI_D_4_LANES) | \
+ BIT(POWER_DOMAIN_INIT))
+#define SKL_DISPLAY_MISC_IO_POWER_DOMAINS ( \
+ SKL_DISPLAY_POWERWELL_1_POWER_DOMAINS)
+#define SKL_DISPLAY_ALWAYS_ON_POWER_DOMAINS ( \
+ (POWER_DOMAIN_MASK & ~(SKL_DISPLAY_POWERWELL_1_POWER_DOMAINS | \
+ SKL_DISPLAY_POWERWELL_2_POWER_DOMAINS | \
+ SKL_DISPLAY_DDI_A_E_POWER_DOMAINS | \
+ SKL_DISPLAY_DDI_B_POWER_DOMAINS | \
+ SKL_DISPLAY_DDI_C_POWER_DOMAINS | \
+ SKL_DISPLAY_DDI_D_POWER_DOMAINS | \
+ SKL_DISPLAY_MISC_IO_POWER_DOMAINS)) | \
+ BIT(POWER_DOMAIN_INIT))
+
+static void skl_set_power_well(struct drm_i915_private *dev_priv,
+ struct i915_power_well *power_well, bool enable)
+{
+ uint32_t tmp, fuse_status;
+ uint32_t req_mask, state_mask;
+ bool is_enabled, enable_requested, check_fuse_status = false;
+
+ tmp = I915_READ(HSW_PWR_WELL_DRIVER);
+ fuse_status = I915_READ(SKL_FUSE_STATUS);
+
+ switch (power_well->data) {
+ case SKL_DISP_PW_1:
+ if (wait_for((I915_READ(SKL_FUSE_STATUS) &
+ SKL_FUSE_PG0_DIST_STATUS), 1)) {
+ DRM_ERROR("PG0 not enabled\n");
+ return;
+ }
+ break;
+ case SKL_DISP_PW_2:
+ if (!(fuse_status & SKL_FUSE_PG1_DIST_STATUS)) {
+ DRM_ERROR("PG1 in disabled state\n");
+ return;
+ }
+ break;
+ case SKL_DISP_PW_DDI_A_E:
+ case SKL_DISP_PW_DDI_B:
+ case SKL_DISP_PW_DDI_C:
+ case SKL_DISP_PW_DDI_D:
+ case SKL_DISP_PW_MISC_IO:
+ break;
+ default:
+ WARN(1, "Unknown power well %lu\n", power_well->data);
+ return;
+ }
+
+ req_mask = SKL_POWER_WELL_REQ(power_well->data);
+ enable_requested = tmp & req_mask;
+ state_mask = SKL_POWER_WELL_STATE(power_well->data);
+ is_enabled = tmp & state_mask;
+
+ if (enable) {
+ if (!enable_requested) {
+ I915_WRITE(HSW_PWR_WELL_DRIVER, tmp | req_mask);
+ }
+
+ if (!is_enabled) {
+ DRM_DEBUG_KMS("Enabling %s\n", power_well->name);
+ if (wait_for((I915_READ(HSW_PWR_WELL_DRIVER) &
+ state_mask), 1))
+ DRM_ERROR("%s enable timeout\n",
+ power_well->name);
+ check_fuse_status = true;
+ }
+ } else {
+ if (enable_requested) {
+ I915_WRITE(HSW_PWR_WELL_DRIVER, tmp & ~req_mask);
+ POSTING_READ(HSW_PWR_WELL_DRIVER);
+ DRM_DEBUG_KMS("Disabling %s\n", power_well->name);
+ }
+ }
+
+ if (check_fuse_status) {
+ if (power_well->data == SKL_DISP_PW_1) {
+ if (wait_for((I915_READ(SKL_FUSE_STATUS) &
+ SKL_FUSE_PG1_DIST_STATUS), 1))
+ DRM_ERROR("PG1 distributing status timeout\n");
+ } else if (power_well->data == SKL_DISP_PW_2) {
+ if (wait_for((I915_READ(SKL_FUSE_STATUS) &
+ SKL_FUSE_PG2_DIST_STATUS), 1))
+ DRM_ERROR("PG2 distributing status timeout\n");
+ }
+ }
+
+ if (enable && !is_enabled)
+ skl_power_well_post_enable(dev_priv, power_well);
+}
+
static void hsw_power_well_sync_hw(struct drm_i915_private *dev_priv,
struct i915_power_well *power_well)
{
@@ -255,6 +421,36 @@
hsw_set_power_well(dev_priv, power_well, false);
}
+static bool skl_power_well_enabled(struct drm_i915_private *dev_priv,
+ struct i915_power_well *power_well)
+{
+ uint32_t mask = SKL_POWER_WELL_REQ(power_well->data) |
+ SKL_POWER_WELL_STATE(power_well->data);
+
+ return (I915_READ(HSW_PWR_WELL_DRIVER) & mask) == mask;
+}
+
+static void skl_power_well_sync_hw(struct drm_i915_private *dev_priv,
+ struct i915_power_well *power_well)
+{
+ skl_set_power_well(dev_priv, power_well, power_well->count > 0);
+
+ /* Clear any request made by BIOS as driver is taking over */
+ I915_WRITE(HSW_PWR_WELL_BIOS, 0);
+}
+
+static void skl_power_well_enable(struct drm_i915_private *dev_priv,
+ struct i915_power_well *power_well)
+{
+ skl_set_power_well(dev_priv, power_well, true);
+}
+
+static void skl_power_well_disable(struct drm_i915_private *dev_priv,
+ struct i915_power_well *power_well)
+{
+ skl_set_power_well(dev_priv, power_well, false);
+}
+
static void i9xx_always_on_power_well_noop(struct drm_i915_private *dev_priv,
struct i915_power_well *power_well)
{
@@ -829,6 +1025,13 @@
.is_enabled = hsw_power_well_enabled,
};
+static const struct i915_power_well_ops skl_power_well_ops = {
+ .sync_hw = skl_power_well_sync_hw,
+ .enable = skl_power_well_enable,
+ .disable = skl_power_well_disable,
+ .is_enabled = skl_power_well_enabled,
+};
+
static struct i915_power_well hsw_power_wells[] = {
{
.name = "always-on",
@@ -1059,6 +1262,57 @@
return NULL;
}
+static struct i915_power_well skl_power_wells[] = {
+ {
+ .name = "always-on",
+ .always_on = 1,
+ .domains = SKL_DISPLAY_ALWAYS_ON_POWER_DOMAINS,
+ .ops = &i9xx_always_on_power_well_ops,
+ },
+ {
+ .name = "power well 1",
+ .domains = SKL_DISPLAY_POWERWELL_1_POWER_DOMAINS,
+ .ops = &skl_power_well_ops,
+ .data = SKL_DISP_PW_1,
+ },
+ {
+ .name = "MISC IO power well",
+ .domains = SKL_DISPLAY_MISC_IO_POWER_DOMAINS,
+ .ops = &skl_power_well_ops,
+ .data = SKL_DISP_PW_MISC_IO,
+ },
+ {
+ .name = "power well 2",
+ .domains = SKL_DISPLAY_POWERWELL_2_POWER_DOMAINS,
+ .ops = &skl_power_well_ops,
+ .data = SKL_DISP_PW_2,
+ },
+ {
+ .name = "DDI A/E power well",
+ .domains = SKL_DISPLAY_DDI_A_E_POWER_DOMAINS,
+ .ops = &skl_power_well_ops,
+ .data = SKL_DISP_PW_DDI_A_E,
+ },
+ {
+ .name = "DDI B power well",
+ .domains = SKL_DISPLAY_DDI_B_POWER_DOMAINS,
+ .ops = &skl_power_well_ops,
+ .data = SKL_DISP_PW_DDI_B,
+ },
+ {
+ .name = "DDI C power well",
+ .domains = SKL_DISPLAY_DDI_C_POWER_DOMAINS,
+ .ops = &skl_power_well_ops,
+ .data = SKL_DISP_PW_DDI_C,
+ },
+ {
+ .name = "DDI D power well",
+ .domains = SKL_DISPLAY_DDI_D_POWER_DOMAINS,
+ .ops = &skl_power_well_ops,
+ .data = SKL_DISP_PW_DDI_D,
+ },
+};
+
#define set_power_wells(power_domains, __power_wells) ({ \
(power_domains)->power_wells = (__power_wells); \
(power_domains)->power_well_count = ARRAY_SIZE(__power_wells); \
@@ -1085,6 +1339,8 @@
set_power_wells(power_domains, hsw_power_wells);
} else if (IS_BROADWELL(dev_priv->dev)) {
set_power_wells(power_domains, bdw_power_wells);
+ } else if (IS_SKYLAKE(dev_priv->dev)) {
+ set_power_wells(power_domains, skl_power_wells);
} else if (IS_CHERRYVIEW(dev_priv->dev)) {
set_power_wells(power_domains, chv_power_wells);
} else if (IS_VALLEYVIEW(dev_priv->dev)) {
@@ -1200,7 +1456,7 @@
}
/**
- * intel_aux_display_runtime_get - grab an auxilliary power domain reference
+ * intel_aux_display_runtime_get - grab an auxiliary power domain reference
* @dev_priv: i915 device instance
*
* This function grabs a power domain reference for the auxiliary power domain
@@ -1217,10 +1473,10 @@
}
/**
- * intel_aux_display_runtime_put - release an auxilliary power domain reference
+ * intel_aux_display_runtime_put - release an auxiliary power domain reference
* @dev_priv: i915 device instance
*
- * This function drops the auxilliary power domain reference obtained by
+ * This function drops the auxiliary power domain reference obtained by
* intel_aux_display_runtime_get() and might power down the corresponding
* hardware block right away if this is the last reference.
*/
diff --git a/drivers/gpu/drm/i915/intel_sdvo.c b/drivers/gpu/drm/i915/intel_sdvo.c
index 64ad2b4..e87d2f4 100644
--- a/drivers/gpu/drm/i915/intel_sdvo.c
+++ b/drivers/gpu/drm/i915/intel_sdvo.c
@@ -1247,7 +1247,7 @@
switch (crtc->config->pixel_multiplier) {
default:
- WARN(1, "unknown pixel mutlipler specified\n");
+ WARN(1, "unknown pixel multiplier specified\n");
case 1: rate = SDVO_CLOCK_RATE_MULT_1X; break;
case 2: rate = SDVO_CLOCK_RATE_MULT_2X; break;
case 4: rate = SDVO_CLOCK_RATE_MULT_4X; break;
@@ -2194,6 +2194,7 @@
.atomic_get_property = intel_connector_atomic_get_property,
.destroy = intel_sdvo_destroy,
.atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
+ .atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
};
static const struct drm_connector_helper_funcs intel_sdvo_connector_helper_funcs = {
@@ -2425,6 +2426,22 @@
}
}
+static struct intel_sdvo_connector *intel_sdvo_connector_alloc(void)
+{
+ struct intel_sdvo_connector *sdvo_connector;
+
+ sdvo_connector = kzalloc(sizeof(*sdvo_connector), GFP_KERNEL);
+ if (!sdvo_connector)
+ return NULL;
+
+ if (intel_connector_init(&sdvo_connector->base) < 0) {
+ kfree(sdvo_connector);
+ return NULL;
+ }
+
+ return sdvo_connector;
+}
+
static bool
intel_sdvo_dvi_init(struct intel_sdvo *intel_sdvo, int device)
{
@@ -2436,7 +2453,7 @@
DRM_DEBUG_KMS("initialising DVI device %d\n", device);
- intel_sdvo_connector = kzalloc(sizeof(*intel_sdvo_connector), GFP_KERNEL);
+ intel_sdvo_connector = intel_sdvo_connector_alloc();
if (!intel_sdvo_connector)
return false;
@@ -2490,7 +2507,7 @@
DRM_DEBUG_KMS("initialising TV type %d\n", type);
- intel_sdvo_connector = kzalloc(sizeof(*intel_sdvo_connector), GFP_KERNEL);
+ intel_sdvo_connector = intel_sdvo_connector_alloc();
if (!intel_sdvo_connector)
return false;
@@ -2569,7 +2586,7 @@
DRM_DEBUG_KMS("initialising LVDS device %d\n", device);
- intel_sdvo_connector = kzalloc(sizeof(*intel_sdvo_connector), GFP_KERNEL);
+ intel_sdvo_connector = intel_sdvo_connector_alloc();
if (!intel_sdvo_connector)
return false;
diff --git a/drivers/gpu/drm/i915/intel_sprite.c b/drivers/gpu/drm/i915/intel_sprite.c
index 9c5451c..a4c0a04 100644
--- a/drivers/gpu/drm/i915/intel_sprite.c
+++ b/drivers/gpu/drm/i915/intel_sprite.c
@@ -98,7 +98,7 @@
if (min <= 0 || max <= 0)
return false;
- if (WARN_ON(drm_vblank_get(dev, pipe)))
+ if (WARN_ON(drm_crtc_vblank_get(&crtc->base)))
return false;
local_irq_disable();
@@ -132,7 +132,7 @@
finish_wait(wq, &wait);
- drm_vblank_put(dev, pipe);
+ drm_crtc_vblank_put(&crtc->base);
*start_vbl_count = dev->driver->get_vblank_counter(dev, pipe);
@@ -179,7 +179,7 @@
static void
skl_update_plane(struct drm_plane *drm_plane, struct drm_crtc *crtc,
struct drm_framebuffer *fb,
- struct drm_i915_gem_object *obj, int crtc_x, int crtc_y,
+ int crtc_x, int crtc_y,
unsigned int crtc_w, unsigned int crtc_h,
uint32_t x, uint32_t y,
uint32_t src_w, uint32_t src_h)
@@ -187,23 +187,16 @@
struct drm_device *dev = drm_plane->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_plane *intel_plane = to_intel_plane(drm_plane);
+ struct drm_i915_gem_object *obj = intel_fb_obj(fb);
const int pipe = intel_plane->pipe;
const int plane = intel_plane->plane + 1;
- u32 plane_ctl, stride;
+ u32 plane_ctl, stride_div;
int pixel_size = drm_format_plane_cpp(fb->pixel_format, 0);
+ const struct drm_intel_sprite_colorkey *key = &intel_plane->ckey;
+ unsigned long surf_addr;
- plane_ctl = I915_READ(PLANE_CTL(pipe, plane));
-
- /* Mask out pixel format bits in case we change it */
- plane_ctl &= ~PLANE_CTL_FORMAT_MASK;
- plane_ctl &= ~PLANE_CTL_ORDER_RGBX;
- plane_ctl &= ~PLANE_CTL_YUV422_ORDER_MASK;
- plane_ctl &= ~PLANE_CTL_TILED_MASK;
- plane_ctl &= ~PLANE_CTL_ALPHA_MASK;
- plane_ctl &= ~PLANE_CTL_ROTATE_MASK;
-
- /* Trickle feed has to be enabled */
- plane_ctl &= ~PLANE_CTL_TRICKLE_FEED_DISABLE;
+ plane_ctl = PLANE_CTL_ENABLE |
+ PLANE_CTL_PIPE_CSC_ENABLE;
switch (fb->pixel_format) {
case DRM_FORMAT_RGB565:
@@ -245,39 +238,57 @@
BUG();
}
- switch (obj->tiling_mode) {
- case I915_TILING_NONE:
- stride = fb->pitches[0] >> 6;
+ switch (fb->modifier[0]) {
+ case DRM_FORMAT_MOD_NONE:
break;
- case I915_TILING_X:
+ case I915_FORMAT_MOD_X_TILED:
plane_ctl |= PLANE_CTL_TILED_X;
- stride = fb->pitches[0] >> 9;
+ break;
+ case I915_FORMAT_MOD_Y_TILED:
+ plane_ctl |= PLANE_CTL_TILED_Y;
+ break;
+ case I915_FORMAT_MOD_Yf_TILED:
+ plane_ctl |= PLANE_CTL_TILED_YF;
break;
default:
- BUG();
+ MISSING_CASE(fb->modifier[0]);
}
+
if (drm_plane->state->rotation == BIT(DRM_ROTATE_180))
plane_ctl |= PLANE_CTL_ROTATE_180;
- plane_ctl |= PLANE_CTL_ENABLE;
- plane_ctl |= PLANE_CTL_PIPE_CSC_ENABLE;
-
intel_update_sprite_watermarks(drm_plane, crtc, src_w, src_h,
pixel_size, true,
src_w != crtc_w || src_h != crtc_h);
+ stride_div = intel_fb_stride_alignment(dev, fb->modifier[0],
+ fb->pixel_format);
+
/* Sizes are 0 based */
src_w--;
src_h--;
crtc_w--;
crtc_h--;
+ if (key->flags) {
+ I915_WRITE(PLANE_KEYVAL(pipe, plane), key->min_value);
+ I915_WRITE(PLANE_KEYMAX(pipe, plane), key->max_value);
+ I915_WRITE(PLANE_KEYMSK(pipe, plane), key->channel_mask);
+ }
+
+ if (key->flags & I915_SET_COLORKEY_DESTINATION)
+ plane_ctl |= PLANE_CTL_KEY_ENABLE_DESTINATION;
+ else if (key->flags & I915_SET_COLORKEY_SOURCE)
+ plane_ctl |= PLANE_CTL_KEY_ENABLE_SOURCE;
+
+ surf_addr = intel_plane_obj_offset(intel_plane, obj);
+
I915_WRITE(PLANE_OFFSET(pipe, plane), (y << 16) | x);
- I915_WRITE(PLANE_STRIDE(pipe, plane), stride);
+ I915_WRITE(PLANE_STRIDE(pipe, plane), fb->pitches[0] / stride_div);
I915_WRITE(PLANE_POS(pipe, plane), (crtc_y << 16) | crtc_x);
I915_WRITE(PLANE_SIZE(pipe, plane), (crtc_h << 16) | crtc_w);
I915_WRITE(PLANE_CTL(pipe, plane), plane_ctl);
- I915_WRITE(PLANE_SURF(pipe, plane), i915_gem_obj_ggtt_offset(obj));
+ I915_WRITE(PLANE_SURF(pipe, plane), surf_addr);
POSTING_READ(PLANE_SURF(pipe, plane));
}
@@ -290,73 +301,15 @@
const int pipe = intel_plane->pipe;
const int plane = intel_plane->plane + 1;
- I915_WRITE(PLANE_CTL(pipe, plane),
- I915_READ(PLANE_CTL(pipe, plane)) & ~PLANE_CTL_ENABLE);
+ I915_WRITE(PLANE_CTL(pipe, plane), 0);
/* Activate double buffered register update */
- I915_WRITE(PLANE_CTL(pipe, plane), 0);
- POSTING_READ(PLANE_CTL(pipe, plane));
+ I915_WRITE(PLANE_SURF(pipe, plane), 0);
+ POSTING_READ(PLANE_SURF(pipe, plane));
intel_update_sprite_watermarks(drm_plane, crtc, 0, 0, 0, false, false);
}
-static int
-skl_update_colorkey(struct drm_plane *drm_plane,
- struct drm_intel_sprite_colorkey *key)
-{
- struct drm_device *dev = drm_plane->dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
- struct intel_plane *intel_plane = to_intel_plane(drm_plane);
- const int pipe = intel_plane->pipe;
- const int plane = intel_plane->plane;
- u32 plane_ctl;
-
- I915_WRITE(PLANE_KEYVAL(pipe, plane), key->min_value);
- I915_WRITE(PLANE_KEYMAX(pipe, plane), key->max_value);
- I915_WRITE(PLANE_KEYMSK(pipe, plane), key->channel_mask);
-
- plane_ctl = I915_READ(PLANE_CTL(pipe, plane));
- plane_ctl &= ~PLANE_CTL_KEY_ENABLE_MASK;
- if (key->flags & I915_SET_COLORKEY_DESTINATION)
- plane_ctl |= PLANE_CTL_KEY_ENABLE_DESTINATION;
- else if (key->flags & I915_SET_COLORKEY_SOURCE)
- plane_ctl |= PLANE_CTL_KEY_ENABLE_SOURCE;
- I915_WRITE(PLANE_CTL(pipe, plane), plane_ctl);
-
- POSTING_READ(PLANE_CTL(pipe, plane));
-
- return 0;
-}
-
-static void
-skl_get_colorkey(struct drm_plane *drm_plane,
- struct drm_intel_sprite_colorkey *key)
-{
- struct drm_device *dev = drm_plane->dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
- struct intel_plane *intel_plane = to_intel_plane(drm_plane);
- const int pipe = intel_plane->pipe;
- const int plane = intel_plane->plane;
- u32 plane_ctl;
-
- key->min_value = I915_READ(PLANE_KEYVAL(pipe, plane));
- key->max_value = I915_READ(PLANE_KEYMAX(pipe, plane));
- key->channel_mask = I915_READ(PLANE_KEYMSK(pipe, plane));
-
- plane_ctl = I915_READ(PLANE_CTL(pipe, plane));
-
- switch (plane_ctl & PLANE_CTL_KEY_ENABLE_MASK) {
- case PLANE_CTL_KEY_ENABLE_DESTINATION:
- key->flags = I915_SET_COLORKEY_DESTINATION;
- break;
- case PLANE_CTL_KEY_ENABLE_SOURCE:
- key->flags = I915_SET_COLORKEY_SOURCE;
- break;
- default:
- key->flags = I915_SET_COLORKEY_NONE;
- }
-}
-
static void
chv_update_csc(struct intel_plane *intel_plane, uint32_t format)
{
@@ -399,7 +352,7 @@
static void
vlv_update_plane(struct drm_plane *dplane, struct drm_crtc *crtc,
struct drm_framebuffer *fb,
- struct drm_i915_gem_object *obj, int crtc_x, int crtc_y,
+ int crtc_x, int crtc_y,
unsigned int crtc_w, unsigned int crtc_h,
uint32_t x, uint32_t y,
uint32_t src_w, uint32_t src_h)
@@ -408,19 +361,15 @@
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_plane *intel_plane = to_intel_plane(dplane);
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
+ struct drm_i915_gem_object *obj = intel_fb_obj(fb);
int pipe = intel_plane->pipe;
int plane = intel_plane->plane;
u32 sprctl;
unsigned long sprsurf_offset, linear_offset;
int pixel_size = drm_format_plane_cpp(fb->pixel_format, 0);
+ const struct drm_intel_sprite_colorkey *key = &intel_plane->ckey;
- sprctl = I915_READ(SPCNTR(pipe, plane));
-
- /* Mask out pixel format bits in case we change it */
- sprctl &= ~SP_PIXFORMAT_MASK;
- sprctl &= ~SP_YUV_BYTE_ORDER_MASK;
- sprctl &= ~SP_TILED;
- sprctl &= ~SP_ROTATE_180;
+ sprctl = SP_ENABLE;
switch (fb->pixel_format) {
case DRM_FORMAT_YUYV:
@@ -474,8 +423,6 @@
if (obj->tiling_mode != I915_TILING_NONE)
sprctl |= SP_TILED;
- sprctl |= SP_ENABLE;
-
intel_update_sprite_watermarks(dplane, crtc, src_w, src_h,
pixel_size, true,
src_w != crtc_w || src_h != crtc_h);
@@ -503,6 +450,15 @@
intel_update_primary_plane(intel_crtc);
+ if (key->flags) {
+ I915_WRITE(SPKEYMINVAL(pipe, plane), key->min_value);
+ I915_WRITE(SPKEYMAXVAL(pipe, plane), key->max_value);
+ I915_WRITE(SPKEYMSK(pipe, plane), key->channel_mask);
+ }
+
+ if (key->flags & I915_SET_COLORKEY_SOURCE)
+ sprctl |= SP_SOURCE_KEY;
+
if (IS_CHERRYVIEW(dev) && pipe == PIPE_B)
chv_update_csc(intel_plane, fb->pixel_format);
@@ -536,8 +492,8 @@
intel_update_primary_plane(intel_crtc);
- I915_WRITE(SPCNTR(pipe, plane), I915_READ(SPCNTR(pipe, plane)) &
- ~SP_ENABLE);
+ I915_WRITE(SPCNTR(pipe, plane), 0);
+
/* Activate double buffered register update */
I915_WRITE(SPSURF(pipe, plane), 0);
@@ -546,61 +502,11 @@
intel_update_sprite_watermarks(dplane, crtc, 0, 0, 0, false, false);
}
-static int
-vlv_update_colorkey(struct drm_plane *dplane,
- struct drm_intel_sprite_colorkey *key)
-{
- struct drm_device *dev = dplane->dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
- struct intel_plane *intel_plane = to_intel_plane(dplane);
- int pipe = intel_plane->pipe;
- int plane = intel_plane->plane;
- u32 sprctl;
-
- if (key->flags & I915_SET_COLORKEY_DESTINATION)
- return -EINVAL;
-
- I915_WRITE(SPKEYMINVAL(pipe, plane), key->min_value);
- I915_WRITE(SPKEYMAXVAL(pipe, plane), key->max_value);
- I915_WRITE(SPKEYMSK(pipe, plane), key->channel_mask);
-
- sprctl = I915_READ(SPCNTR(pipe, plane));
- sprctl &= ~SP_SOURCE_KEY;
- if (key->flags & I915_SET_COLORKEY_SOURCE)
- sprctl |= SP_SOURCE_KEY;
- I915_WRITE(SPCNTR(pipe, plane), sprctl);
-
- POSTING_READ(SPKEYMSK(pipe, plane));
-
- return 0;
-}
-
-static void
-vlv_get_colorkey(struct drm_plane *dplane,
- struct drm_intel_sprite_colorkey *key)
-{
- struct drm_device *dev = dplane->dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
- struct intel_plane *intel_plane = to_intel_plane(dplane);
- int pipe = intel_plane->pipe;
- int plane = intel_plane->plane;
- u32 sprctl;
-
- key->min_value = I915_READ(SPKEYMINVAL(pipe, plane));
- key->max_value = I915_READ(SPKEYMAXVAL(pipe, plane));
- key->channel_mask = I915_READ(SPKEYMSK(pipe, plane));
-
- sprctl = I915_READ(SPCNTR(pipe, plane));
- if (sprctl & SP_SOURCE_KEY)
- key->flags = I915_SET_COLORKEY_SOURCE;
- else
- key->flags = I915_SET_COLORKEY_NONE;
-}
static void
ivb_update_plane(struct drm_plane *plane, struct drm_crtc *crtc,
struct drm_framebuffer *fb,
- struct drm_i915_gem_object *obj, int crtc_x, int crtc_y,
+ int crtc_x, int crtc_y,
unsigned int crtc_w, unsigned int crtc_h,
uint32_t x, uint32_t y,
uint32_t src_w, uint32_t src_h)
@@ -609,19 +515,14 @@
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_plane *intel_plane = to_intel_plane(plane);
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
- int pipe = intel_plane->pipe;
+ struct drm_i915_gem_object *obj = intel_fb_obj(fb);
+ enum pipe pipe = intel_plane->pipe;
u32 sprctl, sprscale = 0;
unsigned long sprsurf_offset, linear_offset;
int pixel_size = drm_format_plane_cpp(fb->pixel_format, 0);
+ const struct drm_intel_sprite_colorkey *key = &intel_plane->ckey;
- sprctl = I915_READ(SPRCTL(pipe));
-
- /* Mask out pixel format bits in case we change it */
- sprctl &= ~SPRITE_PIXFORMAT_MASK;
- sprctl &= ~SPRITE_RGB_ORDER_RGBX;
- sprctl &= ~SPRITE_YUV_BYTE_ORDER_MASK;
- sprctl &= ~SPRITE_TILED;
- sprctl &= ~SPRITE_ROTATE_180;
+ sprctl = SPRITE_ENABLE;
switch (fb->pixel_format) {
case DRM_FORMAT_XBGR8888:
@@ -660,8 +561,6 @@
else
sprctl |= SPRITE_TRICKLE_FEED_DISABLE;
- sprctl |= SPRITE_ENABLE;
-
if (IS_HASWELL(dev) || IS_BROADWELL(dev))
sprctl |= SPRITE_PIPE_CSC_ENABLE;
@@ -698,6 +597,17 @@
intel_update_primary_plane(intel_crtc);
+ if (key->flags) {
+ I915_WRITE(SPRKEYVAL(pipe), key->min_value);
+ I915_WRITE(SPRKEYMAX(pipe), key->max_value);
+ I915_WRITE(SPRKEYMSK(pipe), key->channel_mask);
+ }
+
+ if (key->flags & I915_SET_COLORKEY_DESTINATION)
+ sprctl |= SPRITE_DEST_KEY;
+ else if (key->flags & I915_SET_COLORKEY_SOURCE)
+ sprctl |= SPRITE_SOURCE_KEY;
+
I915_WRITE(SPRSTRIDE(pipe), fb->pitches[0]);
I915_WRITE(SPRPOS(pipe), (crtc_y << 16) | crtc_x);
@@ -739,73 +649,12 @@
I915_WRITE(SPRSURF(pipe), 0);
intel_flush_primary_plane(dev_priv, intel_crtc->plane);
-
- /*
- * Avoid underruns when disabling the sprite.
- * FIXME remove once watermark updates are done properly.
- */
- intel_crtc->atomic.wait_vblank = true;
- intel_crtc->atomic.update_sprite_watermarks |= (1 << drm_plane_index(plane));
-}
-
-static int
-ivb_update_colorkey(struct drm_plane *plane,
- struct drm_intel_sprite_colorkey *key)
-{
- struct drm_device *dev = plane->dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
- struct intel_plane *intel_plane;
- u32 sprctl;
- int ret = 0;
-
- intel_plane = to_intel_plane(plane);
-
- I915_WRITE(SPRKEYVAL(intel_plane->pipe), key->min_value);
- I915_WRITE(SPRKEYMAX(intel_plane->pipe), key->max_value);
- I915_WRITE(SPRKEYMSK(intel_plane->pipe), key->channel_mask);
-
- sprctl = I915_READ(SPRCTL(intel_plane->pipe));
- sprctl &= ~(SPRITE_SOURCE_KEY | SPRITE_DEST_KEY);
- if (key->flags & I915_SET_COLORKEY_DESTINATION)
- sprctl |= SPRITE_DEST_KEY;
- else if (key->flags & I915_SET_COLORKEY_SOURCE)
- sprctl |= SPRITE_SOURCE_KEY;
- I915_WRITE(SPRCTL(intel_plane->pipe), sprctl);
-
- POSTING_READ(SPRKEYMSK(intel_plane->pipe));
-
- return ret;
-}
-
-static void
-ivb_get_colorkey(struct drm_plane *plane, struct drm_intel_sprite_colorkey *key)
-{
- struct drm_device *dev = plane->dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
- struct intel_plane *intel_plane;
- u32 sprctl;
-
- intel_plane = to_intel_plane(plane);
-
- key->min_value = I915_READ(SPRKEYVAL(intel_plane->pipe));
- key->max_value = I915_READ(SPRKEYMAX(intel_plane->pipe));
- key->channel_mask = I915_READ(SPRKEYMSK(intel_plane->pipe));
- key->flags = 0;
-
- sprctl = I915_READ(SPRCTL(intel_plane->pipe));
-
- if (sprctl & SPRITE_DEST_KEY)
- key->flags = I915_SET_COLORKEY_DESTINATION;
- else if (sprctl & SPRITE_SOURCE_KEY)
- key->flags = I915_SET_COLORKEY_SOURCE;
- else
- key->flags = I915_SET_COLORKEY_NONE;
}
static void
ilk_update_plane(struct drm_plane *plane, struct drm_crtc *crtc,
struct drm_framebuffer *fb,
- struct drm_i915_gem_object *obj, int crtc_x, int crtc_y,
+ int crtc_x, int crtc_y,
unsigned int crtc_w, unsigned int crtc_h,
uint32_t x, uint32_t y,
uint32_t src_w, uint32_t src_h)
@@ -814,19 +663,14 @@
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_plane *intel_plane = to_intel_plane(plane);
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
+ struct drm_i915_gem_object *obj = intel_fb_obj(fb);
int pipe = intel_plane->pipe;
unsigned long dvssurf_offset, linear_offset;
u32 dvscntr, dvsscale;
int pixel_size = drm_format_plane_cpp(fb->pixel_format, 0);
+ const struct drm_intel_sprite_colorkey *key = &intel_plane->ckey;
- dvscntr = I915_READ(DVSCNTR(pipe));
-
- /* Mask out pixel format bits in case we change it */
- dvscntr &= ~DVS_PIXFORMAT_MASK;
- dvscntr &= ~DVS_RGB_ORDER_XBGR;
- dvscntr &= ~DVS_YUV_BYTE_ORDER_MASK;
- dvscntr &= ~DVS_TILED;
- dvscntr &= ~DVS_ROTATE_180;
+ dvscntr = DVS_ENABLE;
switch (fb->pixel_format) {
case DRM_FORMAT_XBGR8888:
@@ -862,7 +706,6 @@
if (IS_GEN6(dev))
dvscntr |= DVS_TRICKLE_FEED_DISABLE; /* must disable */
- dvscntr |= DVS_ENABLE;
intel_update_sprite_watermarks(plane, crtc, src_w, src_h,
pixel_size, true,
@@ -894,6 +737,17 @@
intel_update_primary_plane(intel_crtc);
+ if (key->flags) {
+ I915_WRITE(DVSKEYVAL(pipe), key->min_value);
+ I915_WRITE(DVSKEYMAX(pipe), key->max_value);
+ I915_WRITE(DVSKEYMSK(pipe), key->channel_mask);
+ }
+
+ if (key->flags & I915_SET_COLORKEY_DESTINATION)
+ dvscntr |= DVS_DEST_KEY;
+ else if (key->flags & I915_SET_COLORKEY_SOURCE)
+ dvscntr |= DVS_SOURCE_KEY;
+
I915_WRITE(DVSSTRIDE(pipe), fb->pitches[0]);
I915_WRITE(DVSPOS(pipe), (crtc_y << 16) | crtc_x);
@@ -922,20 +776,14 @@
intel_update_primary_plane(intel_crtc);
- I915_WRITE(DVSCNTR(pipe), I915_READ(DVSCNTR(pipe)) & ~DVS_ENABLE);
+ I915_WRITE(DVSCNTR(pipe), 0);
/* Disable the scaler */
I915_WRITE(DVSSCALE(pipe), 0);
+
/* Flush double buffered register updates */
I915_WRITE(DVSSURF(pipe), 0);
intel_flush_primary_plane(dev_priv, intel_crtc->plane);
-
- /*
- * Avoid underruns when disabling the sprite.
- * FIXME remove once watermark updates are done properly.
- */
- intel_crtc->atomic.wait_vblank = true;
- intel_crtc->atomic.update_sprite_watermarks |= (1 << drm_plane_index(plane));
}
/**
@@ -993,7 +841,7 @@
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
mutex_lock(&dev->struct_mutex);
- if (dev_priv->fbc.plane == intel_crtc->plane)
+ if (dev_priv->fbc.crtc == intel_crtc)
intel_fbc_disable(dev);
mutex_unlock(&dev->struct_mutex);
@@ -1006,67 +854,9 @@
hsw_disable_ips(intel_crtc);
}
-static int
-ilk_update_colorkey(struct drm_plane *plane,
- struct drm_intel_sprite_colorkey *key)
-{
- struct drm_device *dev = plane->dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
- struct intel_plane *intel_plane;
- u32 dvscntr;
- int ret = 0;
-
- intel_plane = to_intel_plane(plane);
-
- I915_WRITE(DVSKEYVAL(intel_plane->pipe), key->min_value);
- I915_WRITE(DVSKEYMAX(intel_plane->pipe), key->max_value);
- I915_WRITE(DVSKEYMSK(intel_plane->pipe), key->channel_mask);
-
- dvscntr = I915_READ(DVSCNTR(intel_plane->pipe));
- dvscntr &= ~(DVS_SOURCE_KEY | DVS_DEST_KEY);
- if (key->flags & I915_SET_COLORKEY_DESTINATION)
- dvscntr |= DVS_DEST_KEY;
- else if (key->flags & I915_SET_COLORKEY_SOURCE)
- dvscntr |= DVS_SOURCE_KEY;
- I915_WRITE(DVSCNTR(intel_plane->pipe), dvscntr);
-
- POSTING_READ(DVSKEYMSK(intel_plane->pipe));
-
- return ret;
-}
-
-static void
-ilk_get_colorkey(struct drm_plane *plane, struct drm_intel_sprite_colorkey *key)
-{
- struct drm_device *dev = plane->dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
- struct intel_plane *intel_plane;
- u32 dvscntr;
-
- intel_plane = to_intel_plane(plane);
-
- key->min_value = I915_READ(DVSKEYVAL(intel_plane->pipe));
- key->max_value = I915_READ(DVSKEYMAX(intel_plane->pipe));
- key->channel_mask = I915_READ(DVSKEYMSK(intel_plane->pipe));
- key->flags = 0;
-
- dvscntr = I915_READ(DVSCNTR(intel_plane->pipe));
-
- if (dvscntr & DVS_DEST_KEY)
- key->flags = I915_SET_COLORKEY_DESTINATION;
- else if (dvscntr & DVS_SOURCE_KEY)
- key->flags = I915_SET_COLORKEY_SOURCE;
- else
- key->flags = I915_SET_COLORKEY_NONE;
-}
-
static bool colorkey_enabled(struct intel_plane *intel_plane)
{
- struct drm_intel_sprite_colorkey key;
-
- intel_plane->get_colorkey(&intel_plane->base, &key);
-
- return key.flags != I915_SET_COLORKEY_NONE;
+ return intel_plane->ckey.flags != I915_SET_COLORKEY_NONE;
}
static int
@@ -1076,7 +866,6 @@
struct intel_crtc *intel_crtc = to_intel_crtc(state->base.crtc);
struct intel_plane *intel_plane = to_intel_plane(plane);
struct drm_framebuffer *fb = state->base.fb;
- struct drm_i915_gem_object *obj = intel_fb_obj(fb);
int crtc_x, crtc_y;
unsigned int crtc_w, crtc_h;
uint32_t src_x, src_y, src_w, src_h;
@@ -1106,16 +895,6 @@
return -EINVAL;
}
- /* Sprite planes can be linear or x-tiled surfaces */
- switch (obj->tiling_mode) {
- case I915_TILING_NONE:
- case I915_TILING_X:
- break;
- default:
- DRM_DEBUG_KMS("Unsupported tiling mode\n");
- return -EINVAL;
- }
-
/*
* FIXME the following code does a bunch of fuzzy adjustments to the
* coordinates and sizes. We probably need some way to decide whether
@@ -1259,6 +1038,19 @@
if (!intel_crtc->primary_enabled && !state->hides_primary)
intel_crtc->atomic.post_enable_primary = true;
+
+ if (intel_wm_need_update(plane, &state->base))
+ intel_crtc->atomic.update_wm = true;
+
+ if (!state->visible) {
+ /*
+ * Avoid underruns when disabling the sprite.
+ * FIXME remove once watermark updates are done properly.
+ */
+ intel_crtc->atomic.wait_vblank = true;
+ intel_crtc->atomic.update_sprite_watermarks |=
+ (1 << drm_plane_index(plane));
+ }
}
return 0;
@@ -1272,7 +1064,6 @@
struct intel_crtc *intel_crtc;
struct intel_plane *intel_plane = to_intel_plane(plane);
struct drm_framebuffer *fb = state->base.fb;
- struct drm_i915_gem_object *obj = intel_fb_obj(fb);
int crtc_x, crtc_y;
unsigned int crtc_w, crtc_h;
uint32_t src_x, src_y, src_w, src_h;
@@ -1280,8 +1071,7 @@
crtc = crtc ? crtc : plane->crtc;
intel_crtc = to_intel_crtc(crtc);
- plane->fb = state->base.fb;
- intel_plane->obj = obj;
+ plane->fb = fb;
if (intel_crtc->active) {
intel_crtc->primary_enabled = !state->hides_primary;
@@ -1295,7 +1085,7 @@
src_y = state->src.y1;
src_w = drm_rect_width(&state->src);
src_h = drm_rect_height(&state->src);
- intel_plane->update_plane(plane, crtc, fb, obj,
+ intel_plane->update_plane(plane, crtc, fb,
crtc_x, crtc_y, crtc_w, crtc_h,
src_x, src_y, src_w, src_h);
} else {
@@ -1312,13 +1102,14 @@
struct intel_plane *intel_plane;
int ret = 0;
- if (!drm_core_check_feature(dev, DRIVER_MODESET))
- return -ENODEV;
-
/* Make sure we don't try to enable both src & dest simultaneously */
if ((set->flags & (I915_SET_COLORKEY_DESTINATION | I915_SET_COLORKEY_SOURCE)) == (I915_SET_COLORKEY_DESTINATION | I915_SET_COLORKEY_SOURCE))
return -EINVAL;
+ if (IS_VALLEYVIEW(dev) &&
+ set->flags & I915_SET_COLORKEY_DESTINATION)
+ return -EINVAL;
+
drm_modeset_lock_all(dev);
plane = drm_plane_find(dev, set->plane_id);
@@ -1328,34 +1119,15 @@
}
intel_plane = to_intel_plane(plane);
- ret = intel_plane->update_colorkey(plane, set);
+ intel_plane->ckey = *set;
-out_unlock:
- drm_modeset_unlock_all(dev);
- return ret;
-}
-
-int intel_sprite_get_colorkey(struct drm_device *dev, void *data,
- struct drm_file *file_priv)
-{
- struct drm_intel_sprite_colorkey *get = data;
- struct drm_plane *plane;
- struct intel_plane *intel_plane;
- int ret = 0;
-
- if (!drm_core_check_feature(dev, DRIVER_MODESET))
- return -ENODEV;
-
- drm_modeset_lock_all(dev);
-
- plane = drm_plane_find(dev, get->plane_id);
- if (!plane || plane->type != DRM_PLANE_TYPE_OVERLAY) {
- ret = -ENOENT;
- goto out_unlock;
- }
-
- intel_plane = to_intel_plane(plane);
- intel_plane->get_colorkey(plane, get);
+ /*
+ * The only way this could fail would be due to
+ * the current plane state being unsupportable already,
+ * and we don't consider that an error for the
+ * colorkey ioctl. So just ignore any error.
+ */
+ intel_plane_restore(plane);
out_unlock:
drm_modeset_unlock_all(dev);
@@ -1364,10 +1136,10 @@
int intel_plane_restore(struct drm_plane *plane)
{
- if (!plane->crtc || !plane->fb)
+ if (!plane->crtc || !plane->state->fb)
return 0;
- return plane->funcs->update_plane(plane, plane->crtc, plane->fb,
+ return plane->funcs->update_plane(plane, plane->crtc, plane->state->fb,
plane->state->crtc_x, plane->state->crtc_y,
plane->state->crtc_w, plane->state->crtc_h,
plane->state->src_x, plane->state->src_y,
@@ -1448,8 +1220,6 @@
intel_plane->max_downscale = 16;
intel_plane->update_plane = ilk_update_plane;
intel_plane->disable_plane = ilk_disable_plane;
- intel_plane->update_colorkey = ilk_update_colorkey;
- intel_plane->get_colorkey = ilk_get_colorkey;
if (IS_GEN6(dev)) {
plane_formats = snb_plane_formats;
@@ -1473,16 +1243,12 @@
if (IS_VALLEYVIEW(dev)) {
intel_plane->update_plane = vlv_update_plane;
intel_plane->disable_plane = vlv_disable_plane;
- intel_plane->update_colorkey = vlv_update_colorkey;
- intel_plane->get_colorkey = vlv_get_colorkey;
plane_formats = vlv_plane_formats;
num_plane_formats = ARRAY_SIZE(vlv_plane_formats);
} else {
intel_plane->update_plane = ivb_update_plane;
intel_plane->disable_plane = ivb_disable_plane;
- intel_plane->update_colorkey = ivb_update_colorkey;
- intel_plane->get_colorkey = ivb_get_colorkey;
plane_formats = snb_plane_formats;
num_plane_formats = ARRAY_SIZE(snb_plane_formats);
@@ -1497,8 +1263,6 @@
intel_plane->max_downscale = 1;
intel_plane->update_plane = skl_update_plane;
intel_plane->disable_plane = skl_disable_plane;
- intel_plane->update_colorkey = skl_update_colorkey;
- intel_plane->get_colorkey = skl_get_colorkey;
plane_formats = skl_plane_formats;
num_plane_formats = ARRAY_SIZE(skl_plane_formats);
diff --git a/drivers/gpu/drm/i915/intel_tv.c b/drivers/gpu/drm/i915/intel_tv.c
index 892d23c..8b9d325 100644
--- a/drivers/gpu/drm/i915/intel_tv.c
+++ b/drivers/gpu/drm/i915/intel_tv.c
@@ -1332,7 +1332,7 @@
if (intel_get_load_detect_pipe(connector, &mode, &tmp, &ctx)) {
type = intel_tv_detect_type(intel_tv, connector);
- intel_release_load_detect_pipe(connector, &tmp);
+ intel_release_load_detect_pipe(connector, &tmp, &ctx);
status = type < 0 ?
connector_status_disconnected :
connector_status_connected;
@@ -1516,6 +1516,7 @@
.atomic_get_property = intel_connector_atomic_get_property,
.fill_modes = drm_helper_probe_single_connector_modes,
.atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
+ .atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
};
static const struct drm_connector_helper_funcs intel_tv_connector_helper_funcs = {
@@ -1620,7 +1621,7 @@
return;
}
- intel_connector = kzalloc(sizeof(*intel_connector), GFP_KERNEL);
+ intel_connector = intel_connector_alloc();
if (!intel_connector) {
kfree(intel_tv);
return;
diff --git a/drivers/gpu/drm/i915/intel_uncore.c b/drivers/gpu/drm/i915/intel_uncore.c
index 4e8fb89..ab5cc94 100644
--- a/drivers/gpu/drm/i915/intel_uncore.c
+++ b/drivers/gpu/drm/i915/intel_uncore.c
@@ -23,6 +23,7 @@
#include "i915_drv.h"
#include "intel_drv.h"
+#include "i915_vgpu.h"
#include <linux/pm_runtime.h>
@@ -210,6 +211,13 @@
gen6_gt_check_fifodbg(dev_priv);
}
+static inline u32 fifo_free_entries(struct drm_i915_private *dev_priv)
+{
+ u32 count = __raw_i915_read32(dev_priv, GTFIFOCTL);
+
+ return count & GT_FIFO_FREE_ENTRIES_MASK;
+}
+
static int __gen6_gt_wait_for_fifo(struct drm_i915_private *dev_priv)
{
int ret = 0;
@@ -217,16 +225,15 @@
/* On VLV, FIFO will be shared by both SW and HW.
* So, we need to read the FREE_ENTRIES every time */
if (IS_VALLEYVIEW(dev_priv->dev))
- dev_priv->uncore.fifo_count =
- __raw_i915_read32(dev_priv, GTFIFOCTL) &
- GT_FIFO_FREE_ENTRIES_MASK;
+ dev_priv->uncore.fifo_count = fifo_free_entries(dev_priv);
if (dev_priv->uncore.fifo_count < GT_FIFO_NUM_RESERVED_ENTRIES) {
int loop = 500;
- u32 fifo = __raw_i915_read32(dev_priv, GTFIFOCTL) & GT_FIFO_FREE_ENTRIES_MASK;
+ u32 fifo = fifo_free_entries(dev_priv);
+
while (fifo <= GT_FIFO_NUM_RESERVED_ENTRIES && loop--) {
udelay(10);
- fifo = __raw_i915_read32(dev_priv, GTFIFOCTL) & GT_FIFO_FREE_ENTRIES_MASK;
+ fifo = fifo_free_entries(dev_priv);
}
if (WARN_ON(loop < 0 && fifo <= GT_FIFO_NUM_RESERVED_ENTRIES))
++ret;
@@ -314,8 +321,7 @@
if (IS_GEN6(dev) || IS_GEN7(dev))
dev_priv->uncore.fifo_count =
- __raw_i915_read32(dev_priv, GTFIFOCTL) &
- GT_FIFO_FREE_ENTRIES_MASK;
+ fifo_free_entries(dev_priv);
}
if (!restore)
@@ -328,8 +334,9 @@
{
struct drm_i915_private *dev_priv = dev->dev_private;
- if ((IS_HASWELL(dev) || IS_BROADWELL(dev)) &&
- (__raw_i915_read32(dev_priv, HSW_EDRAM_PRESENT) == 1)) {
+ if ((IS_HASWELL(dev) || IS_BROADWELL(dev) ||
+ INTEL_INFO(dev)->gen >= 9) &&
+ (__raw_i915_read32(dev_priv, HSW_EDRAM_PRESENT) & EDRAM_ENABLED)) {
/* The docs do not explain exactly how the calculation can be
* made. It is somewhat guessable, but for now, it's always
* 128MB.
@@ -550,18 +557,24 @@
WARN(1, "Unclaimed register detected %s %s register 0x%x\n",
when, op, reg);
__raw_i915_write32(dev_priv, FPGA_DBG, FPGA_DBG_RM_NOCLAIM);
+ i915.mmio_debug--; /* Only report the first N failures */
}
}
static void
hsw_unclaimed_reg_detect(struct drm_i915_private *dev_priv)
{
- if (i915.mmio_debug)
+ static bool mmio_debug_once = true;
+
+ if (i915.mmio_debug || !mmio_debug_once)
return;
if (__raw_i915_read32(dev_priv, FPGA_DBG) & FPGA_DBG_RM_NOCLAIM) {
- DRM_ERROR("Unclaimed register detected. Please use the i915.mmio_debug=1 to debug this problem.");
+ DRM_DEBUG("Unclaimed register detected, "
+ "enabling oneshot unclaimed register reporting. "
+ "Please use i915.mmio_debug=N for more information.\n");
__raw_i915_write32(dev_priv, FPGA_DBG, FPGA_DBG_RM_NOCLAIM);
+ i915.mmio_debug = mmio_debug_once--;
}
}
@@ -640,6 +653,14 @@
dev_priv->uncore.funcs.force_wake_get(dev_priv, fw_domains);
}
+#define __vgpu_read(x) \
+static u##x \
+vgpu_read##x(struct drm_i915_private *dev_priv, off_t reg, bool trace) { \
+ GEN6_READ_HEADER(x); \
+ val = __raw_i915_read##x(dev_priv, reg); \
+ GEN6_READ_FOOTER; \
+}
+
#define __gen6_read(x) \
static u##x \
gen6_read##x(struct drm_i915_private *dev_priv, off_t reg, bool trace) { \
@@ -703,6 +724,10 @@
GEN6_READ_FOOTER; \
}
+__vgpu_read(8)
+__vgpu_read(16)
+__vgpu_read(32)
+__vgpu_read(64)
__gen9_read(8)
__gen9_read(16)
__gen9_read(32)
@@ -724,6 +749,7 @@
#undef __chv_read
#undef __vlv_read
#undef __gen6_read
+#undef __vgpu_read
#undef GEN6_READ_FOOTER
#undef GEN6_READ_HEADER
@@ -807,6 +833,14 @@
GEN6_WRITE_FOOTER; \
}
+#define __vgpu_write(x) \
+static void vgpu_write##x(struct drm_i915_private *dev_priv, \
+ off_t reg, u##x val, bool trace) { \
+ GEN6_WRITE_HEADER; \
+ __raw_i915_write##x(dev_priv, reg, val); \
+ GEN6_WRITE_FOOTER; \
+}
+
static const u32 gen8_shadowed_regs[] = {
FORCEWAKE_MT,
GEN6_RPNSWREQ,
@@ -924,12 +958,17 @@
__gen6_write(16)
__gen6_write(32)
__gen6_write(64)
+__vgpu_write(8)
+__vgpu_write(16)
+__vgpu_write(32)
+__vgpu_write(64)
#undef __gen9_write
#undef __chv_write
#undef __gen8_write
#undef __hsw_write
#undef __gen6_write
+#undef __vgpu_write
#undef GEN6_WRITE_FOOTER
#undef GEN6_WRITE_HEADER
@@ -972,6 +1011,7 @@
d->val_set = FORCEWAKE_KERNEL;
d->val_clear = 0;
} else {
+ /* WaRsClearFWBitsAtReset:bdw,skl */
d->val_reset = _MASKED_BIT_DISABLE(0xffff);
d->val_set = _MASKED_BIT_ENABLE(FORCEWAKE_KERNEL);
d->val_clear = _MASKED_BIT_DISABLE(FORCEWAKE_KERNEL);
@@ -1088,6 +1128,8 @@
{
struct drm_i915_private *dev_priv = dev->dev_private;
+ i915_check_vgpu(dev);
+
intel_uncore_ellc_detect(dev);
intel_uncore_fw_domains_init(dev);
__intel_uncore_early_sanitize(dev, false);
@@ -1136,6 +1178,11 @@
break;
}
+ if (intel_vgpu_active(dev)) {
+ ASSIGN_WRITE_MMIO_VFUNCS(vgpu);
+ ASSIGN_READ_MMIO_VFUNCS(vgpu);
+ }
+
i915_check_and_clear_faults(dev);
}
#undef ASSIGN_WRITE_MMIO_VFUNCS
diff --git a/drivers/gpu/drm/imx/Kconfig b/drivers/gpu/drm/imx/Kconfig
index 33cdddf..2b81a41 100644
--- a/drivers/gpu/drm/imx/Kconfig
+++ b/drivers/gpu/drm/imx/Kconfig
@@ -36,6 +36,7 @@
config DRM_IMX_LDB
tristate "Support for LVDS displays"
depends on DRM_IMX && MFD_SYSCON
+ select DRM_PANEL
help
Choose this to enable the internal LVDS Display Bridge (LDB)
found on i.MX53 and i.MX6 processors.
diff --git a/drivers/gpu/drm/imx/dw_hdmi-imx.c b/drivers/gpu/drm/imx/dw_hdmi-imx.c
index 87fe8ed..a3ecf10 100644
--- a/drivers/gpu/drm/imx/dw_hdmi-imx.c
+++ b/drivers/gpu/drm/imx/dw_hdmi-imx.c
@@ -75,10 +75,10 @@
},
};
-static const struct dw_hdmi_sym_term imx_sym_term[] = {
- /*pixelclk symbol term*/
- { 148500000, 0x800d, 0x0005 },
- { ~0UL, 0x0000, 0x0000 }
+static const struct dw_hdmi_phy_config imx_phy_config[] = {
+ /*pixelclk symbol term vlev */
+ { 148500000, 0x800d, 0x0005, 0x01ad},
+ { ~0UL, 0x0000, 0x0000, 0x0000}
};
static int dw_hdmi_imx_parse_dt(struct imx_hdmi *hdmi)
@@ -123,7 +123,7 @@
static void dw_hdmi_imx_encoder_prepare(struct drm_encoder *encoder)
{
- imx_drm_panel_format(encoder, V4L2_PIX_FMT_RGB24);
+ imx_drm_set_bus_format(encoder, MEDIA_BUS_FMT_RGB888_1X24);
}
static struct drm_encoder_helper_funcs dw_hdmi_imx_encoder_helper_funcs = {
@@ -163,7 +163,7 @@
static struct dw_hdmi_plat_data imx6q_hdmi_drv_data = {
.mpll_cfg = imx_mpll_cfg,
.cur_ctr = imx_cur_ctr,
- .sym_term = imx_sym_term,
+ .phy_config = imx_phy_config,
.dev_type = IMX6Q_HDMI,
.mode_valid = imx6q_hdmi_mode_valid,
};
@@ -171,7 +171,7 @@
static struct dw_hdmi_plat_data imx6dl_hdmi_drv_data = {
.mpll_cfg = imx_mpll_cfg,
.cur_ctr = imx_cur_ctr,
- .sym_term = imx_sym_term,
+ .phy_config = imx_phy_config,
.dev_type = IMX6DL_HDMI,
.mode_valid = imx6dl_hdmi_mode_valid,
};
diff --git a/drivers/gpu/drm/imx/imx-drm-core.c b/drivers/gpu/drm/imx/imx-drm-core.c
index a002f53..74f505b 100644
--- a/drivers/gpu/drm/imx/imx-drm-core.c
+++ b/drivers/gpu/drm/imx/imx-drm-core.c
@@ -103,8 +103,8 @@
return NULL;
}
-int imx_drm_panel_format_pins(struct drm_encoder *encoder,
- u32 interface_pix_fmt, int hsync_pin, int vsync_pin)
+int imx_drm_set_bus_format_pins(struct drm_encoder *encoder, u32 bus_format,
+ int hsync_pin, int vsync_pin)
{
struct imx_drm_crtc_helper_funcs *helper;
struct imx_drm_crtc *imx_crtc;
@@ -116,16 +116,16 @@
helper = &imx_crtc->imx_drm_helper_funcs;
if (helper->set_interface_pix_fmt)
return helper->set_interface_pix_fmt(encoder->crtc,
- interface_pix_fmt, hsync_pin, vsync_pin);
+ bus_format, hsync_pin, vsync_pin);
return 0;
}
-EXPORT_SYMBOL_GPL(imx_drm_panel_format_pins);
+EXPORT_SYMBOL_GPL(imx_drm_set_bus_format_pins);
-int imx_drm_panel_format(struct drm_encoder *encoder, u32 interface_pix_fmt)
+int imx_drm_set_bus_format(struct drm_encoder *encoder, u32 bus_format)
{
- return imx_drm_panel_format_pins(encoder, interface_pix_fmt, 2, 3);
+ return imx_drm_set_bus_format_pins(encoder, bus_format, 2, 3);
}
-EXPORT_SYMBOL_GPL(imx_drm_panel_format);
+EXPORT_SYMBOL_GPL(imx_drm_set_bus_format);
int imx_drm_crtc_vblank_get(struct imx_drm_crtc *imx_drm_crtc)
{
@@ -431,15 +431,6 @@
}
EXPORT_SYMBOL_GPL(imx_drm_encoder_parse_of);
-static struct device_node *imx_drm_of_get_next_endpoint(
- const struct device_node *parent, struct device_node *prev)
-{
- struct device_node *node = of_graph_get_next_endpoint(parent, prev);
-
- of_node_put(prev);
- return node;
-}
-
/*
* @node: device tree node containing encoder input ports
* @encoder: drm_encoder
@@ -448,7 +439,7 @@
struct drm_encoder *encoder)
{
struct imx_drm_crtc *imx_crtc = imx_drm_find_crtc(encoder->crtc);
- struct device_node *ep = NULL;
+ struct device_node *ep;
struct of_endpoint endpoint;
struct device_node *port;
int ret;
@@ -456,18 +447,15 @@
if (!node || !imx_crtc)
return -EINVAL;
- do {
- ep = imx_drm_of_get_next_endpoint(node, ep);
- if (!ep)
- break;
-
+ for_each_endpoint_of_node(node, ep) {
port = of_graph_get_remote_port(ep);
of_node_put(port);
if (port == imx_crtc->crtc->port) {
ret = of_graph_parse_endpoint(ep, &endpoint);
+ of_node_put(ep);
return ret ? ret : endpoint.port;
}
- } while (ep);
+ }
return -EINVAL;
}
diff --git a/drivers/gpu/drm/imx/imx-drm.h b/drivers/gpu/drm/imx/imx-drm.h
index 3c559cc..28e776d 100644
--- a/drivers/gpu/drm/imx/imx-drm.h
+++ b/drivers/gpu/drm/imx/imx-drm.h
@@ -18,7 +18,7 @@
int (*enable_vblank)(struct drm_crtc *crtc);
void (*disable_vblank)(struct drm_crtc *crtc);
int (*set_interface_pix_fmt)(struct drm_crtc *crtc,
- u32 pix_fmt, int hsync_pin, int vsync_pin);
+ u32 bus_format, int hsync_pin, int vsync_pin);
const struct drm_crtc_helper_funcs *crtc_helper_funcs;
const struct drm_crtc_funcs *crtc_funcs;
};
@@ -40,10 +40,10 @@
struct drm_gem_cma_object *imx_drm_fb_get_obj(struct drm_framebuffer *fb);
-int imx_drm_panel_format_pins(struct drm_encoder *encoder,
- u32 interface_pix_fmt, int hsync_pin, int vsync_pin);
-int imx_drm_panel_format(struct drm_encoder *encoder,
- u32 interface_pix_fmt);
+int imx_drm_set_bus_format_pins(struct drm_encoder *encoder,
+ u32 bus_format, int hsync_pin, int vsync_pin);
+int imx_drm_set_bus_format(struct drm_encoder *encoder,
+ u32 bus_format);
int imx_drm_encoder_get_mux_id(struct device_node *node,
struct drm_encoder *encoder);
diff --git a/drivers/gpu/drm/imx/imx-ldb.c b/drivers/gpu/drm/imx/imx-ldb.c
index 2d6dc94..abacc8f 100644
--- a/drivers/gpu/drm/imx/imx-ldb.c
+++ b/drivers/gpu/drm/imx/imx-ldb.c
@@ -19,10 +19,11 @@
#include <drm/drmP.h>
#include <drm/drm_fb_helper.h>
#include <drm/drm_crtc_helper.h>
+#include <drm/drm_panel.h>
#include <linux/mfd/syscon.h>
#include <linux/mfd/syscon/imx6q-iomuxc-gpr.h>
-#include <linux/of_address.h>
#include <linux/of_device.h>
+#include <linux/of_graph.h>
#include <video/of_videomode.h>
#include <linux/regmap.h>
#include <linux/videodev2.h>
@@ -55,12 +56,14 @@
struct imx_ldb *ldb;
struct drm_connector connector;
struct drm_encoder encoder;
+ struct drm_panel *panel;
struct device_node *child;
int chno;
void *edid;
int edid_len;
struct drm_display_mode mode;
int mode_valid;
+ int bus_format;
};
struct bus_mux {
@@ -75,6 +78,7 @@
struct imx_ldb_channel channel[2];
struct clk *clk[2]; /* our own clock */
struct clk *clk_sel[4]; /* parent of display clock */
+ struct clk *clk_parent[4]; /* original parent of clk_sel */
struct clk *clk_pll[2]; /* upstream clock we can adjust */
u32 ldb_ctrl;
const struct bus_mux *lvds_mux;
@@ -91,6 +95,17 @@
struct imx_ldb_channel *imx_ldb_ch = con_to_imx_ldb_ch(connector);
int num_modes = 0;
+ if (imx_ldb_ch->panel && imx_ldb_ch->panel->funcs &&
+ imx_ldb_ch->panel->funcs->get_modes) {
+ struct drm_display_info *di = &connector->display_info;
+
+ num_modes = imx_ldb_ch->panel->funcs->get_modes(imx_ldb_ch->panel);
+ if (!imx_ldb_ch->bus_format && di->num_bus_formats)
+ imx_ldb_ch->bus_format = di->bus_formats[0];
+ if (num_modes > 0)
+ return num_modes;
+ }
+
if (imx_ldb_ch->edid) {
drm_mode_connector_update_edid_property(connector,
imx_ldb_ch->edid);
@@ -163,24 +178,36 @@
{
struct imx_ldb_channel *imx_ldb_ch = enc_to_imx_ldb_ch(encoder);
struct imx_ldb *ldb = imx_ldb_ch->ldb;
- u32 pixel_fmt;
+ int dual = ldb->ldb_ctrl & LDB_SPLIT_MODE_EN;
+ u32 bus_format;
- switch (imx_ldb_ch->chno) {
- case 0:
- pixel_fmt = (ldb->ldb_ctrl & LDB_DATA_WIDTH_CH0_24) ?
- V4L2_PIX_FMT_RGB24 : V4L2_PIX_FMT_BGR666;
- break;
- case 1:
- pixel_fmt = (ldb->ldb_ctrl & LDB_DATA_WIDTH_CH1_24) ?
- V4L2_PIX_FMT_RGB24 : V4L2_PIX_FMT_BGR666;
- break;
+ switch (imx_ldb_ch->bus_format) {
default:
- dev_err(ldb->dev, "unable to config di%d panel format\n",
- imx_ldb_ch->chno);
- pixel_fmt = V4L2_PIX_FMT_RGB24;
+ dev_warn(ldb->dev,
+ "could not determine data mapping, default to 18-bit \"spwg\"\n");
+ /* fallthrough */
+ case MEDIA_BUS_FMT_RGB666_1X7X3_SPWG:
+ bus_format = MEDIA_BUS_FMT_RGB666_1X18;
+ break;
+ case MEDIA_BUS_FMT_RGB888_1X7X4_SPWG:
+ bus_format = MEDIA_BUS_FMT_RGB888_1X24;
+ if (imx_ldb_ch->chno == 0 || dual)
+ ldb->ldb_ctrl |= LDB_DATA_WIDTH_CH0_24;
+ if (imx_ldb_ch->chno == 1 || dual)
+ ldb->ldb_ctrl |= LDB_DATA_WIDTH_CH1_24;
+ break;
+ case MEDIA_BUS_FMT_RGB888_1X7X4_JEIDA:
+ bus_format = MEDIA_BUS_FMT_RGB888_1X24;
+ if (imx_ldb_ch->chno == 0 || dual)
+ ldb->ldb_ctrl |= LDB_DATA_WIDTH_CH0_24 |
+ LDB_BIT_MAP_CH0_JEIDA;
+ if (imx_ldb_ch->chno == 1 || dual)
+ ldb->ldb_ctrl |= LDB_DATA_WIDTH_CH1_24 |
+ LDB_BIT_MAP_CH1_JEIDA;
+ break;
}
- imx_drm_panel_format(encoder, pixel_fmt);
+ imx_drm_set_bus_format(encoder, bus_format);
}
static void imx_ldb_encoder_commit(struct drm_encoder *encoder)
@@ -190,6 +217,8 @@
int dual = ldb->ldb_ctrl & LDB_SPLIT_MODE_EN;
int mux = imx_drm_encoder_get_mux_id(imx_ldb_ch->child, encoder);
+ drm_panel_prepare(imx_ldb_ch->panel);
+
if (dual) {
clk_prepare_enable(ldb->clk[0]);
clk_prepare_enable(ldb->clk[1]);
@@ -223,6 +252,8 @@
}
regmap_write(ldb->regmap, IOMUXC_GPR2, ldb->ldb_ctrl);
+
+ drm_panel_enable(imx_ldb_ch->panel);
}
static void imx_ldb_encoder_mode_set(struct drm_encoder *encoder,
@@ -274,6 +305,7 @@
{
struct imx_ldb_channel *imx_ldb_ch = enc_to_imx_ldb_ch(encoder);
struct imx_ldb *ldb = imx_ldb_ch->ldb;
+ int mux, ret;
/*
* imx_ldb_encoder_disable is called by
@@ -287,6 +319,8 @@
(ldb->ldb_ctrl & LDB_CH1_MODE_EN_MASK) == 0)
return;
+ drm_panel_disable(imx_ldb_ch->panel);
+
if (imx_ldb_ch == &ldb->channel[0])
ldb->ldb_ctrl &= ~LDB_CH0_MODE_EN_MASK;
else if (imx_ldb_ch == &ldb->channel[1])
@@ -298,6 +332,30 @@
clk_disable_unprepare(ldb->clk[0]);
clk_disable_unprepare(ldb->clk[1]);
}
+
+ if (ldb->lvds_mux) {
+ const struct bus_mux *lvds_mux = NULL;
+
+ if (imx_ldb_ch == &ldb->channel[0])
+ lvds_mux = &ldb->lvds_mux[0];
+ else if (imx_ldb_ch == &ldb->channel[1])
+ lvds_mux = &ldb->lvds_mux[1];
+
+ regmap_read(ldb->regmap, lvds_mux->reg, &mux);
+ mux &= lvds_mux->mask;
+ mux >>= lvds_mux->shift;
+ } else {
+ mux = (imx_ldb_ch == &ldb->channel[0]) ? 0 : 1;
+ }
+
+ /* set display clock mux back to original input clock */
+ ret = clk_set_parent(ldb->clk_sel[mux], ldb->clk_parent[mux]);
+ if (ret)
+ dev_err(ldb->dev,
+ "unable to set di%d parent clock to original parent\n",
+ mux);
+
+ drm_panel_unprepare(imx_ldb_ch->panel);
}
static struct drm_connector_funcs imx_ldb_connector_funcs = {
@@ -371,6 +429,9 @@
drm_connector_init(drm, &imx_ldb_ch->connector,
&imx_ldb_connector_funcs, DRM_MODE_CONNECTOR_LVDS);
+ if (imx_ldb_ch->panel)
+ drm_panel_attach(imx_ldb_ch->panel, &imx_ldb_ch->connector);
+
drm_mode_connector_attach_encoder(&imx_ldb_ch->connector,
&imx_ldb_ch->encoder);
@@ -382,25 +443,39 @@
LVDS_BIT_MAP_JEIDA
};
-static const char * const imx_ldb_bit_mappings[] = {
- [LVDS_BIT_MAP_SPWG] = "spwg",
- [LVDS_BIT_MAP_JEIDA] = "jeida",
+struct imx_ldb_bit_mapping {
+ u32 bus_format;
+ u32 datawidth;
+ const char * const mapping;
};
-static const int of_get_data_mapping(struct device_node *np)
+static const struct imx_ldb_bit_mapping imx_ldb_bit_mappings[] = {
+ { MEDIA_BUS_FMT_RGB666_1X7X3_SPWG, 18, "spwg" },
+ { MEDIA_BUS_FMT_RGB888_1X7X4_SPWG, 24, "spwg" },
+ { MEDIA_BUS_FMT_RGB888_1X7X4_JEIDA, 24, "jeida" },
+};
+
+static u32 of_get_bus_format(struct device *dev, struct device_node *np)
{
const char *bm;
+ u32 datawidth = 0;
int ret, i;
ret = of_property_read_string(np, "fsl,data-mapping", &bm);
if (ret < 0)
return ret;
- for (i = 0; i < ARRAY_SIZE(imx_ldb_bit_mappings); i++)
- if (!strcasecmp(bm, imx_ldb_bit_mappings[i]))
- return i;
+ of_property_read_u32(np, "fsl,data-width", &datawidth);
- return -EINVAL;
+ for (i = 0; i < ARRAY_SIZE(imx_ldb_bit_mappings); i++) {
+ if (!strcasecmp(bm, imx_ldb_bit_mappings[i].mapping) &&
+ datawidth == imx_ldb_bit_mappings[i].datawidth)
+ return imx_ldb_bit_mappings[i].bus_format;
+ }
+
+ dev_err(dev, "invalid data mapping: %d-bit \"%s\"\n", datawidth, bm);
+
+ return -ENOENT;
}
static struct bus_mux imx6q_lvds_mux[2] = {
@@ -437,8 +512,6 @@
struct device_node *child;
const u8 *edidp;
struct imx_ldb *imx_ldb;
- int datawidth;
- int mapping;
int dual;
int ret;
int i;
@@ -479,12 +552,15 @@
imx_ldb->clk_sel[i] = NULL;
break;
}
+
+ imx_ldb->clk_parent[i] = clk_get_parent(imx_ldb->clk_sel[i]);
}
if (i == 0)
return ret;
for_each_child_of_node(np, child) {
struct imx_ldb_channel *channel;
+ struct device_node *port;
ret = of_property_read_u32(child, "reg", &i);
if (ret || i < 0 || i > 1)
@@ -503,49 +579,53 @@
channel->chno = i;
channel->child = child;
+ /*
+ * The output port is port@4 with an external 4-port mux or
+ * port@2 with the internal 2-port mux.
+ */
+ port = of_graph_get_port_by_id(child, imx_ldb->lvds_mux ? 4 : 2);
+ if (port) {
+ struct device_node *endpoint, *remote;
+
+ endpoint = of_get_child_by_name(port, "endpoint");
+ if (endpoint) {
+ remote = of_graph_get_remote_port_parent(endpoint);
+ if (remote)
+ channel->panel = of_drm_find_panel(remote);
+ else
+ return -EPROBE_DEFER;
+ if (!channel->panel) {
+ dev_err(dev, "panel not found: %s\n",
+ remote->full_name);
+ return -EPROBE_DEFER;
+ }
+ }
+ }
+
edidp = of_get_property(child, "edid", &channel->edid_len);
if (edidp) {
channel->edid = kmemdup(edidp, channel->edid_len,
GFP_KERNEL);
- } else {
+ } else if (!channel->panel) {
ret = of_get_drm_display_mode(child, &channel->mode, 0);
if (!ret)
channel->mode_valid = 1;
}
- ret = of_property_read_u32(child, "fsl,data-width", &datawidth);
- if (ret)
- datawidth = 0;
- else if (datawidth != 18 && datawidth != 24)
- return -EINVAL;
-
- mapping = of_get_data_mapping(child);
- switch (mapping) {
- case LVDS_BIT_MAP_SPWG:
- if (datawidth == 24) {
- if (i == 0 || dual)
- imx_ldb->ldb_ctrl |=
- LDB_DATA_WIDTH_CH0_24;
- if (i == 1 || dual)
- imx_ldb->ldb_ctrl |=
- LDB_DATA_WIDTH_CH1_24;
- }
- break;
- case LVDS_BIT_MAP_JEIDA:
- if (datawidth == 18) {
- dev_err(dev, "JEIDA standard only supported in 24 bit\n");
- return -EINVAL;
- }
- if (i == 0 || dual)
- imx_ldb->ldb_ctrl |= LDB_DATA_WIDTH_CH0_24 |
- LDB_BIT_MAP_CH0_JEIDA;
- if (i == 1 || dual)
- imx_ldb->ldb_ctrl |= LDB_DATA_WIDTH_CH1_24 |
- LDB_BIT_MAP_CH1_JEIDA;
- break;
- default:
- dev_err(dev, "data mapping not specified or invalid\n");
- return -EINVAL;
+ channel->bus_format = of_get_bus_format(dev, child);
+ if (channel->bus_format == -EINVAL) {
+ /*
+ * If no bus format was specified in the device tree,
+ * we can still get it from the connected panel later.
+ */
+ if (channel->panel && channel->panel->funcs &&
+ channel->panel->funcs->get_modes)
+ channel->bus_format = 0;
+ }
+ if (channel->bus_format < 0) {
+ dev_err(dev, "could not determine data mapping: %d\n",
+ channel->bus_format);
+ return channel->bus_format;
}
ret = imx_ldb_register(drm, channel);
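The imx-ldb hunks above wire a drm_panel into the encoder enable and disable paths. The bracketing follows the usual drm_panel ordering: prepare before and enable after the controller is brought up, with the mirror image on the way down. A minimal sketch of that ordering (placeholder function names, not taken from this patch):

/*
 * Sketch: drm_panel bracketing as added to the LDB encoder hooks above.
 * Only the drm_panel_* calls are real API.
 */
#include <drm/drm_panel.h>

static void example_encoder_commit(struct drm_panel *panel)
{
	drm_panel_prepare(panel);	/* power up the panel */
	/* ...enable LDB clocks and set the channel mode bits here... */
	drm_panel_enable(panel);	/* e.g. switch on the backlight */
}

static void example_encoder_disable(struct drm_panel *panel)
{
	drm_panel_disable(panel);
	/* ...clear the channel mode bits and disable clocks here... */
	drm_panel_unprepare(panel);
}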
diff --git a/drivers/gpu/drm/imx/imx-tve.c b/drivers/gpu/drm/imx/imx-tve.c
index 4216e47..214ecee 100644
--- a/drivers/gpu/drm/imx/imx-tve.c
+++ b/drivers/gpu/drm/imx/imx-tve.c
@@ -301,11 +301,11 @@
switch (tve->mode) {
case TVE_MODE_VGA:
- imx_drm_panel_format_pins(encoder, IPU_PIX_FMT_GBR24,
- tve->hsync_pin, tve->vsync_pin);
+ imx_drm_set_bus_format_pins(encoder, MEDIA_BUS_FMT_YUV8_1X24,
+ tve->hsync_pin, tve->vsync_pin);
break;
case TVE_MODE_TVOUT:
- imx_drm_panel_format(encoder, V4L2_PIX_FMT_YUV444);
+ imx_drm_set_bus_format(encoder, MEDIA_BUS_FMT_YUV8_1X24);
break;
}
}
diff --git a/drivers/gpu/drm/imx/ipuv3-crtc.c b/drivers/gpu/drm/imx/ipuv3-crtc.c
index 98551e3..7bc8301 100644
--- a/drivers/gpu/drm/imx/ipuv3-crtc.c
+++ b/drivers/gpu/drm/imx/ipuv3-crtc.c
@@ -45,7 +45,7 @@
struct drm_pending_vblank_event *page_flip_event;
struct drm_framebuffer *newfb;
int irq;
- u32 interface_pix_fmt;
+ u32 bus_format;
int di_hsync_pin;
int di_vsync_pin;
};
@@ -145,7 +145,6 @@
struct ipu_crtc *ipu_crtc = to_ipu_crtc(crtc);
struct ipu_di_signal_cfg sig_cfg = {};
unsigned long encoder_types = 0;
- u32 out_pixel_fmt;
int ret;
dev_dbg(ipu_crtc->dev, "%s: mode->hdisplay: %d\n", __func__,
@@ -161,21 +160,21 @@
__func__, encoder_types);
/*
- * If we have DAC, TVDAC or LDB, then we need the IPU DI clock
- * to be the same as the LDB DI clock.
+ * If we have DAC or LDB, then we need the IPU DI clock to be
+ * the same as the LDB DI clock. For TVDAC, derive the IPU DI
+ * clock from 27 MHz TVE_DI clock, but allow to divide it.
*/
if (encoder_types & (BIT(DRM_MODE_ENCODER_DAC) |
- BIT(DRM_MODE_ENCODER_TVDAC) |
BIT(DRM_MODE_ENCODER_LVDS)))
sig_cfg.clkflags = IPU_DI_CLKMODE_SYNC | IPU_DI_CLKMODE_EXT;
+ else if (encoder_types & BIT(DRM_MODE_ENCODER_TVDAC))
+ sig_cfg.clkflags = IPU_DI_CLKMODE_EXT;
else
sig_cfg.clkflags = 0;
- out_pixel_fmt = ipu_crtc->interface_pix_fmt;
-
sig_cfg.enable_pol = 1;
sig_cfg.clk_pol = 0;
- sig_cfg.pixel_fmt = out_pixel_fmt;
+ sig_cfg.bus_format = ipu_crtc->bus_format;
sig_cfg.v_to_h_sync = 0;
sig_cfg.hsync_pin = ipu_crtc->di_hsync_pin;
sig_cfg.vsync_pin = ipu_crtc->di_vsync_pin;
@@ -184,7 +183,7 @@
ret = ipu_dc_init_sync(ipu_crtc->dc, ipu_crtc->di,
mode->flags & DRM_MODE_FLAG_INTERLACE,
- out_pixel_fmt, mode->hdisplay);
+ ipu_crtc->bus_format, mode->hdisplay);
if (ret) {
dev_err(ipu_crtc->dev,
"initializing display controller failed with %d\n",
@@ -202,7 +201,8 @@
return ipu_plane_mode_set(ipu_crtc->plane[0], crtc, mode,
crtc->primary->fb,
0, 0, mode->hdisplay, mode->vdisplay,
- x, y, mode->hdisplay, mode->vdisplay);
+ x, y, mode->hdisplay, mode->vdisplay,
+ mode->flags & DRM_MODE_FLAG_INTERLACE);
}
static void ipu_crtc_handle_pageflip(struct ipu_crtc *ipu_crtc)
@@ -291,11 +291,11 @@
}
static int ipu_set_interface_pix_fmt(struct drm_crtc *crtc,
- u32 pixfmt, int hsync_pin, int vsync_pin)
+ u32 bus_format, int hsync_pin, int vsync_pin)
{
struct ipu_crtc *ipu_crtc = to_ipu_crtc(crtc);
- ipu_crtc->interface_pix_fmt = pixfmt;
+ ipu_crtc->bus_format = bus_format;
ipu_crtc->di_hsync_pin = hsync_pin;
ipu_crtc->di_vsync_pin = vsync_pin;
diff --git a/drivers/gpu/drm/imx/ipuv3-plane.c b/drivers/gpu/drm/imx/ipuv3-plane.c
index 6987e16..878a643 100644
--- a/drivers/gpu/drm/imx/ipuv3-plane.c
+++ b/drivers/gpu/drm/imx/ipuv3-plane.c
@@ -99,7 +99,7 @@
struct drm_framebuffer *fb, int crtc_x, int crtc_y,
unsigned int crtc_w, unsigned int crtc_h,
uint32_t src_x, uint32_t src_y,
- uint32_t src_w, uint32_t src_h)
+ uint32_t src_w, uint32_t src_h, bool interlaced)
{
struct device *dev = ipu_plane->base.dev->dev;
int ret;
@@ -213,6 +213,8 @@
ret = ipu_plane_set_base(ipu_plane, fb, src_x, src_y);
if (ret < 0)
return ret;
+ if (interlaced)
+ ipu_cpmem_interlaced_scan(ipu_plane->ipu_ch, fb->pitches[0]);
ipu_plane->w = src_w;
ipu_plane->h = src_h;
@@ -312,7 +314,8 @@
ret = ipu_plane_mode_set(ipu_plane, crtc, &crtc->hwmode, fb,
crtc_x, crtc_y, crtc_w, crtc_h,
- src_x >> 16, src_y >> 16, src_w >> 16, src_h >> 16);
+ src_x >> 16, src_y >> 16, src_w >> 16, src_h >> 16,
+ false);
if (ret < 0) {
ipu_plane_put_resources(ipu_plane);
return ret;
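The src_x/src_y/src_w/src_h arguments that DRM hands to update_plane are in 16.16 fixed-point format, which is why the call above shifts them down by 16 bits before passing whole pixels to ipu_plane_mode_set(). A trivial illustration of the conversion (a sketch, not part of the patch):

/* Sketch: 16.16 fixed-point source coordinate to integer pixels. */
#include <linux/types.h>

static inline unsigned int fixed16_to_int(uint32_t coord)
{
	return coord >> 16;	/* e.g. 0x00500000 (80.0) -> 80 */
}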
diff --git a/drivers/gpu/drm/imx/ipuv3-plane.h b/drivers/gpu/drm/imx/ipuv3-plane.h
index af125fb..9b5eff1 100644
--- a/drivers/gpu/drm/imx/ipuv3-plane.h
+++ b/drivers/gpu/drm/imx/ipuv3-plane.h
@@ -42,7 +42,7 @@
struct drm_framebuffer *fb, int crtc_x, int crtc_y,
unsigned int crtc_w, unsigned int crtc_h,
uint32_t src_x, uint32_t src_y, uint32_t src_w,
- uint32_t src_h);
+ uint32_t src_h, bool interlaced);
void ipu_plane_enable(struct ipu_plane *plane);
void ipu_plane_disable(struct ipu_plane *plane);
diff --git a/drivers/gpu/drm/imx/parallel-display.c b/drivers/gpu/drm/imx/parallel-display.c
index 900dda6..74a9ce4 100644
--- a/drivers/gpu/drm/imx/parallel-display.c
+++ b/drivers/gpu/drm/imx/parallel-display.c
@@ -33,7 +33,7 @@
struct device *dev;
void *edid;
int edid_len;
- u32 interface_pix_fmt;
+ u32 bus_format;
int mode_valid;
struct drm_display_mode mode;
struct drm_panel *panel;
@@ -118,7 +118,7 @@
{
struct imx_parallel_display *imxpd = enc_to_imxpd(encoder);
- imx_drm_panel_format(encoder, imxpd->interface_pix_fmt);
+ imx_drm_set_bus_format(encoder, imxpd->bus_format);
}
static void imx_pd_encoder_commit(struct drm_encoder *encoder)
@@ -225,14 +225,13 @@
ret = of_property_read_string(np, "interface-pix-fmt", &fmt);
if (!ret) {
if (!strcmp(fmt, "rgb24"))
- imxpd->interface_pix_fmt = V4L2_PIX_FMT_RGB24;
+ imxpd->bus_format = MEDIA_BUS_FMT_RGB888_1X24;
else if (!strcmp(fmt, "rgb565"))
- imxpd->interface_pix_fmt = V4L2_PIX_FMT_RGB565;
+ imxpd->bus_format = MEDIA_BUS_FMT_RGB565_1X16;
else if (!strcmp(fmt, "bgr666"))
- imxpd->interface_pix_fmt = V4L2_PIX_FMT_BGR666;
+ imxpd->bus_format = MEDIA_BUS_FMT_RGB666_1X18;
else if (!strcmp(fmt, "lvds666"))
- imxpd->interface_pix_fmt =
- v4l2_fourcc('L', 'V', 'D', '6');
+ imxpd->bus_format = MEDIA_BUS_FMT_RGB666_1X24_CPADHI;
}
panel_node = of_parse_phandle(np, "fsl,panel", 0);
diff --git a/drivers/gpu/drm/mgag200/mgag200_mode.c b/drivers/gpu/drm/mgag200/mgag200_mode.c
index 9872ba9..6e84df9 100644
--- a/drivers/gpu/drm/mgag200/mgag200_mode.c
+++ b/drivers/gpu/drm/mgag200/mgag200_mode.c
@@ -1222,7 +1222,7 @@
{
struct drm_device *dev = crtc->dev;
struct mga_device *mdev = dev->dev_private;
- struct drm_crtc_helper_funcs *crtc_funcs = crtc->helper_private;
+ const struct drm_crtc_helper_funcs *crtc_funcs = crtc->helper_private;
u8 tmp;
if (mdev->type == G200_WB)
diff --git a/drivers/gpu/drm/msm/Kconfig b/drivers/gpu/drm/msm/Kconfig
index bacbbb7..0a6f676 100644
--- a/drivers/gpu/drm/msm/Kconfig
+++ b/drivers/gpu/drm/msm/Kconfig
@@ -35,3 +35,14 @@
Compile in support for logging register reads/writes in a format
that can be parsed by envytools demsm tool. If enabled, register
logging can be switched on via msm.reglog=y module param.
+
+config DRM_MSM_DSI
+ bool "Enable DSI support in MSM DRM driver"
+ depends on DRM_MSM
+ select DRM_PANEL
+ select DRM_MIPI_DSI
+ default y
+ help
+ Choose this option if you need MIPI DSI connector support.
+
diff --git a/drivers/gpu/drm/msm/Makefile b/drivers/gpu/drm/msm/Makefile
index 674a132..ab20867 100644
--- a/drivers/gpu/drm/msm/Makefile
+++ b/drivers/gpu/drm/msm/Makefile
@@ -50,5 +50,10 @@
msm-$(CONFIG_DRM_MSM_FBDEV) += msm_fbdev.o
msm-$(CONFIG_COMMON_CLK) += mdp/mdp4/mdp4_lvds_pll.o
+msm-$(CONFIG_DRM_MSM_DSI) += dsi/dsi.o \
+ dsi/dsi_host.o \
+ dsi/dsi_manager.o \
+ dsi/dsi_phy.o \
+ mdp/mdp5/mdp5_cmd_encoder.o
obj-$(CONFIG_DRM_MSM) += msm.o
diff --git a/drivers/gpu/drm/msm/dsi/dsi.c b/drivers/gpu/drm/msm/dsi/dsi.c
new file mode 100644
index 0000000..28d1f95
--- /dev/null
+++ b/drivers/gpu/drm/msm/dsi/dsi.c
@@ -0,0 +1,212 @@
+/*
+ * Copyright (c) 2015, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include "dsi.h"
+
+struct drm_encoder *msm_dsi_get_encoder(struct msm_dsi *msm_dsi)
+{
+ if (!msm_dsi || !msm_dsi->panel)
+ return NULL;
+
+ return (msm_dsi->panel_flags & MIPI_DSI_MODE_VIDEO) ?
+ msm_dsi->encoders[MSM_DSI_VIDEO_ENCODER_ID] :
+ msm_dsi->encoders[MSM_DSI_CMD_ENCODER_ID];
+}
+
+static void dsi_destroy(struct msm_dsi *msm_dsi)
+{
+ if (!msm_dsi)
+ return;
+
+ msm_dsi_manager_unregister(msm_dsi);
+ if (msm_dsi->host) {
+ msm_dsi_host_destroy(msm_dsi->host);
+ msm_dsi->host = NULL;
+ }
+
+ platform_set_drvdata(msm_dsi->pdev, NULL);
+}
+
+static struct msm_dsi *dsi_init(struct platform_device *pdev)
+{
+ struct msm_dsi *msm_dsi = NULL;
+ int ret;
+
+ if (!pdev) {
+ pr_err("no dsi device\n");
+ ret = -ENXIO;
+ goto fail;
+ }
+
+ msm_dsi = devm_kzalloc(&pdev->dev, sizeof(*msm_dsi), GFP_KERNEL);
+ if (!msm_dsi) {
+ ret = -ENOMEM;
+ goto fail;
+ }
+ DBG("dsi probed=%p", msm_dsi);
+
+ msm_dsi->pdev = pdev;
+ platform_set_drvdata(pdev, msm_dsi);
+
+ /* Init dsi host */
+ ret = msm_dsi_host_init(msm_dsi);
+ if (ret)
+ goto fail;
+
+ /* Register to dsi manager */
+ ret = msm_dsi_manager_register(msm_dsi);
+ if (ret)
+ goto fail;
+
+ return msm_dsi;
+
+fail:
+ if (msm_dsi)
+ dsi_destroy(msm_dsi);
+
+ return ERR_PTR(ret);
+}
+
+static int dsi_bind(struct device *dev, struct device *master, void *data)
+{
+ struct drm_device *drm = dev_get_drvdata(master);
+ struct msm_drm_private *priv = drm->dev_private;
+ struct platform_device *pdev = to_platform_device(dev);
+ struct msm_dsi *msm_dsi;
+
+ DBG("");
+ msm_dsi = dsi_init(pdev);
+ if (IS_ERR(msm_dsi))
+ return PTR_ERR(msm_dsi);
+
+ priv->dsi[msm_dsi->id] = msm_dsi;
+
+ return 0;
+}
+
+static void dsi_unbind(struct device *dev, struct device *master,
+ void *data)
+{
+ struct drm_device *drm = dev_get_drvdata(master);
+ struct msm_drm_private *priv = drm->dev_private;
+ struct msm_dsi *msm_dsi = dev_get_drvdata(dev);
+ int id = msm_dsi->id;
+
+ if (priv->dsi[id]) {
+ dsi_destroy(msm_dsi);
+ priv->dsi[id] = NULL;
+ }
+}
+
+static const struct component_ops dsi_ops = {
+ .bind = dsi_bind,
+ .unbind = dsi_unbind,
+};
+
+static int dsi_dev_probe(struct platform_device *pdev)
+{
+ return component_add(&pdev->dev, &dsi_ops);
+}
+
+static int dsi_dev_remove(struct platform_device *pdev)
+{
+ DBG("");
+ component_del(&pdev->dev, &dsi_ops);
+ return 0;
+}
+
+static const struct of_device_id dt_match[] = {
+ { .compatible = "qcom,mdss-dsi-ctrl" },
+ {}
+};
+
+static struct platform_driver dsi_driver = {
+ .probe = dsi_dev_probe,
+ .remove = dsi_dev_remove,
+ .driver = {
+ .name = "msm_dsi",
+ .of_match_table = dt_match,
+ },
+};
+
+void __init msm_dsi_register(void)
+{
+ DBG("");
+ platform_driver_register(&dsi_driver);
+}
+
+void __exit msm_dsi_unregister(void)
+{
+ DBG("");
+ platform_driver_unregister(&dsi_driver);
+}
+
+int msm_dsi_modeset_init(struct msm_dsi *msm_dsi, struct drm_device *dev,
+ struct drm_encoder *encoders[MSM_DSI_ENCODER_NUM])
+{
+ struct msm_drm_private *priv = dev->dev_private;
+ int ret, i;
+
+ if (WARN_ON(!encoders[MSM_DSI_VIDEO_ENCODER_ID] ||
+ !encoders[MSM_DSI_CMD_ENCODER_ID]))
+ return -EINVAL;
+
+ msm_dsi->dev = dev;
+
+ ret = msm_dsi_host_modeset_init(msm_dsi->host, dev);
+ if (ret) {
+ dev_err(dev->dev, "failed to modeset init host: %d\n", ret);
+ goto fail;
+ }
+
+ msm_dsi->bridge = msm_dsi_manager_bridge_init(msm_dsi->id);
+ if (IS_ERR(msm_dsi->bridge)) {
+ ret = PTR_ERR(msm_dsi->bridge);
+ dev_err(dev->dev, "failed to create dsi bridge: %d\n", ret);
+ msm_dsi->bridge = NULL;
+ goto fail;
+ }
+
+ msm_dsi->connector = msm_dsi_manager_connector_init(msm_dsi->id);
+ if (IS_ERR(msm_dsi->connector)) {
+ ret = PTR_ERR(msm_dsi->connector);
+ dev_err(dev->dev, "failed to create dsi connector: %d\n", ret);
+ msm_dsi->connector = NULL;
+ goto fail;
+ }
+
+ for (i = 0; i < MSM_DSI_ENCODER_NUM; i++) {
+ encoders[i]->bridge = msm_dsi->bridge;
+ msm_dsi->encoders[i] = encoders[i];
+ }
+
+ priv->bridges[priv->num_bridges++] = msm_dsi->bridge;
+ priv->connectors[priv->num_connectors++] = msm_dsi->connector;
+
+ return 0;
+fail:
+ if (msm_dsi) {
+ /* bridge/connector are normally destroyed by drm: */
+ if (msm_dsi->bridge) {
+ msm_dsi_manager_bridge_destroy(msm_dsi->bridge);
+ msm_dsi->bridge = NULL;
+ }
+ if (msm_dsi->connector) {
+ msm_dsi->connector->funcs->destroy(msm_dsi->connector);
+ msm_dsi->connector = NULL;
+ }
+ }
+
+ return ret;
+}
+
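dsi_init() above reports failure through the returned pointer via ERR_PTR(), and dsi_bind() unpacks it with IS_ERR()/PTR_ERR(). A minimal self-contained sketch of that idiom, with placeholder names:

/*
 * Sketch: the ERR_PTR()/IS_ERR()/PTR_ERR() idiom used by dsi_init() and
 * dsi_bind(). "struct foo" and the function names are placeholders.
 */
#include <linux/err.h>
#include <linux/slab.h>

struct foo {
	int dummy;
};

static struct foo *example_create(void)
{
	struct foo *p = kzalloc(sizeof(*p), GFP_KERNEL);

	if (!p)
		return ERR_PTR(-ENOMEM);	/* encode the errno in the pointer */
	return p;
}

static int example_bind(void)
{
	struct foo *p = example_create();

	if (IS_ERR(p))
		return PTR_ERR(p);	/* recover -ENOMEM etc. */
	/* ...use p... */
	return 0;
}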
diff --git a/drivers/gpu/drm/msm/dsi/dsi.h b/drivers/gpu/drm/msm/dsi/dsi.h
new file mode 100644
index 0000000..10f54d4
--- /dev/null
+++ b/drivers/gpu/drm/msm/dsi/dsi.h
@@ -0,0 +1,117 @@
+/*
+ * Copyright (c) 2015, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef __DSI_CONNECTOR_H__
+#define __DSI_CONNECTOR_H__
+
+#include <linux/platform_device.h>
+
+#include "drm_crtc.h"
+#include "drm_mipi_dsi.h"
+#include "drm_panel.h"
+
+#include "msm_drv.h"
+
+#define DSI_0 0
+#define DSI_1 1
+#define DSI_MAX 2
+
+#define DSI_CLOCK_MASTER DSI_0
+#define DSI_CLOCK_SLAVE DSI_1
+
+#define DSI_LEFT DSI_0
+#define DSI_RIGHT DSI_1
+
+/* According to the current drm framework sequence, the encoder of DSI_1 is
+ * taken as the master encoder.
+ */
+#define DSI_ENCODER_MASTER DSI_1
+#define DSI_ENCODER_SLAVE DSI_0
+
+struct msm_dsi {
+ struct drm_device *dev;
+ struct platform_device *pdev;
+
+ struct drm_connector *connector;
+ struct drm_bridge *bridge;
+
+ struct mipi_dsi_host *host;
+ struct msm_dsi_phy *phy;
+ struct drm_panel *panel;
+ unsigned long panel_flags;
+ bool phy_enabled;
+
+ /* the encoders we are hooked to (outside of dsi block) */
+ struct drm_encoder *encoders[MSM_DSI_ENCODER_NUM];
+
+ int id;
+};
+
+/* dsi manager */
+struct drm_bridge *msm_dsi_manager_bridge_init(u8 id);
+void msm_dsi_manager_bridge_destroy(struct drm_bridge *bridge);
+struct drm_connector *msm_dsi_manager_connector_init(u8 id);
+int msm_dsi_manager_phy_enable(int id,
+ const unsigned long bit_rate, const unsigned long esc_rate,
+ u32 *clk_pre, u32 *clk_post);
+void msm_dsi_manager_phy_disable(int id);
+int msm_dsi_manager_cmd_xfer(int id, const struct mipi_dsi_msg *msg);
+bool msm_dsi_manager_cmd_xfer_trigger(int id, u32 iova, u32 len);
+int msm_dsi_manager_register(struct msm_dsi *msm_dsi);
+void msm_dsi_manager_unregister(struct msm_dsi *msm_dsi);
+
+/* msm dsi */
+struct drm_encoder *msm_dsi_get_encoder(struct msm_dsi *msm_dsi);
+
+/* dsi host */
+int msm_dsi_host_xfer_prepare(struct mipi_dsi_host *host,
+ const struct mipi_dsi_msg *msg);
+void msm_dsi_host_xfer_restore(struct mipi_dsi_host *host,
+ const struct mipi_dsi_msg *msg);
+int msm_dsi_host_cmd_tx(struct mipi_dsi_host *host,
+ const struct mipi_dsi_msg *msg);
+int msm_dsi_host_cmd_rx(struct mipi_dsi_host *host,
+ const struct mipi_dsi_msg *msg);
+void msm_dsi_host_cmd_xfer_commit(struct mipi_dsi_host *host,
+ u32 iova, u32 len);
+int msm_dsi_host_enable(struct mipi_dsi_host *host);
+int msm_dsi_host_disable(struct mipi_dsi_host *host);
+int msm_dsi_host_power_on(struct mipi_dsi_host *host);
+int msm_dsi_host_power_off(struct mipi_dsi_host *host);
+int msm_dsi_host_set_display_mode(struct mipi_dsi_host *host,
+ struct drm_display_mode *mode);
+struct drm_panel *msm_dsi_host_get_panel(struct mipi_dsi_host *host,
+ unsigned long *panel_flags);
+int msm_dsi_host_register(struct mipi_dsi_host *host, bool check_defer);
+void msm_dsi_host_unregister(struct mipi_dsi_host *host);
+void msm_dsi_host_destroy(struct mipi_dsi_host *host);
+int msm_dsi_host_modeset_init(struct mipi_dsi_host *host,
+ struct drm_device *dev);
+int msm_dsi_host_init(struct msm_dsi *msm_dsi);
+
+/* dsi phy */
+struct msm_dsi_phy;
+enum msm_dsi_phy_type {
+ MSM_DSI_PHY_UNKNOWN,
+ MSM_DSI_PHY_28NM,
+ MSM_DSI_PHY_MAX
+};
+struct msm_dsi_phy *msm_dsi_phy_init(struct platform_device *pdev,
+ enum msm_dsi_phy_type type, int id);
+int msm_dsi_phy_enable(struct msm_dsi_phy *phy, bool is_dual_panel,
+ const unsigned long bit_rate, const unsigned long esc_rate);
+int msm_dsi_phy_disable(struct msm_dsi_phy *phy);
+void msm_dsi_phy_get_clk_pre_post(struct msm_dsi_phy *phy,
+ u32 *clk_pre, u32 *clk_post);
+#endif /* __DSI_CONNECTOR_H__ */
+

diff --git a/drivers/gpu/drm/msm/dsi/dsi.xml.h b/drivers/gpu/drm/msm/dsi/dsi.xml.h
index abf1bba..1dcfae2 100644
--- a/drivers/gpu/drm/msm/dsi/dsi.xml.h
+++ b/drivers/gpu/drm/msm/dsi/dsi.xml.h
@@ -8,19 +8,10 @@
git clone https://github.com/freedreno/envytools.git
The rules-ng-ng source files this header was generated from are:
-- /home/robclark/src/freedreno/envytools/rnndb/msm.xml ( 676 bytes, from 2014-12-05 15:34:49)
-- /home/robclark/src/freedreno/envytools/rnndb/freedreno_copyright.xml ( 1453 bytes, from 2013-03-31 16:51:27)
-- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp4.xml ( 20908 bytes, from 2014-12-08 16:13:00)
-- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp_common.xml ( 2357 bytes, from 2014-12-08 16:13:00)
-- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp5.xml ( 27208 bytes, from 2015-01-13 23:56:11)
-- /home/robclark/src/freedreno/envytools/rnndb/dsi/dsi.xml ( 11712 bytes, from 2013-08-17 17:13:43)
-- /home/robclark/src/freedreno/envytools/rnndb/dsi/sfpb.xml ( 344 bytes, from 2013-08-11 19:26:32)
-- /home/robclark/src/freedreno/envytools/rnndb/dsi/mmss_cc.xml ( 1686 bytes, from 2014-10-31 16:48:57)
-- /home/robclark/src/freedreno/envytools/rnndb/hdmi/qfprom.xml ( 600 bytes, from 2013-07-05 19:21:12)
-- /home/robclark/src/freedreno/envytools/rnndb/hdmi/hdmi.xml ( 26848 bytes, from 2015-01-13 23:55:57)
-- /home/robclark/src/freedreno/envytools/rnndb/edp/edp.xml ( 8253 bytes, from 2014-12-08 16:13:00)
+- /usr2/hali/local/envytools/envytools/rnndb/dsi/dsi.xml ( 18681 bytes, from 2015-03-04 23:08:31)
+- /usr2/hali/local/envytools/envytools/rnndb/freedreno_copyright.xml ( 1453 bytes, from 2015-01-28 21:43:22)
-Copyright (C) 2013 by the following authors:
+Copyright (C) 2013-2015 by the following authors:
- Rob Clark <robdclark@gmail.com> (robclark)
Permission is hereby granted, free of charge, to any person obtaining
@@ -51,11 +42,11 @@
BURST_MODE = 2,
};
-enum dsi_dst_format {
- DST_FORMAT_RGB565 = 0,
- DST_FORMAT_RGB666 = 1,
- DST_FORMAT_RGB666_LOOSE = 2,
- DST_FORMAT_RGB888 = 3,
+enum dsi_vid_dst_format {
+ VID_DST_FORMAT_RGB565 = 0,
+ VID_DST_FORMAT_RGB666 = 1,
+ VID_DST_FORMAT_RGB666_LOOSE = 2,
+ VID_DST_FORMAT_RGB888 = 3,
};
enum dsi_rgb_swap {
@@ -69,20 +60,63 @@
enum dsi_cmd_trigger {
TRIGGER_NONE = 0,
+ TRIGGER_SEOF = 1,
TRIGGER_TE = 2,
TRIGGER_SW = 4,
TRIGGER_SW_SEOF = 5,
TRIGGER_SW_TE = 6,
};
+enum dsi_cmd_dst_format {
+ CMD_DST_FORMAT_RGB111 = 0,
+ CMD_DST_FORMAT_RGB332 = 3,
+ CMD_DST_FORMAT_RGB444 = 4,
+ CMD_DST_FORMAT_RGB565 = 6,
+ CMD_DST_FORMAT_RGB666 = 7,
+ CMD_DST_FORMAT_RGB888 = 8,
+};
+
+enum dsi_lane_swap {
+ LANE_SWAP_0123 = 0,
+ LANE_SWAP_3012 = 1,
+ LANE_SWAP_2301 = 2,
+ LANE_SWAP_1230 = 3,
+ LANE_SWAP_0321 = 4,
+ LANE_SWAP_1032 = 5,
+ LANE_SWAP_2103 = 6,
+ LANE_SWAP_3210 = 7,
+};
+
#define DSI_IRQ_CMD_DMA_DONE 0x00000001
#define DSI_IRQ_MASK_CMD_DMA_DONE 0x00000002
#define DSI_IRQ_CMD_MDP_DONE 0x00000100
#define DSI_IRQ_MASK_CMD_MDP_DONE 0x00000200
#define DSI_IRQ_VIDEO_DONE 0x00010000
#define DSI_IRQ_MASK_VIDEO_DONE 0x00020000
+#define DSI_IRQ_BTA_DONE 0x00100000
+#define DSI_IRQ_MASK_BTA_DONE 0x00200000
#define DSI_IRQ_ERROR 0x01000000
#define DSI_IRQ_MASK_ERROR 0x02000000
+#define REG_DSI_6G_HW_VERSION 0x00000000
+#define DSI_6G_HW_VERSION_MAJOR__MASK 0xf0000000
+#define DSI_6G_HW_VERSION_MAJOR__SHIFT 28
+static inline uint32_t DSI_6G_HW_VERSION_MAJOR(uint32_t val)
+{
+ return ((val) << DSI_6G_HW_VERSION_MAJOR__SHIFT) & DSI_6G_HW_VERSION_MAJOR__MASK;
+}
+#define DSI_6G_HW_VERSION_MINOR__MASK 0x0fff0000
+#define DSI_6G_HW_VERSION_MINOR__SHIFT 16
+static inline uint32_t DSI_6G_HW_VERSION_MINOR(uint32_t val)
+{
+ return ((val) << DSI_6G_HW_VERSION_MINOR__SHIFT) & DSI_6G_HW_VERSION_MINOR__MASK;
+}
+#define DSI_6G_HW_VERSION_STEP__MASK 0x0000ffff
+#define DSI_6G_HW_VERSION_STEP__SHIFT 0
+static inline uint32_t DSI_6G_HW_VERSION_STEP(uint32_t val)
+{
+ return ((val) << DSI_6G_HW_VERSION_STEP__SHIFT) & DSI_6G_HW_VERSION_STEP__MASK;
+}
+
#define REG_DSI_CTRL 0x00000000
#define DSI_CTRL_ENABLE 0x00000001
#define DSI_CTRL_VID_MODE_EN 0x00000002
@@ -96,11 +130,15 @@
#define DSI_CTRL_CRC_CHECK 0x01000000
#define REG_DSI_STATUS0 0x00000004
+#define DSI_STATUS0_CMD_MODE_ENGINE_BUSY 0x00000001
#define DSI_STATUS0_CMD_MODE_DMA_BUSY 0x00000002
+#define DSI_STATUS0_CMD_MODE_MDP_BUSY 0x00000004
#define DSI_STATUS0_VIDEO_MODE_ENGINE_BUSY 0x00000008
#define DSI_STATUS0_DSI_BUSY 0x00000010
+#define DSI_STATUS0_INTERLEAVE_OP_CONTENTION 0x80000000
#define REG_DSI_FIFO_STATUS 0x00000008
+#define DSI_FIFO_STATUS_CMD_MDP_FIFO_UNDERFLOW 0x00000080
#define REG_DSI_VID_CFG0 0x0000000c
#define DSI_VID_CFG0_VIRT_CHANNEL__MASK 0x00000003
@@ -111,7 +149,7 @@
}
#define DSI_VID_CFG0_DST_FORMAT__MASK 0x00000030
#define DSI_VID_CFG0_DST_FORMAT__SHIFT 4
-static inline uint32_t DSI_VID_CFG0_DST_FORMAT(enum dsi_dst_format val)
+static inline uint32_t DSI_VID_CFG0_DST_FORMAT(enum dsi_vid_dst_format val)
{
return ((val) << DSI_VID_CFG0_DST_FORMAT__SHIFT) & DSI_VID_CFG0_DST_FORMAT__MASK;
}
@@ -129,21 +167,15 @@
#define DSI_VID_CFG0_PULSE_MODE_HSA_HE 0x10000000
#define REG_DSI_VID_CFG1 0x0000001c
-#define DSI_VID_CFG1_R_SEL 0x00000010
-#define DSI_VID_CFG1_G_SEL 0x00000100
-#define DSI_VID_CFG1_B_SEL 0x00001000
-#define DSI_VID_CFG1_RGB_SWAP__MASK 0x00070000
-#define DSI_VID_CFG1_RGB_SWAP__SHIFT 16
+#define DSI_VID_CFG1_R_SEL 0x00000001
+#define DSI_VID_CFG1_G_SEL 0x00000010
+#define DSI_VID_CFG1_B_SEL 0x00000100
+#define DSI_VID_CFG1_RGB_SWAP__MASK 0x00007000
+#define DSI_VID_CFG1_RGB_SWAP__SHIFT 12
static inline uint32_t DSI_VID_CFG1_RGB_SWAP(enum dsi_rgb_swap val)
{
return ((val) << DSI_VID_CFG1_RGB_SWAP__SHIFT) & DSI_VID_CFG1_RGB_SWAP__MASK;
}
-#define DSI_VID_CFG1_INTERLEAVE_MAX__MASK 0x00f00000
-#define DSI_VID_CFG1_INTERLEAVE_MAX__SHIFT 20
-static inline uint32_t DSI_VID_CFG1_INTERLEAVE_MAX(uint32_t val)
-{
- return ((val) << DSI_VID_CFG1_INTERLEAVE_MAX__SHIFT) & DSI_VID_CFG1_INTERLEAVE_MAX__MASK;
-}
#define REG_DSI_ACTIVE_H 0x00000020
#define DSI_ACTIVE_H_START__MASK 0x00000fff
@@ -201,32 +233,115 @@
return ((val) << DSI_ACTIVE_HSYNC_END__SHIFT) & DSI_ACTIVE_HSYNC_END__MASK;
}
-#define REG_DSI_ACTIVE_VSYNC 0x00000034
-#define DSI_ACTIVE_VSYNC_START__MASK 0x00000fff
-#define DSI_ACTIVE_VSYNC_START__SHIFT 0
-static inline uint32_t DSI_ACTIVE_VSYNC_START(uint32_t val)
+#define REG_DSI_ACTIVE_VSYNC_HPOS 0x00000030
+#define DSI_ACTIVE_VSYNC_HPOS_START__MASK 0x00000fff
+#define DSI_ACTIVE_VSYNC_HPOS_START__SHIFT 0
+static inline uint32_t DSI_ACTIVE_VSYNC_HPOS_START(uint32_t val)
{
- return ((val) << DSI_ACTIVE_VSYNC_START__SHIFT) & DSI_ACTIVE_VSYNC_START__MASK;
+ return ((val) << DSI_ACTIVE_VSYNC_HPOS_START__SHIFT) & DSI_ACTIVE_VSYNC_HPOS_START__MASK;
}
-#define DSI_ACTIVE_VSYNC_END__MASK 0x0fff0000
-#define DSI_ACTIVE_VSYNC_END__SHIFT 16
-static inline uint32_t DSI_ACTIVE_VSYNC_END(uint32_t val)
+#define DSI_ACTIVE_VSYNC_HPOS_END__MASK 0x0fff0000
+#define DSI_ACTIVE_VSYNC_HPOS_END__SHIFT 16
+static inline uint32_t DSI_ACTIVE_VSYNC_HPOS_END(uint32_t val)
{
- return ((val) << DSI_ACTIVE_VSYNC_END__SHIFT) & DSI_ACTIVE_VSYNC_END__MASK;
+ return ((val) << DSI_ACTIVE_VSYNC_HPOS_END__SHIFT) & DSI_ACTIVE_VSYNC_HPOS_END__MASK;
+}
+
+#define REG_DSI_ACTIVE_VSYNC_VPOS 0x00000034
+#define DSI_ACTIVE_VSYNC_VPOS_START__MASK 0x00000fff
+#define DSI_ACTIVE_VSYNC_VPOS_START__SHIFT 0
+static inline uint32_t DSI_ACTIVE_VSYNC_VPOS_START(uint32_t val)
+{
+ return ((val) << DSI_ACTIVE_VSYNC_VPOS_START__SHIFT) & DSI_ACTIVE_VSYNC_VPOS_START__MASK;
+}
+#define DSI_ACTIVE_VSYNC_VPOS_END__MASK 0x0fff0000
+#define DSI_ACTIVE_VSYNC_VPOS_END__SHIFT 16
+static inline uint32_t DSI_ACTIVE_VSYNC_VPOS_END(uint32_t val)
+{
+ return ((val) << DSI_ACTIVE_VSYNC_VPOS_END__SHIFT) & DSI_ACTIVE_VSYNC_VPOS_END__MASK;
}
#define REG_DSI_CMD_DMA_CTRL 0x00000038
+#define DSI_CMD_DMA_CTRL_BROADCAST_EN 0x80000000
#define DSI_CMD_DMA_CTRL_FROM_FRAME_BUFFER 0x10000000
#define DSI_CMD_DMA_CTRL_LOW_POWER 0x04000000
#define REG_DSI_CMD_CFG0 0x0000003c
+#define DSI_CMD_CFG0_DST_FORMAT__MASK 0x0000000f
+#define DSI_CMD_CFG0_DST_FORMAT__SHIFT 0
+static inline uint32_t DSI_CMD_CFG0_DST_FORMAT(enum dsi_cmd_dst_format val)
+{
+ return ((val) << DSI_CMD_CFG0_DST_FORMAT__SHIFT) & DSI_CMD_CFG0_DST_FORMAT__MASK;
+}
+#define DSI_CMD_CFG0_R_SEL 0x00000010
+#define DSI_CMD_CFG0_G_SEL 0x00000100
+#define DSI_CMD_CFG0_B_SEL 0x00001000
+#define DSI_CMD_CFG0_INTERLEAVE_MAX__MASK 0x00f00000
+#define DSI_CMD_CFG0_INTERLEAVE_MAX__SHIFT 20
+static inline uint32_t DSI_CMD_CFG0_INTERLEAVE_MAX(uint32_t val)
+{
+ return ((val) << DSI_CMD_CFG0_INTERLEAVE_MAX__SHIFT) & DSI_CMD_CFG0_INTERLEAVE_MAX__MASK;
+}
+#define DSI_CMD_CFG0_RGB_SWAP__MASK 0x00070000
+#define DSI_CMD_CFG0_RGB_SWAP__SHIFT 16
+static inline uint32_t DSI_CMD_CFG0_RGB_SWAP(enum dsi_rgb_swap val)
+{
+ return ((val) << DSI_CMD_CFG0_RGB_SWAP__SHIFT) & DSI_CMD_CFG0_RGB_SWAP__MASK;
+}
#define REG_DSI_CMD_CFG1 0x00000040
+#define DSI_CMD_CFG1_WR_MEM_START__MASK 0x000000ff
+#define DSI_CMD_CFG1_WR_MEM_START__SHIFT 0
+static inline uint32_t DSI_CMD_CFG1_WR_MEM_START(uint32_t val)
+{
+ return ((val) << DSI_CMD_CFG1_WR_MEM_START__SHIFT) & DSI_CMD_CFG1_WR_MEM_START__MASK;
+}
+#define DSI_CMD_CFG1_WR_MEM_CONTINUE__MASK 0x0000ff00
+#define DSI_CMD_CFG1_WR_MEM_CONTINUE__SHIFT 8
+static inline uint32_t DSI_CMD_CFG1_WR_MEM_CONTINUE(uint32_t val)
+{
+ return ((val) << DSI_CMD_CFG1_WR_MEM_CONTINUE__SHIFT) & DSI_CMD_CFG1_WR_MEM_CONTINUE__MASK;
+}
+#define DSI_CMD_CFG1_INSERT_DCS_COMMAND 0x00010000
#define REG_DSI_DMA_BASE 0x00000044
#define REG_DSI_DMA_LEN 0x00000048
+#define REG_DSI_CMD_MDP_STREAM_CTRL 0x00000054
+#define DSI_CMD_MDP_STREAM_CTRL_DATA_TYPE__MASK 0x0000003f
+#define DSI_CMD_MDP_STREAM_CTRL_DATA_TYPE__SHIFT 0
+static inline uint32_t DSI_CMD_MDP_STREAM_CTRL_DATA_TYPE(uint32_t val)
+{
+ return ((val) << DSI_CMD_MDP_STREAM_CTRL_DATA_TYPE__SHIFT) & DSI_CMD_MDP_STREAM_CTRL_DATA_TYPE__MASK;
+}
+#define DSI_CMD_MDP_STREAM_CTRL_VIRTUAL_CHANNEL__MASK 0x00000300
+#define DSI_CMD_MDP_STREAM_CTRL_VIRTUAL_CHANNEL__SHIFT 8
+static inline uint32_t DSI_CMD_MDP_STREAM_CTRL_VIRTUAL_CHANNEL(uint32_t val)
+{
+ return ((val) << DSI_CMD_MDP_STREAM_CTRL_VIRTUAL_CHANNEL__SHIFT) & DSI_CMD_MDP_STREAM_CTRL_VIRTUAL_CHANNEL__MASK;
+}
+#define DSI_CMD_MDP_STREAM_CTRL_WORD_COUNT__MASK 0xffff0000
+#define DSI_CMD_MDP_STREAM_CTRL_WORD_COUNT__SHIFT 16
+static inline uint32_t DSI_CMD_MDP_STREAM_CTRL_WORD_COUNT(uint32_t val)
+{
+ return ((val) << DSI_CMD_MDP_STREAM_CTRL_WORD_COUNT__SHIFT) & DSI_CMD_MDP_STREAM_CTRL_WORD_COUNT__MASK;
+}
+
+#define REG_DSI_CMD_MDP_STREAM_TOTAL 0x00000058
+#define DSI_CMD_MDP_STREAM_TOTAL_H_TOTAL__MASK 0x00000fff
+#define DSI_CMD_MDP_STREAM_TOTAL_H_TOTAL__SHIFT 0
+static inline uint32_t DSI_CMD_MDP_STREAM_TOTAL_H_TOTAL(uint32_t val)
+{
+ return ((val) << DSI_CMD_MDP_STREAM_TOTAL_H_TOTAL__SHIFT) & DSI_CMD_MDP_STREAM_TOTAL_H_TOTAL__MASK;
+}
+#define DSI_CMD_MDP_STREAM_TOTAL_V_TOTAL__MASK 0x0fff0000
+#define DSI_CMD_MDP_STREAM_TOTAL_V_TOTAL__SHIFT 16
+static inline uint32_t DSI_CMD_MDP_STREAM_TOTAL_V_TOTAL(uint32_t val)
+{
+ return ((val) << DSI_CMD_MDP_STREAM_TOTAL_V_TOTAL__SHIFT) & DSI_CMD_MDP_STREAM_TOTAL_V_TOTAL__MASK;
+}
+
#define REG_DSI_ACK_ERR_STATUS 0x00000064
static inline uint32_t REG_DSI_RDBK(uint32_t i0) { return 0x00000068 + 0x4*i0; }
@@ -234,19 +349,25 @@
static inline uint32_t REG_DSI_RDBK_DATA(uint32_t i0) { return 0x00000068 + 0x4*i0; }
#define REG_DSI_TRIG_CTRL 0x00000080
-#define DSI_TRIG_CTRL_DMA_TRIGGER__MASK 0x0000000f
+#define DSI_TRIG_CTRL_DMA_TRIGGER__MASK 0x00000007
#define DSI_TRIG_CTRL_DMA_TRIGGER__SHIFT 0
static inline uint32_t DSI_TRIG_CTRL_DMA_TRIGGER(enum dsi_cmd_trigger val)
{
return ((val) << DSI_TRIG_CTRL_DMA_TRIGGER__SHIFT) & DSI_TRIG_CTRL_DMA_TRIGGER__MASK;
}
-#define DSI_TRIG_CTRL_MDP_TRIGGER__MASK 0x000000f0
+#define DSI_TRIG_CTRL_MDP_TRIGGER__MASK 0x00000070
#define DSI_TRIG_CTRL_MDP_TRIGGER__SHIFT 4
static inline uint32_t DSI_TRIG_CTRL_MDP_TRIGGER(enum dsi_cmd_trigger val)
{
return ((val) << DSI_TRIG_CTRL_MDP_TRIGGER__SHIFT) & DSI_TRIG_CTRL_MDP_TRIGGER__MASK;
}
-#define DSI_TRIG_CTRL_STREAM 0x00000100
+#define DSI_TRIG_CTRL_STREAM__MASK 0x00000300
+#define DSI_TRIG_CTRL_STREAM__SHIFT 8
+static inline uint32_t DSI_TRIG_CTRL_STREAM(uint32_t val)
+{
+ return ((val) << DSI_TRIG_CTRL_STREAM__SHIFT) & DSI_TRIG_CTRL_STREAM__MASK;
+}
+#define DSI_TRIG_CTRL_BLOCK_DMA_WITHIN_FRAME 0x00001000
#define DSI_TRIG_CTRL_TE 0x80000000
#define REG_DSI_TRIG_DMA 0x0000008c
@@ -274,6 +395,12 @@
#define DSI_EOT_PACKET_CTRL_RX_EOT_IGNORE 0x00000010
#define REG_DSI_LANE_SWAP_CTRL 0x000000ac
+#define DSI_LANE_SWAP_CTRL_DLN_SWAP_SEL__MASK 0x00000007
+#define DSI_LANE_SWAP_CTRL_DLN_SWAP_SEL__SHIFT 0
+static inline uint32_t DSI_LANE_SWAP_CTRL_DLN_SWAP_SEL(enum dsi_lane_swap val)
+{
+ return ((val) << DSI_LANE_SWAP_CTRL_DLN_SWAP_SEL__SHIFT) & DSI_LANE_SWAP_CTRL_DLN_SWAP_SEL__MASK;
+}
#define REG_DSI_ERR_INT_MASK0 0x00000108
@@ -282,8 +409,36 @@
#define REG_DSI_RESET 0x00000114
#define REG_DSI_CLK_CTRL 0x00000118
+#define DSI_CLK_CTRL_AHBS_HCLK_ON 0x00000001
+#define DSI_CLK_CTRL_AHBM_SCLK_ON 0x00000002
+#define DSI_CLK_CTRL_PCLK_ON 0x00000004
+#define DSI_CLK_CTRL_DSICLK_ON 0x00000008
+#define DSI_CLK_CTRL_BYTECLK_ON 0x00000010
+#define DSI_CLK_CTRL_ESCCLK_ON 0x00000020
+#define DSI_CLK_CTRL_FORCE_ON_DYN_AHBM_HCLK 0x00000200
+
+#define REG_DSI_CLK_STATUS 0x0000011c
+#define DSI_CLK_STATUS_PLL_UNLOCKED 0x00010000
#define REG_DSI_PHY_RESET 0x00000128
+#define DSI_PHY_RESET_RESET 0x00000001
+
+#define REG_DSI_RDBK_DATA_CTRL 0x000001d0
+#define DSI_RDBK_DATA_CTRL_COUNT__MASK 0x00ff0000
+#define DSI_RDBK_DATA_CTRL_COUNT__SHIFT 16
+static inline uint32_t DSI_RDBK_DATA_CTRL_COUNT(uint32_t val)
+{
+ return ((val) << DSI_RDBK_DATA_CTRL_COUNT__SHIFT) & DSI_RDBK_DATA_CTRL_COUNT__MASK;
+}
+#define DSI_RDBK_DATA_CTRL_CLR 0x00000001
+
+#define REG_DSI_VERSION 0x000001f0
+#define DSI_VERSION_MAJOR__MASK 0xff000000
+#define DSI_VERSION_MAJOR__SHIFT 24
+static inline uint32_t DSI_VERSION_MAJOR(uint32_t val)
+{
+ return ((val) << DSI_VERSION_MAJOR__SHIFT) & DSI_VERSION_MAJOR__MASK;
+}
#define REG_DSI_PHY_PLL_CTRL_0 0x00000200
#define DSI_PHY_PLL_CTRL_0_ENABLE 0x00000001
@@ -501,5 +656,184 @@
#define REG_DSI_8960_PHY_CAL_STATUS 0x00000550
#define DSI_8960_PHY_CAL_STATUS_CAL_BUSY 0x00000010
+static inline uint32_t REG_DSI_28nm_PHY_LN(uint32_t i0) { return 0x00000000 + 0x40*i0; }
+
+static inline uint32_t REG_DSI_28nm_PHY_LN_CFG_0(uint32_t i0) { return 0x00000000 + 0x40*i0; }
+
+static inline uint32_t REG_DSI_28nm_PHY_LN_CFG_1(uint32_t i0) { return 0x00000004 + 0x40*i0; }
+
+static inline uint32_t REG_DSI_28nm_PHY_LN_CFG_2(uint32_t i0) { return 0x00000008 + 0x40*i0; }
+
+static inline uint32_t REG_DSI_28nm_PHY_LN_CFG_3(uint32_t i0) { return 0x0000000c + 0x40*i0; }
+
+static inline uint32_t REG_DSI_28nm_PHY_LN_CFG_4(uint32_t i0) { return 0x00000010 + 0x40*i0; }
+
+static inline uint32_t REG_DSI_28nm_PHY_LN_TEST_DATAPATH(uint32_t i0) { return 0x00000014 + 0x40*i0; }
+
+static inline uint32_t REG_DSI_28nm_PHY_LN_DEBUG_SEL(uint32_t i0) { return 0x00000018 + 0x40*i0; }
+
+static inline uint32_t REG_DSI_28nm_PHY_LN_TEST_STR_0(uint32_t i0) { return 0x0000001c + 0x40*i0; }
+
+static inline uint32_t REG_DSI_28nm_PHY_LN_TEST_STR_1(uint32_t i0) { return 0x00000020 + 0x40*i0; }
+
+#define REG_DSI_28nm_PHY_LNCK_CFG_0 0x00000100
+
+#define REG_DSI_28nm_PHY_LNCK_CFG_1 0x00000104
+
+#define REG_DSI_28nm_PHY_LNCK_CFG_2 0x00000108
+
+#define REG_DSI_28nm_PHY_LNCK_CFG_3 0x0000010c
+
+#define REG_DSI_28nm_PHY_LNCK_CFG_4 0x00000110
+
+#define REG_DSI_28nm_PHY_LNCK_TEST_DATAPATH 0x00000114
+
+#define REG_DSI_28nm_PHY_LNCK_DEBUG_SEL 0x00000118
+
+#define REG_DSI_28nm_PHY_LNCK_TEST_STR0 0x0000011c
+
+#define REG_DSI_28nm_PHY_LNCK_TEST_STR1 0x00000120
+
+#define REG_DSI_28nm_PHY_TIMING_CTRL_0 0x00000140
+#define DSI_28nm_PHY_TIMING_CTRL_0_CLK_ZERO__MASK 0x000000ff
+#define DSI_28nm_PHY_TIMING_CTRL_0_CLK_ZERO__SHIFT 0
+static inline uint32_t DSI_28nm_PHY_TIMING_CTRL_0_CLK_ZERO(uint32_t val)
+{
+ return ((val) << DSI_28nm_PHY_TIMING_CTRL_0_CLK_ZERO__SHIFT) & DSI_28nm_PHY_TIMING_CTRL_0_CLK_ZERO__MASK;
+}
+
+#define REG_DSI_28nm_PHY_TIMING_CTRL_1 0x00000144
+#define DSI_28nm_PHY_TIMING_CTRL_1_CLK_TRAIL__MASK 0x000000ff
+#define DSI_28nm_PHY_TIMING_CTRL_1_CLK_TRAIL__SHIFT 0
+static inline uint32_t DSI_28nm_PHY_TIMING_CTRL_1_CLK_TRAIL(uint32_t val)
+{
+ return ((val) << DSI_28nm_PHY_TIMING_CTRL_1_CLK_TRAIL__SHIFT) & DSI_28nm_PHY_TIMING_CTRL_1_CLK_TRAIL__MASK;
+}
+
+#define REG_DSI_28nm_PHY_TIMING_CTRL_2 0x00000148
+#define DSI_28nm_PHY_TIMING_CTRL_2_CLK_PREPARE__MASK 0x000000ff
+#define DSI_28nm_PHY_TIMING_CTRL_2_CLK_PREPARE__SHIFT 0
+static inline uint32_t DSI_28nm_PHY_TIMING_CTRL_2_CLK_PREPARE(uint32_t val)
+{
+ return ((val) << DSI_28nm_PHY_TIMING_CTRL_2_CLK_PREPARE__SHIFT) & DSI_28nm_PHY_TIMING_CTRL_2_CLK_PREPARE__MASK;
+}
+
+#define REG_DSI_28nm_PHY_TIMING_CTRL_3 0x0000014c
+#define DSI_28nm_PHY_TIMING_CTRL_3_CLK_ZERO_8 0x00000001
+
+#define REG_DSI_28nm_PHY_TIMING_CTRL_4 0x00000150
+#define DSI_28nm_PHY_TIMING_CTRL_4_HS_EXIT__MASK 0x000000ff
+#define DSI_28nm_PHY_TIMING_CTRL_4_HS_EXIT__SHIFT 0
+static inline uint32_t DSI_28nm_PHY_TIMING_CTRL_4_HS_EXIT(uint32_t val)
+{
+ return ((val) << DSI_28nm_PHY_TIMING_CTRL_4_HS_EXIT__SHIFT) & DSI_28nm_PHY_TIMING_CTRL_4_HS_EXIT__MASK;
+}
+
+#define REG_DSI_28nm_PHY_TIMING_CTRL_5 0x00000154
+#define DSI_28nm_PHY_TIMING_CTRL_5_HS_ZERO__MASK 0x000000ff
+#define DSI_28nm_PHY_TIMING_CTRL_5_HS_ZERO__SHIFT 0
+static inline uint32_t DSI_28nm_PHY_TIMING_CTRL_5_HS_ZERO(uint32_t val)
+{
+ return ((val) << DSI_28nm_PHY_TIMING_CTRL_5_HS_ZERO__SHIFT) & DSI_28nm_PHY_TIMING_CTRL_5_HS_ZERO__MASK;
+}
+
+#define REG_DSI_28nm_PHY_TIMING_CTRL_6 0x00000158
+#define DSI_28nm_PHY_TIMING_CTRL_6_HS_PREPARE__MASK 0x000000ff
+#define DSI_28nm_PHY_TIMING_CTRL_6_HS_PREPARE__SHIFT 0
+static inline uint32_t DSI_28nm_PHY_TIMING_CTRL_6_HS_PREPARE(uint32_t val)
+{
+ return ((val) << DSI_28nm_PHY_TIMING_CTRL_6_HS_PREPARE__SHIFT) & DSI_28nm_PHY_TIMING_CTRL_6_HS_PREPARE__MASK;
+}
+
+#define REG_DSI_28nm_PHY_TIMING_CTRL_7 0x0000015c
+#define DSI_28nm_PHY_TIMING_CTRL_7_HS_TRAIL__MASK 0x000000ff
+#define DSI_28nm_PHY_TIMING_CTRL_7_HS_TRAIL__SHIFT 0
+static inline uint32_t DSI_28nm_PHY_TIMING_CTRL_7_HS_TRAIL(uint32_t val)
+{
+ return ((val) << DSI_28nm_PHY_TIMING_CTRL_7_HS_TRAIL__SHIFT) & DSI_28nm_PHY_TIMING_CTRL_7_HS_TRAIL__MASK;
+}
+
+#define REG_DSI_28nm_PHY_TIMING_CTRL_8 0x00000160
+#define DSI_28nm_PHY_TIMING_CTRL_8_HS_RQST__MASK 0x000000ff
+#define DSI_28nm_PHY_TIMING_CTRL_8_HS_RQST__SHIFT 0
+static inline uint32_t DSI_28nm_PHY_TIMING_CTRL_8_HS_RQST(uint32_t val)
+{
+ return ((val) << DSI_28nm_PHY_TIMING_CTRL_8_HS_RQST__SHIFT) & DSI_28nm_PHY_TIMING_CTRL_8_HS_RQST__MASK;
+}
+
+#define REG_DSI_28nm_PHY_TIMING_CTRL_9 0x00000164
+#define DSI_28nm_PHY_TIMING_CTRL_9_TA_GO__MASK 0x00000007
+#define DSI_28nm_PHY_TIMING_CTRL_9_TA_GO__SHIFT 0
+static inline uint32_t DSI_28nm_PHY_TIMING_CTRL_9_TA_GO(uint32_t val)
+{
+ return ((val) << DSI_28nm_PHY_TIMING_CTRL_9_TA_GO__SHIFT) & DSI_28nm_PHY_TIMING_CTRL_9_TA_GO__MASK;
+}
+#define DSI_28nm_PHY_TIMING_CTRL_9_TA_SURE__MASK 0x00000070
+#define DSI_28nm_PHY_TIMING_CTRL_9_TA_SURE__SHIFT 4
+static inline uint32_t DSI_28nm_PHY_TIMING_CTRL_9_TA_SURE(uint32_t val)
+{
+ return ((val) << DSI_28nm_PHY_TIMING_CTRL_9_TA_SURE__SHIFT) & DSI_28nm_PHY_TIMING_CTRL_9_TA_SURE__MASK;
+}
+
+#define REG_DSI_28nm_PHY_TIMING_CTRL_10 0x00000168
+#define DSI_28nm_PHY_TIMING_CTRL_10_TA_GET__MASK 0x00000007
+#define DSI_28nm_PHY_TIMING_CTRL_10_TA_GET__SHIFT 0
+static inline uint32_t DSI_28nm_PHY_TIMING_CTRL_10_TA_GET(uint32_t val)
+{
+ return ((val) << DSI_28nm_PHY_TIMING_CTRL_10_TA_GET__SHIFT) & DSI_28nm_PHY_TIMING_CTRL_10_TA_GET__MASK;
+}
+
+#define REG_DSI_28nm_PHY_TIMING_CTRL_11 0x0000016c
+#define DSI_28nm_PHY_TIMING_CTRL_11_TRIG3_CMD__MASK 0x000000ff
+#define DSI_28nm_PHY_TIMING_CTRL_11_TRIG3_CMD__SHIFT 0
+static inline uint32_t DSI_28nm_PHY_TIMING_CTRL_11_TRIG3_CMD(uint32_t val)
+{
+ return ((val) << DSI_28nm_PHY_TIMING_CTRL_11_TRIG3_CMD__SHIFT) & DSI_28nm_PHY_TIMING_CTRL_11_TRIG3_CMD__MASK;
+}
+
+#define REG_DSI_28nm_PHY_CTRL_0 0x00000170
+
+#define REG_DSI_28nm_PHY_CTRL_1 0x00000174
+
+#define REG_DSI_28nm_PHY_CTRL_2 0x00000178
+
+#define REG_DSI_28nm_PHY_CTRL_3 0x0000017c
+
+#define REG_DSI_28nm_PHY_CTRL_4 0x00000180
+
+#define REG_DSI_28nm_PHY_STRENGTH_0 0x00000184
+
+#define REG_DSI_28nm_PHY_STRENGTH_1 0x00000188
+
+#define REG_DSI_28nm_PHY_BIST_CTRL_0 0x000001b4
+
+#define REG_DSI_28nm_PHY_BIST_CTRL_1 0x000001b8
+
+#define REG_DSI_28nm_PHY_BIST_CTRL_2 0x000001bc
+
+#define REG_DSI_28nm_PHY_BIST_CTRL_3 0x000001c0
+
+#define REG_DSI_28nm_PHY_BIST_CTRL_4 0x000001c4
+
+#define REG_DSI_28nm_PHY_BIST_CTRL_5 0x000001c8
+
+#define REG_DSI_28nm_PHY_GLBL_TEST_CTRL 0x000001d4
+
+#define REG_DSI_28nm_PHY_LDO_CNTRL 0x000001dc
+
+#define REG_DSI_28nm_PHY_REGULATOR_CTRL_0 0x00000000
+
+#define REG_DSI_28nm_PHY_REGULATOR_CTRL_1 0x00000004
+
+#define REG_DSI_28nm_PHY_REGULATOR_CTRL_2 0x00000008
+
+#define REG_DSI_28nm_PHY_REGULATOR_CTRL_3 0x0000000c
+
+#define REG_DSI_28nm_PHY_REGULATOR_CTRL_4 0x00000010
+
+#define REG_DSI_28nm_PHY_REGULATOR_CTRL_5 0x00000014
+
+#define REG_DSI_28nm_PHY_REGULATOR_CAL_PWR_CFG 0x00000018
+
#endif /* DSI_XML */
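The generated header above follows the envytools convention of a __MASK/__SHIFT pair plus an inline pack helper per register field: values are composed by OR-ing packed fields and flag bits, and decoded by masking and shifting back (the host driver below does the decode with a FIELD() macro, as in dsi_get_version()). A standalone sketch of the pattern with a made-up register field:

/*
 * Standalone sketch of the __MASK/__SHIFT + pack-helper pattern.
 * DSI_EXAMPLE_LANES is invented for illustration; real fields look the
 * same. Packing masks after shifting, so an oversized value cannot spill
 * into adjacent bits.
 */
#include <stdint.h>
#include <stdio.h>

#define DSI_EXAMPLE_LANES__MASK		0x00000030
#define DSI_EXAMPLE_LANES__SHIFT	4
static inline uint32_t DSI_EXAMPLE_LANES(uint32_t val)
{
	return (val << DSI_EXAMPLE_LANES__SHIFT) & DSI_EXAMPLE_LANES__MASK;
}

#define EXAMPLE_FIELD(val, name) (((val) & name##__MASK) >> name##__SHIFT)

int main(void)
{
	uint32_t reg = DSI_EXAMPLE_LANES(3) | 0x1;	/* field + enable bit */

	printf("lanes = %u\n", (unsigned)EXAMPLE_FIELD(reg, DSI_EXAMPLE_LANES));
	return 0;
}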
diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c b/drivers/gpu/drm/msm/dsi/dsi_host.c
new file mode 100644
index 0000000..956b224
--- /dev/null
+++ b/drivers/gpu/drm/msm/dsi/dsi_host.c
@@ -0,0 +1,1993 @@
+/*
+ * Copyright (c) 2015, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/clk.h>
+#include <linux/delay.h>
+#include <linux/err.h>
+#include <linux/gpio.h>
+#include <linux/interrupt.h>
+#include <linux/of_device.h>
+#include <linux/of_gpio.h>
+#include <linux/of_irq.h>
+#include <linux/regulator/consumer.h>
+#include <linux/spinlock.h>
+#include <video/mipi_display.h>
+
+#include "dsi.h"
+#include "dsi.xml.h"
+
+#define MSM_DSI_VER_MAJOR_V2 0x02
+#define MSM_DSI_VER_MAJOR_6G 0x03
+#define MSM_DSI_6G_VER_MINOR_V1_0 0x10000000
+#define MSM_DSI_6G_VER_MINOR_V1_1 0x10010000
+#define MSM_DSI_6G_VER_MINOR_V1_1_1 0x10010001
+#define MSM_DSI_6G_VER_MINOR_V1_2 0x10020000
+#define MSM_DSI_6G_VER_MINOR_V1_3_1 0x10030001
+
+#define DSI_6G_REG_SHIFT 4
+
+#define DSI_REGULATOR_MAX 8
+struct dsi_reg_entry {
+ char name[32];
+ int min_voltage;
+ int max_voltage;
+ int enable_load;
+ int disable_load;
+};
+
+struct dsi_reg_config {
+ int num;
+ struct dsi_reg_entry regs[DSI_REGULATOR_MAX];
+};
+
+struct dsi_config {
+ u32 major;
+ u32 minor;
+ u32 io_offset;
+ enum msm_dsi_phy_type phy_type;
+ struct dsi_reg_config reg_cfg;
+};
+
+static const struct dsi_config dsi_cfgs[] = {
+ {MSM_DSI_VER_MAJOR_V2, 0, 0, MSM_DSI_PHY_UNKNOWN},
+ { /* 8974 v1 */
+ .major = MSM_DSI_VER_MAJOR_6G,
+ .minor = MSM_DSI_6G_VER_MINOR_V1_0,
+ .io_offset = DSI_6G_REG_SHIFT,
+ .phy_type = MSM_DSI_PHY_28NM,
+ .reg_cfg = {
+ .num = 4,
+ .regs = {
+ {"gdsc", -1, -1, -1, -1},
+ {"vdd", 3000000, 3000000, 150000, 100},
+ {"vdda", 1200000, 1200000, 100000, 100},
+ {"vddio", 1800000, 1800000, 100000, 100},
+ },
+ },
+ },
+ { /* 8974 v2 */
+ .major = MSM_DSI_VER_MAJOR_6G,
+ .minor = MSM_DSI_6G_VER_MINOR_V1_1,
+ .io_offset = DSI_6G_REG_SHIFT,
+ .phy_type = MSM_DSI_PHY_28NM,
+ .reg_cfg = {
+ .num = 4,
+ .regs = {
+ {"gdsc", -1, -1, -1, -1},
+ {"vdd", 3000000, 3000000, 150000, 100},
+ {"vdda", 1200000, 1200000, 100000, 100},
+ {"vddio", 1800000, 1800000, 100000, 100},
+ },
+ },
+ },
+ { /* 8974 v3 */
+ .major = MSM_DSI_VER_MAJOR_6G,
+ .minor = MSM_DSI_6G_VER_MINOR_V1_1_1,
+ .io_offset = DSI_6G_REG_SHIFT,
+ .phy_type = MSM_DSI_PHY_28NM,
+ .reg_cfg = {
+ .num = 4,
+ .regs = {
+ {"gdsc", -1, -1, -1, -1},
+ {"vdd", 3000000, 3000000, 150000, 100},
+ {"vdda", 1200000, 1200000, 100000, 100},
+ {"vddio", 1800000, 1800000, 100000, 100},
+ },
+ },
+ },
+ { /* 8084 */
+ .major = MSM_DSI_VER_MAJOR_6G,
+ .minor = MSM_DSI_6G_VER_MINOR_V1_2,
+ .io_offset = DSI_6G_REG_SHIFT,
+ .phy_type = MSM_DSI_PHY_28NM,
+ .reg_cfg = {
+ .num = 4,
+ .regs = {
+ {"gdsc", -1, -1, -1, -1},
+ {"vdd", 3000000, 3000000, 150000, 100},
+ {"vdda", 1200000, 1200000, 100000, 100},
+ {"vddio", 1800000, 1800000, 100000, 100},
+ },
+ },
+ },
+ { /* 8916 */
+ .major = MSM_DSI_VER_MAJOR_6G,
+ .minor = MSM_DSI_6G_VER_MINOR_V1_3_1,
+ .io_offset = DSI_6G_REG_SHIFT,
+ .phy_type = MSM_DSI_PHY_28NM,
+ .reg_cfg = {
+ .num = 4,
+ .regs = {
+ {"gdsc", -1, -1, -1, -1},
+ {"vdd", 2850000, 2850000, 100000, 100},
+ {"vdda", 1200000, 1200000, 100000, 100},
+ {"vddio", 1800000, 1800000, 100000, 100},
+ },
+ },
+ },
+};
+
+static int dsi_get_version(const void __iomem *base, u32 *major, u32 *minor)
+{
+ u32 ver;
+ u32 ver_6g;
+
+ if (!major || !minor)
+ return -EINVAL;
+
+ /* Since DSI6G(v3), a 6G_HW_VERSION register has been added at offset 0,
+ * which shifts all other registers down by 4 bytes.
+ */
+ ver_6g = msm_readl(base + REG_DSI_6G_HW_VERSION);
+ if (ver_6g == 0) {
+ ver = msm_readl(base + REG_DSI_VERSION);
+ ver = FIELD(ver, DSI_VERSION_MAJOR);
+ if (ver <= MSM_DSI_VER_MAJOR_V2) {
+ /* old versions */
+ *major = ver;
+ *minor = 0;
+ return 0;
+ } else {
+ return -EINVAL;
+ }
+ } else {
+ ver = msm_readl(base + DSI_6G_REG_SHIFT + REG_DSI_VERSION);
+ ver = FIELD(ver, DSI_VERSION_MAJOR);
+ if (ver == MSM_DSI_VER_MAJOR_6G) {
+ /* 6G version */
+ *major = ver;
+ *minor = ver_6g;
+ return 0;
+ } else {
+ return -EINVAL;
+ }
+ }
+}
+
+#define DSI_ERR_STATE_ACK 0x0000
+#define DSI_ERR_STATE_TIMEOUT 0x0001
+#define DSI_ERR_STATE_DLN0_PHY 0x0002
+#define DSI_ERR_STATE_FIFO 0x0004
+#define DSI_ERR_STATE_MDP_FIFO_UNDERFLOW 0x0008
+#define DSI_ERR_STATE_INTERLEAVE_OP_CONTENTION 0x0010
+#define DSI_ERR_STATE_PLL_UNLOCKED 0x0020
+
+#define DSI_CLK_CTRL_ENABLE_CLKS \
+ (DSI_CLK_CTRL_AHBS_HCLK_ON | DSI_CLK_CTRL_AHBM_SCLK_ON | \
+ DSI_CLK_CTRL_PCLK_ON | DSI_CLK_CTRL_DSICLK_ON | \
+ DSI_CLK_CTRL_BYTECLK_ON | DSI_CLK_CTRL_ESCCLK_ON | \
+ DSI_CLK_CTRL_FORCE_ON_DYN_AHBM_HCLK)
+
+struct msm_dsi_host {
+ struct mipi_dsi_host base;
+
+ struct platform_device *pdev;
+ struct drm_device *dev;
+
+ int id;
+
+ void __iomem *ctrl_base;
+ struct regulator_bulk_data supplies[DSI_REGULATOR_MAX];
+ struct clk *mdp_core_clk;
+ struct clk *ahb_clk;
+ struct clk *axi_clk;
+ struct clk *mmss_misc_ahb_clk;
+ struct clk *byte_clk;
+ struct clk *esc_clk;
+ struct clk *pixel_clk;
+ u32 byte_clk_rate;
+
+ struct gpio_desc *disp_en_gpio;
+ struct gpio_desc *te_gpio;
+
+ const struct dsi_config *cfg;
+
+ struct completion dma_comp;
+ struct completion video_comp;
+ struct mutex dev_mutex;
+ struct mutex cmd_mutex;
+ struct mutex clk_mutex;
+ spinlock_t intr_lock; /* Protect interrupt ctrl register */
+
+ u32 err_work_state;
+ struct work_struct err_work;
+ struct workqueue_struct *workqueue;
+
+ struct drm_gem_object *tx_gem_obj;
+ u8 *rx_buf;
+
+ struct drm_display_mode *mode;
+
+ /* Panel info */
+ struct device_node *panel_node;
+ unsigned int channel;
+ unsigned int lanes;
+ enum mipi_dsi_pixel_format format;
+ unsigned long mode_flags;
+
+ u32 dma_cmd_ctrl_restore;
+
+ bool registered;
+ bool power_on;
+ int irq;
+};
+
+static u32 dsi_get_bpp(const enum mipi_dsi_pixel_format fmt)
+{
+ switch (fmt) {
+ case MIPI_DSI_FMT_RGB565: return 16;
+ case MIPI_DSI_FMT_RGB666_PACKED: return 18;
+ case MIPI_DSI_FMT_RGB666:
+ case MIPI_DSI_FMT_RGB888:
+ default: return 24;
+ }
+}
+
+static inline u32 dsi_read(struct msm_dsi_host *msm_host, u32 reg)
+{
+ return msm_readl(msm_host->ctrl_base + msm_host->cfg->io_offset + reg);
+}
+static inline void dsi_write(struct msm_dsi_host *msm_host, u32 reg, u32 data)
+{
+ msm_writel(data, msm_host->ctrl_base + msm_host->cfg->io_offset + reg);
+}
+
+static int dsi_host_regulator_enable(struct msm_dsi_host *msm_host);
+static void dsi_host_regulator_disable(struct msm_dsi_host *msm_host);
+
+static const struct dsi_config *dsi_get_config(struct msm_dsi_host *msm_host)
+{
+ const struct dsi_config *cfg;
+ struct regulator *gdsc_reg;
+ int i, ret;
+ u32 major = 0, minor = 0;
+
+ gdsc_reg = regulator_get(&msm_host->pdev->dev, "gdsc");
+ if (IS_ERR_OR_NULL(gdsc_reg)) {
+ pr_err("%s: cannot get gdsc\n", __func__);
+ goto fail;
+ }
+ ret = regulator_enable(gdsc_reg);
+ if (ret) {
+ pr_err("%s: unable to enable gdsc\n", __func__);
+ regulator_put(gdsc_reg);
+ goto fail;
+ }
+ ret = clk_prepare_enable(msm_host->ahb_clk);
+ if (ret) {
+ pr_err("%s: unable to enable ahb_clk\n", __func__);
+ regulator_disable(gdsc_reg);
+ regulator_put(gdsc_reg);
+ goto fail;
+ }
+
+ ret = dsi_get_version(msm_host->ctrl_base, &major, &minor);
+
+ clk_disable_unprepare(msm_host->ahb_clk);
+ regulator_disable(gdsc_reg);
+ regulator_put(gdsc_reg);
+ if (ret) {
+ pr_err("%s: Invalid version\n", __func__);
+ goto fail;
+ }
+
+ for (i = 0; i < ARRAY_SIZE(dsi_cfgs); i++) {
+ cfg = dsi_cfgs + i;
+ if ((cfg->major == major) && (cfg->minor == minor))
+ return cfg;
+ }
+ pr_err("%s: Version %x:%x not support\n", __func__, major, minor);
+
+fail:
+ return NULL;
+}
+
+static inline struct msm_dsi_host *to_msm_dsi_host(struct mipi_dsi_host *host)
+{
+ return container_of(host, struct msm_dsi_host, base);
+}
+
+static void dsi_host_regulator_disable(struct msm_dsi_host *msm_host)
+{
+ struct regulator_bulk_data *s = msm_host->supplies;
+ const struct dsi_reg_entry *regs = msm_host->cfg->reg_cfg.regs;
+ int num = msm_host->cfg->reg_cfg.num;
+ int i;
+
+ DBG("");
+ for (i = num - 1; i >= 0; i--)
+ if (regs[i].disable_load >= 0)
+ regulator_set_load(s[i].consumer,
+ regs[i].disable_load);
+
+ regulator_bulk_disable(num, s);
+}
+
+static int dsi_host_regulator_enable(struct msm_dsi_host *msm_host)
+{
+ struct regulator_bulk_data *s = msm_host->supplies;
+ const struct dsi_reg_entry *regs = msm_host->cfg->reg_cfg.regs;
+ int num = msm_host->cfg->reg_cfg.num;
+ int ret, i;
+
+ DBG("");
+ for (i = 0; i < num; i++) {
+ if (regs[i].enable_load >= 0) {
+ ret = regulator_set_load(s[i].consumer,
+ regs[i].enable_load);
+ if (ret < 0) {
+ pr_err("regulator %d set op mode failed, %d\n",
+ i, ret);
+ goto fail;
+ }
+ }
+ }
+
+ ret = regulator_bulk_enable(num, s);
+ if (ret < 0) {
+ pr_err("regulator enable failed, %d\n", ret);
+ goto fail;
+ }
+
+ return 0;
+
+fail:
+ for (i--; i >= 0; i--)
+ regulator_set_load(s[i].consumer, regs[i].disable_load);
+ return ret;
+}
+
+static int dsi_regulator_init(struct msm_dsi_host *msm_host)
+{
+ struct regulator_bulk_data *s = msm_host->supplies;
+ const struct dsi_reg_entry *regs = msm_host->cfg->reg_cfg.regs;
+ int num = msm_host->cfg->reg_cfg.num;
+ int i, ret;
+
+ for (i = 0; i < num; i++)
+ s[i].supply = regs[i].name;
+
+ ret = devm_regulator_bulk_get(&msm_host->pdev->dev, num, s);
+ if (ret < 0) {
+ pr_err("%s: failed to init regulator, ret=%d\n",
+ __func__, ret);
+ return ret;
+ }
+
+ for (i = 0; i < num; i++) {
+ if ((regs[i].min_voltage >= 0) && (regs[i].max_voltage >= 0)) {
+ ret = regulator_set_voltage(s[i].consumer,
+ regs[i].min_voltage, regs[i].max_voltage);
+ if (ret < 0) {
+ pr_err("regulator %d set voltage failed, %d\n",
+ i, ret);
+ return ret;
+ }
+ }
+ }
+
+ return 0;
+}
+
+static int dsi_clk_init(struct msm_dsi_host *msm_host)
+{
+ struct device *dev = &msm_host->pdev->dev;
+ int ret = 0;
+
+ msm_host->mdp_core_clk = devm_clk_get(dev, "mdp_core_clk");
+ if (IS_ERR(msm_host->mdp_core_clk)) {
+ ret = PTR_ERR(msm_host->mdp_core_clk);
+ pr_err("%s: Unable to get mdp core clk. ret=%d\n",
+ __func__, ret);
+ goto exit;
+ }
+
+ msm_host->ahb_clk = devm_clk_get(dev, "iface_clk");
+ if (IS_ERR(msm_host->ahb_clk)) {
+ ret = PTR_ERR(msm_host->ahb_clk);
+ pr_err("%s: Unable to get mdss ahb clk. ret=%d\n",
+ __func__, ret);
+ goto exit;
+ }
+
+ msm_host->axi_clk = devm_clk_get(dev, "bus_clk");
+ if (IS_ERR(msm_host->axi_clk)) {
+ ret = PTR_ERR(msm_host->axi_clk);
+ pr_err("%s: Unable to get axi bus clk. ret=%d\n",
+ __func__, ret);
+ goto exit;
+ }
+
+ msm_host->mmss_misc_ahb_clk = devm_clk_get(dev, "core_mmss_clk");
+ if (IS_ERR(msm_host->mmss_misc_ahb_clk)) {
+ ret = PTR_ERR(msm_host->mmss_misc_ahb_clk);
+ pr_err("%s: Unable to get mmss misc ahb clk. ret=%d\n",
+ __func__, ret);
+ goto exit;
+ }
+
+ msm_host->byte_clk = devm_clk_get(dev, "byte_clk");
+ if (IS_ERR(msm_host->byte_clk)) {
+ ret = PTR_ERR(msm_host->byte_clk);
+ pr_err("%s: can't find dsi_byte_clk. ret=%d\n",
+ __func__, ret);
+ msm_host->byte_clk = NULL;
+ goto exit;
+ }
+
+ msm_host->pixel_clk = devm_clk_get(dev, "pixel_clk");
+ if (IS_ERR(msm_host->pixel_clk)) {
+ ret = PTR_ERR(msm_host->pixel_clk);
+ pr_err("%s: can't find dsi_pixel_clk. ret=%d\n",
+ __func__, ret);
+ msm_host->pixel_clk = NULL;
+ goto exit;
+ }
+
+ msm_host->esc_clk = devm_clk_get(dev, "core_clk");
+ if (IS_ERR(msm_host->esc_clk)) {
+ ret = PTR_ERR(msm_host->esc_clk);
+ pr_err("%s: can't find dsi_esc_clk. ret=%d\n",
+ __func__, ret);
+ msm_host->esc_clk = NULL;
+ goto exit;
+ }
+
+exit:
+ return ret;
+}
+
+static int dsi_bus_clk_enable(struct msm_dsi_host *msm_host)
+{
+ int ret;
+
+ DBG("id=%d", msm_host->id);
+
+ ret = clk_prepare_enable(msm_host->mdp_core_clk);
+ if (ret) {
+ pr_err("%s: failed to enable mdp_core_clock, %d\n",
+ __func__, ret);
+ goto core_clk_err;
+ }
+
+ ret = clk_prepare_enable(msm_host->ahb_clk);
+ if (ret) {
+ pr_err("%s: failed to enable ahb clock, %d\n", __func__, ret);
+ goto ahb_clk_err;
+ }
+
+ ret = clk_prepare_enable(msm_host->axi_clk);
+ if (ret) {
+		pr_err("%s: failed to enable axi clock, %d\n", __func__, ret);
+ goto axi_clk_err;
+ }
+
+ ret = clk_prepare_enable(msm_host->mmss_misc_ahb_clk);
+ if (ret) {
+ pr_err("%s: failed to enable mmss misc ahb clk, %d\n",
+ __func__, ret);
+ goto misc_ahb_clk_err;
+ }
+
+ return 0;
+
+misc_ahb_clk_err:
+ clk_disable_unprepare(msm_host->axi_clk);
+axi_clk_err:
+ clk_disable_unprepare(msm_host->ahb_clk);
+ahb_clk_err:
+ clk_disable_unprepare(msm_host->mdp_core_clk);
+core_clk_err:
+ return ret;
+}
+
+static void dsi_bus_clk_disable(struct msm_dsi_host *msm_host)
+{
+ DBG("");
+ clk_disable_unprepare(msm_host->mmss_misc_ahb_clk);
+ clk_disable_unprepare(msm_host->axi_clk);
+ clk_disable_unprepare(msm_host->ahb_clk);
+ clk_disable_unprepare(msm_host->mdp_core_clk);
+}
+
+static int dsi_link_clk_enable(struct msm_dsi_host *msm_host)
+{
+ int ret;
+
+ DBG("Set clk rates: pclk=%d, byteclk=%d",
+ msm_host->mode->clock, msm_host->byte_clk_rate);
+
+ ret = clk_set_rate(msm_host->byte_clk, msm_host->byte_clk_rate);
+ if (ret) {
+ pr_err("%s: Failed to set rate byte clk, %d\n", __func__, ret);
+ goto error;
+ }
+
+ ret = clk_set_rate(msm_host->pixel_clk, msm_host->mode->clock * 1000);
+ if (ret) {
+ pr_err("%s: Failed to set rate pixel clk, %d\n", __func__, ret);
+ goto error;
+ }
+
+ ret = clk_prepare_enable(msm_host->esc_clk);
+ if (ret) {
+ pr_err("%s: Failed to enable dsi esc clk\n", __func__);
+ goto error;
+ }
+
+ ret = clk_prepare_enable(msm_host->byte_clk);
+ if (ret) {
+ pr_err("%s: Failed to enable dsi byte clk\n", __func__);
+ goto byte_clk_err;
+ }
+
+ ret = clk_prepare_enable(msm_host->pixel_clk);
+ if (ret) {
+ pr_err("%s: Failed to enable dsi pixel clk\n", __func__);
+ goto pixel_clk_err;
+ }
+
+ return 0;
+
+pixel_clk_err:
+ clk_disable_unprepare(msm_host->byte_clk);
+byte_clk_err:
+ clk_disable_unprepare(msm_host->esc_clk);
+error:
+ return ret;
+}
+
+static void dsi_link_clk_disable(struct msm_dsi_host *msm_host)
+{
+ clk_disable_unprepare(msm_host->esc_clk);
+ clk_disable_unprepare(msm_host->pixel_clk);
+ clk_disable_unprepare(msm_host->byte_clk);
+}
+
+static int dsi_clk_ctrl(struct msm_dsi_host *msm_host, bool enable)
+{
+ int ret = 0;
+
+ mutex_lock(&msm_host->clk_mutex);
+ if (enable) {
+ ret = dsi_bus_clk_enable(msm_host);
+ if (ret) {
+ pr_err("%s: Can not enable bus clk, %d\n",
+ __func__, ret);
+ goto unlock_ret;
+ }
+ ret = dsi_link_clk_enable(msm_host);
+ if (ret) {
+ pr_err("%s: Can not enable link clk, %d\n",
+ __func__, ret);
+ dsi_bus_clk_disable(msm_host);
+ goto unlock_ret;
+ }
+ } else {
+ dsi_link_clk_disable(msm_host);
+ dsi_bus_clk_disable(msm_host);
+ }
+
+unlock_ret:
+ mutex_unlock(&msm_host->clk_mutex);
+ return ret;
+}
+
+static int dsi_calc_clk_rate(struct msm_dsi_host *msm_host)
+{
+ struct drm_display_mode *mode = msm_host->mode;
+ u8 lanes = msm_host->lanes;
+ u32 bpp = dsi_get_bpp(msm_host->format);
+ u32 pclk_rate;
+
+ if (!mode) {
+ pr_err("%s: mode not set\n", __func__);
+ return -EINVAL;
+ }
+
+ pclk_rate = mode->clock * 1000;
+ if (lanes > 0) {
+ msm_host->byte_clk_rate = (pclk_rate * bpp) / (8 * lanes);
+ } else {
+ pr_err("%s: forcing mdss_dsi lanes to 1\n", __func__);
+ msm_host->byte_clk_rate = (pclk_rate * bpp) / 8;
+ }
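+	/*
+	 * For example, a 148500 kHz pixel clock (mode->clock) at 24 bpp over
+	 * 4 lanes gives a byte clock of 148500000 * 24 / (8 * 4), i.e.
+	 * 111.375 MHz.
+	 */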
+
+ DBG("pclk=%d, bclk=%d", pclk_rate, msm_host->byte_clk_rate);
+
+ return 0;
+}
+
+static void dsi_phy_sw_reset(struct msm_dsi_host *msm_host)
+{
+ DBG("");
+ dsi_write(msm_host, REG_DSI_PHY_RESET, DSI_PHY_RESET_RESET);
+ /* Make sure fully reset */
+ wmb();
+ udelay(1000);
+ dsi_write(msm_host, REG_DSI_PHY_RESET, 0);
+ udelay(100);
+}
+
+static void dsi_intr_ctrl(struct msm_dsi_host *msm_host, u32 mask, int enable)
+{
+ u32 intr;
+ unsigned long flags;
+
+ spin_lock_irqsave(&msm_host->intr_lock, flags);
+ intr = dsi_read(msm_host, REG_DSI_INTR_CTRL);
+
+ if (enable)
+ intr |= mask;
+ else
+ intr &= ~mask;
+
+ DBG("intr=%x enable=%d", intr, enable);
+
+ dsi_write(msm_host, REG_DSI_INTR_CTRL, intr);
+ spin_unlock_irqrestore(&msm_host->intr_lock, flags);
+}
+
+static inline enum dsi_traffic_mode dsi_get_traffic_mode(const u32 mode_flags)
+{
+ if (mode_flags & MIPI_DSI_MODE_VIDEO_BURST)
+ return BURST_MODE;
+ else if (mode_flags & MIPI_DSI_MODE_VIDEO_SYNC_PULSE)
+ return NON_BURST_SYNCH_PULSE;
+
+ return NON_BURST_SYNCH_EVENT;
+}
+
+static inline enum dsi_vid_dst_format dsi_get_vid_fmt(
+ const enum mipi_dsi_pixel_format mipi_fmt)
+{
+ switch (mipi_fmt) {
+ case MIPI_DSI_FMT_RGB888: return VID_DST_FORMAT_RGB888;
+ case MIPI_DSI_FMT_RGB666: return VID_DST_FORMAT_RGB666_LOOSE;
+ case MIPI_DSI_FMT_RGB666_PACKED: return VID_DST_FORMAT_RGB666;
+ case MIPI_DSI_FMT_RGB565: return VID_DST_FORMAT_RGB565;
+ default: return VID_DST_FORMAT_RGB888;
+ }
+}
+
+static inline enum dsi_cmd_dst_format dsi_get_cmd_fmt(
+ const enum mipi_dsi_pixel_format mipi_fmt)
+{
+ switch (mipi_fmt) {
+ case MIPI_DSI_FMT_RGB888: return CMD_DST_FORMAT_RGB888;
+ case MIPI_DSI_FMT_RGB666_PACKED:
+	case MIPI_DSI_FMT_RGB666: return CMD_DST_FORMAT_RGB666;
+ case MIPI_DSI_FMT_RGB565: return CMD_DST_FORMAT_RGB565;
+ default: return CMD_DST_FORMAT_RGB888;
+ }
+}
+
+static void dsi_ctrl_config(struct msm_dsi_host *msm_host, bool enable,
+ u32 clk_pre, u32 clk_post)
+{
+ u32 flags = msm_host->mode_flags;
+ enum mipi_dsi_pixel_format mipi_fmt = msm_host->format;
+ u32 data = 0;
+
+ if (!enable) {
+ dsi_write(msm_host, REG_DSI_CTRL, 0);
+ return;
+ }
+
+ if (flags & MIPI_DSI_MODE_VIDEO) {
+ if (flags & MIPI_DSI_MODE_VIDEO_HSE)
+ data |= DSI_VID_CFG0_PULSE_MODE_HSA_HE;
+ if (flags & MIPI_DSI_MODE_VIDEO_HFP)
+ data |= DSI_VID_CFG0_HFP_POWER_STOP;
+ if (flags & MIPI_DSI_MODE_VIDEO_HBP)
+ data |= DSI_VID_CFG0_HBP_POWER_STOP;
+ if (flags & MIPI_DSI_MODE_VIDEO_HSA)
+ data |= DSI_VID_CFG0_HSA_POWER_STOP;
+ /* Always set low power stop mode for BLLP
+ * to let command engine send packets
+ */
+ data |= DSI_VID_CFG0_EOF_BLLP_POWER_STOP |
+ DSI_VID_CFG0_BLLP_POWER_STOP;
+ data |= DSI_VID_CFG0_TRAFFIC_MODE(dsi_get_traffic_mode(flags));
+ data |= DSI_VID_CFG0_DST_FORMAT(dsi_get_vid_fmt(mipi_fmt));
+ data |= DSI_VID_CFG0_VIRT_CHANNEL(msm_host->channel);
+ dsi_write(msm_host, REG_DSI_VID_CFG0, data);
+
+ /* Do not swap RGB colors */
+ data = DSI_VID_CFG1_RGB_SWAP(SWAP_RGB);
+		dsi_write(msm_host, REG_DSI_VID_CFG1, data);
+ } else {
+ /* Do not swap RGB colors */
+ data = DSI_CMD_CFG0_RGB_SWAP(SWAP_RGB);
+ data |= DSI_CMD_CFG0_DST_FORMAT(dsi_get_cmd_fmt(mipi_fmt));
+ dsi_write(msm_host, REG_DSI_CMD_CFG0, data);
+
+ data = DSI_CMD_CFG1_WR_MEM_START(MIPI_DCS_WRITE_MEMORY_START) |
+ DSI_CMD_CFG1_WR_MEM_CONTINUE(
+ MIPI_DCS_WRITE_MEMORY_CONTINUE);
+ /* Always insert DCS command */
+ data |= DSI_CMD_CFG1_INSERT_DCS_COMMAND;
+ dsi_write(msm_host, REG_DSI_CMD_CFG1, data);
+ }
+
+ dsi_write(msm_host, REG_DSI_CMD_DMA_CTRL,
+ DSI_CMD_DMA_CTRL_FROM_FRAME_BUFFER |
+ DSI_CMD_DMA_CTRL_LOW_POWER);
+
+ data = 0;
+ /* Always assume dedicated TE pin */
+ data |= DSI_TRIG_CTRL_TE;
+ data |= DSI_TRIG_CTRL_MDP_TRIGGER(TRIGGER_NONE);
+ data |= DSI_TRIG_CTRL_DMA_TRIGGER(TRIGGER_SW);
+ data |= DSI_TRIG_CTRL_STREAM(msm_host->channel);
+ if ((msm_host->cfg->major == MSM_DSI_VER_MAJOR_6G) &&
+ (msm_host->cfg->minor >= MSM_DSI_6G_VER_MINOR_V1_2))
+ data |= DSI_TRIG_CTRL_BLOCK_DMA_WITHIN_FRAME;
+ dsi_write(msm_host, REG_DSI_TRIG_CTRL, data);
+
+ data = DSI_CLKOUT_TIMING_CTRL_T_CLK_POST(clk_post) |
+ DSI_CLKOUT_TIMING_CTRL_T_CLK_PRE(clk_pre);
+ dsi_write(msm_host, REG_DSI_CLKOUT_TIMING_CTRL, data);
+
+ data = 0;
+ if (!(flags & MIPI_DSI_MODE_EOT_PACKET))
+ data |= DSI_EOT_PACKET_CTRL_TX_EOT_APPEND;
+ dsi_write(msm_host, REG_DSI_EOT_PACKET_CTRL, data);
+
+ /* allow only ack-err-status to generate interrupt */
+ dsi_write(msm_host, REG_DSI_ERR_INT_MASK0, 0x13ff3fe0);
+
+ dsi_intr_ctrl(msm_host, DSI_IRQ_MASK_ERROR, 1);
+
+ dsi_write(msm_host, REG_DSI_CLK_CTRL, DSI_CLK_CTRL_ENABLE_CLKS);
+
+ data = DSI_CTRL_CLK_EN;
+
+ DBG("lane number=%d", msm_host->lanes);
+ if (msm_host->lanes == 2) {
+ data |= DSI_CTRL_LANE1 | DSI_CTRL_LANE2;
+ /* swap lanes for 2-lane panel for better performance */
+ dsi_write(msm_host, REG_DSI_LANE_SWAP_CTRL,
+ DSI_LANE_SWAP_CTRL_DLN_SWAP_SEL(LANE_SWAP_1230));
+ } else {
+ /* Take 4 lanes as default */
+ data |= DSI_CTRL_LANE0 | DSI_CTRL_LANE1 | DSI_CTRL_LANE2 |
+ DSI_CTRL_LANE3;
+ /* Do not swap lanes for 4-lane panel */
+ dsi_write(msm_host, REG_DSI_LANE_SWAP_CTRL,
+ DSI_LANE_SWAP_CTRL_DLN_SWAP_SEL(LANE_SWAP_0123));
+ }
+ data |= DSI_CTRL_ENABLE;
+
+ dsi_write(msm_host, REG_DSI_CTRL, data);
+}
+
+static void dsi_timing_setup(struct msm_dsi_host *msm_host)
+{
+ struct drm_display_mode *mode = msm_host->mode;
+ u32 hs_start = 0, vs_start = 0; /* take sync start as 0 */
+ u32 h_total = mode->htotal;
+ u32 v_total = mode->vtotal;
+ u32 hs_end = mode->hsync_end - mode->hsync_start;
+ u32 vs_end = mode->vsync_end - mode->vsync_start;
+ u32 ha_start = h_total - mode->hsync_start;
+ u32 ha_end = ha_start + mode->hdisplay;
+ u32 va_start = v_total - mode->vsync_start;
+ u32 va_end = va_start + mode->vdisplay;
+ u32 wc;
+
+ DBG("");
+
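+	/* The start/end values above are relative to the start of sync
+	 * (hs_start/vs_start are taken as 0), so e.g. ha_start works out to
+	 * the sync width plus the back porch.
+	 */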
+ if (msm_host->mode_flags & MIPI_DSI_MODE_VIDEO) {
+ dsi_write(msm_host, REG_DSI_ACTIVE_H,
+ DSI_ACTIVE_H_START(ha_start) |
+ DSI_ACTIVE_H_END(ha_end));
+ dsi_write(msm_host, REG_DSI_ACTIVE_V,
+ DSI_ACTIVE_V_START(va_start) |
+ DSI_ACTIVE_V_END(va_end));
+ dsi_write(msm_host, REG_DSI_TOTAL,
+ DSI_TOTAL_H_TOTAL(h_total - 1) |
+ DSI_TOTAL_V_TOTAL(v_total - 1));
+
+ dsi_write(msm_host, REG_DSI_ACTIVE_HSYNC,
+ DSI_ACTIVE_HSYNC_START(hs_start) |
+ DSI_ACTIVE_HSYNC_END(hs_end));
+ dsi_write(msm_host, REG_DSI_ACTIVE_VSYNC_HPOS, 0);
+ dsi_write(msm_host, REG_DSI_ACTIVE_VSYNC_VPOS,
+ DSI_ACTIVE_VSYNC_VPOS_START(vs_start) |
+ DSI_ACTIVE_VSYNC_VPOS_END(vs_end));
+ } else { /* command mode */
+ /* image data and 1 byte write_memory_start cmd */
+ wc = mode->hdisplay * dsi_get_bpp(msm_host->format) / 8 + 1;
+
+ dsi_write(msm_host, REG_DSI_CMD_MDP_STREAM_CTRL,
+ DSI_CMD_MDP_STREAM_CTRL_WORD_COUNT(wc) |
+ DSI_CMD_MDP_STREAM_CTRL_VIRTUAL_CHANNEL(
+ msm_host->channel) |
+ DSI_CMD_MDP_STREAM_CTRL_DATA_TYPE(
+ MIPI_DSI_DCS_LONG_WRITE));
+
+ dsi_write(msm_host, REG_DSI_CMD_MDP_STREAM_TOTAL,
+ DSI_CMD_MDP_STREAM_TOTAL_H_TOTAL(mode->hdisplay) |
+ DSI_CMD_MDP_STREAM_TOTAL_V_TOTAL(mode->vdisplay));
+ }
+}
+
+static void dsi_sw_reset(struct msm_dsi_host *msm_host)
+{
+ dsi_write(msm_host, REG_DSI_CLK_CTRL, DSI_CLK_CTRL_ENABLE_CLKS);
+ wmb(); /* clocks need to be enabled before reset */
+
+ dsi_write(msm_host, REG_DSI_RESET, 1);
+ wmb(); /* make sure reset happen */
+ dsi_write(msm_host, REG_DSI_RESET, 0);
+}
+
+static void dsi_op_mode_config(struct msm_dsi_host *msm_host,
+ bool video_mode, bool enable)
+{
+ u32 dsi_ctrl;
+
+ dsi_ctrl = dsi_read(msm_host, REG_DSI_CTRL);
+
+ if (!enable) {
+ dsi_ctrl &= ~(DSI_CTRL_ENABLE | DSI_CTRL_VID_MODE_EN |
+ DSI_CTRL_CMD_MODE_EN);
+ dsi_intr_ctrl(msm_host, DSI_IRQ_MASK_CMD_MDP_DONE |
+ DSI_IRQ_MASK_VIDEO_DONE, 0);
+ } else {
+ if (video_mode) {
+ dsi_ctrl |= DSI_CTRL_VID_MODE_EN;
+ } else { /* command mode */
+ dsi_ctrl |= DSI_CTRL_CMD_MODE_EN;
+ dsi_intr_ctrl(msm_host, DSI_IRQ_MASK_CMD_MDP_DONE, 1);
+ }
+ dsi_ctrl |= DSI_CTRL_ENABLE;
+ }
+
+ dsi_write(msm_host, REG_DSI_CTRL, dsi_ctrl);
+}
+
+static void dsi_set_tx_power_mode(int mode, struct msm_dsi_host *msm_host)
+{
+ u32 data;
+
+ data = dsi_read(msm_host, REG_DSI_CMD_DMA_CTRL);
+
+ if (mode == 0)
+ data &= ~DSI_CMD_DMA_CTRL_LOW_POWER;
+ else
+ data |= DSI_CMD_DMA_CTRL_LOW_POWER;
+
+ dsi_write(msm_host, REG_DSI_CMD_DMA_CTRL, data);
+}
+
+static void dsi_wait4video_done(struct msm_dsi_host *msm_host)
+{
+ dsi_intr_ctrl(msm_host, DSI_IRQ_MASK_VIDEO_DONE, 1);
+
+ reinit_completion(&msm_host->video_comp);
+
+ wait_for_completion_timeout(&msm_host->video_comp,
+ msecs_to_jiffies(70));
+
+ dsi_intr_ctrl(msm_host, DSI_IRQ_MASK_VIDEO_DONE, 0);
+}
+
+static void dsi_wait4video_eng_busy(struct msm_dsi_host *msm_host)
+{
+ if (!(msm_host->mode_flags & MIPI_DSI_MODE_VIDEO))
+ return;
+
+ if (msm_host->power_on) {
+ dsi_wait4video_done(msm_host);
+		/* delay 2-4 ms to skip BLLP */
+ usleep_range(2000, 4000);
+ }
+}
+
+/* dsi_cmd */
+static int dsi_tx_buf_alloc(struct msm_dsi_host *msm_host, int size)
+{
+ struct drm_device *dev = msm_host->dev;
+ int ret;
+ u32 iova;
+
+ mutex_lock(&dev->struct_mutex);
+ msm_host->tx_gem_obj = msm_gem_new(dev, size, MSM_BO_UNCACHED);
+ if (IS_ERR(msm_host->tx_gem_obj)) {
+ ret = PTR_ERR(msm_host->tx_gem_obj);
+ pr_err("%s: failed to allocate gem, %d\n", __func__, ret);
+ msm_host->tx_gem_obj = NULL;
+ mutex_unlock(&dev->struct_mutex);
+ return ret;
+ }
+
+	ret = msm_gem_get_iova_locked(msm_host->tx_gem_obj, 0, &iova);
+	if (ret) {
+		pr_err("%s: failed to get iova, %d\n", __func__, ret);
+		mutex_unlock(&dev->struct_mutex);
+		return ret;
+	}
+ mutex_unlock(&dev->struct_mutex);
+
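+	/* the command DMA engine requires an 8-byte aligned buffer address */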
+ if (iova & 0x07) {
+ pr_err("%s: buf NOT 8 bytes aligned\n", __func__);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static void dsi_tx_buf_free(struct msm_dsi_host *msm_host)
+{
+ struct drm_device *dev = msm_host->dev;
+
+ if (msm_host->tx_gem_obj) {
+ msm_gem_put_iova(msm_host->tx_gem_obj, 0);
+ mutex_lock(&dev->struct_mutex);
+ msm_gem_free_object(msm_host->tx_gem_obj);
+ msm_host->tx_gem_obj = NULL;
+ mutex_unlock(&dev->struct_mutex);
+ }
+}
+
+/*
+ * prepare cmd buffer to be txed
+ */
+static int dsi_cmd_dma_add(struct drm_gem_object *tx_gem,
+ const struct mipi_dsi_msg *msg)
+{
+ struct mipi_dsi_packet packet;
+ int len;
+ int ret;
+ u8 *data;
+
+ ret = mipi_dsi_create_packet(&packet, msg);
+ if (ret) {
+ pr_err("%s: create packet failed, %d\n", __func__, ret);
+ return ret;
+ }
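+	/* keep the DMA length 4-byte aligned; any padding is filled with
+	 * 0xff further below */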
+ len = (packet.size + 3) & (~0x3);
+
+ if (len > tx_gem->size) {
+ pr_err("%s: packet size is too big\n", __func__);
+ return -EINVAL;
+ }
+
+ data = msm_gem_vaddr(tx_gem);
+
+ if (IS_ERR(data)) {
+ ret = PTR_ERR(data);
+ pr_err("%s: get vaddr failed, %d\n", __func__, ret);
+ return ret;
+ }
+
+ /* MSM specific command format in memory */
+ data[0] = packet.header[1];
+ data[1] = packet.header[2];
+ data[2] = packet.header[0];
+ data[3] = BIT(7); /* Last packet */
+ if (mipi_dsi_packet_format_is_long(msg->type))
+ data[3] |= BIT(6);
+ if (msg->rx_buf && msg->rx_len)
+ data[3] |= BIT(5);
+
+ /* Long packet */
+ if (packet.payload && packet.payload_length)
+ memcpy(data + 4, packet.payload, packet.payload_length);
+
+ /* Append 0xff to the end */
+ if (packet.size < len)
+ memset(data + packet.size, 0xff, len - packet.size);
+
+ return len;
+}
+
+/*
+ * dsi_short_read1_resp: 1 parameter
+ */
+static int dsi_short_read1_resp(u8 *buf, const struct mipi_dsi_msg *msg)
+{
+ u8 *data = msg->rx_buf;
+ if (data && (msg->rx_len >= 1)) {
+ *data = buf[1]; /* strip out dcs type */
+ return 1;
+ } else {
+		pr_err("%s: read data does not match rx_buf len %zu\n",
+			__func__, msg->rx_len);
+ return -EINVAL;
+ }
+}
+
+/*
+ * dsi_short_read2_resp: 2 parameter
+ */
+static int dsi_short_read2_resp(u8 *buf, const struct mipi_dsi_msg *msg)
+{
+ u8 *data = msg->rx_buf;
+ if (data && (msg->rx_len >= 2)) {
+ data[0] = buf[1]; /* strip out dcs type */
+ data[1] = buf[2];
+ return 2;
+ } else {
+		pr_err("%s: read data does not match rx_buf len %zu\n",
+			__func__, msg->rx_len);
+ return -EINVAL;
+ }
+}
+
+static int dsi_long_read_resp(u8 *buf, const struct mipi_dsi_msg *msg)
+{
+ /* strip out 4 byte dcs header */
+ if (msg->rx_buf && msg->rx_len)
+ memcpy(msg->rx_buf, buf + 4, msg->rx_len);
+
+ return msg->rx_len;
+}
+
+
+static int dsi_cmd_dma_tx(struct msm_dsi_host *msm_host, int len)
+{
+ int ret;
+ u32 iova;
+ bool triggered;
+
+ ret = msm_gem_get_iova(msm_host->tx_gem_obj, 0, &iova);
+ if (ret) {
+ pr_err("%s: failed to get iova: %d\n", __func__, ret);
+ return ret;
+ }
+
+ reinit_completion(&msm_host->dma_comp);
+
+ dsi_wait4video_eng_busy(msm_host);
+
+ triggered = msm_dsi_manager_cmd_xfer_trigger(
+ msm_host->id, iova, len);
+ if (triggered) {
+ ret = wait_for_completion_timeout(&msm_host->dma_comp,
+ msecs_to_jiffies(200));
+ DBG("ret=%d", ret);
+ if (ret == 0)
+ ret = -ETIMEDOUT;
+ else
+ ret = len;
+ } else
+ ret = len;
+
+ return ret;
+}
+
+static int dsi_cmd_dma_rx(struct msm_dsi_host *msm_host,
+ u8 *buf, int rx_byte, int pkt_size)
+{
+ u32 *lp, *temp, data;
+ int i, j = 0, cnt;
+ bool ack_error = false;
+ u32 read_cnt;
+ u8 reg[16];
+ int repeated_bytes = 0;
+ int buf_offset = buf - msm_host->rx_buf;
+
+ lp = (u32 *)buf;
+ temp = (u32 *)reg;
+ cnt = (rx_byte + 3) >> 2;
+ if (cnt > 4)
+ cnt = 4; /* 4 x 32 bits registers only */
+
+ /* Calculate real read data count */
+ read_cnt = dsi_read(msm_host, 0x1d4) >> 16;
+
+ ack_error = (rx_byte == 4) ?
+ (read_cnt == 8) : /* short pkt + 4-byte error pkt */
+ (read_cnt == (pkt_size + 6 + 4)); /* long pkt+4-byte error pkt*/
+
+ if (ack_error)
+ read_cnt -= 4; /* Remove 4 byte error pkt */
+
+	/*
+	 * In case of multiple reads from the panel, after the first read there
+	 * is a possibility that some bytes of the payload repeat in the
+	 * RDBK_DATA registers, since we read all the parameters from the
+	 * panel right from the first byte on every pass. We need to skip the
+	 * repeated bytes and then append the new parameters to the rx buffer.
+	 */
+ if (read_cnt > 16) {
+ int bytes_shifted;
+ /* Any data more than 16 bytes will be shifted out.
+ * The temp read buffer should already contain these bytes.
+ * The remaining bytes in read buffer are the repeated bytes.
+ */
+ bytes_shifted = read_cnt - 16;
+ repeated_bytes = buf_offset - bytes_shifted;
+ }
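+	/*
+	 * For example, with buf_offset = 8 and read_cnt = 20, 4 bytes were
+	 * shifted out, so repeated_bytes = 8 - 4 = 4 bytes of the register
+	 * contents are skipped below.
+	 */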
+
+ for (i = cnt - 1; i >= 0; i--) {
+ data = dsi_read(msm_host, REG_DSI_RDBK_DATA(i));
+ *temp++ = ntohl(data); /* to host byte order */
+ DBG("data = 0x%x and ntohl(data) = 0x%x", data, ntohl(data));
+ }
+
+ for (i = repeated_bytes; i < 16; i++)
+ buf[j++] = reg[i];
+
+ return j;
+}
+
+static int dsi_cmds2buf_tx(struct msm_dsi_host *msm_host,
+ const struct mipi_dsi_msg *msg)
+{
+ int len, ret;
+ int bllp_len = msm_host->mode->hdisplay *
+ dsi_get_bpp(msm_host->format) / 8;
+
+ len = dsi_cmd_dma_add(msm_host->tx_gem_obj, msg);
+	if (len <= 0) {
+ pr_err("%s: failed to add cmd type = 0x%x\n",
+ __func__, msg->type);
+ return -EINVAL;
+ }
+
+	/* for video mode, do not send cmds longer than
+	 * one pixel line, since the host only transmits them
+	 * during the BLLP.
+	 */
+ /* TODO: if the command is sent in LP mode, the bit rate is only
+ * half of esc clk rate. In this case, if the video is already
+ * actively streaming, we need to check more carefully if the
+ * command can be fit into one BLLP.
+ */
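+	/* e.g. a 1080 pixel wide RGB888 mode gives a bllp_len of
+	 * 1080 * 24 / 8 = 3240 bytes, the largest command allowed here
+	 * in video mode.
+	 */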
+ if ((msm_host->mode_flags & MIPI_DSI_MODE_VIDEO) && (len > bllp_len)) {
+ pr_err("%s: cmd cannot fit into BLLP period, len=%d\n",
+ __func__, len);
+ return -EINVAL;
+ }
+
+ ret = dsi_cmd_dma_tx(msm_host, len);
+ if (ret < len) {
+ pr_err("%s: cmd dma tx failed, type=0x%x, data0=0x%x, len=%d\n",
+ __func__, msg->type, (*(u8 *)(msg->tx_buf)), len);
+ return -ECOMM;
+ }
+
+ return len;
+}
+
+static void dsi_sw_reset_restore(struct msm_dsi_host *msm_host)
+{
+ u32 data0, data1;
+
+ data0 = dsi_read(msm_host, REG_DSI_CTRL);
+ data1 = data0;
+ data1 &= ~DSI_CTRL_ENABLE;
+ dsi_write(msm_host, REG_DSI_CTRL, data1);
+	/*
+	 * dsi controller needs to be disabled before the
+	 * clocks are turned on
+	 */
+ wmb();
+
+ dsi_write(msm_host, REG_DSI_CLK_CTRL, DSI_CLK_CTRL_ENABLE_CLKS);
+ wmb(); /* make sure clocks enabled */
+
+ /* dsi controller can only be reset while clocks are running */
+ dsi_write(msm_host, REG_DSI_RESET, 1);
+ wmb(); /* make sure reset happen */
+ dsi_write(msm_host, REG_DSI_RESET, 0);
+ wmb(); /* controller out of reset */
+ dsi_write(msm_host, REG_DSI_CTRL, data0);
+ wmb(); /* make sure dsi controller enabled again */
+}
+
+static void dsi_err_worker(struct work_struct *work)
+{
+ struct msm_dsi_host *msm_host =
+ container_of(work, struct msm_dsi_host, err_work);
+ u32 status = msm_host->err_work_state;
+
+ pr_err("%s: status=%x\n", __func__, status);
+ if (status & DSI_ERR_STATE_MDP_FIFO_UNDERFLOW)
+ dsi_sw_reset_restore(msm_host);
+
+ /* It is safe to clear here because error irq is disabled. */
+ msm_host->err_work_state = 0;
+
+ /* enable dsi error interrupt */
+ dsi_intr_ctrl(msm_host, DSI_IRQ_MASK_ERROR, 1);
+}
+
+static void dsi_ack_err_status(struct msm_dsi_host *msm_host)
+{
+ u32 status;
+
+ status = dsi_read(msm_host, REG_DSI_ACK_ERR_STATUS);
+
+ if (status) {
+ dsi_write(msm_host, REG_DSI_ACK_ERR_STATUS, status);
+		/* An extra write of 0 is needed to clear the error bits */
+ dsi_write(msm_host, REG_DSI_ACK_ERR_STATUS, 0);
+ msm_host->err_work_state |= DSI_ERR_STATE_ACK;
+ }
+}
+
+static void dsi_timeout_status(struct msm_dsi_host *msm_host)
+{
+ u32 status;
+
+ status = dsi_read(msm_host, REG_DSI_TIMEOUT_STATUS);
+
+ if (status) {
+ dsi_write(msm_host, REG_DSI_TIMEOUT_STATUS, status);
+ msm_host->err_work_state |= DSI_ERR_STATE_TIMEOUT;
+ }
+}
+
+static void dsi_dln0_phy_err(struct msm_dsi_host *msm_host)
+{
+ u32 status;
+
+ status = dsi_read(msm_host, REG_DSI_DLN0_PHY_ERR);
+
+ if (status) {
+ dsi_write(msm_host, REG_DSI_DLN0_PHY_ERR, status);
+ msm_host->err_work_state |= DSI_ERR_STATE_DLN0_PHY;
+ }
+}
+
+static void dsi_fifo_status(struct msm_dsi_host *msm_host)
+{
+ u32 status;
+
+ status = dsi_read(msm_host, REG_DSI_FIFO_STATUS);
+
+ /* fifo underflow, overflow */
+ if (status) {
+ dsi_write(msm_host, REG_DSI_FIFO_STATUS, status);
+ msm_host->err_work_state |= DSI_ERR_STATE_FIFO;
+ if (status & DSI_FIFO_STATUS_CMD_MDP_FIFO_UNDERFLOW)
+ msm_host->err_work_state |=
+ DSI_ERR_STATE_MDP_FIFO_UNDERFLOW;
+ }
+}
+
+static void dsi_status(struct msm_dsi_host *msm_host)
+{
+ u32 status;
+
+ status = dsi_read(msm_host, REG_DSI_STATUS0);
+
+ if (status & DSI_STATUS0_INTERLEAVE_OP_CONTENTION) {
+ dsi_write(msm_host, REG_DSI_STATUS0, status);
+ msm_host->err_work_state |=
+ DSI_ERR_STATE_INTERLEAVE_OP_CONTENTION;
+ }
+}
+
+static void dsi_clk_status(struct msm_dsi_host *msm_host)
+{
+ u32 status;
+
+ status = dsi_read(msm_host, REG_DSI_CLK_STATUS);
+
+ if (status & DSI_CLK_STATUS_PLL_UNLOCKED) {
+ dsi_write(msm_host, REG_DSI_CLK_STATUS, status);
+ msm_host->err_work_state |= DSI_ERR_STATE_PLL_UNLOCKED;
+ }
+}
+
+static void dsi_error(struct msm_dsi_host *msm_host)
+{
+ /* disable dsi error interrupt */
+ dsi_intr_ctrl(msm_host, DSI_IRQ_MASK_ERROR, 0);
+
+ dsi_clk_status(msm_host);
+ dsi_fifo_status(msm_host);
+ dsi_ack_err_status(msm_host);
+ dsi_timeout_status(msm_host);
+ dsi_status(msm_host);
+ dsi_dln0_phy_err(msm_host);
+
+ queue_work(msm_host->workqueue, &msm_host->err_work);
+}
+
+static irqreturn_t dsi_host_irq(int irq, void *ptr)
+{
+ struct msm_dsi_host *msm_host = ptr;
+ u32 isr;
+ unsigned long flags;
+
+ if (!msm_host->ctrl_base)
+ return IRQ_HANDLED;
+
+ spin_lock_irqsave(&msm_host->intr_lock, flags);
+ isr = dsi_read(msm_host, REG_DSI_INTR_CTRL);
+ dsi_write(msm_host, REG_DSI_INTR_CTRL, isr);
+ spin_unlock_irqrestore(&msm_host->intr_lock, flags);
+
+ DBG("isr=0x%x, id=%d", isr, msm_host->id);
+
+ if (isr & DSI_IRQ_ERROR)
+ dsi_error(msm_host);
+
+ if (isr & DSI_IRQ_VIDEO_DONE)
+ complete(&msm_host->video_comp);
+
+ if (isr & DSI_IRQ_CMD_DMA_DONE)
+ complete(&msm_host->dma_comp);
+
+ return IRQ_HANDLED;
+}
+
+static int dsi_host_init_panel_gpios(struct msm_dsi_host *msm_host,
+ struct device *panel_device)
+{
+ int ret;
+
+ msm_host->disp_en_gpio = devm_gpiod_get(panel_device,
+ "disp-enable");
+ if (IS_ERR(msm_host->disp_en_gpio)) {
+ DBG("cannot get disp-enable-gpios %ld",
+ PTR_ERR(msm_host->disp_en_gpio));
+ msm_host->disp_en_gpio = NULL;
+ }
+ if (msm_host->disp_en_gpio) {
+ ret = gpiod_direction_output(msm_host->disp_en_gpio, 0);
+ if (ret) {
+ pr_err("cannot set dir to disp-en-gpios %d\n", ret);
+ return ret;
+ }
+ }
+
+ msm_host->te_gpio = devm_gpiod_get(panel_device, "disp-te");
+ if (IS_ERR(msm_host->te_gpio)) {
+ DBG("cannot get disp-te-gpios %ld", PTR_ERR(msm_host->te_gpio));
+ msm_host->te_gpio = NULL;
+ }
+
+ if (msm_host->te_gpio) {
+ ret = gpiod_direction_input(msm_host->te_gpio);
+ if (ret) {
+ pr_err("%s: cannot set dir to disp-te-gpios, %d\n",
+ __func__, ret);
+ return ret;
+ }
+ }
+
+ return 0;
+}
+
+static int dsi_host_attach(struct mipi_dsi_host *host,
+ struct mipi_dsi_device *dsi)
+{
+ struct msm_dsi_host *msm_host = to_msm_dsi_host(host);
+ int ret;
+
+ msm_host->channel = dsi->channel;
+ msm_host->lanes = dsi->lanes;
+ msm_host->format = dsi->format;
+ msm_host->mode_flags = dsi->mode_flags;
+
+ msm_host->panel_node = dsi->dev.of_node;
+
+ /* Some gpios defined in panel DT need to be controlled by host */
+ ret = dsi_host_init_panel_gpios(msm_host, &dsi->dev);
+ if (ret)
+ return ret;
+
+ DBG("id=%d", msm_host->id);
+ if (msm_host->dev)
+ drm_helper_hpd_irq_event(msm_host->dev);
+
+ return 0;
+}
+
+static int dsi_host_detach(struct mipi_dsi_host *host,
+ struct mipi_dsi_device *dsi)
+{
+ struct msm_dsi_host *msm_host = to_msm_dsi_host(host);
+
+ msm_host->panel_node = NULL;
+
+ DBG("id=%d", msm_host->id);
+ if (msm_host->dev)
+ drm_helper_hpd_irq_event(msm_host->dev);
+
+ return 0;
+}
+
+static ssize_t dsi_host_transfer(struct mipi_dsi_host *host,
+ const struct mipi_dsi_msg *msg)
+{
+ struct msm_dsi_host *msm_host = to_msm_dsi_host(host);
+ int ret;
+
+ if (!msg || !msm_host->power_on)
+ return -EINVAL;
+
+ mutex_lock(&msm_host->cmd_mutex);
+ ret = msm_dsi_manager_cmd_xfer(msm_host->id, msg);
+ mutex_unlock(&msm_host->cmd_mutex);
+
+ return ret;
+}
+
+static struct mipi_dsi_host_ops dsi_host_ops = {
+ .attach = dsi_host_attach,
+ .detach = dsi_host_detach,
+ .transfer = dsi_host_transfer,
+};
+
+int msm_dsi_host_init(struct msm_dsi *msm_dsi)
+{
+ struct msm_dsi_host *msm_host = NULL;
+ struct platform_device *pdev = msm_dsi->pdev;
+ int ret;
+
+ msm_host = devm_kzalloc(&pdev->dev, sizeof(*msm_host), GFP_KERNEL);
+ if (!msm_host) {
+ pr_err("%s: FAILED: cannot alloc dsi host\n",
+ __func__);
+ ret = -ENOMEM;
+ goto fail;
+ }
+
+ ret = of_property_read_u32(pdev->dev.of_node,
+ "qcom,dsi-host-index", &msm_host->id);
+ if (ret) {
+ dev_err(&pdev->dev,
+ "%s: host index not specified, ret=%d\n",
+ __func__, ret);
+ goto fail;
+ }
+ msm_host->pdev = pdev;
+
+ ret = dsi_clk_init(msm_host);
+ if (ret) {
+ pr_err("%s: unable to initialize dsi clks\n", __func__);
+ goto fail;
+ }
+
+ msm_host->ctrl_base = msm_ioremap(pdev, "dsi_ctrl", "DSI CTRL");
+ if (IS_ERR(msm_host->ctrl_base)) {
+ pr_err("%s: unable to map Dsi ctrl base\n", __func__);
+ ret = PTR_ERR(msm_host->ctrl_base);
+ goto fail;
+ }
+
+ msm_host->cfg = dsi_get_config(msm_host);
+ if (!msm_host->cfg) {
+ ret = -EINVAL;
+ pr_err("%s: get config failed\n", __func__);
+ goto fail;
+ }
+
+ ret = dsi_regulator_init(msm_host);
+ if (ret) {
+ pr_err("%s: regulator init failed\n", __func__);
+ goto fail;
+ }
+
+	msm_host->rx_buf = devm_kzalloc(&pdev->dev, SZ_4K, GFP_KERNEL);
+	if (!msm_host->rx_buf) {
+		ret = -ENOMEM;
+		pr_err("%s: alloc rx temp buf failed\n", __func__);
+		goto fail;
+	}
+
+ init_completion(&msm_host->dma_comp);
+ init_completion(&msm_host->video_comp);
+ mutex_init(&msm_host->dev_mutex);
+ mutex_init(&msm_host->cmd_mutex);
+ mutex_init(&msm_host->clk_mutex);
+ spin_lock_init(&msm_host->intr_lock);
+
+ /* setup workqueue */
+ msm_host->workqueue = alloc_ordered_workqueue("dsi_drm_work", 0);
+ INIT_WORK(&msm_host->err_work, dsi_err_worker);
+
+ msm_dsi->phy = msm_dsi_phy_init(pdev, msm_host->cfg->phy_type,
+ msm_host->id);
+ if (!msm_dsi->phy) {
+ ret = -EINVAL;
+ pr_err("%s: phy init failed\n", __func__);
+ goto fail;
+ }
+ msm_dsi->host = &msm_host->base;
+ msm_dsi->id = msm_host->id;
+
+ DBG("Dsi Host %d initialized", msm_host->id);
+ return 0;
+
+fail:
+ return ret;
+}
+
+void msm_dsi_host_destroy(struct mipi_dsi_host *host)
+{
+ struct msm_dsi_host *msm_host = to_msm_dsi_host(host);
+
+ DBG("");
+ dsi_tx_buf_free(msm_host);
+ if (msm_host->workqueue) {
+ flush_workqueue(msm_host->workqueue);
+ destroy_workqueue(msm_host->workqueue);
+ msm_host->workqueue = NULL;
+ }
+
+ mutex_destroy(&msm_host->clk_mutex);
+ mutex_destroy(&msm_host->cmd_mutex);
+ mutex_destroy(&msm_host->dev_mutex);
+}
+
+int msm_dsi_host_modeset_init(struct mipi_dsi_host *host,
+ struct drm_device *dev)
+{
+ struct msm_dsi_host *msm_host = to_msm_dsi_host(host);
+ struct platform_device *pdev = msm_host->pdev;
+ int ret;
+
+	msm_host->irq = irq_of_parse_and_map(pdev->dev.of_node, 0);
+	if (!msm_host->irq) {
+		dev_err(dev->dev, "failed to get irq\n");
+		return -EINVAL;
+	}
+
+ ret = devm_request_irq(&pdev->dev, msm_host->irq,
+ dsi_host_irq, IRQF_TRIGGER_HIGH | IRQF_ONESHOT,
+ "dsi_isr", msm_host);
+ if (ret < 0) {
+ dev_err(&pdev->dev, "failed to request IRQ%u: %d\n",
+ msm_host->irq, ret);
+ return ret;
+ }
+
+ msm_host->dev = dev;
+ ret = dsi_tx_buf_alloc(msm_host, SZ_4K);
+ if (ret) {
+ pr_err("%s: alloc tx gem obj failed, %d\n", __func__, ret);
+ return ret;
+ }
+
+ return 0;
+}
+
+int msm_dsi_host_register(struct mipi_dsi_host *host, bool check_defer)
+{
+ struct msm_dsi_host *msm_host = to_msm_dsi_host(host);
+ struct device_node *node;
+ int ret;
+
+ /* Register mipi dsi host */
+ if (!msm_host->registered) {
+ host->dev = &msm_host->pdev->dev;
+ host->ops = &dsi_host_ops;
+ ret = mipi_dsi_host_register(host);
+ if (ret)
+ return ret;
+
+ msm_host->registered = true;
+
+		/* If the panel driver has not been probed after host register,
+		 * we should defer the host's probe.
+		 * This makes sure the panel is connected when fbcon detects
+		 * the connector status and gets the proper display mode to
+		 * create the framebuffer.
+		 */
+ if (check_defer) {
+ node = of_get_child_by_name(msm_host->pdev->dev.of_node,
+ "panel");
+ if (node) {
+ if (!of_drm_find_panel(node))
+ return -EPROBE_DEFER;
+ }
+ }
+ }
+
+ return 0;
+}
+
+void msm_dsi_host_unregister(struct mipi_dsi_host *host)
+{
+ struct msm_dsi_host *msm_host = to_msm_dsi_host(host);
+
+ if (msm_host->registered) {
+ mipi_dsi_host_unregister(host);
+ host->dev = NULL;
+ host->ops = NULL;
+ msm_host->registered = false;
+ }
+}
+
+int msm_dsi_host_xfer_prepare(struct mipi_dsi_host *host,
+ const struct mipi_dsi_msg *msg)
+{
+ struct msm_dsi_host *msm_host = to_msm_dsi_host(host);
+
+ /* TODO: make sure dsi_cmd_mdp is idle.
+ * Since DSI6G v1.2.0, we can set DSI_TRIG_CTRL.BLOCK_DMA_WITHIN_FRAME
+ * to ask H/W to wait until cmd mdp is idle. S/W wait is not needed.
+ * How to handle the old versions? Wait for mdp cmd done?
+ */
+
+ /*
+ * mdss interrupt is generated in mdp core clock domain
+ * mdp clock need to be enabled to receive dsi interrupt
+ */
+ dsi_clk_ctrl(msm_host, 1);
+
+ /* TODO: vote for bus bandwidth */
+
+ if (!(msg->flags & MIPI_DSI_MSG_USE_LPM))
+ dsi_set_tx_power_mode(0, msm_host);
+
+ msm_host->dma_cmd_ctrl_restore = dsi_read(msm_host, REG_DSI_CTRL);
+ dsi_write(msm_host, REG_DSI_CTRL,
+ msm_host->dma_cmd_ctrl_restore |
+ DSI_CTRL_CMD_MODE_EN |
+ DSI_CTRL_ENABLE);
+ dsi_intr_ctrl(msm_host, DSI_IRQ_MASK_CMD_DMA_DONE, 1);
+
+ return 0;
+}
+
+void msm_dsi_host_xfer_restore(struct mipi_dsi_host *host,
+ const struct mipi_dsi_msg *msg)
+{
+ struct msm_dsi_host *msm_host = to_msm_dsi_host(host);
+
+ dsi_intr_ctrl(msm_host, DSI_IRQ_MASK_CMD_DMA_DONE, 0);
+ dsi_write(msm_host, REG_DSI_CTRL, msm_host->dma_cmd_ctrl_restore);
+
+ if (!(msg->flags & MIPI_DSI_MSG_USE_LPM))
+ dsi_set_tx_power_mode(1, msm_host);
+
+ /* TODO: unvote for bus bandwidth */
+
+ dsi_clk_ctrl(msm_host, 0);
+}
+
+int msm_dsi_host_cmd_tx(struct mipi_dsi_host *host,
+ const struct mipi_dsi_msg *msg)
+{
+ struct msm_dsi_host *msm_host = to_msm_dsi_host(host);
+
+ return dsi_cmds2buf_tx(msm_host, msg);
+}
+
+int msm_dsi_host_cmd_rx(struct mipi_dsi_host *host,
+ const struct mipi_dsi_msg *msg)
+{
+ struct msm_dsi_host *msm_host = to_msm_dsi_host(host);
+ int data_byte, rx_byte, dlen, end;
+ int short_response, diff, pkt_size, ret = 0;
+ char cmd;
+ int rlen = msg->rx_len;
+ u8 *buf;
+
+ if (rlen <= 2) {
+ short_response = 1;
+ pkt_size = rlen;
+ rx_byte = 4;
+ } else {
+ short_response = 0;
+ data_byte = 10; /* first read */
+ if (rlen < data_byte)
+ pkt_size = rlen;
+ else
+ pkt_size = data_byte;
+ rx_byte = data_byte + 6; /* 4 header + 2 crc */
+ }
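+	/* The read-back FIFO is 16 bytes, so the first pass returns at most
+	 * 10 payload bytes (4 header + 10 data + 2 crc); later passes drop
+	 * the header and can return up to 14 bytes.
+	 */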
+
+ buf = msm_host->rx_buf;
+ end = 0;
+ while (!end) {
+ u8 tx[2] = {pkt_size & 0xff, pkt_size >> 8};
+ struct mipi_dsi_msg max_pkt_size_msg = {
+ .channel = msg->channel,
+ .type = MIPI_DSI_SET_MAXIMUM_RETURN_PACKET_SIZE,
+ .tx_len = 2,
+ .tx_buf = tx,
+ };
+
+ DBG("rlen=%d pkt_size=%d rx_byte=%d",
+ rlen, pkt_size, rx_byte);
+
+ ret = dsi_cmds2buf_tx(msm_host, &max_pkt_size_msg);
+ if (ret < 2) {
+ pr_err("%s: Set max pkt size failed, %d\n",
+ __func__, ret);
+ return -EINVAL;
+ }
+
+ if ((msm_host->cfg->major == MSM_DSI_VER_MAJOR_6G) &&
+ (msm_host->cfg->minor >= MSM_DSI_6G_VER_MINOR_V1_1)) {
+ /* Clear the RDBK_DATA registers */
+ dsi_write(msm_host, REG_DSI_RDBK_DATA_CTRL,
+ DSI_RDBK_DATA_CTRL_CLR);
+ wmb(); /* make sure the RDBK registers are cleared */
+ dsi_write(msm_host, REG_DSI_RDBK_DATA_CTRL, 0);
+ wmb(); /* release cleared status before transfer */
+ }
+
+ ret = dsi_cmds2buf_tx(msm_host, msg);
+ if (ret < msg->tx_len) {
+ pr_err("%s: Read cmd Tx failed, %d\n", __func__, ret);
+ return ret;
+ }
+
+		/*
+		 * Once the cmd_dma_done interrupt is received, the return
+		 * data from the client is ready and already stored in the
+		 * RDBK_DATA registers.
+		 * Since the rx fifo is 16 bytes, the dcs header is kept only
+		 * on the first loop; after that it is lost while shifting
+		 * into the registers.
+		 */
+ dlen = dsi_cmd_dma_rx(msm_host, buf, rx_byte, pkt_size);
+
+ if (dlen <= 0)
+ return 0;
+
+ if (short_response)
+ break;
+
+ if (rlen <= data_byte) {
+ diff = data_byte - rlen;
+ end = 1;
+ } else {
+ diff = 0;
+ rlen -= data_byte;
+ }
+
+ if (!end) {
+ dlen -= 2; /* 2 crc */
+ dlen -= diff;
+ buf += dlen; /* next start position */
+ data_byte = 14; /* NOT first read */
+ if (rlen < data_byte)
+ pkt_size += rlen;
+ else
+ pkt_size += data_byte;
+ DBG("buf=%p dlen=%d diff=%d", buf, dlen, diff);
+ }
+ }
+
+ /*
+ * For single Long read, if the requested rlen < 10,
+ * we need to shift the start position of rx
+ * data buffer to skip the bytes which are not
+ * updated.
+ */
+ if (pkt_size < 10 && !short_response)
+ buf = msm_host->rx_buf + (10 - rlen);
+ else
+ buf = msm_host->rx_buf;
+
+ cmd = buf[0];
+ switch (cmd) {
+	case MIPI_DSI_RX_ACKNOWLEDGE_AND_ERROR_REPORT:
+		pr_err("%s: rx ACK_ERR_PACKAGE\n", __func__);
+		ret = 0;
+		break;
+ case MIPI_DSI_RX_GENERIC_SHORT_READ_RESPONSE_1BYTE:
+ case MIPI_DSI_RX_DCS_SHORT_READ_RESPONSE_1BYTE:
+ ret = dsi_short_read1_resp(buf, msg);
+ break;
+ case MIPI_DSI_RX_GENERIC_SHORT_READ_RESPONSE_2BYTE:
+ case MIPI_DSI_RX_DCS_SHORT_READ_RESPONSE_2BYTE:
+ ret = dsi_short_read2_resp(buf, msg);
+ break;
+ case MIPI_DSI_RX_GENERIC_LONG_READ_RESPONSE:
+ case MIPI_DSI_RX_DCS_LONG_READ_RESPONSE:
+ ret = dsi_long_read_resp(buf, msg);
+ break;
+ default:
+		pr_warn("%s: Invalid response cmd\n", __func__);
+ ret = 0;
+ }
+
+ return ret;
+}
+
+void msm_dsi_host_cmd_xfer_commit(struct mipi_dsi_host *host, u32 iova, u32 len)
+{
+ struct msm_dsi_host *msm_host = to_msm_dsi_host(host);
+
+ dsi_write(msm_host, REG_DSI_DMA_BASE, iova);
+ dsi_write(msm_host, REG_DSI_DMA_LEN, len);
+ dsi_write(msm_host, REG_DSI_TRIG_DMA, 1);
+
+ /* Make sure trigger happens */
+ wmb();
+}
+
+int msm_dsi_host_enable(struct mipi_dsi_host *host)
+{
+ struct msm_dsi_host *msm_host = to_msm_dsi_host(host);
+
+ dsi_op_mode_config(msm_host,
+ !!(msm_host->mode_flags & MIPI_DSI_MODE_VIDEO), true);
+
+ /* TODO: clock should be turned off for command mode,
+ * and only turned on before MDP START.
+ * This part of code should be enabled once mdp driver support it.
+ */
+ /* if (msm_panel->mode == MSM_DSI_CMD_MODE)
+ dsi_clk_ctrl(msm_host, 0); */
+
+ return 0;
+}
+
+int msm_dsi_host_disable(struct mipi_dsi_host *host)
+{
+ struct msm_dsi_host *msm_host = to_msm_dsi_host(host);
+
+ dsi_op_mode_config(msm_host,
+ !!(msm_host->mode_flags & MIPI_DSI_MODE_VIDEO), false);
+
+	/* Since we have disabled INTF, the video engine won't stop,
+	 * which would keep the cmd engine blocked.
+	 * Reset to disable the video engine so that we can send off cmds.
+	 */
+ dsi_sw_reset(msm_host);
+
+ return 0;
+}
+
+int msm_dsi_host_power_on(struct mipi_dsi_host *host)
+{
+ struct msm_dsi_host *msm_host = to_msm_dsi_host(host);
+ u32 clk_pre = 0, clk_post = 0;
+ int ret = 0;
+
+ mutex_lock(&msm_host->dev_mutex);
+ if (msm_host->power_on) {
+ DBG("dsi host already on");
+ goto unlock_ret;
+ }
+
+ ret = dsi_calc_clk_rate(msm_host);
+ if (ret) {
+ pr_err("%s: unable to calc clk rate, %d\n", __func__, ret);
+ goto unlock_ret;
+ }
+
+ ret = dsi_host_regulator_enable(msm_host);
+ if (ret) {
+		pr_err("%s: Failed to enable vregs, ret=%d\n",
+			__func__, ret);
+ goto unlock_ret;
+ }
+
+ ret = dsi_bus_clk_enable(msm_host);
+ if (ret) {
+ pr_err("%s: failed to enable bus clocks, %d\n", __func__, ret);
+ goto fail_disable_reg;
+ }
+
+ dsi_phy_sw_reset(msm_host);
+ ret = msm_dsi_manager_phy_enable(msm_host->id,
+ msm_host->byte_clk_rate * 8,
+ clk_get_rate(msm_host->esc_clk),
+ &clk_pre, &clk_post);
+ dsi_bus_clk_disable(msm_host);
+ if (ret) {
+ pr_err("%s: failed to enable phy, %d\n", __func__, ret);
+ goto fail_disable_reg;
+ }
+
+ ret = dsi_clk_ctrl(msm_host, 1);
+ if (ret) {
+ pr_err("%s: failed to enable clocks. ret=%d\n", __func__, ret);
+ goto fail_disable_reg;
+ }
+
+ dsi_timing_setup(msm_host);
+ dsi_sw_reset(msm_host);
+ dsi_ctrl_config(msm_host, true, clk_pre, clk_post);
+
+ if (msm_host->disp_en_gpio)
+ gpiod_set_value(msm_host->disp_en_gpio, 1);
+
+ msm_host->power_on = true;
+ mutex_unlock(&msm_host->dev_mutex);
+
+ return 0;
+
+fail_disable_reg:
+ dsi_host_regulator_disable(msm_host);
+unlock_ret:
+ mutex_unlock(&msm_host->dev_mutex);
+ return ret;
+}
+
+int msm_dsi_host_power_off(struct mipi_dsi_host *host)
+{
+ struct msm_dsi_host *msm_host = to_msm_dsi_host(host);
+
+ mutex_lock(&msm_host->dev_mutex);
+ if (!msm_host->power_on) {
+ DBG("dsi host already off");
+ goto unlock_ret;
+ }
+
+ dsi_ctrl_config(msm_host, false, 0, 0);
+
+ if (msm_host->disp_en_gpio)
+ gpiod_set_value(msm_host->disp_en_gpio, 0);
+
+ msm_dsi_manager_phy_disable(msm_host->id);
+
+ dsi_clk_ctrl(msm_host, 0);
+
+ dsi_host_regulator_disable(msm_host);
+
+ DBG("-");
+
+ msm_host->power_on = false;
+
+unlock_ret:
+ mutex_unlock(&msm_host->dev_mutex);
+ return 0;
+}
+
+int msm_dsi_host_set_display_mode(struct mipi_dsi_host *host,
+ struct drm_display_mode *mode)
+{
+ struct msm_dsi_host *msm_host = to_msm_dsi_host(host);
+
+ if (msm_host->mode) {
+ drm_mode_destroy(msm_host->dev, msm_host->mode);
+ msm_host->mode = NULL;
+ }
+
+ msm_host->mode = drm_mode_duplicate(msm_host->dev, mode);
+ if (IS_ERR(msm_host->mode)) {
+ pr_err("%s: cannot duplicate mode\n", __func__);
+ return PTR_ERR(msm_host->mode);
+ }
+
+ return 0;
+}
+
+struct drm_panel *msm_dsi_host_get_panel(struct mipi_dsi_host *host,
+ unsigned long *panel_flags)
+{
+ struct msm_dsi_host *msm_host = to_msm_dsi_host(host);
+ struct drm_panel *panel;
+
+ panel = of_drm_find_panel(msm_host->panel_node);
+ if (panel_flags)
+ *panel_flags = msm_host->mode_flags;
+
+ return panel;
+}
+
diff --git a/drivers/gpu/drm/msm/dsi/dsi_manager.c b/drivers/gpu/drm/msm/dsi/dsi_manager.c
new file mode 100644
index 0000000..ee3ebca
--- /dev/null
+++ b/drivers/gpu/drm/msm/dsi/dsi_manager.c
@@ -0,0 +1,705 @@
+/*
+ * Copyright (c) 2015, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include "msm_kms.h"
+#include "dsi.h"
+
+struct msm_dsi_manager {
+ struct msm_dsi *dsi[DSI_MAX];
+
+ bool is_dual_panel;
+ bool is_sync_needed;
+ int master_panel_id;
+};
+
+static struct msm_dsi_manager msm_dsim_glb;
+
+#define IS_DUAL_PANEL() (msm_dsim_glb.is_dual_panel)
+#define IS_SYNC_NEEDED() (msm_dsim_glb.is_sync_needed)
+#define IS_MASTER_PANEL(id) (msm_dsim_glb.master_panel_id == id)
+
+static inline struct msm_dsi *dsi_mgr_get_dsi(int id)
+{
+ return msm_dsim_glb.dsi[id];
+}
+
+static inline struct msm_dsi *dsi_mgr_get_other_dsi(int id)
+{
+ return msm_dsim_glb.dsi[(id + 1) % DSI_MAX];
+}
+
+static int dsi_mgr_parse_dual_panel(struct device_node *np, int id)
+{
+ struct msm_dsi_manager *msm_dsim = &msm_dsim_glb;
+
+	/* We assume both DSI nodes carry the same dual-panel and sync-mode
+	 * information, and that only one node specifies the master in dual mode.
+	 */
+ if (!msm_dsim->is_dual_panel)
+ msm_dsim->is_dual_panel = of_property_read_bool(
+ np, "qcom,dual-panel-mode");
+
+ if (msm_dsim->is_dual_panel) {
+ if (of_property_read_bool(np, "qcom,master-panel"))
+ msm_dsim->master_panel_id = id;
+ if (!msm_dsim->is_sync_needed)
+ msm_dsim->is_sync_needed = of_property_read_bool(
+ np, "qcom,sync-dual-panel");
+ }
+
+ return 0;
+}
+
+struct dsi_connector {
+ struct drm_connector base;
+ int id;
+};
+
+struct dsi_bridge {
+ struct drm_bridge base;
+ int id;
+};
+
+#define to_dsi_connector(x) container_of(x, struct dsi_connector, base)
+#define to_dsi_bridge(x) container_of(x, struct dsi_bridge, base)
+
+static inline int dsi_mgr_connector_get_id(struct drm_connector *connector)
+{
+ struct dsi_connector *dsi_connector = to_dsi_connector(connector);
+ return dsi_connector->id;
+}
+
+static int dsi_mgr_bridge_get_id(struct drm_bridge *bridge)
+{
+ struct dsi_bridge *dsi_bridge = to_dsi_bridge(bridge);
+ return dsi_bridge->id;
+}
+
+static enum drm_connector_status dsi_mgr_connector_detect(
+ struct drm_connector *connector, bool force)
+{
+ int id = dsi_mgr_connector_get_id(connector);
+ struct msm_dsi *msm_dsi = dsi_mgr_get_dsi(id);
+ struct msm_dsi *other_dsi = dsi_mgr_get_other_dsi(id);
+ struct msm_drm_private *priv = connector->dev->dev_private;
+ struct msm_kms *kms = priv->kms;
+
+ DBG("id=%d", id);
+ if (!msm_dsi->panel) {
+ msm_dsi->panel = msm_dsi_host_get_panel(msm_dsi->host,
+ &msm_dsi->panel_flags);
+
+		/* There is only 1 panel in the global panel list
+		 * for dual panel mode. Therefore the slave dsi should get
+		 * the drm_panel instance from the master dsi, and
+		 * keep using the panel flags obtained from the current DSI link.
+		 */
+ if (!msm_dsi->panel && IS_DUAL_PANEL() &&
+ !IS_MASTER_PANEL(id) && other_dsi)
+ msm_dsi->panel = msm_dsi_host_get_panel(
+ other_dsi->host, NULL);
+
+ if (msm_dsi->panel && IS_DUAL_PANEL())
+ drm_object_attach_property(&connector->base,
+ connector->dev->mode_config.tile_property, 0);
+
+ /* Set split display info to kms once dual panel is connected
+ * to both hosts
+ */
+ if (msm_dsi->panel && IS_DUAL_PANEL() &&
+ other_dsi && other_dsi->panel) {
+ bool cmd_mode = !(msm_dsi->panel_flags &
+ MIPI_DSI_MODE_VIDEO);
+ struct drm_encoder *encoder = msm_dsi_get_encoder(
+ dsi_mgr_get_dsi(DSI_ENCODER_MASTER));
+ struct drm_encoder *slave_enc = msm_dsi_get_encoder(
+ dsi_mgr_get_dsi(DSI_ENCODER_SLAVE));
+
+ if (kms->funcs->set_split_display)
+ kms->funcs->set_split_display(kms, encoder,
+ slave_enc, cmd_mode);
+ else
+ pr_err("mdp does not support dual panel\n");
+ }
+ }
+
+ return msm_dsi->panel ? connector_status_connected :
+ connector_status_disconnected;
+}
+
+static void dsi_mgr_connector_destroy(struct drm_connector *connector)
+{
+ DBG("");
+ drm_connector_unregister(connector);
+ drm_connector_cleanup(connector);
+}
+
+static void dsi_dual_connector_fix_modes(struct drm_connector *connector)
+{
+ struct drm_display_mode *mode, *m;
+
+ /* Only support left-right mode */
+ list_for_each_entry_safe(mode, m, &connector->probed_modes, head) {
+ mode->clock >>= 1;
+ mode->hdisplay >>= 1;
+ mode->hsync_start >>= 1;
+ mode->hsync_end >>= 1;
+ mode->htotal >>= 1;
+ drm_mode_set_name(mode);
+ }
+}
+
+static int dsi_dual_connector_tile_init(
+ struct drm_connector *connector, int id)
+{
+ struct drm_display_mode *mode;
+ /* Fake topology id */
+ char topo_id[8] = {'M', 'S', 'M', 'D', 'U', 'D', 'S', 'I'};
+
+ if (connector->tile_group) {
+ DBG("Tile property has been initialized");
+ return 0;
+ }
+
+ /* Use the first mode only for now */
+ mode = list_first_entry(&connector->probed_modes,
+ struct drm_display_mode,
+ head);
+ if (!mode)
+ return -EINVAL;
+
+ connector->tile_group = drm_mode_get_tile_group(
+ connector->dev, topo_id);
+ if (!connector->tile_group)
+ connector->tile_group = drm_mode_create_tile_group(
+ connector->dev, topo_id);
+ if (!connector->tile_group) {
+ pr_err("%s: failed to create tile group\n", __func__);
+ return -ENOMEM;
+ }
+
+ connector->has_tile = true;
+ connector->tile_is_single_monitor = true;
+
+ /* mode has been fixed */
+ connector->tile_h_size = mode->hdisplay;
+ connector->tile_v_size = mode->vdisplay;
+
+ /* Only support left-right mode */
+ connector->num_h_tile = 2;
+ connector->num_v_tile = 1;
+
+ connector->tile_v_loc = 0;
+ connector->tile_h_loc = (id == DSI_RIGHT) ? 1 : 0;
+
+ return 0;
+}
+
+static int dsi_mgr_connector_get_modes(struct drm_connector *connector)
+{
+ int id = dsi_mgr_connector_get_id(connector);
+ struct msm_dsi *msm_dsi = dsi_mgr_get_dsi(id);
+ struct drm_panel *panel = msm_dsi->panel;
+ int ret, num;
+
+ if (!panel)
+ return 0;
+
+	/* Since we have 2 connectors, but only 1 drm_panel in dual DSI mode,
+	 * the panel should not be attached to any particular connector.
+	 * Only temporarily attach the panel to the current connector here,
+	 * to let the panel set the mode on this connector.
+	 */
+ drm_panel_attach(panel, connector);
+ num = drm_panel_get_modes(panel);
+ drm_panel_detach(panel);
+ if (!num)
+ return 0;
+
+ if (IS_DUAL_PANEL()) {
+ /* report half resolution to user */
+ dsi_dual_connector_fix_modes(connector);
+ ret = dsi_dual_connector_tile_init(connector, id);
+ if (ret)
+ return ret;
+ ret = drm_mode_connector_set_tile_property(connector);
+ if (ret) {
+ pr_err("%s: set tile property failed, %d\n",
+ __func__, ret);
+ return ret;
+ }
+ }
+
+ return num;
+}
+
+static int dsi_mgr_connector_mode_valid(struct drm_connector *connector,
+ struct drm_display_mode *mode)
+{
+ int id = dsi_mgr_connector_get_id(connector);
+ struct msm_dsi *msm_dsi = dsi_mgr_get_dsi(id);
+ struct drm_encoder *encoder = msm_dsi_get_encoder(msm_dsi);
+ struct msm_drm_private *priv = connector->dev->dev_private;
+ struct msm_kms *kms = priv->kms;
+ long actual, requested;
+
+ DBG("");
+ requested = 1000 * mode->clock;
+ actual = kms->funcs->round_pixclk(kms, requested, encoder);
+
+ DBG("requested=%ld, actual=%ld", requested, actual);
+ if (actual != requested)
+ return MODE_CLOCK_RANGE;
+
+ return MODE_OK;
+}
+
+static struct drm_encoder *
+dsi_mgr_connector_best_encoder(struct drm_connector *connector)
+{
+ int id = dsi_mgr_connector_get_id(connector);
+ struct msm_dsi *msm_dsi = dsi_mgr_get_dsi(id);
+
+ DBG("");
+ return msm_dsi_get_encoder(msm_dsi);
+}
+
+static void dsi_mgr_bridge_pre_enable(struct drm_bridge *bridge)
+{
+ int id = dsi_mgr_bridge_get_id(bridge);
+ struct msm_dsi *msm_dsi = dsi_mgr_get_dsi(id);
+ struct msm_dsi *msm_dsi1 = dsi_mgr_get_dsi(DSI_1);
+ struct mipi_dsi_host *host = msm_dsi->host;
+ struct drm_panel *panel = msm_dsi->panel;
+ bool is_dual_panel = IS_DUAL_PANEL();
+ int ret;
+
+ DBG("id=%d", id);
+ if (!panel || (is_dual_panel && (DSI_1 == id)))
+ return;
+
+ ret = msm_dsi_host_power_on(host);
+ if (ret) {
+ pr_err("%s: power on host %d failed, %d\n", __func__, id, ret);
+ goto host_on_fail;
+ }
+
+ if (is_dual_panel && msm_dsi1) {
+ ret = msm_dsi_host_power_on(msm_dsi1->host);
+ if (ret) {
+ pr_err("%s: power on host1 failed, %d\n",
+ __func__, ret);
+ goto host1_on_fail;
+ }
+ }
+
+ /* Always call panel functions once, because even for dual panels,
+ * there is only one drm_panel instance.
+ */
+ ret = drm_panel_prepare(panel);
+ if (ret) {
+ pr_err("%s: prepare panel %d failed, %d\n", __func__, id, ret);
+ goto panel_prep_fail;
+ }
+
+ ret = msm_dsi_host_enable(host);
+ if (ret) {
+ pr_err("%s: enable host %d failed, %d\n", __func__, id, ret);
+ goto host_en_fail;
+ }
+
+ if (is_dual_panel && msm_dsi1) {
+ ret = msm_dsi_host_enable(msm_dsi1->host);
+ if (ret) {
+ pr_err("%s: enable host1 failed, %d\n", __func__, ret);
+ goto host1_en_fail;
+ }
+ }
+
+ ret = drm_panel_enable(panel);
+ if (ret) {
+ pr_err("%s: enable panel %d failed, %d\n", __func__, id, ret);
+ goto panel_en_fail;
+ }
+
+ return;
+
+panel_en_fail:
+ if (is_dual_panel && msm_dsi1)
+ msm_dsi_host_disable(msm_dsi1->host);
+host1_en_fail:
+ msm_dsi_host_disable(host);
+host_en_fail:
+ drm_panel_unprepare(panel);
+panel_prep_fail:
+ if (is_dual_panel && msm_dsi1)
+ msm_dsi_host_power_off(msm_dsi1->host);
+host1_on_fail:
+ msm_dsi_host_power_off(host);
+host_on_fail:
+ return;
+}
+
+static void dsi_mgr_bridge_enable(struct drm_bridge *bridge)
+{
+ DBG("");
+}
+
+static void dsi_mgr_bridge_disable(struct drm_bridge *bridge)
+{
+ DBG("");
+}
+
+static void dsi_mgr_bridge_post_disable(struct drm_bridge *bridge)
+{
+ int id = dsi_mgr_bridge_get_id(bridge);
+ struct msm_dsi *msm_dsi = dsi_mgr_get_dsi(id);
+ struct msm_dsi *msm_dsi1 = dsi_mgr_get_dsi(DSI_1);
+ struct mipi_dsi_host *host = msm_dsi->host;
+ struct drm_panel *panel = msm_dsi->panel;
+ bool is_dual_panel = IS_DUAL_PANEL();
+ int ret;
+
+ DBG("id=%d", id);
+
+ if (!panel || (is_dual_panel && (DSI_1 == id)))
+ return;
+
+ ret = drm_panel_disable(panel);
+ if (ret)
+ pr_err("%s: Panel %d OFF failed, %d\n", __func__, id, ret);
+
+ ret = msm_dsi_host_disable(host);
+ if (ret)
+ pr_err("%s: host %d disable failed, %d\n", __func__, id, ret);
+
+ if (is_dual_panel && msm_dsi1) {
+ ret = msm_dsi_host_disable(msm_dsi1->host);
+ if (ret)
+ pr_err("%s: host1 disable failed, %d\n", __func__, ret);
+ }
+
+ ret = drm_panel_unprepare(panel);
+ if (ret)
+ pr_err("%s: Panel %d unprepare failed,%d\n", __func__, id, ret);
+
+ ret = msm_dsi_host_power_off(host);
+ if (ret)
+ pr_err("%s: host %d power off failed,%d\n", __func__, id, ret);
+
+ if (is_dual_panel && msm_dsi1) {
+ ret = msm_dsi_host_power_off(msm_dsi1->host);
+ if (ret)
+ pr_err("%s: host1 power off failed, %d\n",
+ __func__, ret);
+ }
+}
+
+static void dsi_mgr_bridge_mode_set(struct drm_bridge *bridge,
+ struct drm_display_mode *mode,
+ struct drm_display_mode *adjusted_mode)
+{
+ int id = dsi_mgr_bridge_get_id(bridge);
+ struct msm_dsi *msm_dsi = dsi_mgr_get_dsi(id);
+ struct msm_dsi *other_dsi = dsi_mgr_get_other_dsi(id);
+ struct mipi_dsi_host *host = msm_dsi->host;
+ bool is_dual_panel = IS_DUAL_PANEL();
+
+ DBG("set mode: %d:\"%s\" %d %d %d %d %d %d %d %d %d %d 0x%x 0x%x",
+ mode->base.id, mode->name,
+ mode->vrefresh, mode->clock,
+ mode->hdisplay, mode->hsync_start,
+ mode->hsync_end, mode->htotal,
+ mode->vdisplay, mode->vsync_start,
+ mode->vsync_end, mode->vtotal,
+ mode->type, mode->flags);
+
+ if (is_dual_panel && (DSI_1 == id))
+ return;
+
+ msm_dsi_host_set_display_mode(host, adjusted_mode);
+ if (is_dual_panel && other_dsi)
+ msm_dsi_host_set_display_mode(other_dsi->host, adjusted_mode);
+}
+
+static const struct drm_connector_funcs dsi_mgr_connector_funcs = {
+ .dpms = drm_atomic_helper_connector_dpms,
+ .detect = dsi_mgr_connector_detect,
+ .fill_modes = drm_helper_probe_single_connector_modes,
+ .destroy = dsi_mgr_connector_destroy,
+ .reset = drm_atomic_helper_connector_reset,
+ .atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
+ .atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
+};
+
+static const struct drm_connector_helper_funcs dsi_mgr_conn_helper_funcs = {
+ .get_modes = dsi_mgr_connector_get_modes,
+ .mode_valid = dsi_mgr_connector_mode_valid,
+ .best_encoder = dsi_mgr_connector_best_encoder,
+};
+
+static const struct drm_bridge_funcs dsi_mgr_bridge_funcs = {
+ .pre_enable = dsi_mgr_bridge_pre_enable,
+ .enable = dsi_mgr_bridge_enable,
+ .disable = dsi_mgr_bridge_disable,
+ .post_disable = dsi_mgr_bridge_post_disable,
+ .mode_set = dsi_mgr_bridge_mode_set,
+};
+
+/* initialize connector */
+struct drm_connector *msm_dsi_manager_connector_init(u8 id)
+{
+ struct msm_dsi *msm_dsi = dsi_mgr_get_dsi(id);
+ struct drm_connector *connector = NULL;
+ struct dsi_connector *dsi_connector;
+ int ret;
+
+ dsi_connector = devm_kzalloc(msm_dsi->dev->dev,
+ sizeof(*dsi_connector), GFP_KERNEL);
+ if (!dsi_connector) {
+ ret = -ENOMEM;
+ goto fail;
+ }
+
+ dsi_connector->id = id;
+
+ connector = &dsi_connector->base;
+
+ ret = drm_connector_init(msm_dsi->dev, connector,
+ &dsi_mgr_connector_funcs, DRM_MODE_CONNECTOR_DSI);
+ if (ret)
+ goto fail;
+
+ drm_connector_helper_add(connector, &dsi_mgr_conn_helper_funcs);
+
+	/* Enable HPD so that the hpd event is handled
+	 * when the panel is attached to the host.
+	 */
+ connector->polled = DRM_CONNECTOR_POLL_HPD;
+
+ /* Display driver doesn't support interlace now. */
+ connector->interlace_allowed = 0;
+ connector->doublescan_allowed = 0;
+
+ ret = drm_connector_register(connector);
+ if (ret)
+ goto fail;
+
+ return connector;
+
+fail:
+ if (connector)
+ dsi_mgr_connector_destroy(connector);
+
+ return ERR_PTR(ret);
+}
+
+/* initialize bridge */
+struct drm_bridge *msm_dsi_manager_bridge_init(u8 id)
+{
+ struct msm_dsi *msm_dsi = dsi_mgr_get_dsi(id);
+ struct drm_bridge *bridge = NULL;
+ struct dsi_bridge *dsi_bridge;
+ int ret;
+
+ dsi_bridge = devm_kzalloc(msm_dsi->dev->dev,
+ sizeof(*dsi_bridge), GFP_KERNEL);
+ if (!dsi_bridge) {
+ ret = -ENOMEM;
+ goto fail;
+ }
+
+ dsi_bridge->id = id;
+
+ bridge = &dsi_bridge->base;
+ bridge->funcs = &dsi_mgr_bridge_funcs;
+
+ ret = drm_bridge_attach(msm_dsi->dev, bridge);
+ if (ret)
+ goto fail;
+
+ return bridge;
+
+fail:
+ if (bridge)
+ msm_dsi_manager_bridge_destroy(bridge);
+
+ return ERR_PTR(ret);
+}
+
+void msm_dsi_manager_bridge_destroy(struct drm_bridge *bridge)
+{
+}
+
+int msm_dsi_manager_phy_enable(int id,
+ const unsigned long bit_rate, const unsigned long esc_rate,
+ u32 *clk_pre, u32 *clk_post)
+{
+ struct msm_dsi *msm_dsi = dsi_mgr_get_dsi(id);
+ struct msm_dsi_phy *phy = msm_dsi->phy;
+ int ret;
+
+ ret = msm_dsi_phy_enable(phy, IS_DUAL_PANEL(), bit_rate, esc_rate);
+ if (ret)
+ return ret;
+
+ msm_dsi->phy_enabled = true;
+ msm_dsi_phy_get_clk_pre_post(phy, clk_pre, clk_post);
+
+ return 0;
+}
+
+void msm_dsi_manager_phy_disable(int id)
+{
+ struct msm_dsi *msm_dsi = dsi_mgr_get_dsi(id);
+ struct msm_dsi *mdsi = dsi_mgr_get_dsi(DSI_CLOCK_MASTER);
+ struct msm_dsi *sdsi = dsi_mgr_get_dsi(DSI_CLOCK_SLAVE);
+ struct msm_dsi_phy *phy = msm_dsi->phy;
+
+ /* disable DSI phy
+ * In dual-dsi configuration, the phy should be disabled for the
+ * first controller only when the second controller is disabled.
+ */
+ msm_dsi->phy_enabled = false;
+ if (IS_DUAL_PANEL() && mdsi && sdsi) {
+ if (!mdsi->phy_enabled && !sdsi->phy_enabled) {
+ msm_dsi_phy_disable(sdsi->phy);
+ msm_dsi_phy_disable(mdsi->phy);
+ }
+ } else {
+ msm_dsi_phy_disable(phy);
+ }
+}
+
+int msm_dsi_manager_cmd_xfer(int id, const struct mipi_dsi_msg *msg)
+{
+ struct msm_dsi *msm_dsi = dsi_mgr_get_dsi(id);
+ struct msm_dsi *msm_dsi0 = dsi_mgr_get_dsi(DSI_0);
+ struct mipi_dsi_host *host = msm_dsi->host;
+ bool is_read = (msg->rx_buf && msg->rx_len);
+ bool need_sync = (IS_SYNC_NEEDED() && !is_read);
+ int ret;
+
+ if (!msg->tx_buf || !msg->tx_len)
+ return 0;
+
+	/* In the dual master case, the panel requires the same commands to be
+	 * sent to both DSI links. The host issues the command trigger to both
+	 * links when DSI_1 calls the cmd transfer function, no matter whether
+	 * it happens before or after the DSI_0 cmd transfer.
+	 */
+ if (need_sync && (id == DSI_0))
+ return is_read ? msg->rx_len : msg->tx_len;
+
+ if (need_sync && msm_dsi0) {
+ ret = msm_dsi_host_xfer_prepare(msm_dsi0->host, msg);
+ if (ret) {
+ pr_err("%s: failed to prepare non-trigger host, %d\n",
+ __func__, ret);
+ return ret;
+ }
+ }
+ ret = msm_dsi_host_xfer_prepare(host, msg);
+ if (ret) {
+ pr_err("%s: failed to prepare host, %d\n", __func__, ret);
+ goto restore_host0;
+ }
+
+ ret = is_read ? msm_dsi_host_cmd_rx(host, msg) :
+ msm_dsi_host_cmd_tx(host, msg);
+
+ msm_dsi_host_xfer_restore(host, msg);
+
+restore_host0:
+ if (need_sync && msm_dsi0)
+ msm_dsi_host_xfer_restore(msm_dsi0->host, msg);
+
+ return ret;
+}
+
+bool msm_dsi_manager_cmd_xfer_trigger(int id, u32 iova, u32 len)
+{
+ struct msm_dsi *msm_dsi = dsi_mgr_get_dsi(id);
+ struct msm_dsi *msm_dsi0 = dsi_mgr_get_dsi(DSI_0);
+ struct mipi_dsi_host *host = msm_dsi->host;
+
+ if (IS_SYNC_NEEDED() && (id == DSI_0))
+ return false;
+
+ if (IS_SYNC_NEEDED() && msm_dsi0)
+ msm_dsi_host_cmd_xfer_commit(msm_dsi0->host, iova, len);
+
+ msm_dsi_host_cmd_xfer_commit(host, iova, len);
+
+ return true;
+}
+
+int msm_dsi_manager_register(struct msm_dsi *msm_dsi)
+{
+ struct msm_dsi_manager *msm_dsim = &msm_dsim_glb;
+ int id = msm_dsi->id;
+ struct msm_dsi *other_dsi = dsi_mgr_get_other_dsi(id);
+ int ret;
+
+ if (id > DSI_MAX) {
+ pr_err("%s: invalid id %d\n", __func__, id);
+ return -EINVAL;
+ }
+
+ if (msm_dsim->dsi[id]) {
+ pr_err("%s: dsi%d already registered\n", __func__, id);
+ return -EBUSY;
+ }
+
+ msm_dsim->dsi[id] = msm_dsi;
+
+ ret = dsi_mgr_parse_dual_panel(msm_dsi->pdev->dev.of_node, id);
+ if (ret) {
+ pr_err("%s: failed to parse dual panel info\n", __func__);
+ return ret;
+ }
+
+ if (!IS_DUAL_PANEL()) {
+ ret = msm_dsi_host_register(msm_dsi->host, true);
+ } else if (!other_dsi) {
+ return 0;
+ } else {
+ struct msm_dsi *mdsi = IS_MASTER_PANEL(id) ?
+ msm_dsi : other_dsi;
+ struct msm_dsi *sdsi = IS_MASTER_PANEL(id) ?
+ other_dsi : msm_dsi;
+ /* Register the slave host first, so that the slave DSI device
+ * has a chance to probe without blocking the master DSI
+ * device's probe.
+ * Also, do not check for probe deferral on the slave host,
+ * because only the master DSI device adds the panel to the
+ * global panel list; the panel's device is the master DSI device.
+ */
+ ret = msm_dsi_host_register(sdsi->host, false);
+ if (ret)
+ return ret;
+ ret = msm_dsi_host_register(mdsi->host, true);
+ }
+
+ return ret;
+}
+
+void msm_dsi_manager_unregister(struct msm_dsi *msm_dsi)
+{
+ struct msm_dsi_manager *msm_dsim = &msm_dsim_glb;
+
+ if (msm_dsi->host)
+ msm_dsi_host_unregister(msm_dsi->host);
+ msm_dsim->dsi[msm_dsi->id] = NULL;
+}
+
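For reference, the dual-DSI command trigger above defers on DSI_0 when broadcast sync is needed and lets DSI_1 commit the transfer on both hosts, so the panel sees identical commands on both links. A minimal standalone sketch of that gating follows; the names (link_commit, plain integer link ids) are illustrative and not part of the driver:

/*
 * Minimal sketch of the gating in msm_dsi_manager_cmd_xfer_trigger():
 * only the control flow mirrors the driver, everything else is made up.
 */
#include <stdbool.h>
#include <stdio.h>

static void link_commit(int link)
{
	printf("commit DMA on DSI_%d\n", link);
}

/* Returns true only when this call actually triggered the hardware. */
static bool cmd_xfer_trigger(int id, bool sync_needed)
{
	if (sync_needed && id == 0)
		return false;		/* DSI_0 defers to DSI_1 */

	if (sync_needed)
		link_commit(0);		/* kick the other link first */

	link_commit(id);
	return true;
}

int main(void)
{
	cmd_xfer_trigger(0, true);	/* deferred, no commit */
	cmd_xfer_trigger(1, true);	/* commits DSI_0 then DSI_1 */
	return 0;
}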
diff --git a/drivers/gpu/drm/msm/dsi/dsi_phy.c b/drivers/gpu/drm/msm/dsi/dsi_phy.c
new file mode 100644
index 0000000..f0cea89
--- /dev/null
+++ b/drivers/gpu/drm/msm/dsi/dsi_phy.c
@@ -0,0 +1,352 @@
+/*
+ * Copyright (c) 2015, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include "dsi.h"
+#include "dsi.xml.h"
+
+#define dsi_phy_read(offset) msm_readl((offset))
+#define dsi_phy_write(offset, data) msm_writel((data), (offset))
+
+struct dsi_dphy_timing {
+ u32 clk_pre;
+ u32 clk_post;
+ u32 clk_zero;
+ u32 clk_trail;
+ u32 clk_prepare;
+ u32 hs_exit;
+ u32 hs_zero;
+ u32 hs_prepare;
+ u32 hs_trail;
+ u32 hs_rqst;
+ u32 ta_go;
+ u32 ta_sure;
+ u32 ta_get;
+};
+
+struct msm_dsi_phy {
+ void __iomem *base;
+ void __iomem *reg_base;
+ int id;
+ struct dsi_dphy_timing timing;
+ int (*enable)(struct msm_dsi_phy *phy, bool is_dual_panel,
+ const unsigned long bit_rate, const unsigned long esc_rate);
+ int (*disable)(struct msm_dsi_phy *phy);
+};
+
+#define S_DIV_ROUND_UP(n, d) \
+ (((n) >= 0) ? (((n) + (d) - 1) / (d)) : (((n) - (d) + 1) / (d)))
+
+static inline s32 linear_inter(s32 tmax, s32 tmin, s32 percent,
+ s32 min_result, bool even)
+{
+ s32 v;
+ v = (tmax - tmin) * percent;
+ v = S_DIV_ROUND_UP(v, 100) + tmin;
+ if (even && (v & 0x1))
+ return max_t(s32, min_result, v - 1);
+ else
+ return max_t(s32, min_result, v);
+}
+
+static void dsi_dphy_timing_calc_clk_zero(struct dsi_dphy_timing *timing,
+ s32 ui, s32 coeff, s32 pcnt)
+{
+ s32 tmax, tmin, clk_z;
+ s32 temp;
+
+ /* reset */
+ temp = 300 * coeff - ((timing->clk_prepare >> 1) + 1) * 2 * ui;
+ tmin = S_DIV_ROUND_UP(temp, ui) - 2;
+ if (tmin > 255) {
+ tmax = 511;
+ clk_z = linear_inter(2 * tmin, tmin, pcnt, 0, true);
+ } else {
+ tmax = 255;
+ clk_z = linear_inter(tmax, tmin, pcnt, 0, true);
+ }
+
+ /* adjust */
+ temp = (timing->hs_rqst + timing->clk_prepare + clk_z) & 0x7;
+ timing->clk_zero = clk_z + 8 - temp;
+}
+
+static int dsi_dphy_timing_calc(struct dsi_dphy_timing *timing,
+ const unsigned long bit_rate, const unsigned long esc_rate)
+{
+ s32 ui, lpx;
+ s32 tmax, tmin;
+ s32 pcnt0 = 10;
+ s32 pcnt1 = (bit_rate > 1200000000) ? 15 : 10;
+ s32 pcnt2 = 10;
+ s32 pcnt3 = (bit_rate > 180000000) ? 10 : 40;
+ s32 coeff = 1000; /* Precision, should avoid overflow */
+ s32 temp;
+
+ if (!bit_rate || !esc_rate)
+ return -EINVAL;
+
+ ui = mult_frac(NSEC_PER_MSEC, coeff, bit_rate / 1000);
+ lpx = mult_frac(NSEC_PER_MSEC, coeff, esc_rate / 1000);
+
+ tmax = S_DIV_ROUND_UP(95 * coeff, ui) - 2;
+ tmin = S_DIV_ROUND_UP(38 * coeff, ui) - 2;
+ timing->clk_prepare = linear_inter(tmax, tmin, pcnt0, 0, true);
+
+ temp = lpx / ui;
+ if (temp & 0x1)
+ timing->hs_rqst = temp;
+ else
+ timing->hs_rqst = max_t(s32, 0, temp - 2);
+
+ /* Calculate clk_zero after clk_prepare and hs_rqst */
+ dsi_dphy_timing_calc_clk_zero(timing, ui, coeff, pcnt2);
+
+ temp = 105 * coeff + 12 * ui - 20 * coeff;
+ tmax = S_DIV_ROUND_UP(temp, ui) - 2;
+ tmin = S_DIV_ROUND_UP(60 * coeff, ui) - 2;
+ timing->clk_trail = linear_inter(tmax, tmin, pcnt3, 0, true);
+
+ temp = 85 * coeff + 6 * ui;
+ tmax = S_DIV_ROUND_UP(temp, ui) - 2;
+ temp = 40 * coeff + 4 * ui;
+ tmin = S_DIV_ROUND_UP(temp, ui) - 2;
+ timing->hs_prepare = linear_inter(tmax, tmin, pcnt1, 0, true);
+
+ tmax = 255;
+ temp = ((timing->hs_prepare >> 1) + 1) * 2 * ui + 2 * ui;
+ temp = 145 * coeff + 10 * ui - temp;
+ tmin = S_DIV_ROUND_UP(temp, ui) - 2;
+ timing->hs_zero = linear_inter(tmax, tmin, pcnt2, 24, true);
+
+ temp = 105 * coeff + 12 * ui - 20 * coeff;
+ tmax = S_DIV_ROUND_UP(temp, ui) - 2;
+ temp = 60 * coeff + 4 * ui;
+ tmin = DIV_ROUND_UP(temp, ui) - 2;
+ timing->hs_trail = linear_inter(tmax, tmin, pcnt3, 0, true);
+
+ tmax = 255;
+ tmin = S_DIV_ROUND_UP(100 * coeff, ui) - 2;
+ timing->hs_exit = linear_inter(tmax, tmin, pcnt2, 0, true);
+
+ tmax = 63;
+ temp = ((timing->hs_exit >> 1) + 1) * 2 * ui;
+ temp = 60 * coeff + 52 * ui - 24 * ui - temp;
+ tmin = S_DIV_ROUND_UP(temp, 8 * ui) - 1;
+ timing->clk_post = linear_inter(tmax, tmin, pcnt2, 0, false);
+
+ tmax = 63;
+ temp = ((timing->clk_prepare >> 1) + 1) * 2 * ui;
+ temp += ((timing->clk_zero >> 1) + 1) * 2 * ui;
+ temp += 8 * ui + lpx;
+ tmin = S_DIV_ROUND_UP(temp, 8 * ui) - 1;
+ if (tmin > tmax) {
+ temp = linear_inter(2 * tmax, tmin, pcnt2, 0, false) >> 1;
+ timing->clk_pre = temp >> 1;
+ temp = (2 * tmax - tmin) * pcnt2;
+ } else {
+ timing->clk_pre = linear_inter(tmax, tmin, pcnt2, 0, false);
+ }
+
+ timing->ta_go = 3;
+ timing->ta_sure = 0;
+ timing->ta_get = 4;
+
+ DBG("PHY timings: %d, %d, %d, %d, %d, %d, %d, %d, %d, %d",
+ timing->clk_pre, timing->clk_post, timing->clk_zero,
+ timing->clk_trail, timing->clk_prepare, timing->hs_exit,
+ timing->hs_zero, timing->hs_prepare, timing->hs_trail,
+ timing->hs_rqst);
+
+ return 0;
+}
+
+static void dsi_28nm_phy_regulator_ctrl(struct msm_dsi_phy *phy, bool enable)
+{
+ void __iomem *base = phy->reg_base;
+
+ if (!enable) {
+ dsi_phy_write(base + REG_DSI_28nm_PHY_REGULATOR_CAL_PWR_CFG, 0);
+ return;
+ }
+
+ dsi_phy_write(base + REG_DSI_28nm_PHY_REGULATOR_CTRL_0, 0x0);
+ dsi_phy_write(base + REG_DSI_28nm_PHY_REGULATOR_CAL_PWR_CFG, 1);
+ dsi_phy_write(base + REG_DSI_28nm_PHY_REGULATOR_CTRL_5, 0);
+ dsi_phy_write(base + REG_DSI_28nm_PHY_REGULATOR_CTRL_3, 0);
+ dsi_phy_write(base + REG_DSI_28nm_PHY_REGULATOR_CTRL_2, 0x3);
+ dsi_phy_write(base + REG_DSI_28nm_PHY_REGULATOR_CTRL_1, 0x9);
+ dsi_phy_write(base + REG_DSI_28nm_PHY_REGULATOR_CTRL_0, 0x7);
+ dsi_phy_write(base + REG_DSI_28nm_PHY_REGULATOR_CTRL_4, 0x20);
+}
+
+static int dsi_28nm_phy_enable(struct msm_dsi_phy *phy, bool is_dual_panel,
+ const unsigned long bit_rate, const unsigned long esc_rate)
+{
+ struct dsi_dphy_timing *timing = &phy->timing;
+ int i;
+ void __iomem *base = phy->base;
+
+ DBG("");
+
+ if (dsi_dphy_timing_calc(timing, bit_rate, esc_rate)) {
+ pr_err("%s: D-PHY timing calculation failed\n", __func__);
+ return -EINVAL;
+ }
+
+ dsi_phy_write(base + REG_DSI_28nm_PHY_STRENGTH_0, 0xff);
+
+ dsi_28nm_phy_regulator_ctrl(phy, true);
+
+ dsi_phy_write(base + REG_DSI_28nm_PHY_LDO_CNTRL, 0x00);
+
+ dsi_phy_write(base + REG_DSI_28nm_PHY_TIMING_CTRL_0,
+ DSI_28nm_PHY_TIMING_CTRL_0_CLK_ZERO(timing->clk_zero));
+ dsi_phy_write(base + REG_DSI_28nm_PHY_TIMING_CTRL_1,
+ DSI_28nm_PHY_TIMING_CTRL_1_CLK_TRAIL(timing->clk_trail));
+ dsi_phy_write(base + REG_DSI_28nm_PHY_TIMING_CTRL_2,
+ DSI_28nm_PHY_TIMING_CTRL_2_CLK_PREPARE(timing->clk_prepare));
+ if (timing->clk_zero & BIT(8))
+ dsi_phy_write(base + REG_DSI_28nm_PHY_TIMING_CTRL_3,
+ DSI_28nm_PHY_TIMING_CTRL_3_CLK_ZERO_8);
+ dsi_phy_write(base + REG_DSI_28nm_PHY_TIMING_CTRL_4,
+ DSI_28nm_PHY_TIMING_CTRL_4_HS_EXIT(timing->hs_exit));
+ dsi_phy_write(base + REG_DSI_28nm_PHY_TIMING_CTRL_5,
+ DSI_28nm_PHY_TIMING_CTRL_5_HS_ZERO(timing->hs_zero));
+ dsi_phy_write(base + REG_DSI_28nm_PHY_TIMING_CTRL_6,
+ DSI_28nm_PHY_TIMING_CTRL_6_HS_PREPARE(timing->hs_prepare));
+ dsi_phy_write(base + REG_DSI_28nm_PHY_TIMING_CTRL_7,
+ DSI_28nm_PHY_TIMING_CTRL_7_HS_TRAIL(timing->hs_trail));
+ dsi_phy_write(base + REG_DSI_28nm_PHY_TIMING_CTRL_8,
+ DSI_28nm_PHY_TIMING_CTRL_8_HS_RQST(timing->hs_rqst));
+ dsi_phy_write(base + REG_DSI_28nm_PHY_TIMING_CTRL_9,
+ DSI_28nm_PHY_TIMING_CTRL_9_TA_GO(timing->ta_go) |
+ DSI_28nm_PHY_TIMING_CTRL_9_TA_SURE(timing->ta_sure));
+ dsi_phy_write(base + REG_DSI_28nm_PHY_TIMING_CTRL_10,
+ DSI_28nm_PHY_TIMING_CTRL_10_TA_GET(timing->ta_get));
+ dsi_phy_write(base + REG_DSI_28nm_PHY_TIMING_CTRL_11,
+ DSI_28nm_PHY_TIMING_CTRL_11_TRIG3_CMD(0));
+
+ dsi_phy_write(base + REG_DSI_28nm_PHY_CTRL_1, 0x00);
+ dsi_phy_write(base + REG_DSI_28nm_PHY_CTRL_0, 0x5f);
+
+ dsi_phy_write(base + REG_DSI_28nm_PHY_STRENGTH_1, 0x6);
+
+ for (i = 0; i < 4; i++) {
+ dsi_phy_write(base + REG_DSI_28nm_PHY_LN_CFG_0(i), 0);
+ dsi_phy_write(base + REG_DSI_28nm_PHY_LN_CFG_1(i), 0);
+ dsi_phy_write(base + REG_DSI_28nm_PHY_LN_CFG_2(i), 0);
+ dsi_phy_write(base + REG_DSI_28nm_PHY_LN_CFG_3(i), 0);
+ dsi_phy_write(base + REG_DSI_28nm_PHY_LN_TEST_DATAPATH(i), 0);
+ dsi_phy_write(base + REG_DSI_28nm_PHY_LN_DEBUG_SEL(i), 0);
+ dsi_phy_write(base + REG_DSI_28nm_PHY_LN_TEST_STR_0(i), 0x1);
+ dsi_phy_write(base + REG_DSI_28nm_PHY_LN_TEST_STR_1(i), 0x97);
+ }
+ dsi_phy_write(base + REG_DSI_28nm_PHY_LN_CFG_4(0), 0);
+ dsi_phy_write(base + REG_DSI_28nm_PHY_LN_CFG_4(1), 0x5);
+ dsi_phy_write(base + REG_DSI_28nm_PHY_LN_CFG_4(2), 0xa);
+ dsi_phy_write(base + REG_DSI_28nm_PHY_LN_CFG_4(3), 0xf);
+
+ dsi_phy_write(base + REG_DSI_28nm_PHY_LNCK_CFG_1, 0xc0);
+ dsi_phy_write(base + REG_DSI_28nm_PHY_LNCK_TEST_STR0, 0x1);
+ dsi_phy_write(base + REG_DSI_28nm_PHY_LNCK_TEST_STR1, 0xbb);
+
+ dsi_phy_write(base + REG_DSI_28nm_PHY_CTRL_0, 0x5f);
+
+ if (is_dual_panel && (phy->id != DSI_CLOCK_MASTER))
+ dsi_phy_write(base + REG_DSI_28nm_PHY_GLBL_TEST_CTRL, 0x00);
+ else
+ dsi_phy_write(base + REG_DSI_28nm_PHY_GLBL_TEST_CTRL, 0x01);
+
+ return 0;
+}
+
+static int dsi_28nm_phy_disable(struct msm_dsi_phy *phy)
+{
+ dsi_phy_write(phy->base + REG_DSI_28nm_PHY_CTRL_0, 0);
+ dsi_28nm_phy_regulator_ctrl(phy, false);
+
+ /*
+ * Wait for the register writes to complete in order to
+ * ensure that the PHY is completely disabled.
+ */
+ wmb();
+
+ return 0;
+}
+
+#define dsi_phy_func_init(name) \
+ do { \
+ phy->enable = dsi_##name##_phy_enable; \
+ phy->disable = dsi_##name##_phy_disable; \
+ } while (0)
+
+struct msm_dsi_phy *msm_dsi_phy_init(struct platform_device *pdev,
+ enum msm_dsi_phy_type type, int id)
+{
+ struct msm_dsi_phy *phy;
+
+ phy = devm_kzalloc(&pdev->dev, sizeof(*phy), GFP_KERNEL);
+ if (!phy)
+ return NULL;
+
+ phy->base = msm_ioremap(pdev, "dsi_phy", "DSI_PHY");
+ if (IS_ERR_OR_NULL(phy->base)) {
+ pr_err("%s: failed to map phy base\n", __func__);
+ return NULL;
+ }
+ phy->reg_base = msm_ioremap(pdev, "dsi_phy_regulator", "DSI_PHY_REG");
+ if (IS_ERR_OR_NULL(phy->reg_base)) {
+ pr_err("%s: failed to map phy regulator base\n", __func__);
+ return NULL;
+ }
+
+ switch (type) {
+ case MSM_DSI_PHY_28NM:
+ dsi_phy_func_init(28nm);
+ break;
+ default:
+ pr_err("%s: unsupported type, %d\n", __func__, type);
+ return NULL;
+ }
+
+ phy->id = id;
+
+ return phy;
+}
+
+int msm_dsi_phy_enable(struct msm_dsi_phy *phy, bool is_dual_panel,
+ const unsigned long bit_rate, const unsigned long esc_rate)
+{
+ if (!phy || !phy->enable)
+ return -EINVAL;
+ return phy->enable(phy, is_dual_panel, bit_rate, esc_rate);
+}
+
+int msm_dsi_phy_disable(struct msm_dsi_phy *phy)
+{
+ if (!phy || !phy->disable)
+ return -EINVAL;
+ return phy->disable(phy);
+}
+
+void msm_dsi_phy_get_clk_pre_post(struct msm_dsi_phy *phy,
+ u32 *clk_pre, u32 *clk_post)
+{
+ if (!phy)
+ return;
+ if (clk_pre)
+ *clk_pre = phy->timing.clk_pre;
+ if (clk_post)
+ *clk_post = phy->timing.clk_post;
+}
+
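dsi_dphy_timing_calc() above repeatedly picks a value a fixed percentage of the way from tmin to tmax, rounded up and optionally forced even, via linear_inter() and S_DIV_ROUND_UP(). The standalone sketch below mirrors that helper with plain ints and two worked values; it is illustrative only:

/*
 * Standalone sketch of the interpolation used by the timing code above:
 * v = ceil((tmax - tmin) * percent / 100) + tmin, optionally rounded
 * down to an even value and clamped to min_result.
 */
#include <stdio.h>

#define S_DIV_ROUND_UP(n, d) \
	(((n) >= 0) ? (((n) + (d) - 1) / (d)) : (((n) - (d) + 1) / (d)))

static int linear_inter(int tmax, int tmin, int percent,
			int min_result, int even)
{
	int v = S_DIV_ROUND_UP((tmax - tmin) * percent, 100) + tmin;

	if (even && (v & 1))
		v -= 1;
	return (v > min_result) ? v : min_result;
}

int main(void)
{
	/* 10% of the way from 4 to 40, forced even: ceil(3.6) + 4 = 8 */
	printf("%d\n", linear_inter(40, 4, 10, 0, 1));
	/* 10% of the way from 59 to 255, forced even: 79 -> 78 */
	printf("%d\n", linear_inter(255, 59, 10, 24, 1));
	return 0;
}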
diff --git a/drivers/gpu/drm/msm/hdmi/hdmi_phy_8960.c b/drivers/gpu/drm/msm/hdmi/hdmi_phy_8960.c
index eeed006..6997ec6 100644
--- a/drivers/gpu/drm/msm/hdmi/hdmi_phy_8960.c
+++ b/drivers/gpu/drm/msm/hdmi/hdmi_phy_8960.c
@@ -53,6 +53,23 @@
/* NOTE: keep sorted highest freq to lowest: */
static const struct pll_rate freqtbl[] = {
+ { 154000000, {
+ { 0x08, REG_HDMI_8960_PHY_PLL_REFCLK_CFG },
+ { 0x20, REG_HDMI_8960_PHY_PLL_LOOP_FLT_CFG0 },
+ { 0xf9, REG_HDMI_8960_PHY_PLL_LOOP_FLT_CFG1 },
+ { 0x02, REG_HDMI_8960_PHY_PLL_VCOCAL_CFG0 },
+ { 0x03, REG_HDMI_8960_PHY_PLL_VCOCAL_CFG1 },
+ { 0x3b, REG_HDMI_8960_PHY_PLL_VCOCAL_CFG2 },
+ { 0x00, REG_HDMI_8960_PHY_PLL_VCOCAL_CFG3 },
+ { 0x86, REG_HDMI_8960_PHY_PLL_VCOCAL_CFG4 },
+ { 0x00, REG_HDMI_8960_PHY_PLL_VCOCAL_CFG5 },
+ { 0x0d, REG_HDMI_8960_PHY_PLL_SDM_CFG0 },
+ { 0x4d, REG_HDMI_8960_PHY_PLL_SDM_CFG1 },
+ { 0x5e, REG_HDMI_8960_PHY_PLL_SDM_CFG2 },
+ { 0x42, REG_HDMI_8960_PHY_PLL_SDM_CFG3 },
+ { 0x00, REG_HDMI_8960_PHY_PLL_SDM_CFG4 },
+ { 0, 0 } }
+ },
/* 1080p60/1080p50 case */
{ 148500000, {
{ 0x02, REG_HDMI_8960_PHY_PLL_REFCLK_CFG },
@@ -112,6 +129,23 @@
{ 0x3b, REG_HDMI_8960_PHY_PLL_VCOCAL_CFG2 },
{ 0, 0 } }
},
+ { 74176000, {
+ { 0x18, REG_HDMI_8960_PHY_PLL_REFCLK_CFG },
+ { 0x20, REG_HDMI_8960_PHY_PLL_LOOP_FLT_CFG0 },
+ { 0xf9, REG_HDMI_8960_PHY_PLL_LOOP_FLT_CFG1 },
+ { 0xe5, REG_HDMI_8960_PHY_PLL_VCOCAL_CFG0 },
+ { 0x02, REG_HDMI_8960_PHY_PLL_VCOCAL_CFG1 },
+ { 0x3b, REG_HDMI_8960_PHY_PLL_VCOCAL_CFG2 },
+ { 0x00, REG_HDMI_8960_PHY_PLL_VCOCAL_CFG3 },
+ { 0x86, REG_HDMI_8960_PHY_PLL_VCOCAL_CFG4 },
+ { 0x00, REG_HDMI_8960_PHY_PLL_VCOCAL_CFG5 },
+ { 0x0c, REG_HDMI_8960_PHY_PLL_SDM_CFG0 },
+ { 0x4c, REG_HDMI_8960_PHY_PLL_SDM_CFG1 },
+ { 0x7d, REG_HDMI_8960_PHY_PLL_SDM_CFG2 },
+ { 0xbc, REG_HDMI_8960_PHY_PLL_SDM_CFG3 },
+ { 0x00, REG_HDMI_8960_PHY_PLL_SDM_CFG4 },
+ { 0, 0 } }
+ },
{ 65000000, {
{ 0x18, REG_HDMI_8960_PHY_PLL_REFCLK_CFG },
{ 0x20, REG_HDMI_8960_PHY_PLL_LOOP_FLT_CFG0 },
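The new freqtbl[] entries above extend a table that, per the note above it, must stay sorted from highest to lowest frequency. One plausible use of such a descending table is to return the smallest listed rate that is still at or above the requested pixel clock; the sketch below illustrates that pattern with the rates from this hunk and is an assumption about intent, not the driver's actual helper:

/*
 * Sketch of searching a descending frequency table like freqtbl[]:
 * return the smallest listed rate that is still >= the requested rate.
 * Names and usage are illustrative only.
 */
#include <stddef.h>
#include <stdio.h>

static const unsigned long rates[] = {
	154000000, 148500000, 74176000, 65000000,
};

static unsigned long pick_rate(unsigned long requested)
{
	size_t i;

	for (i = 1; i < sizeof(rates) / sizeof(rates[0]); i++)
		if (requested > rates[i])
			return rates[i - 1];
	return rates[i - 1];	/* lowest entry as a fallback */
}

int main(void)
{
	printf("%lu\n", pick_rate(74250000));	/* -> 148500000 */
	printf("%lu\n", pick_rate(74176000));	/* -> 74176000 */
	return 0;
}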
diff --git a/drivers/gpu/drm/msm/mdp/mdp4/mdp4_plane.c b/drivers/gpu/drm/msm/mdp/mdp4/mdp4_plane.c
index cde2500..dbc0689 100644
--- a/drivers/gpu/drm/msm/mdp/mdp4/mdp4_plane.c
+++ b/drivers/gpu/drm/msm/mdp/mdp4/mdp4_plane.c
@@ -83,7 +83,8 @@
};
static int mdp4_plane_prepare_fb(struct drm_plane *plane,
- struct drm_framebuffer *fb)
+ struct drm_framebuffer *fb,
+ const struct drm_plane_state *new_state)
{
struct mdp4_plane *mdp4_plane = to_mdp4_plane(plane);
struct mdp4_kms *mdp4_kms = get_kms(plane);
@@ -93,7 +94,8 @@
}
static void mdp4_plane_cleanup_fb(struct drm_plane *plane,
- struct drm_framebuffer *fb)
+ struct drm_framebuffer *fb,
+ const struct drm_plane_state *old_state)
{
struct mdp4_plane *mdp4_plane = to_mdp4_plane(plane);
struct mdp4_kms *mdp4_kms = get_kms(plane);
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5.xml.h b/drivers/gpu/drm/msm/mdp/mdp5/mdp5.xml.h
index c276624..b9a4ded 100644
--- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5.xml.h
+++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5.xml.h
@@ -8,9 +8,9 @@
git clone https://github.com/freedreno/envytools.git
The rules-ng-ng source files this header was generated from are:
-- /local/mnt2/workspace2/sviau/envytools/rnndb/mdp/mdp5.xml ( 27229 bytes, from 2015-02-10 17:00:41)
+- /local/mnt2/workspace2/sviau/envytools/rnndb/mdp/mdp5.xml ( 29312 bytes, from 2015-03-23 21:18:48)
- /local/mnt2/workspace2/sviau/envytools/rnndb/freedreno_copyright.xml ( 1453 bytes, from 2014-06-02 18:31:15)
-- /local/mnt2/workspace2/sviau/envytools/rnndb/mdp/mdp_common.xml ( 2357 bytes, from 2015-01-23 16:20:19)
+- /local/mnt2/workspace2/sviau/envytools/rnndb/mdp/mdp_common.xml ( 2357 bytes, from 2015-03-23 20:38:49)
Copyright (C) 2013-2015 by the following authors:
- Rob Clark <robdclark@gmail.com> (robclark)
@@ -37,11 +37,14 @@
*/
-enum mdp5_intf {
+enum mdp5_intf_type {
+ INTF_DISABLED = 0,
INTF_DSI = 1,
INTF_HDMI = 3,
INTF_LCDC = 5,
INTF_eDP = 9,
+ INTF_VIRTUAL = 100,
+ INTF_WB = 101,
};
enum mdp5_intfnum {
@@ -67,11 +70,11 @@
enum mdp5_ctl_mode {
MODE_NONE = 0,
- MODE_ROT0 = 1,
- MODE_ROT1 = 2,
- MODE_WB0 = 3,
- MODE_WB1 = 4,
- MODE_WFD = 5,
+ MODE_WB_0_BLOCK = 1,
+ MODE_WB_1_BLOCK = 2,
+ MODE_WB_0_LINE = 3,
+ MODE_WB_1_LINE = 4,
+ MODE_WB_2_LINE = 5,
};
enum mdp5_pack_3d {
@@ -94,33 +97,6 @@
BWC_Q_MED = 2,
};
-enum mdp5_client_id {
- CID_UNUSED = 0,
- CID_VIG0_Y = 1,
- CID_VIG0_CR = 2,
- CID_VIG0_CB = 3,
- CID_VIG1_Y = 4,
- CID_VIG1_CR = 5,
- CID_VIG1_CB = 6,
- CID_VIG2_Y = 7,
- CID_VIG2_CR = 8,
- CID_VIG2_CB = 9,
- CID_DMA0_Y = 10,
- CID_DMA0_CR = 11,
- CID_DMA0_CB = 12,
- CID_DMA1_Y = 13,
- CID_DMA1_CR = 14,
- CID_DMA1_CB = 15,
- CID_RGB0 = 16,
- CID_RGB1 = 17,
- CID_RGB2 = 18,
- CID_VIG3_Y = 19,
- CID_VIG3_CR = 20,
- CID_VIG3_CB = 21,
- CID_RGB3 = 22,
- CID_MAX = 23,
-};
-
enum mdp5_cursor_format {
CURSOR_FMT_ARGB8888 = 0,
CURSOR_FMT_ARGB1555 = 2,
@@ -144,30 +120,25 @@
DATA_FORMAT_YUV = 1,
};
-#define MDP5_IRQ_INTF0_WB_ROT_COMP 0x00000001
-#define MDP5_IRQ_INTF1_WB_ROT_COMP 0x00000002
-#define MDP5_IRQ_INTF2_WB_ROT_COMP 0x00000004
-#define MDP5_IRQ_INTF3_WB_ROT_COMP 0x00000008
-#define MDP5_IRQ_INTF0_WB_WFD 0x00000010
-#define MDP5_IRQ_INTF1_WB_WFD 0x00000020
-#define MDP5_IRQ_INTF2_WB_WFD 0x00000040
-#define MDP5_IRQ_INTF3_WB_WFD 0x00000080
-#define MDP5_IRQ_INTF0_PING_PONG_COMP 0x00000100
-#define MDP5_IRQ_INTF1_PING_PONG_COMP 0x00000200
-#define MDP5_IRQ_INTF2_PING_PONG_COMP 0x00000400
-#define MDP5_IRQ_INTF3_PING_PONG_COMP 0x00000800
-#define MDP5_IRQ_INTF0_PING_PONG_RD_PTR 0x00001000
-#define MDP5_IRQ_INTF1_PING_PONG_RD_PTR 0x00002000
-#define MDP5_IRQ_INTF2_PING_PONG_RD_PTR 0x00004000
-#define MDP5_IRQ_INTF3_PING_PONG_RD_PTR 0x00008000
-#define MDP5_IRQ_INTF0_PING_PONG_WR_PTR 0x00010000
-#define MDP5_IRQ_INTF1_PING_PONG_WR_PTR 0x00020000
-#define MDP5_IRQ_INTF2_PING_PONG_WR_PTR 0x00040000
-#define MDP5_IRQ_INTF3_PING_PONG_WR_PTR 0x00080000
-#define MDP5_IRQ_INTF0_PING_PONG_AUTO_REF 0x00100000
-#define MDP5_IRQ_INTF1_PING_PONG_AUTO_REF 0x00200000
-#define MDP5_IRQ_INTF2_PING_PONG_AUTO_REF 0x00400000
-#define MDP5_IRQ_INTF3_PING_PONG_AUTO_REF 0x00800000
+#define MDP5_IRQ_WB_0_DONE 0x00000001
+#define MDP5_IRQ_WB_1_DONE 0x00000002
+#define MDP5_IRQ_WB_2_DONE 0x00000010
+#define MDP5_IRQ_PING_PONG_0_DONE 0x00000100
+#define MDP5_IRQ_PING_PONG_1_DONE 0x00000200
+#define MDP5_IRQ_PING_PONG_2_DONE 0x00000400
+#define MDP5_IRQ_PING_PONG_3_DONE 0x00000800
+#define MDP5_IRQ_PING_PONG_0_RD_PTR 0x00001000
+#define MDP5_IRQ_PING_PONG_1_RD_PTR 0x00002000
+#define MDP5_IRQ_PING_PONG_2_RD_PTR 0x00004000
+#define MDP5_IRQ_PING_PONG_3_RD_PTR 0x00008000
+#define MDP5_IRQ_PING_PONG_0_WR_PTR 0x00010000
+#define MDP5_IRQ_PING_PONG_1_WR_PTR 0x00020000
+#define MDP5_IRQ_PING_PONG_2_WR_PTR 0x00040000
+#define MDP5_IRQ_PING_PONG_3_WR_PTR 0x00080000
+#define MDP5_IRQ_PING_PONG_0_AUTO_REF 0x00100000
+#define MDP5_IRQ_PING_PONG_1_AUTO_REF 0x00200000
+#define MDP5_IRQ_PING_PONG_2_AUTO_REF 0x00400000
+#define MDP5_IRQ_PING_PONG_3_AUTO_REF 0x00800000
#define MDP5_IRQ_INTF0_UNDER_RUN 0x01000000
#define MDP5_IRQ_INTF0_VSYNC 0x02000000
#define MDP5_IRQ_INTF1_UNDER_RUN 0x04000000
@@ -176,136 +147,186 @@
#define MDP5_IRQ_INTF2_VSYNC 0x20000000
#define MDP5_IRQ_INTF3_UNDER_RUN 0x40000000
#define MDP5_IRQ_INTF3_VSYNC 0x80000000
-#define REG_MDP5_HW_VERSION 0x00000000
-
-#define REG_MDP5_HW_INTR_STATUS 0x00000010
-#define MDP5_HW_INTR_STATUS_INTR_MDP 0x00000001
-#define MDP5_HW_INTR_STATUS_INTR_DSI0 0x00000010
-#define MDP5_HW_INTR_STATUS_INTR_DSI1 0x00000020
-#define MDP5_HW_INTR_STATUS_INTR_HDMI 0x00000100
-#define MDP5_HW_INTR_STATUS_INTR_EDP 0x00001000
-
-#define REG_MDP5_MDP_VERSION 0x00000100
-#define MDP5_MDP_VERSION_MINOR__MASK 0x00ff0000
-#define MDP5_MDP_VERSION_MINOR__SHIFT 16
-static inline uint32_t MDP5_MDP_VERSION_MINOR(uint32_t val)
+#define REG_MDSS_HW_VERSION 0x00000000
+#define MDSS_HW_VERSION_STEP__MASK 0x0000ffff
+#define MDSS_HW_VERSION_STEP__SHIFT 0
+static inline uint32_t MDSS_HW_VERSION_STEP(uint32_t val)
{
- return ((val) << MDP5_MDP_VERSION_MINOR__SHIFT) & MDP5_MDP_VERSION_MINOR__MASK;
+ return ((val) << MDSS_HW_VERSION_STEP__SHIFT) & MDSS_HW_VERSION_STEP__MASK;
}
-#define MDP5_MDP_VERSION_MAJOR__MASK 0xf0000000
-#define MDP5_MDP_VERSION_MAJOR__SHIFT 28
-static inline uint32_t MDP5_MDP_VERSION_MAJOR(uint32_t val)
+#define MDSS_HW_VERSION_MINOR__MASK 0x0fff0000
+#define MDSS_HW_VERSION_MINOR__SHIFT 16
+static inline uint32_t MDSS_HW_VERSION_MINOR(uint32_t val)
{
- return ((val) << MDP5_MDP_VERSION_MAJOR__SHIFT) & MDP5_MDP_VERSION_MAJOR__MASK;
+ return ((val) << MDSS_HW_VERSION_MINOR__SHIFT) & MDSS_HW_VERSION_MINOR__MASK;
+}
+#define MDSS_HW_VERSION_MAJOR__MASK 0xf0000000
+#define MDSS_HW_VERSION_MAJOR__SHIFT 28
+static inline uint32_t MDSS_HW_VERSION_MAJOR(uint32_t val)
+{
+ return ((val) << MDSS_HW_VERSION_MAJOR__SHIFT) & MDSS_HW_VERSION_MAJOR__MASK;
}
-#define REG_MDP5_DISP_INTF_SEL 0x00000104
-#define MDP5_DISP_INTF_SEL_INTF0__MASK 0x000000ff
-#define MDP5_DISP_INTF_SEL_INTF0__SHIFT 0
-static inline uint32_t MDP5_DISP_INTF_SEL_INTF0(enum mdp5_intf val)
+#define REG_MDSS_HW_INTR_STATUS 0x00000010
+#define MDSS_HW_INTR_STATUS_INTR_MDP 0x00000001
+#define MDSS_HW_INTR_STATUS_INTR_DSI0 0x00000010
+#define MDSS_HW_INTR_STATUS_INTR_DSI1 0x00000020
+#define MDSS_HW_INTR_STATUS_INTR_HDMI 0x00000100
+#define MDSS_HW_INTR_STATUS_INTR_EDP 0x00001000
+
+static inline uint32_t __offset_MDP(uint32_t idx)
{
- return ((val) << MDP5_DISP_INTF_SEL_INTF0__SHIFT) & MDP5_DISP_INTF_SEL_INTF0__MASK;
+ switch (idx) {
+ case 0: return (mdp5_cfg->mdp.base[0]);
+ default: return INVALID_IDX(idx);
+ }
}
-#define MDP5_DISP_INTF_SEL_INTF1__MASK 0x0000ff00
-#define MDP5_DISP_INTF_SEL_INTF1__SHIFT 8
-static inline uint32_t MDP5_DISP_INTF_SEL_INTF1(enum mdp5_intf val)
+static inline uint32_t REG_MDP5_MDP(uint32_t i0) { return 0x00000000 + __offset_MDP(i0); }
+
+static inline uint32_t REG_MDP5_MDP_HW_VERSION(uint32_t i0) { return 0x00000000 + __offset_MDP(i0); }
+#define MDP5_MDP_HW_VERSION_STEP__MASK 0x0000ffff
+#define MDP5_MDP_HW_VERSION_STEP__SHIFT 0
+static inline uint32_t MDP5_MDP_HW_VERSION_STEP(uint32_t val)
{
- return ((val) << MDP5_DISP_INTF_SEL_INTF1__SHIFT) & MDP5_DISP_INTF_SEL_INTF1__MASK;
+ return ((val) << MDP5_MDP_HW_VERSION_STEP__SHIFT) & MDP5_MDP_HW_VERSION_STEP__MASK;
}
-#define MDP5_DISP_INTF_SEL_INTF2__MASK 0x00ff0000
-#define MDP5_DISP_INTF_SEL_INTF2__SHIFT 16
-static inline uint32_t MDP5_DISP_INTF_SEL_INTF2(enum mdp5_intf val)
+#define MDP5_MDP_HW_VERSION_MINOR__MASK 0x0fff0000
+#define MDP5_MDP_HW_VERSION_MINOR__SHIFT 16
+static inline uint32_t MDP5_MDP_HW_VERSION_MINOR(uint32_t val)
{
- return ((val) << MDP5_DISP_INTF_SEL_INTF2__SHIFT) & MDP5_DISP_INTF_SEL_INTF2__MASK;
+ return ((val) << MDP5_MDP_HW_VERSION_MINOR__SHIFT) & MDP5_MDP_HW_VERSION_MINOR__MASK;
}
-#define MDP5_DISP_INTF_SEL_INTF3__MASK 0xff000000
-#define MDP5_DISP_INTF_SEL_INTF3__SHIFT 24
-static inline uint32_t MDP5_DISP_INTF_SEL_INTF3(enum mdp5_intf val)
+#define MDP5_MDP_HW_VERSION_MAJOR__MASK 0xf0000000
+#define MDP5_MDP_HW_VERSION_MAJOR__SHIFT 28
+static inline uint32_t MDP5_MDP_HW_VERSION_MAJOR(uint32_t val)
{
- return ((val) << MDP5_DISP_INTF_SEL_INTF3__SHIFT) & MDP5_DISP_INTF_SEL_INTF3__MASK;
+ return ((val) << MDP5_MDP_HW_VERSION_MAJOR__SHIFT) & MDP5_MDP_HW_VERSION_MAJOR__MASK;
}
-#define REG_MDP5_INTR_EN 0x00000110
-
-#define REG_MDP5_INTR_STATUS 0x00000114
-
-#define REG_MDP5_INTR_CLEAR 0x00000118
-
-#define REG_MDP5_HIST_INTR_EN 0x0000011c
-
-#define REG_MDP5_HIST_INTR_STATUS 0x00000120
-
-#define REG_MDP5_HIST_INTR_CLEAR 0x00000124
-
-static inline uint32_t REG_MDP5_SMP_ALLOC_W(uint32_t i0) { return 0x00000180 + 0x4*i0; }
-
-static inline uint32_t REG_MDP5_SMP_ALLOC_W_REG(uint32_t i0) { return 0x00000180 + 0x4*i0; }
-#define MDP5_SMP_ALLOC_W_REG_CLIENT0__MASK 0x000000ff
-#define MDP5_SMP_ALLOC_W_REG_CLIENT0__SHIFT 0
-static inline uint32_t MDP5_SMP_ALLOC_W_REG_CLIENT0(enum mdp5_client_id val)
+static inline uint32_t REG_MDP5_MDP_DISP_INTF_SEL(uint32_t i0) { return 0x00000004 + __offset_MDP(i0); }
+#define MDP5_MDP_DISP_INTF_SEL_INTF0__MASK 0x000000ff
+#define MDP5_MDP_DISP_INTF_SEL_INTF0__SHIFT 0
+static inline uint32_t MDP5_MDP_DISP_INTF_SEL_INTF0(enum mdp5_intf_type val)
{
- return ((val) << MDP5_SMP_ALLOC_W_REG_CLIENT0__SHIFT) & MDP5_SMP_ALLOC_W_REG_CLIENT0__MASK;
+ return ((val) << MDP5_MDP_DISP_INTF_SEL_INTF0__SHIFT) & MDP5_MDP_DISP_INTF_SEL_INTF0__MASK;
}
-#define MDP5_SMP_ALLOC_W_REG_CLIENT1__MASK 0x0000ff00
-#define MDP5_SMP_ALLOC_W_REG_CLIENT1__SHIFT 8
-static inline uint32_t MDP5_SMP_ALLOC_W_REG_CLIENT1(enum mdp5_client_id val)
+#define MDP5_MDP_DISP_INTF_SEL_INTF1__MASK 0x0000ff00
+#define MDP5_MDP_DISP_INTF_SEL_INTF1__SHIFT 8
+static inline uint32_t MDP5_MDP_DISP_INTF_SEL_INTF1(enum mdp5_intf_type val)
{
- return ((val) << MDP5_SMP_ALLOC_W_REG_CLIENT1__SHIFT) & MDP5_SMP_ALLOC_W_REG_CLIENT1__MASK;
+ return ((val) << MDP5_MDP_DISP_INTF_SEL_INTF1__SHIFT) & MDP5_MDP_DISP_INTF_SEL_INTF1__MASK;
}
-#define MDP5_SMP_ALLOC_W_REG_CLIENT2__MASK 0x00ff0000
-#define MDP5_SMP_ALLOC_W_REG_CLIENT2__SHIFT 16
-static inline uint32_t MDP5_SMP_ALLOC_W_REG_CLIENT2(enum mdp5_client_id val)
+#define MDP5_MDP_DISP_INTF_SEL_INTF2__MASK 0x00ff0000
+#define MDP5_MDP_DISP_INTF_SEL_INTF2__SHIFT 16
+static inline uint32_t MDP5_MDP_DISP_INTF_SEL_INTF2(enum mdp5_intf_type val)
{
- return ((val) << MDP5_SMP_ALLOC_W_REG_CLIENT2__SHIFT) & MDP5_SMP_ALLOC_W_REG_CLIENT2__MASK;
+ return ((val) << MDP5_MDP_DISP_INTF_SEL_INTF2__SHIFT) & MDP5_MDP_DISP_INTF_SEL_INTF2__MASK;
+}
+#define MDP5_MDP_DISP_INTF_SEL_INTF3__MASK 0xff000000
+#define MDP5_MDP_DISP_INTF_SEL_INTF3__SHIFT 24
+static inline uint32_t MDP5_MDP_DISP_INTF_SEL_INTF3(enum mdp5_intf_type val)
+{
+ return ((val) << MDP5_MDP_DISP_INTF_SEL_INTF3__SHIFT) & MDP5_MDP_DISP_INTF_SEL_INTF3__MASK;
}
-static inline uint32_t REG_MDP5_SMP_ALLOC_R(uint32_t i0) { return 0x00000230 + 0x4*i0; }
+static inline uint32_t REG_MDP5_MDP_INTR_EN(uint32_t i0) { return 0x00000010 + __offset_MDP(i0); }
-static inline uint32_t REG_MDP5_SMP_ALLOC_R_REG(uint32_t i0) { return 0x00000230 + 0x4*i0; }
-#define MDP5_SMP_ALLOC_R_REG_CLIENT0__MASK 0x000000ff
-#define MDP5_SMP_ALLOC_R_REG_CLIENT0__SHIFT 0
-static inline uint32_t MDP5_SMP_ALLOC_R_REG_CLIENT0(enum mdp5_client_id val)
+static inline uint32_t REG_MDP5_MDP_INTR_STATUS(uint32_t i0) { return 0x00000014 + __offset_MDP(i0); }
+
+static inline uint32_t REG_MDP5_MDP_INTR_CLEAR(uint32_t i0) { return 0x00000018 + __offset_MDP(i0); }
+
+static inline uint32_t REG_MDP5_MDP_HIST_INTR_EN(uint32_t i0) { return 0x0000001c + __offset_MDP(i0); }
+
+static inline uint32_t REG_MDP5_MDP_HIST_INTR_STATUS(uint32_t i0) { return 0x00000020 + __offset_MDP(i0); }
+
+static inline uint32_t REG_MDP5_MDP_HIST_INTR_CLEAR(uint32_t i0) { return 0x00000024 + __offset_MDP(i0); }
+
+static inline uint32_t REG_MDP5_MDP_SPARE_0(uint32_t i0) { return 0x00000028 + __offset_MDP(i0); }
+#define MDP5_MDP_SPARE_0_SPLIT_DPL_SINGLE_FLUSH_EN 0x00000001
+
+static inline uint32_t REG_MDP5_MDP_SMP_ALLOC_W(uint32_t i0, uint32_t i1) { return 0x00000080 + __offset_MDP(i0) + 0x4*i1; }
+
+static inline uint32_t REG_MDP5_MDP_SMP_ALLOC_W_REG(uint32_t i0, uint32_t i1) { return 0x00000080 + __offset_MDP(i0) + 0x4*i1; }
+#define MDP5_MDP_SMP_ALLOC_W_REG_CLIENT0__MASK 0x000000ff
+#define MDP5_MDP_SMP_ALLOC_W_REG_CLIENT0__SHIFT 0
+static inline uint32_t MDP5_MDP_SMP_ALLOC_W_REG_CLIENT0(uint32_t val)
{
- return ((val) << MDP5_SMP_ALLOC_R_REG_CLIENT0__SHIFT) & MDP5_SMP_ALLOC_R_REG_CLIENT0__MASK;
+ return ((val) << MDP5_MDP_SMP_ALLOC_W_REG_CLIENT0__SHIFT) & MDP5_MDP_SMP_ALLOC_W_REG_CLIENT0__MASK;
}
-#define MDP5_SMP_ALLOC_R_REG_CLIENT1__MASK 0x0000ff00
-#define MDP5_SMP_ALLOC_R_REG_CLIENT1__SHIFT 8
-static inline uint32_t MDP5_SMP_ALLOC_R_REG_CLIENT1(enum mdp5_client_id val)
+#define MDP5_MDP_SMP_ALLOC_W_REG_CLIENT1__MASK 0x0000ff00
+#define MDP5_MDP_SMP_ALLOC_W_REG_CLIENT1__SHIFT 8
+static inline uint32_t MDP5_MDP_SMP_ALLOC_W_REG_CLIENT1(uint32_t val)
{
- return ((val) << MDP5_SMP_ALLOC_R_REG_CLIENT1__SHIFT) & MDP5_SMP_ALLOC_R_REG_CLIENT1__MASK;
+ return ((val) << MDP5_MDP_SMP_ALLOC_W_REG_CLIENT1__SHIFT) & MDP5_MDP_SMP_ALLOC_W_REG_CLIENT1__MASK;
}
-#define MDP5_SMP_ALLOC_R_REG_CLIENT2__MASK 0x00ff0000
-#define MDP5_SMP_ALLOC_R_REG_CLIENT2__SHIFT 16
-static inline uint32_t MDP5_SMP_ALLOC_R_REG_CLIENT2(enum mdp5_client_id val)
+#define MDP5_MDP_SMP_ALLOC_W_REG_CLIENT2__MASK 0x00ff0000
+#define MDP5_MDP_SMP_ALLOC_W_REG_CLIENT2__SHIFT 16
+static inline uint32_t MDP5_MDP_SMP_ALLOC_W_REG_CLIENT2(uint32_t val)
{
- return ((val) << MDP5_SMP_ALLOC_R_REG_CLIENT2__SHIFT) & MDP5_SMP_ALLOC_R_REG_CLIENT2__MASK;
+ return ((val) << MDP5_MDP_SMP_ALLOC_W_REG_CLIENT2__SHIFT) & MDP5_MDP_SMP_ALLOC_W_REG_CLIENT2__MASK;
+}
+
+static inline uint32_t REG_MDP5_MDP_SMP_ALLOC_R(uint32_t i0, uint32_t i1) { return 0x00000130 + __offset_MDP(i0) + 0x4*i1; }
+
+static inline uint32_t REG_MDP5_MDP_SMP_ALLOC_R_REG(uint32_t i0, uint32_t i1) { return 0x00000130 + __offset_MDP(i0) + 0x4*i1; }
+#define MDP5_MDP_SMP_ALLOC_R_REG_CLIENT0__MASK 0x000000ff
+#define MDP5_MDP_SMP_ALLOC_R_REG_CLIENT0__SHIFT 0
+static inline uint32_t MDP5_MDP_SMP_ALLOC_R_REG_CLIENT0(uint32_t val)
+{
+ return ((val) << MDP5_MDP_SMP_ALLOC_R_REG_CLIENT0__SHIFT) & MDP5_MDP_SMP_ALLOC_R_REG_CLIENT0__MASK;
+}
+#define MDP5_MDP_SMP_ALLOC_R_REG_CLIENT1__MASK 0x0000ff00
+#define MDP5_MDP_SMP_ALLOC_R_REG_CLIENT1__SHIFT 8
+static inline uint32_t MDP5_MDP_SMP_ALLOC_R_REG_CLIENT1(uint32_t val)
+{
+ return ((val) << MDP5_MDP_SMP_ALLOC_R_REG_CLIENT1__SHIFT) & MDP5_MDP_SMP_ALLOC_R_REG_CLIENT1__MASK;
+}
+#define MDP5_MDP_SMP_ALLOC_R_REG_CLIENT2__MASK 0x00ff0000
+#define MDP5_MDP_SMP_ALLOC_R_REG_CLIENT2__SHIFT 16
+static inline uint32_t MDP5_MDP_SMP_ALLOC_R_REG_CLIENT2(uint32_t val)
+{
+ return ((val) << MDP5_MDP_SMP_ALLOC_R_REG_CLIENT2__SHIFT) & MDP5_MDP_SMP_ALLOC_R_REG_CLIENT2__MASK;
}
static inline uint32_t __offset_IGC(enum mdp5_igc_type idx)
{
switch (idx) {
- case IGC_VIG: return 0x00000300;
- case IGC_RGB: return 0x00000310;
- case IGC_DMA: return 0x00000320;
- case IGC_DSPP: return 0x00000400;
+ case IGC_VIG: return 0x00000200;
+ case IGC_RGB: return 0x00000210;
+ case IGC_DMA: return 0x00000220;
+ case IGC_DSPP: return 0x00000300;
default: return INVALID_IDX(idx);
}
}
-static inline uint32_t REG_MDP5_IGC(enum mdp5_igc_type i0) { return 0x00000000 + __offset_IGC(i0); }
+static inline uint32_t REG_MDP5_MDP_IGC(uint32_t i0, enum mdp5_igc_type i1) { return 0x00000000 + __offset_MDP(i0) + __offset_IGC(i1); }
-static inline uint32_t REG_MDP5_IGC_LUT(enum mdp5_igc_type i0, uint32_t i1) { return 0x00000000 + __offset_IGC(i0) + 0x4*i1; }
+static inline uint32_t REG_MDP5_MDP_IGC_LUT(uint32_t i0, enum mdp5_igc_type i1, uint32_t i2) { return 0x00000000 + __offset_MDP(i0) + __offset_IGC(i1) + 0x4*i2; }
-static inline uint32_t REG_MDP5_IGC_LUT_REG(enum mdp5_igc_type i0, uint32_t i1) { return 0x00000000 + __offset_IGC(i0) + 0x4*i1; }
-#define MDP5_IGC_LUT_REG_VAL__MASK 0x00000fff
-#define MDP5_IGC_LUT_REG_VAL__SHIFT 0
-static inline uint32_t MDP5_IGC_LUT_REG_VAL(uint32_t val)
+static inline uint32_t REG_MDP5_MDP_IGC_LUT_REG(uint32_t i0, enum mdp5_igc_type i1, uint32_t i2) { return 0x00000000 + __offset_MDP(i0) + __offset_IGC(i1) + 0x4*i2; }
+#define MDP5_MDP_IGC_LUT_REG_VAL__MASK 0x00000fff
+#define MDP5_MDP_IGC_LUT_REG_VAL__SHIFT 0
+static inline uint32_t MDP5_MDP_IGC_LUT_REG_VAL(uint32_t val)
{
- return ((val) << MDP5_IGC_LUT_REG_VAL__SHIFT) & MDP5_IGC_LUT_REG_VAL__MASK;
+ return ((val) << MDP5_MDP_IGC_LUT_REG_VAL__SHIFT) & MDP5_MDP_IGC_LUT_REG_VAL__MASK;
}
-#define MDP5_IGC_LUT_REG_INDEX_UPDATE 0x02000000
-#define MDP5_IGC_LUT_REG_DISABLE_PIPE_0 0x10000000
-#define MDP5_IGC_LUT_REG_DISABLE_PIPE_1 0x20000000
-#define MDP5_IGC_LUT_REG_DISABLE_PIPE_2 0x40000000
+#define MDP5_MDP_IGC_LUT_REG_INDEX_UPDATE 0x02000000
+#define MDP5_MDP_IGC_LUT_REG_DISABLE_PIPE_0 0x10000000
+#define MDP5_MDP_IGC_LUT_REG_DISABLE_PIPE_1 0x20000000
+#define MDP5_MDP_IGC_LUT_REG_DISABLE_PIPE_2 0x40000000
+
+#define REG_MDP5_SPLIT_DPL_EN 0x000003f4
+
+#define REG_MDP5_SPLIT_DPL_UPPER 0x000003f8
+#define MDP5_SPLIT_DPL_UPPER_SMART_PANEL 0x00000002
+#define MDP5_SPLIT_DPL_UPPER_SMART_PANEL_FREE_RUN 0x00000004
+#define MDP5_SPLIT_DPL_UPPER_INTF1_SW_TRG_MUX 0x00000010
+#define MDP5_SPLIT_DPL_UPPER_INTF2_SW_TRG_MUX 0x00000100
+
+#define REG_MDP5_SPLIT_DPL_LOWER 0x000004f0
+#define MDP5_SPLIT_DPL_LOWER_SMART_PANEL 0x00000002
+#define MDP5_SPLIT_DPL_LOWER_SMART_PANEL_FREE_RUN 0x00000004
+#define MDP5_SPLIT_DPL_LOWER_INTF1_TG_SYNC 0x00000010
+#define MDP5_SPLIT_DPL_LOWER_INTF2_TG_SYNC 0x00000100
static inline uint32_t __offset_CTL(uint32_t idx)
{
@@ -437,11 +458,19 @@
#define MDP5_CTL_FLUSH_DSPP0 0x00002000
#define MDP5_CTL_FLUSH_DSPP1 0x00004000
#define MDP5_CTL_FLUSH_DSPP2 0x00008000
+#define MDP5_CTL_FLUSH_WB 0x00010000
#define MDP5_CTL_FLUSH_CTL 0x00020000
#define MDP5_CTL_FLUSH_VIG3 0x00040000
#define MDP5_CTL_FLUSH_RGB3 0x00080000
#define MDP5_CTL_FLUSH_LM5 0x00100000
#define MDP5_CTL_FLUSH_DSPP3 0x00200000
+#define MDP5_CTL_FLUSH_CURSOR_0 0x00400000
+#define MDP5_CTL_FLUSH_CURSOR_1 0x00800000
+#define MDP5_CTL_FLUSH_CHROMADOWN_0 0x04000000
+#define MDP5_CTL_FLUSH_TIMING_3 0x10000000
+#define MDP5_CTL_FLUSH_TIMING_2 0x20000000
+#define MDP5_CTL_FLUSH_TIMING_1 0x40000000
+#define MDP5_CTL_FLUSH_TIMING_0 0x80000000
static inline uint32_t REG_MDP5_CTL_START(uint32_t i0) { return 0x0000001c + __offset_CTL(i0); }
@@ -1117,6 +1146,94 @@
static inline uint32_t REG_MDP5_DSPP_GC_BASE(uint32_t i0) { return 0x000002b0 + __offset_DSPP(i0); }
+static inline uint32_t __offset_PP(uint32_t idx)
+{
+ switch (idx) {
+ case 0: return (mdp5_cfg->pp.base[0]);
+ case 1: return (mdp5_cfg->pp.base[1]);
+ case 2: return (mdp5_cfg->pp.base[2]);
+ case 3: return (mdp5_cfg->pp.base[3]);
+ default: return INVALID_IDX(idx);
+ }
+}
+static inline uint32_t REG_MDP5_PP(uint32_t i0) { return 0x00000000 + __offset_PP(i0); }
+
+static inline uint32_t REG_MDP5_PP_TEAR_CHECK_EN(uint32_t i0) { return 0x00000000 + __offset_PP(i0); }
+
+static inline uint32_t REG_MDP5_PP_SYNC_CONFIG_VSYNC(uint32_t i0) { return 0x00000004 + __offset_PP(i0); }
+#define MDP5_PP_SYNC_CONFIG_VSYNC_COUNT__MASK 0x0007ffff
+#define MDP5_PP_SYNC_CONFIG_VSYNC_COUNT__SHIFT 0
+static inline uint32_t MDP5_PP_SYNC_CONFIG_VSYNC_COUNT(uint32_t val)
+{
+ return ((val) << MDP5_PP_SYNC_CONFIG_VSYNC_COUNT__SHIFT) & MDP5_PP_SYNC_CONFIG_VSYNC_COUNT__MASK;
+}
+#define MDP5_PP_SYNC_CONFIG_VSYNC_COUNTER_EN 0x00080000
+#define MDP5_PP_SYNC_CONFIG_VSYNC_IN_EN 0x00100000
+
+static inline uint32_t REG_MDP5_PP_SYNC_CONFIG_HEIGHT(uint32_t i0) { return 0x00000008 + __offset_PP(i0); }
+
+static inline uint32_t REG_MDP5_PP_SYNC_WRCOUNT(uint32_t i0) { return 0x0000000c + __offset_PP(i0); }
+#define MDP5_PP_SYNC_WRCOUNT_LINE_COUNT__MASK 0x0000ffff
+#define MDP5_PP_SYNC_WRCOUNT_LINE_COUNT__SHIFT 0
+static inline uint32_t MDP5_PP_SYNC_WRCOUNT_LINE_COUNT(uint32_t val)
+{
+ return ((val) << MDP5_PP_SYNC_WRCOUNT_LINE_COUNT__SHIFT) & MDP5_PP_SYNC_WRCOUNT_LINE_COUNT__MASK;
+}
+#define MDP5_PP_SYNC_WRCOUNT_FRAME_COUNT__MASK 0xffff0000
+#define MDP5_PP_SYNC_WRCOUNT_FRAME_COUNT__SHIFT 16
+static inline uint32_t MDP5_PP_SYNC_WRCOUNT_FRAME_COUNT(uint32_t val)
+{
+ return ((val) << MDP5_PP_SYNC_WRCOUNT_FRAME_COUNT__SHIFT) & MDP5_PP_SYNC_WRCOUNT_FRAME_COUNT__MASK;
+}
+
+static inline uint32_t REG_MDP5_PP_VSYNC_INIT_VAL(uint32_t i0) { return 0x00000010 + __offset_PP(i0); }
+
+static inline uint32_t REG_MDP5_PP_INT_COUNT_VAL(uint32_t i0) { return 0x00000014 + __offset_PP(i0); }
+#define MDP5_PP_INT_COUNT_VAL_LINE_COUNT__MASK 0x0000ffff
+#define MDP5_PP_INT_COUNT_VAL_LINE_COUNT__SHIFT 0
+static inline uint32_t MDP5_PP_INT_COUNT_VAL_LINE_COUNT(uint32_t val)
+{
+ return ((val) << MDP5_PP_INT_COUNT_VAL_LINE_COUNT__SHIFT) & MDP5_PP_INT_COUNT_VAL_LINE_COUNT__MASK;
+}
+#define MDP5_PP_INT_COUNT_VAL_FRAME_COUNT__MASK 0xffff0000
+#define MDP5_PP_INT_COUNT_VAL_FRAME_COUNT__SHIFT 16
+static inline uint32_t MDP5_PP_INT_COUNT_VAL_FRAME_COUNT(uint32_t val)
+{
+ return ((val) << MDP5_PP_INT_COUNT_VAL_FRAME_COUNT__SHIFT) & MDP5_PP_INT_COUNT_VAL_FRAME_COUNT__MASK;
+}
+
+static inline uint32_t REG_MDP5_PP_SYNC_THRESH(uint32_t i0) { return 0x00000018 + __offset_PP(i0); }
+#define MDP5_PP_SYNC_THRESH_START__MASK 0x0000ffff
+#define MDP5_PP_SYNC_THRESH_START__SHIFT 0
+static inline uint32_t MDP5_PP_SYNC_THRESH_START(uint32_t val)
+{
+ return ((val) << MDP5_PP_SYNC_THRESH_START__SHIFT) & MDP5_PP_SYNC_THRESH_START__MASK;
+}
+#define MDP5_PP_SYNC_THRESH_CONTINUE__MASK 0xffff0000
+#define MDP5_PP_SYNC_THRESH_CONTINUE__SHIFT 16
+static inline uint32_t MDP5_PP_SYNC_THRESH_CONTINUE(uint32_t val)
+{
+ return ((val) << MDP5_PP_SYNC_THRESH_CONTINUE__SHIFT) & MDP5_PP_SYNC_THRESH_CONTINUE__MASK;
+}
+
+static inline uint32_t REG_MDP5_PP_START_POS(uint32_t i0) { return 0x0000001c + __offset_PP(i0); }
+
+static inline uint32_t REG_MDP5_PP_RD_PTR_IRQ(uint32_t i0) { return 0x00000020 + __offset_PP(i0); }
+
+static inline uint32_t REG_MDP5_PP_WR_PTR_IRQ(uint32_t i0) { return 0x00000024 + __offset_PP(i0); }
+
+static inline uint32_t REG_MDP5_PP_OUT_LINE_COUNT(uint32_t i0) { return 0x00000028 + __offset_PP(i0); }
+
+static inline uint32_t REG_MDP5_PP_PP_LINE_COUNT(uint32_t i0) { return 0x0000002c + __offset_PP(i0); }
+
+static inline uint32_t REG_MDP5_PP_AUTOREFRESH_CONFIG(uint32_t i0) { return 0x00000030 + __offset_PP(i0); }
+
+static inline uint32_t REG_MDP5_PP_FBC_MODE(uint32_t i0) { return 0x00000034 + __offset_PP(i0); }
+
+static inline uint32_t REG_MDP5_PP_FBC_BUDGET_CTL(uint32_t i0) { return 0x00000038 + __offset_PP(i0); }
+
+static inline uint32_t REG_MDP5_PP_FBC_LOSSY_MODE(uint32_t i0) { return 0x0000003c + __offset_PP(i0); }
+
static inline uint32_t __offset_INTF(uint32_t idx)
{
switch (idx) {
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_cfg.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_cfg.c
index b0a4431..e001e6b 100644
--- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_cfg.c
+++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_cfg.c
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2014 The Linux Foundation. All rights reserved.
+ * Copyright (c) 2014-2015 The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -24,13 +24,23 @@
const struct mdp5_cfg_hw msm8x74_config = {
.name = "msm8x74",
+ .mdp = {
+ .count = 1,
+ .base = { 0x00100 },
+ },
.smp = {
.mmb_count = 22,
.mmb_size = 4096,
+ .clients = {
+ [SSPP_VIG0] = 1, [SSPP_VIG1] = 4, [SSPP_VIG2] = 7,
+ [SSPP_DMA0] = 10, [SSPP_DMA1] = 13,
+ [SSPP_RGB0] = 16, [SSPP_RGB1] = 17, [SSPP_RGB2] = 18,
+ },
},
.ctl = {
.count = 5,
.base = { 0x00600, 0x00700, 0x00800, 0x00900, 0x00a00 },
+ .flush_hw_mask = 0x0003ffff,
},
.pipe_vig = {
.count = 3,
@@ -57,27 +67,49 @@
.count = 2,
.base = { 0x13100, 0x13300 }, /* NOTE: no ad in v1.0 */
},
+ .pp = {
+ .count = 3,
+ .base = { 0x12d00, 0x12e00, 0x12f00 },
+ },
.intf = {
.count = 4,
.base = { 0x12500, 0x12700, 0x12900, 0x12b00 },
},
+ .intfs = {
+ [0] = INTF_eDP,
+ [1] = INTF_DSI,
+ [2] = INTF_DSI,
+ [3] = INTF_HDMI,
+ },
.max_clk = 200000000,
};
const struct mdp5_cfg_hw apq8084_config = {
.name = "apq8084",
+ .mdp = {
+ .count = 1,
+ .base = { 0x00100 },
+ },
.smp = {
.mmb_count = 44,
.mmb_size = 8192,
+ .clients = {
+ [SSPP_VIG0] = 1, [SSPP_VIG1] = 4,
+ [SSPP_VIG2] = 7, [SSPP_VIG3] = 19,
+ [SSPP_DMA0] = 10, [SSPP_DMA1] = 13,
+ [SSPP_RGB0] = 16, [SSPP_RGB1] = 17,
+ [SSPP_RGB2] = 18, [SSPP_RGB3] = 22,
+ },
.reserved_state[0] = GENMASK(7, 0), /* first 8 MMBs */
- .reserved[CID_RGB0] = 2,
- .reserved[CID_RGB1] = 2,
- .reserved[CID_RGB2] = 2,
- .reserved[CID_RGB3] = 2,
+ .reserved = {
+ /* Two SMP blocks are statically tied to RGB pipes: */
+ [16] = 2, [17] = 2, [18] = 2, [22] = 2,
+ },
},
.ctl = {
.count = 5,
.base = { 0x00600, 0x00700, 0x00800, 0x00900, 0x00a00 },
+ .flush_hw_mask = 0x003fffff,
},
.pipe_vig = {
.count = 4,
@@ -105,10 +137,69 @@
.count = 3,
.base = { 0x13500, 0x13700, 0x13900 },
},
+ .pp = {
+ .count = 4,
+ .base = { 0x12f00, 0x13000, 0x13100, 0x13200 },
+ },
.intf = {
.count = 5,
.base = { 0x12500, 0x12700, 0x12900, 0x12b00, 0x12d00 },
},
+ .intfs = {
+ [0] = INTF_eDP,
+ [1] = INTF_DSI,
+ [2] = INTF_DSI,
+ [3] = INTF_HDMI,
+ },
+ .max_clk = 320000000,
+};
+
+const struct mdp5_cfg_hw msm8x16_config = {
+ .name = "msm8x16",
+ .mdp = {
+ .count = 1,
+ .base = { 0x01000 },
+ },
+ .smp = {
+ .mmb_count = 8,
+ .mmb_size = 8192,
+ .clients = {
+ [SSPP_VIG0] = 1, [SSPP_DMA0] = 4,
+ [SSPP_RGB0] = 7, [SSPP_RGB1] = 8,
+ },
+ },
+ .ctl = {
+ .count = 5,
+ .base = { 0x02000, 0x02200, 0x02400, 0x02600, 0x02800 },
+ .flush_hw_mask = 0x4003ffff,
+ },
+ .pipe_vig = {
+ .count = 1,
+ .base = { 0x05000 },
+ },
+ .pipe_rgb = {
+ .count = 2,
+ .base = { 0x15000, 0x17000 },
+ },
+ .pipe_dma = {
+ .count = 1,
+ .base = { 0x25000 },
+ },
+ .lm = {
+ .count = 2, /* LM0 and LM3 */
+ .base = { 0x45000, 0x48000 },
+ .nb_stages = 5,
+ },
+ .dspp = {
+ .count = 1,
+ .base = { 0x55000 },
+
+ },
+ .intf = {
+ .count = 1, /* INTF_1 */
+ .base = { 0x6B800 },
+ },
+ /* TODO enable .intfs[] with [1] = INTF_DSI, once DSI is implemented */
.max_clk = 320000000,
};
@@ -116,6 +207,7 @@
{ .revision = 0, .config = { .hw = &msm8x74_config } },
{ .revision = 2, .config = { .hw = &msm8x74_config } },
{ .revision = 3, .config = { .hw = &apq8084_config } },
+ { .revision = 6, .config = { .hw = &msm8x16_config } },
};
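The cfg_handlers[] change above adds revision 6, mapping it to the new msm8x16 config. A hedged sketch of that revision-to-config lookup pattern, using an invented struct that carries only a subset of the real fields:

/*
 * Sketch of a revision -> config lookup as used by the table above.
 * struct hw_cfg and its contents are illustrative only.
 */
#include <stddef.h>
#include <stdio.h>

struct hw_cfg { const char *name; unsigned int max_clk; };

static const struct hw_cfg msm8x74 = { "msm8x74", 200000000 };
static const struct hw_cfg apq8084 = { "apq8084", 320000000 };
static const struct hw_cfg msm8x16 = { "msm8x16", 320000000 };

static const struct { int revision; const struct hw_cfg *cfg; } handlers[] = {
	{ 0, &msm8x74 },
	{ 2, &msm8x74 },
	{ 3, &apq8084 },
	{ 6, &msm8x16 },
};

static const struct hw_cfg *cfg_lookup(int minor_rev)
{
	size_t i;

	for (i = 0; i < sizeof(handlers) / sizeof(handlers[0]); i++)
		if (handlers[i].revision == minor_rev)
			return handlers[i].cfg;
	return NULL;	/* unknown revision: probe should fail */
}

int main(void)
{
	const struct hw_cfg *cfg = cfg_lookup(6);

	if (cfg)
		printf("%s @ %u Hz\n", cfg->name, cfg->max_clk);
	return 0;
}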
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_cfg.h b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_cfg.h
index dba4d52..3a551b0 100644
--- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_cfg.h
+++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_cfg.h
@@ -44,26 +44,38 @@
uint32_t nb_stages; /* number of stages per blender */
};
+struct mdp5_ctl_block {
+ MDP5_SUB_BLOCK_DEFINITION;
+ uint32_t flush_hw_mask; /* FLUSH register's hardware mask */
+};
+
struct mdp5_smp_block {
int mmb_count; /* number of SMP MMBs */
int mmb_size; /* MMB: size in bytes */
+ uint32_t clients[MAX_CLIENTS]; /* SMP port allocation per pipe */
mdp5_smp_state_t reserved_state;/* SMP MMBs statically allocated */
int reserved[MAX_CLIENTS]; /* # of MMBs allocated per client */
};
+#define MDP5_INTF_NUM_MAX 5
+
struct mdp5_cfg_hw {
char *name;
+ struct mdp5_sub_block mdp;
struct mdp5_smp_block smp;
- struct mdp5_sub_block ctl;
+ struct mdp5_ctl_block ctl;
struct mdp5_sub_block pipe_vig;
struct mdp5_sub_block pipe_rgb;
struct mdp5_sub_block pipe_dma;
struct mdp5_lm_block lm;
struct mdp5_sub_block dspp;
struct mdp5_sub_block ad;
+ struct mdp5_sub_block pp;
struct mdp5_sub_block intf;
+ u32 intfs[MDP5_INTF_NUM_MAX]; /* array of enum mdp5_intf_type */
+
uint32_t max_clk;
};
@@ -84,6 +96,10 @@
struct mdp5_cfg *mdp5_cfg_get_config(struct mdp5_cfg_handler *cfg_hnd);
int mdp5_cfg_get_hw_rev(struct mdp5_cfg_handler *cfg_hnd);
+#define mdp5_cfg_intf_is_virtual(intf_type) ({ \
+ typeof(intf_type) __val = (intf_type); \
+ (__val) >= INTF_VIRTUAL ? true : false; })
+
struct mdp5_cfg_handler *mdp5_cfg_init(struct mdp5_kms *mdp5_kms,
uint32_t major, uint32_t minor);
void mdp5_cfg_destroy(struct mdp5_cfg_handler *cfg_hnd);
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_cmd_encoder.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_cmd_encoder.c
new file mode 100644
index 0000000..e4e8956
--- /dev/null
+++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_cmd_encoder.c
@@ -0,0 +1,343 @@
+/*
+ * Copyright (c) 2015, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include "mdp5_kms.h"
+
+#include "drm_crtc.h"
+#include "drm_crtc_helper.h"
+
+struct mdp5_cmd_encoder {
+ struct drm_encoder base;
+ struct mdp5_interface intf;
+ bool enabled;
+ uint32_t bsc;
+};
+#define to_mdp5_cmd_encoder(x) container_of(x, struct mdp5_cmd_encoder, base)
+
+static struct mdp5_kms *get_kms(struct drm_encoder *encoder)
+{
+ struct msm_drm_private *priv = encoder->dev->dev_private;
+ return to_mdp5_kms(to_mdp_kms(priv->kms));
+}
+
+#ifdef CONFIG_MSM_BUS_SCALING
+#include <mach/board.h>
+#include <linux/msm-bus.h>
+#include <linux/msm-bus-board.h>
+#define MDP_BUS_VECTOR_ENTRY(ab_val, ib_val) \
+ { \
+ .src = MSM_BUS_MASTER_MDP_PORT0, \
+ .dst = MSM_BUS_SLAVE_EBI_CH0, \
+ .ab = (ab_val), \
+ .ib = (ib_val), \
+ }
+
+static struct msm_bus_vectors mdp_bus_vectors[] = {
+ MDP_BUS_VECTOR_ENTRY(0, 0),
+ MDP_BUS_VECTOR_ENTRY(2000000000, 2000000000),
+};
+static struct msm_bus_paths mdp_bus_usecases[] = { {
+ .num_paths = 1,
+ .vectors = &mdp_bus_vectors[0],
+}, {
+ .num_paths = 1,
+ .vectors = &mdp_bus_vectors[1],
+} };
+static struct msm_bus_scale_pdata mdp_bus_scale_table = {
+ .usecase = mdp_bus_usecases,
+ .num_usecases = ARRAY_SIZE(mdp_bus_usecases),
+ .name = "mdss_mdp",
+};
+
+static void bs_init(struct mdp5_cmd_encoder *mdp5_cmd_enc)
+{
+ mdp5_cmd_enc->bsc = msm_bus_scale_register_client(
+ &mdp_bus_scale_table);
+ DBG("bus scale client: %08x", mdp5_cmd_enc->bsc);
+}
+
+static void bs_fini(struct mdp5_cmd_encoder *mdp5_cmd_enc)
+{
+ if (mdp5_cmd_enc->bsc) {
+ msm_bus_scale_unregister_client(mdp5_cmd_enc->bsc);
+ mdp5_cmd_enc->bsc = 0;
+ }
+}
+
+static void bs_set(struct mdp5_cmd_encoder *mdp5_cmd_enc, int idx)
+{
+ if (mdp5_cmd_enc->bsc) {
+ DBG("set bus scaling: %d", idx);
+ /* HACK: scaling down, and then immediately back up
+ * seems to leave things broken (underflow).. so
+ * never disable:
+ */
+ idx = 1;
+ msm_bus_scale_client_update_request(mdp5_cmd_enc->bsc, idx);
+ }
+}
+#else
+static void bs_init(struct mdp5_cmd_encoder *mdp5_cmd_enc) {}
+static void bs_fini(struct mdp5_cmd_encoder *mdp5_cmd_enc) {}
+static void bs_set(struct mdp5_cmd_encoder *mdp5_cmd_enc, int idx) {}
+#endif
+
+#define VSYNC_CLK_RATE 19200000
+static int pingpong_tearcheck_setup(struct drm_encoder *encoder,
+ struct drm_display_mode *mode)
+{
+ struct mdp5_kms *mdp5_kms = get_kms(encoder);
+ struct device *dev = encoder->dev->dev;
+ u32 total_lines_x100, vclks_line, cfg;
+ long vsync_clk_speed;
+ int pp_id = GET_PING_PONG_ID(mdp5_crtc_get_lm(encoder->crtc));
+
+ if (IS_ERR_OR_NULL(mdp5_kms->vsync_clk)) {
+ dev_err(dev, "vsync_clk is not initialized\n");
+ return -EINVAL;
+ }
+
+ total_lines_x100 = mode->vtotal * mode->vrefresh;
+ if (!total_lines_x100) {
+ dev_err(dev, "%s: vtotal(%d) or vrefresh(%d) is 0\n",
+ __func__, mode->vtotal, mode->vrefresh);
+ return -EINVAL;
+ }
+
+ vsync_clk_speed = clk_round_rate(mdp5_kms->vsync_clk, VSYNC_CLK_RATE);
+ if (vsync_clk_speed <= 0) {
+ dev_err(dev, "vsync_clk round rate failed %ld\n",
+ vsync_clk_speed);
+ return -EINVAL;
+ }
+ vclks_line = vsync_clk_speed * 100 / total_lines_x100;
+
+ cfg = MDP5_PP_SYNC_CONFIG_VSYNC_COUNTER_EN
+ | MDP5_PP_SYNC_CONFIG_VSYNC_IN_EN;
+ cfg |= MDP5_PP_SYNC_CONFIG_VSYNC_COUNT(vclks_line);
+
+ mdp5_write(mdp5_kms, REG_MDP5_PP_SYNC_CONFIG_VSYNC(pp_id), cfg);
+ mdp5_write(mdp5_kms,
+ REG_MDP5_PP_SYNC_CONFIG_HEIGHT(pp_id), 0xfff0);
+ mdp5_write(mdp5_kms,
+ REG_MDP5_PP_VSYNC_INIT_VAL(pp_id), mode->vdisplay);
+ mdp5_write(mdp5_kms, REG_MDP5_PP_RD_PTR_IRQ(pp_id), mode->vdisplay + 1);
+ mdp5_write(mdp5_kms, REG_MDP5_PP_START_POS(pp_id), mode->vdisplay);
+ mdp5_write(mdp5_kms, REG_MDP5_PP_SYNC_THRESH(pp_id),
+ MDP5_PP_SYNC_THRESH_START(4) |
+ MDP5_PP_SYNC_THRESH_CONTINUE(4));
+
+ return 0;
+}
+
+static int pingpong_tearcheck_enable(struct drm_encoder *encoder)
+{
+ struct mdp5_kms *mdp5_kms = get_kms(encoder);
+ int pp_id = GET_PING_PONG_ID(mdp5_crtc_get_lm(encoder->crtc));
+ int ret;
+
+ ret = clk_set_rate(mdp5_kms->vsync_clk,
+ clk_round_rate(mdp5_kms->vsync_clk, VSYNC_CLK_RATE));
+ if (ret) {
+ dev_err(encoder->dev->dev,
+ "vsync_clk clk_set_rate failed, %d\n", ret);
+ return ret;
+ }
+ ret = clk_prepare_enable(mdp5_kms->vsync_clk);
+ if (ret) {
+ dev_err(encoder->dev->dev,
+ "vsync_clk clk_prepare_enable failed, %d\n", ret);
+ return ret;
+ }
+
+ mdp5_write(mdp5_kms, REG_MDP5_PP_TEAR_CHECK_EN(pp_id), 1);
+
+ return 0;
+}
+
+static void pingpong_tearcheck_disable(struct drm_encoder *encoder)
+{
+ struct mdp5_kms *mdp5_kms = get_kms(encoder);
+ int pp_id = GET_PING_PONG_ID(mdp5_crtc_get_lm(encoder->crtc));
+
+ mdp5_write(mdp5_kms, REG_MDP5_PP_TEAR_CHECK_EN(pp_id), 0);
+ clk_disable_unprepare(mdp5_kms->vsync_clk);
+}
+
+static void mdp5_cmd_encoder_destroy(struct drm_encoder *encoder)
+{
+ struct mdp5_cmd_encoder *mdp5_cmd_enc = to_mdp5_cmd_encoder(encoder);
+ bs_fini(mdp5_cmd_enc);
+ drm_encoder_cleanup(encoder);
+ kfree(mdp5_cmd_enc);
+}
+
+static const struct drm_encoder_funcs mdp5_cmd_encoder_funcs = {
+ .destroy = mdp5_cmd_encoder_destroy,
+};
+
+static bool mdp5_cmd_encoder_mode_fixup(struct drm_encoder *encoder,
+ const struct drm_display_mode *mode,
+ struct drm_display_mode *adjusted_mode)
+{
+ return true;
+}
+
+static void mdp5_cmd_encoder_mode_set(struct drm_encoder *encoder,
+ struct drm_display_mode *mode,
+ struct drm_display_mode *adjusted_mode)
+{
+ struct mdp5_cmd_encoder *mdp5_cmd_enc = to_mdp5_cmd_encoder(encoder);
+
+ mode = adjusted_mode;
+
+ DBG("set mode: %d:\"%s\" %d %d %d %d %d %d %d %d %d %d 0x%x 0x%x",
+ mode->base.id, mode->name,
+ mode->vrefresh, mode->clock,
+ mode->hdisplay, mode->hsync_start,
+ mode->hsync_end, mode->htotal,
+ mode->vdisplay, mode->vsync_start,
+ mode->vsync_end, mode->vtotal,
+ mode->type, mode->flags);
+ pingpong_tearcheck_setup(encoder, mode);
+ mdp5_crtc_set_intf(encoder->crtc, &mdp5_cmd_enc->intf);
+}
+
+static void mdp5_cmd_encoder_disable(struct drm_encoder *encoder)
+{
+ struct mdp5_cmd_encoder *mdp5_cmd_enc = to_mdp5_cmd_encoder(encoder);
+ struct mdp5_kms *mdp5_kms = get_kms(encoder);
+ struct mdp5_ctl *ctl = mdp5_crtc_get_ctl(encoder->crtc);
+ struct mdp5_interface *intf = &mdp5_cmd_enc->intf;
+ int lm = mdp5_crtc_get_lm(encoder->crtc);
+
+ if (WARN_ON(!mdp5_cmd_enc->enabled))
+ return;
+
+ /* Wait for the last frame to finish */
+ mdp_irq_wait(&mdp5_kms->base, lm2ppdone(lm));
+ pingpong_tearcheck_disable(encoder);
+
+ mdp5_ctl_set_encoder_state(ctl, false);
+ mdp5_ctl_commit(ctl, mdp_ctl_flush_mask_encoder(intf));
+
+ bs_set(mdp5_cmd_enc, 0);
+
+ mdp5_cmd_enc->enabled = false;
+}
+
+static void mdp5_cmd_encoder_enable(struct drm_encoder *encoder)
+{
+ struct mdp5_cmd_encoder *mdp5_cmd_enc = to_mdp5_cmd_encoder(encoder);
+ struct mdp5_ctl *ctl = mdp5_crtc_get_ctl(encoder->crtc);
+ struct mdp5_interface *intf = &mdp5_cmd_enc->intf;
+
+ if (WARN_ON(mdp5_cmd_enc->enabled))
+ return;
+
+ bs_set(mdp5_cmd_enc, 1);
+ if (pingpong_tearcheck_enable(encoder))
+ return;
+
+ mdp5_ctl_commit(ctl, mdp_ctl_flush_mask_encoder(intf));
+
+ mdp5_ctl_set_encoder_state(ctl, true);
+
+ mdp5_cmd_enc->enabled = true;
+}
+
+static const struct drm_encoder_helper_funcs mdp5_cmd_encoder_helper_funcs = {
+ .mode_fixup = mdp5_cmd_encoder_mode_fixup,
+ .mode_set = mdp5_cmd_encoder_mode_set,
+ .disable = mdp5_cmd_encoder_disable,
+ .enable = mdp5_cmd_encoder_enable,
+};
+
+int mdp5_cmd_encoder_set_split_display(struct drm_encoder *encoder,
+ struct drm_encoder *slave_encoder)
+{
+ struct mdp5_cmd_encoder *mdp5_cmd_enc = to_mdp5_cmd_encoder(encoder);
+ struct mdp5_kms *mdp5_kms;
+ int intf_num;
+ u32 data = 0;
+
+ if (!encoder || !slave_encoder)
+ return -EINVAL;
+
+ mdp5_kms = get_kms(encoder);
+ intf_num = mdp5_cmd_enc->intf.num;
+
+ /* Switch the slave encoder's trigger MUX so that the slave
+ * encoder uses the master's start signal.
+ */
+ if (intf_num == 1)
+ data |= MDP5_SPLIT_DPL_UPPER_INTF2_SW_TRG_MUX;
+ else if (intf_num == 2)
+ data |= MDP5_SPLIT_DPL_UPPER_INTF1_SW_TRG_MUX;
+ else
+ return -EINVAL;
+
+ /* Smart Panel, Sync mode */
+ data |= MDP5_SPLIT_DPL_UPPER_SMART_PANEL;
+
+ /* Make sure clocks are on when connectors call this function. */
+ mdp5_enable(mdp5_kms);
+ mdp5_write(mdp5_kms, REG_MDP5_SPLIT_DPL_UPPER, data);
+
+ mdp5_write(mdp5_kms, REG_MDP5_SPLIT_DPL_LOWER,
+ MDP5_SPLIT_DPL_LOWER_SMART_PANEL);
+ mdp5_write(mdp5_kms, REG_MDP5_SPLIT_DPL_EN, 1);
+ mdp5_disable(mdp5_kms);
+
+ return 0;
+}
+
+/* initialize command mode encoder */
+struct drm_encoder *mdp5_cmd_encoder_init(struct drm_device *dev,
+ struct mdp5_interface *intf)
+{
+ struct drm_encoder *encoder = NULL;
+ struct mdp5_cmd_encoder *mdp5_cmd_enc;
+ int ret;
+
+ if (WARN_ON((intf->type != INTF_DSI) &&
+ (intf->mode != MDP5_INTF_DSI_MODE_COMMAND))) {
+ ret = -EINVAL;
+ goto fail;
+ }
+
+ mdp5_cmd_enc = kzalloc(sizeof(*mdp5_cmd_enc), GFP_KERNEL);
+ if (!mdp5_cmd_enc) {
+ ret = -ENOMEM;
+ goto fail;
+ }
+
+ memcpy(&mdp5_cmd_enc->intf, intf, sizeof(mdp5_cmd_enc->intf));
+ encoder = &mdp5_cmd_enc->base;
+
+ drm_encoder_init(dev, encoder, &mdp5_cmd_encoder_funcs,
+ DRM_MODE_ENCODER_DSI);
+
+ drm_encoder_helper_add(encoder, &mdp5_cmd_encoder_helper_funcs);
+
+ bs_init(mdp5_cmd_enc);
+
+ return encoder;
+
+fail:
+ if (encoder)
+ mdp5_cmd_encoder_destroy(encoder);
+
+ return ERR_PTR(ret);
+}
+
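pingpong_tearcheck_setup() above derives the per-line vsync counter from the rounded vsync clock and the mode's total line rate (vclks_line = vsync_clk_speed * 100 / (vtotal * vrefresh)). The small arithmetic sketch below reproduces that expression with a made-up 1125-line, 60 Hz mode and a 19.2 MHz clock; the values are illustrative only:

/* Sketch of the tear-check counter math used above; example numbers only. */
#include <stdio.h>

int main(void)
{
	/* Made-up mode: 1125 total lines at 60 Hz, 19.2 MHz vsync clock. */
	unsigned long vtotal = 1125, vrefresh = 60;
	unsigned long vsync_clk = 19200000;
	unsigned long total_lines_x100 = vtotal * vrefresh;
	unsigned long vclks_line = vsync_clk * 100 / total_lines_x100;

	/* 19200000 * 100 / 67500 = 28444: the value the setup function
	 * would put in the VSYNC_COUNT field for these made-up numbers.
	 */
	printf("vclks per line: %lu\n", vclks_line);
	return 0;
}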
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c
index 2f2863c..c153077 100644
--- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c
+++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c
@@ -82,8 +82,6 @@
mdp_irq_register(&get_kms(crtc)->base, &mdp5_crtc->vblank);
}
-#define mdp5_lm_get_flush(lm) mdp_ctl_flush_mask_lm(lm)
-
static void crtc_flush(struct drm_crtc *crtc, u32 flush_mask)
{
struct mdp5_crtc *mdp5_crtc = to_mdp5_crtc(crtc);
@@ -110,8 +108,8 @@
drm_atomic_crtc_for_each_plane(plane, crtc) {
flush_mask |= mdp5_plane_get_flush(plane);
}
- flush_mask |= mdp5_ctl_get_flush(mdp5_crtc->ctl);
- flush_mask |= mdp5_lm_get_flush(mdp5_crtc->lm);
+
+ flush_mask |= mdp_ctl_flush_mask_lm(mdp5_crtc->lm);
crtc_flush(crtc, flush_mask);
}
@@ -298,8 +296,6 @@
mdp5_enable(mdp5_kms);
mdp_irq_register(&mdp5_kms->base, &mdp5_crtc->err);
- crtc_flush_all(crtc);
-
mdp5_crtc->enabled = true;
}
@@ -444,13 +440,14 @@
struct mdp5_crtc *mdp5_crtc = to_mdp5_crtc(crtc);
struct drm_device *dev = crtc->dev;
struct mdp5_kms *mdp5_kms = get_kms(crtc);
- struct drm_gem_object *cursor_bo, *old_bo;
+ struct drm_gem_object *cursor_bo, *old_bo = NULL;
uint32_t blendcfg, cursor_addr, stride;
int ret, bpp, lm;
unsigned int depth;
enum mdp5_cursor_alpha cur_alpha = CURSOR_ALPHA_PER_PIXEL;
uint32_t flush_mask = mdp_ctl_flush_mask_cursor(0);
uint32_t roi_w, roi_h;
+ bool cursor_enable = true;
unsigned long flags;
if ((width > CURSOR_WIDTH) || (height > CURSOR_HEIGHT)) {
@@ -463,7 +460,8 @@
if (!handle) {
DBG("Cursor off");
- return mdp5_ctl_set_cursor(mdp5_crtc->ctl, false);
+ cursor_enable = false;
+ goto set_cursor;
}
cursor_bo = drm_gem_object_lookup(dev, file, handle);
@@ -504,11 +502,14 @@
spin_unlock_irqrestore(&mdp5_crtc->cursor.lock, flags);
- ret = mdp5_ctl_set_cursor(mdp5_crtc->ctl, true);
- if (ret)
+set_cursor:
+ ret = mdp5_ctl_set_cursor(mdp5_crtc->ctl, 0, cursor_enable);
+ if (ret) {
+ dev_err(dev->dev, "failed to %sable cursor: %d\n",
+ cursor_enable ? "en" : "dis", ret);
goto end;
+ }
- flush_mask |= mdp5_ctl_get_flush(mdp5_crtc->ctl);
crtc_flush(crtc, flush_mask);
end:
@@ -613,64 +614,39 @@
}
/* set interface for routing crtc->encoder: */
-void mdp5_crtc_set_intf(struct drm_crtc *crtc, int intf,
- enum mdp5_intf intf_id)
+void mdp5_crtc_set_intf(struct drm_crtc *crtc, struct mdp5_interface *intf)
{
struct mdp5_crtc *mdp5_crtc = to_mdp5_crtc(crtc);
struct mdp5_kms *mdp5_kms = get_kms(crtc);
- uint32_t flush_mask = 0;
- uint32_t intf_sel;
- unsigned long flags;
+ int lm = mdp5_crtc_get_lm(crtc);
/* now that we know what irq's we want: */
- mdp5_crtc->err.irqmask = intf2err(intf);
- mdp5_crtc->vblank.irqmask = intf2vblank(intf);
+ mdp5_crtc->err.irqmask = intf2err(intf->num);
+
+ /* Register command mode Pingpong done as vblank for now,
+ * so that atomic commit waits for it to finish.
+ * Ideally, in the future, we should take rd_ptr done as vblank,
+ * and let atomic commit wait for pingpong done for command mode.
+ */
+ if (intf->mode == MDP5_INTF_DSI_MODE_COMMAND)
+ mdp5_crtc->vblank.irqmask = lm2ppdone(lm);
+ else
+ mdp5_crtc->vblank.irqmask = intf2vblank(lm, intf);
mdp_irq_update(&mdp5_kms->base);
- spin_lock_irqsave(&mdp5_kms->resource_lock, flags);
- intf_sel = mdp5_read(mdp5_kms, REG_MDP5_DISP_INTF_SEL);
-
- switch (intf) {
- case 0:
- intf_sel &= ~MDP5_DISP_INTF_SEL_INTF0__MASK;
- intf_sel |= MDP5_DISP_INTF_SEL_INTF0(intf_id);
- break;
- case 1:
- intf_sel &= ~MDP5_DISP_INTF_SEL_INTF1__MASK;
- intf_sel |= MDP5_DISP_INTF_SEL_INTF1(intf_id);
- break;
- case 2:
- intf_sel &= ~MDP5_DISP_INTF_SEL_INTF2__MASK;
- intf_sel |= MDP5_DISP_INTF_SEL_INTF2(intf_id);
- break;
- case 3:
- intf_sel &= ~MDP5_DISP_INTF_SEL_INTF3__MASK;
- intf_sel |= MDP5_DISP_INTF_SEL_INTF3(intf_id);
- break;
- default:
- BUG();
- break;
- }
-
- mdp5_write(mdp5_kms, REG_MDP5_DISP_INTF_SEL, intf_sel);
- spin_unlock_irqrestore(&mdp5_kms->resource_lock, flags);
-
- DBG("%s: intf_sel=%08x", mdp5_crtc->name, intf_sel);
mdp5_ctl_set_intf(mdp5_crtc->ctl, intf);
- flush_mask |= mdp5_ctl_get_flush(mdp5_crtc->ctl);
- flush_mask |= mdp5_lm_get_flush(mdp5_crtc->lm);
-
- crtc_flush(crtc, flush_mask);
}
int mdp5_crtc_get_lm(struct drm_crtc *crtc)
{
struct mdp5_crtc *mdp5_crtc = to_mdp5_crtc(crtc);
+ return WARN_ON(!crtc) ? -EINVAL : mdp5_crtc->lm;
+}
- if (WARN_ON(!crtc))
- return -EINVAL;
-
- return mdp5_crtc->lm;
+struct mdp5_ctl *mdp5_crtc_get_ctl(struct drm_crtc *crtc)
+{
+ struct mdp5_crtc *mdp5_crtc = to_mdp5_crtc(crtc);
+ return WARN_ON(!crtc) ? NULL : mdp5_crtc->ctl;
}
/* initialize crtc */
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.c
index 1511290..5488b68 100644
--- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.c
+++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.c
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2014 The Linux Foundation. All rights reserved.
+ * Copyright (c) 2014-2015 The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -33,23 +33,31 @@
* requested by the client (in mdp5_crtc_mode_set()).
*/
+struct op_mode {
+ struct mdp5_interface intf;
+
+ bool encoder_enabled;
+ uint32_t start_mask;
+};
+
struct mdp5_ctl {
struct mdp5_ctl_manager *ctlm;
u32 id;
+ int lm;
/* whether this CTL has been allocated or not: */
bool busy;
- /* memory output connection (@see mdp5_ctl_mode): */
- u32 mode;
+ /* Operation Mode Configuration for the Pipeline */
+ struct op_mode pipeline;
/* REG_MDP5_CTL_*(<id>) registers access info + lock: */
spinlock_t hw_lock;
u32 reg_offset;
- /* flush mask used to commit CTL registers */
- u32 flush_mask;
+ /* when do CTL registers need to be flushed? (mask of trigger bits) */
+ u32 pending_ctl_trigger;
bool cursor_on;
@@ -63,6 +71,9 @@
u32 nlm;
u32 nctl;
+ /* to filter out non-present bits in the current hardware config */
+ u32 flush_hw_mask;
+
/* pool of CTLs + lock to protect resource allocation (ctls[i].busy) */
spinlock_t pool_lock;
struct mdp5_ctl ctls[MAX_CTL];
@@ -94,31 +105,172 @@
return mdp5_read(mdp5_kms, reg);
}
-
-int mdp5_ctl_set_intf(struct mdp5_ctl *ctl, int intf)
+static void set_display_intf(struct mdp5_kms *mdp5_kms,
+ struct mdp5_interface *intf)
{
unsigned long flags;
- static const enum mdp5_intfnum intfnum[] = {
- INTF0, INTF1, INTF2, INTF3,
- };
+ u32 intf_sel;
+
+ spin_lock_irqsave(&mdp5_kms->resource_lock, flags);
+ intf_sel = mdp5_read(mdp5_kms, REG_MDP5_MDP_DISP_INTF_SEL(0));
+
+ switch (intf->num) {
+ case 0:
+ intf_sel &= ~MDP5_MDP_DISP_INTF_SEL_INTF0__MASK;
+ intf_sel |= MDP5_MDP_DISP_INTF_SEL_INTF0(intf->type);
+ break;
+ case 1:
+ intf_sel &= ~MDP5_MDP_DISP_INTF_SEL_INTF1__MASK;
+ intf_sel |= MDP5_MDP_DISP_INTF_SEL_INTF1(intf->type);
+ break;
+ case 2:
+ intf_sel &= ~MDP5_MDP_DISP_INTF_SEL_INTF2__MASK;
+ intf_sel |= MDP5_MDP_DISP_INTF_SEL_INTF2(intf->type);
+ break;
+ case 3:
+ intf_sel &= ~MDP5_MDP_DISP_INTF_SEL_INTF3__MASK;
+ intf_sel |= MDP5_MDP_DISP_INTF_SEL_INTF3(intf->type);
+ break;
+ default:
+ BUG();
+ break;
+ }
+
+ mdp5_write(mdp5_kms, REG_MDP5_MDP_DISP_INTF_SEL(0), intf_sel);
+ spin_unlock_irqrestore(&mdp5_kms->resource_lock, flags);
+}
+
+static void set_ctl_op(struct mdp5_ctl *ctl, struct mdp5_interface *intf)
+{
+ unsigned long flags;
+ u32 ctl_op = 0;
+
+ if (!mdp5_cfg_intf_is_virtual(intf->type))
+ ctl_op |= MDP5_CTL_OP_INTF_NUM(INTF0 + intf->num);
+
+ switch (intf->type) {
+ case INTF_DSI:
+ if (intf->mode == MDP5_INTF_DSI_MODE_COMMAND)
+ ctl_op |= MDP5_CTL_OP_CMD_MODE;
+ break;
+
+ case INTF_WB:
+ if (intf->mode == MDP5_INTF_WB_MODE_LINE)
+ ctl_op |= MDP5_CTL_OP_MODE(MODE_WB_2_LINE);
+ break;
+
+ default:
+ break;
+ }
spin_lock_irqsave(&ctl->hw_lock, flags);
- ctl_write(ctl, REG_MDP5_CTL_OP(ctl->id),
- MDP5_CTL_OP_MODE(ctl->mode) |
- MDP5_CTL_OP_INTF_NUM(intfnum[intf]));
+ ctl_write(ctl, REG_MDP5_CTL_OP(ctl->id), ctl_op);
spin_unlock_irqrestore(&ctl->hw_lock, flags);
+}
+
+int mdp5_ctl_set_intf(struct mdp5_ctl *ctl, struct mdp5_interface *intf)
+{
+ struct mdp5_ctl_manager *ctl_mgr = ctl->ctlm;
+ struct mdp5_kms *mdp5_kms = get_kms(ctl_mgr);
+
+ memcpy(&ctl->pipeline.intf, intf, sizeof(*intf));
+
+ ctl->pipeline.start_mask = mdp_ctl_flush_mask_lm(ctl->lm) |
+ mdp_ctl_flush_mask_encoder(intf);
+
+ /* Virtual interfaces need not set a display intf (e.g.: Writeback) */
+ if (!mdp5_cfg_intf_is_virtual(intf->type))
+ set_display_intf(mdp5_kms, intf);
+
+ set_ctl_op(ctl, intf);
return 0;
}
-int mdp5_ctl_set_cursor(struct mdp5_ctl *ctl, bool enable)
+static bool start_signal_needed(struct mdp5_ctl *ctl)
+{
+ struct op_mode *pipeline = &ctl->pipeline;
+
+ if (!pipeline->encoder_enabled || pipeline->start_mask != 0)
+ return false;
+
+ switch (pipeline->intf.type) {
+ case INTF_WB:
+ return true;
+ case INTF_DSI:
+ return pipeline->intf.mode == MDP5_INTF_DSI_MODE_COMMAND;
+ default:
+ return false;
+ }
+}
+
+/*
+ * send_start_signal() - Overlay Processor Start Signal
+ *
+ * For a given control operation (display pipeline), a START signal needs to be
+ * executed in order to kick off operation and activate all layers.
+ * e.g.: DSI command mode, Writeback
+ */
+static void send_start_signal(struct mdp5_ctl *ctl)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&ctl->hw_lock, flags);
+ ctl_write(ctl, REG_MDP5_CTL_START(ctl->id), 1);
+ spin_unlock_irqrestore(&ctl->hw_lock, flags);
+}
+
+static void refill_start_mask(struct mdp5_ctl *ctl)
+{
+ struct op_mode *pipeline = &ctl->pipeline;
+ struct mdp5_interface *intf = &ctl->pipeline.intf;
+
+ pipeline->start_mask = mdp_ctl_flush_mask_lm(ctl->lm);
+
+ /*
+ * Writeback encoder needs to program & flush
+ * address registers for each page flip..
+ */
+ if (intf->type == INTF_WB)
+ pipeline->start_mask |= mdp_ctl_flush_mask_encoder(intf);
+}
+
+/**
+ * mdp5_ctl_set_encoder_state() - set the encoder state
+ *
+ * @enabled: true, when encoder is ready for data streaming; false, otherwise.
+ *
+ * Note:
+ * This encoder state is needed to trigger START signal (data path kickoff).
+ */
+int mdp5_ctl_set_encoder_state(struct mdp5_ctl *ctl, bool enabled)
+{
+ if (WARN_ON(!ctl))
+ return -EINVAL;
+
+ ctl->pipeline.encoder_enabled = enabled;
+ DBG("intf_%d: %s", ctl->pipeline.intf.num, enabled ? "on" : "off");
+
+ if (start_signal_needed(ctl)) {
+ send_start_signal(ctl);
+ refill_start_mask(ctl);
+ }
+
+ return 0;
+}
+
+/*
+ * Note:
+ * CTL registers need to be flushed after calling this function
+ * (call mdp5_ctl_commit() with mdp_ctl_flush_mask_ctl() mask)
+ */
+int mdp5_ctl_set_cursor(struct mdp5_ctl *ctl, int cursor_id, bool enable)
{
struct mdp5_ctl_manager *ctl_mgr = ctl->ctlm;
unsigned long flags;
u32 blend_cfg;
- int lm;
+ int lm = ctl->lm;
- lm = mdp5_crtc_get_lm(ctl->crtc);
if (unlikely(WARN_ON(lm < 0))) {
dev_err(ctl_mgr->dev->dev, "CTL %d cannot find LM: %d",
ctl->id, lm);
@@ -138,12 +290,12 @@
spin_unlock_irqrestore(&ctl->hw_lock, flags);
+ ctl->pending_ctl_trigger = mdp_ctl_flush_mask_cursor(cursor_id);
ctl->cursor_on = enable;
return 0;
}
-
int mdp5_ctl_blend(struct mdp5_ctl *ctl, u32 lm, u32 blend_cfg)
{
unsigned long flags;
@@ -157,39 +309,124 @@
ctl_write(ctl, REG_MDP5_CTL_LAYER_REG(ctl->id, lm), blend_cfg);
spin_unlock_irqrestore(&ctl->hw_lock, flags);
+ ctl->pending_ctl_trigger = mdp_ctl_flush_mask_lm(lm);
+
return 0;
}
+u32 mdp_ctl_flush_mask_encoder(struct mdp5_interface *intf)
+{
+ if (intf->type == INTF_WB)
+ return MDP5_CTL_FLUSH_WB;
+
+ switch (intf->num) {
+ case 0: return MDP5_CTL_FLUSH_TIMING_0;
+ case 1: return MDP5_CTL_FLUSH_TIMING_1;
+ case 2: return MDP5_CTL_FLUSH_TIMING_2;
+ case 3: return MDP5_CTL_FLUSH_TIMING_3;
+ default: return 0;
+ }
+}
+
+u32 mdp_ctl_flush_mask_cursor(int cursor_id)
+{
+ switch (cursor_id) {
+ case 0: return MDP5_CTL_FLUSH_CURSOR_0;
+ case 1: return MDP5_CTL_FLUSH_CURSOR_1;
+ default: return 0;
+ }
+}
+
+u32 mdp_ctl_flush_mask_pipe(enum mdp5_pipe pipe)
+{
+ switch (pipe) {
+ case SSPP_VIG0: return MDP5_CTL_FLUSH_VIG0;
+ case SSPP_VIG1: return MDP5_CTL_FLUSH_VIG1;
+ case SSPP_VIG2: return MDP5_CTL_FLUSH_VIG2;
+ case SSPP_RGB0: return MDP5_CTL_FLUSH_RGB0;
+ case SSPP_RGB1: return MDP5_CTL_FLUSH_RGB1;
+ case SSPP_RGB2: return MDP5_CTL_FLUSH_RGB2;
+ case SSPP_DMA0: return MDP5_CTL_FLUSH_DMA0;
+ case SSPP_DMA1: return MDP5_CTL_FLUSH_DMA1;
+ case SSPP_VIG3: return MDP5_CTL_FLUSH_VIG3;
+ case SSPP_RGB3: return MDP5_CTL_FLUSH_RGB3;
+ default: return 0;
+ }
+}
+
+u32 mdp_ctl_flush_mask_lm(int lm)
+{
+ switch (lm) {
+ case 0: return MDP5_CTL_FLUSH_LM0;
+ case 1: return MDP5_CTL_FLUSH_LM1;
+ case 2: return MDP5_CTL_FLUSH_LM2;
+ case 5: return MDP5_CTL_FLUSH_LM5;
+ default: return 0;
+ }
+}
+
+static u32 fix_sw_flush(struct mdp5_ctl *ctl, u32 flush_mask)
+{
+ struct mdp5_ctl_manager *ctl_mgr = ctl->ctlm;
+ u32 sw_mask = 0;
+#define BIT_NEEDS_SW_FIX(bit) \
+ (!(ctl_mgr->flush_hw_mask & bit) && (flush_mask & bit))
+
+ /* for some targets, cursor bit is the same as LM bit */
+ if (BIT_NEEDS_SW_FIX(MDP5_CTL_FLUSH_CURSOR_0))
+ sw_mask |= mdp_ctl_flush_mask_lm(ctl->lm);
+
+ return sw_mask;
+}
+
+/**
+ * mdp5_ctl_commit() - Register Flush
+ *
+ * The flush register is used to indicate several registers are all
+ * programmed, and are safe to update to the back copy of the double
+ * buffered registers.
+ *
+ * Some registers' FLUSH bits are shared when the hardware does not have
+ * dedicated bits for them; handling these is the job of fix_sw_flush().
+ *
+ * CTL registers need to be flushed in some circumstances; if that is the
+ * case, some trigger bits will be present in both flush mask and
+ * ctl->pending_ctl_trigger.
+ */
int mdp5_ctl_commit(struct mdp5_ctl *ctl, u32 flush_mask)
{
struct mdp5_ctl_manager *ctl_mgr = ctl->ctlm;
+ struct op_mode *pipeline = &ctl->pipeline;
unsigned long flags;
- if (flush_mask & MDP5_CTL_FLUSH_CURSOR_DUMMY) {
- int lm = mdp5_crtc_get_lm(ctl->crtc);
+ pipeline->start_mask &= ~flush_mask;
- if (unlikely(WARN_ON(lm < 0))) {
- dev_err(ctl_mgr->dev->dev, "CTL %d cannot find LM: %d",
- ctl->id, lm);
- return -EINVAL;
- }
+ VERB("flush_mask=%x, start_mask=%x, trigger=%x", flush_mask,
+ pipeline->start_mask, ctl->pending_ctl_trigger);
- /* for current targets, cursor bit is the same as LM bit */
- flush_mask |= mdp_ctl_flush_mask_lm(lm);
+ if (ctl->pending_ctl_trigger & flush_mask) {
+ flush_mask |= MDP5_CTL_FLUSH_CTL;
+ ctl->pending_ctl_trigger = 0;
}
- spin_lock_irqsave(&ctl->hw_lock, flags);
- ctl_write(ctl, REG_MDP5_CTL_FLUSH(ctl->id), flush_mask);
- spin_unlock_irqrestore(&ctl->hw_lock, flags);
+ flush_mask |= fix_sw_flush(ctl, flush_mask);
+
+ flush_mask &= ctl_mgr->flush_hw_mask;
+
+ if (flush_mask) {
+ spin_lock_irqsave(&ctl->hw_lock, flags);
+ ctl_write(ctl, REG_MDP5_CTL_FLUSH(ctl->id), flush_mask);
+ spin_unlock_irqrestore(&ctl->hw_lock, flags);
+ }
+
+ if (start_signal_needed(ctl)) {
+ send_start_signal(ctl);
+ refill_start_mask(ctl);
+ }
return 0;
}
-u32 mdp5_ctl_get_flush(struct mdp5_ctl *ctl)
-{
- return ctl->flush_mask;
-}
-
void mdp5_ctl_release(struct mdp5_ctl *ctl)
{
struct mdp5_ctl_manager *ctl_mgr = ctl->ctlm;
@@ -208,6 +445,11 @@
DBG("CTL %d released", ctl->id);
}
+int mdp5_ctl_get_ctl_id(struct mdp5_ctl *ctl)
+{
+ return WARN_ON(!ctl) ? -EINVAL : ctl->id;
+}
+
/*
* mdp5_ctl_request() - CTL dynamic allocation
*
@@ -235,8 +477,10 @@
ctl = &ctl_mgr->ctls[c];
+ ctl->lm = mdp5_crtc_get_lm(crtc);
ctl->crtc = crtc;
ctl->busy = true;
+ ctl->pending_ctl_trigger = 0;
DBG("CTL %d allocated", ctl->id);
unlock:
@@ -267,7 +511,7 @@
void __iomem *mmio_base, const struct mdp5_cfg_hw *hw_cfg)
{
struct mdp5_ctl_manager *ctl_mgr;
- const struct mdp5_sub_block *ctl_cfg = &hw_cfg->ctl;
+ const struct mdp5_ctl_block *ctl_cfg = &hw_cfg->ctl;
unsigned long flags;
int c, ret;
@@ -289,6 +533,7 @@
ctl_mgr->dev = dev;
ctl_mgr->nlm = hw_cfg->lm.count;
ctl_mgr->nctl = ctl_cfg->count;
+ ctl_mgr->flush_hw_mask = ctl_cfg->flush_hw_mask;
spin_lock_init(&ctl_mgr->pool_lock);
/* initialize each CTL of the pool: */
@@ -303,9 +548,7 @@
}
ctl->ctlm = ctl_mgr;
ctl->id = c;
- ctl->mode = MODE_NONE;
ctl->reg_offset = ctl_cfg->base[c];
- ctl->flush_mask = MDP5_CTL_FLUSH_CTL;
ctl->busy = false;
spin_lock_init(&ctl->hw_lock);
}
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.h b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.h
index ad48788..7a62000 100644
--- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.h
+++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.h
@@ -33,19 +33,13 @@
* which is then used to call the other mdp5_ctl_*(ctl, ...) functions.
*/
struct mdp5_ctl *mdp5_ctlm_request(struct mdp5_ctl_manager *ctlm, struct drm_crtc *crtc);
+int mdp5_ctl_get_ctl_id(struct mdp5_ctl *ctl);
-int mdp5_ctl_set_intf(struct mdp5_ctl *ctl, int intf);
+struct mdp5_interface;
+int mdp5_ctl_set_intf(struct mdp5_ctl *ctl, struct mdp5_interface *intf);
+int mdp5_ctl_set_encoder_state(struct mdp5_ctl *ctl, bool enabled);
-int mdp5_ctl_set_cursor(struct mdp5_ctl *ctl, bool enable);
-
-/* @blend_cfg: see LM blender config definition below */
-int mdp5_ctl_blend(struct mdp5_ctl *ctl, u32 lm, u32 blend_cfg);
-
-/* @flush_mask: see CTL flush masks definitions below */
-int mdp5_ctl_commit(struct mdp5_ctl *ctl, u32 flush_mask);
-u32 mdp5_ctl_get_flush(struct mdp5_ctl *ctl);
-
-void mdp5_ctl_release(struct mdp5_ctl *ctl);
+int mdp5_ctl_set_cursor(struct mdp5_ctl *ctl, int cursor_id, bool enable);
/*
* blend_cfg (LM blender config):
@@ -72,51 +66,32 @@
}
/*
- * flush_mask (CTL flush masks):
+ * mdp5_ctl_blend() - Blend multiple layers on a Layer Mixer (LM)
*
- * The following functions allow each DRM entity to get and store
- * their own flush mask.
- * Once stored, these masks will then be accessed through each DRM's
- * interface and used by the caller of mdp5_ctl_commit() to specify
- * which block(s) need to be flushed through @flush_mask parameter.
+ * @blend_cfg: see LM blender config definition below
+ *
+ * Note:
+ * CTL registers need to be flushed after calling this function
+ * (call mdp5_ctl_commit() with mdp_ctl_flush_mask_ctl() mask)
*/
+int mdp5_ctl_blend(struct mdp5_ctl *ctl, u32 lm, u32 blend_cfg);
-#define MDP5_CTL_FLUSH_CURSOR_DUMMY 0x80000000
+/**
+ * mdp_ctl_flush_mask...() - Register FLUSH masks
+ *
+ * These masks are used to specify which block(s) need to be flushed
+ * through @flush_mask parameter in mdp5_ctl_commit(.., flush_mask).
+ */
+u32 mdp_ctl_flush_mask_lm(int lm);
+u32 mdp_ctl_flush_mask_pipe(enum mdp5_pipe pipe);
+u32 mdp_ctl_flush_mask_cursor(int cursor_id);
+u32 mdp_ctl_flush_mask_encoder(struct mdp5_interface *intf);
-static inline u32 mdp_ctl_flush_mask_cursor(int cursor_id)
-{
- /* TODO: use id once multiple cursor support is present */
- (void)cursor_id;
+/* @flush_mask: see CTL flush masks definitions below */
+int mdp5_ctl_commit(struct mdp5_ctl *ctl, u32 flush_mask);
- return MDP5_CTL_FLUSH_CURSOR_DUMMY;
-}
+void mdp5_ctl_release(struct mdp5_ctl *ctl);
-static inline u32 mdp_ctl_flush_mask_lm(int lm)
-{
- switch (lm) {
- case 0: return MDP5_CTL_FLUSH_LM0;
- case 1: return MDP5_CTL_FLUSH_LM1;
- case 2: return MDP5_CTL_FLUSH_LM2;
- case 5: return MDP5_CTL_FLUSH_LM5;
- default: return 0;
- }
-}
-static inline u32 mdp_ctl_flush_mask_pipe(enum mdp5_pipe pipe)
-{
- switch (pipe) {
- case SSPP_VIG0: return MDP5_CTL_FLUSH_VIG0;
- case SSPP_VIG1: return MDP5_CTL_FLUSH_VIG1;
- case SSPP_VIG2: return MDP5_CTL_FLUSH_VIG2;
- case SSPP_RGB0: return MDP5_CTL_FLUSH_RGB0;
- case SSPP_RGB1: return MDP5_CTL_FLUSH_RGB1;
- case SSPP_RGB2: return MDP5_CTL_FLUSH_RGB2;
- case SSPP_DMA0: return MDP5_CTL_FLUSH_DMA0;
- case SSPP_DMA1: return MDP5_CTL_FLUSH_DMA1;
- case SSPP_VIG3: return MDP5_CTL_FLUSH_VIG3;
- case SSPP_RGB3: return MDP5_CTL_FLUSH_RGB3;
- default: return 0;
- }
-}
#endif /* __MDP5_CTL_H__ */
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_encoder.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_encoder.c
index af0e02f..1188f4b 100644
--- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_encoder.c
+++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_encoder.c
@@ -23,8 +23,7 @@
struct mdp5_encoder {
struct drm_encoder base;
- int intf;
- enum mdp5_intf intf_id;
+ struct mdp5_interface intf;
spinlock_t intf_lock; /* protect REG_MDP5_INTF_* registers */
bool enabled;
uint32_t bsc;
@@ -126,7 +125,7 @@
struct mdp5_kms *mdp5_kms = get_kms(encoder);
struct drm_device *dev = encoder->dev;
struct drm_connector *connector;
- int intf = mdp5_encoder->intf;
+ int intf = mdp5_encoder->intf.num;
uint32_t dtv_hsync_skew, vsync_period, vsync_len, ctrl_pol;
uint32_t display_v_start, display_v_end;
uint32_t hsync_start_x, hsync_end_x;
@@ -188,7 +187,7 @@
* DISPLAY_V_START = (VBP * HCYCLE) + HBP
* DISPLAY_V_END = (VBP + VACTIVE) * HCYCLE - 1 - HFP
*/
- if (mdp5_encoder->intf_id == INTF_eDP) {
+ if (mdp5_encoder->intf.type == INTF_eDP) {
display_v_start += mode->htotal - mode->hsync_start;
display_v_end -= mode->hsync_start - mode->hdisplay;
}
@@ -218,21 +217,29 @@
mdp5_write(mdp5_kms, REG_MDP5_INTF_FRAME_LINE_COUNT_EN(intf), 0x3); /* frame+line? */
spin_unlock_irqrestore(&mdp5_encoder->intf_lock, flags);
+
+ mdp5_crtc_set_intf(encoder->crtc, &mdp5_encoder->intf);
}
static void mdp5_encoder_disable(struct drm_encoder *encoder)
{
struct mdp5_encoder *mdp5_encoder = to_mdp5_encoder(encoder);
struct mdp5_kms *mdp5_kms = get_kms(encoder);
- int intf = mdp5_encoder->intf;
+ struct mdp5_ctl *ctl = mdp5_crtc_get_ctl(encoder->crtc);
+ int lm = mdp5_crtc_get_lm(encoder->crtc);
+ struct mdp5_interface *intf = &mdp5_encoder->intf;
+ int intfn = mdp5_encoder->intf.num;
unsigned long flags;
if (WARN_ON(!mdp5_encoder->enabled))
return;
+ mdp5_ctl_set_encoder_state(ctl, false);
+
spin_lock_irqsave(&mdp5_encoder->intf_lock, flags);
- mdp5_write(mdp5_kms, REG_MDP5_INTF_TIMING_ENGINE_EN(intf), 0);
+ mdp5_write(mdp5_kms, REG_MDP5_INTF_TIMING_ENGINE_EN(intfn), 0);
spin_unlock_irqrestore(&mdp5_encoder->intf_lock, flags);
+ mdp5_ctl_commit(ctl, mdp_ctl_flush_mask_encoder(intf));
/*
* Wait for a vsync so we know the ENABLE=0 latched before
@@ -242,7 +249,7 @@
* the settings changes for the new modeset (like new
* scanout buffer) don't latch properly..
*/
- mdp_irq_wait(&mdp5_kms->base, intf2vblank(intf));
+ mdp_irq_wait(&mdp5_kms->base, intf2vblank(lm, intf));
bs_set(mdp5_encoder, 0);
@@ -253,19 +260,21 @@
{
struct mdp5_encoder *mdp5_encoder = to_mdp5_encoder(encoder);
struct mdp5_kms *mdp5_kms = get_kms(encoder);
- int intf = mdp5_encoder->intf;
+ struct mdp5_ctl *ctl = mdp5_crtc_get_ctl(encoder->crtc);
+ struct mdp5_interface *intf = &mdp5_encoder->intf;
+ int intfn = mdp5_encoder->intf.num;
unsigned long flags;
if (WARN_ON(mdp5_encoder->enabled))
return;
- mdp5_crtc_set_intf(encoder->crtc, mdp5_encoder->intf,
- mdp5_encoder->intf_id);
-
bs_set(mdp5_encoder, 1);
spin_lock_irqsave(&mdp5_encoder->intf_lock, flags);
- mdp5_write(mdp5_kms, REG_MDP5_INTF_TIMING_ENGINE_EN(intf), 1);
+ mdp5_write(mdp5_kms, REG_MDP5_INTF_TIMING_ENGINE_EN(intfn), 1);
spin_unlock_irqrestore(&mdp5_encoder->intf_lock, flags);
+ mdp5_ctl_commit(ctl, mdp_ctl_flush_mask_encoder(intf));
+
+ mdp5_ctl_set_encoder_state(ctl, true);
mdp5_encoder->enabled = true;
}
@@ -277,12 +286,51 @@
.enable = mdp5_encoder_enable,
};
+int mdp5_encoder_set_split_display(struct drm_encoder *encoder,
+ struct drm_encoder *slave_encoder)
+{
+ struct mdp5_encoder *mdp5_encoder = to_mdp5_encoder(encoder);
+ struct mdp5_kms *mdp5_kms;
+ int intf_num;
+ u32 data = 0;
+
+ if (!encoder || !slave_encoder)
+ return -EINVAL;
+
+ mdp5_kms = get_kms(encoder);
+ intf_num = mdp5_encoder->intf.num;
+
+ /* Switch slave encoder's TimingGen Sync mode,
+ * to use the master's enable signal for the slave encoder.
+ */
+ if (intf_num == 1)
+ data |= MDP5_SPLIT_DPL_LOWER_INTF2_TG_SYNC;
+ else if (intf_num == 2)
+ data |= MDP5_SPLIT_DPL_LOWER_INTF1_TG_SYNC;
+ else
+ return -EINVAL;
+
+ /* Make sure clocks are on when connectors call this function. */
+ mdp5_enable(mdp5_kms);
+ mdp5_write(mdp5_kms, REG_MDP5_MDP_SPARE_0(0),
+ MDP5_MDP_SPARE_0_SPLIT_DPL_SINGLE_FLUSH_EN);
+ /* Dumb Panel, Sync mode */
+ mdp5_write(mdp5_kms, REG_MDP5_SPLIT_DPL_UPPER, 0);
+ mdp5_write(mdp5_kms, REG_MDP5_SPLIT_DPL_LOWER, data);
+ mdp5_write(mdp5_kms, REG_MDP5_SPLIT_DPL_EN, 1);
+ mdp5_disable(mdp5_kms);
+
+ return 0;
+}
+
/* initialize encoder */
-struct drm_encoder *mdp5_encoder_init(struct drm_device *dev, int intf,
- enum mdp5_intf intf_id)
+struct drm_encoder *mdp5_encoder_init(struct drm_device *dev,
+ struct mdp5_interface *intf)
{
struct drm_encoder *encoder = NULL;
struct mdp5_encoder *mdp5_encoder;
+ int enc_type = (intf->type == INTF_DSI) ?
+ DRM_MODE_ENCODER_DSI : DRM_MODE_ENCODER_TMDS;
int ret;
mdp5_encoder = kzalloc(sizeof(*mdp5_encoder), GFP_KERNEL);
@@ -291,14 +339,13 @@
goto fail;
}
- mdp5_encoder->intf = intf;
- mdp5_encoder->intf_id = intf_id;
+ memcpy(&mdp5_encoder->intf, intf, sizeof(mdp5_encoder->intf));
encoder = &mdp5_encoder->base;
spin_lock_init(&mdp5_encoder->intf_lock);
- drm_encoder_init(dev, encoder, &mdp5_encoder_funcs,
- DRM_MODE_ENCODER_TMDS);
+ drm_encoder_init(dev, encoder, &mdp5_encoder_funcs, enc_type);
+
drm_encoder_helper_add(encoder, &mdp5_encoder_helper_funcs);
bs_init(mdp5_encoder);
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_irq.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_irq.c
index a940710..33bd4c6 100644
--- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_irq.c
+++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_irq.c
@@ -23,7 +23,7 @@
void mdp5_set_irqmask(struct mdp_kms *mdp_kms, uint32_t irqmask)
{
- mdp5_write(to_mdp5_kms(mdp_kms), REG_MDP5_INTR_EN, irqmask);
+ mdp5_write(to_mdp5_kms(mdp_kms), REG_MDP5_MDP_INTR_EN(0), irqmask);
}
static void mdp5_irq_error_handler(struct mdp_irq *irq, uint32_t irqstatus)
@@ -35,8 +35,8 @@
{
struct mdp5_kms *mdp5_kms = to_mdp5_kms(to_mdp_kms(kms));
mdp5_enable(mdp5_kms);
- mdp5_write(mdp5_kms, REG_MDP5_INTR_CLEAR, 0xffffffff);
- mdp5_write(mdp5_kms, REG_MDP5_INTR_EN, 0x00000000);
+ mdp5_write(mdp5_kms, REG_MDP5_MDP_INTR_CLEAR(0), 0xffffffff);
+ mdp5_write(mdp5_kms, REG_MDP5_MDP_INTR_EN(0), 0x00000000);
mdp5_disable(mdp5_kms);
}
@@ -61,7 +61,7 @@
{
struct mdp5_kms *mdp5_kms = to_mdp5_kms(to_mdp_kms(kms));
mdp5_enable(mdp5_kms);
- mdp5_write(mdp5_kms, REG_MDP5_INTR_EN, 0x00000000);
+ mdp5_write(mdp5_kms, REG_MDP5_MDP_INTR_EN(0), 0x00000000);
mdp5_disable(mdp5_kms);
}
@@ -73,8 +73,8 @@
unsigned int id;
uint32_t status;
- status = mdp5_read(mdp5_kms, REG_MDP5_INTR_STATUS);
- mdp5_write(mdp5_kms, REG_MDP5_INTR_CLEAR, status);
+ status = mdp5_read(mdp5_kms, REG_MDP5_MDP_INTR_STATUS(0));
+ mdp5_write(mdp5_kms, REG_MDP5_MDP_INTR_CLEAR(0), status);
VERB("status=%08x", status);
@@ -91,13 +91,13 @@
struct mdp5_kms *mdp5_kms = to_mdp5_kms(mdp_kms);
uint32_t intr;
- intr = mdp5_read(mdp5_kms, REG_MDP5_HW_INTR_STATUS);
+ intr = mdp5_read(mdp5_kms, REG_MDSS_HW_INTR_STATUS);
VERB("intr=%08x", intr);
- if (intr & MDP5_HW_INTR_STATUS_INTR_MDP) {
+ if (intr & MDSS_HW_INTR_STATUS_INTR_MDP) {
mdp5_irq_mdp(mdp_kms);
- intr &= ~MDP5_HW_INTR_STATUS_INTR_MDP;
+ intr &= ~MDSS_HW_INTR_STATUS_INTR_MDP;
}
while (intr) {
@@ -128,10 +128,10 @@
* can register to get their irq's delivered
*/
-#define VALID_IRQS (MDP5_HW_INTR_STATUS_INTR_DSI0 | \
- MDP5_HW_INTR_STATUS_INTR_DSI1 | \
- MDP5_HW_INTR_STATUS_INTR_HDMI | \
- MDP5_HW_INTR_STATUS_INTR_EDP)
+#define VALID_IRQS (MDSS_HW_INTR_STATUS_INTR_DSI0 | \
+ MDSS_HW_INTR_STATUS_INTR_DSI1 | \
+ MDSS_HW_INTR_STATUS_INTR_HDMI | \
+ MDSS_HW_INTR_STATUS_INTR_EDP)
static void mdp5_hw_mask_irq(struct irq_data *irqd)
{
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.c
index 92b61db..dfa8beb 100644
--- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.c
+++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.c
@@ -58,7 +58,7 @@
*/
spin_lock_irqsave(&mdp5_kms->resource_lock, flags);
- mdp5_write(mdp5_kms, REG_MDP5_DISP_INTF_SEL, 0);
+ mdp5_write(mdp5_kms, REG_MDP5_MDP_DISP_INTF_SEL(0), 0);
spin_unlock_irqrestore(&mdp5_kms->resource_lock, flags);
mdp5_ctlm_hw_reset(mdp5_kms->ctlm);
@@ -86,6 +86,18 @@
return rate;
}
+static int mdp5_set_split_display(struct msm_kms *kms,
+ struct drm_encoder *encoder,
+ struct drm_encoder *slave_encoder,
+ bool is_cmd_mode)
+{
+ if (is_cmd_mode)
+ return mdp5_cmd_encoder_set_split_display(encoder,
+ slave_encoder);
+ else
+ return mdp5_encoder_set_split_display(encoder, slave_encoder);
+}
+
static void mdp5_preclose(struct msm_kms *kms, struct drm_file *file)
{
struct mdp5_kms *mdp5_kms = to_mdp5_kms(to_mdp_kms(kms));
@@ -131,6 +143,7 @@
.complete_commit = mdp5_complete_commit,
.get_format = mdp_get_format,
.round_pixclk = mdp5_round_pixclk,
+ .set_split_display = mdp5_set_split_display,
.preclose = mdp5_preclose,
.destroy = mdp5_destroy,
},
@@ -161,6 +174,134 @@
return 0;
}
+static struct drm_encoder *construct_encoder(struct mdp5_kms *mdp5_kms,
+ enum mdp5_intf_type intf_type, int intf_num,
+ enum mdp5_intf_mode intf_mode)
+{
+ struct drm_device *dev = mdp5_kms->dev;
+ struct msm_drm_private *priv = dev->dev_private;
+ struct drm_encoder *encoder;
+ struct mdp5_interface intf = {
+ .num = intf_num,
+ .type = intf_type,
+ .mode = intf_mode,
+ };
+
+ if ((intf_type == INTF_DSI) &&
+ (intf_mode == MDP5_INTF_DSI_MODE_COMMAND))
+ encoder = mdp5_cmd_encoder_init(dev, &intf);
+ else
+ encoder = mdp5_encoder_init(dev, &intf);
+
+ if (IS_ERR(encoder)) {
+ dev_err(dev->dev, "failed to construct encoder\n");
+ return encoder;
+ }
+
+ encoder->possible_crtcs = (1 << priv->num_crtcs) - 1;
+ priv->encoders[priv->num_encoders++] = encoder;
+
+ return encoder;
+}
+
+static int get_dsi_id_from_intf(const struct mdp5_cfg_hw *hw_cfg, int intf_num)
+{
+ const int intf_cnt = hw_cfg->intf.count;
+ const u32 *intfs = hw_cfg->intfs;
+ int id = 0, i;
+
+ for (i = 0; i < intf_cnt; i++) {
+ if (intfs[i] == INTF_DSI) {
+ if (intf_num == i)
+ return id;
+
+ id++;
+ }
+ }
+
+ return -EINVAL;
+}
+
+static int modeset_init_intf(struct mdp5_kms *mdp5_kms, int intf_num)
+{
+ struct drm_device *dev = mdp5_kms->dev;
+ struct msm_drm_private *priv = dev->dev_private;
+ const struct mdp5_cfg_hw *hw_cfg =
+ mdp5_cfg_get_hw_config(mdp5_kms->cfg);
+ enum mdp5_intf_type intf_type = hw_cfg->intfs[intf_num];
+ struct drm_encoder *encoder;
+ int ret = 0;
+
+ switch (intf_type) {
+ case INTF_DISABLED:
+ break;
+ case INTF_eDP:
+ if (!priv->edp)
+ break;
+
+ encoder = construct_encoder(mdp5_kms, INTF_eDP, intf_num,
+ MDP5_INTF_MODE_NONE);
+ if (IS_ERR(encoder)) {
+ ret = PTR_ERR(encoder);
+ break;
+ }
+
+ ret = msm_edp_modeset_init(priv->edp, dev, encoder);
+ break;
+ case INTF_HDMI:
+ if (!priv->hdmi)
+ break;
+
+ encoder = construct_encoder(mdp5_kms, INTF_HDMI, intf_num,
+ MDP5_INTF_MODE_NONE);
+ if (IS_ERR(encoder)) {
+ ret = PTR_ERR(encoder);
+ break;
+ }
+
+ ret = hdmi_modeset_init(priv->hdmi, dev, encoder);
+ break;
+ case INTF_DSI:
+ {
+ int dsi_id = get_dsi_id_from_intf(hw_cfg, intf_num);
+ struct drm_encoder *dsi_encs[MSM_DSI_ENCODER_NUM];
+ enum mdp5_intf_mode mode;
+ int i;
+
+ if ((dsi_id >= ARRAY_SIZE(priv->dsi)) || (dsi_id < 0)) {
+ dev_err(dev->dev, "failed to find dsi from intf %d\n",
+ intf_num);
+ ret = -EINVAL;
+ break;
+ }
+
+ if (!priv->dsi[dsi_id])
+ break;
+
+ for (i = 0; i < MSM_DSI_ENCODER_NUM; i++) {
+ mode = (i == MSM_DSI_CMD_ENCODER_ID) ?
+ MDP5_INTF_DSI_MODE_COMMAND :
+ MDP5_INTF_DSI_MODE_VIDEO;
+ dsi_encs[i] = construct_encoder(mdp5_kms, INTF_DSI,
+ intf_num, mode);
+ if (IS_ERR(dsi_encs[i])) {
+ ret = PTR_ERR(dsi_encs[i]);
+ break;
+ }
+ }
+
+ ret = msm_dsi_modeset_init(priv->dsi[dsi_id], dev, dsi_encs);
+ break;
+ }
+ default:
+ dev_err(dev->dev, "unknown intf: %d\n", intf_type);
+ ret = -EINVAL;
+ break;
+ }
+
+ return ret;
+}
+
static int modeset_init(struct mdp5_kms *mdp5_kms)
{
static const enum mdp5_pipe crtcs[] = {
@@ -171,7 +312,6 @@
};
struct drm_device *dev = mdp5_kms->dev;
struct msm_drm_private *priv = dev->dev_private;
- struct drm_encoder *encoder;
const struct mdp5_cfg_hw *hw_cfg;
int i, ret;
@@ -222,44 +362,13 @@
}
}
- if (priv->hdmi) {
- /* Construct encoder for HDMI: */
- encoder = mdp5_encoder_init(dev, 3, INTF_HDMI);
- if (IS_ERR(encoder)) {
- dev_err(dev->dev, "failed to construct encoder\n");
- ret = PTR_ERR(encoder);
+ /* Construct encoders and modeset initialize connector devices
+ * for each external display interface.
+ */
+ for (i = 0; i < ARRAY_SIZE(hw_cfg->intfs); i++) {
+ ret = modeset_init_intf(mdp5_kms, i);
+ if (ret)
goto fail;
- }
-
- encoder->possible_crtcs = (1 << priv->num_crtcs) - 1;;
- priv->encoders[priv->num_encoders++] = encoder;
-
- ret = hdmi_modeset_init(priv->hdmi, dev, encoder);
- if (ret) {
- dev_err(dev->dev, "failed to initialize HDMI: %d\n", ret);
- goto fail;
- }
- }
-
- if (priv->edp) {
- /* Construct encoder for eDP: */
- encoder = mdp5_encoder_init(dev, 0, INTF_eDP);
- if (IS_ERR(encoder)) {
- dev_err(dev->dev, "failed to construct eDP encoder\n");
- ret = PTR_ERR(encoder);
- goto fail;
- }
-
- encoder->possible_crtcs = (1 << priv->num_crtcs) - 1;
- priv->encoders[priv->num_encoders++] = encoder;
-
- /* Construct bridge/connector for eDP: */
- ret = msm_edp_modeset_init(priv->edp, dev, encoder);
- if (ret) {
- dev_err(dev->dev, "failed to initialize eDP: %d\n",
- ret);
- goto fail;
- }
}
return 0;
@@ -274,11 +383,11 @@
uint32_t version;
mdp5_enable(mdp5_kms);
- version = mdp5_read(mdp5_kms, REG_MDP5_MDP_VERSION);
+ version = mdp5_read(mdp5_kms, REG_MDSS_HW_VERSION);
mdp5_disable(mdp5_kms);
- *major = FIELD(version, MDP5_MDP_VERSION_MAJOR);
- *minor = FIELD(version, MDP5_MDP_VERSION_MINOR);
+ *major = FIELD(version, MDSS_HW_VERSION_MAJOR);
+ *minor = FIELD(version, MDSS_HW_VERSION_MINOR);
DBG("MDP5 version v%d.%d", *major, *minor);
}
@@ -321,6 +430,7 @@
mdp5_kms->dev = dev;
+ /* mdp5_kms->mmio actually represents the MDSS base address */
mdp5_kms->mmio = msm_ioremap(pdev, "mdp_phys", "MDP5");
if (IS_ERR(mdp5_kms->mmio)) {
ret = PTR_ERR(mdp5_kms->mmio);
@@ -403,8 +513,12 @@
* we don't disable):
*/
mdp5_enable(mdp5_kms);
- for (i = 0; i < config->hw->intf.count; i++)
+ for (i = 0; i < MDP5_INTF_NUM_MAX; i++) {
+ if (!config->hw->intf.base[i] ||
+ mdp5_cfg_intf_is_virtual(config->hw->intfs[i]))
+ continue;
mdp5_write(mdp5_kms, REG_MDP5_INTF_TIMING_ENGINE_EN(i), 0);
+ }
mdp5_disable(mdp5_kms);
mdelay(16);
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h
index 49d011e..2c0de17 100644
--- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h
+++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h
@@ -54,7 +54,7 @@
/*
* lock to protect access to global resources: ie., following register:
- * - REG_MDP5_DISP_INTF_SEL
+ * - REG_MDP5_MDP_DISP_INTF_SEL
*/
spinlock_t resource_lock;
@@ -94,6 +94,24 @@
#define to_mdp5_plane_state(x) \
container_of(x, struct mdp5_plane_state, base)
+enum mdp5_intf_mode {
+ MDP5_INTF_MODE_NONE = 0,
+
+ /* Modes used for DSI interface (INTF_DSI type): */
+ MDP5_INTF_DSI_MODE_VIDEO,
+ MDP5_INTF_DSI_MODE_COMMAND,
+
+ /* Modes used for WB interface (INTF_WB type): */
+ MDP5_INTF_WB_MODE_BLOCK,
+ MDP5_INTF_WB_MODE_LINE,
+};
+
+struct mdp5_interface {
+ int num; /* display interface number */
+ enum mdp5_intf_type type;
+ enum mdp5_intf_mode mode;
+};
+
static inline void mdp5_write(struct mdp5_kms *mdp5_kms, u32 reg, u32 data)
{
msm_writel(data, mdp5_kms->mmio + reg);
@@ -130,9 +148,9 @@
}
}
-static inline uint32_t intf2err(int intf)
+static inline uint32_t intf2err(int intf_num)
{
- switch (intf) {
+ switch (intf_num) {
case 0: return MDP5_IRQ_INTF0_UNDER_RUN;
case 1: return MDP5_IRQ_INTF1_UNDER_RUN;
case 2: return MDP5_IRQ_INTF2_UNDER_RUN;
@@ -141,9 +159,23 @@
}
}
-static inline uint32_t intf2vblank(int intf)
+#define GET_PING_PONG_ID(layer_mixer) ((layer_mixer == 5) ? 3 : layer_mixer)
+static inline uint32_t intf2vblank(int lm, struct mdp5_interface *intf)
{
- switch (intf) {
+ /*
+ * In case of DSI Command Mode, the Ping Pong's read pointer IRQ
+ * acts as a Vblank signal. The Ping Pong buffer used is bound to
+ * layer mixer.
+ */
+
+ if ((intf->type == INTF_DSI) &&
+ (intf->mode == MDP5_INTF_DSI_MODE_COMMAND))
+ return MDP5_IRQ_PING_PONG_0_RD_PTR << GET_PING_PONG_ID(lm);
+
+ if (intf->type == INTF_WB)
+ return MDP5_IRQ_WB_2_DONE;
+
+ switch (intf->num) {
case 0: return MDP5_IRQ_INTF0_VSYNC;
case 1: return MDP5_IRQ_INTF1_VSYNC;
case 2: return MDP5_IRQ_INTF2_VSYNC;
@@ -152,6 +184,11 @@
}
}
+static inline uint32_t lm2ppdone(int lm)
+{
+ return MDP5_IRQ_PING_PONG_0_DONE << GET_PING_PONG_ID(lm);
+}
+
int mdp5_disable(struct mdp5_kms *mdp5_kms);
int mdp5_enable(struct mdp5_kms *mdp5_kms);
@@ -197,13 +234,33 @@
uint32_t mdp5_crtc_vblank(struct drm_crtc *crtc);
int mdp5_crtc_get_lm(struct drm_crtc *crtc);
+struct mdp5_ctl *mdp5_crtc_get_ctl(struct drm_crtc *crtc);
void mdp5_crtc_cancel_pending_flip(struct drm_crtc *crtc, struct drm_file *file);
-void mdp5_crtc_set_intf(struct drm_crtc *crtc, int intf,
- enum mdp5_intf intf_id);
+void mdp5_crtc_set_intf(struct drm_crtc *crtc, struct mdp5_interface *intf);
struct drm_crtc *mdp5_crtc_init(struct drm_device *dev,
struct drm_plane *plane, int id);
-struct drm_encoder *mdp5_encoder_init(struct drm_device *dev, int intf,
- enum mdp5_intf intf_id);
+struct drm_encoder *mdp5_encoder_init(struct drm_device *dev,
+ struct mdp5_interface *intf);
+int mdp5_encoder_set_split_display(struct drm_encoder *encoder,
+ struct drm_encoder *slave_encoder);
+
+#ifdef CONFIG_DRM_MSM_DSI
+struct drm_encoder *mdp5_cmd_encoder_init(struct drm_device *dev,
+ struct mdp5_interface *intf);
+int mdp5_cmd_encoder_set_split_display(struct drm_encoder *encoder,
+ struct drm_encoder *slave_encoder);
+#else
+static inline struct drm_encoder *mdp5_cmd_encoder_init(
+ struct drm_device *dev, struct mdp5_interface *intf)
+{
+ return ERR_PTR(-EINVAL);
+}
+static inline int mdp5_cmd_encoder_set_split_display(
+ struct drm_encoder *encoder, struct drm_encoder *slave_encoder)
+{
+ return -EINVAL;
+}
+#endif
#endif /* __MDP5_KMS_H__ */
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_plane.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_plane.c
index 05cf9ab..18a3d20 100644
--- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_plane.c
+++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_plane.c
@@ -156,7 +156,8 @@
};
static int mdp5_plane_prepare_fb(struct drm_plane *plane,
- struct drm_framebuffer *fb)
+ struct drm_framebuffer *fb,
+ const struct drm_plane_state *new_state)
{
struct mdp5_plane *mdp5_plane = to_mdp5_plane(plane);
struct mdp5_kms *mdp5_kms = get_kms(plane);
@@ -166,7 +167,8 @@
}
static void mdp5_plane_cleanup_fb(struct drm_plane *plane,
- struct drm_framebuffer *fb)
+ struct drm_framebuffer *fb,
+ const struct drm_plane_state *old_state)
{
struct mdp5_plane *mdp5_plane = to_mdp5_plane(plane);
struct mdp5_kms *mdp5_kms = get_kms(plane);
@@ -505,8 +507,8 @@
spin_lock_irqsave(&mdp5_plane->pipe_lock, flags);
mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC_IMG_SIZE(pipe),
- MDP5_PIPE_SRC_IMG_SIZE_WIDTH(src_w) |
- MDP5_PIPE_SRC_IMG_SIZE_HEIGHT(src_h));
+ MDP5_PIPE_SRC_IMG_SIZE_WIDTH(fb->width) |
+ MDP5_PIPE_SRC_IMG_SIZE_HEIGHT(fb->height));
mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC_SIZE(pipe),
MDP5_PIPE_SRC_SIZE_WIDTH(src_w) |
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_smp.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_smp.c
index 1f795af..16702ae 100644
--- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_smp.c
+++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_smp.c
@@ -43,7 +43,7 @@
* set.
*
* 2) mdp5_smp_configure():
- * As hw is programmed, before FLUSH, MDP5_SMP_ALLOC registers
+ * As hw is programmed, before FLUSH, MDP5_MDP_SMP_ALLOC registers
* are configured for the union(pending, inuse)
*
* 3) mdp5_smp_commit():
@@ -74,7 +74,7 @@
spinlock_t state_lock;
mdp5_smp_state_t state; /* to track smp allocation amongst pipes: */
- struct mdp5_client_smp_state client_state[CID_MAX];
+ struct mdp5_client_smp_state client_state[MAX_CLIENTS];
};
static inline
@@ -85,27 +85,31 @@
return to_mdp5_kms(to_mdp_kms(priv->kms));
}
-static inline enum mdp5_client_id pipe2client(enum mdp5_pipe pipe, int plane)
+static inline u32 pipe2client(enum mdp5_pipe pipe, int plane)
{
- WARN_ON(plane >= pipe2nclients(pipe));
- switch (pipe) {
- case SSPP_VIG0: return CID_VIG0_Y + plane;
- case SSPP_VIG1: return CID_VIG1_Y + plane;
- case SSPP_VIG2: return CID_VIG2_Y + plane;
- case SSPP_RGB0: return CID_RGB0;
- case SSPP_RGB1: return CID_RGB1;
- case SSPP_RGB2: return CID_RGB2;
- case SSPP_DMA0: return CID_DMA0_Y + plane;
- case SSPP_DMA1: return CID_DMA1_Y + plane;
- case SSPP_VIG3: return CID_VIG3_Y + plane;
- case SSPP_RGB3: return CID_RGB3;
- default: return CID_UNUSED;
- }
+#define CID_UNUSED 0
+
+ if (WARN_ON(plane >= pipe2nclients(pipe)))
+ return CID_UNUSED;
+
+ /*
+ * Note on SMP clients:
+ * For ViG pipes, the fetch clients for the Y/Cr/Cb components are
+ * always consecutive, and in that order.
+ *
+ * e.g.:
+ * if mdp5_cfg->smp.clients[SSPP_VIG0] = N,
+ * Y plane's client ID is N
+ * Cr plane's client ID is N + 1
+ * Cb plane's client ID is N + 2
+ */
+
+ return mdp5_cfg->smp.clients[pipe] + plane;
}
/* step #1: update # of blocks pending for the client: */
static int smp_request_block(struct mdp5_smp *smp,
- enum mdp5_client_id cid, int nblks)
+ u32 cid, int nblks)
{
struct mdp5_kms *mdp5_kms = get_kms(smp);
const struct mdp5_cfg_hw *hw_cfg;
@@ -227,7 +231,7 @@
}
static void update_smp_state(struct mdp5_smp *smp,
- enum mdp5_client_id cid, mdp5_smp_state_t *assigned)
+ u32 cid, mdp5_smp_state_t *assigned)
{
struct mdp5_kms *mdp5_kms = get_kms(smp);
int cnt = smp->blk_cnt;
@@ -237,25 +241,25 @@
int idx = blk / 3;
int fld = blk % 3;
- val = mdp5_read(mdp5_kms, REG_MDP5_SMP_ALLOC_W_REG(idx));
+ val = mdp5_read(mdp5_kms, REG_MDP5_MDP_SMP_ALLOC_W_REG(0, idx));
switch (fld) {
case 0:
- val &= ~MDP5_SMP_ALLOC_W_REG_CLIENT0__MASK;
- val |= MDP5_SMP_ALLOC_W_REG_CLIENT0(cid);
+ val &= ~MDP5_MDP_SMP_ALLOC_W_REG_CLIENT0__MASK;
+ val |= MDP5_MDP_SMP_ALLOC_W_REG_CLIENT0(cid);
break;
case 1:
- val &= ~MDP5_SMP_ALLOC_W_REG_CLIENT1__MASK;
- val |= MDP5_SMP_ALLOC_W_REG_CLIENT1(cid);
+ val &= ~MDP5_MDP_SMP_ALLOC_W_REG_CLIENT1__MASK;
+ val |= MDP5_MDP_SMP_ALLOC_W_REG_CLIENT1(cid);
break;
case 2:
- val &= ~MDP5_SMP_ALLOC_W_REG_CLIENT2__MASK;
- val |= MDP5_SMP_ALLOC_W_REG_CLIENT2(cid);
+ val &= ~MDP5_MDP_SMP_ALLOC_W_REG_CLIENT2__MASK;
+ val |= MDP5_MDP_SMP_ALLOC_W_REG_CLIENT2(cid);
break;
}
- mdp5_write(mdp5_kms, REG_MDP5_SMP_ALLOC_W_REG(idx), val);
- mdp5_write(mdp5_kms, REG_MDP5_SMP_ALLOC_R_REG(idx), val);
+ mdp5_write(mdp5_kms, REG_MDP5_MDP_SMP_ALLOC_W_REG(0, idx), val);
+ mdp5_write(mdp5_kms, REG_MDP5_MDP_SMP_ALLOC_R_REG(0, idx), val);
}
}
@@ -267,7 +271,7 @@
int i;
for (i = 0; i < pipe2nclients(pipe); i++) {
- enum mdp5_client_id cid = pipe2client(pipe, i);
+ u32 cid = pipe2client(pipe, i);
struct mdp5_client_smp_state *ps = &smp->client_state[cid];
bitmap_or(assigned, ps->inuse, ps->pending, cnt);
@@ -283,7 +287,7 @@
int i;
for (i = 0; i < pipe2nclients(pipe); i++) {
- enum mdp5_client_id cid = pipe2client(pipe, i);
+ u32 cid = pipe2client(pipe, i);
struct mdp5_client_smp_state *ps = &smp->client_state[cid];
/*
diff --git a/drivers/gpu/drm/msm/msm_atomic.c b/drivers/gpu/drm/msm/msm_atomic.c
index 18fd643..5b19212 100644
--- a/drivers/gpu/drm/msm/msm_atomic.c
+++ b/drivers/gpu/drm/msm/msm_atomic.c
@@ -96,11 +96,11 @@
kms->funcs->prepare_commit(kms, state);
- drm_atomic_helper_commit_pre_planes(dev, state);
+ drm_atomic_helper_commit_modeset_disables(dev, state);
drm_atomic_helper_commit_planes(dev, state);
- drm_atomic_helper_commit_post_planes(dev, state);
+ drm_atomic_helper_commit_modeset_enables(dev, state);
/* NOTE: _wait_for_vblanks() only waits for vblank on
* enabled CRTCs. So we end up faulting when disabling
diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index a426911..47f4dd4 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -182,6 +182,83 @@
return 4;
}
+#include <linux/of_address.h>
+
+static int msm_init_vram(struct drm_device *dev)
+{
+ struct msm_drm_private *priv = dev->dev_private;
+ unsigned long size = 0;
+ int ret = 0;
+
+#ifdef CONFIG_OF
+ /* In the device-tree world, we could have a 'memory-region'
+ * phandle, which gives us a link to our "vram". Allocating
+ * is all nicely abstracted behind the dma api, but we need
+ * to know the entire size to allocate it all in one go. There
+ * are two cases:
+ * 1) device with no IOMMU, in which case we need exclusive
+ * access to a VRAM carveout big enough for all gpu
+ * buffers
+ * 2) device with IOMMU, but where the bootloader puts up
+ * a splash screen. In this case, the VRAM carveout
+ * need only be large enough for fbdev fb. But we need
+ * exclusive access to the buffer to avoid the kernel
+ * using those pages for other purposes (which appears
+ * as corruption on screen before we have a chance to
+ * load and do initial modeset)
+ */
+ struct device_node *node;
+
+ node = of_parse_phandle(dev->dev->of_node, "memory-region", 0);
+ if (node) {
+ struct resource r;
+ ret = of_address_to_resource(node, 0, &r);
+ if (ret)
+ return ret;
+ size = r.end - r.start;
+ DRM_INFO("using VRAM carveout: %lx@%08x\n", size, r.start);
+ } else
+#endif
+
+ /* if we have no IOMMU, then we need to use carveout allocator.
+ * Grab the entire CMA chunk carved out in early startup in
+ * mach-msm:
+ */
+ if (!iommu_present(&platform_bus_type)) {
+ DRM_INFO("using %s VRAM carveout\n", vram);
+ size = memparse(vram, NULL);
+ }
+
+ if (size) {
+ DEFINE_DMA_ATTRS(attrs);
+ void *p;
+
+ priv->vram.size = size;
+
+ drm_mm_init(&priv->vram.mm, 0, (size >> PAGE_SHIFT) - 1);
+
+ dma_set_attr(DMA_ATTR_NO_KERNEL_MAPPING, &attrs);
+ dma_set_attr(DMA_ATTR_WRITE_COMBINE, &attrs);
+
+ /* note that for no-kernel-mapping, the vaddr returned
+ * is bogus, but non-null if allocation succeeded:
+ */
+ p = dma_alloc_attrs(dev->dev, size,
+ &priv->vram.paddr, GFP_KERNEL, &attrs);
+ if (!p) {
+ dev_err(dev->dev, "failed to allocate VRAM\n");
+ priv->vram.paddr = 0;
+ return -ENOMEM;
+ }
+
+ dev_info(dev->dev, "VRAM: %08x->%08x\n",
+ (uint32_t)priv->vram.paddr,
+ (uint32_t)(priv->vram.paddr + size));
+ }
+
+ return ret;
+}
+
static int msm_load(struct drm_device *dev, unsigned long flags)
{
struct platform_device *pdev = dev->platformdev;
@@ -206,40 +283,9 @@
drm_mode_config_init(dev);
- /* if we have no IOMMU, then we need to use carveout allocator.
- * Grab the entire CMA chunk carved out in early startup in
- * mach-msm:
- */
- if (!iommu_present(&platform_bus_type)) {
- DEFINE_DMA_ATTRS(attrs);
- unsigned long size;
- void *p;
-
- DBG("using %s VRAM carveout", vram);
- size = memparse(vram, NULL);
- priv->vram.size = size;
-
- drm_mm_init(&priv->vram.mm, 0, (size >> PAGE_SHIFT) - 1);
-
- dma_set_attr(DMA_ATTR_NO_KERNEL_MAPPING, &attrs);
- dma_set_attr(DMA_ATTR_WRITE_COMBINE, &attrs);
-
- /* note that for no-kernel-mapping, the vaddr returned
- * is bogus, but non-null if allocation succeeded:
- */
- p = dma_alloc_attrs(dev->dev, size,
- &priv->vram.paddr, GFP_KERNEL, &attrs);
- if (!p) {
- dev_err(dev->dev, "failed to allocate VRAM\n");
- priv->vram.paddr = 0;
- ret = -ENOMEM;
- goto fail;
- }
-
- dev_info(dev->dev, "VRAM: %08x->%08x\n",
- (uint32_t)priv->vram.paddr,
- (uint32_t)(priv->vram.paddr + size));
- }
+ ret = msm_init_vram(dev);
+ if (ret)
+ goto fail;
platform_set_drvdata(pdev, dev);
@@ -1030,6 +1076,7 @@
static int __init msm_drm_register(void)
{
DBG("init");
+ msm_dsi_register();
msm_edp_register();
hdmi_register();
adreno_register();
@@ -1043,6 +1090,7 @@
hdmi_unregister();
adreno_unregister();
msm_edp_unregister();
+ msm_dsi_unregister();
}
module_init(msm_drm_register);
diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h
index 9e8d441..04db4bd 100644
--- a/drivers/gpu/drm/msm/msm_drv.h
+++ b/drivers/gpu/drm/msm/msm_drv.h
@@ -82,6 +82,9 @@
*/
struct msm_edp *edp;
+ /* DSI is shared by mdp4 and mdp5 */
+ struct msm_dsi *dsi[2];
+
/* when we have more than one 'msm_gpu' these need to be an array: */
struct msm_gpu *gpu;
struct msm_file_private *lastctx;
@@ -236,6 +239,32 @@
int msm_edp_modeset_init(struct msm_edp *edp, struct drm_device *dev,
struct drm_encoder *encoder);
+struct msm_dsi;
+enum msm_dsi_encoder_id {
+ MSM_DSI_VIDEO_ENCODER_ID = 0,
+ MSM_DSI_CMD_ENCODER_ID = 1,
+ MSM_DSI_ENCODER_NUM = 2
+};
+#ifdef CONFIG_DRM_MSM_DSI
+void __init msm_dsi_register(void);
+void __exit msm_dsi_unregister(void);
+int msm_dsi_modeset_init(struct msm_dsi *msm_dsi, struct drm_device *dev,
+ struct drm_encoder *encoders[MSM_DSI_ENCODER_NUM]);
+#else
+static inline void __init msm_dsi_register(void)
+{
+}
+static inline void __exit msm_dsi_unregister(void)
+{
+}
+static inline int msm_dsi_modeset_init(struct msm_dsi *msm_dsi,
+ struct drm_device *dev,
+ struct drm_encoder *encoders[MSM_DSI_ENCODER_NUM])
+{
+ return -EINVAL;
+}
+#endif
+
#ifdef CONFIG_DEBUG_FS
void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m);
void msm_gem_describe_objects(struct list_head *list, struct seq_file *m);
diff --git a/drivers/gpu/drm/msm/msm_fbdev.c b/drivers/gpu/drm/msm/msm_fbdev.c
index df60f65..95f6532 100644
--- a/drivers/gpu/drm/msm/msm_fbdev.c
+++ b/drivers/gpu/drm/msm/msm_fbdev.c
@@ -110,7 +110,8 @@
size = mode_cmd.pitches[0] * mode_cmd.height;
DBG("allocating %d bytes for fb %d", size, dev->primary->index);
mutex_lock(&dev->struct_mutex);
- fbdev->bo = msm_gem_new(dev, size, MSM_BO_SCANOUT | MSM_BO_WC);
+ fbdev->bo = msm_gem_new(dev, size, MSM_BO_SCANOUT |
+ MSM_BO_WC | MSM_BO_STOLEN);
mutex_unlock(&dev->struct_mutex);
if (IS_ERR(fbdev->bo)) {
ret = PTR_ERR(fbdev->bo);
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 49dea4f..479d8af 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -32,6 +32,12 @@
priv->vram.paddr;
}
+static bool use_pages(struct drm_gem_object *obj)
+{
+ struct msm_gem_object *msm_obj = to_msm_bo(obj);
+ return !msm_obj->vram_node;
+}
+
/* allocate pages from VRAM carveout, used when no IOMMU: */
static struct page **get_pages_vram(struct drm_gem_object *obj,
int npages)
@@ -72,7 +78,7 @@
struct page **p;
int npages = obj->size >> PAGE_SHIFT;
- if (iommu_present(&platform_bus_type))
+ if (use_pages(obj))
p = drm_gem_get_pages(obj);
else
p = get_pages_vram(obj, npages);
@@ -116,7 +122,7 @@
sg_free_table(msm_obj->sgt);
kfree(msm_obj->sgt);
- if (iommu_present(&platform_bus_type))
+ if (use_pages(obj))
drm_gem_put_pages(obj, msm_obj->pages, true, false);
else {
drm_mm_remove_node(msm_obj->vram_node);
@@ -580,6 +586,7 @@
struct msm_drm_private *priv = dev->dev_private;
struct msm_gem_object *msm_obj;
unsigned sz;
+ bool use_vram = false;
switch (flags & MSM_BO_CACHE_MASK) {
case MSM_BO_UNCACHED:
@@ -592,15 +599,23 @@
return -EINVAL;
}
- sz = sizeof(*msm_obj);
if (!iommu_present(&platform_bus_type))
+ use_vram = true;
+ else if ((flags & MSM_BO_STOLEN) && priv->vram.size)
+ use_vram = true;
+
+ if (WARN_ON(use_vram && !priv->vram.size))
+ return -EINVAL;
+
+ sz = sizeof(*msm_obj);
+ if (use_vram)
sz += sizeof(struct drm_mm_node);
msm_obj = kzalloc(sz, GFP_KERNEL);
if (!msm_obj)
return -ENOMEM;
- if (!iommu_present(&platform_bus_type))
+ if (use_vram)
msm_obj->vram_node = (void *)&msm_obj[1];
msm_obj->flags = flags;
@@ -630,7 +645,7 @@
if (ret)
goto fail;
- if (iommu_present(&platform_bus_type)) {
+ if (use_pages(obj)) {
ret = drm_gem_object_init(dev, obj, size);
if (ret)
goto fail;
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 8fbbd05..85d481e 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -21,6 +21,9 @@
#include <linux/reservation.h>
#include "msm_drv.h"
+/* Additional internal-use only BO flags: */
+#define MSM_BO_STOLEN 0x10000000 /* try to use stolen/splash memory */
+
struct msm_gem_object {
struct drm_gem_object base;
@@ -59,7 +62,7 @@
struct reservation_object _resv;
/* For physically contiguous buffers. Used when we don't have
- * an IOMMU.
+ * an IOMMU. Also used for stolen/splashscreen buffer.
*/
struct drm_mm_node *vram_node;
};
diff --git a/drivers/gpu/drm/msm/msm_kms.h b/drivers/gpu/drm/msm/msm_kms.h
index 3a78cb4..a9f17bd 100644
--- a/drivers/gpu/drm/msm/msm_kms.h
+++ b/drivers/gpu/drm/msm/msm_kms.h
@@ -47,6 +47,10 @@
const struct msm_format *(*get_format)(struct msm_kms *kms, uint32_t format);
long (*round_pixclk)(struct msm_kms *kms, unsigned long rate,
struct drm_encoder *encoder);
+ int (*set_split_display)(struct msm_kms *kms,
+ struct drm_encoder *encoder,
+ struct drm_encoder *slave_encoder,
+ bool is_cmd_mode);
/* cleanup: */
void (*preclose)(struct msm_kms *kms, struct drm_file *file);
void (*destroy)(struct msm_kms *kms);
diff --git a/drivers/gpu/drm/nouveau/dispnv04/crtc.c b/drivers/gpu/drm/nouveau/dispnv04/crtc.c
index 542bb266..3d96b49 100644
--- a/drivers/gpu/drm/nouveau/dispnv04/crtc.c
+++ b/drivers/gpu/drm/nouveau/dispnv04/crtc.c
@@ -703,7 +703,7 @@
struct drm_device *dev = crtc->dev;
struct nouveau_drm *drm = nouveau_drm(dev);
struct nouveau_crtc *nv_crtc = nouveau_crtc(crtc);
- struct drm_crtc_helper_funcs *funcs = crtc->helper_private;
+ const struct drm_crtc_helper_funcs *funcs = crtc->helper_private;
if (nv_two_heads(dev))
NVSetOwner(dev, nv_crtc->index);
@@ -724,7 +724,7 @@
static void nv_crtc_commit(struct drm_crtc *crtc)
{
struct drm_device *dev = crtc->dev;
- struct drm_crtc_helper_funcs *funcs = crtc->helper_private;
+ const struct drm_crtc_helper_funcs *funcs = crtc->helper_private;
struct nouveau_crtc *nv_crtc = nouveau_crtc(crtc);
nouveau_hw_load_state(dev, nv_crtc->index, &nv04_display(dev)->mode_reg);
diff --git a/drivers/gpu/drm/nouveau/dispnv04/dac.c b/drivers/gpu/drm/nouveau/dispnv04/dac.c
index d7b495a..af7249c 100644
--- a/drivers/gpu/drm/nouveau/dispnv04/dac.c
+++ b/drivers/gpu/drm/nouveau/dispnv04/dac.c
@@ -358,7 +358,7 @@
static void nv04_dac_prepare(struct drm_encoder *encoder)
{
- struct drm_encoder_helper_funcs *helper = encoder->helper_private;
+ const struct drm_encoder_helper_funcs *helper = encoder->helper_private;
struct drm_device *dev = encoder->dev;
int head = nouveau_crtc(encoder->crtc)->index;
@@ -409,7 +409,7 @@
struct nouveau_encoder *nv_encoder = nouveau_encoder(encoder);
struct nouveau_drm *drm = nouveau_drm(encoder->dev);
struct nouveau_crtc *nv_crtc = nouveau_crtc(encoder->crtc);
- struct drm_encoder_helper_funcs *helper = encoder->helper_private;
+ const struct drm_encoder_helper_funcs *helper = encoder->helper_private;
helper->dpms(encoder, DRM_MODE_DPMS_ON);
diff --git a/drivers/gpu/drm/nouveau/dispnv04/dfp.c b/drivers/gpu/drm/nouveau/dispnv04/dfp.c
index f6ca343..7cfb0cb 100644
--- a/drivers/gpu/drm/nouveau/dispnv04/dfp.c
+++ b/drivers/gpu/drm/nouveau/dispnv04/dfp.c
@@ -244,7 +244,7 @@
static void nv04_dfp_prepare(struct drm_encoder *encoder)
{
struct nouveau_encoder *nv_encoder = nouveau_encoder(encoder);
- struct drm_encoder_helper_funcs *helper = encoder->helper_private;
+ const struct drm_encoder_helper_funcs *helper = encoder->helper_private;
struct drm_device *dev = encoder->dev;
int head = nouveau_crtc(encoder->crtc)->index;
struct nv04_crtc_reg *crtcstate = nv04_display(dev)->mode_reg.crtc_reg;
@@ -445,7 +445,7 @@
{
struct drm_device *dev = encoder->dev;
struct nouveau_drm *drm = nouveau_drm(dev);
- struct drm_encoder_helper_funcs *helper = encoder->helper_private;
+ const struct drm_encoder_helper_funcs *helper = encoder->helper_private;
struct nouveau_crtc *nv_crtc = nouveau_crtc(encoder->crtc);
struct nouveau_encoder *nv_encoder = nouveau_encoder(encoder);
struct dcb_output *dcbe = nv_encoder->dcb;
diff --git a/drivers/gpu/drm/nouveau/dispnv04/disp.c b/drivers/gpu/drm/nouveau/dispnv04/disp.c
index f96237e..4131be55 100644
--- a/drivers/gpu/drm/nouveau/dispnv04/disp.c
+++ b/drivers/gpu/drm/nouveau/dispnv04/disp.c
@@ -109,7 +109,7 @@
crtc->funcs->save(crtc);
list_for_each_entry(encoder, &dev->mode_config.encoder_list, head) {
- struct drm_encoder_helper_funcs *func = encoder->helper_private;
+ const struct drm_encoder_helper_funcs *func = encoder->helper_private;
func->save(encoder);
}
@@ -138,7 +138,7 @@
/* Restore state */
list_for_each_entry(encoder, &dev->mode_config.encoder_list, head) {
- struct drm_encoder_helper_funcs *func = encoder->helper_private;
+ const struct drm_encoder_helper_funcs *func = encoder->helper_private;
func->restore(encoder);
}
@@ -169,7 +169,7 @@
* on suspend too.
*/
list_for_each_entry(encoder, &dev->mode_config.encoder_list, head) {
- struct drm_encoder_helper_funcs *func = encoder->helper_private;
+ const struct drm_encoder_helper_funcs *func = encoder->helper_private;
func->restore(encoder);
}
diff --git a/drivers/gpu/drm/nouveau/dispnv04/tvnv04.c b/drivers/gpu/drm/nouveau/dispnv04/tvnv04.c
index d9664b3..70e95cf 100644
--- a/drivers/gpu/drm/nouveau/dispnv04/tvnv04.c
+++ b/drivers/gpu/drm/nouveau/dispnv04/tvnv04.c
@@ -122,7 +122,7 @@
{
struct drm_device *dev = encoder->dev;
int head = nouveau_crtc(encoder->crtc)->index;
- struct drm_encoder_helper_funcs *helper = encoder->helper_private;
+ const struct drm_encoder_helper_funcs *helper = encoder->helper_private;
helper->dpms(encoder, DRM_MODE_DPMS_OFF);
@@ -164,7 +164,7 @@
struct drm_device *dev = encoder->dev;
struct nouveau_drm *drm = nouveau_drm(dev);
struct nouveau_crtc *nv_crtc = nouveau_crtc(encoder->crtc);
- struct drm_encoder_helper_funcs *helper = encoder->helper_private;
+ const struct drm_encoder_helper_funcs *helper = encoder->helper_private;
helper->dpms(encoder, DRM_MODE_DPMS_ON);
diff --git a/drivers/gpu/drm/nouveau/dispnv04/tvnv17.c b/drivers/gpu/drm/nouveau/dispnv04/tvnv17.c
index 731d74e..d9720dd 100644
--- a/drivers/gpu/drm/nouveau/dispnv04/tvnv17.c
+++ b/drivers/gpu/drm/nouveau/dispnv04/tvnv17.c
@@ -405,7 +405,7 @@
{
struct drm_device *dev = encoder->dev;
struct nouveau_drm *drm = nouveau_drm(dev);
- struct drm_encoder_helper_funcs *helper = encoder->helper_private;
+ const struct drm_encoder_helper_funcs *helper = encoder->helper_private;
struct nv17_tv_norm_params *tv_norm = get_tv_norm(encoder);
int head = nouveau_crtc(encoder->crtc)->index;
uint8_t *cr_lcd = &nv04_display(dev)->mode_reg.crtc_reg[head].CRTC[
@@ -583,7 +583,7 @@
struct nouveau_drm *drm = nouveau_drm(dev);
struct nouveau_crtc *nv_crtc = nouveau_crtc(encoder->crtc);
struct nouveau_encoder *nv_encoder = nouveau_encoder(encoder);
- struct drm_encoder_helper_funcs *helper = encoder->helper_private;
+ const struct drm_encoder_helper_funcs *helper = encoder->helper_private;
if (get_tv_norm(encoder)->kind == TV_ENC_MODE) {
nv17_tv_update_rescaler(encoder);
diff --git a/drivers/gpu/drm/nouveau/include/nvif/class.h b/drivers/gpu/drm/nouveau/include/nvif/class.h
index 5ad17fc..0b5af0f 100644
--- a/drivers/gpu/drm/nouveau/include/nvif/class.h
+++ b/drivers/gpu/drm/nouveau/include/nvif/class.h
@@ -12,6 +12,13 @@
#define NV_DMA_TO_MEMORY 0x00000003
#define NV_DMA_IN_MEMORY 0x0000003d
+#define FERMI_TWOD_A 0x0000902d
+
+#define FERMI_MEMORY_TO_MEMORY_FORMAT_A 0x0000903d
+
+#define KEPLER_INLINE_TO_MEMORY_A 0x0000a040
+#define KEPLER_INLINE_TO_MEMORY_B 0x0000a140
+
#define NV04_DISP 0x00000046
#define NV03_CHANNEL_DMA 0x0000006b
@@ -25,6 +32,7 @@
#define G82_CHANNEL_GPFIFO 0x0000826f
#define FERMI_CHANNEL_GPFIFO 0x0000906f
#define KEPLER_CHANNEL_GPFIFO_A 0x0000a06f
+#define MAXWELL_CHANNEL_GPFIFO_A 0x0000b06f
#define NV50_DISP 0x00005070
#define G82_DISP 0x00008270
@@ -84,6 +92,7 @@
#define KEPLER_C 0x0000a297
#define MAXWELL_A 0x0000b097
+#define MAXWELL_B 0x0000b197
#define FERMI_COMPUTE_A 0x000090c0
#define FERMI_COMPUTE_B 0x000091c0
@@ -92,6 +101,7 @@
#define KEPLER_COMPUTE_B 0x0000a1c0
#define MAXWELL_COMPUTE_A 0x0000b0c0
+#define MAXWELL_COMPUTE_B 0x0000b1c0
/*******************************************************************************
diff --git a/drivers/gpu/drm/nouveau/include/nvkm/engine/ce.h b/drivers/gpu/drm/nouveau/include/nvkm/engine/ce.h
index 7e29c52..e832f72 100644
--- a/drivers/gpu/drm/nouveau/include/nvkm/engine/ce.h
+++ b/drivers/gpu/drm/nouveau/include/nvkm/engine/ce.h
@@ -10,4 +10,7 @@
extern struct nvkm_oclass gk104_ce0_oclass;
extern struct nvkm_oclass gk104_ce1_oclass;
extern struct nvkm_oclass gk104_ce2_oclass;
+extern struct nvkm_oclass gm204_ce0_oclass;
+extern struct nvkm_oclass gm204_ce1_oclass;
+extern struct nvkm_oclass gm204_ce2_oclass;
#endif
diff --git a/drivers/gpu/drm/nouveau/include/nvkm/engine/fifo.h b/drivers/gpu/drm/nouveau/include/nvkm/engine/fifo.h
index 05321ce..97cdeab 100644
--- a/drivers/gpu/drm/nouveau/include/nvkm/engine/fifo.h
+++ b/drivers/gpu/drm/nouveau/include/nvkm/engine/fifo.h
@@ -116,6 +116,7 @@
extern struct nvkm_oclass *gk104_fifo_oclass;
extern struct nvkm_oclass *gk20a_fifo_oclass;
extern struct nvkm_oclass *gk208_fifo_oclass;
+extern struct nvkm_oclass *gm204_fifo_oclass;
int nvkm_fifo_uevent_ctor(struct nvkm_object *, void *, u32,
struct nvkm_notify *);
diff --git a/drivers/gpu/drm/nouveau/include/nvkm/engine/gr.h b/drivers/gpu/drm/nouveau/include/nvkm/engine/gr.h
index 93ef1f2..7cbe202 100644
--- a/drivers/gpu/drm/nouveau/include/nvkm/engine/gr.h
+++ b/drivers/gpu/drm/nouveau/include/nvkm/engine/gr.h
@@ -38,7 +38,7 @@
}
#define nvkm_gr_create(p,e,c,y,d) \
- nvkm_engine_create((p), (e), (c), (y), "PGR", "graphics", (d))
+ nvkm_engine_create((p), (e), (c), (y), "PGRAPH", "graphics", (d))
#define nvkm_gr_destroy(d) \
nvkm_engine_destroy(&(d)->base)
#define nvkm_gr_init(d) \
@@ -72,6 +72,8 @@
extern struct nvkm_oclass *gk110b_gr_oclass;
extern struct nvkm_oclass *gk208_gr_oclass;
extern struct nvkm_oclass *gm107_gr_oclass;
+extern struct nvkm_oclass *gm204_gr_oclass;
+extern struct nvkm_oclass *gm206_gr_oclass;
#include <core/enum.h>
diff --git a/drivers/gpu/drm/nouveau/include/nvkm/subdev/instmem.h b/drivers/gpu/drm/nouveau/include/nvkm/subdev/instmem.h
index d104c1a..1bcb763 100644
--- a/drivers/gpu/drm/nouveau/include/nvkm/subdev/instmem.h
+++ b/drivers/gpu/drm/nouveau/include/nvkm/subdev/instmem.h
@@ -45,4 +45,5 @@
extern struct nvkm_oclass *nv04_instmem_oclass;
extern struct nvkm_oclass *nv40_instmem_oclass;
extern struct nvkm_oclass *nv50_instmem_oclass;
+extern struct nvkm_oclass *gk20a_instmem_oclass;
#endif
diff --git a/drivers/gpu/drm/nouveau/include/nvkm/subdev/pmu.h b/drivers/gpu/drm/nouveau/include/nvkm/subdev/pmu.h
index 7b86acc..7559423 100644
--- a/drivers/gpu/drm/nouveau/include/nvkm/subdev/pmu.h
+++ b/drivers/gpu/drm/nouveau/include/nvkm/subdev/pmu.h
@@ -35,6 +35,7 @@
extern struct nvkm_oclass *gf100_pmu_oclass;
extern struct nvkm_oclass *gf110_pmu_oclass;
extern struct nvkm_oclass *gk104_pmu_oclass;
+extern struct nvkm_oclass *gk110_pmu_oclass;
extern struct nvkm_oclass *gk208_pmu_oclass;
extern struct nvkm_oclass *gk20a_pmu_oclass;
diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c
index 77326e3..6edcce1 100644
--- a/drivers/gpu/drm/nouveau/nouveau_bo.c
+++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
@@ -1110,6 +1110,8 @@
struct ttm_mem_reg *, struct ttm_mem_reg *);
int (*init)(struct nouveau_channel *, u32 handle);
} _methods[] = {
+ { "COPY", 4, 0xb0b5, nve0_bo_move_copy, nve0_bo_move_init },
+ { "GRCE", 0, 0xb0b5, nve0_bo_move_copy, nvc0_bo_move_init },
{ "COPY", 4, 0xa0b5, nve0_bo_move_copy, nve0_bo_move_init },
{ "GRCE", 0, 0xa0b5, nve0_bo_move_copy, nvc0_bo_move_init },
{ "COPY1", 5, 0x90b8, nvc0_bo_move_copy, nvc0_bo_move_init },
diff --git a/drivers/gpu/drm/nouveau/nouveau_chan.c b/drivers/gpu/drm/nouveau/nouveau_chan.c
index e581f63..0589bab 100644
--- a/drivers/gpu/drm/nouveau/nouveau_chan.c
+++ b/drivers/gpu/drm/nouveau/nouveau_chan.c
@@ -184,7 +184,8 @@
nouveau_channel_ind(struct nouveau_drm *drm, struct nvif_device *device,
u32 handle, u32 engine, struct nouveau_channel **pchan)
{
- static const u16 oclasses[] = { KEPLER_CHANNEL_GPFIFO_A,
+ static const u16 oclasses[] = { MAXWELL_CHANNEL_GPFIFO_A,
+ KEPLER_CHANNEL_GPFIFO_A,
FERMI_CHANNEL_GPFIFO,
G82_CHANNEL_GPFIFO,
NV50_CHANNEL_GPFIFO,
diff --git a/drivers/gpu/drm/nouveau/nouveau_connector.c b/drivers/gpu/drm/nouveau/nouveau_connector.c
index db7095a..3162040 100644
--- a/drivers/gpu/drm/nouveau/nouveau_connector.c
+++ b/drivers/gpu/drm/nouveau/nouveau_connector.c
@@ -309,7 +309,7 @@
nv_encoder = find_encoder(connector, DCB_OUTPUT_TV);
if (nv_encoder && force) {
struct drm_encoder *encoder = to_drm_encoder(nv_encoder);
- struct drm_encoder_helper_funcs *helper =
+ const struct drm_encoder_helper_funcs *helper =
encoder->helper_private;
if (helper->detect(encoder, connector) ==
@@ -592,7 +592,7 @@
static struct drm_display_mode *
nouveau_connector_native_mode(struct drm_connector *connector)
{
- struct drm_connector_helper_funcs *helper = connector->helper_private;
+ const struct drm_connector_helper_funcs *helper = connector->helper_private;
struct nouveau_drm *drm = nouveau_drm(connector->dev);
struct nouveau_connector *nv_connector = nouveau_connector(connector);
struct drm_device *dev = connector->dev;
diff --git a/drivers/gpu/drm/nouveau/nouveau_display.c b/drivers/gpu/drm/nouveau/nouveau_display.c
index 860b0e2..8670d90 100644
--- a/drivers/gpu/drm/nouveau/nouveau_display.c
+++ b/drivers/gpu/drm/nouveau/nouveau_display.c
@@ -869,13 +869,20 @@
struct drm_mode_create_dumb *args)
{
struct nouveau_bo *bo;
+ uint32_t domain;
int ret;
args->pitch = roundup(args->width * (args->bpp / 8), 256);
args->size = args->pitch * args->height;
args->size = roundup(args->size, PAGE_SIZE);
- ret = nouveau_gem_new(dev, args->size, 0, NOUVEAU_GEM_DOMAIN_VRAM, 0, 0, &bo);
+ /* Use VRAM if there is any; otherwise fall back to system memory */
+ if (nouveau_drm(dev)->device.info.ram_size != 0)
+ domain = NOUVEAU_GEM_DOMAIN_VRAM;
+ else
+ domain = NOUVEAU_GEM_DOMAIN_GART;
+
+ ret = nouveau_gem_new(dev, args->size, 0, domain, 0, 0, &bo);
if (ret)
return ret;
diff --git a/drivers/gpu/drm/nouveau/nouveau_drm.c b/drivers/gpu/drm/nouveau/nouveau_drm.c
index 8763deb..8904933 100644
--- a/drivers/gpu/drm/nouveau/nouveau_drm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_drm.c
@@ -181,6 +181,7 @@
break;
case FERMI_CHANNEL_GPFIFO:
case KEPLER_CHANNEL_GPFIFO_A:
+ case MAXWELL_CHANNEL_GPFIFO_A:
ret = nvc0_fence_create(drm);
break;
default:
diff --git a/drivers/gpu/drm/nouveau/nouveau_drm.h b/drivers/gpu/drm/nouveau/nouveau_drm.h
index fc68f09..dd72652 100644
--- a/drivers/gpu/drm/nouveau/nouveau_drm.h
+++ b/drivers/gpu/drm/nouveau/nouveau_drm.h
@@ -10,7 +10,7 @@
#define DRIVER_MAJOR 1
#define DRIVER_MINOR 2
-#define DRIVER_PATCHLEVEL 1
+#define DRIVER_PATCHLEVEL 2
/*
* 1.1.1:
@@ -28,6 +28,8 @@
* - fermi,kepler,maxwell zbc
* 1.2.1:
* - allow concurrent access to bo's mapped read/write.
+ * 1.2.2:
+ * - add NOUVEAU_GEM_DOMAIN_COHERENT flag
*/
#include <nvif/client.h>
diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
index 7c077fc..0e690bf 100644
--- a/drivers/gpu/drm/nouveau/nouveau_gem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
@@ -189,6 +189,9 @@
if (!flags || domain & NOUVEAU_GEM_DOMAIN_CPU)
flags |= TTM_PL_FLAG_SYSTEM;
+ if (domain & NOUVEAU_GEM_DOMAIN_COHERENT)
+ flags |= TTM_PL_FLAG_UNCACHED;
+
ret = nouveau_bo_new(dev, size, align, flags, tile_mode,
tile_flags, NULL, NULL, pnvbo);
if (ret)
diff --git a/drivers/gpu/drm/nouveau/nouveau_platform.c b/drivers/gpu/drm/nouveau/nouveau_platform.c
index dc5900b..775277f 100644
--- a/drivers/gpu/drm/nouveau/nouveau_platform.c
+++ b/drivers/gpu/drm/nouveau/nouveau_platform.c
@@ -27,6 +27,7 @@
#include <linux/of.h>
#include <linux/reset.h>
#include <linux/regulator/consumer.h>
+#include <linux/iommu.h>
#include <soc/tegra/fuse.h>
#include <soc/tegra/pmc.h>
@@ -91,6 +92,72 @@
return 0;
}
+static void nouveau_platform_probe_iommu(struct device *dev,
+ struct nouveau_platform_gpu *gpu)
+{
+ int err;
+ unsigned long pgsize_bitmap;
+
+ mutex_init(&gpu->iommu.mutex);
+
+ if (iommu_present(&platform_bus_type)) {
+ gpu->iommu.domain = iommu_domain_alloc(&platform_bus_type);
+ if (IS_ERR(gpu->iommu.domain))
+ goto error;
+
+ /*
+ * An IOMMU is only usable if it supports page sizes smaller
+ * than or equal to the system's PAGE_SIZE, with a preference
+ * for the case where both are equal.
+ */
+ pgsize_bitmap = gpu->iommu.domain->ops->pgsize_bitmap;
+ if (pgsize_bitmap & PAGE_SIZE) {
+ gpu->iommu.pgshift = PAGE_SHIFT;
+ } else {
+ gpu->iommu.pgshift = fls(pgsize_bitmap & ~PAGE_MASK);
+ if (gpu->iommu.pgshift == 0) {
+ dev_warn(dev, "unsupported IOMMU page size\n");
+ goto free_domain;
+ }
+ gpu->iommu.pgshift -= 1;
+ }
+
+ err = iommu_attach_device(gpu->iommu.domain, dev);
+ if (err)
+ goto free_domain;
+
+ err = nvkm_mm_init(&gpu->iommu._mm, 0,
+ (1ULL << 40) >> gpu->iommu.pgshift, 1);
+ if (err)
+ goto detach_device;
+
+ gpu->iommu.mm = &gpu->iommu._mm;
+ }
+
+ return;
+
+detach_device:
+ iommu_detach_device(gpu->iommu.domain, dev);
+
+free_domain:
+ iommu_domain_free(gpu->iommu.domain);
+
+error:
+ gpu->iommu.domain = NULL;
+ gpu->iommu.pgshift = 0;
+ dev_err(dev, "cannot initialize IOMMU MM\n");
+}
+
+static void nouveau_platform_remove_iommu(struct device *dev,
+ struct nouveau_platform_gpu *gpu)
+{
+ if (gpu->iommu.domain) {
+ nvkm_mm_fini(&gpu->iommu._mm);
+ iommu_detach_device(gpu->iommu.domain, dev);
+ iommu_domain_free(gpu->iommu.domain);
+ }
+}
+
static int nouveau_platform_probe(struct platform_device *pdev)
{
struct nouveau_platform_gpu *gpu;
@@ -118,6 +185,8 @@
if (IS_ERR(gpu->clk_pwr))
return PTR_ERR(gpu->clk_pwr);
+ nouveau_platform_probe_iommu(&pdev->dev, gpu);
+
err = nouveau_platform_power_up(gpu);
if (err)
return err;
@@ -140,10 +209,9 @@
err_unref:
drm_dev_unref(drm);
- return 0;
-
power_down:
nouveau_platform_power_down(gpu);
+ nouveau_platform_remove_iommu(&pdev->dev, gpu);
return err;
}
@@ -154,10 +222,15 @@
struct nouveau_drm *drm = nouveau_drm(drm_dev);
struct nvkm_device *device = nvxx_device(&drm->device);
struct nouveau_platform_gpu *gpu = nv_device_to_platform(device)->gpu;
+ int err;
nouveau_drm_device_remove(drm_dev);
- return nouveau_platform_power_down(gpu);
+ err = nouveau_platform_power_down(gpu);
+
+ nouveau_platform_remove_iommu(&pdev->dev, gpu);
+
+ return err;
}
#if IS_ENABLED(CONFIG_OF)
diff --git a/drivers/gpu/drm/nouveau/nouveau_platform.h b/drivers/gpu/drm/nouveau/nouveau_platform.h
index 268bb72..392874c 100644
--- a/drivers/gpu/drm/nouveau/nouveau_platform.h
+++ b/drivers/gpu/drm/nouveau/nouveau_platform.h
@@ -24,10 +24,12 @@
#define __NOUVEAU_PLATFORM_H__
#include "core/device.h"
+#include "core/mm.h"
struct reset_control;
struct clk;
struct regulator;
+struct iommu_domain;
struct platform_driver;
struct nouveau_platform_gpu {
@@ -36,6 +38,22 @@
struct clk *clk_pwr;
struct regulator *vdd;
+
+ struct {
+ /*
+ * Protects accesses to mm from subsystems
+ */
+ struct mutex mutex;
+
+ struct nvkm_mm _mm;
+ /*
+ * Just points to _mm. We need this to avoid embedding
+ * struct nvkm_mm in os.h
+ */
+ struct nvkm_mm *mm;
+ struct iommu_domain *domain;
+ unsigned long pgshift;
+ } iommu;
};
struct nouveau_platform_device {
diff --git a/drivers/gpu/drm/nouveau/nouveau_ttm.c b/drivers/gpu/drm/nouveau/nouveau_ttm.c
index 273e501..18f4497 100644
--- a/drivers/gpu/drm/nouveau/nouveau_ttm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_ttm.c
@@ -82,6 +82,9 @@
u32 size_nc = 0;
int ret;
+ if (drm->device.info.ram_size == 0)
+ return -ENOMEM;
+
if (nvbo->tile_flags & NOUVEAU_GEM_TILE_NONCONTIG)
size_nc = 1 << nvbo->page_shift;
diff --git a/drivers/gpu/drm/nouveau/nv84_fence.c b/drivers/gpu/drm/nouveau/nv84_fence.c
index bf429ca..a03db43 100644
--- a/drivers/gpu/drm/nouveau/nv84_fence.c
+++ b/drivers/gpu/drm/nouveau/nv84_fence.c
@@ -215,6 +215,7 @@
{
struct nvkm_fifo *pfifo = nvxx_fifo(&drm->device);
struct nv84_fence_priv *priv;
+ u32 domain;
int ret;
priv = drm->fence = kzalloc(sizeof(*priv), GFP_KERNEL);
@@ -231,10 +232,17 @@
priv->base.context_base = fence_context_alloc(priv->base.contexts);
priv->base.uevent = true;
- ret = nouveau_bo_new(drm->dev, 16 * priv->base.contexts, 0,
- TTM_PL_FLAG_VRAM, 0, 0, NULL, NULL, &priv->bo);
+ /* Use VRAM if there is any; otherwise fall back to system memory */
+ domain = drm->device.info.ram_size != 0 ? TTM_PL_FLAG_VRAM :
+ /*
+ * fences created in sysmem must be non-cached or we
+ * will lose CPU/GPU coherency!
+ */
+ TTM_PL_FLAG_TT | TTM_PL_FLAG_UNCACHED;
+ ret = nouveau_bo_new(drm->dev, 16 * priv->base.contexts, 0, domain, 0,
+ 0, NULL, NULL, &priv->bo);
if (ret == 0) {
- ret = nouveau_bo_pin(priv->bo, TTM_PL_FLAG_VRAM, false);
+ ret = nouveau_bo_pin(priv->bo, domain, false);
if (ret == 0) {
ret = nouveau_bo_map(priv->bo);
if (ret)
diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/ce/Kbuild b/drivers/gpu/drm/nouveau/nvkm/engine/ce/Kbuild
index 8587974..fa8cda7 100644
--- a/drivers/gpu/drm/nouveau/nvkm/engine/ce/Kbuild
+++ b/drivers/gpu/drm/nouveau/nvkm/engine/ce/Kbuild
@@ -1,3 +1,4 @@
nvkm-y += nvkm/engine/ce/gt215.o
nvkm-y += nvkm/engine/ce/gf100.o
nvkm-y += nvkm/engine/ce/gk104.o
+nvkm-y += nvkm/engine/ce/gm204.o
diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/ce/gm204.c b/drivers/gpu/drm/nouveau/nvkm/engine/ce/gm204.c
new file mode 100644
index 0000000..577eb2e
--- /dev/null
+++ b/drivers/gpu/drm/nouveau/nvkm/engine/ce/gm204.c
@@ -0,0 +1,173 @@
+/*
+ * Copyright 2015 Red Hat Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: Ben Skeggs
+ */
+#include <engine/ce.h>
+
+#include <core/engctx.h>
+
+struct gm204_ce_priv {
+ struct nvkm_engine base;
+};
+
+/*******************************************************************************
+ * Copy object classes
+ ******************************************************************************/
+
+static struct nvkm_oclass
+gm204_ce_sclass[] = {
+ { 0xb0b5, &nvkm_object_ofuncs },
+ {},
+};
+
+/*******************************************************************************
+ * PCE context
+ ******************************************************************************/
+
+static struct nvkm_ofuncs
+gm204_ce_context_ofuncs = {
+ .ctor = _nvkm_engctx_ctor,
+ .dtor = _nvkm_engctx_dtor,
+ .init = _nvkm_engctx_init,
+ .fini = _nvkm_engctx_fini,
+ .rd32 = _nvkm_engctx_rd32,
+ .wr32 = _nvkm_engctx_wr32,
+};
+
+static struct nvkm_oclass
+gm204_ce_cclass = {
+ .handle = NV_ENGCTX(CE0, 0x24),
+ .ofuncs = &gm204_ce_context_ofuncs,
+};
+
+/*******************************************************************************
+ * PCE engine/subdev functions
+ ******************************************************************************/
+
+static void
+gm204_ce_intr(struct nvkm_subdev *subdev)
+{
+ const int ce = nv_subidx(subdev) - NVDEV_ENGINE_CE0;
+ struct gm204_ce_priv *priv = (void *)subdev;
+ u32 stat = nv_rd32(priv, 0x104908 + (ce * 0x1000));
+
+ if (stat) {
+ nv_warn(priv, "unhandled intr 0x%08x\n", stat);
+ nv_wr32(priv, 0x104908 + (ce * 0x1000), stat);
+ }
+}
+
+static int
+gm204_ce0_ctor(struct nvkm_object *parent, struct nvkm_object *engine,
+ struct nvkm_oclass *oclass, void *data, u32 size,
+ struct nvkm_object **pobject)
+{
+ struct gm204_ce_priv *priv;
+ int ret;
+
+ ret = nvkm_engine_create(parent, engine, oclass, true,
+ "PCE0", "ce0", &priv);
+ *pobject = nv_object(priv);
+ if (ret)
+ return ret;
+
+ nv_subdev(priv)->unit = 0x00000040;
+ nv_subdev(priv)->intr = gm204_ce_intr;
+ nv_engine(priv)->cclass = &gm204_ce_cclass;
+ nv_engine(priv)->sclass = gm204_ce_sclass;
+ return 0;
+}
+
+static int
+gm204_ce1_ctor(struct nvkm_object *parent, struct nvkm_object *engine,
+ struct nvkm_oclass *oclass, void *data, u32 size,
+ struct nvkm_object **pobject)
+{
+ struct gm204_ce_priv *priv;
+ int ret;
+
+ ret = nvkm_engine_create(parent, engine, oclass, true,
+ "PCE1", "ce1", &priv);
+ *pobject = nv_object(priv);
+ if (ret)
+ return ret;
+
+ nv_subdev(priv)->unit = 0x00000080;
+ nv_subdev(priv)->intr = gm204_ce_intr;
+ nv_engine(priv)->cclass = &gm204_ce_cclass;
+ nv_engine(priv)->sclass = gm204_ce_sclass;
+ return 0;
+}
+
+static int
+gm204_ce2_ctor(struct nvkm_object *parent, struct nvkm_object *engine,
+ struct nvkm_oclass *oclass, void *data, u32 size,
+ struct nvkm_object **pobject)
+{
+ struct gm204_ce_priv *priv;
+ int ret;
+
+ ret = nvkm_engine_create(parent, engine, oclass, true,
+ "PCE2", "ce2", &priv);
+ *pobject = nv_object(priv);
+ if (ret)
+ return ret;
+
+ nv_subdev(priv)->unit = 0x00200000;
+ nv_subdev(priv)->intr = gm204_ce_intr;
+ nv_engine(priv)->cclass = &gm204_ce_cclass;
+ nv_engine(priv)->sclass = gm204_ce_sclass;
+ return 0;
+}
+
+struct nvkm_oclass
+gm204_ce0_oclass = {
+ .handle = NV_ENGINE(CE0, 0x24),
+ .ofuncs = &(struct nvkm_ofuncs) {
+ .ctor = gm204_ce0_ctor,
+ .dtor = _nvkm_engine_dtor,
+ .init = _nvkm_engine_init,
+ .fini = _nvkm_engine_fini,
+ },
+};
+
+struct nvkm_oclass
+gm204_ce1_oclass = {
+ .handle = NV_ENGINE(CE1, 0x24),
+ .ofuncs = &(struct nvkm_ofuncs) {
+ .ctor = gm204_ce1_ctor,
+ .dtor = _nvkm_engine_dtor,
+ .init = _nvkm_engine_init,
+ .fini = _nvkm_engine_fini,
+ },
+};
+
+struct nvkm_oclass
+gm204_ce2_oclass = {
+ .handle = NV_ENGINE(CE2, 0x24),
+ .ofuncs = &(struct nvkm_ofuncs) {
+ .ctor = gm204_ce2_ctor,
+ .dtor = _nvkm_engine_dtor,
+ .init = _nvkm_engine_init,
+ .fini = _nvkm_engine_fini,
+ },
+};
diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/device/base.c b/drivers/gpu/drm/nouveau/nvkm/engine/device/base.c
index 6efa8f3..63d8e52 100644
--- a/drivers/gpu/drm/nouveau/nvkm/engine/device/base.c
+++ b/drivers/gpu/drm/nouveau/nvkm/engine/device/base.c
@@ -139,9 +139,13 @@
args->v0.chipset = device->chipset;
args->v0.revision = device->chiprev;
- if (pfb) args->v0.ram_size = args->v0.ram_user = pfb->ram->size;
- else args->v0.ram_size = args->v0.ram_user = 0;
- if (imem) args->v0.ram_user = args->v0.ram_user - imem->reserved;
+ if (pfb && pfb->ram)
+ args->v0.ram_size = args->v0.ram_user = pfb->ram->size;
+ else
+ args->v0.ram_size = args->v0.ram_user = 0;
+ if (imem && args->v0.ram_size > 0)
+ args->v0.ram_user = args->v0.ram_user - imem->reserved;
+
return 0;
}
diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/device/gk104.c b/drivers/gpu/drm/nouveau/nvkm/engine/device/gk104.c
index bf58934..6a9483f 100644
--- a/drivers/gpu/drm/nouveau/nvkm/engine/device/gk104.c
+++ b/drivers/gpu/drm/nouveau/nvkm/engine/device/gk104.c
@@ -171,7 +171,7 @@
device->oclass[NVDEV_SUBDEV_FB ] = gk20a_fb_oclass;
device->oclass[NVDEV_SUBDEV_LTC ] = gk104_ltc_oclass;
device->oclass[NVDEV_SUBDEV_IBUS ] = &gk20a_ibus_oclass;
- device->oclass[NVDEV_SUBDEV_INSTMEM] = nv50_instmem_oclass;
+ device->oclass[NVDEV_SUBDEV_INSTMEM] = gk20a_instmem_oclass;
device->oclass[NVDEV_SUBDEV_MMU ] = &gf100_mmu_oclass;
device->oclass[NVDEV_SUBDEV_BAR ] = &gk20a_bar_oclass;
device->oclass[NVDEV_ENGINE_DMAOBJ ] = gf110_dmaeng_oclass;
@@ -202,7 +202,7 @@
device->oclass[NVDEV_SUBDEV_INSTMEM] = nv50_instmem_oclass;
device->oclass[NVDEV_SUBDEV_MMU ] = &gf100_mmu_oclass;
device->oclass[NVDEV_SUBDEV_BAR ] = &gf100_bar_oclass;
- device->oclass[NVDEV_SUBDEV_PMU ] = gf110_pmu_oclass;
+ device->oclass[NVDEV_SUBDEV_PMU ] = gk110_pmu_oclass;
device->oclass[NVDEV_SUBDEV_VOLT ] = &nv40_volt_oclass;
device->oclass[NVDEV_ENGINE_DMAOBJ ] = gf110_dmaeng_oclass;
device->oclass[NVDEV_ENGINE_FIFO ] = gk104_fifo_oclass;
@@ -236,7 +236,7 @@
device->oclass[NVDEV_SUBDEV_INSTMEM] = nv50_instmem_oclass;
device->oclass[NVDEV_SUBDEV_MMU ] = &gf100_mmu_oclass;
device->oclass[NVDEV_SUBDEV_BAR ] = &gf100_bar_oclass;
- device->oclass[NVDEV_SUBDEV_PMU ] = gf110_pmu_oclass;
+ device->oclass[NVDEV_SUBDEV_PMU ] = gk110_pmu_oclass;
device->oclass[NVDEV_SUBDEV_VOLT ] = &nv40_volt_oclass;
device->oclass[NVDEV_ENGINE_DMAOBJ ] = gf110_dmaeng_oclass;
device->oclass[NVDEV_ENGINE_FIFO ] = gk104_fifo_oclass;
diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/device/gm100.c b/drivers/gpu/drm/nouveau/nvkm/engine/device/gm100.c
index 108d048..70abf1e 100644
--- a/drivers/gpu/drm/nouveau/nvkm/engine/device/gm100.c
+++ b/drivers/gpu/drm/nouveau/nvkm/engine/device/gm100.c
@@ -127,16 +127,14 @@
device->oclass[NVDEV_SUBDEV_VOLT ] = &nv40_volt_oclass;
#endif
device->oclass[NVDEV_ENGINE_DMAOBJ ] = gf110_dmaeng_oclass;
-#if 0
- device->oclass[NVDEV_ENGINE_FIFO ] = gk208_fifo_oclass;
+ device->oclass[NVDEV_ENGINE_FIFO ] = gm204_fifo_oclass;
device->oclass[NVDEV_ENGINE_SW ] = gf100_sw_oclass;
- device->oclass[NVDEV_ENGINE_GR ] = gm107_gr_oclass;
-#endif
+ device->oclass[NVDEV_ENGINE_GR ] = gm204_gr_oclass;
device->oclass[NVDEV_ENGINE_DISP ] = gm204_disp_oclass;
-#if 0
device->oclass[NVDEV_ENGINE_CE0 ] = &gm204_ce0_oclass;
device->oclass[NVDEV_ENGINE_CE1 ] = &gm204_ce1_oclass;
device->oclass[NVDEV_ENGINE_CE2 ] = &gm204_ce2_oclass;
+#if 0
device->oclass[NVDEV_ENGINE_MSVLD ] = &gk104_msvld_oclass;
device->oclass[NVDEV_ENGINE_MSPDEC ] = &gk104_mspdec_oclass;
device->oclass[NVDEV_ENGINE_MSPPP ] = &gf100_msppp_oclass;
@@ -170,16 +168,14 @@
device->oclass[NVDEV_SUBDEV_VOLT ] = &nv40_volt_oclass;
#endif
device->oclass[NVDEV_ENGINE_DMAOBJ ] = gf110_dmaeng_oclass;
-#if 0
- device->oclass[NVDEV_ENGINE_FIFO ] = gk208_fifo_oclass;
+ device->oclass[NVDEV_ENGINE_FIFO ] = gm204_fifo_oclass;
device->oclass[NVDEV_ENGINE_SW ] = gf100_sw_oclass;
- device->oclass[NVDEV_ENGINE_GR ] = gm107_gr_oclass;
-#endif
+ device->oclass[NVDEV_ENGINE_GR ] = gm206_gr_oclass;
device->oclass[NVDEV_ENGINE_DISP ] = gm204_disp_oclass;
-#if 0
device->oclass[NVDEV_ENGINE_CE0 ] = &gm204_ce0_oclass;
device->oclass[NVDEV_ENGINE_CE1 ] = &gm204_ce1_oclass;
device->oclass[NVDEV_ENGINE_CE2 ] = &gm204_ce2_oclass;
+#if 0
device->oclass[NVDEV_ENGINE_MSVLD ] = &gk104_msvld_oclass;
device->oclass[NVDEV_ENGINE_MSPDEC ] = &gk104_mspdec_oclass;
device->oclass[NVDEV_ENGINE_MSPPP ] = &gf100_msppp_oclass;
diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/disp/gf110.c b/drivers/gpu/drm/nouveau/nvkm/engine/disp/gf110.c
index 0ebf466..9ef6728 100644
--- a/drivers/gpu/drm/nouveau/nvkm/engine/disp/gf110.c
+++ b/drivers/gpu/drm/nouveau/nvkm/engine/disp/gf110.c
@@ -413,8 +413,8 @@
static const struct nv50_disp_mthd_list
gf110_disp_base_mthd_image = {
- .mthd = 0x0400,
- .addr = 0x000400,
+ .mthd = 0x0020,
+ .addr = 0x000020,
.data = {
{ 0x0400, 0x661400 },
{ 0x0404, 0x661404 },
diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/disp/nv50.c b/drivers/gpu/drm/nouveau/nvkm/engine/disp/nv50.c
index 84ade81..8ba808d 100644
--- a/drivers/gpu/drm/nouveau/nvkm/engine/disp/nv50.c
+++ b/drivers/gpu/drm/nouveau/nvkm/engine/disp/nv50.c
@@ -229,7 +229,7 @@
switch (dmac->pushdma->target) {
case NV_MEM_TARGET_VRAM:
- dmac->push = 0x00000000 | dmac->pushdma->start >> 8;
+ dmac->push = 0x00000001 | dmac->pushdma->start >> 8;
break;
case NV_MEM_TARGET_PCI_NOSNOOP:
dmac->push = 0x00000003 | dmac->pushdma->start >> 8;
diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/fifo/Kbuild b/drivers/gpu/drm/nouveau/nvkm/engine/fifo/Kbuild
index c5a2d87..42891cb 100644
--- a/drivers/gpu/drm/nouveau/nvkm/engine/fifo/Kbuild
+++ b/drivers/gpu/drm/nouveau/nvkm/engine/fifo/Kbuild
@@ -9,3 +9,4 @@
nvkm-y += nvkm/engine/fifo/gk104.o
nvkm-y += nvkm/engine/fifo/gk20a.o
nvkm-y += nvkm/engine/fifo/gk208.o
+nvkm-y += nvkm/engine/fifo/gm204.o
diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/fifo/gk104.c b/drivers/gpu/drm/nouveau/nvkm/engine/fifo/gk104.c
index 9585539..e10f964 100644
--- a/drivers/gpu/drm/nouveau/nvkm/engine/fifo/gk104.c
+++ b/drivers/gpu/drm/nouveau/nvkm/engine/fifo/gk104.c
@@ -323,8 +323,8 @@
return nvkm_fifo_channel_fini(&chan->base, suspend);
}
-static struct nvkm_ofuncs
-gk104_fifo_ofuncs = {
+struct nvkm_ofuncs
+gk104_fifo_chan_ofuncs = {
.ctor = gk104_fifo_chan_ctor,
.dtor = _nvkm_fifo_channel_dtor,
.init = gk104_fifo_chan_init,
@@ -337,7 +337,7 @@
static struct nvkm_oclass
gk104_fifo_sclass[] = {
- { KEPLER_CHANNEL_GPFIFO_A, &gk104_fifo_ofuncs },
+ { KEPLER_CHANNEL_GPFIFO_A, &gk104_fifo_chan_ofuncs },
{}
};
@@ -774,6 +774,7 @@
while (object) {
switch (nv_mclass(object)) {
case KEPLER_CHANNEL_GPFIFO_A:
+ case MAXWELL_CHANNEL_GPFIFO_A:
gk104_fifo_recover(priv, engine, (void *)object);
break;
}
diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/fifo/gk104.h b/drivers/gpu/drm/nouveau/nvkm/engine/fifo/gk104.h
index 3046e00..318d30d 100644
--- a/drivers/gpu/drm/nouveau/nvkm/engine/fifo/gk104.h
+++ b/drivers/gpu/drm/nouveau/nvkm/engine/fifo/gk104.h
@@ -13,4 +13,6 @@
struct nvkm_oclass base;
u32 channels;
};
+
+extern struct nvkm_ofuncs gk104_fifo_chan_ofuncs;
#endif
diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/fifo/gm204.c b/drivers/gpu/drm/nouveau/nvkm/engine/fifo/gm204.c
new file mode 100644
index 0000000..749d525
--- /dev/null
+++ b/drivers/gpu/drm/nouveau/nvkm/engine/fifo/gm204.c
@@ -0,0 +1,57 @@
+/*
+ * Copyright 2015 Red Hat Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: Ben Skeggs
+ */
+#include "gk104.h"
+
+#include <nvif/class.h>
+
+static struct nvkm_oclass
+gm204_fifo_sclass[] = {
+ { MAXWELL_CHANNEL_GPFIFO_A, &gk104_fifo_chan_ofuncs },
+ {}
+};
+
+static int
+gm204_fifo_ctor(struct nvkm_object *parent, struct nvkm_object *engine,
+ struct nvkm_oclass *oclass, void *data, u32 size,
+ struct nvkm_object **pobject)
+{
+ int ret = gk104_fifo_ctor(parent, engine, oclass, data, size, pobject);
+ if (ret == 0) {
+ struct gk104_fifo_priv *priv = (void *)*pobject;
+ nv_engine(priv)->sclass = gm204_fifo_sclass;
+ }
+ return ret;
+}
+
+struct nvkm_oclass *
+gm204_fifo_oclass = &(struct gk104_fifo_impl) {
+ .base.handle = NV_ENGINE(FIFO, 0x24),
+ .base.ofuncs = &(struct nvkm_ofuncs) {
+ .ctor = gm204_fifo_ctor,
+ .dtor = gk104_fifo_dtor,
+ .init = gk104_fifo_init,
+ .fini = _nvkm_fifo_fini,
+ },
+ .channels = 4096,
+}.base;
diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/gr/Kbuild b/drivers/gpu/drm/nouveau/nvkm/engine/gr/Kbuild
index 1771d94..2e1b92f 100644
--- a/drivers/gpu/drm/nouveau/nvkm/engine/gr/Kbuild
+++ b/drivers/gpu/drm/nouveau/nvkm/engine/gr/Kbuild
@@ -12,6 +12,8 @@
nvkm-y += nvkm/engine/gr/ctxgk110b.o
nvkm-y += nvkm/engine/gr/ctxgk208.o
nvkm-y += nvkm/engine/gr/ctxgm107.o
+nvkm-y += nvkm/engine/gr/ctxgm204.o
+nvkm-y += nvkm/engine/gr/ctxgm206.o
nvkm-y += nvkm/engine/gr/nv04.o
nvkm-y += nvkm/engine/gr/nv10.o
nvkm-y += nvkm/engine/gr/nv20.o
@@ -34,3 +36,5 @@
nvkm-y += nvkm/engine/gr/gk110b.o
nvkm-y += nvkm/engine/gr/gk208.o
nvkm-y += nvkm/engine/gr/gm107.o
+nvkm-y += nvkm/engine/gr/gm204.o
+nvkm-y += nvkm/engine/gr/gm206.o
diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/gr/ctxgf100.h b/drivers/gpu/drm/nouveau/nvkm/engine/gr/ctxgf100.h
index 1166b1a..3676a33 100644
--- a/drivers/gpu/drm/nouveau/nvkm/engine/gr/ctxgf100.h
+++ b/drivers/gpu/drm/nouveau/nvkm/engine/gr/ctxgf100.h
@@ -88,11 +88,22 @@
void gk104_grctx_generate_pagepool(struct gf100_grctx *);
void gk104_grctx_generate_unkn(struct gf100_gr_priv *);
void gk104_grctx_generate_r418bb8(struct gf100_gr_priv *);
+void gk104_grctx_generate_rop_active_fbps(struct gf100_gr_priv *);
+
extern struct nvkm_oclass *gk110_grctx_oclass;
extern struct nvkm_oclass *gk110b_grctx_oclass;
extern struct nvkm_oclass *gk208_grctx_oclass;
+
extern struct nvkm_oclass *gm107_grctx_oclass;
+void gm107_grctx_generate_bundle(struct gf100_grctx *);
+void gm107_grctx_generate_pagepool(struct gf100_grctx *);
+void gm107_grctx_generate_attrib(struct gf100_grctx *);
+
+extern struct nvkm_oclass *gm204_grctx_oclass;
+void gm204_grctx_generate_main(struct gf100_gr_priv *, struct gf100_grctx *);
+
+extern struct nvkm_oclass *gm206_grctx_oclass;
/* context init value lists */
@@ -196,4 +207,22 @@
extern const struct gf100_gr_init gk208_grctx_init_prop_0[];
extern const struct gf100_gr_init gk208_grctx_init_crstr_0[];
+
+extern const struct gf100_gr_init gm107_grctx_init_gpc_unk_0[];
+extern const struct gf100_gr_init gm107_grctx_init_wwdx_0[];
+
+extern const struct gf100_gr_pack gm204_grctx_pack_icmd[];
+
+extern const struct gf100_gr_pack gm204_grctx_pack_mthd[];
+
+extern const struct gf100_gr_pack gm204_grctx_pack_hub[];
+
+extern const struct gf100_gr_init gm204_grctx_init_prop_0[];
+extern const struct gf100_gr_init gm204_grctx_init_setup_0[];
+extern const struct gf100_gr_init gm204_grctx_init_gpm_0[];
+extern const struct gf100_gr_init gm204_grctx_init_gpc_unk_2[];
+
+extern const struct gf100_gr_pack gm204_grctx_pack_tpc[];
+
+extern const struct gf100_gr_pack gm204_grctx_pack_ppc[];
#endif
diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/gr/ctxgk104.c b/drivers/gpu/drm/nouveau/nvkm/engine/gr/ctxgk104.c
index 5e9454b..b12f6a9 100644
--- a/drivers/gpu/drm/nouveau/nvkm/engine/gr/ctxgk104.c
+++ b/drivers/gpu/drm/nouveau/nvkm/engine/gr/ctxgk104.c
@@ -941,6 +941,14 @@
}
void
+gk104_grctx_generate_rop_active_fbps(struct gf100_gr_priv *priv)
+{
+ const u32 fbp_count = nv_rd32(priv, 0x120074);
+ nv_mask(priv, 0x408850, 0x0000000f, fbp_count); /* zrop */
+ nv_mask(priv, 0x408958, 0x0000000f, fbp_count); /* crop */
+}
+
+void
gk104_grctx_generate_main(struct gf100_gr_priv *priv, struct gf100_grctx *info)
{
struct gf100_grctx_oclass *oclass = (void *)nv_engine(priv)->cclass;
@@ -970,13 +978,7 @@
nv_wr32(priv, 0x4064d0 + (i * 0x04), 0x00000000);
nv_wr32(priv, 0x405b00, (priv->tpc_total << 8) | priv->gpc_nr);
- if (priv->gpc_nr == 1) {
- nv_mask(priv, 0x408850, 0x0000000f, priv->tpc_nr[0]);
- nv_mask(priv, 0x408958, 0x0000000f, priv->tpc_nr[0]);
- } else {
- nv_mask(priv, 0x408850, 0x0000000f, priv->gpc_nr);
- nv_mask(priv, 0x408958, 0x0000000f, priv->gpc_nr);
- }
+ gk104_grctx_generate_rop_active_fbps(priv);
nv_mask(priv, 0x419f78, 0x00000001, 0x00000000);
gf100_gr_icmd(priv, oclass->icmd);
diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/gr/ctxgm107.c b/drivers/gpu/drm/nouveau/nvkm/engine/gr/ctxgm107.c
index b2fae6e..fbeaae3 100644
--- a/drivers/gpu/drm/nouveau/nvkm/engine/gr/ctxgm107.c
+++ b/drivers/gpu/drm/nouveau/nvkm/engine/gr/ctxgm107.c
@@ -699,7 +699,7 @@
{}
};
-static const struct gf100_gr_init
+const struct gf100_gr_init
gm107_grctx_init_gpc_unk_0[] = {
{ 0x418380, 1, 0x04, 0x00000056 },
{}
@@ -834,7 +834,7 @@
{}
};
-static const struct gf100_gr_init
+const struct gf100_gr_init
gm107_grctx_init_wwdx_0[] = {
{ 0x41bf00, 1, 0x04, 0x0a418820 },
{ 0x41bf04, 1, 0x04, 0x062080e6 },
@@ -860,7 +860,7 @@
* PGRAPH context implementation
******************************************************************************/
-static void
+void
gm107_grctx_generate_bundle(struct gf100_grctx *info)
{
const struct gf100_grctx_oclass *impl = gf100_grctx_impl(info->priv);
@@ -877,7 +877,7 @@
mmio_wr32(info, 0x4064c8, (state_limit << 16) | token_limit);
}
-static void
+void
gm107_grctx_generate_pagepool(struct gf100_grctx *info)
{
const struct gf100_grctx_oclass *impl = gf100_grctx_impl(info->priv);
@@ -892,7 +892,7 @@
mmio_wr32(info, 0x418e30, 0x80000000); /* guess at it being related */
}
-static void
+void
gm107_grctx_generate_attrib(struct gf100_grctx *info)
{
struct gf100_gr_priv *priv = info->priv;
@@ -926,7 +926,7 @@
mmio_wr32(info, o + 0xe4, as);
mmio_wr32(info, o + 0xf8, ao);
ao += impl->alpha_nr_max * priv->ppc_tpc_nr[gpc][ppc];
- mmio_wr32(info, u, (0x715 /*XXX*/ << 16) | bs);
+ mmio_wr32(info, u, ((bs / 3 /*XXX*/) << 16) | bs);
}
}
}
@@ -982,13 +982,7 @@
nv_wr32(priv, 0x405b00, (priv->tpc_total << 8) | priv->gpc_nr);
- if (priv->gpc_nr == 1) {
- nv_mask(priv, 0x408850, 0x0000000f, priv->tpc_nr[0]);
- nv_mask(priv, 0x408958, 0x0000000f, priv->tpc_nr[0]);
- } else {
- nv_mask(priv, 0x408850, 0x0000000f, priv->gpc_nr);
- nv_mask(priv, 0x408958, 0x0000000f, priv->gpc_nr);
- }
+ gk104_grctx_generate_rop_active_fbps(priv);
gf100_gr_icmd(priv, oclass->icmd);
nv_wr32(priv, 0x404154, 0x00000400);
diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/gr/ctxgm204.c b/drivers/gpu/drm/nouveau/nvkm/engine/gr/ctxgm204.c
new file mode 100644
index 0000000..ea8e661
--- /dev/null
+++ b/drivers/gpu/drm/nouveau/nvkm/engine/gr/ctxgm204.c
@@ -0,0 +1,1054 @@
+/*
+ * Copyright 2015 Red Hat Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: Ben Skeggs <bskeggs@redhat.com>
+ */
+#include "ctxgf100.h"
+
+/*******************************************************************************
+ * PGRAPH context register lists
+ ******************************************************************************/
+
+static const struct gf100_gr_init
+gm204_grctx_init_icmd_0[] = {
+ { 0x001000, 1, 0x01, 0x00000002 },
+ { 0x0006aa, 1, 0x01, 0x00000001 },
+ { 0x0006ad, 2, 0x01, 0x00000100 },
+ { 0x0006b1, 1, 0x01, 0x00000011 },
+ { 0x00078c, 1, 0x01, 0x00000008 },
+ { 0x000792, 1, 0x01, 0x00000001 },
+ { 0x000794, 3, 0x01, 0x00000001 },
+ { 0x000797, 1, 0x01, 0x000000cf },
+ { 0x00079a, 1, 0x01, 0x00000002 },
+ { 0x0007a1, 1, 0x01, 0x00000001 },
+ { 0x0007a3, 3, 0x01, 0x00000001 },
+ { 0x000831, 1, 0x01, 0x00000004 },
+ { 0x01e100, 1, 0x01, 0x00000001 },
+ { 0x001000, 1, 0x01, 0x00000008 },
+ { 0x000039, 3, 0x01, 0x00000000 },
+ { 0x000380, 1, 0x01, 0x00000001 },
+ { 0x000366, 2, 0x01, 0x00000000 },
+ { 0x000368, 1, 0x01, 0x00000fff },
+ { 0x000370, 2, 0x01, 0x00000000 },
+ { 0x000372, 1, 0x01, 0x000fffff },
+ { 0x000374, 1, 0x01, 0x00000100 },
+ { 0x000818, 8, 0x01, 0x00000000 },
+ { 0x000848, 16, 0x01, 0x00000000 },
+ { 0x000738, 1, 0x01, 0x00000000 },
+ { 0x000b07, 1, 0x01, 0x00000002 },
+ { 0x000b08, 2, 0x01, 0x00000100 },
+ { 0x000b0a, 1, 0x01, 0x00000001 },
+ { 0x000a04, 1, 0x01, 0x000000ff },
+ { 0x000a0b, 1, 0x01, 0x00000040 },
+ { 0x00097f, 1, 0x01, 0x00000100 },
+ { 0x000a02, 1, 0x01, 0x00000001 },
+ { 0x000809, 1, 0x01, 0x00000007 },
+ { 0x00c221, 1, 0x01, 0x00000040 },
+ { 0x00c401, 1, 0x01, 0x00000001 },
+ { 0x00c402, 1, 0x01, 0x00010001 },
+ { 0x00c403, 2, 0x01, 0x00000001 },
+ { 0x00c40e, 1, 0x01, 0x00000020 },
+ { 0x01e100, 1, 0x01, 0x00000001 },
+ { 0x001000, 1, 0x01, 0x00000001 },
+ { 0x000b07, 1, 0x01, 0x00000002 },
+ { 0x000b08, 2, 0x01, 0x00000100 },
+ { 0x000b0a, 1, 0x01, 0x00000001 },
+ { 0x01e100, 1, 0x01, 0x00000001 },
+ { 0x001000, 1, 0x01, 0x00000004 },
+ { 0x000039, 3, 0x01, 0x00000000 },
+ { 0x0000a9, 1, 0x01, 0x0000ffff },
+ { 0x000038, 1, 0x01, 0x0fac6881 },
+ { 0x00003d, 1, 0x01, 0x00000001 },
+ { 0x0000e8, 8, 0x01, 0x00000400 },
+ { 0x000078, 8, 0x01, 0x00000300 },
+ { 0x000050, 1, 0x01, 0x00000011 },
+ { 0x000058, 8, 0x01, 0x00000008 },
+ { 0x000208, 8, 0x01, 0x00000001 },
+ { 0x000081, 1, 0x01, 0x00000001 },
+ { 0x000085, 1, 0x01, 0x00000004 },
+ { 0x000088, 1, 0x01, 0x00000400 },
+ { 0x000090, 1, 0x01, 0x00000300 },
+ { 0x000098, 1, 0x01, 0x00001001 },
+ { 0x0000e3, 1, 0x01, 0x00000001 },
+ { 0x0000da, 1, 0x01, 0x00000001 },
+ { 0x0000b4, 4, 0x01, 0x88888888 },
+ { 0x0000f8, 1, 0x01, 0x00000003 },
+ { 0x0000fa, 1, 0x01, 0x00000001 },
+ { 0x0000b1, 2, 0x01, 0x00000001 },
+ { 0x00009f, 4, 0x01, 0x0000ffff },
+ { 0x0000a8, 1, 0x01, 0x0000ffff },
+ { 0x0000ad, 1, 0x01, 0x0000013e },
+ { 0x0000e1, 1, 0x01, 0x00000010 },
+ { 0x000290, 16, 0x01, 0x00000000 },
+ { 0x0003b0, 16, 0x01, 0x00000000 },
+ { 0x0002a0, 16, 0x01, 0x00000000 },
+ { 0x000420, 16, 0x01, 0x00000000 },
+ { 0x0002b0, 16, 0x01, 0x00000000 },
+ { 0x000430, 16, 0x01, 0x00000000 },
+ { 0x0002c0, 16, 0x01, 0x00000000 },
+ { 0x0004d0, 16, 0x01, 0x00000000 },
+ { 0x000720, 16, 0x01, 0x00000000 },
+ { 0x0008c0, 16, 0x01, 0x00000000 },
+ { 0x000890, 16, 0x01, 0x00000000 },
+ { 0x0008e0, 16, 0x01, 0x00000000 },
+ { 0x0008a0, 16, 0x01, 0x00000000 },
+ { 0x0008f0, 16, 0x01, 0x00000000 },
+ { 0x00094c, 1, 0x01, 0x000000ff },
+ { 0x00094d, 1, 0x01, 0xffffffff },
+ { 0x00094e, 1, 0x01, 0x00000002 },
+ { 0x0002f2, 2, 0x01, 0x00000001 },
+ { 0x0002f5, 1, 0x01, 0x00000001 },
+ { 0x0002f7, 1, 0x01, 0x00000001 },
+ { 0x000303, 1, 0x01, 0x00000001 },
+ { 0x0002e6, 1, 0x01, 0x00000001 },
+ { 0x000466, 1, 0x01, 0x00000052 },
+ { 0x000301, 1, 0x01, 0x3f800000 },
+ { 0x000304, 1, 0x01, 0x30201000 },
+ { 0x000305, 1, 0x01, 0x70605040 },
+ { 0x000306, 1, 0x01, 0xb8a89888 },
+ { 0x000307, 1, 0x01, 0xf8e8d8c8 },
+ { 0x00030a, 1, 0x01, 0x00ffff00 },
+ { 0x00030b, 1, 0x01, 0x0000001a },
+ { 0x00030c, 1, 0x01, 0x00000001 },
+ { 0x000318, 1, 0x01, 0x00000001 },
+ { 0x000340, 1, 0x01, 0x00000000 },
+ { 0x00037d, 1, 0x01, 0x00000006 },
+ { 0x0003a0, 1, 0x01, 0x00000002 },
+ { 0x0003aa, 1, 0x01, 0x00000001 },
+ { 0x0003a9, 1, 0x01, 0x00000001 },
+ { 0x000380, 1, 0x01, 0x00000001 },
+ { 0x000383, 1, 0x01, 0x00000011 },
+ { 0x000360, 1, 0x01, 0x00000040 },
+ { 0x000366, 2, 0x01, 0x00000000 },
+ { 0x000368, 1, 0x01, 0x00000fff },
+ { 0x000370, 2, 0x01, 0x00000000 },
+ { 0x000372, 1, 0x01, 0x000fffff },
+ { 0x000374, 1, 0x01, 0x00000100 },
+ { 0x00037a, 1, 0x01, 0x00000012 },
+ { 0x000619, 1, 0x01, 0x00000003 },
+ { 0x000811, 1, 0x01, 0x00000003 },
+ { 0x000812, 1, 0x01, 0x00000004 },
+ { 0x000813, 1, 0x01, 0x00000006 },
+ { 0x000814, 1, 0x01, 0x00000008 },
+ { 0x000815, 1, 0x01, 0x0000000b },
+ { 0x000800, 6, 0x01, 0x00000001 },
+ { 0x000632, 1, 0x01, 0x00000001 },
+ { 0x000633, 1, 0x01, 0x00000002 },
+ { 0x000634, 1, 0x01, 0x00000003 },
+ { 0x000635, 1, 0x01, 0x00000004 },
+ { 0x000654, 1, 0x01, 0x3f800000 },
+ { 0x000657, 1, 0x01, 0x3f800000 },
+ { 0x000655, 2, 0x01, 0x3f800000 },
+ { 0x0006cd, 1, 0x01, 0x3f800000 },
+ { 0x0007f5, 1, 0x01, 0x3f800000 },
+ { 0x0007dc, 1, 0x01, 0x39291909 },
+ { 0x0007dd, 1, 0x01, 0x79695949 },
+ { 0x0007de, 1, 0x01, 0xb9a99989 },
+ { 0x0007df, 1, 0x01, 0xf9e9d9c9 },
+ { 0x0007e8, 1, 0x01, 0x00003210 },
+ { 0x0007e9, 1, 0x01, 0x00007654 },
+ { 0x0007ea, 1, 0x01, 0x00000098 },
+ { 0x0007ec, 1, 0x01, 0x39291909 },
+ { 0x0007ed, 1, 0x01, 0x79695949 },
+ { 0x0007ee, 1, 0x01, 0xb9a99989 },
+ { 0x0007ef, 1, 0x01, 0xf9e9d9c9 },
+ { 0x0007f0, 1, 0x01, 0x00003210 },
+ { 0x0007f1, 1, 0x01, 0x00007654 },
+ { 0x0007f2, 1, 0x01, 0x00000098 },
+ { 0x0005a5, 1, 0x01, 0x00000001 },
+ { 0x0005aa, 1, 0x01, 0x00000002 },
+ { 0x0005cb, 1, 0x01, 0x00000004 },
+ { 0x0005d0, 1, 0x01, 0x20181008 },
+ { 0x0005d1, 1, 0x01, 0x40383028 },
+ { 0x0005d2, 1, 0x01, 0x60585048 },
+ { 0x0005d3, 1, 0x01, 0x80787068 },
+ { 0x000980, 128, 0x01, 0x00000000 },
+ { 0x000468, 1, 0x01, 0x00000004 },
+ { 0x00046c, 1, 0x01, 0x00000001 },
+ { 0x000470, 96, 0x01, 0x00000000 },
+ { 0x0005e0, 16, 0x01, 0x00000d10 },
+ { 0x000510, 16, 0x01, 0x3f800000 },
+ { 0x000520, 1, 0x01, 0x000002b6 },
+ { 0x000529, 1, 0x01, 0x00000001 },
+ { 0x000530, 16, 0x01, 0xffff0000 },
+ { 0x000550, 32, 0x01, 0xffff0000 },
+ { 0x000585, 1, 0x01, 0x0000003f },
+ { 0x000576, 1, 0x01, 0x00000003 },
+ { 0x00057b, 1, 0x01, 0x00000059 },
+ { 0x000586, 1, 0x01, 0x00000040 },
+ { 0x000582, 2, 0x01, 0x00000080 },
+ { 0x000595, 1, 0x01, 0x00400040 },
+ { 0x000596, 1, 0x01, 0x00000492 },
+ { 0x000597, 1, 0x01, 0x08080203 },
+ { 0x0005ad, 1, 0x01, 0x00000008 },
+ { 0x000598, 1, 0x01, 0x00020001 },
+ { 0x0005d4, 1, 0x01, 0x00000001 },
+ { 0x0005c2, 1, 0x01, 0x00000001 },
+ { 0x000638, 2, 0x01, 0x00000001 },
+ { 0x00063a, 1, 0x01, 0x00000002 },
+ { 0x00063b, 2, 0x01, 0x00000001 },
+ { 0x00063d, 1, 0x01, 0x00000002 },
+ { 0x00063e, 1, 0x01, 0x00000001 },
+ { 0x0008b8, 8, 0x01, 0x00000001 },
+ { 0x000900, 8, 0x01, 0x00000001 },
+ { 0x000908, 8, 0x01, 0x00000002 },
+ { 0x000910, 16, 0x01, 0x00000001 },
+ { 0x000920, 8, 0x01, 0x00000002 },
+ { 0x000928, 8, 0x01, 0x00000001 },
+ { 0x000662, 1, 0x01, 0x00000001 },
+ { 0x000648, 9, 0x01, 0x00000001 },
+ { 0x000674, 1, 0x01, 0x00000001 },
+ { 0x000658, 1, 0x01, 0x0000000f },
+ { 0x0007ff, 1, 0x01, 0x0000000a },
+ { 0x00066a, 1, 0x01, 0x40000000 },
+ { 0x00066b, 1, 0x01, 0x10000000 },
+ { 0x00066c, 2, 0x01, 0xffff0000 },
+ { 0x0007af, 2, 0x01, 0x00000008 },
+ { 0x0007f6, 1, 0x01, 0x00000001 },
+ { 0x0006b2, 1, 0x01, 0x00000055 },
+ { 0x0007ad, 1, 0x01, 0x00000003 },
+ { 0x000971, 1, 0x01, 0x00000008 },
+ { 0x000972, 1, 0x01, 0x00000040 },
+ { 0x000973, 1, 0x01, 0x0000012c },
+ { 0x00097c, 1, 0x01, 0x00000040 },
+ { 0x000975, 1, 0x01, 0x00000020 },
+ { 0x000976, 1, 0x01, 0x00000001 },
+ { 0x000977, 1, 0x01, 0x00000020 },
+ { 0x000978, 1, 0x01, 0x00000001 },
+ { 0x000957, 1, 0x01, 0x00000003 },
+ { 0x00095e, 1, 0x01, 0x20164010 },
+ { 0x00095f, 1, 0x01, 0x00000020 },
+ { 0x000a0d, 1, 0x01, 0x00000006 },
+ { 0x00097d, 1, 0x01, 0x0000000c },
+ { 0x000683, 1, 0x01, 0x00000006 },
+ { 0x000687, 1, 0x01, 0x003fffff },
+ { 0x0006a0, 1, 0x01, 0x00000005 },
+ { 0x000840, 1, 0x01, 0x00400008 },
+ { 0x000841, 1, 0x01, 0x08000080 },
+ { 0x000842, 1, 0x01, 0x00400008 },
+ { 0x000843, 1, 0x01, 0x08000080 },
+ { 0x000818, 8, 0x01, 0x00000000 },
+ { 0x000848, 16, 0x01, 0x00000000 },
+ { 0x000738, 1, 0x01, 0x00000000 },
+ { 0x0006aa, 1, 0x01, 0x00000001 },
+ { 0x0006ab, 1, 0x01, 0x00000002 },
+ { 0x0006ac, 1, 0x01, 0x00000080 },
+ { 0x0006ad, 2, 0x01, 0x00000100 },
+ { 0x0006b1, 1, 0x01, 0x00000011 },
+ { 0x0006bb, 1, 0x01, 0x000000cf },
+ { 0x0006ce, 1, 0x01, 0x2a712488 },
+ { 0x000739, 1, 0x01, 0x4085c000 },
+ { 0x00073a, 1, 0x01, 0x00000080 },
+ { 0x000786, 1, 0x01, 0x80000100 },
+ { 0x00073c, 1, 0x01, 0x00010100 },
+ { 0x00073d, 1, 0x01, 0x02800000 },
+ { 0x000787, 1, 0x01, 0x000000cf },
+ { 0x00078c, 1, 0x01, 0x00000008 },
+ { 0x000792, 1, 0x01, 0x00000001 },
+ { 0x000794, 3, 0x01, 0x00000001 },
+ { 0x000797, 1, 0x01, 0x000000cf },
+ { 0x000836, 1, 0x01, 0x00000001 },
+ { 0x00079a, 1, 0x01, 0x00000002 },
+ { 0x000833, 1, 0x01, 0x04444480 },
+ { 0x0007a1, 1, 0x01, 0x00000001 },
+ { 0x0007a3, 3, 0x01, 0x00000001 },
+ { 0x000831, 1, 0x01, 0x00000004 },
+ { 0x000b07, 1, 0x01, 0x00000002 },
+ { 0x000b08, 2, 0x01, 0x00000100 },
+ { 0x000b0a, 1, 0x01, 0x00000001 },
+ { 0x000a04, 1, 0x01, 0x000000ff },
+ { 0x000a0b, 1, 0x01, 0x00000040 },
+ { 0x00097f, 1, 0x01, 0x00000100 },
+ { 0x000a02, 1, 0x01, 0x00000001 },
+ { 0x000809, 1, 0x01, 0x00000007 },
+ { 0x00c221, 1, 0x01, 0x00000040 },
+ { 0x00c1b0, 8, 0x01, 0x0000000f },
+ { 0x00c1b8, 1, 0x01, 0x0fac6881 },
+ { 0x00c1b9, 1, 0x01, 0x00fac688 },
+ { 0x00c401, 1, 0x01, 0x00000001 },
+ { 0x00c402, 1, 0x01, 0x00010001 },
+ { 0x00c403, 2, 0x01, 0x00000001 },
+ { 0x00c40e, 1, 0x01, 0x00000020 },
+ { 0x00c413, 4, 0x01, 0x88888888 },
+ { 0x00c423, 1, 0x01, 0x0000ff00 },
+ { 0x00c420, 1, 0x01, 0x00880101 },
+ { 0x01e100, 1, 0x01, 0x00000001 },
+ {}
+};
+
+const struct gf100_gr_pack
+gm204_grctx_pack_icmd[] = {
+ { gm204_grctx_init_icmd_0 },
+ {}
+};
+
+static const struct gf100_gr_init
+gm204_grctx_init_b197_0[] = {
+ { 0x000800, 8, 0x40, 0x00000000 },
+ { 0x000804, 8, 0x40, 0x00000000 },
+ { 0x000808, 8, 0x40, 0x00000400 },
+ { 0x00080c, 8, 0x40, 0x00000300 },
+ { 0x000810, 1, 0x04, 0x000000cf },
+ { 0x000850, 7, 0x40, 0x00000000 },
+ { 0x000814, 8, 0x40, 0x00000040 },
+ { 0x000818, 8, 0x40, 0x00000001 },
+ { 0x00081c, 8, 0x40, 0x00000000 },
+ { 0x000820, 8, 0x40, 0x00000000 },
+ { 0x001c00, 16, 0x10, 0x00000000 },
+ { 0x001c04, 16, 0x10, 0x00000000 },
+ { 0x001c08, 16, 0x10, 0x00000000 },
+ { 0x001c0c, 16, 0x10, 0x00000000 },
+ { 0x001d00, 16, 0x10, 0x00000000 },
+ { 0x001d04, 16, 0x10, 0x00000000 },
+ { 0x001d08, 16, 0x10, 0x00000000 },
+ { 0x001d0c, 16, 0x10, 0x00000000 },
+ { 0x001f00, 16, 0x08, 0x00000000 },
+ { 0x001f04, 16, 0x08, 0x00000000 },
+ { 0x001f80, 16, 0x08, 0x00000000 },
+ { 0x001f84, 16, 0x08, 0x00000000 },
+ { 0x002000, 1, 0x04, 0x00000000 },
+ { 0x002040, 1, 0x04, 0x00000011 },
+ { 0x002080, 1, 0x04, 0x00000020 },
+ { 0x0020c0, 1, 0x04, 0x00000030 },
+ { 0x002100, 1, 0x04, 0x00000040 },
+ { 0x002140, 1, 0x04, 0x00000051 },
+ { 0x00200c, 6, 0x40, 0x00000001 },
+ { 0x002010, 1, 0x04, 0x00000000 },
+ { 0x002050, 1, 0x04, 0x00000000 },
+ { 0x002090, 1, 0x04, 0x00000001 },
+ { 0x0020d0, 1, 0x04, 0x00000002 },
+ { 0x002110, 1, 0x04, 0x00000003 },
+ { 0x002150, 1, 0x04, 0x00000004 },
+ { 0x000380, 4, 0x20, 0x00000000 },
+ { 0x000384, 4, 0x20, 0x00000000 },
+ { 0x000388, 4, 0x20, 0x00000000 },
+ { 0x00038c, 4, 0x20, 0x00000000 },
+ { 0x000700, 4, 0x10, 0x00000000 },
+ { 0x000704, 4, 0x10, 0x00000000 },
+ { 0x000708, 4, 0x10, 0x00000000 },
+ { 0x002800, 128, 0x04, 0x00000000 },
+ { 0x000a00, 16, 0x20, 0x00000000 },
+ { 0x000a04, 16, 0x20, 0x00000000 },
+ { 0x000a08, 16, 0x20, 0x00000000 },
+ { 0x000a0c, 16, 0x20, 0x00000000 },
+ { 0x000a10, 16, 0x20, 0x00000000 },
+ { 0x000a14, 16, 0x20, 0x00000000 },
+ { 0x000a18, 16, 0x20, 0x00006420 },
+ { 0x000a1c, 16, 0x20, 0x00000000 },
+ { 0x000c00, 16, 0x10, 0x00000000 },
+ { 0x000c04, 16, 0x10, 0x00000000 },
+ { 0x000c08, 16, 0x10, 0x00000000 },
+ { 0x000c0c, 16, 0x10, 0x3f800000 },
+ { 0x000d00, 8, 0x08, 0xffff0000 },
+ { 0x000d04, 8, 0x08, 0xffff0000 },
+ { 0x000e00, 16, 0x10, 0x00000000 },
+ { 0x000e04, 16, 0x10, 0xffff0000 },
+ { 0x000e08, 16, 0x10, 0xffff0000 },
+ { 0x000d40, 4, 0x08, 0x00000000 },
+ { 0x000d44, 4, 0x08, 0x00000000 },
+ { 0x001e00, 8, 0x20, 0x00000001 },
+ { 0x001e04, 8, 0x20, 0x00000001 },
+ { 0x001e08, 8, 0x20, 0x00000002 },
+ { 0x001e0c, 8, 0x20, 0x00000001 },
+ { 0x001e10, 8, 0x20, 0x00000001 },
+ { 0x001e14, 8, 0x20, 0x00000002 },
+ { 0x001e18, 8, 0x20, 0x00000001 },
+ { 0x001480, 8, 0x10, 0x00000000 },
+ { 0x001484, 8, 0x10, 0x00000000 },
+ { 0x001488, 8, 0x10, 0x00000000 },
+ { 0x003400, 128, 0x04, 0x00000000 },
+ { 0x00030c, 1, 0x04, 0x00000001 },
+ { 0x001944, 1, 0x04, 0x00000000 },
+ { 0x001514, 1, 0x04, 0x00000000 },
+ { 0x000d68, 1, 0x04, 0x0000ffff },
+ { 0x00121c, 1, 0x04, 0x0fac6881 },
+ { 0x000fac, 1, 0x04, 0x00000001 },
+ { 0x001538, 1, 0x04, 0x00000001 },
+ { 0x000fe0, 2, 0x04, 0x00000000 },
+ { 0x000fe8, 1, 0x04, 0x00000014 },
+ { 0x000fec, 1, 0x04, 0x00000040 },
+ { 0x000ff0, 1, 0x04, 0x00000000 },
+ { 0x00179c, 1, 0x04, 0x00000000 },
+ { 0x001228, 1, 0x04, 0x00000400 },
+ { 0x00122c, 1, 0x04, 0x00000300 },
+ { 0x001230, 1, 0x04, 0x00010001 },
+ { 0x0007f8, 1, 0x04, 0x00000000 },
+ { 0x001208, 1, 0x04, 0x00000000 },
+ { 0x0015b4, 1, 0x04, 0x00000001 },
+ { 0x0015cc, 1, 0x04, 0x00000000 },
+ { 0x001534, 1, 0x04, 0x00000000 },
+ { 0x000754, 1, 0x04, 0x00000001 },
+ { 0x000fb0, 1, 0x04, 0x00000000 },
+ { 0x0015d0, 1, 0x04, 0x00000000 },
+ { 0x0011e0, 4, 0x04, 0x88888888 },
+ { 0x00153c, 1, 0x04, 0x00000000 },
+ { 0x0016b4, 1, 0x04, 0x00000003 },
+ { 0x000fa4, 1, 0x04, 0x00000001 },
+ { 0x000fbc, 4, 0x04, 0x0000ffff },
+ { 0x000fa8, 1, 0x04, 0x0000ffff },
+ { 0x000df8, 2, 0x04, 0x00000000 },
+ { 0x001948, 1, 0x04, 0x00000000 },
+ { 0x001970, 1, 0x04, 0x00000001 },
+ { 0x00161c, 1, 0x04, 0x000009f0 },
+ { 0x000dcc, 1, 0x04, 0x00000010 },
+ { 0x0015e4, 1, 0x04, 0x00000000 },
+ { 0x001160, 32, 0x04, 0x25e00040 },
+ { 0x001880, 32, 0x04, 0x00000000 },
+ { 0x000f84, 2, 0x04, 0x00000000 },
+ { 0x0017c8, 2, 0x04, 0x00000000 },
+ { 0x0017d0, 1, 0x04, 0x000000ff },
+ { 0x0017d4, 1, 0x04, 0xffffffff },
+ { 0x0017d8, 1, 0x04, 0x00000002 },
+ { 0x0017dc, 1, 0x04, 0x00000000 },
+ { 0x0015f4, 2, 0x04, 0x00000000 },
+ { 0x001434, 2, 0x04, 0x00000000 },
+ { 0x000d74, 1, 0x04, 0x00000000 },
+ { 0x0013a4, 1, 0x04, 0x00000000 },
+ { 0x001318, 1, 0x04, 0x00000001 },
+ { 0x001080, 2, 0x04, 0x00000000 },
+ { 0x001088, 2, 0x04, 0x00000001 },
+ { 0x001090, 1, 0x04, 0x00000000 },
+ { 0x001094, 1, 0x04, 0x00000001 },
+ { 0x001098, 1, 0x04, 0x00000000 },
+ { 0x00109c, 1, 0x04, 0x00000001 },
+ { 0x0010a0, 2, 0x04, 0x00000000 },
+ { 0x001644, 1, 0x04, 0x00000000 },
+ { 0x000748, 1, 0x04, 0x00000000 },
+ { 0x000de8, 1, 0x04, 0x00000000 },
+ { 0x001648, 1, 0x04, 0x00000000 },
+ { 0x0012a4, 1, 0x04, 0x00000000 },
+ { 0x001120, 4, 0x04, 0x00000000 },
+ { 0x001118, 1, 0x04, 0x00000000 },
+ { 0x00164c, 1, 0x04, 0x00000000 },
+ { 0x001658, 1, 0x04, 0x00000000 },
+ { 0x001910, 1, 0x04, 0x00000290 },
+ { 0x001518, 1, 0x04, 0x00000000 },
+ { 0x00165c, 1, 0x04, 0x00000001 },
+ { 0x001520, 1, 0x04, 0x00000000 },
+ { 0x001604, 1, 0x04, 0x00000000 },
+ { 0x001570, 1, 0x04, 0x00000000 },
+ { 0x0013b0, 2, 0x04, 0x3f800000 },
+ { 0x00020c, 1, 0x04, 0x00000000 },
+ { 0x001670, 1, 0x04, 0x30201000 },
+ { 0x001674, 1, 0x04, 0x70605040 },
+ { 0x001678, 1, 0x04, 0xb8a89888 },
+ { 0x00167c, 1, 0x04, 0xf8e8d8c8 },
+ { 0x00166c, 1, 0x04, 0x00000000 },
+ { 0x001680, 1, 0x04, 0x00ffff00 },
+ { 0x0012d0, 1, 0x04, 0x00000003 },
+ { 0x00113c, 1, 0x04, 0x00000000 },
+ { 0x0012d4, 1, 0x04, 0x00000002 },
+ { 0x001684, 2, 0x04, 0x00000000 },
+ { 0x000dac, 2, 0x04, 0x00001b02 },
+ { 0x000db4, 1, 0x04, 0x00000000 },
+ { 0x00168c, 1, 0x04, 0x00000000 },
+ { 0x0015bc, 1, 0x04, 0x00000000 },
+ { 0x00156c, 1, 0x04, 0x00000000 },
+ { 0x00187c, 1, 0x04, 0x00000000 },
+ { 0x001110, 1, 0x04, 0x00000001 },
+ { 0x000dc0, 3, 0x04, 0x00000000 },
+ { 0x000f40, 5, 0x04, 0x00000000 },
+ { 0x001234, 1, 0x04, 0x00000000 },
+ { 0x001690, 1, 0x04, 0x00000000 },
+ { 0x000790, 5, 0x04, 0x00000000 },
+ { 0x00077c, 1, 0x04, 0x00000000 },
+ { 0x001000, 1, 0x04, 0x00000010 },
+ { 0x0010fc, 1, 0x04, 0x00000000 },
+ { 0x001290, 1, 0x04, 0x00000000 },
+ { 0x000218, 1, 0x04, 0x00000010 },
+ { 0x0012d8, 1, 0x04, 0x00000000 },
+ { 0x0012dc, 1, 0x04, 0x00000010 },
+ { 0x000d94, 1, 0x04, 0x00000001 },
+ { 0x00155c, 2, 0x04, 0x00000000 },
+ { 0x001564, 1, 0x04, 0x00000fff },
+ { 0x001574, 2, 0x04, 0x00000000 },
+ { 0x00157c, 1, 0x04, 0x000fffff },
+ { 0x001354, 1, 0x04, 0x00000000 },
+ { 0x001610, 1, 0x04, 0x00000012 },
+ { 0x001608, 2, 0x04, 0x00000000 },
+ { 0x00260c, 1, 0x04, 0x00000000 },
+ { 0x0007ac, 1, 0x04, 0x00000000 },
+ { 0x00162c, 1, 0x04, 0x00000003 },
+ { 0x000210, 1, 0x04, 0x00000000 },
+ { 0x000320, 1, 0x04, 0x00000000 },
+ { 0x000324, 6, 0x04, 0x3f800000 },
+ { 0x000750, 1, 0x04, 0x00000000 },
+ { 0x000760, 1, 0x04, 0x39291909 },
+ { 0x000764, 1, 0x04, 0x79695949 },
+ { 0x000768, 1, 0x04, 0xb9a99989 },
+ { 0x00076c, 1, 0x04, 0xf9e9d9c9 },
+ { 0x000770, 1, 0x04, 0x30201000 },
+ { 0x000774, 1, 0x04, 0x70605040 },
+ { 0x000778, 1, 0x04, 0x00009080 },
+ { 0x000780, 1, 0x04, 0x39291909 },
+ { 0x000784, 1, 0x04, 0x79695949 },
+ { 0x000788, 1, 0x04, 0xb9a99989 },
+ { 0x00078c, 1, 0x04, 0xf9e9d9c9 },
+ { 0x0007d0, 1, 0x04, 0x30201000 },
+ { 0x0007d4, 1, 0x04, 0x70605040 },
+ { 0x0007d8, 1, 0x04, 0x00009080 },
+ { 0x001004, 1, 0x04, 0x00000000 },
+ { 0x001240, 8, 0x04, 0x00000000 },
+ { 0x00037c, 1, 0x04, 0x00000001 },
+ { 0x000740, 1, 0x04, 0x00000000 },
+ { 0x001148, 1, 0x04, 0x00000000 },
+ { 0x000fb4, 1, 0x04, 0x00000000 },
+ { 0x000fb8, 1, 0x04, 0x00000002 },
+ { 0x001130, 1, 0x04, 0x00000002 },
+ { 0x000fd4, 2, 0x04, 0x00000000 },
+ { 0x001030, 1, 0x04, 0x20181008 },
+ { 0x001034, 1, 0x04, 0x40383028 },
+ { 0x001038, 1, 0x04, 0x60585048 },
+ { 0x00103c, 1, 0x04, 0x80787068 },
+ { 0x000744, 1, 0x04, 0x00000000 },
+ { 0x002600, 1, 0x04, 0x00000000 },
+ { 0x001918, 1, 0x04, 0x00000000 },
+ { 0x00191c, 1, 0x04, 0x00000900 },
+ { 0x001920, 1, 0x04, 0x00000405 },
+ { 0x001308, 1, 0x04, 0x00000001 },
+ { 0x001924, 1, 0x04, 0x00000000 },
+ { 0x0013ac, 1, 0x04, 0x00000000 },
+ { 0x00192c, 1, 0x04, 0x00000001 },
+ { 0x00193c, 1, 0x04, 0x00002c1c },
+ { 0x000d7c, 1, 0x04, 0x00000000 },
+ { 0x000f8c, 1, 0x04, 0x00000000 },
+ { 0x0002c0, 1, 0x04, 0x00000001 },
+ { 0x001510, 1, 0x04, 0x00000000 },
+ { 0x001940, 1, 0x04, 0x00000000 },
+ { 0x000ff4, 2, 0x04, 0x00000000 },
+ { 0x00194c, 2, 0x04, 0x00000000 },
+ { 0x001968, 1, 0x04, 0x00000000 },
+ { 0x001590, 1, 0x04, 0x0000003f },
+ { 0x0007e8, 4, 0x04, 0x00000000 },
+ { 0x00196c, 1, 0x04, 0x00000011 },
+ { 0x0002e4, 1, 0x04, 0x0000b001 },
+ { 0x00036c, 2, 0x04, 0x00000000 },
+ { 0x00197c, 1, 0x04, 0x00000000 },
+ { 0x000fcc, 2, 0x04, 0x00000000 },
+ { 0x0002d8, 1, 0x04, 0x00000040 },
+ { 0x001980, 1, 0x04, 0x00000080 },
+ { 0x001504, 1, 0x04, 0x00000080 },
+ { 0x001984, 1, 0x04, 0x00000000 },
+ { 0x000f60, 1, 0x04, 0x00000000 },
+ { 0x000f64, 1, 0x04, 0x00400040 },
+ { 0x000f68, 1, 0x04, 0x00002212 },
+ { 0x000f6c, 1, 0x04, 0x08080203 },
+ { 0x001108, 1, 0x04, 0x00000008 },
+ { 0x000f70, 1, 0x04, 0x00080001 },
+ { 0x000ffc, 1, 0x04, 0x00000000 },
+ { 0x001134, 1, 0x04, 0x00000000 },
+ { 0x000f1c, 1, 0x04, 0x00000000 },
+ { 0x0011f8, 1, 0x04, 0x00000000 },
+ { 0x001138, 1, 0x04, 0x00000001 },
+ { 0x000300, 1, 0x04, 0x00000001 },
+ { 0x0013a8, 1, 0x04, 0x00000000 },
+ { 0x001224, 1, 0x04, 0x00000000 },
+ { 0x0012ec, 1, 0x04, 0x00000000 },
+ { 0x001310, 1, 0x04, 0x00000000 },
+ { 0x001314, 1, 0x04, 0x00000001 },
+ { 0x001380, 1, 0x04, 0x00000000 },
+ { 0x001384, 4, 0x04, 0x00000001 },
+ { 0x001394, 1, 0x04, 0x00000000 },
+ { 0x00139c, 1, 0x04, 0x00000000 },
+ { 0x001398, 1, 0x04, 0x00000000 },
+ { 0x001594, 1, 0x04, 0x00000000 },
+ { 0x001598, 4, 0x04, 0x00000001 },
+ { 0x000f54, 3, 0x04, 0x00000000 },
+ { 0x0019bc, 1, 0x04, 0x00000000 },
+ { 0x000f9c, 2, 0x04, 0x00000000 },
+ { 0x0012cc, 1, 0x04, 0x00000000 },
+ { 0x0012e8, 1, 0x04, 0x00000000 },
+ { 0x00130c, 1, 0x04, 0x00000001 },
+ { 0x001360, 8, 0x04, 0x00000000 },
+ { 0x00133c, 2, 0x04, 0x00000001 },
+ { 0x001344, 1, 0x04, 0x00000002 },
+ { 0x001348, 2, 0x04, 0x00000001 },
+ { 0x001350, 1, 0x04, 0x00000002 },
+ { 0x001358, 1, 0x04, 0x00000001 },
+ { 0x0012e4, 1, 0x04, 0x00000000 },
+ { 0x00131c, 4, 0x04, 0x00000000 },
+ { 0x0019c0, 1, 0x04, 0x00000000 },
+ { 0x001140, 1, 0x04, 0x00000000 },
+ { 0x000dd0, 1, 0x04, 0x00000000 },
+ { 0x000dd4, 1, 0x04, 0x00000001 },
+ { 0x0002f4, 1, 0x04, 0x00000000 },
+ { 0x0019c4, 1, 0x04, 0x00000000 },
+ { 0x0019c8, 1, 0x04, 0x00001500 },
+ { 0x00135c, 1, 0x04, 0x00000000 },
+ { 0x000f90, 1, 0x04, 0x00000000 },
+ { 0x0019e0, 8, 0x04, 0x00000001 },
+ { 0x0019cc, 1, 0x04, 0x00000001 },
+ { 0x00111c, 1, 0x04, 0x00000001 },
+ { 0x0015b8, 1, 0x04, 0x00000000 },
+ { 0x001a00, 1, 0x04, 0x00001111 },
+ { 0x001a04, 7, 0x04, 0x00000000 },
+ { 0x000d6c, 2, 0x04, 0xffff0000 },
+ { 0x0010f8, 1, 0x04, 0x00001010 },
+ { 0x000d80, 5, 0x04, 0x00000000 },
+ { 0x000da0, 1, 0x04, 0x00000000 },
+ { 0x0007a4, 2, 0x04, 0x00000000 },
+ { 0x001508, 1, 0x04, 0x80000000 },
+ { 0x00150c, 1, 0x04, 0x40000000 },
+ { 0x001668, 1, 0x04, 0x00000000 },
+ { 0x000318, 2, 0x04, 0x00000008 },
+ { 0x000d9c, 1, 0x04, 0x00000001 },
+ { 0x000f14, 1, 0x04, 0x00000000 },
+ { 0x000374, 1, 0x04, 0x00000000 },
+ { 0x000378, 1, 0x04, 0x0000000c },
+ { 0x0007dc, 1, 0x04, 0x00000000 },
+ { 0x00074c, 1, 0x04, 0x00000055 },
+ { 0x001420, 1, 0x04, 0x00000003 },
+ { 0x001008, 1, 0x04, 0x00000008 },
+ { 0x00100c, 1, 0x04, 0x00000040 },
+ { 0x001010, 1, 0x04, 0x0000012c },
+ { 0x000d60, 1, 0x04, 0x00000040 },
+ { 0x001018, 1, 0x04, 0x00000020 },
+ { 0x00101c, 1, 0x04, 0x00000001 },
+ { 0x001020, 1, 0x04, 0x00000020 },
+ { 0x001024, 1, 0x04, 0x00000001 },
+ { 0x001444, 3, 0x04, 0x00000000 },
+ { 0x000360, 1, 0x04, 0x20164010 },
+ { 0x000364, 1, 0x04, 0x00000020 },
+ { 0x000368, 1, 0x04, 0x00000000 },
+ { 0x000da8, 1, 0x04, 0x00000030 },
+ { 0x000de4, 1, 0x04, 0x00000000 },
+ { 0x000204, 1, 0x04, 0x00000006 },
+ { 0x0002d0, 1, 0x04, 0x003fffff },
+ { 0x001220, 1, 0x04, 0x00000005 },
+ { 0x000fdc, 1, 0x04, 0x00000000 },
+ { 0x000f98, 1, 0x04, 0x00400008 },
+ { 0x001284, 1, 0x04, 0x08000080 },
+ { 0x001450, 1, 0x04, 0x00400008 },
+ { 0x001454, 1, 0x04, 0x08000080 },
+ { 0x000214, 1, 0x04, 0x00000000 },
+ {}
+};
+
+const struct gf100_gr_pack
+gm204_grctx_pack_mthd[] = {
+ { gm204_grctx_init_b197_0, 0xb197 },
+ { gf100_grctx_init_902d_0, 0x902d },
+ {}
+};
+
+static const struct gf100_gr_init
+gm204_grctx_init_fe_0[] = {
+ { 0x404004, 8, 0x04, 0x00000000 },
+ { 0x404024, 1, 0x04, 0x0000e000 },
+ { 0x404028, 8, 0x04, 0x00000000 },
+ { 0x4040a8, 8, 0x04, 0x00000000 },
+ { 0x4040c8, 1, 0x04, 0xf801008f },
+ { 0x4040d0, 6, 0x04, 0x00000000 },
+ { 0x4040f8, 1, 0x04, 0x00000000 },
+ { 0x404100, 10, 0x04, 0x00000000 },
+ { 0x404130, 2, 0x04, 0x00000000 },
+ { 0x404150, 1, 0x04, 0x0000002e },
+ { 0x404154, 2, 0x04, 0x00000800 },
+ { 0x404164, 1, 0x04, 0x00000045 },
+ { 0x40417c, 2, 0x04, 0x00000000 },
+ { 0x404194, 1, 0x04, 0x33000700 },
+ { 0x4041a0, 4, 0x04, 0x00000000 },
+ { 0x4041c4, 2, 0x04, 0x00000000 },
+ {}
+};
+
+static const struct gf100_gr_init
+gm204_grctx_init_ds_0[] = {
+ { 0x405800, 1, 0x04, 0x8f8001bf },
+ { 0x405830, 1, 0x04, 0x04001000 },
+ { 0x405834, 1, 0x04, 0x08000000 },
+ { 0x405838, 1, 0x04, 0x00010000 },
+ { 0x405854, 1, 0x04, 0x00000000 },
+ { 0x405870, 4, 0x04, 0x00000001 },
+ { 0x405a00, 2, 0x04, 0x00000000 },
+ { 0x405a18, 1, 0x04, 0x00000000 },
+ { 0x405a1c, 1, 0x04, 0x000000ff },
+ {}
+};
+
+static const struct gf100_gr_init
+gm204_grctx_init_cwd_0[] = {
+ { 0x405b00, 1, 0x04, 0x00000000 },
+ { 0x405b10, 1, 0x04, 0x00001000 },
+ { 0x405b20, 1, 0x04, 0x04000000 },
+ { 0x405b60, 6, 0x04, 0x00000000 },
+ { 0x405ba0, 6, 0x04, 0x00000000 },
+ {}
+};
+
+static const struct gf100_gr_init
+gm204_grctx_init_pd_0[] = {
+ { 0x406020, 1, 0x04, 0x17410001 },
+ { 0x406028, 4, 0x04, 0x00000001 },
+ { 0x4064a8, 1, 0x04, 0x00000000 },
+ { 0x4064ac, 1, 0x04, 0x00003fff },
+ { 0x4064b0, 3, 0x04, 0x00000000 },
+ { 0x4064c0, 1, 0x04, 0x80400280 },
+ { 0x4064c4, 1, 0x04, 0x0400ffff },
+ { 0x4064c8, 1, 0x04, 0x01800780 },
+ { 0x4064cc, 9, 0x04, 0x00000000 },
+ { 0x4064fc, 1, 0x04, 0x0000022a },
+ { 0x406500, 1, 0x04, 0x00000000 },
+ {}
+};
+
+static const struct gf100_gr_init
+gm204_grctx_init_be_0[] = {
+ { 0x408800, 1, 0x04, 0x32882a3c },
+ { 0x408804, 1, 0x04, 0x00000040 },
+ { 0x408808, 1, 0x04, 0x1003e005 },
+ { 0x408840, 1, 0x04, 0x00000e0b },
+ { 0x408900, 1, 0x04, 0xb080b801 },
+ { 0x408904, 1, 0x04, 0x63038001 },
+ { 0x408908, 1, 0x04, 0x12c8502f },
+ { 0x408980, 1, 0x04, 0x0000011d },
+ {}
+};
+
+const struct gf100_gr_pack
+gm204_grctx_pack_hub[] = {
+ { gf100_grctx_init_main_0 },
+ { gm204_grctx_init_fe_0 },
+ { gk110_grctx_init_pri_0 },
+ { gk104_grctx_init_memfmt_0 },
+ { gm204_grctx_init_ds_0 },
+ { gm204_grctx_init_cwd_0 },
+ { gm204_grctx_init_pd_0 },
+ { gk208_grctx_init_rstr2d_0 },
+ { gk104_grctx_init_scc_0 },
+ { gm204_grctx_init_be_0 },
+ {}
+};
+
+const struct gf100_gr_init
+gm204_grctx_init_prop_0[] = {
+ { 0x418400, 1, 0x04, 0x38e01e00 },
+ { 0x418404, 1, 0x04, 0x70001fff },
+ { 0x41840c, 1, 0x04, 0x20001008 },
+ { 0x418410, 2, 0x04, 0x0fff0fff },
+ { 0x418418, 1, 0x04, 0x07ff07ff },
+ { 0x41841c, 1, 0x04, 0x3feffbff },
+ { 0x418450, 6, 0x04, 0x00000000 },
+ { 0x418468, 1, 0x04, 0x00000001 },
+ { 0x41846c, 2, 0x04, 0x00000000 },
+ {}
+};
+
+static const struct gf100_gr_init
+gm204_grctx_init_gpc_unk_1[] = {
+ { 0x418600, 1, 0x04, 0x0000007f },
+ { 0x418684, 1, 0x04, 0x0000001f },
+ { 0x418700, 1, 0x04, 0x00000002 },
+ { 0x418704, 1, 0x04, 0x00000080 },
+ { 0x418708, 1, 0x04, 0x40000000 },
+ { 0x41870c, 2, 0x04, 0x00000000 },
+ { 0x418728, 1, 0x04, 0x00010000 },
+ {}
+};
+
+const struct gf100_gr_init
+gm204_grctx_init_setup_0[] = {
+ { 0x418800, 1, 0x04, 0x7006863a },
+ { 0x418808, 1, 0x04, 0x00000000 },
+ { 0x418810, 1, 0x04, 0x00000000 },
+ { 0x418828, 1, 0x04, 0x00000044 },
+ { 0x418830, 1, 0x04, 0x10000001 },
+ { 0x4188d8, 1, 0x04, 0x00000008 },
+ { 0x4188e0, 1, 0x04, 0x01000000 },
+ { 0x4188e8, 5, 0x04, 0x00000000 },
+ { 0x4188fc, 1, 0x04, 0x20100058 },
+ {}
+};
+
+const struct gf100_gr_init
+gm204_grctx_init_gpm_0[] = {
+ { 0x418c10, 8, 0x04, 0x00000000 },
+ { 0x418c40, 1, 0x04, 0xffffffff },
+ { 0x418c6c, 1, 0x04, 0x00000001 },
+ { 0x418c80, 1, 0x04, 0x20200000 },
+ {}
+};
+
+const struct gf100_gr_init
+gm204_grctx_init_gpc_unk_2[] = {
+ { 0x418e00, 1, 0x04, 0x90040000 },
+ { 0x418e24, 1, 0x04, 0x00000000 },
+ { 0x418e28, 1, 0x04, 0x00000030 },
+ { 0x418e2c, 1, 0x04, 0x00000100 },
+ { 0x418e30, 3, 0x04, 0x00000000 },
+ { 0x418e40, 22, 0x04, 0x00000000 },
+ { 0x418ea0, 12, 0x04, 0x00000000 },
+ {}
+};
+
+static const struct gf100_gr_pack
+gm204_grctx_pack_gpc[] = {
+ { gm107_grctx_init_gpc_unk_0 },
+ { gm204_grctx_init_prop_0 },
+ { gm204_grctx_init_gpc_unk_1 },
+ { gm204_grctx_init_setup_0 },
+ { gf100_grctx_init_zcull_0 },
+ { gk208_grctx_init_crstr_0 },
+ { gm204_grctx_init_gpm_0 },
+ { gm204_grctx_init_gpc_unk_2 },
+ { gf100_grctx_init_gcc_0 },
+ {}
+};
+
+static const struct gf100_gr_init
+gm204_grctx_init_pe_0[] = {
+ { 0x419848, 1, 0x04, 0x00000000 },
+ { 0x419864, 1, 0x04, 0x00000029 },
+ { 0x419888, 1, 0x04, 0x00000000 },
+ {}
+};
+
+static const struct gf100_gr_init
+gm204_grctx_init_tex_0[] = {
+ { 0x419a00, 1, 0x04, 0x000100f0 },
+ { 0x419a04, 1, 0x04, 0x00000005 },
+ { 0x419a08, 1, 0x04, 0x00000621 },
+ { 0x419a0c, 1, 0x04, 0x00320000 },
+ { 0x419a10, 1, 0x04, 0x00000000 },
+ { 0x419a14, 1, 0x04, 0x00000200 },
+ { 0x419a1c, 1, 0x04, 0x0010c000 },
+ { 0x419a20, 1, 0x04, 0x20008a00 },
+ { 0x419a30, 1, 0x04, 0x00000001 },
+ { 0x419a3c, 1, 0x04, 0x0000181e },
+ { 0x419ac4, 1, 0x04, 0x00000000 },
+ {}
+};
+
+static const struct gf100_gr_init
+gm204_grctx_init_mpc_0[] = {
+ { 0x419c00, 1, 0x04, 0x0000009a },
+ { 0x419c04, 1, 0x04, 0x80000bd6 },
+ { 0x419c08, 1, 0x04, 0x00000002 },
+ { 0x419c20, 1, 0x04, 0x00000000 },
+ { 0x419c24, 1, 0x04, 0x00084210 },
+ { 0x419c28, 1, 0x04, 0x3efbefbe },
+ { 0x419c2c, 1, 0x04, 0x00000000 },
+ { 0x419c34, 1, 0x04, 0x71ff1ff3 },
+ { 0x419c3c, 1, 0x04, 0x00001919 },
+ { 0x419c50, 1, 0x04, 0x00000005 },
+ {}
+};
+
+static const struct gf100_gr_init
+gm204_grctx_init_l1c_0[] = {
+ { 0x419c84, 1, 0x04, 0x0000003e },
+ { 0x419c90, 1, 0x04, 0x0000000a },
+ {}
+};
+
+static const struct gf100_gr_init
+gm204_grctx_init_sm_0[] = {
+ { 0x419e04, 3, 0x04, 0x00000000 },
+ { 0x419e10, 1, 0x04, 0x00001c02 },
+ { 0x419e44, 1, 0x04, 0x00d3eff2 },
+ { 0x419e48, 1, 0x04, 0x00000000 },
+ { 0x419e4c, 1, 0x04, 0x0000007f },
+ { 0x419e50, 1, 0x04, 0x00000000 },
+ { 0x419e58, 6, 0x04, 0x00000000 },
+ { 0x419e74, 10, 0x04, 0x00000000 },
+ { 0x419eac, 1, 0x04, 0x0001cf8b },
+ { 0x419eb0, 1, 0x04, 0x00030300 },
+ { 0x419eb8, 1, 0x04, 0x40000000 },
+ { 0x419ef0, 24, 0x04, 0x00000000 },
+ { 0x419f68, 2, 0x04, 0x00000000 },
+ { 0x419f70, 1, 0x04, 0x00000020 },
+ { 0x419f78, 1, 0x04, 0x00010beb },
+ { 0x419f7c, 1, 0x04, 0x00000000 },
+ {}
+};
+
+const struct gf100_gr_pack
+gm204_grctx_pack_tpc[] = {
+ { gm204_grctx_init_pe_0 },
+ { gm204_grctx_init_tex_0 },
+ { gm204_grctx_init_mpc_0 },
+ { gm204_grctx_init_l1c_0 },
+ { gm204_grctx_init_sm_0 },
+ {}
+};
+
+static const struct gf100_gr_init
+gm204_grctx_init_pes_0[] = {
+ { 0x41be24, 1, 0x04, 0x0000000e },
+ {}
+};
+
+static const struct gf100_gr_init
+gm204_grctx_init_cbm_0[] = {
+ { 0x41bec0, 1, 0x04, 0x00000000 },
+ { 0x41bec4, 1, 0x04, 0x01030000 },
+ { 0x41bee4, 1, 0x04, 0x00000000 },
+ { 0x41bef0, 1, 0x04, 0x000003ff },
+ { 0x41bef4, 2, 0x04, 0x00000000 },
+ {}
+};
+
+const struct gf100_gr_pack
+gm204_grctx_pack_ppc[] = {
+ { gm204_grctx_init_pes_0 },
+ { gm204_grctx_init_cbm_0 },
+ { gm107_grctx_init_wwdx_0 },
+ {}
+};
+
+/*******************************************************************************
+ * PGRAPH context implementation
+ ******************************************************************************/
+
+static void
+gm204_grctx_generate_tpcid(struct gf100_gr_priv *priv)
+{
+ int gpc, tpc, id;
+
+ for (tpc = 0, id = 0; tpc < 4; tpc++) {
+ for (gpc = 0; gpc < priv->gpc_nr; gpc++) {
+ if (tpc < priv->tpc_nr[gpc]) {
+ nv_wr32(priv, TPC_UNIT(gpc, tpc, 0x698), id);
+ nv_wr32(priv, GPC_UNIT(gpc, 0x0c10 + tpc * 4), id);
+ nv_wr32(priv, TPC_UNIT(gpc, tpc, 0x088), id);
+ id++;
+ }
+ }
+ }
+}
+
+static void
+gm204_grctx_generate_rop_active_fbps(struct gf100_gr_priv *priv)
+{
+ const u32 fbp_count = nv_rd32(priv, 0x12006c);
+ nv_mask(priv, 0x408850, 0x0000000f, fbp_count); /* zrop */
+ nv_mask(priv, 0x408958, 0x0000000f, fbp_count); /* crop */
+}
+
+static void
+gm204_grctx_generate_405b60(struct gf100_gr_priv *priv)
+{
+ const u32 dist_nr = DIV_ROUND_UP(priv->tpc_total, 4);
+ u32 dist[TPC_MAX] = {};
+ u32 gpcs[GPC_MAX] = {};
+ u8 tpcnr[GPC_MAX];
+ int tpc, gpc, i;
+
+ memcpy(tpcnr, priv->tpc_nr, sizeof(priv->tpc_nr));
+
+ /* This won't result in the same distribution as the binary driver, where
+ * some of the GPCs have more TPCs than others, but it shall do for the
+ * moment. The code for earlier GPUs has this issue too.
+ */
+ for (gpc = -1, i = 0; i < priv->tpc_total; i++) {
+ do {
+ gpc = (gpc + 1) % priv->gpc_nr;
+ } while (!tpcnr[gpc]);
+ tpc = priv->tpc_nr[gpc] - tpcnr[gpc]--;
+
+ dist[i / 4] |= ((gpc << 4) | tpc) << ((i % 4) * 8);
+ gpcs[gpc] |= i << (tpc * 8);
+ }
+
+ for (i = 0; i < dist_nr; i++)
+ nv_wr32(priv, 0x405b60 + (i * 4), dist[i]);
+ for (i = 0; i < priv->gpc_nr; i++)
+ nv_wr32(priv, 0x405ba0 + (i * 4), gpcs[i]);
+}
+
+void
+gm204_grctx_generate_main(struct gf100_gr_priv *priv, struct gf100_grctx *info)
+{
+ struct gf100_grctx_oclass *oclass = (void *)nv_engine(priv)->cclass;
+ u32 tmp;
+ int i;
+
+ gf100_gr_mmio(priv, oclass->hub);
+ gf100_gr_mmio(priv, oclass->gpc);
+ gf100_gr_mmio(priv, oclass->zcull);
+ gf100_gr_mmio(priv, oclass->tpc);
+ gf100_gr_mmio(priv, oclass->ppc);
+
+ nv_wr32(priv, 0x404154, 0x00000000);
+
+ oclass->bundle(info);
+ oclass->pagepool(info);
+ oclass->attrib(info);
+ oclass->unkn(priv);
+
+ gm204_grctx_generate_tpcid(priv);
+ gf100_grctx_generate_r406028(priv);
+ gk104_grctx_generate_r418bb8(priv);
+
+ for (i = 0; i < 8; i++)
+ nv_wr32(priv, 0x4064d0 + (i * 0x04), 0x00000000);
+ nv_wr32(priv, 0x406500, 0x00000000);
+
+ nv_wr32(priv, 0x405b00, (priv->tpc_total << 8) | priv->gpc_nr);
+
+ gm204_grctx_generate_rop_active_fbps(priv);
+
+ for (tmp = 0, i = 0; i < priv->gpc_nr; i++)
+ tmp |= ((1 << priv->tpc_nr[i]) - 1) << (i * 4);
+ nv_wr32(priv, 0x4041c4, tmp);
+
+ gm204_grctx_generate_405b60(priv);
+
+ gf100_gr_icmd(priv, oclass->icmd);
+ nv_wr32(priv, 0x404154, 0x00000800);
+ gf100_gr_mthd(priv, oclass->mthd);
+
+ nv_mask(priv, 0x418e94, 0xffffffff, 0xc4230000);
+ nv_mask(priv, 0x418e4c, 0xffffffff, 0x70000000);
+}
+
+struct nvkm_oclass *
+gm204_grctx_oclass = &(struct gf100_grctx_oclass) {
+ .base.handle = NV_ENGCTX(GR, 0x24),
+ .base.ofuncs = &(struct nvkm_ofuncs) {
+ .ctor = gf100_gr_context_ctor,
+ .dtor = gf100_gr_context_dtor,
+ .init = _nvkm_gr_context_init,
+ .fini = _nvkm_gr_context_fini,
+ .rd32 = _nvkm_gr_context_rd32,
+ .wr32 = _nvkm_gr_context_wr32,
+ },
+ .main = gm204_grctx_generate_main,
+ .unkn = gk104_grctx_generate_unkn,
+ .hub = gm204_grctx_pack_hub,
+ .gpc = gm204_grctx_pack_gpc,
+ .zcull = gf100_grctx_pack_zcull,
+ .tpc = gm204_grctx_pack_tpc,
+ .ppc = gm204_grctx_pack_ppc,
+ .icmd = gm204_grctx_pack_icmd,
+ .mthd = gm204_grctx_pack_mthd,
+ .bundle = gm107_grctx_generate_bundle,
+ .bundle_size = 0x3000,
+ .bundle_min_gpm_fifo_depth = 0x180,
+ .bundle_token_limit = 0x780,
+ .pagepool = gm107_grctx_generate_pagepool,
+ .pagepool_size = 0x20000,
+ .attrib = gm107_grctx_generate_attrib,
+ .attrib_nr_max = 0x600,
+ .attrib_nr = 0x400,
+ .alpha_nr_max = 0x1800,
+ .alpha_nr = 0x1000,
+}.base;
diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/gr/ctxgm206.c b/drivers/gpu/drm/nouveau/nvkm/engine/gr/ctxgm206.c
new file mode 100644
index 0000000..91ec416
--- /dev/null
+++ b/drivers/gpu/drm/nouveau/nvkm/engine/gr/ctxgm206.c
@@ -0,0 +1,83 @@
+/*
+ * Copyright 2015 Red Hat Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: Ben Skeggs <bskeggs@redhat.com>
+ */
+#include "ctxgf100.h"
+
+static const struct gf100_gr_init
+gm206_grctx_init_gpc_unk_1[] = {
+ { 0x418600, 1, 0x04, 0x0000007f },
+ { 0x418684, 1, 0x04, 0x0000001f },
+ { 0x418700, 1, 0x04, 0x00000002 },
+ { 0x418704, 1, 0x04, 0x00000080 },
+ { 0x418708, 1, 0x04, 0x40000000 },
+ { 0x41870c, 2, 0x04, 0x00000000 },
+ { 0x418728, 1, 0x04, 0x00300020 },
+ {}
+};
+
+static const struct gf100_gr_pack
+gm206_grctx_pack_gpc[] = {
+ { gm107_grctx_init_gpc_unk_0 },
+ { gm204_grctx_init_prop_0 },
+ { gm206_grctx_init_gpc_unk_1 },
+ { gm204_grctx_init_setup_0 },
+ { gf100_grctx_init_zcull_0 },
+ { gk208_grctx_init_crstr_0 },
+ { gm204_grctx_init_gpm_0 },
+ { gm204_grctx_init_gpc_unk_2 },
+ { gf100_grctx_init_gcc_0 },
+ {}
+};
+
+struct nvkm_oclass *
+gm206_grctx_oclass = &(struct gf100_grctx_oclass) {
+ .base.handle = NV_ENGCTX(GR, 0x26),
+ .base.ofuncs = &(struct nvkm_ofuncs) {
+ .ctor = gf100_gr_context_ctor,
+ .dtor = gf100_gr_context_dtor,
+ .init = _nvkm_gr_context_init,
+ .fini = _nvkm_gr_context_fini,
+ .rd32 = _nvkm_gr_context_rd32,
+ .wr32 = _nvkm_gr_context_wr32,
+ },
+ .main = gm204_grctx_generate_main,
+ .unkn = gk104_grctx_generate_unkn,
+ .hub = gm204_grctx_pack_hub,
+ .gpc = gm206_grctx_pack_gpc,
+ .zcull = gf100_grctx_pack_zcull,
+ .tpc = gm204_grctx_pack_tpc,
+ .ppc = gm204_grctx_pack_ppc,
+ .icmd = gm204_grctx_pack_icmd,
+ .mthd = gm204_grctx_pack_mthd,
+ .bundle = gm107_grctx_generate_bundle,
+ .bundle_size = 0x3000,
+ .bundle_min_gpm_fifo_depth = 0x180,
+ .bundle_token_limit = 0x780,
+ .pagepool = gm107_grctx_generate_pagepool,
+ .pagepool_size = 0x20000,
+ .attrib = gm107_grctx_generate_attrib,
+ .attrib_nr_max = 0x600,
+ .attrib_nr = 0x400,
+ .alpha_nr_max = 0x1800,
+ .alpha_nr = 0x1000,
+}.base;
diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/gr/fuc/gpc.fuc b/drivers/gpu/drm/nouveau/nvkm/engine/gr/fuc/gpc.fuc
index eaed159..194afe9 100644
--- a/drivers/gpu/drm/nouveau/nvkm/engine/gr/fuc/gpc.fuc
+++ b/drivers/gpu/drm/nouveau/nvkm/engine/gr/fuc/gpc.fuc
@@ -52,6 +52,12 @@
#endif
#ifdef INCLUDE_CODE
+#define gpc_wr32(addr,reg) /*
+*/ mov b32 $r15 reg /*
+*/ imm32($r14, addr) /*
+*/ or $r14 NV_PGRAPH_GPCX_GPCCS_MMIO_CTRL_BASE_ENABLE /*
+*/ call(nv_wr32)
+
// reports an exception to the host
//
// In: $r15 error code (see os.h)
@@ -64,6 +70,43 @@
pop $r14
ret
+#if CHIPSET >= GM107
+tpc_strand_wait:
+ push $r9
+ trace_set(T_STRTPC)
+ tpc_strand_busy:
+ nv_iord($r9, NV_PGRAPH_GPCX_GPCCS_TPC_STATUS, 0)
+ bra b32 $r9 0x0 ne #tpc_strand_busy
+ trace_clr(T_STRTPC)
+ pop $r9
+ ret
+
+#define tpc_strand_wait() call(tpc_strand_wait)
+#define tpc_strand_enable() /*
+*/ mov $r15 NV_PGRAPH_GPC0_TPCX_STRAND_CMD_ENABLE /*
+*/ gpc_wr32(NV_PGRAPH_GPC0_TPCX_STRAND_CMD, $r15) /*
+*/ tpc_strand_wait()
+#define tpc_strand_disable() /*
+*/ mov $r15 NV_PGRAPH_GPC0_TPCX_STRAND_CMD_DISABLE /*
+*/ gpc_wr32(NV_PGRAPH_GPC0_TPCX_STRAND_CMD, $r15) /*
+*/ tpc_strand_wait()
+#define tpc_strand_seek(p) /*
+*/ mov $r15 NV_PGRAPH_GPC0_TPCX_STRAND_INDEX_ALL /*
+*/ gpc_wr32(NV_PGRAPH_GPC0_TPCX_STRAND_INDEX, $r15) /*
+*/ mov $r15 p /*
+*/ gpc_wr32(NV_PGRAPH_GPC0_TPCX_STRAND_SELECT, $r15) /*
+*/ mov $r15 NV_PGRAPH_GPC0_TPCX_STRAND_CMD_SEEK /*
+*/ tpc_strand_wait()
+#define tpc_strand_info(m) /*
+*/ gpc_wr32(NV_PGRAPH_GPC0_TPCX_STRAND_CMD, $r15) /*
+*/ mov $r15 m /*
+*/ gpc_wr32(NV_PGRAPH_GPC0_TPCX_STRAND_DATA, $r15) /*
+*/ mov $r15 NV_PGRAPH_GPC0_TPCX_STRAND_CMD_GET_INFO /*
+*/ gpc_wr32(NV_PGRAPH_GPC0_TPCX_STRAND_CMD, $r15) /*
+*/ tpc_strand_wait()
+#endif
+
+
// GPC fuc initialisation, executed by triggering ucode start, will
// fall through to main loop after completion.
//
@@ -101,7 +144,7 @@
// enable interrupts
bset $flags ie0
- // figure out which GPC we are, and how many TPCs we have
+ // how many TPCs do we have?
nv_iord($r2, NV_PGRAPH_GPCX_GPCCS_UNITS, 0)
mov $r3 1
and $r2 0x1f
@@ -109,8 +152,12 @@
sub b32 $r3 1
st b32 D[$r0 + #tpc_count] $r2
st b32 D[$r0 + #tpc_mask] $r3
+
+ // determine which GPC we are, and set up the (optional) mmio access offset
nv_iord($r2, NV_PGRAPH_GPCX_GPCCS_MYINDEX, 0)
st b32 D[$r0 + #gpc_id] $r2
+ shl b32 $r2 15
+ nv_iowr(NV_PGRAPH_GPCX_GPCCS_MMIO_BASE, 0, $r2)
#if NV_PGRAPH_GPCX_UNK__SIZE > 0
// figure out which, and how many, UNKs are actually present
@@ -186,8 +233,56 @@
// calculate size of strand context data
mov b32 $r15 $r2
call(strand_ctx_init)
+ add b32 $r2 $r15
add b32 $r3 $r15
+#if CHIPSET >= GM107
+ // calculate size of tpc strand context data
+ mov $r15 NV_PGRAPH_GPC0_TPCX_STRAND_INDEX_ALL
+ gpc_wr32(NV_PGRAPH_GPC0_TPCX_STRAND_INDEX, $r15)
+ tpc_strand_enable();
+ tpc_strand_seek(0);
+ tpc_strand_info(-1);
+
+ ld b32 $r4 D[$r0 + #tpc_count]
+ mov $r5 NV_PGRAPH_GPC0_TPC0
+ ld b32 $r6 D[$r0 + #gpc_id]
+ shl b32 $r6 15
+ add b32 $r5 $r6
+ tpc_strand_init_tpc_loop:
+ add b32 $r14 $r5 NV_TPC_STRAND_CNT
+ call(nv_rd32)
+ mov b32 $r6 $r15
+ clear b32 $r7
+ tpc_strand_init_idx_loop:
+ add b32 $r14 $r5 NV_TPC_STRAND_INDEX
+ mov b32 $r15 $r7
+ call(nv_wr32)
+ add b32 $r14 $r5 NV_TPC_STRAND_SAVE_SWBASE
+ shr b32 $r15 $r2 8
+ call(nv_wr32)
+ add b32 $r14 $r5 NV_TPC_STRAND_LOAD_SWBASE
+ shr b32 $r15 $r2 8
+ call(nv_wr32)
+ add b32 $r14 $r5 NV_TPC_STRAND_WORDS
+ call(nv_rd32)
+ shr b32 $r15 6
+ add b32 $r15 1
+ shl b32 $r15 8
+ add b32 $r2 $r15
+ add b32 $r3 $r15
+ add b32 $r7 1
+ sub b32 $r6 1
+ bra nz #tpc_strand_init_idx_loop
+ add b32 $r5 NV_PGRAPH_GPC0_TPC0__SIZE
+ sub b32 $r4 1
+ bra nz #tpc_strand_init_tpc_loop
+
+ mov $r15 NV_PGRAPH_GPC0_TPCX_STRAND_INDEX_ALL
+ gpc_wr32(NV_PGRAPH_GPC0_TPCX_STRAND_INDEX, $r15)
+ tpc_strand_disable();
+#endif
+
// save context size, and tell HUB we're done
nv_iowr(NV_PGRAPH_GPCX_GPCCS_CC_SCRATCH_VAL(1), 0, $r3)
clear b32 $r2
@@ -306,6 +401,9 @@
ctx_xfer:
// set context base address
nv_iowr(NV_PGRAPH_GPCX_GPCCS_MEM_BASE, 0, $r15)
+#if CHIPSET >= GM107
+ gpc_wr32(NV_PGRAPH_GPC0_TPCX_STRAND_MEM_BASE, $r15)
+#endif
bra not $p1 #ctx_xfer_not_load
call(ctx_redswitch)
ctx_xfer_not_load:
@@ -318,6 +416,14 @@
add b32 $r2 NV_PGRAPH_GPCX_GPCCS_STRAND_CMD_SAVE
nv_iowr(NV_PGRAPH_GPCX_GPCCS_STRAND_CMD, 0x3f, $r2)
+#if CHIPSET >= GM107
+ tpc_strand_enable();
+ tpc_strand_seek(0);
+ xbit $r15 $flags $p1 // SAVE/LOAD
+ add b32 $r15 NV_PGRAPH_GPC0_TPCX_STRAND_CMD_SAVE
+ gpc_wr32(NV_PGRAPH_GPC0_TPCX_STRAND_CMD, $r15)
+#endif
+
// mmio context
xbit $r10 $flags $p1 // direction
or $r10 2 // first
@@ -362,6 +468,9 @@
// wait for strands to finish
call(strand_wait)
+#if CHIPSET >= GM107
+ tpc_strand_wait()
+#endif
// if load, or a save without a load following, do some
// unknown stuff that's done after finishing a block of
@@ -370,6 +479,9 @@
bra not $p2 #ctx_xfer_done
ctx_xfer_post:
call(strand_post)
+#if CHIPSET >= GM107
+ tpc_strand_disable()
+#endif
// mark completion in HUB's barrier
ctx_xfer_done:
diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/gr/fuc/gpcgf100.fuc3.h b/drivers/gpu/drm/nouveau/nvkm/engine/gr/fuc/gpcgf100.fuc3.h
index ea32f56..231f696 100644
--- a/drivers/gpu/drm/nouveau/nvkm/engine/gr/fuc/gpcgf100.fuc3.h
+++ b/drivers/gpu/drm/nouveau/nvkm/engine/gr/fuc/gpcgf100.fuc3.h
@@ -310,7 +310,7 @@
0x03f01200,
0x0002d000,
0x17f104bd,
- 0x10fe04e6,
+ 0x10fe04f8,
0x0007f100,
0x0003f007,
0xbd0000d0,
@@ -329,157 +329,157 @@
0xf0860027,
0x22cf0123,
0x04028000,
- 0x010027f1,
- 0xcf0223f0,
- 0x34bd0022,
- 0xf1082595,
- 0xf0c00007,
- 0x05d00103,
+ 0xf10f24b6,
+ 0xf0c90007,
+ 0x02d00103,
0xf104bd00,
- 0xf0c10007,
- 0x05d00103,
- 0x9804bd00,
- 0x0f98000e,
- 0x5021f501,
- 0x002fbb01,
- 0x98003fbb,
- 0x0f98010e,
- 0x5021f502,
- 0x050e9801,
- 0xbb00effd,
- 0x3ebb002e,
- 0x0235b600,
- 0xd30007f1,
- 0xd00103f0,
- 0x04bd0003,
- 0xb60825b6,
- 0x20b60635,
- 0x0130b601,
- 0xb60824b6,
- 0x2fb90834,
- 0xd321f502,
- 0x003fbb02,
- 0x010007f1,
+ 0xf0010027,
+ 0x22cf0223,
+ 0x9534bd00,
+ 0x07f10825,
+ 0x03f0c000,
+ 0x0005d001,
+ 0x07f104bd,
+ 0x03f0c100,
+ 0x0005d001,
+ 0x0e9804bd,
+ 0x010f9800,
+ 0x015021f5,
+ 0xbb002fbb,
+ 0x0e98003f,
+ 0x020f9801,
+ 0x015021f5,
+ 0xfd050e98,
+ 0x2ebb00ef,
+ 0x003ebb00,
+ 0xf10235b6,
+ 0xf0d30007,
+ 0x03d00103,
+ 0xb604bd00,
+ 0x35b60825,
+ 0x0120b606,
+ 0xb60130b6,
+ 0x34b60824,
+ 0x022fb908,
+ 0x02d321f5,
+ 0xbb002fbb,
+ 0x07f1003f,
+ 0x03f00100,
+ 0x0003d002,
+ 0x24bd04bd,
+ 0xf11f29f0,
+ 0xf0080007,
+ 0x02d00203,
+/* 0x04bb: main */
+ 0xf404bd00,
+ 0x28f40031,
+ 0x1cd7f000,
+ 0xf43921f4,
+ 0xe4b0f401,
+ 0x1e18f404,
+ 0xf00181fe,
+ 0x20bd0627,
+ 0xb60412fd,
+ 0x1efd01e4,
+ 0x0018fe05,
+ 0x05b021f5,
+/* 0x04eb: main_not_ctx_xfer */
+ 0x94d30ef4,
+ 0xf5f010ef,
+ 0x7e21f501,
+ 0xc60ef403,
+/* 0x04f8: ih */
+ 0x88fe80f9,
+ 0xf980f901,
+ 0xf9a0f990,
+ 0xf9d0f9b0,
+ 0xbdf0f9e0,
+ 0x00a7f104,
+ 0x00a3f002,
+ 0xc400aacf,
+ 0x0bf404ab,
+ 0x1cd7f02c,
+ 0x1a00e7f1,
+ 0xcf00e3f0,
+ 0xf7f100ee,
+ 0xf3f01900,
+ 0x00ffcf00,
+ 0xf00421f4,
+ 0x07f101e7,
+ 0x03f01d00,
+ 0x000ed000,
+/* 0x0546: ih_no_fifo */
+ 0x07f104bd,
+ 0x03f00100,
+ 0x000ad000,
+ 0xf0fc04bd,
+ 0xd0fce0fc,
+ 0xa0fcb0fc,
+ 0x80fc90fc,
+ 0xfc0088fe,
+ 0x0032f480,
+/* 0x056a: hub_barrier_done */
+ 0xf7f001f8,
+ 0x040e9801,
+ 0xb904febb,
+ 0xe7f102ff,
+ 0xe3f09418,
+ 0x9d21f440,
+/* 0x0582: ctx_redswitch */
+ 0xf7f000f8,
+ 0x0007f120,
+ 0x0103f085,
+ 0xbd000fd0,
+ 0x08e7f004,
+/* 0x0594: ctx_redswitch_delay */
+ 0xf401e2b6,
+ 0xf5f1fd1b,
+ 0xf5f10800,
+ 0x07f10200,
+ 0x03f08500,
+ 0x000fd001,
+ 0x00f804bd,
+/* 0x05b0: ctx_xfer */
+ 0x810007f1,
0xd00203f0,
- 0x04bd0003,
- 0x29f024bd,
- 0x0007f11f,
- 0x0203f008,
- 0xbd0002d0,
-/* 0x04a9: main */
- 0x0031f404,
- 0xf00028f4,
- 0x21f41cd7,
- 0xf401f439,
- 0xf404e4b0,
- 0x81fe1e18,
- 0x0627f001,
- 0x12fd20bd,
- 0x01e4b604,
- 0xfe051efd,
- 0x21f50018,
- 0x0ef4059e,
-/* 0x04d9: main_not_ctx_xfer */
- 0x10ef94d3,
- 0xf501f5f0,
- 0xf4037e21,
-/* 0x04e6: ih */
- 0x80f9c60e,
- 0xf90188fe,
- 0xf990f980,
- 0xf9b0f9a0,
- 0xf9e0f9d0,
- 0xf104bdf0,
- 0xf00200a7,
- 0xaacf00a3,
- 0x04abc400,
- 0xf02c0bf4,
- 0xe7f11cd7,
- 0xe3f01a00,
- 0x00eecf00,
- 0x1900f7f1,
- 0xcf00f3f0,
- 0x21f400ff,
- 0x01e7f004,
- 0x1d0007f1,
- 0xd00003f0,
- 0x04bd000e,
-/* 0x0534: ih_no_fifo */
- 0x010007f1,
- 0xd00003f0,
- 0x04bd000a,
- 0xe0fcf0fc,
- 0xb0fcd0fc,
- 0x90fca0fc,
- 0x88fe80fc,
- 0xf480fc00,
- 0x01f80032,
-/* 0x0558: hub_barrier_done */
- 0x9801f7f0,
- 0xfebb040e,
- 0x02ffb904,
- 0x9418e7f1,
- 0xf440e3f0,
- 0x00f89d21,
-/* 0x0570: ctx_redswitch */
- 0xf120f7f0,
- 0xf0850007,
- 0x0fd00103,
- 0xf004bd00,
-/* 0x0582: ctx_redswitch_delay */
- 0xe2b608e7,
- 0xfd1bf401,
- 0x0800f5f1,
- 0x0200f5f1,
- 0x850007f1,
- 0xd00103f0,
0x04bd000f,
-/* 0x059e: ctx_xfer */
- 0x07f100f8,
- 0x03f08100,
- 0x000fd002,
- 0x11f404bd,
- 0x7021f507,
-/* 0x05b1: ctx_xfer_not_load */
- 0x6a21f505,
- 0xf124bd02,
- 0xf047fc07,
+ 0xf50711f4,
+/* 0x05c3: ctx_xfer_not_load */
+ 0xf5058221,
+ 0xbd026a21,
+ 0xfc07f124,
+ 0x0203f047,
+ 0xbd0002d0,
+ 0x012cf004,
+ 0xf10320b6,
+ 0xf04afc07,
0x02d00203,
0xf004bd00,
- 0x20b6012c,
- 0xfc07f103,
- 0x0203f04a,
- 0xbd0002d0,
- 0x01acf004,
- 0xf102a5f0,
- 0xf00000b7,
- 0x0c9850b3,
- 0x0fc4b604,
- 0x9800bcbb,
- 0x0d98000c,
- 0x00e7f001,
- 0x016f21f5,
- 0xf001acf0,
- 0xb7f104a5,
- 0xb3f04000,
- 0x040c9850,
- 0xbb0fc4b6,
- 0x0c9800bc,
- 0x020d9801,
- 0xf1060f98,
- 0xf50800e7,
- 0xf5016f21,
- 0xf4025e21,
- 0x12f40601,
-/* 0x0629: ctx_xfer_post */
- 0x7f21f507,
-/* 0x062d: ctx_xfer_done */
- 0x5821f502,
- 0x0000f805,
- 0x00000000,
- 0x00000000,
- 0x00000000,
- 0x00000000,
+ 0xa5f001ac,
+ 0x00b7f102,
+ 0x50b3f000,
+ 0xb6040c98,
+ 0xbcbb0fc4,
+ 0x000c9800,
+ 0xf0010d98,
+ 0x21f500e7,
+ 0xacf0016f,
+ 0x04a5f001,
+ 0x4000b7f1,
+ 0x9850b3f0,
+ 0xc4b6040c,
+ 0x00bcbb0f,
+ 0x98010c98,
+ 0x0f98020d,
+ 0x00e7f106,
+ 0x6f21f508,
+ 0x5e21f501,
+ 0x0601f402,
+/* 0x063b: ctx_xfer_post */
+ 0xf50712f4,
+/* 0x063f: ctx_xfer_done */
+ 0xf5027f21,
+ 0xf8056a21,
0x00000000,
0x00000000,
0x00000000,
diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/gr/fuc/gpcgf117.fuc3.h b/drivers/gpu/drm/nouveau/nvkm/engine/gr/fuc/gpcgf117.fuc3.h
index 9a36d9c..64d07df 100644
--- a/drivers/gpu/drm/nouveau/nvkm/engine/gr/fuc/gpcgf117.fuc3.h
+++ b/drivers/gpu/drm/nouveau/nvkm/engine/gr/fuc/gpcgf117.fuc3.h
@@ -314,7 +314,7 @@
0x03f01200,
0x0002d000,
0x17f104bd,
- 0x10fe0530,
+ 0x10fe0542,
0x0007f100,
0x0003f007,
0xbd0000d0,
@@ -333,188 +333,188 @@
0xf0860027,
0x22cf0123,
0x04028000,
- 0x0c30e7f1,
- 0xbd50e3f0,
- 0xbd34bd24,
-/* 0x0421: init_unk_loop */
- 0x6821f444,
- 0xf400f6b0,
- 0xf7f00f0b,
- 0x04f2bb01,
- 0xb6054ffd,
-/* 0x0436: init_unk_next */
- 0x20b60130,
- 0x04e0b601,
- 0xf40126b0,
-/* 0x0442: init_unk_done */
- 0x0380e21b,
- 0x08048007,
- 0x010027f1,
- 0xcf0223f0,
- 0x34bd0022,
- 0xf1082595,
- 0xf0c00007,
- 0x05d00103,
+ 0xf10f24b6,
+ 0xf0c90007,
+ 0x02d00103,
0xf104bd00,
- 0xf0c10007,
- 0x05d00103,
- 0x9804bd00,
- 0x0f98000e,
- 0x5021f501,
- 0x002fbb01,
- 0x98003fbb,
- 0x0f98010e,
- 0x5021f502,
- 0x050e9801,
- 0xbb00effd,
- 0x3ebb002e,
- 0x020e9800,
- 0xf5030f98,
- 0x98015021,
- 0xeffd070e,
- 0x002ebb00,
- 0xb6003ebb,
- 0x07f10235,
- 0x03f0d300,
- 0x0003d001,
- 0x25b604bd,
- 0x0635b608,
- 0xb60120b6,
- 0x24b60130,
- 0x0834b608,
- 0xf5022fb9,
- 0xbb02d321,
- 0x07f1003f,
- 0x03f00100,
- 0x0003d002,
- 0x24bd04bd,
- 0xf11f29f0,
- 0xf0080007,
- 0x02d00203,
-/* 0x04f3: main */
- 0xf404bd00,
- 0x28f40031,
- 0x24d7f000,
- 0xf43921f4,
- 0xe4b0f401,
- 0x1e18f404,
- 0xf00181fe,
- 0x20bd0627,
- 0xb60412fd,
- 0x1efd01e4,
- 0x0018fe05,
- 0x05e821f5,
-/* 0x0523: main_not_ctx_xfer */
- 0x94d30ef4,
- 0xf5f010ef,
- 0x7e21f501,
- 0xc60ef403,
-/* 0x0530: ih */
- 0x88fe80f9,
- 0xf980f901,
- 0xf9a0f990,
- 0xf9d0f9b0,
- 0xbdf0f9e0,
- 0x00a7f104,
- 0x00a3f002,
- 0xc400aacf,
- 0x0bf404ab,
- 0x24d7f02c,
- 0x1a00e7f1,
- 0xcf00e3f0,
- 0xf7f100ee,
- 0xf3f01900,
- 0x00ffcf00,
- 0xf00421f4,
- 0x07f101e7,
- 0x03f01d00,
- 0x000ed000,
-/* 0x057e: ih_no_fifo */
+ 0xf00c30e7,
+ 0x24bd50e3,
+ 0x44bd34bd,
+/* 0x0430: init_unk_loop */
+ 0xb06821f4,
+ 0x0bf400f6,
+ 0x01f7f00f,
+ 0xfd04f2bb,
+ 0x30b6054f,
+/* 0x0445: init_unk_next */
+ 0x0120b601,
+ 0xb004e0b6,
+ 0x1bf40126,
+/* 0x0451: init_unk_done */
+ 0x070380e2,
+ 0xf1080480,
+ 0xf0010027,
+ 0x22cf0223,
+ 0x9534bd00,
+ 0x07f10825,
+ 0x03f0c000,
+ 0x0005d001,
0x07f104bd,
- 0x03f00100,
- 0x000ad000,
- 0xf0fc04bd,
- 0xd0fce0fc,
- 0xa0fcb0fc,
- 0x80fc90fc,
- 0xfc0088fe,
- 0x0032f480,
-/* 0x05a2: hub_barrier_done */
- 0xf7f001f8,
- 0x040e9801,
- 0xb904febb,
- 0xe7f102ff,
- 0xe3f09418,
- 0x9d21f440,
-/* 0x05ba: ctx_redswitch */
- 0xf7f000f8,
- 0x0007f120,
- 0x0103f085,
- 0xbd000fd0,
- 0x08e7f004,
-/* 0x05cc: ctx_redswitch_delay */
- 0xf401e2b6,
- 0xf5f1fd1b,
- 0xf5f10800,
- 0x07f10200,
- 0x03f08500,
- 0x000fd001,
- 0x00f804bd,
-/* 0x05e8: ctx_xfer */
- 0x810007f1,
+ 0x03f0c100,
+ 0x0005d001,
+ 0x0e9804bd,
+ 0x010f9800,
+ 0x015021f5,
+ 0xbb002fbb,
+ 0x0e98003f,
+ 0x020f9801,
+ 0x015021f5,
+ 0xfd050e98,
+ 0x2ebb00ef,
+ 0x003ebb00,
+ 0x98020e98,
+ 0x21f5030f,
+ 0x0e980150,
+ 0x00effd07,
+ 0xbb002ebb,
+ 0x35b6003e,
+ 0x0007f102,
+ 0x0103f0d3,
+ 0xbd0003d0,
+ 0x0825b604,
+ 0xb60635b6,
+ 0x30b60120,
+ 0x0824b601,
+ 0xb90834b6,
+ 0x21f5022f,
+ 0x2fbb02d3,
+ 0x003fbb00,
+ 0x010007f1,
0xd00203f0,
- 0x04bd000f,
- 0xf50711f4,
-/* 0x05fb: ctx_xfer_not_load */
- 0xf505ba21,
- 0xbd026a21,
- 0xfc07f124,
- 0x0203f047,
+ 0x04bd0003,
+ 0x29f024bd,
+ 0x0007f11f,
+ 0x0203f008,
0xbd0002d0,
- 0x012cf004,
- 0xf10320b6,
- 0xf04afc07,
+/* 0x0505: main */
+ 0x0031f404,
+ 0xf00028f4,
+ 0x21f424d7,
+ 0xf401f439,
+ 0xf404e4b0,
+ 0x81fe1e18,
+ 0x0627f001,
+ 0x12fd20bd,
+ 0x01e4b604,
+ 0xfe051efd,
+ 0x21f50018,
+ 0x0ef405fa,
+/* 0x0535: main_not_ctx_xfer */
+ 0x10ef94d3,
+ 0xf501f5f0,
+ 0xf4037e21,
+/* 0x0542: ih */
+ 0x80f9c60e,
+ 0xf90188fe,
+ 0xf990f980,
+ 0xf9b0f9a0,
+ 0xf9e0f9d0,
+ 0xf104bdf0,
+ 0xf00200a7,
+ 0xaacf00a3,
+ 0x04abc400,
+ 0xf02c0bf4,
+ 0xe7f124d7,
+ 0xe3f01a00,
+ 0x00eecf00,
+ 0x1900f7f1,
+ 0xcf00f3f0,
+ 0x21f400ff,
+ 0x01e7f004,
+ 0x1d0007f1,
+ 0xd00003f0,
+ 0x04bd000e,
+/* 0x0590: ih_no_fifo */
+ 0x010007f1,
+ 0xd00003f0,
+ 0x04bd000a,
+ 0xe0fcf0fc,
+ 0xb0fcd0fc,
+ 0x90fca0fc,
+ 0x88fe80fc,
+ 0xf480fc00,
+ 0x01f80032,
+/* 0x05b4: hub_barrier_done */
+ 0x9801f7f0,
+ 0xfebb040e,
+ 0x02ffb904,
+ 0x9418e7f1,
+ 0xf440e3f0,
+ 0x00f89d21,
+/* 0x05cc: ctx_redswitch */
+ 0xf120f7f0,
+ 0xf0850007,
+ 0x0fd00103,
+ 0xf004bd00,
+/* 0x05de: ctx_redswitch_delay */
+ 0xe2b608e7,
+ 0xfd1bf401,
+ 0x0800f5f1,
+ 0x0200f5f1,
+ 0x850007f1,
+ 0xd00103f0,
+ 0x04bd000f,
+/* 0x05fa: ctx_xfer */
+ 0x07f100f8,
+ 0x03f08100,
+ 0x000fd002,
+ 0x11f404bd,
+ 0xcc21f507,
+/* 0x060d: ctx_xfer_not_load */
+ 0x6a21f505,
+ 0xf124bd02,
+ 0xf047fc07,
0x02d00203,
0xf004bd00,
- 0xa5f001ac,
- 0x00b7f102,
- 0x50b3f000,
- 0xb6040c98,
- 0xbcbb0fc4,
- 0x000c9800,
- 0xf0010d98,
- 0x21f500e7,
- 0xacf0016f,
- 0x00b7f101,
- 0x50b3f040,
- 0xb6040c98,
- 0xbcbb0fc4,
- 0x010c9800,
- 0x98020d98,
- 0xe7f1060f,
- 0x21f50800,
- 0xacf0016f,
- 0x04a5f001,
- 0x3000b7f1,
- 0x9850b3f0,
- 0xc4b6040c,
- 0x00bcbb0f,
- 0x98020c98,
- 0x0f98030d,
- 0x00e7f108,
- 0x6f21f502,
- 0x5e21f501,
- 0x0601f402,
-/* 0x0697: ctx_xfer_post */
- 0xf50712f4,
-/* 0x069b: ctx_xfer_done */
- 0xf5027f21,
- 0xf805a221,
- 0x00000000,
- 0x00000000,
- 0x00000000,
- 0x00000000,
- 0x00000000,
+ 0x20b6012c,
+ 0xfc07f103,
+ 0x0203f04a,
+ 0xbd0002d0,
+ 0x01acf004,
+ 0xf102a5f0,
+ 0xf00000b7,
+ 0x0c9850b3,
+ 0x0fc4b604,
+ 0x9800bcbb,
+ 0x0d98000c,
+ 0x00e7f001,
+ 0x016f21f5,
+ 0xf101acf0,
+ 0xf04000b7,
+ 0x0c9850b3,
+ 0x0fc4b604,
+ 0x9800bcbb,
+ 0x0d98010c,
+ 0x060f9802,
+ 0x0800e7f1,
+ 0x016f21f5,
+ 0xf001acf0,
+ 0xb7f104a5,
+ 0xb3f03000,
+ 0x040c9850,
+ 0xbb0fc4b6,
+ 0x0c9800bc,
+ 0x030d9802,
+ 0xf1080f98,
+ 0xf50200e7,
+ 0xf5016f21,
+ 0xf4025e21,
+ 0x12f40601,
+/* 0x06a9: ctx_xfer_post */
+ 0x7f21f507,
+/* 0x06ad: ctx_xfer_done */
+ 0xb421f502,
+ 0x0000f805,
0x00000000,
0x00000000,
0x00000000,
diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/gr/fuc/gpcgk104.fuc3.h b/drivers/gpu/drm/nouveau/nvkm/engine/gr/fuc/gpcgk104.fuc3.h
index 49020ff..2f59643 100644
--- a/drivers/gpu/drm/nouveau/nvkm/engine/gr/fuc/gpcgk104.fuc3.h
+++ b/drivers/gpu/drm/nouveau/nvkm/engine/gr/fuc/gpcgk104.fuc3.h
@@ -314,7 +314,7 @@
0x03f01200,
0x0002d000,
0x17f104bd,
- 0x10fe0530,
+ 0x10fe0542,
0x0007f100,
0x0003f007,
0xbd0000d0,
@@ -333,188 +333,188 @@
0xf0860027,
0x22cf0123,
0x04028000,
- 0x0c30e7f1,
- 0xbd50e3f0,
- 0xbd34bd24,
-/* 0x0421: init_unk_loop */
- 0x6821f444,
- 0xf400f6b0,
- 0xf7f00f0b,
- 0x04f2bb01,
- 0xb6054ffd,
-/* 0x0436: init_unk_next */
- 0x20b60130,
- 0x04e0b601,
- 0xf40126b0,
-/* 0x0442: init_unk_done */
- 0x0380e21b,
- 0x08048007,
- 0x010027f1,
- 0xcf0223f0,
- 0x34bd0022,
- 0xf1082595,
- 0xf0c00007,
- 0x05d00103,
+ 0xf10f24b6,
+ 0xf0c90007,
+ 0x02d00103,
0xf104bd00,
- 0xf0c10007,
- 0x05d00103,
- 0x9804bd00,
- 0x0f98000e,
- 0x5021f501,
- 0x002fbb01,
- 0x98003fbb,
- 0x0f98010e,
- 0x5021f502,
- 0x050e9801,
- 0xbb00effd,
- 0x3ebb002e,
- 0x020e9800,
- 0xf5030f98,
- 0x98015021,
- 0xeffd070e,
- 0x002ebb00,
- 0xb6003ebb,
- 0x07f10235,
- 0x03f0d300,
- 0x0003d001,
- 0x25b604bd,
- 0x0635b608,
- 0xb60120b6,
- 0x24b60130,
- 0x0834b608,
- 0xf5022fb9,
- 0xbb02d321,
- 0x07f1003f,
- 0x03f00100,
- 0x0003d002,
- 0x24bd04bd,
- 0xf11f29f0,
- 0xf0080007,
- 0x02d00203,
-/* 0x04f3: main */
- 0xf404bd00,
- 0x28f40031,
- 0x24d7f000,
- 0xf43921f4,
- 0xe4b0f401,
- 0x1e18f404,
- 0xf00181fe,
- 0x20bd0627,
- 0xb60412fd,
- 0x1efd01e4,
- 0x0018fe05,
- 0x05e821f5,
-/* 0x0523: main_not_ctx_xfer */
- 0x94d30ef4,
- 0xf5f010ef,
- 0x7e21f501,
- 0xc60ef403,
-/* 0x0530: ih */
- 0x88fe80f9,
- 0xf980f901,
- 0xf9a0f990,
- 0xf9d0f9b0,
- 0xbdf0f9e0,
- 0x00a7f104,
- 0x00a3f002,
- 0xc400aacf,
- 0x0bf404ab,
- 0x24d7f02c,
- 0x1a00e7f1,
- 0xcf00e3f0,
- 0xf7f100ee,
- 0xf3f01900,
- 0x00ffcf00,
- 0xf00421f4,
- 0x07f101e7,
- 0x03f01d00,
- 0x000ed000,
-/* 0x057e: ih_no_fifo */
+ 0xf00c30e7,
+ 0x24bd50e3,
+ 0x44bd34bd,
+/* 0x0430: init_unk_loop */
+ 0xb06821f4,
+ 0x0bf400f6,
+ 0x01f7f00f,
+ 0xfd04f2bb,
+ 0x30b6054f,
+/* 0x0445: init_unk_next */
+ 0x0120b601,
+ 0xb004e0b6,
+ 0x1bf40126,
+/* 0x0451: init_unk_done */
+ 0x070380e2,
+ 0xf1080480,
+ 0xf0010027,
+ 0x22cf0223,
+ 0x9534bd00,
+ 0x07f10825,
+ 0x03f0c000,
+ 0x0005d001,
0x07f104bd,
- 0x03f00100,
- 0x000ad000,
- 0xf0fc04bd,
- 0xd0fce0fc,
- 0xa0fcb0fc,
- 0x80fc90fc,
- 0xfc0088fe,
- 0x0032f480,
-/* 0x05a2: hub_barrier_done */
- 0xf7f001f8,
- 0x040e9801,
- 0xb904febb,
- 0xe7f102ff,
- 0xe3f09418,
- 0x9d21f440,
-/* 0x05ba: ctx_redswitch */
- 0xf7f000f8,
- 0x0007f120,
- 0x0103f085,
- 0xbd000fd0,
- 0x08e7f004,
-/* 0x05cc: ctx_redswitch_delay */
- 0xf401e2b6,
- 0xf5f1fd1b,
- 0xf5f10800,
- 0x07f10200,
- 0x03f08500,
- 0x000fd001,
- 0x00f804bd,
-/* 0x05e8: ctx_xfer */
- 0x810007f1,
+ 0x03f0c100,
+ 0x0005d001,
+ 0x0e9804bd,
+ 0x010f9800,
+ 0x015021f5,
+ 0xbb002fbb,
+ 0x0e98003f,
+ 0x020f9801,
+ 0x015021f5,
+ 0xfd050e98,
+ 0x2ebb00ef,
+ 0x003ebb00,
+ 0x98020e98,
+ 0x21f5030f,
+ 0x0e980150,
+ 0x00effd07,
+ 0xbb002ebb,
+ 0x35b6003e,
+ 0x0007f102,
+ 0x0103f0d3,
+ 0xbd0003d0,
+ 0x0825b604,
+ 0xb60635b6,
+ 0x30b60120,
+ 0x0824b601,
+ 0xb90834b6,
+ 0x21f5022f,
+ 0x2fbb02d3,
+ 0x003fbb00,
+ 0x010007f1,
0xd00203f0,
- 0x04bd000f,
- 0xf50711f4,
-/* 0x05fb: ctx_xfer_not_load */
- 0xf505ba21,
- 0xbd026a21,
- 0xfc07f124,
- 0x0203f047,
+ 0x04bd0003,
+ 0x29f024bd,
+ 0x0007f11f,
+ 0x0203f008,
0xbd0002d0,
- 0x012cf004,
- 0xf10320b6,
- 0xf04afc07,
+/* 0x0505: main */
+ 0x0031f404,
+ 0xf00028f4,
+ 0x21f424d7,
+ 0xf401f439,
+ 0xf404e4b0,
+ 0x81fe1e18,
+ 0x0627f001,
+ 0x12fd20bd,
+ 0x01e4b604,
+ 0xfe051efd,
+ 0x21f50018,
+ 0x0ef405fa,
+/* 0x0535: main_not_ctx_xfer */
+ 0x10ef94d3,
+ 0xf501f5f0,
+ 0xf4037e21,
+/* 0x0542: ih */
+ 0x80f9c60e,
+ 0xf90188fe,
+ 0xf990f980,
+ 0xf9b0f9a0,
+ 0xf9e0f9d0,
+ 0xf104bdf0,
+ 0xf00200a7,
+ 0xaacf00a3,
+ 0x04abc400,
+ 0xf02c0bf4,
+ 0xe7f124d7,
+ 0xe3f01a00,
+ 0x00eecf00,
+ 0x1900f7f1,
+ 0xcf00f3f0,
+ 0x21f400ff,
+ 0x01e7f004,
+ 0x1d0007f1,
+ 0xd00003f0,
+ 0x04bd000e,
+/* 0x0590: ih_no_fifo */
+ 0x010007f1,
+ 0xd00003f0,
+ 0x04bd000a,
+ 0xe0fcf0fc,
+ 0xb0fcd0fc,
+ 0x90fca0fc,
+ 0x88fe80fc,
+ 0xf480fc00,
+ 0x01f80032,
+/* 0x05b4: hub_barrier_done */
+ 0x9801f7f0,
+ 0xfebb040e,
+ 0x02ffb904,
+ 0x9418e7f1,
+ 0xf440e3f0,
+ 0x00f89d21,
+/* 0x05cc: ctx_redswitch */
+ 0xf120f7f0,
+ 0xf0850007,
+ 0x0fd00103,
+ 0xf004bd00,
+/* 0x05de: ctx_redswitch_delay */
+ 0xe2b608e7,
+ 0xfd1bf401,
+ 0x0800f5f1,
+ 0x0200f5f1,
+ 0x850007f1,
+ 0xd00103f0,
+ 0x04bd000f,
+/* 0x05fa: ctx_xfer */
+ 0x07f100f8,
+ 0x03f08100,
+ 0x000fd002,
+ 0x11f404bd,
+ 0xcc21f507,
+/* 0x060d: ctx_xfer_not_load */
+ 0x6a21f505,
+ 0xf124bd02,
+ 0xf047fc07,
0x02d00203,
0xf004bd00,
- 0xa5f001ac,
- 0x00b7f102,
- 0x50b3f000,
- 0xb6040c98,
- 0xbcbb0fc4,
- 0x000c9800,
- 0xf0010d98,
- 0x21f500e7,
- 0xacf0016f,
- 0x00b7f101,
- 0x50b3f040,
- 0xb6040c98,
- 0xbcbb0fc4,
- 0x010c9800,
- 0x98020d98,
- 0xe7f1060f,
- 0x21f50800,
- 0xacf0016f,
- 0x04a5f001,
- 0x3000b7f1,
- 0x9850b3f0,
- 0xc4b6040c,
- 0x00bcbb0f,
- 0x98020c98,
- 0x0f98030d,
- 0x00e7f108,
- 0x6f21f502,
- 0x5e21f501,
- 0x0601f402,
-/* 0x0697: ctx_xfer_post */
- 0xf50712f4,
-/* 0x069b: ctx_xfer_done */
- 0xf5027f21,
- 0xf805a221,
- 0x00000000,
- 0x00000000,
- 0x00000000,
- 0x00000000,
- 0x00000000,
+ 0x20b6012c,
+ 0xfc07f103,
+ 0x0203f04a,
+ 0xbd0002d0,
+ 0x01acf004,
+ 0xf102a5f0,
+ 0xf00000b7,
+ 0x0c9850b3,
+ 0x0fc4b604,
+ 0x9800bcbb,
+ 0x0d98000c,
+ 0x00e7f001,
+ 0x016f21f5,
+ 0xf101acf0,
+ 0xf04000b7,
+ 0x0c9850b3,
+ 0x0fc4b604,
+ 0x9800bcbb,
+ 0x0d98010c,
+ 0x060f9802,
+ 0x0800e7f1,
+ 0x016f21f5,
+ 0xf001acf0,
+ 0xb7f104a5,
+ 0xb3f03000,
+ 0x040c9850,
+ 0xbb0fc4b6,
+ 0x0c9800bc,
+ 0x030d9802,
+ 0xf1080f98,
+ 0xf50200e7,
+ 0xf5016f21,
+ 0xf4025e21,
+ 0x12f40601,
+/* 0x06a9: ctx_xfer_post */
+ 0x7f21f507,
+/* 0x06ad: ctx_xfer_done */
+ 0xb421f502,
+ 0x0000f805,
0x00000000,
0x00000000,
0x00000000,
diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/gr/fuc/gpcgk110.fuc3.h b/drivers/gpu/drm/nouveau/nvkm/engine/gr/fuc/gpcgk110.fuc3.h
index c95b07e..ee8e54d 100644
--- a/drivers/gpu/drm/nouveau/nvkm/engine/gr/fuc/gpcgk110.fuc3.h
+++ b/drivers/gpu/drm/nouveau/nvkm/engine/gr/fuc/gpcgk110.fuc3.h
@@ -314,7 +314,7 @@
0x03f01200,
0x0002d000,
0x17f104bd,
- 0x10fe0530,
+ 0x10fe0542,
0x0007f100,
0x0003f007,
0xbd0000d0,
@@ -333,188 +333,188 @@
0xf0860027,
0x22cf0123,
0x04028000,
- 0x0c30e7f1,
- 0xbd50e3f0,
- 0xbd34bd24,
-/* 0x0421: init_unk_loop */
- 0x6821f444,
- 0xf400f6b0,
- 0xf7f00f0b,
- 0x04f2bb01,
- 0xb6054ffd,
-/* 0x0436: init_unk_next */
- 0x20b60130,
- 0x04e0b601,
- 0xf40226b0,
-/* 0x0442: init_unk_done */
- 0x0380e21b,
- 0x08048007,
- 0x010027f1,
- 0xcf0223f0,
- 0x34bd0022,
- 0xf1082595,
- 0xf0c00007,
- 0x05d00103,
+ 0xf10f24b6,
+ 0xf0c90007,
+ 0x02d00103,
0xf104bd00,
- 0xf0c10007,
- 0x05d00103,
- 0x9804bd00,
- 0x0f98000e,
- 0x5021f501,
- 0x002fbb01,
- 0x98003fbb,
- 0x0f98010e,
- 0x5021f502,
- 0x050e9801,
- 0xbb00effd,
- 0x3ebb002e,
- 0x020e9800,
- 0xf5030f98,
- 0x98015021,
- 0xeffd070e,
- 0x002ebb00,
- 0xb6003ebb,
- 0x07f10235,
- 0x03f0d300,
- 0x0003d001,
- 0x25b604bd,
- 0x0635b608,
- 0xb60120b6,
- 0x24b60130,
- 0x0834b608,
- 0xf5022fb9,
- 0xbb02d321,
- 0x07f1003f,
- 0x03f00100,
- 0x0003d002,
- 0x24bd04bd,
- 0xf11f29f0,
- 0xf0300007,
- 0x02d00203,
-/* 0x04f3: main */
- 0xf404bd00,
- 0x28f40031,
- 0x24d7f000,
- 0xf43921f4,
- 0xe4b0f401,
- 0x1e18f404,
- 0xf00181fe,
- 0x20bd0627,
- 0xb60412fd,
- 0x1efd01e4,
- 0x0018fe05,
- 0x05e821f5,
-/* 0x0523: main_not_ctx_xfer */
- 0x94d30ef4,
- 0xf5f010ef,
- 0x7e21f501,
- 0xc60ef403,
-/* 0x0530: ih */
- 0x88fe80f9,
- 0xf980f901,
- 0xf9a0f990,
- 0xf9d0f9b0,
- 0xbdf0f9e0,
- 0x00a7f104,
- 0x00a3f002,
- 0xc400aacf,
- 0x0bf404ab,
- 0x24d7f02c,
- 0x1a00e7f1,
- 0xcf00e3f0,
- 0xf7f100ee,
- 0xf3f01900,
- 0x00ffcf00,
- 0xf00421f4,
- 0x07f101e7,
- 0x03f01d00,
- 0x000ed000,
-/* 0x057e: ih_no_fifo */
+ 0xf00c30e7,
+ 0x24bd50e3,
+ 0x44bd34bd,
+/* 0x0430: init_unk_loop */
+ 0xb06821f4,
+ 0x0bf400f6,
+ 0x01f7f00f,
+ 0xfd04f2bb,
+ 0x30b6054f,
+/* 0x0445: init_unk_next */
+ 0x0120b601,
+ 0xb004e0b6,
+ 0x1bf40226,
+/* 0x0451: init_unk_done */
+ 0x070380e2,
+ 0xf1080480,
+ 0xf0010027,
+ 0x22cf0223,
+ 0x9534bd00,
+ 0x07f10825,
+ 0x03f0c000,
+ 0x0005d001,
0x07f104bd,
- 0x03f00100,
- 0x000ad000,
- 0xf0fc04bd,
- 0xd0fce0fc,
- 0xa0fcb0fc,
- 0x80fc90fc,
- 0xfc0088fe,
- 0x0032f480,
-/* 0x05a2: hub_barrier_done */
- 0xf7f001f8,
- 0x040e9801,
- 0xb904febb,
- 0xe7f102ff,
- 0xe3f09418,
- 0x9d21f440,
-/* 0x05ba: ctx_redswitch */
- 0xf7f000f8,
- 0x0007f120,
- 0x0103f085,
- 0xbd000fd0,
- 0x08e7f004,
-/* 0x05cc: ctx_redswitch_delay */
- 0xf401e2b6,
- 0xf5f1fd1b,
- 0xf5f10800,
- 0x07f10200,
- 0x03f08500,
- 0x000fd001,
- 0x00f804bd,
-/* 0x05e8: ctx_xfer */
- 0x810007f1,
+ 0x03f0c100,
+ 0x0005d001,
+ 0x0e9804bd,
+ 0x010f9800,
+ 0x015021f5,
+ 0xbb002fbb,
+ 0x0e98003f,
+ 0x020f9801,
+ 0x015021f5,
+ 0xfd050e98,
+ 0x2ebb00ef,
+ 0x003ebb00,
+ 0x98020e98,
+ 0x21f5030f,
+ 0x0e980150,
+ 0x00effd07,
+ 0xbb002ebb,
+ 0x35b6003e,
+ 0x0007f102,
+ 0x0103f0d3,
+ 0xbd0003d0,
+ 0x0825b604,
+ 0xb60635b6,
+ 0x30b60120,
+ 0x0824b601,
+ 0xb90834b6,
+ 0x21f5022f,
+ 0x2fbb02d3,
+ 0x003fbb00,
+ 0x010007f1,
0xd00203f0,
- 0x04bd000f,
- 0xf50711f4,
-/* 0x05fb: ctx_xfer_not_load */
- 0xf505ba21,
- 0xbd026a21,
- 0xfc07f124,
- 0x0203f047,
+ 0x04bd0003,
+ 0x29f024bd,
+ 0x0007f11f,
+ 0x0203f030,
0xbd0002d0,
- 0x012cf004,
- 0xf10320b6,
- 0xf04afc07,
+/* 0x0505: main */
+ 0x0031f404,
+ 0xf00028f4,
+ 0x21f424d7,
+ 0xf401f439,
+ 0xf404e4b0,
+ 0x81fe1e18,
+ 0x0627f001,
+ 0x12fd20bd,
+ 0x01e4b604,
+ 0xfe051efd,
+ 0x21f50018,
+ 0x0ef405fa,
+/* 0x0535: main_not_ctx_xfer */
+ 0x10ef94d3,
+ 0xf501f5f0,
+ 0xf4037e21,
+/* 0x0542: ih */
+ 0x80f9c60e,
+ 0xf90188fe,
+ 0xf990f980,
+ 0xf9b0f9a0,
+ 0xf9e0f9d0,
+ 0xf104bdf0,
+ 0xf00200a7,
+ 0xaacf00a3,
+ 0x04abc400,
+ 0xf02c0bf4,
+ 0xe7f124d7,
+ 0xe3f01a00,
+ 0x00eecf00,
+ 0x1900f7f1,
+ 0xcf00f3f0,
+ 0x21f400ff,
+ 0x01e7f004,
+ 0x1d0007f1,
+ 0xd00003f0,
+ 0x04bd000e,
+/* 0x0590: ih_no_fifo */
+ 0x010007f1,
+ 0xd00003f0,
+ 0x04bd000a,
+ 0xe0fcf0fc,
+ 0xb0fcd0fc,
+ 0x90fca0fc,
+ 0x88fe80fc,
+ 0xf480fc00,
+ 0x01f80032,
+/* 0x05b4: hub_barrier_done */
+ 0x9801f7f0,
+ 0xfebb040e,
+ 0x02ffb904,
+ 0x9418e7f1,
+ 0xf440e3f0,
+ 0x00f89d21,
+/* 0x05cc: ctx_redswitch */
+ 0xf120f7f0,
+ 0xf0850007,
+ 0x0fd00103,
+ 0xf004bd00,
+/* 0x05de: ctx_redswitch_delay */
+ 0xe2b608e7,
+ 0xfd1bf401,
+ 0x0800f5f1,
+ 0x0200f5f1,
+ 0x850007f1,
+ 0xd00103f0,
+ 0x04bd000f,
+/* 0x05fa: ctx_xfer */
+ 0x07f100f8,
+ 0x03f08100,
+ 0x000fd002,
+ 0x11f404bd,
+ 0xcc21f507,
+/* 0x060d: ctx_xfer_not_load */
+ 0x6a21f505,
+ 0xf124bd02,
+ 0xf047fc07,
0x02d00203,
0xf004bd00,
- 0xa5f001ac,
- 0x00b7f102,
- 0x50b3f000,
- 0xb6040c98,
- 0xbcbb0fc4,
- 0x000c9800,
- 0xf0010d98,
- 0x21f500e7,
- 0xacf0016f,
- 0x00b7f101,
- 0x50b3f040,
- 0xb6040c98,
- 0xbcbb0fc4,
- 0x010c9800,
- 0x98020d98,
- 0xe7f1060f,
- 0x21f50800,
- 0xacf0016f,
- 0x04a5f001,
- 0x3000b7f1,
- 0x9850b3f0,
- 0xc4b6040c,
- 0x00bcbb0f,
- 0x98020c98,
- 0x0f98030d,
- 0x00e7f108,
- 0x6f21f502,
- 0x5e21f501,
- 0x0601f402,
-/* 0x0697: ctx_xfer_post */
- 0xf50712f4,
-/* 0x069b: ctx_xfer_done */
- 0xf5027f21,
- 0xf805a221,
- 0x00000000,
- 0x00000000,
- 0x00000000,
- 0x00000000,
- 0x00000000,
+ 0x20b6012c,
+ 0xfc07f103,
+ 0x0203f04a,
+ 0xbd0002d0,
+ 0x01acf004,
+ 0xf102a5f0,
+ 0xf00000b7,
+ 0x0c9850b3,
+ 0x0fc4b604,
+ 0x9800bcbb,
+ 0x0d98000c,
+ 0x00e7f001,
+ 0x016f21f5,
+ 0xf101acf0,
+ 0xf04000b7,
+ 0x0c9850b3,
+ 0x0fc4b604,
+ 0x9800bcbb,
+ 0x0d98010c,
+ 0x060f9802,
+ 0x0800e7f1,
+ 0x016f21f5,
+ 0xf001acf0,
+ 0xb7f104a5,
+ 0xb3f03000,
+ 0x040c9850,
+ 0xbb0fc4b6,
+ 0x0c9800bc,
+ 0x030d9802,
+ 0xf1080f98,
+ 0xf50200e7,
+ 0xf5016f21,
+ 0xf4025e21,
+ 0x12f40601,
+/* 0x06a9: ctx_xfer_post */
+ 0x7f21f507,
+/* 0x06ad: ctx_xfer_done */
+ 0xb421f502,
+ 0x0000f805,
0x00000000,
0x00000000,
0x00000000,
diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/gr/fuc/gpcgk208.fuc5.h b/drivers/gpu/drm/nouveau/nvkm/engine/gr/fuc/gpcgk208.fuc5.h
index 7e1c28e..fbcc342 100644
--- a/drivers/gpu/drm/nouveau/nvkm/engine/gr/fuc/gpcgk208.fuc5.h
+++ b/drivers/gpu/drm/nouveau/nvkm/engine/gr/fuc/gpcgk208.fuc5.h
@@ -276,7 +276,7 @@
0x02020014,
0xf6120040,
0x04bd0002,
- 0xfe047241,
+ 0xfe048141,
0x00400010,
0x0000f607,
0x040204bd,
@@ -291,20 +291,23 @@
0x820603b5,
0xcf018600,
0x02b50022,
+ 0x0f24b604,
+ 0x01c90080,
+ 0xbd0002f6,
0x0c308e04,
0xbd24bd50,
-/* 0x0377: init_unk_loop */
+/* 0x0383: init_unk_loop */
0x7e44bd34,
0xb0000065,
0x0bf400f6,
0xbb010f0e,
0x4ffd04f2,
0x0130b605,
-/* 0x038c: init_unk_next */
+/* 0x0398: init_unk_next */
0xb60120b6,
0x26b004e0,
0xe21bf401,
-/* 0x0398: init_unk_done */
+/* 0x03a4: init_unk_done */
0xb50703b5,
0x00820804,
0x22cf0201,
@@ -338,121 +341,118 @@
0xb60824b6,
0x2fb20834,
0x0002687e,
- 0x80003fbb,
- 0xf6020100,
- 0x04bd0003,
- 0x29f024bd,
- 0x3000801f,
- 0x0002f602,
-/* 0x0436: main */
- 0x31f404bd,
- 0x0028f400,
- 0x377e240d,
- 0x01f40000,
- 0x04e4b0f4,
- 0xfe1d18f4,
- 0x06020181,
- 0x12fd20bd,
- 0x01e4b604,
- 0xfe051efd,
- 0x097e0018,
- 0x0ef40005,
-/* 0x0465: main_not_ctx_xfer */
- 0x10ef94d4,
- 0x7e01f5f0,
- 0xf40002f8,
-/* 0x0472: ih */
- 0x80f9c70e,
- 0xf90188fe,
- 0xf990f980,
- 0xf9b0f9a0,
- 0xf9e0f9d0,
- 0x4a04bdf0,
- 0xaacf0200,
- 0x04abc400,
- 0x0d1f0bf4,
- 0x1a004e24,
- 0x4f00eecf,
- 0xffcf1900,
- 0x00047e00,
- 0x40010e00,
- 0x0ef61d00,
-/* 0x04af: ih_no_fifo */
- 0x4004bd00,
- 0x0af60100,
- 0xfc04bd00,
- 0xfce0fcf0,
- 0xfcb0fcd0,
- 0xfc90fca0,
- 0x0088fe80,
- 0x32f480fc,
-/* 0x04cf: hub_barrier_done */
- 0x0f01f800,
- 0x040e9801,
- 0xb204febb,
- 0x94188eff,
- 0x008f7e40,
-/* 0x04e3: ctx_redswitch */
- 0x0f00f800,
- 0x85008020,
+ 0xbb002fbb,
+ 0x0080003f,
+ 0x03f60201,
+ 0xbd04bd00,
+ 0x1f29f024,
+ 0x02300080,
+ 0xbd0002f6,
+/* 0x0445: main */
+ 0x0031f404,
+ 0x0d0028f4,
+ 0x00377e24,
+ 0xf401f400,
+ 0xf404e4b0,
+ 0x81fe1d18,
+ 0xbd060201,
+ 0x0412fd20,
+ 0xfd01e4b6,
+ 0x18fe051e,
+ 0x05187e00,
+ 0xd40ef400,
+/* 0x0474: main_not_ctx_xfer */
+ 0xf010ef94,
+ 0xf87e01f5,
+ 0x0ef40002,
+/* 0x0481: ih */
+ 0xfe80f9c7,
+ 0x80f90188,
+ 0xa0f990f9,
+ 0xd0f9b0f9,
+ 0xf0f9e0f9,
+ 0x004a04bd,
+ 0x00aacf02,
+ 0xf404abc4,
+ 0x240d1f0b,
+ 0xcf1a004e,
+ 0x004f00ee,
+ 0x00ffcf19,
+ 0x0000047e,
+ 0x0040010e,
+ 0x000ef61d,
+/* 0x04be: ih_no_fifo */
+ 0x004004bd,
+ 0x000af601,
+ 0xf0fc04bd,
+ 0xd0fce0fc,
+ 0xa0fcb0fc,
+ 0x80fc90fc,
+ 0xfc0088fe,
+ 0x0032f480,
+/* 0x04de: hub_barrier_done */
+ 0x010f01f8,
+ 0xbb040e98,
+ 0xffb204fe,
+ 0x4094188e,
+ 0x00008f7e,
+/* 0x04f2: ctx_redswitch */
+ 0x200f00f8,
+ 0x01850080,
+ 0xbd000ff6,
+/* 0x04ff: ctx_redswitch_delay */
+ 0xb6080e04,
+ 0x1bf401e2,
+ 0x00f5f1fd,
+ 0x00f5f108,
+ 0x85008002,
0x000ff601,
- 0x080e04bd,
-/* 0x04f0: ctx_redswitch_delay */
- 0xf401e2b6,
- 0xf5f1fd1b,
- 0xf5f10800,
- 0x00800200,
- 0x0ff60185,
- 0xf804bd00,
-/* 0x0509: ctx_xfer */
- 0x81008000,
- 0x000ff602,
- 0x11f404bd,
- 0x04e37e07,
-/* 0x0519: ctx_xfer_not_load */
- 0x02167e00,
- 0x8024bd00,
- 0xf60247fc,
- 0x04bd0002,
- 0xb6012cf0,
- 0xfc800320,
- 0x02f6024a,
+ 0x00f804bd,
+/* 0x0518: ctx_xfer */
+ 0x02810080,
+ 0xbd000ff6,
+ 0x0711f404,
+ 0x0004f27e,
+/* 0x0528: ctx_xfer_not_load */
+ 0x0002167e,
+ 0xfc8024bd,
+ 0x02f60247,
0xf004bd00,
- 0xa5f001ac,
- 0x00008b02,
- 0x040c9850,
- 0xbb0fc4b6,
- 0x0c9800bc,
- 0x010d9800,
- 0x3d7e000e,
- 0xacf00001,
- 0x40008b01,
- 0x040c9850,
- 0xbb0fc4b6,
- 0x0c9800bc,
- 0x020d9801,
- 0x4e060f98,
- 0x3d7e0800,
- 0xacf00001,
- 0x04a5f001,
- 0x5030008b,
+ 0x20b6012c,
+ 0x4afc8003,
+ 0x0002f602,
+ 0xacf004bd,
+ 0x02a5f001,
+ 0x5000008b,
0xb6040c98,
0xbcbb0fc4,
- 0x020c9800,
- 0x98030d98,
- 0x004e080f,
- 0x013d7e02,
- 0x020a7e00,
- 0x0601f400,
-/* 0x05a3: ctx_xfer_post */
- 0x7e0712f4,
-/* 0x05a7: ctx_xfer_done */
- 0x7e000227,
- 0xf80004cf,
- 0x00000000,
- 0x00000000,
- 0x00000000,
- 0x00000000,
+ 0x000c9800,
+ 0x0e010d98,
+ 0x013d7e00,
+ 0x01acf000,
+ 0x5040008b,
+ 0xb6040c98,
+ 0xbcbb0fc4,
+ 0x010c9800,
+ 0x98020d98,
+ 0x004e060f,
+ 0x013d7e08,
+ 0x01acf000,
+ 0x8b04a5f0,
+ 0x98503000,
+ 0xc4b6040c,
+ 0x00bcbb0f,
+ 0x98020c98,
+ 0x0f98030d,
+ 0x02004e08,
+ 0x00013d7e,
+ 0x00020a7e,
+ 0xf40601f4,
+/* 0x05b2: ctx_xfer_post */
+ 0x277e0712,
+/* 0x05b6: ctx_xfer_done */
+ 0xde7e0002,
+ 0x00f80004,
0x00000000,
0x00000000,
0x00000000,
diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/gr/fuc/gpcgm107.fuc5 b/drivers/gpu/drm/nouveau/nvkm/engine/gr/fuc/gpcgm107.fuc5
index e730603..47802c7 100644
--- a/drivers/gpu/drm/nouveau/nvkm/engine/gr/fuc/gpcgm107.fuc5
+++ b/drivers/gpu/drm/nouveau/nvkm/engine/gr/fuc/gpcgm107.fuc5
@@ -24,7 +24,7 @@
#define NV_PGRAPH_GPCX_UNK__SIZE 0x00000002
-#define CHIPSET GK208
+#define CHIPSET GM107
#include "macros.fuc"
.section #gm107_grgpc_data
diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/gr/fuc/gpcgm107.fuc5.h b/drivers/gpu/drm/nouveau/nvkm/engine/gr/fuc/gpcgm107.fuc5.h
index 6d53b67..51f5c3c 100644
--- a/drivers/gpu/drm/nouveau/nvkm/engine/gr/fuc/gpcgm107.fuc5.h
+++ b/drivers/gpu/drm/nouveau/nvkm/engine/gr/fuc/gpcgm107.fuc5.h
@@ -41,7 +41,7 @@
};
uint32_t gm107_grgpc_code[] = {
- 0x03140ef5,
+ 0x03410ef5,
/* 0x0004: queue_put */
0x9800d898,
0x86f001d9,
@@ -268,187 +268,319 @@
0x409c1c8e,
0x00008f7e,
0x00f8e0fc,
-/* 0x0314: init */
- 0x004104bd,
- 0x0011cf42,
- 0x010911e7,
- 0xfe0814b6,
- 0x02020014,
- 0xf6120040,
- 0x04bd0002,
- 0xfe047241,
- 0x00400010,
- 0x0000f607,
- 0x040204bd,
- 0xf6040040,
- 0x04bd0002,
- 0x821031f4,
- 0xcf018200,
- 0x01030022,
- 0xbb1f24f0,
- 0x32b60432,
- 0x0502b501,
- 0x820603b5,
- 0xcf018600,
- 0x02b50022,
- 0x0c308e04,
- 0xbd24bd50,
-/* 0x0377: init_unk_loop */
- 0x7e44bd34,
- 0xb0000065,
- 0x0bf400f6,
- 0xbb010f0e,
- 0x4ffd04f2,
- 0x0130b605,
-/* 0x038c: init_unk_next */
- 0xb60120b6,
- 0x26b004e0,
- 0xe21bf402,
-/* 0x0398: init_unk_done */
- 0xb50703b5,
- 0x00820804,
- 0x22cf0201,
- 0x9534bd00,
- 0x00800825,
- 0x05f601c0,
- 0x8004bd00,
- 0xf601c100,
+/* 0x0314: tpc_strand_wait */
+ 0x94bd90f9,
+ 0x800a99f0,
+ 0xf6023700,
+ 0x04bd0009,
+/* 0x0324: tpc_strand_busy */
+ 0x033f0089,
+ 0xb30099cf,
+ 0xbdf90094,
+ 0x0a99f094,
+ 0x02170080,
+ 0xbd0009f6,
+ 0xf890fc04,
+/* 0x0341: init */
+ 0x4104bd00,
+ 0x11cf4200,
+ 0x0911e700,
+ 0x0814b601,
+ 0x020014fe,
+ 0x12004002,
+ 0xbd0002f6,
+ 0x05b04104,
+ 0x400010fe,
+ 0x00f60700,
+ 0x0204bd00,
+ 0x04004004,
+ 0xbd0002f6,
+ 0x1031f404,
+ 0x01820082,
+ 0x030022cf,
+ 0x1f24f001,
+ 0xb60432bb,
+ 0x02b50132,
+ 0x0603b505,
+ 0x01860082,
+ 0xb50022cf,
+ 0x24b60402,
+ 0xc900800f,
+ 0x0002f601,
+ 0x308e04bd,
+ 0x24bd500c,
+ 0x44bd34bd,
+/* 0x03b0: init_unk_loop */
+ 0x0000657e,
+ 0xf400f6b0,
+ 0x010f0e0b,
+ 0xfd04f2bb,
+ 0x30b6054f,
+/* 0x03c5: init_unk_next */
+ 0x0120b601,
+ 0xb004e0b6,
+ 0x1bf40226,
+/* 0x03d1: init_unk_done */
+ 0x0703b5e2,
+ 0x820804b5,
+ 0xcf020100,
+ 0x34bd0022,
+ 0x80082595,
+ 0xf601c000,
0x04bd0005,
- 0x98000e98,
- 0x207e010f,
- 0x2fbb0001,
+ 0x01c10080,
+ 0xbd0005f6,
+ 0x000e9804,
+ 0x7e010f98,
+ 0xbb000120,
+ 0x3fbb002f,
+ 0x010e9800,
+ 0x7e020f98,
+ 0x98000120,
+ 0xeffd050e,
+ 0x002ebb00,
+ 0x98003ebb,
+ 0x0f98020e,
+ 0x01207e03,
+ 0x070e9800,
+ 0xbb00effd,
+ 0x3ebb002e,
+ 0x0235b600,
+ 0x01d30080,
+ 0xbd0003f6,
+ 0x0825b604,
+ 0xb60635b6,
+ 0x30b60120,
+ 0x0824b601,
+ 0xb20834b6,
+ 0x02687e2f,
+ 0x002fbb00,
+ 0x0f003fbb,
+ 0x8effb23f,
+ 0xf0501d60,
+ 0x8f7e01e5,
+ 0x0c0f0000,
+ 0xa88effb2,
+ 0xe5f0501d,
+ 0x008f7e01,
+ 0x03147e00,
+ 0xb23f0f00,
+ 0x1d608eff,
+ 0x01e5f050,
+ 0x00008f7e,
+ 0xffb2000f,
+ 0x501d9c8e,
+ 0x7e01e5f0,
+ 0x0f00008f,
+ 0x03147e01,
+ 0x8effb200,
+ 0xf0501da8,
+ 0x8f7e01e5,
+ 0xff0f0000,
+ 0x988effb2,
+ 0xe5f0501d,
+ 0x008f7e01,
+ 0xb2020f00,
+ 0x1da88eff,
+ 0x01e5f050,
+ 0x00008f7e,
+ 0x0003147e,
+ 0x85050498,
+ 0x98504000,
+ 0x64b60406,
+ 0x0056bb0f,
+/* 0x04e0: tpc_strand_init_tpc_loop */
+ 0x05705eb8,
+ 0x00657e00,
+ 0xbdf6b200,
+/* 0x04ed: tpc_strand_init_idx_loop */
+ 0x605eb874,
+ 0x7fb20005,
+ 0x00008f7e,
+ 0x05885eb8,
+ 0x082f9500,
+ 0x00008f7e,
+ 0x058c5eb8,
+ 0x082f9500,
+ 0x00008f7e,
+ 0x05905eb8,
+ 0x00657e00,
+ 0x06f5b600,
+ 0xb601f0b6,
+ 0x2fbb08f4,
0x003fbb00,
- 0x98010e98,
- 0x207e020f,
- 0x0e980001,
- 0x00effd05,
- 0xbb002ebb,
- 0x0e98003e,
- 0x030f9802,
- 0x0001207e,
- 0xfd070e98,
- 0x2ebb00ef,
- 0x003ebb00,
- 0x800235b6,
- 0xf601d300,
- 0x04bd0003,
- 0xb60825b6,
- 0x20b60635,
- 0x0130b601,
- 0xb60824b6,
- 0x2fb20834,
- 0x0002687e,
- 0x80003fbb,
- 0xf6020100,
- 0x04bd0003,
- 0x29f024bd,
- 0x3000801f,
- 0x0002f602,
-/* 0x0436: main */
- 0x31f404bd,
- 0x0028f400,
- 0x377e240d,
- 0x01f40000,
- 0x04e4b0f4,
- 0xfe1d18f4,
- 0x06020181,
- 0x12fd20bd,
- 0x01e4b604,
- 0xfe051efd,
- 0x097e0018,
- 0x0ef40005,
-/* 0x0465: main_not_ctx_xfer */
- 0x10ef94d4,
- 0x7e01f5f0,
- 0xf40002f8,
-/* 0x0472: ih */
- 0x80f9c70e,
- 0xf90188fe,
- 0xf990f980,
- 0xf9b0f9a0,
- 0xf9e0f9d0,
- 0x4a04bdf0,
- 0xaacf0200,
- 0x04abc400,
- 0x0d1f0bf4,
- 0x1a004e24,
- 0x4f00eecf,
- 0xffcf1900,
- 0x00047e00,
- 0x40010e00,
- 0x0ef61d00,
-/* 0x04af: ih_no_fifo */
- 0x4004bd00,
- 0x0af60100,
- 0xfc04bd00,
- 0xfce0fcf0,
- 0xfcb0fcd0,
- 0xfc90fca0,
- 0x0088fe80,
- 0x32f480fc,
-/* 0x04cf: hub_barrier_done */
- 0x0f01f800,
- 0x040e9801,
- 0xb204febb,
- 0x94188eff,
- 0x008f7e40,
-/* 0x04e3: ctx_redswitch */
- 0x0f00f800,
- 0x85008020,
- 0x000ff601,
- 0x080e04bd,
-/* 0x04f0: ctx_redswitch_delay */
- 0xf401e2b6,
- 0xf5f1fd1b,
- 0xf5f10800,
- 0x00800200,
- 0x0ff60185,
- 0xf804bd00,
-/* 0x0509: ctx_xfer */
- 0x81008000,
- 0x000ff602,
- 0x11f404bd,
- 0x04e37e07,
-/* 0x0519: ctx_xfer_not_load */
- 0x02167e00,
- 0x8024bd00,
- 0xf60247fc,
+ 0xb60170b6,
+ 0x1bf40162,
+ 0x0050b7bf,
+ 0x0142b608,
+ 0x0fa81bf4,
+ 0x8effb23f,
+ 0xf0501d60,
+ 0x8f7e01e5,
+ 0x0d0f0000,
+ 0xa88effb2,
+ 0xe5f0501d,
+ 0x008f7e01,
+ 0x03147e00,
+ 0x01008000,
+ 0x0003f602,
+ 0x24bd04bd,
+ 0x801f29f0,
+ 0xf6023000,
0x04bd0002,
- 0xb6012cf0,
- 0xfc800320,
- 0x02f6024a,
+/* 0x0574: main */
+ 0xf40031f4,
+ 0x240d0028,
+ 0x0000377e,
+ 0xb0f401f4,
+ 0x18f404e4,
+ 0x0181fe1d,
+ 0x20bd0602,
+ 0xb60412fd,
+ 0x1efd01e4,
+ 0x0018fe05,
+ 0x0006477e,
+/* 0x05a3: main_not_ctx_xfer */
+ 0x94d40ef4,
+ 0xf5f010ef,
+ 0x02f87e01,
+ 0xc70ef400,
+/* 0x05b0: ih */
+ 0x88fe80f9,
+ 0xf980f901,
+ 0xf9a0f990,
+ 0xf9d0f9b0,
+ 0xbdf0f9e0,
+ 0x02004a04,
+ 0xc400aacf,
+ 0x0bf404ab,
+ 0x4e240d1f,
+ 0xeecf1a00,
+ 0x19004f00,
+ 0x7e00ffcf,
+ 0x0e000004,
+ 0x1d004001,
+ 0xbd000ef6,
+/* 0x05ed: ih_no_fifo */
+ 0x01004004,
+ 0xbd000af6,
+ 0xfcf0fc04,
+ 0xfcd0fce0,
+ 0xfca0fcb0,
+ 0xfe80fc90,
+ 0x80fc0088,
+ 0xf80032f4,
+/* 0x060d: hub_barrier_done */
+ 0x98010f01,
+ 0xfebb040e,
+ 0x8effb204,
+ 0x7e409418,
+ 0xf800008f,
+/* 0x0621: ctx_redswitch */
+ 0x80200f00,
+ 0xf6018500,
+ 0x04bd000f,
+/* 0x062e: ctx_redswitch_delay */
+ 0xe2b6080e,
+ 0xfd1bf401,
+ 0x0800f5f1,
+ 0x0200f5f1,
+ 0x01850080,
+ 0xbd000ff6,
+/* 0x0647: ctx_xfer */
+ 0x8000f804,
+ 0xf6028100,
+ 0x04bd000f,
+ 0xc48effb2,
+ 0xe5f0501d,
+ 0x008f7e01,
+ 0x0711f400,
+ 0x0006217e,
+/* 0x0664: ctx_xfer_not_load */
+ 0x0002167e,
+ 0xfc8024bd,
+ 0x02f60247,
0xf004bd00,
+ 0x20b6012c,
+ 0x4afc8003,
+ 0x0002f602,
+ 0x0c0f04bd,
+ 0xa88effb2,
+ 0xe5f0501d,
+ 0x008f7e01,
+ 0x03147e00,
+ 0xb23f0f00,
+ 0x1d608eff,
+ 0x01e5f050,
+ 0x00008f7e,
+ 0xffb2000f,
+ 0x501d9c8e,
+ 0x7e01e5f0,
+ 0x0f00008f,
+ 0x03147e01,
+ 0x01fcf000,
+ 0xb203f0b6,
+ 0x1da88eff,
+ 0x01e5f050,
+ 0x00008f7e,
+ 0xf001acf0,
+ 0x008b02a5,
+ 0x0c985000,
+ 0x0fc4b604,
+ 0x9800bcbb,
+ 0x0d98000c,
+ 0x7e000e01,
+ 0xf000013d,
+ 0x008b01ac,
+ 0x0c985040,
+ 0x0fc4b604,
+ 0x9800bcbb,
+ 0x0d98010c,
+ 0x060f9802,
+ 0x7e08004e,
+ 0xf000013d,
0xa5f001ac,
- 0x00008b02,
+ 0x30008b04,
0x040c9850,
0xbb0fc4b6,
0x0c9800bc,
- 0x010d9800,
- 0x3d7e000e,
- 0xacf00001,
- 0x40008b01,
- 0x040c9850,
- 0xbb0fc4b6,
- 0x0c9800bc,
- 0x020d9801,
- 0x4e060f98,
- 0x3d7e0800,
- 0xacf00001,
- 0x04a5f001,
- 0x5030008b,
- 0xb6040c98,
- 0xbcbb0fc4,
- 0x020c9800,
- 0x98030d98,
- 0x004e080f,
- 0x013d7e02,
- 0x020a7e00,
- 0x0601f400,
-/* 0x05a3: ctx_xfer_post */
- 0x7e0712f4,
-/* 0x05a7: ctx_xfer_done */
- 0x7e000227,
- 0xf80004cf,
+ 0x030d9802,
+ 0x4e080f98,
+ 0x3d7e0200,
+ 0x0a7e0001,
+ 0x147e0002,
+ 0x01f40003,
+ 0x1a12f406,
+/* 0x073c: ctx_xfer_post */
+ 0x0002277e,
+ 0xffb20d0f,
+ 0x501da88e,
+ 0x7e01e5f0,
+ 0x7e00008f,
+/* 0x0753: ctx_xfer_done */
+ 0x7e000314,
+ 0xf800060d,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
+ 0x00000000,
0x00000000,
0x00000000,
0x00000000,
diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/gr/fuc/macros.fuc b/drivers/gpu/drm/nouveau/nvkm/engine/gr/fuc/macros.fuc
index 2a0b0f8..fa61806 100644
--- a/drivers/gpu/drm/nouveau/nvkm/engine/gr/fuc/macros.fuc
+++ b/drivers/gpu/drm/nouveau/nvkm/engine/gr/fuc/macros.fuc
@@ -29,6 +29,7 @@
#define GK100 0xe0
#define GK110 0xf0
#define GK208 0x108
+#define GM107 0x117
#define NV_PGRAPH_TRAPPED_ADDR 0x400704
#define NV_PGRAPH_TRAPPED_DATA_LO 0x400708
@@ -79,7 +80,9 @@
#define NV_PGRAPH_FECS_MMCTX_MULTI_STRIDE 0x409718
#define NV_PGRAPH_FECS_MMCTX_MULTI_MASK 0x40971c
#define NV_PGRAPH_FECS_MMCTX_QUEUE 0x409720
+#define NV_PGRAPH_FECS_MMIO_BASE 0x409724
#define NV_PGRAPH_FECS_MMIO_CTRL 0x409728
+#define NV_PGRAPH_FECS_MMIO_CTRL_BASE_ENABLE 0x00000001
#define NV_PGRAPH_FECS_MMIO_RDVAL 0x40972c
#define NV_PGRAPH_FECS_MMIO_WRVAL 0x409730
#define NV_PGRAPH_FECS_MMCTX_LOAD_COUNT 0x40974c
@@ -147,6 +150,11 @@
#define NV_PGRAPH_GPCX_GPCCS_MYINDEX 0x41a618
#define NV_PGRAPH_GPCX_GPCCS_MMCTX_SAVE_SWBASE 0x41a700
#define NV_PGRAPH_GPCX_GPCCS_MMCTX_LOAD_SWBASE 0x41a704
+#define NV_PGRAPH_GPCX_GPCCS_MMIO_BASE 0x41a724
+#define NV_PGRAPH_GPCX_GPCCS_MMIO_CTRL 0x41a728
+#define NV_PGRAPH_GPCX_GPCCS_MMIO_CTRL_BASE_ENABLE 0x00000001
+#define NV_PGRAPH_GPCX_GPCCS_MMIO_RDVAL 0x41a72c
+#define NV_PGRAPH_GPCX_GPCCS_MMIO_WRVAL 0x41a730
#define NV_PGRAPH_GPCX_GPCCS_MMCTX_LOAD_COUNT 0x41a74c
#if CHIPSET < GK110
#define NV_PGRAPH_GPCX_GPCCS_CC_SCRATCH_VAL(n) ((n) * 4 + 0x41a800)
@@ -164,6 +172,29 @@
#define NV_PGRAPH_GPCX_GPCCS_STRAND_CMD_SAVE 0x00000003
#define NV_PGRAPH_GPCX_GPCCS_STRAND_CMD_LOAD 0x00000004
#define NV_PGRAPH_GPCX_GPCCS_MEM_BASE 0x41aa04
+#define NV_PGRAPH_GPCX_GPCCS_TPC_STATUS 0x41acfc
+
+#define NV_PGRAPH_GPC0_TPC0 0x504000
+#define NV_PGRAPH_GPC0_TPC0__SIZE 0x000800
+
+#define NV_PGRAPH_GPC0_TPCX_STRAND_INDEX 0x501d60
+#define NV_PGRAPH_GPC0_TPCX_STRAND_INDEX_ALL 0x0000003f
+#define NV_PGRAPH_GPC0_TPCX_STRAND_DATA 0x501d98
+#define NV_PGRAPH_GPC0_TPCX_STRAND_SELECT 0x501d9c
+#define NV_PGRAPH_GPC0_TPCX_STRAND_CMD 0x501da8
+#define NV_PGRAPH_GPC0_TPCX_STRAND_CMD_SEEK 0x00000001
+#define NV_PGRAPH_GPC0_TPCX_STRAND_CMD_GET_INFO 0x00000002
+#define NV_PGRAPH_GPC0_TPCX_STRAND_CMD_SAVE 0x00000003
+#define NV_PGRAPH_GPC0_TPCX_STRAND_CMD_LOAD 0x00000004
+#define NV_PGRAPH_GPC0_TPCX_STRAND_CMD_ENABLE 0x0000000c
+#define NV_PGRAPH_GPC0_TPCX_STRAND_CMD_DISABLE 0x0000000d
+#define NV_PGRAPH_GPC0_TPCX_STRAND_MEM_BASE 0x501dc4
+
+#define NV_TPC_STRAND_INDEX 0x560
+#define NV_TPC_STRAND_CNT 0x570
+#define NV_TPC_STRAND_SAVE_SWBASE 0x588
+#define NV_TPC_STRAND_LOAD_SWBASE 0x58c
+#define NV_TPC_STRAND_WORDS 0x590
#define mmctx_data(r,c) .b32 (((c - 1) << 26) | r)
#define queue_init .skip 72 // (2 * 4) + ((8 * 4) * 2)
@@ -178,6 +209,7 @@
#define T_SAVE 7
#define T_LCHAN 8
#define T_LCTXH 9
+#define T_STRTPC 10
#if CHIPSET < GK208
#define imm32(reg,val) /*
diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/gr/gf100.c b/drivers/gpu/drm/nouveau/nvkm/engine/gr/gf100.c
index 1dd482e..5606c25 100644
--- a/drivers/gpu/drm/nouveau/nvkm/engine/gr/gf100.c
+++ b/drivers/gpu/drm/nouveau/nvkm/engine/gr/gf100.c
@@ -236,7 +236,7 @@
gf100_gr_set_shader_exceptions(struct nvkm_object *object, u32 mthd,
void *pdata, u32 size)
{
- struct gf100_gr_priv *priv = (void *)nv_engine(object);
+ struct gf100_gr_priv *priv = (void *)object->engine;
if (size >= sizeof(u32)) {
u32 data = *(u32 *)pdata ? 0xffffffff : 0x00000000;
nv_wr32(priv, 0x419e44, data);
@@ -260,8 +260,8 @@
struct nvkm_oclass
gf100_gr_sclass[] = {
- { 0x902d, &nvkm_object_ofuncs },
- { 0x9039, &nvkm_object_ofuncs },
+ { FERMI_TWOD_A, &nvkm_object_ofuncs },
+ { FERMI_MEMORY_TO_MEMORY_FORMAT_A, &nvkm_object_ofuncs },
{ FERMI_A, &gf100_fermi_ofuncs, gf100_gr_9097_omthds },
{ FERMI_COMPUTE_A, &nvkm_object_ofuncs, gf100_gr_90c0_omthds },
{}
@@ -1097,12 +1097,26 @@
u32 subc = (addr & 0x00070000) >> 16;
u32 data = nv_rd32(priv, 0x400708);
u32 code = nv_rd32(priv, 0x400110);
- u32 class = nv_rd32(priv, 0x404200 + (subc * 4));
+ u32 class;
int chid;
+ if (nv_device(priv)->card_type < NV_E0 || subc < 4)
+ class = nv_rd32(priv, 0x404200 + (subc * 4));
+ else
+ class = 0x0000;
+
engctx = nvkm_engctx_get(engine, inst);
chid = pfifo->chid(pfifo, engctx);
+ if (stat & 0x00000001) {
+ /*
+ * notifier interrupt, only needed for cyclestats,
+ * can be safely ignored
+ */
+ nv_wr32(priv, 0x400100, 0x00000001);
+ stat &= ~0x00000001;
+ }
+
if (stat & 0x00000010) {
handle = nvkm_handle_get_class(engctx, class);
if (!handle || nv_call(handle->object, mthd, data)) {
diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/gr/gf100.h b/drivers/gpu/drm/nouveau/nvkm/engine/gr/gf100.h
index aeeca1b..8af1a89 100644
--- a/drivers/gpu/drm/nouveau/nvkm/engine/gr/gf100.h
+++ b/drivers/gpu/drm/nouveau/nvkm/engine/gr/gf100.h
@@ -124,10 +124,12 @@
int gf100_gr_init(struct nvkm_object *);
void gf100_gr_zbc_init(struct gf100_gr_priv *);
-int gk104_gr_fini(struct nvkm_object *, bool);
+int gk104_gr_ctor(struct nvkm_object *, struct nvkm_object *,
+ struct nvkm_oclass *, void *data, u32 size,
+ struct nvkm_object **);
int gk104_gr_init(struct nvkm_object *);
-int gk110_gr_fini(struct nvkm_object *, bool);
+int gm204_gr_init(struct nvkm_object *);
extern struct nvkm_ofuncs gf100_fermi_ofuncs;
@@ -136,6 +138,7 @@
extern struct nvkm_omthds gf100_gr_90c0_omthds[];
extern struct nvkm_oclass gf110_gr_sclass[];
extern struct nvkm_oclass gk110_gr_sclass[];
+extern struct nvkm_oclass gm204_gr_sclass[];
struct gf100_gr_init {
u32 addr;
@@ -247,4 +250,17 @@
extern const struct gf100_gr_init gk110_gr_init_sm_0[];
extern const struct gf100_gr_init gk208_gr_init_gpc_unk_0[];
+
+extern const struct gf100_gr_init gm107_gr_init_scc_0[];
+extern const struct gf100_gr_init gm107_gr_init_prop_0[];
+extern const struct gf100_gr_init gm107_gr_init_setup_1[];
+extern const struct gf100_gr_init gm107_gr_init_zcull_0[];
+extern const struct gf100_gr_init gm107_gr_init_gpc_unk_1[];
+extern const struct gf100_gr_init gm107_gr_init_tex_0[];
+extern const struct gf100_gr_init gm107_gr_init_l1c_0[];
+extern const struct gf100_gr_init gm107_gr_init_wwdx_0[];
+extern const struct gf100_gr_init gm107_gr_init_cbm_0[];
+void gm107_gr_init_bios(struct gf100_gr_priv *);
+
+extern const struct gf100_gr_pack gm204_gr_pack_mmio[];
#endif
diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/gr/gf108.c b/drivers/gpu/drm/nouveau/nvkm/engine/gr/gf108.c
index 5362c81..8df7342 100644
--- a/drivers/gpu/drm/nouveau/nvkm/engine/gr/gf108.c
+++ b/drivers/gpu/drm/nouveau/nvkm/engine/gr/gf108.c
@@ -32,8 +32,8 @@
static struct nvkm_oclass
gf108_gr_sclass[] = {
- { 0x902d, &nvkm_object_ofuncs },
- { 0x9039, &nvkm_object_ofuncs },
+ { FERMI_TWOD_A, &nvkm_object_ofuncs },
+ { FERMI_MEMORY_TO_MEMORY_FORMAT_A, &nvkm_object_ofuncs },
{ FERMI_A, &gf100_fermi_ofuncs, gf100_gr_9097_omthds },
{ FERMI_B, &gf100_fermi_ofuncs, gf100_gr_9097_omthds },
{ FERMI_COMPUTE_A, &nvkm_object_ofuncs, gf100_gr_90c0_omthds },
diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/gr/gf110.c b/drivers/gpu/drm/nouveau/nvkm/engine/gr/gf110.c
index 88beb49..ef76e2d 100644
--- a/drivers/gpu/drm/nouveau/nvkm/engine/gr/gf110.c
+++ b/drivers/gpu/drm/nouveau/nvkm/engine/gr/gf110.c
@@ -32,8 +32,8 @@
struct nvkm_oclass
gf110_gr_sclass[] = {
- { 0x902d, &nvkm_object_ofuncs },
- { 0x9039, &nvkm_object_ofuncs },
+ { FERMI_TWOD_A, &nvkm_object_ofuncs },
+ { FERMI_MEMORY_TO_MEMORY_FORMAT_A, &nvkm_object_ofuncs },
{ FERMI_A, &gf100_fermi_ofuncs, gf100_gr_9097_omthds },
{ FERMI_B, &gf100_fermi_ofuncs, gf100_gr_9097_omthds },
{ FERMI_C, &gf100_fermi_ofuncs, gf100_gr_9097_omthds },
diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/gr/gk104.c b/drivers/gpu/drm/nouveau/nvkm/engine/gr/gk104.c
index 489fdd9..46f7844 100644
--- a/drivers/gpu/drm/nouveau/nvkm/engine/gr/gk104.c
+++ b/drivers/gpu/drm/nouveau/nvkm/engine/gr/gk104.c
@@ -34,8 +34,8 @@
static struct nvkm_oclass
gk104_gr_sclass[] = {
- { 0x902d, &nvkm_object_ofuncs },
- { 0xa040, &nvkm_object_ofuncs },
+ { FERMI_TWOD_A, &nvkm_object_ofuncs },
+ { KEPLER_INLINE_TO_MEMORY_A, &nvkm_object_ofuncs },
{ KEPLER_A, &gf100_fermi_ofuncs, gf100_gr_9097_omthds },
{ KEPLER_COMPUTE_A, &nvkm_object_ofuncs, gf100_gr_90c0_omthds },
{}
@@ -310,6 +310,17 @@
return gf100_gr_init_ctxctl(priv);
}
+int
+gk104_gr_ctor(struct nvkm_object *parent, struct nvkm_object *engine,
+ struct nvkm_oclass *oclass, void *data, u32 size,
+ struct nvkm_object **pobject)
+{
+ struct nvkm_pmu *pmu = nvkm_pmu(parent);
+ if (pmu)
+ pmu->pgob(pmu, false);
+ return gf100_gr_ctor(parent, engine, oclass, data, size, pobject);
+}
+
#include "fuc/hubgk104.fuc3.h"
static struct gf100_gr_ucode
@@ -334,7 +345,7 @@
gk104_gr_oclass = &(struct gf100_gr_oclass) {
.base.handle = NV_ENGINE(GR, 0xe4),
.base.ofuncs = &(struct nvkm_ofuncs) {
- .ctor = gf100_gr_ctor,
+ .ctor = gk104_gr_ctor,
.dtor = gf100_gr_dtor,
.init = gk104_gr_init,
.fini = _nvkm_gr_fini,
diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/gr/gk110.c b/drivers/gpu/drm/nouveau/nvkm/engine/gr/gk110.c
index 78e03ab..f4cd8e5 100644
--- a/drivers/gpu/drm/nouveau/nvkm/engine/gr/gk110.c
+++ b/drivers/gpu/drm/nouveau/nvkm/engine/gr/gk110.c
@@ -34,8 +34,8 @@
struct nvkm_oclass
gk110_gr_sclass[] = {
- { 0x902d, &nvkm_object_ofuncs },
- { 0xa140, &nvkm_object_ofuncs },
+ { FERMI_TWOD_A, &nvkm_object_ofuncs },
+ { KEPLER_INLINE_TO_MEMORY_B, &nvkm_object_ofuncs },
{ KEPLER_B, &gf100_fermi_ofuncs, gf100_gr_9097_omthds },
{ KEPLER_COMPUTE_B, &nvkm_object_ofuncs, gf100_gr_90c0_omthds },
{}
@@ -173,43 +173,6 @@
* PGRAPH engine/subdev functions
******************************************************************************/
-int
-gk110_gr_fini(struct nvkm_object *object, bool suspend)
-{
- struct gf100_gr_priv *priv = (void *)object;
- static const struct {
- u32 addr;
- u32 data;
- } magic[] = {
- { 0x020520, 0xfffffffc },
- { 0x020524, 0xfffffffe },
- { 0x020524, 0xfffffffc },
- { 0x020524, 0xfffffff8 },
- { 0x020524, 0xffffffe0 },
- { 0x020530, 0xfffffffe },
- { 0x02052c, 0xfffffffa },
- { 0x02052c, 0xfffffff0 },
- { 0x02052c, 0xffffffc0 },
- { 0x02052c, 0xffffff00 },
- { 0x02052c, 0xfffffc00 },
- { 0x02052c, 0xfffcfc00 },
- { 0x02052c, 0xfff0fc00 },
- { 0x02052c, 0xff80fc00 },
- { 0x020528, 0xfffffffe },
- { 0x020528, 0xfffffffc },
- };
- int i;
-
- nv_mask(priv, 0x000200, 0x08001000, 0x00000000);
- nv_mask(priv, 0x0206b4, 0x00000000, 0x00000000);
- for (i = 0; i < ARRAY_SIZE(magic); i++) {
- nv_wr32(priv, magic[i].addr, magic[i].data);
- nv_wait(priv, magic[i].addr, 0x80000000, 0x00000000);
- }
-
- return nvkm_gr_fini(&priv->base, suspend);
-}
-
#include "fuc/hubgk110.fuc3.h"
struct gf100_gr_ucode
@@ -234,10 +197,10 @@
gk110_gr_oclass = &(struct gf100_gr_oclass) {
.base.handle = NV_ENGINE(GR, 0xf0),
.base.ofuncs = &(struct nvkm_ofuncs) {
- .ctor = gf100_gr_ctor,
+ .ctor = gk104_gr_ctor,
.dtor = gf100_gr_dtor,
.init = gk104_gr_init,
- .fini = gk110_gr_fini,
+ .fini = _nvkm_gr_fini,
},
.cclass = &gk110_grctx_oclass,
.sclass = gk110_gr_sclass,
diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/gr/gk110b.c b/drivers/gpu/drm/nouveau/nvkm/engine/gr/gk110b.c
index 5292c5a..9ff9eab 100644
--- a/drivers/gpu/drm/nouveau/nvkm/engine/gr/gk110b.c
+++ b/drivers/gpu/drm/nouveau/nvkm/engine/gr/gk110b.c
@@ -102,10 +102,10 @@
gk110b_gr_oclass = &(struct gf100_gr_oclass) {
.base.handle = NV_ENGINE(GR, 0xf1),
.base.ofuncs = &(struct nvkm_ofuncs) {
- .ctor = gf100_gr_ctor,
+ .ctor = gk104_gr_ctor,
.dtor = gf100_gr_dtor,
.init = gk104_gr_init,
- .fini = gk110_gr_fini,
+ .fini = _nvkm_gr_fini,
},
.cclass = &gk110b_grctx_oclass,
.sclass = gk110_gr_sclass,
diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/gr/gk208.c b/drivers/gpu/drm/nouveau/nvkm/engine/gr/gk208.c
index ae6b853..85f44a3 100644
--- a/drivers/gpu/drm/nouveau/nvkm/engine/gr/gk208.c
+++ b/drivers/gpu/drm/nouveau/nvkm/engine/gr/gk208.c
@@ -34,10 +34,10 @@
static struct nvkm_oclass
gk208_gr_sclass[] = {
- { 0x902d, &nvkm_object_ofuncs },
- { 0xa140, &nvkm_object_ofuncs },
+ { FERMI_TWOD_A, &nvkm_object_ofuncs },
+ { KEPLER_INLINE_TO_MEMORY_B, &nvkm_object_ofuncs },
{ KEPLER_B, &gf100_fermi_ofuncs },
- { 0xa1c0, &nvkm_object_ofuncs },
+ { KEPLER_COMPUTE_B, &nvkm_object_ofuncs },
{}
};
@@ -152,43 +152,6 @@
* PGRAPH engine/subdev functions
******************************************************************************/
-static int
-gk208_gr_fini(struct nvkm_object *object, bool suspend)
-{
- struct gf100_gr_priv *priv = (void *)object;
- static const struct {
- u32 addr;
- u32 data;
- } magic[] = {
- { 0x020520, 0xfffffffc },
- { 0x020524, 0xfffffffe },
- { 0x020524, 0xfffffffc },
- { 0x020524, 0xfffffff8 },
- { 0x020524, 0xffffffe0 },
- { 0x020530, 0xfffffffe },
- { 0x02052c, 0xfffffffa },
- { 0x02052c, 0xfffffff0 },
- { 0x02052c, 0xffffffc0 },
- { 0x02052c, 0xffffff00 },
- { 0x02052c, 0xfffffc00 },
- { 0x02052c, 0xfffcfc00 },
- { 0x02052c, 0xfff0fc00 },
- { 0x02052c, 0xff80fc00 },
- { 0x020528, 0xfffffffe },
- { 0x020528, 0xfffffffc },
- };
- int i;
-
- nv_mask(priv, 0x000200, 0x08001000, 0x00000000);
- nv_mask(priv, 0x0206b4, 0x00000000, 0x00000000);
- for (i = 0; i < ARRAY_SIZE(magic); i++) {
- nv_wr32(priv, magic[i].addr, magic[i].data);
- nv_wait(priv, magic[i].addr, 0x80000000, 0x00000000);
- }
-
- return nvkm_gr_fini(&priv->base, suspend);
-}
-
#include "fuc/hubgk208.fuc5.h"
static struct gf100_gr_ucode
@@ -213,10 +176,10 @@
gk208_gr_oclass = &(struct gf100_gr_oclass) {
.base.handle = NV_ENGINE(GR, 0x08),
.base.ofuncs = &(struct nvkm_ofuncs) {
- .ctor = gf100_gr_ctor,
+ .ctor = gk104_gr_ctor,
.dtor = gf100_gr_dtor,
.init = gk104_gr_init,
- .fini = gk208_gr_fini,
+ .fini = _nvkm_gr_fini,
},
.cclass = &gk208_grctx_oclass,
.sclass = gk208_gr_sclass,
diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/gr/gk20a.c b/drivers/gpu/drm/nouveau/nvkm/engine/gr/gk20a.c
index 2137555..40ff5eb 100644
--- a/drivers/gpu/drm/nouveau/nvkm/engine/gr/gk20a.c
+++ b/drivers/gpu/drm/nouveau/nvkm/engine/gr/gk20a.c
@@ -26,8 +26,8 @@
static struct nvkm_oclass
gk20a_gr_sclass[] = {
- { 0x902d, &nvkm_object_ofuncs },
- { 0xa040, &nvkm_object_ofuncs },
+ { FERMI_TWOD_A, &nvkm_object_ofuncs },
+ { KEPLER_INLINE_TO_MEMORY_A, &nvkm_object_ofuncs },
{ KEPLER_C, &gf100_fermi_ofuncs, gf100_gr_9097_omthds },
{ KEPLER_COMPUTE_A, &nvkm_object_ofuncs, gf100_gr_90c0_omthds },
{}
diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/gr/gm107.c b/drivers/gpu/drm/nouveau/nvkm/engine/gr/gm107.c
index 124492b..a5ebd45 100644
--- a/drivers/gpu/drm/nouveau/nvkm/engine/gr/gm107.c
+++ b/drivers/gpu/drm/nouveau/nvkm/engine/gr/gm107.c
@@ -35,8 +35,8 @@
static struct nvkm_oclass
gm107_gr_sclass[] = {
- { 0x902d, &nvkm_object_ofuncs },
- { 0xa140, &nvkm_object_ofuncs },
+ { FERMI_TWOD_A, &nvkm_object_ofuncs },
+ { KEPLER_INLINE_TO_MEMORY_B, &nvkm_object_ofuncs },
{ MAXWELL_A, &gf100_fermi_ofuncs, gf100_gr_9097_omthds },
{ MAXWELL_COMPUTE_A, &nvkm_object_ofuncs, gf100_gr_90c0_omthds },
{}
@@ -71,7 +71,7 @@
{}
};
-static const struct gf100_gr_init
+const struct gf100_gr_init
gm107_gr_init_scc_0[] = {
{ 0x40803c, 1, 0x04, 0x00000010 },
{}
@@ -85,14 +85,14 @@
{}
};
-static const struct gf100_gr_init
+const struct gf100_gr_init
gm107_gr_init_prop_0[] = {
{ 0x418408, 1, 0x04, 0x00000000 },
{ 0x4184a0, 1, 0x04, 0x00000000 },
{}
};
-static const struct gf100_gr_init
+const struct gf100_gr_init
gm107_gr_init_setup_1[] = {
{ 0x4188c8, 2, 0x04, 0x00000000 },
{ 0x4188d0, 1, 0x04, 0x00010000 },
@@ -100,7 +100,7 @@
{}
};
-static const struct gf100_gr_init
+const struct gf100_gr_init
gm107_gr_init_zcull_0[] = {
{ 0x418910, 1, 0x04, 0x00010001 },
{ 0x418914, 1, 0x04, 0x00000301 },
@@ -111,7 +111,7 @@
{}
};
-static const struct gf100_gr_init
+const struct gf100_gr_init
gm107_gr_init_gpc_unk_1[] = {
{ 0x418d00, 1, 0x04, 0x00000000 },
{ 0x418f00, 1, 0x04, 0x00000400 },
@@ -134,7 +134,7 @@
{}
};
-static const struct gf100_gr_init
+const struct gf100_gr_init
gm107_gr_init_tex_0[] = {
{ 0x419ab0, 1, 0x04, 0x00000000 },
{ 0x419ab8, 1, 0x04, 0x000000e7 },
@@ -160,7 +160,7 @@
{}
};
-static const struct gf100_gr_init
+const struct gf100_gr_init
gm107_gr_init_l1c_0[] = {
{ 0x419c98, 1, 0x04, 0x00000000 },
{ 0x419cc0, 2, 0x04, 0x00000000 },
@@ -206,14 +206,14 @@
{}
};
-static const struct gf100_gr_init
+const struct gf100_gr_init
gm107_gr_init_wwdx_0[] = {
{ 0x41bfd4, 1, 0x04, 0x00800000 },
{ 0x41bfdc, 1, 0x04, 0x00000000 },
{}
};
-static const struct gf100_gr_init
+const struct gf100_gr_init
gm107_gr_init_cbm_0[] = {
{ 0x41becc, 1, 0x04, 0x00000000 },
{}
@@ -291,7 +291,7 @@
* PGRAPH engine/subdev functions
******************************************************************************/
-static void
+void
gm107_gr_init_bios(struct gf100_gr_priv *priv)
{
static const struct {
@@ -464,7 +464,7 @@
.cclass = &gm107_grctx_oclass,
.sclass = gm107_gr_sclass,
.mmio = gm107_gr_pack_mmio,
- .fecs.ucode = 0 ? &gm107_gr_fecs_ucode : NULL,
+ .fecs.ucode = &gm107_gr_fecs_ucode,
.gpccs.ucode = &gm107_gr_gpccs_ucode,
.ppc_nr = 2,
}.base;
diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/gr/gm204.c b/drivers/gpu/drm/nouveau/nvkm/engine/gr/gm204.c
new file mode 100644
index 0000000..2f5eadd
--- /dev/null
+++ b/drivers/gpu/drm/nouveau/nvkm/engine/gr/gm204.c
@@ -0,0 +1,387 @@
+/*
+ * Copyright 2015 Red Hat Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: Ben Skeggs <bskeggs@redhat.com>
+ */
+#include "gf100.h"
+#include "ctxgf100.h"
+
+#include <nvif/class.h>
+
+/*******************************************************************************
+ * Graphics object classes
+ ******************************************************************************/
+
+struct nvkm_oclass
+gm204_gr_sclass[] = {
+ { FERMI_TWOD_A, &nvkm_object_ofuncs },
+ { KEPLER_INLINE_TO_MEMORY_B, &nvkm_object_ofuncs },
+ { MAXWELL_B, &gf100_fermi_ofuncs, gf100_gr_9097_omthds },
+ { MAXWELL_COMPUTE_B, &nvkm_object_ofuncs, gf100_gr_90c0_omthds },
+ {}
+};
+
+/*******************************************************************************
+ * PGRAPH register lists
+ ******************************************************************************/
+
+static const struct gf100_gr_init
+gm204_gr_init_main_0[] = {
+ { 0x400080, 1, 0x04, 0x003003e2 },
+ { 0x400088, 1, 0x04, 0xe007bfe7 },
+ { 0x40008c, 1, 0x04, 0x00060000 },
+ { 0x400090, 1, 0x04, 0x00000030 },
+ { 0x40013c, 1, 0x04, 0x003901f3 },
+ { 0x400140, 1, 0x04, 0x00000100 },
+ { 0x400144, 1, 0x04, 0x00000000 },
+ { 0x400148, 1, 0x04, 0x00000110 },
+ { 0x400138, 1, 0x04, 0x00000000 },
+ { 0x400130, 2, 0x04, 0x00000000 },
+ { 0x400124, 1, 0x04, 0x00000002 },
+ {}
+};
+
+static const struct gf100_gr_init
+gm204_gr_init_fe_0[] = {
+ { 0x40415c, 1, 0x04, 0x00000000 },
+ { 0x404170, 1, 0x04, 0x00000000 },
+ { 0x4041b4, 1, 0x04, 0x00000000 },
+ { 0x4041b8, 1, 0x04, 0x00000010 },
+ {}
+};
+
+static const struct gf100_gr_init
+gm204_gr_init_ds_0[] = {
+ { 0x40583c, 1, 0x04, 0x00000000 },
+ { 0x405844, 1, 0x04, 0x00ffffff },
+ { 0x40584c, 1, 0x04, 0x00000001 },
+ { 0x405850, 1, 0x04, 0x00000000 },
+ { 0x405900, 1, 0x04, 0x00000000 },
+ { 0x405908, 1, 0x04, 0x00000000 },
+ {}
+};
+
+static const struct gf100_gr_init
+gm204_gr_init_sked_0[] = {
+ { 0x407010, 1, 0x04, 0x00000000 },
+ { 0x407040, 1, 0x04, 0x80440434 },
+ { 0x407048, 1, 0x04, 0x00000008 },
+ {}
+};
+
+static const struct gf100_gr_init
+gm204_gr_init_tpccs_0[] = {
+ { 0x419d60, 1, 0x04, 0x0000003f },
+ { 0x419d88, 3, 0x04, 0x00000000 },
+ { 0x419dc4, 1, 0x04, 0x00000000 },
+ { 0x419dc8, 1, 0x04, 0x00000501 },
+ { 0x419dd0, 1, 0x04, 0x00000000 },
+ { 0x419dd4, 1, 0x04, 0x00000100 },
+ { 0x419dd8, 1, 0x04, 0x00000001 },
+ { 0x419ddc, 1, 0x04, 0x00000002 },
+ { 0x419de0, 1, 0x04, 0x00000001 },
+ { 0x419de8, 1, 0x04, 0x000000cc },
+ { 0x419dec, 1, 0x04, 0x00000000 },
+ { 0x419df0, 1, 0x04, 0x000000cc },
+ { 0x419df4, 1, 0x04, 0x00000000 },
+ { 0x419d0c, 1, 0x04, 0x00000000 },
+ { 0x419d10, 1, 0x04, 0x00000014 },
+ {}
+};
+
+static const struct gf100_gr_init
+gm204_gr_init_pe_0[] = {
+ { 0x419900, 1, 0x04, 0x000000ff },
+ { 0x419810, 1, 0x04, 0x00000000 },
+ { 0x41980c, 1, 0x04, 0x00000010 },
+ { 0x419844, 1, 0x04, 0x00000000 },
+ { 0x419838, 1, 0x04, 0x000000ff },
+ { 0x419850, 1, 0x04, 0x00000004 },
+ { 0x419854, 2, 0x04, 0x00000000 },
+ { 0x419894, 3, 0x04, 0x00100401 },
+ {}
+};
+
+static const struct gf100_gr_init
+gm204_gr_init_sm_0[] = {
+ { 0x419e30, 1, 0x04, 0x000000ff },
+ { 0x419e00, 1, 0x04, 0x00000000 },
+ { 0x419ea0, 1, 0x04, 0x00000000 },
+ { 0x419ee4, 1, 0x04, 0x00000000 },
+ { 0x419ea4, 1, 0x04, 0x00000100 },
+ { 0x419ea8, 1, 0x04, 0x00000000 },
+ { 0x419ee8, 1, 0x04, 0x00000091 },
+ { 0x419eb4, 1, 0x04, 0x00000000 },
+ { 0x419ebc, 2, 0x04, 0x00000000 },
+ { 0x419edc, 1, 0x04, 0x000c1810 },
+ { 0x419ed8, 1, 0x04, 0x00000000 },
+ { 0x419ee0, 1, 0x04, 0x00000000 },
+ {}
+};
+
+static const struct gf100_gr_init
+gm204_gr_init_l1c_1[] = {
+ { 0x419cf8, 2, 0x04, 0x00000000 },
+ {}
+};
+
+static const struct gf100_gr_init
+gm204_gr_init_sm_1[] = {
+ { 0x419f74, 1, 0x04, 0x00055155 },
+ { 0x419f80, 4, 0x04, 0x00000000 },
+ {}
+};
+
+static const struct gf100_gr_init
+gm204_gr_init_l1c_2[] = {
+ { 0x419ccc, 2, 0x04, 0x00000000 },
+ { 0x419c80, 1, 0x04, 0x3f006022 },
+ { 0x419c88, 1, 0x04, 0x00210000 },
+ {}
+};
+
+static const struct gf100_gr_init
+gm204_gr_init_pes_0[] = {
+ { 0x41be50, 1, 0x04, 0x000000ff },
+ { 0x41be04, 1, 0x04, 0x00000000 },
+ { 0x41be08, 1, 0x04, 0x00000004 },
+ { 0x41be0c, 1, 0x04, 0x00000008 },
+ { 0x41be10, 1, 0x04, 0x2e3b8bc7 },
+ { 0x41be14, 2, 0x04, 0x00000000 },
+ { 0x41be3c, 5, 0x04, 0x00100401 },
+ {}
+};
+
+static const struct gf100_gr_init
+gm204_gr_init_be_0[] = {
+ { 0x408890, 1, 0x04, 0x000000ff },
+ { 0x40880c, 1, 0x04, 0x00000000 },
+ { 0x408850, 1, 0x04, 0x00000004 },
+ { 0x408878, 1, 0x04, 0x01b4201c },
+ { 0x40887c, 1, 0x04, 0x80004c55 },
+ { 0x408880, 1, 0x04, 0x0018c258 },
+ { 0x408884, 1, 0x04, 0x0000160f },
+ { 0x408974, 1, 0x04, 0x000000ff },
+ { 0x408910, 9, 0x04, 0x00000000 },
+ { 0x408950, 1, 0x04, 0x00000000 },
+ { 0x408954, 1, 0x04, 0x0000ffff },
+ { 0x408958, 1, 0x04, 0x00000034 },
+ { 0x40895c, 1, 0x04, 0x84b17403 },
+ { 0x408960, 1, 0x04, 0x04c1884f },
+ { 0x408964, 1, 0x04, 0x04714445 },
+ { 0x408968, 1, 0x04, 0x0280802f },
+ { 0x40896c, 1, 0x04, 0x04304856 },
+ { 0x408970, 1, 0x04, 0x00012800 },
+ { 0x408984, 1, 0x04, 0x00000000 },
+ { 0x408988, 1, 0x04, 0x08040201 },
+ { 0x40898c, 1, 0x04, 0x80402010 },
+ {}
+};
+
+const struct gf100_gr_pack
+gm204_gr_pack_mmio[] = {
+ { gm204_gr_init_main_0 },
+ { gm204_gr_init_fe_0 },
+ { gf100_gr_init_pri_0 },
+ { gf100_gr_init_rstr2d_0 },
+ { gf100_gr_init_pd_0 },
+ { gm204_gr_init_ds_0 },
+ { gm107_gr_init_scc_0 },
+ { gm204_gr_init_sked_0 },
+ { gk110_gr_init_cwd_0 },
+ { gm107_gr_init_prop_0 },
+ { gk208_gr_init_gpc_unk_0 },
+ { gf100_gr_init_setup_0 },
+ { gf100_gr_init_crstr_0 },
+ { gm107_gr_init_setup_1 },
+ { gm107_gr_init_zcull_0 },
+ { gf100_gr_init_gpm_0 },
+ { gm107_gr_init_gpc_unk_1 },
+ { gf100_gr_init_gcc_0 },
+ { gm204_gr_init_tpccs_0 },
+ { gm107_gr_init_tex_0 },
+ { gm204_gr_init_pe_0 },
+ { gm107_gr_init_l1c_0 },
+ { gf100_gr_init_mpc_0 },
+ { gm204_gr_init_sm_0 },
+ { gm204_gr_init_l1c_1 },
+ { gm204_gr_init_sm_1 },
+ { gm204_gr_init_l1c_2 },
+ { gm204_gr_init_pes_0 },
+ { gm107_gr_init_wwdx_0 },
+ { gm107_gr_init_cbm_0 },
+ { gm204_gr_init_be_0 },
+ {}
+};
+
+const struct gf100_gr_pack *
+gm204_gr_data[] = {
+ gm204_gr_pack_mmio,
+ NULL
+};
+
+/*******************************************************************************
+ * PGRAPH engine/subdev functions
+ ******************************************************************************/
+
+static int
+gm204_gr_init_ctxctl(struct gf100_gr_priv *priv)
+{
+ return 0;
+}
+
+int
+gm204_gr_init(struct nvkm_object *object)
+{
+ struct gf100_gr_oclass *oclass = (void *)object->oclass;
+ struct gf100_gr_priv *priv = (void *)object;
+ const u32 magicgpc918 = DIV_ROUND_UP(0x00800000, priv->tpc_total);
+ u32 data[TPC_MAX / 8] = {};
+ u8 tpcnr[GPC_MAX];
+ int gpc, tpc, ppc, rop;
+ int ret, i;
+ u32 tmp;
+
+ ret = nvkm_gr_init(&priv->base);
+ if (ret)
+ return ret;
+
+ tmp = nv_rd32(priv, 0x100c80); /*XXX: mask? */
+ nv_wr32(priv, 0x418880, 0x00001000 | (tmp & 0x00000fff));
+ nv_wr32(priv, 0x418890, 0x00000000);
+ nv_wr32(priv, 0x418894, 0x00000000);
+ nv_wr32(priv, 0x4188b4, priv->unk4188b4->addr >> 8);
+ nv_wr32(priv, 0x4188b8, priv->unk4188b8->addr >> 8);
+ nv_mask(priv, 0x4188b0, 0x00040000, 0x00040000);
+
+ /*XXX: belongs in fb */
+ nv_wr32(priv, 0x100cc8, priv->unk4188b4->addr >> 8);
+ nv_wr32(priv, 0x100ccc, priv->unk4188b8->addr >> 8);
+ nv_mask(priv, 0x100cc4, 0x00040000, 0x00040000);
+
+ gf100_gr_mmio(priv, oclass->mmio);
+
+ gm107_gr_init_bios(priv);
+
+ nv_wr32(priv, GPC_UNIT(0, 0x3018), 0x00000001);
+
+ memset(data, 0x00, sizeof(data));
+ memcpy(tpcnr, priv->tpc_nr, sizeof(priv->tpc_nr));
+ for (i = 0, gpc = -1; i < priv->tpc_total; i++) {
+ do {
+ gpc = (gpc + 1) % priv->gpc_nr;
+ } while (!tpcnr[gpc]);
+ tpc = priv->tpc_nr[gpc] - tpcnr[gpc]--;
+
+ data[i / 8] |= tpc << ((i % 8) * 4);
+ }
+
+ nv_wr32(priv, GPC_BCAST(0x0980), data[0]);
+ nv_wr32(priv, GPC_BCAST(0x0984), data[1]);
+ nv_wr32(priv, GPC_BCAST(0x0988), data[2]);
+ nv_wr32(priv, GPC_BCAST(0x098c), data[3]);
+
+ for (gpc = 0; gpc < priv->gpc_nr; gpc++) {
+ nv_wr32(priv, GPC_UNIT(gpc, 0x0914),
+ priv->magic_not_rop_nr << 8 | priv->tpc_nr[gpc]);
+ nv_wr32(priv, GPC_UNIT(gpc, 0x0910), 0x00040000 |
+ priv->tpc_total);
+ nv_wr32(priv, GPC_UNIT(gpc, 0x0918), magicgpc918);
+ }
+
+ nv_wr32(priv, GPC_BCAST(0x3fd4), magicgpc918);
+ nv_wr32(priv, GPC_BCAST(0x08ac), nv_rd32(priv, 0x100800));
+ nv_wr32(priv, GPC_BCAST(0x033c), nv_rd32(priv, 0x100804));
+
+ nv_wr32(priv, 0x400500, 0x00010001);
+ nv_wr32(priv, 0x400100, 0xffffffff);
+ nv_wr32(priv, 0x40013c, 0xffffffff);
+ nv_wr32(priv, 0x400124, 0x00000002);
+ nv_wr32(priv, 0x409c24, 0x000e0000);
+ nv_wr32(priv, 0x405848, 0xc0000000);
+ nv_wr32(priv, 0x40584c, 0x00000001);
+ nv_wr32(priv, 0x404000, 0xc0000000);
+ nv_wr32(priv, 0x404600, 0xc0000000);
+ nv_wr32(priv, 0x408030, 0xc0000000);
+ nv_wr32(priv, 0x404490, 0xc0000000);
+ nv_wr32(priv, 0x406018, 0xc0000000);
+ nv_wr32(priv, 0x407020, 0x40000000);
+ nv_wr32(priv, 0x405840, 0xc0000000);
+ nv_wr32(priv, 0x405844, 0x00ffffff);
+ nv_mask(priv, 0x419cc0, 0x00000008, 0x00000008);
+
+ for (gpc = 0; gpc < priv->gpc_nr; gpc++) {
+ printk(KERN_ERR "ppc %d %d\n", gpc, priv->ppc_nr[gpc]);
+ for (ppc = 0; ppc < priv->ppc_nr[gpc]; ppc++)
+ nv_wr32(priv, PPC_UNIT(gpc, ppc, 0x038), 0xc0000000);
+ nv_wr32(priv, GPC_UNIT(gpc, 0x0420), 0xc0000000);
+ nv_wr32(priv, GPC_UNIT(gpc, 0x0900), 0xc0000000);
+ nv_wr32(priv, GPC_UNIT(gpc, 0x1028), 0xc0000000);
+ nv_wr32(priv, GPC_UNIT(gpc, 0x0824), 0xc0000000);
+ for (tpc = 0; tpc < priv->tpc_nr[gpc]; tpc++) {
+ nv_wr32(priv, TPC_UNIT(gpc, tpc, 0x508), 0xffffffff);
+ nv_wr32(priv, TPC_UNIT(gpc, tpc, 0x50c), 0xffffffff);
+ nv_wr32(priv, TPC_UNIT(gpc, tpc, 0x224), 0xc0000000);
+ nv_wr32(priv, TPC_UNIT(gpc, tpc, 0x48c), 0xc0000000);
+ nv_wr32(priv, TPC_UNIT(gpc, tpc, 0x084), 0xc0000000);
+ nv_wr32(priv, TPC_UNIT(gpc, tpc, 0x430), 0xc0000000);
+ nv_wr32(priv, TPC_UNIT(gpc, tpc, 0x644), 0x00dffffe);
+ nv_wr32(priv, TPC_UNIT(gpc, tpc, 0x64c), 0x00000005);
+ }
+ nv_wr32(priv, GPC_UNIT(gpc, 0x2c90), 0xffffffff);
+ nv_wr32(priv, GPC_UNIT(gpc, 0x2c94), 0xffffffff);
+ }
+
+ for (rop = 0; rop < priv->rop_nr; rop++) {
+ nv_wr32(priv, ROP_UNIT(rop, 0x144), 0x40000000);
+ nv_wr32(priv, ROP_UNIT(rop, 0x070), 0x40000000);
+ nv_wr32(priv, ROP_UNIT(rop, 0x204), 0xffffffff);
+ nv_wr32(priv, ROP_UNIT(rop, 0x208), 0xffffffff);
+ }
+
+ nv_wr32(priv, 0x400108, 0xffffffff);
+ nv_wr32(priv, 0x400138, 0xffffffff);
+ nv_wr32(priv, 0x400118, 0xffffffff);
+ nv_wr32(priv, 0x400130, 0xffffffff);
+ nv_wr32(priv, 0x40011c, 0xffffffff);
+ nv_wr32(priv, 0x400134, 0xffffffff);
+
+ nv_wr32(priv, 0x400054, 0x2c350f63);
+
+ gf100_gr_zbc_init(priv);
+
+ return gm204_gr_init_ctxctl(priv);
+}
+
+struct nvkm_oclass *
+gm204_gr_oclass = &(struct gf100_gr_oclass) {
+ .base.handle = NV_ENGINE(GR, 0x24),
+ .base.ofuncs = &(struct nvkm_ofuncs) {
+ .ctor = gf100_gr_ctor,
+ .dtor = gf100_gr_dtor,
+ .init = gm204_gr_init,
+ .fini = _nvkm_gr_fini,
+ },
+ .cclass = &gm204_grctx_oclass,
+ .sclass = gm204_gr_sclass,
+ .mmio = gm204_gr_pack_mmio,
+ .ppc_nr = 2,
+}.base;
diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/gr/gm206.c b/drivers/gpu/drm/nouveau/nvkm/engine/gr/gm206.c
new file mode 100644
index 0000000..04b9733
--- /dev/null
+++ b/drivers/gpu/drm/nouveau/nvkm/engine/gr/gm206.c
@@ -0,0 +1,40 @@
+/*
+ * Copyright 2015 Red Hat Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: Ben Skeggs <bskeggs@redhat.com>
+ */
+#include "gf100.h"
+#include "ctxgf100.h"
+
+struct nvkm_oclass *
+gm206_gr_oclass = &(struct gf100_gr_oclass) {
+ .base.handle = NV_ENGINE(GR, 0x26),
+ .base.ofuncs = &(struct nvkm_ofuncs) {
+ .ctor = gf100_gr_ctor,
+ .dtor = gf100_gr_dtor,
+ .init = gm204_gr_init,
+ .fini = _nvkm_gr_fini,
+ },
+ .cclass = &gm206_grctx_oclass,
+ .sclass = gm204_gr_sclass,
+ .mmio = gm204_gr_pack_mmio,
+ .ppc_nr = 2,
+}.base;
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/bios/shadowacpi.c b/drivers/gpu/drm/nouveau/nvkm/subdev/bios/shadowacpi.c
index 1fbd93b..f9d0eb5 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/bios/shadowacpi.c
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/bios/shadowacpi.c
@@ -52,7 +52,7 @@
u32 start = offset & ~0x00000fff;
u32 fetch = limit - start;
- if (nvbios_extend(bios, limit) > 0) {
+ if (nvbios_extend(bios, limit) >= 0) {
int ret = nouveau_acpi_get_bios_chunk(bios->data, start, fetch);
if (ret == fetch)
return fetch;
@@ -73,7 +73,7 @@
u32 start = offset & ~0xfff;
u32 fetch = 0;
- if (nvbios_extend(bios, limit) > 0) {
+ if (nvbios_extend(bios, limit) >= 0) {
while (start + fetch < limit) {
int ret = nouveau_acpi_get_bios_chunk(bios->data,
start + fetch,
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/bus/hwsq.c b/drivers/gpu/drm/nouveau/nvkm/subdev/bus/hwsq.c
index b8853bf..7622b41 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/bus/hwsq.c
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/bus/hwsq.c
@@ -29,7 +29,7 @@
u32 data;
struct {
u8 data[512];
- u8 size;
+ u16 size;
} c;
};
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/bus/hwsq.h b/drivers/gpu/drm/nouveau/nvkm/subdev/bus/hwsq.h
index 3394a5e..ebf709c 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/bus/hwsq.h
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/bus/hwsq.h
@@ -11,17 +11,34 @@
struct hwsq_reg {
int sequence;
bool force;
- u32 addr[2];
+ u32 addr;
+ u32 stride; /* in bytes */
+ u32 mask;
u32 data;
};
static inline struct hwsq_reg
+hwsq_stride(u32 addr, u32 stride, u32 mask)
+{
+ return (struct hwsq_reg) {
+ .sequence = 0,
+ .force = 0,
+ .addr = addr,
+ .stride = stride,
+ .mask = mask,
+ .data = 0xdeadbeef,
+ };
+}
+
+static inline struct hwsq_reg
hwsq_reg2(u32 addr1, u32 addr2)
{
return (struct hwsq_reg) {
.sequence = 0,
.force = 0,
- .addr = { addr1, addr2 },
+ .addr = addr1,
+ .stride = addr2 - addr1,
+ .mask = 0x3,
.data = 0xdeadbeef,
};
}
@@ -29,7 +46,14 @@
static inline struct hwsq_reg
hwsq_reg(u32 addr)
{
- return hwsq_reg2(addr, addr);
+ return (struct hwsq_reg) {
+ .sequence = 0,
+ .force = 0,
+ .addr = addr,
+ .stride = 0,
+ .mask = 0x1,
+ .data = 0xdeadbeef,
+ };
}
static inline int
@@ -62,18 +86,24 @@
hwsq_rd32(struct hwsq *ram, struct hwsq_reg *reg)
{
if (reg->sequence != ram->sequence)
- reg->data = nv_rd32(ram->subdev, reg->addr[0]);
+ reg->data = nv_rd32(ram->subdev, reg->addr);
return reg->data;
}
static inline void
hwsq_wr32(struct hwsq *ram, struct hwsq_reg *reg, u32 data)
{
+ u32 mask, off = 0;
+
reg->sequence = ram->sequence;
reg->data = data;
- if (reg->addr[0] != reg->addr[1])
- nvkm_hwsq_wr32(ram->hwsq, reg->addr[1], reg->data);
- nvkm_hwsq_wr32(ram->hwsq, reg->addr[0], reg->data);
+
+ for (mask = reg->mask; mask > 0; mask = (mask & ~1) >> 1) {
+ if (mask & 1)
+ nvkm_hwsq_wr32(ram->hwsq, reg->addr+off, reg->data);
+
+ off += reg->stride;
+ }
}
static inline void
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/clk/base.c b/drivers/gpu/drm/nouveau/nvkm/subdev/clk/base.c
index b24a9cc..39a83d8 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/clk/base.c
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/clk/base.c
@@ -184,7 +184,7 @@
nv_debug(clk, "setting performance state %d\n", pstatei);
clk->pstate = pstatei;
- if (pfb->ram->calc) {
+ if (pfb->ram && pfb->ram->calc) {
int khz = pstate->base.domain[nv_clk_src_mem];
do {
ret = pfb->ram->calc(pfb, khz);
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/devinit/nv04.h b/drivers/gpu/drm/nouveau/nvkm/subdev/devinit/nv04.h
index 14a51a9..7c63abf 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/devinit/nv04.h
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/devinit/nv04.h
@@ -5,7 +5,7 @@
struct nv04_devinit_priv {
struct nvkm_devinit base;
- u8 owner;
+ int owner;
};
int nv04_devinit_ctor(struct nvkm_object *, struct nvkm_object *,
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/Kbuild b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/Kbuild
index 904d601..d6be4c6c 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/Kbuild
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/Kbuild
@@ -37,7 +37,6 @@
nvkm-y += nvkm/subdev/fb/rammcp77.o
nvkm-y += nvkm/subdev/fb/ramgf100.o
nvkm-y += nvkm/subdev/fb/ramgk104.o
-nvkm-y += nvkm/subdev/fb/ramgk20a.o
nvkm-y += nvkm/subdev/fb/ramgm107.o
nvkm-y += nvkm/subdev/fb/sddr2.o
nvkm-y += nvkm/subdev/fb/sddr3.o
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/base.c b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/base.c
index 16589fa..61fde43 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/base.c
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/base.c
@@ -55,9 +55,11 @@
struct nvkm_fb *pfb = (void *)object;
int ret;
- ret = nv_ofuncs(pfb->ram)->fini(nv_object(pfb->ram), suspend);
- if (ret && suspend)
- return ret;
+ if (pfb->ram) {
+ ret = nv_ofuncs(pfb->ram)->fini(nv_object(pfb->ram), suspend);
+ if (ret && suspend)
+ return ret;
+ }
return nvkm_subdev_fini(&pfb->base, suspend);
}
@@ -72,9 +74,11 @@
if (ret)
return ret;
- ret = nv_ofuncs(pfb->ram)->init(nv_object(pfb->ram));
- if (ret)
- return ret;
+ if (pfb->ram) {
+ ret = nv_ofuncs(pfb->ram)->init(nv_object(pfb->ram));
+ if (ret)
+ return ret;
+ }
for (i = 0; i < pfb->tile.regions; i++)
pfb->tile.prog(pfb, i, &pfb->tile.region[i]);
@@ -91,9 +95,12 @@
for (i = 0; i < pfb->tile.regions; i++)
pfb->tile.fini(pfb, i, &pfb->tile.region[i]);
nvkm_mm_fini(&pfb->tags);
- nvkm_mm_fini(&pfb->vram);
- nvkm_object_ref(NULL, (struct nvkm_object **)&pfb->ram);
+ if (pfb->ram) {
+ nvkm_mm_fini(&pfb->vram);
+ nvkm_object_ref(NULL, (struct nvkm_object **)&pfb->ram);
+ }
+
nvkm_subdev_destroy(&pfb->base);
}
@@ -127,6 +134,9 @@
pfb->memtype_valid = impl->memtype;
+ if (!impl->ram)
+ return 0;
+
ret = nvkm_object_ctor(nv_object(pfb), NULL, impl->ram, NULL, 0, &ram);
if (ret) {
nv_fatal(pfb, "error detecting memory configuration!!\n");
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/gk20a.c b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/gk20a.c
index 6762847..a5d7857 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/gk20a.c
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/gk20a.c
@@ -65,5 +65,4 @@
.fini = _nvkm_fb_fini,
},
.memtype = gf100_fb_memtype_valid,
- .ram = &gk20a_ram_oclass,
}.base;
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/priv.h b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/priv.h
index d82da02..485c4b6 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/priv.h
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/priv.h
@@ -32,7 +32,6 @@
extern struct nvkm_oclass mcp77_ram_oclass;
extern struct nvkm_oclass gf100_ram_oclass;
extern struct nvkm_oclass gk104_ram_oclass;
-extern struct nvkm_oclass gk20a_ram_oclass;
extern struct nvkm_oclass gm107_ram_oclass;
int nvkm_sddr2_calc(struct nvkm_ram *ram);
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/ramgk20a.c b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/ramgk20a.c
deleted file mode 100644
index 5f30db1..0000000
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/ramgk20a.c
+++ /dev/null
@@ -1,149 +0,0 @@
-/*
- * Copyright (c) 2014, NVIDIA CORPORATION. All rights reserved.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
- * DEALINGS IN THE SOFTWARE.
- */
-#include "priv.h"
-
-#include <core/device.h>
-
-struct gk20a_mem {
- struct nvkm_mem base;
- void *cpuaddr;
- dma_addr_t handle;
-};
-#define to_gk20a_mem(m) container_of(m, struct gk20a_mem, base)
-
-static void
-gk20a_ram_put(struct nvkm_fb *pfb, struct nvkm_mem **pmem)
-{
- struct device *dev = nv_device_base(nv_device(pfb));
- struct gk20a_mem *mem = to_gk20a_mem(*pmem);
-
- *pmem = NULL;
- if (unlikely(mem == NULL))
- return;
-
- if (likely(mem->cpuaddr))
- dma_free_coherent(dev, mem->base.size << PAGE_SHIFT,
- mem->cpuaddr, mem->handle);
-
- kfree(mem->base.pages);
- kfree(mem);
-}
-
-static int
-gk20a_ram_get(struct nvkm_fb *pfb, u64 size, u32 align, u32 ncmin,
- u32 memtype, struct nvkm_mem **pmem)
-{
- struct device *dev = nv_device_base(nv_device(pfb));
- struct gk20a_mem *mem;
- u32 type = memtype & 0xff;
- u32 npages, order;
- int i;
-
- nv_debug(pfb, "%s: size: %llx align: %x, ncmin: %x\n", __func__, size,
- align, ncmin);
-
- npages = size >> PAGE_SHIFT;
- if (npages == 0)
- npages = 1;
-
- if (align == 0)
- align = PAGE_SIZE;
- align >>= PAGE_SHIFT;
-
- /* round alignment to the next power of 2, if needed */
- order = fls(align);
- if ((align & (align - 1)) == 0)
- order--;
- align = BIT(order);
-
- /* ensure returned address is correctly aligned */
- npages = max(align, npages);
-
- mem = kzalloc(sizeof(*mem), GFP_KERNEL);
- if (!mem)
- return -ENOMEM;
-
- mem->base.size = npages;
- mem->base.memtype = type;
-
- mem->base.pages = kzalloc(sizeof(dma_addr_t) * npages, GFP_KERNEL);
- if (!mem->base.pages) {
- kfree(mem);
- return -ENOMEM;
- }
-
- *pmem = &mem->base;
-
- mem->cpuaddr = dma_alloc_coherent(dev, npages << PAGE_SHIFT,
- &mem->handle, GFP_KERNEL);
- if (!mem->cpuaddr) {
- nv_error(pfb, "%s: cannot allocate memory!\n", __func__);
- gk20a_ram_put(pfb, pmem);
- return -ENOMEM;
- }
-
- align <<= PAGE_SHIFT;
-
- /* alignment check */
- if (unlikely(mem->handle & (align - 1)))
- nv_warn(pfb, "memory not aligned as requested: %pad (0x%x)\n",
- &mem->handle, align);
-
- nv_debug(pfb, "alloc size: 0x%x, align: 0x%x, paddr: %pad, vaddr: %p\n",
- npages << PAGE_SHIFT, align, &mem->handle, mem->cpuaddr);
-
- for (i = 0; i < npages; i++)
- mem->base.pages[i] = mem->handle + (PAGE_SIZE * i);
-
- mem->base.offset = (u64)mem->base.pages[0];
- return 0;
-}
-
-static int
-gk20a_ram_ctor(struct nvkm_object *parent, struct nvkm_object *engine,
- struct nvkm_oclass *oclass, void *data, u32 datasize,
- struct nvkm_object **pobject)
-{
- struct nvkm_ram *ram;
- int ret;
-
- ret = nvkm_ram_create(parent, engine, oclass, &ram);
- *pobject = nv_object(ram);
- if (ret)
- return ret;
- ram->type = NV_MEM_TYPE_STOLEN;
- ram->size = get_num_physpages() << PAGE_SHIFT;
-
- ram->get = gk20a_ram_get;
- ram->put = gk20a_ram_put;
- return 0;
-}
-
-struct nvkm_oclass
-gk20a_ram_oclass = {
- .ofuncs = &(struct nvkm_ofuncs) {
- .ctor = gk20a_ram_ctor,
- .dtor = _nvkm_ram_dtor,
- .init = _nvkm_ram_init,
- .fini = _nvkm_ram_fini,
- },
-};
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/fuse/gm107.c b/drivers/gpu/drm/nouveau/nvkm/subdev/fuse/gm107.c
index ba19158..0b256aa 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/fuse/gm107.c
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/fuse/gm107.c
@@ -45,10 +45,8 @@
ret = nvkm_fuse_create(parent, engine, oclass, &priv);
*pobject = nv_object(priv);
- if (ret)
- return ret;
- return 0;
+ return ret;
}
struct nvkm_oclass
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/instmem/Kbuild b/drivers/gpu/drm/nouveau/nvkm/subdev/instmem/Kbuild
index e6f35ab..13bb7fc 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/instmem/Kbuild
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/instmem/Kbuild
@@ -2,3 +2,4 @@
nvkm-y += nvkm/subdev/instmem/nv04.o
nvkm-y += nvkm/subdev/instmem/nv40.o
nvkm-y += nvkm/subdev/instmem/nv50.o
+nvkm-y += nvkm/subdev/instmem/gk20a.o
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/instmem/gk20a.c b/drivers/gpu/drm/nouveau/nvkm/subdev/instmem/gk20a.c
new file mode 100644
index 0000000..dd0994d
--- /dev/null
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/instmem/gk20a.c
@@ -0,0 +1,440 @@
+/*
+ * Copyright (c) 2015, NVIDIA CORPORATION. All rights reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ */
+
+/*
+ * GK20A does not have dedicated video memory, and to accurately represent this
+ * fact Nouveau will not create a RAM device for it. Therefore its instmem
+ * implementation must be done directly on top of system memory, while providing
+ * coherent read and write operations.
+ *
+ * Instmem can be allocated through two means:
+ * 1) If an IOMMU mapping has been probed, the IOMMU API is used to make memory
+ * pages contiguous to the GPU. This is the preferred way.
+ * 2) If no IOMMU mapping is probed, the DMA API is used to allocate physically
+ * contiguous memory.
+ *
+ * In both cases CPU reads and writes are performed using PRAMIN (i.e. using the
+ * GPU path) to ensure these operations are coherent for the GPU. This allows us
+ * to use more "relaxed" allocation parameters when using the DMA API, since we
+ * never need a kernel mapping.
+ */
+
+#include <subdev/fb.h>
+#include <core/mm.h>
+#include <core/device.h>
+
+#ifdef __KERNEL__
+#include <linux/dma-attrs.h>
+#include <linux/iommu.h>
+#include <nouveau_platform.h>
+#endif
+
+#include "priv.h"
+
+struct gk20a_instobj_priv {
+ struct nvkm_instobj base;
+ /* Must be second member here - see nouveau_gpuobj_map_vm() */
+ struct nvkm_mem *mem;
+ /* Pointed by mem */
+ struct nvkm_mem _mem;
+};
+
+/*
+ * Used for objects allocated using the DMA API
+ */
+struct gk20a_instobj_dma {
+ struct gk20a_instobj_priv base;
+
+ void *cpuaddr;
+ dma_addr_t handle;
+ struct nvkm_mm_node r;
+};
+
+/*
+ * Used for objects flattened using the IOMMU API
+ */
+struct gk20a_instobj_iommu {
+ struct gk20a_instobj_priv base;
+
+ /* array of base.mem->size pages */
+ struct page *pages[];
+};
+
+struct gk20a_instmem_priv {
+ struct nvkm_instmem base;
+ spinlock_t lock;
+ u64 addr;
+
+ /* Only used if IOMMU is present */
+ struct mutex *mm_mutex;
+ struct nvkm_mm *mm;
+ struct iommu_domain *domain;
+ unsigned long iommu_pgshift;
+
+ /* Only used by DMA API */
+ struct dma_attrs attrs;
+};
+
+/*
+ * Use PRAMIN to read/write data and avoid coherency issues.
+ * PRAMIN uses the GPU path and ensures data will always be coherent.
+ *
+ * A dynamic mapping based solution would be desirable in the future, but
+ * the issue remains of how to maintain coherency efficiently. On ARM it is
+ * not easy (if possible at all?) to create uncached temporary mappings.
+ */
+
+static u32
+gk20a_instobj_rd32(struct nvkm_object *object, u64 offset)
+{
+ struct gk20a_instmem_priv *priv = (void *)nvkm_instmem(object);
+ struct gk20a_instobj_priv *node = (void *)object;
+ unsigned long flags;
+ u64 base = (node->mem->offset + offset) & 0xffffff00000ULL;
+ u64 addr = (node->mem->offset + offset) & 0x000000fffffULL;
+ u32 data;
+
+ spin_lock_irqsave(&priv->lock, flags);
+ if (unlikely(priv->addr != base)) {
+ nv_wr32(priv, 0x001700, base >> 16);
+ priv->addr = base;
+ }
+ data = nv_rd32(priv, 0x700000 + addr);
+ spin_unlock_irqrestore(&priv->lock, flags);
+ return data;
+}
+
+static void
+gk20a_instobj_wr32(struct nvkm_object *object, u64 offset, u32 data)
+{
+ struct gk20a_instmem_priv *priv = (void *)nvkm_instmem(object);
+ struct gk20a_instobj_priv *node = (void *)object;
+ unsigned long flags;
+ u64 base = (node->mem->offset + offset) & 0xffffff00000ULL;
+ u64 addr = (node->mem->offset + offset) & 0x000000fffffULL;
+
+ spin_lock_irqsave(&priv->lock, flags);
+ if (unlikely(priv->addr != base)) {
+ nv_wr32(priv, 0x001700, base >> 16);
+ priv->addr = base;
+ }
+ nv_wr32(priv, 0x700000 + addr, data);
+ spin_unlock_irqrestore(&priv->lock, flags);
+}
+
+static void
+gk20a_instobj_dtor_dma(struct gk20a_instobj_priv *_node)
+{
+ struct gk20a_instobj_dma *node = (void *)_node;
+ struct gk20a_instmem_priv *priv = (void *)nvkm_instmem(node);
+ struct device *dev = nv_device_base(nv_device(priv));
+
+ if (unlikely(!node->cpuaddr))
+ return;
+
+ dma_free_attrs(dev, _node->mem->size << PAGE_SHIFT, node->cpuaddr,
+ node->handle, &priv->attrs);
+}
+
+static void
+gk20a_instobj_dtor_iommu(struct gk20a_instobj_priv *_node)
+{
+ struct gk20a_instobj_iommu *node = (void *)_node;
+ struct gk20a_instmem_priv *priv = (void *)nvkm_instmem(node);
+ struct nvkm_mm_node *r;
+ int i;
+
+ if (unlikely(list_empty(&_node->mem->regions)))
+ return;
+
+ r = list_first_entry(&_node->mem->regions, struct nvkm_mm_node,
+ rl_entry);
+
+ /* clear bit 34 to unmap pages */
+ r->offset &= ~BIT(34 - priv->iommu_pgshift);
+
+ /* Unmap pages from GPU address space and free them */
+ for (i = 0; i < _node->mem->size; i++) {
+ iommu_unmap(priv->domain,
+ (r->offset + i) << priv->iommu_pgshift, PAGE_SIZE);
+ __free_page(node->pages[i]);
+ }
+
+ /* Release area from GPU address space */
+ mutex_lock(priv->mm_mutex);
+ nvkm_mm_free(priv->mm, &r);
+ mutex_unlock(priv->mm_mutex);
+}
+
+static void
+gk20a_instobj_dtor(struct nvkm_object *object)
+{
+ struct gk20a_instobj_priv *node = (void *)object;
+ struct gk20a_instmem_priv *priv = (void *)nvkm_instmem(node);
+
+ if (priv->domain)
+ gk20a_instobj_dtor_iommu(node);
+ else
+ gk20a_instobj_dtor_dma(node);
+
+ nvkm_instobj_destroy(&node->base);
+}
+
+static int
+gk20a_instobj_ctor_dma(struct nvkm_object *parent, struct nvkm_object *engine,
+ struct nvkm_oclass *oclass, u32 npages, u32 align,
+ struct gk20a_instobj_priv **_node)
+{
+ struct gk20a_instobj_dma *node;
+ struct gk20a_instmem_priv *priv = (void *)nvkm_instmem(parent);
+ struct device *dev = nv_device_base(nv_device(parent));
+ int ret;
+
+ ret = nvkm_instobj_create_(parent, engine, oclass, sizeof(*node),
+ (void **)&node);
+ *_node = &node->base;
+ if (ret)
+ return ret;
+
+ node->cpuaddr = dma_alloc_attrs(dev, npages << PAGE_SHIFT,
+ &node->handle, GFP_KERNEL,
+ &priv->attrs);
+ if (!node->cpuaddr) {
+ nv_error(priv, "cannot allocate DMA memory\n");
+ return -ENOMEM;
+ }
+
+ /* alignment check */
+ if (unlikely(node->handle & (align - 1)))
+ nv_warn(priv, "memory not aligned as requested: %pad (0x%x)\n",
+ &node->handle, align);
+
+ /* present memory for being mapped using small pages */
+ node->r.type = 12;
+ node->r.offset = node->handle >> 12;
+ node->r.length = (npages << PAGE_SHIFT) >> 12;
+
+ node->base._mem.offset = node->handle;
+
+ INIT_LIST_HEAD(&node->base._mem.regions);
+ list_add_tail(&node->r.rl_entry, &node->base._mem.regions);
+
+ return 0;
+}
+
+static int
+gk20a_instobj_ctor_iommu(struct nvkm_object *parent, struct nvkm_object *engine,
+ struct nvkm_oclass *oclass, u32 npages, u32 align,
+ struct gk20a_instobj_priv **_node)
+{
+ struct gk20a_instobj_iommu *node;
+ struct gk20a_instmem_priv *priv = (void *)nvkm_instmem(parent);
+ struct nvkm_mm_node *r;
+ int ret;
+ int i;
+
+ ret = nvkm_instobj_create_(parent, engine, oclass,
+ sizeof(*node) + sizeof(node->pages[0]) * npages,
+ (void **)&node);
+ *_node = &node->base;
+ if (ret)
+ return ret;
+
+ /* Allocate backing memory */
+ for (i = 0; i < npages; i++) {
+ struct page *p = alloc_page(GFP_KERNEL);
+
+ if (p == NULL) {
+ ret = -ENOMEM;
+ goto free_pages;
+ }
+ node->pages[i] = p;
+ }
+
+ mutex_lock(priv->mm_mutex);
+ /* Reserve area from GPU address space */
+ ret = nvkm_mm_head(priv->mm, 0, 1, npages, npages,
+ align >> priv->iommu_pgshift, &r);
+ mutex_unlock(priv->mm_mutex);
+ if (ret) {
+ nv_error(priv, "virtual space is full!\n");
+ goto free_pages;
+ }
+
+ /* Map into GPU address space */
+ for (i = 0; i < npages; i++) {
+ struct page *p = node->pages[i];
+ u32 offset = (r->offset + i) << priv->iommu_pgshift;
+
+ ret = iommu_map(priv->domain, offset, page_to_phys(p),
+ PAGE_SIZE, IOMMU_READ | IOMMU_WRITE);
+ if (ret < 0) {
+ nv_error(priv, "IOMMU mapping failure: %d\n", ret);
+
+ while (i-- > 0) {
+ offset -= PAGE_SIZE;
+ iommu_unmap(priv->domain, offset, PAGE_SIZE);
+ }
+ goto release_area;
+ }
+ }
+
+ /* Bit 34 tells that an address is to be resolved through the IOMMU */
+ r->offset |= BIT(34 - priv->iommu_pgshift);
+
+ node->base._mem.offset = ((u64)r->offset) << priv->iommu_pgshift;
+
+ INIT_LIST_HEAD(&node->base._mem.regions);
+ list_add_tail(&r->rl_entry, &node->base._mem.regions);
+
+ return 0;
+
+release_area:
+ mutex_lock(priv->mm_mutex);
+ nvkm_mm_free(priv->mm, &r);
+ mutex_unlock(priv->mm_mutex);
+
+free_pages:
+ for (i = 0; i < npages && node->pages[i] != NULL; i++)
+ __free_page(node->pages[i]);
+
+ return ret;
+}
+
+static int
+gk20a_instobj_ctor(struct nvkm_object *parent, struct nvkm_object *engine,
+ struct nvkm_oclass *oclass, void *data, u32 _size,
+ struct nvkm_object **pobject)
+{
+ struct nvkm_instobj_args *args = data;
+ struct gk20a_instmem_priv *priv = (void *)nvkm_instmem(parent);
+ struct gk20a_instobj_priv *node;
+ u32 size, align;
+ int ret;
+
+ nv_debug(parent, "%s (%s): size: %x align: %x\n", __func__,
+ priv->domain ? "IOMMU" : "DMA", args->size, args->align);
+
+ /* Round size and align to page bounds */
+ size = max(roundup(args->size, PAGE_SIZE), PAGE_SIZE);
+ align = max(roundup(args->align, PAGE_SIZE), PAGE_SIZE);
+
+ if (priv->domain)
+ ret = gk20a_instobj_ctor_iommu(parent, engine, oclass,
+ size >> PAGE_SHIFT, align, &node);
+ else
+ ret = gk20a_instobj_ctor_dma(parent, engine, oclass,
+ size >> PAGE_SHIFT, align, &node);
+ *pobject = nv_object(node);
+ if (ret)
+ return ret;
+
+ node->mem = &node->_mem;
+
+ /* present memory for being mapped using small pages */
+ node->mem->size = size >> 12;
+ node->mem->memtype = 0;
+ node->mem->page_shift = 12;
+
+ node->base.addr = node->mem->offset;
+ node->base.size = size;
+
+ nv_debug(parent, "alloc size: 0x%x, align: 0x%x, gaddr: 0x%llx\n",
+ size, align, node->mem->offset);
+
+ return 0;
+}
+
+static struct nvkm_instobj_impl
+gk20a_instobj_oclass = {
+ .base.ofuncs = &(struct nvkm_ofuncs) {
+ .ctor = gk20a_instobj_ctor,
+ .dtor = gk20a_instobj_dtor,
+ .init = _nvkm_instobj_init,
+ .fini = _nvkm_instobj_fini,
+ .rd32 = gk20a_instobj_rd32,
+ .wr32 = gk20a_instobj_wr32,
+ },
+};
+
+
+
+static int
+gk20a_instmem_fini(struct nvkm_object *object, bool suspend)
+{
+ struct gk20a_instmem_priv *priv = (void *)object;
+ priv->addr = ~0ULL;
+ return nvkm_instmem_fini(&priv->base, suspend);
+}
+
+static int
+gk20a_instmem_ctor(struct nvkm_object *parent, struct nvkm_object *engine,
+ struct nvkm_oclass *oclass, void *data, u32 size,
+ struct nvkm_object **pobject)
+{
+ struct gk20a_instmem_priv *priv;
+ struct nouveau_platform_device *plat;
+ int ret;
+
+ ret = nvkm_instmem_create(parent, engine, oclass, &priv);
+ *pobject = nv_object(priv);
+ if (ret)
+ return ret;
+
+ spin_lock_init(&priv->lock);
+
+ plat = nv_device_to_platform(nv_device(parent));
+ if (plat->gpu->iommu.domain) {
+ priv->domain = plat->gpu->iommu.domain;
+ priv->mm = plat->gpu->iommu.mm;
+ priv->iommu_pgshift = plat->gpu->iommu.pgshift;
+ priv->mm_mutex = &plat->gpu->iommu.mutex;
+
+ nv_info(priv, "using IOMMU\n");
+ } else {
+ init_dma_attrs(&priv->attrs);
+ /*
+ * We will access instmem through PRAMIN and thus do not need a
+ * consistent CPU pointer or kernel mapping
+ */
+ dma_set_attr(DMA_ATTR_NON_CONSISTENT, &priv->attrs);
+ dma_set_attr(DMA_ATTR_WEAK_ORDERING, &priv->attrs);
+ dma_set_attr(DMA_ATTR_WRITE_COMBINE, &priv->attrs);
+ dma_set_attr(DMA_ATTR_NO_KERNEL_MAPPING, &priv->attrs);
+
+ nv_info(priv, "using DMA API\n");
+ }
+
+ return 0;
+}
+
+struct nvkm_oclass *
+gk20a_instmem_oclass = &(struct nvkm_instmem_impl) {
+ .base.handle = NV_SUBDEV(INSTMEM, 0xea),
+ .base.ofuncs = &(struct nvkm_ofuncs) {
+ .ctor = gk20a_instmem_ctor,
+ .dtor = _nvkm_instmem_dtor,
+ .init = _nvkm_instmem_init,
+ .fini = gk20a_instmem_fini,
+ },
+ .instobj = &gk20a_instobj_oclass.base,
+}.base;
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/ltc/gf100.c b/drivers/gpu/drm/nouveau/nvkm/subdev/ltc/gf100.c
index 8e7cc62..7fb5ea0 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/ltc/gf100.c
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/ltc/gf100.c
@@ -136,7 +136,8 @@
struct nvkm_ltc_priv *priv = (void *)object;
nvkm_mm_fini(&priv->tags);
- nvkm_mm_free(&pfb->vram, &priv->tag_ram);
+ if (pfb->ram)
+ nvkm_mm_free(&pfb->vram, &priv->tag_ram);
nvkm_ltc_destroy(priv);
}
@@ -149,6 +150,12 @@
u32 tag_size, tag_margin, tag_align;
int ret;
+ /* No VRAM, no tags for now. */
+ if (!pfb->ram) {
+ priv->num_tags = 0;
+ goto mm_init;
+ }
+
/* tags for 1/4 of VRAM should be enough (8192/4 per GiB of VRAM) */
priv->num_tags = (pfb->ram->size >> 17) / 4;
if (priv->num_tags > (1 << 17))
@@ -183,6 +190,7 @@
priv->tag_base = tag_base;
}
+mm_init:
ret = nvkm_mm_init(&priv->tags, 0, priv->num_tags, 1);
return ret;
}
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/mxm/nv50.c b/drivers/gpu/drm/nouveau/nvkm/subdev/mxm/nv50.c
index 42cac13..f20e4ca 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/mxm/nv50.c
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/mxm/nv50.c
@@ -182,7 +182,7 @@
{
u64 desc = *(u64 *)data;
if ((desc & 0xf0) != 0xf0)
- nv_info(mxm, "unmatched output device 0x%016llx\n", desc);
+ nv_info(mxm, "unmatched output device 0x%016llx\n", desc);
return true;
}
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/Kbuild b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/Kbuild
index 9a150d5..7081d6a 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/Kbuild
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/Kbuild
@@ -4,5 +4,6 @@
nvkm-y += nvkm/subdev/pmu/gf100.o
nvkm-y += nvkm/subdev/pmu/gf110.o
nvkm-y += nvkm/subdev/pmu/gk104.o
+nvkm-y += nvkm/subdev/pmu/gk110.o
nvkm-y += nvkm/subdev/pmu/gk208.o
nvkm-y += nvkm/subdev/pmu/gk20a.o
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gk110.c b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gk110.c
new file mode 100644
index 0000000..89bb94b
--- /dev/null
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gk110.c
@@ -0,0 +1,95 @@
+/*
+ * Copyright 2015 Red Hat Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: Ben Skeggs
+ */
+#define gf110_pmu_code gk110_pmu_code
+#define gf110_pmu_data gk110_pmu_data
+#include "priv.h"
+#include "fuc/gf110.fuc4.h"
+
+#include <subdev/timer.h>
+
+void
+gk110_pmu_pgob(struct nvkm_pmu *pmu, bool enable)
+{
+ static const struct {
+ u32 addr;
+ u32 data;
+ } magic[] = {
+ { 0x020520, 0xfffffffc },
+ { 0x020524, 0xfffffffe },
+ { 0x020524, 0xfffffffc },
+ { 0x020524, 0xfffffff8 },
+ { 0x020524, 0xffffffe0 },
+ { 0x020530, 0xfffffffe },
+ { 0x02052c, 0xfffffffa },
+ { 0x02052c, 0xfffffff0 },
+ { 0x02052c, 0xffffffc0 },
+ { 0x02052c, 0xffffff00 },
+ { 0x02052c, 0xfffffc00 },
+ { 0x02052c, 0xfffcfc00 },
+ { 0x02052c, 0xfff0fc00 },
+ { 0x02052c, 0xff80fc00 },
+ { 0x020528, 0xfffffffe },
+ { 0x020528, 0xfffffffc },
+ };
+ int i;
+
+ nv_mask(pmu, 0x000200, 0x00001000, 0x00000000);
+ nv_rd32(pmu, 0x000200);
+ nv_mask(pmu, 0x000200, 0x08000000, 0x08000000);
+ msleep(50);
+
+ nv_mask(pmu, 0x10a78c, 0x00000002, 0x00000002);
+ nv_mask(pmu, 0x10a78c, 0x00000001, 0x00000001);
+ nv_mask(pmu, 0x10a78c, 0x00000001, 0x00000000);
+
+ nv_mask(pmu, 0x0206b4, 0x00000000, 0x00000000);
+ for (i = 0; i < ARRAY_SIZE(magic); i++) {
+ nv_wr32(pmu, magic[i].addr, magic[i].data);
+ nv_wait(pmu, magic[i].addr, 0x80000000, 0x00000000);
+ }
+
+ nv_mask(pmu, 0x10a78c, 0x00000002, 0x00000000);
+ nv_mask(pmu, 0x10a78c, 0x00000001, 0x00000001);
+ nv_mask(pmu, 0x10a78c, 0x00000001, 0x00000000);
+
+ nv_mask(pmu, 0x000200, 0x08000000, 0x00000000);
+ nv_mask(pmu, 0x000200, 0x00001000, 0x00001000);
+ nv_rd32(pmu, 0x000200);
+}
+
+struct nvkm_oclass *
+gk110_pmu_oclass = &(struct nvkm_pmu_impl) {
+ .base.handle = NV_SUBDEV(PMU, 0xf0),
+ .base.ofuncs = &(struct nvkm_ofuncs) {
+ .ctor = _nvkm_pmu_ctor,
+ .dtor = _nvkm_pmu_dtor,
+ .init = _nvkm_pmu_init,
+ .fini = _nvkm_pmu_fini,
+ },
+ .code.data = gk110_pmu_code,
+ .code.size = sizeof(gk110_pmu_code),
+ .data.data = gk110_pmu_data,
+ .data.size = sizeof(gk110_pmu_data),
+ .pgob = gk110_pmu_pgob,
+}.base;
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gk208.c b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gk208.c
index 6f9c09a..b14134e 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gk208.c
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gk208.c
@@ -37,4 +37,5 @@
.code.size = sizeof(gk208_pmu_code),
.data.data = gk208_pmu_data,
.data.size = sizeof(gk208_pmu_data),
+ .pgob = gk110_pmu_pgob,
}.base;
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gk20a.c b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gk20a.c
index a49934b..594f746 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gk20a.c
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gk20a.c
@@ -159,7 +159,7 @@
nvkm_timer_alarm(priv, 100000000, alarm);
}
-int
+static int
gk20a_pmu_fini(struct nvkm_object *object, bool suspend)
{
struct nvkm_pmu *pmu = (void *)object;
@@ -170,7 +170,7 @@
return nvkm_subdev_fini(&pmu->base, suspend);
}
-int
+static int
gk20a_pmu_init(struct nvkm_object *object)
{
struct nvkm_pmu *pmu = (void *)object;
@@ -192,7 +192,8 @@
return ret;
}
-struct gk20a_pmu_dvfs_data gk20a_dvfs_data= {
+static struct gk20a_pmu_dvfs_data
+gk20a_dvfs_data = {
.p_load_target = 70,
.p_load_max = 90,
.p_smooth = 1,
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/priv.h b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/priv.h
index 9984105..799e7c8 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/priv.h
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/priv.h
@@ -40,4 +40,6 @@
void (*pgob)(struct nvkm_pmu *, bool);
};
+
+void gk110_pmu_pgob(struct nvkm_pmu *, bool);
#endif
diff --git a/drivers/gpu/drm/omapdrm/omap_connector.c b/drivers/gpu/drm/omapdrm/omap_connector.c
index a94b11f7..1773973 100644
--- a/drivers/gpu/drm/omapdrm/omap_connector.c
+++ b/drivers/gpu/drm/omapdrm/omap_connector.c
@@ -102,7 +102,7 @@
timings->data_pclk_edge = OMAPDSS_DRIVE_SIG_RISING_EDGE;
timings->de_level = OMAPDSS_SIG_ACTIVE_HIGH;
- timings->sync_pclk_edge = OMAPDSS_DRIVE_SIG_OPPOSITE_EDGES;
+ timings->sync_pclk_edge = OMAPDSS_DRIVE_SIG_FALLING_EDGE;
}
static enum drm_connector_status omap_connector_detect(
@@ -271,18 +271,6 @@
.best_encoder = omap_connector_attached_encoder,
};
-/* flush an area of the framebuffer (in case of manual update display that
- * is not automatically flushed)
- */
-void omap_connector_flush(struct drm_connector *connector,
- int x, int y, int w, int h)
-{
- struct omap_connector *omap_connector = to_omap_connector(connector);
-
- /* TODO: enable when supported in dss */
- VERB("%s: %d,%d, %dx%d", omap_connector->dssdev->name, x, y, w, h);
-}
-
/* initialize connector */
struct drm_connector *omap_connector_init(struct drm_device *dev,
int connector_type, struct omap_dss_device *dssdev,
diff --git a/drivers/gpu/drm/omapdrm/omap_crtc.c b/drivers/gpu/drm/omapdrm/omap_crtc.c
index b0566a1..f456544 100644
--- a/drivers/gpu/drm/omapdrm/omap_crtc.c
+++ b/drivers/gpu/drm/omapdrm/omap_crtc.c
@@ -28,7 +28,6 @@
struct omap_crtc {
struct drm_crtc base;
- struct drm_plane *plane;
const char *name;
int pipe;
@@ -46,7 +45,6 @@
struct omap_video_timings timings;
bool enabled;
- bool full_update;
struct omap_drm_apply apply;
@@ -74,8 +72,14 @@
* XXX maybe fold into apply_work??
*/
struct work_struct page_flip_work;
+
+ bool ignore_digit_sync_lost;
};
+/* -----------------------------------------------------------------------------
+ * Helper Functions
+ */
+
uint32_t pipe2vbl(struct drm_crtc *crtc)
{
struct omap_crtc *omap_crtc = to_omap_crtc(crtc);
@@ -83,6 +87,22 @@
return dispc_mgr_get_vsync_irq(omap_crtc->channel);
}
+const struct omap_video_timings *omap_crtc_timings(struct drm_crtc *crtc)
+{
+ struct omap_crtc *omap_crtc = to_omap_crtc(crtc);
+ return &omap_crtc->timings;
+}
+
+enum omap_channel omap_crtc_channel(struct drm_crtc *crtc)
+{
+ struct omap_crtc *omap_crtc = to_omap_crtc(crtc);
+ return omap_crtc->channel;
+}
+
+/* -----------------------------------------------------------------------------
+ * DSS Manager Functions
+ */
+
/*
* Manager-ops, callbacks from output when they need to configure
* the upstream part of the video pipe.
@@ -122,7 +142,63 @@
{
}
-static void set_enabled(struct drm_crtc *crtc, bool enable);
+/* Called only from CRTC pre_apply and suspend/resume handlers. */
+static void omap_crtc_set_enabled(struct drm_crtc *crtc, bool enable)
+{
+ struct drm_device *dev = crtc->dev;
+ struct omap_crtc *omap_crtc = to_omap_crtc(crtc);
+ enum omap_channel channel = omap_crtc->channel;
+ struct omap_irq_wait *wait;
+ u32 framedone_irq, vsync_irq;
+ int ret;
+
+ if (dispc_mgr_is_enabled(channel) == enable)
+ return;
+
+ if (omap_crtc->channel == OMAP_DSS_CHANNEL_DIGIT) {
+ /*
+ * Digit output produces some sync lost interrupts during the
+ * first frame when enabling, so we need to ignore those.
+ */
+ omap_crtc->ignore_digit_sync_lost = true;
+ }
+
+ framedone_irq = dispc_mgr_get_framedone_irq(channel);
+ vsync_irq = dispc_mgr_get_vsync_irq(channel);
+
+ if (enable) {
+ wait = omap_irq_wait_init(dev, vsync_irq, 1);
+ } else {
+ /*
+ * When we disable the digit output, we need to wait for
+ * FRAMEDONE to know that DISPC has finished with the output.
+ *
+ * OMAP2/3 does not have FRAMEDONE irq for digit output, and in
+ * that case we need to use vsync interrupt, and wait for both
+ * even and odd frames.
+ */
+
+ if (framedone_irq)
+ wait = omap_irq_wait_init(dev, framedone_irq, 1);
+ else
+ wait = omap_irq_wait_init(dev, vsync_irq, 2);
+ }
+
+ dispc_mgr_enable(channel, enable);
+
+ ret = omap_irq_wait(dev, wait, msecs_to_jiffies(100));
+ if (ret) {
+ dev_err(dev->dev, "%s: timeout waiting for %s\n",
+ omap_crtc->name, enable ? "enable" : "disable");
+ }
+
+ if (omap_crtc->channel == OMAP_DSS_CHANNEL_DIGIT) {
+ omap_crtc->ignore_digit_sync_lost = false;
+ /* make sure the irq handler sees the value above */
+ mb();
+ }
+}
+
static int omap_crtc_enable(struct omap_overlay_manager *mgr)
{
@@ -131,7 +207,7 @@
dispc_mgr_setup(omap_crtc->channel, &omap_crtc->info);
dispc_mgr_set_timings(omap_crtc->channel,
&omap_crtc->timings);
- set_enabled(&omap_crtc->base, true);
+ omap_crtc_set_enabled(&omap_crtc->base, true);
return 0;
}
@@ -140,7 +216,7 @@
{
struct omap_crtc *omap_crtc = omap_crtcs[mgr->id];
- set_enabled(&omap_crtc->base, false);
+ omap_crtc_set_enabled(&omap_crtc->base, false);
}
static void omap_crtc_set_timings(struct omap_overlay_manager *mgr,
@@ -149,7 +225,6 @@
struct omap_crtc *omap_crtc = omap_crtcs[mgr->id];
DBG("%s", omap_crtc->name);
omap_crtc->timings = *timings;
- omap_crtc->full_update = true;
}
static void omap_crtc_set_lcd_config(struct omap_overlay_manager *mgr,
@@ -174,264 +249,33 @@
}
static const struct dss_mgr_ops mgr_ops = {
- .connect = omap_crtc_connect,
- .disconnect = omap_crtc_disconnect,
- .start_update = omap_crtc_start_update,
- .enable = omap_crtc_enable,
- .disable = omap_crtc_disable,
- .set_timings = omap_crtc_set_timings,
- .set_lcd_config = omap_crtc_set_lcd_config,
- .register_framedone_handler = omap_crtc_register_framedone_handler,
- .unregister_framedone_handler = omap_crtc_unregister_framedone_handler,
+ .connect = omap_crtc_connect,
+ .disconnect = omap_crtc_disconnect,
+ .start_update = omap_crtc_start_update,
+ .enable = omap_crtc_enable,
+ .disable = omap_crtc_disable,
+ .set_timings = omap_crtc_set_timings,
+ .set_lcd_config = omap_crtc_set_lcd_config,
+ .register_framedone_handler = omap_crtc_register_framedone_handler,
+ .unregister_framedone_handler = omap_crtc_unregister_framedone_handler,
};
-/*
- * CRTC funcs:
+/* -----------------------------------------------------------------------------
+ * Apply Logic
*/
-static void omap_crtc_destroy(struct drm_crtc *crtc)
-{
- struct omap_crtc *omap_crtc = to_omap_crtc(crtc);
-
- DBG("%s", omap_crtc->name);
-
- WARN_ON(omap_crtc->apply_irq.registered);
- omap_irq_unregister(crtc->dev, &omap_crtc->error_irq);
-
- drm_crtc_cleanup(crtc);
-
- kfree(omap_crtc);
-}
-
-static void omap_crtc_dpms(struct drm_crtc *crtc, int mode)
-{
- struct omap_drm_private *priv = crtc->dev->dev_private;
- struct omap_crtc *omap_crtc = to_omap_crtc(crtc);
- bool enabled = (mode == DRM_MODE_DPMS_ON);
- int i;
-
- DBG("%s: %d", omap_crtc->name, mode);
-
- if (enabled != omap_crtc->enabled) {
- omap_crtc->enabled = enabled;
- omap_crtc->full_update = true;
- omap_crtc_apply(crtc, &omap_crtc->apply);
-
- /* also enable our private plane: */
- WARN_ON(omap_plane_dpms(omap_crtc->plane, mode));
-
- /* and any attached overlay planes: */
- for (i = 0; i < priv->num_planes; i++) {
- struct drm_plane *plane = priv->planes[i];
- if (plane->crtc == crtc)
- WARN_ON(omap_plane_dpms(plane, mode));
- }
- }
-}
-
-static bool omap_crtc_mode_fixup(struct drm_crtc *crtc,
- const struct drm_display_mode *mode,
- struct drm_display_mode *adjusted_mode)
-{
- return true;
-}
-
-static int omap_crtc_mode_set(struct drm_crtc *crtc,
- struct drm_display_mode *mode,
- struct drm_display_mode *adjusted_mode,
- int x, int y,
- struct drm_framebuffer *old_fb)
-{
- struct omap_crtc *omap_crtc = to_omap_crtc(crtc);
-
- mode = adjusted_mode;
-
- DBG("%s: set mode: %d:\"%s\" %d %d %d %d %d %d %d %d %d %d 0x%x 0x%x",
- omap_crtc->name, mode->base.id, mode->name,
- mode->vrefresh, mode->clock,
- mode->hdisplay, mode->hsync_start,
- mode->hsync_end, mode->htotal,
- mode->vdisplay, mode->vsync_start,
- mode->vsync_end, mode->vtotal,
- mode->type, mode->flags);
-
- copy_timings_drm_to_omap(&omap_crtc->timings, mode);
- omap_crtc->full_update = true;
-
- return omap_plane_mode_set(omap_crtc->plane, crtc, crtc->primary->fb,
- 0, 0, mode->hdisplay, mode->vdisplay,
- x << 16, y << 16,
- mode->hdisplay << 16, mode->vdisplay << 16,
- NULL, NULL);
-}
-
-static void omap_crtc_prepare(struct drm_crtc *crtc)
-{
- struct omap_crtc *omap_crtc = to_omap_crtc(crtc);
- DBG("%s", omap_crtc->name);
- omap_crtc_dpms(crtc, DRM_MODE_DPMS_OFF);
-}
-
-static void omap_crtc_commit(struct drm_crtc *crtc)
-{
- struct omap_crtc *omap_crtc = to_omap_crtc(crtc);
- DBG("%s", omap_crtc->name);
- omap_crtc_dpms(crtc, DRM_MODE_DPMS_ON);
-}
-
-static int omap_crtc_mode_set_base(struct drm_crtc *crtc, int x, int y,
- struct drm_framebuffer *old_fb)
-{
- struct omap_crtc *omap_crtc = to_omap_crtc(crtc);
- struct drm_plane *plane = omap_crtc->plane;
- struct drm_display_mode *mode = &crtc->mode;
-
- return omap_plane_mode_set(plane, crtc, crtc->primary->fb,
- 0, 0, mode->hdisplay, mode->vdisplay,
- x << 16, y << 16,
- mode->hdisplay << 16, mode->vdisplay << 16,
- NULL, NULL);
-}
-
-static void vblank_cb(void *arg)
-{
- struct drm_crtc *crtc = arg;
- struct drm_device *dev = crtc->dev;
- struct omap_crtc *omap_crtc = to_omap_crtc(crtc);
- unsigned long flags;
-
- spin_lock_irqsave(&dev->event_lock, flags);
-
- /* wakeup userspace */
- if (omap_crtc->event)
- drm_send_vblank_event(dev, omap_crtc->pipe, omap_crtc->event);
-
- omap_crtc->event = NULL;
- omap_crtc->old_fb = NULL;
-
- spin_unlock_irqrestore(&dev->event_lock, flags);
-}
-
-static void page_flip_worker(struct work_struct *work)
-{
- struct omap_crtc *omap_crtc =
- container_of(work, struct omap_crtc, page_flip_work);
- struct drm_crtc *crtc = &omap_crtc->base;
- struct drm_display_mode *mode = &crtc->mode;
- struct drm_gem_object *bo;
-
- drm_modeset_lock(&crtc->mutex, NULL);
- omap_plane_mode_set(omap_crtc->plane, crtc, crtc->primary->fb,
- 0, 0, mode->hdisplay, mode->vdisplay,
- crtc->x << 16, crtc->y << 16,
- mode->hdisplay << 16, mode->vdisplay << 16,
- vblank_cb, crtc);
- drm_modeset_unlock(&crtc->mutex);
-
- bo = omap_framebuffer_bo(crtc->primary->fb, 0);
- drm_gem_object_unreference_unlocked(bo);
-}
-
-static void page_flip_cb(void *arg)
-{
- struct drm_crtc *crtc = arg;
- struct omap_crtc *omap_crtc = to_omap_crtc(crtc);
- struct omap_drm_private *priv = crtc->dev->dev_private;
-
- /* avoid assumptions about what ctxt we are called from: */
- queue_work(priv->wq, &omap_crtc->page_flip_work);
-}
-
-static int omap_crtc_page_flip_locked(struct drm_crtc *crtc,
- struct drm_framebuffer *fb,
- struct drm_pending_vblank_event *event,
- uint32_t page_flip_flags)
-{
- struct drm_device *dev = crtc->dev;
- struct omap_crtc *omap_crtc = to_omap_crtc(crtc);
- struct drm_plane *primary = crtc->primary;
- struct drm_gem_object *bo;
- unsigned long flags;
-
- DBG("%d -> %d (event=%p)", primary->fb ? primary->fb->base.id : -1,
- fb->base.id, event);
-
- spin_lock_irqsave(&dev->event_lock, flags);
-
- if (omap_crtc->old_fb) {
- spin_unlock_irqrestore(&dev->event_lock, flags);
- dev_err(dev->dev, "already a pending flip\n");
- return -EINVAL;
- }
-
- omap_crtc->event = event;
- omap_crtc->old_fb = primary->fb = fb;
-
- spin_unlock_irqrestore(&dev->event_lock, flags);
-
- /*
- * Hold a reference temporarily until the crtc is updated
- * and takes the reference to the bo. This avoids it
- * getting freed from under us:
- */
- bo = omap_framebuffer_bo(fb, 0);
- drm_gem_object_reference(bo);
-
- omap_gem_op_async(bo, OMAP_GEM_READ, page_flip_cb, crtc);
-
- return 0;
-}
-
-static int omap_crtc_set_property(struct drm_crtc *crtc,
- struct drm_property *property, uint64_t val)
-{
- struct omap_crtc *omap_crtc = to_omap_crtc(crtc);
- struct omap_drm_private *priv = crtc->dev->dev_private;
-
- if (property == priv->rotation_prop) {
- crtc->invert_dimensions =
- !!(val & ((1LL << DRM_ROTATE_90) | (1LL << DRM_ROTATE_270)));
- }
-
- return omap_plane_set_property(omap_crtc->plane, property, val);
-}
-
-static const struct drm_crtc_funcs omap_crtc_funcs = {
- .set_config = drm_crtc_helper_set_config,
- .destroy = omap_crtc_destroy,
- .page_flip = omap_crtc_page_flip_locked,
- .set_property = omap_crtc_set_property,
-};
-
-static const struct drm_crtc_helper_funcs omap_crtc_helper_funcs = {
- .dpms = omap_crtc_dpms,
- .mode_fixup = omap_crtc_mode_fixup,
- .mode_set = omap_crtc_mode_set,
- .prepare = omap_crtc_prepare,
- .commit = omap_crtc_commit,
- .mode_set_base = omap_crtc_mode_set_base,
-};
-
-const struct omap_video_timings *omap_crtc_timings(struct drm_crtc *crtc)
-{
- struct omap_crtc *omap_crtc = to_omap_crtc(crtc);
- return &omap_crtc->timings;
-}
-
-enum omap_channel omap_crtc_channel(struct drm_crtc *crtc)
-{
- struct omap_crtc *omap_crtc = to_omap_crtc(crtc);
- return omap_crtc->channel;
-}
-
static void omap_crtc_error_irq(struct omap_drm_irq *irq, uint32_t irqstatus)
{
struct omap_crtc *omap_crtc =
container_of(irq, struct omap_crtc, error_irq);
- struct drm_crtc *crtc = &omap_crtc->base;
- DRM_ERROR("%s: errors: %08x\n", omap_crtc->name, irqstatus);
- /* avoid getting in a flood, unregister the irq until next vblank */
- __omap_irq_unregister(crtc->dev, &omap_crtc->error_irq);
+
+ if (omap_crtc->ignore_digit_sync_lost) {
+ irqstatus &= ~DISPC_IRQ_SYNC_LOST_DIGIT;
+ if (!irqstatus)
+ return;
+ }
+
+ DRM_ERROR_RATELIMITED("%s: errors: %08x\n", omap_crtc->name, irqstatus);
}
static void omap_crtc_apply_irq(struct omap_drm_irq *irq, uint32_t irqstatus)
@@ -440,9 +284,6 @@
container_of(irq, struct omap_crtc, apply_irq);
struct drm_crtc *crtc = &omap_crtc->base;
- if (!omap_crtc->error_irq.registered)
- __omap_irq_register(crtc->dev, &omap_crtc->error_irq);
-
if (!dispc_mgr_go_busy(omap_crtc->channel)) {
struct omap_drm_private *priv =
crtc->dev->dev_private;
@@ -501,8 +342,8 @@
DBG("%s: GO", omap_crtc->name);
if (dispc_mgr_is_enabled(channel)) {
- omap_irq_register(dev, &omap_crtc->apply_irq);
dispc_mgr_go(channel);
+ omap_irq_register(dev, &omap_crtc->apply_irq);
} else {
struct omap_drm_private *priv = dev->dev_private;
queue_work(priv->wq, &omap_crtc->apply_work);
@@ -541,75 +382,21 @@
return 0;
}
-/* called only from apply */
-static void set_enabled(struct drm_crtc *crtc, bool enable)
-{
- struct drm_device *dev = crtc->dev;
- struct omap_crtc *omap_crtc = to_omap_crtc(crtc);
- enum omap_channel channel = omap_crtc->channel;
- struct omap_irq_wait *wait;
- u32 framedone_irq, vsync_irq;
- int ret;
-
- if (dispc_mgr_is_enabled(channel) == enable)
- return;
-
- /*
- * Digit output produces some sync lost interrupts during the first
- * frame when enabling, so we need to ignore those.
- */
- omap_irq_unregister(crtc->dev, &omap_crtc->error_irq);
-
- framedone_irq = dispc_mgr_get_framedone_irq(channel);
- vsync_irq = dispc_mgr_get_vsync_irq(channel);
-
- if (enable) {
- wait = omap_irq_wait_init(dev, vsync_irq, 1);
- } else {
- /*
- * When we disable the digit output, we need to wait for
- * FRAMEDONE to know that DISPC has finished with the output.
- *
- * OMAP2/3 does not have FRAMEDONE irq for digit output, and in
- * that case we need to use vsync interrupt, and wait for both
- * even and odd frames.
- */
-
- if (framedone_irq)
- wait = omap_irq_wait_init(dev, framedone_irq, 1);
- else
- wait = omap_irq_wait_init(dev, vsync_irq, 2);
- }
-
- dispc_mgr_enable(channel, enable);
-
- ret = omap_irq_wait(dev, wait, msecs_to_jiffies(100));
- if (ret) {
- dev_err(dev->dev, "%s: timeout waiting for %s\n",
- omap_crtc->name, enable ? "enable" : "disable");
- }
-
- omap_irq_register(crtc->dev, &omap_crtc->error_irq);
-}
-
static void omap_crtc_pre_apply(struct omap_drm_apply *apply)
{
struct omap_crtc *omap_crtc =
container_of(apply, struct omap_crtc, apply);
struct drm_crtc *crtc = &omap_crtc->base;
+ struct omap_drm_private *priv = crtc->dev->dev_private;
struct drm_encoder *encoder = NULL;
+ unsigned int i;
- DBG("%s: enabled=%d, full=%d", omap_crtc->name,
- omap_crtc->enabled, omap_crtc->full_update);
+ DBG("%s: enabled=%d", omap_crtc->name, omap_crtc->enabled);
- if (omap_crtc->full_update) {
- struct omap_drm_private *priv = crtc->dev->dev_private;
- int i;
- for (i = 0; i < priv->num_encoders; i++) {
- if (priv->encoders[i]->crtc == crtc) {
- encoder = priv->encoders[i];
- break;
- }
+ for (i = 0; i < priv->num_encoders; i++) {
+ if (priv->encoders[i]->crtc == crtc) {
+ encoder = priv->encoders[i];
+ break;
}
}
@@ -629,8 +416,6 @@
omap_encoder_set_enabled(encoder, true);
}
}
-
- omap_crtc->full_update = false;
}
static void omap_crtc_post_apply(struct omap_drm_apply *apply)
@@ -657,11 +442,245 @@
}
}
+/* -----------------------------------------------------------------------------
+ * CRTC Functions
+ */
+
+static void omap_crtc_destroy(struct drm_crtc *crtc)
+{
+ struct omap_crtc *omap_crtc = to_omap_crtc(crtc);
+
+ DBG("%s", omap_crtc->name);
+
+ WARN_ON(omap_crtc->apply_irq.registered);
+ omap_irq_unregister(crtc->dev, &omap_crtc->error_irq);
+
+ drm_crtc_cleanup(crtc);
+
+ kfree(omap_crtc);
+}
+
+static void omap_crtc_dpms(struct drm_crtc *crtc, int mode)
+{
+ struct omap_drm_private *priv = crtc->dev->dev_private;
+ struct omap_crtc *omap_crtc = to_omap_crtc(crtc);
+ bool enabled = (mode == DRM_MODE_DPMS_ON);
+ int i;
+
+ DBG("%s: %d", omap_crtc->name, mode);
+
+ if (enabled != omap_crtc->enabled) {
+ omap_crtc->enabled = enabled;
+ omap_crtc_apply(crtc, &omap_crtc->apply);
+
+ /* Enable/disable all planes associated with the CRTC. */
+ for (i = 0; i < priv->num_planes; i++) {
+ struct drm_plane *plane = priv->planes[i];
+ if (plane->crtc == crtc)
+ WARN_ON(omap_plane_set_enable(plane, enabled));
+ }
+ }
+}
+
+static bool omap_crtc_mode_fixup(struct drm_crtc *crtc,
+ const struct drm_display_mode *mode,
+ struct drm_display_mode *adjusted_mode)
+{
+ return true;
+}
+
+static int omap_crtc_mode_set(struct drm_crtc *crtc,
+ struct drm_display_mode *mode,
+ struct drm_display_mode *adjusted_mode,
+ int x, int y,
+ struct drm_framebuffer *old_fb)
+{
+ struct omap_crtc *omap_crtc = to_omap_crtc(crtc);
+
+ mode = adjusted_mode;
+
+ DBG("%s: set mode: %d:\"%s\" %d %d %d %d %d %d %d %d %d %d 0x%x 0x%x",
+ omap_crtc->name, mode->base.id, mode->name,
+ mode->vrefresh, mode->clock,
+ mode->hdisplay, mode->hsync_start,
+ mode->hsync_end, mode->htotal,
+ mode->vdisplay, mode->vsync_start,
+ mode->vsync_end, mode->vtotal,
+ mode->type, mode->flags);
+
+ copy_timings_drm_to_omap(&omap_crtc->timings, mode);
+
+ /*
+ * The primary plane CRTC can be reset if the plane is disabled directly
+ * through the universal plane API. Set it again here.
+ */
+ crtc->primary->crtc = crtc;
+
+ return omap_plane_mode_set(crtc->primary, crtc, crtc->primary->fb,
+ 0, 0, mode->hdisplay, mode->vdisplay,
+ x, y, mode->hdisplay, mode->vdisplay,
+ NULL, NULL);
+}
+
+static void omap_crtc_prepare(struct drm_crtc *crtc)
+{
+ struct omap_crtc *omap_crtc = to_omap_crtc(crtc);
+ DBG("%s", omap_crtc->name);
+ omap_crtc_dpms(crtc, DRM_MODE_DPMS_OFF);
+}
+
+static void omap_crtc_commit(struct drm_crtc *crtc)
+{
+ struct omap_crtc *omap_crtc = to_omap_crtc(crtc);
+ DBG("%s", omap_crtc->name);
+ omap_crtc_dpms(crtc, DRM_MODE_DPMS_ON);
+}
+
+static int omap_crtc_mode_set_base(struct drm_crtc *crtc, int x, int y,
+ struct drm_framebuffer *old_fb)
+{
+ struct drm_plane *plane = crtc->primary;
+ struct drm_display_mode *mode = &crtc->mode;
+
+ return omap_plane_mode_set(plane, crtc, crtc->primary->fb,
+ 0, 0, mode->hdisplay, mode->vdisplay,
+ x, y, mode->hdisplay, mode->vdisplay,
+ NULL, NULL);
+}
+
+static void vblank_cb(void *arg)
+{
+ struct drm_crtc *crtc = arg;
+ struct drm_device *dev = crtc->dev;
+ struct omap_crtc *omap_crtc = to_omap_crtc(crtc);
+ unsigned long flags;
+ struct drm_framebuffer *fb;
+
+ spin_lock_irqsave(&dev->event_lock, flags);
+
+ /* wakeup userspace */
+ if (omap_crtc->event)
+ drm_send_vblank_event(dev, omap_crtc->pipe, omap_crtc->event);
+
+ fb = omap_crtc->old_fb;
+
+ omap_crtc->event = NULL;
+ omap_crtc->old_fb = NULL;
+
+ spin_unlock_irqrestore(&dev->event_lock, flags);
+
+ if (fb)
+ drm_framebuffer_unreference(fb);
+}
+
+static void page_flip_worker(struct work_struct *work)
+{
+ struct omap_crtc *omap_crtc =
+ container_of(work, struct omap_crtc, page_flip_work);
+ struct drm_crtc *crtc = &omap_crtc->base;
+ struct drm_display_mode *mode = &crtc->mode;
+ struct drm_gem_object *bo;
+
+ drm_modeset_lock(&crtc->mutex, NULL);
+ omap_plane_mode_set(crtc->primary, crtc, crtc->primary->fb,
+ 0, 0, mode->hdisplay, mode->vdisplay,
+ crtc->x, crtc->y, mode->hdisplay, mode->vdisplay,
+ vblank_cb, crtc);
+ drm_modeset_unlock(&crtc->mutex);
+
+ bo = omap_framebuffer_bo(crtc->primary->fb, 0);
+ drm_gem_object_unreference_unlocked(bo);
+}
+
+static void page_flip_cb(void *arg)
+{
+ struct drm_crtc *crtc = arg;
+ struct omap_crtc *omap_crtc = to_omap_crtc(crtc);
+ struct omap_drm_private *priv = crtc->dev->dev_private;
+
+ /* avoid assumptions about what ctxt we are called from: */
+ queue_work(priv->wq, &omap_crtc->page_flip_work);
+}
+
+static int omap_crtc_page_flip_locked(struct drm_crtc *crtc,
+ struct drm_framebuffer *fb,
+ struct drm_pending_vblank_event *event,
+ uint32_t page_flip_flags)
+{
+ struct drm_device *dev = crtc->dev;
+ struct omap_crtc *omap_crtc = to_omap_crtc(crtc);
+ struct drm_plane *primary = crtc->primary;
+ struct drm_gem_object *bo;
+ unsigned long flags;
+
+ DBG("%d -> %d (event=%p)", primary->fb ? primary->fb->base.id : -1,
+ fb->base.id, event);
+
+ spin_lock_irqsave(&dev->event_lock, flags);
+
+ if (omap_crtc->old_fb) {
+ spin_unlock_irqrestore(&dev->event_lock, flags);
+ dev_err(dev->dev, "already a pending flip\n");
+ return -EBUSY;
+ }
+
+ omap_crtc->event = event;
+ omap_crtc->old_fb = primary->fb = fb;
+ drm_framebuffer_reference(omap_crtc->old_fb);
+
+ spin_unlock_irqrestore(&dev->event_lock, flags);
+
+ /*
+ * Hold a reference temporarily until the crtc is updated
+ * and takes the reference to the bo. This avoids it
+ * getting freed from under us:
+ */
+ bo = omap_framebuffer_bo(fb, 0);
+ drm_gem_object_reference(bo);
+
+ omap_gem_op_async(bo, OMAP_GEM_READ, page_flip_cb, crtc);
+
+ return 0;
+}
+
+static int omap_crtc_set_property(struct drm_crtc *crtc,
+ struct drm_property *property, uint64_t val)
+{
+ struct omap_drm_private *priv = crtc->dev->dev_private;
+
+ if (property == priv->rotation_prop) {
+ crtc->invert_dimensions =
+ !!(val & ((1LL << DRM_ROTATE_90) | (1LL << DRM_ROTATE_270)));
+ }
+
+ return omap_plane_set_property(crtc->primary, property, val);
+}
+
+static const struct drm_crtc_funcs omap_crtc_funcs = {
+ .set_config = drm_crtc_helper_set_config,
+ .destroy = omap_crtc_destroy,
+ .page_flip = omap_crtc_page_flip_locked,
+ .set_property = omap_crtc_set_property,
+};
+
+static const struct drm_crtc_helper_funcs omap_crtc_helper_funcs = {
+ .dpms = omap_crtc_dpms,
+ .mode_fixup = omap_crtc_mode_fixup,
+ .mode_set = omap_crtc_mode_set,
+ .prepare = omap_crtc_prepare,
+ .commit = omap_crtc_commit,
+ .mode_set_base = omap_crtc_mode_set_base,
+};
+
+/* -----------------------------------------------------------------------------
+ * Init and Cleanup
+ */
+
static const char *channel_names[] = {
- [OMAP_DSS_CHANNEL_LCD] = "lcd",
- [OMAP_DSS_CHANNEL_DIGIT] = "tv",
- [OMAP_DSS_CHANNEL_LCD2] = "lcd2",
- [OMAP_DSS_CHANNEL_LCD3] = "lcd3",
+ [OMAP_DSS_CHANNEL_LCD] = "lcd",
+ [OMAP_DSS_CHANNEL_DIGIT] = "tv",
+ [OMAP_DSS_CHANNEL_LCD2] = "lcd2",
+ [OMAP_DSS_CHANNEL_LCD3] = "lcd3",
};
void omap_crtc_pre_init(void)
@@ -681,12 +700,13 @@
struct drm_crtc *crtc = NULL;
struct omap_crtc *omap_crtc;
struct omap_overlay_manager_info *info;
+ int ret;
DBG("%s", channel_names[channel]);
omap_crtc = kzalloc(sizeof(*omap_crtc), GFP_KERNEL);
if (!omap_crtc)
- goto fail;
+ return NULL;
crtc = &omap_crtc->base;
@@ -700,8 +720,6 @@
omap_crtc->apply.post_apply = omap_crtc_post_apply;
omap_crtc->channel = channel;
- omap_crtc->plane = plane;
- omap_crtc->plane->crtc = crtc;
omap_crtc->name = channel_names[channel];
omap_crtc->pipe = id;
@@ -723,18 +741,18 @@
info->trans_key_type = OMAP_DSS_COLOR_KEY_GFX_DST;
info->trans_enabled = false;
- drm_crtc_init(dev, crtc, &omap_crtc_funcs);
+ ret = drm_crtc_init_with_planes(dev, crtc, plane, NULL,
+ &omap_crtc_funcs);
+ if (ret < 0) {
+ kfree(omap_crtc);
+ return NULL;
+ }
+
drm_crtc_helper_add(crtc, &omap_crtc_helper_funcs);
- omap_plane_install_properties(omap_crtc->plane, &crtc->base);
+ omap_plane_install_properties(crtc->primary, &crtc->base);
omap_crtcs[channel] = omap_crtc;
return crtc;
-
-fail:
- if (crtc)
- omap_crtc_destroy(crtc);
-
- return NULL;
}
diff --git a/drivers/gpu/drm/omapdrm/omap_dmm_priv.h b/drivers/gpu/drm/omapdrm/omap_dmm_priv.h
index 58bcd6a..9f32a83 100644
--- a/drivers/gpu/drm/omapdrm/omap_dmm_priv.h
+++ b/drivers/gpu/drm/omapdrm/omap_dmm_priv.h
@@ -148,11 +148,15 @@
bool async;
- wait_queue_head_t wait_for_refill;
+ struct completion compl;
struct list_head idle_node;
};
+struct dmm_platform_data {
+ uint32_t cpu_cache_flags;
+};
+
struct dmm {
struct device *dev;
void __iomem *base;
@@ -183,6 +187,8 @@
/* allocation list and lock */
struct list_head alloc_head;
+
+ const struct dmm_platform_data *plat_data;
};
#endif
diff --git a/drivers/gpu/drm/omapdrm/omap_dmm_tiler.c b/drivers/gpu/drm/omapdrm/omap_dmm_tiler.c
index 56c6055..042038e 100644
--- a/drivers/gpu/drm/omapdrm/omap_dmm_tiler.c
+++ b/drivers/gpu/drm/omapdrm/omap_dmm_tiler.c
@@ -29,6 +29,7 @@
#include <linux/mm.h>
#include <linux/time.h>
#include <linux/list.h>
+#include <linux/completion.h>
#include "omap_dmm_tiler.h"
#include "omap_dmm_priv.h"
@@ -39,6 +40,10 @@
static struct tcm *containers[TILFMT_NFORMATS];
static struct dmm *omap_dmm;
+#if defined(CONFIG_OF)
+static const struct of_device_id dmm_of_match[];
+#endif
+
/* global spinlock for protecting lists */
static DEFINE_SPINLOCK(list_lock);
@@ -58,19 +63,19 @@
uint32_t slot_w; /* width of each slot (in pixels) */
uint32_t slot_h; /* height of each slot (in pixels) */
} geom[TILFMT_NFORMATS] = {
- [TILFMT_8BIT] = GEOM(0, 0, 1),
- [TILFMT_16BIT] = GEOM(0, 1, 2),
- [TILFMT_32BIT] = GEOM(1, 1, 4),
- [TILFMT_PAGE] = GEOM(SLOT_WIDTH_BITS, SLOT_HEIGHT_BITS, 1),
+ [TILFMT_8BIT] = GEOM(0, 0, 1),
+ [TILFMT_16BIT] = GEOM(0, 1, 2),
+ [TILFMT_32BIT] = GEOM(1, 1, 4),
+ [TILFMT_PAGE] = GEOM(SLOT_WIDTH_BITS, SLOT_HEIGHT_BITS, 1),
};
/* lookup table for registers w/ per-engine instances */
static const uint32_t reg[][4] = {
- [PAT_STATUS] = {DMM_PAT_STATUS__0, DMM_PAT_STATUS__1,
- DMM_PAT_STATUS__2, DMM_PAT_STATUS__3},
- [PAT_DESCR] = {DMM_PAT_DESCR__0, DMM_PAT_DESCR__1,
- DMM_PAT_DESCR__2, DMM_PAT_DESCR__3},
+ [PAT_STATUS] = {DMM_PAT_STATUS__0, DMM_PAT_STATUS__1,
+ DMM_PAT_STATUS__2, DMM_PAT_STATUS__3},
+ [PAT_DESCR] = {DMM_PAT_DESCR__0, DMM_PAT_DESCR__1,
+ DMM_PAT_DESCR__2, DMM_PAT_DESCR__3},
};
/* simple allocator to grab next 16 byte aligned memory from txn */
@@ -142,10 +147,10 @@
for (i = 0; i < dmm->num_engines; i++) {
if (status & DMM_IRQSTAT_LST) {
- wake_up_interruptible(&dmm->engines[i].wait_for_refill);
-
if (dmm->engines[i].async)
release_engine(&dmm->engines[i]);
+
+ complete(&dmm->engines[i].compl);
}
status >>= 8;
@@ -269,15 +274,17 @@
/* mark whether it is async to denote list management in IRQ handler */
engine->async = wait ? false : true;
+ reinit_completion(&engine->compl);
+ /* verify that the irq handler sees the 'async' and completion value */
+ smp_mb();
/* kick reload */
writel(engine->refill_pa,
dmm->base + reg[PAT_DESCR][engine->id]);
if (wait) {
- if (wait_event_interruptible_timeout(engine->wait_for_refill,
- wait_status(engine, DMM_PATSTATUS_READY) == 0,
- msecs_to_jiffies(1)) <= 0) {
+ if (!wait_for_completion_timeout(&engine->compl,
+ msecs_to_jiffies(1))) {
dev_err(dmm->dev, "timed out waiting for done\n");
ret = -ETIMEDOUT;
}
@@ -529,6 +536,11 @@
return round_up(geom[fmt].cpp * w, PAGE_SIZE) * h;
}
+uint32_t tiler_get_cpu_cache_flags(void)
+{
+ return omap_dmm->plat_data->cpu_cache_flags;
+}
+
bool dmm_is_available(void)
{
return omap_dmm ? true : false;
@@ -592,6 +604,18 @@
init_waitqueue_head(&omap_dmm->engine_queue);
+ if (dev->dev.of_node) {
+ const struct of_device_id *match;
+
+ match = of_match_node(dmm_of_match, dev->dev.of_node);
+ if (!match) {
+ dev_err(&dev->dev, "failed to find matching device node\n");
+ return -ENODEV;
+ }
+
+ omap_dmm->plat_data = match->data;
+ }
+
/* lookup hwmod data - base address and irq */
mem = platform_get_resource(dev, IORESOURCE_MEM, 0);
if (!mem) {
@@ -696,7 +720,7 @@
(REFILL_BUFFER_SIZE * i);
omap_dmm->engines[i].refill_pa = omap_dmm->refill_pa +
(REFILL_BUFFER_SIZE * i);
- init_waitqueue_head(&omap_dmm->engines[i].wait_for_refill);
+ init_completion(&omap_dmm->engines[i].compl);
list_add(&omap_dmm->engines[i].idle_node, &omap_dmm->idle_head);
}
@@ -941,7 +965,7 @@
}
#endif
-#ifdef CONFIG_PM
+#ifdef CONFIG_PM_SLEEP
static int omap_dmm_resume(struct device *dev)
{
struct tcm_area area;
@@ -965,16 +989,28 @@
return 0;
}
-
-static const struct dev_pm_ops omap_dmm_pm_ops = {
- .resume = omap_dmm_resume,
-};
#endif
+static SIMPLE_DEV_PM_OPS(omap_dmm_pm_ops, NULL, omap_dmm_resume);
+
#if defined(CONFIG_OF)
+static const struct dmm_platform_data dmm_omap4_platform_data = {
+ .cpu_cache_flags = OMAP_BO_WC,
+};
+
+static const struct dmm_platform_data dmm_omap5_platform_data = {
+ .cpu_cache_flags = OMAP_BO_UNCACHED,
+};
+
static const struct of_device_id dmm_of_match[] = {
- { .compatible = "ti,omap4-dmm", },
- { .compatible = "ti,omap5-dmm", },
+ {
+ .compatible = "ti,omap4-dmm",
+ .data = &dmm_omap4_platform_data,
+ },
+ {
+ .compatible = "ti,omap5-dmm",
+ .data = &dmm_omap5_platform_data,
+ },
{},
};
#endif
@@ -986,9 +1022,7 @@
.owner = THIS_MODULE,
.name = DMM_DRIVER_NAME,
.of_match_table = of_match_ptr(dmm_of_match),
-#ifdef CONFIG_PM
.pm = &omap_dmm_pm_ops,
-#endif
},
};
diff --git a/drivers/gpu/drm/omapdrm/omap_dmm_tiler.h b/drivers/gpu/drm/omapdrm/omap_dmm_tiler.h
index 4fdd61e..e83c783 100644
--- a/drivers/gpu/drm/omapdrm/omap_dmm_tiler.h
+++ b/drivers/gpu/drm/omapdrm/omap_dmm_tiler.h
@@ -106,6 +106,7 @@
size_t tiler_size(enum tiler_fmt fmt, uint16_t w, uint16_t h);
size_t tiler_vsize(enum tiler_fmt fmt, uint16_t w, uint16_t h);
void tiler_align(enum tiler_fmt fmt, uint16_t *w, uint16_t *h);
+uint32_t tiler_get_cpu_cache_flags(void);
bool dmm_is_available(void);
extern struct platform_driver omap_dmm_driver;
diff --git a/drivers/gpu/drm/omapdrm/omap_drv.c b/drivers/gpu/drm/omapdrm/omap_drv.c
index 8241ed9..94920d4 100644
--- a/drivers/gpu/drm/omapdrm/omap_drv.c
+++ b/drivers/gpu/drm/omapdrm/omap_drv.c
@@ -128,6 +128,29 @@
return r;
}
+static int omap_modeset_create_crtc(struct drm_device *dev, int id,
+ enum omap_channel channel)
+{
+ struct omap_drm_private *priv = dev->dev_private;
+ struct drm_plane *plane;
+ struct drm_crtc *crtc;
+
+ plane = omap_plane_init(dev, id, DRM_PLANE_TYPE_PRIMARY);
+ if (IS_ERR(plane))
+ return PTR_ERR(plane);
+
+ crtc = omap_crtc_init(dev, plane, channel, id);
+
+ BUG_ON(priv->num_crtcs >= ARRAY_SIZE(priv->crtcs));
+ priv->crtcs[id] = crtc;
+ priv->num_crtcs++;
+
+ priv->planes[id] = plane;
+ priv->num_planes++;
+
+ return 0;
+}
+
static int omap_modeset_init(struct drm_device *dev)
{
struct omap_drm_private *priv = dev->dev_private;
@@ -136,6 +159,7 @@
int num_mgrs = dss_feat_get_num_mgrs();
int num_crtcs;
int i, id = 0;
+ int ret;
drm_mode_config_init(dev);
@@ -209,18 +233,13 @@
* allocated crtc, we create a new crtc for it
*/
if (!channel_used(dev, channel)) {
- struct drm_plane *plane;
- struct drm_crtc *crtc;
-
- plane = omap_plane_init(dev, id, true);
- crtc = omap_crtc_init(dev, plane, channel, id);
-
- BUG_ON(priv->num_crtcs >= ARRAY_SIZE(priv->crtcs));
- priv->crtcs[id] = crtc;
- priv->num_crtcs++;
-
- priv->planes[id] = plane;
- priv->num_planes++;
+ ret = omap_modeset_create_crtc(dev, id, channel);
+ if (ret < 0) {
+ dev_err(dev->dev,
+ "could not create CRTC (channel %u)\n",
+ channel);
+ return ret;
+ }
id++;
}
@@ -234,26 +253,8 @@
/* find a free manager for this crtc */
for (i = 0; i < num_mgrs; i++) {
- if (!channel_used(dev, i)) {
- struct drm_plane *plane;
- struct drm_crtc *crtc;
-
- plane = omap_plane_init(dev, id, true);
- crtc = omap_crtc_init(dev, plane, i, id);
-
- BUG_ON(priv->num_crtcs >=
- ARRAY_SIZE(priv->crtcs));
-
- priv->crtcs[id] = crtc;
- priv->num_crtcs++;
-
- priv->planes[id] = plane;
- priv->num_planes++;
-
+ if (!channel_used(dev, i))
break;
- } else {
- continue;
- }
}
if (i == num_mgrs) {
@@ -261,13 +262,24 @@
dev_err(dev->dev, "no managers left for crtc\n");
return -ENOMEM;
}
+
+ ret = omap_modeset_create_crtc(dev, id, i);
+ if (ret < 0) {
+ dev_err(dev->dev,
+ "could not create CRTC (channel %u)\n", i);
+ return ret;
+ }
}
/*
* Create normal planes for the remaining overlays:
*/
for (; id < num_ovls; id++) {
- struct drm_plane *plane = omap_plane_init(dev, id, false);
+ struct drm_plane *plane;
+
+ plane = omap_plane_init(dev, id, DRM_PLANE_TYPE_OVERLAY);
+ if (IS_ERR(plane))
+ return PTR_ERR(plane);
BUG_ON(priv->num_planes >= ARRAY_SIZE(priv->planes));
priv->planes[priv->num_planes++] = plane;
@@ -286,14 +298,13 @@
for (id = 0; id < priv->num_crtcs; id++) {
struct drm_crtc *crtc = priv->crtcs[id];
enum omap_channel crtc_channel;
- enum omap_dss_output_id supported_outputs;
crtc_channel = omap_crtc_channel(crtc);
- supported_outputs =
- dss_feat_get_supported_outputs(crtc_channel);
- if (supported_outputs & output->id)
+ if (output->dispc_channel == crtc_channel) {
encoder->possible_crtcs |= (1 << id);
+ break;
+ }
}
omap_dss_put_device(output);
@@ -480,6 +491,7 @@
priv->wq = alloc_ordered_workqueue("omapdrm", 0);
+ spin_lock_init(&priv->list_lock);
INIT_LIST_HEAD(&priv->obj_list);
omap_gem_init(dev);
@@ -519,7 +531,8 @@
drm_kms_helper_poll_fini(dev);
- omap_fbdev_free(dev);
+ if (priv->fbdev)
+ omap_fbdev_free(dev);
/* flush crtcs so the fbs get released */
for (i = 0; i < priv->num_crtcs; i++)
@@ -588,9 +601,11 @@
}
}
- ret = drm_fb_helper_restore_fbdev_mode_unlocked(priv->fbdev);
- if (ret)
- DBG("failed to restore crtc mode");
+ if (priv->fbdev) {
+ ret = drm_fb_helper_restore_fbdev_mode_unlocked(priv->fbdev);
+ if (ret)
+ DBG("failed to restore crtc mode");
+ }
}
static void dev_preclose(struct drm_device *dev, struct drm_file *file)
@@ -610,74 +625,57 @@
};
static const struct file_operations omapdriver_fops = {
- .owner = THIS_MODULE,
- .open = drm_open,
- .unlocked_ioctl = drm_ioctl,
- .release = drm_release,
- .mmap = omap_gem_mmap,
- .poll = drm_poll,
- .read = drm_read,
- .llseek = noop_llseek,
+ .owner = THIS_MODULE,
+ .open = drm_open,
+ .unlocked_ioctl = drm_ioctl,
+ .release = drm_release,
+ .mmap = omap_gem_mmap,
+ .poll = drm_poll,
+ .read = drm_read,
+ .llseek = noop_llseek,
};
static struct drm_driver omap_drm_driver = {
- .driver_features =
- DRIVER_HAVE_IRQ | DRIVER_MODESET | DRIVER_GEM | DRIVER_PRIME,
- .load = dev_load,
- .unload = dev_unload,
- .open = dev_open,
- .lastclose = dev_lastclose,
- .preclose = dev_preclose,
- .postclose = dev_postclose,
- .set_busid = drm_platform_set_busid,
- .get_vblank_counter = drm_vblank_count,
- .enable_vblank = omap_irq_enable_vblank,
- .disable_vblank = omap_irq_disable_vblank,
- .irq_preinstall = omap_irq_preinstall,
- .irq_postinstall = omap_irq_postinstall,
- .irq_uninstall = omap_irq_uninstall,
- .irq_handler = omap_irq_handler,
+ .driver_features = DRIVER_HAVE_IRQ | DRIVER_MODESET | DRIVER_GEM
+ | DRIVER_PRIME,
+ .load = dev_load,
+ .unload = dev_unload,
+ .open = dev_open,
+ .lastclose = dev_lastclose,
+ .preclose = dev_preclose,
+ .postclose = dev_postclose,
+ .set_busid = drm_platform_set_busid,
+ .get_vblank_counter = drm_vblank_count,
+ .enable_vblank = omap_irq_enable_vblank,
+ .disable_vblank = omap_irq_disable_vblank,
+ .irq_preinstall = omap_irq_preinstall,
+ .irq_postinstall = omap_irq_postinstall,
+ .irq_uninstall = omap_irq_uninstall,
+ .irq_handler = omap_irq_handler,
#ifdef CONFIG_DEBUG_FS
- .debugfs_init = omap_debugfs_init,
- .debugfs_cleanup = omap_debugfs_cleanup,
+ .debugfs_init = omap_debugfs_init,
+ .debugfs_cleanup = omap_debugfs_cleanup,
#endif
- .prime_handle_to_fd = drm_gem_prime_handle_to_fd,
- .prime_fd_to_handle = drm_gem_prime_fd_to_handle,
- .gem_prime_export = omap_gem_prime_export,
- .gem_prime_import = omap_gem_prime_import,
- .gem_free_object = omap_gem_free_object,
- .gem_vm_ops = &omap_gem_vm_ops,
- .dumb_create = omap_gem_dumb_create,
- .dumb_map_offset = omap_gem_dumb_map_offset,
- .dumb_destroy = drm_gem_dumb_destroy,
- .ioctls = ioctls,
- .num_ioctls = DRM_OMAP_NUM_IOCTLS,
- .fops = &omapdriver_fops,
- .name = DRIVER_NAME,
- .desc = DRIVER_DESC,
- .date = DRIVER_DATE,
- .major = DRIVER_MAJOR,
- .minor = DRIVER_MINOR,
- .patchlevel = DRIVER_PATCHLEVEL,
+ .prime_handle_to_fd = drm_gem_prime_handle_to_fd,
+ .prime_fd_to_handle = drm_gem_prime_fd_to_handle,
+ .gem_prime_export = omap_gem_prime_export,
+ .gem_prime_import = omap_gem_prime_import,
+ .gem_free_object = omap_gem_free_object,
+ .gem_vm_ops = &omap_gem_vm_ops,
+ .dumb_create = omap_gem_dumb_create,
+ .dumb_map_offset = omap_gem_dumb_map_offset,
+ .dumb_destroy = drm_gem_dumb_destroy,
+ .ioctls = ioctls,
+ .num_ioctls = DRM_OMAP_NUM_IOCTLS,
+ .fops = &omapdriver_fops,
+ .name = DRIVER_NAME,
+ .desc = DRIVER_DESC,
+ .date = DRIVER_DATE,
+ .major = DRIVER_MAJOR,
+ .minor = DRIVER_MINOR,
+ .patchlevel = DRIVER_PATCHLEVEL,
};
-static int pdev_suspend(struct platform_device *pDevice, pm_message_t state)
-{
- DBG("");
- return 0;
-}
-
-static int pdev_resume(struct platform_device *device)
-{
- DBG("");
- return 0;
-}
-
-static void pdev_shutdown(struct platform_device *device)
-{
- DBG("");
-}
-
static int pdev_probe(struct platform_device *device)
{
int r;
@@ -709,24 +707,35 @@
return 0;
}
-#ifdef CONFIG_PM
-static const struct dev_pm_ops omapdrm_pm_ops = {
- .resume = omap_gem_resume,
-};
+#ifdef CONFIG_PM_SLEEP
+static int omap_drm_suspend(struct device *dev)
+{
+ struct drm_device *drm_dev = dev_get_drvdata(dev);
+
+ drm_kms_helper_poll_disable(drm_dev);
+
+ return 0;
+}
+
+static int omap_drm_resume(struct device *dev)
+{
+ struct drm_device *drm_dev = dev_get_drvdata(dev);
+
+ drm_kms_helper_poll_enable(drm_dev);
+
+ return omap_gem_resume(dev);
+}
#endif
+static SIMPLE_DEV_PM_OPS(omapdrm_pm_ops, omap_drm_suspend, omap_drm_resume);
+
static struct platform_driver pdev = {
- .driver = {
- .name = DRIVER_NAME,
-#ifdef CONFIG_PM
- .pm = &omapdrm_pm_ops,
-#endif
- },
- .probe = pdev_probe,
- .remove = pdev_remove,
- .suspend = pdev_suspend,
- .resume = pdev_resume,
- .shutdown = pdev_shutdown,
+ .driver = {
+ .name = DRIVER_NAME,
+ .pm = &omapdrm_pm_ops,
+ },
+ .probe = pdev_probe,
+ .remove = pdev_remove,
};
static int __init omap_drm_init(void)
diff --git a/drivers/gpu/drm/omapdrm/omap_drv.h b/drivers/gpu/drm/omapdrm/omap_drv.h
index 60e47b3..b31c79f 100644
--- a/drivers/gpu/drm/omapdrm/omap_drv.h
+++ b/drivers/gpu/drm/omapdrm/omap_drv.h
@@ -105,6 +105,9 @@
struct workqueue_struct *wq;
+ /* lock for obj_list below */
+ spinlock_t list_lock;
+
/* list of GEM objects: */
struct list_head obj_list;
@@ -160,15 +163,15 @@
void omap_crtc_flush(struct drm_crtc *crtc);
struct drm_plane *omap_plane_init(struct drm_device *dev,
- int plane_id, bool private_plane);
-int omap_plane_dpms(struct drm_plane *plane, int mode);
+ int id, enum drm_plane_type type);
+int omap_plane_set_enable(struct drm_plane *plane, bool enable);
int omap_plane_mode_set(struct drm_plane *plane,
- struct drm_crtc *crtc, struct drm_framebuffer *fb,
- int crtc_x, int crtc_y,
- unsigned int crtc_w, unsigned int crtc_h,
- uint32_t src_x, uint32_t src_y,
- uint32_t src_w, uint32_t src_h,
- void (*fxn)(void *), void *arg);
+ struct drm_crtc *crtc, struct drm_framebuffer *fb,
+ int crtc_x, int crtc_y,
+ unsigned int crtc_w, unsigned int crtc_h,
+ unsigned int src_x, unsigned int src_y,
+ unsigned int src_w, unsigned int src_h,
+ void (*fxn)(void *), void *arg);
void omap_plane_install_properties(struct drm_plane *plane,
struct drm_mode_object *obj);
int omap_plane_set_property(struct drm_plane *plane,
@@ -186,8 +189,6 @@
struct drm_encoder *encoder);
struct drm_encoder *omap_connector_attached_encoder(
struct drm_connector *connector);
-void omap_connector_flush(struct drm_connector *connector,
- int x, int y, int w, int h);
bool omap_connector_get_hdmi_mode(struct drm_connector *connector);
void copy_timings_omap_to_drm(struct drm_display_mode *mode,
@@ -208,8 +209,6 @@
struct omap_drm_window *win, struct omap_overlay_info *info);
struct drm_connector *omap_framebuffer_get_next_connector(
struct drm_framebuffer *fb, struct drm_connector *from);
-void omap_framebuffer_flush(struct drm_framebuffer *fb,
- int x, int y, int w, int h);
void omap_gem_init(struct drm_device *dev);
void omap_gem_deinit(struct drm_device *dev);
diff --git a/drivers/gpu/drm/omapdrm/omap_fb.c b/drivers/gpu/drm/omapdrm/omap_fb.c
index 2a5cacd..b2c1a29 100644
--- a/drivers/gpu/drm/omapdrm/omap_fb.c
+++ b/drivers/gpu/drm/omapdrm/omap_fb.c
@@ -86,6 +86,7 @@
struct omap_framebuffer {
struct drm_framebuffer base;
+ int pin_count;
const struct format *format;
struct plane planes[4];
};
@@ -121,18 +122,6 @@
struct drm_file *file_priv, unsigned flags, unsigned color,
struct drm_clip_rect *clips, unsigned num_clips)
{
- int i;
-
- drm_modeset_lock_all(fb->dev);
-
- for (i = 0; i < num_clips; i++) {
- omap_framebuffer_flush(fb, clips[i].x1, clips[i].y1,
- clips[i].x2 - clips[i].x1,
- clips[i].y2 - clips[i].y1);
- }
-
- drm_modeset_unlock_all(fb->dev);
-
return 0;
}
@@ -261,6 +250,11 @@
struct omap_framebuffer *omap_fb = to_omap_framebuffer(fb);
int ret, i, n = drm_format_num_planes(fb->pixel_format);
+ if (omap_fb->pin_count > 0) {
+ omap_fb->pin_count++;
+ return 0;
+ }
+
for (i = 0; i < n; i++) {
struct plane *plane = &omap_fb->planes[i];
ret = omap_gem_get_paddr(plane->bo, &plane->paddr, true);
@@ -269,6 +263,8 @@
omap_gem_dma_sync(plane->bo, DMA_TO_DEVICE);
}
+ omap_fb->pin_count++;
+
return 0;
fail:
@@ -287,6 +283,11 @@
struct omap_framebuffer *omap_fb = to_omap_framebuffer(fb);
int ret, i, n = drm_format_num_planes(fb->pixel_format);
+ omap_fb->pin_count--;
+
+ if (omap_fb->pin_count > 0)
+ return 0;
+
for (i = 0; i < n; i++) {
struct plane *plane = &omap_fb->planes[i];
ret = omap_gem_put_paddr(plane->bo);
@@ -336,34 +337,6 @@
return NULL;
}
-/* flush an area of the framebuffer (in case of manual update display that
- * is not automatically flushed)
- */
-void omap_framebuffer_flush(struct drm_framebuffer *fb,
- int x, int y, int w, int h)
-{
- struct drm_connector *connector = NULL;
-
- VERB("flush: %d,%d %dx%d, fb=%p", x, y, w, h, fb);
-
- /* FIXME: This is racy - no protection against modeset config changes. */
- while ((connector = omap_framebuffer_get_next_connector(fb, connector))) {
- /* only consider connectors that are part of a chain */
- if (connector->encoder && connector->encoder->crtc) {
- /* TODO: maybe this should propagate thru the crtc who
- * could do the coordinate translation..
- */
- struct drm_crtc *crtc = connector->encoder->crtc;
- int cx = max(0, x - crtc->x);
- int cy = max(0, y - crtc->y);
- int cw = w + (x - crtc->x) - cx;
- int ch = h + (y - crtc->y) - cy;
-
- omap_connector_flush(connector, cx, cy, cw, ch);
- }
- }
-}
-
#ifdef CONFIG_DEBUG_FS
void omap_framebuffer_describe(struct drm_framebuffer *fb, struct seq_file *m)
{
@@ -407,7 +380,7 @@
struct drm_framebuffer *omap_framebuffer_init(struct drm_device *dev,
struct drm_mode_fb_cmd2 *mode_cmd, struct drm_gem_object **bos)
{
- struct omap_framebuffer *omap_fb;
+ struct omap_framebuffer *omap_fb = NULL;
struct drm_framebuffer *fb = NULL;
const struct format *format = NULL;
int ret, i, n = drm_format_num_planes(mode_cmd->pixel_format);
@@ -450,6 +423,14 @@
goto fail;
}
+ if (pitch % format->planes[i].stride_bpp != 0) {
+ dev_err(dev->dev,
+ "buffer pitch (%d bytes) is not a multiple of pixel size (%d bytes)\n",
+ pitch, format->planes[i].stride_bpp);
+ ret = -EINVAL;
+ goto fail;
+ }
+
size = pitch * mode_cmd->height / format->planes[i].sub_y;
if (size > (omap_gem_mmap_size(bos[i]) - mode_cmd->offsets[i])) {
@@ -478,8 +459,7 @@
return fb;
fail:
- if (fb)
- omap_framebuffer_destroy(fb);
+ kfree(omap_fb);
return ERR_PTR(ret);
}
diff --git a/drivers/gpu/drm/omapdrm/omap_fbdev.c b/drivers/gpu/drm/omapdrm/omap_fbdev.c
index d292d24..950cd33 100644
--- a/drivers/gpu/drm/omapdrm/omap_fbdev.c
+++ b/drivers/gpu/drm/omapdrm/omap_fbdev.c
@@ -42,42 +42,8 @@
struct work_struct work;
};
-static void omap_fbdev_flush(struct fb_info *fbi, int x, int y, int w, int h);
static struct drm_fb_helper *get_fb(struct fb_info *fbi);
-static ssize_t omap_fbdev_write(struct fb_info *fbi, const char __user *buf,
- size_t count, loff_t *ppos)
-{
- ssize_t res;
-
- res = fb_sys_write(fbi, buf, count, ppos);
- omap_fbdev_flush(fbi, 0, 0, fbi->var.xres, fbi->var.yres);
-
- return res;
-}
-
-static void omap_fbdev_fillrect(struct fb_info *fbi,
- const struct fb_fillrect *rect)
-{
- sys_fillrect(fbi, rect);
- omap_fbdev_flush(fbi, rect->dx, rect->dy, rect->width, rect->height);
-}
-
-static void omap_fbdev_copyarea(struct fb_info *fbi,
- const struct fb_copyarea *area)
-{
- sys_copyarea(fbi, area);
- omap_fbdev_flush(fbi, area->dx, area->dy, area->width, area->height);
-}
-
-static void omap_fbdev_imageblit(struct fb_info *fbi,
- const struct fb_image *image)
-{
- sys_imageblit(fbi, image);
- omap_fbdev_flush(fbi, image->dx, image->dy,
- image->width, image->height);
-}
-
static void pan_worker(struct work_struct *work)
{
struct omap_fbdev *fbdev = container_of(work, struct omap_fbdev, work);
@@ -121,10 +87,10 @@
* basic fbdev ops which write to the framebuffer
*/
.fb_read = fb_sys_read,
- .fb_write = omap_fbdev_write,
- .fb_fillrect = omap_fbdev_fillrect,
- .fb_copyarea = omap_fbdev_copyarea,
- .fb_imageblit = omap_fbdev_imageblit,
+ .fb_write = fb_sys_write,
+ .fb_fillrect = sys_fillrect,
+ .fb_copyarea = sys_copyarea,
+ .fb_imageblit = sys_imageblit,
.fb_check_var = drm_fb_helper_check_var,
.fb_set_par = drm_fb_helper_set_par,
@@ -294,21 +260,6 @@
return fbi->par;
}
-/* flush an area of the framebuffer (in case of manual update display that
- * is not automatically flushed)
- */
-static void omap_fbdev_flush(struct fb_info *fbi, int x, int y, int w, int h)
-{
- struct drm_fb_helper *helper = get_fb(fbi);
-
- if (!helper)
- return;
-
- VERB("flush fbdev: %d,%d %dx%d, fbi=%p", x, y, w, h, fbi);
-
- omap_framebuffer_flush(helper->fb, x, y, w, h);
-}
-
/* initialize fbdev helper */
struct drm_fb_helper *omap_fbdev_init(struct drm_device *dev)
{
diff --git a/drivers/gpu/drm/omapdrm/omap_gem.c b/drivers/gpu/drm/omapdrm/omap_gem.c
index aeb91ed..e9718b9 100644
--- a/drivers/gpu/drm/omapdrm/omap_gem.c
+++ b/drivers/gpu/drm/omapdrm/omap_gem.c
@@ -828,6 +828,7 @@
dev_err(obj->dev->dev,
"could not release unmap: %d\n", ret);
}
+ omap_obj->paddr = 0;
omap_obj->block = NULL;
}
}
@@ -1272,13 +1273,16 @@
void omap_gem_free_object(struct drm_gem_object *obj)
{
struct drm_device *dev = obj->dev;
+ struct omap_drm_private *priv = dev->dev_private;
struct omap_gem_object *omap_obj = to_omap_bo(obj);
evict(obj);
WARN_ON(!mutex_is_locked(&dev->struct_mutex));
+ spin_lock(&priv->list_lock);
list_del(&omap_obj->mm_list);
+ spin_unlock(&priv->list_lock);
drm_gem_free_mmap_offset(obj);
@@ -1358,8 +1362,8 @@
/* currently don't allow cached buffers.. there is some caching
* stuff that needs to be handled better
*/
- flags &= ~(OMAP_BO_CACHED|OMAP_BO_UNCACHED);
- flags |= OMAP_BO_WC;
+ flags &= ~(OMAP_BO_CACHED|OMAP_BO_WC|OMAP_BO_UNCACHED);
+ flags |= tiler_get_cpu_cache_flags();
/* align dimensions to slot boundaries... */
tiler_align(gem2fmt(flags),
@@ -1376,7 +1380,9 @@
if (!omap_obj)
goto fail;
+ spin_lock(&priv->list_lock);
list_add(&omap_obj->mm_list, &priv->obj_list);
+ spin_unlock(&priv->list_lock);
obj = &omap_obj->base;
diff --git a/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c b/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c
index a2dbfb1..b46dabd 100644
--- a/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c
+++ b/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c
@@ -156,16 +156,16 @@
}
static struct dma_buf_ops omap_dmabuf_ops = {
- .map_dma_buf = omap_gem_map_dma_buf,
- .unmap_dma_buf = omap_gem_unmap_dma_buf,
- .release = omap_gem_dmabuf_release,
- .begin_cpu_access = omap_gem_dmabuf_begin_cpu_access,
- .end_cpu_access = omap_gem_dmabuf_end_cpu_access,
- .kmap_atomic = omap_gem_dmabuf_kmap_atomic,
- .kunmap_atomic = omap_gem_dmabuf_kunmap_atomic,
- .kmap = omap_gem_dmabuf_kmap,
- .kunmap = omap_gem_dmabuf_kunmap,
- .mmap = omap_gem_dmabuf_mmap,
+ .map_dma_buf = omap_gem_map_dma_buf,
+ .unmap_dma_buf = omap_gem_unmap_dma_buf,
+ .release = omap_gem_dmabuf_release,
+ .begin_cpu_access = omap_gem_dmabuf_begin_cpu_access,
+ .end_cpu_access = omap_gem_dmabuf_end_cpu_access,
+ .kmap_atomic = omap_gem_dmabuf_kmap_atomic,
+ .kunmap_atomic = omap_gem_dmabuf_kunmap_atomic,
+ .kmap = omap_gem_dmabuf_kmap,
+ .kunmap = omap_gem_dmabuf_kunmap,
+ .mmap = omap_gem_dmabuf_mmap,
};
struct dma_buf *omap_gem_prime_export(struct drm_device *dev,
diff --git a/drivers/gpu/drm/omapdrm/omap_irq.c b/drivers/gpu/drm/omapdrm/omap_irq.c
index f035d2b..3eb097e 100644
--- a/drivers/gpu/drm/omapdrm/omap_irq.c
+++ b/drivers/gpu/drm/omapdrm/omap_irq.c
@@ -34,7 +34,7 @@
struct omap_drm_irq *irq;
uint32_t irqmask = priv->vblank_mask;
- BUG_ON(!spin_is_locked(&list_lock));
+ assert_spin_locked(&list_lock);
list_for_each_entry(irq, &priv->irq_list, node)
irqmask |= irq->irqmask;
diff --git a/drivers/gpu/drm/omapdrm/omap_plane.c b/drivers/gpu/drm/omapdrm/omap_plane.c
index ee8e2b3..1c6b63f 100644
--- a/drivers/gpu/drm/omapdrm/omap_plane.c
+++ b/drivers/gpu/drm/omapdrm/omap_plane.c
@@ -65,12 +65,16 @@
struct callback apply_done_cb;
};
-static void unpin_worker(struct drm_flip_work *work, void *val)
+static void omap_plane_unpin_worker(struct drm_flip_work *work, void *val)
{
struct omap_plane *omap_plane =
container_of(work, struct omap_plane, unpin_work);
struct drm_device *dev = omap_plane->base.dev;
+ /*
+ * omap_framebuffer_pin/unpin are always called from priv->wq,
+ * so there's no need for locking here.
+ */
omap_framebuffer_unpin(val);
mutex_lock(&dev->mode_config.mutex);
drm_framebuffer_unreference(val);
@@ -78,7 +82,8 @@
}
/* update which fb (if any) is pinned for scanout */
-static int update_pin(struct drm_plane *plane, struct drm_framebuffer *fb)
+static int omap_plane_update_pin(struct drm_plane *plane,
+ struct drm_framebuffer *fb)
{
struct omap_plane *omap_plane = to_omap_plane(plane);
struct drm_framebuffer *pinned_fb = omap_plane->pinned_fb;
@@ -121,13 +126,12 @@
struct drm_crtc *crtc = plane->crtc;
enum omap_channel channel;
bool enabled = omap_plane->enabled && crtc;
- bool ilace, replication;
int ret;
DBG("%s, enabled=%d", omap_plane->name, enabled);
/* if fb has changed, pin new fb: */
- update_pin(plane, enabled ? plane->fb : NULL);
+ omap_plane_update_pin(plane, enabled ? plane->fb : NULL);
if (!enabled) {
dispc_ovl_enable(omap_plane->id, false);
@@ -145,20 +149,17 @@
DBG("%d,%d %pad %pad", info->pos_x, info->pos_y,
&info->paddr, &info->p_uv_addr);
- /* TODO: */
- ilace = false;
- replication = false;
+ dispc_ovl_set_channel_out(omap_plane->id, channel);
/* and finally, update omapdss: */
- ret = dispc_ovl_setup(omap_plane->id, info,
- replication, omap_crtc_timings(crtc), false);
+ ret = dispc_ovl_setup(omap_plane->id, info, false,
+ omap_crtc_timings(crtc), false);
if (ret) {
dev_err(dev->dev, "dispc_ovl_setup failed: %d\n", ret);
return;
}
dispc_ovl_enable(omap_plane->id, true);
- dispc_ovl_set_channel_out(omap_plane->id, channel);
}
static void omap_plane_post_apply(struct omap_drm_apply *apply)
@@ -167,7 +168,6 @@
container_of(apply, struct omap_plane, apply);
struct drm_plane *plane = &omap_plane->base;
struct omap_drm_private *priv = plane->dev->dev_private;
- struct omap_overlay_info *info = &omap_plane->info;
struct callback cb;
cb = omap_plane->apply_done_cb;
@@ -177,14 +177,9 @@
if (cb.fxn)
cb.fxn(cb.arg);
-
- if (omap_plane->enabled) {
- omap_framebuffer_flush(plane->fb, info->pos_x, info->pos_y,
- info->out_width, info->out_height);
- }
}
-static int apply(struct drm_plane *plane)
+static int omap_plane_apply(struct drm_plane *plane)
{
if (plane->crtc) {
struct omap_plane *omap_plane = to_omap_plane(plane);
@@ -194,12 +189,12 @@
}
int omap_plane_mode_set(struct drm_plane *plane,
- struct drm_crtc *crtc, struct drm_framebuffer *fb,
- int crtc_x, int crtc_y,
- unsigned int crtc_w, unsigned int crtc_h,
- uint32_t src_x, uint32_t src_y,
- uint32_t src_w, uint32_t src_h,
- void (*fxn)(void *), void *arg)
+ struct drm_crtc *crtc, struct drm_framebuffer *fb,
+ int crtc_x, int crtc_y,
+ unsigned int crtc_w, unsigned int crtc_h,
+ unsigned int src_x, unsigned int src_y,
+ unsigned int src_w, unsigned int src_h,
+ void (*fxn)(void *), void *arg)
{
struct omap_plane *omap_plane = to_omap_plane(plane);
struct omap_drm_window *win = &omap_plane->win;
@@ -209,11 +204,10 @@
win->crtc_w = crtc_w;
win->crtc_h = crtc_h;
- /* src values are in Q16 fixed point, convert to integer: */
- win->src_x = src_x >> 16;
- win->src_y = src_y >> 16;
- win->src_w = src_w >> 16;
- win->src_h = src_h >> 16;
+ win->src_x = src_x;
+ win->src_y = src_y;
+ win->src_w = src_w;
+ win->src_h = src_h;
if (fxn) {
/* omap_crtc should ensure that a new page flip
@@ -225,15 +219,7 @@
omap_plane->apply_done_cb.arg = arg;
}
- if (plane->fb)
- drm_framebuffer_unreference(plane->fb);
-
- drm_framebuffer_reference(fb);
-
- plane->fb = fb;
- plane->crtc = crtc;
-
- return apply(plane);
+ return omap_plane_apply(plane);
}
static int omap_plane_update(struct drm_plane *plane,
@@ -254,17 +240,29 @@
break;
}
+ /*
+ * We don't need to take a reference to the framebuffer as the DRM core
+ * has already done so for the purpose of setting plane->fb.
+ */
+ plane->fb = fb;
+ plane->crtc = crtc;
+
+ /* src values are in Q16 fixed point, convert to integer: */
return omap_plane_mode_set(plane, crtc, fb,
crtc_x, crtc_y, crtc_w, crtc_h,
- src_x, src_y, src_w, src_h,
+ src_x >> 16, src_y >> 16, src_w >> 16, src_h >> 16,
NULL, NULL);
}
static int omap_plane_disable(struct drm_plane *plane)
{
struct omap_plane *omap_plane = to_omap_plane(plane);
+
omap_plane->win.rotation = BIT(DRM_ROTATE_0);
- return omap_plane_dpms(plane, DRM_MODE_DPMS_OFF);
+ omap_plane->info.zorder = plane->type == DRM_PLANE_TYPE_PRIMARY
+ ? 0 : omap_plane->id;
+
+ return omap_plane_set_enable(plane, false);
}
static void omap_plane_destroy(struct drm_plane *plane)
@@ -275,7 +273,6 @@
omap_irq_unregister(plane->dev, &omap_plane->error_irq);
- omap_plane_disable(plane);
drm_plane_cleanup(plane);
drm_flip_work_cleanup(&omap_plane->unpin_work);
@@ -283,18 +280,15 @@
kfree(omap_plane);
}
-int omap_plane_dpms(struct drm_plane *plane, int mode)
+int omap_plane_set_enable(struct drm_plane *plane, bool enable)
{
struct omap_plane *omap_plane = to_omap_plane(plane);
- bool enabled = (mode == DRM_MODE_DPMS_ON);
- int ret = 0;
- if (enabled != omap_plane->enabled) {
- omap_plane->enabled = enabled;
- ret = apply(plane);
- }
+ if (enable == omap_plane->enabled)
+ return 0;
- return ret;
+ omap_plane->enabled = enable;
+ return omap_plane_apply(plane);
}
/* helper to install properties which are common to planes and crtcs */
@@ -342,61 +336,63 @@
if (property == priv->rotation_prop) {
DBG("%s: rotation: %02x", omap_plane->name, (uint32_t)val);
omap_plane->win.rotation = val;
- ret = apply(plane);
+ ret = omap_plane_apply(plane);
} else if (property == priv->zorder_prop) {
DBG("%s: zorder: %02x", omap_plane->name, (uint32_t)val);
omap_plane->info.zorder = val;
- ret = apply(plane);
+ ret = omap_plane_apply(plane);
}
return ret;
}
static const struct drm_plane_funcs omap_plane_funcs = {
- .update_plane = omap_plane_update,
- .disable_plane = omap_plane_disable,
- .destroy = omap_plane_destroy,
- .set_property = omap_plane_set_property,
+ .update_plane = omap_plane_update,
+ .disable_plane = omap_plane_disable,
+ .destroy = omap_plane_destroy,
+ .set_property = omap_plane_set_property,
};
static void omap_plane_error_irq(struct omap_drm_irq *irq, uint32_t irqstatus)
{
struct omap_plane *omap_plane =
container_of(irq, struct omap_plane, error_irq);
- DRM_ERROR("%s: errors: %08x\n", omap_plane->name, irqstatus);
+ DRM_ERROR_RATELIMITED("%s: errors: %08x\n", omap_plane->name,
+ irqstatus);
}
static const char *plane_names[] = {
- [OMAP_DSS_GFX] = "gfx",
- [OMAP_DSS_VIDEO1] = "vid1",
- [OMAP_DSS_VIDEO2] = "vid2",
- [OMAP_DSS_VIDEO3] = "vid3",
+ [OMAP_DSS_GFX] = "gfx",
+ [OMAP_DSS_VIDEO1] = "vid1",
+ [OMAP_DSS_VIDEO2] = "vid2",
+ [OMAP_DSS_VIDEO3] = "vid3",
};
static const uint32_t error_irqs[] = {
- [OMAP_DSS_GFX] = DISPC_IRQ_GFX_FIFO_UNDERFLOW,
- [OMAP_DSS_VIDEO1] = DISPC_IRQ_VID1_FIFO_UNDERFLOW,
- [OMAP_DSS_VIDEO2] = DISPC_IRQ_VID2_FIFO_UNDERFLOW,
- [OMAP_DSS_VIDEO3] = DISPC_IRQ_VID3_FIFO_UNDERFLOW,
+ [OMAP_DSS_GFX] = DISPC_IRQ_GFX_FIFO_UNDERFLOW,
+ [OMAP_DSS_VIDEO1] = DISPC_IRQ_VID1_FIFO_UNDERFLOW,
+ [OMAP_DSS_VIDEO2] = DISPC_IRQ_VID2_FIFO_UNDERFLOW,
+ [OMAP_DSS_VIDEO3] = DISPC_IRQ_VID3_FIFO_UNDERFLOW,
};
/* initialize plane */
struct drm_plane *omap_plane_init(struct drm_device *dev,
- int id, bool private_plane)
+ int id, enum drm_plane_type type)
{
struct omap_drm_private *priv = dev->dev_private;
- struct drm_plane *plane = NULL;
+ struct drm_plane *plane;
struct omap_plane *omap_plane;
struct omap_overlay_info *info;
+ int ret;
- DBG("%s: priv=%d", plane_names[id], private_plane);
+ DBG("%s: type=%d", plane_names[id], type);
omap_plane = kzalloc(sizeof(*omap_plane), GFP_KERNEL);
if (!omap_plane)
- return NULL;
+ return ERR_PTR(-ENOMEM);
drm_flip_work_init(&omap_plane->unpin_work,
- "unpin", unpin_worker);
+ "unpin", omap_plane_unpin_worker);
omap_plane->nformats = omap_framebuffer_get_formats(
omap_plane->formats, ARRAY_SIZE(omap_plane->formats),
@@ -413,8 +409,11 @@
omap_plane->error_irq.irq = omap_plane_error_irq;
omap_irq_register(dev, &omap_plane->error_irq);
- drm_plane_init(dev, plane, (1 << priv->num_crtcs) - 1, &omap_plane_funcs,
- omap_plane->formats, omap_plane->nformats, private_plane);
+ ret = drm_universal_plane_init(dev, plane, (1 << priv->num_crtcs) - 1,
+ &omap_plane_funcs, omap_plane->formats,
+ omap_plane->nformats, type);
+ if (ret < 0)
+ goto error;
omap_plane_install_properties(plane, &plane->base);
@@ -432,10 +431,15 @@
* TODO add ioctl to give userspace an API to change this.. this
* will come in a subsequent patch.
*/
- if (private_plane)
+ if (type == DRM_PLANE_TYPE_PRIMARY)
omap_plane->info.zorder = 0;
else
omap_plane->info.zorder = id;
return plane;
+
+error:
+ omap_irq_unregister(plane->dev, &omap_plane->error_irq);
+ kfree(omap_plane);
+ return NULL;
}
diff --git a/drivers/gpu/drm/panel/Kconfig b/drivers/gpu/drm/panel/Kconfig
index d845837..6d64c7b 100644
--- a/drivers/gpu/drm/panel/Kconfig
+++ b/drivers/gpu/drm/panel/Kconfig
@@ -11,6 +11,7 @@
tristate "support for simple panels"
depends on OF
depends on BACKLIGHT_CLASS_DEVICE
+ select VIDEOMODE_HELPERS
help
DRM panel driver for dumb panels that need at most a regulator and
a GPIO to be powered up. Optionally a backlight can be attached so
diff --git a/drivers/gpu/drm/panel/panel-simple.c b/drivers/gpu/drm/panel/panel-simple.c
index 39806c3..30904a9 100644
--- a/drivers/gpu/drm/panel/panel-simple.c
+++ b/drivers/gpu/drm/panel/panel-simple.c
@@ -33,9 +33,14 @@
#include <drm/drm_mipi_dsi.h>
#include <drm/drm_panel.h>
+#include <video/display_timing.h>
+#include <video/videomode.h>
+
struct panel_desc {
const struct drm_display_mode *modes;
unsigned int num_modes;
+ const struct display_timing *timings;
+ unsigned int num_timings;
unsigned int bpc;
@@ -94,6 +99,25 @@
if (!panel->desc)
return 0;
+ for (i = 0; i < panel->desc->num_timings; i++) {
+ const struct display_timing *dt = &panel->desc->timings[i];
+ struct videomode vm;
+
+ videomode_from_timing(dt, &vm);
+ mode = drm_mode_create(drm);
+ if (!mode) {
+ dev_err(drm->dev, "failed to add mode %ux%u\n",
+ dt->hactive.typ, dt->vactive.typ);
+ continue;
+ }
+
+ drm_display_mode_from_videomode(&vm, mode);
+ drm_mode_set_name(mode);
+
+ drm_mode_probed_add(connector, mode);
+ num++;
+ }
+
for (i = 0; i < panel->desc->num_modes; i++) {
const struct drm_display_mode *m = &panel->desc->modes[i];
@@ -226,12 +250,30 @@
return num;
}
+static int panel_simple_get_timings(struct drm_panel *panel,
+ unsigned int num_timings,
+ struct display_timing *timings)
+{
+ struct panel_simple *p = to_panel_simple(panel);
+ unsigned int i;
+
+ if (p->desc->num_timings < num_timings)
+ num_timings = p->desc->num_timings;
+
+ if (timings)
+ for (i = 0; i < num_timings; i++)
+ timings[i] = p->desc->timings[i];
+
+ return p->desc->num_timings;
+}
+
static const struct drm_panel_funcs panel_simple_funcs = {
.disable = panel_simple_disable,
.unprepare = panel_simple_unprepare,
.prepare = panel_simple_prepare,
.enable = panel_simple_enable,
.get_modes = panel_simple_get_modes,
+ .get_timings = panel_simple_get_timings,
};
static int panel_simple_probe(struct device *dev, const struct panel_desc *desc)
@@ -327,6 +369,31 @@
panel_simple_disable(&panel->base);
}
+static const struct drm_display_mode ampire_am800480r3tmqwa1h_mode = {
+ .clock = 33333,
+ .hdisplay = 800,
+ .hsync_start = 800 + 0,
+ .hsync_end = 800 + 0 + 255,
+ .htotal = 800 + 0 + 255 + 0,
+ .vdisplay = 480,
+ .vsync_start = 480 + 2,
+ .vsync_end = 480 + 2 + 45,
+ .vtotal = 480 + 2 + 45 + 0,
+ .vrefresh = 60,
+ .flags = DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC,
+};
+
+static const struct panel_desc ampire_am800480r3tmqwa1h = {
+ .modes = &ampire_am800480r3tmqwa1h_mode,
+ .num_modes = 1,
+ .bpc = 6,
+ .size = {
+ .width = 152,
+ .height = 91,
+ },
+ .bus_format = MEDIA_BUS_FMT_RGB666_1X18,
+};
+
static const struct drm_display_mode auo_b101aw03_mode = {
.clock = 51450,
.hdisplay = 1024,
@@ -350,6 +417,29 @@
},
};
+static const struct drm_display_mode auo_b101ean01_mode = {
+ .clock = 72500,
+ .hdisplay = 1280,
+ .hsync_start = 1280 + 119,
+ .hsync_end = 1280 + 119 + 32,
+ .htotal = 1280 + 119 + 32 + 21,
+ .vdisplay = 800,
+ .vsync_start = 800 + 4,
+ .vsync_end = 800 + 4 + 20,
+ .vtotal = 800 + 4 + 20 + 8,
+ .vrefresh = 60,
+};
+
+static const struct panel_desc auo_b101ean01 = {
+ .modes = &auo_b101ean01_mode,
+ .num_modes = 1,
+ .bpc = 6,
+ .size = {
+ .width = 217,
+ .height = 136,
+ },
+};
+
static const struct drm_display_mode auo_b101xtn01_mode = {
.clock = 72000,
.hdisplay = 1366,
@@ -615,24 +705,25 @@
.width = 95,
.height = 54,
},
+ .bus_format = MEDIA_BUS_FMT_RGB888_1X24,
};
-static const struct drm_display_mode hannstar_hsd070pww1_mode = {
- .clock = 71100,
- .hdisplay = 1280,
- .hsync_start = 1280 + 1,
- .hsync_end = 1280 + 1 + 158,
- .htotal = 1280 + 1 + 158 + 1,
- .vdisplay = 800,
- .vsync_start = 800 + 1,
- .vsync_end = 800 + 1 + 21,
- .vtotal = 800 + 1 + 21 + 1,
- .vrefresh = 60,
+static const struct display_timing hannstar_hsd070pww1_timing = {
+ .pixelclock = { 64300000, 71100000, 82000000 },
+ .hactive = { 1280, 1280, 1280 },
+ .hfront_porch = { 1, 1, 10 },
+ .hback_porch = { 1, 1, 10 },
+ .hsync_len = { 52, 158, 661 },
+ .vactive = { 800, 800, 800 },
+ .vfront_porch = { 1, 1, 10 },
+ .vback_porch = { 1, 1, 10 },
+ .vsync_len = { 1, 21, 203 },
+ .flags = DISPLAY_FLAGS_DE_HIGH,
};
static const struct panel_desc hannstar_hsd070pww1 = {
- .modes = &hannstar_hsd070pww1_mode,
- .num_modes = 1,
+ .timings = &hannstar_hsd070pww1_timing,
+ .num_timings = 1,
.bpc = 6,
.size = {
.width = 151,
@@ -663,6 +754,31 @@
},
};
+static const struct drm_display_mode innolux_at043tn24_mode = {
+ .clock = 9000,
+ .hdisplay = 480,
+ .hsync_start = 480 + 2,
+ .hsync_end = 480 + 2 + 41,
+ .htotal = 480 + 2 + 41 + 2,
+ .vdisplay = 272,
+ .vsync_start = 272 + 2,
+ .vsync_end = 272 + 2 + 11,
+ .vtotal = 272 + 2 + 11 + 2,
+ .vrefresh = 60,
+ .flags = DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC,
+};
+
+static const struct panel_desc innolux_at043tn24 = {
+ .modes = &innolux_at043tn24_mode,
+ .num_modes = 1,
+ .bpc = 8,
+ .size = {
+ .width = 95,
+ .height = 54,
+ },
+ .bus_format = MEDIA_BUS_FMT_RGB888_1X24,
+};
+
static const struct drm_display_mode innolux_g121i1_l01_mode = {
.clock = 71000,
.hdisplay = 1280,
@@ -733,6 +849,29 @@
},
};
+static const struct drm_display_mode innolux_zj070na_01p_mode = {
+ .clock = 51501,
+ .hdisplay = 1024,
+ .hsync_start = 1024 + 128,
+ .hsync_end = 1024 + 128 + 64,
+ .htotal = 1024 + 128 + 64 + 128,
+ .vdisplay = 600,
+ .vsync_start = 600 + 16,
+ .vsync_end = 600 + 16 + 4,
+ .vtotal = 600 + 16 + 4 + 16,
+ .vrefresh = 60,
+};
+
+static const struct panel_desc innolux_zj070na_01p = {
+ .modes = &innolux_zj070na_01p_mode,
+ .num_modes = 1,
+ .bpc = 6,
+ .size = {
+ .width = 1024,
+ .height = 600,
+ },
+};
+
static const struct drm_display_mode lg_lp129qe_mode = {
.clock = 285250,
.hdisplay = 2560,
@@ -756,6 +895,30 @@
},
};
+static const struct drm_display_mode ortustech_com43h4m85ulc_mode = {
+ .clock = 25000,
+ .hdisplay = 480,
+ .hsync_start = 480 + 10,
+ .hsync_end = 480 + 10 + 10,
+ .htotal = 480 + 10 + 10 + 15,
+ .vdisplay = 800,
+ .vsync_start = 800 + 3,
+ .vsync_end = 800 + 3 + 3,
+ .vtotal = 800 + 3 + 3 + 3,
+ .vrefresh = 60,
+};
+
+static const struct panel_desc ortustech_com43h4m85ulc = {
+ .modes = &ortustech_com43h4m85ulc_mode,
+ .num_modes = 1,
+ .bpc = 8,
+ .size = {
+ .width = 56,
+ .height = 93,
+ },
+ .bus_format = MEDIA_BUS_FMT_RGB888_1X24,
+};
+
static const struct drm_display_mode samsung_ltn101nt05_mode = {
.clock = 54030,
.hdisplay = 1024,
@@ -779,11 +942,63 @@
},
};
+static const struct drm_display_mode samsung_ltn140at29_301_mode = {
+ .clock = 76300,
+ .hdisplay = 1366,
+ .hsync_start = 1366 + 64,
+ .hsync_end = 1366 + 64 + 48,
+ .htotal = 1366 + 64 + 48 + 128,
+ .vdisplay = 768,
+ .vsync_start = 768 + 2,
+ .vsync_end = 768 + 2 + 5,
+ .vtotal = 768 + 2 + 5 + 17,
+ .vrefresh = 60,
+};
+
+static const struct panel_desc samsung_ltn140at29_301 = {
+ .modes = &samsung_ltn140at29_301_mode,
+ .num_modes = 1,
+ .bpc = 6,
+ .size = {
+ .width = 320,
+ .height = 187,
+ },
+};
+
+static const struct drm_display_mode shelly_sca07010_bfn_lnn_mode = {
+ .clock = 33300,
+ .hdisplay = 800,
+ .hsync_start = 800 + 1,
+ .hsync_end = 800 + 1 + 64,
+ .htotal = 800 + 1 + 64 + 64,
+ .vdisplay = 480,
+ .vsync_start = 480 + 1,
+ .vsync_end = 480 + 1 + 23,
+ .vtotal = 480 + 1 + 23 + 22,
+ .vrefresh = 60,
+};
+
+static const struct panel_desc shelly_sca07010_bfn_lnn = {
+ .modes = &shelly_sca07010_bfn_lnn_mode,
+ .num_modes = 1,
+ .size = {
+ .width = 152,
+ .height = 91,
+ },
+ .bus_format = MEDIA_BUS_FMT_RGB666_1X18,
+};
+
static const struct of_device_id platform_of_match[] = {
{
+ .compatible = "ampire,am800480r3tmqwa1h",
+ .data = &ampire_am800480r3tmqwa1h,
+ }, {
.compatible = "auo,b101aw03",
.data = &auo_b101aw03,
}, {
+ .compatible = "auo,b101ean01",
+ .data = &auo_b101ean01,
+ }, {
.compatible = "auo,b101xtn01",
.data = &auo_b101xtn01,
}, {
@@ -826,6 +1041,9 @@
.compatible = "hit,tx23d38vm0caa",
.data = &hitachi_tx23d38vm0caa
}, {
+ .compatible = "innolux,at043tn24",
+ .data = &innolux_at043tn24,
+ }, {
.compatible ="innolux,g121i1-l01",
.data = &innolux_g121i1_l01
}, {
@@ -835,12 +1053,24 @@
.compatible = "innolux,n156bge-l21",
.data = &innolux_n156bge_l21,
}, {
+ .compatible = "innolux,zj070na-01p",
+ .data = &innolux_zj070na_01p,
+ }, {
.compatible = "lg,lp129qe",
.data = &lg_lp129qe,
}, {
+ .compatible = "ortustech,com43h4m85ulc",
+ .data = &ortustech_com43h4m85ulc,
+ }, {
.compatible = "samsung,ltn101nt05",
.data = &samsung_ltn101nt05,
}, {
+ .compatible = "samsung,ltn140at29-301",
+ .data = &samsung_ltn140at29_301,
+ }, {
+ .compatible = "shelly,sca07010-bfn-lnn",
+ .data = &shelly_sca07010_bfn_lnn,
+ }, {
/* sentinel */
}
};
diff --git a/drivers/gpu/drm/qxl/qxl_drv.c b/drivers/gpu/drm/qxl/qxl_drv.c
index 1d9b80c..e2d0708 100644
--- a/drivers/gpu/drm/qxl/qxl_drv.c
+++ b/drivers/gpu/drm/qxl/qxl_drv.c
@@ -102,7 +102,7 @@
/* unpin the front buffers */
list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
- struct drm_crtc_helper_funcs *crtc_funcs = crtc->helper_private;
+ const struct drm_crtc_helper_funcs *crtc_funcs = crtc->helper_private;
if (crtc->enabled)
(*crtc_funcs->disable)(crtc);
}
diff --git a/drivers/gpu/drm/radeon/Kconfig b/drivers/gpu/drm/radeon/Kconfig
index 970f8e9..421ae13 100644
--- a/drivers/gpu/drm/radeon/Kconfig
+++ b/drivers/gpu/drm/radeon/Kconfig
@@ -1,3 +1,11 @@
+config DRM_RADEON_USERPTR
+ bool "Always enable userptr support"
+ depends on DRM_RADEON
+ select MMU_NOTIFIER
+ help
+ This option selects CONFIG_MMU_NOTIFIER if it isn't already
+ selected, in order to enable full userptr support.
+
config DRM_RADEON_UMS
bool "Enable userspace modesetting on radeon (DEPRECATED)"
depends on DRM_RADEON
diff --git a/drivers/gpu/drm/radeon/Makefile b/drivers/gpu/drm/radeon/Makefile
index 4605633..dea53e3 100644
--- a/drivers/gpu/drm/radeon/Makefile
+++ b/drivers/gpu/drm/radeon/Makefile
@@ -81,7 +81,7 @@
rv770_smc.o cypress_dpm.o btc_dpm.o sumo_dpm.o sumo_smc.o trinity_dpm.o \
trinity_smc.o ni_dpm.o si_smc.o si_dpm.o kv_smc.o kv_dpm.o ci_smc.o \
ci_dpm.o dce6_afmt.o radeon_vm.o radeon_ucode.o radeon_ib.o \
- radeon_sync.o radeon_audio.o
+ radeon_sync.o radeon_audio.o radeon_dp_auxch.o radeon_dp_mst.o
radeon-$(CONFIG_MMU_NOTIFIER) += radeon_mn.o
diff --git a/drivers/gpu/drm/radeon/atombios_crtc.c b/drivers/gpu/drm/radeon/atombios_crtc.c
index 86807ee..dac78ad 100644
--- a/drivers/gpu/drm/radeon/atombios_crtc.c
+++ b/drivers/gpu/drm/radeon/atombios_crtc.c
@@ -330,8 +330,10 @@
misc |= ATOM_COMPOSITESYNC;
if (mode->flags & DRM_MODE_FLAG_INTERLACE)
misc |= ATOM_INTERLACE;
- if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
+ if (mode->flags & DRM_MODE_FLAG_DBLCLK)
misc |= ATOM_DOUBLE_CLOCK_MODE;
+ if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
+ misc |= ATOM_H_REPLICATIONBY2 | ATOM_V_REPLICATIONBY2;
args.susModeMiscInfo.usAccess = cpu_to_le16(misc);
args.ucCRTC = radeon_crtc->crtc_id;
@@ -374,8 +376,10 @@
misc |= ATOM_COMPOSITESYNC;
if (mode->flags & DRM_MODE_FLAG_INTERLACE)
misc |= ATOM_INTERLACE;
- if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
+ if (mode->flags & DRM_MODE_FLAG_DBLCLK)
misc |= ATOM_DOUBLE_CLOCK_MODE;
+ if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
+ misc |= ATOM_H_REPLICATIONBY2 | ATOM_V_REPLICATIONBY2;
args.susModeMiscInfo.usAccess = cpu_to_le16(misc);
args.ucCRTC = radeon_crtc->crtc_id;
@@ -606,6 +610,13 @@
}
}
+ if (radeon_encoder->is_mst_encoder) {
+ struct radeon_encoder_mst *mst_enc = radeon_encoder->enc_priv;
+ struct radeon_connector_atom_dig *dig_connector = mst_enc->connector->con_priv;
+
+ dp_clock = dig_connector->dp_clock;
+ }
+
/* use recommended ref_div for ss */
if (radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT)) {
if (radeon_crtc->ss_enabled) {
@@ -952,7 +963,9 @@
radeon_crtc->bpc = 8;
radeon_crtc->ss_enabled = false;
- if ((radeon_encoder->active_device & (ATOM_DEVICE_LCD_SUPPORT | ATOM_DEVICE_DFP_SUPPORT)) ||
+ if (radeon_encoder->is_mst_encoder) {
+ radeon_dp_mst_prepare_pll(crtc, mode);
+ } else if ((radeon_encoder->active_device & (ATOM_DEVICE_LCD_SUPPORT | ATOM_DEVICE_DFP_SUPPORT)) ||
(radeon_encoder_get_dp_bridge_encoder_id(radeon_crtc->encoder) != ENCODER_OBJECT_ID_NONE)) {
struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv;
struct drm_connector *connector =
@@ -2069,6 +2082,12 @@
radeon_crtc->connector = NULL;
return false;
}
+ if (radeon_crtc->encoder) {
+ struct radeon_encoder *radeon_encoder =
+ to_radeon_encoder(radeon_crtc->encoder);
+
+ radeon_crtc->output_csc = radeon_encoder->output_csc;
+ }
if (!radeon_crtc_scaling_mode_fixup(crtc, mode, adjusted_mode))
return false;
if (!atombios_crtc_prepare_pll(crtc, adjusted_mode))
diff --git a/drivers/gpu/drm/radeon/atombios_dp.c b/drivers/gpu/drm/radeon/atombios_dp.c
index 8d74de8..3e3290c 100644
--- a/drivers/gpu/drm/radeon/atombios_dp.c
+++ b/drivers/gpu/drm/radeon/atombios_dp.c
@@ -158,7 +158,7 @@
#define HEADER_SIZE (BARE_ADDRESS_SIZE + 1)
static ssize_t
-radeon_dp_aux_transfer(struct drm_dp_aux *aux, struct drm_dp_aux_msg *msg)
+radeon_dp_aux_transfer_atom(struct drm_dp_aux *aux, struct drm_dp_aux_msg *msg)
{
struct radeon_i2c_chan *chan =
container_of(aux, struct radeon_i2c_chan, aux);
@@ -226,11 +226,20 @@
void radeon_dp_aux_init(struct radeon_connector *radeon_connector)
{
+ struct drm_device *dev = radeon_connector->base.dev;
+ struct radeon_device *rdev = dev->dev_private;
int ret;
radeon_connector->ddc_bus->rec.hpd = radeon_connector->hpd.hpd;
radeon_connector->ddc_bus->aux.dev = radeon_connector->base.kdev;
- radeon_connector->ddc_bus->aux.transfer = radeon_dp_aux_transfer;
+ if (ASIC_IS_DCE5(rdev)) {
+ if (radeon_auxch)
+ radeon_connector->ddc_bus->aux.transfer = radeon_dp_aux_transfer_native;
+ else
+ radeon_connector->ddc_bus->aux.transfer = radeon_dp_aux_transfer_atom;
+ } else {
+ radeon_connector->ddc_bus->aux.transfer = radeon_dp_aux_transfer_atom;
+ }
ret = drm_dp_aux_register(&radeon_connector->ddc_bus->aux);
if (!ret)
@@ -301,8 +310,8 @@
/***** radeon specific DP functions *****/
-static int radeon_dp_get_max_link_rate(struct drm_connector *connector,
- u8 dpcd[DP_DPCD_SIZE])
+int radeon_dp_get_max_link_rate(struct drm_connector *connector,
+ u8 dpcd[DP_DPCD_SIZE])
{
int max_link_rate;
diff --git a/drivers/gpu/drm/radeon/atombios_encoders.c b/drivers/gpu/drm/radeon/atombios_encoders.c
index c39c1d0..f57c1ab 100644
--- a/drivers/gpu/drm/radeon/atombios_encoders.c
+++ b/drivers/gpu/drm/radeon/atombios_encoders.c
@@ -671,7 +671,15 @@
struct drm_connector *connector;
struct radeon_connector *radeon_connector;
struct radeon_connector_atom_dig *dig_connector;
+ struct radeon_encoder_atom_dig *dig_enc;
+ if (radeon_encoder_is_digital(encoder)) {
+ dig_enc = radeon_encoder->enc_priv;
+ if (dig_enc->active_mst_links)
+ return ATOM_ENCODER_MODE_DP_MST;
+ }
+ if (radeon_encoder->is_mst_encoder || radeon_encoder->offset)
+ return ATOM_ENCODER_MODE_DP_MST;
/* dp bridges are always DP */
if (radeon_encoder_get_dp_bridge_encoder_id(encoder) != ENCODER_OBJECT_ID_NONE)
return ATOM_ENCODER_MODE_DP;
@@ -823,7 +831,7 @@
};
void
-atombios_dig_encoder_setup(struct drm_encoder *encoder, int action, int panel_mode)
+atombios_dig_encoder_setup2(struct drm_encoder *encoder, int action, int panel_mode, int enc_override)
{
struct drm_device *dev = encoder->dev;
struct radeon_device *rdev = dev->dev_private;
@@ -920,7 +928,10 @@
if (ENCODER_MODE_IS_DP(args.v3.ucEncoderMode) && (dp_clock == 270000))
args.v1.ucConfig |= ATOM_ENCODER_CONFIG_V3_DPLINKRATE_2_70GHZ;
- args.v3.acConfig.ucDigSel = dig->dig_encoder;
+ if (enc_override != -1)
+ args.v3.acConfig.ucDigSel = enc_override;
+ else
+ args.v3.acConfig.ucDigSel = dig->dig_encoder;
args.v3.ucBitPerColor = radeon_atom_get_bpc(encoder);
break;
case 4:
@@ -948,7 +959,11 @@
else
args.v1.ucConfig |= ATOM_ENCODER_CONFIG_V4_DPLINKRATE_1_62GHZ;
}
- args.v4.acConfig.ucDigSel = dig->dig_encoder;
+
+ if (enc_override != -1)
+ args.v4.acConfig.ucDigSel = enc_override;
+ else
+ args.v4.acConfig.ucDigSel = dig->dig_encoder;
args.v4.ucBitPerColor = radeon_atom_get_bpc(encoder);
if (hpd_id == RADEON_HPD_NONE)
args.v4.ucHPD_ID = 0;
@@ -969,6 +984,12 @@
}
+void
+atombios_dig_encoder_setup(struct drm_encoder *encoder, int action, int panel_mode)
+{
+ atombios_dig_encoder_setup2(encoder, action, panel_mode, -1);
+}
+
union dig_transmitter_control {
DIG_TRANSMITTER_CONTROL_PS_ALLOCATION v1;
DIG_TRANSMITTER_CONTROL_PARAMETERS_V2 v2;
@@ -978,7 +999,7 @@
};
void
-atombios_dig_transmitter_setup(struct drm_encoder *encoder, int action, uint8_t lane_num, uint8_t lane_set)
+atombios_dig_transmitter_setup2(struct drm_encoder *encoder, int action, uint8_t lane_num, uint8_t lane_set, int fe)
{
struct drm_device *dev = encoder->dev;
struct radeon_device *rdev = dev->dev_private;
@@ -1328,7 +1349,7 @@
args.v5.asConfig.ucHPDSel = 0;
else
args.v5.asConfig.ucHPDSel = hpd_id + 1;
- args.v5.ucDigEncoderSel = 1 << dig_encoder;
+ args.v5.ucDigEncoderSel = (fe != -1) ? (1 << fe) : (1 << dig_encoder);
args.v5.ucDPLaneSet = lane_set;
break;
default:
@@ -1344,6 +1365,12 @@
atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args);
}
+void
+atombios_dig_transmitter_setup(struct drm_encoder *encoder, int action, uint8_t lane_num, uint8_t lane_set)
+{
+ atombios_dig_transmitter_setup2(encoder, action, lane_num, lane_set, -1);
+}
+
bool
atombios_set_edp_panel_power(struct drm_connector *connector, int action)
{
@@ -1687,6 +1714,11 @@
case DRM_MODE_DPMS_STANDBY:
case DRM_MODE_DPMS_SUSPEND:
case DRM_MODE_DPMS_OFF:
+
+ /* don't power off encoders with active MST links */
+ if (dig->active_mst_links)
+ return;
+
if (ASIC_IS_DCE4(rdev)) {
if (ENCODER_MODE_IS_DP(atombios_get_encoder_mode(encoder)) && connector)
atombios_dig_encoder_setup(encoder, ATOM_ENCODER_CMD_DP_VIDEO_OFF, 0);
@@ -1955,6 +1987,53 @@
radeon_atombios_encoder_crtc_scratch_regs(encoder, radeon_crtc->crtc_id);
}
+void
+atombios_set_mst_encoder_crtc_source(struct drm_encoder *encoder, int fe)
+{
+ struct drm_device *dev = encoder->dev;
+ struct radeon_device *rdev = dev->dev_private;
+ struct radeon_crtc *radeon_crtc = to_radeon_crtc(encoder->crtc);
+ int index = GetIndexIntoMasterTable(COMMAND, SelectCRTC_Source);
+ uint8_t frev, crev;
+ union crtc_source_param args;
+
+ memset(&args, 0, sizeof(args));
+
+ if (!atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev, &crev))
+ return;
+
+ if (frev != 1 && crev != 2)
+ DRM_ERROR("Unknown table for MST %d, %d\n", frev, crev);
+
+ args.v2.ucCRTC = radeon_crtc->crtc_id;
+ args.v2.ucEncodeMode = ATOM_ENCODER_MODE_DP_MST;
+
+ switch (fe) {
+ case 0:
+ args.v2.ucEncoderID = ASIC_INT_DIG1_ENCODER_ID;
+ break;
+ case 1:
+ args.v2.ucEncoderID = ASIC_INT_DIG2_ENCODER_ID;
+ break;
+ case 2:
+ args.v2.ucEncoderID = ASIC_INT_DIG3_ENCODER_ID;
+ break;
+ case 3:
+ args.v2.ucEncoderID = ASIC_INT_DIG4_ENCODER_ID;
+ break;
+ case 4:
+ args.v2.ucEncoderID = ASIC_INT_DIG5_ENCODER_ID;
+ break;
+ case 5:
+ args.v2.ucEncoderID = ASIC_INT_DIG6_ENCODER_ID;
+ break;
+ case 6:
+ args.v2.ucEncoderID = ASIC_INT_DIG7_ENCODER_ID;
+ break;
+ }
+ atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args);
+}
+
static void
atombios_apply_encoder_quirks(struct drm_encoder *encoder,
struct drm_display_mode *mode)
@@ -2003,7 +2082,14 @@
}
}
-static int radeon_atom_pick_dig_encoder(struct drm_encoder *encoder)
+void radeon_atom_release_dig_encoder(struct radeon_device *rdev, int enc_idx)
+{
+ if (enc_idx < 0)
+ return;
+ rdev->mode_info.active_encoders &= ~(1 << enc_idx);
+}
+
+int radeon_atom_pick_dig_encoder(struct drm_encoder *encoder, int fe_idx)
{
struct drm_device *dev = encoder->dev;
struct radeon_device *rdev = dev->dev_private;
@@ -2012,71 +2098,79 @@
struct drm_encoder *test_encoder;
struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv;
uint32_t dig_enc_in_use = 0;
+ int enc_idx = -1;
+ if (fe_idx >= 0) {
+ enc_idx = fe_idx;
+ goto assigned;
+ }
if (ASIC_IS_DCE6(rdev)) {
/* DCE6 */
switch (radeon_encoder->encoder_id) {
case ENCODER_OBJECT_ID_INTERNAL_UNIPHY:
if (dig->linkb)
- return 1;
+ enc_idx = 1;
else
- return 0;
+ enc_idx = 0;
break;
case ENCODER_OBJECT_ID_INTERNAL_UNIPHY1:
if (dig->linkb)
- return 3;
+ enc_idx = 3;
else
- return 2;
+ enc_idx = 2;
break;
case ENCODER_OBJECT_ID_INTERNAL_UNIPHY2:
if (dig->linkb)
- return 5;
+ enc_idx = 5;
else
- return 4;
+ enc_idx = 4;
break;
case ENCODER_OBJECT_ID_INTERNAL_UNIPHY3:
- return 6;
+ enc_idx = 6;
break;
}
+ goto assigned;
} else if (ASIC_IS_DCE4(rdev)) {
/* DCE4/5 */
if (ASIC_IS_DCE41(rdev) && !ASIC_IS_DCE61(rdev)) {
/* ontario follows DCE4 */
if (rdev->family == CHIP_PALM) {
if (dig->linkb)
- return 1;
+ enc_idx = 1;
else
- return 0;
+ enc_idx = 0;
} else
/* llano follows DCE3.2 */
- return radeon_crtc->crtc_id;
+ enc_idx = radeon_crtc->crtc_id;
} else {
switch (radeon_encoder->encoder_id) {
case ENCODER_OBJECT_ID_INTERNAL_UNIPHY:
if (dig->linkb)
- return 1;
+ enc_idx = 1;
else
- return 0;
+ enc_idx = 0;
break;
case ENCODER_OBJECT_ID_INTERNAL_UNIPHY1:
if (dig->linkb)
- return 3;
+ enc_idx = 3;
else
- return 2;
+ enc_idx = 2;
break;
case ENCODER_OBJECT_ID_INTERNAL_UNIPHY2:
if (dig->linkb)
- return 5;
+ enc_idx = 5;
else
- return 4;
+ enc_idx = 4;
break;
}
}
+ goto assigned;
}
/* on DCE32 an encoder can drive any block so just use the crtc id */
if (ASIC_IS_DCE32(rdev)) {
- return radeon_crtc->crtc_id;
+ enc_idx = radeon_crtc->crtc_id;
+ goto assigned;
}
/* on DCE3 - LVTMA can only be driven by DIGB */
@@ -2104,6 +2198,17 @@
if (!(dig_enc_in_use & 1))
return 0;
return 1;
+
+assigned:
+ if (enc_idx == -1) {
+ DRM_ERROR("Got encoder index incorrect - returning 0\n");
+ return 0;
+ }
+ if (rdev->mode_info.active_encoders & (1 << enc_idx)) {
+ DRM_ERROR("chosen encoder in use %d\n", enc_idx);
+ }
+ rdev->mode_info.active_encoders |= (1 << enc_idx);
+ return enc_idx;
}
/* This only needs to be called once at startup */
@@ -2362,7 +2467,9 @@
ENCODER_OBJECT_ID_NONE)) {
struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv;
if (dig) {
- dig->dig_encoder = radeon_atom_pick_dig_encoder(encoder);
+ if (dig->dig_encoder >= 0)
+ radeon_atom_release_dig_encoder(rdev, dig->dig_encoder);
+ dig->dig_encoder = radeon_atom_pick_dig_encoder(encoder, -1);
if (radeon_encoder->active_device & ATOM_DEVICE_DFP_SUPPORT) {
if (rdev->family >= CHIP_R600)
dig->afmt = rdev->mode_info.afmt[dig->dig_encoder];
@@ -2464,10 +2571,18 @@
disable_done:
if (radeon_encoder_is_digital(encoder)) {
- dig = radeon_encoder->enc_priv;
- dig->dig_encoder = -1;
- }
- radeon_encoder->active_device = 0;
+ if (atombios_get_encoder_mode(encoder) == ATOM_ENCODER_MODE_HDMI) {
+ if (rdev->asic->display.hdmi_enable)
+ radeon_hdmi_enable(rdev, encoder, false);
+ }
+ if (atombios_get_encoder_mode(encoder) != ATOM_ENCODER_MODE_DP_MST) {
+ dig = radeon_encoder->enc_priv;
+ radeon_atom_release_dig_encoder(rdev, dig->dig_encoder);
+ dig->dig_encoder = -1;
+ radeon_encoder->active_device = 0;
+ }
+ } else
+ radeon_encoder->active_device = 0;
}
/* these are handled by the primary encoders */
diff --git a/drivers/gpu/drm/radeon/btc_dpm.c b/drivers/gpu/drm/radeon/btc_dpm.c
index db08f17..69556f5 100644
--- a/drivers/gpu/drm/radeon/btc_dpm.c
+++ b/drivers/gpu/drm/radeon/btc_dpm.c
@@ -2751,13 +2751,54 @@
else /* current_index == 2 */
pl = &ps->high;
seq_printf(m, "uvd vclk: %d dclk: %d\n", rps->vclk, rps->dclk);
- if (rdev->family >= CHIP_CEDAR) {
- seq_printf(m, "power level %d sclk: %u mclk: %u vddc: %u vddci: %u\n",
- current_index, pl->sclk, pl->mclk, pl->vddc, pl->vddci);
- } else {
- seq_printf(m, "power level %d sclk: %u mclk: %u vddc: %u\n",
- current_index, pl->sclk, pl->mclk, pl->vddc);
- }
+ seq_printf(m, "power level %d sclk: %u mclk: %u vddc: %u vddci: %u\n",
+ current_index, pl->sclk, pl->mclk, pl->vddc, pl->vddci);
+ }
+}
+
+u32 btc_dpm_get_current_sclk(struct radeon_device *rdev)
+{
+ struct evergreen_power_info *eg_pi = evergreen_get_pi(rdev);
+ struct radeon_ps *rps = &eg_pi->current_rps;
+ struct rv7xx_ps *ps = rv770_get_ps(rps);
+ struct rv7xx_pl *pl;
+ u32 current_index =
+ (RREG32(TARGET_AND_CURRENT_PROFILE_INDEX) & CURRENT_PROFILE_INDEX_MASK) >>
+ CURRENT_PROFILE_INDEX_SHIFT;
+
+ if (current_index > 2) {
+ return 0;
+ } else {
+ if (current_index == 0)
+ pl = &ps->low;
+ else if (current_index == 1)
+ pl = &ps->medium;
+ else /* current_index == 2 */
+ pl = &ps->high;
+ return pl->sclk;
+ }
+}
+
+u32 btc_dpm_get_current_mclk(struct radeon_device *rdev)
+{
+ struct evergreen_power_info *eg_pi = evergreen_get_pi(rdev);
+ struct radeon_ps *rps = &eg_pi->current_rps;
+ struct rv7xx_ps *ps = rv770_get_ps(rps);
+ struct rv7xx_pl *pl;
+ u32 current_index =
+ (RREG32(TARGET_AND_CURRENT_PROFILE_INDEX) & CURRENT_PROFILE_INDEX_MASK) >>
+ CURRENT_PROFILE_INDEX_SHIFT;
+
+ if (current_index > 2) {
+ return 0;
+ } else {
+ if (current_index == 0)
+ pl = &ps->low;
+ else if (current_index == 1)
+ pl = &ps->medium;
+ else /* current_index == 2 */
+ pl = &ps->high;
+ return pl->mclk;
}
}
diff --git a/drivers/gpu/drm/radeon/ci_dpm.c b/drivers/gpu/drm/radeon/ci_dpm.c
index bcd2f1f..8730562 100644
--- a/drivers/gpu/drm/radeon/ci_dpm.c
+++ b/drivers/gpu/drm/radeon/ci_dpm.c
@@ -5922,6 +5922,20 @@
r600_dpm_print_ps_status(rdev, rps);
}
+u32 ci_dpm_get_current_sclk(struct radeon_device *rdev)
+{
+ u32 sclk = ci_get_average_sclk_freq(rdev);
+
+ return sclk;
+}
+
+u32 ci_dpm_get_current_mclk(struct radeon_device *rdev)
+{
+ u32 mclk = ci_get_average_mclk_freq(rdev);
+
+ return mclk;
+}
+
u32 ci_dpm_get_sclk(struct radeon_device *rdev, bool low)
{
struct ci_power_info *pi = ci_get_pi(rdev);
diff --git a/drivers/gpu/drm/radeon/cik.c b/drivers/gpu/drm/radeon/cik.c
index 3e670d3..28faea9 100644
--- a/drivers/gpu/drm/radeon/cik.c
+++ b/drivers/gpu/drm/radeon/cik.c
@@ -141,6 +141,39 @@
static void cik_enable_gui_idle_interrupt(struct radeon_device *rdev,
bool enable);
+/**
+ * cik_get_allowed_info_register - fetch the register for the info ioctl
+ *
+ * @rdev: radeon_device pointer
+ * @reg: register offset in bytes
+ * @val: register value
+ *
+ * Returns 0 for success or -EINVAL for an invalid register
+ *
+ */
+int cik_get_allowed_info_register(struct radeon_device *rdev,
+ u32 reg, u32 *val)
+{
+ switch (reg) {
+ case GRBM_STATUS:
+ case GRBM_STATUS2:
+ case GRBM_STATUS_SE0:
+ case GRBM_STATUS_SE1:
+ case GRBM_STATUS_SE2:
+ case GRBM_STATUS_SE3:
+ case SRBM_STATUS:
+ case SRBM_STATUS2:
+ case (SDMA0_STATUS_REG + SDMA0_REGISTER_OFFSET):
+ case (SDMA0_STATUS_REG + SDMA1_REGISTER_OFFSET):
+ case UVD_STATUS:
+ /* TODO VCE */
+ *val = RREG32(reg);
+ return 0;
+ default:
+ return -EINVAL;
+ }
+}
+
/* get temperature in millidegrees */
int ci_get_temp(struct radeon_device *rdev)
{
@@ -7394,12 +7427,12 @@
(CNTX_BUSY_INT_ENABLE | CNTX_EMPTY_INT_ENABLE);
cp_int_cntl |= PRIV_INSTR_INT_ENABLE | PRIV_REG_INT_ENABLE;
- hpd1 = RREG32(DC_HPD1_INT_CONTROL) & ~DC_HPDx_INT_EN;
- hpd2 = RREG32(DC_HPD2_INT_CONTROL) & ~DC_HPDx_INT_EN;
- hpd3 = RREG32(DC_HPD3_INT_CONTROL) & ~DC_HPDx_INT_EN;
- hpd4 = RREG32(DC_HPD4_INT_CONTROL) & ~DC_HPDx_INT_EN;
- hpd5 = RREG32(DC_HPD5_INT_CONTROL) & ~DC_HPDx_INT_EN;
- hpd6 = RREG32(DC_HPD6_INT_CONTROL) & ~DC_HPDx_INT_EN;
+ hpd1 = RREG32(DC_HPD1_INT_CONTROL) & ~(DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN);
+ hpd2 = RREG32(DC_HPD2_INT_CONTROL) & ~(DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN);
+ hpd3 = RREG32(DC_HPD3_INT_CONTROL) & ~(DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN);
+ hpd4 = RREG32(DC_HPD4_INT_CONTROL) & ~(DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN);
+ hpd5 = RREG32(DC_HPD5_INT_CONTROL) & ~(DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN);
+ hpd6 = RREG32(DC_HPD6_INT_CONTROL) & ~(DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN);
dma_cntl = RREG32(SDMA0_CNTL + SDMA0_REGISTER_OFFSET) & ~TRAP_ENABLE;
dma_cntl1 = RREG32(SDMA0_CNTL + SDMA1_REGISTER_OFFSET) & ~TRAP_ENABLE;
@@ -7486,27 +7519,27 @@
}
if (rdev->irq.hpd[0]) {
DRM_DEBUG("cik_irq_set: hpd 1\n");
- hpd1 |= DC_HPDx_INT_EN;
+ hpd1 |= DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN;
}
if (rdev->irq.hpd[1]) {
DRM_DEBUG("cik_irq_set: hpd 2\n");
- hpd2 |= DC_HPDx_INT_EN;
+ hpd2 |= DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN;
}
if (rdev->irq.hpd[2]) {
DRM_DEBUG("cik_irq_set: hpd 3\n");
- hpd3 |= DC_HPDx_INT_EN;
+ hpd3 |= DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN;
}
if (rdev->irq.hpd[3]) {
DRM_DEBUG("cik_irq_set: hpd 4\n");
- hpd4 |= DC_HPDx_INT_EN;
+ hpd4 |= DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN;
}
if (rdev->irq.hpd[4]) {
DRM_DEBUG("cik_irq_set: hpd 5\n");
- hpd5 |= DC_HPDx_INT_EN;
+ hpd5 |= DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN;
}
if (rdev->irq.hpd[5]) {
DRM_DEBUG("cik_irq_set: hpd 6\n");
- hpd6 |= DC_HPDx_INT_EN;
+ hpd6 |= DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN;
}
WREG32(CP_INT_CNTL_RING0, cp_int_cntl);
@@ -7678,6 +7711,36 @@
tmp |= DC_HPDx_INT_ACK;
WREG32(DC_HPD6_INT_CONTROL, tmp);
}
+ if (rdev->irq.stat_regs.cik.disp_int & DC_HPD1_RX_INTERRUPT) {
+ tmp = RREG32(DC_HPD1_INT_CONTROL);
+ tmp |= DC_HPDx_RX_INT_ACK;
+ WREG32(DC_HPD1_INT_CONTROL, tmp);
+ }
+ if (rdev->irq.stat_regs.cik.disp_int_cont & DC_HPD2_RX_INTERRUPT) {
+ tmp = RREG32(DC_HPD2_INT_CONTROL);
+ tmp |= DC_HPDx_RX_INT_ACK;
+ WREG32(DC_HPD2_INT_CONTROL, tmp);
+ }
+ if (rdev->irq.stat_regs.cik.disp_int_cont2 & DC_HPD3_RX_INTERRUPT) {
+ tmp = RREG32(DC_HPD3_INT_CONTROL);
+ tmp |= DC_HPDx_RX_INT_ACK;
+ WREG32(DC_HPD3_INT_CONTROL, tmp);
+ }
+ if (rdev->irq.stat_regs.cik.disp_int_cont3 & DC_HPD4_RX_INTERRUPT) {
+ tmp = RREG32(DC_HPD4_INT_CONTROL);
+ tmp |= DC_HPDx_RX_INT_ACK;
+ WREG32(DC_HPD4_INT_CONTROL, tmp);
+ }
+ if (rdev->irq.stat_regs.cik.disp_int_cont4 & DC_HPD5_RX_INTERRUPT) {
+ tmp = RREG32(DC_HPD5_INT_CONTROL);
+ tmp |= DC_HPDx_RX_INT_ACK;
+ WREG32(DC_HPD5_INT_CONTROL, tmp);
+ }
+ if (rdev->irq.stat_regs.cik.disp_int_cont5 & DC_HPD6_RX_INTERRUPT) {
+ tmp = RREG32(DC_HPD6_INT_CONTROL);
+ tmp |= DC_HPDx_RX_INT_ACK;
+ WREG32(DC_HPD6_INT_CONTROL, tmp);
+ }
}
/**
@@ -7803,6 +7866,7 @@
u8 me_id, pipe_id, queue_id;
u32 ring_index;
bool queue_hotplug = false;
+ bool queue_dp = false;
bool queue_reset = false;
u32 addr, status, mc_client;
bool queue_thermal = false;
@@ -8048,6 +8112,48 @@
DRM_DEBUG("IH: HPD6\n");
}
break;
+ case 6:
+ if (rdev->irq.stat_regs.cik.disp_int & DC_HPD1_RX_INTERRUPT) {
+ rdev->irq.stat_regs.cik.disp_int &= ~DC_HPD1_RX_INTERRUPT;
+ queue_dp = true;
+ DRM_DEBUG("IH: HPD_RX 1\n");
+ }
+ break;
+ case 7:
+ if (rdev->irq.stat_regs.cik.disp_int_cont & DC_HPD2_RX_INTERRUPT) {
+ rdev->irq.stat_regs.cik.disp_int_cont &= ~DC_HPD2_RX_INTERRUPT;
+ queue_dp = true;
+ DRM_DEBUG("IH: HPD_RX 2\n");
+ }
+ break;
+ case 8:
+ if (rdev->irq.stat_regs.cik.disp_int_cont2 & DC_HPD3_RX_INTERRUPT) {
+ rdev->irq.stat_regs.cik.disp_int_cont2 &= ~DC_HPD3_RX_INTERRUPT;
+ queue_dp = true;
+ DRM_DEBUG("IH: HPD_RX 3\n");
+ }
+ break;
+ case 9:
+ if (rdev->irq.stat_regs.cik.disp_int_cont3 & DC_HPD4_RX_INTERRUPT) {
+ rdev->irq.stat_regs.cik.disp_int_cont3 &= ~DC_HPD4_RX_INTERRUPT;
+ queue_dp = true;
+ DRM_DEBUG("IH: HPD_RX 4\n");
+ }
+ break;
+ case 10:
+ if (rdev->irq.stat_regs.cik.disp_int_cont4 & DC_HPD5_RX_INTERRUPT) {
+ rdev->irq.stat_regs.cik.disp_int_cont4 &= ~DC_HPD5_RX_INTERRUPT;
+ queue_dp = true;
+ DRM_DEBUG("IH: HPD_RX 5\n");
+ }
+ break;
+ case 11:
+ if (rdev->irq.stat_regs.cik.disp_int_cont5 & DC_HPD6_RX_INTERRUPT) {
+ rdev->irq.stat_regs.cik.disp_int_cont5 &= ~DC_HPD6_RX_INTERRUPT;
+ queue_dp = true;
+ DRM_DEBUG("IH: HPD_RX 6\n");
+ }
+ break;
default:
DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
break;
@@ -8256,6 +8362,8 @@
rptr &= rdev->ih.ptr_mask;
WREG32(IH_RB_RPTR, rptr);
}
+ if (queue_dp)
+ schedule_work(&rdev->dp_work);
if (queue_hotplug)
schedule_work(&rdev->hotplug_work);
if (queue_reset) {
diff --git a/drivers/gpu/drm/radeon/cikd.h b/drivers/gpu/drm/radeon/cikd.h
index 243a36c..0089d83 100644
--- a/drivers/gpu/drm/radeon/cikd.h
+++ b/drivers/gpu/drm/radeon/cikd.h
@@ -2088,6 +2088,8 @@
# define CLK_OD(x) ((x) << 6)
# define CLK_OD_MASK (0x1f << 6)
+#define UVD_STATUS 0xf6bc
+
/* UVD clocks */
#define CG_DCLK_CNTL 0xC050009C
diff --git a/drivers/gpu/drm/radeon/evergreen.c b/drivers/gpu/drm/radeon/evergreen.c
index 973df06..f848acf 100644
--- a/drivers/gpu/drm/radeon/evergreen.c
+++ b/drivers/gpu/drm/radeon/evergreen.c
@@ -1006,6 +1006,34 @@
}
}
+/**
+ * evergreen_get_allowed_info_register - fetch the register for the info ioctl
+ *
+ * @rdev: radeon_device pointer
+ * @reg: register offset in bytes
+ * @val: register value
+ *
+ * Returns 0 for success or -EINVAL for an invalid register
+ *
+ */
+int evergreen_get_allowed_info_register(struct radeon_device *rdev,
+ u32 reg, u32 *val)
+{
+ switch (reg) {
+ case GRBM_STATUS:
+ case GRBM_STATUS_SE0:
+ case GRBM_STATUS_SE1:
+ case SRBM_STATUS:
+ case SRBM_STATUS2:
+ case DMA_STATUS_REG:
+ case UVD_STATUS:
+ *val = RREG32(reg);
+ return 0;
+ default:
+ return -EINVAL;
+ }
+}
+
void evergreen_tiling_fields(unsigned tiling_flags, unsigned *bankw,
unsigned *bankh, unsigned *mtaspect,
unsigned *tile_split)
@@ -4392,12 +4420,12 @@
return 0;
}
- hpd1 = RREG32(DC_HPD1_INT_CONTROL) & ~DC_HPDx_INT_EN;
- hpd2 = RREG32(DC_HPD2_INT_CONTROL) & ~DC_HPDx_INT_EN;
- hpd3 = RREG32(DC_HPD3_INT_CONTROL) & ~DC_HPDx_INT_EN;
- hpd4 = RREG32(DC_HPD4_INT_CONTROL) & ~DC_HPDx_INT_EN;
- hpd5 = RREG32(DC_HPD5_INT_CONTROL) & ~DC_HPDx_INT_EN;
- hpd6 = RREG32(DC_HPD6_INT_CONTROL) & ~DC_HPDx_INT_EN;
+ hpd1 = RREG32(DC_HPD1_INT_CONTROL) & ~(DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN);
+ hpd2 = RREG32(DC_HPD2_INT_CONTROL) & ~(DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN);
+ hpd3 = RREG32(DC_HPD3_INT_CONTROL) & ~(DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN);
+ hpd4 = RREG32(DC_HPD4_INT_CONTROL) & ~(DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN);
+ hpd5 = RREG32(DC_HPD5_INT_CONTROL) & ~(DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN);
+ hpd6 = RREG32(DC_HPD6_INT_CONTROL) & ~(DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN);
if (rdev->family == CHIP_ARUBA)
thermal_int = RREG32(TN_CG_THERMAL_INT_CTRL) &
~(THERM_INT_MASK_HIGH | THERM_INT_MASK_LOW);
@@ -4486,27 +4514,27 @@
}
if (rdev->irq.hpd[0]) {
DRM_DEBUG("evergreen_irq_set: hpd 1\n");
- hpd1 |= DC_HPDx_INT_EN;
+ hpd1 |= DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN;
}
if (rdev->irq.hpd[1]) {
DRM_DEBUG("evergreen_irq_set: hpd 2\n");
- hpd2 |= DC_HPDx_INT_EN;
+ hpd2 |= DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN;
}
if (rdev->irq.hpd[2]) {
DRM_DEBUG("evergreen_irq_set: hpd 3\n");
- hpd3 |= DC_HPDx_INT_EN;
+ hpd3 |= DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN;
}
if (rdev->irq.hpd[3]) {
DRM_DEBUG("evergreen_irq_set: hpd 4\n");
- hpd4 |= DC_HPDx_INT_EN;
+ hpd4 |= DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN;
}
if (rdev->irq.hpd[4]) {
DRM_DEBUG("evergreen_irq_set: hpd 5\n");
- hpd5 |= DC_HPDx_INT_EN;
+ hpd5 |= DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN;
}
if (rdev->irq.hpd[5]) {
DRM_DEBUG("evergreen_irq_set: hpd 6\n");
- hpd6 |= DC_HPDx_INT_EN;
+ hpd6 |= DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN;
}
if (rdev->irq.afmt[0]) {
DRM_DEBUG("evergreen_irq_set: hdmi 0\n");
@@ -4700,6 +4728,38 @@
tmp |= DC_HPDx_INT_ACK;
WREG32(DC_HPD6_INT_CONTROL, tmp);
}
+
+ if (rdev->irq.stat_regs.evergreen.disp_int & DC_HPD1_RX_INTERRUPT) {
+ tmp = RREG32(DC_HPD1_INT_CONTROL);
+ tmp |= DC_HPDx_RX_INT_ACK;
+ WREG32(DC_HPD1_INT_CONTROL, tmp);
+ }
+ if (rdev->irq.stat_regs.evergreen.disp_int_cont & DC_HPD2_RX_INTERRUPT) {
+ tmp = RREG32(DC_HPD2_INT_CONTROL);
+ tmp |= DC_HPDx_RX_INT_ACK;
+ WREG32(DC_HPD2_INT_CONTROL, tmp);
+ }
+ if (rdev->irq.stat_regs.evergreen.disp_int_cont2 & DC_HPD3_RX_INTERRUPT) {
+ tmp = RREG32(DC_HPD3_INT_CONTROL);
+ tmp |= DC_HPDx_RX_INT_ACK;
+ WREG32(DC_HPD3_INT_CONTROL, tmp);
+ }
+ if (rdev->irq.stat_regs.evergreen.disp_int_cont3 & DC_HPD4_RX_INTERRUPT) {
+ tmp = RREG32(DC_HPD4_INT_CONTROL);
+ tmp |= DC_HPDx_RX_INT_ACK;
+ WREG32(DC_HPD4_INT_CONTROL, tmp);
+ }
+ if (rdev->irq.stat_regs.evergreen.disp_int_cont4 & DC_HPD5_RX_INTERRUPT) {
+ tmp = RREG32(DC_HPD5_INT_CONTROL);
+ tmp |= DC_HPDx_RX_INT_ACK;
+ WREG32(DC_HPD5_INT_CONTROL, tmp);
+ }
+ if (rdev->irq.stat_regs.evergreen.disp_int_cont5 & DC_HPD6_RX_INTERRUPT) {
+ tmp = RREG32(DC_HPD6_INT_CONTROL);
+ tmp |= DC_HPDx_RX_INT_ACK;
+ WREG32(DC_HPD6_INT_CONTROL, tmp);
+ }
+
if (rdev->irq.stat_regs.evergreen.afmt_status1 & AFMT_AZ_FORMAT_WTRIG) {
tmp = RREG32(AFMT_AUDIO_PACKET_CONTROL + EVERGREEN_CRTC0_REGISTER_OFFSET);
tmp |= AFMT_AZ_FORMAT_WTRIG_ACK;
@@ -4780,6 +4840,7 @@
u32 ring_index;
bool queue_hotplug = false;
bool queue_hdmi = false;
+ bool queue_dp = false;
bool queue_thermal = false;
u32 status, addr;
@@ -5019,6 +5080,48 @@
DRM_DEBUG("IH: HPD6\n");
}
break;
+ case 6:
+ if (rdev->irq.stat_regs.evergreen.disp_int & DC_HPD1_RX_INTERRUPT) {
+ rdev->irq.stat_regs.evergreen.disp_int &= ~DC_HPD1_RX_INTERRUPT;
+ queue_dp = true;
+ DRM_DEBUG("IH: HPD_RX 1\n");
+ }
+ break;
+ case 7:
+ if (rdev->irq.stat_regs.evergreen.disp_int_cont & DC_HPD2_RX_INTERRUPT) {
+ rdev->irq.stat_regs.evergreen.disp_int_cont &= ~DC_HPD2_RX_INTERRUPT;
+ queue_dp = true;
+ DRM_DEBUG("IH: HPD_RX 2\n");
+ }
+ break;
+ case 8:
+ if (rdev->irq.stat_regs.evergreen.disp_int_cont2 & DC_HPD3_RX_INTERRUPT) {
+ rdev->irq.stat_regs.evergreen.disp_int_cont2 &= ~DC_HPD3_RX_INTERRUPT;
+ queue_dp = true;
+ DRM_DEBUG("IH: HPD_RX 3\n");
+ }
+ break;
+ case 9:
+ if (rdev->irq.stat_regs.evergreen.disp_int_cont3 & DC_HPD4_RX_INTERRUPT) {
+ rdev->irq.stat_regs.evergreen.disp_int_cont3 &= ~DC_HPD4_RX_INTERRUPT;
+ queue_dp = true;
+ DRM_DEBUG("IH: HPD_RX 4\n");
+ }
+ break;
+ case 10:
+ if (rdev->irq.stat_regs.evergreen.disp_int_cont4 & DC_HPD5_RX_INTERRUPT) {
+ rdev->irq.stat_regs.evergreen.disp_int_cont4 &= ~DC_HPD5_RX_INTERRUPT;
+ queue_dp = true;
+ DRM_DEBUG("IH: HPD_RX 5\n");
+ }
+ break;
+ case 11:
+ if (rdev->irq.stat_regs.evergreen.disp_int_cont5 & DC_HPD6_RX_INTERRUPT) {
+ rdev->irq.stat_regs.evergreen.disp_int_cont5 &= ~DC_HPD6_RX_INTERRUPT;
+ queue_dp = true;
+ DRM_DEBUG("IH: HPD_RX 6\n");
+ }
+ break;
default:
DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
break;
@@ -5151,6 +5254,8 @@
rptr &= rdev->ih.ptr_mask;
WREG32(IH_RB_RPTR, rptr);
}
+ if (queue_dp)
+ schedule_work(&rdev->dp_work);
if (queue_hotplug)
schedule_work(&rdev->hotplug_work);
if (queue_hdmi)
diff --git a/drivers/gpu/drm/radeon/evergreend.h b/drivers/gpu/drm/radeon/evergreend.h
index a8d1d52..4aa5f75 100644
--- a/drivers/gpu/drm/radeon/evergreend.h
+++ b/drivers/gpu/drm/radeon/evergreend.h
@@ -1520,6 +1520,7 @@
#define UVD_UDEC_DBW_ADDR_CONFIG 0xef54
#define UVD_RBC_RB_RPTR 0xf690
#define UVD_RBC_RB_WPTR 0xf694
+#define UVD_STATUS 0xf6bc
/*
* PM4
diff --git a/drivers/gpu/drm/radeon/kv_dpm.c b/drivers/gpu/drm/radeon/kv_dpm.c
index 0e236d0..2d71da4 100644
--- a/drivers/gpu/drm/radeon/kv_dpm.c
+++ b/drivers/gpu/drm/radeon/kv_dpm.c
@@ -2820,6 +2820,29 @@
}
}
+u32 kv_dpm_get_current_sclk(struct radeon_device *rdev)
+{
+ struct kv_power_info *pi = kv_get_pi(rdev);
+ u32 current_index =
+ (RREG32_SMC(TARGET_AND_CURRENT_PROFILE_INDEX) & CURR_SCLK_INDEX_MASK) >>
+ CURR_SCLK_INDEX_SHIFT;
+ u32 sclk;
+
+ if (current_index >= SMU__NUM_SCLK_DPM_STATE) {
+ return 0;
+ } else {
+ sclk = be32_to_cpu(pi->graphics_level[current_index].SclkFrequency);
+ return sclk;
+ }
+}
+
+u32 kv_dpm_get_current_mclk(struct radeon_device *rdev)
+{
+ struct kv_power_info *pi = kv_get_pi(rdev);
+
+ return pi->sys_info.bootup_uma_clk;
+}
+
void kv_dpm_print_power_state(struct radeon_device *rdev,
struct radeon_ps *rps)
{
diff --git a/drivers/gpu/drm/radeon/ni.c b/drivers/gpu/drm/radeon/ni.c
index dab0081..e8a496f 100644
--- a/drivers/gpu/drm/radeon/ni.c
+++ b/drivers/gpu/drm/radeon/ni.c
@@ -828,6 +828,35 @@
return err;
}
+/**
+ * cayman_get_allowed_info_register - fetch the register for the info ioctl
+ *
+ * @rdev: radeon_device pointer
+ * @reg: register offset in bytes
+ * @val: register value
+ *
+ * Returns 0 for success or -EINVAL for an invalid register
+ *
+ */
+int cayman_get_allowed_info_register(struct radeon_device *rdev,
+ u32 reg, u32 *val)
+{
+ switch (reg) {
+ case GRBM_STATUS:
+ case GRBM_STATUS_SE0:
+ case GRBM_STATUS_SE1:
+ case SRBM_STATUS:
+ case SRBM_STATUS2:
+ case (DMA_STATUS_REG + DMA0_REGISTER_OFFSET):
+ case (DMA_STATUS_REG + DMA1_REGISTER_OFFSET):
+ case UVD_STATUS:
+ *val = RREG32(reg);
+ return 0;
+ default:
+ return -EINVAL;
+ }
+}
+
int tn_get_temp(struct radeon_device *rdev)
{
u32 temp = RREG32_SMC(TN_CURRENT_GNB_TEMP) & 0x7ff;
diff --git a/drivers/gpu/drm/radeon/ni_dpm.c b/drivers/gpu/drm/radeon/ni_dpm.c
index 7bc9f8d..c3d531a 100644
--- a/drivers/gpu/drm/radeon/ni_dpm.c
+++ b/drivers/gpu/drm/radeon/ni_dpm.c
@@ -4319,6 +4319,42 @@
}
}
+u32 ni_dpm_get_current_sclk(struct radeon_device *rdev)
+{
+ struct evergreen_power_info *eg_pi = evergreen_get_pi(rdev);
+ struct radeon_ps *rps = &eg_pi->current_rps;
+ struct ni_ps *ps = ni_get_ps(rps);
+ struct rv7xx_pl *pl;
+ u32 current_index =
+ (RREG32(TARGET_AND_CURRENT_PROFILE_INDEX) & CURRENT_STATE_INDEX_MASK) >>
+ CURRENT_STATE_INDEX_SHIFT;
+
+ if (current_index >= ps->performance_level_count) {
+ return 0;
+ } else {
+ pl = &ps->performance_levels[current_index];
+ return pl->sclk;
+ }
+}
+
+u32 ni_dpm_get_current_mclk(struct radeon_device *rdev)
+{
+ struct evergreen_power_info *eg_pi = evergreen_get_pi(rdev);
+ struct radeon_ps *rps = &eg_pi->current_rps;
+ struct ni_ps *ps = ni_get_ps(rps);
+ struct rv7xx_pl *pl;
+ u32 current_index =
+ (RREG32(TARGET_AND_CURRENT_PROFILE_INDEX) & CURRENT_STATE_INDEX_MASK) >>
+ CURRENT_STATE_INDEX_SHIFT;
+
+ if (current_index >= ps->performance_level_count) {
+ return 0;
+ } else {
+ pl = &ps->performance_levels[current_index];
+ return pl->mclk;
+ }
+}
+
u32 ni_dpm_get_sclk(struct radeon_device *rdev, bool low)
{
struct evergreen_power_info *eg_pi = evergreen_get_pi(rdev);
diff --git a/drivers/gpu/drm/radeon/ni_reg.h b/drivers/gpu/drm/radeon/ni_reg.h
index 5db7b7d..da310a70 100644
--- a/drivers/gpu/drm/radeon/ni_reg.h
+++ b/drivers/gpu/drm/radeon/ni_reg.h
@@ -83,4 +83,48 @@
# define NI_REGAMMA_PROG_B 4
# define NI_OVL_REGAMMA_MODE(x) (((x) & 0x7) << 4)
+#define NI_DP_MSE_LINK_TIMING 0x73a0
+# define NI_DP_MSE_LINK_FRAME (((x) & 0x3ff) << 0)
+# define NI_DP_MSE_LINK_LINE (((x) & 0x3) << 16)
+
+#define NI_DP_MSE_MISC_CNTL 0x736c
+# define NI_DP_MSE_BLANK_CODE (((x) & 0x1) << 0)
+# define NI_DP_MSE_TIMESTAMP_MODE (((x) & 0x1) << 4)
+# define NI_DP_MSE_ZERO_ENCODER (((x) & 0x1) << 8)
+
+#define NI_DP_MSE_RATE_CNTL 0x7384
+# define NI_DP_MSE_RATE_Y(x) (((x) & 0x3ffffff) << 0)
+# define NI_DP_MSE_RATE_X(x) (((x) & 0x3f) << 26)
+
+#define NI_DP_MSE_RATE_UPDATE 0x738c
+
+#define NI_DP_MSE_SAT0 0x7390
+# define NI_DP_MSE_SAT_SRC0(x) (((x) & 0x7) << 0)
+# define NI_DP_MSE_SAT_SLOT_COUNT0(x) (((x) & 0x3f) << 8)
+# define NI_DP_MSE_SAT_SRC1(x) (((x) & 0x7) << 16)
+# define NI_DP_MSE_SAT_SLOT_COUNT1(x) (((x) & 0x3f) << 24)
+
+#define NI_DP_MSE_SAT1 0x7394
+
+#define NI_DP_MSE_SAT2 0x7398
+
+#define NI_DP_MSE_SAT_UPDATE 0x739c
+
+#define NI_DIG_BE_CNTL 0x7140
+# define NI_DIG_FE_SOURCE_SELECT(x) (((x) & 0x7f) << 8)
+# define NI_DIG_FE_DIG_MODE(x) (((x) & 0x7) << 16)
+# define NI_DIG_MODE_DP_SST 0
+# define NI_DIG_MODE_LVDS 1
+# define NI_DIG_MODE_TMDS_DVI 2
+# define NI_DIG_MODE_TMDS_HDMI 3
+# define NI_DIG_MODE_DP_MST 5
+# define NI_DIG_HPD_SELECT(x) (((x) & 0x7) << 28)
+
+#define NI_DIG_FE_CNTL 0x7000
+# define NI_DIG_SOURCE_SELECT(x) (((x) & 0x3) << 0)
+# define NI_DIG_STEREOSYNC_SELECT(x) (((x) & 0x3) << 4)
+# define NI_DIG_STEREOSYNC_GATE_EN(x) (((x) & 0x1) << 8)
+# define NI_DIG_DUAL_LINK_ENABLE(x) (((x) & 0x1) << 16)
+# define NI_DIG_SWAP(x) (((x) & 0x1) << 18)
+# define NI_DIG_SYMCLK_FE_ON (0x1 << 24)
#endif
diff --git a/drivers/gpu/drm/radeon/nid.h b/drivers/gpu/drm/radeon/nid.h
index 6b44580..3b29083 100644
--- a/drivers/gpu/drm/radeon/nid.h
+++ b/drivers/gpu/drm/radeon/nid.h
@@ -816,6 +816,52 @@
#define MC_PMG_CMD_MRS2 0x2b5c
#define MC_SEQ_PMG_CMD_MRS2_LP 0x2b60
+#define AUX_CONTROL 0x6200
+#define AUX_EN (1 << 0)
+#define AUX_LS_READ_EN (1 << 8)
+#define AUX_LS_UPDATE_DISABLE(x) (((x) & 0x1) << 12)
+#define AUX_HPD_DISCON(x) (((x) & 0x1) << 16)
+#define AUX_DET_EN (1 << 18)
+#define AUX_HPD_SEL(x) (((x) & 0x7) << 20)
+#define AUX_IMPCAL_REQ_EN (1 << 24)
+#define AUX_TEST_MODE (1 << 28)
+#define AUX_DEGLITCH_EN (1 << 29)
+#define AUX_SW_CONTROL 0x6204
+#define AUX_SW_GO (1 << 0)
+#define AUX_LS_READ_TRIG (1 << 2)
+#define AUX_SW_START_DELAY(x) (((x) & 0xf) << 4)
+#define AUX_SW_WR_BYTES(x) (((x) & 0x1f) << 16)
+
+#define AUX_SW_INTERRUPT_CONTROL 0x620c
+#define AUX_SW_DONE_INT (1 << 0)
+#define AUX_SW_DONE_ACK (1 << 1)
+#define AUX_SW_DONE_MASK (1 << 2)
+#define AUX_SW_LS_DONE_INT (1 << 4)
+#define AUX_SW_LS_DONE_MASK (1 << 6)
+#define AUX_SW_STATUS 0x6210
+#define AUX_SW_DONE (1 << 0)
+#define AUX_SW_REQ (1 << 1)
+#define AUX_SW_RX_TIMEOUT_STATE(x) (((x) & 0x7) << 4)
+#define AUX_SW_RX_TIMEOUT (1 << 7)
+#define AUX_SW_RX_OVERFLOW (1 << 8)
+#define AUX_SW_RX_HPD_DISCON (1 << 9)
+#define AUX_SW_RX_PARTIAL_BYTE (1 << 10)
+#define AUX_SW_NON_AUX_MODE (1 << 11)
+#define AUX_SW_RX_MIN_COUNT_VIOL (1 << 12)
+#define AUX_SW_RX_INVALID_STOP (1 << 14)
+#define AUX_SW_RX_SYNC_INVALID_L (1 << 17)
+#define AUX_SW_RX_SYNC_INVALID_H (1 << 18)
+#define AUX_SW_RX_INVALID_START (1 << 19)
+#define AUX_SW_RX_RECV_NO_DET (1 << 20)
+#define AUX_SW_RX_RECV_INVALID_H (1 << 22)
+#define AUX_SW_RX_RECV_INVALID_V (1 << 23)
+
+#define AUX_SW_DATA 0x6218
+#define AUX_SW_DATA_RW (1 << 0)
+#define AUX_SW_DATA_MASK(x) (((x) & 0xff) << 8)
+#define AUX_SW_DATA_INDEX(x) (((x) & 0x1f) << 16)
+#define AUX_SW_AUTOINCREMENT_DISABLE (1 << 31)
+
#define LB_SYNC_RESET_SEL 0x6b28
#define LB_SYNC_RESET_SEL_MASK (3 << 0)
#define LB_SYNC_RESET_SEL_SHIFT 0
@@ -1086,6 +1132,7 @@
#define UVD_UDEC_DBW_ADDR_CONFIG 0xEF54
#define UVD_RBC_RB_RPTR 0xF690
#define UVD_RBC_RB_WPTR 0xF694
+#define UVD_STATUS 0xf6bc
/*
* PM4
diff --git a/drivers/gpu/drm/radeon/r600.c b/drivers/gpu/drm/radeon/r600.c
index 2fcad34..8f6d862 100644
--- a/drivers/gpu/drm/radeon/r600.c
+++ b/drivers/gpu/drm/radeon/r600.c
@@ -109,6 +109,32 @@
extern void rv770_set_clk_bypass_mode(struct radeon_device *rdev);
/**
+ * r600_get_allowed_info_register - fetch the register for the info ioctl
+ *
+ * @rdev: radeon_device pointer
+ * @reg: register offset in bytes
+ * @val: register value
+ *
+ * Returns 0 for success or -EINVAL for an invalid register
+ *
+ */
+int r600_get_allowed_info_register(struct radeon_device *rdev,
+ u32 reg, u32 *val)
+{
+ switch (reg) {
+ case GRBM_STATUS:
+ case GRBM_STATUS2:
+ case R_000E50_SRBM_STATUS:
+ case DMA_STATUS_REG:
+ case UVD_STATUS:
+ *val = RREG32(reg);
+ return 0;
+ default:
+ return -EINVAL;
+ }
+}
+
+/**
* r600_get_xclk - get the xclk
*
* @rdev: radeon_device pointer
diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
index 33d5a4f..d2abe48 100644
--- a/drivers/gpu/drm/radeon/radeon.h
+++ b/drivers/gpu/drm/radeon/radeon.h
@@ -111,6 +111,8 @@
extern int radeon_use_pflipirq;
extern int radeon_bapm;
extern int radeon_backlight;
+extern int radeon_auxch;
+extern int radeon_mst;
/*
* Copy from radeon_drv.h so we don't have to include both and have conflicting
@@ -505,7 +507,7 @@
pid_t pid;
struct radeon_mn *mn;
- struct interval_tree_node mn_it;
+ struct list_head mn_list;
};
#define gem_to_radeon_bo(gobj) container_of((gobj), struct radeon_bo, gem_base)
@@ -1857,6 +1859,8 @@
u32 (*get_xclk)(struct radeon_device *rdev);
/* get the gpu clock counter */
uint64_t (*get_gpu_clock_counter)(struct radeon_device *rdev);
+ /* get register for info ioctl */
+ int (*get_allowed_info_register)(struct radeon_device *rdev, u32 reg, u32 *val);
/* gart */
struct {
void (*tlb_flush)(struct radeon_device *rdev);
@@ -1985,6 +1989,8 @@
u32 (*fan_ctrl_get_mode)(struct radeon_device *rdev);
int (*set_fan_speed_percent)(struct radeon_device *rdev, u32 speed);
int (*get_fan_speed_percent)(struct radeon_device *rdev, u32 *speed);
+ u32 (*get_current_sclk)(struct radeon_device *rdev);
+ u32 (*get_current_mclk)(struct radeon_device *rdev);
} dpm;
/* pageflipping */
struct {
@@ -2408,6 +2414,7 @@
struct radeon_rlc rlc;
struct radeon_mec mec;
struct work_struct hotplug_work;
+ struct work_struct dp_work;
struct work_struct audio_work;
int num_crtc; /* number of crtcs */
struct mutex dc_hw_i2c_mutex; /* display controller hw i2c mutex */
@@ -2932,6 +2939,7 @@
#define radeon_mc_wait_for_idle(rdev) (rdev)->asic->mc_wait_for_idle((rdev))
#define radeon_get_xclk(rdev) (rdev)->asic->get_xclk((rdev))
#define radeon_get_gpu_clock_counter(rdev) (rdev)->asic->get_gpu_clock_counter((rdev))
+#define radeon_get_allowed_info_register(rdev, r, v) (rdev)->asic->get_allowed_info_register((rdev), (r), (v))
#define radeon_dpm_init(rdev) rdev->asic->dpm.init((rdev))
#define radeon_dpm_setup_asic(rdev) rdev->asic->dpm.setup_asic((rdev))
#define radeon_dpm_enable(rdev) rdev->asic->dpm.enable((rdev))
@@ -2950,6 +2958,8 @@
#define radeon_dpm_vblank_too_short(rdev) rdev->asic->dpm.vblank_too_short((rdev))
#define radeon_dpm_powergate_uvd(rdev, g) rdev->asic->dpm.powergate_uvd((rdev), (g))
#define radeon_dpm_enable_bapm(rdev, e) rdev->asic->dpm.enable_bapm((rdev), (e))
+#define radeon_dpm_get_current_sclk(rdev) rdev->asic->dpm.get_current_sclk((rdev))
+#define radeon_dpm_get_current_mclk(rdev) rdev->asic->dpm.get_current_mclk((rdev))
/* Common functions */
/* AGP */
diff --git a/drivers/gpu/drm/radeon/radeon_asic.c b/drivers/gpu/drm/radeon/radeon_asic.c
index c0ecd12..fafd8ce 100644
--- a/drivers/gpu/drm/radeon/radeon_asic.c
+++ b/drivers/gpu/drm/radeon/radeon_asic.c
@@ -136,6 +136,11 @@
}
}
+static int radeon_invalid_get_allowed_info_register(struct radeon_device *rdev,
+ u32 reg, u32 *val)
+{
+ return -EINVAL;
+}
/* helper to disable agp */
/**
@@ -199,6 +204,7 @@
.mmio_hdp_flush = NULL,
.gui_idle = &r100_gui_idle,
.mc_wait_for_idle = &r100_mc_wait_for_idle,
+ .get_allowed_info_register = radeon_invalid_get_allowed_info_register,
.gart = {
.tlb_flush = &r100_pci_gart_tlb_flush,
.get_page_entry = &r100_pci_gart_get_page_entry,
@@ -266,6 +272,7 @@
.mmio_hdp_flush = NULL,
.gui_idle = &r100_gui_idle,
.mc_wait_for_idle = &r100_mc_wait_for_idle,
+ .get_allowed_info_register = radeon_invalid_get_allowed_info_register,
.gart = {
.tlb_flush = &r100_pci_gart_tlb_flush,
.get_page_entry = &r100_pci_gart_get_page_entry,
@@ -361,6 +368,7 @@
.mmio_hdp_flush = NULL,
.gui_idle = &r100_gui_idle,
.mc_wait_for_idle = &r300_mc_wait_for_idle,
+ .get_allowed_info_register = radeon_invalid_get_allowed_info_register,
.gart = {
.tlb_flush = &r100_pci_gart_tlb_flush,
.get_page_entry = &r100_pci_gart_get_page_entry,
@@ -428,6 +436,7 @@
.mmio_hdp_flush = NULL,
.gui_idle = &r100_gui_idle,
.mc_wait_for_idle = &r300_mc_wait_for_idle,
+ .get_allowed_info_register = radeon_invalid_get_allowed_info_register,
.gart = {
.tlb_flush = &rv370_pcie_gart_tlb_flush,
.get_page_entry = &rv370_pcie_gart_get_page_entry,
@@ -495,6 +504,7 @@
.mmio_hdp_flush = NULL,
.gui_idle = &r100_gui_idle,
.mc_wait_for_idle = &r300_mc_wait_for_idle,
+ .get_allowed_info_register = radeon_invalid_get_allowed_info_register,
.gart = {
.tlb_flush = &rv370_pcie_gart_tlb_flush,
.get_page_entry = &rv370_pcie_gart_get_page_entry,
@@ -562,6 +572,7 @@
.mmio_hdp_flush = NULL,
.gui_idle = &r100_gui_idle,
.mc_wait_for_idle = &rs400_mc_wait_for_idle,
+ .get_allowed_info_register = radeon_invalid_get_allowed_info_register,
.gart = {
.tlb_flush = &rs400_gart_tlb_flush,
.get_page_entry = &rs400_gart_get_page_entry,
@@ -629,6 +640,7 @@
.mmio_hdp_flush = NULL,
.gui_idle = &r100_gui_idle,
.mc_wait_for_idle = &rs600_mc_wait_for_idle,
+ .get_allowed_info_register = radeon_invalid_get_allowed_info_register,
.gart = {
.tlb_flush = &rs600_gart_tlb_flush,
.get_page_entry = &rs600_gart_get_page_entry,
@@ -696,6 +708,7 @@
.mmio_hdp_flush = NULL,
.gui_idle = &r100_gui_idle,
.mc_wait_for_idle = &rs690_mc_wait_for_idle,
+ .get_allowed_info_register = radeon_invalid_get_allowed_info_register,
.gart = {
.tlb_flush = &rs400_gart_tlb_flush,
.get_page_entry = &rs400_gart_get_page_entry,
@@ -763,6 +776,7 @@
.mmio_hdp_flush = NULL,
.gui_idle = &r100_gui_idle,
.mc_wait_for_idle = &rv515_mc_wait_for_idle,
+ .get_allowed_info_register = radeon_invalid_get_allowed_info_register,
.gart = {
.tlb_flush = &rv370_pcie_gart_tlb_flush,
.get_page_entry = &rv370_pcie_gart_get_page_entry,
@@ -830,6 +844,7 @@
.mmio_hdp_flush = NULL,
.gui_idle = &r100_gui_idle,
.mc_wait_for_idle = &r520_mc_wait_for_idle,
+ .get_allowed_info_register = radeon_invalid_get_allowed_info_register,
.gart = {
.tlb_flush = &rv370_pcie_gart_tlb_flush,
.get_page_entry = &rv370_pcie_gart_get_page_entry,
@@ -925,6 +940,7 @@
.mc_wait_for_idle = &r600_mc_wait_for_idle,
.get_xclk = &r600_get_xclk,
.get_gpu_clock_counter = &r600_get_gpu_clock_counter,
+ .get_allowed_info_register = r600_get_allowed_info_register,
.gart = {
.tlb_flush = &r600_pcie_gart_tlb_flush,
.get_page_entry = &rs600_gart_get_page_entry,
@@ -1009,6 +1025,7 @@
.mc_wait_for_idle = &r600_mc_wait_for_idle,
.get_xclk = &r600_get_xclk,
.get_gpu_clock_counter = &r600_get_gpu_clock_counter,
+ .get_allowed_info_register = r600_get_allowed_info_register,
.gart = {
.tlb_flush = &r600_pcie_gart_tlb_flush,
.get_page_entry = &rs600_gart_get_page_entry,
@@ -1080,6 +1097,8 @@
.print_power_state = &rv6xx_dpm_print_power_state,
.debugfs_print_current_performance_level = &rv6xx_dpm_debugfs_print_current_performance_level,
.force_performance_level = &rv6xx_dpm_force_performance_level,
+ .get_current_sclk = &rv6xx_dpm_get_current_sclk,
+ .get_current_mclk = &rv6xx_dpm_get_current_mclk,
},
.pflip = {
.page_flip = &rs600_page_flip,
@@ -1099,6 +1118,7 @@
.mc_wait_for_idle = &r600_mc_wait_for_idle,
.get_xclk = &r600_get_xclk,
.get_gpu_clock_counter = &r600_get_gpu_clock_counter,
+ .get_allowed_info_register = r600_get_allowed_info_register,
.gart = {
.tlb_flush = &r600_pcie_gart_tlb_flush,
.get_page_entry = &rs600_gart_get_page_entry,
@@ -1170,6 +1190,8 @@
.print_power_state = &rs780_dpm_print_power_state,
.debugfs_print_current_performance_level = &rs780_dpm_debugfs_print_current_performance_level,
.force_performance_level = &rs780_dpm_force_performance_level,
+ .get_current_sclk = &rs780_dpm_get_current_sclk,
+ .get_current_mclk = &rs780_dpm_get_current_mclk,
},
.pflip = {
.page_flip = &rs600_page_flip,
@@ -1202,6 +1224,7 @@
.mc_wait_for_idle = &r600_mc_wait_for_idle,
.get_xclk = &rv770_get_xclk,
.get_gpu_clock_counter = &r600_get_gpu_clock_counter,
+ .get_allowed_info_register = r600_get_allowed_info_register,
.gart = {
.tlb_flush = &r600_pcie_gart_tlb_flush,
.get_page_entry = &rs600_gart_get_page_entry,
@@ -1274,6 +1297,8 @@
.debugfs_print_current_performance_level = &rv770_dpm_debugfs_print_current_performance_level,
.force_performance_level = &rv770_dpm_force_performance_level,
.vblank_too_short = &rv770_dpm_vblank_too_short,
+ .get_current_sclk = &rv770_dpm_get_current_sclk,
+ .get_current_mclk = &rv770_dpm_get_current_mclk,
},
.pflip = {
.page_flip = &rv770_page_flip,
@@ -1319,6 +1344,7 @@
.mc_wait_for_idle = &evergreen_mc_wait_for_idle,
.get_xclk = &rv770_get_xclk,
.get_gpu_clock_counter = &r600_get_gpu_clock_counter,
+ .get_allowed_info_register = evergreen_get_allowed_info_register,
.gart = {
.tlb_flush = &evergreen_pcie_gart_tlb_flush,
.get_page_entry = &rs600_gart_get_page_entry,
@@ -1391,6 +1417,8 @@
.debugfs_print_current_performance_level = &rv770_dpm_debugfs_print_current_performance_level,
.force_performance_level = &rv770_dpm_force_performance_level,
.vblank_too_short = &cypress_dpm_vblank_too_short,
+ .get_current_sclk = &rv770_dpm_get_current_sclk,
+ .get_current_mclk = &rv770_dpm_get_current_mclk,
},
.pflip = {
.page_flip = &evergreen_page_flip,
@@ -1410,6 +1438,7 @@
.mc_wait_for_idle = &evergreen_mc_wait_for_idle,
.get_xclk = &r600_get_xclk,
.get_gpu_clock_counter = &r600_get_gpu_clock_counter,
+ .get_allowed_info_register = evergreen_get_allowed_info_register,
.gart = {
.tlb_flush = &evergreen_pcie_gart_tlb_flush,
.get_page_entry = &rs600_gart_get_page_entry,
@@ -1481,6 +1510,8 @@
.print_power_state = &sumo_dpm_print_power_state,
.debugfs_print_current_performance_level = &sumo_dpm_debugfs_print_current_performance_level,
.force_performance_level = &sumo_dpm_force_performance_level,
+ .get_current_sclk = &sumo_dpm_get_current_sclk,
+ .get_current_mclk = &sumo_dpm_get_current_mclk,
},
.pflip = {
.page_flip = &evergreen_page_flip,
@@ -1500,6 +1531,7 @@
.mc_wait_for_idle = &evergreen_mc_wait_for_idle,
.get_xclk = &rv770_get_xclk,
.get_gpu_clock_counter = &r600_get_gpu_clock_counter,
+ .get_allowed_info_register = evergreen_get_allowed_info_register,
.gart = {
.tlb_flush = &evergreen_pcie_gart_tlb_flush,
.get_page_entry = &rs600_gart_get_page_entry,
@@ -1572,6 +1604,8 @@
.debugfs_print_current_performance_level = &btc_dpm_debugfs_print_current_performance_level,
.force_performance_level = &rv770_dpm_force_performance_level,
.vblank_too_short = &btc_dpm_vblank_too_short,
+ .get_current_sclk = &btc_dpm_get_current_sclk,
+ .get_current_mclk = &btc_dpm_get_current_mclk,
},
.pflip = {
.page_flip = &evergreen_page_flip,
@@ -1634,6 +1668,7 @@
.mc_wait_for_idle = &evergreen_mc_wait_for_idle,
.get_xclk = &rv770_get_xclk,
.get_gpu_clock_counter = &r600_get_gpu_clock_counter,
+ .get_allowed_info_register = cayman_get_allowed_info_register,
.gart = {
.tlb_flush = &cayman_pcie_gart_tlb_flush,
.get_page_entry = &rs600_gart_get_page_entry,
@@ -1717,6 +1752,8 @@
.debugfs_print_current_performance_level = &ni_dpm_debugfs_print_current_performance_level,
.force_performance_level = &ni_dpm_force_performance_level,
.vblank_too_short = &ni_dpm_vblank_too_short,
+ .get_current_sclk = &ni_dpm_get_current_sclk,
+ .get_current_mclk = &ni_dpm_get_current_mclk,
},
.pflip = {
.page_flip = &evergreen_page_flip,
@@ -1736,6 +1773,7 @@
.mc_wait_for_idle = &evergreen_mc_wait_for_idle,
.get_xclk = &r600_get_xclk,
.get_gpu_clock_counter = &r600_get_gpu_clock_counter,
+ .get_allowed_info_register = cayman_get_allowed_info_register,
.gart = {
.tlb_flush = &cayman_pcie_gart_tlb_flush,
.get_page_entry = &rs600_gart_get_page_entry,
@@ -1819,6 +1857,8 @@
.debugfs_print_current_performance_level = &trinity_dpm_debugfs_print_current_performance_level,
.force_performance_level = &trinity_dpm_force_performance_level,
.enable_bapm = &trinity_dpm_enable_bapm,
+ .get_current_sclk = &trinity_dpm_get_current_sclk,
+ .get_current_mclk = &trinity_dpm_get_current_mclk,
},
.pflip = {
.page_flip = &evergreen_page_flip,
@@ -1868,6 +1908,7 @@
.mc_wait_for_idle = &evergreen_mc_wait_for_idle,
.get_xclk = &si_get_xclk,
.get_gpu_clock_counter = &si_get_gpu_clock_counter,
+ .get_allowed_info_register = si_get_allowed_info_register,
.gart = {
.tlb_flush = &si_pcie_gart_tlb_flush,
.get_page_entry = &rs600_gart_get_page_entry,
@@ -1955,6 +1996,8 @@
.fan_ctrl_get_mode = &si_fan_ctrl_get_mode,
.get_fan_speed_percent = &si_fan_ctrl_get_fan_speed_percent,
.set_fan_speed_percent = &si_fan_ctrl_set_fan_speed_percent,
+ .get_current_sclk = &si_dpm_get_current_sclk,
+ .get_current_mclk = &si_dpm_get_current_mclk,
},
.pflip = {
.page_flip = &evergreen_page_flip,
@@ -2032,6 +2075,7 @@
.mc_wait_for_idle = &evergreen_mc_wait_for_idle,
.get_xclk = &cik_get_xclk,
.get_gpu_clock_counter = &cik_get_gpu_clock_counter,
+ .get_allowed_info_register = cik_get_allowed_info_register,
.gart = {
.tlb_flush = &cik_pcie_gart_tlb_flush,
.get_page_entry = &rs600_gart_get_page_entry,
@@ -2123,6 +2167,8 @@
.fan_ctrl_get_mode = &ci_fan_ctrl_get_mode,
.get_fan_speed_percent = &ci_fan_ctrl_get_fan_speed_percent,
.set_fan_speed_percent = &ci_fan_ctrl_set_fan_speed_percent,
+ .get_current_sclk = &ci_dpm_get_current_sclk,
+ .get_current_mclk = &ci_dpm_get_current_mclk,
},
.pflip = {
.page_flip = &evergreen_page_flip,
@@ -2142,6 +2188,7 @@
.mc_wait_for_idle = &evergreen_mc_wait_for_idle,
.get_xclk = &cik_get_xclk,
.get_gpu_clock_counter = &cik_get_gpu_clock_counter,
+ .get_allowed_info_register = cik_get_allowed_info_register,
.gart = {
.tlb_flush = &cik_pcie_gart_tlb_flush,
.get_page_entry = &rs600_gart_get_page_entry,
@@ -2229,6 +2276,8 @@
.force_performance_level = &kv_dpm_force_performance_level,
.powergate_uvd = &kv_dpm_powergate_uvd,
.enable_bapm = &kv_dpm_enable_bapm,
+ .get_current_sclk = &kv_dpm_get_current_sclk,
+ .get_current_mclk = &kv_dpm_get_current_mclk,
},
.pflip = {
.page_flip = &evergreen_page_flip,
diff --git a/drivers/gpu/drm/radeon/radeon_asic.h b/drivers/gpu/drm/radeon/radeon_asic.h
index 72bdd3b..cf0a90b 100644
--- a/drivers/gpu/drm/radeon/radeon_asic.h
+++ b/drivers/gpu/drm/radeon/radeon_asic.h
@@ -384,6 +384,8 @@
struct radeon_ring *ring);
void r600_gfx_set_wptr(struct radeon_device *rdev,
struct radeon_ring *ring);
+int r600_get_allowed_info_register(struct radeon_device *rdev,
+ u32 reg, u32 *val);
/* r600 irq */
int r600_irq_process(struct radeon_device *rdev);
int r600_irq_init(struct radeon_device *rdev);
@@ -433,6 +435,8 @@
struct seq_file *m);
int rv6xx_dpm_force_performance_level(struct radeon_device *rdev,
enum radeon_dpm_forced_level level);
+u32 rv6xx_dpm_get_current_sclk(struct radeon_device *rdev);
+u32 rv6xx_dpm_get_current_mclk(struct radeon_device *rdev);
/* rs780 dpm */
int rs780_dpm_init(struct radeon_device *rdev);
int rs780_dpm_enable(struct radeon_device *rdev);
@@ -449,6 +453,8 @@
struct seq_file *m);
int rs780_dpm_force_performance_level(struct radeon_device *rdev,
enum radeon_dpm_forced_level level);
+u32 rs780_dpm_get_current_sclk(struct radeon_device *rdev);
+u32 rs780_dpm_get_current_mclk(struct radeon_device *rdev);
/*
* rv770,rv730,rv710,rv740
@@ -488,6 +494,8 @@
int rv770_dpm_force_performance_level(struct radeon_device *rdev,
enum radeon_dpm_forced_level level);
bool rv770_dpm_vblank_too_short(struct radeon_device *rdev);
+u32 rv770_dpm_get_current_sclk(struct radeon_device *rdev);
+u32 rv770_dpm_get_current_mclk(struct radeon_device *rdev);
/*
* evergreen
@@ -540,6 +548,8 @@
unsigned num_gpu_pages,
struct reservation_object *resv);
int evergreen_get_temp(struct radeon_device *rdev);
+int evergreen_get_allowed_info_register(struct radeon_device *rdev,
+ u32 reg, u32 *val);
int sumo_get_temp(struct radeon_device *rdev);
int tn_get_temp(struct radeon_device *rdev);
int cypress_dpm_init(struct radeon_device *rdev);
@@ -563,6 +573,8 @@
bool btc_dpm_vblank_too_short(struct radeon_device *rdev);
void btc_dpm_debugfs_print_current_performance_level(struct radeon_device *rdev,
struct seq_file *m);
+u32 btc_dpm_get_current_sclk(struct radeon_device *rdev);
+u32 btc_dpm_get_current_mclk(struct radeon_device *rdev);
int sumo_dpm_init(struct radeon_device *rdev);
int sumo_dpm_enable(struct radeon_device *rdev);
int sumo_dpm_late_enable(struct radeon_device *rdev);
@@ -581,6 +593,8 @@
struct seq_file *m);
int sumo_dpm_force_performance_level(struct radeon_device *rdev,
enum radeon_dpm_forced_level level);
+u32 sumo_dpm_get_current_sclk(struct radeon_device *rdev);
+u32 sumo_dpm_get_current_mclk(struct radeon_device *rdev);
/*
* cayman
@@ -637,6 +651,8 @@
struct radeon_ring *ring);
void cayman_dma_set_wptr(struct radeon_device *rdev,
struct radeon_ring *ring);
+int cayman_get_allowed_info_register(struct radeon_device *rdev,
+ u32 reg, u32 *val);
int ni_dpm_init(struct radeon_device *rdev);
void ni_dpm_setup_asic(struct radeon_device *rdev);
@@ -655,6 +671,8 @@
int ni_dpm_force_performance_level(struct radeon_device *rdev,
enum radeon_dpm_forced_level level);
bool ni_dpm_vblank_too_short(struct radeon_device *rdev);
+u32 ni_dpm_get_current_sclk(struct radeon_device *rdev);
+u32 ni_dpm_get_current_mclk(struct radeon_device *rdev);
int trinity_dpm_init(struct radeon_device *rdev);
int trinity_dpm_enable(struct radeon_device *rdev);
int trinity_dpm_late_enable(struct radeon_device *rdev);
@@ -674,6 +692,8 @@
int trinity_dpm_force_performance_level(struct radeon_device *rdev,
enum radeon_dpm_forced_level level);
void trinity_dpm_enable_bapm(struct radeon_device *rdev, bool enable);
+u32 trinity_dpm_get_current_sclk(struct radeon_device *rdev);
+u32 trinity_dpm_get_current_mclk(struct radeon_device *rdev);
/* DCE6 - SI */
void dce6_bandwidth_update(struct radeon_device *rdev);
@@ -726,6 +746,8 @@
uint64_t si_get_gpu_clock_counter(struct radeon_device *rdev);
int si_set_uvd_clocks(struct radeon_device *rdev, u32 vclk, u32 dclk);
int si_get_temp(struct radeon_device *rdev);
+int si_get_allowed_info_register(struct radeon_device *rdev,
+ u32 reg, u32 *val);
int si_dpm_init(struct radeon_device *rdev);
void si_dpm_setup_asic(struct radeon_device *rdev);
int si_dpm_enable(struct radeon_device *rdev);
@@ -746,6 +768,8 @@
u32 speed);
u32 si_fan_ctrl_get_mode(struct radeon_device *rdev);
void si_fan_ctrl_set_mode(struct radeon_device *rdev, u32 mode);
+u32 si_dpm_get_current_sclk(struct radeon_device *rdev);
+u32 si_dpm_get_current_mclk(struct radeon_device *rdev);
/* DCE8 - CIK */
void dce8_bandwidth_update(struct radeon_device *rdev);
@@ -841,6 +865,8 @@
struct radeon_ring *ring);
int ci_get_temp(struct radeon_device *rdev);
int kv_get_temp(struct radeon_device *rdev);
+int cik_get_allowed_info_register(struct radeon_device *rdev,
+ u32 reg, u32 *val);
int ci_dpm_init(struct radeon_device *rdev);
int ci_dpm_enable(struct radeon_device *rdev);
@@ -862,6 +888,8 @@
enum radeon_dpm_forced_level level);
bool ci_dpm_vblank_too_short(struct radeon_device *rdev);
void ci_dpm_powergate_uvd(struct radeon_device *rdev, bool gate);
+u32 ci_dpm_get_current_sclk(struct radeon_device *rdev);
+u32 ci_dpm_get_current_mclk(struct radeon_device *rdev);
int ci_fan_ctrl_get_fan_speed_percent(struct radeon_device *rdev,
u32 *speed);
@@ -890,6 +918,8 @@
enum radeon_dpm_forced_level level);
void kv_dpm_powergate_uvd(struct radeon_device *rdev, bool gate);
void kv_dpm_enable_bapm(struct radeon_device *rdev, bool enable);
+u32 kv_dpm_get_current_sclk(struct radeon_device *rdev);
+u32 kv_dpm_get_current_mclk(struct radeon_device *rdev);
/* uvd v1.0 */
uint32_t uvd_v1_0_get_rptr(struct radeon_device *rdev,
diff --git a/drivers/gpu/drm/radeon/radeon_atombios.c b/drivers/gpu/drm/radeon/radeon_atombios.c
index fc1b3f3..8f28524 100644
--- a/drivers/gpu/drm/radeon/radeon_atombios.c
+++ b/drivers/gpu/drm/radeon/radeon_atombios.c
@@ -845,6 +845,7 @@
radeon_link_encoder_connector(dev);
+ radeon_setup_mst_connector(dev);
return true;
}
diff --git a/drivers/gpu/drm/radeon/radeon_audio.c b/drivers/gpu/drm/radeon/radeon_audio.c
index b21ef69..48d49e6 100644
--- a/drivers/gpu/drm/radeon/radeon_audio.c
+++ b/drivers/gpu/drm/radeon/radeon_audio.c
@@ -520,16 +520,40 @@
struct radeon_device *rdev = encoder->dev->dev_private;
struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv;
+ struct drm_connector *connector;
+ struct radeon_connector *radeon_connector = NULL;
u8 buffer[HDMI_INFOFRAME_HEADER_SIZE + HDMI_AVI_INFOFRAME_SIZE];
struct hdmi_avi_infoframe frame;
int err;
+ list_for_each_entry(connector,
+ &encoder->dev->mode_config.connector_list, head) {
+ if (connector->encoder == encoder) {
+ radeon_connector = to_radeon_connector(connector);
+ break;
+ }
+ }
+
+ if (!radeon_connector) {
+ DRM_ERROR("Couldn't find encoder's connector\n");
+ return -ENOENT;
+ }
+
err = drm_hdmi_avi_infoframe_from_display_mode(&frame, mode);
if (err < 0) {
DRM_ERROR("failed to setup AVI infoframe: %d\n", err);
return err;
}
+ if (drm_rgb_quant_range_selectable(radeon_connector_edid(connector))) {
+ if (radeon_encoder->output_csc == RADEON_OUTPUT_CSC_TVRGB)
+ frame.quantization_range = HDMI_QUANTIZATION_RANGE_LIMITED;
+ else
+ frame.quantization_range = HDMI_QUANTIZATION_RANGE_FULL;
+ } else {
+ frame.quantization_range = HDMI_QUANTIZATION_RANGE_DEFAULT;
+ }
+
err = hdmi_avi_infoframe_pack(&frame, buffer, sizeof(buffer));
if (err < 0) {
DRM_ERROR("failed to pack AVI infoframe: %d\n", err);
diff --git a/drivers/gpu/drm/radeon/radeon_connectors.c b/drivers/gpu/drm/radeon/radeon_connectors.c
index 27def67..cebb65e 100644
--- a/drivers/gpu/drm/radeon/radeon_connectors.c
+++ b/drivers/gpu/drm/radeon/radeon_connectors.c
@@ -27,6 +27,7 @@
#include <drm/drm_edid.h>
#include <drm/drm_crtc_helper.h>
#include <drm/drm_fb_helper.h>
+#include <drm/drm_dp_mst_helper.h>
#include <drm/radeon_drm.h>
#include "radeon.h"
#include "radeon_audio.h"
@@ -34,12 +35,33 @@
#include <linux/pm_runtime.h>
+static int radeon_dp_handle_hpd(struct drm_connector *connector)
+{
+ struct radeon_connector *radeon_connector = to_radeon_connector(connector);
+ int ret;
+
+ ret = radeon_dp_mst_check_status(radeon_connector);
+ if (ret == -EINVAL)
+ return 1;
+ return 0;
+}
void radeon_connector_hotplug(struct drm_connector *connector)
{
struct drm_device *dev = connector->dev;
struct radeon_device *rdev = dev->dev_private;
struct radeon_connector *radeon_connector = to_radeon_connector(connector);
+ if (connector->connector_type == DRM_MODE_CONNECTOR_DisplayPort) {
+ struct radeon_connector_atom_dig *dig_connector =
+ radeon_connector->con_priv;
+
+ if (radeon_connector->is_mst_connector)
+ return;
+ if (dig_connector->is_mst) {
+ radeon_dp_handle_hpd(connector);
+ return;
+ }
+ }
/* bail if the connector does not have hpd pin, e.g.,
* VGA, TV, etc.
*/
@@ -135,7 +157,7 @@
if (connector->display_info.bpc)
bpc = connector->display_info.bpc;
else if (ASIC_IS_DCE41(rdev) || ASIC_IS_DCE5(rdev)) {
- struct drm_connector_helper_funcs *connector_funcs =
+ const struct drm_connector_helper_funcs *connector_funcs =
connector->helper_private;
struct drm_encoder *encoder = connector_funcs->best_encoder(connector);
struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
@@ -225,7 +247,7 @@
struct radeon_device *rdev = dev->dev_private;
struct drm_encoder *best_encoder = NULL;
struct drm_encoder *encoder = NULL;
- struct drm_connector_helper_funcs *connector_funcs = connector->helper_private;
+ const struct drm_connector_helper_funcs *connector_funcs = connector->helper_private;
bool connected;
int i;
@@ -702,7 +724,7 @@
if (connector->encoder)
radeon_encoder = to_radeon_encoder(connector->encoder);
else {
- struct drm_connector_helper_funcs *connector_funcs = connector->helper_private;
+ const struct drm_connector_helper_funcs *connector_funcs = connector->helper_private;
radeon_encoder = to_radeon_encoder(connector_funcs->best_encoder(connector));
}
@@ -725,6 +747,30 @@
radeon_property_change_mode(&radeon_encoder->base);
}
+ if (property == rdev->mode_info.output_csc_property) {
+ if (connector->encoder)
+ radeon_encoder = to_radeon_encoder(connector->encoder);
+ else {
+ const struct drm_connector_helper_funcs *connector_funcs = connector->helper_private;
+ radeon_encoder = to_radeon_encoder(connector_funcs->best_encoder(connector));
+ }
+
+ if (radeon_encoder->output_csc == val)
+ return 0;
+
+ radeon_encoder->output_csc = val;
+
+ if (connector->encoder->crtc) {
+ struct drm_crtc *crtc = connector->encoder->crtc;
+ const struct drm_crtc_helper_funcs *crtc_funcs = crtc->helper_private;
+ struct radeon_crtc *radeon_crtc = to_radeon_crtc(crtc);
+
+ radeon_crtc->output_csc = radeon_encoder->output_csc;
+
+ (*crtc_funcs->load_lut)(crtc);
+ }
+ }
+
return 0;
}
@@ -896,7 +942,7 @@
if (connector->encoder)
radeon_encoder = to_radeon_encoder(connector->encoder);
else {
- struct drm_connector_helper_funcs *connector_funcs = connector->helper_private;
+ const struct drm_connector_helper_funcs *connector_funcs = connector->helper_private;
radeon_encoder = to_radeon_encoder(connector_funcs->best_encoder(connector));
}
@@ -964,7 +1010,7 @@
struct radeon_device *rdev = dev->dev_private;
struct radeon_connector *radeon_connector = to_radeon_connector(connector);
struct drm_encoder *encoder;
- struct drm_encoder_helper_funcs *encoder_funcs;
+ const struct drm_encoder_helper_funcs *encoder_funcs;
bool dret = false;
enum drm_connector_status ret = connector_status_disconnected;
int r;
@@ -1094,7 +1140,7 @@
radeon_tv_detect(struct drm_connector *connector, bool force)
{
struct drm_encoder *encoder;
- struct drm_encoder_helper_funcs *encoder_funcs;
+ const struct drm_encoder_helper_funcs *encoder_funcs;
struct radeon_connector *radeon_connector = to_radeon_connector(connector);
enum drm_connector_status ret = connector_status_disconnected;
int r;
@@ -1174,7 +1220,7 @@
struct radeon_device *rdev = dev->dev_private;
struct radeon_connector *radeon_connector = to_radeon_connector(connector);
struct drm_encoder *encoder = NULL;
- struct drm_encoder_helper_funcs *encoder_funcs;
+ const struct drm_encoder_helper_funcs *encoder_funcs;
int i, r;
enum drm_connector_status ret = connector_status_disconnected;
bool dret = false, broken_edid = false;
@@ -1585,6 +1631,9 @@
struct drm_encoder *encoder = radeon_best_single_encoder(connector);
int r;
+ if (radeon_dig_connector->is_mst)
+ return connector_status_disconnected;
+
r = pm_runtime_get_sync(connector->dev->dev);
if (r < 0)
return connector_status_disconnected;
@@ -1635,7 +1684,7 @@
if (radeon_ddc_probe(radeon_connector, true)) /* try DDC */
ret = connector_status_connected;
else if (radeon_connector->dac_load_detect) { /* try load detection */
- struct drm_encoder_helper_funcs *encoder_funcs = encoder->helper_private;
+ const struct drm_encoder_helper_funcs *encoder_funcs = encoder->helper_private;
ret = encoder_funcs->detect(encoder, connector);
}
}
@@ -1643,12 +1692,21 @@
radeon_dig_connector->dp_sink_type = radeon_dp_getsinktype(radeon_connector);
if (radeon_hpd_sense(rdev, radeon_connector->hpd.hpd)) {
ret = connector_status_connected;
- if (radeon_dig_connector->dp_sink_type == CONNECTOR_OBJECT_ID_DISPLAYPORT)
+ if (radeon_dig_connector->dp_sink_type == CONNECTOR_OBJECT_ID_DISPLAYPORT) {
radeon_dp_getdpcd(radeon_connector);
+ r = radeon_dp_mst_probe(radeon_connector);
+ if (r == 1)
+ ret = connector_status_disconnected;
+ }
} else {
if (radeon_dig_connector->dp_sink_type == CONNECTOR_OBJECT_ID_DISPLAYPORT) {
- if (radeon_dp_getdpcd(radeon_connector))
- ret = connector_status_connected;
+ if (radeon_dp_getdpcd(radeon_connector)) {
+ r = radeon_dp_mst_probe(radeon_connector);
+ if (r == 1)
+ ret = connector_status_disconnected;
+ else
+ ret = connector_status_connected;
+ }
} else {
/* try non-aux ddc (DP to DVI/HDMI/etc. adapter) */
if (radeon_ddc_probe(radeon_connector, false))
@@ -1872,6 +1930,10 @@
drm_object_attach_property(&radeon_connector->base.base,
dev->mode_config.scaling_mode_property,
DRM_MODE_SCALE_NONE);
+ if (ASIC_IS_DCE5(rdev))
+ drm_object_attach_property(&radeon_connector->base.base,
+ rdev->mode_info.output_csc_property,
+ RADEON_OUTPUT_CSC_BYPASS);
break;
case DRM_MODE_CONNECTOR_DVII:
case DRM_MODE_CONNECTOR_DVID:
@@ -1904,6 +1966,10 @@
drm_object_attach_property(&radeon_connector->base.base,
rdev->mode_info.audio_property,
RADEON_AUDIO_AUTO);
+ if (ASIC_IS_DCE5(rdev))
+ drm_object_attach_property(&radeon_connector->base.base,
+ rdev->mode_info.output_csc_property,
+ RADEON_OUTPUT_CSC_BYPASS);
subpixel_order = SubPixelHorizontalRGB;
connector->interlace_allowed = true;
@@ -1950,6 +2016,10 @@
drm_object_attach_property(&radeon_connector->base.base,
dev->mode_config.scaling_mode_property,
DRM_MODE_SCALE_NONE);
+ if (ASIC_IS_DCE5(rdev))
+ drm_object_attach_property(&radeon_connector->base.base,
+ rdev->mode_info.output_csc_property,
+ RADEON_OUTPUT_CSC_BYPASS);
/* no HPD on analog connectors */
radeon_connector->hpd.hpd = RADEON_HPD_NONE;
connector->polled = DRM_CONNECTOR_POLL_CONNECT;
@@ -1972,6 +2042,10 @@
drm_object_attach_property(&radeon_connector->base.base,
dev->mode_config.scaling_mode_property,
DRM_MODE_SCALE_NONE);
+ if (ASIC_IS_DCE5(rdev))
+ drm_object_attach_property(&radeon_connector->base.base,
+ rdev->mode_info.output_csc_property,
+ RADEON_OUTPUT_CSC_BYPASS);
/* no HPD on analog connectors */
radeon_connector->hpd.hpd = RADEON_HPD_NONE;
connector->interlace_allowed = true;
@@ -2023,6 +2097,10 @@
rdev->mode_info.load_detect_property,
1);
}
+ if (ASIC_IS_DCE5(rdev))
+ drm_object_attach_property(&radeon_connector->base.base,
+ rdev->mode_info.output_csc_property,
+ RADEON_OUTPUT_CSC_BYPASS);
connector->interlace_allowed = true;
if (connector_type == DRM_MODE_CONNECTOR_DVII)
connector->doublescan_allowed = true;
@@ -2068,6 +2146,10 @@
rdev->mode_info.audio_property,
RADEON_AUDIO_AUTO);
}
+ if (ASIC_IS_DCE5(rdev))
+ drm_object_attach_property(&radeon_connector->base.base,
+ rdev->mode_info.output_csc_property,
+ RADEON_OUTPUT_CSC_BYPASS);
subpixel_order = SubPixelHorizontalRGB;
connector->interlace_allowed = true;
if (connector_type == DRM_MODE_CONNECTOR_HDMIB)
@@ -2116,6 +2198,10 @@
rdev->mode_info.audio_property,
RADEON_AUDIO_AUTO);
}
+ if (ASIC_IS_DCE5(rdev))
+ drm_object_attach_property(&radeon_connector->base.base,
+ rdev->mode_info.output_csc_property,
+ RADEON_OUTPUT_CSC_BYPASS);
connector->interlace_allowed = true;
/* in theory with a DP to VGA converter... */
connector->doublescan_allowed = false;
@@ -2352,3 +2438,27 @@
connector->display_info.subpixel_order = subpixel_order;
drm_connector_register(connector);
}
+
+void radeon_setup_mst_connector(struct drm_device *dev)
+{
+ struct radeon_device *rdev = dev->dev_private;
+ struct drm_connector *connector;
+ struct radeon_connector *radeon_connector;
+
+ if (!ASIC_IS_DCE5(rdev))
+ return;
+
+ if (radeon_mst == 0)
+ return;
+
+ list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
+ int ret;
+
+ radeon_connector = to_radeon_connector(connector);
+
+ if (connector->connector_type != DRM_MODE_CONNECTOR_DisplayPort)
+ continue;
+
+ ret = radeon_dp_mst_init(radeon_connector);
+ }
+}
diff --git a/drivers/gpu/drm/radeon/radeon_device.c b/drivers/gpu/drm/radeon/radeon_device.c
index bd7519f..b7ca4c5 100644
--- a/drivers/gpu/drm/radeon/radeon_device.c
+++ b/drivers/gpu/drm/radeon/radeon_device.c
@@ -1442,6 +1442,11 @@
DRM_ERROR("registering gem debugfs failed (%d).\n", r);
}
+ r = radeon_mst_debugfs_init(rdev);
+ if (r) {
+ DRM_ERROR("registering mst debugfs failed (%d).\n", r);
+ }
+
if (rdev->flags & RADEON_IS_AGP && !rdev->accel_working) {
/* Acceleration not working on AGP card try again
* with fallback to PCI or PCIE GART
diff --git a/drivers/gpu/drm/radeon/radeon_display.c b/drivers/gpu/drm/radeon/radeon_display.c
index 913fafa..d2e9e9e 100644
--- a/drivers/gpu/drm/radeon/radeon_display.c
+++ b/drivers/gpu/drm/radeon/radeon_display.c
@@ -154,7 +154,7 @@
(NI_GRPH_REGAMMA_MODE(NI_REGAMMA_BYPASS) |
NI_OVL_REGAMMA_MODE(NI_REGAMMA_BYPASS)));
WREG32(NI_OUTPUT_CSC_CONTROL + radeon_crtc->crtc_offset,
- (NI_OUTPUT_CSC_GRPH_MODE(NI_OUTPUT_CSC_BYPASS) |
+ (NI_OUTPUT_CSC_GRPH_MODE(radeon_crtc->output_csc) |
NI_OUTPUT_CSC_OVL_MODE(NI_OUTPUT_CSC_BYPASS)));
/* XXX match this to the depth of the crtc fmt block, move to modeset? */
WREG32(0x6940 + radeon_crtc->crtc_offset, 0);
@@ -1382,6 +1382,13 @@
{ RADEON_FMT_DITHER_ENABLE, "on" },
};
+static struct drm_prop_enum_list radeon_output_csc_enum_list[] =
+{ { RADEON_OUTPUT_CSC_BYPASS, "bypass" },
+ { RADEON_OUTPUT_CSC_TVRGB, "tvrgb" },
+ { RADEON_OUTPUT_CSC_YCBCR601, "ycbcr601" },
+ { RADEON_OUTPUT_CSC_YCBCR709, "ycbcr709" },
+};
+
static int radeon_modeset_create_props(struct radeon_device *rdev)
{
int sz;
@@ -1444,6 +1451,12 @@
"dither",
radeon_dither_enum_list, sz);
+ sz = ARRAY_SIZE(radeon_output_csc_enum_list);
+ rdev->mode_info.output_csc_property =
+ drm_property_create_enum(rdev->ddev, 0,
+ "output_csc",
+ radeon_output_csc_enum_list, sz);
+
return 0;
}
diff --git a/drivers/gpu/drm/radeon/radeon_dp_auxch.c b/drivers/gpu/drm/radeon/radeon_dp_auxch.c
new file mode 100644
index 0000000..bf1fecc
--- /dev/null
+++ b/drivers/gpu/drm/radeon/radeon_dp_auxch.c
@@ -0,0 +1,206 @@
+/*
+ * Copyright 2015 Red Hat Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: Dave Airlie
+ */
+#include <drm/drmP.h>
+#include <drm/radeon_drm.h>
+#include "radeon.h"
+#include "nid.h"
+
+#define AUX_RX_ERROR_FLAGS (AUX_SW_RX_OVERFLOW | \
+ AUX_SW_RX_HPD_DISCON | \
+ AUX_SW_RX_PARTIAL_BYTE | \
+ AUX_SW_NON_AUX_MODE | \
+ AUX_SW_RX_MIN_COUNT_VIOL | \
+ AUX_SW_RX_INVALID_STOP | \
+ AUX_SW_RX_SYNC_INVALID_L | \
+ AUX_SW_RX_SYNC_INVALID_H | \
+ AUX_SW_RX_INVALID_START | \
+ AUX_SW_RX_RECV_NO_DET | \
+ AUX_SW_RX_RECV_INVALID_H | \
+ AUX_SW_RX_RECV_INVALID_V)
+
+#define AUX_SW_REPLY_GET_BYTE_COUNT(x) (((x) >> 24) & 0x1f)
+
+#define BARE_ADDRESS_SIZE 3
+
+static const u32 aux_offset[] =
+{
+ 0x6200 - 0x6200,
+ 0x6250 - 0x6200,
+ 0x62a0 - 0x6200,
+ 0x6300 - 0x6200,
+ 0x6350 - 0x6200,
+ 0x63a0 - 0x6200,
+};
+
+ssize_t
+radeon_dp_aux_transfer_native(struct drm_dp_aux *aux, struct drm_dp_aux_msg *msg)
+{
+ struct radeon_i2c_chan *chan =
+ container_of(aux, struct radeon_i2c_chan, aux);
+ struct drm_device *dev = chan->dev;
+ struct radeon_device *rdev = dev->dev_private;
+ int ret = 0, i;
+ uint32_t tmp, ack = 0;
+ int instance = chan->rec.i2c_id & 0xf;
+ u8 byte;
+ u8 *buf = msg->buffer;
+ int retry_count = 0;
+ int bytes;
+ int msize;
+ bool is_write = false;
+
+ if (WARN_ON(msg->size > 16))
+ return -E2BIG;
+
+ switch (msg->request & ~DP_AUX_I2C_MOT) {
+ case DP_AUX_NATIVE_WRITE:
+ case DP_AUX_I2C_WRITE:
+ is_write = true;
+ break;
+ case DP_AUX_NATIVE_READ:
+ case DP_AUX_I2C_READ:
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ /* work out two sizes required */
+ msize = 0;
+ bytes = BARE_ADDRESS_SIZE;
+ if (msg->size) {
+ msize = msg->size - 1;
+ bytes++;
+ if (is_write)
+ bytes += msg->size;
+ }
+
+ mutex_lock(&chan->mutex);
+
+ /* switch the pad to aux mode */
+ tmp = RREG32(chan->rec.mask_clk_reg);
+ tmp |= (1 << 16);
+ WREG32(chan->rec.mask_clk_reg, tmp);
+
+ /* setup AUX control register with correct HPD pin */
+ tmp = RREG32(AUX_CONTROL + aux_offset[instance]);
+
+ tmp &= AUX_HPD_SEL(0x7);
+ tmp |= AUX_HPD_SEL(chan->rec.hpd);
+ tmp |= AUX_EN | AUX_LS_READ_EN;
+
+ WREG32(AUX_CONTROL + aux_offset[instance], tmp);
+
+ /* atombios appears to write this twice, let's copy it */
+ WREG32(AUX_SW_CONTROL + aux_offset[instance],
+ AUX_SW_WR_BYTES(bytes));
+ WREG32(AUX_SW_CONTROL + aux_offset[instance],
+ AUX_SW_WR_BYTES(bytes));
+
+ /* write the data header into the registers */
+ /* request, address, msg size */
+ byte = (msg->request << 4);
+ WREG32(AUX_SW_DATA + aux_offset[instance],
+ AUX_SW_DATA_MASK(byte) | AUX_SW_AUTOINCREMENT_DISABLE);
+
+ byte = (msg->address >> 8) & 0xff;
+ WREG32(AUX_SW_DATA + aux_offset[instance],
+ AUX_SW_DATA_MASK(byte));
+
+ byte = msg->address & 0xff;
+ WREG32(AUX_SW_DATA + aux_offset[instance],
+ AUX_SW_DATA_MASK(byte));
+
+ byte = msize;
+ WREG32(AUX_SW_DATA + aux_offset[instance],
+ AUX_SW_DATA_MASK(byte));
+
+ /* if we are writing - write the msg buffer */
+ if (is_write) {
+ for (i = 0; i < msg->size; i++) {
+ WREG32(AUX_SW_DATA + aux_offset[instance],
+ AUX_SW_DATA_MASK(buf[i]));
+ }
+ }
+
+ /* clear the ACK */
+ WREG32(AUX_SW_INTERRUPT_CONTROL + aux_offset[instance], AUX_SW_DONE_ACK);
+
+ /* write the size and GO bits */
+ WREG32(AUX_SW_CONTROL + aux_offset[instance],
+ AUX_SW_WR_BYTES(bytes) | AUX_SW_GO);
+
+ /* poll the status registers - TODO irq support */
+ do {
+ tmp = RREG32(AUX_SW_STATUS + aux_offset[instance]);
+ if (tmp & AUX_SW_DONE) {
+ break;
+ }
+ usleep_range(100, 200);
+ } while (retry_count++ < 1000);
+
+ if (retry_count >= 1000) {
+ DRM_ERROR("auxch hw never signalled completion, error %08x\n", tmp);
+ ret = -EIO;
+ goto done;
+ }
+
+ if (tmp & AUX_SW_RX_TIMEOUT) {
+ DRM_DEBUG_KMS("dp_aux_ch timed out\n");
+ ret = -ETIMEDOUT;
+ goto done;
+ }
+ if (tmp & AUX_RX_ERROR_FLAGS) {
+ DRM_DEBUG_KMS("dp_aux_ch flags not zero: %08x\n", tmp);
+ ret = -EIO;
+ goto done;
+ }
+
+ bytes = AUX_SW_REPLY_GET_BYTE_COUNT(tmp);
+ if (bytes) {
+ WREG32(AUX_SW_DATA + aux_offset[instance],
+ AUX_SW_DATA_RW | AUX_SW_AUTOINCREMENT_DISABLE);
+
+ tmp = RREG32(AUX_SW_DATA + aux_offset[instance]);
+ ack = (tmp >> 8) & 0xff;
+
+ for (i = 0; i < bytes - 1; i++) {
+ tmp = RREG32(AUX_SW_DATA + aux_offset[instance]);
+ if (buf)
+ buf[i] = (tmp >> 8) & 0xff;
+ }
+ if (buf)
+ ret = bytes - 1;
+ }
+
+ WREG32(AUX_SW_INTERRUPT_CONTROL + aux_offset[instance], AUX_SW_DONE_ACK);
+
+ if (is_write)
+ ret = msg->size;
+done:
+ mutex_unlock(&chan->mutex);
+
+ if (ret >= 0)
+ msg->reply = ack >> 4;
+ return ret;
+}
diff --git a/drivers/gpu/drm/radeon/radeon_dp_mst.c b/drivers/gpu/drm/radeon/radeon_dp_mst.c
new file mode 100644
index 0000000..1017338
--- /dev/null
+++ b/drivers/gpu/drm/radeon/radeon_dp_mst.c
@@ -0,0 +1,782 @@
+
+#include <drm/drmP.h>
+#include <drm/drm_dp_mst_helper.h>
+#include <drm/drm_fb_helper.h>
+
+#include "radeon.h"
+#include "atom.h"
+#include "ni_reg.h"
+
+static struct radeon_encoder *radeon_dp_create_fake_mst_encoder(struct radeon_connector *connector);
+
+static int radeon_atom_set_enc_offset(int id)
+{
+ static const int offsets[] = { EVERGREEN_CRTC0_REGISTER_OFFSET,
+ EVERGREEN_CRTC1_REGISTER_OFFSET,
+ EVERGREEN_CRTC2_REGISTER_OFFSET,
+ EVERGREEN_CRTC3_REGISTER_OFFSET,
+ EVERGREEN_CRTC4_REGISTER_OFFSET,
+ EVERGREEN_CRTC5_REGISTER_OFFSET,
+ 0x13830 - 0x7030 };
+
+ return offsets[id];
+}
+
+static int radeon_dp_mst_set_be_cntl(struct radeon_encoder *primary,
+ struct radeon_encoder_mst *mst_enc,
+ enum radeon_hpd_id hpd, bool enable)
+{
+ struct drm_device *dev = primary->base.dev;
+ struct radeon_device *rdev = dev->dev_private;
+ uint32_t reg;
+ int retries = 0;
+ uint32_t temp;
+
+ reg = RREG32(NI_DIG_BE_CNTL + primary->offset);
+
+ /* set MST mode */
+ reg &= ~NI_DIG_FE_DIG_MODE(7);
+ reg |= NI_DIG_FE_DIG_MODE(NI_DIG_MODE_DP_MST);
+
+ if (enable)
+ reg |= NI_DIG_FE_SOURCE_SELECT(1 << mst_enc->fe);
+ else
+ reg &= ~NI_DIG_FE_SOURCE_SELECT(1 << mst_enc->fe);
+
+ reg |= NI_DIG_HPD_SELECT(hpd);
+ DRM_DEBUG_KMS("writing 0x%08x 0x%08x\n", NI_DIG_BE_CNTL + primary->offset, reg);
+ WREG32(NI_DIG_BE_CNTL + primary->offset, reg);
+
+ if (enable) {
+ uint32_t offset = radeon_atom_set_enc_offset(mst_enc->fe);
+
+ do {
+ temp = RREG32(NI_DIG_FE_CNTL + offset);
+ } while ((temp & NI_DIG_SYMCLK_FE_ON) && retries++ < 10000);
+ if (retries == 10000)
+ DRM_ERROR("timed out waiting for FE %d %d\n", primary->offset, mst_enc->fe);
+ }
+ return 0;
+}
+
+static int radeon_dp_mst_set_stream_attrib(struct radeon_encoder *primary,
+ int stream_number,
+ int fe,
+ int slots)
+{
+ struct drm_device *dev = primary->base.dev;
+ struct radeon_device *rdev = dev->dev_private;
+ u32 temp, val;
+ int retries = 0;
+ int satreg, satidx;
+
+ satreg = stream_number >> 1;
+ satidx = stream_number & 1;
+
+ temp = RREG32(NI_DP_MSE_SAT0 + satreg + primary->offset);
+
+ val = NI_DP_MSE_SAT_SLOT_COUNT0(slots) | NI_DP_MSE_SAT_SRC0(fe);
+
+ val <<= (16 * satidx);
+
+ temp &= ~(0xffff << (16 * satidx));
+
+ temp |= val;
+
+ DRM_DEBUG_KMS("writing 0x%08x 0x%08x\n", NI_DP_MSE_SAT0 + satreg + primary->offset, temp);
+ WREG32(NI_DP_MSE_SAT0 + satreg + primary->offset, temp);
+
+ WREG32(NI_DP_MSE_SAT_UPDATE + primary->offset, 1);
+
+ do {
+ temp = RREG32(NI_DP_MSE_SAT_UPDATE + primary->offset);
+ } while ((temp & 0x1) && retries++ < 10000);
+
+ if (retries == 10000)
+ DRM_ERROR("timed out waitin for SAT update %d\n", primary->offset);
+
+ /* MTP 16 ? */
+ return 0;
+}
+
+static int radeon_dp_mst_update_stream_attribs(struct radeon_connector *mst_conn,
+ struct radeon_encoder *primary)
+{
+ struct drm_device *dev = mst_conn->base.dev;
+ struct stream_attribs new_attribs[6];
+ int i;
+ int idx = 0;
+ struct radeon_connector *radeon_connector;
+ struct drm_connector *connector;
+
+ memset(new_attribs, 0, sizeof(new_attribs));
+ list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
+ struct radeon_encoder *subenc;
+ struct radeon_encoder_mst *mst_enc;
+
+ radeon_connector = to_radeon_connector(connector);
+ if (!radeon_connector->is_mst_connector)
+ continue;
+
+ if (radeon_connector->mst_port != mst_conn)
+ continue;
+
+ subenc = radeon_connector->mst_encoder;
+ mst_enc = subenc->enc_priv;
+
+ if (!mst_enc->enc_active)
+ continue;
+
+ new_attribs[idx].fe = mst_enc->fe;
+ new_attribs[idx].slots = drm_dp_mst_get_vcpi_slots(&mst_conn->mst_mgr, mst_enc->port);
+ idx++;
+ }
+
+ for (i = 0; i < idx; i++) {
+ if (new_attribs[i].fe != mst_conn->cur_stream_attribs[i].fe ||
+ new_attribs[i].slots != mst_conn->cur_stream_attribs[i].slots) {
+ radeon_dp_mst_set_stream_attrib(primary, i, new_attribs[i].fe, new_attribs[i].slots);
+ mst_conn->cur_stream_attribs[i].fe = new_attribs[i].fe;
+ mst_conn->cur_stream_attribs[i].slots = new_attribs[i].slots;
+ }
+ }
+
+ for (i = idx; i < mst_conn->enabled_attribs; i++) {
+ radeon_dp_mst_set_stream_attrib(primary, i, 0, 0);
+ mst_conn->cur_stream_attribs[i].fe = 0;
+ mst_conn->cur_stream_attribs[i].slots = 0;
+ }
+ mst_conn->enabled_attribs = idx;
+ return 0;
+}
+
+static int radeon_dp_mst_set_vcp_size(struct radeon_encoder *mst, uint32_t x, uint32_t y)
+{
+ struct drm_device *dev = mst->base.dev;
+ struct radeon_device *rdev = dev->dev_private;
+ struct radeon_encoder_mst *mst_enc = mst->enc_priv;
+ uint32_t val, temp;
+ uint32_t offset = radeon_atom_set_enc_offset(mst_enc->fe);
+ int retries = 0;
+
+ val = NI_DP_MSE_RATE_X(x) | NI_DP_MSE_RATE_Y(y);
+
+ WREG32(NI_DP_MSE_RATE_CNTL + offset, val);
+
+ do {
+ temp = RREG32(NI_DP_MSE_RATE_UPDATE + offset);
+ } while ((temp & 0x1) && (retries++ < 10000));
+
+ if (retries >= 10000)
+ DRM_ERROR("timed out wait for rate cntl %d\n", mst_enc->fe);
+ return 0;
+}
+
+static int radeon_dp_mst_get_ddc_modes(struct drm_connector *connector)
+{
+ struct radeon_connector *radeon_connector = to_radeon_connector(connector);
+ struct radeon_connector *master = radeon_connector->mst_port;
+ struct edid *edid;
+ int ret = 0;
+
+ edid = drm_dp_mst_get_edid(connector, &master->mst_mgr, radeon_connector->port);
+ radeon_connector->edid = edid;
+ DRM_DEBUG_KMS("edid retrieved %p\n", edid);
+ if (radeon_connector->edid) {
+ drm_mode_connector_update_edid_property(&radeon_connector->base, radeon_connector->edid);
+ ret = drm_add_edid_modes(&radeon_connector->base, radeon_connector->edid);
+ drm_edid_to_eld(&radeon_connector->base, radeon_connector->edid);
+ return ret;
+ }
+ drm_mode_connector_update_edid_property(&radeon_connector->base, NULL);
+
+ return ret;
+}
+
+static int radeon_dp_mst_get_modes(struct drm_connector *connector)
+{
+ return radeon_dp_mst_get_ddc_modes(connector);
+}
+
+static enum drm_mode_status
+radeon_dp_mst_mode_valid(struct drm_connector *connector,
+ struct drm_display_mode *mode)
+{
+ /* TODO - validate mode against available PBN for link */
+ if (mode->clock < 10000)
+ return MODE_CLOCK_LOW;
+
+ if (mode->flags & DRM_MODE_FLAG_DBLCLK)
+ return MODE_H_ILLEGAL;
+
+ return MODE_OK;
+}
+
+struct drm_encoder *radeon_mst_best_encoder(struct drm_connector *connector)
+{
+ struct radeon_connector *radeon_connector = to_radeon_connector(connector);
+
+ return &radeon_connector->mst_encoder->base;
+}
+
+static const struct drm_connector_helper_funcs radeon_dp_mst_connector_helper_funcs = {
+ .get_modes = radeon_dp_mst_get_modes,
+ .mode_valid = radeon_dp_mst_mode_valid,
+ .best_encoder = radeon_mst_best_encoder,
+};
+
+static enum drm_connector_status
+radeon_dp_mst_detect(struct drm_connector *connector, bool force)
+{
+ struct radeon_connector *radeon_connector = to_radeon_connector(connector);
+ struct radeon_connector *master = radeon_connector->mst_port;
+
+ return drm_dp_mst_detect_port(connector, &master->mst_mgr, radeon_connector->port);
+}
+
+static void
+radeon_dp_mst_connector_destroy(struct drm_connector *connector)
+{
+ struct radeon_connector *radeon_connector = to_radeon_connector(connector);
+ struct radeon_encoder *radeon_encoder = radeon_connector->mst_encoder;
+
+ drm_encoder_cleanup(&radeon_encoder->base);
+ kfree(radeon_encoder);
+ drm_connector_cleanup(connector);
+ kfree(radeon_connector);
+}
+
+static void radeon_connector_dpms(struct drm_connector *connector, int mode)
+{
+ DRM_DEBUG_KMS("\n");
+}
+
+static const struct drm_connector_funcs radeon_dp_mst_connector_funcs = {
+ .dpms = radeon_connector_dpms,
+ .detect = radeon_dp_mst_detect,
+ .fill_modes = drm_helper_probe_single_connector_modes,
+ .destroy = radeon_dp_mst_connector_destroy,
+};
+
+static struct drm_connector *radeon_dp_add_mst_connector(struct drm_dp_mst_topology_mgr *mgr,
+ struct drm_dp_mst_port *port,
+ const char *pathprop)
+{
+ struct radeon_connector *master = container_of(mgr, struct radeon_connector, mst_mgr);
+ struct drm_device *dev = master->base.dev;
+ struct radeon_device *rdev = dev->dev_private;
+ struct radeon_connector *radeon_connector;
+ struct drm_connector *connector;
+
+ radeon_connector = kzalloc(sizeof(*radeon_connector), GFP_KERNEL);
+ if (!radeon_connector)
+ return NULL;
+
+ radeon_connector->is_mst_connector = true;
+ connector = &radeon_connector->base;
+ radeon_connector->port = port;
+ radeon_connector->mst_port = master;
+ DRM_DEBUG_KMS("\n");
+
+ drm_connector_init(dev, connector, &radeon_dp_mst_connector_funcs, DRM_MODE_CONNECTOR_DisplayPort);
+ drm_connector_helper_add(connector, &radeon_dp_mst_connector_helper_funcs);
+ radeon_connector->mst_encoder = radeon_dp_create_fake_mst_encoder(master);
+
+ drm_object_attach_property(&connector->base, dev->mode_config.path_property, 0);
+ drm_mode_connector_set_path_property(connector, pathprop);
+ drm_reinit_primary_mode_group(dev);
+
+ mutex_lock(&dev->mode_config.mutex);
+ radeon_fb_add_connector(rdev, connector);
+ mutex_unlock(&dev->mode_config.mutex);
+
+ drm_connector_register(connector);
+ return connector;
+}
+
+static void radeon_dp_destroy_mst_connector(struct drm_dp_mst_topology_mgr *mgr,
+ struct drm_connector *connector)
+{
+ struct radeon_connector *master = container_of(mgr, struct radeon_connector, mst_mgr);
+ struct drm_device *dev = master->base.dev;
+ struct radeon_device *rdev = dev->dev_private;
+
+ drm_connector_unregister(connector);
+ /* need to nuke the connector */
+ mutex_lock(&dev->mode_config.mutex);
+ /* dpms off */
+ radeon_fb_remove_connector(rdev, connector);
+
+ drm_connector_cleanup(connector);
+ mutex_unlock(&dev->mode_config.mutex);
+ drm_reinit_primary_mode_group(dev);
+
+
+ kfree(connector);
+ DRM_DEBUG_KMS("\n");
+}
+
+static void radeon_dp_mst_hotplug(struct drm_dp_mst_topology_mgr *mgr)
+{
+ struct radeon_connector *master = container_of(mgr, struct radeon_connector, mst_mgr);
+ struct drm_device *dev = master->base.dev;
+
+ drm_kms_helper_hotplug_event(dev);
+}
+
+struct drm_dp_mst_topology_cbs mst_cbs = {
+ .add_connector = radeon_dp_add_mst_connector,
+ .destroy_connector = radeon_dp_destroy_mst_connector,
+ .hotplug = radeon_dp_mst_hotplug,
+};
+
+struct radeon_connector *radeon_mst_find_connector(struct drm_encoder *encoder)
+{
+ struct drm_device *dev = encoder->dev;
+ struct drm_connector *connector;
+
+ list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
+ struct radeon_connector *radeon_connector = to_radeon_connector(connector);
+ if (!connector->encoder)
+ continue;
+ if (!radeon_connector->is_mst_connector)
+ continue;
+
+ DRM_DEBUG_KMS("checking %p vs %p\n", connector->encoder, encoder);
+ if (connector->encoder == encoder)
+ return radeon_connector;
+ }
+ return NULL;
+}
+
+void radeon_dp_mst_prepare_pll(struct drm_crtc *crtc, struct drm_display_mode *mode)
+{
+ struct radeon_crtc *radeon_crtc = to_radeon_crtc(crtc);
+ struct drm_device *dev = crtc->dev;
+ struct radeon_device *rdev = dev->dev_private;
+ struct radeon_encoder *radeon_encoder = to_radeon_encoder(radeon_crtc->encoder);
+ struct radeon_encoder_mst *mst_enc = radeon_encoder->enc_priv;
+ struct radeon_connector *radeon_connector = radeon_mst_find_connector(&radeon_encoder->base);
+ int dp_clock;
+ struct radeon_connector_atom_dig *dig_connector = mst_enc->connector->con_priv;
+
+ if (radeon_connector) {
+ radeon_connector->pixelclock_for_modeset = mode->clock;
+ if (radeon_connector->base.display_info.bpc)
+ radeon_crtc->bpc = radeon_connector->base.display_info.bpc;
+ else
+ radeon_crtc->bpc = 8;
+ }
+
+ DRM_DEBUG_KMS("dp_clock %p %d\n", dig_connector, dig_connector->dp_clock);
+ dp_clock = dig_connector->dp_clock;
+ radeon_crtc->ss_enabled =
+ radeon_atombios_get_asic_ss_info(rdev, &radeon_crtc->ss,
+ ASIC_INTERNAL_SS_ON_DP,
+ dp_clock);
+}
+
+static void
+radeon_mst_encoder_dpms(struct drm_encoder *encoder, int mode)
+{
+ struct drm_device *dev = encoder->dev;
+ struct radeon_device *rdev = dev->dev_private;
+ struct radeon_encoder *radeon_encoder, *primary;
+ struct radeon_encoder_mst *mst_enc;
+ struct radeon_encoder_atom_dig *dig_enc;
+ struct radeon_connector *radeon_connector;
+ struct drm_crtc *crtc;
+ struct radeon_crtc *radeon_crtc;
+ int ret, slots;
+
+ if (!ASIC_IS_DCE5(rdev)) {
+ DRM_ERROR("got mst dpms on non-DCE5\n");
+ return;
+ }
+
+ radeon_connector = radeon_mst_find_connector(encoder);
+ if (!radeon_connector)
+ return;
+
+ radeon_encoder = to_radeon_encoder(encoder);
+
+ mst_enc = radeon_encoder->enc_priv;
+
+ primary = mst_enc->primary;
+
+ dig_enc = primary->enc_priv;
+
+ crtc = encoder->crtc;
+ DRM_DEBUG_KMS("got connector %d\n", dig_enc->active_mst_links);
+
+ switch (mode) {
+ case DRM_MODE_DPMS_ON:
+ dig_enc->active_mst_links++;
+
+ radeon_crtc = to_radeon_crtc(crtc);
+
+ if (dig_enc->active_mst_links == 1) {
+ mst_enc->fe = dig_enc->dig_encoder;
+ mst_enc->fe_from_be = true;
+ atombios_set_mst_encoder_crtc_source(encoder, mst_enc->fe);
+
+ atombios_dig_encoder_setup(&primary->base, ATOM_ENCODER_CMD_SETUP, 0);
+ atombios_dig_transmitter_setup2(&primary->base, ATOM_TRANSMITTER_ACTION_ENABLE,
+ 0, 0, dig_enc->dig_encoder);
+
+ if (radeon_dp_needs_link_train(mst_enc->connector) ||
+ dig_enc->active_mst_links == 1) {
+ radeon_dp_link_train(&primary->base, &mst_enc->connector->base);
+ }
+
+ } else {
+ mst_enc->fe = radeon_atom_pick_dig_encoder(encoder, radeon_crtc->crtc_id);
+ if (mst_enc->fe == -1)
+ DRM_ERROR("failed to get frontend for dig encoder\n");
+ mst_enc->fe_from_be = false;
+ atombios_set_mst_encoder_crtc_source(encoder, mst_enc->fe);
+ }
+
+ DRM_DEBUG_KMS("dig encoder is %d %d %d\n", dig_enc->dig_encoder,
+ dig_enc->linkb, radeon_crtc->crtc_id);
+
+ ret = drm_dp_mst_allocate_vcpi(&radeon_connector->mst_port->mst_mgr,
+ radeon_connector->port,
+ mst_enc->pbn, &slots);
+ ret = drm_dp_update_payload_part1(&radeon_connector->mst_port->mst_mgr);
+
+ radeon_dp_mst_set_be_cntl(primary, mst_enc,
+ radeon_connector->mst_port->hpd.hpd, true);
+
+ mst_enc->enc_active = true;
+ radeon_dp_mst_update_stream_attribs(radeon_connector->mst_port, primary);
+ radeon_dp_mst_set_vcp_size(radeon_encoder, slots, 0);
+
+ atombios_dig_encoder_setup2(&primary->base, ATOM_ENCODER_CMD_DP_VIDEO_ON, 0,
+ mst_enc->fe);
+ ret = drm_dp_check_act_status(&radeon_connector->mst_port->mst_mgr);
+
+ ret = drm_dp_update_payload_part2(&radeon_connector->mst_port->mst_mgr);
+
+ break;
+ case DRM_MODE_DPMS_STANDBY:
+ case DRM_MODE_DPMS_SUSPEND:
+ case DRM_MODE_DPMS_OFF:
+ DRM_ERROR("DPMS OFF %d\n", dig_enc->active_mst_links);
+
+ if (!mst_enc->enc_active)
+ return;
+
+ drm_dp_mst_reset_vcpi_slots(&radeon_connector->mst_port->mst_mgr, mst_enc->port);
+ ret = drm_dp_update_payload_part1(&radeon_connector->mst_port->mst_mgr);
+
+ drm_dp_check_act_status(&radeon_connector->mst_port->mst_mgr);
+ /* and this can also fail */
+ drm_dp_update_payload_part2(&radeon_connector->mst_port->mst_mgr);
+
+ drm_dp_mst_deallocate_vcpi(&radeon_connector->mst_port->mst_mgr, mst_enc->port);
+
+ mst_enc->enc_active = false;
+ radeon_dp_mst_update_stream_attribs(radeon_connector->mst_port, primary);
+
+ radeon_dp_mst_set_be_cntl(primary, mst_enc,
+ radeon_connector->mst_port->hpd.hpd, false);
+ atombios_dig_encoder_setup2(&primary->base, ATOM_ENCODER_CMD_DP_VIDEO_OFF, 0,
+ mst_enc->fe);
+
+ if (!mst_enc->fe_from_be)
+ radeon_atom_release_dig_encoder(rdev, mst_enc->fe);
+
+ mst_enc->fe_from_be = false;
+ dig_enc->active_mst_links--;
+ if (dig_enc->active_mst_links == 0) {
+ /* drop link */
+ }
+
+ break;
+ }
+
+}
+
+static bool radeon_mst_mode_fixup(struct drm_encoder *encoder,
+ const struct drm_display_mode *mode,
+ struct drm_display_mode *adjusted_mode)
+{
+ struct radeon_encoder_mst *mst_enc;
+ struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
+ int bpp = 24;
+
+ mst_enc = radeon_encoder->enc_priv;
+
+ mst_enc->pbn = drm_dp_calc_pbn_mode(adjusted_mode->clock, bpp);
+
+ mst_enc->primary->active_device = mst_enc->primary->devices & mst_enc->connector->devices;
+ DRM_DEBUG_KMS("setting active device to %08x from %08x %08x for encoder %d\n",
+ mst_enc->primary->active_device, mst_enc->primary->devices,
+ mst_enc->connector->devices, mst_enc->primary->base.encoder_type);
+
+
+ drm_mode_set_crtcinfo(adjusted_mode, 0);
+ {
+ struct radeon_connector_atom_dig *dig_connector;
+
+ dig_connector = mst_enc->connector->con_priv;
+ dig_connector->dp_lane_count = drm_dp_max_lane_count(dig_connector->dpcd);
+ dig_connector->dp_clock = radeon_dp_get_max_link_rate(&mst_enc->connector->base,
+ dig_connector->dpcd);
+ DRM_DEBUG_KMS("dig clock %p %d %d\n", dig_connector,
+ dig_connector->dp_lane_count, dig_connector->dp_clock);
+ }
+ return true;
+}
+
+static void radeon_mst_encoder_prepare(struct drm_encoder *encoder)
+{
+ struct radeon_connector *radeon_connector;
+ struct radeon_encoder *radeon_encoder, *primary;
+ struct radeon_encoder_mst *mst_enc;
+ struct radeon_encoder_atom_dig *dig_enc;
+
+ radeon_connector = radeon_mst_find_connector(encoder);
+ if (!radeon_connector) {
+ DRM_DEBUG_KMS("failed to find connector %p\n", encoder);
+ return;
+ }
+ radeon_encoder = to_radeon_encoder(encoder);
+
+ radeon_mst_encoder_dpms(encoder, DRM_MODE_DPMS_OFF);
+
+ mst_enc = radeon_encoder->enc_priv;
+
+ primary = mst_enc->primary;
+
+ dig_enc = primary->enc_priv;
+
+ mst_enc->port = radeon_connector->port;
+
+ if (dig_enc->dig_encoder == -1) {
+ dig_enc->dig_encoder = radeon_atom_pick_dig_encoder(&primary->base, -1);
+ primary->offset = radeon_atom_set_enc_offset(dig_enc->dig_encoder);
+ atombios_set_mst_encoder_crtc_source(encoder, dig_enc->dig_encoder);
+
+
+ }
+ DRM_DEBUG_KMS("%d %d\n", dig_enc->dig_encoder, primary->offset);
+}
+
+static void
+radeon_mst_encoder_mode_set(struct drm_encoder *encoder,
+ struct drm_display_mode *mode,
+ struct drm_display_mode *adjusted_mode)
+{
+ DRM_DEBUG_KMS("\n");
+}
+
+static void radeon_mst_encoder_commit(struct drm_encoder *encoder)
+{
+ radeon_mst_encoder_dpms(encoder, DRM_MODE_DPMS_ON);
+ DRM_DEBUG_KMS("\n");
+}
+
+static const struct drm_encoder_helper_funcs radeon_mst_helper_funcs = {
+ .dpms = radeon_mst_encoder_dpms,
+ .mode_fixup = radeon_mst_mode_fixup,
+ .prepare = radeon_mst_encoder_prepare,
+ .mode_set = radeon_mst_encoder_mode_set,
+ .commit = radeon_mst_encoder_commit,
+};
+
+void radeon_dp_mst_encoder_destroy(struct drm_encoder *encoder)
+{
+ drm_encoder_cleanup(encoder);
+ kfree(encoder);
+}
+
+static const struct drm_encoder_funcs radeon_dp_mst_enc_funcs = {
+ .destroy = radeon_dp_mst_encoder_destroy,
+};
+
+static struct radeon_encoder *
+radeon_dp_create_fake_mst_encoder(struct radeon_connector *connector)
+{
+ struct drm_device *dev = connector->base.dev;
+ struct radeon_device *rdev = dev->dev_private;
+ struct radeon_encoder *radeon_encoder;
+ struct radeon_encoder_mst *mst_enc;
+ struct drm_encoder *encoder;
+ const struct drm_connector_helper_funcs *connector_funcs = connector->base.helper_private;
+ struct drm_encoder *enc_master = connector_funcs->best_encoder(&connector->base);
+
+ DRM_DEBUG_KMS("enc master is %p\n", enc_master);
+ radeon_encoder = kzalloc(sizeof(*radeon_encoder), GFP_KERNEL);
+ if (!radeon_encoder)
+ return NULL;
+
+ radeon_encoder->enc_priv = kzalloc(sizeof(*mst_enc), GFP_KERNEL);
+ if (!radeon_encoder->enc_priv) {
+ kfree(radeon_encoder);
+ return NULL;
+ }
+ encoder = &radeon_encoder->base;
+ switch (rdev->num_crtc) {
+ case 1:
+ encoder->possible_crtcs = 0x1;
+ break;
+ case 2:
+ default:
+ encoder->possible_crtcs = 0x3;
+ break;
+ case 4:
+ encoder->possible_crtcs = 0xf;
+ break;
+ case 6:
+ encoder->possible_crtcs = 0x3f;
+ break;
+ }
+
+ drm_encoder_init(dev, &radeon_encoder->base, &radeon_dp_mst_enc_funcs,
+ DRM_MODE_ENCODER_DPMST);
+ drm_encoder_helper_add(encoder, &radeon_mst_helper_funcs);
+
+ mst_enc = radeon_encoder->enc_priv;
+ mst_enc->connector = connector;
+ mst_enc->primary = to_radeon_encoder(enc_master);
+ radeon_encoder->is_mst_encoder = true;
+ return radeon_encoder;
+}
+
+int
+radeon_dp_mst_init(struct radeon_connector *radeon_connector)
+{
+ struct drm_device *dev = radeon_connector->base.dev;
+
+ if (!radeon_connector->ddc_bus->has_aux)
+ return 0;
+
+ radeon_connector->mst_mgr.cbs = &mst_cbs;
+ return drm_dp_mst_topology_mgr_init(&radeon_connector->mst_mgr, dev->dev,
+ &radeon_connector->ddc_bus->aux, 16, 6,
+ radeon_connector->base.base.id);
+}
+
+int
+radeon_dp_mst_probe(struct radeon_connector *radeon_connector)
+{
+ struct radeon_connector_atom_dig *dig_connector = radeon_connector->con_priv;
+ int ret;
+ u8 msg[1];
+
+ if (dig_connector->dpcd[DP_DPCD_REV] < 0x12)
+ return 0;
+
+ ret = drm_dp_dpcd_read(&radeon_connector->ddc_bus->aux, DP_MSTM_CAP, msg,
+ 1);
+ if (ret) {
+ if (msg[0] & DP_MST_CAP) {
+ DRM_DEBUG_KMS("Sink is MST capable\n");
+ dig_connector->is_mst = true;
+ } else {
+ DRM_DEBUG_KMS("Sink is not MST capable\n");
+ dig_connector->is_mst = false;
+ }
+
+ }
+ drm_dp_mst_topology_mgr_set_mst(&radeon_connector->mst_mgr,
+ dig_connector->is_mst);
+ return dig_connector->is_mst;
+}
+
+int
+radeon_dp_mst_check_status(struct radeon_connector *radeon_connector)
+{
+ struct radeon_connector_atom_dig *dig_connector = radeon_connector->con_priv;
+ int retry;
+
+ if (dig_connector->is_mst) {
+ u8 esi[16] = { 0 };
+ int dret;
+ int ret = 0;
+ bool handled;
+
+ dret = drm_dp_dpcd_read(&radeon_connector->ddc_bus->aux,
+ DP_SINK_COUNT_ESI, esi, 8);
+go_again:
+ if (dret == 8) {
+ DRM_DEBUG_KMS("got esi %02x %02x %02x\n", esi[0], esi[1], esi[2]);
+ ret = drm_dp_mst_hpd_irq(&radeon_connector->mst_mgr, esi, &handled);
+
+ if (handled) {
+ for (retry = 0; retry < 3; retry++) {
+ int wret;
+ wret = drm_dp_dpcd_write(&radeon_connector->ddc_bus->aux,
+ DP_SINK_COUNT_ESI + 1, &esi[1], 3);
+ if (wret == 3)
+ break;
+ }
+
+ dret = drm_dp_dpcd_read(&radeon_connector->ddc_bus->aux,
+ DP_SINK_COUNT_ESI, esi, 8);
+ if (dret == 8) {
+ DRM_DEBUG_KMS("got esi2 %02x %02x %02x\n", esi[0], esi[1], esi[2]);
+ goto go_again;
+ }
+ } else
+ ret = 0;
+
+ return ret;
+ } else {
+ DRM_DEBUG_KMS("failed to get ESI - device may have failed %d\n", ret);
+ dig_connector->is_mst = false;
+ drm_dp_mst_topology_mgr_set_mst(&radeon_connector->mst_mgr,
+ dig_connector->is_mst);
+ /* send a hotplug event */
+ }
+ }
+ return -EINVAL;
+}
+
+#if defined(CONFIG_DEBUG_FS)
+
+static int radeon_debugfs_mst_info(struct seq_file *m, void *data)
+{
+ struct drm_info_node *node = (struct drm_info_node *)m->private;
+ struct drm_device *dev = node->minor->dev;
+ struct drm_connector *connector;
+ struct radeon_connector *radeon_connector;
+ struct radeon_connector_atom_dig *dig_connector;
+ int i;
+
+ drm_modeset_lock_all(dev);
+ list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
+ if (connector->connector_type != DRM_MODE_CONNECTOR_DisplayPort)
+ continue;
+
+ radeon_connector = to_radeon_connector(connector);
+ dig_connector = radeon_connector->con_priv;
+ if (radeon_connector->is_mst_connector)
+ continue;
+ if (!dig_connector->is_mst)
+ continue;
+ drm_dp_mst_dump_topology(m, &radeon_connector->mst_mgr);
+
+ for (i = 0; i < radeon_connector->enabled_attribs; i++)
+ seq_printf(m, "attrib %d: %d %d\n", i,
+ radeon_connector->cur_stream_attribs[i].fe,
+ radeon_connector->cur_stream_attribs[i].slots);
+ }
+ drm_modeset_unlock_all(dev);
+ return 0;
+}
+
+static struct drm_info_list radeon_debugfs_mst_list[] = {
+ {"radeon_mst_info", &radeon_debugfs_mst_info, 0, NULL},
+};
+#endif
+
+int radeon_mst_debugfs_init(struct radeon_device *rdev)
+{
+#if defined(CONFIG_DEBUG_FS)
+ return radeon_debugfs_add_files(rdev, radeon_debugfs_mst_list, 1);
+#endif
+ return 0;
+}
diff --git a/drivers/gpu/drm/radeon/radeon_drv.c b/drivers/gpu/drm/radeon/radeon_drv.c
index 5d684be..7d620d4 100644
--- a/drivers/gpu/drm/radeon/radeon_drv.c
+++ b/drivers/gpu/drm/radeon/radeon_drv.c
@@ -89,9 +89,10 @@
* 2.40.0 - Add RADEON_GEM_GTT_WC/UC, flush HDP cache before submitting
* CS to GPU on >= r600
* 2.41.0 - evergreen/cayman: Add SET_BASE/DRAW_INDIRECT command parsing support
+ * 2.42.0 - Add VCE/VUI (Video Usability Information) support
*/
#define KMS_DRIVER_MAJOR 2
-#define KMS_DRIVER_MINOR 41
+#define KMS_DRIVER_MINOR 42
#define KMS_DRIVER_PATCHLEVEL 0
int radeon_driver_load_kms(struct drm_device *dev, unsigned long flags);
int radeon_driver_unload_kms(struct drm_device *dev);
@@ -190,6 +191,8 @@
int radeon_use_pflipirq = 2;
int radeon_bapm = -1;
int radeon_backlight = -1;
+int radeon_auxch = -1;
+int radeon_mst = 0;
MODULE_PARM_DESC(no_wb, "Disable AGP writeback for scratch registers");
module_param_named(no_wb, radeon_no_wb, int, 0444);
@@ -239,7 +242,7 @@
MODULE_PARM_DESC(msi, "MSI support (1 = enable, 0 = disable, -1 = auto)");
module_param_named(msi, radeon_msi, int, 0444);
-MODULE_PARM_DESC(lockup_timeout, "GPU lockup timeout in ms (defaul 10000 = 10 seconds, 0 = disable)");
+MODULE_PARM_DESC(lockup_timeout, "GPU lockup timeout in ms (default 10000 = 10 seconds, 0 = disable)");
module_param_named(lockup_timeout, radeon_lockup_timeout, int, 0444);
MODULE_PARM_DESC(fastfb, "Direct FB access for IGP chips (0 = disable, 1 = enable)");
@@ -275,6 +278,12 @@
MODULE_PARM_DESC(backlight, "backlight support (1 = enable, 0 = disable, -1 = auto)");
module_param_named(backlight, radeon_backlight, int, 0444);
+MODULE_PARM_DESC(auxch, "Use native auxch experimental support (1 = enable, 0 = disable, -1 = auto)");
+module_param_named(auxch, radeon_auxch, int, 0444);
+
+MODULE_PARM_DESC(mst, "DisplayPort MST experimental support (1 = enable, 0 = disable)");
+module_param_named(mst, radeon_mst, int, 0444);
+
static struct pci_device_id pciidlist[] = {
radeon_PCI_IDS
};
diff --git a/drivers/gpu/drm/radeon/radeon_encoders.c b/drivers/gpu/drm/radeon/radeon_encoders.c
index 3a29703..ef99917 100644
--- a/drivers/gpu/drm/radeon/radeon_encoders.c
+++ b/drivers/gpu/drm/radeon/radeon_encoders.c
@@ -247,7 +247,16 @@
list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
radeon_connector = to_radeon_connector(connector);
- if (radeon_encoder->active_device & radeon_connector->devices)
+ if (radeon_encoder->is_mst_encoder) {
+ struct radeon_encoder_mst *mst_enc;
+
+ if (!radeon_connector->is_mst_connector)
+ continue;
+
+ mst_enc = radeon_encoder->enc_priv;
+ if (mst_enc->connector == radeon_connector->mst_port)
+ return connector;
+ } else if (radeon_encoder->active_device & radeon_connector->devices)
return connector;
}
return NULL;
@@ -393,6 +402,9 @@
case DRM_MODE_CONNECTOR_DVID:
case DRM_MODE_CONNECTOR_HDMIA:
case DRM_MODE_CONNECTOR_DisplayPort:
+ if (radeon_connector->is_mst_connector)
+ return false;
+
dig_connector = radeon_connector->con_priv;
if ((dig_connector->dp_sink_type == CONNECTOR_OBJECT_ID_DISPLAYPORT) ||
(dig_connector->dp_sink_type == CONNECTOR_OBJECT_ID_eDP))
diff --git a/drivers/gpu/drm/radeon/radeon_fb.c b/drivers/gpu/drm/radeon/radeon_fb.c
index ea276ff..aeb6767 100644
--- a/drivers/gpu/drm/radeon/radeon_fb.c
+++ b/drivers/gpu/drm/radeon/radeon_fb.c
@@ -257,6 +257,7 @@
}
info->par = rfbdev;
+ info->skip_vt_switch = true;
ret = radeon_framebuffer_init(rdev->ddev, &rfbdev->rfb, &mode_cmd, gobj);
if (ret) {
@@ -434,3 +435,13 @@
return true;
return false;
}
+
+void radeon_fb_add_connector(struct radeon_device *rdev, struct drm_connector *connector)
+{
+ drm_fb_helper_add_one_connector(&rdev->mode_info.rfbdev->helper, connector);
+}
+
+void radeon_fb_remove_connector(struct radeon_device *rdev, struct drm_connector *connector)
+{
+ drm_fb_helper_remove_one_connector(&rdev->mode_info.rfbdev->helper, connector);
+}
diff --git a/drivers/gpu/drm/radeon/radeon_irq_kms.c b/drivers/gpu/drm/radeon/radeon_irq_kms.c
index 00fc597..7162c93 100644
--- a/drivers/gpu/drm/radeon/radeon_irq_kms.c
+++ b/drivers/gpu/drm/radeon/radeon_irq_kms.c
@@ -87,6 +87,20 @@
drm_helper_hpd_irq_event(dev);
}
+static void radeon_dp_work_func(struct work_struct *work)
+{
+ struct radeon_device *rdev = container_of(work, struct radeon_device,
+ dp_work);
+ struct drm_device *dev = rdev->ddev;
+ struct drm_mode_config *mode_config = &dev->mode_config;
+ struct drm_connector *connector;
+
+ /* this should take a mutex */
+ if (mode_config->num_connector) {
+ list_for_each_entry(connector, &mode_config->connector_list, head)
+ radeon_connector_hotplug(connector);
+ }
+}
/**
* radeon_driver_irq_preinstall_kms - drm irq preinstall callback
*
@@ -276,6 +290,7 @@
}
INIT_WORK(&rdev->hotplug_work, radeon_hotplug_work_func);
+ INIT_WORK(&rdev->dp_work, radeon_dp_work_func);
INIT_WORK(&rdev->audio_work, r600_audio_update_hdmi);
rdev->irq.installed = true;
diff --git a/drivers/gpu/drm/radeon/radeon_kfd.c b/drivers/gpu/drm/radeon/radeon_kfd.c
index 122eb56..3db2300 100644
--- a/drivers/gpu/drm/radeon/radeon_kfd.c
+++ b/drivers/gpu/drm/radeon/radeon_kfd.c
@@ -103,15 +103,14 @@
bool radeon_kfd_init(void)
{
#if defined(CONFIG_HSA_AMD_MODULE)
- bool (*kgd2kfd_init_p)(unsigned, const struct kfd2kgd_calls*,
- const struct kgd2kfd_calls**);
+ bool (*kgd2kfd_init_p)(unsigned, const struct kgd2kfd_calls**);
kgd2kfd_init_p = symbol_request(kgd2kfd_init);
if (kgd2kfd_init_p == NULL)
return false;
- if (!kgd2kfd_init_p(KFD_INTERFACE_VERSION, &kfd2kgd, &kgd2kfd)) {
+ if (!kgd2kfd_init_p(KFD_INTERFACE_VERSION, &kgd2kfd)) {
symbol_put(kgd2kfd_init);
kgd2kfd = NULL;
@@ -120,7 +119,7 @@
return true;
#elif defined(CONFIG_HSA_AMD)
- if (!kgd2kfd_init(KFD_INTERFACE_VERSION, &kfd2kgd, &kgd2kfd)) {
+ if (!kgd2kfd_init(KFD_INTERFACE_VERSION, &kgd2kfd)) {
kgd2kfd = NULL;
return false;
@@ -143,7 +142,8 @@
void radeon_kfd_device_probe(struct radeon_device *rdev)
{
if (kgd2kfd)
- rdev->kfd = kgd2kfd->probe((struct kgd_dev *)rdev, rdev->pdev);
+ rdev->kfd = kgd2kfd->probe((struct kgd_dev *)rdev,
+ rdev->pdev, &kfd2kgd);
}
void radeon_kfd_device_init(struct radeon_device *rdev)
diff --git a/drivers/gpu/drm/radeon/radeon_kms.c b/drivers/gpu/drm/radeon/radeon_kms.c
index 686411e..7b2a733 100644
--- a/drivers/gpu/drm/radeon/radeon_kms.c
+++ b/drivers/gpu/drm/radeon/radeon_kms.c
@@ -547,6 +547,35 @@
else
*value = 1;
break;
+ case RADEON_INFO_CURRENT_GPU_TEMP:
+ /* get temperature in millidegrees C */
+ if (rdev->asic->pm.get_temperature)
+ *value = radeon_get_temperature(rdev);
+ else
+ *value = 0;
+ break;
+ case RADEON_INFO_CURRENT_GPU_SCLK:
+ /* get sclk in MHz */
+ if (rdev->pm.dpm_enabled)
+ *value = radeon_dpm_get_current_sclk(rdev) / 100;
+ else
+ *value = rdev->pm.current_sclk / 100;
+ break;
+ case RADEON_INFO_CURRENT_GPU_MCLK:
+ /* get mclk in MHz */
+ if (rdev->pm.dpm_enabled)
+ *value = radeon_dpm_get_current_mclk(rdev) / 100;
+ else
+ *value = rdev->pm.current_mclk / 100;
+ break;
+ case RADEON_INFO_READ_REG:
+ if (copy_from_user(value, value_ptr, sizeof(uint32_t))) {
+ DRM_ERROR("copy_from_user %s:%u\n", __func__, __LINE__);
+ return -EFAULT;
+ }
+ if (radeon_get_allowed_info_register(rdev, *value, value))
+ return -EINVAL;
+ break;
default:
DRM_DEBUG_KMS("Invalid request %d\n", info->request);
return -EINVAL;
diff --git a/drivers/gpu/drm/radeon/radeon_legacy_encoders.c b/drivers/gpu/drm/radeon/radeon_legacy_encoders.c
index c89971d..4571530 100644
--- a/drivers/gpu/drm/radeon/radeon_legacy_encoders.c
+++ b/drivers/gpu/drm/radeon/radeon_legacy_encoders.c
@@ -36,7 +36,7 @@
static void radeon_legacy_encoder_disable(struct drm_encoder *encoder)
{
struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
- struct drm_encoder_helper_funcs *encoder_funcs;
+ const struct drm_encoder_helper_funcs *encoder_funcs;
encoder_funcs = encoder->helper_private;
encoder_funcs->dpms(encoder, DRM_MODE_DPMS_OFF);
diff --git a/drivers/gpu/drm/radeon/radeon_mn.c b/drivers/gpu/drm/radeon/radeon_mn.c
index 572b4db..0170137 100644
--- a/drivers/gpu/drm/radeon/radeon_mn.c
+++ b/drivers/gpu/drm/radeon/radeon_mn.c
@@ -53,6 +53,11 @@
struct rb_root objects;
};
+struct radeon_mn_node {
+ struct interval_tree_node it;
+ struct list_head bos;
+};
+
/**
* radeon_mn_destroy - destroy the rmn
*
@@ -64,14 +69,21 @@
{
struct radeon_mn *rmn = container_of(work, struct radeon_mn, work);
struct radeon_device *rdev = rmn->rdev;
- struct radeon_bo *bo, *next;
+ struct radeon_mn_node *node, *next_node;
+ struct radeon_bo *bo, *next_bo;
mutex_lock(&rdev->mn_lock);
mutex_lock(&rmn->lock);
hash_del(&rmn->node);
- rbtree_postorder_for_each_entry_safe(bo, next, &rmn->objects, mn_it.rb) {
- interval_tree_remove(&bo->mn_it, &rmn->objects);
- bo->mn = NULL;
+ rbtree_postorder_for_each_entry_safe(node, next_node, &rmn->objects,
+ it.rb) {
+
+ interval_tree_remove(&node->it, &rmn->objects);
+ list_for_each_entry_safe(bo, next_bo, &node->bos, mn_list) {
+ bo->mn = NULL;
+ list_del_init(&bo->mn_list);
+ }
+ kfree(node);
}
mutex_unlock(&rmn->lock);
mutex_unlock(&rdev->mn_lock);
@@ -121,29 +133,33 @@
it = interval_tree_iter_first(&rmn->objects, start, end);
while (it) {
+ struct radeon_mn_node *node;
struct radeon_bo *bo;
int r;
- bo = container_of(it, struct radeon_bo, mn_it);
+ node = container_of(it, struct radeon_mn_node, it);
it = interval_tree_iter_next(it, start, end);
- r = radeon_bo_reserve(bo, true);
- if (r) {
- DRM_ERROR("(%d) failed to reserve user bo\n", r);
- continue;
+ list_for_each_entry(bo, &node->bos, mn_list) {
+
+ r = radeon_bo_reserve(bo, true);
+ if (r) {
+ DRM_ERROR("(%d) failed to reserve user bo\n", r);
+ continue;
+ }
+
+ r = reservation_object_wait_timeout_rcu(bo->tbo.resv,
+ true, false, MAX_SCHEDULE_TIMEOUT);
+ if (r)
+ DRM_ERROR("(%d) failed to wait for user bo\n", r);
+
+ radeon_ttm_placement_from_domain(bo, RADEON_GEM_DOMAIN_CPU);
+ r = ttm_bo_validate(&bo->tbo, &bo->placement, false, false);
+ if (r)
+ DRM_ERROR("(%d) failed to validate user bo\n", r);
+
+ radeon_bo_unreserve(bo);
}
-
- r = reservation_object_wait_timeout_rcu(bo->tbo.resv, true,
- false, MAX_SCHEDULE_TIMEOUT);
- if (r)
- DRM_ERROR("(%d) failed to wait for user bo\n", r);
-
- radeon_ttm_placement_from_domain(bo, RADEON_GEM_DOMAIN_CPU);
- r = ttm_bo_validate(&bo->tbo, &bo->placement, false, false);
- if (r)
- DRM_ERROR("(%d) failed to validate user bo\n", r);
-
- radeon_bo_unreserve(bo);
}
mutex_unlock(&rmn->lock);
@@ -220,24 +236,44 @@
unsigned long end = addr + radeon_bo_size(bo) - 1;
struct radeon_device *rdev = bo->rdev;
struct radeon_mn *rmn;
+ struct radeon_mn_node *node = NULL;
+ struct list_head bos;
struct interval_tree_node *it;
rmn = radeon_mn_get(rdev);
if (IS_ERR(rmn))
return PTR_ERR(rmn);
+ INIT_LIST_HEAD(&bos);
+
mutex_lock(&rmn->lock);
- it = interval_tree_iter_first(&rmn->objects, addr, end);
- if (it) {
- mutex_unlock(&rmn->lock);
- return -EEXIST;
+ while ((it = interval_tree_iter_first(&rmn->objects, addr, end))) {
+ kfree(node);
+ node = container_of(it, struct radeon_mn_node, it);
+ interval_tree_remove(&node->it, &rmn->objects);
+ addr = min(it->start, addr);
+ end = max(it->last, end);
+ list_splice(&node->bos, &bos);
+ }
+
+ if (!node) {
+ node = kmalloc(sizeof(struct radeon_mn_node), GFP_KERNEL);
+ if (!node) {
+ mutex_unlock(&rmn->lock);
+ return -ENOMEM;
+ }
}
bo->mn = rmn;
- bo->mn_it.start = addr;
- bo->mn_it.last = end;
- interval_tree_insert(&bo->mn_it, &rmn->objects);
+
+ node->it.start = addr;
+ node->it.last = end;
+ INIT_LIST_HEAD(&node->bos);
+ list_splice(&bos, &node->bos);
+ list_add(&bo->mn_list, &node->bos);
+
+ interval_tree_insert(&node->it, &rmn->objects);
mutex_unlock(&rmn->lock);
@@ -255,6 +291,7 @@
{
struct radeon_device *rdev = bo->rdev;
struct radeon_mn *rmn;
+ struct list_head *head;
mutex_lock(&rdev->mn_lock);
rmn = bo->mn;
@@ -264,8 +301,19 @@
}
mutex_lock(&rmn->lock);
- interval_tree_remove(&bo->mn_it, &rmn->objects);
+ /* save the next list entry for later */
+ head = bo->mn_list.next;
+
bo->mn = NULL;
+ list_del(&bo->mn_list);
+
+ if (list_empty(head)) {
+ struct radeon_mn_node *node;
+ node = container_of(head, struct radeon_mn_node, bos);
+ interval_tree_remove(&node->it, &rmn->objects);
+ kfree(node);
+ }
+
mutex_unlock(&rmn->lock);
mutex_unlock(&rdev->mn_lock);
}
diff --git a/drivers/gpu/drm/radeon/radeon_mode.h b/drivers/gpu/drm/radeon/radeon_mode.h
index 920a8be..fa91a17 100644
--- a/drivers/gpu/drm/radeon/radeon_mode.h
+++ b/drivers/gpu/drm/radeon/radeon_mode.h
@@ -33,6 +33,7 @@
#include <drm/drm_crtc.h>
#include <drm/drm_edid.h>
#include <drm/drm_dp_helper.h>
+#include <drm/drm_dp_mst_helper.h>
#include <drm/drm_fixed.h>
#include <drm/drm_crtc_helper.h>
#include <linux/i2c.h>
@@ -85,6 +86,13 @@
RADEON_HPD_NONE = 0xff,
};
+enum radeon_output_csc {
+ RADEON_OUTPUT_CSC_BYPASS = 0,
+ RADEON_OUTPUT_CSC_TVRGB = 1,
+ RADEON_OUTPUT_CSC_YCBCR601 = 2,
+ RADEON_OUTPUT_CSC_YCBCR709 = 3,
+};
+
#define RADEON_MAX_I2C_BUS 16
/* radeon gpio-based i2c
@@ -255,6 +263,8 @@
struct drm_property *audio_property;
/* FMT dithering */
struct drm_property *dither_property;
+ /* Output CSC */
+ struct drm_property *output_csc_property;
/* hardcoded DFP edid from BIOS */
struct edid *bios_hardcoded_edid;
int bios_hardcoded_edid_size;
@@ -265,6 +275,9 @@
u16 firmware_flags;
/* pointer to backlight encoder */
struct radeon_encoder *bl_encoder;
+
+ /* bitmask for active encoder frontends */
+ uint32_t active_encoders;
};
#define RADEON_MAX_BL_LEVEL 0xFF
@@ -357,6 +370,7 @@
u32 wm_low;
u32 wm_high;
struct drm_display_mode hw_mode;
+ enum radeon_output_csc output_csc;
};
struct radeon_encoder_primary_dac {
@@ -426,12 +440,24 @@
uint8_t backlight_level;
int panel_mode;
struct radeon_afmt *afmt;
+ int active_mst_links;
};
struct radeon_encoder_atom_dac {
enum radeon_tv_std tv_std;
};
+struct radeon_encoder_mst {
+ int crtc;
+ struct radeon_encoder *primary;
+ struct radeon_connector *connector;
+ struct drm_dp_mst_port *port;
+ int pbn;
+ int fe;
+ bool fe_from_be;
+ bool enc_active;
+};
+
struct radeon_encoder {
struct drm_encoder base;
uint32_t encoder_enum;
@@ -450,6 +476,11 @@
bool is_ext_encoder;
u16 caps;
struct radeon_audio_funcs *audio;
+ enum radeon_output_csc output_csc;
+ bool can_mst;
+ uint32_t offset;
+ bool is_mst_encoder;
+ /* front end for this mst encoder */
};
struct radeon_connector_atom_dig {
@@ -460,6 +491,7 @@
int dp_clock;
int dp_lane_count;
bool edp_on;
+ bool is_mst;
};
struct radeon_gpio_rec {
@@ -503,6 +535,11 @@
RADEON_FMT_DITHER_ENABLE = 1,
};
+struct stream_attribs {
+ uint16_t fe;
+ uint16_t slots;
+};
+
struct radeon_connector {
struct drm_connector base;
uint32_t connector_id;
@@ -524,6 +561,14 @@
enum radeon_connector_audio audio;
enum radeon_connector_dither dither;
int pixelclock_for_modeset;
+ bool is_mst_connector;
+ struct radeon_connector *mst_port;
+ struct drm_dp_mst_port *port;
+ struct drm_dp_mst_topology_mgr mst_mgr;
+
+ struct radeon_encoder *mst_encoder;
+ struct stream_attribs cur_stream_attribs[6];
+ int enabled_attribs;
};
struct radeon_framebuffer {
@@ -708,15 +753,26 @@
extern bool radeon_dp_getdpcd(struct radeon_connector *radeon_connector);
extern int radeon_dp_get_panel_mode(struct drm_encoder *encoder,
struct drm_connector *connector);
+int radeon_dp_get_max_link_rate(struct drm_connector *connector,
+ u8 *dpcd);
extern void radeon_dp_set_rx_power_state(struct drm_connector *connector,
u8 power_state);
extern void radeon_dp_aux_init(struct radeon_connector *radeon_connector);
+extern ssize_t
+radeon_dp_aux_transfer_native(struct drm_dp_aux *aux, struct drm_dp_aux_msg *msg);
+
extern void atombios_dig_encoder_setup(struct drm_encoder *encoder, int action, int panel_mode);
+extern void atombios_dig_encoder_setup2(struct drm_encoder *encoder, int action, int panel_mode, int enc_override);
extern void radeon_atom_encoder_init(struct radeon_device *rdev);
extern void radeon_atom_disp_eng_pll_init(struct radeon_device *rdev);
extern void atombios_dig_transmitter_setup(struct drm_encoder *encoder,
int action, uint8_t lane_num,
uint8_t lane_set);
+extern void atombios_dig_transmitter_setup2(struct drm_encoder *encoder,
+ int action, uint8_t lane_num,
+ uint8_t lane_set, int fe);
+extern void atombios_set_mst_encoder_crtc_source(struct drm_encoder *encoder,
+ int fe);
extern void radeon_atom_ext_encoder_setup_ddc(struct drm_encoder *encoder);
extern struct drm_encoder *radeon_get_external_encoder(struct drm_encoder *encoder);
void radeon_atom_copy_swap(u8 *dst, u8 *src, u8 num_bytes, bool to_le);
@@ -929,7 +985,23 @@
void radeon_fb_output_poll_changed(struct radeon_device *rdev);
void radeon_crtc_handle_vblank(struct radeon_device *rdev, int crtc_id);
+
+void radeon_fb_add_connector(struct radeon_device *rdev, struct drm_connector *connector);
+void radeon_fb_remove_connector(struct radeon_device *rdev, struct drm_connector *connector);
+
void radeon_crtc_handle_flip(struct radeon_device *rdev, int crtc_id);
int radeon_align_pitch(struct radeon_device *rdev, int width, int bpp, bool tiled);
+
+/* mst */
+int radeon_dp_mst_init(struct radeon_connector *radeon_connector);
+int radeon_dp_mst_probe(struct radeon_connector *radeon_connector);
+int radeon_dp_mst_check_status(struct radeon_connector *radeon_connector);
+int radeon_mst_debugfs_init(struct radeon_device *rdev);
+void radeon_dp_mst_prepare_pll(struct drm_crtc *crtc, struct drm_display_mode *mode);
+
+void radeon_setup_mst_connector(struct drm_device *dev);
+
+int radeon_atom_pick_dig_encoder(struct drm_encoder *encoder, int fe_idx);
+void radeon_atom_release_dig_encoder(struct radeon_device *rdev, int enc_idx);
#endif
diff --git a/drivers/gpu/drm/radeon/radeon_vce.c b/drivers/gpu/drm/radeon/radeon_vce.c
index 976fe43..24f849f 100644
--- a/drivers/gpu/drm/radeon/radeon_vce.c
+++ b/drivers/gpu/drm/radeon/radeon_vce.c
@@ -571,6 +571,7 @@
case 0x04000005: // rate control
case 0x04000007: // motion estimation
case 0x04000008: // rdo
+ case 0x04000009: // vui
break;
case 0x03000001: // encode
diff --git a/drivers/gpu/drm/radeon/rs780_dpm.c b/drivers/gpu/drm/radeon/rs780_dpm.c
index 9031f4b..cb0afe7 100644
--- a/drivers/gpu/drm/radeon/rs780_dpm.c
+++ b/drivers/gpu/drm/radeon/rs780_dpm.c
@@ -1001,6 +1001,28 @@
ps->sclk_high, ps->max_voltage);
}
+/* get the current sclk in 10 kHz units */
+u32 rs780_dpm_get_current_sclk(struct radeon_device *rdev)
+{
+ u32 current_fb_div = RREG32(FVTHROT_STATUS_REG0) & CURRENT_FEEDBACK_DIV_MASK;
+ u32 func_cntl = RREG32(CG_SPLL_FUNC_CNTL);
+ u32 ref_div = ((func_cntl & SPLL_REF_DIV_MASK) >> SPLL_REF_DIV_SHIFT) + 1;
+ u32 post_div = ((func_cntl & SPLL_SW_HILEN_MASK) >> SPLL_SW_HILEN_SHIFT) + 1 +
+ ((func_cntl & SPLL_SW_LOLEN_MASK) >> SPLL_SW_LOLEN_SHIFT) + 1;
+ u32 sclk = (rdev->clock.spll.reference_freq * current_fb_div) /
+ (post_div * ref_div);
+
+ return sclk;
+}
+
+/* get the current mclk in 10 kHz units */
+u32 rs780_dpm_get_current_mclk(struct radeon_device *rdev)
+{
+ struct igp_power_info *pi = rs780_get_pi(rdev);
+
+ return pi->bootup_uma_clk;
+}
+
int rs780_dpm_force_performance_level(struct radeon_device *rdev,
enum radeon_dpm_forced_level level)
{
diff --git a/drivers/gpu/drm/radeon/rv6xx_dpm.c b/drivers/gpu/drm/radeon/rv6xx_dpm.c
index 6a5c233..97e5a6f 100644
--- a/drivers/gpu/drm/radeon/rv6xx_dpm.c
+++ b/drivers/gpu/drm/radeon/rv6xx_dpm.c
@@ -2050,6 +2050,52 @@
}
}
+/* get the current sclk in 10 kHz units */
+u32 rv6xx_dpm_get_current_sclk(struct radeon_device *rdev)
+{
+ struct radeon_ps *rps = rdev->pm.dpm.current_ps;
+ struct rv6xx_ps *ps = rv6xx_get_ps(rps);
+ struct rv6xx_pl *pl;
+ u32 current_index =
+ (RREG32(TARGET_AND_CURRENT_PROFILE_INDEX) & CURRENT_PROFILE_INDEX_MASK) >>
+ CURRENT_PROFILE_INDEX_SHIFT;
+
+ if (current_index > 2) {
+ return 0;
+ } else {
+ if (current_index == 0)
+ pl = &ps->low;
+ else if (current_index == 1)
+ pl = &ps->medium;
+ else /* current_index == 2 */
+ pl = &ps->high;
+ return pl->sclk;
+ }
+}
+
+/* get the current mclk in 10 kHz units */
+u32 rv6xx_dpm_get_current_mclk(struct radeon_device *rdev)
+{
+ struct radeon_ps *rps = rdev->pm.dpm.current_ps;
+ struct rv6xx_ps *ps = rv6xx_get_ps(rps);
+ struct rv6xx_pl *pl;
+ u32 current_index =
+ (RREG32(TARGET_AND_CURRENT_PROFILE_INDEX) & CURRENT_PROFILE_INDEX_MASK) >>
+ CURRENT_PROFILE_INDEX_SHIFT;
+
+ if (current_index > 2) {
+ return 0;
+ } else {
+ if (current_index == 0)
+ pl = &ps->low;
+ else if (current_index == 1)
+ pl = &ps->medium;
+ else /* current_index == 2 */
+ pl = &ps->high;
+ return pl->mclk;
+ }
+}
+
void rv6xx_dpm_fini(struct radeon_device *rdev)
{
int i;
diff --git a/drivers/gpu/drm/radeon/rv770_dpm.c b/drivers/gpu/drm/radeon/rv770_dpm.c
index 3067326..b9c7707 100644
--- a/drivers/gpu/drm/radeon/rv770_dpm.c
+++ b/drivers/gpu/drm/radeon/rv770_dpm.c
@@ -2492,6 +2492,50 @@
}
}
+u32 rv770_dpm_get_current_sclk(struct radeon_device *rdev)
+{
+ struct radeon_ps *rps = rdev->pm.dpm.current_ps;
+ struct rv7xx_ps *ps = rv770_get_ps(rps);
+ struct rv7xx_pl *pl;
+ u32 current_index =
+ (RREG32(TARGET_AND_CURRENT_PROFILE_INDEX) & CURRENT_PROFILE_INDEX_MASK) >>
+ CURRENT_PROFILE_INDEX_SHIFT;
+
+ if (current_index > 2) {
+ return 0;
+ } else {
+ if (current_index == 0)
+ pl = &ps->low;
+ else if (current_index == 1)
+ pl = &ps->medium;
+ else /* current_index == 2 */
+ pl = &ps->high;
+ return pl->sclk;
+ }
+}
+
+u32 rv770_dpm_get_current_mclk(struct radeon_device *rdev)
+{
+ struct radeon_ps *rps = rdev->pm.dpm.current_ps;
+ struct rv7xx_ps *ps = rv770_get_ps(rps);
+ struct rv7xx_pl *pl;
+ u32 current_index =
+ (RREG32(TARGET_AND_CURRENT_PROFILE_INDEX) & CURRENT_PROFILE_INDEX_MASK) >>
+ CURRENT_PROFILE_INDEX_SHIFT;
+
+ if (current_index > 2) {
+ return 0;
+ } else {
+ if (current_index == 0)
+ pl = &ps->low;
+ else if (current_index == 1)
+ pl = &ps->medium;
+ else /* current_index == 2 */
+ pl = &ps->high;
+ return pl->mclk;
+ }
+}
+
void rv770_dpm_fini(struct radeon_device *rdev)
{
int i;
diff --git a/drivers/gpu/drm/radeon/si.c b/drivers/gpu/drm/radeon/si.c
index a7fb273..b1d74bc 100644
--- a/drivers/gpu/drm/radeon/si.c
+++ b/drivers/gpu/drm/radeon/si.c
@@ -1264,6 +1264,36 @@
}
}
+/**
+ * si_get_allowed_info_register - fetch the register for the info ioctl
+ *
+ * @rdev: radeon_device pointer
+ * @reg: register offset in bytes
+ * @val: register value
+ *
+ * Returns 0 for success or -EINVAL for an invalid register
+ *
+ */
+int si_get_allowed_info_register(struct radeon_device *rdev,
+ u32 reg, u32 *val)
+{
+ switch (reg) {
+ case GRBM_STATUS:
+ case GRBM_STATUS2:
+ case GRBM_STATUS_SE0:
+ case GRBM_STATUS_SE1:
+ case SRBM_STATUS:
+ case SRBM_STATUS2:
+ case (DMA_STATUS_REG + DMA0_REGISTER_OFFSET):
+ case (DMA_STATUS_REG + DMA1_REGISTER_OFFSET):
+ case UVD_STATUS:
+ *val = RREG32(reg);
+ return 0;
+ default:
+ return -EINVAL;
+ }
+}
+
#define PCIE_BUS_CLK 10000
#define TCLK (PCIE_BUS_CLK / 10)
@@ -6055,12 +6085,12 @@
(CNTX_BUSY_INT_ENABLE | CNTX_EMPTY_INT_ENABLE);
if (!ASIC_IS_NODCE(rdev)) {
- hpd1 = RREG32(DC_HPD1_INT_CONTROL) & ~DC_HPDx_INT_EN;
- hpd2 = RREG32(DC_HPD2_INT_CONTROL) & ~DC_HPDx_INT_EN;
- hpd3 = RREG32(DC_HPD3_INT_CONTROL) & ~DC_HPDx_INT_EN;
- hpd4 = RREG32(DC_HPD4_INT_CONTROL) & ~DC_HPDx_INT_EN;
- hpd5 = RREG32(DC_HPD5_INT_CONTROL) & ~DC_HPDx_INT_EN;
- hpd6 = RREG32(DC_HPD6_INT_CONTROL) & ~DC_HPDx_INT_EN;
+ hpd1 = RREG32(DC_HPD1_INT_CONTROL) & ~(DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN);
+ hpd2 = RREG32(DC_HPD2_INT_CONTROL) & ~(DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN);
+ hpd3 = RREG32(DC_HPD3_INT_CONTROL) & ~(DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN);
+ hpd4 = RREG32(DC_HPD4_INT_CONTROL) & ~(DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN);
+ hpd5 = RREG32(DC_HPD5_INT_CONTROL) & ~(DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN);
+ hpd6 = RREG32(DC_HPD6_INT_CONTROL) & ~(DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN);
}
dma_cntl = RREG32(DMA_CNTL + DMA0_REGISTER_OFFSET) & ~TRAP_ENABLE;
@@ -6123,27 +6153,27 @@
}
if (rdev->irq.hpd[0]) {
DRM_DEBUG("si_irq_set: hpd 1\n");
- hpd1 |= DC_HPDx_INT_EN;
+ hpd1 |= DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN;
}
if (rdev->irq.hpd[1]) {
DRM_DEBUG("si_irq_set: hpd 2\n");
- hpd2 |= DC_HPDx_INT_EN;
+ hpd2 |= DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN;
}
if (rdev->irq.hpd[2]) {
DRM_DEBUG("si_irq_set: hpd 3\n");
- hpd3 |= DC_HPDx_INT_EN;
+ hpd3 |= DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN;
}
if (rdev->irq.hpd[3]) {
DRM_DEBUG("si_irq_set: hpd 4\n");
- hpd4 |= DC_HPDx_INT_EN;
+ hpd4 |= DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN;
}
if (rdev->irq.hpd[4]) {
DRM_DEBUG("si_irq_set: hpd 5\n");
- hpd5 |= DC_HPDx_INT_EN;
+ hpd5 |= DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN;
}
if (rdev->irq.hpd[5]) {
DRM_DEBUG("si_irq_set: hpd 6\n");
- hpd6 |= DC_HPDx_INT_EN;
+ hpd6 |= DC_HPDx_INT_EN | DC_HPDx_RX_INT_EN;
}
WREG32(CP_INT_CNTL_RING0, cp_int_cntl);
@@ -6306,6 +6336,37 @@
tmp |= DC_HPDx_INT_ACK;
WREG32(DC_HPD6_INT_CONTROL, tmp);
}
+
+ if (rdev->irq.stat_regs.evergreen.disp_int & DC_HPD1_RX_INTERRUPT) {
+ tmp = RREG32(DC_HPD1_INT_CONTROL);
+ tmp |= DC_HPDx_RX_INT_ACK;
+ WREG32(DC_HPD1_INT_CONTROL, tmp);
+ }
+ if (rdev->irq.stat_regs.evergreen.disp_int_cont & DC_HPD2_RX_INTERRUPT) {
+ tmp = RREG32(DC_HPD2_INT_CONTROL);
+ tmp |= DC_HPDx_RX_INT_ACK;
+ WREG32(DC_HPD2_INT_CONTROL, tmp);
+ }
+ if (rdev->irq.stat_regs.evergreen.disp_int_cont2 & DC_HPD3_RX_INTERRUPT) {
+ tmp = RREG32(DC_HPD3_INT_CONTROL);
+ tmp |= DC_HPDx_RX_INT_ACK;
+ WREG32(DC_HPD3_INT_CONTROL, tmp);
+ }
+ if (rdev->irq.stat_regs.evergreen.disp_int_cont3 & DC_HPD4_RX_INTERRUPT) {
+ tmp = RREG32(DC_HPD4_INT_CONTROL);
+ tmp |= DC_HPDx_RX_INT_ACK;
+ WREG32(DC_HPD4_INT_CONTROL, tmp);
+ }
+ if (rdev->irq.stat_regs.evergreen.disp_int_cont4 & DC_HPD5_RX_INTERRUPT) {
+ tmp = RREG32(DC_HPD5_INT_CONTROL);
+ tmp |= DC_HPDx_RX_INT_ACK;
+ WREG32(DC_HPD5_INT_CONTROL, tmp);
+ }
+ if (rdev->irq.stat_regs.evergreen.disp_int_cont5 & DC_HPD6_RX_INTERRUPT) {
+ tmp = RREG32(DC_HPD6_INT_CONTROL);
+ tmp |= DC_HPDx_RX_INT_ACK;
+ WREG32(DC_HPD6_INT_CONTROL, tmp);
+ }
}
static void si_irq_disable(struct radeon_device *rdev)
@@ -6371,6 +6432,7 @@
u32 src_id, src_data, ring_id;
u32 ring_index;
bool queue_hotplug = false;
+ bool queue_dp = false;
bool queue_thermal = false;
u32 status, addr;
@@ -6611,6 +6673,48 @@
DRM_DEBUG("IH: HPD6\n");
}
break;
+ case 6:
+ if (rdev->irq.stat_regs.evergreen.disp_int & DC_HPD1_RX_INTERRUPT) {
+ rdev->irq.stat_regs.evergreen.disp_int &= ~DC_HPD1_RX_INTERRUPT;
+ queue_dp = true;
+ DRM_DEBUG("IH: HPD_RX 1\n");
+ }
+ break;
+ case 7:
+ if (rdev->irq.stat_regs.evergreen.disp_int_cont & DC_HPD2_RX_INTERRUPT) {
+ rdev->irq.stat_regs.evergreen.disp_int_cont &= ~DC_HPD2_RX_INTERRUPT;
+ queue_dp = true;
+ DRM_DEBUG("IH: HPD_RX 2\n");
+ }
+ break;
+ case 8:
+ if (rdev->irq.stat_regs.evergreen.disp_int_cont2 & DC_HPD3_RX_INTERRUPT) {
+ rdev->irq.stat_regs.evergreen.disp_int_cont2 &= ~DC_HPD3_RX_INTERRUPT;
+ queue_dp = true;
+ DRM_DEBUG("IH: HPD_RX 3\n");
+ }
+ break;
+ case 9:
+ if (rdev->irq.stat_regs.evergreen.disp_int_cont3 & DC_HPD4_RX_INTERRUPT) {
+ rdev->irq.stat_regs.evergreen.disp_int_cont3 &= ~DC_HPD4_RX_INTERRUPT;
+ queue_dp = true;
+ DRM_DEBUG("IH: HPD_RX 4\n");
+ }
+ break;
+ case 10:
+ if (rdev->irq.stat_regs.evergreen.disp_int_cont4 & DC_HPD5_RX_INTERRUPT) {
+ rdev->irq.stat_regs.evergreen.disp_int_cont4 &= ~DC_HPD5_RX_INTERRUPT;
+ queue_dp = true;
+ DRM_DEBUG("IH: HPD_RX 5\n");
+ }
+ break;
+ case 11:
+ if (rdev->irq.stat_regs.evergreen.disp_int_cont5 & DC_HPD6_RX_INTERRUPT) {
+ rdev->irq.stat_regs.evergreen.disp_int_cont5 &= ~DC_HPD6_RX_INTERRUPT;
+ queue_dp = true;
+ DRM_DEBUG("IH: HPD_RX 6\n");
+ }
+ break;
default:
DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
break;
@@ -6693,6 +6797,8 @@
rptr &= rdev->ih.ptr_mask;
WREG32(IH_RB_RPTR, rptr);
}
+ if (queue_dp)
+ schedule_work(&rdev->dp_work);
if (queue_hotplug)
schedule_work(&rdev->hotplug_work);
if (queue_thermal && rdev->pm.dpm_enabled)
diff --git a/drivers/gpu/drm/radeon/si_dpm.c b/drivers/gpu/drm/radeon/si_dpm.c
index 7be1165..b35bccf 100644
--- a/drivers/gpu/drm/radeon/si_dpm.c
+++ b/drivers/gpu/drm/radeon/si_dpm.c
@@ -6993,3 +6993,39 @@
current_index, pl->sclk, pl->mclk, pl->vddc, pl->vddci, pl->pcie_gen + 1);
}
}
+
+u32 si_dpm_get_current_sclk(struct radeon_device *rdev)
+{
+ struct evergreen_power_info *eg_pi = evergreen_get_pi(rdev);
+ struct radeon_ps *rps = &eg_pi->current_rps;
+ struct ni_ps *ps = ni_get_ps(rps);
+ struct rv7xx_pl *pl;
+ u32 current_index =
+ (RREG32(TARGET_AND_CURRENT_PROFILE_INDEX) & CURRENT_STATE_INDEX_MASK) >>
+ CURRENT_STATE_INDEX_SHIFT;
+
+ if (current_index >= ps->performance_level_count) {
+ return 0;
+ } else {
+ pl = &ps->performance_levels[current_index];
+ return pl->sclk;
+ }
+}
+
+u32 si_dpm_get_current_mclk(struct radeon_device *rdev)
+{
+ struct evergreen_power_info *eg_pi = evergreen_get_pi(rdev);
+ struct radeon_ps *rps = &eg_pi->current_rps;
+ struct ni_ps *ps = ni_get_ps(rps);
+ struct rv7xx_pl *pl;
+ u32 current_index =
+ (RREG32(TARGET_AND_CURRENT_PROFILE_INDEX) & CURRENT_STATE_INDEX_MASK) >>
+ CURRENT_STATE_INDEX_SHIFT;
+
+ if (current_index >= ps->performance_level_count) {
+ return 0;
+ } else {
+ pl = &ps->performance_levels[current_index];
+ return pl->mclk;
+ }
+}
diff --git a/drivers/gpu/drm/radeon/sid.h b/drivers/gpu/drm/radeon/sid.h
index 99a9835..3afac30 100644
--- a/drivers/gpu/drm/radeon/sid.h
+++ b/drivers/gpu/drm/radeon/sid.h
@@ -1556,6 +1556,7 @@
#define UVD_UDEC_DBW_ADDR_CONFIG 0xEF54
#define UVD_RBC_RB_RPTR 0xF690
#define UVD_RBC_RB_WPTR 0xF694
+#define UVD_STATUS 0xf6bc
#define UVD_CGC_CTRL 0xF4B0
# define DCM (1 << 0)
diff --git a/drivers/gpu/drm/radeon/sumo_dpm.c b/drivers/gpu/drm/radeon/sumo_dpm.c
index 25fd4ce..cd08628 100644
--- a/drivers/gpu/drm/radeon/sumo_dpm.c
+++ b/drivers/gpu/drm/radeon/sumo_dpm.c
@@ -1837,6 +1837,34 @@
}
}
+u32 sumo_dpm_get_current_sclk(struct radeon_device *rdev)
+{
+ struct sumo_power_info *pi = sumo_get_pi(rdev);
+ struct radeon_ps *rps = &pi->current_rps;
+ struct sumo_ps *ps = sumo_get_ps(rps);
+ struct sumo_pl *pl;
+ u32 current_index =
+ (RREG32(TARGET_AND_CURRENT_PROFILE_INDEX) & CURR_INDEX_MASK) >>
+ CURR_INDEX_SHIFT;
+
+ if (current_index == BOOST_DPM_LEVEL) {
+ pl = &pi->boost_pl;
+ return pl->sclk;
+ } else if (current_index >= ps->num_levels) {
+ return 0;
+ } else {
+ pl = &ps->levels[current_index];
+ return pl->sclk;
+ }
+}
+
+u32 sumo_dpm_get_current_mclk(struct radeon_device *rdev)
+{
+ struct sumo_power_info *pi = sumo_get_pi(rdev);
+
+ return pi->sys_info.bootup_uma_clk;
+}
+
void sumo_dpm_fini(struct radeon_device *rdev)
{
int i;
diff --git a/drivers/gpu/drm/radeon/trinity_dpm.c b/drivers/gpu/drm/radeon/trinity_dpm.c
index 38dacb7..a5b02c5 100644
--- a/drivers/gpu/drm/radeon/trinity_dpm.c
+++ b/drivers/gpu/drm/radeon/trinity_dpm.c
@@ -1964,6 +1964,31 @@
}
}
+u32 trinity_dpm_get_current_sclk(struct radeon_device *rdev)
+{
+ struct trinity_power_info *pi = trinity_get_pi(rdev);
+ struct radeon_ps *rps = &pi->current_rps;
+ struct trinity_ps *ps = trinity_get_ps(rps);
+ struct trinity_pl *pl;
+ u32 current_index =
+ (RREG32(TARGET_AND_CURRENT_PROFILE_INDEX) & CURRENT_STATE_MASK) >>
+ CURRENT_STATE_SHIFT;
+
+ if (current_index >= ps->num_levels) {
+ return 0;
+ } else {
+ pl = &ps->levels[current_index];
+ return pl->sclk;
+ }
+}
+
+u32 trinity_dpm_get_current_mclk(struct radeon_device *rdev)
+{
+ struct trinity_power_info *pi = trinity_get_pi(rdev);
+
+ return pi->sys_info.bootup_uma_clk;
+}
+
void trinity_dpm_fini(struct radeon_device *rdev)
{
int i;
diff --git a/drivers/gpu/drm/rcar-du/rcar_du_crtc.c b/drivers/gpu/drm/rcar-du/rcar_du_crtc.c
index 25c7a99..7d0b8ef 100644
--- a/drivers/gpu/drm/rcar-du/rcar_du_crtc.c
+++ b/drivers/gpu/drm/rcar-du/rcar_du_crtc.c
@@ -15,6 +15,8 @@
#include <linux/mutex.h>
#include <drm/drmP.h>
+#include <drm/drm_atomic.h>
+#include <drm/drm_atomic_helper.h>
#include <drm/drm_crtc.h>
#include <drm/drm_crtc_helper.h>
#include <drm/drm_fb_cma_helper.h>
@@ -99,9 +101,13 @@
clk_disable_unprepare(rcrtc->clock);
}
+/* -----------------------------------------------------------------------------
+ * Hardware Setup
+ */
+
static void rcar_du_crtc_set_display_timing(struct rcar_du_crtc *rcrtc)
{
- const struct drm_display_mode *mode = &rcrtc->crtc.mode;
+ const struct drm_display_mode *mode = &rcrtc->crtc.state->adjusted_mode;
unsigned long mode_clock = mode->clock * 1000;
unsigned long clk;
u32 value;
@@ -187,9 +193,19 @@
rcdu->dpad0_source = rcrtc->index;
}
-void rcar_du_crtc_update_planes(struct drm_crtc *crtc)
+static unsigned int plane_zpos(struct rcar_du_plane *plane)
{
- struct rcar_du_crtc *rcrtc = to_rcar_crtc(crtc);
+ return to_rcar_du_plane_state(plane->plane.state)->zpos;
+}
+
+static const struct rcar_du_format_info *
+plane_format(struct rcar_du_plane *plane)
+{
+ return to_rcar_du_plane_state(plane->plane.state)->format;
+}
+
+static void rcar_du_crtc_update_planes(struct rcar_du_crtc *rcrtc)
+{
struct rcar_du_plane *planes[RCAR_DU_NUM_HW_PLANES];
unsigned int num_planes = 0;
unsigned int prio = 0;
@@ -201,29 +217,30 @@
struct rcar_du_plane *plane = &rcrtc->group->planes.planes[i];
unsigned int j;
- if (plane->crtc != &rcrtc->crtc || !plane->enabled)
+ if (plane->plane.state->crtc != &rcrtc->crtc)
continue;
/* Insert the plane in the sorted planes array. */
for (j = num_planes++; j > 0; --j) {
- if (planes[j-1]->zpos <= plane->zpos)
+ if (plane_zpos(planes[j-1]) <= plane_zpos(plane))
break;
planes[j] = planes[j-1];
}
planes[j] = plane;
- prio += plane->format->planes * 4;
+ prio += plane_format(plane)->planes * 4;
}
for (i = 0; i < num_planes; ++i) {
struct rcar_du_plane *plane = planes[i];
- unsigned int index = plane->hwindex;
+ struct drm_plane_state *state = plane->plane.state;
+ unsigned int index = to_rcar_du_plane_state(state)->hwindex;
prio -= 4;
dspr |= (index + 1) << prio;
dptsr |= DPTSR_PnDK(index) | DPTSR_PnTS(index);
- if (plane->format->planes == 2) {
+ if (plane_format(plane)->planes == 2) {
index = (index + 1) % 8;
prio -= 4;
@@ -236,8 +253,6 @@
* with superposition controller 2.
*/
if (rcrtc->index % 2) {
- u32 value = rcar_du_group_read(rcrtc->group, DPTSR);
-
/* The DPTSR register is updated when the display controller is
* stopped. We thus need to restart the DU. Once again, sorry
* for the flicker. One way to mitigate the issue would be to
@@ -245,245 +260,22 @@
* split, or through a module parameter). Flicker would then
* occur only if we need to break the pre-association.
*/
- if (value != dptsr) {
+ mutex_lock(&rcrtc->group->lock);
+ if (rcar_du_group_read(rcrtc->group, DPTSR) != dptsr) {
rcar_du_group_write(rcrtc->group, DPTSR, dptsr);
if (rcrtc->group->used_crtcs)
rcar_du_group_restart(rcrtc->group);
}
+ mutex_unlock(&rcrtc->group->lock);
}
rcar_du_group_write(rcrtc->group, rcrtc->index % 2 ? DS2PR : DS1PR,
dspr);
}
-static void rcar_du_crtc_start(struct rcar_du_crtc *rcrtc)
-{
- struct drm_crtc *crtc = &rcrtc->crtc;
- bool interlaced;
- unsigned int i;
-
- if (rcrtc->started)
- return;
-
- if (WARN_ON(rcrtc->plane->format == NULL))
- return;
-
- /* Set display off and background to black */
- rcar_du_crtc_write(rcrtc, DOOR, DOOR_RGB(0, 0, 0));
- rcar_du_crtc_write(rcrtc, BPOR, BPOR_RGB(0, 0, 0));
-
- /* Configure display timings and output routing */
- rcar_du_crtc_set_display_timing(rcrtc);
- rcar_du_group_set_routing(rcrtc->group);
-
- mutex_lock(&rcrtc->group->planes.lock);
- rcrtc->plane->enabled = true;
- rcar_du_crtc_update_planes(crtc);
- mutex_unlock(&rcrtc->group->planes.lock);
-
- /* Setup planes. */
- for (i = 0; i < ARRAY_SIZE(rcrtc->group->planes.planes); ++i) {
- struct rcar_du_plane *plane = &rcrtc->group->planes.planes[i];
-
- if (plane->crtc != crtc || !plane->enabled)
- continue;
-
- rcar_du_plane_setup(plane);
- }
-
- /* Select master sync mode. This enables display operation in master
- * sync mode (with the HSYNC and VSYNC signals configured as outputs and
- * actively driven).
- */
- interlaced = rcrtc->crtc.mode.flags & DRM_MODE_FLAG_INTERLACE;
- rcar_du_crtc_clr_set(rcrtc, DSYSR, DSYSR_TVM_MASK | DSYSR_SCM_MASK,
- (interlaced ? DSYSR_SCM_INT_VIDEO : 0) |
- DSYSR_TVM_MASTER);
-
- rcar_du_group_start_stop(rcrtc->group, true);
-
- rcrtc->started = true;
-}
-
-static void rcar_du_crtc_stop(struct rcar_du_crtc *rcrtc)
-{
- struct drm_crtc *crtc = &rcrtc->crtc;
-
- if (!rcrtc->started)
- return;
-
- mutex_lock(&rcrtc->group->planes.lock);
- rcrtc->plane->enabled = false;
- rcar_du_crtc_update_planes(crtc);
- mutex_unlock(&rcrtc->group->planes.lock);
-
- /* Select switch sync mode. This stops display operation and configures
- * the HSYNC and VSYNC signals as inputs.
- */
- rcar_du_crtc_clr_set(rcrtc, DSYSR, DSYSR_TVM_MASK, DSYSR_TVM_SWITCH);
-
- rcar_du_group_start_stop(rcrtc->group, false);
-
- rcrtc->started = false;
-}
-
-void rcar_du_crtc_suspend(struct rcar_du_crtc *rcrtc)
-{
- rcar_du_crtc_stop(rcrtc);
- rcar_du_crtc_put(rcrtc);
-}
-
-void rcar_du_crtc_resume(struct rcar_du_crtc *rcrtc)
-{
- if (rcrtc->dpms != DRM_MODE_DPMS_ON)
- return;
-
- rcar_du_crtc_get(rcrtc);
- rcar_du_crtc_start(rcrtc);
-}
-
-static void rcar_du_crtc_update_base(struct rcar_du_crtc *rcrtc)
-{
- struct drm_crtc *crtc = &rcrtc->crtc;
-
- rcar_du_plane_compute_base(rcrtc->plane, crtc->primary->fb);
- rcar_du_plane_update_base(rcrtc->plane);
-}
-
-static void rcar_du_crtc_dpms(struct drm_crtc *crtc, int mode)
-{
- struct rcar_du_crtc *rcrtc = to_rcar_crtc(crtc);
-
- if (mode != DRM_MODE_DPMS_ON)
- mode = DRM_MODE_DPMS_OFF;
-
- if (rcrtc->dpms == mode)
- return;
-
- if (mode == DRM_MODE_DPMS_ON) {
- rcar_du_crtc_get(rcrtc);
- rcar_du_crtc_start(rcrtc);
- } else {
- rcar_du_crtc_stop(rcrtc);
- rcar_du_crtc_put(rcrtc);
- }
-
- rcrtc->dpms = mode;
-}
-
-static bool rcar_du_crtc_mode_fixup(struct drm_crtc *crtc,
- const struct drm_display_mode *mode,
- struct drm_display_mode *adjusted_mode)
-{
- /* TODO Fixup modes */
- return true;
-}
-
-static void rcar_du_crtc_mode_prepare(struct drm_crtc *crtc)
-{
- struct rcar_du_crtc *rcrtc = to_rcar_crtc(crtc);
-
- /* We need to access the hardware during mode set, acquire a reference
- * to the CRTC.
- */
- rcar_du_crtc_get(rcrtc);
-
- /* Stop the CRTC and release the plane. Force the DPMS mode to off as a
- * result.
- */
- rcar_du_crtc_stop(rcrtc);
- rcar_du_plane_release(rcrtc->plane);
-
- rcrtc->dpms = DRM_MODE_DPMS_OFF;
-}
-
-static int rcar_du_crtc_mode_set(struct drm_crtc *crtc,
- struct drm_display_mode *mode,
- struct drm_display_mode *adjusted_mode,
- int x, int y,
- struct drm_framebuffer *old_fb)
-{
- struct rcar_du_crtc *rcrtc = to_rcar_crtc(crtc);
- struct rcar_du_device *rcdu = rcrtc->group->dev;
- const struct rcar_du_format_info *format;
- int ret;
-
- format = rcar_du_format_info(crtc->primary->fb->pixel_format);
- if (format == NULL) {
- dev_dbg(rcdu->dev, "mode_set: unsupported format %08x\n",
- crtc->primary->fb->pixel_format);
- ret = -EINVAL;
- goto error;
- }
-
- ret = rcar_du_plane_reserve(rcrtc->plane, format);
- if (ret < 0)
- goto error;
-
- rcrtc->plane->format = format;
-
- rcrtc->plane->src_x = x;
- rcrtc->plane->src_y = y;
- rcrtc->plane->width = mode->hdisplay;
- rcrtc->plane->height = mode->vdisplay;
-
- rcar_du_plane_compute_base(rcrtc->plane, crtc->primary->fb);
-
- rcrtc->outputs = 0;
-
- return 0;
-
-error:
- /* There's no rollback/abort operation to clean up in case of error. We
- * thus need to release the reference to the CRTC acquired in prepare()
- * here.
- */
- rcar_du_crtc_put(rcrtc);
- return ret;
-}
-
-static void rcar_du_crtc_mode_commit(struct drm_crtc *crtc)
-{
- struct rcar_du_crtc *rcrtc = to_rcar_crtc(crtc);
-
- /* We're done, restart the CRTC and set the DPMS mode to on. The
- * reference to the DU acquired at prepare() time will thus be released
- * by the DPMS handler (possibly called by the disable() handler).
- */
- rcar_du_crtc_start(rcrtc);
- rcrtc->dpms = DRM_MODE_DPMS_ON;
-}
-
-static int rcar_du_crtc_mode_set_base(struct drm_crtc *crtc, int x, int y,
- struct drm_framebuffer *old_fb)
-{
- struct rcar_du_crtc *rcrtc = to_rcar_crtc(crtc);
-
- rcrtc->plane->src_x = x;
- rcrtc->plane->src_y = y;
-
- rcar_du_crtc_update_base(rcrtc);
-
- return 0;
-}
-
-static void rcar_du_crtc_disable(struct drm_crtc *crtc)
-{
- struct rcar_du_crtc *rcrtc = to_rcar_crtc(crtc);
-
- rcar_du_crtc_dpms(crtc, DRM_MODE_DPMS_OFF);
- rcar_du_plane_release(rcrtc->plane);
-}
-
-static const struct drm_crtc_helper_funcs crtc_helper_funcs = {
- .dpms = rcar_du_crtc_dpms,
- .mode_fixup = rcar_du_crtc_mode_fixup,
- .prepare = rcar_du_crtc_mode_prepare,
- .commit = rcar_du_crtc_mode_commit,
- .mode_set = rcar_du_crtc_mode_set,
- .mode_set_base = rcar_du_crtc_mode_set_base,
- .disable = rcar_du_crtc_disable,
-};
+/* -----------------------------------------------------------------------------
+ * Page Flip
+ */
void rcar_du_crtc_cancel_page_flip(struct rcar_du_crtc *rcrtc,
struct drm_file *file)
@@ -500,7 +292,7 @@
if (event && event->base.file_priv == file) {
rcrtc->event = NULL;
event->base.destroy(&event->base);
- drm_vblank_put(dev, rcrtc->index);
+ drm_crtc_vblank_put(&rcrtc->crtc);
}
spin_unlock_irqrestore(&dev->event_lock, flags);
}
@@ -521,11 +313,215 @@
spin_lock_irqsave(&dev->event_lock, flags);
drm_send_vblank_event(dev, rcrtc->index, event);
+ wake_up(&rcrtc->flip_wait);
spin_unlock_irqrestore(&dev->event_lock, flags);
- drm_vblank_put(dev, rcrtc->index);
+ drm_crtc_vblank_put(&rcrtc->crtc);
}
+static bool rcar_du_crtc_page_flip_pending(struct rcar_du_crtc *rcrtc)
+{
+ struct drm_device *dev = rcrtc->crtc.dev;
+ unsigned long flags;
+ bool pending;
+
+ spin_lock_irqsave(&dev->event_lock, flags);
+ pending = rcrtc->event != NULL;
+ spin_unlock_irqrestore(&dev->event_lock, flags);
+
+ return pending;
+}
+
+static void rcar_du_crtc_wait_page_flip(struct rcar_du_crtc *rcrtc)
+{
+ struct rcar_du_device *rcdu = rcrtc->group->dev;
+
+ if (wait_event_timeout(rcrtc->flip_wait,
+ !rcar_du_crtc_page_flip_pending(rcrtc),
+ msecs_to_jiffies(50)))
+ return;
+
+ dev_warn(rcdu->dev, "page flip timeout\n");
+
+ rcar_du_crtc_finish_page_flip(rcrtc);
+}
+
+/* -----------------------------------------------------------------------------
+ * Start/Stop and Suspend/Resume
+ */
+
+static void rcar_du_crtc_start(struct rcar_du_crtc *rcrtc)
+{
+ struct drm_crtc *crtc = &rcrtc->crtc;
+ bool interlaced;
+
+ if (rcrtc->started)
+ return;
+
+ /* Set display off and background to black */
+ rcar_du_crtc_write(rcrtc, DOOR, DOOR_RGB(0, 0, 0));
+ rcar_du_crtc_write(rcrtc, BPOR, BPOR_RGB(0, 0, 0));
+
+ /* Configure display timings and output routing */
+ rcar_du_crtc_set_display_timing(rcrtc);
+ rcar_du_group_set_routing(rcrtc->group);
+
+ /* Start with all planes disabled. */
+ rcar_du_group_write(rcrtc->group, rcrtc->index % 2 ? DS2PR : DS1PR, 0);
+
+ /* Select master sync mode. This enables display operation in master
+ * sync mode (with the HSYNC and VSYNC signals configured as outputs and
+ * actively driven).
+ */
+ interlaced = rcrtc->crtc.mode.flags & DRM_MODE_FLAG_INTERLACE;
+ rcar_du_crtc_clr_set(rcrtc, DSYSR, DSYSR_TVM_MASK | DSYSR_SCM_MASK,
+ (interlaced ? DSYSR_SCM_INT_VIDEO : 0) |
+ DSYSR_TVM_MASTER);
+
+ rcar_du_group_start_stop(rcrtc->group, true);
+
+ /* Turn vertical blanking interrupt reporting back on. */
+ drm_crtc_vblank_on(crtc);
+
+ rcrtc->started = true;
+}
+
+static void rcar_du_crtc_stop(struct rcar_du_crtc *rcrtc)
+{
+ struct drm_crtc *crtc = &rcrtc->crtc;
+
+ if (!rcrtc->started)
+ return;
+
+ /* Disable vertical blanking interrupt reporting. We first need to wait
+ * for page flip completion before stopping the CRTC as userspace
+ * expects page flips to eventually complete.
+ */
+ rcar_du_crtc_wait_page_flip(rcrtc);
+ drm_crtc_vblank_off(crtc);
+
+ /* Select switch sync mode. This stops display operation and configures
+ * the HSYNC and VSYNC signals as inputs.
+ */
+ rcar_du_crtc_clr_set(rcrtc, DSYSR, DSYSR_TVM_MASK, DSYSR_TVM_SWITCH);
+
+ rcar_du_group_start_stop(rcrtc->group, false);
+
+ rcrtc->started = false;
+}
+
+void rcar_du_crtc_suspend(struct rcar_du_crtc *rcrtc)
+{
+ rcar_du_crtc_stop(rcrtc);
+ rcar_du_crtc_put(rcrtc);
+}
+
+void rcar_du_crtc_resume(struct rcar_du_crtc *rcrtc)
+{
+ unsigned int i;
+
+ if (!rcrtc->enabled)
+ return;
+
+ rcar_du_crtc_get(rcrtc);
+ rcar_du_crtc_start(rcrtc);
+
+ /* Commit the planes state. */
+ for (i = 0; i < ARRAY_SIZE(rcrtc->group->planes.planes); ++i) {
+ struct rcar_du_plane *plane = &rcrtc->group->planes.planes[i];
+
+ if (plane->plane.state->crtc != &rcrtc->crtc)
+ continue;
+
+ rcar_du_plane_setup(plane);
+ }
+
+ rcar_du_crtc_update_planes(rcrtc);
+}
+
+/* -----------------------------------------------------------------------------
+ * CRTC Functions
+ */
+
+static void rcar_du_crtc_enable(struct drm_crtc *crtc)
+{
+ struct rcar_du_crtc *rcrtc = to_rcar_crtc(crtc);
+
+ if (rcrtc->enabled)
+ return;
+
+ rcar_du_crtc_get(rcrtc);
+ rcar_du_crtc_start(rcrtc);
+
+ rcrtc->enabled = true;
+}
+
+static void rcar_du_crtc_disable(struct drm_crtc *crtc)
+{
+ struct rcar_du_crtc *rcrtc = to_rcar_crtc(crtc);
+
+ if (!rcrtc->enabled)
+ return;
+
+ rcar_du_crtc_stop(rcrtc);
+ rcar_du_crtc_put(rcrtc);
+
+ rcrtc->enabled = false;
+ rcrtc->outputs = 0;
+}
+
+static bool rcar_du_crtc_mode_fixup(struct drm_crtc *crtc,
+ const struct drm_display_mode *mode,
+ struct drm_display_mode *adjusted_mode)
+{
+ /* TODO Fixup modes */
+ return true;
+}
+
+static void rcar_du_crtc_atomic_begin(struct drm_crtc *crtc)
+{
+ struct drm_pending_vblank_event *event = crtc->state->event;
+ struct rcar_du_crtc *rcrtc = to_rcar_crtc(crtc);
+ struct drm_device *dev = rcrtc->crtc.dev;
+ unsigned long flags;
+
+ if (event) {
+ WARN_ON(drm_crtc_vblank_get(crtc) != 0);
+
+ spin_lock_irqsave(&dev->event_lock, flags);
+ rcrtc->event = event;
+ spin_unlock_irqrestore(&dev->event_lock, flags);
+ }
+}
+
+static void rcar_du_crtc_atomic_flush(struct drm_crtc *crtc)
+{
+ struct rcar_du_crtc *rcrtc = to_rcar_crtc(crtc);
+
+ rcar_du_crtc_update_planes(rcrtc);
+}
+
+static const struct drm_crtc_helper_funcs crtc_helper_funcs = {
+ .mode_fixup = rcar_du_crtc_mode_fixup,
+ .disable = rcar_du_crtc_disable,
+ .enable = rcar_du_crtc_enable,
+ .atomic_begin = rcar_du_crtc_atomic_begin,
+ .atomic_flush = rcar_du_crtc_atomic_flush,
+};
+
+static const struct drm_crtc_funcs crtc_funcs = {
+ .reset = drm_atomic_helper_crtc_reset,
+ .destroy = drm_crtc_cleanup,
+ .set_config = drm_atomic_helper_set_config,
+ .page_flip = drm_atomic_helper_page_flip,
+ .atomic_duplicate_state = drm_atomic_helper_crtc_duplicate_state,
+ .atomic_destroy_state = drm_atomic_helper_crtc_destroy_state,
+};
+
+/* -----------------------------------------------------------------------------
+ * Interrupt Handling
+ */
+
static irqreturn_t rcar_du_crtc_irq(int irq, void *arg)
{
struct rcar_du_crtc *rcrtc = arg;
@@ -544,41 +540,9 @@
return ret;
}
-static int rcar_du_crtc_page_flip(struct drm_crtc *crtc,
- struct drm_framebuffer *fb,
- struct drm_pending_vblank_event *event,
- uint32_t page_flip_flags)
-{
- struct rcar_du_crtc *rcrtc = to_rcar_crtc(crtc);
- struct drm_device *dev = rcrtc->crtc.dev;
- unsigned long flags;
-
- spin_lock_irqsave(&dev->event_lock, flags);
- if (rcrtc->event != NULL) {
- spin_unlock_irqrestore(&dev->event_lock, flags);
- return -EBUSY;
- }
- spin_unlock_irqrestore(&dev->event_lock, flags);
-
- crtc->primary->fb = fb;
- rcar_du_crtc_update_base(rcrtc);
-
- if (event) {
- event->pipe = rcrtc->index;
- drm_vblank_get(dev, rcrtc->index);
- spin_lock_irqsave(&dev->event_lock, flags);
- rcrtc->event = event;
- spin_unlock_irqrestore(&dev->event_lock, flags);
- }
-
- return 0;
-}
-
-static const struct drm_crtc_funcs crtc_funcs = {
- .destroy = drm_crtc_cleanup,
- .set_config = drm_crtc_helper_set_config,
- .page_flip = rcar_du_crtc_page_flip,
-};
+/* -----------------------------------------------------------------------------
+ * Initialization
+ */
int rcar_du_crtc_create(struct rcar_du_group *rgrp, unsigned int index)
{
@@ -620,20 +584,24 @@
return -EPROBE_DEFER;
}
+ init_waitqueue_head(&rcrtc->flip_wait);
+
rcrtc->group = rgrp;
rcrtc->mmio_offset = mmio_offsets[index];
rcrtc->index = index;
- rcrtc->dpms = DRM_MODE_DPMS_OFF;
- rcrtc->plane = &rgrp->planes.planes[index % 2];
+ rcrtc->enabled = false;
- rcrtc->plane->crtc = crtc;
-
- ret = drm_crtc_init(rcdu->ddev, crtc, &crtc_funcs);
+ ret = drm_crtc_init_with_planes(rcdu->ddev, crtc,
+ &rgrp->planes.planes[index % 2].plane,
+ NULL, &crtc_funcs);
if (ret < 0)
return ret;
drm_crtc_helper_add(crtc, &crtc_helper_funcs);
+ /* Start with vertical blanking interrupt reporting disabled. */
+ drm_crtc_vblank_off(crtc);
+
/* Register the interrupt handler. */
if (rcar_du_has(rcdu, RCAR_DU_FEATURE_CRTC_IRQ_CLOCK)) {
irq = platform_get_irq(pdev, index);
diff --git a/drivers/gpu/drm/rcar-du/rcar_du_crtc.h b/drivers/gpu/drm/rcar-du/rcar_du_crtc.h
index d2f89f7..5d9aa9b 100644
--- a/drivers/gpu/drm/rcar-du/rcar_du_crtc.h
+++ b/drivers/gpu/drm/rcar-du/rcar_du_crtc.h
@@ -15,12 +15,12 @@
#define __RCAR_DU_CRTC_H__
#include <linux/mutex.h>
+#include <linux/wait.h>
#include <drm/drmP.h>
#include <drm/drm_crtc.h>
struct rcar_du_group;
-struct rcar_du_plane;
struct rcar_du_crtc {
struct drm_crtc crtc;
@@ -32,11 +32,12 @@
bool started;
struct drm_pending_vblank_event *event;
+ wait_queue_head_t flip_wait;
+
unsigned int outputs;
- int dpms;
+ bool enabled;
struct rcar_du_group *group;
- struct rcar_du_plane *plane;
};
#define to_rcar_crtc(c) container_of(c, struct rcar_du_crtc, crtc)
@@ -59,6 +60,5 @@
void rcar_du_crtc_route_output(struct drm_crtc *crtc,
enum rcar_du_output output);
-void rcar_du_crtc_update_planes(struct drm_crtc *crtc);
#endif /* __RCAR_DU_CRTC_H__ */
diff --git a/drivers/gpu/drm/rcar-du/rcar_du_drv.c b/drivers/gpu/drm/rcar-du/rcar_du_drv.c
index e0d74f8..da1216a 100644
--- a/drivers/gpu/drm/rcar-du/rcar_du_drv.c
+++ b/drivers/gpu/drm/rcar-du/rcar_du_drv.c
@@ -19,6 +19,7 @@
#include <linux/platform_device.h>
#include <linux/pm.h>
#include <linux/slab.h>
+#include <linux/wait.h>
#include <drm/drmP.h>
#include <drm/drm_crtc_helper.h>
@@ -163,6 +164,8 @@
return -ENOMEM;
}
+ init_waitqueue_head(&rcdu->commit.wait);
+
rcdu->dev = &pdev->dev;
rcdu->info = np ? of_match_device(rcar_du_of_table, rcdu->dev)->data
: (void *)platform_get_device_id(pdev)->driver_data;
@@ -175,6 +178,15 @@
if (IS_ERR(rcdu->mmio))
return PTR_ERR(rcdu->mmio);
+ /* Initialize vertical blanking interrupts handling. Start with vblank
+ * disabled for all CRTCs.
+ */
+ ret = drm_vblank_init(dev, (1 << rcdu->info->num_crtcs) - 1);
+ if (ret < 0) {
+ dev_err(&pdev->dev, "failed to initialize vblank\n");
+ goto done;
+ }
+
/* DRM/KMS objects */
ret = rcar_du_modeset_init(rcdu);
if (ret < 0) {
@@ -182,13 +194,6 @@
goto done;
}
- /* vblank handling */
- ret = drm_vblank_init(dev, (1 << rcdu->num_crtcs) - 1);
- if (ret < 0) {
- dev_err(&pdev->dev, "failed to initialize vblank\n");
- goto done;
- }
-
dev->irq_enabled = 1;
platform_set_drvdata(pdev, rcdu);
@@ -247,7 +252,8 @@
};
static struct drm_driver rcar_du_driver = {
- .driver_features = DRIVER_GEM | DRIVER_MODESET | DRIVER_PRIME,
+ .driver_features = DRIVER_GEM | DRIVER_MODESET | DRIVER_PRIME
+ | DRIVER_ATOMIC,
.load = rcar_du_load,
.unload = rcar_du_unload,
.preclose = rcar_du_preclose,
diff --git a/drivers/gpu/drm/rcar-du/rcar_du_drv.h b/drivers/gpu/drm/rcar-du/rcar_du_drv.h
index c5b9ea6..c7c538d 100644
--- a/drivers/gpu/drm/rcar-du/rcar_du_drv.h
+++ b/drivers/gpu/drm/rcar-du/rcar_du_drv.h
@@ -15,6 +15,7 @@
#define __RCAR_DU_DRV_H__
#include <linux/kernel.h>
+#include <linux/wait.h>
#include "rcar_du_crtc.h"
#include "rcar_du_group.h"
@@ -64,6 +65,10 @@
unsigned int num_lvds;
};
+#define RCAR_DU_MAX_CRTCS 3
+#define RCAR_DU_MAX_GROUPS DIV_ROUND_UP(RCAR_DU_MAX_CRTCS, 2)
+#define RCAR_DU_MAX_LVDS 2
+
struct rcar_du_device {
struct device *dev;
const struct rcar_du_device_info *info;
@@ -73,13 +78,18 @@
struct drm_device *ddev;
struct drm_fbdev_cma *fbdev;
- struct rcar_du_crtc crtcs[3];
+ struct rcar_du_crtc crtcs[RCAR_DU_MAX_CRTCS];
unsigned int num_crtcs;
- struct rcar_du_group groups[2];
+ struct rcar_du_group groups[RCAR_DU_MAX_GROUPS];
unsigned int dpad0_source;
- struct rcar_du_lvdsenc *lvds[2];
+ struct rcar_du_lvdsenc *lvds[RCAR_DU_MAX_LVDS];
+
+ struct {
+ wait_queue_head_t wait;
+ u32 pending;
+ } commit;
};
static inline bool rcar_du_has(struct rcar_du_device *rcdu,
diff --git a/drivers/gpu/drm/rcar-du/rcar_du_encoder.c b/drivers/gpu/drm/rcar-du/rcar_du_encoder.c
index 279167f..d0ae1e8 100644
--- a/drivers/gpu/drm/rcar-du/rcar_du_encoder.c
+++ b/drivers/gpu/drm/rcar-du/rcar_du_encoder.c
@@ -42,46 +42,40 @@
* Encoder
*/
-static void rcar_du_encoder_dpms(struct drm_encoder *encoder, int mode)
+static void rcar_du_encoder_disable(struct drm_encoder *encoder)
{
struct rcar_du_encoder *renc = to_rcar_encoder(encoder);
- if (mode != DRM_MODE_DPMS_ON)
- mode = DRM_MODE_DPMS_OFF;
-
if (renc->lvds)
- rcar_du_lvdsenc_dpms(renc->lvds, encoder->crtc, mode);
+ rcar_du_lvdsenc_enable(renc->lvds, encoder->crtc, false);
}
-static bool rcar_du_encoder_mode_fixup(struct drm_encoder *encoder,
- const struct drm_display_mode *mode,
- struct drm_display_mode *adjusted_mode)
+static void rcar_du_encoder_enable(struct drm_encoder *encoder)
{
struct rcar_du_encoder *renc = to_rcar_encoder(encoder);
+
+ if (renc->lvds)
+ rcar_du_lvdsenc_enable(renc->lvds, encoder->crtc, true);
+}
+
+static int rcar_du_encoder_atomic_check(struct drm_encoder *encoder,
+ struct drm_crtc_state *crtc_state,
+ struct drm_connector_state *conn_state)
+{
+ struct rcar_du_encoder *renc = to_rcar_encoder(encoder);
+ struct drm_display_mode *adjusted_mode = &crtc_state->adjusted_mode;
+ const struct drm_display_mode *mode = &crtc_state->mode;
const struct drm_display_mode *panel_mode;
+ struct drm_connector *connector = conn_state->connector;
struct drm_device *dev = encoder->dev;
- struct drm_connector *connector;
- bool found = false;
/* DAC encoders have currently no restriction on the mode. */
if (encoder->encoder_type == DRM_MODE_ENCODER_DAC)
- return true;
-
- list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
- if (connector->encoder == encoder) {
- found = true;
- break;
- }
- }
-
- if (!found) {
- dev_dbg(dev->dev, "mode_fixup: no connector found\n");
- return false;
- }
+ return 0;
if (list_empty(&connector->modes)) {
- dev_dbg(dev->dev, "mode_fixup: empty modes list\n");
- return false;
+ dev_dbg(dev->dev, "encoder: empty modes list\n");
+ return -EINVAL;
}
panel_mode = list_first_entry(&connector->modes,
@@ -90,7 +84,7 @@
/* We're not allowed to modify the resolution. */
if (mode->hdisplay != panel_mode->hdisplay ||
mode->vdisplay != panel_mode->vdisplay)
- return false;
+ return -EINVAL;
/* The flat panel mode is fixed, just copy it to the adjusted mode. */
drm_mode_copy(adjusted_mode, panel_mode);
@@ -102,25 +96,7 @@
adjusted_mode->clock = clamp(adjusted_mode->clock,
30000, 150000);
- return true;
-}
-
-static void rcar_du_encoder_mode_prepare(struct drm_encoder *encoder)
-{
- struct rcar_du_encoder *renc = to_rcar_encoder(encoder);
-
- if (renc->lvds)
- rcar_du_lvdsenc_dpms(renc->lvds, encoder->crtc,
- DRM_MODE_DPMS_OFF);
-}
-
-static void rcar_du_encoder_mode_commit(struct drm_encoder *encoder)
-{
- struct rcar_du_encoder *renc = to_rcar_encoder(encoder);
-
- if (renc->lvds)
- rcar_du_lvdsenc_dpms(renc->lvds, encoder->crtc,
- DRM_MODE_DPMS_ON);
+ return 0;
}
static void rcar_du_encoder_mode_set(struct drm_encoder *encoder,
@@ -133,11 +109,10 @@
}
static const struct drm_encoder_helper_funcs encoder_helper_funcs = {
- .dpms = rcar_du_encoder_dpms,
- .mode_fixup = rcar_du_encoder_mode_fixup,
- .prepare = rcar_du_encoder_mode_prepare,
- .commit = rcar_du_encoder_mode_commit,
.mode_set = rcar_du_encoder_mode_set,
+ .disable = rcar_du_encoder_disable,
+ .enable = rcar_du_encoder_enable,
+ .atomic_check = rcar_du_encoder_atomic_check,
};
static const struct drm_encoder_funcs encoder_funcs = {
diff --git a/drivers/gpu/drm/rcar-du/rcar_du_group.h b/drivers/gpu/drm/rcar-du/rcar_du_group.h
index 0c38cdc..ed36433 100644
--- a/drivers/gpu/drm/rcar-du/rcar_du_group.h
+++ b/drivers/gpu/drm/rcar-du/rcar_du_group.h
@@ -14,6 +14,8 @@
#ifndef __RCAR_DU_GROUP_H__
#define __RCAR_DU_GROUP_H__
+#include <linux/mutex.h>
+
#include "rcar_du_plane.h"
struct rcar_du_device;
@@ -25,6 +27,7 @@
* @index: group index
* @use_count: number of users of the group (rcar_du_group_(get|put))
* @used_crtcs: number of CRTCs currently in use
+ * @lock: protects the DPTSR register
* @planes: planes handled by the group
*/
struct rcar_du_group {
@@ -35,6 +38,8 @@
unsigned int use_count;
unsigned int used_crtcs;
+ struct mutex lock;
+
struct rcar_du_planes planes;
};
diff --git a/drivers/gpu/drm/rcar-du/rcar_du_hdmicon.c b/drivers/gpu/drm/rcar-du/rcar_du_hdmicon.c
index ca94b02..96f2eb4 100644
--- a/drivers/gpu/drm/rcar-du/rcar_du_hdmicon.c
+++ b/drivers/gpu/drm/rcar-du/rcar_du_hdmicon.c
@@ -12,6 +12,7 @@
*/
#include <drm/drmP.h>
+#include <drm/drm_atomic_helper.h>
#include <drm/drm_crtc.h>
#include <drm/drm_crtc_helper.h>
#include <drm/drm_encoder_slave.h>
@@ -74,10 +75,13 @@
}
static const struct drm_connector_funcs connector_funcs = {
- .dpms = drm_helper_connector_dpms,
+ .dpms = drm_atomic_helper_connector_dpms,
+ .reset = drm_atomic_helper_connector_reset,
.detect = rcar_du_hdmi_connector_detect,
.fill_modes = drm_helper_probe_single_connector_modes,
.destroy = rcar_du_hdmi_connector_destroy,
+ .atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
+ .atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
};
int rcar_du_hdmi_connector_init(struct rcar_du_device *rcdu,
@@ -108,7 +112,7 @@
if (ret < 0)
return ret;
- drm_helper_connector_dpms(connector, DRM_MODE_DPMS_OFF);
+ connector->dpms = DRM_MODE_DPMS_OFF;
drm_object_property_set_value(&connector->base,
rcdu->ddev->mode_config.dpms_property, DRM_MODE_DPMS_OFF);
@@ -116,7 +120,6 @@
if (ret < 0)
return ret;
- connector->encoder = encoder;
rcon->encoder = renc;
return 0;
diff --git a/drivers/gpu/drm/rcar-du/rcar_du_hdmienc.c b/drivers/gpu/drm/rcar-du/rcar_du_hdmienc.c
index 221f0a1..81da841 100644
--- a/drivers/gpu/drm/rcar-du/rcar_du_hdmienc.c
+++ b/drivers/gpu/drm/rcar-du/rcar_du_hdmienc.c
@@ -26,42 +26,51 @@
struct rcar_du_hdmienc {
struct rcar_du_encoder *renc;
struct device *dev;
- int dpms;
+ bool enabled;
};
#define to_rcar_hdmienc(e) (to_rcar_encoder(e)->hdmi)
#define to_slave_funcs(e) (to_rcar_encoder(e)->slave.slave_funcs)
-static void rcar_du_hdmienc_dpms(struct drm_encoder *encoder, int mode)
+static void rcar_du_hdmienc_disable(struct drm_encoder *encoder)
{
struct rcar_du_hdmienc *hdmienc = to_rcar_hdmienc(encoder);
struct drm_encoder_slave_funcs *sfuncs = to_slave_funcs(encoder);
- if (mode != DRM_MODE_DPMS_ON)
- mode = DRM_MODE_DPMS_OFF;
-
- if (hdmienc->dpms == mode)
- return;
-
- if (mode == DRM_MODE_DPMS_ON && hdmienc->renc->lvds)
- rcar_du_lvdsenc_dpms(hdmienc->renc->lvds, encoder->crtc, mode);
-
if (sfuncs->dpms)
- sfuncs->dpms(encoder, mode);
+ sfuncs->dpms(encoder, DRM_MODE_DPMS_OFF);
- if (mode != DRM_MODE_DPMS_ON && hdmienc->renc->lvds)
- rcar_du_lvdsenc_dpms(hdmienc->renc->lvds, encoder->crtc, mode);
+ if (hdmienc->renc->lvds)
+ rcar_du_lvdsenc_enable(hdmienc->renc->lvds, encoder->crtc,
+ false);
- hdmienc->dpms = mode;
+ hdmienc->enabled = false;
}
-static bool rcar_du_hdmienc_mode_fixup(struct drm_encoder *encoder,
- const struct drm_display_mode *mode,
- struct drm_display_mode *adjusted_mode)
+static void rcar_du_hdmienc_enable(struct drm_encoder *encoder)
{
struct rcar_du_hdmienc *hdmienc = to_rcar_hdmienc(encoder);
struct drm_encoder_slave_funcs *sfuncs = to_slave_funcs(encoder);
+ if (hdmienc->renc->lvds)
+ rcar_du_lvdsenc_enable(hdmienc->renc->lvds, encoder->crtc,
+ true);
+
+ if (sfuncs->dpms)
+ sfuncs->dpms(encoder, DRM_MODE_DPMS_ON);
+
+ hdmienc->enabled = true;
+}
+
+static int rcar_du_hdmienc_atomic_check(struct drm_encoder *encoder,
+ struct drm_crtc_state *crtc_state,
+ struct drm_connector_state *conn_state)
+{
+ struct rcar_du_hdmienc *hdmienc = to_rcar_hdmienc(encoder);
+ struct drm_encoder_slave_funcs *sfuncs = to_slave_funcs(encoder);
+ struct drm_display_mode *adjusted_mode = &crtc_state->adjusted_mode;
+ const struct drm_display_mode *mode = &crtc_state->mode;
+
/* The internal LVDS encoder has a clock frequency operating range of
* 30MHz to 150MHz. Clamp the clock accordingly.
*/
@@ -70,19 +79,9 @@
30000, 150000);
if (sfuncs->mode_fixup == NULL)
- return true;
+ return 0;
- return sfuncs->mode_fixup(encoder, mode, adjusted_mode);
-}
-
-static void rcar_du_hdmienc_mode_prepare(struct drm_encoder *encoder)
-{
- rcar_du_hdmienc_dpms(encoder, DRM_MODE_DPMS_OFF);
-}
-
-static void rcar_du_hdmienc_mode_commit(struct drm_encoder *encoder)
-{
- rcar_du_hdmienc_dpms(encoder, DRM_MODE_DPMS_ON);
+ return sfuncs->mode_fixup(encoder, mode, adjusted_mode) ? 0 : -EINVAL;
}
static void rcar_du_hdmienc_mode_set(struct drm_encoder *encoder,
@@ -99,18 +98,18 @@
}
static const struct drm_encoder_helper_funcs encoder_helper_funcs = {
- .dpms = rcar_du_hdmienc_dpms,
- .mode_fixup = rcar_du_hdmienc_mode_fixup,
- .prepare = rcar_du_hdmienc_mode_prepare,
- .commit = rcar_du_hdmienc_mode_commit,
.mode_set = rcar_du_hdmienc_mode_set,
+ .disable = rcar_du_hdmienc_disable,
+ .enable = rcar_du_hdmienc_enable,
+ .atomic_check = rcar_du_hdmienc_atomic_check,
};
static void rcar_du_hdmienc_cleanup(struct drm_encoder *encoder)
{
struct rcar_du_hdmienc *hdmienc = to_rcar_hdmienc(encoder);
- rcar_du_hdmienc_dpms(encoder, DRM_MODE_DPMS_OFF);
+ if (hdmienc->enabled)
+ rcar_du_hdmienc_disable(encoder);
drm_encoder_cleanup(encoder);
put_device(hdmienc->dev);
diff --git a/drivers/gpu/drm/rcar-du/rcar_du_kms.c b/drivers/gpu/drm/rcar-du/rcar_du_kms.c
index cc9136e..93117f1 100644
--- a/drivers/gpu/drm/rcar-du/rcar_du_kms.c
+++ b/drivers/gpu/drm/rcar-du/rcar_du_kms.c
@@ -12,12 +12,15 @@
*/
#include <drm/drmP.h>
+#include <drm/drm_atomic.h>
+#include <drm/drm_atomic_helper.h>
#include <drm/drm_crtc.h>
#include <drm/drm_crtc_helper.h>
#include <drm/drm_fb_cma_helper.h>
#include <drm/drm_gem_cma_helper.h>
#include <linux/of_graph.h>
+#include <linux/wait.h>
#include "rcar_du_crtc.h"
#include "rcar_du_drv.h"
@@ -185,9 +188,309 @@
drm_fbdev_cma_hotplug_event(rcdu->fbdev);
}
+/* -----------------------------------------------------------------------------
+ * Atomic Check and Update
+ */
+
+/*
+ * Atomic hardware plane allocator
+ *
+ * The hardware plane allocator is solely based on the atomic plane states
+ * without keeping any external state to avoid races between .atomic_check()
+ * and .atomic_commit().
+ *
+ * The core idea is to avoid a free planes bitmask that would need to be shared
+ * between the check and commit handlers; the knowledge of which hardware
+ * plane(s) each KMS plane owns is instead kept in the plane states. The
+ * allocator loops over all plane states to compute the free planes bitmask,
+ * allocates hardware planes based on that bitmask, and stores the result back
+ * in the plane states.
+ *
+ * For this to work we need to access the current state of planes not touched by
+ * the atomic update. To ensure that it won't be modified, we need to lock all
+ * planes using drm_atomic_get_plane_state(). This effectively serializes atomic
+ * updates from .atomic_check() up to completion (when swapping the states if
+ * the check step has succeeded) or rollback (when freeing the states if the
+ * check step has failed).
+ *
+ * Allocation is performed in the .atomic_check() handler and applied
+ * automatically when the core swaps the old and new states.
+ */
+
+static bool rcar_du_plane_needs_realloc(struct rcar_du_plane *plane,
+ struct rcar_du_plane_state *state)
+{
+ const struct rcar_du_format_info *cur_format;
+
+ cur_format = to_rcar_du_plane_state(plane->plane.state)->format;
+
+ /* Lowering the number of planes doesn't strictly require reallocation
+ * as the extra hardware plane will be freed when committing, but doing
+ * so could lead to more fragmentation.
+ */
+ return !cur_format || cur_format->planes != state->format->planes;
+}
+
+static unsigned int rcar_du_plane_hwmask(struct rcar_du_plane_state *state)
+{
+ unsigned int mask;
+
+ if (state->hwindex == -1)
+ return 0;
+
+ mask = 1 << state->hwindex;
+ if (state->format->planes == 2)
+ mask |= 1 << ((state->hwindex + 1) % 8);
+
+ return mask;
+}
+
+static int rcar_du_plane_hwalloc(unsigned int num_planes, unsigned int free)
+{
+ unsigned int i;
+
+ for (i = 0; i < RCAR_DU_NUM_HW_PLANES; ++i) {
+ if (!(free & (1 << i)))
+ continue;
+
+ if (num_planes == 1 || free & (1 << ((i + 1) % 8)))
+ break;
+ }
+
+ return i == RCAR_DU_NUM_HW_PLANES ? -EBUSY : i;
+}
+
+static int rcar_du_atomic_check(struct drm_device *dev,
+ struct drm_atomic_state *state)
+{
+ struct rcar_du_device *rcdu = dev->dev_private;
+ unsigned int group_freed_planes[RCAR_DU_MAX_GROUPS] = { 0, };
+ unsigned int group_free_planes[RCAR_DU_MAX_GROUPS] = { 0, };
+ bool needs_realloc = false;
+ unsigned int groups = 0;
+ unsigned int i;
+ int ret;
+
+ ret = drm_atomic_helper_check(dev, state);
+ if (ret < 0)
+ return ret;
+
+ /* Check if hardware planes need to be reallocated. */
+ for (i = 0; i < dev->mode_config.num_total_plane; ++i) {
+ struct rcar_du_plane_state *plane_state;
+ struct rcar_du_plane *plane;
+ unsigned int index;
+
+ if (!state->planes[i])
+ continue;
+
+ plane = to_rcar_plane(state->planes[i]);
+ plane_state = to_rcar_du_plane_state(state->plane_states[i]);
+
+ /* If the plane is being disabled we don't need to go through
+ * the full reallocation procedure. Just mark the hardware
+ * plane(s) as freed.
+ */
+ if (!plane_state->format) {
+ index = plane - plane->group->planes.planes;
+ group_freed_planes[plane->group->index] |= 1 << index;
+ plane_state->hwindex = -1;
+ continue;
+ }
+
+ /* If the plane needs to be reallocated mark it as such, and
+ * mark the hardware plane(s) as free.
+ */
+ if (rcar_du_plane_needs_realloc(plane, plane_state)) {
+ groups |= 1 << plane->group->index;
+ needs_realloc = true;
+
+ index = plane - plane->group->planes.planes;
+ group_freed_planes[plane->group->index] |= 1 << index;
+ plane_state->hwindex = -1;
+ }
+ }
+
+ if (!needs_realloc)
+ return 0;
+
+ /* Grab all plane states for the groups that need reallocation to ensure
+ * locking and avoid racy updates. This serializes the update operation,
+ * but there's not much we can do about it as that's the hardware
+ * design.
+ *
+ * Compute the used planes mask for each group at the same time to avoid
+ * looping over the planes separately later.
+ */
+ while (groups) {
+ unsigned int index = ffs(groups) - 1;
+ struct rcar_du_group *group = &rcdu->groups[index];
+ unsigned int used_planes = 0;
+
+ for (i = 0; i < RCAR_DU_NUM_KMS_PLANES; ++i) {
+ struct rcar_du_plane *plane = &group->planes.planes[i];
+ struct rcar_du_plane_state *plane_state;
+ struct drm_plane_state *s;
+
+ s = drm_atomic_get_plane_state(state, &plane->plane);
+ if (IS_ERR(s))
+ return PTR_ERR(s);
+
+ /* If the plane has been freed in the above loop its
+ * hardware planes must not be added to the used planes
+ * bitmask. However, the current state doesn't reflect
+ * the free state yet, as we've modified the new state
+ * above. Use the local freed planes list to check for
+ * that condition instead.
+ */
+ if (group_freed_planes[index] & (1 << i))
+ continue;
+
+ plane_state = to_rcar_du_plane_state(plane->plane.state);
+ used_planes |= rcar_du_plane_hwmask(plane_state);
+ }
+
+ group_free_planes[index] = 0xff & ~used_planes;
+ groups &= ~(1 << index);
+ }
+
+ /* Reallocate hardware planes for each plane that needs it. */
+ for (i = 0; i < dev->mode_config.num_total_plane; ++i) {
+ struct rcar_du_plane_state *plane_state;
+ struct rcar_du_plane *plane;
+ int idx;
+
+ if (!state->planes[i])
+ continue;
+
+ plane = to_rcar_plane(state->planes[i]);
+ plane_state = to_rcar_du_plane_state(state->plane_states[i]);
+
+ /* Skip planes that are being disabled or don't need to be
+ * reallocated.
+ */
+ if (!plane_state->format ||
+ !rcar_du_plane_needs_realloc(plane, plane_state))
+ continue;
+
+ idx = rcar_du_plane_hwalloc(plane_state->format->planes,
+ group_free_planes[plane->group->index]);
+ if (idx < 0) {
+ dev_dbg(rcdu->dev, "%s: no available hardware plane\n",
+ __func__);
+ return idx;
+ }
+
+ plane_state->hwindex = idx;
+
+ group_free_planes[plane->group->index] &=
+ ~rcar_du_plane_hwmask(plane_state);
+ }
+
+ return 0;
+}
+
+struct rcar_du_commit {
+ struct work_struct work;
+ struct drm_device *dev;
+ struct drm_atomic_state *state;
+ u32 crtcs;
+};
+
+static void rcar_du_atomic_complete(struct rcar_du_commit *commit)
+{
+ struct drm_device *dev = commit->dev;
+ struct rcar_du_device *rcdu = dev->dev_private;
+ struct drm_atomic_state *old_state = commit->state;
+
+ /* Apply the atomic update. */
+ drm_atomic_helper_commit_modeset_disables(dev, old_state);
+ drm_atomic_helper_commit_modeset_enables(dev, old_state);
+ drm_atomic_helper_commit_planes(dev, old_state);
+
+ drm_atomic_helper_wait_for_vblanks(dev, old_state);
+
+ drm_atomic_helper_cleanup_planes(dev, old_state);
+
+ drm_atomic_state_free(old_state);
+
+ /* Complete the commit, wake up any waiter. */
+ spin_lock(&rcdu->commit.wait.lock);
+ rcdu->commit.pending &= ~commit->crtcs;
+ wake_up_all_locked(&rcdu->commit.wait);
+ spin_unlock(&rcdu->commit.wait.lock);
+
+ kfree(commit);
+}
+
+static void rcar_du_atomic_work(struct work_struct *work)
+{
+ struct rcar_du_commit *commit =
+ container_of(work, struct rcar_du_commit, work);
+
+ rcar_du_atomic_complete(commit);
+}
+
+static int rcar_du_atomic_commit(struct drm_device *dev,
+ struct drm_atomic_state *state, bool async)
+{
+ struct rcar_du_device *rcdu = dev->dev_private;
+ struct rcar_du_commit *commit;
+ unsigned int i;
+ int ret;
+
+ ret = drm_atomic_helper_prepare_planes(dev, state);
+ if (ret)
+ return ret;
+
+ /* Allocate the commit object. */
+ commit = kzalloc(sizeof(*commit), GFP_KERNEL);
+ if (commit == NULL)
+ return -ENOMEM;
+
+ INIT_WORK(&commit->work, rcar_du_atomic_work);
+ commit->dev = dev;
+ commit->state = state;
+
+ /* Wait until all affected CRTCs have completed previous commits and
+ * mark them as pending.
+ */
+ for (i = 0; i < dev->mode_config.num_crtc; ++i) {
+ if (state->crtcs[i])
+ commit->crtcs |= 1 << drm_crtc_index(state->crtcs[i]);
+ }
+
+ spin_lock(&rcdu->commit.wait.lock);
+ ret = wait_event_interruptible_locked(rcdu->commit.wait,
+ !(rcdu->commit.pending & commit->crtcs));
+ if (ret == 0)
+ rcdu->commit.pending |= commit->crtcs;
+ spin_unlock(&rcdu->commit.wait.lock);
+
+ if (ret) {
+ kfree(commit);
+ return ret;
+ }
+
+ /* Swap the state, this is the point of no return. */
+ drm_atomic_helper_swap_state(dev, state);
+
+ if (async)
+ schedule_work(&commit->work);
+ else
+ rcar_du_atomic_complete(commit);
+
+ return 0;
+}
+
+/* -----------------------------------------------------------------------------
+ * Initialization
+ */
+
static const struct drm_mode_config_funcs rcar_du_mode_config_funcs = {
.fb_create = rcar_du_fb_create,
.output_poll_changed = rcar_du_output_poll_changed,
+ .atomic_check = rcar_du_atomic_check,
+ .atomic_commit = rcar_du_atomic_commit,
};
static int rcar_du_encoders_init_one(struct rcar_du_device *rcdu,
@@ -206,7 +509,7 @@
enum rcar_du_encoder_type enc_type = RCAR_DU_ENCODER_NONE;
struct device_node *connector = NULL;
struct device_node *encoder = NULL;
- struct device_node *prev = NULL;
+ struct device_node *ep_node = NULL;
struct device_node *entity_ep_node;
struct device_node *entity;
int ret;
@@ -224,16 +527,7 @@
entity_ep_node = of_parse_phandle(ep->local_node, "remote-endpoint", 0);
- while (1) {
- struct device_node *ep_node;
-
- ep_node = of_graph_get_next_endpoint(entity, prev);
- of_node_put(prev);
- prev = ep_node;
-
- if (!ep_node)
- break;
-
+ for_each_endpoint_of_node(entity, ep_node) {
if (ep_node == entity_ep_node)
continue;
@@ -300,27 +594,19 @@
static int rcar_du_encoders_init(struct rcar_du_device *rcdu)
{
struct device_node *np = rcdu->dev->of_node;
- struct device_node *prev = NULL;
+ struct device_node *ep_node;
unsigned int num_encoders = 0;
/*
* Iterate over the endpoints and create one encoder for each output
* pipeline.
*/
- while (1) {
- struct device_node *ep_node;
+ for_each_endpoint_of_node(np, ep_node) {
enum rcar_du_output output;
struct of_endpoint ep;
unsigned int i;
int ret;
- ep_node = of_graph_get_next_endpoint(np, prev);
- of_node_put(prev);
- prev = ep_node;
-
- if (ep_node == NULL)
- break;
-
ret = of_graph_parse_endpoint(ep_node, &ep);
if (ret < 0) {
of_node_put(ep_node);
@@ -392,6 +678,8 @@
for (i = 0; i < num_groups; ++i) {
struct rcar_du_group *rgrp = &rcdu->groups[i];
+ mutex_init(&rgrp->lock);
+
rgrp->dev = rcdu;
rgrp->mmio_offset = mmio_offsets[i];
rgrp->index = i;
@@ -439,27 +727,21 @@
encoder->possible_clones = (1 << num_encoders) - 1;
}
- /* Now that the CRTCs have been initialized register the planes. */
- for (i = 0; i < num_groups; ++i) {
- ret = rcar_du_planes_register(&rcdu->groups[i]);
- if (ret < 0)
- return ret;
- }
+ drm_mode_config_reset(dev);
drm_kms_helper_poll_init(dev);
- drm_helper_disable_unused_functions(dev);
+ if (dev->mode_config.num_connector) {
+ fbdev = drm_fbdev_cma_init(dev, 32, dev->mode_config.num_crtc,
+ dev->mode_config.num_connector);
+ if (IS_ERR(fbdev))
+ return PTR_ERR(fbdev);
- fbdev = drm_fbdev_cma_init(dev, 32, dev->mode_config.num_crtc,
- dev->mode_config.num_connector);
- if (IS_ERR(fbdev))
- return PTR_ERR(fbdev);
-
-#ifndef CONFIG_FRAMEBUFFER_CONSOLE
- drm_fbdev_cma_restore_mode(fbdev);
-#endif
-
- rcdu->fbdev = fbdev;
+ rcdu->fbdev = fbdev;
+ } else {
+ dev_info(rcdu->dev,
+ "no connector found, disabling fbdev emulation\n");
+ }
return 0;
}
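/*
 * Illustrative aside, not part of the patch above: a minimal sketch of the
 * commit-serialization pattern that rcar_du_atomic_commit() uses.  Each bit in
 * a "pending" mask stands for a CRTC with a commit in flight; a new commit
 * sleeps until none of its CRTCs are pending, then claims them, and the
 * completion path clears the bits and wakes any waiters.  The my_device,
 * my_commit_begin and my_commit_complete names are hypothetical; the
 * wait-queue APIs are the same ones used in the driver code above.
 * init_waitqueue_head(&dev->commit_wait) is assumed to have been called at
 * probe time.
 */
#include <linux/spinlock.h>
#include <linux/types.h>
#include <linux/wait.h>

struct my_device {
	wait_queue_head_t commit_wait;	/* its lock also protects @pending */
	u32 pending;			/* bitmask of CRTCs with a commit in flight */
};

static int my_commit_begin(struct my_device *dev, u32 crtcs)
{
	int ret;

	spin_lock(&dev->commit_wait.lock);
	/* Sleep until none of the affected CRTCs have a pending commit. */
	ret = wait_event_interruptible_locked(dev->commit_wait,
					      !(dev->pending & crtcs));
	if (ret == 0)
		dev->pending |= crtcs;		/* claim the CRTCs */
	spin_unlock(&dev->commit_wait.lock);

	return ret;
}

static void my_commit_complete(struct my_device *dev, u32 crtcs)
{
	spin_lock(&dev->commit_wait.lock);
	dev->pending &= ~crtcs;			/* release the CRTCs */
	wake_up_all_locked(&dev->commit_wait);
	spin_unlock(&dev->commit_wait.lock);
}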
diff --git a/drivers/gpu/drm/rcar-du/rcar_du_lvdscon.c b/drivers/gpu/drm/rcar-du/rcar_du_lvdscon.c
index 6d9811c..0c43032 100644
--- a/drivers/gpu/drm/rcar-du/rcar_du_lvdscon.c
+++ b/drivers/gpu/drm/rcar-du/rcar_du_lvdscon.c
@@ -12,6 +12,7 @@
*/
#include <drm/drmP.h>
+#include <drm/drm_atomic_helper.h>
#include <drm/drm_crtc.h>
#include <drm/drm_crtc_helper.h>
@@ -74,10 +75,13 @@
}
static const struct drm_connector_funcs connector_funcs = {
- .dpms = drm_helper_connector_dpms,
+ .dpms = drm_atomic_helper_connector_dpms,
+ .reset = drm_atomic_helper_connector_reset,
.detect = rcar_du_lvds_connector_detect,
.fill_modes = drm_helper_probe_single_connector_modes,
.destroy = rcar_du_lvds_connector_destroy,
+ .atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
+ .atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
};
int rcar_du_lvds_connector_init(struct rcar_du_device *rcdu,
@@ -117,7 +121,7 @@
if (ret < 0)
return ret;
- drm_helper_connector_dpms(connector, DRM_MODE_DPMS_OFF);
+ connector->dpms = DRM_MODE_DPMS_OFF;
drm_object_property_set_value(&connector->base,
rcdu->ddev->mode_config.dpms_property, DRM_MODE_DPMS_OFF);
@@ -125,7 +129,6 @@
if (ret < 0)
return ret;
- connector->encoder = encoder;
lvdscon->connector.encoder = renc;
return 0;
diff --git a/drivers/gpu/drm/rcar-du/rcar_du_lvdsenc.c b/drivers/gpu/drm/rcar-du/rcar_du_lvdsenc.c
index 7cfb48c..85043c5 100644
--- a/drivers/gpu/drm/rcar-du/rcar_du_lvdsenc.c
+++ b/drivers/gpu/drm/rcar-du/rcar_du_lvdsenc.c
@@ -28,7 +28,7 @@
unsigned int index;
void __iomem *mmio;
struct clk *clock;
- int dpms;
+ bool enabled;
enum rcar_lvds_input input;
};
@@ -48,7 +48,7 @@
u32 pllcr;
int ret;
- if (lvds->dpms == DRM_MODE_DPMS_ON)
+ if (lvds->enabled)
return 0;
ret = clk_prepare_enable(lvds->clock);
@@ -110,13 +110,13 @@
lvdcr0 |= LVDCR0_LVRES;
rcar_lvds_write(lvds, LVDCR0, lvdcr0);
- lvds->dpms = DRM_MODE_DPMS_ON;
+ lvds->enabled = true;
return 0;
}
static void rcar_du_lvdsenc_stop(struct rcar_du_lvdsenc *lvds)
{
- if (lvds->dpms == DRM_MODE_DPMS_OFF)
+ if (!lvds->enabled)
return;
rcar_lvds_write(lvds, LVDCR0, 0);
@@ -124,13 +124,13 @@
clk_disable_unprepare(lvds->clock);
- lvds->dpms = DRM_MODE_DPMS_OFF;
+ lvds->enabled = false;
}
-int rcar_du_lvdsenc_dpms(struct rcar_du_lvdsenc *lvds,
- struct drm_crtc *crtc, int mode)
+int rcar_du_lvdsenc_enable(struct rcar_du_lvdsenc *lvds, struct drm_crtc *crtc,
+ bool enable)
{
- if (mode == DRM_MODE_DPMS_OFF) {
+ if (!enable) {
rcar_du_lvdsenc_stop(lvds);
return 0;
} else if (crtc) {
@@ -179,7 +179,7 @@
lvds->dev = rcdu;
lvds->index = i;
lvds->input = i ? RCAR_LVDS_INPUT_DU1 : RCAR_LVDS_INPUT_DU0;
- lvds->dpms = DRM_MODE_DPMS_OFF;
+ lvds->enabled = false;
ret = rcar_du_lvdsenc_get_resources(lvds, pdev);
if (ret < 0)
diff --git a/drivers/gpu/drm/rcar-du/rcar_du_lvdsenc.h b/drivers/gpu/drm/rcar-du/rcar_du_lvdsenc.h
index f65aabd..9a6001c 100644
--- a/drivers/gpu/drm/rcar-du/rcar_du_lvdsenc.h
+++ b/drivers/gpu/drm/rcar-du/rcar_du_lvdsenc.h
@@ -28,15 +28,15 @@
#if IS_ENABLED(CONFIG_DRM_RCAR_LVDS)
int rcar_du_lvdsenc_init(struct rcar_du_device *rcdu);
-int rcar_du_lvdsenc_dpms(struct rcar_du_lvdsenc *lvds,
- struct drm_crtc *crtc, int mode);
+int rcar_du_lvdsenc_enable(struct rcar_du_lvdsenc *lvds,
+ struct drm_crtc *crtc, bool enable);
#else
static inline int rcar_du_lvdsenc_init(struct rcar_du_device *rcdu)
{
return 0;
}
-static inline int rcar_du_lvdsenc_dpms(struct rcar_du_lvdsenc *lvds,
- struct drm_crtc *crtc, int mode)
+static inline int rcar_du_lvdsenc_enable(struct rcar_du_lvdsenc *lvds,
+ struct drm_crtc *crtc, bool enable)
{
return 0;
}
diff --git a/drivers/gpu/drm/rcar-du/rcar_du_plane.c b/drivers/gpu/drm/rcar-du/rcar_du_plane.c
index 50f2f2b..210e5c3 100644
--- a/drivers/gpu/drm/rcar-du/rcar_du_plane.c
+++ b/drivers/gpu/drm/rcar-du/rcar_du_plane.c
@@ -12,10 +12,12 @@
*/
#include <drm/drmP.h>
+#include <drm/drm_atomic_helper.h>
#include <drm/drm_crtc.h>
#include <drm/drm_crtc_helper.h>
#include <drm/drm_fb_cma_helper.h>
#include <drm/drm_gem_cma_helper.h>
+#include <drm/drm_plane_helper.h>
#include "rcar_du_drv.h"
#include "rcar_du_kms.h"
@@ -26,16 +28,6 @@
#define RCAR_DU_COLORKEY_SOURCE (1 << 24)
#define RCAR_DU_COLORKEY_MASK (1 << 24)
-struct rcar_du_kms_plane {
- struct drm_plane plane;
- struct rcar_du_plane *hwplane;
-};
-
-static inline struct rcar_du_plane *to_rcar_plane(struct drm_plane *plane)
-{
- return container_of(plane, struct rcar_du_kms_plane, plane)->hwplane;
-}
-
static u32 rcar_du_plane_read(struct rcar_du_group *rgrp,
unsigned int index, u32 reg)
{
@@ -50,74 +42,31 @@
data);
}
-int rcar_du_plane_reserve(struct rcar_du_plane *plane,
- const struct rcar_du_format_info *format)
+static void rcar_du_plane_setup_fb(struct rcar_du_plane *plane)
{
+ struct rcar_du_plane_state *state =
+ to_rcar_du_plane_state(plane->plane.state);
+ struct drm_framebuffer *fb = plane->plane.state->fb;
struct rcar_du_group *rgrp = plane->group;
- unsigned int i;
- int ret = -EBUSY;
-
- mutex_lock(&rgrp->planes.lock);
-
- for (i = 0; i < ARRAY_SIZE(rgrp->planes.planes); ++i) {
- if (!(rgrp->planes.free & (1 << i)))
- continue;
-
- if (format->planes == 1 ||
- rgrp->planes.free & (1 << ((i + 1) % 8)))
- break;
- }
-
- if (i == ARRAY_SIZE(rgrp->planes.planes))
- goto done;
-
- rgrp->planes.free &= ~(1 << i);
- if (format->planes == 2)
- rgrp->planes.free &= ~(1 << ((i + 1) % 8));
-
- plane->hwindex = i;
-
- ret = 0;
-
-done:
- mutex_unlock(&rgrp->planes.lock);
- return ret;
-}
-
-void rcar_du_plane_release(struct rcar_du_plane *plane)
-{
- struct rcar_du_group *rgrp = plane->group;
-
- if (plane->hwindex == -1)
- return;
-
- mutex_lock(&rgrp->planes.lock);
- rgrp->planes.free |= 1 << plane->hwindex;
- if (plane->format->planes == 2)
- rgrp->planes.free |= 1 << ((plane->hwindex + 1) % 8);
- mutex_unlock(&rgrp->planes.lock);
-
- plane->hwindex = -1;
-}
-
-void rcar_du_plane_update_base(struct rcar_du_plane *plane)
-{
- struct rcar_du_group *rgrp = plane->group;
- unsigned int index = plane->hwindex;
+ unsigned int src_x = state->state.src_x >> 16;
+ unsigned int src_y = state->state.src_y >> 16;
+ unsigned int index = state->hwindex;
+ struct drm_gem_cma_object *gem;
bool interlaced;
u32 mwr;
- interlaced = plane->crtc->mode.flags & DRM_MODE_FLAG_INTERLACE;
+ interlaced = state->state.crtc->state->adjusted_mode.flags
+ & DRM_MODE_FLAG_INTERLACE;
/* Memory pitch (expressed in pixels). Must be doubled for interlaced
* operation with 32bpp formats.
*/
- if (plane->format->planes == 2)
- mwr = plane->pitch;
+ if (state->format->planes == 2)
+ mwr = fb->pitches[0];
else
- mwr = plane->pitch * 8 / plane->format->bpp;
+ mwr = fb->pitches[0] * 8 / state->format->bpp;
- if (interlaced && plane->format->bpp == 32)
+ if (interlaced && state->format->bpp == 32)
mwr *= 2;
rcar_du_plane_write(rgrp, index, PnMWR, mwr);
@@ -134,42 +83,33 @@
* require a halved Y position value, in both progressive and interlaced
* modes.
*/
- rcar_du_plane_write(rgrp, index, PnSPXR, plane->src_x);
- rcar_du_plane_write(rgrp, index, PnSPYR, plane->src_y *
- (!interlaced && plane->format->bpp == 32 ? 2 : 1));
- rcar_du_plane_write(rgrp, index, PnDSA0R, plane->dma[0]);
-
- if (plane->format->planes == 2) {
- index = (index + 1) % 8;
-
- rcar_du_plane_write(rgrp, index, PnMWR, plane->pitch);
-
- rcar_du_plane_write(rgrp, index, PnSPXR, plane->src_x);
- rcar_du_plane_write(rgrp, index, PnSPYR, plane->src_y *
- (plane->format->bpp == 16 ? 2 : 1) / 2);
- rcar_du_plane_write(rgrp, index, PnDSA0R, plane->dma[1]);
- }
-}
-
-void rcar_du_plane_compute_base(struct rcar_du_plane *plane,
- struct drm_framebuffer *fb)
-{
- struct drm_gem_cma_object *gem;
-
- plane->pitch = fb->pitches[0];
+ rcar_du_plane_write(rgrp, index, PnSPXR, src_x);
+ rcar_du_plane_write(rgrp, index, PnSPYR, src_y *
+ (!interlaced && state->format->bpp == 32 ? 2 : 1));
gem = drm_fb_cma_get_gem_obj(fb, 0);
- plane->dma[0] = gem->paddr + fb->offsets[0];
+ rcar_du_plane_write(rgrp, index, PnDSA0R, gem->paddr + fb->offsets[0]);
- if (plane->format->planes == 2) {
+ if (state->format->planes == 2) {
+ index = (index + 1) % 8;
+
+ rcar_du_plane_write(rgrp, index, PnMWR, fb->pitches[0]);
+
+ rcar_du_plane_write(rgrp, index, PnSPXR, src_x);
+ rcar_du_plane_write(rgrp, index, PnSPYR, src_y *
+ (state->format->bpp == 16 ? 2 : 1) / 2);
+
gem = drm_fb_cma_get_gem_obj(fb, 1);
- plane->dma[1] = gem->paddr + fb->offsets[1];
+ rcar_du_plane_write(rgrp, index, PnDSA0R,
+ gem->paddr + fb->offsets[1]);
}
}
static void rcar_du_plane_setup_mode(struct rcar_du_plane *plane,
unsigned int index)
{
+ struct rcar_du_plane_state *state =
+ to_rcar_du_plane_state(plane->plane.state);
struct rcar_du_group *rgrp = plane->group;
u32 colorkey;
u32 pnmr;
@@ -183,47 +123,47 @@
* For XRGB, set the alpha value to the plane-wide alpha value and
* enable alpha-blending regardless of the X bit value.
*/
- if (plane->format->fourcc != DRM_FORMAT_XRGB1555)
+ if (state->format->fourcc != DRM_FORMAT_XRGB1555)
rcar_du_plane_write(rgrp, index, PnALPHAR, PnALPHAR_ABIT_0);
else
rcar_du_plane_write(rgrp, index, PnALPHAR,
- PnALPHAR_ABIT_X | plane->alpha);
+ PnALPHAR_ABIT_X | state->alpha);
- pnmr = PnMR_BM_MD | plane->format->pnmr;
+ pnmr = PnMR_BM_MD | state->format->pnmr;
/* Disable color keying when requested. YUV formats have the
* PnMR_SPIM_TP_OFF bit set in their pnmr field, disabling color keying
* automatically.
*/
- if ((plane->colorkey & RCAR_DU_COLORKEY_MASK) == RCAR_DU_COLORKEY_NONE)
+ if ((state->colorkey & RCAR_DU_COLORKEY_MASK) == RCAR_DU_COLORKEY_NONE)
pnmr |= PnMR_SPIM_TP_OFF;
/* For packed YUV formats we need to select the U/V order. */
- if (plane->format->fourcc == DRM_FORMAT_YUYV)
+ if (state->format->fourcc == DRM_FORMAT_YUYV)
pnmr |= PnMR_YCDF_YUYV;
rcar_du_plane_write(rgrp, index, PnMR, pnmr);
- switch (plane->format->fourcc) {
+ switch (state->format->fourcc) {
case DRM_FORMAT_RGB565:
- colorkey = ((plane->colorkey & 0xf80000) >> 8)
- | ((plane->colorkey & 0x00fc00) >> 5)
- | ((plane->colorkey & 0x0000f8) >> 3);
+ colorkey = ((state->colorkey & 0xf80000) >> 8)
+ | ((state->colorkey & 0x00fc00) >> 5)
+ | ((state->colorkey & 0x0000f8) >> 3);
rcar_du_plane_write(rgrp, index, PnTC2R, colorkey);
break;
case DRM_FORMAT_ARGB1555:
case DRM_FORMAT_XRGB1555:
- colorkey = ((plane->colorkey & 0xf80000) >> 9)
- | ((plane->colorkey & 0x00f800) >> 6)
- | ((plane->colorkey & 0x0000f8) >> 3);
+ colorkey = ((state->colorkey & 0xf80000) >> 9)
+ | ((state->colorkey & 0x00f800) >> 6)
+ | ((state->colorkey & 0x0000f8) >> 3);
rcar_du_plane_write(rgrp, index, PnTC2R, colorkey);
break;
case DRM_FORMAT_XRGB8888:
case DRM_FORMAT_ARGB8888:
rcar_du_plane_write(rgrp, index, PnTC3R,
- PnTC3R_CODE | (plane->colorkey & 0xffffff));
+ PnTC3R_CODE | (state->colorkey & 0xffffff));
break;
}
}
@@ -231,6 +171,8 @@
static void __rcar_du_plane_setup(struct rcar_du_plane *plane,
unsigned int index)
{
+ struct rcar_du_plane_state *state =
+ to_rcar_du_plane_state(plane->plane.state);
struct rcar_du_group *rgrp = plane->group;
u32 ddcr2 = PnDDCR2_CODE;
u32 ddcr4;
@@ -242,17 +184,17 @@
*/
ddcr4 = rcar_du_plane_read(rgrp, index, PnDDCR4);
ddcr4 &= ~PnDDCR4_EDF_MASK;
- ddcr4 |= plane->format->edf | PnDDCR4_CODE;
+ ddcr4 |= state->format->edf | PnDDCR4_CODE;
rcar_du_plane_setup_mode(plane, index);
- if (plane->format->planes == 2) {
- if (plane->hwindex != index) {
- if (plane->format->fourcc == DRM_FORMAT_NV12 ||
- plane->format->fourcc == DRM_FORMAT_NV21)
+ if (state->format->planes == 2) {
+ if (state->hwindex != index) {
+ if (state->format->fourcc == DRM_FORMAT_NV12 ||
+ state->format->fourcc == DRM_FORMAT_NV21)
ddcr2 |= PnDDCR2_Y420;
- if (plane->format->fourcc == DRM_FORMAT_NV21)
+ if (state->format->fourcc == DRM_FORMAT_NV21)
ddcr2 |= PnDDCR2_NV21;
ddcr2 |= PnDDCR2_DIVU;
@@ -265,10 +207,10 @@
rcar_du_plane_write(rgrp, index, PnDDCR4, ddcr4);
/* Destination position and size */
- rcar_du_plane_write(rgrp, index, PnDSXR, plane->width);
- rcar_du_plane_write(rgrp, index, PnDSYR, plane->height);
- rcar_du_plane_write(rgrp, index, PnDPXR, plane->dst_x);
- rcar_du_plane_write(rgrp, index, PnDPYR, plane->dst_y);
+ rcar_du_plane_write(rgrp, index, PnDSXR, plane->plane.state->crtc_w);
+ rcar_du_plane_write(rgrp, index, PnDSYR, plane->plane.state->crtc_h);
+ rcar_du_plane_write(rgrp, index, PnDPXR, plane->plane.state->crtc_x);
+ rcar_du_plane_write(rgrp, index, PnDPYR, plane->plane.state->crtc_y);
/* Wrap-around and blinking, disabled */
rcar_du_plane_write(rgrp, index, PnWASPR, 0);
@@ -279,150 +221,143 @@
void rcar_du_plane_setup(struct rcar_du_plane *plane)
{
- __rcar_du_plane_setup(plane, plane->hwindex);
- if (plane->format->planes == 2)
- __rcar_du_plane_setup(plane, (plane->hwindex + 1) % 8);
+ struct rcar_du_plane_state *state =
+ to_rcar_du_plane_state(plane->plane.state);
- rcar_du_plane_update_base(plane);
+ __rcar_du_plane_setup(plane, state->hwindex);
+ if (state->format->planes == 2)
+ __rcar_du_plane_setup(plane, (state->hwindex + 1) % 8);
+
+ rcar_du_plane_setup_fb(plane);
}
-static int
-rcar_du_plane_update(struct drm_plane *plane, struct drm_crtc *crtc,
- struct drm_framebuffer *fb, int crtc_x, int crtc_y,
- unsigned int crtc_w, unsigned int crtc_h,
- uint32_t src_x, uint32_t src_y,
- uint32_t src_w, uint32_t src_h)
+static int rcar_du_plane_atomic_check(struct drm_plane *plane,
+ struct drm_plane_state *state)
{
+ struct rcar_du_plane_state *rstate = to_rcar_du_plane_state(state);
struct rcar_du_plane *rplane = to_rcar_plane(plane);
struct rcar_du_device *rcdu = rplane->group->dev;
- const struct rcar_du_format_info *format;
- unsigned int nplanes;
- int ret;
- format = rcar_du_format_info(fb->pixel_format);
- if (format == NULL) {
- dev_dbg(rcdu->dev, "%s: unsupported format %08x\n", __func__,
- fb->pixel_format);
- return -EINVAL;
+ if (!state->fb || !state->crtc) {
+ rstate->format = NULL;
+ return 0;
}
- if (src_w >> 16 != crtc_w || src_h >> 16 != crtc_h) {
+ if (state->src_w >> 16 != state->crtc_w ||
+ state->src_h >> 16 != state->crtc_h) {
dev_dbg(rcdu->dev, "%s: scaling not supported\n", __func__);
return -EINVAL;
}
- nplanes = rplane->format ? rplane->format->planes : 0;
-
- /* Reallocate hardware planes if the number of required planes has
- * changed.
- */
- if (format->planes != nplanes) {
- rcar_du_plane_release(rplane);
- ret = rcar_du_plane_reserve(rplane, format);
- if (ret < 0)
- return ret;
+ rstate->format = rcar_du_format_info(state->fb->pixel_format);
+ if (rstate->format == NULL) {
+ dev_dbg(rcdu->dev, "%s: unsupported format %08x\n", __func__,
+ state->fb->pixel_format);
+ return -EINVAL;
}
- rplane->crtc = crtc;
- rplane->format = format;
-
- rplane->src_x = src_x >> 16;
- rplane->src_y = src_y >> 16;
- rplane->dst_x = crtc_x;
- rplane->dst_y = crtc_y;
- rplane->width = crtc_w;
- rplane->height = crtc_h;
-
- rcar_du_plane_compute_base(rplane, fb);
- rcar_du_plane_setup(rplane);
-
- mutex_lock(&rplane->group->planes.lock);
- rplane->enabled = true;
- rcar_du_crtc_update_planes(rplane->crtc);
- mutex_unlock(&rplane->group->planes.lock);
-
return 0;
}
-static int rcar_du_plane_disable(struct drm_plane *plane)
+static void rcar_du_plane_atomic_update(struct drm_plane *plane,
+ struct drm_plane_state *old_state)
{
struct rcar_du_plane *rplane = to_rcar_plane(plane);
- if (!rplane->enabled)
- return 0;
-
- mutex_lock(&rplane->group->planes.lock);
- rplane->enabled = false;
- rcar_du_crtc_update_planes(rplane->crtc);
- mutex_unlock(&rplane->group->planes.lock);
-
- rcar_du_plane_release(rplane);
-
- rplane->crtc = NULL;
- rplane->format = NULL;
-
- return 0;
+ if (plane->state->crtc)
+ rcar_du_plane_setup(rplane);
}
-/* Both the .set_property and the .update_plane operations are called with the
- * mode_config lock held. There is this no need to explicitly protect access to
- * the alpha and colorkey fields and the mode register.
- */
-static void rcar_du_plane_set_alpha(struct rcar_du_plane *plane, u32 alpha)
+static const struct drm_plane_helper_funcs rcar_du_plane_helper_funcs = {
+ .atomic_check = rcar_du_plane_atomic_check,
+ .atomic_update = rcar_du_plane_atomic_update,
+};
+
+static void rcar_du_plane_reset(struct drm_plane *plane)
{
- if (plane->alpha == alpha)
+ struct rcar_du_plane_state *state;
+
+ if (plane->state && plane->state->fb)
+ drm_framebuffer_unreference(plane->state->fb);
+
+ kfree(plane->state);
+ plane->state = NULL;
+
+ state = kzalloc(sizeof(*state), GFP_KERNEL);
+ if (state == NULL)
return;
- plane->alpha = alpha;
- if (!plane->enabled || plane->format->fourcc != DRM_FORMAT_XRGB1555)
- return;
+ state->hwindex = -1;
+ state->alpha = 255;
+ state->colorkey = RCAR_DU_COLORKEY_NONE;
+ state->zpos = plane->type == DRM_PLANE_TYPE_PRIMARY ? 0 : 1;
- rcar_du_plane_setup_mode(plane, plane->hwindex);
+ plane->state = &state->state;
+ plane->state->plane = plane;
}
-static void rcar_du_plane_set_colorkey(struct rcar_du_plane *plane,
- u32 colorkey)
+static struct drm_plane_state *
+rcar_du_plane_atomic_duplicate_state(struct drm_plane *plane)
{
- if (plane->colorkey == colorkey)
- return;
+ struct rcar_du_plane_state *state;
+ struct rcar_du_plane_state *copy;
- plane->colorkey = colorkey;
- if (!plane->enabled)
- return;
+ state = to_rcar_du_plane_state(plane->state);
+ copy = kmemdup(state, sizeof(*state), GFP_KERNEL);
+ if (copy == NULL)
+ return NULL;
- rcar_du_plane_setup_mode(plane, plane->hwindex);
+ if (copy->state.fb)
+ drm_framebuffer_reference(copy->state.fb);
+
+ return &copy->state;
}
-static void rcar_du_plane_set_zpos(struct rcar_du_plane *plane,
- unsigned int zpos)
+static void rcar_du_plane_atomic_destroy_state(struct drm_plane *plane,
+ struct drm_plane_state *state)
{
- mutex_lock(&plane->group->planes.lock);
- if (plane->zpos == zpos)
- goto done;
+ if (state->fb)
+ drm_framebuffer_unreference(state->fb);
- plane->zpos = zpos;
- if (!plane->enabled)
- goto done;
-
- rcar_du_crtc_update_planes(plane->crtc);
-
-done:
- mutex_unlock(&plane->group->planes.lock);
+ kfree(to_rcar_du_plane_state(state));
}
-static int rcar_du_plane_set_property(struct drm_plane *plane,
- struct drm_property *property,
- uint64_t value)
+static int rcar_du_plane_atomic_set_property(struct drm_plane *plane,
+ struct drm_plane_state *state,
+ struct drm_property *property,
+ uint64_t val)
{
+ struct rcar_du_plane_state *rstate = to_rcar_du_plane_state(state);
struct rcar_du_plane *rplane = to_rcar_plane(plane);
struct rcar_du_group *rgrp = rplane->group;
if (property == rgrp->planes.alpha)
- rcar_du_plane_set_alpha(rplane, value);
+ rstate->alpha = val;
else if (property == rgrp->planes.colorkey)
- rcar_du_plane_set_colorkey(rplane, value);
+ rstate->colorkey = val;
else if (property == rgrp->planes.zpos)
- rcar_du_plane_set_zpos(rplane, value);
+ rstate->zpos = val;
+ else
+ return -EINVAL;
+
+ return 0;
+}
+
+static int rcar_du_plane_atomic_get_property(struct drm_plane *plane,
+ const struct drm_plane_state *state, struct drm_property *property,
+ uint64_t *val)
+{
+ const struct rcar_du_plane_state *rstate =
+ container_of(state, const struct rcar_du_plane_state, state);
+ struct rcar_du_plane *rplane = to_rcar_plane(plane);
+ struct rcar_du_group *rgrp = rplane->group;
+
+ if (property == rgrp->planes.alpha)
+ *val = rstate->alpha;
+ else if (property == rgrp->planes.colorkey)
+ *val = rstate->colorkey;
+ else if (property == rgrp->planes.zpos)
+ *val = rstate->zpos;
else
return -EINVAL;
@@ -430,10 +365,15 @@
}
static const struct drm_plane_funcs rcar_du_plane_funcs = {
- .update_plane = rcar_du_plane_update,
- .disable_plane = rcar_du_plane_disable,
- .set_property = rcar_du_plane_set_property,
+ .update_plane = drm_atomic_helper_update_plane,
+ .disable_plane = drm_atomic_helper_disable_plane,
+ .reset = rcar_du_plane_reset,
+ .set_property = drm_atomic_helper_plane_set_property,
.destroy = drm_plane_cleanup,
+ .atomic_duplicate_state = rcar_du_plane_atomic_duplicate_state,
+ .atomic_destroy_state = rcar_du_plane_atomic_destroy_state,
+ .atomic_set_property = rcar_du_plane_atomic_set_property,
+ .atomic_get_property = rcar_du_plane_atomic_get_property,
};
static const uint32_t formats[] = {
@@ -453,10 +393,11 @@
{
struct rcar_du_planes *planes = &rgrp->planes;
struct rcar_du_device *rcdu = rgrp->dev;
+ unsigned int num_planes;
+ unsigned int num_crtcs;
+ unsigned int crtcs;
unsigned int i;
-
- mutex_init(&planes->lock);
- planes->free = 0xff;
+ int ret;
planes->alpha =
drm_property_create_range(rcdu->ddev, 0, "alpha", 0, 255);
@@ -478,45 +419,34 @@
if (planes->zpos == NULL)
return -ENOMEM;
- for (i = 0; i < ARRAY_SIZE(planes->planes); ++i) {
- struct rcar_du_plane *plane = &planes->planes[i];
-
- plane->group = rgrp;
- plane->hwindex = -1;
- plane->alpha = 255;
- plane->colorkey = RCAR_DU_COLORKEY_NONE;
- plane->zpos = 0;
- }
-
- return 0;
-}
-
-int rcar_du_planes_register(struct rcar_du_group *rgrp)
-{
- struct rcar_du_planes *planes = &rgrp->planes;
- struct rcar_du_device *rcdu = rgrp->dev;
- unsigned int crtcs;
- unsigned int i;
- int ret;
+ /* Create one primary plane per CRTC in this group and seven overlay
+ * planes.
+ */
+ num_crtcs = min(rcdu->num_crtcs - 2 * rgrp->index, 2U);
+ num_planes = num_crtcs + 7;
crtcs = ((1 << rcdu->num_crtcs) - 1) & (3 << (2 * rgrp->index));
- for (i = 0; i < RCAR_DU_NUM_KMS_PLANES; ++i) {
- struct rcar_du_kms_plane *plane;
+ for (i = 0; i < num_planes; ++i) {
+ enum drm_plane_type type = i < num_crtcs
+ ? DRM_PLANE_TYPE_PRIMARY
+ : DRM_PLANE_TYPE_OVERLAY;
+ struct rcar_du_plane *plane = &planes->planes[i];
- plane = devm_kzalloc(rcdu->dev, sizeof(*plane), GFP_KERNEL);
- if (plane == NULL)
- return -ENOMEM;
+ plane->group = rgrp;
- plane->hwplane = &planes->planes[i + 2];
- plane->hwplane->zpos = 1;
-
- ret = drm_plane_init(rcdu->ddev, &plane->plane, crtcs,
- &rcar_du_plane_funcs, formats,
- ARRAY_SIZE(formats), false);
+ ret = drm_universal_plane_init(rcdu->ddev, &plane->plane, crtcs,
+ &rcar_du_plane_funcs, formats,
+ ARRAY_SIZE(formats), type);
if (ret < 0)
return ret;
+ drm_plane_helper_add(&plane->plane,
+ &rcar_du_plane_helper_funcs);
+
+ if (type == DRM_PLANE_TYPE_PRIMARY)
+ continue;
+
drm_object_attach_property(&plane->plane.base,
planes->alpha, 255);
drm_object_attach_property(&plane->plane.base,
diff --git a/drivers/gpu/drm/rcar-du/rcar_du_plane.h b/drivers/gpu/drm/rcar-du/rcar_du_plane.h
index 3021288..abff0eb 100644
--- a/drivers/gpu/drm/rcar-du/rcar_du_plane.h
+++ b/drivers/gpu/drm/rcar-du/rcar_du_plane.h
@@ -14,68 +14,57 @@
#ifndef __RCAR_DU_PLANE_H__
#define __RCAR_DU_PLANE_H__
-#include <linux/mutex.h>
-
#include <drm/drmP.h>
#include <drm/drm_crtc.h>
struct rcar_du_format_info;
struct rcar_du_group;
-/* The RCAR DU has 8 hardware planes, shared between KMS planes and CRTCs. As
- * using KMS planes requires at least one of the CRTCs being enabled, no more
- * than 7 KMS planes can be available. We thus create 7 KMS planes and
- * 9 software planes (one for each KMS planes and one for each CRTC).
+/* The RCAR DU has 8 hardware planes, shared between primary and overlay planes.
+ * As using overlay planes requires at least one of the CRTCs being enabled, no
+ * more than 7 overlay planes can be available. We thus create 1 primary plane
+ * per CRTC and 7 overlay planes, for a total of up to 9 KMS planes.
*/
-
-#define RCAR_DU_NUM_KMS_PLANES 7
+#define RCAR_DU_NUM_KMS_PLANES 9
#define RCAR_DU_NUM_HW_PLANES 8
-#define RCAR_DU_NUM_SW_PLANES 9
struct rcar_du_plane {
+ struct drm_plane plane;
struct rcar_du_group *group;
- struct drm_crtc *crtc;
-
- bool enabled;
-
- int hwindex; /* 0-based, -1 means unused */
- unsigned int alpha;
- unsigned int colorkey;
- unsigned int zpos;
-
- const struct rcar_du_format_info *format;
-
- unsigned long dma[2];
- unsigned int pitch;
-
- unsigned int width;
- unsigned int height;
-
- unsigned int src_x;
- unsigned int src_y;
- unsigned int dst_x;
- unsigned int dst_y;
};
+static inline struct rcar_du_plane *to_rcar_plane(struct drm_plane *plane)
+{
+ return container_of(plane, struct rcar_du_plane, plane);
+}
+
struct rcar_du_planes {
- struct rcar_du_plane planes[RCAR_DU_NUM_SW_PLANES];
- unsigned int free;
- struct mutex lock;
+ struct rcar_du_plane planes[RCAR_DU_NUM_KMS_PLANES];
struct drm_property *alpha;
struct drm_property *colorkey;
struct drm_property *zpos;
};
+struct rcar_du_plane_state {
+ struct drm_plane_state state;
+
+ const struct rcar_du_format_info *format;
+ int hwindex; /* 0-based, -1 means unused */
+
+ unsigned int alpha;
+ unsigned int colorkey;
+ unsigned int zpos;
+};
+
+static inline struct rcar_du_plane_state *
+to_rcar_du_plane_state(struct drm_plane_state *state)
+{
+ return container_of(state, struct rcar_du_plane_state, state);
+}
+
int rcar_du_planes_init(struct rcar_du_group *rgrp);
-int rcar_du_planes_register(struct rcar_du_group *rgrp);
void rcar_du_plane_setup(struct rcar_du_plane *plane);
-void rcar_du_plane_update_base(struct rcar_du_plane *plane);
-void rcar_du_plane_compute_base(struct rcar_du_plane *plane,
- struct drm_framebuffer *fb);
-int rcar_du_plane_reserve(struct rcar_du_plane *plane,
- const struct rcar_du_format_info *format);
-void rcar_du_plane_release(struct rcar_du_plane *plane);
#endif /* __RCAR_DU_PLANE_H__ */
diff --git a/drivers/gpu/drm/rcar-du/rcar_du_vgacon.c b/drivers/gpu/drm/rcar-du/rcar_du_vgacon.c
index 9d48799..e0a5d8f 100644
--- a/drivers/gpu/drm/rcar-du/rcar_du_vgacon.c
+++ b/drivers/gpu/drm/rcar-du/rcar_du_vgacon.c
@@ -12,6 +12,7 @@
*/
#include <drm/drmP.h>
+#include <drm/drm_atomic_helper.h>
#include <drm/drm_crtc.h>
#include <drm/drm_crtc_helper.h>
@@ -43,10 +44,13 @@
}
static const struct drm_connector_funcs connector_funcs = {
- .dpms = drm_helper_connector_dpms,
+ .dpms = drm_atomic_helper_connector_dpms,
+ .reset = drm_atomic_helper_connector_reset,
.detect = rcar_du_vga_connector_detect,
.fill_modes = drm_helper_probe_single_connector_modes,
.destroy = rcar_du_vga_connector_destroy,
+ .atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
+ .atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
};
int rcar_du_vga_connector_init(struct rcar_du_device *rcdu,
@@ -76,7 +80,7 @@
if (ret < 0)
return ret;
- drm_helper_connector_dpms(connector, DRM_MODE_DPMS_OFF);
+ connector->dpms = DRM_MODE_DPMS_OFF;
drm_object_property_set_value(&connector->base,
rcdu->ddev->mode_config.dpms_property, DRM_MODE_DPMS_OFF);
@@ -84,7 +88,6 @@
if (ret < 0)
return ret;
- connector->encoder = encoder;
rcon->encoder = renc;
return 0;
diff --git a/drivers/gpu/drm/rockchip/dw_hdmi-rockchip.c b/drivers/gpu/drm/rockchip/dw_hdmi-rockchip.c
index d236faa..80d6fc8 100644
--- a/drivers/gpu/drm/rockchip/dw_hdmi-rockchip.c
+++ b/drivers/gpu/drm/rockchip/dw_hdmi-rockchip.c
@@ -133,12 +133,12 @@
}
};
-static const struct dw_hdmi_sym_term rockchip_sym_term[] = {
- /*pixelclk symbol term*/
- { 74250000, 0x8009, 0x0004 },
- { 148500000, 0x8029, 0x0004 },
- { 297000000, 0x8039, 0x0005 },
- { ~0UL, 0x0000, 0x0000 }
+static const struct dw_hdmi_phy_config rockchip_phy_config[] = {
+ /*pixelclk symbol term vlev*/
+ { 74250000, 0x8009, 0x0004, 0x0272},
+ { 148500000, 0x802b, 0x0004, 0x028d},
+ { 297000000, 0x8039, 0x0005, 0x028d},
+ { ~0UL, 0x0000, 0x0000, 0x0000}
};
static int rockchip_hdmi_parse_dt(struct rockchip_hdmi *hdmi)
@@ -230,7 +230,7 @@
.mode_valid = dw_hdmi_rockchip_mode_valid,
.mpll_cfg = rockchip_mpll_cfg,
.cur_ctr = rockchip_cur_ctr,
- .sym_term = rockchip_sym_term,
+ .phy_config = rockchip_phy_config,
.dev_type = RK3288_HDMI,
};
diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_drv.c b/drivers/gpu/drm/rockchip/rockchip_drm_drv.c
index 21a481b..3962176 100644
--- a/drivers/gpu/drm/rockchip/rockchip_drm_drv.c
+++ b/drivers/gpu/drm/rockchip/rockchip_drm_drv.c
@@ -129,6 +129,7 @@
struct rockchip_drm_private *private;
struct dma_iommu_mapping *mapping;
struct device *dev = drm_dev->dev;
+ struct drm_connector *connector;
int ret;
private = devm_kzalloc(drm_dev->dev, sizeof(*private), GFP_KERNEL);
@@ -171,6 +172,23 @@
if (ret)
goto err_detach_device;
+ /*
+ * All components are now added, so we can publish the connector sysfs
+ * entries to userspace. This will generate hotplug events, and userspace
+ * will expect to be able to access DRM from this point on.
+ */
+ list_for_each_entry(connector, &drm_dev->mode_config.connector_list,
+ head) {
+ ret = drm_connector_register(connector);
+ if (ret) {
+ dev_err(drm_dev->dev,
+ "[CONNECTOR:%d:%s] drm_connector_register failed: %d\n",
+ connector->base.id,
+ connector->name, ret);
+ goto err_unbind;
+ }
+ }
+
/* init kms poll for handling hpd */
drm_kms_helper_poll_init(drm_dev);
@@ -200,6 +218,7 @@
drm_vblank_cleanup(drm_dev);
err_kms_helper_poll_fini:
drm_kms_helper_poll_fini(drm_dev);
+err_unbind:
component_unbind_all(dev, drm_dev);
err_detach_device:
arm_iommu_detach_device(dev);
@@ -366,7 +385,7 @@
int rockchip_drm_encoder_get_mux_id(struct device_node *node,
struct drm_encoder *encoder)
{
- struct device_node *ep = NULL;
+ struct device_node *ep;
struct drm_crtc *crtc = encoder->crtc;
struct of_endpoint endpoint;
struct device_node *port;
@@ -375,18 +394,15 @@
if (!node || !crtc)
return -EINVAL;
- do {
- ep = of_graph_get_next_endpoint(node, ep);
- if (!ep)
- break;
-
+ for_each_endpoint_of_node(node, ep) {
port = of_graph_get_remote_port(ep);
of_node_put(port);
if (port == crtc->port) {
ret = of_graph_parse_endpoint(ep, &endpoint);
+ of_node_put(ep);
return ret ?: endpoint.id;
}
- } while (ep);
+ }
return -EINVAL;
}
diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_fbdev.c b/drivers/gpu/drm/rockchip/rockchip_drm_fbdev.c
index a5d889a..5b0dc0f 100644
--- a/drivers/gpu/drm/rockchip/rockchip_drm_fbdev.c
+++ b/drivers/gpu/drm/rockchip/rockchip_drm_fbdev.c
@@ -71,7 +71,7 @@
size = mode_cmd.pitches[0] * mode_cmd.height;
- rk_obj = rockchip_gem_create_object(dev, size);
+ rk_obj = rockchip_gem_create_object(dev, size, true);
if (IS_ERR(rk_obj))
return -ENOMEM;
@@ -106,7 +106,7 @@
fb = helper->fb;
drm_fb_helper_fill_fix(fbi, fb->pitches[0], fb->depth);
- drm_fb_helper_fill_var(fbi, helper, fb->width, fb->height);
+ drm_fb_helper_fill_var(fbi, helper, sizes->fb_width, sizes->fb_height);
offset = fbi->var.xoffset * bytes_per_pixel;
offset += fbi->var.yoffset * fb->pitches[0];
@@ -119,6 +119,9 @@
DRM_DEBUG_KMS("FB [%dx%d]-%d kvaddr=%p offset=%ld size=%d\n",
fb->width, fb->height, fb->depth, rk_obj->kvaddr,
offset, size);
+
+ fbi->skip_vt_switch = true;
+
return 0;
err_drm_framebuffer_unref:
diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
index 7ca8799e..eb2282c 100644
--- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
+++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
@@ -22,7 +22,8 @@
#include "rockchip_drm_drv.h"
#include "rockchip_drm_gem.h"
-static int rockchip_gem_alloc_buf(struct rockchip_gem_object *rk_obj)
+static int rockchip_gem_alloc_buf(struct rockchip_gem_object *rk_obj,
+ bool alloc_kmap)
{
struct drm_gem_object *obj = &rk_obj->base;
struct drm_device *drm = obj->dev;
@@ -30,7 +31,9 @@
init_dma_attrs(&rk_obj->dma_attrs);
dma_set_attr(DMA_ATTR_WRITE_COMBINE, &rk_obj->dma_attrs);
- /* TODO(djkurtz): Use DMA_ATTR_NO_KERNEL_MAPPING except for fbdev */
+ if (!alloc_kmap)
+ dma_set_attr(DMA_ATTR_NO_KERNEL_MAPPING, &rk_obj->dma_attrs);
+
rk_obj->kvaddr = dma_alloc_attrs(drm->dev, obj->size,
&rk_obj->dma_addr, GFP_KERNEL,
&rk_obj->dma_attrs);
@@ -103,7 +106,8 @@
}
struct rockchip_gem_object *
- rockchip_gem_create_object(struct drm_device *drm, unsigned int size)
+ rockchip_gem_create_object(struct drm_device *drm, unsigned int size,
+ bool alloc_kmap)
{
struct rockchip_gem_object *rk_obj;
struct drm_gem_object *obj;
@@ -119,7 +123,7 @@
drm_gem_private_object_init(drm, obj, size);
- ret = rockchip_gem_alloc_buf(rk_obj);
+ ret = rockchip_gem_alloc_buf(rk_obj, alloc_kmap);
if (ret)
goto err_free_rk_obj;
@@ -163,7 +167,7 @@
struct drm_gem_object *obj;
int ret;
- rk_obj = rockchip_gem_create_object(drm, size);
+ rk_obj = rockchip_gem_create_object(drm, size, false);
if (IS_ERR(rk_obj))
return ERR_CAST(rk_obj);
@@ -282,6 +286,9 @@
{
struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj);
+ if (dma_get_attr(DMA_ATTR_NO_KERNEL_MAPPING, &rk_obj->dma_attrs))
+ return NULL;
+
return rk_obj->kvaddr;
}
diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.h b/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
index 67bcebe..ad22618 100644
--- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
+++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
@@ -41,7 +41,8 @@
struct vm_area_struct *vma);
struct rockchip_gem_object *
- rockchip_gem_create_object(struct drm_device *drm, unsigned int size);
+ rockchip_gem_create_object(struct drm_device *drm, unsigned int size,
+ bool alloc_kmap);
void rockchip_gem_free_object(struct drm_gem_object *obj);
diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
index 9a5c571..ccb0ce0 100644
--- a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
+++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
@@ -81,7 +81,7 @@
struct drm_crtc crtc;
struct device *dev;
struct drm_device *drm_dev;
- unsigned int dpms;
+ bool is_enabled;
int connector_type;
int connector_out_mode;
@@ -89,6 +89,7 @@
/* mutex vsync_ work */
struct mutex vsync_mutex;
bool vsync_work_pending;
+ struct completion dsp_hold_completion;
const struct vop_data *data;
@@ -382,11 +383,50 @@
}
}
+static void vop_dsp_hold_valid_irq_enable(struct vop *vop)
+{
+ unsigned long flags;
+
+ if (WARN_ON(!vop->is_enabled))
+ return;
+
+ spin_lock_irqsave(&vop->irq_lock, flags);
+
+ vop_mask_write(vop, INTR_CTRL0, DSP_HOLD_VALID_INTR_MASK,
+ DSP_HOLD_VALID_INTR_EN(1));
+
+ spin_unlock_irqrestore(&vop->irq_lock, flags);
+}
+
+static void vop_dsp_hold_valid_irq_disable(struct vop *vop)
+{
+ unsigned long flags;
+
+ if (WARN_ON(!vop->is_enabled))
+ return;
+
+ spin_lock_irqsave(&vop->irq_lock, flags);
+
+ vop_mask_write(vop, INTR_CTRL0, DSP_HOLD_VALID_INTR_MASK,
+ DSP_HOLD_VALID_INTR_EN(0));
+
+ spin_unlock_irqrestore(&vop->irq_lock, flags);
+}
+
static void vop_enable(struct drm_crtc *crtc)
{
struct vop *vop = to_vop(crtc);
int ret;
+ if (vop->is_enabled)
+ return;
+
+ ret = pm_runtime_get_sync(vop->dev);
+ if (ret < 0) {
+ dev_err(vop->dev, "failed to get pm runtime: %d\n", ret);
+ return;
+ }
+
ret = clk_enable(vop->hclk);
if (ret < 0) {
dev_err(vop->dev, "failed to enable hclk - %d\n", ret);
@@ -417,6 +457,11 @@
goto err_disable_aclk;
}
+ /*
+ * At this point the VOP clock and IOMMU are enabled, so it is safe to
+ * read and write VOP registers.
+ */
+ vop->is_enabled = true;
+
spin_lock(&vop->reg_lock);
VOP_CTRL_SET(vop, standby, 0);
@@ -441,28 +486,44 @@
{
struct vop *vop = to_vop(crtc);
+ if (!vop->is_enabled)
+ return;
+
drm_vblank_off(crtc->dev, vop->pipe);
- disable_irq(vop->irq);
-
/*
- * TODO: Since standby doesn't take effect until the next vblank,
- * when we turn off dclk below, the vop is probably still active.
+ * VOP standby takes effect at the end of the current frame; the
+ * "DSP hold valid" interrupt signals that standby has completed.
+ *
+ * We must wait for standby to complete before disabling aclk,
+ * otherwise the memory bus may hang.
*/
+ reinit_completion(&vop->dsp_hold_completion);
+ vop_dsp_hold_valid_irq_enable(vop);
+
spin_lock(&vop->reg_lock);
VOP_CTRL_SET(vop, standby, 1);
spin_unlock(&vop->reg_lock);
- /*
- * disable dclk to stop frame scan, so we can safely detach iommu,
- */
- clk_disable(vop->dclk);
+ wait_for_completion(&vop->dsp_hold_completion);
+
+ vop_dsp_hold_valid_irq_disable(vop);
+
+ disable_irq(vop->irq);
+
+ vop->is_enabled = false;
+
+ /*
+ * VOP standby is complete, so it is safe to detach the IOMMU.
+ */
rockchip_drm_dma_detach_device(vop->drm_dev, vop->dev);
+ clk_disable(vop->dclk);
clk_disable(vop->aclk);
clk_disable(vop->hclk);
+ pm_runtime_put(vop->dev);
}
/*
@@ -742,7 +803,7 @@
struct vop *vop = to_vop(crtc);
unsigned long flags;
- if (vop->dpms != DRM_MODE_DPMS_ON)
+ if (!vop->is_enabled)
return -EPERM;
spin_lock_irqsave(&vop->irq_lock, flags);
@@ -759,8 +820,9 @@
struct vop *vop = to_vop(crtc);
unsigned long flags;
- if (vop->dpms != DRM_MODE_DPMS_ON)
+ if (!vop->is_enabled)
return;
+
spin_lock_irqsave(&vop->irq_lock, flags);
vop_mask_write(vop, INTR_CTRL0, FS_INTR_MASK, FS_INTR_EN(0));
spin_unlock_irqrestore(&vop->irq_lock, flags);
@@ -773,15 +835,8 @@
static void vop_crtc_dpms(struct drm_crtc *crtc, int mode)
{
- struct vop *vop = to_vop(crtc);
-
DRM_DEBUG_KMS("crtc[%d] mode[%d]\n", crtc->base.id, mode);
- if (vop->dpms == mode) {
- DRM_DEBUG_KMS("desired dpms mode is same as previous one.\n");
- return;
- }
-
switch (mode) {
case DRM_MODE_DPMS_ON:
vop_enable(crtc);
@@ -795,8 +850,6 @@
DRM_DEBUG_KMS("unspecified mode %d\n", mode);
break;
}
-
- vop->dpms = mode;
}
static void vop_crtc_prepare(struct drm_crtc *crtc)
@@ -847,7 +900,7 @@
u16 vsync_len = adjusted_mode->vsync_end - adjusted_mode->vsync_start;
u16 vact_st = adjusted_mode->vtotal - adjusted_mode->vsync_start;
u16 vact_end = vact_st + vdisplay;
- int ret;
+ int ret, ret_clk;
uint32_t val;
/*
@@ -869,13 +922,14 @@
default:
DRM_ERROR("unsupport connector_type[%d]\n",
vop->connector_type);
- return -EINVAL;
+ ret = -EINVAL;
+ goto out;
};
VOP_CTRL_SET(vop, out_mode, vop->connector_out_mode);
val = 0x8;
- val |= (adjusted_mode->flags & DRM_MODE_FLAG_NHSYNC) ? 1 : 0;
- val |= (adjusted_mode->flags & DRM_MODE_FLAG_NVSYNC) ? (1 << 1) : 0;
+ val |= (adjusted_mode->flags & DRM_MODE_FLAG_NHSYNC) ? 0 : 1;
+ val |= (adjusted_mode->flags & DRM_MODE_FLAG_NVSYNC) ? 0 : (1 << 1);
VOP_CTRL_SET(vop, pin_pol, val);
VOP_CTRL_SET(vop, htotal_pw, (htotal << 16) | hsync_len);
@@ -892,7 +946,7 @@
ret = vop_crtc_mode_set_base(crtc, x, y, fb);
if (ret)
- return ret;
+ goto out;
/*
* reset dclk, take all mode config affect, so the clk would run in
@@ -903,13 +957,14 @@
reset_control_deassert(vop->dclk_rst);
clk_set_rate(vop->dclk, adjusted_mode->clock * 1000);
- ret = clk_enable(vop->dclk);
- if (ret < 0) {
- dev_err(vop->dev, "failed to enable dclk - %d\n", ret);
- return ret;
+out:
+ ret_clk = clk_enable(vop->dclk);
+ if (ret_clk < 0) {
+ dev_err(vop->dev, "failed to enable dclk - %d\n", ret_clk);
+ return ret_clk;
}
- return 0;
+ return ret;
}
static void vop_crtc_commit(struct drm_crtc *crtc)
@@ -934,9 +989,9 @@
struct drm_framebuffer *old_fb = crtc->primary->fb;
int ret;
- /* when the page flip is requested, crtc's dpms should be on */
- if (vop->dpms > DRM_MODE_DPMS_ON) {
- DRM_DEBUG("failed page flip request at dpms[%d].\n", vop->dpms);
+ /* when the page flip is requested, crtc should be on */
+ if (!vop->is_enabled) {
+ DRM_DEBUG("page flip request rejected because crtc is off.\n");
return 0;
}
@@ -1081,6 +1136,7 @@
struct vop *vop = data;
uint32_t intr0_reg, active_irqs;
unsigned long flags;
+ int ret = IRQ_NONE;
/*
* INTR_CTRL0 register has interrupt status, enable and clear bits, we
@@ -1099,15 +1155,23 @@
if (!active_irqs)
return IRQ_NONE;
- /* Only Frame Start Interrupt is enabled; other irqs are spurious. */
- if (!(active_irqs & FS_INTR)) {
- DRM_ERROR("Unknown VOP IRQs: %#02x\n", active_irqs);
- return IRQ_NONE;
+ if (active_irqs & DSP_HOLD_VALID_INTR) {
+ complete(&vop->dsp_hold_completion);
+ active_irqs &= ~DSP_HOLD_VALID_INTR;
+ ret = IRQ_HANDLED;
}
- drm_handle_vblank(vop->drm_dev, vop->pipe);
+ if (active_irqs & FS_INTR) {
+ drm_handle_vblank(vop->drm_dev, vop->pipe);
+ active_irqs &= ~FS_INTR;
+ ret = (vop->vsync_work_pending) ? IRQ_WAKE_THREAD : IRQ_HANDLED;
+ }
- return (vop->vsync_work_pending) ? IRQ_WAKE_THREAD : IRQ_HANDLED;
+ /* Unhandled irqs are spurious. */
+ if (active_irqs)
+ DRM_ERROR("Unknown VOP IRQs: %#02x\n", active_irqs);
+
+ return ret;
}
static int vop_create_crtc(struct vop *vop)
@@ -1189,6 +1253,7 @@
goto err_cleanup_crtc;
}
+ init_completion(&vop->dsp_hold_completion);
crtc->port = port;
vop->pipe = drm_crtc_index(crtc);
rockchip_register_crtc_funcs(drm_dev, &private_crtc_funcs, vop->pipe);
@@ -1302,7 +1367,7 @@
clk_disable(vop->hclk);
- vop->dpms = DRM_MODE_DPMS_OFF;
+ vop->is_enabled = false;
return 0;
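For reference, a minimal sketch of the corrected pin polarity encoding from the vop_crtc_mode_set() hunk above. The helper name is illustrative, and the assumption (implied by the fix) is that the VOP polarity bits select positive-going sync while the DRM flags mark negative sync:
static u32 vop_pin_pol_example(const struct drm_display_mode *mode)
{
	u32 val = 0x8;	/* dclk polarity bit, as in the driver */
	/* Set bit 0 / bit 1 only when the mode does NOT request negative sync. */
	val |= (mode->flags & DRM_MODE_FLAG_NHSYNC) ? 0 : (1 << 0);
	val |= (mode->flags & DRM_MODE_FLAG_NVSYNC) ? 0 : (1 << 1);
	return val;
}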
diff --git a/drivers/gpu/drm/sti/sti_drm_crtc.c b/drivers/gpu/drm/sti/sti_drm_crtc.c
index e6f6ef7..6b641c5 100644
--- a/drivers/gpu/drm/sti/sti_drm_crtc.c
+++ b/drivers/gpu/drm/sti/sti_drm_crtc.c
@@ -9,6 +9,8 @@
#include <linux/clk.h>
#include <drm/drmP.h>
+#include <drm/drm_atomic.h>
+#include <drm/drm_atomic_helper.h>
#include <drm/drm_crtc_helper.h>
#include <drm/drm_plane_helper.h>
@@ -77,22 +79,18 @@
}
static int
-sti_drm_crtc_mode_set(struct drm_crtc *crtc, struct drm_display_mode *mode,
- struct drm_display_mode *adjusted_mode, int x, int y,
- struct drm_framebuffer *old_fb)
+sti_drm_crtc_mode_set(struct drm_crtc *crtc, struct drm_display_mode *mode)
{
struct sti_mixer *mixer = to_sti_mixer(crtc);
struct device *dev = mixer->dev;
struct sti_compositor *compo = dev_get_drvdata(dev);
- struct sti_layer *layer;
struct clk *clk;
int rate = mode->clock * 1000;
int res;
- unsigned int w, h;
- DRM_DEBUG_KMS("CRTC:%d (%s) fb:%d mode:%d (%s)\n",
+ DRM_DEBUG_KMS("CRTC:%d (%s) mode:%d (%s)\n",
crtc->base.id, sti_mixer_to_str(mixer),
- crtc->primary->fb->base.id, mode->base.id, mode->name);
+ mode->base.id, mode->name);
DRM_DEBUG_KMS("%d %d %d %d %d %d %d %d %d %d 0x%x 0x%x\n",
mode->vrefresh, mode->clock,
@@ -122,72 +120,13 @@
sti_vtg_set_config(mixer->id == STI_MIXER_MAIN ?
compo->vtg_main : compo->vtg_aux, &crtc->mode);
- /* a GDP is reserved to the CRTC FB */
- layer = to_sti_layer(crtc->primary);
- if (!layer) {
- DRM_ERROR("Can not find GDP0)\n");
- return -EINVAL;
- }
-
- /* copy the mode data adjusted by mode_fixup() into crtc->mode
- * so that hardware can be set to proper mode
- */
- memcpy(&crtc->mode, adjusted_mode, sizeof(*adjusted_mode));
-
- res = sti_mixer_set_layer_depth(mixer, layer);
- if (res) {
- DRM_ERROR("Can not set layer depth\n");
- return -EINVAL;
- }
res = sti_mixer_active_video_area(mixer, &crtc->mode);
if (res) {
DRM_ERROR("Can not set active video area\n");
return -EINVAL;
}
- w = crtc->primary->fb->width - x;
- h = crtc->primary->fb->height - y;
-
- return sti_layer_prepare(layer, crtc,
- crtc->primary->fb, &crtc->mode,
- mixer->id, 0, 0, w, h, x, y, w, h);
-}
-
-static int sti_drm_crtc_mode_set_base(struct drm_crtc *crtc, int x, int y,
- struct drm_framebuffer *old_fb)
-{
- struct sti_mixer *mixer = to_sti_mixer(crtc);
- struct sti_layer *layer;
- unsigned int w, h;
- int ret;
-
- DRM_DEBUG_KMS("CRTC:%d (%s) fb:%d (%d,%d)\n",
- crtc->base.id, sti_mixer_to_str(mixer),
- crtc->primary->fb->base.id, x, y);
-
- /* GDP is reserved to the CRTC FB */
- layer = to_sti_layer(crtc->primary);
- if (!layer) {
- DRM_ERROR("Can not find GDP0)\n");
- ret = -EINVAL;
- goto out;
- }
-
- w = crtc->primary->fb->width - crtc->x;
- h = crtc->primary->fb->height - crtc->y;
-
- ret = sti_layer_prepare(layer, crtc,
- crtc->primary->fb, &crtc->mode,
- mixer->id, 0, 0, w, h,
- crtc->x, crtc->y, w, h);
- if (ret) {
- DRM_ERROR("Can not prepare layer\n");
- goto out;
- }
-
- sti_drm_crtc_commit(crtc);
-out:
- return ret;
+ return res;
}
static void sti_drm_crtc_disable(struct drm_crtc *crtc)
@@ -195,7 +134,6 @@
struct sti_mixer *mixer = to_sti_mixer(crtc);
struct device *dev = mixer->dev;
struct sti_compositor *compo = dev_get_drvdata(dev);
- struct sti_layer *layer;
if (!mixer->enabled)
return;
@@ -205,24 +143,6 @@
/* Disable Background */
sti_mixer_set_background_status(mixer, false);
- /* Disable GDP */
- layer = to_sti_layer(crtc->primary);
- if (!layer) {
- DRM_ERROR("Cannot find GDP0\n");
- return;
- }
-
- /* Disable layer at mixer level */
- if (sti_mixer_set_layer_status(mixer, layer, false))
- DRM_ERROR("Can not disable %s layer at mixer\n",
- sti_layer_to_str(layer));
-
- /* Wait a while to be sure that a Vsync event is received */
- msleep(WAIT_NEXT_VSYNC_MS);
-
- /* Then disable layer itself */
- sti_layer_disable(layer);
-
drm_crtc_vblank_off(crtc);
/* Disable pixel clock and compo IP clocks */
@@ -237,64 +157,44 @@
mixer->enabled = false;
}
+static void
+sti_drm_crtc_mode_set_nofb(struct drm_crtc *crtc)
+{
+ sti_drm_crtc_prepare(crtc);
+ sti_drm_crtc_mode_set(crtc, &crtc->state->adjusted_mode);
+}
+
+static void sti_drm_atomic_begin(struct drm_crtc *crtc)
+{
+ struct sti_mixer *mixer = to_sti_mixer(crtc);
+
+ if (crtc->state->event) {
+ crtc->state->event->pipe = drm_crtc_index(crtc);
+
+ WARN_ON(drm_crtc_vblank_get(crtc) != 0);
+
+ mixer->pending_event = crtc->state->event;
+ crtc->state->event = NULL;
+ }
+}
+
+static void sti_drm_atomic_flush(struct drm_crtc *crtc)
+{
+}
+
static struct drm_crtc_helper_funcs sti_crtc_helper_funcs = {
.dpms = sti_drm_crtc_dpms,
.prepare = sti_drm_crtc_prepare,
.commit = sti_drm_crtc_commit,
.mode_fixup = sti_drm_crtc_mode_fixup,
- .mode_set = sti_drm_crtc_mode_set,
- .mode_set_base = sti_drm_crtc_mode_set_base,
+ .mode_set = drm_helper_crtc_mode_set,
+ .mode_set_nofb = sti_drm_crtc_mode_set_nofb,
+ .mode_set_base = drm_helper_crtc_mode_set_base,
.disable = sti_drm_crtc_disable,
+ .atomic_begin = sti_drm_atomic_begin,
+ .atomic_flush = sti_drm_atomic_flush,
};
-static int sti_drm_crtc_page_flip(struct drm_crtc *crtc,
- struct drm_framebuffer *fb,
- struct drm_pending_vblank_event *event,
- uint32_t page_flip_flags)
-{
- struct drm_device *drm_dev = crtc->dev;
- struct drm_framebuffer *old_fb;
- struct sti_mixer *mixer = to_sti_mixer(crtc);
- unsigned long flags;
- int ret;
-
- DRM_DEBUG_KMS("fb %d --> fb %d\n",
- crtc->primary->fb->base.id, fb->base.id);
-
- mutex_lock(&drm_dev->struct_mutex);
-
- old_fb = crtc->primary->fb;
- crtc->primary->fb = fb;
- ret = sti_drm_crtc_mode_set_base(crtc, crtc->x, crtc->y, old_fb);
- if (ret) {
- DRM_ERROR("failed\n");
- crtc->primary->fb = old_fb;
- goto out;
- }
-
- if (event) {
- event->pipe = mixer->id;
-
- ret = drm_vblank_get(drm_dev, event->pipe);
- if (ret) {
- DRM_ERROR("Cannot get vblank\n");
- goto out;
- }
-
- spin_lock_irqsave(&drm_dev->event_lock, flags);
- if (mixer->pending_event) {
- drm_vblank_put(drm_dev, event->pipe);
- ret = -EBUSY;
- } else {
- mixer->pending_event = event;
- }
- spin_unlock_irqrestore(&drm_dev->event_lock, flags);
- }
-out:
- mutex_unlock(&drm_dev->struct_mutex);
- return ret;
-}
-
static void sti_drm_crtc_destroy(struct drm_crtc *crtc)
{
DRM_DEBUG_KMS("\n");
@@ -380,10 +280,13 @@
EXPORT_SYMBOL(sti_drm_crtc_disable_vblank);
static struct drm_crtc_funcs sti_crtc_funcs = {
- .set_config = drm_crtc_helper_set_config,
- .page_flip = sti_drm_crtc_page_flip,
+ .set_config = drm_atomic_helper_set_config,
+ .page_flip = drm_atomic_helper_page_flip,
.destroy = sti_drm_crtc_destroy,
.set_property = sti_drm_crtc_set_property,
+ .reset = drm_atomic_helper_crtc_reset,
+ .atomic_duplicate_state = drm_atomic_helper_crtc_duplicate_state,
+ .atomic_destroy_state = drm_atomic_helper_crtc_destroy_state,
};
bool sti_drm_crtc_is_main(struct drm_crtc *crtc)
diff --git a/drivers/gpu/drm/sti/sti_drm_drv.c b/drivers/gpu/drm/sti/sti_drm_drv.c
index 5239fa1..59d558b 100644
--- a/drivers/gpu/drm/sti/sti_drm_drv.c
+++ b/drivers/gpu/drm/sti/sti_drm_drv.c
@@ -12,6 +12,8 @@
#include <linux/module.h>
#include <linux/of_platform.h>
+#include <drm/drm_atomic.h>
+#include <drm/drm_atomic_helper.h>
#include <drm/drm_crtc_helper.h>
#include <drm/drm_gem_cma_helper.h>
#include <drm/drm_fb_cma_helper.h>
@@ -28,8 +30,87 @@
#define STI_MAX_FB_HEIGHT 4096
#define STI_MAX_FB_WIDTH 4096
+static void sti_drm_atomic_schedule(struct sti_drm_private *private,
+ struct drm_atomic_state *state)
+{
+ private->commit.state = state;
+ schedule_work(&private->commit.work);
+}
+
+static void sti_drm_atomic_complete(struct sti_drm_private *private,
+ struct drm_atomic_state *state)
+{
+ struct drm_device *drm = private->drm_dev;
+
+ /*
+ * Everything below can be run asynchronously without the need to grab
+ * any modeset locks at all under one condition: It must be guaranteed
+ * that the asynchronous work has either been cancelled (if the driver
+ * supports it, which at least requires that the framebuffers get
+ * cleaned up with drm_atomic_helper_cleanup_planes()) or completed
+ * before the new state gets committed on the software side with
+ * drm_atomic_helper_swap_state().
+ *
+ * This scheme allows new atomic state updates to be prepared and
+ * checked in parallel to the asynchronous completion of the previous
+ * update. Which is important since compositors need to figure out the
+ * composition of the next frame right after having submitted the
+ * current layout.
+ */
+
+ drm_atomic_helper_commit_modeset_disables(drm, state);
+ drm_atomic_helper_commit_planes(drm, state);
+ drm_atomic_helper_commit_modeset_enables(drm, state);
+
+ drm_atomic_helper_wait_for_vblanks(drm, state);
+
+ drm_atomic_helper_cleanup_planes(drm, state);
+ drm_atomic_state_free(state);
+}
+
+static void sti_drm_atomic_work(struct work_struct *work)
+{
+ struct sti_drm_private *private = container_of(work,
+ struct sti_drm_private, commit.work);
+
+ sti_drm_atomic_complete(private, private->commit.state);
+}
+
+static int sti_drm_atomic_commit(struct drm_device *drm,
+ struct drm_atomic_state *state, bool async)
+{
+ struct sti_drm_private *private = drm->dev_private;
+ int err;
+
+ err = drm_atomic_helper_prepare_planes(drm, state);
+ if (err)
+ return err;
+
+ /* serialize outstanding asynchronous commits */
+ mutex_lock(&private->commit.lock);
+ flush_work(&private->commit.work);
+
+ /*
+ * This is the point of no return - everything below never fails except
+ * when the hw goes bonghits. Which means we can commit the new state on
+ * the software side now.
+ */
+
+ drm_atomic_helper_swap_state(drm, state);
+
+ if (async)
+ sti_drm_atomic_schedule(private, state);
+ else
+ sti_drm_atomic_complete(private, state);
+
+ mutex_unlock(&private->commit.lock);
+ return 0;
+}
+
static struct drm_mode_config_funcs sti_drm_mode_config_funcs = {
.fb_create = drm_fb_cma_create,
+ .atomic_check = drm_atomic_helper_check,
+ .atomic_commit = sti_drm_atomic_commit,
};
static void sti_drm_mode_config_init(struct drm_device *dev)
@@ -61,6 +142,9 @@
dev->dev_private = (void *)private;
private->drm_dev = dev;
+ mutex_init(&private->commit.lock);
+ INIT_WORK(&private->commit.work, sti_drm_atomic_work);
+
drm_mode_config_init(dev);
drm_kms_helper_poll_init(dev);
@@ -74,7 +158,7 @@
return ret;
}
- drm_helper_disable_unused_functions(dev);
+ drm_mode_config_reset(dev);
#ifdef CONFIG_DRM_STI_FBDEV
drm_fbdev_cma_init(dev, 32,
diff --git a/drivers/gpu/drm/sti/sti_drm_drv.h b/drivers/gpu/drm/sti/sti_drm_drv.h
index ec5e2eb..c413aa3 100644
--- a/drivers/gpu/drm/sti/sti_drm_drv.h
+++ b/drivers/gpu/drm/sti/sti_drm_drv.h
@@ -24,6 +24,12 @@
struct sti_compositor *compo;
struct drm_property *plane_zorder_property;
struct drm_device *drm_dev;
+
+ struct {
+ struct drm_atomic_state *state;
+ struct work_struct work;
+ struct mutex lock;
+ } commit;
};
#endif
diff --git a/drivers/gpu/drm/sti/sti_drm_plane.c b/drivers/gpu/drm/sti/sti_drm_plane.c
index bb6a293..64d4ed4 100644
--- a/drivers/gpu/drm/sti/sti_drm_plane.c
+++ b/drivers/gpu/drm/sti/sti_drm_plane.c
@@ -6,6 +6,10 @@
* License terms: GNU General Public License (GPL), version 2
*/
+#include <drm/drmP.h>
+#include <drm/drm_atomic_helper.h>
+#include <drm/drm_plane_helper.h>
+
#include "sti_compositor.h"
#include "sti_drm_drv.h"
#include "sti_drm_plane.h"
@@ -33,9 +37,9 @@
struct sti_mixer *mixer = to_sti_mixer(crtc);
int res;
- DRM_DEBUG_KMS("CRTC:%d (%s) drm plane:%d (%s) drm fb:%d\n",
+ DRM_DEBUG_KMS("CRTC:%d (%s) drm plane:%d (%s)\n",
crtc->base.id, sti_mixer_to_str(mixer),
- plane->base.id, sti_layer_to_str(layer), fb->base.id);
+ plane->base.id, sti_layer_to_str(layer));
DRM_DEBUG_KMS("(%dx%d)@(%d,%d)\n", crtc_w, crtc_h, crtc_x, crtc_y);
res = sti_mixer_set_layer_depth(mixer, layer);
@@ -110,7 +114,7 @@
{
DRM_DEBUG_DRIVER("\n");
- sti_drm_disable_plane(plane);
+ drm_plane_helper_disable(plane);
drm_plane_cleanup(plane);
}
@@ -133,10 +137,58 @@
}
static struct drm_plane_funcs sti_drm_plane_funcs = {
- .update_plane = sti_drm_update_plane,
- .disable_plane = sti_drm_disable_plane,
+ .update_plane = drm_atomic_helper_update_plane,
+ .disable_plane = drm_atomic_helper_disable_plane,
.destroy = sti_drm_plane_destroy,
.set_property = sti_drm_plane_set_property,
+ .reset = drm_atomic_helper_plane_reset,
+ .atomic_duplicate_state = drm_atomic_helper_plane_duplicate_state,
+ .atomic_destroy_state = drm_atomic_helper_plane_destroy_state,
+};
+
+static int sti_drm_plane_prepare_fb(struct drm_plane *plane,
+ struct drm_framebuffer *fb,
+ const struct drm_plane_state *new_state)
+{
+ return 0;
+}
+
+static void sti_drm_plane_cleanup_fb(struct drm_plane *plane,
+ struct drm_framebuffer *fb,
+ const struct drm_plane_state *old_fb)
+{
+}
+
+static int sti_drm_plane_atomic_check(struct drm_plane *plane,
+ struct drm_plane_state *state)
+{
+ return 0;
+}
+
+static void sti_drm_plane_atomic_update(struct drm_plane *plane,
+ struct drm_plane_state *oldstate)
+{
+ struct drm_plane_state *state = plane->state;
+
+ sti_drm_update_plane(plane, state->crtc, state->fb,
+ state->crtc_x, state->crtc_y,
+ state->crtc_w, state->crtc_h,
+ state->src_x, state->src_y,
+ state->src_w, state->src_h);
+}
+
+static void sti_drm_plane_atomic_disable(struct drm_plane *plane,
+ struct drm_plane_state *oldstate)
+{
+ sti_drm_disable_plane(plane);
+}
+
+static const struct drm_plane_helper_funcs sti_drm_plane_helpers_funcs = {
+ .prepare_fb = sti_drm_plane_prepare_fb,
+ .cleanup_fb = sti_drm_plane_cleanup_fb,
+ .atomic_check = sti_drm_plane_atomic_check,
+ .atomic_update = sti_drm_plane_atomic_update,
+ .atomic_disable = sti_drm_plane_atomic_disable,
};
static void sti_drm_plane_attach_zorder_property(struct drm_plane *plane,
@@ -178,11 +230,13 @@
return NULL;
}
+ drm_plane_helper_add(&layer->plane, &sti_drm_plane_helpers_funcs);
+
for (i = 0; i < ARRAY_SIZE(sti_layer_default_zorder); i++)
if (sti_layer_default_zorder[i] == layer->desc)
break;
- default_zorder = i;
+ default_zorder = i + 1;
if (type == DRM_PLANE_TYPE_OVERLAY)
sti_drm_plane_attach_zorder_property(&layer->plane,
diff --git a/drivers/gpu/drm/sti/sti_dvo.c b/drivers/gpu/drm/sti/sti_dvo.c
index aeb5070..a9b678a 100644
--- a/drivers/gpu/drm/sti/sti_dvo.c
+++ b/drivers/gpu/drm/sti/sti_dvo.c
@@ -11,6 +11,7 @@
#include <linux/platform_device.h>
#include <drm/drmP.h>
+#include <drm/drm_atomic_helper.h>
#include <drm/drm_crtc_helper.h>
#include <drm/drm_panel.h>
@@ -364,10 +365,13 @@
}
static struct drm_connector_funcs sti_dvo_connector_funcs = {
- .dpms = drm_helper_connector_dpms,
+ .dpms = drm_atomic_helper_connector_dpms,
.fill_modes = drm_helper_probe_single_connector_modes,
.detect = sti_dvo_connector_detect,
.destroy = sti_dvo_connector_destroy,
+ .reset = drm_atomic_helper_connector_reset,
+ .atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
+ .atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
};
static struct drm_encoder *sti_dvo_find_encoder(struct drm_device *dev)
diff --git a/drivers/gpu/drm/sti/sti_hda.c b/drivers/gpu/drm/sti/sti_hda.c
index a9bbb08..598cd78 100644
--- a/drivers/gpu/drm/sti/sti_hda.c
+++ b/drivers/gpu/drm/sti/sti_hda.c
@@ -10,6 +10,7 @@
#include <linux/platform_device.h>
#include <drm/drmP.h>
+#include <drm/drm_atomic_helper.h>
#include <drm/drm_crtc_helper.h>
/* HDformatter registers */
@@ -611,10 +612,13 @@
}
static struct drm_connector_funcs sti_hda_connector_funcs = {
- .dpms = drm_helper_connector_dpms,
+ .dpms = drm_atomic_helper_connector_dpms,
.fill_modes = drm_helper_probe_single_connector_modes,
.detect = sti_hda_connector_detect,
.destroy = sti_hda_connector_destroy,
+ .reset = drm_atomic_helper_connector_reset,
+ .atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
+ .atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
};
static struct drm_encoder *sti_hda_find_encoder(struct drm_device *dev)
diff --git a/drivers/gpu/drm/sti/sti_hdmi.c b/drivers/gpu/drm/sti/sti_hdmi.c
index 1485ade..ae5424b 100644
--- a/drivers/gpu/drm/sti/sti_hdmi.c
+++ b/drivers/gpu/drm/sti/sti_hdmi.c
@@ -13,6 +13,7 @@
#include <linux/reset.h>
#include <drm/drmP.h>
+#include <drm/drm_atomic_helper.h>
#include <drm/drm_crtc_helper.h>
#include <drm/drm_edid.h>
@@ -663,10 +664,13 @@
}
static struct drm_connector_funcs sti_hdmi_connector_funcs = {
- .dpms = drm_helper_connector_dpms,
+ .dpms = drm_atomic_helper_connector_dpms,
.fill_modes = drm_helper_probe_single_connector_modes,
.detect = sti_hdmi_connector_detect,
.destroy = sti_hdmi_connector_destroy,
+ .reset = drm_atomic_helper_connector_reset,
+ .atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
+ .atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
};
static struct drm_encoder *sti_hdmi_find_encoder(struct drm_device *dev)
diff --git a/drivers/gpu/drm/tegra/dc.c b/drivers/gpu/drm/tegra/dc.c
index 1a52522..a287e4f 100644
--- a/drivers/gpu/drm/tegra/dc.c
+++ b/drivers/gpu/drm/tegra/dc.c
@@ -425,8 +425,8 @@
{
struct tegra_plane_state *state;
- if (plane->state && plane->state->fb)
- drm_framebuffer_unreference(plane->state->fb);
+ if (plane->state)
+ __drm_atomic_helper_plane_destroy_state(plane, plane->state);
kfree(plane->state);
plane->state = NULL;
@@ -443,12 +443,14 @@
struct tegra_plane_state *state = to_tegra_plane_state(plane->state);
struct tegra_plane_state *copy;
- copy = kmemdup(state, sizeof(*state), GFP_KERNEL);
+ copy = kmalloc(sizeof(*copy), GFP_KERNEL);
if (!copy)
return NULL;
- if (copy->base.fb)
- drm_framebuffer_reference(copy->base.fb);
+ __drm_atomic_helper_plane_duplicate_state(plane, &copy->base);
+ copy->tiling = state->tiling;
+ copy->format = state->format;
+ copy->swap = state->swap;
return &copy->base;
}
@@ -456,9 +458,7 @@
static void tegra_plane_atomic_destroy_state(struct drm_plane *plane,
struct drm_plane_state *state)
{
- if (state->fb)
- drm_framebuffer_unreference(state->fb);
-
+ __drm_atomic_helper_plane_destroy_state(plane, state);
kfree(state);
}
@@ -472,13 +472,15 @@
};
static int tegra_plane_prepare_fb(struct drm_plane *plane,
- struct drm_framebuffer *fb)
+ struct drm_framebuffer *fb,
+ const struct drm_plane_state *new_state)
{
return 0;
}
static void tegra_plane_cleanup_fb(struct drm_plane *plane,
- struct drm_framebuffer *fb)
+ struct drm_framebuffer *fb,
+ const struct drm_plane_state *old_fb)
{
}
@@ -906,6 +908,15 @@
return 0;
}
+u32 tegra_dc_get_vblank_counter(struct tegra_dc *dc)
+{
+ if (dc->syncpt)
+ return host1x_syncpt_read(dc->syncpt);
+
+ /* fallback to software emulated VBLANK counter */
+ return drm_crtc_vblank_count(&dc->base);
+}
+
void tegra_dc_enable_vblank(struct tegra_dc *dc)
{
unsigned long value, flags;
@@ -993,6 +1004,9 @@
{
struct tegra_dc_state *state;
+ if (crtc->state)
+ __drm_atomic_helper_crtc_destroy_state(crtc, crtc->state);
+
kfree(crtc->state);
crtc->state = NULL;
@@ -1009,14 +1023,15 @@
struct tegra_dc_state *state = to_dc_state(crtc->state);
struct tegra_dc_state *copy;
- copy = kmemdup(state, sizeof(*state), GFP_KERNEL);
+ copy = kmalloc(sizeof(*copy), GFP_KERNEL);
if (!copy)
return NULL;
- copy->base.mode_changed = false;
- copy->base.active_changed = false;
- copy->base.planes_changed = false;
- copy->base.event = NULL;
+ __drm_atomic_helper_crtc_duplicate_state(crtc, &copy->base);
+ copy->clk = state->clk;
+ copy->pclk = state->pclk;
+ copy->div = state->div;
+ copy->planes = state->planes;
return &copy->base;
}
@@ -1024,6 +1039,7 @@
static void tegra_crtc_atomic_destroy_state(struct drm_crtc *crtc,
struct drm_crtc_state *state)
{
+ __drm_atomic_helper_crtc_destroy_state(crtc, state);
kfree(state);
}
@@ -1150,26 +1166,18 @@
return 0;
}
-int tegra_dc_setup_clock(struct tegra_dc *dc, struct clk *parent,
- unsigned long pclk, unsigned int div)
-{
- u32 value;
- int err;
-
- err = clk_set_parent(dc->clk, parent);
- if (err < 0) {
- dev_err(dc->dev, "failed to set parent clock: %d\n", err);
- return err;
- }
-
- DRM_DEBUG_KMS("rate: %lu, div: %u\n", clk_get_rate(dc->clk), div);
-
- value = SHIFT_CLK_DIVIDER(div) | PIXEL_CLK_DIVIDER_PCD1;
- tegra_dc_writel(dc, value, DC_DISP_DISP_CLOCK_CONTROL);
-
- return 0;
-}
-
+/**
+ * tegra_dc_state_setup_clock - check clock settings and store them in atomic
+ * state
+ * @dc: display controller
+ * @crtc_state: CRTC atomic state
+ * @clk: parent clock for display controller
+ * @pclk: pixel clock
+ * @div: shift clock divider
+ *
+ * Returns:
+ * 0 on success or a negative error-code on failure.
+ */
int tegra_dc_state_setup_clock(struct tegra_dc *dc,
struct drm_crtc_state *crtc_state,
struct clk *clk, unsigned long pclk,
@@ -1177,6 +1185,9 @@
{
struct tegra_dc_state *state = to_dc_state(crtc_state);
+ if (!clk_has_parent(dc->clk, clk))
+ return -EINVAL;
+
state->clk = clk;
state->pclk = pclk;
state->div = div;
@@ -1292,9 +1303,7 @@
static const struct drm_crtc_helper_funcs tegra_crtc_helper_funcs = {
.disable = tegra_crtc_disable,
.mode_fixup = tegra_crtc_mode_fixup,
- .mode_set = drm_helper_crtc_mode_set,
.mode_set_nofb = tegra_crtc_mode_set_nofb,
- .mode_set_base = drm_helper_crtc_mode_set_base,
.prepare = tegra_crtc_prepare,
.commit = tegra_crtc_commit,
.atomic_check = tegra_crtc_atomic_check,
@@ -1629,7 +1638,6 @@
struct tegra_drm *tegra = drm->dev_private;
struct drm_plane *primary = NULL;
struct drm_plane *cursor = NULL;
- unsigned int syncpt;
u32 value;
int err;
@@ -1698,13 +1706,15 @@
}
/* initialize display controller */
- if (dc->pipe)
- syncpt = SYNCPT_VBLANK1;
- else
- syncpt = SYNCPT_VBLANK0;
+ if (dc->syncpt) {
+ u32 syncpt = host1x_syncpt_id(dc->syncpt);
- tegra_dc_writel(dc, 0x00000100, DC_CMD_GENERAL_INCR_SYNCPT_CNTRL);
- tegra_dc_writel(dc, 0x100 | syncpt, DC_CMD_CONT_SYNCPT_VSYNC);
+ value = SYNCPT_CNTRL_NO_STALL;
+ tegra_dc_writel(dc, value, DC_CMD_GENERAL_INCR_SYNCPT_CNTRL);
+
+ value = SYNCPT_VSYNC_ENABLE | syncpt;
+ tegra_dc_writel(dc, value, DC_CMD_CONT_SYNCPT_VSYNC);
+ }
value = WIN_A_UF_INT | WIN_B_UF_INT | WIN_C_UF_INT | WIN_A_OF_INT;
tegra_dc_writel(dc, value, DC_CMD_INT_TYPE);
@@ -1872,6 +1882,7 @@
static int tegra_dc_probe(struct platform_device *pdev)
{
+ unsigned long flags = HOST1X_SYNCPT_CLIENT_MANAGED;
const struct of_device_id *id;
struct resource *regs;
struct tegra_dc *dc;
@@ -1963,6 +1974,10 @@
return err;
}
+ dc->syncpt = host1x_syncpt_request(&pdev->dev, flags);
+ if (!dc->syncpt)
+ dev_warn(&pdev->dev, "failed to allocate syncpoint\n");
+
platform_set_drvdata(pdev, dc);
return 0;
@@ -1973,6 +1988,8 @@
struct tegra_dc *dc = platform_get_drvdata(pdev);
int err;
+ host1x_syncpt_free(dc->syncpt);
+
err = host1x_client_unregister(&dc->client);
if (err < 0) {
dev_err(&pdev->dev, "failed to unregister host1x client: %d\n",
diff --git a/drivers/gpu/drm/tegra/dc.h b/drivers/gpu/drm/tegra/dc.h
index 705c93b..55792da 100644
--- a/drivers/gpu/drm/tegra/dc.h
+++ b/drivers/gpu/drm/tegra/dc.h
@@ -12,6 +12,8 @@
#define DC_CMD_GENERAL_INCR_SYNCPT 0x000
#define DC_CMD_GENERAL_INCR_SYNCPT_CNTRL 0x001
+#define SYNCPT_CNTRL_NO_STALL (1 << 8)
+#define SYNCPT_CNTRL_SOFT_RESET (1 << 0)
#define DC_CMD_GENERAL_INCR_SYNCPT_ERROR 0x002
#define DC_CMD_WIN_A_INCR_SYNCPT 0x008
#define DC_CMD_WIN_A_INCR_SYNCPT_CNTRL 0x009
@@ -23,6 +25,7 @@
#define DC_CMD_WIN_C_INCR_SYNCPT_CNTRL 0x019
#define DC_CMD_WIN_C_INCR_SYNCPT_ERROR 0x01a
#define DC_CMD_CONT_SYNCPT_VSYNC 0x028
+#define SYNCPT_VSYNC_ENABLE (1 << 8)
#define DC_CMD_DISPLAY_COMMAND_OPTION0 0x031
#define DC_CMD_DISPLAY_COMMAND 0x032
#define DISP_CTRL_MODE_STOP (0 << 5)
@@ -438,8 +441,4 @@
#define DC_WINBUF_BD_UFLOW_STATUS 0xdca
#define DC_WINBUF_CD_UFLOW_STATUS 0xfca
-/* synchronization points */
-#define SYNCPT_VBLANK0 26
-#define SYNCPT_VBLANK1 27
-
#endif /* TEGRA_DC_H */
diff --git a/drivers/gpu/drm/tegra/drm.c b/drivers/gpu/drm/tegra/drm.c
index 7dd328d..1833abd 100644
--- a/drivers/gpu/drm/tegra/drm.c
+++ b/drivers/gpu/drm/tegra/drm.c
@@ -55,9 +55,9 @@
* current layout.
*/
- drm_atomic_helper_commit_pre_planes(drm, state);
+ drm_atomic_helper_commit_modeset_disables(drm, state);
drm_atomic_helper_commit_planes(drm, state);
- drm_atomic_helper_commit_post_planes(drm, state);
+ drm_atomic_helper_commit_modeset_enables(drm, state);
drm_atomic_helper_wait_for_vblanks(drm, state);
@@ -172,6 +172,10 @@
*/
drm->irq_enabled = true;
+ /* syncpoints are used for full 32-bit hardware VBLANK counters */
+ drm->vblank_disable_immediate = true;
+ drm->max_vblank_count = 0xffffffff;
+
err = drm_vblank_init(drm, drm->mode_config.num_crtc);
if (err < 0)
goto device;
@@ -813,12 +817,12 @@
static u32 tegra_drm_get_vblank_counter(struct drm_device *drm, int pipe)
{
struct drm_crtc *crtc = tegra_crtc_from_pipe(drm, pipe);
+ struct tegra_dc *dc = to_tegra_dc(crtc);
if (!crtc)
return 0;
- /* TODO: implement real hardware counter using syncpoints */
- return drm_crtc_vblank_count(crtc);
+ return tegra_dc_get_vblank_counter(dc);
}
static int tegra_drm_enable_vblank(struct drm_device *drm, int pipe)
@@ -879,8 +883,18 @@
return 0;
}
+static int tegra_debugfs_iova(struct seq_file *s, void *data)
+{
+ struct drm_info_node *node = (struct drm_info_node *)s->private;
+ struct drm_device *drm = node->minor->dev;
+ struct tegra_drm *tegra = drm->dev_private;
+
+ return drm_mm_dump_table(s, &tegra->mm);
+}
+
static struct drm_info_list tegra_debugfs_list[] = {
{ "framebuffers", tegra_debugfs_framebuffers, 0 },
+ { "iova", tegra_debugfs_iova, 0 },
};
static int tegra_debugfs_init(struct drm_minor *minor)
diff --git a/drivers/gpu/drm/tegra/drm.h b/drivers/gpu/drm/tegra/drm.h
index 8cb2dfe..659b2fc 100644
--- a/drivers/gpu/drm/tegra/drm.h
+++ b/drivers/gpu/drm/tegra/drm.h
@@ -106,6 +106,7 @@
struct tegra_dc {
struct host1x_client client;
+ struct host1x_syncpt *syncpt;
struct device *dev;
spinlock_t lock;
@@ -180,12 +181,11 @@
};
/* from dc.c */
+u32 tegra_dc_get_vblank_counter(struct tegra_dc *dc);
void tegra_dc_enable_vblank(struct tegra_dc *dc);
void tegra_dc_disable_vblank(struct tegra_dc *dc);
void tegra_dc_cancel_page_flip(struct drm_crtc *crtc, struct drm_file *file);
void tegra_dc_commit(struct tegra_dc *dc);
-int tegra_dc_setup_clock(struct tegra_dc *dc, struct clk *parent,
- unsigned long pclk, unsigned int div);
int tegra_dc_state_setup_clock(struct tegra_dc *dc,
struct drm_crtc_state *crtc_state,
struct clk *clk, unsigned long pclk,
diff --git a/drivers/gpu/drm/tegra/hdmi.c b/drivers/gpu/drm/tegra/hdmi.c
index 7eaaee74..06ab178 100644
--- a/drivers/gpu/drm/tegra/hdmi.c
+++ b/drivers/gpu/drm/tegra/hdmi.c
@@ -952,7 +952,7 @@
}
tegra_hdmi_writel(hdmi,
- SOR_SEQ_CTL_PU_PC(0) |
+ SOR_SEQ_PU_PC(0) |
SOR_SEQ_PU_PC_ALT(0) |
SOR_SEQ_PD_PC(8) |
SOR_SEQ_PD_PC_ALT(8),
@@ -1394,8 +1394,8 @@
tegra_output_exit(&hdmi->output);
- clk_disable_unprepare(hdmi->clk);
reset_control_assert(hdmi->rst);
+ clk_disable_unprepare(hdmi->clk);
regulator_disable(hdmi->vdd);
regulator_disable(hdmi->pll);
diff --git a/drivers/gpu/drm/tegra/hdmi.h b/drivers/gpu/drm/tegra/hdmi.h
index 919a19d..a882514 100644
--- a/drivers/gpu/drm/tegra/hdmi.h
+++ b/drivers/gpu/drm/tegra/hdmi.h
@@ -201,7 +201,7 @@
#define HDMI_NV_PDISP_SOR_CRCB 0x5d
#define HDMI_NV_PDISP_SOR_BLANK 0x5e
#define HDMI_NV_PDISP_SOR_SEQ_CTL 0x5f
-#define SOR_SEQ_CTL_PU_PC(x) (((x) & 0xf) << 0)
+#define SOR_SEQ_PU_PC(x) (((x) & 0xf) << 0)
#define SOR_SEQ_PU_PC_ALT(x) (((x) & 0xf) << 4)
#define SOR_SEQ_PD_PC(x) (((x) & 0xf) << 8)
#define SOR_SEQ_PD_PC_ALT(x) (((x) & 0xf) << 12)
diff --git a/drivers/gpu/drm/tegra/sor.c b/drivers/gpu/drm/tegra/sor.c
index 2afe478..7591d89 100644
--- a/drivers/gpu/drm/tegra/sor.c
+++ b/drivers/gpu/drm/tegra/sor.c
@@ -41,6 +41,8 @@
struct mutex lock;
bool enabled;
+ struct drm_info_list *debugfs_files;
+ struct drm_minor *minor;
struct dentry *debugfs;
};
@@ -68,13 +70,12 @@
return container_of(output, struct tegra_sor, output);
}
-static inline unsigned long tegra_sor_readl(struct tegra_sor *sor,
- unsigned long offset)
+static inline u32 tegra_sor_readl(struct tegra_sor *sor, unsigned long offset)
{
return readl(sor->regs + (offset << 2));
}
-static inline void tegra_sor_writel(struct tegra_sor *sor, unsigned long value,
+static inline void tegra_sor_writel(struct tegra_sor *sor, u32 value,
unsigned long offset)
{
writel(value, sor->regs + (offset << 2));
@@ -83,9 +84,9 @@
static int tegra_sor_dp_train_fast(struct tegra_sor *sor,
struct drm_dp_link *link)
{
- unsigned long value;
unsigned int i;
u8 pattern;
+ u32 value;
int err;
/* setup lane parameters */
@@ -202,7 +203,7 @@
static int tegra_sor_setup_pwm(struct tegra_sor *sor, unsigned long timeout)
{
- unsigned long value;
+ u32 value;
value = tegra_sor_readl(sor, SOR_PWM_DIV);
value &= ~SOR_PWM_DIV_MASK;
@@ -281,7 +282,7 @@
static int tegra_sor_power_up(struct tegra_sor *sor, unsigned long timeout)
{
- unsigned long value;
+ u32 value;
value = tegra_sor_readl(sor, SOR_PWR);
value |= SOR_PWR_TRIGGER | SOR_PWR_NORMAL_STATE_PU;
@@ -674,38 +675,195 @@
.release = tegra_sor_crc_release,
};
+static int tegra_sor_show_regs(struct seq_file *s, void *data)
+{
+ struct drm_info_node *node = s->private;
+ struct tegra_sor *sor = node->info_ent->data;
+
+#define DUMP_REG(name) \
+ seq_printf(s, "%-38s %#05x %08x\n", #name, name, \
+ tegra_sor_readl(sor, name))
+
+ DUMP_REG(SOR_CTXSW);
+ DUMP_REG(SOR_SUPER_STATE_0);
+ DUMP_REG(SOR_SUPER_STATE_1);
+ DUMP_REG(SOR_STATE_0);
+ DUMP_REG(SOR_STATE_1);
+ DUMP_REG(SOR_HEAD_STATE_0(0));
+ DUMP_REG(SOR_HEAD_STATE_0(1));
+ DUMP_REG(SOR_HEAD_STATE_1(0));
+ DUMP_REG(SOR_HEAD_STATE_1(1));
+ DUMP_REG(SOR_HEAD_STATE_2(0));
+ DUMP_REG(SOR_HEAD_STATE_2(1));
+ DUMP_REG(SOR_HEAD_STATE_3(0));
+ DUMP_REG(SOR_HEAD_STATE_3(1));
+ DUMP_REG(SOR_HEAD_STATE_4(0));
+ DUMP_REG(SOR_HEAD_STATE_4(1));
+ DUMP_REG(SOR_HEAD_STATE_5(0));
+ DUMP_REG(SOR_HEAD_STATE_5(1));
+ DUMP_REG(SOR_CRC_CNTRL);
+ DUMP_REG(SOR_DP_DEBUG_MVID);
+ DUMP_REG(SOR_CLK_CNTRL);
+ DUMP_REG(SOR_CAP);
+ DUMP_REG(SOR_PWR);
+ DUMP_REG(SOR_TEST);
+ DUMP_REG(SOR_PLL_0);
+ DUMP_REG(SOR_PLL_1);
+ DUMP_REG(SOR_PLL_2);
+ DUMP_REG(SOR_PLL_3);
+ DUMP_REG(SOR_CSTM);
+ DUMP_REG(SOR_LVDS);
+ DUMP_REG(SOR_CRC_A);
+ DUMP_REG(SOR_CRC_B);
+ DUMP_REG(SOR_BLANK);
+ DUMP_REG(SOR_SEQ_CTL);
+ DUMP_REG(SOR_LANE_SEQ_CTL);
+ DUMP_REG(SOR_SEQ_INST(0));
+ DUMP_REG(SOR_SEQ_INST(1));
+ DUMP_REG(SOR_SEQ_INST(2));
+ DUMP_REG(SOR_SEQ_INST(3));
+ DUMP_REG(SOR_SEQ_INST(4));
+ DUMP_REG(SOR_SEQ_INST(5));
+ DUMP_REG(SOR_SEQ_INST(6));
+ DUMP_REG(SOR_SEQ_INST(7));
+ DUMP_REG(SOR_SEQ_INST(8));
+ DUMP_REG(SOR_SEQ_INST(9));
+ DUMP_REG(SOR_SEQ_INST(10));
+ DUMP_REG(SOR_SEQ_INST(11));
+ DUMP_REG(SOR_SEQ_INST(12));
+ DUMP_REG(SOR_SEQ_INST(13));
+ DUMP_REG(SOR_SEQ_INST(14));
+ DUMP_REG(SOR_SEQ_INST(15));
+ DUMP_REG(SOR_PWM_DIV);
+ DUMP_REG(SOR_PWM_CTL);
+ DUMP_REG(SOR_VCRC_A_0);
+ DUMP_REG(SOR_VCRC_A_1);
+ DUMP_REG(SOR_VCRC_B_0);
+ DUMP_REG(SOR_VCRC_B_1);
+ DUMP_REG(SOR_CCRC_A_0);
+ DUMP_REG(SOR_CCRC_A_1);
+ DUMP_REG(SOR_CCRC_B_0);
+ DUMP_REG(SOR_CCRC_B_1);
+ DUMP_REG(SOR_EDATA_A_0);
+ DUMP_REG(SOR_EDATA_A_1);
+ DUMP_REG(SOR_EDATA_B_0);
+ DUMP_REG(SOR_EDATA_B_1);
+ DUMP_REG(SOR_COUNT_A_0);
+ DUMP_REG(SOR_COUNT_A_1);
+ DUMP_REG(SOR_COUNT_B_0);
+ DUMP_REG(SOR_COUNT_B_1);
+ DUMP_REG(SOR_DEBUG_A_0);
+ DUMP_REG(SOR_DEBUG_A_1);
+ DUMP_REG(SOR_DEBUG_B_0);
+ DUMP_REG(SOR_DEBUG_B_1);
+ DUMP_REG(SOR_TRIG);
+ DUMP_REG(SOR_MSCHECK);
+ DUMP_REG(SOR_XBAR_CTRL);
+ DUMP_REG(SOR_XBAR_POL);
+ DUMP_REG(SOR_DP_LINKCTL_0);
+ DUMP_REG(SOR_DP_LINKCTL_1);
+ DUMP_REG(SOR_LANE_DRIVE_CURRENT_0);
+ DUMP_REG(SOR_LANE_DRIVE_CURRENT_1);
+ DUMP_REG(SOR_LANE4_DRIVE_CURRENT_0);
+ DUMP_REG(SOR_LANE4_DRIVE_CURRENT_1);
+ DUMP_REG(SOR_LANE_PREEMPHASIS_0);
+ DUMP_REG(SOR_LANE_PREEMPHASIS_1);
+ DUMP_REG(SOR_LANE4_PREEMPHASIS_0);
+ DUMP_REG(SOR_LANE4_PREEMPHASIS_1);
+ DUMP_REG(SOR_LANE_POST_CURSOR_0);
+ DUMP_REG(SOR_LANE_POST_CURSOR_1);
+ DUMP_REG(SOR_DP_CONFIG_0);
+ DUMP_REG(SOR_DP_CONFIG_1);
+ DUMP_REG(SOR_DP_MN_0);
+ DUMP_REG(SOR_DP_MN_1);
+ DUMP_REG(SOR_DP_PADCTL_0);
+ DUMP_REG(SOR_DP_PADCTL_1);
+ DUMP_REG(SOR_DP_DEBUG_0);
+ DUMP_REG(SOR_DP_DEBUG_1);
+ DUMP_REG(SOR_DP_SPARE_0);
+ DUMP_REG(SOR_DP_SPARE_1);
+ DUMP_REG(SOR_DP_AUDIO_CTRL);
+ DUMP_REG(SOR_DP_AUDIO_HBLANK_SYMBOLS);
+ DUMP_REG(SOR_DP_AUDIO_VBLANK_SYMBOLS);
+ DUMP_REG(SOR_DP_GENERIC_INFOFRAME_HEADER);
+ DUMP_REG(SOR_DP_GENERIC_INFOFRAME_SUBPACK_0);
+ DUMP_REG(SOR_DP_GENERIC_INFOFRAME_SUBPACK_1);
+ DUMP_REG(SOR_DP_GENERIC_INFOFRAME_SUBPACK_2);
+ DUMP_REG(SOR_DP_GENERIC_INFOFRAME_SUBPACK_3);
+ DUMP_REG(SOR_DP_GENERIC_INFOFRAME_SUBPACK_4);
+ DUMP_REG(SOR_DP_GENERIC_INFOFRAME_SUBPACK_5);
+ DUMP_REG(SOR_DP_GENERIC_INFOFRAME_SUBPACK_6);
+ DUMP_REG(SOR_DP_TPG);
+ DUMP_REG(SOR_DP_TPG_CONFIG);
+ DUMP_REG(SOR_DP_LQ_CSTM_0);
+ DUMP_REG(SOR_DP_LQ_CSTM_1);
+ DUMP_REG(SOR_DP_LQ_CSTM_2);
+
+#undef DUMP_REG
+
+ return 0;
+}
+
+static const struct drm_info_list debugfs_files[] = {
+ { "regs", tegra_sor_show_regs, 0, NULL },
+};
+
static int tegra_sor_debugfs_init(struct tegra_sor *sor,
struct drm_minor *minor)
{
struct dentry *entry;
+ unsigned int i;
int err = 0;
sor->debugfs = debugfs_create_dir("sor", minor->debugfs_root);
if (!sor->debugfs)
return -ENOMEM;
- entry = debugfs_create_file("crc", 0644, sor->debugfs, sor,
- &tegra_sor_crc_fops);
- if (!entry) {
- dev_err(sor->dev,
- "cannot create /sys/kernel/debug/dri/%s/sor/crc\n",
- minor->debugfs_root->d_name.name);
+ sor->debugfs_files = kmemdup(debugfs_files, sizeof(debugfs_files),
+ GFP_KERNEL);
+ if (!sor->debugfs_files) {
err = -ENOMEM;
goto remove;
}
+ for (i = 0; i < ARRAY_SIZE(debugfs_files); i++)
+ sor->debugfs_files[i].data = sor;
+
+ err = drm_debugfs_create_files(sor->debugfs_files,
+ ARRAY_SIZE(debugfs_files),
+ sor->debugfs, minor);
+ if (err < 0)
+ goto free;
+
+ entry = debugfs_create_file("crc", 0644, sor->debugfs, sor,
+ &tegra_sor_crc_fops);
+ if (!entry) {
+ err = -ENOMEM;
+ goto free;
+ }
+
return err;
+free:
+ kfree(sor->debugfs_files);
+ sor->debugfs_files = NULL;
remove:
- debugfs_remove(sor->debugfs);
+ debugfs_remove_recursive(sor->debugfs);
sor->debugfs = NULL;
return err;
}
static void tegra_sor_debugfs_exit(struct tegra_sor *sor)
{
- debugfs_remove_recursive(sor->debugfs);
+ drm_debugfs_remove_files(sor->debugfs_files, ARRAY_SIZE(debugfs_files),
+ sor->minor);
+ sor->minor = NULL;
+
+ kfree(sor->debugfs_files);
sor->debugfs = NULL;
+
+ debugfs_remove_recursive(sor->debugfs);
+ sor->debugfs_files = NULL;
}
static void tegra_sor_connector_dpms(struct drm_connector *connector, int mode)
@@ -791,8 +949,8 @@
struct tegra_sor_config config;
struct drm_dp_link link;
struct drm_dp_aux *aux;
- unsigned long value;
int err = 0;
+ u32 value;
mutex_lock(&sor->lock);
@@ -1354,12 +1512,30 @@
}
}
+ /*
+ * XXX: Remove this reset once proper hand-over from firmware to
+ * kernel is possible.
+ */
+ err = reset_control_assert(sor->rst);
+ if (err < 0) {
+ dev_err(sor->dev, "failed to assert SOR reset: %d\n", err);
+ return err;
+ }
+
err = clk_prepare_enable(sor->clk);
if (err < 0) {
dev_err(sor->dev, "failed to enable clock: %d\n", err);
return err;
}
+ usleep_range(1000, 3000);
+
+ err = reset_control_deassert(sor->rst);
+ if (err < 0) {
+ dev_err(sor->dev, "failed to deassert SOR reset: %d\n", err);
+ return err;
+ }
+
err = clk_prepare_enable(sor->clk_safe);
if (err < 0)
return err;
diff --git a/drivers/gpu/drm/vgem/Makefile b/drivers/gpu/drm/vgem/Makefile
new file mode 100644
index 0000000..1055cb7
--- /dev/null
+++ b/drivers/gpu/drm/vgem/Makefile
@@ -0,0 +1,4 @@
+ccflags-y := -Iinclude/drm
+vgem-y := vgem_drv.o vgem_dma_buf.o
+
+obj-$(CONFIG_DRM_VGEM) += vgem.o
diff --git a/drivers/gpu/drm/vgem/vgem_dma_buf.c b/drivers/gpu/drm/vgem/vgem_dma_buf.c
new file mode 100644
index 0000000..0254438
--- /dev/null
+++ b/drivers/gpu/drm/vgem/vgem_dma_buf.c
@@ -0,0 +1,94 @@
+/*
+ * Copyright © 2012 Intel Corporation
+ * Copyright © 2014 The Chromium OS Authors
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ * Ben Widawsky <ben@bwidawsk.net>
+ *
+ */
+
+#include <linux/dma-buf.h>
+#include "vgem_drv.h"
+
+struct sg_table *vgem_gem_prime_get_sg_table(struct drm_gem_object *gobj)
+{
+ struct drm_vgem_gem_object *obj = to_vgem_bo(gobj);
+ BUG_ON(obj->pages == NULL);
+
+ return drm_prime_pages_to_sg(obj->pages, obj->base.size / PAGE_SIZE);
+}
+
+int vgem_gem_prime_pin(struct drm_gem_object *gobj)
+{
+ struct drm_vgem_gem_object *obj = to_vgem_bo(gobj);
+ return vgem_gem_get_pages(obj);
+}
+
+void vgem_gem_prime_unpin(struct drm_gem_object *gobj)
+{
+ struct drm_vgem_gem_object *obj = to_vgem_bo(gobj);
+ vgem_gem_put_pages(obj);
+}
+
+void *vgem_gem_prime_vmap(struct drm_gem_object *gobj)
+{
+ struct drm_vgem_gem_object *obj = to_vgem_bo(gobj);
+ BUG_ON(obj->pages == NULL);
+
+ return vmap(obj->pages, obj->base.size / PAGE_SIZE, 0, PAGE_KERNEL);
+}
+
+void vgem_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
+{
+ vunmap(vaddr);
+}
+
+struct drm_gem_object *vgem_gem_prime_import(struct drm_device *dev,
+ struct dma_buf *dma_buf)
+{
+ struct drm_vgem_gem_object *obj = NULL;
+ int ret;
+
+ obj = kzalloc(sizeof(*obj), GFP_KERNEL);
+ if (obj == NULL) {
+ ret = -ENOMEM;
+ goto fail;
+ }
+
+ ret = drm_gem_object_init(dev, &obj->base, dma_buf->size);
+ if (ret) {
+ ret = -ENOMEM;
+ goto fail_free;
+ }
+
+ get_dma_buf(dma_buf);
+
+ obj->base.dma_buf = dma_buf;
+ obj->use_dma_buf = true;
+
+ return &obj->base;
+
+fail_free:
+ kfree(obj);
+fail:
+ return ERR_PTR(ret);
+}
diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
new file mode 100644
index 0000000..cb3b435
--- /dev/null
+++ b/drivers/gpu/drm/vgem/vgem_drv.c
@@ -0,0 +1,364 @@
+/*
+ * Copyright 2011 Red Hat, Inc.
+ * Copyright © 2014 The Chromium OS Authors
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software")
+ * to deal in the software without restriction, including without limitation
+ * on the rights to use, copy, modify, merge, publish, distribute, sub
+ * license, and/or sell copies of the Software, and to permit persons to whom
+ * them Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTIBILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES, OR OTHER LIABILITY, WHETHER
+ * IN AN ACTION OF CONTRACT, TORT, OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors:
+ * Adam Jackson <ajax@redhat.com>
+ * Ben Widawsky <ben@bwidawsk.net>
+ */
+
+/**
+ * This is vgem, a (non-hardware-backed) GEM service. This is used by Mesa's
+ * software renderer and the X server for efficient buffer sharing.
+ */
+
+#include <linux/module.h>
+#include <linux/ramfs.h>
+#include <linux/shmem_fs.h>
+#include <linux/dma-buf.h>
+#include "vgem_drv.h"
+
+#define DRIVER_NAME "vgem"
+#define DRIVER_DESC "Virtual GEM provider"
+#define DRIVER_DATE "20120112"
+#define DRIVER_MAJOR 1
+#define DRIVER_MINOR 0
+
+void vgem_gem_put_pages(struct drm_vgem_gem_object *obj)
+{
+ drm_gem_put_pages(&obj->base, obj->pages, false, false);
+ obj->pages = NULL;
+}
+
+static void vgem_gem_free_object(struct drm_gem_object *obj)
+{
+ struct drm_vgem_gem_object *vgem_obj = to_vgem_bo(obj);
+
+ drm_gem_free_mmap_offset(obj);
+
+ if (vgem_obj->use_dma_buf && obj->dma_buf) {
+ dma_buf_put(obj->dma_buf);
+ obj->dma_buf = NULL;
+ }
+
+ drm_gem_object_release(obj);
+
+ if (vgem_obj->pages)
+ vgem_gem_put_pages(vgem_obj);
+
+ vgem_obj->pages = NULL;
+
+ kfree(vgem_obj);
+}
+
+int vgem_gem_get_pages(struct drm_vgem_gem_object *obj)
+{
+ struct page **pages;
+
+ if (obj->pages || obj->use_dma_buf)
+ return 0;
+
+ pages = drm_gem_get_pages(&obj->base);
+ if (IS_ERR(pages)) {
+ return PTR_ERR(pages);
+ }
+
+ obj->pages = pages;
+
+ return 0;
+}
+
+static int vgem_gem_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
+{
+ struct drm_vgem_gem_object *obj = vma->vm_private_data;
+ struct drm_device *dev = obj->base.dev;
+ loff_t num_pages;
+ pgoff_t page_offset;
+ int ret;
+
+ /* We don't use vmf->pgoff since that has the fake offset */
+ page_offset = ((unsigned long)vmf->virtual_address - vma->vm_start) >>
+ PAGE_SHIFT;
+
+ num_pages = DIV_ROUND_UP(obj->base.size, PAGE_SIZE);
+
+ if (page_offset > num_pages)
+ return VM_FAULT_SIGBUS;
+
+ mutex_lock(&dev->struct_mutex);
+
+ ret = vm_insert_page(vma, (unsigned long)vmf->virtual_address,
+ obj->pages[page_offset]);
+
+ mutex_unlock(&dev->struct_mutex);
+ switch (ret) {
+ case 0:
+ return VM_FAULT_NOPAGE;
+ case -ENOMEM:
+ return VM_FAULT_OOM;
+ case -EBUSY:
+ return VM_FAULT_RETRY;
+ case -EFAULT:
+ case -EINVAL:
+ return VM_FAULT_SIGBUS;
+ default:
+ WARN_ON(1);
+ return VM_FAULT_SIGBUS;
+ }
+}
+
+static struct vm_operations_struct vgem_gem_vm_ops = {
+ .fault = vgem_gem_fault,
+ .open = drm_gem_vm_open,
+ .close = drm_gem_vm_close,
+};
+
+/* ioctls */
+
+static struct drm_gem_object *vgem_gem_create(struct drm_device *dev,
+ struct drm_file *file,
+ unsigned int *handle,
+ unsigned long size)
+{
+ struct drm_vgem_gem_object *obj;
+ struct drm_gem_object *gem_object;
+ int err;
+
+ size = roundup(size, PAGE_SIZE);
+
+ obj = kzalloc(sizeof(*obj), GFP_KERNEL);
+ if (!obj)
+ return ERR_PTR(-ENOMEM);
+
+ gem_object = &obj->base;
+
+ err = drm_gem_object_init(dev, gem_object, size);
+ if (err)
+ goto out;
+
+ err = drm_gem_handle_create(file, gem_object, handle);
+ if (err)
+ goto handle_out;
+
+ drm_gem_object_unreference_unlocked(gem_object);
+
+ return gem_object;
+
+handle_out:
+ drm_gem_object_release(gem_object);
+out:
+ kfree(obj);
+ return ERR_PTR(err);
+}
+
+static int vgem_gem_dumb_create(struct drm_file *file, struct drm_device *dev,
+ struct drm_mode_create_dumb *args)
+{
+ struct drm_gem_object *gem_object;
+ uint64_t size;
+ uint64_t pitch = args->width * DIV_ROUND_UP(args->bpp, 8);
+
+ size = args->height * pitch;
+ if (size == 0)
+ return -EINVAL;
+
+ gem_object = vgem_gem_create(dev, file, &args->handle, size);
+
+ if (IS_ERR(gem_object)) {
+ DRM_DEBUG_DRIVER("object creation failed\n");
+ return PTR_ERR(gem_object);
+ }
+
+ args->size = gem_object->size;
+ args->pitch = pitch;
+
+ DRM_DEBUG_DRIVER("Created object of size %lld\n", size);
+
+ return 0;
+}
+
+int vgem_gem_dumb_map(struct drm_file *file, struct drm_device *dev,
+ uint32_t handle, uint64_t *offset)
+{
+ int ret = 0;
+ struct drm_gem_object *obj;
+
+ mutex_lock(&dev->struct_mutex);
+ obj = drm_gem_object_lookup(dev, file, handle);
+ if (!obj) {
+ ret = -ENOENT;
+ goto unlock;
+ }
+
+ if (!drm_vma_node_has_offset(&obj->vma_node)) {
+ ret = drm_gem_create_mmap_offset(obj);
+ if (ret)
+ goto unref;
+ }
+
+ BUG_ON(!obj->filp);
+
+ obj->filp->private_data = obj;
+
+ ret = vgem_gem_get_pages(to_vgem_bo(obj));
+ if (ret)
+ goto fail_get_pages;
+
+ *offset = drm_vma_node_offset_addr(&obj->vma_node);
+
+ goto unref;
+
+fail_get_pages:
+ drm_gem_free_mmap_offset(obj);
+unref:
+ drm_gem_object_unreference(obj);
+unlock:
+ mutex_unlock(&dev->struct_mutex);
+ return ret;
+}
+
+int vgem_drm_gem_mmap(struct file *filp, struct vm_area_struct *vma)
+{
+ struct drm_file *priv = filp->private_data;
+ struct drm_device *dev = priv->minor->dev;
+ struct drm_vma_offset_node *node;
+ struct drm_gem_object *obj;
+ struct drm_vgem_gem_object *vgem_obj;
+ int ret = 0;
+
+ mutex_lock(&dev->struct_mutex);
+
+ node = drm_vma_offset_exact_lookup(dev->vma_offset_manager,
+ vma->vm_pgoff,
+ vma_pages(vma));
+ if (!node) {
+ ret = -EINVAL;
+ goto out_unlock;
+ } else if (!drm_vma_node_is_allowed(node, filp)) {
+ ret = -EACCES;
+ goto out_unlock;
+ }
+
+ obj = container_of(node, struct drm_gem_object, vma_node);
+
+ vgem_obj = to_vgem_bo(obj);
+
+ if (obj->dma_buf && vgem_obj->use_dma_buf) {
+ ret = dma_buf_mmap(obj->dma_buf, vma, 0);
+ goto out_unlock;
+ }
+
+ if (!obj->dev->driver->gem_vm_ops) {
+ ret = -EINVAL;
+ goto out_unlock;
+ }
+
+ vma->vm_flags |= VM_IO | VM_MIXEDMAP | VM_DONTEXPAND | VM_DONTDUMP;
+ vma->vm_ops = obj->dev->driver->gem_vm_ops;
+ vma->vm_private_data = vgem_obj;
+ vma->vm_page_prot =
+ pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
+
+ mutex_unlock(&dev->struct_mutex);
+ drm_gem_vm_open(vma);
+ return ret;
+
+out_unlock:
+ mutex_unlock(&dev->struct_mutex);
+
+ return ret;
+}
+
+
+static struct drm_ioctl_desc vgem_ioctls[] = {
+};
+
+static const struct file_operations vgem_driver_fops = {
+ .owner = THIS_MODULE,
+ .open = drm_open,
+ .mmap = vgem_drm_gem_mmap,
+ .poll = drm_poll,
+ .read = drm_read,
+ .unlocked_ioctl = drm_ioctl,
+ .release = drm_release,
+};
+
+static struct drm_driver vgem_driver = {
+ .driver_features = DRIVER_GEM | DRIVER_PRIME,
+ .gem_free_object = vgem_gem_free_object,
+ .gem_vm_ops = &vgem_gem_vm_ops,
+ .ioctls = vgem_ioctls,
+ .fops = &vgem_driver_fops,
+ .dumb_create = vgem_gem_dumb_create,
+ .dumb_map_offset = vgem_gem_dumb_map,
+ .prime_handle_to_fd = drm_gem_prime_handle_to_fd,
+ .prime_fd_to_handle = drm_gem_prime_fd_to_handle,
+ .gem_prime_export = drm_gem_prime_export,
+ .gem_prime_import = vgem_gem_prime_import,
+ .gem_prime_pin = vgem_gem_prime_pin,
+ .gem_prime_unpin = vgem_gem_prime_unpin,
+ .gem_prime_get_sg_table = vgem_gem_prime_get_sg_table,
+ .gem_prime_vmap = vgem_gem_prime_vmap,
+ .gem_prime_vunmap = vgem_gem_prime_vunmap,
+ .name = DRIVER_NAME,
+ .desc = DRIVER_DESC,
+ .date = DRIVER_DATE,
+ .major = DRIVER_MAJOR,
+ .minor = DRIVER_MINOR,
+};
+
+struct drm_device *vgem_device;
+
+static int __init vgem_init(void)
+{
+ int ret;
+
+ vgem_device = drm_dev_alloc(&vgem_driver, NULL);
+ if (!vgem_device) {
+ ret = -ENOMEM;
+ goto out;
+ }
+
+ ret = drm_dev_register(vgem_device, 0);
+
+ if (ret)
+ goto out_unref;
+
+ return 0;
+
+out_unref:
+ drm_dev_unref(vgem_device);
+out:
+ return ret;
+}
+
+static void __exit vgem_exit(void)
+{
+ drm_dev_unregister(vgem_device);
+ drm_dev_unref(vgem_device);
+}
+
+module_init(vgem_init);
+module_exit(vgem_exit);
+
+MODULE_AUTHOR("Red Hat, Inc.");
+MODULE_DESCRIPTION(DRIVER_DESC);
+MODULE_LICENSE("GPL and additional rights");
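As a rough illustration of the path exercised by vgem_gem_dumb_create() and vgem_gem_dumb_map() above, a hedged user-space sketch follows; fd is assumed to be an already-opened DRM file descriptor for the vgem node, and the sizes, helper name and missing cleanup are assumptions for brevity, not part of the driver:
#include <fcntl.h>
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <xf86drm.h>

/* Create a 640x480, 32 bpp dumb buffer on the (assumed) vgem node and map it. */
static void *vgem_map_dumb(int fd, uint32_t *handle, uint64_t *size)
{
	struct drm_mode_create_dumb create;
	struct drm_mode_map_dumb map;

	memset(&create, 0, sizeof(create));
	create.width = 640;
	create.height = 480;
	create.bpp = 32;
	if (drmIoctl(fd, DRM_IOCTL_MODE_CREATE_DUMB, &create))	/* vgem_gem_dumb_create() */
		return NULL;

	memset(&map, 0, sizeof(map));
	map.handle = create.handle;
	if (drmIoctl(fd, DRM_IOCTL_MODE_MAP_DUMB, &map))	/* vgem_gem_dumb_map() */
		return NULL;

	*handle = create.handle;
	*size = create.size;
	/* The fault handler vgem_gem_fault() backs these pages on demand;
	 * the caller must compare the result against MAP_FAILED. */
	return mmap(NULL, create.size, PROT_READ | PROT_WRITE, MAP_SHARED,
		    fd, map.offset);
}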
diff --git a/drivers/gpu/drm/vgem/vgem_drv.h b/drivers/gpu/drm/vgem/vgem_drv.h
new file mode 100644
index 0000000..57ab4d8
--- /dev/null
+++ b/drivers/gpu/drm/vgem/vgem_drv.h
@@ -0,0 +1,57 @@
+/*
+ * Copyright © 2012 Intel Corporation
+ * Copyright © 2014 The Chromium OS Authors
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ * Ben Widawsky <ben@bwidawsk.net>
+ *
+ */
+
+#ifndef _VGEM_DRV_H_
+#define _VGEM_DRV_H_
+
+#include <drm/drmP.h>
+#include <drm/drm_gem.h>
+
+#define to_vgem_bo(x) container_of(x, struct drm_vgem_gem_object, base)
+struct drm_vgem_gem_object {
+ struct drm_gem_object base;
+ struct page **pages;
+ bool use_dma_buf;
+};
+
+/* vgem_drv.c */
+extern void vgem_gem_put_pages(struct drm_vgem_gem_object *obj);
+extern int vgem_gem_get_pages(struct drm_vgem_gem_object *obj);
+
+/* vgem_dma_buf.c */
+extern struct sg_table *vgem_gem_prime_get_sg_table(
+ struct drm_gem_object *gobj);
+extern int vgem_gem_prime_pin(struct drm_gem_object *gobj);
+extern void vgem_gem_prime_unpin(struct drm_gem_object *gobj);
+extern void *vgem_gem_prime_vmap(struct drm_gem_object *gobj);
+extern void vgem_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
+extern struct drm_gem_object *vgem_gem_prime_import(struct drm_device *dev,
+ struct dma_buf *dma_buf);
+
+
+#endif
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
index e13b9cb..620bb5c 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
@@ -134,7 +134,7 @@
*/
#define VMW_IOCTL_DEF(ioctl, func, flags) \
- [DRM_IOCTL_NR(DRM_IOCTL_##ioctl) - DRM_COMMAND_BASE] = {DRM_##ioctl, flags, func, DRM_IOCTL_##ioctl}
+ [DRM_IOCTL_NR(DRM_IOCTL_##ioctl) - DRM_COMMAND_BASE] = {DRM_IOCTL_##ioctl, flags, func}
/**
* Ioctl definitions.
@@ -1044,7 +1044,7 @@
const struct drm_ioctl_desc *ioctl =
&vmw_ioctls[nr - DRM_COMMAND_BASE];
- if (unlikely(ioctl->cmd_drv != cmd)) {
+ if (unlikely(ioctl->cmd != cmd)) {
DRM_ERROR("Invalid command format, ioctl %d\n",
nr - DRM_COMMAND_BASE);
return -EINVAL;
diff --git a/drivers/gpu/host1x/syncpt.c b/drivers/gpu/host1x/syncpt.c
index b10550e..6b7fdc1 100644
--- a/drivers/gpu/host1x/syncpt.c
+++ b/drivers/gpu/host1x/syncpt.c
@@ -425,6 +425,12 @@
}
EXPORT_SYMBOL(host1x_syncpt_read_min);
+u32 host1x_syncpt_read(struct host1x_syncpt *sp)
+{
+ return host1x_syncpt_load(sp);
+}
+EXPORT_SYMBOL(host1x_syncpt_read);
+
int host1x_syncpt_nb_pts(struct host1x *host)
{
return host->info->nb_pts;
diff --git a/drivers/gpu/ipu-v3/ipu-dc.c b/drivers/gpu/ipu-v3/ipu-dc.c
index 4864f83..9ef2e1f 100644
--- a/drivers/gpu/ipu-v3/ipu-dc.c
+++ b/drivers/gpu/ipu-v3/ipu-dc.c
@@ -147,20 +147,20 @@
writel(reg2, priv->dc_tmpl_reg + word * 8 + 4);
}
-static int ipu_pixfmt_to_map(u32 fmt)
+static int ipu_bus_format_to_map(u32 fmt)
{
switch (fmt) {
- case V4L2_PIX_FMT_RGB24:
+ case MEDIA_BUS_FMT_RGB888_1X24:
return IPU_DC_MAP_RGB24;
- case V4L2_PIX_FMT_RGB565:
+ case MEDIA_BUS_FMT_RGB565_1X16:
return IPU_DC_MAP_RGB565;
- case IPU_PIX_FMT_GBR24:
+ case MEDIA_BUS_FMT_GBR888_1X24:
return IPU_DC_MAP_GBR24;
- case V4L2_PIX_FMT_BGR666:
+ case MEDIA_BUS_FMT_RGB666_1X18:
return IPU_DC_MAP_BGR666;
- case v4l2_fourcc('L', 'V', 'D', '6'):
+ case MEDIA_BUS_FMT_RGB666_1X24_CPADHI:
return IPU_DC_MAP_LVDS666;
- case V4L2_PIX_FMT_BGR24:
+ case MEDIA_BUS_FMT_BGR888_1X24:
return IPU_DC_MAP_BGR24;
default:
return -EINVAL;
@@ -168,7 +168,7 @@
}
int ipu_dc_init_sync(struct ipu_dc *dc, struct ipu_di *di, bool interlaced,
- u32 pixel_fmt, u32 width)
+ u32 bus_format, u32 width)
{
struct ipu_dc_priv *priv = dc->priv;
u32 reg = 0;
@@ -176,7 +176,7 @@
dc->di = ipu_di_get_num(di);
- map = ipu_pixfmt_to_map(pixel_fmt);
+ map = ipu_bus_format_to_map(bus_format);
if (map < 0) {
dev_dbg(priv->dev, "IPU_DISP: No MAP\n");
return map;
diff --git a/drivers/gpu/ipu-v3/ipu-di.c b/drivers/gpu/ipu-v3/ipu-di.c
index 3ddfb3d..2970c6b 100644
--- a/drivers/gpu/ipu-v3/ipu-di.c
+++ b/drivers/gpu/ipu-v3/ipu-di.c
@@ -441,8 +441,7 @@
in_rate = clk_get_rate(clk);
div = DIV_ROUND_CLOSEST(in_rate, sig->mode.pixelclock);
- if (div == 0)
- div = 1;
+ div = clamp(div, 1U, 255U);
clkgen0 = div << 4;
}
@@ -459,8 +458,7 @@
clkrate = clk_get_rate(di->clk_ipu);
div = DIV_ROUND_CLOSEST(clkrate, sig->mode.pixelclock);
- if (div == 0)
- div = 1;
+ div = clamp(div, 1U, 255U);
rate = clkrate / div;
error = rate / (sig->mode.pixelclock / 1000);
@@ -483,8 +481,7 @@
in_rate = clk_get_rate(clk);
div = DIV_ROUND_CLOSEST(in_rate, sig->mode.pixelclock);
- if (div == 0)
- div = 1;
+ div = clamp(div, 1U, 255U);
clkgen0 = div << 4;
}
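A brief illustration of what the clamp() change above guards against: DIV_ROUND_CLOSEST() can return 0 when the requested pixel clock is higher than the input clock, and the upper bound of 255 presumably reflects the width of the divider field packed into clkgen0 (an assumption; the helper name below is illustrative):
static u32 di_clkgen0_example(unsigned long in_rate, unsigned long pixelclock)
{
	unsigned int div = DIV_ROUND_CLOSEST(in_rate, pixelclock);

	/* e.g. a 13.5 MHz input against a 74.25 MHz request rounds to 0 -> forced to 1 */
	div = clamp(div, 1U, 255U);

	return div << 4;	/* divider occupies the field starting at bit 4 */
}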
diff --git a/drivers/gpu/ipu-v3/ipu-ic.c b/drivers/gpu/ipu-v3/ipu-ic.c
index ad75588..1dcb96c 100644
--- a/drivers/gpu/ipu-v3/ipu-ic.c
+++ b/drivers/gpu/ipu-v3/ipu-ic.c
@@ -297,8 +297,8 @@
return -EINVAL;
}
- /* Cannot downsize more than 8:1 */
- if ((out_size << 3) < in_size) {
+ /* Cannot downsize more than 4:1 */
+ if ((out_size << 2) < in_size) {
dev_err(ipu->dev, "Unsupported downsize\n");
return -EINVAL;
}
diff --git a/drivers/hv/channel.c b/drivers/hv/channel.c
index 2978f5e..54da66d 100644
--- a/drivers/hv/channel.c
+++ b/drivers/hv/channel.c
@@ -71,7 +71,8 @@
struct vmbus_channel_msginfo *open_info = NULL;
void *in, *out;
unsigned long flags;
- int ret, t, err = 0;
+ int ret, err = 0;
+ unsigned long t;
spin_lock_irqsave(&newchannel->lock, flags);
if (newchannel->state == CHANNEL_OPEN_STATE) {
@@ -89,9 +90,10 @@
out = (void *)__get_free_pages(GFP_KERNEL|__GFP_ZERO,
get_order(send_ringbuffer_size + recv_ringbuffer_size));
- if (!out)
- return -ENOMEM;
-
+ if (!out) {
+ err = -ENOMEM;
+ goto error0;
+ }
in = (void *)((unsigned long)out + send_ringbuffer_size);
@@ -135,7 +137,7 @@
GFP_KERNEL);
if (!open_info) {
err = -ENOMEM;
- goto error0;
+ goto error_gpadl;
}
init_completion(&open_info->waitevent);
@@ -151,7 +153,7 @@
if (userdatalen > MAX_USER_DEFINED_BYTES) {
err = -EINVAL;
- goto error0;
+ goto error_gpadl;
}
if (userdatalen)
@@ -195,10 +197,14 @@
list_del(&open_info->msglistentry);
spin_unlock_irqrestore(&vmbus_connection.channelmsg_lock, flags);
+error_gpadl:
+ vmbus_teardown_gpadl(newchannel, newchannel->ringbuffer_gpadlhandle);
+
error0:
free_pages((unsigned long)out,
get_order(send_ringbuffer_size + recv_ringbuffer_size));
kfree(open_info);
+ newchannel->state = CHANNEL_OPEN_STATE;
return err;
}
EXPORT_SYMBOL_GPL(vmbus_open);
@@ -534,6 +540,12 @@
free_pages((unsigned long)channel->ringbuffer_pages,
get_order(channel->ringbuffer_pagecount * PAGE_SIZE));
+ /*
+ * If the channel has been rescinded, process device removal.
+ */
+ if (channel->rescind)
+ hv_process_channel_removal(channel,
+ channel->offermsg.child_relid);
return ret;
}
@@ -569,23 +581,9 @@
}
EXPORT_SYMBOL_GPL(vmbus_close);
-/**
- * vmbus_sendpacket() - Send the specified buffer on the given channel
- * @channel: Pointer to vmbus_channel structure.
- * @buffer: Pointer to the buffer you want to receive the data into.
- * @bufferlen: Maximum size of what the the buffer will hold
- * @requestid: Identifier of the request
- * @type: Type of packet that is being send e.g. negotiate, time
- * packet etc.
- *
- * Sends data in @buffer directly to hyper-v via the vmbus
- * This will send the data unparsed to hyper-v.
- *
- * Mainly used by Hyper-V drivers.
- */
-int vmbus_sendpacket(struct vmbus_channel *channel, void *buffer,
+int vmbus_sendpacket_ctl(struct vmbus_channel *channel, void *buffer,
u32 bufferlen, u64 requestid,
- enum vmbus_packet_type type, u32 flags)
+ enum vmbus_packet_type type, u32 flags, bool kick_q)
{
struct vmpacket_descriptor desc;
u32 packetlen = sizeof(struct vmpacket_descriptor) + bufferlen;
@@ -613,21 +611,61 @@
ret = hv_ringbuffer_write(&channel->outbound, bufferlist, 3, &signal);
- if (ret == 0 && signal)
+ /*
+ * Signalling the host is conditional on many factors:
+ * 1. The ring state changed from being empty to non-empty.
+ * This is tracked by the variable "signal".
+ * 2. The variable kick_q tracks if more data will be placed
+ * on the ring. We will not signal if more data is
+ * to be placed.
+ *
+	 * If we cannot write to the ring-buffer, signal the host
+ * even if we may not have written anything. This is a rare
+ * enough condition that it should not matter.
+ */
+ if (((ret == 0) && kick_q && signal) || (ret))
vmbus_setevent(channel);
return ret;
}
+EXPORT_SYMBOL(vmbus_sendpacket_ctl);
+
+/**
+ * vmbus_sendpacket() - Send the specified buffer on the given channel
+ * @channel: Pointer to vmbus_channel structure.
+ * @buffer: Pointer to the buffer containing the data to send.
+ * @bufferlen: Maximum size of what the buffer will hold
+ * @requestid: Identifier of the request
+ * @type: Type of packet that is being sent e.g. negotiate, time
+ * packet etc.
+ *
+ * Sends data in @buffer directly to hyper-v via the vmbus
+ * This will send the data unparsed to hyper-v.
+ *
+ * Mainly used by Hyper-V drivers.
+ */
+int vmbus_sendpacket(struct vmbus_channel *channel, void *buffer,
+ u32 bufferlen, u64 requestid,
+ enum vmbus_packet_type type, u32 flags)
+{
+ return vmbus_sendpacket_ctl(channel, buffer, bufferlen, requestid,
+ type, flags, true);
+}
EXPORT_SYMBOL(vmbus_sendpacket);
/*
- * vmbus_sendpacket_pagebuffer - Send a range of single-page buffer
- * packets using a GPADL Direct packet type.
+ * vmbus_sendpacket_pagebuffer_ctl - Send a range of single-page buffer
+ * packets using a GPADL Direct packet type. This interface allows you
+ * to control notifying the host. This will be useful for sending
+ * batched data. Also the sender can control the send flags
+ * explicitly.
*/
-int vmbus_sendpacket_pagebuffer(struct vmbus_channel *channel,
+int vmbus_sendpacket_pagebuffer_ctl(struct vmbus_channel *channel,
struct hv_page_buffer pagebuffers[],
u32 pagecount, void *buffer, u32 bufferlen,
- u64 requestid)
+ u64 requestid,
+ u32 flags,
+ bool kick_q)
{
int ret;
int i;
@@ -655,7 +693,7 @@
/* Setup the descriptor */
desc.type = VM_PKT_DATA_USING_GPA_DIRECT;
- desc.flags = VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED;
+ desc.flags = flags;
desc.dataoffset8 = descsize >> 3; /* in 8-byte granularity */
desc.length8 = (u16)(packetlen_aligned >> 3);
desc.transactionid = requestid;
@@ -676,11 +714,40 @@
ret = hv_ringbuffer_write(&channel->outbound, bufferlist, 3, &signal);
- if (ret == 0 && signal)
+ /*
+ * Signalling the host is conditional on many factors:
+ * 1. The ring state changed from being empty to non-empty.
+ * This is tracked by the variable "signal".
+ * 2. The variable kick_q tracks if more data will be placed
+ * on the ring. We will not signal if more data is
+ * to be placed.
+ *
+	 * If we cannot write to the ring-buffer, signal the host
+ * even if we may not have written anything. This is a rare
+ * enough condition that it should not matter.
+ */
+ if (((ret == 0) && kick_q && signal) || (ret))
vmbus_setevent(channel);
return ret;
}
+EXPORT_SYMBOL_GPL(vmbus_sendpacket_pagebuffer_ctl);
+
+/*
+ * vmbus_sendpacket_pagebuffer - Send a range of single-page buffer
+ * packets using a GPADL Direct packet type.
+ */
+int vmbus_sendpacket_pagebuffer(struct vmbus_channel *channel,
+ struct hv_page_buffer pagebuffers[],
+ u32 pagecount, void *buffer, u32 bufferlen,
+ u64 requestid)
+{
+ u32 flags = VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED;
+ return vmbus_sendpacket_pagebuffer_ctl(channel, pagebuffers, pagecount,
+ buffer, bufferlen, requestid,
+ flags, true);
+
+}
EXPORT_SYMBOL_GPL(vmbus_sendpacket_pagebuffer);
/*
diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
index 3736f71..0eeb1b3 100644
--- a/drivers/hv/channel_mgmt.c
+++ b/drivers/hv/channel_mgmt.c
@@ -32,12 +32,6 @@
#include "hyperv_vmbus.h"
-struct vmbus_channel_message_table_entry {
- enum vmbus_channel_message_type message_type;
- void (*message_handler)(struct vmbus_channel_message_header *msg);
-};
-
-
/**
* vmbus_prep_negotiate_resp() - Create default response for Hyper-V Negotiate message
* @icmsghdrp: Pointer to msg header structure
@@ -139,54 +133,29 @@
*/
static struct vmbus_channel *alloc_channel(void)
{
+ static atomic_t chan_num = ATOMIC_INIT(0);
struct vmbus_channel *channel;
channel = kzalloc(sizeof(*channel), GFP_ATOMIC);
if (!channel)
return NULL;
+ channel->id = atomic_inc_return(&chan_num);
spin_lock_init(&channel->inbound_lock);
spin_lock_init(&channel->lock);
INIT_LIST_HEAD(&channel->sc_list);
INIT_LIST_HEAD(&channel->percpu_list);
- channel->controlwq = create_workqueue("hv_vmbus_ctl");
- if (!channel->controlwq) {
- kfree(channel);
- return NULL;
- }
-
return channel;
}
/*
- * release_hannel - Release the vmbus channel object itself
- */
-static void release_channel(struct work_struct *work)
-{
- struct vmbus_channel *channel = container_of(work,
- struct vmbus_channel,
- work);
-
- destroy_workqueue(channel->controlwq);
-
- kfree(channel);
-}
-
-/*
* free_channel - Release the resources used by the vmbus channel object
*/
static void free_channel(struct vmbus_channel *channel)
{
-
- /*
- * We have to release the channel's workqueue/thread in the vmbus's
- * workqueue/thread context
- * ie we can't destroy ourselves.
- */
- INIT_WORK(&channel->work, release_channel);
- queue_work(vmbus_connection.work_queue, &channel->work);
+ kfree(channel);
}
static void percpu_channel_enq(void *arg)
@@ -204,33 +173,21 @@
list_del(&channel->percpu_list);
}
-/*
- * vmbus_process_rescind_offer -
- * Rescind the offer by initiating a device removal
- */
-static void vmbus_process_rescind_offer(struct work_struct *work)
+
+void hv_process_channel_removal(struct vmbus_channel *channel, u32 relid)
{
- struct vmbus_channel *channel = container_of(work,
- struct vmbus_channel,
- work);
+ struct vmbus_channel_relid_released msg;
unsigned long flags;
struct vmbus_channel *primary_channel;
- struct vmbus_channel_relid_released msg;
- struct device *dev;
-
- if (channel->device_obj) {
- dev = get_device(&channel->device_obj->device);
- if (dev) {
- vmbus_device_unregister(channel->device_obj);
- put_device(dev);
- }
- }
memset(&msg, 0, sizeof(struct vmbus_channel_relid_released));
- msg.child_relid = channel->offermsg.child_relid;
+ msg.child_relid = relid;
msg.header.msgtype = CHANNELMSG_RELID_RELEASED;
vmbus_post_msg(&msg, sizeof(struct vmbus_channel_relid_released));
+ if (channel == NULL)
+ return;
+
if (channel->target_cpu != get_cpu()) {
put_cpu();
smp_call_function_single(channel->target_cpu,
@@ -259,7 +216,6 @@
list_for_each_entry(channel, &vmbus_connection.chn_list, listentry) {
vmbus_device_unregister(channel->device_obj);
- kfree(channel->device_obj);
free_channel(channel);
}
}
@@ -268,15 +224,11 @@
* vmbus_process_offer - Process the offer by creating a channel/device
* associated with this offer
*/
-static void vmbus_process_offer(struct work_struct *work)
+static void vmbus_process_offer(struct vmbus_channel *newchannel)
{
- struct vmbus_channel *newchannel = container_of(work,
- struct vmbus_channel,
- work);
struct vmbus_channel *channel;
bool fnew = true;
bool enq = false;
- int ret;
unsigned long flags;
/* Make sure this is a new offer */
@@ -335,10 +287,11 @@
}
newchannel->state = CHANNEL_OPEN_STATE;
+ channel->num_sc++;
if (channel->sc_creation_callback != NULL)
channel->sc_creation_callback(newchannel);
- goto done_init_rescind;
+ return;
}
goto err_free_chan;
@@ -361,33 +314,35 @@
&newchannel->offermsg.offer.if_instance,
newchannel);
if (!newchannel->device_obj)
- goto err_free_chan;
+ goto err_deq_chan;
/*
* Add the new device to the bus. This will kick off device-driver
* binding which eventually invokes the device driver's AddDevice()
* method.
*/
- ret = vmbus_device_register(newchannel->device_obj);
- if (ret != 0) {
+ if (vmbus_device_register(newchannel->device_obj) != 0) {
pr_err("unable to add child device object (relid %d)\n",
- newchannel->offermsg.child_relid);
-
- spin_lock_irqsave(&vmbus_connection.channel_lock, flags);
- list_del(&newchannel->listentry);
- spin_unlock_irqrestore(&vmbus_connection.channel_lock, flags);
+ newchannel->offermsg.child_relid);
kfree(newchannel->device_obj);
- goto err_free_chan;
+ goto err_deq_chan;
}
-done_init_rescind:
- spin_lock_irqsave(&newchannel->lock, flags);
- /* The next possible work is rescind handling */
- INIT_WORK(&newchannel->work, vmbus_process_rescind_offer);
- /* Check if rescind offer was already received */
- if (newchannel->rescind)
- queue_work(newchannel->controlwq, &newchannel->work);
- spin_unlock_irqrestore(&newchannel->lock, flags);
return;
+
+err_deq_chan:
+ spin_lock_irqsave(&vmbus_connection.channel_lock, flags);
+ list_del(&newchannel->listentry);
+ spin_unlock_irqrestore(&vmbus_connection.channel_lock, flags);
+
+ if (newchannel->target_cpu != get_cpu()) {
+ put_cpu();
+ smp_call_function_single(newchannel->target_cpu,
+ percpu_channel_deq, newchannel, true);
+ } else {
+ percpu_channel_deq(newchannel);
+ put_cpu();
+ }
+
err_free_chan:
free_channel(newchannel);
}
@@ -411,6 +366,8 @@
{ HV_SCSI_GUID, },
/* Network */
{ HV_NIC_GUID, },
+ /* NetworkDirect Guest RDMA */
+ { HV_ND_GUID, },
};
@@ -511,8 +468,7 @@
newchannel->monitor_grp = (u8)offer->monitorid / 32;
newchannel->monitor_bit = (u8)offer->monitorid % 32;
- INIT_WORK(&newchannel->work, vmbus_process_offer);
- queue_work(newchannel->controlwq, &newchannel->work);
+ vmbus_process_offer(newchannel);
}
/*
@@ -525,28 +481,34 @@
struct vmbus_channel_rescind_offer *rescind;
struct vmbus_channel *channel;
unsigned long flags;
+ struct device *dev;
rescind = (struct vmbus_channel_rescind_offer *)hdr;
channel = relid2channel(rescind->child_relid);
- if (channel == NULL)
- /* Just return here, no channel found */
+ if (channel == NULL) {
+ hv_process_channel_removal(NULL, rescind->child_relid);
return;
+ }
spin_lock_irqsave(&channel->lock, flags);
channel->rescind = true;
- /*
- * channel->work.func != vmbus_process_rescind_offer means we are still
- * processing offer request and the rescind offer processing should be
- * postponed. It will be done at the very end of vmbus_process_offer()
- * as rescind flag is being checked there.
- */
- if (channel->work.func == vmbus_process_rescind_offer)
- /* work is initialized for vmbus_process_rescind_offer() from
- * vmbus_process_offer() where the channel got created */
- queue_work(channel->controlwq, &channel->work);
-
spin_unlock_irqrestore(&channel->lock, flags);
+
+ if (channel->device_obj) {
+ /*
+ * We will have to unregister this device from the
+ * driver core.
+ */
+ dev = get_device(&channel->device_obj->device);
+ if (dev) {
+ vmbus_device_unregister(channel->device_obj);
+ put_device(dev);
+ }
+ } else {
+ hv_process_channel_removal(channel,
+ channel->offermsg.child_relid);
+ }
}
/*
@@ -731,25 +693,25 @@
}
/* Channel message dispatch table */
-static struct vmbus_channel_message_table_entry
+struct vmbus_channel_message_table_entry
channel_message_table[CHANNELMSG_COUNT] = {
- {CHANNELMSG_INVALID, NULL},
- {CHANNELMSG_OFFERCHANNEL, vmbus_onoffer},
- {CHANNELMSG_RESCIND_CHANNELOFFER, vmbus_onoffer_rescind},
- {CHANNELMSG_REQUESTOFFERS, NULL},
- {CHANNELMSG_ALLOFFERS_DELIVERED, vmbus_onoffers_delivered},
- {CHANNELMSG_OPENCHANNEL, NULL},
- {CHANNELMSG_OPENCHANNEL_RESULT, vmbus_onopen_result},
- {CHANNELMSG_CLOSECHANNEL, NULL},
- {CHANNELMSG_GPADL_HEADER, NULL},
- {CHANNELMSG_GPADL_BODY, NULL},
- {CHANNELMSG_GPADL_CREATED, vmbus_ongpadl_created},
- {CHANNELMSG_GPADL_TEARDOWN, NULL},
- {CHANNELMSG_GPADL_TORNDOWN, vmbus_ongpadl_torndown},
- {CHANNELMSG_RELID_RELEASED, NULL},
- {CHANNELMSG_INITIATE_CONTACT, NULL},
- {CHANNELMSG_VERSION_RESPONSE, vmbus_onversion_response},
- {CHANNELMSG_UNLOAD, NULL},
+ {CHANNELMSG_INVALID, 0, NULL},
+ {CHANNELMSG_OFFERCHANNEL, 0, vmbus_onoffer},
+ {CHANNELMSG_RESCIND_CHANNELOFFER, 0, vmbus_onoffer_rescind},
+ {CHANNELMSG_REQUESTOFFERS, 0, NULL},
+ {CHANNELMSG_ALLOFFERS_DELIVERED, 1, vmbus_onoffers_delivered},
+ {CHANNELMSG_OPENCHANNEL, 0, NULL},
+ {CHANNELMSG_OPENCHANNEL_RESULT, 1, vmbus_onopen_result},
+ {CHANNELMSG_CLOSECHANNEL, 0, NULL},
+ {CHANNELMSG_GPADL_HEADER, 0, NULL},
+ {CHANNELMSG_GPADL_BODY, 0, NULL},
+ {CHANNELMSG_GPADL_CREATED, 1, vmbus_ongpadl_created},
+ {CHANNELMSG_GPADL_TEARDOWN, 0, NULL},
+ {CHANNELMSG_GPADL_TORNDOWN, 1, vmbus_ongpadl_torndown},
+ {CHANNELMSG_RELID_RELEASED, 0, NULL},
+ {CHANNELMSG_INITIATE_CONTACT, 0, NULL},
+ {CHANNELMSG_VERSION_RESPONSE, 1, vmbus_onversion_response},
+ {CHANNELMSG_UNLOAD, 0, NULL},
};
/*
@@ -787,7 +749,7 @@
{
struct vmbus_channel_message_header *msg;
struct vmbus_channel_msginfo *msginfo;
- int ret, t;
+ int ret;
msginfo = kmalloc(sizeof(*msginfo) +
sizeof(struct vmbus_channel_message_header),
@@ -795,8 +757,6 @@
if (!msginfo)
return -ENOMEM;
- init_completion(&msginfo->waitevent);
-
msg = (struct vmbus_channel_message_header *)msginfo->msg;
msg->msgtype = CHANNELMSG_REQUESTOFFERS;
@@ -810,14 +770,6 @@
goto cleanup;
}
- t = wait_for_completion_timeout(&msginfo->waitevent, 5*HZ);
- if (t == 0) {
- ret = -ETIMEDOUT;
- goto cleanup;
- }
-
-
-
cleanup:
kfree(msginfo);
@@ -826,9 +778,8 @@
/*
* Retrieve the (sub) channel on which to send an outgoing request.
- * When a primary channel has multiple sub-channels, we choose a
- * channel whose VCPU binding is closest to the VCPU on which
- * this call is being made.
+ * When a primary channel has multiple sub-channels, we try to
+ * distribute the load equally amongst all available channels.
*/
struct vmbus_channel *vmbus_get_outgoing_channel(struct vmbus_channel *primary)
{
@@ -836,11 +787,19 @@
int cur_cpu;
struct vmbus_channel *cur_channel;
struct vmbus_channel *outgoing_channel = primary;
- int cpu_distance, new_cpu_distance;
+ int next_channel;
+ int i = 1;
if (list_empty(&primary->sc_list))
return outgoing_channel;
+ next_channel = primary->next_oc++;
+
+ if (next_channel > (primary->num_sc)) {
+ primary->next_oc = 0;
+ return outgoing_channel;
+ }
+
cur_cpu = hv_context.vp_index[get_cpu()];
put_cpu();
list_for_each_safe(cur, tmp, &primary->sc_list) {
@@ -851,18 +810,10 @@
if (cur_channel->target_vp == cur_cpu)
return cur_channel;
- cpu_distance = ((outgoing_channel->target_vp > cur_cpu) ?
- (outgoing_channel->target_vp - cur_cpu) :
- (cur_cpu - outgoing_channel->target_vp));
+ if (i == next_channel)
+ return cur_channel;
- new_cpu_distance = ((cur_channel->target_vp > cur_cpu) ?
- (cur_channel->target_vp - cur_cpu) :
- (cur_cpu - cur_channel->target_vp));
-
- if (cpu_distance < new_cpu_distance)
- continue;
-
- outgoing_channel = cur_channel;
+ i++;
}
return outgoing_channel;
diff --git a/drivers/hv/connection.c b/drivers/hv/connection.c
index a63a795..b27220a 100644
--- a/drivers/hv/connection.c
+++ b/drivers/hv/connection.c
@@ -216,10 +216,21 @@
cleanup:
pr_err("Unable to connect to host\n");
- vmbus_connection.conn_state = DISCONNECTED;
- if (vmbus_connection.work_queue)
+ vmbus_connection.conn_state = DISCONNECTED;
+ vmbus_disconnect();
+
+ kfree(msginfo);
+
+ return ret;
+}
+
+void vmbus_disconnect(void)
+{
+ if (vmbus_connection.work_queue) {
+ drain_workqueue(vmbus_connection.work_queue);
destroy_workqueue(vmbus_connection.work_queue);
+ }
if (vmbus_connection.int_page) {
free_pages((unsigned long)vmbus_connection.int_page, 0);
@@ -230,10 +241,6 @@
free_pages((unsigned long)vmbus_connection.monitor_pages[1], 0);
vmbus_connection.monitor_pages[0] = NULL;
vmbus_connection.monitor_pages[1] = NULL;
-
- kfree(msginfo);
-
- return ret;
}
/*
@@ -311,10 +318,8 @@
*/
channel = pcpu_relid2channel(relid);
- if (!channel) {
- pr_err("channel not found for relid - %u\n", relid);
+ if (!channel)
return;
- }
/*
* A channel once created is persistent even when there
@@ -349,10 +354,7 @@
else
bytes_to_read = 0;
} while (read_state && (bytes_to_read != 0));
- } else {
- pr_err("no channel callback for relid - %u\n", relid);
}
-
}
/*
@@ -420,6 +422,7 @@
union hv_connection_id conn_id;
int ret = 0;
int retries = 0;
+ u32 msec = 1;
conn_id.asu32 = 0;
conn_id.u.id = VMBUS_MESSAGE_CONNECTION_ID;
@@ -429,13 +432,20 @@
* insufficient resources. Retry the operation a couple of
* times before giving up.
*/
- while (retries < 10) {
+ while (retries < 20) {
ret = hv_post_message(conn_id, 1, buffer, buflen);
switch (ret) {
+ case HV_STATUS_INVALID_CONNECTION_ID:
+ /*
+ * We could get this if we send messages too
+ * frequently.
+ */
+ ret = -EAGAIN;
+ break;
+ case HV_STATUS_INSUFFICIENT_MEMORY:
case HV_STATUS_INSUFFICIENT_BUFFERS:
ret = -ENOMEM;
- case -ENOMEM:
break;
case HV_STATUS_SUCCESS:
return ret;
@@ -445,7 +455,9 @@
}
retries++;
- msleep(100);
+ msleep(msec);
+ if (msec < 2048)
+ msec *= 2;
}
return ret;
}
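For reference, the vmbus_post_msg() hunk above replaces the fixed 100 ms retry delay with an exponential backoff: the delay starts at 1 ms, doubles up to 2048 ms, and the loop now allows 20 attempts. The standalone sketch below illustrates the same retry pattern outside the kernel; post_message() is a hypothetical stand-in for the real hypercall and is not part of the patch.

#include <stdio.h>
#include <unistd.h>

/* Hypothetical stand-in for the real hypercall: fails the first few calls. */
static int post_message(const void *buf, unsigned int len)
{
	static int calls;
	(void)buf; (void)len;
	return (++calls < 4) ? -1 : 0;	/* 0 == success */
}

static int post_with_backoff(const void *buf, unsigned int len)
{
	unsigned int msec = 1;
	int ret = -1;

	for (int retries = 0; retries < 20; retries++) {
		ret = post_message(buf, len);
		if (ret == 0)
			return 0;
		/* transient failure: back off, doubling the delay up to ~2s */
		usleep(msec * 1000);
		if (msec < 2048)
			msec *= 2;
	}
	return ret;	/* still failing after 20 attempts */
}

int main(void)
{
	printf("post_with_backoff: %d\n", post_with_backoff("msg", 3));
	return 0;
}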
diff --git a/drivers/hv/hv.c b/drivers/hv/hv.c
index 50e51a5..d3943bc 100644
--- a/drivers/hv/hv.c
+++ b/drivers/hv/hv.c
@@ -312,7 +312,11 @@
dev->features = CLOCK_EVT_FEAT_ONESHOT;
dev->cpumask = cpumask_of(cpu);
dev->rating = 1000;
- dev->owner = THIS_MODULE;
+ /*
+	 * We deliberately avoid setting dev->owner = THIS_MODULE, as doing so
+	 * would result in clockevents_config_and_register() taking additional
+	 * references to the hv_vmbus module, making it impossible to unload.
+ */
dev->set_mode = hv_ce_setmode;
dev->set_next_event = hv_ce_set_next_event;
@@ -470,6 +474,20 @@
}
/*
+ * hv_synic_clockevents_cleanup - Cleanup clockevent devices
+ */
+void hv_synic_clockevents_cleanup(void)
+{
+ int cpu;
+
+ if (!(ms_hyperv.features & HV_X64_MSR_SYNTIMER_AVAILABLE))
+ return;
+
+ for_each_online_cpu(cpu)
+ clockevents_unbind_device(hv_context.clk_evt[cpu], cpu);
+}
+
+/*
* hv_synic_cleanup - Cleanup routine for hv_synic_init().
*/
void hv_synic_cleanup(void *arg)
@@ -477,11 +495,17 @@
union hv_synic_sint shared_sint;
union hv_synic_simp simp;
union hv_synic_siefp siefp;
+ union hv_synic_scontrol sctrl;
int cpu = smp_processor_id();
if (!hv_context.synic_initialized)
return;
+ /* Turn off clockevent device */
+ if (ms_hyperv.features & HV_X64_MSR_SYNTIMER_AVAILABLE)
+ hv_ce_setmode(CLOCK_EVT_MODE_SHUTDOWN,
+ hv_context.clk_evt[cpu]);
+
rdmsrl(HV_X64_MSR_SINT0 + VMBUS_MESSAGE_SINT, shared_sint.as_uint64);
shared_sint.masked = 1;
@@ -502,6 +526,10 @@
wrmsrl(HV_X64_MSR_SIEFP, siefp.as_uint64);
- free_page((unsigned long)hv_context.synic_message_page[cpu]);
- free_page((unsigned long)hv_context.synic_event_page[cpu]);
+ /* Disable the global synic bit */
+ rdmsrl(HV_X64_MSR_SCONTROL, sctrl.as_uint64);
+ sctrl.enable = 0;
+ wrmsrl(HV_X64_MSR_SCONTROL, sctrl.as_uint64);
+
+ hv_synic_free_cpu(cpu);
}
diff --git a/drivers/hv/hv_balloon.c b/drivers/hv/hv_balloon.c
index ff16938..cb5b7dc 100644
--- a/drivers/hv/hv_balloon.c
+++ b/drivers/hv/hv_balloon.c
@@ -428,14 +428,13 @@
* currently hot added. We hot add in multiples of 128M
* chunks; it is possible that we may not be able to bring
* online all the pages in the region. The range
- * covered_start_pfn : covered_end_pfn defines the pages that can
+ * covered_end_pfn defines the pages that can
* be brought online.
*/
struct hv_hotadd_state {
struct list_head list;
unsigned long start_pfn;
- unsigned long covered_start_pfn;
unsigned long covered_end_pfn;
unsigned long ha_end_pfn;
unsigned long end_pfn;
@@ -503,6 +502,8 @@
* Number of pages we have currently ballooned out.
*/
unsigned int num_pages_ballooned;
+ unsigned int num_pages_onlined;
+ unsigned int num_pages_added;
/*
* State to manage the ballooning (up) operation.
@@ -534,7 +535,6 @@
struct task_struct *thread;
struct mutex ha_region_mutex;
- struct completion waiter_event;
/*
* A list of hot-add regions.
@@ -554,46 +554,32 @@
static void post_status(struct hv_dynmem_device *dm);
#ifdef CONFIG_MEMORY_HOTPLUG
-static void acquire_region_mutex(bool trylock)
-{
- if (trylock) {
- reinit_completion(&dm_device.waiter_event);
- while (!mutex_trylock(&dm_device.ha_region_mutex))
- wait_for_completion(&dm_device.waiter_event);
- } else {
- mutex_lock(&dm_device.ha_region_mutex);
- }
-}
-
-static void release_region_mutex(bool trylock)
-{
- if (trylock) {
- mutex_unlock(&dm_device.ha_region_mutex);
- } else {
- mutex_unlock(&dm_device.ha_region_mutex);
- complete(&dm_device.waiter_event);
- }
-}
-
static int hv_memory_notifier(struct notifier_block *nb, unsigned long val,
void *v)
{
+ struct memory_notify *mem = (struct memory_notify *)v;
+
switch (val) {
case MEM_GOING_ONLINE:
- acquire_region_mutex(true);
+ mutex_lock(&dm_device.ha_region_mutex);
break;
case MEM_ONLINE:
+ dm_device.num_pages_onlined += mem->nr_pages;
case MEM_CANCEL_ONLINE:
- release_region_mutex(true);
+ mutex_unlock(&dm_device.ha_region_mutex);
if (dm_device.ha_waiting) {
dm_device.ha_waiting = false;
complete(&dm_device.ol_waitevent);
}
break;
- case MEM_GOING_OFFLINE:
case MEM_OFFLINE:
+ mutex_lock(&dm_device.ha_region_mutex);
+ dm_device.num_pages_onlined -= mem->nr_pages;
+ mutex_unlock(&dm_device.ha_region_mutex);
+ break;
+ case MEM_GOING_OFFLINE:
case MEM_CANCEL_OFFLINE:
break;
}
@@ -646,7 +632,7 @@
init_completion(&dm_device.ol_waitevent);
dm_device.ha_waiting = true;
- release_region_mutex(false);
+ mutex_unlock(&dm_device.ha_region_mutex);
nid = memory_add_physaddr_to_nid(PFN_PHYS(start_pfn));
ret = add_memory(nid, PFN_PHYS((start_pfn)),
(HA_CHUNK << PAGE_SHIFT));
@@ -665,6 +651,7 @@
}
has->ha_end_pfn -= HA_CHUNK;
has->covered_end_pfn -= processed_pfn;
+ mutex_lock(&dm_device.ha_region_mutex);
break;
}
@@ -675,7 +662,7 @@
* have not been "onlined" within the allowed time.
*/
wait_for_completion_timeout(&dm_device.ol_waitevent, 5*HZ);
- acquire_region_mutex(false);
+ mutex_lock(&dm_device.ha_region_mutex);
post_status(&dm_device);
}
@@ -691,8 +678,7 @@
list_for_each(cur, &dm_device.ha_region_list) {
has = list_entry(cur, struct hv_hotadd_state, list);
- cur_start_pgp = (unsigned long)
- pfn_to_page(has->covered_start_pfn);
+ cur_start_pgp = (unsigned long)pfn_to_page(has->start_pfn);
cur_end_pgp = (unsigned long)pfn_to_page(has->covered_end_pfn);
if (((unsigned long)pg >= cur_start_pgp) &&
@@ -704,7 +690,6 @@
__online_page_set_limits(pg);
__online_page_increment_counters(pg);
__online_page_free(pg);
- has->covered_start_pfn++;
}
}
}
@@ -748,10 +733,9 @@
* is, update it.
*/
- if (has->covered_end_pfn != start_pfn) {
+ if (has->covered_end_pfn != start_pfn)
has->covered_end_pfn = start_pfn;
- has->covered_start_pfn = start_pfn;
- }
+
return true;
}
@@ -794,9 +778,18 @@
pgs_ol = has->ha_end_pfn - start_pfn;
if (pgs_ol > pfn_cnt)
pgs_ol = pfn_cnt;
- hv_bring_pgs_online(start_pfn, pgs_ol);
+
+ /*
+ * Check if the corresponding memory block is already
+ * online by checking its last previously backed page.
+		 * In case it is, we need to bring the rest (which was
+		 * not backed previously) online too.
+ */
+ if (start_pfn > has->start_pfn &&
+ !PageReserved(pfn_to_page(start_pfn - 1)))
+ hv_bring_pgs_online(start_pfn, pgs_ol);
+
has->covered_end_pfn += pgs_ol;
- has->covered_start_pfn += pgs_ol;
pfn_cnt -= pgs_ol;
}
@@ -857,7 +850,6 @@
list_add_tail(&ha_region->list, &dm_device.ha_region_list);
ha_region->start_pfn = rg_start;
ha_region->ha_end_pfn = rg_start;
- ha_region->covered_start_pfn = pg_start;
ha_region->covered_end_pfn = pg_start;
ha_region->end_pfn = rg_start + rg_size;
}
@@ -886,7 +878,7 @@
resp.hdr.size = sizeof(struct dm_hot_add_response);
#ifdef CONFIG_MEMORY_HOTPLUG
- acquire_region_mutex(false);
+ mutex_lock(&dm_device.ha_region_mutex);
pg_start = dm->ha_wrk.ha_page_range.finfo.start_page;
pfn_cnt = dm->ha_wrk.ha_page_range.finfo.page_cnt;
@@ -918,7 +910,9 @@
if (do_hot_add)
resp.page_count = process_hot_add(pg_start, pfn_cnt,
rg_start, rg_sz);
- release_region_mutex(false);
+
+ dm->num_pages_added += resp.page_count;
+ mutex_unlock(&dm_device.ha_region_mutex);
#endif
/*
* The result field of the response structure has the
@@ -982,8 +976,8 @@
* 128 72 (1/2)
* 512 168 (1/4)
* 2048 360 (1/8)
- * 8192 768 (1/16)
- * 32768 1536 (1/32)
+ * 8192 744 (1/16)
+ * 32768 1512 (1/32)
*/
if (totalram_pages < MB2PAGES(128))
min_pages = MB2PAGES(8) + (totalram_pages >> 1);
@@ -992,9 +986,9 @@
else if (totalram_pages < MB2PAGES(2048))
min_pages = MB2PAGES(104) + (totalram_pages >> 3);
else if (totalram_pages < MB2PAGES(8192))
- min_pages = MB2PAGES(256) + (totalram_pages >> 4);
+ min_pages = MB2PAGES(232) + (totalram_pages >> 4);
else
- min_pages = MB2PAGES(512) + (totalram_pages >> 5);
+ min_pages = MB2PAGES(488) + (totalram_pages >> 5);
#undef MB2PAGES
return min_pages;
}
@@ -1031,17 +1025,21 @@
status.hdr.trans_id = atomic_inc_return(&trans_id);
/*
- * The host expects the guest to report free memory.
- * Further, the host expects the pressure information to
- * include the ballooned out pages.
- * For a given amount of memory that we are managing, we
- * need to compute a floor below which we should not balloon.
- * Compute this and add it to the pressure report.
+ * The host expects the guest to report free and committed memory.
+ * Furthermore, the host expects the pressure information to include
+ * the ballooned out pages. For a given amount of memory that we are
+	 * managing, we need to compute a floor below which we should not
+ * balloon. Compute this and add it to the pressure report.
+ * We also need to report all offline pages (num_pages_added -
+ * num_pages_onlined) as committed to the host, otherwise it can try
+ * asking us to balloon them out.
*/
status.num_avail = val.freeram;
status.num_committed = vm_memory_committed() +
- dm->num_pages_ballooned +
- compute_balloon_floor();
+ dm->num_pages_ballooned +
+ (dm->num_pages_added > dm->num_pages_onlined ?
+ dm->num_pages_added - dm->num_pages_onlined : 0) +
+ compute_balloon_floor();
/*
* If our transaction ID is no longer current, just don't
@@ -1083,11 +1081,12 @@
-static int alloc_balloon_pages(struct hv_dynmem_device *dm, int num_pages,
- struct dm_balloon_response *bl_resp, int alloc_unit,
- bool *alloc_error)
+static unsigned int alloc_balloon_pages(struct hv_dynmem_device *dm,
+ unsigned int num_pages,
+ struct dm_balloon_response *bl_resp,
+ int alloc_unit)
{
- int i = 0;
+ unsigned int i = 0;
struct page *pg;
if (num_pages < alloc_unit)
@@ -1106,11 +1105,8 @@
__GFP_NOMEMALLOC | __GFP_NOWARN,
get_order(alloc_unit << PAGE_SHIFT));
- if (!pg) {
- *alloc_error = true;
+ if (!pg)
return i * alloc_unit;
- }
-
dm->num_pages_ballooned += alloc_unit;
@@ -1137,14 +1133,15 @@
static void balloon_up(struct work_struct *dummy)
{
- int num_pages = dm_device.balloon_wrk.num_pages;
- int num_ballooned = 0;
+ unsigned int num_pages = dm_device.balloon_wrk.num_pages;
+ unsigned int num_ballooned = 0;
struct dm_balloon_response *bl_resp;
int alloc_unit;
int ret;
- bool alloc_error;
bool done = false;
int i;
+ struct sysinfo val;
+ unsigned long floor;
/* The host balloons pages in 2M granularity. */
WARN_ON_ONCE(num_pages % PAGES_IN_2M != 0);
@@ -1155,6 +1152,15 @@
*/
alloc_unit = 512;
+ si_meminfo(&val);
+ floor = compute_balloon_floor();
+
+ /* Refuse to balloon below the floor, keep the 2M granularity. */
+ if (val.freeram < num_pages || val.freeram - num_pages < floor) {
+ num_pages = val.freeram > floor ? (val.freeram - floor) : 0;
+ num_pages -= num_pages % PAGES_IN_2M;
+ }
+
while (!done) {
bl_resp = (struct dm_balloon_response *)send_buffer;
memset(send_buffer, 0, PAGE_SIZE);
@@ -1164,18 +1170,15 @@
num_pages -= num_ballooned;
- alloc_error = false;
num_ballooned = alloc_balloon_pages(&dm_device, num_pages,
- bl_resp, alloc_unit,
- &alloc_error);
+ bl_resp, alloc_unit);
if (alloc_unit != 1 && num_ballooned == 0) {
alloc_unit = 1;
continue;
}
- if ((alloc_unit == 1 && alloc_error) ||
- (num_ballooned == num_pages)) {
+ if (num_ballooned == 0 || num_ballooned == num_pages) {
bl_resp->more_pages = 0;
done = true;
dm_device.state = DM_INITIALIZED;
@@ -1414,7 +1417,8 @@
static int balloon_probe(struct hv_device *dev,
const struct hv_vmbus_device_id *dev_id)
{
- int ret, t;
+ int ret;
+ unsigned long t;
struct dm_version_request version_req;
struct dm_capabilities cap_msg;
@@ -1439,7 +1443,6 @@
dm_device.next_version = DYNMEM_PROTOCOL_VERSION_WIN7;
init_completion(&dm_device.host_event);
init_completion(&dm_device.config_event);
- init_completion(&dm_device.waiter_event);
INIT_LIST_HEAD(&dm_device.ha_region_list);
mutex_init(&dm_device.ha_region_mutex);
INIT_WORK(&dm_device.balloon_wrk.wrk, balloon_up);
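For reference, the balloon_up() change above caps the host's request so the guest never balloons below compute_balloon_floor(), then rounds the result down to the 2 MB granularity the host expects. A small standalone sketch of that clamping arithmetic follows; the values for free memory and the floor are made up for illustration, not taken from the patch.

#include <stdio.h>

#define PAGES_IN_2M 512U	/* 2M worth of 4K pages, as on x86 */

/* Sketch of the clamping done in balloon_up(); inputs are hypothetical. */
static unsigned int clamp_balloon_request(unsigned int num_pages,
					  unsigned long freeram,
					  unsigned long floor)
{
	if (freeram < num_pages || freeram - num_pages < floor) {
		/* Never balloon below the floor... */
		num_pages = freeram > floor ? (unsigned int)(freeram - floor) : 0;
		/* ...and keep the 2M granularity the host expects. */
		num_pages -= num_pages % PAGES_IN_2M;
	}
	return num_pages;
}

int main(void)
{
	/* Host asks for 8192 pages, but only 5000 are free above a 2048-page floor. */
	printf("%u\n", clamp_balloon_request(8192, 5000, 2048));	/* prints 2560 */
	return 0;
}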
diff --git a/drivers/hv/hv_util.c b/drivers/hv/hv_util.c
index 3b9c9ef..7994ec2 100644
--- a/drivers/hv/hv_util.c
+++ b/drivers/hv/hv_util.c
@@ -340,12 +340,8 @@
set_channel_read_state(dev->channel, false);
- ret = vmbus_open(dev->channel, 4 * PAGE_SIZE, 4 * PAGE_SIZE, NULL, 0,
- srv->util_cb, dev->channel);
- if (ret)
- goto error;
-
hv_set_drvdata(dev, srv);
+
/*
* Based on the host; initialize the framework and
* service version numbers we will negotiate.
@@ -365,6 +361,11 @@
hb_srv_version = HB_VERSION;
}
+ ret = vmbus_open(dev->channel, 4 * PAGE_SIZE, 4 * PAGE_SIZE, NULL, 0,
+ srv->util_cb, dev->channel);
+ if (ret)
+ goto error;
+
return 0;
error:
@@ -379,9 +380,9 @@
{
struct hv_util_service *srv = hv_get_drvdata(dev);
- vmbus_close(dev->channel);
if (srv->util_deinit)
srv->util_deinit();
+ vmbus_close(dev->channel);
kfree(srv->recv_buffer);
return 0;
diff --git a/drivers/hv/hyperv_vmbus.h b/drivers/hv/hyperv_vmbus.h
index 44b1c94..887287a 100644
--- a/drivers/hv/hyperv_vmbus.h
+++ b/drivers/hv/hyperv_vmbus.h
@@ -49,6 +49,17 @@
HVCPUID_IMPLEMENTATION_LIMITS = 0x40000005,
};
+#define HV_FEATURE_GUEST_CRASH_MSR_AVAILABLE 0x400
+
+#define HV_X64_MSR_CRASH_P0 0x40000100
+#define HV_X64_MSR_CRASH_P1 0x40000101
+#define HV_X64_MSR_CRASH_P2 0x40000102
+#define HV_X64_MSR_CRASH_P3 0x40000103
+#define HV_X64_MSR_CRASH_P4 0x40000104
+#define HV_X64_MSR_CRASH_CTL 0x40000105
+
+#define HV_CRASH_CTL_CRASH_NOTIFY (1ULL << 63)
+
/* Define version of the synthetic interrupt controller. */
#define HV_SYNIC_VERSION (1)
@@ -572,6 +583,8 @@
extern void hv_synic_cleanup(void *arg);
+extern void hv_synic_clockevents_cleanup(void);
+
/*
* Host version information.
*/
@@ -672,6 +685,23 @@
extern struct vmbus_connection vmbus_connection;
+enum vmbus_message_handler_type {
+ /* The related handler can sleep. */
+ VMHT_BLOCKING = 0,
+
+ /* The related handler must NOT sleep. */
+ VMHT_NON_BLOCKING = 1,
+};
+
+struct vmbus_channel_message_table_entry {
+ enum vmbus_channel_message_type message_type;
+ enum vmbus_message_handler_type handler_type;
+ void (*message_handler)(struct vmbus_channel_message_header *msg);
+};
+
+extern struct vmbus_channel_message_table_entry
+ channel_message_table[CHANNELMSG_COUNT];
+
/* General vmbus interface */
struct hv_device *vmbus_device_create(const uuid_le *type,
@@ -692,6 +722,7 @@
/* Connection interface */
int vmbus_connect(void);
+void vmbus_disconnect(void);
int vmbus_post_msg(void *buffer, size_t buflen);
diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
index f518b8d7..c85235e 100644
--- a/drivers/hv/vmbus_drv.c
+++ b/drivers/hv/vmbus_drv.c
@@ -33,9 +33,12 @@
#include <linux/hyperv.h>
#include <linux/kernel_stat.h>
#include <linux/clockchips.h>
+#include <linux/cpu.h>
#include <asm/hyperv.h>
#include <asm/hypervisor.h>
#include <asm/mshyperv.h>
+#include <linux/notifier.h>
+#include <linux/ptrace.h>
#include "hyperv_vmbus.h"
static struct acpi_device *hv_acpi_dev;
@@ -44,6 +47,31 @@
static struct completion probe_event;
static int irq;
+
+static int hyperv_panic_event(struct notifier_block *nb,
+ unsigned long event, void *ptr)
+{
+ struct pt_regs *regs;
+
+ regs = current_pt_regs();
+
+ wrmsrl(HV_X64_MSR_CRASH_P0, regs->ip);
+ wrmsrl(HV_X64_MSR_CRASH_P1, regs->ax);
+ wrmsrl(HV_X64_MSR_CRASH_P2, regs->bx);
+ wrmsrl(HV_X64_MSR_CRASH_P3, regs->cx);
+ wrmsrl(HV_X64_MSR_CRASH_P4, regs->dx);
+
+ /*
+ * Let Hyper-V know there is crash data available
+ */
+ wrmsrl(HV_X64_MSR_CRASH_CTL, HV_CRASH_CTL_CRASH_NOTIFY);
+ return NOTIFY_DONE;
+}
+
+static struct notifier_block hyperv_panic_block = {
+ .notifier_call = hyperv_panic_event,
+};
+
struct resource hyperv_mmio = {
.name = "hyperv mmio",
.flags = IORESOURCE_MEM,
@@ -507,14 +535,26 @@
*/
static int vmbus_remove(struct device *child_device)
{
- struct hv_driver *drv = drv_to_hv_drv(child_device->driver);
+ struct hv_driver *drv;
struct hv_device *dev = device_to_hv_device(child_device);
+ u32 relid = dev->channel->offermsg.child_relid;
- if (drv->remove)
- drv->remove(dev);
- else
- pr_err("remove not set for driver %s\n",
- dev_name(child_device));
+ if (child_device->driver) {
+ drv = drv_to_hv_drv(child_device->driver);
+ if (drv->remove)
+ drv->remove(dev);
+ else {
+ hv_process_channel_removal(dev->channel, relid);
+ pr_err("remove not set for driver %s\n",
+ dev_name(child_device));
+ }
+ } else {
+ /*
+ * We don't have a driver for this device; deal with the
+ * rescind message by removing the channel.
+ */
+ hv_process_channel_removal(dev->channel, relid);
+ }
return 0;
}
@@ -573,6 +613,10 @@
{
struct onmessage_work_context *ctx;
+ /* Do not process messages if we're in DISCONNECTED state */
+ if (vmbus_connection.conn_state == DISCONNECTED)
+ return;
+
ctx = container_of(work, struct onmessage_work_context,
work);
vmbus_onmessage(&ctx->msg);
@@ -613,21 +657,36 @@
void *page_addr = hv_context.synic_message_page[cpu];
struct hv_message *msg = (struct hv_message *)page_addr +
VMBUS_MESSAGE_SINT;
+ struct vmbus_channel_message_header *hdr;
+ struct vmbus_channel_message_table_entry *entry;
struct onmessage_work_context *ctx;
while (1) {
- if (msg->header.message_type == HVMSG_NONE) {
+ if (msg->header.message_type == HVMSG_NONE)
/* no msg */
break;
- } else {
+
+ hdr = (struct vmbus_channel_message_header *)msg->u.payload;
+
+ if (hdr->msgtype >= CHANNELMSG_COUNT) {
+ WARN_ONCE(1, "unknown msgtype=%d\n", hdr->msgtype);
+ goto msg_handled;
+ }
+
+ entry = &channel_message_table[hdr->msgtype];
+ if (entry->handler_type == VMHT_BLOCKING) {
ctx = kmalloc(sizeof(*ctx), GFP_ATOMIC);
if (ctx == NULL)
continue;
+
INIT_WORK(&ctx->work, vmbus_onmessage_work);
memcpy(&ctx->msg, msg, sizeof(*msg));
- queue_work(vmbus_connection.work_queue, &ctx->work);
- }
+ queue_work(vmbus_connection.work_queue, &ctx->work);
+ } else
+ entry->message_handler(hdr);
+
+msg_handled:
msg->header.message_type = HVMSG_NONE;
/*
@@ -704,6 +763,39 @@
}
}
+#ifdef CONFIG_HOTPLUG_CPU
+static int hyperv_cpu_disable(void)
+{
+ return -ENOSYS;
+}
+
+static void hv_cpu_hotplug_quirk(bool vmbus_loaded)
+{
+ static void *previous_cpu_disable;
+
+ /*
+ * Offlining a CPU when running on newer hypervisors (WS2012R2, Win8,
+ * ...) is not supported at this moment as channel interrupts are
+	 * distributed across all online CPUs.
+ */
+
+ if ((vmbus_proto_version == VERSION_WS2008) ||
+ (vmbus_proto_version == VERSION_WIN7))
+ return;
+
+ if (vmbus_loaded) {
+ previous_cpu_disable = smp_ops.cpu_disable;
+ smp_ops.cpu_disable = hyperv_cpu_disable;
+ pr_notice("CPU offlining is not supported by hypervisor\n");
+ } else if (previous_cpu_disable)
+ smp_ops.cpu_disable = previous_cpu_disable;
+}
+#else
+static void hv_cpu_hotplug_quirk(bool vmbus_loaded)
+{
+}
+#endif
+
/*
* vmbus_bus_init -Main vmbus driver initialization routine.
*
@@ -744,6 +836,16 @@
if (ret)
goto err_alloc;
+ hv_cpu_hotplug_quirk(true);
+
+ /*
+ * Only register if the crash MSRs are available
+ */
+ if (ms_hyperv.features & HV_FEATURE_GUEST_CRASH_MSR_AVAILABLE) {
+ atomic_notifier_chain_register(&panic_notifier_list,
+ &hyperv_panic_block);
+ }
+
vmbus_request_offers();
return 0;
@@ -840,10 +942,8 @@
{
int ret = 0;
- static atomic_t device_num = ATOMIC_INIT(0);
-
- dev_set_name(&child_device_obj->device, "vmbus_0_%d",
- atomic_inc_return(&device_num));
+ dev_set_name(&child_device_obj->device, "vmbus_%d",
+ child_device_obj->channel->id);
child_device_obj->device.bus = &hv_bus;
child_device_obj->device.parent = &hv_acpi_dev->dev;
@@ -992,11 +1092,19 @@
static void __exit vmbus_exit(void)
{
+ int cpu;
+
+ vmbus_connection.conn_state = DISCONNECTED;
+ hv_synic_clockevents_cleanup();
hv_remove_vmbus_irq();
vmbus_free_channels();
bus_unregister(&hv_bus);
hv_cleanup();
+ for_each_online_cpu(cpu)
+ smp_call_function_single(cpu, hv_synic_cleanup, NULL, 1);
acpi_bus_unregister_driver(&vmbus_acpi_driver);
+ hv_cpu_hotplug_quirk(false);
+ vmbus_disconnect();
}
diff --git a/drivers/hwtracing/coresight/Kconfig b/drivers/hwtracing/coresight/Kconfig
new file mode 100644
index 0000000..fc1f1ae
--- /dev/null
+++ b/drivers/hwtracing/coresight/Kconfig
@@ -0,0 +1,61 @@
+#
+# Coresight configuration
+#
+menuconfig CORESIGHT
+ bool "CoreSight Tracing Support"
+ select ARM_AMBA
+ help
+ This framework provides a kernel interface for the CoreSight debug
+ and trace drivers to register themselves with. It's intended to build
+ a topological view of the CoreSight components based on a DT
+	  specification and configure the right series of components when a
+ trace source gets enabled.
+
+if CORESIGHT
+config CORESIGHT_LINKS_AND_SINKS
+ bool "CoreSight Link and Sink drivers"
+ help
+ This enables support for CoreSight link and sink drivers that are
+ responsible for transporting and collecting the trace data
+	  respectively. Links and sinks are dynamically aggregated with a trace
+ entity at run time to form a complete trace path.
+
+config CORESIGHT_LINK_AND_SINK_TMC
+ bool "Coresight generic TMC driver"
+ depends on CORESIGHT_LINKS_AND_SINKS
+ help
+ This enables support for the Trace Memory Controller driver.
+ Depending on its configuration the device can act as a link (embedded
+ trace router - ETR) or sink (embedded trace FIFO). The driver
+ complies with the generic implementation of the component without
+ special enhancement or added features.
+
+config CORESIGHT_SINK_TPIU
+ bool "Coresight generic TPIU driver"
+ depends on CORESIGHT_LINKS_AND_SINKS
+ help
+ This enables support for the Trace Port Interface Unit driver,
+	  responsible for bridging the gap between the on-chip coresight
+	  components and a trace port collection engine, typically connected
+	  to an external host for use cases that capture more traces than the
+	  on-board coresight memory can handle.
+
+config CORESIGHT_SINK_ETBV10
+ bool "Coresight ETBv1.0 driver"
+ depends on CORESIGHT_LINKS_AND_SINKS
+ help
+ This enables support for the Embedded Trace Buffer version 1.0 driver
+ that complies with the generic implementation of the component without
+ special enhancement or added features.
+
+config CORESIGHT_SOURCE_ETM3X
+ bool "CoreSight Embedded Trace Macrocell 3.x driver"
+ depends on !ARM64
+ select CORESIGHT_LINKS_AND_SINKS
+ help
+	  This driver provides support for processor ETM3.x and PTM1.x modules,
+	  which allow tracing the instructions that a processor is executing.
+	  This is primarily useful for instruction level tracing. Depending on
+	  the ETM version, data tracing may also be available.
+endif
diff --git a/drivers/coresight/Makefile b/drivers/hwtracing/coresight/Makefile
similarity index 100%
rename from drivers/coresight/Makefile
rename to drivers/hwtracing/coresight/Makefile
diff --git a/drivers/coresight/coresight-etb10.c b/drivers/hwtracing/coresight/coresight-etb10.c
similarity index 98%
rename from drivers/coresight/coresight-etb10.c
rename to drivers/hwtracing/coresight/coresight-etb10.c
index c9acd40..4004986 100644
--- a/drivers/coresight/coresight-etb10.c
+++ b/drivers/hwtracing/coresight/coresight-etb10.c
@@ -313,8 +313,8 @@
*ppos += len;
- dev_dbg(drvdata->dev, "%s: %d bytes copied, %d bytes left\n",
- __func__, len, (int) (depth * 4 - *ppos));
+ dev_dbg(drvdata->dev, "%s: %zu bytes copied, %d bytes left\n",
+ __func__, len, (int)(depth * 4 - *ppos));
return len;
}
diff --git a/drivers/coresight/coresight-etm-cp14.c b/drivers/hwtracing/coresight/coresight-etm-cp14.c
similarity index 100%
rename from drivers/coresight/coresight-etm-cp14.c
rename to drivers/hwtracing/coresight/coresight-etm-cp14.c
diff --git a/drivers/coresight/coresight-etm.h b/drivers/hwtracing/coresight/coresight-etm.h
similarity index 100%
rename from drivers/coresight/coresight-etm.h
rename to drivers/hwtracing/coresight/coresight-etm.h
diff --git a/drivers/coresight/coresight-etm3x.c b/drivers/hwtracing/coresight/coresight-etm3x.c
similarity index 100%
rename from drivers/coresight/coresight-etm3x.c
rename to drivers/hwtracing/coresight/coresight-etm3x.c
diff --git a/drivers/coresight/coresight-funnel.c b/drivers/hwtracing/coresight/coresight-funnel.c
similarity index 100%
rename from drivers/coresight/coresight-funnel.c
rename to drivers/hwtracing/coresight/coresight-funnel.c
diff --git a/drivers/coresight/coresight-priv.h b/drivers/hwtracing/coresight/coresight-priv.h
similarity index 100%
rename from drivers/coresight/coresight-priv.h
rename to drivers/hwtracing/coresight/coresight-priv.h
diff --git a/drivers/coresight/coresight-replicator.c b/drivers/hwtracing/coresight/coresight-replicator.c
similarity index 98%
rename from drivers/coresight/coresight-replicator.c
rename to drivers/hwtracing/coresight/coresight-replicator.c
index cdf0553..75b9abd 100644
--- a/drivers/coresight/coresight-replicator.c
+++ b/drivers/hwtracing/coresight/coresight-replicator.c
@@ -107,7 +107,7 @@
return 0;
}
-static struct of_device_id replicator_match[] = {
+static const struct of_device_id replicator_match[] = {
{.compatible = "arm,coresight-replicator"},
{}
};
diff --git a/drivers/coresight/coresight-tmc.c b/drivers/hwtracing/coresight/coresight-tmc.c
similarity index 90%
rename from drivers/coresight/coresight-tmc.c
rename to drivers/hwtracing/coresight/coresight-tmc.c
index 3ff232f..7147f3d 100644
--- a/drivers/coresight/coresight-tmc.c
+++ b/drivers/hwtracing/coresight/coresight-tmc.c
@@ -533,8 +533,8 @@
*ppos += len;
- dev_dbg(drvdata->dev, "%s: %d bytes copied, %d bytes left\n",
- __func__, len, (int) (drvdata->size - *ppos));
+ dev_dbg(drvdata->dev, "%s: %zu bytes copied, %d bytes left\n",
+ __func__, len, (int)(drvdata->size - *ppos));
return len;
}
@@ -565,6 +565,59 @@
.llseek = no_llseek,
};
+static ssize_t status_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ int ret;
+ unsigned long flags;
+ u32 tmc_rsz, tmc_sts, tmc_rrp, tmc_rwp, tmc_trg;
+ u32 tmc_ctl, tmc_ffsr, tmc_ffcr, tmc_mode, tmc_pscr;
+ u32 devid;
+ struct tmc_drvdata *drvdata = dev_get_drvdata(dev->parent);
+
+ ret = clk_prepare_enable(drvdata->clk);
+ if (ret)
+ goto out;
+
+ spin_lock_irqsave(&drvdata->spinlock, flags);
+ CS_UNLOCK(drvdata->base);
+
+ tmc_rsz = readl_relaxed(drvdata->base + TMC_RSZ);
+ tmc_sts = readl_relaxed(drvdata->base + TMC_STS);
+ tmc_rrp = readl_relaxed(drvdata->base + TMC_RRP);
+ tmc_rwp = readl_relaxed(drvdata->base + TMC_RWP);
+ tmc_trg = readl_relaxed(drvdata->base + TMC_TRG);
+ tmc_ctl = readl_relaxed(drvdata->base + TMC_CTL);
+ tmc_ffsr = readl_relaxed(drvdata->base + TMC_FFSR);
+ tmc_ffcr = readl_relaxed(drvdata->base + TMC_FFCR);
+ tmc_mode = readl_relaxed(drvdata->base + TMC_MODE);
+ tmc_pscr = readl_relaxed(drvdata->base + TMC_PSCR);
+ devid = readl_relaxed(drvdata->base + CORESIGHT_DEVID);
+
+ CS_LOCK(drvdata->base);
+ spin_unlock_irqrestore(&drvdata->spinlock, flags);
+
+ clk_disable_unprepare(drvdata->clk);
+
+ return sprintf(buf,
+ "Depth:\t\t0x%x\n"
+ "Status:\t\t0x%x\n"
+ "RAM read ptr:\t0x%x\n"
+ "RAM wrt ptr:\t0x%x\n"
+ "Trigger cnt:\t0x%x\n"
+ "Control:\t0x%x\n"
+ "Flush status:\t0x%x\n"
+ "Flush ctrl:\t0x%x\n"
+ "Mode:\t\t0x%x\n"
+ "PSRC:\t\t0x%x\n"
+ "DEVID:\t\t0x%x\n",
+ tmc_rsz, tmc_sts, tmc_rrp, tmc_rwp, tmc_trg,
+ tmc_ctl, tmc_ffsr, tmc_ffcr, tmc_mode, tmc_pscr, devid);
+out:
+ return -EINVAL;
+}
+static DEVICE_ATTR_RO(status);
+
static ssize_t trigger_cntr_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
@@ -593,18 +646,21 @@
static struct attribute *coresight_etb_attrs[] = {
&dev_attr_trigger_cntr.attr,
+ &dev_attr_status.attr,
NULL,
};
ATTRIBUTE_GROUPS(coresight_etb);
static struct attribute *coresight_etr_attrs[] = {
&dev_attr_trigger_cntr.attr,
+ &dev_attr_status.attr,
NULL,
};
ATTRIBUTE_GROUPS(coresight_etr);
static struct attribute *coresight_etf_attrs[] = {
&dev_attr_trigger_cntr.attr,
+ &dev_attr_status.attr,
NULL,
};
ATTRIBUTE_GROUPS(coresight_etf);
diff --git a/drivers/coresight/coresight-tpiu.c b/drivers/hwtracing/coresight/coresight-tpiu.c
similarity index 100%
rename from drivers/coresight/coresight-tpiu.c
rename to drivers/hwtracing/coresight/coresight-tpiu.c
diff --git a/drivers/coresight/coresight.c b/drivers/hwtracing/coresight/coresight.c
similarity index 98%
rename from drivers/coresight/coresight.c
rename to drivers/hwtracing/coresight/coresight.c
index c5def93..894531d 100644
--- a/drivers/coresight/coresight.c
+++ b/drivers/hwtracing/coresight/coresight.c
@@ -305,7 +305,9 @@
list_add(&csdev->path_link, path);
- if (csdev->type == CORESIGHT_DEV_TYPE_SINK && csdev->activated) {
+ if ((csdev->type == CORESIGHT_DEV_TYPE_SINK ||
+ csdev->type == CORESIGHT_DEV_TYPE_LINKSINK) &&
+ csdev->activated) {
if (enable)
ret = coresight_enable_path(path);
else
diff --git a/drivers/coresight/of_coresight.c b/drivers/hwtracing/coresight/of_coresight.c
similarity index 88%
rename from drivers/coresight/of_coresight.c
rename to drivers/hwtracing/coresight/of_coresight.c
index c3efa41..35e51ce 100644
--- a/drivers/coresight/of_coresight.c
+++ b/drivers/hwtracing/coresight/of_coresight.c
@@ -22,6 +22,7 @@
#include <linux/platform_device.h>
#include <linux/amba/bus.h>
#include <linux/coresight.h>
+#include <linux/cpumask.h>
#include <asm/smp_plat.h>
@@ -52,15 +53,6 @@
endpoint, of_dev_node_match);
}
-static struct device_node *of_get_coresight_endpoint(
- const struct device_node *parent, struct device_node *prev)
-{
- struct device_node *node = of_graph_get_next_endpoint(parent, prev);
-
- of_node_put(prev);
- return node;
-}
-
static void of_coresight_get_ports(struct device_node *node,
int *nr_inport, int *nr_outport)
{
@@ -68,7 +60,7 @@
int in = 0, out = 0;
do {
- ep = of_get_coresight_endpoint(node, ep);
+ ep = of_graph_get_next_endpoint(node, ep);
if (!ep)
break;
@@ -113,7 +105,7 @@
struct coresight_platform_data *of_get_coresight_platform_data(
struct device *dev, struct device_node *node)
{
- int i = 0, ret = 0;
+ int i = 0, ret = 0, cpu;
struct coresight_platform_data *pdata;
struct of_endpoint endpoint, rendpoint;
struct device *rdev;
@@ -140,7 +132,7 @@
/* Iterate through each port to discover topology */
do {
/* Get a handle on a port */
- ep = of_get_coresight_endpoint(node, ep);
+ ep = of_graph_get_next_endpoint(node, ep);
if (!ep)
break;
@@ -187,17 +179,10 @@
/* Affinity defaults to CPU0 */
pdata->cpu = 0;
dn = of_parse_phandle(node, "cpu", 0);
- if (dn) {
- const u32 *cell;
- int len, index;
- u64 hwid;
-
- cell = of_get_property(dn, "reg", &len);
- if (cell) {
- hwid = of_read_number(cell, of_n_addr_cells(dn));
- index = get_logical_index(hwid);
- if (index != -EINVAL)
- pdata->cpu = index;
+ for (cpu = 0; dn && cpu < nr_cpu_ids; cpu++) {
+ if (dn == of_get_cpu_node(cpu, NULL)) {
+ pdata->cpu = cpu;
+ break;
}
}
diff --git a/drivers/i2c/busses/i2c-jz4780.c b/drivers/i2c/busses/i2c-jz4780.c
index ce1d6932..19b2d68 100644
--- a/drivers/i2c/busses/i2c-jz4780.c
+++ b/drivers/i2c/busses/i2c-jz4780.c
@@ -23,6 +23,7 @@
#include <linux/i2c.h>
#include <linux/init.h>
#include <linux/interrupt.h>
+#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/platform_device.h>
diff --git a/drivers/i2c/i2c-core.c b/drivers/i2c/i2c-core.c
index 1672e6b..098f698 100644
--- a/drivers/i2c/i2c-core.c
+++ b/drivers/i2c/i2c-core.c
@@ -596,6 +596,7 @@
adap->bus_recovery_info->set_scl(adap, 1);
return i2c_generic_recovery(adap);
}
+EXPORT_SYMBOL_GPL(i2c_generic_scl_recovery);
int i2c_generic_gpio_recovery(struct i2c_adapter *adap)
{
@@ -610,6 +611,7 @@
return ret;
}
+EXPORT_SYMBOL_GPL(i2c_generic_gpio_recovery);
int i2c_recover_bus(struct i2c_adapter *adap)
{
@@ -619,6 +621,7 @@
dev_dbg(&adap->dev, "Trying i2c bus recovery\n");
return adap->bus_recovery_info->recover_bus(adap);
}
+EXPORT_SYMBOL_GPL(i2c_recover_bus);
static int i2c_device_probe(struct device *dev)
{
diff --git a/drivers/input/ff-core.c b/drivers/input/ff-core.c
index f50f6dd..b81c88c 100644
--- a/drivers/input/ff-core.c
+++ b/drivers/input/ff-core.c
@@ -23,8 +23,6 @@
/* #define DEBUG */
-#define pr_fmt(fmt) KBUILD_BASENAME ": " fmt
-
#include <linux/input.h>
#include <linux/module.h>
#include <linux/mutex.h>
@@ -116,7 +114,7 @@
if (effect->type < FF_EFFECT_MIN || effect->type > FF_EFFECT_MAX ||
!test_bit(effect->type, dev->ffbit)) {
- pr_debug("invalid or not supported effect type in upload\n");
+ dev_dbg(&dev->dev, "invalid or not supported effect type in upload\n");
return -EINVAL;
}
@@ -124,7 +122,7 @@
(effect->u.periodic.waveform < FF_WAVEFORM_MIN ||
effect->u.periodic.waveform > FF_WAVEFORM_MAX ||
!test_bit(effect->u.periodic.waveform, dev->ffbit))) {
- pr_debug("invalid or not supported wave form in upload\n");
+ dev_dbg(&dev->dev, "invalid or not supported wave form in upload\n");
return -EINVAL;
}
@@ -246,7 +244,7 @@
struct ff_device *ff = dev->ff;
int i;
- pr_debug("flushing now\n");
+ dev_dbg(&dev->dev, "flushing now\n");
mutex_lock(&ff->mutex);
@@ -316,7 +314,7 @@
int i;
if (!max_effects) {
- pr_err("cannot allocate device without any effects\n");
+ dev_err(&dev->dev, "cannot allocate device without any effects\n");
return -EINVAL;
}
diff --git a/drivers/input/ff-memless.c b/drivers/input/ff-memless.c
index 74c0d8c6..fcc6c33 100644
--- a/drivers/input/ff-memless.c
+++ b/drivers/input/ff-memless.c
@@ -237,6 +237,18 @@
(force + new_force)) << 1;
}
+#define FRAC_N 8
+static inline s16 fixp_new16(s16 a)
+{
+ return ((s32)a) >> (16 - FRAC_N);
+}
+
+static inline s16 fixp_mult(s16 a, s16 b)
+{
+ a = ((s32)a * 0x100) / 0x7fff;
+ return ((s32)(a * b)) >> FRAC_N;
+}
+
/*
* Combine two effects and apply gain.
*/
@@ -247,7 +259,7 @@
struct ff_effect *new = state->effect;
unsigned int strong, weak, i;
int x, y;
- fixp_t level;
+ s16 level;
switch (new->type) {
case FF_CONSTANT:
@@ -255,8 +267,8 @@
level = fixp_new16(apply_envelope(state,
new->u.constant.level,
&new->u.constant.envelope));
- x = fixp_mult(fixp_sin(i), level) * gain / 0xffff;
- y = fixp_mult(-fixp_cos(i), level) * gain / 0xffff;
+ x = fixp_mult(fixp_sin16(i), level) * gain / 0xffff;
+ y = fixp_mult(-fixp_cos16(i), level) * gain / 0xffff;
/*
* here we abuse ff_ramp to hold x and y of constant force
* If in future any driver wants something else than x and y
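For reference, the fixp_new16()/fixp_mult() helpers added above perform the constant-force scaling with plain s16 arithmetic at FRAC_N (8) fractional bits instead of the old fixp_t type. The standalone sketch below reuses the same formulas to show that a full-scale sine sample leaves a half-scale level unchanged; the input values are hypothetical and not taken from the driver.

#include <stdio.h>
#include <stdint.h>

#define FRAC_N 8

/* Same formulas as the fixp helpers added above, using plain C types. */
static int16_t fixp_new16(int16_t a)
{
	return (int16_t)(((int32_t)a) >> (16 - FRAC_N));
}

static int16_t fixp_mult(int16_t a, int16_t b)
{
	a = (int16_t)(((int32_t)a * 0x100) / 0x7fff);
	return (int16_t)(((int32_t)(a * b)) >> FRAC_N);
}

int main(void)
{
	/* Hypothetical inputs: full-scale sine sample, half-scale effect level. */
	int16_t sin_sample = 0x7fff;		/* ~1.0 in this convention */
	int16_t level = fixp_new16(0x4000);	/* half of the s16 range -> 64 */

	/* Prints "level=64 scaled=64": a full-scale multiplier preserves the level. */
	printf("level=%d scaled=%d\n", level, fixp_mult(sin_sample, level));
	return 0;
}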
diff --git a/drivers/input/joystick/xpad.c b/drivers/input/joystick/xpad.c
index 3aa2f3f..61c7611 100644
--- a/drivers/input/joystick/xpad.c
+++ b/drivers/input/joystick/xpad.c
@@ -31,12 +31,14 @@
* - the iForce driver drivers/char/joystick/iforce.c
* - the skeleton-driver drivers/usb/usb-skeleton.c
* - Xbox 360 information http://www.free60.org/wiki/Gamepad
+ * - Xbox One information https://github.com/quantus/xbox-one-controller-protocol
*
* Thanks to:
* - ITO Takayuki for providing essential xpad information on his website
* - Vojtech Pavlik - iforce driver / input subsystem
* - Greg Kroah-Hartman - usb-skeleton driver
* - XBOX Linux project - extra USB id's
+ * - Pekka Pöyry (quantus) - Xbox One controller reverse engineering
*
* TODO:
* - fine tune axes (especially trigger axes)
@@ -828,6 +830,23 @@
return usb_submit_urb(xpad->irq_out, GFP_ATOMIC);
+ case XTYPE_XBOXONE:
+ xpad->odata[0] = 0x09; /* activate rumble */
+ xpad->odata[1] = 0x08;
+ xpad->odata[2] = 0x00;
+ xpad->odata[3] = 0x08; /* continuous effect */
+ xpad->odata[4] = 0x00; /* simple rumble mode */
+ xpad->odata[5] = 0x03; /* L and R actuator only */
+ xpad->odata[6] = 0x00; /* TODO: LT actuator */
+ xpad->odata[7] = 0x00; /* TODO: RT actuator */
+ xpad->odata[8] = strong / 256; /* left actuator */
+ xpad->odata[9] = weak / 256; /* right actuator */
+ xpad->odata[10] = 0x80; /* length of pulse */
+ xpad->odata[11] = 0x00; /* stop period of pulse */
+ xpad->irq_out->transfer_buffer_length = 12;
+
+ return usb_submit_urb(xpad->irq_out, GFP_ATOMIC);
+
default:
dev_dbg(&xpad->dev->dev,
"%s - rumble command sent to unsupported xpad type: %d\n",
@@ -841,7 +860,7 @@
static int xpad_init_ff(struct usb_xpad *xpad)
{
- if (xpad->xtype == XTYPE_UNKNOWN || xpad->xtype == XTYPE_XBOXONE)
+ if (xpad->xtype == XTYPE_UNKNOWN)
return 0;
input_set_capability(xpad->dev, EV_FF, FF_RUMBLE);
diff --git a/drivers/input/keyboard/gpio_keys_polled.c b/drivers/input/keyboard/gpio_keys_polled.c
index 90df4df..097d721 100644
--- a/drivers/input/keyboard/gpio_keys_polled.c
+++ b/drivers/input/keyboard/gpio_keys_polled.c
@@ -125,7 +125,7 @@
device_for_each_child_node(dev, child) {
struct gpio_desc *desc;
- desc = devm_get_gpiod_from_child(dev, child);
+ desc = devm_get_gpiod_from_child(dev, NULL, child);
if (IS_ERR(desc)) {
error = PTR_ERR(desc);
if (error != -EPROBE_DEFER)
diff --git a/drivers/input/keyboard/lm8333.c b/drivers/input/keyboard/lm8333.c
index 9081cbe..0ad422b 100644
--- a/drivers/input/keyboard/lm8333.c
+++ b/drivers/input/keyboard/lm8333.c
@@ -1,6 +1,6 @@
/*
* LM8333 keypad driver
- * Copyright (C) 2012 Wolfram Sang, Pengutronix <w.sang@pengutronix.de>
+ * Copyright (C) 2012 Wolfram Sang, Pengutronix <kernel@pengutronix.de>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -231,6 +231,6 @@
};
module_i2c_driver(lm8333_driver);
-MODULE_AUTHOR("Wolfram Sang <w.sang@pengutronix.de>");
+MODULE_AUTHOR("Wolfram Sang <kernel@pengutronix.de>");
MODULE_DESCRIPTION("LM8333 keyboard driver");
MODULE_LICENSE("GPL v2");
diff --git a/drivers/input/mouse/Kconfig b/drivers/input/mouse/Kconfig
index 4658b5d..7462d2f 100644
--- a/drivers/input/mouse/Kconfig
+++ b/drivers/input/mouse/Kconfig
@@ -149,6 +149,18 @@
If unsure, say Y.
+config MOUSE_PS2_VMMOUSE
+ bool "Virtual mouse (vmmouse)"
+ depends on MOUSE_PS2 && X86 && HYPERVISOR_GUEST
+ help
+ Say Y here if you are running under control of VMware hypervisor
+ (ESXi, Workstation or Fusion). Also make sure that when you enable
+ this option, you remove the xf86-input-vmmouse user-space driver
+ or upgrade it to at least xf86-input-vmmouse 13.0.1, which doesn't
+ load in the presence of an in-kernel vmmouse driver.
+
+ If unsure, say N.
+
config MOUSE_SERIAL
tristate "Serial mouse"
select SERIO
diff --git a/drivers/input/mouse/Makefile b/drivers/input/mouse/Makefile
index 8a9c98e..793300b 100644
--- a/drivers/input/mouse/Makefile
+++ b/drivers/input/mouse/Makefile
@@ -36,6 +36,7 @@
psmouse-$(CONFIG_MOUSE_PS2_TRACKPOINT) += trackpoint.o
psmouse-$(CONFIG_MOUSE_PS2_TOUCHKIT) += touchkit_ps2.o
psmouse-$(CONFIG_MOUSE_PS2_CYPRESS) += cypress_ps2.o
+psmouse-$(CONFIG_MOUSE_PS2_VMMOUSE) += vmmouse.o
elan_i2c-objs := elan_i2c_core.o
elan_i2c-$(CONFIG_MOUSE_ELAN_I2C_I2C) += elan_i2c_i2c.o
diff --git a/drivers/input/mouse/cyapa.c b/drivers/input/mouse/cyapa.c
index 58f4f6f..efe1484 100644
--- a/drivers/input/mouse/cyapa.c
+++ b/drivers/input/mouse/cyapa.c
@@ -723,7 +723,7 @@
} else if (sysfs_streq(buf, OFF_MODE_NAME)) {
cyapa->suspend_power_mode = PWR_MODE_OFF;
} else if (!kstrtou16(buf, 10, &sleep_time)) {
- cyapa->suspend_sleep_time = max_t(u16, sleep_time, 1000);
+ cyapa->suspend_sleep_time = min_t(u16, sleep_time, 1000);
cyapa->suspend_power_mode =
cyapa_sleep_time_to_pwr_cmd(cyapa->suspend_sleep_time);
} else {
@@ -840,7 +840,7 @@
if (error)
return error;
- cyapa->runtime_suspend_sleep_time = max_t(u16, time, 1000);
+ cyapa->runtime_suspend_sleep_time = min_t(u16, time, 1000);
cyapa->runtime_suspend_power_mode =
cyapa_sleep_time_to_pwr_cmd(cyapa->runtime_suspend_sleep_time);
diff --git a/drivers/input/mouse/elan_i2c.h b/drivers/input/mouse/elan_i2c.h
index 9b2dc01..6d5f8a4 100644
--- a/drivers/input/mouse/elan_i2c.h
+++ b/drivers/input/mouse/elan_i2c.h
@@ -25,6 +25,7 @@
#define ETP_ENABLE_CALIBRATE 0x0002
#define ETP_DISABLE_CALIBRATE 0x0000
#define ETP_DISABLE_POWER 0x0001
+#define ETP_PRESSURE_OFFSET 25
/* IAP Firmware handling */
#define ETP_FW_NAME "elan_i2c.bin"
@@ -79,6 +80,8 @@
struct completion *reset_done);
int (*get_report)(struct i2c_client *client, u8 *report);
+ int (*get_pressure_adjustment)(struct i2c_client *client,
+ int *adjustment);
};
extern const struct elan_transport_ops elan_smbus_ops, elan_i2c_ops;
diff --git a/drivers/input/mouse/elan_i2c_core.c b/drivers/input/mouse/elan_i2c_core.c
index 375d98f..fd5068b 100644
--- a/drivers/input/mouse/elan_i2c_core.c
+++ b/drivers/input/mouse/elan_i2c_core.c
@@ -4,7 +4,7 @@
* Copyright (c) 2013 ELAN Microelectronics Corp.
*
* Author: 林政維 (Duson Lin) <dusonlin@emc.com.tw>
- * Version: 1.5.6
+ * Version: 1.5.7
*
* Based on cyapa driver:
* copyright (c) 2011-2012 Cypress Semiconductor, Inc.
@@ -40,8 +40,7 @@
#include "elan_i2c.h"
#define DRIVER_NAME "elan_i2c"
-#define ELAN_DRIVER_VERSION "1.5.6"
-#define ETP_PRESSURE_OFFSET 25
+#define ELAN_DRIVER_VERSION "1.5.7"
#define ETP_MAX_PRESSURE 255
#define ETP_FWIDTH_REDUCE 90
#define ETP_FINGER_WIDTH 15
@@ -53,6 +52,7 @@
#define ETP_REPORT_ID_OFFSET 2
#define ETP_TOUCH_INFO_OFFSET 3
#define ETP_FINGER_DATA_OFFSET 4
+#define ETP_HOVER_INFO_OFFSET 30
#define ETP_MAX_REPORT_LEN 34
/* The main device structure */
@@ -81,7 +81,7 @@
u8 sm_version;
u8 iap_version;
u16 fw_checksum;
-
+ int pressure_adjustment;
u8 mode;
bool irq_wake;
@@ -229,6 +229,11 @@
if (error)
return error;
+ error = data->ops->get_pressure_adjustment(data->client,
+ &data->pressure_adjustment);
+ if (error)
+ return error;
+
return 0;
}
@@ -721,13 +726,13 @@
*/
static void elan_report_contact(struct elan_tp_data *data,
int contact_num, bool contact_valid,
- u8 *finger_data)
+ bool hover_event, u8 *finger_data)
{
struct input_dev *input = data->input;
unsigned int pos_x, pos_y;
unsigned int pressure, mk_x, mk_y;
- unsigned int area_x, area_y, major, minor, new_pressure;
-
+ unsigned int area_x, area_y, major, minor;
+ unsigned int scaled_pressure;
if (contact_valid) {
pos_x = ((finger_data[0] & 0xf0) << 4) |
@@ -756,15 +761,18 @@
major = max(area_x, area_y);
minor = min(area_x, area_y);
- new_pressure = pressure + ETP_PRESSURE_OFFSET;
- if (new_pressure > ETP_MAX_PRESSURE)
- new_pressure = ETP_MAX_PRESSURE;
+ scaled_pressure = pressure + data->pressure_adjustment;
+
+ if (scaled_pressure > ETP_MAX_PRESSURE)
+ scaled_pressure = ETP_MAX_PRESSURE;
input_mt_slot(input, contact_num);
input_mt_report_slot_state(input, MT_TOOL_FINGER, true);
input_report_abs(input, ABS_MT_POSITION_X, pos_x);
input_report_abs(input, ABS_MT_POSITION_Y, data->max_y - pos_y);
- input_report_abs(input, ABS_MT_PRESSURE, new_pressure);
+ input_report_abs(input, ABS_MT_DISTANCE, hover_event);
+ input_report_abs(input, ABS_MT_PRESSURE,
+ hover_event ? 0 : scaled_pressure);
input_report_abs(input, ABS_TOOL_WIDTH, mk_x);
input_report_abs(input, ABS_MT_TOUCH_MAJOR, major);
input_report_abs(input, ABS_MT_TOUCH_MINOR, minor);
@@ -780,11 +788,14 @@
u8 *finger_data = &packet[ETP_FINGER_DATA_OFFSET];
int i;
u8 tp_info = packet[ETP_TOUCH_INFO_OFFSET];
- bool contact_valid;
+ u8 hover_info = packet[ETP_HOVER_INFO_OFFSET];
+ bool contact_valid, hover_event;
+ hover_event = hover_info & 0x40;
for (i = 0; i < ETP_MAX_FINGERS; i++) {
contact_valid = tp_info & (1U << (3 + i));
- elan_report_contact(data, i, contact_valid, finger_data);
+ elan_report_contact(data, i, contact_valid, hover_event,
+ finger_data);
if (contact_valid)
finger_data += ETP_FINGER_DATA_LEN;
@@ -878,6 +889,7 @@
ETP_FINGER_WIDTH * max_width, 0, 0);
input_set_abs_params(input, ABS_MT_TOUCH_MINOR, 0,
ETP_FINGER_WIDTH * min_width, 0, 0);
+ input_set_abs_params(input, ABS_MT_DISTANCE, 0, 1, 0, 0);
data->input = input;
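The elan_i2c change above starts reading a hover flag from byte 30 of the report and maps it onto ABS_MT_DISTANCE, forcing pressure to 0 for hovering contacts. A small standalone illustration (not driver code) of how the reported values come out:

	#include <stdio.h>

	#define ETP_MAX_PRESSURE 255

	/* mirror of the reporting logic in elan_report_contact() above */
	static void report(int hover_event, int pressure, int adjustment)
	{
		int scaled = pressure + adjustment;

		if (scaled > ETP_MAX_PRESSURE)
			scaled = ETP_MAX_PRESSURE;

		printf("ABS_MT_DISTANCE=%d ABS_MT_PRESSURE=%d\n",
		       hover_event ? 1 : 0, hover_event ? 0 : scaled);
	}

	int main(void)
	{
		report(0, 240, 25);	/* touching: distance 0, pressure clamped to 255 */
		report(1, 0, 25);	/* hovering: distance 1, pressure reported as 0  */
		return 0;
	}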
diff --git a/drivers/input/mouse/elan_i2c_i2c.c b/drivers/input/mouse/elan_i2c_i2c.c
index 6cf0def..a0acbbf 100644
--- a/drivers/input/mouse/elan_i2c_i2c.c
+++ b/drivers/input/mouse/elan_i2c_i2c.c
@@ -41,6 +41,7 @@
#define ETP_I2C_MAX_X_AXIS_CMD 0x0106
#define ETP_I2C_MAX_Y_AXIS_CMD 0x0107
#define ETP_I2C_RESOLUTION_CMD 0x0108
+#define ETP_I2C_PRESSURE_CMD 0x010A
#define ETP_I2C_IAP_VERSION_CMD 0x0110
#define ETP_I2C_SET_CMD 0x0300
#define ETP_I2C_POWER_CMD 0x0307
@@ -364,8 +365,29 @@
return error;
}
- *x_traces = val[0] - 1;
- *y_traces = val[1] - 1;
+ *x_traces = val[0];
+ *y_traces = val[1];
+
+ return 0;
+}
+
+static int elan_i2c_get_pressure_adjustment(struct i2c_client *client,
+ int *adjustment)
+{
+ int error;
+ u8 val[3];
+
+ error = elan_i2c_read_cmd(client, ETP_I2C_PRESSURE_CMD, val);
+ if (error) {
+ dev_err(&client->dev, "failed to get pressure format: %d\n",
+ error);
+ return error;
+ }
+
+ if ((val[0] >> 4) & 0x1)
+ *adjustment = 0;
+ else
+ *adjustment = ETP_PRESSURE_OFFSET;
return 0;
}
@@ -602,6 +624,7 @@
.get_sm_version = elan_i2c_get_sm_version,
.get_product_id = elan_i2c_get_product_id,
.get_checksum = elan_i2c_get_checksum,
+ .get_pressure_adjustment = elan_i2c_get_pressure_adjustment,
.get_max = elan_i2c_get_max,
.get_resolution = elan_i2c_get_resolution,
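For reference, a sketch of the decode performed by elan_i2c_get_pressure_adjustment() above: bit 4 of the first byte returned by ETP_I2C_PRESSURE_CMD selects whether the legacy 25-unit offset is applied. (The bit's meaning is taken from that function only, not from a datasheet.)

	#include <stdio.h>

	#define ETP_PRESSURE_OFFSET 25

	/* val0 is the first byte returned by ETP_I2C_PRESSURE_CMD:
	 * bit 4 set   -> no offset applied
	 * bit 4 clear -> legacy 25-unit offset */
	static int pressure_adjustment(unsigned char val0)
	{
		return ((val0 >> 4) & 0x1) ? 0 : ETP_PRESSURE_OFFSET;
	}

	int main(void)
	{
		printf("%d\n", pressure_adjustment(0x10));	/* bit 4 set   -> 0  */
		printf("%d\n", pressure_adjustment(0x00));	/* bit 4 clear -> 25 */
		return 0;
	}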
diff --git a/drivers/input/mouse/elan_i2c_smbus.c b/drivers/input/mouse/elan_i2c_smbus.c
index 06a2bcd..30ab80d 100644
--- a/drivers/input/mouse/elan_i2c_smbus.c
+++ b/drivers/input/mouse/elan_i2c_smbus.c
@@ -268,12 +268,19 @@
return error;
}
- *x_traces = val[1] - 1;
- *y_traces = val[2] - 1;
+ *x_traces = val[1];
+ *y_traces = val[2];
return 0;
}
+static int elan_smbus_get_pressure_adjustment(struct i2c_client *client,
+ int *adjustment)
+{
+ *adjustment = ETP_PRESSURE_OFFSET;
+ return 0;
+}
+
static int elan_smbus_iap_get_mode(struct i2c_client *client,
enum tp_mode *mode)
{
@@ -497,6 +504,7 @@
.get_sm_version = elan_smbus_get_sm_version,
.get_product_id = elan_smbus_get_product_id,
.get_checksum = elan_smbus_get_checksum,
+ .get_pressure_adjustment = elan_smbus_get_pressure_adjustment,
.get_max = elan_smbus_get_max,
.get_resolution = elan_smbus_get_resolution,
diff --git a/drivers/input/mouse/psmouse-base.c b/drivers/input/mouse/psmouse-base.c
index 27057df..5bb1658 100644
--- a/drivers/input/mouse/psmouse-base.c
+++ b/drivers/input/mouse/psmouse-base.c
@@ -36,6 +36,7 @@
#include "sentelic.h"
#include "cypress_ps2.h"
#include "focaltech.h"
+#include "vmmouse.h"
#define DRIVER_DESC "PS/2 mouse driver"
@@ -790,6 +791,13 @@
}
}
+ if (psmouse_do_detect(vmmouse_detect, psmouse, set_properties) == 0) {
+ if (max_proto > PSMOUSE_IMEX) {
+ if (!set_properties || vmmouse_init(psmouse) == 0)
+ return PSMOUSE_VMMOUSE;
+ }
+ }
+
/*
* Try Kensington ThinkingMouse (we try first, because synaptics probe
* upsets the thinkingmouse).
@@ -1113,6 +1121,15 @@
.init = focaltech_init,
},
#endif
+#ifdef CONFIG_MOUSE_PS2_VMMOUSE
+ {
+ .type = PSMOUSE_VMMOUSE,
+ .name = VMMOUSE_PSNAME,
+ .alias = "vmmouse",
+ .detect = vmmouse_detect,
+ .init = vmmouse_init,
+ },
+#endif
{
.type = PSMOUSE_AUTO,
.name = "auto",
diff --git a/drivers/input/mouse/psmouse.h b/drivers/input/mouse/psmouse.h
index d02e1bd..ad5a5a1 100644
--- a/drivers/input/mouse/psmouse.h
+++ b/drivers/input/mouse/psmouse.h
@@ -103,6 +103,7 @@
PSMOUSE_SYNAPTICS_RELATIVE,
PSMOUSE_CYPRESS,
PSMOUSE_FOCALTECH,
+ PSMOUSE_VMMOUSE,
PSMOUSE_AUTO /* This one should always be last */
};
diff --git a/drivers/input/mouse/vmmouse.c b/drivers/input/mouse/vmmouse.c
new file mode 100644
index 0000000..e272f06
--- /dev/null
+++ b/drivers/input/mouse/vmmouse.c
@@ -0,0 +1,508 @@
+/*
+ * Driver for Virtual PS/2 Mouse on VMware and QEMU hypervisors.
+ *
+ * Copyright (C) 2014, VMware, Inc. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * Twin device code is hugely inspired by the ALPS driver.
+ * Authors:
+ * Dmitry Torokhov <dmitry.torokhov@gmail.com>
+ * Thomas Hellstrom <thellstrom@vmware.com>
+ */
+
+#include <linux/input.h>
+#include <linux/serio.h>
+#include <linux/libps2.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+#include <asm/hypervisor.h>
+
+#include "psmouse.h"
+#include "vmmouse.h"
+
+#define VMMOUSE_PROTO_MAGIC 0x564D5868U
+#define VMMOUSE_PROTO_PORT 0x5658
+
+/*
+ * Main commands supported by the vmmouse hypervisor port.
+ */
+#define VMMOUSE_PROTO_CMD_GETVERSION 10
+#define VMMOUSE_PROTO_CMD_ABSPOINTER_DATA 39
+#define VMMOUSE_PROTO_CMD_ABSPOINTER_STATUS 40
+#define VMMOUSE_PROTO_CMD_ABSPOINTER_COMMAND 41
+#define VMMOUSE_PROTO_CMD_ABSPOINTER_RESTRICT 86
+
+/*
+ * Subcommands for VMMOUSE_PROTO_CMD_ABSPOINTER_COMMAND
+ */
+#define VMMOUSE_CMD_ENABLE 0x45414552U
+#define VMMOUSE_CMD_DISABLE 0x000000f5U
+#define VMMOUSE_CMD_REQUEST_RELATIVE 0x4c455252U
+#define VMMOUSE_CMD_REQUEST_ABSOLUTE 0x53424152U
+
+#define VMMOUSE_ERROR 0xffff0000U
+
+#define VMMOUSE_VERSION_ID 0x3442554aU
+
+#define VMMOUSE_RELATIVE_PACKET 0x00010000U
+
+#define VMMOUSE_LEFT_BUTTON 0x20
+#define VMMOUSE_RIGHT_BUTTON 0x10
+#define VMMOUSE_MIDDLE_BUTTON 0x08
+
+/*
+ * VMMouse Restrict command
+ */
+#define VMMOUSE_RESTRICT_ANY 0x00
+#define VMMOUSE_RESTRICT_CPL0 0x01
+#define VMMOUSE_RESTRICT_IOPL 0x02
+
+#define VMMOUSE_MAX_X 0xFFFF
+#define VMMOUSE_MAX_Y 0xFFFF
+
+#define VMMOUSE_VENDOR "VMware"
+#define VMMOUSE_NAME "VMMouse"
+
+/**
+ * struct vmmouse_data - private data structure for the vmmouse driver
+ *
+ * @abs_dev: "Absolute" device used to report absolute mouse movement.
+ * @phys: Physical path for the absolute device.
+ * @dev_name: Name attribute name for the absolute device.
+ */
+struct vmmouse_data {
+ struct input_dev *abs_dev;
+ char phys[32];
+ char dev_name[128];
+};
+
+/**
+ * Hypervisor-specific bi-directional communication channel
+ * implementing the vmmouse protocol. Should never execute on
+ * bare metal hardware.
+ */
+#define VMMOUSE_CMD(cmd, in1, out1, out2, out3, out4) \
+({ \
+ unsigned long __dummy1, __dummy2; \
+ __asm__ __volatile__ ("inl %%dx" : \
+ "=a"(out1), \
+ "=b"(out2), \
+ "=c"(out3), \
+ "=d"(out4), \
+ "=S"(__dummy1), \
+ "=D"(__dummy2) : \
+ "a"(VMMOUSE_PROTO_MAGIC), \
+ "b"(in1), \
+ "c"(VMMOUSE_PROTO_CMD_##cmd), \
+ "d"(VMMOUSE_PROTO_PORT) : \
+ "memory"); \
+})
+
+/**
+ * vmmouse_report_button - report button state on the correct input device
+ *
+ * @psmouse: Pointer to the psmouse struct
+ * @abs_dev: The absolute input device
+ * @rel_dev: The relative input device
+ * @pref_dev: The preferred device for reporting
+ * @code: Button code
+ * @value: Button value
+ *
+ * Report @value and @code on @pref_dev, unless the button is already
+ * pressed on the other device, in which case the state is reported on that
+ * device.
+ */
+static void vmmouse_report_button(struct psmouse *psmouse,
+ struct input_dev *abs_dev,
+ struct input_dev *rel_dev,
+ struct input_dev *pref_dev,
+ unsigned int code, int value)
+{
+ if (test_bit(code, abs_dev->key))
+ pref_dev = abs_dev;
+ else if (test_bit(code, rel_dev->key))
+ pref_dev = rel_dev;
+
+ input_report_key(pref_dev, code, value);
+}
+
+/**
+ * vmmouse_report_events - process events on the vmmouse communications channel
+ *
+ * @psmouse: Pointer to the psmouse struct
+ *
+ * This function pulls events from the vmmouse communications channel and
+ * reports them on the correct (absolute or relative) input device. When the
+ * communications channel is drained, or if we've processed more than 255
+ * psmouse commands, the function returns PSMOUSE_FULL_PACKET. If there is a
+ * host- or synchronization error, the function returns PSMOUSE_BAD_DATA in
+ * the hope that the caller will reset the communications channel.
+ */
+static psmouse_ret_t vmmouse_report_events(struct psmouse *psmouse)
+{
+ struct input_dev *rel_dev = psmouse->dev;
+ struct vmmouse_data *priv = psmouse->private;
+ struct input_dev *abs_dev = priv->abs_dev;
+ struct input_dev *pref_dev;
+ u32 status, x, y, z;
+ u32 dummy1, dummy2, dummy3;
+ unsigned int queue_length;
+ unsigned int count = 255;
+
+ while (count--) {
+ /* See if we have motion data. */
+ VMMOUSE_CMD(ABSPOINTER_STATUS, 0,
+ status, dummy1, dummy2, dummy3);
+ if ((status & VMMOUSE_ERROR) == VMMOUSE_ERROR) {
+ psmouse_err(psmouse, "failed to fetch status data\n");
+ /*
+ * After a few attempts this will result in
+ * reconnect.
+ */
+ return PSMOUSE_BAD_DATA;
+ }
+
+ queue_length = status & 0xffff;
+ if (queue_length == 0)
+ break;
+
+ if (queue_length % 4) {
+ psmouse_err(psmouse, "invalid queue length\n");
+ return PSMOUSE_BAD_DATA;
+ }
+
+ /* Now get it */
+ VMMOUSE_CMD(ABSPOINTER_DATA, 4, status, x, y, z);
+
+ /*
+ * And report what we've got. Prefer to report button
+ * events on the same device where we report motion events.
+ * This doesn't work well with the mouse wheel, though. See
+ * below. Ideally we would want to report that on the
+ * preferred device as well.
+ */
+ if (status & VMMOUSE_RELATIVE_PACKET) {
+ pref_dev = rel_dev;
+ input_report_rel(rel_dev, REL_X, (s32)x);
+ input_report_rel(rel_dev, REL_Y, -(s32)y);
+ } else {
+ pref_dev = abs_dev;
+ input_report_abs(abs_dev, ABS_X, x);
+ input_report_abs(abs_dev, ABS_Y, y);
+ }
+
+ /* Xorg seems to ignore wheel events on absolute devices */
+ input_report_rel(rel_dev, REL_WHEEL, -(s8)((u8) z));
+
+ vmmouse_report_button(psmouse, abs_dev, rel_dev,
+ pref_dev, BTN_LEFT,
+ status & VMMOUSE_LEFT_BUTTON);
+ vmmouse_report_button(psmouse, abs_dev, rel_dev,
+ pref_dev, BTN_RIGHT,
+ status & VMMOUSE_RIGHT_BUTTON);
+ vmmouse_report_button(psmouse, abs_dev, rel_dev,
+ pref_dev, BTN_MIDDLE,
+ status & VMMOUSE_MIDDLE_BUTTON);
+ input_sync(abs_dev);
+ input_sync(rel_dev);
+ }
+
+ return PSMOUSE_FULL_PACKET;
+}
+
+/**
+ * vmmouse_process_byte - process data on the ps/2 channel
+ *
+ * @psmouse: Pointer to the psmouse struct
+ *
+ * When the ps/2 channel indicates that there is vmmouse data available,
+ * call vmmouse channel processing. Otherwise, continue to accept bytes. If
+ * there is a synchronization or communication data error, return
+ * PSMOUSE_BAD_DATA in the hope that the caller will reset the mouse.
+ */
+static psmouse_ret_t vmmouse_process_byte(struct psmouse *psmouse)
+{
+ unsigned char *packet = psmouse->packet;
+
+ switch (psmouse->pktcnt) {
+ case 1:
+ return (packet[0] & 0x8) == 0x8 ?
+ PSMOUSE_GOOD_DATA : PSMOUSE_BAD_DATA;
+
+ case 2:
+ return PSMOUSE_GOOD_DATA;
+
+ default:
+ return vmmouse_report_events(psmouse);
+ }
+}
+
+/**
+ * vmmouse_disable - Disable vmmouse
+ *
+ * @psmouse: Pointer to the psmouse struct
+ *
+ * Tries to disable vmmouse mode.
+ */
+static void vmmouse_disable(struct psmouse *psmouse)
+{
+ u32 status;
+ u32 dummy1, dummy2, dummy3, dummy4;
+
+ VMMOUSE_CMD(ABSPOINTER_COMMAND, VMMOUSE_CMD_DISABLE,
+ dummy1, dummy2, dummy3, dummy4);
+
+ VMMOUSE_CMD(ABSPOINTER_STATUS, 0,
+ status, dummy1, dummy2, dummy3);
+
+ if ((status & VMMOUSE_ERROR) != VMMOUSE_ERROR)
+ psmouse_warn(psmouse, "failed to disable vmmouse device\n");
+}
+
+/**
+ * vmmouse_enable - Enable vmmouse and request absolute mode.
+ *
+ * @psmouse: Pointer to the psmouse struct
+ *
+ * Tries to enable vmmouse mode. Performs basic checks and requests
+ * absolute vmmouse mode.
+ * Returns 0 on success, -ENODEV on failure.
+ */
+static int vmmouse_enable(struct psmouse *psmouse)
+{
+ u32 status, version;
+ u32 dummy1, dummy2, dummy3, dummy4;
+
+ /*
+ * Try enabling the device. If successful, we should be able to
+ * read valid version ID back from it.
+ */
+ VMMOUSE_CMD(ABSPOINTER_COMMAND, VMMOUSE_CMD_ENABLE,
+ dummy1, dummy2, dummy3, dummy4);
+
+ /*
+ * See if version ID can be retrieved.
+ */
+ VMMOUSE_CMD(ABSPOINTER_STATUS, 0, status, dummy1, dummy2, dummy3);
+ if ((status & 0x0000ffff) == 0) {
+ psmouse_dbg(psmouse, "empty flags - assuming no device\n");
+ return -ENXIO;
+ }
+
+ VMMOUSE_CMD(ABSPOINTER_DATA, 1 /* single item */,
+ version, dummy1, dummy2, dummy3);
+ if (version != VMMOUSE_VERSION_ID) {
+ psmouse_dbg(psmouse, "Unexpected version value: %u vs %u\n",
+ (unsigned) version, VMMOUSE_VERSION_ID);
+ vmmouse_disable(psmouse);
+ return -ENXIO;
+ }
+
+ /*
+ * Restrict ioport access, if possible.
+ */
+ VMMOUSE_CMD(ABSPOINTER_RESTRICT, VMMOUSE_RESTRICT_CPL0,
+ dummy1, dummy2, dummy3, dummy4);
+
+ VMMOUSE_CMD(ABSPOINTER_COMMAND, VMMOUSE_CMD_REQUEST_ABSOLUTE,
+ dummy1, dummy2, dummy3, dummy4);
+
+ return 0;
+}
+
+/*
+ * Array of supported hypervisors.
+ */
+static const struct hypervisor_x86 *vmmouse_supported_hypervisors[] = {
+ &x86_hyper_vmware,
+#ifdef CONFIG_KVM_GUEST
+ &x86_hyper_kvm,
+#endif
+};
+
+/**
+ * vmmouse_check_hypervisor - Check if we're running on a supported hypervisor
+ */
+static bool vmmouse_check_hypervisor(void)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(vmmouse_supported_hypervisors); i++)
+ if (vmmouse_supported_hypervisors[i] == x86_hyper)
+ return true;
+
+ return false;
+}
+
+/**
+ * vmmouse_detect - Probe whether vmmouse is available
+ *
+ * @psmouse: Pointer to the psmouse struct
+ * @set_properties: Whether to set psmouse name and vendor
+ *
+ * Returns 0 if vmmouse channel is available. Negative error code if not.
+ */
+int vmmouse_detect(struct psmouse *psmouse, bool set_properties)
+{
+ u32 response, version, dummy1, dummy2;
+
+ if (!vmmouse_check_hypervisor()) {
+ psmouse_dbg(psmouse,
+ "VMMouse not running on supported hypervisor.\n");
+ return -ENXIO;
+ }
+
+ if (!request_region(VMMOUSE_PROTO_PORT, 4, "vmmouse")) {
+ psmouse_dbg(psmouse, "VMMouse port in use.\n");
+ return -EBUSY;
+ }
+
+ /* Check if the device is present */
+ response = ~VMMOUSE_PROTO_MAGIC;
+ VMMOUSE_CMD(GETVERSION, 0, version, response, dummy1, dummy2);
+ if (response != VMMOUSE_PROTO_MAGIC || version == 0xffffffffU) {
+ release_region(VMMOUSE_PROTO_PORT, 4);
+ return -ENXIO;
+ }
+
+ if (set_properties) {
+ psmouse->vendor = VMMOUSE_VENDOR;
+ psmouse->name = VMMOUSE_NAME;
+ psmouse->model = version;
+ }
+
+ release_region(VMMOUSE_PROTO_PORT, 4);
+
+ return 0;
+}
+
+/**
+ * vmmouse_disconnect - Take down vmmouse driver
+ *
+ * @psmouse: Pointer to the psmouse struct
+ *
+ * Takes down vmmouse driver and frees resources set up in vmmouse_init().
+ */
+static void vmmouse_disconnect(struct psmouse *psmouse)
+{
+ struct vmmouse_data *priv = psmouse->private;
+
+ vmmouse_disable(psmouse);
+ psmouse_reset(psmouse);
+ input_unregister_device(priv->abs_dev);
+ kfree(priv);
+ release_region(VMMOUSE_PROTO_PORT, 4);
+}
+
+/**
+ * vmmouse_reconnect - Reset the ps/2 - and vmmouse connections
+ *
+ * @psmouse: Pointer to the psmouse struct
+ *
+ * Attempts to reset the mouse connections. Returns 0 on success and
+ * -1 on failure.
+ */
+static int vmmouse_reconnect(struct psmouse *psmouse)
+{
+ int error;
+
+ psmouse_reset(psmouse);
+ vmmouse_disable(psmouse);
+ error = vmmouse_enable(psmouse);
+ if (error) {
+ psmouse_err(psmouse,
+ "Unable to re-enable mouse when reconnecting, err: %d\n",
+ error);
+ return error;
+ }
+
+ return 0;
+}
+
+/**
+ * vmmouse_init - Initialize the vmmouse driver
+ *
+ * @psmouse: Pointer to the psmouse struct
+ *
+ * Requests the device and tries to enable vmmouse mode.
+ * If successful, sets up the input device for relative movement events.
+ * It also allocates another input device and sets it up for absolute motion
+ * events. Returns 0 on success and -1 on failure.
+ */
+int vmmouse_init(struct psmouse *psmouse)
+{
+ struct vmmouse_data *priv;
+ struct input_dev *rel_dev = psmouse->dev, *abs_dev;
+ int error;
+
+ if (!request_region(VMMOUSE_PROTO_PORT, 4, "vmmouse")) {
+ psmouse_dbg(psmouse, "VMMouse port in use.\n");
+ return -EBUSY;
+ }
+
+ psmouse_reset(psmouse);
+ error = vmmouse_enable(psmouse);
+ if (error)
+ goto release_region;
+
+ priv = kzalloc(sizeof(*priv), GFP_KERNEL);
+ abs_dev = input_allocate_device();
+ if (!priv || !abs_dev) {
+ error = -ENOMEM;
+ goto init_fail;
+ }
+
+ priv->abs_dev = abs_dev;
+ psmouse->private = priv;
+
+ input_set_capability(rel_dev, EV_REL, REL_WHEEL);
+
+ /* Set up and register absolute device */
+ snprintf(priv->phys, sizeof(priv->phys), "%s/input1",
+ psmouse->ps2dev.serio->phys);
+
+ /* Mimic name setup for relative device in psmouse-base.c */
+ snprintf(priv->dev_name, sizeof(priv->dev_name), "%s %s %s",
+ VMMOUSE_PSNAME, VMMOUSE_VENDOR, VMMOUSE_NAME);
+ abs_dev->phys = priv->phys;
+ abs_dev->name = priv->dev_name;
+ abs_dev->id.bustype = BUS_I8042;
+ abs_dev->id.vendor = 0x0002;
+ abs_dev->id.product = PSMOUSE_VMMOUSE;
+ abs_dev->id.version = psmouse->model;
+ abs_dev->dev.parent = &psmouse->ps2dev.serio->dev;
+
+ error = input_register_device(priv->abs_dev);
+ if (error)
+ goto init_fail;
+
+ /* Set absolute device capabilities */
+ input_set_capability(abs_dev, EV_KEY, BTN_LEFT);
+ input_set_capability(abs_dev, EV_KEY, BTN_RIGHT);
+ input_set_capability(abs_dev, EV_KEY, BTN_MIDDLE);
+ input_set_capability(abs_dev, EV_ABS, ABS_X);
+ input_set_capability(abs_dev, EV_ABS, ABS_Y);
+ input_set_abs_params(abs_dev, ABS_X, 0, VMMOUSE_MAX_X, 0, 0);
+ input_set_abs_params(abs_dev, ABS_Y, 0, VMMOUSE_MAX_Y, 0, 0);
+
+ psmouse->protocol_handler = vmmouse_process_byte;
+ psmouse->disconnect = vmmouse_disconnect;
+ psmouse->reconnect = vmmouse_reconnect;
+
+ return 0;
+
+init_fail:
+ vmmouse_disable(psmouse);
+ psmouse_reset(psmouse);
+ input_free_device(abs_dev);
+ kfree(priv);
+ psmouse->private = NULL;
+
+release_region:
+ release_region(VMMOUSE_PROTO_PORT, 4);
+
+ return error;
+}
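A small illustration of how vmmouse_report_events() above interprets the ABSPOINTER_STATUS word: the upper 16 bits carry the error pattern, the lower 16 bits the number of words queued by the hypervisor (four words per packet). This is a standalone sketch, not part of the driver:

	#include <stdio.h>
	#include <stdint.h>

	#define VMMOUSE_ERROR 0xffff0000U

	/* returns the number of queued words, or -1 on a host-side error */
	static int vmmouse_queue_words(uint32_t status)
	{
		if ((status & VMMOUSE_ERROR) == VMMOUSE_ERROR)
			return -1;
		return status & 0xffff;		/* multiple of 4: one packet = status, x, y, z */
	}

	int main(void)
	{
		printf("%d\n", vmmouse_queue_words(0x00000008));	/* 8  -> two packets queued */
		printf("%d\n", vmmouse_queue_words(0xffff0000));	/* -1 -> error, reset channel */
		return 0;
	}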
diff --git a/drivers/input/mouse/vmmouse.h b/drivers/input/mouse/vmmouse.h
new file mode 100644
index 0000000..6f12601
--- /dev/null
+++ b/drivers/input/mouse/vmmouse.h
@@ -0,0 +1,30 @@
+/*
+ * Driver for Virtual PS/2 Mouse on VMware and QEMU hypervisors.
+ *
+ * Copyright (C) 2014, VMware, Inc. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ */
+
+#ifndef _VMMOUSE_H
+#define _VMMOUSE_H
+
+#ifdef CONFIG_MOUSE_PS2_VMMOUSE
+#define VMMOUSE_PSNAME "VirtualPS/2"
+
+int vmmouse_detect(struct psmouse *psmouse, bool set_properties);
+int vmmouse_init(struct psmouse *psmouse);
+#else
+static inline int vmmouse_detect(struct psmouse *psmouse, bool set_properties)
+{
+ return -ENOSYS;
+}
+static inline int vmmouse_init(struct psmouse *psmouse)
+{
+ return -ENOSYS;
+}
+#endif
+
+#endif
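vmmouse.h above uses the usual CONFIG-gated stub pattern so psmouse-base.c can call vmmouse_detect() unconditionally; with the option off, the inline stub simply returns an error and protocol probing moves on. A self-contained sketch of the idea (the names below are local to this example, not kernel symbols):

	#include <stdio.h>
	#include <errno.h>

	#define CONFIG_MOUSE_PS2_VMMOUSE 0	/* flip to 1 to "enable" the driver */

	#if CONFIG_MOUSE_PS2_VMMOUSE
	static int vmmouse_detect_example(void) { return 0; }		/* real probe */
	#else
	static int vmmouse_detect_example(void) { return -ENOSYS; }	/* stub */
	#endif

	int main(void)
	{
		int err = vmmouse_detect_example();

		printf(err ? "vmmouse not compiled in: %d\n" : "vmmouse detected\n", err);
		return 0;
	}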
diff --git a/drivers/input/touchscreen/Kconfig b/drivers/input/touchscreen/Kconfig
index 547f67d..80f6386 100644
--- a/drivers/input/touchscreen/Kconfig
+++ b/drivers/input/touchscreen/Kconfig
@@ -980,7 +980,9 @@
config TOUCHSCREEN_SUR40
tristate "Samsung SUR40 (Surface 2.0/PixelSense) touchscreen"
depends on USB
+ depends on MEDIA_USB_SUPPORT
select INPUT_POLLDEV
+ select VIDEOBUF2_DMA_SG
help
Say Y here if you want support for the Samsung SUR40 touchscreen
(also known as Microsoft Surface 2.0 or Microsoft PixelSense).
diff --git a/drivers/input/touchscreen/atmel_mxt_ts.c b/drivers/input/touchscreen/atmel_mxt_ts.c
index 2875ddf..40b98dd 100644
--- a/drivers/input/touchscreen/atmel_mxt_ts.c
+++ b/drivers/input/touchscreen/atmel_mxt_ts.c
@@ -14,6 +14,8 @@
*
*/
+#include <linux/acpi.h>
+#include <linux/dmi.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/completion.h>
@@ -2371,7 +2373,7 @@
}
#ifdef CONFIG_OF
-static struct mxt_platform_data *mxt_parse_dt(struct i2c_client *client)
+static const struct mxt_platform_data *mxt_parse_dt(struct i2c_client *client)
{
struct mxt_platform_data *pdata;
u32 *keymap;
@@ -2379,7 +2381,7 @@
int proplen, i, ret;
if (!client->dev.of_node)
- return ERR_PTR(-ENODEV);
+ return ERR_PTR(-ENOENT);
pdata = devm_kzalloc(&client->dev, sizeof(*pdata), GFP_KERNEL);
if (!pdata)
@@ -2410,25 +2412,132 @@
return pdata;
}
#else
-static struct mxt_platform_data *mxt_parse_dt(struct i2c_client *client)
+static const struct mxt_platform_data *mxt_parse_dt(struct i2c_client *client)
{
- dev_dbg(&client->dev, "No platform data specified\n");
- return ERR_PTR(-EINVAL);
+ return ERR_PTR(-ENOENT);
}
#endif
+#ifdef CONFIG_ACPI
+
+struct mxt_acpi_platform_data {
+ const char *hid;
+ struct mxt_platform_data pdata;
+};
+
+static unsigned int samus_touchpad_buttons[] = {
+ KEY_RESERVED,
+ KEY_RESERVED,
+ KEY_RESERVED,
+ BTN_LEFT
+};
+
+static struct mxt_acpi_platform_data samus_platform_data[] = {
+ {
+ /* Touchpad */
+ .hid = "ATML0000",
+ .pdata = {
+ .t19_num_keys = ARRAY_SIZE(samus_touchpad_buttons),
+ .t19_keymap = samus_touchpad_buttons,
+ },
+ },
+ {
+ /* Touchscreen */
+ .hid = "ATML0001",
+ },
+ { }
+};
+
+static const struct dmi_system_id mxt_dmi_table[] = {
+ {
+ /* 2015 Google Pixel */
+ .ident = "Chromebook Pixel 2",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "GOOGLE"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "Samus"),
+ },
+ .driver_data = samus_platform_data,
+ },
+ { }
+};
+
+static const struct mxt_platform_data *mxt_parse_acpi(struct i2c_client *client)
+{
+ struct acpi_device *adev;
+ const struct dmi_system_id *system_id;
+ const struct mxt_acpi_platform_data *acpi_pdata;
+
+ /*
+ * Ignore ACPI devices representing bootloader mode.
+ *
+ * This is a bit of a hack: Google Chromebook BIOS creates ACPI
+ * devices for both application and bootloader modes, but we are
+ * interested in application mode only (if device is in bootloader
+ * mode we'll end up switching into application anyway). So far
+ * application mode addresses were all above 0x40, so we'll use it
+ * as a threshold.
+ */
+ if (client->addr < 0x40)
+ return ERR_PTR(-ENXIO);
+
+ adev = ACPI_COMPANION(&client->dev);
+ if (!adev)
+ return ERR_PTR(-ENOENT);
+
+ system_id = dmi_first_match(mxt_dmi_table);
+ if (!system_id)
+ return ERR_PTR(-ENOENT);
+
+ acpi_pdata = system_id->driver_data;
+ if (!acpi_pdata)
+ return ERR_PTR(-ENOENT);
+
+ while (acpi_pdata->hid) {
+ if (!strcmp(acpi_device_hid(adev), acpi_pdata->hid))
+ return &acpi_pdata->pdata;
+
+ acpi_pdata++;
+ }
+
+ return ERR_PTR(-ENOENT);
+}
+#else
+static const struct mxt_platform_data *mxt_parse_acpi(struct i2c_client *client)
+{
+ return ERR_PTR(-ENOENT);
+}
+#endif
+
+static const struct mxt_platform_data *
+mxt_get_platform_data(struct i2c_client *client)
+{
+ const struct mxt_platform_data *pdata;
+
+ pdata = dev_get_platdata(&client->dev);
+ if (pdata)
+ return pdata;
+
+ pdata = mxt_parse_dt(client);
+ if (!IS_ERR(pdata) || PTR_ERR(pdata) != -ENOENT)
+ return pdata;
+
+ pdata = mxt_parse_acpi(client);
+ if (!IS_ERR(pdata) || PTR_ERR(pdata) != -ENOENT)
+ return pdata;
+
+ dev_err(&client->dev, "No platform data specified\n");
+ return ERR_PTR(-EINVAL);
+}
+
static int mxt_probe(struct i2c_client *client, const struct i2c_device_id *id)
{
struct mxt_data *data;
const struct mxt_platform_data *pdata;
int error;
- pdata = dev_get_platdata(&client->dev);
- if (!pdata) {
- pdata = mxt_parse_dt(client);
- if (IS_ERR(pdata))
- return PTR_ERR(pdata);
- }
+ pdata = mxt_get_platform_data(client);
+ if (IS_ERR(pdata))
+ return PTR_ERR(pdata);
data = kzalloc(sizeof(struct mxt_data), GFP_KERNEL);
if (!data) {
@@ -2536,6 +2645,15 @@
};
MODULE_DEVICE_TABLE(of, mxt_of_match);
+#ifdef CONFIG_ACPI
+static const struct acpi_device_id mxt_acpi_id[] = {
+ { "ATML0000", 0 }, /* Touchpad */
+ { "ATML0001", 0 }, /* Touchscreen */
+ { }
+};
+MODULE_DEVICE_TABLE(acpi, mxt_acpi_id);
+#endif
+
static const struct i2c_device_id mxt_id[] = {
{ "qt602240_ts", 0 },
{ "atmel_mxt_ts", 0 },
@@ -2550,6 +2668,7 @@
.name = "atmel_mxt_ts",
.owner = THIS_MODULE,
.of_match_table = of_match_ptr(mxt_of_match),
+ .acpi_match_table = ACPI_PTR(mxt_acpi_id),
.pm = &mxt_pm_ops,
},
.probe = mxt_probe,
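The atmel_mxt_ts changes above funnel platform-data acquisition through mxt_get_platform_data(), trying board data, then device tree, then the ACPI/DMI table; -ENOENT means "not present here, try the next source" and any other error aborts the probe. A compressed illustration of that fallback order (the ERR_PTR stand-ins and helper names below are local to this example):

	#include <stdio.h>
	#include <errno.h>
	#include <stdint.h>

	/* minimal stand-ins for the kernel's ERR_PTR helpers, illustration only */
	#define MAX_ERRNO	4095
	#define ERR_PTR(err)	((void *)(intptr_t)(err))
	#define PTR_ERR(ptr)	((long)(intptr_t)(ptr))
	#define IS_ERR(ptr)	((uintptr_t)(ptr) >= (uintptr_t)-MAX_ERRNO)

	static void *parse_board(void)	{ return NULL; }		/* no board file   */
	static void *parse_dt(void)	{ return ERR_PTR(-ENOENT); }	/* no OF node      */
	static void *parse_acpi(void)	{ static int pdata; return &pdata; }	/* DMI match */

	int main(void)
	{
		void *pdata = parse_board();

		if (!pdata) {
			pdata = parse_dt();
			if (IS_ERR(pdata) && PTR_ERR(pdata) == -ENOENT)
				pdata = parse_acpi();
		}
		if (!pdata || IS_ERR(pdata)) {
			printf("no platform data: probe fails\n");
			return 1;
		}
		printf("platform data found\n");
		return 0;
	}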
diff --git a/drivers/input/touchscreen/elants_i2c.c b/drivers/input/touchscreen/elants_i2c.c
index 43b3c9c..0efd766 100644
--- a/drivers/input/touchscreen/elants_i2c.c
+++ b/drivers/input/touchscreen/elants_i2c.c
@@ -699,7 +699,7 @@
char *fw_name;
int error;
- fw_name = kasprintf(GFP_KERNEL, "elants_i2c_%4x.bin", ts->hw_version);
+ fw_name = kasprintf(GFP_KERNEL, "elants_i2c_%04x.bin", ts->hw_version);
if (!fw_name)
return -ENOMEM;
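The elants_i2c one-liner above changes the firmware-name format from %4x to %04x so small hardware versions are zero-padded instead of space-padded, which would otherwise put spaces into the requested file name. A quick demonstration of the difference:

	#include <stdio.h>

	int main(void)
	{
		unsigned int hw_version = 0x42;

		printf("elants_i2c_%4x.bin\n", hw_version);	/* "elants_i2c_  42.bin" - spaces  */
		printf("elants_i2c_%04x.bin\n", hw_version);	/* "elants_i2c_0042.bin" - padded  */
		return 0;
	}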
diff --git a/drivers/input/touchscreen/sur40.c b/drivers/input/touchscreen/sur40.c
index f1cb051..a24eba5 100644
--- a/drivers/input/touchscreen/sur40.c
+++ b/drivers/input/touchscreen/sur40.c
@@ -1,7 +1,7 @@
/*
* Surface2.0/SUR40/PixelSense input driver
*
- * Copyright (c) 2013 by Florian 'floe' Echtler <floe@butterbrot.org>
+ * Copyright (c) 2014 by Florian 'floe' Echtler <floe@butterbrot.org>
*
* Derived from the USB Skeleton driver 1.1,
* Copyright (c) 2003 Greg Kroah-Hartman (greg@kroah.com)
@@ -12,6 +12,9 @@
* and from the generic hid-multitouch driver,
* Copyright (c) 2010-2012 Stephane Chatty <chatty@enac.fr>
*
+ * and from the v4l2-pci-skeleton driver,
+ * Copyright (c) Copyright 2014 Cisco Systems, Inc.
+ *
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
* published by the Free Software Foundation; either version 2 of
@@ -31,6 +34,11 @@
#include <linux/input-polldev.h>
#include <linux/input/mt.h>
#include <linux/usb/input.h>
+#include <linux/videodev2.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-dev.h>
+#include <media/v4l2-ioctl.h>
+#include <media/videobuf2-dma-sg.h>
/* read 512 bytes from endpoint 0x86 -> get header + blobs */
struct sur40_header {
@@ -82,9 +90,19 @@
struct sur40_blob blobs[];
} __packed;
+/* read 512 bytes from endpoint 0x82 -> get header below
+ * continue reading 16k blocks until header.size bytes read */
+struct sur40_image_header {
+ __le32 magic; /* "SUBF" */
+ __le32 packet_id;
+ __le32 size; /* always 0x0007e900 = 960x540 */
+ __le32 timestamp; /* milliseconds (increases by 16 or 17 each frame) */
+ __le32 unknown; /* "epoch?" always 02/03 00 00 00 */
+} __packed;
/* version information */
#define DRIVER_SHORT "sur40"
+#define DRIVER_LONG "Samsung SUR40"
#define DRIVER_AUTHOR "Florian 'floe' Echtler <floe@butterbrot.org>"
#define DRIVER_DESC "Surface2.0/SUR40/PixelSense input driver"
@@ -99,6 +117,13 @@
/* touch data endpoint */
#define TOUCH_ENDPOINT 0x86
+/* video data endpoint */
+#define VIDEO_ENDPOINT 0x82
+
+/* video header fields */
+#define VIDEO_HEADER_MAGIC 0x46425553
+#define VIDEO_PACKET_SIZE 16384
+
/* polling interval (ms) */
#define POLL_INTERVAL 10
@@ -113,6 +138,41 @@
#define SUR40_GET_STATE 0xc5 /* 4 bytes state (?) */
#define SUR40_GET_SENSORS 0xb1 /* 8 bytes sensors */
+/* master device state */
+struct sur40_state {
+
+ struct usb_device *usbdev;
+ struct device *dev;
+ struct input_polled_dev *input;
+
+ struct v4l2_device v4l2;
+ struct video_device vdev;
+ struct mutex lock;
+
+ struct vb2_queue queue;
+ struct vb2_alloc_ctx *alloc_ctx;
+ struct list_head buf_list;
+ spinlock_t qlock;
+ int sequence;
+
+ struct sur40_data *bulk_in_buffer;
+ size_t bulk_in_size;
+ u8 bulk_in_epaddr;
+
+ char phys[64];
+};
+
+struct sur40_buffer {
+ struct vb2_buffer vb;
+ struct list_head list;
+};
+
+/* forward declarations */
+static const struct video_device sur40_video_device;
+static const struct v4l2_pix_format sur40_video_format;
+static const struct vb2_queue sur40_queue;
+static void sur40_process_video(struct sur40_state *sur40);
+
/*
* Note: an earlier, non-public version of this driver used USB_RECIP_ENDPOINT
* here by mistake which is very likely to have corrupted the firmware EEPROM
@@ -122,19 +182,7 @@
* at https://floe.butterbrot.org/matrix/hacking/surface/brick.html
*/
-struct sur40_state {
-
- struct usb_device *usbdev;
- struct device *dev;
- struct input_polled_dev *input;
-
- struct sur40_data *bulk_in_buffer;
- size_t bulk_in_size;
- u8 bulk_in_epaddr;
-
- char phys[64];
-};
-
+/* command wrapper */
static int sur40_command(struct sur40_state *dev,
u8 command, u16 index, void *buffer, u16 size)
{
@@ -247,7 +295,6 @@
/* core function: poll for new input data */
static void sur40_poll(struct input_polled_dev *polldev)
{
-
struct sur40_state *sur40 = polldev->private;
struct input_dev *input = polldev->input;
int result, bulk_read, need_blobs, packet_blobs, i;
@@ -314,6 +361,86 @@
input_mt_sync_frame(input);
input_sync(input);
+
+ sur40_process_video(sur40);
+}
+
+/* deal with video data */
+static void sur40_process_video(struct sur40_state *sur40)
+{
+
+ struct sur40_image_header *img = (void *)(sur40->bulk_in_buffer);
+ struct sur40_buffer *new_buf;
+ struct usb_sg_request sgr;
+ struct sg_table *sgt;
+ int result, bulk_read;
+
+ if (!vb2_start_streaming_called(&sur40->queue))
+ return;
+
+ /* get a new buffer from the list */
+ spin_lock(&sur40->qlock);
+ if (list_empty(&sur40->buf_list)) {
+ dev_dbg(sur40->dev, "buffer queue empty\n");
+ spin_unlock(&sur40->qlock);
+ return;
+ }
+ new_buf = list_entry(sur40->buf_list.next, struct sur40_buffer, list);
+ list_del(&new_buf->list);
+ spin_unlock(&sur40->qlock);
+
+ /* retrieve data via bulk read */
+ result = usb_bulk_msg(sur40->usbdev,
+ usb_rcvbulkpipe(sur40->usbdev, VIDEO_ENDPOINT),
+ sur40->bulk_in_buffer, sur40->bulk_in_size,
+ &bulk_read, 1000);
+
+ if (result < 0) {
+ dev_err(sur40->dev, "error in usb_bulk_read\n");
+ goto err_poll;
+ }
+
+ if (bulk_read != sizeof(struct sur40_image_header)) {
+ dev_err(sur40->dev, "received %d bytes (%zd expected)\n",
+ bulk_read, sizeof(struct sur40_image_header));
+ goto err_poll;
+ }
+
+ if (le32_to_cpu(img->magic) != VIDEO_HEADER_MAGIC) {
+ dev_err(sur40->dev, "image magic mismatch\n");
+ goto err_poll;
+ }
+
+ if (le32_to_cpu(img->size) != sur40_video_format.sizeimage) {
+ dev_err(sur40->dev, "image size mismatch\n");
+ goto err_poll;
+ }
+
+ sgt = vb2_dma_sg_plane_desc(&new_buf->vb, 0);
+
+ result = usb_sg_init(&sgr, sur40->usbdev,
+ usb_rcvbulkpipe(sur40->usbdev, VIDEO_ENDPOINT), 0,
+ sgt->sgl, sgt->nents, sur40_video_format.sizeimage, 0);
+ if (result < 0) {
+ dev_err(sur40->dev, "error %d in usb_sg_init\n", result);
+ goto err_poll;
+ }
+
+ usb_sg_wait(&sgr);
+ if (sgr.status < 0) {
+ dev_err(sur40->dev, "error %d in usb_sg_wait\n", sgr.status);
+ goto err_poll;
+ }
+
+ /* mark as finished */
+ v4l2_get_timestamp(&new_buf->vb.v4l2_buf.timestamp);
+ new_buf->vb.v4l2_buf.sequence = sur40->sequence++;
+ new_buf->vb.v4l2_buf.field = V4L2_FIELD_NONE;
+ vb2_buffer_done(&new_buf->vb, VB2_BUF_STATE_DONE);
+ return;
+
+err_poll:
+ vb2_buffer_done(&new_buf->vb, VB2_BUF_STATE_ERROR);
}
/* Initialize input device parameters. */
@@ -377,6 +504,11 @@
goto err_free_dev;
}
+ /* initialize locks/lists */
+ INIT_LIST_HEAD(&sur40->buf_list);
+ spin_lock_init(&sur40->qlock);
+ mutex_init(&sur40->lock);
+
/* Set up polled input device control structure */
poll_dev->private = sur40;
poll_dev->poll_interval = POLL_INTERVAL;
@@ -387,7 +519,7 @@
/* Set up regular input device structure */
sur40_input_setup(poll_dev->input);
- poll_dev->input->name = "Samsung SUR40";
+ poll_dev->input->name = DRIVER_LONG;
usb_to_input_id(usbdev, &poll_dev->input->id);
usb_make_path(usbdev, sur40->phys, sizeof(sur40->phys));
strlcat(sur40->phys, "/input0", sizeof(sur40->phys));
@@ -408,6 +540,7 @@
goto err_free_polldev;
}
+ /* register the polled input device */
error = input_register_polled_device(poll_dev);
if (error) {
dev_err(&interface->dev,
@@ -415,12 +548,54 @@
goto err_free_buffer;
}
+ /* register the video master device */
+ snprintf(sur40->v4l2.name, sizeof(sur40->v4l2.name), "%s", DRIVER_LONG);
+ error = v4l2_device_register(sur40->dev, &sur40->v4l2);
+ if (error) {
+ dev_err(&interface->dev,
+ "Unable to register video master device.");
+ goto err_unreg_v4l2;
+ }
+
+ /* initialize the lock and subdevice */
+ sur40->queue = sur40_queue;
+ sur40->queue.drv_priv = sur40;
+ sur40->queue.lock = &sur40->lock;
+
+ /* initialize the queue */
+ error = vb2_queue_init(&sur40->queue);
+ if (error)
+ goto err_unreg_v4l2;
+
+ sur40->alloc_ctx = vb2_dma_sg_init_ctx(sur40->dev);
+ if (IS_ERR(sur40->alloc_ctx)) {
+ dev_err(sur40->dev, "Can't allocate buffer context");
+ goto err_unreg_v4l2;
+ }
+
+ sur40->vdev = sur40_video_device;
+ sur40->vdev.v4l2_dev = &sur40->v4l2;
+ sur40->vdev.lock = &sur40->lock;
+ sur40->vdev.queue = &sur40->queue;
+ video_set_drvdata(&sur40->vdev, sur40);
+
+ error = video_register_device(&sur40->vdev, VFL_TYPE_GRABBER, -1);
+ if (error) {
+ dev_err(&interface->dev,
+ "Unable to register video subdevice.");
+ goto err_unreg_video;
+ }
+
/* we can register the device now, as it is ready */
usb_set_intfdata(interface, sur40);
dev_dbg(&interface->dev, "%s is now attached\n", DRIVER_DESC);
return 0;
+err_unreg_video:
+ video_unregister_device(&sur40->vdev);
+err_unreg_v4l2:
+ v4l2_device_unregister(&sur40->v4l2);
err_free_buffer:
kfree(sur40->bulk_in_buffer);
err_free_polldev:
@@ -436,6 +611,10 @@
{
struct sur40_state *sur40 = usb_get_intfdata(interface);
+ video_unregister_device(&sur40->vdev);
+ v4l2_device_unregister(&sur40->v4l2);
+ vb2_dma_sg_cleanup_ctx(sur40->alloc_ctx);
+
input_unregister_polled_device(sur40->input);
input_free_polled_device(sur40->input);
kfree(sur40->bulk_in_buffer);
@@ -445,12 +624,243 @@
dev_dbg(&interface->dev, "%s is now disconnected\n", DRIVER_DESC);
}
+/*
+ * Setup the constraints of the queue: besides setting the number of planes
+ * per buffer and the size and allocation context of each plane, it also
+ * checks if sufficient buffers have been allocated. Usually 3 is a good
+ * minimum number: many DMA engines need a minimum of 2 buffers in the
+ * queue and you need to have another available for userspace processing.
+ */
+static int sur40_queue_setup(struct vb2_queue *q, const struct v4l2_format *fmt,
+ unsigned int *nbuffers, unsigned int *nplanes,
+ unsigned int sizes[], void *alloc_ctxs[])
+{
+ struct sur40_state *sur40 = vb2_get_drv_priv(q);
+
+ if (q->num_buffers + *nbuffers < 3)
+ *nbuffers = 3 - q->num_buffers;
+
+ if (fmt && fmt->fmt.pix.sizeimage < sur40_video_format.sizeimage)
+ return -EINVAL;
+
+ *nplanes = 1;
+ sizes[0] = fmt ? fmt->fmt.pix.sizeimage : sur40_video_format.sizeimage;
+ alloc_ctxs[0] = sur40->alloc_ctx;
+
+ return 0;
+}
+
+/*
+ * Prepare the buffer for queueing to the DMA engine: check and set the
+ * payload size.
+ */
+static int sur40_buffer_prepare(struct vb2_buffer *vb)
+{
+ struct sur40_state *sur40 = vb2_get_drv_priv(vb->vb2_queue);
+ unsigned long size = sur40_video_format.sizeimage;
+
+ if (vb2_plane_size(vb, 0) < size) {
+ dev_err(&sur40->usbdev->dev, "buffer too small (%lu < %lu)\n",
+ vb2_plane_size(vb, 0), size);
+ return -EINVAL;
+ }
+
+ vb2_set_plane_payload(vb, 0, size);
+ return 0;
+}
+
+/*
+ * Queue this buffer to the DMA engine.
+ */
+static void sur40_buffer_queue(struct vb2_buffer *vb)
+{
+ struct sur40_state *sur40 = vb2_get_drv_priv(vb->vb2_queue);
+ struct sur40_buffer *buf = (struct sur40_buffer *)vb;
+
+ spin_lock(&sur40->qlock);
+ list_add_tail(&buf->list, &sur40->buf_list);
+ spin_unlock(&sur40->qlock);
+}
+
+static void return_all_buffers(struct sur40_state *sur40,
+ enum vb2_buffer_state state)
+{
+ struct sur40_buffer *buf, *node;
+
+ spin_lock(&sur40->qlock);
+ list_for_each_entry_safe(buf, node, &sur40->buf_list, list) {
+ vb2_buffer_done(&buf->vb, state);
+ list_del(&buf->list);
+ }
+ spin_unlock(&sur40->qlock);
+}
+
+/*
+ * Start streaming. First check if the minimum number of buffers have been
+ * queued. If not, then return -ENOBUFS and the vb2 framework will call
+ * this function again the next time a buffer has been queued until enough
+ * buffers are available to actually start the DMA engine.
+ */
+static int sur40_start_streaming(struct vb2_queue *vq, unsigned int count)
+{
+ struct sur40_state *sur40 = vb2_get_drv_priv(vq);
+
+ sur40->sequence = 0;
+ return 0;
+}
+
+/*
+ * Stop the DMA engine. Any remaining buffers in the DMA queue are dequeued
+ * and passed on to the vb2 framework marked as STATE_ERROR.
+ */
+static void sur40_stop_streaming(struct vb2_queue *vq)
+{
+ struct sur40_state *sur40 = vb2_get_drv_priv(vq);
+
+ /* Release all active buffers */
+ return_all_buffers(sur40, VB2_BUF_STATE_ERROR);
+}
+
+/* V4L ioctl */
+static int sur40_vidioc_querycap(struct file *file, void *priv,
+ struct v4l2_capability *cap)
+{
+ struct sur40_state *sur40 = video_drvdata(file);
+
+ strlcpy(cap->driver, DRIVER_SHORT, sizeof(cap->driver));
+ strlcpy(cap->card, DRIVER_LONG, sizeof(cap->card));
+ usb_make_path(sur40->usbdev, cap->bus_info, sizeof(cap->bus_info));
+ cap->device_caps = V4L2_CAP_VIDEO_CAPTURE |
+ V4L2_CAP_READWRITE |
+ V4L2_CAP_STREAMING;
+ cap->capabilities = cap->device_caps | V4L2_CAP_DEVICE_CAPS;
+ return 0;
+}
+
+static int sur40_vidioc_enum_input(struct file *file, void *priv,
+ struct v4l2_input *i)
+{
+ if (i->index != 0)
+ return -EINVAL;
+ i->type = V4L2_INPUT_TYPE_CAMERA;
+ i->std = V4L2_STD_UNKNOWN;
+ strlcpy(i->name, "In-Cell Sensor", sizeof(i->name));
+ i->capabilities = 0;
+ return 0;
+}
+
+static int sur40_vidioc_s_input(struct file *file, void *priv, unsigned int i)
+{
+ return (i == 0) ? 0 : -EINVAL;
+}
+
+static int sur40_vidioc_g_input(struct file *file, void *priv, unsigned int *i)
+{
+ *i = 0;
+ return 0;
+}
+
+static int sur40_vidioc_fmt(struct file *file, void *priv,
+ struct v4l2_format *f)
+{
+ f->fmt.pix = sur40_video_format;
+ return 0;
+}
+
+static int sur40_vidioc_enum_fmt(struct file *file, void *priv,
+ struct v4l2_fmtdesc *f)
+{
+ if (f->index != 0)
+ return -EINVAL;
+ strlcpy(f->description, "8-bit greyscale", sizeof(f->description));
+ f->pixelformat = V4L2_PIX_FMT_GREY;
+ f->flags = 0;
+ return 0;
+}
+
static const struct usb_device_id sur40_table[] = {
{ USB_DEVICE(ID_MICROSOFT, ID_SUR40) }, /* Samsung SUR40 */
{ } /* terminating null entry */
};
MODULE_DEVICE_TABLE(usb, sur40_table);
+/* V4L2 structures */
+static const struct vb2_ops sur40_queue_ops = {
+ .queue_setup = sur40_queue_setup,
+ .buf_prepare = sur40_buffer_prepare,
+ .buf_queue = sur40_buffer_queue,
+ .start_streaming = sur40_start_streaming,
+ .stop_streaming = sur40_stop_streaming,
+ .wait_prepare = vb2_ops_wait_prepare,
+ .wait_finish = vb2_ops_wait_finish,
+};
+
+static const struct vb2_queue sur40_queue = {
+ .type = V4L2_BUF_TYPE_VIDEO_CAPTURE,
+ /*
+ * VB2_USERPTR is currently not enabled: passing a user pointer to
+ * dma-sg will result in segment sizes that are not a multiple of
+ * 512 bytes, which is required by the host controller.
+ */
+ .io_modes = VB2_MMAP | VB2_READ | VB2_DMABUF,
+ .buf_struct_size = sizeof(struct sur40_buffer),
+ .ops = &sur40_queue_ops,
+ .mem_ops = &vb2_dma_sg_memops,
+ .timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC,
+ .min_buffers_needed = 3,
+};
+
+static const struct v4l2_file_operations sur40_video_fops = {
+ .owner = THIS_MODULE,
+ .open = v4l2_fh_open,
+ .release = vb2_fop_release,
+ .unlocked_ioctl = video_ioctl2,
+ .read = vb2_fop_read,
+ .mmap = vb2_fop_mmap,
+ .poll = vb2_fop_poll,
+};
+
+static const struct v4l2_ioctl_ops sur40_video_ioctl_ops = {
+
+ .vidioc_querycap = sur40_vidioc_querycap,
+
+ .vidioc_enum_fmt_vid_cap = sur40_vidioc_enum_fmt,
+ .vidioc_try_fmt_vid_cap = sur40_vidioc_fmt,
+ .vidioc_s_fmt_vid_cap = sur40_vidioc_fmt,
+ .vidioc_g_fmt_vid_cap = sur40_vidioc_fmt,
+
+ .vidioc_enum_input = sur40_vidioc_enum_input,
+ .vidioc_g_input = sur40_vidioc_g_input,
+ .vidioc_s_input = sur40_vidioc_s_input,
+
+ .vidioc_reqbufs = vb2_ioctl_reqbufs,
+ .vidioc_create_bufs = vb2_ioctl_create_bufs,
+ .vidioc_querybuf = vb2_ioctl_querybuf,
+ .vidioc_qbuf = vb2_ioctl_qbuf,
+ .vidioc_dqbuf = vb2_ioctl_dqbuf,
+ .vidioc_expbuf = vb2_ioctl_expbuf,
+
+ .vidioc_streamon = vb2_ioctl_streamon,
+ .vidioc_streamoff = vb2_ioctl_streamoff,
+};
+
+static const struct video_device sur40_video_device = {
+ .name = DRIVER_LONG,
+ .fops = &sur40_video_fops,
+ .ioctl_ops = &sur40_video_ioctl_ops,
+ .release = video_device_release_empty,
+};
+
+static const struct v4l2_pix_format sur40_video_format = {
+ .pixelformat = V4L2_PIX_FMT_GREY,
+ .width = SENSOR_RES_X / 2,
+ .height = SENSOR_RES_Y / 2,
+ .field = V4L2_FIELD_NONE,
+ .colorspace = V4L2_COLORSPACE_SRGB,
+ .bytesperline = SENSOR_RES_X / 2,
+ .sizeimage = (SENSOR_RES_X/2) * (SENSOR_RES_Y/2),
+};
+
/* USB-specific object needed to register this driver with the USB subsystem. */
static struct usb_driver sur40_driver = {
.name = DRIVER_SHORT,
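In the sur40 video path added above, each image transfer starts with a sur40_image_header whose magic is compared against VIDEO_HEADER_MAGIC (0x46425553), i.e. the ASCII string "SUBF" read as a little-endian 32-bit word. A tiny check of that correspondence (illustrative only; assumes a little-endian host):

	#include <stdio.h>
	#include <string.h>
	#include <stdint.h>

	#define VIDEO_HEADER_MAGIC 0x46425553u

	int main(void)
	{
		uint32_t magic;

		memcpy(&magic, "SUBF", 4);	/* little-endian hosts: magic == 0x46425553 */
		printf("match: %d\n", magic == VIDEO_HEADER_MAGIC);
		return 0;
	}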
diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
index 48882c1..e43d489 100644
--- a/drivers/iommu/amd_iommu.c
+++ b/drivers/iommu/amd_iommu.c
@@ -33,6 +33,7 @@
#include <linux/export.h>
#include <linux/irq.h>
#include <linux/msi.h>
+#include <linux/dma-contiguous.h>
#include <asm/irq_remapping.h>
#include <asm/io_apic.h>
#include <asm/apic.h>
@@ -126,6 +127,11 @@
*
****************************************************************************/
+static struct protection_domain *to_pdomain(struct iommu_domain *dom)
+{
+ return container_of(dom, struct protection_domain, domain);
+}
+
static struct iommu_dev_data *alloc_dev_data(u16 devid)
{
struct iommu_dev_data *dev_data;
@@ -1321,7 +1327,9 @@
* This function checks if there is a PTE for a given dma address. If
* there is one, it returns the pointer to it.
*/
-static u64 *fetch_pte(struct protection_domain *domain, unsigned long address)
+static u64 *fetch_pte(struct protection_domain *domain,
+ unsigned long address,
+ unsigned long *page_size)
{
int level;
u64 *pte;
@@ -1329,8 +1337,9 @@
if (address > PM_LEVEL_SIZE(domain->mode))
return NULL;
- level = domain->mode - 1;
- pte = &domain->pt_root[PM_LEVEL_INDEX(level, address)];
+ level = domain->mode - 1;
+ pte = &domain->pt_root[PM_LEVEL_INDEX(level, address)];
+ *page_size = PTE_LEVEL_PAGE_SIZE(level);
while (level > 0) {
@@ -1339,19 +1348,9 @@
return NULL;
/* Large PTE */
- if (PM_PTE_LEVEL(*pte) == 0x07) {
- unsigned long pte_mask, __pte;
-
- /*
- * If we have a series of large PTEs, make
- * sure to return a pointer to the first one.
- */
- pte_mask = PTE_PAGE_SIZE(*pte);
- pte_mask = ~((PAGE_SIZE_PTE_COUNT(pte_mask) << 3) - 1);
- __pte = ((unsigned long)pte) & pte_mask;
-
- return (u64 *)__pte;
- }
+ if (PM_PTE_LEVEL(*pte) == 7 ||
+ PM_PTE_LEVEL(*pte) == 0)
+ break;
/* No level skipping support yet */
if (PM_PTE_LEVEL(*pte) != level)
@@ -1360,8 +1359,21 @@
level -= 1;
/* Walk to the next level */
- pte = IOMMU_PTE_PAGE(*pte);
- pte = &pte[PM_LEVEL_INDEX(level, address)];
+ pte = IOMMU_PTE_PAGE(*pte);
+ pte = &pte[PM_LEVEL_INDEX(level, address)];
+ *page_size = PTE_LEVEL_PAGE_SIZE(level);
+ }
+
+ if (PM_PTE_LEVEL(*pte) == 0x07) {
+ unsigned long pte_mask;
+
+ /*
+ * If we have a series of large PTEs, make
+ * sure to return a pointer to the first one.
+ */
+ *page_size = pte_mask = PTE_PAGE_SIZE(*pte);
+ pte_mask = ~((PAGE_SIZE_PTE_COUNT(pte_mask) << 3) - 1);
+ pte = (u64 *)(((unsigned long)pte) & pte_mask);
}
return pte;
@@ -1383,13 +1395,14 @@
u64 __pte, *pte;
int i, count;
+ BUG_ON(!IS_ALIGNED(bus_addr, page_size));
+ BUG_ON(!IS_ALIGNED(phys_addr, page_size));
+
if (!(prot & IOMMU_PROT_MASK))
return -EINVAL;
- bus_addr = PAGE_ALIGN(bus_addr);
- phys_addr = PAGE_ALIGN(phys_addr);
- count = PAGE_SIZE_PTE_COUNT(page_size);
- pte = alloc_pte(dom, bus_addr, page_size, NULL, GFP_KERNEL);
+ count = PAGE_SIZE_PTE_COUNT(page_size);
+ pte = alloc_pte(dom, bus_addr, page_size, NULL, GFP_KERNEL);
if (!pte)
return -ENOMEM;
@@ -1398,7 +1411,7 @@
if (IOMMU_PTE_PRESENT(pte[i]))
return -EBUSY;
- if (page_size > PAGE_SIZE) {
+ if (count > 1) {
__pte = PAGE_SIZE_PTE(phys_addr, page_size);
__pte |= PM_LEVEL_ENC(7) | IOMMU_PTE_P | IOMMU_PTE_FC;
} else
@@ -1421,7 +1434,8 @@
unsigned long bus_addr,
unsigned long page_size)
{
- unsigned long long unmap_size, unmapped;
+ unsigned long long unmapped;
+ unsigned long unmap_size;
u64 *pte;
BUG_ON(!is_power_of_2(page_size));
@@ -1430,28 +1444,12 @@
while (unmapped < page_size) {
- pte = fetch_pte(dom, bus_addr);
+ pte = fetch_pte(dom, bus_addr, &unmap_size);
- if (!pte) {
- /*
- * No PTE for this address
- * move forward in 4kb steps
- */
- unmap_size = PAGE_SIZE;
- } else if (PM_PTE_LEVEL(*pte) == 0) {
- /* 4kb PTE found for this address */
- unmap_size = PAGE_SIZE;
- *pte = 0ULL;
- } else {
- int count, i;
+ if (pte) {
+ int i, count;
- /* Large PTE found which maps this address */
- unmap_size = PTE_PAGE_SIZE(*pte);
-
- /* Only unmap from the first pte in the page */
- if ((unmap_size - 1) & bus_addr)
- break;
- count = PAGE_SIZE_PTE_COUNT(unmap_size);
+ count = PAGE_SIZE_PTE_COUNT(unmap_size);
for (i = 0; i < count; i++)
pte[i] = 0ULL;
}
@@ -1599,7 +1597,7 @@
{
int index = dma_dom->aperture_size >> APERTURE_RANGE_SHIFT;
struct amd_iommu *iommu;
- unsigned long i, old_size;
+ unsigned long i, old_size, pte_pgsize;
#ifdef CONFIG_IOMMU_STRESS
populate = false;
@@ -1672,12 +1670,13 @@
*/
for (i = dma_dom->aperture[index]->offset;
i < dma_dom->aperture_size;
- i += PAGE_SIZE) {
- u64 *pte = fetch_pte(&dma_dom->domain, i);
+ i += pte_pgsize) {
+ u64 *pte = fetch_pte(&dma_dom->domain, i, &pte_pgsize);
if (!pte || !IOMMU_PTE_PRESENT(*pte))
continue;
- dma_ops_reserve_addresses(dma_dom, i >> PAGE_SHIFT, 1);
+ dma_ops_reserve_addresses(dma_dom, i >> PAGE_SHIFT,
+ pte_pgsize >> 12);
}
update_domain(&dma_dom->domain);
@@ -2422,16 +2421,6 @@
dev_data = get_dev_data(dev);
switch (action) {
- case BUS_NOTIFY_UNBOUND_DRIVER:
-
- domain = domain_for_device(dev);
-
- if (!domain)
- goto out;
- if (dev_data->passthrough)
- break;
- detach_device(dev);
- break;
case BUS_NOTIFY_ADD_DEVICE:
iommu_init_device(dev);
@@ -2467,7 +2456,7 @@
dev->archdata.dma_ops = &amd_iommu_dma_ops;
break;
- case BUS_NOTIFY_DEL_DEVICE:
+ case BUS_NOTIFY_REMOVED_DEVICE:
iommu_uninit_device(dev);
@@ -2923,38 +2912,42 @@
dma_addr_t *dma_addr, gfp_t flag,
struct dma_attrs *attrs)
{
- unsigned long flags;
- void *virt_addr;
- struct protection_domain *domain;
- phys_addr_t paddr;
u64 dma_mask = dev->coherent_dma_mask;
+ struct protection_domain *domain;
+ unsigned long flags;
+ struct page *page;
INC_STATS_COUNTER(cnt_alloc_coherent);
domain = get_domain(dev);
if (PTR_ERR(domain) == -EINVAL) {
- virt_addr = (void *)__get_free_pages(flag, get_order(size));
- *dma_addr = __pa(virt_addr);
- return virt_addr;
+ page = alloc_pages(flag, get_order(size));
+ *dma_addr = page_to_phys(page);
+ return page_address(page);
} else if (IS_ERR(domain))
return NULL;
+ size = PAGE_ALIGN(size);
dma_mask = dev->coherent_dma_mask;
flag &= ~(__GFP_DMA | __GFP_HIGHMEM | __GFP_DMA32);
- flag |= __GFP_ZERO;
- virt_addr = (void *)__get_free_pages(flag, get_order(size));
- if (!virt_addr)
- return NULL;
+ page = alloc_pages(flag | __GFP_NOWARN, get_order(size));
+ if (!page) {
+ if (!(flag & __GFP_WAIT))
+ return NULL;
- paddr = virt_to_phys(virt_addr);
+ page = dma_alloc_from_contiguous(dev, size >> PAGE_SHIFT,
+ get_order(size));
+ if (!page)
+ return NULL;
+ }
if (!dma_mask)
dma_mask = *dev->dma_mask;
spin_lock_irqsave(&domain->lock, flags);
- *dma_addr = __map_single(dev, domain->priv, paddr,
+ *dma_addr = __map_single(dev, domain->priv, page_to_phys(page),
size, DMA_BIDIRECTIONAL, true, dma_mask);
if (*dma_addr == DMA_ERROR_CODE) {
@@ -2966,11 +2959,12 @@
spin_unlock_irqrestore(&domain->lock, flags);
- return virt_addr;
+ return page_address(page);
out_free:
- free_pages((unsigned long)virt_addr, get_order(size));
+ if (!dma_release_from_contiguous(dev, page, size >> PAGE_SHIFT))
+ __free_pages(page, get_order(size));
return NULL;
}
@@ -2982,11 +2976,15 @@
void *virt_addr, dma_addr_t dma_addr,
struct dma_attrs *attrs)
{
- unsigned long flags;
struct protection_domain *domain;
+ unsigned long flags;
+ struct page *page;
INC_STATS_COUNTER(cnt_free_coherent);
+ page = virt_to_page(virt_addr);
+ size = PAGE_ALIGN(size);
+
domain = get_domain(dev);
if (IS_ERR(domain))
goto free_mem;
@@ -3000,7 +2998,8 @@
spin_unlock_irqrestore(&domain->lock, flags);
free_mem:
- free_pages((unsigned long)virt_addr, get_order(size));
+ if (!dma_release_from_contiguous(dev, page, size >> PAGE_SHIFT))
+ __free_pages(page, get_order(size));
}
/*
@@ -3236,42 +3235,45 @@
return 0;
}
-static int amd_iommu_domain_init(struct iommu_domain *dom)
+
+static struct iommu_domain *amd_iommu_domain_alloc(unsigned type)
+{
+ struct protection_domain *pdomain;
+
+ /* We only support unmanaged domains for now */
+ if (type != IOMMU_DOMAIN_UNMANAGED)
+ return NULL;
+
+ pdomain = protection_domain_alloc();
+ if (!pdomain)
+ goto out_free;
+
+ pdomain->mode = PAGE_MODE_3_LEVEL;
+ pdomain->pt_root = (void *)get_zeroed_page(GFP_KERNEL);
+ if (!pdomain->pt_root)
+ goto out_free;
+
+ pdomain->domain.geometry.aperture_start = 0;
+ pdomain->domain.geometry.aperture_end = ~0ULL;
+ pdomain->domain.geometry.force_aperture = true;
+
+ return &pdomain->domain;
+
+out_free:
+ protection_domain_free(pdomain);
+
+ return NULL;
+}
+
+static void amd_iommu_domain_free(struct iommu_domain *dom)
{
struct protection_domain *domain;
- domain = protection_domain_alloc();
- if (!domain)
- goto out_free;
-
- domain->mode = PAGE_MODE_3_LEVEL;
- domain->pt_root = (void *)get_zeroed_page(GFP_KERNEL);
- if (!domain->pt_root)
- goto out_free;
-
- domain->iommu_domain = dom;
-
- dom->priv = domain;
-
- dom->geometry.aperture_start = 0;
- dom->geometry.aperture_end = ~0ULL;
- dom->geometry.force_aperture = true;
-
- return 0;
-
-out_free:
- protection_domain_free(domain);
-
- return -ENOMEM;
-}
-
-static void amd_iommu_domain_destroy(struct iommu_domain *dom)
-{
- struct protection_domain *domain = dom->priv;
-
- if (!domain)
+ if (!dom)
return;
+ domain = to_pdomain(dom);
+
if (domain->dev_cnt > 0)
cleanup_domain(domain);
@@ -3284,8 +3286,6 @@
free_gcr3_table(domain);
protection_domain_free(domain);
-
- dom->priv = NULL;
}
static void amd_iommu_detach_device(struct iommu_domain *dom,
@@ -3313,7 +3313,7 @@
static int amd_iommu_attach_device(struct iommu_domain *dom,
struct device *dev)
{
- struct protection_domain *domain = dom->priv;
+ struct protection_domain *domain = to_pdomain(dom);
struct iommu_dev_data *dev_data;
struct amd_iommu *iommu;
int ret;
@@ -3340,7 +3340,7 @@
static int amd_iommu_map(struct iommu_domain *dom, unsigned long iova,
phys_addr_t paddr, size_t page_size, int iommu_prot)
{
- struct protection_domain *domain = dom->priv;
+ struct protection_domain *domain = to_pdomain(dom);
int prot = 0;
int ret;
@@ -3362,7 +3362,7 @@
static size_t amd_iommu_unmap(struct iommu_domain *dom, unsigned long iova,
size_t page_size)
{
- struct protection_domain *domain = dom->priv;
+ struct protection_domain *domain = to_pdomain(dom);
size_t unmap_size;
if (domain->mode == PAGE_MODE_NONE)
@@ -3380,28 +3380,22 @@
static phys_addr_t amd_iommu_iova_to_phys(struct iommu_domain *dom,
dma_addr_t iova)
{
- struct protection_domain *domain = dom->priv;
- unsigned long offset_mask;
- phys_addr_t paddr;
+ struct protection_domain *domain = to_pdomain(dom);
+ unsigned long offset_mask, pte_pgsize;
u64 *pte, __pte;
if (domain->mode == PAGE_MODE_NONE)
return iova;
- pte = fetch_pte(domain, iova);
+ pte = fetch_pte(domain, iova, &pte_pgsize);
if (!pte || !IOMMU_PTE_PRESENT(*pte))
return 0;
- if (PM_PTE_LEVEL(*pte) == 0)
- offset_mask = PAGE_SIZE - 1;
- else
- offset_mask = PTE_PAGE_SIZE(*pte) - 1;
+ offset_mask = pte_pgsize - 1;
+ __pte = *pte & PM_ADDR_MASK;
- __pte = *pte & PM_ADDR_MASK;
- paddr = (__pte & ~offset_mask) | (iova & offset_mask);
-
- return paddr;
+ return (__pte & ~offset_mask) | (iova & offset_mask);
}
static bool amd_iommu_capable(enum iommu_cap cap)
@@ -3420,8 +3414,8 @@
static const struct iommu_ops amd_iommu_ops = {
.capable = amd_iommu_capable,
- .domain_init = amd_iommu_domain_init,
- .domain_destroy = amd_iommu_domain_destroy,
+ .domain_alloc = amd_iommu_domain_alloc,
+ .domain_free = amd_iommu_domain_free,
.attach_dev = amd_iommu_attach_device,
.detach_dev = amd_iommu_detach_device,
.map = amd_iommu_map,
@@ -3483,7 +3477,7 @@
void amd_iommu_domain_direct_map(struct iommu_domain *dom)
{
- struct protection_domain *domain = dom->priv;
+ struct protection_domain *domain = to_pdomain(dom);
unsigned long flags;
spin_lock_irqsave(&domain->lock, flags);
@@ -3504,7 +3498,7 @@
int amd_iommu_domain_enable_v2(struct iommu_domain *dom, int pasids)
{
- struct protection_domain *domain = dom->priv;
+ struct protection_domain *domain = to_pdomain(dom);
unsigned long flags;
int levels, ret;
@@ -3616,7 +3610,7 @@
int amd_iommu_flush_page(struct iommu_domain *dom, int pasid,
u64 address)
{
- struct protection_domain *domain = dom->priv;
+ struct protection_domain *domain = to_pdomain(dom);
unsigned long flags;
int ret;
@@ -3638,7 +3632,7 @@
int amd_iommu_flush_tlb(struct iommu_domain *dom, int pasid)
{
- struct protection_domain *domain = dom->priv;
+ struct protection_domain *domain = to_pdomain(dom);
unsigned long flags;
int ret;
@@ -3718,7 +3712,7 @@
int amd_iommu_domain_set_gcr3(struct iommu_domain *dom, int pasid,
unsigned long cr3)
{
- struct protection_domain *domain = dom->priv;
+ struct protection_domain *domain = to_pdomain(dom);
unsigned long flags;
int ret;
@@ -3732,7 +3726,7 @@
int amd_iommu_domain_clear_gcr3(struct iommu_domain *dom, int pasid)
{
- struct protection_domain *domain = dom->priv;
+ struct protection_domain *domain = to_pdomain(dom);
unsigned long flags;
int ret;
@@ -3765,17 +3759,17 @@
struct iommu_domain *amd_iommu_get_v2_domain(struct pci_dev *pdev)
{
- struct protection_domain *domain;
+ struct protection_domain *pdomain;
- domain = get_domain(&pdev->dev);
- if (IS_ERR(domain))
+ pdomain = get_domain(&pdev->dev);
+ if (IS_ERR(pdomain))
return NULL;
/* Only return IOMMUv2 domains */
- if (!(domain->flags & PD_IOMMUV2_MASK))
+ if (!(pdomain->flags & PD_IOMMUV2_MASK))
return NULL;
- return domain->iommu_domain;
+ return &pdomain->domain;
}
EXPORT_SYMBOL(amd_iommu_get_v2_domain);
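The amd_iommu conversion above drops dom->priv and instead embeds the generic struct iommu_domain inside struct protection_domain, recovering the driver structure with to_pdomain()/container_of(). A minimal standalone sketch of that embed-and-convert pattern:

	#include <stdio.h>
	#include <stddef.h>

	/* simplified container_of, illustration only */
	#define container_of(ptr, type, member) \
		((type *)((char *)(ptr) - offsetof(type, member)))

	struct iommu_domain { int mode_dummy; };

	struct protection_domain {
		int id;
		struct iommu_domain domain;	/* generic handle, embedded */
	};

	static struct protection_domain *to_pdomain(struct iommu_domain *dom)
	{
		return container_of(dom, struct protection_domain, domain);
	}

	int main(void)
	{
		struct protection_domain pd = { .id = 42 };
		struct iommu_domain *dom = &pd.domain;	/* what core code passes around */

		printf("%d\n", to_pdomain(dom)->id);	/* back to the driver struct: 42 */
		return 0;
	}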
diff --git a/drivers/iommu/amd_iommu_types.h b/drivers/iommu/amd_iommu_types.h
index c4fffb7..05030e5 100644
--- a/drivers/iommu/amd_iommu_types.h
+++ b/drivers/iommu/amd_iommu_types.h
@@ -282,6 +282,12 @@
#define PTE_PAGE_SIZE(pte) \
(1ULL << (1 + ffz(((pte) | 0xfffULL))))
+/*
+ * Takes a page-table level and returns the default page-size for this level
+ */
+#define PTE_LEVEL_PAGE_SIZE(level) \
+ (1ULL << (12 + (9 * (level))))
+
#define IOMMU_PTE_P (1ULL << 0)
#define IOMMU_PTE_TV (1ULL << 1)
#define IOMMU_PTE_U (1ULL << 59)
@@ -400,6 +406,8 @@
struct protection_domain {
struct list_head list; /* for list of all protection domains */
struct list_head dev_list; /* List of all devices in this domain */
+ struct iommu_domain domain; /* generic domain handle used by
+ iommu core code */
spinlock_t lock; /* mostly used to lock the page table*/
struct mutex api_lock; /* protect page tables in the iommu-api path */
u16 id; /* the domain id written to the device table */
@@ -411,10 +419,7 @@
bool updated; /* complete domain flush required */
unsigned dev_cnt; /* devices assigned to this domain */
unsigned dev_iommu[MAX_IOMMUS]; /* per-IOMMU reference count */
- void *priv; /* private data */
- struct iommu_domain *iommu_domain; /* Pointer to generic
- domain structure */
-
+ void *priv; /* private data */
};
/*
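The amd_iommu.c hunks above route every former domain->priv access through a to_pdomain() helper whose definition falls outside this excerpt; given the struct iommu_domain member added to struct protection_domain here, it is presumably the same container_of() conversion the other drivers in this series spell out. A minimal sketch under that assumption, with the new page-size macro's values worked out in the comment:

/*
 * Sketch only: the conversion implied by the embedded 'domain' member.
 * PTE_LEVEL_PAGE_SIZE(level) = 1ULL << (12 + 9 * level), i.e.
 *   level 0 -> 4 KiB, level 1 -> 2 MiB, level 2 -> 1 GiB.
 */
static struct protection_domain *to_pdomain(struct iommu_domain *dom)
{
        return container_of(dom, struct protection_domain, domain);
}

With that in place, amd_iommu_get_v2_domain() above can simply return &pdomain->domain instead of chasing the old iommu_domain pointer that this hunk removes.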
diff --git a/drivers/iommu/amd_iommu_v2.c b/drivers/iommu/amd_iommu_v2.c
index 6d5a5c4..a1cbba9 100644
--- a/drivers/iommu/amd_iommu_v2.c
+++ b/drivers/iommu/amd_iommu_v2.c
@@ -417,7 +417,7 @@
dev_state = pasid_state->device_state;
run_inv_ctx_cb = !pasid_state->invalid;
- if (run_inv_ctx_cb && pasid_state->device_state->inv_ctx_cb)
+ if (run_inv_ctx_cb && dev_state->inv_ctx_cb)
dev_state->inv_ctx_cb(dev_state->pdev, pasid_state->pasid);
unbind_pasid(pasid_state);
diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c
index a3adde6..9f7e1d3 100644
--- a/drivers/iommu/arm-smmu.c
+++ b/drivers/iommu/arm-smmu.c
@@ -343,6 +343,7 @@
struct arm_smmu_cfg cfg;
enum arm_smmu_domain_stage stage;
struct mutex init_mutex; /* Protects smmu pointer */
+ struct iommu_domain domain;
};
static struct iommu_ops arm_smmu_ops;
@@ -360,6 +361,11 @@
{ 0, NULL},
};
+static struct arm_smmu_domain *to_smmu_domain(struct iommu_domain *dom)
+{
+ return container_of(dom, struct arm_smmu_domain, domain);
+}
+
static void parse_driver_options(struct arm_smmu_device *smmu)
{
int i = 0;
@@ -645,7 +651,7 @@
u32 fsr, far, fsynr, resume;
unsigned long iova;
struct iommu_domain *domain = dev;
- struct arm_smmu_domain *smmu_domain = domain->priv;
+ struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
struct arm_smmu_cfg *cfg = &smmu_domain->cfg;
struct arm_smmu_device *smmu = smmu_domain->smmu;
void __iomem *cb_base;
@@ -730,6 +736,20 @@
stage1 = cfg->cbar != CBAR_TYPE_S2_TRANS;
cb_base = ARM_SMMU_CB_BASE(smmu) + ARM_SMMU_CB(smmu, cfg->cbndx);
+ if (smmu->version > ARM_SMMU_V1) {
+ /*
+ * CBA2R.
+ * *Must* be initialised before CBAR, thanks to a VMID16
+ * architectural oversight affecting some implementations.
+ */
+#ifdef CONFIG_64BIT
+ reg = CBA2R_RW64_64BIT;
+#else
+ reg = CBA2R_RW64_32BIT;
+#endif
+ writel_relaxed(reg, gr1_base + ARM_SMMU_GR1_CBA2R(cfg->cbndx));
+ }
+
/* CBAR */
reg = cfg->cbar;
if (smmu->version == ARM_SMMU_V1)
@@ -747,16 +767,6 @@
}
writel_relaxed(reg, gr1_base + ARM_SMMU_GR1_CBAR(cfg->cbndx));
- if (smmu->version > ARM_SMMU_V1) {
- /* CBA2R */
-#ifdef CONFIG_64BIT
- reg = CBA2R_RW64_64BIT;
-#else
- reg = CBA2R_RW64_32BIT;
-#endif
- writel_relaxed(reg, gr1_base + ARM_SMMU_GR1_CBA2R(cfg->cbndx));
- }
-
/* TTBRs */
if (stage1) {
reg = pgtbl_cfg->arm_lpae_s1_cfg.ttbr[0];
@@ -836,7 +846,7 @@
struct io_pgtable_ops *pgtbl_ops;
struct io_pgtable_cfg pgtbl_cfg;
enum io_pgtable_fmt fmt;
- struct arm_smmu_domain *smmu_domain = domain->priv;
+ struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
struct arm_smmu_cfg *cfg = &smmu_domain->cfg;
mutex_lock(&smmu_domain->init_mutex);
@@ -958,7 +968,7 @@
static void arm_smmu_destroy_domain_context(struct iommu_domain *domain)
{
- struct arm_smmu_domain *smmu_domain = domain->priv;
+ struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
struct arm_smmu_device *smmu = smmu_domain->smmu;
struct arm_smmu_cfg *cfg = &smmu_domain->cfg;
void __iomem *cb_base;
@@ -985,10 +995,12 @@
__arm_smmu_free_bitmap(smmu->context_map, cfg->cbndx);
}
-static int arm_smmu_domain_init(struct iommu_domain *domain)
+static struct iommu_domain *arm_smmu_domain_alloc(unsigned type)
{
struct arm_smmu_domain *smmu_domain;
+ if (type != IOMMU_DOMAIN_UNMANAGED)
+ return NULL;
/*
* Allocate the domain and initialise some of its data structures.
* We can't really do anything meaningful until we've added a
@@ -996,17 +1008,17 @@
*/
smmu_domain = kzalloc(sizeof(*smmu_domain), GFP_KERNEL);
if (!smmu_domain)
- return -ENOMEM;
+ return NULL;
mutex_init(&smmu_domain->init_mutex);
spin_lock_init(&smmu_domain->pgtbl_lock);
- domain->priv = smmu_domain;
- return 0;
+
+ return &smmu_domain->domain;
}
-static void arm_smmu_domain_destroy(struct iommu_domain *domain)
+static void arm_smmu_domain_free(struct iommu_domain *domain)
{
- struct arm_smmu_domain *smmu_domain = domain->priv;
+ struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
/*
* Free the domain resources. We assume that all devices have
@@ -1143,7 +1155,7 @@
static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
{
int ret;
- struct arm_smmu_domain *smmu_domain = domain->priv;
+ struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
struct arm_smmu_device *smmu;
struct arm_smmu_master_cfg *cfg;
@@ -1187,7 +1199,7 @@
static void arm_smmu_detach_dev(struct iommu_domain *domain, struct device *dev)
{
- struct arm_smmu_domain *smmu_domain = domain->priv;
+ struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
struct arm_smmu_master_cfg *cfg;
cfg = find_smmu_master_cfg(dev);
@@ -1203,7 +1215,7 @@
{
int ret;
unsigned long flags;
- struct arm_smmu_domain *smmu_domain = domain->priv;
+ struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
struct io_pgtable_ops *ops = smmu_domain->pgtbl_ops;
if (!ops)
@@ -1220,7 +1232,7 @@
{
size_t ret;
unsigned long flags;
- struct arm_smmu_domain *smmu_domain = domain->priv;
+ struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
struct io_pgtable_ops *ops = smmu_domain->pgtbl_ops;
if (!ops)
@@ -1235,7 +1247,7 @@
static phys_addr_t arm_smmu_iova_to_phys_hard(struct iommu_domain *domain,
dma_addr_t iova)
{
- struct arm_smmu_domain *smmu_domain = domain->priv;
+ struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
struct arm_smmu_device *smmu = smmu_domain->smmu;
struct arm_smmu_cfg *cfg = &smmu_domain->cfg;
struct io_pgtable_ops *ops = smmu_domain->pgtbl_ops;
@@ -1281,7 +1293,7 @@
{
phys_addr_t ret;
unsigned long flags;
- struct arm_smmu_domain *smmu_domain = domain->priv;
+ struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
struct io_pgtable_ops *ops = smmu_domain->pgtbl_ops;
if (!ops)
@@ -1329,61 +1341,83 @@
kfree(data);
}
-static int arm_smmu_add_device(struct device *dev)
+static int arm_smmu_add_pci_device(struct pci_dev *pdev)
{
- struct arm_smmu_device *smmu;
- struct arm_smmu_master_cfg *cfg;
+ int i, ret;
+ u16 sid;
struct iommu_group *group;
- void (*releasefn)(void *) = NULL;
- int ret;
+ struct arm_smmu_master_cfg *cfg;
- smmu = find_smmu_for_device(dev);
- if (!smmu)
- return -ENODEV;
-
- group = iommu_group_alloc();
- if (IS_ERR(group)) {
- dev_err(dev, "Failed to allocate IOMMU group\n");
+ group = iommu_group_get_for_dev(&pdev->dev);
+ if (IS_ERR(group))
return PTR_ERR(group);
- }
- if (dev_is_pci(dev)) {
- struct pci_dev *pdev = to_pci_dev(dev);
-
+ cfg = iommu_group_get_iommudata(group);
+ if (!cfg) {
cfg = kzalloc(sizeof(*cfg), GFP_KERNEL);
if (!cfg) {
ret = -ENOMEM;
goto out_put_group;
}
- cfg->num_streamids = 1;
- /*
- * Assume Stream ID == Requester ID for now.
- * We need a way to describe the ID mappings in FDT.
- */
- pci_for_each_dma_alias(pdev, __arm_smmu_get_pci_sid,
- &cfg->streamids[0]);
- releasefn = __arm_smmu_release_pci_iommudata;
- } else {
- struct arm_smmu_master *master;
-
- master = find_smmu_master(smmu, dev->of_node);
- if (!master) {
- ret = -ENODEV;
- goto out_put_group;
- }
-
- cfg = &master->cfg;
+ iommu_group_set_iommudata(group, cfg,
+ __arm_smmu_release_pci_iommudata);
}
- iommu_group_set_iommudata(group, cfg, releasefn);
- ret = iommu_group_add_device(group, dev);
+ if (cfg->num_streamids >= MAX_MASTER_STREAMIDS) {
+ ret = -ENOSPC;
+ goto out_put_group;
+ }
+ /*
+ * Assume Stream ID == Requester ID for now.
+ * We need a way to describe the ID mappings in FDT.
+ */
+ pci_for_each_dma_alias(pdev, __arm_smmu_get_pci_sid, &sid);
+ for (i = 0; i < cfg->num_streamids; ++i)
+ if (cfg->streamids[i] == sid)
+ break;
+
+ /* Avoid duplicate SIDs, as this can lead to SMR conflicts */
+ if (i == cfg->num_streamids)
+ cfg->streamids[cfg->num_streamids++] = sid;
+
+ return 0;
out_put_group:
iommu_group_put(group);
return ret;
}
+static int arm_smmu_add_platform_device(struct device *dev)
+{
+ struct iommu_group *group;
+ struct arm_smmu_master *master;
+ struct arm_smmu_device *smmu = find_smmu_for_device(dev);
+
+ if (!smmu)
+ return -ENODEV;
+
+ master = find_smmu_master(smmu, dev->of_node);
+ if (!master)
+ return -ENODEV;
+
+ /* No automatic group creation for platform devices */
+ group = iommu_group_alloc();
+ if (IS_ERR(group))
+ return PTR_ERR(group);
+
+ iommu_group_set_iommudata(group, &master->cfg, NULL);
+ return iommu_group_add_device(group, dev);
+}
+
+static int arm_smmu_add_device(struct device *dev)
+{
+ if (dev_is_pci(dev))
+ return arm_smmu_add_pci_device(to_pci_dev(dev));
+
+ return arm_smmu_add_platform_device(dev);
+}
+
static void arm_smmu_remove_device(struct device *dev)
{
iommu_group_remove_device(dev);
@@ -1392,7 +1426,7 @@
static int arm_smmu_domain_get_attr(struct iommu_domain *domain,
enum iommu_attr attr, void *data)
{
- struct arm_smmu_domain *smmu_domain = domain->priv;
+ struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
switch (attr) {
case DOMAIN_ATTR_NESTING:
@@ -1407,7 +1441,7 @@
enum iommu_attr attr, void *data)
{
int ret = 0;
- struct arm_smmu_domain *smmu_domain = domain->priv;
+ struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
mutex_lock(&smmu_domain->init_mutex);
@@ -1435,8 +1469,8 @@
static struct iommu_ops arm_smmu_ops = {
.capable = arm_smmu_capable,
- .domain_init = arm_smmu_domain_init,
- .domain_destroy = arm_smmu_domain_destroy,
+ .domain_alloc = arm_smmu_domain_alloc,
+ .domain_free = arm_smmu_domain_free,
.attach_dev = arm_smmu_attach_dev,
.detach_dev = arm_smmu_detach_dev,
.map = arm_smmu_map,
@@ -1633,6 +1667,15 @@
size = arm_smmu_id_size_to_bits((id >> ID2_OAS_SHIFT) & ID2_OAS_MASK);
smmu->pa_size = size;
+ /*
+ * What the page table walker can address actually depends on which
+ * descriptor format is in use, but since a) we don't know that yet,
+ * and b) it can vary per context bank, this will have to do...
+ */
+ if (dma_set_mask_and_coherent(smmu->dev, DMA_BIT_MASK(size)))
+ dev_warn(smmu->dev,
+ "failed to set DMA mask for table walker\n");
+
if (smmu->version == ARM_SMMU_V1) {
smmu->va_size = smmu->ipa_size;
size = SZ_4K | SZ_2M | SZ_1G;
diff --git a/drivers/iommu/exynos-iommu.c b/drivers/iommu/exynos-iommu.c
index dc14fec4..3e89850 100644
--- a/drivers/iommu/exynos-iommu.c
+++ b/drivers/iommu/exynos-iommu.c
@@ -200,6 +200,7 @@
short *lv2entcnt; /* free lv2 entry counter for each section */
spinlock_t lock; /* lock for this structure */
spinlock_t pgtablelock; /* lock for modifying page table @ pgtable */
+ struct iommu_domain domain; /* generic domain data structure */
};
struct sysmmu_drvdata {
@@ -214,6 +215,11 @@
phys_addr_t pgtable;
};
+static struct exynos_iommu_domain *to_exynos_domain(struct iommu_domain *dom)
+{
+ return container_of(dom, struct exynos_iommu_domain, domain);
+}
+
static bool set_sysmmu_active(struct sysmmu_drvdata *data)
{
/* return true if the System MMU was not active previously
@@ -696,58 +702,60 @@
virt_to_phys(vaend));
}
-static int exynos_iommu_domain_init(struct iommu_domain *domain)
+static struct iommu_domain *exynos_iommu_domain_alloc(unsigned type)
{
- struct exynos_iommu_domain *priv;
+ struct exynos_iommu_domain *exynos_domain;
int i;
- priv = kzalloc(sizeof(*priv), GFP_KERNEL);
- if (!priv)
- return -ENOMEM;
+ if (type != IOMMU_DOMAIN_UNMANAGED)
+ return NULL;
- priv->pgtable = (sysmmu_pte_t *)__get_free_pages(GFP_KERNEL, 2);
- if (!priv->pgtable)
+ exynos_domain = kzalloc(sizeof(*exynos_domain), GFP_KERNEL);
+ if (!exynos_domain)
+ return NULL;
+
+ exynos_domain->pgtable = (sysmmu_pte_t *)__get_free_pages(GFP_KERNEL, 2);
+ if (!exynos_domain->pgtable)
goto err_pgtable;
- priv->lv2entcnt = (short *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, 1);
- if (!priv->lv2entcnt)
+ exynos_domain->lv2entcnt = (short *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, 1);
+ if (!exynos_domain->lv2entcnt)
goto err_counter;
/* Workaround for System MMU v3.3 to prevent caching 1MiB mapping */
for (i = 0; i < NUM_LV1ENTRIES; i += 8) {
- priv->pgtable[i + 0] = ZERO_LV2LINK;
- priv->pgtable[i + 1] = ZERO_LV2LINK;
- priv->pgtable[i + 2] = ZERO_LV2LINK;
- priv->pgtable[i + 3] = ZERO_LV2LINK;
- priv->pgtable[i + 4] = ZERO_LV2LINK;
- priv->pgtable[i + 5] = ZERO_LV2LINK;
- priv->pgtable[i + 6] = ZERO_LV2LINK;
- priv->pgtable[i + 7] = ZERO_LV2LINK;
+ exynos_domain->pgtable[i + 0] = ZERO_LV2LINK;
+ exynos_domain->pgtable[i + 1] = ZERO_LV2LINK;
+ exynos_domain->pgtable[i + 2] = ZERO_LV2LINK;
+ exynos_domain->pgtable[i + 3] = ZERO_LV2LINK;
+ exynos_domain->pgtable[i + 4] = ZERO_LV2LINK;
+ exynos_domain->pgtable[i + 5] = ZERO_LV2LINK;
+ exynos_domain->pgtable[i + 6] = ZERO_LV2LINK;
+ exynos_domain->pgtable[i + 7] = ZERO_LV2LINK;
}
- pgtable_flush(priv->pgtable, priv->pgtable + NUM_LV1ENTRIES);
+ pgtable_flush(exynos_domain->pgtable, exynos_domain->pgtable + NUM_LV1ENTRIES);
- spin_lock_init(&priv->lock);
- spin_lock_init(&priv->pgtablelock);
- INIT_LIST_HEAD(&priv->clients);
+ spin_lock_init(&exynos_domain->lock);
+ spin_lock_init(&exynos_domain->pgtablelock);
+ INIT_LIST_HEAD(&exynos_domain->clients);
- domain->geometry.aperture_start = 0;
- domain->geometry.aperture_end = ~0UL;
- domain->geometry.force_aperture = true;
+ exynos_domain->domain.geometry.aperture_start = 0;
+ exynos_domain->domain.geometry.aperture_end = ~0UL;
+ exynos_domain->domain.geometry.force_aperture = true;
- domain->priv = priv;
- return 0;
+ return &exynos_domain->domain;
err_counter:
- free_pages((unsigned long)priv->pgtable, 2);
+ free_pages((unsigned long)exynos_domain->pgtable, 2);
err_pgtable:
- kfree(priv);
- return -ENOMEM;
+ kfree(exynos_domain);
+ return NULL;
}
-static void exynos_iommu_domain_destroy(struct iommu_domain *domain)
+static void exynos_iommu_domain_free(struct iommu_domain *domain)
{
- struct exynos_iommu_domain *priv = domain->priv;
+ struct exynos_iommu_domain *priv = to_exynos_domain(domain);
struct exynos_iommu_owner *owner;
unsigned long flags;
int i;
@@ -773,15 +781,14 @@
free_pages((unsigned long)priv->pgtable, 2);
free_pages((unsigned long)priv->lv2entcnt, 1);
- kfree(domain->priv);
- domain->priv = NULL;
+ kfree(priv);
}
static int exynos_iommu_attach_device(struct iommu_domain *domain,
struct device *dev)
{
struct exynos_iommu_owner *owner = dev->archdata.iommu;
- struct exynos_iommu_domain *priv = domain->priv;
+ struct exynos_iommu_domain *priv = to_exynos_domain(domain);
phys_addr_t pagetable = virt_to_phys(priv->pgtable);
unsigned long flags;
int ret;
@@ -812,7 +819,7 @@
struct device *dev)
{
struct exynos_iommu_owner *owner;
- struct exynos_iommu_domain *priv = domain->priv;
+ struct exynos_iommu_domain *priv = to_exynos_domain(domain);
phys_addr_t pagetable = virt_to_phys(priv->pgtable);
unsigned long flags;
@@ -988,7 +995,7 @@
static int exynos_iommu_map(struct iommu_domain *domain, unsigned long l_iova,
phys_addr_t paddr, size_t size, int prot)
{
- struct exynos_iommu_domain *priv = domain->priv;
+ struct exynos_iommu_domain *priv = to_exynos_domain(domain);
sysmmu_pte_t *entry;
sysmmu_iova_t iova = (sysmmu_iova_t)l_iova;
unsigned long flags;
@@ -1042,7 +1049,7 @@
static size_t exynos_iommu_unmap(struct iommu_domain *domain,
unsigned long l_iova, size_t size)
{
- struct exynos_iommu_domain *priv = domain->priv;
+ struct exynos_iommu_domain *priv = to_exynos_domain(domain);
sysmmu_iova_t iova = (sysmmu_iova_t)l_iova;
sysmmu_pte_t *ent;
size_t err_pgsize;
@@ -1119,7 +1126,7 @@
static phys_addr_t exynos_iommu_iova_to_phys(struct iommu_domain *domain,
dma_addr_t iova)
{
- struct exynos_iommu_domain *priv = domain->priv;
+ struct exynos_iommu_domain *priv = to_exynos_domain(domain);
sysmmu_pte_t *entry;
unsigned long flags;
phys_addr_t phys = 0;
@@ -1171,8 +1178,8 @@
}
static const struct iommu_ops exynos_iommu_ops = {
- .domain_init = exynos_iommu_domain_init,
- .domain_destroy = exynos_iommu_domain_destroy,
+ .domain_alloc = exynos_iommu_domain_alloc,
+ .domain_free = exynos_iommu_domain_free,
.attach_dev = exynos_iommu_attach_device,
.detach_dev = exynos_iommu_detach_device,
.map = exynos_iommu_map,
diff --git a/drivers/iommu/fsl_pamu_domain.c b/drivers/iommu/fsl_pamu_domain.c
index ceebd28..1d45293 100644
--- a/drivers/iommu/fsl_pamu_domain.c
+++ b/drivers/iommu/fsl_pamu_domain.c
@@ -33,6 +33,11 @@
static struct kmem_cache *iommu_devinfo_cache;
static DEFINE_SPINLOCK(device_domain_lock);
+static struct fsl_dma_domain *to_fsl_dma_domain(struct iommu_domain *dom)
+{
+ return container_of(dom, struct fsl_dma_domain, iommu_domain);
+}
+
static int __init iommu_init_mempool(void)
{
fsl_pamu_domain_cache = kmem_cache_create("fsl_pamu_domain",
@@ -65,7 +70,7 @@
struct dma_window *win_ptr = &dma_domain->win_arr[0];
struct iommu_domain_geometry *geom;
- geom = &dma_domain->iommu_domain->geometry;
+ geom = &dma_domain->iommu_domain.geometry;
if (!win_cnt || !dma_domain->geom_size) {
pr_debug("Number of windows/geometry not configured for the domain\n");
@@ -123,7 +128,7 @@
{
int ret;
struct dma_window *wnd = &dma_domain->win_arr[0];
- phys_addr_t wnd_addr = dma_domain->iommu_domain->geometry.aperture_start;
+ phys_addr_t wnd_addr = dma_domain->iommu_domain.geometry.aperture_start;
unsigned long flags;
spin_lock_irqsave(&iommu_lock, flags);
@@ -172,7 +177,7 @@
} else {
phys_addr_t wnd_addr;
- wnd_addr = dma_domain->iommu_domain->geometry.aperture_start;
+ wnd_addr = dma_domain->iommu_domain.geometry.aperture_start;
ret = pamu_config_ppaace(liodn, wnd_addr,
wnd->size,
@@ -384,7 +389,7 @@
static phys_addr_t fsl_pamu_iova_to_phys(struct iommu_domain *domain,
dma_addr_t iova)
{
- struct fsl_dma_domain *dma_domain = domain->priv;
+ struct fsl_dma_domain *dma_domain = to_fsl_dma_domain(domain);
if (iova < domain->geometry.aperture_start ||
iova > domain->geometry.aperture_end)
@@ -398,11 +403,9 @@
return cap == IOMMU_CAP_CACHE_COHERENCY;
}
-static void fsl_pamu_domain_destroy(struct iommu_domain *domain)
+static void fsl_pamu_domain_free(struct iommu_domain *domain)
{
- struct fsl_dma_domain *dma_domain = domain->priv;
-
- domain->priv = NULL;
+ struct fsl_dma_domain *dma_domain = to_fsl_dma_domain(domain);
/* remove all the devices from the device list */
detach_device(NULL, dma_domain);
@@ -413,23 +416,24 @@
kmem_cache_free(fsl_pamu_domain_cache, dma_domain);
}
-static int fsl_pamu_domain_init(struct iommu_domain *domain)
+static struct iommu_domain *fsl_pamu_domain_alloc(unsigned type)
{
struct fsl_dma_domain *dma_domain;
+ if (type != IOMMU_DOMAIN_UNMANAGED)
+ return NULL;
+
dma_domain = iommu_alloc_dma_domain();
if (!dma_domain) {
pr_debug("dma_domain allocation failed\n");
- return -ENOMEM;
+ return NULL;
}
- domain->priv = dma_domain;
- dma_domain->iommu_domain = domain;
/* default geometry 64 GB, i.e. maximum system address */
- domain->geometry.aperture_start = 0;
- domain->geometry.aperture_end = (1ULL << 36) - 1;
- domain->geometry.force_aperture = true;
+ dma_domain->iommu_domain.geometry.aperture_start = 0;
+ dma_domain->iommu_domain.geometry.aperture_end = (1ULL << 36) - 1;
+ dma_domain->iommu_domain.geometry.force_aperture = true;
- return 0;
+ return &dma_domain->iommu_domain;
}
/* Configure geometry settings for all LIODNs associated with domain */
@@ -499,7 +503,7 @@
static void fsl_pamu_window_disable(struct iommu_domain *domain, u32 wnd_nr)
{
- struct fsl_dma_domain *dma_domain = domain->priv;
+ struct fsl_dma_domain *dma_domain = to_fsl_dma_domain(domain);
unsigned long flags;
int ret;
@@ -530,7 +534,7 @@
static int fsl_pamu_window_enable(struct iommu_domain *domain, u32 wnd_nr,
phys_addr_t paddr, u64 size, int prot)
{
- struct fsl_dma_domain *dma_domain = domain->priv;
+ struct fsl_dma_domain *dma_domain = to_fsl_dma_domain(domain);
struct dma_window *wnd;
int pamu_prot = 0;
int ret;
@@ -607,7 +611,7 @@
int num)
{
unsigned long flags;
- struct iommu_domain *domain = dma_domain->iommu_domain;
+ struct iommu_domain *domain = &dma_domain->iommu_domain;
int ret = 0;
int i;
@@ -653,7 +657,7 @@
static int fsl_pamu_attach_device(struct iommu_domain *domain,
struct device *dev)
{
- struct fsl_dma_domain *dma_domain = domain->priv;
+ struct fsl_dma_domain *dma_domain = to_fsl_dma_domain(domain);
const u32 *liodn;
u32 liodn_cnt;
int len, ret = 0;
@@ -691,7 +695,7 @@
static void fsl_pamu_detach_device(struct iommu_domain *domain,
struct device *dev)
{
- struct fsl_dma_domain *dma_domain = domain->priv;
+ struct fsl_dma_domain *dma_domain = to_fsl_dma_domain(domain);
const u32 *prop;
int len;
struct pci_dev *pdev = NULL;
@@ -723,7 +727,7 @@
static int configure_domain_geometry(struct iommu_domain *domain, void *data)
{
struct iommu_domain_geometry *geom_attr = data;
- struct fsl_dma_domain *dma_domain = domain->priv;
+ struct fsl_dma_domain *dma_domain = to_fsl_dma_domain(domain);
dma_addr_t geom_size;
unsigned long flags;
@@ -813,7 +817,7 @@
static int fsl_pamu_set_domain_attr(struct iommu_domain *domain,
enum iommu_attr attr_type, void *data)
{
- struct fsl_dma_domain *dma_domain = domain->priv;
+ struct fsl_dma_domain *dma_domain = to_fsl_dma_domain(domain);
int ret = 0;
switch (attr_type) {
@@ -838,7 +842,7 @@
static int fsl_pamu_get_domain_attr(struct iommu_domain *domain,
enum iommu_attr attr_type, void *data)
{
- struct fsl_dma_domain *dma_domain = domain->priv;
+ struct fsl_dma_domain *dma_domain = to_fsl_dma_domain(domain);
int ret = 0;
switch (attr_type) {
@@ -999,7 +1003,7 @@
static int fsl_pamu_set_windows(struct iommu_domain *domain, u32 w_count)
{
- struct fsl_dma_domain *dma_domain = domain->priv;
+ struct fsl_dma_domain *dma_domain = to_fsl_dma_domain(domain);
unsigned long flags;
int ret;
@@ -1048,15 +1052,15 @@
static u32 fsl_pamu_get_windows(struct iommu_domain *domain)
{
- struct fsl_dma_domain *dma_domain = domain->priv;
+ struct fsl_dma_domain *dma_domain = to_fsl_dma_domain(domain);
return dma_domain->win_cnt;
}
static const struct iommu_ops fsl_pamu_ops = {
.capable = fsl_pamu_capable,
- .domain_init = fsl_pamu_domain_init,
- .domain_destroy = fsl_pamu_domain_destroy,
+ .domain_alloc = fsl_pamu_domain_alloc,
+ .domain_free = fsl_pamu_domain_free,
.attach_dev = fsl_pamu_attach_device,
.detach_dev = fsl_pamu_detach_device,
.domain_window_enable = fsl_pamu_window_enable,
diff --git a/drivers/iommu/fsl_pamu_domain.h b/drivers/iommu/fsl_pamu_domain.h
index c90293f..f2b0f74 100644
--- a/drivers/iommu/fsl_pamu_domain.h
+++ b/drivers/iommu/fsl_pamu_domain.h
@@ -71,7 +71,7 @@
u32 stash_id;
struct pamu_stash_attribute dma_stash;
u32 snoop_id;
- struct iommu_domain *iommu_domain;
+ struct iommu_domain iommu_domain;
spinlock_t domain_lock;
};
diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 4fc1f8a..a35927c 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -339,7 +339,7 @@
DECLARE_BITMAP(iommu_bmp, DMAR_UNITS_SUPPORTED);
/* bitmap of iommus this domain uses*/
- struct list_head devices; /* all devices' list */
+ struct list_head devices; /* all devices' list */
struct iova_domain iovad; /* iova's that belong to this domain */
struct dma_pte *pgd; /* virtual address */
@@ -358,6 +358,9 @@
2 == 1GiB, 3 == 512GiB, 4 == 1TiB */
spinlock_t iommu_lock; /* protect iommu set in domain */
u64 max_addr; /* maximum mapped address */
+
+ struct iommu_domain domain; /* generic domain data structure for
+ iommu core */
};
/* PCI domain-device relationship */
@@ -449,6 +452,12 @@
static const struct iommu_ops intel_iommu_ops;
+/* Convert generic 'struct iommu_domain' to private struct dmar_domain */
+static struct dmar_domain *to_dmar_domain(struct iommu_domain *dom)
+{
+ return container_of(dom, struct dmar_domain, domain);
+}
+
static int __init intel_iommu_setup(char *str)
{
if (!str)
@@ -595,12 +604,13 @@
{
struct dmar_drhd_unit *drhd;
struct intel_iommu *iommu;
- int i, found = 0;
+ bool found = false;
+ int i;
domain->iommu_coherency = 1;
for_each_set_bit(i, domain->iommu_bmp, g_num_of_iommus) {
- found = 1;
+ found = true;
if (!ecap_coherent(g_iommus[i]->ecap)) {
domain->iommu_coherency = 0;
break;
@@ -1267,7 +1277,7 @@
iommu_support_dev_iotlb (struct dmar_domain *domain, struct intel_iommu *iommu,
u8 bus, u8 devfn)
{
- int found = 0;
+ bool found = false;
unsigned long flags;
struct device_domain_info *info;
struct pci_dev *pdev;
@@ -1282,7 +1292,7 @@
list_for_each_entry(info, &domain->devices, link)
if (info->iommu == iommu && info->bus == bus &&
info->devfn == devfn) {
- found = 1;
+ found = true;
break;
}
spin_unlock_irqrestore(&device_domain_lock, flags);
@@ -4269,7 +4279,7 @@
struct device_domain_info *info, *tmp;
struct intel_iommu *iommu;
unsigned long flags;
- int found = 0;
+ bool found = false;
u8 bus, devfn;
iommu = device_to_iommu(dev, &bus, &devfn);
@@ -4301,7 +4311,7 @@
* update iommu count and coherency
*/
if (info->iommu == iommu)
- found = 1;
+ found = true;
}
spin_unlock_irqrestore(&device_domain_lock, flags);
@@ -4339,44 +4349,45 @@
return 0;
}
-static int intel_iommu_domain_init(struct iommu_domain *domain)
+static struct iommu_domain *intel_iommu_domain_alloc(unsigned type)
{
struct dmar_domain *dmar_domain;
+ struct iommu_domain *domain;
+
+ if (type != IOMMU_DOMAIN_UNMANAGED)
+ return NULL;
dmar_domain = alloc_domain(DOMAIN_FLAG_VIRTUAL_MACHINE);
if (!dmar_domain) {
printk(KERN_ERR
"intel_iommu_domain_init: dmar_domain == NULL\n");
- return -ENOMEM;
+ return NULL;
}
if (md_domain_init(dmar_domain, DEFAULT_DOMAIN_ADDRESS_WIDTH)) {
printk(KERN_ERR
"intel_iommu_domain_init() failed\n");
domain_exit(dmar_domain);
- return -ENOMEM;
+ return NULL;
}
domain_update_iommu_cap(dmar_domain);
- domain->priv = dmar_domain;
+ domain = &dmar_domain->domain;
domain->geometry.aperture_start = 0;
domain->geometry.aperture_end = __DOMAIN_MAX_ADDR(dmar_domain->gaw);
domain->geometry.force_aperture = true;
- return 0;
+ return domain;
}
-static void intel_iommu_domain_destroy(struct iommu_domain *domain)
+static void intel_iommu_domain_free(struct iommu_domain *domain)
{
- struct dmar_domain *dmar_domain = domain->priv;
-
- domain->priv = NULL;
- domain_exit(dmar_domain);
+ domain_exit(to_dmar_domain(domain));
}
static int intel_iommu_attach_device(struct iommu_domain *domain,
struct device *dev)
{
- struct dmar_domain *dmar_domain = domain->priv;
+ struct dmar_domain *dmar_domain = to_dmar_domain(domain);
struct intel_iommu *iommu;
int addr_width;
u8 bus, devfn;
@@ -4441,16 +4452,14 @@
static void intel_iommu_detach_device(struct iommu_domain *domain,
struct device *dev)
{
- struct dmar_domain *dmar_domain = domain->priv;
-
- domain_remove_one_dev_info(dmar_domain, dev);
+ domain_remove_one_dev_info(to_dmar_domain(domain), dev);
}
static int intel_iommu_map(struct iommu_domain *domain,
unsigned long iova, phys_addr_t hpa,
size_t size, int iommu_prot)
{
- struct dmar_domain *dmar_domain = domain->priv;
+ struct dmar_domain *dmar_domain = to_dmar_domain(domain);
u64 max_addr;
int prot = 0;
int ret;
@@ -4487,7 +4496,7 @@
static size_t intel_iommu_unmap(struct iommu_domain *domain,
unsigned long iova, size_t size)
{
- struct dmar_domain *dmar_domain = domain->priv;
+ struct dmar_domain *dmar_domain = to_dmar_domain(domain);
struct page *freelist = NULL;
struct intel_iommu *iommu;
unsigned long start_pfn, last_pfn;
@@ -4535,7 +4544,7 @@
static phys_addr_t intel_iommu_iova_to_phys(struct iommu_domain *domain,
dma_addr_t iova)
{
- struct dmar_domain *dmar_domain = domain->priv;
+ struct dmar_domain *dmar_domain = to_dmar_domain(domain);
struct dma_pte *pte;
int level = 0;
u64 phys = 0;
@@ -4594,8 +4603,8 @@
static const struct iommu_ops intel_iommu_ops = {
.capable = intel_iommu_capable,
- .domain_init = intel_iommu_domain_init,
- .domain_destroy = intel_iommu_domain_destroy,
+ .domain_alloc = intel_iommu_domain_alloc,
+ .domain_free = intel_iommu_domain_free,
.attach_dev = intel_iommu_attach_device,
.detach_dev = intel_iommu_detach_device,
.map = intel_iommu_map,
diff --git a/drivers/iommu/intel_irq_remapping.c b/drivers/iommu/intel_irq_remapping.c
index 14de1ab..6c25b3c 100644
--- a/drivers/iommu/intel_irq_remapping.c
+++ b/drivers/iommu/intel_irq_remapping.c
@@ -631,7 +631,7 @@
{
struct dmar_drhd_unit *drhd;
struct intel_iommu *iommu;
- int setup = 0;
+ bool setup = false;
int eim = 0;
if (x2apic_supported()) {
@@ -697,7 +697,7 @@
*/
for_each_iommu(iommu, drhd) {
iommu_set_irq_remapping(iommu, eim);
- setup = 1;
+ setup = true;
}
if (!setup)
@@ -856,7 +856,7 @@
{
struct dmar_drhd_unit *drhd;
struct intel_iommu *iommu;
- int ir_supported = 0;
+ bool ir_supported = false;
int ioapic_idx;
for_each_iommu(iommu, drhd)
@@ -864,7 +864,7 @@
if (ir_parse_ioapic_hpet_scope(drhd->hdr, iommu))
return -1;
- ir_supported = 1;
+ ir_supported = true;
}
if (!ir_supported)
@@ -917,7 +917,7 @@
static int reenable_irq_remapping(int eim)
{
struct dmar_drhd_unit *drhd;
- int setup = 0;
+ bool setup = false;
struct intel_iommu *iommu = NULL;
for_each_iommu(iommu, drhd)
@@ -933,7 +933,7 @@
/* Set up interrupt remapping for iommu.*/
iommu_set_irq_remapping(iommu, eim);
- setup = 1;
+ setup = true;
}
if (!setup)
diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
index b610a8d..4e46021 100644
--- a/drivers/iommu/io-pgtable-arm.c
+++ b/drivers/iommu/io-pgtable-arm.c
@@ -116,6 +116,8 @@
#define ARM_32_LPAE_TCR_EAE (1 << 31)
#define ARM_64_LPAE_S2_TCR_RES1 (1 << 31)
+#define ARM_LPAE_TCR_EPD1 (1 << 23)
+
#define ARM_LPAE_TCR_TG0_4K (0 << 14)
#define ARM_LPAE_TCR_TG0_64K (1 << 14)
#define ARM_LPAE_TCR_TG0_16K (2 << 14)
@@ -621,6 +623,9 @@
}
reg |= (64ULL - cfg->ias) << ARM_LPAE_TCR_T0SZ_SHIFT;
+
+ /* Disable speculative walks through TTBR1 */
+ reg |= ARM_LPAE_TCR_EPD1;
cfg->arm_lpae_s1_cfg.tcr = reg;
/* MAIRs */
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 72e683d..d4f527e 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -901,36 +901,24 @@
struct iommu_domain *iommu_domain_alloc(struct bus_type *bus)
{
struct iommu_domain *domain;
- int ret;
if (bus == NULL || bus->iommu_ops == NULL)
return NULL;
- domain = kzalloc(sizeof(*domain), GFP_KERNEL);
+ domain = bus->iommu_ops->domain_alloc(IOMMU_DOMAIN_UNMANAGED);
if (!domain)
return NULL;
- domain->ops = bus->iommu_ops;
-
- ret = domain->ops->domain_init(domain);
- if (ret)
- goto out_free;
+ domain->ops = bus->iommu_ops;
+ domain->type = IOMMU_DOMAIN_UNMANAGED;
return domain;
-
-out_free:
- kfree(domain);
-
- return NULL;
}
EXPORT_SYMBOL_GPL(iommu_domain_alloc);
void iommu_domain_free(struct iommu_domain *domain)
{
- if (likely(domain->ops->domain_destroy != NULL))
- domain->ops->domain_destroy(domain);
-
- kfree(domain);
+ domain->ops->domain_free(domain);
}
EXPORT_SYMBOL_GPL(iommu_domain_free);
@@ -1049,6 +1037,9 @@
domain->ops->pgsize_bitmap == 0UL))
return -ENODEV;
+ if (unlikely(!(domain->type & __IOMMU_DOMAIN_PAGING)))
+ return -EINVAL;
+
/* find out the minimum page size supported */
min_pagesz = 1 << __ffs(domain->ops->pgsize_bitmap);
@@ -1100,6 +1091,9 @@
domain->ops->pgsize_bitmap == 0UL))
return -ENODEV;
+ if (unlikely(!(domain->type & __IOMMU_DOMAIN_PAGING)))
+ return -EINVAL;
+
/* find out the minimum page size supported */
min_pagesz = 1 << __ffs(domain->ops->pgsize_bitmap);
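With the core now delegating allocation as above, each driver supplies the domain_alloc/domain_free pair seen throughout this series: allocate the private structure, initialise it, and hand back the embedded generic handle. A minimal driver-side sketch of that contract follows; struct foo_domain and its fields are illustrative only, not taken from any of the drivers in this diff.

#include <linux/iommu.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

/* Illustrative private domain with the embedded generic handle. */
struct foo_domain {
        spinlock_t              lock;
        struct iommu_domain     domain;         /* handed back to the core */
};

static struct foo_domain *to_foo_domain(struct iommu_domain *dom)
{
        return container_of(dom, struct foo_domain, domain);
}

static struct iommu_domain *foo_domain_alloc(unsigned type)
{
        struct foo_domain *foo;

        if (type != IOMMU_DOMAIN_UNMANAGED)
                return NULL;

        foo = kzalloc(sizeof(*foo), GFP_KERNEL);
        if (!foo)
                return NULL;

        spin_lock_init(&foo->lock);
        return &foo->domain;    /* core fills in domain->ops and domain->type */
}

static void foo_domain_free(struct iommu_domain *dom)
{
        kfree(to_foo_domain(dom));
}

Because the core sets domain->ops and domain->type after domain_alloc() returns, and only ever passes the embedded handle back, the old domain->priv indirection becomes unnecessary.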
diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c
index bc39bdf..1a67c53 100644
--- a/drivers/iommu/ipmmu-vmsa.c
+++ b/drivers/iommu/ipmmu-vmsa.c
@@ -38,7 +38,7 @@
struct ipmmu_vmsa_domain {
struct ipmmu_vmsa_device *mmu;
- struct iommu_domain *io_domain;
+ struct iommu_domain io_domain;
struct io_pgtable_cfg cfg;
struct io_pgtable_ops *iop;
@@ -56,6 +56,11 @@
static DEFINE_SPINLOCK(ipmmu_devices_lock);
static LIST_HEAD(ipmmu_devices);
+static struct ipmmu_vmsa_domain *to_vmsa_domain(struct iommu_domain *dom)
+{
+ return container_of(dom, struct ipmmu_vmsa_domain, io_domain);
+}
+
#define TLB_LOOP_TIMEOUT 100 /* 100us */
/* -----------------------------------------------------------------------------
@@ -428,7 +433,7 @@
* TODO: We need to look up the faulty device based on the I/O VA. Use
* the IOMMU device for now.
*/
- if (!report_iommu_fault(domain->io_domain, mmu->dev, iova, 0))
+ if (!report_iommu_fault(&domain->io_domain, mmu->dev, iova, 0))
return IRQ_HANDLED;
dev_err_ratelimited(mmu->dev,
@@ -448,7 +453,7 @@
return IRQ_NONE;
io_domain = mmu->mapping->domain;
- domain = io_domain->priv;
+ domain = to_vmsa_domain(io_domain);
return ipmmu_domain_irq(domain);
}
@@ -457,25 +462,25 @@
* IOMMU Operations
*/
-static int ipmmu_domain_init(struct iommu_domain *io_domain)
+static struct iommu_domain *ipmmu_domain_alloc(unsigned type)
{
struct ipmmu_vmsa_domain *domain;
+ if (type != IOMMU_DOMAIN_UNMANAGED)
+ return NULL;
+
domain = kzalloc(sizeof(*domain), GFP_KERNEL);
if (!domain)
- return -ENOMEM;
+ return NULL;
spin_lock_init(&domain->lock);
- io_domain->priv = domain;
- domain->io_domain = io_domain;
-
- return 0;
+ return &domain->io_domain;
}
-static void ipmmu_domain_destroy(struct iommu_domain *io_domain)
+static void ipmmu_domain_free(struct iommu_domain *io_domain)
{
- struct ipmmu_vmsa_domain *domain = io_domain->priv;
+ struct ipmmu_vmsa_domain *domain = to_vmsa_domain(io_domain);
/*
* Free the domain resources. We assume that all devices have already
@@ -491,7 +496,7 @@
{
struct ipmmu_vmsa_archdata *archdata = dev->archdata.iommu;
struct ipmmu_vmsa_device *mmu = archdata->mmu;
- struct ipmmu_vmsa_domain *domain = io_domain->priv;
+ struct ipmmu_vmsa_domain *domain = to_vmsa_domain(io_domain);
unsigned long flags;
unsigned int i;
int ret = 0;
@@ -532,7 +537,7 @@
struct device *dev)
{
struct ipmmu_vmsa_archdata *archdata = dev->archdata.iommu;
- struct ipmmu_vmsa_domain *domain = io_domain->priv;
+ struct ipmmu_vmsa_domain *domain = to_vmsa_domain(io_domain);
unsigned int i;
for (i = 0; i < archdata->num_utlbs; ++i)
@@ -546,7 +551,7 @@
static int ipmmu_map(struct iommu_domain *io_domain, unsigned long iova,
phys_addr_t paddr, size_t size, int prot)
{
- struct ipmmu_vmsa_domain *domain = io_domain->priv;
+ struct ipmmu_vmsa_domain *domain = to_vmsa_domain(io_domain);
if (!domain)
return -ENODEV;
@@ -557,7 +562,7 @@
static size_t ipmmu_unmap(struct iommu_domain *io_domain, unsigned long iova,
size_t size)
{
- struct ipmmu_vmsa_domain *domain = io_domain->priv;
+ struct ipmmu_vmsa_domain *domain = to_vmsa_domain(io_domain);
return domain->iop->unmap(domain->iop, iova, size);
}
@@ -565,7 +570,7 @@
static phys_addr_t ipmmu_iova_to_phys(struct iommu_domain *io_domain,
dma_addr_t iova)
{
- struct ipmmu_vmsa_domain *domain = io_domain->priv;
+ struct ipmmu_vmsa_domain *domain = to_vmsa_domain(io_domain);
/* TODO: Is locking needed ? */
@@ -737,8 +742,8 @@
}
static const struct iommu_ops ipmmu_ops = {
- .domain_init = ipmmu_domain_init,
- .domain_destroy = ipmmu_domain_destroy,
+ .domain_alloc = ipmmu_domain_alloc,
+ .domain_free = ipmmu_domain_free,
.attach_dev = ipmmu_attach_device,
.detach_dev = ipmmu_detach_device,
.map = ipmmu_map,
diff --git a/drivers/iommu/msm_iommu.c b/drivers/iommu/msm_iommu.c
index e1b0537..15a2063 100644
--- a/drivers/iommu/msm_iommu.c
+++ b/drivers/iommu/msm_iommu.c
@@ -52,8 +52,14 @@
struct msm_priv {
unsigned long *pgtable;
struct list_head list_attached;
+ struct iommu_domain domain;
};
+static struct msm_priv *to_msm_priv(struct iommu_domain *dom)
+{
+ return container_of(dom, struct msm_priv, domain);
+}
+
static int __enable_clocks(struct msm_iommu_drvdata *drvdata)
{
int ret;
@@ -79,7 +85,7 @@
static int __flush_iotlb(struct iommu_domain *domain)
{
- struct msm_priv *priv = domain->priv;
+ struct msm_priv *priv = to_msm_priv(domain);
struct msm_iommu_drvdata *iommu_drvdata;
struct msm_iommu_ctx_drvdata *ctx_drvdata;
int ret = 0;
@@ -209,10 +215,14 @@
SET_M(base, ctx, 1);
}
-static int msm_iommu_domain_init(struct iommu_domain *domain)
+static struct iommu_domain *msm_iommu_domain_alloc(unsigned type)
{
- struct msm_priv *priv = kzalloc(sizeof(*priv), GFP_KERNEL);
+ struct msm_priv *priv;
+ if (type != IOMMU_DOMAIN_UNMANAGED)
+ return NULL;
+
+ priv = kzalloc(sizeof(*priv), GFP_KERNEL);
if (!priv)
goto fail_nomem;
@@ -224,20 +234,19 @@
goto fail_nomem;
memset(priv->pgtable, 0, SZ_16K);
- domain->priv = priv;
- domain->geometry.aperture_start = 0;
- domain->geometry.aperture_end = (1ULL << 32) - 1;
- domain->geometry.force_aperture = true;
+ priv->domain.geometry.aperture_start = 0;
+ priv->domain.geometry.aperture_end = (1ULL << 32) - 1;
+ priv->domain.geometry.force_aperture = true;
- return 0;
+ return &priv->domain;
fail_nomem:
kfree(priv);
- return -ENOMEM;
+ return NULL;
}
-static void msm_iommu_domain_destroy(struct iommu_domain *domain)
+static void msm_iommu_domain_free(struct iommu_domain *domain)
{
struct msm_priv *priv;
unsigned long flags;
@@ -245,20 +254,17 @@
int i;
spin_lock_irqsave(&msm_iommu_lock, flags);
- priv = domain->priv;
- domain->priv = NULL;
+ priv = to_msm_priv(domain);
- if (priv) {
- fl_table = priv->pgtable;
+ fl_table = priv->pgtable;
- for (i = 0; i < NUM_FL_PTE; i++)
- if ((fl_table[i] & 0x03) == FL_TYPE_TABLE)
- free_page((unsigned long) __va(((fl_table[i]) &
- FL_BASE_MASK)));
+ for (i = 0; i < NUM_FL_PTE; i++)
+ if ((fl_table[i] & 0x03) == FL_TYPE_TABLE)
+ free_page((unsigned long) __va(((fl_table[i]) &
+ FL_BASE_MASK)));
- free_pages((unsigned long)priv->pgtable, get_order(SZ_16K));
- priv->pgtable = NULL;
- }
+ free_pages((unsigned long)priv->pgtable, get_order(SZ_16K));
+ priv->pgtable = NULL;
kfree(priv);
spin_unlock_irqrestore(&msm_iommu_lock, flags);
@@ -276,9 +282,9 @@
spin_lock_irqsave(&msm_iommu_lock, flags);
- priv = domain->priv;
+ priv = to_msm_priv(domain);
- if (!priv || !dev) {
+ if (!dev) {
ret = -EINVAL;
goto fail;
}
@@ -330,9 +336,9 @@
int ret;
spin_lock_irqsave(&msm_iommu_lock, flags);
- priv = domain->priv;
+ priv = to_msm_priv(domain);
- if (!priv || !dev)
+ if (!dev)
goto fail;
iommu_drvdata = dev_get_drvdata(dev->parent);
@@ -382,11 +388,7 @@
goto fail;
}
- priv = domain->priv;
- if (!priv) {
- ret = -EINVAL;
- goto fail;
- }
+ priv = to_msm_priv(domain);
fl_table = priv->pgtable;
@@ -484,10 +486,7 @@
spin_lock_irqsave(&msm_iommu_lock, flags);
- priv = domain->priv;
-
- if (!priv)
- goto fail;
+ priv = to_msm_priv(domain);
fl_table = priv->pgtable;
@@ -566,7 +565,7 @@
spin_lock_irqsave(&msm_iommu_lock, flags);
- priv = domain->priv;
+ priv = to_msm_priv(domain);
if (list_empty(&priv->list_attached))
goto fail;
@@ -674,8 +673,8 @@
static const struct iommu_ops msm_iommu_ops = {
.capable = msm_iommu_capable,
- .domain_init = msm_iommu_domain_init,
- .domain_destroy = msm_iommu_domain_destroy,
+ .domain_alloc = msm_iommu_domain_alloc,
+ .domain_free = msm_iommu_domain_free,
.attach_dev = msm_iommu_attach_dev,
.detach_dev = msm_iommu_detach_dev,
.map = msm_iommu_map,
diff --git a/drivers/iommu/omap-iommu.c b/drivers/iommu/omap-iommu.c
index a4ba851..a22c33d 100644
--- a/drivers/iommu/omap-iommu.c
+++ b/drivers/iommu/omap-iommu.c
@@ -59,6 +59,7 @@
struct omap_iommu *iommu_dev;
struct device *dev;
spinlock_t lock;
+ struct iommu_domain domain;
};
#define MMU_LOCK_BASE_SHIFT 10
@@ -80,6 +81,15 @@
static struct kmem_cache *iopte_cachep;
/**
+ * to_omap_domain - Get struct omap_iommu_domain from generic iommu_domain
+ * @dom: generic iommu domain handle
+ **/
+static struct omap_iommu_domain *to_omap_domain(struct iommu_domain *dom)
+{
+ return container_of(dom, struct omap_iommu_domain, domain);
+}
+
+/**
* omap_iommu_save_ctx - Save registers for pm off-mode support
* @dev: client device
**/
@@ -901,7 +911,7 @@
u32 *iopgd, *iopte;
struct omap_iommu *obj = data;
struct iommu_domain *domain = obj->domain;
- struct omap_iommu_domain *omap_domain = domain->priv;
+ struct omap_iommu_domain *omap_domain = to_omap_domain(domain);
if (!omap_domain->iommu_dev)
return IRQ_NONE;
@@ -1113,7 +1123,7 @@
static int omap_iommu_map(struct iommu_domain *domain, unsigned long da,
phys_addr_t pa, size_t bytes, int prot)
{
- struct omap_iommu_domain *omap_domain = domain->priv;
+ struct omap_iommu_domain *omap_domain = to_omap_domain(domain);
struct omap_iommu *oiommu = omap_domain->iommu_dev;
struct device *dev = oiommu->dev;
struct iotlb_entry e;
@@ -1140,7 +1150,7 @@
static size_t omap_iommu_unmap(struct iommu_domain *domain, unsigned long da,
size_t size)
{
- struct omap_iommu_domain *omap_domain = domain->priv;
+ struct omap_iommu_domain *omap_domain = to_omap_domain(domain);
struct omap_iommu *oiommu = omap_domain->iommu_dev;
struct device *dev = oiommu->dev;
@@ -1152,7 +1162,7 @@
static int
omap_iommu_attach_dev(struct iommu_domain *domain, struct device *dev)
{
- struct omap_iommu_domain *omap_domain = domain->priv;
+ struct omap_iommu_domain *omap_domain = to_omap_domain(domain);
struct omap_iommu *oiommu;
struct omap_iommu_arch_data *arch_data = dev->archdata.iommu;
int ret = 0;
@@ -1212,17 +1222,20 @@
static void omap_iommu_detach_dev(struct iommu_domain *domain,
struct device *dev)
{
- struct omap_iommu_domain *omap_domain = domain->priv;
+ struct omap_iommu_domain *omap_domain = to_omap_domain(domain);
spin_lock(&omap_domain->lock);
_omap_iommu_detach_dev(omap_domain, dev);
spin_unlock(&omap_domain->lock);
}
-static int omap_iommu_domain_init(struct iommu_domain *domain)
+static struct iommu_domain *omap_iommu_domain_alloc(unsigned type)
{
struct omap_iommu_domain *omap_domain;
+ if (type != IOMMU_DOMAIN_UNMANAGED)
+ return NULL;
+
omap_domain = kzalloc(sizeof(*omap_domain), GFP_KERNEL);
if (!omap_domain) {
pr_err("kzalloc failed\n");
@@ -1244,25 +1257,21 @@
clean_dcache_area(omap_domain->pgtable, IOPGD_TABLE_SIZE);
spin_lock_init(&omap_domain->lock);
- domain->priv = omap_domain;
+ omap_domain->domain.geometry.aperture_start = 0;
+ omap_domain->domain.geometry.aperture_end = (1ULL << 32) - 1;
+ omap_domain->domain.geometry.force_aperture = true;
- domain->geometry.aperture_start = 0;
- domain->geometry.aperture_end = (1ULL << 32) - 1;
- domain->geometry.force_aperture = true;
-
- return 0;
+ return &omap_domain->domain;
fail_nomem:
kfree(omap_domain);
out:
- return -ENOMEM;
+ return NULL;
}
-static void omap_iommu_domain_destroy(struct iommu_domain *domain)
+static void omap_iommu_domain_free(struct iommu_domain *domain)
{
- struct omap_iommu_domain *omap_domain = domain->priv;
-
- domain->priv = NULL;
+ struct omap_iommu_domain *omap_domain = to_omap_domain(domain);
/*
* An iommu device is still attached
@@ -1278,7 +1287,7 @@
static phys_addr_t omap_iommu_iova_to_phys(struct iommu_domain *domain,
dma_addr_t da)
{
- struct omap_iommu_domain *omap_domain = domain->priv;
+ struct omap_iommu_domain *omap_domain = to_omap_domain(domain);
struct omap_iommu *oiommu = omap_domain->iommu_dev;
struct device *dev = oiommu->dev;
u32 *pgd, *pte;
@@ -1358,8 +1367,8 @@
}
static const struct iommu_ops omap_iommu_ops = {
- .domain_init = omap_iommu_domain_init,
- .domain_destroy = omap_iommu_domain_destroy,
+ .domain_alloc = omap_iommu_domain_alloc,
+ .domain_free = omap_iommu_domain_free,
.attach_dev = omap_iommu_attach_dev,
.detach_dev = omap_iommu_detach_dev,
.map = omap_iommu_map,
diff --git a/drivers/iommu/rockchip-iommu.c b/drivers/iommu/rockchip-iommu.c
index 9f74fdd..4015560 100644
--- a/drivers/iommu/rockchip-iommu.c
+++ b/drivers/iommu/rockchip-iommu.c
@@ -80,6 +80,8 @@
u32 *dt; /* page directory table */
spinlock_t iommus_lock; /* lock for iommus list */
spinlock_t dt_lock; /* lock for modifying page directory table */
+
+ struct iommu_domain domain;
};
struct rk_iommu {
@@ -100,6 +102,11 @@
outer_flush_range(pa_start, pa_end);
}
+static struct rk_iommu_domain *to_rk_domain(struct iommu_domain *dom)
+{
+ return container_of(dom, struct rk_iommu_domain, domain);
+}
+
/**
* Inspired by _wait_for in intel_drv.h
* This is NOT safe for use in interrupt context.
@@ -503,7 +510,7 @@
static phys_addr_t rk_iommu_iova_to_phys(struct iommu_domain *domain,
dma_addr_t iova)
{
- struct rk_iommu_domain *rk_domain = domain->priv;
+ struct rk_iommu_domain *rk_domain = to_rk_domain(domain);
unsigned long flags;
phys_addr_t pt_phys, phys = 0;
u32 dte, pte;
@@ -639,7 +646,7 @@
static int rk_iommu_map(struct iommu_domain *domain, unsigned long _iova,
phys_addr_t paddr, size_t size, int prot)
{
- struct rk_iommu_domain *rk_domain = domain->priv;
+ struct rk_iommu_domain *rk_domain = to_rk_domain(domain);
unsigned long flags;
dma_addr_t iova = (dma_addr_t)_iova;
u32 *page_table, *pte_addr;
@@ -670,7 +677,7 @@
static size_t rk_iommu_unmap(struct iommu_domain *domain, unsigned long _iova,
size_t size)
{
- struct rk_iommu_domain *rk_domain = domain->priv;
+ struct rk_iommu_domain *rk_domain = to_rk_domain(domain);
unsigned long flags;
dma_addr_t iova = (dma_addr_t)_iova;
phys_addr_t pt_phys;
@@ -726,7 +733,7 @@
struct device *dev)
{
struct rk_iommu *iommu;
- struct rk_iommu_domain *rk_domain = domain->priv;
+ struct rk_iommu_domain *rk_domain = to_rk_domain(domain);
unsigned long flags;
int ret;
phys_addr_t dte_addr;
@@ -778,7 +785,7 @@
struct device *dev)
{
struct rk_iommu *iommu;
- struct rk_iommu_domain *rk_domain = domain->priv;
+ struct rk_iommu_domain *rk_domain = to_rk_domain(domain);
unsigned long flags;
/* Allow 'virtual devices' (eg drm) to detach from domain */
@@ -804,13 +811,16 @@
dev_info(dev, "Detached from iommu domain\n");
}
-static int rk_iommu_domain_init(struct iommu_domain *domain)
+static struct iommu_domain *rk_iommu_domain_alloc(unsigned type)
{
struct rk_iommu_domain *rk_domain;
+ if (type != IOMMU_DOMAIN_UNMANAGED)
+ return NULL;
+
rk_domain = kzalloc(sizeof(*rk_domain), GFP_KERNEL);
if (!rk_domain)
- return -ENOMEM;
+ return NULL;
/*
* rk32xx iommus use a 2 level pagetable.
@@ -827,17 +837,16 @@
spin_lock_init(&rk_domain->dt_lock);
INIT_LIST_HEAD(&rk_domain->iommus);
- domain->priv = rk_domain;
+ return &rk_domain->domain;
- return 0;
err_dt:
kfree(rk_domain);
- return -ENOMEM;
+ return NULL;
}
-static void rk_iommu_domain_destroy(struct iommu_domain *domain)
+static void rk_iommu_domain_free(struct iommu_domain *domain)
{
- struct rk_iommu_domain *rk_domain = domain->priv;
+ struct rk_iommu_domain *rk_domain = to_rk_domain(domain);
int i;
WARN_ON(!list_empty(&rk_domain->iommus));
@@ -852,8 +861,7 @@
}
free_page((unsigned long)rk_domain->dt);
- kfree(domain->priv);
- domain->priv = NULL;
+ kfree(rk_domain);
}
static bool rk_iommu_is_dev_iommu_master(struct device *dev)
@@ -952,8 +960,8 @@
}
static const struct iommu_ops rk_iommu_ops = {
- .domain_init = rk_iommu_domain_init,
- .domain_destroy = rk_iommu_domain_destroy,
+ .domain_alloc = rk_iommu_domain_alloc,
+ .domain_free = rk_iommu_domain_free,
.attach_dev = rk_iommu_attach_device,
.detach_dev = rk_iommu_detach_device,
.map = rk_iommu_map,
diff --git a/drivers/iommu/shmobile-iommu.c b/drivers/iommu/shmobile-iommu.c
index f1b0077..a028751 100644
--- a/drivers/iommu/shmobile-iommu.c
+++ b/drivers/iommu/shmobile-iommu.c
@@ -42,11 +42,17 @@
spinlock_t map_lock;
spinlock_t attached_list_lock;
struct list_head attached_list;
+ struct iommu_domain domain;
};
static struct shmobile_iommu_archdata *ipmmu_archdata;
static struct kmem_cache *l1cache, *l2cache;
+static struct shmobile_iommu_domain *to_sh_domain(struct iommu_domain *dom)
+{
+ return container_of(dom, struct shmobile_iommu_domain, domain);
+}
+
static int pgtable_alloc(struct shmobile_iommu_domain_pgtable *pgtable,
struct kmem_cache *cache, size_t size)
{
@@ -82,31 +88,33 @@
sizeof(val) * count, DMA_TO_DEVICE);
}
-static int shmobile_iommu_domain_init(struct iommu_domain *domain)
+static struct iommu_domain *shmobile_iommu_domain_alloc(unsigned type)
{
struct shmobile_iommu_domain *sh_domain;
int i, ret;
- sh_domain = kmalloc(sizeof(*sh_domain), GFP_KERNEL);
+ if (type != IOMMU_DOMAIN_UNMANAGED)
+ return NULL;
+
+ sh_domain = kzalloc(sizeof(*sh_domain), GFP_KERNEL);
if (!sh_domain)
- return -ENOMEM;
+ return NULL;
ret = pgtable_alloc(&sh_domain->l1, l1cache, L1_SIZE);
if (ret < 0) {
kfree(sh_domain);
- return ret;
+ return NULL;
}
for (i = 0; i < L1_LEN; i++)
sh_domain->l2[i].pgtable = NULL;
spin_lock_init(&sh_domain->map_lock);
spin_lock_init(&sh_domain->attached_list_lock);
INIT_LIST_HEAD(&sh_domain->attached_list);
- domain->priv = sh_domain;
- return 0;
+ return &sh_domain->domain;
}
-static void shmobile_iommu_domain_destroy(struct iommu_domain *domain)
+static void shmobile_iommu_domain_free(struct iommu_domain *domain)
{
- struct shmobile_iommu_domain *sh_domain = domain->priv;
+ struct shmobile_iommu_domain *sh_domain = to_sh_domain(domain);
int i;
for (i = 0; i < L1_LEN; i++) {
@@ -115,14 +123,13 @@
}
pgtable_free(&sh_domain->l1, l1cache, L1_SIZE);
kfree(sh_domain);
- domain->priv = NULL;
}
static int shmobile_iommu_attach_device(struct iommu_domain *domain,
struct device *dev)
{
struct shmobile_iommu_archdata *archdata = dev->archdata.iommu;
- struct shmobile_iommu_domain *sh_domain = domain->priv;
+ struct shmobile_iommu_domain *sh_domain = to_sh_domain(domain);
int ret = -EBUSY;
if (!archdata)
@@ -151,7 +158,7 @@
struct device *dev)
{
struct shmobile_iommu_archdata *archdata = dev->archdata.iommu;
- struct shmobile_iommu_domain *sh_domain = domain->priv;
+ struct shmobile_iommu_domain *sh_domain = to_sh_domain(domain);
if (!archdata)
return;
@@ -214,7 +221,7 @@
phys_addr_t paddr, size_t size, int prot)
{
struct shmobile_iommu_domain_pgtable l2 = { .pgtable = NULL };
- struct shmobile_iommu_domain *sh_domain = domain->priv;
+ struct shmobile_iommu_domain *sh_domain = to_sh_domain(domain);
unsigned int l1index, l2index;
int ret;
@@ -258,7 +265,7 @@
unsigned long iova, size_t size)
{
struct shmobile_iommu_domain_pgtable l2 = { .pgtable = NULL };
- struct shmobile_iommu_domain *sh_domain = domain->priv;
+ struct shmobile_iommu_domain *sh_domain = to_sh_domain(domain);
unsigned int l1index, l2index;
uint32_t l2entry = 0;
size_t ret = 0;
@@ -298,7 +305,7 @@
static phys_addr_t shmobile_iommu_iova_to_phys(struct iommu_domain *domain,
dma_addr_t iova)
{
- struct shmobile_iommu_domain *sh_domain = domain->priv;
+ struct shmobile_iommu_domain *sh_domain = to_sh_domain(domain);
uint32_t l1entry = 0, l2entry = 0;
unsigned int l1index, l2index;
@@ -355,8 +362,8 @@
}
static const struct iommu_ops shmobile_iommu_ops = {
- .domain_init = shmobile_iommu_domain_init,
- .domain_destroy = shmobile_iommu_domain_destroy,
+ .domain_alloc = shmobile_iommu_domain_alloc,
+ .domain_free = shmobile_iommu_domain_free,
.attach_dev = shmobile_iommu_attach_device,
.detach_dev = shmobile_iommu_detach_device,
.map = shmobile_iommu_map,
diff --git a/drivers/iommu/tegra-gart.c b/drivers/iommu/tegra-gart.c
index c48da05..37e708f 100644
--- a/drivers/iommu/tegra-gart.c
+++ b/drivers/iommu/tegra-gart.c
@@ -63,11 +63,21 @@
struct device *dev;
};
+struct gart_domain {
+ struct iommu_domain domain; /* generic domain handle */
+ struct gart_device *gart; /* link to gart device */
+};
+
static struct gart_device *gart_handle; /* unique for a system */
#define GART_PTE(_pfn) \
(GART_ENTRY_PHYS_ADDR_VALID | ((_pfn) << PAGE_SHIFT))
+static struct gart_domain *to_gart_domain(struct iommu_domain *dom)
+{
+ return container_of(dom, struct gart_domain, domain);
+}
+
/*
* Any interaction between any block on PPSB and a block on APB or AHB
* must have these read-back to ensure the APB/AHB bus transaction is
@@ -156,20 +166,11 @@
static int gart_iommu_attach_dev(struct iommu_domain *domain,
struct device *dev)
{
- struct gart_device *gart;
+ struct gart_domain *gart_domain = to_gart_domain(domain);
+ struct gart_device *gart = gart_domain->gart;
struct gart_client *client, *c;
int err = 0;
- gart = gart_handle;
- if (!gart)
- return -EINVAL;
- domain->priv = gart;
-
- domain->geometry.aperture_start = gart->iovmm_base;
- domain->geometry.aperture_end = gart->iovmm_base +
- gart->page_count * GART_PAGE_SIZE - 1;
- domain->geometry.force_aperture = true;
-
client = devm_kzalloc(gart->dev, sizeof(*c), GFP_KERNEL);
if (!client)
return -ENOMEM;
@@ -198,7 +199,8 @@
static void gart_iommu_detach_dev(struct iommu_domain *domain,
struct device *dev)
{
- struct gart_device *gart = domain->priv;
+ struct gart_domain *gart_domain = to_gart_domain(domain);
+ struct gart_device *gart = gart_domain->gart;
struct gart_client *c;
spin_lock(&gart->client_lock);
@@ -216,33 +218,55 @@
spin_unlock(&gart->client_lock);
}
-static int gart_iommu_domain_init(struct iommu_domain *domain)
+static struct iommu_domain *gart_iommu_domain_alloc(unsigned type)
{
- return 0;
+ struct gart_domain *gart_domain;
+ struct gart_device *gart;
+
+ if (type != IOMMU_DOMAIN_UNMANAGED)
+ return NULL;
+
+ gart = gart_handle;
+ if (!gart)
+ return NULL;
+
+ gart_domain = kzalloc(sizeof(*gart_domain), GFP_KERNEL);
+ if (!gart_domain)
+ return NULL;
+
+ gart_domain->gart = gart;
+ gart_domain->domain.geometry.aperture_start = gart->iovmm_base;
+ gart_domain->domain.geometry.aperture_end = gart->iovmm_base +
+ gart->page_count * GART_PAGE_SIZE - 1;
+ gart_domain->domain.geometry.force_aperture = true;
+
+ return &gart_domain->domain;
}
-static void gart_iommu_domain_destroy(struct iommu_domain *domain)
+static void gart_iommu_domain_free(struct iommu_domain *domain)
{
- struct gart_device *gart = domain->priv;
+ struct gart_domain *gart_domain = to_gart_domain(domain);
+ struct gart_device *gart = gart_domain->gart;
- if (!gart)
- return;
+ if (gart) {
+ spin_lock(&gart->client_lock);
+ if (!list_empty(&gart->client)) {
+ struct gart_client *c;
- spin_lock(&gart->client_lock);
- if (!list_empty(&gart->client)) {
- struct gart_client *c;
-
- list_for_each_entry(c, &gart->client, list)
- gart_iommu_detach_dev(domain, c->dev);
+ list_for_each_entry(c, &gart->client, list)
+ gart_iommu_detach_dev(domain, c->dev);
+ }
+ spin_unlock(&gart->client_lock);
}
- spin_unlock(&gart->client_lock);
- domain->priv = NULL;
+
+ kfree(gart_domain);
}
static int gart_iommu_map(struct iommu_domain *domain, unsigned long iova,
phys_addr_t pa, size_t bytes, int prot)
{
- struct gart_device *gart = domain->priv;
+ struct gart_domain *gart_domain = to_gart_domain(domain);
+ struct gart_device *gart = gart_domain->gart;
unsigned long flags;
unsigned long pfn;
@@ -265,7 +289,8 @@
static size_t gart_iommu_unmap(struct iommu_domain *domain, unsigned long iova,
size_t bytes)
{
- struct gart_device *gart = domain->priv;
+ struct gart_domain *gart_domain = to_gart_domain(domain);
+ struct gart_device *gart = gart_domain->gart;
unsigned long flags;
if (!gart_iova_range_valid(gart, iova, bytes))
@@ -281,7 +306,8 @@
static phys_addr_t gart_iommu_iova_to_phys(struct iommu_domain *domain,
dma_addr_t iova)
{
- struct gart_device *gart = domain->priv;
+ struct gart_domain *gart_domain = to_gart_domain(domain);
+ struct gart_device *gart = gart_domain->gart;
unsigned long pte;
phys_addr_t pa;
unsigned long flags;
@@ -310,8 +336,8 @@
static const struct iommu_ops gart_iommu_ops = {
.capable = gart_iommu_capable,
- .domain_init = gart_iommu_domain_init,
- .domain_destroy = gart_iommu_domain_destroy,
+ .domain_alloc = gart_iommu_domain_alloc,
+ .domain_free = gart_iommu_domain_free,
.attach_dev = gart_iommu_attach_dev,
.detach_dev = gart_iommu_detach_dev,
.map = gart_iommu_map,
diff --git a/drivers/iommu/tegra-smmu.c b/drivers/iommu/tegra-smmu.c
index 6e134c7..c845d99 100644
--- a/drivers/iommu/tegra-smmu.c
+++ b/drivers/iommu/tegra-smmu.c
@@ -6,6 +6,7 @@
* published by the Free Software Foundation.
*/
+#include <linux/bitops.h>
#include <linux/err.h>
#include <linux/iommu.h>
#include <linux/kernel.h>
@@ -24,6 +25,8 @@
struct tegra_mc *mc;
const struct tegra_smmu_soc *soc;
+ unsigned long pfn_mask;
+
unsigned long *asids;
struct mutex lock;
@@ -31,7 +34,7 @@
};
struct tegra_smmu_as {
- struct iommu_domain *domain;
+ struct iommu_domain domain;
struct tegra_smmu *smmu;
unsigned int use_count;
struct page *count;
@@ -40,6 +43,11 @@
u32 attr;
};
+static struct tegra_smmu_as *to_smmu_as(struct iommu_domain *dom)
+{
+ return container_of(dom, struct tegra_smmu_as, domain);
+}
+
static inline void smmu_writel(struct tegra_smmu *smmu, u32 value,
unsigned long offset)
{
@@ -105,8 +113,6 @@
#define SMMU_PDE_SHIFT 22
#define SMMU_PTE_SHIFT 12
-#define SMMU_PFN_MASK 0x000fffff
-
#define SMMU_PD_READABLE (1 << 31)
#define SMMU_PD_WRITABLE (1 << 30)
#define SMMU_PD_NONSECURE (1 << 29)
@@ -224,30 +230,32 @@
return false;
}
-static int tegra_smmu_domain_init(struct iommu_domain *domain)
+static struct iommu_domain *tegra_smmu_domain_alloc(unsigned type)
{
struct tegra_smmu_as *as;
unsigned int i;
uint32_t *pd;
+ if (type != IOMMU_DOMAIN_UNMANAGED)
+ return NULL;
+
as = kzalloc(sizeof(*as), GFP_KERNEL);
if (!as)
- return -ENOMEM;
+ return NULL;
as->attr = SMMU_PD_READABLE | SMMU_PD_WRITABLE | SMMU_PD_NONSECURE;
- as->domain = domain;
as->pd = alloc_page(GFP_KERNEL | __GFP_DMA);
if (!as->pd) {
kfree(as);
- return -ENOMEM;
+ return NULL;
}
as->count = alloc_page(GFP_KERNEL);
if (!as->count) {
__free_page(as->pd);
kfree(as);
- return -ENOMEM;
+ return NULL;
}
/* clear PDEs */
@@ -264,14 +272,17 @@
for (i = 0; i < SMMU_NUM_PDE; i++)
pd[i] = 0;
- domain->priv = as;
+ /* setup aperture */
+ as->domain.geometry.aperture_start = 0;
+ as->domain.geometry.aperture_end = 0xffffffff;
+ as->domain.geometry.force_aperture = true;
- return 0;
+ return &as->domain;
}
-static void tegra_smmu_domain_destroy(struct iommu_domain *domain)
+static void tegra_smmu_domain_free(struct iommu_domain *domain)
{
- struct tegra_smmu_as *as = domain->priv;
+ struct tegra_smmu_as *as = to_smmu_as(domain);
/* TODO: free page directory and page tables */
ClearPageReserved(as->pd);
@@ -395,7 +406,7 @@
struct device *dev)
{
struct tegra_smmu *smmu = dev->archdata.iommu;
- struct tegra_smmu_as *as = domain->priv;
+ struct tegra_smmu_as *as = to_smmu_as(domain);
struct device_node *np = dev->of_node;
struct of_phandle_args args;
unsigned int index = 0;
@@ -428,7 +439,7 @@
static void tegra_smmu_detach_dev(struct iommu_domain *domain, struct device *dev)
{
- struct tegra_smmu_as *as = domain->priv;
+ struct tegra_smmu_as *as = to_smmu_as(domain);
struct device_node *np = dev->of_node;
struct tegra_smmu *smmu = as->smmu;
struct of_phandle_args args;
@@ -481,7 +492,7 @@
smmu_flush_tlb_section(smmu, as->id, iova);
smmu_flush(smmu);
} else {
- page = pfn_to_page(pd[pde] & SMMU_PFN_MASK);
+ page = pfn_to_page(pd[pde] & smmu->pfn_mask);
pt = page_address(page);
}
@@ -503,7 +514,7 @@
u32 *pd = page_address(as->pd), *pt;
struct page *page;
- page = pfn_to_page(pd[pde] & SMMU_PFN_MASK);
+ page = pfn_to_page(pd[pde] & as->smmu->pfn_mask);
pt = page_address(page);
/*
@@ -524,7 +535,7 @@
static int tegra_smmu_map(struct iommu_domain *domain, unsigned long iova,
phys_addr_t paddr, size_t size, int prot)
{
- struct tegra_smmu_as *as = domain->priv;
+ struct tegra_smmu_as *as = to_smmu_as(domain);
struct tegra_smmu *smmu = as->smmu;
unsigned long offset;
struct page *page;
@@ -548,7 +559,7 @@
static size_t tegra_smmu_unmap(struct iommu_domain *domain, unsigned long iova,
size_t size)
{
- struct tegra_smmu_as *as = domain->priv;
+ struct tegra_smmu_as *as = to_smmu_as(domain);
struct tegra_smmu *smmu = as->smmu;
unsigned long offset;
struct page *page;
@@ -572,13 +583,13 @@
static phys_addr_t tegra_smmu_iova_to_phys(struct iommu_domain *domain,
dma_addr_t iova)
{
- struct tegra_smmu_as *as = domain->priv;
+ struct tegra_smmu_as *as = to_smmu_as(domain);
struct page *page;
unsigned long pfn;
u32 *pte;
pte = as_get_pte(as, iova, &page);
- pfn = *pte & SMMU_PFN_MASK;
+ pfn = *pte & as->smmu->pfn_mask;
return PFN_PHYS(pfn);
}
@@ -633,8 +644,8 @@
static const struct iommu_ops tegra_smmu_ops = {
.capable = tegra_smmu_capable,
- .domain_init = tegra_smmu_domain_init,
- .domain_destroy = tegra_smmu_domain_destroy,
+ .domain_alloc = tegra_smmu_domain_alloc,
+ .domain_free = tegra_smmu_domain_free,
.attach_dev = tegra_smmu_attach_dev,
.detach_dev = tegra_smmu_detach_dev,
.add_device = tegra_smmu_add_device,
@@ -702,6 +713,10 @@
smmu->dev = dev;
smmu->mc = mc;
+ smmu->pfn_mask = BIT_MASK(mc->soc->num_address_bits - PAGE_SHIFT) - 1;
+ dev_dbg(dev, "address bits: %u, PFN mask: %#lx\n",
+ mc->soc->num_address_bits, smmu->pfn_mask);
+
value = SMMU_PTC_CONFIG_ENABLE | SMMU_PTC_CONFIG_INDEX_MAP(0x3f);
if (soc->supports_request_limit)
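
The per-SoC PFN mask above replaces the hard-coded SMMU_PFN_MASK that an
earlier hunk in this file removes. A minimal user-space sketch of the same
arithmetic, assuming 32 address bits and 4 KiB pages purely to reproduce the
old constant (illustrative values, not taken from a specific SoC):

	#include <stdio.h>

	int main(void)
	{
		unsigned int num_address_bits = 32;	/* assumed SoC capability */
		unsigned int page_shift = 12;		/* 4 KiB pages */
		/* same computation as smmu->pfn_mask above */
		unsigned long pfn_mask = (1UL << (num_address_bits - page_shift)) - 1;

		printf("PFN mask: %#lx\n", pfn_mask);	/* 0xfffff, the old SMMU_PFN_MASK */
		return 0;
	}
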
diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
index 4f2fb62..49875ad 100644
--- a/drivers/irqchip/irq-gic-v3.c
+++ b/drivers/irqchip/irq-gic-v3.c
@@ -567,7 +567,7 @@
*/
smp_wmb();
- for_each_cpu_mask(cpu, *mask) {
+ for_each_cpu(cpu, mask) {
u64 cluster_id = cpu_logical_map(cpu) & ~0xffUL;
u16 tlist;
diff --git a/drivers/irqchip/irq-mips-gic.c b/drivers/irqchip/irq-mips-gic.c
index bc48b7d..57f09cb 100644
--- a/drivers/irqchip/irq-mips-gic.c
+++ b/drivers/irqchip/irq-mips-gic.c
@@ -389,19 +389,19 @@
int i;
cpumask_and(&tmp, cpumask, cpu_online_mask);
- if (cpus_empty(tmp))
+ if (cpumask_empty(&tmp))
return -EINVAL;
/* Assumption : cpumask refers to a single CPU */
spin_lock_irqsave(&gic_lock, flags);
/* Re-route this IRQ */
- gic_map_to_vpe(irq, first_cpu(tmp));
+ gic_map_to_vpe(irq, cpumask_first(&tmp));
/* Update the pcpu_masks */
for (i = 0; i < NR_CPUS; i++)
clear_bit(irq, pcpu_masks[i].pcpu_mask);
- set_bit(irq, pcpu_masks[first_cpu(tmp)].pcpu_mask);
+ set_bit(irq, pcpu_masks[cpumask_first(&tmp)].pcpu_mask);
cpumask_copy(d->affinity, cpumask);
spin_unlock_irqrestore(&gic_lock, flags);
diff --git a/drivers/leds/leds-gpio.c b/drivers/leds/leds-gpio.c
index d26af0a..15eb3f8 100644
--- a/drivers/leds/leds-gpio.c
+++ b/drivers/leds/leds-gpio.c
@@ -184,7 +184,7 @@
struct gpio_led led = {};
const char *state = NULL;
- led.gpiod = devm_get_gpiod_from_child(dev, child);
+ led.gpiod = devm_get_gpiod_from_child(dev, NULL, child);
if (IS_ERR(led.gpiod)) {
fwnode_handle_put(child);
ret = PTR_ERR(led.gpiod);
diff --git a/drivers/mcb/mcb-pci.c b/drivers/mcb/mcb-pci.c
index 0af7361..de36237 100644
--- a/drivers/mcb/mcb-pci.c
+++ b/drivers/mcb/mcb-pci.c
@@ -56,9 +56,9 @@
res = request_mem_region(priv->mapbase, CHAM_HEADER_SIZE,
KBUILD_MODNAME);
- if (IS_ERR(res)) {
+ if (!res) {
dev_err(&pdev->dev, "Failed to request PCI memory\n");
- ret = PTR_ERR(res);
+ ret = -EBUSY;
goto out_disable;
}
diff --git a/drivers/md/Kconfig b/drivers/md/Kconfig
index 63e05e3..6ddc9834 100644
--- a/drivers/md/Kconfig
+++ b/drivers/md/Kconfig
@@ -196,6 +196,17 @@
If unsure, say N.
+config DM_MQ_DEFAULT
+ bool "request-based DM: use blk-mq I/O path by default"
+ depends on BLK_DEV_DM
+ ---help---
+ This option enables the blk-mq based I/O path for request-based
+ DM devices by default. With the option the dm_mod.use_blk_mq
+ module/boot option defaults to Y, without it to N, but it can
+ still be overridden either way.
+
+ If unsure, say N.
+
config DM_DEBUG
bool "Device mapper debugging support"
depends on BLK_DEV_DM
@@ -432,4 +443,20 @@
If unsure, say N.
+config DM_LOG_WRITES
+ tristate "Log writes target support"
+ depends on BLK_DEV_DM
+ ---help---
+ This device-mapper target takes two devices, one device to use
+ normally, one to log all write operations done to the first device.
+ This is for use by file system developers wishing to verify that
+ their fs is writing a consistent file system at all times by allowing
+ them to replay the log in a variety of ways and to check the
+ contents.
+
+ To compile this code as a module, choose M here: the module will
+ be called dm-log-writes.
+
+ If unsure, say N.
+
endif # MD
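
For reference, DM_MQ_DEFAULT only seeds the dm_mod.use_blk_mq default; it can
still be flipped per boot. An illustrative example of both forms the help text
refers to (exact value spelling assumed, Y/N as usual for bool parameters):

	# as a module parameter
	modprobe dm_mod use_blk_mq=Y
	# or on the kernel command line when device-mapper is built in
	dm_mod.use_blk_mq=Y
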
diff --git a/drivers/md/Makefile b/drivers/md/Makefile
index a2da532..1863fea 100644
--- a/drivers/md/Makefile
+++ b/drivers/md/Makefile
@@ -55,6 +55,7 @@
obj-$(CONFIG_DM_CACHE_MQ) += dm-cache-mq.o
obj-$(CONFIG_DM_CACHE_CLEANER) += dm-cache-cleaner.o
obj-$(CONFIG_DM_ERA) += dm-era.o
+obj-$(CONFIG_DM_LOG_WRITES) += dm-log-writes.o
ifeq ($(CONFIG_DM_UEVENT),y)
dm-mod-objs += dm-uevent.o
diff --git a/drivers/md/dm-cache-policy-mq.c b/drivers/md/dm-cache-policy-mq.c
index 13f547a..3ddd116 100644
--- a/drivers/md/dm-cache-policy-mq.c
+++ b/drivers/md/dm-cache-policy-mq.c
@@ -8,6 +8,7 @@
#include "dm.h"
#include <linux/hash.h>
+#include <linux/jiffies.h>
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/slab.h>
@@ -124,32 +125,41 @@
* sorted queue.
*/
#define NR_QUEUE_LEVELS 16u
+#define NR_SENTINELS NR_QUEUE_LEVELS * 3
+
+#define WRITEBACK_PERIOD HZ
struct queue {
+ unsigned nr_elts;
+ bool current_writeback_sentinels;
+ unsigned long next_writeback;
struct list_head qs[NR_QUEUE_LEVELS];
+ struct list_head sentinels[NR_SENTINELS];
};
static void queue_init(struct queue *q)
{
unsigned i;
- for (i = 0; i < NR_QUEUE_LEVELS; i++)
+ q->nr_elts = 0;
+ q->current_writeback_sentinels = false;
+ q->next_writeback = 0;
+ for (i = 0; i < NR_QUEUE_LEVELS; i++) {
INIT_LIST_HEAD(q->qs + i);
+ INIT_LIST_HEAD(q->sentinels + i);
+ INIT_LIST_HEAD(q->sentinels + NR_QUEUE_LEVELS + i);
+ INIT_LIST_HEAD(q->sentinels + (2 * NR_QUEUE_LEVELS) + i);
+ }
}
-/*
- * Checks to see if the queue is empty.
- * FIXME: reduce cpu usage.
- */
+static unsigned queue_size(struct queue *q)
+{
+ return q->nr_elts;
+}
+
static bool queue_empty(struct queue *q)
{
- unsigned i;
-
- for (i = 0; i < NR_QUEUE_LEVELS; i++)
- if (!list_empty(q->qs + i))
- return false;
-
- return true;
+ return q->nr_elts == 0;
}
/*
@@ -157,24 +167,19 @@
*/
static void queue_push(struct queue *q, unsigned level, struct list_head *elt)
{
+ q->nr_elts++;
list_add_tail(elt, q->qs + level);
}
-static void queue_remove(struct list_head *elt)
+static void queue_remove(struct queue *q, struct list_head *elt)
{
+ q->nr_elts--;
list_del(elt);
}
-/*
- * Shifts all regions down one level. This has no effect on the order of
- * the queue.
- */
-static void queue_shift_down(struct queue *q)
+static bool is_sentinel(struct queue *q, struct list_head *h)
{
- unsigned level;
-
- for (level = 1; level < NR_QUEUE_LEVELS; level++)
- list_splice_init(q->qs + level, q->qs + level - 1);
+ return (h >= q->sentinels) && (h < (q->sentinels + NR_SENTINELS));
}
/*
@@ -184,10 +189,12 @@
static struct list_head *queue_peek(struct queue *q)
{
unsigned level;
+ struct list_head *h;
for (level = 0; level < NR_QUEUE_LEVELS; level++)
- if (!list_empty(q->qs + level))
- return q->qs[level].next;
+ list_for_each(h, q->qs + level)
+ if (!is_sentinel(q, h))
+ return h;
return NULL;
}
@@ -197,16 +204,34 @@
struct list_head *r = queue_peek(q);
if (r) {
+ q->nr_elts--;
list_del(r);
-
- /* have we just emptied the bottom level? */
- if (list_empty(q->qs))
- queue_shift_down(q);
}
return r;
}
+/*
+ * Pops an entry from a level that is not past a sentinel.
+ */
+static struct list_head *queue_pop_old(struct queue *q)
+{
+ unsigned level;
+ struct list_head *h;
+
+ for (level = 0; level < NR_QUEUE_LEVELS; level++)
+ list_for_each(h, q->qs + level) {
+ if (is_sentinel(q, h))
+ break;
+
+ q->nr_elts--;
+ list_del(h);
+ return h;
+ }
+
+ return NULL;
+}
+
static struct list_head *list_pop(struct list_head *lh)
{
struct list_head *r = lh->next;
@@ -217,6 +242,62 @@
return r;
}
+static struct list_head *writeback_sentinel(struct queue *q, unsigned level)
+{
+ if (q->current_writeback_sentinels)
+ return q->sentinels + NR_QUEUE_LEVELS + level;
+ else
+ return q->sentinels + 2 * NR_QUEUE_LEVELS + level;
+}
+
+static void queue_update_writeback_sentinels(struct queue *q)
+{
+ unsigned i;
+ struct list_head *h;
+
+ if (time_after(jiffies, q->next_writeback)) {
+ for (i = 0; i < NR_QUEUE_LEVELS; i++) {
+ h = writeback_sentinel(q, i);
+ list_del(h);
+ list_add_tail(h, q->qs + i);
+ }
+
+ q->next_writeback = jiffies + WRITEBACK_PERIOD;
+ q->current_writeback_sentinels = !q->current_writeback_sentinels;
+ }
+}
+
+/*
+ * Sometimes we want to iterate through entries that have been pushed since
+ * a certain event. We use sentinel entries on the queues to delimit these
+ * 'tick' events.
+ */
+static void queue_tick(struct queue *q)
+{
+ unsigned i;
+
+ for (i = 0; i < NR_QUEUE_LEVELS; i++) {
+ list_del(q->sentinels + i);
+ list_add_tail(q->sentinels + i, q->qs + i);
+ }
+}
+
+typedef void (*iter_fn)(struct list_head *, void *);
+static void queue_iterate_tick(struct queue *q, iter_fn fn, void *context)
+{
+ unsigned i;
+ struct list_head *h;
+
+ for (i = 0; i < NR_QUEUE_LEVELS; i++) {
+ list_for_each_prev(h, q->qs + i) {
+ if (is_sentinel(q, h))
+ break;
+
+ fn(h, context);
+ }
+ }
+}
+
/*----------------------------------------------------------------*/
/*
@@ -232,8 +313,6 @@
*/
bool dirty:1;
unsigned hit_count;
- unsigned generation;
- unsigned tick;
};
/*
@@ -481,7 +560,6 @@
*/
static void push(struct mq_policy *mq, struct entry *e)
{
- e->tick = mq->tick;
hash_insert(mq, e);
if (in_cache(mq, e))
@@ -496,7 +574,11 @@
*/
static void del(struct mq_policy *mq, struct entry *e)
{
- queue_remove(&e->list);
+ if (in_cache(mq, e))
+ queue_remove(e->dirty ? &mq->cache_dirty : &mq->cache_clean, &e->list);
+ else
+ queue_remove(&mq->pre_cache, &e->list);
+
hash_remove(e);
}
@@ -518,6 +600,20 @@
return e;
}
+static struct entry *pop_old(struct mq_policy *mq, struct queue *q)
+{
+ struct entry *e;
+ struct list_head *h = queue_pop_old(q);
+
+ if (!h)
+ return NULL;
+
+ e = container_of(h, struct entry, list);
+ hash_remove(e);
+
+ return e;
+}
+
static struct entry *peek(struct queue *q)
{
struct list_head *h = queue_peek(q);
@@ -525,14 +621,6 @@
}
/*
- * Has this entry already been updated?
- */
-static bool updated_this_tick(struct mq_policy *mq, struct entry *e)
-{
- return mq->tick == e->tick;
-}
-
-/*
* The promotion threshold is adjusted every generation. As are the counts
* of the entries.
*
@@ -583,20 +671,9 @@
* Whenever we use an entry we bump up it's hit counter, and push it to the
* back to it's current level.
*/
-static void requeue_and_update_tick(struct mq_policy *mq, struct entry *e)
+static void requeue(struct mq_policy *mq, struct entry *e)
{
- if (updated_this_tick(mq, e))
- return;
-
- e->hit_count++;
- mq->hit_count++;
check_generation(mq);
-
- /* generation adjustment, to stop the counts increasing forever. */
- /* FIXME: divide? */
- /* e->hit_count -= min(e->hit_count - 1, mq->generation - e->generation); */
- e->generation = mq->generation;
-
del(mq, e);
push(mq, e);
}
@@ -703,7 +780,7 @@
struct entry *e,
struct policy_result *result)
{
- requeue_and_update_tick(mq, e);
+ requeue(mq, e);
if (in_cache(mq, e)) {
result->op = POLICY_HIT;
@@ -740,8 +817,6 @@
new_e->oblock = e->oblock;
new_e->dirty = false;
new_e->hit_count = e->hit_count;
- new_e->generation = e->generation;
- new_e->tick = e->tick;
del(mq, e);
free_entry(&mq->pre_cache_pool, e);
@@ -757,18 +832,16 @@
int data_dir, struct policy_result *result)
{
int r = 0;
- bool updated = updated_this_tick(mq, e);
- if ((!discarded_oblock && updated) ||
- !should_promote(mq, e, discarded_oblock, data_dir)) {
- requeue_and_update_tick(mq, e);
+ if (!should_promote(mq, e, discarded_oblock, data_dir)) {
+ requeue(mq, e);
result->op = POLICY_MISS;
} else if (!can_migrate)
r = -EWOULDBLOCK;
else {
- requeue_and_update_tick(mq, e);
+ requeue(mq, e);
r = pre_cache_to_cache(mq, e, result);
}
@@ -795,7 +868,6 @@
e->dirty = false;
e->oblock = oblock;
e->hit_count = 1;
- e->generation = mq->generation;
push(mq, e);
}
@@ -828,7 +900,6 @@
e->oblock = oblock;
e->dirty = false;
e->hit_count = 1;
- e->generation = mq->generation;
push(mq, e);
result->cblock = infer_cblock(&mq->cache_pool, e);
@@ -905,12 +976,37 @@
kfree(mq);
}
+static void update_pre_cache_hits(struct list_head *h, void *context)
+{
+ struct entry *e = container_of(h, struct entry, list);
+ e->hit_count++;
+}
+
+static void update_cache_hits(struct list_head *h, void *context)
+{
+ struct mq_policy *mq = context;
+ struct entry *e = container_of(h, struct entry, list);
+ e->hit_count++;
+ mq->hit_count++;
+}
+
static void copy_tick(struct mq_policy *mq)
{
- unsigned long flags;
+ unsigned long flags, tick;
spin_lock_irqsave(&mq->tick_lock, flags);
- mq->tick = mq->tick_protected;
+ tick = mq->tick_protected;
+ if (tick != mq->tick) {
+ queue_iterate_tick(&mq->pre_cache, update_pre_cache_hits, mq);
+ queue_iterate_tick(&mq->cache_dirty, update_cache_hits, mq);
+ queue_iterate_tick(&mq->cache_clean, update_cache_hits, mq);
+ mq->tick = tick;
+ }
+
+ queue_tick(&mq->pre_cache);
+ queue_tick(&mq->cache_dirty);
+ queue_tick(&mq->cache_clean);
+ queue_update_writeback_sentinels(&mq->cache_dirty);
spin_unlock_irqrestore(&mq->tick_lock, flags);
}
@@ -1001,7 +1097,6 @@
e->oblock = oblock;
e->dirty = false; /* this gets corrected in a minute */
e->hit_count = hint_valid ? hint : 1;
- e->generation = mq->generation;
push(mq, e);
return 0;
@@ -1012,10 +1107,15 @@
{
int r;
unsigned level;
+ struct list_head *h;
struct entry *e;
for (level = 0; level < NR_QUEUE_LEVELS; level++)
- list_for_each_entry(e, q->qs + level, list) {
+ list_for_each(h, q->qs + level) {
+ if (is_sentinel(q, h))
+ continue;
+
+ e = container_of(h, struct entry, list);
r = fn(context, infer_cblock(&mq->cache_pool, e),
e->oblock, e->hit_count);
if (r)
@@ -1087,10 +1187,27 @@
return r;
}
+#define CLEAN_TARGET_PERCENTAGE 25
+
+static bool clean_target_met(struct mq_policy *mq)
+{
+ /*
+ * Cache entries may not be populated. So we cannot rely on the
+ * size of the clean queue.
+ */
+ unsigned nr_clean = from_cblock(mq->cache_size) - queue_size(&mq->cache_dirty);
+ unsigned target = from_cblock(mq->cache_size) * CLEAN_TARGET_PERCENTAGE / 100;
+
+ return nr_clean >= target;
+}
+
static int __mq_writeback_work(struct mq_policy *mq, dm_oblock_t *oblock,
dm_cblock_t *cblock)
{
- struct entry *e = pop(mq, &mq->cache_dirty);
+ struct entry *e = pop_old(mq, &mq->cache_dirty);
+
+ if (!e && !clean_target_met(mq))
+ e = pop(mq, &mq->cache_dirty);
if (!e)
return -ENODATA;
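
The 25% clean target in clean_target_met() is easiest to see with concrete
numbers. A small stand-alone sketch of the same computation, using an assumed
cache of 10000 blocks of which 8200 are dirty (illustrative sizes only):

	#include <stdio.h>

	int main(void)
	{
		unsigned cache_size = 10000;	/* from_cblock(mq->cache_size) */
		unsigned nr_dirty = 8200;	/* queue_size(&mq->cache_dirty) */
		unsigned nr_clean = cache_size - nr_dirty;
		unsigned target = cache_size * 25 / 100;

		/* 1800 < 2500: below target, so __mq_writeback_work() may also
		 * pop dirty entries that are not yet past a tick sentinel. */
		printf("clean target met: %s\n", nr_clean >= target ? "yes" : "no");
		return 0;
	}
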
diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index 713a962..9eeea19 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -228,7 +228,7 @@
*
* tcw: Compatible implementation of the block chaining mode used
* by the TrueCrypt device encryption system (prior to version 4.1).
- * For more info see: http://www.truecrypt.org
+ * For more info see: https://gitlab.com/cryptsetup/cryptsetup/wikis/TrueCryptOnDiskFormat
* It operates on full 512 byte sectors and uses CBC
* with an IV derived from initial key and the sector number.
* In addition, whitening value is applied on every sector, whitening
@@ -925,11 +925,10 @@
switch (r) {
/* async */
+ case -EINPROGRESS:
case -EBUSY:
wait_for_completion(&ctx->restart);
reinit_completion(&ctx->restart);
- /* fall through*/
- case -EINPROGRESS:
ctx->req = NULL;
ctx->cc_sector++;
continue;
@@ -1124,15 +1123,15 @@
static int kcryptd_io_read(struct dm_crypt_io *io, gfp_t gfp)
{
struct crypt_config *cc = io->cc;
- struct bio *base_bio = io->base_bio;
struct bio *clone;
/*
- * The block layer might modify the bvec array, so always
- * copy the required bvecs because we need the original
- * one in order to decrypt the whole bio data *afterwards*.
+ * We need the original biovec array in order to decrypt
+ * the whole bio data *afterwards* -- thanks to immutable
+ * biovecs we don't need to worry about the block layer
+ * modifying the biovec array; so leverage bio_clone_fast().
*/
- clone = bio_clone_bioset(base_bio, gfp, cc->bs);
+ clone = bio_clone_fast(io->base_bio, gfp, cc->bs);
if (!clone)
return 1;
@@ -1346,10 +1345,8 @@
struct dm_crypt_io *io = container_of(ctx, struct dm_crypt_io, ctx);
struct crypt_config *cc = io->cc;
- if (error == -EINPROGRESS) {
- complete(&ctx->restart);
+ if (error == -EINPROGRESS)
return;
- }
if (!error && cc->iv_gen_ops && cc->iv_gen_ops->post)
error = cc->iv_gen_ops->post(cc, iv_of_dmreq(cc, dmreq), dmreq);
@@ -1360,12 +1357,15 @@
crypt_free_req(cc, req_of_dmreq(cc, dmreq), io->base_bio);
if (!atomic_dec_and_test(&ctx->cc_pending))
- return;
+ goto done;
if (bio_data_dir(io->base_bio) == READ)
kcryptd_crypt_read_done(io);
else
kcryptd_crypt_write_io_submit(io, 1);
+done:
+ if (!completion_done(&ctx->restart))
+ complete(&ctx->restart);
}
static void kcryptd_crypt(struct work_struct *work)
@@ -1816,6 +1816,7 @@
if (ret)
goto bad;
+ ret = -EINVAL;
while (opt_params--) {
opt_string = dm_shift_arg(&as);
if (!opt_string) {
diff --git a/drivers/md/dm-delay.c b/drivers/md/dm-delay.c
index 42c3a27..57b6a19 100644
--- a/drivers/md/dm-delay.c
+++ b/drivers/md/dm-delay.c
@@ -236,7 +236,7 @@
delayed = dm_per_bio_data(bio, sizeof(struct dm_delay_info));
delayed->context = dc;
- delayed->expires = expires = jiffies + (delay * HZ / 1000);
+ delayed->expires = expires = jiffies + msecs_to_jiffies(delay);
mutex_lock(&delayed_bios_lock);
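
Besides readability, msecs_to_jiffies() also rounds up where the open-coded
conversion truncated, so short delays can no longer expire early. A worked
example assuming HZ=250 (4 ms per jiffy):

	delay = 30 ms:
	  old:  30 * HZ / 1000       = 7 jiffies  (~28 ms, slightly early)
	  new:  msecs_to_jiffies(30) = 8 jiffies  (~32 ms, never early)
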
diff --git a/drivers/md/dm-log-userspace-base.c b/drivers/md/dm-log-userspace-base.c
index 03177ca..058256d 100644
--- a/drivers/md/dm-log-userspace-base.c
+++ b/drivers/md/dm-log-userspace-base.c
@@ -17,7 +17,9 @@
#define DM_LOG_USERSPACE_VSN "1.3.0"
-struct flush_entry {
+#define FLUSH_ENTRY_POOL_SIZE 16
+
+struct dm_dirty_log_flush_entry {
int type;
region_t region;
struct list_head list;
@@ -34,22 +36,14 @@
struct log_c {
struct dm_target *ti;
struct dm_dev *log_dev;
- uint32_t region_size;
- region_t region_count;
- uint64_t luid;
- char uuid[DM_UUID_LEN];
char *usr_argv_str;
uint32_t usr_argc;
- /*
- * in_sync_hint gets set when doing is_remote_recovering. It
- * represents the first region that needs recovery. IOW, the
- * first zero bit of sync_bits. This can be useful for to limit
- * traffic for calls like is_remote_recovering and get_resync_work,
- * but be take care in its use for anything else.
- */
- uint64_t in_sync_hint;
+ uint32_t region_size;
+ region_t region_count;
+ uint64_t luid;
+ char uuid[DM_UUID_LEN];
/*
* Mark and clear requests are held until a flush is issued
@@ -62,6 +56,15 @@
struct list_head clear_list;
/*
+ * in_sync_hint gets set when doing is_remote_recovering. It
+ * represents the first region that needs recovery. IOW, the
+ * first zero bit of sync_bits. This can be useful to limit
+ * traffic for calls like is_remote_recovering and get_resync_work,
+ * but take care in its use for anything else.
+ */
+ uint64_t in_sync_hint;
+
+ /*
* Workqueue for flush of clear region requests.
*/
struct workqueue_struct *dmlog_wq;
@@ -72,19 +75,11 @@
* Combine userspace flush and mark requests for efficiency.
*/
uint32_t integrated_flush;
+
+ mempool_t *flush_entry_pool;
};
-static mempool_t *flush_entry_pool;
-
-static void *flush_entry_alloc(gfp_t gfp_mask, void *pool_data)
-{
- return kmalloc(sizeof(struct flush_entry), gfp_mask);
-}
-
-static void flush_entry_free(void *element, void *pool_data)
-{
- kfree(element);
-}
+static struct kmem_cache *_flush_entry_cache;
static int userspace_do_request(struct log_c *lc, const char *uuid,
int request_type, char *data, size_t data_size,
@@ -254,6 +249,14 @@
goto out;
}
+ lc->flush_entry_pool = mempool_create_slab_pool(FLUSH_ENTRY_POOL_SIZE,
+ _flush_entry_cache);
+ if (!lc->flush_entry_pool) {
+ DMERR("Failed to create flush_entry_pool");
+ r = -ENOMEM;
+ goto out;
+ }
+
/*
* Send table string and get back any opened device.
*/
@@ -310,6 +313,8 @@
out:
kfree(devices_rdata);
if (r) {
+ if (lc->flush_entry_pool)
+ mempool_destroy(lc->flush_entry_pool);
kfree(lc);
kfree(ctr_str);
} else {
@@ -338,6 +343,8 @@
if (lc->log_dev)
dm_put_device(lc->ti, lc->log_dev);
+ mempool_destroy(lc->flush_entry_pool);
+
kfree(lc->usr_argv_str);
kfree(lc);
@@ -461,7 +468,7 @@
static int flush_one_by_one(struct log_c *lc, struct list_head *flush_list)
{
int r = 0;
- struct flush_entry *fe;
+ struct dm_dirty_log_flush_entry *fe;
list_for_each_entry(fe, flush_list, list) {
r = userspace_do_request(lc, lc->uuid, fe->type,
@@ -481,7 +488,7 @@
int r = 0;
int count;
uint32_t type = 0;
- struct flush_entry *fe, *tmp_fe;
+ struct dm_dirty_log_flush_entry *fe, *tmp_fe;
LIST_HEAD(tmp_list);
uint64_t group[MAX_FLUSH_GROUP_COUNT];
@@ -563,7 +570,8 @@
LIST_HEAD(clear_list);
int mark_list_is_empty;
int clear_list_is_empty;
- struct flush_entry *fe, *tmp_fe;
+ struct dm_dirty_log_flush_entry *fe, *tmp_fe;
+ mempool_t *flush_entry_pool = lc->flush_entry_pool;
spin_lock_irqsave(&lc->flush_lock, flags);
list_splice_init(&lc->mark_list, &mark_list);
@@ -643,10 +651,10 @@
{
unsigned long flags;
struct log_c *lc = log->context;
- struct flush_entry *fe;
+ struct dm_dirty_log_flush_entry *fe;
/* Wait for an allocation, but _never_ fail */
- fe = mempool_alloc(flush_entry_pool, GFP_NOIO);
+ fe = mempool_alloc(lc->flush_entry_pool, GFP_NOIO);
BUG_ON(!fe);
spin_lock_irqsave(&lc->flush_lock, flags);
@@ -672,7 +680,7 @@
{
unsigned long flags;
struct log_c *lc = log->context;
- struct flush_entry *fe;
+ struct dm_dirty_log_flush_entry *fe;
/*
* If we fail to allocate, we skip the clearing of
@@ -680,7 +688,7 @@
* to cause the region to be resync'ed when the
* device is activated next time.
*/
- fe = mempool_alloc(flush_entry_pool, GFP_ATOMIC);
+ fe = mempool_alloc(lc->flush_entry_pool, GFP_ATOMIC);
if (!fe) {
DMERR("Failed to allocate memory to clear region.");
return;
@@ -733,7 +741,6 @@
static void userspace_set_region_sync(struct dm_dirty_log *log,
region_t region, int in_sync)
{
- int r;
struct log_c *lc = log->context;
struct {
region_t r;
@@ -743,12 +750,12 @@
pkg.r = region;
pkg.i = (int64_t)in_sync;
- r = userspace_do_request(lc, lc->uuid, DM_ULOG_SET_REGION_SYNC,
- (char *)&pkg, sizeof(pkg), NULL, NULL);
+ (void) userspace_do_request(lc, lc->uuid, DM_ULOG_SET_REGION_SYNC,
+ (char *)&pkg, sizeof(pkg), NULL, NULL);
/*
* It would be nice to be able to report failures.
- * However, it is easy emough to detect and resolve.
+ * However, it is easy enough to detect and resolve.
*/
return;
}
@@ -886,18 +893,16 @@
{
int r = 0;
- flush_entry_pool = mempool_create(100, flush_entry_alloc,
- flush_entry_free, NULL);
-
- if (!flush_entry_pool) {
- DMWARN("Unable to create flush_entry_pool: No memory.");
+ _flush_entry_cache = KMEM_CACHE(dm_dirty_log_flush_entry, 0);
+ if (!_flush_entry_cache) {
+ DMWARN("Unable to create flush_entry_cache: No memory.");
return -ENOMEM;
}
r = dm_ulog_tfr_init();
if (r) {
DMWARN("Unable to initialize userspace log communications");
- mempool_destroy(flush_entry_pool);
+ kmem_cache_destroy(_flush_entry_cache);
return r;
}
@@ -905,7 +910,7 @@
if (r) {
DMWARN("Couldn't register userspace dirty log type");
dm_ulog_tfr_exit();
- mempool_destroy(flush_entry_pool);
+ kmem_cache_destroy(_flush_entry_cache);
return r;
}
@@ -917,7 +922,7 @@
{
dm_dirty_log_type_unregister(&_userspace_type);
dm_ulog_tfr_exit();
- mempool_destroy(flush_entry_pool);
+ kmem_cache_destroy(_flush_entry_cache);
DMINFO("version " DM_LOG_USERSPACE_VSN " unloaded");
return;
diff --git a/drivers/md/dm-log-userspace-transfer.c b/drivers/md/dm-log-userspace-transfer.c
index 39ad966..fdf8ec3 100644
--- a/drivers/md/dm-log-userspace-transfer.c
+++ b/drivers/md/dm-log-userspace-transfer.c
@@ -172,6 +172,7 @@
char *rdata, size_t *rdata_size)
{
int r = 0;
+ unsigned long tmo;
size_t dummy = 0;
int overhead_size = sizeof(struct dm_ulog_request) + sizeof(struct cn_msg);
struct dm_ulog_request *tfr = prealloced_ulog_tfr;
@@ -236,11 +237,11 @@
goto out;
}
- r = wait_for_completion_timeout(&(pkg.complete), DM_ULOG_RETRY_TIMEOUT);
+ tmo = wait_for_completion_timeout(&(pkg.complete), DM_ULOG_RETRY_TIMEOUT);
spin_lock(&receiving_list_lock);
list_del_init(&(pkg.list));
spin_unlock(&receiving_list_lock);
- if (!r) {
+ if (!tmo) {
DMWARN("[%s] Request timed out: [%u/%u] - retrying",
(strlen(uuid) > 8) ?
(uuid + (strlen(uuid) - 8)) : (uuid),
diff --git a/drivers/md/dm-log-writes.c b/drivers/md/dm-log-writes.c
new file mode 100644
index 0000000..93e0844
--- /dev/null
+++ b/drivers/md/dm-log-writes.c
@@ -0,0 +1,825 @@
+/*
+ * Copyright (C) 2014 Facebook. All rights reserved.
+ *
+ * This file is released under the GPL.
+ */
+
+#include <linux/device-mapper.h>
+
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/blkdev.h>
+#include <linux/bio.h>
+#include <linux/slab.h>
+#include <linux/kthread.h>
+#include <linux/freezer.h>
+
+#define DM_MSG_PREFIX "log-writes"
+
+/*
+ * This target will sequentially log all writes to the target device onto the
+ * log device. This is helpful for replaying writes to check for fs consistency
+ * at all times. This target provides a mechanism to mark specific events to
+ * check data at a later time. So for example you would:
+ *
+ * write data
+ * fsync
+ * dmsetup message /dev/whatever mark mymark
+ * unmount /mnt/test
+ *
+ * Then replay the log up to mymark and check the contents of the replay to
+ * verify it matches what was written.
+ *
+ * We log writes only after they have been flushed, this makes the log describe
+ * close to the order in which the data hits the actual disk, not its cache. So
+ * for example the following sequence (W means write, C means complete)
+ *
+ * Wa,Wb,Wc,Cc,Ca,FLUSH,FUAd,Cb,CFLUSH,CFUAd
+ *
+ * Would result in the log looking like this:
+ *
+ * c,a,flush,fuad,b,<other writes>,<next flush>
+ *
+ * This is meant to help expose problems where file systems do not properly wait
+ * on data being written before invoking a FLUSH. FUA bypasses cache so once it
+ * completes it is added to the log as it should be on disk.
+ *
+ * We treat DISCARDs as if they don't bypass cache so that they are logged in
+ * order of completion along with the normal writes. If we didn't do it this
+ * way we would process all the discards first and then write all the data, when
+ * in fact we want to do the data and the discard in the order that they
+ * completed.
+ */
+#define LOG_FLUSH_FLAG (1 << 0)
+#define LOG_FUA_FLAG (1 << 1)
+#define LOG_DISCARD_FLAG (1 << 2)
+#define LOG_MARK_FLAG (1 << 3)
+
+#define WRITE_LOG_VERSION 1
+#define WRITE_LOG_MAGIC 0x6a736677736872
+
+/*
+ * The disk format for this is braindead simple.
+ *
+ * At byte 0 we have our super, followed by the following sequence for
+ * nr_entries:
+ *
+ * [ 1 sector ][ entry->nr_sectors ]
+ * [log_write_entry][ data written ]
+ *
+ * The log_write_entry takes up a full sector so we can have arbitrary length
+ * marks and it leaves us room for extra content in the future.
+ */
+
+/*
+ * Basic info about the log for userspace.
+ */
+struct log_write_super {
+ __le64 magic;
+ __le64 version;
+ __le64 nr_entries;
+ __le32 sectorsize;
+};
+
+/*
+ * sector - the sector we wrote.
+ * nr_sectors - the number of sectors we wrote.
+ * flags - flags for this log entry.
+ * data_len - the size of the data in this log entry, this is for private log
+ * entry stuff, the MARK data provided by userspace for example.
+ */
+struct log_write_entry {
+ __le64 sector;
+ __le64 nr_sectors;
+ __le64 flags;
+ __le64 data_len;
+};
+
+struct log_writes_c {
+ struct dm_dev *dev;
+ struct dm_dev *logdev;
+ u64 logged_entries;
+ u32 sectorsize;
+ atomic_t io_blocks;
+ atomic_t pending_blocks;
+ sector_t next_sector;
+ sector_t end_sector;
+ bool logging_enabled;
+ bool device_supports_discard;
+ spinlock_t blocks_lock;
+ struct list_head unflushed_blocks;
+ struct list_head logging_blocks;
+ wait_queue_head_t wait;
+ struct task_struct *log_kthread;
+};
+
+struct pending_block {
+ int vec_cnt;
+ u64 flags;
+ sector_t sector;
+ sector_t nr_sectors;
+ char *data;
+ u32 datalen;
+ struct list_head list;
+ struct bio_vec vecs[0];
+};
+
+struct per_bio_data {
+ struct pending_block *block;
+};
+
+static void put_pending_block(struct log_writes_c *lc)
+{
+ if (atomic_dec_and_test(&lc->pending_blocks)) {
+ smp_mb__after_atomic();
+ if (waitqueue_active(&lc->wait))
+ wake_up(&lc->wait);
+ }
+}
+
+static void put_io_block(struct log_writes_c *lc)
+{
+ if (atomic_dec_and_test(&lc->io_blocks)) {
+ smp_mb__after_atomic();
+ if (waitqueue_active(&lc->wait))
+ wake_up(&lc->wait);
+ }
+}
+
+static void log_end_io(struct bio *bio, int err)
+{
+ struct log_writes_c *lc = bio->bi_private;
+ struct bio_vec *bvec;
+ int i;
+
+ if (err) {
+ unsigned long flags;
+
+ DMERR("Error writing log block, error=%d", err);
+ spin_lock_irqsave(&lc->blocks_lock, flags);
+ lc->logging_enabled = false;
+ spin_unlock_irqrestore(&lc->blocks_lock, flags);
+ }
+
+ bio_for_each_segment_all(bvec, bio, i)
+ __free_page(bvec->bv_page);
+
+ put_io_block(lc);
+ bio_put(bio);
+}
+
+/*
+ * Meant to be called if there is an error, it will free all the pages
+ * associated with the block.
+ */
+static void free_pending_block(struct log_writes_c *lc,
+ struct pending_block *block)
+{
+ int i;
+
+ for (i = 0; i < block->vec_cnt; i++) {
+ if (block->vecs[i].bv_page)
+ __free_page(block->vecs[i].bv_page);
+ }
+ kfree(block->data);
+ kfree(block);
+ put_pending_block(lc);
+}
+
+static int write_metadata(struct log_writes_c *lc, void *entry,
+ size_t entrylen, void *data, size_t datalen,
+ sector_t sector)
+{
+ struct bio *bio;
+ struct page *page;
+ void *ptr;
+ size_t ret;
+
+ bio = bio_alloc(GFP_KERNEL, 1);
+ if (!bio) {
+ DMERR("Couldn't alloc log bio");
+ goto error;
+ }
+ bio->bi_iter.bi_size = 0;
+ bio->bi_iter.bi_sector = sector;
+ bio->bi_bdev = lc->logdev->bdev;
+ bio->bi_end_io = log_end_io;
+ bio->bi_private = lc;
+ set_bit(BIO_UPTODATE, &bio->bi_flags);
+
+ page = alloc_page(GFP_KERNEL);
+ if (!page) {
+ DMERR("Couldn't alloc log page");
+ bio_put(bio);
+ goto error;
+ }
+
+ ptr = kmap_atomic(page);
+ memcpy(ptr, entry, entrylen);
+ if (datalen)
+ memcpy(ptr + entrylen, data, datalen);
+ memset(ptr + entrylen + datalen, 0,
+ lc->sectorsize - entrylen - datalen);
+ kunmap_atomic(ptr);
+
+ ret = bio_add_page(bio, page, lc->sectorsize, 0);
+ if (ret != lc->sectorsize) {
+ DMERR("Couldn't add page to the log block");
+ goto error_bio;
+ }
+ submit_bio(WRITE, bio);
+ return 0;
+error_bio:
+ bio_put(bio);
+ __free_page(page);
+error:
+ put_io_block(lc);
+ return -1;
+}
+
+static int log_one_block(struct log_writes_c *lc,
+ struct pending_block *block, sector_t sector)
+{
+ struct bio *bio;
+ struct log_write_entry entry;
+ size_t ret;
+ int i;
+
+ entry.sector = cpu_to_le64(block->sector);
+ entry.nr_sectors = cpu_to_le64(block->nr_sectors);
+ entry.flags = cpu_to_le64(block->flags);
+ entry.data_len = cpu_to_le64(block->datalen);
+ if (write_metadata(lc, &entry, sizeof(entry), block->data,
+ block->datalen, sector)) {
+ free_pending_block(lc, block);
+ return -1;
+ }
+
+ if (!block->vec_cnt)
+ goto out;
+ sector++;
+
+ bio = bio_alloc(GFP_KERNEL, block->vec_cnt);
+ if (!bio) {
+ DMERR("Couldn't alloc log bio");
+ goto error;
+ }
+ atomic_inc(&lc->io_blocks);
+ bio->bi_iter.bi_size = 0;
+ bio->bi_iter.bi_sector = sector;
+ bio->bi_bdev = lc->logdev->bdev;
+ bio->bi_end_io = log_end_io;
+ bio->bi_private = lc;
+ set_bit(BIO_UPTODATE, &bio->bi_flags);
+
+ for (i = 0; i < block->vec_cnt; i++) {
+ /*
+ * The page offset is always 0 because we allocate a new page
+ * for every bvec in the original bio for simplicity's sake.
+ */
+ ret = bio_add_page(bio, block->vecs[i].bv_page,
+ block->vecs[i].bv_len, 0);
+ if (ret != block->vecs[i].bv_len) {
+ atomic_inc(&lc->io_blocks);
+ submit_bio(WRITE, bio);
+ bio = bio_alloc(GFP_KERNEL, block->vec_cnt - i);
+ if (!bio) {
+ DMERR("Couldn't alloc log bio");
+ goto error;
+ }
+ bio->bi_iter.bi_size = 0;
+ bio->bi_iter.bi_sector = sector;
+ bio->bi_bdev = lc->logdev->bdev;
+ bio->bi_end_io = log_end_io;
+ bio->bi_private = lc;
+ set_bit(BIO_UPTODATE, &bio->bi_flags);
+
+ ret = bio_add_page(bio, block->vecs[i].bv_page,
+ block->vecs[i].bv_len, 0);
+ if (ret != block->vecs[i].bv_len) {
+ DMERR("Couldn't add page on new bio?");
+ bio_put(bio);
+ goto error;
+ }
+ }
+ sector += block->vecs[i].bv_len >> SECTOR_SHIFT;
+ }
+ submit_bio(WRITE, bio);
+out:
+ kfree(block->data);
+ kfree(block);
+ put_pending_block(lc);
+ return 0;
+error:
+ free_pending_block(lc, block);
+ put_io_block(lc);
+ return -1;
+}
+
+static int log_super(struct log_writes_c *lc)
+{
+ struct log_write_super super;
+
+ super.magic = cpu_to_le64(WRITE_LOG_MAGIC);
+ super.version = cpu_to_le64(WRITE_LOG_VERSION);
+ super.nr_entries = cpu_to_le64(lc->logged_entries);
+ super.sectorsize = cpu_to_le32(lc->sectorsize);
+
+ if (write_metadata(lc, &super, sizeof(super), NULL, 0, 0)) {
+ DMERR("Couldn't write super");
+ return -1;
+ }
+
+ return 0;
+}
+
+static inline sector_t logdev_last_sector(struct log_writes_c *lc)
+{
+ return i_size_read(lc->logdev->bdev->bd_inode) >> SECTOR_SHIFT;
+}
+
+static int log_writes_kthread(void *arg)
+{
+ struct log_writes_c *lc = (struct log_writes_c *)arg;
+ sector_t sector = 0;
+
+ while (!kthread_should_stop()) {
+ bool super = false;
+ bool logging_enabled;
+ struct pending_block *block = NULL;
+ int ret;
+
+ spin_lock_irq(&lc->blocks_lock);
+ if (!list_empty(&lc->logging_blocks)) {
+ block = list_first_entry(&lc->logging_blocks,
+ struct pending_block, list);
+ list_del_init(&block->list);
+ if (!lc->logging_enabled)
+ goto next;
+
+ sector = lc->next_sector;
+ if (block->flags & LOG_DISCARD_FLAG)
+ lc->next_sector++;
+ else
+ lc->next_sector += block->nr_sectors + 1;
+
+ /*
+ * Apparently the size of the device may not be known
+ * right away, so handle this properly.
+ */
+ if (!lc->end_sector)
+ lc->end_sector = logdev_last_sector(lc);
+ if (lc->end_sector &&
+ lc->next_sector >= lc->end_sector) {
+ DMERR("Ran out of space on the logdev");
+ lc->logging_enabled = false;
+ goto next;
+ }
+ lc->logged_entries++;
+ atomic_inc(&lc->io_blocks);
+
+ super = (block->flags & (LOG_FUA_FLAG | LOG_MARK_FLAG));
+ if (super)
+ atomic_inc(&lc->io_blocks);
+ }
+next:
+ logging_enabled = lc->logging_enabled;
+ spin_unlock_irq(&lc->blocks_lock);
+ if (block) {
+ if (logging_enabled) {
+ ret = log_one_block(lc, block, sector);
+ if (!ret && super)
+ ret = log_super(lc);
+ if (ret) {
+ spin_lock_irq(&lc->blocks_lock);
+ lc->logging_enabled = false;
+ spin_unlock_irq(&lc->blocks_lock);
+ }
+ } else
+ free_pending_block(lc, block);
+ continue;
+ }
+
+ if (!try_to_freeze()) {
+ set_current_state(TASK_INTERRUPTIBLE);
+ if (!kthread_should_stop() &&
+ !atomic_read(&lc->pending_blocks))
+ schedule();
+ __set_current_state(TASK_RUNNING);
+ }
+ }
+ return 0;
+}
+
+/*
+ * Construct a log-writes mapping:
+ * log-writes <dev_path> <log_dev_path>
+ */
+static int log_writes_ctr(struct dm_target *ti, unsigned int argc, char **argv)
+{
+ struct log_writes_c *lc;
+ struct dm_arg_set as;
+ const char *devname, *logdevname;
+
+ as.argc = argc;
+ as.argv = argv;
+
+ if (argc < 2) {
+ ti->error = "Invalid argument count";
+ return -EINVAL;
+ }
+
+ lc = kzalloc(sizeof(struct log_writes_c), GFP_KERNEL);
+ if (!lc) {
+ ti->error = "Cannot allocate context";
+ return -ENOMEM;
+ }
+ spin_lock_init(&lc->blocks_lock);
+ INIT_LIST_HEAD(&lc->unflushed_blocks);
+ INIT_LIST_HEAD(&lc->logging_blocks);
+ init_waitqueue_head(&lc->wait);
+ lc->sectorsize = 1 << SECTOR_SHIFT;
+ atomic_set(&lc->io_blocks, 0);
+ atomic_set(&lc->pending_blocks, 0);
+
+ devname = dm_shift_arg(&as);
+ if (dm_get_device(ti, devname, dm_table_get_mode(ti->table), &lc->dev)) {
+ ti->error = "Device lookup failed";
+ goto bad;
+ }
+
+ logdevname = dm_shift_arg(&as);
+ if (dm_get_device(ti, logdevname, dm_table_get_mode(ti->table), &lc->logdev)) {
+ ti->error = "Log device lookup failed";
+ dm_put_device(ti, lc->dev);
+ goto bad;
+ }
+
+ lc->log_kthread = kthread_run(log_writes_kthread, lc, "log-write");
+ if (!lc->log_kthread) {
+ ti->error = "Couldn't alloc kthread";
+ dm_put_device(ti, lc->dev);
+ dm_put_device(ti, lc->logdev);
+ goto bad;
+ }
+
+ /* We put the super at sector 0, start logging at sector 1 */
+ lc->next_sector = 1;
+ lc->logging_enabled = true;
+ lc->end_sector = logdev_last_sector(lc);
+ lc->device_supports_discard = true;
+
+ ti->num_flush_bios = 1;
+ ti->flush_supported = true;
+ ti->num_discard_bios = 1;
+ ti->discards_supported = true;
+ ti->per_bio_data_size = sizeof(struct per_bio_data);
+ ti->private = lc;
+ return 0;
+
+bad:
+ kfree(lc);
+ return -EINVAL;
+}
+
+static int log_mark(struct log_writes_c *lc, char *data)
+{
+ struct pending_block *block;
+ size_t maxsize = lc->sectorsize - sizeof(struct log_write_entry);
+
+ block = kzalloc(sizeof(struct pending_block), GFP_KERNEL);
+ if (!block) {
+ DMERR("Error allocating pending block");
+ return -ENOMEM;
+ }
+
+ block->data = kstrndup(data, maxsize, GFP_KERNEL);
+ if (!block->data) {
+ DMERR("Error copying mark data");
+ kfree(block);
+ return -ENOMEM;
+ }
+ atomic_inc(&lc->pending_blocks);
+ block->datalen = strlen(block->data);
+ block->flags |= LOG_MARK_FLAG;
+ spin_lock_irq(&lc->blocks_lock);
+ list_add_tail(&block->list, &lc->logging_blocks);
+ spin_unlock_irq(&lc->blocks_lock);
+ wake_up_process(lc->log_kthread);
+ return 0;
+}
+
+static void log_writes_dtr(struct dm_target *ti)
+{
+ struct log_writes_c *lc = ti->private;
+
+ spin_lock_irq(&lc->blocks_lock);
+ list_splice_init(&lc->unflushed_blocks, &lc->logging_blocks);
+ spin_unlock_irq(&lc->blocks_lock);
+
+ /*
+ * This is just nice to have since it'll update the super to include the
+ * unflushed blocks, if it fails we don't really care.
+ */
+ log_mark(lc, "dm-log-writes-end");
+ wake_up_process(lc->log_kthread);
+ wait_event(lc->wait, !atomic_read(&lc->io_blocks) &&
+ !atomic_read(&lc->pending_blocks));
+ kthread_stop(lc->log_kthread);
+
+ WARN_ON(!list_empty(&lc->logging_blocks));
+ WARN_ON(!list_empty(&lc->unflushed_blocks));
+ dm_put_device(ti, lc->dev);
+ dm_put_device(ti, lc->logdev);
+ kfree(lc);
+}
+
+static void normal_map_bio(struct dm_target *ti, struct bio *bio)
+{
+ struct log_writes_c *lc = ti->private;
+
+ bio->bi_bdev = lc->dev->bdev;
+}
+
+static int log_writes_map(struct dm_target *ti, struct bio *bio)
+{
+ struct log_writes_c *lc = ti->private;
+ struct per_bio_data *pb = dm_per_bio_data(bio, sizeof(struct per_bio_data));
+ struct pending_block *block;
+ struct bvec_iter iter;
+ struct bio_vec bv;
+ size_t alloc_size;
+ int i = 0;
+ bool flush_bio = (bio->bi_rw & REQ_FLUSH);
+ bool fua_bio = (bio->bi_rw & REQ_FUA);
+ bool discard_bio = (bio->bi_rw & REQ_DISCARD);
+
+ pb->block = NULL;
+
+ /* Don't bother doing anything if logging has been disabled */
+ if (!lc->logging_enabled)
+ goto map_bio;
+
+ /*
+ * Map reads as normal.
+ */
+ if (bio_data_dir(bio) == READ)
+ goto map_bio;
+
+ /* No sectors and not a flush? Don't care */
+ if (!bio_sectors(bio) && !flush_bio)
+ goto map_bio;
+
+ /*
+ * Discards will have bi_size set but there's no actual data, so just
+ * allocate the size of the pending block.
+ */
+ if (discard_bio)
+ alloc_size = sizeof(struct pending_block);
+ else
+ alloc_size = sizeof(struct pending_block) + sizeof(struct bio_vec) * bio_segments(bio);
+
+ block = kzalloc(alloc_size, GFP_NOIO);
+ if (!block) {
+ DMERR("Error allocating pending block");
+ spin_lock_irq(&lc->blocks_lock);
+ lc->logging_enabled = false;
+ spin_unlock_irq(&lc->blocks_lock);
+ return -ENOMEM;
+ }
+ INIT_LIST_HEAD(&block->list);
+ pb->block = block;
+ atomic_inc(&lc->pending_blocks);
+
+ if (flush_bio)
+ block->flags |= LOG_FLUSH_FLAG;
+ if (fua_bio)
+ block->flags |= LOG_FUA_FLAG;
+ if (discard_bio)
+ block->flags |= LOG_DISCARD_FLAG;
+
+ block->sector = bio->bi_iter.bi_sector;
+ block->nr_sectors = bio_sectors(bio);
+
+ /* We don't need the data, just submit */
+ if (discard_bio) {
+ WARN_ON(flush_bio || fua_bio);
+ if (lc->device_supports_discard)
+ goto map_bio;
+ bio_endio(bio, 0);
+ return DM_MAPIO_SUBMITTED;
+ }
+
+ /* Flush bio, splice the unflushed blocks onto this list and submit */
+ if (flush_bio && !bio_sectors(bio)) {
+ spin_lock_irq(&lc->blocks_lock);
+ list_splice_init(&lc->unflushed_blocks, &block->list);
+ spin_unlock_irq(&lc->blocks_lock);
+ goto map_bio;
+ }
+
+ /*
+ * We will write this bio somewhere else way later so we need to copy
+ * the actual contents into new pages so we know the data will always be
+ * there.
+ *
+ * We do this because this could be a bio from O_DIRECT in which case we
+ * can't just hold onto the page until some later point, we have to
+ * manually copy the contents.
+ */
+ bio_for_each_segment(bv, bio, iter) {
+ struct page *page;
+ void *src, *dst;
+
+ page = alloc_page(GFP_NOIO);
+ if (!page) {
+ DMERR("Error allocing page");
+ free_pending_block(lc, block);
+ spin_lock_irq(&lc->blocks_lock);
+ lc->logging_enabled = false;
+ spin_unlock_irq(&lc->blocks_lock);
+ return -ENOMEM;
+ }
+
+ src = kmap_atomic(bv.bv_page);
+ dst = kmap_atomic(page);
+ memcpy(dst, src + bv.bv_offset, bv.bv_len);
+ kunmap_atomic(dst);
+ kunmap_atomic(src);
+ block->vecs[i].bv_page = page;
+ block->vecs[i].bv_len = bv.bv_len;
+ block->vec_cnt++;
+ i++;
+ }
+
+ /* Had a flush with data in it, weird */
+ if (flush_bio) {
+ spin_lock_irq(&lc->blocks_lock);
+ list_splice_init(&lc->unflushed_blocks, &block->list);
+ spin_unlock_irq(&lc->blocks_lock);
+ }
+map_bio:
+ normal_map_bio(ti, bio);
+ return DM_MAPIO_REMAPPED;
+}
+
+static int normal_end_io(struct dm_target *ti, struct bio *bio, int error)
+{
+ struct log_writes_c *lc = ti->private;
+ struct per_bio_data *pb = dm_per_bio_data(bio, sizeof(struct per_bio_data));
+
+ if (bio_data_dir(bio) == WRITE && pb->block) {
+ struct pending_block *block = pb->block;
+ unsigned long flags;
+
+ spin_lock_irqsave(&lc->blocks_lock, flags);
+ if (block->flags & LOG_FLUSH_FLAG) {
+ list_splice_tail_init(&block->list, &lc->logging_blocks);
+ list_add_tail(&block->list, &lc->logging_blocks);
+ wake_up_process(lc->log_kthread);
+ } else if (block->flags & LOG_FUA_FLAG) {
+ list_add_tail(&block->list, &lc->logging_blocks);
+ wake_up_process(lc->log_kthread);
+ } else
+ list_add_tail(&block->list, &lc->unflushed_blocks);
+ spin_unlock_irqrestore(&lc->blocks_lock, flags);
+ }
+
+ return error;
+}
+
+/*
+ * INFO format: <logged entries> <highest allocated sector>
+ */
+static void log_writes_status(struct dm_target *ti, status_type_t type,
+ unsigned status_flags, char *result,
+ unsigned maxlen)
+{
+ unsigned sz = 0;
+ struct log_writes_c *lc = ti->private;
+
+ switch (type) {
+ case STATUSTYPE_INFO:
+ DMEMIT("%llu %llu", lc->logged_entries,
+ (unsigned long long)lc->next_sector - 1);
+ if (!lc->logging_enabled)
+ DMEMIT(" logging_disabled");
+ break;
+
+ case STATUSTYPE_TABLE:
+ DMEMIT("%s %s", lc->dev->name, lc->logdev->name);
+ break;
+ }
+}
+
+static int log_writes_ioctl(struct dm_target *ti, unsigned int cmd,
+ unsigned long arg)
+{
+ struct log_writes_c *lc = ti->private;
+ struct dm_dev *dev = lc->dev;
+ int r = 0;
+
+ /*
+ * Only pass ioctls through if the device sizes match exactly.
+ */
+ if (ti->len != i_size_read(dev->bdev->bd_inode) >> SECTOR_SHIFT)
+ r = scsi_verify_blk_ioctl(NULL, cmd);
+
+ return r ? : __blkdev_driver_ioctl(dev->bdev, dev->mode, cmd, arg);
+}
+
+static int log_writes_merge(struct dm_target *ti, struct bvec_merge_data *bvm,
+ struct bio_vec *biovec, int max_size)
+{
+ struct log_writes_c *lc = ti->private;
+ struct request_queue *q = bdev_get_queue(lc->dev->bdev);
+
+ if (!q->merge_bvec_fn)
+ return max_size;
+
+ bvm->bi_bdev = lc->dev->bdev;
+ bvm->bi_sector = dm_target_offset(ti, bvm->bi_sector);
+
+ return min(max_size, q->merge_bvec_fn(q, bvm, biovec));
+}
+
+static int log_writes_iterate_devices(struct dm_target *ti,
+ iterate_devices_callout_fn fn,
+ void *data)
+{
+ struct log_writes_c *lc = ti->private;
+
+ return fn(ti, lc->dev, 0, ti->len, data);
+}
+
+/*
+ * Messages supported:
+ * mark <mark data> - specify the marked data.
+ */
+static int log_writes_message(struct dm_target *ti, unsigned argc, char **argv)
+{
+ int r = -EINVAL;
+ struct log_writes_c *lc = ti->private;
+
+ if (argc != 2) {
+ DMWARN("Invalid log-writes message arguments, expect 2 arguments, got %d", argc);
+ return r;
+ }
+
+ if (!strcasecmp(argv[0], "mark"))
+ r = log_mark(lc, argv[1]);
+ else
+ DMWARN("Unrecognised log writes target message received: %s", argv[0]);
+
+ return r;
+}
+
+static void log_writes_io_hints(struct dm_target *ti, struct queue_limits *limits)
+{
+ struct log_writes_c *lc = ti->private;
+ struct request_queue *q = bdev_get_queue(lc->dev->bdev);
+
+ if (!q || !blk_queue_discard(q)) {
+ lc->device_supports_discard = false;
+ limits->discard_granularity = 1 << SECTOR_SHIFT;
+ limits->max_discard_sectors = (UINT_MAX >> SECTOR_SHIFT);
+ }
+}
+
+static struct target_type log_writes_target = {
+ .name = "log-writes",
+ .version = {1, 0, 0},
+ .module = THIS_MODULE,
+ .ctr = log_writes_ctr,
+ .dtr = log_writes_dtr,
+ .map = log_writes_map,
+ .end_io = normal_end_io,
+ .status = log_writes_status,
+ .ioctl = log_writes_ioctl,
+ .merge = log_writes_merge,
+ .message = log_writes_message,
+ .iterate_devices = log_writes_iterate_devices,
+ .io_hints = log_writes_io_hints,
+};
+
+static int __init dm_log_writes_init(void)
+{
+ int r = dm_register_target(&log_writes_target);
+
+ if (r < 0)
+ DMERR("register failed %d", r);
+
+ return r;
+}
+
+static void __exit dm_log_writes_exit(void)
+{
+ dm_unregister_target(&log_writes_target);
+}
+
+module_init(dm_log_writes_init);
+module_exit(dm_log_writes_exit);
+
+MODULE_DESCRIPTION(DM_NAME " log writes target");
+MODULE_AUTHOR("Josef Bacik <jbacik@fb.com>");
+MODULE_LICENSE("GPL");
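
The header comment in dm-log-writes.c describes the on-disk layout: a super
block in sector 0, then one header sector plus the logged data per entry, with
DISCARD entries logging only the header sector. A minimal user-space reader
sketch for that layout follows; the structures mirror log_write_super and
log_write_entry, a 512-byte sector size and a little-endian host are assumed,
and error handling is omitted for brevity:

	#include <stdint.h>
	#include <stdio.h>

	#define LOG_DISCARD_FLAG (1 << 2)	/* mirrors the target's flag */

	struct super { uint64_t magic, version, nr_entries; uint32_t sectorsize; };
	struct entry { uint64_t sector, nr_sectors, flags, data_len; };

	int main(int argc, char **argv)
	{
		FILE *log;
		struct super s;
		struct entry e;
		uint64_t i, cur = 1;	/* the super occupies sector 0 */

		if (argc < 2 || !(log = fopen(argv[1], "rb")))
			return 1;

		fread(&s, sizeof(s), 1, log);
		for (i = 0; i < s.nr_entries; i++) {
			fseek(log, (long)(cur * 512), SEEK_SET);
			fread(&e, sizeof(e), 1, log);
			printf("entry %llu: sector %llu, %llu sectors, flags %#llx\n",
			       (unsigned long long)i, (unsigned long long)e.sector,
			       (unsigned long long)e.nr_sectors,
			       (unsigned long long)e.flags);
			/* discards log no data; everything else is followed by it */
			cur += 1 + ((e.flags & LOG_DISCARD_FLAG) ? 0 : e.nr_sectors);
		}
		fclose(log);
		return 0;
	}
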
diff --git a/drivers/md/dm-mpath.c b/drivers/md/dm-mpath.c
index d376dc8..6395347 100644
--- a/drivers/md/dm-mpath.c
+++ b/drivers/md/dm-mpath.c
@@ -428,7 +428,7 @@
} else {
/* blk-mq request-based interface */
*__clone = blk_get_request(bdev_get_queue(bdev),
- rq_data_dir(rq), GFP_KERNEL);
+ rq_data_dir(rq), GFP_ATOMIC);
if (IS_ERR(*__clone))
/* ENOMEM, requeue */
return r;
@@ -1627,7 +1627,7 @@
{
struct request_queue *q = bdev_get_queue(pgpath->path.dev->bdev);
- return dm_underlying_device_busy(q);
+ return blk_lld_busy(q);
}
/*
@@ -1703,7 +1703,7 @@
*---------------------------------------------------------------*/
static struct target_type multipath_target = {
.name = "multipath",
- .version = {1, 8, 0},
+ .version = {1, 9, 0},
.module = THIS_MODULE,
.ctr = multipath_ctr,
.dtr = multipath_dtr,
diff --git a/drivers/md/dm-sysfs.c b/drivers/md/dm-sysfs.c
index c62c5ab..7e818f5 100644
--- a/drivers/md/dm-sysfs.c
+++ b/drivers/md/dm-sysfs.c
@@ -11,7 +11,7 @@
struct dm_sysfs_attr {
struct attribute attr;
ssize_t (*show)(struct mapped_device *, char *);
- ssize_t (*store)(struct mapped_device *, char *);
+ ssize_t (*store)(struct mapped_device *, const char *, size_t count);
};
#define DM_ATTR_RO(_name) \
@@ -39,6 +39,31 @@
return ret;
}
+#define DM_ATTR_RW(_name) \
+struct dm_sysfs_attr dm_attr_##_name = \
+ __ATTR(_name, S_IRUGO | S_IWUSR, dm_attr_##_name##_show, dm_attr_##_name##_store)
+
+static ssize_t dm_attr_store(struct kobject *kobj, struct attribute *attr,
+ const char *page, size_t count)
+{
+ struct dm_sysfs_attr *dm_attr;
+ struct mapped_device *md;
+ ssize_t ret;
+
+ dm_attr = container_of(attr, struct dm_sysfs_attr, attr);
+ if (!dm_attr->store)
+ return -EIO;
+
+ md = dm_get_from_kobject(kobj);
+ if (!md)
+ return -EINVAL;
+
+ ret = dm_attr->store(md, page, count);
+ dm_put(md);
+
+ return ret;
+}
+
static ssize_t dm_attr_name_show(struct mapped_device *md, char *buf)
{
if (dm_copy_name_and_uuid(md, buf, NULL))
@@ -64,25 +89,33 @@
return strlen(buf);
}
+static ssize_t dm_attr_use_blk_mq_show(struct mapped_device *md, char *buf)
+{
+ sprintf(buf, "%d\n", dm_use_blk_mq(md));
+
+ return strlen(buf);
+}
+
static DM_ATTR_RO(name);
static DM_ATTR_RO(uuid);
static DM_ATTR_RO(suspended);
+static DM_ATTR_RO(use_blk_mq);
+static DM_ATTR_RW(rq_based_seq_io_merge_deadline);
static struct attribute *dm_attrs[] = {
&dm_attr_name.attr,
&dm_attr_uuid.attr,
&dm_attr_suspended.attr,
+ &dm_attr_use_blk_mq.attr,
+ &dm_attr_rq_based_seq_io_merge_deadline.attr,
NULL,
};
static const struct sysfs_ops dm_sysfs_ops = {
.show = dm_attr_show,
+ .store = dm_attr_store,
};
-/*
- * dm kobject is embedded in mapped_device structure
- * no need to define release function here
- */
static struct kobj_type dm_ktype = {
.sysfs_ops = &dm_sysfs_ops,
.default_attrs = dm_attrs,
diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index 6554d91..d9b00b8 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -18,6 +18,8 @@
#include <linux/mutex.h>
#include <linux/delay.h>
#include <linux/atomic.h>
+#include <linux/blk-mq.h>
+#include <linux/mount.h>
#define DM_MSG_PREFIX "table"
@@ -372,23 +374,18 @@
int r;
dev_t uninitialized_var(dev);
struct dm_dev_internal *dd;
- unsigned int major, minor;
struct dm_table *t = ti->table;
- char dummy;
+ struct block_device *bdev;
BUG_ON(!t);
- if (sscanf(path, "%u:%u%c", &major, &minor, &dummy) == 2) {
- /* Extract the major/minor numbers */
- dev = MKDEV(major, minor);
- if (MAJOR(dev) != major || MINOR(dev) != minor)
- return -EOVERFLOW;
+ /* convert the path to a device */
+ bdev = lookup_bdev(path);
+ if (IS_ERR(bdev)) {
+ dev = name_to_dev_t(path);
+ if (!dev)
+ return -ENODEV;
} else {
- /* convert the path to a device */
- struct block_device *bdev = lookup_bdev(path);
-
- if (IS_ERR(bdev))
- return PTR_ERR(bdev);
dev = bdev->bd_dev;
bdput(bdev);
}
@@ -939,7 +936,7 @@
return dm_table_get_type(t) == DM_TYPE_MQ_REQUEST_BASED;
}
-static int dm_table_alloc_md_mempools(struct dm_table *t)
+static int dm_table_alloc_md_mempools(struct dm_table *t, struct mapped_device *md)
{
unsigned type = dm_table_get_type(t);
unsigned per_bio_data_size = 0;
@@ -957,7 +954,7 @@
per_bio_data_size = max(per_bio_data_size, tgt->per_bio_data_size);
}
- t->mempools = dm_alloc_md_mempools(type, t->integrity_supported, per_bio_data_size);
+ t->mempools = dm_alloc_md_mempools(md, type, t->integrity_supported, per_bio_data_size);
if (!t->mempools)
return -ENOMEM;
@@ -1127,7 +1124,7 @@
return r;
}
- r = dm_table_alloc_md_mempools(t);
+ r = dm_table_alloc_md_mempools(t, t->md);
if (r)
DMERR("unable to allocate mempools");
@@ -1339,14 +1336,14 @@
continue;
if (ti->flush_supported)
- return 1;
+ return true;
if (ti->type->iterate_devices &&
ti->type->iterate_devices(ti, device_flush_capable, &flush))
- return 1;
+ return true;
}
- return 0;
+ return false;
}
static bool dm_table_discard_zeroes_data(struct dm_table *t)
@@ -1359,10 +1356,10 @@
ti = dm_table_get_target(t, i++);
if (ti->discard_zeroes_data_unsupported)
- return 0;
+ return false;
}
- return 1;
+ return true;
}
static int device_is_nonrot(struct dm_target *ti, struct dm_dev *dev,
@@ -1408,10 +1405,10 @@
if (!ti->type->iterate_devices ||
!ti->type->iterate_devices(ti, func, NULL))
- return 0;
+ return false;
}
- return 1;
+ return true;
}
static int device_not_write_same_capable(struct dm_target *ti, struct dm_dev *dev,
@@ -1468,14 +1465,14 @@
continue;
if (ti->discards_supported)
- return 1;
+ return true;
if (ti->type->iterate_devices &&
ti->type->iterate_devices(ti, device_discard_capable, NULL))
- return 1;
+ return true;
}
- return 0;
+ return false;
}
void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
@@ -1677,20 +1674,6 @@
return r;
}
-int dm_table_any_busy_target(struct dm_table *t)
-{
- unsigned i;
- struct dm_target *ti;
-
- for (i = 0; i < t->num_targets; i++) {
- ti = t->targets + i;
- if (ti->type->busy && ti->type->busy(ti))
- return 1;
- }
-
- return 0;
-}
-
struct mapped_device *dm_table_get_md(struct dm_table *t)
{
return t->md;
@@ -1709,9 +1692,13 @@
md = dm_table_get_md(t);
queue = dm_get_md_queue(md);
if (queue) {
- spin_lock_irqsave(queue->queue_lock, flags);
- blk_run_queue_async(queue);
- spin_unlock_irqrestore(queue->queue_lock, flags);
+ if (queue->mq_ops)
+ blk_mq_run_hw_queues(queue, true);
+ else {
+ spin_lock_irqsave(queue->queue_lock, flags);
+ blk_run_queue_async(queue);
+ spin_unlock_irqrestore(queue->queue_lock, flags);
+ }
}
}
EXPORT_SYMBOL(dm_table_run_md_queue_async);
diff --git a/drivers/md/dm-verity.c b/drivers/md/dm-verity.c
index 7a7bab8..66616db 100644
--- a/drivers/md/dm-verity.c
+++ b/drivers/md/dm-verity.c
@@ -18,20 +18,39 @@
#include <linux/module.h>
#include <linux/device-mapper.h>
+#include <linux/reboot.h>
#include <crypto/hash.h>
#define DM_MSG_PREFIX "verity"
+#define DM_VERITY_ENV_LENGTH 42
+#define DM_VERITY_ENV_VAR_NAME "DM_VERITY_ERR_BLOCK_NR"
+
#define DM_VERITY_IO_VEC_INLINE 16
#define DM_VERITY_MEMPOOL_SIZE 4
#define DM_VERITY_DEFAULT_PREFETCH_SIZE 262144
#define DM_VERITY_MAX_LEVELS 63
+#define DM_VERITY_MAX_CORRUPTED_ERRS 100
+
+#define DM_VERITY_OPT_LOGGING "ignore_corruption"
+#define DM_VERITY_OPT_RESTART "restart_on_corruption"
static unsigned dm_verity_prefetch_cluster = DM_VERITY_DEFAULT_PREFETCH_SIZE;
module_param_named(prefetch_cluster, dm_verity_prefetch_cluster, uint, S_IRUGO | S_IWUSR);
+enum verity_mode {
+ DM_VERITY_MODE_EIO,
+ DM_VERITY_MODE_LOGGING,
+ DM_VERITY_MODE_RESTART
+};
+
+enum verity_block_type {
+ DM_VERITY_BLOCK_TYPE_DATA,
+ DM_VERITY_BLOCK_TYPE_METADATA
+};
+
struct dm_verity {
struct dm_dev *data_dev;
struct dm_dev *hash_dev;
@@ -54,6 +73,8 @@
unsigned digest_size; /* digest size for the current hash algorithm */
unsigned shash_descsize;/* the size of temporary space for crypto */
int hash_failed; /* set to 1 if hash of any block failed */
+ enum verity_mode mode; /* mode for handling verification errors */
+ unsigned corrupted_errs;/* Number of errors for corrupted blocks */
mempool_t *vec_mempool; /* mempool of bio vector */
@@ -175,6 +196,57 @@
}
/*
+ * Handle verification errors.
+ */
+static int verity_handle_err(struct dm_verity *v, enum verity_block_type type,
+ unsigned long long block)
+{
+ char verity_env[DM_VERITY_ENV_LENGTH];
+ char *envp[] = { verity_env, NULL };
+ const char *type_str = "";
+ struct mapped_device *md = dm_table_get_md(v->ti->table);
+
+ /* Corruption should be visible in device status in all modes */
+ v->hash_failed = 1;
+
+ if (v->corrupted_errs >= DM_VERITY_MAX_CORRUPTED_ERRS)
+ goto out;
+
+ v->corrupted_errs++;
+
+ switch (type) {
+ case DM_VERITY_BLOCK_TYPE_DATA:
+ type_str = "data";
+ break;
+ case DM_VERITY_BLOCK_TYPE_METADATA:
+ type_str = "metadata";
+ break;
+ default:
+ BUG();
+ }
+
+ DMERR("%s: %s block %llu is corrupted", v->data_dev->name, type_str,
+ block);
+
+ if (v->corrupted_errs == DM_VERITY_MAX_CORRUPTED_ERRS)
+ DMERR("%s: reached maximum errors", v->data_dev->name);
+
+ snprintf(verity_env, DM_VERITY_ENV_LENGTH, "%s=%d,%llu",
+ DM_VERITY_ENV_VAR_NAME, type, block);
+
+ kobject_uevent_env(&disk_to_dev(dm_disk(md))->kobj, KOBJ_CHANGE, envp);
+
+out:
+ if (v->mode == DM_VERITY_MODE_LOGGING)
+ return 0;
+
+ if (v->mode == DM_VERITY_MODE_RESTART)
+ kernel_restart("dm-verity device corrupted");
+
+ return 1;
+}
+
+/*
* Verify hash of a metadata block pertaining to the specified data block
* ("block" argument) at a specified level ("level" argument).
*
@@ -251,11 +323,11 @@
goto release_ret_r;
}
if (unlikely(memcmp(result, io_want_digest(v, io), v->digest_size))) {
- DMERR_LIMIT("metadata block %llu is corrupted",
- (unsigned long long)hash_block);
- v->hash_failed = 1;
- r = -EIO;
- goto release_ret_r;
+ if (verity_handle_err(v, DM_VERITY_BLOCK_TYPE_METADATA,
+ hash_block)) {
+ r = -EIO;
+ goto release_ret_r;
+ }
} else
aux->hash_verified = 1;
}
@@ -367,10 +439,9 @@
return r;
}
if (unlikely(memcmp(result, io_want_digest(v, io), v->digest_size))) {
- DMERR_LIMIT("data block %llu is corrupted",
- (unsigned long long)(io->block + b));
- v->hash_failed = 1;
- return -EIO;
+ if (verity_handle_err(v, DM_VERITY_BLOCK_TYPE_DATA,
+ io->block + b))
+ return -EIO;
}
}
@@ -546,6 +617,19 @@
else
for (x = 0; x < v->salt_size; x++)
DMEMIT("%02x", v->salt[x]);
+ if (v->mode != DM_VERITY_MODE_EIO) {
+ DMEMIT(" 1 ");
+ switch (v->mode) {
+ case DM_VERITY_MODE_LOGGING:
+ DMEMIT(DM_VERITY_OPT_LOGGING);
+ break;
+ case DM_VERITY_MODE_RESTART:
+ DMEMIT(DM_VERITY_OPT_RESTART);
+ break;
+ default:
+ BUG();
+ }
+ }
break;
}
}
@@ -647,13 +731,19 @@
static int verity_ctr(struct dm_target *ti, unsigned argc, char **argv)
{
struct dm_verity *v;
- unsigned num;
+ struct dm_arg_set as;
+ const char *opt_string;
+ unsigned int num, opt_params;
unsigned long long num_ll;
int r;
int i;
sector_t hash_position;
char dummy;
+ static struct dm_arg _args[] = {
+ {0, 1, "Invalid number of feature args"},
+ };
+
v = kzalloc(sizeof(struct dm_verity), GFP_KERNEL);
if (!v) {
ti->error = "Cannot allocate verity structure";
@@ -668,8 +758,8 @@
goto bad;
}
- if (argc != 10) {
- ti->error = "Invalid argument count: exactly 10 arguments required";
+ if (argc < 10) {
+ ti->error = "Not enough arguments";
r = -EINVAL;
goto bad;
}
@@ -790,6 +880,39 @@
}
}
+ argv += 10;
+ argc -= 10;
+
+ /* Optional parameters */
+ if (argc) {
+ as.argc = argc;
+ as.argv = argv;
+
+ r = dm_read_arg_group(_args, &as, &opt_params, &ti->error);
+ if (r)
+ goto bad;
+
+ while (opt_params) {
+ opt_params--;
+ opt_string = dm_shift_arg(&as);
+ if (!opt_string) {
+ ti->error = "Not enough feature arguments";
+ r = -EINVAL;
+ goto bad;
+ }
+
+ if (!strcasecmp(opt_string, DM_VERITY_OPT_LOGGING))
+ v->mode = DM_VERITY_MODE_LOGGING;
+ else if (!strcasecmp(opt_string, DM_VERITY_OPT_RESTART))
+ v->mode = DM_VERITY_MODE_RESTART;
+ else {
+ ti->error = "Invalid feature arguments";
+ r = -EINVAL;
+ goto bad;
+ }
+ }
+ }
+
v->hash_per_block_bits =
__fls((1 << v->hash_dev_block_bits) / v->digest_size);
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 8001fe9..f8c7ca3 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -21,6 +21,9 @@
#include <linux/delay.h>
#include <linux/wait.h>
#include <linux/kthread.h>
+#include <linux/ktime.h>
+#include <linux/elevator.h> /* for rq_end_sector() */
+#include <linux/blk-mq.h>
#include <trace/events/block.h>
@@ -216,8 +219,29 @@
struct kthread_worker kworker;
struct task_struct *kworker_task;
+
+ /* for request-based merge heuristic in dm_request_fn() */
+ unsigned seq_rq_merge_deadline_usecs;
+ int last_rq_rw;
+ sector_t last_rq_pos;
+ ktime_t last_rq_start_time;
+
+ /* for blk-mq request-based DM support */
+ struct blk_mq_tag_set tag_set;
+ bool use_blk_mq;
};
+#ifdef CONFIG_DM_MQ_DEFAULT
+static bool use_blk_mq = true;
+#else
+static bool use_blk_mq = false;
+#endif
+
+bool dm_use_blk_mq(struct mapped_device *md)
+{
+ return md->use_blk_mq;
+}
+
/*
* For mempools pre-allocation at the table loading time.
*/
@@ -250,35 +274,35 @@
*/
static unsigned reserved_rq_based_ios = RESERVED_REQUEST_BASED_IOS;
-static unsigned __dm_get_reserved_ios(unsigned *reserved_ios,
+static unsigned __dm_get_module_param(unsigned *module_param,
unsigned def, unsigned max)
{
- unsigned ios = ACCESS_ONCE(*reserved_ios);
- unsigned modified_ios = 0;
+ unsigned param = ACCESS_ONCE(*module_param);
+ unsigned modified_param = 0;
- if (!ios)
- modified_ios = def;
- else if (ios > max)
- modified_ios = max;
+ if (!param)
+ modified_param = def;
+ else if (param > max)
+ modified_param = max;
- if (modified_ios) {
- (void)cmpxchg(reserved_ios, ios, modified_ios);
- ios = modified_ios;
+ if (modified_param) {
+ (void)cmpxchg(module_param, param, modified_param);
+ param = modified_param;
}
- return ios;
+ return param;
}
unsigned dm_get_reserved_bio_based_ios(void)
{
- return __dm_get_reserved_ios(&reserved_bio_based_ios,
+ return __dm_get_module_param(&reserved_bio_based_ios,
RESERVED_BIO_BASED_IOS, RESERVED_MAX_IOS);
}
EXPORT_SYMBOL_GPL(dm_get_reserved_bio_based_ios);
unsigned dm_get_reserved_rq_based_ios(void)
{
- return __dm_get_reserved_ios(&reserved_rq_based_ios,
+ return __dm_get_module_param(&reserved_rq_based_ios,
RESERVED_REQUEST_BASED_IOS, RESERVED_MAX_IOS);
}
EXPORT_SYMBOL_GPL(dm_get_reserved_rq_based_ios);
@@ -1017,6 +1041,11 @@
blk_update_request(tio->orig, 0, nr_bytes);
}
+static struct dm_rq_target_io *tio_from_request(struct request *rq)
+{
+ return (rq->q->mq_ops ? blk_mq_rq_to_pdu(rq) : rq->special);
+}
+
/*
* Don't touch any member of the md after calling this function because
* the md may be freed in dm_put() at the end of this function.
@@ -1024,10 +1053,13 @@
*/
static void rq_completed(struct mapped_device *md, int rw, bool run_queue)
{
+ int nr_requests_pending;
+
atomic_dec(&md->pending[rw]);
/* nudge anyone waiting on suspend queue */
- if (!md_in_flight(md))
+ nr_requests_pending = md_in_flight(md);
+ if (!nr_requests_pending)
wake_up(&md->wait);
/*
@@ -1036,8 +1068,13 @@
* back into ->request_fn() could deadlock attempting to grab the
* queue lock again.
*/
- if (run_queue)
- blk_run_queue_async(md->queue);
+ if (run_queue) {
+ if (md->queue->mq_ops)
+ blk_mq_run_hw_queues(md->queue, true);
+ else if (!nr_requests_pending ||
+ (nr_requests_pending >= md->queue->nr_congestion_on))
+ blk_run_queue_async(md->queue);
+ }
/*
* dm_put() must be at the end of this function. See the comment above
@@ -1048,13 +1085,18 @@
static void free_rq_clone(struct request *clone)
{
struct dm_rq_target_io *tio = clone->end_io_data;
+ struct mapped_device *md = tio->md;
blk_rq_unprep_clone(clone);
- if (clone->q && clone->q->mq_ops)
+
+ if (clone->q->mq_ops)
tio->ti->type->release_clone_rq(clone);
- else
- free_clone_request(tio->md, clone);
- free_rq_tio(tio);
+ else if (!md->queue->mq_ops)
+ /* request_fn queue stacked on request_fn queue(s) */
+ free_clone_request(md, clone);
+
+ if (!md->queue->mq_ops)
+ free_rq_tio(tio);
}
/*
@@ -1083,17 +1125,22 @@
}
free_rq_clone(clone);
- blk_end_request_all(rq, error);
+ if (!rq->q->mq_ops)
+ blk_end_request_all(rq, error);
+ else
+ blk_mq_end_request(rq, error);
rq_completed(md, rw, true);
}
static void dm_unprep_request(struct request *rq)
{
- struct dm_rq_target_io *tio = rq->special;
+ struct dm_rq_target_io *tio = tio_from_request(rq);
struct request *clone = tio->clone;
- rq->special = NULL;
- rq->cmd_flags &= ~REQ_DONTPREP;
+ if (!rq->q->mq_ops) {
+ rq->special = NULL;
+ rq->cmd_flags &= ~REQ_DONTPREP;
+ }
if (clone)
free_rq_clone(clone);
@@ -1102,18 +1149,29 @@
/*
* Requeue the original request of a clone.
*/
-static void dm_requeue_unmapped_original_request(struct mapped_device *md,
- struct request *rq)
+static void old_requeue_request(struct request *rq)
{
- int rw = rq_data_dir(rq);
struct request_queue *q = rq->q;
unsigned long flags;
- dm_unprep_request(rq);
-
spin_lock_irqsave(q->queue_lock, flags);
blk_requeue_request(q, rq);
spin_unlock_irqrestore(q->queue_lock, flags);
+}
+
+static void dm_requeue_unmapped_original_request(struct mapped_device *md,
+ struct request *rq)
+{
+ int rw = rq_data_dir(rq);
+
+ dm_unprep_request(rq);
+
+ if (!rq->q->mq_ops)
+ old_requeue_request(rq);
+ else {
+ blk_mq_requeue_request(rq);
+ blk_mq_kick_requeue_list(rq->q);
+ }
rq_completed(md, rw, false);
}
@@ -1125,33 +1183,42 @@
dm_requeue_unmapped_original_request(tio->md, tio->orig);
}
-static void __stop_queue(struct request_queue *q)
+static void old_stop_queue(struct request_queue *q)
{
+ unsigned long flags;
+
+ if (blk_queue_stopped(q))
+ return;
+
+ spin_lock_irqsave(q->queue_lock, flags);
blk_stop_queue(q);
+ spin_unlock_irqrestore(q->queue_lock, flags);
}
static void stop_queue(struct request_queue *q)
{
+ if (!q->mq_ops)
+ old_stop_queue(q);
+ else
+ blk_mq_stop_hw_queues(q);
+}
+
+static void old_start_queue(struct request_queue *q)
+{
unsigned long flags;
spin_lock_irqsave(q->queue_lock, flags);
- __stop_queue(q);
- spin_unlock_irqrestore(q->queue_lock, flags);
-}
-
-static void __start_queue(struct request_queue *q)
-{
if (blk_queue_stopped(q))
blk_start_queue(q);
+ spin_unlock_irqrestore(q->queue_lock, flags);
}
static void start_queue(struct request_queue *q)
{
- unsigned long flags;
-
- spin_lock_irqsave(q->queue_lock, flags);
- __start_queue(q);
- spin_unlock_irqrestore(q->queue_lock, flags);
+ if (!q->mq_ops)
+ old_start_queue(q);
+ else
+ blk_mq_start_stopped_hw_queues(q, true);
}
static void dm_done(struct request *clone, int error, bool mapped)
@@ -1192,13 +1259,20 @@
static void dm_softirq_done(struct request *rq)
{
bool mapped = true;
- struct dm_rq_target_io *tio = rq->special;
+ struct dm_rq_target_io *tio = tio_from_request(rq);
struct request *clone = tio->clone;
+ int rw;
if (!clone) {
- blk_end_request_all(rq, tio->error);
- rq_completed(tio->md, rq_data_dir(rq), false);
- free_rq_tio(tio);
+ rw = rq_data_dir(rq);
+ if (!rq->q->mq_ops) {
+ blk_end_request_all(rq, tio->error);
+ rq_completed(tio->md, rw, false);
+ free_rq_tio(tio);
+ } else {
+ blk_mq_end_request(rq, tio->error);
+ rq_completed(tio->md, rw, false);
+ }
return;
}
@@ -1214,7 +1288,7 @@
*/
static void dm_complete_request(struct request *rq, int error)
{
- struct dm_rq_target_io *tio = rq->special;
+ struct dm_rq_target_io *tio = tio_from_request(rq);
tio->error = error;
blk_complete_request(rq);
@@ -1233,7 +1307,7 @@
}
/*
- * Called with the clone's queue lock held
+ * Called with the clone's queue lock held (for non-blk-mq)
*/
static void end_clone_request(struct request *clone, int error)
{
@@ -1693,7 +1767,7 @@
* The request function that just remaps the bio built up by
* dm_merge_bvec.
*/
-static void _dm_request(struct request_queue *q, struct bio *bio)
+static void dm_make_request(struct request_queue *q, struct bio *bio)
{
int rw = bio_data_dir(bio);
struct mapped_device *md = q->queuedata;
@@ -1725,16 +1799,6 @@
return blk_queue_stackable(md->queue);
}
-static void dm_request(struct request_queue *q, struct bio *bio)
-{
- struct mapped_device *md = q->queuedata;
-
- if (dm_request_based(md))
- blk_queue_bio(q, bio);
- else
- _dm_request(q, bio);
-}
-
static void dm_dispatch_clone_request(struct request *clone, struct request *rq)
{
int r;
@@ -1787,15 +1851,25 @@
static struct request *clone_rq(struct request *rq, struct mapped_device *md,
struct dm_rq_target_io *tio, gfp_t gfp_mask)
{
- struct request *clone = alloc_clone_request(md, gfp_mask);
+ /*
+ * Do not allocate a clone if tio->clone was already set
+ * (see: dm_mq_queue_rq).
+ */
+ bool alloc_clone = !tio->clone;
+ struct request *clone;
- if (!clone)
- return NULL;
+ if (alloc_clone) {
+ clone = alloc_clone_request(md, gfp_mask);
+ if (!clone)
+ return NULL;
+ } else
+ clone = tio->clone;
blk_rq_init(NULL, clone);
if (setup_clone(clone, rq, tio, gfp_mask)) {
/* -ENOMEM */
- free_clone_request(md, clone);
+ if (alloc_clone)
+ free_clone_request(md, clone);
return NULL;
}
@@ -1804,6 +1878,19 @@
static void map_tio_request(struct kthread_work *work);
+static void init_tio(struct dm_rq_target_io *tio, struct request *rq,
+ struct mapped_device *md)
+{
+ tio->md = md;
+ tio->ti = NULL;
+ tio->clone = NULL;
+ tio->orig = rq;
+ tio->error = 0;
+ memset(&tio->info, 0, sizeof(tio->info));
+ if (md->kworker_task)
+ init_kthread_work(&tio->work, map_tio_request);
+}
+
static struct dm_rq_target_io *prep_tio(struct request *rq,
struct mapped_device *md, gfp_t gfp_mask)
{
@@ -1815,13 +1902,7 @@
if (!tio)
return NULL;
- tio->md = md;
- tio->ti = NULL;
- tio->clone = NULL;
- tio->orig = rq;
- tio->error = 0;
- memset(&tio->info, 0, sizeof(tio->info));
- init_kthread_work(&tio->work, map_tio_request);
+ init_tio(tio, rq, md);
table = dm_get_live_table(md, &srcu_idx);
if (!dm_table_mq_request_based(table)) {
@@ -1865,11 +1946,11 @@
* DM_MAPIO_REQUEUE : the original request needs to be requeued
* < 0 : the request was completed due to failure
*/
-static int map_request(struct dm_target *ti, struct request *rq,
+static int map_request(struct dm_rq_target_io *tio, struct request *rq,
struct mapped_device *md)
{
int r;
- struct dm_rq_target_io *tio = rq->special;
+ struct dm_target *ti = tio->ti;
struct request *clone = NULL;
if (tio->clone) {
@@ -1884,7 +1965,7 @@
}
if (IS_ERR(clone))
return DM_MAPIO_REQUEUE;
- if (setup_clone(clone, rq, tio, GFP_KERNEL)) {
+ if (setup_clone(clone, rq, tio, GFP_ATOMIC)) {
/* -ENOMEM */
ti->type->release_clone_rq(clone);
return DM_MAPIO_REQUEUE;
@@ -1925,15 +2006,24 @@
struct request *rq = tio->orig;
struct mapped_device *md = tio->md;
- if (map_request(tio->ti, rq, md) == DM_MAPIO_REQUEUE)
+ if (map_request(tio, rq, md) == DM_MAPIO_REQUEUE)
dm_requeue_unmapped_original_request(md, rq);
}
static void dm_start_request(struct mapped_device *md, struct request *orig)
{
- blk_start_request(orig);
+ if (!orig->q->mq_ops)
+ blk_start_request(orig);
+ else
+ blk_mq_start_request(orig);
atomic_inc(&md->pending[rq_data_dir(orig)]);
+ if (md->seq_rq_merge_deadline_usecs) {
+ md->last_rq_pos = rq_end_sector(orig);
+ md->last_rq_rw = rq_data_dir(orig);
+ md->last_rq_start_time = ktime_get();
+ }
+
/*
* Hold the md reference here for the in-flight I/O.
* We can't rely on the reference count by device opener,
@@ -1944,6 +2034,45 @@
dm_get(md);
}
+#define MAX_SEQ_RQ_MERGE_DEADLINE_USECS 100000
+
+ssize_t dm_attr_rq_based_seq_io_merge_deadline_show(struct mapped_device *md, char *buf)
+{
+ return sprintf(buf, "%u\n", md->seq_rq_merge_deadline_usecs);
+}
+
+ssize_t dm_attr_rq_based_seq_io_merge_deadline_store(struct mapped_device *md,
+ const char *buf, size_t count)
+{
+ unsigned deadline;
+
+ if (!dm_request_based(md) || md->use_blk_mq)
+ return count;
+
+ if (kstrtouint(buf, 10, &deadline))
+ return -EINVAL;
+
+ if (deadline > MAX_SEQ_RQ_MERGE_DEADLINE_USECS)
+ deadline = MAX_SEQ_RQ_MERGE_DEADLINE_USECS;
+
+ md->seq_rq_merge_deadline_usecs = deadline;
+
+ return count;
+}
+
+static bool dm_request_peeked_before_merge_deadline(struct mapped_device *md)
+{
+ ktime_t kt_deadline;
+
+ if (!md->seq_rq_merge_deadline_usecs)
+ return false;
+
+ kt_deadline = ns_to_ktime((u64)md->seq_rq_merge_deadline_usecs * NSEC_PER_USEC);
+ kt_deadline = ktime_add_safe(md->last_rq_start_time, kt_deadline);
+
+ return !ktime_after(ktime_get(), kt_deadline);
+}
+
/*
* q->request_fn for request-based dm.
* Called with the queue lock held.
@@ -1967,7 +2096,7 @@
while (!blk_queue_stopped(q)) {
rq = blk_peek_request(q);
if (!rq)
- goto delay_and_out;
+ goto out;
/* always use block 0 to find the target for flushes for now */
pos = 0;
@@ -1986,12 +2115,17 @@
continue;
}
+ if (dm_request_peeked_before_merge_deadline(md) &&
+ md_in_flight(md) && rq->bio && rq->bio->bi_vcnt == 1 &&
+ md->last_rq_pos == pos && md->last_rq_rw == rq_data_dir(rq))
+ goto delay_and_out;
+
if (ti->type->busy && ti->type->busy(ti))
goto delay_and_out;
dm_start_request(md, rq);
- tio = rq->special;
+ tio = tio_from_request(rq);
/* Establish tio->ti before queuing work (map_tio_request) */
tio->ti = ti;
queue_kthread_work(&md->kworker, &tio->work);
@@ -2001,33 +2135,11 @@
goto out;
delay_and_out:
- blk_delay_queue(q, HZ / 10);
+ blk_delay_queue(q, HZ / 100);
out:
dm_put_live_table(md, srcu_idx);
}
-int dm_underlying_device_busy(struct request_queue *q)
-{
- return blk_lld_busy(q);
-}
-EXPORT_SYMBOL_GPL(dm_underlying_device_busy);
-
-static int dm_lld_busy(struct request_queue *q)
-{
- int r;
- struct mapped_device *md = q->queuedata;
- struct dm_table *map = dm_get_live_table_fast(md);
-
- if (!map || test_bit(DMF_BLOCK_IO_FOR_SUSPEND, &md->flags))
- r = 1;
- else
- r = dm_table_any_busy_target(map);
-
- dm_put_live_table_fast(md);
-
- return r;
-}
-
static int dm_any_congested(void *congested_data, int bdi_bits)
{
int r = bdi_bits;
@@ -2110,7 +2222,7 @@
{
/*
* Request-based dm devices cannot be stacked on top of bio-based dm
- * devices. The type of this dm device has not been decided yet.
+ * devices. The type of this dm device may not have been decided yet.
* The type is decided at the first table loading time.
* To prevent problematic device stacking, clear the queue flag
* for request stacking support until then.
@@ -2118,13 +2230,21 @@
* This queue is new, so no concurrency on the queue_flags.
*/
queue_flag_clear_unlocked(QUEUE_FLAG_STACKABLE, md->queue);
+}
+static void dm_init_old_md_queue(struct mapped_device *md)
+{
+ md->use_blk_mq = false;
+ dm_init_md_queue(md);
+
+ /*
+ * Initialize aspects of queue that aren't relevant for blk-mq
+ */
md->queue->queuedata = md;
md->queue->backing_dev_info.congested_fn = dm_any_congested;
md->queue->backing_dev_info.congested_data = md;
- blk_queue_make_request(md->queue, dm_request);
+
blk_queue_bounce_limit(md->queue, BLK_BOUNCE_ANY);
- blk_queue_merge_bvec(md->queue, dm_merge_bvec);
}
/*
@@ -2156,6 +2276,7 @@
if (r < 0)
goto bad_io_barrier;
+ md->use_blk_mq = use_blk_mq;
md->type = DM_TYPE_NONE;
mutex_init(&md->suspend_lock);
mutex_init(&md->type_lock);
@@ -2267,6 +2388,8 @@
del_gendisk(md->disk);
put_disk(md->disk);
blk_cleanup_queue(md->queue);
+ if (md->use_blk_mq)
+ blk_mq_free_tag_set(&md->tag_set);
bdput(md->bdev);
free_minor(minor);
@@ -2278,7 +2401,7 @@
{
struct dm_md_mempools *p = dm_table_get_md_mempools(t);
- if (md->io_pool && md->bs) {
+ if (md->bs) {
/* The md already has necessary mempools. */
if (dm_table_get_type(t) == DM_TYPE_BIO_BASED) {
/*
@@ -2310,7 +2433,7 @@
p->bs = NULL;
out:
- /* mempool bind completed, now no need any mempools in the table */
+ /* mempool bind completed, no longer need any mempools in the table */
dm_table_free_md_mempools(t);
}
@@ -2357,7 +2480,7 @@
if (!q->merge_bvec_fn)
return 0;
- if (q->make_request_fn == dm_request) {
+ if (q->make_request_fn == dm_make_request) {
dev_md = q->queuedata;
if (test_bit(DMF_MERGE_IS_OPTIONAL, &dev_md->flags))
return 0;
@@ -2426,7 +2549,7 @@
* This must be done before setting the queue restrictions,
* because request-based dm may be run just after the setting.
*/
- if (dm_table_request_based(t) && !blk_queue_stopped(q))
+ if (dm_table_request_based(t))
stop_queue(q);
__bind_mempools(md, t);
@@ -2508,14 +2631,6 @@
return md->type;
}
-static bool dm_md_type_request_based(struct mapped_device *md)
-{
- unsigned table_type = dm_get_md_type(md);
-
- return (table_type == DM_TYPE_REQUEST_BASED ||
- table_type == DM_TYPE_MQ_REQUEST_BASED);
-}
-
struct target_type *dm_get_immutable_target_type(struct mapped_device *md)
{
return md->immutable_target_type;
@@ -2532,6 +2647,14 @@
}
EXPORT_SYMBOL_GPL(dm_get_queue_limits);
+static void init_rq_based_worker_thread(struct mapped_device *md)
+{
+ /* Initialize the request-based DM worker thread */
+ init_kthread_worker(&md->kworker);
+ md->kworker_task = kthread_run(kthread_worker_fn, &md->kworker,
+ "kdmwork-%s", dm_device_name(md));
+}
+
/*
* Fully initialize a request-based queue (->elevator, ->request_fn, etc).
*/
@@ -2540,27 +2663,160 @@
struct request_queue *q = NULL;
if (md->queue->elevator)
- return 1;
+ return 0;
/* Fully initialize the queue */
q = blk_init_allocated_queue(md->queue, dm_request_fn, NULL);
if (!q)
- return 0;
+ return -EINVAL;
+
+ /* disable dm_request_fn's merge heuristic by default */
+ md->seq_rq_merge_deadline_usecs = 0;
md->queue = q;
- dm_init_md_queue(md);
+ dm_init_old_md_queue(md);
blk_queue_softirq_done(md->queue, dm_softirq_done);
blk_queue_prep_rq(md->queue, dm_prep_fn);
- blk_queue_lld_busy(md->queue, dm_lld_busy);
- /* Also initialize the request-based DM worker thread */
- init_kthread_worker(&md->kworker);
- md->kworker_task = kthread_run(kthread_worker_fn, &md->kworker,
- "kdmwork-%s", dm_device_name(md));
+ init_rq_based_worker_thread(md);
elv_register_queue(md->queue);
- return 1;
+ return 0;
+}
+
+static int dm_mq_init_request(void *data, struct request *rq,
+ unsigned int hctx_idx, unsigned int request_idx,
+ unsigned int numa_node)
+{
+ struct mapped_device *md = data;
+ struct dm_rq_target_io *tio = blk_mq_rq_to_pdu(rq);
+
+ /*
+ * Must initialize md member of tio, otherwise it won't
+ * be available in dm_mq_queue_rq.
+ */
+ tio->md = md;
+
+ return 0;
+}
+
+static int dm_mq_queue_rq(struct blk_mq_hw_ctx *hctx,
+ const struct blk_mq_queue_data *bd)
+{
+ struct request *rq = bd->rq;
+ struct dm_rq_target_io *tio = blk_mq_rq_to_pdu(rq);
+ struct mapped_device *md = tio->md;
+ int srcu_idx;
+ struct dm_table *map = dm_get_live_table(md, &srcu_idx);
+ struct dm_target *ti;
+ sector_t pos;
+
+ /* always use block 0 to find the target for flushes for now */
+ pos = 0;
+ if (!(rq->cmd_flags & REQ_FLUSH))
+ pos = blk_rq_pos(rq);
+
+ ti = dm_table_find_target(map, pos);
+ if (!dm_target_is_valid(ti)) {
+ dm_put_live_table(md, srcu_idx);
+ DMERR_LIMIT("request attempted access beyond the end of device");
+ /*
+ * Must perform setup, that rq_completed() requires,
+ * before returning BLK_MQ_RQ_QUEUE_ERROR
+ */
+ dm_start_request(md, rq);
+ return BLK_MQ_RQ_QUEUE_ERROR;
+ }
+ dm_put_live_table(md, srcu_idx);
+
+ if (ti->type->busy && ti->type->busy(ti))
+ return BLK_MQ_RQ_QUEUE_BUSY;
+
+ dm_start_request(md, rq);
+
+ /* Init tio using md established in .init_request */
+ init_tio(tio, rq, md);
+
+ /*
+ * Establish tio->ti before queuing work (map_tio_request)
+ * or making direct call to map_request().
+ */
+ tio->ti = ti;
+
+ /* Clone the request if underlying devices aren't blk-mq */
+ if (dm_table_get_type(map) == DM_TYPE_REQUEST_BASED) {
+ /* clone request is allocated at the end of the pdu */
+ tio->clone = (void *)blk_mq_rq_to_pdu(rq) + sizeof(struct dm_rq_target_io);
+ if (!clone_rq(rq, md, tio, GFP_ATOMIC))
+ return BLK_MQ_RQ_QUEUE_BUSY;
+ queue_kthread_work(&md->kworker, &tio->work);
+ } else {
+ /* Direct call is fine since .queue_rq allows allocations */
+ if (map_request(tio, rq, md) == DM_MAPIO_REQUEUE)
+ dm_requeue_unmapped_original_request(md, rq);
+ }
+
+ return BLK_MQ_RQ_QUEUE_OK;
+}
+
+static struct blk_mq_ops dm_mq_ops = {
+ .queue_rq = dm_mq_queue_rq,
+ .map_queue = blk_mq_map_queue,
+ .complete = dm_softirq_done,
+ .init_request = dm_mq_init_request,
+};
+
+static int dm_init_request_based_blk_mq_queue(struct mapped_device *md)
+{
+ unsigned md_type = dm_get_md_type(md);
+ struct request_queue *q;
+ int err;
+
+ memset(&md->tag_set, 0, sizeof(md->tag_set));
+ md->tag_set.ops = &dm_mq_ops;
+ md->tag_set.queue_depth = BLKDEV_MAX_RQ;
+ md->tag_set.numa_node = NUMA_NO_NODE;
+ md->tag_set.flags = BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_SG_MERGE;
+ md->tag_set.nr_hw_queues = 1;
+ if (md_type == DM_TYPE_REQUEST_BASED) {
+ /* make the memory for non-blk-mq clone part of the pdu */
+ md->tag_set.cmd_size = sizeof(struct dm_rq_target_io) + sizeof(struct request);
+ } else
+ md->tag_set.cmd_size = sizeof(struct dm_rq_target_io);
+ md->tag_set.driver_data = md;
+
+ err = blk_mq_alloc_tag_set(&md->tag_set);
+ if (err)
+ return err;
+
+ q = blk_mq_init_allocated_queue(&md->tag_set, md->queue);
+ if (IS_ERR(q)) {
+ err = PTR_ERR(q);
+ goto out_tag_set;
+ }
+ md->queue = q;
+ dm_init_md_queue(md);
+
+ /* backfill 'mq' sysfs registration normally done in blk_register_queue */
+ blk_mq_register_disk(md->disk);
+
+ if (md_type == DM_TYPE_REQUEST_BASED)
+ init_rq_based_worker_thread(md);
+
+ return 0;
+
+out_tag_set:
+ blk_mq_free_tag_set(&md->tag_set);
+ return err;
+}
+
+static unsigned filter_md_type(unsigned type, struct mapped_device *md)
+{
+ if (type == DM_TYPE_BIO_BASED)
+ return type;
+
+ return !md->use_blk_mq ? DM_TYPE_REQUEST_BASED : DM_TYPE_MQ_REQUEST_BASED;
}
/*
@@ -2568,9 +2824,29 @@
*/
int dm_setup_md_queue(struct mapped_device *md)
{
- if (dm_md_type_request_based(md) && !dm_init_request_based_queue(md)) {
- DMWARN("Cannot initialize queue for request-based mapped device");
- return -EINVAL;
+ int r;
+ unsigned md_type = filter_md_type(dm_get_md_type(md), md);
+
+ switch (md_type) {
+ case DM_TYPE_REQUEST_BASED:
+ r = dm_init_request_based_queue(md);
+ if (r) {
+ DMWARN("Cannot initialize queue for request-based mapped device");
+ return r;
+ }
+ break;
+ case DM_TYPE_MQ_REQUEST_BASED:
+ r = dm_init_request_based_blk_mq_queue(md);
+ if (r) {
+ DMWARN("Cannot initialize queue for request-based blk-mq mapped device");
+ return r;
+ }
+ break;
+ case DM_TYPE_BIO_BASED:
+ dm_init_old_md_queue(md);
+ blk_queue_make_request(md->queue, dm_make_request);
+ blk_queue_merge_bvec(md->queue, dm_merge_bvec);
+ break;
}
return 0;
@@ -2654,7 +2930,7 @@
set_bit(DMF_FREEING, &md->flags);
spin_unlock(&_minor_lock);
- if (dm_request_based(md))
+ if (dm_request_based(md) && md->kworker_task)
flush_kthread_worker(&md->kworker);
/*
@@ -2908,7 +3184,8 @@
*/
if (dm_request_based(md)) {
stop_queue(md->queue);
- flush_kthread_worker(&md->kworker);
+ if (md->kworker_task)
+ flush_kthread_worker(&md->kworker);
}
flush_workqueue(md->wq);
@@ -3206,6 +3483,7 @@
{
return md->disk;
}
+EXPORT_SYMBOL_GPL(dm_disk);
struct kobject *dm_kobject(struct mapped_device *md)
{
@@ -3253,16 +3531,19 @@
}
EXPORT_SYMBOL_GPL(dm_noflush_suspending);
-struct dm_md_mempools *dm_alloc_md_mempools(unsigned type, unsigned integrity, unsigned per_bio_data_size)
+struct dm_md_mempools *dm_alloc_md_mempools(struct mapped_device *md, unsigned type,
+ unsigned integrity, unsigned per_bio_data_size)
{
struct dm_md_mempools *pools = kzalloc(sizeof(*pools), GFP_KERNEL);
- struct kmem_cache *cachep;
+ struct kmem_cache *cachep = NULL;
unsigned int pool_size = 0;
unsigned int front_pad;
if (!pools)
return NULL;
+ type = filter_md_type(type, md);
+
switch (type) {
case DM_TYPE_BIO_BASED:
cachep = _io_cache;
@@ -3270,13 +3551,13 @@
front_pad = roundup(per_bio_data_size, __alignof__(struct dm_target_io)) + offsetof(struct dm_target_io, clone);
break;
case DM_TYPE_REQUEST_BASED:
+ cachep = _rq_tio_cache;
pool_size = dm_get_reserved_rq_based_ios();
pools->rq_pool = mempool_create_slab_pool(pool_size, _rq_cache);
if (!pools->rq_pool)
goto out;
/* fall through to setup remaining rq-based pools */
case DM_TYPE_MQ_REQUEST_BASED:
- cachep = _rq_tio_cache;
if (!pool_size)
pool_size = dm_get_reserved_rq_based_ios();
front_pad = offsetof(struct dm_rq_clone_bio_info, clone);
@@ -3284,12 +3565,14 @@
WARN_ON(per_bio_data_size != 0);
break;
default:
- goto out;
+ BUG();
}
- pools->io_pool = mempool_create_slab_pool(pool_size, cachep);
- if (!pools->io_pool)
- goto out;
+ if (cachep) {
+ pools->io_pool = mempool_create_slab_pool(pool_size, cachep);
+ if (!pools->io_pool)
+ goto out;
+ }
pools->bs = bioset_create_nobvec(pool_size, front_pad);
if (!pools->bs)
@@ -3346,6 +3629,9 @@
module_param(reserved_rq_based_ios, uint, S_IRUGO | S_IWUSR);
MODULE_PARM_DESC(reserved_rq_based_ios, "Reserved IOs in request-based mempools");
+module_param(use_blk_mq, bool, S_IRUGO | S_IWUSR);
+MODULE_PARM_DESC(use_blk_mq, "Use block multiqueue for request-based DM devices");
+
MODULE_DESCRIPTION(DM_NAME " driver");
MODULE_AUTHOR("Joe Thornber <dm-devel@redhat.com>");
MODULE_LICENSE("GPL");
diff --git a/drivers/md/dm.h b/drivers/md/dm.h
index 59f53e7..6123c2b 100644
--- a/drivers/md/dm.h
+++ b/drivers/md/dm.h
@@ -70,7 +70,6 @@
void dm_table_postsuspend_targets(struct dm_table *t);
int dm_table_resume_targets(struct dm_table *t);
int dm_table_any_congested(struct dm_table *t, int bdi_bits);
-int dm_table_any_busy_target(struct dm_table *t);
unsigned dm_table_get_type(struct dm_table *t);
struct target_type *dm_table_get_immutable_target_type(struct dm_table *t);
bool dm_table_request_based(struct dm_table *t);
@@ -212,6 +211,8 @@
void dm_internal_suspend(struct mapped_device *md);
void dm_internal_resume(struct mapped_device *md);
+bool dm_use_blk_mq(struct mapped_device *md);
+
int dm_io_init(void);
void dm_io_exit(void);
@@ -221,7 +222,8 @@
/*
* Mempool operations
*/
-struct dm_md_mempools *dm_alloc_md_mempools(unsigned type, unsigned integrity, unsigned per_bio_data_size);
+struct dm_md_mempools *dm_alloc_md_mempools(struct mapped_device *md, unsigned type,
+ unsigned integrity, unsigned per_bio_data_size);
void dm_free_md_mempools(struct dm_md_mempools *pools);
/*
@@ -235,4 +237,8 @@
return !maxlen || strlen(result) + 1 >= maxlen;
}
+ssize_t dm_attr_rq_based_seq_io_merge_deadline_show(struct mapped_device *md, char *buf);
+ssize_t dm_attr_rq_based_seq_io_merge_deadline_store(struct mapped_device *md,
+ const char *buf, size_t count);
+
#endif
diff --git a/drivers/media/Kconfig b/drivers/media/Kconfig
index 49cd308..3ef0f90 100644
--- a/drivers/media/Kconfig
+++ b/drivers/media/Kconfig
@@ -87,13 +87,21 @@
config MEDIA_CONTROLLER
bool "Media Controller API"
- depends on MEDIA_CAMERA_SUPPORT
+ depends on MEDIA_CAMERA_SUPPORT || MEDIA_ANALOG_TV_SUPPORT || MEDIA_DIGITAL_TV_SUPPORT
---help---
Enable the media controller API used to query media devices internal
topology and configure it dynamically.
This API is mostly used by camera interfaces in embedded platforms.
+config MEDIA_CONTROLLER_DVB
+ bool "Enable Media controller for DVB"
+ depends on MEDIA_CONTROLLER
+ ---help---
+ Enable the media controller API support for DVB.
+
+ This is currently experimental.
+
#
# Video4Linux support
# Only enables if one of the V4L2 types (ATV, webcam, radio) is selected
diff --git a/drivers/media/common/saa7146/saa7146_fops.c b/drivers/media/common/saa7146/saa7146_fops.c
index b7d6393..df1e8c9 100644
--- a/drivers/media/common/saa7146/saa7146_fops.c
+++ b/drivers/media/common/saa7146/saa7146_fops.c
@@ -587,26 +587,20 @@
}
EXPORT_SYMBOL_GPL(saa7146_vv_release);
-int saa7146_register_device(struct video_device **vid, struct saa7146_dev* dev,
+int saa7146_register_device(struct video_device *vfd, struct saa7146_dev *dev,
char *name, int type)
{
- struct video_device *vfd;
int err;
int i;
DEB_EE("dev:%p, name:'%s', type:%d\n", dev, name, type);
- // released by vfd->release
- vfd = video_device_alloc();
- if (vfd == NULL)
- return -ENOMEM;
-
vfd->fops = &video_fops;
if (type == VFL_TYPE_GRABBER)
vfd->ioctl_ops = &dev->ext_vv_data->vid_ops;
else
vfd->ioctl_ops = &dev->ext_vv_data->vbi_ops;
- vfd->release = video_device_release;
+ vfd->release = video_device_release_empty;
vfd->lock = &dev->v4l2_lock;
vfd->v4l2_dev = &dev->v4l2_dev;
vfd->tvnorms = 0;
@@ -618,25 +612,20 @@
err = video_register_device(vfd, type, -1);
if (err < 0) {
ERR("cannot register v4l2 device. skipping.\n");
- video_device_release(vfd);
return err;
}
pr_info("%s: registered device %s [v4l2]\n",
dev->name, video_device_node_name(vfd));
-
- *vid = vfd;
return 0;
}
EXPORT_SYMBOL_GPL(saa7146_register_device);
-int saa7146_unregister_device(struct video_device **vid, struct saa7146_dev* dev)
+int saa7146_unregister_device(struct video_device *vfd, struct saa7146_dev *dev)
{
DEB_EE("dev:%p\n", dev);
- video_unregister_device(*vid);
- *vid = NULL;
-
+ video_unregister_device(vfd);
return 0;
}
EXPORT_SYMBOL_GPL(saa7146_unregister_device);
diff --git a/drivers/media/common/saa7146/saa7146_vbi.c b/drivers/media/common/saa7146/saa7146_vbi.c
index 1e71e37..2da9957 100644
--- a/drivers/media/common/saa7146/saa7146_vbi.c
+++ b/drivers/media/common/saa7146/saa7146_vbi.c
@@ -95,7 +95,7 @@
/* prepare to wait to be woken up by the irq-handler */
add_wait_queue(&vv->vbi_wq, &wait);
- current->state = TASK_INTERRUPTIBLE;
+ set_current_state(TASK_INTERRUPTIBLE);
/* start rps1 to enable workaround */
saa7146_write(dev, RPS_ADDR1, dev->d_rps1.dma_handle);
@@ -106,7 +106,7 @@
DEB_VBI("brs bug workaround %d/1\n", i);
remove_wait_queue(&vv->vbi_wq, &wait);
- current->state = TASK_RUNNING;
+ __set_current_state(TASK_RUNNING);
/* disable rps1 irqs */
SAA7146_IER_DISABLE(dev,MASK_28);
diff --git a/drivers/media/common/siano/sms-cards.c b/drivers/media/common/siano/sms-cards.c
index 82c7a12..ca2f80c 100644
--- a/drivers/media/common/siano/sms-cards.c
+++ b/drivers/media/common/siano/sms-cards.c
@@ -21,10 +21,6 @@
#include "smsir.h"
#include <linux/module.h>
-static int sms_dbg;
-module_param_named(cards_dbg, sms_dbg, int, 0644);
-MODULE_PARM_DESC(cards_dbg, "set debug level (info=1, adv=2 (or-able))");
-
static struct sms_board sms_boards[] = {
[SMS_BOARD_UNKNOWN] = {
.name = "Unknown board",
@@ -232,7 +228,7 @@
break; /* BOARD_EVENT_MULTIPLEX_ERRORS */
default:
- sms_err("Unknown SMS board event");
+ pr_err("Unknown SMS board event\n");
break;
}
return 0;
@@ -342,7 +338,7 @@
int board_id = smscore_get_board_id(coredev);
struct sms_board *board = sms_get_board(board_id);
- sms_debug("%s: LNA %s", __func__, onoff ? "enabled" : "disabled");
+ pr_debug("%s: LNA %s\n", __func__, onoff ? "enabled" : "disabled");
switch (board_id) {
case SMS1XXX_BOARD_HAUPPAUGE_TIGER_MINICARD_R2:
diff --git a/drivers/media/common/siano/sms-cards.h b/drivers/media/common/siano/sms-cards.h
index 4c4cadd..bb3d733 100644
--- a/drivers/media/common/siano/sms-cards.h
+++ b/drivers/media/common/siano/sms-cards.h
@@ -20,8 +20,9 @@
#ifndef __SMS_CARDS_H__
#define __SMS_CARDS_H__
-#include <linux/usb.h>
#include "smscoreapi.h"
+
+#include <linux/usb.h>
#include "smsir.h"
#define SMS_BOARD_UNKNOWN 0
diff --git a/drivers/media/common/siano/smscoreapi.c b/drivers/media/common/siano/smscoreapi.c
index a367743..2a8d9a3 100644
--- a/drivers/media/common/siano/smscoreapi.c
+++ b/drivers/media/common/siano/smscoreapi.c
@@ -21,6 +21,8 @@
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
+#include "smscoreapi.h"
+
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/module.h>
@@ -34,14 +36,9 @@
#include <linux/wait.h>
#include <asm/byteorder.h>
-#include "smscoreapi.h"
#include "sms-cards.h"
#include "smsir.h"
-static int sms_dbg;
-module_param_named(debug, sms_dbg, int, 0644);
-MODULE_PARM_DESC(debug, "set debug level (info=1, adv=2 (or-able))");
-
struct smscore_device_notifyee_t {
struct list_head entry;
hotplug_t hotplug;
@@ -460,7 +457,7 @@
strcpy(entry->devpath, devpath);
list_add(&entry->entry, &g_smscore_registry);
} else
- sms_err("failed to create smscore_registry.");
+ pr_err("failed to create smscore_registry.\n");
kmutex_unlock(&g_smscore_registrylock);
return entry;
}
@@ -473,7 +470,7 @@
if (entry)
return entry->mode;
else
- sms_err("No registry found.");
+ pr_err("No registry found.\n");
return default_mode;
}
@@ -487,7 +484,7 @@
if (entry)
return entry->type;
else
- sms_err("No registry found.");
+ pr_err("No registry found.\n");
return -EINVAL;
}
@@ -500,7 +497,7 @@
if (entry)
entry->mode = mode;
else
- sms_err("No registry found.");
+ pr_err("No registry found.\n");
}
static void smscore_registry_settype(char *devpath,
@@ -512,7 +509,7 @@
if (entry)
entry->type = type;
else
- sms_err("No registry found.");
+ pr_err("No registry found.\n");
}
@@ -635,10 +632,8 @@
struct smscore_buffer_t *cb;
cb = kzalloc(sizeof(struct smscore_buffer_t), GFP_KERNEL);
- if (!cb) {
- sms_info("kzalloc(...) failed");
+ if (!cb)
return NULL;
- }
cb->p = buffer;
cb->offset_in_common = buffer - (u8 *) common_buffer;
@@ -658,16 +653,19 @@
* @return 0 on success, <0 on error.
*/
int smscore_register_device(struct smsdevice_params_t *params,
- struct smscore_device_t **coredev)
+ struct smscore_device_t **coredev,
+ void *mdev)
{
struct smscore_device_t *dev;
u8 *buffer;
dev = kzalloc(sizeof(struct smscore_device_t), GFP_KERNEL);
- if (!dev) {
- sms_info("kzalloc(...) failed");
+ if (!dev)
return -ENOMEM;
- }
+
+#ifdef CONFIG_MEDIA_CONTROLLER_DVB
+ dev->media_dev = mdev;
+#endif
/* init list entry so it could be safe in smscore_unregister_device */
INIT_LIST_HEAD(&dev->entry);
@@ -722,7 +720,7 @@
smscore_putbuffer(dev, cb);
}
- sms_info("allocated %d buffers", dev->num_buffers);
+ pr_debug("allocated %d buffers\n", dev->num_buffers);
dev->mode = DEVICE_MODE_NONE;
dev->board_id = SMS_BOARD_UNKNOWN;
@@ -746,7 +744,7 @@
*coredev = dev;
- sms_info("device %p created", dev);
+ pr_debug("device %p created\n", dev);
return 0;
}
@@ -763,7 +761,7 @@
rc = coredev->sendrequest_handler(coredev->context, buffer, size);
if (rc < 0) {
- sms_info("sendrequest returned error %d", rc);
+ pr_info("sendrequest returned error %d\n", rc);
return rc;
}
@@ -786,11 +784,11 @@
coredev->ir.dev = NULL;
ir_io = sms_get_board(smscore_get_board_id(coredev))->board_cfg.ir;
if (ir_io) {/* only if IR port exist we use IR sub-module */
- sms_info("IR loading");
+ pr_debug("IR loading\n");
rc = sms_ir_init(coredev);
if (rc != 0)
- sms_err("Error initialization DTV IR sub-module");
+ pr_err("Error initialization DTV IR sub-module\n");
else {
buffer = kmalloc(sizeof(struct sms_msg_data2) +
SMS_DMA_ALIGNMENT,
@@ -812,11 +810,10 @@
kfree(buffer);
} else
- sms_err
- ("Sending IR initialization message failed");
+ pr_err("Sending IR initialization message failed\n");
}
} else
- sms_info("IR port has not been detected");
+ pr_info("IR port has not been detected\n");
return 0;
}
@@ -835,13 +832,13 @@
board = sms_get_board(coredev->board_id);
if (!board) {
- sms_err("no board configuration exist.");
+ pr_err("no board configuration exist.\n");
return -EINVAL;
}
if (board->mtu) {
struct sms_msg_data mtu_msg;
- sms_debug("set max transmit unit %d", board->mtu);
+ pr_debug("set max transmit unit %d\n", board->mtu);
mtu_msg.x_msg_header.msg_src_id = 0;
mtu_msg.x_msg_header.msg_dst_id = HIF_TASK;
@@ -856,7 +853,7 @@
if (board->crystal) {
struct sms_msg_data crys_msg;
- sms_debug("set crystal value %d", board->crystal);
+ pr_debug("set crystal value %d\n", board->crystal);
SMS_INIT_MSG(&crys_msg.x_msg_header,
MSG_SMS_NEW_CRYSTAL_REQ,
@@ -890,12 +887,12 @@
rc = smscore_set_device_mode(coredev, mode);
if (rc < 0) {
- sms_info("set device mode faile , rc %d", rc);
+ pr_info("set device mode failed , rc %d\n", rc);
return rc;
}
rc = smscore_configure_board(coredev);
if (rc < 0) {
- sms_info("configure board failed , rc %d", rc);
+ pr_info("configure board failed , rc %d\n", rc);
return rc;
}
@@ -904,7 +901,7 @@
rc = smscore_notify_callbacks(coredev, coredev->device, 1);
smscore_init_ir(coredev);
- sms_info("device %p started, rc %d", coredev, rc);
+ pr_debug("device %p started, rc %d\n", coredev, rc);
kmutex_unlock(&g_smscore_deviceslock);
@@ -927,7 +924,7 @@
mem_address = firmware->start_address;
- sms_info("loading FW to addr 0x%x size %d",
+ pr_debug("loading FW to addr 0x%x size %d\n",
mem_address, firmware->length);
if (coredev->preload_handler) {
rc = coredev->preload_handler(coredev->context);
@@ -941,14 +938,14 @@
return -ENOMEM;
if (coredev->mode != DEVICE_MODE_NONE) {
- sms_debug("sending reload command.");
+ pr_debug("sending reload command.\n");
SMS_INIT_MSG(&msg->x_msg_header, MSG_SW_RELOAD_START_REQ,
sizeof(struct sms_msg_hdr));
rc = smscore_sendrequest_and_wait(coredev, msg,
msg->x_msg_header.msg_length,
&coredev->reload_start_done);
if (rc < 0) {
- sms_err("device reload failed, rc %d", rc);
+ pr_err("device reload failed, rc %d\n", rc);
goto exit_fw_download;
}
mem_address = *(u32 *) &payload[20];
@@ -982,7 +979,7 @@
if (rc < 0)
goto exit_fw_download;
- sms_debug("sending MSG_SMS_DATA_VALIDITY_REQ expecting 0x%x",
+ pr_debug("sending MSG_SMS_DATA_VALIDITY_REQ expecting 0x%x\n",
calc_checksum);
SMS_INIT_MSG(&msg->x_msg_header, MSG_SMS_DATA_VALIDITY_REQ,
sizeof(msg->x_msg_header) +
@@ -1001,7 +998,7 @@
struct sms_msg_data *trigger_msg =
(struct sms_msg_data *) msg;
- sms_debug("sending MSG_SMS_SWDOWNLOAD_TRIGGER_REQ");
+ pr_debug("sending MSG_SMS_SWDOWNLOAD_TRIGGER_REQ\n");
SMS_INIT_MSG(&msg->x_msg_header,
MSG_SMS_SWDOWNLOAD_TRIGGER_REQ,
sizeof(struct sms_msg_hdr) +
@@ -1037,12 +1034,13 @@
kfree(msg);
if (coredev->postload_handler) {
- sms_debug("rc=%d, postload=0x%p", rc, coredev->postload_handler);
+ pr_debug("rc=%d, postload=0x%p\n",
+ rc, coredev->postload_handler);
if (rc >= 0)
return coredev->postload_handler(coredev->context);
}
- sms_debug("rc=%d", rc);
+ pr_debug("rc=%d\n", rc);
return rc;
}
@@ -1121,11 +1119,11 @@
if (mode <= DEVICE_MODE_NONE || mode >= DEVICE_MODE_MAX)
return NULL;
- sms_debug("trying to get fw name from sms_boards board_id %d mode %d",
+ pr_debug("trying to get fw name from sms_boards board_id %d mode %d\n",
board_id, mode);
fw = sms_get_board(board_id)->fw;
if (!fw || !fw[mode]) {
- sms_debug("cannot find fw name in sms_boards, getting from lookup table mode %d type %d",
+ pr_debug("cannot find fw name in sms_boards, getting from lookup table mode %d type %d\n",
mode, type);
return smscore_fw_lkup[type][mode];
}
@@ -1154,10 +1152,10 @@
char *fw_filename = smscore_get_fw_filename(coredev, mode);
if (!fw_filename) {
- sms_err("mode %d not supported on this device", mode);
+ pr_err("mode %d not supported on this device\n", mode);
return -ENOENT;
}
- sms_debug("Firmware name: %s", fw_filename);
+ pr_debug("Firmware name: %s\n", fw_filename);
if (loadfirmware_handler == NULL && !(coredev->device_flags
& SMS_DEVICE_FAMILY2))
@@ -1165,14 +1163,14 @@
rc = request_firmware(&fw, fw_filename, coredev->device);
if (rc < 0) {
- sms_err("failed to open firmware file \"%s\"", fw_filename);
+ pr_err("failed to open firmware file '%s'\n", fw_filename);
return rc;
}
- sms_info("read fw %s, buffer size=0x%zx", fw_filename, fw->size);
+ pr_debug("read fw %s, buffer size=0x%zx\n", fw_filename, fw->size);
fw_buf = kmalloc(ALIGN(fw->size, SMS_ALLOC_ALIGNMENT),
GFP_KERNEL | GFP_DMA);
if (!fw_buf) {
- sms_err("failed to allocate firmware buffer");
+ pr_err("failed to allocate firmware buffer\n");
rc = -ENOMEM;
} else {
memcpy(fw_buf, fw->data, fw->size);
@@ -1226,18 +1224,18 @@
if (num_buffers == coredev->num_buffers)
break;
if (++retry > 10) {
- sms_info("exiting although not all buffers released.");
+ pr_info("exiting although not all buffers released.\n");
break;
}
- sms_info("waiting for %d buffer(s)",
+ pr_debug("waiting for %d buffer(s)\n",
coredev->num_buffers - num_buffers);
kmutex_unlock(&g_smscore_deviceslock);
msleep(100);
kmutex_lock(&g_smscore_deviceslock);
}
- sms_info("freed %d buffers", num_buffers);
+ pr_debug("freed %d buffers\n", num_buffers);
if (coredev->common_buffer)
dma_free_coherent(NULL, coredev->common_buffer_size,
@@ -1250,7 +1248,7 @@
kmutex_unlock(&g_smscore_deviceslock);
- sms_info("device %p destroyed", coredev);
+ pr_debug("device %p destroyed\n", coredev);
}
EXPORT_SYMBOL_GPL(smscore_unregister_device);
@@ -1271,7 +1269,7 @@
rc = smscore_sendrequest_and_wait(coredev, msg, msg->msg_length,
&coredev->version_ex_done);
if (rc == -ETIME) {
- sms_err("MSG_SMS_GET_VERSION_EX_REQ failed first try");
+ pr_err("MSG_SMS_GET_VERSION_EX_REQ failed first try\n");
if (wait_for_completion_timeout(&coredev->resume_done,
msecs_to_jiffies(5000))) {
@@ -1279,7 +1277,7 @@
coredev, msg, msg->msg_length,
&coredev->version_ex_done);
if (rc < 0)
- sms_err("MSG_SMS_GET_VERSION_EX_REQ failed second try, rc %d",
+ pr_err("MSG_SMS_GET_VERSION_EX_REQ failed second try, rc %d\n",
rc);
} else
rc = -ETIME;
@@ -1308,7 +1306,7 @@
buffer = kmalloc(sizeof(struct sms_msg_data) +
SMS_DMA_ALIGNMENT, GFP_KERNEL | GFP_DMA);
if (!buffer) {
- sms_err("Could not allocate buffer for init device message.");
+ pr_err("Could not allocate buffer for init device message.\n");
return -ENOMEM;
}
@@ -1339,10 +1337,10 @@
{
int rc = 0;
- sms_debug("set device mode to %d", mode);
+ pr_debug("set device mode to %d\n", mode);
if (coredev->device_flags & SMS_DEVICE_FAMILY2) {
if (mode <= DEVICE_MODE_NONE || mode >= DEVICE_MODE_MAX) {
- sms_err("invalid mode specified %d", mode);
+ pr_err("invalid mode specified %d\n", mode);
return -EINVAL;
}
@@ -1351,13 +1349,13 @@
if (!(coredev->device_flags & SMS_DEVICE_NOT_READY)) {
rc = smscore_detect_mode(coredev);
if (rc < 0) {
- sms_err("mode detect failed %d", rc);
+ pr_err("mode detect failed %d\n", rc);
return rc;
}
}
if (coredev->mode == mode) {
- sms_info("device mode %d already set", mode);
+ pr_debug("device mode %d already set\n", mode);
return 0;
}
@@ -1365,19 +1363,19 @@
rc = smscore_load_firmware_from_file(coredev,
mode, NULL);
if (rc >= 0)
- sms_info("firmware download success");
+ pr_debug("firmware download success\n");
} else {
- sms_info("mode %d is already supported by running firmware",
+ pr_debug("mode %d is already supported by running firmware\n",
mode);
}
if (coredev->fw_version >= 0x800) {
rc = smscore_init_device(coredev, mode);
if (rc < 0)
- sms_err("device init failed, rc %d.", rc);
+ pr_err("device init failed, rc %d.\n", rc);
}
} else {
if (mode <= DEVICE_MODE_NONE || mode >= DEVICE_MODE_MAX) {
- sms_err("invalid mode specified %d", mode);
+ pr_err("invalid mode specified %d\n", mode);
return -EINVAL;
}
@@ -1414,9 +1412,9 @@
}
if (rc < 0)
- sms_err("return error code %d.", rc);
+ pr_err("return error code %d.\n", rc);
else
- sms_debug("Success setting device mode.");
+ pr_debug("Success setting device mode.\n");
return rc;
}
@@ -1495,7 +1493,7 @@
last_sample_time = time_now;
if (time_now - last_sample_time > 10000) {
- sms_debug("data rate %d bytes/secs",
+ pr_debug("data rate %d bytes/secs\n",
(int)((data_total * 1000) /
(time_now - last_sample_time)));
@@ -1539,7 +1537,7 @@
{
struct sms_version_res *ver =
(struct sms_version_res *) phdr;
- sms_debug("Firmware id %d prots 0x%x ver %d.%d",
+ pr_debug("Firmware id %d prots 0x%x ver %d.%d\n",
ver->firmware_id, ver->supported_protocols,
ver->rom_ver_major, ver->rom_ver_minor);
@@ -1562,7 +1560,7 @@
{
struct sms_msg_data *validity = (struct sms_msg_data *) phdr;
- sms_debug("MSG_SMS_DATA_VALIDITY_RES, checksum = 0x%x",
+ pr_debug("MSG_SMS_DATA_VALIDITY_RES, checksum = 0x%x\n",
validity->msg_data[0]);
complete(&coredev->data_validity_done);
break;
@@ -1588,7 +1586,7 @@
{
u32 *msgdata = (u32 *) phdr;
coredev->gpio_get_res = msgdata[1];
- sms_debug("gpio level %d",
+ pr_debug("gpio level %d\n",
coredev->gpio_get_res);
complete(&coredev->gpio_get_level_done);
break;
@@ -1615,7 +1613,7 @@
break;
default:
- sms_debug("message %s(%d) not handled.",
+ pr_debug("message %s(%d) not handled.\n",
smscore_translate_msg(phdr->msg_type),
phdr->msg_type);
break;
@@ -1681,7 +1679,7 @@
struct smscore_client_t *registered_client;
if (!client) {
- sms_err("bad parameter.");
+ pr_err("bad parameter.\n");
return -EINVAL;
}
registered_client = smscore_find_client(coredev, data_type, id);
@@ -1689,12 +1687,12 @@
return 0;
if (registered_client) {
- sms_err("The msg ID already registered to another client.");
+ pr_err("The msg ID already registered to another client.\n");
return -EEXIST;
}
listentry = kzalloc(sizeof(struct smscore_idlist_t), GFP_KERNEL);
if (!listentry) {
- sms_err("Can't allocate memory for client id.");
+ pr_err("Can't allocate memory for client id.\n");
return -ENOMEM;
}
listentry->id = id;
@@ -1726,13 +1724,13 @@
/* check that no other channel with same parameters exists */
if (smscore_find_client(coredev, params->data_type,
params->initial_id)) {
- sms_err("Client already exist.");
+ pr_err("Client already exist.\n");
return -EEXIST;
}
newclient = kzalloc(sizeof(struct smscore_client_t), GFP_KERNEL);
if (!newclient) {
- sms_err("Failed to allocate memory for client.");
+ pr_err("Failed to allocate memory for client.\n");
return -ENOMEM;
}
@@ -1746,7 +1744,7 @@
smscore_validate_client(coredev, newclient, params->data_type,
params->initial_id);
*client = newclient;
- sms_debug("%p %d %d", params->context, params->data_type,
+ pr_debug("%p %d %d\n", params->context, params->data_type,
params->initial_id);
return 0;
@@ -1775,7 +1773,7 @@
kfree(identry);
}
- sms_info("%p", client->context);
+ pr_debug("%p\n", client->context);
list_del(&client->entry);
kfree(client);
@@ -1803,7 +1801,7 @@
int rc;
if (client == NULL) {
- sms_err("Got NULL client");
+ pr_err("Got NULL client\n");
return -EINVAL;
}
@@ -1811,7 +1809,7 @@
/* check that no other channel with same id exists */
if (coredev == NULL) {
- sms_err("Got NULL coredev");
+ pr_err("Got NULL coredev\n");
return -EINVAL;
}
@@ -2016,9 +2014,9 @@
if (rc != 0) {
if (rc == -ETIME)
- sms_err("smscore_gpio_configure timeout");
+ pr_err("smscore_gpio_configure timeout\n");
else
- sms_err("smscore_gpio_configure error");
+ pr_err("smscore_gpio_configure error\n");
}
free:
kfree(buffer);
@@ -2065,9 +2063,9 @@
if (rc != 0) {
if (rc == -ETIME)
- sms_err("smscore_gpio_set_level timeout");
+ pr_err("smscore_gpio_set_level timeout\n");
else
- sms_err("smscore_gpio_set_level error");
+ pr_err("smscore_gpio_set_level error\n");
}
kfree(buffer);
@@ -2113,9 +2111,9 @@
if (rc != 0) {
if (rc == -ETIME)
- sms_err("smscore_gpio_get_level timeout");
+ pr_err("smscore_gpio_get_level timeout\n");
else
- sms_err("smscore_gpio_get_level error");
+ pr_err("smscore_gpio_get_level error\n");
}
kfree(buffer);
@@ -2163,7 +2161,7 @@
}
kmutex_unlock(&g_smscore_registrylock);
- sms_debug("");
+ pr_debug("\n");
}
module_init(smscore_module_init);
diff --git a/drivers/media/common/siano/smscoreapi.h b/drivers/media/common/siano/smscoreapi.h
index 9c9063c..eb8bd68 100644
--- a/drivers/media/common/siano/smscoreapi.h
+++ b/drivers/media/common/siano/smscoreapi.h
@@ -22,6 +22,8 @@
#ifndef __SMS_CORE_API_H__
#define __SMS_CORE_API_H__
+#define pr_fmt(fmt) "%s:%s: " fmt, KBUILD_MODNAME, __func__
+
#include <linux/device.h>
#include <linux/list.h>
#include <linux/mm.h>
@@ -31,6 +33,8 @@
#include <linux/wait.h>
#include <linux/timer.h>
+#include <media/media-device.h>
+
#include <asm/page.h>
#include "smsir.h"
@@ -215,6 +219,10 @@
bool is_usb_device;
int led_state;
+
+#if defined(CONFIG_MEDIA_CONTROLLER_DVB)
+ struct media_device *media_dev;
+#endif
};
/* GPIO definitions for antenna frequency domain control (SMS8021) */
@@ -1115,7 +1123,8 @@
extern void smscore_unregister_hotplug(hotplug_t hotplug);
extern int smscore_register_device(struct smsdevice_params_t *params,
- struct smscore_device_t **coredev);
+ struct smscore_device_t **coredev,
+ void *mdev);
extern void smscore_unregister_device(struct smscore_device_t *coredev);
extern int smscore_start_device(struct smscore_device_t *coredev);
@@ -1168,25 +1177,4 @@
/* ------------------------------------------------------------------------ */
-#define DBG_INFO 1
-#define DBG_ADV 2
-
-#define sms_printk(kern, fmt, arg...) \
- printk(kern "%s: " fmt "\n", __func__, ##arg)
-
-#define dprintk(kern, lvl, fmt, arg...) do {\
- if (sms_dbg & lvl) \
- sms_printk(kern, fmt, ##arg); \
-} while (0)
-
-#define sms_log(fmt, arg...) sms_printk(KERN_INFO, fmt, ##arg)
-#define sms_err(fmt, arg...) \
- sms_printk(KERN_ERR, "line: %d: " fmt, __LINE__, ##arg)
-#define sms_warn(fmt, arg...) sms_printk(KERN_WARNING, fmt, ##arg)
-#define sms_info(fmt, arg...) \
- dprintk(KERN_INFO, DBG_INFO, fmt, ##arg)
-#define sms_debug(fmt, arg...) \
- dprintk(KERN_DEBUG, DBG_ADV, fmt, ##arg)
-
-
#endif /* __SMS_CORE_API_H__ */
diff --git a/drivers/media/common/siano/smsdvb-debugfs.c b/drivers/media/common/siano/smsdvb-debugfs.c
index 2408d7e..1a8677a 100644
--- a/drivers/media/common/siano/smsdvb-debugfs.c
+++ b/drivers/media/common/siano/smsdvb-debugfs.c
@@ -17,7 +17,7 @@
*
***********************************************************************/
-#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#include "smscoreapi.h"
#include <linux/module.h>
#include <linux/slab.h>
@@ -31,8 +31,6 @@
#include "dvb_demux.h"
#include "dvb_frontend.h"
-#include "smscoreapi.h"
-
#include "smsdvb.h"
static struct dentry *smsdvb_debugfs_usb_root;
@@ -536,7 +534,7 @@
*/
d = debugfs_create_dir("smsdvb", usb_debug_root);
if (IS_ERR_OR_NULL(d)) {
- sms_err("Couldn't create sysfs node for smsdvb");
+ pr_err("Couldn't create sysfs node for smsdvb\n");
return PTR_ERR(d);
} else {
smsdvb_debugfs_usb_root = d;
diff --git a/drivers/media/common/siano/smsdvb-main.c b/drivers/media/common/siano/smsdvb-main.c
index 85151ef..367b8e7 100644
--- a/drivers/media/common/siano/smsdvb-main.c
+++ b/drivers/media/common/siano/smsdvb-main.c
@@ -19,6 +19,8 @@
****************************************************************/
+#include "smscoreapi.h"
+
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/init.h>
@@ -29,7 +31,6 @@
#include "dvb_demux.h"
#include "dvb_frontend.h"
-#include "smscoreapi.h"
#include "sms-cards.h"
#include "smsdvb.h"
@@ -39,11 +40,6 @@
static struct list_head g_smsdvb_clients;
static struct mutex g_smsdvb_clientslock;
-static int sms_dbg;
-module_param_named(debug, sms_dbg, int, 0644);
-MODULE_PARM_DESC(debug, "set debug level (info=1, adv=2 (or-able))");
-
-
static u32 sms_to_guard_interval_table[] = {
[0] = GUARD_INTERVAL_1_32,
[1] = GUARD_INTERVAL_1_16,
@@ -82,48 +78,48 @@
struct smscore_device_t *coredev = client->coredev;
switch (event) {
case DVB3_EVENT_INIT:
- sms_debug("DVB3_EVENT_INIT");
+ pr_debug("DVB3_EVENT_INIT\n");
sms_board_event(coredev, BOARD_EVENT_BIND);
break;
case DVB3_EVENT_SLEEP:
- sms_debug("DVB3_EVENT_SLEEP");
+ pr_debug("DVB3_EVENT_SLEEP\n");
sms_board_event(coredev, BOARD_EVENT_POWER_SUSPEND);
break;
case DVB3_EVENT_HOTPLUG:
- sms_debug("DVB3_EVENT_HOTPLUG");
+ pr_debug("DVB3_EVENT_HOTPLUG\n");
sms_board_event(coredev, BOARD_EVENT_POWER_INIT);
break;
case DVB3_EVENT_FE_LOCK:
if (client->event_fe_state != DVB3_EVENT_FE_LOCK) {
client->event_fe_state = DVB3_EVENT_FE_LOCK;
- sms_debug("DVB3_EVENT_FE_LOCK");
+ pr_debug("DVB3_EVENT_FE_LOCK\n");
sms_board_event(coredev, BOARD_EVENT_FE_LOCK);
}
break;
case DVB3_EVENT_FE_UNLOCK:
if (client->event_fe_state != DVB3_EVENT_FE_UNLOCK) {
client->event_fe_state = DVB3_EVENT_FE_UNLOCK;
- sms_debug("DVB3_EVENT_FE_UNLOCK");
+ pr_debug("DVB3_EVENT_FE_UNLOCK\n");
sms_board_event(coredev, BOARD_EVENT_FE_UNLOCK);
}
break;
case DVB3_EVENT_UNC_OK:
if (client->event_unc_state != DVB3_EVENT_UNC_OK) {
client->event_unc_state = DVB3_EVENT_UNC_OK;
- sms_debug("DVB3_EVENT_UNC_OK");
+ pr_debug("DVB3_EVENT_UNC_OK\n");
sms_board_event(coredev, BOARD_EVENT_MULTIPLEX_OK);
}
break;
case DVB3_EVENT_UNC_ERR:
if (client->event_unc_state != DVB3_EVENT_UNC_ERR) {
client->event_unc_state = DVB3_EVENT_UNC_ERR;
- sms_debug("DVB3_EVENT_UNC_ERR");
+ pr_debug("DVB3_EVENT_UNC_ERR\n");
sms_board_event(coredev, BOARD_EVENT_MULTIPLEX_ERRORS);
}
break;
default:
- sms_err("Unknown dvb3 api event");
+ pr_err("Unknown dvb3 api event\n");
break;
}
}
@@ -590,7 +586,7 @@
is_status_update = true;
break;
default:
- sms_info("message not handled");
+ pr_debug("message not handled\n");
}
smscore_putbuffer(client->coredev, cb);
@@ -613,6 +609,19 @@
return 0;
}
+static void smsdvb_media_device_unregister(struct smsdvb_client_t *client)
+{
+#ifdef CONFIG_MEDIA_CONTROLLER_DVB
+ struct smscore_device_t *coredev = client->coredev;
+
+ if (!coredev->media_dev)
+ return;
+ media_device_unregister(coredev->media_dev);
+ kfree(coredev->media_dev);
+ coredev->media_dev = NULL;
+#endif
+}
+
static void smsdvb_unregister_client(struct smsdvb_client_t *client)
{
/* must be called under clientslock */
@@ -624,6 +633,7 @@
dvb_unregister_frontend(&client->frontend);
dvb_dmxdev_release(&client->dmxdev);
dvb_dmx_release(&client->demux);
+ smsdvb_media_device_unregister(client);
dvb_unregister_adapter(&client->adapter);
kfree(client);
}
@@ -643,7 +653,7 @@
container_of(feed->demux, struct smsdvb_client_t, demux);
struct sms_msg_data pid_msg;
- sms_debug("add pid %d(%x)",
+ pr_debug("add pid %d(%x)\n",
feed->pid, feed->pid);
client->feed_users++;
@@ -665,7 +675,7 @@
container_of(feed->demux, struct smsdvb_client_t, demux);
struct sms_msg_data pid_msg;
- sms_debug("remove pid %d(%x)",
+ pr_debug("remove pid %d(%x)\n",
feed->pid, feed->pid);
client->feed_users--;
@@ -835,7 +845,7 @@
static int smsdvb_get_tune_settings(struct dvb_frontend *fe,
struct dvb_frontend_tune_settings *tune)
{
- sms_debug("");
+ pr_debug("\n");
tune->min_delay_ms = 400;
tune->step_size = 250000;
@@ -869,7 +879,7 @@
msg.Data[0] = c->frequency;
msg.Data[2] = 12000000;
- sms_info("%s: freq %d band %d", __func__, c->frequency,
+ pr_debug("%s: freq %d band %d\n", __func__, c->frequency,
c->bandwidth_hz);
switch (c->bandwidth_hz / 1000000) {
@@ -954,7 +964,7 @@
c->bandwidth_hz = 6000000;
- sms_info("%s: freq %d segwidth %d segindex %d", __func__,
+ pr_debug("freq %d segwidth %d segindex %d\n",
c->frequency, c->isdbt_sb_segment_count,
c->isdbt_sb_segment_idx);
@@ -1082,10 +1092,8 @@
if (!arrival)
return 0;
client = kzalloc(sizeof(struct smsdvb_client_t), GFP_KERNEL);
- if (!client) {
- sms_err("kmalloc() failed");
+ if (!client)
return -ENOMEM;
- }
/* register dvb adapter */
rc = dvb_register_adapter(&client->adapter,
@@ -1093,9 +1101,10 @@
smscore_get_board_id(coredev))->name,
THIS_MODULE, device, adapter_nr);
if (rc < 0) {
- sms_err("dvb_register_adapter() failed %d", rc);
+ pr_err("dvb_register_adapter() failed %d\n", rc);
goto adapter_error;
}
+ dvb_register_media_controller(&client->adapter, coredev->media_dev);
/* init dvb demux */
client->demux.dmx.capabilities = DMX_TS_FILTERING;
@@ -1106,7 +1115,7 @@
rc = dvb_dmx_init(&client->demux);
if (rc < 0) {
- sms_err("dvb_dmx_init failed %d", rc);
+ pr_err("dvb_dmx_init failed %d\n", rc);
goto dvbdmx_error;
}
@@ -1117,7 +1126,7 @@
rc = dvb_dmxdev_init(&client->dmxdev, &client->adapter);
if (rc < 0) {
- sms_err("dvb_dmxdev_init failed %d", rc);
+ pr_err("dvb_dmxdev_init failed %d\n", rc);
goto dmxdev_error;
}
@@ -1138,7 +1147,7 @@
rc = dvb_register_frontend(&client->adapter, &client->frontend);
if (rc < 0) {
- sms_err("frontend registration failed %d", rc);
+ pr_err("frontend registration failed %d\n", rc);
goto frontend_error;
}
@@ -1150,7 +1159,7 @@
rc = smscore_register_client(coredev, &params, &client->smsclient);
if (rc < 0) {
- sms_err("smscore_register_client() failed %d", rc);
+ pr_err("smscore_register_client() failed %d\n", rc);
goto client_error;
}
@@ -1169,12 +1178,14 @@
client->event_unc_state = -1;
sms_board_dvb3_event(client, DVB3_EVENT_HOTPLUG);
- sms_info("success");
sms_board_setup(coredev);
if (smsdvb_debugfs_create(client) < 0)
- sms_info("failed to create debugfs node");
+ pr_info("failed to create debugfs node\n");
+ dvb_create_media_graph(&client->adapter);
+
+ pr_info("DVB interface registered.\n");
return 0;
client_error:
@@ -1187,6 +1198,7 @@
dvb_dmx_release(&client->demux);
dvbdmx_error:
+ smsdvb_media_device_unregister(client);
dvb_unregister_adapter(&client->adapter);
adapter_error:
@@ -1205,7 +1217,7 @@
rc = smscore_register_hotplug(smsdvb_hotplug);
- sms_debug("");
+ pr_debug("\n");
return rc;
}
diff --git a/drivers/media/common/siano/smsir.c b/drivers/media/common/siano/smsir.c
index 35d0e88..1d60d20 100644
--- a/drivers/media/common/siano/smsir.c
+++ b/drivers/media/common/siano/smsir.c
@@ -25,10 +25,11 @@
****************************************************************/
+#include "smscoreapi.h"
+
#include <linux/types.h>
#include <linux/input.h>
-#include "smscoreapi.h"
#include "smsir.h"
#include "sms-cards.h"
@@ -56,16 +57,14 @@
int board_id = smscore_get_board_id(coredev);
struct rc_dev *dev;
- sms_log("Allocating rc device");
+ pr_debug("Allocating rc device\n");
dev = rc_allocate_device();
- if (!dev) {
- sms_err("Not enough memory");
+ if (!dev)
return -ENOMEM;
- }
coredev->ir.controller = 0; /* Todo: vega/nova SPI number */
coredev->ir.timeout = IR_DEFAULT_TIMEOUT;
- sms_log("IR port %d, timeout %d ms",
+ pr_debug("IR port %d, timeout %d ms\n",
coredev->ir.controller, coredev->ir.timeout);
snprintf(coredev->ir.name, sizeof(coredev->ir.name),
@@ -92,11 +91,12 @@
dev->map_name = sms_get_board(board_id)->rc_codes;
dev->driver_name = MODULE_NAME;
- sms_log("Input device (IR) %s is set for key events", dev->input_name);
+ pr_debug("Input device (IR) %s is set for key events\n",
+ dev->input_name);
err = rc_register_device(dev);
if (err < 0) {
- sms_err("Failed to register device");
+ pr_err("Failed to register device\n");
rc_free_device(dev);
return err;
}
@@ -109,5 +109,5 @@
{
rc_unregister_device(coredev->ir.dev);
- sms_log("");
+ pr_debug("\n");
}
diff --git a/drivers/media/dvb-core/dmxdev.c b/drivers/media/dvb-core/dmxdev.c
index abff803..d0e3f9d 100644
--- a/drivers/media/dvb-core/dmxdev.c
+++ b/drivers/media/dvb-core/dmxdev.c
@@ -1136,10 +1136,13 @@
.llseek = default_llseek,
};
-static struct dvb_device dvbdev_demux = {
+static const struct dvb_device dvbdev_demux = {
.priv = NULL,
.users = 1,
.writers = 1,
+#if defined(CONFIG_MEDIA_CONTROLLER_DVB)
+ .name = "dvb-demux",
+#endif
.fops = &dvb_demux_fops
};
@@ -1209,13 +1212,15 @@
.llseek = default_llseek,
};
-static struct dvb_device dvbdev_dvr = {
+static const struct dvb_device dvbdev_dvr = {
.priv = NULL,
.readers = 1,
.users = 1,
+#if defined(CONFIG_MEDIA_CONTROLLER_DVB)
+ .name = "dvb-dvr",
+#endif
.fops = &dvb_dvr_fops
};
-
int dvb_dmxdev_init(struct dmxdev *dmxdev, struct dvb_adapter *dvb_adapter)
{
int i;
diff --git a/drivers/media/dvb-core/dvb-usb-ids.h b/drivers/media/dvb-core/dvb-usb-ids.h
index 80ab8d0..c117fb3 100644
--- a/drivers/media/dvb-core/dvb-usb-ids.h
+++ b/drivers/media/dvb-core/dvb-usb-ids.h
@@ -245,6 +245,7 @@
#define USB_PID_TECHNOTREND_CONNECT_S2400 0x3006
#define USB_PID_TECHNOTREND_CONNECT_S2400_8KEEPROM 0x3009
#define USB_PID_TECHNOTREND_CONNECT_CT3650 0x300d
+#define USB_PID_TECHNOTREND_CONNECT_S2_4600 0x3011
#define USB_PID_TECHNOTREND_CONNECT_CT2_4650_CI 0x3012
#define USB_PID_TECHNOTREND_TVSTICK_CT2_4400 0x3014
#define USB_PID_TERRATEC_CINERGY_DT_XS_DIVERSITY 0x005a
@@ -318,6 +319,7 @@
#define USB_PID_GRANDTEC_DVBT_USB2_COLD 0x0bc6
#define USB_PID_GRANDTEC_DVBT_USB2_WARM 0x0bc7
#define USB_PID_WINFAST_DTV2000DS 0x6a04
+#define USB_PID_WINFAST_DTV2000DS_PLUS 0x6f12
#define USB_PID_WINFAST_DTV_DONGLE_COLD 0x6025
#define USB_PID_WINFAST_DTV_DONGLE_WARM 0x6026
#define USB_PID_WINFAST_DTV_DONGLE_STK7700P 0x6f00
@@ -385,4 +387,5 @@
#define USB_PID_PCTV_2002E 0x025c
#define USB_PID_PCTV_2002E_SE 0x025d
#define USB_PID_SVEON_STV27 0xd3af
+#define USB_PID_TURBOX_DTT_2000 0xd3a4
#endif
diff --git a/drivers/media/dvb-core/dvb_ca_en50221.c b/drivers/media/dvb-core/dvb_ca_en50221.c
index 0aac309..7293775 100644
--- a/drivers/media/dvb-core/dvb_ca_en50221.c
+++ b/drivers/media/dvb-core/dvb_ca_en50221.c
@@ -1638,15 +1638,17 @@
.llseek = noop_llseek,
};
-static struct dvb_device dvbdev_ca = {
+static const struct dvb_device dvbdev_ca = {
.priv = NULL,
.users = 1,
.readers = 1,
.writers = 1,
+#if defined(CONFIG_MEDIA_CONTROLLER_DVB)
+ .name = "dvb-ca-en50221",
+#endif
.fops = &dvb_ca_fops,
};
-
/* ******************************************************************************** */
/* Initialisation/shutdown functions */
@@ -1676,14 +1678,14 @@
/* initialise the system data */
if ((ca = kzalloc(sizeof(struct dvb_ca_private), GFP_KERNEL)) == NULL) {
ret = -ENOMEM;
- goto error;
+ goto exit;
}
ca->pub = pubca;
ca->flags = flags;
ca->slot_count = slot_count;
if ((ca->slot_info = kcalloc(slot_count, sizeof(struct dvb_ca_slot), GFP_KERNEL)) == NULL) {
ret = -ENOMEM;
- goto error;
+ goto free_ca;
}
init_waitqueue_head(&ca->wait_queue);
ca->open = 0;
@@ -1694,7 +1696,7 @@
/* register the DVB device */
ret = dvb_register_device(dvb_adapter, &ca->dvbdev, &dvbdev_ca, ca, DVB_DEVICE_CA);
if (ret)
- goto error;
+ goto free_slot_info;
/* now initialise each slot */
for (i = 0; i < slot_count; i++) {
@@ -1709,7 +1711,7 @@
if (signal_pending(current)) {
ret = -EINTR;
- goto error;
+ goto unregister_device;
}
mb();
@@ -1720,17 +1722,17 @@
ret = PTR_ERR(ca->thread);
printk("dvb_ca_init: failed to start kernel_thread (%d)\n",
ret);
- goto error;
+ goto unregister_device;
}
return 0;
-error:
- if (ca != NULL) {
- if (ca->dvbdev != NULL)
- dvb_unregister_device(ca->dvbdev);
- kfree(ca->slot_info);
- kfree(ca);
- }
+unregister_device:
+ dvb_unregister_device(ca->dvbdev);
+free_slot_info:
+ kfree(ca->slot_info);
+free_ca:
+ kfree(ca);
+exit:
pubca->private = NULL;
return ret;
}
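
The error-path rework above replaces the old catch-all "error:" label with one
label per acquired resource, so each failure point only unwinds what was
actually set up, in reverse order. A minimal, hypothetical sketch of that
unwind pattern (names and sizes are illustrative, not code from this patch):

	#include <linux/slab.h>

	static int example_init(void **out_a, void **out_b)
	{
		void *a, *b;

		a = kzalloc(32, GFP_KERNEL);
		if (!a)
			goto exit;		/* nothing to undo yet */
		b = kzalloc(32, GFP_KERNEL);
		if (!b)
			goto free_a;		/* undo only the first step */

		*out_a = a;
		*out_b = b;
		return 0;

	free_a:
		kfree(a);
	exit:
		return -ENOMEM;
	}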
diff --git a/drivers/media/dvb-core/dvb_frontend.c b/drivers/media/dvb-core/dvb_frontend.c
index 2cf3057..882ca41 100644
--- a/drivers/media/dvb-core/dvb_frontend.c
+++ b/drivers/media/dvb-core/dvb_frontend.c
@@ -131,6 +131,11 @@
int quality;
unsigned int check_wrapped;
enum dvbfe_search algo_status;
+
+#if defined(CONFIG_MEDIA_CONTROLLER_DVB)
+ struct media_pipeline pipe;
+ struct media_entity *pipe_start_entity;
+#endif
};
static void dvb_frontend_wakeup(struct dvb_frontend *fe);
@@ -590,12 +595,106 @@
wake_up_interruptible(&fepriv->wait_queue);
}
+/**
+ * dvb_enable_media_tuner() - tries to enable the DVB tuner
+ *
+ * @fe: struct dvb_frontend pointer
+ *
+ * This function ensures that just one media tuner is enabled for a given
+ * frontend. It has two different behaviors:
+ * - For trivial devices with just one tuner:
+ * it just enables the existing tuner->fe link
+ * - For devices with more than one tuner:
+ * It is up to the driver to implement the logic that will enable one tuner
+ * and disable the other ones. However, if more than one tuner is enabled for
+ * the same frontend, it will print an error message and return -EINVAL.
+ *
+ * It returns the error code returned by media_entity_setup_link(), or 0 if
+ * everything is OK, if no tuner is linked to the frontend, or if the
+ * mdev is NULL.
+ */
+#ifdef CONFIG_MEDIA_CONTROLLER_DVB
+static int dvb_enable_media_tuner(struct dvb_frontend *fe)
+{
+ struct dvb_frontend_private *fepriv = fe->frontend_priv;
+ struct dvb_adapter *adapter = fe->dvb;
+ struct media_device *mdev = adapter->mdev;
+ struct media_entity *entity, *source;
+ struct media_link *link, *found_link = NULL;
+ int i, ret, n_links = 0, active_links = 0;
+
+ fepriv->pipe_start_entity = NULL;
+
+ if (!mdev)
+ return 0;
+
+ entity = fepriv->dvbdev->entity;
+ fepriv->pipe_start_entity = entity;
+
+ for (i = 0; i < entity->num_links; i++) {
+ link = &entity->links[i];
+ if (link->sink->entity == entity) {
+ found_link = link;
+ n_links++;
+ if (link->flags & MEDIA_LNK_FL_ENABLED)
+ active_links++;
+ }
+ }
+
+ if (!n_links || active_links == 1 || !found_link)
+ return 0;
+
+ /*
+ * If a frontend has more than one tuner linked, it is up to the driver
+ * to select which one will be the active one, as the frontend core can't
+ * guess. If the driver doesn't do that, it is a bug.
+ */
+ if (n_links > 1 && active_links != 1) {
+ dev_err(fe->dvb->device,
+ "WARNING: there are %d active links among %d tuners. This is a driver's bug!\n",
+ active_links, n_links);
+ return -EINVAL;
+ }
+
+ source = found_link->source->entity;
+ fepriv->pipe_start_entity = source;
+ for (i = 0; i < source->num_links; i++) {
+ struct media_entity *sink;
+ int flags = 0;
+
+ link = &source->links[i];
+ sink = link->sink->entity;
+
+ if (sink == entity)
+ flags = MEDIA_LNK_FL_ENABLED;
+
+ ret = media_entity_setup_link(link, flags);
+ if (ret) {
+ dev_err(fe->dvb->device,
+ "Couldn't change link %s->%s to %s. Error %d\n",
+ source->name, sink->name,
+ flags ? "enabled" : "disabled",
+ ret);
+ return ret;
+ } else
+ dev_dbg(fe->dvb->device,
+ "link %s->%s was %s\n",
+ source->name, sink->name,
+ flags ? "ENABLED" : "disabled");
+ }
+ return 0;
+}
+#endif
+
static int dvb_frontend_thread(void *data)
{
struct dvb_frontend *fe = data;
struct dvb_frontend_private *fepriv = fe->frontend_priv;
fe_status_t s;
enum dvbfe_algo algo;
+#ifdef CONFIG_MEDIA_CONTROLLER_DVB
+ int ret;
+#endif
bool re_tune = false;
bool semheld = false;
@@ -609,6 +708,20 @@
fepriv->wakeup = 0;
fepriv->reinitialise = 0;
+#ifdef CONFIG_MEDIA_CONTROLLER_DVB
+ ret = dvb_enable_media_tuner(fe);
+ if (ret) {
+ /* FIXME: return an error if it fails */
+ dev_info(fe->dvb->device,
+ "proceeding with FE task\n");
+ } else if (fepriv->pipe_start_entity) {
+ ret = media_entity_pipeline_start(fepriv->pipe_start_entity,
+ &fepriv->pipe);
+ if (ret)
+ return ret;
+ }
+#endif
+
dvb_frontend_init(fe);
set_freezable();
@@ -718,6 +831,12 @@
}
}
+#ifdef CONFIG_MEDIA_CONTROLLER_DVB
+ if (fepriv->pipe_start_entity)
+ media_entity_pipeline_stop(fepriv->pipe_start_entity);
+ fepriv->pipe_start_entity = NULL;
+#endif
+
if (dvb_powerdown_on_sleep) {
if (fe->ops.set_voltage)
fe->ops.set_voltage(fe, SEC_VOLTAGE_OFF);
@@ -2612,11 +2731,14 @@
struct dvb_frontend* fe)
{
struct dvb_frontend_private *fepriv;
- static const struct dvb_device dvbdev_template = {
+ const struct dvb_device dvbdev_template = {
.users = ~0,
.writers = 1,
.readers = (~0)-1,
.fops = &dvb_frontend_fops,
+#if defined(CONFIG_MEDIA_CONTROLLER_DVB)
+ .name = fe->ops.info.name,
+#endif
.kernel_ioctl = dvb_frontend_ioctl
};
diff --git a/drivers/media/dvb-core/dvb_net.c b/drivers/media/dvb-core/dvb_net.c
index 4a77cb0..a694fb1 100644
--- a/drivers/media/dvb-core/dvb_net.c
+++ b/drivers/media/dvb-core/dvb_net.c
@@ -1461,14 +1461,16 @@
.llseek = noop_llseek,
};
-static struct dvb_device dvbdev_net = {
+static const struct dvb_device dvbdev_net = {
.priv = NULL,
.users = 1,
.writers = 1,
+#if defined(CONFIG_MEDIA_CONTROLLER_DVB)
+ .name = "dvb-net",
+#endif
.fops = &dvb_net_fops,
};
-
void dvb_net_release (struct dvb_net *dvbnet)
{
int i;
diff --git a/drivers/media/dvb-core/dvbdev.c b/drivers/media/dvb-core/dvbdev.c
index 983db75..13bb57f 100644
--- a/drivers/media/dvb-core/dvbdev.c
+++ b/drivers/media/dvb-core/dvbdev.c
@@ -180,6 +180,93 @@
return -ENFILE;
}
+static void dvb_register_media_device(struct dvb_device *dvbdev,
+ int type, int minor)
+{
+#if defined(CONFIG_MEDIA_CONTROLLER_DVB)
+ int ret = 0, npads;
+
+ if (!dvbdev->adapter->mdev)
+ return;
+
+ dvbdev->entity = kzalloc(sizeof(*dvbdev->entity), GFP_KERNEL);
+ if (!dvbdev->entity)
+ return;
+
+ dvbdev->entity->info.dev.major = DVB_MAJOR;
+ dvbdev->entity->info.dev.minor = minor;
+ dvbdev->entity->name = dvbdev->name;
+
+ switch (type) {
+ case DVB_DEVICE_CA:
+ case DVB_DEVICE_DEMUX:
+ case DVB_DEVICE_FRONTEND:
+ npads = 2;
+ break;
+ case DVB_DEVICE_NET:
+ npads = 0;
+ break;
+ default:
+ npads = 1;
+ }
+
+ if (npads) {
+ dvbdev->pads = kcalloc(npads, sizeof(*dvbdev->pads),
+ GFP_KERNEL);
+ if (!dvbdev->pads) {
+ kfree(dvbdev->entity);
+ return;
+ }
+ }
+
+ switch (type) {
+ case DVB_DEVICE_FRONTEND:
+ dvbdev->entity->type = MEDIA_ENT_T_DEVNODE_DVB_FE;
+ dvbdev->pads[0].flags = MEDIA_PAD_FL_SINK;
+ dvbdev->pads[1].flags = MEDIA_PAD_FL_SOURCE;
+ break;
+ case DVB_DEVICE_DEMUX:
+ dvbdev->entity->type = MEDIA_ENT_T_DEVNODE_DVB_DEMUX;
+ dvbdev->pads[0].flags = MEDIA_PAD_FL_SINK;
+ dvbdev->pads[1].flags = MEDIA_PAD_FL_SOURCE;
+ break;
+ case DVB_DEVICE_DVR:
+ dvbdev->entity->type = MEDIA_ENT_T_DEVNODE_DVB_DVR;
+ dvbdev->pads[0].flags = MEDIA_PAD_FL_SINK;
+ break;
+ case DVB_DEVICE_CA:
+ dvbdev->entity->type = MEDIA_ENT_T_DEVNODE_DVB_CA;
+ dvbdev->pads[0].flags = MEDIA_PAD_FL_SINK;
+ dvbdev->pads[1].flags = MEDIA_PAD_FL_SOURCE;
+ break;
+ case DVB_DEVICE_NET:
+ dvbdev->entity->type = MEDIA_ENT_T_DEVNODE_DVB_NET;
+ break;
+ default:
+ kfree(dvbdev->entity);
+ dvbdev->entity = NULL;
+ return;
+ }
+
+ if (npads)
+ ret = media_entity_init(dvbdev->entity, npads, dvbdev->pads, 0);
+ if (!ret)
+ ret = media_device_register_entity(dvbdev->adapter->mdev,
+ dvbdev->entity);
+ if (ret < 0) {
+ printk(KERN_ERR
+ "%s: media_device_register_entity failed for %s\n",
+ __func__, dvbdev->entity->name);
+ kfree(dvbdev->pads);
+ kfree(dvbdev->entity);
+ dvbdev->entity = NULL;
+ return;
+ }
+
+ printk(KERN_DEBUG "%s: media device '%s' registered.\n",
+ __func__, dvbdev->entity->name);
+#endif
+}
int dvb_register_device(struct dvb_adapter *adap, struct dvb_device **pdvbdev,
const struct dvb_device *template, void *priv, int type)
@@ -258,10 +345,11 @@
__func__, adap->num, dnames[type], id, PTR_ERR(clsdev));
return PTR_ERR(clsdev);
}
-
dprintk(KERN_DEBUG "DVB: register adapter%d/%s%d @ minor: %i (0x%02x)\n",
adap->num, dnames[type], id, minor, minor);
+ dvb_register_media_device(dvbdev, type, minor);
+
return 0;
}
EXPORT_SYMBOL(dvb_register_device);
@@ -278,12 +366,66 @@
device_destroy(dvb_class, MKDEV(DVB_MAJOR, dvbdev->minor));
+#if defined(CONFIG_MEDIA_CONTROLLER_DVB)
+ if (dvbdev->entity) {
+ media_device_unregister_entity(dvbdev->entity);
+ kfree(dvbdev->entity);
+ kfree(dvbdev->pads);
+ }
+#endif
+
list_del (&dvbdev->list_head);
kfree (dvbdev->fops);
kfree (dvbdev);
}
EXPORT_SYMBOL(dvb_unregister_device);
+
+#ifdef CONFIG_MEDIA_CONTROLLER_DVB
+void dvb_create_media_graph(struct dvb_adapter *adap)
+{
+ struct media_device *mdev = adap->mdev;
+ struct media_entity *entity, *tuner = NULL, *fe = NULL;
+ struct media_entity *demux = NULL, *dvr = NULL, *ca = NULL;
+
+ if (!mdev)
+ return;
+
+ media_device_for_each_entity(entity, mdev) {
+ switch (entity->type) {
+ case MEDIA_ENT_T_V4L2_SUBDEV_TUNER:
+ tuner = entity;
+ break;
+ case MEDIA_ENT_T_DEVNODE_DVB_FE:
+ fe = entity;
+ break;
+ case MEDIA_ENT_T_DEVNODE_DVB_DEMUX:
+ demux = entity;
+ break;
+ case MEDIA_ENT_T_DEVNODE_DVB_DVR:
+ dvr = entity;
+ break;
+ case MEDIA_ENT_T_DEVNODE_DVB_CA:
+ ca = entity;
+ break;
+ }
+ }
+
+ if (tuner && fe)
+ media_entity_create_link(tuner, 0, fe, 0, 0);
+
+ if (fe && demux)
+ media_entity_create_link(fe, 1, demux, 0, MEDIA_LNK_FL_ENABLED);
+
+ if (demux && dvr)
+ media_entity_create_link(demux, 1, dvr, 0, MEDIA_LNK_FL_ENABLED);
+
+ if (demux && ca)
+ media_entity_create_link(demux, 1, ca, 0, MEDIA_LNK_FL_ENABLED);
+}
+EXPORT_SYMBOL_GPL(dvb_create_media_graph);
+#endif
+
static int dvbdev_check_free_adapter_num(int num)
{
struct list_head *entry;
diff --git a/drivers/media/dvb-core/dvbdev.h b/drivers/media/dvb-core/dvbdev.h
index f96b28e..12629b8 100644
--- a/drivers/media/dvb-core/dvbdev.h
+++ b/drivers/media/dvb-core/dvbdev.h
@@ -27,6 +27,7 @@
#include <linux/poll.h>
#include <linux/fs.h>
#include <linux/list.h>
+#include <media/media-device.h>
#define DVB_MAJOR 212
@@ -71,6 +72,10 @@
int mfe_shared; /* indicates mutually exclusive frontends */
struct dvb_device *mfe_dvbdev; /* frontend device in use */
struct mutex mfe_lock; /* access lock for thread creation */
+
+#if defined(CONFIG_MEDIA_CONTROLLER_DVB)
+ struct media_device *mdev;
+#endif
};
@@ -92,6 +97,15 @@
/* don't really need those !? -- FIXME: use video_usercopy */
int (*kernel_ioctl)(struct file *file, unsigned int cmd, void *arg);
+ /* Needed for media controller register/unregister */
+#if defined(CONFIG_MEDIA_CONTROLLER_DVB)
+ const char *name;
+
+ /* Allocated and filled inside dvbdev.c */
+ struct media_entity *entity;
+ struct media_pad *pads;
+#endif
+
void *priv;
};
@@ -109,6 +123,19 @@
extern void dvb_unregister_device (struct dvb_device *dvbdev);
+#ifdef CONFIG_MEDIA_CONTROLLER_DVB
+void dvb_create_media_graph(struct dvb_adapter *adap);
+static inline void dvb_register_media_controller(struct dvb_adapter *adap,
+ struct media_device *mdev)
+{
+ adap->mdev = mdev;
+}
+
+#else
+static inline void dvb_create_media_graph(struct dvb_adapter *adap) {}
+#define dvb_register_media_controller(a, b) {}
+#endif
+
extern int dvb_generic_open (struct inode *inode, struct file *file);
extern int dvb_generic_release (struct inode *inode, struct file *file);
extern long dvb_generic_ioctl (struct file *file,
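
Taken together with the smsdvb hunks earlier in this series, the intended
calling sequence for a bridge driver is: hand the (optional) media_device to
the adapter right after dvb_register_adapter(), register the usual devnodes,
and only then build the graph. A hedged sketch, with my_dev, my_media_dev,
adapter_nr and the client fields used as placeholder names rather than
anything defined by this patch:

	ret = dvb_register_adapter(&client->adapter, "my-bridge",
				   THIS_MODULE, my_dev, adapter_nr);
	if (ret < 0)
		return ret;

	/* safe no-op when CONFIG_MEDIA_CONTROLLER_DVB is off or mdev is NULL */
	dvb_register_media_controller(&client->adapter, my_media_dev);

	/* ... dvb_dmx_init(), dvb_dmxdev_init(), dvb_register_frontend() ... */

	/* creates tuner->fe->demux->{dvr,ca} links for the registered nodes */
	dvb_create_media_graph(&client->adapter);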
diff --git a/drivers/media/dvb-frontends/Kconfig b/drivers/media/dvb-frontends/Kconfig
index bb76727..97c151d 100644
--- a/drivers/media/dvb-frontends/Kconfig
+++ b/drivers/media/dvb-frontends/Kconfig
@@ -577,6 +577,14 @@
An ATSC 8VSB and QAM64/256 tuner module. Say Y when you want
to support this frontend.
+config DVB_LGDT3306A
+ tristate "LG Electronics LGDT3306A based"
+ depends on DVB_CORE && I2C
+ default m if !MEDIA_SUBDRV_AUTOSELECT
+ help
+ An ATSC 8VSB and QAM-B 64/256 demodulator module. Say Y when you want
+ to support this frontend.
+
config DVB_LG2160
tristate "LG Electronics LG216x based"
depends on DVB_CORE && I2C
diff --git a/drivers/media/dvb-frontends/Makefile b/drivers/media/dvb-frontends/Makefile
index ba59df6..23d399b 100644
--- a/drivers/media/dvb-frontends/Makefile
+++ b/drivers/media/dvb-frontends/Makefile
@@ -54,6 +54,7 @@
obj-$(CONFIG_DVB_S5H1420) += s5h1420.o
obj-$(CONFIG_DVB_LGDT330X) += lgdt330x.o
obj-$(CONFIG_DVB_LGDT3305) += lgdt3305.o
+obj-$(CONFIG_DVB_LGDT3306A) += lgdt3306a.o
obj-$(CONFIG_DVB_LG2160) += lg2160.o
obj-$(CONFIG_DVB_CX24123) += cx24123.o
obj-$(CONFIG_DVB_LNBP21) += lnbp21.o
diff --git a/drivers/media/dvb-frontends/a8293.h b/drivers/media/dvb-frontends/a8293.h
index b6ef642..5f04119 100644
--- a/drivers/media/dvb-frontends/a8293.h
+++ b/drivers/media/dvb-frontends/a8293.h
@@ -27,7 +27,7 @@
u8 i2c_addr;
};
-#if IS_ENABLED(CONFIG_DVB_A8293)
+#if IS_REACHABLE(CONFIG_DVB_A8293)
extern struct dvb_frontend *a8293_attach(struct dvb_frontend *fe,
struct i2c_adapter *i2c, const struct a8293_config *cfg);
#else
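
The IS_ENABLED() -> IS_REACHABLE() switch here (and in the frontend headers
that follow) changes the question from "is this option set at all" to "can the
current object actually link against the symbol": roughly, the option is
built-in, or it is a module and the caller is modular too. The practical
effect, sketched below with hypothetical caller code that is not part of this
patch, is that a built-in bridge compiled against an =m demod/SEC driver now
picks the inline stub and gets a NULL return instead of a link error:

	struct a8293_config a8293_cfg = { .i2c_addr = 0x08 };	/* example addr */

	/* resolves to the real attach or to the stub, depending on reachability */
	if (!a8293_attach(fe, i2c, &a8293_cfg))
		pr_info("a8293 SEC not reachable in this configuration\n");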
diff --git a/drivers/media/dvb-frontends/af9013.h b/drivers/media/dvb-frontends/af9013.h
index 09273b2..1dcc936 100644
--- a/drivers/media/dvb-frontends/af9013.h
+++ b/drivers/media/dvb-frontends/af9013.h
@@ -103,7 +103,7 @@
u8 gpio[4];
};
-#if IS_ENABLED(CONFIG_DVB_AF9013)
+#if IS_REACHABLE(CONFIG_DVB_AF9013)
extern struct dvb_frontend *af9013_attach(const struct af9013_config *config,
struct i2c_adapter *i2c);
#else
diff --git a/drivers/media/dvb-frontends/atbm8830.h b/drivers/media/dvb-frontends/atbm8830.h
index 8e0ac98..5446d13 100644
--- a/drivers/media/dvb-frontends/atbm8830.h
+++ b/drivers/media/dvb-frontends/atbm8830.h
@@ -61,7 +61,7 @@
u8 agc_hold_loop;
};
-#if IS_ENABLED(CONFIG_DVB_ATBM8830)
+#if IS_REACHABLE(CONFIG_DVB_ATBM8830)
extern struct dvb_frontend *atbm8830_attach(const struct atbm8830_config *config,
struct i2c_adapter *i2c);
#else
diff --git a/drivers/media/dvb-frontends/au8522.h b/drivers/media/dvb-frontends/au8522.h
index 6122519..dde6158 100644
--- a/drivers/media/dvb-frontends/au8522.h
+++ b/drivers/media/dvb-frontends/au8522.h
@@ -61,7 +61,7 @@
enum au8522_if_freq qam_if;
};
-#if IS_ENABLED(CONFIG_DVB_AU8522_DTV)
+#if IS_REACHABLE(CONFIG_DVB_AU8522_DTV)
extern struct dvb_frontend *au8522_attach(const struct au8522_config *config,
struct i2c_adapter *i2c);
#else
diff --git a/drivers/media/dvb-frontends/bcm3510.h b/drivers/media/dvb-frontends/bcm3510.h
index 5bd56b1..ff66492 100644
--- a/drivers/media/dvb-frontends/bcm3510.h
+++ b/drivers/media/dvb-frontends/bcm3510.h
@@ -34,7 +34,7 @@
int (*request_firmware)(struct dvb_frontend* fe, const struct firmware **fw, char* name);
};
-#if IS_ENABLED(CONFIG_DVB_BCM3510)
+#if IS_REACHABLE(CONFIG_DVB_BCM3510)
extern struct dvb_frontend* bcm3510_attach(const struct bcm3510_config* config,
struct i2c_adapter* i2c);
#else
diff --git a/drivers/media/dvb-frontends/cx22700.h b/drivers/media/dvb-frontends/cx22700.h
index 382a7b1..e0a7648 100644
--- a/drivers/media/dvb-frontends/cx22700.h
+++ b/drivers/media/dvb-frontends/cx22700.h
@@ -31,7 +31,7 @@
u8 demod_address;
};
-#if IS_ENABLED(CONFIG_DVB_CX22700)
+#if IS_REACHABLE(CONFIG_DVB_CX22700)
extern struct dvb_frontend* cx22700_attach(const struct cx22700_config* config,
struct i2c_adapter* i2c);
#else
diff --git a/drivers/media/dvb-frontends/cx22702.h b/drivers/media/dvb-frontends/cx22702.h
index 0b1a6c2..68b69a7 100644
--- a/drivers/media/dvb-frontends/cx22702.h
+++ b/drivers/media/dvb-frontends/cx22702.h
@@ -41,7 +41,7 @@
u8 output_mode;
};
-#if IS_ENABLED(CONFIG_DVB_CX22702)
+#if IS_REACHABLE(CONFIG_DVB_CX22702)
extern struct dvb_frontend *cx22702_attach(
const struct cx22702_config *config,
struct i2c_adapter *i2c);
diff --git a/drivers/media/dvb-frontends/cx24110.h b/drivers/media/dvb-frontends/cx24110.h
index 527aff1..d5453ed 100644
--- a/drivers/media/dvb-frontends/cx24110.h
+++ b/drivers/media/dvb-frontends/cx24110.h
@@ -46,7 +46,7 @@
return 0;
}
-#if IS_ENABLED(CONFIG_DVB_CX24110)
+#if IS_REACHABLE(CONFIG_DVB_CX24110)
extern struct dvb_frontend* cx24110_attach(const struct cx24110_config* config,
struct i2c_adapter* i2c);
#else
diff --git a/drivers/media/dvb-frontends/cx24113.h b/drivers/media/dvb-frontends/cx24113.h
index 782711ba..962919b 100644
--- a/drivers/media/dvb-frontends/cx24113.h
+++ b/drivers/media/dvb-frontends/cx24113.h
@@ -32,7 +32,7 @@
u32 xtal_khz;
};
-#if IS_ENABLED(CONFIG_DVB_TUNER_CX24113)
+#if IS_REACHABLE(CONFIG_DVB_TUNER_CX24113)
extern struct dvb_frontend *cx24113_attach(struct dvb_frontend *,
const struct cx24113_config *config, struct i2c_adapter *i2c);
diff --git a/drivers/media/dvb-frontends/cx24116.h b/drivers/media/dvb-frontends/cx24116.h
index 2ec84fa..f6dbabc 100644
--- a/drivers/media/dvb-frontends/cx24116.h
+++ b/drivers/media/dvb-frontends/cx24116.h
@@ -41,7 +41,7 @@
u16 i2c_wr_max;
};
-#if IS_ENABLED(CONFIG_DVB_CX24116)
+#if IS_REACHABLE(CONFIG_DVB_CX24116)
extern struct dvb_frontend *cx24116_attach(
const struct cx24116_config *config,
struct i2c_adapter *i2c);
diff --git a/drivers/media/dvb-frontends/cx24117.h b/drivers/media/dvb-frontends/cx24117.h
index 4e59e95..1648ab4 100644
--- a/drivers/media/dvb-frontends/cx24117.h
+++ b/drivers/media/dvb-frontends/cx24117.h
@@ -30,7 +30,7 @@
u8 demod_address;
};
-#if IS_ENABLED(CONFIG_DVB_CX24117)
+#if IS_REACHABLE(CONFIG_DVB_CX24117)
extern struct dvb_frontend *cx24117_attach(
const struct cx24117_config *config,
struct i2c_adapter *i2c);
diff --git a/drivers/media/dvb-frontends/cx24123.h b/drivers/media/dvb-frontends/cx24123.h
index 102e70d..758aee5 100644
--- a/drivers/media/dvb-frontends/cx24123.h
+++ b/drivers/media/dvb-frontends/cx24123.h
@@ -39,7 +39,7 @@
void (*agc_callback) (struct dvb_frontend *);
};
-#if IS_ENABLED(CONFIG_DVB_CX24123)
+#if IS_REACHABLE(CONFIG_DVB_CX24123)
extern struct dvb_frontend *cx24123_attach(const struct cx24123_config *config,
struct i2c_adapter *i2c);
extern struct i2c_adapter *cx24123_get_tuner_i2c_adapter(struct dvb_frontend *);
diff --git a/drivers/media/dvb-frontends/cxd2820r.h b/drivers/media/dvb-frontends/cxd2820r.h
index 6095dbc..56d4276 100644
--- a/drivers/media/dvb-frontends/cxd2820r.h
+++ b/drivers/media/dvb-frontends/cxd2820r.h
@@ -72,7 +72,7 @@
};
-#if IS_ENABLED(CONFIG_DVB_CXD2820R)
+#if IS_REACHABLE(CONFIG_DVB_CXD2820R)
extern struct dvb_frontend *cxd2820r_attach(
const struct cxd2820r_config *config,
struct i2c_adapter *i2c,
diff --git a/drivers/media/dvb-frontends/dib0070.h b/drivers/media/dvb-frontends/dib0070.h
index 0c6befc..6c0b667 100644
--- a/drivers/media/dvb-frontends/dib0070.h
+++ b/drivers/media/dvb-frontends/dib0070.h
@@ -48,7 +48,7 @@
u8 vga_filter;
};
-#if IS_ENABLED(CONFIG_DVB_TUNER_DIB0070)
+#if IS_REACHABLE(CONFIG_DVB_TUNER_DIB0070)
extern struct dvb_frontend *dib0070_attach(struct dvb_frontend *fe, struct i2c_adapter *i2c, struct dib0070_config *cfg);
extern u16 dib0070_wbd_offset(struct dvb_frontend *);
extern void dib0070_ctrl_agc_filter(struct dvb_frontend *, u8 open);
diff --git a/drivers/media/dvb-frontends/dib0090.h b/drivers/media/dvb-frontends/dib0090.h
index 6a09095..ad74bc8 100644
--- a/drivers/media/dvb-frontends/dib0090.h
+++ b/drivers/media/dvb-frontends/dib0090.h
@@ -75,7 +75,7 @@
u8 force_crystal_mode;
};
-#if IS_ENABLED(CONFIG_DVB_TUNER_DIB0090)
+#if IS_REACHABLE(CONFIG_DVB_TUNER_DIB0090)
extern struct dvb_frontend *dib0090_register(struct dvb_frontend *fe, struct i2c_adapter *i2c, const struct dib0090_config *config);
extern struct dvb_frontend *dib0090_fw_register(struct dvb_frontend *fe, struct i2c_adapter *i2c, const struct dib0090_config *config);
extern void dib0090_dcc_freq(struct dvb_frontend *fe, u8 fast);
diff --git a/drivers/media/dvb-frontends/dib3000.h b/drivers/media/dvb-frontends/dib3000.h
index 9b6c3bb..6ae9899 100644
--- a/drivers/media/dvb-frontends/dib3000.h
+++ b/drivers/media/dvb-frontends/dib3000.h
@@ -41,7 +41,7 @@
int (*tuner_pass_ctrl)(struct dvb_frontend *fe, int onoff, u8 pll_ctrl);
};
-#if IS_ENABLED(CONFIG_DVB_DIB3000MB)
+#if IS_REACHABLE(CONFIG_DVB_DIB3000MB)
extern struct dvb_frontend* dib3000mb_attach(const struct dib3000_config* config,
struct i2c_adapter* i2c, struct dib_fe_xfer_ops *xfer_ops);
#else
diff --git a/drivers/media/dvb-frontends/dib3000mc.h b/drivers/media/dvb-frontends/dib3000mc.h
index 129d142..74816f7 100644
--- a/drivers/media/dvb-frontends/dib3000mc.h
+++ b/drivers/media/dvb-frontends/dib3000mc.h
@@ -41,7 +41,7 @@
#define DEFAULT_DIB3000MC_I2C_ADDRESS 16
#define DEFAULT_DIB3000P_I2C_ADDRESS 24
-#if IS_ENABLED(CONFIG_DVB_DIB3000MC)
+#if IS_REACHABLE(CONFIG_DVB_DIB3000MC)
extern struct dvb_frontend *dib3000mc_attach(struct i2c_adapter *i2c_adap,
u8 i2c_addr,
struct dib3000mc_config *cfg);
diff --git a/drivers/media/dvb-frontends/dib7000m.h b/drivers/media/dvb-frontends/dib7000m.h
index b585413..6468c27 100644
--- a/drivers/media/dvb-frontends/dib7000m.h
+++ b/drivers/media/dvb-frontends/dib7000m.h
@@ -40,7 +40,7 @@
#define DEFAULT_DIB7000M_I2C_ADDRESS 18
-#if IS_ENABLED(CONFIG_DVB_DIB7000M)
+#if IS_REACHABLE(CONFIG_DVB_DIB7000M)
extern struct dvb_frontend *dib7000m_attach(struct i2c_adapter *i2c_adap,
u8 i2c_addr,
struct dib7000m_config *cfg);
diff --git a/drivers/media/dvb-frontends/dib7000p.h b/drivers/media/dvb-frontends/dib7000p.h
index 1fea0e9..baa2789 100644
--- a/drivers/media/dvb-frontends/dib7000p.h
+++ b/drivers/media/dvb-frontends/dib7000p.h
@@ -66,7 +66,7 @@
struct dvb_frontend *(*init)(struct i2c_adapter *i2c_adap, u8 i2c_addr, struct dib7000p_config *cfg);
};
-#if IS_ENABLED(CONFIG_DVB_DIB7000P)
+#if IS_REACHABLE(CONFIG_DVB_DIB7000P)
void *dib7000p_attach(struct dib7000p_ops *ops);
#else
static inline void *dib7000p_attach(struct dib7000p_ops *ops)
diff --git a/drivers/media/dvb-frontends/dib8000.h b/drivers/media/dvb-frontends/dib8000.h
index 84cc103..780c37b 100644
--- a/drivers/media/dvb-frontends/dib8000.h
+++ b/drivers/media/dvb-frontends/dib8000.h
@@ -63,7 +63,7 @@
struct dvb_frontend *(*init)(struct i2c_adapter *i2c_adap, u8 i2c_addr, struct dib8000_config *cfg);
};
-#if IS_ENABLED(CONFIG_DVB_DIB8000)
+#if IS_REACHABLE(CONFIG_DVB_DIB8000)
void *dib8000_attach(struct dib8000_ops *ops);
#else
static inline int dib8000_attach(struct dib8000_ops *ops)
diff --git a/drivers/media/dvb-frontends/dib9000.h b/drivers/media/dvb-frontends/dib9000.h
index f3639f0..b10a70a 100644
--- a/drivers/media/dvb-frontends/dib9000.h
+++ b/drivers/media/dvb-frontends/dib9000.h
@@ -27,7 +27,7 @@
#define DEFAULT_DIB9000_I2C_ADDRESS 18
-#if IS_ENABLED(CONFIG_DVB_DIB9000)
+#if IS_REACHABLE(CONFIG_DVB_DIB9000)
extern struct dvb_frontend *dib9000_attach(struct i2c_adapter *i2c_adap, u8 i2c_addr, const struct dib9000_config *cfg);
extern int dib9000_i2c_enumeration(struct i2c_adapter *host, int no_of_demods, u8 default_addr, u8 first_addr);
extern struct i2c_adapter *dib9000_get_tuner_interface(struct dvb_frontend *fe);
diff --git a/drivers/media/dvb-frontends/drx39xyj/drx39xxj.h b/drivers/media/dvb-frontends/drx39xyj/drx39xxj.h
index cfd0b96..8188062 100644
--- a/drivers/media/dvb-frontends/drx39xyj/drx39xxj.h
+++ b/drivers/media/dvb-frontends/drx39xyj/drx39xxj.h
@@ -34,7 +34,7 @@
const struct firmware *fw;
};
-#if IS_ENABLED(CONFIG_DVB_DRX39XYJ)
+#if IS_REACHABLE(CONFIG_DVB_DRX39XYJ)
struct dvb_frontend *drx39xxj_attach(struct i2c_adapter *i2c);
#else
static inline struct dvb_frontend *drx39xxj_attach(struct i2c_adapter *i2c) {
diff --git a/drivers/media/dvb-frontends/drxd.h b/drivers/media/dvb-frontends/drxd.h
index d998e4d..a47c22d 100644
--- a/drivers/media/dvb-frontends/drxd.h
+++ b/drivers/media/dvb-frontends/drxd.h
@@ -52,7 +52,7 @@
s16(*osc_deviation) (void *priv, s16 dev, int flag);
};
-#if IS_ENABLED(CONFIG_DVB_DRXD)
+#if IS_REACHABLE(CONFIG_DVB_DRXD)
extern
struct dvb_frontend *drxd_attach(const struct drxd_config *config,
void *priv, struct i2c_adapter *i2c,
diff --git a/drivers/media/dvb-frontends/drxk.h b/drivers/media/dvb-frontends/drxk.h
index f6cb346..8f0b9ee 100644
--- a/drivers/media/dvb-frontends/drxk.h
+++ b/drivers/media/dvb-frontends/drxk.h
@@ -51,7 +51,7 @@
int qam_demod_parameter_count;
};
-#if IS_ENABLED(CONFIG_DVB_DRXK)
+#if IS_REACHABLE(CONFIG_DVB_DRXK)
extern struct dvb_frontend *drxk_attach(const struct drxk_config *config,
struct i2c_adapter *i2c);
#else
diff --git a/drivers/media/dvb-frontends/ds3000.h b/drivers/media/dvb-frontends/ds3000.h
index f9c21fb..153169d 100644
--- a/drivers/media/dvb-frontends/ds3000.h
+++ b/drivers/media/dvb-frontends/ds3000.h
@@ -35,7 +35,7 @@
void (*set_lock_led)(struct dvb_frontend *fe, int offon);
};
-#if IS_ENABLED(CONFIG_DVB_DS3000)
+#if IS_REACHABLE(CONFIG_DVB_DS3000)
extern struct dvb_frontend *ds3000_attach(const struct ds3000_config *config,
struct i2c_adapter *i2c);
#else
diff --git a/drivers/media/dvb-frontends/dvb-pll.h b/drivers/media/dvb-frontends/dvb-pll.h
index f4b5a06..bf9602a 100644
--- a/drivers/media/dvb-frontends/dvb-pll.h
+++ b/drivers/media/dvb-frontends/dvb-pll.h
@@ -38,7 +38,7 @@
* @param pll_desc_id dvb_pll_desc to use.
* @return Frontend pointer on success, NULL on failure
*/
-#if IS_ENABLED(CONFIG_DVB_PLL)
+#if IS_REACHABLE(CONFIG_DVB_PLL)
extern struct dvb_frontend *dvb_pll_attach(struct dvb_frontend *fe,
int pll_addr,
struct i2c_adapter *i2c,
diff --git a/drivers/media/dvb-frontends/dvb_dummy_fe.h b/drivers/media/dvb-frontends/dvb_dummy_fe.h
index 0cbf961..15e4cea 100644
--- a/drivers/media/dvb-frontends/dvb_dummy_fe.h
+++ b/drivers/media/dvb-frontends/dvb_dummy_fe.h
@@ -26,7 +26,7 @@
#include <linux/dvb/frontend.h>
#include "dvb_frontend.h"
-#if IS_ENABLED(CONFIG_DVB_DUMMY_FE)
+#if IS_REACHABLE(CONFIG_DVB_DUMMY_FE)
extern struct dvb_frontend* dvb_dummy_fe_ofdm_attach(void);
extern struct dvb_frontend* dvb_dummy_fe_qpsk_attach(void);
extern struct dvb_frontend* dvb_dummy_fe_qam_attach(void);
diff --git a/drivers/media/dvb-frontends/ec100.h b/drivers/media/dvb-frontends/ec100.h
index 3755840..9544bab 100644
--- a/drivers/media/dvb-frontends/ec100.h
+++ b/drivers/media/dvb-frontends/ec100.h
@@ -31,7 +31,7 @@
};
-#if IS_ENABLED(CONFIG_DVB_EC100)
+#if IS_REACHABLE(CONFIG_DVB_EC100)
extern struct dvb_frontend *ec100_attach(const struct ec100_config *config,
struct i2c_adapter *i2c);
#else
diff --git a/drivers/media/dvb-frontends/hd29l2.h b/drivers/media/dvb-frontends/hd29l2.h
index 05cd130..48e9ab7 100644
--- a/drivers/media/dvb-frontends/hd29l2.h
+++ b/drivers/media/dvb-frontends/hd29l2.h
@@ -51,7 +51,7 @@
};
-#if IS_ENABLED(CONFIG_DVB_HD29L2)
+#if IS_REACHABLE(CONFIG_DVB_HD29L2)
extern struct dvb_frontend *hd29l2_attach(const struct hd29l2_config *config,
struct i2c_adapter *i2c);
#else
diff --git a/drivers/media/dvb-frontends/isl6405.h b/drivers/media/dvb-frontends/isl6405.h
index 8abb70c..3c148b8 100644
--- a/drivers/media/dvb-frontends/isl6405.h
+++ b/drivers/media/dvb-frontends/isl6405.h
@@ -55,7 +55,7 @@
#define ISL6405_ENT2 0x20
#define ISL6405_ISEL2 0x40
-#if IS_ENABLED(CONFIG_DVB_ISL6405)
+#if IS_REACHABLE(CONFIG_DVB_ISL6405)
/* override_set and override_clear control which system register bits (above)
* to always set & clear
*/
diff --git a/drivers/media/dvb-frontends/isl6421.h b/drivers/media/dvb-frontends/isl6421.h
index 630e7f8..3273597 100644
--- a/drivers/media/dvb-frontends/isl6421.h
+++ b/drivers/media/dvb-frontends/isl6421.h
@@ -39,7 +39,7 @@
#define ISL6421_ISEL1 0x20
#define ISL6421_DCL 0x40
-#if IS_ENABLED(CONFIG_DVB_ISL6421)
+#if IS_REACHABLE(CONFIG_DVB_ISL6421)
/* override_set and override_clear control which system register bits (above) to always set & clear */
extern struct dvb_frontend *isl6421_attach(struct dvb_frontend *fe, struct i2c_adapter *i2c, u8 i2c_addr,
u8 override_set, u8 override_clear, bool override_tone);
diff --git a/drivers/media/dvb-frontends/isl6423.h b/drivers/media/dvb-frontends/isl6423.h
index 80dfd9c..a64df0e 100644
--- a/drivers/media/dvb-frontends/isl6423.h
+++ b/drivers/media/dvb-frontends/isl6423.h
@@ -42,7 +42,7 @@
u8 mod_extern;
};
-#if IS_ENABLED(CONFIG_DVB_ISL6423)
+#if IS_REACHABLE(CONFIG_DVB_ISL6423)
extern struct dvb_frontend *isl6423_attach(struct dvb_frontend *fe,
diff --git a/drivers/media/dvb-frontends/itd1000.h b/drivers/media/dvb-frontends/itd1000.h
index edae090..a691bb6 100644
--- a/drivers/media/dvb-frontends/itd1000.h
+++ b/drivers/media/dvb-frontends/itd1000.h
@@ -29,7 +29,7 @@
u8 i2c_address;
};
-#if IS_ENABLED(CONFIG_DVB_TUNER_ITD1000)
+#if IS_REACHABLE(CONFIG_DVB_TUNER_ITD1000)
extern struct dvb_frontend *itd1000_attach(struct dvb_frontend *fe, struct i2c_adapter *i2c, struct itd1000_config *cfg);
#else
static inline struct dvb_frontend *itd1000_attach(struct dvb_frontend *fe, struct i2c_adapter *i2c, struct itd1000_config *cfg)
diff --git a/drivers/media/dvb-frontends/ix2505v.h b/drivers/media/dvb-frontends/ix2505v.h
index 1a735a7..af107a2 100644
--- a/drivers/media/dvb-frontends/ix2505v.h
+++ b/drivers/media/dvb-frontends/ix2505v.h
@@ -49,7 +49,7 @@
};
-#if IS_ENABLED(CONFIG_DVB_IX2505V)
+#if IS_REACHABLE(CONFIG_DVB_IX2505V)
extern struct dvb_frontend *ix2505v_attach(struct dvb_frontend *fe,
const struct ix2505v_config *config, struct i2c_adapter *i2c);
#else
diff --git a/drivers/media/dvb-frontends/l64781.h b/drivers/media/dvb-frontends/l64781.h
index 6813b08..8697e2c 100644
--- a/drivers/media/dvb-frontends/l64781.h
+++ b/drivers/media/dvb-frontends/l64781.h
@@ -31,7 +31,7 @@
u8 demod_address;
};
-#if IS_ENABLED(CONFIG_DVB_L64781)
+#if IS_REACHABLE(CONFIG_DVB_L64781)
extern struct dvb_frontend* l64781_attach(const struct l64781_config* config,
struct i2c_adapter* i2c);
#else
diff --git a/drivers/media/dvb-frontends/lg2160.h b/drivers/media/dvb-frontends/lg2160.h
index 194a07a..d20bd90 100644
--- a/drivers/media/dvb-frontends/lg2160.h
+++ b/drivers/media/dvb-frontends/lg2160.h
@@ -67,7 +67,7 @@
enum lg_chip_type lg_chip;
};
-#if IS_ENABLED(CONFIG_DVB_LG2160)
+#if IS_REACHABLE(CONFIG_DVB_LG2160)
extern
struct dvb_frontend *lg2160_attach(const struct lg2160_config *config,
struct i2c_adapter *i2c_adap);
diff --git a/drivers/media/dvb-frontends/lgdt3305.h b/drivers/media/dvb-frontends/lgdt3305.h
index 9c03e53..f91a1b4 100644
--- a/drivers/media/dvb-frontends/lgdt3305.h
+++ b/drivers/media/dvb-frontends/lgdt3305.h
@@ -80,7 +80,7 @@
enum lgdt_demod_chip_type demod_chip;
};
-#if IS_ENABLED(CONFIG_DVB_LGDT3305)
+#if IS_REACHABLE(CONFIG_DVB_LGDT3305)
extern
struct dvb_frontend *lgdt3305_attach(const struct lgdt3305_config *config,
struct i2c_adapter *i2c_adap);
diff --git a/drivers/media/dvb-frontends/lgdt3306a.c b/drivers/media/dvb-frontends/lgdt3306a.c
new file mode 100644
index 0000000..d9a2b0e
--- /dev/null
+++ b/drivers/media/dvb-frontends/lgdt3306a.c
@@ -0,0 +1,2144 @@
+/*
+ * Support for LGDT3306A - 8VSB/QAM-B
+ *
+ * Copyright (C) 2013 Fred Richter <frichter@hauppauge.com>
+ * - driver structure based on lgdt3305.[ch] by Michael Krufky
+ * - code based on LG3306_V0.35 API by LG Electronics Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <asm/div64.h>
+#include <linux/dvb/frontend.h>
+#include "dvb_math.h"
+#include "lgdt3306a.h"
+
+
+static int debug;
+module_param(debug, int, 0644);
+MODULE_PARM_DESC(debug, "set debug level (info=1, reg=2 (or-able))");
+
+#define DBG_INFO 1
+#define DBG_REG 2
+#define DBG_DUMP 4 /* FGR - comment out to remove dump code */
+
+#define lg_debug(fmt, arg...) \
+ printk(KERN_DEBUG pr_fmt(fmt), ## arg)
+
+#define dbg_info(fmt, arg...) \
+ do { \
+ if (debug & DBG_INFO) \
+ lg_debug(fmt, ## arg); \
+ } while (0)
+
+#define dbg_reg(fmt, arg...) \
+ do { \
+ if (debug & DBG_REG) \
+ lg_debug(fmt, ## arg); \
+ } while (0)
+
+#define lg_chkerr(ret) \
+({ \
+ int __ret; \
+ __ret = (ret < 0); \
+ if (__ret) \
+ pr_err("error %d on line %d\n", ret, __LINE__); \
+ __ret; \
+})
+
+struct lgdt3306a_state {
+ struct i2c_adapter *i2c_adap;
+ const struct lgdt3306a_config *cfg;
+
+ struct dvb_frontend frontend;
+
+ fe_modulation_t current_modulation;
+ u32 current_frequency;
+ u32 snr;
+};
+
+/*
+ * LG3306A Register Usage
+ * (LG does not really name the registers, so this code does not either)
+ *
+ * 0000 -> 00FF Common control and status
+ * 1000 -> 10FF Synchronizer control and status
+ * 1F00 -> 1FFF Smart Antenna control and status
+ * 2100 -> 21FF VSB Equalizer control and status
+ * 2800 -> 28FF QAM Equalizer control and status
+ * 3000 -> 30FF FEC control and status
+ */
+
+enum lgdt3306a_lock_status {
+ LG3306_UNLOCK = 0x00,
+ LG3306_LOCK = 0x01,
+ LG3306_UNKNOWN_LOCK = 0xff
+};
+
+enum lgdt3306a_neverlock_status {
+ LG3306_NL_INIT = 0x00,
+ LG3306_NL_PROCESS = 0x01,
+ LG3306_NL_LOCK = 0x02,
+ LG3306_NL_FAIL = 0x03,
+ LG3306_NL_UNKNOWN = 0xff
+};
+
+enum lgdt3306a_modulation {
+ LG3306_VSB = 0x00,
+ LG3306_QAM64 = 0x01,
+ LG3306_QAM256 = 0x02,
+ LG3306_UNKNOWN_MODE = 0xff
+};
+
+enum lgdt3306a_lock_check {
+ LG3306_SYNC_LOCK,
+ LG3306_FEC_LOCK,
+ LG3306_TR_LOCK,
+ LG3306_AGC_LOCK,
+};
+
+
+#ifdef DBG_DUMP
+static void lgdt3306a_DumpAllRegs(struct lgdt3306a_state *state);
+static void lgdt3306a_DumpRegs(struct lgdt3306a_state *state);
+#endif
+
+
+static int lgdt3306a_write_reg(struct lgdt3306a_state *state, u16 reg, u8 val)
+{
+ int ret;
+ u8 buf[] = { reg >> 8, reg & 0xff, val };
+ struct i2c_msg msg = {
+ .addr = state->cfg->i2c_addr, .flags = 0,
+ .buf = buf, .len = 3,
+ };
+
+ dbg_reg("reg: 0x%04x, val: 0x%02x\n", reg, val);
+
+ ret = i2c_transfer(state->i2c_adap, &msg, 1);
+
+ if (ret != 1) {
+ pr_err("error (addr %02x %02x <- %02x, err = %i)\n",
+ msg.buf[0], msg.buf[1], msg.buf[2], ret);
+ if (ret < 0)
+ return ret;
+ else
+ return -EREMOTEIO;
+ }
+ return 0;
+}
+
+static int lgdt3306a_read_reg(struct lgdt3306a_state *state, u16 reg, u8 *val)
+{
+ int ret;
+ u8 reg_buf[] = { reg >> 8, reg & 0xff };
+ struct i2c_msg msg[] = {
+ { .addr = state->cfg->i2c_addr,
+ .flags = 0, .buf = reg_buf, .len = 2 },
+ { .addr = state->cfg->i2c_addr,
+ .flags = I2C_M_RD, .buf = val, .len = 1 },
+ };
+
+ ret = i2c_transfer(state->i2c_adap, msg, 2);
+
+ if (ret != 2) {
+ pr_err("error (addr %02x reg %04x error (ret == %i)\n",
+ state->cfg->i2c_addr, reg, ret);
+ if (ret < 0)
+ return ret;
+ else
+ return -EREMOTEIO;
+ }
+ dbg_reg("reg: 0x%04x, val: 0x%02x\n", reg, *val);
+
+ return 0;
+}
+
+#define read_reg(state, reg) \
+({ \
+ u8 __val; \
+ int ret = lgdt3306a_read_reg(state, reg, &__val); \
+ if (lg_chkerr(ret)) \
+ __val = 0; \
+ __val; \
+})
+
+static int lgdt3306a_set_reg_bit(struct lgdt3306a_state *state,
+ u16 reg, int bit, int onoff)
+{
+ u8 val;
+ int ret;
+
+ dbg_reg("reg: 0x%04x, bit: %d, level: %d\n", reg, bit, onoff);
+
+ ret = lgdt3306a_read_reg(state, reg, &val);
+ if (lg_chkerr(ret))
+ goto fail;
+
+ val &= ~(1 << bit);
+ val |= (onoff & 1) << bit;
+
+ ret = lgdt3306a_write_reg(state, reg, val);
+ lg_chkerr(ret);
+fail:
+ return ret;
+}
+
+/* ------------------------------------------------------------------------ */
+
+static int lgdt3306a_soft_reset(struct lgdt3306a_state *state)
+{
+ int ret;
+
+ dbg_info("\n");
+
+ ret = lgdt3306a_set_reg_bit(state, 0x0000, 7, 0);
+ if (lg_chkerr(ret))
+ goto fail;
+
+ msleep(20);
+ ret = lgdt3306a_set_reg_bit(state, 0x0000, 7, 1);
+ lg_chkerr(ret);
+
+fail:
+ return ret;
+}
+
+static int lgdt3306a_mpeg_mode(struct lgdt3306a_state *state,
+ enum lgdt3306a_mpeg_mode mode)
+{
+ u8 val;
+ int ret;
+
+ dbg_info("(%d)\n", mode);
+ /* transport packet format - TPSENB=0x80 */
+ ret = lgdt3306a_set_reg_bit(state, 0x0071, 7,
+ mode == LGDT3306A_MPEG_PARALLEL ? 1 : 0);
+ if (lg_chkerr(ret))
+ goto fail;
+
+ /*
+ * start of packet signal duration
+ * TPSSOPBITEN=0x40; 0=byte duration, 1=bit duration
+ */
+ ret = lgdt3306a_set_reg_bit(state, 0x0071, 6, 0);
+ if (lg_chkerr(ret))
+ goto fail;
+
+ ret = lgdt3306a_read_reg(state, 0x0070, &val);
+ if (lg_chkerr(ret))
+ goto fail;
+
+ val |= 0x10; /* TPCLKSUPB=0x10 */
+
+ if (mode == LGDT3306A_MPEG_PARALLEL)
+ val &= ~0x10;
+
+ ret = lgdt3306a_write_reg(state, 0x0070, val);
+ lg_chkerr(ret);
+
+fail:
+ return ret;
+}
+
+static int lgdt3306a_mpeg_mode_polarity(struct lgdt3306a_state *state,
+ enum lgdt3306a_tp_clock_edge edge,
+ enum lgdt3306a_tp_valid_polarity valid)
+{
+ u8 val;
+ int ret;
+
+ dbg_info("edge=%d, valid=%d\n", edge, valid);
+
+ ret = lgdt3306a_read_reg(state, 0x0070, &val);
+ if (lg_chkerr(ret))
+ goto fail;
+
+ val &= ~0x06; /* TPCLKPOL=0x04, TPVALPOL=0x02 */
+
+ if (edge == LGDT3306A_TPCLK_RISING_EDGE)
+ val |= 0x04;
+ if (valid == LGDT3306A_TP_VALID_HIGH)
+ val |= 0x02;
+
+ ret = lgdt3306a_write_reg(state, 0x0070, val);
+ lg_chkerr(ret);
+
+fail:
+ return ret;
+}
+
+static int lgdt3306a_mpeg_tristate(struct lgdt3306a_state *state,
+ int mode)
+{
+ u8 val;
+ int ret;
+
+ dbg_info("(%d)\n", mode);
+
+ if (mode) {
+ ret = lgdt3306a_read_reg(state, 0x0070, &val);
+ if (lg_chkerr(ret))
+ goto fail;
+ /*
+ * Tristate bus; TPOUTEN=0x80, TPCLKOUTEN=0x20,
+ * TPDATAOUTEN=0x08
+ */
+ val &= ~0xa8;
+ ret = lgdt3306a_write_reg(state, 0x0070, val);
+ if (lg_chkerr(ret))
+ goto fail;
+
+ /* AGCIFOUTENB=0x40; 1=Disable IFAGC pin */
+ ret = lgdt3306a_set_reg_bit(state, 0x0003, 6, 1);
+ if (lg_chkerr(ret))
+ goto fail;
+
+ } else {
+ /* enable IFAGC pin */
+ ret = lgdt3306a_set_reg_bit(state, 0x0003, 6, 0);
+ if (lg_chkerr(ret))
+ goto fail;
+
+ ret = lgdt3306a_read_reg(state, 0x0070, &val);
+ if (lg_chkerr(ret))
+ goto fail;
+
+ val |= 0xa8; /* enable bus */
+ ret = lgdt3306a_write_reg(state, 0x0070, val);
+ if (lg_chkerr(ret))
+ goto fail;
+ }
+
+fail:
+ return ret;
+}
+
+static int lgdt3306a_ts_bus_ctrl(struct dvb_frontend *fe, int acquire)
+{
+ struct lgdt3306a_state *state = fe->demodulator_priv;
+
+ dbg_info("acquire=%d\n", acquire);
+
+ return lgdt3306a_mpeg_tristate(state, acquire ? 0 : 1);
+
+}
+
+static int lgdt3306a_power(struct lgdt3306a_state *state,
+ int mode)
+{
+ int ret;
+
+ dbg_info("(%d)\n", mode);
+
+ if (mode == 0) {
+ /* into reset */
+ ret = lgdt3306a_set_reg_bit(state, 0x0000, 7, 0);
+ if (lg_chkerr(ret))
+ goto fail;
+
+ /* power down */
+ ret = lgdt3306a_set_reg_bit(state, 0x0000, 0, 0);
+ if (lg_chkerr(ret))
+ goto fail;
+
+ } else {
+ /* out of reset */
+ ret = lgdt3306a_set_reg_bit(state, 0x0000, 7, 1);
+ if (lg_chkerr(ret))
+ goto fail;
+
+ /* power up */
+ ret = lgdt3306a_set_reg_bit(state, 0x0000, 0, 1);
+ if (lg_chkerr(ret))
+ goto fail;
+ }
+
+#ifdef DBG_DUMP
+ lgdt3306a_DumpAllRegs(state);
+#endif
+fail:
+ return ret;
+}
+
+
+static int lgdt3306a_set_vsb(struct lgdt3306a_state *state)
+{
+ u8 val;
+ int ret;
+
+ dbg_info("\n");
+
+ /* 0. Spectrum inversion detection manual; spectrum inverted */
+ ret = lgdt3306a_read_reg(state, 0x0002, &val);
+ val &= 0xf7; /* SPECINVAUTO Off */
+ val |= 0x04; /* SPECINV On */
+ ret = lgdt3306a_write_reg(state, 0x0002, val);
+ if (lg_chkerr(ret))
+ goto fail;
+
+ /* 1. Selection of standard mode(0x08=QAM, 0x80=VSB) */
+ ret = lgdt3306a_write_reg(state, 0x0008, 0x80);
+ if (lg_chkerr(ret))
+ goto fail;
+
+ /* 2. Bandwidth mode for VSB(6MHz) */
+ ret = lgdt3306a_read_reg(state, 0x0009, &val);
+ val &= 0xe3;
+ val |= 0x0c; /* STDOPDETTMODE[2:0]=3 */
+ ret = lgdt3306a_write_reg(state, 0x0009, val);
+ if (lg_chkerr(ret))
+ goto fail;
+
+ /* 3. QAM mode detection mode(None) */
+ ret = lgdt3306a_read_reg(state, 0x0009, &val);
+ val &= 0xfc; /* STDOPDETCMODE[1:0]=0 */
+ ret = lgdt3306a_write_reg(state, 0x0009, val);
+ if (lg_chkerr(ret))
+ goto fail;
+
+ /* 4. ADC sampling frequency rate(2x sampling) */
+ ret = lgdt3306a_read_reg(state, 0x000d, &val);
+ val &= 0xbf; /* SAMPLING4XFEN=0 */
+ ret = lgdt3306a_write_reg(state, 0x000d, val);
+ if (lg_chkerr(ret))
+ goto fail;
+
+#if 0
+ /* FGR - disable any AICC filtering, testing only */
+
+ ret = lgdt3306a_write_reg(state, 0x0024, 0x00);
+ if (lg_chkerr(ret))
+ goto fail;
+
+ /* AICCFIXFREQ0 NT N-1(Video rejection) */
+ ret = lgdt3306a_write_reg(state, 0x002e, 0x00);
+ ret = lgdt3306a_write_reg(state, 0x002f, 0x00);
+ ret = lgdt3306a_write_reg(state, 0x0030, 0x00);
+
+ /* AICCFIXFREQ1 NT N-1(Audio rejection) */
+ ret = lgdt3306a_write_reg(state, 0x002b, 0x00);
+ ret = lgdt3306a_write_reg(state, 0x002c, 0x00);
+ ret = lgdt3306a_write_reg(state, 0x002d, 0x00);
+
+ /* AICCFIXFREQ2 NT Co-Channel(Video rejection) */
+ ret = lgdt3306a_write_reg(state, 0x0028, 0x00);
+ ret = lgdt3306a_write_reg(state, 0x0029, 0x00);
+ ret = lgdt3306a_write_reg(state, 0x002a, 0x00);
+
+ /* AICCFIXFREQ3 NT Co-Channel(Audio rejection) */
+ ret = lgdt3306a_write_reg(state, 0x0025, 0x00);
+ ret = lgdt3306a_write_reg(state, 0x0026, 0x00);
+ ret = lgdt3306a_write_reg(state, 0x0027, 0x00);
+
+#else
+ /* FGR - this works well for HVR-1955,1975 */
+
+ /* 5. AICCOPMODE NT N-1 Adj. */
+ ret = lgdt3306a_write_reg(state, 0x0024, 0x5A);
+ if (lg_chkerr(ret))
+ goto fail;
+
+ /* AICCFIXFREQ0 NT N-1(Video rejection) */
+ ret = lgdt3306a_write_reg(state, 0x002e, 0x5A);
+ ret = lgdt3306a_write_reg(state, 0x002f, 0x00);
+ ret = lgdt3306a_write_reg(state, 0x0030, 0x00);
+
+ /* AICCFIXFREQ1 NT N-1(Audio rejection) */
+ ret = lgdt3306a_write_reg(state, 0x002b, 0x36);
+ ret = lgdt3306a_write_reg(state, 0x002c, 0x00);
+ ret = lgdt3306a_write_reg(state, 0x002d, 0x00);
+
+ /* AICCFIXFREQ2 NT Co-Channel(Video rejection) */
+ ret = lgdt3306a_write_reg(state, 0x0028, 0x2A);
+ ret = lgdt3306a_write_reg(state, 0x0029, 0x00);
+ ret = lgdt3306a_write_reg(state, 0x002a, 0x00);
+
+ /* AICCFIXFREQ3 NT Co-Channel(Audio rejection) */
+ ret = lgdt3306a_write_reg(state, 0x0025, 0x06);
+ ret = lgdt3306a_write_reg(state, 0x0026, 0x00);
+ ret = lgdt3306a_write_reg(state, 0x0027, 0x00);
+#endif
+
+ ret = lgdt3306a_read_reg(state, 0x001e, &val);
+ val &= 0x0f;
+ val |= 0xa0;
+ ret = lgdt3306a_write_reg(state, 0x001e, val);
+
+ ret = lgdt3306a_write_reg(state, 0x0022, 0x08);
+
+ ret = lgdt3306a_write_reg(state, 0x0023, 0xFF);
+
+ ret = lgdt3306a_read_reg(state, 0x211f, &val);
+ val &= 0xef;
+ ret = lgdt3306a_write_reg(state, 0x211f, val);
+
+ ret = lgdt3306a_write_reg(state, 0x2173, 0x01);
+
+ ret = lgdt3306a_read_reg(state, 0x1061, &val);
+ val &= 0xf8;
+ val |= 0x04;
+ ret = lgdt3306a_write_reg(state, 0x1061, val);
+
+ ret = lgdt3306a_read_reg(state, 0x103d, &val);
+ val &= 0xcf;
+ ret = lgdt3306a_write_reg(state, 0x103d, val);
+
+ ret = lgdt3306a_write_reg(state, 0x2122, 0x40);
+
+ ret = lgdt3306a_read_reg(state, 0x2141, &val);
+ val &= 0x3f;
+ ret = lgdt3306a_write_reg(state, 0x2141, val);
+
+ ret = lgdt3306a_read_reg(state, 0x2135, &val);
+ val &= 0x0f;
+ val |= 0x70;
+ ret = lgdt3306a_write_reg(state, 0x2135, val);
+
+ ret = lgdt3306a_read_reg(state, 0x0003, &val);
+ val &= 0xf7;
+ ret = lgdt3306a_write_reg(state, 0x0003, val);
+
+ ret = lgdt3306a_read_reg(state, 0x001c, &val);
+ val &= 0x7f;
+ ret = lgdt3306a_write_reg(state, 0x001c, val);
+
+ /* 6. EQ step size */
+ ret = lgdt3306a_read_reg(state, 0x2179, &val);
+ val &= 0xf8;
+ ret = lgdt3306a_write_reg(state, 0x2179, val);
+
+ ret = lgdt3306a_read_reg(state, 0x217a, &val);
+ val &= 0xf8;
+ ret = lgdt3306a_write_reg(state, 0x217a, val);
+
+ /* 7. Reset */
+ ret = lgdt3306a_soft_reset(state);
+ if (lg_chkerr(ret))
+ goto fail;
+
+ dbg_info("complete\n");
+fail:
+ return ret;
+}
+
+static int lgdt3306a_set_qam(struct lgdt3306a_state *state, int modulation)
+{
+ u8 val;
+ int ret;
+
+ dbg_info("modulation=%d\n", modulation);
+
+ /* 1. Selection of standard mode(0x08=QAM, 0x80=VSB) */
+ ret = lgdt3306a_write_reg(state, 0x0008, 0x08);
+ if (lg_chkerr(ret))
+ goto fail;
+
+ /* 1a. Spectrum inversion detection to Auto */
+ ret = lgdt3306a_read_reg(state, 0x0002, &val);
+ val &= 0xfb; /* SPECINV Off */
+ val |= 0x08; /* SPECINVAUTO On */
+ ret = lgdt3306a_write_reg(state, 0x0002, val);
+ if (lg_chkerr(ret))
+ goto fail;
+
+ /* 2. Bandwidth mode for QAM */
+ ret = lgdt3306a_read_reg(state, 0x0009, &val);
+ val &= 0xe3; /* STDOPDETTMODE[2:0]=0 VSB Off */
+ ret = lgdt3306a_write_reg(state, 0x0009, val);
+ if (lg_chkerr(ret))
+ goto fail;
+
+ /* 3. : 64QAM/256QAM detection(manual, auto) */
+ ret = lgdt3306a_read_reg(state, 0x0009, &val);
+ val &= 0xfc;
+ val |= 0x02; /* STDOPDETCMODE[1:0]=1=Manual 2=Auto */
+ ret = lgdt3306a_write_reg(state, 0x0009, val);
+ if (lg_chkerr(ret))
+ goto fail;
+
+ /* 3a. : 64QAM/256QAM selection for manual */
+ ret = lgdt3306a_read_reg(state, 0x101a, &val);
+ val &= 0xf8;
+ if (modulation == QAM_64)
+ val |= 0x02; /* QMDQMODE[2:0]=2=QAM64 */
+ else
+ val |= 0x04; /* QMDQMODE[2:0]=4=QAM256 */
+
+ ret = lgdt3306a_write_reg(state, 0x101a, val);
+ if (lg_chkerr(ret))
+ goto fail;
+
+ /* 4. ADC sampling frequency rate(4x sampling) */
+ ret = lgdt3306a_read_reg(state, 0x000d, &val);
+ val &= 0xbf;
+ val |= 0x40; /* SAMPLING4XFEN=1 */
+ ret = lgdt3306a_write_reg(state, 0x000d, val);
+ if (lg_chkerr(ret))
+ goto fail;
+
+ /* 5. No AICC operation in QAM mode */
+ ret = lgdt3306a_read_reg(state, 0x0024, &val);
+ val &= 0x00;
+ ret = lgdt3306a_write_reg(state, 0x0024, val);
+ if (lg_chkerr(ret))
+ goto fail;
+
+ /* 6. Reset */
+ ret = lgdt3306a_soft_reset(state);
+ if (lg_chkerr(ret))
+ goto fail;
+
+ dbg_info("complete\n");
+fail:
+ return ret;
+}
+
+static int lgdt3306a_set_modulation(struct lgdt3306a_state *state,
+ struct dtv_frontend_properties *p)
+{
+ int ret;
+
+ dbg_info("\n");
+
+ switch (p->modulation) {
+ case VSB_8:
+ ret = lgdt3306a_set_vsb(state);
+ break;
+ case QAM_64:
+ ret = lgdt3306a_set_qam(state, QAM_64);
+ break;
+ case QAM_256:
+ ret = lgdt3306a_set_qam(state, QAM_256);
+ break;
+ default:
+ return -EINVAL;
+ }
+ if (lg_chkerr(ret))
+ goto fail;
+
+ state->current_modulation = p->modulation;
+
+fail:
+ return ret;
+}
+
+/* ------------------------------------------------------------------------ */
+
+static int lgdt3306a_agc_setup(struct lgdt3306a_state *state,
+ struct dtv_frontend_properties *p)
+{
+ /* TODO: anything we want to do here??? */
+ dbg_info("\n");
+
+ switch (p->modulation) {
+ case VSB_8:
+ break;
+ case QAM_64:
+ case QAM_256:
+ break;
+ default:
+ return -EINVAL;
+ }
+ return 0;
+}
+
+/* ------------------------------------------------------------------------ */
+
+static int lgdt3306a_set_inversion(struct lgdt3306a_state *state,
+ int inversion)
+{
+ int ret;
+
+ dbg_info("(%d)\n", inversion);
+
+ ret = lgdt3306a_set_reg_bit(state, 0x0002, 2, inversion ? 1 : 0);
+ return ret;
+}
+
+static int lgdt3306a_set_inversion_auto(struct lgdt3306a_state *state,
+ int enabled)
+{
+ int ret;
+
+ dbg_info("(%d)\n", enabled);
+
+ /* 0=Manual 1=Auto(QAM only) - SPECINVAUTO=0x04 */
+ ret = lgdt3306a_set_reg_bit(state, 0x0002, 3, enabled);
+ return ret;
+}
+
+static int lgdt3306a_spectral_inversion(struct lgdt3306a_state *state,
+ struct dtv_frontend_properties *p,
+ int inversion)
+{
+ int ret = 0;
+
+ dbg_info("(%d)\n", inversion);
+#if 0
+ /*
+ * FGR - spectral_inversion defaults already set for VSB and QAM;
+ * can enable later if desired
+ */
+
+ ret = lgdt3306a_set_inversion(state, inversion);
+
+ switch (p->modulation) {
+ case VSB_8:
+ /* Manual only for VSB */
+ ret = lgdt3306a_set_inversion_auto(state, 0);
+ break;
+ case QAM_64:
+ case QAM_256:
+ /* Auto ok for QAM */
+ ret = lgdt3306a_set_inversion_auto(state, 1);
+ break;
+ default:
+ ret = -EINVAL;
+ }
+#endif
+ return ret;
+}
+
+static int lgdt3306a_set_if(struct lgdt3306a_state *state,
+ struct dtv_frontend_properties *p)
+{
+ int ret;
+ u16 if_freq_khz;
+ u8 nco1, nco2;
+
+ switch (p->modulation) {
+ case VSB_8:
+ if_freq_khz = state->cfg->vsb_if_khz;
+ break;
+ case QAM_64:
+ case QAM_256:
+ if_freq_khz = state->cfg->qam_if_khz;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ switch (if_freq_khz) {
+ default:
+ pr_warn("IF=%d KHz is not supportted, 3250 assumed\n",
+ if_freq_khz);
+ /* fallthrough */
+ case 3250: /* 3.25Mhz */
+ nco1 = 0x34;
+ nco2 = 0x00;
+ break;
+ case 3500: /* 3.50Mhz */
+ nco1 = 0x38;
+ nco2 = 0x00;
+ break;
+ case 4000: /* 4.00Mhz */
+ nco1 = 0x40;
+ nco2 = 0x00;
+ break;
+ case 5000: /* 5.00Mhz */
+ nco1 = 0x50;
+ nco2 = 0x00;
+ break;
+ case 5380: /* 5.38Mhz */
+ nco1 = 0x56;
+ nco2 = 0x14;
+ break;
+ }
+ ret = lgdt3306a_write_reg(state, 0x0010, nco1);
+ if (ret)
+ return ret;
+ ret = lgdt3306a_write_reg(state, 0x0011, nco2);
+ if (ret)
+ return ret;
+
+ dbg_info("if_freq=%d KHz->[%04x]\n", if_freq_khz, nco1<<8 | nco2);
+
+ return 0;
+}
+
+/* ------------------------------------------------------------------------ */
+
+static int lgdt3306a_i2c_gate_ctrl(struct dvb_frontend *fe, int enable)
+{
+ struct lgdt3306a_state *state = fe->demodulator_priv;
+
+ if (state->cfg->deny_i2c_rptr) {
+ dbg_info("deny_i2c_rptr=%d\n", state->cfg->deny_i2c_rptr);
+ return 0;
+ }
+ dbg_info("(%d)\n", enable);
+
+ /* NI2CRPTEN=0x80 */
+ return lgdt3306a_set_reg_bit(state, 0x0002, 7, enable ? 0 : 1);
+}
+
+static int lgdt3306a_sleep(struct lgdt3306a_state *state)
+{
+ int ret;
+
+ dbg_info("\n");
+ state->current_frequency = -1; /* force re-tune, when we wake */
+
+ ret = lgdt3306a_mpeg_tristate(state, 1); /* disable data bus */
+ if (lg_chkerr(ret))
+ goto fail;
+
+ ret = lgdt3306a_power(state, 0); /* power down */
+ lg_chkerr(ret);
+
+fail:
+ return 0;
+}
+
+static int lgdt3306a_fe_sleep(struct dvb_frontend *fe)
+{
+ struct lgdt3306a_state *state = fe->demodulator_priv;
+
+ return lgdt3306a_sleep(state);
+}
+
+static int lgdt3306a_init(struct dvb_frontend *fe)
+{
+ struct lgdt3306a_state *state = fe->demodulator_priv;
+ u8 val;
+ int ret;
+
+ dbg_info("\n");
+
+ /* 1. Normal operation mode */
+ ret = lgdt3306a_set_reg_bit(state, 0x0001, 0, 1); /* SIMFASTENB=0x01 */
+ if (lg_chkerr(ret))
+ goto fail;
+
+ /* 2. Spectrum inversion auto detection (Not valid for VSB) */
+ ret = lgdt3306a_set_inversion_auto(state, 0);
+ if (lg_chkerr(ret))
+ goto fail;
+
+ /* 3. Spectrum inversion(According to the tuner configuration) */
+ ret = lgdt3306a_set_inversion(state, 1);
+ if (lg_chkerr(ret))
+ goto fail;
+
+ /* 4. Peak-to-peak voltage of ADC input signal */
+
+ /* ADCSEL1V=0x80=1Vpp; 0x00=2Vpp */
+ ret = lgdt3306a_set_reg_bit(state, 0x0004, 7, 1);
+ if (lg_chkerr(ret))
+ goto fail;
+
+ /* 5. ADC output data capture clock phase */
+
+ /* 0=same phase as ADC clock */
+ ret = lgdt3306a_set_reg_bit(state, 0x0004, 2, 0);
+ if (lg_chkerr(ret))
+ goto fail;
+
+ /* 5a. ADC sampling clock source */
+
+ /* ADCCLKPLLSEL=0x08; 0=use ext clock, not PLL */
+ ret = lgdt3306a_set_reg_bit(state, 0x0004, 3, 0);
+ if (lg_chkerr(ret))
+ goto fail;
+
+ /* 6. Automatic PLL set */
+
+ /* PLLSETAUTO=0x40; 0=off */
+ ret = lgdt3306a_set_reg_bit(state, 0x0005, 6, 0);
+ if (lg_chkerr(ret))
+ goto fail;
+
+ if (state->cfg->xtalMHz == 24) { /* 24MHz */
+ /* 7. Frequency for PLL output(0x2564 for 192MHz for 24MHz) */
+ ret = lgdt3306a_read_reg(state, 0x0005, &val);
+ if (lg_chkerr(ret))
+ goto fail;
+ val &= 0xc0;
+ val |= 0x25;
+ ret = lgdt3306a_write_reg(state, 0x0005, val);
+ if (lg_chkerr(ret))
+ goto fail;
+ ret = lgdt3306a_write_reg(state, 0x0006, 0x64);
+ if (lg_chkerr(ret))
+ goto fail;
+
+ /* 8. ADC sampling frequency(0x180000 for 24MHz sampling) */
+ ret = lgdt3306a_read_reg(state, 0x000d, &val);
+ if (lg_chkerr(ret))
+ goto fail;
+ val &= 0xc0;
+ val |= 0x18;
+ ret = lgdt3306a_write_reg(state, 0x000d, val);
+ if (lg_chkerr(ret))
+ goto fail;
+
+ } else if (state->cfg->xtalMHz == 25) { /* 25MHz */
+ /* 7. Frequency for PLL output */
+ ret = lgdt3306a_read_reg(state, 0x0005, &val);
+ if (lg_chkerr(ret))
+ goto fail;
+ val &= 0xc0;
+ val |= 0x25;
+ ret = lgdt3306a_write_reg(state, 0x0005, val);
+ if (lg_chkerr(ret))
+ goto fail;
+ ret = lgdt3306a_write_reg(state, 0x0006, 0x64);
+ if (lg_chkerr(ret))
+ goto fail;
+
+ /* 8. ADC sampling frequency(0x190000 for 25MHz sampling) */
+ ret = lgdt3306a_read_reg(state, 0x000d, &val);
+ if (lg_chkerr(ret))
+ goto fail;
+ val &= 0xc0;
+ val |= 0x19;
+ ret = lgdt3306a_write_reg(state, 0x000d, val);
+ if (lg_chkerr(ret))
+ goto fail;
+ } else {
+ pr_err("Bad xtalMHz=%d\n", state->cfg->xtalMHz);
+ }
+#if 0
+ ret = lgdt3306a_write_reg(state, 0x000e, 0x00);
+ ret = lgdt3306a_write_reg(state, 0x000f, 0x00);
+#endif
+
+ /* 9. Center frequency of input signal of ADC */
+ ret = lgdt3306a_write_reg(state, 0x0010, 0x34); /* 3.25MHz */
+ ret = lgdt3306a_write_reg(state, 0x0011, 0x00);
+
+ /* 10. Fixed gain error value */
+ ret = lgdt3306a_write_reg(state, 0x0014, 0); /* gain error=0 */
+
+ /* 10a. VSB TR BW gear shift initial step */
+ ret = lgdt3306a_read_reg(state, 0x103c, &val);
+ val &= 0x0f;
+ val |= 0x20; /* SAMGSAUTOSTL_V[3:0] = 2 */
+ ret = lgdt3306a_write_reg(state, 0x103c, val);
+
+ /* 10b. Timing offset calibration in low temperature for VSB */
+ ret = lgdt3306a_read_reg(state, 0x103d, &val);
+ val &= 0xfc;
+ val |= 0x03;
+ ret = lgdt3306a_write_reg(state, 0x103d, val);
+
+ /* 10c. Timing offset calibration in low temperature for QAM */
+ ret = lgdt3306a_read_reg(state, 0x1036, &val);
+ val &= 0xf0;
+ val |= 0x0c;
+ ret = lgdt3306a_write_reg(state, 0x1036, val);
+
+ /* 11. Using the imaginary part of CIR in CIR loading */
+ ret = lgdt3306a_read_reg(state, 0x211f, &val);
+ val &= 0xef; /* do not use imaginary of CIR */
+ ret = lgdt3306a_write_reg(state, 0x211f, val);
+
+ /* 12. Control of no signal detector function */
+ ret = lgdt3306a_read_reg(state, 0x2849, &val);
+ val &= 0xef; /* NOUSENOSIGDET=0, enable no signal detector */
+ ret = lgdt3306a_write_reg(state, 0x2849, val);
+
+ /* FGR - put demod in some known mode */
+ ret = lgdt3306a_set_vsb(state);
+
+ /* 13. TP stream format */
+ ret = lgdt3306a_mpeg_mode(state, state->cfg->mpeg_mode);
+
+ /* 14. disable output buses */
+ ret = lgdt3306a_mpeg_tristate(state, 1);
+
+ /* 15. Sleep (in reset) */
+ ret = lgdt3306a_sleep(state);
+ lg_chkerr(ret);
+
+fail:
+ return ret;
+}
+
+static int lgdt3306a_set_parameters(struct dvb_frontend *fe)
+{
+ struct dtv_frontend_properties *p = &fe->dtv_property_cache;
+ struct lgdt3306a_state *state = fe->demodulator_priv;
+ int ret;
+
+ dbg_info("(%d, %d)\n", p->frequency, p->modulation);
+
+ if (state->current_frequency == p->frequency &&
+ state->current_modulation == p->modulation) {
+ dbg_info(" (already set, skipping ...)\n");
+ return 0;
+ }
+ state->current_frequency = -1;
+ state->current_modulation = -1;
+
+ ret = lgdt3306a_power(state, 1); /* power up */
+ if (lg_chkerr(ret))
+ goto fail;
+
+ if (fe->ops.tuner_ops.set_params) {
+ ret = fe->ops.tuner_ops.set_params(fe);
+ if (fe->ops.i2c_gate_ctrl)
+ fe->ops.i2c_gate_ctrl(fe, 0);
+#if 0
+ if (lg_chkerr(ret))
+ goto fail;
+ state->current_frequency = p->frequency;
+#endif
+ }
+
+ ret = lgdt3306a_set_modulation(state, p);
+ if (lg_chkerr(ret))
+ goto fail;
+
+ ret = lgdt3306a_agc_setup(state, p);
+ if (lg_chkerr(ret))
+ goto fail;
+
+ ret = lgdt3306a_set_if(state, p);
+ if (lg_chkerr(ret))
+ goto fail;
+
+ ret = lgdt3306a_spectral_inversion(state, p,
+ state->cfg->spectral_inversion ? 1 : 0);
+ if (lg_chkerr(ret))
+ goto fail;
+
+ ret = lgdt3306a_mpeg_mode(state, state->cfg->mpeg_mode);
+ if (lg_chkerr(ret))
+ goto fail;
+
+ ret = lgdt3306a_mpeg_mode_polarity(state,
+ state->cfg->tpclk_edge,
+ state->cfg->tpvalid_polarity);
+ if (lg_chkerr(ret))
+ goto fail;
+
+ ret = lgdt3306a_mpeg_tristate(state, 0); /* enable data bus */
+ if (lg_chkerr(ret))
+ goto fail;
+
+ ret = lgdt3306a_soft_reset(state);
+ if (lg_chkerr(ret))
+ goto fail;
+
+#ifdef DBG_DUMP
+ lgdt3306a_DumpAllRegs(state);
+#endif
+ state->current_frequency = p->frequency;
+fail:
+ return ret;
+}
+
+static int lgdt3306a_get_frontend(struct dvb_frontend *fe)
+{
+ struct lgdt3306a_state *state = fe->demodulator_priv;
+ struct dtv_frontend_properties *p = &fe->dtv_property_cache;
+
+ dbg_info("(%u, %d)\n",
+ state->current_frequency, state->current_modulation);
+
+ p->modulation = state->current_modulation;
+ p->frequency = state->current_frequency;
+ return 0;
+}
+
+static enum dvbfe_algo lgdt3306a_get_frontend_algo(struct dvb_frontend *fe)
+{
+#if 1
+ return DVBFE_ALGO_CUSTOM;
+#else
+ return DVBFE_ALGO_HW;
+#endif
+}
+
+/* ------------------------------------------------------------------------ */
+static int lgdt3306a_monitor_vsb(struct lgdt3306a_state *state)
+{
+ u8 val;
+ int ret;
+ u8 snrRef, maxPowerMan, nCombDet;
+ u16 fbDlyCir;
+
+ ret = lgdt3306a_read_reg(state, 0x21a1, &val);
+ if (ret)
+ return ret;
+ snrRef = val & 0x3f;
+
+ ret = lgdt3306a_read_reg(state, 0x2185, &maxPowerMan);
+ if (ret)
+ return ret;
+
+ ret = lgdt3306a_read_reg(state, 0x2191, &val);
+ if (ret)
+ return ret;
+ nCombDet = (val & 0x80) >> 7;
+
+ ret = lgdt3306a_read_reg(state, 0x2180, &val);
+ if (ret)
+ return ret;
+ fbDlyCir = (val & 0x03) << 8;
+
+ ret = lgdt3306a_read_reg(state, 0x2181, &val);
+ if (ret)
+ return ret;
+ fbDlyCir |= val;
+
+ dbg_info("snrRef=%d maxPowerMan=0x%x nCombDet=%d fbDlyCir=0x%x\n",
+ snrRef, maxPowerMan, nCombDet, fbDlyCir);
+
+ /* Carrier offset sub loop bandwidth */
+ ret = lgdt3306a_read_reg(state, 0x1061, &val);
+ if (ret)
+ return ret;
+ val &= 0xf8;
+ if ((snrRef > 18) && (maxPowerMan > 0x68)
+ && (nCombDet == 0x01)
+ && ((fbDlyCir == 0x03FF) || (fbDlyCir < 0x6C))) {
+ /* SNR is over 18dB and no ghosting */
+ val |= 0x00; /* final bandwidth = 0 */
+ } else {
+ val |= 0x04; /* final bandwidth = 4 */
+ }
+ ret = lgdt3306a_write_reg(state, 0x1061, val);
+ if (ret)
+ return ret;
+
+ /* Adjust Notch Filter */
+ ret = lgdt3306a_read_reg(state, 0x0024, &val);
+ if (ret)
+ return ret;
+ val &= 0x0f;
+ if (nCombDet == 0) { /* Turn on the Notch Filter */
+ val |= 0x50;
+ }
+ ret = lgdt3306a_write_reg(state, 0x0024, val);
+ if (ret)
+ return ret;
+
+ /* VSB Timing Recovery output normalization */
+ ret = lgdt3306a_read_reg(state, 0x103d, &val);
+ if (ret)
+ return ret;
+ val &= 0xcf;
+ val |= 0x20;
+ ret = lgdt3306a_write_reg(state, 0x103d, val);
+
+ return ret;
+}
+
+static enum lgdt3306a_modulation
+lgdt3306a_check_oper_mode(struct lgdt3306a_state *state)
+{
+ u8 val = 0;
+ int ret;
+
+ ret = lgdt3306a_read_reg(state, 0x0081, &val);
+ if (ret)
+ goto err;
+
+ if (val & 0x80) {
+ dbg_info("VSB\n");
+ return LG3306_VSB;
+ }
+ if (val & 0x08) {
+ ret = lgdt3306a_read_reg(state, 0x00a6, &val);
+ if (ret)
+ goto err;
+ val = val >> 2;
+ if (val & 0x01) {
+ dbg_info("QAM256\n");
+ return LG3306_QAM256;
+ }
+ dbg_info("QAM64\n");
+ return LG3306_QAM64;
+ }
+err:
+ pr_warn("UNKNOWN\n");
+ return LG3306_UNKNOWN_MODE;
+}
+
+static enum lgdt3306a_lock_status
+lgdt3306a_check_lock_status(struct lgdt3306a_state *state,
+ enum lgdt3306a_lock_check whatLock)
+{
+ u8 val = 0;
+ int ret;
+ enum lgdt3306a_modulation modeOper;
+ enum lgdt3306a_lock_status lockStatus;
+
+ modeOper = LG3306_UNKNOWN_MODE;
+
+ switch (whatLock) {
+ case LG3306_SYNC_LOCK:
+ {
+ ret = lgdt3306a_read_reg(state, 0x00a6, &val);
+ if (ret)
+ return ret;
+
+ if ((val & 0x80) == 0x80)
+ lockStatus = LG3306_LOCK;
+ else
+ lockStatus = LG3306_UNLOCK;
+
+ dbg_info("SYNC_LOCK=%x\n", lockStatus);
+ break;
+ }
+ case LG3306_AGC_LOCK:
+ {
+ ret = lgdt3306a_read_reg(state, 0x0080, &val);
+ if (ret)
+ return ret;
+
+ if ((val & 0x40) == 0x40)
+ lockStatus = LG3306_LOCK;
+ else
+ lockStatus = LG3306_UNLOCK;
+
+ dbg_info("AGC_LOCK=%x\n", lockStatus);
+ break;
+ }
+ case LG3306_TR_LOCK:
+ {
+ modeOper = lgdt3306a_check_oper_mode(state);
+ if ((modeOper == LG3306_QAM64) || (modeOper == LG3306_QAM256)) {
+ ret = lgdt3306a_read_reg(state, 0x1094, &val);
+ if (ret)
+ return ret;
+
+ if ((val & 0x80) == 0x80)
+ lockStatus = LG3306_LOCK;
+ else
+ lockStatus = LG3306_UNLOCK;
+ } else
+ lockStatus = LG3306_UNKNOWN_LOCK;
+
+ dbg_info("TR_LOCK=%x\n", lockStatus);
+ break;
+ }
+ case LG3306_FEC_LOCK:
+ {
+ modeOper = lgdt3306a_check_oper_mode(state);
+ if ((modeOper == LG3306_QAM64) || (modeOper == LG3306_QAM256)) {
+ ret = lgdt3306a_read_reg(state, 0x0080, &val);
+ if (ret)
+ return ret;
+
+ if ((val & 0x10) == 0x10)
+ lockStatus = LG3306_LOCK;
+ else
+ lockStatus = LG3306_UNLOCK;
+ } else
+ lockStatus = LG3306_UNKNOWN_LOCK;
+
+ dbg_info("FEC_LOCK=%x\n", lockStatus);
+ break;
+ }
+
+ default:
+ lockStatus = LG3306_UNKNOWN_LOCK;
+ pr_warn("UNKNOWN whatLock=%d\n", whatLock);
+ break;
+ }
+
+ return lockStatus;
+}
+
+static enum lgdt3306a_neverlock_status
+lgdt3306a_check_neverlock_status(struct lgdt3306a_state *state)
+{
+ u8 val = 0;
+ int ret;
+ enum lgdt3306a_neverlock_status lockStatus;
+
+ ret = lgdt3306a_read_reg(state, 0x0080, &val);
+ if (ret)
+ return ret;
+ lockStatus = (enum lgdt3306a_neverlock_status)(val & 0x03);
+
+ dbg_info("NeverLock=%d", lockStatus);
+
+ return lockStatus;
+}
+
+static int lgdt3306a_pre_monitoring(struct lgdt3306a_state *state)
+{
+ u8 val = 0;
+ int ret;
+ u8 currChDiffACQ, snrRef, mainStrong, aiccrejStatus;
+
+ /* Channel variation */
+ ret = lgdt3306a_read_reg(state, 0x21bc, &currChDiffACQ);
+ if (ret)
+ return ret;
+
+ /* SNR of Frame sync */
+ ret = lgdt3306a_read_reg(state, 0x21a1, &val);
+ if (ret)
+ return ret;
+ snrRef = val & 0x3f;
+
+ /* Strong Main CIR */
+ ret = lgdt3306a_read_reg(state, 0x2199, &val);
+ if (ret)
+ return ret;
+ mainStrong = (val & 0x40) >> 6;
+
+ ret = lgdt3306a_read_reg(state, 0x0090, &val);
+ if (ret)
+ return ret;
+ aiccrejStatus = (val & 0xf0) >> 4;
+
+ dbg_info("snrRef=%d mainStrong=%d aiccrejStatus=%d currChDiffACQ=0x%x\n",
+ snrRef, mainStrong, aiccrejStatus, currChDiffACQ);
+
+#if 0
+ /* Dynamic ghost exists */
+ if ((mainStrong == 0) && (currChDiffACQ > 0x70))
+#endif
+ if (mainStrong == 0) {
+ ret = lgdt3306a_read_reg(state, 0x2135, &val);
+ if (ret)
+ return ret;
+ val &= 0x0f;
+ val |= 0xa0;
+ ret = lgdt3306a_write_reg(state, 0x2135, val);
+ if (ret)
+ return ret;
+
+ ret = lgdt3306a_read_reg(state, 0x2141, &val);
+ if (ret)
+ return ret;
+ val &= 0x3f;
+ val |= 0x80;
+ ret = lgdt3306a_write_reg(state, 0x2141, val);
+ if (ret)
+ return ret;
+
+ ret = lgdt3306a_write_reg(state, 0x2122, 0x70);
+ if (ret)
+ return ret;
+ } else { /* Weak ghost or static channel */
+ ret = lgdt3306a_read_reg(state, 0x2135, &val);
+ if (ret)
+ return ret;
+ val &= 0x0f;
+ val |= 0x70;
+ ret = lgdt3306a_write_reg(state, 0x2135, val);
+ if (ret)
+ return ret;
+
+ ret = lgdt3306a_read_reg(state, 0x2141, &val);
+ if (ret)
+ return ret;
+ val &= 0x3f;
+ val |= 0x40;
+ ret = lgdt3306a_write_reg(state, 0x2141, val);
+ if (ret)
+ return ret;
+
+ ret = lgdt3306a_write_reg(state, 0x2122, 0x40);
+ if (ret)
+ return ret;
+ }
+ return 0;
+}
+
+static enum lgdt3306a_lock_status
+lgdt3306a_sync_lock_poll(struct lgdt3306a_state *state)
+{
+ enum lgdt3306a_lock_status syncLockStatus = LG3306_UNLOCK;
+ int i;
+
+ for (i = 0; i < 2; i++) {
+ msleep(30);
+
+ syncLockStatus = lgdt3306a_check_lock_status(state,
+ LG3306_SYNC_LOCK);
+
+ if (syncLockStatus == LG3306_LOCK) {
+ dbg_info("locked(%d)\n", i);
+ return LG3306_LOCK;
+ }
+ }
+ dbg_info("not locked\n");
+ return LG3306_UNLOCK;
+}
+
+static enum lgdt3306a_lock_status
+lgdt3306a_fec_lock_poll(struct lgdt3306a_state *state)
+{
+ enum lgdt3306a_lock_status FECLockStatus = LG3306_UNLOCK;
+ int i;
+
+ for (i = 0; i < 2; i++) {
+ msleep(30);
+
+ FECLockStatus = lgdt3306a_check_lock_status(state,
+ LG3306_FEC_LOCK);
+
+ if (FECLockStatus == LG3306_LOCK) {
+ dbg_info("locked(%d)\n", i);
+ return FECLockStatus;
+ }
+ }
+ dbg_info("not locked\n");
+ return FECLockStatus;
+}
+
+static enum lgdt3306a_neverlock_status
+lgdt3306a_neverlock_poll(struct lgdt3306a_state *state)
+{
+ enum lgdt3306a_neverlock_status NLLockStatus = LG3306_NL_FAIL;
+ int i;
+
+ for (i = 0; i < 5; i++) {
+ msleep(30);
+
+ NLLockStatus = lgdt3306a_check_neverlock_status(state);
+
+ if (NLLockStatus == LG3306_NL_LOCK) {
+ dbg_info("NL_LOCK(%d)\n", i);
+ return NLLockStatus;
+ }
+ }
+ dbg_info("NLLockStatus=%d\n", NLLockStatus);
+ return NLLockStatus;
+}
+
+static u8 lgdt3306a_get_packet_error(struct lgdt3306a_state *state)
+{
+ u8 val;
+ int ret;
+
+ ret = lgdt3306a_read_reg(state, 0x00fa, &val);
+ if (ret)
+ return ret;
+
+ return val;
+}
+
+static const u32 valx_x10[] = {
+ 10, 11, 13, 15, 17, 20, 25, 33, 41, 50, 59, 73, 87, 100
+};
+static const u32 log10x_x1000[] = {
+ 0, 41, 114, 176, 230, 301, 398, 518, 613, 699, 771, 863, 939, 1000
+};
+
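+ /*
+ * log10_x1000(x) approximates 1000 * log10(x / 10) using the two tables
+ * above: the integer part comes from repeated scaling by 10, the
+ * fractional part from linear interpolation between adjacent table
+ * entries (e.g. log10_x1000(20) ~= 301, i.e. log10(2) ~= 0.301).
+ */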
+static u32 log10_x1000(u32 x)
+{
+ u32 diff_val, step_val, step_log10;
+ u32 log_val = 0;
+ u32 i;
+
+ if (x <= 0)
+ return -1000000; /* signal error */
+
+ if (x == 10)
+ return 0; /* log(1)=0 */
+
+ if (x < 10) {
+ while (x < 10) {
+ x = x * 10;
+ log_val--;
+ }
+ } else { /* x > 10 */
+ while (x >= 100) {
+ x = x / 10;
+ log_val++;
+ }
+ }
+ log_val *= 1000;
+
+ if (x == 10) /* was our input an exact multiple of 10 */
+ return log_val; /* don't need to interpolate */
+
+ /* find our place on the log curve */
+ for (i = 1; i < ARRAY_SIZE(valx_x10); i++) {
+ if (valx_x10[i] >= x)
+ break;
+ }
+ if (i == ARRAY_SIZE(valx_x10))
+ return log_val + log10x_x1000[i - 1];
+
+ diff_val = x - valx_x10[i-1];
+ step_val = valx_x10[i] - valx_x10[i - 1];
+ step_log10 = log10x_x1000[i] - log10x_x1000[i - 1];
+
+ /* do a linear interpolation to get in-between values */
+ return log_val + log10x_x1000[i - 1] +
+ ((diff_val*step_log10) / step_val);
+}
+
+static u32 lgdt3306a_calculate_snr_x100(struct lgdt3306a_state *state)
+{
+ u32 mse; /* Mean-Square Error */
+ u32 pwr; /* Constellation power */
+ u32 snr_x100;
+
+ mse = (read_reg(state, 0x00ec) << 8) |
+ (read_reg(state, 0x00ed));
+ pwr = (read_reg(state, 0x00e8) << 8) |
+ (read_reg(state, 0x00e9));
+
+ if (mse == 0) /* no signal */
+ return 0;
+
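+ /*
+ * With log10_x1000(y) = 1000 * log10(y / 10), the expression below works
+ * out to 1000 * log10(pwr / mse), i.e. 100 * 10*log10(pwr/mse), which is
+ * the SNR in dB scaled by 100.
+ */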
+ snr_x100 = log10_x1000((pwr * 10000) / mse) - 3000;
+ dbg_info("mse=%u, pwr=%u, snr_x100=%d\n", mse, pwr, snr_x100);
+
+ return snr_x100;
+}
+
+static enum lgdt3306a_lock_status
+lgdt3306a_vsb_lock_poll(struct lgdt3306a_state *state)
+{
+ int ret;
+ u8 cnt = 0;
+ u8 packet_error;
+ u32 snr;
+
+ for (cnt = 0; cnt < 10; cnt++) {
+ if (lgdt3306a_sync_lock_poll(state) == LG3306_UNLOCK) {
+ dbg_info("no sync lock!\n");
+ return LG3306_UNLOCK;
+ }
+
+ msleep(20);
+ ret = lgdt3306a_pre_monitoring(state);
+ if (ret)
+ break;
+
+ packet_error = lgdt3306a_get_packet_error(state);
+ snr = lgdt3306a_calculate_snr_x100(state);
+ dbg_info("cnt=%d errors=%d snr=%d\n", cnt, packet_error, snr);
+
+ if ((snr >= 1500) && (packet_error < 0xff))
+ return LG3306_LOCK;
+ }
+
+ dbg_info("not locked!\n");
+ return LG3306_UNLOCK;
+}
+
+static enum lgdt3306a_lock_status
+lgdt3306a_qam_lock_poll(struct lgdt3306a_state *state)
+{
+ u8 cnt;
+ u8 packet_error;
+ u32 snr;
+
+ for (cnt = 0; cnt < 10; cnt++) {
+ if (lgdt3306a_fec_lock_poll(state) == LG3306_UNLOCK) {
+ dbg_info("no fec lock!\n");
+ return LG3306_UNLOCK;
+ }
+
+ msleep(20);
+
+ packet_error = lgdt3306a_get_packet_error(state);
+ snr = lgdt3306a_calculate_snr_x100(state);
+ dbg_info("cnt=%d errors=%d snr=%d\n", cnt, packet_error, snr);
+
+ if ((snr >= 1500) && (packet_error < 0xff))
+ return LG3306_LOCK;
+ }
+
+ dbg_info("not locked!\n");
+ return LG3306_UNLOCK;
+}
+
+static int lgdt3306a_read_status(struct dvb_frontend *fe, fe_status_t *status)
+{
+ struct lgdt3306a_state *state = fe->demodulator_priv;
+ u16 strength = 0;
+ int ret = 0;
+
+ if (fe->ops.tuner_ops.get_rf_strength) {
+ ret = fe->ops.tuner_ops.get_rf_strength(fe, &strength);
+ if (ret == 0)
+ dbg_info("strength=%d\n", strength);
+ else
+ dbg_info("fe->ops.tuner_ops.get_rf_strength() failed\n");
+ }
+
+ *status = 0;
+ if (lgdt3306a_neverlock_poll(state) == LG3306_NL_LOCK) {
+ *status |= FE_HAS_SIGNAL;
+ *status |= FE_HAS_CARRIER;
+
+ switch (state->current_modulation) {
+ case QAM_256:
+ case QAM_64:
+ if (lgdt3306a_qam_lock_poll(state) == LG3306_LOCK) {
+ *status |= FE_HAS_VITERBI;
+ *status |= FE_HAS_SYNC;
+
+ *status |= FE_HAS_LOCK;
+ }
+ break;
+ case VSB_8:
+ if (lgdt3306a_vsb_lock_poll(state) == LG3306_LOCK) {
+ *status |= FE_HAS_VITERBI;
+ *status |= FE_HAS_SYNC;
+
+ *status |= FE_HAS_LOCK;
+
+ ret = lgdt3306a_monitor_vsb(state);
+ }
+ break;
+ default:
+ ret = -EINVAL;
+ }
+ }
+ return ret;
+}
+
+
+static int lgdt3306a_read_snr(struct dvb_frontend *fe, u16 *snr)
+{
+ struct lgdt3306a_state *state = fe->demodulator_priv;
+
+ state->snr = lgdt3306a_calculate_snr_x100(state);
+ /* report SNR in dB * 10 */
+ *snr = state->snr/10;
+
+ return 0;
+}
+
+static int lgdt3306a_read_signal_strength(struct dvb_frontend *fe,
+ u16 *strength)
+{
+ /*
+ * Calculate some sort of "strength" from SNR
+ */
+ struct lgdt3306a_state *state = fe->demodulator_priv;
+ u16 snr; /* snr_x10 */
+ int ret;
+ u32 ref_snr; /* snr*100 */
+ u32 str;
+
+ *strength = 0;
+
+ switch (state->current_modulation) {
+ case VSB_8:
+ ref_snr = 1600; /* 16dB */
+ break;
+ case QAM_64:
+ ref_snr = 2200; /* 22dB */
+ break;
+ case QAM_256:
+ ref_snr = 2800; /* 28dB */
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ ret = fe->ops.read_snr(fe, &snr);
+ if (lg_chkerr(ret))
+ goto fail;
+
+ if (state->snr <= (ref_snr - 100))
+ str = 0;
+ else if (state->snr <= ref_snr)
+ str = (0xffff * 65) / 100; /* 65% */
+ else {
+ str = state->snr - ref_snr;
+ str /= 50;
+ str += 78; /* 78%-100% */
+ if (str > 100)
+ str = 100;
+ str = (0xffff * str) / 100;
+ }
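+ /*
+ * Worked example (assumed interpretation of the scaling above): for
+ * 8-VSB, ref_snr = 1600; an snr of 2100 (21dB) gives
+ * str = (2100 - 1600) / 50 + 78 = 88, reported as 88% of 0xffff.
+ */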
+ *strength = (u16)str;
+ dbg_info("strength=%u\n", *strength);
+
+fail:
+ return ret;
+}
+
+/* ------------------------------------------------------------------------ */
+
+static int lgdt3306a_read_ber(struct dvb_frontend *fe, u32 *ber)
+{
+ struct lgdt3306a_state *state = fe->demodulator_priv;
+ u32 tmp;
+
+ *ber = 0;
+#if 1
+ /* FGR - FIXME - I don't know what value is expected by dvb_core;
+ * what is the scale of the value? */
+ tmp = read_reg(state, 0x00fc); /* NBERVALUE[24-31] */
+ tmp = (tmp << 8) | read_reg(state, 0x00fd); /* NBERVALUE[16-23] */
+ tmp = (tmp << 8) | read_reg(state, 0x00fe); /* NBERVALUE[8-15] */
+ tmp = (tmp << 8) | read_reg(state, 0x00ff); /* NBERVALUE[0-7] */
+ *ber = tmp;
+ dbg_info("ber=%u\n", tmp);
+#endif
+ return 0;
+}
+
+static int lgdt3306a_read_ucblocks(struct dvb_frontend *fe, u32 *ucblocks)
+{
+ struct lgdt3306a_state *state = fe->demodulator_priv;
+
+ *ucblocks = 0;
+#if 1
+ /* FGR - FIXME - I don't know what value is expected by dvb_core;
+ * what happens when the value wraps? */
+ *ucblocks = read_reg(state, 0x00f4); /* TPIFTPERRCNT[0-7] */
+ dbg_info("ucblocks=%u\n", *ucblocks);
+#endif
+
+ return 0;
+}
+
+static int lgdt3306a_tune(struct dvb_frontend *fe, bool re_tune,
+ unsigned int mode_flags, unsigned int *delay,
+ fe_status_t *status)
+{
+ int ret = 0;
+ struct lgdt3306a_state *state = fe->demodulator_priv;
+
+ dbg_info("re_tune=%u\n", re_tune);
+
+ if (re_tune) {
+ state->current_frequency = -1; /* force re-tune */
+ ret = lgdt3306a_set_parameters(fe);
+ if (ret != 0)
+ return ret;
+ }
+ *delay = 125;
+ ret = lgdt3306a_read_status(fe, status);
+
+ return ret;
+}
+
+static int lgdt3306a_get_tune_settings(struct dvb_frontend *fe,
+ struct dvb_frontend_tune_settings
+ *fe_tune_settings)
+{
+ fe_tune_settings->min_delay_ms = 100;
+ dbg_info("\n");
+ return 0;
+}
+
+static int lgdt3306a_search(struct dvb_frontend *fe)
+{
+ fe_status_t status = 0;
+ int i, ret;
+
+ /* set frontend */
+ ret = lgdt3306a_set_parameters(fe);
+ if (ret)
+ goto error;
+
+ /* wait frontend lock */
+ for (i = 20; i > 0; i--) {
+ dbg_info(": loop=%d\n", i);
+ msleep(50);
+ ret = lgdt3306a_read_status(fe, &status);
+ if (ret)
+ goto error;
+
+ if (status & FE_HAS_LOCK)
+ break;
+ }
+
+ /* check if we have a valid signal */
+ if (status & FE_HAS_LOCK)
+ return DVBFE_ALGO_SEARCH_SUCCESS;
+ else
+ return DVBFE_ALGO_SEARCH_AGAIN;
+
+error:
+ dbg_info("failed (%d)\n", ret);
+ return DVBFE_ALGO_SEARCH_ERROR;
+}
+
+static void lgdt3306a_release(struct dvb_frontend *fe)
+{
+ struct lgdt3306a_state *state = fe->demodulator_priv;
+
+ dbg_info("\n");
+ kfree(state);
+}
+
+static struct dvb_frontend_ops lgdt3306a_ops;
+
+struct dvb_frontend *lgdt3306a_attach(const struct lgdt3306a_config *config,
+ struct i2c_adapter *i2c_adap)
+{
+ struct lgdt3306a_state *state = NULL;
+ int ret;
+ u8 val;
+
+ dbg_info("(%d-%04x)\n",
+ i2c_adap ? i2c_adapter_id(i2c_adap) : 0,
+ config ? config->i2c_addr : 0);
+
+ state = kzalloc(sizeof(struct lgdt3306a_state), GFP_KERNEL);
+ if (state == NULL)
+ goto fail;
+
+ state->cfg = config;
+ state->i2c_adap = i2c_adap;
+
+ memcpy(&state->frontend.ops, &lgdt3306a_ops,
+ sizeof(struct dvb_frontend_ops));
+ state->frontend.demodulator_priv = state;
+
+ /* verify that we're talking to a lg3306a */
+ /* FGR - NOTE - there is no obvious ChipId to check; we check
+ * some "known" bits after reset, but it's still just a guess */
+ ret = lgdt3306a_read_reg(state, 0x0000, &val);
+ if (lg_chkerr(ret))
+ goto fail;
+ if ((val & 0x74) != 0x74) {
+ pr_warn("expected 0x74, got 0x%x\n", (val & 0x74));
+#if 0
+ /* FIXME - re-enable when we know this is right */
+ goto fail;
+#endif
+ }
+ ret = lgdt3306a_read_reg(state, 0x0001, &val);
+ if (lg_chkerr(ret))
+ goto fail;
+ if ((val & 0xf6) != 0xc6) {
+ pr_warn("expected 0xc6, got 0x%x\n", (val & 0xf6));
+#if 0
+ /* FIXME - re-enable when we know this is right */
+ goto fail;
+#endif
+ }
+ ret = lgdt3306a_read_reg(state, 0x0002, &val);
+ if (lg_chkerr(ret))
+ goto fail;
+ if ((val & 0x73) != 0x03) {
+ pr_warn("expected 0x03, got 0x%x\n", (val & 0x73));
+#if 0
+ /* FIXME - re-enable when we know this is right */
+ goto fail;
+#endif
+ }
+
+ state->current_frequency = -1;
+ state->current_modulation = -1;
+
+ lgdt3306a_sleep(state);
+
+ return &state->frontend;
+
+fail:
+ pr_warn("unable to detect LGDT3306A hardware\n");
+ kfree(state);
+ return NULL;
+}
+EXPORT_SYMBOL(lgdt3306a_attach);
+
+#ifdef DBG_DUMP
+
+static const short regtab[] = {
+ 0x0000, /* SOFTRSTB 1'b1 1'b1 1'b1 ADCPDB 1'b1 PLLPDB GBBPDB 11111111 */
+ 0x0001, /* 1'b1 1'b1 1'b0 1'b0 AUTORPTRS */
+ 0x0002, /* NI2CRPTEN 1'b0 1'b0 1'b0 SPECINVAUT */
+ 0x0003, /* AGCRFOUT */
+ 0x0004, /* ADCSEL1V ADCCNT ADCCNF ADCCNS ADCCLKPLL */
+ 0x0005, /* PLLINDIVSE */
+ 0x0006, /* PLLCTRL[7:0] 11100001 */
+ 0x0007, /* SYSINITWAITTIME[7:0] (msec) 00001000 */
+ 0x0008, /* STDOPMODE[7:0] 10000000 */
+ 0x0009, /* 1'b0 1'b0 1'b0 STDOPDETTMODE[2:0] STDOPDETCMODE[1:0] 00011110 */
+ 0x000a, /* DAFTEN 1'b1 x x SCSYSLOCK */
+ 0x000b, /* SCSYSLOCKCHKTIME[7:0] (10msec) 01100100 */
+ 0x000d, /* x SAMPLING4 */
+ 0x000e, /* SAMFREQ[15:8] 00000000 */
+ 0x000f, /* SAMFREQ[7:0] 00000000 */
+ 0x0010, /* IFFREQ[15:8] 01100000 */
+ 0x0011, /* IFFREQ[7:0] 00000000 */
+ 0x0012, /* AGCEN AGCREFMO */
+ 0x0013, /* AGCRFFIXB AGCIFFIXB AGCLOCKDETRNGSEL[1:0] 1'b1 1'b0 1'b0 1'b0 11101000 */
+ 0x0014, /* AGCFIXVALUE[7:0] 01111111 */
+ 0x0015, /* AGCREF[15:8] 00001010 */
+ 0x0016, /* AGCREF[7:0] 11100100 */
+ 0x0017, /* AGCDELAY[7:0] 00100000 */
+ 0x0018, /* AGCRFBW[3:0] AGCIFBW[3:0] 10001000 */
+ 0x0019, /* AGCUDOUTMODE[1:0] AGCUDCTRLLEN[1:0] AGCUDCTRL */
+ 0x001c, /* 1'b1 PFEN MFEN AICCVSYNC */
+ 0x001d, /* 1'b0 1'b1 1'b0 1'b1 AICCVSYNC */
+ 0x001e, /* AICCALPHA[3:0] 1'b1 1'b0 1'b1 1'b0 01111010 */
+ 0x001f, /* AICCDETTH[19:16] AICCOFFTH[19:16] 00000000 */
+ 0x0020, /* AICCDETTH[15:8] 01111100 */
+ 0x0021, /* AICCDETTH[7:0] 00000000 */
+ 0x0022, /* AICCOFFTH[15:8] 00000101 */
+ 0x0023, /* AICCOFFTH[7:0] 11100000 */
+ 0x0024, /* AICCOPMODE3[1:0] AICCOPMODE2[1:0] AICCOPMODE1[1:0] AICCOPMODE0[1:0] 00000000 */
+ 0x0025, /* AICCFIXFREQ3[23:16] 00000000 */
+ 0x0026, /* AICCFIXFREQ3[15:8] 00000000 */
+ 0x0027, /* AICCFIXFREQ3[7:0] 00000000 */
+ 0x0028, /* AICCFIXFREQ2[23:16] 00000000 */
+ 0x0029, /* AICCFIXFREQ2[15:8] 00000000 */
+ 0x002a, /* AICCFIXFREQ2[7:0] 00000000 */
+ 0x002b, /* AICCFIXFREQ1[23:16] 00000000 */
+ 0x002c, /* AICCFIXFREQ1[15:8] 00000000 */
+ 0x002d, /* AICCFIXFREQ1[7:0] 00000000 */
+ 0x002e, /* AICCFIXFREQ0[23:16] 00000000 */
+ 0x002f, /* AICCFIXFREQ0[15:8] 00000000 */
+ 0x0030, /* AICCFIXFREQ0[7:0] 00000000 */
+ 0x0031, /* 1'b0 1'b1 1'b0 1'b0 x DAGC1STER */
+ 0x0032, /* DAGC1STEN DAGC1STER */
+ 0x0033, /* DAGC1STREF[15:8] 00001010 */
+ 0x0034, /* DAGC1STREF[7:0] 11100100 */
+ 0x0035, /* DAGC2NDE */
+ 0x0036, /* DAGC2NDREF[15:8] 00001010 */
+ 0x0037, /* DAGC2NDREF[7:0] 10000000 */
+ 0x0038, /* DAGC2NDLOCKDETRNGSEL[1:0] */
+ 0x003d, /* 1'b1 SAMGEARS */
+ 0x0040, /* SAMLFGMA */
+ 0x0041, /* SAMLFBWM */
+ 0x0044, /* 1'b1 CRGEARSHE */
+ 0x0045, /* CRLFGMAN */
+ 0x0046, /* CFLFBWMA */
+ 0x0047, /* CRLFGMAN */
+ 0x0048, /* x x x x CRLFGSTEP_VS[3:0] xxxx1001 */
+ 0x0049, /* CRLFBWMA */
+ 0x004a, /* CRLFBWMA */
+ 0x0050, /* 1'b0 1'b1 1'b1 1'b0 MSECALCDA */
+ 0x0070, /* TPOUTEN TPIFEN TPCLKOUTE */
+ 0x0071, /* TPSENB TPSSOPBITE */
+ 0x0073, /* TP47HINS x x CHBERINT PERMODE[1:0] PERINT[1:0] 1xx11100 */
+ 0x0075, /* x x x x x IQSWAPCTRL[2:0] xxxxx000 */
+ 0x0076, /* NBERCON NBERST NBERPOL NBERWSYN */
+ 0x0077, /* x NBERLOSTTH[2:0] NBERACQTH[3:0] x0000000 */
+ 0x0078, /* NBERPOLY[31:24] 00000000 */
+ 0x0079, /* NBERPOLY[23:16] 00000000 */
+ 0x007a, /* NBERPOLY[15:8] 00000000 */
+ 0x007b, /* NBERPOLY[7:0] 00000000 */
+ 0x007c, /* NBERPED[31:24] 00000000 */
+ 0x007d, /* NBERPED[23:16] 00000000 */
+ 0x007e, /* NBERPED[15:8] 00000000 */
+ 0x007f, /* NBERPED[7:0] 00000000 */
+ 0x0080, /* x AGCLOCK DAGCLOCK SYSLOCK x x NEVERLOCK[1:0] */
+ 0x0085, /* SPECINVST */
+ 0x0088, /* SYSLOCKTIME[15:8] */
+ 0x0089, /* SYSLOCKTIME[7:0] */
+ 0x008c, /* FECLOCKTIME[15:8] */
+ 0x008d, /* FECLOCKTIME[7:0] */
+ 0x008e, /* AGCACCOUT[15:8] */
+ 0x008f, /* AGCACCOUT[7:0] */
+ 0x0090, /* AICCREJSTATUS[3:0] AICCREJBUSY[3:0] */
+ 0x0091, /* AICCVSYNC */
+ 0x009c, /* CARRFREQOFFSET[15:8] */
+ 0x009d, /* CARRFREQOFFSET[7:0] */
+ 0x00a1, /* SAMFREQOFFSET[23:16] */
+ 0x00a2, /* SAMFREQOFFSET[15:8] */
+ 0x00a3, /* SAMFREQOFFSET[7:0] */
+ 0x00a6, /* SYNCLOCK SYNCLOCKH */
+#if 0 /* covered elsewhere */
+ 0x00e8, /* CONSTPWR[15:8] */
+ 0x00e9, /* CONSTPWR[7:0] */
+ 0x00ea, /* BMSE[15:8] */
+ 0x00eb, /* BMSE[7:0] */
+ 0x00ec, /* MSE[15:8] */
+ 0x00ed, /* MSE[7:0] */
+ 0x00ee, /* CONSTI[7:0] */
+ 0x00ef, /* CONSTQ[7:0] */
+#endif
+ 0x00f4, /* TPIFTPERRCNT[7:0] */
+ 0x00f5, /* TPCORREC */
+ 0x00f6, /* VBBER[15:8] */
+ 0x00f7, /* VBBER[7:0] */
+ 0x00f8, /* VABER[15:8] */
+ 0x00f9, /* VABER[7:0] */
+ 0x00fa, /* TPERRCNT[7:0] */
+ 0x00fb, /* NBERLOCK x x x x x x x */
+ 0x00fc, /* NBERVALUE[31:24] */
+ 0x00fd, /* NBERVALUE[23:16] */
+ 0x00fe, /* NBERVALUE[15:8] */
+ 0x00ff, /* NBERVALUE[7:0] */
+ 0x1000, /* 1'b0 WODAGCOU */
+ 0x1005, /* x x 1'b1 1'b1 x SRD_Q_QM */
+ 0x1009, /* SRDWAITTIME[7:0] (10msec) 00100011 */
+ 0x100a, /* SRDWAITTIME_CQS[7:0] (msec) 01100100 */
+ 0x101a, /* x 1'b1 1'b0 1'b0 x QMDQAMMODE[2:0] x100x010 */
+ 0x1036, /* 1'b0 1'b1 1'b0 1'b0 SAMGSEND_CQS[3:0] 01001110 */
+ 0x103c, /* SAMGSAUTOSTL_V[3:0] SAMGSAUTOEDL_V[3:0] 01000110 */
+ 0x103d, /* 1'b1 1'b1 SAMCNORMBP_V[1:0] 1'b0 1'b0 SAMMODESEL_V[1:0] 11100001 */
+ 0x103f, /* SAMZTEDSE */
+ 0x105d, /* EQSTATUSE */
+ 0x105f, /* x PMAPG2_V[2:0] x DMAPG2_V[2:0] x001x011 */
+ 0x1060, /* 1'b1 EQSTATUSE */
+ 0x1061, /* CRMAPBWSTL_V[3:0] CRMAPBWEDL_V[3:0] 00000100 */
+ 0x1065, /* 1'b0 x CRMODE_V[1:0] 1'b1 x 1'b1 x 0x111x1x */
+ 0x1066, /* 1'b0 1'b0 1'b1 1'b0 1'b1 PNBOOSTSE */
+ 0x1068, /* CREPHNGAIN2_V[3:0] CREPHNPBW_V[3:0] 10010001 */
+ 0x106e, /* x x x x x CREPHNEN_ */
+ 0x106f, /* CREPHNTH_V[7:0] 00010101 */
+ 0x1072, /* CRSWEEPN */
+ 0x1073, /* CRPGAIN_V[3:0] x x 1'b1 1'b1 1001xx11 */
+ 0x1074, /* CRPBW_V[3:0] x x 1'b1 1'b1 0001xx11 */
+ 0x1080, /* DAFTSTATUS[1:0] x x x x x x */
+ 0x1081, /* SRDSTATUS[1:0] x x x x x SRDLOCK */
+ 0x10a9, /* EQSTATUS_CQS[1:0] x x x x x x */
+ 0x10b7, /* EQSTATUS_V[1:0] x x x x x x */
+#if 0 /* SMART_ANT */
+ 0x1f00, /* MODEDETE */
+ 0x1f01, /* x x x x x x x SFNRST xxxxxxx0 */
+ 0x1f03, /* NUMOFANT[7:0] 10000000 */
+ 0x1f04, /* x SELMASK[6:0] x0000000 */
+ 0x1f05, /* x SETMASK[6:0] x0000000 */
+ 0x1f06, /* x TXDATA[6:0] x0000000 */
+ 0x1f07, /* x CHNUMBER[6:0] x0000000 */
+ 0x1f09, /* AGCTIME[23:16] 10011000 */
+ 0x1f0a, /* AGCTIME[15:8] 10010110 */
+ 0x1f0b, /* AGCTIME[7:0] 10000000 */
+ 0x1f0c, /* ANTTIME[31:24] 00000000 */
+ 0x1f0d, /* ANTTIME[23:16] 00000011 */
+ 0x1f0e, /* ANTTIME[15:8] 10010000 */
+ 0x1f0f, /* ANTTIME[7:0] 10010000 */
+ 0x1f11, /* SYNCTIME[23:16] 10011000 */
+ 0x1f12, /* SYNCTIME[15:8] 10010110 */
+ 0x1f13, /* SYNCTIME[7:0] 10000000 */
+ 0x1f14, /* SNRTIME[31:24] 00000001 */
+ 0x1f15, /* SNRTIME[23:16] 01111101 */
+ 0x1f16, /* SNRTIME[15:8] 01111000 */
+ 0x1f17, /* SNRTIME[7:0] 01000000 */
+ 0x1f19, /* FECTIME[23:16] 00000000 */
+ 0x1f1a, /* FECTIME[15:8] 01110010 */
+ 0x1f1b, /* FECTIME[7:0] 01110000 */
+ 0x1f1d, /* FECTHD[7:0] 00000011 */
+ 0x1f1f, /* SNRTHD[23:16] 00001000 */
+ 0x1f20, /* SNRTHD[15:8] 01111111 */
+ 0x1f21, /* SNRTHD[7:0] 10000101 */
+ 0x1f80, /* IRQFLG x x SFSDRFLG MODEBFLG SAVEFLG SCANFLG TRACKFLG */
+ 0x1f81, /* x SYNCCON SNRCON FECCON x STDBUSY SYNCRST AGCFZCO */
+ 0x1f82, /* x x x SCANOPCD[4:0] */
+ 0x1f83, /* x x x x MAINOPCD[3:0] */
+ 0x1f84, /* x x RXDATA[13:8] */
+ 0x1f85, /* RXDATA[7:0] */
+ 0x1f86, /* x x SDTDATA[13:8] */
+ 0x1f87, /* SDTDATA[7:0] */
+ 0x1f89, /* ANTSNR[23:16] */
+ 0x1f8a, /* ANTSNR[15:8] */
+ 0x1f8b, /* ANTSNR[7:0] */
+ 0x1f8c, /* x x x x ANTFEC[13:8] */
+ 0x1f8d, /* ANTFEC[7:0] */
+ 0x1f8e, /* MAXCNT[7:0] */
+ 0x1f8f, /* SCANCNT[7:0] */
+ 0x1f91, /* MAXPW[23:16] */
+ 0x1f92, /* MAXPW[15:8] */
+ 0x1f93, /* MAXPW[7:0] */
+ 0x1f95, /* CURPWMSE[23:16] */
+ 0x1f96, /* CURPWMSE[15:8] */
+ 0x1f97, /* CURPWMSE[7:0] */
+#endif /* SMART_ANT */
+ 0x211f, /* 1'b1 1'b1 1'b1 CIRQEN x x 1'b0 1'b0 1111xx00 */
+ 0x212a, /* EQAUTOST */
+ 0x2122, /* CHFAST[7:0] 01100000 */
+ 0x212b, /* FFFSTEP_V[3:0] x FBFSTEP_V[2:0] 0001x001 */
+ 0x212c, /* PHDEROTBWSEL[3:0] 1'b1 1'b1 1'b1 1'b0 10001110 */
+ 0x212d, /* 1'b1 1'b1 1'b1 1'b1 x x TPIFLOCKS */
+ 0x2135, /* DYNTRACKFDEQ[3:0] x 1'b0 1'b0 1'b0 1010x000 */
+ 0x2141, /* TRMODE[1:0] 1'b1 1'b1 1'b0 1'b1 1'b1 1'b1 01110111 */
+ 0x2162, /* AICCCTRLE */
+ 0x2173, /* PHNCNFCNT[7:0] 00000100 */
+ 0x2179, /* 1'b0 1'b0 1'b0 1'b1 x BADSINGLEDYNTRACKFBF[2:0] 0001x001 */
+ 0x217a, /* 1'b0 1'b0 1'b0 1'b1 x BADSLOWSINGLEDYNTRACKFBF[2:0] 0001x001 */
+ 0x217e, /* CNFCNTTPIF[7:0] 00001000 */
+ 0x217f, /* TPERRCNTTPIF[7:0] 00000001 */
+ 0x2180, /* x x x x x x FBDLYCIR[9:8] */
+ 0x2181, /* FBDLYCIR[7:0] */
+ 0x2185, /* MAXPWRMAIN[7:0] */
+ 0x2191, /* NCOMBDET x x x x x x x */
+ 0x2199, /* x MAINSTRON */
+ 0x219a, /* FFFEQSTEPOUT_V[3:0] FBFSTEPOUT_V[2:0] */
+ 0x21a1, /* x x SNRREF[5:0] */
+ 0x2845, /* 1'b0 1'b1 x x FFFSTEP_CQS[1:0] FFFCENTERTAP[1:0] 01xx1110 */
+ 0x2846, /* 1'b0 x 1'b0 1'b1 FBFSTEP_CQS[1:0] 1'b1 1'b0 0x011110 */
+ 0x2847, /* ENNOSIGDE */
+ 0x2849, /* 1'b1 1'b1 NOUSENOSI */
+ 0x284a, /* EQINITWAITTIME[7:0] 01100100 */
+ 0x3000, /* 1'b1 1'b1 1'b1 x x x 1'b0 RPTRSTM */
+ 0x3001, /* RPTRSTWAITTIME[7:0] (100msec) 00110010 */
+ 0x3031, /* FRAMELOC */
+ 0x3032, /* 1'b1 1'b0 1'b0 1'b0 x x FRAMELOCKMODE_CQS[1:0] 1000xx11 */
+ 0x30a9, /* VDLOCK_Q FRAMELOCK */
+ 0x30aa, /* MPEGLOCK */
+};
+
+#define numDumpRegs (sizeof(regtab)/sizeof(regtab[0]))
+static u8 regval1[numDumpRegs] = {0, };
+static u8 regval2[numDumpRegs] = {0, };
+
+static void lgdt3306a_DumpAllRegs(struct lgdt3306a_state *state)
+{
+ memset(regval2, 0xff, sizeof(regval2));
+ lgdt3306a_DumpRegs(state);
+}
+
+static void lgdt3306a_DumpRegs(struct lgdt3306a_state *state)
+{
+ int i;
+ int sav_debug = debug;
+
+ if ((debug & DBG_DUMP) == 0)
+ return;
+ debug &= ~DBG_REG; /* suppress DBG_REG during reg dump */
+
+ lg_debug("\n");
+
+ for (i = 0; i < numDumpRegs; i++) {
+ lgdt3306a_read_reg(state, regtab[i], &regval1[i]);
+ if (regval1[i] != regval2[i]) {
+ lg_debug(" %04X = %02X\n", regtab[i], regval1[i]);
+ regval2[i] = regval1[i];
+ }
+ }
+ debug = sav_debug;
+}
+#endif /* DBG_DUMP */
+
+
+
+static struct dvb_frontend_ops lgdt3306a_ops = {
+ .delsys = { SYS_ATSC, SYS_DVBC_ANNEX_B },
+ .info = {
+ .name = "LG Electronics LGDT3306A VSB/QAM Frontend",
+ .frequency_min = 54000000,
+ .frequency_max = 858000000,
+ .frequency_stepsize = 62500,
+ .caps = FE_CAN_QAM_64 | FE_CAN_QAM_256 | FE_CAN_8VSB
+ },
+ .i2c_gate_ctrl = lgdt3306a_i2c_gate_ctrl,
+ .init = lgdt3306a_init,
+ .sleep = lgdt3306a_fe_sleep,
+ /* if this is set, it overrides the default swzigzag */
+ .tune = lgdt3306a_tune,
+ .set_frontend = lgdt3306a_set_parameters,
+ .get_frontend = lgdt3306a_get_frontend,
+ .get_frontend_algo = lgdt3306a_get_frontend_algo,
+ .get_tune_settings = lgdt3306a_get_tune_settings,
+ .read_status = lgdt3306a_read_status,
+ .read_ber = lgdt3306a_read_ber,
+ .read_signal_strength = lgdt3306a_read_signal_strength,
+ .read_snr = lgdt3306a_read_snr,
+ .read_ucblocks = lgdt3306a_read_ucblocks,
+ .release = lgdt3306a_release,
+ .ts_bus_ctrl = lgdt3306a_ts_bus_ctrl,
+ .search = lgdt3306a_search,
+};
+
+MODULE_DESCRIPTION("LG Electronics LGDT3306A ATSC/QAM-B Demodulator Driver");
+MODULE_AUTHOR("Fred Richter <frichter@hauppauge.com>");
+MODULE_LICENSE("GPL");
+MODULE_VERSION("0.2");
diff --git a/drivers/media/dvb-frontends/lgdt3306a.h b/drivers/media/dvb-frontends/lgdt3306a.h
new file mode 100644
index 0000000..9dbb2dc
--- /dev/null
+++ b/drivers/media/dvb-frontends/lgdt3306a.h
@@ -0,0 +1,74 @@
+/*
+ * Support for LGDT3306A - 8VSB/QAM-B
+ *
+ * Copyright (C) 2013,2014 Fred Richter <frichter@hauppauge.com>
+ * based on lgdt3305.[ch] by Michael Krufky
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef _LGDT3306A_H_
+#define _LGDT3306A_H_
+
+#include <linux/i2c.h>
+#include "dvb_frontend.h"
+
+
+enum lgdt3306a_mpeg_mode {
+ LGDT3306A_MPEG_PARALLEL = 0,
+ LGDT3306A_MPEG_SERIAL = 1,
+};
+
+enum lgdt3306a_tp_clock_edge {
+ LGDT3306A_TPCLK_RISING_EDGE = 0,
+ LGDT3306A_TPCLK_FALLING_EDGE = 1,
+};
+
+enum lgdt3306a_tp_valid_polarity {
+ LGDT3306A_TP_VALID_LOW = 0,
+ LGDT3306A_TP_VALID_HIGH = 1,
+};
+
+struct lgdt3306a_config {
+ u8 i2c_addr;
+
+ /* user defined IF frequency in KHz */
+ u16 qam_if_khz;
+ u16 vsb_if_khz;
+
+ /* disable i2c repeater - 0:repeater enabled 1:repeater disabled */
+ unsigned int deny_i2c_rptr:1;
+
+ /* spectral inversion - 0:disabled 1:enabled */
+ unsigned int spectral_inversion:1;
+
+ enum lgdt3306a_mpeg_mode mpeg_mode;
+ enum lgdt3306a_tp_clock_edge tpclk_edge;
+ enum lgdt3306a_tp_valid_polarity tpvalid_polarity;
+
+ /* demod clock freq in MHz; 24 or 25 supported */
+ int xtalMHz;
+};
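+
+/*
+ * Example (illustrative values only; the real settings depend on the
+ * bridge/board design):
+ *
+ *	static struct lgdt3306a_config my_lgdt3306a_config = {
+ *		.i2c_addr		= 0x59,
+ *		.qam_if_khz		= 4000,
+ *		.vsb_if_khz		= 3250,
+ *		.deny_i2c_rptr		= 1,
+ *		.spectral_inversion	= 1,
+ *		.mpeg_mode		= LGDT3306A_MPEG_SERIAL,
+ *		.tpclk_edge		= LGDT3306A_TPCLK_RISING_EDGE,
+ *		.tpvalid_polarity	= LGDT3306A_TP_VALID_HIGH,
+ *		.xtalMHz		= 25,
+ *	};
+ *
+ *	fe = lgdt3306a_attach(&my_lgdt3306a_config, i2c_adap);
+ */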
+
+#if IS_REACHABLE(CONFIG_DVB_LGDT3306A)
+struct dvb_frontend *lgdt3306a_attach(const struct lgdt3306a_config *config,
+ struct i2c_adapter *i2c_adap);
+#else
+static inline
+struct dvb_frontend *lgdt3306a_attach(const struct lgdt3306a_config *config,
+ struct i2c_adapter *i2c_adap)
+{
+ printk(KERN_WARNING "%s: driver disabled by Kconfig\n", __func__);
+ return NULL;
+}
+#endif /* CONFIG_DVB_LGDT3306A */
+
+#endif /* _LGDT3306A_H_ */
diff --git a/drivers/media/dvb-frontends/lgdt330x.h b/drivers/media/dvb-frontends/lgdt330x.h
index 8bb3322..c73eeb4 100644
--- a/drivers/media/dvb-frontends/lgdt330x.h
+++ b/drivers/media/dvb-frontends/lgdt330x.h
@@ -52,7 +52,7 @@
int clock_polarity_flip;
};
-#if IS_ENABLED(CONFIG_DVB_LGDT330X)
+#if IS_REACHABLE(CONFIG_DVB_LGDT330X)
extern struct dvb_frontend* lgdt330x_attach(const struct lgdt330x_config* config,
struct i2c_adapter* i2c);
#else
diff --git a/drivers/media/dvb-frontends/lgs8gl5.h b/drivers/media/dvb-frontends/lgs8gl5.h
index c2da596..a5b3faf 100644
--- a/drivers/media/dvb-frontends/lgs8gl5.h
+++ b/drivers/media/dvb-frontends/lgs8gl5.h
@@ -31,7 +31,7 @@
u8 demod_address;
};
-#if IS_ENABLED(CONFIG_DVB_LGS8GL5)
+#if IS_REACHABLE(CONFIG_DVB_LGS8GL5)
extern struct dvb_frontend *lgs8gl5_attach(
const struct lgs8gl5_config *config, struct i2c_adapter *i2c);
#else
diff --git a/drivers/media/dvb-frontends/lgs8gxx.h b/drivers/media/dvb-frontends/lgs8gxx.h
index dadb78b..368c992 100644
--- a/drivers/media/dvb-frontends/lgs8gxx.h
+++ b/drivers/media/dvb-frontends/lgs8gxx.h
@@ -80,7 +80,7 @@
u8 tuner_address;
};
-#if IS_ENABLED(CONFIG_DVB_LGS8GXX)
+#if IS_REACHABLE(CONFIG_DVB_LGS8GXX)
extern struct dvb_frontend *lgs8gxx_attach(const struct lgs8gxx_config *config,
struct i2c_adapter *i2c);
#else
diff --git a/drivers/media/dvb-frontends/lnbh24.h b/drivers/media/dvb-frontends/lnbh24.h
index b327a4f..a088b8e 100644
--- a/drivers/media/dvb-frontends/lnbh24.h
+++ b/drivers/media/dvb-frontends/lnbh24.h
@@ -37,7 +37,7 @@
#include <linux/dvb/frontend.h>
-#if IS_ENABLED(CONFIG_DVB_LNBP21)
+#if IS_REACHABLE(CONFIG_DVB_LNBP21)
/* override_set and override_clear control which
system register bits (above) to always set & clear */
extern struct dvb_frontend *lnbh24_attach(struct dvb_frontend *fe,
diff --git a/drivers/media/dvb-frontends/lnbp21.h b/drivers/media/dvb-frontends/lnbp21.h
index dbcbcc2..a9b530d 100644
--- a/drivers/media/dvb-frontends/lnbp21.h
+++ b/drivers/media/dvb-frontends/lnbp21.h
@@ -57,7 +57,7 @@
#include <linux/dvb/frontend.h>
-#if IS_ENABLED(CONFIG_DVB_LNBP21)
+#if IS_REACHABLE(CONFIG_DVB_LNBP21)
/* override_set and override_clear control which
system register bits (above) to always set & clear */
extern struct dvb_frontend *lnbp21_attach(struct dvb_frontend *fe,
diff --git a/drivers/media/dvb-frontends/lnbp22.h b/drivers/media/dvb-frontends/lnbp22.h
index 63861b3..6281483 100644
--- a/drivers/media/dvb-frontends/lnbp22.h
+++ b/drivers/media/dvb-frontends/lnbp22.h
@@ -39,7 +39,7 @@
#include <linux/dvb/frontend.h>
-#if IS_ENABLED(CONFIG_DVB_LNBP22)
+#if IS_REACHABLE(CONFIG_DVB_LNBP22)
/*
* override_set and override_clear control which system register bits (above)
* to always set & clear
diff --git a/drivers/media/dvb-frontends/m88rs2000.h b/drivers/media/dvb-frontends/m88rs2000.h
index 0a50ea9..de74301 100644
--- a/drivers/media/dvb-frontends/m88rs2000.h
+++ b/drivers/media/dvb-frontends/m88rs2000.h
@@ -41,7 +41,7 @@
CALL_IS_READ,
};
-#if IS_ENABLED(CONFIG_DVB_M88RS2000)
+#if IS_REACHABLE(CONFIG_DVB_M88RS2000)
extern struct dvb_frontend *m88rs2000_attach(
const struct m88rs2000_config *config, struct i2c_adapter *i2c);
#else
diff --git a/drivers/media/dvb-frontends/mb86a16.h b/drivers/media/dvb-frontends/mb86a16.h
index 277ce06..e486dc0 100644
--- a/drivers/media/dvb-frontends/mb86a16.h
+++ b/drivers/media/dvb-frontends/mb86a16.h
@@ -33,7 +33,7 @@
-#if IS_ENABLED(CONFIG_DVB_MB86A16)
+#if IS_REACHABLE(CONFIG_DVB_MB86A16)
extern struct dvb_frontend *mb86a16_attach(const struct mb86a16_config *config,
struct i2c_adapter *i2c_adap);
diff --git a/drivers/media/dvb-frontends/mb86a20s.h b/drivers/media/dvb-frontends/mb86a20s.h
index cbeb941..f749c8a 100644
--- a/drivers/media/dvb-frontends/mb86a20s.h
+++ b/drivers/media/dvb-frontends/mb86a20s.h
@@ -34,7 +34,7 @@
bool is_serial;
};
-#if IS_ENABLED(CONFIG_DVB_MB86A20S)
+#if IS_REACHABLE(CONFIG_DVB_MB86A20S)
extern struct dvb_frontend *mb86a20s_attach(const struct mb86a20s_config *config,
struct i2c_adapter *i2c);
extern struct i2c_adapter *mb86a20s_get_tuner_i2c_adapter(struct dvb_frontend *);
diff --git a/drivers/media/dvb-frontends/mn88472.h b/drivers/media/dvb-frontends/mn88472.h
index e4e0b80..095294d 100644
--- a/drivers/media/dvb-frontends/mn88472.h
+++ b/drivers/media/dvb-frontends/mn88472.h
@@ -19,6 +19,16 @@
#include <linux/dvb/frontend.h>
+enum ts_clock {
+ VARIABLE_TS_CLOCK,
+ FIXED_TS_CLOCK,
+};
+
+enum ts_mode {
+ SERIAL_TS_MODE,
+ PARALLEL_TS_MODE,
+};
+
struct mn88472_config {
/*
* Max num of bytes given I2C adapter could write at once.
@@ -39,6 +49,8 @@
* Hz
*/
u32 xtal;
+ int ts_mode;
+ int ts_clock;
};
#endif
diff --git a/drivers/media/dvb-frontends/mn88473.h b/drivers/media/dvb-frontends/mn88473.h
index a373ec9..c717ebed 100644
--- a/drivers/media/dvb-frontends/mn88473.h
+++ b/drivers/media/dvb-frontends/mn88473.h
@@ -33,6 +33,12 @@
* DVB frontend.
*/
struct dvb_frontend **fe;
+
+ /*
+ * Xtal frequency.
+ * Hz
+ */
+ u32 xtal;
};
#endif
diff --git a/drivers/media/dvb-frontends/mt312.h b/drivers/media/dvb-frontends/mt312.h
index 5706621..386939a 100644
--- a/drivers/media/dvb-frontends/mt312.h
+++ b/drivers/media/dvb-frontends/mt312.h
@@ -36,7 +36,7 @@
unsigned int voltage_inverted:1;
};
-#if IS_ENABLED(CONFIG_DVB_MT312)
+#if IS_REACHABLE(CONFIG_DVB_MT312)
struct dvb_frontend *mt312_attach(const struct mt312_config *config,
struct i2c_adapter *i2c);
#else
diff --git a/drivers/media/dvb-frontends/mt352.h b/drivers/media/dvb-frontends/mt352.h
index 451d904..5873263 100644
--- a/drivers/media/dvb-frontends/mt352.h
+++ b/drivers/media/dvb-frontends/mt352.h
@@ -51,7 +51,7 @@
int (*demod_init)(struct dvb_frontend* fe);
};
-#if IS_ENABLED(CONFIG_DVB_MT352)
+#if IS_REACHABLE(CONFIG_DVB_MT352)
extern struct dvb_frontend* mt352_attach(const struct mt352_config* config,
struct i2c_adapter* i2c);
#else
diff --git a/drivers/media/dvb-frontends/nxt200x.h b/drivers/media/dvb-frontends/nxt200x.h
index e38d01f..825b928 100644
--- a/drivers/media/dvb-frontends/nxt200x.h
+++ b/drivers/media/dvb-frontends/nxt200x.h
@@ -42,7 +42,7 @@
int (*set_ts_params)(struct dvb_frontend* fe, int is_punctured);
};
-#if IS_ENABLED(CONFIG_DVB_NXT200X)
+#if IS_REACHABLE(CONFIG_DVB_NXT200X)
extern struct dvb_frontend* nxt200x_attach(const struct nxt200x_config* config,
struct i2c_adapter* i2c);
#else
diff --git a/drivers/media/dvb-frontends/nxt6000.h b/drivers/media/dvb-frontends/nxt6000.h
index b5867c2..a94cefc 100644
--- a/drivers/media/dvb-frontends/nxt6000.h
+++ b/drivers/media/dvb-frontends/nxt6000.h
@@ -33,7 +33,7 @@
u8 clock_inversion:1;
};
-#if IS_ENABLED(CONFIG_DVB_NXT6000)
+#if IS_REACHABLE(CONFIG_DVB_NXT6000)
extern struct dvb_frontend* nxt6000_attach(const struct nxt6000_config* config,
struct i2c_adapter* i2c);
#else
diff --git a/drivers/media/dvb-frontends/or51132.h b/drivers/media/dvb-frontends/or51132.h
index cdb5be3..9acf8dc 100644
--- a/drivers/media/dvb-frontends/or51132.h
+++ b/drivers/media/dvb-frontends/or51132.h
@@ -34,7 +34,7 @@
int (*set_ts_params)(struct dvb_frontend* fe, int is_punctured);
};
-#if IS_ENABLED(CONFIG_DVB_OR51132)
+#if IS_REACHABLE(CONFIG_DVB_OR51132)
extern struct dvb_frontend* or51132_attach(const struct or51132_config* config,
struct i2c_adapter* i2c);
#else
diff --git a/drivers/media/dvb-frontends/or51211.h b/drivers/media/dvb-frontends/or51211.h
index 9a8ae93..cc6adab 100644
--- a/drivers/media/dvb-frontends/or51211.h
+++ b/drivers/media/dvb-frontends/or51211.h
@@ -37,7 +37,7 @@
void (*sleep)(struct dvb_frontend * fe);
};
-#if IS_ENABLED(CONFIG_DVB_OR51211)
+#if IS_REACHABLE(CONFIG_DVB_OR51211)
extern struct dvb_frontend* or51211_attach(const struct or51211_config* config,
struct i2c_adapter* i2c);
#else
diff --git a/drivers/media/dvb-frontends/rtl2832.c b/drivers/media/dvb-frontends/rtl2832.c
index 67faa8d..b400f7b 100644
--- a/drivers/media/dvb-frontends/rtl2832.c
+++ b/drivers/media/dvb-frontends/rtl2832.c
@@ -685,7 +685,7 @@
struct rtl2832_dev *dev = fe->demodulator_priv;
struct i2c_client *client = dev->client;
int ret;
- u32 tmp;
+ u32 uninitialized_var(tmp);
dev_dbg(&client->dev, "\n");
diff --git a/drivers/media/dvb-frontends/s5h1409.h b/drivers/media/dvb-frontends/s5h1409.h
index 9e143f5..f58b9ca 100644
--- a/drivers/media/dvb-frontends/s5h1409.h
+++ b/drivers/media/dvb-frontends/s5h1409.h
@@ -67,7 +67,7 @@
u8 hvr1600_opt;
};
-#if IS_ENABLED(CONFIG_DVB_S5H1409)
+#if IS_REACHABLE(CONFIG_DVB_S5H1409)
extern struct dvb_frontend *s5h1409_attach(const struct s5h1409_config *config,
struct i2c_adapter *i2c);
#else
diff --git a/drivers/media/dvb-frontends/s5h1411.h b/drivers/media/dvb-frontends/s5h1411.h
index 1d7deb6..f3a87f7 100644
--- a/drivers/media/dvb-frontends/s5h1411.h
+++ b/drivers/media/dvb-frontends/s5h1411.h
@@ -69,7 +69,7 @@
u8 status_mode;
};
-#if IS_ENABLED(CONFIG_DVB_S5H1411)
+#if IS_REACHABLE(CONFIG_DVB_S5H1411)
extern struct dvb_frontend *s5h1411_attach(const struct s5h1411_config *config,
struct i2c_adapter *i2c);
#else
diff --git a/drivers/media/dvb-frontends/s5h1420.h b/drivers/media/dvb-frontends/s5h1420.h
index 210049b..142d93e 100644
--- a/drivers/media/dvb-frontends/s5h1420.h
+++ b/drivers/media/dvb-frontends/s5h1420.h
@@ -40,7 +40,7 @@
u8 serial_mpeg:1;
};
-#if IS_ENABLED(CONFIG_DVB_S5H1420)
+#if IS_REACHABLE(CONFIG_DVB_S5H1420)
extern struct dvb_frontend *s5h1420_attach(const struct s5h1420_config *config,
struct i2c_adapter *i2c);
extern struct i2c_adapter *s5h1420_get_tuner_i2c_adapter(struct dvb_frontend *fe);
diff --git a/drivers/media/dvb-frontends/s5h1432.h b/drivers/media/dvb-frontends/s5h1432.h
index 70917dd..f490c5e 100644
--- a/drivers/media/dvb-frontends/s5h1432.h
+++ b/drivers/media/dvb-frontends/s5h1432.h
@@ -75,7 +75,7 @@
u8 status_mode;
};
-#if IS_ENABLED(CONFIG_DVB_S5H1432)
+#if IS_REACHABLE(CONFIG_DVB_S5H1432)
extern struct dvb_frontend *s5h1432_attach(const struct s5h1432_config *config,
struct i2c_adapter *i2c);
#else
diff --git a/drivers/media/dvb-frontends/s921.h b/drivers/media/dvb-frontends/s921.h
index 9b20c9e..7d3999a 100644
--- a/drivers/media/dvb-frontends/s921.h
+++ b/drivers/media/dvb-frontends/s921.h
@@ -25,7 +25,7 @@
u8 demod_address;
};
-#if IS_ENABLED(CONFIG_DVB_S921)
+#if IS_REACHABLE(CONFIG_DVB_S921)
extern struct dvb_frontend *s921_attach(const struct s921_config *config,
struct i2c_adapter *i2c);
extern struct i2c_adapter *s921_get_tuner_i2c_adapter(struct dvb_frontend *);
diff --git a/drivers/media/dvb-frontends/si2165.c b/drivers/media/dvb-frontends/si2165.c
index 98ddb49..4cc5d10 100644
--- a/drivers/media/dvb-frontends/si2165.c
+++ b/drivers/media/dvb-frontends/si2165.c
@@ -505,7 +505,7 @@
/* reset crc */
ret = si2165_writereg8(state, 0x0379, 0x01);
if (ret)
- return ret;
+ goto error;
ret = si2165_upload_firmware_block(state, data, len,
&offset, block_count);
diff --git a/drivers/media/dvb-frontends/si2165.h b/drivers/media/dvb-frontends/si2165.h
index efaa081..8a15d6a 100644
--- a/drivers/media/dvb-frontends/si2165.h
+++ b/drivers/media/dvb-frontends/si2165.h
@@ -45,7 +45,7 @@
bool inversion;
};
-#if IS_ENABLED(CONFIG_DVB_SI2165)
+#if IS_REACHABLE(CONFIG_DVB_SI2165)
struct dvb_frontend *si2165_attach(
const struct si2165_config *config,
struct i2c_adapter *i2c);
diff --git a/drivers/media/dvb-frontends/si21xx.h b/drivers/media/dvb-frontends/si21xx.h
index 1509fed..ef5f351 100644
--- a/drivers/media/dvb-frontends/si21xx.h
+++ b/drivers/media/dvb-frontends/si21xx.h
@@ -13,7 +13,7 @@
int min_delay_ms;
};
-#if IS_ENABLED(CONFIG_DVB_SI21XX)
+#if IS_REACHABLE(CONFIG_DVB_SI21XX)
extern struct dvb_frontend *si21xx_attach(const struct si21xx_config *config,
struct i2c_adapter *i2c);
#else
diff --git a/drivers/media/dvb-frontends/sp2.c b/drivers/media/dvb-frontends/sp2.c
index cc1ef96..8fd4276 100644
--- a/drivers/media/dvb-frontends/sp2.c
+++ b/drivers/media/dvb-frontends/sp2.c
@@ -413,11 +413,8 @@
struct sp2 *s = i2c_get_clientdata(client);
dev_dbg(&client->dev, "\n");
-
sp2_exit(client);
- if (s != NULL)
- kfree(s);
-
+ kfree(s);
return 0;
}
diff --git a/drivers/media/dvb-frontends/sp8870.h b/drivers/media/dvb-frontends/sp8870.h
index 065ec67..f507b9f 100644
--- a/drivers/media/dvb-frontends/sp8870.h
+++ b/drivers/media/dvb-frontends/sp8870.h
@@ -35,7 +35,7 @@
int (*request_firmware)(struct dvb_frontend* fe, const struct firmware **fw, char* name);
};
-#if IS_ENABLED(CONFIG_DVB_SP8870)
+#if IS_REACHABLE(CONFIG_DVB_SP8870)
extern struct dvb_frontend* sp8870_attach(const struct sp8870_config* config,
struct i2c_adapter* i2c);
#else
diff --git a/drivers/media/dvb-frontends/sp887x.h b/drivers/media/dvb-frontends/sp887x.h
index 2cdc4e8..412f011 100644
--- a/drivers/media/dvb-frontends/sp887x.h
+++ b/drivers/media/dvb-frontends/sp887x.h
@@ -17,7 +17,7 @@
int (*request_firmware)(struct dvb_frontend* fe, const struct firmware **fw, char* name);
};
-#if IS_ENABLED(CONFIG_DVB_SP887X)
+#if IS_REACHABLE(CONFIG_DVB_SP887X)
extern struct dvb_frontend* sp887x_attach(const struct sp887x_config* config,
struct i2c_adapter* i2c);
#else
diff --git a/drivers/media/dvb-frontends/stb0899_drv.h b/drivers/media/dvb-frontends/stb0899_drv.h
index 139264d..0a72131 100644
--- a/drivers/media/dvb-frontends/stb0899_drv.h
+++ b/drivers/media/dvb-frontends/stb0899_drv.h
@@ -141,7 +141,7 @@
int (*tuner_set_rfsiggain)(struct dvb_frontend *fe, u32 rf_gain);
};
-#if IS_ENABLED(CONFIG_DVB_STB0899)
+#if IS_REACHABLE(CONFIG_DVB_STB0899)
extern struct dvb_frontend *stb0899_attach(struct stb0899_config *config,
struct i2c_adapter *i2c);
diff --git a/drivers/media/dvb-frontends/stb6000.h b/drivers/media/dvb-frontends/stb6000.h
index a768189..da581b6 100644
--- a/drivers/media/dvb-frontends/stb6000.h
+++ b/drivers/media/dvb-frontends/stb6000.h
@@ -35,7 +35,7 @@
* @param i2c i2c adapter to use.
* @return FE pointer on success, NULL on failure.
*/
-#if IS_ENABLED(CONFIG_DVB_STB6000)
+#if IS_REACHABLE(CONFIG_DVB_STB6000)
extern struct dvb_frontend *stb6000_attach(struct dvb_frontend *fe, int addr,
struct i2c_adapter *i2c);
#else
diff --git a/drivers/media/dvb-frontends/stb6100.h b/drivers/media/dvb-frontends/stb6100.h
index 3a1e40f..218c818 100644
--- a/drivers/media/dvb-frontends/stb6100.h
+++ b/drivers/media/dvb-frontends/stb6100.h
@@ -94,7 +94,7 @@
u32 reference;
};
-#if IS_ENABLED(CONFIG_DVB_STB6100)
+#if IS_REACHABLE(CONFIG_DVB_STB6100)
extern struct dvb_frontend *stb6100_attach(struct dvb_frontend *fe,
const struct stb6100_config *config,
diff --git a/drivers/media/dvb-frontends/stv0288.h b/drivers/media/dvb-frontends/stv0288.h
index a0bd931..b58603c 100644
--- a/drivers/media/dvb-frontends/stv0288.h
+++ b/drivers/media/dvb-frontends/stv0288.h
@@ -43,7 +43,7 @@
int (*set_ts_params)(struct dvb_frontend *fe, int is_punctured);
};
-#if IS_ENABLED(CONFIG_DVB_STV0288)
+#if IS_REACHABLE(CONFIG_DVB_STV0288)
extern struct dvb_frontend *stv0288_attach(const struct stv0288_config *config,
struct i2c_adapter *i2c);
#else
diff --git a/drivers/media/dvb-frontends/stv0297.h b/drivers/media/dvb-frontends/stv0297.h
index c8ff363..b30632a 100644
--- a/drivers/media/dvb-frontends/stv0297.h
+++ b/drivers/media/dvb-frontends/stv0297.h
@@ -42,7 +42,7 @@
u8 stop_during_read:1;
};
-#if IS_ENABLED(CONFIG_DVB_STV0297)
+#if IS_REACHABLE(CONFIG_DVB_STV0297)
extern struct dvb_frontend* stv0297_attach(const struct stv0297_config* config,
struct i2c_adapter* i2c);
#else
diff --git a/drivers/media/dvb-frontends/stv0299.h b/drivers/media/dvb-frontends/stv0299.h
index 06f70fc8..0aca30a 100644
--- a/drivers/media/dvb-frontends/stv0299.h
+++ b/drivers/media/dvb-frontends/stv0299.h
@@ -95,7 +95,7 @@
int (*set_ts_params)(struct dvb_frontend *fe, int is_punctured);
};
-#if IS_ENABLED(CONFIG_DVB_STV0299)
+#if IS_REACHABLE(CONFIG_DVB_STV0299)
extern struct dvb_frontend *stv0299_attach(const struct stv0299_config *config,
struct i2c_adapter *i2c);
#else
diff --git a/drivers/media/dvb-frontends/stv0367.h b/drivers/media/dvb-frontends/stv0367.h
index ea80b34..92b3e85 100644
--- a/drivers/media/dvb-frontends/stv0367.h
+++ b/drivers/media/dvb-frontends/stv0367.h
@@ -39,7 +39,7 @@
int clk_pol;
};
-#if IS_ENABLED(CONFIG_DVB_STV0367)
+#if IS_REACHABLE(CONFIG_DVB_STV0367)
extern struct
dvb_frontend *stv0367ter_attach(const struct stv0367_config *config,
struct i2c_adapter *i2c);
diff --git a/drivers/media/dvb-frontends/stv0900.h b/drivers/media/dvb-frontends/stv0900.h
index e2a6dc6..c90bf00 100644
--- a/drivers/media/dvb-frontends/stv0900.h
+++ b/drivers/media/dvb-frontends/stv0900.h
@@ -58,7 +58,7 @@
void (*set_lock_led)(struct dvb_frontend *fe, int offon);
};
-#if IS_ENABLED(CONFIG_DVB_STV0900)
+#if IS_REACHABLE(CONFIG_DVB_STV0900)
extern struct dvb_frontend *stv0900_attach(const struct stv0900_config *config,
struct i2c_adapter *i2c, int demod);
#else
diff --git a/drivers/media/dvb-frontends/stv090x.h b/drivers/media/dvb-frontends/stv090x.h
index 742eeda..012e55e 100644
--- a/drivers/media/dvb-frontends/stv090x.h
+++ b/drivers/media/dvb-frontends/stv090x.h
@@ -107,7 +107,7 @@
u8 xor_value);
};
-#if IS_ENABLED(CONFIG_DVB_STV090x)
+#if IS_REACHABLE(CONFIG_DVB_STV090x)
struct dvb_frontend *stv090x_attach(struct stv090x_config *config,
struct i2c_adapter *i2c,
diff --git a/drivers/media/dvb-frontends/stv6110.h b/drivers/media/dvb-frontends/stv6110.h
index 8fa07e6..f3c8a5c 100644
--- a/drivers/media/dvb-frontends/stv6110.h
+++ b/drivers/media/dvb-frontends/stv6110.h
@@ -46,7 +46,7 @@
u8 clk_div; /* divisor value for the output clock */
};
-#if IS_ENABLED(CONFIG_DVB_STV6110)
+#if IS_REACHABLE(CONFIG_DVB_STV6110)
extern struct dvb_frontend *stv6110_attach(struct dvb_frontend *fe,
const struct stv6110_config *config,
struct i2c_adapter *i2c);
diff --git a/drivers/media/dvb-frontends/stv6110x.h b/drivers/media/dvb-frontends/stv6110x.h
index bc4766d..9f7eb25 100644
--- a/drivers/media/dvb-frontends/stv6110x.h
+++ b/drivers/media/dvb-frontends/stv6110x.h
@@ -53,7 +53,7 @@
};
-#if IS_ENABLED(CONFIG_DVB_STV6110x)
+#if IS_REACHABLE(CONFIG_DVB_STV6110x)
extern struct stv6110x_devctl *stv6110x_attach(struct dvb_frontend *fe,
const struct stv6110x_config *config,
diff --git a/drivers/media/dvb-frontends/tda1002x.h b/drivers/media/dvb-frontends/tda1002x.h
index e404b6e..0d33461 100644
--- a/drivers/media/dvb-frontends/tda1002x.h
+++ b/drivers/media/dvb-frontends/tda1002x.h
@@ -57,7 +57,7 @@
u16 deltaf;
};
-#if IS_ENABLED(CONFIG_DVB_TDA10021)
+#if IS_REACHABLE(CONFIG_DVB_TDA10021)
extern struct dvb_frontend* tda10021_attach(const struct tda1002x_config* config,
struct i2c_adapter* i2c, u8 pwm);
#else
@@ -69,7 +69,7 @@
}
#endif // CONFIG_DVB_TDA10021
-#if IS_ENABLED(CONFIG_DVB_TDA10023)
+#if IS_REACHABLE(CONFIG_DVB_TDA10023)
extern struct dvb_frontend *tda10023_attach(
const struct tda10023_config *config,
struct i2c_adapter *i2c, u8 pwm);
diff --git a/drivers/media/dvb-frontends/tda10048.h b/drivers/media/dvb-frontends/tda10048.h
index 5e7bf4e..bc77a73 100644
--- a/drivers/media/dvb-frontends/tda10048.h
+++ b/drivers/media/dvb-frontends/tda10048.h
@@ -73,7 +73,7 @@
u8 pll_n;
};
-#if IS_ENABLED(CONFIG_DVB_TDA10048)
+#if IS_REACHABLE(CONFIG_DVB_TDA10048)
extern struct dvb_frontend *tda10048_attach(
const struct tda10048_config *config,
struct i2c_adapter *i2c);
diff --git a/drivers/media/dvb-frontends/tda1004x.h b/drivers/media/dvb-frontends/tda1004x.h
index dd283fb..efd7659 100644
--- a/drivers/media/dvb-frontends/tda1004x.h
+++ b/drivers/media/dvb-frontends/tda1004x.h
@@ -117,7 +117,7 @@
enum tda1004x_demod demod_type;
};
-#if IS_ENABLED(CONFIG_DVB_TDA1004X)
+#if IS_REACHABLE(CONFIG_DVB_TDA1004X)
extern struct dvb_frontend* tda10045_attach(const struct tda1004x_config* config,
struct i2c_adapter* i2c);
diff --git a/drivers/media/dvb-frontends/tda10071.h b/drivers/media/dvb-frontends/tda10071.h
index 331b5a8..da89f42 100644
--- a/drivers/media/dvb-frontends/tda10071.h
+++ b/drivers/media/dvb-frontends/tda10071.h
@@ -72,7 +72,7 @@
};
-#if IS_ENABLED(CONFIG_DVB_TDA10071)
+#if IS_REACHABLE(CONFIG_DVB_TDA10071)
extern struct dvb_frontend *tda10071_attach(
const struct tda10071_config *config, struct i2c_adapter *i2c);
#else
diff --git a/drivers/media/dvb-frontends/tda10086.h b/drivers/media/dvb-frontends/tda10086.h
index 458fe91..690e469 100644
--- a/drivers/media/dvb-frontends/tda10086.h
+++ b/drivers/media/dvb-frontends/tda10086.h
@@ -46,7 +46,7 @@
enum tda10086_xtal xtal_freq;
};
-#if IS_ENABLED(CONFIG_DVB_TDA10086)
+#if IS_REACHABLE(CONFIG_DVB_TDA10086)
extern struct dvb_frontend* tda10086_attach(const struct tda10086_config* config,
struct i2c_adapter* i2c);
#else
diff --git a/drivers/media/dvb-frontends/tda18271c2dd.h b/drivers/media/dvb-frontends/tda18271c2dd.h
index dd84f7b..7ebd8ea 100644
--- a/drivers/media/dvb-frontends/tda18271c2dd.h
+++ b/drivers/media/dvb-frontends/tda18271c2dd.h
@@ -3,7 +3,7 @@
#include <linux/kconfig.h>
-#if IS_ENABLED(CONFIG_DVB_TDA18271C2DD)
+#if IS_REACHABLE(CONFIG_DVB_TDA18271C2DD)
struct dvb_frontend *tda18271c2dd_attach(struct dvb_frontend *fe,
struct i2c_adapter *i2c, u8 adr);
#else
diff --git a/drivers/media/dvb-frontends/tda665x.h b/drivers/media/dvb-frontends/tda665x.h
index 03a0da6..baf520b 100644
--- a/drivers/media/dvb-frontends/tda665x.h
+++ b/drivers/media/dvb-frontends/tda665x.h
@@ -31,7 +31,7 @@
u32 ref_divider;
};
-#if IS_ENABLED(CONFIG_DVB_TDA665x)
+#if IS_REACHABLE(CONFIG_DVB_TDA665x)
extern struct dvb_frontend *tda665x_attach(struct dvb_frontend *fe,
const struct tda665x_config *config,
diff --git a/drivers/media/dvb-frontends/tda8083.h b/drivers/media/dvb-frontends/tda8083.h
index de6b186..46be06f 100644
--- a/drivers/media/dvb-frontends/tda8083.h
+++ b/drivers/media/dvb-frontends/tda8083.h
@@ -35,7 +35,7 @@
u8 demod_address;
};
-#if IS_ENABLED(CONFIG_DVB_TDA8083)
+#if IS_REACHABLE(CONFIG_DVB_TDA8083)
extern struct dvb_frontend* tda8083_attach(const struct tda8083_config* config,
struct i2c_adapter* i2c);
#else
diff --git a/drivers/media/dvb-frontends/tda8261.h b/drivers/media/dvb-frontends/tda8261.h
index 55cf4ff..9fa5b30 100644
--- a/drivers/media/dvb-frontends/tda8261.h
+++ b/drivers/media/dvb-frontends/tda8261.h
@@ -34,7 +34,7 @@
enum tda8261_step step_size;
};
-#if IS_ENABLED(CONFIG_DVB_TDA8261)
+#if IS_REACHABLE(CONFIG_DVB_TDA8261)
extern struct dvb_frontend *tda8261_attach(struct dvb_frontend *fe,
const struct tda8261_config *config,
diff --git a/drivers/media/dvb-frontends/tda826x.h b/drivers/media/dvb-frontends/tda826x.h
index 5f0f20e..81abe1a 100644
--- a/drivers/media/dvb-frontends/tda826x.h
+++ b/drivers/media/dvb-frontends/tda826x.h
@@ -35,7 +35,7 @@
* @param has_loopthrough Set to 1 if the card has a loopthrough RF connector.
* @return FE pointer on success, NULL on failure.
*/
-#if IS_ENABLED(CONFIG_DVB_TDA826X)
+#if IS_REACHABLE(CONFIG_DVB_TDA826X)
extern struct dvb_frontend* tda826x_attach(struct dvb_frontend *fe, int addr,
struct i2c_adapter *i2c,
int has_loopthrough);
diff --git a/drivers/media/dvb-frontends/ts2020.c b/drivers/media/dvb-frontends/ts2020.c
index 9aba044..90164a3 100644
--- a/drivers/media/dvb-frontends/ts2020.c
+++ b/drivers/media/dvb-frontends/ts2020.c
@@ -26,12 +26,23 @@
#define FREQ_OFFSET_LOW_SYM_RATE 3000
struct ts2020_priv {
+ struct dvb_frontend *fe;
/* i2c details */
int i2c_address;
struct i2c_adapter *i2c;
- u8 clk_out_div;
+ u8 clk_out:2;
+ u8 clk_out_div:5;
u32 frequency;
u32 frequency_div;
+#define TS2020_M88TS2020 0
+#define TS2020_M88TS2022 1
+ u8 tuner;
+ u8 loop_through:1;
+};
+
+struct ts2020_reg_val {
+ u8 reg;
+ u8 val;
};
static int ts2020_release(struct dvb_frontend *fe)
@@ -112,40 +123,77 @@
static int ts2020_sleep(struct dvb_frontend *fe)
{
struct ts2020_priv *priv = fe->tuner_priv;
- int ret;
- u8 buf[] = { 10, 0 };
- struct i2c_msg msg = {
- .addr = priv->i2c_address,
- .flags = 0,
- .buf = buf,
- .len = 2
- };
+ u8 u8tmp;
- if (fe->ops.i2c_gate_ctrl)
- fe->ops.i2c_gate_ctrl(fe, 1);
+ if (priv->tuner == TS2020_M88TS2020)
+ u8tmp = 0x0a; /* XXX: probably wrong */
+ else
+ u8tmp = 0x00;
- ret = i2c_transfer(priv->i2c, &msg, 1);
- if (ret != 1)
- printk(KERN_ERR "%s: i2c error\n", __func__);
-
- if (fe->ops.i2c_gate_ctrl)
- fe->ops.i2c_gate_ctrl(fe, 0);
-
- return (ret == 1) ? 0 : ret;
+ return ts2020_writereg(fe, u8tmp, 0x00);
}
static int ts2020_init(struct dvb_frontend *fe)
{
struct ts2020_priv *priv = fe->tuner_priv;
+ int i;
+ u8 u8tmp;
- ts2020_writereg(fe, 0x42, 0x73);
- ts2020_writereg(fe, 0x05, priv->clk_out_div);
- ts2020_writereg(fe, 0x20, 0x27);
- ts2020_writereg(fe, 0x07, 0x02);
- ts2020_writereg(fe, 0x11, 0xff);
- ts2020_writereg(fe, 0x60, 0xf9);
- ts2020_writereg(fe, 0x08, 0x01);
- ts2020_writereg(fe, 0x00, 0x41);
+ if (priv->tuner == TS2020_M88TS2020) {
+ ts2020_writereg(fe, 0x42, 0x73);
+ ts2020_writereg(fe, 0x05, priv->clk_out_div);
+ ts2020_writereg(fe, 0x20, 0x27);
+ ts2020_writereg(fe, 0x07, 0x02);
+ ts2020_writereg(fe, 0x11, 0xff);
+ ts2020_writereg(fe, 0x60, 0xf9);
+ ts2020_writereg(fe, 0x08, 0x01);
+ ts2020_writereg(fe, 0x00, 0x41);
+ } else {
+ static const struct ts2020_reg_val reg_vals[] = {
+ {0x7d, 0x9d},
+ {0x7c, 0x9a},
+ {0x7a, 0x76},
+ {0x3b, 0x01},
+ {0x63, 0x88},
+ {0x61, 0x85},
+ {0x22, 0x30},
+ {0x30, 0x40},
+ {0x20, 0x23},
+ {0x24, 0x02},
+ {0x12, 0xa0},
+ };
+
+ ts2020_writereg(fe, 0x00, 0x01);
+ ts2020_writereg(fe, 0x00, 0x03);
+
+ switch (priv->clk_out) {
+ case TS2020_CLK_OUT_DISABLED:
+ u8tmp = 0x60;
+ break;
+ case TS2020_CLK_OUT_ENABLED:
+ u8tmp = 0x70;
+ ts2020_writereg(fe, 0x05, priv->clk_out_div);
+ break;
+ case TS2020_CLK_OUT_ENABLED_XTALOUT:
+ u8tmp = 0x6c;
+ break;
+ default:
+ u8tmp = 0x60;
+ break;
+ }
+
+ ts2020_writereg(fe, 0x42, u8tmp);
+
+ if (priv->loop_through)
+ u8tmp = 0xec;
+ else
+ u8tmp = 0x6c;
+
+ ts2020_writereg(fe, 0x62, u8tmp);
+
+ for (i = 0; i < ARRAY_SIZE(reg_vals); i++)
+ ts2020_writereg(fe, reg_vals[i].reg, reg_vals[i].val);
+ }
return 0;
}
@@ -203,7 +251,14 @@
ndiv = ndiv + ndiv % 2;
ndiv = ndiv - 1024;
- ret = ts2020_writereg(fe, 0x10, 0x80 | lo);
+ if (priv->tuner == TS2020_M88TS2020) {
+ lpf_coeff = 2766;
+ ret = ts2020_writereg(fe, 0x10, 0x80 | lo);
+ } else {
+ lpf_coeff = 3200;
+ ret = ts2020_writereg(fe, 0x10, 0x0b);
+ ret |= ts2020_writereg(fe, 0x11, 0x40);
+ }
/* Set frequency divider */
ret |= ts2020_writereg(fe, 0x01, (ndiv >> 8) & 0xf);
@@ -220,7 +275,8 @@
ret |= ts2020_tuner_gate_ctrl(fe, 0x08);
/* Tuner RF */
- ret |= ts2020_set_tuner_rf(fe);
+ if (priv->tuner == TS2020_M88TS2020)
+ ret |= ts2020_set_tuner_rf(fe);
gdiv28 = (TS2020_XTAL_FREQ / 1000 * 1694 + 500) / 1000;
ret |= ts2020_writereg(fe, 0x04, gdiv28 & 0xff);
@@ -228,6 +284,15 @@
if (ret < 0)
return -ENODEV;
+ if (priv->tuner == TS2020_M88TS2022) {
+ ret = ts2020_writereg(fe, 0x25, 0x00);
+ ret |= ts2020_writereg(fe, 0x27, 0x70);
+ ret |= ts2020_writereg(fe, 0x41, 0x09);
+ ret |= ts2020_writereg(fe, 0x08, 0x0b);
+ if (ret < 0)
+ return -ENODEV;
+ }
+
value = ts2020_readreg(fe, 0x26);
f3db = (symbol_rate * 135) / 200 + 2000;
@@ -243,8 +308,6 @@
if (mlpf_max > 63)
mlpf_max = 63;
- lpf_coeff = 2766;
-
nlpf = (f3db * gdiv28 * 2 / lpf_coeff /
(TS2020_XTAL_FREQ / 1000) + 1) / 2;
if (nlpf > 23)
@@ -285,6 +348,13 @@
{
struct ts2020_priv *priv = fe->tuner_priv;
*frequency = priv->frequency;
+
+ return 0;
+}
+
+static int ts2020_get_if_frequency(struct dvb_frontend *fe, u32 *frequency)
+{
+ *frequency = 0; /* Zero-IF */
return 0;
}
@@ -324,6 +394,7 @@
.sleep = ts2020_sleep,
.set_params = ts2020_set_params,
.get_frequency = ts2020_get_frequency,
+ .get_if_frequency = ts2020_get_if_frequency,
.get_rf_strength = ts2020_read_signal_strength,
};
@@ -340,8 +411,10 @@
priv->i2c_address = config->tuner_address;
priv->i2c = i2c;
+ priv->clk_out = config->clk_out;
priv->clk_out_div = config->clk_out_div;
priv->frequency_div = config->frequency_div;
+ priv->fe = fe;
fe->tuner_priv = priv;
if (!priv->frequency_div)
@@ -358,9 +431,13 @@
/* Check the tuner version */
buf = ts2020_readreg(fe, 0x00);
- if ((buf == 0x01) || (buf == 0x41) || (buf == 0x81))
+ if ((buf == 0x01) || (buf == 0x41) || (buf == 0x81)) {
printk(KERN_INFO "%s: Find tuner TS2020!\n", __func__);
- else {
+ priv->tuner = TS2020_M88TS2020;
+ } else if ((buf == 0x83) || (buf == 0xc3)) {
+ printk(KERN_INFO "%s: Find tuner TS2022!\n", __func__);
+ priv->tuner = TS2020_M88TS2022;
+ } else {
printk(KERN_ERR "%s: Read tuner reg[0] = %d\n", __func__, buf);
kfree(priv);
return NULL;
@@ -373,6 +450,165 @@
}
EXPORT_SYMBOL(ts2020_attach);
+static int ts2020_probe(struct i2c_client *client,
+ const struct i2c_device_id *id)
+{
+ struct ts2020_config *pdata = client->dev.platform_data;
+ struct dvb_frontend *fe = pdata->fe;
+ struct ts2020_priv *dev;
+ int ret;
+ u8 u8tmp;
+ unsigned int utmp;
+ char *chip_str;
+
+ dev = kzalloc(sizeof(*dev), GFP_KERNEL);
+ if (!dev) {
+ ret = -ENOMEM;
+ goto err;
+ }
+
+ dev->i2c = client->adapter;
+ dev->i2c_address = client->addr;
+ dev->clk_out = pdata->clk_out;
+ dev->clk_out_div = pdata->clk_out_div;
+ dev->frequency_div = pdata->frequency_div;
+ dev->fe = fe;
+ fe->tuner_priv = dev;
+
+ /* check if the tuner is there */
+ ret = ts2020_readreg(fe, 0x00);
+ if (ret < 0)
+ goto err;
+ utmp = ret;
+
+ if ((utmp & 0x03) == 0x00) {
+ ret = ts2020_writereg(fe, 0x00, 0x01);
+ if (ret)
+ goto err;
+
+ usleep_range(2000, 50000);
+ }
+
+ ret = ts2020_writereg(fe, 0x00, 0x03);
+ if (ret)
+ goto err;
+
+ usleep_range(2000, 50000);
+
+ ret = ts2020_readreg(fe, 0x00);
+ if (ret < 0)
+ goto err;
+ utmp = ret;
+
+ dev_dbg(&client->dev, "chip_id=%02x\n", utmp);
+
+ switch (utmp) {
+ case 0x01:
+ case 0x41:
+ case 0x81:
+ dev->tuner = TS2020_M88TS2020;
+ chip_str = "TS2020";
+ if (!dev->frequency_div)
+ dev->frequency_div = 1060000;
+ break;
+ case 0xc3:
+ case 0x83:
+ dev->tuner = TS2020_M88TS2022;
+ chip_str = "TS2022";
+ if (!dev->frequency_div)
+ dev->frequency_div = 1103000;
+ break;
+ default:
+ ret = -ENODEV;
+ goto err;
+ }
+
+ if (dev->tuner == TS2020_M88TS2022) {
+ switch (dev->clk_out) {
+ case TS2020_CLK_OUT_DISABLED:
+ u8tmp = 0x60;
+ break;
+ case TS2020_CLK_OUT_ENABLED:
+ u8tmp = 0x70;
+ ret = ts2020_writereg(fe, 0x05, dev->clk_out_div);
+ if (ret)
+ goto err;
+ break;
+ case TS2020_CLK_OUT_ENABLED_XTALOUT:
+ u8tmp = 0x6c;
+ break;
+ default:
+ ret = -EINVAL;
+ goto err;
+ }
+
+ ret = ts2020_writereg(fe, 0x42, u8tmp);
+ if (ret)
+ goto err;
+
+ if (dev->loop_through)
+ u8tmp = 0xec;
+ else
+ u8tmp = 0x6c;
+
+ ret = ts2020_writereg(fe, 0x62, u8tmp);
+ if (ret)
+ goto err;
+ }
+
+ /* sleep */
+ ret = ts2020_writereg(fe, 0x00, 0x00);
+ if (ret)
+ goto err;
+
+ dev_info(&client->dev,
+ "Montage Technology %s successfully identified\n", chip_str);
+
+ memcpy(&fe->ops.tuner_ops, &ts2020_tuner_ops,
+ sizeof(struct dvb_tuner_ops));
+ fe->ops.tuner_ops.release = NULL;
+
+ i2c_set_clientdata(client, dev);
+ return 0;
+err:
+ dev_dbg(&client->dev, "failed=%d\n", ret);
+ kfree(dev);
+ return ret;
+}
+
+static int ts2020_remove(struct i2c_client *client)
+{
+ struct ts2020_priv *dev = i2c_get_clientdata(client);
+ struct dvb_frontend *fe = dev->fe;
+
+ dev_dbg(&client->dev, "\n");
+
+ memset(&fe->ops.tuner_ops, 0, sizeof(struct dvb_tuner_ops));
+ fe->tuner_priv = NULL;
+ kfree(dev);
+
+ return 0;
+}
+
+static const struct i2c_device_id ts2020_id_table[] = {
+ {"ts2020", 0},
+ {"ts2022", 0},
+ {}
+};
+MODULE_DEVICE_TABLE(i2c, ts2020_id_table);
+
+static struct i2c_driver ts2020_driver = {
+ .driver = {
+ .owner = THIS_MODULE,
+ .name = "ts2020",
+ },
+ .probe = ts2020_probe,
+ .remove = ts2020_remove,
+ .id_table = ts2020_id_table,
+};
+
+module_i2c_driver(ts2020_driver);
+
MODULE_AUTHOR("Konstantin Dimitrov <kosio.dimitrov@gmail.com>");
MODULE_DESCRIPTION("Montage Technology TS2020 - Silicon tuner driver module");
MODULE_LICENSE("GPL");
diff --git a/drivers/media/dvb-frontends/ts2020.h b/drivers/media/dvb-frontends/ts2020.h
index b2fe6bb..1714af9 100644
--- a/drivers/media/dvb-frontends/ts2020.h
+++ b/drivers/media/dvb-frontends/ts2020.h
@@ -27,11 +27,34 @@
struct ts2020_config {
u8 tuner_address;
- u8 clk_out_div;
u32 frequency_div;
+
+ /*
+ * RF loop-through
+ */
+ u8 loop_through:1;
+
+ /*
+ * clock output
+ */
+#define TS2020_CLK_OUT_DISABLED 0
+#define TS2020_CLK_OUT_ENABLED 1
+#define TS2020_CLK_OUT_ENABLED_XTALOUT 2
+ u8 clk_out:2;
+
+ /*
+ * clock output divider
+ * 1 - 31
+ */
+ u8 clk_out_div:5;
+
+ /*
+ * pointer to DVB frontend
+ */
+ struct dvb_frontend *fe;
};
-#if IS_ENABLED(CONFIG_DVB_TS2020)
+#if IS_REACHABLE(CONFIG_DVB_TS2020)
extern struct dvb_frontend *ts2020_attach(
struct dvb_frontend *fe,
diff --git a/drivers/media/dvb-frontends/tua6100.h b/drivers/media/dvb-frontends/tua6100.h
index 83a9c30..52919e0 100644
--- a/drivers/media/dvb-frontends/tua6100.h
+++ b/drivers/media/dvb-frontends/tua6100.h
@@ -34,7 +34,7 @@
#include <linux/i2c.h>
#include "dvb_frontend.h"
-#if IS_ENABLED(CONFIG_DVB_TUA6100)
+#if IS_REACHABLE(CONFIG_DVB_TUA6100)
extern struct dvb_frontend *tua6100_attach(struct dvb_frontend *fe, int addr, struct i2c_adapter *i2c);
#else
static inline struct dvb_frontend* tua6100_attach(struct dvb_frontend *fe, int addr, struct i2c_adapter *i2c)
diff --git a/drivers/media/dvb-frontends/ves1820.h b/drivers/media/dvb-frontends/ves1820.h
index c073f35..ece46fd 100644
--- a/drivers/media/dvb-frontends/ves1820.h
+++ b/drivers/media/dvb-frontends/ves1820.h
@@ -41,7 +41,7 @@
u8 selagc:1;
};
-#if IS_ENABLED(CONFIG_DVB_VES1820)
+#if IS_REACHABLE(CONFIG_DVB_VES1820)
extern struct dvb_frontend* ves1820_attach(const struct ves1820_config* config,
struct i2c_adapter* i2c, u8 pwm);
#else
diff --git a/drivers/media/dvb-frontends/ves1x93.h b/drivers/media/dvb-frontends/ves1x93.h
index 2307cae..4510fe2 100644
--- a/drivers/media/dvb-frontends/ves1x93.h
+++ b/drivers/media/dvb-frontends/ves1x93.h
@@ -40,7 +40,7 @@
u8 invert_pwm:1;
};
-#if IS_ENABLED(CONFIG_DVB_VES1X93)
+#if IS_REACHABLE(CONFIG_DVB_VES1X93)
extern struct dvb_frontend* ves1x93_attach(const struct ves1x93_config* config,
struct i2c_adapter* i2c);
#else
diff --git a/drivers/media/dvb-frontends/zl10036.h b/drivers/media/dvb-frontends/zl10036.h
index 5f1e821..670e76a 100644
--- a/drivers/media/dvb-frontends/zl10036.h
+++ b/drivers/media/dvb-frontends/zl10036.h
@@ -38,7 +38,7 @@
int rf_loop_enable;
};
-#if IS_ENABLED(CONFIG_DVB_ZL10036)
+#if IS_REACHABLE(CONFIG_DVB_ZL10036)
extern struct dvb_frontend *zl10036_attach(struct dvb_frontend *fe,
const struct zl10036_config *config, struct i2c_adapter *i2c);
#else
diff --git a/drivers/media/dvb-frontends/zl10039.h b/drivers/media/dvb-frontends/zl10039.h
index 750b9bc..0709294 100644
--- a/drivers/media/dvb-frontends/zl10039.h
+++ b/drivers/media/dvb-frontends/zl10039.h
@@ -24,7 +24,7 @@
#include <linux/kconfig.h>
-#if IS_ENABLED(CONFIG_DVB_ZL10039)
+#if IS_REACHABLE(CONFIG_DVB_ZL10039)
struct dvb_frontend *zl10039_attach(struct dvb_frontend *fe,
u8 i2c_addr,
struct i2c_adapter *i2c);
diff --git a/drivers/media/dvb-frontends/zl10353.h b/drivers/media/dvb-frontends/zl10353.h
index 50c1004..37aa6e8 100644
--- a/drivers/media/dvb-frontends/zl10353.h
+++ b/drivers/media/dvb-frontends/zl10353.h
@@ -47,7 +47,7 @@
u8 pll_0; /* default: 0x15 */
};
-#if IS_ENABLED(CONFIG_DVB_ZL10353)
+#if IS_REACHABLE(CONFIG_DVB_ZL10353)
extern struct dvb_frontend* zl10353_attach(const struct zl10353_config *config,
struct i2c_adapter *i2c);
#else
diff --git a/drivers/media/i2c/Kconfig b/drivers/media/i2c/Kconfig
index da58c9b..6f30ea7 100644
--- a/drivers/media/i2c/Kconfig
+++ b/drivers/media/i2c/Kconfig
@@ -466,6 +466,17 @@
config VIDEO_SMIAPP_PLL
tristate
+config VIDEO_OV2659
+ tristate "OmniVision OV2659 sensor support"
+ depends on VIDEO_V4L2 && I2C
+ depends on MEDIA_CAMERA_SUPPORT
+ ---help---
+ This is a Video4Linux2 sensor-level driver for the OmniVision
+ OV2659 camera.
+
+ To compile this driver as a module, choose M here: the
+ module will be called ov2659.
+
config VIDEO_OV7640
tristate "OmniVision OV7640 sensor support"
depends on I2C && VIDEO_V4L2
diff --git a/drivers/media/i2c/Makefile b/drivers/media/i2c/Makefile
index 98589001..f165fae 100644
--- a/drivers/media/i2c/Makefile
+++ b/drivers/media/i2c/Makefile
@@ -77,3 +77,4 @@
obj-$(CONFIG_VIDEO_AK881X) += ak881x.o
obj-$(CONFIG_VIDEO_IR_I2C) += ir-kbd-i2c.o
obj-$(CONFIG_VIDEO_ML86V7667) += ml86v7667.o
+obj-$(CONFIG_VIDEO_OV2659) += ov2659.o
diff --git a/drivers/media/i2c/ad9389b.c b/drivers/media/i2c/ad9389b.c
index fada175..69094ab 100644
--- a/drivers/media/i2c/ad9389b.c
+++ b/drivers/media/i2c/ad9389b.c
@@ -239,8 +239,8 @@
{
struct ad9389b_state *state = get_ad9389b_state(sd);
- if (state->dv_timings.bt.standards & V4L2_DV_BT_STD_CEA861) {
- /* CEA format, not IT */
+ if (state->dv_timings.bt.flags & V4L2_DV_FL_IS_CE_VIDEO) {
+ /* CE format, not IT */
ad9389b_wr_and_or(sd, 0xcd, 0xbf, 0x00);
} else {
/* IT format */
@@ -255,11 +255,11 @@
switch (ctrl->val) {
case V4L2_DV_RGB_RANGE_AUTO:
/* automatic */
- if (state->dv_timings.bt.standards & V4L2_DV_BT_STD_CEA861) {
- /* cea format, RGB limited range (16-235) */
+ if (state->dv_timings.bt.flags & V4L2_DV_FL_IS_CE_VIDEO) {
+ /* CE format, RGB limited range (16-235) */
ad9389b_csc_rgb_full2limit(sd, true);
} else {
- /* not cea format, RGB full range (0-255) */
+ /* not CE format, RGB full range (0-255) */
ad9389b_csc_rgb_full2limit(sd, false);
}
break;
diff --git a/drivers/media/i2c/adv7180.c b/drivers/media/i2c/adv7180.c
index b75878c..a493c0b 100644
--- a/drivers/media/i2c/adv7180.c
+++ b/drivers/media/i2c/adv7180.c
@@ -582,7 +582,7 @@
}
static int adv7180_enum_mbus_code(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_mbus_code_enum *code)
{
if (code->index != 0)
@@ -645,13 +645,13 @@
}
static int adv7180_get_pad_format(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *format)
{
struct adv7180_state *state = to_state(sd);
if (format->which == V4L2_SUBDEV_FORMAT_TRY) {
- format->format = *v4l2_subdev_get_try_format(fh, 0);
+ format->format = *v4l2_subdev_get_try_format(sd, cfg, 0);
} else {
adv7180_mbus_fmt(sd, &format->format);
format->format.field = state->field;
@@ -661,7 +661,7 @@
}
static int adv7180_set_pad_format(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *format)
{
struct adv7180_state *state = to_state(sd);
@@ -686,7 +686,7 @@
adv7180_set_power(state, true);
}
} else {
- framefmt = v4l2_subdev_get_try_format(fh, 0);
+ framefmt = v4l2_subdev_get_try_format(sd, cfg, 0);
*framefmt = format->format;
}
diff --git a/drivers/media/i2c/adv7343.c b/drivers/media/i2c/adv7343.c
index 9d38f7b..7c50833 100644
--- a/drivers/media/i2c/adv7343.c
+++ b/drivers/media/i2c/adv7343.c
@@ -506,7 +506,6 @@
struct adv7343_state *state = to_state(sd);
v4l2_async_unregister_subdev(&state->sd);
- v4l2_device_unregister_subdev(sd);
v4l2_ctrl_handler_free(&state->hdl);
return 0;
diff --git a/drivers/media/i2c/adv7511.c b/drivers/media/i2c/adv7511.c
index 81736aa..12d9320 100644
--- a/drivers/media/i2c/adv7511.c
+++ b/drivers/media/i2c/adv7511.c
@@ -312,8 +312,8 @@
static void adv7511_set_IT_content_AVI_InfoFrame(struct v4l2_subdev *sd)
{
struct adv7511_state *state = get_adv7511_state(sd);
- if (state->dv_timings.bt.standards & V4L2_DV_BT_STD_CEA861) {
- /* CEA format, not IT */
+ if (state->dv_timings.bt.flags & V4L2_DV_FL_IS_CE_VIDEO) {
+ /* CE format, not IT */
adv7511_wr_and_or(sd, 0x57, 0x7f, 0x00);
} else {
/* IT format */
@@ -331,11 +331,11 @@
/* automatic */
struct adv7511_state *state = get_adv7511_state(sd);
- if (state->dv_timings.bt.standards & V4L2_DV_BT_STD_CEA861) {
- /* cea format, RGB limited range (16-235) */
+ if (state->dv_timings.bt.flags & V4L2_DV_FL_IS_CE_VIDEO) {
+ /* CE format, RGB limited range (16-235) */
adv7511_csc_rgb_full2limit(sd, true);
} else {
- /* not cea format, RGB full range (0-255) */
+ /* not CE format, RGB full range (0-255) */
adv7511_csc_rgb_full2limit(sd, false);
}
}
@@ -810,7 +810,7 @@
}
static int adv7511_enum_mbus_code(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_mbus_code_enum *code)
{
if (code->pad != 0)
@@ -842,8 +842,9 @@
format->field = V4L2_FIELD_NONE;
}
-static int adv7511_get_fmt(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
- struct v4l2_subdev_format *format)
+static int adv7511_get_fmt(struct v4l2_subdev *sd,
+ struct v4l2_subdev_pad_config *cfg,
+ struct v4l2_subdev_format *format)
{
struct adv7511_state *state = get_adv7511_state(sd);
@@ -855,7 +856,7 @@
if (format->which == V4L2_SUBDEV_FORMAT_TRY) {
struct v4l2_mbus_framefmt *fmt;
- fmt = v4l2_subdev_get_try_format(fh, format->pad);
+ fmt = v4l2_subdev_get_try_format(sd, cfg, format->pad);
format->format.code = fmt->code;
format->format.colorspace = fmt->colorspace;
format->format.ycbcr_enc = fmt->ycbcr_enc;
@@ -870,8 +871,9 @@
return 0;
}
-static int adv7511_set_fmt(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
- struct v4l2_subdev_format *format)
+static int adv7511_set_fmt(struct v4l2_subdev *sd,
+ struct v4l2_subdev_pad_config *cfg,
+ struct v4l2_subdev_format *format)
{
struct adv7511_state *state = get_adv7511_state(sd);
/*
@@ -905,7 +907,7 @@
if (format->which == V4L2_SUBDEV_FORMAT_TRY) {
struct v4l2_mbus_framefmt *fmt;
- fmt = v4l2_subdev_get_try_format(fh, format->pad);
+ fmt = v4l2_subdev_get_try_format(sd, cfg, format->pad);
fmt->code = format->format.code;
fmt->colorspace = format->format.colorspace;
fmt->ycbcr_enc = format->format.ycbcr_enc;
diff --git a/drivers/media/i2c/adv7604.c b/drivers/media/i2c/adv7604.c
index d228b7c..60ffcf0 100644
--- a/drivers/media/i2c/adv7604.c
+++ b/drivers/media/i2c/adv7604.c
@@ -53,41 +53,41 @@
MODULE_LICENSE("GPL");
/* ADV7604 system clock frequency */
-#define ADV7604_fsc (28636360)
+#define ADV76XX_FSC (28636360)
-#define ADV7604_RGB_OUT (1 << 1)
+#define ADV76XX_RGB_OUT (1 << 1)
-#define ADV7604_OP_FORMAT_SEL_8BIT (0 << 0)
+#define ADV76XX_OP_FORMAT_SEL_8BIT (0 << 0)
#define ADV7604_OP_FORMAT_SEL_10BIT (1 << 0)
-#define ADV7604_OP_FORMAT_SEL_12BIT (2 << 0)
+#define ADV76XX_OP_FORMAT_SEL_12BIT (2 << 0)
-#define ADV7604_OP_MODE_SEL_SDR_422 (0 << 5)
+#define ADV76XX_OP_MODE_SEL_SDR_422 (0 << 5)
#define ADV7604_OP_MODE_SEL_DDR_422 (1 << 5)
-#define ADV7604_OP_MODE_SEL_SDR_444 (2 << 5)
+#define ADV76XX_OP_MODE_SEL_SDR_444 (2 << 5)
#define ADV7604_OP_MODE_SEL_DDR_444 (3 << 5)
-#define ADV7604_OP_MODE_SEL_SDR_422_2X (4 << 5)
+#define ADV76XX_OP_MODE_SEL_SDR_422_2X (4 << 5)
#define ADV7604_OP_MODE_SEL_ADI_CM (5 << 5)
-#define ADV7604_OP_CH_SEL_GBR (0 << 5)
-#define ADV7604_OP_CH_SEL_GRB (1 << 5)
-#define ADV7604_OP_CH_SEL_BGR (2 << 5)
-#define ADV7604_OP_CH_SEL_RGB (3 << 5)
-#define ADV7604_OP_CH_SEL_BRG (4 << 5)
-#define ADV7604_OP_CH_SEL_RBG (5 << 5)
+#define ADV76XX_OP_CH_SEL_GBR (0 << 5)
+#define ADV76XX_OP_CH_SEL_GRB (1 << 5)
+#define ADV76XX_OP_CH_SEL_BGR (2 << 5)
+#define ADV76XX_OP_CH_SEL_RGB (3 << 5)
+#define ADV76XX_OP_CH_SEL_BRG (4 << 5)
+#define ADV76XX_OP_CH_SEL_RBG (5 << 5)
-#define ADV7604_OP_SWAP_CB_CR (1 << 0)
+#define ADV76XX_OP_SWAP_CB_CR (1 << 0)
-enum adv7604_type {
+enum adv76xx_type {
ADV7604,
ADV7611,
};
-struct adv7604_reg_seq {
+struct adv76xx_reg_seq {
unsigned int reg;
u8 val;
};
-struct adv7604_format_info {
+struct adv76xx_format_info {
u32 code;
u8 op_ch_sel;
bool rgb_out;
@@ -95,8 +95,8 @@
u8 op_format_sel;
};
-struct adv7604_chip_info {
- enum adv7604_type type;
+struct adv76xx_chip_info {
+ enum adv76xx_type type;
bool has_afe;
unsigned int max_port;
@@ -109,8 +109,9 @@
unsigned int cable_det_mask;
unsigned int tdms_lock_mask;
unsigned int fmt_change_digital_mask;
+ unsigned int cp_csc;
- const struct adv7604_format_info *formats;
+ const struct adv76xx_format_info *formats;
unsigned int nformats;
void (*set_termination)(struct v4l2_subdev *sd, bool enable);
@@ -119,7 +120,7 @@
unsigned int (*read_cable_det)(struct v4l2_subdev *sd);
/* 0 = AFE, 1 = HDMI */
- const struct adv7604_reg_seq *recommended_settings[2];
+ const struct adv76xx_reg_seq *recommended_settings[2];
unsigned int num_recommended_settings[2];
unsigned long page_mask;
@@ -133,22 +134,22 @@
**********************************************************************
*/
-struct adv7604_state {
- const struct adv7604_chip_info *info;
- struct adv7604_platform_data pdata;
+struct adv76xx_state {
+ const struct adv76xx_chip_info *info;
+ struct adv76xx_platform_data pdata;
struct gpio_desc *hpd_gpio[4];
struct v4l2_subdev sd;
- struct media_pad pads[ADV7604_PAD_MAX];
+ struct media_pad pads[ADV76XX_PAD_MAX];
unsigned int source_pad;
struct v4l2_ctrl_handler hdl;
- enum adv7604_pad selected_input;
+ enum adv76xx_pad selected_input;
struct v4l2_dv_timings timings;
- const struct adv7604_format_info *format;
+ const struct adv76xx_format_info *format;
struct {
u8 edid[256];
@@ -163,7 +164,7 @@
bool restart_stdi_once;
/* i2c clients */
- struct i2c_client *i2c_clients[ADV7604_PAGE_MAX];
+ struct i2c_client *i2c_clients[ADV76XX_PAGE_MAX];
/* controls */
struct v4l2_ctrl *detect_tx_5v_ctrl;
@@ -173,13 +174,13 @@
struct v4l2_ctrl *rgb_quantization_range_ctrl;
};
-static bool adv7604_has_afe(struct adv7604_state *state)
+static bool adv76xx_has_afe(struct adv76xx_state *state)
{
return state->info->has_afe;
}
/* Supported CEA and DMT timings */
-static const struct v4l2_dv_timings adv7604_timings[] = {
+static const struct v4l2_dv_timings adv76xx_timings[] = {
V4L2_DV_BT_CEA_720X480P59_94,
V4L2_DV_BT_CEA_720X576P50,
V4L2_DV_BT_CEA_1280X720P24,
@@ -243,14 +244,14 @@
{ },
};
-struct adv7604_video_standards {
+struct adv76xx_video_standards {
struct v4l2_dv_timings timings;
u8 vid_std;
u8 v_freq;
};
/* sorted by number of lines */
-static const struct adv7604_video_standards adv7604_prim_mode_comp[] = {
+static const struct adv76xx_video_standards adv7604_prim_mode_comp[] = {
/* { V4L2_DV_BT_CEA_720X480P59_94, 0x0a, 0x00 }, TODO flickering */
{ V4L2_DV_BT_CEA_720X576P50, 0x0b, 0x00 },
{ V4L2_DV_BT_CEA_1280X720P50, 0x19, 0x01 },
@@ -265,7 +266,7 @@
};
/* sorted by number of lines */
-static const struct adv7604_video_standards adv7604_prim_mode_gr[] = {
+static const struct adv76xx_video_standards adv7604_prim_mode_gr[] = {
{ V4L2_DV_BT_DMT_640X480P60, 0x08, 0x00 },
{ V4L2_DV_BT_DMT_640X480P72, 0x09, 0x00 },
{ V4L2_DV_BT_DMT_640X480P75, 0x0a, 0x00 },
@@ -293,7 +294,7 @@
};
/* sorted by number of lines */
-static const struct adv7604_video_standards adv7604_prim_mode_hdmi_comp[] = {
+static const struct adv76xx_video_standards adv76xx_prim_mode_hdmi_comp[] = {
{ V4L2_DV_BT_CEA_720X480P59_94, 0x0a, 0x00 },
{ V4L2_DV_BT_CEA_720X576P50, 0x0b, 0x00 },
{ V4L2_DV_BT_CEA_1280X720P50, 0x13, 0x01 },
@@ -307,7 +308,7 @@
};
/* sorted by number of lines */
-static const struct adv7604_video_standards adv7604_prim_mode_hdmi_gr[] = {
+static const struct adv76xx_video_standards adv76xx_prim_mode_hdmi_gr[] = {
{ V4L2_DV_BT_DMT_640X480P60, 0x08, 0x00 },
{ V4L2_DV_BT_DMT_640X480P72, 0x09, 0x00 },
{ V4L2_DV_BT_DMT_640X480P75, 0x0a, 0x00 },
@@ -328,9 +329,9 @@
/* ----------------------------------------------------------------------- */
-static inline struct adv7604_state *to_state(struct v4l2_subdev *sd)
+static inline struct adv76xx_state *to_state(struct v4l2_subdev *sd)
{
- return container_of(sd, struct adv7604_state, sd);
+ return container_of(sd, struct adv76xx_state, sd);
}
static inline unsigned htotal(const struct v4l2_bt_timings *t)
@@ -360,15 +361,15 @@
return -EIO;
}
-static s32 adv_smbus_read_byte_data(struct adv7604_state *state,
- enum adv7604_page page, u8 command)
+static s32 adv_smbus_read_byte_data(struct adv76xx_state *state,
+ enum adv76xx_page page, u8 command)
{
return adv_smbus_read_byte_data_check(state->i2c_clients[page],
command, true);
}
-static s32 adv_smbus_write_byte_data(struct adv7604_state *state,
- enum adv7604_page page, u8 command,
+static s32 adv_smbus_write_byte_data(struct adv76xx_state *state,
+ enum adv76xx_page page, u8 command,
u8 value)
{
struct i2c_client *client = state->i2c_clients[page];
@@ -391,8 +392,8 @@
return err;
}
-static s32 adv_smbus_write_i2c_block_data(struct adv7604_state *state,
- enum adv7604_page page, u8 command,
+static s32 adv_smbus_write_i2c_block_data(struct adv76xx_state *state,
+ enum adv76xx_page page, u8 command,
unsigned length, const u8 *values)
{
struct i2c_client *client = state->i2c_clients[page];
@@ -411,16 +412,16 @@
static inline int io_read(struct v4l2_subdev *sd, u8 reg)
{
- struct adv7604_state *state = to_state(sd);
+ struct adv76xx_state *state = to_state(sd);
- return adv_smbus_read_byte_data(state, ADV7604_PAGE_IO, reg);
+ return adv_smbus_read_byte_data(state, ADV76XX_PAGE_IO, reg);
}
static inline int io_write(struct v4l2_subdev *sd, u8 reg, u8 val)
{
- struct adv7604_state *state = to_state(sd);
+ struct adv76xx_state *state = to_state(sd);
- return adv_smbus_write_byte_data(state, ADV7604_PAGE_IO, reg, val);
+ return adv_smbus_write_byte_data(state, ADV76XX_PAGE_IO, reg, val);
}
static inline int io_write_clr_set(struct v4l2_subdev *sd, u8 reg, u8 mask, u8 val)
@@ -430,73 +431,73 @@
static inline int avlink_read(struct v4l2_subdev *sd, u8 reg)
{
- struct adv7604_state *state = to_state(sd);
+ struct adv76xx_state *state = to_state(sd);
return adv_smbus_read_byte_data(state, ADV7604_PAGE_AVLINK, reg);
}
static inline int avlink_write(struct v4l2_subdev *sd, u8 reg, u8 val)
{
- struct adv7604_state *state = to_state(sd);
+ struct adv76xx_state *state = to_state(sd);
return adv_smbus_write_byte_data(state, ADV7604_PAGE_AVLINK, reg, val);
}
static inline int cec_read(struct v4l2_subdev *sd, u8 reg)
{
- struct adv7604_state *state = to_state(sd);
+ struct adv76xx_state *state = to_state(sd);
- return adv_smbus_read_byte_data(state, ADV7604_PAGE_CEC, reg);
+ return adv_smbus_read_byte_data(state, ADV76XX_PAGE_CEC, reg);
}
static inline int cec_write(struct v4l2_subdev *sd, u8 reg, u8 val)
{
- struct adv7604_state *state = to_state(sd);
+ struct adv76xx_state *state = to_state(sd);
- return adv_smbus_write_byte_data(state, ADV7604_PAGE_CEC, reg, val);
+ return adv_smbus_write_byte_data(state, ADV76XX_PAGE_CEC, reg, val);
}
static inline int infoframe_read(struct v4l2_subdev *sd, u8 reg)
{
- struct adv7604_state *state = to_state(sd);
+ struct adv76xx_state *state = to_state(sd);
- return adv_smbus_read_byte_data(state, ADV7604_PAGE_INFOFRAME, reg);
+ return adv_smbus_read_byte_data(state, ADV76XX_PAGE_INFOFRAME, reg);
}
static inline int infoframe_write(struct v4l2_subdev *sd, u8 reg, u8 val)
{
- struct adv7604_state *state = to_state(sd);
+ struct adv76xx_state *state = to_state(sd);
- return adv_smbus_write_byte_data(state, ADV7604_PAGE_INFOFRAME,
+ return adv_smbus_write_byte_data(state, ADV76XX_PAGE_INFOFRAME,
reg, val);
}
static inline int afe_read(struct v4l2_subdev *sd, u8 reg)
{
- struct adv7604_state *state = to_state(sd);
+ struct adv76xx_state *state = to_state(sd);
- return adv_smbus_read_byte_data(state, ADV7604_PAGE_AFE, reg);
+ return adv_smbus_read_byte_data(state, ADV76XX_PAGE_AFE, reg);
}
static inline int afe_write(struct v4l2_subdev *sd, u8 reg, u8 val)
{
- struct adv7604_state *state = to_state(sd);
+ struct adv76xx_state *state = to_state(sd);
- return adv_smbus_write_byte_data(state, ADV7604_PAGE_AFE, reg, val);
+ return adv_smbus_write_byte_data(state, ADV76XX_PAGE_AFE, reg, val);
}
static inline int rep_read(struct v4l2_subdev *sd, u8 reg)
{
- struct adv7604_state *state = to_state(sd);
+ struct adv76xx_state *state = to_state(sd);
- return adv_smbus_read_byte_data(state, ADV7604_PAGE_REP, reg);
+ return adv_smbus_read_byte_data(state, ADV76XX_PAGE_REP, reg);
}
static inline int rep_write(struct v4l2_subdev *sd, u8 reg, u8 val)
{
- struct adv7604_state *state = to_state(sd);
+ struct adv76xx_state *state = to_state(sd);
- return adv_smbus_write_byte_data(state, ADV7604_PAGE_REP, reg, val);
+ return adv_smbus_write_byte_data(state, ADV76XX_PAGE_REP, reg, val);
}
static inline int rep_write_clr_set(struct v4l2_subdev *sd, u8 reg, u8 mask, u8 val)
@@ -506,64 +507,60 @@
static inline int edid_read(struct v4l2_subdev *sd, u8 reg)
{
- struct adv7604_state *state = to_state(sd);
+ struct adv76xx_state *state = to_state(sd);
- return adv_smbus_read_byte_data(state, ADV7604_PAGE_EDID, reg);
+ return adv_smbus_read_byte_data(state, ADV76XX_PAGE_EDID, reg);
}
static inline int edid_write(struct v4l2_subdev *sd, u8 reg, u8 val)
{
- struct adv7604_state *state = to_state(sd);
+ struct adv76xx_state *state = to_state(sd);
- return adv_smbus_write_byte_data(state, ADV7604_PAGE_EDID, reg, val);
+ return adv_smbus_write_byte_data(state, ADV76XX_PAGE_EDID, reg, val);
}
static inline int edid_write_block(struct v4l2_subdev *sd,
unsigned len, const u8 *val)
{
- struct adv7604_state *state = to_state(sd);
+ struct adv76xx_state *state = to_state(sd);
int err = 0;
int i;
v4l2_dbg(2, debug, sd, "%s: write EDID block (%d byte)\n", __func__, len);
for (i = 0; !err && i < len; i += I2C_SMBUS_BLOCK_MAX)
- err = adv_smbus_write_i2c_block_data(state, ADV7604_PAGE_EDID,
+ err = adv_smbus_write_i2c_block_data(state, ADV76XX_PAGE_EDID,
i, I2C_SMBUS_BLOCK_MAX, val + i);
return err;
}
-static void adv7604_set_hpd(struct adv7604_state *state, unsigned int hpd)
+static void adv76xx_set_hpd(struct adv76xx_state *state, unsigned int hpd)
{
unsigned int i;
- for (i = 0; i < state->info->num_dv_ports; ++i) {
- if (IS_ERR(state->hpd_gpio[i]))
- continue;
-
+ for (i = 0; i < state->info->num_dv_ports; ++i)
gpiod_set_value_cansleep(state->hpd_gpio[i], hpd & BIT(i));
- }
- v4l2_subdev_notify(&state->sd, ADV7604_HOTPLUG, &hpd);
+ v4l2_subdev_notify(&state->sd, ADV76XX_HOTPLUG, &hpd);
}
-static void adv7604_delayed_work_enable_hotplug(struct work_struct *work)
+static void adv76xx_delayed_work_enable_hotplug(struct work_struct *work)
{
struct delayed_work *dwork = to_delayed_work(work);
- struct adv7604_state *state = container_of(dwork, struct adv7604_state,
+ struct adv76xx_state *state = container_of(dwork, struct adv76xx_state,
delayed_work_enable_hotplug);
struct v4l2_subdev *sd = &state->sd;
v4l2_dbg(2, debug, sd, "%s: enable hotplug\n", __func__);
- adv7604_set_hpd(state, state->edid.present);
+ adv76xx_set_hpd(state, state->edid.present);
}
static inline int hdmi_read(struct v4l2_subdev *sd, u8 reg)
{
- struct adv7604_state *state = to_state(sd);
+ struct adv76xx_state *state = to_state(sd);
- return adv_smbus_read_byte_data(state, ADV7604_PAGE_HDMI, reg);
+ return adv_smbus_read_byte_data(state, ADV76XX_PAGE_HDMI, reg);
}
static u16 hdmi_read16(struct v4l2_subdev *sd, u8 reg, u16 mask)
@@ -573,9 +570,9 @@
static inline int hdmi_write(struct v4l2_subdev *sd, u8 reg, u8 val)
{
- struct adv7604_state *state = to_state(sd);
+ struct adv76xx_state *state = to_state(sd);
- return adv_smbus_write_byte_data(state, ADV7604_PAGE_HDMI, reg, val);
+ return adv_smbus_write_byte_data(state, ADV76XX_PAGE_HDMI, reg, val);
}
static inline int hdmi_write_clr_set(struct v4l2_subdev *sd, u8 reg, u8 mask, u8 val)
@@ -585,16 +582,16 @@
static inline int test_write(struct v4l2_subdev *sd, u8 reg, u8 val)
{
- struct adv7604_state *state = to_state(sd);
+ struct adv76xx_state *state = to_state(sd);
- return adv_smbus_write_byte_data(state, ADV7604_PAGE_TEST, reg, val);
+ return adv_smbus_write_byte_data(state, ADV76XX_PAGE_TEST, reg, val);
}
static inline int cp_read(struct v4l2_subdev *sd, u8 reg)
{
- struct adv7604_state *state = to_state(sd);
+ struct adv76xx_state *state = to_state(sd);
- return adv_smbus_read_byte_data(state, ADV7604_PAGE_CP, reg);
+ return adv_smbus_read_byte_data(state, ADV76XX_PAGE_CP, reg);
}
static u16 cp_read16(struct v4l2_subdev *sd, u8 reg, u16 mask)
@@ -604,9 +601,9 @@
static inline int cp_write(struct v4l2_subdev *sd, u8 reg, u8 val)
{
- struct adv7604_state *state = to_state(sd);
+ struct adv76xx_state *state = to_state(sd);
- return adv_smbus_write_byte_data(state, ADV7604_PAGE_CP, reg, val);
+ return adv_smbus_write_byte_data(state, ADV76XX_PAGE_CP, reg, val);
}
static inline int cp_write_clr_set(struct v4l2_subdev *sd, u8 reg, u8 mask, u8 val)
@@ -616,25 +613,25 @@
static inline int vdp_read(struct v4l2_subdev *sd, u8 reg)
{
- struct adv7604_state *state = to_state(sd);
+ struct adv76xx_state *state = to_state(sd);
return adv_smbus_read_byte_data(state, ADV7604_PAGE_VDP, reg);
}
static inline int vdp_write(struct v4l2_subdev *sd, u8 reg, u8 val)
{
- struct adv7604_state *state = to_state(sd);
+ struct adv76xx_state *state = to_state(sd);
return adv_smbus_write_byte_data(state, ADV7604_PAGE_VDP, reg, val);
}
-#define ADV7604_REG(page, offset) (((page) << 8) | (offset))
-#define ADV7604_REG_SEQ_TERM 0xffff
+#define ADV76XX_REG(page, offset) (((page) << 8) | (offset))
+#define ADV76XX_REG_SEQ_TERM 0xffff
#ifdef CONFIG_VIDEO_ADV_DEBUG
-static int adv7604_read_reg(struct v4l2_subdev *sd, unsigned int reg)
+static int adv76xx_read_reg(struct v4l2_subdev *sd, unsigned int reg)
{
- struct adv7604_state *state = to_state(sd);
+ struct adv76xx_state *state = to_state(sd);
unsigned int page = reg >> 8;
if (!(BIT(page) & state->info->page_mask))
@@ -646,9 +643,9 @@
}
#endif
-static int adv7604_write_reg(struct v4l2_subdev *sd, unsigned int reg, u8 val)
+static int adv76xx_write_reg(struct v4l2_subdev *sd, unsigned int reg, u8 val)
{
- struct adv7604_state *state = to_state(sd);
+ struct adv76xx_state *state = to_state(sd);
unsigned int page = reg >> 8;
if (!(BIT(page) & state->info->page_mask))
@@ -659,91 +656,91 @@
return adv_smbus_write_byte_data(state, page, reg, val);
}
-static void adv7604_write_reg_seq(struct v4l2_subdev *sd,
- const struct adv7604_reg_seq *reg_seq)
+static void adv76xx_write_reg_seq(struct v4l2_subdev *sd,
+ const struct adv76xx_reg_seq *reg_seq)
{
unsigned int i;
- for (i = 0; reg_seq[i].reg != ADV7604_REG_SEQ_TERM; i++)
- adv7604_write_reg(sd, reg_seq[i].reg, reg_seq[i].val);
+ for (i = 0; reg_seq[i].reg != ADV76XX_REG_SEQ_TERM; i++)
+ adv76xx_write_reg(sd, reg_seq[i].reg, reg_seq[i].val);
}
/* -----------------------------------------------------------------------------
* Format helpers
*/
-static const struct adv7604_format_info adv7604_formats[] = {
- { MEDIA_BUS_FMT_RGB888_1X24, ADV7604_OP_CH_SEL_RGB, true, false,
- ADV7604_OP_MODE_SEL_SDR_444 | ADV7604_OP_FORMAT_SEL_8BIT },
- { MEDIA_BUS_FMT_YUYV8_2X8, ADV7604_OP_CH_SEL_RGB, false, false,
- ADV7604_OP_MODE_SEL_SDR_422 | ADV7604_OP_FORMAT_SEL_8BIT },
- { MEDIA_BUS_FMT_YVYU8_2X8, ADV7604_OP_CH_SEL_RGB, false, true,
- ADV7604_OP_MODE_SEL_SDR_422 | ADV7604_OP_FORMAT_SEL_8BIT },
- { MEDIA_BUS_FMT_YUYV10_2X10, ADV7604_OP_CH_SEL_RGB, false, false,
- ADV7604_OP_MODE_SEL_SDR_422 | ADV7604_OP_FORMAT_SEL_10BIT },
- { MEDIA_BUS_FMT_YVYU10_2X10, ADV7604_OP_CH_SEL_RGB, false, true,
- ADV7604_OP_MODE_SEL_SDR_422 | ADV7604_OP_FORMAT_SEL_10BIT },
- { MEDIA_BUS_FMT_YUYV12_2X12, ADV7604_OP_CH_SEL_RGB, false, false,
- ADV7604_OP_MODE_SEL_SDR_422 | ADV7604_OP_FORMAT_SEL_12BIT },
- { MEDIA_BUS_FMT_YVYU12_2X12, ADV7604_OP_CH_SEL_RGB, false, true,
- ADV7604_OP_MODE_SEL_SDR_422 | ADV7604_OP_FORMAT_SEL_12BIT },
- { MEDIA_BUS_FMT_UYVY8_1X16, ADV7604_OP_CH_SEL_RBG, false, false,
- ADV7604_OP_MODE_SEL_SDR_422_2X | ADV7604_OP_FORMAT_SEL_8BIT },
- { MEDIA_BUS_FMT_VYUY8_1X16, ADV7604_OP_CH_SEL_RBG, false, true,
- ADV7604_OP_MODE_SEL_SDR_422_2X | ADV7604_OP_FORMAT_SEL_8BIT },
- { MEDIA_BUS_FMT_YUYV8_1X16, ADV7604_OP_CH_SEL_RGB, false, false,
- ADV7604_OP_MODE_SEL_SDR_422_2X | ADV7604_OP_FORMAT_SEL_8BIT },
- { MEDIA_BUS_FMT_YVYU8_1X16, ADV7604_OP_CH_SEL_RGB, false, true,
- ADV7604_OP_MODE_SEL_SDR_422_2X | ADV7604_OP_FORMAT_SEL_8BIT },
- { MEDIA_BUS_FMT_UYVY10_1X20, ADV7604_OP_CH_SEL_RBG, false, false,
- ADV7604_OP_MODE_SEL_SDR_422_2X | ADV7604_OP_FORMAT_SEL_10BIT },
- { MEDIA_BUS_FMT_VYUY10_1X20, ADV7604_OP_CH_SEL_RBG, false, true,
- ADV7604_OP_MODE_SEL_SDR_422_2X | ADV7604_OP_FORMAT_SEL_10BIT },
- { MEDIA_BUS_FMT_YUYV10_1X20, ADV7604_OP_CH_SEL_RGB, false, false,
- ADV7604_OP_MODE_SEL_SDR_422_2X | ADV7604_OP_FORMAT_SEL_10BIT },
- { MEDIA_BUS_FMT_YVYU10_1X20, ADV7604_OP_CH_SEL_RGB, false, true,
- ADV7604_OP_MODE_SEL_SDR_422_2X | ADV7604_OP_FORMAT_SEL_10BIT },
- { MEDIA_BUS_FMT_UYVY12_1X24, ADV7604_OP_CH_SEL_RBG, false, false,
- ADV7604_OP_MODE_SEL_SDR_422_2X | ADV7604_OP_FORMAT_SEL_12BIT },
- { MEDIA_BUS_FMT_VYUY12_1X24, ADV7604_OP_CH_SEL_RBG, false, true,
- ADV7604_OP_MODE_SEL_SDR_422_2X | ADV7604_OP_FORMAT_SEL_12BIT },
- { MEDIA_BUS_FMT_YUYV12_1X24, ADV7604_OP_CH_SEL_RGB, false, false,
- ADV7604_OP_MODE_SEL_SDR_422_2X | ADV7604_OP_FORMAT_SEL_12BIT },
- { MEDIA_BUS_FMT_YVYU12_1X24, ADV7604_OP_CH_SEL_RGB, false, true,
- ADV7604_OP_MODE_SEL_SDR_422_2X | ADV7604_OP_FORMAT_SEL_12BIT },
+static const struct adv76xx_format_info adv7604_formats[] = {
+ { MEDIA_BUS_FMT_RGB888_1X24, ADV76XX_OP_CH_SEL_RGB, true, false,
+ ADV76XX_OP_MODE_SEL_SDR_444 | ADV76XX_OP_FORMAT_SEL_8BIT },
+ { MEDIA_BUS_FMT_YUYV8_2X8, ADV76XX_OP_CH_SEL_RGB, false, false,
+ ADV76XX_OP_MODE_SEL_SDR_422 | ADV76XX_OP_FORMAT_SEL_8BIT },
+ { MEDIA_BUS_FMT_YVYU8_2X8, ADV76XX_OP_CH_SEL_RGB, false, true,
+ ADV76XX_OP_MODE_SEL_SDR_422 | ADV76XX_OP_FORMAT_SEL_8BIT },
+ { MEDIA_BUS_FMT_YUYV10_2X10, ADV76XX_OP_CH_SEL_RGB, false, false,
+ ADV76XX_OP_MODE_SEL_SDR_422 | ADV7604_OP_FORMAT_SEL_10BIT },
+ { MEDIA_BUS_FMT_YVYU10_2X10, ADV76XX_OP_CH_SEL_RGB, false, true,
+ ADV76XX_OP_MODE_SEL_SDR_422 | ADV7604_OP_FORMAT_SEL_10BIT },
+ { MEDIA_BUS_FMT_YUYV12_2X12, ADV76XX_OP_CH_SEL_RGB, false, false,
+ ADV76XX_OP_MODE_SEL_SDR_422 | ADV76XX_OP_FORMAT_SEL_12BIT },
+ { MEDIA_BUS_FMT_YVYU12_2X12, ADV76XX_OP_CH_SEL_RGB, false, true,
+ ADV76XX_OP_MODE_SEL_SDR_422 | ADV76XX_OP_FORMAT_SEL_12BIT },
+ { MEDIA_BUS_FMT_UYVY8_1X16, ADV76XX_OP_CH_SEL_RBG, false, false,
+ ADV76XX_OP_MODE_SEL_SDR_422_2X | ADV76XX_OP_FORMAT_SEL_8BIT },
+ { MEDIA_BUS_FMT_VYUY8_1X16, ADV76XX_OP_CH_SEL_RBG, false, true,
+ ADV76XX_OP_MODE_SEL_SDR_422_2X | ADV76XX_OP_FORMAT_SEL_8BIT },
+ { MEDIA_BUS_FMT_YUYV8_1X16, ADV76XX_OP_CH_SEL_RGB, false, false,
+ ADV76XX_OP_MODE_SEL_SDR_422_2X | ADV76XX_OP_FORMAT_SEL_8BIT },
+ { MEDIA_BUS_FMT_YVYU8_1X16, ADV76XX_OP_CH_SEL_RGB, false, true,
+ ADV76XX_OP_MODE_SEL_SDR_422_2X | ADV76XX_OP_FORMAT_SEL_8BIT },
+ { MEDIA_BUS_FMT_UYVY10_1X20, ADV76XX_OP_CH_SEL_RBG, false, false,
+ ADV76XX_OP_MODE_SEL_SDR_422_2X | ADV7604_OP_FORMAT_SEL_10BIT },
+ { MEDIA_BUS_FMT_VYUY10_1X20, ADV76XX_OP_CH_SEL_RBG, false, true,
+ ADV76XX_OP_MODE_SEL_SDR_422_2X | ADV7604_OP_FORMAT_SEL_10BIT },
+ { MEDIA_BUS_FMT_YUYV10_1X20, ADV76XX_OP_CH_SEL_RGB, false, false,
+ ADV76XX_OP_MODE_SEL_SDR_422_2X | ADV7604_OP_FORMAT_SEL_10BIT },
+ { MEDIA_BUS_FMT_YVYU10_1X20, ADV76XX_OP_CH_SEL_RGB, false, true,
+ ADV76XX_OP_MODE_SEL_SDR_422_2X | ADV7604_OP_FORMAT_SEL_10BIT },
+ { MEDIA_BUS_FMT_UYVY12_1X24, ADV76XX_OP_CH_SEL_RBG, false, false,
+ ADV76XX_OP_MODE_SEL_SDR_422_2X | ADV76XX_OP_FORMAT_SEL_12BIT },
+ { MEDIA_BUS_FMT_VYUY12_1X24, ADV76XX_OP_CH_SEL_RBG, false, true,
+ ADV76XX_OP_MODE_SEL_SDR_422_2X | ADV76XX_OP_FORMAT_SEL_12BIT },
+ { MEDIA_BUS_FMT_YUYV12_1X24, ADV76XX_OP_CH_SEL_RGB, false, false,
+ ADV76XX_OP_MODE_SEL_SDR_422_2X | ADV76XX_OP_FORMAT_SEL_12BIT },
+ { MEDIA_BUS_FMT_YVYU12_1X24, ADV76XX_OP_CH_SEL_RGB, false, true,
+ ADV76XX_OP_MODE_SEL_SDR_422_2X | ADV76XX_OP_FORMAT_SEL_12BIT },
};
-static const struct adv7604_format_info adv7611_formats[] = {
- { MEDIA_BUS_FMT_RGB888_1X24, ADV7604_OP_CH_SEL_RGB, true, false,
- ADV7604_OP_MODE_SEL_SDR_444 | ADV7604_OP_FORMAT_SEL_8BIT },
- { MEDIA_BUS_FMT_YUYV8_2X8, ADV7604_OP_CH_SEL_RGB, false, false,
- ADV7604_OP_MODE_SEL_SDR_422 | ADV7604_OP_FORMAT_SEL_8BIT },
- { MEDIA_BUS_FMT_YVYU8_2X8, ADV7604_OP_CH_SEL_RGB, false, true,
- ADV7604_OP_MODE_SEL_SDR_422 | ADV7604_OP_FORMAT_SEL_8BIT },
- { MEDIA_BUS_FMT_YUYV12_2X12, ADV7604_OP_CH_SEL_RGB, false, false,
- ADV7604_OP_MODE_SEL_SDR_422 | ADV7604_OP_FORMAT_SEL_12BIT },
- { MEDIA_BUS_FMT_YVYU12_2X12, ADV7604_OP_CH_SEL_RGB, false, true,
- ADV7604_OP_MODE_SEL_SDR_422 | ADV7604_OP_FORMAT_SEL_12BIT },
- { MEDIA_BUS_FMT_UYVY8_1X16, ADV7604_OP_CH_SEL_RBG, false, false,
- ADV7604_OP_MODE_SEL_SDR_422_2X | ADV7604_OP_FORMAT_SEL_8BIT },
- { MEDIA_BUS_FMT_VYUY8_1X16, ADV7604_OP_CH_SEL_RBG, false, true,
- ADV7604_OP_MODE_SEL_SDR_422_2X | ADV7604_OP_FORMAT_SEL_8BIT },
- { MEDIA_BUS_FMT_YUYV8_1X16, ADV7604_OP_CH_SEL_RGB, false, false,
- ADV7604_OP_MODE_SEL_SDR_422_2X | ADV7604_OP_FORMAT_SEL_8BIT },
- { MEDIA_BUS_FMT_YVYU8_1X16, ADV7604_OP_CH_SEL_RGB, false, true,
- ADV7604_OP_MODE_SEL_SDR_422_2X | ADV7604_OP_FORMAT_SEL_8BIT },
- { MEDIA_BUS_FMT_UYVY12_1X24, ADV7604_OP_CH_SEL_RBG, false, false,
- ADV7604_OP_MODE_SEL_SDR_422_2X | ADV7604_OP_FORMAT_SEL_12BIT },
- { MEDIA_BUS_FMT_VYUY12_1X24, ADV7604_OP_CH_SEL_RBG, false, true,
- ADV7604_OP_MODE_SEL_SDR_422_2X | ADV7604_OP_FORMAT_SEL_12BIT },
- { MEDIA_BUS_FMT_YUYV12_1X24, ADV7604_OP_CH_SEL_RGB, false, false,
- ADV7604_OP_MODE_SEL_SDR_422_2X | ADV7604_OP_FORMAT_SEL_12BIT },
- { MEDIA_BUS_FMT_YVYU12_1X24, ADV7604_OP_CH_SEL_RGB, false, true,
- ADV7604_OP_MODE_SEL_SDR_422_2X | ADV7604_OP_FORMAT_SEL_12BIT },
+static const struct adv76xx_format_info adv7611_formats[] = {
+ { MEDIA_BUS_FMT_RGB888_1X24, ADV76XX_OP_CH_SEL_RGB, true, false,
+ ADV76XX_OP_MODE_SEL_SDR_444 | ADV76XX_OP_FORMAT_SEL_8BIT },
+ { MEDIA_BUS_FMT_YUYV8_2X8, ADV76XX_OP_CH_SEL_RGB, false, false,
+ ADV76XX_OP_MODE_SEL_SDR_422 | ADV76XX_OP_FORMAT_SEL_8BIT },
+ { MEDIA_BUS_FMT_YVYU8_2X8, ADV76XX_OP_CH_SEL_RGB, false, true,
+ ADV76XX_OP_MODE_SEL_SDR_422 | ADV76XX_OP_FORMAT_SEL_8BIT },
+ { MEDIA_BUS_FMT_YUYV12_2X12, ADV76XX_OP_CH_SEL_RGB, false, false,
+ ADV76XX_OP_MODE_SEL_SDR_422 | ADV76XX_OP_FORMAT_SEL_12BIT },
+ { MEDIA_BUS_FMT_YVYU12_2X12, ADV76XX_OP_CH_SEL_RGB, false, true,
+ ADV76XX_OP_MODE_SEL_SDR_422 | ADV76XX_OP_FORMAT_SEL_12BIT },
+ { MEDIA_BUS_FMT_UYVY8_1X16, ADV76XX_OP_CH_SEL_RBG, false, false,
+ ADV76XX_OP_MODE_SEL_SDR_422_2X | ADV76XX_OP_FORMAT_SEL_8BIT },
+ { MEDIA_BUS_FMT_VYUY8_1X16, ADV76XX_OP_CH_SEL_RBG, false, true,
+ ADV76XX_OP_MODE_SEL_SDR_422_2X | ADV76XX_OP_FORMAT_SEL_8BIT },
+ { MEDIA_BUS_FMT_YUYV8_1X16, ADV76XX_OP_CH_SEL_RGB, false, false,
+ ADV76XX_OP_MODE_SEL_SDR_422_2X | ADV76XX_OP_FORMAT_SEL_8BIT },
+ { MEDIA_BUS_FMT_YVYU8_1X16, ADV76XX_OP_CH_SEL_RGB, false, true,
+ ADV76XX_OP_MODE_SEL_SDR_422_2X | ADV76XX_OP_FORMAT_SEL_8BIT },
+ { MEDIA_BUS_FMT_UYVY12_1X24, ADV76XX_OP_CH_SEL_RBG, false, false,
+ ADV76XX_OP_MODE_SEL_SDR_422_2X | ADV76XX_OP_FORMAT_SEL_12BIT },
+ { MEDIA_BUS_FMT_VYUY12_1X24, ADV76XX_OP_CH_SEL_RBG, false, true,
+ ADV76XX_OP_MODE_SEL_SDR_422_2X | ADV76XX_OP_FORMAT_SEL_12BIT },
+ { MEDIA_BUS_FMT_YUYV12_1X24, ADV76XX_OP_CH_SEL_RGB, false, false,
+ ADV76XX_OP_MODE_SEL_SDR_422_2X | ADV76XX_OP_FORMAT_SEL_12BIT },
+ { MEDIA_BUS_FMT_YVYU12_1X24, ADV76XX_OP_CH_SEL_RGB, false, true,
+ ADV76XX_OP_MODE_SEL_SDR_422_2X | ADV76XX_OP_FORMAT_SEL_12BIT },
};
-static const struct adv7604_format_info *
-adv7604_format_info(struct adv7604_state *state, u32 code)
+static const struct adv76xx_format_info *
+adv76xx_format_info(struct adv76xx_state *state, u32 code)
{
unsigned int i;
@@ -759,7 +756,7 @@
static inline bool is_analog_input(struct v4l2_subdev *sd)
{
- struct adv7604_state *state = to_state(sd);
+ struct adv76xx_state *state = to_state(sd);
return state->selected_input == ADV7604_PAD_VGA_RGB ||
state->selected_input == ADV7604_PAD_VGA_COMP;
@@ -767,9 +764,9 @@
static inline bool is_digital_input(struct v4l2_subdev *sd)
{
- struct adv7604_state *state = to_state(sd);
+ struct adv76xx_state *state = to_state(sd);
- return state->selected_input == ADV7604_PAD_HDMI_PORT_A ||
+ return state->selected_input == ADV76XX_PAD_HDMI_PORT_A ||
state->selected_input == ADV7604_PAD_HDMI_PORT_B ||
state->selected_input == ADV7604_PAD_HDMI_PORT_C ||
state->selected_input == ADV7604_PAD_HDMI_PORT_D;
@@ -778,7 +775,7 @@
/* ----------------------------------------------------------------------- */
#ifdef CONFIG_VIDEO_ADV_DEBUG
-static void adv7604_inv_register(struct v4l2_subdev *sd)
+static void adv76xx_inv_register(struct v4l2_subdev *sd)
{
v4l2_info(sd, "0x000-0x0ff: IO Map\n");
v4l2_info(sd, "0x100-0x1ff: AVLink Map\n");
@@ -795,15 +792,15 @@
v4l2_info(sd, "0xc00-0xcff: VDP Map\n");
}
-static int adv7604_g_register(struct v4l2_subdev *sd,
+static int adv76xx_g_register(struct v4l2_subdev *sd,
struct v4l2_dbg_register *reg)
{
int ret;
- ret = adv7604_read_reg(sd, reg->reg);
+ ret = adv76xx_read_reg(sd, reg->reg);
if (ret < 0) {
v4l2_info(sd, "Register %03llx not supported\n", reg->reg);
- adv7604_inv_register(sd);
+ adv76xx_inv_register(sd);
return ret;
}
@@ -813,15 +810,15 @@
return 0;
}
-static int adv7604_s_register(struct v4l2_subdev *sd,
+static int adv76xx_s_register(struct v4l2_subdev *sd,
const struct v4l2_dbg_register *reg)
{
int ret;
- ret = adv7604_write_reg(sd, reg->reg, reg->val);
+ ret = adv76xx_write_reg(sd, reg->reg, reg->val);
if (ret < 0) {
v4l2_info(sd, "Register %03llx not supported\n", reg->reg);
- adv7604_inv_register(sd);
+ adv76xx_inv_register(sd);
return ret;
}
@@ -846,10 +843,10 @@
return value & 1;
}
-static int adv7604_s_detect_tx_5v_ctrl(struct v4l2_subdev *sd)
+static int adv76xx_s_detect_tx_5v_ctrl(struct v4l2_subdev *sd)
{
- struct adv7604_state *state = to_state(sd);
- const struct adv7604_chip_info *info = state->info;
+ struct adv76xx_state *state = to_state(sd);
+ const struct adv76xx_chip_info *info = state->info;
return v4l2_ctrl_s_ctrl(state->detect_tx_5v_ctrl,
info->read_cable_det(sd));
@@ -857,7 +854,7 @@
static int find_and_set_predefined_video_timings(struct v4l2_subdev *sd,
u8 prim_mode,
- const struct adv7604_video_standards *predef_vid_timings,
+ const struct adv76xx_video_standards *predef_vid_timings,
const struct v4l2_dv_timings *timings)
{
int i;
@@ -878,12 +875,12 @@
static int configure_predefined_video_timings(struct v4l2_subdev *sd,
struct v4l2_dv_timings *timings)
{
- struct adv7604_state *state = to_state(sd);
+ struct adv76xx_state *state = to_state(sd);
int err;
v4l2_dbg(1, debug, sd, "%s", __func__);
- if (adv7604_has_afe(state)) {
+ if (adv76xx_has_afe(state)) {
/* reset to default values */
io_write(sd, 0x16, 0x43);
io_write(sd, 0x17, 0x5a);
@@ -909,10 +906,10 @@
0x02, adv7604_prim_mode_gr, timings);
} else if (is_digital_input(sd)) {
err = find_and_set_predefined_video_timings(sd,
- 0x05, adv7604_prim_mode_hdmi_comp, timings);
+ 0x05, adv76xx_prim_mode_hdmi_comp, timings);
if (err)
err = find_and_set_predefined_video_timings(sd,
- 0x06, adv7604_prim_mode_hdmi_gr, timings);
+ 0x06, adv76xx_prim_mode_hdmi_gr, timings);
} else {
v4l2_dbg(2, debug, sd, "%s: Unknown port %d selected\n",
__func__, state->selected_input);
@@ -926,7 +923,7 @@
static void configure_custom_video_timings(struct v4l2_subdev *sd,
const struct v4l2_bt_timings *bt)
{
- struct adv7604_state *state = to_state(sd);
+ struct adv76xx_state *state = to_state(sd);
u32 width = htotal(bt);
u32 height = vtotal(bt);
u16 cp_start_sav = bt->hsync + bt->hbackporch - 4;
@@ -934,7 +931,7 @@
u16 cp_start_vbi = height - bt->vfrontporch;
u16 cp_end_vbi = bt->vsync + bt->vbackporch;
u16 ch1_fr_ll = (((u32)bt->pixelclock / 100) > 0) ?
- ((width * (ADV7604_fsc / 100)) / ((u32)bt->pixelclock / 100)) : 0;
+ ((width * (ADV76XX_FSC / 100)) / ((u32)bt->pixelclock / 100)) : 0;
const u8 pll[2] = {
0xc0 | ((width >> 8) & 0x1f),
width & 0xff
@@ -952,7 +949,7 @@
/* Should only be set in auto-graphics mode [REF_02, p. 91-92] */
/* setup PLL_DIV_MAN_EN and PLL_DIV_RATIO */
/* IO-map reg. 0x16 and 0x17 should be written in sequence */
- if (adv_smbus_write_i2c_block_data(state, ADV7604_PAGE_IO,
+ if (adv_smbus_write_i2c_block_data(state, ADV76XX_PAGE_IO,
0x16, 2, pll))
v4l2_err(sd, "writing to reg 0x16 and 0x17 failed\n");
@@ -983,9 +980,9 @@
cp_write(sd, 0xac, (height & 0x0f) << 4);
}
-static void adv7604_set_offset(struct v4l2_subdev *sd, bool auto_offset, u16 offset_a, u16 offset_b, u16 offset_c)
+static void adv76xx_set_offset(struct v4l2_subdev *sd, bool auto_offset, u16 offset_a, u16 offset_b, u16 offset_c)
{
- struct adv7604_state *state = to_state(sd);
+ struct adv76xx_state *state = to_state(sd);
u8 offset_buf[4];
if (auto_offset) {
@@ -1004,14 +1001,14 @@
offset_buf[3] = offset_c & 0x0ff;
/* Registers must be written in this order with no i2c access in between */
- if (adv_smbus_write_i2c_block_data(state, ADV7604_PAGE_CP,
+ if (adv_smbus_write_i2c_block_data(state, ADV76XX_PAGE_CP,
0x77, 4, offset_buf))
v4l2_err(sd, "%s: i2c error writing to CP reg 0x77, 0x78, 0x79, 0x7a\n", __func__);
}
-static void adv7604_set_gain(struct v4l2_subdev *sd, bool auto_gain, u16 gain_a, u16 gain_b, u16 gain_c)
+static void adv76xx_set_gain(struct v4l2_subdev *sd, bool auto_gain, u16 gain_a, u16 gain_b, u16 gain_c)
{
- struct adv7604_state *state = to_state(sd);
+ struct adv76xx_state *state = to_state(sd);
u8 gain_buf[4];
u8 gain_man = 1;
u8 agc_mode_man = 1;
@@ -1034,14 +1031,14 @@
gain_buf[3] = ((gain_c & 0x0ff));
/* Registers must be written in this order with no i2c access in between */
- if (adv_smbus_write_i2c_block_data(state, ADV7604_PAGE_CP,
+ if (adv_smbus_write_i2c_block_data(state, ADV76XX_PAGE_CP,
0x73, 4, gain_buf))
v4l2_err(sd, "%s: i2c error writing to CP reg 0x73, 0x74, 0x75, 0x76\n", __func__);
}
static void set_rgb_quantization_range(struct v4l2_subdev *sd)
{
- struct adv7604_state *state = to_state(sd);
+ struct adv76xx_state *state = to_state(sd);
bool rgb_output = io_read(sd, 0x02) & 0x02;
bool hdmi_signal = hdmi_read(sd, 0x05) & 0x80;
@@ -1049,8 +1046,8 @@
__func__, state->rgb_quantization_range,
rgb_output, hdmi_signal);
- adv7604_set_gain(sd, true, 0x0, 0x0, 0x0);
- adv7604_set_offset(sd, true, 0x0, 0x0, 0x0);
+ adv76xx_set_gain(sd, true, 0x0, 0x0, 0x0);
+ adv76xx_set_offset(sd, true, 0x0, 0x0, 0x0);
switch (state->rgb_quantization_range) {
case V4L2_DV_RGB_RANGE_AUTO:
@@ -1078,7 +1075,7 @@
/* Receiving DVI-D signal
* ADV7604 selects RGB limited range regardless of
* input format (CE/IT) in automatic mode */
- if (state->timings.bt.standards & V4L2_DV_BT_STD_CEA861) {
+ if (state->timings.bt.flags & V4L2_DV_FL_IS_CE_VIDEO) {
/* RGB limited range (16-235) */
io_write_clr_set(sd, 0x02, 0xf0, 0x00);
} else {
@@ -1086,10 +1083,10 @@
io_write_clr_set(sd, 0x02, 0xf0, 0x10);
if (is_digital_input(sd) && rgb_output) {
- adv7604_set_offset(sd, false, 0x40, 0x40, 0x40);
+ adv76xx_set_offset(sd, false, 0x40, 0x40, 0x40);
} else {
- adv7604_set_gain(sd, false, 0xe0, 0xe0, 0xe0);
- adv7604_set_offset(sd, false, 0x70, 0x70, 0x70);
+ adv76xx_set_gain(sd, false, 0xe0, 0xe0, 0xe0);
+ adv76xx_set_offset(sd, false, 0x70, 0x70, 0x70);
}
}
break;
@@ -1119,21 +1116,21 @@
/* Adjust gain/offset for DVI-D signals only */
if (rgb_output) {
- adv7604_set_offset(sd, false, 0x40, 0x40, 0x40);
+ adv76xx_set_offset(sd, false, 0x40, 0x40, 0x40);
} else {
- adv7604_set_gain(sd, false, 0xe0, 0xe0, 0xe0);
- adv7604_set_offset(sd, false, 0x70, 0x70, 0x70);
+ adv76xx_set_gain(sd, false, 0xe0, 0xe0, 0xe0);
+ adv76xx_set_offset(sd, false, 0x70, 0x70, 0x70);
}
break;
}
}
-static int adv7604_s_ctrl(struct v4l2_ctrl *ctrl)
+static int adv76xx_s_ctrl(struct v4l2_ctrl *ctrl)
{
struct v4l2_subdev *sd =
- &container_of(ctrl->handler, struct adv7604_state, hdl)->sd;
+ &container_of(ctrl->handler, struct adv76xx_state, hdl)->sd;
- struct adv7604_state *state = to_state(sd);
+ struct adv76xx_state *state = to_state(sd);
switch (ctrl->id) {
case V4L2_CID_BRIGHTNESS:
@@ -1153,7 +1150,7 @@
set_rgb_quantization_range(sd);
return 0;
case V4L2_CID_ADV_RX_ANALOG_SAMPLING_PHASE:
- if (!adv7604_has_afe(state))
+ if (!adv76xx_has_afe(state))
return -EINVAL;
/* Set the analog sampling phase. This is needed to find the
best sampling phase for analog video: an application or
@@ -1185,15 +1182,15 @@
static inline bool no_signal_tmds(struct v4l2_subdev *sd)
{
- struct adv7604_state *state = to_state(sd);
+ struct adv76xx_state *state = to_state(sd);
return !(io_read(sd, 0x6a) & (0x10 >> state->selected_input));
}
static inline bool no_lock_tmds(struct v4l2_subdev *sd)
{
- struct adv7604_state *state = to_state(sd);
- const struct adv7604_chip_info *info = state->info;
+ struct adv76xx_state *state = to_state(sd);
+ const struct adv76xx_chip_info *info = state->info;
return (io_read(sd, 0x6a) & info->tdms_lock_mask) != info->tdms_lock_mask;
}
@@ -1205,13 +1202,13 @@
static inline bool no_lock_sspd(struct v4l2_subdev *sd)
{
- struct adv7604_state *state = to_state(sd);
+ struct adv76xx_state *state = to_state(sd);
/*
* Chips without a AFE don't expose registers for the SSPD, so just assume
* that we have a lock.
*/
- if (adv7604_has_afe(state))
+ if (adv76xx_has_afe(state))
return false;
/* TODO channel 2 */
@@ -1243,9 +1240,9 @@
static inline bool no_lock_cp(struct v4l2_subdev *sd)
{
- struct adv7604_state *state = to_state(sd);
+ struct adv76xx_state *state = to_state(sd);
- if (!adv7604_has_afe(state))
+ if (!adv76xx_has_afe(state))
return false;
/* CP has detected a non standard number of lines on the incoming
@@ -1253,13 +1250,19 @@
return io_read(sd, 0x12) & 0x01;
}
-static int adv7604_g_input_status(struct v4l2_subdev *sd, u32 *status)
+static inline bool in_free_run(struct v4l2_subdev *sd)
+{
+ return cp_read(sd, 0xff) & 0x10;
+}
+
+static int adv76xx_g_input_status(struct v4l2_subdev *sd, u32 *status)
{
*status = 0;
*status |= no_power(sd) ? V4L2_IN_ST_NO_POWER : 0;
*status |= no_signal(sd) ? V4L2_IN_ST_NO_SIGNAL : 0;
- if (no_lock_cp(sd))
- *status |= is_digital_input(sd) ? V4L2_IN_ST_NO_SYNC : V4L2_IN_ST_NO_H_LOCK;
+ if (!in_free_run(sd) && no_lock_cp(sd))
+ *status |= is_digital_input(sd) ?
+ V4L2_IN_ST_NO_SYNC : V4L2_IN_ST_NO_H_LOCK;
v4l2_dbg(1, debug, sd, "%s: status = 0x%x\n", __func__, *status);
@@ -1278,22 +1281,22 @@
struct stdi_readback *stdi,
struct v4l2_dv_timings *timings)
{
- struct adv7604_state *state = to_state(sd);
- u32 hfreq = (ADV7604_fsc * 8) / stdi->bl;
+ struct adv76xx_state *state = to_state(sd);
+ u32 hfreq = (ADV76XX_FSC * 8) / stdi->bl;
u32 pix_clk;
int i;
- for (i = 0; adv7604_timings[i].bt.height; i++) {
- if (vtotal(&adv7604_timings[i].bt) != stdi->lcf + 1)
+ for (i = 0; adv76xx_timings[i].bt.height; i++) {
+ if (vtotal(&adv76xx_timings[i].bt) != stdi->lcf + 1)
continue;
- if (adv7604_timings[i].bt.vsync != stdi->lcvs)
+ if (adv76xx_timings[i].bt.vsync != stdi->lcvs)
continue;
- pix_clk = hfreq * htotal(&adv7604_timings[i].bt);
+ pix_clk = hfreq * htotal(&adv76xx_timings[i].bt);
- if ((pix_clk < adv7604_timings[i].bt.pixelclock + 1000000) &&
- (pix_clk > adv7604_timings[i].bt.pixelclock - 1000000)) {
- *timings = adv7604_timings[i];
+ if ((pix_clk < adv76xx_timings[i].bt.pixelclock + 1000000) &&
+ (pix_clk > adv76xx_timings[i].bt.pixelclock - 1000000)) {
+ *timings = adv76xx_timings[i];
return 0;
}
}
@@ -1319,8 +1322,8 @@
static int read_stdi(struct v4l2_subdev *sd, struct stdi_readback *stdi)
{
- struct adv7604_state *state = to_state(sd);
- const struct adv7604_chip_info *info = state->info;
+ struct adv76xx_state *state = to_state(sd);
+ const struct adv76xx_chip_info *info = state->info;
u8 polarity;
if (no_lock_stdi(sd) || no_lock_sspd(sd)) {
@@ -1334,7 +1337,7 @@
stdi->lcvs = cp_read(sd, 0xb3) >> 3;
stdi->interlaced = io_read(sd, 0x12) & 0x10;
- if (adv7604_has_afe(state)) {
+ if (adv76xx_has_afe(state)) {
/* read SSPD */
polarity = cp_read(sd, 0xb5);
if ((polarity & 0x03) == 0x01) {
@@ -1373,26 +1376,26 @@
return 0;
}
-static int adv7604_enum_dv_timings(struct v4l2_subdev *sd,
+static int adv76xx_enum_dv_timings(struct v4l2_subdev *sd,
struct v4l2_enum_dv_timings *timings)
{
- struct adv7604_state *state = to_state(sd);
+ struct adv76xx_state *state = to_state(sd);
- if (timings->index >= ARRAY_SIZE(adv7604_timings) - 1)
+ if (timings->index >= ARRAY_SIZE(adv76xx_timings) - 1)
return -EINVAL;
if (timings->pad >= state->source_pad)
return -EINVAL;
memset(timings->reserved, 0, sizeof(timings->reserved));
- timings->timings = adv7604_timings[timings->index];
+ timings->timings = adv76xx_timings[timings->index];
return 0;
}
-static int adv7604_dv_timings_cap(struct v4l2_subdev *sd,
+static int adv76xx_dv_timings_cap(struct v4l2_subdev *sd,
struct v4l2_dv_timings_cap *cap)
{
- struct adv7604_state *state = to_state(sd);
+ struct adv76xx_state *state = to_state(sd);
if (cap->pad >= state->source_pad)
return -EINVAL;
@@ -1403,7 +1406,7 @@
cap->bt.min_pixelclock = 25000000;
switch (cap->pad) {
- case ADV7604_PAD_HDMI_PORT_A:
+ case ADV76XX_PAD_HDMI_PORT_A:
case ADV7604_PAD_HDMI_PORT_B:
case ADV7604_PAD_HDMI_PORT_C:
case ADV7604_PAD_HDMI_PORT_D:
@@ -1424,16 +1427,16 @@
}
/* Fill the optional fields .standards and .flags in struct v4l2_dv_timings
- if the format is listed in adv7604_timings[] */
-static void adv7604_fill_optional_dv_timings_fields(struct v4l2_subdev *sd,
+ if the format is listed in adv76xx_timings[] */
+static void adv76xx_fill_optional_dv_timings_fields(struct v4l2_subdev *sd,
struct v4l2_dv_timings *timings)
{
int i;
- for (i = 0; adv7604_timings[i].bt.width; i++) {
- if (v4l2_match_dv_timings(timings, &adv7604_timings[i],
+ for (i = 0; adv76xx_timings[i].bt.width; i++) {
+ if (v4l2_match_dv_timings(timings, &adv76xx_timings[i],
is_digital_input(sd) ? 250000 : 1000000)) {
- *timings = adv7604_timings[i];
+ *timings = adv76xx_timings[i];
break;
}
}
@@ -1471,11 +1474,11 @@
return ((a << 1) | (b >> 7)) * 1000000 + (b & 0x7f) * 1000000 / 128;
}
-static int adv7604_query_dv_timings(struct v4l2_subdev *sd,
+static int adv76xx_query_dv_timings(struct v4l2_subdev *sd,
struct v4l2_dv_timings *timings)
{
- struct adv7604_state *state = to_state(sd);
- const struct adv7604_chip_info *info = state->info;
+ struct adv76xx_state *state = to_state(sd);
+ const struct adv76xx_chip_info *info = state->info;
struct v4l2_bt_timings *bt = &timings->bt;
struct stdi_readback stdi;
@@ -1519,7 +1522,7 @@
bt->il_vsync = hdmi_read16(sd, 0x30, 0x1fff) / 2;
bt->il_vbackporch = hdmi_read16(sd, 0x34, 0x1fff) / 2;
}
- adv7604_fill_optional_dv_timings_fields(sd, timings);
+ adv76xx_fill_optional_dv_timings_fields(sd, timings);
} else {
/* find format
* Since LCVS values are inaccurate [REF_03, p. 275-276],
@@ -1576,16 +1579,16 @@
}
if (debug > 1)
- v4l2_print_dv_timings(sd->name, "adv7604_query_dv_timings: ",
+ v4l2_print_dv_timings(sd->name, "adv76xx_query_dv_timings: ",
timings, true);
return 0;
}
-static int adv7604_s_dv_timings(struct v4l2_subdev *sd,
+static int adv76xx_s_dv_timings(struct v4l2_subdev *sd,
struct v4l2_dv_timings *timings)
{
- struct adv7604_state *state = to_state(sd);
+ struct adv76xx_state *state = to_state(sd);
struct v4l2_bt_timings *bt;
int err;
@@ -1606,7 +1609,7 @@
return -ERANGE;
}
- adv7604_fill_optional_dv_timings_fields(sd, timings);
+ adv76xx_fill_optional_dv_timings_fields(sd, timings);
state->timings = *timings;
@@ -1623,15 +1626,15 @@
set_rgb_quantization_range(sd);
if (debug > 1)
- v4l2_print_dv_timings(sd->name, "adv7604_s_dv_timings: ",
+ v4l2_print_dv_timings(sd->name, "adv76xx_s_dv_timings: ",
timings, true);
return 0;
}
-static int adv7604_g_dv_timings(struct v4l2_subdev *sd,
+static int adv76xx_g_dv_timings(struct v4l2_subdev *sd,
struct v4l2_dv_timings *timings)
{
- struct adv7604_state *state = to_state(sd);
+ struct adv76xx_state *state = to_state(sd);
*timings = state->timings;
return 0;
@@ -1649,7 +1652,7 @@
static void enable_input(struct v4l2_subdev *sd)
{
- struct adv7604_state *state = to_state(sd);
+ struct adv76xx_state *state = to_state(sd);
if (is_analog_input(sd)) {
io_write(sd, 0x15, 0xb0); /* Disable Tristate of Pins (no audio) */
@@ -1666,7 +1669,7 @@
static void disable_input(struct v4l2_subdev *sd)
{
- struct adv7604_state *state = to_state(sd);
+ struct adv76xx_state *state = to_state(sd);
hdmi_write_clr_set(sd, 0x1a, 0x10, 0x10); /* Mute audio */
msleep(16); /* 512 samples with >= 32 kHz sample rate [REF_03, c. 7.16.10] */
@@ -1676,11 +1679,11 @@
static void select_input(struct v4l2_subdev *sd)
{
- struct adv7604_state *state = to_state(sd);
- const struct adv7604_chip_info *info = state->info;
+ struct adv76xx_state *state = to_state(sd);
+ const struct adv76xx_chip_info *info = state->info;
if (is_analog_input(sd)) {
- adv7604_write_reg_seq(sd, info->recommended_settings[0]);
+ adv76xx_write_reg_seq(sd, info->recommended_settings[0]);
afe_write(sd, 0x00, 0x08); /* power up ADC */
afe_write(sd, 0x01, 0x06); /* power up Analog Front End */
@@ -1688,9 +1691,9 @@
} else if (is_digital_input(sd)) {
hdmi_write(sd, 0x00, state->selected_input & 0x03);
- adv7604_write_reg_seq(sd, info->recommended_settings[1]);
+ adv76xx_write_reg_seq(sd, info->recommended_settings[1]);
- if (adv7604_has_afe(state)) {
+ if (adv76xx_has_afe(state)) {
afe_write(sd, 0x00, 0xff); /* power down ADC */
afe_write(sd, 0x01, 0xfe); /* power down Analog Front End */
afe_write(sd, 0xc8, 0x40); /* phase control */
@@ -1705,10 +1708,10 @@
}
}
-static int adv7604_s_routing(struct v4l2_subdev *sd,
+static int adv76xx_s_routing(struct v4l2_subdev *sd,
u32 input, u32 output, u32 config)
{
- struct adv7604_state *state = to_state(sd);
+ struct adv76xx_state *state = to_state(sd);
v4l2_dbg(2, debug, sd, "%s: input %d, selected input %d",
__func__, input, state->selected_input);
@@ -1730,11 +1733,11 @@
return 0;
}
-static int adv7604_enum_mbus_code(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+static int adv76xx_enum_mbus_code(struct v4l2_subdev *sd,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_mbus_code_enum *code)
{
- struct adv7604_state *state = to_state(sd);
+ struct adv76xx_state *state = to_state(sd);
if (code->index >= state->info->nformats)
return -EINVAL;
@@ -1744,7 +1747,7 @@
return 0;
}
-static void adv7604_fill_format(struct adv7604_state *state,
+static void adv76xx_fill_format(struct adv76xx_state *state,
struct v4l2_mbus_framefmt *format)
{
memset(format, 0, sizeof(*format));
@@ -1752,8 +1755,9 @@
format->width = state->timings.bt.width;
format->height = state->timings.bt.height;
format->field = V4L2_FIELD_NONE;
+ format->colorspace = V4L2_COLORSPACE_SRGB;
- if (state->timings.bt.standards & V4L2_DV_BT_STD_CEA861)
+ if (state->timings.bt.flags & V4L2_DV_FL_IS_CE_VIDEO)
format->colorspace = (state->timings.bt.height <= 576) ?
V4L2_COLORSPACE_SMPTE170M : V4L2_COLORSPACE_REC709;
}
@@ -1765,7 +1769,7 @@
*
* The following table gives the op_ch_value from the format component order
* (expressed as op_ch_sel value in column) and the bus reordering (expressed as
- * adv7604_bus_order value in row).
+ * adv76xx_bus_order value in row).
*
* | GBR(0) GRB(1) BGR(2) RGB(3) BRG(4) RBG(5)
* ----------+-------------------------------------------------
@@ -1776,11 +1780,11 @@
* BRG (ROR) | BRG RBG GRB GBR RGB BGR
* GBR (ROL) | RGB BGR RBG BRG GBR GRB
*/
-static unsigned int adv7604_op_ch_sel(struct adv7604_state *state)
+static unsigned int adv76xx_op_ch_sel(struct adv76xx_state *state)
{
#define _SEL(a,b,c,d,e,f) { \
- ADV7604_OP_CH_SEL_##a, ADV7604_OP_CH_SEL_##b, ADV7604_OP_CH_SEL_##c, \
- ADV7604_OP_CH_SEL_##d, ADV7604_OP_CH_SEL_##e, ADV7604_OP_CH_SEL_##f }
+ ADV76XX_OP_CH_SEL_##a, ADV76XX_OP_CH_SEL_##b, ADV76XX_OP_CH_SEL_##c, \
+ ADV76XX_OP_CH_SEL_##d, ADV76XX_OP_CH_SEL_##e, ADV76XX_OP_CH_SEL_##f }
#define _BUS(x) [ADV7604_BUS_ORDER_##x]
static const unsigned int op_ch_sel[6][6] = {
@@ -1795,33 +1799,34 @@
return op_ch_sel[state->pdata.bus_order][state->format->op_ch_sel >> 5];
}
-static void adv7604_setup_format(struct adv7604_state *state)
+static void adv76xx_setup_format(struct adv76xx_state *state)
{
struct v4l2_subdev *sd = &state->sd;
io_write_clr_set(sd, 0x02, 0x02,
- state->format->rgb_out ? ADV7604_RGB_OUT : 0);
+ state->format->rgb_out ? ADV76XX_RGB_OUT : 0);
io_write(sd, 0x03, state->format->op_format_sel |
state->pdata.op_format_mode_sel);
- io_write_clr_set(sd, 0x04, 0xe0, adv7604_op_ch_sel(state));
+ io_write_clr_set(sd, 0x04, 0xe0, adv76xx_op_ch_sel(state));
io_write_clr_set(sd, 0x05, 0x01,
- state->format->swap_cb_cr ? ADV7604_OP_SWAP_CB_CR : 0);
+ state->format->swap_cb_cr ? ADV76XX_OP_SWAP_CB_CR : 0);
}
-static int adv7604_get_format(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+static int adv76xx_get_format(struct v4l2_subdev *sd,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *format)
{
- struct adv7604_state *state = to_state(sd);
+ struct adv76xx_state *state = to_state(sd);
if (format->pad != state->source_pad)
return -EINVAL;
- adv7604_fill_format(state, &format->format);
+ adv76xx_fill_format(state, &format->format);
if (format->which == V4L2_SUBDEV_FORMAT_TRY) {
struct v4l2_mbus_framefmt *fmt;
- fmt = v4l2_subdev_get_try_format(fh, format->pad);
+ fmt = v4l2_subdev_get_try_format(sd, cfg, format->pad);
format->format.code = fmt->code;
} else {
format->format.code = state->format->code;
@@ -1830,39 +1835,40 @@
return 0;
}
-static int adv7604_set_format(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+static int adv76xx_set_format(struct v4l2_subdev *sd,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *format)
{
- struct adv7604_state *state = to_state(sd);
- const struct adv7604_format_info *info;
+ struct adv76xx_state *state = to_state(sd);
+ const struct adv76xx_format_info *info;
if (format->pad != state->source_pad)
return -EINVAL;
- info = adv7604_format_info(state, format->format.code);
+ info = adv76xx_format_info(state, format->format.code);
if (info == NULL)
- info = adv7604_format_info(state, MEDIA_BUS_FMT_YUYV8_2X8);
+ info = adv76xx_format_info(state, MEDIA_BUS_FMT_YUYV8_2X8);
- adv7604_fill_format(state, &format->format);
+ adv76xx_fill_format(state, &format->format);
format->format.code = info->code;
if (format->which == V4L2_SUBDEV_FORMAT_TRY) {
struct v4l2_mbus_framefmt *fmt;
- fmt = v4l2_subdev_get_try_format(fh, format->pad);
+ fmt = v4l2_subdev_get_try_format(sd, cfg, format->pad);
fmt->code = format->format.code;
} else {
state->format = info;
- adv7604_setup_format(state);
+ adv76xx_setup_format(state);
}
return 0;
}
-static int adv7604_isr(struct v4l2_subdev *sd, u32 status, bool *handled)
+static int adv76xx_isr(struct v4l2_subdev *sd, u32 status, bool *handled)
{
- struct adv7604_state *state = to_state(sd);
- const struct adv7604_chip_info *info = state->info;
+ struct adv76xx_state *state = to_state(sd);
+ const struct adv76xx_chip_info *info = state->info;
const u8 irq_reg_0x43 = io_read(sd, 0x43);
const u8 irq_reg_0x6b = io_read(sd, 0x6b);
const u8 irq_reg_0x70 = io_read(sd, 0x70);
@@ -1890,7 +1896,7 @@
"%s: fmt_change = 0x%x, fmt_change_digital = 0x%x\n",
__func__, fmt_change, fmt_change_digital);
- v4l2_subdev_notify(sd, ADV7604_FMT_CHANGE, NULL);
+ v4l2_subdev_notify(sd, ADV76XX_FMT_CHANGE, NULL);
if (handled)
*handled = true;
@@ -1909,22 +1915,22 @@
if (tx_5v) {
v4l2_dbg(1, debug, sd, "%s: tx_5v: 0x%x\n", __func__, tx_5v);
io_write(sd, 0x71, tx_5v);
- adv7604_s_detect_tx_5v_ctrl(sd);
+ adv76xx_s_detect_tx_5v_ctrl(sd);
if (handled)
*handled = true;
}
return 0;
}
-static int adv7604_get_edid(struct v4l2_subdev *sd, struct v4l2_edid *edid)
+static int adv76xx_get_edid(struct v4l2_subdev *sd, struct v4l2_edid *edid)
{
- struct adv7604_state *state = to_state(sd);
+ struct adv76xx_state *state = to_state(sd);
u8 *data = NULL;
memset(edid->reserved, 0, sizeof(edid->reserved));
switch (edid->pad) {
- case ADV7604_PAD_HDMI_PORT_A:
+ case ADV76XX_PAD_HDMI_PORT_A:
case ADV7604_PAD_HDMI_PORT_B:
case ADV7604_PAD_HDMI_PORT_C:
case ADV7604_PAD_HDMI_PORT_D:
@@ -1982,10 +1988,10 @@
return -1;
}
-static int adv7604_set_edid(struct v4l2_subdev *sd, struct v4l2_edid *edid)
+static int adv76xx_set_edid(struct v4l2_subdev *sd, struct v4l2_edid *edid)
{
- struct adv7604_state *state = to_state(sd);
- const struct adv7604_chip_info *info = state->info;
+ struct adv76xx_state *state = to_state(sd);
+ const struct adv76xx_chip_info *info = state->info;
int spa_loc;
int err;
int i;
@@ -1999,7 +2005,7 @@
if (edid->blocks == 0) {
/* Disable hotplug and I2C access to EDID RAM from DDC port */
state->edid.present &= ~(1 << edid->pad);
- adv7604_set_hpd(state, state->edid.present);
+ adv76xx_set_hpd(state, state->edid.present);
rep_write_clr_set(sd, info->edid_enable_reg, 0x0f, state->edid.present);
/* Fall back to a 16:9 aspect ratio */
@@ -2023,7 +2029,7 @@
/* Disable hotplug and I2C access to EDID RAM from DDC port */
cancel_delayed_work_sync(&state->delayed_work_enable_hotplug);
- adv7604_set_hpd(state, 0);
+ adv76xx_set_hpd(state, 0);
rep_write_clr_set(sd, info->edid_enable_reg, 0x0f, 0x00);
spa_loc = get_edid_spa_location(edid->edid);
@@ -2031,7 +2037,7 @@
spa_loc = 0xc0; /* Default value [REF_02, p. 116] */
switch (edid->pad) {
- case ADV7604_PAD_HDMI_PORT_A:
+ case ADV76XX_PAD_HDMI_PORT_A:
state->spa_port_a[0] = edid->edid[spa_loc];
state->spa_port_a[1] = edid->edid[spa_loc + 1];
break;
@@ -2074,7 +2080,7 @@
return err;
}
- /* adv7604 calculates the checksums and enables I2C access to internal
+ /* adv76xx calculates the checksums and enables I2C access to internal
EDID RAM from DDC port. */
rep_write_clr_set(sd, info->edid_enable_reg, 0x0f, state->edid.present);
@@ -2138,10 +2144,10 @@
buf[8], buf[9], buf[10], buf[11], buf[12], buf[13]);
}
-static int adv7604_log_status(struct v4l2_subdev *sd)
+static int adv76xx_log_status(struct v4l2_subdev *sd)
{
- struct adv7604_state *state = to_state(sd);
- const struct adv7604_chip_info *info = state->info;
+ struct adv76xx_state *state = to_state(sd);
+ const struct adv76xx_chip_info *info = state->info;
struct v4l2_dv_timings timings;
struct stdi_readback stdi;
u8 reg_io_0x02 = io_read(sd, 0x02);
@@ -2200,7 +2206,7 @@
v4l2_info(sd, "STDI locked: %s\n", no_lock_stdi(sd) ? "false" : "true");
v4l2_info(sd, "CP locked: %s\n", no_lock_cp(sd) ? "false" : "true");
v4l2_info(sd, "CP free run: %s\n",
- (!!(cp_read(sd, 0xff) & 0x10) ? "on" : "off"));
+ (in_free_run(sd)) ? "on" : "off");
v4l2_info(sd, "Prim-mode = 0x%x, video std = 0x%x, v_freq = 0x%x\n",
io_read(sd, 0x01) & 0x0f, io_read(sd, 0x00) & 0x3f,
(io_read(sd, 0x01) & 0x70) >> 4);
@@ -2213,7 +2219,7 @@
stdi.lcf, stdi.bl, stdi.lcvs,
stdi.interlaced ? "interlaced" : "progressive",
stdi.hs_pol, stdi.vs_pol);
- if (adv7604_query_dv_timings(sd, &timings))
+ if (adv76xx_query_dv_timings(sd, &timings))
v4l2_info(sd, "No video detected\n");
else
v4l2_print_dv_timings(sd->name, "Detected format: ",
@@ -2235,7 +2241,7 @@
((reg_io_0x02 & 0x04) ^ (reg_io_0x02 & 0x01)) ?
"enabled" : "disabled");
v4l2_info(sd, "Color space conversion: %s\n",
- csc_coeff_sel_rb[cp_read(sd, 0xfc) >> 4]);
+ csc_coeff_sel_rb[cp_read(sd, info->cp_csc) >> 4]);
if (!is_digital_input(sd))
return 0;
@@ -2279,47 +2285,47 @@
/* ----------------------------------------------------------------------- */
-static const struct v4l2_ctrl_ops adv7604_ctrl_ops = {
- .s_ctrl = adv7604_s_ctrl,
+static const struct v4l2_ctrl_ops adv76xx_ctrl_ops = {
+ .s_ctrl = adv76xx_s_ctrl,
};
-static const struct v4l2_subdev_core_ops adv7604_core_ops = {
- .log_status = adv7604_log_status,
- .interrupt_service_routine = adv7604_isr,
+static const struct v4l2_subdev_core_ops adv76xx_core_ops = {
+ .log_status = adv76xx_log_status,
+ .interrupt_service_routine = adv76xx_isr,
#ifdef CONFIG_VIDEO_ADV_DEBUG
- .g_register = adv7604_g_register,
- .s_register = adv7604_s_register,
+ .g_register = adv76xx_g_register,
+ .s_register = adv76xx_s_register,
#endif
};
-static const struct v4l2_subdev_video_ops adv7604_video_ops = {
- .s_routing = adv7604_s_routing,
- .g_input_status = adv7604_g_input_status,
- .s_dv_timings = adv7604_s_dv_timings,
- .g_dv_timings = adv7604_g_dv_timings,
- .query_dv_timings = adv7604_query_dv_timings,
+static const struct v4l2_subdev_video_ops adv76xx_video_ops = {
+ .s_routing = adv76xx_s_routing,
+ .g_input_status = adv76xx_g_input_status,
+ .s_dv_timings = adv76xx_s_dv_timings,
+ .g_dv_timings = adv76xx_g_dv_timings,
+ .query_dv_timings = adv76xx_query_dv_timings,
};
-static const struct v4l2_subdev_pad_ops adv7604_pad_ops = {
- .enum_mbus_code = adv7604_enum_mbus_code,
- .get_fmt = adv7604_get_format,
- .set_fmt = adv7604_set_format,
- .get_edid = adv7604_get_edid,
- .set_edid = adv7604_set_edid,
- .dv_timings_cap = adv7604_dv_timings_cap,
- .enum_dv_timings = adv7604_enum_dv_timings,
+static const struct v4l2_subdev_pad_ops adv76xx_pad_ops = {
+ .enum_mbus_code = adv76xx_enum_mbus_code,
+ .get_fmt = adv76xx_get_format,
+ .set_fmt = adv76xx_set_format,
+ .get_edid = adv76xx_get_edid,
+ .set_edid = adv76xx_set_edid,
+ .dv_timings_cap = adv76xx_dv_timings_cap,
+ .enum_dv_timings = adv76xx_enum_dv_timings,
};
-static const struct v4l2_subdev_ops adv7604_ops = {
- .core = &adv7604_core_ops,
- .video = &adv7604_video_ops,
- .pad = &adv7604_pad_ops,
+static const struct v4l2_subdev_ops adv76xx_ops = {
+ .core = &adv76xx_core_ops,
+ .video = &adv76xx_video_ops,
+ .pad = &adv76xx_pad_ops,
};
/* -------------------------- custom ctrls ---------------------------------- */
static const struct v4l2_ctrl_config adv7604_ctrl_analog_sampling_phase = {
- .ops = &adv7604_ctrl_ops,
+ .ops = &adv76xx_ctrl_ops,
.id = V4L2_CID_ADV_RX_ANALOG_SAMPLING_PHASE,
.name = "Analog Sampling Phase",
.type = V4L2_CTRL_TYPE_INTEGER,
@@ -2329,8 +2335,8 @@
.def = 0,
};
-static const struct v4l2_ctrl_config adv7604_ctrl_free_run_color_manual = {
- .ops = &adv7604_ctrl_ops,
+static const struct v4l2_ctrl_config adv76xx_ctrl_free_run_color_manual = {
+ .ops = &adv76xx_ctrl_ops,
.id = V4L2_CID_ADV_RX_FREE_RUN_COLOR_MANUAL,
.name = "Free Running Color, Manual",
.type = V4L2_CTRL_TYPE_BOOLEAN,
@@ -2340,8 +2346,8 @@
.def = false,
};
-static const struct v4l2_ctrl_config adv7604_ctrl_free_run_color = {
- .ops = &adv7604_ctrl_ops,
+static const struct v4l2_ctrl_config adv76xx_ctrl_free_run_color = {
+ .ops = &adv76xx_ctrl_ops,
.id = V4L2_CID_ADV_RX_FREE_RUN_COLOR,
.name = "Free Running Color",
.type = V4L2_CTRL_TYPE_INTEGER,
@@ -2353,11 +2359,11 @@
/* ----------------------------------------------------------------------- */
-static int adv7604_core_init(struct v4l2_subdev *sd)
+static int adv76xx_core_init(struct v4l2_subdev *sd)
{
- struct adv7604_state *state = to_state(sd);
- const struct adv7604_chip_info *info = state->info;
- struct adv7604_platform_data *pdata = &state->pdata;
+ struct adv76xx_state *state = to_state(sd);
+ const struct adv76xx_chip_info *info = state->info;
+ struct adv76xx_platform_data *pdata = &state->pdata;
hdmi_write(sd, 0x48,
(pdata->disable_pwrdnb ? 0x80 : 0) |
@@ -2385,7 +2391,7 @@
io_write_clr_set(sd, 0x05, 0x0e, pdata->blank_data << 3 |
pdata->insert_av_codes << 2 |
pdata->replicate_av_codes << 1);
- adv7604_setup_format(state);
+ adv76xx_setup_format(state);
cp_write(sd, 0x69, 0x30); /* Enable CP CSC */
@@ -2415,7 +2421,7 @@
/* TODO from platform data */
afe_write(sd, 0xb5, 0x01); /* Setting MCLK to 256Fs */
- if (adv7604_has_afe(state)) {
+ if (adv76xx_has_afe(state)) {
afe_write(sd, 0x02, pdata->ain_sel); /* Select analog input muxing mode */
io_write_clr_set(sd, 0x30, 1 << 4, pdata->output_bus_lsb_to_msb << 4);
}
@@ -2440,7 +2446,7 @@
io_write(sd, 0x41, 0xd0); /* STDI irq for any change, disable INT2 */
}
-static void adv7604_unregister_clients(struct adv7604_state *state)
+static void adv76xx_unregister_clients(struct adv76xx_state *state)
{
unsigned int i;
@@ -2450,7 +2456,7 @@
}
}
-static struct i2c_client *adv7604_dummy_client(struct v4l2_subdev *sd,
+static struct i2c_client *adv76xx_dummy_client(struct v4l2_subdev *sd,
u8 addr, u8 io_reg)
{
struct i2c_client *client = v4l2_get_subdevdata(sd);
@@ -2460,74 +2466,74 @@
return i2c_new_dummy(client->adapter, io_read(sd, io_reg) >> 1);
}
-static const struct adv7604_reg_seq adv7604_recommended_settings_afe[] = {
+static const struct adv76xx_reg_seq adv7604_recommended_settings_afe[] = {
/* reset ADI recommended settings for HDMI: */
/* "ADV7604 Register Settings Recommendations (rev. 2.5, June 2010)" p. 4. */
- { ADV7604_REG(ADV7604_PAGE_HDMI, 0x0d), 0x04 }, /* HDMI filter optimization */
- { ADV7604_REG(ADV7604_PAGE_HDMI, 0x0d), 0x04 }, /* HDMI filter optimization */
- { ADV7604_REG(ADV7604_PAGE_HDMI, 0x3d), 0x00 }, /* DDC bus active pull-up control */
- { ADV7604_REG(ADV7604_PAGE_HDMI, 0x3e), 0x74 }, /* TMDS PLL optimization */
- { ADV7604_REG(ADV7604_PAGE_HDMI, 0x4e), 0x3b }, /* TMDS PLL optimization */
- { ADV7604_REG(ADV7604_PAGE_HDMI, 0x57), 0x74 }, /* TMDS PLL optimization */
- { ADV7604_REG(ADV7604_PAGE_HDMI, 0x58), 0x63 }, /* TMDS PLL optimization */
- { ADV7604_REG(ADV7604_PAGE_HDMI, 0x8d), 0x18 }, /* equaliser */
- { ADV7604_REG(ADV7604_PAGE_HDMI, 0x8e), 0x34 }, /* equaliser */
- { ADV7604_REG(ADV7604_PAGE_HDMI, 0x93), 0x88 }, /* equaliser */
- { ADV7604_REG(ADV7604_PAGE_HDMI, 0x94), 0x2e }, /* equaliser */
- { ADV7604_REG(ADV7604_PAGE_HDMI, 0x96), 0x00 }, /* enable automatic EQ changing */
+ { ADV76XX_REG(ADV76XX_PAGE_HDMI, 0x0d), 0x04 }, /* HDMI filter optimization */
+ { ADV76XX_REG(ADV76XX_PAGE_HDMI, 0x0d), 0x04 }, /* HDMI filter optimization */
+ { ADV76XX_REG(ADV76XX_PAGE_HDMI, 0x3d), 0x00 }, /* DDC bus active pull-up control */
+ { ADV76XX_REG(ADV76XX_PAGE_HDMI, 0x3e), 0x74 }, /* TMDS PLL optimization */
+ { ADV76XX_REG(ADV76XX_PAGE_HDMI, 0x4e), 0x3b }, /* TMDS PLL optimization */
+ { ADV76XX_REG(ADV76XX_PAGE_HDMI, 0x57), 0x74 }, /* TMDS PLL optimization */
+ { ADV76XX_REG(ADV76XX_PAGE_HDMI, 0x58), 0x63 }, /* TMDS PLL optimization */
+ { ADV76XX_REG(ADV76XX_PAGE_HDMI, 0x8d), 0x18 }, /* equaliser */
+ { ADV76XX_REG(ADV76XX_PAGE_HDMI, 0x8e), 0x34 }, /* equaliser */
+ { ADV76XX_REG(ADV76XX_PAGE_HDMI, 0x93), 0x88 }, /* equaliser */
+ { ADV76XX_REG(ADV76XX_PAGE_HDMI, 0x94), 0x2e }, /* equaliser */
+ { ADV76XX_REG(ADV76XX_PAGE_HDMI, 0x96), 0x00 }, /* enable automatic EQ changing */
/* set ADI recommended settings for digitizer */
/* "ADV7604 Register Settings Recommendations (rev. 2.5, June 2010)" p. 17. */
- { ADV7604_REG(ADV7604_PAGE_AFE, 0x12), 0x7b }, /* ADC noise shaping filter controls */
- { ADV7604_REG(ADV7604_PAGE_AFE, 0x0c), 0x1f }, /* CP core gain controls */
- { ADV7604_REG(ADV7604_PAGE_CP, 0x3e), 0x04 }, /* CP core pre-gain control */
- { ADV7604_REG(ADV7604_PAGE_CP, 0xc3), 0x39 }, /* CP coast control. Graphics mode */
- { ADV7604_REG(ADV7604_PAGE_CP, 0x40), 0x5c }, /* CP core pre-gain control. Graphics mode */
+ { ADV76XX_REG(ADV76XX_PAGE_AFE, 0x12), 0x7b }, /* ADC noise shaping filter controls */
+ { ADV76XX_REG(ADV76XX_PAGE_AFE, 0x0c), 0x1f }, /* CP core gain controls */
+ { ADV76XX_REG(ADV76XX_PAGE_CP, 0x3e), 0x04 }, /* CP core pre-gain control */
+ { ADV76XX_REG(ADV76XX_PAGE_CP, 0xc3), 0x39 }, /* CP coast control. Graphics mode */
+ { ADV76XX_REG(ADV76XX_PAGE_CP, 0x40), 0x5c }, /* CP core pre-gain control. Graphics mode */
- { ADV7604_REG_SEQ_TERM, 0 },
+ { ADV76XX_REG_SEQ_TERM, 0 },
};
-static const struct adv7604_reg_seq adv7604_recommended_settings_hdmi[] = {
+static const struct adv76xx_reg_seq adv7604_recommended_settings_hdmi[] = {
/* set ADI recommended settings for HDMI: */
/* "ADV7604 Register Settings Recommendations (rev. 2.5, June 2010)" p. 4. */
- { ADV7604_REG(ADV7604_PAGE_HDMI, 0x0d), 0x84 }, /* HDMI filter optimization */
- { ADV7604_REG(ADV7604_PAGE_HDMI, 0x3d), 0x10 }, /* DDC bus active pull-up control */
- { ADV7604_REG(ADV7604_PAGE_HDMI, 0x3e), 0x39 }, /* TMDS PLL optimization */
- { ADV7604_REG(ADV7604_PAGE_HDMI, 0x4e), 0x3b }, /* TMDS PLL optimization */
- { ADV7604_REG(ADV7604_PAGE_HDMI, 0x57), 0xb6 }, /* TMDS PLL optimization */
- { ADV7604_REG(ADV7604_PAGE_HDMI, 0x58), 0x03 }, /* TMDS PLL optimization */
- { ADV7604_REG(ADV7604_PAGE_HDMI, 0x8d), 0x18 }, /* equaliser */
- { ADV7604_REG(ADV7604_PAGE_HDMI, 0x8e), 0x34 }, /* equaliser */
- { ADV7604_REG(ADV7604_PAGE_HDMI, 0x93), 0x8b }, /* equaliser */
- { ADV7604_REG(ADV7604_PAGE_HDMI, 0x94), 0x2d }, /* equaliser */
- { ADV7604_REG(ADV7604_PAGE_HDMI, 0x96), 0x01 }, /* enable automatic EQ changing */
+ { ADV76XX_REG(ADV76XX_PAGE_HDMI, 0x0d), 0x84 }, /* HDMI filter optimization */
+ { ADV76XX_REG(ADV76XX_PAGE_HDMI, 0x3d), 0x10 }, /* DDC bus active pull-up control */
+ { ADV76XX_REG(ADV76XX_PAGE_HDMI, 0x3e), 0x39 }, /* TMDS PLL optimization */
+ { ADV76XX_REG(ADV76XX_PAGE_HDMI, 0x4e), 0x3b }, /* TMDS PLL optimization */
+ { ADV76XX_REG(ADV76XX_PAGE_HDMI, 0x57), 0xb6 }, /* TMDS PLL optimization */
+ { ADV76XX_REG(ADV76XX_PAGE_HDMI, 0x58), 0x03 }, /* TMDS PLL optimization */
+ { ADV76XX_REG(ADV76XX_PAGE_HDMI, 0x8d), 0x18 }, /* equaliser */
+ { ADV76XX_REG(ADV76XX_PAGE_HDMI, 0x8e), 0x34 }, /* equaliser */
+ { ADV76XX_REG(ADV76XX_PAGE_HDMI, 0x93), 0x8b }, /* equaliser */
+ { ADV76XX_REG(ADV76XX_PAGE_HDMI, 0x94), 0x2d }, /* equaliser */
+ { ADV76XX_REG(ADV76XX_PAGE_HDMI, 0x96), 0x01 }, /* enable automatic EQ changing */
/* reset ADI recommended settings for digitizer */
/* "ADV7604 Register Settings Recommendations (rev. 2.5, June 2010)" p. 17. */
- { ADV7604_REG(ADV7604_PAGE_AFE, 0x12), 0xfb }, /* ADC noise shaping filter controls */
- { ADV7604_REG(ADV7604_PAGE_AFE, 0x0c), 0x0d }, /* CP core gain controls */
+ { ADV76XX_REG(ADV76XX_PAGE_AFE, 0x12), 0xfb }, /* ADC noise shaping filter controls */
+ { ADV76XX_REG(ADV76XX_PAGE_AFE, 0x0c), 0x0d }, /* CP core gain controls */
- { ADV7604_REG_SEQ_TERM, 0 },
+ { ADV76XX_REG_SEQ_TERM, 0 },
};
-static const struct adv7604_reg_seq adv7611_recommended_settings_hdmi[] = {
+static const struct adv76xx_reg_seq adv7611_recommended_settings_hdmi[] = {
/* ADV7611 Register Settings Recommendations Rev 1.5, May 2014 */
- { ADV7604_REG(ADV7604_PAGE_CP, 0x6c), 0x00 },
- { ADV7604_REG(ADV7604_PAGE_HDMI, 0x9b), 0x03 },
- { ADV7604_REG(ADV7604_PAGE_HDMI, 0x6f), 0x08 },
- { ADV7604_REG(ADV7604_PAGE_HDMI, 0x85), 0x1f },
- { ADV7604_REG(ADV7604_PAGE_HDMI, 0x87), 0x70 },
- { ADV7604_REG(ADV7604_PAGE_HDMI, 0x57), 0xda },
- { ADV7604_REG(ADV7604_PAGE_HDMI, 0x58), 0x01 },
- { ADV7604_REG(ADV7604_PAGE_HDMI, 0x03), 0x98 },
- { ADV7604_REG(ADV7604_PAGE_HDMI, 0x4c), 0x44 },
- { ADV7604_REG(ADV7604_PAGE_HDMI, 0x8d), 0x04 },
- { ADV7604_REG(ADV7604_PAGE_HDMI, 0x8e), 0x1e },
+ { ADV76XX_REG(ADV76XX_PAGE_CP, 0x6c), 0x00 },
+ { ADV76XX_REG(ADV76XX_PAGE_HDMI, 0x9b), 0x03 },
+ { ADV76XX_REG(ADV76XX_PAGE_HDMI, 0x6f), 0x08 },
+ { ADV76XX_REG(ADV76XX_PAGE_HDMI, 0x85), 0x1f },
+ { ADV76XX_REG(ADV76XX_PAGE_HDMI, 0x87), 0x70 },
+ { ADV76XX_REG(ADV76XX_PAGE_HDMI, 0x57), 0xda },
+ { ADV76XX_REG(ADV76XX_PAGE_HDMI, 0x58), 0x01 },
+ { ADV76XX_REG(ADV76XX_PAGE_HDMI, 0x03), 0x98 },
+ { ADV76XX_REG(ADV76XX_PAGE_HDMI, 0x4c), 0x44 },
+ { ADV76XX_REG(ADV76XX_PAGE_HDMI, 0x8d), 0x04 },
+ { ADV76XX_REG(ADV76XX_PAGE_HDMI, 0x8e), 0x1e },
- { ADV7604_REG_SEQ_TERM, 0 },
+ { ADV76XX_REG_SEQ_TERM, 0 },
};
-static const struct adv7604_chip_info adv7604_chip_info[] = {
+static const struct adv76xx_chip_info adv76xx_chip_info[] = {
[ADV7604] = {
.type = ADV7604,
.has_afe = true,
@@ -2539,6 +2545,7 @@
.tdms_lock_mask = 0xe0,
.cable_det_mask = 0x1e,
.fmt_change_digital_mask = 0xc1,
+ .cp_csc = 0xfc,
.formats = adv7604_formats,
.nformats = ARRAY_SIZE(adv7604_formats),
.set_termination = adv7604_set_termination,
@@ -2553,18 +2560,18 @@
[0] = ARRAY_SIZE(adv7604_recommended_settings_afe),
[1] = ARRAY_SIZE(adv7604_recommended_settings_hdmi),
},
- .page_mask = BIT(ADV7604_PAGE_IO) | BIT(ADV7604_PAGE_AVLINK) |
- BIT(ADV7604_PAGE_CEC) | BIT(ADV7604_PAGE_INFOFRAME) |
+ .page_mask = BIT(ADV76XX_PAGE_IO) | BIT(ADV7604_PAGE_AVLINK) |
+ BIT(ADV76XX_PAGE_CEC) | BIT(ADV76XX_PAGE_INFOFRAME) |
BIT(ADV7604_PAGE_ESDP) | BIT(ADV7604_PAGE_DPP) |
- BIT(ADV7604_PAGE_AFE) | BIT(ADV7604_PAGE_REP) |
- BIT(ADV7604_PAGE_EDID) | BIT(ADV7604_PAGE_HDMI) |
- BIT(ADV7604_PAGE_TEST) | BIT(ADV7604_PAGE_CP) |
+ BIT(ADV76XX_PAGE_AFE) | BIT(ADV76XX_PAGE_REP) |
+ BIT(ADV76XX_PAGE_EDID) | BIT(ADV76XX_PAGE_HDMI) |
+ BIT(ADV76XX_PAGE_TEST) | BIT(ADV76XX_PAGE_CP) |
BIT(ADV7604_PAGE_VDP),
},
[ADV7611] = {
.type = ADV7611,
.has_afe = false,
- .max_port = ADV7604_PAD_HDMI_PORT_A,
+ .max_port = ADV76XX_PAD_HDMI_PORT_A,
.num_dv_ports = 1,
.edid_enable_reg = 0x74,
.edid_status_reg = 0x76,
@@ -2572,6 +2579,7 @@
.tdms_lock_mask = 0x43,
.cable_det_mask = 0x01,
.fmt_change_digital_mask = 0x03,
+ .cp_csc = 0xf4,
.formats = adv7611_formats,
.nformats = ARRAY_SIZE(adv7611_formats),
.set_termination = adv7611_set_termination,
@@ -2584,34 +2592,34 @@
.num_recommended_settings = {
[1] = ARRAY_SIZE(adv7611_recommended_settings_hdmi),
},
- .page_mask = BIT(ADV7604_PAGE_IO) | BIT(ADV7604_PAGE_CEC) |
- BIT(ADV7604_PAGE_INFOFRAME) | BIT(ADV7604_PAGE_AFE) |
- BIT(ADV7604_PAGE_REP) | BIT(ADV7604_PAGE_EDID) |
- BIT(ADV7604_PAGE_HDMI) | BIT(ADV7604_PAGE_CP),
+ .page_mask = BIT(ADV76XX_PAGE_IO) | BIT(ADV76XX_PAGE_CEC) |
+ BIT(ADV76XX_PAGE_INFOFRAME) | BIT(ADV76XX_PAGE_AFE) |
+ BIT(ADV76XX_PAGE_REP) | BIT(ADV76XX_PAGE_EDID) |
+ BIT(ADV76XX_PAGE_HDMI) | BIT(ADV76XX_PAGE_CP),
},
};
-static struct i2c_device_id adv7604_i2c_id[] = {
- { "adv7604", (kernel_ulong_t)&adv7604_chip_info[ADV7604] },
- { "adv7611", (kernel_ulong_t)&adv7604_chip_info[ADV7611] },
+static struct i2c_device_id adv76xx_i2c_id[] = {
+ { "adv7604", (kernel_ulong_t)&adv76xx_chip_info[ADV7604] },
+ { "adv7611", (kernel_ulong_t)&adv76xx_chip_info[ADV7611] },
{ }
};
-MODULE_DEVICE_TABLE(i2c, adv7604_i2c_id);
+MODULE_DEVICE_TABLE(i2c, adv76xx_i2c_id);
-static struct of_device_id adv7604_of_id[] __maybe_unused = {
- { .compatible = "adi,adv7611", .data = &adv7604_chip_info[ADV7611] },
+static struct of_device_id adv76xx_of_id[] __maybe_unused = {
+ { .compatible = "adi,adv7611", .data = &adv76xx_chip_info[ADV7611] },
{ }
};
-MODULE_DEVICE_TABLE(of, adv7604_of_id);
+MODULE_DEVICE_TABLE(of, adv76xx_of_id);
-static int adv7604_parse_dt(struct adv7604_state *state)
+static int adv76xx_parse_dt(struct adv76xx_state *state)
{
struct v4l2_of_endpoint bus_cfg;
struct device_node *endpoint;
struct device_node *np;
unsigned int flags;
- np = state->i2c_clients[ADV7604_PAGE_IO]->dev.of_node;
+ np = state->i2c_clients[ADV76XX_PAGE_IO]->dev.of_node;
/* Parse the endpoint. */
endpoint = of_graph_get_next_endpoint(np, NULL);
@@ -2638,20 +2646,20 @@
}
/* Disable the interrupt for now as no DT-based board uses it. */
- state->pdata.int1_config = ADV7604_INT1_CONFIG_DISABLED;
+ state->pdata.int1_config = ADV76XX_INT1_CONFIG_DISABLED;
/* Use the default I2C addresses. */
state->pdata.i2c_addresses[ADV7604_PAGE_AVLINK] = 0x42;
- state->pdata.i2c_addresses[ADV7604_PAGE_CEC] = 0x40;
- state->pdata.i2c_addresses[ADV7604_PAGE_INFOFRAME] = 0x3e;
+ state->pdata.i2c_addresses[ADV76XX_PAGE_CEC] = 0x40;
+ state->pdata.i2c_addresses[ADV76XX_PAGE_INFOFRAME] = 0x3e;
state->pdata.i2c_addresses[ADV7604_PAGE_ESDP] = 0x38;
state->pdata.i2c_addresses[ADV7604_PAGE_DPP] = 0x3c;
- state->pdata.i2c_addresses[ADV7604_PAGE_AFE] = 0x26;
- state->pdata.i2c_addresses[ADV7604_PAGE_REP] = 0x32;
- state->pdata.i2c_addresses[ADV7604_PAGE_EDID] = 0x36;
- state->pdata.i2c_addresses[ADV7604_PAGE_HDMI] = 0x34;
- state->pdata.i2c_addresses[ADV7604_PAGE_TEST] = 0x30;
- state->pdata.i2c_addresses[ADV7604_PAGE_CP] = 0x22;
+ state->pdata.i2c_addresses[ADV76XX_PAGE_AFE] = 0x26;
+ state->pdata.i2c_addresses[ADV76XX_PAGE_REP] = 0x32;
+ state->pdata.i2c_addresses[ADV76XX_PAGE_EDID] = 0x36;
+ state->pdata.i2c_addresses[ADV76XX_PAGE_HDMI] = 0x34;
+ state->pdata.i2c_addresses[ADV76XX_PAGE_TEST] = 0x30;
+ state->pdata.i2c_addresses[ADV76XX_PAGE_CP] = 0x22;
state->pdata.i2c_addresses[ADV7604_PAGE_VDP] = 0x24;
/* Hardcode the remaining platform data fields. */
@@ -2666,12 +2674,12 @@
return 0;
}
-static int adv7604_probe(struct i2c_client *client,
+static int adv76xx_probe(struct i2c_client *client,
const struct i2c_device_id *id)
{
static const struct v4l2_dv_timings cea640x480 =
V4L2_DV_BT_CEA_640X480P59_94;
- struct adv7604_state *state;
+ struct adv76xx_state *state;
struct v4l2_ctrl_handler *hdl;
struct v4l2_subdev *sd;
unsigned int i;
@@ -2681,16 +2689,16 @@
/* Check if the adapter supports the needed features */
if (!i2c_check_functionality(client->adapter, I2C_FUNC_SMBUS_BYTE_DATA))
return -EIO;
- v4l_dbg(1, debug, client, "detecting adv7604 client on address 0x%x\n",
+ v4l_dbg(1, debug, client, "detecting adv76xx client on address 0x%x\n",
client->addr << 1);
state = devm_kzalloc(&client->dev, sizeof(*state), GFP_KERNEL);
if (!state) {
- v4l_err(client, "Could not allocate adv7604_state memory!\n");
+ v4l_err(client, "Could not allocate adv76xx_state memory!\n");
return -ENOMEM;
}
- state->i2c_clients[ADV7604_PAGE_IO] = client;
+ state->i2c_clients[ADV76XX_PAGE_IO] = client;
/* initialize variables */
state->restart_stdi_once = true;
@@ -2699,18 +2707,18 @@
if (IS_ENABLED(CONFIG_OF) && client->dev.of_node) {
const struct of_device_id *oid;
- oid = of_match_node(adv7604_of_id, client->dev.of_node);
+ oid = of_match_node(adv76xx_of_id, client->dev.of_node);
state->info = oid->data;
- err = adv7604_parse_dt(state);
+ err = adv76xx_parse_dt(state);
if (err < 0) {
v4l_err(client, "DT parsing error\n");
return err;
}
} else if (client->dev.platform_data) {
- struct adv7604_platform_data *pdata = client->dev.platform_data;
+ struct adv76xx_platform_data *pdata = client->dev.platform_data;
- state->info = (const struct adv7604_chip_info *)id->driver_data;
+ state->info = (const struct adv76xx_chip_info *)id->driver_data;
state->pdata = *pdata;
} else {
v4l_err(client, "No platform data!\n");
@@ -2720,20 +2728,20 @@
/* Request GPIOs. */
for (i = 0; i < state->info->num_dv_ports; ++i) {
state->hpd_gpio[i] =
- devm_gpiod_get_index(&client->dev, "hpd", i);
+ devm_gpiod_get_index_optional(&client->dev, "hpd", i,
+ GPIOD_OUT_LOW);
if (IS_ERR(state->hpd_gpio[i]))
- continue;
+ return PTR_ERR(state->hpd_gpio[i]);
- gpiod_direction_output(state->hpd_gpio[i], 0);
-
- v4l_info(client, "Handling HPD %u GPIO\n", i);
+ if (state->hpd_gpio[i])
+ v4l_info(client, "Handling HPD %u GPIO\n", i);
}
state->timings = cea640x480;
- state->format = adv7604_format_info(state, MEDIA_BUS_FMT_YUYV8_2X8);
+ state->format = adv76xx_format_info(state, MEDIA_BUS_FMT_YUYV8_2X8);
sd = &state->sd;
- v4l2_i2c_subdev_init(sd, client, &adv7604_ops);
+ v4l2_i2c_subdev_init(sd, client, &adv76xx_ops);
snprintf(sd->name, sizeof(sd->name), "%s %d-%04x",
id->name, i2c_adapter_id(client->adapter),
client->addr);
@@ -2763,15 +2771,15 @@
/* control handlers */
hdl = &state->hdl;
- v4l2_ctrl_handler_init(hdl, adv7604_has_afe(state) ? 9 : 8);
+ v4l2_ctrl_handler_init(hdl, adv76xx_has_afe(state) ? 9 : 8);
- v4l2_ctrl_new_std(hdl, &adv7604_ctrl_ops,
+ v4l2_ctrl_new_std(hdl, &adv76xx_ctrl_ops,
V4L2_CID_BRIGHTNESS, -128, 127, 1, 0);
- v4l2_ctrl_new_std(hdl, &adv7604_ctrl_ops,
+ v4l2_ctrl_new_std(hdl, &adv76xx_ctrl_ops,
V4L2_CID_CONTRAST, 0, 255, 1, 128);
- v4l2_ctrl_new_std(hdl, &adv7604_ctrl_ops,
+ v4l2_ctrl_new_std(hdl, &adv76xx_ctrl_ops,
V4L2_CID_SATURATION, 0, 255, 1, 128);
- v4l2_ctrl_new_std(hdl, &adv7604_ctrl_ops,
+ v4l2_ctrl_new_std(hdl, &adv76xx_ctrl_ops,
V4L2_CID_HUE, 0, 128, 1, 0);
/* private controls */
@@ -2779,18 +2787,18 @@
V4L2_CID_DV_RX_POWER_PRESENT, 0,
(1 << state->info->num_dv_ports) - 1, 0, 0);
state->rgb_quantization_range_ctrl =
- v4l2_ctrl_new_std_menu(hdl, &adv7604_ctrl_ops,
+ v4l2_ctrl_new_std_menu(hdl, &adv76xx_ctrl_ops,
V4L2_CID_DV_RX_RGB_RANGE, V4L2_DV_RGB_RANGE_FULL,
0, V4L2_DV_RGB_RANGE_AUTO);
/* custom controls */
- if (adv7604_has_afe(state))
+ if (adv76xx_has_afe(state))
state->analog_sampling_phase_ctrl =
v4l2_ctrl_new_custom(hdl, &adv7604_ctrl_analog_sampling_phase, NULL);
state->free_run_color_manual_ctrl =
- v4l2_ctrl_new_custom(hdl, &adv7604_ctrl_free_run_color_manual, NULL);
+ v4l2_ctrl_new_custom(hdl, &adv76xx_ctrl_free_run_color_manual, NULL);
state->free_run_color_ctrl =
- v4l2_ctrl_new_custom(hdl, &adv7604_ctrl_free_run_color, NULL);
+ v4l2_ctrl_new_custom(hdl, &adv76xx_ctrl_free_run_color, NULL);
sd->ctrl_handler = hdl;
if (hdl->error) {
@@ -2799,22 +2807,22 @@
}
state->detect_tx_5v_ctrl->is_private = true;
state->rgb_quantization_range_ctrl->is_private = true;
- if (adv7604_has_afe(state))
+ if (adv76xx_has_afe(state))
state->analog_sampling_phase_ctrl->is_private = true;
state->free_run_color_manual_ctrl->is_private = true;
state->free_run_color_ctrl->is_private = true;
- if (adv7604_s_detect_tx_5v_ctrl(sd)) {
+ if (adv76xx_s_detect_tx_5v_ctrl(sd)) {
err = -ENODEV;
goto err_hdl;
}
- for (i = 1; i < ADV7604_PAGE_MAX; ++i) {
+ for (i = 1; i < ADV76XX_PAGE_MAX; ++i) {
if (!(BIT(i) & state->info->page_mask))
continue;
state->i2c_clients[i] =
- adv7604_dummy_client(sd, state->pdata.i2c_addresses[i],
+ adv76xx_dummy_client(sd, state->pdata.i2c_addresses[i],
0xf2 + i);
if (state->i2c_clients[i] == NULL) {
err = -ENOMEM;
@@ -2832,7 +2840,7 @@
}
INIT_DELAYED_WORK(&state->delayed_work_enable_hotplug,
- adv7604_delayed_work_enable_hotplug);
+ adv76xx_delayed_work_enable_hotplug);
state->source_pad = state->info->num_dv_ports
+ (state->info->has_afe ? 2 : 0);
@@ -2845,7 +2853,7 @@
if (err)
goto err_work_queues;
- err = adv7604_core_init(sd);
+ err = adv76xx_core_init(sd);
if (err)
goto err_entity;
v4l2_info(sd, "%s found @ 0x%x (%s)\n", client->name,
@@ -2863,7 +2871,7 @@
cancel_delayed_work(&state->delayed_work_enable_hotplug);
destroy_workqueue(state->work_queues);
err_i2c:
- adv7604_unregister_clients(state);
+ adv76xx_unregister_clients(state);
err_hdl:
v4l2_ctrl_handler_free(hdl);
return err;
@@ -2871,32 +2879,31 @@
/* ----------------------------------------------------------------------- */
-static int adv7604_remove(struct i2c_client *client)
+static int adv76xx_remove(struct i2c_client *client)
{
struct v4l2_subdev *sd = i2c_get_clientdata(client);
- struct adv7604_state *state = to_state(sd);
+ struct adv76xx_state *state = to_state(sd);
cancel_delayed_work(&state->delayed_work_enable_hotplug);
destroy_workqueue(state->work_queues);
v4l2_async_unregister_subdev(sd);
- v4l2_device_unregister_subdev(sd);
media_entity_cleanup(&sd->entity);
- adv7604_unregister_clients(to_state(sd));
+ adv76xx_unregister_clients(to_state(sd));
v4l2_ctrl_handler_free(sd->ctrl_handler);
return 0;
}
/* ----------------------------------------------------------------------- */
-static struct i2c_driver adv7604_driver = {
+static struct i2c_driver adv76xx_driver = {
.driver = {
.owner = THIS_MODULE,
.name = "adv7604",
- .of_match_table = of_match_ptr(adv7604_of_id),
+ .of_match_table = of_match_ptr(adv76xx_of_id),
},
- .probe = adv7604_probe,
- .remove = adv7604_remove,
- .id_table = adv7604_i2c_id,
+ .probe = adv76xx_probe,
+ .remove = adv76xx_remove,
+ .id_table = adv76xx_i2c_id,
};
-module_i2c_driver(adv7604_driver);
+module_i2c_driver(adv76xx_driver);
diff --git a/drivers/media/i2c/adv7842.c b/drivers/media/i2c/adv7842.c
index 7c215ee..b5a37fe 100644
--- a/drivers/media/i2c/adv7842.c
+++ b/drivers/media/i2c/adv7842.c
@@ -1119,7 +1119,7 @@
/* Receiving DVI-D signal
* ADV7842 selects RGB limited range regardless of
* input format (CE/IT) in automatic mode */
- if (state->timings.bt.standards & V4L2_DV_BT_STD_CEA861) {
+ if (state->timings.bt.flags & V4L2_DV_FL_IS_CE_VIDEO) {
/* RGB limited range (16-235) */
io_write_and_or(sd, 0x02, 0x0f, 0x00);
} else {
@@ -1901,7 +1901,8 @@
return 0;
}
- if (state->timings.bt.standards & V4L2_DV_BT_STD_CEA861) {
+ fmt->colorspace = V4L2_COLORSPACE_SRGB;
+ if (state->timings.bt.flags & V4L2_DV_FL_IS_CE_VIDEO) {
fmt->colorspace = (state->timings.bt.height <= 576) ?
V4L2_COLORSPACE_SMPTE170M : V4L2_COLORSPACE_REC709;
}
diff --git a/drivers/media/i2c/cx25840/cx25840-core.c b/drivers/media/i2c/cx25840/cx25840-core.c
index 573e088..bd49644 100644
--- a/drivers/media/i2c/cx25840/cx25840-core.c
+++ b/drivers/media/i2c/cx25840/cx25840-core.c
@@ -5137,6 +5137,9 @@
int default_volume;
u32 id;
u16 device_id;
+#if defined(CONFIG_MEDIA_CONTROLLER)
+ int ret;
+#endif
/* Check if the adapter supports the needed features */
if (!i2c_check_functionality(client->adapter, I2C_FUNC_SMBUS_BYTE_DATA))
@@ -5178,6 +5181,33 @@
sd = &state->sd;
v4l2_i2c_subdev_init(sd, client, &cx25840_ops);
+#if defined(CONFIG_MEDIA_CONTROLLER)
+ /*
+ * TODO: add media controller support for analog video inputs like
+ * composite, svideo, etc.
+ * A real input pad for this analog demod would be like:
+ *                  ___________
+ * TUNER --------> |           |
+ *                 |           |
+ * SVIDEO .......> |  cx25840  |
+ *                 |           |
+ * COMPOSITE1 ...> |___________|
+ *
+ * However, at least for now, there's not much gain in modelling
+ * those extra inputs. So, let's add it only when needed.
+ */
+ state->pads[CX25840_PAD_INPUT].flags = MEDIA_PAD_FL_SINK;
+ state->pads[CX25840_PAD_VID_OUT].flags = MEDIA_PAD_FL_SOURCE;
+ state->pads[CX25840_PAD_VBI_OUT].flags = MEDIA_PAD_FL_SOURCE;
+ sd->entity.type = MEDIA_ENT_T_V4L2_SUBDEV_DECODER;
+
+ ret = media_entity_init(&sd->entity, ARRAY_SIZE(state->pads),
+ state->pads, 0);
+ if (ret < 0) {
+ v4l_info(client, "failed to initialize media entity!\n");
+ return ret;
+ }
+#endif
switch (id) {
case CX23885_AV:
diff --git a/drivers/media/i2c/cx25840/cx25840-core.h b/drivers/media/i2c/cx25840/cx25840-core.h
index 37bc042..fdea48c 100644
--- a/drivers/media/i2c/cx25840/cx25840-core.h
+++ b/drivers/media/i2c/cx25840/cx25840-core.h
@@ -41,6 +41,14 @@
CX25837,
};
+enum cx25840_media_pads {
+ CX25840_PAD_INPUT,
+ CX25840_PAD_VID_OUT,
+ CX25840_PAD_VBI_OUT,
+
+ CX25840_NUM_PADS
+};
+
struct cx25840_state {
struct i2c_client *c;
struct v4l2_subdev sd;
@@ -64,6 +72,9 @@
wait_queue_head_t fw_wait; /* wake up when the fw load is finished */
struct work_struct fw_work; /* work entry for fw load */
struct cx25840_ir_state *ir_state;
+#if defined(CONFIG_MEDIA_CONTROLLER)
+ struct media_pad pads[CX25840_NUM_PADS];
+#endif
};
static inline struct cx25840_state *to_state(struct v4l2_subdev *sd)
diff --git a/drivers/media/i2c/m5mols/m5mols_core.c b/drivers/media/i2c/m5mols/m5mols_core.c
index 6ed16e5..6404c0d 100644
--- a/drivers/media/i2c/m5mols/m5mols_core.c
+++ b/drivers/media/i2c/m5mols/m5mols_core.c
@@ -531,17 +531,17 @@
}
static struct v4l2_mbus_framefmt *__find_format(struct m5mols_info *info,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
enum v4l2_subdev_format_whence which,
enum m5mols_restype type)
{
if (which == V4L2_SUBDEV_FORMAT_TRY)
- return fh ? v4l2_subdev_get_try_format(fh, 0) : NULL;
+ return cfg ? v4l2_subdev_get_try_format(&info->sd, cfg, 0) : NULL;
return &info->ffmt[type];
}
-static int m5mols_get_fmt(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+static int m5mols_get_fmt(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct m5mols_info *info = to_m5mols(sd);
@@ -550,7 +550,7 @@
mutex_lock(&info->lock);
- format = __find_format(info, fh, fmt->which, info->res_type);
+ format = __find_format(info, cfg, fmt->which, info->res_type);
if (format)
fmt->format = *format;
else
@@ -560,7 +560,7 @@
return ret;
}
-static int m5mols_set_fmt(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+static int m5mols_set_fmt(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct m5mols_info *info = to_m5mols(sd);
@@ -574,7 +574,7 @@
if (ret < 0)
return ret;
- sfmt = __find_format(info, fh, fmt->which, type);
+ sfmt = __find_format(info, cfg, fmt->which, type);
if (!sfmt)
return 0;
@@ -640,7 +640,7 @@
static int m5mols_enum_mbus_code(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_mbus_code_enum *code)
{
if (!code || code->index >= SIZE_DEFAULT_FFMT)
@@ -895,7 +895,7 @@
*/
static int m5mols_open(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh)
{
- struct v4l2_mbus_framefmt *format = v4l2_subdev_get_try_format(fh, 0);
+ struct v4l2_mbus_framefmt *format = v4l2_subdev_get_try_format(sd, fh->pad, 0);
*format = m5mols_default_ffmt[0];
return 0;
diff --git a/drivers/media/i2c/mt9m032.c b/drivers/media/i2c/mt9m032.c
index 7643122..c7747bd 100644
--- a/drivers/media/i2c/mt9m032.c
+++ b/drivers/media/i2c/mt9m032.c
@@ -317,7 +317,7 @@
*/
static int mt9m032_enum_mbus_code(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_mbus_code_enum *code)
{
if (code->index != 0)
@@ -328,7 +328,7 @@
}
static int mt9m032_enum_frame_size(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_frame_size_enum *fse)
{
if (fse->index != 0 || fse->code != MEDIA_BUS_FMT_Y8_1X8)
@@ -345,18 +345,18 @@
/**
* __mt9m032_get_pad_crop() - get crop rect
* @sensor: pointer to the sensor struct
- * @fh: file handle for getting the try crop rect from
+ * @cfg: v4l2_subdev_pad_config for getting the try crop rect from
* @which: select try or active crop rect
*
* Returns a pointer to the current active or cfg-relative try crop rect
*/
static struct v4l2_rect *
-__mt9m032_get_pad_crop(struct mt9m032 *sensor, struct v4l2_subdev_fh *fh,
+__mt9m032_get_pad_crop(struct mt9m032 *sensor, struct v4l2_subdev_pad_config *cfg,
enum v4l2_subdev_format_whence which)
{
switch (which) {
case V4L2_SUBDEV_FORMAT_TRY:
- return v4l2_subdev_get_try_crop(fh, 0);
+ return v4l2_subdev_get_try_crop(&sensor->subdev, cfg, 0);
case V4L2_SUBDEV_FORMAT_ACTIVE:
return &sensor->crop;
default:
@@ -367,18 +367,18 @@
/**
* __mt9m032_get_pad_format() - get format
* @sensor: pointer to the sensor struct
- * @fh: file handle for getting the try format from
+ * @cfg: v4l2_subdev_pad_config for getting the try format from
* @which: select try or active format
*
* Returns a pointer to the current active or cfg-relative try format
*/
static struct v4l2_mbus_framefmt *
-__mt9m032_get_pad_format(struct mt9m032 *sensor, struct v4l2_subdev_fh *fh,
+__mt9m032_get_pad_format(struct mt9m032 *sensor, struct v4l2_subdev_pad_config *cfg,
enum v4l2_subdev_format_whence which)
{
switch (which) {
case V4L2_SUBDEV_FORMAT_TRY:
- return v4l2_subdev_get_try_format(fh, 0);
+ return v4l2_subdev_get_try_format(&sensor->subdev, cfg, 0);
case V4L2_SUBDEV_FORMAT_ACTIVE:
return &sensor->format;
default:
@@ -387,20 +387,20 @@
}
static int mt9m032_get_pad_format(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct mt9m032 *sensor = to_mt9m032(subdev);
mutex_lock(&sensor->lock);
- fmt->format = *__mt9m032_get_pad_format(sensor, fh, fmt->which);
+ fmt->format = *__mt9m032_get_pad_format(sensor, cfg, fmt->which);
mutex_unlock(&sensor->lock);
return 0;
}
static int mt9m032_set_pad_format(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct mt9m032 *sensor = to_mt9m032(subdev);
@@ -414,7 +414,7 @@
}
/* Scaling is not supported, the format is thus fixed. */
- fmt->format = *__mt9m032_get_pad_format(sensor, fh, fmt->which);
+ fmt->format = *__mt9m032_get_pad_format(sensor, cfg, fmt->which);
ret = 0;
done:
@@ -423,7 +423,7 @@
}
static int mt9m032_get_pad_selection(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_selection *sel)
{
struct mt9m032 *sensor = to_mt9m032(subdev);
@@ -432,14 +432,14 @@
return -EINVAL;
mutex_lock(&sensor->lock);
- sel->r = *__mt9m032_get_pad_crop(sensor, fh, sel->which);
+ sel->r = *__mt9m032_get_pad_crop(sensor, cfg, sel->which);
mutex_unlock(&sensor->lock);
return 0;
}
static int mt9m032_set_pad_selection(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_selection *sel)
{
struct mt9m032 *sensor = to_mt9m032(subdev);
@@ -475,13 +475,13 @@
rect.height = min_t(unsigned int, rect.height,
MT9M032_PIXEL_ARRAY_HEIGHT - rect.top);
- __crop = __mt9m032_get_pad_crop(sensor, fh, sel->which);
+ __crop = __mt9m032_get_pad_crop(sensor, cfg, sel->which);
if (rect.width != __crop->width || rect.height != __crop->height) {
/* Reset the output image size if the crop rectangle size has
* been modified.
*/
- format = __mt9m032_get_pad_format(sensor, fh, sel->which);
+ format = __mt9m032_get_pad_format(sensor, cfg, sel->which);
format->width = rect.width;
format->height = rect.height;
}
diff --git a/drivers/media/i2c/mt9p031.c b/drivers/media/i2c/mt9p031.c
index e3acae9..0db15f5 100644
--- a/drivers/media/i2c/mt9p031.c
+++ b/drivers/media/i2c/mt9p031.c
@@ -15,12 +15,11 @@
#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/device.h>
-#include <linux/gpio.h>
+#include <linux/gpio/consumer.h>
#include <linux/i2c.h>
#include <linux/log2.h>
#include <linux/module.h>
#include <linux/of.h>
-#include <linux/of_gpio.h>
#include <linux/of_graph.h>
#include <linux/pm.h>
#include <linux/regulator/consumer.h>
@@ -28,6 +27,7 @@
#include <linux/videodev2.h>
#include <media/mt9p031.h>
+#include <media/v4l2-async.h>
#include <media/v4l2-ctrls.h>
#include <media/v4l2-device.h>
#include <media/v4l2-subdev.h>
@@ -135,7 +135,7 @@
struct aptina_pll pll;
unsigned int clk_div;
bool use_pll;
- int reset;
+ struct gpio_desc *reset;
struct v4l2_ctrl_handler ctrls;
struct v4l2_ctrl *blc_auto;
@@ -251,7 +251,7 @@
div = DIV_ROUND_UP(pdata->ext_freq, pdata->target_freq);
div = roundup_pow_of_two(div) / 2;
- mt9p031->clk_div = max_t(unsigned int, div, 64);
+ mt9p031->clk_div = min_t(unsigned int, div, 64);
mt9p031->use_pll = false;
return 0;
@@ -308,9 +308,9 @@
{
int ret;
- /* Ensure RESET_BAR is low */
- if (gpio_is_valid(mt9p031->reset)) {
- gpio_set_value(mt9p031->reset, 0);
+ /* Ensure RESET_BAR is active */
+ if (mt9p031->reset) {
+ gpiod_set_value(mt9p031->reset, 1);
usleep_range(1000, 2000);
}
@@ -331,8 +331,8 @@
}
/* Now RESET_BAR must be high */
- if (gpio_is_valid(mt9p031->reset)) {
- gpio_set_value(mt9p031->reset, 1);
+ if (mt9p031->reset) {
+ gpiod_set_value(mt9p031->reset, 0);
usleep_range(1000, 2000);
}
@@ -341,8 +341,8 @@
static void mt9p031_power_off(struct mt9p031 *mt9p031)
{
- if (gpio_is_valid(mt9p031->reset)) {
- gpio_set_value(mt9p031->reset, 0);
+ if (mt9p031->reset) {
+ gpiod_set_value(mt9p031->reset, 1);
usleep_range(1000, 2000);
}
@@ -474,7 +474,7 @@
}
static int mt9p031_enum_mbus_code(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_mbus_code_enum *code)
{
struct mt9p031 *mt9p031 = to_mt9p031(subdev);
@@ -487,7 +487,7 @@
}
static int mt9p031_enum_frame_size(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_frame_size_enum *fse)
{
struct mt9p031 *mt9p031 = to_mt9p031(subdev);
@@ -505,12 +505,12 @@
}
static struct v4l2_mbus_framefmt *
-__mt9p031_get_pad_format(struct mt9p031 *mt9p031, struct v4l2_subdev_fh *fh,
+__mt9p031_get_pad_format(struct mt9p031 *mt9p031, struct v4l2_subdev_pad_config *cfg,
unsigned int pad, u32 which)
{
switch (which) {
case V4L2_SUBDEV_FORMAT_TRY:
- return v4l2_subdev_get_try_format(fh, pad);
+ return v4l2_subdev_get_try_format(&mt9p031->subdev, cfg, pad);
case V4L2_SUBDEV_FORMAT_ACTIVE:
return &mt9p031->format;
default:
@@ -519,12 +519,12 @@
}
static struct v4l2_rect *
-__mt9p031_get_pad_crop(struct mt9p031 *mt9p031, struct v4l2_subdev_fh *fh,
+__mt9p031_get_pad_crop(struct mt9p031 *mt9p031, struct v4l2_subdev_pad_config *cfg,
unsigned int pad, u32 which)
{
switch (which) {
case V4L2_SUBDEV_FORMAT_TRY:
- return v4l2_subdev_get_try_crop(fh, pad);
+ return v4l2_subdev_get_try_crop(&mt9p031->subdev, cfg, pad);
case V4L2_SUBDEV_FORMAT_ACTIVE:
return &mt9p031->crop;
default:
@@ -533,18 +533,18 @@
}
static int mt9p031_get_format(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct mt9p031 *mt9p031 = to_mt9p031(subdev);
- fmt->format = *__mt9p031_get_pad_format(mt9p031, fh, fmt->pad,
+ fmt->format = *__mt9p031_get_pad_format(mt9p031, cfg, fmt->pad,
fmt->which);
return 0;
}
static int mt9p031_set_format(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *format)
{
struct mt9p031 *mt9p031 = to_mt9p031(subdev);
@@ -555,7 +555,7 @@
unsigned int hratio;
unsigned int vratio;
- __crop = __mt9p031_get_pad_crop(mt9p031, fh, format->pad,
+ __crop = __mt9p031_get_pad_crop(mt9p031, cfg, format->pad,
format->which);
/* Clamp the width and height to avoid dividing by zero. */
@@ -571,7 +571,7 @@
hratio = DIV_ROUND_CLOSEST(__crop->width, width);
vratio = DIV_ROUND_CLOSEST(__crop->height, height);
- __format = __mt9p031_get_pad_format(mt9p031, fh, format->pad,
+ __format = __mt9p031_get_pad_format(mt9p031, cfg, format->pad,
format->which);
__format->width = __crop->width / hratio;
__format->height = __crop->height / vratio;
@@ -582,7 +582,7 @@
}
static int mt9p031_get_selection(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_selection *sel)
{
struct mt9p031 *mt9p031 = to_mt9p031(subdev);
@@ -590,12 +590,12 @@
if (sel->target != V4L2_SEL_TGT_CROP)
return -EINVAL;
- sel->r = *__mt9p031_get_pad_crop(mt9p031, fh, sel->pad, sel->which);
+ sel->r = *__mt9p031_get_pad_crop(mt9p031, cfg, sel->pad, sel->which);
return 0;
}
static int mt9p031_set_selection(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_selection *sel)
{
struct mt9p031 *mt9p031 = to_mt9p031(subdev);
@@ -625,13 +625,13 @@
rect.height = min_t(unsigned int, rect.height,
MT9P031_PIXEL_ARRAY_HEIGHT - rect.top);
- __crop = __mt9p031_get_pad_crop(mt9p031, fh, sel->pad, sel->which);
+ __crop = __mt9p031_get_pad_crop(mt9p031, cfg, sel->pad, sel->which);
if (rect.width != __crop->width || rect.height != __crop->height) {
/* Reset the output image size if the crop rectangle size has
* been modified.
*/
- __format = __mt9p031_get_pad_format(mt9p031, fh, sel->pad,
+ __format = __mt9p031_get_pad_format(mt9p031, cfg, sel->pad,
sel->which);
__format->width = rect.width;
__format->height = rect.height;
@@ -946,13 +946,13 @@
struct v4l2_mbus_framefmt *format;
struct v4l2_rect *crop;
- crop = v4l2_subdev_get_try_crop(fh, 0);
+ crop = v4l2_subdev_get_try_crop(subdev, fh->pad, 0);
crop->left = MT9P031_COLUMN_START_DEF;
crop->top = MT9P031_ROW_START_DEF;
crop->width = MT9P031_WINDOW_WIDTH_DEF;
crop->height = MT9P031_WINDOW_HEIGHT_DEF;
- format = v4l2_subdev_get_try_format(fh, 0);
+ format = v4l2_subdev_get_try_format(subdev, fh->pad, 0);
if (mt9p031->model == MT9P031_MODEL_MONOCHROME)
format->code = MEDIA_BUS_FMT_Y12_1X12;
@@ -1022,7 +1022,6 @@
if (!pdata)
goto done;
- pdata->reset = of_get_named_gpio(client->dev.of_node, "reset-gpios", 0);
of_property_read_u32(np, "input-clock-frequency", &pdata->ext_freq);
of_property_read_u32(np, "pixel-clock-frequency", &pdata->target_freq);
@@ -1059,7 +1058,6 @@
mt9p031->output_control = MT9P031_OUTPUT_CONTROL_DEF;
mt9p031->mode2 = MT9P031_READ_MODE_2_ROW_BLC;
mt9p031->model = did->driver_data;
- mt9p031->reset = -1;
mt9p031->regulators[0].supply = "vdd";
mt9p031->regulators[1].supply = "vdd_io";
@@ -1071,6 +1069,8 @@
return ret;
}
+ mutex_init(&mt9p031->power_lock);
+
v4l2_ctrl_handler_init(&mt9p031->ctrls, ARRAY_SIZE(mt9p031_ctrls) + 6);
v4l2_ctrl_new_std(&mt9p031->ctrls, &mt9p031_ctrl_ops,
@@ -1108,7 +1108,6 @@
mt9p031->blc_offset = v4l2_ctrl_find(&mt9p031->ctrls,
V4L2_CID_BLC_DIGITAL_OFFSET);
- mutex_init(&mt9p031->power_lock);
v4l2_i2c_subdev_init(&mt9p031->subdev, client, &mt9p031_subdev_ops);
mt9p031->subdev.internal_ops = &mt9p031_subdev_internal_ops;
@@ -1134,21 +1133,20 @@
mt9p031->format.field = V4L2_FIELD_NONE;
mt9p031->format.colorspace = V4L2_COLORSPACE_SRGB;
- if (gpio_is_valid(pdata->reset)) {
- ret = devm_gpio_request_one(&client->dev, pdata->reset,
- GPIOF_OUT_INIT_LOW, "mt9p031_rst");
- if (ret < 0)
- goto done;
-
- mt9p031->reset = pdata->reset;
- }
+ mt9p031->reset = devm_gpiod_get_optional(&client->dev, "reset",
+ GPIOD_OUT_HIGH);
ret = mt9p031_clk_setup(mt9p031);
+ if (ret)
+ goto done;
+
+ ret = v4l2_async_register_subdev(&mt9p031->subdev);
done:
if (ret < 0) {
v4l2_ctrl_handler_free(&mt9p031->ctrls);
media_entity_cleanup(&mt9p031->subdev.entity);
+ mutex_destroy(&mt9p031->power_lock);
}
return ret;
@@ -1160,8 +1158,9 @@
struct mt9p031 *mt9p031 = to_mt9p031(subdev);
v4l2_ctrl_handler_free(&mt9p031->ctrls);
- v4l2_device_unregister_subdev(subdev);
+ v4l2_async_unregister_subdev(subdev);
media_entity_cleanup(&subdev->entity);
+ mutex_destroy(&mt9p031->power_lock);
return 0;
}
diff --git a/drivers/media/i2c/mt9t001.c b/drivers/media/i2c/mt9t001.c
index f6ca636..8ae99f7 100644
--- a/drivers/media/i2c/mt9t001.c
+++ b/drivers/media/i2c/mt9t001.c
@@ -244,12 +244,12 @@
*/
static struct v4l2_mbus_framefmt *
-__mt9t001_get_pad_format(struct mt9t001 *mt9t001, struct v4l2_subdev_fh *fh,
+__mt9t001_get_pad_format(struct mt9t001 *mt9t001, struct v4l2_subdev_pad_config *cfg,
unsigned int pad, enum v4l2_subdev_format_whence which)
{
switch (which) {
case V4L2_SUBDEV_FORMAT_TRY:
- return v4l2_subdev_get_try_format(fh, pad);
+ return v4l2_subdev_get_try_format(&mt9t001->subdev, cfg, pad);
case V4L2_SUBDEV_FORMAT_ACTIVE:
return &mt9t001->format;
default:
@@ -258,12 +258,12 @@
}
static struct v4l2_rect *
-__mt9t001_get_pad_crop(struct mt9t001 *mt9t001, struct v4l2_subdev_fh *fh,
+__mt9t001_get_pad_crop(struct mt9t001 *mt9t001, struct v4l2_subdev_pad_config *cfg,
unsigned int pad, enum v4l2_subdev_format_whence which)
{
switch (which) {
case V4L2_SUBDEV_FORMAT_TRY:
- return v4l2_subdev_get_try_crop(fh, pad);
+ return v4l2_subdev_get_try_crop(&mt9t001->subdev, cfg, pad);
case V4L2_SUBDEV_FORMAT_ACTIVE:
return &mt9t001->crop;
default:
@@ -327,7 +327,7 @@
}
static int mt9t001_enum_mbus_code(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_mbus_code_enum *code)
{
if (code->index > 0)
@@ -338,7 +338,7 @@
}
static int mt9t001_enum_frame_size(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_frame_size_enum *fse)
{
if (fse->index >= 8 || fse->code != MEDIA_BUS_FMT_SGRBG10_1X10)
@@ -353,18 +353,18 @@
}
static int mt9t001_get_format(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *format)
{
struct mt9t001 *mt9t001 = to_mt9t001(subdev);
- format->format = *__mt9t001_get_pad_format(mt9t001, fh, format->pad,
+ format->format = *__mt9t001_get_pad_format(mt9t001, cfg, format->pad,
format->which);
return 0;
}
static int mt9t001_set_format(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *format)
{
struct mt9t001 *mt9t001 = to_mt9t001(subdev);
@@ -375,7 +375,7 @@
unsigned int hratio;
unsigned int vratio;
- __crop = __mt9t001_get_pad_crop(mt9t001, fh, format->pad,
+ __crop = __mt9t001_get_pad_crop(mt9t001, cfg, format->pad,
format->which);
/* Clamp the width and height to avoid dividing by zero. */
@@ -391,7 +391,7 @@
hratio = DIV_ROUND_CLOSEST(__crop->width, width);
vratio = DIV_ROUND_CLOSEST(__crop->height, height);
- __format = __mt9t001_get_pad_format(mt9t001, fh, format->pad,
+ __format = __mt9t001_get_pad_format(mt9t001, cfg, format->pad,
format->which);
__format->width = __crop->width / hratio;
__format->height = __crop->height / vratio;
@@ -402,7 +402,7 @@
}
static int mt9t001_get_selection(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_selection *sel)
{
struct mt9t001 *mt9t001 = to_mt9t001(subdev);
@@ -410,12 +410,12 @@
if (sel->target != V4L2_SEL_TGT_CROP)
return -EINVAL;
- sel->r = *__mt9t001_get_pad_crop(mt9t001, fh, sel->pad, sel->which);
+ sel->r = *__mt9t001_get_pad_crop(mt9t001, cfg, sel->pad, sel->which);
return 0;
}
static int mt9t001_set_selection(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_selection *sel)
{
struct mt9t001 *mt9t001 = to_mt9t001(subdev);
@@ -447,13 +447,13 @@
rect.height = min_t(unsigned int, rect.height,
MT9T001_PIXEL_ARRAY_HEIGHT - rect.top);
- __crop = __mt9t001_get_pad_crop(mt9t001, fh, sel->pad, sel->which);
+ __crop = __mt9t001_get_pad_crop(mt9t001, cfg, sel->pad, sel->which);
if (rect.width != __crop->width || rect.height != __crop->height) {
/* Reset the output image size if the crop rectangle size has
* been modified.
*/
- __format = __mt9t001_get_pad_format(mt9t001, fh, sel->pad,
+ __format = __mt9t001_get_pad_format(mt9t001, cfg, sel->pad,
sel->which);
__format->width = rect.width;
__format->height = rect.height;
@@ -790,13 +790,13 @@
struct v4l2_mbus_framefmt *format;
struct v4l2_rect *crop;
- crop = v4l2_subdev_get_try_crop(fh, 0);
+ crop = v4l2_subdev_get_try_crop(subdev, fh->pad, 0);
crop->left = MT9T001_COLUMN_START_DEF;
crop->top = MT9T001_ROW_START_DEF;
crop->width = MT9T001_WINDOW_WIDTH_DEF + 1;
crop->height = MT9T001_WINDOW_HEIGHT_DEF + 1;
- format = v4l2_subdev_get_try_format(fh, 0);
+ format = v4l2_subdev_get_try_format(subdev, fh->pad, 0);
format->code = MEDIA_BUS_FMT_SGRBG10_1X10;
format->width = MT9T001_WINDOW_WIDTH_DEF + 1;
format->height = MT9T001_WINDOW_HEIGHT_DEF + 1;
diff --git a/drivers/media/i2c/mt9v032.c b/drivers/media/i2c/mt9v032.c
index bd3f979..977f400 100644
--- a/drivers/media/i2c/mt9v032.c
+++ b/drivers/media/i2c/mt9v032.c
@@ -17,6 +17,8 @@
#include <linux/i2c.h>
#include <linux/log2.h>
#include <linux/mutex.h>
+#include <linux/of.h>
+#include <linux/of_gpio.h>
#include <linux/regmap.h>
#include <linux/slab.h>
#include <linux/videodev2.h>
@@ -26,6 +28,7 @@
#include <media/mt9v032.h>
#include <media/v4l2-ctrls.h>
#include <media/v4l2-device.h>
+#include <media/v4l2-of.h>
#include <media/v4l2-subdev.h>
/* The first four rows are black rows. The active area spans 753x481 pixels. */
@@ -371,12 +374,12 @@
*/
static struct v4l2_mbus_framefmt *
-__mt9v032_get_pad_format(struct mt9v032 *mt9v032, struct v4l2_subdev_fh *fh,
+__mt9v032_get_pad_format(struct mt9v032 *mt9v032, struct v4l2_subdev_pad_config *cfg,
unsigned int pad, enum v4l2_subdev_format_whence which)
{
switch (which) {
case V4L2_SUBDEV_FORMAT_TRY:
- return v4l2_subdev_get_try_format(fh, pad);
+ return v4l2_subdev_get_try_format(&mt9v032->subdev, cfg, pad);
case V4L2_SUBDEV_FORMAT_ACTIVE:
return &mt9v032->format;
default:
@@ -385,12 +388,12 @@
}
static struct v4l2_rect *
-__mt9v032_get_pad_crop(struct mt9v032 *mt9v032, struct v4l2_subdev_fh *fh,
+__mt9v032_get_pad_crop(struct mt9v032 *mt9v032, struct v4l2_subdev_pad_config *cfg,
unsigned int pad, enum v4l2_subdev_format_whence which)
{
switch (which) {
case V4L2_SUBDEV_FORMAT_TRY:
- return v4l2_subdev_get_try_crop(fh, pad);
+ return v4l2_subdev_get_try_crop(&mt9v032->subdev, cfg, pad);
case V4L2_SUBDEV_FORMAT_ACTIVE:
return &mt9v032->crop;
default:
@@ -448,7 +451,7 @@
}
static int mt9v032_enum_mbus_code(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_mbus_code_enum *code)
{
if (code->index > 0)
@@ -459,7 +462,7 @@
}
static int mt9v032_enum_frame_size(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_frame_size_enum *fse)
{
if (fse->index >= 3 || fse->code != MEDIA_BUS_FMT_SGRBG10_1X10)
@@ -474,12 +477,12 @@
}
static int mt9v032_get_format(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *format)
{
struct mt9v032 *mt9v032 = to_mt9v032(subdev);
- format->format = *__mt9v032_get_pad_format(mt9v032, fh, format->pad,
+ format->format = *__mt9v032_get_pad_format(mt9v032, cfg, format->pad,
format->which);
return 0;
}
@@ -509,7 +512,7 @@
}
static int mt9v032_set_format(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *format)
{
struct mt9v032 *mt9v032 = to_mt9v032(subdev);
@@ -520,7 +523,7 @@
unsigned int hratio;
unsigned int vratio;
- __crop = __mt9v032_get_pad_crop(mt9v032, fh, format->pad,
+ __crop = __mt9v032_get_pad_crop(mt9v032, cfg, format->pad,
format->which);
/* Clamp the width and height to avoid dividing by zero. */
@@ -536,7 +539,7 @@
hratio = mt9v032_calc_ratio(__crop->width, width);
vratio = mt9v032_calc_ratio(__crop->height, height);
- __format = __mt9v032_get_pad_format(mt9v032, fh, format->pad,
+ __format = __mt9v032_get_pad_format(mt9v032, cfg, format->pad,
format->which);
__format->width = __crop->width / hratio;
__format->height = __crop->height / vratio;
@@ -553,7 +556,7 @@
}
static int mt9v032_get_selection(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_selection *sel)
{
struct mt9v032 *mt9v032 = to_mt9v032(subdev);
@@ -561,12 +564,12 @@
if (sel->target != V4L2_SEL_TGT_CROP)
return -EINVAL;
- sel->r = *__mt9v032_get_pad_crop(mt9v032, fh, sel->pad, sel->which);
+ sel->r = *__mt9v032_get_pad_crop(mt9v032, cfg, sel->pad, sel->which);
return 0;
}
static int mt9v032_set_selection(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_selection *sel)
{
struct mt9v032 *mt9v032 = to_mt9v032(subdev);
@@ -598,13 +601,13 @@
rect.height = min_t(unsigned int,
rect.height, MT9V032_PIXEL_ARRAY_HEIGHT - rect.top);
- __crop = __mt9v032_get_pad_crop(mt9v032, fh, sel->pad, sel->which);
+ __crop = __mt9v032_get_pad_crop(mt9v032, cfg, sel->pad, sel->which);
if (rect.width != __crop->width || rect.height != __crop->height) {
/* Reset the output image size if the crop rectangle size has
* been modified.
*/
- __format = __mt9v032_get_pad_format(mt9v032, fh, sel->pad,
+ __format = __mt9v032_get_pad_format(mt9v032, cfg, sel->pad,
sel->which);
__format->width = rect.width;
__format->height = rect.height;
@@ -810,13 +813,13 @@
struct v4l2_mbus_framefmt *format;
struct v4l2_rect *crop;
- crop = v4l2_subdev_get_try_crop(fh, 0);
+ crop = v4l2_subdev_get_try_crop(subdev, fh->pad, 0);
crop->left = MT9V032_COLUMN_START_DEF;
crop->top = MT9V032_ROW_START_DEF;
crop->width = MT9V032_WINDOW_WIDTH_DEF;
crop->height = MT9V032_WINDOW_HEIGHT_DEF;
- format = v4l2_subdev_get_try_format(fh, 0);
+ format = v4l2_subdev_get_try_format(subdev, fh->pad, 0);
if (mt9v032->model->color)
format->code = MEDIA_BUS_FMT_SGRBG10_1X10;
@@ -876,10 +879,58 @@
* Driver initialization and probing
*/
+static struct mt9v032_platform_data *
+mt9v032_get_pdata(struct i2c_client *client)
+{
+ struct mt9v032_platform_data *pdata;
+ struct v4l2_of_endpoint endpoint;
+ struct device_node *np;
+ struct property *prop;
+
+ if (!IS_ENABLED(CONFIG_OF) || !client->dev.of_node)
+ return client->dev.platform_data;
+
+ np = of_graph_get_next_endpoint(client->dev.of_node, NULL);
+ if (!np)
+ return NULL;
+
+ if (v4l2_of_parse_endpoint(np, &endpoint) < 0)
+ goto done;
+
+ pdata = devm_kzalloc(&client->dev, sizeof(*pdata), GFP_KERNEL);
+ if (!pdata)
+ goto done;
+
+ prop = of_find_property(np, "link-frequencies", NULL);
+ if (prop) {
+ u64 *link_freqs;
+ size_t size = prop->length / sizeof(*link_freqs);
+
+ link_freqs = devm_kcalloc(&client->dev, size,
+ sizeof(*link_freqs), GFP_KERNEL);
+ if (!link_freqs)
+ goto done;
+
+ if (of_property_read_u64_array(np, "link-frequencies",
+ link_freqs, size) < 0)
+ goto done;
+
+ pdata->link_freqs = link_freqs;
+ pdata->link_def_freq = link_freqs[0];
+ }
+
+ pdata->clk_pol = !!(endpoint.bus.parallel.flags &
+ V4L2_MBUS_PCLK_SAMPLE_RISING);
+
+done:
+ of_node_put(np);
+ return pdata;
+}
+
static int mt9v032_probe(struct i2c_client *client,
const struct i2c_device_id *did)
{
- struct mt9v032_platform_data *pdata = client->dev.platform_data;
+ struct mt9v032_platform_data *pdata = mt9v032_get_pdata(client);
struct mt9v032 *mt9v032;
unsigned int i;
int ret;
@@ -961,9 +1012,12 @@
mt9v032->subdev.ctrl_handler = &mt9v032->ctrls;
- if (mt9v032->ctrls.error)
- printk(KERN_INFO "%s: control initialization error %d\n",
- __func__, mt9v032->ctrls.error);
+ if (mt9v032->ctrls.error) {
+ dev_err(&client->dev, "control initialization error %d\n",
+ mt9v032->ctrls.error);
+ ret = mt9v032->ctrls.error;
+ goto err;
+ }
mt9v032->crop.left = MT9V032_COLUMN_START_DEF;
mt9v032->crop.top = MT9V032_ROW_START_DEF;
@@ -1016,7 +1070,6 @@
v4l2_async_unregister_subdev(subdev);
v4l2_ctrl_handler_free(&mt9v032->ctrls);
- v4l2_device_unregister_subdev(subdev);
media_entity_cleanup(&subdev->entity);
return 0;
@@ -1035,9 +1088,25 @@
};
MODULE_DEVICE_TABLE(i2c, mt9v032_id);
+#if IS_ENABLED(CONFIG_OF)
+static const struct of_device_id mt9v032_of_match[] = {
+ { .compatible = "aptina,mt9v022" },
+ { .compatible = "aptina,mt9v022m" },
+ { .compatible = "aptina,mt9v024" },
+ { .compatible = "aptina,mt9v024m" },
+ { .compatible = "aptina,mt9v032" },
+ { .compatible = "aptina,mt9v032m" },
+ { .compatible = "aptina,mt9v034" },
+ { .compatible = "aptina,mt9v034m" },
+ { /* Sentinel */ }
+};
+MODULE_DEVICE_TABLE(of, mt9v032_of_match);
+#endif
+
static struct i2c_driver mt9v032_driver = {
.driver = {
.name = "mt9v032",
+ .of_match_table = of_match_ptr(mt9v032_of_match),
},
.probe = mt9v032_probe,
.remove = mt9v032_remove,
diff --git a/drivers/media/i2c/noon010pc30.c b/drivers/media/i2c/noon010pc30.c
index 00c7b26..f197b6c 100644
--- a/drivers/media/i2c/noon010pc30.c
+++ b/drivers/media/i2c/noon010pc30.c
@@ -492,7 +492,7 @@
}
static int noon010_enum_mbus_code(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_mbus_code_enum *code)
{
if (code->index >= ARRAY_SIZE(noon010_formats))
@@ -502,15 +502,16 @@
return 0;
}
-static int noon010_get_fmt(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+static int noon010_get_fmt(struct v4l2_subdev *sd,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct noon010_info *info = to_noon010(sd);
struct v4l2_mbus_framefmt *mf;
if (fmt->which == V4L2_SUBDEV_FORMAT_TRY) {
- if (fh) {
- mf = v4l2_subdev_get_try_format(fh, 0);
+ if (cfg) {
+ mf = v4l2_subdev_get_try_format(sd, cfg, 0);
fmt->format = *mf;
}
return 0;
@@ -542,7 +543,7 @@
return &noon010_formats[i];
}
-static int noon010_set_fmt(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+static int noon010_set_fmt(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct noon010_info *info = to_noon010(sd);
@@ -557,8 +558,8 @@
fmt->format.field = V4L2_FIELD_NONE;
if (fmt->which == V4L2_SUBDEV_FORMAT_TRY) {
- if (fh) {
- mf = v4l2_subdev_get_try_format(fh, 0);
+ if (cfg) {
+ mf = v4l2_subdev_get_try_format(sd, cfg, 0);
*mf = fmt->format;
}
return 0;
@@ -640,7 +641,7 @@
static int noon010_open(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh)
{
- struct v4l2_mbus_framefmt *mf = v4l2_subdev_get_try_format(fh, 0);
+ struct v4l2_mbus_framefmt *mf = v4l2_subdev_get_try_format(sd, fh->pad, 0);
mf->width = noon010_sizes[0].width;
mf->height = noon010_sizes[0].height;
diff --git a/drivers/media/i2c/ov2659.c b/drivers/media/i2c/ov2659.c
new file mode 100644
index 0000000..edebd11
--- /dev/null
+++ b/drivers/media/i2c/ov2659.c
@@ -0,0 +1,1509 @@
+/*
+ * Omnivision OV2659 CMOS Image Sensor driver
+ *
+ * Copyright (C) 2015 Texas Instruments, Inc.
+ *
+ * Benoit Parrot <bparrot@ti.com>
+ * Lad, Prabhakar <prabhakar.csengg@gmail.com>
+ *
+ * This program is free software; you may redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 of the License.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#include <linux/clk.h>
+#include <linux/delay.h>
+#include <linux/err.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/i2c.h>
+#include <linux/kernel.h>
+#include <linux/media.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/of_graph.h>
+#include <linux/slab.h>
+#include <linux/uaccess.h>
+#include <linux/videodev2.h>
+
+#include <media/media-entity.h>
+#include <media/ov2659.h>
+#include <media/v4l2-common.h>
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-event.h>
+#include <media/v4l2-image-sizes.h>
+#include <media/v4l2-mediabus.h>
+#include <media/v4l2-of.h>
+#include <media/v4l2-subdev.h>
+
+#define DRIVER_NAME "ov2659"
+
+/*
+ * OV2659 register definitions
+ */
+#define REG_SOFTWARE_STANDBY 0x0100
+#define REG_SOFTWARE_RESET 0x0103
+#define REG_IO_CTRL00 0x3000
+#define REG_IO_CTRL01 0x3001
+#define REG_IO_CTRL02 0x3002
+#define REG_OUTPUT_VALUE00 0x3008
+#define REG_OUTPUT_VALUE01 0x3009
+#define REG_OUTPUT_VALUE02 0x300d
+#define REG_OUTPUT_SELECT00 0x300e
+#define REG_OUTPUT_SELECT01 0x300f
+#define REG_OUTPUT_SELECT02 0x3010
+#define REG_OUTPUT_DRIVE 0x3011
+#define REG_INPUT_READOUT00 0x302d
+#define REG_INPUT_READOUT01 0x302e
+#define REG_INPUT_READOUT02 0x302f
+
+#define REG_SC_PLL_CTRL0 0x3003
+#define REG_SC_PLL_CTRL1 0x3004
+#define REG_SC_PLL_CTRL2 0x3005
+#define REG_SC_PLL_CTRL3 0x3006
+#define REG_SC_CHIP_ID_H 0x300a
+#define REG_SC_CHIP_ID_L 0x300b
+#define REG_SC_PWC 0x3014
+#define REG_SC_CLKRST0 0x301a
+#define REG_SC_CLKRST1 0x301b
+#define REG_SC_CLKRST2 0x301c
+#define REG_SC_CLKRST3 0x301d
+#define REG_SC_SUB_ID 0x302a
+#define REG_SC_SCCB_ID 0x302b
+
+#define REG_GROUP_ADDRESS_00 0x3200
+#define REG_GROUP_ADDRESS_01 0x3201
+#define REG_GROUP_ADDRESS_02 0x3202
+#define REG_GROUP_ADDRESS_03 0x3203
+#define REG_GROUP_ACCESS 0x3208
+
+#define REG_AWB_R_GAIN_H 0x3400
+#define REG_AWB_R_GAIN_L 0x3401
+#define REG_AWB_G_GAIN_H 0x3402
+#define REG_AWB_G_GAIN_L 0x3403
+#define REG_AWB_B_GAIN_H 0x3404
+#define REG_AWB_B_GAIN_L 0x3405
+#define REG_AWB_MANUAL_CONTROL 0x3406
+
+#define REG_TIMING_HS_H 0x3800
+#define REG_TIMING_HS_L 0x3801
+#define REG_TIMING_VS_H 0x3802
+#define REG_TIMING_VS_L 0x3803
+#define REG_TIMING_HW_H 0x3804
+#define REG_TIMING_HW_L 0x3805
+#define REG_TIMING_VH_H 0x3806
+#define REG_TIMING_VH_L 0x3807
+#define REG_TIMING_DVPHO_H 0x3808
+#define REG_TIMING_DVPHO_L 0x3809
+#define REG_TIMING_DVPVO_H 0x380a
+#define REG_TIMING_DVPVO_L 0x380b
+#define REG_TIMING_HTS_H 0x380c
+#define REG_TIMING_HTS_L 0x380d
+#define REG_TIMING_VTS_H 0x380e
+#define REG_TIMING_VTS_L 0x380f
+#define REG_TIMING_HOFFS_H 0x3810
+#define REG_TIMING_HOFFS_L 0x3811
+#define REG_TIMING_VOFFS_H 0x3812
+#define REG_TIMING_VOFFS_L 0x3813
+#define REG_TIMING_XINC 0x3814
+#define REG_TIMING_YINC 0x3815
+#define REG_TIMING_VERT_FORMAT 0x3820
+#define REG_TIMING_HORIZ_FORMAT 0x3821
+
+#define REG_FORMAT_CTRL00 0x4300
+
+#define REG_VFIFO_READ_START_H 0x4608
+#define REG_VFIFO_READ_START_L 0x4609
+
+#define REG_DVP_CTRL02 0x4708
+
+#define REG_ISP_CTRL00 0x5000
+#define REG_ISP_CTRL01 0x5001
+#define REG_ISP_CTRL02 0x5002
+
+#define REG_LENC_RED_X0_H 0x500c
+#define REG_LENC_RED_X0_L 0x500d
+#define REG_LENC_RED_Y0_H 0x500e
+#define REG_LENC_RED_Y0_L 0x500f
+#define REG_LENC_RED_A1 0x5010
+#define REG_LENC_RED_B1 0x5011
+#define REG_LENC_RED_A2_B2 0x5012
+#define REG_LENC_GREEN_X0_H 0x5013
+#define REG_LENC_GREEN_X0_L 0x5014
+#define REG_LENC_GREEN_Y0_H 0x5015
+#define REG_LENC_GREEN_Y0_L 0x5016
+#define REG_LENC_GREEN_A1 0x5017
+#define REG_LENC_GREEN_B1 0x5018
+#define REG_LENC_GREEN_A2_B2 0x5019
+#define REG_LENC_BLUE_X0_H 0x501a
+#define REG_LENC_BLUE_X0_L 0x501b
+#define REG_LENC_BLUE_Y0_H 0x501c
+#define REG_LENC_BLUE_Y0_L 0x501d
+#define REG_LENC_BLUE_A1 0x501e
+#define REG_LENC_BLUE_B1 0x501f
+#define REG_LENC_BLUE_A2_B2 0x5020
+
+#define REG_AWB_CTRL00 0x5035
+#define REG_AWB_CTRL01 0x5036
+#define REG_AWB_CTRL02 0x5037
+#define REG_AWB_CTRL03 0x5038
+#define REG_AWB_CTRL04 0x5039
+#define REG_AWB_LOCAL_LIMIT 0x503a
+#define REG_AWB_CTRL12 0x5049
+#define REG_AWB_CTRL13 0x504a
+#define REG_AWB_CTRL14 0x504b
+
+#define REG_SHARPENMT_THRESH1 0x5064
+#define REG_SHARPENMT_THRESH2 0x5065
+#define REG_SHARPENMT_OFFSET1 0x5066
+#define REG_SHARPENMT_OFFSET2 0x5067
+#define REG_DENOISE_THRESH1 0x5068
+#define REG_DENOISE_THRESH2 0x5069
+#define REG_DENOISE_OFFSET1 0x506a
+#define REG_DENOISE_OFFSET2 0x506b
+#define REG_SHARPEN_THRESH1 0x506c
+#define REG_SHARPEN_THRESH2 0x506d
+#define REG_CIP_CTRL00 0x506e
+#define REG_CIP_CTRL01 0x506f
+
+#define REG_CMX_SIGN 0x5079
+#define REG_CMX_MISC_CTRL 0x507a
+
+#define REG_PRE_ISP_CTRL00 0x50a0
+#define TEST_PATTERN_ENABLE BIT(7)
+#define VERTICAL_COLOR_BAR_MASK 0x53
+
+#define REG_NULL 0x0000 /* Array end token */
+
+#define OV265X_ID(_msb, _lsb) ((_msb) << 8 | (_lsb))
+#define OV2659_ID 0x2656
+
+struct sensor_register {
+ u16 addr;
+ u8 value;
+};
+
+struct ov2659_framesize {
+ u16 width;
+ u16 height;
+ u16 max_exp_lines;
+ const struct sensor_register *regs;
+};
+
+struct ov2659_pll_ctrl {
+ u8 ctrl1;
+ u8 ctrl2;
+ u8 ctrl3;
+};
+
+struct ov2659_pixfmt {
+ u32 code;
+ /* Output format Register Value (REG_FORMAT_CTRL00) */
+ struct sensor_register *format_ctrl_regs;
+};
+
+struct pll_ctrl_reg {
+ unsigned int div;
+ unsigned char reg;
+};
+
+struct ov2659 {
+ struct v4l2_subdev sd;
+ struct media_pad pad;
+ struct v4l2_mbus_framefmt format;
+ unsigned int xvclk_frequency;
+ const struct ov2659_platform_data *pdata;
+ struct mutex lock;
+ struct i2c_client *client;
+ struct v4l2_ctrl_handler ctrls;
+ struct v4l2_ctrl *link_frequency;
+ const struct ov2659_framesize *frame_size;
+ struct sensor_register *format_ctrl_regs;
+ struct ov2659_pll_ctrl pll;
+ int streaming;
+};
+
+static const struct sensor_register ov2659_init_regs[] = {
+ { REG_IO_CTRL00, 0x03 },
+ { REG_IO_CTRL01, 0xff },
+ { REG_IO_CTRL02, 0xe0 },
+ { 0x3633, 0x3d },
+ { 0x3620, 0x02 },
+ { 0x3631, 0x11 },
+ { 0x3612, 0x04 },
+ { 0x3630, 0x20 },
+ { 0x4702, 0x02 },
+ { 0x370c, 0x34 },
+ { REG_TIMING_HS_H, 0x00 },
+ { REG_TIMING_HS_L, 0x00 },
+ { REG_TIMING_VS_H, 0x00 },
+ { REG_TIMING_VS_L, 0x00 },
+ { REG_TIMING_HW_H, 0x06 },
+ { REG_TIMING_HW_L, 0x5f },
+ { REG_TIMING_VH_H, 0x04 },
+ { REG_TIMING_VH_L, 0xb7 },
+ { REG_TIMING_DVPHO_H, 0x03 },
+ { REG_TIMING_DVPHO_L, 0x20 },
+ { REG_TIMING_DVPVO_H, 0x02 },
+ { REG_TIMING_DVPVO_L, 0x58 },
+ { REG_TIMING_HTS_H, 0x05 },
+ { REG_TIMING_HTS_L, 0x14 },
+ { REG_TIMING_VTS_H, 0x02 },
+ { REG_TIMING_VTS_L, 0x68 },
+ { REG_TIMING_HOFFS_L, 0x08 },
+ { REG_TIMING_VOFFS_L, 0x02 },
+ { REG_TIMING_XINC, 0x31 },
+ { REG_TIMING_YINC, 0x31 },
+ { 0x3a02, 0x02 },
+ { 0x3a03, 0x68 },
+ { 0x3a08, 0x00 },
+ { 0x3a09, 0x5c },
+ { 0x3a0a, 0x00 },
+ { 0x3a0b, 0x4d },
+ { 0x3a0d, 0x08 },
+ { 0x3a0e, 0x06 },
+ { 0x3a14, 0x02 },
+ { 0x3a15, 0x28 },
+ { REG_DVP_CTRL02, 0x01 },
+ { 0x3623, 0x00 },
+ { 0x3634, 0x76 },
+ { 0x3701, 0x44 },
+ { 0x3702, 0x18 },
+ { 0x3703, 0x24 },
+ { 0x3704, 0x24 },
+ { 0x3705, 0x0c },
+ { REG_TIMING_VERT_FORMAT, 0x81 },
+ { REG_TIMING_HORIZ_FORMAT, 0x01 },
+ { 0x370a, 0x52 },
+ { REG_VFIFO_READ_START_H, 0x00 },
+ { REG_VFIFO_READ_START_L, 0x80 },
+ { REG_FORMAT_CTRL00, 0x30 },
+ { 0x5086, 0x02 },
+ { REG_ISP_CTRL00, 0xfb },
+ { REG_ISP_CTRL01, 0x1f },
+ { REG_ISP_CTRL02, 0x00 },
+ { 0x5025, 0x0e },
+ { 0x5026, 0x18 },
+ { 0x5027, 0x34 },
+ { 0x5028, 0x4c },
+ { 0x5029, 0x62 },
+ { 0x502a, 0x74 },
+ { 0x502b, 0x85 },
+ { 0x502c, 0x92 },
+ { 0x502d, 0x9e },
+ { 0x502e, 0xb2 },
+ { 0x502f, 0xc0 },
+ { 0x5030, 0xcc },
+ { 0x5031, 0xe0 },
+ { 0x5032, 0xee },
+ { 0x5033, 0xf6 },
+ { 0x5034, 0x11 },
+ { 0x5070, 0x1c },
+ { 0x5071, 0x5b },
+ { 0x5072, 0x05 },
+ { 0x5073, 0x20 },
+ { 0x5074, 0x94 },
+ { 0x5075, 0xb4 },
+ { 0x5076, 0xb4 },
+ { 0x5077, 0xaf },
+ { 0x5078, 0x05 },
+ { REG_CMX_SIGN, 0x98 },
+ { REG_CMX_MISC_CTRL, 0x21 },
+ { REG_AWB_CTRL00, 0x6a },
+ { REG_AWB_CTRL01, 0x11 },
+ { REG_AWB_CTRL02, 0x92 },
+ { REG_AWB_CTRL03, 0x21 },
+ { REG_AWB_CTRL04, 0xe1 },
+ { REG_AWB_LOCAL_LIMIT, 0x01 },
+ { 0x503c, 0x05 },
+ { 0x503d, 0x08 },
+ { 0x503e, 0x08 },
+ { 0x503f, 0x64 },
+ { 0x5040, 0x58 },
+ { 0x5041, 0x2a },
+ { 0x5042, 0xc5 },
+ { 0x5043, 0x2e },
+ { 0x5044, 0x3a },
+ { 0x5045, 0x3c },
+ { 0x5046, 0x44 },
+ { 0x5047, 0xf8 },
+ { 0x5048, 0x08 },
+ { REG_AWB_CTRL12, 0x70 },
+ { REG_AWB_CTRL13, 0xf0 },
+ { REG_AWB_CTRL14, 0xf0 },
+ { REG_LENC_RED_X0_H, 0x03 },
+ { REG_LENC_RED_X0_L, 0x20 },
+ { REG_LENC_RED_Y0_H, 0x02 },
+ { REG_LENC_RED_Y0_L, 0x5c },
+ { REG_LENC_RED_A1, 0x48 },
+ { REG_LENC_RED_B1, 0x00 },
+ { REG_LENC_RED_A2_B2, 0x66 },
+ { REG_LENC_GREEN_X0_H, 0x03 },
+ { REG_LENC_GREEN_X0_L, 0x30 },
+ { REG_LENC_GREEN_Y0_H, 0x02 },
+ { REG_LENC_GREEN_Y0_L, 0x7c },
+ { REG_LENC_GREEN_A1, 0x40 },
+ { REG_LENC_GREEN_B1, 0x00 },
+ { REG_LENC_GREEN_A2_B2, 0x66 },
+ { REG_LENC_BLUE_X0_H, 0x03 },
+ { REG_LENC_BLUE_X0_L, 0x10 },
+ { REG_LENC_BLUE_Y0_H, 0x02 },
+ { REG_LENC_BLUE_Y0_L, 0x7c },
+ { REG_LENC_BLUE_A1, 0x3a },
+ { REG_LENC_BLUE_B1, 0x00 },
+ { REG_LENC_BLUE_A2_B2, 0x66 },
+ { REG_CIP_CTRL00, 0x44 },
+ { REG_SHARPENMT_THRESH1, 0x08 },
+ { REG_SHARPENMT_THRESH2, 0x10 },
+ { REG_SHARPENMT_OFFSET1, 0x12 },
+ { REG_SHARPENMT_OFFSET2, 0x02 },
+ { REG_SHARPEN_THRESH1, 0x08 },
+ { REG_SHARPEN_THRESH2, 0x10 },
+ { REG_CIP_CTRL01, 0xa6 },
+ { REG_DENOISE_THRESH1, 0x08 },
+ { REG_DENOISE_THRESH2, 0x10 },
+ { REG_DENOISE_OFFSET1, 0x04 },
+ { REG_DENOISE_OFFSET2, 0x12 },
+ { 0x507e, 0x40 },
+ { 0x507f, 0x20 },
+ { 0x507b, 0x02 },
+ { REG_CMX_MISC_CTRL, 0x01 },
+ { 0x5084, 0x0c },
+ { 0x5085, 0x3e },
+ { 0x5005, 0x80 },
+ { 0x3a0f, 0x30 },
+ { 0x3a10, 0x28 },
+ { 0x3a1b, 0x32 },
+ { 0x3a1e, 0x26 },
+ { 0x3a11, 0x60 },
+ { 0x3a1f, 0x14 },
+ { 0x5060, 0x69 },
+ { 0x5061, 0x7d },
+ { 0x5062, 0x7d },
+ { 0x5063, 0x69 },
+ { REG_NULL, 0x00 },
+};
+
+/* 1280X720 720p */
+static struct sensor_register ov2659_720p[] = {
+ { REG_TIMING_HS_H, 0x00 },
+ { REG_TIMING_HS_L, 0xa0 },
+ { REG_TIMING_VS_H, 0x00 },
+ { REG_TIMING_VS_L, 0xf0 },
+ { REG_TIMING_HW_H, 0x05 },
+ { REG_TIMING_HW_L, 0xbf },
+ { REG_TIMING_VH_H, 0x03 },
+ { REG_TIMING_VH_L, 0xcb },
+ { REG_TIMING_DVPHO_H, 0x05 },
+ { REG_TIMING_DVPHO_L, 0x00 },
+ { REG_TIMING_DVPVO_H, 0x02 },
+ { REG_TIMING_DVPVO_L, 0xd0 },
+ { REG_TIMING_HTS_H, 0x06 },
+ { REG_TIMING_HTS_L, 0x4c },
+ { REG_TIMING_VTS_H, 0x02 },
+ { REG_TIMING_VTS_L, 0xe8 },
+ { REG_TIMING_HOFFS_L, 0x10 },
+ { REG_TIMING_VOFFS_L, 0x06 },
+ { REG_TIMING_XINC, 0x11 },
+ { REG_TIMING_YINC, 0x11 },
+ { REG_TIMING_VERT_FORMAT, 0x80 },
+ { REG_TIMING_HORIZ_FORMAT, 0x00 },
+ { 0x3a03, 0xe8 },
+ { 0x3a09, 0x6f },
+ { 0x3a0b, 0x5d },
+ { 0x3a15, 0x9a },
+ { REG_NULL, 0x00 },
+};
+
+/* 1600X1200 UXGA */
+static struct sensor_register ov2659_uxga[] = {
+ { REG_TIMING_HS_H, 0x00 },
+ { REG_TIMING_HS_L, 0x00 },
+ { REG_TIMING_VS_H, 0x00 },
+ { REG_TIMING_VS_L, 0x00 },
+ { REG_TIMING_HW_H, 0x06 },
+ { REG_TIMING_HW_L, 0x5f },
+ { REG_TIMING_VH_H, 0x04 },
+ { REG_TIMING_VH_L, 0xbb },
+ { REG_TIMING_DVPHO_H, 0x06 },
+ { REG_TIMING_DVPHO_L, 0x40 },
+ { REG_TIMING_DVPVO_H, 0x04 },
+ { REG_TIMING_DVPVO_L, 0xb0 },
+ { REG_TIMING_HTS_H, 0x07 },
+ { REG_TIMING_HTS_L, 0x9f },
+ { REG_TIMING_VTS_H, 0x04 },
+ { REG_TIMING_VTS_L, 0xd0 },
+ { REG_TIMING_HOFFS_L, 0x10 },
+ { REG_TIMING_VOFFS_L, 0x06 },
+ { REG_TIMING_XINC, 0x11 },
+ { REG_TIMING_YINC, 0x11 },
+ { 0x3a02, 0x04 },
+ { 0x3a03, 0xd0 },
+ { 0x3a08, 0x00 },
+ { 0x3a09, 0xb8 },
+ { 0x3a0a, 0x00 },
+ { 0x3a0b, 0x9a },
+ { 0x3a0d, 0x08 },
+ { 0x3a0e, 0x06 },
+ { 0x3a14, 0x04 },
+ { 0x3a15, 0x50 },
+ { 0x3623, 0x00 },
+ { 0x3634, 0x44 },
+ { 0x3701, 0x44 },
+ { 0x3702, 0x30 },
+ { 0x3703, 0x48 },
+ { 0x3704, 0x48 },
+ { 0x3705, 0x18 },
+ { REG_TIMING_VERT_FORMAT, 0x80 },
+ { REG_TIMING_HORIZ_FORMAT, 0x00 },
+ { 0x370a, 0x12 },
+ { REG_VFIFO_READ_START_H, 0x00 },
+ { REG_VFIFO_READ_START_L, 0x80 },
+ { REG_ISP_CTRL02, 0x00 },
+ { REG_NULL, 0x00 },
+};
+
+/* 1280X1024 SXGA */
+static struct sensor_register ov2659_sxga[] = {
+ { REG_TIMING_HS_H, 0x00 },
+ { REG_TIMING_HS_L, 0x00 },
+ { REG_TIMING_VS_H, 0x00 },
+ { REG_TIMING_VS_L, 0x00 },
+ { REG_TIMING_HW_H, 0x06 },
+ { REG_TIMING_HW_L, 0x5f },
+ { REG_TIMING_VH_H, 0x04 },
+ { REG_TIMING_VH_L, 0xb7 },
+ { REG_TIMING_DVPHO_H, 0x05 },
+ { REG_TIMING_DVPHO_L, 0x00 },
+ { REG_TIMING_DVPVO_H, 0x04 },
+ { REG_TIMING_DVPVO_L, 0x00 },
+ { REG_TIMING_HTS_H, 0x07 },
+ { REG_TIMING_HTS_L, 0x9c },
+ { REG_TIMING_VTS_H, 0x04 },
+ { REG_TIMING_VTS_L, 0xd0 },
+ { REG_TIMING_HOFFS_L, 0x10 },
+ { REG_TIMING_VOFFS_L, 0x06 },
+ { REG_TIMING_XINC, 0x11 },
+ { REG_TIMING_YINC, 0x11 },
+ { 0x3a02, 0x02 },
+ { 0x3a03, 0x68 },
+ { 0x3a08, 0x00 },
+ { 0x3a09, 0x5c },
+ { 0x3a0a, 0x00 },
+ { 0x3a0b, 0x4d },
+ { 0x3a0d, 0x08 },
+ { 0x3a0e, 0x06 },
+ { 0x3a14, 0x02 },
+ { 0x3a15, 0x28 },
+ { 0x3623, 0x00 },
+ { 0x3634, 0x76 },
+ { 0x3701, 0x44 },
+ { 0x3702, 0x18 },
+ { 0x3703, 0x24 },
+ { 0x3704, 0x24 },
+ { 0x3705, 0x0c },
+ { REG_TIMING_VERT_FORMAT, 0x80 },
+ { REG_TIMING_HORIZ_FORMAT, 0x00 },
+ { 0x370a, 0x52 },
+ { REG_VFIFO_READ_START_H, 0x00 },
+ { REG_VFIFO_READ_START_L, 0x80 },
+ { REG_ISP_CTRL02, 0x00 },
+ { REG_NULL, 0x00 },
+};
+
+/* 1024X768 XGA */
+static struct sensor_register ov2659_xga[] = {
+ { REG_TIMING_HS_H, 0x00 },
+ { REG_TIMING_HS_L, 0x00 },
+ { REG_TIMING_VS_H, 0x00 },
+ { REG_TIMING_VS_L, 0x00 },
+ { REG_TIMING_HW_H, 0x06 },
+ { REG_TIMING_HW_L, 0x5f },
+ { REG_TIMING_VH_H, 0x04 },
+ { REG_TIMING_VH_L, 0xb7 },
+ { REG_TIMING_DVPHO_H, 0x04 },
+ { REG_TIMING_DVPHO_L, 0x00 },
+ { REG_TIMING_DVPVO_H, 0x03 },
+ { REG_TIMING_DVPVO_L, 0x00 },
+ { REG_TIMING_HTS_H, 0x07 },
+ { REG_TIMING_HTS_L, 0x9c },
+ { REG_TIMING_VTS_H, 0x04 },
+ { REG_TIMING_VTS_L, 0xd0 },
+ { REG_TIMING_HOFFS_L, 0x10 },
+ { REG_TIMING_VOFFS_L, 0x06 },
+ { REG_TIMING_XINC, 0x11 },
+ { REG_TIMING_YINC, 0x11 },
+ { 0x3a02, 0x02 },
+ { 0x3a03, 0x68 },
+ { 0x3a08, 0x00 },
+ { 0x3a09, 0x5c },
+ { 0x3a0a, 0x00 },
+ { 0x3a0b, 0x4d },
+ { 0x3a0d, 0x08 },
+ { 0x3a0e, 0x06 },
+ { 0x3a14, 0x02 },
+ { 0x3a15, 0x28 },
+ { 0x3623, 0x00 },
+ { 0x3634, 0x76 },
+ { 0x3701, 0x44 },
+ { 0x3702, 0x18 },
+ { 0x3703, 0x24 },
+ { 0x3704, 0x24 },
+ { 0x3705, 0x0c },
+ { REG_TIMING_VERT_FORMAT, 0x80 },
+ { REG_TIMING_HORIZ_FORMAT, 0x00 },
+ { 0x370a, 0x52 },
+ { REG_VFIFO_READ_START_H, 0x00 },
+ { REG_VFIFO_READ_START_L, 0x80 },
+ { REG_ISP_CTRL02, 0x00 },
+ { REG_NULL, 0x00 },
+};
+
+/* 800X600 SVGA */
+static struct sensor_register ov2659_svga[] = {
+ { REG_TIMING_HS_H, 0x00 },
+ { REG_TIMING_HS_L, 0x00 },
+ { REG_TIMING_VS_H, 0x00 },
+ { REG_TIMING_VS_L, 0x00 },
+ { REG_TIMING_HW_H, 0x06 },
+ { REG_TIMING_HW_L, 0x5f },
+ { REG_TIMING_VH_H, 0x04 },
+ { REG_TIMING_VH_L, 0xb7 },
+ { REG_TIMING_DVPHO_H, 0x03 },
+ { REG_TIMING_DVPHO_L, 0x20 },
+ { REG_TIMING_DVPVO_H, 0x02 },
+ { REG_TIMING_DVPVO_L, 0x58 },
+ { REG_TIMING_HTS_H, 0x05 },
+ { REG_TIMING_HTS_L, 0x14 },
+ { REG_TIMING_VTS_H, 0x02 },
+ { REG_TIMING_VTS_L, 0x68 },
+ { REG_TIMING_HOFFS_L, 0x08 },
+ { REG_TIMING_VOFFS_L, 0x02 },
+ { REG_TIMING_XINC, 0x31 },
+ { REG_TIMING_YINC, 0x31 },
+ { 0x3a02, 0x02 },
+ { 0x3a03, 0x68 },
+ { 0x3a08, 0x00 },
+ { 0x3a09, 0x5c },
+ { 0x3a0a, 0x00 },
+ { 0x3a0b, 0x4d },
+ { 0x3a0d, 0x08 },
+ { 0x3a0e, 0x06 },
+ { 0x3a14, 0x02 },
+ { 0x3a15, 0x28 },
+ { 0x3623, 0x00 },
+ { 0x3634, 0x76 },
+ { 0x3701, 0x44 },
+ { 0x3702, 0x18 },
+ { 0x3703, 0x24 },
+ { 0x3704, 0x24 },
+ { 0x3705, 0x0c },
+ { REG_TIMING_VERT_FORMAT, 0x81 },
+ { REG_TIMING_HORIZ_FORMAT, 0x01 },
+ { 0x370a, 0x52 },
+ { REG_VFIFO_READ_START_H, 0x00 },
+ { REG_VFIFO_READ_START_L, 0x80 },
+ { REG_ISP_CTRL02, 0x00 },
+ { REG_NULL, 0x00 },
+};
+
+/* 640X480 VGA */
+static struct sensor_register ov2659_vga[] = {
+ { REG_TIMING_HS_H, 0x00 },
+ { REG_TIMING_HS_L, 0x00 },
+ { REG_TIMING_VS_H, 0x00 },
+ { REG_TIMING_VS_L, 0x00 },
+ { REG_TIMING_HW_H, 0x06 },
+ { REG_TIMING_HW_L, 0x5f },
+ { REG_TIMING_VH_H, 0x04 },
+ { REG_TIMING_VH_L, 0xb7 },
+ { REG_TIMING_DVPHO_H, 0x02 },
+ { REG_TIMING_DVPHO_L, 0x80 },
+ { REG_TIMING_DVPVO_H, 0x01 },
+ { REG_TIMING_DVPVO_L, 0xe0 },
+ { REG_TIMING_HTS_H, 0x05 },
+ { REG_TIMING_HTS_L, 0x14 },
+ { REG_TIMING_VTS_H, 0x02 },
+ { REG_TIMING_VTS_L, 0x68 },
+ { REG_TIMING_HOFFS_L, 0x08 },
+ { REG_TIMING_VOFFS_L, 0x02 },
+ { REG_TIMING_XINC, 0x31 },
+ { REG_TIMING_YINC, 0x31 },
+ { 0x3a02, 0x02 },
+ { 0x3a03, 0x68 },
+ { 0x3a08, 0x00 },
+ { 0x3a09, 0x5c },
+ { 0x3a0a, 0x00 },
+ { 0x3a0b, 0x4d },
+ { 0x3a0d, 0x08 },
+ { 0x3a0e, 0x06 },
+ { 0x3a14, 0x02 },
+ { 0x3a15, 0x28 },
+ { 0x3623, 0x00 },
+ { 0x3634, 0x76 },
+ { 0x3701, 0x44 },
+ { 0x3702, 0x18 },
+ { 0x3703, 0x24 },
+ { 0x3704, 0x24 },
+ { 0x3705, 0x0c },
+ { REG_TIMING_VERT_FORMAT, 0x81 },
+ { REG_TIMING_HORIZ_FORMAT, 0x01 },
+ { 0x370a, 0x52 },
+ { REG_VFIFO_READ_START_H, 0x00 },
+ { REG_VFIFO_READ_START_L, 0x80 },
+ { REG_ISP_CTRL02, 0x10 },
+ { REG_NULL, 0x00 },
+};
+
+/* 320X240 QVGA */
+static struct sensor_register ov2659_qvga[] = {
+ { REG_TIMING_HS_H, 0x00 },
+ { REG_TIMING_HS_L, 0x00 },
+ { REG_TIMING_VS_H, 0x00 },
+ { REG_TIMING_VS_L, 0x00 },
+ { REG_TIMING_HW_H, 0x06 },
+ { REG_TIMING_HW_L, 0x5f },
+ { REG_TIMING_VH_H, 0x04 },
+ { REG_TIMING_VH_L, 0xb7 },
+ { REG_TIMING_DVPHO_H, 0x01 },
+ { REG_TIMING_DVPHO_L, 0x40 },
+ { REG_TIMING_DVPVO_H, 0x00 },
+ { REG_TIMING_DVPVO_L, 0xf0 },
+ { REG_TIMING_HTS_H, 0x05 },
+ { REG_TIMING_HTS_L, 0x14 },
+ { REG_TIMING_VTS_H, 0x02 },
+ { REG_TIMING_VTS_L, 0x68 },
+ { REG_TIMING_HOFFS_L, 0x08 },
+ { REG_TIMING_VOFFS_L, 0x02 },
+ { REG_TIMING_XINC, 0x31 },
+ { REG_TIMING_YINC, 0x31 },
+ { 0x3a02, 0x02 },
+ { 0x3a03, 0x68 },
+ { 0x3a08, 0x00 },
+ { 0x3a09, 0x5c },
+ { 0x3a0a, 0x00 },
+ { 0x3a0b, 0x4d },
+ { 0x3a0d, 0x08 },
+ { 0x3a0e, 0x06 },
+ { 0x3a14, 0x02 },
+ { 0x3a15, 0x28 },
+ { 0x3623, 0x00 },
+ { 0x3634, 0x76 },
+ { 0x3701, 0x44 },
+ { 0x3702, 0x18 },
+ { 0x3703, 0x24 },
+ { 0x3704, 0x24 },
+ { 0x3705, 0x0c },
+ { REG_TIMING_VERT_FORMAT, 0x81 },
+ { REG_TIMING_HORIZ_FORMAT, 0x01 },
+ { 0x370a, 0x52 },
+ { REG_VFIFO_READ_START_H, 0x00 },
+ { REG_VFIFO_READ_START_L, 0x80 },
+ { REG_ISP_CTRL02, 0x10 },
+ { REG_NULL, 0x00 },
+};
+
+static const struct pll_ctrl_reg ctrl3[] = {
+ { 1, 0x00 },
+ { 2, 0x02 },
+ { 3, 0x03 },
+ { 4, 0x06 },
+ { 6, 0x0d },
+ { 8, 0x0e },
+ { 12, 0x0f },
+ { 16, 0x12 },
+ { 24, 0x13 },
+ { 32, 0x16 },
+ { 48, 0x1b },
+ { 64, 0x1e },
+ { 96, 0x1f },
+ { 0, 0x00 },
+};
+
+static const struct pll_ctrl_reg ctrl1[] = {
+ { 2, 0x10 },
+ { 4, 0x20 },
+ { 6, 0x30 },
+ { 8, 0x40 },
+ { 10, 0x50 },
+ { 12, 0x60 },
+ { 14, 0x70 },
+ { 16, 0x80 },
+ { 18, 0x90 },
+ { 20, 0xa0 },
+ { 22, 0xb0 },
+ { 24, 0xc0 },
+ { 26, 0xd0 },
+ { 28, 0xe0 },
+ { 30, 0xf0 },
+ { 0, 0x00 },
+};
+
+static const struct ov2659_framesize ov2659_framesizes[] = {
+ { /* QVGA */
+ .width = 320,
+ .height = 240,
+ .regs = ov2659_qvga,
+ .max_exp_lines = 248,
+ }, { /* VGA */
+ .width = 640,
+ .height = 480,
+ .regs = ov2659_vga,
+ .max_exp_lines = 498,
+ }, { /* SVGA */
+ .width = 800,
+ .height = 600,
+ .regs = ov2659_svga,
+ .max_exp_lines = 498,
+ }, { /* XGA */
+ .width = 1024,
+ .height = 768,
+ .regs = ov2659_xga,
+ .max_exp_lines = 498,
+ }, { /* 720P */
+ .width = 1280,
+ .height = 720,
+ .regs = ov2659_720p,
+ .max_exp_lines = 498,
+ }, { /* SXGA */
+ .width = 1280,
+ .height = 1024,
+ .regs = ov2659_sxga,
+ .max_exp_lines = 1048,
+ }, { /* UXGA */
+ .width = 1600,
+ .height = 1200,
+ .regs = ov2659_uxga,
+ .max_exp_lines = 498,
+ },
+};
+
+/* YUV422 YUYV */
+static struct sensor_register ov2659_format_yuyv[] = {
+ { REG_FORMAT_CTRL00, 0x30 },
+ { REG_NULL, 0x0 },
+};
+
+/* YUV422 UYVY */
+static struct sensor_register ov2659_format_uyvy[] = {
+ { REG_FORMAT_CTRL00, 0x32 },
+ { REG_NULL, 0x0 },
+};
+
+/* Raw Bayer BGGR */
+static struct sensor_register ov2659_format_bggr[] = {
+ { REG_FORMAT_CTRL00, 0x00 },
+ { REG_NULL, 0x0 },
+};
+
+/* RGB565 */
+static struct sensor_register ov2659_format_rgb565[] = {
+ { REG_FORMAT_CTRL00, 0x60 },
+ { REG_NULL, 0x0 },
+};
+
+static const struct ov2659_pixfmt ov2659_formats[] = {
+ {
+ .code = MEDIA_BUS_FMT_YUYV8_2X8,
+ .format_ctrl_regs = ov2659_format_yuyv,
+ }, {
+ .code = MEDIA_BUS_FMT_UYVY8_2X8,
+ .format_ctrl_regs = ov2659_format_uyvy,
+ }, {
+ .code = MEDIA_BUS_FMT_RGB565_2X8_BE,
+ .format_ctrl_regs = ov2659_format_rgb565,
+ }, {
+ .code = MEDIA_BUS_FMT_SBGGR8_1X8,
+ .format_ctrl_regs = ov2659_format_bggr,
+ },
+};
+
+static inline struct ov2659 *to_ov2659(struct v4l2_subdev *sd)
+{
+ return container_of(sd, struct ov2659, sd);
+}
+
+/* sensor register write */
+static int ov2659_write(struct i2c_client *client, u16 reg, u8 val)
+{
+ struct i2c_msg msg;
+ u8 buf[3];
+ int ret;
+
+ buf[0] = reg >> 8;
+ buf[1] = reg & 0xFF;
+ buf[2] = val;
+
+ msg.addr = client->addr;
+ msg.flags = client->flags;
+ msg.buf = buf;
+ msg.len = sizeof(buf);
+
+ ret = i2c_transfer(client->adapter, &msg, 1);
+ if (ret >= 0)
+ return 0;
+
+ dev_dbg(&client->dev,
+ "ov2659 write reg(0x%x val:0x%x) failed !\n", reg, val);
+
+ return ret;
+}
+
+/* sensor register read */
+static int ov2659_read(struct i2c_client *client, u16 reg, u8 *val)
+{
+ struct i2c_msg msg[2];
+ u8 buf[2];
+ int ret;
+
+ buf[0] = reg >> 8;
+ buf[1] = reg & 0xFF;
+
+ msg[0].addr = client->addr;
+ msg[0].flags = client->flags;
+ msg[0].buf = buf;
+ msg[0].len = sizeof(buf);
+
+ msg[1].addr = client->addr;
+ msg[1].flags = client->flags | I2C_M_RD;
+ msg[1].buf = buf;
+ msg[1].len = 1;
+
+ ret = i2c_transfer(client->adapter, msg, 2);
+ if (ret >= 0) {
+ *val = buf[0];
+ return 0;
+ }
+
+ dev_dbg(&client->dev,
+ "ov2659 read reg(0x%x val:0x%x) failed !\n", reg, *val);
+
+ return ret;
+}
+
+static int ov2659_write_array(struct i2c_client *client,
+ const struct sensor_register *regs)
+{
+ int i, ret = 0;
+
+ for (i = 0; ret == 0 && regs[i].addr; i++)
+ ret = ov2659_write(client, regs[i].addr, regs[i].value);
+
+ return ret;
+}
+
+static void ov2659_pll_calc_params(struct ov2659 *ov2659)
+{
+ const struct ov2659_platform_data *pdata = ov2659->pdata;
+ u8 ctrl1_reg = 0, ctrl2_reg = 0, ctrl3_reg = 0;
+ struct i2c_client *client = ov2659->client;
+ unsigned int desired = pdata->link_frequency;
+ u32 s_prediv = 1, s_postdiv = 1, s_mult = 1;
+ u32 prediv, postdiv, mult;
+ u32 bestdelta = -1;
+ u32 delta, actual;
+ int i, j;
+
+ for (i = 0; ctrl1[i].div != 0; i++) {
+ postdiv = ctrl1[i].div;
+ for (j = 0; ctrl3[j].div != 0; j++) {
+ prediv = ctrl3[j].div;
+ for (mult = 1; mult <= 63; mult++) {
+ actual = ov2659->xvclk_frequency;
+ actual *= mult;
+ actual /= prediv;
+ actual /= postdiv;
+ delta = actual - desired;
+ delta = abs(delta);
+
+ if ((delta < bestdelta) || (bestdelta == -1)) {
+ bestdelta = delta;
+ s_mult = mult;
+ s_prediv = prediv;
+ s_postdiv = postdiv;
+ ctrl1_reg = ctrl1[i].reg;
+ ctrl2_reg = mult;
+ ctrl3_reg = ctrl3[j].reg;
+ }
+ }
+ }
+ }
+
+ ov2659->pll.ctrl1 = ctrl1_reg;
+ ov2659->pll.ctrl2 = ctrl2_reg;
+ ov2659->pll.ctrl3 = ctrl3_reg;
+
+ dev_dbg(&client->dev,
+ "Actual reg config: ctrl1_reg: %02x ctrl2_reg: %02x ctrl3_reg: %02x\n",
+ ctrl1_reg, ctrl2_reg, ctrl3_reg);
+}
+
+static int ov2659_set_pixel_clock(struct ov2659 *ov2659)
+{
+ struct i2c_client *client = ov2659->client;
+ struct sensor_register pll_regs[] = {
+ {REG_SC_PLL_CTRL1, ov2659->pll.ctrl1},
+ {REG_SC_PLL_CTRL2, ov2659->pll.ctrl2},
+ {REG_SC_PLL_CTRL3, ov2659->pll.ctrl3},
+ {REG_NULL, 0x00},
+ };
+
+ dev_dbg(&client->dev, "%s\n", __func__);
+
+ return ov2659_write_array(client, pll_regs);
+};
+
+static void ov2659_get_default_format(struct v4l2_mbus_framefmt *format)
+{
+ format->width = ov2659_framesizes[2].width;
+ format->height = ov2659_framesizes[2].height;
+ format->colorspace = V4L2_COLORSPACE_SRGB;
+ format->code = ov2659_formats[0].code;
+ format->field = V4L2_FIELD_NONE;
+}
+
+static void ov2659_set_streaming(struct ov2659 *ov2659, int on)
+{
+ struct i2c_client *client = ov2659->client;
+ int ret;
+
+ on = !!on;
+
+ dev_dbg(&client->dev, "%s: on: %d\n", __func__, on);
+
+ ret = ov2659_write(client, REG_SOFTWARE_STANDBY, on);
+ if (ret)
+ dev_err(&client->dev, "ov2659 soft standby failed\n");
+}
+
+static int ov2659_init(struct v4l2_subdev *sd, u32 val)
+{
+ struct i2c_client *client = v4l2_get_subdevdata(sd);
+
+ return ov2659_write_array(client, ov2659_init_regs);
+}
+
+/*
+ * V4L2 subdev video and pad level operations
+ */
+
+static int ov2659_enum_mbus_code(struct v4l2_subdev *sd,
+ struct v4l2_subdev_pad_config *cfg,
+ struct v4l2_subdev_mbus_code_enum *code)
+{
+ struct i2c_client *client = v4l2_get_subdevdata(sd);
+
+ dev_dbg(&client->dev, "%s:\n", __func__);
+
+ if (code->index >= ARRAY_SIZE(ov2659_formats))
+ return -EINVAL;
+
+ code->code = ov2659_formats[code->index].code;
+
+ return 0;
+}
+
+static int ov2659_enum_frame_sizes(struct v4l2_subdev *sd,
+ struct v4l2_subdev_pad_config *cfg,
+ struct v4l2_subdev_frame_size_enum *fse)
+{
+ struct i2c_client *client = v4l2_get_subdevdata(sd);
+ int i = ARRAY_SIZE(ov2659_formats);
+
+ dev_dbg(&client->dev, "%s:\n", __func__);
+
+ if (fse->index >= ARRAY_SIZE(ov2659_framesizes))
+ return -EINVAL;
+
+ while (--i)
+ if (fse->code == ov2659_formats[i].code)
+ break;
+
+ fse->code = ov2659_formats[i].code;
+
+ fse->min_width = ov2659_framesizes[fse->index].width;
+ fse->max_width = fse->min_width;
+ fse->max_height = ov2659_framesizes[fse->index].height;
+ fse->min_height = fse->max_height;
+
+ return 0;
+}
+
+static int ov2659_get_fmt(struct v4l2_subdev *sd,
+ struct v4l2_subdev_pad_config *cfg,
+ struct v4l2_subdev_format *fmt)
+{
+ struct i2c_client *client = v4l2_get_subdevdata(sd);
+ struct ov2659 *ov2659 = to_ov2659(sd);
+ struct v4l2_mbus_framefmt *mf;
+
+ dev_dbg(&client->dev, "ov2659_get_fmt\n");
+
+ if (fmt->which == V4L2_SUBDEV_FORMAT_TRY) {
+ mf = v4l2_subdev_get_try_format(sd, cfg, 0);
+ mutex_lock(&ov2659->lock);
+ fmt->format = *mf;
+ mutex_unlock(&ov2659->lock);
+ return 0;
+ }
+
+ mutex_lock(&ov2659->lock);
+ fmt->format = ov2659->format;
+ mutex_unlock(&ov2659->lock);
+
+ dev_dbg(&client->dev, "ov2659_get_fmt: %x %dx%d\n",
+ ov2659->format.code, ov2659->format.width,
+ ov2659->format.height);
+
+ return 0;
+}
+
+static void __ov2659_try_frame_size(struct v4l2_mbus_framefmt *mf,
+ const struct ov2659_framesize **size)
+{
+ const struct ov2659_framesize *fsize = &ov2659_framesizes[0];
+ const struct ov2659_framesize *match = NULL;
+ int i = ARRAY_SIZE(ov2659_framesizes);
+ unsigned int min_err = UINT_MAX;
+
+ while (i--) {
+ int err = abs(fsize->width - mf->width)
+ + abs(fsize->height - mf->height);
+ if ((err < min_err) && (fsize->regs[0].addr)) {
+ min_err = err;
+ match = fsize;
+ }
+ fsize++;
+ }
+
+ if (!match)
+ match = &ov2659_framesizes[2];
+
+ mf->width = match->width;
+ mf->height = match->height;
+
+ if (size)
+ *size = match;
+}
+
+static int ov2659_set_fmt(struct v4l2_subdev *sd,
+ struct v4l2_subdev_pad_config *cfg,
+ struct v4l2_subdev_format *fmt)
+{
+ struct i2c_client *client = v4l2_get_subdevdata(sd);
+ unsigned int index = ARRAY_SIZE(ov2659_formats);
+ struct v4l2_mbus_framefmt *mf = &fmt->format;
+ const struct ov2659_framesize *size = NULL;
+ struct ov2659 *ov2659 = to_ov2659(sd);
+ int ret = 0;
+
+ dev_dbg(&client->dev, "ov2659_set_fmt\n");
+
+ __ov2659_try_frame_size(mf, &size);
+
+ while (--index >= 0)
+ if (ov2659_formats[index].code == mf->code)
+ break;
+
+ if (index < 0)
+ return -EINVAL;
+
+ mf->colorspace = V4L2_COLORSPACE_SRGB;
+ mf->code = ov2659_formats[index].code;
+ mf->field = V4L2_FIELD_NONE;
+
+ mutex_lock(&ov2659->lock);
+
+ if (fmt->which == V4L2_SUBDEV_FORMAT_TRY) {
+ mf = v4l2_subdev_get_try_format(sd, cfg, fmt->pad);
+ *mf = fmt->format;
+ } else {
+ s64 val;
+
+ if (ov2659->streaming) {
+ mutex_unlock(&ov2659->lock);
+ return -EBUSY;
+ }
+
+ ov2659->frame_size = size;
+ ov2659->format = fmt->format;
+ ov2659->format_ctrl_regs =
+ ov2659_formats[index].format_ctrl_regs;
+
+ if (ov2659->format.code != MEDIA_BUS_FMT_SBGGR8_1X8)
+ val = ov2659->pdata->link_frequency / 2;
+ else
+ val = ov2659->pdata->link_frequency;
+
+ ret = v4l2_ctrl_s_ctrl_int64(ov2659->link_frequency, val);
+ if (ret < 0)
+ dev_warn(&client->dev,
+ "failed to set link_frequency rate (%d)\n",
+ ret);
+ }
+
+ mutex_unlock(&ov2659->lock);
+ return ret;
+}
+
+static int ov2659_set_frame_size(struct ov2659 *ov2659)
+{
+ struct i2c_client *client = ov2659->client;
+
+ dev_dbg(&client->dev, "%s\n", __func__);
+
+ return ov2659_write_array(ov2659->client, ov2659->frame_size->regs);
+}
+
+static int ov2659_set_format(struct ov2659 *ov2659)
+{
+ struct i2c_client *client = ov2659->client;
+
+ dev_dbg(&client->dev, "%s\n", __func__);
+
+ return ov2659_write_array(ov2659->client, ov2659->format_ctrl_regs);
+}
+
+static int ov2659_s_stream(struct v4l2_subdev *sd, int on)
+{
+ struct i2c_client *client = v4l2_get_subdevdata(sd);
+ struct ov2659 *ov2659 = to_ov2659(sd);
+ int ret = 0;
+
+ dev_dbg(&client->dev, "%s: on: %d\n", __func__, on);
+
+ mutex_lock(&ov2659->lock);
+
+ on = !!on;
+
+ if (ov2659->streaming == on)
+ goto unlock;
+
+ if (!on) {
+ /* Stop Streaming Sequence */
+ ov2659_set_streaming(ov2659, 0);
+ ov2659->streaming = on;
+ goto unlock;
+ }
+
+ ov2659_set_pixel_clock(ov2659);
+ ov2659_set_frame_size(ov2659);
+ ov2659_set_format(ov2659);
+ ov2659_set_streaming(ov2659, 1);
+ ov2659->streaming = on;
+
+unlock:
+ mutex_unlock(&ov2659->lock);
+ return ret;
+}
+
+static int ov2659_set_test_pattern(struct ov2659 *ov2659, int value)
+{
+ struct i2c_client *client = v4l2_get_subdevdata(&ov2659->sd);
+ int ret;
+ u8 val;
+
+ ret = ov2659_read(client, REG_PRE_ISP_CTRL00, &val);
+ if (ret < 0)
+ return ret;
+
+ switch (value) {
+ case 0:
+ val &= ~TEST_PATTERN_ENABLE;
+ break;
+ case 1:
+ val &= VERTICAL_COLOR_BAR_MASK;
+ val |= TEST_PATTERN_ENABLE;
+ break;
+ }
+
+ return ov2659_write(client, REG_PRE_ISP_CTRL00, val);
+}
+
+static int ov2659_s_ctrl(struct v4l2_ctrl *ctrl)
+{
+ struct ov2659 *ov2659 =
+ container_of(ctrl->handler, struct ov2659, ctrls);
+
+ switch (ctrl->id) {
+ case V4L2_CID_TEST_PATTERN:
+ return ov2659_set_test_pattern(ov2659, ctrl->val);
+ }
+
+ return 0;
+}
+
+static struct v4l2_ctrl_ops ov2659_ctrl_ops = {
+ .s_ctrl = ov2659_s_ctrl,
+};
+
+static const char * const ov2659_test_pattern_menu[] = {
+ "Disabled",
+ "Vertical Color Bars",
+};
+
+/* -----------------------------------------------------------------------------
+ * V4L2 subdev internal operations
+ */
+
+static int ov2659_open(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh)
+{
+ struct i2c_client *client = v4l2_get_subdevdata(sd);
+ struct v4l2_mbus_framefmt *format =
+ v4l2_subdev_get_try_format(sd, fh->pad, 0);
+
+ dev_dbg(&client->dev, "%s:\n", __func__);
+
+ ov2659_get_default_format(format);
+
+ return 0;
+}
+
+static const struct v4l2_subdev_core_ops ov2659_subdev_core_ops = {
+ .log_status = v4l2_ctrl_subdev_log_status,
+ .subscribe_event = v4l2_ctrl_subdev_subscribe_event,
+ .unsubscribe_event = v4l2_event_subdev_unsubscribe,
+};
+
+static const struct v4l2_subdev_video_ops ov2659_subdev_video_ops = {
+ .s_stream = ov2659_s_stream,
+};
+
+static const struct v4l2_subdev_pad_ops ov2659_subdev_pad_ops = {
+ .enum_mbus_code = ov2659_enum_mbus_code,
+ .enum_frame_size = ov2659_enum_frame_sizes,
+ .get_fmt = ov2659_get_fmt,
+ .set_fmt = ov2659_set_fmt,
+};
+
+static const struct v4l2_subdev_ops ov2659_subdev_ops = {
+ .core = &ov2659_subdev_core_ops,
+ .video = &ov2659_subdev_video_ops,
+ .pad = &ov2659_subdev_pad_ops,
+};
+
+static const struct v4l2_subdev_internal_ops ov2659_subdev_internal_ops = {
+ .open = ov2659_open,
+};
+
+static int ov2659_detect(struct v4l2_subdev *sd)
+{
+ struct i2c_client *client = v4l2_get_subdevdata(sd);
+ u8 pid, ver;
+ int ret;
+
+ dev_dbg(&client->dev, "%s:\n", __func__);
+
+ ret = ov2659_write(client, REG_SOFTWARE_RESET, 0x01);
+ if (ret != 0) {
+ dev_err(&client->dev, "Sensor soft reset failed\n");
+ return -ENODEV;
+ }
+ usleep_range(1000, 2000);
+
+ ret = ov2659_init(sd, 0);
+ if (ret < 0)
+ return ret;
+
+ /* Check sensor revision */
+ ret = ov2659_read(client, REG_SC_CHIP_ID_H, &pid);
+ if (!ret)
+ ret = ov2659_read(client, REG_SC_CHIP_ID_L, &ver);
+
+ if (!ret) {
+ unsigned short id;
+
+ id = OV265X_ID(pid, ver);
+ if (id != OV2659_ID)
+ dev_err(&client->dev,
+ "Sensor detection failed (%04X, %d)\n",
+ id, ret);
+ else
+ dev_info(&client->dev, "Found OV%04X sensor\n", id);
+ }
+
+ return ret;
+}
+
+static struct ov2659_platform_data *
+ov2659_get_pdata(struct i2c_client *client)
+{
+ struct ov2659_platform_data *pdata;
+ struct device_node *endpoint;
+ int ret;
+
+ if (!IS_ENABLED(CONFIG_OF) || !client->dev.of_node)
+ return client->dev.platform_data;
+
+ endpoint = of_graph_get_next_endpoint(client->dev.of_node, NULL);
+ if (!endpoint)
+ return NULL;
+
+ pdata = devm_kzalloc(&client->dev, sizeof(*pdata), GFP_KERNEL);
+ if (!pdata)
+ goto done;
+
+ ret = of_property_read_u64(endpoint, "link-frequencies",
+ &pdata->link_frequency);
+ if (ret) {
+ dev_err(&client->dev, "link-frequencies property not found\n");
+ pdata = NULL;
+ }
+
+done:
+ of_node_put(endpoint);
+ return pdata;
+}
+
+static int ov2659_probe(struct i2c_client *client,
+ const struct i2c_device_id *id)
+{
+ const struct ov2659_platform_data *pdata = ov2659_get_pdata(client);
+ struct v4l2_subdev *sd;
+ struct ov2659 *ov2659;
+ struct clk *clk;
+ int ret;
+
+ if (!pdata) {
+ dev_err(&client->dev, "platform data not specified\n");
+ return -EINVAL;
+ }
+
+ ov2659 = devm_kzalloc(&client->dev, sizeof(*ov2659), GFP_KERNEL);
+ if (!ov2659)
+ return -ENOMEM;
+
+ ov2659->pdata = pdata;
+ ov2659->client = client;
+
+ clk = devm_clk_get(&client->dev, "xvclk");
+ if (IS_ERR(clk))
+ return PTR_ERR(clk);
+
+ ov2659->xvclk_frequency = clk_get_rate(clk);
+ if (ov2659->xvclk_frequency < 6000000 ||
+ ov2659->xvclk_frequency > 27000000)
+ return -EINVAL;
+
+ v4l2_ctrl_handler_init(&ov2659->ctrls, 2);
+ ov2659->link_frequency =
+ v4l2_ctrl_new_std(&ov2659->ctrls, &ov2659_ctrl_ops,
+ V4L2_CID_PIXEL_RATE,
+ pdata->link_frequency / 2,
+ pdata->link_frequency, 1,
+ pdata->link_frequency);
+ v4l2_ctrl_new_std_menu_items(&ov2659->ctrls, &ov2659_ctrl_ops,
+ V4L2_CID_TEST_PATTERN,
+ ARRAY_SIZE(ov2659_test_pattern_menu) - 1,
+ 0, 0, ov2659_test_pattern_menu);
+ ov2659->sd.ctrl_handler = &ov2659->ctrls;
+
+ if (ov2659->ctrls.error) {
+ dev_err(&client->dev, "%s: control initialization error %d\n",
+ __func__, ov2659->ctrls.error);
+ return ov2659->ctrls.error;
+ }
+
+ sd = &ov2659->sd;
+ client->flags |= I2C_CLIENT_SCCB;
+ v4l2_i2c_subdev_init(sd, client, &ov2659_subdev_ops);
+
+ sd->internal_ops = &ov2659_subdev_internal_ops;
+ sd->flags |= V4L2_SUBDEV_FL_HAS_DEVNODE |
+ V4L2_SUBDEV_FL_HAS_EVENTS;
+
+#if defined(CONFIG_MEDIA_CONTROLLER)
+ ov2659->pad.flags = MEDIA_PAD_FL_SOURCE;
+ sd->entity.type = MEDIA_ENT_T_V4L2_SUBDEV_SENSOR;
+ ret = media_entity_init(&sd->entity, 1, &ov2659->pad, 0);
+ if (ret < 0) {
+ v4l2_ctrl_handler_free(&ov2659->ctrls);
+ return ret;
+ }
+#endif
+
+ mutex_init(&ov2659->lock);
+
+ ov2659_get_default_format(&ov2659->format);
+ ov2659->frame_size = &ov2659_framesizes[2];
+ ov2659->format_ctrl_regs = ov2659_formats[0].format_ctrl_regs;
+
+ ret = ov2659_detect(sd);
+ if (ret < 0)
+ goto error;
+
+ /* Calculate the PLL register value needed */
+ ov2659_pll_calc_params(ov2659);
+
+ ret = v4l2_async_register_subdev(&ov2659->sd);
+ if (ret)
+ goto error;
+
+ dev_info(&client->dev, "%s sensor driver registered !!\n", sd->name);
+
+ return 0;
+
+error:
+ v4l2_ctrl_handler_free(&ov2659->ctrls);
+#if defined(CONFIG_MEDIA_CONTROLLER)
+ media_entity_cleanup(&sd->entity);
+#endif
+ mutex_destroy(&ov2659->lock);
+ return ret;
+}
+
+static int ov2659_remove(struct i2c_client *client)
+{
+ struct v4l2_subdev *sd = i2c_get_clientdata(client);
+ struct ov2659 *ov2659 = to_ov2659(sd);
+
+ v4l2_ctrl_handler_free(&ov2659->ctrls);
+ v4l2_async_unregister_subdev(sd);
+#if defined(CONFIG_MEDIA_CONTROLLER)
+ media_entity_cleanup(&sd->entity);
+#endif
+ mutex_destroy(&ov2659->lock);
+
+ return 0;
+}
+
+static const struct i2c_device_id ov2659_id[] = {
+ { "ov2659", 0 },
+ { /* sentinel */ },
+};
+MODULE_DEVICE_TABLE(i2c, ov2659_id);
+
+#if IS_ENABLED(CONFIG_OF)
+static const struct of_device_id ov2659_of_match[] = {
+ { .compatible = "ovti,ov2659", },
+ { /* sentinel */ },
+};
+MODULE_DEVICE_TABLE(of, ov2659_of_match);
+#endif
+
+static struct i2c_driver ov2659_i2c_driver = {
+ .driver = {
+ .name = DRIVER_NAME,
+ .of_match_table = of_match_ptr(ov2659_of_match),
+ },
+ .probe = ov2659_probe,
+ .remove = ov2659_remove,
+ .id_table = ov2659_id,
+};
+
+module_i2c_driver(ov2659_i2c_driver);
+
+MODULE_AUTHOR("Benoit Parrot <bparrot@ti.com>");
+MODULE_DESCRIPTION("OV2659 CMOS Image Sensor driver");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/media/i2c/ov7670.c b/drivers/media/i2c/ov7670.c
index 957927f..b984752 100644
--- a/drivers/media/i2c/ov7670.c
+++ b/drivers/media/i2c/ov7670.c
@@ -1069,29 +1069,35 @@
static int ov7670_frame_rates[] = { 30, 15, 10, 5, 1 };
-static int ov7670_enum_frameintervals(struct v4l2_subdev *sd,
- struct v4l2_frmivalenum *interval)
+static int ov7670_enum_frame_interval(struct v4l2_subdev *sd,
+ struct v4l2_subdev_pad_config *cfg,
+ struct v4l2_subdev_frame_interval_enum *fie)
{
- if (interval->index >= ARRAY_SIZE(ov7670_frame_rates))
+ if (fie->pad)
return -EINVAL;
- interval->type = V4L2_FRMIVAL_TYPE_DISCRETE;
- interval->discrete.numerator = 1;
- interval->discrete.denominator = ov7670_frame_rates[interval->index];
+ if (fie->index >= ARRAY_SIZE(ov7670_frame_rates))
+ return -EINVAL;
+ fie->interval.numerator = 1;
+ fie->interval.denominator = ov7670_frame_rates[fie->index];
return 0;
}
/*
* Frame size enumeration
*/
-static int ov7670_enum_framesizes(struct v4l2_subdev *sd,
- struct v4l2_frmsizeenum *fsize)
+static int ov7670_enum_frame_size(struct v4l2_subdev *sd,
+ struct v4l2_subdev_pad_config *cfg,
+ struct v4l2_subdev_frame_size_enum *fse)
{
struct ov7670_info *info = to_state(sd);
int i;
int num_valid = -1;
- __u32 index = fsize->index;
+ __u32 index = fse->index;
unsigned int n_win_sizes = info->devtype->n_win_sizes;
+ if (fse->pad)
+ return -EINVAL;
+
/*
* If a minimum width/height was requested, filter out the capture
* windows that fall outside that.
@@ -1103,9 +1109,8 @@
if (info->min_height && win->height < info->min_height)
continue;
if (index == ++num_valid) {
- fsize->type = V4L2_FRMSIZE_TYPE_DISCRETE;
- fsize->discrete.width = win->width;
- fsize->discrete.height = win->height;
+ fse->min_width = fse->max_width = win->width;
+ fse->min_height = fse->max_height = win->height;
return 0;
}
}
@@ -1485,13 +1490,17 @@
.s_mbus_fmt = ov7670_s_mbus_fmt,
.s_parm = ov7670_s_parm,
.g_parm = ov7670_g_parm,
- .enum_frameintervals = ov7670_enum_frameintervals,
- .enum_framesizes = ov7670_enum_framesizes,
+};
+
+static const struct v4l2_subdev_pad_ops ov7670_pad_ops = {
+ .enum_frame_interval = ov7670_enum_frame_interval,
+ .enum_frame_size = ov7670_enum_frame_size,
};
static const struct v4l2_subdev_ops ov7670_ops = {
.core = &ov7670_core_ops,
.video = &ov7670_video_ops,
+ .pad = &ov7670_pad_ops,
};
/* ----------------------------------------------------------------------- */
diff --git a/drivers/media/i2c/ov9650.c b/drivers/media/i2c/ov9650.c
index 2246bd5..2bc4733 100644
--- a/drivers/media/i2c/ov9650.c
+++ b/drivers/media/i2c/ov9650.c
@@ -1067,7 +1067,7 @@
}
static int ov965x_enum_mbus_code(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_mbus_code_enum *code)
{
if (code->index >= ARRAY_SIZE(ov965x_formats))
@@ -1078,7 +1078,7 @@
}
static int ov965x_enum_frame_sizes(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_frame_size_enum *fse)
{
int i = ARRAY_SIZE(ov965x_formats);
@@ -1164,14 +1164,14 @@
return ret;
}
-static int ov965x_get_fmt(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+static int ov965x_get_fmt(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct ov965x *ov965x = to_ov965x(sd);
struct v4l2_mbus_framefmt *mf;
if (fmt->which == V4L2_SUBDEV_FORMAT_TRY) {
- mf = v4l2_subdev_get_try_format(fh, 0);
+ mf = v4l2_subdev_get_try_format(sd, cfg, 0);
fmt->format = *mf;
return 0;
}
@@ -1208,7 +1208,7 @@
*size = match;
}
-static int ov965x_set_fmt(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+static int ov965x_set_fmt(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
unsigned int index = ARRAY_SIZE(ov965x_formats);
@@ -1230,8 +1230,8 @@
mutex_lock(&ov965x->lock);
if (fmt->which == V4L2_SUBDEV_FORMAT_TRY) {
- if (fh != NULL) {
- mf = v4l2_subdev_get_try_format(fh, fmt->pad);
+ if (cfg != NULL) {
+ mf = v4l2_subdev_get_try_format(sd, cfg, fmt->pad);
*mf = fmt->format;
}
} else {
@@ -1361,7 +1361,7 @@
*/
static int ov965x_open(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh)
{
- struct v4l2_mbus_framefmt *mf = v4l2_subdev_get_try_format(fh, 0);
+ struct v4l2_mbus_framefmt *mf = v4l2_subdev_get_try_format(sd, fh->pad, 0);
ov965x_get_default_format(mf);
return 0;
diff --git a/drivers/media/i2c/s5c73m3/s5c73m3-core.c b/drivers/media/i2c/s5c73m3/s5c73m3-core.c
index ee0f57e..08b234b 100644
--- a/drivers/media/i2c/s5c73m3/s5c73m3-core.c
+++ b/drivers/media/i2c/s5c73m3/s5c73m3-core.c
@@ -824,10 +824,11 @@
}
static void s5c73m3_oif_try_format(struct s5c73m3 *state,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt,
const struct s5c73m3_frame_size **fs)
{
+ struct v4l2_subdev *sd = &state->sensor_sd;
u32 code;
switch (fmt->pad) {
@@ -850,7 +851,7 @@
*fs = state->oif_pix_size[RES_ISP];
else
*fs = s5c73m3_find_frame_size(
- v4l2_subdev_get_try_format(fh,
+ v4l2_subdev_get_try_format(sd, cfg,
OIF_ISP_PAD),
RES_ISP);
break;
@@ -860,7 +861,7 @@
}
static void s5c73m3_try_format(struct s5c73m3 *state,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt,
const struct s5c73m3_frame_size **fs)
{
@@ -952,7 +953,7 @@
}
static int s5c73m3_oif_enum_frame_interval(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_frame_interval_enum *fie)
{
struct s5c73m3 *state = oif_sd_to_s5c73m3(sd);
@@ -990,7 +991,7 @@
}
static int s5c73m3_get_fmt(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct s5c73m3 *state = sensor_sd_to_s5c73m3(sd);
@@ -998,7 +999,7 @@
u32 code;
if (fmt->which == V4L2_SUBDEV_FORMAT_TRY) {
- fmt->format = *v4l2_subdev_get_try_format(fh, fmt->pad);
+ fmt->format = *v4l2_subdev_get_try_format(sd, cfg, fmt->pad);
return 0;
}
@@ -1024,7 +1025,7 @@
}
static int s5c73m3_oif_get_fmt(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct s5c73m3 *state = oif_sd_to_s5c73m3(sd);
@@ -1032,7 +1033,7 @@
u32 code;
if (fmt->which == V4L2_SUBDEV_FORMAT_TRY) {
- fmt->format = *v4l2_subdev_get_try_format(fh, fmt->pad);
+ fmt->format = *v4l2_subdev_get_try_format(sd, cfg, fmt->pad);
return 0;
}
@@ -1062,7 +1063,7 @@
}
static int s5c73m3_set_fmt(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
const struct s5c73m3_frame_size *frame_size = NULL;
@@ -1072,10 +1073,10 @@
mutex_lock(&state->lock);
- s5c73m3_try_format(state, fh, fmt, &frame_size);
+ s5c73m3_try_format(state, cfg, fmt, &frame_size);
if (fmt->which == V4L2_SUBDEV_FORMAT_TRY) {
- mf = v4l2_subdev_get_try_format(fh, fmt->pad);
+ mf = v4l2_subdev_get_try_format(sd, cfg, fmt->pad);
*mf = fmt->format;
} else {
switch (fmt->pad) {
@@ -1101,7 +1102,7 @@
}
static int s5c73m3_oif_set_fmt(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
const struct s5c73m3_frame_size *frame_size = NULL;
@@ -1111,13 +1112,13 @@
mutex_lock(&state->lock);
- s5c73m3_oif_try_format(state, fh, fmt, &frame_size);
+ s5c73m3_oif_try_format(state, cfg, fmt, &frame_size);
if (fmt->which == V4L2_SUBDEV_FORMAT_TRY) {
- mf = v4l2_subdev_get_try_format(fh, fmt->pad);
+ mf = v4l2_subdev_get_try_format(sd, cfg, fmt->pad);
*mf = fmt->format;
if (fmt->pad == OIF_ISP_PAD) {
- mf = v4l2_subdev_get_try_format(fh, OIF_SOURCE_PAD);
+ mf = v4l2_subdev_get_try_format(sd, cfg, OIF_SOURCE_PAD);
mf->width = fmt->format.width;
mf->height = fmt->format.height;
}
@@ -1189,7 +1190,7 @@
}
static int s5c73m3_enum_mbus_code(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_mbus_code_enum *code)
{
static const int codes[] = {
@@ -1205,7 +1206,7 @@
}
static int s5c73m3_oif_enum_mbus_code(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_mbus_code_enum *code)
{
int ret;
@@ -1220,7 +1221,7 @@
}
static int s5c73m3_enum_frame_size(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_frame_size_enum *fse)
{
int idx;
@@ -1247,9 +1248,10 @@
}
static int s5c73m3_oif_enum_frame_size(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_frame_size_enum *fse)
{
+ struct s5c73m3 *state = oif_sd_to_s5c73m3(sd);
int idx;
if (fse->pad == OIF_SOURCE_PAD) {
@@ -1259,11 +1261,25 @@
switch (fse->code) {
case S5C73M3_JPEG_FMT:
case S5C73M3_ISP_FMT: {
- struct v4l2_mbus_framefmt *mf =
- v4l2_subdev_get_try_format(fh, OIF_ISP_PAD);
+ unsigned w, h;
- fse->max_width = fse->min_width = mf->width;
- fse->max_height = fse->min_height = mf->height;
+ if (fse->which == V4L2_SUBDEV_FORMAT_TRY) {
+ struct v4l2_mbus_framefmt *mf;
+
+ mf = v4l2_subdev_get_try_format(sd, cfg,
+ OIF_ISP_PAD);
+
+ w = mf->width;
+ h = mf->height;
+ } else {
+ const struct s5c73m3_frame_size *fs;
+
+ fs = state->oif_pix_size[RES_ISP];
+ w = fs->width;
+ h = fs->height;
+ }
+ fse->max_width = fse->min_width = w;
+ fse->max_height = fse->min_height = h;
return 0;
}
default:
@@ -1306,11 +1322,11 @@
{
struct v4l2_mbus_framefmt *mf;
- mf = v4l2_subdev_get_try_format(fh, S5C73M3_ISP_PAD);
+ mf = v4l2_subdev_get_try_format(sd, fh->pad, S5C73M3_ISP_PAD);
s5c73m3_fill_mbus_fmt(mf, &s5c73m3_isp_resolutions[1],
S5C73M3_ISP_FMT);
- mf = v4l2_subdev_get_try_format(fh, S5C73M3_JPEG_PAD);
+ mf = v4l2_subdev_get_try_format(sd, fh->pad, S5C73M3_JPEG_PAD);
s5c73m3_fill_mbus_fmt(mf, &s5c73m3_jpeg_resolutions[1],
S5C73M3_JPEG_FMT);
@@ -1321,15 +1337,15 @@
{
struct v4l2_mbus_framefmt *mf;
- mf = v4l2_subdev_get_try_format(fh, OIF_ISP_PAD);
+ mf = v4l2_subdev_get_try_format(sd, fh->pad, OIF_ISP_PAD);
s5c73m3_fill_mbus_fmt(mf, &s5c73m3_isp_resolutions[1],
S5C73M3_ISP_FMT);
- mf = v4l2_subdev_get_try_format(fh, OIF_JPEG_PAD);
+ mf = v4l2_subdev_get_try_format(sd, fh->pad, OIF_JPEG_PAD);
s5c73m3_fill_mbus_fmt(mf, &s5c73m3_jpeg_resolutions[1],
S5C73M3_JPEG_FMT);
- mf = v4l2_subdev_get_try_format(fh, OIF_SOURCE_PAD);
+ mf = v4l2_subdev_get_try_format(sd, fh->pad, OIF_SOURCE_PAD);
s5c73m3_fill_mbus_fmt(mf, &s5c73m3_isp_resolutions[1],
S5C73M3_ISP_FMT);
return 0;
diff --git a/drivers/media/i2c/s5c73m3/s5c73m3-spi.c b/drivers/media/i2c/s5c73m3/s5c73m3-spi.c
index f60b265..63eb190 100644
--- a/drivers/media/i2c/s5c73m3/s5c73m3-spi.c
+++ b/drivers/media/i2c/s5c73m3/s5c73m3-spi.c
@@ -52,7 +52,7 @@
xfer.rx_buf = addr;
if (spi_dev == NULL) {
- dev_err(&spi_dev->dev, "SPI device is uninitialized\n");
+ pr_err("SPI device is uninitialized\n");
return -ENODEV;
}
diff --git a/drivers/media/i2c/s5k4ecgx.c b/drivers/media/i2c/s5k4ecgx.c
index 7007131..9708423 100644
--- a/drivers/media/i2c/s5k4ecgx.c
+++ b/drivers/media/i2c/s5k4ecgx.c
@@ -531,7 +531,7 @@
}
static int s5k4ecgx_enum_mbus_code(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_mbus_code_enum *code)
{
if (code->index >= ARRAY_SIZE(s5k4ecgx_formats))
@@ -541,15 +541,15 @@
return 0;
}
-static int s5k4ecgx_get_fmt(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+static int s5k4ecgx_get_fmt(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct s5k4ecgx *priv = to_s5k4ecgx(sd);
struct v4l2_mbus_framefmt *mf;
if (fmt->which == V4L2_SUBDEV_FORMAT_TRY) {
- if (fh) {
- mf = v4l2_subdev_get_try_format(fh, 0);
+ if (cfg) {
+ mf = v4l2_subdev_get_try_format(sd, cfg, 0);
fmt->format = *mf;
}
return 0;
@@ -581,7 +581,7 @@
return &s5k4ecgx_formats[i];
}
-static int s5k4ecgx_set_fmt(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+static int s5k4ecgx_set_fmt(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct s5k4ecgx *priv = to_s5k4ecgx(sd);
@@ -596,8 +596,8 @@
fmt->format.field = V4L2_FIELD_NONE;
if (fmt->which == V4L2_SUBDEV_FORMAT_TRY) {
- if (fh) {
- mf = v4l2_subdev_get_try_format(fh, 0);
+ if (cfg) {
+ mf = v4l2_subdev_get_try_format(sd, cfg, 0);
*mf = fmt->format;
}
return 0;
@@ -692,7 +692,7 @@
*/
static int s5k4ecgx_open(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh)
{
- struct v4l2_mbus_framefmt *mf = v4l2_subdev_get_try_format(fh, 0);
+ struct v4l2_mbus_framefmt *mf = v4l2_subdev_get_try_format(sd, fh->pad, 0);
mf->width = s5k4ecgx_prev_sizes[0].size.width;
mf->height = s5k4ecgx_prev_sizes[0].size.height;
diff --git a/drivers/media/i2c/s5k5baf.c b/drivers/media/i2c/s5k5baf.c
index a3d7d03..297ef04 100644
--- a/drivers/media/i2c/s5k5baf.c
+++ b/drivers/media/i2c/s5k5baf.c
@@ -374,6 +374,8 @@
count -= S5K5BAG_FW_TAG_LEN;
d = devm_kzalloc(dev, count * sizeof(u16), GFP_KERNEL);
+ if (!d)
+ return -ENOMEM;
for (i = 0; i < count; ++i)
d[i] = le16_to_cpu(data[i]);
@@ -1182,7 +1184,7 @@
* V4L2 subdev pad level and video operations
*/
static int s5k5baf_enum_frame_interval(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_frame_interval_enum *fie)
{
if (fie->index > S5K5BAF_MAX_FR_TIME - S5K5BAF_MIN_FR_TIME ||
@@ -1201,7 +1203,7 @@
}
static int s5k5baf_enum_mbus_code(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_mbus_code_enum *code)
{
if (code->pad == PAD_CIS) {
@@ -1219,7 +1221,7 @@
}
static int s5k5baf_enum_frame_size(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_frame_size_enum *fse)
{
int i;
@@ -1276,7 +1278,7 @@
return pixfmt;
}
-static int s5k5baf_get_fmt(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+static int s5k5baf_get_fmt(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct s5k5baf *state = to_s5k5baf(sd);
@@ -1284,7 +1286,7 @@
struct v4l2_mbus_framefmt *mf;
if (fmt->which == V4L2_SUBDEV_FORMAT_TRY) {
- mf = v4l2_subdev_get_try_format(fh, fmt->pad);
+ mf = v4l2_subdev_get_try_format(sd, cfg, fmt->pad);
fmt->format = *mf;
return 0;
}
@@ -1306,7 +1308,7 @@
return 0;
}
-static int s5k5baf_set_fmt(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+static int s5k5baf_set_fmt(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct v4l2_mbus_framefmt *mf = &fmt->format;
@@ -1317,7 +1319,7 @@
mf->field = V4L2_FIELD_NONE;
if (fmt->which == V4L2_SUBDEV_FORMAT_TRY) {
- *v4l2_subdev_get_try_format(fh, fmt->pad) = *mf;
+ *v4l2_subdev_get_try_format(sd, cfg, fmt->pad) = *mf;
return 0;
}
@@ -1369,7 +1371,7 @@
}
static int s5k5baf_get_selection(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_selection *sel)
{
static enum selection_rect rtype;
@@ -1389,9 +1391,9 @@
if (sel->which == V4L2_SUBDEV_FORMAT_TRY) {
if (rtype == R_COMPOSE)
- sel->r = *v4l2_subdev_get_try_compose(fh, sel->pad);
+ sel->r = *v4l2_subdev_get_try_compose(sd, cfg, sel->pad);
else
- sel->r = *v4l2_subdev_get_try_crop(fh, sel->pad);
+ sel->r = *v4l2_subdev_get_try_crop(sd, cfg, sel->pad);
return 0;
}
@@ -1460,7 +1462,7 @@
}
static int s5k5baf_set_selection(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_selection *sel)
{
static enum selection_rect rtype;
@@ -1481,9 +1483,9 @@
if (sel->which == V4L2_SUBDEV_FORMAT_TRY) {
rects = (struct v4l2_rect * []) {
&s5k5baf_cis_rect,
- v4l2_subdev_get_try_crop(fh, PAD_CIS),
- v4l2_subdev_get_try_compose(fh, PAD_CIS),
- v4l2_subdev_get_try_crop(fh, PAD_OUT)
+ v4l2_subdev_get_try_crop(sd, cfg, PAD_CIS),
+ v4l2_subdev_get_try_compose(sd, cfg, PAD_CIS),
+ v4l2_subdev_get_try_crop(sd, cfg, PAD_OUT)
};
s5k5baf_set_rect_and_adjust(rects, rtype, &sel->r);
return 0;
@@ -1701,22 +1703,22 @@
{
struct v4l2_mbus_framefmt *mf;
- mf = v4l2_subdev_get_try_format(fh, PAD_CIS);
+ mf = v4l2_subdev_get_try_format(sd, fh->pad, PAD_CIS);
s5k5baf_try_cis_format(mf);
if (s5k5baf_is_cis_subdev(sd))
return 0;
- mf = v4l2_subdev_get_try_format(fh, PAD_OUT);
+ mf = v4l2_subdev_get_try_format(sd, fh->pad, PAD_OUT);
mf->colorspace = s5k5baf_formats[0].colorspace;
mf->code = s5k5baf_formats[0].code;
mf->width = s5k5baf_cis_rect.width;
mf->height = s5k5baf_cis_rect.height;
mf->field = V4L2_FIELD_NONE;
- *v4l2_subdev_get_try_crop(fh, PAD_CIS) = s5k5baf_cis_rect;
- *v4l2_subdev_get_try_compose(fh, PAD_CIS) = s5k5baf_cis_rect;
- *v4l2_subdev_get_try_crop(fh, PAD_OUT) = s5k5baf_cis_rect;
+ *v4l2_subdev_get_try_crop(sd, fh->pad, PAD_CIS) = s5k5baf_cis_rect;
+ *v4l2_subdev_get_try_compose(sd, fh->pad, PAD_CIS) = s5k5baf_cis_rect;
+ *v4l2_subdev_get_try_crop(sd, fh->pad, PAD_OUT) = s5k5baf_cis_rect;
return 0;
}
diff --git a/drivers/media/i2c/s5k6a3.c b/drivers/media/i2c/s5k6a3.c
index 91b841a..bc389d5 100644
--- a/drivers/media/i2c/s5k6a3.c
+++ b/drivers/media/i2c/s5k6a3.c
@@ -99,7 +99,7 @@
}
static int s5k6a3_enum_mbus_code(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_mbus_code_enum *code)
{
if (code->index >= ARRAY_SIZE(s5k6a3_formats))
@@ -123,17 +123,17 @@
}
static struct v4l2_mbus_framefmt *__s5k6a3_get_format(
- struct s5k6a3 *sensor, struct v4l2_subdev_fh *fh,
+ struct s5k6a3 *sensor, struct v4l2_subdev_pad_config *cfg,
u32 pad, enum v4l2_subdev_format_whence which)
{
if (which == V4L2_SUBDEV_FORMAT_TRY)
- return fh ? v4l2_subdev_get_try_format(fh, pad) : NULL;
+ return cfg ? v4l2_subdev_get_try_format(&sensor->subdev, cfg, pad) : NULL;
return &sensor->format;
}
static int s5k6a3_set_fmt(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct s5k6a3 *sensor = sd_to_s5k6a3(sd);
@@ -141,7 +141,7 @@
s5k6a3_try_format(&fmt->format);
- mf = __s5k6a3_get_format(sensor, fh, fmt->pad, fmt->which);
+ mf = __s5k6a3_get_format(sensor, cfg, fmt->pad, fmt->which);
if (mf) {
mutex_lock(&sensor->lock);
if (fmt->which == V4L2_SUBDEV_FORMAT_ACTIVE)
@@ -152,13 +152,13 @@
}
static int s5k6a3_get_fmt(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
- struct v4l2_subdev_format *fmt)
+ struct v4l2_subdev_pad_config *cfg,
+ struct v4l2_subdev_format *fmt)
{
struct s5k6a3 *sensor = sd_to_s5k6a3(sd);
struct v4l2_mbus_framefmt *mf;
- mf = __s5k6a3_get_format(sensor, fh, fmt->pad, fmt->which);
+ mf = __s5k6a3_get_format(sensor, cfg, fmt->pad, fmt->which);
mutex_lock(&sensor->lock);
fmt->format = *mf;
@@ -174,7 +174,7 @@
static int s5k6a3_open(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh)
{
- struct v4l2_mbus_framefmt *format = v4l2_subdev_get_try_format(fh, 0);
+ struct v4l2_mbus_framefmt *format = v4l2_subdev_get_try_format(sd, fh->pad, 0);
*format = s5k6a3_formats[0];
format->width = S5K6A3_DEFAULT_WIDTH;
diff --git a/drivers/media/i2c/s5k6aa.c b/drivers/media/i2c/s5k6aa.c
index b1c5832..de803a1 100644
--- a/drivers/media/i2c/s5k6aa.c
+++ b/drivers/media/i2c/s5k6aa.c
@@ -996,7 +996,7 @@
* V4L2 subdev pad level and video operations
*/
static int s5k6aa_enum_frame_interval(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_frame_interval_enum *fie)
{
struct s5k6aa *s5k6aa = to_s5k6aa(sd);
@@ -1023,7 +1023,7 @@
}
static int s5k6aa_enum_mbus_code(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_mbus_code_enum *code)
{
if (code->index >= ARRAY_SIZE(s5k6aa_formats))
@@ -1034,7 +1034,7 @@
}
static int s5k6aa_enum_frame_size(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_frame_size_enum *fse)
{
int i = ARRAY_SIZE(s5k6aa_formats);
@@ -1056,14 +1056,14 @@
}
static struct v4l2_rect *
-__s5k6aa_get_crop_rect(struct s5k6aa *s5k6aa, struct v4l2_subdev_fh *fh,
+__s5k6aa_get_crop_rect(struct s5k6aa *s5k6aa, struct v4l2_subdev_pad_config *cfg,
enum v4l2_subdev_format_whence which)
{
if (which == V4L2_SUBDEV_FORMAT_ACTIVE)
return &s5k6aa->ccd_rect;
WARN_ON(which != V4L2_SUBDEV_FORMAT_TRY);
- return v4l2_subdev_get_try_crop(fh, 0);
+ return v4l2_subdev_get_try_crop(&s5k6aa->sd, cfg, 0);
}
static void s5k6aa_try_format(struct s5k6aa *s5k6aa,
@@ -1087,7 +1087,7 @@
mf->field = V4L2_FIELD_NONE;
}
-static int s5k6aa_get_fmt(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+static int s5k6aa_get_fmt(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct s5k6aa *s5k6aa = to_s5k6aa(sd);
@@ -1096,7 +1096,7 @@
memset(fmt->reserved, 0, sizeof(fmt->reserved));
if (fmt->which == V4L2_SUBDEV_FORMAT_TRY) {
- mf = v4l2_subdev_get_try_format(fh, 0);
+ mf = v4l2_subdev_get_try_format(sd, cfg, 0);
fmt->format = *mf;
return 0;
}
@@ -1108,7 +1108,7 @@
return 0;
}
-static int s5k6aa_set_fmt(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+static int s5k6aa_set_fmt(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct s5k6aa *s5k6aa = to_s5k6aa(sd);
@@ -1121,8 +1121,8 @@
s5k6aa_try_format(s5k6aa, &fmt->format);
if (fmt->which == V4L2_SUBDEV_FORMAT_TRY) {
- mf = v4l2_subdev_get_try_format(fh, fmt->pad);
- crop = v4l2_subdev_get_try_crop(fh, 0);
+ mf = v4l2_subdev_get_try_format(sd, cfg, fmt->pad);
+ crop = v4l2_subdev_get_try_crop(sd, cfg, 0);
} else {
if (s5k6aa->streaming) {
ret = -EBUSY;
@@ -1162,7 +1162,7 @@
}
static int s5k6aa_get_selection(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_selection *sel)
{
struct s5k6aa *s5k6aa = to_s5k6aa(sd);
@@ -1174,7 +1174,7 @@
memset(sel->reserved, 0, sizeof(sel->reserved));
mutex_lock(&s5k6aa->lock);
- rect = __s5k6aa_get_crop_rect(s5k6aa, fh, sel->which);
+ rect = __s5k6aa_get_crop_rect(s5k6aa, cfg, sel->which);
sel->r = *rect;
mutex_unlock(&s5k6aa->lock);
@@ -1185,7 +1185,7 @@
}
static int s5k6aa_set_selection(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_selection *sel)
{
struct s5k6aa *s5k6aa = to_s5k6aa(sd);
@@ -1197,13 +1197,13 @@
return -EINVAL;
mutex_lock(&s5k6aa->lock);
- crop_r = __s5k6aa_get_crop_rect(s5k6aa, fh, sel->which);
+ crop_r = __s5k6aa_get_crop_rect(s5k6aa, cfg, sel->which);
if (sel->which == V4L2_SUBDEV_FORMAT_ACTIVE) {
mf = &s5k6aa->preset->mbus_fmt;
s5k6aa->apply_crop = 1;
} else {
- mf = v4l2_subdev_get_try_format(fh, 0);
+ mf = v4l2_subdev_get_try_format(sd, cfg, 0);
}
v4l_bound_align_image(&sel->r.width, mf->width,
S5K6AA_WIN_WIDTH_MAX, 1,
@@ -1424,8 +1424,8 @@
*/
static int s5k6aa_open(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh)
{
- struct v4l2_mbus_framefmt *format = v4l2_subdev_get_try_format(fh, 0);
- struct v4l2_rect *crop = v4l2_subdev_get_try_crop(fh, 0);
+ struct v4l2_mbus_framefmt *format = v4l2_subdev_get_try_format(sd, fh->pad, 0);
+ struct v4l2_rect *crop = v4l2_subdev_get_try_crop(sd, fh->pad, 0);
format->colorspace = s5k6aa_formats[0].colorspace;
format->code = s5k6aa_formats[0].code;
diff --git a/drivers/media/i2c/smiapp/smiapp-core.c b/drivers/media/i2c/smiapp/smiapp-core.c
index d47eff5..557f25d 100644
--- a/drivers/media/i2c/smiapp/smiapp-core.c
+++ b/drivers/media/i2c/smiapp/smiapp-core.c
@@ -344,7 +344,7 @@
{ MEDIA_BUS_FMT_SGBRG8_1X8, 8, 8, SMIAPP_PIXEL_ORDER_GBRG, },
};
-const char *pixel_order_str[] = { "GRBG", "RGGB", "BGGR", "GBRG" };
+static const char *pixel_order_str[] = { "GRBG", "RGGB", "BGGR", "GBRG" };
#define to_csi_format_idx(fmt) (((unsigned long)(fmt) \
- (unsigned long)smiapp_csi_data_formats) \
@@ -1557,7 +1557,7 @@
}
static int smiapp_enum_mbus_code(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_mbus_code_enum *code)
{
struct i2c_client *client = v4l2_get_subdevdata(subdev);
@@ -1611,13 +1611,13 @@
}
static int __smiapp_get_format(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct smiapp_subdev *ssd = to_smiapp_subdev(subdev);
if (fmt->which == V4L2_SUBDEV_FORMAT_TRY) {
- fmt->format = *v4l2_subdev_get_try_format(fh, fmt->pad);
+ fmt->format = *v4l2_subdev_get_try_format(subdev, cfg, fmt->pad);
} else {
struct v4l2_rect *r;
@@ -1636,21 +1636,21 @@
}
static int smiapp_get_format(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct smiapp_sensor *sensor = to_smiapp_sensor(subdev);
int rval;
mutex_lock(&sensor->mutex);
- rval = __smiapp_get_format(subdev, fh, fmt);
+ rval = __smiapp_get_format(subdev, cfg, fmt);
mutex_unlock(&sensor->mutex);
return rval;
}
static void smiapp_get_crop_compose(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_rect **crops,
struct v4l2_rect **comps, int which)
{
@@ -1666,12 +1666,12 @@
} else {
if (crops) {
for (i = 0; i < subdev->entity.num_pads; i++) {
- crops[i] = v4l2_subdev_get_try_crop(fh, i);
+ crops[i] = v4l2_subdev_get_try_crop(subdev, cfg, i);
BUG_ON(!crops[i]);
}
}
if (comps) {
- *comps = v4l2_subdev_get_try_compose(fh,
+ *comps = v4l2_subdev_get_try_compose(subdev, cfg,
SMIAPP_PAD_SINK);
BUG_ON(!*comps);
}
@@ -1680,14 +1680,14 @@
/* Changes require propagation only on sink pad. */
static void smiapp_propagate(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh, int which,
+ struct v4l2_subdev_pad_config *cfg, int which,
int target)
{
struct smiapp_sensor *sensor = to_smiapp_sensor(subdev);
struct smiapp_subdev *ssd = to_smiapp_subdev(subdev);
struct v4l2_rect *comp, *crops[SMIAPP_PADS];
- smiapp_get_crop_compose(subdev, fh, crops, &comp, which);
+ smiapp_get_crop_compose(subdev, cfg, crops, &comp, which);
switch (target) {
case V4L2_SEL_TGT_CROP:
@@ -1730,7 +1730,7 @@
}
static int smiapp_set_format_source(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct smiapp_sensor *sensor = to_smiapp_sensor(subdev);
@@ -1741,7 +1741,7 @@
unsigned int i;
int rval;
- rval = __smiapp_get_format(subdev, fh, fmt);
+ rval = __smiapp_get_format(subdev, cfg, fmt);
if (rval)
return rval;
@@ -1783,7 +1783,7 @@
}
static int smiapp_set_format(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct smiapp_sensor *sensor = to_smiapp_sensor(subdev);
@@ -1795,7 +1795,7 @@
if (fmt->pad == ssd->source_pad) {
int rval;
- rval = smiapp_set_format_source(subdev, fh, fmt);
+ rval = smiapp_set_format_source(subdev, cfg, fmt);
mutex_unlock(&sensor->mutex);
@@ -1817,7 +1817,7 @@
sensor->limits[SMIAPP_LIMIT_MIN_Y_OUTPUT_SIZE],
sensor->limits[SMIAPP_LIMIT_MAX_Y_OUTPUT_SIZE]);
- smiapp_get_crop_compose(subdev, fh, crops, NULL, fmt->which);
+ smiapp_get_crop_compose(subdev, cfg, crops, NULL, fmt->which);
crops[ssd->sink_pad]->left = 0;
crops[ssd->sink_pad]->top = 0;
@@ -1825,7 +1825,7 @@
crops[ssd->sink_pad]->height = fmt->format.height;
if (fmt->which == V4L2_SUBDEV_FORMAT_ACTIVE)
ssd->sink_fmt = *crops[ssd->sink_pad];
- smiapp_propagate(subdev, fh, fmt->which,
+ smiapp_propagate(subdev, cfg, fmt->which,
V4L2_SEL_TGT_CROP);
mutex_unlock(&sensor->mutex);
@@ -1878,7 +1878,7 @@
}
static void smiapp_set_compose_binner(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_selection *sel,
struct v4l2_rect **crops,
struct v4l2_rect *comp)
@@ -1926,7 +1926,7 @@
* result.
*/
static void smiapp_set_compose_scaler(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_selection *sel,
struct v4l2_rect **crops,
struct v4l2_rect *comp)
@@ -2042,25 +2042,25 @@
}
/* We're only called on source pads. This function sets scaling. */
static int smiapp_set_compose(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_selection *sel)
{
struct smiapp_sensor *sensor = to_smiapp_sensor(subdev);
struct smiapp_subdev *ssd = to_smiapp_subdev(subdev);
struct v4l2_rect *comp, *crops[SMIAPP_PADS];
- smiapp_get_crop_compose(subdev, fh, crops, &comp, sel->which);
+ smiapp_get_crop_compose(subdev, cfg, crops, &comp, sel->which);
sel->r.top = 0;
sel->r.left = 0;
if (ssd == sensor->binner)
- smiapp_set_compose_binner(subdev, fh, sel, crops, comp);
+ smiapp_set_compose_binner(subdev, cfg, sel, crops, comp);
else
- smiapp_set_compose_scaler(subdev, fh, sel, crops, comp);
+ smiapp_set_compose_scaler(subdev, cfg, sel, crops, comp);
*comp = sel->r;
- smiapp_propagate(subdev, fh, sel->which,
+ smiapp_propagate(subdev, cfg, sel->which,
V4L2_SEL_TGT_COMPOSE);
if (sel->which == V4L2_SUBDEV_FORMAT_ACTIVE)
@@ -2113,7 +2113,7 @@
}
static int smiapp_set_crop(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_selection *sel)
{
struct smiapp_sensor *sensor = to_smiapp_sensor(subdev);
@@ -2121,7 +2121,7 @@
struct v4l2_rect *src_size, *crops[SMIAPP_PADS];
struct v4l2_rect _r;
- smiapp_get_crop_compose(subdev, fh, crops, NULL, sel->which);
+ smiapp_get_crop_compose(subdev, cfg, crops, NULL, sel->which);
if (sel->which == V4L2_SUBDEV_FORMAT_ACTIVE) {
if (sel->pad == ssd->sink_pad)
@@ -2132,15 +2132,15 @@
if (sel->pad == ssd->sink_pad) {
_r.left = 0;
_r.top = 0;
- _r.width = v4l2_subdev_get_try_format(fh, sel->pad)
+ _r.width = v4l2_subdev_get_try_format(subdev, cfg, sel->pad)
->width;
- _r.height = v4l2_subdev_get_try_format(fh, sel->pad)
+ _r.height = v4l2_subdev_get_try_format(subdev, cfg, sel->pad)
->height;
src_size = &_r;
} else {
src_size =
v4l2_subdev_get_try_compose(
- fh, ssd->sink_pad);
+ subdev, cfg, ssd->sink_pad);
}
}
@@ -2158,14 +2158,14 @@
*crops[sel->pad] = sel->r;
if (ssd != sensor->pixel_array && sel->pad == SMIAPP_PAD_SINK)
- smiapp_propagate(subdev, fh, sel->which,
+ smiapp_propagate(subdev, cfg, sel->which,
V4L2_SEL_TGT_CROP);
return 0;
}
static int __smiapp_get_selection(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_selection *sel)
{
struct smiapp_sensor *sensor = to_smiapp_sensor(subdev);
@@ -2178,13 +2178,13 @@
if (ret)
return ret;
- smiapp_get_crop_compose(subdev, fh, crops, &comp, sel->which);
+ smiapp_get_crop_compose(subdev, cfg, crops, &comp, sel->which);
if (sel->which == V4L2_SUBDEV_FORMAT_ACTIVE) {
sink_fmt = ssd->sink_fmt;
} else {
struct v4l2_mbus_framefmt *fmt =
- v4l2_subdev_get_try_format(fh, ssd->sink_pad);
+ v4l2_subdev_get_try_format(subdev, cfg, ssd->sink_pad);
sink_fmt.left = 0;
sink_fmt.top = 0;
@@ -2220,20 +2220,20 @@
}
static int smiapp_get_selection(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_selection *sel)
{
struct smiapp_sensor *sensor = to_smiapp_sensor(subdev);
int rval;
mutex_lock(&sensor->mutex);
- rval = __smiapp_get_selection(subdev, fh, sel);
+ rval = __smiapp_get_selection(subdev, cfg, sel);
mutex_unlock(&sensor->mutex);
return rval;
}
static int smiapp_set_selection(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_selection *sel)
{
struct smiapp_sensor *sensor = to_smiapp_sensor(subdev);
@@ -2259,10 +2259,10 @@
switch (sel->target) {
case V4L2_SEL_TGT_CROP:
- ret = smiapp_set_crop(subdev, fh, sel);
+ ret = smiapp_set_crop(subdev, cfg, sel);
break;
case V4L2_SEL_TGT_COMPOSE:
- ret = smiapp_set_compose(subdev, fh, sel);
+ ret = smiapp_set_compose(subdev, cfg, sel);
break;
default:
ret = -EINVAL;
@@ -2841,8 +2841,8 @@
for (i = 0; i < ssd->npads; i++) {
struct v4l2_mbus_framefmt *try_fmt =
- v4l2_subdev_get_try_format(fh, i);
- struct v4l2_rect *try_crop = v4l2_subdev_get_try_crop(fh, i);
+ v4l2_subdev_get_try_format(sd, fh->pad, i);
+ struct v4l2_rect *try_crop = v4l2_subdev_get_try_crop(sd, fh->pad, i);
struct v4l2_rect *try_comp;
try_fmt->width = sensor->limits[SMIAPP_LIMIT_X_ADDR_MAX] + 1;
@@ -2858,7 +2858,7 @@
if (ssd != sensor->pixel_array)
continue;
- try_comp = v4l2_subdev_get_try_compose(fh, i);
+ try_comp = v4l2_subdev_get_try_compose(sd, fh->pad, i);
*try_comp = *try_crop;
}
@@ -2977,12 +2977,7 @@
struct smiapp_platform_data *pdata;
struct v4l2_of_endpoint bus_cfg;
struct device_node *ep;
- struct property *prop;
- __be32 *val;
uint32_t asize;
-#ifdef CONFIG_OF
- unsigned int i;
-#endif
int rval;
if (!dev->of_node)
@@ -2993,10 +2988,8 @@
return NULL;
pdata = devm_kzalloc(dev, sizeof(*pdata), GFP_KERNEL);
- if (!pdata) {
- rval = -ENOMEM;
+ if (!pdata)
goto out_err;
- }
v4l2_of_parse_endpoint(ep, &bus_cfg);
@@ -3006,7 +2999,6 @@
break;
/* FIXME: add CCP2 support. */
default:
- rval = -EINVAL;
goto out_err;
}
@@ -3030,8 +3022,7 @@
dev_dbg(dev, "reset %d, nvm %d, clk %d, csi %d\n", pdata->xshutdown,
pdata->nvm_size, pdata->ext_clk, pdata->csi_signalling_mode);
- rval = of_get_property(
- dev->of_node, "link-frequencies", &asize) ? 0 : -ENOENT;
+ rval = of_get_property(ep, "link-frequencies", &asize) ? 0 : -ENOENT;
if (rval) {
dev_warn(dev, "can't get link-frequencies array size\n");
goto out_err;
@@ -3044,25 +3035,12 @@
}
asize /= sizeof(*pdata->op_sys_clock);
- /*
- * Read a 64-bit array --- this will be replaced with a
- * of_property_read_u64_array() once it's merged.
- */
- prop = of_find_property(dev->of_node, "link-frequencies", NULL);
- if (!prop)
+ rval = of_property_read_u64_array(
+ ep, "link-frequencies", pdata->op_sys_clock, asize);
+ if (rval) {
+ dev_warn(dev, "can't get link-frequencies\n");
goto out_err;
- if (!prop->value)
- goto out_err;
- if (asize * sizeof(*pdata->op_sys_clock) > prop->length)
- goto out_err;
- val = prop->value;
- if (IS_ERR(val))
- goto out_err;
-
-#ifdef CONFIG_OF
- for (i = 0; i < asize; i++)
- pdata->op_sys_clock[i] = of_read_number(val + i * 2, 2);
-#endif
+ }
for (; asize > 0; asize--)
dev_dbg(dev, "freq %d: %lld\n", asize - 1,
diff --git a/drivers/media/i2c/soc_camera/mt9m111.c b/drivers/media/i2c/soc_camera/mt9m111.c
index 5992ea9..441e0fd 100644
--- a/drivers/media/i2c/soc_camera/mt9m111.c
+++ b/drivers/media/i2c/soc_camera/mt9m111.c
@@ -1016,7 +1016,6 @@
v4l2_async_unregister_subdev(&mt9m111->subdev);
v4l2_clk_put(mt9m111->clk);
- v4l2_device_unregister_subdev(&mt9m111->subdev);
v4l2_ctrl_handler_free(&mt9m111->hdl);
return 0;
diff --git a/drivers/media/i2c/soc_camera/ov2640.c b/drivers/media/i2c/soc_camera/ov2640.c
index 1fdce2f..e3c907a 100644
--- a/drivers/media/i2c/soc_camera/ov2640.c
+++ b/drivers/media/i2c/soc_camera/ov2640.c
@@ -18,6 +18,9 @@
#include <linux/i2c.h>
#include <linux/slab.h>
#include <linux/delay.h>
+#include <linux/gpio.h>
+#include <linux/gpio/consumer.h>
+#include <linux/of_gpio.h>
#include <linux/v4l2-mediabus.h>
#include <linux/videodev2.h>
@@ -283,6 +286,10 @@
u32 cfmt_code;
struct v4l2_clk *clk;
const struct ov2640_win_size *win;
+
+ struct soc_camera_subdev_desc ssdd_dt;
+ struct gpio_desc *resetb_gpio;
+ struct gpio_desc *pwdn_gpio;
};
/*
@@ -1038,6 +1045,63 @@
.video = &ov2640_subdev_video_ops,
};
+/* OF probe functions */
+static int ov2640_hw_power(struct device *dev, int on)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct ov2640_priv *priv = to_ov2640(client);
+
+ dev_dbg(&client->dev, "%s: %s the camera\n",
+ __func__, on ? "ENABLE" : "DISABLE");
+
+ if (priv->pwdn_gpio)
+ gpiod_direction_output(priv->pwdn_gpio, !on);
+
+ return 0;
+}
+
+static int ov2640_hw_reset(struct device *dev)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct ov2640_priv *priv = to_ov2640(client);
+
+ if (priv->resetb_gpio) {
+ /* Active the resetb pin to perform a reset pulse */
+ gpiod_direction_output(priv->resetb_gpio, 1);
+ usleep_range(3000, 5000);
+ gpiod_direction_output(priv->resetb_gpio, 0);
+ }
+
+ return 0;
+}
+
+static int ov2640_probe_dt(struct i2c_client *client,
+ struct ov2640_priv *priv)
+{
+ /* Request the reset GPIO deasserted */
+ priv->resetb_gpio = devm_gpiod_get_optional(&client->dev, "resetb",
+ GPIOD_OUT_LOW);
+ if (!priv->resetb_gpio)
+ dev_dbg(&client->dev, "resetb gpio is not assigned!\n");
+ else if (IS_ERR(priv->resetb_gpio))
+ return PTR_ERR(priv->resetb_gpio);
+
+ /* Request the power down GPIO asserted */
+ priv->pwdn_gpio = devm_gpiod_get_optional(&client->dev, "pwdn",
+ GPIOD_OUT_HIGH);
+ if (!priv->pwdn_gpio)
+ dev_dbg(&client->dev, "pwdn gpio is not assigned!\n");
+ else if (IS_ERR(priv->pwdn_gpio))
+ return PTR_ERR(priv->pwdn_gpio);
+
+ /* Initialize the soc_camera_subdev_desc */
+ priv->ssdd_dt.power = ov2640_hw_power;
+ priv->ssdd_dt.reset = ov2640_hw_reset;
+ client->dev.platform_data = &priv->ssdd_dt;
+
+ return 0;
+}
+
/*
* i2c_driver functions
*/
@@ -1049,12 +1113,6 @@
struct i2c_adapter *adapter = to_i2c_adapter(client->dev.parent);
int ret;
- if (!ssdd) {
- dev_err(&adapter->dev,
- "OV2640: Missing platform_data for driver\n");
- return -EINVAL;
- }
-
if (!i2c_check_functionality(adapter, I2C_FUNC_SMBUS_BYTE_DATA)) {
dev_err(&adapter->dev,
"OV2640: I2C-Adapter doesn't support SMBUS\n");
@@ -1068,6 +1126,22 @@
return -ENOMEM;
}
+ priv->clk = v4l2_clk_get(&client->dev, "xvclk");
+ if (IS_ERR(priv->clk))
+ return -EPROBE_DEFER;
+
+ if (!ssdd && !client->dev.of_node) {
+ dev_err(&client->dev, "Missing platform_data for driver\n");
+ ret = -EINVAL;
+ goto err_clk;
+ }
+
+ if (!ssdd) {
+ ret = ov2640_probe_dt(client, priv);
+ if (ret)
+ goto err_clk;
+ }
+
v4l2_i2c_subdev_init(&priv->subdev, client, &ov2640_subdev_ops);
v4l2_ctrl_handler_init(&priv->hdl, 2);
v4l2_ctrl_new_std(&priv->hdl, &ov2640_ctrl_ops,
@@ -1075,24 +1149,27 @@
v4l2_ctrl_new_std(&priv->hdl, &ov2640_ctrl_ops,
V4L2_CID_HFLIP, 0, 1, 1, 0);
priv->subdev.ctrl_handler = &priv->hdl;
- if (priv->hdl.error)
- return priv->hdl.error;
-
- priv->clk = v4l2_clk_get(&client->dev, "mclk");
- if (IS_ERR(priv->clk)) {
- ret = PTR_ERR(priv->clk);
- goto eclkget;
+ if (priv->hdl.error) {
+ ret = priv->hdl.error;
+ goto err_clk;
}
ret = ov2640_video_probe(client);
- if (ret) {
- v4l2_clk_put(priv->clk);
-eclkget:
- v4l2_ctrl_handler_free(&priv->hdl);
- } else {
- dev_info(&adapter->dev, "OV2640 Probed\n");
- }
+ if (ret < 0)
+ goto err_videoprobe;
+ ret = v4l2_async_register_subdev(&priv->subdev);
+ if (ret < 0)
+ goto err_videoprobe;
+
+ dev_info(&adapter->dev, "OV2640 Probed\n");
+
+ return 0;
+
+err_videoprobe:
+ v4l2_ctrl_handler_free(&priv->hdl);
+err_clk:
+ v4l2_clk_put(priv->clk);
return ret;
}
@@ -1100,6 +1177,7 @@
{
struct ov2640_priv *priv = to_ov2640(client);
+ v4l2_async_unregister_subdev(&priv->subdev);
v4l2_clk_put(priv->clk);
v4l2_device_unregister_subdev(&priv->subdev);
v4l2_ctrl_handler_free(&priv->hdl);
@@ -1112,9 +1190,16 @@
};
MODULE_DEVICE_TABLE(i2c, ov2640_id);
+static const struct of_device_id ov2640_of_match[] = {
+ {.compatible = "ovti,ov2640", },
+ {},
+};
+MODULE_DEVICE_TABLE(of, ov2640_of_match);
+
static struct i2c_driver ov2640_i2c_driver = {
.driver = {
.name = "ov2640",
+ .of_match_table = of_match_ptr(ov2640_of_match),
},
.probe = ov2640_probe,
.remove = ov2640_remove,
diff --git a/drivers/media/i2c/ths7303.c b/drivers/media/i2c/ths7303.c
index ed9ae88..9f7fdb6 100644
--- a/drivers/media/i2c/ths7303.c
+++ b/drivers/media/i2c/ths7303.c
@@ -52,10 +52,6 @@
MODULE_AUTHOR("Chaithrika U S");
MODULE_LICENSE("GPL");
-static int debug;
-module_param(debug, int, 0644);
-MODULE_PARM_DESC(debug, "Debug level 0-1");
-
static inline struct ths7303_state *to_state(struct v4l2_subdev *sd)
{
return container_of(sd, struct ths7303_state, sd);
diff --git a/drivers/media/i2c/ths8200.c b/drivers/media/i2c/ths8200.c
index 4ebd329..73fc42b 100644
--- a/drivers/media/i2c/ths8200.c
+++ b/drivers/media/i2c/ths8200.c
@@ -479,7 +479,6 @@
ths8200_s_power(sd, false);
v4l2_async_unregister_subdev(&decoder->sd);
- v4l2_device_unregister_subdev(sd);
return 0;
}
diff --git a/drivers/media/i2c/tvp514x.c b/drivers/media/i2c/tvp514x.c
index 2042042..1c6bc30 100644
--- a/drivers/media/i2c/tvp514x.c
+++ b/drivers/media/i2c/tvp514x.c
@@ -923,13 +923,13 @@
/**
* tvp514x_enum_mbus_code() - V4L2 decoder interface handler for enum_mbus_code
* @sd: pointer to standard V4L2 sub-device structure
- * @fh: file handle
+ * @cfg: pad configuration
* @code: pointer to v4l2_subdev_mbus_code_enum structure
*
* Enumertaes mbus codes supported
*/
static int tvp514x_enum_mbus_code(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_mbus_code_enum *code)
{
u32 pad = code->pad;
@@ -950,13 +950,13 @@
/**
* tvp514x_get_pad_format() - V4L2 decoder interface handler for get pad format
* @sd: pointer to standard V4L2 sub-device structure
- * @fh: file handle
+ * @cfg: pad configuration
* @format: pointer to v4l2_subdev_format structure
*
* Retrieves pad format which is active or tried based on requirement
*/
static int tvp514x_get_pad_format(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *format)
{
struct tvp514x_decoder *decoder = to_decoder(sd);
@@ -979,13 +979,13 @@
/**
* tvp514x_set_pad_format() - V4L2 decoder interface handler for set pad format
* @sd: pointer to standard V4L2 sub-device structure
- * @fh: file handle
+ * @cfg: pad configuration
* @format: pointer to v4l2_subdev_format structure
*
* Set pad format for the output pad
*/
static int tvp514x_set_pad_format(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct tvp514x_decoder *decoder = to_decoder(sd);
@@ -1209,7 +1209,6 @@
struct tvp514x_decoder *decoder = to_decoder(sd);
v4l2_async_unregister_subdev(&decoder->sd);
- v4l2_device_unregister_subdev(sd);
#if defined(CONFIG_MEDIA_CONTROLLER)
media_entity_cleanup(&decoder->sd.entity);
#endif
diff --git a/drivers/media/i2c/tvp7002.c b/drivers/media/i2c/tvp7002.c
index fe4870e..787cdfb 100644
--- a/drivers/media/i2c/tvp7002.c
+++ b/drivers/media/i2c/tvp7002.c
@@ -846,13 +846,13 @@
/*
* tvp7002_enum_mbus_code() - Enum supported digital video format on pad
* @sd: pointer to standard V4L2 sub-device structure
- * @fh: file handle for the subdev
+ * @cfg: pad configuration
* @code: pointer to subdev enum mbus code struct
*
* Enumerate supported digital video formats for pad.
*/
static int
-tvp7002_enum_mbus_code(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+tvp7002_enum_mbus_code(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_mbus_code_enum *code)
{
/* Check requested format index is within range */
@@ -867,13 +867,13 @@
/*
* tvp7002_get_pad_format() - get video format on pad
* @sd: pointer to standard V4L2 sub-device structure
- * @fh: file handle for the subdev
+ * @cfg: pad configuration
* @fmt: pointer to subdev format struct
*
* get video format for pad.
*/
static int
-tvp7002_get_pad_format(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+tvp7002_get_pad_format(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct tvp7002 *tvp7002 = to_tvp7002(sd);
@@ -890,16 +890,16 @@
/*
* tvp7002_set_pad_format() - set video format on pad
* @sd: pointer to standard V4L2 sub-device structure
- * @fh: file handle for the subdev
+ * @cfg: pad configuration
* @fmt: pointer to subdev format struct
*
* set video format for pad.
*/
static int
-tvp7002_set_pad_format(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+tvp7002_set_pad_format(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
- return tvp7002_get_pad_format(sd, fh, fmt);
+ return tvp7002_get_pad_format(sd, cfg, fmt);
}
/* V4L2 core operation handlers */
@@ -1116,7 +1116,6 @@
#if defined(CONFIG_MEDIA_CONTROLLER)
media_entity_cleanup(&device->sd.entity);
#endif
- v4l2_device_unregister_subdev(sd);
v4l2_ctrl_handler_free(&device->hdl);
return 0;
}
diff --git a/drivers/media/mmc/siano/smssdio.c b/drivers/media/mmc/siano/smssdio.c
index 912c281..fee2d71 100644
--- a/drivers/media/mmc/siano/smssdio.c
+++ b/drivers/media/mmc/siano/smssdio.c
@@ -32,6 +32,8 @@
* Fix stop command
*/
+#include "smscoreapi.h"
+
#include <linux/moduleparam.h>
#include <linux/slab.h>
#include <linux/firmware.h>
@@ -41,7 +43,6 @@
#include <linux/mmc/sdio_ids.h>
#include <linux/module.h>
-#include "smscoreapi.h"
#include "sms-cards.h"
#include "smsendian.h"
@@ -141,14 +142,14 @@
*/
(void)sdio_readb(func, SMSSDIO_INT, &ret);
if (ret) {
- sms_err("Unable to read interrupt register!\n");
+ pr_err("Unable to read interrupt register!\n");
return;
}
if (smsdev->split_cb == NULL) {
cb = smscore_getbuffer(smsdev->coredev);
if (!cb) {
- sms_err("Unable to allocate data buffer!\n");
+ pr_err("Unable to allocate data buffer!\n");
return;
}
@@ -157,7 +158,7 @@
SMSSDIO_DATA,
SMSSDIO_BLOCK_SIZE);
if (ret) {
- sms_err("Error %d reading initial block!\n", ret);
+ pr_err("Error %d reading initial block!\n", ret);
return;
}
@@ -198,7 +199,7 @@
size);
if (ret && ret != -EINVAL) {
smscore_putbuffer(smsdev->coredev, cb);
- sms_err("Error %d reading data from card!\n", ret);
+ pr_err("Error %d reading data from card!\n", ret);
return;
}
@@ -216,8 +217,8 @@
smsdev->func->cur_blksize);
if (ret) {
smscore_putbuffer(smsdev->coredev, cb);
- sms_err("Error %d reading "
- "data from card!\n", ret);
+ pr_err("Error %d reading data from card!\n",
+ ret);
return;
}
@@ -278,7 +279,7 @@
goto free;
}
- ret = smscore_register_device(&params, &smsdev->coredev);
+ ret = smscore_register_device(&params, &smsdev->coredev, NULL);
if (ret < 0)
goto free;
diff --git a/drivers/media/pci/bt8xx/bttv-driver.c b/drivers/media/pci/bt8xx/bttv-driver.c
index 4ec2a3c..bc12060 100644
--- a/drivers/media/pci/bt8xx/bttv-driver.c
+++ b/drivers/media/pci/bt8xx/bttv-driver.c
@@ -2474,7 +2474,7 @@
return -EINVAL;
strlcpy(cap->driver, "bttv", sizeof(cap->driver));
- strlcpy(cap->card, btv->video_dev->name, sizeof(cap->card));
+ strlcpy(cap->card, btv->video_dev.name, sizeof(cap->card));
snprintf(cap->bus_info, sizeof(cap->bus_info),
"PCI:%s", pci_name(btv->c.pci));
cap->capabilities =
@@ -2484,9 +2484,9 @@
V4L2_CAP_DEVICE_CAPS;
if (no_overlay <= 0)
cap->capabilities |= V4L2_CAP_VIDEO_OVERLAY;
- if (btv->vbi_dev)
+ if (video_is_registered(&btv->vbi_dev))
cap->capabilities |= V4L2_CAP_VBI_CAPTURE;
- if (btv->radio_dev)
+ if (video_is_registered(&btv->radio_dev))
cap->capabilities |= V4L2_CAP_RADIO;
/*
@@ -3905,18 +3905,14 @@
/* ----------------------------------------------------------------------- */
/* initialization */
-static struct video_device *vdev_init(struct bttv *btv,
- const struct video_device *template,
- const char *type_name)
+static void vdev_init(struct bttv *btv,
+ struct video_device *vfd,
+ const struct video_device *template,
+ const char *type_name)
{
- struct video_device *vfd;
-
- vfd = video_device_alloc();
- if (NULL == vfd)
- return NULL;
*vfd = *template;
vfd->v4l2_dev = &btv->c.v4l2_dev;
- vfd->release = video_device_release;
+ vfd->release = video_device_release_empty;
video_set_drvdata(vfd, btv);
snprintf(vfd->name, sizeof(vfd->name), "BT%d%s %s (%s)",
btv->id, (btv->id==848 && btv->revision==0x12) ? "A" : "",
@@ -3927,32 +3923,13 @@
v4l2_disable_ioctl(vfd, VIDIOC_G_TUNER);
v4l2_disable_ioctl(vfd, VIDIOC_S_TUNER);
}
- return vfd;
}
static void bttv_unregister_video(struct bttv *btv)
{
- if (btv->video_dev) {
- if (video_is_registered(btv->video_dev))
- video_unregister_device(btv->video_dev);
- else
- video_device_release(btv->video_dev);
- btv->video_dev = NULL;
- }
- if (btv->vbi_dev) {
- if (video_is_registered(btv->vbi_dev))
- video_unregister_device(btv->vbi_dev);
- else
- video_device_release(btv->vbi_dev);
- btv->vbi_dev = NULL;
- }
- if (btv->radio_dev) {
- if (video_is_registered(btv->radio_dev))
- video_unregister_device(btv->radio_dev);
- else
- video_device_release(btv->radio_dev);
- btv->radio_dev = NULL;
- }
+ video_unregister_device(&btv->video_dev);
+ video_unregister_device(&btv->vbi_dev);
+ video_unregister_device(&btv->radio_dev);
}
/* register video4linux devices */
@@ -3962,44 +3939,38 @@
pr_notice("Overlay support disabled\n");
/* video */
- btv->video_dev = vdev_init(btv, &bttv_video_template, "video");
+ vdev_init(btv, &btv->video_dev, &bttv_video_template, "video");
- if (NULL == btv->video_dev)
- goto err;
- if (video_register_device(btv->video_dev, VFL_TYPE_GRABBER,
+ if (video_register_device(&btv->video_dev, VFL_TYPE_GRABBER,
video_nr[btv->c.nr]) < 0)
goto err;
pr_info("%d: registered device %s\n",
- btv->c.nr, video_device_node_name(btv->video_dev));
- if (device_create_file(&btv->video_dev->dev,
+ btv->c.nr, video_device_node_name(&btv->video_dev));
+ if (device_create_file(&btv->video_dev.dev,
&dev_attr_card)<0) {
pr_err("%d: device_create_file 'card' failed\n", btv->c.nr);
goto err;
}
/* vbi */
- btv->vbi_dev = vdev_init(btv, &bttv_video_template, "vbi");
+ vdev_init(btv, &btv->vbi_dev, &bttv_video_template, "vbi");
- if (NULL == btv->vbi_dev)
- goto err;
- if (video_register_device(btv->vbi_dev, VFL_TYPE_VBI,
+ if (video_register_device(&btv->vbi_dev, VFL_TYPE_VBI,
vbi_nr[btv->c.nr]) < 0)
goto err;
pr_info("%d: registered device %s\n",
- btv->c.nr, video_device_node_name(btv->vbi_dev));
+ btv->c.nr, video_device_node_name(&btv->vbi_dev));
if (!btv->has_radio)
return 0;
/* radio */
- btv->radio_dev = vdev_init(btv, &radio_template, "radio");
- if (NULL == btv->radio_dev)
- goto err;
- btv->radio_dev->ctrl_handler = &btv->radio_ctrl_handler;
- if (video_register_device(btv->radio_dev, VFL_TYPE_RADIO,
+ vdev_init(btv, &btv->radio_dev, &radio_template, "radio");
+ btv->radio_dev.ctrl_handler = &btv->radio_ctrl_handler;
+ if (video_register_device(&btv->radio_dev, VFL_TYPE_RADIO,
radio_nr[btv->c.nr]) < 0)
goto err;
pr_info("%d: registered device %s\n",
- btv->c.nr, video_device_node_name(btv->radio_dev));
+ btv->c.nr, video_device_node_name(&btv->radio_dev));
/* all done */
return 0;
diff --git a/drivers/media/pci/bt8xx/bttvp.h b/drivers/media/pci/bt8xx/bttvp.h
index bc048c5..a444cfb 100644
--- a/drivers/media/pci/bt8xx/bttvp.h
+++ b/drivers/media/pci/bt8xx/bttvp.h
@@ -404,9 +404,9 @@
struct v4l2_subdev *sd_tda7432;
/* video4linux (1) */
- struct video_device *video_dev;
- struct video_device *radio_dev;
- struct video_device *vbi_dev;
+ struct video_device video_dev;
+ struct video_device radio_dev;
+ struct video_device vbi_dev;
/* controls */
struct v4l2_ctrl_handler ctrl_handler;
diff --git a/drivers/media/pci/cx18/cx18-alsa-main.c b/drivers/media/pci/cx18/cx18-alsa-main.c
index ea272bc..0b0e801 100644
--- a/drivers/media/pci/cx18/cx18-alsa-main.c
+++ b/drivers/media/pci/cx18/cx18-alsa-main.c
@@ -216,7 +216,7 @@
}
s = &cx->streams[CX18_ENC_STREAM_TYPE_PCM];
- if (s->video_dev == NULL) {
+ if (s->video_dev.v4l2_dev == NULL) {
CX18_DEBUG_ALSA_INFO("%s: PCM stream for card is disabled - "
"skipping\n", __func__);
return 0;
diff --git a/drivers/media/pci/cx18/cx18-driver.h b/drivers/media/pci/cx18/cx18-driver.h
index 207d6e82..b15beed 100644
--- a/drivers/media/pci/cx18/cx18-driver.h
+++ b/drivers/media/pci/cx18/cx18-driver.h
@@ -373,7 +373,7 @@
struct cx18_stream {
/* These first five fields are always set, even if the stream
is not actually created. */
- struct video_device *video_dev; /* NULL when stream not created */
+ struct video_device video_dev; /* v4l2_dev is NULL when stream not created */
struct cx18_dvb *dvb; /* DVB / Digital Transport */
struct cx18 *cx; /* for ease of use */
const char *name; /* name of the stream */
@@ -409,6 +409,7 @@
/* Videobuf for YUV video */
u32 pixelformat;
u32 vb_bytes_per_frame;
+ u32 vb_bytes_per_line;
struct list_head vb_capture; /* video capture queue */
spinlock_t vb_lock;
struct timer_list vb_timeout;
diff --git a/drivers/media/pci/cx18/cx18-fileops.c b/drivers/media/pci/cx18/cx18-fileops.c
index 76a3b4a..df83740 100644
--- a/drivers/media/pci/cx18/cx18-fileops.c
+++ b/drivers/media/pci/cx18/cx18-fileops.c
@@ -34,6 +34,7 @@
#include "cx18-controls.h"
#include "cx18-ioctl.h"
#include "cx18-cards.h"
+#include <media/v4l2-event.h>
/* This function tries to claim the stream for a specific file descriptor.
If no one else is using this stream then the stream is claimed and
@@ -609,13 +610,16 @@
unsigned int cx18_v4l2_enc_poll(struct file *filp, poll_table *wait)
{
+ unsigned long req_events = poll_requested_events(wait);
struct cx18_open_id *id = file2id(filp);
struct cx18 *cx = id->cx;
struct cx18_stream *s = &cx->streams[id->type];
int eof = test_bit(CX18_F_S_STREAMOFF, &s->s_flags);
+ unsigned res = 0;
/* Start a capture if there is none */
- if (!eof && !test_bit(CX18_F_S_STREAMING, &s->s_flags)) {
+ if (!eof && !test_bit(CX18_F_S_STREAMING, &s->s_flags) &&
+ (req_events & (POLLIN | POLLRDNORM))) {
int rc;
mutex_lock(&cx->serialize_lock);
@@ -632,21 +636,26 @@
if ((s->vb_type == V4L2_BUF_TYPE_VIDEO_CAPTURE) &&
(id->type == CX18_ENC_STREAM_TYPE_YUV)) {
int videobuf_poll = videobuf_poll_stream(filp, &s->vbuf_q, wait);
+
+ if (v4l2_event_pending(&id->fh))
+ res |= POLLPRI;
if (eof && videobuf_poll == POLLERR)
- return POLLHUP;
- else
- return videobuf_poll;
+ return res | POLLHUP;
+ return res | videobuf_poll;
}
/* add stream's waitq to the poll list */
CX18_DEBUG_HI_FILE("Encoder poll\n");
- poll_wait(filp, &s->waitq, wait);
+ if (v4l2_event_pending(&id->fh))
+ res |= POLLPRI;
+ else
+ poll_wait(filp, &s->waitq, wait);
if (atomic_read(&s->q_full.depth))
- return POLLIN | POLLRDNORM;
+ return res | POLLIN | POLLRDNORM;
if (eof)
- return POLLHUP;
- return 0;
+ return res | POLLHUP;
+ return res;
}
int cx18_v4l2_mmap(struct file *file, struct vm_area_struct *vma)
@@ -797,7 +806,7 @@
CX18_DEBUG_WARN("nomem on v4l2 open\n");
return -ENOMEM;
}
- v4l2_fh_init(&item->fh, s->video_dev);
+ v4l2_fh_init(&item->fh, &s->video_dev);
item->cx = cx;
item->type = s->type;
diff --git a/drivers/media/pci/cx18/cx18-ioctl.c b/drivers/media/pci/cx18/cx18-ioctl.c
index b8e4b68..79aee30 100644
--- a/drivers/media/pci/cx18/cx18-ioctl.c
+++ b/drivers/media/pci/cx18/cx18-ioctl.c
@@ -39,6 +39,7 @@
#include "cx18-cards.h"
#include "cx18-av-core.h"
#include <media/tveeprom.h>
+#include <media/v4l2-event.h>
u16 cx18_service2vbi(int type)
{
@@ -159,7 +160,7 @@
if (id->type == CX18_ENC_STREAM_TYPE_YUV) {
pixfmt->pixelformat = s->pixelformat;
pixfmt->sizeimage = s->vb_bytes_per_frame;
- pixfmt->bytesperline = 720;
+ pixfmt->bytesperline = s->vb_bytes_per_line;
} else {
pixfmt->pixelformat = V4L2_PIX_FMT_MPEG;
pixfmt->sizeimage = 128 * 1024;
@@ -287,10 +288,13 @@
s->pixelformat = fmt->fmt.pix.pixelformat;
/* HM12 YUV size is (Y=(h*720) + UV=(h*(720/2)))
UYUV YUV size is (Y=(h*720) + UV=(h*(720))) */
- if (s->pixelformat == V4L2_PIX_FMT_HM12)
+ if (s->pixelformat == V4L2_PIX_FMT_HM12) {
s->vb_bytes_per_frame = h * 720 * 3 / 2;
- else
+ s->vb_bytes_per_line = 720; /* First plane */
+ } else {
s->vb_bytes_per_frame = h * 720 * 2;
+ s->vb_bytes_per_line = 1440; /* Packed */
+ }
mbus_fmt.width = cx->cxhdl.width = w;
mbus_fmt.height = cx->cxhdl.height = h;
@@ -447,34 +451,29 @@
if (cropcap->type != V4L2_BUF_TYPE_VIDEO_CAPTURE)
return -EINVAL;
- cropcap->bounds.top = cropcap->bounds.left = 0;
- cropcap->bounds.width = 720;
- cropcap->bounds.height = cx->is_50hz ? 576 : 480;
cropcap->pixelaspect.numerator = cx->is_50hz ? 59 : 10;
cropcap->pixelaspect.denominator = cx->is_50hz ? 54 : 11;
- cropcap->defrect = cropcap->bounds;
return 0;
}
-static int cx18_s_crop(struct file *file, void *fh, const struct v4l2_crop *crop)
-{
- struct cx18_open_id *id = fh2id(fh);
- struct cx18 *cx = id->cx;
-
- if (crop->type != V4L2_BUF_TYPE_VIDEO_CAPTURE)
- return -EINVAL;
- CX18_DEBUG_WARN("VIDIOC_S_CROP not implemented\n");
- return -EINVAL;
-}
-
-static int cx18_g_crop(struct file *file, void *fh, struct v4l2_crop *crop)
+static int cx18_g_selection(struct file *file, void *fh,
+ struct v4l2_selection *sel)
{
struct cx18 *cx = fh2id(fh)->cx;
- if (crop->type != V4L2_BUF_TYPE_VIDEO_CAPTURE)
+ if (sel->type != V4L2_BUF_TYPE_VIDEO_CAPTURE)
return -EINVAL;
- CX18_DEBUG_WARN("VIDIOC_G_CROP not implemented\n");
- return -EINVAL;
+ switch (sel->target) {
+ case V4L2_SEL_TGT_CROP_BOUNDS:
+ case V4L2_SEL_TGT_CROP_DEFAULT:
+ sel->r.top = sel->r.left = 0;
+ sel->r.width = 720;
+ sel->r.height = cx->is_50hz ? 576 : 480;
+ break;
+ default:
+ return -EINVAL;
+ }
+ return 0;
}
static int cx18_enum_fmt_vid_cap(struct file *file, void *fh,
@@ -510,6 +509,9 @@
{
struct cx18_open_id *id = fh2id(fh);
struct cx18 *cx = id->cx;
+ v4l2_std_id std = V4L2_STD_ALL;
+ const struct cx18_card_video_input *card_input =
+ cx->card->video_inputs + inp;
if (inp >= cx->nof_inputs)
return -EINVAL;
@@ -525,6 +527,11 @@
cx->active_input = inp;
/* Set the audio input to whatever is appropriate for the input type. */
cx->audio_input = cx->card->video_inputs[inp].audio_index;
+ if (card_input->video_type == V4L2_INPUT_TYPE_TUNER)
+ std = cx->tuner_std;
+ cx->streams[CX18_ENC_STREAM_TYPE_MPG].video_dev.tvnorms = std;
+ cx->streams[CX18_ENC_STREAM_TYPE_YUV].video_dev.tvnorms = std;
+ cx->streams[CX18_ENC_STREAM_TYPE_VBI].video_dev.tvnorms = std;
/* prevent others from messing with the streams until
we're finished changing inputs. */
@@ -1036,7 +1043,7 @@
for (i = 0; i < CX18_MAX_STREAMS; i++) {
struct cx18_stream *s = &cx->streams[i];
- if (s->video_dev == NULL || s->buffers == 0)
+ if (s->video_dev.v4l2_dev == NULL || s->buffers == 0)
continue;
CX18_INFO("Stream %s: status 0x%04lx, %d%% of %d KiB (%d buffers) in use\n",
s->name, s->s_flags,
@@ -1078,8 +1085,7 @@
.vidioc_enumaudio = cx18_enumaudio,
.vidioc_enum_input = cx18_enum_input,
.vidioc_cropcap = cx18_cropcap,
- .vidioc_s_crop = cx18_s_crop,
- .vidioc_g_crop = cx18_g_crop,
+ .vidioc_g_selection = cx18_g_selection,
.vidioc_g_input = cx18_g_input,
.vidioc_s_input = cx18_s_input,
.vidioc_g_frequency = cx18_g_frequency,
@@ -1114,6 +1120,8 @@
.vidioc_querybuf = cx18_querybuf,
.vidioc_qbuf = cx18_qbuf,
.vidioc_dqbuf = cx18_dqbuf,
+ .vidioc_subscribe_event = v4l2_ctrl_subscribe_event,
+ .vidioc_unsubscribe_event = v4l2_event_unsubscribe,
};
void cx18_set_funcs(struct video_device *vdev)
diff --git a/drivers/media/pci/cx18/cx18-streams.c b/drivers/media/pci/cx18/cx18-streams.c
index 369445f..c82d25d 100644
--- a/drivers/media/pci/cx18/cx18-streams.c
+++ b/drivers/media/pci/cx18/cx18-streams.c
@@ -254,11 +254,8 @@
static void cx18_stream_init(struct cx18 *cx, int type)
{
struct cx18_stream *s = &cx->streams[type];
- struct video_device *video_dev = s->video_dev;
- /* we need to keep video_dev, so restore it afterwards */
memset(s, 0, sizeof(*s));
- s->video_dev = video_dev;
/* initialize cx18_stream fields */
s->dvb = NULL;
@@ -307,6 +304,7 @@
/* Assume the previous pixel default */
s->pixelformat = V4L2_PIX_FMT_HM12;
s->vb_bytes_per_frame = cx->cxhdl.height * 720 * 3 / 2;
+ s->vb_bytes_per_line = 720;
}
}
@@ -319,12 +317,12 @@
/*
* These five fields are always initialized.
- * For analog capture related streams, if video_dev == NULL then the
+ * For analog capture related streams, if video_dev.v4l2_dev == NULL then the
* stream is not in use.
* For the TS stream, if dvb == NULL then the stream is not in use.
* In those cases no other fields but these four can be used.
*/
- s->video_dev = NULL;
+ s->video_dev.v4l2_dev = NULL;
s->dvb = NULL;
s->cx = cx;
s->type = type;
@@ -367,24 +365,20 @@
if (num_offset == -1)
return 0;
- /* allocate and initialize the v4l2 video device structure */
- s->video_dev = video_device_alloc();
- if (s->video_dev == NULL) {
- CX18_ERR("Couldn't allocate v4l2 video_device for %s\n",
- s->name);
- return -ENOMEM;
- }
-
- snprintf(s->video_dev->name, sizeof(s->video_dev->name), "%s %s",
+ /* initialize the v4l2 video device structure */
+ snprintf(s->video_dev.name, sizeof(s->video_dev.name), "%s %s",
cx->v4l2_dev.name, s->name);
- s->video_dev->num = num;
- s->video_dev->v4l2_dev = &cx->v4l2_dev;
- s->video_dev->fops = &cx18_v4l2_enc_fops;
- s->video_dev->release = video_device_release;
- s->video_dev->tvnorms = V4L2_STD_ALL;
- s->video_dev->lock = &cx->serialize_lock;
- cx18_set_funcs(s->video_dev);
+ s->video_dev.num = num;
+ s->video_dev.v4l2_dev = &cx->v4l2_dev;
+ s->video_dev.fops = &cx18_v4l2_enc_fops;
+ s->video_dev.release = video_device_release_empty;
+ if (cx->card->video_inputs->video_type == CX18_CARD_INPUT_VID_TUNER)
+ s->video_dev.tvnorms = cx->tuner_std;
+ else
+ s->video_dev.tvnorms = V4L2_STD_ALL;
+ s->video_dev.lock = &cx->serialize_lock;
+ cx18_set_funcs(&s->video_dev);
return 0;
}
@@ -428,31 +422,30 @@
}
}
- if (s->video_dev == NULL)
+ if (s->video_dev.v4l2_dev == NULL)
return 0;
- num = s->video_dev->num;
+ num = s->video_dev.num;
/* card number + user defined offset + device offset */
if (type != CX18_ENC_STREAM_TYPE_MPG) {
struct cx18_stream *s_mpg = &cx->streams[CX18_ENC_STREAM_TYPE_MPG];
- if (s_mpg->video_dev)
- num = s_mpg->video_dev->num
+ if (s_mpg->video_dev.v4l2_dev)
+ num = s_mpg->video_dev.num
+ cx18_stream_info[type].num_offset;
}
- video_set_drvdata(s->video_dev, s);
+ video_set_drvdata(&s->video_dev, s);
/* Register device. First try the desired minor, then any free one. */
- ret = video_register_device_no_warn(s->video_dev, vfl_type, num);
+ ret = video_register_device_no_warn(&s->video_dev, vfl_type, num);
if (ret < 0) {
CX18_ERR("Couldn't register v4l2 device for %s (device node number %d)\n",
s->name, num);
- video_device_release(s->video_dev);
- s->video_dev = NULL;
+ s->video_dev.v4l2_dev = NULL;
return ret;
}
- name = video_device_node_name(s->video_dev);
+ name = video_device_node_name(&s->video_dev);
switch (vfl_type) {
case VFL_TYPE_GRABBER:
@@ -542,10 +535,9 @@
}
/* If struct video_device exists, can have buffers allocated */
- vdev = cx->streams[type].video_dev;
+ vdev = &cx->streams[type].video_dev;
- cx->streams[type].video_dev = NULL;
- if (vdev == NULL)
+ if (vdev->v4l2_dev == NULL)
continue;
if (type == CX18_ENC_STREAM_TYPE_YUV)
@@ -553,11 +545,7 @@
cx18_stream_free(&cx->streams[type]);
- /* Unregister or release device */
- if (unregister)
- video_unregister_device(vdev);
- else
- video_device_release(vdev);
+ video_unregister_device(vdev);
}
}
@@ -1042,7 +1030,7 @@
for (i = 0; i < CX18_MAX_STREAMS; i++) {
struct cx18_stream *s = &cx->streams[i];
- if (s->video_dev && (s->handle != CX18_INVALID_TASK_HANDLE))
+ if (s->video_dev.v4l2_dev && (s->handle != CX18_INVALID_TASK_HANDLE))
return s->handle;
}
return CX18_INVALID_TASK_HANDLE;
diff --git a/drivers/media/pci/cx18/cx18-streams.h b/drivers/media/pci/cx18/cx18-streams.h
index 713b0e6..27f8af9 100644
--- a/drivers/media/pci/cx18/cx18-streams.h
+++ b/drivers/media/pci/cx18/cx18-streams.h
@@ -33,7 +33,7 @@
static inline bool cx18_stream_enabled(struct cx18_stream *s)
{
- return s->video_dev ||
+ return s->video_dev.v4l2_dev ||
(s->dvb && s->dvb->enabled) ||
(s->type == CX18_ENC_STREAM_TYPE_IDX &&
s->cx->stream_buffers[CX18_ENC_STREAM_TYPE_IDX] != 0);
diff --git a/drivers/media/pci/cx23885/Kconfig b/drivers/media/pci/cx23885/Kconfig
index 74d774e..2e1b88c 100644
--- a/drivers/media/pci/cx23885/Kconfig
+++ b/drivers/media/pci/cx23885/Kconfig
@@ -40,7 +40,6 @@
select MEDIA_TUNER_TDA18271 if MEDIA_SUBDRV_AUTOSELECT
select MEDIA_TUNER_XC5000 if MEDIA_SUBDRV_AUTOSELECT
select MEDIA_TUNER_SI2157 if MEDIA_SUBDRV_AUTOSELECT
- select MEDIA_TUNER_M88TS2022 if MEDIA_SUBDRV_AUTOSELECT
select MEDIA_TUNER_M88RS6000T if MEDIA_SUBDRV_AUTOSELECT
select DVB_TUNER_DIB0070 if MEDIA_SUBDRV_AUTOSELECT
---help---
diff --git a/drivers/media/pci/cx23885/altera-ci.c b/drivers/media/pci/cx23885/altera-ci.c
index 2bbbf54..0a91df2 100644
--- a/drivers/media/pci/cx23885/altera-ci.c
+++ b/drivers/media/pci/cx23885/altera-ci.c
@@ -483,7 +483,6 @@
}
}
-EXPORT_SYMBOL(altera_hw_filt_release);
void altera_ci_release(void *dev, int ci_nr)
{
@@ -598,7 +597,6 @@
return 0;
}
-EXPORT_SYMBOL(altera_pid_feed_control);
static int altera_ci_start_feed(struct dvb_demux_feed *feed, int num)
{
@@ -699,7 +697,6 @@
return ret;
}
-EXPORT_SYMBOL(altera_hw_filt_init);
int altera_ci_init(struct altera_ci_config *config, int ci_nr)
{
diff --git a/drivers/media/pci/cx23885/altera-ci.h b/drivers/media/pci/cx23885/altera-ci.h
index 5028f0c..6c51172 100644
--- a/drivers/media/pci/cx23885/altera-ci.h
+++ b/drivers/media/pci/cx23885/altera-ci.h
@@ -39,7 +39,7 @@
int (*fpga_rw) (void *dev, int ad_rg, int val, int rw);
};
-#if IS_ENABLED(CONFIG_MEDIA_ALTERA_CI)
+#if IS_REACHABLE(CONFIG_MEDIA_ALTERA_CI)
extern int altera_ci_init(struct altera_ci_config *config, int ci_nr);
extern void altera_ci_release(void *dev, int ci_nr);
diff --git a/drivers/media/pci/cx23885/cx23885-core.c b/drivers/media/pci/cx23885/cx23885-core.c
index 1ad4994..7aee76a 100644
--- a/drivers/media/pci/cx23885/cx23885-core.c
+++ b/drivers/media/pci/cx23885/cx23885-core.c
@@ -825,6 +825,7 @@
int i;
spin_lock_init(&dev->pci_irqmask_lock);
+ spin_lock_init(&dev->slock);
mutex_init(&dev->lock);
mutex_init(&dev->gpio_lock);
diff --git a/drivers/media/pci/cx23885/cx23885-dvb.c b/drivers/media/pci/cx23885/cx23885-dvb.c
index 45fbe1e..745caab 100644
--- a/drivers/media/pci/cx23885/cx23885-dvb.c
+++ b/drivers/media/pci/cx23885/cx23885-dvb.c
@@ -73,7 +73,6 @@
#include "si2157.h"
#include "sp2.h"
#include "m88ds3103.h"
-#include "m88ts2022.h"
#include "m88rs6000t.h"
static unsigned int debug;
@@ -1187,7 +1186,7 @@
struct vb2_dvb_frontend *fe0, *fe1 = NULL;
struct si2168_config si2168_config;
struct si2157_config si2157_config;
- struct m88ts2022_config m88ts2022_config;
+ struct ts2020_config ts2020_config;
struct i2c_board_info info;
struct i2c_adapter *adapter;
struct i2c_client *client_demod = NULL, *client_tuner = NULL;
@@ -1856,13 +1855,12 @@
break;
/* attach tuner */
- memset(&m88ts2022_config, 0, sizeof(m88ts2022_config));
- m88ts2022_config.fe = fe0->dvb.frontend;
- m88ts2022_config.clock = 27000000;
+ memset(&ts2020_config, 0, sizeof(ts2020_config));
+ ts2020_config.fe = fe0->dvb.frontend;
memset(&info, 0, sizeof(struct i2c_board_info));
- strlcpy(info.type, "m88ts2022", I2C_NAME_SIZE);
+ strlcpy(info.type, "ts2020", I2C_NAME_SIZE);
info.addr = 0x60;
- info.platform_data = &m88ts2022_config;
+ info.platform_data = &ts2020_config;
request_module(info.type);
client_tuner = i2c_new_device(adapter, &info);
if (client_tuner == NULL ||
@@ -1986,13 +1984,12 @@
break;
/* attach tuner */
- memset(&m88ts2022_config, 0, sizeof(m88ts2022_config));
- m88ts2022_config.fe = fe0->dvb.frontend;
- m88ts2022_config.clock = 27000000;
+ memset(&ts2020_config, 0, sizeof(ts2020_config));
+ ts2020_config.fe = fe0->dvb.frontend;
memset(&info, 0, sizeof(struct i2c_board_info));
- strlcpy(info.type, "m88ts2022", I2C_NAME_SIZE);
+ strlcpy(info.type, "ts2020", I2C_NAME_SIZE);
info.addr = 0x60;
- info.platform_data = &m88ts2022_config;
+ info.platform_data = &ts2020_config;
request_module(info.type);
client_tuner = i2c_new_device(adapter, &info);
if (client_tuner == NULL || client_tuner->dev.driver == NULL)
@@ -2032,13 +2029,12 @@
break;
/* attach tuner */
- memset(&m88ts2022_config, 0, sizeof(m88ts2022_config));
- m88ts2022_config.fe = fe0->dvb.frontend;
- m88ts2022_config.clock = 27000000;
+ memset(&ts2020_config, 0, sizeof(ts2020_config));
+ ts2020_config.fe = fe0->dvb.frontend;
memset(&info, 0, sizeof(struct i2c_board_info));
- strlcpy(info.type, "m88ts2022", I2C_NAME_SIZE);
+ strlcpy(info.type, "ts2020", I2C_NAME_SIZE);
info.addr = 0x60;
- info.platform_data = &m88ts2022_config;
+ info.platform_data = &ts2020_config;
request_module(info.type);
client_tuner = i2c_new_device(adapter, &info);
if (client_tuner == NULL || client_tuner->dev.driver == NULL)
diff --git a/drivers/media/pci/cx23885/cx23885-video.c b/drivers/media/pci/cx23885/cx23885-video.c
index 5e93c68..2232b38 100644
--- a/drivers/media/pci/cx23885/cx23885-video.c
+++ b/drivers/media/pci/cx23885/cx23885-video.c
@@ -1137,7 +1137,6 @@
int err;
dprintk(1, "%s()\n", __func__);
- spin_lock_init(&dev->slock);
/* Initialize VBI template */
cx23885_vbi_template = cx23885_video_template;
diff --git a/drivers/media/pci/cx88/cx88-blackbird.c b/drivers/media/pci/cx88/cx88-blackbird.c
index b6be46e..24216ef 100644
--- a/drivers/media/pci/cx88/cx88-blackbird.c
+++ b/drivers/media/pci/cx88/cx88-blackbird.c
@@ -1102,32 +1102,26 @@
static void blackbird_unregister_video(struct cx8802_dev *dev)
{
- if (dev->mpeg_dev) {
- if (video_is_registered(dev->mpeg_dev))
- video_unregister_device(dev->mpeg_dev);
- else
- video_device_release(dev->mpeg_dev);
- dev->mpeg_dev = NULL;
- }
+ video_unregister_device(&dev->mpeg_dev);
}
static int blackbird_register_video(struct cx8802_dev *dev)
{
int err;
- dev->mpeg_dev = cx88_vdev_init(dev->core, dev->pci,
- &cx8802_mpeg_template, "mpeg");
- dev->mpeg_dev->ctrl_handler = &dev->cxhdl.hdl;
- video_set_drvdata(dev->mpeg_dev, dev);
- dev->mpeg_dev->queue = &dev->vb2_mpegq;
- err = video_register_device(dev->mpeg_dev, VFL_TYPE_GRABBER, -1);
+ cx88_vdev_init(dev->core, dev->pci, &dev->mpeg_dev,
+ &cx8802_mpeg_template, "mpeg");
+ dev->mpeg_dev.ctrl_handler = &dev->cxhdl.hdl;
+ video_set_drvdata(&dev->mpeg_dev, dev);
+ dev->mpeg_dev.queue = &dev->vb2_mpegq;
+ err = video_register_device(&dev->mpeg_dev, VFL_TYPE_GRABBER, -1);
if (err < 0) {
printk(KERN_INFO "%s/2: can't register mpeg device\n",
dev->core->name);
return err;
}
printk(KERN_INFO "%s/2: registered device %s [mpeg]\n",
- dev->core->name, video_device_node_name(dev->mpeg_dev));
+ dev->core->name, video_device_node_name(&dev->mpeg_dev));
return 0;
}
diff --git a/drivers/media/pci/cx88/cx88-core.c b/drivers/media/pci/cx88/cx88-core.c
index c38d5a1..3501be9 100644
--- a/drivers/media/pci/cx88/cx88-core.c
+++ b/drivers/media/pci/cx88/cx88-core.c
@@ -985,17 +985,14 @@
/* ------------------------------------------------------------------ */
-struct video_device *cx88_vdev_init(struct cx88_core *core,
- struct pci_dev *pci,
- const struct video_device *template_,
- const char *type)
+void cx88_vdev_init(struct cx88_core *core,
+ struct pci_dev *pci,
+ struct video_device *vfd,
+ const struct video_device *template_,
+ const char *type)
{
- struct video_device *vfd;
-
- vfd = video_device_alloc();
- if (NULL == vfd)
- return NULL;
*vfd = *template_;
+
/*
* The dev pointer of v4l2_device is NULL, instead we set the
* video_device dev_parent pointer to the correct PCI bus device.
@@ -1004,11 +1001,10 @@
*/
vfd->v4l2_dev = &core->v4l2_dev;
vfd->dev_parent = &pci->dev;
- vfd->release = video_device_release;
+ vfd->release = video_device_release_empty;
vfd->lock = &core->lock;
snprintf(vfd->name, sizeof(vfd->name), "%s %s (%s)",
core->name, type, core->board.name);
- return vfd;
}
struct cx88_core* cx88_core_get(struct pci_dev *pci)
diff --git a/drivers/media/pci/cx88/cx88-mpeg.c b/drivers/media/pci/cx88/cx88-mpeg.c
index a369b08..9834454 100644
--- a/drivers/media/pci/cx88/cx88-mpeg.c
+++ b/drivers/media/pci/cx88/cx88-mpeg.c
@@ -732,7 +732,7 @@
dev->alloc_ctx = vb2_dma_sg_init_ctx(&pci_dev->dev);
if (IS_ERR(dev->alloc_ctx)) {
err = PTR_ERR(dev->alloc_ctx);
- goto fail_core;
+ goto fail_dev;
}
dev->core = core;
@@ -754,6 +754,7 @@
fail_free:
vb2_dma_sg_cleanup_ctx(dev->alloc_ctx);
+ fail_dev:
kfree(dev);
fail_core:
core->dvbdev = NULL;
diff --git a/drivers/media/pci/cx88/cx88-video.c b/drivers/media/pci/cx88/cx88-video.c
index 860c98fc..c9decd8 100644
--- a/drivers/media/pci/cx88/cx88-video.c
+++ b/drivers/media/pci/cx88/cx88-video.c
@@ -1274,27 +1274,9 @@
static void cx8800_unregister_video(struct cx8800_dev *dev)
{
- if (dev->radio_dev) {
- if (video_is_registered(dev->radio_dev))
- video_unregister_device(dev->radio_dev);
- else
- video_device_release(dev->radio_dev);
- dev->radio_dev = NULL;
- }
- if (dev->vbi_dev) {
- if (video_is_registered(dev->vbi_dev))
- video_unregister_device(dev->vbi_dev);
- else
- video_device_release(dev->vbi_dev);
- dev->vbi_dev = NULL;
- }
- if (dev->video_dev) {
- if (video_is_registered(dev->video_dev))
- video_unregister_device(dev->video_dev);
- else
- video_device_release(dev->video_dev);
- dev->video_dev = NULL;
- }
+ video_unregister_device(&dev->radio_dev);
+ video_unregister_device(&dev->vbi_dev);
+ video_unregister_device(&dev->video_dev);
}
static int cx8800_initdev(struct pci_dev *pci_dev,
@@ -1485,12 +1467,12 @@
goto fail_unreg;
/* register v4l devices */
- dev->video_dev = cx88_vdev_init(core,dev->pci,
- &cx8800_video_template,"video");
- video_set_drvdata(dev->video_dev, dev);
- dev->video_dev->ctrl_handler = &core->video_hdl;
- dev->video_dev->queue = &dev->vb2_vidq;
- err = video_register_device(dev->video_dev,VFL_TYPE_GRABBER,
+ cx88_vdev_init(core, dev->pci, &dev->video_dev,
+ &cx8800_video_template, "video");
+ video_set_drvdata(&dev->video_dev, dev);
+ dev->video_dev.ctrl_handler = &core->video_hdl;
+ dev->video_dev.queue = &dev->vb2_vidq;
+ err = video_register_device(&dev->video_dev, VFL_TYPE_GRABBER,
video_nr[core->nr]);
if (err < 0) {
printk(KERN_ERR "%s/0: can't register video device\n",
@@ -1498,12 +1480,13 @@
goto fail_unreg;
}
printk(KERN_INFO "%s/0: registered device %s [v4l2]\n",
- core->name, video_device_node_name(dev->video_dev));
+ core->name, video_device_node_name(&dev->video_dev));
- dev->vbi_dev = cx88_vdev_init(core,dev->pci,&cx8800_vbi_template,"vbi");
- video_set_drvdata(dev->vbi_dev, dev);
- dev->vbi_dev->queue = &dev->vb2_vbiq;
- err = video_register_device(dev->vbi_dev,VFL_TYPE_VBI,
+ cx88_vdev_init(core, dev->pci, &dev->vbi_dev,
+ &cx8800_vbi_template, "vbi");
+ video_set_drvdata(&dev->vbi_dev, dev);
+ dev->vbi_dev.queue = &dev->vb2_vbiq;
+ err = video_register_device(&dev->vbi_dev, VFL_TYPE_VBI,
vbi_nr[core->nr]);
if (err < 0) {
printk(KERN_ERR "%s/0: can't register vbi device\n",
@@ -1511,14 +1494,14 @@
goto fail_unreg;
}
printk(KERN_INFO "%s/0: registered device %s\n",
- core->name, video_device_node_name(dev->vbi_dev));
+ core->name, video_device_node_name(&dev->vbi_dev));
if (core->board.radio.type == CX88_RADIO) {
- dev->radio_dev = cx88_vdev_init(core,dev->pci,
- &cx8800_radio_template,"radio");
- video_set_drvdata(dev->radio_dev, dev);
- dev->radio_dev->ctrl_handler = &core->audio_hdl;
- err = video_register_device(dev->radio_dev,VFL_TYPE_RADIO,
+ cx88_vdev_init(core, dev->pci, &dev->radio_dev,
+ &cx8800_radio_template, "radio");
+ video_set_drvdata(&dev->radio_dev, dev);
+ dev->radio_dev.ctrl_handler = &core->audio_hdl;
+ err = video_register_device(&dev->radio_dev, VFL_TYPE_RADIO,
radio_nr[core->nr]);
if (err < 0) {
printk(KERN_ERR "%s/0: can't register radio device\n",
@@ -1526,7 +1509,7 @@
goto fail_unreg;
}
printk(KERN_INFO "%s/0: registered device %s\n",
- core->name, video_device_node_name(dev->radio_dev));
+ core->name, video_device_node_name(&dev->radio_dev));
}
/* start tvaudio thread */
diff --git a/drivers/media/pci/cx88/cx88.h b/drivers/media/pci/cx88/cx88.h
index 7748ca9..b9fe1ac 100644
--- a/drivers/media/pci/cx88/cx88.h
+++ b/drivers/media/pci/cx88/cx88.h
@@ -478,9 +478,9 @@
/* various device info */
unsigned int resources;
- struct video_device *video_dev;
- struct video_device *vbi_dev;
- struct video_device *radio_dev;
+ struct video_device video_dev;
+ struct video_device vbi_dev;
+ struct video_device radio_dev;
/* pci i/o */
struct pci_dev *pci;
@@ -563,7 +563,7 @@
/* for blackbird only */
struct list_head devlist;
#if IS_ENABLED(CONFIG_VIDEO_CX88_BLACKBIRD)
- struct video_device *mpeg_dev;
+ struct video_device mpeg_dev;
u32 mailbox;
/* mpeg params */
@@ -647,10 +647,11 @@
unsigned int height, enum v4l2_field field);
extern int cx88_set_tvnorm(struct cx88_core *core, v4l2_std_id norm);
-extern struct video_device *cx88_vdev_init(struct cx88_core *core,
- struct pci_dev *pci,
- const struct video_device *template_,
- const char *type);
+extern void cx88_vdev_init(struct cx88_core *core,
+ struct pci_dev *pci,
+ struct video_device *vfd,
+ const struct video_device *template_,
+ const char *type);
extern struct cx88_core *cx88_core_get(struct pci_dev *pci);
extern void cx88_core_put(struct cx88_core *core,
struct pci_dev *pci);
diff --git a/drivers/media/pci/ivtv/ivtv-alsa-main.c b/drivers/media/pci/ivtv/ivtv-alsa-main.c
index 39b5292..41fa215 100644
--- a/drivers/media/pci/ivtv/ivtv-alsa-main.c
+++ b/drivers/media/pci/ivtv/ivtv-alsa-main.c
@@ -224,7 +224,7 @@
}
s = &itv->streams[IVTV_ENC_STREAM_TYPE_PCM];
- if (s->vdev == NULL) {
+ if (s->vdev.v4l2_dev == NULL) {
IVTV_DEBUG_ALSA_INFO("%s: PCM stream for card is disabled - "
"skipping\n", __func__);
return 0;
diff --git a/drivers/media/pci/ivtv/ivtv-alsa-pcm.c b/drivers/media/pci/ivtv/ivtv-alsa-pcm.c
index 7bf9cbc..f198b98 100644
--- a/drivers/media/pci/ivtv/ivtv-alsa-pcm.c
+++ b/drivers/media/pci/ivtv/ivtv-alsa-pcm.c
@@ -167,7 +167,7 @@
s = &itv->streams[IVTV_ENC_STREAM_TYPE_PCM];
- v4l2_fh_init(&item.fh, s->vdev);
+ v4l2_fh_init(&item.fh, &s->vdev);
item.itv = itv;
item.type = s->type;
diff --git a/drivers/media/pci/ivtv/ivtv-driver.c b/drivers/media/pci/ivtv/ivtv-driver.c
index 802642d..c2e60b4 100644
--- a/drivers/media/pci/ivtv/ivtv-driver.c
+++ b/drivers/media/pci/ivtv/ivtv-driver.c
@@ -1284,7 +1284,7 @@
return 0;
free_streams:
- ivtv_streams_cleanup(itv, 1);
+ ivtv_streams_cleanup(itv);
free_irq:
free_irq(itv->pdev->irq, (void *)itv);
free_i2c:
@@ -1444,7 +1444,7 @@
flush_kthread_worker(&itv->irq_worker);
kthread_stop(itv->irq_worker_task);
- ivtv_streams_cleanup(itv, 1);
+ ivtv_streams_cleanup(itv);
ivtv_udma_free(itv);
v4l2_ctrl_handler_free(&itv->cxhdl.hdl);
diff --git a/drivers/media/pci/ivtv/ivtv-driver.h b/drivers/media/pci/ivtv/ivtv-driver.h
index bc309f42c..e8b6c7a 100644
--- a/drivers/media/pci/ivtv/ivtv-driver.h
+++ b/drivers/media/pci/ivtv/ivtv-driver.h
@@ -327,7 +327,7 @@
struct ivtv_stream {
/* These first four fields are always set, even if the stream
is not actually created. */
- struct video_device *vdev; /* NULL when stream not created */
+ struct video_device vdev; /* vdev.v4l2_dev is NULL if there is no device */
struct ivtv *itv; /* for ease of use */
const char *name; /* name of the stream */
int type; /* stream type */
diff --git a/drivers/media/pci/ivtv/ivtv-fileops.c b/drivers/media/pci/ivtv/ivtv-fileops.c
index e5ff627..605d280 100644
--- a/drivers/media/pci/ivtv/ivtv-fileops.c
+++ b/drivers/media/pci/ivtv/ivtv-fileops.c
@@ -995,7 +995,7 @@
IVTV_DEBUG_WARN("nomem on v4l2 open\n");
return -ENOMEM;
}
- v4l2_fh_init(&item->fh, s->vdev);
+ v4l2_fh_init(&item->fh, &s->vdev);
item->itv = itv;
item->type = s->type;
diff --git a/drivers/media/pci/ivtv/ivtv-ioctl.c b/drivers/media/pci/ivtv/ivtv-ioctl.c
index 4d8ee18..6fe6c4a 100644
--- a/drivers/media/pci/ivtv/ivtv-ioctl.c
+++ b/drivers/media/pci/ivtv/ivtv-ioctl.c
@@ -448,9 +448,12 @@
static int ivtv_g_fmt_vid_out_overlay(struct file *file, void *fh, struct v4l2_format *fmt)
{
struct ivtv *itv = fh2id(fh)->itv;
+ struct ivtv_stream *s = &itv->streams[fh2id(fh)->type];
struct v4l2_window *winfmt = &fmt->fmt.win;
- if (!(itv->v4l2_cap & V4L2_CAP_VIDEO_OUTPUT))
+ if (!(s->caps & V4L2_CAP_VIDEO_OUTPUT_OVERLAY))
+ return -EINVAL;
+ if (!itv->osd_video_pbase)
return -EINVAL;
winfmt->chromakey = itv->osd_chroma_key;
winfmt->global_alpha = itv->osd_global_alpha;
@@ -555,10 +558,13 @@
static int ivtv_try_fmt_vid_out_overlay(struct file *file, void *fh, struct v4l2_format *fmt)
{
struct ivtv *itv = fh2id(fh)->itv;
+ struct ivtv_stream *s = &itv->streams[fh2id(fh)->type];
u32 chromakey = fmt->fmt.win.chromakey;
u8 global_alpha = fmt->fmt.win.global_alpha;
- if (!(itv->v4l2_cap & V4L2_CAP_VIDEO_OUTPUT))
+ if (!(s->caps & V4L2_CAP_VIDEO_OUTPUT_OVERLAY))
+ return -EINVAL;
+ if (!itv->osd_video_pbase)
return -EINVAL;
ivtv_g_fmt_vid_out_overlay(file, fh, fmt);
fmt->fmt.win.chromakey = chromakey;
@@ -741,6 +747,11 @@
snprintf(vcap->bus_info, sizeof(vcap->bus_info), "PCI:%s", pci_name(itv->pdev));
vcap->capabilities = itv->v4l2_cap | V4L2_CAP_DEVICE_CAPS;
vcap->device_caps = s->caps;
+ if ((s->caps & V4L2_CAP_VIDEO_OUTPUT_OVERLAY) &&
+ !itv->osd_video_pbase) {
+ vcap->capabilities &= ~V4L2_CAP_VIDEO_OUTPUT_OVERLAY;
+ vcap->device_caps &= ~V4L2_CAP_VIDEO_OUTPUT_OVERLAY;
+ }
return 0;
}
@@ -816,80 +827,103 @@
{
struct ivtv_open_id *id = fh2id(fh);
struct ivtv *itv = id->itv;
- struct yuv_playback_info *yi = &itv->yuv_info;
- int streamtype;
- streamtype = id->type;
-
- if (cropcap->type != V4L2_BUF_TYPE_VIDEO_OUTPUT)
- return -EINVAL;
- cropcap->bounds.top = cropcap->bounds.left = 0;
- cropcap->bounds.width = 720;
if (cropcap->type == V4L2_BUF_TYPE_VIDEO_CAPTURE) {
- cropcap->bounds.height = itv->is_50hz ? 576 : 480;
cropcap->pixelaspect.numerator = itv->is_50hz ? 59 : 10;
cropcap->pixelaspect.denominator = itv->is_50hz ? 54 : 11;
- } else if (streamtype == IVTV_DEC_STREAM_TYPE_YUV) {
- if (yi->track_osd) {
- cropcap->bounds.width = yi->osd_full_w;
- cropcap->bounds.height = yi->osd_full_h;
- } else {
- cropcap->bounds.width = 720;
- cropcap->bounds.height =
- itv->is_out_50hz ? 576 : 480;
- }
+ } else if (cropcap->type == V4L2_BUF_TYPE_VIDEO_OUTPUT) {
cropcap->pixelaspect.numerator = itv->is_out_50hz ? 59 : 10;
cropcap->pixelaspect.denominator = itv->is_out_50hz ? 54 : 11;
} else {
- cropcap->bounds.height = itv->is_out_50hz ? 576 : 480;
- cropcap->pixelaspect.numerator = itv->is_out_50hz ? 59 : 10;
- cropcap->pixelaspect.denominator = itv->is_out_50hz ? 54 : 11;
+ return -EINVAL;
}
- cropcap->defrect = cropcap->bounds;
return 0;
}
-static int ivtv_s_crop(struct file *file, void *fh, const struct v4l2_crop *crop)
+static int ivtv_s_selection(struct file *file, void *fh,
+ struct v4l2_selection *sel)
{
struct ivtv_open_id *id = fh2id(fh);
struct ivtv *itv = id->itv;
struct yuv_playback_info *yi = &itv->yuv_info;
- int streamtype;
+ struct v4l2_rect r = { 0, 0, 720, 0 };
+ int streamtype = id->type;
- streamtype = id->type;
-
- if (crop->type == V4L2_BUF_TYPE_VIDEO_OUTPUT &&
- (itv->v4l2_cap & V4L2_CAP_VIDEO_OUTPUT)) {
- if (streamtype == IVTV_DEC_STREAM_TYPE_YUV) {
- yi->main_rect = crop->c;
- return 0;
- } else {
- if (!ivtv_vapi(itv, CX2341X_OSD_SET_FRAMEBUFFER_WINDOW, 4,
- crop->c.width, crop->c.height, crop->c.left, crop->c.top)) {
- itv->main_rect = crop->c;
- return 0;
- }
- }
+ if (sel->type != V4L2_BUF_TYPE_VIDEO_OUTPUT ||
+ !(itv->v4l2_cap & V4L2_CAP_VIDEO_OUTPUT))
return -EINVAL;
+
+ if (sel->target != V4L2_SEL_TGT_COMPOSE)
+ return -EINVAL;
+
+ r.height = itv->is_out_50hz ? 576 : 480;
+ if (streamtype == IVTV_DEC_STREAM_TYPE_YUV && yi->track_osd) {
+ r.width = yi->osd_full_w;
+ r.height = yi->osd_full_h;
+ }
+ sel->r.width = clamp(sel->r.width, 16U, r.width);
+ sel->r.height = clamp(sel->r.height, 16U, r.height);
+ sel->r.left = clamp_t(unsigned, sel->r.left, 0, r.width - sel->r.width);
+ sel->r.top = clamp_t(unsigned, sel->r.top, 0, r.height - sel->r.height);
+
+ if (streamtype == IVTV_DEC_STREAM_TYPE_YUV) {
+ yi->main_rect = sel->r;
+ return 0;
+ }
+ if (!ivtv_vapi(itv, CX2341X_OSD_SET_FRAMEBUFFER_WINDOW, 4,
+ sel->r.width, sel->r.height, sel->r.left, sel->r.top)) {
+ itv->main_rect = sel->r;
+ return 0;
}
return -EINVAL;
}
-static int ivtv_g_crop(struct file *file, void *fh, struct v4l2_crop *crop)
+static int ivtv_g_selection(struct file *file, void *fh,
+ struct v4l2_selection *sel)
{
struct ivtv_open_id *id = fh2id(fh);
struct ivtv *itv = id->itv;
struct yuv_playback_info *yi = &itv->yuv_info;
- int streamtype;
+ struct v4l2_rect r = { 0, 0, 720, 0 };
+ int streamtype = id->type;
- streamtype = id->type;
+ if (sel->type == V4L2_BUF_TYPE_VIDEO_CAPTURE) {
+ switch (sel->target) {
+ case V4L2_SEL_TGT_CROP_DEFAULT:
+ case V4L2_SEL_TGT_CROP_BOUNDS:
+ sel->r.top = sel->r.left = 0;
+ sel->r.width = 720;
+ sel->r.height = itv->is_50hz ? 576 : 480;
+ return 0;
+ default:
+ return -EINVAL;
+ }
+ }
- if (crop->type == V4L2_BUF_TYPE_VIDEO_OUTPUT &&
- (itv->v4l2_cap & V4L2_CAP_VIDEO_OUTPUT)) {
+ if (sel->type != V4L2_BUF_TYPE_VIDEO_OUTPUT ||
+ !(itv->v4l2_cap & V4L2_CAP_VIDEO_OUTPUT))
+ return -EINVAL;
+
+ switch (sel->target) {
+ case V4L2_SEL_TGT_COMPOSE:
if (streamtype == IVTV_DEC_STREAM_TYPE_YUV)
- crop->c = yi->main_rect;
+ sel->r = yi->main_rect;
else
- crop->c = itv->main_rect;
+ sel->r = itv->main_rect;
+ return 0;
+ case V4L2_SEL_TGT_COMPOSE_DEFAULT:
+ case V4L2_SEL_TGT_COMPOSE_BOUNDS:
+ r.height = itv->is_out_50hz ? 576 : 480;
+ if (streamtype == IVTV_DEC_STREAM_TYPE_YUV && yi->track_osd) {
+ r.width = yi->osd_full_w;
+ r.height = yi->osd_full_h;
+ }
+ sel->r = r;
return 0;
}
return -EINVAL;
@@ -987,7 +1021,7 @@
else
std = V4L2_STD_ALL;
for (i = 0; i <= IVTV_ENC_STREAM_TYPE_VBI; i++)
- itv->streams[i].vdev->tvnorms = std;
+ itv->streams[i].vdev.tvnorms = std;
/* prevent others from messing with the streams until
we're finished changing inputs. */
@@ -1038,7 +1072,7 @@
struct ivtv *itv = fh2id(fh)->itv;
struct ivtv_stream *s = &itv->streams[fh2id(fh)->type];
- if (s->vdev->vfl_dir)
+ if (s->vdev.vfl_dir)
return -ENOTTY;
if (vf->tuner != 0)
return -EINVAL;
@@ -1052,7 +1086,7 @@
struct ivtv *itv = fh2id(fh)->itv;
struct ivtv_stream *s = &itv->streams[fh2id(fh)->type];
- if (s->vdev->vfl_dir)
+ if (s->vdev.vfl_dir)
return -ENOTTY;
if (vf->tuner != 0)
return -EINVAL;
@@ -1340,6 +1374,7 @@
static int ivtv_g_fbuf(struct file *file, void *fh, struct v4l2_framebuffer *fb)
{
struct ivtv *itv = fh2id(fh)->itv;
+ struct ivtv_stream *s = &itv->streams[fh2id(fh)->type];
u32 data[CX2341X_MBOX_MAX_DATA];
struct yuv_playback_info *yi = &itv->yuv_info;
@@ -1363,10 +1398,10 @@
0,
};
- if (!(itv->v4l2_cap & V4L2_CAP_VIDEO_OUTPUT_OVERLAY))
- return -EINVAL;
+ if (!(s->caps & V4L2_CAP_VIDEO_OUTPUT_OVERLAY))
+ return -ENOTTY;
if (!itv->osd_video_pbase)
- return -EINVAL;
+ return -ENOTTY;
fb->capability = V4L2_FBUF_CAP_EXTERNOVERLAY | V4L2_FBUF_CAP_CHROMAKEY |
V4L2_FBUF_CAP_GLOBAL_ALPHA;
@@ -1427,12 +1462,13 @@
{
struct ivtv_open_id *id = fh2id(fh);
struct ivtv *itv = id->itv;
+ struct ivtv_stream *s = &itv->streams[fh2id(fh)->type];
struct yuv_playback_info *yi = &itv->yuv_info;
- if (!(itv->v4l2_cap & V4L2_CAP_VIDEO_OUTPUT_OVERLAY))
- return -EINVAL;
+ if (!(s->caps & V4L2_CAP_VIDEO_OUTPUT_OVERLAY))
+ return -ENOTTY;
if (!itv->osd_video_pbase)
- return -EINVAL;
+ return -ENOTTY;
itv->osd_global_alpha_state = (fb->flags & V4L2_FBUF_FLAG_GLOBAL_ALPHA) != 0;
itv->osd_local_alpha_state =
@@ -1447,9 +1483,12 @@
{
struct ivtv_open_id *id = fh2id(fh);
struct ivtv *itv = id->itv;
+ struct ivtv_stream *s = &itv->streams[fh2id(fh)->type];
- if (!(itv->v4l2_cap & V4L2_CAP_VIDEO_OUTPUT_OVERLAY))
- return -EINVAL;
+ if (!(s->caps & V4L2_CAP_VIDEO_OUTPUT_OVERLAY))
+ return -ENOTTY;
+ if (!itv->osd_video_pbase)
+ return -ENOTTY;
ivtv_vapi(itv, CX2341X_OSD_SET_STATE, 1, on != 0);
@@ -1547,7 +1586,7 @@
for (i = 0; i < IVTV_MAX_STREAMS; i++) {
struct ivtv_stream *s = &itv->streams[i];
- if (s->vdev == NULL || s->buffers == 0)
+ if (s->vdev.v4l2_dev == NULL || s->buffers == 0)
continue;
IVTV_INFO("Stream %s: status 0x%04lx, %d%% of %d KiB (%d buffers) in use\n", s->name, s->s_flags,
(s->buffers - s->q_free.buffers) * 100 / s->buffers,
@@ -1837,8 +1876,8 @@
.vidioc_enum_output = ivtv_enum_output,
.vidioc_enumaudout = ivtv_enumaudout,
.vidioc_cropcap = ivtv_cropcap,
- .vidioc_s_crop = ivtv_s_crop,
- .vidioc_g_crop = ivtv_g_crop,
+ .vidioc_s_selection = ivtv_s_selection,
+ .vidioc_g_selection = ivtv_g_selection,
.vidioc_g_input = ivtv_g_input,
.vidioc_s_input = ivtv_s_input,
.vidioc_g_output = ivtv_g_output,
diff --git a/drivers/media/pci/ivtv/ivtv-irq.c b/drivers/media/pci/ivtv/ivtv-irq.c
index e7d7017..36ca2d6 100644
--- a/drivers/media/pci/ivtv/ivtv-irq.c
+++ b/drivers/media/pci/ivtv/ivtv-irq.c
@@ -75,7 +75,7 @@
IVTV_DEBUG_HI_DMA("ivtv_pio_work_handler\n");
if (itv->cur_pio_stream < 0 || itv->cur_pio_stream >= IVTV_MAX_STREAMS ||
- s->vdev == NULL || !ivtv_use_pio(s)) {
+ s->vdev.v4l2_dev == NULL || !ivtv_use_pio(s)) {
itv->cur_pio_stream = -1;
/* trigger PIO complete user interrupt */
write_reg(IVTV_IRQ_ENC_PIO_COMPLETE, 0x44);
@@ -132,7 +132,7 @@
int rc;
/* sanity checks */
- if (s->vdev == NULL) {
+ if (s->vdev.v4l2_dev == NULL) {
IVTV_DEBUG_WARN("Stream %s not started\n", s->name);
return -1;
}
@@ -890,8 +890,8 @@
if (s)
wake_up(&s->waitq);
}
- if (s && s->vdev)
- v4l2_event_queue(s->vdev, frame ? &evtop : &evbottom);
+ if (s && s->vdev.v4l2_dev)
+ v4l2_event_queue(&s->vdev, frame ? &evtop : &evbottom);
wake_up(&itv->vsync_waitq);
/* Send VBI to saa7127 */
diff --git a/drivers/media/pci/ivtv/ivtv-streams.c b/drivers/media/pci/ivtv/ivtv-streams.c
index f0a1cc4..d27c6df 100644
--- a/drivers/media/pci/ivtv/ivtv-streams.c
+++ b/drivers/media/pci/ivtv/ivtv-streams.c
@@ -130,7 +130,8 @@
"decoder MPG",
VFL_TYPE_GRABBER, IVTV_V4L2_DEC_MPG_OFFSET,
PCI_DMA_TODEVICE, 0,
- V4L2_CAP_VIDEO_OUTPUT | V4L2_CAP_AUDIO | V4L2_CAP_READWRITE,
+ V4L2_CAP_VIDEO_OUTPUT | V4L2_CAP_AUDIO | V4L2_CAP_READWRITE |
+ V4L2_CAP_VIDEO_OUTPUT_OVERLAY,
&ivtv_v4l2_dec_fops
},
{ /* IVTV_DEC_STREAM_TYPE_VBI */
@@ -151,7 +152,8 @@
"decoder YUV",
VFL_TYPE_GRABBER, IVTV_V4L2_DEC_YUV_OFFSET,
PCI_DMA_TODEVICE, 0,
- V4L2_CAP_VIDEO_OUTPUT | V4L2_CAP_AUDIO | V4L2_CAP_READWRITE,
+ V4L2_CAP_VIDEO_OUTPUT | V4L2_CAP_AUDIO | V4L2_CAP_READWRITE |
+ V4L2_CAP_VIDEO_OUTPUT_OVERLAY,
&ivtv_v4l2_dec_fops
}
};
@@ -159,11 +161,9 @@
static void ivtv_stream_init(struct ivtv *itv, int type)
{
struct ivtv_stream *s = &itv->streams[type];
- struct video_device *vdev = s->vdev;
- /* we need to keep vdev, so restore it afterwards */
memset(s, 0, sizeof(*s));
- s->vdev = vdev;
/* initialize ivtv_stream fields */
s->itv = itv;
@@ -194,10 +194,10 @@
int num_offset = ivtv_stream_info[type].num_offset;
int num = itv->instance + ivtv_first_minor + num_offset;
- /* These four fields are always initialized. If vdev == NULL, then
+ /* These four fields are always initialized. If vdev.v4l2_dev == NULL, then
this stream is not in use. In that case no other fields but these
four can be used. */
- s->vdev = NULL;
+ s->vdev.v4l2_dev = NULL;
s->itv = itv;
s->type = type;
s->name = ivtv_stream_info[type].name;
@@ -218,40 +218,33 @@
ivtv_stream_init(itv, type);
- /* allocate and initialize the v4l2 video device structure */
- s->vdev = video_device_alloc();
- if (s->vdev == NULL) {
- IVTV_ERR("Couldn't allocate v4l2 video_device for %s\n", s->name);
- return -ENOMEM;
- }
-
- snprintf(s->vdev->name, sizeof(s->vdev->name), "%s %s",
+ snprintf(s->vdev.name, sizeof(s->vdev.name), "%s %s",
itv->v4l2_dev.name, s->name);
- s->vdev->num = num;
- s->vdev->v4l2_dev = &itv->v4l2_dev;
+ s->vdev.num = num;
+ s->vdev.v4l2_dev = &itv->v4l2_dev;
if (ivtv_stream_info[type].v4l2_caps &
(V4L2_CAP_VIDEO_OUTPUT | V4L2_CAP_SLICED_VBI_OUTPUT))
- s->vdev->vfl_dir = VFL_DIR_TX;
- s->vdev->fops = ivtv_stream_info[type].fops;
- s->vdev->ctrl_handler = itv->v4l2_dev.ctrl_handler;
- s->vdev->release = video_device_release;
- s->vdev->tvnorms = V4L2_STD_ALL;
- s->vdev->lock = &itv->serialize_lock;
+ s->vdev.vfl_dir = VFL_DIR_TX;
+ s->vdev.fops = ivtv_stream_info[type].fops;
+ s->vdev.ctrl_handler = itv->v4l2_dev.ctrl_handler;
+ s->vdev.release = video_device_release_empty;
+ s->vdev.tvnorms = V4L2_STD_ALL;
+ s->vdev.lock = &itv->serialize_lock;
if (s->type == IVTV_DEC_STREAM_TYPE_VBI) {
- v4l2_disable_ioctl(s->vdev, VIDIOC_S_AUDIO);
- v4l2_disable_ioctl(s->vdev, VIDIOC_G_AUDIO);
- v4l2_disable_ioctl(s->vdev, VIDIOC_ENUMAUDIO);
- v4l2_disable_ioctl(s->vdev, VIDIOC_ENUMINPUT);
- v4l2_disable_ioctl(s->vdev, VIDIOC_S_INPUT);
- v4l2_disable_ioctl(s->vdev, VIDIOC_G_INPUT);
- v4l2_disable_ioctl(s->vdev, VIDIOC_S_FREQUENCY);
- v4l2_disable_ioctl(s->vdev, VIDIOC_G_FREQUENCY);
- v4l2_disable_ioctl(s->vdev, VIDIOC_S_TUNER);
- v4l2_disable_ioctl(s->vdev, VIDIOC_G_TUNER);
- v4l2_disable_ioctl(s->vdev, VIDIOC_S_STD);
+ v4l2_disable_ioctl(&s->vdev, VIDIOC_S_AUDIO);
+ v4l2_disable_ioctl(&s->vdev, VIDIOC_G_AUDIO);
+ v4l2_disable_ioctl(&s->vdev, VIDIOC_ENUMAUDIO);
+ v4l2_disable_ioctl(&s->vdev, VIDIOC_ENUMINPUT);
+ v4l2_disable_ioctl(&s->vdev, VIDIOC_S_INPUT);
+ v4l2_disable_ioctl(&s->vdev, VIDIOC_G_INPUT);
+ v4l2_disable_ioctl(&s->vdev, VIDIOC_S_FREQUENCY);
+ v4l2_disable_ioctl(&s->vdev, VIDIOC_G_FREQUENCY);
+ v4l2_disable_ioctl(&s->vdev, VIDIOC_S_TUNER);
+ v4l2_disable_ioctl(&s->vdev, VIDIOC_G_TUNER);
+ v4l2_disable_ioctl(&s->vdev, VIDIOC_S_STD);
}
- ivtv_set_funcs(s->vdev);
+ ivtv_set_funcs(&s->vdev);
return 0;
}
@@ -266,7 +259,7 @@
if (ivtv_prep_dev(itv, type))
break;
- if (itv->streams[type].vdev == NULL)
+ if (itv->streams[type].vdev.v4l2_dev == NULL)
continue;
/* Allocate Stream */
@@ -277,7 +270,7 @@
return 0;
/* One or more streams could not be initialized. Clean 'em all up. */
- ivtv_streams_cleanup(itv, 0);
+ ivtv_streams_cleanup(itv);
return -ENOMEM;
}
@@ -288,28 +281,26 @@
const char *name;
int num;
- if (s->vdev == NULL)
+ if (s->vdev.v4l2_dev == NULL)
return 0;
- num = s->vdev->num;
+ num = s->vdev.num;
/* card number + user defined offset + device offset */
if (type != IVTV_ENC_STREAM_TYPE_MPG) {
struct ivtv_stream *s_mpg = &itv->streams[IVTV_ENC_STREAM_TYPE_MPG];
- if (s_mpg->vdev)
- num = s_mpg->vdev->num + ivtv_stream_info[type].num_offset;
+ if (s_mpg->vdev.v4l2_dev)
+ num = s_mpg->vdev.num + ivtv_stream_info[type].num_offset;
}
- video_set_drvdata(s->vdev, s);
+ video_set_drvdata(&s->vdev, s);
/* Register device. First try the desired minor, then any free one. */
- if (video_register_device_no_warn(s->vdev, vfl_type, num)) {
+ if (video_register_device_no_warn(&s->vdev, vfl_type, num)) {
IVTV_ERR("Couldn't register v4l2 device for %s (device node number %d)\n",
s->name, num);
- video_device_release(s->vdev);
- s->vdev = NULL;
return -ENOMEM;
}
- name = video_device_node_name(s->vdev);
+ name = video_device_node_name(&s->vdev);
switch (vfl_type) {
case VFL_TYPE_GRABBER:
@@ -346,29 +337,25 @@
return 0;
/* One or more streams could not be initialized. Clean 'em all up. */
- ivtv_streams_cleanup(itv, 1);
+ ivtv_streams_cleanup(itv);
return -ENOMEM;
}
/* Unregister v4l2 devices */
-void ivtv_streams_cleanup(struct ivtv *itv, int unregister)
+void ivtv_streams_cleanup(struct ivtv *itv)
{
int type;
/* Teardown all streams */
for (type = 0; type < IVTV_MAX_STREAMS; type++) {
- struct video_device *vdev = itv->streams[type].vdev;
+ struct video_device *vdev = &itv->streams[type].vdev;
- itv->streams[type].vdev = NULL;
- if (vdev == NULL)
+ if (vdev->v4l2_dev == NULL)
continue;
+ video_unregister_device(vdev);
ivtv_stream_free(&itv->streams[type]);
- /* Unregister or release device */
- if (unregister)
- video_unregister_device(vdev);
- else
- video_device_release(vdev);
+ itv->streams[type].vdev.v4l2_dev = NULL;
}
}
@@ -492,7 +479,7 @@
int captype = 0, subtype = 0;
int enable_passthrough = 0;
- if (s->vdev == NULL)
+ if (s->vdev.v4l2_dev == NULL)
return -EINVAL;
IVTV_DEBUG_INFO("Start encoder stream %s\n", s->name);
@@ -661,7 +648,7 @@
u16 width;
u16 height;
- if (s->vdev == NULL)
+ if (s->vdev.v4l2_dev == NULL)
return -EINVAL;
IVTV_DEBUG_INFO("Setting some initial decoder settings\n");
@@ -723,7 +710,7 @@
struct ivtv *itv = s->itv;
int rc;
- if (s->vdev == NULL)
+ if (s->vdev.v4l2_dev == NULL)
return -EINVAL;
if (test_and_set_bit(IVTV_F_S_STREAMING, &s->s_flags))
@@ -778,7 +765,7 @@
for (i = IVTV_MAX_STREAMS - 1; i >= 0; i--) {
struct ivtv_stream *s = &itv->streams[i];
- if (s->vdev == NULL)
+ if (s->vdev.v4l2_dev == NULL)
continue;
if (test_bit(IVTV_F_S_STREAMING, &s->s_flags)) {
ivtv_stop_v4l2_encode_stream(s, 0);
@@ -793,7 +780,7 @@
int cap_type;
int stopmode;
- if (s->vdev == NULL)
+ if (s->vdev.v4l2_dev == NULL)
return -EINVAL;
/* This function assumes that you are allowed to stop the capture
@@ -917,7 +904,7 @@
};
struct ivtv *itv = s->itv;
- if (s->vdev == NULL)
+ if (s->vdev.v4l2_dev == NULL)
return -EINVAL;
if (s->type != IVTV_DEC_STREAM_TYPE_YUV && s->type != IVTV_DEC_STREAM_TYPE_MPG)
@@ -969,7 +956,7 @@
set_bit(IVTV_F_I_EV_DEC_STOPPED, &itv->i_flags);
wake_up(&itv->event_waitq);
- v4l2_event_queue(s->vdev, &ev);
+ v4l2_event_queue(&s->vdev, &ev);
/* wake up wait queues */
wake_up(&s->waitq);
@@ -982,7 +969,7 @@
struct ivtv_stream *yuv_stream = &itv->streams[IVTV_ENC_STREAM_TYPE_YUV];
struct ivtv_stream *dec_stream = &itv->streams[IVTV_DEC_STREAM_TYPE_YUV];
- if (yuv_stream->vdev == NULL || dec_stream->vdev == NULL)
+ if (yuv_stream->vdev.v4l2_dev == NULL || dec_stream->vdev.v4l2_dev == NULL)
return -EINVAL;
IVTV_DEBUG_INFO("ivtv ioctl: Select passthrough mode\n");
diff --git a/drivers/media/pci/ivtv/ivtv-streams.h b/drivers/media/pci/ivtv/ivtv-streams.h
index a653a51..3d76a41 100644
--- a/drivers/media/pci/ivtv/ivtv-streams.h
+++ b/drivers/media/pci/ivtv/ivtv-streams.h
@@ -23,7 +23,7 @@
int ivtv_streams_setup(struct ivtv *itv);
int ivtv_streams_register(struct ivtv *itv);
-void ivtv_streams_cleanup(struct ivtv *itv, int unregister);
+void ivtv_streams_cleanup(struct ivtv *itv);
/* Capture related */
int ivtv_start_v4l2_encode_stream(struct ivtv_stream *s);
diff --git a/drivers/media/pci/meye/meye.c b/drivers/media/pci/meye/meye.c
index 9d9f90c..ba887e8 100644
--- a/drivers/media/pci/meye/meye.c
+++ b/drivers/media/pci/meye/meye.c
@@ -1546,7 +1546,7 @@
.name = "meye",
.fops = &meye_fops,
.ioctl_ops = &meye_ioctl_ops,
- .release = video_device_release,
+ .release = video_device_release_empty,
};
static const struct v4l2_ctrl_ops meye_ctrl_ops = {
@@ -1623,7 +1623,7 @@
if (meye.mchip_dev != NULL) {
printk(KERN_ERR "meye: only one device allowed!\n");
- goto outnotdev;
+ return ret;
}
ret = v4l2_device_register(&pcidev->dev, v4l2_dev);
@@ -1633,11 +1633,6 @@
}
ret = -ENOMEM;
meye.mchip_dev = pcidev;
- meye.vdev = video_device_alloc();
- if (!meye.vdev) {
- v4l2_err(v4l2_dev, "video_device_alloc() failed!\n");
- goto outnotdev;
- }
meye.grab_temp = vmalloc(MCHIP_NB_PAGES_MJPEG * PAGE_SIZE);
if (!meye.grab_temp) {
@@ -1658,8 +1653,8 @@
goto outkfifoalloc2;
}
- memcpy(meye.vdev, &meye_template, sizeof(meye_template));
- meye.vdev->v4l2_dev = &meye.v4l2_dev;
+ meye.vdev = meye_template;
+ meye.vdev.v4l2_dev = &meye.v4l2_dev;
ret = -EIO;
if ((ret = sony_pic_camera_command(SONY_PIC_COMMAND_SETCAMERA, 1))) {
@@ -1743,9 +1738,9 @@
}
v4l2_ctrl_handler_setup(&meye.hdl);
- meye.vdev->ctrl_handler = &meye.hdl;
+ meye.vdev.ctrl_handler = &meye.hdl;
- if (video_register_device(meye.vdev, VFL_TYPE_GRABBER,
+ if (video_register_device(&meye.vdev, VFL_TYPE_GRABBER,
video_nr) < 0) {
v4l2_err(v4l2_dev, "video_register_device failed\n");
goto outvideoreg;
@@ -1777,14 +1772,12 @@
outkfifoalloc1:
vfree(meye.grab_temp);
outvmalloc:
- video_device_release(meye.vdev);
-outnotdev:
return ret;
}
static void meye_remove(struct pci_dev *pcidev)
{
- video_unregister_device(meye.vdev);
+ video_unregister_device(&meye.vdev);
mchip_hic_stop();
diff --git a/drivers/media/pci/meye/meye.h b/drivers/media/pci/meye/meye.h
index 6fed927..751be5e 100644
--- a/drivers/media/pci/meye/meye.h
+++ b/drivers/media/pci/meye/meye.h
@@ -311,7 +311,7 @@
struct kfifo doneq; /* queue for grabbed buffers */
spinlock_t doneq_lock; /* lock protecting the queue */
wait_queue_head_t proc_list; /* wait queue */
- struct video_device *vdev; /* video device parameters */
+ struct video_device vdev; /* video device parameters */
u16 brightness;
u16 hue;
u16 contrast;
diff --git a/drivers/media/pci/saa7146/hexium_gemini.c b/drivers/media/pci/saa7146/hexium_gemini.c
index 366434f..03cbcd2 100644
--- a/drivers/media/pci/saa7146/hexium_gemini.c
+++ b/drivers/media/pci/saa7146/hexium_gemini.c
@@ -66,7 +66,7 @@
{
int type;
- struct video_device *video_dev;
+ struct video_device video_dev;
struct i2c_adapter i2c_adapter;
int cur_input; /* current input */
diff --git a/drivers/media/pci/saa7146/hexium_orion.c b/drivers/media/pci/saa7146/hexium_orion.c
index a1eb26d..15f0d66 100644
--- a/drivers/media/pci/saa7146/hexium_orion.c
+++ b/drivers/media/pci/saa7146/hexium_orion.c
@@ -63,7 +63,7 @@
struct hexium
{
int type;
- struct video_device *video_dev;
+ struct video_device video_dev;
struct i2c_adapter i2c_adapter;
int cur_input; /* current input */
diff --git a/drivers/media/pci/saa7146/mxb.c b/drivers/media/pci/saa7146/mxb.c
index c4c8fce..0ca1e07 100644
--- a/drivers/media/pci/saa7146/mxb.c
+++ b/drivers/media/pci/saa7146/mxb.c
@@ -151,8 +151,8 @@
struct mxb
{
- struct video_device *video_dev;
- struct video_device *vbi_dev;
+ struct video_device video_dev;
+ struct video_device vbi_dev;
struct i2c_adapter i2c_adapter;
diff --git a/drivers/media/pci/saa7164/saa7164-core.c b/drivers/media/pci/saa7164/saa7164-core.c
index 4b0bec3..9cf3c6c 100644
--- a/drivers/media/pci/saa7164/saa7164-core.c
+++ b/drivers/media/pci/saa7164/saa7164-core.c
@@ -1436,11 +1436,11 @@
saa7164_i2c_unregister(&dev->i2c_bus[1]);
saa7164_i2c_unregister(&dev->i2c_bus[2]);
- pci_disable_device(pci_dev);
-
/* unregister stuff */
free_irq(pci_dev->irq, dev);
+ pci_disable_device(pci_dev);
+
mutex_lock(&devlist);
list_del(&dev->devlist);
mutex_unlock(&devlist);
diff --git a/drivers/media/pci/smipcie/Kconfig b/drivers/media/pci/smipcie/Kconfig
index c8de53f..21a1583 100644
--- a/drivers/media/pci/smipcie/Kconfig
+++ b/drivers/media/pci/smipcie/Kconfig
@@ -4,7 +4,7 @@
select I2C_ALGOBIT
select DVB_M88DS3103 if MEDIA_SUBDRV_AUTOSELECT
select DVB_SI2168 if MEDIA_SUBDRV_AUTOSELECT
- select MEDIA_TUNER_M88TS2022 if MEDIA_SUBDRV_AUTOSELECT
+ select DVB_TS2020 if MEDIA_SUBDRV_AUTOSELECT
select MEDIA_TUNER_M88RS6000T if MEDIA_SUBDRV_AUTOSELECT
select MEDIA_TUNER_SI2157 if MEDIA_SUBDRV_AUTOSELECT
help
diff --git a/drivers/media/pci/smipcie/smipcie.c b/drivers/media/pci/smipcie/smipcie.c
index 36c8ed7..4115925 100644
--- a/drivers/media/pci/smipcie/smipcie.c
+++ b/drivers/media/pci/smipcie/smipcie.c
@@ -16,7 +16,7 @@
#include "smipcie.h"
#include "m88ds3103.h"
-#include "m88ts2022.h"
+#include "ts2020.h"
#include "m88rs6000t.h"
#include "si2168.h"
#include "si2157.h"
@@ -532,9 +532,7 @@
struct i2c_adapter *tuner_i2c_adapter;
struct i2c_client *tuner_client;
struct i2c_board_info tuner_info;
- struct m88ts2022_config m88ts2022_config = {
- .clock = 27000000,
- };
+ struct ts2020_config ts2020_config = {};
memset(&tuner_info, 0, sizeof(struct i2c_board_info));
i2c = (port->idx == 0) ? &dev->i2c_bus[0] : &dev->i2c_bus[1];
@@ -546,10 +544,10 @@
return ret;
}
/* attach tuner */
- m88ts2022_config.fe = port->fe;
- strlcpy(tuner_info.type, "m88ts2022", I2C_NAME_SIZE);
+ ts2020_config.fe = port->fe;
+ strlcpy(tuner_info.type, "ts2020", I2C_NAME_SIZE);
tuner_info.addr = 0x60;
- tuner_info.platform_data = &m88ts2022_config;
+ tuner_info.platform_data = &ts2020_config;
tuner_client = smi_add_i2c_client(tuner_i2c_adapter, &tuner_info);
if (!tuner_client) {
ret = -ENODEV;
diff --git a/drivers/media/pci/sta2x11/sta2x11_vip.c b/drivers/media/pci/sta2x11/sta2x11_vip.c
index 22450f5..d384a6b 100644
--- a/drivers/media/pci/sta2x11/sta2x11_vip.c
+++ b/drivers/media/pci/sta2x11/sta2x11_vip.c
@@ -127,7 +127,7 @@
*/
struct sta2x11_vip {
struct v4l2_device v4l2_dev;
- struct video_device *video_dev;
+ struct video_device video_dev;
struct pci_dev *pdev;
struct i2c_adapter *adapter;
unsigned int register_save_area[IRQ_COUNT + SAVE_COUNT + AUX_COUNT];
@@ -763,7 +763,7 @@
static struct video_device video_dev_template = {
.name = KBUILD_MODNAME,
- .release = video_device_release,
+ .release = video_device_release_empty,
.fops = &vip_fops,
.ioctl_ops = &vip_ioctl_ops,
.tvnorms = V4L2_STD_ALL,
@@ -1082,19 +1082,13 @@
goto release_buf;
}
- /* Alloc, initialize and register video device */
- vip->video_dev = video_device_alloc();
- if (!vip->video_dev) {
- ret = -ENOMEM;
- goto release_irq;
- }
+ /* Initialize and register video device */
+ vip->video_dev = video_dev_template;
+ vip->video_dev.v4l2_dev = &vip->v4l2_dev;
+ vip->video_dev.queue = &vip->vb_vidq;
+ video_set_drvdata(&vip->video_dev, vip);
- vip->video_dev = &video_dev_template;
- vip->video_dev->v4l2_dev = &vip->v4l2_dev;
- vip->video_dev->queue = &vip->vb_vidq;
- video_set_drvdata(vip->video_dev, vip);
-
- ret = video_register_device(vip->video_dev, VFL_TYPE_GRABBER, -1);
+ ret = video_register_device(&vip->video_dev, VFL_TYPE_GRABBER, -1);
if (ret)
goto vrelease;
@@ -1124,13 +1118,9 @@
return 0;
vunreg:
- video_set_drvdata(vip->video_dev, NULL);
+ video_set_drvdata(&vip->video_dev, NULL);
vrelease:
- if (video_is_registered(vip->video_dev))
- video_unregister_device(vip->video_dev);
- else
- video_device_release(vip->video_dev);
-release_irq:
+ video_unregister_device(&vip->video_dev);
free_irq(pdev->irq, vip);
release_buf:
sta2x11_vip_release_buffer(vip);
@@ -1175,9 +1165,8 @@
sta2x11_vip_clear_register(vip);
- video_set_drvdata(vip->video_dev, NULL);
- video_unregister_device(vip->video_dev);
- /*do not call video_device_release() here, is already done */
+ video_set_drvdata(&vip->video_dev, NULL);
+ video_unregister_device(&vip->video_dev);
free_irq(pdev->irq, vip);
pci_disable_msi(pdev);
vb2_queue_release(&vip->vb_vidq);
diff --git a/drivers/media/pci/ttpci/av7110.h b/drivers/media/pci/ttpci/av7110.h
index ef3d960..835635b 100644
--- a/drivers/media/pci/ttpci/av7110.h
+++ b/drivers/media/pci/ttpci/av7110.h
@@ -102,8 +102,8 @@
struct dvb_device dvb_dev;
struct dvb_net dvb_net;
- struct video_device *v4l_dev;
- struct video_device *vbi_dev;
+ struct video_device v4l_dev;
+ struct video_device vbi_dev;
struct saa7146_dev *dev;
diff --git a/drivers/media/pci/ttpci/budget-av.c b/drivers/media/pci/ttpci/budget-av.c
index 0ba3875..54c9910 100644
--- a/drivers/media/pci/ttpci/budget-av.c
+++ b/drivers/media/pci/ttpci/budget-av.c
@@ -68,7 +68,7 @@
struct budget_av {
struct budget budget;
- struct video_device *vd;
+ struct video_device vd;
int cur_input;
int has_saa7113;
struct tasklet_struct ciintf_irq_tasklet;
diff --git a/drivers/media/platform/Kconfig b/drivers/media/platform/Kconfig
index d9b872b..421f531 100644
--- a/drivers/media/platform/Kconfig
+++ b/drivers/media/platform/Kconfig
@@ -56,7 +56,7 @@
config VIDEO_TIMBERDALE
tristate "Support for timberdale Video In/LogiWIN"
- depends on VIDEO_V4L2 && I2C && VIDEO_V4L2_SUBDEV_API
+ depends on VIDEO_V4L2 && I2C && VIDEO_V4L2_SUBDEV_API && HAS_DMA
depends on (MFD_TIMBERDALE && TIMB_DMA) || COMPILE_TEST
select VIDEO_ADV7180
select VIDEOBUF_DMA_CONTIG
@@ -90,6 +90,7 @@
select ARM_DMA_USE_IOMMU
select OMAP_IOMMU
select VIDEOBUF2_DMA_CONTIG
+ select MFD_SYSCON
---help---
Driver for an OMAP 3 camera controller.
@@ -117,6 +118,7 @@
source "drivers/media/platform/exynos4-is/Kconfig"
source "drivers/media/platform/s5p-tv/Kconfig"
source "drivers/media/platform/am437x/Kconfig"
+source "drivers/media/platform/xilinx/Kconfig"
endif # V4L_PLATFORM_DRIVERS
diff --git a/drivers/media/platform/Makefile b/drivers/media/platform/Makefile
index 3ec1547..8f85561 100644
--- a/drivers/media/platform/Makefile
+++ b/drivers/media/platform/Makefile
@@ -48,4 +48,6 @@
obj-$(CONFIG_VIDEO_AM437X_VPFE) += am437x/
+obj-$(CONFIG_VIDEO_XILINX) += xilinx/
+
ccflags-y += -I$(srctree)/drivers/media/i2c
diff --git a/drivers/media/platform/am437x/Kconfig b/drivers/media/platform/am437x/Kconfig
index 7b023a76..42d9c18 100644
--- a/drivers/media/platform/am437x/Kconfig
+++ b/drivers/media/platform/am437x/Kconfig
@@ -1,6 +1,6 @@
config VIDEO_AM437X_VPFE
tristate "TI AM437x VPFE video capture driver"
- depends on VIDEO_V4L2 && VIDEO_V4L2_SUBDEV_API
+ depends on VIDEO_V4L2 && VIDEO_V4L2_SUBDEV_API && HAS_DMA
depends on SOC_AM43XX || COMPILE_TEST
select VIDEOBUF2_DMA_CONTIG
help
diff --git a/drivers/media/platform/am437x/am437x-vpfe.c b/drivers/media/platform/am437x/am437x-vpfe.c
index 56a5cb0..a30cc2f 100644
--- a/drivers/media/platform/am437x/am437x-vpfe.c
+++ b/drivers/media/platform/am437x/am437x-vpfe.c
@@ -1645,6 +1645,7 @@
fse.index = fsize->index;
fse.pad = 0;
fse.code = mbus.code;
+ fse.which = V4L2_SUBDEV_FORMAT_ACTIVE;
ret = v4l2_subdev_call(sdinfo->sd, pad, enum_frame_size, NULL, &fse);
if (ret)
return -EINVAL;
@@ -1700,11 +1701,16 @@
{
struct vpfe_config *cfg = vpfe->cfg;
struct vpfe_subdev_info *sdinfo;
+ struct i2c_client *client;
+ struct i2c_client *curr_client;
int i, j = 0;
+ curr_client = v4l2_get_subdevdata(vpfe->current_subdev->sd);
for (i = 0; i < ARRAY_SIZE(vpfe->cfg->asd); i++) {
sdinfo = &cfg->sub_devs[i];
- if (!strcmp(sdinfo->name, vpfe->current_subdev->name)) {
+ client = v4l2_get_subdevdata(sdinfo->sd);
+ if (client->addr == curr_client->addr &&
+ client->adapter->nr == curr_client->adapter->nr) {
if (vpfe->current_input >= 1)
return -1;
*app_input_index = j + vpfe->current_input;
@@ -2296,20 +2302,10 @@
vpfe_dbg(1, vpfe, "vpfe_async_bound\n");
for (i = 0; i < ARRAY_SIZE(vpfe->cfg->asd); i++) {
- sdinfo = &vpfe->cfg->sub_devs[i];
-
- if (!strcmp(sdinfo->name, subdev->name)) {
+ if (vpfe->cfg->asd[i]->match.of.node == asd[i].match.of.node) {
+ sdinfo = &vpfe->cfg->sub_devs[i];
vpfe->sd[i] = subdev;
- vpfe_info(vpfe,
- "v4l2 sub device %s registered\n",
- subdev->name);
- vpfe->sd[i]->grp_id =
- sdinfo->grp_id;
- /* update tvnorms from the sub devices */
- for (j = 0; j < 1; j++)
- vpfe->video_dev->tvnorms |=
- sdinfo->inputs[j].std;
-
+ vpfe->sd[i]->grp_id = sdinfo->grp_id;
found = true;
break;
}
@@ -2320,6 +2316,8 @@
return -EINVAL;
}
+ vpfe->video_dev.tvnorms |= sdinfo->inputs[0].std;
+
/* setup the supported formats & indexes */
for (j = 0, i = 0; ; ++j) {
struct vpfe_fmt *fmt;
@@ -2327,6 +2325,7 @@
memset(&mbus_code, 0, sizeof(mbus_code));
mbus_code.index = j;
+ mbus_code.which = V4L2_SUBDEV_FORMAT_ACTIVE;
ret = v4l2_subdev_call(subdev, pad, enum_mbus_code,
NULL, &mbus_code);
if (ret)
@@ -2390,9 +2389,9 @@
INIT_LIST_HEAD(&vpfe->dma_queue);
- vdev = vpfe->video_dev;
+ vdev = &vpfe->video_dev;
strlcpy(vdev->name, VPFE_MODULE_NAME, sizeof(vdev->name));
- vdev->release = video_device_release;
+ vdev->release = video_device_release_empty;
vdev->fops = &vpfe_fops;
vdev->ioctl_ops = &vpfe_ioctl_ops;
vdev->v4l2_dev = &vpfe->v4l2_dev;
@@ -2400,7 +2399,7 @@
vdev->queue = q;
vdev->lock = &vpfe->lock;
video_set_drvdata(vdev, vpfe);
- err = video_register_device(vpfe->video_dev, VFL_TYPE_GRABBER, -1);
+ err = video_register_device(&vpfe->video_dev, VFL_TYPE_GRABBER, -1);
if (err) {
vpfe_err(vpfe,
"Unable to register video device.\n");
@@ -2425,7 +2424,7 @@
static struct vpfe_config *
vpfe_get_pdata(struct platform_device *pdev)
{
- struct device_node *endpoint = NULL, *rem = NULL;
+ struct device_node *endpoint = NULL;
struct v4l2_of_endpoint bus_cfg;
struct vpfe_subdev_info *sdinfo;
struct vpfe_config *pdata;
@@ -2443,6 +2442,8 @@
return NULL;
for (i = 0; ; i++) {
+ struct device_node *rem;
+
endpoint = of_graph_get_next_endpoint(pdev->dev.of_node,
endpoint);
if (!endpoint)
@@ -2497,14 +2498,17 @@
goto done;
}
- strncpy(sdinfo->name, rem->name, sizeof(sdinfo->name));
-
pdata->asd[i] = devm_kzalloc(&pdev->dev,
sizeof(struct v4l2_async_subdev),
GFP_KERNEL);
+ if (!pdata->asd[i]) {
+ of_node_put(rem);
+ pdata = NULL;
+ goto done;
+ }
+
pdata->asd[i]->match_type = V4L2_ASYNC_MATCH_OF;
pdata->asd[i]->match.of.node = rem;
- of_node_put(endpoint);
of_node_put(rem);
}
@@ -2513,7 +2517,6 @@
done:
of_node_put(endpoint);
- of_node_put(rem);
return NULL;
}
@@ -2561,17 +2564,11 @@
return -EINVAL;
}
- vpfe->video_dev = video_device_alloc();
- if (!vpfe->video_dev) {
- dev_err(&pdev->dev, "Unable to allocate video device\n");
- return -ENOMEM;
- }
-
ret = v4l2_device_register(&pdev->dev, &vpfe->v4l2_dev);
if (ret) {
vpfe_err(vpfe,
"Unable to register v4l2 device.\n");
- goto probe_out_video_release;
+ return ret;
}
/* set the driver data in platform device */
@@ -2609,9 +2606,6 @@
probe_out_v4l2_unregister:
v4l2_device_unregister(&vpfe->v4l2_dev);
-probe_out_video_release:
- if (!video_is_registered(vpfe->video_dev))
- video_device_release(vpfe->video_dev);
return ret;
}
@@ -2628,7 +2622,7 @@
v4l2_async_notifier_unregister(&vpfe->notifier);
v4l2_device_unregister(&vpfe->v4l2_dev);
- video_unregister_device(vpfe->video_dev);
+ video_unregister_device(&vpfe->video_dev);
return 0;
}
diff --git a/drivers/media/platform/am437x/am437x-vpfe.h b/drivers/media/platform/am437x/am437x-vpfe.h
index 0f55735..5bfb356 100644
--- a/drivers/media/platform/am437x/am437x-vpfe.h
+++ b/drivers/media/platform/am437x/am437x-vpfe.h
@@ -83,7 +83,6 @@
};
struct vpfe_subdev_info {
- char name[32];
/* Sub device group id */
int grp_id;
/* inputs available at the sub device */
@@ -223,7 +222,7 @@
struct vpfe_device {
/* V4l2 specific parameters */
/* Identifies video device for this channel */
- struct video_device *video_dev;
+ struct video_device video_dev;
/* sub devices */
struct v4l2_subdev **sd;
/* vpfe cfg */
diff --git a/drivers/media/platform/blackfin/bfin_capture.c b/drivers/media/platform/blackfin/bfin_capture.c
index 8f66986..6a437f8 100644
--- a/drivers/media/platform/blackfin/bfin_capture.c
+++ b/drivers/media/platform/blackfin/bfin_capture.c
@@ -44,7 +44,6 @@
#include <media/blackfin/ppi.h>
#define CAPTURE_DRV_NAME "bfin_capture"
-#define BCAP_MIN_NUM_BUF 2
struct bcap_format {
char *desc;
@@ -65,7 +64,7 @@
/* v4l2 control handler */
struct v4l2_ctrl_handler ctrl_handler;
/* device node data */
- struct video_device *video_dev;
+ struct video_device video_dev;
/* sub device instance */
struct v4l2_subdev *sd;
/* capture config */
@@ -104,12 +103,8 @@
struct completion comp;
/* prepare to stop */
bool stop;
-};
-
-struct bcap_fh {
- struct v4l2_fh fh;
- /* indicates whether this file handle is doing IO */
- bool io_allowed;
+ /* vb2 buffer sequence counter */
+ unsigned sequence;
};
static const struct bcap_format bcap_formats[] = {
@@ -201,90 +196,6 @@
bcap_dev->sensor_formats = NULL;
}
-static int bcap_open(struct file *file)
-{
- struct bcap_device *bcap_dev = video_drvdata(file);
- struct video_device *vfd = bcap_dev->video_dev;
- struct bcap_fh *bcap_fh;
-
- if (!bcap_dev->sd) {
- v4l2_err(&bcap_dev->v4l2_dev, "No sub device registered\n");
- return -ENODEV;
- }
-
- bcap_fh = kzalloc(sizeof(*bcap_fh), GFP_KERNEL);
- if (!bcap_fh) {
- v4l2_err(&bcap_dev->v4l2_dev,
- "unable to allocate memory for file handle object\n");
- return -ENOMEM;
- }
-
- v4l2_fh_init(&bcap_fh->fh, vfd);
-
- /* store pointer to v4l2_fh in private_data member of file */
- file->private_data = &bcap_fh->fh;
- v4l2_fh_add(&bcap_fh->fh);
- bcap_fh->io_allowed = false;
- return 0;
-}
-
-static int bcap_release(struct file *file)
-{
- struct bcap_device *bcap_dev = video_drvdata(file);
- struct v4l2_fh *fh = file->private_data;
- struct bcap_fh *bcap_fh = container_of(fh, struct bcap_fh, fh);
-
- /* if this instance is doing IO */
- if (bcap_fh->io_allowed)
- vb2_queue_release(&bcap_dev->buffer_queue);
-
- file->private_data = NULL;
- v4l2_fh_del(&bcap_fh->fh);
- v4l2_fh_exit(&bcap_fh->fh);
- kfree(bcap_fh);
- return 0;
-}
-
-static int bcap_mmap(struct file *file, struct vm_area_struct *vma)
-{
- struct bcap_device *bcap_dev = video_drvdata(file);
- int ret;
-
- if (mutex_lock_interruptible(&bcap_dev->mutex))
- return -ERESTARTSYS;
- ret = vb2_mmap(&bcap_dev->buffer_queue, vma);
- mutex_unlock(&bcap_dev->mutex);
- return ret;
-}
-
-#ifndef CONFIG_MMU
-static unsigned long bcap_get_unmapped_area(struct file *file,
- unsigned long addr,
- unsigned long len,
- unsigned long pgoff,
- unsigned long flags)
-{
- struct bcap_device *bcap_dev = video_drvdata(file);
-
- return vb2_get_unmapped_area(&bcap_dev->buffer_queue,
- addr,
- len,
- pgoff,
- flags);
-}
-#endif
-
-static unsigned int bcap_poll(struct file *file, poll_table *wait)
-{
- struct bcap_device *bcap_dev = video_drvdata(file);
- unsigned int res;
-
- mutex_lock(&bcap_dev->mutex);
- res = vb2_poll(&bcap_dev->buffer_queue, file, wait);
- mutex_unlock(&bcap_dev->mutex);
- return res;
-}
-
static int bcap_queue_setup(struct vb2_queue *vq,
const struct v4l2_format *fmt,
unsigned int *nbuffers, unsigned int *nplanes,
@@ -292,37 +203,32 @@
{
struct bcap_device *bcap_dev = vb2_get_drv_priv(vq);
- if (*nbuffers < BCAP_MIN_NUM_BUF)
- *nbuffers = BCAP_MIN_NUM_BUF;
+ if (fmt && fmt->fmt.pix.sizeimage < bcap_dev->fmt.sizeimage)
+ return -EINVAL;
+
+ if (vq->num_buffers + *nbuffers < 2)
+ *nbuffers = 2;
*nplanes = 1;
- sizes[0] = bcap_dev->fmt.sizeimage;
+ sizes[0] = fmt ? fmt->fmt.pix.sizeimage : bcap_dev->fmt.sizeimage;
alloc_ctxs[0] = bcap_dev->alloc_ctx;
return 0;
}
-static int bcap_buffer_init(struct vb2_buffer *vb)
-{
- struct bcap_buffer *buf = to_bcap_vb(vb);
-
- INIT_LIST_HEAD(&buf->list);
- return 0;
-}
-
static int bcap_buffer_prepare(struct vb2_buffer *vb)
{
struct bcap_device *bcap_dev = vb2_get_drv_priv(vb->vb2_queue);
- struct bcap_buffer *buf = to_bcap_vb(vb);
- unsigned long size;
+ unsigned long size = bcap_dev->fmt.sizeimage;
- size = bcap_dev->fmt.sizeimage;
if (vb2_plane_size(vb, 0) < size) {
v4l2_err(&bcap_dev->v4l2_dev, "buffer too small (%lu < %lu)\n",
vb2_plane_size(vb, 0), size);
return -EINVAL;
}
- vb2_set_plane_payload(&buf->vb, 0, size);
+ vb2_set_plane_payload(vb, 0, size);
+
+ vb->v4l2_buf.field = bcap_dev->fmt.field;
return 0;
}
@@ -353,14 +259,16 @@
{
struct bcap_device *bcap_dev = vb2_get_drv_priv(vq);
struct ppi_if *ppi = bcap_dev->ppi;
+ struct bcap_buffer *buf, *tmp;
struct ppi_params params;
+ dma_addr_t addr;
int ret;
/* enable streamon on the sub device */
ret = v4l2_subdev_call(bcap_dev->sd, video, s_stream, 1);
if (ret && (ret != -ENOIOCTLCMD)) {
v4l2_err(&bcap_dev->v4l2_dev, "stream on failed in subdev\n");
- return ret;
+ goto err;
}
/* set ppi params */
@@ -399,7 +307,7 @@
if (ret < 0) {
v4l2_err(&bcap_dev->v4l2_dev,
"Error in setting ppi params\n");
- return ret;
+ goto err;
}
/* attach ppi DMA irq handler */
@@ -407,12 +315,34 @@
if (ret < 0) {
v4l2_err(&bcap_dev->v4l2_dev,
"Error in attaching interrupt handler\n");
- return ret;
+ goto err;
}
+ bcap_dev->sequence = 0;
+
reinit_completion(&bcap_dev->comp);
bcap_dev->stop = false;
+
+ /* get the next frame from the dma queue */
+ bcap_dev->cur_frm = list_entry(bcap_dev->dma_queue.next,
+ struct bcap_buffer, list);
+ /* remove buffer from the dma queue */
+ list_del_init(&bcap_dev->cur_frm->list);
+ addr = vb2_dma_contig_plane_dma_addr(&bcap_dev->cur_frm->vb, 0);
+ /* update DMA address */
+ ppi->ops->update_addr(ppi, (unsigned long)addr);
+ /* enable ppi */
+ ppi->ops->start(ppi);
+
return 0;
+
+err:
+ list_for_each_entry_safe(buf, tmp, &bcap_dev->dma_queue, list) {
+ list_del(&buf->list);
+ vb2_buffer_done(&buf->vb, VB2_BUF_STATE_QUEUED);
+ }
+
+ return ret;
}
static void bcap_stop_streaming(struct vb2_queue *vq)
@@ -431,6 +361,9 @@
"stream off failed in subdev\n");
/* release all active buffers */
+ if (bcap_dev->cur_frm)
+ vb2_buffer_done(&bcap_dev->cur_frm->vb, VB2_BUF_STATE_ERROR);
+
while (!list_empty(&bcap_dev->dma_queue)) {
bcap_dev->cur_frm = list_entry(bcap_dev->dma_queue.next,
struct bcap_buffer, list);
@@ -441,7 +374,6 @@
static struct vb2_ops bcap_video_qops = {
.queue_setup = bcap_queue_setup,
- .buf_init = bcap_buffer_init,
.buf_prepare = bcap_buffer_prepare,
.buf_cleanup = bcap_buffer_cleanup,
.buf_queue = bcap_buffer_queue,
@@ -451,57 +383,6 @@
.stop_streaming = bcap_stop_streaming,
};
-static int bcap_reqbufs(struct file *file, void *priv,
- struct v4l2_requestbuffers *req_buf)
-{
- struct bcap_device *bcap_dev = video_drvdata(file);
- struct vb2_queue *vq = &bcap_dev->buffer_queue;
- struct v4l2_fh *fh = file->private_data;
- struct bcap_fh *bcap_fh = container_of(fh, struct bcap_fh, fh);
-
- if (vb2_is_busy(vq))
- return -EBUSY;
-
- bcap_fh->io_allowed = true;
-
- return vb2_reqbufs(vq, req_buf);
-}
-
-static int bcap_querybuf(struct file *file, void *priv,
- struct v4l2_buffer *buf)
-{
- struct bcap_device *bcap_dev = video_drvdata(file);
-
- return vb2_querybuf(&bcap_dev->buffer_queue, buf);
-}
-
-static int bcap_qbuf(struct file *file, void *priv,
- struct v4l2_buffer *buf)
-{
- struct bcap_device *bcap_dev = video_drvdata(file);
- struct v4l2_fh *fh = file->private_data;
- struct bcap_fh *bcap_fh = container_of(fh, struct bcap_fh, fh);
-
- if (!bcap_fh->io_allowed)
- return -EBUSY;
-
- return vb2_qbuf(&bcap_dev->buffer_queue, buf);
-}
-
-static int bcap_dqbuf(struct file *file, void *priv,
- struct v4l2_buffer *buf)
-{
- struct bcap_device *bcap_dev = video_drvdata(file);
- struct v4l2_fh *fh = file->private_data;
- struct bcap_fh *bcap_fh = container_of(fh, struct bcap_fh, fh);
-
- if (!bcap_fh->io_allowed)
- return -EBUSY;
-
- return vb2_dqbuf(&bcap_dev->buffer_queue,
- buf, file->f_flags & O_NONBLOCK);
-}
-
static irqreturn_t bcap_isr(int irq, void *dev_id)
{
struct ppi_if *ppi = dev_id;
@@ -517,6 +398,7 @@
vb2_buffer_done(vb, VB2_BUF_STATE_ERROR);
ppi->err = false;
} else {
+ vb->v4l2_buf.sequence = bcap_dev->sequence++;
vb2_buffer_done(vb, VB2_BUF_STATE_DONE);
}
bcap_dev->cur_frm = list_entry(bcap_dev->dma_queue.next,
@@ -543,62 +425,14 @@
return IRQ_HANDLED;
}
-static int bcap_streamon(struct file *file, void *priv,
- enum v4l2_buf_type buf_type)
-{
- struct bcap_device *bcap_dev = video_drvdata(file);
- struct bcap_fh *fh = file->private_data;
- struct ppi_if *ppi = bcap_dev->ppi;
- dma_addr_t addr;
- int ret;
-
- if (!fh->io_allowed)
- return -EBUSY;
-
- /* call streamon to start streaming in videobuf */
- ret = vb2_streamon(&bcap_dev->buffer_queue, buf_type);
- if (ret)
- return ret;
-
- /* if dma queue is empty, return error */
- if (list_empty(&bcap_dev->dma_queue)) {
- v4l2_err(&bcap_dev->v4l2_dev, "dma queue is empty\n");
- ret = -EINVAL;
- goto err;
- }
-
- /* get the next frame from the dma queue */
- bcap_dev->cur_frm = list_entry(bcap_dev->dma_queue.next,
- struct bcap_buffer, list);
- /* remove buffer from the dma queue */
- list_del_init(&bcap_dev->cur_frm->list);
- addr = vb2_dma_contig_plane_dma_addr(&bcap_dev->cur_frm->vb, 0);
- /* update DMA address */
- ppi->ops->update_addr(ppi, (unsigned long)addr);
- /* enable ppi */
- ppi->ops->start(ppi);
-
- return 0;
-err:
- vb2_streamoff(&bcap_dev->buffer_queue, buf_type);
- return ret;
-}
-
-static int bcap_streamoff(struct file *file, void *priv,
- enum v4l2_buf_type buf_type)
-{
- struct bcap_device *bcap_dev = video_drvdata(file);
- struct bcap_fh *fh = file->private_data;
-
- if (!fh->io_allowed)
- return -EBUSY;
-
- return vb2_streamoff(&bcap_dev->buffer_queue, buf_type);
-}
-
static int bcap_querystd(struct file *file, void *priv, v4l2_std_id *std)
{
struct bcap_device *bcap_dev = video_drvdata(file);
+ struct v4l2_input input;
+
+ input = bcap_dev->cfg->inputs[bcap_dev->cur_input];
+ if (!(input.capabilities & V4L2_IN_CAP_STD))
+ return -ENODATA;
return v4l2_subdev_call(bcap_dev->sd, video, querystd, std);
}
@@ -606,6 +440,11 @@
static int bcap_g_std(struct file *file, void *priv, v4l2_std_id *std)
{
struct bcap_device *bcap_dev = video_drvdata(file);
+ struct v4l2_input input;
+
+ input = bcap_dev->cfg->inputs[bcap_dev->cur_input];
+ if (!(input.capabilities & V4L2_IN_CAP_STD))
+ return -ENODATA;
*std = bcap_dev->std;
return 0;
@@ -614,8 +453,13 @@
static int bcap_s_std(struct file *file, void *priv, v4l2_std_id std)
{
struct bcap_device *bcap_dev = video_drvdata(file);
+ struct v4l2_input input;
int ret;
+ input = bcap_dev->cfg->inputs[bcap_dev->cur_input];
+ if (!(input.capabilities & V4L2_IN_CAP_STD))
+ return -ENODATA;
+
if (vb2_is_busy(&bcap_dev->buffer_queue))
return -EBUSY;
@@ -631,6 +475,11 @@
struct v4l2_enum_dv_timings *timings)
{
struct bcap_device *bcap_dev = video_drvdata(file);
+ struct v4l2_input input;
+
+ input = bcap_dev->cfg->inputs[bcap_dev->cur_input];
+ if (!(input.capabilities & V4L2_IN_CAP_DV_TIMINGS))
+ return -ENODATA;
timings->pad = 0;
@@ -642,6 +491,11 @@
struct v4l2_dv_timings *timings)
{
struct bcap_device *bcap_dev = video_drvdata(file);
+ struct v4l2_input input;
+
+ input = bcap_dev->cfg->inputs[bcap_dev->cur_input];
+ if (!(input.capabilities & V4L2_IN_CAP_DV_TIMINGS))
+ return -ENODATA;
return v4l2_subdev_call(bcap_dev->sd, video,
query_dv_timings, timings);
@@ -651,6 +505,11 @@
struct v4l2_dv_timings *timings)
{
struct bcap_device *bcap_dev = video_drvdata(file);
+ struct v4l2_input input;
+
+ input = bcap_dev->cfg->inputs[bcap_dev->cur_input];
+ if (!(input.capabilities & V4L2_IN_CAP_DV_TIMINGS))
+ return -ENODATA;
*timings = bcap_dev->dv_timings;
return 0;
@@ -660,7 +519,13 @@
struct v4l2_dv_timings *timings)
{
struct bcap_device *bcap_dev = video_drvdata(file);
+ struct v4l2_input input;
int ret;
+
+ input = bcap_dev->cfg->inputs[bcap_dev->cur_input];
+ if (!(input.capabilities & V4L2_IN_CAP_DV_TIMINGS))
+ return -ENODATA;
+
if (vb2_is_busy(&bcap_dev->buffer_queue))
return -EBUSY;
@@ -881,12 +746,14 @@
.vidioc_g_dv_timings = bcap_g_dv_timings,
.vidioc_query_dv_timings = bcap_query_dv_timings,
.vidioc_enum_dv_timings = bcap_enum_dv_timings,
- .vidioc_reqbufs = bcap_reqbufs,
- .vidioc_querybuf = bcap_querybuf,
- .vidioc_qbuf = bcap_qbuf,
- .vidioc_dqbuf = bcap_dqbuf,
- .vidioc_streamon = bcap_streamon,
- .vidioc_streamoff = bcap_streamoff,
+ .vidioc_reqbufs = vb2_ioctl_reqbufs,
+ .vidioc_create_bufs = vb2_ioctl_create_bufs,
+ .vidioc_querybuf = vb2_ioctl_querybuf,
+ .vidioc_qbuf = vb2_ioctl_qbuf,
+ .vidioc_dqbuf = vb2_ioctl_dqbuf,
+ .vidioc_expbuf = vb2_ioctl_expbuf,
+ .vidioc_streamon = vb2_ioctl_streamon,
+ .vidioc_streamoff = vb2_ioctl_streamoff,
.vidioc_g_parm = bcap_g_parm,
.vidioc_s_parm = bcap_s_parm,
.vidioc_log_status = bcap_log_status,
@@ -894,14 +761,14 @@
static struct v4l2_file_operations bcap_fops = {
.owner = THIS_MODULE,
- .open = bcap_open,
- .release = bcap_release,
+ .open = v4l2_fh_open,
+ .release = vb2_fop_release,
.unlocked_ioctl = video_ioctl2,
- .mmap = bcap_mmap,
+ .mmap = vb2_fop_mmap,
#ifndef CONFIG_MMU
- .get_unmapped_area = bcap_get_unmapped_area,
+ .get_unmapped_area = vb2_fop_get_unmapped_area,
#endif
- .poll = bcap_poll
+ .poll = vb2_fop_poll
};
static int bcap_probe(struct platform_device *pdev)
@@ -942,27 +809,20 @@
goto err_free_ppi;
}
- vfd = video_device_alloc();
- if (!vfd) {
- ret = -ENOMEM;
- v4l2_err(pdev->dev.driver, "Unable to alloc video device\n");
- goto err_cleanup_ctx;
- }
-
+ vfd = &bcap_dev->video_dev;
/* initialize field of video device */
- vfd->release = video_device_release;
+ vfd->release = video_device_release_empty;
vfd->fops = &bcap_fops;
vfd->ioctl_ops = &bcap_ioctl_ops;
vfd->tvnorms = 0;
vfd->v4l2_dev = &bcap_dev->v4l2_dev;
strncpy(vfd->name, CAPTURE_DRV_NAME, sizeof(vfd->name));
- bcap_dev->video_dev = vfd;
ret = v4l2_device_register(&pdev->dev, &bcap_dev->v4l2_dev);
if (ret) {
v4l2_err(pdev->dev.driver,
"Unable to register v4l2 device\n");
- goto err_release_vdev;
+ goto err_cleanup_ctx;
}
v4l2_info(&bcap_dev->v4l2_dev, "v4l2 device registered\n");
@@ -978,13 +838,14 @@
/* initialize queue */
q = &bcap_dev->buffer_queue;
q->type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
- q->io_modes = VB2_MMAP;
+ q->io_modes = VB2_MMAP | VB2_DMABUF;
q->drv_priv = bcap_dev;
q->buf_struct_size = sizeof(struct bcap_buffer);
q->ops = &bcap_video_qops;
q->mem_ops = &vb2_dma_contig_memops;
q->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC;
q->lock = &bcap_dev->mutex;
+ q->min_buffers_needed = 1;
ret = vb2_queue_init(q);
if (ret)
@@ -997,15 +858,16 @@
INIT_LIST_HEAD(&bcap_dev->dma_queue);
vfd->lock = &bcap_dev->mutex;
+ vfd->queue = q;
/* register video device */
- ret = video_register_device(bcap_dev->video_dev, VFL_TYPE_GRABBER, -1);
+ ret = video_register_device(&bcap_dev->video_dev, VFL_TYPE_GRABBER, -1);
if (ret) {
v4l2_err(&bcap_dev->v4l2_dev,
"Unable to register video device\n");
goto err_free_handler;
}
- video_set_drvdata(bcap_dev->video_dev, bcap_dev);
+ video_set_drvdata(&bcap_dev->video_dev, bcap_dev);
v4l2_info(&bcap_dev->v4l2_dev, "video device registered as: %s\n",
video_device_node_name(vfd));
@@ -1083,15 +945,11 @@
}
return 0;
err_unreg_vdev:
- video_unregister_device(bcap_dev->video_dev);
- bcap_dev->video_dev = NULL;
+ video_unregister_device(&bcap_dev->video_dev);
err_free_handler:
v4l2_ctrl_handler_free(&bcap_dev->ctrl_handler);
err_unreg_v4l2:
v4l2_device_unregister(&bcap_dev->v4l2_dev);
-err_release_vdev:
- if (bcap_dev->video_dev)
- video_device_release(bcap_dev->video_dev);
err_cleanup_ctx:
vb2_dma_contig_cleanup_ctx(bcap_dev->alloc_ctx);
err_free_ppi:
@@ -1108,7 +966,7 @@
struct bcap_device, v4l2_dev);
bcap_free_sensor_formats(bcap_dev);
- video_unregister_device(bcap_dev->video_dev);
+ video_unregister_device(&bcap_dev->video_dev);
v4l2_ctrl_handler_free(&bcap_dev->ctrl_handler);
v4l2_device_unregister(v4l2_dev);
vb2_dma_contig_cleanup_ctx(bcap_dev->alloc_ctx);
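The blackfin capture conversion above follows a pattern that recurs throughout this series: the driver's private open/release/mmap/poll handlers and reqbufs/qbuf/dqbuf/streamon ioctls are replaced by the videobuf2 core helpers, and struct video_device is embedded in the driver state so the video_device_alloc()/video_device_release() calls and their error paths go away. A minimal sketch of the pattern, using hypothetical my_dev/my_fops names rather than anything from the driver, and assuming the vb2 queue is initialised elsewhere:

#include <linux/module.h>
#include <linux/mutex.h>
#include <media/v4l2-dev.h>
#include <media/v4l2-device.h>
#include <media/v4l2-fh.h>
#include <media/v4l2-ioctl.h>
#include <media/videobuf2-core.h>

struct my_dev {
        struct v4l2_device v4l2_dev;
        struct video_device vdev;       /* embedded, not video_device_alloc()'d */
        struct vb2_queue queue;
        struct mutex lock;
};

static const struct v4l2_file_operations my_fops = {
        .owner          = THIS_MODULE,
        .open           = v4l2_fh_open,         /* core helpers replace the  */
        .release        = vb2_fop_release,      /* driver's private file ops */
        .unlocked_ioctl = video_ioctl2,
        .mmap           = vb2_fop_mmap,
        .poll           = vb2_fop_poll,
};

static int my_register_video_dev(struct my_dev *dev)
{
        struct video_device *vfd = &dev->vdev;

        vfd->release  = video_device_release_empty;     /* nothing to free */
        vfd->fops     = &my_fops;
        vfd->v4l2_dev = &dev->v4l2_dev;
        vfd->queue    = &dev->queue;    /* lets the vb2 fop/ioctl helpers find the queue */
        vfd->lock     = &dev->lock;
        video_set_drvdata(vfd, dev);
        return video_register_device(vfd, VFL_TYPE_GRABBER, -1);
}

With vfd->queue and vfd->lock set, the vb2_ioctl_reqbufs/querybuf/qbuf/dqbuf/streamon/streamoff helpers can be dropped straight into the v4l2_ioctl_ops table, as the bcap hunks above do.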
diff --git a/drivers/media/platform/coda/Makefile b/drivers/media/platform/coda/Makefile
index 25ce155..834e504 100644
--- a/drivers/media/platform/coda/Makefile
+++ b/drivers/media/platform/coda/Makefile
@@ -1,3 +1,5 @@
+ccflags-y += -I$(src)
+
coda-objs := coda-common.o coda-bit.o coda-h264.o coda-jpeg.o
obj-$(CONFIG_VIDEO_CODA) += coda.o
diff --git a/drivers/media/platform/coda/coda-bit.c b/drivers/media/platform/coda/coda-bit.c
index 856b542..d043007 100644
--- a/drivers/media/platform/coda/coda-bit.c
+++ b/drivers/media/platform/coda/coda-bit.c
@@ -15,6 +15,7 @@
#include <linux/clk.h>
#include <linux/irqreturn.h>
#include <linux/kernel.h>
+#include <linux/log2.h>
#include <linux/platform_device.h>
#include <linux/reset.h>
#include <linux/slab.h>
@@ -29,13 +30,18 @@
#include <media/videobuf2-vmalloc.h>
#include "coda.h"
+#define CREATE_TRACE_POINTS
+#include "trace.h"
+#define CODA_PARA_BUF_SIZE (10 * 1024)
#define CODA7_PS_BUF_SIZE 0x28000
#define CODA9_PS_SAVE_SIZE (512 * 1024)
#define CODA_DEFAULT_GAMMA 4096
#define CODA9_DEFAULT_GAMMA 24576 /* 0.75 * 32768 */
+static void coda_free_bitstream_buffer(struct coda_ctx *ctx);
+
static inline int coda_is_initialized(struct coda_dev *dev)
{
return coda_read(dev, CODA_REG_BIT_CUR_PC) != 0;
@@ -84,15 +90,21 @@
coda_write(dev, ctx->params.codec_mode, CODA_REG_BIT_RUN_COD_STD);
coda_write(dev, ctx->params.codec_mode_aux, CODA7_REG_BIT_RUN_AUX_STD);
+ trace_coda_bit_run(ctx, cmd);
+
coda_write(dev, cmd, CODA_REG_BIT_RUN_COMMAND);
}
static int coda_command_sync(struct coda_ctx *ctx, int cmd)
{
struct coda_dev *dev = ctx->dev;
+ int ret;
coda_command_async(ctx, cmd);
- return coda_wait_timeout(dev);
+ ret = coda_wait_timeout(dev);
+ trace_coda_bit_done(ctx);
+
+ return ret;
}
int coda_hw_reset(struct coda_ctx *ctx)
@@ -177,10 +189,6 @@
if (n < src_size)
return -ENOSPC;
- dma_sync_single_for_device(&ctx->dev->plat_dev->dev,
- ctx->bitstream.paddr, ctx->bitstream.size,
- DMA_TO_DEVICE);
-
src_buf->v4l2_buf.sequence = ctx->qsequence++;
return 0;
@@ -214,7 +222,7 @@
return true;
}
-void coda_fill_bitstream(struct coda_ctx *ctx)
+void coda_fill_bitstream(struct coda_ctx *ctx, bool streaming)
{
struct vb2_buffer *src_buf;
struct coda_buffer_meta *meta;
@@ -235,9 +243,12 @@
if (ctx->codec->src_fourcc == V4L2_PIX_FMT_JPEG &&
!coda_jpeg_check_buffer(ctx, src_buf)) {
v4l2_err(&ctx->dev->v4l2_dev,
- "dropping invalid JPEG frame\n");
+ "dropping invalid JPEG frame %d\n",
+ ctx->qsequence);
src_buf = v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
- v4l2_m2m_buf_done(src_buf, VB2_BUF_STATE_ERROR);
+ v4l2_m2m_buf_done(src_buf, streaming ?
+ VB2_BUF_STATE_ERROR :
+ VB2_BUF_STATE_QUEUED);
continue;
}
@@ -262,6 +273,8 @@
ctx->bitstream_fifo.kfifo.mask;
list_add_tail(&meta->list,
&ctx->buffer_meta_list);
+
+ trace_coda_bit_queue(ctx, src_buf, meta);
}
v4l2_m2m_buf_done(src_buf, VB2_BUF_STATE_DONE);
@@ -297,6 +310,14 @@
p[index ^ 1] = value;
}
+static inline int coda_alloc_context_buf(struct coda_ctx *ctx,
+ struct coda_aux_buf *buf, size_t size,
+ const char *name)
+{
+ return coda_alloc_aux_buf(ctx->dev, buf, size, name, ctx->debugfs_entry);
+}
+
+
static void coda_free_framebuffers(struct coda_ctx *ctx)
{
int i;
@@ -377,6 +398,7 @@
coda_free_aux_buf(dev, &ctx->psbuf);
if (dev->devtype->product != CODA_DX6)
coda_free_aux_buf(dev, &ctx->workbuf);
+ coda_free_aux_buf(dev, &ctx->parabuf);
}
static int coda_alloc_context_buffers(struct coda_ctx *ctx,
@@ -386,57 +408,42 @@
size_t size;
int ret;
+ if (!ctx->parabuf.vaddr) {
+ ret = coda_alloc_context_buf(ctx, &ctx->parabuf,
+ CODA_PARA_BUF_SIZE, "parabuf");
+ if (ret < 0)
+ return ret;
+ }
+
if (dev->devtype->product == CODA_DX6)
return 0;
- if (ctx->psbuf.vaddr) {
- v4l2_err(&dev->v4l2_dev, "psmembuf still allocated\n");
- return -EBUSY;
- }
- if (ctx->slicebuf.vaddr) {
- v4l2_err(&dev->v4l2_dev, "slicebuf still allocated\n");
- return -EBUSY;
- }
- if (ctx->workbuf.vaddr) {
- v4l2_err(&dev->v4l2_dev, "context buffer still allocated\n");
- ret = -EBUSY;
- return -ENOMEM;
- }
-
- if (q_data->fourcc == V4L2_PIX_FMT_H264) {
+ if (!ctx->slicebuf.vaddr && q_data->fourcc == V4L2_PIX_FMT_H264) {
/* worst case slice size */
size = (DIV_ROUND_UP(q_data->width, 16) *
DIV_ROUND_UP(q_data->height, 16)) * 3200 / 8 + 512;
ret = coda_alloc_context_buf(ctx, &ctx->slicebuf, size,
"slicebuf");
- if (ret < 0) {
- v4l2_err(&dev->v4l2_dev,
- "failed to allocate %d byte slice buffer",
- ctx->slicebuf.size);
- return ret;
- }
+ if (ret < 0)
+ goto err;
}
- if (dev->devtype->product == CODA_7541) {
+ if (!ctx->psbuf.vaddr && dev->devtype->product == CODA_7541) {
ret = coda_alloc_context_buf(ctx, &ctx->psbuf,
CODA7_PS_BUF_SIZE, "psbuf");
- if (ret < 0) {
- v4l2_err(&dev->v4l2_dev,
- "failed to allocate psmem buffer");
+ if (ret < 0)
goto err;
- }
}
- size = dev->devtype->workbuf_size;
- if (dev->devtype->product == CODA_960 &&
- q_data->fourcc == V4L2_PIX_FMT_H264)
- size += CODA9_PS_SAVE_SIZE;
- ret = coda_alloc_context_buf(ctx, &ctx->workbuf, size, "workbuf");
- if (ret < 0) {
- v4l2_err(&dev->v4l2_dev,
- "failed to allocate %d byte context buffer",
- ctx->workbuf.size);
- goto err;
+ if (!ctx->workbuf.vaddr) {
+ size = dev->devtype->workbuf_size;
+ if (dev->devtype->product == CODA_960 &&
+ q_data->fourcc == V4L2_PIX_FMT_H264)
+ size += CODA9_PS_SAVE_SIZE;
+ ret = coda_alloc_context_buf(ctx, &ctx->workbuf, size,
+ "workbuf");
+ if (ret < 0)
+ goto err;
}
return 0;
@@ -709,6 +716,27 @@
* Encoder context operations
*/
+static int coda_encoder_reqbufs(struct coda_ctx *ctx,
+ struct v4l2_requestbuffers *rb)
+{
+ struct coda_q_data *q_data_src;
+ int ret;
+
+ if (rb->type != V4L2_BUF_TYPE_VIDEO_OUTPUT)
+ return 0;
+
+ if (rb->count) {
+ q_data_src = get_q_data(ctx, V4L2_BUF_TYPE_VIDEO_OUTPUT);
+ ret = coda_alloc_context_buffers(ctx, q_data_src);
+ if (ret < 0)
+ return ret;
+ } else {
+ coda_free_context_buffers(ctx);
+ }
+
+ return 0;
+}
+
static int coda_start_encoding(struct coda_ctx *ctx)
{
struct coda_dev *dev = ctx->dev;
@@ -725,11 +753,6 @@
q_data_dst = get_q_data(ctx, V4L2_BUF_TYPE_VIDEO_CAPTURE);
dst_fourcc = q_data_dst->fourcc;
- /* Allocate per-instance buffers */
- ret = coda_alloc_context_buffers(ctx, q_data_src);
- if (ret < 0)
- return ret;
-
buf = v4l2_m2m_next_dst_buf(ctx->fh.m2m_ctx);
bitstream_buf = vb2_dma_contig_plane_dma_addr(buf, 0);
bitstream_size = q_data_dst->sizeimage;
@@ -1227,6 +1250,8 @@
coda_write(dev, ctx->iram_info.axi_sram_use,
CODA7_REG_BIT_AXI_SRAM_USE);
+ trace_coda_enc_pic_run(ctx, src_buf);
+
coda_command_async(ctx, CODA_COMMAND_PIC_RUN);
return 0;
@@ -1241,6 +1266,8 @@
src_buf = v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
dst_buf = v4l2_m2m_next_dst_buf(ctx->fh.m2m_ctx);
+ trace_coda_enc_pic_done(ctx, dst_buf);
+
/* Get results from the coda */
start_ptr = coda_read(dev, CODA_CMD_ENC_PIC_BB_START);
wr_ptr = coda_read(dev, CODA_REG_BIT_WR_PTR(ctx->reg_idx));
@@ -1311,7 +1338,6 @@
ctx->bitstream.vaddr, ctx->bitstream.size);
coda_free_framebuffers(ctx);
- coda_free_context_buffers(ctx);
mutex_unlock(&dev->coda_mutex);
mutex_unlock(&ctx->buffer_mutex);
@@ -1322,11 +1348,13 @@
mutex_lock(&ctx->buffer_mutex);
coda_free_framebuffers(ctx);
coda_free_context_buffers(ctx);
+ coda_free_bitstream_buffer(ctx);
mutex_unlock(&ctx->buffer_mutex);
}
const struct coda_context_ops coda_bit_encode_ops = {
.queue_init = coda_encoder_queue_init,
+ .reqbufs = coda_encoder_reqbufs,
.start_streaming = coda_start_encoding,
.prepare_run = coda_prepare_encode,
.finish_run = coda_finish_encode,
@@ -1338,6 +1366,65 @@
* Decoder context operations
*/
+static int coda_alloc_bitstream_buffer(struct coda_ctx *ctx,
+ struct coda_q_data *q_data)
+{
+ if (ctx->bitstream.vaddr)
+ return 0;
+
+ ctx->bitstream.size = roundup_pow_of_two(q_data->sizeimage * 2);
+ ctx->bitstream.vaddr = dma_alloc_writecombine(
+ &ctx->dev->plat_dev->dev, ctx->bitstream.size,
+ &ctx->bitstream.paddr, GFP_KERNEL);
+ if (!ctx->bitstream.vaddr) {
+ v4l2_err(&ctx->dev->v4l2_dev,
+ "failed to allocate bitstream ringbuffer");
+ return -ENOMEM;
+ }
+ kfifo_init(&ctx->bitstream_fifo,
+ ctx->bitstream.vaddr, ctx->bitstream.size);
+
+ return 0;
+}
+
+static void coda_free_bitstream_buffer(struct coda_ctx *ctx)
+{
+ if (ctx->bitstream.vaddr == NULL)
+ return;
+
+ dma_free_writecombine(&ctx->dev->plat_dev->dev, ctx->bitstream.size,
+ ctx->bitstream.vaddr, ctx->bitstream.paddr);
+ ctx->bitstream.vaddr = NULL;
+ kfifo_init(&ctx->bitstream_fifo, NULL, 0);
+}
+
+static int coda_decoder_reqbufs(struct coda_ctx *ctx,
+ struct v4l2_requestbuffers *rb)
+{
+ struct coda_q_data *q_data_src;
+ int ret;
+
+ if (rb->type != V4L2_BUF_TYPE_VIDEO_OUTPUT)
+ return 0;
+
+ if (rb->count) {
+ q_data_src = get_q_data(ctx, V4L2_BUF_TYPE_VIDEO_OUTPUT);
+ ret = coda_alloc_context_buffers(ctx, q_data_src);
+ if (ret < 0)
+ return ret;
+ ret = coda_alloc_bitstream_buffer(ctx, q_data_src);
+ if (ret < 0) {
+ coda_free_context_buffers(ctx);
+ return ret;
+ }
+ } else {
+ coda_free_bitstream_buffer(ctx);
+ coda_free_context_buffers(ctx);
+ }
+
+ return 0;
+}
+
static int __coda_start_decoding(struct coda_ctx *ctx)
{
struct coda_q_data *q_data_src, *q_data_dst;
@@ -1356,11 +1443,6 @@
src_fourcc = q_data_src->fourcc;
dst_fourcc = q_data_dst->fourcc;
- /* Allocate per-instance buffers */
- ret = coda_alloc_context_buffers(ctx, q_data_src);
- if (ret < 0)
- return ret;
-
coda_write(dev, ctx->parabuf.paddr, CODA_REG_BIT_PARA_BUF_ADDR);
/* Update coda bitstream read and write pointers from kfifo */
@@ -1579,7 +1661,7 @@
/* Try to copy source buffer contents into the bitstream ringbuffer */
mutex_lock(&ctx->bitstream_mutex);
- coda_fill_bitstream(ctx);
+ coda_fill_bitstream(ctx, true);
mutex_unlock(&ctx->bitstream_mutex);
if (coda_get_bitstream_payload(ctx) < 512 &&
@@ -1675,6 +1757,8 @@
/* Clear decode success flag */
coda_write(dev, 0, CODA_RET_DEC_PIC_SUCCESS);
+ trace_coda_dec_pic_run(ctx, meta);
+
coda_command_async(ctx, CODA_COMMAND_PIC_RUN);
return 0;
@@ -1704,7 +1788,7 @@
* by up to 512 bytes
*/
if (ctx->bit_stream_param & CODA_BIT_STREAM_END_FLAG) {
- if (coda_get_bitstream_payload(ctx) >= CODA_MAX_FRAME_SIZE - 512)
+ if (coda_get_bitstream_payload(ctx) >= ctx->bitstream.size - 512)
kfifo_init(&ctx->bitstream_fifo,
ctx->bitstream.vaddr, ctx->bitstream.size);
}
@@ -1835,6 +1919,8 @@
}
mutex_unlock(&ctx->bitstream_mutex);
+ trace_coda_dec_pic_done(ctx, &ctx->frame_metas[decoded_idx]);
+
val = coda_read(dev, CODA_RET_DEC_PIC_TYPE) & 0x7;
if (val == 0)
ctx->frame_types[decoded_idx] = V4L2_BUF_FLAG_KEYFRAME;
@@ -1874,6 +1960,8 @@
dst_buf->v4l2_buf.timecode = meta->timecode;
dst_buf->v4l2_buf.timestamp = meta->timestamp;
+ trace_coda_dec_rot_done(ctx, meta, dst_buf);
+
switch (q_data_dst->fourcc) {
case V4L2_PIX_FMT_YUV420:
case V4L2_PIX_FMT_YVU420:
@@ -1906,6 +1994,7 @@
const struct coda_context_ops coda_bit_decode_ops = {
.queue_init = coda_decoder_queue_init,
+ .reqbufs = coda_decoder_reqbufs,
.start_streaming = coda_start_decoding,
.prepare_run = coda_prepare_decode,
.finish_run = coda_finish_decode,
@@ -1931,6 +2020,8 @@
return IRQ_HANDLED;
}
+ trace_coda_bit_done(ctx);
+
if (ctx->aborting) {
v4l2_dbg(1, coda_debug, &ctx->dev->v4l2_dev,
"task has been aborted\n");
diff --git a/drivers/media/platform/coda/coda-common.c b/drivers/media/platform/coda/coda-common.c
index 6f32e6d..8e6fe02 100644
--- a/drivers/media/platform/coda/coda-common.c
+++ b/drivers/media/platform/coda/coda-common.c
@@ -46,7 +46,6 @@
#define CODADX6_MAX_INSTANCES 4
#define CODA_MAX_FORMATS 4
-#define CODA_PARA_BUF_SIZE (10 * 1024)
#define CODA_ISRAM_SIZE (2048 * 2)
#define MIN_W 176
@@ -696,6 +695,26 @@
return coda_s_fmt(ctx, &f_cap);
}
+static int coda_reqbufs(struct file *file, void *priv,
+ struct v4l2_requestbuffers *rb)
+{
+ struct coda_ctx *ctx = fh_to_ctx(priv);
+ int ret;
+
+ ret = v4l2_m2m_reqbufs(file, ctx->fh.m2m_ctx, rb);
+ if (ret)
+ return ret;
+
+ /*
+ * Allow allocating instance-specific per-context buffers, such as the
+ * bitstream ringbuffer, slice buffer, work buffer, etc., if needed.
+ */
+ if (rb->type == V4L2_BUF_TYPE_VIDEO_OUTPUT && ctx->ops->reqbufs)
+ return ctx->ops->reqbufs(ctx, rb);
+
+ return 0;
+}
+
static int coda_qbuf(struct file *file, void *priv,
struct v4l2_buffer *buf)
{
@@ -841,7 +860,7 @@
.vidioc_try_fmt_vid_out = coda_try_fmt_vid_out,
.vidioc_s_fmt_vid_out = coda_s_fmt_vid_out,
- .vidioc_reqbufs = v4l2_m2m_ioctl_reqbufs,
+ .vidioc_reqbufs = coda_reqbufs,
.vidioc_querybuf = v4l2_m2m_ioctl_querybuf,
.vidioc_qbuf = coda_qbuf,
@@ -1173,7 +1192,7 @@
mutex_lock(&ctx->bitstream_mutex);
v4l2_m2m_buf_queue(ctx->fh.m2m_ctx, vb);
if (vb2_is_streaming(vb->vb2_queue))
- coda_fill_bitstream(ctx);
+ coda_fill_bitstream(ctx, true);
mutex_unlock(&ctx->bitstream_mutex);
} else {
v4l2_m2m_buf_queue(ctx->fh.m2m_ctx, vb);
@@ -1215,8 +1234,9 @@
buf->vaddr, buf->paddr);
buf->vaddr = NULL;
buf->size = 0;
+ debugfs_remove(buf->dentry);
+ buf->dentry = NULL;
}
- debugfs_remove(buf->dentry);
}
static int coda_start_streaming(struct vb2_queue *q, unsigned int count)
@@ -1232,9 +1252,9 @@
if (q_data_src->fourcc == V4L2_PIX_FMT_H264 ||
(q_data_src->fourcc == V4L2_PIX_FMT_JPEG &&
ctx->dev->devtype->product == CODA_7541)) {
- /* copy the buffers that where queued before streamon */
+ /* copy the buffers that were queued before streamon */
mutex_lock(&ctx->bitstream_mutex);
- coda_fill_bitstream(ctx);
+ coda_fill_bitstream(ctx, false);
mutex_unlock(&ctx->bitstream_mutex);
if (coda_get_bitstream_payload(ctx) < 512) {
@@ -1262,12 +1282,23 @@
if (!(ctx->streamon_out & ctx->streamon_cap))
return 0;
+ q_data_dst = get_q_data(ctx, V4L2_BUF_TYPE_VIDEO_CAPTURE);
+ if ((q_data_src->width != q_data_dst->width &&
+ round_up(q_data_src->width, 16) != q_data_dst->width) ||
+ (q_data_src->height != q_data_dst->height &&
+ round_up(q_data_src->height, 16) != q_data_dst->height)) {
+ v4l2_err(v4l2_dev, "can't convert %dx%d to %dx%d\n",
+ q_data_src->width, q_data_src->height,
+ q_data_dst->width, q_data_dst->height);
+ ret = -EINVAL;
+ goto err;
+ }
+
/* Allow BIT decoder device_run with no new buffers queued */
if (ctx->inst_type == CODA_INST_DECODER && ctx->use_bit)
v4l2_m2m_set_src_buffered(ctx->fh.m2m_ctx, true);
ctx->gopcounter = ctx->params.gop_size - 1;
- q_data_dst = get_q_data(ctx, V4L2_BUF_TYPE_VIDEO_CAPTURE);
ctx->codec = coda_find_codec(ctx->dev, q_data_src->fourcc,
q_data_dst->fourcc);
@@ -1308,6 +1339,9 @@
struct coda_ctx *ctx = vb2_get_drv_priv(q);
struct coda_dev *dev = ctx->dev;
struct vb2_buffer *buf;
+ bool stop;
+
+ stop = ctx->streamon_out && ctx->streamon_cap;
if (q->type == V4L2_BUF_TYPE_VIDEO_OUTPUT) {
v4l2_dbg(1, coda_debug, &dev->v4l2_dev,
@@ -1332,7 +1366,7 @@
v4l2_m2m_buf_done(buf, VB2_BUF_STATE_ERROR);
}
- if (!ctx->streamon_out && !ctx->streamon_cap) {
+ if (stop) {
struct coda_buffer_meta *meta;
if (ctx->ops->seq_end_work) {
@@ -1457,7 +1491,7 @@
static void coda_encode_ctrls(struct coda_ctx *ctx)
{
v4l2_ctrl_new_std(&ctx->ctrls, &coda_ctrl_ops,
- V4L2_CID_MPEG_VIDEO_BITRATE, 0, 32767000, 1, 0);
+ V4L2_CID_MPEG_VIDEO_BITRATE, 0, 32767000, 1000, 0);
v4l2_ctrl_new_std(&ctx->ctrls, &coda_ctrl_ops,
V4L2_CID_MPEG_VIDEO_GOP_SIZE, 1, 60, 1, 16);
v4l2_ctrl_new_std(&ctx->ctrls, &coda_ctrl_ops,
@@ -1541,6 +1575,13 @@
vq->buf_struct_size = sizeof(struct v4l2_m2m_buffer);
vq->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_COPY;
vq->lock = &ctx->dev->dev_mutex;
+ /* One way to indicate end-of-stream for coda is to set
+ * bytesused to 0. However, by default videobuf2 treats
+ * bytesused == 0 as a special case and replaces it with the
+ * size of the buffer. Set the allow_zero_bytesused flag so
+ * that videobuf2 keeps the value of bytesused intact.
+ */
+ vq->allow_zero_bytesused = 1;
return vb2_queue_init(vq);
}
@@ -1621,6 +1662,11 @@
set_bit(idx, &dev->instance_mask);
name = kasprintf(GFP_KERNEL, "context%d", idx);
+ if (!name) {
+ ret = -ENOMEM;
+ goto err_coda_name_init;
+ }
+
ctx->debugfs_entry = debugfs_create_dir(name, dev->debugfs_root);
kfree(name);
@@ -1682,28 +1728,6 @@
ctx->fh.ctrl_handler = &ctx->ctrls;
- if (ctx->use_bit) {
- ret = coda_alloc_context_buf(ctx, &ctx->parabuf,
- CODA_PARA_BUF_SIZE, "parabuf");
- if (ret < 0) {
- v4l2_err(&dev->v4l2_dev, "failed to allocate parabuf");
- goto err_dma_alloc;
- }
- }
- if (ctx->use_bit && ctx->inst_type == CODA_INST_DECODER) {
- ctx->bitstream.size = CODA_MAX_FRAME_SIZE;
- ctx->bitstream.vaddr = dma_alloc_writecombine(
- &dev->plat_dev->dev, ctx->bitstream.size,
- &ctx->bitstream.paddr, GFP_KERNEL);
- if (!ctx->bitstream.vaddr) {
- v4l2_err(&dev->v4l2_dev,
- "failed to allocate bitstream ringbuffer");
- ret = -ENOMEM;
- goto err_dma_writecombine;
- }
- }
- kfifo_init(&ctx->bitstream_fifo,
- ctx->bitstream.vaddr, ctx->bitstream.size);
mutex_init(&ctx->bitstream_mutex);
mutex_init(&ctx->buffer_mutex);
INIT_LIST_HEAD(&ctx->buffer_meta_list);
@@ -1717,12 +1741,6 @@
return 0;
-err_dma_writecombine:
- if (ctx->dev->devtype->product == CODA_DX6)
- coda_free_aux_buf(dev, &ctx->workbuf);
- coda_free_aux_buf(dev, &ctx->parabuf);
-err_dma_alloc:
- v4l2_ctrl_handler_free(&ctx->ctrls);
err_ctrls_setup:
v4l2_m2m_ctx_release(ctx->fh.m2m_ctx);
err_ctx_init:
@@ -1735,6 +1753,7 @@
v4l2_fh_del(&ctx->fh);
v4l2_fh_exit(&ctx->fh);
clear_bit(ctx->idx, &dev->instance_mask);
+err_coda_name_init:
err_coda_max:
kfree(ctx);
return ret;
@@ -1764,14 +1783,9 @@
list_del(&ctx->list);
coda_unlock(ctx);
- if (ctx->bitstream.vaddr) {
- dma_free_writecombine(&dev->plat_dev->dev, ctx->bitstream.size,
- ctx->bitstream.vaddr, ctx->bitstream.paddr);
- }
if (ctx->dev->devtype->product == CODA_DX6)
coda_free_aux_buf(dev, &ctx->workbuf);
- coda_free_aux_buf(dev, &ctx->parabuf);
v4l2_ctrl_handler_free(&ctx->ctrls);
clk_disable_unprepare(dev->clk_ahb);
clk_disable_unprepare(dev->clk_per);
@@ -1901,8 +1915,7 @@
if (i >= dev->devtype->num_vdevs)
return -EINVAL;
- snprintf(vfd->name, sizeof(vfd->name), "%s",
- dev->devtype->vdevs[i]->name);
+ strlcpy(vfd->name, dev->devtype->vdevs[i]->name, sizeof(vfd->name));
vfd->fops = &coda_fops;
vfd->ioctl_ops = &coda_ioctl_ops;
vfd->release = video_device_release_empty,
@@ -1933,10 +1946,8 @@
/* allocate auxiliary per-device code buffer for the BIT processor */
ret = coda_alloc_aux_buf(dev, &dev->codebuf, fw->size, "codebuf",
dev->debugfs_root);
- if (ret < 0) {
- dev_err(&pdev->dev, "failed to allocate code buffer\n");
+ if (ret < 0)
goto put_pm;
- }
/* Copy the whole firmware image to the code buffer */
memcpy(dev->codebuf.vaddr, fw->data, fw->size);
@@ -2174,20 +2185,16 @@
ret = coda_alloc_aux_buf(dev, &dev->workbuf,
dev->devtype->workbuf_size, "workbuf",
dev->debugfs_root);
- if (ret < 0) {
- dev_err(&pdev->dev, "failed to allocate work buffer\n");
+ if (ret < 0)
goto err_v4l2_register;
- }
}
if (dev->devtype->tempbuf_size) {
ret = coda_alloc_aux_buf(dev, &dev->tempbuf,
dev->devtype->tempbuf_size, "tempbuf",
dev->debugfs_root);
- if (ret < 0) {
- dev_err(&pdev->dev, "failed to allocate temp buffer\n");
+ if (ret < 0)
goto err_v4l2_register;
- }
}
dev->iram.size = dev->devtype->iram_size;
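The coda-common changes tie the per-context buffers to VIDIOC_REQBUFS on the OUTPUT queue: coda_reqbufs() first runs the normal v4l2_m2m_reqbufs() and then calls the new reqbufs hook in coda_context_ops, which allocates the context buffers for a non-zero count and frees them again when the count is zero. A rough sketch of that shape, with my_* names standing in for the driver's helpers:

/* Sketch only: my_ctx, file_to_my_ctx(), my_alloc_context_buffers() and
 * my_free_context_buffers() are hypothetical stand-ins. */
static int my_reqbufs(struct file *file, void *priv,
                      struct v4l2_requestbuffers *rb)
{
        struct my_ctx *ctx = file_to_my_ctx(file);
        int ret;

        ret = v4l2_m2m_reqbufs(file, ctx->fh.m2m_ctx, rb);
        if (ret)
                return ret;

        /* only the OUTPUT (bitstream) queue owns per-context buffers */
        if (rb->type != V4L2_BUF_TYPE_VIDEO_OUTPUT)
                return 0;

        if (rb->count)
                return my_alloc_context_buffers(ctx);

        my_free_context_buffers(ctx);
        return 0;
}

Allocating at REQBUFS time rather than at streamon surfaces allocation failures to the application earlier, and freeing on a zero count allows the buffers to be re-sized if the format changes between streaming sessions.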
diff --git a/drivers/media/platform/coda/coda-jpeg.c b/drivers/media/platform/coda/coda-jpeg.c
index 8fa3e35..11e734b 100644
--- a/drivers/media/platform/coda/coda-jpeg.c
+++ b/drivers/media/platform/coda/coda-jpeg.c
@@ -13,6 +13,7 @@
#include <linux/swab.h>
#include "coda.h"
+#include "trace.h"
#define SOI_MARKER 0xffd8
#define EOI_MARKER 0xffd9
diff --git a/drivers/media/platform/coda/coda.h b/drivers/media/platform/coda/coda.h
index 0c35cd5..6a5c8f6 100644
--- a/drivers/media/platform/coda/coda.h
+++ b/drivers/media/platform/coda/coda.h
@@ -12,6 +12,9 @@
* (at your option) any later version.
*/
+#ifndef __CODA_H__
+#define __CODA_H__
+
#include <linux/debugfs.h>
#include <linux/irqreturn.h>
#include <linux/mutex.h>
@@ -26,7 +29,6 @@
#include "coda_regs.h"
#define CODA_MAX_FRAMEBUFFERS 8
-#define CODA_MAX_FRAME_SIZE 0x100000
#define FMO_SLICE_SAVE_BUF_SIZE (32)
enum {
@@ -178,6 +180,7 @@
struct coda_context_ops {
int (*queue_init)(void *priv, struct vb2_queue *src_vq,
struct vb2_queue *dst_vq);
+ int (*reqbufs)(struct coda_ctx *ctx, struct v4l2_requestbuffers *rb);
int (*start_streaming)(struct coda_ctx *ctx);
int (*prepare_run)(struct coda_ctx *ctx);
void (*finish_run)(struct coda_ctx *ctx);
@@ -249,13 +252,6 @@
size_t size, const char *name, struct dentry *parent);
void coda_free_aux_buf(struct coda_dev *dev, struct coda_aux_buf *buf);
-static inline int coda_alloc_context_buf(struct coda_ctx *ctx,
- struct coda_aux_buf *buf, size_t size,
- const char *name)
-{
- return coda_alloc_aux_buf(ctx->dev, buf, size, name, ctx->debugfs_entry);
-}
-
int coda_encoder_queue_init(void *priv, struct vb2_queue *src_vq,
struct vb2_queue *dst_vq);
int coda_decoder_queue_init(void *priv, struct vb2_queue *src_vq,
@@ -263,7 +259,7 @@
int coda_hw_reset(struct coda_ctx *ctx);
-void coda_fill_bitstream(struct coda_ctx *ctx);
+void coda_fill_bitstream(struct coda_ctx *ctx, bool streaming);
void coda_set_gdi_regs(struct coda_ctx *ctx);
@@ -284,7 +280,7 @@
int coda_check_firmware(struct coda_dev *dev);
-static inline int coda_get_bitstream_payload(struct coda_ctx *ctx)
+static inline unsigned int coda_get_bitstream_payload(struct coda_ctx *ctx)
{
return kfifo_len(&ctx->bitstream_fifo);
}
@@ -301,3 +297,5 @@
extern const struct coda_context_ops coda_bit_decode_ops;
irqreturn_t coda_irq_handler(int irq, void *data);
+
+#endif /* __CODA_H__ */
diff --git a/drivers/media/platform/coda/trace.h b/drivers/media/platform/coda/trace.h
new file mode 100644
index 0000000..d1d06cb
--- /dev/null
+++ b/drivers/media/platform/coda/trace.h
@@ -0,0 +1,203 @@
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM coda
+
+#if !defined(__CODA_TRACE_H__) || defined(TRACE_HEADER_MULTI_READ)
+#define __CODA_TRACE_H__
+
+#include <linux/tracepoint.h>
+#include <media/videobuf2-core.h>
+
+#include "coda.h"
+
+#define TRACE_SYSTEM_STRING __stringify(TRACE_SYSTEM)
+
+TRACE_EVENT(coda_bit_run,
+ TP_PROTO(struct coda_ctx *ctx, int cmd),
+
+ TP_ARGS(ctx, cmd),
+
+ TP_STRUCT__entry(
+ __field(int, minor)
+ __field(int, ctx)
+ __field(int, cmd)
+ ),
+
+ TP_fast_assign(
+ __entry->minor = ctx->fh.vdev->minor;
+ __entry->ctx = ctx->idx;
+ __entry->cmd = cmd;
+ ),
+
+ TP_printk("minor = %d, ctx = %d, cmd = %d",
+ __entry->minor, __entry->ctx, __entry->cmd)
+);
+
+TRACE_EVENT(coda_bit_done,
+ TP_PROTO(struct coda_ctx *ctx),
+
+ TP_ARGS(ctx),
+
+ TP_STRUCT__entry(
+ __field(int, minor)
+ __field(int, ctx)
+ ),
+
+ TP_fast_assign(
+ __entry->minor = ctx->fh.vdev->minor;
+ __entry->ctx = ctx->idx;
+ ),
+
+ TP_printk("minor = %d, ctx = %d", __entry->minor, __entry->ctx)
+);
+
+TRACE_EVENT(coda_enc_pic_run,
+ TP_PROTO(struct coda_ctx *ctx, struct vb2_buffer *buf),
+
+ TP_ARGS(ctx, buf),
+
+ TP_STRUCT__entry(
+ __field(int, minor)
+ __field(int, index)
+ __field(int, ctx)
+ ),
+
+ TP_fast_assign(
+ __entry->minor = ctx->fh.vdev->minor;
+ __entry->index = buf->v4l2_buf.index;
+ __entry->ctx = ctx->idx;
+ ),
+
+ TP_printk("minor = %d, index = %d, ctx = %d",
+ __entry->minor, __entry->index, __entry->ctx)
+);
+
+TRACE_EVENT(coda_enc_pic_done,
+ TP_PROTO(struct coda_ctx *ctx, struct vb2_buffer *buf),
+
+ TP_ARGS(ctx, buf),
+
+ TP_STRUCT__entry(
+ __field(int, minor)
+ __field(int, index)
+ __field(int, ctx)
+ ),
+
+ TP_fast_assign(
+ __entry->minor = ctx->fh.vdev->minor;
+ __entry->index = buf->v4l2_buf.index;
+ __entry->ctx = ctx->idx;
+ ),
+
+ TP_printk("minor = %d, index = %d, ctx = %d",
+ __entry->minor, __entry->index, __entry->ctx)
+);
+
+TRACE_EVENT(coda_bit_queue,
+ TP_PROTO(struct coda_ctx *ctx, struct vb2_buffer *buf,
+ struct coda_buffer_meta *meta),
+
+ TP_ARGS(ctx, buf, meta),
+
+ TP_STRUCT__entry(
+ __field(int, minor)
+ __field(int, index)
+ __field(int, start)
+ __field(int, end)
+ __field(int, ctx)
+ ),
+
+ TP_fast_assign(
+ __entry->minor = ctx->fh.vdev->minor;
+ __entry->index = buf->v4l2_buf.index;
+ __entry->start = meta->start;
+ __entry->end = meta->end;
+ __entry->ctx = ctx->idx;
+ ),
+
+ TP_printk("minor = %d, index = %d, start = 0x%x, end = 0x%x, ctx = %d",
+ __entry->minor, __entry->index, __entry->start, __entry->end,
+ __entry->ctx)
+);
+
+TRACE_EVENT(coda_dec_pic_run,
+ TP_PROTO(struct coda_ctx *ctx, struct coda_buffer_meta *meta),
+
+ TP_ARGS(ctx, meta),
+
+ TP_STRUCT__entry(
+ __field(int, minor)
+ __field(int, start)
+ __field(int, end)
+ __field(int, ctx)
+ ),
+
+ TP_fast_assign(
+ __entry->minor = ctx->fh.vdev->minor;
+ __entry->start = meta ? meta->start : 0;
+ __entry->end = meta ? meta->end : 0;
+ __entry->ctx = ctx->idx;
+ ),
+
+ TP_printk("minor = %d, start = 0x%x, end = 0x%x, ctx = %d",
+ __entry->minor, __entry->start, __entry->end, __entry->ctx)
+);
+
+TRACE_EVENT(coda_dec_pic_done,
+ TP_PROTO(struct coda_ctx *ctx, struct coda_buffer_meta *meta),
+
+ TP_ARGS(ctx, meta),
+
+ TP_STRUCT__entry(
+ __field(int, minor)
+ __field(int, start)
+ __field(int, end)
+ __field(int, ctx)
+ ),
+
+ TP_fast_assign(
+ __entry->minor = ctx->fh.vdev->minor;
+ __entry->start = meta->start;
+ __entry->end = meta->end;
+ __entry->ctx = ctx->idx;
+ ),
+
+ TP_printk("minor = %d, start = 0x%x, end = 0x%x, ctx = %d",
+ __entry->minor, __entry->start, __entry->end, __entry->ctx)
+);
+
+TRACE_EVENT(coda_dec_rot_done,
+ TP_PROTO(struct coda_ctx *ctx, struct coda_buffer_meta *meta,
+ struct vb2_buffer *buf),
+
+ TP_ARGS(ctx, meta, buf),
+
+ TP_STRUCT__entry(
+ __field(int, minor)
+ __field(int, start)
+ __field(int, end)
+ __field(int, index)
+ __field(int, ctx)
+ ),
+
+ TP_fast_assign(
+ __entry->minor = ctx->fh.vdev->minor;
+ __entry->start = meta->start;
+ __entry->end = meta->end;
+ __entry->index = buf->v4l2_buf.index;
+ __entry->ctx = ctx->idx;
+ ),
+
+ TP_printk("minor = %d, start = 0x%x, end = 0x%x, index = %d, ctx = %d",
+ __entry->minor, __entry->start, __entry->end, __entry->index,
+ __entry->ctx)
+);
+
+#endif /* __CODA_TRACE_H__ */
+
+#undef TRACE_INCLUDE_PATH
+#define TRACE_INCLUDE_PATH .
+#undef TRACE_INCLUDE_FILE
+#define TRACE_INCLUDE_FILE trace
+
+/* This part must be outside protection */
+#include <trace/define_trace.h>
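trace.h above is a self-including tracepoint header: the driver directory is added to the include path (the Makefile's ccflags-y += -I$(src)), TRACE_INCLUDE_PATH is pointed at ".", and exactly one source file (coda-bit.c here) defines CREATE_TRACE_POINTS before including the header so the TRACE_EVENT() bodies expand into real tracepoints; every other includer only sees the trace_coda_*() callers. A stripped-down sketch of the same pattern for a hypothetical "mydrv" driver:

/* mydrv_trace.h -- hypothetical driver-local tracepoint header */
#undef TRACE_SYSTEM
#define TRACE_SYSTEM mydrv

#if !defined(__MYDRV_TRACE_H__) || defined(TRACE_HEADER_MULTI_READ)
#define __MYDRV_TRACE_H__

#include <linux/tracepoint.h>

TRACE_EVENT(mydrv_run,
        TP_PROTO(int minor, int cmd),
        TP_ARGS(minor, cmd),
        TP_STRUCT__entry(
                __field(int, minor)
                __field(int, cmd)
        ),
        TP_fast_assign(
                __entry->minor = minor;
                __entry->cmd = cmd;
        ),
        TP_printk("minor = %d, cmd = %d", __entry->minor, __entry->cmd)
);

#endif /* __MYDRV_TRACE_H__ */

/* this part must stay outside the multi-read protection */
#undef TRACE_INCLUDE_PATH
#define TRACE_INCLUDE_PATH .
#undef TRACE_INCLUDE_FILE
#define TRACE_INCLUDE_FILE mydrv_trace
#include <trace/define_trace.h>

One .c file then does "#define CREATE_TRACE_POINTS" followed by '#include "mydrv_trace.h"'; callers elsewhere just include the header and call trace_mydrv_run(minor, cmd).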
diff --git a/drivers/media/platform/davinci/vpfe_capture.c b/drivers/media/platform/davinci/vpfe_capture.c
index b41bf7e..ccfcf3f 100644
--- a/drivers/media/platform/davinci/vpfe_capture.c
+++ b/drivers/media/platform/davinci/vpfe_capture.c
@@ -1871,16 +1871,9 @@
goto probe_free_ccdc_cfg_mem;
}
- /* Allocate memory for video device */
- vfd = video_device_alloc();
- if (NULL == vfd) {
- ret = -ENOMEM;
- v4l2_err(pdev->dev.driver, "Unable to alloc video device\n");
- goto probe_out_release_irq;
- }
-
+ vfd = &vpfe_dev->video_dev;
/* Initialize field of video device */
- vfd->release = video_device_release;
+ vfd->release = video_device_release_empty;
vfd->fops = &vpfe_fops;
vfd->ioctl_ops = &vpfe_ioctl_ops;
vfd->tvnorms = 0;
@@ -1891,14 +1884,12 @@
(VPFE_CAPTURE_VERSION_CODE >> 16) & 0xff,
(VPFE_CAPTURE_VERSION_CODE >> 8) & 0xff,
(VPFE_CAPTURE_VERSION_CODE) & 0xff);
- /* Set video_dev to the video device */
- vpfe_dev->video_dev = vfd;
ret = v4l2_device_register(&pdev->dev, &vpfe_dev->v4l2_dev);
if (ret) {
v4l2_err(pdev->dev.driver,
"Unable to register v4l2 device.\n");
- goto probe_out_video_release;
+ goto probe_out_release_irq;
}
v4l2_info(&vpfe_dev->v4l2_dev, "v4l2 device registered\n");
spin_lock_init(&vpfe_dev->irqlock);
@@ -1914,7 +1905,7 @@
v4l2_dbg(1, debug, &vpfe_dev->v4l2_dev,
"video_dev=%p\n", &vpfe_dev->video_dev);
vpfe_dev->fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
- ret = video_register_device(vpfe_dev->video_dev,
+ ret = video_register_device(&vpfe_dev->video_dev,
VFL_TYPE_GRABBER, -1);
if (ret) {
@@ -1927,7 +1918,7 @@
/* set the driver data in platform device */
platform_set_drvdata(pdev, vpfe_dev);
/* set driver private data */
- video_set_drvdata(vpfe_dev->video_dev, vpfe_dev);
+ video_set_drvdata(&vpfe_dev->video_dev, vpfe_dev);
i2c_adap = i2c_get_adapter(vpfe_cfg->i2c_adapter_id);
num_subdevs = vpfe_cfg->num_subdevs;
vpfe_dev->sd = kmalloc(sizeof(struct v4l2_subdev *) * num_subdevs,
@@ -1979,12 +1970,9 @@
probe_sd_out:
kfree(vpfe_dev->sd);
probe_out_video_unregister:
- video_unregister_device(vpfe_dev->video_dev);
+ video_unregister_device(&vpfe_dev->video_dev);
probe_out_v4l2_unregister:
v4l2_device_unregister(&vpfe_dev->v4l2_dev);
-probe_out_video_release:
- if (!video_is_registered(vpfe_dev->video_dev))
- video_device_release(vpfe_dev->video_dev);
probe_out_release_irq:
free_irq(vpfe_dev->ccdc_irq0, vpfe_dev);
probe_free_ccdc_cfg_mem:
@@ -2007,7 +1995,7 @@
free_irq(vpfe_dev->ccdc_irq0, vpfe_dev);
kfree(vpfe_dev->sd);
v4l2_device_unregister(&vpfe_dev->v4l2_dev);
- video_unregister_device(vpfe_dev->video_dev);
+ video_unregister_device(&vpfe_dev->video_dev);
kfree(vpfe_dev);
kfree(ccdc_cfg);
return 0;
diff --git a/drivers/media/platform/davinci/vpif_capture.c b/drivers/media/platform/davinci/vpif_capture.c
index fa0a515..a5f5481 100644
--- a/drivers/media/platform/davinci/vpif_capture.c
+++ b/drivers/media/platform/davinci/vpif_capture.c
@@ -712,7 +712,7 @@
ch->vpifparams.iface = chan_cfg->vpif_if;
/* update tvnorms from the sub device input info */
- ch->video_dev->tvnorms = chan_cfg->inputs[index].input.std;
+ ch->video_dev.tvnorms = chan_cfg->inputs[index].input.std;
return 0;
}
@@ -1337,7 +1337,7 @@
struct video_device *vdev;
struct channel_obj *ch;
struct vb2_queue *q;
- int i, j, err, k;
+ int j, err, k;
for (j = 0; j < VPIF_CAPTURE_MAX_DEVICES; j++) {
ch = vpif_obj.dev[j];
@@ -1384,16 +1384,16 @@
INIT_LIST_HEAD(&common->dma_queue);
/* Initialize the video_device structure */
- vdev = ch->video_dev;
+ vdev = &ch->video_dev;
strlcpy(vdev->name, VPIF_DRIVER_NAME, sizeof(vdev->name));
- vdev->release = video_device_release;
+ vdev->release = video_device_release_empty;
vdev->fops = &vpif_fops;
vdev->ioctl_ops = &vpif_ioctl_ops;
vdev->v4l2_dev = &vpif_obj.v4l2_dev;
vdev->vfl_dir = VFL_DIR_RX;
vdev->queue = q;
vdev->lock = &common->lock;
- video_set_drvdata(ch->video_dev, ch);
+ video_set_drvdata(&ch->video_dev, ch);
err = video_register_device(vdev,
VFL_TYPE_GRABBER, (j ? 1 : 0));
if (err)
@@ -1410,14 +1410,9 @@
common = &ch->common[k];
vb2_dma_contig_cleanup_ctx(common->alloc_ctx);
/* Unregister video device */
- video_unregister_device(ch->video_dev);
+ video_unregister_device(&ch->video_dev);
}
kfree(vpif_obj.sd);
- for (i = 0; i < VPIF_CAPTURE_MAX_DEVICES; i++) {
- ch = vpif_obj.dev[i];
- /* Note: does nothing if ch->video_dev == NULL */
- video_device_release(ch->video_dev);
- }
v4l2_device_unregister(&vpif_obj.v4l2_dev);
return err;
@@ -1438,13 +1433,11 @@
static __init int vpif_probe(struct platform_device *pdev)
{
struct vpif_subdev_info *subdevdata;
- int i, j, err;
- int res_idx = 0;
struct i2c_adapter *i2c_adap;
- struct channel_obj *ch;
- struct video_device *vfd;
struct resource *res;
int subdev_count;
+ int res_idx = 0;
+ int i, err;
vpif_dev = &pdev->dev;
@@ -1472,24 +1465,6 @@
res_idx++;
}
- for (i = 0; i < VPIF_CAPTURE_MAX_DEVICES; i++) {
- /* Get the pointer to the channel object */
- ch = vpif_obj.dev[i];
- /* Allocate memory for video device */
- vfd = video_device_alloc();
- if (NULL == vfd) {
- for (j = 0; j < i; j++) {
- ch = vpif_obj.dev[j];
- video_device_release(ch->video_dev);
- }
- err = -ENOMEM;
- goto vpif_unregister;
- }
-
- /* Set video_dev to the video device */
- ch->video_dev = vfd;
- }
-
vpif_obj.config = pdev->dev.platform_data;
subdev_count = vpif_obj.config->subdev_count;
@@ -1498,7 +1473,7 @@
if (vpif_obj.sd == NULL) {
vpif_err("unable to allocate memory for subdevice pointers\n");
err = -ENOMEM;
- goto vpif_sd_error;
+ goto vpif_unregister;
}
if (!vpif_obj.config->asd_sizes) {
@@ -1541,13 +1516,6 @@
probe_subdev_out:
/* free sub devices memory */
kfree(vpif_obj.sd);
-
-vpif_sd_error:
- for (i = 0; i < VPIF_CAPTURE_MAX_DEVICES; i++) {
- ch = vpif_obj.dev[i];
- /* Note: does nothing if ch->video_dev == NULL */
- video_device_release(ch->video_dev);
- }
vpif_unregister:
v4l2_device_unregister(&vpif_obj.v4l2_dev);
@@ -1576,7 +1544,7 @@
common = &ch->common[VPIF_VIDEO_INDEX];
vb2_dma_contig_cleanup_ctx(common->alloc_ctx);
/* Unregister video device */
- video_unregister_device(ch->video_dev);
+ video_unregister_device(&ch->video_dev);
kfree(vpif_obj.dev[i]);
}
return 0;
diff --git a/drivers/media/platform/davinci/vpif_capture.h b/drivers/media/platform/davinci/vpif_capture.h
index f65d28d..8b8a663 100644
--- a/drivers/media/platform/davinci/vpif_capture.h
+++ b/drivers/media/platform/davinci/vpif_capture.h
@@ -92,7 +92,7 @@
struct channel_obj {
/* Identifies video device for this channel */
- struct video_device *video_dev;
+ struct video_device video_dev;
/* Indicates id of the field which is being displayed */
u32 field_id;
/* flag to indicate whether decoder is initialized */
diff --git a/drivers/media/platform/davinci/vpif_display.c b/drivers/media/platform/davinci/vpif_display.c
index 839c24d..682e5d5 100644
--- a/drivers/media/platform/davinci/vpif_display.c
+++ b/drivers/media/platform/davinci/vpif_display.c
@@ -829,7 +829,7 @@
ch->sd = sd;
if (chan_cfg->outputs != NULL)
/* update tvnorms from the sub device output info */
- ch->video_dev->tvnorms = chan_cfg->outputs[index].output.std;
+ ch->video_dev.tvnorms = chan_cfg->outputs[index].output.std;
return 0;
}
@@ -1204,16 +1204,16 @@
ch, &ch->video_dev);
/* Initialize the video_device structure */
- vdev = ch->video_dev;
+ vdev = &ch->video_dev;
strlcpy(vdev->name, VPIF_DRIVER_NAME, sizeof(vdev->name));
- vdev->release = video_device_release;
+ vdev->release = video_device_release_empty;
vdev->fops = &vpif_fops;
vdev->ioctl_ops = &vpif_ioctl_ops;
vdev->v4l2_dev = &vpif_obj.v4l2_dev;
vdev->vfl_dir = VFL_DIR_TX;
vdev->queue = q;
vdev->lock = &common->lock;
- video_set_drvdata(ch->video_dev, ch);
+ video_set_drvdata(&ch->video_dev, ch);
err = video_register_device(vdev, VFL_TYPE_GRABBER,
(j ? 3 : 2));
if (err < 0)
@@ -1227,9 +1227,7 @@
ch = vpif_obj.dev[k];
common = &ch->common[k];
vb2_dma_contig_cleanup_ctx(common->alloc_ctx);
- video_unregister_device(ch->video_dev);
- video_device_release(ch->video_dev);
- ch->video_dev = NULL;
+ video_unregister_device(&ch->video_dev);
}
return err;
}
@@ -1246,13 +1244,11 @@
static __init int vpif_probe(struct platform_device *pdev)
{
struct vpif_subdev_info *subdevdata;
- int i, j = 0, err = 0;
- int res_idx = 0;
struct i2c_adapter *i2c_adap;
- struct channel_obj *ch;
- struct video_device *vfd;
struct resource *res;
int subdev_count;
+ int res_idx = 0;
+ int i, err;
vpif_dev = &pdev->dev;
err = initialize_vpif();
@@ -1281,25 +1277,6 @@
res_idx++;
}
- for (i = 0; i < VPIF_DISPLAY_MAX_DEVICES; i++) {
- /* Get the pointer to the channel object */
- ch = vpif_obj.dev[i];
-
- /* Allocate memory for video device */
- vfd = video_device_alloc();
- if (vfd == NULL) {
- for (j = 0; j < i; j++) {
- ch = vpif_obj.dev[j];
- video_device_release(ch->video_dev);
- }
- err = -ENOMEM;
- goto vpif_unregister;
- }
-
- /* Set video_dev to the video device */
- ch->video_dev = vfd;
- }
-
vpif_obj.config = pdev->dev.platform_data;
subdev_count = vpif_obj.config->subdev_count;
subdevdata = vpif_obj.config->subdevinfo;
@@ -1308,7 +1285,7 @@
if (vpif_obj.sd == NULL) {
vpif_err("unable to allocate memory for subdevice pointers\n");
err = -ENOMEM;
- goto vpif_sd_error;
+ goto vpif_unregister;
}
if (!vpif_obj.config->asd_sizes) {
@@ -1348,12 +1325,6 @@
probe_subdev_out:
kfree(vpif_obj.sd);
-vpif_sd_error:
- for (i = 0; i < VPIF_DISPLAY_MAX_DEVICES; i++) {
- ch = vpif_obj.dev[i];
- /* Note: does nothing if ch->video_dev == NULL */
- video_device_release(ch->video_dev);
- }
vpif_unregister:
v4l2_device_unregister(&vpif_obj.v4l2_dev);
@@ -1379,9 +1350,7 @@
common = &ch->common[VPIF_VIDEO_INDEX];
vb2_dma_contig_cleanup_ctx(common->alloc_ctx);
/* Unregister video device */
- video_unregister_device(ch->video_dev);
-
- ch->video_dev = NULL;
+ video_unregister_device(&ch->video_dev);
kfree(vpif_obj.dev[i]);
}
diff --git a/drivers/media/platform/davinci/vpif_display.h b/drivers/media/platform/davinci/vpif_display.h
index 7b21a76..849e0e3 100644
--- a/drivers/media/platform/davinci/vpif_display.h
+++ b/drivers/media/platform/davinci/vpif_display.h
@@ -100,7 +100,7 @@
struct channel_obj {
/* V4l2 specific parameters */
- struct video_device *video_dev; /* Identifies video device for
+ struct video_device video_dev; /* Identifies video device for
* this channel */
u32 field_id; /* Indicates id of the field
* which is being displayed */
diff --git a/drivers/media/platform/exynos4-is/fimc-capture.c b/drivers/media/platform/exynos4-is/fimc-capture.c
index 8a2fd8c..cfebf29 100644
--- a/drivers/media/platform/exynos4-is/fimc-capture.c
+++ b/drivers/media/platform/exynos4-is/fimc-capture.c
@@ -1482,7 +1482,7 @@
}
static int fimc_subdev_enum_mbus_code(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_mbus_code_enum *code)
{
struct fimc_fmt *fmt;
@@ -1495,7 +1495,7 @@
}
static int fimc_subdev_get_fmt(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct fimc_dev *fimc = v4l2_get_subdevdata(sd);
@@ -1504,7 +1504,7 @@
struct v4l2_mbus_framefmt *mf;
if (fmt->which == V4L2_SUBDEV_FORMAT_TRY) {
- mf = v4l2_subdev_get_try_format(fh, fmt->pad);
+ mf = v4l2_subdev_get_try_format(sd, cfg, fmt->pad);
fmt->format = *mf;
return 0;
}
@@ -1536,7 +1536,7 @@
}
static int fimc_subdev_set_fmt(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct fimc_dev *fimc = v4l2_get_subdevdata(sd);
@@ -1559,7 +1559,7 @@
mf->colorspace = V4L2_COLORSPACE_JPEG;
if (fmt->which == V4L2_SUBDEV_FORMAT_TRY) {
- mf = v4l2_subdev_get_try_format(fh, fmt->pad);
+ mf = v4l2_subdev_get_try_format(sd, cfg, fmt->pad);
*mf = fmt->format;
return 0;
}
@@ -1602,7 +1602,7 @@
}
static int fimc_subdev_get_selection(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_selection *sel)
{
struct fimc_dev *fimc = v4l2_get_subdevdata(sd);
@@ -1628,10 +1628,10 @@
return 0;
case V4L2_SEL_TGT_CROP:
- try_sel = v4l2_subdev_get_try_crop(fh, sel->pad);
+ try_sel = v4l2_subdev_get_try_crop(sd, cfg, sel->pad);
break;
case V4L2_SEL_TGT_COMPOSE:
- try_sel = v4l2_subdev_get_try_compose(fh, sel->pad);
+ try_sel = v4l2_subdev_get_try_compose(sd, cfg, sel->pad);
f = &ctx->d_frame;
break;
default:
@@ -1657,7 +1657,7 @@
}
static int fimc_subdev_set_selection(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_selection *sel)
{
struct fimc_dev *fimc = v4l2_get_subdevdata(sd);
@@ -1675,10 +1675,10 @@
switch (sel->target) {
case V4L2_SEL_TGT_CROP:
- try_sel = v4l2_subdev_get_try_crop(fh, sel->pad);
+ try_sel = v4l2_subdev_get_try_crop(sd, cfg, sel->pad);
break;
case V4L2_SEL_TGT_COMPOSE:
- try_sel = v4l2_subdev_get_try_compose(fh, sel->pad);
+ try_sel = v4l2_subdev_get_try_compose(sd, cfg, sel->pad);
f = &ctx->d_frame;
break;
default:
diff --git a/drivers/media/platform/exynos4-is/fimc-isp.c b/drivers/media/platform/exynos4-is/fimc-isp.c
index 60c7449..5d78f57 100644
--- a/drivers/media/platform/exynos4-is/fimc-isp.c
+++ b/drivers/media/platform/exynos4-is/fimc-isp.c
@@ -112,7 +112,7 @@
};
static int fimc_is_subdev_enum_mbus_code(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_mbus_code_enum *code)
{
const struct fimc_fmt *fmt;
@@ -125,14 +125,14 @@
}
static int fimc_isp_subdev_get_fmt(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct fimc_isp *isp = v4l2_get_subdevdata(sd);
struct v4l2_mbus_framefmt *mf = &fmt->format;
if (fmt->which == V4L2_SUBDEV_FORMAT_TRY) {
- *mf = *v4l2_subdev_get_try_format(fh, fmt->pad);
+ *mf = *v4l2_subdev_get_try_format(sd, cfg, fmt->pad);
return 0;
}
@@ -162,7 +162,7 @@
}
static void __isp_subdev_try_format(struct fimc_isp *isp,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct v4l2_mbus_framefmt *mf = &fmt->format;
@@ -178,7 +178,7 @@
mf->code = MEDIA_BUS_FMT_SGRBG10_1X10;
} else {
if (fmt->which == V4L2_SUBDEV_FORMAT_TRY)
- format = v4l2_subdev_get_try_format(fh,
+ format = v4l2_subdev_get_try_format(&isp->subdev, cfg,
FIMC_ISP_SD_PAD_SINK);
else
format = &isp->sink_fmt;
@@ -197,7 +197,7 @@
}
static int fimc_isp_subdev_set_fmt(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct fimc_isp *isp = v4l2_get_subdevdata(sd);
@@ -209,10 +209,10 @@
__func__, fmt->pad, mf->code, mf->width, mf->height);
mutex_lock(&isp->subdev_lock);
- __isp_subdev_try_format(isp, fh, fmt);
+ __isp_subdev_try_format(isp, cfg, fmt);
if (fmt->which == V4L2_SUBDEV_FORMAT_TRY) {
- mf = v4l2_subdev_get_try_format(fh, fmt->pad);
+ mf = v4l2_subdev_get_try_format(sd, cfg, fmt->pad);
*mf = fmt->format;
/* Propagate format to the source pads */
@@ -223,8 +223,8 @@
for (pad = FIMC_ISP_SD_PAD_SRC_FIFO;
pad < FIMC_ISP_SD_PADS_NUM; pad++) {
format.pad = pad;
- __isp_subdev_try_format(isp, fh, &format);
- mf = v4l2_subdev_get_try_format(fh, pad);
+ __isp_subdev_try_format(isp, cfg, &format);
+ mf = v4l2_subdev_get_try_format(sd, cfg, pad);
*mf = format.format;
}
}
@@ -236,7 +236,7 @@
isp->sink_fmt = *mf;
format.pad = FIMC_ISP_SD_PAD_SRC_DMA;
- __isp_subdev_try_format(isp, fh, &format);
+ __isp_subdev_try_format(isp, cfg, &format);
isp->src_fmt = format.format;
__is_set_frame_size(is, &isp->src_fmt);
@@ -369,7 +369,7 @@
struct v4l2_mbus_framefmt fmt;
struct v4l2_mbus_framefmt *format;
- format = v4l2_subdev_get_try_format(fh, FIMC_ISP_SD_PAD_SINK);
+ format = v4l2_subdev_get_try_format(sd, fh->pad, FIMC_ISP_SD_PAD_SINK);
fmt.colorspace = V4L2_COLORSPACE_SRGB;
fmt.code = fimc_isp_formats[0].mbus_code;
@@ -378,12 +378,12 @@
fmt.field = V4L2_FIELD_NONE;
*format = fmt;
- format = v4l2_subdev_get_try_format(fh, FIMC_ISP_SD_PAD_SRC_FIFO);
+ format = v4l2_subdev_get_try_format(sd, fh->pad, FIMC_ISP_SD_PAD_SRC_FIFO);
fmt.width = DEFAULT_PREVIEW_STILL_WIDTH;
fmt.height = DEFAULT_PREVIEW_STILL_HEIGHT;
*format = fmt;
- format = v4l2_subdev_get_try_format(fh, FIMC_ISP_SD_PAD_SRC_DMA);
+ format = v4l2_subdev_get_try_format(sd, fh->pad, FIMC_ISP_SD_PAD_SRC_DMA);
*format = fmt;
return 0;
diff --git a/drivers/media/platform/exynos4-is/fimc-lite.c b/drivers/media/platform/exynos4-is/fimc-lite.c
index 2510f18..ca6261a 100644
--- a/drivers/media/platform/exynos4-is/fimc-lite.c
+++ b/drivers/media/platform/exynos4-is/fimc-lite.c
@@ -568,7 +568,7 @@
*/
static const struct fimc_fmt *fimc_lite_subdev_try_fmt(struct fimc_lite *fimc,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *format)
{
struct flite_drvdata *dd = fimc->dd;
@@ -592,13 +592,13 @@
struct v4l2_rect *rect;
if (format->which == V4L2_SUBDEV_FORMAT_TRY) {
- sink_fmt = v4l2_subdev_get_try_format(fh,
+ sink_fmt = v4l2_subdev_get_try_format(&fimc->subdev, cfg,
FLITE_SD_PAD_SINK);
mf->code = sink_fmt->code;
mf->colorspace = sink_fmt->colorspace;
- rect = v4l2_subdev_get_try_crop(fh,
+ rect = v4l2_subdev_get_try_crop(&fimc->subdev, cfg,
FLITE_SD_PAD_SINK);
} else {
mf->code = sink->fmt->mbus_code;
@@ -1047,7 +1047,7 @@
};
static int fimc_lite_subdev_enum_mbus_code(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_mbus_code_enum *code)
{
const struct fimc_fmt *fmt;
@@ -1060,16 +1060,17 @@
}
static struct v4l2_mbus_framefmt *__fimc_lite_subdev_get_try_fmt(
- struct v4l2_subdev_fh *fh, unsigned int pad)
+ struct v4l2_subdev *sd,
+ struct v4l2_subdev_pad_config *cfg, unsigned int pad)
{
if (pad != FLITE_SD_PAD_SINK)
pad = FLITE_SD_PAD_SOURCE_DMA;
- return v4l2_subdev_get_try_format(fh, pad);
+ return v4l2_subdev_get_try_format(sd, cfg, pad);
}
static int fimc_lite_subdev_get_fmt(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct fimc_lite *fimc = v4l2_get_subdevdata(sd);
@@ -1077,7 +1078,7 @@
struct flite_frame *f = &fimc->inp_frame;
if (fmt->which == V4L2_SUBDEV_FORMAT_TRY) {
- mf = __fimc_lite_subdev_get_try_fmt(fh, fmt->pad);
+ mf = __fimc_lite_subdev_get_try_fmt(sd, cfg, fmt->pad);
fmt->format = *mf;
return 0;
}
@@ -1100,7 +1101,7 @@
}
static int fimc_lite_subdev_set_fmt(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct fimc_lite *fimc = v4l2_get_subdevdata(sd);
@@ -1122,17 +1123,17 @@
return -EBUSY;
}
- ffmt = fimc_lite_subdev_try_fmt(fimc, fh, fmt);
+ ffmt = fimc_lite_subdev_try_fmt(fimc, cfg, fmt);
if (fmt->which == V4L2_SUBDEV_FORMAT_TRY) {
struct v4l2_mbus_framefmt *src_fmt;
- mf = __fimc_lite_subdev_get_try_fmt(fh, fmt->pad);
+ mf = __fimc_lite_subdev_get_try_fmt(sd, cfg, fmt->pad);
*mf = fmt->format;
if (fmt->pad == FLITE_SD_PAD_SINK) {
unsigned int pad = FLITE_SD_PAD_SOURCE_DMA;
- src_fmt = __fimc_lite_subdev_get_try_fmt(fh, pad);
+ src_fmt = __fimc_lite_subdev_get_try_fmt(sd, cfg, pad);
*src_fmt = *mf;
}
@@ -1160,7 +1161,7 @@
}
static int fimc_lite_subdev_get_selection(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_selection *sel)
{
struct fimc_lite *fimc = v4l2_get_subdevdata(sd);
@@ -1172,7 +1173,7 @@
return -EINVAL;
if (sel->which == V4L2_SUBDEV_FORMAT_TRY) {
- sel->r = *v4l2_subdev_get_try_crop(fh, sel->pad);
+ sel->r = *v4l2_subdev_get_try_crop(sd, cfg, sel->pad);
return 0;
}
@@ -1195,7 +1196,7 @@
}
static int fimc_lite_subdev_set_selection(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_selection *sel)
{
struct fimc_lite *fimc = v4l2_get_subdevdata(sd);
@@ -1209,7 +1210,7 @@
fimc_lite_try_crop(fimc, &sel->r);
if (sel->which == V4L2_SUBDEV_FORMAT_TRY) {
- *v4l2_subdev_get_try_crop(fh, sel->pad) = sel->r;
+ *v4l2_subdev_get_try_crop(sd, cfg, sel->pad) = sel->r;
} else {
unsigned long flags;
spin_lock_irqsave(&fimc->slock, flags);
diff --git a/drivers/media/platform/exynos4-is/mipi-csis.c b/drivers/media/platform/exynos4-is/mipi-csis.c
index 2504aa8..d74e1be 100644
--- a/drivers/media/platform/exynos4-is/mipi-csis.c
+++ b/drivers/media/platform/exynos4-is/mipi-csis.c
@@ -540,7 +540,7 @@
}
static int s5pcsis_enum_mbus_code(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_mbus_code_enum *code)
{
if (code->index >= ARRAY_SIZE(s5pcsis_formats))
@@ -568,23 +568,23 @@
}
static struct v4l2_mbus_framefmt *__s5pcsis_get_format(
- struct csis_state *state, struct v4l2_subdev_fh *fh,
+ struct csis_state *state, struct v4l2_subdev_pad_config *cfg,
enum v4l2_subdev_format_whence which)
{
if (which == V4L2_SUBDEV_FORMAT_TRY)
- return fh ? v4l2_subdev_get_try_format(fh, 0) : NULL;
+ return cfg ? v4l2_subdev_get_try_format(&state->sd, cfg, 0) : NULL;
return &state->format;
}
-static int s5pcsis_set_fmt(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+static int s5pcsis_set_fmt(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct csis_state *state = sd_to_csis_state(sd);
struct csis_pix_format const *csis_fmt;
struct v4l2_mbus_framefmt *mf;
- mf = __s5pcsis_get_format(state, fh, fmt->which);
+ mf = __s5pcsis_get_format(state, cfg, fmt->which);
if (fmt->pad == CSIS_PAD_SOURCE) {
if (mf) {
@@ -605,13 +605,13 @@
return 0;
}
-static int s5pcsis_get_fmt(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+static int s5pcsis_get_fmt(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct csis_state *state = sd_to_csis_state(sd);
struct v4l2_mbus_framefmt *mf;
- mf = __s5pcsis_get_format(state, fh, fmt->which);
+ mf = __s5pcsis_get_format(state, cfg, fmt->which);
if (!mf)
return -EINVAL;
@@ -651,7 +651,7 @@
static int s5pcsis_open(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh)
{
- struct v4l2_mbus_framefmt *format = v4l2_subdev_get_try_format(fh, 0);
+ struct v4l2_mbus_framefmt *format = v4l2_subdev_get_try_format(sd, fh->pad, 0);
format->colorspace = V4L2_COLORSPACE_JPEG;
format->code = s5pcsis_formats[0].code;
diff --git a/drivers/media/platform/m2m-deinterlace.c b/drivers/media/platform/m2m-deinterlace.c
index b70c1ae..92d9549 100644
--- a/drivers/media/platform/m2m-deinterlace.c
+++ b/drivers/media/platform/m2m-deinterlace.c
@@ -127,7 +127,7 @@
struct deinterlace_dev {
struct v4l2_device v4l2_dev;
- struct video_device *vfd;
+ struct video_device vfd;
atomic_t busy;
struct mutex dev_mutex;
@@ -983,7 +983,7 @@
.fops = &deinterlace_fops,
.ioctl_ops = &deinterlace_ioctl_ops,
.minor = -1,
- .release = video_device_release,
+ .release = video_device_release_empty,
.vfl_dir = VFL_DIR_M2M,
};
@@ -1026,13 +1026,7 @@
atomic_set(&pcdev->busy, 0);
mutex_init(&pcdev->dev_mutex);
- vfd = video_device_alloc();
- if (!vfd) {
- v4l2_err(&pcdev->v4l2_dev, "Failed to allocate video device\n");
- ret = -ENOMEM;
- goto unreg_dev;
- }
-
+ vfd = &pcdev->vfd;
*vfd = deinterlace_videodev;
vfd->lock = &pcdev->dev_mutex;
vfd->v4l2_dev = &pcdev->v4l2_dev;
@@ -1040,12 +1034,11 @@
ret = video_register_device(vfd, VFL_TYPE_GRABBER, 0);
if (ret) {
v4l2_err(&pcdev->v4l2_dev, "Failed to register video device\n");
- goto rel_vdev;
+ goto unreg_dev;
}
video_set_drvdata(vfd, pcdev);
snprintf(vfd->name, sizeof(vfd->name), "%s", deinterlace_videodev.name);
- pcdev->vfd = vfd;
v4l2_info(&pcdev->v4l2_dev, MEM2MEM_TEST_MODULE_NAME
" Device registered as /dev/video%d\n", vfd->num);
@@ -1069,11 +1062,9 @@
v4l2_m2m_release(pcdev->m2m_dev);
err_m2m:
- video_unregister_device(pcdev->vfd);
+ video_unregister_device(&pcdev->vfd);
err_ctx:
vb2_dma_contig_cleanup_ctx(pcdev->alloc_ctx);
-rel_vdev:
- video_device_release(vfd);
unreg_dev:
v4l2_device_unregister(&pcdev->v4l2_dev);
rel_dma:
@@ -1088,7 +1079,7 @@
v4l2_info(&pcdev->v4l2_dev, "Removing " MEM2MEM_TEST_MODULE_NAME);
v4l2_m2m_release(pcdev->m2m_dev);
- video_unregister_device(pcdev->vfd);
+ video_unregister_device(&pcdev->vfd);
v4l2_device_unregister(&pcdev->v4l2_dev);
vb2_dma_contig_cleanup_ctx(pcdev->alloc_ctx);
dma_release_channel(pcdev->dma_chan);
diff --git a/drivers/media/platform/marvell-ccic/mcam-core.c b/drivers/media/platform/marvell-ccic/mcam-core.c
index dd5b141..9c64b5d 100644
--- a/drivers/media/platform/marvell-ccic/mcam-core.c
+++ b/drivers/media/platform/marvell-ccic/mcam-core.c
@@ -1568,24 +1568,64 @@
struct v4l2_frmsizeenum *sizes)
{
struct mcam_camera *cam = priv;
+ struct mcam_format_struct *f;
+ struct v4l2_subdev_frame_size_enum fse = {
+ .index = sizes->index,
+ .which = V4L2_SUBDEV_FORMAT_ACTIVE,
+ };
int ret;
+ f = mcam_find_format(sizes->pixel_format);
+ if (f->pixelformat != sizes->pixel_format)
+ return -EINVAL;
+ fse.code = f->mbus_code;
mutex_lock(&cam->s_mutex);
- ret = sensor_call(cam, video, enum_framesizes, sizes);
+ ret = sensor_call(cam, pad, enum_frame_size, NULL, &fse);
mutex_unlock(&cam->s_mutex);
- return ret;
+ if (ret)
+ return ret;
+ if (fse.min_width == fse.max_width &&
+ fse.min_height == fse.max_height) {
+ sizes->type = V4L2_FRMSIZE_TYPE_DISCRETE;
+ sizes->discrete.width = fse.min_width;
+ sizes->discrete.height = fse.min_height;
+ return 0;
+ }
+ sizes->type = V4L2_FRMSIZE_TYPE_CONTINUOUS;
+ sizes->stepwise.min_width = fse.min_width;
+ sizes->stepwise.max_width = fse.max_width;
+ sizes->stepwise.min_height = fse.min_height;
+ sizes->stepwise.max_height = fse.max_height;
+ sizes->stepwise.step_width = 1;
+ sizes->stepwise.step_height = 1;
+ return 0;
}
static int mcam_vidioc_enum_frameintervals(struct file *filp, void *priv,
struct v4l2_frmivalenum *interval)
{
struct mcam_camera *cam = priv;
+ struct mcam_format_struct *f;
+ struct v4l2_subdev_frame_interval_enum fie = {
+ .index = interval->index,
+ .width = interval->width,
+ .height = interval->height,
+ .which = V4L2_SUBDEV_FORMAT_ACTIVE,
+ };
int ret;
+ f = mcam_find_format(interval->pixel_format);
+ if (f->pixelformat != interval->pixel_format)
+ return -EINVAL;
+ fie.code = f->mbus_code;
mutex_lock(&cam->s_mutex);
- ret = sensor_call(cam, video, enum_frameintervals, interval);
+ ret = sensor_call(cam, pad, enum_frame_interval, NULL, &fie);
mutex_unlock(&cam->s_mutex);
- return ret;
+ if (ret)
+ return ret;
+ interval->type = V4L2_FRMIVAL_TYPE_DISCRETE;
+ interval->discrete = fie.interval;
+ return 0;
}
#ifdef CONFIG_VIDEO_ADV_DEBUG
diff --git a/drivers/media/platform/omap/omap_vout.c b/drivers/media/platform/omap/omap_vout.c
index ba2d8f9..17b189a 100644
--- a/drivers/media/platform/omap/omap_vout.c
+++ b/drivers/media/platform/omap/omap_vout.c
@@ -1978,7 +1978,7 @@
vout->cropped_offset = 0;
if (ovid->rotation_type == VOUT_ROT_VRFB) {
- int static_vrfb_allocation = (vid_num == 0) ?
+ bool static_vrfb_allocation = (vid_num == 0) ?
vid1_static_vrfb_alloc : vid2_static_vrfb_alloc;
ret = omap_vout_setup_vrfb_bufs(pdev, vid_num,
static_vrfb_allocation);
diff --git a/drivers/media/platform/omap/omap_vout_vrfb.c b/drivers/media/platform/omap/omap_vout_vrfb.c
index aa39306..c6e2527 100644
--- a/drivers/media/platform/omap/omap_vout_vrfb.c
+++ b/drivers/media/platform/omap/omap_vout_vrfb.c
@@ -21,6 +21,7 @@
#include "omap_voutdef.h"
#include "omap_voutlib.h"
+#include "omap_vout_vrfb.h"
#define OMAP_DMA_NO_DEVICE 0
diff --git a/drivers/media/platform/omap/omap_vout_vrfb.h b/drivers/media/platform/omap/omap_vout_vrfb.h
index 4c23148..c976975 100644
--- a/drivers/media/platform/omap/omap_vout_vrfb.h
+++ b/drivers/media/platform/omap/omap_vout_vrfb.h
@@ -15,7 +15,7 @@
#ifdef CONFIG_VIDEO_OMAP2_VOUT_VRFB
void omap_vout_free_vrfb_buffers(struct omap_vout_device *vout);
int omap_vout_setup_vrfb_bufs(struct platform_device *pdev, int vid_num,
- u32 static_vrfb_allocation);
+ bool static_vrfb_allocation);
void omap_vout_release_vrfb(struct omap_vout_device *vout);
int omap_vout_vrfb_buffer_setup(struct omap_vout_device *vout,
unsigned int *count, unsigned int startindex);
@@ -25,7 +25,7 @@
#else
static inline void omap_vout_free_vrfb_buffers(struct omap_vout_device *vout) { };
static inline int omap_vout_setup_vrfb_bufs(struct platform_device *pdev, int vid_num,
- u32 static_vrfb_allocation)
+ bool static_vrfb_allocation)
{ return 0; };
static inline void omap_vout_release_vrfb(struct omap_vout_device *vout) { };
static inline int omap_vout_vrfb_buffer_setup(struct omap_vout_device *vout,
diff --git a/drivers/media/platform/omap3isp/isp.c b/drivers/media/platform/omap3isp/isp.c
index deca809..18d0a87 100644
--- a/drivers/media/platform/omap3isp/isp.c
+++ b/drivers/media/platform/omap3isp/isp.c
@@ -51,6 +51,7 @@
#include <linux/dma-mapping.h>
#include <linux/i2c.h>
#include <linux/interrupt.h>
+#include <linux/mfd/syscon.h>
#include <linux/module.h>
#include <linux/omap-iommu.h>
#include <linux/platform_device.h>
@@ -63,6 +64,7 @@
#include <media/v4l2-common.h>
#include <media/v4l2-device.h>
+#include <media/v4l2-of.h>
#include "isp.h"
#include "ispreg.h"
@@ -85,35 +87,45 @@
static const struct isp_res_mapping isp_res_maps[] = {
{
.isp_rev = ISP_REVISION_2_0,
- .map = 1 << OMAP3_ISP_IOMEM_MAIN |
- 1 << OMAP3_ISP_IOMEM_CCP2 |
- 1 << OMAP3_ISP_IOMEM_CCDC |
- 1 << OMAP3_ISP_IOMEM_HIST |
- 1 << OMAP3_ISP_IOMEM_H3A |
- 1 << OMAP3_ISP_IOMEM_PREV |
- 1 << OMAP3_ISP_IOMEM_RESZ |
- 1 << OMAP3_ISP_IOMEM_SBL |
- 1 << OMAP3_ISP_IOMEM_CSI2A_REGS1 |
- 1 << OMAP3_ISP_IOMEM_CSIPHY2 |
- 1 << OMAP3_ISP_IOMEM_343X_CONTROL_CSIRXFE,
+ .offset = {
+ /* first MMIO area */
+ 0x0000, /* base, len 0x0070 */
+ 0x0400, /* ccp2, len 0x01f0 */
+ 0x0600, /* ccdc, len 0x00a8 */
+ 0x0a00, /* hist, len 0x0048 */
+ 0x0c00, /* h3a, len 0x0060 */
+ 0x0e00, /* preview, len 0x00a0 */
+ 0x1000, /* resizer, len 0x00ac */
+ 0x1200, /* sbl, len 0x00fc */
+ /* second MMIO area */
+ 0x0000, /* csi2a, len 0x0170 */
+ 0x0170, /* csiphy2, len 0x000c */
+ },
+ .syscon_offset = 0xdc,
+ .phy_type = ISP_PHY_TYPE_3430,
},
{
.isp_rev = ISP_REVISION_15_0,
- .map = 1 << OMAP3_ISP_IOMEM_MAIN |
- 1 << OMAP3_ISP_IOMEM_CCP2 |
- 1 << OMAP3_ISP_IOMEM_CCDC |
- 1 << OMAP3_ISP_IOMEM_HIST |
- 1 << OMAP3_ISP_IOMEM_H3A |
- 1 << OMAP3_ISP_IOMEM_PREV |
- 1 << OMAP3_ISP_IOMEM_RESZ |
- 1 << OMAP3_ISP_IOMEM_SBL |
- 1 << OMAP3_ISP_IOMEM_CSI2A_REGS1 |
- 1 << OMAP3_ISP_IOMEM_CSIPHY2 |
- 1 << OMAP3_ISP_IOMEM_CSI2A_REGS2 |
- 1 << OMAP3_ISP_IOMEM_CSI2C_REGS1 |
- 1 << OMAP3_ISP_IOMEM_CSIPHY1 |
- 1 << OMAP3_ISP_IOMEM_CSI2C_REGS2 |
- 1 << OMAP3_ISP_IOMEM_3630_CONTROL_CAMERA_PHY_CTRL,
+ .offset = {
+ /* first MMIO area */
+ 0x0000, /* base, len 0x0070 */
+ 0x0400, /* ccp2, len 0x01f0 */
+ 0x0600, /* ccdc, len 0x00a8 */
+ 0x0a00, /* hist, len 0x0048 */
+ 0x0c00, /* h3a, len 0x0060 */
+ 0x0e00, /* preview, len 0x00a0 */
+ 0x1000, /* resizer, len 0x00ac */
+ 0x1200, /* sbl, len 0x00fc */
+ /* second MMIO area */
+ 0x0000, /* csi2a, len 0x0170 (1st area) */
+ 0x0170, /* csiphy2, len 0x000c */
+ 0x01c0, /* csi2a, len 0x0040 (2nd area) */
+ 0x0400, /* csi2c, len 0x0170 (1st area) */
+ 0x0570, /* csiphy1, len 0x000c */
+ 0x05c0, /* csi2c, len 0x0040 (2nd area) */
+ },
+ .syscon_offset = 0x2f0,
+ .phy_type = ISP_PHY_TYPE_3630,
},
};
@@ -279,9 +291,20 @@
.num_parents = 1,
};
+static struct clk *isp_xclk_src_get(struct of_phandle_args *clkspec, void *data)
+{
+ unsigned int idx = clkspec->args[0];
+ struct isp_device *isp = data;
+
+ if (idx >= ARRAY_SIZE(isp->xclks))
+ return ERR_PTR(-ENOENT);
+
+ return isp->xclks[idx].clk;
+}
+
static int isp_xclk_init(struct isp_device *isp)
{
- struct isp_platform_data *pdata = isp->pdata;
+ struct device_node *np = isp->dev->of_node;
struct clk_init_data init;
unsigned int i;
@@ -311,37 +334,27 @@
xclk->clk = clk_register(NULL, &xclk->hw);
if (IS_ERR(xclk->clk))
return PTR_ERR(xclk->clk);
-
- if (pdata->xclks[i].con_id == NULL &&
- pdata->xclks[i].dev_id == NULL)
- continue;
-
- xclk->lookup = kzalloc(sizeof(*xclk->lookup), GFP_KERNEL);
- if (xclk->lookup == NULL)
- return -ENOMEM;
-
- xclk->lookup->con_id = pdata->xclks[i].con_id;
- xclk->lookup->dev_id = pdata->xclks[i].dev_id;
- xclk->lookup->clk = xclk->clk;
-
- clkdev_add(xclk->lookup);
}
+ if (np)
+ of_clk_add_provider(np, isp_xclk_src_get, isp);
+
return 0;
}
static void isp_xclk_cleanup(struct isp_device *isp)
{
+ struct device_node *np = isp->dev->of_node;
unsigned int i;
+ if (np)
+ of_clk_del_provider(np);
+
for (i = 0; i < ARRAY_SIZE(isp->xclks); ++i) {
struct isp_xclk *xclk = &isp->xclks[i];
if (!IS_ERR(xclk->clk))
clk_unregister(xclk->clk);
-
- if (xclk->lookup)
- clkdev_drop(xclk->lookup);
}
}
@@ -422,7 +435,7 @@
*/
void omap3isp_configure_bridge(struct isp_device *isp,
enum ccdc_input_entity input,
- const struct isp_parallel_platform_data *pdata,
+ const struct isp_parallel_cfg *parcfg,
unsigned int shift, unsigned int bridge)
{
u32 ispctrl_val;
@@ -437,8 +450,8 @@
switch (input) {
case CCDC_INPUT_PARALLEL:
ispctrl_val |= ISPCTRL_PAR_SER_CLK_SEL_PARALLEL;
- ispctrl_val |= pdata->clk_pol << ISPCTRL_PAR_CLK_POL_SHIFT;
- shift += pdata->data_lane_shift * 2;
+ ispctrl_val |= parcfg->clk_pol << ISPCTRL_PAR_CLK_POL_SHIFT;
+ shift += parcfg->data_lane_shift * 2;
break;
case CCDC_INPUT_CSI2A:
@@ -1784,58 +1797,121 @@
}
/*
- * isp_register_subdev_group - Register a group of subdevices
+ * isp_register_subdev - Register a sub-device
* @isp: OMAP3 ISP device
- * @board_info: I2C subdevs board information array
+ * @isp_subdev: platform data related to a sub-device
*
- * Register all I2C subdevices in the board_info array. The array must be
- * terminated by a NULL entry, and the first entry must be the sensor.
+ * Register an I2C sub-device which has not been registered by other
+ * means (such as the Device Tree).
*
- * Return a pointer to the sensor media entity if it has been successfully
+ * Return a pointer to the sub-device if it has been successfully
* registered, or NULL otherwise.
*/
static struct v4l2_subdev *
-isp_register_subdev_group(struct isp_device *isp,
- struct isp_subdev_i2c_board_info *board_info)
+isp_register_subdev(struct isp_device *isp,
+ struct isp_platform_subdev *isp_subdev)
{
- struct v4l2_subdev *sensor = NULL;
- unsigned int first;
+ struct i2c_adapter *adapter;
+ struct v4l2_subdev *sd;
- if (board_info->board_info == NULL)
+ if (isp_subdev->board_info == NULL)
return NULL;
- for (first = 1; board_info->board_info; ++board_info, first = 0) {
- struct v4l2_subdev *subdev;
- struct i2c_adapter *adapter;
-
- adapter = i2c_get_adapter(board_info->i2c_adapter_id);
- if (adapter == NULL) {
- dev_err(isp->dev, "%s: Unable to get I2C adapter %d for "
- "device %s\n", __func__,
- board_info->i2c_adapter_id,
- board_info->board_info->type);
- continue;
- }
-
- subdev = v4l2_i2c_new_subdev_board(&isp->v4l2_dev, adapter,
- board_info->board_info, NULL);
- if (subdev == NULL) {
- dev_err(isp->dev, "%s: Unable to register subdev %s\n",
- __func__, board_info->board_info->type);
- continue;
- }
-
- if (first)
- sensor = subdev;
+ adapter = i2c_get_adapter(isp_subdev->i2c_adapter_id);
+ if (adapter == NULL) {
+ dev_err(isp->dev,
+ "%s: Unable to get I2C adapter %d for device %s\n",
+ __func__, isp_subdev->i2c_adapter_id,
+ isp_subdev->board_info->type);
+ return NULL;
}
- return sensor;
+ sd = v4l2_i2c_new_subdev_board(&isp->v4l2_dev, adapter,
+ isp_subdev->board_info, NULL);
+ if (sd == NULL) {
+ dev_err(isp->dev, "%s: Unable to register subdev %s\n",
+ __func__, isp_subdev->board_info->type);
+ return NULL;
+ }
+
+ return sd;
+}
+
+static int isp_link_entity(
+ struct isp_device *isp, struct media_entity *entity,
+ enum isp_interface_type interface)
+{
+ struct media_entity *input;
+ unsigned int flags;
+ unsigned int pad;
+ unsigned int i;
+
+ /* Connect the sensor to the correct interface module.
+ * Parallel sensors are connected directly to the CCDC, while
+ * serial sensors are connected to the CSI2a, CCP2b or CSI2c
+ * receiver through CSIPHY1 or CSIPHY2.
+ */
+ switch (interface) {
+ case ISP_INTERFACE_PARALLEL:
+ input = &isp->isp_ccdc.subdev.entity;
+ pad = CCDC_PAD_SINK;
+ flags = 0;
+ break;
+
+ case ISP_INTERFACE_CSI2A_PHY2:
+ input = &isp->isp_csi2a.subdev.entity;
+ pad = CSI2_PAD_SINK;
+ flags = MEDIA_LNK_FL_IMMUTABLE | MEDIA_LNK_FL_ENABLED;
+ break;
+
+ case ISP_INTERFACE_CCP2B_PHY1:
+ case ISP_INTERFACE_CCP2B_PHY2:
+ input = &isp->isp_ccp2.subdev.entity;
+ pad = CCP2_PAD_SINK;
+ flags = 0;
+ break;
+
+ case ISP_INTERFACE_CSI2C_PHY1:
+ input = &isp->isp_csi2c.subdev.entity;
+ pad = CSI2_PAD_SINK;
+ flags = MEDIA_LNK_FL_IMMUTABLE | MEDIA_LNK_FL_ENABLED;
+ break;
+
+ default:
+ dev_err(isp->dev, "%s: invalid interface type %u\n", __func__,
+ interface);
+ return -EINVAL;
+ }
+
+ /*
+ * Not all interfaces are available on all revisions of the
+ * ISP. The sub-devices of those interfaces aren't initialised
+ * in such a case. Check this by ensuring the num_pads is
+ * non-zero.
+ */
+ if (!input->num_pads) {
+ dev_err(isp->dev, "%s: invalid input %u\n", entity->name,
+ interface);
+ return -EINVAL;
+ }
+
+ for (i = 0; i < entity->num_pads; i++) {
+ if (entity->pads[i].flags & MEDIA_PAD_FL_SOURCE)
+ break;
+ }
+ if (i == entity->num_pads) {
+ dev_err(isp->dev, "%s: no source pad in external entity\n",
+ __func__);
+ return -EINVAL;
+ }
+
+ return media_entity_create_link(entity, i, input, pad, flags);
}
static int isp_register_entities(struct isp_device *isp)
{
struct isp_platform_data *pdata = isp->pdata;
- struct isp_v4l2_subdevs_group *subdevs;
+ struct isp_platform_subdev *isp_subdev;
int ret;
isp->media_dev.dev = isp->dev;
@@ -1892,74 +1968,31 @@
if (ret < 0)
goto done;
- /* Register external entities */
- for (subdevs = pdata->subdevs; subdevs && subdevs->subdevs; ++subdevs) {
- struct v4l2_subdev *sensor;
- struct media_entity *input;
- unsigned int flags;
- unsigned int pad;
- unsigned int i;
+ /*
+ * Device Tree --- the external sub-devices will be registered
+ * later. The same goes for the sub-device node registration.
+ */
+ if (isp->dev->of_node)
+ return 0;
- sensor = isp_register_subdev_group(isp, subdevs->subdevs);
- if (sensor == NULL)
+ /* Register external entities */
+ for (isp_subdev = pdata ? pdata->subdevs : NULL;
+ isp_subdev && isp_subdev->board_info; isp_subdev++) {
+ struct v4l2_subdev *sd;
+
+ sd = isp_register_subdev(isp, isp_subdev);
+
+ /*
+ * No bus information --- this is either a flash or a
+ * lens subdev.
+ */
+ if (!sd || !isp_subdev->bus)
continue;
- sensor->host_priv = subdevs;
+ sd->host_priv = isp_subdev->bus;
- /* Connect the sensor to the correct interface module. Parallel
- * sensors are connected directly to the CCDC, while serial
- * sensors are connected to the CSI2a, CCP2b or CSI2c receiver
- * through CSIPHY1 or CSIPHY2.
- */
- switch (subdevs->interface) {
- case ISP_INTERFACE_PARALLEL:
- input = &isp->isp_ccdc.subdev.entity;
- pad = CCDC_PAD_SINK;
- flags = 0;
- break;
-
- case ISP_INTERFACE_CSI2A_PHY2:
- input = &isp->isp_csi2a.subdev.entity;
- pad = CSI2_PAD_SINK;
- flags = MEDIA_LNK_FL_IMMUTABLE
- | MEDIA_LNK_FL_ENABLED;
- break;
-
- case ISP_INTERFACE_CCP2B_PHY1:
- case ISP_INTERFACE_CCP2B_PHY2:
- input = &isp->isp_ccp2.subdev.entity;
- pad = CCP2_PAD_SINK;
- flags = 0;
- break;
-
- case ISP_INTERFACE_CSI2C_PHY1:
- input = &isp->isp_csi2c.subdev.entity;
- pad = CSI2_PAD_SINK;
- flags = MEDIA_LNK_FL_IMMUTABLE
- | MEDIA_LNK_FL_ENABLED;
- break;
-
- default:
- dev_err(isp->dev, "%s: invalid interface type %u\n",
- __func__, subdevs->interface);
- ret = -EINVAL;
- goto done;
- }
-
- for (i = 0; i < sensor->entity.num_pads; i++) {
- if (sensor->entity.pads[i].flags & MEDIA_PAD_FL_SOURCE)
- break;
- }
- if (i == sensor->entity.num_pads) {
- dev_err(isp->dev,
- "%s: no source pad in external entity\n",
- __func__);
- ret = -EINVAL;
- goto done;
- }
-
- ret = media_entity_create_link(&sensor->entity, i, input, pad,
- flags);
+ ret = isp_link_entity(isp, &sd->entity,
+ isp_subdev->bus->interface);
if (ret < 0)
goto done;
}
@@ -1967,8 +2000,10 @@
ret = v4l2_device_register_subdev_nodes(&isp->v4l2_dev);
done:
- if (ret < 0)
+ if (ret < 0) {
isp_unregister_entities(isp);
+ v4l2_async_notifier_unregister(&isp->notifier);
+ }
return ret;
}
@@ -2183,6 +2218,7 @@
{
struct isp_device *isp = platform_get_drvdata(pdev);
+ v4l2_async_notifier_unregister(&isp->notifier);
isp_unregister_entities(isp);
isp_cleanup_modules(isp);
isp_xclk_cleanup(isp);
@@ -2194,26 +2230,156 @@
return 0;
}
-static int isp_map_mem_resource(struct platform_device *pdev,
- struct isp_device *isp,
- enum isp_mem_resources res)
+enum isp_of_phy {
+ ISP_OF_PHY_PARALLEL = 0,
+ ISP_OF_PHY_CSIPHY1,
+ ISP_OF_PHY_CSIPHY2,
+};
+
+static int isp_of_parse_node(struct device *dev, struct device_node *node,
+ struct isp_async_subdev *isd)
{
- struct resource *mem;
+ struct isp_bus_cfg *buscfg = &isd->bus;
+ struct v4l2_of_endpoint vep;
+ unsigned int i;
- /* request the mem region for the camera registers */
+ v4l2_of_parse_endpoint(node, &vep);
- mem = platform_get_resource(pdev, IORESOURCE_MEM, res);
+ dev_dbg(dev, "parsing endpoint %s, interface %u\n", node->full_name,
+ vep.base.port);
- /* map the region */
- isp->mmio_base[res] = devm_ioremap_resource(isp->dev, mem);
- if (IS_ERR(isp->mmio_base[res]))
- return PTR_ERR(isp->mmio_base[res]);
+ switch (vep.base.port) {
+ case ISP_OF_PHY_PARALLEL:
+ buscfg->interface = ISP_INTERFACE_PARALLEL;
+ buscfg->bus.parallel.data_lane_shift =
+ vep.bus.parallel.data_shift;
+ buscfg->bus.parallel.clk_pol =
+ !!(vep.bus.parallel.flags
+ & V4L2_MBUS_PCLK_SAMPLE_FALLING);
+ buscfg->bus.parallel.hs_pol =
+ !!(vep.bus.parallel.flags & V4L2_MBUS_VSYNC_ACTIVE_LOW);
+ buscfg->bus.parallel.vs_pol =
+ !!(vep.bus.parallel.flags & V4L2_MBUS_HSYNC_ACTIVE_LOW);
+ buscfg->bus.parallel.fld_pol =
+ !!(vep.bus.parallel.flags & V4L2_MBUS_FIELD_EVEN_LOW);
+ buscfg->bus.parallel.data_pol =
+ !!(vep.bus.parallel.flags & V4L2_MBUS_DATA_ACTIVE_LOW);
+ break;
- isp->mmio_base_phys[res] = mem->start;
+ case ISP_OF_PHY_CSIPHY1:
+ case ISP_OF_PHY_CSIPHY2:
+ /* FIXME: always assume CSI-2 for now. */
+ switch (vep.base.port) {
+ case ISP_OF_PHY_CSIPHY1:
+ buscfg->interface = ISP_INTERFACE_CSI2C_PHY1;
+ break;
+ case ISP_OF_PHY_CSIPHY2:
+ buscfg->interface = ISP_INTERFACE_CSI2A_PHY2;
+ break;
+ }
+ buscfg->bus.csi2.lanecfg.clk.pos = vep.bus.mipi_csi2.clock_lane;
+ buscfg->bus.csi2.lanecfg.clk.pol =
+ vep.bus.mipi_csi2.lane_polarities[0];
+ dev_dbg(dev, "clock lane polarity %u, pos %u\n",
+ buscfg->bus.csi2.lanecfg.clk.pol,
+ buscfg->bus.csi2.lanecfg.clk.pos);
+
+ for (i = 0; i < ISP_CSIPHY2_NUM_DATA_LANES; i++) {
+ buscfg->bus.csi2.lanecfg.data[i].pos =
+ vep.bus.mipi_csi2.data_lanes[i];
+ buscfg->bus.csi2.lanecfg.data[i].pol =
+ vep.bus.mipi_csi2.lane_polarities[i + 1];
+ dev_dbg(dev, "data lane %u polarity %u, pos %u\n", i,
+ buscfg->bus.csi2.lanecfg.data[i].pol,
+ buscfg->bus.csi2.lanecfg.data[i].pos);
+ }
+
+ /*
+ * FIXME: now we assume the CRC is always there.
+ * Implement a way to obtain this information from the
+ * sensor. Frame descriptors, perhaps?
+ */
+ buscfg->bus.csi2.crc = 1;
+ break;
+
+ default:
+ dev_warn(dev, "%s: invalid interface %u\n", node->full_name,
+ vep.base.port);
+ break;
+ }
return 0;
}
+static int isp_of_parse_nodes(struct device *dev,
+ struct v4l2_async_notifier *notifier)
+{
+ struct device_node *node = NULL;
+
+ notifier->subdevs = devm_kcalloc(
+ dev, ISP_MAX_SUBDEVS, sizeof(*notifier->subdevs), GFP_KERNEL);
+ if (!notifier->subdevs)
+ return -ENOMEM;
+
+ while (notifier->num_subdevs < ISP_MAX_SUBDEVS &&
+ (node = of_graph_get_next_endpoint(dev->of_node, node))) {
+ struct isp_async_subdev *isd;
+
+ isd = devm_kzalloc(dev, sizeof(*isd), GFP_KERNEL);
+ if (!isd) {
+ of_node_put(node);
+ return -ENOMEM;
+ }
+
+ notifier->subdevs[notifier->num_subdevs] = &isd->asd;
+
+ if (isp_of_parse_node(dev, node, isd)) {
+ of_node_put(node);
+ return -EINVAL;
+ }
+
+ isd->asd.match.of.node = of_graph_get_remote_port_parent(node);
+ of_node_put(node);
+ if (!isd->asd.match.of.node) {
+ dev_warn(dev, "bad remote port parent\n");
+ return -EINVAL;
+ }
+
+ isd->asd.match_type = V4L2_ASYNC_MATCH_OF;
+ notifier->num_subdevs++;
+ }
+
+ return notifier->num_subdevs;
+}
+
+static int isp_subdev_notifier_bound(struct v4l2_async_notifier *async,
+ struct v4l2_subdev *subdev,
+ struct v4l2_async_subdev *asd)
+{
+ struct isp_device *isp = container_of(async, struct isp_device,
+ notifier);
+ struct isp_async_subdev *isd =
+ container_of(asd, struct isp_async_subdev, asd);
+ int ret;
+
+ ret = isp_link_entity(isp, &subdev->entity, isd->bus.interface);
+ if (ret < 0)
+ return ret;
+
+ isd->sd = subdev;
+ isd->sd->host_priv = &isd->bus;
+
+ return ret;
+}
+
+static int isp_subdev_notifier_complete(struct v4l2_async_notifier *async)
+{
+ struct isp_device *isp = container_of(async, struct isp_device,
+ notifier);
+
+ return v4l2_device_register_subdev_nodes(&isp->v4l2_dev);
+}
+
/*
* isp_probe - Probe ISP platform device
* @pdev: Pointer to ISP platform device
@@ -2227,47 +2393,86 @@
*/
static int isp_probe(struct platform_device *pdev)
{
- struct isp_platform_data *pdata = pdev->dev.platform_data;
struct isp_device *isp;
+ struct resource *mem;
int ret;
int i, m;
- if (pdata == NULL)
- return -EINVAL;
-
isp = devm_kzalloc(&pdev->dev, sizeof(*isp), GFP_KERNEL);
if (!isp) {
dev_err(&pdev->dev, "could not allocate memory\n");
return -ENOMEM;
}
+ if (IS_ENABLED(CONFIG_OF) && pdev->dev.of_node) {
+ ret = of_property_read_u32(pdev->dev.of_node, "ti,phy-type",
+ &isp->phy_type);
+ if (ret)
+ return ret;
+
+ isp->syscon = syscon_regmap_lookup_by_phandle(pdev->dev.of_node,
+ "syscon");
+ if (IS_ERR(isp->syscon))
+ return PTR_ERR(isp->syscon);
+
+ ret = of_property_read_u32_index(pdev->dev.of_node, "syscon", 1,
+ &isp->syscon_offset);
+ if (ret)
+ return ret;
+
+ ret = isp_of_parse_nodes(&pdev->dev, &isp->notifier);
+ if (ret < 0)
+ return ret;
+ ret = v4l2_async_notifier_register(&isp->v4l2_dev,
+ &isp->notifier);
+ if (ret)
+ return ret;
+ } else {
+ isp->pdata = pdev->dev.platform_data;
+ isp->syscon = syscon_regmap_lookup_by_pdevname("syscon.0");
+ if (IS_ERR(isp->syscon))
+ return PTR_ERR(isp->syscon);
+ dev_warn(&pdev->dev,
+ "Platform data support is deprecated! Please move to DT now!\n");
+ }
+
isp->autoidle = autoidle;
mutex_init(&isp->isp_mutex);
spin_lock_init(&isp->stat_lock);
isp->dev = &pdev->dev;
- isp->pdata = pdata;
isp->ref_count = 0;
ret = dma_coerce_mask_and_coherent(isp->dev, DMA_BIT_MASK(32));
if (ret)
- return ret;
+ goto error;
platform_set_drvdata(pdev, isp);
/* Regulators */
- isp->isp_csiphy1.vdd = devm_regulator_get(&pdev->dev, "VDD_CSIPHY1");
- isp->isp_csiphy2.vdd = devm_regulator_get(&pdev->dev, "VDD_CSIPHY2");
+ isp->isp_csiphy1.vdd = devm_regulator_get(&pdev->dev, "vdd-csiphy1");
+ isp->isp_csiphy2.vdd = devm_regulator_get(&pdev->dev, "vdd-csiphy2");
/* Clocks
*
* The ISP clock tree is revision-dependent. We thus need to enable ICLK
* manually to read the revision before calling __omap3isp_get().
+ *
+ * Start by mapping the ISP MMIO area, which is in two pieces.
+ * The ISP IOMMU is in between. Map both now, and fill in the
+ * ISP revision specific portions a little later in the
+ * function.
*/
- ret = isp_map_mem_resource(pdev, isp, OMAP3_ISP_IOMEM_MAIN);
- if (ret < 0)
- goto error;
+ for (i = 0; i < 2; i++) {
+ unsigned int map_idx = i ? OMAP3_ISP_IOMEM_CSI2A_REGS1 : 0;
+
+ mem = platform_get_resource(pdev, IORESOURCE_MEM, i);
+ isp->mmio_base[map_idx] =
+ devm_ioremap_resource(isp->dev, mem);
+ if (IS_ERR(isp->mmio_base[map_idx]))
+ return PTR_ERR(isp->mmio_base[map_idx]);
+ }
ret = isp_get_clocks(isp);
if (ret < 0)
@@ -2308,14 +2513,23 @@
goto error_isp;
}
- for (i = 1; i < OMAP3_ISP_IOMEM_LAST; i++) {
- if (isp_res_maps[m].map & 1 << i) {
- ret = isp_map_mem_resource(pdev, isp, i);
- if (ret)
- goto error_isp;
- }
+ if (!IS_ENABLED(CONFIG_OF) || !pdev->dev.of_node) {
+ isp->syscon_offset = isp_res_maps[m].syscon_offset;
+ isp->phy_type = isp_res_maps[m].phy_type;
}
+ for (i = 1; i < OMAP3_ISP_IOMEM_CSI2A_REGS1; i++)
+ isp->mmio_base[i] =
+ isp->mmio_base[0] + isp_res_maps[m].offset[i];
+
+ for (i = OMAP3_ISP_IOMEM_CSIPHY2; i < OMAP3_ISP_IOMEM_LAST; i++)
+ isp->mmio_base[i] =
+ isp->mmio_base[OMAP3_ISP_IOMEM_CSI2A_REGS1]
+ + isp_res_maps[m].offset[i];
+
+ isp->mmio_hist_base_phys =
+ mem->start + isp_res_maps[m].offset[OMAP3_ISP_IOMEM_HIST];
+
/* IOMMU */
ret = isp_attach_iommu(isp);
if (ret < 0) {
@@ -2343,6 +2557,9 @@
if (ret < 0)
goto error_iommu;
+ isp->notifier.bound = isp_subdev_notifier_bound;
+ isp->notifier.complete = isp_subdev_notifier_complete;
+
ret = isp_register_entities(isp);
if (ret < 0)
goto error_modules;
@@ -2378,6 +2595,11 @@
};
MODULE_DEVICE_TABLE(platform, omap3isp_id_table);
+static const struct of_device_id omap3isp_of_table[] = {
+ { .compatible = "ti,omap3-isp" },
+ { },
+};
+
static struct platform_driver omap3isp_driver = {
.probe = isp_probe,
.remove = isp_remove,
@@ -2385,6 +2607,7 @@
.driver = {
.name = "omap3isp",
.pm = &omap3isp_pm_ops,
+ .of_match_table = omap3isp_of_table,
},
};
diff --git a/drivers/media/platform/omap3isp/isp.h b/drivers/media/platform/omap3isp/isp.h
index cfdfc87..e579943 100644
--- a/drivers/media/platform/omap3isp/isp.h
+++ b/drivers/media/platform/omap3isp/isp.h
@@ -18,6 +18,7 @@
#define OMAP3_ISP_CORE_H
#include <media/omap3isp.h>
+#include <media/v4l2-async.h>
#include <media/v4l2-device.h>
#include <linux/clk-provider.h>
#include <linux/device.h>
@@ -59,8 +60,6 @@
OMAP3_ISP_IOMEM_CSI2C_REGS1,
OMAP3_ISP_IOMEM_CSIPHY1,
OMAP3_ISP_IOMEM_CSI2C_REGS2,
- OMAP3_ISP_IOMEM_343X_CONTROL_CSIRXFE,
- OMAP3_ISP_IOMEM_3630_CONTROL_CAMERA_PHY_CTRL,
OMAP3_ISP_IOMEM_LAST
};
@@ -93,14 +92,25 @@
/* ISP2P: OMAP 36xx */
#define ISP_REVISION_15_0 0xF0
+#define ISP_PHY_TYPE_3430 0
+#define ISP_PHY_TYPE_3630 1
+
+struct regmap;
+
/*
* struct isp_res_mapping - Map ISP io resources to ISP revision.
* @isp_rev: ISP_REVISION_x_x
- * @map: bitmap for enum isp_mem_resources
+ * @offset: register offsets of various ISP sub-blocks
+ * @syscon_offset: offset of the syscon register for 343x / 3630
+ * (CONTROL_CSIRXFE / CONTROL_CAMERA_PHY_CTRL, respectively)
+ * from the syscon base address
+ * @phy_type: ISP_PHY_TYPE_{3430,3630}
*/
struct isp_res_mapping {
u32 isp_rev;
- u32 map;
+ u32 offset[OMAP3_ISP_IOMEM_LAST];
+ u32 syscon_offset;
+ u32 phy_type;
};
/*
@@ -122,7 +132,6 @@
struct isp_xclk {
struct isp_device *isp;
struct clk_hw hw;
- struct clk_lookup *lookup;
struct clk *clk;
enum isp_xclk_id id;
@@ -138,8 +147,11 @@
* @irq_num: Currently used IRQ number.
* @mmio_base: Array with kernel base addresses for ioremapped ISP register
* regions.
- * @mmio_base_phys: Array with physical L4 bus addresses for ISP register
- * regions.
+ * @mmio_hist_base_phys: Physical L4 bus address for ISP hist block register
+ * region.
+ * @syscon: Regmap for the syscon register space
+ * @syscon_offset: Offset of the CSIPHY control register in syscon
+ * @phy_type: ISP_PHY_TYPE_{3430,3630}
* @mapping: IOMMU mapping
* @stat_lock: Spinlock for handling statistics
* @isp_mutex: Mutex for serializing requests to ISP.
@@ -166,6 +178,7 @@
*/
struct isp_device {
struct v4l2_device v4l2_dev;
+ struct v4l2_async_notifier notifier;
struct media_device media_dev;
struct device *dev;
u32 revision;
@@ -175,7 +188,10 @@
unsigned int irq_num;
void __iomem *mmio_base[OMAP3_ISP_IOMEM_LAST];
- unsigned long mmio_base_phys[OMAP3_ISP_IOMEM_LAST];
+ unsigned long mmio_hist_base_phys;
+ struct regmap *syscon;
+ u32 syscon_offset;
+ u32 phy_type;
struct dma_iommu_mapping *mapping;
@@ -209,6 +225,15 @@
unsigned int sbl_resources;
unsigned int subclk_resources;
+
+#define ISP_MAX_SUBDEVS 8
+ struct v4l2_subdev *subdevs[ISP_MAX_SUBDEVS];
+};
+
+struct isp_async_subdev {
+ struct v4l2_subdev *sd;
+ struct isp_bus_cfg bus;
+ struct v4l2_async_subdev asd;
};
#define v4l2_dev_to_isp_device(dev) \
@@ -229,7 +254,7 @@
void omap3isp_pipeline_cancel_stream(struct isp_pipeline *pipe);
void omap3isp_configure_bridge(struct isp_device *isp,
enum ccdc_input_entity input,
- const struct isp_parallel_platform_data *pdata,
+ const struct isp_parallel_cfg *buscfg,
unsigned int shift, unsigned int bridge);
struct isp_device *omap3isp_get(struct isp_device *isp);
diff --git a/drivers/media/platform/omap3isp/ispccdc.c b/drivers/media/platform/omap3isp/ispccdc.c
index 587489a..a6a61cc 100644
--- a/drivers/media/platform/omap3isp/ispccdc.c
+++ b/drivers/media/platform/omap3isp/ispccdc.c
@@ -32,7 +32,7 @@
#define CCDC_MIN_HEIGHT 32
static struct v4l2_mbus_framefmt *
-__ccdc_get_format(struct isp_ccdc_device *ccdc, struct v4l2_subdev_fh *fh,
+__ccdc_get_format(struct isp_ccdc_device *ccdc, struct v4l2_subdev_pad_config *cfg,
unsigned int pad, enum v4l2_subdev_format_whence which);
static const unsigned int ccdc_fmts[] = {
@@ -958,11 +958,11 @@
/*
* ccdc_config_sync_if - Set CCDC sync interface configuration
* @ccdc: Pointer to ISP CCDC device.
- * @pdata: Parallel interface platform data (may be NULL)
+ * @parcfg: Parallel interface platform data (may be NULL)
* @data_size: Data size
*/
static void ccdc_config_sync_if(struct isp_ccdc_device *ccdc,
- struct isp_parallel_platform_data *pdata,
+ struct isp_parallel_cfg *parcfg,
unsigned int data_size)
{
struct isp_device *isp = to_isp_device(ccdc);
@@ -1000,19 +1000,19 @@
break;
}
- if (pdata && pdata->data_pol)
+ if (parcfg && parcfg->data_pol)
syn_mode |= ISPCCDC_SYN_MODE_DATAPOL;
- if (pdata && pdata->hs_pol)
+ if (parcfg && parcfg->hs_pol)
syn_mode |= ISPCCDC_SYN_MODE_HDPOL;
/* The polarity of the vertical sync signal output by the BT.656
* decoder is not documented and seems to be active low.
*/
- if ((pdata && pdata->vs_pol) || ccdc->bt656)
+ if ((parcfg && parcfg->vs_pol) || ccdc->bt656)
syn_mode |= ISPCCDC_SYN_MODE_VDPOL;
- if (pdata && pdata->fld_pol)
+ if (parcfg && parcfg->fld_pol)
syn_mode |= ISPCCDC_SYN_MODE_FLDPOL;
isp_reg_writel(isp, syn_mode, OMAP3_ISP_IOMEM_CCDC, ISPCCDC_SYN_MODE);
@@ -1115,7 +1115,7 @@
static void ccdc_configure(struct isp_ccdc_device *ccdc)
{
struct isp_device *isp = to_isp_device(ccdc);
- struct isp_parallel_platform_data *pdata = NULL;
+ struct isp_parallel_cfg *parcfg = NULL;
struct v4l2_subdev *sensor;
struct v4l2_mbus_framefmt *format;
const struct v4l2_rect *crop;
@@ -1145,7 +1145,7 @@
if (!ret)
ccdc->bt656 = cfg.type == V4L2_MBUS_BT656;
- pdata = &((struct isp_v4l2_subdevs_group *)sensor->host_priv)
+ parcfg = &((struct isp_bus_cfg *)sensor->host_priv)
->bus.parallel;
}
@@ -1175,10 +1175,10 @@
else
bridge = ISPCTRL_PAR_BRIDGE_DISABLE;
- omap3isp_configure_bridge(isp, ccdc->input, pdata, shift, bridge);
+ omap3isp_configure_bridge(isp, ccdc->input, parcfg, shift, bridge);
/* Configure the sync interface. */
- ccdc_config_sync_if(ccdc, pdata, depth_out);
+ ccdc_config_sync_if(ccdc, parcfg, depth_out);
syn_mode = isp_reg_readl(isp, OMAP3_ISP_IOMEM_CCDC, ISPCCDC_SYN_MODE);
@@ -1935,21 +1935,21 @@
}
static struct v4l2_mbus_framefmt *
-__ccdc_get_format(struct isp_ccdc_device *ccdc, struct v4l2_subdev_fh *fh,
+__ccdc_get_format(struct isp_ccdc_device *ccdc, struct v4l2_subdev_pad_config *cfg,
unsigned int pad, enum v4l2_subdev_format_whence which)
{
if (which == V4L2_SUBDEV_FORMAT_TRY)
- return v4l2_subdev_get_try_format(fh, pad);
+ return v4l2_subdev_get_try_format(&ccdc->subdev, cfg, pad);
else
return &ccdc->formats[pad];
}
static struct v4l2_rect *
-__ccdc_get_crop(struct isp_ccdc_device *ccdc, struct v4l2_subdev_fh *fh,
+__ccdc_get_crop(struct isp_ccdc_device *ccdc, struct v4l2_subdev_pad_config *cfg,
enum v4l2_subdev_format_whence which)
{
if (which == V4L2_SUBDEV_FORMAT_TRY)
- return v4l2_subdev_get_try_crop(fh, CCDC_PAD_SOURCE_OF);
+ return v4l2_subdev_get_try_crop(&ccdc->subdev, cfg, CCDC_PAD_SOURCE_OF);
else
return &ccdc->crop;
}
@@ -1957,12 +1957,12 @@
/*
* ccdc_try_format - Try video format on a pad
* @ccdc: ISP CCDC device
- * @fh : V4L2 subdev file handle
+ * @cfg : V4L2 subdev pad configuration
* @pad: Pad number
* @fmt: Format
*/
static void
-ccdc_try_format(struct isp_ccdc_device *ccdc, struct v4l2_subdev_fh *fh,
+ccdc_try_format(struct isp_ccdc_device *ccdc, struct v4l2_subdev_pad_config *cfg,
unsigned int pad, struct v4l2_mbus_framefmt *fmt,
enum v4l2_subdev_format_whence which)
{
@@ -1998,7 +1998,7 @@
case CCDC_PAD_SOURCE_OF:
pixelcode = fmt->code;
field = fmt->field;
- *fmt = *__ccdc_get_format(ccdc, fh, CCDC_PAD_SINK, which);
+ *fmt = *__ccdc_get_format(ccdc, cfg, CCDC_PAD_SINK, which);
/* In SYNC mode the bridge converts YUV formats from 2X8 to
* 1X16. In BT.656 no such conversion occurs. As we don't know
@@ -2023,7 +2023,7 @@
}
/* Hardcode the output size to the crop rectangle size. */
- crop = __ccdc_get_crop(ccdc, fh, which);
+ crop = __ccdc_get_crop(ccdc, cfg, which);
fmt->width = crop->width;
fmt->height = crop->height;
@@ -2040,7 +2040,7 @@
break;
case CCDC_PAD_SOURCE_VP:
- *fmt = *__ccdc_get_format(ccdc, fh, CCDC_PAD_SINK, which);
+ *fmt = *__ccdc_get_format(ccdc, cfg, CCDC_PAD_SINK, which);
/* The video port interface truncates the data to 10 bits. */
info = omap3isp_video_format_info(fmt->code);
@@ -2112,12 +2112,12 @@
/*
* ccdc_enum_mbus_code - Handle pixel format enumeration
* @sd : pointer to v4l2 subdev structure
- * @fh : V4L2 subdev file handle
+ * @cfg : V4L2 subdev pad configuration
* @code : pointer to v4l2_subdev_mbus_code_enum structure
* return -EINVAL or zero on success
*/
static int ccdc_enum_mbus_code(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_mbus_code_enum *code)
{
struct isp_ccdc_device *ccdc = v4l2_get_subdevdata(sd);
@@ -2132,8 +2132,8 @@
break;
case CCDC_PAD_SOURCE_OF:
- format = __ccdc_get_format(ccdc, fh, code->pad,
- V4L2_SUBDEV_FORMAT_TRY);
+ format = __ccdc_get_format(ccdc, cfg, code->pad,
+ code->which);
if (format->code == MEDIA_BUS_FMT_YUYV8_2X8 ||
format->code == MEDIA_BUS_FMT_UYVY8_2X8) {
@@ -2163,8 +2163,8 @@
if (code->index != 0)
return -EINVAL;
- format = __ccdc_get_format(ccdc, fh, code->pad,
- V4L2_SUBDEV_FORMAT_TRY);
+ format = __ccdc_get_format(ccdc, cfg, code->pad,
+ code->which);
/* A pixel code equal to 0 means that the video port doesn't
* support the input format. Don't enumerate any pixel code.
@@ -2183,7 +2183,7 @@
}
static int ccdc_enum_frame_size(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_frame_size_enum *fse)
{
struct isp_ccdc_device *ccdc = v4l2_get_subdevdata(sd);
@@ -2195,7 +2195,7 @@
format.code = fse->code;
format.width = 1;
format.height = 1;
- ccdc_try_format(ccdc, fh, fse->pad, &format, V4L2_SUBDEV_FORMAT_TRY);
+ ccdc_try_format(ccdc, cfg, fse->pad, &format, fse->which);
fse->min_width = format.width;
fse->min_height = format.height;
@@ -2205,7 +2205,7 @@
format.code = fse->code;
format.width = -1;
format.height = -1;
- ccdc_try_format(ccdc, fh, fse->pad, &format, V4L2_SUBDEV_FORMAT_TRY);
+ ccdc_try_format(ccdc, cfg, fse->pad, &format, fse->which);
fse->max_width = format.width;
fse->max_height = format.height;
@@ -2215,7 +2215,7 @@
/*
* ccdc_get_selection - Retrieve a selection rectangle on a pad
* @sd: ISP CCDC V4L2 subdevice
- * @fh: V4L2 subdev file handle
+ * @cfg: V4L2 subdev pad configuration
* @sel: Selection rectangle
*
* The only supported rectangles are the crop rectangles on the output formatter
@@ -2223,7 +2223,7 @@
*
* Return 0 on success or a negative error code otherwise.
*/
-static int ccdc_get_selection(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+static int ccdc_get_selection(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_selection *sel)
{
struct isp_ccdc_device *ccdc = v4l2_get_subdevdata(sd);
@@ -2239,12 +2239,12 @@
sel->r.width = INT_MAX;
sel->r.height = INT_MAX;
- format = __ccdc_get_format(ccdc, fh, CCDC_PAD_SINK, sel->which);
+ format = __ccdc_get_format(ccdc, cfg, CCDC_PAD_SINK, sel->which);
ccdc_try_crop(ccdc, format, &sel->r);
break;
case V4L2_SEL_TGT_CROP:
- sel->r = *__ccdc_get_crop(ccdc, fh, sel->which);
+ sel->r = *__ccdc_get_crop(ccdc, cfg, sel->which);
break;
default:
@@ -2257,7 +2257,7 @@
/*
* ccdc_set_selection - Set a selection rectangle on a pad
* @sd: ISP CCDC V4L2 subdevice
- * @fh: V4L2 subdev file handle
+ * @cfg: V4L2 subdev pad configuration
* @sel: Selection rectangle
*
* The only supported rectangle is the actual crop rectangle on the output
@@ -2265,7 +2265,7 @@
*
* Return 0 on success or a negative error code otherwise.
*/
-static int ccdc_set_selection(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+static int ccdc_set_selection(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_selection *sel)
{
struct isp_ccdc_device *ccdc = v4l2_get_subdevdata(sd);
@@ -2284,17 +2284,17 @@
* rectangle.
*/
if (sel->flags & V4L2_SEL_FLAG_KEEP_CONFIG) {
- sel->r = *__ccdc_get_crop(ccdc, fh, sel->which);
+ sel->r = *__ccdc_get_crop(ccdc, cfg, sel->which);
return 0;
}
- format = __ccdc_get_format(ccdc, fh, CCDC_PAD_SINK, sel->which);
+ format = __ccdc_get_format(ccdc, cfg, CCDC_PAD_SINK, sel->which);
ccdc_try_crop(ccdc, format, &sel->r);
- *__ccdc_get_crop(ccdc, fh, sel->which) = sel->r;
+ *__ccdc_get_crop(ccdc, cfg, sel->which) = sel->r;
/* Update the source format. */
- format = __ccdc_get_format(ccdc, fh, CCDC_PAD_SOURCE_OF, sel->which);
- ccdc_try_format(ccdc, fh, CCDC_PAD_SOURCE_OF, format, sel->which);
+ format = __ccdc_get_format(ccdc, cfg, CCDC_PAD_SOURCE_OF, sel->which);
+ ccdc_try_format(ccdc, cfg, CCDC_PAD_SOURCE_OF, format, sel->which);
return 0;
}
@@ -2302,19 +2302,19 @@
/*
* ccdc_get_format - Retrieve the video format on a pad
* @sd : ISP CCDC V4L2 subdevice
- * @fh : V4L2 subdev file handle
+ * @cfg: V4L2 subdev pad configuration
* @fmt: Format
*
* Return 0 on success or -EINVAL if the pad is invalid or doesn't correspond
* to the format type.
*/
-static int ccdc_get_format(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+static int ccdc_get_format(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct isp_ccdc_device *ccdc = v4l2_get_subdevdata(sd);
struct v4l2_mbus_framefmt *format;
- format = __ccdc_get_format(ccdc, fh, fmt->pad, fmt->which);
+ format = __ccdc_get_format(ccdc, cfg, fmt->pad, fmt->which);
if (format == NULL)
return -EINVAL;
@@ -2325,30 +2325,30 @@
/*
* ccdc_set_format - Set the video format on a pad
* @sd : ISP CCDC V4L2 subdevice
- * @fh : V4L2 subdev file handle
+ * @cfg: V4L2 subdev pad configuration
* @fmt: Format
*
* Return 0 on success or -EINVAL if the pad is invalid or doesn't correspond
* to the format type.
*/
-static int ccdc_set_format(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+static int ccdc_set_format(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct isp_ccdc_device *ccdc = v4l2_get_subdevdata(sd);
struct v4l2_mbus_framefmt *format;
struct v4l2_rect *crop;
- format = __ccdc_get_format(ccdc, fh, fmt->pad, fmt->which);
+ format = __ccdc_get_format(ccdc, cfg, fmt->pad, fmt->which);
if (format == NULL)
return -EINVAL;
- ccdc_try_format(ccdc, fh, fmt->pad, &fmt->format, fmt->which);
+ ccdc_try_format(ccdc, cfg, fmt->pad, &fmt->format, fmt->which);
*format = fmt->format;
/* Propagate the format from sink to source */
if (fmt->pad == CCDC_PAD_SINK) {
/* Reset the crop rectangle. */
- crop = __ccdc_get_crop(ccdc, fh, fmt->which);
+ crop = __ccdc_get_crop(ccdc, cfg, fmt->which);
crop->left = 0;
crop->top = 0;
crop->width = fmt->format.width;
@@ -2357,16 +2357,16 @@
ccdc_try_crop(ccdc, &fmt->format, crop);
/* Update the source formats. */
- format = __ccdc_get_format(ccdc, fh, CCDC_PAD_SOURCE_OF,
+ format = __ccdc_get_format(ccdc, cfg, CCDC_PAD_SOURCE_OF,
fmt->which);
*format = fmt->format;
- ccdc_try_format(ccdc, fh, CCDC_PAD_SOURCE_OF, format,
+ ccdc_try_format(ccdc, cfg, CCDC_PAD_SOURCE_OF, format,
fmt->which);
- format = __ccdc_get_format(ccdc, fh, CCDC_PAD_SOURCE_VP,
+ format = __ccdc_get_format(ccdc, cfg, CCDC_PAD_SOURCE_VP,
fmt->which);
*format = fmt->format;
- ccdc_try_format(ccdc, fh, CCDC_PAD_SOURCE_VP, format,
+ ccdc_try_format(ccdc, cfg, CCDC_PAD_SOURCE_VP, format,
fmt->which);
}
@@ -2417,11 +2417,11 @@
/* We've got a parallel sensor here. */
if (ccdc->input == CCDC_INPUT_PARALLEL) {
- struct isp_parallel_platform_data *pdata =
- &((struct isp_v4l2_subdevs_group *)
+ struct isp_parallel_cfg *parcfg =
+ &((struct isp_bus_cfg *)
media_entity_to_v4l2_subdev(link->source->entity)
->host_priv)->bus.parallel;
- parallel_shift = pdata->data_lane_shift * 2;
+ parallel_shift = parcfg->data_lane_shift * 2;
} else {
parallel_shift = 0;
}
@@ -2453,7 +2453,7 @@
format.format.code = MEDIA_BUS_FMT_SGRBG10_1X10;
format.format.width = 4096;
format.format.height = 4096;
- ccdc_set_format(sd, fh, &format);
+ ccdc_set_format(sd, fh ? fh->pad : NULL, &format);
return 0;
}
diff --git a/drivers/media/platform/omap3isp/ispccp2.c b/drivers/media/platform/omap3isp/ispccp2.c
index f4aedb3..38e6a97 100644
--- a/drivers/media/platform/omap3isp/ispccp2.c
+++ b/drivers/media/platform/omap3isp/ispccp2.c
@@ -201,14 +201,14 @@
/*
* ccp2_phyif_config - Initialize CCP2 phy interface config
* @ccp2: Pointer to ISP CCP2 device
- * @pdata: CCP2 platform data
+ * @buscfg: CCP2 platform data
*
* Configure the CCP2 physical interface module from platform data.
*
* Returns -EIO if strobe is chosen in CSI1 mode, or 0 on success.
*/
static int ccp2_phyif_config(struct isp_ccp2_device *ccp2,
- const struct isp_ccp2_platform_data *pdata)
+ const struct isp_ccp2_cfg *buscfg)
{
struct isp_device *isp = to_isp_device(ccp2);
u32 val;
@@ -218,16 +218,16 @@
ISPCCP2_CTRL_IO_OUT_SEL | ISPCCP2_CTRL_MODE;
/* Data/strobe physical layer */
BIT_SET(val, ISPCCP2_CTRL_PHY_SEL_SHIFT, ISPCCP2_CTRL_PHY_SEL_MASK,
- pdata->phy_layer);
+ buscfg->phy_layer);
BIT_SET(val, ISPCCP2_CTRL_INV_SHIFT, ISPCCP2_CTRL_INV_MASK,
- pdata->strobe_clk_pol);
+ buscfg->strobe_clk_pol);
isp_reg_writel(isp, val, OMAP3_ISP_IOMEM_CCP2, ISPCCP2_CTRL);
val = isp_reg_readl(isp, OMAP3_ISP_IOMEM_CCP2, ISPCCP2_CTRL);
if (!(val & ISPCCP2_CTRL_MODE)) {
- if (pdata->ccp2_mode == ISP_CCP2_MODE_CCP2)
+ if (buscfg->ccp2_mode == ISP_CCP2_MODE_CCP2)
dev_warn(isp->dev, "OMAP3 CCP2 bus not available\n");
- if (pdata->phy_layer == ISP_CCP2_PHY_DATA_STROBE)
+ if (buscfg->phy_layer == ISP_CCP2_PHY_DATA_STROBE)
/* Strobe mode requires CCP2 */
return -EIO;
}
@@ -347,7 +347,7 @@
*/
static int ccp2_if_configure(struct isp_ccp2_device *ccp2)
{
- const struct isp_v4l2_subdevs_group *pdata;
+ const struct isp_bus_cfg *buscfg;
struct v4l2_mbus_framefmt *format;
struct media_pad *pad;
struct v4l2_subdev *sensor;
@@ -358,20 +358,20 @@
pad = media_entity_remote_pad(&ccp2->pads[CCP2_PAD_SINK]);
sensor = media_entity_to_v4l2_subdev(pad->entity);
- pdata = sensor->host_priv;
+ buscfg = sensor->host_priv;
- ret = ccp2_phyif_config(ccp2, &pdata->bus.ccp2);
+ ret = ccp2_phyif_config(ccp2, &buscfg->bus.ccp2);
if (ret < 0)
return ret;
- ccp2_vp_config(ccp2, pdata->bus.ccp2.vpclk_div + 1);
+ ccp2_vp_config(ccp2, buscfg->bus.ccp2.vpclk_div + 1);
v4l2_subdev_call(sensor, sensor, g_skip_top_lines, &lines);
format = &ccp2->formats[CCP2_PAD_SINK];
ccp2->if_cfg.data_start = lines;
- ccp2->if_cfg.crc = pdata->bus.ccp2.crc;
+ ccp2->if_cfg.crc = buscfg->bus.ccp2.crc;
ccp2->if_cfg.format = format->code;
ccp2->if_cfg.data_size = format->height;
@@ -611,17 +611,17 @@
/*
* __ccp2_get_format - helper function for getting ccp2 format
* @ccp2 : Pointer to ISP CCP2 device
- * @fh : V4L2 subdev file handle
+ * @cfg: V4L2 subdev pad configuration
* @pad : pad number
* @which : wanted subdev format
* return format structure or NULL on error
*/
static struct v4l2_mbus_framefmt *
-__ccp2_get_format(struct isp_ccp2_device *ccp2, struct v4l2_subdev_fh *fh,
+__ccp2_get_format(struct isp_ccp2_device *ccp2, struct v4l2_subdev_pad_config *cfg,
unsigned int pad, enum v4l2_subdev_format_whence which)
{
if (which == V4L2_SUBDEV_FORMAT_TRY)
- return v4l2_subdev_get_try_format(fh, pad);
+ return v4l2_subdev_get_try_format(&ccp2->subdev, cfg, pad);
else
return &ccp2->formats[pad];
}
@@ -629,13 +629,13 @@
/*
* ccp2_try_format - Handle try format by pad subdev method
* @ccp2 : Pointer to ISP CCP2 device
- * @fh : V4L2 subdev file handle
+ * @cfg: V4L2 subdev pad configuration
* @pad : pad num
* @fmt : pointer to v4l2 mbus format structure
* @which : wanted subdev format
*/
static void ccp2_try_format(struct isp_ccp2_device *ccp2,
- struct v4l2_subdev_fh *fh, unsigned int pad,
+ struct v4l2_subdev_pad_config *cfg, unsigned int pad,
struct v4l2_mbus_framefmt *fmt,
enum v4l2_subdev_format_whence which)
{
@@ -669,7 +669,7 @@
* When CCP2 write to memory feature will be added this
* should be changed properly.
*/
- format = __ccp2_get_format(ccp2, fh, CCP2_PAD_SINK, which);
+ format = __ccp2_get_format(ccp2, cfg, CCP2_PAD_SINK, which);
memcpy(fmt, format, sizeof(*fmt));
fmt->code = MEDIA_BUS_FMT_SGRBG10_1X10;
break;
@@ -682,12 +682,12 @@
/*
* ccp2_enum_mbus_code - Handle pixel format enumeration
* @sd : pointer to v4l2 subdev structure
- * @fh : V4L2 subdev file handle
+ * @cfg: V4L2 subdev pad configuration
* @code : pointer to v4l2_subdev_mbus_code_enum structure
* return -EINVAL or zero on success
*/
static int ccp2_enum_mbus_code(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_mbus_code_enum *code)
{
struct isp_ccp2_device *ccp2 = v4l2_get_subdevdata(sd);
@@ -702,8 +702,8 @@
if (code->index != 0)
return -EINVAL;
- format = __ccp2_get_format(ccp2, fh, CCP2_PAD_SINK,
- V4L2_SUBDEV_FORMAT_TRY);
+ format = __ccp2_get_format(ccp2, cfg, CCP2_PAD_SINK,
+ code->which);
code->code = format->code;
}
@@ -711,7 +711,7 @@
}
static int ccp2_enum_frame_size(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_frame_size_enum *fse)
{
struct isp_ccp2_device *ccp2 = v4l2_get_subdevdata(sd);
@@ -723,7 +723,7 @@
format.code = fse->code;
format.width = 1;
format.height = 1;
- ccp2_try_format(ccp2, fh, fse->pad, &format, V4L2_SUBDEV_FORMAT_TRY);
+ ccp2_try_format(ccp2, cfg, fse->pad, &format, fse->which);
fse->min_width = format.width;
fse->min_height = format.height;
@@ -733,7 +733,7 @@
format.code = fse->code;
format.width = -1;
format.height = -1;
- ccp2_try_format(ccp2, fh, fse->pad, &format, V4L2_SUBDEV_FORMAT_TRY);
+ ccp2_try_format(ccp2, cfg, fse->pad, &format, fse->which);
fse->max_width = format.width;
fse->max_height = format.height;
@@ -743,17 +743,17 @@
/*
* ccp2_get_format - Handle get format by pads subdev method
* @sd : pointer to v4l2 subdev structure
- * @fh : V4L2 subdev file handle
+ * @cfg: V4L2 subdev pad configuration
* @fmt : pointer to v4l2 subdev format structure
* return -EINVAL or zero on success
*/
-static int ccp2_get_format(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+static int ccp2_get_format(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct isp_ccp2_device *ccp2 = v4l2_get_subdevdata(sd);
struct v4l2_mbus_framefmt *format;
- format = __ccp2_get_format(ccp2, fh, fmt->pad, fmt->which);
+ format = __ccp2_get_format(ccp2, cfg, fmt->pad, fmt->which);
if (format == NULL)
return -EINVAL;
@@ -764,29 +764,29 @@
/*
* ccp2_set_format - Handle set format by pads subdev method
* @sd : pointer to v4l2 subdev structure
- * @fh : V4L2 subdev file handle
+ * @cfg: V4L2 subdev pad configuration
* @fmt : pointer to v4l2 subdev format structure
* returns zero
*/
-static int ccp2_set_format(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+static int ccp2_set_format(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct isp_ccp2_device *ccp2 = v4l2_get_subdevdata(sd);
struct v4l2_mbus_framefmt *format;
- format = __ccp2_get_format(ccp2, fh, fmt->pad, fmt->which);
+ format = __ccp2_get_format(ccp2, cfg, fmt->pad, fmt->which);
if (format == NULL)
return -EINVAL;
- ccp2_try_format(ccp2, fh, fmt->pad, &fmt->format, fmt->which);
+ ccp2_try_format(ccp2, cfg, fmt->pad, &fmt->format, fmt->which);
*format = fmt->format;
/* Propagate the format from sink to source */
if (fmt->pad == CCP2_PAD_SINK) {
- format = __ccp2_get_format(ccp2, fh, CCP2_PAD_SOURCE,
+ format = __ccp2_get_format(ccp2, cfg, CCP2_PAD_SOURCE,
fmt->which);
*format = fmt->format;
- ccp2_try_format(ccp2, fh, CCP2_PAD_SOURCE, format, fmt->which);
+ ccp2_try_format(ccp2, cfg, CCP2_PAD_SOURCE, format, fmt->which);
}
return 0;
@@ -811,7 +811,7 @@
format.format.code = MEDIA_BUS_FMT_SGRBG10_1X10;
format.format.width = 4096;
format.format.height = 4096;
- ccp2_set_format(sd, fh, &format);
+ ccp2_set_format(sd, fh ? fh->pad : NULL, &format);
return 0;
}
diff --git a/drivers/media/platform/omap3isp/ispcsi2.c b/drivers/media/platform/omap3isp/ispcsi2.c
index 09c686d..a78338d 100644
--- a/drivers/media/platform/omap3isp/ispcsi2.c
+++ b/drivers/media/platform/omap3isp/ispcsi2.c
@@ -548,7 +548,8 @@
static int csi2_configure(struct isp_csi2_device *csi2)
{
- const struct isp_v4l2_subdevs_group *pdata;
+ struct isp_pipeline *pipe = to_isp_pipeline(&csi2->subdev.entity);
+ const struct isp_bus_cfg *buscfg;
struct isp_device *isp = csi2->isp;
struct isp_csi2_timing_cfg *timing = &csi2->timing[0];
struct v4l2_subdev *sensor;
@@ -565,14 +566,19 @@
pad = media_entity_remote_pad(&csi2->pads[CSI2_PAD_SINK]);
sensor = media_entity_to_v4l2_subdev(pad->entity);
- pdata = sensor->host_priv;
+ buscfg = sensor->host_priv;
csi2->frame_skip = 0;
v4l2_subdev_call(sensor, sensor, g_skip_frames, &csi2->frame_skip);
- csi2->ctrl.vp_out_ctrl = pdata->bus.csi2.vpclk_div;
+ csi2->ctrl.vp_out_ctrl =
+ clamp_t(unsigned int, pipe->l3_ick / pipe->external_rate - 1,
+ 1, 3);
+ dev_dbg(isp->dev, "%s: l3_ick %lu, external_rate %u, vp_out_ctrl %u\n",
+ __func__, pipe->l3_ick, pipe->external_rate,
+ csi2->ctrl.vp_out_ctrl);
csi2->ctrl.frame_mode = ISP_CSI2_FRAME_IMMEDIATE;
- csi2->ctrl.ecc_enable = pdata->bus.csi2.crc;
+ csi2->ctrl.ecc_enable = buscfg->bus.csi2.crc;
timing->ionum = 1;
timing->force_rx_mode = 1;
@@ -829,17 +835,17 @@
*/
static struct v4l2_mbus_framefmt *
-__csi2_get_format(struct isp_csi2_device *csi2, struct v4l2_subdev_fh *fh,
+__csi2_get_format(struct isp_csi2_device *csi2, struct v4l2_subdev_pad_config *cfg,
unsigned int pad, enum v4l2_subdev_format_whence which)
{
if (which == V4L2_SUBDEV_FORMAT_TRY)
- return v4l2_subdev_get_try_format(fh, pad);
+ return v4l2_subdev_get_try_format(&csi2->subdev, cfg, pad);
else
return &csi2->formats[pad];
}
static void
-csi2_try_format(struct isp_csi2_device *csi2, struct v4l2_subdev_fh *fh,
+csi2_try_format(struct isp_csi2_device *csi2, struct v4l2_subdev_pad_config *cfg,
unsigned int pad, struct v4l2_mbus_framefmt *fmt,
enum v4l2_subdev_format_whence which)
{
@@ -869,7 +875,7 @@
* compression.
*/
pixelcode = fmt->code;
- format = __csi2_get_format(csi2, fh, CSI2_PAD_SINK, which);
+ format = __csi2_get_format(csi2, cfg, CSI2_PAD_SINK, which);
memcpy(fmt, format, sizeof(*fmt));
/*
@@ -890,12 +896,12 @@
/*
* csi2_enum_mbus_code - Handle pixel format enumeration
* @sd : pointer to v4l2 subdev structure
- * @fh : V4L2 subdev file handle
+ * @cfg: V4L2 subdev pad configuration
* @code : pointer to v4l2_subdev_mbus_code_enum structure
* return -EINVAL or zero on success
*/
static int csi2_enum_mbus_code(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_mbus_code_enum *code)
{
struct isp_csi2_device *csi2 = v4l2_get_subdevdata(sd);
@@ -908,8 +914,8 @@
code->code = csi2_input_fmts[code->index];
} else {
- format = __csi2_get_format(csi2, fh, CSI2_PAD_SINK,
- V4L2_SUBDEV_FORMAT_TRY);
+ format = __csi2_get_format(csi2, cfg, CSI2_PAD_SINK,
+ code->which);
switch (code->index) {
case 0:
/* Passthrough sink pad code */
@@ -932,7 +938,7 @@
}
static int csi2_enum_frame_size(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_frame_size_enum *fse)
{
struct isp_csi2_device *csi2 = v4l2_get_subdevdata(sd);
@@ -944,7 +950,7 @@
format.code = fse->code;
format.width = 1;
format.height = 1;
- csi2_try_format(csi2, fh, fse->pad, &format, V4L2_SUBDEV_FORMAT_TRY);
+ csi2_try_format(csi2, cfg, fse->pad, &format, fse->which);
fse->min_width = format.width;
fse->min_height = format.height;
@@ -954,7 +960,7 @@
format.code = fse->code;
format.width = -1;
format.height = -1;
- csi2_try_format(csi2, fh, fse->pad, &format, V4L2_SUBDEV_FORMAT_TRY);
+ csi2_try_format(csi2, cfg, fse->pad, &format, fse->which);
fse->max_width = format.width;
fse->max_height = format.height;
@@ -964,17 +970,17 @@
/*
* csi2_get_format - Handle get format by pads subdev method
* @sd : pointer to v4l2 subdev structure
- * @fh : V4L2 subdev file handle
+ * @cfg: V4L2 subdev pad configuration
* @fmt: pointer to v4l2 subdev format structure
* return -EINVAL or zero on success
*/
-static int csi2_get_format(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+static int csi2_get_format(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct isp_csi2_device *csi2 = v4l2_get_subdevdata(sd);
struct v4l2_mbus_framefmt *format;
- format = __csi2_get_format(csi2, fh, fmt->pad, fmt->which);
+ format = __csi2_get_format(csi2, cfg, fmt->pad, fmt->which);
if (format == NULL)
return -EINVAL;
@@ -985,29 +991,29 @@
/*
* csi2_set_format - Handle set format by pads subdev method
* @sd : pointer to v4l2 subdev structure
- * @fh : V4L2 subdev file handle
+ * @cfg: V4L2 subdev pad configuration
* @fmt: pointer to v4l2 subdev format structure
* return -EINVAL or zero on success
*/
-static int csi2_set_format(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+static int csi2_set_format(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct isp_csi2_device *csi2 = v4l2_get_subdevdata(sd);
struct v4l2_mbus_framefmt *format;
- format = __csi2_get_format(csi2, fh, fmt->pad, fmt->which);
+ format = __csi2_get_format(csi2, cfg, fmt->pad, fmt->which);
if (format == NULL)
return -EINVAL;
- csi2_try_format(csi2, fh, fmt->pad, &fmt->format, fmt->which);
+ csi2_try_format(csi2, cfg, fmt->pad, &fmt->format, fmt->which);
*format = fmt->format;
/* Propagate the format from sink to source */
if (fmt->pad == CSI2_PAD_SINK) {
- format = __csi2_get_format(csi2, fh, CSI2_PAD_SOURCE,
+ format = __csi2_get_format(csi2, cfg, CSI2_PAD_SOURCE,
fmt->which);
*format = fmt->format;
- csi2_try_format(csi2, fh, CSI2_PAD_SOURCE, format, fmt->which);
+ csi2_try_format(csi2, cfg, CSI2_PAD_SOURCE, format, fmt->which);
}
return 0;
@@ -1032,7 +1038,7 @@
format.format.code = MEDIA_BUS_FMT_SGRBG10_1X10;
format.format.width = 4096;
format.format.height = 4096;
- csi2_set_format(sd, fh, &format);
+ csi2_set_format(sd, fh ? fh->pad : NULL, &format);
return 0;
}
diff --git a/drivers/media/platform/omap3isp/ispcsiphy.c b/drivers/media/platform/omap3isp/ispcsiphy.c
index e033f22..495447d 100644
--- a/drivers/media/platform/omap3isp/ispcsiphy.c
+++ b/drivers/media/platform/omap3isp/ispcsiphy.c
@@ -16,6 +16,7 @@
#include <linux/delay.h>
#include <linux/device.h>
+#include <linux/regmap.h>
#include <linux/regulator/consumer.h>
#include "isp.h"
@@ -26,10 +27,11 @@
enum isp_interface_type iface,
bool ccp2_strobe)
{
- u32 reg = isp_reg_readl(
- phy->isp, OMAP3_ISP_IOMEM_3630_CONTROL_CAMERA_PHY_CTRL, 0);
+ u32 reg;
u32 shift, mode;
+ regmap_read(phy->isp->syscon, phy->isp->syscon_offset, &reg);
+
switch (iface) {
default:
/* Should not happen in practice, but let's keep the compiler happy. */
@@ -63,8 +65,7 @@
reg &= ~(OMAP3630_CONTROL_CAMERA_PHY_CTRL_CAMMODE_MASK << shift);
reg |= mode << shift;
- isp_reg_writel(phy->isp, reg,
- OMAP3_ISP_IOMEM_3630_CONTROL_CAMERA_PHY_CTRL, 0);
+ regmap_write(phy->isp->syscon, phy->isp->syscon_offset, reg);
}
static void csiphy_routing_cfg_3430(struct isp_csiphy *phy, u32 iface, bool on,
@@ -78,16 +79,14 @@
return;
if (!on) {
- isp_reg_writel(phy->isp, 0,
- OMAP3_ISP_IOMEM_343X_CONTROL_CSIRXFE, 0);
+ regmap_write(phy->isp->syscon, phy->isp->syscon_offset, 0);
return;
}
if (ccp2_strobe)
csirxfe |= OMAP343X_CONTROL_CSIRXFE_SELFORM;
- isp_reg_writel(phy->isp, csirxfe,
- OMAP3_ISP_IOMEM_343X_CONTROL_CSIRXFE, 0);
+ regmap_write(phy->isp->syscon, phy->isp->syscon_offset, csirxfe);
}
/*
@@ -106,10 +105,9 @@
enum isp_interface_type iface, bool on,
bool ccp2_strobe)
{
- if (phy->isp->mmio_base[OMAP3_ISP_IOMEM_3630_CONTROL_CAMERA_PHY_CTRL]
- && on)
+ if (phy->isp->phy_type == ISP_PHY_TYPE_3630 && on)
return csiphy_routing_cfg_3630(phy, iface, ccp2_strobe);
- if (phy->isp->mmio_base[OMAP3_ISP_IOMEM_343X_CONTROL_CSIRXFE])
+ if (phy->isp->phy_type == ISP_PHY_TYPE_3430)
return csiphy_routing_cfg_3430(phy, iface, on, ccp2_strobe);
}
@@ -168,18 +166,25 @@
{
struct isp_csi2_device *csi2 = phy->csi2;
struct isp_pipeline *pipe = to_isp_pipeline(&csi2->subdev.entity);
- struct isp_v4l2_subdevs_group *subdevs = pipe->external->host_priv;
+ struct isp_bus_cfg *buscfg = pipe->external->host_priv;
struct isp_csiphy_lanes_cfg *lanes;
int csi2_ddrclk_khz;
unsigned int used_lanes = 0;
unsigned int i;
u32 reg;
- if (subdevs->interface == ISP_INTERFACE_CCP2B_PHY1
- || subdevs->interface == ISP_INTERFACE_CCP2B_PHY2)
- lanes = &subdevs->bus.ccp2.lanecfg;
+ if (!buscfg) {
+ struct isp_async_subdev *isd =
+ container_of(pipe->external->asd,
+ struct isp_async_subdev, asd);
+ buscfg = &isd->bus;
+ }
+
+ if (buscfg->interface == ISP_INTERFACE_CCP2B_PHY1
+ || buscfg->interface == ISP_INTERFACE_CCP2B_PHY2)
+ lanes = &buscfg->bus.ccp2.lanecfg;
else
- lanes = &subdevs->bus.csi2.lanecfg;
+ lanes = &buscfg->bus.csi2.lanecfg;
/* Clock and data lanes verification */
for (i = 0; i < phy->num_data_lanes; i++) {
@@ -203,8 +208,8 @@
* issue since the MPU power domain is forced on whilst the
* ISP is in use.
*/
- csiphy_routing_cfg(phy, subdevs->interface, true,
- subdevs->bus.ccp2.phy_layer);
+ csiphy_routing_cfg(phy, buscfg->interface, true,
+ buscfg->bus.ccp2.phy_layer);
/* DPHY timing configuration */
/* CSI-2 is DDR and we only count used lanes. */
@@ -302,11 +307,10 @@
struct isp_csi2_device *csi2 = phy->csi2;
struct isp_pipeline *pipe =
to_isp_pipeline(&csi2->subdev.entity);
- struct isp_v4l2_subdevs_group *subdevs =
- pipe->external->host_priv;
+ struct isp_bus_cfg *buscfg = pipe->external->host_priv;
- csiphy_routing_cfg(phy, subdevs->interface, false,
- subdevs->bus.ccp2.phy_layer);
+ csiphy_routing_cfg(phy, buscfg->interface, false,
+ buscfg->bus.ccp2.phy_layer);
csiphy_power_autoswitch_enable(phy, false);
csiphy_set_power(phy, ISPCSI2_PHY_CFG_PWR_CMD_OFF);
regulator_disable(phy->vdd);
diff --git a/drivers/media/platform/omap3isp/isph3a_aewb.c b/drivers/media/platform/omap3isp/isph3a_aewb.c
index b208c54..ccaf92f 100644
--- a/drivers/media/platform/omap3isp/isph3a_aewb.c
+++ b/drivers/media/platform/omap3isp/isph3a_aewb.c
@@ -297,7 +297,6 @@
aewb->ops = &h3a_aewb_ops;
aewb->priv = aewb_cfg;
- aewb->dma_ch = -1;
aewb->event_type = V4L2_EVENT_OMAP3ISP_AEWB;
aewb->isp = isp;
diff --git a/drivers/media/platform/omap3isp/isph3a_af.c b/drivers/media/platform/omap3isp/isph3a_af.c
index 8a83e19..92937f7 100644
--- a/drivers/media/platform/omap3isp/isph3a_af.c
+++ b/drivers/media/platform/omap3isp/isph3a_af.c
@@ -360,7 +360,6 @@
af->ops = &h3a_af_ops;
af->priv = af_cfg;
- af->dma_ch = -1;
af->event_type = V4L2_EVENT_OMAP3ISP_AF;
af->isp = isp;
diff --git a/drivers/media/platform/omap3isp/isphist.c b/drivers/media/platform/omap3isp/isphist.c
index ce822c3..7138b04 100644
--- a/drivers/media/platform/omap3isp/isphist.c
+++ b/drivers/media/platform/omap3isp/isphist.c
@@ -16,20 +16,18 @@
*/
#include <linux/delay.h>
+#include <linux/device.h>
+#include <linux/dmaengine.h>
+#include <linux/omap-dmaengine.h>
#include <linux/slab.h>
#include <linux/uaccess.h>
-#include <linux/device.h>
#include "isp.h"
#include "ispreg.h"
#include "isphist.h"
-#define OMAP24XX_DMA_NO_DEVICE 0
-
#define HIST_CONFIG_DMA 1
-#define HIST_USING_DMA(hist) ((hist)->dma_ch >= 0)
-
/*
* hist_reset_mem - clear Histogram memory before start stats engine.
*/
@@ -62,20 +60,6 @@
hist->wait_acc_frames = conf->num_acc_frames;
}
-static void hist_dma_config(struct ispstat *hist)
-{
- struct isp_device *isp = hist->isp;
-
- hist->dma_config.data_type = OMAP_DMA_DATA_TYPE_S32;
- hist->dma_config.sync_mode = OMAP_DMA_SYNC_ELEMENT;
- hist->dma_config.frame_count = 1;
- hist->dma_config.src_amode = OMAP_DMA_AMODE_CONSTANT;
- hist->dma_config.src_start = isp->mmio_base_phys[OMAP3_ISP_IOMEM_HIST]
- + ISPHIST_DATA;
- hist->dma_config.dst_amode = OMAP_DMA_AMODE_POST_INC;
- hist->dma_config.src_or_dst_synch = OMAP_DMA_SRC_SYNC;
-}
-
/*
* hist_setup_regs - Helper function to update Histogram registers.
*/
@@ -176,17 +160,12 @@
& ISPHIST_PCR_BUSY;
}
-static void hist_dma_cb(int lch, u16 ch_status, void *data)
+static void hist_dma_cb(void *data)
{
struct ispstat *hist = data;
- if (ch_status & ~OMAP_DMA_BLOCK_IRQ) {
- dev_dbg(hist->isp->dev, "hist: DMA error. status = 0x%04x\n",
- ch_status);
- omap_stop_dma(lch);
- hist_reset_mem(hist);
- atomic_set(&hist->buf_err, 1);
- }
+ /* FIXME: The DMA engine API can't report transfer errors :-/ */
+
isp_reg_clr(hist->isp, OMAP3_ISP_IOMEM_HIST, ISPHIST_CNT,
ISPHIST_CNT_CLEAR);
@@ -198,24 +177,57 @@
static int hist_buf_dma(struct ispstat *hist)
{
dma_addr_t dma_addr = hist->active_buf->dma_addr;
+ struct dma_async_tx_descriptor *tx;
+ struct dma_slave_config cfg;
+ dma_cookie_t cookie;
+ int ret;
if (unlikely(!dma_addr)) {
dev_dbg(hist->isp->dev, "hist: invalid DMA buffer address\n");
- hist_reset_mem(hist);
- return STAT_NO_BUF;
+ goto error;
}
isp_reg_writel(hist->isp, 0, OMAP3_ISP_IOMEM_HIST, ISPHIST_ADDR);
isp_reg_set(hist->isp, OMAP3_ISP_IOMEM_HIST, ISPHIST_CNT,
ISPHIST_CNT_CLEAR);
omap3isp_flush(hist->isp);
- hist->dma_config.dst_start = dma_addr;
- hist->dma_config.elem_count = hist->buf_size / sizeof(u32);
- omap_set_dma_params(hist->dma_ch, &hist->dma_config);
- omap_start_dma(hist->dma_ch);
+ memset(&cfg, 0, sizeof(cfg));
+ cfg.src_addr = hist->isp->mmio_hist_base_phys + ISPHIST_DATA;
+ cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
+ cfg.src_maxburst = hist->buf_size / 4;
+
+ ret = dmaengine_slave_config(hist->dma_ch, &cfg);
+ if (ret < 0) {
+ dev_dbg(hist->isp->dev,
+ "hist: DMA slave configuration failed\n");
+ goto error;
+ }
+
+ tx = dmaengine_prep_slave_single(hist->dma_ch, dma_addr,
+ hist->buf_size, DMA_DEV_TO_MEM,
+ DMA_CTRL_ACK);
+ if (tx == NULL) {
+ dev_dbg(hist->isp->dev,
+ "hist: DMA slave preparation failed\n");
+ goto error;
+ }
+
+ tx->callback = hist_dma_cb;
+ tx->callback_param = hist;
+ cookie = tx->tx_submit(tx);
+ if (dma_submit_error(cookie)) {
+ dev_dbg(hist->isp->dev, "hist: DMA submission failed\n");
+ goto error;
+ }
+
+ dma_async_issue_pending(hist->dma_ch);
return STAT_BUF_WAITING_DMA;
+
+error:
+ hist_reset_mem(hist);
+ return STAT_NO_BUF;
}
static int hist_buf_pio(struct ispstat *hist)
@@ -272,7 +284,7 @@
if (--(hist->wait_acc_frames))
return STAT_NO_BUF;
- if (HIST_USING_DMA(hist))
+ if (hist->dma_ch)
ret = hist_buf_dma(hist);
else
ret = hist_buf_pio(hist);
@@ -473,18 +485,28 @@
hist->isp = isp;
- if (HIST_CONFIG_DMA)
- ret = omap_request_dma(OMAP24XX_DMA_NO_DEVICE, "DMA_ISP_HIST",
- hist_dma_cb, hist, &hist->dma_ch);
- if (ret) {
- if (HIST_CONFIG_DMA)
- dev_warn(isp->dev, "hist: DMA request channel failed. "
- "Using PIO only.\n");
- hist->dma_ch = -1;
- } else {
- dev_dbg(isp->dev, "hist: DMA channel = %d\n", hist->dma_ch);
- hist_dma_config(hist);
- omap_enable_dma_irq(hist->dma_ch, OMAP_DMA_BLOCK_IRQ);
+ if (HIST_CONFIG_DMA) {
+ struct platform_device *pdev = to_platform_device(isp->dev);
+ struct resource *res;
+ unsigned int sig = 0;
+ dma_cap_mask_t mask;
+
+ dma_cap_zero(mask);
+ dma_cap_set(DMA_SLAVE, mask);
+
+ res = platform_get_resource_byname(pdev, IORESOURCE_DMA,
+ "hist");
+ if (res)
+ sig = res->start;
+
+ hist->dma_ch = dma_request_slave_channel_compat(mask,
+ omap_dma_filter_fn, &sig, isp->dev, "hist");
+ if (!hist->dma_ch)
+ dev_warn(isp->dev,
+ "hist: DMA channel request failed, using PIO\n");
+ else
+ dev_dbg(isp->dev, "hist: using DMA channel %s\n",
+ dma_chan_name(hist->dma_ch));
}
hist->ops = &hist_ops;
@@ -493,8 +515,8 @@
ret = omap3isp_stat_init(hist, "histogram", &hist_subdev_ops);
if (ret) {
- if (HIST_USING_DMA(hist))
- omap_free_dma(hist->dma_ch);
+ if (hist->dma_ch)
+ dma_release_channel(hist->dma_ch);
}
return ret;
@@ -505,7 +527,10 @@
*/
void omap3isp_hist_cleanup(struct isp_device *isp)
{
- if (HIST_USING_DMA(&isp->isp_hist))
- omap_free_dma(isp->isp_hist.dma_ch);
- omap3isp_stat_cleanup(&isp->isp_hist);
+ struct ispstat *hist = &isp->isp_hist;
+
+ if (hist->dma_ch)
+ dma_release_channel(hist->dma_ch);
+
+ omap3isp_stat_cleanup(hist);
}
diff --git a/drivers/media/platform/omap3isp/isppreview.c b/drivers/media/platform/omap3isp/isppreview.c
index dd9eed4..15cb254 100644
--- a/drivers/media/platform/omap3isp/isppreview.c
+++ b/drivers/media/platform/omap3isp/isppreview.c
@@ -1686,21 +1686,21 @@
}
static struct v4l2_mbus_framefmt *
-__preview_get_format(struct isp_prev_device *prev, struct v4l2_subdev_fh *fh,
+__preview_get_format(struct isp_prev_device *prev, struct v4l2_subdev_pad_config *cfg,
unsigned int pad, enum v4l2_subdev_format_whence which)
{
if (which == V4L2_SUBDEV_FORMAT_TRY)
- return v4l2_subdev_get_try_format(fh, pad);
+ return v4l2_subdev_get_try_format(&prev->subdev, cfg, pad);
else
return &prev->formats[pad];
}
static struct v4l2_rect *
-__preview_get_crop(struct isp_prev_device *prev, struct v4l2_subdev_fh *fh,
+__preview_get_crop(struct isp_prev_device *prev, struct v4l2_subdev_pad_config *cfg,
enum v4l2_subdev_format_whence which)
{
if (which == V4L2_SUBDEV_FORMAT_TRY)
- return v4l2_subdev_get_try_crop(fh, PREV_PAD_SINK);
+ return v4l2_subdev_get_try_crop(&prev->subdev, cfg, PREV_PAD_SINK);
else
return &prev->crop;
}
@@ -1727,7 +1727,7 @@
/*
* preview_try_format - Validate a format
* @prev: ISP preview engine
- * @fh: V4L2 subdev file handle
+ * @cfg: V4L2 subdev pad configuration
* @pad: pad number
* @fmt: format to be validated
* @which: try/active format selector
@@ -1736,7 +1736,7 @@
* engine limits and the format and crop rectangles on other pads.
*/
static void preview_try_format(struct isp_prev_device *prev,
- struct v4l2_subdev_fh *fh, unsigned int pad,
+ struct v4l2_subdev_pad_config *cfg, unsigned int pad,
struct v4l2_mbus_framefmt *fmt,
enum v4l2_subdev_format_whence which)
{
@@ -1777,7 +1777,7 @@
case PREV_PAD_SOURCE:
pixelcode = fmt->code;
- *fmt = *__preview_get_format(prev, fh, PREV_PAD_SINK, which);
+ *fmt = *__preview_get_format(prev, cfg, PREV_PAD_SINK, which);
switch (pixelcode) {
case MEDIA_BUS_FMT_YUYV8_1X16:
@@ -1795,7 +1795,7 @@
* is not supported yet, hardcode the output size to the crop
* rectangle size.
*/
- crop = __preview_get_crop(prev, fh, which);
+ crop = __preview_get_crop(prev, cfg, which);
fmt->width = crop->width;
fmt->height = crop->height;
@@ -1864,12 +1864,12 @@
/*
* preview_enum_mbus_code - Handle pixel format enumeration
* @sd : pointer to v4l2 subdev structure
- * @fh : V4L2 subdev file handle
+ * @cfg: V4L2 subdev pad configuration
* @code : pointer to v4l2_subdev_mbus_code_enum structure
* return -EINVAL or zero on success
*/
static int preview_enum_mbus_code(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_mbus_code_enum *code)
{
switch (code->pad) {
@@ -1893,7 +1893,7 @@
}
static int preview_enum_frame_size(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_frame_size_enum *fse)
{
struct isp_prev_device *prev = v4l2_get_subdevdata(sd);
@@ -1905,7 +1905,7 @@
format.code = fse->code;
format.width = 1;
format.height = 1;
- preview_try_format(prev, fh, fse->pad, &format, V4L2_SUBDEV_FORMAT_TRY);
+ preview_try_format(prev, cfg, fse->pad, &format, fse->which);
fse->min_width = format.width;
fse->min_height = format.height;
@@ -1915,7 +1915,7 @@
format.code = fse->code;
format.width = -1;
format.height = -1;
- preview_try_format(prev, fh, fse->pad, &format, V4L2_SUBDEV_FORMAT_TRY);
+ preview_try_format(prev, cfg, fse->pad, &format, fse->which);
fse->max_width = format.width;
fse->max_height = format.height;
@@ -1925,7 +1925,7 @@
/*
* preview_get_selection - Retrieve a selection rectangle on a pad
* @sd: ISP preview V4L2 subdevice
- * @fh: V4L2 subdev file handle
+ * @cfg: V4L2 subdev pad configuration
* @sel: Selection rectangle
*
* The only supported rectangles are the crop rectangles on the sink pad.
@@ -1933,7 +1933,7 @@
* Return 0 on success or a negative error code otherwise.
*/
static int preview_get_selection(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_selection *sel)
{
struct isp_prev_device *prev = v4l2_get_subdevdata(sd);
@@ -1949,13 +1949,13 @@
sel->r.width = INT_MAX;
sel->r.height = INT_MAX;
- format = __preview_get_format(prev, fh, PREV_PAD_SINK,
+ format = __preview_get_format(prev, cfg, PREV_PAD_SINK,
sel->which);
preview_try_crop(prev, format, &sel->r);
break;
case V4L2_SEL_TGT_CROP:
- sel->r = *__preview_get_crop(prev, fh, sel->which);
+ sel->r = *__preview_get_crop(prev, cfg, sel->which);
break;
default:
@@ -1968,7 +1968,7 @@
/*
* preview_set_selection - Set a selection rectangle on a pad
* @sd: ISP preview V4L2 subdevice
- * @fh: V4L2 subdev file handle
+ * @cfg: V4L2 subdev pad configuration
* @sel: Selection rectangle
*
* The only supported rectangle is the actual crop rectangle on the sink pad.
@@ -1976,7 +1976,7 @@
* Return 0 on success or a negative error code otherwise.
*/
static int preview_set_selection(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_selection *sel)
{
struct isp_prev_device *prev = v4l2_get_subdevdata(sd);
@@ -1995,17 +1995,17 @@
* rectangle.
*/
if (sel->flags & V4L2_SEL_FLAG_KEEP_CONFIG) {
- sel->r = *__preview_get_crop(prev, fh, sel->which);
+ sel->r = *__preview_get_crop(prev, cfg, sel->which);
return 0;
}
- format = __preview_get_format(prev, fh, PREV_PAD_SINK, sel->which);
+ format = __preview_get_format(prev, cfg, PREV_PAD_SINK, sel->which);
preview_try_crop(prev, format, &sel->r);
- *__preview_get_crop(prev, fh, sel->which) = sel->r;
+ *__preview_get_crop(prev, cfg, sel->which) = sel->r;
/* Update the source format. */
- format = __preview_get_format(prev, fh, PREV_PAD_SOURCE, sel->which);
- preview_try_format(prev, fh, PREV_PAD_SOURCE, format, sel->which);
+ format = __preview_get_format(prev, cfg, PREV_PAD_SOURCE, sel->which);
+ preview_try_format(prev, cfg, PREV_PAD_SOURCE, format, sel->which);
return 0;
}
@@ -2013,17 +2013,17 @@
/*
* preview_get_format - Handle get format by pads subdev method
* @sd : pointer to v4l2 subdev structure
- * @fh : V4L2 subdev file handle
+ * @cfg: V4L2 subdev pad configuration
* @fmt: pointer to v4l2 subdev format structure
* return -EINVAL or zero on success
*/
-static int preview_get_format(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+static int preview_get_format(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct isp_prev_device *prev = v4l2_get_subdevdata(sd);
struct v4l2_mbus_framefmt *format;
- format = __preview_get_format(prev, fh, fmt->pad, fmt->which);
+ format = __preview_get_format(prev, cfg, fmt->pad, fmt->which);
if (format == NULL)
return -EINVAL;
@@ -2034,28 +2034,28 @@
/*
* preview_set_format - Handle set format by pads subdev method
* @sd : pointer to v4l2 subdev structure
- * @fh : V4L2 subdev file handle
+ * @cfg: V4L2 subdev pad configuration
* @fmt: pointer to v4l2 subdev format structure
* return -EINVAL or zero on success
*/
-static int preview_set_format(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+static int preview_set_format(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct isp_prev_device *prev = v4l2_get_subdevdata(sd);
struct v4l2_mbus_framefmt *format;
struct v4l2_rect *crop;
- format = __preview_get_format(prev, fh, fmt->pad, fmt->which);
+ format = __preview_get_format(prev, cfg, fmt->pad, fmt->which);
if (format == NULL)
return -EINVAL;
- preview_try_format(prev, fh, fmt->pad, &fmt->format, fmt->which);
+ preview_try_format(prev, cfg, fmt->pad, &fmt->format, fmt->which);
*format = fmt->format;
/* Propagate the format from sink to source */
if (fmt->pad == PREV_PAD_SINK) {
/* Reset the crop rectangle. */
- crop = __preview_get_crop(prev, fh, fmt->which);
+ crop = __preview_get_crop(prev, cfg, fmt->which);
crop->left = 0;
crop->top = 0;
crop->width = fmt->format.width;
@@ -2064,9 +2064,9 @@
preview_try_crop(prev, &fmt->format, crop);
/* Update the source format. */
- format = __preview_get_format(prev, fh, PREV_PAD_SOURCE,
+ format = __preview_get_format(prev, cfg, PREV_PAD_SOURCE,
fmt->which);
- preview_try_format(prev, fh, PREV_PAD_SOURCE, format,
+ preview_try_format(prev, cfg, PREV_PAD_SOURCE, format,
fmt->which);
}
@@ -2093,7 +2093,7 @@
format.format.code = MEDIA_BUS_FMT_SGRBG10_1X10;
format.format.width = 4096;
format.format.height = 4096;
- preview_set_format(sd, fh, &format);
+ preview_set_format(sd, fh ? fh->pad : NULL, &format);
return 0;
}
diff --git a/drivers/media/platform/omap3isp/ispresizer.c b/drivers/media/platform/omap3isp/ispresizer.c
index 2b9bc48..7cfb43d 100644
--- a/drivers/media/platform/omap3isp/ispresizer.c
+++ b/drivers/media/platform/omap3isp/ispresizer.c
@@ -112,16 +112,16 @@
* __resizer_get_format - helper function for getting resizer format
* @res : pointer to resizer private structure
* @pad : pad number
- * @fh : V4L2 subdev file handle
+ * @cfg: V4L2 subdev pad configuration
* @which : wanted subdev format
* return zero
*/
static struct v4l2_mbus_framefmt *
-__resizer_get_format(struct isp_res_device *res, struct v4l2_subdev_fh *fh,
+__resizer_get_format(struct isp_res_device *res, struct v4l2_subdev_pad_config *cfg,
unsigned int pad, enum v4l2_subdev_format_whence which)
{
if (which == V4L2_SUBDEV_FORMAT_TRY)
- return v4l2_subdev_get_try_format(fh, pad);
+ return v4l2_subdev_get_try_format(&res->subdev, cfg, pad);
else
return &res->formats[pad];
}
@@ -129,15 +129,15 @@
/*
* __resizer_get_crop - helper function for getting resizer crop rectangle
* @res : pointer to resizer private structure
- * @fh : V4L2 subdev file handle
+ * @cfg: V4L2 subdev pad configuration
* @which : wanted subdev crop rectangle
*/
static struct v4l2_rect *
-__resizer_get_crop(struct isp_res_device *res, struct v4l2_subdev_fh *fh,
+__resizer_get_crop(struct isp_res_device *res, struct v4l2_subdev_pad_config *cfg,
enum v4l2_subdev_format_whence which)
{
if (which == V4L2_SUBDEV_FORMAT_TRY)
- return v4l2_subdev_get_try_crop(fh, RESZ_PAD_SINK);
+ return v4l2_subdev_get_try_crop(&res->subdev, cfg, RESZ_PAD_SINK);
else
return &res->crop.request;
}
@@ -1215,7 +1215,7 @@
/*
* resizer_get_selection - Retrieve a selection rectangle on a pad
* @sd: ISP resizer V4L2 subdevice
- * @fh: V4L2 subdev file handle
+ * @cfg: V4L2 subdev pad configuration
* @sel: Selection rectangle
*
* The only supported rectangles are the crop rectangles on the sink pad.
@@ -1223,7 +1223,7 @@
* Return 0 on success or a negative error code otherwise.
*/
static int resizer_get_selection(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_selection *sel)
{
struct isp_res_device *res = v4l2_get_subdevdata(sd);
@@ -1234,9 +1234,9 @@
if (sel->pad != RESZ_PAD_SINK)
return -EINVAL;
- format_sink = __resizer_get_format(res, fh, RESZ_PAD_SINK,
+ format_sink = __resizer_get_format(res, cfg, RESZ_PAD_SINK,
sel->which);
- format_source = __resizer_get_format(res, fh, RESZ_PAD_SOURCE,
+ format_source = __resizer_get_format(res, cfg, RESZ_PAD_SOURCE,
sel->which);
switch (sel->target) {
@@ -1251,7 +1251,7 @@
break;
case V4L2_SEL_TGT_CROP:
- sel->r = *__resizer_get_crop(res, fh, sel->which);
+ sel->r = *__resizer_get_crop(res, cfg, sel->which);
resizer_calc_ratios(res, &sel->r, format_source, &ratio);
break;
@@ -1265,7 +1265,7 @@
/*
* resizer_set_selection - Set a selection rectangle on a pad
* @sd: ISP resizer V4L2 subdevice
- * @fh: V4L2 subdev file handle
+ * @cfg: V4L2 subdev pad configuration
* @sel: Selection rectangle
*
* The only supported rectangle is the actual crop rectangle on the sink pad.
@@ -1276,7 +1276,7 @@
* Return 0 on success or a negative error code otherwise.
*/
static int resizer_set_selection(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_selection *sel)
{
struct isp_res_device *res = v4l2_get_subdevdata(sd);
@@ -1290,9 +1290,9 @@
sel->pad != RESZ_PAD_SINK)
return -EINVAL;
- format_sink = __resizer_get_format(res, fh, RESZ_PAD_SINK,
+ format_sink = __resizer_get_format(res, cfg, RESZ_PAD_SINK,
sel->which);
- format_source = *__resizer_get_format(res, fh, RESZ_PAD_SOURCE,
+ format_source = *__resizer_get_format(res, cfg, RESZ_PAD_SOURCE,
sel->which);
dev_dbg(isp->dev, "%s(%s): req %ux%u -> (%d,%d)/%ux%u -> %ux%u\n",
@@ -1310,7 +1310,7 @@
* stored the mangled rectangle.
*/
resizer_try_crop(format_sink, &format_source, &sel->r);
- *__resizer_get_crop(res, fh, sel->which) = sel->r;
+ *__resizer_get_crop(res, cfg, sel->which) = sel->r;
resizer_calc_ratios(res, &sel->r, &format_source, &ratio);
dev_dbg(isp->dev, "%s(%s): got %ux%u -> (%d,%d)/%ux%u -> %ux%u\n",
@@ -1320,7 +1320,7 @@
format_source.width, format_source.height);
if (sel->which == V4L2_SUBDEV_FORMAT_TRY) {
- *__resizer_get_format(res, fh, RESZ_PAD_SOURCE, sel->which) =
+ *__resizer_get_format(res, cfg, RESZ_PAD_SOURCE, sel->which) =
format_source;
return 0;
}
@@ -1331,7 +1331,7 @@
*/
spin_lock_irqsave(&res->lock, flags);
- *__resizer_get_format(res, fh, RESZ_PAD_SOURCE, sel->which) =
+ *__resizer_get_format(res, cfg, RESZ_PAD_SOURCE, sel->which) =
format_source;
res->ratio = ratio;
@@ -1368,13 +1368,13 @@
/*
* resizer_try_format - Handle try format by pad subdev method
* @res : ISP resizer device
- * @fh : V4L2 subdev file handle
+ * @cfg: V4L2 subdev pad configuration
* @pad : pad num
* @fmt : pointer to v4l2 format structure
* @which : wanted subdev format
*/
static void resizer_try_format(struct isp_res_device *res,
- struct v4l2_subdev_fh *fh, unsigned int pad,
+ struct v4l2_subdev_pad_config *cfg, unsigned int pad,
struct v4l2_mbus_framefmt *fmt,
enum v4l2_subdev_format_whence which)
{
@@ -1395,10 +1395,10 @@
break;
case RESZ_PAD_SOURCE:
- format = __resizer_get_format(res, fh, RESZ_PAD_SINK, which);
+ format = __resizer_get_format(res, cfg, RESZ_PAD_SINK, which);
fmt->code = format->code;
- crop = *__resizer_get_crop(res, fh, which);
+ crop = *__resizer_get_crop(res, cfg, which);
resizer_calc_ratios(res, &crop, fmt, &ratio);
break;
}
@@ -1410,12 +1410,12 @@
/*
* resizer_enum_mbus_code - Handle pixel format enumeration
* @sd : pointer to v4l2 subdev structure
- * @fh : V4L2 subdev file handle
+ * @cfg: V4L2 subdev pad configuration
* @code : pointer to v4l2_subdev_mbus_code_enum structure
* return -EINVAL or zero on success
*/
static int resizer_enum_mbus_code(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_mbus_code_enum *code)
{
struct isp_res_device *res = v4l2_get_subdevdata(sd);
@@ -1430,8 +1430,8 @@
if (code->index != 0)
return -EINVAL;
- format = __resizer_get_format(res, fh, RESZ_PAD_SINK,
- V4L2_SUBDEV_FORMAT_TRY);
+ format = __resizer_get_format(res, cfg, RESZ_PAD_SINK,
+ code->which);
code->code = format->code;
}
@@ -1439,7 +1439,7 @@
}
static int resizer_enum_frame_size(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_frame_size_enum *fse)
{
struct isp_res_device *res = v4l2_get_subdevdata(sd);
@@ -1451,7 +1451,7 @@
format.code = fse->code;
format.width = 1;
format.height = 1;
- resizer_try_format(res, fh, fse->pad, &format, V4L2_SUBDEV_FORMAT_TRY);
+ resizer_try_format(res, cfg, fse->pad, &format, fse->which);
fse->min_width = format.width;
fse->min_height = format.height;
@@ -1461,7 +1461,7 @@
format.code = fse->code;
format.width = -1;
format.height = -1;
- resizer_try_format(res, fh, fse->pad, &format, V4L2_SUBDEV_FORMAT_TRY);
+ resizer_try_format(res, cfg, fse->pad, &format, fse->which);
fse->max_width = format.width;
fse->max_height = format.height;
@@ -1471,17 +1471,17 @@
/*
* resizer_get_format - Handle get format by pads subdev method
* @sd : pointer to v4l2 subdev structure
- * @fh : V4L2 subdev file handle
+ * @cfg: V4L2 subdev pad configuration
* @fmt : pointer to v4l2 subdev format structure
* return -EINVAL or zero on success
*/
-static int resizer_get_format(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+static int resizer_get_format(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct isp_res_device *res = v4l2_get_subdevdata(sd);
struct v4l2_mbus_framefmt *format;
- format = __resizer_get_format(res, fh, fmt->pad, fmt->which);
+ format = __resizer_get_format(res, cfg, fmt->pad, fmt->which);
if (format == NULL)
return -EINVAL;
@@ -1492,37 +1492,37 @@
/*
* resizer_set_format - Handle set format by pads subdev method
* @sd : pointer to v4l2 subdev structure
- * @fh : V4L2 subdev file handle
+ * @cfg: V4L2 subdev pad configuration
* @fmt : pointer to v4l2 subdev format structure
* return -EINVAL or zero on success
*/
-static int resizer_set_format(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+static int resizer_set_format(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct isp_res_device *res = v4l2_get_subdevdata(sd);
struct v4l2_mbus_framefmt *format;
struct v4l2_rect *crop;
- format = __resizer_get_format(res, fh, fmt->pad, fmt->which);
+ format = __resizer_get_format(res, cfg, fmt->pad, fmt->which);
if (format == NULL)
return -EINVAL;
- resizer_try_format(res, fh, fmt->pad, &fmt->format, fmt->which);
+ resizer_try_format(res, cfg, fmt->pad, &fmt->format, fmt->which);
*format = fmt->format;
if (fmt->pad == RESZ_PAD_SINK) {
/* reset crop rectangle */
- crop = __resizer_get_crop(res, fh, fmt->which);
+ crop = __resizer_get_crop(res, cfg, fmt->which);
crop->left = 0;
crop->top = 0;
crop->width = fmt->format.width;
crop->height = fmt->format.height;
/* Propagate the format from sink to source */
- format = __resizer_get_format(res, fh, RESZ_PAD_SOURCE,
+ format = __resizer_get_format(res, cfg, RESZ_PAD_SOURCE,
fmt->which);
*format = fmt->format;
- resizer_try_format(res, fh, RESZ_PAD_SOURCE, format,
+ resizer_try_format(res, cfg, RESZ_PAD_SOURCE, format,
fmt->which);
}
@@ -1573,7 +1573,7 @@
format.format.code = MEDIA_BUS_FMT_YUYV8_1X16;
format.format.width = 4096;
format.format.height = 4096;
- resizer_set_format(sd, fh, &format);
+ resizer_set_format(sd, fh ? fh->pad : NULL, &format);
return 0;
}
diff --git a/drivers/media/platform/omap3isp/ispstat.c b/drivers/media/platform/omap3isp/ispstat.c
index a94e834..20434e8 100644
--- a/drivers/media/platform/omap3isp/ispstat.c
+++ b/drivers/media/platform/omap3isp/ispstat.c
@@ -21,7 +21,7 @@
#include "isp.h"
-#define ISP_STAT_USES_DMAENGINE(stat) ((stat)->dma_ch >= 0)
+#define ISP_STAT_USES_DMAENGINE(stat) ((stat)->dma_ch != NULL)
/*
* MAGIC_SIZE must always be the greatest common divisor of
diff --git a/drivers/media/platform/omap3isp/ispstat.h b/drivers/media/platform/omap3isp/ispstat.h
index b32b296..b79380d 100644
--- a/drivers/media/platform/omap3isp/ispstat.h
+++ b/drivers/media/platform/omap3isp/ispstat.h
@@ -20,7 +20,6 @@
#include <linux/types.h>
#include <linux/omap3isp.h>
-#include <linux/omap-dma.h>
#include <media/v4l2-event.h>
#include "isp.h"
@@ -33,6 +32,7 @@
#define STAT_NO_BUF 1 /* An error has occurred */
#define STAT_BUF_WAITING_DMA 2 /* Histogram only: DMA is running */
+struct dma_chan;
struct ispstat;
struct ispstat_buffer {
@@ -96,7 +96,6 @@
u8 inc_config;
atomic_t buf_err;
enum ispstat_state_t state; /* enabling/disabling state */
- struct omap_dma_channel_params dma_config;
struct isp_device *isp;
void *priv; /* pointer to priv config struct */
void *recover_priv; /* pointer to recover priv configuration */
@@ -110,7 +109,7 @@
u32 frame_number;
u32 buf_size;
u32 buf_alloc_size;
- int dma_ch;
+ struct dma_chan *dma_ch;
unsigned long event_type;
struct ispstat_buffer *buf;
struct ispstat_buffer *active_buf;
diff --git a/drivers/media/platform/omap3isp/ispvideo.c b/drivers/media/platform/omap3isp/ispvideo.c
index 3fe9047..d285af1 100644
--- a/drivers/media/platform/omap3isp/ispvideo.c
+++ b/drivers/media/platform/omap3isp/ispvideo.c
@@ -452,7 +452,6 @@
enum isp_pipeline_state state;
struct isp_buffer *buf;
unsigned long flags;
- struct timespec ts;
spin_lock_irqsave(&video->irqlock, flags);
if (WARN_ON(list_empty(&video->dmaqueue))) {
@@ -465,9 +464,7 @@
list_del(&buf->irqlist);
spin_unlock_irqrestore(&video->irqlock, flags);
- ktime_get_ts(&ts);
- buf->vb.v4l2_buf.timestamp.tv_sec = ts.tv_sec;
- buf->vb.v4l2_buf.timestamp.tv_usec = ts.tv_nsec / NSEC_PER_USEC;
+ v4l2_get_timestamp(&buf->vb.v4l2_buf.timestamp);
/* Do frame number propagation only if this is the output video node.
* Frame number either comes from the CSI receivers or it gets
@@ -524,7 +521,6 @@
buf = list_first_entry(&video->dmaqueue, struct isp_buffer,
irqlist);
- buf->vb.state = VB2_BUF_STATE_ACTIVE;
spin_unlock_irqrestore(&video->irqlock, flags);
@@ -1022,7 +1018,7 @@
pipe->entities = 0;
- if (video->isp->pdata->set_constraints)
+ if (video->isp->pdata && video->isp->pdata->set_constraints)
video->isp->pdata->set_constraints(video->isp, true);
pipe->l3_ick = clk_get_rate(video->isp->clock[ISP_CLK_L3_ICK]);
pipe->max_rate = pipe->l3_ick;
@@ -1104,7 +1100,7 @@
err_check_format:
media_entity_pipeline_stop(&video->video.entity);
err_pipeline_start:
- if (video->isp->pdata->set_constraints)
+ if (video->isp->pdata && video->isp->pdata->set_constraints)
video->isp->pdata->set_constraints(video->isp, false);
/* The DMA queue must be emptied here, otherwise CCDC interrupts that
* will get triggered the next time the CCDC is powered up will try to
@@ -1165,7 +1161,7 @@
video->queue = NULL;
video->error = false;
- if (video->isp->pdata->set_constraints)
+ if (video->isp->pdata && video->isp->pdata->set_constraints)
video->isp->pdata->set_constraints(video->isp, false);
media_entity_pipeline_stop(&video->video.entity);
@@ -1326,14 +1322,8 @@
static int isp_video_mmap(struct file *file, struct vm_area_struct *vma)
{
struct isp_video_fh *vfh = to_isp_video_fh(file->private_data);
- struct isp_video *video = video_drvdata(file);
- int ret;
- mutex_lock(&video->queue_lock);
- ret = vb2_mmap(&vfh->queue, vma);
- mutex_unlock(&video->queue_lock);
-
- return ret;
+ return vb2_mmap(&vfh->queue, vma);
}
static struct v4l2_file_operations isp_video_fops = {
diff --git a/drivers/media/platform/s3c-camif/camif-capture.c b/drivers/media/platform/s3c-camif/camif-capture.c
index 54479d6..f6a61b9 100644
--- a/drivers/media/platform/s3c-camif/camif-capture.c
+++ b/drivers/media/platform/s3c-camif/camif-capture.c
@@ -1219,7 +1219,7 @@
*/
static int s3c_camif_subdev_enum_mbus_code(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_mbus_code_enum *code)
{
if (code->index >= ARRAY_SIZE(camif_mbus_formats))
@@ -1230,14 +1230,14 @@
}
static int s3c_camif_subdev_get_fmt(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct camif_dev *camif = v4l2_get_subdevdata(sd);
struct v4l2_mbus_framefmt *mf = &fmt->format;
if (fmt->which == V4L2_SUBDEV_FORMAT_TRY) {
- mf = v4l2_subdev_get_try_format(fh, fmt->pad);
+ mf = v4l2_subdev_get_try_format(sd, cfg, fmt->pad);
fmt->format = *mf;
return 0;
}
@@ -1297,7 +1297,7 @@
}
static int s3c_camif_subdev_set_fmt(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct camif_dev *camif = v4l2_get_subdevdata(sd);
@@ -1325,7 +1325,7 @@
__camif_subdev_try_format(camif, mf, fmt->pad);
if (fmt->which == V4L2_SUBDEV_FORMAT_TRY) {
- mf = v4l2_subdev_get_try_format(fh, fmt->pad);
+ mf = v4l2_subdev_get_try_format(sd, cfg, fmt->pad);
*mf = fmt->format;
mutex_unlock(&camif->lock);
return 0;
@@ -1364,7 +1364,7 @@
}
static int s3c_camif_subdev_get_selection(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_selection *sel)
{
struct camif_dev *camif = v4l2_get_subdevdata(sd);
@@ -1377,7 +1377,7 @@
return -EINVAL;
if (sel->which == V4L2_SUBDEV_FORMAT_TRY) {
- sel->r = *v4l2_subdev_get_try_crop(fh, sel->pad);
+ sel->r = *v4l2_subdev_get_try_crop(sd, cfg, sel->pad);
return 0;
}
@@ -1451,7 +1451,7 @@
}
static int s3c_camif_subdev_set_selection(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_selection *sel)
{
struct camif_dev *camif = v4l2_get_subdevdata(sd);
@@ -1465,7 +1465,7 @@
__camif_try_crop(camif, &sel->r);
if (sel->which == V4L2_SUBDEV_FORMAT_TRY) {
- *v4l2_subdev_get_try_crop(fh, sel->pad) = sel->r;
+ *v4l2_subdev_get_try_crop(sd, cfg, sel->pad) = sel->r;
} else {
unsigned long flags;
unsigned int i;
diff --git a/drivers/media/platform/s5p-jpeg/jpeg-core.c b/drivers/media/platform/s5p-jpeg/jpeg-core.c
index a92ff42..bfbf157 100644
--- a/drivers/media/platform/s5p-jpeg/jpeg-core.c
+++ b/drivers/media/platform/s5p-jpeg/jpeg-core.c
@@ -621,6 +621,7 @@
return V4L2_JPEG_CHROMA_SUBSAMPLING_GRAY;
return ctx->subsampling;
case SJPEG_EXYNOS3250:
+ case SJPEG_EXYNOS5420:
if (ctx->subsampling > 3)
return V4L2_JPEG_CHROMA_SUBSAMPLING_411;
return exynos3250_decoded_subsampling[ctx->subsampling];
@@ -1142,13 +1143,13 @@
w_step = 1 << walign;
h_step = 1 << halign;
- if (ctx->jpeg->variant->version == SJPEG_EXYNOS3250) {
+ if (ctx->jpeg->variant->hw3250_compat) {
/*
* Rightmost and bottommost pixels are cropped by the
- * Exynos3250 JPEG IP for RGB formats, for the specific
- * width and height values respectively. This assignment
- * will result in v4l_bound_align_image returning dimensions
- * reduced by 1 for the aforementioned cases.
+ * Exynos3250/compatible JPEG IP for RGB formats, for the
+ * specific width and height values respectively. This
+ * assignment will result in v4l_bound_align_image returning
+ * dimensions reduced by 1 for the aforementioned cases.
*/
if (w_step == 4 && ((width & 3) == 1)) {
wmax = width;
@@ -1384,12 +1385,12 @@
/*
* Prevent downscaling to YUV420 format by more than 2
- * for Exynos3250 SoC as it produces broken raw image
+ * for Exynos3250/compatible SoC as it produces broken raw image
* in such cases.
*/
if (ct->mode == S5P_JPEG_DECODE &&
f_type == FMT_TYPE_CAPTURE &&
- ct->jpeg->variant->version == SJPEG_EXYNOS3250 &&
+ ct->jpeg->variant->hw3250_compat &&
pix->pixelformat == V4L2_PIX_FMT_YUV420 &&
ct->scale_factor > 2) {
scale_rect.width = ct->out_q.w / 2;
@@ -1569,12 +1570,12 @@
if (s->target == V4L2_SEL_TGT_COMPOSE) {
if (ctx->mode != S5P_JPEG_DECODE)
return -EINVAL;
- if (ctx->jpeg->variant->version == SJPEG_EXYNOS3250)
+ if (ctx->jpeg->variant->hw3250_compat)
ret = exynos3250_jpeg_try_downscale(ctx, rect);
} else if (s->target == V4L2_SEL_TGT_CROP) {
if (ctx->mode != S5P_JPEG_ENCODE)
return -EINVAL;
- if (ctx->jpeg->variant->version == SJPEG_EXYNOS3250)
+ if (ctx->jpeg->variant->hw3250_compat)
ret = exynos3250_jpeg_try_crop(ctx, rect);
}
@@ -1604,8 +1605,9 @@
case SJPEG_S5P:
return 0;
case SJPEG_EXYNOS3250:
+ case SJPEG_EXYNOS5420:
/*
- * The exynos3250 device can produce JPEG image only
+ * The exynos3250/compatible device can produce JPEG image only
* of 4:4:4 subsampling when given RGB32 source image.
*/
if (ctx->out_q.fmt->fourcc == V4L2_PIX_FMT_RGB32)
@@ -1624,7 +1626,7 @@
}
/*
- * The exynos4x12 and exynos3250 devices require resulting
+ * The exynos4x12 and exynos3250/compatible devices require resulting
* jpeg subsampling not to be lower than the input raw image
* subsampling.
*/
@@ -1842,7 +1844,7 @@
struct s5p_jpeg *jpeg = ctx->jpeg;
struct s5p_jpeg_fmt *fmt;
struct vb2_buffer *vb;
- struct s5p_jpeg_addr jpeg_addr;
+ struct s5p_jpeg_addr jpeg_addr = {};
u32 pix_size, padding_bytes = 0;
jpeg_addr.cb = 0;
@@ -1946,7 +1948,7 @@
struct s5p_jpeg *jpeg = ctx->jpeg;
struct s5p_jpeg_fmt *fmt;
struct vb2_buffer *vb;
- struct s5p_jpeg_addr jpeg_addr;
+ struct s5p_jpeg_addr jpeg_addr = {};
u32 pix_size;
pix_size = ctx->cap_q.w * ctx->cap_q.h;
@@ -2020,6 +2022,16 @@
exynos3250_jpeg_qtbl(jpeg->regs, 2, 1);
exynos3250_jpeg_qtbl(jpeg->regs, 3, 1);
+ /*
+ * Some SoCs require setting Huffman tables before each run
+ */
+ if (jpeg->variant->htbl_reinit) {
+ s5p_jpeg_set_hdctbl(jpeg->regs);
+ s5p_jpeg_set_hdctblg(jpeg->regs);
+ s5p_jpeg_set_hactbl(jpeg->regs);
+ s5p_jpeg_set_hactblg(jpeg->regs);
+ }
+
/* Y, Cb, Cr use Huffman table 0 */
exynos3250_jpeg_htbl_ac(jpeg->regs, 1);
exynos3250_jpeg_htbl_dc(jpeg->regs, 1);
@@ -2663,13 +2675,12 @@
/*
* JPEG IP allows storing two Huffman tables for each component.
* We fill table 0 for each component and do this here only
- * for S5PC210 and Exynos3250 SoCs. Exynos4x12 SoC requires
- * programming its Huffman tables each time the encoding process
- * is initialized, and thus it is accomplished in the device_run
- * callback of m2m_ops.
+ * for S5PC210 and Exynos3250 SoCs. Exynos4x12 and Exynos542x SoC
+ * require programming their Huffman tables each time the encoding
+ * process is initialized, and thus it is accomplished in the
+ * device_run callback of m2m_ops.
*/
- if (jpeg->variant->version == SJPEG_S5P ||
- jpeg->variant->version == SJPEG_EXYNOS3250) {
+ if (!jpeg->variant->htbl_reinit) {
s5p_jpeg_set_hdctbl(jpeg->regs);
s5p_jpeg_set_hdctblg(jpeg->regs);
s5p_jpeg_set_hactbl(jpeg->regs);
@@ -2717,6 +2728,7 @@
.jpeg_irq = exynos3250_jpeg_irq,
.m2m_ops = &exynos3250_jpeg_m2m_ops,
.fmt_ver_flag = SJPEG_FMT_FLAG_EXYNOS3250,
+ .hw3250_compat = 1,
};
static struct s5p_jpeg_variant exynos4_jpeg_drvdata = {
@@ -2724,6 +2736,16 @@
.jpeg_irq = exynos4_jpeg_irq,
.m2m_ops = &exynos4_jpeg_m2m_ops,
.fmt_ver_flag = SJPEG_FMT_FLAG_EXYNOS4,
+ .htbl_reinit = 1,
+};
+
+static struct s5p_jpeg_variant exynos5420_jpeg_drvdata = {
+ .version = SJPEG_EXYNOS5420,
+ .jpeg_irq = exynos3250_jpeg_irq, /* intentionally 3250 */
+ .m2m_ops = &exynos3250_jpeg_m2m_ops, /* intentionally 3250 */
+ .fmt_ver_flag = SJPEG_FMT_FLAG_EXYNOS3250, /* intentionally 3250 */
+ .hw3250_compat = 1,
+ .htbl_reinit = 1,
};
static const struct of_device_id samsung_jpeg_match[] = {
@@ -2739,6 +2761,9 @@
}, {
.compatible = "samsung,exynos4212-jpeg",
.data = &exynos4_jpeg_drvdata,
+ }, {
+ .compatible = "samsung,exynos5420-jpeg",
+ .data = &exynos5420_jpeg_drvdata,
},
{},
};
diff --git a/drivers/media/platform/s5p-jpeg/jpeg-core.h b/drivers/media/platform/s5p-jpeg/jpeg-core.h
index 764b32d..7d9a9ed 100644
--- a/drivers/media/platform/s5p-jpeg/jpeg-core.h
+++ b/drivers/media/platform/s5p-jpeg/jpeg-core.h
@@ -67,10 +67,12 @@
#define SJPEG_SUBSAMPLING_420 0x22
/* Version numbers */
-
-#define SJPEG_S5P 1
-#define SJPEG_EXYNOS3250 2
-#define SJPEG_EXYNOS4 3
+enum sjpeg_version {
+ SJPEG_S5P,
+ SJPEG_EXYNOS3250,
+ SJPEG_EXYNOS4,
+ SJPEG_EXYNOS5420,
+};
enum exynos4_jpeg_result {
OK_ENC_OR_DEC,
@@ -130,6 +132,8 @@
struct s5p_jpeg_variant {
unsigned int version;
unsigned int fmt_ver_flag;
+ unsigned int hw3250_compat:1;
+ unsigned int htbl_reinit:1;
struct v4l2_m2m_ops *m2m_ops;
irqreturn_t (*jpeg_irq)(int irq, void *priv);
};
diff --git a/drivers/media/platform/s5p-jpeg/jpeg-hw-s5p.c b/drivers/media/platform/s5p-jpeg/jpeg-hw-s5p.c
index e3b8e67..b5f20e7 100644
--- a/drivers/media/platform/s5p-jpeg/jpeg-hw-s5p.c
+++ b/drivers/media/platform/s5p-jpeg/jpeg-hw-s5p.c
@@ -51,18 +51,6 @@
writel(reg, regs + S5P_JPGCMOD);
}
-void s5p_jpeg_input_raw_y16(void __iomem *regs, bool y16)
-{
- unsigned long reg;
-
- reg = readl(regs + S5P_JPGCMOD);
- if (y16)
- reg |= S5P_MODE_Y16;
- else
- reg &= ~S5P_MODE_Y16_MASK;
- writel(reg, regs + S5P_JPGCMOD);
-}
-
void s5p_jpeg_proc_mode(void __iomem *regs, unsigned long mode)
{
unsigned long reg, m;
@@ -208,26 +196,6 @@
writel(reg, regs + S5P_JPGINTSE);
}
-void s5p_jpeg_timer_enable(void __iomem *regs, unsigned long val)
-{
- unsigned long reg;
-
- reg = readl(regs + S5P_JPG_TIMER_SE);
- reg |= S5P_TIMER_INT_EN;
- reg &= ~S5P_TIMER_INIT_MASK;
- reg |= val & S5P_TIMER_INIT_MASK;
- writel(reg, regs + S5P_JPG_TIMER_SE);
-}
-
-void s5p_jpeg_timer_disable(void __iomem *regs)
-{
- unsigned long reg;
-
- reg = readl(regs + S5P_JPG_TIMER_SE);
- reg &= ~S5P_TIMER_INT_EN_MASK;
- writel(reg, regs + S5P_JPG_TIMER_SE);
-}
-
int s5p_jpeg_timer_stat(void __iomem *regs)
{
return (int)((readl(regs + S5P_JPG_TIMER_ST) & S5P_TIMER_INT_STAT_MASK)
diff --git a/drivers/media/platform/s5p-jpeg/jpeg-hw-s5p.h b/drivers/media/platform/s5p-jpeg/jpeg-hw-s5p.h
index c11ebe8..f208fa3 100644
--- a/drivers/media/platform/s5p-jpeg/jpeg-hw-s5p.h
+++ b/drivers/media/platform/s5p-jpeg/jpeg-hw-s5p.h
@@ -29,7 +29,6 @@
void s5p_jpeg_reset(void __iomem *regs);
void s5p_jpeg_poweron(void __iomem *regs);
void s5p_jpeg_input_raw_mode(void __iomem *regs, unsigned long mode);
-void s5p_jpeg_input_raw_y16(void __iomem *regs, bool y16);
void s5p_jpeg_proc_mode(void __iomem *regs, unsigned long mode);
void s5p_jpeg_subsampling_mode(void __iomem *regs, unsigned int mode);
unsigned int s5p_jpeg_get_subsampling_mode(void __iomem *regs);
@@ -42,8 +41,6 @@
void s5p_jpeg_rst_int_enable(void __iomem *regs, bool enable);
void s5p_jpeg_data_num_int_enable(void __iomem *regs, bool enable);
void s5p_jpeg_final_mcu_num_int_enable(void __iomem *regs, bool enbl);
-void s5p_jpeg_timer_enable(void __iomem *regs, unsigned long val);
-void s5p_jpeg_timer_disable(void __iomem *regs);
int s5p_jpeg_timer_stat(void __iomem *regs);
void s5p_jpeg_clear_timer_stat(void __iomem *regs);
void s5p_jpeg_enc_stream_int(void __iomem *regs, unsigned long size);
diff --git a/drivers/media/platform/s5p-mfc/s5p_mfc.c b/drivers/media/platform/s5p-mfc/s5p_mfc.c
index 98374e8..8333fbc2 100644
--- a/drivers/media/platform/s5p-mfc/s5p_mfc.c
+++ b/drivers/media/platform/s5p-mfc/s5p_mfc.c
@@ -844,6 +844,13 @@
ret = -ENOENT;
goto err_queue_init;
}
+ /* One way to indicate end-of-stream for MFC is to set the
+ * bytesused == 0. However by default videobuf2 handles bytesused
+ * equal to 0 as a special case and changes its value to the size
+ * of the buffer. Set the allow_zero_bytesused flag so that videobuf2
+ * will keep the value of bytesused intact.
+ */
+ q->allow_zero_bytesused = 1;
q->mem_ops = &vb2_dma_contig_memops;
q->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_COPY;
ret = vb2_queue_init(q);
diff --git a/drivers/media/platform/s5p-tv/mixer_video.c b/drivers/media/platform/s5p-tv/mixer_video.c
index 72d4f2e..751f3b6 100644
--- a/drivers/media/platform/s5p-tv/mixer_video.c
+++ b/drivers/media/platform/s5p-tv/mixer_video.c
@@ -287,7 +287,7 @@
u32 bl_width = divup(width, blk->width);
u32 bl_height = divup(height, blk->height);
u32 sizeimage = bl_width * bl_height * blk->size;
- u16 bytesperline = bl_width * blk->size / blk->height;
+ u32 bytesperline = bl_width * blk->size / blk->height;
plane->sizeimage += sizeimage;
plane->bytesperline = max(plane->bytesperline, bytesperline);
diff --git a/drivers/media/platform/sh_vou.c b/drivers/media/platform/sh_vou.c
index 261f119..dde1ccc 100644
--- a/drivers/media/platform/sh_vou.c
+++ b/drivers/media/platform/sh_vou.c
@@ -62,7 +62,7 @@
struct sh_vou_device {
struct v4l2_device v4l2_dev;
- struct video_device *vdev;
+ struct video_device vdev;
atomic_t use_count;
struct sh_vou_pdata *pdata;
spinlock_t lock;
@@ -890,7 +890,7 @@
dev_dbg(vou_dev->v4l2_dev.dev, "%s(): 0x%llx\n", __func__, std_id);
- if (std_id & ~vou_dev->vdev->tvnorms)
+ if (std_id & ~vou_dev->vdev.tvnorms)
return -EINVAL;
ret = v4l2_device_call_until_err(&vou_dev->v4l2_dev, 0, video,
@@ -1168,10 +1168,10 @@
dev_dbg(vou_dev->v4l2_dev.dev, "%s()\n", __func__);
- file->private_data = vou_file;
-
- if (mutex_lock_interruptible(&vou_dev->fop_lock))
+ if (mutex_lock_interruptible(&vou_dev->fop_lock)) {
+ kfree(vou_file);
return -ERESTARTSYS;
+ }
if (atomic_inc_return(&vou_dev->use_count) == 1) {
int ret;
/* First open */
@@ -1183,6 +1183,7 @@
pm_runtime_put(vou_dev->v4l2_dev.dev);
vou_dev->status = SH_VOU_IDLE;
mutex_unlock(&vou_dev->fop_lock);
+ kfree(vou_file);
return ret;
}
}
@@ -1192,9 +1193,11 @@
V4L2_BUF_TYPE_VIDEO_OUTPUT,
V4L2_FIELD_NONE,
sizeof(struct videobuf_buffer),
- vou_dev->vdev, &vou_dev->fop_lock);
+ &vou_dev->vdev, &vou_dev->fop_lock);
mutex_unlock(&vou_dev->fop_lock);
+ file->private_data = vou_file;
+
return 0;
}
@@ -1358,21 +1361,14 @@
goto ev4l2devreg;
}
- /* Allocate memory for video device */
- vdev = video_device_alloc();
- if (vdev == NULL) {
- ret = -ENOMEM;
- goto evdevalloc;
- }
-
+ vdev = &vou_dev->vdev;
*vdev = sh_vou_video_template;
if (vou_pdata->bus_fmt == SH_VOU_BUS_8BIT)
vdev->tvnorms |= V4L2_STD_PAL;
vdev->v4l2_dev = &vou_dev->v4l2_dev;
- vdev->release = video_device_release;
+ vdev->release = video_device_release_empty;
vdev->lock = &vou_dev->fop_lock;
- vou_dev->vdev = vdev;
video_set_drvdata(vdev, vou_dev);
pm_runtime_enable(&pdev->dev);
@@ -1406,9 +1402,7 @@
ereset:
i2c_put_adapter(i2c_adap);
ei2cgadap:
- video_device_release(vdev);
pm_runtime_disable(&pdev->dev);
-evdevalloc:
v4l2_device_unregister(&vou_dev->v4l2_dev);
ev4l2devreg:
free_irq(irq, vou_dev);
@@ -1435,7 +1429,7 @@
if (irq > 0)
free_irq(irq, vou_dev);
pm_runtime_disable(&pdev->dev);
- video_unregister_device(vou_dev->vdev);
+ video_unregister_device(&vou_dev->vdev);
i2c_put_adapter(client->adapter);
v4l2_device_unregister(&vou_dev->v4l2_dev);
iounmap(vou_dev->base);
diff --git a/drivers/media/platform/soc_camera/rcar_vin.c b/drivers/media/platform/soc_camera/rcar_vin.c
index 279ab9f..9351f64 100644
--- a/drivers/media/platform/soc_camera/rcar_vin.c
+++ b/drivers/media/platform/soc_camera/rcar_vin.c
@@ -977,19 +977,6 @@
icd->devnum);
}
-/* Called with .host_lock held */
-static int rcar_vin_clock_start(struct soc_camera_host *ici)
-{
- /* VIN does not have "mclk" */
- return 0;
-}
-
-/* Called with .host_lock held */
-static void rcar_vin_clock_stop(struct soc_camera_host *ici)
-{
- /* VIN does not have "mclk" */
-}
-
static void set_coeff(struct rcar_vin_priv *priv, unsigned short xs)
{
int i;
@@ -1803,8 +1790,6 @@
.owner = THIS_MODULE,
.add = rcar_vin_add_device,
.remove = rcar_vin_remove_device,
- .clock_start = rcar_vin_clock_start,
- .clock_stop = rcar_vin_clock_stop,
.get_formats = rcar_vin_get_formats,
.put_formats = rcar_vin_put_formats,
.get_crop = rcar_vin_get_crop,
diff --git a/drivers/media/platform/soc_camera/sh_mobile_csi2.c b/drivers/media/platform/soc_camera/sh_mobile_csi2.c
index c4e7aa0..cd93241 100644
--- a/drivers/media/platform/soc_camera/sh_mobile_csi2.c
+++ b/drivers/media/platform/soc_camera/sh_mobile_csi2.c
@@ -380,7 +380,6 @@
struct sh_csi2 *priv = container_of(subdev, struct sh_csi2, subdev);
v4l2_async_unregister_subdev(&priv->subdev);
- v4l2_device_unregister_subdev(subdev);
pm_runtime_disable(&pdev->dev);
return 0;
diff --git a/drivers/media/platform/soc_camera/soc_camera.c b/drivers/media/platform/soc_camera/soc_camera.c
index 66634b4..7bfe7665 100644
--- a/drivers/media/platform/soc_camera/soc_camera.c
+++ b/drivers/media/platform/soc_camera/soc_camera.c
@@ -177,6 +177,30 @@
return 0;
}
+static int soc_camera_clock_start(struct soc_camera_host *ici)
+{
+ int ret;
+
+ if (!ici->ops->clock_start)
+ return 0;
+
+ mutex_lock(&ici->clk_lock);
+ ret = ici->ops->clock_start(ici);
+ mutex_unlock(&ici->clk_lock);
+
+ return ret;
+}
+
+static void soc_camera_clock_stop(struct soc_camera_host *ici)
+{
+ if (!ici->ops->clock_stop)
+ return;
+
+ mutex_lock(&ici->clk_lock);
+ ici->ops->clock_stop(ici);
+ mutex_unlock(&ici->clk_lock);
+}
+
const struct soc_camera_format_xlate *soc_camera_xlate_by_fourcc(
struct soc_camera_device *icd, unsigned int fourcc)
{
@@ -584,9 +608,7 @@
return -EBUSY;
if (!icd->clk) {
- mutex_lock(&ici->clk_lock);
- ret = ici->ops->clock_start(ici);
- mutex_unlock(&ici->clk_lock);
+ ret = soc_camera_clock_start(ici);
if (ret < 0)
return ret;
}
@@ -602,11 +624,8 @@
return 0;
eadd:
- if (!icd->clk) {
- mutex_lock(&ici->clk_lock);
- ici->ops->clock_stop(ici);
- mutex_unlock(&ici->clk_lock);
- }
+ if (!icd->clk)
+ soc_camera_clock_stop(ici);
return ret;
}
@@ -619,11 +638,8 @@
if (ici->ops->remove)
ici->ops->remove(icd);
- if (!icd->clk) {
- mutex_lock(&ici->clk_lock);
- ici->ops->clock_stop(ici);
- mutex_unlock(&ici->clk_lock);
- }
+ if (!icd->clk)
+ soc_camera_clock_stop(ici);
ici->icd = NULL;
}
@@ -688,7 +704,8 @@
/* The camera could have been already on, try to reset */
if (sdesc->subdev_desc.reset)
- sdesc->subdev_desc.reset(icd->pdev);
+ if (icd->control)
+ sdesc->subdev_desc.reset(icd->control);
ret = soc_camera_add_device(icd);
if (ret < 0) {
@@ -1159,7 +1176,8 @@
/* The camera could have been already on, try to reset */
if (ssdd->reset)
- ssdd->reset(icd->pdev);
+ if (icd->control)
+ ssdd->reset(icd->control);
icd->parent = ici->v4l2_dev.dev;
@@ -1178,7 +1196,6 @@
{
struct soc_camera_device *icd = clk->priv;
struct soc_camera_host *ici;
- int ret;
if (!icd || !icd->parent)
return -ENODEV;
@@ -1192,10 +1209,7 @@
* If a different client is currently being probed, the host will tell
* you to go
*/
- mutex_lock(&ici->clk_lock);
- ret = ici->ops->clock_start(ici);
- mutex_unlock(&ici->clk_lock);
- return ret;
+ return soc_camera_clock_start(ici);
}
static void soc_camera_clk_disable(struct v4l2_clk *clk)
@@ -1208,9 +1222,7 @@
ici = to_soc_camera_host(icd->parent);
- mutex_lock(&ici->clk_lock);
- ici->ops->clock_stop(ici);
- mutex_unlock(&ici->clk_lock);
+ soc_camera_clock_stop(ici);
module_put(ici->ops->owner);
}
@@ -1364,7 +1376,7 @@
snprintf(clk_name, sizeof(clk_name), "%d-%04x",
shd->i2c_adapter_id, shd->board_info->addr);
- icd->clk = v4l2_clk_register(&soc_camera_clk_ops, clk_name, "mclk", icd);
+ icd->clk = v4l2_clk_register(&soc_camera_clk_ops, clk_name, icd);
if (IS_ERR(icd->clk)) {
ret = PTR_ERR(icd->clk);
goto eclkreg;
@@ -1445,7 +1457,7 @@
memcpy(&sdesc->subdev_desc, ssdd,
sizeof(sdesc->subdev_desc));
if (ssdd->reset)
- ssdd->reset(icd->pdev);
+ ssdd->reset(&client->dev);
}
icd->control = &client->dev;
@@ -1545,7 +1557,7 @@
snprintf(clk_name, sizeof(clk_name), "%d-%04x",
sasd->asd.match.i2c.adapter_id, sasd->asd.match.i2c.address);
- icd->clk = v4l2_clk_register(&soc_camera_clk_ops, clk_name, "mclk", icd);
+ icd->clk = v4l2_clk_register(&soc_camera_clk_ops, clk_name, icd);
if (IS_ERR(icd->clk)) {
ret = PTR_ERR(icd->clk);
goto eclkreg;
@@ -1650,7 +1662,7 @@
snprintf(clk_name, sizeof(clk_name), "of-%s",
of_node_full_name(remote));
- icd->clk = v4l2_clk_register(&soc_camera_clk_ops, clk_name, "mclk", icd);
+ icd->clk = v4l2_clk_register(&soc_camera_clk_ops, clk_name, icd);
if (IS_ERR(icd->clk)) {
ret = PTR_ERR(icd->clk);
goto eclkreg;
@@ -1659,6 +1671,8 @@
ret = v4l2_async_notifier_register(&ici->v4l2_dev, &sasc->notifier);
if (!ret)
return 0;
+
+ v4l2_clk_unregister(icd->clk);
eclkreg:
icd->clk = NULL;
platform_device_del(sasc->pdev);
@@ -1694,7 +1708,6 @@
if (!i)
soc_of_bind(ici, epn, ren->parent);
- of_node_put(epn);
of_node_put(ren);
if (i) {
@@ -1702,6 +1715,8 @@
break;
}
}
+
+ of_node_put(epn);
}
#else
@@ -1750,9 +1765,7 @@
ret = -EINVAL;
goto eadd;
} else {
- mutex_lock(&ici->clk_lock);
- ret = ici->ops->clock_start(ici);
- mutex_unlock(&ici->clk_lock);
+ ret = soc_camera_clock_start(ici);
if (ret < 0)
goto eadd;
@@ -1792,9 +1805,7 @@
module_put(control->driver->owner);
enodrv:
eadddev:
- mutex_lock(&ici->clk_lock);
- ici->ops->clock_stop(ici);
- mutex_unlock(&ici->clk_lock);
+ soc_camera_clock_stop(ici);
}
eadd:
if (icd->vdev) {
@@ -1888,22 +1899,34 @@
int ret;
struct v4l2_subdev *sd = soc_camera_to_subdev(icd);
const struct soc_camera_format_xlate *xlate;
- __u32 pixfmt = fsize->pixel_format;
- struct v4l2_frmsizeenum fsize_mbus = *fsize;
+ struct v4l2_subdev_frame_size_enum fse = {
+ .index = fsize->index,
+ .which = V4L2_SUBDEV_FORMAT_ACTIVE,
+ };
- xlate = soc_camera_xlate_by_fourcc(icd, pixfmt);
+ xlate = soc_camera_xlate_by_fourcc(icd, fsize->pixel_format);
if (!xlate)
return -EINVAL;
- /* map xlate-code to pixel_format, sensor only handle xlate-code*/
- fsize_mbus.pixel_format = xlate->code;
+ fse.code = xlate->code;
- ret = v4l2_subdev_call(sd, video, enum_framesizes, &fsize_mbus);
+ ret = v4l2_subdev_call(sd, pad, enum_frame_size, NULL, &fse);
if (ret < 0)
return ret;
- *fsize = fsize_mbus;
- fsize->pixel_format = pixfmt;
-
+ if (fse.min_width == fse.max_width &&
+ fse.min_height == fse.max_height) {
+ fsize->type = V4L2_FRMSIZE_TYPE_DISCRETE;
+ fsize->discrete.width = fse.min_width;
+ fsize->discrete.height = fse.min_height;
+ return 0;
+ }
+ fsize->type = V4L2_FRMSIZE_TYPE_CONTINUOUS;
+ fsize->stepwise.min_width = fse.min_width;
+ fsize->stepwise.max_width = fse.max_width;
+ fsize->stepwise.min_height = fse.min_height;
+ fsize->stepwise.max_height = fse.max_height;
+ fsize->stepwise.step_width = 1;
+ fsize->stepwise.step_height = 1;
return 0;
}
@@ -1920,8 +1943,6 @@
((!ici->ops->init_videobuf ||
!ici->ops->reqbufs) &&
!ici->ops->init_videobuf2) ||
- !ici->ops->clock_start ||
- !ici->ops->clock_stop ||
!ici->ops->poll ||
!ici->v4l2_dev.dev)
return -EINVAL;
diff --git a/drivers/media/platform/via-camera.c b/drivers/media/platform/via-camera.c
index 86989d8..678ed9f 100644
--- a/drivers/media/platform/via-camera.c
+++ b/drivers/media/platform/via-camera.c
@@ -1147,12 +1147,23 @@
struct v4l2_frmivalenum *interval)
{
struct via_camera *cam = priv;
+ struct v4l2_subdev_frame_interval_enum fie = {
+ .index = interval->index,
+ .code = cam->mbus_code,
+ .width = cam->sensor_format.width,
+ .height = cam->sensor_format.height,
+ .which = V4L2_SUBDEV_FORMAT_ACTIVE,
+ };
int ret;
mutex_lock(&cam->lock);
- ret = sensor_call(cam, video, enum_frameintervals, interval);
+ ret = sensor_call(cam, pad, enum_frame_interval, NULL, &fie);
mutex_unlock(&cam->lock);
- return ret;
+ if (ret)
+ return ret;
+ interval->type = V4L2_FRMIVAL_TYPE_DISCRETE;
+ interval->discrete = fie.interval;
+ return 0;
}
diff --git a/drivers/media/platform/vim2m.c b/drivers/media/platform/vim2m.c
index d9d844a..4d6b4cc 100644
--- a/drivers/media/platform/vim2m.c
+++ b/drivers/media/platform/vim2m.c
@@ -142,7 +142,7 @@
struct vim2m_dev {
struct v4l2_device v4l2_dev;
- struct video_device *vfd;
+ struct video_device vfd;
atomic_t num_inst;
struct mutex dev_mutex;
@@ -968,7 +968,7 @@
.fops = &vim2m_fops,
.ioctl_ops = &vim2m_ioctl_ops,
.minor = -1,
- .release = video_device_release,
+ .release = video_device_release_empty,
};
static struct v4l2_m2m_ops m2m_ops = {
@@ -996,26 +996,19 @@
atomic_set(&dev->num_inst, 0);
mutex_init(&dev->dev_mutex);
- vfd = video_device_alloc();
- if (!vfd) {
- v4l2_err(&dev->v4l2_dev, "Failed to allocate video device\n");
- ret = -ENOMEM;
- goto unreg_dev;
- }
-
- *vfd = vim2m_videodev;
+ dev->vfd = vim2m_videodev;
+ vfd = &dev->vfd;
vfd->lock = &dev->dev_mutex;
vfd->v4l2_dev = &dev->v4l2_dev;
ret = video_register_device(vfd, VFL_TYPE_GRABBER, 0);
if (ret) {
v4l2_err(&dev->v4l2_dev, "Failed to register video device\n");
- goto rel_vdev;
+ goto unreg_dev;
}
video_set_drvdata(vfd, dev);
snprintf(vfd->name, sizeof(vfd->name), "%s", vim2m_videodev.name);
- dev->vfd = vfd;
v4l2_info(&dev->v4l2_dev,
"Device registered as /dev/video%d\n", vfd->num);
@@ -1033,9 +1026,7 @@
err_m2m:
v4l2_m2m_release(dev->m2m_dev);
- video_unregister_device(dev->vfd);
-rel_vdev:
- video_device_release(vfd);
+ video_unregister_device(&dev->vfd);
unreg_dev:
v4l2_device_unregister(&dev->v4l2_dev);
@@ -1049,7 +1040,7 @@
v4l2_info(&dev->v4l2_dev, "Removing " MEM2MEM_NAME);
v4l2_m2m_release(dev->m2m_dev);
del_timer_sync(&dev->timer);
- video_unregister_device(dev->vfd);
+ video_unregister_device(&dev->vfd);
v4l2_device_unregister(&dev->v4l2_dev);
return 0;
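
The vim2m change above embeds struct video_device in the driver state instead of allocating it separately. A minimal sketch of that pattern, using hypothetical "foo" names rather than anything from this patch:

	struct foo_dev {
		struct v4l2_device v4l2_dev;
		struct video_device vfd;		/* embedded, not a pointer */
	};

	static const struct video_device foo_videodev = {
		.name    = "foo",
		.minor   = -1,
		.release = video_device_release_empty,	/* nothing left to kfree */
	};

	static int foo_register(struct foo_dev *dev)
	{
		dev->vfd = foo_videodev;		/* copy the template */
		dev->vfd.v4l2_dev = &dev->v4l2_dev;
		video_set_drvdata(&dev->vfd, dev);
		return video_register_device(&dev->vfd, VFL_TYPE_GRABBER, -1);
	}

Because the video_device is part of the containing structure, its release callback must not free it; video_device_release_empty is the stock no-op used for that case.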
diff --git a/drivers/media/platform/vivid/vivid-core.c b/drivers/media/platform/vivid/vivid-core.c
index a7e033a..d33f164 100644
--- a/drivers/media/platform/vivid/vivid-core.c
+++ b/drivers/media/platform/vivid/vivid-core.c
@@ -26,6 +26,7 @@
#include <linux/vmalloc.h>
#include <linux/font.h>
#include <linux/mutex.h>
+#include <linux/platform_device.h>
#include <linux/videodev2.h>
#include <linux/v4l2-dv-timings.h>
#include <media/videobuf2-vmalloc.h>
@@ -618,7 +619,23 @@
Initialization and module stuff
------------------------------------------------------------------*/
-static int __init vivid_create_instance(int inst)
+static void vivid_dev_release(struct v4l2_device *v4l2_dev)
+{
+ struct vivid_dev *dev = container_of(v4l2_dev, struct vivid_dev, v4l2_dev);
+
+ vivid_free_controls(dev);
+ v4l2_device_unregister(&dev->v4l2_dev);
+ vfree(dev->scaled_line);
+ vfree(dev->blended_line);
+ vfree(dev->edid);
+ vfree(dev->bitmap_cap);
+ vfree(dev->bitmap_out);
+ tpg_free(&dev->tpg);
+ kfree(dev->query_dv_timings_qmenu);
+ kfree(dev);
+}
+
+static int vivid_create_instance(struct platform_device *pdev, int inst)
{
static const struct v4l2_dv_timings def_dv_timings =
V4L2_DV_BT_CEA_1280X720P60;
@@ -646,9 +663,12 @@
/* register v4l2_device */
snprintf(dev->v4l2_dev.name, sizeof(dev->v4l2_dev.name),
"%s-%03d", VIVID_MODULE_NAME, inst);
- ret = v4l2_device_register(NULL, &dev->v4l2_dev);
- if (ret)
- goto free_dev;
+ ret = v4l2_device_register(&pdev->dev, &dev->v4l2_dev);
+ if (ret) {
+ kfree(dev);
+ return ret;
+ }
+ dev->v4l2_dev.release = vivid_dev_release;
/* start detecting feature set */
@@ -1256,15 +1276,8 @@
video_unregister_device(&dev->vbi_cap_dev);
video_unregister_device(&dev->vid_out_dev);
video_unregister_device(&dev->vid_cap_dev);
- vivid_free_controls(dev);
- v4l2_device_unregister(&dev->v4l2_dev);
free_dev:
- vfree(dev->scaled_line);
- vfree(dev->blended_line);
- vfree(dev->edid);
- tpg_free(&dev->tpg);
- kfree(dev->query_dv_timings_qmenu);
- kfree(dev);
+ v4l2_device_put(&dev->v4l2_dev);
return ret;
}
@@ -1274,7 +1287,7 @@
will succeed. This is limited to the maximum number of devices that
videodev supports, which is equal to VIDEO_NUM_DEVICES.
*/
-static int __init vivid_init(void)
+static int vivid_probe(struct platform_device *pdev)
{
const struct font_desc *font = find_font("VGA8x16");
int ret = 0, i;
@@ -1289,7 +1302,7 @@
n_devs = clamp_t(unsigned, n_devs, 1, VIVID_MAX_DEVS);
for (i = 0; i < n_devs; i++) {
- ret = vivid_create_instance(i);
+ ret = vivid_create_instance(pdev, i);
if (ret) {
/* If some instantiations succeeded, keep driver */
if (i)
@@ -1309,7 +1322,7 @@
return ret;
}
-static void __exit vivid_exit(void)
+static int vivid_remove(struct platform_device *pdev)
{
struct vivid_dev *dev;
unsigned i;
@@ -1358,18 +1371,48 @@
unregister_framebuffer(&dev->fb_info);
vivid_fb_release_buffers(dev);
}
- v4l2_device_unregister(&dev->v4l2_dev);
- vivid_free_controls(dev);
- vfree(dev->scaled_line);
- vfree(dev->blended_line);
- vfree(dev->edid);
- vfree(dev->bitmap_cap);
- vfree(dev->bitmap_out);
- tpg_free(&dev->tpg);
- kfree(dev->query_dv_timings_qmenu);
- kfree(dev);
+ v4l2_device_put(&dev->v4l2_dev);
vivid_devs[i] = NULL;
}
+ return 0;
+}
+
+static void vivid_pdev_release(struct device *dev)
+{
+}
+
+static struct platform_device vivid_pdev = {
+ .name = "vivid",
+ .dev.release = vivid_pdev_release,
+};
+
+static struct platform_driver vivid_pdrv = {
+ .probe = vivid_probe,
+ .remove = vivid_remove,
+ .driver = {
+ .name = "vivid",
+ },
+};
+
+static int __init vivid_init(void)
+{
+ int ret;
+
+ ret = platform_device_register(&vivid_pdev);
+ if (ret)
+ return ret;
+
+ ret = platform_driver_register(&vivid_pdrv);
+ if (ret)
+ platform_device_unregister(&vivid_pdev);
+
+ return ret;
+}
+
+static void __exit vivid_exit(void)
+{
+ platform_driver_unregister(&vivid_pdrv);
+ platform_device_unregister(&vivid_pdev);
}
module_init(vivid_init);
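
The vivid conversion above ties the lifetime of the driver state to the v4l2_device refcount: all frees move into a release callback, and both the probe error path and remove() simply drop their reference with v4l2_device_put(). A hedged sketch of that pattern (hypothetical "foo" names, not code from this patch):

	static void foo_dev_release(struct v4l2_device *v4l2_dev)
	{
		struct foo_dev *dev =
			container_of(v4l2_dev, struct foo_dev, v4l2_dev);

		v4l2_device_unregister(&dev->v4l2_dev);
		kfree(dev);				/* last reference is gone */
	}

	/* in probe, after v4l2_device_register() succeeds: */
	dev->v4l2_dev.release = foo_dev_release;

	/* in remove() and in probe error paths: */
	v4l2_device_put(&dev->v4l2_dev);	/* release runs at refcount 0 */

The release callback only runs once the registered video nodes (and thus their still-open file handles) have dropped their references, so the driver state is not freed while it may still be in use.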
diff --git a/drivers/media/platform/vivid/vivid-core.h b/drivers/media/platform/vivid/vivid-core.h
index 4b497df..9e15aee 100644
--- a/drivers/media/platform/vivid/vivid-core.h
+++ b/drivers/media/platform/vivid/vivid-core.h
@@ -79,12 +79,14 @@
struct vivid_fmt {
const char *name;
u32 fourcc; /* v4l2 format id */
- u8 depth;
bool is_yuv;
bool can_do_overlay;
+ u8 vdownsampling[TPG_MAX_PLANES];
u32 alpha_mask;
u8 planes;
- u32 data_offset[2];
+ u8 buffers;
+ u32 data_offset[TPG_MAX_PLANES];
+ u32 bit_depth[TPG_MAX_PLANES];
};
extern struct vivid_fmt vivid_formats[];
@@ -332,7 +334,7 @@
u32 ycbcr_enc_out;
u32 quantization_out;
u32 service_set_out;
- u32 bytesperline_out[2];
+ unsigned bytesperline_out[TPG_MAX_PLANES];
unsigned tv_field_out;
unsigned tv_audio_output;
bool vbi_out_have_wss;
diff --git a/drivers/media/platform/vivid/vivid-ctrls.c b/drivers/media/platform/vivid/vivid-ctrls.c
index 32a798f..2b90700 100644
--- a/drivers/media/platform/vivid/vivid-ctrls.c
+++ b/drivers/media/platform/vivid/vivid-ctrls.c
@@ -818,7 +818,7 @@
dev->dvi_d_out = ctrl->val == V4L2_DV_TX_MODE_DVI_D;
if (!vivid_is_hdmi_out(dev))
break;
- if (!dev->dvi_d_out && (bt->standards & V4L2_DV_BT_STD_CEA861)) {
+ if (!dev->dvi_d_out && (bt->flags & V4L2_DV_FL_IS_CE_VIDEO)) {
if (bt->width == 720 && bt->height <= 576)
dev->colorspace_out = V4L2_COLORSPACE_SMPTE170M;
else
diff --git a/drivers/media/platform/vivid/vivid-kthread-cap.c b/drivers/media/platform/vivid/vivid-kthread-cap.c
index 39a67cf..1727f54 100644
--- a/drivers/media/platform/vivid/vivid-kthread-cap.c
+++ b/drivers/media/platform/vivid/vivid-kthread-cap.c
@@ -229,14 +229,29 @@
dev->loop_vid_overlay_cap.left, dev->loop_vid_overlay_cap.top);
}
+static void *plane_vaddr(struct tpg_data *tpg, struct vivid_buffer *buf,
+ unsigned p, unsigned bpl[TPG_MAX_PLANES], unsigned h)
+{
+ unsigned i;
+ void *vbuf;
+
+ if (p == 0 || tpg_g_buffers(tpg) > 1)
+ return vb2_plane_vaddr(&buf->vb, p);
+ vbuf = vb2_plane_vaddr(&buf->vb, 0);
+ for (i = 0; i < p; i++)
+ vbuf += bpl[i] * h / tpg->vdownsampling[i];
+ return vbuf;
+}
+
static int vivid_copy_buffer(struct vivid_dev *dev, unsigned p, u8 *vcapbuf,
struct vivid_buffer *vid_cap_buf)
{
bool blank = dev->must_blank[vid_cap_buf->vb.v4l2_buf.index];
struct tpg_data *tpg = &dev->tpg;
struct vivid_buffer *vid_out_buf = NULL;
- unsigned pixsize = tpg_g_twopixelsize(tpg, p) / 2;
- unsigned img_width = dev->compose_cap.width;
+ unsigned vdiv = dev->fmt_out->vdownsampling[p];
+ unsigned twopixsize = tpg_g_twopixelsize(tpg, p);
+ unsigned img_width = tpg_hdiv(tpg, p, dev->compose_cap.width);
unsigned img_height = dev->compose_cap.height;
unsigned stride_cap = tpg->bytesperline[p];
unsigned stride_out = dev->bytesperline_out[p];
@@ -255,6 +270,7 @@
unsigned vid_overlay_fract_part = 0;
unsigned vid_overlay_y = 0;
unsigned vid_overlay_error = 0;
+ unsigned vid_cap_left = tpg_hdiv(tpg, p, dev->loop_vid_cap.left);
unsigned vid_cap_right;
bool quick;
@@ -269,25 +285,29 @@
vid_cap_buf->vb.v4l2_buf.field = vid_out_buf->vb.v4l2_buf.field;
- voutbuf = vb2_plane_vaddr(&vid_out_buf->vb, p) +
- vid_out_buf->vb.v4l2_planes[p].data_offset;
- voutbuf += dev->loop_vid_out.left * pixsize + dev->loop_vid_out.top * stride_out;
- vcapbuf += dev->compose_cap.left * pixsize + dev->compose_cap.top * stride_cap;
+ voutbuf = plane_vaddr(tpg, vid_out_buf, p,
+ dev->bytesperline_out, dev->fmt_out_rect.height);
+ if (p < dev->fmt_out->buffers)
+ voutbuf += vid_out_buf->vb.v4l2_planes[p].data_offset;
+ voutbuf += tpg_hdiv(tpg, p, dev->loop_vid_out.left) +
+ (dev->loop_vid_out.top / vdiv) * stride_out;
+ vcapbuf += tpg_hdiv(tpg, p, dev->compose_cap.left) +
+ (dev->compose_cap.top / vdiv) * stride_cap;
if (dev->loop_vid_copy.width == 0 || dev->loop_vid_copy.height == 0) {
/*
* If there is nothing to copy, then just fill the capture window
* with black.
*/
- for (y = 0; y < hmax; y++, vcapbuf += stride_cap)
- memcpy(vcapbuf, tpg->black_line[p], img_width * pixsize);
+ for (y = 0; y < hmax / vdiv; y++, vcapbuf += stride_cap)
+ memcpy(vcapbuf, tpg->black_line[p], img_width);
return 0;
}
if (dev->overlay_out_enabled &&
dev->loop_vid_overlay.width && dev->loop_vid_overlay.height) {
vosdbuf = dev->video_vbase;
- vosdbuf += dev->loop_fb_copy.left * pixsize +
+ vosdbuf += (dev->loop_fb_copy.left * twopixsize) / 2 +
dev->loop_fb_copy.top * stride_osd;
vid_overlay_int_part = dev->loop_vid_overlay.height /
dev->loop_vid_overlay_cap.height;
@@ -295,12 +315,12 @@
dev->loop_vid_overlay_cap.height;
}
- vid_cap_right = dev->loop_vid_cap.left + dev->loop_vid_cap.width;
+ vid_cap_right = tpg_hdiv(tpg, p, dev->loop_vid_cap.left + dev->loop_vid_cap.width);
/* quick is true if no video scaling is needed */
quick = dev->loop_vid_out.width == dev->loop_vid_cap.width;
dev->cur_scaled_line = dev->loop_vid_out.height;
- for (y = 0; y < hmax; y++, vcapbuf += stride_cap) {
+ for (y = 0; y < hmax; y += vdiv, vcapbuf += stride_cap) {
/* osdline is true if this line requires overlay blending */
bool osdline = vosdbuf && y >= dev->loop_vid_overlay_cap.top &&
y < dev->loop_vid_overlay_cap.top + dev->loop_vid_overlay_cap.height;
@@ -311,34 +331,34 @@
*/
if (y < dev->loop_vid_cap.top ||
y >= dev->loop_vid_cap.top + dev->loop_vid_cap.height) {
- memcpy(vcapbuf, tpg->black_line[p], img_width * pixsize);
+ memcpy(vcapbuf, tpg->black_line[p], img_width);
continue;
}
/* fill the left border with black */
if (dev->loop_vid_cap.left)
- memcpy(vcapbuf, tpg->black_line[p], dev->loop_vid_cap.left * pixsize);
+ memcpy(vcapbuf, tpg->black_line[p], vid_cap_left);
/* fill the right border with black */
if (vid_cap_right < img_width)
- memcpy(vcapbuf + vid_cap_right * pixsize,
- tpg->black_line[p], (img_width - vid_cap_right) * pixsize);
+ memcpy(vcapbuf + vid_cap_right, tpg->black_line[p],
+ img_width - vid_cap_right);
if (quick && !osdline) {
- memcpy(vcapbuf + dev->loop_vid_cap.left * pixsize,
+ memcpy(vcapbuf + vid_cap_left,
voutbuf + vid_out_y * stride_out,
- dev->loop_vid_cap.width * pixsize);
+ tpg_hdiv(tpg, p, dev->loop_vid_cap.width));
goto update_vid_out_y;
}
if (dev->cur_scaled_line == vid_out_y) {
- memcpy(vcapbuf + dev->loop_vid_cap.left * pixsize,
- dev->scaled_line,
- dev->loop_vid_cap.width * pixsize);
+ memcpy(vcapbuf + vid_cap_left, dev->scaled_line,
+ tpg_hdiv(tpg, p, dev->loop_vid_cap.width));
goto update_vid_out_y;
}
if (!osdline) {
scale_line(voutbuf + vid_out_y * stride_out, dev->scaled_line,
- dev->loop_vid_out.width, dev->loop_vid_cap.width,
+ tpg_hdiv(tpg, p, dev->loop_vid_out.width),
+ tpg_hdiv(tpg, p, dev->loop_vid_cap.width),
tpg_g_twopixelsize(tpg, p));
} else {
/*
@@ -346,7 +366,8 @@
* loop_vid_overlay rectangle.
*/
unsigned offset =
- (dev->loop_vid_overlay.left - dev->loop_vid_copy.left) * pixsize;
+ ((dev->loop_vid_overlay.left - dev->loop_vid_copy.left) *
+ twopixsize) / 2;
u8 *osd = vosdbuf + vid_overlay_y * stride_osd;
scale_line(voutbuf + vid_out_y * stride_out, dev->blended_line,
@@ -356,18 +377,17 @@
blend_line(dev, vid_overlay_y + dev->loop_vid_overlay.top,
dev->loop_vid_overlay.left,
dev->blended_line + offset, osd,
- dev->loop_vid_overlay.width, pixsize);
+ dev->loop_vid_overlay.width, twopixsize / 2);
else
memcpy(dev->blended_line + offset,
- osd, dev->loop_vid_overlay.width * pixsize);
+ osd, (dev->loop_vid_overlay.width * twopixsize) / 2);
scale_line(dev->blended_line, dev->scaled_line,
dev->loop_vid_copy.width, dev->loop_vid_cap.width,
tpg_g_twopixelsize(tpg, p));
}
dev->cur_scaled_line = vid_out_y;
- memcpy(vcapbuf + dev->loop_vid_cap.left * pixsize,
- dev->scaled_line,
- dev->loop_vid_cap.width * pixsize);
+ memcpy(vcapbuf + vid_cap_left, dev->scaled_line,
+ tpg_hdiv(tpg, p, dev->loop_vid_cap.width));
update_vid_out_y:
if (osdline) {
@@ -380,21 +400,22 @@
}
vid_out_y += vid_out_int_part;
vid_out_error += vid_out_fract_part;
- if (vid_out_error >= dev->loop_vid_cap.height) {
- vid_out_error -= dev->loop_vid_cap.height;
+ if (vid_out_error >= dev->loop_vid_cap.height / vdiv) {
+ vid_out_error -= dev->loop_vid_cap.height / vdiv;
vid_out_y++;
}
}
if (!blank)
return 0;
- for (; y < img_height; y++, vcapbuf += stride_cap)
- memcpy(vcapbuf, tpg->contrast_line[p], img_width * pixsize);
+ for (; y < img_height; y += vdiv, vcapbuf += stride_cap)
+ memcpy(vcapbuf, tpg->contrast_line[p], img_width);
return 0;
}
static void vivid_fillbuff(struct vivid_dev *dev, struct vivid_buffer *buf)
{
+ struct tpg_data *tpg = &dev->tpg;
unsigned factor = V4L2_FIELD_HAS_T_OR_B(dev->field_cap) ? 2 : 1;
unsigned line_height = 16 / factor;
bool is_tv = vivid_is_sdtv_cap(dev);
@@ -427,7 +448,7 @@
* standards.
*/
buf->vb.v4l2_buf.field = ((dev->vid_cap_seq_count & 1) ^ is_60hz) ?
- V4L2_FIELD_TOP : V4L2_FIELD_BOTTOM;
+ V4L2_FIELD_BOTTOM : V4L2_FIELD_TOP;
/*
* The sequence counter counts frames, not fields. So divide
* by two.
@@ -436,27 +457,29 @@
} else {
buf->vb.v4l2_buf.field = dev->field_cap;
}
- tpg_s_field(&dev->tpg, buf->vb.v4l2_buf.field);
- tpg_s_perc_fill_blank(&dev->tpg, dev->must_blank[buf->vb.v4l2_buf.index]);
+ tpg_s_field(tpg, buf->vb.v4l2_buf.field,
+ dev->field_cap == V4L2_FIELD_ALTERNATE);
+ tpg_s_perc_fill_blank(tpg, dev->must_blank[buf->vb.v4l2_buf.index]);
vivid_precalc_copy_rects(dev);
- for (p = 0; p < tpg_g_planes(&dev->tpg); p++) {
- void *vbuf = vb2_plane_vaddr(&buf->vb, p);
+ for (p = 0; p < tpg_g_planes(tpg); p++) {
+ void *vbuf = plane_vaddr(tpg, buf, p,
+ tpg->bytesperline, tpg->buf_height);
/*
* The first plane of a multiplanar format has a non-zero
* data_offset. This helps testing whether the application
* correctly supports non-zero data offsets.
*/
- if (dev->fmt_cap->data_offset[p]) {
+ if (p < tpg_g_buffers(tpg) && dev->fmt_cap->data_offset[p]) {
memset(vbuf, dev->fmt_cap->data_offset[p] & 0xff,
dev->fmt_cap->data_offset[p]);
vbuf += dev->fmt_cap->data_offset[p];
}
- tpg_calc_text_basep(&dev->tpg, basep, p, vbuf);
+ tpg_calc_text_basep(tpg, basep, p, vbuf);
if (!is_loop || vivid_copy_buffer(dev, p, vbuf, buf))
- tpg_fillbuffer(&dev->tpg, vivid_get_std_cap(dev), p, vbuf);
+ tpg_fill_plane_buffer(tpg, vivid_get_std_cap(dev), p, vbuf);
}
dev->must_blank[buf->vb.v4l2_buf.index] = false;
@@ -475,12 +498,12 @@
(dev->field_cap == V4L2_FIELD_ALTERNATE) ?
(buf->vb.v4l2_buf.field == V4L2_FIELD_TOP ?
" top" : " bottom") : "");
- tpg_gen_text(&dev->tpg, basep, line++ * line_height, 16, str);
+ tpg_gen_text(tpg, basep, line++ * line_height, 16, str);
}
if (dev->osd_mode == 0) {
snprintf(str, sizeof(str), " %dx%d, input %d ",
dev->src_rect.width, dev->src_rect.height, dev->input);
- tpg_gen_text(&dev->tpg, basep, line++ * line_height, 16, str);
+ tpg_gen_text(tpg, basep, line++ * line_height, 16, str);
gain = v4l2_ctrl_g_ctrl(dev->gain);
mutex_lock(dev->ctrl_hdl_user_vid.lock);
@@ -490,38 +513,38 @@
dev->contrast->cur.val,
dev->saturation->cur.val,
dev->hue->cur.val);
- tpg_gen_text(&dev->tpg, basep, line++ * line_height, 16, str);
+ tpg_gen_text(tpg, basep, line++ * line_height, 16, str);
snprintf(str, sizeof(str),
" autogain %d, gain %3d, alpha 0x%02x ",
dev->autogain->cur.val, gain, dev->alpha->cur.val);
mutex_unlock(dev->ctrl_hdl_user_vid.lock);
- tpg_gen_text(&dev->tpg, basep, line++ * line_height, 16, str);
+ tpg_gen_text(tpg, basep, line++ * line_height, 16, str);
mutex_lock(dev->ctrl_hdl_user_aud.lock);
snprintf(str, sizeof(str),
" volume %3d, mute %d ",
dev->volume->cur.val, dev->mute->cur.val);
mutex_unlock(dev->ctrl_hdl_user_aud.lock);
- tpg_gen_text(&dev->tpg, basep, line++ * line_height, 16, str);
+ tpg_gen_text(tpg, basep, line++ * line_height, 16, str);
mutex_lock(dev->ctrl_hdl_user_gen.lock);
snprintf(str, sizeof(str), " int32 %d, int64 %lld, bitmask %08x ",
dev->int32->cur.val,
*dev->int64->p_cur.p_s64,
dev->bitmask->cur.val);
- tpg_gen_text(&dev->tpg, basep, line++ * line_height, 16, str);
+ tpg_gen_text(tpg, basep, line++ * line_height, 16, str);
snprintf(str, sizeof(str), " boolean %d, menu %s, string \"%s\" ",
dev->boolean->cur.val,
dev->menu->qmenu[dev->menu->cur.val],
dev->string->p_cur.p_char);
- tpg_gen_text(&dev->tpg, basep, line++ * line_height, 16, str);
+ tpg_gen_text(tpg, basep, line++ * line_height, 16, str);
snprintf(str, sizeof(str), " integer_menu %lld, value %d ",
dev->int_menu->qmenu_int[dev->int_menu->cur.val],
dev->int_menu->cur.val);
mutex_unlock(dev->ctrl_hdl_user_gen.lock);
- tpg_gen_text(&dev->tpg, basep, line++ * line_height, 16, str);
+ tpg_gen_text(tpg, basep, line++ * line_height, 16, str);
if (dev->button_pressed) {
dev->button_pressed--;
snprintf(str, sizeof(str), " button pressed!");
- tpg_gen_text(&dev->tpg, basep, line++ * line_height, 16, str);
+ tpg_gen_text(tpg, basep, line++ * line_height, 16, str);
}
}
@@ -585,6 +608,12 @@
bool quick = dev->bitmap_cap == NULL && dev->clipcount_cap == 0;
int x, y, w, out_x = 0;
+ /*
+ * Overlay support is only supported for formats that have a twopixelsize
+ * that's >= 2. Warn and bail out if that's not the case.
+ */
+ if (WARN_ON(pixsize == 0))
+ return;
if ((dev->overlay_cap_field == V4L2_FIELD_TOP ||
dev->overlay_cap_field == V4L2_FIELD_BOTTOM) &&
dev->overlay_cap_field != buf->vb.v4l2_buf.field)
diff --git a/drivers/media/platform/vivid/vivid-sdr-cap.c b/drivers/media/platform/vivid/vivid-sdr-cap.c
index 4af55f1..caf1316 100644
--- a/drivers/media/platform/vivid/vivid-sdr-cap.c
+++ b/drivers/media/platform/vivid/vivid-sdr-cap.c
@@ -27,6 +27,7 @@
#include <media/v4l2-common.h>
#include <media/v4l2-event.h>
#include <media/v4l2-dv-timings.h>
+#include <linux/fixp-arith.h>
#include "vivid-core.h"
#include "vivid-ctrls.h"
@@ -423,40 +424,19 @@
return 0;
}
-#define FIXP_FRAC (1 << 15)
-#define FIXP_PI ((int)(FIXP_FRAC * 3.141592653589))
-
-/* cos() from cx88 driver: cx88-dsp.c */
-static s32 fixp_cos(unsigned int x)
-{
- u32 t2, t4, t6, t8;
- u16 period = x / FIXP_PI;
-
- if (period % 2)
- return -fixp_cos(x - FIXP_PI);
- x = x % FIXP_PI;
- if (x > FIXP_PI/2)
- return -fixp_cos(FIXP_PI/2 - (x % (FIXP_PI/2)));
- /* Now x is between 0 and FIXP_PI/2.
- * To calculate cos(x) we use it's Taylor polinom. */
- t2 = x*x/FIXP_FRAC/2;
- t4 = t2*x/FIXP_FRAC*x/FIXP_FRAC/3/4;
- t6 = t4*x/FIXP_FRAC*x/FIXP_FRAC/5/6;
- t8 = t6*x/FIXP_FRAC*x/FIXP_FRAC/7/8;
- return FIXP_FRAC-t2+t4-t6+t8;
-}
-
-static inline s32 fixp_sin(unsigned int x)
-{
- return -fixp_cos(x + (FIXP_PI / 2));
-}
+#define FIXP_N (15)
+#define FIXP_FRAC (1 << FIXP_N)
+#define FIXP_2PI ((int)(2 * 3.141592653589 * FIXP_FRAC))
void vivid_sdr_cap_process(struct vivid_dev *dev, struct vivid_buffer *buf)
{
u8 *vbuf = vb2_plane_vaddr(&buf->vb, 0);
unsigned long i;
unsigned long plane_size = vb2_plane_size(&buf->vb, 0);
- int fixp_src_phase_step, fixp_i, fixp_q;
+ s32 src_phase_step;
+ s32 mod_phase_step;
+ s32 fixp_i;
+ s32 fixp_q;
/*
* TODO: Generated beep tone goes very crackly when sample rate is
@@ -466,28 +446,36 @@
/* calculate phase step */
#define BEEP_FREQ 1000 /* 1kHz beep */
- fixp_src_phase_step = DIV_ROUND_CLOSEST(2 * FIXP_PI * BEEP_FREQ,
+ src_phase_step = DIV_ROUND_CLOSEST(FIXP_2PI * BEEP_FREQ,
dev->sdr_adc_freq);
for (i = 0; i < plane_size; i += 2) {
- dev->sdr_fixp_mod_phase += fixp_cos(dev->sdr_fixp_src_phase);
- dev->sdr_fixp_src_phase += fixp_src_phase_step;
+ mod_phase_step = fixp_cos32_rad(dev->sdr_fixp_src_phase,
+ FIXP_2PI) >> (31 - FIXP_N);
+
+ dev->sdr_fixp_src_phase += src_phase_step;
+ dev->sdr_fixp_mod_phase += mod_phase_step / 4;
/*
* Transfer phases to [0 / 2xPI] in order to avoid variable
* overflow and make it suitable for cosine implementation
* used, which does not support negative angles.
*/
- while (dev->sdr_fixp_mod_phase < (0 * FIXP_PI))
- dev->sdr_fixp_mod_phase += (2 * FIXP_PI);
- while (dev->sdr_fixp_mod_phase > (2 * FIXP_PI))
- dev->sdr_fixp_mod_phase -= (2 * FIXP_PI);
+ while (dev->sdr_fixp_mod_phase < FIXP_2PI)
+ dev->sdr_fixp_mod_phase += FIXP_2PI;
+ while (dev->sdr_fixp_mod_phase > FIXP_2PI)
+ dev->sdr_fixp_mod_phase -= FIXP_2PI;
- while (dev->sdr_fixp_src_phase > (2 * FIXP_PI))
- dev->sdr_fixp_src_phase -= (2 * FIXP_PI);
+ while (dev->sdr_fixp_src_phase > FIXP_2PI)
+ dev->sdr_fixp_src_phase -= FIXP_2PI;
- fixp_i = fixp_cos(dev->sdr_fixp_mod_phase);
- fixp_q = fixp_sin(dev->sdr_fixp_mod_phase);
+ fixp_i = fixp_cos32_rad(dev->sdr_fixp_mod_phase, FIXP_2PI);
+ fixp_q = fixp_sin32_rad(dev->sdr_fixp_mod_phase, FIXP_2PI);
+
+ /* Normalize fraction values represented with 32 bit precision
+ * to fixed point representation with FIXP_N bits */
+ fixp_i >>= (31 - FIXP_N);
+ fixp_q >>= (31 - FIXP_N);
/* convert 'fixp float' to u8 */
/* u8 = X * 127.5f + 127.5f; where X is float [-1.0 / +1.0] */
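
The hunk above drops the driver-local Taylor-series cosine in favour of the shared helpers in <linux/fixp-arith.h>. Those return values scaled so that 1.0 corresponds to 0x7fffffff (31 fractional bits), which is why the results are shifted right by (31 - FIXP_N) before being reused at the driver's 15-bit precision. A rough sketch of that scaling, assuming the FIXP_* macros defined in the hunk (the final mapping to u8 is illustrative, not the exact expression the driver uses):

	#include <linux/fixp-arith.h>

	static u8 beep_sample(u32 phase)	/* phase in [0, FIXP_2PI) */
	{
		s32 c = fixp_cos32_rad(phase, FIXP_2PI); /* 1.0 == 0x7fffffff */

		c >>= 31 - FIXP_N;			/* 1.0 == FIXP_FRAC  */
		return 128 + (c * 127) / FIXP_FRAC;	/* [-1.0,1.0] -> [1,255] */
	}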
diff --git a/drivers/media/platform/vivid/vivid-tpg.c b/drivers/media/platform/vivid/vivid-tpg.c
index 34493f4..cb766eb 100644
--- a/drivers/media/platform/vivid/vivid-tpg.c
+++ b/drivers/media/platform/vivid/vivid-tpg.c
@@ -35,7 +35,10 @@
"100% Green",
"100% Blue",
"16x16 Checkers",
+ "2x2 Checkers",
"1x1 Checkers",
+ "2x2 Red/Green Checkers",
+ "1x1 Red/Green Checkers",
"Alternating Hor Lines",
"Alternating Vert Lines",
"One Pixel Wide Cross",
@@ -120,15 +123,20 @@
tpg->max_line_width = max_w;
for (pat = 0; pat < TPG_MAX_PAT_LINES; pat++) {
for (plane = 0; plane < TPG_MAX_PLANES; plane++) {
- unsigned pixelsz = plane ? 1 : 4;
+ unsigned pixelsz = plane ? 2 : 4;
tpg->lines[pat][plane] = vzalloc(max_w * 2 * pixelsz);
if (!tpg->lines[pat][plane])
return -ENOMEM;
+ if (plane == 0)
+ continue;
+ tpg->downsampled_lines[pat][plane] = vzalloc(max_w * 2 * pixelsz);
+ if (!tpg->downsampled_lines[pat][plane])
+ return -ENOMEM;
}
}
for (plane = 0; plane < TPG_MAX_PLANES; plane++) {
- unsigned pixelsz = plane ? 1 : 4;
+ unsigned pixelsz = plane ? 2 : 4;
tpg->contrast_line[plane] = vzalloc(max_w * pixelsz);
if (!tpg->contrast_line[plane])
@@ -152,6 +160,10 @@
for (plane = 0; plane < TPG_MAX_PLANES; plane++) {
vfree(tpg->lines[pat][plane]);
tpg->lines[pat][plane] = NULL;
+ if (plane == 0)
+ continue;
+ vfree(tpg->downsampled_lines[pat][plane]);
+ tpg->downsampled_lines[pat][plane] = NULL;
}
for (plane = 0; plane < TPG_MAX_PLANES; plane++) {
vfree(tpg->contrast_line[plane]);
@@ -167,14 +179,38 @@
{
tpg->fourcc = fourcc;
tpg->planes = 1;
+ tpg->buffers = 1;
tpg->recalc_colors = true;
+ tpg->interleaved = false;
+ tpg->vdownsampling[0] = 1;
+ tpg->hdownsampling[0] = 1;
+ tpg->hmask[0] = ~0;
+ tpg->hmask[1] = ~0;
+ tpg->hmask[2] = ~0;
+
switch (fourcc) {
+ case V4L2_PIX_FMT_SBGGR8:
+ case V4L2_PIX_FMT_SGBRG8:
+ case V4L2_PIX_FMT_SGRBG8:
+ case V4L2_PIX_FMT_SRGGB8:
+ tpg->interleaved = true;
+ tpg->vdownsampling[1] = 1;
+ tpg->hdownsampling[1] = 1;
+ tpg->planes = 2;
+ /* fall through */
+ case V4L2_PIX_FMT_RGB332:
case V4L2_PIX_FMT_RGB565:
case V4L2_PIX_FMT_RGB565X:
+ case V4L2_PIX_FMT_RGB444:
+ case V4L2_PIX_FMT_XRGB444:
+ case V4L2_PIX_FMT_ARGB444:
case V4L2_PIX_FMT_RGB555:
case V4L2_PIX_FMT_XRGB555:
case V4L2_PIX_FMT_ARGB555:
case V4L2_PIX_FMT_RGB555X:
+ case V4L2_PIX_FMT_XRGB555X:
+ case V4L2_PIX_FMT_ARGB555X:
+ case V4L2_PIX_FMT_BGR666:
case V4L2_PIX_FMT_RGB24:
case V4L2_PIX_FMT_BGR24:
case V4L2_PIX_FMT_RGB32:
@@ -183,16 +219,72 @@
case V4L2_PIX_FMT_XBGR32:
case V4L2_PIX_FMT_ARGB32:
case V4L2_PIX_FMT_ABGR32:
+ case V4L2_PIX_FMT_GREY:
tpg->is_yuv = false;
break;
+ case V4L2_PIX_FMT_YUV444:
+ case V4L2_PIX_FMT_YUV555:
+ case V4L2_PIX_FMT_YUV565:
+ case V4L2_PIX_FMT_YUV32:
+ tpg->is_yuv = true;
+ break;
+ case V4L2_PIX_FMT_YUV420M:
+ case V4L2_PIX_FMT_YVU420M:
+ tpg->buffers = 3;
+ /* fall through */
+ case V4L2_PIX_FMT_YUV420:
+ case V4L2_PIX_FMT_YVU420:
+ tpg->vdownsampling[1] = 2;
+ tpg->vdownsampling[2] = 2;
+ tpg->hdownsampling[1] = 2;
+ tpg->hdownsampling[2] = 2;
+ tpg->planes = 3;
+ tpg->is_yuv = true;
+ break;
+ case V4L2_PIX_FMT_YUV422P:
+ tpg->vdownsampling[1] = 1;
+ tpg->vdownsampling[2] = 1;
+ tpg->hdownsampling[1] = 2;
+ tpg->hdownsampling[2] = 2;
+ tpg->planes = 3;
+ tpg->is_yuv = true;
+ break;
case V4L2_PIX_FMT_NV16M:
case V4L2_PIX_FMT_NV61M:
+ tpg->buffers = 2;
+ /* fall through */
+ case V4L2_PIX_FMT_NV16:
+ case V4L2_PIX_FMT_NV61:
+ tpg->vdownsampling[1] = 1;
+ tpg->hdownsampling[1] = 1;
+ tpg->hmask[1] = ~1;
tpg->planes = 2;
- /* fall-through */
+ tpg->is_yuv = true;
+ break;
+ case V4L2_PIX_FMT_NV12M:
+ case V4L2_PIX_FMT_NV21M:
+ tpg->buffers = 2;
+ /* fall through */
+ case V4L2_PIX_FMT_NV12:
+ case V4L2_PIX_FMT_NV21:
+ tpg->vdownsampling[1] = 2;
+ tpg->hdownsampling[1] = 1;
+ tpg->hmask[1] = ~1;
+ tpg->planes = 2;
+ tpg->is_yuv = true;
+ break;
+ case V4L2_PIX_FMT_NV24:
+ case V4L2_PIX_FMT_NV42:
+ tpg->vdownsampling[1] = 1;
+ tpg->hdownsampling[1] = 1;
+ tpg->planes = 2;
+ tpg->is_yuv = true;
+ break;
case V4L2_PIX_FMT_YUYV:
case V4L2_PIX_FMT_UYVY:
case V4L2_PIX_FMT_YVYU:
case V4L2_PIX_FMT_VYUY:
+ tpg->hmask[0] = ~1;
tpg->is_yuv = true;
break;
default:
@@ -200,35 +292,75 @@
}
switch (fourcc) {
+ case V4L2_PIX_FMT_RGB332:
+ tpg->twopixelsize[0] = 2;
+ break;
case V4L2_PIX_FMT_RGB565:
case V4L2_PIX_FMT_RGB565X:
+ case V4L2_PIX_FMT_RGB444:
+ case V4L2_PIX_FMT_XRGB444:
+ case V4L2_PIX_FMT_ARGB444:
case V4L2_PIX_FMT_RGB555:
case V4L2_PIX_FMT_XRGB555:
case V4L2_PIX_FMT_ARGB555:
case V4L2_PIX_FMT_RGB555X:
+ case V4L2_PIX_FMT_XRGB555X:
+ case V4L2_PIX_FMT_ARGB555X:
case V4L2_PIX_FMT_YUYV:
case V4L2_PIX_FMT_UYVY:
case V4L2_PIX_FMT_YVYU:
case V4L2_PIX_FMT_VYUY:
+ case V4L2_PIX_FMT_YUV444:
+ case V4L2_PIX_FMT_YUV555:
+ case V4L2_PIX_FMT_YUV565:
tpg->twopixelsize[0] = 2 * 2;
break;
case V4L2_PIX_FMT_RGB24:
case V4L2_PIX_FMT_BGR24:
tpg->twopixelsize[0] = 2 * 3;
break;
+ case V4L2_PIX_FMT_BGR666:
case V4L2_PIX_FMT_RGB32:
case V4L2_PIX_FMT_BGR32:
case V4L2_PIX_FMT_XRGB32:
case V4L2_PIX_FMT_XBGR32:
case V4L2_PIX_FMT_ARGB32:
case V4L2_PIX_FMT_ABGR32:
+ case V4L2_PIX_FMT_YUV32:
tpg->twopixelsize[0] = 2 * 4;
break;
+ case V4L2_PIX_FMT_GREY:
+ tpg->twopixelsize[0] = 2;
+ break;
+ case V4L2_PIX_FMT_NV12:
+ case V4L2_PIX_FMT_NV21:
+ case V4L2_PIX_FMT_NV12M:
+ case V4L2_PIX_FMT_NV21M:
+ case V4L2_PIX_FMT_NV16:
+ case V4L2_PIX_FMT_NV61:
case V4L2_PIX_FMT_NV16M:
case V4L2_PIX_FMT_NV61M:
+ case V4L2_PIX_FMT_SBGGR8:
+ case V4L2_PIX_FMT_SGBRG8:
+ case V4L2_PIX_FMT_SGRBG8:
+ case V4L2_PIX_FMT_SRGGB8:
tpg->twopixelsize[0] = 2;
tpg->twopixelsize[1] = 2;
break;
+ case V4L2_PIX_FMT_YUV422P:
+ case V4L2_PIX_FMT_YUV420:
+ case V4L2_PIX_FMT_YVU420:
+ case V4L2_PIX_FMT_YUV420M:
+ case V4L2_PIX_FMT_YVU420M:
+ tpg->twopixelsize[0] = 2;
+ tpg->twopixelsize[1] = 2;
+ tpg->twopixelsize[2] = 2;
+ break;
+ case V4L2_PIX_FMT_NV24:
+ case V4L2_PIX_FMT_NV42:
+ tpg->twopixelsize[0] = 2;
+ tpg->twopixelsize[1] = 4;
+ break;
}
return true;
}
@@ -267,7 +399,8 @@
tpg->compose.width = width;
tpg->compose.height = tpg->buf_height;
for (p = 0; p < tpg->planes; p++)
- tpg->bytesperline[p] = width * tpg->twopixelsize[p] / 2;
+ tpg->bytesperline[p] = (width * tpg->twopixelsize[p]) /
+ (2 * tpg->hdownsampling[p]);
tpg->recalc_square_border = true;
}
@@ -347,9 +480,9 @@
{ COEFF(0.5, 224), COEFF(-0.445, 224), COEFF(-0.055, 224) },
};
static const int bt2020[3][3] = {
- { COEFF(0.2726, 219), COEFF(0.6780, 219), COEFF(0.0593, 219) },
+ { COEFF(0.2627, 219), COEFF(0.6780, 219), COEFF(0.0593, 219) },
{ COEFF(-0.1396, 224), COEFF(-0.3604, 224), COEFF(0.5, 224) },
- { COEFF(0.5, 224), COEFF(-0.4629, 224), COEFF(-0.0405, 224) },
+ { COEFF(0.5, 224), COEFF(-0.4598, 224), COEFF(-0.0402, 224) },
};
bool full = tpg->real_quantization == V4L2_QUANTIZATION_FULL_RANGE;
unsigned y_offset = full ? 0 : 16;
@@ -524,10 +657,10 @@
g <<= 4;
b <<= 4;
}
- if (tpg->qual == TPG_QUAL_GRAY) {
+ if (tpg->qual == TPG_QUAL_GRAY || tpg->fourcc == V4L2_PIX_FMT_GREY) {
/* Rec. 709 Luma function */
/* (0.2126, 0.7152, 0.0722) * (255 * 256) */
- r = g = b = ((13879 * r + 46688 * g + 4713 * b) >> 16) + (16 << 4);
+ r = g = b = (13879 * r + 46688 * g + 4713 * b) >> 16;
}
/*
@@ -601,9 +734,29 @@
cb = clamp(cb, 16 << 4, 240 << 4);
cr = clamp(cr, 16 << 4, 240 << 4);
}
- tpg->colors[k][0] = clamp(y >> 4, 1, 254);
- tpg->colors[k][1] = clamp(cb >> 4, 1, 254);
- tpg->colors[k][2] = clamp(cr >> 4, 1, 254);
+ y = clamp(y >> 4, 1, 254);
+ cb = clamp(cb >> 4, 1, 254);
+ cr = clamp(cr >> 4, 1, 254);
+ switch (tpg->fourcc) {
+ case V4L2_PIX_FMT_YUV444:
+ y >>= 4;
+ cb >>= 4;
+ cr >>= 4;
+ break;
+ case V4L2_PIX_FMT_YUV555:
+ y >>= 3;
+ cb >>= 3;
+ cr >>= 3;
+ break;
+ case V4L2_PIX_FMT_YUV565:
+ y >>= 3;
+ cb >>= 2;
+ cr >>= 3;
+ break;
+ }
+ tpg->colors[k][0] = y;
+ tpg->colors[k][1] = cb;
+ tpg->colors[k][2] = cr;
} else {
if (tpg->real_quantization == V4L2_QUANTIZATION_LIM_RANGE) {
r = (r * 219) / 255 + (16 << 4);
@@ -611,20 +764,39 @@
b = (b * 219) / 255 + (16 << 4);
}
switch (tpg->fourcc) {
+ case V4L2_PIX_FMT_RGB332:
+ r >>= 9;
+ g >>= 9;
+ b >>= 10;
+ break;
case V4L2_PIX_FMT_RGB565:
case V4L2_PIX_FMT_RGB565X:
r >>= 7;
g >>= 6;
b >>= 7;
break;
+ case V4L2_PIX_FMT_RGB444:
+ case V4L2_PIX_FMT_XRGB444:
+ case V4L2_PIX_FMT_ARGB444:
+ r >>= 8;
+ g >>= 8;
+ b >>= 8;
+ break;
case V4L2_PIX_FMT_RGB555:
case V4L2_PIX_FMT_XRGB555:
case V4L2_PIX_FMT_ARGB555:
case V4L2_PIX_FMT_RGB555X:
+ case V4L2_PIX_FMT_XRGB555X:
+ case V4L2_PIX_FMT_ARGB555X:
r >>= 7;
g >>= 7;
b >>= 7;
break;
+ case V4L2_PIX_FMT_BGR666:
+ r >>= 6;
+ g >>= 6;
+ b >>= 6;
+ break;
default:
r >>= 4;
g >>= 4;
@@ -665,31 +837,120 @@
b_v = tpg->colors[color][2]; /* B or precalculated V */
switch (tpg->fourcc) {
+ case V4L2_PIX_FMT_GREY:
+ buf[0][offset] = r_y;
+ break;
+ case V4L2_PIX_FMT_YUV422P:
+ case V4L2_PIX_FMT_YUV420:
+ case V4L2_PIX_FMT_YUV420M:
+ buf[0][offset] = r_y;
+ if (odd) {
+ buf[1][0] = (buf[1][0] + g_u) / 2;
+ buf[2][0] = (buf[2][0] + b_v) / 2;
+ buf[1][1] = buf[1][0];
+ buf[2][1] = buf[2][0];
+ break;
+ }
+ buf[1][0] = g_u;
+ buf[2][0] = b_v;
+ break;
+ case V4L2_PIX_FMT_YVU420:
+ case V4L2_PIX_FMT_YVU420M:
+ buf[0][offset] = r_y;
+ if (odd) {
+ buf[1][0] = (buf[1][0] + b_v) / 2;
+ buf[2][0] = (buf[2][0] + g_u) / 2;
+ buf[1][1] = buf[1][0];
+ buf[2][1] = buf[2][0];
+ break;
+ }
+ buf[1][0] = b_v;
+ buf[2][0] = g_u;
+ break;
+
+ case V4L2_PIX_FMT_NV12:
+ case V4L2_PIX_FMT_NV12M:
+ case V4L2_PIX_FMT_NV16:
case V4L2_PIX_FMT_NV16M:
buf[0][offset] = r_y;
- buf[1][offset] = odd ? b_v : g_u;
+ if (odd) {
+ buf[1][0] = (buf[1][0] + g_u) / 2;
+ buf[1][1] = (buf[1][1] + b_v) / 2;
+ break;
+ }
+ buf[1][0] = g_u;
+ buf[1][1] = b_v;
break;
+ case V4L2_PIX_FMT_NV21:
+ case V4L2_PIX_FMT_NV21M:
+ case V4L2_PIX_FMT_NV61:
case V4L2_PIX_FMT_NV61M:
buf[0][offset] = r_y;
- buf[1][offset] = odd ? g_u : b_v;
+ if (odd) {
+ buf[1][0] = (buf[1][0] + b_v) / 2;
+ buf[1][1] = (buf[1][1] + g_u) / 2;
+ break;
+ }
+ buf[1][0] = b_v;
+ buf[1][1] = g_u;
+ break;
+
+ case V4L2_PIX_FMT_NV24:
+ buf[0][offset] = r_y;
+ buf[1][2 * offset] = g_u;
+ buf[1][2 * offset + 1] = b_v;
+ break;
+
+ case V4L2_PIX_FMT_NV42:
+ buf[0][offset] = r_y;
+ buf[1][2 * offset] = b_v;
+ buf[1][2 * offset + 1] = g_u;
break;
case V4L2_PIX_FMT_YUYV:
buf[0][offset] = r_y;
- buf[0][offset + 1] = odd ? b_v : g_u;
+ if (odd) {
+ buf[0][1] = (buf[0][1] + g_u) / 2;
+ buf[0][3] = (buf[0][3] + b_v) / 2;
+ break;
+ }
+ buf[0][1] = g_u;
+ buf[0][3] = b_v;
break;
case V4L2_PIX_FMT_UYVY:
- buf[0][offset] = odd ? b_v : g_u;
buf[0][offset + 1] = r_y;
+ if (odd) {
+ buf[0][0] = (buf[0][0] + g_u) / 2;
+ buf[0][2] = (buf[0][2] + b_v) / 2;
+ break;
+ }
+ buf[0][0] = g_u;
+ buf[0][2] = b_v;
break;
case V4L2_PIX_FMT_YVYU:
buf[0][offset] = r_y;
- buf[0][offset + 1] = odd ? g_u : b_v;
+ if (odd) {
+ buf[0][1] = (buf[0][1] + b_v) / 2;
+ buf[0][3] = (buf[0][3] + g_u) / 2;
+ break;
+ }
+ buf[0][1] = b_v;
+ buf[0][3] = g_u;
break;
case V4L2_PIX_FMT_VYUY:
- buf[0][offset] = odd ? g_u : b_v;
buf[0][offset + 1] = r_y;
+ if (odd) {
+ buf[0][0] = (buf[0][0] + b_v) / 2;
+ buf[0][2] = (buf[0][2] + g_u) / 2;
+ break;
+ }
+ buf[0][0] = b_v;
+ buf[0][2] = g_u;
break;
+ case V4L2_PIX_FMT_RGB332:
+ buf[0][offset] = (r_y << 5) | (g_u << 2) | b_v;
+ break;
+ case V4L2_PIX_FMT_YUV565:
case V4L2_PIX_FMT_RGB565:
buf[0][offset] = (g_u << 5) | b_v;
buf[0][offset + 1] = (r_y << 3) | (g_u >> 3);
@@ -698,15 +959,29 @@
buf[0][offset] = (r_y << 3) | (g_u >> 3);
buf[0][offset + 1] = (g_u << 5) | b_v;
break;
+ case V4L2_PIX_FMT_RGB444:
+ case V4L2_PIX_FMT_XRGB444:
+ alpha = 0;
+ /* fall through */
+ case V4L2_PIX_FMT_YUV444:
+ case V4L2_PIX_FMT_ARGB444:
+ buf[0][offset] = (g_u << 4) | b_v;
+ buf[0][offset + 1] = (alpha & 0xf0) | r_y;
+ break;
case V4L2_PIX_FMT_RGB555:
case V4L2_PIX_FMT_XRGB555:
alpha = 0;
/* fall through */
+ case V4L2_PIX_FMT_YUV555:
case V4L2_PIX_FMT_ARGB555:
buf[0][offset] = (g_u << 5) | b_v;
buf[0][offset + 1] = (alpha & 0x80) | (r_y << 2) | (g_u >> 3);
break;
case V4L2_PIX_FMT_RGB555X:
+ case V4L2_PIX_FMT_XRGB555X:
+ alpha = 0;
+ /* fall through */
+ case V4L2_PIX_FMT_ARGB555X:
buf[0][offset] = (alpha & 0x80) | (r_y << 2) | (g_u >> 3);
buf[0][offset + 1] = (g_u << 5) | b_v;
break;
@@ -720,10 +995,17 @@
buf[0][offset + 1] = g_u;
buf[0][offset + 2] = r_y;
break;
+ case V4L2_PIX_FMT_BGR666:
+ buf[0][offset] = (b_v << 2) | (g_u >> 4);
+ buf[0][offset + 1] = (g_u << 4) | (r_y >> 2);
+ buf[0][offset + 2] = r_y << 6;
+ buf[0][offset + 3] = 0;
+ break;
case V4L2_PIX_FMT_RGB32:
case V4L2_PIX_FMT_XRGB32:
alpha = 0;
/* fall through */
+ case V4L2_PIX_FMT_YUV32:
case V4L2_PIX_FMT_ARGB32:
buf[0][offset] = alpha;
buf[0][offset + 1] = r_y;
@@ -740,15 +1022,47 @@
buf[0][offset + 2] = r_y;
buf[0][offset + 3] = alpha;
break;
+ case V4L2_PIX_FMT_SBGGR8:
+ buf[0][offset] = odd ? g_u : b_v;
+ buf[1][offset] = odd ? r_y : g_u;
+ break;
+ case V4L2_PIX_FMT_SGBRG8:
+ buf[0][offset] = odd ? b_v : g_u;
+ buf[1][offset] = odd ? g_u : r_y;
+ break;
+ case V4L2_PIX_FMT_SGRBG8:
+ buf[0][offset] = odd ? r_y : g_u;
+ buf[1][offset] = odd ? g_u : b_v;
+ break;
+ case V4L2_PIX_FMT_SRGGB8:
+ buf[0][offset] = odd ? g_u : r_y;
+ buf[1][offset] = odd ? b_v : g_u;
+ break;
+ }
+}
+
+unsigned tpg_g_interleaved_plane(const struct tpg_data *tpg, unsigned buf_line)
+{
+ switch (tpg->fourcc) {
+ case V4L2_PIX_FMT_SBGGR8:
+ case V4L2_PIX_FMT_SGBRG8:
+ case V4L2_PIX_FMT_SGRBG8:
+ case V4L2_PIX_FMT_SRGGB8:
+ return buf_line & 1;
+ default:
+ return 0;
}
}
/* Return how many pattern lines are used by the current pattern. */
-static unsigned tpg_get_pat_lines(struct tpg_data *tpg)
+static unsigned tpg_get_pat_lines(const struct tpg_data *tpg)
{
switch (tpg->pattern) {
case TPG_PAT_CHECKERS_16X16:
+ case TPG_PAT_CHECKERS_2X2:
case TPG_PAT_CHECKERS_1X1:
+ case TPG_PAT_COLOR_CHECKERS_2X2:
+ case TPG_PAT_COLOR_CHECKERS_1X1:
case TPG_PAT_ALTERNATING_HLINES:
case TPG_PAT_CROSS_1_PIXEL:
case TPG_PAT_CROSS_2_PIXELS:
@@ -763,14 +1077,18 @@
}
/* Which pattern line should be used for the given frame line. */
-static unsigned tpg_get_pat_line(struct tpg_data *tpg, unsigned line)
+static unsigned tpg_get_pat_line(const struct tpg_data *tpg, unsigned line)
{
switch (tpg->pattern) {
case TPG_PAT_CHECKERS_16X16:
return (line >> 4) & 1;
case TPG_PAT_CHECKERS_1X1:
+ case TPG_PAT_COLOR_CHECKERS_1X1:
case TPG_PAT_ALTERNATING_HLINES:
return line & 1;
+ case TPG_PAT_CHECKERS_2X2:
+ case TPG_PAT_COLOR_CHECKERS_2X2:
+ return (line & 2) >> 1;
case TPG_PAT_100_COLORSQUARES:
case TPG_PAT_100_HCOLORBAR:
return (line * 8) / tpg->src_height;
@@ -789,7 +1107,8 @@
* Which color should be used for the given pattern line and X coordinate.
* Note: x is in the range 0 to 2 * tpg->src_width.
*/
-static enum tpg_color tpg_get_color(struct tpg_data *tpg, unsigned pat_line, unsigned x)
+static enum tpg_color tpg_get_color(const struct tpg_data *tpg,
+ unsigned pat_line, unsigned x)
{
/* Maximum number of bars are TPG_COLOR_MAX - otherwise, the input print code
should be modified */
@@ -836,6 +1155,15 @@
case TPG_PAT_CHECKERS_1X1:
return ((x & 1) ^ (pat_line & 1)) ?
TPG_COLOR_100_WHITE : TPG_COLOR_100_BLACK;
+ case TPG_PAT_COLOR_CHECKERS_1X1:
+ return ((x & 1) ^ (pat_line & 1)) ?
+ TPG_COLOR_100_RED : TPG_COLOR_100_BLUE;
+ case TPG_PAT_CHECKERS_2X2:
+ return (((x >> 1) & 1) ^ (pat_line & 1)) ?
+ TPG_COLOR_100_WHITE : TPG_COLOR_100_BLACK;
+ case TPG_PAT_COLOR_CHECKERS_2X2:
+ return (((x >> 1) & 1) ^ (pat_line & 1)) ?
+ TPG_COLOR_100_RED : TPG_COLOR_100_BLUE;
case TPG_PAT_ALTERNATING_HLINES:
return pat_line ? TPG_COLOR_100_WHITE : TPG_COLOR_100_BLACK;
case TPG_PAT_ALTERNATING_VLINES:
@@ -948,6 +1276,7 @@
static void tpg_precalculate_line(struct tpg_data *tpg)
{
enum tpg_color contrast;
+ u8 pix[TPG_MAX_PLANES][8];
unsigned pat;
unsigned p;
unsigned x;
@@ -974,7 +1303,6 @@
for (x = 0; x < tpg->scaled_width * 2; x += 2) {
unsigned real_x = src_x;
enum tpg_color color1, color2;
- u8 pix[TPG_MAX_PLANES][8];
real_x = tpg->hflip ? tpg->src_width * 2 - real_x - 2 : real_x;
color1 = tpg_get_color(tpg, pat, real_x);
@@ -1001,39 +1329,53 @@
gen_twopix(tpg, pix, tpg->hflip ? color1 : color2, 1);
for (p = 0; p < tpg->planes; p++) {
unsigned twopixsize = tpg->twopixelsize[p];
- u8 *pos = tpg->lines[pat][p] + x * twopixsize / 2;
+ unsigned hdiv = tpg->hdownsampling[p];
+ u8 *pos = tpg->lines[pat][p] + tpg_hdiv(tpg, p, x);
- memcpy(pos, pix[p], twopixsize);
+ memcpy(pos, pix[p], twopixsize / hdiv);
}
}
}
- for (x = 0; x < tpg->scaled_width; x += 2) {
- u8 pix[TPG_MAX_PLANES][8];
- gen_twopix(tpg, pix, contrast, 0);
- gen_twopix(tpg, pix, contrast, 1);
- for (p = 0; p < tpg->planes; p++) {
- unsigned twopixsize = tpg->twopixelsize[p];
- u8 *pos = tpg->contrast_line[p] + x * twopixsize / 2;
+ if (tpg->vdownsampling[tpg->planes - 1] > 1) {
+ unsigned pat_lines = tpg_get_pat_lines(tpg);
- memcpy(pos, pix[p], twopixsize);
+ for (pat = 0; pat < pat_lines; pat++) {
+ unsigned next_pat = (pat + 1) % pat_lines;
+
+ for (p = 1; p < tpg->planes; p++) {
+ unsigned w = tpg_hdiv(tpg, p, tpg->scaled_width * 2);
+ u8 *pos1 = tpg->lines[pat][p];
+ u8 *pos2 = tpg->lines[next_pat][p];
+ u8 *dest = tpg->downsampled_lines[pat][p];
+
+ for (x = 0; x < w; x++, pos1++, pos2++, dest++)
+ *dest = ((u16)*pos1 + (u16)*pos2) / 2;
+ }
}
}
- for (x = 0; x < tpg->scaled_width; x += 2) {
- u8 pix[TPG_MAX_PLANES][8];
- gen_twopix(tpg, pix, TPG_COLOR_100_BLACK, 0);
- gen_twopix(tpg, pix, TPG_COLOR_100_BLACK, 1);
- for (p = 0; p < tpg->planes; p++) {
- unsigned twopixsize = tpg->twopixelsize[p];
- u8 *pos = tpg->black_line[p] + x * twopixsize / 2;
+ gen_twopix(tpg, pix, contrast, 0);
+ gen_twopix(tpg, pix, contrast, 1);
+ for (p = 0; p < tpg->planes; p++) {
+ unsigned twopixsize = tpg->twopixelsize[p];
+ u8 *pos = tpg->contrast_line[p];
+ for (x = 0; x < tpg->scaled_width; x += 2, pos += twopixsize)
memcpy(pos, pix[p], twopixsize);
- }
}
+
+ gen_twopix(tpg, pix, TPG_COLOR_100_BLACK, 0);
+ gen_twopix(tpg, pix, TPG_COLOR_100_BLACK, 1);
+ for (p = 0; p < tpg->planes; p++) {
+ unsigned twopixsize = tpg->twopixelsize[p];
+ u8 *pos = tpg->black_line[p];
+
+ for (x = 0; x < tpg->scaled_width; x += 2, pos += twopixsize)
+ memcpy(pos, pix[p], twopixsize);
+ }
+
for (x = 0; x < tpg->scaled_width * 2; x += 2) {
- u8 pix[TPG_MAX_PLANES][8];
-
gen_twopix(tpg, pix, TPG_COLOR_RANDOM, 0);
gen_twopix(tpg, pix, TPG_COLOR_RANDOM, 1);
for (p = 0; p < tpg->planes; p++) {
@@ -1043,6 +1385,7 @@
memcpy(pos, pix[p], twopixsize);
}
}
+
gen_twopix(tpg, tpg->textbg, TPG_COLOR_TEXTBG, 0);
gen_twopix(tpg, tpg->textbg, TPG_COLOR_TEXTBG, 1);
gen_twopix(tpg, tpg->textfg, TPG_COLOR_TEXTFG, 0);
@@ -1052,8 +1395,8 @@
/* need this to do rgb24 rendering */
typedef struct { u16 __; u8 _; } __packed x24;
-void tpg_gen_text(struct tpg_data *tpg, u8 *basep[TPG_MAX_PLANES][2],
- int y, int x, char *text)
+void tpg_gen_text(const struct tpg_data *tpg, u8 *basep[TPG_MAX_PLANES][2],
+ int y, int x, char *text)
{
int line;
unsigned step = V4L2_FIELD_HAS_T_OR_B(tpg->field) ? 2 : 1;
@@ -1083,24 +1426,37 @@
div = 2;
for (p = 0; p < tpg->planes; p++) {
- /* Print stream time */
+ unsigned vdiv = tpg->vdownsampling[p];
+ unsigned hdiv = tpg->hdownsampling[p];
+
+ /* Print text */
#define PRINTSTR(PIXTYPE) do { \
PIXTYPE fg; \
PIXTYPE bg; \
memcpy(&fg, tpg->textfg[p], sizeof(PIXTYPE)); \
memcpy(&bg, tpg->textbg[p], sizeof(PIXTYPE)); \
\
- for (line = first; line < 16; line += step) { \
+ for (line = first; line < 16; line += vdiv * step) { \
int l = tpg->vflip ? 15 - line : line; \
- PIXTYPE *pos = (PIXTYPE *)(basep[p][line & 1] + \
- ((y * step + l) / div) * tpg->bytesperline[p] + \
- x * sizeof(PIXTYPE)); \
+ PIXTYPE *pos = (PIXTYPE *)(basep[p][(line / vdiv) & 1] + \
+ ((y * step + l) / (vdiv * div)) * tpg->bytesperline[p] + \
+ (x / hdiv) * sizeof(PIXTYPE)); \
unsigned s; \
\
for (s = 0; s < len; s++) { \
u8 chr = font8x16[text[s] * 16 + line]; \
\
- if (tpg->hflip) { \
+ if (hdiv == 2 && tpg->hflip) { \
+ pos[3] = (chr & (0x01 << 6) ? fg : bg); \
+ pos[2] = (chr & (0x01 << 4) ? fg : bg); \
+ pos[1] = (chr & (0x01 << 2) ? fg : bg); \
+ pos[0] = (chr & (0x01 << 0) ? fg : bg); \
+ } else if (hdiv == 2) { \
+ pos[0] = (chr & (0x01 << 7) ? fg : bg); \
+ pos[1] = (chr & (0x01 << 5) ? fg : bg); \
+ pos[2] = (chr & (0x01 << 3) ? fg : bg); \
+ pos[3] = (chr & (0x01 << 1) ? fg : bg); \
+ } else if (tpg->hflip) { \
pos[7] = (chr & (0x01 << 7) ? fg : bg); \
pos[6] = (chr & (0x01 << 6) ? fg : bg); \
pos[5] = (chr & (0x01 << 5) ? fg : bg); \
@@ -1120,7 +1476,7 @@
pos[7] = (chr & (0x01 << 0) ? fg : bg); \
} \
\
- pos += tpg->hflip ? -8 : 8; \
+ pos += (tpg->hflip ? -8 : 8) / hdiv; \
} \
} \
} while (0)
@@ -1187,7 +1543,7 @@
}
/* Map the line number relative to the crop rectangle to a frame line number */
-static unsigned tpg_calc_frameline(struct tpg_data *tpg, unsigned src_y,
+static unsigned tpg_calc_frameline(const struct tpg_data *tpg, unsigned src_y,
unsigned field)
{
switch (field) {
@@ -1204,7 +1560,7 @@
* Map the line number relative to the compose rectangle to a destination
* buffer line number.
*/
-static unsigned tpg_calc_buffer_line(struct tpg_data *tpg, unsigned y,
+static unsigned tpg_calc_buffer_line(const struct tpg_data *tpg, unsigned y,
unsigned field)
{
y += tpg->compose.top;
@@ -1265,6 +1621,10 @@
V4L2_QUANTIZATION_LIM_RANGE;
break;
}
+ } else if (tpg->colorspace == V4L2_COLORSPACE_BT2020) {
+ /* R'G'B' BT.2020 is limited range */
+ tpg->real_quantization =
+ V4L2_QUANTIZATION_LIM_RANGE;
}
}
tpg_precalculate_colors(tpg);
@@ -1283,88 +1643,387 @@
u8 *basep[TPG_MAX_PLANES][2], unsigned p, u8 *vbuf)
{
unsigned stride = tpg->bytesperline[p];
+ unsigned h = tpg->buf_height;
tpg_recalc(tpg);
basep[p][0] = vbuf;
basep[p][1] = vbuf;
+ h /= tpg->vdownsampling[p];
if (tpg->field == V4L2_FIELD_SEQ_TB)
- basep[p][1] += tpg->buf_height * stride / 2;
+ basep[p][1] += h * stride / 2;
else if (tpg->field == V4L2_FIELD_SEQ_BT)
- basep[p][0] += tpg->buf_height * stride / 2;
+ basep[p][0] += h * stride / 2;
+ if (p == 0 && tpg->interleaved)
+ tpg_calc_text_basep(tpg, basep, 1, vbuf);
}
-void tpg_fillbuffer(struct tpg_data *tpg, v4l2_std_id std, unsigned p, u8 *vbuf)
+static int tpg_pattern_avg(const struct tpg_data *tpg,
+ unsigned pat1, unsigned pat2)
{
- bool is_tv = std;
- bool is_60hz = is_tv && (std & V4L2_STD_525_60);
- unsigned mv_hor_old = tpg->mv_hor_count % tpg->src_width;
- unsigned mv_hor_new = (tpg->mv_hor_count + tpg->mv_hor_step) % tpg->src_width;
- unsigned mv_vert_old = tpg->mv_vert_count % tpg->src_height;
- unsigned mv_vert_new = (tpg->mv_vert_count + tpg->mv_vert_step) % tpg->src_height;
+ unsigned pat_lines = tpg_get_pat_lines(tpg);
+
+ if (pat1 == (pat2 + 1) % pat_lines)
+ return pat2;
+ if (pat2 == (pat1 + 1) % pat_lines)
+ return pat1;
+ return -1;
+}
+
+/*
+ * This struct contains common parameters used by both the drawing of the
+ * test pattern and the drawing of the extras (borders, square, etc.)
+ */
+struct tpg_draw_params {
+ /* common data */
+ bool is_tv;
+ bool is_60hz;
+ unsigned twopixsize;
+ unsigned img_width;
+ unsigned stride;
+ unsigned hmax;
+ unsigned frame_line;
+ unsigned frame_line_next;
+
+ /* test pattern */
+ unsigned mv_hor_old;
+ unsigned mv_hor_new;
+ unsigned mv_vert_old;
+ unsigned mv_vert_new;
+
+ /* extras */
unsigned wss_width;
- unsigned f;
- int hmax = (tpg->compose.height * tpg->perc_fill) / 100;
- int h;
- unsigned twopixsize = tpg->twopixelsize[p];
- unsigned img_width = tpg->compose.width * twopixsize / 2;
- unsigned line_offset;
+ unsigned wss_random_offset;
+ unsigned sav_eav_f;
+ unsigned left_pillar_width;
+ unsigned right_pillar_start;
+};
+
+static void tpg_fill_params_pattern(const struct tpg_data *tpg, unsigned p,
+ struct tpg_draw_params *params)
+{
+ params->mv_hor_old =
+ tpg_hscale_div(tpg, p, tpg->mv_hor_count % tpg->src_width);
+ params->mv_hor_new =
+ tpg_hscale_div(tpg, p, (tpg->mv_hor_count + tpg->mv_hor_step) %
+ tpg->src_width);
+ params->mv_vert_old = tpg->mv_vert_count % tpg->src_height;
+ params->mv_vert_new =
+ (tpg->mv_vert_count + tpg->mv_vert_step) % tpg->src_height;
+}
+
+static void tpg_fill_params_extras(const struct tpg_data *tpg,
+ unsigned p,
+ struct tpg_draw_params *params)
+{
unsigned left_pillar_width = 0;
- unsigned right_pillar_start = img_width;
- unsigned stride = tpg->bytesperline[p];
+ unsigned right_pillar_start = params->img_width;
+
+ params->wss_width = tpg->crop.left < tpg->src_width / 2 ?
+ tpg->src_width / 2 - tpg->crop.left : 0;
+ if (params->wss_width > tpg->crop.width)
+ params->wss_width = tpg->crop.width;
+ params->wss_width = tpg_hscale_div(tpg, p, params->wss_width);
+ params->wss_random_offset =
+ params->twopixsize * prandom_u32_max(tpg->src_width / 2);
+
+ if (tpg->crop.left < tpg->border.left) {
+ left_pillar_width = tpg->border.left - tpg->crop.left;
+ if (left_pillar_width > tpg->crop.width)
+ left_pillar_width = tpg->crop.width;
+ left_pillar_width = tpg_hscale_div(tpg, p, left_pillar_width);
+ }
+ params->left_pillar_width = left_pillar_width;
+
+ if (tpg->crop.left + tpg->crop.width >
+ tpg->border.left + tpg->border.width) {
+ right_pillar_start =
+ tpg->border.left + tpg->border.width - tpg->crop.left;
+ right_pillar_start =
+ tpg_hscale_div(tpg, p, right_pillar_start);
+ if (right_pillar_start > params->img_width)
+ right_pillar_start = params->img_width;
+ }
+ params->right_pillar_start = right_pillar_start;
+
+ params->sav_eav_f = tpg->field ==
+ (params->is_60hz ? V4L2_FIELD_TOP : V4L2_FIELD_BOTTOM);
+}
+
+static void tpg_fill_plane_extras(const struct tpg_data *tpg,
+ const struct tpg_draw_params *params,
+ unsigned p, unsigned h, u8 *vbuf)
+{
+ unsigned twopixsize = params->twopixsize;
+ unsigned img_width = params->img_width;
+ unsigned frame_line = params->frame_line;
+ const struct v4l2_rect *sq = &tpg->square;
+ const struct v4l2_rect *b = &tpg->border;
+ const struct v4l2_rect *c = &tpg->crop;
+
+ if (params->is_tv && !params->is_60hz &&
+ frame_line == 0 && params->wss_width) {
+ /*
+ * Replace the first half of the top line of a 50 Hz frame
+ * with random data to simulate a WSS signal.
+ */
+ u8 *wss = tpg->random_line[p] + params->wss_random_offset;
+
+ memcpy(vbuf, wss, params->wss_width);
+ }
+
+ if (tpg->show_border && frame_line >= b->top &&
+ frame_line < b->top + b->height) {
+ unsigned bottom = b->top + b->height - 1;
+ unsigned left = params->left_pillar_width;
+ unsigned right = params->right_pillar_start;
+
+ if (frame_line == b->top || frame_line == b->top + 1 ||
+ frame_line == bottom || frame_line == bottom - 1) {
+ memcpy(vbuf + left, tpg->contrast_line[p],
+ right - left);
+ } else {
+ if (b->left >= c->left &&
+ b->left < c->left + c->width)
+ memcpy(vbuf + left,
+ tpg->contrast_line[p], twopixsize);
+ if (b->left + b->width > c->left &&
+ b->left + b->width <= c->left + c->width)
+ memcpy(vbuf + right - twopixsize,
+ tpg->contrast_line[p], twopixsize);
+ }
+ }
+ if (tpg->qual != TPG_QUAL_NOISE && frame_line >= b->top &&
+ frame_line < b->top + b->height) {
+ memcpy(vbuf, tpg->black_line[p], params->left_pillar_width);
+ memcpy(vbuf + params->right_pillar_start, tpg->black_line[p],
+ img_width - params->right_pillar_start);
+ }
+ if (tpg->show_square && frame_line >= sq->top &&
+ frame_line < sq->top + sq->height &&
+ sq->left < c->left + c->width &&
+ sq->left + sq->width >= c->left) {
+ unsigned left = sq->left;
+ unsigned width = sq->width;
+
+ if (c->left > left) {
+ width -= c->left - left;
+ left = c->left;
+ }
+ if (c->left + c->width < left + width)
+ width -= left + width - c->left - c->width;
+ left -= c->left;
+ left = tpg_hscale_div(tpg, p, left);
+ width = tpg_hscale_div(tpg, p, width);
+ memcpy(vbuf + left, tpg->contrast_line[p], width);
+ }
+ if (tpg->insert_sav) {
+ unsigned offset = tpg_hdiv(tpg, p, tpg->compose.width / 3);
+ u8 *p = vbuf + offset;
+ unsigned vact = 0, hact = 0;
+
+ p[0] = 0xff;
+ p[1] = 0;
+ p[2] = 0;
+ p[3] = 0x80 | (params->sav_eav_f << 6) |
+ (vact << 5) | (hact << 4) |
+ ((hact ^ vact) << 3) |
+ ((hact ^ params->sav_eav_f) << 2) |
+ ((params->sav_eav_f ^ vact) << 1) |
+ (hact ^ vact ^ params->sav_eav_f);
+ }
+ if (tpg->insert_eav) {
+ unsigned offset = tpg_hdiv(tpg, p, tpg->compose.width * 2 / 3);
+ u8 *p = vbuf + offset;
+ unsigned vact = 0, hact = 1;
+
+ p[0] = 0xff;
+ p[1] = 0;
+ p[2] = 0;
+ p[3] = 0x80 | (params->sav_eav_f << 6) |
+ (vact << 5) | (hact << 4) |
+ ((hact ^ vact) << 3) |
+ ((hact ^ params->sav_eav_f) << 2) |
+ ((params->sav_eav_f ^ vact) << 1) |
+ (hact ^ vact ^ params->sav_eav_f);
+ }
+}
+
+static void tpg_fill_plane_pattern(const struct tpg_data *tpg,
+ const struct tpg_draw_params *params,
+ unsigned p, unsigned h, u8 *vbuf)
+{
+ unsigned twopixsize = params->twopixsize;
+ unsigned img_width = params->img_width;
+ unsigned mv_hor_old = params->mv_hor_old;
+ unsigned mv_hor_new = params->mv_hor_new;
+ unsigned mv_vert_old = params->mv_vert_old;
+ unsigned mv_vert_new = params->mv_vert_new;
+ unsigned frame_line = params->frame_line;
+ unsigned frame_line_next = params->frame_line_next;
+ unsigned line_offset = tpg_hscale_div(tpg, p, tpg->crop.left);
+ bool even;
+ bool fill_blank = false;
+ unsigned pat_line_old;
+ unsigned pat_line_new;
+ u8 *linestart_older;
+ u8 *linestart_newer;
+ u8 *linestart_top;
+ u8 *linestart_bottom;
+
+ even = !(frame_line & 1);
+
+ if (h >= params->hmax) {
+ if (params->hmax == tpg->compose.height)
+ return;
+ if (!tpg->perc_fill_blank)
+ return;
+ fill_blank = true;
+ }
+
+ if (tpg->vflip) {
+ frame_line = tpg->src_height - frame_line - 1;
+ frame_line_next = tpg->src_height - frame_line_next - 1;
+ }
+
+ if (fill_blank) {
+ linestart_older = tpg->contrast_line[p];
+ linestart_newer = tpg->contrast_line[p];
+ } else if (tpg->qual != TPG_QUAL_NOISE &&
+ (frame_line < tpg->border.top ||
+ frame_line >= tpg->border.top + tpg->border.height)) {
+ linestart_older = tpg->black_line[p];
+ linestart_newer = tpg->black_line[p];
+ } else if (tpg->pattern == TPG_PAT_NOISE || tpg->qual == TPG_QUAL_NOISE) {
+ linestart_older = tpg->random_line[p] +
+ twopixsize * prandom_u32_max(tpg->src_width / 2);
+ linestart_newer = tpg->random_line[p] +
+ twopixsize * prandom_u32_max(tpg->src_width / 2);
+ } else {
+ unsigned frame_line_old =
+ (frame_line + mv_vert_old) % tpg->src_height;
+ unsigned frame_line_new =
+ (frame_line + mv_vert_new) % tpg->src_height;
+ unsigned pat_line_next_old;
+ unsigned pat_line_next_new;
+
+ pat_line_old = tpg_get_pat_line(tpg, frame_line_old);
+ pat_line_new = tpg_get_pat_line(tpg, frame_line_new);
+ linestart_older = tpg->lines[pat_line_old][p] + mv_hor_old;
+ linestart_newer = tpg->lines[pat_line_new][p] + mv_hor_new;
+
+ if (tpg->vdownsampling[p] > 1 && frame_line != frame_line_next) {
+ int avg_pat;
+
+ /*
+ * Now decide whether we need to use downsampled_lines[].
+ * That's necessary if the two lines use different patterns.
+ */
+ pat_line_next_old = tpg_get_pat_line(tpg,
+ (frame_line_next + mv_vert_old) % tpg->src_height);
+ pat_line_next_new = tpg_get_pat_line(tpg,
+ (frame_line_next + mv_vert_new) % tpg->src_height);
+
+ switch (tpg->field) {
+ case V4L2_FIELD_INTERLACED:
+ case V4L2_FIELD_INTERLACED_BT:
+ case V4L2_FIELD_INTERLACED_TB:
+ avg_pat = tpg_pattern_avg(tpg, pat_line_old, pat_line_new);
+ if (avg_pat < 0)
+ break;
+ linestart_older = tpg->downsampled_lines[avg_pat][p] + mv_hor_old;
+ linestart_newer = linestart_older;
+ break;
+ case V4L2_FIELD_NONE:
+ case V4L2_FIELD_TOP:
+ case V4L2_FIELD_BOTTOM:
+ case V4L2_FIELD_SEQ_BT:
+ case V4L2_FIELD_SEQ_TB:
+ avg_pat = tpg_pattern_avg(tpg, pat_line_old, pat_line_next_old);
+ if (avg_pat >= 0)
+ linestart_older = tpg->downsampled_lines[avg_pat][p] +
+ mv_hor_old;
+ avg_pat = tpg_pattern_avg(tpg, pat_line_new, pat_line_next_new);
+ if (avg_pat >= 0)
+ linestart_newer = tpg->downsampled_lines[avg_pat][p] +
+ mv_hor_new;
+ break;
+ }
+ }
+ linestart_older += line_offset;
+ linestart_newer += line_offset;
+ }
+ if (tpg->field_alternate) {
+ linestart_top = linestart_bottom = linestart_older;
+ } else if (params->is_60hz) {
+ linestart_top = linestart_newer;
+ linestart_bottom = linestart_older;
+ } else {
+ linestart_top = linestart_older;
+ linestart_bottom = linestart_newer;
+ }
+
+ switch (tpg->field) {
+ case V4L2_FIELD_INTERLACED:
+ case V4L2_FIELD_INTERLACED_TB:
+ case V4L2_FIELD_SEQ_TB:
+ case V4L2_FIELD_SEQ_BT:
+ if (even)
+ memcpy(vbuf, linestart_top, img_width);
+ else
+ memcpy(vbuf, linestart_bottom, img_width);
+ break;
+ case V4L2_FIELD_INTERLACED_BT:
+ if (even)
+ memcpy(vbuf, linestart_bottom, img_width);
+ else
+ memcpy(vbuf, linestart_top, img_width);
+ break;
+ case V4L2_FIELD_TOP:
+ memcpy(vbuf, linestart_top, img_width);
+ break;
+ case V4L2_FIELD_BOTTOM:
+ memcpy(vbuf, linestart_bottom, img_width);
+ break;
+ case V4L2_FIELD_NONE:
+ default:
+ memcpy(vbuf, linestart_older, img_width);
+ break;
+ }
+}
+
+void tpg_fill_plane_buffer(struct tpg_data *tpg, v4l2_std_id std,
+ unsigned p, u8 *vbuf)
+{
+ struct tpg_draw_params params;
unsigned factor = V4L2_FIELD_HAS_T_OR_B(tpg->field) ? 2 : 1;
- u8 *orig_vbuf = vbuf;
/* Coarse scaling with Bresenham */
unsigned int_part = (tpg->crop.height / factor) / tpg->compose.height;
unsigned fract_part = (tpg->crop.height / factor) % tpg->compose.height;
unsigned src_y = 0;
unsigned error = 0;
+ unsigned h;
tpg_recalc(tpg);
- mv_hor_old = (mv_hor_old * tpg->scaled_width / tpg->src_width) & ~1;
- mv_hor_new = (mv_hor_new * tpg->scaled_width / tpg->src_width) & ~1;
- wss_width = tpg->crop.left < tpg->src_width / 2 ?
- tpg->src_width / 2 - tpg->crop.left : 0;
- if (wss_width > tpg->crop.width)
- wss_width = tpg->crop.width;
- wss_width = wss_width * tpg->scaled_width / tpg->src_width;
+ params.is_tv = std;
+ params.is_60hz = std & V4L2_STD_525_60;
+ params.twopixsize = tpg->twopixelsize[p];
+ params.img_width = tpg_hdiv(tpg, p, tpg->compose.width);
+ params.stride = tpg->bytesperline[p];
+ params.hmax = (tpg->compose.height * tpg->perc_fill) / 100;
- vbuf += tpg->compose.left * twopixsize / 2;
- line_offset = tpg->crop.left * tpg->scaled_width / tpg->src_width;
- line_offset = (line_offset & ~1) * twopixsize / 2;
- if (tpg->crop.left < tpg->border.left) {
- left_pillar_width = tpg->border.left - tpg->crop.left;
- if (left_pillar_width > tpg->crop.width)
- left_pillar_width = tpg->crop.width;
- left_pillar_width = (left_pillar_width * tpg->scaled_width) / tpg->src_width;
- left_pillar_width = (left_pillar_width & ~1) * twopixsize / 2;
- }
- if (tpg->crop.left + tpg->crop.width > tpg->border.left + tpg->border.width) {
- right_pillar_start = tpg->border.left + tpg->border.width - tpg->crop.left;
- right_pillar_start = (right_pillar_start * tpg->scaled_width) / tpg->src_width;
- right_pillar_start = (right_pillar_start & ~1) * twopixsize / 2;
- if (right_pillar_start > img_width)
- right_pillar_start = img_width;
- }
+ tpg_fill_params_pattern(tpg, p, &params);
+ tpg_fill_params_extras(tpg, p, &params);
- f = tpg->field == (is_60hz ? V4L2_FIELD_TOP : V4L2_FIELD_BOTTOM);
+ vbuf += tpg_hdiv(tpg, p, tpg->compose.left);
for (h = 0; h < tpg->compose.height; h++) {
- bool even;
- bool fill_blank = false;
- unsigned frame_line;
unsigned buf_line;
- unsigned pat_line_old;
- unsigned pat_line_new;
- u8 *linestart_older;
- u8 *linestart_newer;
- u8 *linestart_top;
- u8 *linestart_bottom;
- frame_line = tpg_calc_frameline(tpg, src_y, tpg->field);
- even = !(frame_line & 1);
+ params.frame_line = tpg_calc_frameline(tpg, src_y, tpg->field);
+ params.frame_line_next = params.frame_line;
buf_line = tpg_calc_buffer_line(tpg, h, tpg->field);
src_y += int_part;
error += fract_part;
@@ -1373,182 +2032,61 @@
src_y++;
}
- if (h >= hmax) {
- if (hmax == tpg->compose.height)
- continue;
- if (!tpg->perc_fill_blank)
- continue;
- fill_blank = true;
- }
+ /*
+ * For line-interleaved formats determine the 'plane'
+ * based on the buffer line.
+ */
+ if (tpg_g_interleaved(tpg))
+ p = tpg_g_interleaved_plane(tpg, buf_line);
- if (tpg->vflip)
- frame_line = tpg->src_height - frame_line - 1;
-
- if (fill_blank) {
- linestart_older = tpg->contrast_line[p];
- linestart_newer = tpg->contrast_line[p];
- } else if (tpg->qual != TPG_QUAL_NOISE &&
- (frame_line < tpg->border.top ||
- frame_line >= tpg->border.top + tpg->border.height)) {
- linestart_older = tpg->black_line[p];
- linestart_newer = tpg->black_line[p];
- } else if (tpg->pattern == TPG_PAT_NOISE || tpg->qual == TPG_QUAL_NOISE) {
- linestart_older = tpg->random_line[p] +
- twopixsize * prandom_u32_max(tpg->src_width / 2);
- linestart_newer = tpg->random_line[p] +
- twopixsize * prandom_u32_max(tpg->src_width / 2);
- } else {
- pat_line_old = tpg_get_pat_line(tpg,
- (frame_line + mv_vert_old) % tpg->src_height);
- pat_line_new = tpg_get_pat_line(tpg,
- (frame_line + mv_vert_new) % tpg->src_height);
- linestart_older = tpg->lines[pat_line_old][p] +
- mv_hor_old * twopixsize / 2;
- linestart_newer = tpg->lines[pat_line_new][p] +
- mv_hor_new * twopixsize / 2;
- linestart_older += line_offset;
- linestart_newer += line_offset;
- }
- if (is_60hz) {
- linestart_top = linestart_newer;
- linestart_bottom = linestart_older;
- } else {
- linestart_top = linestart_older;
- linestart_bottom = linestart_newer;
- }
-
- switch (tpg->field) {
- case V4L2_FIELD_INTERLACED:
- case V4L2_FIELD_INTERLACED_TB:
- case V4L2_FIELD_SEQ_TB:
- case V4L2_FIELD_SEQ_BT:
- if (even)
- memcpy(vbuf + buf_line * stride, linestart_top, img_width);
- else
- memcpy(vbuf + buf_line * stride, linestart_bottom, img_width);
- break;
- case V4L2_FIELD_INTERLACED_BT:
- if (even)
- memcpy(vbuf + buf_line * stride, linestart_bottom, img_width);
- else
- memcpy(vbuf + buf_line * stride, linestart_top, img_width);
- break;
- case V4L2_FIELD_TOP:
- memcpy(vbuf + buf_line * stride, linestart_top, img_width);
- break;
- case V4L2_FIELD_BOTTOM:
- memcpy(vbuf + buf_line * stride, linestart_bottom, img_width);
- break;
- case V4L2_FIELD_NONE:
- default:
- memcpy(vbuf + buf_line * stride, linestart_older, img_width);
- break;
- }
-
- if (is_tv && !is_60hz && frame_line == 0 && wss_width) {
+ if (tpg->vdownsampling[p] > 1) {
/*
- * Replace the first half of the top line of a 50 Hz frame
- * with random data to simulate a WSS signal.
+ * When doing vertical downsampling the field setting
+ * matters: for SEQ_BT/TB we downsample each field
+ * separately (i.e. lines 0+2 are combined, as are
+ * lines 1+3), for the other field settings we combine
+ * odd and even lines. Doing that for SEQ_BT/TB would
+ * be really weird.
*/
- u8 *wss = tpg->random_line[p] +
- twopixsize * prandom_u32_max(tpg->src_width / 2);
+ if (tpg->field == V4L2_FIELD_SEQ_BT ||
+ tpg->field == V4L2_FIELD_SEQ_TB) {
+ unsigned next_src_y = src_y;
- memcpy(vbuf + buf_line * stride, wss, wss_width * twopixsize / 2);
+ if ((h & 3) >= 2)
+ continue;
+ next_src_y += int_part;
+ if (error + fract_part >= tpg->compose.height)
+ next_src_y++;
+ params.frame_line_next =
+ tpg_calc_frameline(tpg, next_src_y, tpg->field);
+ } else {
+ if (h & 1)
+ continue;
+ params.frame_line_next =
+ tpg_calc_frameline(tpg, src_y, tpg->field);
+ }
+
+ buf_line /= tpg->vdownsampling[p];
}
+ tpg_fill_plane_pattern(tpg, &params, p, h,
+ vbuf + buf_line * params.stride);
+ tpg_fill_plane_extras(tpg, &params, p, h,
+ vbuf + buf_line * params.stride);
+ }
+}
+
+void tpg_fillbuffer(struct tpg_data *tpg, v4l2_std_id std, unsigned p, u8 *vbuf)
+{
+ unsigned offset = 0;
+ unsigned i;
+
+ if (tpg->buffers > 1) {
+ tpg_fill_plane_buffer(tpg, std, p, vbuf);
+ return;
}
- vbuf = orig_vbuf;
- vbuf += tpg->compose.left * twopixsize / 2;
- src_y = 0;
- error = 0;
- for (h = 0; h < tpg->compose.height; h++) {
- unsigned frame_line = tpg_calc_frameline(tpg, src_y, tpg->field);
- unsigned buf_line = tpg_calc_buffer_line(tpg, h, tpg->field);
- const struct v4l2_rect *sq = &tpg->square;
- const struct v4l2_rect *b = &tpg->border;
- const struct v4l2_rect *c = &tpg->crop;
-
- src_y += int_part;
- error += fract_part;
- if (error >= tpg->compose.height) {
- error -= tpg->compose.height;
- src_y++;
- }
-
- if (tpg->show_border && frame_line >= b->top &&
- frame_line < b->top + b->height) {
- unsigned bottom = b->top + b->height - 1;
- unsigned left = left_pillar_width;
- unsigned right = right_pillar_start;
-
- if (frame_line == b->top || frame_line == b->top + 1 ||
- frame_line == bottom || frame_line == bottom - 1) {
- memcpy(vbuf + buf_line * stride + left, tpg->contrast_line[p],
- right - left);
- } else {
- if (b->left >= c->left &&
- b->left < c->left + c->width)
- memcpy(vbuf + buf_line * stride + left,
- tpg->contrast_line[p], twopixsize);
- if (b->left + b->width > c->left &&
- b->left + b->width <= c->left + c->width)
- memcpy(vbuf + buf_line * stride + right - twopixsize,
- tpg->contrast_line[p], twopixsize);
- }
- }
- if (tpg->qual != TPG_QUAL_NOISE && frame_line >= b->top &&
- frame_line < b->top + b->height) {
- memcpy(vbuf + buf_line * stride, tpg->black_line[p], left_pillar_width);
- memcpy(vbuf + buf_line * stride + right_pillar_start, tpg->black_line[p],
- img_width - right_pillar_start);
- }
- if (tpg->show_square && frame_line >= sq->top &&
- frame_line < sq->top + sq->height &&
- sq->left < c->left + c->width &&
- sq->left + sq->width >= c->left) {
- unsigned left = sq->left;
- unsigned width = sq->width;
-
- if (c->left > left) {
- width -= c->left - left;
- left = c->left;
- }
- if (c->left + c->width < left + width)
- width -= left + width - c->left - c->width;
- left -= c->left;
- left = (left * tpg->scaled_width) / tpg->src_width;
- left = (left & ~1) * twopixsize / 2;
- width = (width * tpg->scaled_width) / tpg->src_width;
- width = (width & ~1) * twopixsize / 2;
- memcpy(vbuf + buf_line * stride + left, tpg->contrast_line[p], width);
- }
- if (tpg->insert_sav) {
- unsigned offset = (tpg->compose.width / 6) * twopixsize;
- u8 *p = vbuf + buf_line * stride + offset;
- unsigned vact = 0, hact = 0;
-
- p[0] = 0xff;
- p[1] = 0;
- p[2] = 0;
- p[3] = 0x80 | (f << 6) | (vact << 5) | (hact << 4) |
- ((hact ^ vact) << 3) |
- ((hact ^ f) << 2) |
- ((f ^ vact) << 1) |
- (hact ^ vact ^ f);
- }
- if (tpg->insert_eav) {
- unsigned offset = (tpg->compose.width / 6) * 2 * twopixsize;
- u8 *p = vbuf + buf_line * stride + offset;
- unsigned vact = 0, hact = 1;
-
- p[0] = 0xff;
- p[1] = 0;
- p[2] = 0;
- p[3] = 0x80 | (f << 6) | (vact << 5) | (hact << 4) |
- ((hact ^ vact) << 3) |
- ((hact ^ f) << 2) |
- ((f ^ vact) << 1) |
- (hact ^ vact ^ f);
- }
+ for (i = 0; i < tpg_g_planes(tpg); i++) {
+ tpg_fill_plane_buffer(tpg, std, i, vbuf + offset);
+ offset += tpg_calc_plane_size(tpg, i);
}
}
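
The single-buffer multi-planar path of tpg_fillbuffer() above walks the planes and advances vbuf by tpg_calc_plane_size() for each one. A minimal standalone sketch of that offset arithmetic (illustrative values, not kernel code):

#include <stdio.h>

int main(void)
{
	/* Per-plane strides and vertical downsampling for a 640x480 YUV 4:2:0 frame. */
	unsigned bytesperline[3] = { 640, 320, 320 };
	unsigned vdownsampling[3] = { 1, 2, 2 };
	unsigned buf_height = 480;
	unsigned offset = 0;
	unsigned p;

	for (p = 0; p < 3; p++) {
		printf("plane %u starts at offset %u\n", p, offset);
		/* Same formula as tpg_calc_plane_size(): stride * height / vdownsampling. */
		offset += bytesperline[p] * buf_height / vdownsampling[p];
	}
	printf("total buffer size: %u bytes\n", offset);
	return 0;
}

For the values above this prints offsets 0, 307200 and 384000, and a 460800-byte total, i.e. the familiar 1.5 bytes per pixel of 4:2:0.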
diff --git a/drivers/media/platform/vivid/vivid-tpg.h b/drivers/media/platform/vivid/vivid-tpg.h
index bd8b1c7..a50cd2e 100644
--- a/drivers/media/platform/vivid/vivid-tpg.h
+++ b/drivers/media/platform/vivid/vivid-tpg.h
@@ -41,7 +41,10 @@
TPG_PAT_GREEN,
TPG_PAT_BLUE,
TPG_PAT_CHECKERS_16X16,
+ TPG_PAT_CHECKERS_2X2,
TPG_PAT_CHECKERS_1X1,
+ TPG_PAT_COLOR_CHECKERS_2X2,
+ TPG_PAT_COLOR_CHECKERS_1X1,
TPG_PAT_ALTERNATING_HLINES,
TPG_PAT_ALTERNATING_VLINES,
TPG_PAT_CROSS_1_PIXEL,
@@ -87,7 +90,7 @@
extern const char * const tpg_aspect_strings[];
-#define TPG_MAX_PLANES 2
+#define TPG_MAX_PLANES 3
#define TPG_MAX_PAT_LINES 8
struct tpg_data {
@@ -98,6 +101,7 @@
/* Scaled output frame size */
unsigned scaled_width;
u32 field;
+ bool field_alternate;
/* crop coordinates are frame-based */
struct v4l2_rect crop;
/* compose coordinates are format-based */
@@ -134,7 +138,16 @@
enum tpg_pixel_aspect pix_aspect;
unsigned rgb_range;
unsigned real_rgb_range;
+ unsigned buffers;
unsigned planes;
+ bool interleaved;
+ u8 vdownsampling[TPG_MAX_PLANES];
+ u8 hdownsampling[TPG_MAX_PLANES];
+ /*
+ * horizontal positions must be ANDed with this value to enforce
+ * correct boundaries for packed YUYV values.
+ */
+ unsigned hmask[TPG_MAX_PLANES];
/* Used to store the colors in native format, either RGB or YUV */
u8 colors[TPG_COLOR_MAX][3];
u8 textfg[TPG_MAX_PLANES][8], textbg[TPG_MAX_PLANES][8];
@@ -168,6 +181,7 @@
/* Used to store TPG_MAX_PAT_LINES lines, each with up to two planes */
unsigned max_line_width;
u8 *lines[TPG_MAX_PAT_LINES][TPG_MAX_PLANES];
+ u8 *downsampled_lines[TPG_MAX_PAT_LINES][TPG_MAX_PLANES];
u8 *random_line[TPG_MAX_PLANES];
u8 *contrast_line[TPG_MAX_PLANES];
u8 *black_line[TPG_MAX_PLANES];
@@ -180,11 +194,15 @@
u32 field);
void tpg_set_font(const u8 *f);
-void tpg_gen_text(struct tpg_data *tpg,
+void tpg_gen_text(const struct tpg_data *tpg,
u8 *basep[TPG_MAX_PLANES][2], int y, int x, char *text);
void tpg_calc_text_basep(struct tpg_data *tpg,
u8 *basep[TPG_MAX_PLANES][2], unsigned p, u8 *vbuf);
-void tpg_fillbuffer(struct tpg_data *tpg, v4l2_std_id std, unsigned p, u8 *vbuf);
+unsigned tpg_g_interleaved_plane(const struct tpg_data *tpg, unsigned buf_line);
+void tpg_fill_plane_buffer(struct tpg_data *tpg, v4l2_std_id std,
+ unsigned p, u8 *vbuf);
+void tpg_fillbuffer(struct tpg_data *tpg, v4l2_std_id std,
+ unsigned p, u8 *vbuf);
bool tpg_s_fourcc(struct tpg_data *tpg, u32 fourcc);
void tpg_s_crop_compose(struct tpg_data *tpg, const struct v4l2_rect *crop,
const struct v4l2_rect *compose);
@@ -323,9 +341,19 @@
return tpg->quantization;
}
+static inline unsigned tpg_g_buffers(const struct tpg_data *tpg)
+{
+ return tpg->buffers;
+}
+
static inline unsigned tpg_g_planes(const struct tpg_data *tpg)
{
- return tpg->planes;
+ return tpg->interleaved ? 1 : tpg->planes;
+}
+
+static inline bool tpg_g_interleaved(const struct tpg_data *tpg)
+{
+ return tpg->interleaved;
}
static inline unsigned tpg_g_twopixelsize(const struct tpg_data *tpg, unsigned plane)
@@ -333,6 +361,24 @@
return tpg->twopixelsize[plane];
}
+static inline unsigned tpg_hdiv(const struct tpg_data *tpg,
+ unsigned plane, unsigned x)
+{
+ return ((x / tpg->hdownsampling[plane]) & tpg->hmask[plane]) *
+ tpg->twopixelsize[plane] / 2;
+}
+
+static inline unsigned tpg_hscale(const struct tpg_data *tpg, unsigned x)
+{
+ return (x * tpg->scaled_width) / tpg->src_width;
+}
+
+static inline unsigned tpg_hscale_div(const struct tpg_data *tpg,
+ unsigned plane, unsigned x)
+{
+ return tpg_hdiv(tpg, plane, tpg_hscale(tpg, x));
+}
+
static inline unsigned tpg_g_bytesperline(const struct tpg_data *tpg, unsigned plane)
{
return tpg->bytesperline[plane];
@@ -340,7 +386,60 @@
static inline void tpg_s_bytesperline(struct tpg_data *tpg, unsigned plane, unsigned bpl)
{
- tpg->bytesperline[plane] = bpl;
+ unsigned p;
+
+ if (tpg->buffers > 1) {
+ tpg->bytesperline[plane] = bpl;
+ return;
+ }
+
+ for (p = 0; p < tpg_g_planes(tpg); p++) {
+ unsigned plane_w = bpl * tpg->twopixelsize[p] / tpg->twopixelsize[0];
+
+ tpg->bytesperline[p] = plane_w / tpg->hdownsampling[p];
+ }
+}
+
+
+static inline unsigned tpg_g_line_width(const struct tpg_data *tpg, unsigned plane)
+{
+ unsigned w = 0;
+ unsigned p;
+
+ if (tpg->buffers > 1)
+ return tpg_g_bytesperline(tpg, plane);
+ for (p = 0; p < tpg_g_planes(tpg); p++) {
+ unsigned plane_w = tpg_g_bytesperline(tpg, p);
+
+ w += plane_w / tpg->vdownsampling[p];
+ }
+ return w;
+}
+
+static inline unsigned tpg_calc_line_width(const struct tpg_data *tpg,
+ unsigned plane, unsigned bpl)
+{
+ unsigned w = 0;
+ unsigned p;
+
+ if (tpg->buffers > 1)
+ return bpl;
+ for (p = 0; p < tpg_g_planes(tpg); p++) {
+ unsigned plane_w = bpl * tpg->twopixelsize[p] / tpg->twopixelsize[0];
+
+ plane_w /= tpg->hdownsampling[p];
+ w += plane_w / tpg->vdownsampling[p];
+ }
+ return w;
+}
+
+static inline unsigned tpg_calc_plane_size(const struct tpg_data *tpg, unsigned plane)
+{
+ if (plane >= tpg_g_planes(tpg))
+ return 0;
+
+ return tpg_g_bytesperline(tpg, plane) * tpg->buf_height /
+ tpg->vdownsampling[plane];
}
static inline void tpg_s_buf_height(struct tpg_data *tpg, unsigned h)
@@ -348,9 +447,10 @@
tpg->buf_height = h;
}
-static inline void tpg_s_field(struct tpg_data *tpg, unsigned field)
+static inline void tpg_s_field(struct tpg_data *tpg, unsigned field, bool alternate)
{
tpg->field = field;
+ tpg->field_alternate = alternate;
}
static inline void tpg_s_perc_fill(struct tpg_data *tpg,
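
With the per-plane bit_depth, hdownsampling and vdownsampling fields introduced in this header, a plane-0 stride is propagated to the remaining planes of a single-buffer format. A standalone sketch of that arithmetic for an NV12-style layout (the twopixelsize values are illustrative assumptions, not taken from the driver tables):

#include <stdio.h>

int main(void)
{
	/* NV12-style: luma plane plus interleaved CbCr plane at half vertical resolution. */
	unsigned twopixelsize[2] = { 2, 2 };	/* assumed bytes per 2 horizontal pixels */
	unsigned hdownsampling[2] = { 1, 1 };
	unsigned vdownsampling[2] = { 1, 2 };
	unsigned bpl = 1280;			/* stride requested for plane 0 */
	unsigned buf_height = 720;
	unsigned p;

	for (p = 0; p < 2; p++) {
		/* Mirrors tpg_s_bytesperline(): scale by the twopixelsize ratio, then hdownsample. */
		unsigned plane_bpl = bpl * twopixelsize[p] / twopixelsize[0] / hdownsampling[p];
		/* Mirrors tpg_calc_plane_size(): stride * height / vdownsampling. */
		unsigned plane_size = plane_bpl * buf_height / vdownsampling[p];

		printf("plane %u: bytesperline=%u, size=%u\n", p, plane_bpl, plane_size);
	}
	return 0;
}

For 1280x720 this yields 921600 bytes for the luma plane and 460800 bytes for the chroma plane, matching the usual NV12 layout.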
diff --git a/drivers/media/platform/vivid/vivid-vid-cap.c b/drivers/media/platform/vivid/vivid-vid-cap.c
index 867a29a..dab5990 100644
--- a/drivers/media/platform/vivid/vivid-vid-cap.c
+++ b/drivers/media/platform/vivid/vivid-vid-cap.c
@@ -42,20 +42,26 @@
{
.name = "RGB565 (LE)",
.fourcc = V4L2_PIX_FMT_RGB565, /* gggbbbbb rrrrrggg */
- .depth = 16,
+ .vdownsampling = { 1 },
+ .bit_depth = { 16 },
.planes = 1,
+ .buffers = 1,
},
{
.name = "XRGB555 (LE)",
.fourcc = V4L2_PIX_FMT_XRGB555, /* gggbbbbb arrrrrgg */
- .depth = 16,
+ .vdownsampling = { 1 },
+ .bit_depth = { 16 },
.planes = 1,
+ .buffers = 1,
},
{
.name = "ARGB555 (LE)",
.fourcc = V4L2_PIX_FMT_ARGB555, /* gggbbbbb arrrrrgg */
- .depth = 16,
+ .vdownsampling = { 1 },
+ .bit_depth = { 16 },
.planes = 1,
+ .buffers = 1,
},
};
@@ -94,7 +100,7 @@
unsigned sizes[], void *alloc_ctxs[])
{
struct vivid_dev *dev = vb2_get_drv_priv(vq);
- unsigned planes = tpg_g_planes(&dev->tpg);
+ unsigned buffers = tpg_g_buffers(&dev->tpg);
unsigned h = dev->fmt_cap_rect.height;
unsigned p;
@@ -127,39 +133,36 @@
mp = &fmt->fmt.pix_mp;
/*
* Check if the number of planes in the specified format match
- * the number of planes in the current format. You can't mix that.
+ * the number of buffers in the current format. You can't mix that.
*/
- if (mp->num_planes != planes)
+ if (mp->num_planes != buffers)
return -EINVAL;
vfmt = vivid_get_format(dev, mp->pixelformat);
- for (p = 0; p < planes; p++) {
+ for (p = 0; p < buffers; p++) {
sizes[p] = mp->plane_fmt[p].sizeimage;
- if (sizes[0] < tpg_g_bytesperline(&dev->tpg, 0) * h +
+ if (sizes[p] < tpg_g_line_width(&dev->tpg, p) * h +
vfmt->data_offset[p])
return -EINVAL;
}
} else {
- for (p = 0; p < planes; p++)
- sizes[p] = tpg_g_bytesperline(&dev->tpg, p) * h +
+ for (p = 0; p < buffers; p++)
+ sizes[p] = tpg_g_line_width(&dev->tpg, p) * h +
dev->fmt_cap->data_offset[p];
}
if (vq->num_buffers + *nbuffers < 2)
*nbuffers = 2 - vq->num_buffers;
- *nplanes = planes;
+ *nplanes = buffers;
/*
* videobuf2-vmalloc allocator is context-less so no need to set
* alloc_ctxs array.
*/
- if (planes == 2)
- dprintk(dev, 1, "%s, count=%d, sizes=%u, %u\n", __func__,
- *nbuffers, sizes[0], sizes[1]);
- else
- dprintk(dev, 1, "%s, count=%d, size=%u\n", __func__,
- *nbuffers, sizes[0]);
+ dprintk(dev, 1, "%s: count=%d\n", __func__, *nbuffers);
+ for (p = 0; p < buffers; p++)
+ dprintk(dev, 1, "%s: size[%u]=%u\n", __func__, p, sizes[p]);
return 0;
}
@@ -168,7 +171,7 @@
{
struct vivid_dev *dev = vb2_get_drv_priv(vb->vb2_queue);
unsigned long size;
- unsigned planes = tpg_g_planes(&dev->tpg);
+ unsigned buffers = tpg_g_buffers(&dev->tpg);
unsigned p;
dprintk(dev, 1, "%s\n", __func__);
@@ -184,13 +187,13 @@
dev->buf_prepare_error = false;
return -EINVAL;
}
- for (p = 0; p < planes; p++) {
- size = tpg_g_bytesperline(&dev->tpg, p) * dev->fmt_cap_rect.height +
+ for (p = 0; p < buffers; p++) {
+ size = tpg_g_line_width(&dev->tpg, p) * dev->fmt_cap_rect.height +
dev->fmt_cap->data_offset[p];
- if (vb2_plane_size(vb, 0) < size) {
+ if (vb2_plane_size(vb, p) < size) {
dprintk(dev, 1, "%s data will not fit into plane %u (%lu < %lu)\n",
- __func__, p, vb2_plane_size(vb, 0), size);
+ __func__, p, vb2_plane_size(vb, p), size);
return -EINVAL;
}
@@ -441,7 +444,7 @@
*/
if (keep_controls || !dev->colorspace)
break;
- if (bt->standards & V4L2_DV_BT_STD_CEA861) {
+ if (bt->flags & V4L2_DV_FL_IS_CE_VIDEO) {
if (bt->width == 720 && bt->height <= 576)
v4l2_ctrl_s_ctrl(dev->colorspace, VIVID_CS_170M);
else
@@ -526,11 +529,11 @@
mp->colorspace = vivid_colorspace_cap(dev);
mp->ycbcr_enc = vivid_ycbcr_enc_cap(dev);
mp->quantization = vivid_quantization_cap(dev);
- mp->num_planes = dev->fmt_cap->planes;
+ mp->num_planes = dev->fmt_cap->buffers;
for (p = 0; p < mp->num_planes; p++) {
mp->plane_fmt[p].bytesperline = tpg_g_bytesperline(&dev->tpg, p);
mp->plane_fmt[p].sizeimage =
- mp->plane_fmt[p].bytesperline * mp->height +
+ tpg_g_line_width(&dev->tpg, p) * mp->height +
dev->fmt_cap->data_offset[p];
}
return 0;
@@ -596,18 +599,19 @@
/* This driver supports custom bytesperline values */
- /* Calculate the minimum supported bytesperline value */
- bytesperline = (mp->width * fmt->depth) >> 3;
- /* Calculate the maximum supported bytesperline value */
- max_bpl = (MAX_ZOOM * MAX_WIDTH * fmt->depth) >> 3;
- mp->num_planes = fmt->planes;
+ mp->num_planes = fmt->buffers;
for (p = 0; p < mp->num_planes; p++) {
+ /* Calculate the minimum supported bytesperline value */
+ bytesperline = (mp->width * fmt->bit_depth[p]) >> 3;
+ /* Calculate the maximum supported bytesperline value */
+ max_bpl = (MAX_ZOOM * MAX_WIDTH * fmt->bit_depth[p]) >> 3;
+
if (pfmt[p].bytesperline > max_bpl)
pfmt[p].bytesperline = max_bpl;
if (pfmt[p].bytesperline < bytesperline)
pfmt[p].bytesperline = bytesperline;
- pfmt[p].sizeimage = pfmt[p].bytesperline * mp->height +
- fmt->data_offset[p];
+ pfmt[p].sizeimage = tpg_calc_line_width(&dev->tpg, p, pfmt[p].bytesperline) *
+ mp->height + fmt->data_offset[p];
memset(pfmt[p].reserved, 0, sizeof(pfmt[p].reserved));
}
mp->colorspace = vivid_colorspace_cap(dev);
@@ -627,6 +631,7 @@
struct vb2_queue *q = &dev->vb_vid_cap_q;
int ret = vivid_try_fmt_vid_cap(file, priv, f);
unsigned factor = 1;
+ unsigned p;
unsigned i;
if (ret < 0)
@@ -729,13 +734,15 @@
dev->fmt_cap_rect.width = mp->width;
dev->fmt_cap_rect.height = mp->height;
tpg_s_buf_height(&dev->tpg, mp->height);
- tpg_s_bytesperline(&dev->tpg, 0, mp->plane_fmt[0].bytesperline);
- if (tpg_g_planes(&dev->tpg) > 1)
- tpg_s_bytesperline(&dev->tpg, 1, mp->plane_fmt[1].bytesperline);
- dev->field_cap = mp->field;
- tpg_s_field(&dev->tpg, dev->field_cap);
- tpg_s_crop_compose(&dev->tpg, &dev->crop_cap, &dev->compose_cap);
tpg_s_fourcc(&dev->tpg, dev->fmt_cap->fourcc);
+ for (p = 0; p < tpg_g_buffers(&dev->tpg); p++)
+ tpg_s_bytesperline(&dev->tpg, p, mp->plane_fmt[p].bytesperline);
+ dev->field_cap = mp->field;
+ if (dev->field_cap == V4L2_FIELD_ALTERNATE)
+ tpg_s_field(&dev->tpg, V4L2_FIELD_TOP, true);
+ else
+ tpg_s_field(&dev->tpg, dev->field_cap, false);
+ tpg_s_crop_compose(&dev->tpg, &dev->crop_cap, &dev->compose_cap);
if (vivid_is_sdtv_cap(dev))
dev->tv_field_cap = mp->field;
tpg_update_mv_step(&dev->tpg);
@@ -1012,8 +1019,12 @@
int vidioc_enum_fmt_vid_overlay(struct file *file, void *priv,
struct v4l2_fmtdesc *f)
{
+ struct vivid_dev *dev = video_drvdata(file);
const struct vivid_fmt *fmt;
+ if (dev->multiplanar)
+ return -ENOTTY;
+
if (f->index >= ARRAY_SIZE(formats_ovl))
return -EINVAL;
@@ -1032,6 +1043,9 @@
struct v4l2_window *win = &f->fmt.win;
unsigned clipcount = win->clipcount;
+ if (dev->multiplanar)
+ return -ENOTTY;
+
win->w.top = dev->overlay_cap_top;
win->w.left = dev->overlay_cap_left;
win->w.width = compose->width;
@@ -1063,6 +1077,9 @@
struct v4l2_window *win = &f->fmt.win;
int i, j;
+ if (dev->multiplanar)
+ return -ENOTTY;
+
win->w.left = clamp_t(int, win->w.left,
-dev->fb_cap.fmt.width, dev->fb_cap.fmt.width);
win->w.top = clamp_t(int, win->w.top,
@@ -1150,6 +1167,9 @@
{
struct vivid_dev *dev = video_drvdata(file);
+ if (dev->multiplanar)
+ return -ENOTTY;
+
if (i && dev->fb_vbase_cap == NULL)
return -EINVAL;
@@ -1169,6 +1189,9 @@
{
struct vivid_dev *dev = video_drvdata(file);
+ if (dev->multiplanar)
+ return -ENOTTY;
+
*a = dev->fb_cap;
a->capability = V4L2_FBUF_CAP_BITMAP_CLIPPING |
V4L2_FBUF_CAP_LIST_CLIPPING;
@@ -1185,6 +1208,9 @@
struct vivid_dev *dev = video_drvdata(file);
const struct vivid_fmt *fmt;
+ if (dev->multiplanar)
+ return -ENOTTY;
+
if (!capable(CAP_SYS_ADMIN) && !capable(CAP_SYS_RAWIO))
return -EPERM;
@@ -1202,7 +1228,7 @@
fmt = vivid_get_format(dev, a->fmt.pixelformat);
if (!fmt || !fmt->can_do_overlay)
return -EINVAL;
- if (a->fmt.bytesperline < (a->fmt.width * fmt->depth) / 8)
+ if (a->fmt.bytesperline < (a->fmt.width * fmt->bit_depth[0]) / 8)
return -EINVAL;
if (a->fmt.height * a->fmt.bytesperline < a->fmt.sizeimage)
return -EINVAL;
@@ -1332,7 +1358,7 @@
v4l2_ctrl_s_ctrl(dev->colorspace, VIVID_CS_170M);
break;
case HDMI:
- if (bt->standards & V4L2_DV_BT_STD_CEA861) {
+ if (bt->flags & V4L2_DV_FL_IS_CE_VIDEO) {
if (dev->src_rect.width == 720 && dev->src_rect.height <= 576)
v4l2_ctrl_s_ctrl(dev->colorspace, VIVID_CS_170M);
else
@@ -1552,6 +1578,65 @@
return 0;
}
+static void find_aspect_ratio(u32 width, u32 height,
+ u32 *num, u32 *denom)
+{
+ if (!(height % 3) && ((height * 4 / 3) == width)) {
+ *num = 4;
+ *denom = 3;
+ } else if (!(height % 9) && ((height * 16 / 9) == width)) {
+ *num = 16;
+ *denom = 9;
+ } else if (!(height % 10) && ((height * 16 / 10) == width)) {
+ *num = 16;
+ *denom = 10;
+ } else if (!(height % 4) && ((height * 5 / 4) == width)) {
+ *num = 5;
+ *denom = 4;
+ } else if (!(height % 9) && ((height * 15 / 9) == width)) {
+ *num = 15;
+ *denom = 9;
+ } else { /* default to 16:9 */
+ *num = 16;
+ *denom = 9;
+ }
+}
+
+static bool valid_cvt_gtf_timings(struct v4l2_dv_timings *timings)
+{
+ struct v4l2_bt_timings *bt = &timings->bt;
+ u32 total_h_pixel;
+ u32 total_v_lines;
+ u32 h_freq;
+
+ if (!v4l2_valid_dv_timings(timings, &vivid_dv_timings_cap,
+ NULL, NULL))
+ return false;
+
+ total_h_pixel = V4L2_DV_BT_FRAME_WIDTH(bt);
+ total_v_lines = V4L2_DV_BT_FRAME_HEIGHT(bt);
+
+ h_freq = (u32)bt->pixelclock / total_h_pixel;
+
+ if (bt->standards == 0 || (bt->standards & V4L2_DV_BT_STD_CVT)) {
+ if (v4l2_detect_cvt(total_v_lines, h_freq, bt->vsync,
+ bt->polarities, timings))
+ return true;
+ }
+
+ if (bt->standards == 0 || (bt->standards & V4L2_DV_BT_STD_GTF)) {
+ struct v4l2_fract aspect_ratio;
+
+ find_aspect_ratio(bt->width, bt->height,
+ &aspect_ratio.numerator,
+ &aspect_ratio.denominator);
+ if (v4l2_detect_gtf(total_v_lines, h_freq, bt->vsync,
+ bt->polarities, aspect_ratio, timings))
+ return true;
+ }
+ return false;
+}
+
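
valid_cvt_gtf_timings() above derives the horizontal frequency from the pixel clock and the total line width before handing the timing to the CVT and GTF detectors. A worked example with illustrative 1080p-style numbers:

#include <stdio.h>

int main(void)
{
	/* Illustrative CEA-style 1080p60 timing: 148.5 MHz clock, 2200 total pixels per line. */
	unsigned long long pixelclock = 148500000ULL;
	unsigned total_h_pixel = 2200;
	unsigned h_freq = (unsigned)(pixelclock / total_h_pixel);

	/* h_freq = pixelclock / total_h_pixel, as computed in valid_cvt_gtf_timings(). */
	printf("h_freq = %u Hz (%.1f kHz)\n", h_freq, h_freq / 1000.0);
	return 0;
}

This prints 67500 Hz (67.5 kHz), the value a CVT/GTF detector would then try to match against the reported vsync and polarities.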
int vivid_vid_cap_s_dv_timings(struct file *file, void *_fh,
struct v4l2_dv_timings *timings)
{
@@ -1559,13 +1644,16 @@
if (!vivid_is_hdmi_cap(dev))
return -ENODATA;
- if (vb2_is_busy(&dev->vb_vid_cap_q))
- return -EBUSY;
if (!v4l2_find_dv_timings_cap(timings, &vivid_dv_timings_cap,
- 0, NULL, NULL))
+ 0, NULL, NULL) &&
+ !valid_cvt_gtf_timings(timings))
return -EINVAL;
+
if (v4l2_match_dv_timings(timings, &dev->dv_timings_cap, 0))
return 0;
+ if (vb2_is_busy(&dev->vb_vid_cap_q))
+ return -EBUSY;
+
dev->dv_timings_cap = *timings;
vivid_update_format_cap(dev, false);
return 0;
@@ -1663,18 +1751,14 @@
return -EINVAL;
if (!vivid_is_webcam(dev)) {
- static const struct v4l2_fract step = { 1, 1 };
-
if (fival->index)
return -EINVAL;
if (fival->width < MIN_WIDTH || fival->width > MAX_WIDTH * MAX_ZOOM)
return -EINVAL;
if (fival->height < MIN_HEIGHT || fival->height > MAX_HEIGHT * MAX_ZOOM)
return -EINVAL;
- fival->type = V4L2_FRMIVAL_TYPE_CONTINUOUS;
- fival->stepwise.min = tpf_min;
- fival->stepwise.max = tpf_max;
- fival->stepwise.step = step;
+ fival->type = V4L2_FRMIVAL_TYPE_DISCRETE;
+ fival->discrete = dev->timeperframe_vid_cap;
return 0;
}
diff --git a/drivers/media/platform/vivid/vivid-vid-common.c b/drivers/media/platform/vivid/vivid-vid-common.c
index 6bef1e6..aa44627 100644
--- a/drivers/media/platform/vivid/vivid-vid-common.c
+++ b/drivers/media/platform/vivid/vivid-vid-common.c
@@ -33,8 +33,9 @@
.type = V4L2_DV_BT_656_1120,
/* keep this initialization for compatibility with GCC < 4.4.6 */
.reserved = { 0 },
- V4L2_INIT_BT_TIMINGS(0, MAX_WIDTH, 0, MAX_HEIGHT, 25000000, 600000000,
- V4L2_DV_BT_STD_CEA861 | V4L2_DV_BT_STD_DMT,
+ V4L2_INIT_BT_TIMINGS(0, MAX_WIDTH, 0, MAX_HEIGHT, 14000000, 775000000,
+ V4L2_DV_BT_STD_CEA861 | V4L2_DV_BT_STD_DMT |
+ V4L2_DV_BT_STD_CVT | V4L2_DV_BT_STD_GTF,
V4L2_DV_BT_CAP_PROGRESSIVE | V4L2_DV_BT_CAP_INTERLACED)
};
@@ -46,145 +47,435 @@
{
.name = "4:2:2, packed, YUYV",
.fourcc = V4L2_PIX_FMT_YUYV,
- .depth = 16,
+ .vdownsampling = { 1 },
+ .bit_depth = { 16 },
.is_yuv = true,
.planes = 1,
- .data_offset = { PLANE0_DATA_OFFSET, 0 },
+ .buffers = 1,
+ .data_offset = { PLANE0_DATA_OFFSET },
},
{
.name = "4:2:2, packed, UYVY",
.fourcc = V4L2_PIX_FMT_UYVY,
- .depth = 16,
+ .vdownsampling = { 1 },
+ .bit_depth = { 16 },
.is_yuv = true,
.planes = 1,
+ .buffers = 1,
},
{
.name = "4:2:2, packed, YVYU",
.fourcc = V4L2_PIX_FMT_YVYU,
- .depth = 16,
+ .vdownsampling = { 1 },
+ .bit_depth = { 16 },
.is_yuv = true,
.planes = 1,
+ .buffers = 1,
},
{
.name = "4:2:2, packed, VYUY",
.fourcc = V4L2_PIX_FMT_VYUY,
- .depth = 16,
+ .vdownsampling = { 1 },
+ .bit_depth = { 16 },
.is_yuv = true,
.planes = 1,
+ .buffers = 1,
+ },
+ {
+ .name = "YUV 4:2:2 triplanar",
+ .fourcc = V4L2_PIX_FMT_YUV422P,
+ .vdownsampling = { 1, 1, 1 },
+ .bit_depth = { 8, 4, 4 },
+ .is_yuv = true,
+ .planes = 3,
+ .buffers = 1,
+ },
+ {
+ .name = "YUV 4:2:0 triplanar",
+ .fourcc = V4L2_PIX_FMT_YUV420,
+ .vdownsampling = { 1, 2, 2 },
+ .bit_depth = { 8, 4, 4 },
+ .is_yuv = true,
+ .planes = 3,
+ .buffers = 1,
+ },
+ {
+ .name = "YVU 4:2:0 triplanar",
+ .fourcc = V4L2_PIX_FMT_YVU420,
+ .vdownsampling = { 1, 2, 2 },
+ .bit_depth = { 8, 4, 4 },
+ .is_yuv = true,
+ .planes = 3,
+ .buffers = 1,
+ },
+ {
+ .name = "YUV 4:2:0 biplanar",
+ .fourcc = V4L2_PIX_FMT_NV12,
+ .vdownsampling = { 1, 2 },
+ .bit_depth = { 8, 8 },
+ .is_yuv = true,
+ .planes = 2,
+ .buffers = 1,
+ },
+ {
+ .name = "YVU 4:2:0 biplanar",
+ .fourcc = V4L2_PIX_FMT_NV21,
+ .vdownsampling = { 1, 2 },
+ .bit_depth = { 8, 8 },
+ .is_yuv = true,
+ .planes = 2,
+ .buffers = 1,
+ },
+ {
+ .name = "YUV 4:2:2 biplanar",
+ .fourcc = V4L2_PIX_FMT_NV16,
+ .vdownsampling = { 1, 1 },
+ .bit_depth = { 8, 8 },
+ .is_yuv = true,
+ .planes = 2,
+ .buffers = 1,
+ },
+ {
+ .name = "YVU 4:2:2 biplanar",
+ .fourcc = V4L2_PIX_FMT_NV61,
+ .vdownsampling = { 1, 1 },
+ .bit_depth = { 8, 8 },
+ .is_yuv = true,
+ .planes = 2,
+ .buffers = 1,
+ },
+ {
+ .name = "YUV 4:4:4 biplanar",
+ .fourcc = V4L2_PIX_FMT_NV24,
+ .vdownsampling = { 1, 1 },
+ .bit_depth = { 8, 16 },
+ .is_yuv = true,
+ .planes = 2,
+ .buffers = 1,
+ },
+ {
+ .name = "YVU 4:4:4 biplanar",
+ .fourcc = V4L2_PIX_FMT_NV42,
+ .vdownsampling = { 1, 1 },
+ .bit_depth = { 8, 16 },
+ .is_yuv = true,
+ .planes = 2,
+ .buffers = 1,
+ },
+ {
+ .name = "YUV555 (LE)",
+ .fourcc = V4L2_PIX_FMT_YUV555, /* uuuvvvvv ayyyyyuu */
+ .vdownsampling = { 1 },
+ .bit_depth = { 16 },
+ .planes = 1,
+ .buffers = 1,
+ .alpha_mask = 0x8000,
+ },
+ {
+ .name = "YUV565 (LE)",
+ .fourcc = V4L2_PIX_FMT_YUV565, /* uuuvvvvv yyyyyuuu */
+ .vdownsampling = { 1 },
+ .bit_depth = { 16 },
+ .planes = 1,
+ .buffers = 1,
+ },
+ {
+ .name = "YUV444",
+ .fourcc = V4L2_PIX_FMT_YUV444, /* uuuuvvvv aaaayyyy */
+ .vdownsampling = { 1 },
+ .bit_depth = { 16 },
+ .planes = 1,
+ .buffers = 1,
+ .alpha_mask = 0xf000,
+ },
+ {
+ .name = "YUV32 (LE)",
+ .fourcc = V4L2_PIX_FMT_YUV32, /* ayuv */
+ .vdownsampling = { 1 },
+ .bit_depth = { 32 },
+ .planes = 1,
+ .buffers = 1,
+ .alpha_mask = 0x000000ff,
+ },
+ {
+ .name = "Monochrome",
+ .fourcc = V4L2_PIX_FMT_GREY,
+ .vdownsampling = { 1 },
+ .bit_depth = { 8 },
+ .is_yuv = true,
+ .planes = 1,
+ .buffers = 1,
+ },
+ {
+ .name = "RGB332",
+ .fourcc = V4L2_PIX_FMT_RGB332, /* rrrgggbb */
+ .vdownsampling = { 1 },
+ .bit_depth = { 8 },
+ .planes = 1,
+ .buffers = 1,
},
{
.name = "RGB565 (LE)",
.fourcc = V4L2_PIX_FMT_RGB565, /* gggbbbbb rrrrrggg */
- .depth = 16,
+ .vdownsampling = { 1 },
+ .bit_depth = { 16 },
.planes = 1,
+ .buffers = 1,
.can_do_overlay = true,
},
{
.name = "RGB565 (BE)",
.fourcc = V4L2_PIX_FMT_RGB565X, /* rrrrrggg gggbbbbb */
- .depth = 16,
+ .vdownsampling = { 1 },
+ .bit_depth = { 16 },
.planes = 1,
+ .buffers = 1,
.can_do_overlay = true,
},
{
- .name = "RGB555 (LE)",
- .fourcc = V4L2_PIX_FMT_RGB555, /* gggbbbbb arrrrrgg */
- .depth = 16,
+ .name = "RGB444",
+ .fourcc = V4L2_PIX_FMT_RGB444, /* xxxxrrrr ggggbbbb */
+ .vdownsampling = { 1 },
+ .bit_depth = { 16 },
.planes = 1,
+ .buffers = 1,
+ },
+ {
+ .name = "XRGB444",
+ .fourcc = V4L2_PIX_FMT_XRGB444, /* xxxxrrrr ggggbbbb */
+ .vdownsampling = { 1 },
+ .bit_depth = { 16 },
+ .planes = 1,
+ .buffers = 1,
+ },
+ {
+ .name = "ARGB444",
+ .fourcc = V4L2_PIX_FMT_ARGB444, /* aaaarrrr ggggbbbb */
+ .vdownsampling = { 1 },
+ .bit_depth = { 16 },
+ .planes = 1,
+ .buffers = 1,
+ .alpha_mask = 0x00f0,
+ },
+ {
+ .name = "RGB555 (LE)",
+ .fourcc = V4L2_PIX_FMT_RGB555, /* gggbbbbb xrrrrrgg */
+ .vdownsampling = { 1 },
+ .bit_depth = { 16 },
+ .planes = 1,
+ .buffers = 1,
.can_do_overlay = true,
},
{
.name = "XRGB555 (LE)",
- .fourcc = V4L2_PIX_FMT_XRGB555, /* gggbbbbb arrrrrgg */
- .depth = 16,
+ .fourcc = V4L2_PIX_FMT_XRGB555, /* gggbbbbb xrrrrrgg */
+ .vdownsampling = { 1 },
+ .bit_depth = { 16 },
.planes = 1,
+ .buffers = 1,
.can_do_overlay = true,
},
{
.name = "ARGB555 (LE)",
.fourcc = V4L2_PIX_FMT_ARGB555, /* gggbbbbb arrrrrgg */
- .depth = 16,
+ .vdownsampling = { 1 },
+ .bit_depth = { 16 },
.planes = 1,
+ .buffers = 1,
.can_do_overlay = true,
.alpha_mask = 0x8000,
},
{
.name = "RGB555 (BE)",
- .fourcc = V4L2_PIX_FMT_RGB555X, /* arrrrrgg gggbbbbb */
- .depth = 16,
+ .fourcc = V4L2_PIX_FMT_RGB555X, /* xrrrrrgg gggbbbbb */
+ .vdownsampling = { 1 },
+ .bit_depth = { 16 },
.planes = 1,
- .can_do_overlay = true,
+ .buffers = 1,
+ },
+ {
+ .name = "XRGB555 (BE)",
+ .fourcc = V4L2_PIX_FMT_XRGB555X, /* xrrrrrgg gggbbbbb */
+ .vdownsampling = { 1 },
+ .bit_depth = { 16 },
+ .planes = 1,
+ .buffers = 1,
+ },
+ {
+ .name = "ARGB555 (BE)",
+ .fourcc = V4L2_PIX_FMT_ARGB555X, /* arrrrrgg gggbbbbb */
+ .vdownsampling = { 1 },
+ .bit_depth = { 16 },
+ .planes = 1,
+ .buffers = 1,
+ .alpha_mask = 0x0080,
},
{
.name = "RGB24 (LE)",
.fourcc = V4L2_PIX_FMT_RGB24, /* rgb */
- .depth = 24,
+ .vdownsampling = { 1 },
+ .bit_depth = { 24 },
.planes = 1,
+ .buffers = 1,
},
{
.name = "RGB24 (BE)",
.fourcc = V4L2_PIX_FMT_BGR24, /* bgr */
- .depth = 24,
+ .vdownsampling = { 1 },
+ .bit_depth = { 24 },
.planes = 1,
+ .buffers = 1,
+ },
+ {
+ .name = "BGR666",
+ .fourcc = V4L2_PIX_FMT_BGR666, /* bbbbbbgg ggggrrrr rrxxxxxx */
+ .vdownsampling = { 1 },
+ .bit_depth = { 32 },
+ .planes = 1,
+ .buffers = 1,
},
{
.name = "RGB32 (LE)",
- .fourcc = V4L2_PIX_FMT_RGB32, /* argb */
- .depth = 32,
+ .fourcc = V4L2_PIX_FMT_RGB32, /* xrgb */
+ .vdownsampling = { 1 },
+ .bit_depth = { 32 },
.planes = 1,
+ .buffers = 1,
},
{
.name = "RGB32 (BE)",
- .fourcc = V4L2_PIX_FMT_BGR32, /* bgra */
- .depth = 32,
+ .fourcc = V4L2_PIX_FMT_BGR32, /* bgrx */
+ .vdownsampling = { 1 },
+ .bit_depth = { 32 },
.planes = 1,
+ .buffers = 1,
},
{
.name = "XRGB32 (LE)",
- .fourcc = V4L2_PIX_FMT_XRGB32, /* argb */
- .depth = 32,
+ .fourcc = V4L2_PIX_FMT_XRGB32, /* xrgb */
+ .vdownsampling = { 1 },
+ .bit_depth = { 32 },
.planes = 1,
+ .buffers = 1,
},
{
.name = "XRGB32 (BE)",
- .fourcc = V4L2_PIX_FMT_XBGR32, /* bgra */
- .depth = 32,
+ .fourcc = V4L2_PIX_FMT_XBGR32, /* bgrx */
+ .vdownsampling = { 1 },
+ .bit_depth = { 32 },
.planes = 1,
+ .buffers = 1,
},
{
.name = "ARGB32 (LE)",
.fourcc = V4L2_PIX_FMT_ARGB32, /* argb */
- .depth = 32,
+ .vdownsampling = { 1 },
+ .bit_depth = { 32 },
.planes = 1,
+ .buffers = 1,
.alpha_mask = 0x000000ff,
},
{
.name = "ARGB32 (BE)",
.fourcc = V4L2_PIX_FMT_ABGR32, /* bgra */
- .depth = 32,
+ .vdownsampling = { 1 },
+ .bit_depth = { 32 },
.planes = 1,
+ .buffers = 1,
.alpha_mask = 0xff000000,
},
{
- .name = "4:2:2, planar, YUV",
+ .name = "Bayer BG/GR",
+ .fourcc = V4L2_PIX_FMT_SBGGR8, /* Bayer BG/GR */
+ .vdownsampling = { 1 },
+ .bit_depth = { 8 },
+ .planes = 1,
+ .buffers = 1,
+ },
+ {
+ .name = "Bayer GB/RG",
+ .fourcc = V4L2_PIX_FMT_SGBRG8, /* Bayer GB/RG */
+ .vdownsampling = { 1 },
+ .bit_depth = { 8 },
+ .planes = 1,
+ .buffers = 1,
+ },
+ {
+ .name = "Bayer GR/BG",
+ .fourcc = V4L2_PIX_FMT_SGRBG8, /* Bayer GR/BG */
+ .vdownsampling = { 1 },
+ .bit_depth = { 8 },
+ .planes = 1,
+ .buffers = 1,
+ },
+ {
+ .name = "Bayer RG/GB",
+ .fourcc = V4L2_PIX_FMT_SRGGB8, /* Bayer RG/GB */
+ .vdownsampling = { 1 },
+ .bit_depth = { 8 },
+ .planes = 1,
+ .buffers = 1,
+ },
+ {
+ .name = "4:2:2, biplanar, YUV",
.fourcc = V4L2_PIX_FMT_NV16M,
- .depth = 8,
+ .vdownsampling = { 1, 1 },
+ .bit_depth = { 8, 8 },
.is_yuv = true,
.planes = 2,
+ .buffers = 2,
.data_offset = { PLANE0_DATA_OFFSET, 0 },
},
{
- .name = "4:2:2, planar, YVU",
+ .name = "4:2:2, biplanar, YVU",
.fourcc = V4L2_PIX_FMT_NV61M,
- .depth = 8,
+ .vdownsampling = { 1, 1 },
+ .bit_depth = { 8, 8 },
.is_yuv = true,
.planes = 2,
+ .buffers = 2,
.data_offset = { 0, PLANE0_DATA_OFFSET },
},
+ {
+ .name = "4:2:0, triplanar, YUV",
+ .fourcc = V4L2_PIX_FMT_YUV420M,
+ .vdownsampling = { 1, 2, 2 },
+ .bit_depth = { 8, 4, 4 },
+ .is_yuv = true,
+ .planes = 3,
+ .buffers = 3,
+ },
+ {
+ .name = "4:2:0, triplanar, YVU",
+ .fourcc = V4L2_PIX_FMT_YVU420M,
+ .vdownsampling = { 1, 2, 2 },
+ .bit_depth = { 8, 4, 4 },
+ .is_yuv = true,
+ .planes = 3,
+ .buffers = 3,
+ },
+ {
+ .name = "4:2:0, biplanar, YUV",
+ .fourcc = V4L2_PIX_FMT_NV12M,
+ .vdownsampling = { 1, 2 },
+ .bit_depth = { 8, 8 },
+ .is_yuv = true,
+ .planes = 2,
+ .buffers = 2,
+ },
+ {
+ .name = "4:2:0, biplanar, YVU",
+ .fourcc = V4L2_PIX_FMT_NV21M,
+ .vdownsampling = { 1, 2 },
+ .bit_depth = { 8, 8 },
+ .is_yuv = true,
+ .planes = 2,
+ .buffers = 2,
+ },
};
-/* There are 2 multiplanar formats in the list */
-#define VIVID_MPLANAR_FORMATS 2
+/* There are 6 multiplanar formats in the list */
+#define VIVID_MPLANAR_FORMATS 6
const struct vivid_fmt *vivid_get_format(struct vivid_dev *dev, u32 pixelformat)
{
@@ -194,7 +485,7 @@
for (k = 0; k < ARRAY_SIZE(vivid_formats); k++) {
fmt = &vivid_formats[k];
if (fmt->fourcc == pixelformat)
- if (fmt->planes == 1 || dev->multiplanar)
+ if (fmt->buffers == 1 || dev->multiplanar)
return fmt;
}
@@ -210,6 +501,13 @@
return false;
if (dev->field_cap != dev->field_out)
return false;
+ /*
+ * While this can be supported, it is just too much work
+ * to actually implement.
+ */
+ if (dev->field_cap == V4L2_FIELD_SEQ_TB ||
+ dev->field_cap == V4L2_FIELD_SEQ_BT)
+ return false;
if (vivid_is_svid_cap(dev) && vivid_is_svid_out(dev)) {
if (!(dev->std_cap & V4L2_STD_525_60) !=
!(dev->std_out & V4L2_STD_525_60))
@@ -397,6 +695,9 @@
unsigned w = r->width;
unsigned h = r->height;
+ /* sanitize w and h in case someone passes ~0 as the value */
+ w &= 0xffff;
+ h &= 0xffff;
if (!(flags & V4L2_SEL_FLAG_LE)) {
w++;
h++;
@@ -421,8 +722,9 @@
r->top = 0;
if (r->left < 0)
r->left = 0;
- r->left &= ~1;
- r->top &= ~1;
+ /* sanitize left and top in case someone passes ~0 as the value */
+ r->left &= 0xfffe;
+ r->top &= 0xfffe;
if (r->left + w > MAX_WIDTH)
r->left = MAX_WIDTH - w;
if (r->top + h > MAX_HEIGHT)
diff --git a/drivers/media/platform/vivid/vivid-vid-out.c b/drivers/media/platform/vivid/vivid-vid-out.c
index 39ff79f..0af43dc 100644
--- a/drivers/media/platform/vivid/vivid-vid-out.c
+++ b/drivers/media/platform/vivid/vivid-vid-out.c
@@ -36,9 +36,14 @@
unsigned sizes[], void *alloc_ctxs[])
{
struct vivid_dev *dev = vb2_get_drv_priv(vq);
- unsigned planes = dev->fmt_out->planes;
+ const struct vivid_fmt *vfmt = dev->fmt_out;
+ unsigned planes = vfmt->buffers;
unsigned h = dev->fmt_out_rect.height;
unsigned size = dev->bytesperline_out[0] * h;
+ unsigned p;
+
+ for (p = vfmt->buffers; p < vfmt->planes; p++)
+ size += dev->bytesperline_out[p] * h / vfmt->vdownsampling[p];
if (dev->field_out == V4L2_FIELD_ALTERNATE) {
/*
@@ -74,21 +79,16 @@
if (mp->num_planes != planes)
return -EINVAL;
sizes[0] = mp->plane_fmt[0].sizeimage;
- if (planes == 2) {
- sizes[1] = mp->plane_fmt[1].sizeimage;
- if (sizes[0] < dev->bytesperline_out[0] * h ||
- sizes[1] < dev->bytesperline_out[1] * h)
- return -EINVAL;
- } else if (sizes[0] < size) {
+ if (sizes[0] < size)
return -EINVAL;
+ for (p = 1; p < planes; p++) {
+ sizes[p] = mp->plane_fmt[p].sizeimage;
+ if (sizes[p] < dev->bytesperline_out[p] * h)
+ return -EINVAL;
}
} else {
- if (planes == 2) {
- sizes[0] = dev->bytesperline_out[0] * h;
- sizes[1] = dev->bytesperline_out[1] * h;
- } else {
- sizes[0] = size;
- }
+ for (p = 0; p < planes; p++)
+ sizes[p] = p ? dev->bytesperline_out[p] * h : size;
}
if (vq->num_buffers + *nbuffers < 2)
@@ -101,12 +101,9 @@
* alloc_ctxs array.
*/
- if (planes == 2)
- dprintk(dev, 1, "%s, count=%d, sizes=%u, %u\n", __func__,
- *nbuffers, sizes[0], sizes[1]);
- else
- dprintk(dev, 1, "%s, count=%d, size=%u\n", __func__,
- *nbuffers, sizes[0]);
+ dprintk(dev, 1, "%s: count=%d\n", __func__, *nbuffers);
+ for (p = 0; p < planes; p++)
+ dprintk(dev, 1, "%s: size[%u]=%u\n", __func__, p, sizes[p]);
return 0;
}
@@ -114,7 +111,7 @@
{
struct vivid_dev *dev = vb2_get_drv_priv(vb->vb2_queue);
unsigned long size;
- unsigned planes = dev->fmt_out->planes;
+ unsigned planes;
unsigned p;
dprintk(dev, 1, "%s\n", __func__);
@@ -122,6 +119,8 @@
if (WARN_ON(NULL == dev->fmt_out))
return -EINVAL;
+ planes = dev->fmt_out->planes;
+
if (dev->buf_prepare_error) {
/*
* Error injection: test what happens if buf_prepare() returns
@@ -220,7 +219,7 @@
void vivid_update_format_out(struct vivid_dev *dev)
{
struct v4l2_bt_timings *bt = &dev->dv_timings_out.bt;
- unsigned size;
+ unsigned size, p;
switch (dev->output_type[dev->output]) {
case SVID:
@@ -249,7 +248,7 @@
dev->field_out = V4L2_FIELD_ALTERNATE;
else
dev->field_out = V4L2_FIELD_NONE;
- if (!dev->dvi_d_out && (bt->standards & V4L2_DV_BT_STD_CEA861)) {
+ if (!dev->dvi_d_out && (bt->flags & V4L2_DV_FL_IS_CE_VIDEO)) {
if (bt->width == 720 && bt->height <= 576)
dev->colorspace_out = V4L2_COLORSPACE_SMPTE170M;
else
@@ -267,9 +266,9 @@
if (V4L2_FIELD_HAS_T_OR_B(dev->field_out))
dev->crop_out.height /= 2;
dev->fmt_out_rect = dev->crop_out;
- dev->bytesperline_out[0] = (dev->sink_rect.width * dev->fmt_out->depth) / 8;
- if (dev->fmt_out->planes == 2)
- dev->bytesperline_out[1] = (dev->sink_rect.width * dev->fmt_out->depth) / 8;
+ for (p = 0; p < dev->fmt_out->planes; p++)
+ dev->bytesperline_out[p] =
+ (dev->sink_rect.width * dev->fmt_out->bit_depth[p]) / 8;
}
/* Map the field to something that is valid for the current output */
@@ -313,21 +312,28 @@
{
struct vivid_dev *dev = video_drvdata(file);
struct v4l2_pix_format_mplane *mp = &f->fmt.pix_mp;
+ const struct vivid_fmt *fmt = dev->fmt_out;
unsigned p;
mp->width = dev->fmt_out_rect.width;
mp->height = dev->fmt_out_rect.height;
mp->field = dev->field_out;
- mp->pixelformat = dev->fmt_out->fourcc;
+ mp->pixelformat = fmt->fourcc;
mp->colorspace = dev->colorspace_out;
mp->ycbcr_enc = dev->ycbcr_enc_out;
mp->quantization = dev->quantization_out;
- mp->num_planes = dev->fmt_out->planes;
+ mp->num_planes = fmt->buffers;
for (p = 0; p < mp->num_planes; p++) {
mp->plane_fmt[p].bytesperline = dev->bytesperline_out[p];
mp->plane_fmt[p].sizeimage =
mp->plane_fmt[p].bytesperline * mp->height;
}
+ for (p = fmt->buffers; p < fmt->planes; p++) {
+ unsigned stride = dev->bytesperline_out[p];
+
+ mp->plane_fmt[0].sizeimage +=
+ (stride * mp->height) / fmt->vdownsampling[p];
+ }
return 0;
}
@@ -386,10 +392,10 @@
/* This driver supports custom bytesperline values */
/* Calculate the minimum supported bytesperline value */
- bytesperline = (mp->width * fmt->depth) >> 3;
+ bytesperline = (mp->width * fmt->bit_depth[0]) >> 3;
/* Calculate the maximum supported bytesperline value */
- max_bpl = (MAX_ZOOM * MAX_WIDTH * fmt->depth) >> 3;
- mp->num_planes = fmt->planes;
+ max_bpl = (MAX_ZOOM * MAX_WIDTH * fmt->bit_depth[0]) >> 3;
+ mp->num_planes = fmt->buffers;
for (p = 0; p < mp->num_planes; p++) {
if (pfmt[p].bytesperline > max_bpl)
pfmt[p].bytesperline = max_bpl;
@@ -398,11 +404,14 @@
pfmt[p].sizeimage = pfmt[p].bytesperline * mp->height;
memset(pfmt[p].reserved, 0, sizeof(pfmt[p].reserved));
}
+ for (p = fmt->buffers; p < fmt->planes; p++)
+ pfmt[0].sizeimage += (pfmt[0].bytesperline * fmt->bit_depth[p]) /
+ (fmt->bit_depth[0] * fmt->vdownsampling[p]);
mp->ycbcr_enc = V4L2_YCBCR_ENC_DEFAULT;
mp->quantization = V4L2_QUANTIZATION_DEFAULT;
if (vivid_is_svid_out(dev)) {
mp->colorspace = V4L2_COLORSPACE_SMPTE170M;
- } else if (dev->dvi_d_out || !(bt->standards & V4L2_DV_BT_STD_CEA861)) {
+ } else if (dev->dvi_d_out || !(bt->flags & V4L2_DV_FL_IS_CE_VIDEO)) {
mp->colorspace = V4L2_COLORSPACE_SRGB;
if (dev->dvi_d_out)
mp->quantization = V4L2_QUANTIZATION_LIM_RANGE;
@@ -429,6 +438,7 @@
struct vb2_queue *q = &dev->vb_vid_out_q;
int ret = vivid_try_fmt_vid_out(file, priv, f);
unsigned factor = 1;
+ unsigned p;
if (ret < 0)
return ret;
@@ -524,9 +534,12 @@
dev->fmt_out_rect.width = mp->width;
dev->fmt_out_rect.height = mp->height;
- dev->bytesperline_out[0] = mp->plane_fmt[0].bytesperline;
- if (mp->num_planes > 1)
- dev->bytesperline_out[1] = mp->plane_fmt[1].bytesperline;
+ for (p = 0; p < mp->num_planes; p++)
+ dev->bytesperline_out[p] = mp->plane_fmt[p].bytesperline;
+ for (p = dev->fmt_out->buffers; p < dev->fmt_out->planes; p++)
+ dev->bytesperline_out[p] =
+ (dev->bytesperline_out[0] * dev->fmt_out->bit_depth[p]) /
+ dev->fmt_out->bit_depth[0];
dev->field_out = mp->field;
if (vivid_is_svid_out(dev))
dev->tv_field_out = mp->field;
@@ -1114,13 +1127,13 @@
if (!vivid_is_hdmi_out(dev))
return -ENODATA;
- if (vb2_is_busy(&dev->vb_vid_out_q))
- return -EBUSY;
if (!v4l2_find_dv_timings_cap(timings, &vivid_dv_timings_cap,
0, NULL, NULL))
return -EINVAL;
if (v4l2_match_dv_timings(timings, &dev->dv_timings_out, 0))
return 0;
+ if (vb2_is_busy(&dev->vb_vid_out_q))
+ return -EBUSY;
dev->dv_timings_out = *timings;
vivid_update_format_out(dev);
return 0;
diff --git a/drivers/media/platform/vsp1/vsp1_bru.c b/drivers/media/platform/vsp1/vsp1_bru.c
index 401e2b7..7dd7633 100644
--- a/drivers/media/platform/vsp1/vsp1_bru.c
+++ b/drivers/media/platform/vsp1/vsp1_bru.c
@@ -183,13 +183,14 @@
*/
static int bru_enum_mbus_code(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_mbus_code_enum *code)
{
static const unsigned int codes[] = {
MEDIA_BUS_FMT_ARGB8888_1X32,
MEDIA_BUS_FMT_AYUV8_1X32,
};
+ struct vsp1_bru *bru = to_bru(subdev);
struct v4l2_mbus_framefmt *format;
if (code->pad == BRU_PAD_SINK(0)) {
@@ -201,7 +202,8 @@
if (code->index)
return -EINVAL;
- format = v4l2_subdev_get_try_format(fh, BRU_PAD_SINK(0));
+ format = vsp1_entity_get_pad_format(&bru->entity, cfg,
+ BRU_PAD_SINK(0), code->which);
code->code = format->code;
}
@@ -209,7 +211,7 @@
}
static int bru_enum_frame_size(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_frame_size_enum *fse)
{
if (fse->index)
@@ -228,12 +230,12 @@
}
static struct v4l2_rect *bru_get_compose(struct vsp1_bru *bru,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
unsigned int pad, u32 which)
{
switch (which) {
case V4L2_SUBDEV_FORMAT_TRY:
- return v4l2_subdev_get_try_crop(fh, pad);
+ return v4l2_subdev_get_try_crop(&bru->entity.subdev, cfg, pad);
case V4L2_SUBDEV_FORMAT_ACTIVE:
return &bru->inputs[pad].compose;
default:
@@ -241,18 +243,18 @@
}
}
-static int bru_get_format(struct v4l2_subdev *subdev, struct v4l2_subdev_fh *fh,
+static int bru_get_format(struct v4l2_subdev *subdev, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct vsp1_bru *bru = to_bru(subdev);
- fmt->format = *vsp1_entity_get_pad_format(&bru->entity, fh, fmt->pad,
+ fmt->format = *vsp1_entity_get_pad_format(&bru->entity, cfg, fmt->pad,
fmt->which);
return 0;
}
-static void bru_try_format(struct vsp1_bru *bru, struct v4l2_subdev_fh *fh,
+static void bru_try_format(struct vsp1_bru *bru, struct v4l2_subdev_pad_config *cfg,
unsigned int pad, struct v4l2_mbus_framefmt *fmt,
enum v4l2_subdev_format_whence which)
{
@@ -268,7 +270,7 @@
default:
/* The BRU can't perform format conversion. */
- format = vsp1_entity_get_pad_format(&bru->entity, fh,
+ format = vsp1_entity_get_pad_format(&bru->entity, cfg,
BRU_PAD_SINK(0), which);
fmt->code = format->code;
break;
@@ -280,15 +282,15 @@
fmt->colorspace = V4L2_COLORSPACE_SRGB;
}
-static int bru_set_format(struct v4l2_subdev *subdev, struct v4l2_subdev_fh *fh,
+static int bru_set_format(struct v4l2_subdev *subdev, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct vsp1_bru *bru = to_bru(subdev);
struct v4l2_mbus_framefmt *format;
- bru_try_format(bru, fh, fmt->pad, &fmt->format, fmt->which);
+ bru_try_format(bru, cfg, fmt->pad, &fmt->format, fmt->which);
- format = vsp1_entity_get_pad_format(&bru->entity, fh, fmt->pad,
+ format = vsp1_entity_get_pad_format(&bru->entity, cfg, fmt->pad,
fmt->which);
*format = fmt->format;
@@ -296,7 +298,7 @@
if (fmt->pad != BRU_PAD_SOURCE) {
struct v4l2_rect *compose;
- compose = bru_get_compose(bru, fh, fmt->pad, fmt->which);
+ compose = bru_get_compose(bru, cfg, fmt->pad, fmt->which);
compose->left = 0;
compose->top = 0;
compose->width = format->width;
@@ -308,7 +310,7 @@
unsigned int i;
for (i = 0; i <= BRU_PAD_SOURCE; ++i) {
- format = vsp1_entity_get_pad_format(&bru->entity, fh,
+ format = vsp1_entity_get_pad_format(&bru->entity, cfg,
i, fmt->which);
format->code = fmt->format.code;
}
@@ -318,7 +320,7 @@
}
static int bru_get_selection(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_selection *sel)
{
struct vsp1_bru *bru = to_bru(subdev);
@@ -335,7 +337,7 @@
return 0;
case V4L2_SEL_TGT_COMPOSE:
- sel->r = *bru_get_compose(bru, fh, sel->pad, sel->which);
+ sel->r = *bru_get_compose(bru, cfg, sel->pad, sel->which);
return 0;
default:
@@ -344,7 +346,7 @@
}
static int bru_set_selection(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_selection *sel)
{
struct vsp1_bru *bru = to_bru(subdev);
@@ -360,7 +362,7 @@
/* The compose rectangle top left corner must be inside the output
* frame.
*/
- format = vsp1_entity_get_pad_format(&bru->entity, fh, BRU_PAD_SOURCE,
+ format = vsp1_entity_get_pad_format(&bru->entity, cfg, BRU_PAD_SOURCE,
sel->which);
sel->r.left = clamp_t(unsigned int, sel->r.left, 0, format->width - 1);
sel->r.top = clamp_t(unsigned int, sel->r.top, 0, format->height - 1);
@@ -368,12 +370,12 @@
/* Scaling isn't supported, the compose rectangle size must be identical
* to the sink format size.
*/
- format = vsp1_entity_get_pad_format(&bru->entity, fh, sel->pad,
+ format = vsp1_entity_get_pad_format(&bru->entity, cfg, sel->pad,
sel->which);
sel->r.width = format->width;
sel->r.height = format->height;
- compose = bru_get_compose(bru, fh, sel->pad, sel->which);
+ compose = bru_get_compose(bru, cfg, sel->pad, sel->which);
*compose = sel->r;
return 0;
diff --git a/drivers/media/platform/vsp1/vsp1_entity.c b/drivers/media/platform/vsp1/vsp1_entity.c
index 79af71d5..a453bb4 100644
--- a/drivers/media/platform/vsp1/vsp1_entity.c
+++ b/drivers/media/platform/vsp1/vsp1_entity.c
@@ -63,12 +63,12 @@
struct v4l2_mbus_framefmt *
vsp1_entity_get_pad_format(struct vsp1_entity *entity,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
unsigned int pad, u32 which)
{
switch (which) {
case V4L2_SUBDEV_FORMAT_TRY:
- return v4l2_subdev_get_try_format(fh, pad);
+ return v4l2_subdev_get_try_format(&entity->subdev, cfg, pad);
case V4L2_SUBDEV_FORMAT_ACTIVE:
return &entity->formats[pad];
default:
@@ -79,14 +79,14 @@
/*
* vsp1_entity_init_formats - Initialize formats on all pads
* @subdev: V4L2 subdevice
- * @fh: V4L2 subdev file handle
+ * @cfg: V4L2 subdev pad configuration
*
- * Initialize all pad formats with default values. If fh is not NULL, try
+ * Initialize all pad formats with default values. If cfg is not NULL, try
* formats are initialized on the file handle. Otherwise active formats are
* initialized on the device.
*/
void vsp1_entity_init_formats(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh)
+ struct v4l2_subdev_pad_config *cfg)
{
struct v4l2_subdev_format format;
unsigned int pad;
@@ -95,17 +95,17 @@
memset(&format, 0, sizeof(format));
format.pad = pad;
- format.which = fh ? V4L2_SUBDEV_FORMAT_TRY
+ format.which = cfg ? V4L2_SUBDEV_FORMAT_TRY
: V4L2_SUBDEV_FORMAT_ACTIVE;
- v4l2_subdev_call(subdev, pad, set_fmt, fh, &format);
+ v4l2_subdev_call(subdev, pad, set_fmt, cfg, &format);
}
}
static int vsp1_entity_open(struct v4l2_subdev *subdev,
struct v4l2_subdev_fh *fh)
{
- vsp1_entity_init_formats(subdev, fh);
+ vsp1_entity_init_formats(subdev, fh->pad);
return 0;
}
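
The vsp1 conversions above replace the v4l2_subdev_fh argument with a v4l2_subdev_pad_config, so TRY formats come from the passed-in pad configuration while ACTIVE formats stay in the entity. A standalone sketch of that selection pattern using stub types (not the real V4L2 structures):

#include <stdio.h>

enum which { FORMAT_TRY, FORMAT_ACTIVE };

struct framefmt { unsigned width, height; };
struct pad_config { struct framefmt try_fmt; };	/* stand-in for per-call pad state */
struct entity { struct framefmt active_fmt; };	/* stand-in for the device state */

static struct framefmt *get_pad_format(struct entity *e, struct pad_config *cfg,
					enum which which)
{
	/* Same shape as vsp1_entity_get_pad_format(): pick the storage by 'which'. */
	return which == FORMAT_TRY ? &cfg->try_fmt : &e->active_fmt;
}

int main(void)
{
	struct entity e = { .active_fmt = { 1920, 1080 } };
	struct pad_config cfg = { .try_fmt = { 1280, 720 } };

	printf("TRY: %ux%u\n", get_pad_format(&e, &cfg, FORMAT_TRY)->width,
	       get_pad_format(&e, &cfg, FORMAT_TRY)->height);
	printf("ACTIVE: %ux%u\n", get_pad_format(&e, &cfg, FORMAT_ACTIVE)->width,
	       get_pad_format(&e, &cfg, FORMAT_ACTIVE)->height);
	return 0;
}

The design point carried by the patch is that the pad configuration is now an explicit parameter rather than being reachable only through an open file handle, which is what lets vsp1_entity_init_formats() take a NULL cfg to initialize the active formats.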
diff --git a/drivers/media/platform/vsp1/vsp1_entity.h b/drivers/media/platform/vsp1/vsp1_entity.h
index aa20aaa..62c768d 100644
--- a/drivers/media/platform/vsp1/vsp1_entity.h
+++ b/drivers/media/platform/vsp1/vsp1_entity.h
@@ -91,10 +91,10 @@
struct v4l2_mbus_framefmt *
vsp1_entity_get_pad_format(struct vsp1_entity *entity,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
unsigned int pad, u32 which);
void vsp1_entity_init_formats(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh);
+ struct v4l2_subdev_pad_config *cfg);
bool vsp1_entity_is_streaming(struct vsp1_entity *entity);
int vsp1_entity_set_streaming(struct vsp1_entity *entity, bool streaming);
diff --git a/drivers/media/platform/vsp1/vsp1_hsit.c b/drivers/media/platform/vsp1/vsp1_hsit.c
index 0bc0471..8ffb817 100644
--- a/drivers/media/platform/vsp1/vsp1_hsit.c
+++ b/drivers/media/platform/vsp1/vsp1_hsit.c
@@ -55,7 +55,7 @@
*/
static int hsit_enum_mbus_code(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_mbus_code_enum *code)
{
struct vsp1_hsit *hsit = to_hsit(subdev);
@@ -73,12 +73,14 @@
}
static int hsit_enum_frame_size(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_frame_size_enum *fse)
{
+ struct vsp1_hsit *hsit = to_hsit(subdev);
struct v4l2_mbus_framefmt *format;
- format = v4l2_subdev_get_try_format(fh, fse->pad);
+ format = vsp1_entity_get_pad_format(&hsit->entity, cfg, fse->pad,
+ fse->which);
if (fse->index || fse->code != format->code)
return -EINVAL;
@@ -102,25 +104,25 @@
}
static int hsit_get_format(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct vsp1_hsit *hsit = to_hsit(subdev);
- fmt->format = *vsp1_entity_get_pad_format(&hsit->entity, fh, fmt->pad,
+ fmt->format = *vsp1_entity_get_pad_format(&hsit->entity, cfg, fmt->pad,
fmt->which);
return 0;
}
static int hsit_set_format(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct vsp1_hsit *hsit = to_hsit(subdev);
struct v4l2_mbus_framefmt *format;
- format = vsp1_entity_get_pad_format(&hsit->entity, fh, fmt->pad,
+ format = vsp1_entity_get_pad_format(&hsit->entity, cfg, fmt->pad,
fmt->which);
if (fmt->pad == HSIT_PAD_SOURCE) {
@@ -143,7 +145,7 @@
fmt->format = *format;
/* Propagate the format to the source pad. */
- format = vsp1_entity_get_pad_format(&hsit->entity, fh, HSIT_PAD_SOURCE,
+ format = vsp1_entity_get_pad_format(&hsit->entity, cfg, HSIT_PAD_SOURCE,
fmt->which);
*format = fmt->format;
format->code = hsit->inverse ? MEDIA_BUS_FMT_ARGB8888_1X32
diff --git a/drivers/media/platform/vsp1/vsp1_lif.c b/drivers/media/platform/vsp1/vsp1_lif.c
index 17a6ca7..39fa5ef 100644
--- a/drivers/media/platform/vsp1/vsp1_lif.c
+++ b/drivers/media/platform/vsp1/vsp1_lif.c
@@ -74,13 +74,14 @@
*/
static int lif_enum_mbus_code(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_mbus_code_enum *code)
{
static const unsigned int codes[] = {
MEDIA_BUS_FMT_ARGB8888_1X32,
MEDIA_BUS_FMT_AYUV8_1X32,
};
+ struct vsp1_lif *lif = to_lif(subdev);
if (code->pad == LIF_PAD_SINK) {
if (code->index >= ARRAY_SIZE(codes))
@@ -96,7 +97,8 @@
if (code->index)
return -EINVAL;
- format = v4l2_subdev_get_try_format(fh, LIF_PAD_SINK);
+ format = vsp1_entity_get_pad_format(&lif->entity, cfg,
+ LIF_PAD_SINK, code->which);
code->code = format->code;
}
@@ -104,12 +106,14 @@
}
static int lif_enum_frame_size(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_frame_size_enum *fse)
{
+ struct vsp1_lif *lif = to_lif(subdev);
struct v4l2_mbus_framefmt *format;
- format = v4l2_subdev_get_try_format(fh, LIF_PAD_SINK);
+ format = vsp1_entity_get_pad_format(&lif->entity, cfg, LIF_PAD_SINK,
+ fse->which);
if (fse->index || fse->code != format->code)
return -EINVAL;
@@ -129,18 +133,18 @@
return 0;
}
-static int lif_get_format(struct v4l2_subdev *subdev, struct v4l2_subdev_fh *fh,
+static int lif_get_format(struct v4l2_subdev *subdev, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct vsp1_lif *lif = to_lif(subdev);
- fmt->format = *vsp1_entity_get_pad_format(&lif->entity, fh, fmt->pad,
+ fmt->format = *vsp1_entity_get_pad_format(&lif->entity, cfg, fmt->pad,
fmt->which);
return 0;
}
-static int lif_set_format(struct v4l2_subdev *subdev, struct v4l2_subdev_fh *fh,
+static int lif_set_format(struct v4l2_subdev *subdev, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct vsp1_lif *lif = to_lif(subdev);
@@ -151,7 +155,7 @@
fmt->format.code != MEDIA_BUS_FMT_AYUV8_1X32)
fmt->format.code = MEDIA_BUS_FMT_AYUV8_1X32;
- format = vsp1_entity_get_pad_format(&lif->entity, fh, fmt->pad,
+ format = vsp1_entity_get_pad_format(&lif->entity, cfg, fmt->pad,
fmt->which);
if (fmt->pad == LIF_PAD_SOURCE) {
@@ -173,7 +177,7 @@
fmt->format = *format;
/* Propagate the format to the source pad. */
- format = vsp1_entity_get_pad_format(&lif->entity, fh, LIF_PAD_SOURCE,
+ format = vsp1_entity_get_pad_format(&lif->entity, cfg, LIF_PAD_SOURCE,
fmt->which);
*format = fmt->format;
diff --git a/drivers/media/platform/vsp1/vsp1_lut.c b/drivers/media/platform/vsp1/vsp1_lut.c
index 6f185c3..656ec27 100644
--- a/drivers/media/platform/vsp1/vsp1_lut.c
+++ b/drivers/media/platform/vsp1/vsp1_lut.c
@@ -82,7 +82,7 @@
*/
static int lut_enum_mbus_code(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_mbus_code_enum *code)
{
static const unsigned int codes[] = {
@@ -90,6 +90,7 @@
MEDIA_BUS_FMT_AHSV8888_1X32,
MEDIA_BUS_FMT_AYUV8_1X32,
};
+ struct vsp1_lut *lut = to_lut(subdev);
struct v4l2_mbus_framefmt *format;
if (code->pad == LUT_PAD_SINK) {
@@ -104,7 +105,8 @@
if (code->index)
return -EINVAL;
- format = v4l2_subdev_get_try_format(fh, LUT_PAD_SINK);
+ format = vsp1_entity_get_pad_format(&lut->entity, cfg,
+ LUT_PAD_SINK, code->which);
code->code = format->code;
}
@@ -112,12 +114,14 @@
}
static int lut_enum_frame_size(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_frame_size_enum *fse)
{
+ struct vsp1_lut *lut = to_lut(subdev);
struct v4l2_mbus_framefmt *format;
- format = v4l2_subdev_get_try_format(fh, fse->pad);
+ format = vsp1_entity_get_pad_format(&lut->entity, cfg,
+ fse->pad, fse->which);
if (fse->index || fse->code != format->code)
return -EINVAL;
@@ -140,18 +144,18 @@
return 0;
}
-static int lut_get_format(struct v4l2_subdev *subdev, struct v4l2_subdev_fh *fh,
+static int lut_get_format(struct v4l2_subdev *subdev, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct vsp1_lut *lut = to_lut(subdev);
- fmt->format = *vsp1_entity_get_pad_format(&lut->entity, fh, fmt->pad,
+ fmt->format = *vsp1_entity_get_pad_format(&lut->entity, cfg, fmt->pad,
fmt->which);
return 0;
}
-static int lut_set_format(struct v4l2_subdev *subdev, struct v4l2_subdev_fh *fh,
+static int lut_set_format(struct v4l2_subdev *subdev, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct vsp1_lut *lut = to_lut(subdev);
@@ -163,7 +167,7 @@
fmt->format.code != MEDIA_BUS_FMT_AYUV8_1X32)
fmt->format.code = MEDIA_BUS_FMT_AYUV8_1X32;
- format = vsp1_entity_get_pad_format(&lut->entity, fh, fmt->pad,
+ format = vsp1_entity_get_pad_format(&lut->entity, cfg, fmt->pad,
fmt->which);
if (fmt->pad == LUT_PAD_SOURCE) {
@@ -182,7 +186,7 @@
fmt->format = *format;
/* Propagate the format to the source pad. */
- format = vsp1_entity_get_pad_format(&lut->entity, fh, LUT_PAD_SOURCE,
+ format = vsp1_entity_get_pad_format(&lut->entity, cfg, LUT_PAD_SOURCE,
fmt->which);
*format = fmt->format;
diff --git a/drivers/media/platform/vsp1/vsp1_rwpf.c b/drivers/media/platform/vsp1/vsp1_rwpf.c
index 1f1ba26..fa71f46 100644
--- a/drivers/media/platform/vsp1/vsp1_rwpf.c
+++ b/drivers/media/platform/vsp1/vsp1_rwpf.c
@@ -25,7 +25,7 @@
*/
int vsp1_rwpf_enum_mbus_code(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_mbus_code_enum *code)
{
static const unsigned int codes[] = {
@@ -42,13 +42,14 @@
}
int vsp1_rwpf_enum_frame_size(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_frame_size_enum *fse)
{
struct vsp1_rwpf *rwpf = to_rwpf(subdev);
struct v4l2_mbus_framefmt *format;
- format = v4l2_subdev_get_try_format(fh, fse->pad);
+ format = vsp1_entity_get_pad_format(&rwpf->entity, cfg, fse->pad,
+ fse->which);
if (fse->index || fse->code != format->code)
return -EINVAL;
@@ -72,11 +73,11 @@
}
static struct v4l2_rect *
-vsp1_rwpf_get_crop(struct vsp1_rwpf *rwpf, struct v4l2_subdev_fh *fh, u32 which)
+vsp1_rwpf_get_crop(struct vsp1_rwpf *rwpf, struct v4l2_subdev_pad_config *cfg, u32 which)
{
switch (which) {
case V4L2_SUBDEV_FORMAT_TRY:
- return v4l2_subdev_get_try_crop(fh, RWPF_PAD_SINK);
+ return v4l2_subdev_get_try_crop(&rwpf->entity.subdev, cfg, RWPF_PAD_SINK);
case V4L2_SUBDEV_FORMAT_ACTIVE:
return &rwpf->crop;
default:
@@ -84,18 +85,18 @@
}
}
-int vsp1_rwpf_get_format(struct v4l2_subdev *subdev, struct v4l2_subdev_fh *fh,
+int vsp1_rwpf_get_format(struct v4l2_subdev *subdev, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct vsp1_rwpf *rwpf = to_rwpf(subdev);
- fmt->format = *vsp1_entity_get_pad_format(&rwpf->entity, fh, fmt->pad,
+ fmt->format = *vsp1_entity_get_pad_format(&rwpf->entity, cfg, fmt->pad,
fmt->which);
return 0;
}
-int vsp1_rwpf_set_format(struct v4l2_subdev *subdev, struct v4l2_subdev_fh *fh,
+int vsp1_rwpf_set_format(struct v4l2_subdev *subdev, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct vsp1_rwpf *rwpf = to_rwpf(subdev);
@@ -107,7 +108,7 @@
fmt->format.code != MEDIA_BUS_FMT_AYUV8_1X32)
fmt->format.code = MEDIA_BUS_FMT_AYUV8_1X32;
- format = vsp1_entity_get_pad_format(&rwpf->entity, fh, fmt->pad,
+ format = vsp1_entity_get_pad_format(&rwpf->entity, cfg, fmt->pad,
fmt->which);
if (fmt->pad == RWPF_PAD_SOURCE) {
@@ -130,14 +131,14 @@
fmt->format = *format;
/* Update the sink crop rectangle. */
- crop = vsp1_rwpf_get_crop(rwpf, fh, fmt->which);
+ crop = vsp1_rwpf_get_crop(rwpf, cfg, fmt->which);
crop->left = 0;
crop->top = 0;
crop->width = fmt->format.width;
crop->height = fmt->format.height;
/* Propagate the format to the source pad. */
- format = vsp1_entity_get_pad_format(&rwpf->entity, fh, RWPF_PAD_SOURCE,
+ format = vsp1_entity_get_pad_format(&rwpf->entity, cfg, RWPF_PAD_SOURCE,
fmt->which);
*format = fmt->format;
@@ -145,7 +146,7 @@
}
int vsp1_rwpf_get_selection(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_selection *sel)
{
struct vsp1_rwpf *rwpf = to_rwpf(subdev);
@@ -157,11 +158,11 @@
switch (sel->target) {
case V4L2_SEL_TGT_CROP:
- sel->r = *vsp1_rwpf_get_crop(rwpf, fh, sel->which);
+ sel->r = *vsp1_rwpf_get_crop(rwpf, cfg, sel->which);
break;
case V4L2_SEL_TGT_CROP_BOUNDS:
- format = vsp1_entity_get_pad_format(&rwpf->entity, fh,
+ format = vsp1_entity_get_pad_format(&rwpf->entity, cfg,
RWPF_PAD_SINK, sel->which);
sel->r.left = 0;
sel->r.top = 0;
@@ -177,7 +178,7 @@
}
int vsp1_rwpf_set_selection(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_selection *sel)
{
struct vsp1_rwpf *rwpf = to_rwpf(subdev);
@@ -194,7 +195,7 @@
/* Make sure the crop rectangle is entirely contained in the image. The
* WPF top and left offsets are limited to 255.
*/
- format = vsp1_entity_get_pad_format(&rwpf->entity, fh, RWPF_PAD_SINK,
+ format = vsp1_entity_get_pad_format(&rwpf->entity, cfg, RWPF_PAD_SINK,
sel->which);
sel->r.left = min_t(unsigned int, sel->r.left, format->width - 2);
sel->r.top = min_t(unsigned int, sel->r.top, format->height - 2);
@@ -207,11 +208,11 @@
sel->r.height = min_t(unsigned int, sel->r.height,
format->height - sel->r.top);
- crop = vsp1_rwpf_get_crop(rwpf, fh, sel->which);
+ crop = vsp1_rwpf_get_crop(rwpf, cfg, sel->which);
*crop = sel->r;
/* Propagate the format to the source pad. */
- format = vsp1_entity_get_pad_format(&rwpf->entity, fh, RWPF_PAD_SOURCE,
+ format = vsp1_entity_get_pad_format(&rwpf->entity, cfg, RWPF_PAD_SOURCE,
sel->which);
format->width = crop->width;
format->height = crop->height;
diff --git a/drivers/media/platform/vsp1/vsp1_rwpf.h b/drivers/media/platform/vsp1/vsp1_rwpf.h
index 2cf1f13..f452dce 100644
--- a/drivers/media/platform/vsp1/vsp1_rwpf.h
+++ b/drivers/media/platform/vsp1/vsp1_rwpf.h
@@ -51,20 +51,20 @@
struct vsp1_rwpf *vsp1_wpf_create(struct vsp1_device *vsp1, unsigned int index);
int vsp1_rwpf_enum_mbus_code(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_mbus_code_enum *code);
int vsp1_rwpf_enum_frame_size(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_frame_size_enum *fse);
-int vsp1_rwpf_get_format(struct v4l2_subdev *subdev, struct v4l2_subdev_fh *fh,
+int vsp1_rwpf_get_format(struct v4l2_subdev *subdev, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt);
-int vsp1_rwpf_set_format(struct v4l2_subdev *subdev, struct v4l2_subdev_fh *fh,
+int vsp1_rwpf_set_format(struct v4l2_subdev *subdev, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt);
int vsp1_rwpf_get_selection(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_selection *sel);
int vsp1_rwpf_set_selection(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_selection *sel);
#endif /* __VSP1_RWPF_H__ */
diff --git a/drivers/media/platform/vsp1/vsp1_sru.c b/drivers/media/platform/vsp1/vsp1_sru.c
index 1129494..6310aca 100644
--- a/drivers/media/platform/vsp1/vsp1_sru.c
+++ b/drivers/media/platform/vsp1/vsp1_sru.c
@@ -166,13 +166,14 @@
*/
static int sru_enum_mbus_code(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_mbus_code_enum *code)
{
static const unsigned int codes[] = {
MEDIA_BUS_FMT_ARGB8888_1X32,
MEDIA_BUS_FMT_AYUV8_1X32,
};
+ struct vsp1_sru *sru = to_sru(subdev);
struct v4l2_mbus_framefmt *format;
if (code->pad == SRU_PAD_SINK) {
@@ -187,7 +188,8 @@
if (code->index)
return -EINVAL;
- format = v4l2_subdev_get_try_format(fh, SRU_PAD_SINK);
+ format = vsp1_entity_get_pad_format(&sru->entity, cfg,
+ SRU_PAD_SINK, code->which);
code->code = format->code;
}
@@ -195,12 +197,14 @@
}
static int sru_enum_frame_size(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_frame_size_enum *fse)
{
+ struct vsp1_sru *sru = to_sru(subdev);
struct v4l2_mbus_framefmt *format;
- format = v4l2_subdev_get_try_format(fh, SRU_PAD_SINK);
+ format = vsp1_entity_get_pad_format(&sru->entity, cfg,
+ SRU_PAD_SINK, fse->which);
if (fse->index || fse->code != format->code)
return -EINVAL;
@@ -226,18 +230,18 @@
return 0;
}
-static int sru_get_format(struct v4l2_subdev *subdev, struct v4l2_subdev_fh *fh,
+static int sru_get_format(struct v4l2_subdev *subdev, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct vsp1_sru *sru = to_sru(subdev);
- fmt->format = *vsp1_entity_get_pad_format(&sru->entity, fh, fmt->pad,
+ fmt->format = *vsp1_entity_get_pad_format(&sru->entity, cfg, fmt->pad,
fmt->which);
return 0;
}
-static void sru_try_format(struct vsp1_sru *sru, struct v4l2_subdev_fh *fh,
+static void sru_try_format(struct vsp1_sru *sru, struct v4l2_subdev_pad_config *cfg,
unsigned int pad, struct v4l2_mbus_framefmt *fmt,
enum v4l2_subdev_format_whence which)
{
@@ -258,7 +262,7 @@
case SRU_PAD_SOURCE:
/* The SRU can't perform format conversion. */
- format = vsp1_entity_get_pad_format(&sru->entity, fh,
+ format = vsp1_entity_get_pad_format(&sru->entity, cfg,
SRU_PAD_SINK, which);
fmt->code = format->code;
@@ -288,25 +292,25 @@
fmt->colorspace = V4L2_COLORSPACE_SRGB;
}
-static int sru_set_format(struct v4l2_subdev *subdev, struct v4l2_subdev_fh *fh,
+static int sru_set_format(struct v4l2_subdev *subdev, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct vsp1_sru *sru = to_sru(subdev);
struct v4l2_mbus_framefmt *format;
- sru_try_format(sru, fh, fmt->pad, &fmt->format, fmt->which);
+ sru_try_format(sru, cfg, fmt->pad, &fmt->format, fmt->which);
- format = vsp1_entity_get_pad_format(&sru->entity, fh, fmt->pad,
+ format = vsp1_entity_get_pad_format(&sru->entity, cfg, fmt->pad,
fmt->which);
*format = fmt->format;
if (fmt->pad == SRU_PAD_SINK) {
/* Propagate the format to the source pad. */
- format = vsp1_entity_get_pad_format(&sru->entity, fh,
+ format = vsp1_entity_get_pad_format(&sru->entity, cfg,
SRU_PAD_SOURCE, fmt->which);
*format = fmt->format;
- sru_try_format(sru, fh, SRU_PAD_SOURCE, format, fmt->which);
+ sru_try_format(sru, cfg, SRU_PAD_SOURCE, format, fmt->which);
}
return 0;
diff --git a/drivers/media/platform/vsp1/vsp1_uds.c b/drivers/media/platform/vsp1/vsp1_uds.c
index a4afec1..ccc8243 100644
--- a/drivers/media/platform/vsp1/vsp1_uds.c
+++ b/drivers/media/platform/vsp1/vsp1_uds.c
@@ -169,13 +169,14 @@
*/
static int uds_enum_mbus_code(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_mbus_code_enum *code)
{
static const unsigned int codes[] = {
MEDIA_BUS_FMT_ARGB8888_1X32,
MEDIA_BUS_FMT_AYUV8_1X32,
};
+ struct vsp1_uds *uds = to_uds(subdev);
if (code->pad == UDS_PAD_SINK) {
if (code->index >= ARRAY_SIZE(codes))
@@ -191,7 +192,8 @@
if (code->index)
return -EINVAL;
- format = v4l2_subdev_get_try_format(fh, UDS_PAD_SINK);
+ format = vsp1_entity_get_pad_format(&uds->entity, cfg,
+ UDS_PAD_SINK, code->which);
code->code = format->code;
}
@@ -199,12 +201,14 @@
}
static int uds_enum_frame_size(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_frame_size_enum *fse)
{
+ struct vsp1_uds *uds = to_uds(subdev);
struct v4l2_mbus_framefmt *format;
- format = v4l2_subdev_get_try_format(fh, UDS_PAD_SINK);
+ format = vsp1_entity_get_pad_format(&uds->entity, cfg,
+ UDS_PAD_SINK, fse->which);
if (fse->index || fse->code != format->code)
return -EINVAL;
@@ -224,18 +228,18 @@
return 0;
}
-static int uds_get_format(struct v4l2_subdev *subdev, struct v4l2_subdev_fh *fh,
+static int uds_get_format(struct v4l2_subdev *subdev, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct vsp1_uds *uds = to_uds(subdev);
- fmt->format = *vsp1_entity_get_pad_format(&uds->entity, fh, fmt->pad,
+ fmt->format = *vsp1_entity_get_pad_format(&uds->entity, cfg, fmt->pad,
fmt->which);
return 0;
}
-static void uds_try_format(struct vsp1_uds *uds, struct v4l2_subdev_fh *fh,
+static void uds_try_format(struct vsp1_uds *uds, struct v4l2_subdev_pad_config *cfg,
unsigned int pad, struct v4l2_mbus_framefmt *fmt,
enum v4l2_subdev_format_whence which)
{
@@ -256,7 +260,7 @@
case UDS_PAD_SOURCE:
/* The UDS scales but can't perform format conversion. */
- format = vsp1_entity_get_pad_format(&uds->entity, fh,
+ format = vsp1_entity_get_pad_format(&uds->entity, cfg,
UDS_PAD_SINK, which);
fmt->code = format->code;
@@ -271,25 +275,25 @@
fmt->colorspace = V4L2_COLORSPACE_SRGB;
}
-static int uds_set_format(struct v4l2_subdev *subdev, struct v4l2_subdev_fh *fh,
+static int uds_set_format(struct v4l2_subdev *subdev, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct vsp1_uds *uds = to_uds(subdev);
struct v4l2_mbus_framefmt *format;
- uds_try_format(uds, fh, fmt->pad, &fmt->format, fmt->which);
+ uds_try_format(uds, cfg, fmt->pad, &fmt->format, fmt->which);
- format = vsp1_entity_get_pad_format(&uds->entity, fh, fmt->pad,
+ format = vsp1_entity_get_pad_format(&uds->entity, cfg, fmt->pad,
fmt->which);
*format = fmt->format;
if (fmt->pad == UDS_PAD_SINK) {
/* Propagate the format to the source pad. */
- format = vsp1_entity_get_pad_format(&uds->entity, fh,
+ format = vsp1_entity_get_pad_format(&uds->entity, cfg,
UDS_PAD_SOURCE, fmt->which);
*format = fmt->format;
- uds_try_format(uds, fh, UDS_PAD_SOURCE, format, fmt->which);
+ uds_try_format(uds, cfg, UDS_PAD_SOURCE, format, fmt->which);
}
return 0;
diff --git a/drivers/media/platform/xilinx/Kconfig b/drivers/media/platform/xilinx/Kconfig
new file mode 100644
index 0000000..d7324c7
--- /dev/null
+++ b/drivers/media/platform/xilinx/Kconfig
@@ -0,0 +1,23 @@
+config VIDEO_XILINX
+ tristate "Xilinx Video IP (EXPERIMENTAL)"
+ depends on VIDEO_V4L2 && VIDEO_V4L2_SUBDEV_API && OF
+ select VIDEOBUF2_DMA_CONTIG
+ ---help---
+ Driver for Xilinx Video IP Pipelines
+
+if VIDEO_XILINX
+
+config VIDEO_XILINX_TPG
+ tristate "Xilinx Video Test Pattern Generator"
+ depends on VIDEO_XILINX
+ select VIDEO_XILINX_VTC
+ ---help---
+ Driver for the Xilinx Video Test Pattern Generator
+
+config VIDEO_XILINX_VTC
+ tristate "Xilinx Video Timing Controller"
+ depends on VIDEO_XILINX
+ ---help---
+ Driver for the Xilinx Video Timing Controller
+
+endif #VIDEO_XILINX
diff --git a/drivers/media/platform/xilinx/Makefile b/drivers/media/platform/xilinx/Makefile
new file mode 100644
index 0000000..e8a0f2a
--- /dev/null
+++ b/drivers/media/platform/xilinx/Makefile
@@ -0,0 +1,5 @@
+xilinx-video-objs += xilinx-dma.o xilinx-vip.o xilinx-vipp.o
+
+obj-$(CONFIG_VIDEO_XILINX) += xilinx-video.o
+obj-$(CONFIG_VIDEO_XILINX_TPG) += xilinx-tpg.o
+obj-$(CONFIG_VIDEO_XILINX_VTC) += xilinx-vtc.o
diff --git a/drivers/media/platform/xilinx/xilinx-dma.c b/drivers/media/platform/xilinx/xilinx-dma.c
new file mode 100644
index 0000000..10209c2
--- /dev/null
+++ b/drivers/media/platform/xilinx/xilinx-dma.c
@@ -0,0 +1,766 @@
+/*
+ * Xilinx Video DMA
+ *
+ * Copyright (C) 2013-2015 Ideas on Board
+ * Copyright (C) 2013-2015 Xilinx, Inc.
+ *
+ * Contacts: Hyun Kwon <hyun.kwon@xilinx.com>
+ * Laurent Pinchart <laurent.pinchart@ideasonboard.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/amba/xilinx_dma.h>
+#include <linux/lcm.h>
+#include <linux/list.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/slab.h>
+
+#include <media/v4l2-dev.h>
+#include <media/v4l2-fh.h>
+#include <media/v4l2-ioctl.h>
+#include <media/videobuf2-core.h>
+#include <media/videobuf2-dma-contig.h>
+
+#include "xilinx-dma.h"
+#include "xilinx-vip.h"
+#include "xilinx-vipp.h"
+
+#define XVIP_DMA_DEF_FORMAT V4L2_PIX_FMT_YUYV
+#define XVIP_DMA_DEF_WIDTH 1920
+#define XVIP_DMA_DEF_HEIGHT 1080
+
+/* Minimum and maximum widths are expressed in bytes */
+#define XVIP_DMA_MIN_WIDTH 1U
+#define XVIP_DMA_MAX_WIDTH 65535U
+#define XVIP_DMA_MIN_HEIGHT 1U
+#define XVIP_DMA_MAX_HEIGHT 8191U
+
+/* -----------------------------------------------------------------------------
+ * Helper functions
+ */
+
+static struct v4l2_subdev *
+xvip_dma_remote_subdev(struct media_pad *local, u32 *pad)
+{
+ struct media_pad *remote;
+
+ remote = media_entity_remote_pad(local);
+ if (remote == NULL ||
+ media_entity_type(remote->entity) != MEDIA_ENT_T_V4L2_SUBDEV)
+ return NULL;
+
+ if (pad)
+ *pad = remote->index;
+
+ return media_entity_to_v4l2_subdev(remote->entity);
+}
+
+static int xvip_dma_verify_format(struct xvip_dma *dma)
+{
+ struct v4l2_subdev_format fmt;
+ struct v4l2_subdev *subdev;
+ int ret;
+
+ subdev = xvip_dma_remote_subdev(&dma->pad, &fmt.pad);
+ if (subdev == NULL)
+ return -EPIPE;
+
+ fmt.which = V4L2_SUBDEV_FORMAT_ACTIVE;
+ ret = v4l2_subdev_call(subdev, pad, get_fmt, NULL, &fmt);
+ if (ret < 0)
+ return ret == -ENOIOCTLCMD ? -EINVAL : ret;
+
+ if (dma->fmtinfo->code != fmt.format.code ||
+ dma->format.height != fmt.format.height ||
+ dma->format.width != fmt.format.width ||
+ dma->format.colorspace != fmt.format.colorspace)
+ return -EINVAL;
+
+ return 0;
+}
+
+/* -----------------------------------------------------------------------------
+ * Pipeline Stream Management
+ */
+
+/**
+ * xvip_pipeline_start_stop - Start or stop streaming on a pipeline
+ * @pipe: The pipeline
+ * @start: Start (when true) or stop (when false) the pipeline
+ *
+ * Walk the entities chain starting at the pipeline output video node and start
+ * or stop all of them.
+ *
+ * Return: 0 if successful, or the return value of the failed video::s_stream
+ * operation otherwise.
+ */
+static int xvip_pipeline_start_stop(struct xvip_pipeline *pipe, bool start)
+{
+ struct xvip_dma *dma = pipe->output;
+ struct media_entity *entity;
+ struct media_pad *pad;
+ struct v4l2_subdev *subdev;
+ int ret;
+
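+ /* Walk upstream from the video node through each entity's sink pad. */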
+ entity = &dma->video.entity;
+ while (1) {
+ pad = &entity->pads[0];
+ if (!(pad->flags & MEDIA_PAD_FL_SINK))
+ break;
+
+ pad = media_entity_remote_pad(pad);
+ if (pad == NULL ||
+ media_entity_type(pad->entity) != MEDIA_ENT_T_V4L2_SUBDEV)
+ break;
+
+ entity = pad->entity;
+ subdev = media_entity_to_v4l2_subdev(entity);
+
+ ret = v4l2_subdev_call(subdev, video, s_stream, start);
+ if (start && ret < 0 && ret != -ENOIOCTLCMD)
+ return ret;
+ }
+
+ return 0;
+}
+
+/**
+ * xvip_pipeline_set_stream - Enable/disable streaming on a pipeline
+ * @pipe: The pipeline
+ * @on: Turn the stream on when true or off when false
+ *
+ * The pipeline is shared between all DMA engines connected at its input and
+ * output. While the stream state of DMA engines can be controlled
+ * independently, pipelines have a shared stream state that enables or disables
+ * all entities in the pipeline. For this reason the pipeline uses a streaming
+ * counter that tracks the number of DMA engines that have requested the stream
+ * to be enabled.
+ *
+ * When called with the @on argument set to true, this function will increment
+ * the pipeline streaming count. If the streaming count reaches the number of
+ * DMA engines in the pipeline it will enable all entities that belong to the
+ * pipeline.
+ *
+ * Similarly, when called with the @on argument set to false, this function will
+ * decrement the pipeline streaming count and disable all entities in the
+ * pipeline when the streaming count reaches zero.
+ *
+ * Return: 0 if successful, or the return value of the failed video::s_stream
+ * operation otherwise. Stopping the pipeline never fails. The pipeline state is
+ * not updated when the operation fails.
+ */
+static int xvip_pipeline_set_stream(struct xvip_pipeline *pipe, bool on)
+{
+ int ret = 0;
+
+ mutex_lock(&pipe->lock);
+
+ if (on) {
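+ /* Start the entities when the last DMA engine requests streaming. */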
+ if (pipe->stream_count == pipe->num_dmas - 1) {
+ ret = xvip_pipeline_start_stop(pipe, true);
+ if (ret < 0)
+ goto done;
+ }
+ pipe->stream_count++;
+ } else {
+ if (--pipe->stream_count == 0)
+ xvip_pipeline_start_stop(pipe, false);
+ }
+
+done:
+ mutex_unlock(&pipe->lock);
+ return ret;
+}
+
+static int xvip_pipeline_validate(struct xvip_pipeline *pipe,
+ struct xvip_dma *start)
+{
+ struct media_entity_graph graph;
+ struct media_entity *entity = &start->video.entity;
+ struct media_device *mdev = entity->parent;
+ unsigned int num_inputs = 0;
+ unsigned int num_outputs = 0;
+
+ mutex_lock(&mdev->graph_mutex);
+
+ /* Walk the graph to locate the video nodes. */
+ media_entity_graph_walk_start(&graph, entity);
+
+ while ((entity = media_entity_graph_walk_next(&graph))) {
+ struct xvip_dma *dma;
+
+ if (entity->type != MEDIA_ENT_T_DEVNODE_V4L)
+ continue;
+
+ dma = to_xvip_dma(media_entity_to_video_device(entity));
+
+ if (dma->pad.flags & MEDIA_PAD_FL_SINK) {
+ pipe->output = dma;
+ num_outputs++;
+ } else {
+ num_inputs++;
+ }
+ }
+
+ mutex_unlock(&mdev->graph_mutex);
+
+ /* We need exactly one output and zero or one input. */
+ if (num_outputs != 1 || num_inputs > 1)
+ return -EPIPE;
+
+ pipe->num_dmas = num_inputs + num_outputs;
+
+ return 0;
+}
+
+static void __xvip_pipeline_cleanup(struct xvip_pipeline *pipe)
+{
+ pipe->num_dmas = 0;
+ pipe->output = NULL;
+}
+
+/**
+ * xvip_pipeline_cleanup - Cleanup the pipeline after streaming
+ * @pipe: the pipeline
+ *
+ * Decrease the pipeline use count and clean it up if we were the last user.
+ */
+static void xvip_pipeline_cleanup(struct xvip_pipeline *pipe)
+{
+ mutex_lock(&pipe->lock);
+
+ /* If we're the last user clean up the pipeline. */
+ if (--pipe->use_count == 0)
+ __xvip_pipeline_cleanup(pipe);
+
+ mutex_unlock(&pipe->lock);
+}
+
+/**
+ * xvip_pipeline_prepare - Prepare the pipeline for streaming
+ * @pipe: the pipeline
+ * @dma: DMA engine at one end of the pipeline
+ *
+ * Validate the pipeline if no user exists yet, otherwise just increase the use
+ * count.
+ *
+ * Return: 0 if successful or -EPIPE if the pipeline is not valid.
+ */
+static int xvip_pipeline_prepare(struct xvip_pipeline *pipe,
+ struct xvip_dma *dma)
+{
+ int ret;
+
+ mutex_lock(&pipe->lock);
+
+ /* If we're the first user validate and initialize the pipeline. */
+ if (pipe->use_count == 0) {
+ ret = xvip_pipeline_validate(pipe, dma);
+ if (ret < 0) {
+ __xvip_pipeline_cleanup(pipe);
+ goto done;
+ }
+ }
+
+ pipe->use_count++;
+ ret = 0;
+
+done:
+ mutex_unlock(&pipe->lock);
+ return ret;
+}
+
+/* -----------------------------------------------------------------------------
+ * videobuf2 queue operations
+ */
+
+/**
+ * struct xvip_dma_buffer - Video DMA buffer
+ * @buf: vb2 buffer base object
+ * @queue: buffer list entry in the DMA engine queued buffers list
+ * @dma: DMA channel that uses the buffer
+ */
+struct xvip_dma_buffer {
+ struct vb2_buffer buf;
+ struct list_head queue;
+ struct xvip_dma *dma;
+};
+
+#define to_xvip_dma_buffer(vb) container_of(vb, struct xvip_dma_buffer, buf)
+
+static void xvip_dma_complete(void *param)
+{
+ struct xvip_dma_buffer *buf = param;
+ struct xvip_dma *dma = buf->dma;
+
+ spin_lock(&dma->queued_lock);
+ list_del(&buf->queue);
+ spin_unlock(&dma->queued_lock);
+
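+ /* Fill in the buffer metadata and return the completed frame to vb2. */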
+ buf->buf.v4l2_buf.field = V4L2_FIELD_NONE;
+ buf->buf.v4l2_buf.sequence = dma->sequence++;
+ v4l2_get_timestamp(&buf->buf.v4l2_buf.timestamp);
+ vb2_set_plane_payload(&buf->buf, 0, dma->format.sizeimage);
+ vb2_buffer_done(&buf->buf, VB2_BUF_STATE_DONE);
+}
+
+static int
+xvip_dma_queue_setup(struct vb2_queue *vq, const struct v4l2_format *fmt,
+ unsigned int *nbuffers, unsigned int *nplanes,
+ unsigned int sizes[], void *alloc_ctxs[])
+{
+ struct xvip_dma *dma = vb2_get_drv_priv(vq);
+
+ /* Make sure the image size is large enough. */
+ if (fmt && fmt->fmt.pix.sizeimage < dma->format.sizeimage)
+ return -EINVAL;
+
+ *nplanes = 1;
+
+ sizes[0] = fmt ? fmt->fmt.pix.sizeimage : dma->format.sizeimage;
+ alloc_ctxs[0] = dma->alloc_ctx;
+
+ return 0;
+}
+
+static int xvip_dma_buffer_prepare(struct vb2_buffer *vb)
+{
+ struct xvip_dma *dma = vb2_get_drv_priv(vb->vb2_queue);
+ struct xvip_dma_buffer *buf = to_xvip_dma_buffer(vb);
+
+ buf->dma = dma;
+
+ return 0;
+}
+
+static void xvip_dma_buffer_queue(struct vb2_buffer *vb)
+{
+ struct xvip_dma *dma = vb2_get_drv_priv(vb->vb2_queue);
+ struct xvip_dma_buffer *buf = to_xvip_dma_buffer(vb);
+ struct dma_async_tx_descriptor *desc;
+ dma_addr_t addr = vb2_dma_contig_plane_dma_addr(vb, 0);
+ u32 flags;
+
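+ /* Capture runs device-to-memory, output runs memory-to-device. */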
+ if (dma->queue.type == V4L2_BUF_TYPE_VIDEO_CAPTURE) {
+ flags = DMA_PREP_INTERRUPT | DMA_CTRL_ACK;
+ dma->xt.dir = DMA_DEV_TO_MEM;
+ dma->xt.src_sgl = false;
+ dma->xt.dst_sgl = true;
+ dma->xt.dst_start = addr;
+ } else {
+ flags = DMA_PREP_INTERRUPT | DMA_CTRL_ACK;
+ dma->xt.dir = DMA_MEM_TO_DEV;
+ dma->xt.src_sgl = true;
+ dma->xt.dst_sgl = false;
+ dma->xt.src_start = addr;
+ }
+
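+ /* One chunk per line: size covers the active pixels, icg the line
+ * padding, and numf the number of lines.
+ */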
+ dma->xt.frame_size = 1;
+ dma->sgl[0].size = dma->format.width * dma->fmtinfo->bpp;
+ dma->sgl[0].icg = dma->format.bytesperline - dma->sgl[0].size;
+ dma->xt.numf = dma->format.height;
+
+ desc = dmaengine_prep_interleaved_dma(dma->dma, &dma->xt, flags);
+ if (!desc) {
+ dev_err(dma->xdev->dev, "Failed to prepare DMA transfer\n");
+ vb2_buffer_done(&buf->buf, VB2_BUF_STATE_ERROR);
+ return;
+ }
+ desc->callback = xvip_dma_complete;
+ desc->callback_param = buf;
+
+ spin_lock_irq(&dma->queued_lock);
+ list_add_tail(&buf->queue, &dma->queued_bufs);
+ spin_unlock_irq(&dma->queued_lock);
+
+ dmaengine_submit(desc);
+
+ if (vb2_is_streaming(&dma->queue))
+ dma_async_issue_pending(dma->dma);
+}
+
+static int xvip_dma_start_streaming(struct vb2_queue *vq, unsigned int count)
+{
+ struct xvip_dma *dma = vb2_get_drv_priv(vq);
+ struct xvip_dma_buffer *buf, *nbuf;
+ struct xvip_pipeline *pipe;
+ int ret;
+
+ dma->sequence = 0;
+
+ /*
+ * Start streaming on the pipeline. No link touching an entity in the
+ * pipeline can be activated or deactivated once streaming is started.
+ *
+ * Use the pipeline object embedded in the first DMA object that starts
+ * streaming.
+ */
+ pipe = dma->video.entity.pipe
+ ? to_xvip_pipeline(&dma->video.entity) : &dma->pipe;
+
+ ret = media_entity_pipeline_start(&dma->video.entity, &pipe->pipe);
+ if (ret < 0)
+ goto error;
+
+ /* Verify that the configured format matches the output of the
+ * connected subdev.
+ */
+ ret = xvip_dma_verify_format(dma);
+ if (ret < 0)
+ goto error_stop;
+
+ ret = xvip_pipeline_prepare(pipe, dma);
+ if (ret < 0)
+ goto error_stop;
+
+ /* Start the DMA engine. This must be done before starting the blocks
+ * in the pipeline to avoid DMA synchronization issues.
+ */
+ dma_async_issue_pending(dma->dma);
+
+ /* Start the pipeline. */
+ xvip_pipeline_set_stream(pipe, true);
+
+ return 0;
+
+error_stop:
+ media_entity_pipeline_stop(&dma->video.entity);
+
+error:
+ /* Give back all queued buffers to videobuf2. */
+ spin_lock_irq(&dma->queued_lock);
+ list_for_each_entry_safe(buf, nbuf, &dma->queued_bufs, queue) {
+ vb2_buffer_done(&buf->buf, VB2_BUF_STATE_QUEUED);
+ list_del(&buf->queue);
+ }
+ spin_unlock_irq(&dma->queued_lock);
+
+ return ret;
+}
+
+static void xvip_dma_stop_streaming(struct vb2_queue *vq)
+{
+ struct xvip_dma *dma = vb2_get_drv_priv(vq);
+ struct xvip_pipeline *pipe = to_xvip_pipeline(&dma->video.entity);
+ struct xvip_dma_buffer *buf, *nbuf;
+
+ /* Stop the pipeline. */
+ xvip_pipeline_set_stream(pipe, false);
+
+ /* Stop and reset the DMA engine. */
+ dmaengine_terminate_all(dma->dma);
+
+ /* Cleanup the pipeline and mark it as being stopped. */
+ xvip_pipeline_cleanup(pipe);
+ media_entity_pipeline_stop(&dma->video.entity);
+
+ /* Give back all queued buffers to videobuf2. */
+ spin_lock_irq(&dma->queued_lock);
+ list_for_each_entry_safe(buf, nbuf, &dma->queued_bufs, queue) {
+ vb2_buffer_done(&buf->buf, VB2_BUF_STATE_ERROR);
+ list_del(&buf->queue);
+ }
+ spin_unlock_irq(&dma->queued_lock);
+}
+
+static struct vb2_ops xvip_dma_queue_qops = {
+ .queue_setup = xvip_dma_queue_setup,
+ .buf_prepare = xvip_dma_buffer_prepare,
+ .buf_queue = xvip_dma_buffer_queue,
+ .wait_prepare = vb2_ops_wait_prepare,
+ .wait_finish = vb2_ops_wait_finish,
+ .start_streaming = xvip_dma_start_streaming,
+ .stop_streaming = xvip_dma_stop_streaming,
+};
+
+/* -----------------------------------------------------------------------------
+ * V4L2 ioctls
+ */
+
+static int
+xvip_dma_querycap(struct file *file, void *fh, struct v4l2_capability *cap)
+{
+ struct v4l2_fh *vfh = file->private_data;
+ struct xvip_dma *dma = to_xvip_dma(vfh->vdev);
+
+ cap->capabilities = V4L2_CAP_DEVICE_CAPS | V4L2_CAP_STREAMING
+ | dma->xdev->v4l2_caps;
+
+ if (dma->queue.type == V4L2_BUF_TYPE_VIDEO_CAPTURE)
+ cap->device_caps = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING;
+ else
+ cap->device_caps = V4L2_CAP_VIDEO_OUTPUT | V4L2_CAP_STREAMING;
+
+ strlcpy(cap->driver, "xilinx-vipp", sizeof(cap->driver));
+ strlcpy(cap->card, dma->video.name, sizeof(cap->card));
+ snprintf(cap->bus_info, sizeof(cap->bus_info), "platform:%s:%u",
+ dma->xdev->dev->of_node->name, dma->port);
+
+ return 0;
+}
+
+/* FIXME: without this callback function, some applications are not configured
+ * with correct formats, which results in frames in the wrong format. Whether
+ * this callback needs to be mandatory is not clearly defined, so it should be
+ * clarified through the mailing list.
+ */
+static int
+xvip_dma_enum_format(struct file *file, void *fh, struct v4l2_fmtdesc *f)
+{
+ struct v4l2_fh *vfh = file->private_data;
+ struct xvip_dma *dma = to_xvip_dma(vfh->vdev);
+
+ if (f->index > 0)
+ return -EINVAL;
+
+ f->pixelformat = dma->format.pixelformat;
+ strlcpy(f->description, dma->fmtinfo->description,
+ sizeof(f->description));
+
+ return 0;
+}
+
+static int
+xvip_dma_get_format(struct file *file, void *fh, struct v4l2_format *format)
+{
+ struct v4l2_fh *vfh = file->private_data;
+ struct xvip_dma *dma = to_xvip_dma(vfh->vdev);
+
+ format->fmt.pix = dma->format;
+
+ return 0;
+}
+
+static void
+__xvip_dma_try_format(struct xvip_dma *dma, struct v4l2_pix_format *pix,
+ const struct xvip_video_format **fmtinfo)
+{
+ const struct xvip_video_format *info;
+ unsigned int min_width;
+ unsigned int max_width;
+ unsigned int min_bpl;
+ unsigned int max_bpl;
+ unsigned int width;
+ unsigned int align;
+ unsigned int bpl;
+
+ /* Retrieve format information and select the default format if the
+ * requested format isn't supported.
+ */
+ info = xvip_get_format_by_fourcc(pix->pixelformat);
+ if (IS_ERR(info))
+ info = xvip_get_format_by_fourcc(XVIP_DMA_DEF_FORMAT);
+
+ pix->pixelformat = info->fourcc;
+ pix->field = V4L2_FIELD_NONE;
+
+ /* The transfer alignment requirements are expressed in bytes. Compute
+ * the minimum and maximum values, clamp the requested width and convert
+ * it back to pixels.
+ */
+ align = lcm(dma->align, info->bpp);
+ min_width = roundup(XVIP_DMA_MIN_WIDTH, align);
+ max_width = rounddown(XVIP_DMA_MAX_WIDTH, align);
+ width = rounddown(pix->width * info->bpp, align);
+
+ pix->width = clamp(width, min_width, max_width) / info->bpp;
+ pix->height = clamp(pix->height, XVIP_DMA_MIN_HEIGHT,
+ XVIP_DMA_MAX_HEIGHT);
+
+ /* Clamp the requested bytes per line value. If the maximum bytes per
+ * line value is zero, the module doesn't support user configurable line
+ * sizes. Override the requested value with the minimum in that case.
+ */
+ min_bpl = pix->width * info->bpp;
+ max_bpl = rounddown(XVIP_DMA_MAX_WIDTH, dma->align);
+ bpl = rounddown(pix->bytesperline, dma->align);
+
+ pix->bytesperline = clamp(bpl, min_bpl, max_bpl);
+ pix->sizeimage = pix->bytesperline * pix->height;
+
+ if (fmtinfo)
+ *fmtinfo = info;
+}
+
+static int
+xvip_dma_try_format(struct file *file, void *fh, struct v4l2_format *format)
+{
+ struct v4l2_fh *vfh = file->private_data;
+ struct xvip_dma *dma = to_xvip_dma(vfh->vdev);
+
+ __xvip_dma_try_format(dma, &format->fmt.pix, NULL);
+ return 0;
+}
+
+static int
+xvip_dma_set_format(struct file *file, void *fh, struct v4l2_format *format)
+{
+ struct v4l2_fh *vfh = file->private_data;
+ struct xvip_dma *dma = to_xvip_dma(vfh->vdev);
+ const struct xvip_video_format *info;
+
+ __xvip_dma_try_format(dma, &format->fmt.pix, &info);
+
+ if (vb2_is_busy(&dma->queue))
+ return -EBUSY;
+
+ dma->format = format->fmt.pix;
+ dma->fmtinfo = info;
+
+ return 0;
+}
+
+static const struct v4l2_ioctl_ops xvip_dma_ioctl_ops = {
+ .vidioc_querycap = xvip_dma_querycap,
+ .vidioc_enum_fmt_vid_cap = xvip_dma_enum_format,
+ .vidioc_g_fmt_vid_cap = xvip_dma_get_format,
+ .vidioc_g_fmt_vid_out = xvip_dma_get_format,
+ .vidioc_s_fmt_vid_cap = xvip_dma_set_format,
+ .vidioc_s_fmt_vid_out = xvip_dma_set_format,
+ .vidioc_try_fmt_vid_cap = xvip_dma_try_format,
+ .vidioc_try_fmt_vid_out = xvip_dma_try_format,
+ .vidioc_reqbufs = vb2_ioctl_reqbufs,
+ .vidioc_querybuf = vb2_ioctl_querybuf,
+ .vidioc_qbuf = vb2_ioctl_qbuf,
+ .vidioc_dqbuf = vb2_ioctl_dqbuf,
+ .vidioc_create_bufs = vb2_ioctl_create_bufs,
+ .vidioc_expbuf = vb2_ioctl_expbuf,
+ .vidioc_streamon = vb2_ioctl_streamon,
+ .vidioc_streamoff = vb2_ioctl_streamoff,
+};
+
+/* -----------------------------------------------------------------------------
+ * V4L2 file operations
+ */
+
+static const struct v4l2_file_operations xvip_dma_fops = {
+ .owner = THIS_MODULE,
+ .unlocked_ioctl = video_ioctl2,
+ .open = v4l2_fh_open,
+ .release = vb2_fop_release,
+ .poll = vb2_fop_poll,
+ .mmap = vb2_fop_mmap,
+};
+
+/* -----------------------------------------------------------------------------
+ * Xilinx Video DMA Core
+ */
+
+int xvip_dma_init(struct xvip_composite_device *xdev, struct xvip_dma *dma,
+ enum v4l2_buf_type type, unsigned int port)
+{
+ char name[14];
+ int ret;
+
+ dma->xdev = xdev;
+ dma->port = port;
+ mutex_init(&dma->lock);
+ mutex_init(&dma->pipe.lock);
+ INIT_LIST_HEAD(&dma->queued_bufs);
+ spin_lock_init(&dma->queued_lock);
+
+ dma->fmtinfo = xvip_get_format_by_fourcc(XVIP_DMA_DEF_FORMAT);
+ dma->format.pixelformat = dma->fmtinfo->fourcc;
+ dma->format.colorspace = V4L2_COLORSPACE_SRGB;
+ dma->format.field = V4L2_FIELD_NONE;
+ dma->format.width = XVIP_DMA_DEF_WIDTH;
+ dma->format.height = XVIP_DMA_DEF_HEIGHT;
+ dma->format.bytesperline = dma->format.width * dma->fmtinfo->bpp;
+ dma->format.sizeimage = dma->format.bytesperline * dma->format.height;
+
+ /* Initialize the media entity... */
+ dma->pad.flags = type == V4L2_BUF_TYPE_VIDEO_CAPTURE
+ ? MEDIA_PAD_FL_SINK : MEDIA_PAD_FL_SOURCE;
+
+ ret = media_entity_init(&dma->video.entity, 1, &dma->pad, 0);
+ if (ret < 0)
+ goto error;
+
+ /* ... and the video node... */
+ dma->video.fops = &xvip_dma_fops;
+ dma->video.v4l2_dev = &xdev->v4l2_dev;
+ dma->video.queue = &dma->queue;
+ snprintf(dma->video.name, sizeof(dma->video.name), "%s %s %u",
+ xdev->dev->of_node->name,
+ type == V4L2_BUF_TYPE_VIDEO_CAPTURE ? "output" : "input",
+ port);
+ dma->video.vfl_type = VFL_TYPE_GRABBER;
+ dma->video.vfl_dir = type == V4L2_BUF_TYPE_VIDEO_CAPTURE
+ ? VFL_DIR_RX : VFL_DIR_TX;
+ dma->video.release = video_device_release_empty;
+ dma->video.ioctl_ops = &xvip_dma_ioctl_ops;
+ dma->video.lock = &dma->lock;
+
+ video_set_drvdata(&dma->video, dma);
+
+ /* ... and the buffers queue... */
+ dma->alloc_ctx = vb2_dma_contig_init_ctx(dma->xdev->dev);
+ if (IS_ERR(dma->alloc_ctx))
+ goto error;
+
+ /* Don't enable VB2_READ and VB2_WRITE, as using the read() and write()
+ * V4L2 APIs would be inefficient. Testing on the command line with a
+ * 'cat /dev/video?' thus won't be possible, but given that the driver
+ * anyway requires a test tool to set up the pipeline before any video
+ * stream can be started, requiring a specific V4L2 test tool as well
+ * instead of 'cat' isn't really a drawback.
+ */
+ dma->queue.type = type;
+ dma->queue.io_modes = VB2_MMAP | VB2_USERPTR | VB2_DMABUF;
+ dma->queue.lock = &dma->lock;
+ dma->queue.drv_priv = dma;
+ dma->queue.buf_struct_size = sizeof(struct xvip_dma_buffer);
+ dma->queue.ops = &xvip_dma_queue_qops;
+ dma->queue.mem_ops = &vb2_dma_contig_memops;
+ dma->queue.timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC
+ | V4L2_BUF_FLAG_TSTAMP_SRC_EOF;
+ ret = vb2_queue_init(&dma->queue);
+ if (ret < 0) {
+ dev_err(dma->xdev->dev, "failed to initialize VB2 queue\n");
+ goto error;
+ }
+
+ /* ... and the DMA channel. */
+ sprintf(name, "port%u", port);
+ dma->dma = dma_request_slave_channel(dma->xdev->dev, name);
+ if (dma->dma == NULL) {
+ dev_err(dma->xdev->dev, "no VDMA channel found\n");
+ ret = -ENODEV;
+ goto error;
+ }
+
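+ /* The DMA channel reports its required alignment as log2(bytes). */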
+ dma->align = 1 << dma->dma->device->copy_align;
+
+ ret = video_register_device(&dma->video, VFL_TYPE_GRABBER, -1);
+ if (ret < 0) {
+ dev_err(dma->xdev->dev, "failed to register video device\n");
+ goto error;
+ }
+
+ return 0;
+
+error:
+ xvip_dma_cleanup(dma);
+ return ret;
+}
+
+void xvip_dma_cleanup(struct xvip_dma *dma)
+{
+ if (video_is_registered(&dma->video))
+ video_unregister_device(&dma->video);
+
+ if (dma->dma)
+ dma_release_channel(dma->dma);
+
+ if (!IS_ERR_OR_NULL(dma->alloc_ctx))
+ vb2_dma_contig_cleanup_ctx(dma->alloc_ctx);
+
+ media_entity_cleanup(&dma->video.entity);
+
+ mutex_destroy(&dma->lock);
+ mutex_destroy(&dma->pipe.lock);
+}
diff --git a/drivers/media/platform/xilinx/xilinx-dma.h b/drivers/media/platform/xilinx/xilinx-dma.h
new file mode 100644
index 0000000..a540111
--- /dev/null
+++ b/drivers/media/platform/xilinx/xilinx-dma.h
@@ -0,0 +1,109 @@
+/*
+ * Xilinx Video DMA
+ *
+ * Copyright (C) 2013-2015 Ideas on Board
+ * Copyright (C) 2013-2015 Xilinx, Inc.
+ *
+ * Contacts: Hyun Kwon <hyun.kwon@xilinx.com>
+ * Laurent Pinchart <laurent.pinchart@ideasonboard.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef __XILINX_VIP_DMA_H__
+#define __XILINX_VIP_DMA_H__
+
+#include <linux/dmaengine.h>
+#include <linux/mutex.h>
+#include <linux/spinlock.h>
+#include <linux/videodev2.h>
+
+#include <media/media-entity.h>
+#include <media/v4l2-dev.h>
+#include <media/videobuf2-core.h>
+
+struct dma_chan;
+struct xvip_composite_device;
+struct xvip_video_format;
+
+/**
+ * struct xvip_pipeline - Xilinx Video IP pipeline structure
+ * @pipe: media pipeline
+ * @lock: protects the pipeline state (@stream_count and @use_count)
+ * @use_count: number of DMA engines using the pipeline
+ * @stream_count: number of DMA engines currently streaming
+ * @num_dmas: number of DMA engines in the pipeline
+ * @output: DMA engine at the output of the pipeline
+ */
+struct xvip_pipeline {
+ struct media_pipeline pipe;
+
+ struct mutex lock;
+ unsigned int use_count;
+ unsigned int stream_count;
+
+ unsigned int num_dmas;
+ struct xvip_dma *output;
+};
+
+static inline struct xvip_pipeline *to_xvip_pipeline(struct media_entity *e)
+{
+ return container_of(e->pipe, struct xvip_pipeline, pipe);
+}
+
+/**
+ * struct xvip_dma - Video DMA channel
+ * @list: list entry in a composite device dmas list
+ * @video: V4L2 video device associated with the DMA channel
+ * @pad: media pad for the video device entity
+ * @xdev: composite device the DMA channel belongs to
+ * @pipe: pipeline belonging to the DMA channel
+ * @port: composite device DT node port number for the DMA channel
+ * @lock: protects the @format, @fmtinfo and @queue fields
+ * @format: active V4L2 pixel format
+ * @fmtinfo: format information corresponding to the active @format
+ * @queue: vb2 buffers queue
+ * @alloc_ctx: allocation context for the vb2 @queue
+ * @sequence: V4L2 buffers sequence number
+ * @queued_bufs: list of queued buffers
+ * @queued_lock: protects the @queued_bufs list
+ * @dma: DMA engine channel
+ * @align: transfer alignment required by the DMA channel (in bytes)
+ * @xt: dma interleaved template for dma configuration
+ * @sgl: data chunk structure for dma_interleaved_template
+ */
+struct xvip_dma {
+ struct list_head list;
+ struct video_device video;
+ struct media_pad pad;
+
+ struct xvip_composite_device *xdev;
+ struct xvip_pipeline pipe;
+ unsigned int port;
+
+ struct mutex lock;
+ struct v4l2_pix_format format;
+ const struct xvip_video_format *fmtinfo;
+
+ struct vb2_queue queue;
+ void *alloc_ctx;
+ unsigned int sequence;
+
+ struct list_head queued_bufs;
+ spinlock_t queued_lock;
+
+ struct dma_chan *dma;
+ unsigned int align;
+ struct dma_interleaved_template xt;
+ struct data_chunk sgl[1];
+};
+
+#define to_xvip_dma(vdev) container_of(vdev, struct xvip_dma, video)
+
+int xvip_dma_init(struct xvip_composite_device *xdev, struct xvip_dma *dma,
+ enum v4l2_buf_type type, unsigned int port);
+void xvip_dma_cleanup(struct xvip_dma *dma);
+
+#endif /* __XILINX_VIP_DMA_H__ */
diff --git a/drivers/media/platform/xilinx/xilinx-tpg.c b/drivers/media/platform/xilinx/xilinx-tpg.c
new file mode 100644
index 0000000..b5f7d5e
--- /dev/null
+++ b/drivers/media/platform/xilinx/xilinx-tpg.c
@@ -0,0 +1,931 @@
+/*
+ * Xilinx Test Pattern Generator
+ *
+ * Copyright (C) 2013-2015 Ideas on Board
+ * Copyright (C) 2013-2015 Xilinx, Inc.
+ *
+ * Contacts: Hyun Kwon <hyun.kwon@xilinx.com>
+ * Laurent Pinchart <laurent.pinchart@ideasonboard.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/device.h>
+#include <linux/gpio/consumer.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/platform_device.h>
+#include <linux/xilinx-v4l2-controls.h>
+
+#include <media/v4l2-async.h>
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-subdev.h>
+
+#include "xilinx-vip.h"
+#include "xilinx-vtc.h"
+
+#define XTPG_CTRL_STATUS_SLAVE_ERROR (1 << 16)
+#define XTPG_CTRL_IRQ_SLAVE_ERROR (1 << 16)
+
+#define XTPG_PATTERN_CONTROL 0x0100
+#define XTPG_PATTERN_MASK (0xf << 0)
+#define XTPG_PATTERN_CONTROL_CROSS_HAIRS (1 << 4)
+#define XTPG_PATTERN_CONTROL_MOVING_BOX (1 << 5)
+#define XTPG_PATTERN_CONTROL_COLOR_MASK_SHIFT 6
+#define XTPG_PATTERN_CONTROL_COLOR_MASK_MASK (0xf << 6)
+#define XTPG_PATTERN_CONTROL_STUCK_PIXEL (1 << 9)
+#define XTPG_PATTERN_CONTROL_NOISE (1 << 10)
+#define XTPG_PATTERN_CONTROL_MOTION (1 << 12)
+#define XTPG_MOTION_SPEED 0x0104
+#define XTPG_CROSS_HAIRS 0x0108
+#define XTPG_CROSS_HAIRS_ROW_SHIFT 0
+#define XTPG_CROSS_HAIRS_ROW_MASK (0xfff << 0)
+#define XTPG_CROSS_HAIRS_COLUMN_SHIFT 16
+#define XTPG_CROSS_HAIRS_COLUMN_MASK (0xfff << 16)
+#define XTPG_ZPLATE_HOR_CONTROL 0x010c
+#define XTPG_ZPLATE_VER_CONTROL 0x0110
+#define XTPG_ZPLATE_START_SHIFT 0
+#define XTPG_ZPLATE_START_MASK (0xffff << 0)
+#define XTPG_ZPLATE_SPEED_SHIFT 16
+#define XTPG_ZPLATE_SPEED_MASK (0xffff << 16)
+#define XTPG_BOX_SIZE 0x0114
+#define XTPG_BOX_COLOR 0x0118
+#define XTPG_STUCK_PIXEL_THRESH 0x011c
+#define XTPG_NOISE_GAIN 0x0120
+#define XTPG_BAYER_PHASE 0x0124
+#define XTPG_BAYER_PHASE_RGGB 0
+#define XTPG_BAYER_PHASE_GRBG 1
+#define XTPG_BAYER_PHASE_GBRG 2
+#define XTPG_BAYER_PHASE_BGGR 3
+#define XTPG_BAYER_PHASE_OFF 4
+
+/*
+ * The minimum blanking value is one clock cycle for the front porch, one clock
+ * cycle for the sync pulse and one clock cycle for the back porch.
+ */
+#define XTPG_MIN_HBLANK 3
+#define XTPG_MAX_HBLANK (XVTC_MAX_HSIZE - XVIP_MIN_WIDTH)
+#define XTPG_MIN_VBLANK 3
+#define XTPG_MAX_VBLANK (XVTC_MAX_VSIZE - XVIP_MIN_HEIGHT)
+
+/**
+ * struct xtpg_device - Xilinx Test Pattern Generator device structure
+ * @xvip: Xilinx Video IP device
+ * @pads: media pads
+ * @npads: number of pads (1 or 2)
+ * @has_input: whether an input is connected to the sink pad
+ * @formats: active V4L2 media bus format for each pad
+ * @default_format: default V4L2 media bus format
+ * @vip_format: format information corresponding to the active format
+ * @bayer: boolean flag if TPG is set to any bayer format
+ * @ctrl_handler: control handler
+ * @hblank: horizontal blanking control
+ * @vblank: vertical blanking control
+ * @pattern: test pattern control
+ * @streaming: is the video stream active
+ * @vtc: video timing controller
+ * @vtmux_gpio: video timing mux GPIO
+ */
+struct xtpg_device {
+ struct xvip_device xvip;
+
+ struct media_pad pads[2];
+ unsigned int npads;
+ bool has_input;
+
+ struct v4l2_mbus_framefmt formats[2];
+ struct v4l2_mbus_framefmt default_format;
+ const struct xvip_video_format *vip_format;
+ bool bayer;
+
+ struct v4l2_ctrl_handler ctrl_handler;
+ struct v4l2_ctrl *hblank;
+ struct v4l2_ctrl *vblank;
+ struct v4l2_ctrl *pattern;
+ bool streaming;
+
+ struct xvtc_device *vtc;
+ struct gpio_desc *vtmux_gpio;
+};
+
+static inline struct xtpg_device *to_tpg(struct v4l2_subdev *subdev)
+{
+ return container_of(subdev, struct xtpg_device, xvip.subdev);
+}
+
+static u32 xtpg_get_bayer_phase(unsigned int code)
+{
+ switch (code) {
+ case MEDIA_BUS_FMT_SRGGB8_1X8:
+ return XTPG_BAYER_PHASE_RGGB;
+ case MEDIA_BUS_FMT_SGRBG8_1X8:
+ return XTPG_BAYER_PHASE_GRBG;
+ case MEDIA_BUS_FMT_SGBRG8_1X8:
+ return XTPG_BAYER_PHASE_GBRG;
+ case MEDIA_BUS_FMT_SBGGR8_1X8:
+ return XTPG_BAYER_PHASE_BGGR;
+ default:
+ return XTPG_BAYER_PHASE_OFF;
+ }
+}
+
+static void __xtpg_update_pattern_control(struct xtpg_device *xtpg,
+ bool passthrough, bool pattern)
+{
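+ /* Start with all pattern menu entries masked, then unmask below. */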
+ u32 pattern_mask = (1 << (xtpg->pattern->maximum + 1)) - 1;
+
+ /*
+ * If the TPG has no sink pad or no input connected to its sink pad,
+ * passthrough mode can't be enabled.
+ */
+ if (xtpg->npads == 1 || !xtpg->has_input)
+ passthrough = false;
+
+ /* If passthrough mode is allowed unmask bit 0. */
+ if (passthrough)
+ pattern_mask &= ~1;
+
+ /* If test pattern mode is allowed unmask all other bits. */
+ if (pattern)
+ pattern_mask &= 1;
+
+ __v4l2_ctrl_modify_range(xtpg->pattern, 0, xtpg->pattern->maximum,
+ pattern_mask, pattern ? 9 : 0);
+}
+
+static void xtpg_update_pattern_control(struct xtpg_device *xtpg,
+ bool passthrough, bool pattern)
+{
+ mutex_lock(xtpg->ctrl_handler.lock);
+ __xtpg_update_pattern_control(xtpg, passthrough, pattern);
+ mutex_unlock(xtpg->ctrl_handler.lock);
+}
+
+/* -----------------------------------------------------------------------------
+ * V4L2 Subdevice Video Operations
+ */
+
+static int xtpg_s_stream(struct v4l2_subdev *subdev, int enable)
+{
+ struct xtpg_device *xtpg = to_tpg(subdev);
+ unsigned int width = xtpg->formats[0].width;
+ unsigned int height = xtpg->formats[0].height;
+ bool passthrough;
+ u32 bayer_phase;
+
+ if (!enable) {
+ xvip_stop(&xtpg->xvip);
+ if (xtpg->vtc)
+ xvtc_generator_stop(xtpg->vtc);
+
+ xtpg_update_pattern_control(xtpg, true, true);
+ xtpg->streaming = false;
+ return 0;
+ }
+
+ xvip_set_frame_size(&xtpg->xvip, &xtpg->formats[0]);
+
+ if (xtpg->vtc) {
+ struct xvtc_config config = {
+ .hblank_start = width,
+ .hsync_start = width + 1,
+ .vblank_start = height,
+ .vsync_start = height + 1,
+ };
+ unsigned int htotal;
+ unsigned int vtotal;
+
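+ /* The total size is the active size plus the requested blanking,
+ * clamped to the maximum VTC generator size.
+ */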
+ htotal = min_t(unsigned int, XVTC_MAX_HSIZE,
+ v4l2_ctrl_g_ctrl(xtpg->hblank) + width);
+ vtotal = min_t(unsigned int, XVTC_MAX_VSIZE,
+ v4l2_ctrl_g_ctrl(xtpg->vblank) + height);
+
+ config.hsync_end = htotal - 1;
+ config.hsize = htotal;
+ config.vsync_end = vtotal - 1;
+ config.vsize = vtotal;
+
+ xvtc_generator_start(xtpg->vtc, &config);
+ }
+
+ /*
+ * Configure the bayer phase and video timing mux based on the
+ * operation mode (passthrough or test pattern generation). The test
+ * pattern can be modified by the control set handler, we thus need to
+ * take the control lock here to avoid races.
+ */
+ mutex_lock(xtpg->ctrl_handler.lock);
+
+ xvip_clr_and_set(&xtpg->xvip, XTPG_PATTERN_CONTROL,
+ XTPG_PATTERN_MASK, xtpg->pattern->cur.val);
+
+ /*
+ * Switching between passthrough and test pattern generation modes isn't
+ * allowed during streaming, so update the control range accordingly.
+ */
+ passthrough = xtpg->pattern->cur.val == 0;
+ __xtpg_update_pattern_control(xtpg, passthrough, !passthrough);
+
+ xtpg->streaming = true;
+
+ mutex_unlock(xtpg->ctrl_handler.lock);
+
+ /*
+ * For TPG v5.0, the bayer phase needs to be off in passthrough mode,
+ * otherwise the external input would be subsampled.
+ */
+ bayer_phase = passthrough ? XTPG_BAYER_PHASE_OFF
+ : xtpg_get_bayer_phase(xtpg->formats[0].code);
+ xvip_write(&xtpg->xvip, XTPG_BAYER_PHASE, bayer_phase);
+
+ if (xtpg->vtmux_gpio)
+ gpiod_set_value_cansleep(xtpg->vtmux_gpio, !passthrough);
+
+ xvip_start(&xtpg->xvip);
+
+ return 0;
+}
+
+/* -----------------------------------------------------------------------------
+ * V4L2 Subdevice Pad Operations
+ */
+
+static struct v4l2_mbus_framefmt *
+__xtpg_get_pad_format(struct xtpg_device *xtpg,
+ struct v4l2_subdev_pad_config *cfg,
+ unsigned int pad, u32 which)
+{
+ switch (which) {
+ case V4L2_SUBDEV_FORMAT_TRY:
+ return v4l2_subdev_get_try_format(&xtpg->xvip.subdev, cfg, pad);
+ case V4L2_SUBDEV_FORMAT_ACTIVE:
+ return &xtpg->formats[pad];
+ default:
+ return NULL;
+ }
+}
+
+static int xtpg_get_format(struct v4l2_subdev *subdev,
+ struct v4l2_subdev_pad_config *cfg,
+ struct v4l2_subdev_format *fmt)
+{
+ struct xtpg_device *xtpg = to_tpg(subdev);
+
+ fmt->format = *__xtpg_get_pad_format(xtpg, cfg, fmt->pad, fmt->which);
+
+ return 0;
+}
+
+static int xtpg_set_format(struct v4l2_subdev *subdev,
+ struct v4l2_subdev_pad_config *cfg,
+ struct v4l2_subdev_format *fmt)
+{
+ struct xtpg_device *xtpg = to_tpg(subdev);
+ struct v4l2_mbus_framefmt *__format;
+ u32 bayer_phase;
+
+ __format = __xtpg_get_pad_format(xtpg, cfg, fmt->pad, fmt->which);
+
+ /* In two pads mode the source pad format is always identical to the
+ * sink pad format.
+ */
+ if (xtpg->npads == 2 && fmt->pad == 1) {
+ fmt->format = *__format;
+ return 0;
+ }
+
+ /* Bayer phase is configurable at runtime */
+ if (xtpg->bayer) {
+ bayer_phase = xtpg_get_bayer_phase(fmt->format.code);
+ if (bayer_phase != XTPG_BAYER_PHASE_OFF)
+ __format->code = fmt->format.code;
+ }
+
+ xvip_set_format_size(__format, fmt);
+
+ fmt->format = *__format;
+
+ /* Propagate the format to the source pad. */
+ if (xtpg->npads == 2) {
+ __format = __xtpg_get_pad_format(xtpg, cfg, 1, fmt->which);
+ *__format = fmt->format;
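+ /* A video node with a sink pad is the output of the pipeline. */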
+ }
+
+ return 0;
+}
+
+/* -----------------------------------------------------------------------------
+ * V4L2 Subdevice Operations
+ */
+
+static int xtpg_enum_frame_size(struct v4l2_subdev *subdev,
+ struct v4l2_subdev_pad_config *cfg,
+ struct v4l2_subdev_frame_size_enum *fse)
+{
+ struct v4l2_mbus_framefmt *format;
+
+ format = v4l2_subdev_get_try_format(subdev, cfg, fse->pad);
+
+ if (fse->index || fse->code != format->code)
+ return -EINVAL;
+
+ /* Min / max values for pad 0 are always fixed in both one and two pads
+ * modes. In two pads mode, the source pad (pad 1) size is identical to
+ * the sink pad size.
+ */
+ if (fse->pad == 0) {
+ fse->min_width = XVIP_MIN_WIDTH;
+ fse->max_width = XVIP_MAX_WIDTH;
+ fse->min_height = XVIP_MIN_HEIGHT;
+ fse->max_height = XVIP_MAX_HEIGHT;
+ } else {
+ fse->min_width = format->width;
+ fse->max_width = format->width;
+ fse->min_height = format->height;
+ fse->max_height = format->height;
+ }
+
+ return 0;
+}
+
+static int xtpg_open(struct v4l2_subdev *subdev, struct v4l2_subdev_fh *fh)
+{
+ struct xtpg_device *xtpg = to_tpg(subdev);
+ struct v4l2_mbus_framefmt *format;
+
+ format = v4l2_subdev_get_try_format(subdev, fh->pad, 0);
+ *format = xtpg->default_format;
+
+ if (xtpg->npads == 2) {
+ format = v4l2_subdev_get_try_format(subdev, fh->pad, 1);
+ *format = xtpg->default_format;
+ }
+
+ return 0;
+}
+
+static int xtpg_close(struct v4l2_subdev *subdev, struct v4l2_subdev_fh *fh)
+{
+ return 0;
+}
+
+static int xtpg_s_ctrl(struct v4l2_ctrl *ctrl)
+{
+ struct xtpg_device *xtpg = container_of(ctrl->handler,
+ struct xtpg_device,
+ ctrl_handler);
+ switch (ctrl->id) {
+ case V4L2_CID_TEST_PATTERN:
+ xvip_clr_and_set(&xtpg->xvip, XTPG_PATTERN_CONTROL,
+ XTPG_PATTERN_MASK, ctrl->val);
+ return 0;
+ case V4L2_CID_XILINX_TPG_CROSS_HAIRS:
+ xvip_clr_or_set(&xtpg->xvip, XTPG_PATTERN_CONTROL,
+ XTPG_PATTERN_CONTROL_CROSS_HAIRS, ctrl->val);
+ return 0;
+ case V4L2_CID_XILINX_TPG_MOVING_BOX:
+ xvip_clr_or_set(&xtpg->xvip, XTPG_PATTERN_CONTROL,
+ XTPG_PATTERN_CONTROL_MOVING_BOX, ctrl->val);
+ return 0;
+ case V4L2_CID_XILINX_TPG_COLOR_MASK:
+ xvip_clr_and_set(&xtpg->xvip, XTPG_PATTERN_CONTROL,
+ XTPG_PATTERN_CONTROL_COLOR_MASK_MASK,
+ ctrl->val <<
+ XTPG_PATTERN_CONTROL_COLOR_MASK_SHIFT);
+ return 0;
+ case V4L2_CID_XILINX_TPG_STUCK_PIXEL:
+ xvip_clr_or_set(&xtpg->xvip, XTPG_PATTERN_CONTROL,
+ XTPG_PATTERN_CONTROL_STUCK_PIXEL, ctrl->val);
+ return 0;
+ case V4L2_CID_XILINX_TPG_NOISE:
+ xvip_clr_or_set(&xtpg->xvip, XTPG_PATTERN_CONTROL,
+ XTPG_PATTERN_CONTROL_NOISE, ctrl->val);
+ return 0;
+ case V4L2_CID_XILINX_TPG_MOTION:
+ xvip_clr_or_set(&xtpg->xvip, XTPG_PATTERN_CONTROL,
+ XTPG_PATTERN_CONTROL_MOTION, ctrl->val);
+ return 0;
+ case V4L2_CID_XILINX_TPG_MOTION_SPEED:
+ xvip_write(&xtpg->xvip, XTPG_MOTION_SPEED, ctrl->val);
+ return 0;
+ case V4L2_CID_XILINX_TPG_CROSS_HAIR_ROW:
+ xvip_clr_and_set(&xtpg->xvip, XTPG_CROSS_HAIRS,
+ XTPG_CROSS_HAIRS_ROW_MASK,
+ ctrl->val << XTPG_CROSS_HAIRS_ROW_SHIFT);
+ return 0;
+ case V4L2_CID_XILINX_TPG_CROSS_HAIR_COLUMN:
+ xvip_clr_and_set(&xtpg->xvip, XTPG_CROSS_HAIRS,
+ XTPG_CROSS_HAIRS_COLUMN_MASK,
+ ctrl->val << XTPG_CROSS_HAIRS_COLUMN_SHIFT);
+ return 0;
+ case V4L2_CID_XILINX_TPG_ZPLATE_HOR_START:
+ xvip_clr_and_set(&xtpg->xvip, XTPG_ZPLATE_HOR_CONTROL,
+ XTPG_ZPLATE_START_MASK,
+ ctrl->val << XTPG_ZPLATE_START_SHIFT);
+ return 0;
+ case V4L2_CID_XILINX_TPG_ZPLATE_HOR_SPEED:
+ xvip_clr_and_set(&xtpg->xvip, XTPG_ZPLATE_HOR_CONTROL,
+ XTPG_ZPLATE_SPEED_MASK,
+ ctrl->val << XTPG_ZPLATE_SPEED_SHIFT);
+ return 0;
+ case V4L2_CID_XILINX_TPG_ZPLATE_VER_START:
+ xvip_clr_and_set(&xtpg->xvip, XTPG_ZPLATE_VER_CONTROL,
+ XTPG_ZPLATE_START_MASK,
+ ctrl->val << XTPG_ZPLATE_START_SHIFT);
+ return 0;
+ case V4L2_CID_XILINX_TPG_ZPLATE_VER_SPEED:
+ xvip_clr_and_set(&xtpg->xvip, XTPG_ZPLATE_VER_CONTROL,
+ XTPG_ZPLATE_SPEED_MASK,
+ ctrl->val << XTPG_ZPLATE_SPEED_SHIFT);
+ return 0;
+ case V4L2_CID_XILINX_TPG_BOX_SIZE:
+ xvip_write(&xtpg->xvip, XTPG_BOX_SIZE, ctrl->val);
+ return 0;
+ case V4L2_CID_XILINX_TPG_BOX_COLOR:
+ xvip_write(&xtpg->xvip, XTPG_BOX_COLOR, ctrl->val);
+ return 0;
+ case V4L2_CID_XILINX_TPG_STUCK_PIXEL_THRESH:
+ xvip_write(&xtpg->xvip, XTPG_STUCK_PIXEL_THRESH, ctrl->val);
+ return 0;
+ case V4L2_CID_XILINX_TPG_NOISE_GAIN:
+ xvip_write(&xtpg->xvip, XTPG_NOISE_GAIN, ctrl->val);
+ return 0;
+ }
+
+ return 0;
+}
+
+static const struct v4l2_ctrl_ops xtpg_ctrl_ops = {
+ .s_ctrl = xtpg_s_ctrl,
+};
+
+static struct v4l2_subdev_core_ops xtpg_core_ops = {
+};
+
+static struct v4l2_subdev_video_ops xtpg_video_ops = {
+ .s_stream = xtpg_s_stream,
+};
+
+static struct v4l2_subdev_pad_ops xtpg_pad_ops = {
+ .enum_mbus_code = xvip_enum_mbus_code,
+ .enum_frame_size = xtpg_enum_frame_size,
+ .get_fmt = xtpg_get_format,
+ .set_fmt = xtpg_set_format,
+};
+
+static struct v4l2_subdev_ops xtpg_ops = {
+ .core = &xtpg_core_ops,
+ .video = &xtpg_video_ops,
+ .pad = &xtpg_pad_ops,
+};
+
+static const struct v4l2_subdev_internal_ops xtpg_internal_ops = {
+ .open = xtpg_open,
+ .close = xtpg_close,
+};
+
+/*
+ * Control Config
+ */
+
+static const char *const xtpg_pattern_strings[] = {
+ "Passthrough",
+ "Horizontal Ramp",
+ "Vertical Ramp",
+ "Temporal Ramp",
+ "Solid Red",
+ "Solid Green",
+ "Solid Blue",
+ "Solid Black",
+ "Solid White",
+ "Color Bars",
+ "Zone Plate",
+ "Tartan Color Bars",
+ "Cross Hatch",
+ "None",
+ "Vertical/Horizontal Ramps",
+ "Black/White Checker Board",
+};
+
+static struct v4l2_ctrl_config xtpg_ctrls[] = {
+ {
+ .ops = &xtpg_ctrl_ops,
+ .id = V4L2_CID_XILINX_TPG_CROSS_HAIRS,
+ .name = "Test Pattern: Cross Hairs",
+ .type = V4L2_CTRL_TYPE_BOOLEAN,
+ .min = false,
+ .max = true,
+ .step = 1,
+ .def = 0,
+ }, {
+ .ops = &xtpg_ctrl_ops,
+ .id = V4L2_CID_XILINX_TPG_MOVING_BOX,
+ .name = "Test Pattern: Moving Box",
+ .type = V4L2_CTRL_TYPE_BOOLEAN,
+ .min = false,
+ .max = true,
+ .step = 1,
+ .def = 0,
+ }, {
+ .ops = &xtpg_ctrl_ops,
+ .id = V4L2_CID_XILINX_TPG_COLOR_MASK,
+ .name = "Test Pattern: Color Mask",
+ .type = V4L2_CTRL_TYPE_BITMASK,
+ .min = 0,
+ .max = 0xf,
+ .def = 0,
+ }, {
+ .ops = &xtpg_ctrl_ops,
+ .id = V4L2_CID_XILINX_TPG_STUCK_PIXEL,
+ .name = "Test Pattern: Stuck Pixel",
+ .type = V4L2_CTRL_TYPE_BOOLEAN,
+ .min = false,
+ .max = true,
+ .step = 1,
+ .def = 0,
+ }, {
+ .ops = &xtpg_ctrl_ops,
+ .id = V4L2_CID_XILINX_TPG_NOISE,
+ .name = "Test Pattern: Noise",
+ .type = V4L2_CTRL_TYPE_BOOLEAN,
+ .min = false,
+ .max = true,
+ .step = 1,
+ .def = 0,
+ }, {
+ .ops = &xtpg_ctrl_ops,
+ .id = V4L2_CID_XILINX_TPG_MOTION,
+ .name = "Test Pattern: Motion",
+ .type = V4L2_CTRL_TYPE_BOOLEAN,
+ .min = false,
+ .max = true,
+ .step = 1,
+ .def = 0,
+ }, {
+ .ops = &xtpg_ctrl_ops,
+ .id = V4L2_CID_XILINX_TPG_MOTION_SPEED,
+ .name = "Test Pattern: Motion Speed",
+ .type = V4L2_CTRL_TYPE_INTEGER,
+ .min = 0,
+ .max = (1 << 8) - 1,
+ .step = 1,
+ .def = 4,
+ .flags = V4L2_CTRL_FLAG_SLIDER,
+ }, {
+ .ops = &xtpg_ctrl_ops,
+ .id = V4L2_CID_XILINX_TPG_CROSS_HAIR_ROW,
+ .name = "Test Pattern: Cross Hairs Row",
+ .type = V4L2_CTRL_TYPE_INTEGER,
+ .min = 0,
+ .max = (1 << 12) - 1,
+ .step = 1,
+ .def = 0x64,
+ .flags = V4L2_CTRL_FLAG_SLIDER,
+ }, {
+ .ops = &xtpg_ctrl_ops,
+ .id = V4L2_CID_XILINX_TPG_CROSS_HAIR_COLUMN,
+ .name = "Test Pattern: Cross Hairs Column",
+ .type = V4L2_CTRL_TYPE_INTEGER,
+ .min = 0,
+ .max = (1 << 12) - 1,
+ .step = 1,
+ .def = 0x64,
+ .flags = V4L2_CTRL_FLAG_SLIDER,
+ }, {
+ .ops = &xtpg_ctrl_ops,
+ .id = V4L2_CID_XILINX_TPG_ZPLATE_HOR_START,
+ .name = "Test Pattern: Zplate Horizontal Start Pos",
+ .type = V4L2_CTRL_TYPE_INTEGER,
+ .min = 0,
+ .max = (1 << 16) - 1,
+ .step = 1,
+ .def = 0x1e,
+ .flags = V4L2_CTRL_FLAG_SLIDER,
+ }, {
+ .ops = &xtpg_ctrl_ops,
+ .id = V4L2_CID_XILINX_TPG_ZPLATE_HOR_SPEED,
+ .name = "Test Pattern: Zplate Horizontal Speed",
+ .type = V4L2_CTRL_TYPE_INTEGER,
+ .min = 0,
+ .max = (1 << 16) - 1,
+ .step = 1,
+ .def = 0,
+ .flags = V4L2_CTRL_FLAG_SLIDER,
+ }, {
+ .ops = &xtpg_ctrl_ops,
+ .id = V4L2_CID_XILINX_TPG_ZPLATE_VER_START,
+ .name = "Test Pattern: Zplate Vertical Start Pos",
+ .type = V4L2_CTRL_TYPE_INTEGER,
+ .min = 0,
+ .max = (1 << 16) - 1,
+ .step = 1,
+ .def = 1,
+ .flags = V4L2_CTRL_FLAG_SLIDER,
+ }, {
+ .ops = &xtpg_ctrl_ops,
+ .id = V4L2_CID_XILINX_TPG_ZPLATE_VER_SPEED,
+ .name = "Test Pattern: Zplate Vertical Speed",
+ .type = V4L2_CTRL_TYPE_INTEGER,
+ .min = 0,
+ .max = (1 << 16) - 1,
+ .step = 1,
+ .def = 0,
+ .flags = V4L2_CTRL_FLAG_SLIDER,
+ }, {
+ .ops = &xtpg_ctrl_ops,
+ .id = V4L2_CID_XILINX_TPG_BOX_SIZE,
+ .name = "Test Pattern: Box Size",
+ .type = V4L2_CTRL_TYPE_INTEGER,
+ .min = 0,
+ .max = (1 << 12) - 1,
+ .step = 1,
+ .def = 0x32,
+ .flags = V4L2_CTRL_FLAG_SLIDER,
+ }, {
+ .ops = &xtpg_ctrl_ops,
+ .id = V4L2_CID_XILINX_TPG_BOX_COLOR,
+ .name = "Test Pattern: Box Color(RGB)",
+ .type = V4L2_CTRL_TYPE_INTEGER,
+ .min = 0,
+ .max = (1 << 24) - 1,
+ .step = 1,
+ .def = 0,
+ }, {
+ .ops = &xtpg_ctrl_ops,
+ .id = V4L2_CID_XILINX_TPG_STUCK_PIXEL_THRESH,
+ .name = "Test Pattern: Stuck Pixel threshold",
+ .type = V4L2_CTRL_TYPE_INTEGER,
+ .min = 0,
+ .max = (1 << 16) - 1,
+ .step = 1,
+ .def = 0,
+ .flags = V4L2_CTRL_FLAG_SLIDER,
+ }, {
+ .ops = &xtpg_ctrl_ops,
+ .id = V4L2_CID_XILINX_TPG_NOISE_GAIN,
+ .name = "Test Pattern: Noise Gain",
+ .type = V4L2_CTRL_TYPE_INTEGER,
+ .min = 0,
+ .max = (1 << 8) - 1,
+ .step = 1,
+ .def = 0,
+ .flags = V4L2_CTRL_FLAG_SLIDER,
+ },
+};
+
+/* -----------------------------------------------------------------------------
+ * Media Operations
+ */
+
+static const struct media_entity_operations xtpg_media_ops = {
+ .link_validate = v4l2_subdev_link_validate,
+};
+
+/* -----------------------------------------------------------------------------
+ * Power Management
+ */
+
+static int __maybe_unused xtpg_pm_suspend(struct device *dev)
+{
+ struct xtpg_device *xtpg = dev_get_drvdata(dev);
+
+ xvip_suspend(&xtpg->xvip);
+
+ return 0;
+}
+
+static int __maybe_unused xtpg_pm_resume(struct device *dev)
+{
+ struct xtpg_device *xtpg = dev_get_drvdata(dev);
+
+ xvip_resume(&xtpg->xvip);
+
+ return 0;
+}
+
+/* -----------------------------------------------------------------------------
+ * Platform Device Driver
+ */
+
+static int xtpg_parse_of(struct xtpg_device *xtpg)
+{
+ struct device *dev = xtpg->xvip.dev;
+ struct device_node *node = xtpg->xvip.dev->of_node;
+ struct device_node *ports;
+ struct device_node *port;
+ unsigned int nports = 0;
+ bool has_endpoint = false;
+
+ ports = of_get_child_by_name(node, "ports");
+ if (ports == NULL)
+ ports = node;
+
+ for_each_child_of_node(ports, port) {
+ const struct xvip_video_format *format;
+ struct device_node *endpoint;
+
+ if (!port->name || of_node_cmp(port->name, "port"))
+ continue;
+
+ format = xvip_of_get_format(port);
+ if (IS_ERR(format)) {
+ dev_err(dev, "invalid format in DT");
+ return PTR_ERR(format);
+ }
+
+ /* Get and check the format description */
+ if (!xtpg->vip_format) {
+ xtpg->vip_format = format;
+ } else if (xtpg->vip_format != format) {
+ dev_err(dev, "in/out format mismatch in DT");
+ return -EINVAL;
+ }
+
+ if (nports == 0) {
+ endpoint = of_get_next_child(port, NULL);
+ if (endpoint)
+ has_endpoint = true;
+ of_node_put(endpoint);
+ }
+
+ /* Count the number of ports. */
+ nports++;
+ }
+
+ if (nports != 1 && nports != 2) {
+ dev_err(dev, "invalid number of ports %u\n", nports);
+ return -EINVAL;
+ }
+
+ xtpg->npads = nports;
+ if (nports == 2 && has_endpoint)
+ xtpg->has_input = true;
+
+ return 0;
+}
+
+static int xtpg_probe(struct platform_device *pdev)
+{
+ struct v4l2_subdev *subdev;
+ struct xtpg_device *xtpg;
+ u32 i, bayer_phase;
+ int ret;
+
+ xtpg = devm_kzalloc(&pdev->dev, sizeof(*xtpg), GFP_KERNEL);
+ if (!xtpg)
+ return -ENOMEM;
+
+ xtpg->xvip.dev = &pdev->dev;
+
+ ret = xtpg_parse_of(xtpg);
+ if (ret < 0)
+ return ret;
+
+ ret = xvip_init_resources(&xtpg->xvip);
+ if (ret < 0)
+ return ret;
+
+ xtpg->vtmux_gpio = devm_gpiod_get_optional(&pdev->dev, "timing",
+ GPIOD_OUT_HIGH);
+ if (IS_ERR(xtpg->vtmux_gpio)) {
+ ret = PTR_ERR(xtpg->vtmux_gpio);
+ goto error_resource;
+ }
+
+ xtpg->vtc = xvtc_of_get(pdev->dev.of_node);
+ if (IS_ERR(xtpg->vtc)) {
+ ret = PTR_ERR(xtpg->vtc);
+ goto error_resource;
+ }
+
+ /* Reset and initialize the core */
+ xvip_reset(&xtpg->xvip);
+
+	/* Configure the media pads. With two pads the first is the sink and the
+	 * second the source; with a single pad it is the source.
+	 */
+ if (xtpg->npads == 2) {
+ xtpg->pads[0].flags = MEDIA_PAD_FL_SINK;
+ xtpg->pads[1].flags = MEDIA_PAD_FL_SOURCE;
+ } else {
+ xtpg->pads[0].flags = MEDIA_PAD_FL_SOURCE;
+ }
+
+ /* Initialize the default format */
+ xtpg->default_format.code = xtpg->vip_format->code;
+ xtpg->default_format.field = V4L2_FIELD_NONE;
+ xtpg->default_format.colorspace = V4L2_COLORSPACE_SRGB;
+ xvip_get_frame_size(&xtpg->xvip, &xtpg->default_format);
+
+ bayer_phase = xtpg_get_bayer_phase(xtpg->vip_format->code);
+ if (bayer_phase != XTPG_BAYER_PHASE_OFF)
+ xtpg->bayer = true;
+
+ xtpg->formats[0] = xtpg->default_format;
+ if (xtpg->npads == 2)
+ xtpg->formats[1] = xtpg->default_format;
+
+ /* Initialize V4L2 subdevice and media entity */
+ subdev = &xtpg->xvip.subdev;
+ v4l2_subdev_init(subdev, &xtpg_ops);
+ subdev->dev = &pdev->dev;
+ subdev->internal_ops = &xtpg_internal_ops;
+ strlcpy(subdev->name, dev_name(&pdev->dev), sizeof(subdev->name));
+ v4l2_set_subdevdata(subdev, xtpg);
+ subdev->flags |= V4L2_SUBDEV_FL_HAS_DEVNODE;
+ subdev->entity.ops = &xtpg_media_ops;
+
+ ret = media_entity_init(&subdev->entity, xtpg->npads, xtpg->pads, 0);
+ if (ret < 0)
+ goto error;
+
+ v4l2_ctrl_handler_init(&xtpg->ctrl_handler, 3 + ARRAY_SIZE(xtpg_ctrls));
+
+ xtpg->vblank = v4l2_ctrl_new_std(&xtpg->ctrl_handler, &xtpg_ctrl_ops,
+ V4L2_CID_VBLANK, XTPG_MIN_VBLANK,
+ XTPG_MAX_VBLANK, 1, 100);
+ xtpg->hblank = v4l2_ctrl_new_std(&xtpg->ctrl_handler, &xtpg_ctrl_ops,
+ V4L2_CID_HBLANK, XTPG_MIN_HBLANK,
+ XTPG_MAX_HBLANK, 1, 100);
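+
+	/* Menu index 9 in xtpg_pattern_strings[] ("Color Bars") is used as the
+	 * default test pattern.
+	 */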
+ xtpg->pattern = v4l2_ctrl_new_std_menu_items(&xtpg->ctrl_handler,
+ &xtpg_ctrl_ops, V4L2_CID_TEST_PATTERN,
+ ARRAY_SIZE(xtpg_pattern_strings) - 1,
+ 1, 9, xtpg_pattern_strings);
+
+ for (i = 0; i < ARRAY_SIZE(xtpg_ctrls); i++)
+ v4l2_ctrl_new_custom(&xtpg->ctrl_handler, &xtpg_ctrls[i], NULL);
+
+ if (xtpg->ctrl_handler.error) {
+ dev_err(&pdev->dev, "failed to add controls\n");
+ ret = xtpg->ctrl_handler.error;
+ goto error;
+ }
+ subdev->ctrl_handler = &xtpg->ctrl_handler;
+
+ xtpg_update_pattern_control(xtpg, true, true);
+
+ ret = v4l2_ctrl_handler_setup(&xtpg->ctrl_handler);
+ if (ret < 0) {
+ dev_err(&pdev->dev, "failed to set controls\n");
+ goto error;
+ }
+
+ platform_set_drvdata(pdev, xtpg);
+
+ xvip_print_version(&xtpg->xvip);
+
+ ret = v4l2_async_register_subdev(subdev);
+ if (ret < 0) {
+ dev_err(&pdev->dev, "failed to register subdev\n");
+ goto error;
+ }
+
+ return 0;
+
+error:
+ v4l2_ctrl_handler_free(&xtpg->ctrl_handler);
+ media_entity_cleanup(&subdev->entity);
+ xvtc_put(xtpg->vtc);
+error_resource:
+ xvip_cleanup_resources(&xtpg->xvip);
+ return ret;
+}
+
+static int xtpg_remove(struct platform_device *pdev)
+{
+ struct xtpg_device *xtpg = platform_get_drvdata(pdev);
+ struct v4l2_subdev *subdev = &xtpg->xvip.subdev;
+
+ v4l2_async_unregister_subdev(subdev);
+ v4l2_ctrl_handler_free(&xtpg->ctrl_handler);
+ media_entity_cleanup(&subdev->entity);
+
+ xvip_cleanup_resources(&xtpg->xvip);
+
+ return 0;
+}
+
+static SIMPLE_DEV_PM_OPS(xtpg_pm_ops, xtpg_pm_suspend, xtpg_pm_resume);
+
+static const struct of_device_id xtpg_of_id_table[] = {
+ { .compatible = "xlnx,v-tpg-5.0" },
+ { }
+};
+MODULE_DEVICE_TABLE(of, xtpg_of_id_table);
+
+static struct platform_driver xtpg_driver = {
+ .driver = {
+ .name = "xilinx-tpg",
+ .pm = &xtpg_pm_ops,
+ .of_match_table = xtpg_of_id_table,
+ },
+ .probe = xtpg_probe,
+ .remove = xtpg_remove,
+};
+
+module_platform_driver(xtpg_driver);
+
+MODULE_AUTHOR("Laurent Pinchart <laurent.pinchart@ideasonboard.com>");
+MODULE_DESCRIPTION("Xilinx Test Pattern Generator Driver");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/media/platform/xilinx/xilinx-vip.c b/drivers/media/platform/xilinx/xilinx-vip.c
new file mode 100644
index 0000000..3112591
--- /dev/null
+++ b/drivers/media/platform/xilinx/xilinx-vip.c
@@ -0,0 +1,323 @@
+/*
+ * Xilinx Video IP Core
+ *
+ * Copyright (C) 2013-2015 Ideas on Board
+ * Copyright (C) 2013-2015 Xilinx, Inc.
+ *
+ * Contacts: Hyun Kwon <hyun.kwon@xilinx.com>
+ * Laurent Pinchart <laurent.pinchart@ideasonboard.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/clk.h>
+#include <linux/export.h>
+#include <linux/kernel.h>
+#include <linux/of.h>
+#include <linux/platform_device.h>
+
+#include <dt-bindings/media/xilinx-vip.h>
+
+#include "xilinx-vip.h"
+
+/* -----------------------------------------------------------------------------
+ * Helper functions
+ */
+
+static const struct xvip_video_format xvip_video_formats[] = {
+ { XVIP_VF_YUV_422, 8, NULL, MEDIA_BUS_FMT_UYVY8_1X16,
+ 2, V4L2_PIX_FMT_YUYV, "4:2:2, packed, YUYV" },
+ { XVIP_VF_YUV_444, 8, NULL, MEDIA_BUS_FMT_VUY8_1X24,
+ 3, V4L2_PIX_FMT_YUV444, "4:4:4, packed, YUYV" },
+ { XVIP_VF_RBG, 8, NULL, MEDIA_BUS_FMT_RBG888_1X24,
+ 3, 0, NULL },
+ { XVIP_VF_MONO_SENSOR, 8, "mono", MEDIA_BUS_FMT_Y8_1X8,
+ 1, V4L2_PIX_FMT_GREY, "Greyscale 8-bit" },
+ { XVIP_VF_MONO_SENSOR, 8, "rggb", MEDIA_BUS_FMT_SRGGB8_1X8,
+ 1, V4L2_PIX_FMT_SGRBG8, "Bayer 8-bit RGGB" },
+ { XVIP_VF_MONO_SENSOR, 8, "grbg", MEDIA_BUS_FMT_SGRBG8_1X8,
+ 1, V4L2_PIX_FMT_SGRBG8, "Bayer 8-bit GRBG" },
+ { XVIP_VF_MONO_SENSOR, 8, "gbrg", MEDIA_BUS_FMT_SGBRG8_1X8,
+ 1, V4L2_PIX_FMT_SGBRG8, "Bayer 8-bit GBRG" },
+ { XVIP_VF_MONO_SENSOR, 8, "bggr", MEDIA_BUS_FMT_SBGGR8_1X8,
+ 1, V4L2_PIX_FMT_SBGGR8, "Bayer 8-bit BGGR" },
+};
+
+/**
+ * xvip_get_format_by_code - Retrieve format information for a media bus code
+ * @code: the format media bus code
+ *
+ * Return: a pointer to the format information structure corresponding to the
+ * given V4L2 media bus format @code, or ERR_PTR if no corresponding format can
+ * be found.
+ */
+const struct xvip_video_format *xvip_get_format_by_code(unsigned int code)
+{
+ unsigned int i;
+
+ for (i = 0; i < ARRAY_SIZE(xvip_video_formats); ++i) {
+ const struct xvip_video_format *format = &xvip_video_formats[i];
+
+ if (format->code == code)
+ return format;
+ }
+
+ return ERR_PTR(-EINVAL);
+}
+EXPORT_SYMBOL_GPL(xvip_get_format_by_code);
+
+/**
+ * xvip_get_format_by_fourcc - Retrieve format information for a 4CC
+ * @fourcc: the format 4CC
+ *
+ * Return: a pointer to the format information structure corresponding to the
+ * given V4L2 format @fourcc, or ERR_PTR if no corresponding format can be
+ * found.
+ */
+const struct xvip_video_format *xvip_get_format_by_fourcc(u32 fourcc)
+{
+ unsigned int i;
+
+ for (i = 0; i < ARRAY_SIZE(xvip_video_formats); ++i) {
+ const struct xvip_video_format *format = &xvip_video_formats[i];
+
+ if (format->fourcc == fourcc)
+ return format;
+ }
+
+ return ERR_PTR(-EINVAL);
+}
+EXPORT_SYMBOL_GPL(xvip_get_format_by_fourcc);
+
+/**
+ * xvip_of_get_format - Parse a device tree node and return format information
+ * @node: the device tree node
+ *
+ * Read the xlnx,video-format, xlnx,video-width and xlnx,cfa-pattern properties
+ * from the device tree @node passed as an argument and return the corresponding
+ * format information.
+ *
+ * Return: a pointer to the format information structure corresponding to the
+ * format name and width, or ERR_PTR if no corresponding format can be found.
+ */
+const struct xvip_video_format *xvip_of_get_format(struct device_node *node)
+{
+ const char *pattern = "mono";
+ unsigned int vf_code;
+ unsigned int i;
+ u32 width;
+ int ret;
+
+ ret = of_property_read_u32(node, "xlnx,video-format", &vf_code);
+ if (ret < 0)
+ return ERR_PTR(ret);
+
+ ret = of_property_read_u32(node, "xlnx,video-width", &width);
+ if (ret < 0)
+ return ERR_PTR(ret);
+
+ if (vf_code == XVIP_VF_MONO_SENSOR)
+ of_property_read_string(node, "xlnx,cfa-pattern", &pattern);
+
+ for (i = 0; i < ARRAY_SIZE(xvip_video_formats); ++i) {
+ const struct xvip_video_format *format = &xvip_video_formats[i];
+
+ if (format->vf_code != vf_code || format->width != width)
+ continue;
+
+ if (vf_code == XVIP_VF_MONO_SENSOR &&
+ strcmp(pattern, format->pattern))
+ continue;
+
+ return format;
+ }
+
+ return ERR_PTR(-EINVAL);
+}
+EXPORT_SYMBOL_GPL(xvip_of_get_format);
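+
+/*
+ * Illustrative sketch only (the node name and values below are made up, but
+ * the property names match what xvip_of_get_format() reads): a port node such
+ * as
+ *
+ *	port@0 {
+ *		xlnx,video-format = <XVIP_VF_MONO_SENSOR>;
+ *		xlnx,video-width = <8>;
+ *		xlnx,cfa-pattern = "mono";
+ *	};
+ *
+ * would resolve to the 8-bit greyscale entry of xvip_video_formats[]. The
+ * xlnx,cfa-pattern property defaults to "mono" when absent.
+ */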
+
+/**
+ * xvip_set_format_size - Set the media bus frame format size
+ * @format: V4L2 media bus frame format to be updated
+ * @fmt: V4L2 subdev format carrying the requested size
+ *
+ * Copy the width and height requested in the subdev format @fmt into the media
+ * bus frame format @format, clamping them to the default minimum and maximum
+ * values.
+ */
+void xvip_set_format_size(struct v4l2_mbus_framefmt *format,
+ const struct v4l2_subdev_format *fmt)
+{
+ format->width = clamp_t(unsigned int, fmt->format.width,
+ XVIP_MIN_WIDTH, XVIP_MAX_WIDTH);
+ format->height = clamp_t(unsigned int, fmt->format.height,
+ XVIP_MIN_HEIGHT, XVIP_MAX_HEIGHT);
+}
+EXPORT_SYMBOL_GPL(xvip_set_format_size);
+
+/**
+ * xvip_clr_or_set - Clear or set the register with a bitmask
+ * @xvip: Xilinx Video IP device
+ * @addr: address of register
+ * @mask: bitmask to be set or cleared
+ * @set: boolean flag indicating whether to set or clear
+ *
+ * Clear or set the bitmask @mask in the register at address @addr, depending
+ * on the boolean flag @set. When @set is true the bitmask is set in the
+ * register, otherwise it is cleared.
+ *
+ * For example, this function can be used to set a control with a boolean value
+ * requested by users. If the caller knows whether to set or clear in the first
+ * place, the caller should call xvip_clr() or xvip_set() directly instead of
+ * using this function.
+ */
+void xvip_clr_or_set(struct xvip_device *xvip, u32 addr, u32 mask, bool set)
+{
+ u32 reg;
+
+ reg = xvip_read(xvip, addr);
+ reg = set ? reg | mask : reg & ~mask;
+ xvip_write(xvip, addr, reg);
+}
+EXPORT_SYMBOL_GPL(xvip_clr_or_set);
+
+/**
+ * xvip_clr_and_set - Clear and set the register with a bitmask
+ * @xvip: Xilinx Video IP device
+ * @addr: address of register
+ * @clr: bitmask to be cleared
+ * @set: bitmask to be set
+ *
+ * Clear a bit(s) of mask @clr in the register at address @addr, then set
+ * a bit(s) of mask @set in the register after.
+ */
+void xvip_clr_and_set(struct xvip_device *xvip, u32 addr, u32 clr, u32 set)
+{
+ u32 reg;
+
+ reg = xvip_read(xvip, addr);
+ reg &= ~clr;
+ reg |= set;
+ xvip_write(xvip, addr, reg);
+}
+EXPORT_SYMBOL_GPL(xvip_clr_and_set);
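+
+/*
+ * Usage sketch for the two helpers above: a boolean control maps naturally to
+ * xvip_clr_or_set(), a multi-bit field to xvip_clr_and_set(). The second call
+ * mirrors xtpg_s_ctrl() in the TPG driver; the register and bit names in the
+ * first call are only illustrative:
+ *
+ *	xvip_clr_or_set(&xtpg->xvip, XTPG_PATTERN_CONTROL,
+ *			XTPG_PATTERN_CONTROL_NOISE, ctrl->val);
+ *	xvip_clr_and_set(&xtpg->xvip, XTPG_ZPLATE_VER_CONTROL,
+ *			 XTPG_ZPLATE_SPEED_MASK,
+ *			 ctrl->val << XTPG_ZPLATE_SPEED_SHIFT);
+ */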
+
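+/**
+ * xvip_init_resources - Map the register space and acquire the video clock
+ * @xvip: Xilinx Video IP device
+ *
+ * Map the first memory resource of the underlying platform device, then get
+ * and enable the device clock. The resources are released by
+ * xvip_cleanup_resources().
+ *
+ * Return: 0 on success or a negative error code otherwise.
+ */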
+int xvip_init_resources(struct xvip_device *xvip)
+{
+ struct platform_device *pdev = to_platform_device(xvip->dev);
+ struct resource *res;
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ xvip->iomem = devm_ioremap_resource(xvip->dev, res);
+ if (IS_ERR(xvip->iomem))
+ return PTR_ERR(xvip->iomem);
+
+ xvip->clk = devm_clk_get(xvip->dev, NULL);
+ if (IS_ERR(xvip->clk))
+ return PTR_ERR(xvip->clk);
+
+ clk_prepare_enable(xvip->clk);
+ return 0;
+}
+EXPORT_SYMBOL_GPL(xvip_init_resources);
+
+void xvip_cleanup_resources(struct xvip_device *xvip)
+{
+ clk_disable_unprepare(xvip->clk);
+}
+EXPORT_SYMBOL_GPL(xvip_cleanup_resources);
+
+/* -----------------------------------------------------------------------------
+ * Subdev operations handlers
+ */
+
+/**
+ * xvip_enum_mbus_code - Enumerate the media format code
+ * @subdev: V4L2 subdevice
+ * @cfg: V4L2 subdev pad configuration
+ * @code: returning media bus code
+ *
+ * Enumerate the media bus code of the subdevice and return the corresponding
+ * pad format code. This function only works for subdevices with a fixed format
+ * on all pads. Subdevices with multiple formats should implement their own
+ * function to enumerate mbus codes.
+ *
+ * Return: 0 if the media bus code is found, or -EINVAL if the format index
+ * is not valid.
+ */
+int xvip_enum_mbus_code(struct v4l2_subdev *subdev,
+ struct v4l2_subdev_pad_config *cfg,
+ struct v4l2_subdev_mbus_code_enum *code)
+{
+ struct v4l2_mbus_framefmt *format;
+
+	/* Enumerating media bus codes based on the active configuration isn't
+	 * supported yet.
+	 */
+ if (code->which == V4L2_SUBDEV_FORMAT_ACTIVE)
+ return -EINVAL;
+
+ if (code->index)
+ return -EINVAL;
+
+ format = v4l2_subdev_get_try_format(subdev, cfg, code->pad);
+
+ code->code = format->code;
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(xvip_enum_mbus_code);
+
+/**
+ * xvip_enum_frame_size - Enumerate the media bus frame size
+ * @subdev: V4L2 subdevice
+ * @cfg: V4L2 subdev pad configuration
+ * @fse: returning media bus frame size
+ *
+ * This function is a drop-in implementation of the subdev enum_frame_size pad
+ * operation. It assumes that the subdevice has one sink pad and one source
+ * pad, and that the format on the source pad is always identical to the
+ * format on the sink pad. Entities with different requirements need to
+ * implement their own enum_frame_size handlers.
+ *
+ * Return: 0 if the media bus frame size is found, or -EINVAL
+ * if the index or the code is not valid.
+ */
+int xvip_enum_frame_size(struct v4l2_subdev *subdev,
+ struct v4l2_subdev_pad_config *cfg,
+ struct v4l2_subdev_frame_size_enum *fse)
+{
+ struct v4l2_mbus_framefmt *format;
+
+ /* Enumerating frame sizes based on the active configuration isn't
+ * supported yet.
+ */
+ if (fse->which == V4L2_SUBDEV_FORMAT_ACTIVE)
+ return -EINVAL;
+
+ format = v4l2_subdev_get_try_format(subdev, cfg, fse->pad);
+
+ if (fse->index || fse->code != format->code)
+ return -EINVAL;
+
+ if (fse->pad == XVIP_PAD_SINK) {
+ fse->min_width = XVIP_MIN_WIDTH;
+ fse->max_width = XVIP_MAX_WIDTH;
+ fse->min_height = XVIP_MIN_HEIGHT;
+ fse->max_height = XVIP_MAX_HEIGHT;
+ } else {
+ /* The size on the source pad is fixed and always identical to
+ * the size on the sink pad.
+ */
+ fse->min_width = format->width;
+ fse->max_width = format->width;
+ fse->min_height = format->height;
+ fse->max_height = format->height;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(xvip_enum_frame_size);
diff --git a/drivers/media/platform/xilinx/xilinx-vip.h b/drivers/media/platform/xilinx/xilinx-vip.h
new file mode 100644
index 0000000..42fee20
--- /dev/null
+++ b/drivers/media/platform/xilinx/xilinx-vip.h
@@ -0,0 +1,238 @@
+/*
+ * Xilinx Video IP Core
+ *
+ * Copyright (C) 2013-2015 Ideas on Board
+ * Copyright (C) 2013-2015 Xilinx, Inc.
+ *
+ * Contacts: Hyun Kwon <hyun.kwon@xilinx.com>
+ * Laurent Pinchart <laurent.pinchart@ideasonboard.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef __XILINX_VIP_H__
+#define __XILINX_VIP_H__
+
+#include <linux/io.h>
+#include <media/v4l2-subdev.h>
+
+struct clk;
+
+/*
+ * Minimum and maximum width and height common to most video IP cores. IP
+ * cores with different requirements must define their own values.
+ */
+#define XVIP_MIN_WIDTH 32
+#define XVIP_MAX_WIDTH 7680
+#define XVIP_MIN_HEIGHT 32
+#define XVIP_MAX_HEIGHT 7680
+
+/*
+ * Pad IDs. IP cores with multiple inputs or outputs should define
+ * their own values.
+ */
+#define XVIP_PAD_SINK 0
+#define XVIP_PAD_SOURCE 1
+
+/* Xilinx Video IP Control Registers */
+#define XVIP_CTRL_CONTROL 0x0000
+#define XVIP_CTRL_CONTROL_SW_ENABLE (1 << 0)
+#define XVIP_CTRL_CONTROL_REG_UPDATE (1 << 1)
+#define XVIP_CTRL_CONTROL_BYPASS (1 << 4)
+#define XVIP_CTRL_CONTROL_TEST_PATTERN (1 << 5)
+#define XVIP_CTRL_CONTROL_FRAME_SYNC_RESET (1 << 30)
+#define XVIP_CTRL_CONTROL_SW_RESET (1 << 31)
+#define XVIP_CTRL_STATUS 0x0004
+#define XVIP_CTRL_STATUS_PROC_STARTED (1 << 0)
+#define XVIP_CTRL_STATUS_EOF (1 << 1)
+#define XVIP_CTRL_ERROR 0x0008
+#define XVIP_CTRL_ERROR_SLAVE_EOL_EARLY (1 << 0)
+#define XVIP_CTRL_ERROR_SLAVE_EOL_LATE (1 << 1)
+#define XVIP_CTRL_ERROR_SLAVE_SOF_EARLY (1 << 2)
+#define XVIP_CTRL_ERROR_SLAVE_SOF_LATE (1 << 3)
+#define XVIP_CTRL_IRQ_ENABLE 0x000c
+#define XVIP_CTRL_IRQ_ENABLE_PROC_STARTED (1 << 0)
+#define XVIP_CTRL_IRQ_EOF (1 << 1)
+#define XVIP_CTRL_VERSION 0x0010
+#define XVIP_CTRL_VERSION_MAJOR_MASK (0xff << 24)
+#define XVIP_CTRL_VERSION_MAJOR_SHIFT 24
+#define XVIP_CTRL_VERSION_MINOR_MASK (0xff << 16)
+#define XVIP_CTRL_VERSION_MINOR_SHIFT 16
+#define XVIP_CTRL_VERSION_REVISION_MASK (0xf << 12)
+#define XVIP_CTRL_VERSION_REVISION_SHIFT 12
+#define XVIP_CTRL_VERSION_PATCH_MASK (0xf << 8)
+#define XVIP_CTRL_VERSION_PATCH_SHIFT 8
+#define XVIP_CTRL_VERSION_INTERNAL_MASK (0xff << 0)
+#define XVIP_CTRL_VERSION_INTERNAL_SHIFT 0
+
+/* Xilinx Video IP Timing Registers */
+#define XVIP_ACTIVE_SIZE 0x0020
+#define XVIP_ACTIVE_VSIZE_MASK (0x7ff << 16)
+#define XVIP_ACTIVE_VSIZE_SHIFT 16
+#define XVIP_ACTIVE_HSIZE_MASK (0x7ff << 0)
+#define XVIP_ACTIVE_HSIZE_SHIFT 0
+#define XVIP_ENCODING 0x0028
+#define XVIP_ENCODING_NBITS_8 (0 << 4)
+#define XVIP_ENCODING_NBITS_10 (1 << 4)
+#define XVIP_ENCODING_NBITS_12 (2 << 4)
+#define XVIP_ENCODING_NBITS_16 (3 << 4)
+#define XVIP_ENCODING_NBITS_MASK (3 << 4)
+#define XVIP_ENCODING_NBITS_SHIFT 4
+#define XVIP_ENCODING_VIDEO_FORMAT_YUV422 (0 << 0)
+#define XVIP_ENCODING_VIDEO_FORMAT_YUV444 (1 << 0)
+#define XVIP_ENCODING_VIDEO_FORMAT_RGB (2 << 0)
+#define XVIP_ENCODING_VIDEO_FORMAT_YUV420 (3 << 0)
+#define XVIP_ENCODING_VIDEO_FORMAT_MASK (3 << 0)
+#define XVIP_ENCODING_VIDEO_FORMAT_SHIFT 0
+
+/**
+ * struct xvip_device - Xilinx Video IP device structure
+ * @subdev: V4L2 subdevice
+ * @dev: (OF) device
+ * @iomem: device I/O register space remapped to kernel virtual memory
+ * @clk: video core clock
+ * @saved_ctrl: saved control register for resume / suspend
+ */
+struct xvip_device {
+ struct v4l2_subdev subdev;
+ struct device *dev;
+ void __iomem *iomem;
+ struct clk *clk;
+ u32 saved_ctrl;
+};
+
+/**
+ * struct xvip_video_format - Xilinx Video IP video format description
+ * @vf_code: AXI4 video format code
+ * @width: AXI4 format width in bits per component
+ * @pattern: CFA pattern for Mono/Sensor formats
+ * @code: media bus format code
+ * @bpp: bytes per pixel (when stored in memory)
+ * @fourcc: V4L2 pixel format FCC identifier
+ * @description: format description, suitable for userspace
+ */
+struct xvip_video_format {
+ unsigned int vf_code;
+ unsigned int width;
+ const char *pattern;
+ unsigned int code;
+ unsigned int bpp;
+ u32 fourcc;
+ const char *description;
+};
+
+const struct xvip_video_format *xvip_get_format_by_code(unsigned int code);
+const struct xvip_video_format *xvip_get_format_by_fourcc(u32 fourcc);
+const struct xvip_video_format *xvip_of_get_format(struct device_node *node);
+void xvip_set_format_size(struct v4l2_mbus_framefmt *format,
+ const struct v4l2_subdev_format *fmt);
+int xvip_enum_mbus_code(struct v4l2_subdev *subdev,
+ struct v4l2_subdev_pad_config *cfg,
+ struct v4l2_subdev_mbus_code_enum *code);
+int xvip_enum_frame_size(struct v4l2_subdev *subdev,
+ struct v4l2_subdev_pad_config *cfg,
+ struct v4l2_subdev_frame_size_enum *fse);
+
+static inline u32 xvip_read(struct xvip_device *xvip, u32 addr)
+{
+ return ioread32(xvip->iomem + addr);
+}
+
+static inline void xvip_write(struct xvip_device *xvip, u32 addr, u32 value)
+{
+ iowrite32(value, xvip->iomem + addr);
+}
+
+static inline void xvip_clr(struct xvip_device *xvip, u32 addr, u32 clr)
+{
+ xvip_write(xvip, addr, xvip_read(xvip, addr) & ~clr);
+}
+
+static inline void xvip_set(struct xvip_device *xvip, u32 addr, u32 set)
+{
+ xvip_write(xvip, addr, xvip_read(xvip, addr) | set);
+}
+
+void xvip_clr_or_set(struct xvip_device *xvip, u32 addr, u32 mask, bool set);
+void xvip_clr_and_set(struct xvip_device *xvip, u32 addr, u32 clr, u32 set);
+
+int xvip_init_resources(struct xvip_device *xvip);
+void xvip_cleanup_resources(struct xvip_device *xvip);
+
+static inline void xvip_reset(struct xvip_device *xvip)
+{
+ xvip_write(xvip, XVIP_CTRL_CONTROL, XVIP_CTRL_CONTROL_SW_RESET);
+}
+
+static inline void xvip_start(struct xvip_device *xvip)
+{
+ xvip_set(xvip, XVIP_CTRL_CONTROL,
+ XVIP_CTRL_CONTROL_SW_ENABLE | XVIP_CTRL_CONTROL_REG_UPDATE);
+}
+
+static inline void xvip_stop(struct xvip_device *xvip)
+{
+ xvip_clr(xvip, XVIP_CTRL_CONTROL, XVIP_CTRL_CONTROL_SW_ENABLE);
+}
+
+static inline void xvip_resume(struct xvip_device *xvip)
+{
+ xvip_write(xvip, XVIP_CTRL_CONTROL,
+ xvip->saved_ctrl | XVIP_CTRL_CONTROL_SW_ENABLE);
+}
+
+static inline void xvip_suspend(struct xvip_device *xvip)
+{
+ xvip->saved_ctrl = xvip_read(xvip, XVIP_CTRL_CONTROL);
+ xvip_write(xvip, XVIP_CTRL_CONTROL,
+ xvip->saved_ctrl & ~XVIP_CTRL_CONTROL_SW_ENABLE);
+}
+
+static inline void xvip_set_frame_size(struct xvip_device *xvip,
+ const struct v4l2_mbus_framefmt *format)
+{
+ xvip_write(xvip, XVIP_ACTIVE_SIZE,
+ (format->height << XVIP_ACTIVE_VSIZE_SHIFT) |
+ (format->width << XVIP_ACTIVE_HSIZE_SHIFT));
+}
+
+static inline void xvip_get_frame_size(struct xvip_device *xvip,
+ struct v4l2_mbus_framefmt *format)
+{
+ u32 reg;
+
+ reg = xvip_read(xvip, XVIP_ACTIVE_SIZE);
+ format->width = (reg & XVIP_ACTIVE_HSIZE_MASK) >>
+ XVIP_ACTIVE_HSIZE_SHIFT;
+ format->height = (reg & XVIP_ACTIVE_VSIZE_MASK) >>
+ XVIP_ACTIVE_VSIZE_SHIFT;
+}
+
+static inline void xvip_enable_reg_update(struct xvip_device *xvip)
+{
+ xvip_set(xvip, XVIP_CTRL_CONTROL, XVIP_CTRL_CONTROL_REG_UPDATE);
+}
+
+static inline void xvip_disable_reg_update(struct xvip_device *xvip)
+{
+ xvip_clr(xvip, XVIP_CTRL_CONTROL, XVIP_CTRL_CONTROL_REG_UPDATE);
+}
+
+static inline void xvip_print_version(struct xvip_device *xvip)
+{
+ u32 version;
+
+ version = xvip_read(xvip, XVIP_CTRL_VERSION);
+
+ dev_info(xvip->dev, "device found, version %u.%02x%x\n",
+ ((version & XVIP_CTRL_VERSION_MAJOR_MASK) >>
+ XVIP_CTRL_VERSION_MAJOR_SHIFT),
+ ((version & XVIP_CTRL_VERSION_MINOR_MASK) >>
+ XVIP_CTRL_VERSION_MINOR_SHIFT),
+ ((version & XVIP_CTRL_VERSION_REVISION_MASK) >>
+ XVIP_CTRL_VERSION_REVISION_SHIFT));
+}
+
+#endif /* __XILINX_VIP_H__ */
diff --git a/drivers/media/platform/xilinx/xilinx-vipp.c b/drivers/media/platform/xilinx/xilinx-vipp.c
new file mode 100644
index 0000000..7b7cb9c
--- /dev/null
+++ b/drivers/media/platform/xilinx/xilinx-vipp.c
@@ -0,0 +1,669 @@
+/*
+ * Xilinx Video IP Composite Device
+ *
+ * Copyright (C) 2013-2015 Ideas on Board
+ * Copyright (C) 2013-2015 Xilinx, Inc.
+ *
+ * Contacts: Hyun Kwon <hyun.kwon@xilinx.com>
+ * Laurent Pinchart <laurent.pinchart@ideasonboard.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/list.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/of_graph.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+
+#include <media/v4l2-async.h>
+#include <media/v4l2-common.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-of.h>
+
+#include "xilinx-dma.h"
+#include "xilinx-vipp.h"
+
+#define XVIPP_DMA_S2MM 0
+#define XVIPP_DMA_MM2S 1
+
+/**
+ * struct xvip_graph_entity - Entity in the video graph
+ * @list: list entry in a graph entities list
+ * @node: the entity's DT node
+ * @entity: media entity, from the corresponding V4L2 subdev
+ * @asd: subdev asynchronous registration information
+ * @subdev: V4L2 subdev
+ */
+struct xvip_graph_entity {
+ struct list_head list;
+ struct device_node *node;
+ struct media_entity *entity;
+
+ struct v4l2_async_subdev asd;
+ struct v4l2_subdev *subdev;
+};
+
+/* -----------------------------------------------------------------------------
+ * Graph Management
+ */
+
+static struct xvip_graph_entity *
+xvip_graph_find_entity(struct xvip_composite_device *xdev,
+ const struct device_node *node)
+{
+ struct xvip_graph_entity *entity;
+
+ list_for_each_entry(entity, &xdev->entities, list) {
+ if (entity->node == node)
+ return entity;
+ }
+
+ return NULL;
+}
+
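+/*
+ * Walk every endpoint of the entity's DT node and create an enabled media link
+ * for each of its source ports towards the remote entity. Sink ports are
+ * skipped (the link is created from the source end) and links to the composite
+ * node itself are left to xvip_graph_build_dma().
+ */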
+static int xvip_graph_build_one(struct xvip_composite_device *xdev,
+ struct xvip_graph_entity *entity)
+{
+ u32 link_flags = MEDIA_LNK_FL_ENABLED;
+ struct media_entity *local = entity->entity;
+ struct media_entity *remote;
+ struct media_pad *local_pad;
+ struct media_pad *remote_pad;
+ struct xvip_graph_entity *ent;
+ struct v4l2_of_link link;
+ struct device_node *ep = NULL;
+ struct device_node *next;
+ int ret = 0;
+
+ dev_dbg(xdev->dev, "creating links for entity %s\n", local->name);
+
+ while (1) {
+ /* Get the next endpoint and parse its link. */
+ next = of_graph_get_next_endpoint(entity->node, ep);
+ if (next == NULL)
+ break;
+
+ of_node_put(ep);
+ ep = next;
+
+ dev_dbg(xdev->dev, "processing endpoint %s\n", ep->full_name);
+
+ ret = v4l2_of_parse_link(ep, &link);
+ if (ret < 0) {
+ dev_err(xdev->dev, "failed to parse link for %s\n",
+ ep->full_name);
+ continue;
+ }
+
+ /* Skip sink ports, they will be processed from the other end of
+ * the link.
+ */
+ if (link.local_port >= local->num_pads) {
+ dev_err(xdev->dev, "invalid port number %u on %s\n",
+ link.local_port, link.local_node->full_name);
+ v4l2_of_put_link(&link);
+ ret = -EINVAL;
+ break;
+ }
+
+ local_pad = &local->pads[link.local_port];
+
+ if (local_pad->flags & MEDIA_PAD_FL_SINK) {
+ dev_dbg(xdev->dev, "skipping sink port %s:%u\n",
+ link.local_node->full_name, link.local_port);
+ v4l2_of_put_link(&link);
+ continue;
+ }
+
+ /* Skip DMA engines, they will be processed separately. */
+ if (link.remote_node == xdev->dev->of_node) {
+ dev_dbg(xdev->dev, "skipping DMA port %s:%u\n",
+ link.local_node->full_name, link.local_port);
+ v4l2_of_put_link(&link);
+ continue;
+ }
+
+ /* Find the remote entity. */
+ ent = xvip_graph_find_entity(xdev, link.remote_node);
+ if (ent == NULL) {
+ dev_err(xdev->dev, "no entity found for %s\n",
+ link.remote_node->full_name);
+ v4l2_of_put_link(&link);
+ ret = -ENODEV;
+ break;
+ }
+
+ remote = ent->entity;
+
+ if (link.remote_port >= remote->num_pads) {
+ dev_err(xdev->dev, "invalid port number %u on %s\n",
+ link.remote_port, link.remote_node->full_name);
+ v4l2_of_put_link(&link);
+ ret = -EINVAL;
+ break;
+ }
+
+ remote_pad = &remote->pads[link.remote_port];
+
+ v4l2_of_put_link(&link);
+
+ /* Create the media link. */
+ dev_dbg(xdev->dev, "creating %s:%u -> %s:%u link\n",
+ local->name, local_pad->index,
+ remote->name, remote_pad->index);
+
+ ret = media_entity_create_link(local, local_pad->index,
+ remote, remote_pad->index,
+ link_flags);
+ if (ret < 0) {
+ dev_err(xdev->dev,
+ "failed to create %s:%u -> %s:%u link\n",
+ local->name, local_pad->index,
+ remote->name, remote_pad->index);
+ break;
+ }
+ }
+
+ of_node_put(ep);
+ return ret;
+}
+
+static struct xvip_dma *
+xvip_graph_find_dma(struct xvip_composite_device *xdev, unsigned int port)
+{
+ struct xvip_dma *dma;
+
+ list_for_each_entry(dma, &xdev->dmas, list) {
+ if (dma->port == port)
+ return dma;
+ }
+
+ return NULL;
+}
+
+static int xvip_graph_build_dma(struct xvip_composite_device *xdev)
+{
+ u32 link_flags = MEDIA_LNK_FL_ENABLED;
+ struct device_node *node = xdev->dev->of_node;
+ struct media_entity *source;
+ struct media_entity *sink;
+ struct media_pad *source_pad;
+ struct media_pad *sink_pad;
+ struct xvip_graph_entity *ent;
+ struct v4l2_of_link link;
+ struct device_node *ep = NULL;
+ struct device_node *next;
+ struct xvip_dma *dma;
+ int ret = 0;
+
+ dev_dbg(xdev->dev, "creating links for DMA engines\n");
+
+ while (1) {
+ /* Get the next endpoint and parse its link. */
+ next = of_graph_get_next_endpoint(node, ep);
+ if (next == NULL)
+ break;
+
+ of_node_put(ep);
+ ep = next;
+
+ dev_dbg(xdev->dev, "processing endpoint %s\n", ep->full_name);
+
+ ret = v4l2_of_parse_link(ep, &link);
+ if (ret < 0) {
+ dev_err(xdev->dev, "failed to parse link for %s\n",
+ ep->full_name);
+ continue;
+ }
+
+ /* Find the DMA engine. */
+ dma = xvip_graph_find_dma(xdev, link.local_port);
+ if (dma == NULL) {
+ dev_err(xdev->dev, "no DMA engine found for port %u\n",
+ link.local_port);
+ v4l2_of_put_link(&link);
+ ret = -EINVAL;
+ break;
+ }
+
+ dev_dbg(xdev->dev, "creating link for DMA engine %s\n",
+ dma->video.name);
+
+ /* Find the remote entity. */
+ ent = xvip_graph_find_entity(xdev, link.remote_node);
+ if (ent == NULL) {
+ dev_err(xdev->dev, "no entity found for %s\n",
+ link.remote_node->full_name);
+ v4l2_of_put_link(&link);
+ ret = -ENODEV;
+ break;
+ }
+
+ if (link.remote_port >= ent->entity->num_pads) {
+ dev_err(xdev->dev, "invalid port number %u on %s\n",
+ link.remote_port, link.remote_node->full_name);
+ v4l2_of_put_link(&link);
+ ret = -EINVAL;
+ break;
+ }
+
+ if (dma->pad.flags & MEDIA_PAD_FL_SOURCE) {
+ source = &dma->video.entity;
+ source_pad = &dma->pad;
+ sink = ent->entity;
+ sink_pad = &sink->pads[link.remote_port];
+ } else {
+ source = ent->entity;
+ source_pad = &source->pads[link.remote_port];
+ sink = &dma->video.entity;
+ sink_pad = &dma->pad;
+ }
+
+ v4l2_of_put_link(&link);
+
+ /* Create the media link. */
+ dev_dbg(xdev->dev, "creating %s:%u -> %s:%u link\n",
+ source->name, source_pad->index,
+ sink->name, sink_pad->index);
+
+ ret = media_entity_create_link(source, source_pad->index,
+ sink, sink_pad->index,
+ link_flags);
+ if (ret < 0) {
+ dev_err(xdev->dev,
+ "failed to create %s:%u -> %s:%u link\n",
+ source->name, source_pad->index,
+ sink->name, sink_pad->index);
+ break;
+ }
+ }
+
+ of_node_put(ep);
+ return ret;
+}
+
+static int xvip_graph_notify_complete(struct v4l2_async_notifier *notifier)
+{
+ struct xvip_composite_device *xdev =
+ container_of(notifier, struct xvip_composite_device, notifier);
+ struct xvip_graph_entity *entity;
+ int ret;
+
+ dev_dbg(xdev->dev, "notify complete, all subdevs registered\n");
+
+ /* Create links for every entity. */
+ list_for_each_entry(entity, &xdev->entities, list) {
+ ret = xvip_graph_build_one(xdev, entity);
+ if (ret < 0)
+ return ret;
+ }
+
+ /* Create links for DMA channels. */
+ ret = xvip_graph_build_dma(xdev);
+ if (ret < 0)
+ return ret;
+
+ ret = v4l2_device_register_subdev_nodes(&xdev->v4l2_dev);
+ if (ret < 0)
+ dev_err(xdev->dev, "failed to register subdev nodes\n");
+
+ return ret;
+}
+
+static int xvip_graph_notify_bound(struct v4l2_async_notifier *notifier,
+ struct v4l2_subdev *subdev,
+ struct v4l2_async_subdev *asd)
+{
+ struct xvip_composite_device *xdev =
+ container_of(notifier, struct xvip_composite_device, notifier);
+ struct xvip_graph_entity *entity;
+
+ /* Locate the entity corresponding to the bound subdev and store the
+ * subdev pointer.
+ */
+ list_for_each_entry(entity, &xdev->entities, list) {
+ if (entity->node != subdev->dev->of_node)
+ continue;
+
+ if (entity->subdev) {
+ dev_err(xdev->dev, "duplicate subdev for node %s\n",
+ entity->node->full_name);
+ return -EINVAL;
+ }
+
+ dev_dbg(xdev->dev, "subdev %s bound\n", subdev->name);
+ entity->entity = &subdev->entity;
+ entity->subdev = subdev;
+ return 0;
+ }
+
+ dev_err(xdev->dev, "no entity for subdev %s\n", subdev->name);
+ return -EINVAL;
+}
+
+static int xvip_graph_parse_one(struct xvip_composite_device *xdev,
+ struct device_node *node)
+{
+ struct xvip_graph_entity *entity;
+ struct device_node *remote;
+ struct device_node *ep = NULL;
+ struct device_node *next;
+ int ret = 0;
+
+ dev_dbg(xdev->dev, "parsing node %s\n", node->full_name);
+
+ while (1) {
+ next = of_graph_get_next_endpoint(node, ep);
+ if (next == NULL)
+ break;
+
+ of_node_put(ep);
+ ep = next;
+
+ dev_dbg(xdev->dev, "handling endpoint %s\n", ep->full_name);
+
+ remote = of_graph_get_remote_port_parent(ep);
+ if (remote == NULL) {
+ ret = -EINVAL;
+ break;
+ }
+
+ /* Skip entities that we have already processed. */
+ if (remote == xdev->dev->of_node ||
+ xvip_graph_find_entity(xdev, remote)) {
+ of_node_put(remote);
+ continue;
+ }
+
+ entity = devm_kzalloc(xdev->dev, sizeof(*entity), GFP_KERNEL);
+ if (entity == NULL) {
+ of_node_put(remote);
+ ret = -ENOMEM;
+ break;
+ }
+
+ entity->node = remote;
+ entity->asd.match_type = V4L2_ASYNC_MATCH_OF;
+ entity->asd.match.of.node = remote;
+ list_add_tail(&entity->list, &xdev->entities);
+ xdev->num_subdevs++;
+ }
+
+ of_node_put(ep);
+ return ret;
+}
+
+static int xvip_graph_parse(struct xvip_composite_device *xdev)
+{
+ struct xvip_graph_entity *entity;
+ int ret;
+
+ /*
+ * Walk the links to parse the full graph. Start by parsing the
+ * composite node and then parse entities in turn. The list_for_each
+ * loop will handle entities added at the end of the list while walking
+ * the links.
+ */
+ ret = xvip_graph_parse_one(xdev, xdev->dev->of_node);
+ if (ret < 0)
+ return 0;
+
+ list_for_each_entry(entity, &xdev->entities, list) {
+ ret = xvip_graph_parse_one(xdev, entity->node);
+ if (ret < 0)
+ break;
+ }
+
+ return ret;
+}
+
+static int xvip_graph_dma_init_one(struct xvip_composite_device *xdev,
+ struct device_node *node)
+{
+ struct xvip_dma *dma;
+ enum v4l2_buf_type type;
+ const char *direction;
+ unsigned int index;
+ int ret;
+
+ ret = of_property_read_string(node, "direction", &direction);
+ if (ret < 0)
+ return ret;
+
+ if (strcmp(direction, "input") == 0)
+ type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
+ else if (strcmp(direction, "output") == 0)
+ type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
+ else
+ return -EINVAL;
+
+ of_property_read_u32(node, "reg", &index);
+
+ dma = devm_kzalloc(xdev->dev, sizeof(*dma), GFP_KERNEL);
+ if (dma == NULL)
+ return -ENOMEM;
+
+ ret = xvip_dma_init(xdev, dma, type, index);
+ if (ret < 0) {
+ dev_err(xdev->dev, "%s initialization failed\n",
+ node->full_name);
+ return ret;
+ }
+
+ list_add_tail(&dma->list, &xdev->dmas);
+
+ xdev->v4l2_caps |= type == V4L2_BUF_TYPE_VIDEO_CAPTURE
+ ? V4L2_CAP_VIDEO_CAPTURE : V4L2_CAP_VIDEO_OUTPUT;
+
+ return 0;
+}
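+
+/*
+ * For illustration only (the labels are made up, the property names are the
+ * ones read above): a composite device port describing a capture DMA channel
+ * could look like
+ *
+ *	port@0 {
+ *		reg = <0>;
+ *		direction = "input";
+ *		endpoint {
+ *			remote-endpoint = <&tpg_out>;
+ *		};
+ *	};
+ *
+ * "input" selects V4L2_BUF_TYPE_VIDEO_CAPTURE, "output" selects
+ * V4L2_BUF_TYPE_VIDEO_OUTPUT.
+ */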
+
+static int xvip_graph_dma_init(struct xvip_composite_device *xdev)
+{
+ struct device_node *ports;
+ struct device_node *port;
+ int ret;
+
+ ports = of_get_child_by_name(xdev->dev->of_node, "ports");
+ if (ports == NULL) {
+ dev_err(xdev->dev, "ports node not present\n");
+ return -EINVAL;
+ }
+
+ for_each_child_of_node(ports, port) {
+ ret = xvip_graph_dma_init_one(xdev, port);
+ if (ret < 0)
+ return ret;
+ }
+
+ return 0;
+}
+
+static void xvip_graph_cleanup(struct xvip_composite_device *xdev)
+{
+ struct xvip_graph_entity *entityp;
+ struct xvip_graph_entity *entity;
+ struct xvip_dma *dmap;
+ struct xvip_dma *dma;
+
+ v4l2_async_notifier_unregister(&xdev->notifier);
+
+ list_for_each_entry_safe(entity, entityp, &xdev->entities, list) {
+ of_node_put(entity->node);
+ list_del(&entity->list);
+ }
+
+ list_for_each_entry_safe(dma, dmap, &xdev->dmas, list) {
+ xvip_dma_cleanup(dma);
+ list_del(&dma->list);
+ }
+}
+
+static int xvip_graph_init(struct xvip_composite_device *xdev)
+{
+ struct xvip_graph_entity *entity;
+ struct v4l2_async_subdev **subdevs = NULL;
+ unsigned int num_subdevs;
+ unsigned int i;
+ int ret;
+
+ /* Init the DMA channels. */
+ ret = xvip_graph_dma_init(xdev);
+ if (ret < 0) {
+ dev_err(xdev->dev, "DMA initialization failed\n");
+ goto done;
+ }
+
+ /* Parse the graph to extract a list of subdevice DT nodes. */
+ ret = xvip_graph_parse(xdev);
+ if (ret < 0) {
+ dev_err(xdev->dev, "graph parsing failed\n");
+ goto done;
+ }
+
+	if (!xdev->num_subdevs) {
+		dev_err(xdev->dev, "no subdev found in graph\n");
+		ret = -EINVAL;
+		goto done;
+	}
+
+ /* Register the subdevices notifier. */
+ num_subdevs = xdev->num_subdevs;
+ subdevs = devm_kzalloc(xdev->dev, sizeof(*subdevs) * num_subdevs,
+ GFP_KERNEL);
+ if (subdevs == NULL) {
+ ret = -ENOMEM;
+ goto done;
+ }
+
+ i = 0;
+ list_for_each_entry(entity, &xdev->entities, list)
+ subdevs[i++] = &entity->asd;
+
+ xdev->notifier.subdevs = subdevs;
+ xdev->notifier.num_subdevs = num_subdevs;
+ xdev->notifier.bound = xvip_graph_notify_bound;
+ xdev->notifier.complete = xvip_graph_notify_complete;
+
+ ret = v4l2_async_notifier_register(&xdev->v4l2_dev, &xdev->notifier);
+ if (ret < 0) {
+ dev_err(xdev->dev, "notifier registration failed\n");
+ goto done;
+ }
+
+ ret = 0;
+
+done:
+ if (ret < 0)
+ xvip_graph_cleanup(xdev);
+
+ return ret;
+}
+
+/* -----------------------------------------------------------------------------
+ * Media Controller and V4L2
+ */
+
+static void xvip_composite_v4l2_cleanup(struct xvip_composite_device *xdev)
+{
+ v4l2_device_unregister(&xdev->v4l2_dev);
+ media_device_unregister(&xdev->media_dev);
+}
+
+static int xvip_composite_v4l2_init(struct xvip_composite_device *xdev)
+{
+ int ret;
+
+ xdev->media_dev.dev = xdev->dev;
+ strlcpy(xdev->media_dev.model, "Xilinx Video Composite Device",
+ sizeof(xdev->media_dev.model));
+ xdev->media_dev.hw_revision = 0;
+
+ ret = media_device_register(&xdev->media_dev);
+ if (ret < 0) {
+ dev_err(xdev->dev, "media device registration failed (%d)\n",
+ ret);
+ return ret;
+ }
+
+ xdev->v4l2_dev.mdev = &xdev->media_dev;
+ ret = v4l2_device_register(xdev->dev, &xdev->v4l2_dev);
+ if (ret < 0) {
+ dev_err(xdev->dev, "V4L2 device registration failed (%d)\n",
+ ret);
+ media_device_unregister(&xdev->media_dev);
+ return ret;
+ }
+
+ return 0;
+}
+
+/* -----------------------------------------------------------------------------
+ * Platform Device Driver
+ */
+
+static int xvip_composite_probe(struct platform_device *pdev)
+{
+ struct xvip_composite_device *xdev;
+ int ret;
+
+ xdev = devm_kzalloc(&pdev->dev, sizeof(*xdev), GFP_KERNEL);
+ if (!xdev)
+ return -ENOMEM;
+
+ xdev->dev = &pdev->dev;
+ INIT_LIST_HEAD(&xdev->entities);
+ INIT_LIST_HEAD(&xdev->dmas);
+
+ ret = xvip_composite_v4l2_init(xdev);
+ if (ret < 0)
+ return ret;
+
+ ret = xvip_graph_init(xdev);
+ if (ret < 0)
+ goto error;
+
+ platform_set_drvdata(pdev, xdev);
+
+ dev_info(xdev->dev, "device registered\n");
+
+ return 0;
+
+error:
+ xvip_composite_v4l2_cleanup(xdev);
+ return ret;
+}
+
+static int xvip_composite_remove(struct platform_device *pdev)
+{
+ struct xvip_composite_device *xdev = platform_get_drvdata(pdev);
+
+ xvip_graph_cleanup(xdev);
+ xvip_composite_v4l2_cleanup(xdev);
+
+ return 0;
+}
+
+static const struct of_device_id xvip_composite_of_id_table[] = {
+ { .compatible = "xlnx,video" },
+ { }
+};
+MODULE_DEVICE_TABLE(of, xvip_composite_of_id_table);
+
+static struct platform_driver xvip_composite_driver = {
+ .driver = {
+ .name = "xilinx-video",
+ .of_match_table = xvip_composite_of_id_table,
+ },
+ .probe = xvip_composite_probe,
+ .remove = xvip_composite_remove,
+};
+
+module_platform_driver(xvip_composite_driver);
+
+MODULE_AUTHOR("Laurent Pinchart <laurent.pinchart@ideasonboard.com>");
+MODULE_DESCRIPTION("Xilinx Video IP Composite Driver");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/media/platform/xilinx/xilinx-vipp.h b/drivers/media/platform/xilinx/xilinx-vipp.h
new file mode 100644
index 0000000..faf6b6e
--- /dev/null
+++ b/drivers/media/platform/xilinx/xilinx-vipp.h
@@ -0,0 +1,49 @@
+/*
+ * Xilinx Video IP Composite Device
+ *
+ * Copyright (C) 2013-2015 Ideas on Board
+ * Copyright (C) 2013-2015 Xilinx, Inc.
+ *
+ * Contacts: Hyun Kwon <hyun.kwon@xilinx.com>
+ * Laurent Pinchart <laurent.pinchart@ideasonboard.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef __XILINX_VIPP_H__
+#define __XILINX_VIPP_H__
+
+#include <linux/list.h>
+#include <linux/mutex.h>
+#include <media/media-device.h>
+#include <media/v4l2-async.h>
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-device.h>
+
+/**
+ * struct xvip_composite_device - Xilinx Video IP device structure
+ * @v4l2_dev: V4L2 device
+ * @media_dev: media device
+ * @dev: (OF) device
+ * @notifier: V4L2 asynchronous subdevs notifier
+ * @entities: entities in the graph as a list of xvip_graph_entity
+ * @num_subdevs: number of subdevs in the pipeline
+ * @dmas: list of DMA channels at the pipeline output and input
+ * @v4l2_caps: V4L2 capabilities of the whole device (see VIDIOC_QUERYCAP)
+ */
+struct xvip_composite_device {
+ struct v4l2_device v4l2_dev;
+ struct media_device media_dev;
+ struct device *dev;
+
+ struct v4l2_async_notifier notifier;
+ struct list_head entities;
+ unsigned int num_subdevs;
+
+ struct list_head dmas;
+ u32 v4l2_caps;
+};
+
+#endif /* __XILINX_VIPP_H__ */
diff --git a/drivers/media/platform/xilinx/xilinx-vtc.c b/drivers/media/platform/xilinx/xilinx-vtc.c
new file mode 100644
index 0000000..01c750e
--- /dev/null
+++ b/drivers/media/platform/xilinx/xilinx-vtc.c
@@ -0,0 +1,380 @@
+/*
+ * Xilinx Video Timing Controller
+ *
+ * Copyright (C) 2013-2015 Ideas on Board
+ * Copyright (C) 2013-2015 Xilinx, Inc.
+ *
+ * Contacts: Hyun Kwon <hyun.kwon@xilinx.com>
+ * Laurent Pinchart <laurent.pinchart@ideasonboard.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/clk.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+
+#include "xilinx-vip.h"
+#include "xilinx-vtc.h"
+
+#define XVTC_CONTROL_FIELD_ID_POL_SRC (1 << 26)
+#define XVTC_CONTROL_ACTIVE_CHROMA_POL_SRC (1 << 25)
+#define XVTC_CONTROL_ACTIVE_VIDEO_POL_SRC (1 << 24)
+#define XVTC_CONTROL_HSYNC_POL_SRC (1 << 23)
+#define XVTC_CONTROL_VSYNC_POL_SRC (1 << 22)
+#define XVTC_CONTROL_HBLANK_POL_SRC (1 << 21)
+#define XVTC_CONTROL_VBLANK_POL_SRC (1 << 20)
+#define XVTC_CONTROL_CHROMA_SRC (1 << 18)
+#define XVTC_CONTROL_VBLANK_HOFF_SRC (1 << 17)
+#define XVTC_CONTROL_VSYNC_END_SRC (1 << 16)
+#define XVTC_CONTROL_VSYNC_START_SRC (1 << 15)
+#define XVTC_CONTROL_ACTIVE_VSIZE_SRC (1 << 14)
+#define XVTC_CONTROL_FRAME_VSIZE_SRC (1 << 13)
+#define XVTC_CONTROL_HSYNC_END_SRC (1 << 11)
+#define XVTC_CONTROL_HSYNC_START_SRC (1 << 10)
+#define XVTC_CONTROL_ACTIVE_HSIZE_SRC (1 << 9)
+#define XVTC_CONTROL_FRAME_HSIZE_SRC (1 << 8)
+#define XVTC_CONTROL_SYNC_ENABLE (1 << 5)
+#define XVTC_CONTROL_DET_ENABLE (1 << 3)
+#define XVTC_CONTROL_GEN_ENABLE (1 << 2)
+
+#define XVTC_STATUS_FSYNC(n) ((n) << 16)
+#define XVTC_STATUS_GEN_ACTIVE_VIDEO (1 << 13)
+#define XVTC_STATUS_GEN_VBLANK (1 << 12)
+#define XVTC_STATUS_DET_ACTIVE_VIDEO (1 << 11)
+#define XVTC_STATUS_DET_VBLANK (1 << 10)
+#define XVTC_STATUS_LOCK_LOSS (1 << 9)
+#define XVTC_STATUS_LOCK (1 << 8)
+
+#define XVTC_ERROR_ACTIVE_CHROMA_LOCK (1 << 21)
+#define XVTC_ERROR_ACTIVE_VIDEO_LOCK (1 << 20)
+#define XVTC_ERROR_HSYNC_LOCK (1 << 19)
+#define XVTC_ERROR_VSYNC_LOCK (1 << 18)
+#define XVTC_ERROR_HBLANK_LOCK (1 << 17)
+#define XVTC_ERROR_VBLANK_LOCK (1 << 16)
+
+#define XVTC_IRQ_ENABLE_FSYNC(n) ((n) << 16)
+#define XVTC_IRQ_ENABLE_GEN_ACTIVE_VIDEO (1 << 13)
+#define XVTC_IRQ_ENABLE_GEN_VBLANK (1 << 12)
+#define XVTC_IRQ_ENABLE_DET_ACTIVE_VIDEO (1 << 11)
+#define XVTC_IRQ_ENABLE_DET_VBLANK (1 << 10)
+#define XVTC_IRQ_ENABLE_LOCK_LOSS (1 << 9)
+#define XVTC_IRQ_ENABLE_LOCK (1 << 8)
+
+/*
+ * The following registers exist in two blocks, one at 0x0020 for the detector
+ * and one at 0x0060 for the generator.
+ */
+
+#define XVTC_DETECTOR_OFFSET 0x0020
+#define XVTC_GENERATOR_OFFSET 0x0060
+
+#define XVTC_ACTIVE_SIZE 0x0000
+#define XVTC_ACTIVE_VSIZE_SHIFT 16
+#define XVTC_ACTIVE_VSIZE_MASK (0x1fff << 16)
+#define XVTC_ACTIVE_HSIZE_SHIFT 0
+#define XVTC_ACTIVE_HSIZE_MASK (0x1fff << 0)
+
+#define XVTC_TIMING_STATUS 0x0004
+#define XVTC_TIMING_STATUS_ACTIVE_VIDEO (1 << 2)
+#define XVTC_TIMING_STATUS_VBLANK (1 << 1)
+#define XVTC_TIMING_STATUS_LOCKED (1 << 0)
+
+#define XVTC_ENCODING 0x0008
+#define XVTC_ENCODING_CHROMA_PARITY_SHIFT 8
+#define XVTC_ENCODING_CHROMA_PARITY_MASK (3 << 8)
+#define XVTC_ENCODING_CHROMA_PARITY_EVEN_ALL (0 << 8)
+#define XVTC_ENCODING_CHROMA_PARITY_ODD_ALL (1 << 8)
+#define XVTC_ENCODING_CHROMA_PARITY_EVEN_EVEN (2 << 8)
+#define XVTC_ENCODING_CHROMA_PARITY_ODD_EVEN (3 << 8)
+#define XVTC_ENCODING_VIDEO_FORMAT_SHIFT 0
+#define XVTC_ENCODING_VIDEO_FORMAT_MASK (0xf << 0)
+#define XVTC_ENCODING_VIDEO_FORMAT_YUV422 (0 << 0)
+#define XVTC_ENCODING_VIDEO_FORMAT_YUV444 (1 << 0)
+#define XVTC_ENCODING_VIDEO_FORMAT_RGB (2 << 0)
+#define XVTC_ENCODING_VIDEO_FORMAT_YUV420 (3 << 0)
+
+#define XVTC_POLARITY 0x000c
+#define XVTC_POLARITY_ACTIVE_CHROMA_POL (1 << 5)
+#define XVTC_POLARITY_ACTIVE_VIDEO_POL (1 << 4)
+#define XVTC_POLARITY_HSYNC_POL (1 << 3)
+#define XVTC_POLARITY_VSYNC_POL (1 << 2)
+#define XVTC_POLARITY_HBLANK_POL (1 << 1)
+#define XVTC_POLARITY_VBLANK_POL (1 << 0)
+
+#define XVTC_HSIZE 0x0010
+#define XVTC_HSIZE_MASK (0x1fff << 0)
+
+#define XVTC_VSIZE 0x0014
+#define XVTC_VSIZE_MASK (0x1fff << 0)
+
+#define XVTC_HSYNC 0x0018
+#define XVTC_HSYNC_END_SHIFT 16
+#define XVTC_HSYNC_END_MASK (0x1fff << 16)
+#define XVTC_HSYNC_START_SHIFT 0
+#define XVTC_HSYNC_START_MASK (0x1fff << 0)
+
+#define XVTC_F0_VBLANK_H 0x001c
+#define XVTC_F0_VBLANK_HEND_SHIFT 16
+#define XVTC_F0_VBLANK_HEND_MASK (0x1fff << 16)
+#define XVTC_F0_VBLANK_HSTART_SHIFT 0
+#define XVTC_F0_VBLANK_HSTART_MASK (0x1fff << 0)
+
+#define XVTC_F0_VSYNC_V 0x0020
+#define XVTC_F0_VSYNC_VEND_SHIFT 16
+#define XVTC_F0_VSYNC_VEND_MASK (0x1fff << 16)
+#define XVTC_F0_VSYNC_VSTART_SHIFT 0
+#define XVTC_F0_VSYNC_VSTART_MASK (0x1fff << 0)
+
+#define XVTC_F0_VSYNC_H 0x0024
+#define XVTC_F0_VSYNC_HEND_SHIFT 16
+#define XVTC_F0_VSYNC_HEND_MASK (0x1fff << 16)
+#define XVTC_F0_VSYNC_HSTART_SHIFT 0
+#define XVTC_F0_VSYNC_HSTART_MASK (0x1fff << 0)
+
+#define XVTC_FRAME_SYNC_CONFIG(n) (0x0100 + 4 * (n))
+#define XVTC_FRAME_SYNC_V_START_SHIFT 16
+#define XVTC_FRAME_SYNC_V_START_MASK (0x1fff << 16)
+#define XVTC_FRAME_SYNC_H_START_SHIFT 0
+#define XVTC_FRAME_SYNC_H_START_MASK (0x1fff << 0)
+
+#define XVTC_GENERATOR_GLOBAL_DELAY 0x0104
+
+/**
+ * struct xvtc_device - Xilinx Video Timing Controller device structure
+ * @xvip: Xilinx Video IP device
+ * @list: entry in the global VTC list
+ * @has_detector: the VTC has a timing detector
+ * @has_generator: the VTC has a timing generator
+ * @config: generator timings configuration
+ */
+struct xvtc_device {
+ struct xvip_device xvip;
+ struct list_head list;
+
+ bool has_detector;
+ bool has_generator;
+
+ struct xvtc_config config;
+};
+
+static LIST_HEAD(xvtc_list);
+static DEFINE_MUTEX(xvtc_lock);
+
+static inline void xvtc_gen_write(struct xvtc_device *xvtc, u32 addr, u32 value)
+{
+ xvip_write(&xvtc->xvip, XVTC_GENERATOR_OFFSET + addr, value);
+}
+
+/* -----------------------------------------------------------------------------
+ * Generator Operations
+ */
+
+int xvtc_generator_start(struct xvtc_device *xvtc,
+ const struct xvtc_config *config)
+{
+ int ret;
+
+ if (!xvtc->has_generator)
+ return -ENXIO;
+
+ ret = clk_prepare_enable(xvtc->xvip.clk);
+ if (ret < 0)
+ return ret;
+
+	/* Hardcode the polarity to active high, as required by the video in to
+	 * AXI4-stream core.
+	 */
+	xvtc_gen_write(xvtc, XVTC_POLARITY,
+		       XVTC_POLARITY_ACTIVE_CHROMA_POL |
+		       XVTC_POLARITY_ACTIVE_VIDEO_POL |
+		       XVTC_POLARITY_HSYNC_POL | XVTC_POLARITY_VSYNC_POL |
+		       XVTC_POLARITY_HBLANK_POL | XVTC_POLARITY_VBLANK_POL);
+
+	/* We don't care about the chroma active signal; the encoding parameters
+	 * are not important for now.
+	 */
+	xvtc_gen_write(xvtc, XVTC_ENCODING, 0);
+
+	/* Configure the timings. Assertion and deassertion of the VBLANK and
+	 * VSYNC signals are hardcoded to the first pixel of the line.
+	 */
+ xvtc_gen_write(xvtc, XVTC_ACTIVE_SIZE,
+ (config->vblank_start << XVTC_ACTIVE_VSIZE_SHIFT) |
+ (config->hblank_start << XVTC_ACTIVE_HSIZE_SHIFT));
+ xvtc_gen_write(xvtc, XVTC_HSIZE, config->hsize);
+ xvtc_gen_write(xvtc, XVTC_VSIZE, config->vsize);
+ xvtc_gen_write(xvtc, XVTC_HSYNC,
+ (config->hsync_end << XVTC_HSYNC_END_SHIFT) |
+ (config->hsync_start << XVTC_HSYNC_START_SHIFT));
+ xvtc_gen_write(xvtc, XVTC_F0_VBLANK_H, 0);
+ xvtc_gen_write(xvtc, XVTC_F0_VSYNC_V,
+ (config->vsync_end << XVTC_F0_VSYNC_VEND_SHIFT) |
+ (config->vsync_start << XVTC_F0_VSYNC_VSTART_SHIFT));
+ xvtc_gen_write(xvtc, XVTC_F0_VSYNC_H, 0);
+
+ /* Enable the generator. Set the source of all generator parameters to
+ * generator registers.
+ */
+ xvip_write(&xvtc->xvip, XVIP_CTRL_CONTROL,
+ XVTC_CONTROL_ACTIVE_CHROMA_POL_SRC |
+ XVTC_CONTROL_ACTIVE_VIDEO_POL_SRC |
+ XVTC_CONTROL_HSYNC_POL_SRC | XVTC_CONTROL_VSYNC_POL_SRC |
+ XVTC_CONTROL_HBLANK_POL_SRC | XVTC_CONTROL_VBLANK_POL_SRC |
+ XVTC_CONTROL_CHROMA_SRC | XVTC_CONTROL_VBLANK_HOFF_SRC |
+ XVTC_CONTROL_VSYNC_END_SRC | XVTC_CONTROL_VSYNC_START_SRC |
+ XVTC_CONTROL_ACTIVE_VSIZE_SRC |
+ XVTC_CONTROL_FRAME_VSIZE_SRC | XVTC_CONTROL_HSYNC_END_SRC |
+ XVTC_CONTROL_HSYNC_START_SRC |
+ XVTC_CONTROL_ACTIVE_HSIZE_SRC |
+ XVTC_CONTROL_FRAME_HSIZE_SRC | XVTC_CONTROL_GEN_ENABLE |
+ XVIP_CTRL_CONTROL_REG_UPDATE);
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(xvtc_generator_start);
+
+int xvtc_generator_stop(struct xvtc_device *xvtc)
+{
+ if (!xvtc->has_generator)
+ return -ENXIO;
+
+ xvip_write(&xvtc->xvip, XVIP_CTRL_CONTROL, 0);
+
+ clk_disable_unprepare(xvtc->xvip.clk);
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(xvtc_generator_stop);
+
+struct xvtc_device *xvtc_of_get(struct device_node *np)
+{
+ struct device_node *xvtc_node;
+ struct xvtc_device *found = NULL;
+ struct xvtc_device *xvtc;
+
+ if (!of_find_property(np, "xlnx,vtc", NULL))
+ return NULL;
+
+ xvtc_node = of_parse_phandle(np, "xlnx,vtc", 0);
+ if (xvtc_node == NULL)
+ return ERR_PTR(-EINVAL);
+
+ mutex_lock(&xvtc_lock);
+ list_for_each_entry(xvtc, &xvtc_list, list) {
+ if (xvtc->xvip.dev->of_node == xvtc_node) {
+ found = xvtc;
+ break;
+ }
+ }
+ mutex_unlock(&xvtc_lock);
+
+ of_node_put(xvtc_node);
+
+ if (!found)
+ return ERR_PTR(-EPROBE_DEFER);
+
+ return found;
+}
+EXPORT_SYMBOL_GPL(xvtc_of_get);
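+
+/*
+ * A client IP core points at its timing controller with an "xlnx,vtc" phandle,
+ * e.g. (sketch, the label names are hypothetical):
+ *
+ *	tpg_0: tpg@40080000 {
+ *		...
+ *		xlnx,vtc = <&vtc_0>;
+ *	};
+ *
+ * xvtc_of_get() returns the matching xvtc_device once the VTC has probed, NULL
+ * when the property is absent, or ERR_PTR(-EPROBE_DEFER) while the VTC is
+ * still missing.
+ */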
+
+void xvtc_put(struct xvtc_device *xvtc)
+{
+}
+EXPORT_SYMBOL_GPL(xvtc_put);
+
+/* -----------------------------------------------------------------------------
+ * Registration and Unregistration
+ */
+
+static void xvtc_register_device(struct xvtc_device *xvtc)
+{
+ mutex_lock(&xvtc_lock);
+ list_add_tail(&xvtc->list, &xvtc_list);
+ mutex_unlock(&xvtc_lock);
+}
+
+static void xvtc_unregister_device(struct xvtc_device *xvtc)
+{
+ mutex_lock(&xvtc_lock);
+ list_del(&xvtc->list);
+ mutex_unlock(&xvtc_lock);
+}
+
+/* -----------------------------------------------------------------------------
+ * Platform Device Driver
+ */
+
+static int xvtc_parse_of(struct xvtc_device *xvtc)
+{
+ struct device_node *node = xvtc->xvip.dev->of_node;
+
+ xvtc->has_detector = of_property_read_bool(node, "xlnx,detector");
+ xvtc->has_generator = of_property_read_bool(node, "xlnx,generator");
+
+ return 0;
+}
+
+static int xvtc_probe(struct platform_device *pdev)
+{
+ struct xvtc_device *xvtc;
+ int ret;
+
+ xvtc = devm_kzalloc(&pdev->dev, sizeof(*xvtc), GFP_KERNEL);
+ if (!xvtc)
+ return -ENOMEM;
+
+ xvtc->xvip.dev = &pdev->dev;
+
+ ret = xvtc_parse_of(xvtc);
+ if (ret < 0)
+ return ret;
+
+ ret = xvip_init_resources(&xvtc->xvip);
+ if (ret < 0)
+ return ret;
+
+ platform_set_drvdata(pdev, xvtc);
+
+ xvip_print_version(&xvtc->xvip);
+
+ xvtc_register_device(xvtc);
+
+ return 0;
+}
+
+static int xvtc_remove(struct platform_device *pdev)
+{
+ struct xvtc_device *xvtc = platform_get_drvdata(pdev);
+
+ xvtc_unregister_device(xvtc);
+
+ xvip_cleanup_resources(&xvtc->xvip);
+
+ return 0;
+}
+
+static const struct of_device_id xvtc_of_id_table[] = {
+ { .compatible = "xlnx,v-tc-6.1" },
+ { }
+};
+MODULE_DEVICE_TABLE(of, xvtc_of_id_table);
+
+static struct platform_driver xvtc_driver = {
+ .driver = {
+ .name = "xilinx-vtc",
+ .of_match_table = xvtc_of_id_table,
+ },
+ .probe = xvtc_probe,
+ .remove = xvtc_remove,
+};
+
+module_platform_driver(xvtc_driver);
+
+MODULE_AUTHOR("Laurent Pinchart <laurent.pinchart@ideasonboard.com>");
+MODULE_DESCRIPTION("Xilinx Video Timing Controller Driver");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/media/platform/xilinx/xilinx-vtc.h b/drivers/media/platform/xilinx/xilinx-vtc.h
new file mode 100644
index 0000000..e1bb2cf
--- /dev/null
+++ b/drivers/media/platform/xilinx/xilinx-vtc.h
@@ -0,0 +1,42 @@
+/*
+ * Xilinx Video Timing Controller
+ *
+ * Copyright (C) 2013-2015 Ideas on Board
+ * Copyright (C) 2013-2015 Xilinx, Inc.
+ *
+ * Contacts: Hyun Kwon <hyun.kwon@xilinx.com>
+ * Laurent Pinchart <laurent.pinchart@ideasonboard.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef __XILINX_VTC_H__
+#define __XILINX_VTC_H__
+
+struct device_node;
+struct xvtc_device;
+
+#define XVTC_MAX_HSIZE 8191
+#define XVTC_MAX_VSIZE 8191
+
+struct xvtc_config {
+ unsigned int hblank_start;
+ unsigned int hsync_start;
+ unsigned int hsync_end;
+ unsigned int hsize;
+ unsigned int vblank_start;
+ unsigned int vsync_start;
+ unsigned int vsync_end;
+ unsigned int vsize;
+};
+
+struct xvtc_device *xvtc_of_get(struct device_node *np);
+void xvtc_put(struct xvtc_device *xvtc);
+
+int xvtc_generator_start(struct xvtc_device *xvtc,
+ const struct xvtc_config *config);
+int xvtc_generator_stop(struct xvtc_device *xvtc);
+
+#endif /* __XILINX_VTC_H__ */
diff --git a/drivers/media/radio/radio-wl1273.c b/drivers/media/radio/radio-wl1273.c
index b8f3644..a93f681 100644
--- a/drivers/media/radio/radio-wl1273.c
+++ b/drivers/media/radio/radio-wl1273.c
@@ -347,6 +347,7 @@
{
struct wl1273_core *core = radio->core;
int r = 0;
+ unsigned long t;
if (freq < WL1273_BAND_TX_LOW) {
dev_err(radio->dev,
@@ -378,11 +379,11 @@
reinit_completion(&radio->busy);
/* wait for the FR IRQ */
- r = wait_for_completion_timeout(&radio->busy, msecs_to_jiffies(2000));
- if (!r)
+ t = wait_for_completion_timeout(&radio->busy, msecs_to_jiffies(2000));
+ if (!t)
return -ETIMEDOUT;
- dev_dbg(radio->dev, "WL1273_CHANL_SET: %d\n", r);
+ dev_dbg(radio->dev, "WL1273_CHANL_SET: %lu\n", t);
/* Enable the output power */
r = core->write(core, WL1273_POWER_ENB_SET, 1);
@@ -392,12 +393,12 @@
reinit_completion(&radio->busy);
/* wait for the POWER_ENB IRQ */
- r = wait_for_completion_timeout(&radio->busy, msecs_to_jiffies(1000));
- if (!r)
+ t = wait_for_completion_timeout(&radio->busy, msecs_to_jiffies(1000));
+ if (!t)
return -ETIMEDOUT;
radio->tx_frequency = freq;
- dev_dbg(radio->dev, "WL1273_POWER_ENB_SET: %d\n", r);
+ dev_dbg(radio->dev, "WL1273_POWER_ENB_SET: %lu\n", t);
return 0;
}
@@ -406,6 +407,7 @@
{
struct wl1273_core *core = radio->core;
int r, f;
+ unsigned long t;
if (freq < radio->rangelow) {
dev_err(radio->dev,
@@ -446,8 +448,8 @@
reinit_completion(&radio->busy);
- r = wait_for_completion_timeout(&radio->busy, msecs_to_jiffies(2000));
- if (!r) {
+ t = wait_for_completion_timeout(&radio->busy, msecs_to_jiffies(2000));
+ if (!t) {
dev_err(radio->dev, "%s: TIMEOUT\n", __func__);
return -ETIMEDOUT;
}
@@ -826,9 +828,12 @@
if (r)
goto out;
+ /* wait for the FR IRQ */
wait_for_completion_timeout(&radio->busy, msecs_to_jiffies(1000));
- if (!(radio->irq_received & WL1273_BL_EVENT))
+ if (!(radio->irq_received & WL1273_BL_EVENT)) {
+ r = -ETIMEDOUT;
goto out;
+ }
radio->irq_received &= ~WL1273_BL_EVENT;
@@ -854,7 +859,9 @@
if (r)
goto out;
- wait_for_completion_timeout(&radio->busy, msecs_to_jiffies(1000));
+ /* wait for the FR IRQ */
+ if (!wait_for_completion_timeout(&radio->busy, msecs_to_jiffies(1000)))
+ r = -ETIMEDOUT;
out:
dev_dbg(radio->dev, "%s: Err: %d\n", __func__, r);
return r;
diff --git a/drivers/media/radio/si470x/radio-si470x-common.c b/drivers/media/radio/si470x/radio-si470x-common.c
index 909c3f9..1d827ad 100644
--- a/drivers/media/radio/si470x/radio-si470x-common.c
+++ b/drivers/media/radio/si470x/radio-si470x-common.c
@@ -208,6 +208,7 @@
static int si470x_set_chan(struct si470x_device *radio, unsigned short chan)
{
int retval;
+ unsigned long time_left;
bool timed_out = false;
/* start tuning */
@@ -219,9 +220,9 @@
/* wait till tune operation has completed */
reinit_completion(&radio->completion);
- retval = wait_for_completion_timeout(&radio->completion,
- msecs_to_jiffies(tune_timeout));
- if (!retval)
+ time_left = wait_for_completion_timeout(&radio->completion,
+ msecs_to_jiffies(tune_timeout));
+ if (time_left == 0)
timed_out = true;
if ((radio->registers[STATUSRSSI] & STATUSRSSI_STC) == 0)
@@ -301,6 +302,7 @@
int band, retval;
unsigned int freq;
bool timed_out = false;
+ unsigned long time_left;
/* set band */
if (seek->rangelow || seek->rangehigh) {
@@ -342,9 +344,9 @@
/* wait till tune operation has completed */
reinit_completion(&radio->completion);
- retval = wait_for_completion_timeout(&radio->completion,
- msecs_to_jiffies(seek_timeout));
- if (!retval)
+ time_left = wait_for_completion_timeout(&radio->completion,
+ msecs_to_jiffies(seek_timeout));
+ if (time_left == 0)
timed_out = true;
if ((radio->registers[STATUSRSSI] & STATUSRSSI_STC) == 0)
diff --git a/drivers/media/radio/si4713/si4713.c b/drivers/media/radio/si4713/si4713.c
index c90004d..e9d03ac 100644
--- a/drivers/media/radio/si4713/si4713.c
+++ b/drivers/media/radio/si4713/si4713.c
@@ -383,7 +383,7 @@
}
}
- if (!IS_ERR(sdev->gpio_reset)) {
+ if (sdev->gpio_reset) {
udelay(50);
gpiod_set_value(sdev->gpio_reset, 1);
}
@@ -407,8 +407,7 @@
SI4713_STC_INT | SI4713_CTS);
return err;
}
- if (!IS_ERR(sdev->gpio_reset))
- gpiod_set_value(sdev->gpio_reset, 0);
+ gpiod_set_value(sdev->gpio_reset, 0);
if (sdev->vdd) {
@@ -447,7 +446,7 @@
v4l2_dbg(1, debug, &sdev->sd, "Power down response: 0x%02x\n",
resp[0]);
v4l2_dbg(1, debug, &sdev->sd, "Device in reset mode\n");
- if (!IS_ERR(sdev->gpio_reset))
+ if (sdev->gpio_reset)
gpiod_set_value(sdev->gpio_reset, 0);
if (sdev->vdd) {
@@ -1460,14 +1459,9 @@
goto exit;
}
- sdev->gpio_reset = devm_gpiod_get(&client->dev, "reset");
- if (!IS_ERR(sdev->gpio_reset)) {
- gpiod_direction_output(sdev->gpio_reset, 0);
- } else if (PTR_ERR(sdev->gpio_reset) == -ENOENT) {
- dev_dbg(&client->dev, "No reset GPIO assigned\n");
- } else if (PTR_ERR(sdev->gpio_reset) == -ENOSYS) {
- dev_dbg(&client->dev, "No reset GPIO support\n");
- } else {
+ sdev->gpio_reset = devm_gpiod_get_optional(&client->dev, "reset",
+ GPIOD_OUT_LOW);
+ if (IS_ERR(sdev->gpio_reset)) {
rval = PTR_ERR(sdev->gpio_reset);
dev_err(&client->dev, "Failed to request gpio: %d\n", rval);
goto exit;
diff --git a/drivers/media/radio/wl128x/Kconfig b/drivers/media/radio/wl128x/Kconfig
index f359be7..9d6574b 100644
--- a/drivers/media/radio/wl128x/Kconfig
+++ b/drivers/media/radio/wl128x/Kconfig
@@ -5,7 +5,7 @@
config RADIO_WL128X
tristate "Texas Instruments WL128x FM Radio"
depends on VIDEO_V4L2 && RFKILL && GPIOLIB && TTY
- select TI_ST if NET
+ depends on TI_ST
help
Choose Y here if you have this FM radio chip.
diff --git a/drivers/media/radio/wl128x/fmdrv_v4l2.c b/drivers/media/radio/wl128x/fmdrv_v4l2.c
index a5bd3f6..fb42f0f 100644
--- a/drivers/media/radio/wl128x/fmdrv_v4l2.c
+++ b/drivers/media/radio/wl128x/fmdrv_v4l2.c
@@ -36,7 +36,7 @@
#include "fmdrv_rx.h"
#include "fmdrv_tx.h"
-static struct video_device *gradio_dev;
+static struct video_device gradio_dev;
static u8 radio_disconnected;
/* -- V4L2 RADIO (/dev/radioX) device file operation interfaces --- */
@@ -517,7 +517,7 @@
.fops = &fm_drv_fops,
.ioctl_ops = &fm_drv_ioctl_ops,
.name = FM_DRV_NAME,
- .release = video_device_release,
+ .release = video_device_release_empty,
/*
* To ensure both the tuner and modulator ioctls are accessible we
* set the vfl_dir to M2M to indicate this.
@@ -543,29 +543,21 @@
/* Init mutex for core locking */
mutex_init(&fmdev->mutex);
- /* Allocate new video device */
- gradio_dev = video_device_alloc();
- if (NULL == gradio_dev) {
- fmerr("Can't allocate video device\n");
- return -ENOMEM;
- }
-
/* Setup FM driver's V4L2 properties */
- memcpy(gradio_dev, &fm_viddev_template, sizeof(fm_viddev_template));
+ gradio_dev = fm_viddev_template;
- video_set_drvdata(gradio_dev, fmdev);
+ video_set_drvdata(&gradio_dev, fmdev);
- gradio_dev->lock = &fmdev->mutex;
- gradio_dev->v4l2_dev = &fmdev->v4l2_dev;
+ gradio_dev.lock = &fmdev->mutex;
+ gradio_dev.v4l2_dev = &fmdev->v4l2_dev;
/* Register with V4L2 subsystem as RADIO device */
- if (video_register_device(gradio_dev, VFL_TYPE_RADIO, radio_nr)) {
- video_device_release(gradio_dev);
+ if (video_register_device(&gradio_dev, VFL_TYPE_RADIO, radio_nr)) {
fmerr("Could not register video device\n");
return -ENOMEM;
}
- fmdev->radio_dev = gradio_dev;
+ fmdev->radio_dev = &gradio_dev;
/* Register to v4l2 ctrl handler framework */
fmdev->radio_dev->ctrl_handler = &fmdev->ctrl_handler;
@@ -611,13 +603,13 @@
struct fmdev *fmdev;
- fmdev = video_get_drvdata(gradio_dev);
+ fmdev = video_get_drvdata(&gradio_dev);
/* Unregister to v4l2 ctrl handler framework*/
v4l2_ctrl_handler_free(&fmdev->ctrl_handler);
/* Unregister RADIO device from V4L2 subsystem */
- video_unregister_device(gradio_dev);
+ video_unregister_device(&gradio_dev);
v4l2_device_unregister(&fmdev->v4l2_dev);
diff --git a/drivers/media/rc/img-ir/img-ir-core.c b/drivers/media/rc/img-ir/img-ir-core.c
index 77c78de..03fe080 100644
--- a/drivers/media/rc/img-ir/img-ir-core.c
+++ b/drivers/media/rc/img-ir/img-ir-core.c
@@ -110,16 +110,32 @@
priv->clk = devm_clk_get(&pdev->dev, "core");
if (IS_ERR(priv->clk))
dev_warn(&pdev->dev, "cannot get core clock resource\n");
+
+ /* Get sys clock */
+ priv->sys_clk = devm_clk_get(&pdev->dev, "sys");
+ if (IS_ERR(priv->sys_clk))
+ dev_warn(&pdev->dev, "cannot get sys clock resource\n");
/*
- * The driver doesn't need to know about the system ("sys") or power
- * modulation ("mod") clocks yet
+ * Enable the system clock before the register interface is
+ * accessed. The ISR must not run with the sys clock disabled,
+ * so fail probe with an error if the clock cannot be enabled.
*/
+ if (!IS_ERR(priv->sys_clk)) {
+ error = clk_prepare_enable(priv->sys_clk);
+ if (error) {
+ dev_err(&pdev->dev, "cannot enable sys clock\n");
+ return error;
+ }
+ }
/* Set up raw & hw decoder */
error = img_ir_probe_raw(priv);
error2 = img_ir_probe_hw(priv);
- if (error && error2)
- return (error == -ENODEV) ? error2 : error;
+ if (error && error2) {
+ if (error == -ENODEV)
+ error = error2;
+ goto err_probe;
+ }
/* Get the IRQ */
priv->irq = irq;
@@ -139,6 +155,9 @@
err_irq:
img_ir_remove_hw(priv);
img_ir_remove_raw(priv);
+err_probe:
+ if (!IS_ERR(priv->sys_clk))
+ clk_disable_unprepare(priv->sys_clk);
return error;
}
@@ -146,12 +165,14 @@
{
struct img_ir_priv *priv = platform_get_drvdata(pdev);
- free_irq(priv->irq, img_ir_isr);
+ free_irq(priv->irq, priv);
img_ir_remove_hw(priv);
img_ir_remove_raw(priv);
if (!IS_ERR(priv->clk))
clk_disable_unprepare(priv->clk);
+ if (!IS_ERR(priv->sys_clk))
+ clk_disable_unprepare(priv->sys_clk);
return 0;
}
diff --git a/drivers/media/rc/img-ir/img-ir.h b/drivers/media/rc/img-ir/img-ir.h
index 2ddf560..f1387c0 100644
--- a/drivers/media/rc/img-ir/img-ir.h
+++ b/drivers/media/rc/img-ir/img-ir.h
@@ -138,6 +138,7 @@
* @dev: Platform device.
* @irq: IRQ number.
* @clk: Input clock.
+ * @sys_clk: System clock.
* @reg_base: Iomem base address of IR register block.
* @lock: Protects IR registers and variables in this struct.
* @raw: Driver data for raw decoder.
@@ -147,6 +148,7 @@
struct device *dev;
int irq;
struct clk *clk;
+ struct clk *sys_clk;
void __iomem *reg_base;
spinlock_t lock;
diff --git a/drivers/media/rc/ir-hix5hd2.c b/drivers/media/rc/ir-hix5hd2.c
index b0df629..58ec598 100644
--- a/drivers/media/rc/ir-hix5hd2.c
+++ b/drivers/media/rc/ir-hix5hd2.c
@@ -16,14 +16,6 @@
#include <linux/regmap.h>
#include <media/rc-core.h>
-/* Allow the driver to compile on all architectures */
-#ifndef writel_relaxed
-# define writel_relaxed writel
-#endif
-#ifndef readl_relaxed
-# define readl_relaxed readl
-#endif
-
#define IR_ENABLE 0x00
#define IR_CONFIG 0x04
#define CNT_LEADS 0x08
diff --git a/drivers/media/tuners/Kconfig b/drivers/media/tuners/Kconfig
index 42e5a01..983510d 100644
--- a/drivers/media/tuners/Kconfig
+++ b/drivers/media/tuners/Kconfig
@@ -224,14 +224,6 @@
help
FCI FC2580 silicon tuner driver.
-config MEDIA_TUNER_M88TS2022
- tristate "Montage M88TS2022 silicon tuner"
- depends on MEDIA_SUPPORT && I2C
- select REGMAP_I2C
- default m if !MEDIA_SUBDRV_AUTOSELECT
- help
- Montage M88TS2022 silicon tuner driver.
-
config MEDIA_TUNER_M88RS6000T
tristate "Montage M88RS6000 internal tuner"
depends on MEDIA_SUPPORT && I2C
diff --git a/drivers/media/tuners/Makefile b/drivers/media/tuners/Makefile
index da4fe6e..06a9ab6 100644
--- a/drivers/media/tuners/Makefile
+++ b/drivers/media/tuners/Makefile
@@ -33,7 +33,6 @@
obj-$(CONFIG_MEDIA_TUNER_FC2580) += fc2580.o
obj-$(CONFIG_MEDIA_TUNER_TUA9001) += tua9001.o
obj-$(CONFIG_MEDIA_TUNER_SI2157) += si2157.o
-obj-$(CONFIG_MEDIA_TUNER_M88TS2022) += m88ts2022.o
obj-$(CONFIG_MEDIA_TUNER_FC0011) += fc0011.o
obj-$(CONFIG_MEDIA_TUNER_FC0012) += fc0012.o
obj-$(CONFIG_MEDIA_TUNER_FC0013) += fc0013.o
diff --git a/drivers/media/tuners/fc0011.h b/drivers/media/tuners/fc0011.h
index 43ec893..81bb568 100644
--- a/drivers/media/tuners/fc0011.h
+++ b/drivers/media/tuners/fc0011.h
@@ -23,7 +23,7 @@
FC0011_FE_CALLBACK_RESET,
};
-#if IS_ENABLED(CONFIG_MEDIA_TUNER_FC0011)
+#if IS_REACHABLE(CONFIG_MEDIA_TUNER_FC0011)
struct dvb_frontend *fc0011_attach(struct dvb_frontend *fe,
struct i2c_adapter *i2c,
const struct fc0011_config *config);
diff --git a/drivers/media/tuners/fc0012.h b/drivers/media/tuners/fc0012.h
index 1d08057..9ad3285 100644
--- a/drivers/media/tuners/fc0012.h
+++ b/drivers/media/tuners/fc0012.h
@@ -49,7 +49,7 @@
bool clock_out;
};
-#if IS_ENABLED(CONFIG_MEDIA_TUNER_FC0012)
+#if IS_REACHABLE(CONFIG_MEDIA_TUNER_FC0012)
extern struct dvb_frontend *fc0012_attach(struct dvb_frontend *fe,
struct i2c_adapter *i2c,
const struct fc0012_config *cfg);
diff --git a/drivers/media/tuners/fc0013.h b/drivers/media/tuners/fc0013.h
index d65d5b3..e130bd7 100644
--- a/drivers/media/tuners/fc0013.h
+++ b/drivers/media/tuners/fc0013.h
@@ -26,7 +26,7 @@
#include "dvb_frontend.h"
#include "fc001x-common.h"
-#if IS_ENABLED(CONFIG_MEDIA_TUNER_FC0013)
+#if IS_REACHABLE(CONFIG_MEDIA_TUNER_FC0013)
extern struct dvb_frontend *fc0013_attach(struct dvb_frontend *fe,
struct i2c_adapter *i2c,
u8 i2c_address, int dual_master,
diff --git a/drivers/media/tuners/fc2580.h b/drivers/media/tuners/fc2580.h
index 9c43c1c..b1ce677 100644
--- a/drivers/media/tuners/fc2580.h
+++ b/drivers/media/tuners/fc2580.h
@@ -37,7 +37,7 @@
u32 clock;
};
-#if IS_ENABLED(CONFIG_MEDIA_TUNER_FC2580)
+#if IS_REACHABLE(CONFIG_MEDIA_TUNER_FC2580)
extern struct dvb_frontend *fc2580_attach(struct dvb_frontend *fe,
struct i2c_adapter *i2c, const struct fc2580_config *cfg);
#else
diff --git a/drivers/media/tuners/m88ts2022.c b/drivers/media/tuners/m88ts2022.c
deleted file mode 100644
index 066e543..0000000
--- a/drivers/media/tuners/m88ts2022.c
+++ /dev/null
@@ -1,579 +0,0 @@
-/*
- * Montage M88TS2022 silicon tuner driver
- *
- * Copyright (C) 2013 Antti Palosaari <crope@iki.fi>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * Some calculations are taken from existing TS2020 driver.
- */
-
-#include "m88ts2022_priv.h"
-
-static int m88ts2022_cmd(struct m88ts2022_dev *dev, int op, int sleep, u8 reg,
- u8 mask, u8 val, u8 *reg_val)
-{
- int ret, i;
- unsigned int utmp;
- struct m88ts2022_reg_val reg_vals[] = {
- {0x51, 0x1f - op},
- {0x51, 0x1f},
- {0x50, 0x00 + op},
- {0x50, 0x00},
- };
-
- for (i = 0; i < 2; i++) {
- dev_dbg(&dev->client->dev,
- "i=%d op=%02x reg=%02x mask=%02x val=%02x\n",
- i, op, reg, mask, val);
-
- for (i = 0; i < ARRAY_SIZE(reg_vals); i++) {
- ret = regmap_write(dev->regmap, reg_vals[i].reg,
- reg_vals[i].val);
- if (ret)
- goto err;
- }
-
- usleep_range(sleep * 1000, sleep * 10000);
-
- ret = regmap_read(dev->regmap, reg, &utmp);
- if (ret)
- goto err;
-
- if ((utmp & mask) != val)
- break;
- }
-
- if (reg_val)
- *reg_val = utmp;
-err:
- return ret;
-}
-
-static int m88ts2022_set_params(struct dvb_frontend *fe)
-{
- struct m88ts2022_dev *dev = fe->tuner_priv;
- struct dtv_frontend_properties *c = &fe->dtv_property_cache;
- int ret;
- unsigned int utmp, frequency_khz, frequency_offset_khz, f_3db_hz;
- unsigned int f_ref_khz, f_vco_khz, div_ref, div_out, pll_n, gdiv28;
- u8 buf[3], u8tmp, cap_code, lpf_gm, lpf_mxdiv, div_max, div_min;
- u16 u16tmp;
-
- dev_dbg(&dev->client->dev,
- "frequency=%d symbol_rate=%d rolloff=%d\n",
- c->frequency, c->symbol_rate, c->rolloff);
- /*
- * Integer-N PLL synthesizer
- * kHz is used for all calculations to keep calculations within 32-bit
- */
- f_ref_khz = DIV_ROUND_CLOSEST(dev->cfg.clock, 1000);
- div_ref = DIV_ROUND_CLOSEST(f_ref_khz, 2000);
-
- if (c->symbol_rate < 5000000)
- frequency_offset_khz = 3000; /* 3 MHz */
- else
- frequency_offset_khz = 0;
-
- frequency_khz = c->frequency + frequency_offset_khz;
-
- if (frequency_khz < 1103000) {
- div_out = 4;
- u8tmp = 0x1b;
- } else {
- div_out = 2;
- u8tmp = 0x0b;
- }
-
- buf[0] = u8tmp;
- buf[1] = 0x40;
- ret = regmap_bulk_write(dev->regmap, 0x10, buf, 2);
- if (ret)
- goto err;
-
- f_vco_khz = frequency_khz * div_out;
- pll_n = f_vco_khz * div_ref / f_ref_khz;
- pll_n += pll_n % 2;
- dev->frequency_khz = pll_n * f_ref_khz / div_ref / div_out;
-
- if (pll_n < 4095)
- u16tmp = pll_n - 1024;
- else if (pll_n < 6143)
- u16tmp = pll_n + 1024;
- else
- u16tmp = pll_n + 3072;
-
- buf[0] = (u16tmp >> 8) & 0x3f;
- buf[1] = (u16tmp >> 0) & 0xff;
- buf[2] = div_ref - 8;
- ret = regmap_bulk_write(dev->regmap, 0x01, buf, 3);
- if (ret)
- goto err;
-
- dev_dbg(&dev->client->dev,
- "frequency=%u offset=%d f_vco_khz=%u pll_n=%u div_ref=%u div_out=%u\n",
- dev->frequency_khz, dev->frequency_khz - c->frequency,
- f_vco_khz, pll_n, div_ref, div_out);
-
- ret = m88ts2022_cmd(dev, 0x10, 5, 0x15, 0x40, 0x00, NULL);
- if (ret)
- goto err;
-
- ret = regmap_read(dev->regmap, 0x14, &utmp);
- if (ret)
- goto err;
-
- utmp &= 0x7f;
- if (utmp < 64) {
- ret = regmap_update_bits(dev->regmap, 0x10, 0x80, 0x80);
- if (ret)
- goto err;
-
- ret = regmap_write(dev->regmap, 0x11, 0x6f);
- if (ret)
- goto err;
-
- ret = m88ts2022_cmd(dev, 0x10, 5, 0x15, 0x40, 0x00, NULL);
- if (ret)
- goto err;
- }
-
- ret = regmap_read(dev->regmap, 0x14, &utmp);
- if (ret)
- goto err;
-
- utmp &= 0x1f;
- if (utmp > 19) {
- ret = regmap_update_bits(dev->regmap, 0x10, 0x02, 0x00);
- if (ret)
- goto err;
- }
-
- ret = m88ts2022_cmd(dev, 0x08, 5, 0x3c, 0xff, 0x00, NULL);
- if (ret)
- goto err;
-
- ret = regmap_write(dev->regmap, 0x25, 0x00);
- if (ret)
- goto err;
-
- ret = regmap_write(dev->regmap, 0x27, 0x70);
- if (ret)
- goto err;
-
- ret = regmap_write(dev->regmap, 0x41, 0x09);
- if (ret)
- goto err;
-
- ret = regmap_write(dev->regmap, 0x08, 0x0b);
- if (ret)
- goto err;
-
- /* filters */
- gdiv28 = DIV_ROUND_CLOSEST(f_ref_khz * 1694U, 1000000U);
-
- ret = regmap_write(dev->regmap, 0x04, gdiv28);
- if (ret)
- goto err;
-
- ret = m88ts2022_cmd(dev, 0x04, 2, 0x26, 0xff, 0x00, &u8tmp);
- if (ret)
- goto err;
-
- cap_code = u8tmp & 0x3f;
-
- ret = regmap_write(dev->regmap, 0x41, 0x0d);
- if (ret)
- goto err;
-
- ret = m88ts2022_cmd(dev, 0x04, 2, 0x26, 0xff, 0x00, &u8tmp);
- if (ret)
- goto err;
-
- u8tmp &= 0x3f;
- cap_code = (cap_code + u8tmp) / 2;
- gdiv28 = gdiv28 * 207 / (cap_code * 2 + 151);
- div_max = gdiv28 * 135 / 100;
- div_min = gdiv28 * 78 / 100;
- div_max = clamp_val(div_max, 0U, 63U);
-
- f_3db_hz = mult_frac(c->symbol_rate, 135, 200);
- f_3db_hz += 2000000U + (frequency_offset_khz * 1000U);
- f_3db_hz = clamp(f_3db_hz, 7000000U, 40000000U);
-
-#define LPF_COEFF 3200U
- lpf_gm = DIV_ROUND_CLOSEST(f_3db_hz * gdiv28, LPF_COEFF * f_ref_khz);
- lpf_gm = clamp_val(lpf_gm, 1U, 23U);
-
- lpf_mxdiv = DIV_ROUND_CLOSEST(lpf_gm * LPF_COEFF * f_ref_khz, f_3db_hz);
- if (lpf_mxdiv < div_min)
- lpf_mxdiv = DIV_ROUND_CLOSEST(++lpf_gm * LPF_COEFF * f_ref_khz, f_3db_hz);
- lpf_mxdiv = clamp_val(lpf_mxdiv, 0U, div_max);
-
- ret = regmap_write(dev->regmap, 0x04, lpf_mxdiv);
- if (ret)
- goto err;
-
- ret = regmap_write(dev->regmap, 0x06, lpf_gm);
- if (ret)
- goto err;
-
- ret = m88ts2022_cmd(dev, 0x04, 2, 0x26, 0xff, 0x00, &u8tmp);
- if (ret)
- goto err;
-
- cap_code = u8tmp & 0x3f;
-
- ret = regmap_write(dev->regmap, 0x41, 0x09);
- if (ret)
- goto err;
-
- ret = m88ts2022_cmd(dev, 0x04, 2, 0x26, 0xff, 0x00, &u8tmp);
- if (ret)
- goto err;
-
- u8tmp &= 0x3f;
- cap_code = (cap_code + u8tmp) / 2;
-
- u8tmp = cap_code | 0x80;
- ret = regmap_write(dev->regmap, 0x25, u8tmp);
- if (ret)
- goto err;
-
- ret = regmap_write(dev->regmap, 0x27, 0x30);
- if (ret)
- goto err;
-
- ret = regmap_write(dev->regmap, 0x08, 0x09);
- if (ret)
- goto err;
-
- ret = m88ts2022_cmd(dev, 0x01, 20, 0x21, 0xff, 0x00, NULL);
- if (ret)
- goto err;
-err:
- if (ret)
- dev_dbg(&dev->client->dev, "failed=%d\n", ret);
-
- return ret;
-}
-
-static int m88ts2022_init(struct dvb_frontend *fe)
-{
- struct m88ts2022_dev *dev = fe->tuner_priv;
- int ret, i;
- u8 u8tmp;
- static const struct m88ts2022_reg_val reg_vals[] = {
- {0x7d, 0x9d},
- {0x7c, 0x9a},
- {0x7a, 0x76},
- {0x3b, 0x01},
- {0x63, 0x88},
- {0x61, 0x85},
- {0x22, 0x30},
- {0x30, 0x40},
- {0x20, 0x23},
- {0x24, 0x02},
- {0x12, 0xa0},
- };
-
- dev_dbg(&dev->client->dev, "\n");
-
- ret = regmap_write(dev->regmap, 0x00, 0x01);
- if (ret)
- goto err;
-
- ret = regmap_write(dev->regmap, 0x00, 0x03);
- if (ret)
- goto err;
-
- switch (dev->cfg.clock_out) {
- case M88TS2022_CLOCK_OUT_DISABLED:
- u8tmp = 0x60;
- break;
- case M88TS2022_CLOCK_OUT_ENABLED:
- u8tmp = 0x70;
- ret = regmap_write(dev->regmap, 0x05, dev->cfg.clock_out_div);
- if (ret)
- goto err;
- break;
- case M88TS2022_CLOCK_OUT_ENABLED_XTALOUT:
- u8tmp = 0x6c;
- break;
- default:
- goto err;
- }
-
- ret = regmap_write(dev->regmap, 0x42, u8tmp);
- if (ret)
- goto err;
-
- if (dev->cfg.loop_through)
- u8tmp = 0xec;
- else
- u8tmp = 0x6c;
-
- ret = regmap_write(dev->regmap, 0x62, u8tmp);
- if (ret)
- goto err;
-
- for (i = 0; i < ARRAY_SIZE(reg_vals); i++) {
- ret = regmap_write(dev->regmap, reg_vals[i].reg, reg_vals[i].val);
- if (ret)
- goto err;
- }
-err:
- if (ret)
- dev_dbg(&dev->client->dev, "failed=%d\n", ret);
- return ret;
-}
-
-static int m88ts2022_sleep(struct dvb_frontend *fe)
-{
- struct m88ts2022_dev *dev = fe->tuner_priv;
- int ret;
-
- dev_dbg(&dev->client->dev, "\n");
-
- ret = regmap_write(dev->regmap, 0x00, 0x00);
- if (ret)
- goto err;
-err:
- if (ret)
- dev_dbg(&dev->client->dev, "failed=%d\n", ret);
- return ret;
-}
-
-static int m88ts2022_get_frequency(struct dvb_frontend *fe, u32 *frequency)
-{
- struct m88ts2022_dev *dev = fe->tuner_priv;
-
- dev_dbg(&dev->client->dev, "\n");
-
- *frequency = dev->frequency_khz;
- return 0;
-}
-
-static int m88ts2022_get_if_frequency(struct dvb_frontend *fe, u32 *frequency)
-{
- struct m88ts2022_dev *dev = fe->tuner_priv;
-
- dev_dbg(&dev->client->dev, "\n");
-
- *frequency = 0; /* Zero-IF */
- return 0;
-}
-
-static int m88ts2022_get_rf_strength(struct dvb_frontend *fe, u16 *strength)
-{
- struct m88ts2022_dev *dev = fe->tuner_priv;
- int ret;
- u16 gain, u16tmp;
- unsigned int utmp, gain1, gain2, gain3;
-
- ret = regmap_read(dev->regmap, 0x3d, &utmp);
- if (ret)
- goto err;
-
- gain1 = (utmp >> 0) & 0x1f;
- gain1 = clamp(gain1, 0U, 15U);
-
- ret = regmap_read(dev->regmap, 0x21, &utmp);
- if (ret)
- goto err;
-
- gain2 = (utmp >> 0) & 0x1f;
- gain2 = clamp(gain2, 2U, 16U);
-
- ret = regmap_read(dev->regmap, 0x66, &utmp);
- if (ret)
- goto err;
-
- gain3 = (utmp >> 3) & 0x07;
- gain3 = clamp(gain3, 0U, 6U);
-
- gain = gain1 * 265 + gain2 * 338 + gain3 * 285;
-
- /* scale value to 0x0000-0xffff */
- u16tmp = (0xffff - gain);
- u16tmp = clamp_val(u16tmp, 59000U, 61500U);
-
- *strength = (u16tmp - 59000) * 0xffff / (61500 - 59000);
-err:
- if (ret)
- dev_dbg(&dev->client->dev, "failed=%d\n", ret);
- return ret;
-}
-
-static const struct dvb_tuner_ops m88ts2022_tuner_ops = {
- .info = {
- .name = "Montage M88TS2022",
- .frequency_min = 950000,
- .frequency_max = 2150000,
- },
-
- .init = m88ts2022_init,
- .sleep = m88ts2022_sleep,
- .set_params = m88ts2022_set_params,
-
- .get_frequency = m88ts2022_get_frequency,
- .get_if_frequency = m88ts2022_get_if_frequency,
- .get_rf_strength = m88ts2022_get_rf_strength,
-};
-
-static int m88ts2022_probe(struct i2c_client *client,
- const struct i2c_device_id *id)
-{
- struct m88ts2022_config *cfg = client->dev.platform_data;
- struct dvb_frontend *fe = cfg->fe;
- struct m88ts2022_dev *dev;
- int ret;
- u8 u8tmp;
- unsigned int utmp;
- static const struct regmap_config regmap_config = {
- .reg_bits = 8,
- .val_bits = 8,
- };
-
- dev = kzalloc(sizeof(*dev), GFP_KERNEL);
- if (!dev) {
- ret = -ENOMEM;
- dev_err(&client->dev, "kzalloc() failed\n");
- goto err;
- }
-
- memcpy(&dev->cfg, cfg, sizeof(struct m88ts2022_config));
- dev->client = client;
- dev->regmap = devm_regmap_init_i2c(client, &regmap_config);
- if (IS_ERR(dev->regmap)) {
- ret = PTR_ERR(dev->regmap);
- goto err;
- }
-
- /* check if the tuner is there */
- ret = regmap_read(dev->regmap, 0x00, &utmp);
- if (ret)
- goto err;
-
- if ((utmp & 0x03) == 0x00) {
- ret = regmap_write(dev->regmap, 0x00, 0x01);
- if (ret)
- goto err;
-
- usleep_range(2000, 50000);
- }
-
- ret = regmap_write(dev->regmap, 0x00, 0x03);
- if (ret)
- goto err;
-
- usleep_range(2000, 50000);
-
- ret = regmap_read(dev->regmap, 0x00, &utmp);
- if (ret)
- goto err;
-
- dev_dbg(&dev->client->dev, "chip_id=%02x\n", utmp);
-
- switch (utmp) {
- case 0xc3:
- case 0x83:
- break;
- default:
- ret = -ENODEV;
- goto err;
- }
-
- switch (dev->cfg.clock_out) {
- case M88TS2022_CLOCK_OUT_DISABLED:
- u8tmp = 0x60;
- break;
- case M88TS2022_CLOCK_OUT_ENABLED:
- u8tmp = 0x70;
- ret = regmap_write(dev->regmap, 0x05, dev->cfg.clock_out_div);
- if (ret)
- goto err;
- break;
- case M88TS2022_CLOCK_OUT_ENABLED_XTALOUT:
- u8tmp = 0x6c;
- break;
- default:
- ret = -EINVAL;
- goto err;
- }
-
- ret = regmap_write(dev->regmap, 0x42, u8tmp);
- if (ret)
- goto err;
-
- if (dev->cfg.loop_through)
- u8tmp = 0xec;
- else
- u8tmp = 0x6c;
-
- ret = regmap_write(dev->regmap, 0x62, u8tmp);
- if (ret)
- goto err;
-
- /* sleep */
- ret = regmap_write(dev->regmap, 0x00, 0x00);
- if (ret)
- goto err;
-
- dev_info(&dev->client->dev, "Montage M88TS2022 successfully identified\n");
-
- fe->tuner_priv = dev;
- memcpy(&fe->ops.tuner_ops, &m88ts2022_tuner_ops,
- sizeof(struct dvb_tuner_ops));
-
- i2c_set_clientdata(client, dev);
- return 0;
-err:
- dev_dbg(&client->dev, "failed=%d\n", ret);
- kfree(dev);
- return ret;
-}
-
-static int m88ts2022_remove(struct i2c_client *client)
-{
- struct m88ts2022_dev *dev = i2c_get_clientdata(client);
- struct dvb_frontend *fe = dev->cfg.fe;
-
- dev_dbg(&client->dev, "\n");
-
- memset(&fe->ops.tuner_ops, 0, sizeof(struct dvb_tuner_ops));
- fe->tuner_priv = NULL;
- kfree(dev);
-
- return 0;
-}
-
-static const struct i2c_device_id m88ts2022_id[] = {
- {"m88ts2022", 0},
- {}
-};
-MODULE_DEVICE_TABLE(i2c, m88ts2022_id);
-
-static struct i2c_driver m88ts2022_driver = {
- .driver = {
- .owner = THIS_MODULE,
- .name = "m88ts2022",
- },
- .probe = m88ts2022_probe,
- .remove = m88ts2022_remove,
- .id_table = m88ts2022_id,
-};
-
-module_i2c_driver(m88ts2022_driver);
-
-MODULE_DESCRIPTION("Montage M88TS2022 silicon tuner driver");
-MODULE_AUTHOR("Antti Palosaari <crope@iki.fi>");
-MODULE_LICENSE("GPL");
diff --git a/drivers/media/tuners/m88ts2022.h b/drivers/media/tuners/m88ts2022.h
deleted file mode 100644
index 659fa1b..0000000
--- a/drivers/media/tuners/m88ts2022.h
+++ /dev/null
@@ -1,54 +0,0 @@
-/*
- * Montage M88TS2022 silicon tuner driver
- *
- * Copyright (C) 2013 Antti Palosaari <crope@iki.fi>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- */
-
-#ifndef M88TS2022_H
-#define M88TS2022_H
-
-#include "dvb_frontend.h"
-
-struct m88ts2022_config {
- /*
- * clock
- * 16000000 - 32000000
- */
- u32 clock;
-
- /*
- * RF loop-through
- */
- u8 loop_through:1;
-
- /*
- * clock output
- */
-#define M88TS2022_CLOCK_OUT_DISABLED 0
-#define M88TS2022_CLOCK_OUT_ENABLED 1
-#define M88TS2022_CLOCK_OUT_ENABLED_XTALOUT 2
- u8 clock_out:2;
-
- /*
- * clock output divider
- * 1 - 31
- */
- u8 clock_out_div:5;
-
- /*
- * pointer to DVB frontend
- */
- struct dvb_frontend *fe;
-};
-
-#endif
diff --git a/drivers/media/tuners/m88ts2022_priv.h b/drivers/media/tuners/m88ts2022_priv.h
deleted file mode 100644
index feeb5ad..0000000
--- a/drivers/media/tuners/m88ts2022_priv.h
+++ /dev/null
@@ -1,35 +0,0 @@
-/*
- * Montage M88TS2022 silicon tuner driver
- *
- * Copyright (C) 2013 Antti Palosaari <crope@iki.fi>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- */
-
-#ifndef M88TS2022_PRIV_H
-#define M88TS2022_PRIV_H
-
-#include "m88ts2022.h"
-#include <linux/regmap.h>
-
-struct m88ts2022_dev {
- struct m88ts2022_config cfg;
- struct i2c_client *client;
- struct regmap *regmap;
- u32 frequency_khz;
-};
-
-struct m88ts2022_reg_val {
- u8 reg;
- u8 val;
-};
-
-#endif
diff --git a/drivers/media/tuners/max2165.h b/drivers/media/tuners/max2165.h
index 26e1dc6..5054f01 100644
--- a/drivers/media/tuners/max2165.h
+++ b/drivers/media/tuners/max2165.h
@@ -32,7 +32,7 @@
u8 osc_clk; /* in MHz, selectable values: 4,16,18,20,22,24,26,28 */
};
-#if IS_ENABLED(CONFIG_MEDIA_TUNER_MAX2165)
+#if IS_REACHABLE(CONFIG_MEDIA_TUNER_MAX2165)
extern struct dvb_frontend *max2165_attach(struct dvb_frontend *fe,
struct i2c_adapter *i2c,
struct max2165_config *cfg);
diff --git a/drivers/media/tuners/mc44s803.h b/drivers/media/tuners/mc44s803.h
index 9aae50a..b3e614b 100644
--- a/drivers/media/tuners/mc44s803.h
+++ b/drivers/media/tuners/mc44s803.h
@@ -32,7 +32,7 @@
u8 dig_out;
};
-#if IS_ENABLED(CONFIG_MEDIA_TUNER_MC44S803)
+#if IS_REACHABLE(CONFIG_MEDIA_TUNER_MC44S803)
extern struct dvb_frontend *mc44s803_attach(struct dvb_frontend *fe,
struct i2c_adapter *i2c, struct mc44s803_config *cfg);
#else
diff --git a/drivers/media/tuners/mt2060.h b/drivers/media/tuners/mt2060.h
index c64fc19..6efed35 100644
--- a/drivers/media/tuners/mt2060.h
+++ b/drivers/media/tuners/mt2060.h
@@ -30,7 +30,7 @@
u8 clock_out; /* 0 = off, 1 = CLK/4, 2 = CLK/2, 3 = CLK/1 */
};
-#if IS_ENABLED(CONFIG_MEDIA_TUNER_MT2060)
+#if IS_REACHABLE(CONFIG_MEDIA_TUNER_MT2060)
extern struct dvb_frontend * mt2060_attach(struct dvb_frontend *fe, struct i2c_adapter *i2c, struct mt2060_config *cfg, u16 if1);
#else
static inline struct dvb_frontend * mt2060_attach(struct dvb_frontend *fe, struct i2c_adapter *i2c, struct mt2060_config *cfg, u16 if1)
diff --git a/drivers/media/tuners/mt2063.h b/drivers/media/tuners/mt2063.h
index e1acfc8..e55e0a6 100644
--- a/drivers/media/tuners/mt2063.h
+++ b/drivers/media/tuners/mt2063.h
@@ -8,7 +8,7 @@
u32 refclock;
};
-#if IS_ENABLED(CONFIG_MEDIA_TUNER_MT2063)
+#if IS_REACHABLE(CONFIG_MEDIA_TUNER_MT2063)
struct dvb_frontend *mt2063_attach(struct dvb_frontend *fe,
struct mt2063_config *config,
struct i2c_adapter *i2c);
diff --git a/drivers/media/tuners/mt20xx.h b/drivers/media/tuners/mt20xx.h
index f56241c..9912362 100644
--- a/drivers/media/tuners/mt20xx.h
+++ b/drivers/media/tuners/mt20xx.h
@@ -20,7 +20,7 @@
#include <linux/i2c.h>
#include "dvb_frontend.h"
-#if IS_ENABLED(CONFIG_MEDIA_TUNER_MT20XX)
+#if IS_REACHABLE(CONFIG_MEDIA_TUNER_MT20XX)
extern struct dvb_frontend *microtune_attach(struct dvb_frontend *fe,
struct i2c_adapter* i2c_adap,
u8 i2c_addr);
diff --git a/drivers/media/tuners/mt2131.h b/drivers/media/tuners/mt2131.h
index 837c854..8267a6a 100644
--- a/drivers/media/tuners/mt2131.h
+++ b/drivers/media/tuners/mt2131.h
@@ -30,7 +30,7 @@
u8 clock_out; /* 0 = off, 1 = CLK/4, 2 = CLK/2, 3 = CLK/1 */
};
-#if IS_ENABLED(CONFIG_MEDIA_TUNER_MT2131)
+#if IS_REACHABLE(CONFIG_MEDIA_TUNER_MT2131)
extern struct dvb_frontend* mt2131_attach(struct dvb_frontend *fe,
struct i2c_adapter *i2c,
struct mt2131_config *cfg,
diff --git a/drivers/media/tuners/mt2266.h b/drivers/media/tuners/mt2266.h
index fad6dd6..69abefa 100644
--- a/drivers/media/tuners/mt2266.h
+++ b/drivers/media/tuners/mt2266.h
@@ -24,7 +24,7 @@
u8 i2c_address;
};
-#if IS_ENABLED(CONFIG_MEDIA_TUNER_MT2266)
+#if IS_REACHABLE(CONFIG_MEDIA_TUNER_MT2266)
extern struct dvb_frontend * mt2266_attach(struct dvb_frontend *fe, struct i2c_adapter *i2c, struct mt2266_config *cfg);
#else
static inline struct dvb_frontend * mt2266_attach(struct dvb_frontend *fe, struct i2c_adapter *i2c, struct mt2266_config *cfg)
diff --git a/drivers/media/tuners/mxl5005s.h b/drivers/media/tuners/mxl5005s.h
index ae8db88..5764b12 100644
--- a/drivers/media/tuners/mxl5005s.h
+++ b/drivers/media/tuners/mxl5005s.h
@@ -118,7 +118,7 @@
u8 AgcMasterByte;
};
-#if IS_ENABLED(CONFIG_MEDIA_TUNER_MXL5005S)
+#if IS_REACHABLE(CONFIG_MEDIA_TUNER_MXL5005S)
extern struct dvb_frontend *mxl5005s_attach(struct dvb_frontend *fe,
struct i2c_adapter *i2c,
struct mxl5005s_config *config);
diff --git a/drivers/media/tuners/mxl5007t.h b/drivers/media/tuners/mxl5007t.h
index ae7037d..e786d1f 100644
--- a/drivers/media/tuners/mxl5007t.h
+++ b/drivers/media/tuners/mxl5007t.h
@@ -77,7 +77,7 @@
unsigned int clk_out_enable:1;
};
-#if IS_ENABLED(CONFIG_MEDIA_TUNER_MXL5007T)
+#if IS_REACHABLE(CONFIG_MEDIA_TUNER_MXL5007T)
extern struct dvb_frontend *mxl5007t_attach(struct dvb_frontend *fe,
struct i2c_adapter *i2c, u8 addr,
struct mxl5007t_config *cfg);
diff --git a/drivers/media/tuners/qt1010.h b/drivers/media/tuners/qt1010.h
index 8ab5d47..e3198f2 100644
--- a/drivers/media/tuners/qt1010.h
+++ b/drivers/media/tuners/qt1010.h
@@ -36,7 +36,7 @@
* @param cfg tuner hw based configuration
* @return fe pointer on success, NULL on failure
*/
-#if IS_ENABLED(CONFIG_MEDIA_TUNER_QT1010)
+#if IS_REACHABLE(CONFIG_MEDIA_TUNER_QT1010)
extern struct dvb_frontend *qt1010_attach(struct dvb_frontend *fe,
struct i2c_adapter *i2c,
struct qt1010_config *cfg);
diff --git a/drivers/media/tuners/r820t.c b/drivers/media/tuners/r820t.c
index 8e040cf..71159a5 100644
--- a/drivers/media/tuners/r820t.c
+++ b/drivers/media/tuners/r820t.c
@@ -775,6 +775,19 @@
div_buf_cur = 0x30; /* 11, 150u */
filter_cur = 0x40; /* 10, low */
break;
+ case SYS_DVBC_ANNEX_A:
+ mixer_top = 0x24; /* mixer top:13 , top-1, low-discharge */
+ lna_top = 0xe5;
+ lna_vth_l = 0x62;
+ mixer_vth_l = 0x75;
+ air_cable1_in = 0x60;
+ cable2_in = 0x00;
+ pre_dect = 0x40;
+ lna_discharge = 14;
+ cp_cur = 0x38; /* 111, auto */
+ div_buf_cur = 0x30; /* 11, 150u */
+ filter_cur = 0x40; /* 10, low */
+ break;
default: /* DVB-T 8M */
mixer_top = 0x24; /* mixer top:13 , top-1, low-discharge */
lna_top = 0xe5; /* detect bw 3, lna top:4, predet top:2 */
@@ -957,7 +970,7 @@
ext_enable = 0x40; /* r30[6], ext enable; r30[5]:0 ext at lna max */
loop_through = 0x00; /* r5[7], lt on */
lt_att = 0x00; /* r31[7], lt att enable */
- flt_ext_widest = 0x00; /* r15[7]: flt_ext_wide off */
+ flt_ext_widest = 0x80; /* r15[7]: flt_ext_wide on */
polyfil_cur = 0x60; /* r25[6:5]:min */
} else if (delsys == SYS_DVBC_ANNEX_A) {
if_khz = 5070;
@@ -971,6 +984,18 @@
lt_att = 0x00; /* r31[7], lt att enable */
flt_ext_widest = 0x00; /* r15[7]: flt_ext_wide off */
polyfil_cur = 0x60; /* r25[6:5]:min */
+ } else if (delsys == SYS_DVBC_ANNEX_C) {
+ if_khz = 4063;
+ filt_cal_lo = 55000;
+ filt_gain = 0x10; /* +3db, 6mhz on */
+ img_r = 0x00; /* image negative */
+ filt_q = 0x10; /* r10[4]:low q(1'b1) */
+ hp_cor = 0x6a; /* 1.7m disable, +0cap, 1.0mhz */
+ ext_enable = 0x40; /* r30[6]=1 ext enable; r30[5]:1 ext at lna max-1 */
+ loop_through = 0x00; /* r5[7], lt on */
+ lt_att = 0x00; /* r31[7], lt att enable */
+ flt_ext_widest = 0x80; /* r15[7]: flt_ext_wide on */
+ polyfil_cur = 0x60; /* r25[6:5]:min */
} else {
if (bw <= 6) {
if_khz = 3570;
@@ -1186,7 +1211,7 @@
if (rc < 0)
return rc;
- return ((data[3] & 0x0f) << 1) + ((data[3] & 0xf0) >> 4);
+ return ((data[3] & 0x08) << 1) + ((data[3] & 0xf0) >> 4);
}
#if 0
diff --git a/drivers/media/tuners/r820t.h b/drivers/media/tuners/r820t.h
index 48af354..b1e5661 100644
--- a/drivers/media/tuners/r820t.h
+++ b/drivers/media/tuners/r820t.h
@@ -42,7 +42,7 @@
bool use_predetect;
};
-#if IS_ENABLED(CONFIG_MEDIA_TUNER_R820T)
+#if IS_REACHABLE(CONFIG_MEDIA_TUNER_R820T)
struct dvb_frontend *r820t_attach(struct dvb_frontend *fe,
struct i2c_adapter *i2c,
const struct r820t_config *cfg);
diff --git a/drivers/media/tuners/si2157.c b/drivers/media/tuners/si2157.c
index fcf139d..d74ae26 100644
--- a/drivers/media/tuners/si2157.c
+++ b/drivers/media/tuners/si2157.c
@@ -244,6 +244,7 @@
int ret;
struct si2157_cmd cmd;
u8 bandwidth, delivery_system;
+ u32 if_frequency = 5000000;
dev_dbg(&client->dev,
"delivery_system=%d frequency=%u bandwidth_hz=%u\n",
@@ -266,9 +267,11 @@
switch (c->delivery_system) {
case SYS_ATSC:
delivery_system = 0x00;
+ if_frequency = 3250000;
break;
case SYS_DVBC_ANNEX_B:
delivery_system = 0x10;
+ if_frequency = 4000000;
break;
case SYS_DVBT:
case SYS_DVBT2: /* it seems DVB-T and DVB-T2 both are 0x20 here */
@@ -302,6 +305,20 @@
if (ret)
goto err;
+ /* set if frequency if needed */
+ if (if_frequency != dev->if_frequency) {
+ memcpy(cmd.args, "\x14\x00\x06\x07", 4);
+ cmd.args[4] = (if_frequency / 1000) & 0xff;
+ cmd.args[5] = ((if_frequency / 1000) >> 8) & 0xff;
+ cmd.wlen = 6;
+ cmd.rlen = 4;
+ ret = si2157_cmd_execute(client, &cmd);
+ if (ret)
+ goto err;
+
+ dev->if_frequency = if_frequency;
+ }
+
/* set frequency */
memcpy(cmd.args, "\x41\x00\x00\x00\x00\x00\x00\x00", 8);
cmd.args[4] = (c->frequency >> 0) & 0xff;
@@ -322,14 +339,17 @@
static int si2157_get_if_frequency(struct dvb_frontend *fe, u32 *frequency)
{
- *frequency = 5000000; /* default value of property 0x0706 */
+ struct i2c_client *client = fe->tuner_priv;
+ struct si2157_dev *dev = i2c_get_clientdata(client);
+
+ *frequency = dev->if_frequency;
return 0;
}
static const struct dvb_tuner_ops si2157_ops = {
.info = {
.name = "Silicon Labs Si2146/2147/2148/2157/2158",
- .frequency_min = 110000000,
+ .frequency_min = 55000000,
.frequency_max = 862000000,
},
@@ -360,6 +380,7 @@
dev->inversion = cfg->inversion;
dev->fw_loaded = false;
dev->chiptype = (u8)id->driver_data;
+ dev->if_frequency = 5000000; /* default value of property 0x0706 */
mutex_init(&dev->i2c_mutex);
/* check if the tuner is there */
diff --git a/drivers/media/tuners/si2157_priv.h b/drivers/media/tuners/si2157_priv.h
index 7aa53bc..cd8fa5b 100644
--- a/drivers/media/tuners/si2157_priv.h
+++ b/drivers/media/tuners/si2157_priv.h
@@ -28,6 +28,7 @@
bool fw_loaded;
bool inversion;
u8 chiptype;
+ u32 if_frequency;
};
#define SI2157_CHIPTYPE_SI2157 0
diff --git a/drivers/media/tuners/tda18218.h b/drivers/media/tuners/tda18218.h
index 366410e..1eacb4f 100644
--- a/drivers/media/tuners/tda18218.h
+++ b/drivers/media/tuners/tda18218.h
@@ -30,7 +30,7 @@
u8 loop_through:1;
};
-#if IS_ENABLED(CONFIG_MEDIA_TUNER_TDA18218)
+#if IS_REACHABLE(CONFIG_MEDIA_TUNER_TDA18218)
extern struct dvb_frontend *tda18218_attach(struct dvb_frontend *fe,
struct i2c_adapter *i2c, struct tda18218_config *cfg);
#else
diff --git a/drivers/media/tuners/tda18271.h b/drivers/media/tuners/tda18271.h
index 4c418d6..0a84633 100644
--- a/drivers/media/tuners/tda18271.h
+++ b/drivers/media/tuners/tda18271.h
@@ -121,7 +121,7 @@
TDA18271_DIGITAL,
};
-#if IS_ENABLED(CONFIG_MEDIA_TUNER_TDA18271)
+#if IS_REACHABLE(CONFIG_MEDIA_TUNER_TDA18271)
extern struct dvb_frontend *tda18271_attach(struct dvb_frontend *fe, u8 addr,
struct i2c_adapter *i2c,
struct tda18271_config *cfg);
diff --git a/drivers/media/tuners/tda827x.h b/drivers/media/tuners/tda827x.h
index b642921..abf2e2f 100644
--- a/drivers/media/tuners/tda827x.h
+++ b/drivers/media/tuners/tda827x.h
@@ -51,7 +51,7 @@
* @param cfg optional callback function pointers.
* @return FE pointer on success, NULL on failure.
*/
-#if IS_ENABLED(CONFIG_MEDIA_TUNER_TDA827X)
+#if IS_REACHABLE(CONFIG_MEDIA_TUNER_TDA827X)
extern struct dvb_frontend* tda827x_attach(struct dvb_frontend *fe, int addr,
struct i2c_adapter *i2c,
struct tda827x_config *cfg);
diff --git a/drivers/media/tuners/tda8290.h b/drivers/media/tuners/tda8290.h
index cf96e58..901b8ca 100644
--- a/drivers/media/tuners/tda8290.h
+++ b/drivers/media/tuners/tda8290.h
@@ -38,7 +38,7 @@
struct tda18271_std_map *tda18271_std_map;
};
-#if IS_ENABLED(CONFIG_MEDIA_TUNER_TDA8290)
+#if IS_REACHABLE(CONFIG_MEDIA_TUNER_TDA8290)
extern int tda829x_probe(struct i2c_adapter *i2c_adap, u8 i2c_addr);
extern struct dvb_frontend *tda829x_attach(struct dvb_frontend *fe,
diff --git a/drivers/media/tuners/tda9887.h b/drivers/media/tuners/tda9887.h
index 37a4a11..95070ec 100644
--- a/drivers/media/tuners/tda9887.h
+++ b/drivers/media/tuners/tda9887.h
@@ -21,7 +21,7 @@
#include "dvb_frontend.h"
/* ------------------------------------------------------------------------ */
-#if IS_ENABLED(CONFIG_MEDIA_TUNER_TDA9887)
+#if IS_REACHABLE(CONFIG_MEDIA_TUNER_TDA9887)
extern struct dvb_frontend *tda9887_attach(struct dvb_frontend *fe,
struct i2c_adapter *i2c_adap,
u8 i2c_addr);
diff --git a/drivers/media/tuners/tea5761.h b/drivers/media/tuners/tea5761.h
index 933228f..2d624d9 100644
--- a/drivers/media/tuners/tea5761.h
+++ b/drivers/media/tuners/tea5761.h
@@ -20,7 +20,7 @@
#include <linux/i2c.h>
#include "dvb_frontend.h"
-#if IS_ENABLED(CONFIG_MEDIA_TUNER_TEA5761)
+#if IS_REACHABLE(CONFIG_MEDIA_TUNER_TEA5761)
extern int tea5761_autodetection(struct i2c_adapter* i2c_adap, u8 i2c_addr);
extern struct dvb_frontend *tea5761_attach(struct dvb_frontend *fe,
diff --git a/drivers/media/tuners/tea5767.h b/drivers/media/tuners/tea5767.h
index c391011..4f6f6c9 100644
--- a/drivers/media/tuners/tea5767.h
+++ b/drivers/media/tuners/tea5767.h
@@ -39,7 +39,7 @@
enum tea5767_xtal xtal_freq;
};
-#if IS_ENABLED(CONFIG_MEDIA_TUNER_TEA5767)
+#if IS_REACHABLE(CONFIG_MEDIA_TUNER_TEA5767)
extern int tea5767_autodetection(struct i2c_adapter* i2c_adap, u8 i2c_addr);
extern struct dvb_frontend *tea5767_attach(struct dvb_frontend *fe,
diff --git a/drivers/media/tuners/tua9001.h b/drivers/media/tuners/tua9001.h
index 26358da..2c3375c 100644
--- a/drivers/media/tuners/tua9001.h
+++ b/drivers/media/tuners/tua9001.h
@@ -51,7 +51,7 @@
#define TUA9001_CMD_RESETN 1
#define TUA9001_CMD_RXEN 2
-#if IS_ENABLED(CONFIG_MEDIA_TUNER_TUA9001)
+#if IS_REACHABLE(CONFIG_MEDIA_TUNER_TUA9001)
extern struct dvb_frontend *tua9001_attach(struct dvb_frontend *fe,
struct i2c_adapter *i2c, struct tua9001_config *cfg);
#else
diff --git a/drivers/media/tuners/tuner-simple.h b/drivers/media/tuners/tuner-simple.h
index ffd12cf..6399b45 100644
--- a/drivers/media/tuners/tuner-simple.h
+++ b/drivers/media/tuners/tuner-simple.h
@@ -20,7 +20,7 @@
#include <linux/i2c.h>
#include "dvb_frontend.h"
-#if IS_ENABLED(CONFIG_MEDIA_TUNER_SIMPLE)
+#if IS_REACHABLE(CONFIG_MEDIA_TUNER_SIMPLE)
extern struct dvb_frontend *simple_tuner_attach(struct dvb_frontend *fe,
struct i2c_adapter *i2c_adap,
u8 i2c_addr,
diff --git a/drivers/media/tuners/tuner-xc2028.h b/drivers/media/tuners/tuner-xc2028.h
index 181d087..98e4eff 100644
--- a/drivers/media/tuners/tuner-xc2028.h
+++ b/drivers/media/tuners/tuner-xc2028.h
@@ -56,7 +56,7 @@
#define XC2028_RESET_CLK 1
#define XC2028_I2C_FLUSH 2
-#if IS_ENABLED(CONFIG_MEDIA_TUNER_XC2028)
+#if IS_REACHABLE(CONFIG_MEDIA_TUNER_XC2028)
extern struct dvb_frontend *xc2028_attach(struct dvb_frontend *fe,
struct xc2028_config *cfg);
#else
diff --git a/drivers/media/tuners/xc4000.h b/drivers/media/tuners/xc4000.h
index 97c23de..4051786 100644
--- a/drivers/media/tuners/xc4000.h
+++ b/drivers/media/tuners/xc4000.h
@@ -50,7 +50,7 @@
* it's passed back to a bridge during tuner_callback().
*/
-#if IS_ENABLED(CONFIG_MEDIA_TUNER_XC4000)
+#if IS_REACHABLE(CONFIG_MEDIA_TUNER_XC4000)
extern struct dvb_frontend *xc4000_attach(struct dvb_frontend *fe,
struct i2c_adapter *i2c,
struct xc4000_config *cfg);
diff --git a/drivers/media/tuners/xc5000.c b/drivers/media/tuners/xc5000.c
index 2a039de..e6e5e90 100644
--- a/drivers/media/tuners/xc5000.c
+++ b/drivers/media/tuners/xc5000.c
@@ -1336,7 +1336,10 @@
if (priv) {
cancel_delayed_work(&priv->timer_sleep);
- release_firmware(priv->firmware);
+ if (priv->firmware) {
+ release_firmware(priv->firmware);
+ priv->firmware = NULL;
+ }
hybrid_tuner_release_state(priv);
}
diff --git a/drivers/media/tuners/xc5000.h b/drivers/media/tuners/xc5000.h
index 6aa534f..00ba29e 100644
--- a/drivers/media/tuners/xc5000.h
+++ b/drivers/media/tuners/xc5000.h
@@ -58,7 +58,7 @@
* it's passed back to a bridge during tuner_callback().
*/
-#if IS_ENABLED(CONFIG_MEDIA_TUNER_XC5000)
+#if IS_REACHABLE(CONFIG_MEDIA_TUNER_XC5000)
extern struct dvb_frontend *xc5000_attach(struct dvb_frontend *fe,
struct i2c_adapter *i2c,
const struct xc5000_config *cfg);
diff --git a/drivers/media/usb/au0828/au0828-video.c b/drivers/media/usb/au0828/au0828-video.c
index a27cb5f..1a362a0 100644
--- a/drivers/media/usb/au0828/au0828-video.c
+++ b/drivers/media/usb/au0828/au0828-video.c
@@ -299,29 +299,23 @@
* Announces that a buffer were filled and request the next
*/
static inline void buffer_filled(struct au0828_dev *dev,
- struct au0828_dmaqueue *dma_q,
- struct au0828_buffer *buf)
+ struct au0828_dmaqueue *dma_q,
+ struct au0828_buffer *buf)
{
+ struct vb2_buffer *vb = &buf->vb;
+ struct vb2_queue *q = vb->vb2_queue;
+
/* Advice that buffer was filled */
au0828_isocdbg("[%p/%d] wakeup\n", buf, buf->top_field);
- buf->vb.v4l2_buf.sequence = dev->frame_count++;
- buf->vb.v4l2_buf.field = V4L2_FIELD_INTERLACED;
- v4l2_get_timestamp(&buf->vb.v4l2_buf.timestamp);
- vb2_buffer_done(&buf->vb, VB2_BUF_STATE_DONE);
-}
+ if (q->type == V4L2_BUF_TYPE_VIDEO_CAPTURE)
+ vb->v4l2_buf.sequence = dev->frame_count++;
+ else
+ vb->v4l2_buf.sequence = dev->vbi_frame_count++;
-static inline void vbi_buffer_filled(struct au0828_dev *dev,
- struct au0828_dmaqueue *dma_q,
- struct au0828_buffer *buf)
-{
- /* Advice that buffer was filled */
- au0828_isocdbg("[%p/%d] wakeup\n", buf, buf->top_field);
-
- buf->vb.v4l2_buf.sequence = dev->vbi_frame_count++;
- buf->vb.v4l2_buf.field = V4L2_FIELD_INTERLACED;
- v4l2_get_timestamp(&buf->vb.v4l2_buf.timestamp);
- vb2_buffer_done(&buf->vb, VB2_BUF_STATE_DONE);
+ vb->v4l2_buf.field = V4L2_FIELD_INTERLACED;
+ v4l2_get_timestamp(&vb->v4l2_buf.timestamp);
+ vb2_buffer_done(vb, VB2_BUF_STATE_DONE);
}
/*
@@ -574,9 +568,7 @@
if (fbyte & 0x40) {
/* VBI */
if (vbi_buf != NULL)
- vbi_buffer_filled(dev,
- vbi_dma_q,
- vbi_buf);
+ buffer_filled(dev, vbi_dma_q, vbi_buf);
vbi_get_next_buf(vbi_dma_q, &vbi_buf);
if (vbi_buf == NULL)
vbioutp = NULL;
@@ -899,12 +891,8 @@
{
dprintk(1, "au0828_analog_unregister called\n");
mutex_lock(&au0828_sysfs_lock);
-
- if (dev->vdev)
- video_unregister_device(dev->vdev);
- if (dev->vbi_dev)
- video_unregister_device(dev->vbi_dev);
-
+ video_unregister_device(&dev->vdev);
+ video_unregister_device(&dev->vbi_dev);
mutex_unlock(&au0828_sysfs_lock);
}
@@ -949,7 +937,7 @@
if (buf != NULL) {
vbi_data = vb2_plane_vaddr(&buf->vb, 0);
memset(vbi_data, 0x00, buf->length);
- vbi_buffer_filled(dev, dma_q, buf);
+ buffer_filled(dev, dma_q, buf);
}
vbi_get_next_buf(dma_q, &buf);
@@ -1286,7 +1274,7 @@
input->audioset = 2;
}
- input->std = dev->vdev->tvnorms;
+ input->std = dev->vdev.tvnorms;
return 0;
}
@@ -1704,7 +1692,7 @@
static const struct video_device au0828_video_template = {
.fops = &au0828_v4l_fops,
- .release = video_device_release,
+ .release = video_device_release_empty,
.ioctl_ops = &video_ioctl_ops,
.tvnorms = V4L2_STD_NTSC_M | V4L2_STD_PAL_M,
};
@@ -1814,52 +1802,36 @@
dev->std = V4L2_STD_NTSC_M;
au0828_s_input(dev, 0);
- /* allocate and fill v4l2 video struct */
- dev->vdev = video_device_alloc();
- if (NULL == dev->vdev) {
- dprintk(1, "Can't allocate video_device.\n");
- return -ENOMEM;
- }
-
- /* allocate the VBI struct */
- dev->vbi_dev = video_device_alloc();
- if (NULL == dev->vbi_dev) {
- dprintk(1, "Can't allocate vbi_device.\n");
- ret = -ENOMEM;
- goto err_vdev;
- }
-
mutex_init(&dev->vb_queue_lock);
mutex_init(&dev->vb_vbi_queue_lock);
/* Fill the video capture device struct */
- *dev->vdev = au0828_video_template;
- dev->vdev->v4l2_dev = &dev->v4l2_dev;
- dev->vdev->lock = &dev->lock;
- dev->vdev->queue = &dev->vb_vidq;
- dev->vdev->queue->lock = &dev->vb_queue_lock;
- strcpy(dev->vdev->name, "au0828a video");
+ dev->vdev = au0828_video_template;
+ dev->vdev.v4l2_dev = &dev->v4l2_dev;
+ dev->vdev.lock = &dev->lock;
+ dev->vdev.queue = &dev->vb_vidq;
+ dev->vdev.queue->lock = &dev->vb_queue_lock;
+ strcpy(dev->vdev.name, "au0828a video");
/* Setup the VBI device */
- *dev->vbi_dev = au0828_video_template;
- dev->vbi_dev->v4l2_dev = &dev->v4l2_dev;
- dev->vbi_dev->lock = &dev->lock;
- dev->vbi_dev->queue = &dev->vb_vbiq;
- dev->vbi_dev->queue->lock = &dev->vb_vbi_queue_lock;
- strcpy(dev->vbi_dev->name, "au0828a vbi");
+ dev->vbi_dev = au0828_video_template;
+ dev->vbi_dev.v4l2_dev = &dev->v4l2_dev;
+ dev->vbi_dev.lock = &dev->lock;
+ dev->vbi_dev.queue = &dev->vb_vbiq;
+ dev->vbi_dev.queue->lock = &dev->vb_vbi_queue_lock;
+ strcpy(dev->vbi_dev.name, "au0828a vbi");
/* initialize videobuf2 stuff */
retval = au0828_vb2_setup(dev);
if (retval != 0) {
dprintk(1, "unable to setup videobuf2 queues (error = %d).\n",
retval);
- ret = -ENODEV;
- goto err_vbi_dev;
+ return -ENODEV;
}
/* Register the v4l2 device */
- video_set_drvdata(dev->vdev, dev);
- retval = video_register_device(dev->vdev, VFL_TYPE_GRABBER, -1);
+ video_set_drvdata(&dev->vdev, dev);
+ retval = video_register_device(&dev->vdev, VFL_TYPE_GRABBER, -1);
if (retval != 0) {
dprintk(1, "unable to register video device (error = %d).\n",
retval);
@@ -1868,8 +1840,8 @@
}
/* Register the vbi device */
- video_set_drvdata(dev->vbi_dev, dev);
- retval = video_register_device(dev->vbi_dev, VFL_TYPE_VBI, -1);
+ video_set_drvdata(&dev->vbi_dev, dev);
+ retval = video_register_device(&dev->vbi_dev, VFL_TYPE_VBI, -1);
if (retval != 0) {
dprintk(1, "unable to register vbi device (error = %d).\n",
retval);
@@ -1882,14 +1854,10 @@
return 0;
err_reg_vbi_dev:
- video_unregister_device(dev->vdev);
+ video_unregister_device(&dev->vdev);
err_reg_vdev:
vb2_queue_release(&dev->vb_vidq);
vb2_queue_release(&dev->vb_vbiq);
-err_vbi_dev:
- video_device_release(dev->vbi_dev);
-err_vdev:
- video_device_release(dev->vdev);
return ret;
}
diff --git a/drivers/media/usb/au0828/au0828.h b/drivers/media/usb/au0828/au0828.h
index eb15187..3b48000 100644
--- a/drivers/media/usb/au0828/au0828.h
+++ b/drivers/media/usb/au0828/au0828.h
@@ -209,8 +209,8 @@
struct au0828_rc *ir;
#endif
- struct video_device *vdev;
- struct video_device *vbi_dev;
+ struct video_device vdev;
+ struct video_device vbi_dev;
/* Videobuf2 */
struct vb2_queue vb_vidq;
diff --git a/drivers/media/usb/cx231xx/Kconfig b/drivers/media/usb/cx231xx/Kconfig
index 173c0e2..0cced3e 100644
--- a/drivers/media/usb/cx231xx/Kconfig
+++ b/drivers/media/usb/cx231xx/Kconfig
@@ -47,6 +47,7 @@
select MEDIA_TUNER_TDA18271 if MEDIA_SUBDRV_AUTOSELECT
select DVB_MB86A20S if MEDIA_SUBDRV_AUTOSELECT
select DVB_LGDT3305 if MEDIA_SUBDRV_AUTOSELECT
+ select DVB_LGDT3306A if MEDIA_SUBDRV_AUTOSELECT
select DVB_TDA18271C2DD if MEDIA_SUBDRV_AUTOSELECT
select DVB_SI2165 if MEDIA_SUBDRV_AUTOSELECT
select MEDIA_TUNER_SI2157 if MEDIA_SUBDRV_AUTOSELECT
diff --git a/drivers/media/usb/cx231xx/cx231xx-417.c b/drivers/media/usb/cx231xx/cx231xx-417.c
index 3f295b4..983ea83 100644
--- a/drivers/media/usb/cx231xx/cx231xx-417.c
+++ b/drivers/media/usb/cx231xx/cx231xx-417.c
@@ -1868,13 +1868,9 @@
dprintk(1, "%s()\n", __func__);
dprintk(3, "%s()\n", __func__);
- if (dev->v4l_device) {
- if (-1 != dev->v4l_device->minor)
- video_unregister_device(dev->v4l_device);
- else
- video_device_release(dev->v4l_device);
+ if (video_is_registered(&dev->v4l_device)) {
+ video_unregister_device(&dev->v4l_device);
v4l2_ctrl_handler_free(&dev->mpeg_ctrl_handler.hdl);
- dev->v4l_device = NULL;
}
}
@@ -1911,25 +1907,21 @@
.s_video_encoding = cx231xx_s_video_encoding,
};
-static struct video_device *cx231xx_video_dev_alloc(
+static void cx231xx_video_dev_init(
struct cx231xx *dev,
struct usb_device *usbdev,
- struct video_device *template,
- char *type)
+ struct video_device *vfd,
+ const struct video_device *template,
+ const char *type)
{
- struct video_device *vfd;
-
dprintk(1, "%s()\n", __func__);
- vfd = video_device_alloc();
- if (NULL == vfd)
- return NULL;
*vfd = *template;
snprintf(vfd->name, sizeof(vfd->name), "%s %s (%s)", dev->name,
type, cx231xx_boards[dev->model].name);
vfd->v4l2_dev = &dev->v4l2_dev;
vfd->lock = &dev->lock;
- vfd->release = video_device_release;
+ vfd->release = video_device_release_empty;
vfd->ctrl_handler = &dev->mpeg_ctrl_handler.hdl;
video_set_drvdata(vfd, dev);
if (dev->tuner_type == TUNER_ABSENT) {
@@ -1938,9 +1930,6 @@
v4l2_disable_ioctl(vfd, VIDIOC_G_TUNER);
v4l2_disable_ioctl(vfd, VIDIOC_S_TUNER);
}
-
- return vfd;
-
}
int cx231xx_417_register(struct cx231xx *dev)
@@ -1983,9 +1972,9 @@
cx2341x_handler_set_50hz(&dev->mpeg_ctrl_handler, false);
/* Allocate and initialize V4L video device */
- dev->v4l_device = cx231xx_video_dev_alloc(dev,
- dev->udev, &cx231xx_mpeg_template, "mpeg");
- err = video_register_device(dev->v4l_device,
+ cx231xx_video_dev_init(dev, dev->udev,
+ &dev->v4l_device, &cx231xx_mpeg_template, "mpeg");
+ err = video_register_device(&dev->v4l_device,
VFL_TYPE_GRABBER, -1);
if (err < 0) {
dprintk(3, "%s: can't register mpeg device\n", dev->name);
@@ -1994,7 +1983,7 @@
}
dprintk(3, "%s: registered device video%d [mpeg]\n",
- dev->name, dev->v4l_device->num);
+ dev->name, dev->v4l_device.num);
return 0;
}
diff --git a/drivers/media/usb/cx231xx/cx231xx-cards.c b/drivers/media/usb/cx231xx/cx231xx-cards.c
index da03733..fe00da1 100644
--- a/drivers/media/usb/cx231xx/cx231xx-cards.c
+++ b/drivers/media/usb/cx231xx/cx231xx-cards.c
@@ -776,6 +776,45 @@
.gpio = NULL,
} },
},
+ [CX231XX_BOARD_HAUPPAUGE_955Q] = {
+ .name = "Hauppauge WinTV-HVR-955Q (111401)",
+ .tuner_type = TUNER_ABSENT,
+ .tuner_addr = 0x60,
+ .tuner_gpio = RDE250_XCV_TUNER,
+ .tuner_sif_gpio = 0x05,
+ .tuner_scl_gpio = 0x1a,
+ .tuner_sda_gpio = 0x1b,
+ .decoder = CX231XX_AVDECODER,
+ .output_mode = OUT_MODE_VIP11,
+ .demod_xfer_mode = 0,
+ .ctl_pin_status_mask = 0xFFFFFFC4,
+ .agc_analog_digital_select_gpio = 0x0c,
+ .gpio_pin_status_mask = 0x4001000,
+ .tuner_i2c_master = I2C_1_MUX_3,
+ .demod_i2c_master = I2C_2,
+ .has_dvb = 1,
+ .demod_addr = 0x0e,
+ .norm = V4L2_STD_NTSC,
+
+ .input = {{
+ .type = CX231XX_VMUX_TELEVISION,
+ .vmux = CX231XX_VIN_3_1,
+ .amux = CX231XX_AMUX_VIDEO,
+ .gpio = NULL,
+ }, {
+ .type = CX231XX_VMUX_COMPOSITE1,
+ .vmux = CX231XX_VIN_2_1,
+ .amux = CX231XX_AMUX_LINE_IN,
+ .gpio = NULL,
+ }, {
+ .type = CX231XX_VMUX_SVIDEO,
+ .vmux = CX231XX_VIN_1_1 |
+ (CX231XX_VIN_1_2 << 8) |
+ CX25840_SVIDEO_ON,
+ .amux = CX231XX_AMUX_LINE_IN,
+ .gpio = NULL,
+ } },
+ },
};
const unsigned int cx231xx_bcount = ARRAY_SIZE(cx231xx_boards);
@@ -805,6 +844,8 @@
.driver_info = CX231XX_BOARD_HAUPPAUGE_USB2_FM_NTSC},
{USB_DEVICE(0x2040, 0xb120),
.driver_info = CX231XX_BOARD_HAUPPAUGE_EXETER},
+ {USB_DEVICE(0x2040, 0xb123),
+ .driver_info = CX231XX_BOARD_HAUPPAUGE_955Q},
{USB_DEVICE(0x2040, 0xb130),
.driver_info = CX231XX_BOARD_HAUPPAUGE_930C_HD_1113xx},
{USB_DEVICE(0x2040, 0xb131),
@@ -912,9 +953,6 @@
*/
void cx231xx_pre_card_setup(struct cx231xx *dev)
{
-
- cx231xx_set_model(dev);
-
dev_info(dev->dev, "Identified as %s (card=%d)\n",
dev->board.name, dev->model);
@@ -1052,6 +1090,7 @@
switch (dev->model) {
case CX231XX_BOARD_HAUPPAUGE_930C_HD_1113xx:
case CX231XX_BOARD_HAUPPAUGE_930C_HD_1114xx:
+ case CX231XX_BOARD_HAUPPAUGE_955Q:
{
struct tveeprom tvee;
static u8 eeprom[256];
@@ -1092,6 +1131,17 @@
call_all(dev, video, s_stream, 1);
}
+static void cx231xx_unregister_media_device(struct cx231xx *dev)
+{
+#ifdef CONFIG_MEDIA_CONTROLLER
+ if (dev->media_dev) {
+ media_device_unregister(dev->media_dev);
+ kfree(dev->media_dev);
+ dev->media_dev = NULL;
+ }
+#endif
+}
+
/*
* cx231xx_realease_resources()
* unregisters the v4l2,i2c and usb devices
@@ -1099,6 +1149,8 @@
*/
void cx231xx_release_resources(struct cx231xx *dev)
{
+ cx231xx_unregister_media_device(dev);
+
cx231xx_release_analog_resources(dev);
cx231xx_remove_from_devlist(dev);
@@ -1117,6 +1169,74 @@
clear_bit(dev->devno, &cx231xx_devused);
}
+static void cx231xx_media_device_register(struct cx231xx *dev,
+ struct usb_device *udev)
+{
+#ifdef CONFIG_MEDIA_CONTROLLER
+ struct media_device *mdev;
+ int ret;
+
+ mdev = kzalloc(sizeof(*mdev), GFP_KERNEL);
+ if (!mdev)
+ return;
+
+ mdev->dev = dev->dev;
+ strlcpy(mdev->model, dev->board.name, sizeof(mdev->model));
+ if (udev->serial)
+ strlcpy(mdev->serial, udev->serial, sizeof(mdev->serial));
+ strcpy(mdev->bus_info, udev->devpath);
+ mdev->hw_revision = le16_to_cpu(udev->descriptor.bcdDevice);
+ mdev->driver_version = LINUX_VERSION_CODE;
+
+ ret = media_device_register(mdev);
+ if (ret) {
+ dev_err(dev->dev,
+ "Couldn't create a media device. Error: %d\n",
+ ret);
+ kfree(mdev);
+ return;
+ }
+
+ dev->media_dev = mdev;
+#endif
+}
+
+static void cx231xx_create_media_graph(struct cx231xx *dev)
+{
+#ifdef CONFIG_MEDIA_CONTROLLER
+ struct media_device *mdev = dev->media_dev;
+ struct media_entity *entity;
+ struct media_entity *tuner = NULL, *decoder = NULL;
+
+ if (!mdev)
+ return;
+
+ media_device_for_each_entity(entity, mdev) {
+ switch (entity->type) {
+ case MEDIA_ENT_T_V4L2_SUBDEV_TUNER:
+ tuner = entity;
+ break;
+ case MEDIA_ENT_T_V4L2_SUBDEV_DECODER:
+ decoder = entity;
+ break;
+ }
+ }
+
+ /* Analog setup, using tuner as a link */
+
+ if (!decoder)
+ return;
+
+ if (tuner)
+ media_entity_create_link(tuner, 0, decoder, 0,
+ MEDIA_LNK_FL_ENABLED);
+ media_entity_create_link(decoder, 1, &dev->vdev.entity, 0,
+ MEDIA_LNK_FL_ENABLED);
+ media_entity_create_link(decoder, 2, &dev->vbi_dev.entity, 0,
+ MEDIA_LNK_FL_ENABLED);
+#endif
+}
+
/*
* cx231xx_init_dev()
* allocates and inits the device structs, registers i2c bus and v4l device
@@ -1225,10 +1345,8 @@
}
retval = cx231xx_register_analog_devices(dev);
- if (retval) {
- cx231xx_release_analog_resources(dev);
+ if (retval)
goto err_analog;
- }
cx231xx_ir_init(dev);
@@ -1236,6 +1354,8 @@
return 0;
err_analog:
+ cx231xx_unregister_media_device(dev);
+ cx231xx_release_analog_resources(dev);
cx231xx_remove_from_devlist(dev);
err_dev_init:
cx231xx_dev_uninit(dev);
@@ -1438,6 +1558,8 @@
dev->video_mode.alt = -1;
dev->dev = d;
+ cx231xx_set_model(dev);
+
dev->interface_count++;
/* reset gpio dir and value */
dev->gpio_dir = 0;
@@ -1502,7 +1624,13 @@
/* save our data pointer in this interface device */
usb_set_intfdata(interface, dev);
+ /* Register the media controller */
+ cx231xx_media_device_register(dev, udev);
+
/* Create v4l2 device */
+#ifdef CONFIG_MEDIA_CONTROLLER
+ dev->v4l2_dev.mdev = dev->media_dev;
+#endif
retval = v4l2_device_register(&interface->dev, &dev->v4l2_dev);
if (retval) {
dev_err(d, "v4l2_device_register failed\n");
@@ -1568,6 +1696,8 @@
/* load other modules required */
request_modules(dev);
+ cx231xx_create_media_graph(dev);
+
return 0;
err_video_alt:
/* cx231xx_uninit_dev: */
@@ -1618,7 +1748,7 @@
if (dev->users) {
dev_warn(dev->dev,
"device %s is open! Deregistration and memory deallocation are deferred on close.\n",
- video_device_node_name(dev->vdev));
+ video_device_node_name(&dev->vdev));
/* Even having users, it is safe to remove the RC i2c driver */
cx231xx_ir_exit(dev);
diff --git a/drivers/media/usb/cx231xx/cx231xx-core.c b/drivers/media/usb/cx231xx/cx231xx-core.c
index 4a3f28c..e42bde0 100644
--- a/drivers/media/usb/cx231xx/cx231xx-core.c
+++ b/drivers/media/usb/cx231xx/cx231xx-core.c
@@ -176,16 +176,9 @@
saddr_len = req_data->saddr_len;
/* Set wValue */
- if (saddr_len == 1) /* need check saddr_len == 0 */
- ven_req.wValue =
- req_data->
- dev_addr << 9 | _i2c_period << 4 | saddr_len << 2 |
- _i2c_nostop << 1 | I2C_SYNC | _i2c_reserve << 6;
- else
- ven_req.wValue =
- req_data->
- dev_addr << 9 | _i2c_period << 4 | saddr_len << 2 |
- _i2c_nostop << 1 | I2C_SYNC | _i2c_reserve << 6;
+ ven_req.wValue = (req_data->dev_addr << 9 | _i2c_period << 4 |
+ saddr_len << 2 | _i2c_nostop << 1 | I2C_SYNC |
+ _i2c_reserve << 6);
/* set channel number */
if (req_data->direction & I2C_M_RD) {
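For reference, the collapsed expression packs the vendor-request fields into the 16-bit wValue as sketched below. Bit positions are read off the shifts in the code, and placing I2C_SYNC at bit 0 is an assumption based on its lack of a shift; this is an illustration, not driver code.

#include <stdint.h>

/*
 * bit 0      : sync flag (assumed position of I2C_SYNC)
 * bit 1      : no-stop flag
 * bits 3:2   : slave sub-address length
 * bits 5:4   : I2C period
 * bit 6      : reserve flag
 * bits 15:9  : 7-bit device address
 */
static uint16_t pack_i2c_wvalue(uint8_t dev_addr, uint8_t period,
				uint8_t saddr_len, uint8_t nostop,
				uint8_t sync, uint8_t reserve)
{
	return dev_addr << 9 | period << 4 | saddr_len << 2 |
	       nostop << 1 | sync | reserve << 6;
}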
diff --git a/drivers/media/usb/cx231xx/cx231xx-dvb.c b/drivers/media/usb/cx231xx/cx231xx-dvb.c
index dd600b9..610d567 100644
--- a/drivers/media/usb/cx231xx/cx231xx-dvb.c
+++ b/drivers/media/usb/cx231xx/cx231xx-dvb.c
@@ -34,6 +34,7 @@
#include "si2165.h"
#include "mb86a20s.h"
#include "si2157.h"
+#include "lgdt3306a.h"
MODULE_DESCRIPTION("driver for cx231xx based DVB cards");
MODULE_AUTHOR("Srinivasa Deevi <srinivasa.deevi@conexant.com>");
@@ -160,6 +161,18 @@
.ref_freq_Hz = 24000000,
};
+static struct lgdt3306a_config hauppauge_955q_lgdt3306a_config = {
+ .i2c_addr = 0x59,
+ .qam_if_khz = 4000,
+ .vsb_if_khz = 3250,
+ .deny_i2c_rptr = 1,
+ .spectral_inversion = 1,
+ .mpeg_mode = LGDT3306A_MPEG_SERIAL,
+ .tpclk_edge = LGDT3306A_TPCLK_RISING_EDGE,
+ .tpvalid_polarity = LGDT3306A_TP_VALID_HIGH,
+ .xtalMHz = 25,
+};
+
static inline void print_err_status(struct cx231xx *dev, int packet, int status)
{
char *errmsg = "Unknown";
@@ -455,6 +468,7 @@
mutex_init(&dvb->lock);
+
/* register adapter */
result = dvb_register_adapter(&dvb->adapter, dev->name, module, device,
adapter_nr);
@@ -464,6 +478,7 @@
dev->name, result);
goto fail_adapter;
}
+ dvb_register_media_controller(&dvb->adapter, dev->media_dev);
/* Ensure all frontends negotiate bus access */
dvb->frontend->ops.ts_bus_ctrl = cx231xx_dvb_bus_ctrl;
@@ -536,6 +551,8 @@
/* register network adapter */
dvb_net_init(&dvb->adapter, &dvb->net, &dvb->demux.dmx);
+ dvb_create_media_graph(&dvb->adapter);
+
return 0;
fail_fe_conn:
@@ -807,7 +824,61 @@
dev->dvb->i2c_client_tuner = client;
break;
}
+ case CX231XX_BOARD_HAUPPAUGE_955Q:
+ {
+ struct i2c_client *client;
+ struct i2c_board_info info;
+ struct si2157_config si2157_config;
+ memset(&info, 0, sizeof(struct i2c_board_info));
+
+ dev->dvb->frontend = dvb_attach(lgdt3306a_attach,
+ &hauppauge_955q_lgdt3306a_config,
+ tuner_i2c
+ );
+
+ if (dev->dvb->frontend == NULL) {
+ dev_err(dev->dev,
+ "Failed to attach LGDT3306A frontend.\n");
+ result = -EINVAL;
+ goto out_free;
+ }
+
+ dev->dvb->frontend->ops.i2c_gate_ctrl = NULL;
+
+ /* define general-purpose callback pointer */
+ dvb->frontend->callback = cx231xx_tuner_callback;
+
+ /* attach tuner */
+ memset(&si2157_config, 0, sizeof(si2157_config));
+ si2157_config.fe = dev->dvb->frontend;
+ si2157_config.inversion = true;
+ strlcpy(info.type, "si2157", I2C_NAME_SIZE);
+ info.addr = 0x60;
+ info.platform_data = &si2157_config;
+ request_module("si2157");
+
+ client = i2c_new_device(
+ tuner_i2c,
+ &info);
+ if (client == NULL || client->dev.driver == NULL) {
+ dvb_frontend_detach(dev->dvb->frontend);
+ result = -ENODEV;
+ goto out_free;
+ }
+
+ if (!try_module_get(client->dev.driver->owner)) {
+ i2c_unregister_device(client);
+ dvb_frontend_detach(dev->dvb->frontend);
+ result = -ENODEV;
+ goto out_free;
+ }
+
+ dev->cx231xx_reset_analog_tuner = NULL;
+
+ dev->dvb->i2c_client_tuner = client;
+ break;
+ }
case CX231XX_BOARD_PV_PLAYTV_USB_HYBRID:
case CX231XX_BOARD_KWORLD_UB430_USB_HYBRID:
diff --git a/drivers/media/usb/cx231xx/cx231xx-video.c b/drivers/media/usb/cx231xx/cx231xx-video.c
index ecea76f..c261e16 100644
--- a/drivers/media/usb/cx231xx/cx231xx-video.c
+++ b/drivers/media/usb/cx231xx/cx231xx-video.c
@@ -100,6 +100,75 @@
};
+static int cx231xx_enable_analog_tuner(struct cx231xx *dev)
+{
+#ifdef CONFIG_MEDIA_CONTROLLER
+ struct media_device *mdev = dev->media_dev;
+ struct media_entity *entity, *decoder = NULL, *source;
+ struct media_link *link, *found_link = NULL;
+ int i, ret, active_links = 0;
+
+ if (!mdev)
+ return 0;
+
+ /*
+ * This will find the tuner that is connected into the decoder.
+ * Technically, this is not 100% correct, as the device may be
+ * using an analog input instead of the tuner. However, as we can't
+ * do DVB streaming while the DMA engine is being used for V4L2,
+ * this should be enough for the actual needs.
+ */
+ media_device_for_each_entity(entity, mdev) {
+ if (entity->type == MEDIA_ENT_T_V4L2_SUBDEV_DECODER) {
+ decoder = entity;
+ break;
+ }
+ }
+ if (!decoder)
+ return 0;
+
+ for (i = 0; i < decoder->num_links; i++) {
+ link = &decoder->links[i];
+ if (link->sink->entity == decoder) {
+ found_link = link;
+ if (link->flags & MEDIA_LNK_FL_ENABLED)
+ active_links++;
+ break;
+ }
+ }
+
+ if (active_links == 1 || !found_link)
+ return 0;
+
+ source = found_link->source->entity;
+ for (i = 0; i < source->num_links; i++) {
+ struct media_entity *sink;
+ int flags = 0;
+
+ link = &source->links[i];
+ sink = link->sink->entity;
+
+ if (sink == entity)
+ flags = MEDIA_LNK_FL_ENABLED;
+
+ ret = media_entity_setup_link(link, flags);
+ if (ret) {
+ dev_err(dev->dev,
+ "Couldn't change link %s->%s to %s. Error %d\n",
+ source->name, sink->name,
+ flags ? "enabled" : "disabled",
+ ret);
+ return ret;
+ } else
+ dev_dbg(dev->dev,
+ "link %s->%s was %s\n",
+ source->name, sink->name,
+ flags ? "ENABLED" : "disabled");
+ }
+#endif
+ return 0;
+}
+
/* ------------------------------------------------------------------
Video buffer and parser functions
------------------------------------------------------------------*/
@@ -667,6 +736,9 @@
if (*count < CX231XX_MIN_BUF)
*count = CX231XX_MIN_BUF;
+
+ cx231xx_enable_analog_tuner(dev);
+
return 0;
}
@@ -756,6 +828,7 @@
}
buf->vb.state = VIDEOBUF_PREPARED;
+
return 0;
fail:
@@ -1056,7 +1129,7 @@
(CX231XX_VMUX_CABLE == INPUT(n)->type))
i->type = V4L2_INPUT_TYPE_TUNER;
- i->std = dev->vdev->tvnorms;
+ i->std = dev->vdev.tvnorms;
/* If they are asking about the active input, read signal status */
if (n == dev->video_input) {
@@ -1451,7 +1524,7 @@
cap->capabilities = cap->device_caps | V4L2_CAP_READWRITE |
V4L2_CAP_VBI_CAPTURE | V4L2_CAP_VIDEO_CAPTURE |
V4L2_CAP_STREAMING | V4L2_CAP_DEVICE_CAPS;
- if (dev->radio_dev)
+ if (video_is_registered(&dev->radio_dev))
cap->capabilities |= V4L2_CAP_RADIO;
return 0;
@@ -1729,34 +1802,21 @@
/*FIXME: I2C IR should be disconnected */
- if (dev->radio_dev) {
- if (video_is_registered(dev->radio_dev))
- video_unregister_device(dev->radio_dev);
- else
- video_device_release(dev->radio_dev);
- dev->radio_dev = NULL;
- }
- if (dev->vbi_dev) {
+ if (video_is_registered(&dev->radio_dev))
+ video_unregister_device(&dev->radio_dev);
+ if (video_is_registered(&dev->vbi_dev)) {
dev_info(dev->dev, "V4L2 device %s deregistered\n",
- video_device_node_name(dev->vbi_dev));
- if (video_is_registered(dev->vbi_dev))
- video_unregister_device(dev->vbi_dev);
- else
- video_device_release(dev->vbi_dev);
- dev->vbi_dev = NULL;
+ video_device_node_name(&dev->vbi_dev));
+ video_unregister_device(&dev->vbi_dev);
}
- if (dev->vdev) {
+ if (video_is_registered(&dev->vdev)) {
dev_info(dev->dev, "V4L2 device %s deregistered\n",
- video_device_node_name(dev->vdev));
+ video_device_node_name(&dev->vdev));
if (dev->board.has_417)
cx231xx_417_unregister(dev);
- if (video_is_registered(dev->vdev))
- video_unregister_device(dev->vdev);
- else
- video_device_release(dev->vdev);
- dev->vdev = NULL;
+ video_unregister_device(&dev->vdev);
}
v4l2_ctrl_handler_free(&dev->ctrl_handler);
v4l2_ctrl_handler_free(&dev->radio_ctrl_handler);
@@ -2013,7 +2073,7 @@
static const struct video_device cx231xx_video_template = {
.fops = &cx231xx_v4l_fops,
- .release = video_device_release,
+ .release = video_device_release_empty,
.ioctl_ops = &video_ioctl_ops,
.tvnorms = V4L2_STD_ALL,
};
@@ -2049,19 +2109,14 @@
/******************************** usb interface ******************************/
-static struct video_device *cx231xx_vdev_init(struct cx231xx *dev,
- const struct video_device
- *template, const char *type_name)
+static void cx231xx_vdev_init(struct cx231xx *dev,
+ struct video_device *vfd,
+ const struct video_device *template,
+ const char *type_name)
{
- struct video_device *vfd;
-
- vfd = video_device_alloc();
- if (NULL == vfd)
- return NULL;
-
*vfd = *template;
vfd->v4l2_dev = &dev->v4l2_dev;
- vfd->release = video_device_release;
+ vfd->release = video_device_release_empty;
vfd->lock = &dev->lock;
snprintf(vfd->name, sizeof(vfd->name), "%s %s", dev->name, type_name);
@@ -2073,7 +2128,6 @@
v4l2_disable_ioctl(vfd, VIDIOC_G_TUNER);
v4l2_disable_ioctl(vfd, VIDIOC_S_TUNER);
}
- return vfd;
}
int cx231xx_register_analog_devices(struct cx231xx *dev)
@@ -2116,15 +2170,16 @@
/* write code here... */
/* allocate and fill video video_device struct */
- dev->vdev = cx231xx_vdev_init(dev, &cx231xx_video_template, "video");
- if (!dev->vdev) {
- dev_err(dev->dev, "cannot allocate video_device.\n");
- return -ENODEV;
- }
-
- dev->vdev->ctrl_handler = &dev->ctrl_handler;
+ cx231xx_vdev_init(dev, &dev->vdev, &cx231xx_video_template, "video");
+#if defined(CONFIG_MEDIA_CONTROLLER)
+ dev->video_pad.flags = MEDIA_PAD_FL_SINK;
+ ret = media_entity_init(&dev->vdev.entity, 1, &dev->video_pad, 0);
+ if (ret < 0)
+ dev_err(dev->dev, "failed to initialize video media entity!\n");
+#endif
+ dev->vdev.ctrl_handler = &dev->ctrl_handler;
/* register v4l2 video video_device */
- ret = video_register_device(dev->vdev, VFL_TYPE_GRABBER,
+ ret = video_register_device(&dev->vdev, VFL_TYPE_GRABBER,
video_nr[dev->devno]);
if (ret) {
dev_err(dev->dev,
@@ -2134,22 +2189,24 @@
}
dev_info(dev->dev, "Registered video device %s [v4l2]\n",
- video_device_node_name(dev->vdev));
+ video_device_node_name(&dev->vdev));
/* Initialize VBI template */
cx231xx_vbi_template = cx231xx_video_template;
strcpy(cx231xx_vbi_template.name, "cx231xx-vbi");
/* Allocate and fill vbi video_device struct */
- dev->vbi_dev = cx231xx_vdev_init(dev, &cx231xx_vbi_template, "vbi");
+ cx231xx_vdev_init(dev, &dev->vbi_dev, &cx231xx_vbi_template, "vbi");
- if (!dev->vbi_dev) {
- dev_err(dev->dev, "cannot allocate video_device.\n");
- return -ENODEV;
- }
- dev->vbi_dev->ctrl_handler = &dev->ctrl_handler;
+#if defined(CONFIG_MEDIA_CONTROLLER)
+ dev->vbi_pad.flags = MEDIA_PAD_FL_SINK;
+ ret = media_entity_init(&dev->vbi_dev.entity, 1, &dev->vbi_pad, 0);
+ if (ret < 0)
+ dev_err(dev->dev, "failed to initialize vbi media entity!\n");
+#endif
+ dev->vbi_dev.ctrl_handler = &dev->ctrl_handler;
/* register v4l2 vbi video_device */
- ret = video_register_device(dev->vbi_dev, VFL_TYPE_VBI,
+ ret = video_register_device(&dev->vbi_dev, VFL_TYPE_VBI,
vbi_nr[dev->devno]);
if (ret < 0) {
dev_err(dev->dev, "unable to register vbi device\n");
@@ -2157,18 +2214,13 @@
}
dev_info(dev->dev, "Registered VBI device %s\n",
- video_device_node_name(dev->vbi_dev));
+ video_device_node_name(&dev->vbi_dev));
if (cx231xx_boards[dev->model].radio.type == CX231XX_RADIO) {
- dev->radio_dev = cx231xx_vdev_init(dev, &cx231xx_radio_template,
- "radio");
- if (!dev->radio_dev) {
- dev_err(dev->dev,
- "cannot allocate video_device.\n");
- return -ENODEV;
- }
- dev->radio_dev->ctrl_handler = &dev->radio_ctrl_handler;
- ret = video_register_device(dev->radio_dev, VFL_TYPE_RADIO,
+ cx231xx_vdev_init(dev, &dev->radio_dev,
+ &cx231xx_radio_template, "radio");
+ dev->radio_dev.ctrl_handler = &dev->radio_ctrl_handler;
+ ret = video_register_device(&dev->radio_dev, VFL_TYPE_RADIO,
radio_nr[dev->devno]);
if (ret < 0) {
dev_err(dev->dev,
@@ -2176,7 +2228,7 @@
return ret;
}
dev_info(dev->dev, "Registered radio device as %s\n",
- video_device_node_name(dev->radio_dev));
+ video_device_node_name(&dev->radio_dev));
}
return 0;
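The cx231xx conversion above (mirrored later in this series by em28xx) embeds struct video_device in the driver state instead of allocating it, which is why .release becomes video_device_release_empty and teardown keys off video_is_registered() rather than a NULL pointer check. A minimal sketch of the pattern, using hypothetical my_* names rather than the driver's own:

#include <linux/module.h>
#include <linux/videodev2.h>
#include <media/v4l2-dev.h>
#include <media/v4l2-device.h>

struct my_state {
	struct v4l2_device v4l2_dev;
	struct video_device vdev;		/* embedded, not allocated */
};

static const struct v4l2_file_operations my_fops = {
	.owner = THIS_MODULE,
};

static const struct video_device my_template = {
	.fops    = &my_fops,
	.release = video_device_release_empty,	/* nothing to kfree() */
	.tvnorms = V4L2_STD_ALL,
};

static int my_register(struct my_state *s)
{
	/* assumes s->v4l2_dev was set up with v4l2_device_register() */
	s->vdev = my_template;
	s->vdev.v4l2_dev = &s->v4l2_dev;
	return video_register_device(&s->vdev, VFL_TYPE_GRABBER, -1);
}

static void my_unregister(struct my_state *s)
{
	/* safe whether or not my_register() succeeded */
	if (video_is_registered(&s->vdev))
		video_unregister_device(&s->vdev);
}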
diff --git a/drivers/media/usb/cx231xx/cx231xx.h b/drivers/media/usb/cx231xx/cx231xx.h
index 6d6f3ee..00d3bce 100644
--- a/drivers/media/usb/cx231xx/cx231xx.h
+++ b/drivers/media/usb/cx231xx/cx231xx.h
@@ -76,6 +76,7 @@
#define CX231XX_BOARD_KWORLD_UB445_USB_HYBRID 18
#define CX231XX_BOARD_HAUPPAUGE_930C_HD_1113xx 19
#define CX231XX_BOARD_HAUPPAUGE_930C_HD_1114xx 20
+#define CX231XX_BOARD_HAUPPAUGE_955Q 21
/* Limits minimum and default number of buffers */
#define CX231XX_MIN_BUF 4
@@ -633,7 +634,7 @@
/* video for linux */
int users; /* user count for exclusive use */
- struct video_device *vdev; /* video for linux device struct */
+ struct video_device vdev; /* video for linux device struct */
v4l2_std_id norm; /* selected tv norm */
int ctl_freq; /* selected frequency */
unsigned int ctl_ainput; /* selected audio input */
@@ -655,8 +656,13 @@
struct mutex ctrl_urb_lock; /* protects urb_buf */
struct list_head inqueue, outqueue;
wait_queue_head_t open, wait_frame, wait_stream;
- struct video_device *vbi_dev;
- struct video_device *radio_dev;
+ struct video_device vbi_dev;
+ struct video_device radio_dev;
+
+#if defined(CONFIG_MEDIA_CONTROLLER)
+ struct media_device *media_dev;
+ struct media_pad video_pad, vbi_pad;
+#endif
unsigned char eedata[256];
@@ -718,7 +724,7 @@
u8 USE_ISO;
struct cx231xx_tvnorm encodernorm;
struct cx231xx_tsport ts1, ts2;
- struct video_device *v4l_device;
+ struct video_device v4l_device;
atomic_t v4l_reader_count;
u32 freq;
unsigned int input;
@@ -972,8 +978,11 @@
int cx231xx_ir_init(struct cx231xx *dev);
void cx231xx_ir_exit(struct cx231xx *dev);
#else
-#define cx231xx_ir_init(dev) (0)
-#define cx231xx_ir_exit(dev) (0)
+static inline int cx231xx_ir_init(struct cx231xx *dev)
+{
+ return 0;
+}
+static inline void cx231xx_ir_exit(struct cx231xx *dev) {}
#endif
static inline unsigned int norm_maxw(struct cx231xx *dev)
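The header also swaps the CONFIG-gated #define stubs for the IR helpers to static inlines, which keeps argument type checking and avoids unused-value warnings when IR support is compiled out. A generic sketch of the idiom, with hypothetical foo_* names:

struct foo_dev;

#ifdef CONFIG_FOO_IR
int foo_ir_init(struct foo_dev *dev);
void foo_ir_exit(struct foo_dev *dev);
#else
static inline int foo_ir_init(struct foo_dev *dev)
{
	return 0;
}
static inline void foo_ir_exit(struct foo_dev *dev) { }
#endif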
diff --git a/drivers/media/usb/dvb-usb-v2/Kconfig b/drivers/media/usb/dvb-usb-v2/Kconfig
index 0982e73..9facc92 100644
--- a/drivers/media/usb/dvb-usb-v2/Kconfig
+++ b/drivers/media/usb/dvb-usb-v2/Kconfig
@@ -146,7 +146,7 @@
depends on DVB_USB_V2
select DVB_M88DS3103 if MEDIA_SUBDRV_AUTOSELECT
select DVB_SI2168 if MEDIA_SUBDRV_AUTOSELECT
- select MEDIA_TUNER_M88TS2022 if MEDIA_SUBDRV_AUTOSELECT
+ select DVB_TS2020 if MEDIA_SUBDRV_AUTOSELECT
select MEDIA_TUNER_SI2157 if MEDIA_SUBDRV_AUTOSELECT
select DVB_SP2 if MEDIA_SUBDRV_AUTOSELECT
help
diff --git a/drivers/media/usb/dvb-usb-v2/dvb_usb.h b/drivers/media/usb/dvb-usb-v2/dvb_usb.h
index 41c6363..023d91f 100644
--- a/drivers/media/usb/dvb-usb-v2/dvb_usb.h
+++ b/drivers/media/usb/dvb-usb-v2/dvb_usb.h
@@ -25,6 +25,7 @@
#include <linux/usb/input.h>
#include <linux/firmware.h>
#include <media/rc-core.h>
+#include <media/media-device.h>
#include "dvb_frontend.h"
#include "dvb_demux.h"
diff --git a/drivers/media/usb/dvb-usb-v2/dvb_usb_core.c b/drivers/media/usb/dvb-usb-v2/dvb_usb_core.c
index 9913e0f..f5df9ea 100644
--- a/drivers/media/usb/dvb-usb-v2/dvb_usb_core.c
+++ b/drivers/media/usb/dvb-usb-v2/dvb_usb_core.c
@@ -400,10 +400,61 @@
return ret;
}
+static void dvb_usbv2_media_device_register(struct dvb_usb_adapter *adap)
+{
+#ifdef CONFIG_MEDIA_CONTROLLER_DVB
+ struct media_device *mdev;
+ struct dvb_usb_device *d = adap_to_d(adap);
+ struct usb_device *udev = d->udev;
+ int ret;
+
+ mdev = kzalloc(sizeof(*mdev), GFP_KERNEL);
+ if (!mdev)
+ return;
+
+ mdev->dev = &udev->dev;
+ strlcpy(mdev->model, d->name, sizeof(mdev->model));
+ if (udev->serial)
+ strlcpy(mdev->serial, udev->serial, sizeof(mdev->serial));
+ strcpy(mdev->bus_info, udev->devpath);
+ mdev->hw_revision = le16_to_cpu(udev->descriptor.bcdDevice);
+ mdev->driver_version = LINUX_VERSION_CODE;
+
+ ret = media_device_register(mdev);
+ if (ret) {
+ dev_err(&d->udev->dev,
+ "Couldn't create a media device. Error: %d\n",
+ ret);
+ kfree(mdev);
+ return;
+ }
+
+ dvb_register_media_controller(&adap->dvb_adap, mdev);
+
+ dev_info(&d->udev->dev, "media controller created\n");
+
+#endif
+}
+
+static void dvb_usbv2_media_device_unregister(struct dvb_usb_adapter *adap)
+{
+#ifdef CONFIG_MEDIA_CONTROLLER_DVB
+
+ if (!adap->dvb_adap.mdev)
+ return;
+
+ media_device_unregister(adap->dvb_adap.mdev);
+ kfree(adap->dvb_adap.mdev);
+ adap->dvb_adap.mdev = NULL;
+
+#endif
+}
+
static int dvb_usbv2_adapter_dvb_init(struct dvb_usb_adapter *adap)
{
int ret;
struct dvb_usb_device *d = adap_to_d(adap);
+
dev_dbg(&d->udev->dev, "%s: adap=%d\n", __func__, adap->id);
ret = dvb_register_adapter(&adap->dvb_adap, d->name, d->props->owner,
@@ -416,6 +467,8 @@
adap->dvb_adap.priv = adap;
+ dvb_usbv2_media_device_register(adap);
+
if (d->props->read_mac_address) {
ret = d->props->read_mac_address(adap,
adap->dvb_adap.proposed_mac);
@@ -464,6 +517,7 @@
err_dvb_dmxdev_init:
dvb_dmx_release(&adap->demux);
err_dvb_dmx_init:
+ dvb_usbv2_media_device_unregister(adap);
dvb_unregister_adapter(&adap->dvb_adap);
err_dvb_register_adapter:
adap->dvb_adap.priv = NULL;
@@ -480,6 +534,7 @@
adap->demux.dmx.close(&adap->demux.dmx);
dvb_dmxdev_release(&adap->dmxdev);
dvb_dmx_release(&adap->demux);
+ dvb_usbv2_media_device_unregister(adap);
dvb_unregister_adapter(&adap->dvb_adap);
}
@@ -643,6 +698,8 @@
}
}
+ dvb_create_media_graph(&adap->dvb_adap);
+
return 0;
err_dvb_unregister_frontend:
@@ -955,6 +1012,7 @@
struct dvb_usb_device *d = usb_get_intfdata(intf);
const char *name = d->name;
struct device dev = d->udev->dev;
+
dev_dbg(&d->udev->dev, "%s: bInterfaceNumber=%d\n", __func__,
intf->cur_altsetting->desc.bInterfaceNumber);
diff --git a/drivers/media/usb/dvb-usb-v2/dvbsky.c b/drivers/media/usb/dvb-usb-v2/dvbsky.c
index 9b5add4..cdf59bc 100644
--- a/drivers/media/usb/dvb-usb-v2/dvbsky.c
+++ b/drivers/media/usb/dvb-usb-v2/dvbsky.c
@@ -20,7 +20,7 @@
#include "dvb_usb.h"
#include "m88ds3103.h"
-#include "m88ts2022.h"
+#include "ts2020.h"
#include "sp2.h"
#include "si2168.h"
#include "si2157.h"
@@ -315,9 +315,7 @@
struct i2c_adapter *i2c_adapter;
struct i2c_client *client;
struct i2c_board_info info;
- struct m88ts2022_config m88ts2022_config = {
- .clock = 27000000,
- };
+ struct ts2020_config ts2020_config = {};
memset(&info, 0, sizeof(struct i2c_board_info));
/* attach demod */
@@ -332,11 +330,11 @@
}
/* attach tuner */
- m88ts2022_config.fe = adap->fe[0];
- strlcpy(info.type, "m88ts2022", I2C_NAME_SIZE);
+ ts2020_config.fe = adap->fe[0];
+ strlcpy(info.type, "ts2020", I2C_NAME_SIZE);
info.addr = 0x60;
- info.platform_data = &m88ts2022_config;
- request_module("m88ts2022");
+ info.platform_data = &ts2020_config;
+ request_module("ts2020");
client = i2c_new_device(i2c_adapter, &info);
if (client == NULL || client->dev.driver == NULL) {
dvb_frontend_detach(adap->fe[0]);
@@ -439,9 +437,7 @@
struct i2c_client *client_tuner, *client_ci;
struct i2c_board_info info;
struct sp2_config sp2_config;
- struct m88ts2022_config m88ts2022_config = {
- .clock = 27000000,
- };
+ struct ts2020_config ts2020_config = {};
memset(&info, 0, sizeof(struct i2c_board_info));
/* attach demod */
@@ -456,11 +452,11 @@
}
/* attach tuner */
- m88ts2022_config.fe = adap->fe[0];
- strlcpy(info.type, "m88ts2022", I2C_NAME_SIZE);
+ ts2020_config.fe = adap->fe[0];
+ strlcpy(info.type, "ts2020", I2C_NAME_SIZE);
info.addr = 0x60;
- info.platform_data = &m88ts2022_config;
- request_module("m88ts2022");
+ info.platform_data = &ts2020_config;
+ request_module("ts2020");
client_tuner = i2c_new_device(i2c_adapter, &info);
if (client_tuner == NULL || client_tuner->dev.driver == NULL) {
ret = -ENODEV;
diff --git a/drivers/media/usb/dvb-usb-v2/rtl28xxu.c b/drivers/media/usb/dvb-usb-v2/rtl28xxu.c
index 87fc0fe..895441f 100644
--- a/drivers/media/usb/dvb-usb-v2/rtl28xxu.c
+++ b/drivers/media/usb/dvb-usb-v2/rtl28xxu.c
@@ -866,6 +866,8 @@
mn88472_config.i2c_wr_max = 22,
strlcpy(info.type, "mn88472", I2C_NAME_SIZE);
mn88472_config.xtal = 20500000;
+ mn88472_config.ts_mode = SERIAL_TS_MODE;
+ mn88472_config.ts_clock = VARIABLE_TS_CLOCK;
info.addr = 0x18;
info.platform_data = &mn88472_config;
request_module(info.type);
@@ -1609,7 +1611,7 @@
rc->allowed_protos = RC_BIT_ALL;
rc->driver_type = RC_DRIVER_IR_RAW;
rc->query = rtl2832u_rc_query;
- rc->interval = 400;
+ rc->interval = 200;
return 0;
}
@@ -1724,6 +1726,8 @@
&rtl28xxu_props, "DigitalNow Quad DVB-T Receiver", NULL) },
{ DVB_USB_DEVICE(USB_VID_LEADTEK, USB_PID_WINFAST_DTV_DONGLE_MINID,
&rtl28xxu_props, "Leadtek Winfast DTV Dongle Mini D", NULL) },
+ { DVB_USB_DEVICE(USB_VID_LEADTEK, USB_PID_WINFAST_DTV2000DS_PLUS,
+ &rtl28xxu_props, "Leadtek WinFast DTV2000DS Plus", RC_MAP_LEADTEK_Y04G0051) },
{ DVB_USB_DEVICE(USB_VID_TERRATEC, 0x00d3,
&rtl28xxu_props, "TerraTec Cinergy T Stick RC (Rev. 3)", NULL) },
{ DVB_USB_DEVICE(USB_VID_DEXATEK, 0x1102,
@@ -1754,6 +1758,8 @@
&rtl28xxu_props, "Sveon STV21", NULL) },
{ DVB_USB_DEVICE(USB_VID_KWORLD_2, USB_PID_SVEON_STV27,
&rtl28xxu_props, "Sveon STV27", NULL) },
+ { DVB_USB_DEVICE(USB_VID_KWORLD_2, USB_PID_TURBOX_DTT_2000,
+ &rtl28xxu_props, "TURBO-X Pure TV Tuner DTT-2000", NULL) },
/* RTL2832P devices: */
{ DVB_USB_DEVICE(USB_VID_HANFTEK, 0x0131,
diff --git a/drivers/media/usb/dvb-usb/Kconfig b/drivers/media/usb/dvb-usb/Kconfig
index 3364200..128eee6 100644
--- a/drivers/media/usb/dvb-usb/Kconfig
+++ b/drivers/media/usb/dvb-usb/Kconfig
@@ -278,9 +278,10 @@
select DVB_STV6110 if MEDIA_SUBDRV_AUTOSELECT
select DVB_STV0900 if MEDIA_SUBDRV_AUTOSELECT
select DVB_M88RS2000 if MEDIA_SUBDRV_AUTOSELECT
+ select DVB_M88DS3103 if MEDIA_SUBDRV_AUTOSELECT
help
- Say Y here to support the DvbWorld, TeVii, Prof DVB-S/S2 USB2.0
- receivers.
+ Say Y here to support the DvbWorld, TeVii, Prof, TechnoTrend
+ DVB-S/S2 USB2.0 receivers.
config DVB_USB_CINERGY_T2
tristate "Terratec CinergyT2/qanu USB 2.0 DVB-T receiver"
diff --git a/drivers/media/usb/dvb-usb/cxusb.c b/drivers/media/usb/dvb-usb/cxusb.c
index f327c49..ffc3704 100644
--- a/drivers/media/usb/dvb-usb/cxusb.c
+++ b/drivers/media/usb/dvb-usb/cxusb.c
@@ -1516,28 +1516,95 @@
dvb_usb_device_exit(intf);
}
-static struct usb_device_id cxusb_table [] = {
- { USB_DEVICE(USB_VID_MEDION, USB_PID_MEDION_MD95700) },
- { USB_DEVICE(USB_VID_DVICO, USB_PID_DVICO_BLUEBIRD_LG064F_COLD) },
- { USB_DEVICE(USB_VID_DVICO, USB_PID_DVICO_BLUEBIRD_LG064F_WARM) },
- { USB_DEVICE(USB_VID_DVICO, USB_PID_DVICO_BLUEBIRD_DUAL_1_COLD) },
- { USB_DEVICE(USB_VID_DVICO, USB_PID_DVICO_BLUEBIRD_DUAL_1_WARM) },
- { USB_DEVICE(USB_VID_DVICO, USB_PID_DVICO_BLUEBIRD_LGZ201_COLD) },
- { USB_DEVICE(USB_VID_DVICO, USB_PID_DVICO_BLUEBIRD_LGZ201_WARM) },
- { USB_DEVICE(USB_VID_DVICO, USB_PID_DVICO_BLUEBIRD_TH7579_COLD) },
- { USB_DEVICE(USB_VID_DVICO, USB_PID_DVICO_BLUEBIRD_TH7579_WARM) },
- { USB_DEVICE(USB_VID_DVICO, USB_PID_DIGITALNOW_BLUEBIRD_DUAL_1_COLD) },
- { USB_DEVICE(USB_VID_DVICO, USB_PID_DIGITALNOW_BLUEBIRD_DUAL_1_WARM) },
- { USB_DEVICE(USB_VID_DVICO, USB_PID_DVICO_BLUEBIRD_DUAL_2_COLD) },
- { USB_DEVICE(USB_VID_DVICO, USB_PID_DVICO_BLUEBIRD_DUAL_2_WARM) },
- { USB_DEVICE(USB_VID_DVICO, USB_PID_DVICO_BLUEBIRD_DUAL_4) },
- { USB_DEVICE(USB_VID_DVICO, USB_PID_DVICO_BLUEBIRD_DVB_T_NANO_2) },
- { USB_DEVICE(USB_VID_DVICO, USB_PID_DVICO_BLUEBIRD_DVB_T_NANO_2_NFW_WARM) },
- { USB_DEVICE(USB_VID_AVERMEDIA, USB_PID_AVERMEDIA_VOLAR_A868R) },
- { USB_DEVICE(USB_VID_DVICO, USB_PID_DVICO_BLUEBIRD_DUAL_4_REV_2) },
- { USB_DEVICE(USB_VID_CONEXANT, USB_PID_CONEXANT_D680_DMB) },
- { USB_DEVICE(USB_VID_CONEXANT, USB_PID_MYGICA_D689) },
- { USB_DEVICE(USB_VID_CONEXANT, USB_PID_MYGICA_T230) },
+enum cxusb_table_index {
+ MEDION_MD95700,
+ DVICO_BLUEBIRD_LG064F_COLD,
+ DVICO_BLUEBIRD_LG064F_WARM,
+ DVICO_BLUEBIRD_DUAL_1_COLD,
+ DVICO_BLUEBIRD_DUAL_1_WARM,
+ DVICO_BLUEBIRD_LGZ201_COLD,
+ DVICO_BLUEBIRD_LGZ201_WARM,
+ DVICO_BLUEBIRD_TH7579_COLD,
+ DVICO_BLUEBIRD_TH7579_WARM,
+ DIGITALNOW_BLUEBIRD_DUAL_1_COLD,
+ DIGITALNOW_BLUEBIRD_DUAL_1_WARM,
+ DVICO_BLUEBIRD_DUAL_2_COLD,
+ DVICO_BLUEBIRD_DUAL_2_WARM,
+ DVICO_BLUEBIRD_DUAL_4,
+ DVICO_BLUEBIRD_DVB_T_NANO_2,
+ DVICO_BLUEBIRD_DVB_T_NANO_2_NFW_WARM,
+ AVERMEDIA_VOLAR_A868R,
+ DVICO_BLUEBIRD_DUAL_4_REV_2,
+ CONEXANT_D680_DMB,
+ MYGICA_D689,
+ MYGICA_T230,
+ NR__cxusb_table_index
+};
+
+static struct usb_device_id cxusb_table[NR__cxusb_table_index + 1] = {
+ [MEDION_MD95700] = {
+ USB_DEVICE(USB_VID_MEDION, USB_PID_MEDION_MD95700)
+ },
+ [DVICO_BLUEBIRD_LG064F_COLD] = {
+ USB_DEVICE(USB_VID_DVICO, USB_PID_DVICO_BLUEBIRD_LG064F_COLD)
+ },
+ [DVICO_BLUEBIRD_LG064F_WARM] = {
+ USB_DEVICE(USB_VID_DVICO, USB_PID_DVICO_BLUEBIRD_LG064F_WARM)
+ },
+ [DVICO_BLUEBIRD_DUAL_1_COLD] = {
+ USB_DEVICE(USB_VID_DVICO, USB_PID_DVICO_BLUEBIRD_DUAL_1_COLD)
+ },
+ [DVICO_BLUEBIRD_DUAL_1_WARM] = {
+ USB_DEVICE(USB_VID_DVICO, USB_PID_DVICO_BLUEBIRD_DUAL_1_WARM)
+ },
+ [DVICO_BLUEBIRD_LGZ201_COLD] = {
+ USB_DEVICE(USB_VID_DVICO, USB_PID_DVICO_BLUEBIRD_LGZ201_COLD)
+ },
+ [DVICO_BLUEBIRD_LGZ201_WARM] = {
+ USB_DEVICE(USB_VID_DVICO, USB_PID_DVICO_BLUEBIRD_LGZ201_WARM)
+ },
+ [DVICO_BLUEBIRD_TH7579_COLD] = {
+ USB_DEVICE(USB_VID_DVICO, USB_PID_DVICO_BLUEBIRD_TH7579_COLD)
+ },
+ [DVICO_BLUEBIRD_TH7579_WARM] = {
+ USB_DEVICE(USB_VID_DVICO, USB_PID_DVICO_BLUEBIRD_TH7579_WARM)
+ },
+ [DIGITALNOW_BLUEBIRD_DUAL_1_COLD] = {
+ USB_DEVICE(USB_VID_DVICO, USB_PID_DIGITALNOW_BLUEBIRD_DUAL_1_COLD)
+ },
+ [DIGITALNOW_BLUEBIRD_DUAL_1_WARM] = {
+ USB_DEVICE(USB_VID_DVICO, USB_PID_DIGITALNOW_BLUEBIRD_DUAL_1_WARM)
+ },
+ [DVICO_BLUEBIRD_DUAL_2_COLD] = {
+ USB_DEVICE(USB_VID_DVICO, USB_PID_DVICO_BLUEBIRD_DUAL_2_COLD)
+ },
+ [DVICO_BLUEBIRD_DUAL_2_WARM] = {
+ USB_DEVICE(USB_VID_DVICO, USB_PID_DVICO_BLUEBIRD_DUAL_2_WARM)
+ },
+ [DVICO_BLUEBIRD_DUAL_4] = {
+ USB_DEVICE(USB_VID_DVICO, USB_PID_DVICO_BLUEBIRD_DUAL_4)
+ },
+ [DVICO_BLUEBIRD_DVB_T_NANO_2] = {
+ USB_DEVICE(USB_VID_DVICO, USB_PID_DVICO_BLUEBIRD_DVB_T_NANO_2)
+ },
+ [DVICO_BLUEBIRD_DVB_T_NANO_2_NFW_WARM] = {
+ USB_DEVICE(USB_VID_DVICO, USB_PID_DVICO_BLUEBIRD_DVB_T_NANO_2_NFW_WARM)
+ },
+ [AVERMEDIA_VOLAR_A868R] = {
+ USB_DEVICE(USB_VID_AVERMEDIA, USB_PID_AVERMEDIA_VOLAR_A868R)
+ },
+ [DVICO_BLUEBIRD_DUAL_4_REV_2] = {
+ USB_DEVICE(USB_VID_DVICO, USB_PID_DVICO_BLUEBIRD_DUAL_4_REV_2)
+ },
+ [CONEXANT_D680_DMB] = {
+ USB_DEVICE(USB_VID_CONEXANT, USB_PID_CONEXANT_D680_DMB)
+ },
+ [MYGICA_D689] = {
+ USB_DEVICE(USB_VID_CONEXANT, USB_PID_MYGICA_D689)
+ },
+ [MYGICA_T230] = {
+ USB_DEVICE(USB_VID_CONEXANT, USB_PID_MYGICA_T230)
+ },
{} /* Terminating entry */
};
MODULE_DEVICE_TABLE (usb, cxusb_table);
@@ -1581,7 +1648,7 @@
.devices = {
{ "Medion MD95700 (MDUSBTV-HYBRID)",
{ NULL },
- { &cxusb_table[0], NULL },
+ { &cxusb_table[MEDION_MD95700], NULL },
},
}
};
@@ -1637,8 +1704,8 @@
.num_device_descs = 1,
.devices = {
{ "DViCO FusionHDTV5 USB Gold",
- { &cxusb_table[1], NULL },
- { &cxusb_table[2], NULL },
+ { &cxusb_table[DVICO_BLUEBIRD_LG064F_COLD], NULL },
+ { &cxusb_table[DVICO_BLUEBIRD_LG064F_WARM], NULL },
},
}
};
@@ -1693,16 +1760,16 @@
.num_device_descs = 3,
.devices = {
{ "DViCO FusionHDTV DVB-T Dual USB",
- { &cxusb_table[3], NULL },
- { &cxusb_table[4], NULL },
+ { &cxusb_table[DVICO_BLUEBIRD_DUAL_1_COLD], NULL },
+ { &cxusb_table[DVICO_BLUEBIRD_DUAL_1_WARM], NULL },
},
{ "DigitalNow DVB-T Dual USB",
- { &cxusb_table[9], NULL },
- { &cxusb_table[10], NULL },
+ { &cxusb_table[DIGITALNOW_BLUEBIRD_DUAL_1_COLD], NULL },
+ { &cxusb_table[DIGITALNOW_BLUEBIRD_DUAL_1_WARM], NULL },
},
{ "DViCO FusionHDTV DVB-T Dual Digital 2",
- { &cxusb_table[11], NULL },
- { &cxusb_table[12], NULL },
+ { &cxusb_table[DVICO_BLUEBIRD_DUAL_2_COLD], NULL },
+ { &cxusb_table[DVICO_BLUEBIRD_DUAL_2_WARM], NULL },
},
}
};
@@ -1756,8 +1823,8 @@
.num_device_descs = 1,
.devices = {
{ "DViCO FusionHDTV DVB-T USB (LGZ201)",
- { &cxusb_table[5], NULL },
- { &cxusb_table[6], NULL },
+ { &cxusb_table[DVICO_BLUEBIRD_LGZ201_COLD], NULL },
+ { &cxusb_table[DVICO_BLUEBIRD_LGZ201_WARM], NULL },
},
}
};
@@ -1812,8 +1879,8 @@
.num_device_descs = 1,
.devices = {
{ "DViCO FusionHDTV DVB-T USB (TH7579)",
- { &cxusb_table[7], NULL },
- { &cxusb_table[8], NULL },
+ { &cxusb_table[DVICO_BLUEBIRD_TH7579_COLD], NULL },
+ { &cxusb_table[DVICO_BLUEBIRD_TH7579_WARM], NULL },
},
}
};
@@ -1865,7 +1932,7 @@
.devices = {
{ "DViCO FusionHDTV DVB-T Dual Digital 4",
{ NULL },
- { &cxusb_table[13], NULL },
+ { &cxusb_table[DVICO_BLUEBIRD_DUAL_4], NULL },
},
}
};
@@ -1918,7 +1985,7 @@
.devices = {
{ "DViCO FusionHDTV DVB-T NANO2",
{ NULL },
- { &cxusb_table[14], NULL },
+ { &cxusb_table[DVICO_BLUEBIRD_DVB_T_NANO_2], NULL },
},
}
};
@@ -1972,8 +2039,8 @@
.num_device_descs = 1,
.devices = {
{ "DViCO FusionHDTV DVB-T NANO2 w/o firmware",
- { &cxusb_table[14], NULL },
- { &cxusb_table[15], NULL },
+ { &cxusb_table[DVICO_BLUEBIRD_DVB_T_NANO_2], NULL },
+ { &cxusb_table[DVICO_BLUEBIRD_DVB_T_NANO_2_NFW_WARM], NULL },
},
}
};
@@ -2017,7 +2084,7 @@
.devices = {
{ "AVerMedia AVerTVHD Volar (A868R)",
{ NULL },
- { &cxusb_table[16], NULL },
+ { &cxusb_table[AVERMEDIA_VOLAR_A868R], NULL },
},
}
};
@@ -2071,7 +2138,7 @@
.devices = {
{ "DViCO FusionHDTV DVB-T Dual Digital 4 (rev 2)",
{ NULL },
- { &cxusb_table[17], NULL },
+ { &cxusb_table[DVICO_BLUEBIRD_DUAL_4_REV_2], NULL },
},
}
};
@@ -2125,7 +2192,7 @@
{
"Conexant DMB-TH Stick",
{ NULL },
- { &cxusb_table[18], NULL },
+ { &cxusb_table[CONEXANT_D680_DMB], NULL },
},
}
};
@@ -2179,7 +2246,7 @@
{
"Mygica D689 DMB-TH",
{ NULL },
- { &cxusb_table[19], NULL },
+ { &cxusb_table[MYGICA_D689], NULL },
},
}
};
@@ -2232,7 +2299,7 @@
{
"Mygica T230 DVB-T/T2/C",
{ NULL },
- { &cxusb_table[20], NULL },
+ { &cxusb_table[MYGICA_T230], NULL },
},
}
};
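The cxusb rework replaces bare numeric indices into the USB ID table with an enum and designated initializers, so device descriptions refer to entries by name and new IDs can be added without renumbering every reference. A minimal sketch with hypothetical FOO_* names and made-up vendor/product IDs:

#include <linux/module.h>
#include <linux/usb.h>

enum foo_table_index {
	FOO_BOARD_A,
	FOO_BOARD_B,
	NR__foo_table_index
};

static struct usb_device_id foo_table[NR__foo_table_index + 1] = {
	[FOO_BOARD_A] = { USB_DEVICE(0x1234, 0x0001) },
	[FOO_BOARD_B] = { USB_DEVICE(0x1234, 0x0002) },
	{}	/* terminating entry */
};
MODULE_DEVICE_TABLE(usb, foo_table);

/* a .devices[] entry can then name its IDs instead of counting them:
 *	{ "Foo board B", { &foo_table[FOO_BOARD_B], NULL }, { NULL } },
 */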
diff --git a/drivers/media/usb/dvb-usb/dib0700_core.c b/drivers/media/usb/dvb-usb/dib0700_core.c
index 50856db..2b40393 100644
--- a/drivers/media/usb/dvb-usb/dib0700_core.c
+++ b/drivers/media/usb/dvb-usb/dib0700_core.c
@@ -651,9 +651,6 @@
return ret;
}
-/* Number of keypresses to ignore before start repeating */
-#define RC_REPEAT_DELAY_V1_20 10
-
/* This is the structure of the RC response packet starting in firmware 1.20 */
struct dib0700_rc_response {
u8 report_id;
diff --git a/drivers/media/usb/dvb-usb/dib0700_devices.c b/drivers/media/usb/dvb-usb/dib0700_devices.c
index e1757b8..d7d55a2 100644
--- a/drivers/media/usb/dvb-usb/dib0700_devices.c
+++ b/drivers/media/usb/dvb-usb/dib0700_devices.c
@@ -510,9 +510,6 @@
static u8 rc_request[] = { REQUEST_POLL_RC, 0 };
-/* Number of keypresses to ignore before start repeating */
-#define RC_REPEAT_DELAY 6
-
/*
* This function is used only when firmware is < 1.20 version. Newer
* firmwares use bulk mode, with functions implemented at dib0700_core,
diff --git a/drivers/media/usb/dvb-usb/dvb-usb-dvb.c b/drivers/media/usb/dvb-usb/dvb-usb-dvb.c
index 719413b..8a260c8 100644
--- a/drivers/media/usb/dvb-usb/dvb-usb-dvb.c
+++ b/drivers/media/usb/dvb-usb/dvb-usb-dvb.c
@@ -84,14 +84,61 @@
static int dvb_usb_start_feed(struct dvb_demux_feed *dvbdmxfeed)
{
- deb_ts("start pid: 0x%04x, feedtype: %d\n", dvbdmxfeed->pid,dvbdmxfeed->type);
- return dvb_usb_ctrl_feed(dvbdmxfeed,1);
+ deb_ts("start pid: 0x%04x, feedtype: %d\n", dvbdmxfeed->pid,
+ dvbdmxfeed->type);
+ return dvb_usb_ctrl_feed(dvbdmxfeed, 1);
}
static int dvb_usb_stop_feed(struct dvb_demux_feed *dvbdmxfeed)
{
deb_ts("stop pid: 0x%04x, feedtype: %d\n", dvbdmxfeed->pid, dvbdmxfeed->type);
- return dvb_usb_ctrl_feed(dvbdmxfeed,0);
+ return dvb_usb_ctrl_feed(dvbdmxfeed, 0);
+}
+
+static void dvb_usb_media_device_register(struct dvb_usb_adapter *adap)
+{
+#ifdef CONFIG_MEDIA_CONTROLLER_DVB
+ struct media_device *mdev;
+ struct dvb_usb_device *d = adap->dev;
+ struct usb_device *udev = d->udev;
+ int ret;
+
+ mdev = kzalloc(sizeof(*mdev), GFP_KERNEL);
+ if (!mdev)
+ return;
+
+ mdev->dev = &udev->dev;
+ strlcpy(mdev->model, d->desc->name, sizeof(mdev->model));
+ if (udev->serial)
+ strlcpy(mdev->serial, udev->serial, sizeof(mdev->serial));
+ strcpy(mdev->bus_info, udev->devpath);
+ mdev->hw_revision = le16_to_cpu(udev->descriptor.bcdDevice);
+ mdev->driver_version = LINUX_VERSION_CODE;
+
+ ret = media_device_register(mdev);
+ if (ret) {
+ dev_err(&d->udev->dev,
+ "Couldn't create a media device. Error: %d\n",
+ ret);
+ kfree(mdev);
+ return;
+ }
+ dvb_register_media_controller(&adap->dvb_adap, mdev);
+
+ dev_info(&d->udev->dev, "media controller created\n");
+#endif
+}
+
+static void dvb_usb_media_device_unregister(struct dvb_usb_adapter *adap)
+{
+#ifdef CONFIG_MEDIA_CONTROLLER_DVB
+ if (!adap->dvb_adap.mdev)
+ return;
+
+ media_device_unregister(adap->dvb_adap.mdev);
+ kfree(adap->dvb_adap.mdev);
+ adap->dvb_adap.mdev = NULL;
+#endif
}
int dvb_usb_adapter_dvb_init(struct dvb_usb_adapter *adap, short *adapter_nums)
@@ -107,9 +154,11 @@
}
adap->dvb_adap.priv = adap;
+ dvb_usb_media_device_register(adap);
+
if (adap->dev->props.read_mac_address) {
- if (adap->dev->props.read_mac_address(adap->dev,adap->dvb_adap.proposed_mac) == 0)
- info("MAC address: %pM",adap->dvb_adap.proposed_mac);
+ if (adap->dev->props.read_mac_address(adap->dev, adap->dvb_adap.proposed_mac) == 0)
+ info("MAC address: %pM", adap->dvb_adap.proposed_mac);
else
err("MAC address reading failed.");
}
@@ -128,7 +177,7 @@
adap->demux.stop_feed = dvb_usb_stop_feed;
adap->demux.write_to_decoder = NULL;
if ((ret = dvb_dmx_init(&adap->demux)) < 0) {
- err("dvb_dmx_init failed: error %d",ret);
+ err("dvb_dmx_init failed: error %d", ret);
goto err_dmx;
}
@@ -136,13 +185,13 @@
adap->dmxdev.demux = &adap->demux.dmx;
adap->dmxdev.capabilities = 0;
if ((ret = dvb_dmxdev_init(&adap->dmxdev, &adap->dvb_adap)) < 0) {
- err("dvb_dmxdev_init failed: error %d",ret);
+ err("dvb_dmxdev_init failed: error %d", ret);
goto err_dmx_dev;
}
if ((ret = dvb_net_init(&adap->dvb_adap, &adap->dvb_net,
&adap->demux.dmx)) < 0) {
- err("dvb_net_init failed: error %d",ret);
+ err("dvb_net_init failed: error %d", ret);
goto err_net_init;
}
@@ -154,6 +203,7 @@
err_dmx_dev:
dvb_dmx_release(&adap->demux);
err_dmx:
+ dvb_usb_media_device_unregister(adap);
dvb_unregister_adapter(&adap->dvb_adap);
err:
return ret;
@@ -167,6 +217,7 @@
adap->demux.dmx.close(&adap->demux.dmx);
dvb_dmxdev_release(&adap->dmxdev);
dvb_dmx_release(&adap->demux);
+ dvb_usb_media_device_unregister(adap);
dvb_unregister_adapter(&adap->dvb_adap);
adap->state &= ~DVB_USB_ADAP_STATE_DVB;
}
@@ -268,6 +319,8 @@
adap->num_frontends_initialized++;
}
+ dvb_create_media_graph(&adap->dvb_adap);
+
return 0;
}
diff --git a/drivers/media/usb/dvb-usb/dw2102.c b/drivers/media/usb/dvb-usb/dw2102.c
index 1a3df10..f1f357f 100644
--- a/drivers/media/usb/dvb-usb/dw2102.c
+++ b/drivers/media/usb/dvb-usb/dw2102.c
@@ -2,7 +2,8 @@
* DVBWorld DVB-S 2101, 2102, DVB-S2 2104, DVB-C 3101,
* TeVii S600, S630, S650, S660, S480, S421, S632
* Prof 1100, 7500,
- * Geniatech SU3000, T220 Cards
+ * Geniatech SU3000, T220,
+ * TechnoTrend S2-4600 Cards
* Copyright (C) 2008-2012 Igor M. Liplianin (liplianin@me.by)
*
* This program is free software; you can redistribute it and/or modify it
@@ -31,6 +32,8 @@
#include "m88rs2000.h"
#include "tda18271.h"
#include "cxd2820r.h"
+#include "m88ds3103.h"
+#include "ts2020.h"
/* Max transfer size done by I2C transfer functions */
#define MAX_XFER_SIZE 64
@@ -112,11 +115,9 @@
"Please see linux/Documentation/dvb/ for more details " \
"on firmware-problems."
-struct su3000_state {
+struct dw2102_state {
u8 initialized;
-};
-
-struct s6x0_state {
+ struct i2c_client *i2c_client_tuner;
int (*old_set_voltage)(struct dvb_frontend *f, fe_sec_voltage_t v);
};
@@ -887,7 +888,7 @@
static int su3000_power_ctrl(struct dvb_usb_device *d, int i)
{
- struct su3000_state *state = (struct su3000_state *)d->priv;
+ struct dw2102_state *state = (struct dw2102_state *)d->priv;
u8 obuf[] = {0xde, 0};
info("%s: %d, initialized %d\n", __func__, i, state->initialized);
@@ -973,7 +974,7 @@
{
struct dvb_usb_adapter *d =
(struct dvb_usb_adapter *)(fe->dvb->priv);
- struct s6x0_state *st = (struct s6x0_state *)d->dev->priv;
+ struct dw2102_state *st = (struct dw2102_state *)d->dev->priv;
dw210x_set_voltage(fe, voltage);
if (st->old_set_voltage)
@@ -1117,6 +1118,22 @@
.gate = TDA18271_GATE_DIGITAL,
};
+static const struct m88ds3103_config tt_s2_4600_m88ds3103_config = {
+ .i2c_addr = 0x68,
+ .clock = 27000000,
+ .i2c_wr_max = 33,
+ .ts_mode = M88DS3103_TS_CI,
+ .ts_clk = 16000,
+ .ts_clk_pol = 0,
+ .spec_inv = 0,
+ .agc_inv = 0,
+ .clock_out = M88DS3103_CLOCK_OUT_ENABLED,
+ .envelope_mode = 0,
+ .agc = 0x99,
+ .lnb_hv_pol = 1,
+ .lnb_en_pol = 0,
+};
+
static u8 m88rs2000_inittab[] = {
DEMOD_WRITE, 0x9a, 0x30,
DEMOD_WRITE, 0x00, 0x01,
@@ -1295,7 +1312,7 @@
static int ds3000_frontend_attach(struct dvb_usb_adapter *d)
{
- struct s6x0_state *st = (struct s6x0_state *)d->dev->priv;
+ struct dw2102_state *st = d->dev->priv;
u8 obuf[] = {7, 1};
d->fe_adap[0].fe = dvb_attach(ds3000_attach, &s660_ds3000_config,
@@ -1461,6 +1478,84 @@
return -EIO;
}
+static int tt_s2_4600_frontend_attach(struct dvb_usb_adapter *adap)
+{
+ struct dvb_usb_device *d = adap->dev;
+ struct dw2102_state *state = d->priv;
+ u8 obuf[3] = { 0xe, 0x80, 0 };
+ u8 ibuf[] = { 0 };
+ struct i2c_adapter *i2c_adapter;
+ struct i2c_client *client;
+ struct i2c_board_info info;
+ struct ts2020_config ts2020_config = {};
+
+ if (dvb_usb_generic_rw(d, obuf, 3, ibuf, 1, 0) < 0)
+ err("command 0x0e transfer failed.");
+
+ obuf[0] = 0xe;
+ obuf[1] = 0x02;
+ obuf[2] = 1;
+
+ if (dvb_usb_generic_rw(d, obuf, 3, ibuf, 1, 0) < 0)
+ err("command 0x0e transfer failed.");
+ msleep(300);
+
+ obuf[0] = 0xe;
+ obuf[1] = 0x83;
+ obuf[2] = 0;
+
+ if (dvb_usb_generic_rw(d, obuf, 3, ibuf, 1, 0) < 0)
+ err("command 0x0e transfer failed.");
+
+ obuf[0] = 0xe;
+ obuf[1] = 0x83;
+ obuf[2] = 1;
+
+ if (dvb_usb_generic_rw(d, obuf, 3, ibuf, 1, 0) < 0)
+ err("command 0x0e transfer failed.");
+
+ obuf[0] = 0x51;
+
+ if (dvb_usb_generic_rw(d, obuf, 1, ibuf, 1, 0) < 0)
+ err("command 0x51 transfer failed.");
+
+ memset(&info, 0, sizeof(struct i2c_board_info));
+
+ adap->fe_adap[0].fe = dvb_attach(m88ds3103_attach,
+ &tt_s2_4600_m88ds3103_config,
+ &d->i2c_adap,
+ &i2c_adapter);
+ if (adap->fe_adap[0].fe == NULL)
+ return -ENODEV;
+
+ /* attach tuner */
+ ts2020_config.fe = adap->fe_adap[0].fe;
+ strlcpy(info.type, "ts2022", I2C_NAME_SIZE);
+ info.addr = 0x60;
+ info.platform_data = &ts2020_config;
+ request_module("ts2020");
+ client = i2c_new_device(i2c_adapter, &info);
+
+ if (client == NULL || client->dev.driver == NULL) {
+ dvb_frontend_detach(adap->fe_adap[0].fe);
+ return -ENODEV;
+ }
+
+ if (!try_module_get(client->dev.driver->owner)) {
+ i2c_unregister_device(client);
+ dvb_frontend_detach(adap->fe_adap[0].fe);
+ return -ENODEV;
+ }
+
+ /* delegate signal strength measurement to tuner */
+ adap->fe_adap[0].fe->ops.read_signal_strength =
+ adap->fe_adap[0].fe->ops.tuner_ops.get_rf_strength;
+
+ state->i2c_client_tuner = client;
+
+ return 0;
+}
+
static int dw2102_tuner_attach(struct dvb_usb_adapter *adap)
{
dvb_attach(dvb_pll_attach, adap->fe_adap[0].fe, 0x60,
@@ -1561,6 +1656,7 @@
TERRATEC_CINERGY_S2_R2,
GOTVIEW_SAT_HD,
GENIATECH_T220,
+ TECHNOTREND_S2_4600,
};
static struct usb_device_id dw2102_table[] = {
@@ -1584,6 +1680,8 @@
[TERRATEC_CINERGY_S2_R2] = {USB_DEVICE(USB_VID_TERRATEC, 0x00b0)},
[GOTVIEW_SAT_HD] = {USB_DEVICE(0x1FE1, USB_PID_GOTVIEW_SAT_HD)},
[GENIATECH_T220] = {USB_DEVICE(0x1f4d, 0xD220)},
+ [TECHNOTREND_S2_4600] = {USB_DEVICE(USB_VID_TECHNOTREND,
+ USB_PID_TECHNOTREND_CONNECT_S2_4600)},
{ }
};
@@ -1857,7 +1955,7 @@
static struct dvb_usb_device_properties s6x0_properties = {
.caps = DVB_USB_IS_AN_I2C_ADAPTER,
.usb_ctrl = DEVICE_SPECIFIC,
- .size_of_priv = sizeof(struct s6x0_state),
+ .size_of_priv = sizeof(struct dw2102_state),
.firmware = S630_FIRMWARE,
.no_reconnect = 1,
@@ -1950,7 +2048,7 @@
static struct dvb_usb_device_properties su3000_properties = {
.caps = DVB_USB_IS_AN_I2C_ADAPTER,
.usb_ctrl = DEVICE_SPECIFIC,
- .size_of_priv = sizeof(struct su3000_state),
+ .size_of_priv = sizeof(struct dw2102_state),
.power_ctrl = su3000_power_ctrl,
.num_adapters = 1,
.identify_state = su3000_identify_state,
@@ -2015,7 +2113,7 @@
static struct dvb_usb_device_properties t220_properties = {
.caps = DVB_USB_IS_AN_I2C_ADAPTER,
.usb_ctrl = DEVICE_SPECIFIC,
- .size_of_priv = sizeof(struct su3000_state),
+ .size_of_priv = sizeof(struct dw2102_state),
.power_ctrl = su3000_power_ctrl,
.num_adapters = 1,
.identify_state = su3000_identify_state,
@@ -2061,6 +2159,55 @@
}
};
+static struct dvb_usb_device_properties tt_s2_4600_properties = {
+ .caps = DVB_USB_IS_AN_I2C_ADAPTER,
+ .usb_ctrl = DEVICE_SPECIFIC,
+ .size_of_priv = sizeof(struct dw2102_state),
+ .power_ctrl = su3000_power_ctrl,
+ .num_adapters = 1,
+ .identify_state = su3000_identify_state,
+ .i2c_algo = &su3000_i2c_algo,
+
+ .rc.core = {
+ .rc_interval = 250,
+ .rc_codes = RC_MAP_TT_1500,
+ .module_name = "dw2102",
+ .allowed_protos = RC_BIT_RC5,
+ .rc_query = su3000_rc_query,
+ },
+
+ .read_mac_address = su3000_read_mac_address,
+
+ .generic_bulk_ctrl_endpoint = 0x01,
+
+ .adapter = {
+ {
+ .num_frontends = 1,
+ .fe = {{
+ .streaming_ctrl = su3000_streaming_ctrl,
+ .frontend_attach = tt_s2_4600_frontend_attach,
+ .stream = {
+ .type = USB_BULK,
+ .count = 8,
+ .endpoint = 0x82,
+ .u = {
+ .bulk = {
+ .buffersize = 4096,
+ }
+ }
+ }
+ } },
+ }
+ },
+ .num_device_descs = 1,
+ .devices = {
+ { "TechnoTrend TT-connect S2-4600",
+ { &dw2102_table[TECHNOTREND_S2_4600], NULL },
+ { NULL },
+ },
+ }
+};
+
static int dw2102_probe(struct usb_interface *intf,
const struct usb_device_id *id)
{
@@ -2135,16 +2282,34 @@
0 == dvb_usb_device_init(intf, &su3000_properties,
THIS_MODULE, NULL, adapter_nr) ||
0 == dvb_usb_device_init(intf, &t220_properties,
+ THIS_MODULE, NULL, adapter_nr) ||
+ 0 == dvb_usb_device_init(intf, &tt_s2_4600_properties,
THIS_MODULE, NULL, adapter_nr))
return 0;
return -ENODEV;
}
+static void dw2102_disconnect(struct usb_interface *intf)
+{
+ struct dvb_usb_device *d = usb_get_intfdata(intf);
+ struct dw2102_state *st = (struct dw2102_state *)d->priv;
+ struct i2c_client *client;
+
+ /* remove I2C client for tuner */
+ client = st->i2c_client_tuner;
+ if (client) {
+ module_put(client->dev.driver->owner);
+ i2c_unregister_device(client);
+ }
+
+ dvb_usb_device_exit(intf);
+}
+
static struct usb_driver dw2102_driver = {
.name = "dw2102",
.probe = dw2102_probe,
- .disconnect = dvb_usb_device_exit,
+ .disconnect = dw2102_disconnect,
.id_table = dw2102_table,
};
@@ -2155,7 +2320,8 @@
" DVB-C 3101 USB2.0,"
" TeVii S600, S630, S650, S660, S480, S421, S632"
" Prof 1100, 7500 USB2.0,"
- " Geniatech SU3000, T220 devices");
+ " Geniatech SU3000, T220,"
+ " TechnoTrend S2-4600 devices");
MODULE_VERSION("0.1");
MODULE_LICENSE("GPL");
MODULE_FIRMWARE(DW2101_FIRMWARE);
diff --git a/drivers/media/usb/em28xx/Kconfig b/drivers/media/usb/em28xx/Kconfig
index f5d7198..e382210 100644
--- a/drivers/media/usb/em28xx/Kconfig
+++ b/drivers/media/usb/em28xx/Kconfig
@@ -55,7 +55,7 @@
select MEDIA_TUNER_TDA18271 if MEDIA_SUBDRV_AUTOSELECT
select MEDIA_TUNER_TDA18212 if MEDIA_SUBDRV_AUTOSELECT
select DVB_M88DS3103 if MEDIA_SUBDRV_AUTOSELECT
- select MEDIA_TUNER_M88TS2022 if MEDIA_SUBDRV_AUTOSELECT
+ select DVB_TS2020 if MEDIA_SUBDRV_AUTOSELECT
select DVB_DRX39XYJ if MEDIA_SUBDRV_AUTOSELECT
select DVB_SI2168 if MEDIA_SUBDRV_AUTOSELECT
select MEDIA_TUNER_SI2157 if MEDIA_SUBDRV_AUTOSELECT
diff --git a/drivers/media/usb/em28xx/em28xx-camera.c b/drivers/media/usb/em28xx/em28xx-camera.c
index 7be661f..a4b22c2 100644
--- a/drivers/media/usb/em28xx/em28xx-camera.c
+++ b/drivers/media/usb/em28xx/em28xx-camera.c
@@ -330,7 +330,7 @@
v4l2_clk_name_i2c(clk_name, sizeof(clk_name),
i2c_adapter_id(adap), client->addr);
- v4l2->clk = v4l2_clk_register_fixed(clk_name, "mclk", -EINVAL);
+ v4l2->clk = v4l2_clk_register_fixed(clk_name, -EINVAL);
if (IS_ERR(v4l2->clk))
return PTR_ERR(v4l2->clk);
diff --git a/drivers/media/usb/em28xx/em28xx-cards.c b/drivers/media/usb/em28xx/em28xx-cards.c
index d9704e6..3940046 100644
--- a/drivers/media/usb/em28xx/em28xx-cards.c
+++ b/drivers/media/usb/em28xx/em28xx-cards.c
@@ -1157,6 +1157,15 @@
.i2c_speed = EM28XX_I2C_CLK_WAIT_ENABLE |
EM28XX_I2C_FREQ_400_KHZ,
},
+ [EM2884_BOARD_ELGATO_EYETV_HYBRID_2008] = {
+ .name = "Elgato EyeTV Hybrid 2008 INT",
+ .has_dvb = 1,
+ .ir_codes = RC_MAP_NEC_TERRATEC_CINERGY_XS,
+ .tuner_type = TUNER_ABSENT,
+ .def_i2c_bus = 1,
+ .i2c_speed = EM28XX_I2C_CLK_WAIT_ENABLE |
+ EM28XX_I2C_FREQ_400_KHZ,
+ },
[EM2880_BOARD_HAUPPAUGE_WINTV_HVR_900] = {
.name = "Hauppauge WinTV HVR 900",
.tda9887_conf = TDA9887_PRESENT,
@@ -2378,8 +2387,10 @@
.driver_info = EM2860_BOARD_TERRATEC_GRABBY },
{ USB_DEVICE(0x0ccd, 0x00b2),
.driver_info = EM2884_BOARD_CINERGY_HTC_STICK },
+ { USB_DEVICE(0x0fd9, 0x0018),
+ .driver_info = EM2884_BOARD_ELGATO_EYETV_HYBRID_2008 },
{ USB_DEVICE(0x0fd9, 0x0033),
- .driver_info = EM2860_BOARD_ELGATO_VIDEO_CAPTURE},
+ .driver_info = EM2860_BOARD_ELGATO_VIDEO_CAPTURE },
{ USB_DEVICE(0x185b, 0x2870),
.driver_info = EM2870_BOARD_COMPRO_VIDEOMATE },
{ USB_DEVICE(0x185b, 0x2041),
diff --git a/drivers/media/usb/em28xx/em28xx-dvb.c b/drivers/media/usb/em28xx/em28xx-dvb.c
index aee70d4..a5b22c5 100644
--- a/drivers/media/usb/em28xx/em28xx-dvb.c
+++ b/drivers/media/usb/em28xx/em28xx-dvb.c
@@ -54,7 +54,7 @@
#include "qt1010.h"
#include "mb86a20s.h"
#include "m88ds3103.h"
-#include "m88ts2022.h"
+#include "ts2020.h"
#include "si2168.h"
#include "si2157.h"
@@ -1380,6 +1380,7 @@
}
}
break;
+ case EM2884_BOARD_ELGATO_EYETV_HYBRID_2008:
case EM2884_BOARD_CINERGY_HTC_STICK:
terratec_htc_stick_init(dev);
@@ -1491,8 +1492,7 @@
struct i2c_adapter *i2c_adapter;
struct i2c_client *client;
struct i2c_board_info info;
- struct m88ts2022_config m88ts2022_config = {
- .clock = 27000000,
+ struct ts2020_config ts2020_config = {
};
memset(&info, 0, sizeof(struct i2c_board_info));
@@ -1507,11 +1507,11 @@
}
/* attach tuner */
- m88ts2022_config.fe = dvb->fe[0];
- strlcpy(info.type, "m88ts2022", I2C_NAME_SIZE);
+ ts2020_config.fe = dvb->fe[0];
+ strlcpy(info.type, "ts2022", I2C_NAME_SIZE);
info.addr = 0x60;
- info.platform_data = &m88ts2022_config;
- request_module("m88ts2022");
+ info.platform_data = &ts2020_config;
+ request_module("ts2020");
client = i2c_new_device(i2c_adapter, &info);
if (client == NULL || client->dev.driver == NULL) {
dvb_frontend_detach(dvb->fe[0]);
diff --git a/drivers/media/usb/em28xx/em28xx-video.c b/drivers/media/usb/em28xx/em28xx-video.c
index 9ecf656..14eba9c 100644
--- a/drivers/media/usb/em28xx/em28xx-video.c
+++ b/drivers/media/usb/em28xx/em28xx-video.c
@@ -1472,7 +1472,7 @@
(EM28XX_VMUX_CABLE == INPUT(n)->type))
i->type = V4L2_INPUT_TYPE_TUNER;
- i->std = dev->v4l2->vdev->tvnorms;
+ i->std = dev->v4l2->vdev.tvnorms;
/* webcams do not have the STD API */
if (dev->board.is_webcam)
i->capabilities = 0;
@@ -1730,9 +1730,9 @@
cap->capabilities = cap->device_caps | V4L2_CAP_DEVICE_CAPS |
V4L2_CAP_READWRITE | V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING;
- if (v4l2->vbi_dev)
+ if (video_is_registered(&v4l2->vbi_dev))
cap->capabilities |= V4L2_CAP_VBI_CAPTURE;
- if (v4l2->radio_dev)
+ if (video_is_registered(&v4l2->radio_dev))
cap->capabilities |= V4L2_CAP_RADIO;
return 0;
}
@@ -1966,20 +1966,20 @@
em28xx_uninit_usb_xfer(dev, EM28XX_ANALOG_MODE);
- if (v4l2->radio_dev) {
+ if (video_is_registered(&v4l2->radio_dev)) {
em28xx_info("V4L2 device %s deregistered\n",
- video_device_node_name(v4l2->radio_dev));
- video_unregister_device(v4l2->radio_dev);
+ video_device_node_name(&v4l2->radio_dev));
+ video_unregister_device(&v4l2->radio_dev);
}
- if (v4l2->vbi_dev) {
+ if (video_is_registered(&v4l2->vbi_dev)) {
em28xx_info("V4L2 device %s deregistered\n",
- video_device_node_name(v4l2->vbi_dev));
- video_unregister_device(v4l2->vbi_dev);
+ video_device_node_name(&v4l2->vbi_dev));
+ video_unregister_device(&v4l2->vbi_dev);
}
- if (v4l2->vdev) {
+ if (video_is_registered(&v4l2->vdev)) {
em28xx_info("V4L2 device %s deregistered\n",
- video_device_node_name(v4l2->vdev));
- video_unregister_device(v4l2->vdev);
+ video_device_node_name(&v4l2->vdev));
+ video_unregister_device(&v4l2->vdev);
}
v4l2_ctrl_handler_free(&v4l2->ctrl_handler);
@@ -2127,7 +2127,7 @@
static const struct video_device em28xx_video_template = {
.fops = &em28xx_v4l_fops,
.ioctl_ops = &video_ioctl_ops,
- .release = video_device_release,
+ .release = video_device_release_empty,
.tvnorms = V4L2_STD_ALL,
};
@@ -2156,7 +2156,7 @@
static struct video_device em28xx_radio_template = {
.fops = &radio_fops,
.ioctl_ops = &radio_ioctl_ops,
- .release = video_device_release,
+ .release = video_device_release_empty,
};
/* I2C possible address to saa7115, tvp5150, msp3400, tvaudio */
@@ -2179,17 +2179,11 @@
/******************************** usb interface ******************************/
-static struct video_device
-*em28xx_vdev_init(struct em28xx *dev,
- const struct video_device *template,
- const char *type_name)
+static void em28xx_vdev_init(struct em28xx *dev,
+ struct video_device *vfd,
+ const struct video_device *template,
+ const char *type_name)
{
- struct video_device *vfd;
-
- vfd = video_device_alloc();
- if (NULL == vfd)
- return NULL;
-
*vfd = *template;
vfd->v4l2_dev = &dev->v4l2->v4l2_dev;
vfd->lock = &dev->lock;
@@ -2200,7 +2194,6 @@
dev->name, type_name);
video_set_drvdata(vfd, dev);
- return vfd;
}
static void em28xx_tuner_setup(struct em28xx *dev, unsigned short tuner_addr)
@@ -2491,38 +2484,33 @@
goto unregister_dev;
/* allocate and fill video video_device struct */
- v4l2->vdev = em28xx_vdev_init(dev, &em28xx_video_template, "video");
- if (!v4l2->vdev) {
- em28xx_errdev("cannot allocate video_device.\n");
- ret = -ENODEV;
- goto unregister_dev;
- }
+ em28xx_vdev_init(dev, &v4l2->vdev, &em28xx_video_template, "video");
mutex_init(&v4l2->vb_queue_lock);
mutex_init(&v4l2->vb_vbi_queue_lock);
- v4l2->vdev->queue = &v4l2->vb_vidq;
- v4l2->vdev->queue->lock = &v4l2->vb_queue_lock;
+ v4l2->vdev.queue = &v4l2->vb_vidq;
+ v4l2->vdev.queue->lock = &v4l2->vb_queue_lock;
/* disable inapplicable ioctls */
if (dev->board.is_webcam) {
- v4l2_disable_ioctl(v4l2->vdev, VIDIOC_QUERYSTD);
- v4l2_disable_ioctl(v4l2->vdev, VIDIOC_G_STD);
- v4l2_disable_ioctl(v4l2->vdev, VIDIOC_S_STD);
+ v4l2_disable_ioctl(&v4l2->vdev, VIDIOC_QUERYSTD);
+ v4l2_disable_ioctl(&v4l2->vdev, VIDIOC_G_STD);
+ v4l2_disable_ioctl(&v4l2->vdev, VIDIOC_S_STD);
} else {
- v4l2_disable_ioctl(v4l2->vdev, VIDIOC_S_PARM);
+ v4l2_disable_ioctl(&v4l2->vdev, VIDIOC_S_PARM);
}
if (dev->tuner_type == TUNER_ABSENT) {
- v4l2_disable_ioctl(v4l2->vdev, VIDIOC_G_TUNER);
- v4l2_disable_ioctl(v4l2->vdev, VIDIOC_S_TUNER);
- v4l2_disable_ioctl(v4l2->vdev, VIDIOC_G_FREQUENCY);
- v4l2_disable_ioctl(v4l2->vdev, VIDIOC_S_FREQUENCY);
+ v4l2_disable_ioctl(&v4l2->vdev, VIDIOC_G_TUNER);
+ v4l2_disable_ioctl(&v4l2->vdev, VIDIOC_S_TUNER);
+ v4l2_disable_ioctl(&v4l2->vdev, VIDIOC_G_FREQUENCY);
+ v4l2_disable_ioctl(&v4l2->vdev, VIDIOC_S_FREQUENCY);
}
if (dev->int_audio_type == EM28XX_INT_AUDIO_NONE) {
- v4l2_disable_ioctl(v4l2->vdev, VIDIOC_G_AUDIO);
- v4l2_disable_ioctl(v4l2->vdev, VIDIOC_S_AUDIO);
+ v4l2_disable_ioctl(&v4l2->vdev, VIDIOC_G_AUDIO);
+ v4l2_disable_ioctl(&v4l2->vdev, VIDIOC_S_AUDIO);
}
/* register v4l2 video video_device */
- ret = video_register_device(v4l2->vdev, VFL_TYPE_GRABBER,
+ ret = video_register_device(&v4l2->vdev, VFL_TYPE_GRABBER,
video_nr[dev->devno]);
if (ret) {
em28xx_errdev("unable to register video device (error=%i).\n",
@@ -2532,27 +2520,27 @@
/* Allocate and fill vbi video_device struct */
if (em28xx_vbi_supported(dev) == 1) {
- v4l2->vbi_dev = em28xx_vdev_init(dev, &em28xx_video_template,
- "vbi");
+ em28xx_vdev_init(dev, &v4l2->vbi_dev, &em28xx_video_template,
+ "vbi");
- v4l2->vbi_dev->queue = &v4l2->vb_vbiq;
- v4l2->vbi_dev->queue->lock = &v4l2->vb_vbi_queue_lock;
+ v4l2->vbi_dev.queue = &v4l2->vb_vbiq;
+ v4l2->vbi_dev.queue->lock = &v4l2->vb_vbi_queue_lock;
/* disable inapplicable ioctls */
- v4l2_disable_ioctl(v4l2->vbi_dev, VIDIOC_S_PARM);
+ v4l2_disable_ioctl(&v4l2->vbi_dev, VIDIOC_S_PARM);
if (dev->tuner_type == TUNER_ABSENT) {
- v4l2_disable_ioctl(v4l2->vbi_dev, VIDIOC_G_TUNER);
- v4l2_disable_ioctl(v4l2->vbi_dev, VIDIOC_S_TUNER);
- v4l2_disable_ioctl(v4l2->vbi_dev, VIDIOC_G_FREQUENCY);
- v4l2_disable_ioctl(v4l2->vbi_dev, VIDIOC_S_FREQUENCY);
+ v4l2_disable_ioctl(&v4l2->vbi_dev, VIDIOC_G_TUNER);
+ v4l2_disable_ioctl(&v4l2->vbi_dev, VIDIOC_S_TUNER);
+ v4l2_disable_ioctl(&v4l2->vbi_dev, VIDIOC_G_FREQUENCY);
+ v4l2_disable_ioctl(&v4l2->vbi_dev, VIDIOC_S_FREQUENCY);
}
if (dev->int_audio_type == EM28XX_INT_AUDIO_NONE) {
- v4l2_disable_ioctl(v4l2->vbi_dev, VIDIOC_G_AUDIO);
- v4l2_disable_ioctl(v4l2->vbi_dev, VIDIOC_S_AUDIO);
+ v4l2_disable_ioctl(&v4l2->vbi_dev, VIDIOC_G_AUDIO);
+ v4l2_disable_ioctl(&v4l2->vbi_dev, VIDIOC_S_AUDIO);
}
/* register v4l2 vbi video_device */
- ret = video_register_device(v4l2->vbi_dev, VFL_TYPE_VBI,
+ ret = video_register_device(&v4l2->vbi_dev, VFL_TYPE_VBI,
vbi_nr[dev->devno]);
if (ret < 0) {
em28xx_errdev("unable to register vbi device\n");
@@ -2561,29 +2549,24 @@
}
if (em28xx_boards[dev->model].radio.type == EM28XX_RADIO) {
- v4l2->radio_dev = em28xx_vdev_init(dev, &em28xx_radio_template,
- "radio");
- if (!v4l2->radio_dev) {
- em28xx_errdev("cannot allocate video_device.\n");
- ret = -ENODEV;
- goto unregister_dev;
- }
- ret = video_register_device(v4l2->radio_dev, VFL_TYPE_RADIO,
+ em28xx_vdev_init(dev, &v4l2->radio_dev, &em28xx_radio_template,
+ "radio");
+ ret = video_register_device(&v4l2->radio_dev, VFL_TYPE_RADIO,
radio_nr[dev->devno]);
if (ret < 0) {
em28xx_errdev("can't register radio device\n");
goto unregister_dev;
}
em28xx_info("Registered radio device as %s\n",
- video_device_node_name(v4l2->radio_dev));
+ video_device_node_name(&v4l2->radio_dev));
}
em28xx_info("V4L2 video device registered as %s\n",
- video_device_node_name(v4l2->vdev));
+ video_device_node_name(&v4l2->vdev));
- if (v4l2->vbi_dev)
+ if (video_is_registered(&v4l2->vbi_dev))
em28xx_info("V4L2 VBI device registered as %s\n",
- video_device_node_name(v4l2->vbi_dev));
+ video_device_node_name(&v4l2->vbi_dev));
/* Save some power by putting tuner to sleep */
v4l2_device_call_all(&v4l2->v4l2_dev, 0, core, s_power, 0);
diff --git a/drivers/media/usb/em28xx/em28xx.h b/drivers/media/usb/em28xx/em28xx.h
index 9c70753..e6559c6 100644
--- a/drivers/media/usb/em28xx/em28xx.h
+++ b/drivers/media/usb/em28xx/em28xx.h
@@ -143,6 +143,7 @@
#define EM28178_BOARD_PCTV_292E 94
#define EM2861_BOARD_LEADTEK_VC100 95
#define EM28178_BOARD_TERRATEC_T2_STICK_HD 96
+#define EM2884_BOARD_ELGATO_EYETV_HYBRID_2008 97
/* Limits minimum and default number of buffers */
#define EM28XX_MIN_BUF 4
@@ -512,9 +513,9 @@
struct v4l2_ctrl_handler ctrl_handler;
struct v4l2_clk *clk;
- struct video_device *vdev;
- struct video_device *vbi_dev;
- struct video_device *radio_dev;
+ struct video_device vdev;
+ struct video_device vbi_dev;
+ struct video_device radio_dev;
/* Videobuf2 */
struct vb2_queue vb_vidq;
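
This embedded-video_device layout is the pattern the series converges on (em28xx here, and hdpvr, tm6000, usbvision and uvcvideo further down): the struct is part of the driver state rather than a separately allocated object, so registration passes its address and the release callback must be the empty stub so the V4L2 core never tries to free it. A minimal, hypothetical sketch of the idiom, not taken from any of these drivers:

#include <linux/module.h>
#include <media/v4l2-dev.h>
#include <media/v4l2-device.h>
#include <media/v4l2-fh.h>
#include <media/v4l2-ioctl.h>

struct foo_dev {
	struct v4l2_device v4l2_dev;
	struct video_device vdev;		/* embedded, not a pointer */
};

static const struct v4l2_file_operations foo_fops = {
	.owner		= THIS_MODULE,
	.open		= v4l2_fh_open,
	.release	= v4l2_fh_release,
	.unlocked_ioctl	= video_ioctl2,
};

static const struct video_device foo_vdev_template = {
	.name		= "foo",
	.fops		= &foo_fops,
	/* never kfree() an embedded video_device */
	.release	= video_device_release_empty,
};

static int foo_register_vdev(struct foo_dev *dev)
{
	dev->vdev = foo_vdev_template;		/* copy the template */
	dev->vdev.v4l2_dev = &dev->v4l2_dev;
	video_set_drvdata(&dev->vdev, dev);

	return video_register_device(&dev->vdev, VFL_TYPE_GRABBER, -1);
}

static void foo_unregister_vdev(struct foo_dev *dev)
{
	/* safe whether or not registration ever succeeded */
	if (video_is_registered(&dev->vdev))
		video_unregister_device(&dev->vdev);
}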
diff --git a/drivers/media/usb/gspca/ov534.c b/drivers/media/usb/gspca/ov534.c
index a9c866d..146071b 100644
--- a/drivers/media/usb/gspca/ov534.c
+++ b/drivers/media/usb/gspca/ov534.c
@@ -816,21 +816,16 @@
s16 huesin;
s16 huecos;
- /* fixp_sin and fixp_cos accept only positive values, while
- * our val is between -90 and 90
- */
- val += 360;
-
/* According to the datasheet the registers expect HUESIN and
* HUECOS to be the result of the trigonometric functions,
* scaled by 0x80.
*
- * The 0x100 here represents the maximun absolute value
+ * The 0x7fff here represents the maximum absolute value
* returned by fixp_sin and fixp_cos, so the scaling will
* consider the result like in the interval [-1.0, 1.0].
*/
- huesin = fixp_sin(val) * 0x80 / 0x100;
- huecos = fixp_cos(val) * 0x80 / 0x100;
+ huesin = fixp_sin16(val) * 0x80 / 0x7fff;
+ huecos = fixp_cos16(val) * 0x80 / 0x7fff;
if (huesin < 0) {
sccb_reg_write(gspca_dev, 0xab,
diff --git a/drivers/media/usb/gspca/topro.c b/drivers/media/usb/gspca/topro.c
index 5fcd1ee..c70ff40 100644
--- a/drivers/media/usb/gspca/topro.c
+++ b/drivers/media/usb/gspca/topro.c
@@ -969,7 +969,9 @@
{
int i, sc;
- if (quality < 50)
+ if (quality <= 0)
+ sc = 5000;
+ else if (quality < 50)
sc = 5000 / quality;
else
sc = 200 - quality * 2;
diff --git a/drivers/media/usb/hdpvr/hdpvr-core.c b/drivers/media/usb/hdpvr/hdpvr-core.c
index 42b4cdf..3fc6419 100644
--- a/drivers/media/usb/hdpvr/hdpvr-core.c
+++ b/drivers/media/usb/hdpvr/hdpvr-core.c
@@ -69,10 +69,6 @@
void hdpvr_delete(struct hdpvr_device *dev)
{
hdpvr_free_buffers(dev);
-
- if (dev->video_dev)
- video_device_release(dev->video_dev);
-
usb_put_dev(dev->udev);
}
@@ -397,7 +393,7 @@
/* let the user know what node this device is now attached to */
v4l2_info(&dev->v4l2_dev, "device now attached to %s\n",
- video_device_node_name(dev->video_dev));
+ video_device_node_name(&dev->video_dev));
return 0;
reg_fail:
@@ -420,7 +416,7 @@
struct hdpvr_device *dev = to_hdpvr_dev(usb_get_intfdata(interface));
v4l2_info(&dev->v4l2_dev, "device %s disconnected\n",
- video_device_node_name(dev->video_dev));
+ video_device_node_name(&dev->video_dev));
/* prevent more I/O from starting and stop any ongoing */
mutex_lock(&dev->io_mutex);
dev->status = STATUS_DISCONNECTED;
@@ -436,7 +432,7 @@
#if IS_ENABLED(CONFIG_I2C)
i2c_del_adapter(&dev->i2c_adapter);
#endif
- video_unregister_device(dev->video_dev);
+ video_unregister_device(&dev->video_dev);
atomic_dec(&dev_nr);
}
diff --git a/drivers/media/usb/hdpvr/hdpvr-video.c b/drivers/media/usb/hdpvr/hdpvr-video.c
index 59d15fd..d8d8c0f 100644
--- a/drivers/media/usb/hdpvr/hdpvr-video.c
+++ b/drivers/media/usb/hdpvr/hdpvr-video.c
@@ -797,7 +797,7 @@
* Comment this out for now, but if the legacy mode can be
* removed in the future, then this code should be enabled
* again.
- dev->video_dev->tvnorms =
+ dev->video_dev.tvnorms =
(index != HDPVR_COMPONENT) ? V4L2_STD_ALL : 0;
*/
}
@@ -1228,19 +1228,12 @@
}
/* setup and register video device */
- dev->video_dev = video_device_alloc();
- if (!dev->video_dev) {
- v4l2_err(&dev->v4l2_dev, "video_device_alloc() failed\n");
- res = -ENOMEM;
- goto error;
- }
+ dev->video_dev = hdpvr_video_template;
+ strcpy(dev->video_dev.name, "Hauppauge HD PVR");
+ dev->video_dev.v4l2_dev = &dev->v4l2_dev;
+ video_set_drvdata(&dev->video_dev, dev);
- *dev->video_dev = hdpvr_video_template;
- strcpy(dev->video_dev->name, "Hauppauge HD PVR");
- dev->video_dev->v4l2_dev = &dev->v4l2_dev;
- video_set_drvdata(dev->video_dev, dev);
-
- res = video_register_device(dev->video_dev, VFL_TYPE_GRABBER, devnum);
+ res = video_register_device(&dev->video_dev, VFL_TYPE_GRABBER, devnum);
if (res < 0) {
v4l2_err(&dev->v4l2_dev, "video_device registration failed\n");
goto error;
diff --git a/drivers/media/usb/hdpvr/hdpvr.h b/drivers/media/usb/hdpvr/hdpvr.h
index dc685d4..a319430 100644
--- a/drivers/media/usb/hdpvr/hdpvr.h
+++ b/drivers/media/usb/hdpvr/hdpvr.h
@@ -66,7 +66,7 @@
/* Structure to hold all of our device specific stuff */
struct hdpvr_device {
/* the v4l device for this device */
- struct video_device *video_dev;
+ struct video_device video_dev;
/* the control handler for this device */
struct v4l2_ctrl_handler hdl;
/* the usb device for this device */
diff --git a/drivers/media/usb/pvrusb2/pvrusb2-v4l2.c b/drivers/media/usb/pvrusb2/pvrusb2-v4l2.c
index 35e4ea5..1c5f85b 100644
--- a/drivers/media/usb/pvrusb2/pvrusb2-v4l2.c
+++ b/drivers/media/usb/pvrusb2/pvrusb2-v4l2.c
@@ -21,7 +21,6 @@
#include <linux/kernel.h>
#include <linux/slab.h>
-#include <linux/version.h>
#include "pvrusb2-context.h"
#include "pvrusb2-hdw.h"
#include "pvrusb2.h"
@@ -32,6 +31,7 @@
#include <linux/module.h>
#include <media/v4l2-dev.h>
#include <media/v4l2-device.h>
+#include <media/v4l2-fh.h>
#include <media/v4l2-common.h>
#include <media/v4l2-ioctl.h>
@@ -50,14 +50,11 @@
};
struct pvr2_v4l2_fh {
+ struct v4l2_fh fh;
struct pvr2_channel channel;
struct pvr2_v4l2_dev *pdi;
- enum v4l2_priority prio;
struct pvr2_ioread *rhp;
struct file *file;
- struct pvr2_v4l2 *vhead;
- struct pvr2_v4l2_fh *vnext;
- struct pvr2_v4l2_fh *vprev;
wait_queue_head_t wait_data;
int fw_mode_flag;
/* Map contiguous ordinal value to input id */
@@ -67,10 +64,6 @@
struct pvr2_v4l2 {
struct pvr2_channel channel;
- struct pvr2_v4l2_fh *vfirst;
- struct pvr2_v4l2_fh *vlast;
-
- struct v4l2_prio_state prio;
/* streams - Note that these must be separately, individually,
* allocated pointers. This is because the v4l core is going to
@@ -169,23 +162,6 @@
return 0;
}
-static int pvr2_g_priority(struct file *file, void *priv, enum v4l2_priority *p)
-{
- struct pvr2_v4l2_fh *fh = file->private_data;
- struct pvr2_v4l2 *vp = fh->vhead;
-
- *p = v4l2_prio_max(&vp->prio);
- return 0;
-}
-
-static int pvr2_s_priority(struct file *file, void *priv, enum v4l2_priority prio)
-{
- struct pvr2_v4l2_fh *fh = file->private_data;
- struct pvr2_v4l2 *vp = fh->vhead;
-
- return v4l2_prio_change(&vp->prio, &fh->prio, prio);
-}
-
static int pvr2_g_std(struct file *file, void *priv, v4l2_std_id *std)
{
struct pvr2_v4l2_fh *fh = file->private_data;
@@ -805,8 +781,6 @@
static const struct v4l2_ioctl_ops pvr2_ioctl_ops = {
.vidioc_querycap = pvr2_querycap,
- .vidioc_g_priority = pvr2_g_priority,
- .vidioc_s_priority = pvr2_s_priority,
.vidioc_s_audio = pvr2_s_audio,
.vidioc_g_audio = pvr2_g_audio,
.vidioc_enumaudio = pvr2_enumaudio,
@@ -911,7 +885,9 @@
if (!vp->channel.mc_head->disconnect_flag) return;
pvr2_v4l2_dev_disassociate_parent(vp->dev_video);
pvr2_v4l2_dev_disassociate_parent(vp->dev_radio);
- if (vp->vfirst) return;
+ if (!list_empty(&vp->dev_video->devbase.fh_list) ||
+ !list_empty(&vp->dev_radio->devbase.fh_list))
+ return;
pvr2_v4l2_destroy_no_lock(vp);
}
@@ -921,7 +897,6 @@
{
struct pvr2_v4l2_fh *fh = file->private_data;
- struct pvr2_v4l2 *vp = fh->vhead;
struct pvr2_hdw *hdw = fh->channel.mc_head->hdw;
long ret = -EINVAL;
@@ -934,18 +909,6 @@
return -EFAULT;
}
- /* check priority */
- switch (cmd) {
- case VIDIOC_S_CTRL:
- case VIDIOC_S_STD:
- case VIDIOC_S_INPUT:
- case VIDIOC_S_TUNER:
- case VIDIOC_S_FREQUENCY:
- ret = v4l2_prio_check(&vp->prio, fh->prio);
- if (ret)
- return ret;
- }
-
ret = video_ioctl2(file, cmd, arg);
pvr2_hdw_commit_ctl(hdw);
@@ -970,7 +933,7 @@
static int pvr2_v4l2_release(struct file *file)
{
struct pvr2_v4l2_fh *fhp = file->private_data;
- struct pvr2_v4l2 *vp = fhp->vhead;
+ struct pvr2_v4l2 *vp = fhp->pdi->v4lp;
struct pvr2_hdw *hdw = fhp->channel.mc_head->hdw;
pvr2_trace(PVR2_TRACE_OPEN_CLOSE,"pvr2_v4l2_release");
@@ -984,22 +947,10 @@
fhp->rhp = NULL;
}
- v4l2_prio_close(&vp->prio, fhp->prio);
+ v4l2_fh_del(&fhp->fh);
+ v4l2_fh_exit(&fhp->fh);
file->private_data = NULL;
- if (fhp->vnext) {
- fhp->vnext->vprev = fhp->vprev;
- } else {
- vp->vlast = fhp->vprev;
- }
- if (fhp->vprev) {
- fhp->vprev->vnext = fhp->vnext;
- } else {
- vp->vfirst = fhp->vnext;
- }
- fhp->vnext = NULL;
- fhp->vprev = NULL;
- fhp->vhead = NULL;
pvr2_channel_done(&fhp->channel);
pvr2_trace(PVR2_TRACE_STRUCT,
"Destroying pvr_v4l2_fh id=%p",fhp);
@@ -1008,7 +959,9 @@
fhp->input_map = NULL;
}
kfree(fhp);
- if (vp->channel.mc_head->disconnect_flag && !vp->vfirst) {
+ if (vp->channel.mc_head->disconnect_flag &&
+ list_empty(&vp->dev_video->devbase.fh_list) &&
+ list_empty(&vp->dev_radio->devbase.fh_list)) {
pvr2_v4l2_destroy_no_lock(vp);
}
return 0;
@@ -1043,6 +996,7 @@
return -ENOMEM;
}
+ v4l2_fh_init(&fhp->fh, &dip->devbase);
init_waitqueue_head(&fhp->wait_data);
fhp->pdi = dip;
@@ -1093,21 +1047,11 @@
fhp->input_map[input_cnt++] = idx;
}
- fhp->vnext = NULL;
- fhp->vprev = vp->vlast;
- if (vp->vlast) {
- vp->vlast->vnext = fhp;
- } else {
- vp->vfirst = fhp;
- }
- vp->vlast = fhp;
- fhp->vhead = vp;
-
fhp->file = file;
file->private_data = fhp;
- v4l2_prio_open(&vp->prio, &fhp->prio);
fhp->fw_mode_flag = pvr2_hdw_cpufw_get_enabled(hdw);
+ v4l2_fh_add(&fhp->fh);
return 0;
}
@@ -1247,7 +1191,7 @@
.open = pvr2_v4l2_open,
.release = pvr2_v4l2_release,
.read = pvr2_v4l2_read,
- .ioctl = pvr2_v4l2_ioctl,
+ .unlocked_ioctl = pvr2_v4l2_ioctl,
.poll = pvr2_v4l2_poll,
};
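
The pvrusb2 hunks above replace the driver's hand-rolled vfirst/vlast handle list and v4l2_prio_* bookkeeping with the core struct v4l2_fh, whose per-video_device list the core maintains. A condensed sketch of the open/release lifecycle being adopted; everything except the v4l2_fh helpers is a made-up placeholder:

#include <linux/fs.h>
#include <linux/list.h>
#include <linux/printk.h>
#include <linux/slab.h>
#include <media/v4l2-dev.h>
#include <media/v4l2-fh.h>

struct foo_fh {
	struct v4l2_fh fh;	/* keep this as the first member */
	/* driver-private per-open state would follow */
};

static int foo_open(struct file *file)
{
	struct video_device *vdev = video_devdata(file);
	struct foo_fh *fh = kzalloc(sizeof(*fh), GFP_KERNEL);

	if (!fh)
		return -ENOMEM;

	v4l2_fh_init(&fh->fh, vdev);
	file->private_data = fh;
	v4l2_fh_add(&fh->fh);		/* links into vdev->fh_list */
	return 0;
}

static int foo_release(struct file *file)
{
	struct foo_fh *fh = file->private_data;
	struct video_device *vdev = video_devdata(file);

	v4l2_fh_del(&fh->fh);
	v4l2_fh_exit(&fh->fh);
	file->private_data = NULL;
	kfree(fh);

	/* the core list replaces the old open-handle accounting */
	if (list_empty(&vdev->fh_list))
		pr_debug("%s: last handle closed\n", video_device_node_name(vdev));
	return 0;
}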
diff --git a/drivers/media/usb/siano/smsusb.c b/drivers/media/usb/siano/smsusb.c
index 94e10b1..c945e4c 100644
--- a/drivers/media/usb/siano/smsusb.c
+++ b/drivers/media/usb/siano/smsusb.c
@@ -19,6 +19,8 @@
****************************************************************/
+#include "smscoreapi.h"
+
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/usb.h>
@@ -26,14 +28,9 @@
#include <linux/slab.h>
#include <linux/module.h>
-#include "smscoreapi.h"
#include "sms-cards.h"
#include "smsendian.h"
-static int sms_dbg;
-module_param_named(debug, sms_dbg, int, 0644);
-MODULE_PARM_DESC(debug, "set debug level (info=1, adv=2 (or-able))");
-
#define USB1_BUFFER_SIZE 0x1000
#define USB2_BUFFER_SIZE 0x2000
@@ -87,7 +84,7 @@
struct smsusb_device_t *dev = surb->dev;
if (urb->status == -ESHUTDOWN) {
- sms_err("error, urb status %d (-ESHUTDOWN), %d bytes",
+ pr_err("error, urb status %d (-ESHUTDOWN), %d bytes\n",
urb->status, urb->actual_length);
return;
}
@@ -109,9 +106,7 @@
/* sanity check */
if (((int) phdr->msg_length +
surb->cb->offset) > urb->actual_length) {
- sms_err("invalid response "
- "msglen %d offset %d "
- "size %d",
+ pr_err("invalid response msglen %d offset %d size %d\n",
phdr->msg_length,
surb->cb->offset,
urb->actual_length);
@@ -125,7 +120,7 @@
} else
surb->cb->offset = 0;
- sms_debug("received %s(%d) size: %d",
+ pr_debug("received %s(%d) size: %d\n",
smscore_translate_msg(phdr->msg_type),
phdr->msg_type, phdr->msg_length);
@@ -134,12 +129,11 @@
smscore_onresponse(dev->coredev, surb->cb);
surb->cb = NULL;
} else {
- sms_err("invalid response "
- "msglen %d actual %d",
+ pr_err("invalid response msglen %d actual %d\n",
phdr->msg_length, urb->actual_length);
}
} else
- sms_err("error, urb status %d, %d bytes",
+ pr_err("error, urb status %d, %d bytes\n",
urb->status, urb->actual_length);
@@ -153,7 +147,7 @@
if (!surb->cb) {
surb->cb = smscore_getbuffer(dev->coredev);
if (!surb->cb) {
- sms_err("smscore_getbuffer(...) returned NULL");
+ pr_err("smscore_getbuffer(...) returned NULL\n");
return -ENOMEM;
}
}
@@ -194,7 +188,7 @@
for (i = 0; i < MAX_URBS; i++) {
rc = smsusb_submit_urb(dev, &dev->surbs[i]);
if (rc < 0) {
- sms_err("smsusb_submit_urb(...) failed");
+ pr_err("smsusb_submit_urb(...) failed\n");
smsusb_stop_streaming(dev);
break;
}
@@ -210,11 +204,11 @@
int dummy;
if (dev->state != SMSUSB_ACTIVE) {
- sms_debug("Device not active yet");
+ pr_debug("Device not active yet\n");
return -ENOENT;
}
- sms_debug("sending %s(%d) size: %d",
+ pr_debug("sending %s(%d) size: %d\n",
smscore_translate_msg(phdr->msg_type), phdr->msg_type,
phdr->msg_length);
@@ -249,7 +243,7 @@
id = sms_get_board(board_id)->default_mode;
if (id < DEVICE_MODE_DVBT || id > DEVICE_MODE_DVBT_BDA) {
- sms_err("invalid firmware id specified %d", id);
+ pr_err("invalid firmware id specified %d\n", id);
return -EINVAL;
}
@@ -257,13 +251,13 @@
rc = request_firmware(&fw, fw_filename, &udev->dev);
if (rc < 0) {
- sms_warn("failed to open \"%s\" mode %d, "
- "trying again with default firmware", fw_filename, id);
+ pr_warn("failed to open '%s' mode %d, trying again with default firmware\n",
+ fw_filename, id);
fw_filename = smsusb1_fw_lkup[id];
rc = request_firmware(&fw, fw_filename, &udev->dev);
if (rc < 0) {
- sms_warn("failed to open \"%s\" mode %d",
+ pr_warn("failed to open '%s' mode %d\n",
fw_filename, id);
return rc;
@@ -277,14 +271,14 @@
rc = usb_bulk_msg(udev, usb_sndbulkpipe(udev, 2),
fw_buffer, fw->size, &dummy, 1000);
- sms_info("sent %zu(%d) bytes, rc %d", fw->size, dummy, rc);
+ pr_debug("sent %zu(%d) bytes, rc %d\n", fw->size, dummy, rc);
kfree(fw_buffer);
} else {
- sms_err("failed to allocate firmware buffer");
+ pr_err("failed to allocate firmware buffer\n");
rc = -ENOMEM;
}
- sms_info("read FW %s, size=%zu", fw_filename, fw->size);
+ pr_debug("read FW %s, size=%zu\n", fw_filename, fw->size);
release_firmware(fw);
@@ -300,7 +294,7 @@
if (!product_string) {
product_string = "none";
- sms_err("product string not found");
+ pr_err("product string not found\n");
} else if (strstr(product_string, "DVBH"))
*mode = 1;
else if (strstr(product_string, "BDA"))
@@ -310,7 +304,7 @@
else if (strstr(product_string, "TDMB"))
*mode = 2;
- sms_info("%d \"%s\"", *mode, product_string);
+ pr_debug("%d \"%s\"\n", *mode, product_string);
}
static int smsusb1_setmode(void *context, int mode)
@@ -319,7 +313,7 @@
sizeof(struct sms_msg_hdr), 0 };
if (mode < DEVICE_MODE_DVBT || mode > DEVICE_MODE_DVBT_BDA) {
- sms_err("invalid firmware id specified %d", mode);
+ pr_err("invalid firmware id specified %d\n", mode);
return -EINVAL;
}
@@ -339,25 +333,61 @@
if (dev->coredev)
smscore_unregister_device(dev->coredev);
- sms_info("device 0x%p destroyed", dev);
+ pr_debug("device 0x%p destroyed\n", dev);
kfree(dev);
}
usb_set_intfdata(intf, NULL);
}
+static void *siano_media_device_register(struct smsusb_device_t *dev,
+ int board_id)
+{
+#ifdef CONFIG_MEDIA_CONTROLLER_DVB
+ struct media_device *mdev;
+ struct usb_device *udev = dev->udev;
+ struct sms_board *board = sms_get_board(board_id);
+ int ret;
+
+ mdev = kzalloc(sizeof(*mdev), GFP_KERNEL);
+ if (!mdev)
+ return NULL;
+
+ mdev->dev = &udev->dev;
+ strlcpy(mdev->model, board->name, sizeof(mdev->model));
+ if (udev->serial)
+ strlcpy(mdev->serial, udev->serial, sizeof(mdev->serial));
+ strcpy(mdev->bus_info, udev->devpath);
+ mdev->hw_revision = le16_to_cpu(udev->descriptor.bcdDevice);
+ mdev->driver_version = LINUX_VERSION_CODE;
+
+ ret = media_device_register(mdev);
+ if (ret) {
+ pr_err("Couldn't create a media device. Error: %d\n",
+ ret);
+ kfree(mdev);
+ return NULL;
+ }
+
+ pr_info("media controller created\n");
+
+ return mdev;
+#else
+ return NULL;
+#endif
+}
+
static int smsusb_init_device(struct usb_interface *intf, int board_id)
{
struct smsdevice_params_t params;
struct smsusb_device_t *dev;
+ void *mdev;
int i, rc;
/* create device object */
dev = kzalloc(sizeof(struct smsusb_device_t), GFP_KERNEL);
- if (!dev) {
- sms_err("kzalloc(sizeof(struct smsusb_device_t) failed");
+ if (!dev)
return -ENOMEM;
- }
memset(&params, 0, sizeof(params));
usb_set_intfdata(intf, dev);
@@ -374,7 +404,7 @@
params.detectmode_handler = smsusb1_detectmode;
break;
case SMS_UNKNOWN_TYPE:
- sms_err("Unspecified sms device type!");
+ pr_err("Unspecified sms device type!\n");
/* fall-thru */
default:
dev->buffer_size = USB2_BUFFER_SIZE;
@@ -393,7 +423,7 @@
dev->out_ep = intf->cur_altsetting->endpoint[i].desc.bEndpointAddress;
}
- sms_info("in_ep = %02x, out_ep = %02x",
+ pr_debug("in_ep = %02x, out_ep = %02x\n",
dev->in_ep, dev->out_ep);
params.device = &dev->udev->dev;
@@ -403,11 +433,17 @@
params.context = dev;
usb_make_path(dev->udev, params.devpath, sizeof(params.devpath));
+ mdev = siano_media_device_register(dev, board_id);
+
/* register in smscore */
- rc = smscore_register_device(&params, &dev->coredev);
+ rc = smscore_register_device(&params, &dev->coredev, mdev);
if (rc < 0) {
- sms_err("smscore_register_device(...) failed, rc %d", rc);
+ pr_err("smscore_register_device(...) failed, rc %d\n", rc);
smsusb_term_device(intf);
+#ifdef CONFIG_MEDIA_CONTROLLER_DVB
+ media_device_unregister(mdev);
+#endif
+ kfree(mdev);
return rc;
}
@@ -421,10 +457,10 @@
usb_init_urb(&dev->surbs[i].urb);
}
- sms_info("smsusb_start_streaming(...).");
+ pr_debug("smsusb_start_streaming(...).\n");
rc = smsusb_start_streaming(dev);
if (rc < 0) {
- sms_err("smsusb_start_streaming(...) failed");
+ pr_err("smsusb_start_streaming(...) failed\n");
smsusb_term_device(intf);
return rc;
}
@@ -433,12 +469,12 @@
rc = smscore_start_device(dev->coredev);
if (rc < 0) {
- sms_err("smscore_start_device(...) failed");
+ pr_err("smscore_start_device(...) failed\n");
smsusb_term_device(intf);
return rc;
}
- sms_info("device 0x%p created", dev);
+ pr_debug("device 0x%p created\n", dev);
return rc;
}
@@ -450,13 +486,13 @@
char devpath[32];
int i, rc;
- sms_info("board id=%lu, interface number %d",
+ pr_info("board id=%lu, interface number %d\n",
id->driver_info,
intf->cur_altsetting->desc.bInterfaceNumber);
if (sms_get_board(id->driver_info)->intf_num !=
intf->cur_altsetting->desc.bInterfaceNumber) {
- sms_debug("interface %d won't be used. Expecting interface %d to popup",
+ pr_debug("interface %d won't be used. Expecting interface %d to popup\n",
intf->cur_altsetting->desc.bInterfaceNumber,
sms_get_board(id->driver_info)->intf_num);
return -ENODEV;
@@ -467,15 +503,15 @@
intf->cur_altsetting->desc.bInterfaceNumber,
0);
if (rc < 0) {
- sms_err("usb_set_interface failed, rc %d", rc);
+ pr_err("usb_set_interface failed, rc %d\n", rc);
return rc;
}
}
- sms_info("smsusb_probe %d",
+ pr_debug("smsusb_probe %d\n",
intf->cur_altsetting->desc.bInterfaceNumber);
for (i = 0; i < intf->cur_altsetting->desc.bNumEndpoints; i++) {
- sms_info("endpoint %d %02x %02x %d", i,
+ pr_debug("endpoint %d %02x %02x %d\n", i,
intf->cur_altsetting->endpoint[i].desc.bEndpointAddress,
intf->cur_altsetting->endpoint[i].desc.bmAttributes,
intf->cur_altsetting->endpoint[i].desc.wMaxPacketSize);
@@ -489,7 +525,7 @@
}
if ((udev->actconfig->desc.bNumInterfaces == 2) &&
(intf->cur_altsetting->desc.bInterfaceNumber == 0)) {
- sms_debug("rom interface 0 is not used");
+ pr_debug("rom interface 0 is not used\n");
return -ENODEV;
}
@@ -498,23 +534,25 @@
snprintf(devpath, sizeof(devpath), "usb\\%d-%s",
udev->bus->busnum, udev->devpath);
- sms_info("stellar device in cold state was found at %s.", devpath);
+ pr_info("stellar device in cold state was found at %s.\n",
+ devpath);
rc = smsusb1_load_firmware(
udev, smscore_registry_getmode(devpath),
id->driver_info);
/* This device will reset and gain another USB ID */
if (!rc)
- sms_info("stellar device now in warm state");
+ pr_info("stellar device now in warm state\n");
else
- sms_err("Failed to put stellar in warm state. Error: %d", rc);
+ pr_err("Failed to put stellar in warm state. Error: %d\n",
+ rc);
return rc;
} else {
rc = smsusb_init_device(intf, id->driver_info);
}
- sms_info("Device initialized with return code %d", rc);
+ pr_info("Device initialized with return code %d\n", rc);
sms_board_load_modules(id->driver_info);
return rc;
}
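
Throughout smsusb.c the private sms_err()/sms_info()/sms_debug() wrappers give way to the standard pr_* helpers, and the driver header is now included ahead of the generic headers so that a pr_fmt definition (presumably supplied there) prefixes every message. A tiny illustration of the idiom; the exact prefix string is an assumption rather than something visible in this diff:

/* must be defined before anything that includes <linux/printk.h> */
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt

#include <linux/kernel.h>
#include <linux/printk.h>

static void foo_report(int rc)
{
	/* printed with the module-name prefix supplied by pr_fmt() */
	pr_err("request failed, rc %d\n", rc);
}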
diff --git a/drivers/media/usb/stk1160/stk1160-v4l.c b/drivers/media/usb/stk1160/stk1160-v4l.c
index 65a326c..749ad56 100644
--- a/drivers/media/usb/stk1160/stk1160-v4l.c
+++ b/drivers/media/usb/stk1160/stk1160-v4l.c
@@ -240,6 +240,11 @@
if (mutex_lock_interruptible(&dev->v4l_lock))
return -ERESTARTSYS;
+ /*
+ * Once URBs are cancelled, the URB complete handler
+ * won't be running. This is required to safely release the
+ * current buffer (dev->isoc_ctl.buf).
+ */
stk1160_cancel_isoc(dev);
/*
@@ -620,8 +625,16 @@
stk1160_info("buffer [%p/%d] aborted\n",
buf, buf->vb.v4l2_buf.index);
}
- /* It's important to clear current buffer */
- dev->isoc_ctl.buf = NULL;
+
+ /* It's important to release the current buffer */
+ if (dev->isoc_ctl.buf) {
+ buf = dev->isoc_ctl.buf;
+ dev->isoc_ctl.buf = NULL;
+
+ vb2_buffer_done(&buf->vb, VB2_BUF_STATE_ERROR);
+ stk1160_info("buffer [%p/%d] aborted\n",
+ buf, buf->vb.v4l2_buf.index);
+ }
spin_unlock_irqrestore(&dev->buf_lock, flags);
}
diff --git a/drivers/media/usb/stkwebcam/stk-webcam.c b/drivers/media/usb/stkwebcam/stk-webcam.c
index e08fa58..c21c4c0 100644
--- a/drivers/media/usb/stkwebcam/stk-webcam.c
+++ b/drivers/media/usb/stkwebcam/stk-webcam.c
@@ -556,10 +556,8 @@
nbufs = dev->n_sbufs;
dev->n_sbufs = 0;
spin_unlock_irqrestore(&dev->spinlock, flags);
- for (i = 0; i < nbufs; i++) {
- if (dev->sio_bufs[i].buffer != NULL)
- vfree(dev->sio_bufs[i].buffer);
- }
+ for (i = 0; i < nbufs; i++)
+ vfree(dev->sio_bufs[i].buffer);
kfree(dev->sio_bufs);
dev->sio_bufs = NULL;
return 0;
diff --git a/drivers/media/usb/tm6000/tm6000-video.c b/drivers/media/usb/tm6000/tm6000-video.c
index 0f14d3c..77ce9ef 100644
--- a/drivers/media/usb/tm6000/tm6000-video.c
+++ b/drivers/media/usb/tm6000/tm6000-video.c
@@ -1576,7 +1576,7 @@
.name = "tm6000",
.fops = &tm6000_fops,
.ioctl_ops = &video_ioctl_ops,
- .release = video_device_release,
+ .release = video_device_release_empty,
.tvnorms = TM6000_STD,
};
@@ -1609,25 +1609,19 @@
* ------------------------------------------------------------------
*/
-static struct video_device *vdev_init(struct tm6000_core *dev,
+static void vdev_init(struct tm6000_core *dev,
+ struct video_device *vfd,
const struct video_device
*template, const char *type_name)
{
- struct video_device *vfd;
-
- vfd = video_device_alloc();
- if (NULL == vfd)
- return NULL;
-
*vfd = *template;
vfd->v4l2_dev = &dev->v4l2_dev;
- vfd->release = video_device_release;
+ vfd->release = video_device_release_empty;
vfd->lock = &dev->lock;
snprintf(vfd->name, sizeof(vfd->name), "%s %s", dev->name, type_name);
video_set_drvdata(vfd, dev);
- return vfd;
}
int tm6000_v4l2_register(struct tm6000_core *dev)
@@ -1658,62 +1652,46 @@
if (ret)
goto free_ctrl;
- dev->vfd = vdev_init(dev, &tm6000_template, "video");
+ vdev_init(dev, &dev->vfd, &tm6000_template, "video");
- if (!dev->vfd) {
- printk(KERN_INFO "%s: can't register video device\n",
- dev->name);
- ret = -ENOMEM;
- goto free_ctrl;
- }
- dev->vfd->ctrl_handler = &dev->ctrl_handler;
+ dev->vfd.ctrl_handler = &dev->ctrl_handler;
/* init video dma queues */
INIT_LIST_HEAD(&dev->vidq.active);
INIT_LIST_HEAD(&dev->vidq.queued);
- ret = video_register_device(dev->vfd, VFL_TYPE_GRABBER, video_nr);
+ ret = video_register_device(&dev->vfd, VFL_TYPE_GRABBER, video_nr);
if (ret < 0) {
printk(KERN_INFO "%s: can't register video device\n",
dev->name);
- video_device_release(dev->vfd);
- dev->vfd = NULL;
goto free_ctrl;
}
printk(KERN_INFO "%s: registered device %s\n",
- dev->name, video_device_node_name(dev->vfd));
+ dev->name, video_device_node_name(&dev->vfd));
if (dev->caps.has_radio) {
- dev->radio_dev = vdev_init(dev, &tm6000_radio_template,
+ vdev_init(dev, &dev->radio_dev, &tm6000_radio_template,
"radio");
- if (!dev->radio_dev) {
- printk(KERN_INFO "%s: can't register radio device\n",
- dev->name);
- ret = -ENXIO;
- goto unreg_video;
- }
-
- dev->radio_dev->ctrl_handler = &dev->radio_ctrl_handler;
- ret = video_register_device(dev->radio_dev, VFL_TYPE_RADIO,
+ dev->radio_dev.ctrl_handler = &dev->radio_ctrl_handler;
+ ret = video_register_device(&dev->radio_dev, VFL_TYPE_RADIO,
radio_nr);
if (ret < 0) {
printk(KERN_INFO "%s: can't register radio device\n",
dev->name);
- video_device_release(dev->radio_dev);
goto unreg_video;
}
printk(KERN_INFO "%s: registered device %s\n",
- dev->name, video_device_node_name(dev->radio_dev));
+ dev->name, video_device_node_name(&dev->radio_dev));
}
printk(KERN_INFO "Trident TVMaster TM5600/TM6000/TM6010 USB2 board (Load status: %d)\n", ret);
return ret;
unreg_video:
- video_unregister_device(dev->vfd);
+ video_unregister_device(&dev->vfd);
free_ctrl:
v4l2_ctrl_handler_free(&dev->ctrl_handler);
v4l2_ctrl_handler_free(&dev->radio_ctrl_handler);
@@ -1722,19 +1700,12 @@
int tm6000_v4l2_unregister(struct tm6000_core *dev)
{
- video_unregister_device(dev->vfd);
+ video_unregister_device(&dev->vfd);
/* if URB buffers are still allocated free them now */
tm6000_free_urb_buffers(dev);
- if (dev->radio_dev) {
- if (video_is_registered(dev->radio_dev))
- video_unregister_device(dev->radio_dev);
- else
- video_device_release(dev->radio_dev);
- dev->radio_dev = NULL;
- }
-
+ video_unregister_device(&dev->radio_dev);
return 0;
}
diff --git a/drivers/media/usb/tm6000/tm6000.h b/drivers/media/usb/tm6000/tm6000.h
index 08bd074..f212794 100644
--- a/drivers/media/usb/tm6000/tm6000.h
+++ b/drivers/media/usb/tm6000/tm6000.h
@@ -220,8 +220,8 @@
struct tm6000_fh *resources; /* Points to fh that is streaming */
bool is_res_read;
- struct video_device *vfd;
- struct video_device *radio_dev;
+ struct video_device vfd;
+ struct video_device radio_dev;
struct tm6000_dmaqueue vidq;
struct v4l2_device v4l2_dev;
struct v4l2_ctrl_handler ctrl_handler;
diff --git a/drivers/media/usb/usbvision/usbvision-video.c b/drivers/media/usb/usbvision/usbvision-video.c
index cd2fbf1..12b403e 100644
--- a/drivers/media/usb/usbvision/usbvision-video.c
+++ b/drivers/media/usb/usbvision/usbvision-video.c
@@ -471,7 +471,7 @@
/* NT100x has a 8-bit register space */
err_code = usbvision_read_reg(usbvision, reg->reg&0xff);
if (err_code < 0) {
- dev_err(&usbvision->vdev->dev,
+ dev_err(&usbvision->vdev.dev,
"%s: VIDIOC_DBG_G_REGISTER failed: error %d\n",
__func__, err_code);
return err_code;
@@ -490,7 +490,7 @@
/* NT100x has a 8-bit register space */
err_code = usbvision_write_reg(usbvision, reg->reg & 0xff, reg->val);
if (err_code < 0) {
- dev_err(&usbvision->vdev->dev,
+ dev_err(&usbvision->vdev.dev,
"%s: VIDIOC_DBG_S_REGISTER failed: error %d\n",
__func__, err_code);
return err_code;
@@ -1157,7 +1157,7 @@
if (mutex_lock_interruptible(&usbvision->v4l2_lock))
return -ERESTARTSYS;
if (usbvision->user) {
- dev_err(&usbvision->rdev->dev,
+ dev_err(&usbvision->rdev.dev,
"%s: Someone tried to open an already opened USBVision Radio!\n",
__func__);
err_code = -EBUSY;
@@ -1280,7 +1280,7 @@
.fops = &usbvision_fops,
.ioctl_ops = &usbvision_ioctl_ops,
.name = "usbvision-video",
- .release = video_device_release,
+ .release = video_device_release_empty,
.tvnorms = USBVISION_NORMS,
};
@@ -1312,58 +1312,46 @@
static struct video_device usbvision_radio_template = {
.fops = &usbvision_radio_fops,
.name = "usbvision-radio",
- .release = video_device_release,
+ .release = video_device_release_empty,
.ioctl_ops = &usbvision_radio_ioctl_ops,
};
-static struct video_device *usbvision_vdev_init(struct usb_usbvision *usbvision,
- struct video_device *vdev_template,
- char *name)
+static void usbvision_vdev_init(struct usb_usbvision *usbvision,
+ struct video_device *vdev,
+ const struct video_device *vdev_template,
+ const char *name)
{
struct usb_device *usb_dev = usbvision->dev;
- struct video_device *vdev;
if (usb_dev == NULL) {
dev_err(&usbvision->dev->dev,
"%s: usbvision->dev is not set\n", __func__);
- return NULL;
+ return;
}
- vdev = video_device_alloc();
- if (NULL == vdev)
- return NULL;
*vdev = *vdev_template;
vdev->lock = &usbvision->v4l2_lock;
vdev->v4l2_dev = &usbvision->v4l2_dev;
snprintf(vdev->name, sizeof(vdev->name), "%s", name);
video_set_drvdata(vdev, usbvision);
- return vdev;
}
/* unregister video4linux devices */
static void usbvision_unregister_video(struct usb_usbvision *usbvision)
{
/* Radio Device: */
- if (usbvision->rdev) {
+ if (video_is_registered(&usbvision->rdev)) {
PDEBUG(DBG_PROBE, "unregister %s [v4l2]",
- video_device_node_name(usbvision->rdev));
- if (video_is_registered(usbvision->rdev))
- video_unregister_device(usbvision->rdev);
- else
- video_device_release(usbvision->rdev);
- usbvision->rdev = NULL;
+ video_device_node_name(&usbvision->rdev));
+ video_unregister_device(&usbvision->rdev);
}
/* Video Device: */
- if (usbvision->vdev) {
+ if (video_is_registered(&usbvision->vdev)) {
PDEBUG(DBG_PROBE, "unregister %s [v4l2]",
- video_device_node_name(usbvision->vdev));
- if (video_is_registered(usbvision->vdev))
- video_unregister_device(usbvision->vdev);
- else
- video_device_release(usbvision->vdev);
- usbvision->vdev = NULL;
+ video_device_node_name(&usbvision->vdev));
+ video_unregister_device(&usbvision->vdev);
}
}
@@ -1371,28 +1359,22 @@
static int usbvision_register_video(struct usb_usbvision *usbvision)
{
/* Video Device: */
- usbvision->vdev = usbvision_vdev_init(usbvision,
- &usbvision_video_template,
- "USBVision Video");
- if (usbvision->vdev == NULL)
- goto err_exit;
- if (video_register_device(usbvision->vdev, VFL_TYPE_GRABBER, video_nr) < 0)
+ usbvision_vdev_init(usbvision, &usbvision->vdev,
+ &usbvision_video_template, "USBVision Video");
+ if (video_register_device(&usbvision->vdev, VFL_TYPE_GRABBER, video_nr) < 0)
goto err_exit;
printk(KERN_INFO "USBVision[%d]: registered USBVision Video device %s [v4l2]\n",
- usbvision->nr, video_device_node_name(usbvision->vdev));
+ usbvision->nr, video_device_node_name(&usbvision->vdev));
/* Radio Device: */
if (usbvision_device_data[usbvision->dev_model].radio) {
/* usbvision has radio */
- usbvision->rdev = usbvision_vdev_init(usbvision,
- &usbvision_radio_template,
- "USBVision Radio");
- if (usbvision->rdev == NULL)
- goto err_exit;
- if (video_register_device(usbvision->rdev, VFL_TYPE_RADIO, radio_nr) < 0)
+ usbvision_vdev_init(usbvision, &usbvision->rdev,
+ &usbvision_radio_template, "USBVision Radio");
+ if (video_register_device(&usbvision->rdev, VFL_TYPE_RADIO, radio_nr) < 0)
goto err_exit;
printk(KERN_INFO "USBVision[%d]: registered USBVision Radio device %s [v4l2]\n",
- usbvision->nr, video_device_node_name(usbvision->rdev));
+ usbvision->nr, video_device_node_name(&usbvision->rdev));
}
/* all done */
return 0;
@@ -1461,7 +1443,7 @@
usbvision->initialized = 0;
- usbvision_remove_sysfs(usbvision->vdev);
+ usbvision_remove_sysfs(&usbvision->vdev);
usbvision_unregister_video(usbvision);
kfree(usbvision->alt_max_pkt_size);
@@ -1525,7 +1507,7 @@
const struct usb_host_interface *interface;
struct usb_usbvision *usbvision = NULL;
const struct usb_endpoint_descriptor *endpoint;
- int model, i;
+ int model, i, ret;
PDEBUG(DBG_PROBE, "VID=%#04x, PID=%#04x, ifnum=%u",
dev->descriptor.idVendor,
@@ -1534,7 +1516,8 @@
model = devid->driver_info;
if (model < 0 || model >= usbvision_device_data_size) {
PDEBUG(DBG_PROBE, "model out of bounds %d", model);
- return -ENODEV;
+ ret = -ENODEV;
+ goto err_usb;
}
printk(KERN_INFO "%s: %s found\n", __func__,
usbvision_device_data[model].model_string);
@@ -1549,18 +1532,21 @@
__func__, ifnum);
dev_err(&intf->dev, "%s: Endpoint attributes %d",
__func__, endpoint->bmAttributes);
- return -ENODEV;
+ ret = -ENODEV;
+ goto err_usb;
}
if (usb_endpoint_dir_out(endpoint)) {
dev_err(&intf->dev, "%s: interface %d. has ISO OUT endpoint!\n",
__func__, ifnum);
- return -ENODEV;
+ ret = -ENODEV;
+ goto err_usb;
}
usbvision = usbvision_alloc(dev, intf);
if (usbvision == NULL) {
dev_err(&intf->dev, "%s: couldn't allocate USBVision struct\n", __func__);
- return -ENOMEM;
+ ret = -ENOMEM;
+ goto err_usb;
}
if (dev->descriptor.bNumConfigurations > 1)
@@ -1579,8 +1565,8 @@
usbvision->alt_max_pkt_size = kmalloc(32 * usbvision->num_alt, GFP_KERNEL);
if (usbvision->alt_max_pkt_size == NULL) {
dev_err(&intf->dev, "usbvision: out of memory!\n");
- usbvision_release(usbvision);
- return -ENOMEM;
+ ret = -ENOMEM;
+ goto err_pkt;
}
for (i = 0; i < usbvision->num_alt; i++) {
@@ -1611,10 +1597,16 @@
usbvision_configure_video(usbvision);
usbvision_register_video(usbvision);
- usbvision_create_sysfs(usbvision->vdev);
+ usbvision_create_sysfs(&usbvision->vdev);
PDEBUG(DBG_PROBE, "success");
return 0;
+
+err_pkt:
+ usbvision_release(usbvision);
+err_usb:
+ usb_put_dev(dev);
+ return ret;
}
diff --git a/drivers/media/usb/usbvision/usbvision.h b/drivers/media/usb/usbvision/usbvision.h
index 77aeb1e..140a1f6 100644
--- a/drivers/media/usb/usbvision/usbvision.h
+++ b/drivers/media/usb/usbvision/usbvision.h
@@ -357,8 +357,8 @@
struct usb_usbvision {
struct v4l2_device v4l2_dev;
- struct video_device *vdev; /* Video Device */
- struct video_device *rdev; /* Radio Device */
+ struct video_device vdev; /* Video Device */
+ struct video_device rdev; /* Radio Device */
/* i2c Declaration Section*/
struct i2c_adapter i2c_adap;
diff --git a/drivers/media/usb/uvc/uvc_driver.c b/drivers/media/usb/uvc/uvc_driver.c
index cf27006..5970dd6 100644
--- a/drivers/media/usb/uvc/uvc_driver.c
+++ b/drivers/media/usb/uvc/uvc_driver.c
@@ -1669,10 +1669,6 @@
#ifdef CONFIG_MEDIA_CONTROLLER
uvc_mc_cleanup_entity(entity);
#endif
- if (entity->vdev) {
- video_device_release(entity->vdev);
- entity->vdev = NULL;
- }
kfree(entity);
}
@@ -1717,11 +1713,10 @@
atomic_inc(&dev->nstreams);
list_for_each_entry(stream, &dev->streams, list) {
- if (stream->vdev == NULL)
+ if (!video_is_registered(&stream->vdev))
continue;
- video_unregister_device(stream->vdev);
- stream->vdev = NULL;
+ video_unregister_device(&stream->vdev);
uvc_debugfs_cleanup_stream(stream);
}
@@ -1736,7 +1731,7 @@
static int uvc_register_video(struct uvc_device *dev,
struct uvc_streaming *stream)
{
- struct video_device *vdev;
+ struct video_device *vdev = &stream->vdev;
int ret;
/* Initialize the video buffers queue. */
@@ -1757,12 +1752,6 @@
uvc_debugfs_init_stream(stream);
/* Register the device with V4L. */
- vdev = video_device_alloc();
- if (vdev == NULL) {
- uvc_printk(KERN_ERR, "Failed to allocate video device (%d).\n",
- ret);
- return -ENOMEM;
- }
/* We already hold a reference to dev->udev. The video device will be
* unregistered before the reference is released, so we don't need to
@@ -1780,15 +1769,12 @@
/* Set the driver data before calling video_register_device, otherwise
* uvc_v4l2_open might race us.
*/
- stream->vdev = vdev;
video_set_drvdata(vdev, stream);
ret = video_register_device(vdev, VFL_TYPE_GRABBER, -1);
if (ret < 0) {
uvc_printk(KERN_ERR, "Failed to register video device (%d).\n",
ret);
- stream->vdev = NULL;
- video_device_release(vdev);
return ret;
}
@@ -1827,7 +1813,7 @@
if (ret < 0)
return ret;
- term->vdev = stream->vdev;
+ term->vdev = &stream->vdev;
}
return 0;
@@ -2461,6 +2447,14 @@
.bInterfaceProtocol = 0,
.driver_info = UVC_QUIRK_PROBE_MINMAX
| UVC_QUIRK_PROBE_EXTRAFIELDS },
+ /* Aveo Technology USB 2.0 Camera (Tasco USB Microscope) */
+ { .match_flags = USB_DEVICE_ID_MATCH_DEVICE
+ | USB_DEVICE_ID_MATCH_INT_INFO,
+ .idVendor = 0x1871,
+ .idProduct = 0x0516,
+ .bInterfaceClass = USB_CLASS_VENDOR_SPEC,
+ .bInterfaceSubClass = 1,
+ .bInterfaceProtocol = 0 },
/* Ecamm Pico iMage */
{ .match_flags = USB_DEVICE_ID_MATCH_DEVICE
| USB_DEVICE_ID_MATCH_INT_INFO,
diff --git a/drivers/media/usb/uvc/uvc_queue.c b/drivers/media/usb/uvc/uvc_queue.c
index 10c554e..87a19f3 100644
--- a/drivers/media/usb/uvc/uvc_queue.c
+++ b/drivers/media/usb/uvc/uvc_queue.c
@@ -306,25 +306,14 @@
int uvc_queue_mmap(struct uvc_video_queue *queue, struct vm_area_struct *vma)
{
- int ret;
-
- mutex_lock(&queue->mutex);
- ret = vb2_mmap(&queue->queue, vma);
- mutex_unlock(&queue->mutex);
-
- return ret;
+ return vb2_mmap(&queue->queue, vma);
}
#ifndef CONFIG_MMU
unsigned long uvc_queue_get_unmapped_area(struct uvc_video_queue *queue,
unsigned long pgoff)
{
- unsigned long ret;
-
- mutex_lock(&queue->mutex);
- ret = vb2_get_unmapped_area(&queue->queue, 0, 0, pgoff, 0);
- mutex_unlock(&queue->mutex);
- return ret;
+ return vb2_get_unmapped_area(&queue->queue, 0, 0, pgoff, 0);
}
#endif
diff --git a/drivers/media/usb/uvc/uvc_v4l2.c b/drivers/media/usb/uvc/uvc_v4l2.c
index 43e953f..c4b1ac6 100644
--- a/drivers/media/usb/uvc/uvc_v4l2.c
+++ b/drivers/media/usb/uvc/uvc_v4l2.c
@@ -511,7 +511,7 @@
stream->dev->users++;
mutex_unlock(&stream->dev->lock);
- v4l2_fh_init(&handle->vfh, stream->vdev);
+ v4l2_fh_init(&handle->vfh, &stream->vdev);
v4l2_fh_add(&handle->vfh);
handle->chain = stream->chain;
handle->stream = stream;
@@ -882,6 +882,35 @@
return uvc_query_v4l2_ctrl(chain, qc);
}
+static int uvc_ioctl_query_ext_ctrl(struct file *file, void *fh,
+ struct v4l2_query_ext_ctrl *qec)
+{
+ struct uvc_fh *handle = fh;
+ struct uvc_video_chain *chain = handle->chain;
+ struct v4l2_queryctrl qc = { qec->id };
+ int ret;
+
+ ret = uvc_query_v4l2_ctrl(chain, &qc);
+ if (ret)
+ return ret;
+
+ qec->id = qc.id;
+ qec->type = qc.type;
+ strlcpy(qec->name, qc.name, sizeof(qec->name));
+ qec->minimum = qc.minimum;
+ qec->maximum = qc.maximum;
+ qec->step = qc.step;
+ qec->default_value = qc.default_value;
+ qec->flags = qc.flags;
+ qec->elem_size = 4;
+ qec->elems = 1;
+ qec->nr_of_dims = 0;
+ memset(qec->dims, 0, sizeof(qec->dims));
+ memset(qec->reserved, 0, sizeof(qec->reserved));
+
+ return 0;
+}
+
static int uvc_ioctl_g_ctrl(struct file *file, void *fh,
struct v4l2_control *ctrl)
{
@@ -1018,26 +1047,37 @@
return uvc_query_v4l2_menu(chain, qm);
}
-static int uvc_ioctl_cropcap(struct file *file, void *fh,
- struct v4l2_cropcap *ccap)
+static int uvc_ioctl_g_selection(struct file *file, void *fh,
+ struct v4l2_selection *sel)
{
struct uvc_fh *handle = fh;
struct uvc_streaming *stream = handle->stream;
- if (ccap->type != stream->type)
+ if (sel->type != stream->type)
return -EINVAL;
- ccap->bounds.left = 0;
- ccap->bounds.top = 0;
+ switch (sel->target) {
+ case V4L2_SEL_TGT_CROP_DEFAULT:
+ case V4L2_SEL_TGT_CROP_BOUNDS:
+ if (stream->type != V4L2_BUF_TYPE_VIDEO_CAPTURE)
+ return -EINVAL;
+ break;
+ case V4L2_SEL_TGT_COMPOSE_DEFAULT:
+ case V4L2_SEL_TGT_COMPOSE_BOUNDS:
+ if (stream->type != V4L2_BUF_TYPE_VIDEO_OUTPUT)
+ return -EINVAL;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ sel->r.left = 0;
+ sel->r.top = 0;
mutex_lock(&stream->mutex);
- ccap->bounds.width = stream->cur_frame->wWidth;
- ccap->bounds.height = stream->cur_frame->wHeight;
+ sel->r.width = stream->cur_frame->wWidth;
+ sel->r.height = stream->cur_frame->wHeight;
mutex_unlock(&stream->mutex);
- ccap->defrect = ccap->bounds;
-
- ccap->pixelaspect.numerator = 1;
- ccap->pixelaspect.denominator = 1;
return 0;
}
@@ -1133,6 +1173,9 @@
uvc_simplify_fraction(&fival->discrete.numerator,
&fival->discrete.denominator, 8, 333);
} else {
+ if (fival->index)
+ return -EINVAL;
+
fival->type = V4L2_FRMIVAL_TYPE_STEPWISE;
fival->stepwise.min.numerator = frame->dwFrameInterval[0];
fival->stepwise.min.denominator = 10000000;
@@ -1443,13 +1486,14 @@
.vidioc_g_input = uvc_ioctl_g_input,
.vidioc_s_input = uvc_ioctl_s_input,
.vidioc_queryctrl = uvc_ioctl_queryctrl,
+ .vidioc_query_ext_ctrl = uvc_ioctl_query_ext_ctrl,
.vidioc_g_ctrl = uvc_ioctl_g_ctrl,
.vidioc_s_ctrl = uvc_ioctl_s_ctrl,
.vidioc_g_ext_ctrls = uvc_ioctl_g_ext_ctrls,
.vidioc_s_ext_ctrls = uvc_ioctl_s_ext_ctrls,
.vidioc_try_ext_ctrls = uvc_ioctl_try_ext_ctrls,
.vidioc_querymenu = uvc_ioctl_querymenu,
- .vidioc_cropcap = uvc_ioctl_cropcap,
+ .vidioc_g_selection = uvc_ioctl_g_selection,
.vidioc_g_parm = uvc_ioctl_g_parm,
.vidioc_s_parm = uvc_ioctl_s_parm,
.vidioc_enum_framesizes = uvc_ioctl_enum_framesizes,
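
The new uvc_ioctl_query_ext_ctrl() handler answers VIDIOC_QUERY_EXT_CTRL by translating the legacy v4l2_queryctrl data and reporting each control as a single 4-byte element. A hedged userspace example of exercising it; the device path and control ID are arbitrary:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>

int main(void)
{
	struct v4l2_query_ext_ctrl qec;
	int fd = open("/dev/video0", O_RDWR);

	if (fd < 0)
		return 1;

	memset(&qec, 0, sizeof(qec));
	qec.id = V4L2_CID_BRIGHTNESS;

	if (ioctl(fd, VIDIOC_QUERY_EXT_CTRL, &qec) == 0)
		printf("%s: range [%lld..%lld], default %lld, elem_size %u\n",
		       qec.name, qec.minimum, qec.maximum,
		       qec.default_value, qec.elem_size);

	close(fd);
	return 0;
}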
diff --git a/drivers/media/usb/uvc/uvcvideo.h b/drivers/media/usb/uvc/uvcvideo.h
index c63e5b5..1b594c2 100644
--- a/drivers/media/usb/uvc/uvcvideo.h
+++ b/drivers/media/usb/uvc/uvcvideo.h
@@ -443,7 +443,7 @@
struct uvc_streaming {
struct list_head list;
struct uvc_device *dev;
- struct video_device *vdev;
+ struct video_device vdev;
struct uvc_video_chain *chain;
atomic_t active;
diff --git a/drivers/media/v4l2-core/tuner-core.c b/drivers/media/v4l2-core/tuner-core.c
index 559f837..abdcffa 100644
--- a/drivers/media/v4l2-core/tuner-core.c
+++ b/drivers/media/v4l2-core/tuner-core.c
@@ -134,6 +134,9 @@
unsigned int type; /* chip type id */
void *config;
const char *name;
+#if defined(CONFIG_MEDIA_CONTROLLER)
+ struct media_pad pad;
+#endif
};
/*
@@ -434,6 +437,10 @@
t->name = analog_ops->info.name;
}
+#ifdef CONFIG_MEDIA_CONTROLLER
+ t->sd.entity.name = t->name;
+#endif
+
tuner_dbg("type set to %s\n", t->name);
t->mode_mask = new_mode_mask;
@@ -592,6 +599,9 @@
struct tuner *t;
struct tuner *radio;
struct tuner *tv;
+#ifdef CONFIG_MEDIA_CONTROLLER
+ int ret;
+#endif
t = kzalloc(sizeof(struct tuner), GFP_KERNEL);
if (NULL == t)
@@ -684,6 +694,18 @@
/* Should be just before return */
register_client:
+#if defined(CONFIG_MEDIA_CONTROLLER)
+ t->pad.flags = MEDIA_PAD_FL_SOURCE;
+ t->sd.entity.type = MEDIA_ENT_T_V4L2_SUBDEV_TUNER;
+ t->sd.entity.name = t->name;
+
+ ret = media_entity_init(&t->sd.entity, 1, &t->pad, 0);
+ if (ret < 0) {
+ tuner_err("failed to initialize media entity!\n");
+ kfree(t);
+ return -ENODEV;
+ }
+#endif
/* Sets a default mode */
if (t->mode_mask & T_ANALOG_TV)
t->mode = V4L2_TUNER_ANALOG_TV;
diff --git a/drivers/media/v4l2-core/v4l2-clk.c b/drivers/media/v4l2-core/v4l2-clk.c
index e18cc04..34e416a 100644
--- a/drivers/media/v4l2-core/v4l2-clk.c
+++ b/drivers/media/v4l2-core/v4l2-clk.c
@@ -9,6 +9,7 @@
*/
#include <linux/atomic.h>
+#include <linux/clk.h>
#include <linux/device.h>
#include <linux/errno.h>
#include <linux/list.h>
@@ -23,17 +24,13 @@
static DEFINE_MUTEX(clk_lock);
static LIST_HEAD(clk_list);
-static struct v4l2_clk *v4l2_clk_find(const char *dev_id, const char *id)
+static struct v4l2_clk *v4l2_clk_find(const char *dev_id)
{
struct v4l2_clk *clk;
- list_for_each_entry(clk, &clk_list, list) {
- if (strcmp(dev_id, clk->dev_id))
- continue;
-
- if (!id || !clk->id || !strcmp(clk->id, id))
+ list_for_each_entry(clk, &clk_list, list)
+ if (!strcmp(dev_id, clk->dev_id))
return clk;
- }
return ERR_PTR(-ENODEV);
}
@@ -41,9 +38,24 @@
struct v4l2_clk *v4l2_clk_get(struct device *dev, const char *id)
{
struct v4l2_clk *clk;
+ struct clk *ccf_clk = clk_get(dev, id);
+
+ if (PTR_ERR(ccf_clk) == -EPROBE_DEFER)
+ return ERR_PTR(-EPROBE_DEFER);
+
+ if (!IS_ERR_OR_NULL(ccf_clk)) {
+ clk = kzalloc(sizeof(*clk), GFP_KERNEL);
+ if (!clk) {
+ clk_put(ccf_clk);
+ return ERR_PTR(-ENOMEM);
+ }
+ clk->clk = ccf_clk;
+
+ return clk;
+ }
mutex_lock(&clk_lock);
- clk = v4l2_clk_find(dev_name(dev), id);
+ clk = v4l2_clk_find(dev_name(dev));
if (!IS_ERR(clk))
atomic_inc(&clk->use_count);
@@ -60,6 +72,12 @@
if (IS_ERR(clk))
return;
+ if (clk->clk) {
+ clk_put(clk->clk);
+ kfree(clk);
+ return;
+ }
+
mutex_lock(&clk_lock);
list_for_each_entry(tmp, &clk_list, list)
@@ -97,8 +115,12 @@
int v4l2_clk_enable(struct v4l2_clk *clk)
{
- int ret = v4l2_clk_lock_driver(clk);
+ int ret;
+ if (clk->clk)
+ return clk_prepare_enable(clk->clk);
+
+ ret = v4l2_clk_lock_driver(clk);
if (ret < 0)
return ret;
@@ -124,11 +146,14 @@
{
int enable;
+ if (clk->clk)
+ return clk_disable_unprepare(clk->clk);
+
mutex_lock(&clk->lock);
enable = --clk->enable;
- if (WARN(enable < 0, "Unbalanced %s() on %s:%s!\n", __func__,
- clk->dev_id, clk->id))
+ if (WARN(enable < 0, "Unbalanced %s() on %s!\n", __func__,
+ clk->dev_id))
clk->enable++;
else if (!enable && clk->ops->disable)
clk->ops->disable(clk);
@@ -141,8 +166,12 @@
unsigned long v4l2_clk_get_rate(struct v4l2_clk *clk)
{
- int ret = v4l2_clk_lock_driver(clk);
+ int ret;
+ if (clk->clk)
+ return clk_get_rate(clk->clk);
+
+ ret = v4l2_clk_lock_driver(clk);
if (ret < 0)
return ret;
@@ -161,7 +190,16 @@
int v4l2_clk_set_rate(struct v4l2_clk *clk, unsigned long rate)
{
- int ret = v4l2_clk_lock_driver(clk);
+ int ret;
+
+ if (clk->clk) {
+ long r = clk_round_rate(clk->clk, rate);
+ if (r < 0)
+ return r;
+ return clk_set_rate(clk->clk, r);
+ }
+
+ ret = v4l2_clk_lock_driver(clk);
if (ret < 0)
return ret;
@@ -181,7 +219,7 @@
struct v4l2_clk *v4l2_clk_register(const struct v4l2_clk_ops *ops,
const char *dev_id,
- const char *id, void *priv)
+ void *priv)
{
struct v4l2_clk *clk;
int ret;
@@ -193,9 +231,8 @@
if (!clk)
return ERR_PTR(-ENOMEM);
- clk->id = kstrdup(id, GFP_KERNEL);
clk->dev_id = kstrdup(dev_id, GFP_KERNEL);
- if ((id && !clk->id) || !clk->dev_id) {
+ if (!clk->dev_id) {
ret = -ENOMEM;
goto ealloc;
}
@@ -205,7 +242,7 @@
mutex_init(&clk->lock);
mutex_lock(&clk_lock);
- if (!IS_ERR(v4l2_clk_find(dev_id, id))) {
+ if (!IS_ERR(v4l2_clk_find(dev_id))) {
mutex_unlock(&clk_lock);
ret = -EEXIST;
goto eexist;
@@ -217,7 +254,6 @@
eexist:
ealloc:
- kfree(clk->id);
kfree(clk->dev_id);
kfree(clk);
return ERR_PTR(ret);
@@ -227,15 +263,14 @@
void v4l2_clk_unregister(struct v4l2_clk *clk)
{
if (WARN(atomic_read(&clk->use_count),
- "%s(): Refusing to unregister ref-counted %s:%s clock!\n",
- __func__, clk->dev_id, clk->id))
+ "%s(): Refusing to unregister ref-counted %s clock!\n",
+ __func__, clk->dev_id))
return;
mutex_lock(&clk_lock);
list_del(&clk->list);
mutex_unlock(&clk_lock);
- kfree(clk->id);
kfree(clk->dev_id);
kfree(clk);
}
@@ -253,7 +288,7 @@
}
struct v4l2_clk *__v4l2_clk_register_fixed(const char *dev_id,
- const char *id, unsigned long rate, struct module *owner)
+ unsigned long rate, struct module *owner)
{
struct v4l2_clk *clk;
struct v4l2_clk_fixed *priv = kzalloc(sizeof(*priv), GFP_KERNEL);
@@ -265,7 +300,7 @@
priv->ops.get_rate = fixed_get_rate;
priv->ops.owner = owner;
- clk = v4l2_clk_register(&priv->ops, dev_id, id, priv);
+ clk = v4l2_clk_register(&priv->ops, dev_id, priv);
if (IS_ERR(clk))
kfree(priv);
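
After this change v4l2_clk_get() first asks the common clock framework and only falls back to the V4L2-private clock list, so a clock consumer keeps the same call sequence regardless of where the clock comes from. A small consumer-side sketch under that assumption; the device and clock are placeholders:

#include <linux/device.h>
#include <linux/err.h>
#include <media/v4l2-clk.h>

static int foo_run_mclk(struct device *dev)
{
	struct v4l2_clk *clk;
	int ret;

	clk = v4l2_clk_get(dev, NULL);	/* may wrap a CCF struct clk now */
	if (IS_ERR(clk))
		return PTR_ERR(clk);	/* including -EPROBE_DEFER */

	ret = v4l2_clk_enable(clk);
	if (ret) {
		v4l2_clk_put(clk);
		return ret;
	}

	dev_info(dev, "mclk at %lu Hz\n", v4l2_clk_get_rate(clk));

	v4l2_clk_disable(clk);
	v4l2_clk_put(clk);
	return 0;
}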
diff --git a/drivers/media/v4l2-core/v4l2-ctrls.c b/drivers/media/v4l2-core/v4l2-ctrls.c
index 45c5b47..e3a3468 100644
--- a/drivers/media/v4l2-core/v4l2-ctrls.c
+++ b/drivers/media/v4l2-core/v4l2-ctrls.c
@@ -991,7 +991,8 @@
case V4L2_CID_AUTO_FOCUS_START:
case V4L2_CID_AUTO_FOCUS_STOP:
*type = V4L2_CTRL_TYPE_BUTTON;
- *flags |= V4L2_CTRL_FLAG_WRITE_ONLY;
+ *flags |= V4L2_CTRL_FLAG_WRITE_ONLY |
+ V4L2_CTRL_FLAG_EXECUTE_ON_WRITE;
*min = *max = *step = *def = 0;
break;
case V4L2_CID_POWER_LINE_FREQUENCY:
@@ -1172,7 +1173,8 @@
case V4L2_CID_FOCUS_RELATIVE:
case V4L2_CID_IRIS_RELATIVE:
case V4L2_CID_ZOOM_RELATIVE:
- *flags |= V4L2_CTRL_FLAG_WRITE_ONLY;
+ *flags |= V4L2_CTRL_FLAG_WRITE_ONLY |
+ V4L2_CTRL_FLAG_EXECUTE_ON_WRITE;
break;
case V4L2_CID_FLASH_STROBE_STATUS:
case V4L2_CID_AUTO_FOCUS_STATUS:
@@ -1609,6 +1611,19 @@
if (ctrl == NULL)
continue;
+
+ if (ctrl->flags & V4L2_CTRL_FLAG_EXECUTE_ON_WRITE)
+ changed = ctrl_changed = true;
+
+ /*
+ * Set has_changed to false to avoid generating
+ * the event V4L2_EVENT_CTRL_CH_VALUE
+ */
+ if (ctrl->flags & V4L2_CTRL_FLAG_VOLATILE) {
+ ctrl->has_changed = false;
+ continue;
+ }
+
for (idx = 0; !ctrl_changed && idx < ctrl->elems; idx++)
ctrl_changed = !ctrl->type_ops->equal(ctrl, idx,
ctrl->p_cur, ctrl->p_new);
@@ -1974,7 +1989,8 @@
sz_extra = 0;
if (type == V4L2_CTRL_TYPE_BUTTON)
- flags |= V4L2_CTRL_FLAG_WRITE_ONLY;
+ flags |= V4L2_CTRL_FLAG_WRITE_ONLY |
+ V4L2_CTRL_FLAG_EXECUTE_ON_WRITE;
else if (type == V4L2_CTRL_TYPE_CTRL_CLASS)
flags |= V4L2_CTRL_FLAG_READ_ONLY;
else if (type == V4L2_CTRL_TYPE_INTEGER64 ||
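
The v4l2-ctrls.c hunks teach the cluster-update path about V4L2_CTRL_FLAG_EXECUTE_ON_WRITE: a control carrying it is treated as changed on every write, so the driver's s_ctrl runs even when the written value equals the current one, while volatile controls stop generating a spurious value-change event from this path. Buttons and the relative PTZ controls now receive the flag automatically; a driver-defined custom control can request it explicitly, as in this hypothetical example:

#include <media/v4l2-ctrls.h>

#define V4L2_CID_FOO_TRIGGER (V4L2_CID_USER_BASE + 0x1000)	/* made up */

static int foo_s_ctrl(struct v4l2_ctrl *ctrl)
{
	/* acts on every write of the trigger, even with an unchanged value */
	return 0;
}

static const struct v4l2_ctrl_ops foo_ctrl_ops = {
	.s_ctrl = foo_s_ctrl,
};

static const struct v4l2_ctrl_config foo_trigger = {
	.ops	= &foo_ctrl_ops,
	.id	= V4L2_CID_FOO_TRIGGER,
	.name	= "Foo Trigger",
	.type	= V4L2_CTRL_TYPE_BUTTON,
	.flags	= V4L2_CTRL_FLAG_WRITE_ONLY | V4L2_CTRL_FLAG_EXECUTE_ON_WRITE,
};

For a plain button the explicit flags are redundant, since v4l2_ctrl_new() now adds them itself; the control would be hooked up with v4l2_ctrl_new_custom(hdl, &foo_trigger, NULL).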
diff --git a/drivers/media/v4l2-core/v4l2-dev.c b/drivers/media/v4l2-core/v4l2-dev.c
index 86bb93f..71a1b93 100644
--- a/drivers/media/v4l2-core/v4l2-dev.c
+++ b/drivers/media/v4l2-core/v4l2-dev.c
@@ -357,34 +357,6 @@
ret = vdev->fops->unlocked_ioctl(filp, cmd, arg);
if (lock)
mutex_unlock(lock);
- } else if (vdev->fops->ioctl) {
- /* This code path is a replacement for the BKL. It is a major
- * hack but it will have to do for those drivers that are not
- * yet converted to use unlocked_ioctl.
- *
- * All drivers implement struct v4l2_device, so we use the
- * lock defined there to serialize the ioctls.
- *
- * However, if the driver sleeps, then it blocks all ioctls
- * since the lock is still held. This is very common for
- * VIDIOC_DQBUF since that normally waits for a frame to arrive.
- * As a result any other ioctl calls will proceed very, very
- * slowly since each call will have to wait for the VIDIOC_QBUF
- * to finish. Things that should take 0.01s may now take 10-20
- * seconds.
- *
- * The workaround is to *not* take the lock for VIDIOC_DQBUF.
- * This actually works OK for videobuf-based drivers, since
- * videobuf will take its own internal lock.
- */
- struct mutex *m = &vdev->v4l2_dev->ioctl_lock;
-
- if (cmd != VIDIOC_DQBUF && mutex_lock_interruptible(m))
- return -ERESTARTSYS;
- if (video_is_registered(vdev))
- ret = vdev->fops->ioctl(filp, cmd, arg);
- if (cmd != VIDIOC_DQBUF)
- mutex_unlock(m);
} else
ret = -ENOTTY;
@@ -560,10 +532,9 @@
/* vfl_type and vfl_dir independent ioctls */
SET_VALID_IOCTL(ops, VIDIOC_QUERYCAP, vidioc_querycap);
- if (ops->vidioc_g_priority)
- set_bit(_IOC_NR(VIDIOC_G_PRIORITY), valid_ioctls);
- if (ops->vidioc_s_priority)
- set_bit(_IOC_NR(VIDIOC_S_PRIORITY), valid_ioctls);
+ set_bit(_IOC_NR(VIDIOC_G_PRIORITY), valid_ioctls);
+ set_bit(_IOC_NR(VIDIOC_S_PRIORITY), valid_ioctls);
+
/* Note: the control handler can also be passed through the filehandle,
and that can't be tested here. If the bit for these control ioctls
is set, then the ioctl is valid. But if it is 0, then it can still
@@ -640,6 +611,14 @@
SET_VALID_IOCTL(ops, VIDIOC_TRY_DECODER_CMD, vidioc_try_decoder_cmd);
SET_VALID_IOCTL(ops, VIDIOC_ENUM_FRAMESIZES, vidioc_enum_framesizes);
SET_VALID_IOCTL(ops, VIDIOC_ENUM_FRAMEINTERVALS, vidioc_enum_frameintervals);
+ if (ops->vidioc_g_crop || ops->vidioc_g_selection)
+ set_bit(_IOC_NR(VIDIOC_G_CROP), valid_ioctls);
+ if (ops->vidioc_s_crop || ops->vidioc_s_selection)
+ set_bit(_IOC_NR(VIDIOC_S_CROP), valid_ioctls);
+ SET_VALID_IOCTL(ops, VIDIOC_G_SELECTION, vidioc_g_selection);
+ SET_VALID_IOCTL(ops, VIDIOC_S_SELECTION, vidioc_s_selection);
+ if (ops->vidioc_cropcap || ops->vidioc_g_selection)
+ set_bit(_IOC_NR(VIDIOC_CROPCAP), valid_ioctls);
} else if (is_vbi) {
/* vbi specific ioctls */
if ((is_rx && (ops->vidioc_g_fmt_vbi_cap ||
@@ -708,14 +687,6 @@
SET_VALID_IOCTL(ops, VIDIOC_G_AUDOUT, vidioc_g_audout);
SET_VALID_IOCTL(ops, VIDIOC_S_AUDOUT, vidioc_s_audout);
}
- if (ops->vidioc_g_crop || ops->vidioc_g_selection)
- set_bit(_IOC_NR(VIDIOC_G_CROP), valid_ioctls);
- if (ops->vidioc_s_crop || ops->vidioc_s_selection)
- set_bit(_IOC_NR(VIDIOC_S_CROP), valid_ioctls);
- SET_VALID_IOCTL(ops, VIDIOC_G_SELECTION, vidioc_g_selection);
- SET_VALID_IOCTL(ops, VIDIOC_S_SELECTION, vidioc_s_selection);
- if (ops->vidioc_cropcap || ops->vidioc_g_selection)
- set_bit(_IOC_NR(VIDIOC_CROPCAP), valid_ioctls);
if (ops->vidioc_g_parm || (vdev->vfl_type == VFL_TYPE_GRABBER &&
ops->vidioc_g_std))
set_bit(_IOC_NR(VIDIOC_G_PARM), valid_ioctls);
@@ -943,8 +914,8 @@
vdev->vfl_type != VFL_TYPE_SUBDEV) {
vdev->entity.type = MEDIA_ENT_T_DEVNODE_V4L;
vdev->entity.name = vdev->name;
- vdev->entity.info.v4l.major = VIDEO_MAJOR;
- vdev->entity.info.v4l.minor = vdev->minor;
+ vdev->entity.info.dev.major = VIDEO_MAJOR;
+ vdev->entity.info.dev.minor = vdev->minor;
ret = media_device_register_entity(vdev->v4l2_dev->mdev,
&vdev->entity);
if (ret < 0)
diff --git a/drivers/media/v4l2-core/v4l2-device.c b/drivers/media/v4l2-core/v4l2-device.c
index 015f92a..5b0a30b 100644
--- a/drivers/media/v4l2-core/v4l2-device.c
+++ b/drivers/media/v4l2-core/v4l2-device.c
@@ -37,7 +37,6 @@
INIT_LIST_HEAD(&v4l2_dev->subdevs);
spin_lock_init(&v4l2_dev->lock);
- mutex_init(&v4l2_dev->ioctl_lock);
v4l2_prio_init(&v4l2_dev->prio);
kref_init(&v4l2_dev->ref);
get_device(dev);
@@ -248,8 +247,8 @@
goto clean_up;
}
#if defined(CONFIG_MEDIA_CONTROLLER)
- sd->entity.info.v4l.major = VIDEO_MAJOR;
- sd->entity.info.v4l.minor = vdev->minor;
+ sd->entity.info.dev.major = VIDEO_MAJOR;
+ sd->entity.info.dev.minor = vdev->minor;
#endif
sd->devnode = vdev;
}
diff --git a/drivers/media/v4l2-core/v4l2-dv-timings.c b/drivers/media/v4l2-core/v4l2-dv-timings.c
index b1d8dbb..c0e9638 100644
--- a/drivers/media/v4l2-core/v4l2-dv-timings.c
+++ b/drivers/media/v4l2-core/v4l2-dv-timings.c
@@ -282,7 +282,7 @@
(bt->polarities & V4L2_DV_VSYNC_POS_POL) ? "+" : "-",
bt->vsync, bt->vbackporch);
pr_info("%s: pixelclock: %llu\n", dev_prefix, bt->pixelclock);
- pr_info("%s: flags (0x%x):%s%s%s%s\n", dev_prefix, bt->flags,
+ pr_info("%s: flags (0x%x):%s%s%s%s%s\n", dev_prefix, bt->flags,
(bt->flags & V4L2_DV_FL_REDUCED_BLANKING) ?
" REDUCED_BLANKING" : "",
(bt->flags & V4L2_DV_FL_CAN_REDUCE_FPS) ?
@@ -290,7 +290,9 @@
(bt->flags & V4L2_DV_FL_REDUCED_FPS) ?
" REDUCED_FPS" : "",
(bt->flags & V4L2_DV_FL_HALF_LINE) ?
- " HALF_LINE" : "");
+ " HALF_LINE" : "",
+ (bt->flags & V4L2_DV_FL_IS_CE_VIDEO) ?
+ " CE_VIDEO" : "");
pr_info("%s: standards (0x%x):%s%s%s%s\n", dev_prefix, bt->standards,
(bt->standards & V4L2_DV_BT_STD_CEA861) ? " CEA" : "",
(bt->standards & V4L2_DV_BT_STD_DMT) ? " DMT" : "",
diff --git a/drivers/media/v4l2-core/v4l2-ioctl.c b/drivers/media/v4l2-core/v4l2-ioctl.c
index b084072..aa407cb 100644
--- a/drivers/media/v4l2-core/v4l2-ioctl.c
+++ b/drivers/media/v4l2-core/v4l2-ioctl.c
@@ -257,7 +257,7 @@
pr_cont(", width=%u, height=%u, "
"pixelformat=%c%c%c%c, field=%s, "
"bytesperline=%u, sizeimage=%u, colorspace=%d, "
- "flags %x, ycbcr_enc=%u, quantization=%u\n",
+ "flags=0x%x, ycbcr_enc=%u, quantization=%u\n",
pix->width, pix->height,
(pix->pixelformat & 0xff),
(pix->pixelformat >> 8) & 0xff,
@@ -273,7 +273,7 @@
mp = &p->fmt.pix_mp;
pr_cont(", width=%u, height=%u, "
"format=%c%c%c%c, field=%s, "
- "colorspace=%d, num_planes=%u, flags=%x, "
+ "colorspace=%d, num_planes=%u, flags=0x%x, "
"ycbcr_enc=%u, quantization=%u\n",
mp->width, mp->height,
(mp->pixelformat & 0xff),
@@ -901,6 +901,8 @@
*/
if (!allow_priv && c->ctrl_class == V4L2_CID_PRIVATE_BASE)
return 0;
+ if (c->ctrl_class == 0)
+ return 1;
/* Check that all controls are from the same control class. */
for (i = 0; i < c->count; i++) {
if (V4L2_CTRL_ID2CLASS(c->controls[i].id) != c->ctrl_class) {
@@ -1046,8 +1048,6 @@
struct video_device *vfd;
u32 *p = arg;
- if (ops->vidioc_g_priority)
- return ops->vidioc_g_priority(file, fh, arg);
vfd = video_devdata(file);
*p = v4l2_prio_max(vfd->prio);
return 0;
@@ -1060,9 +1060,9 @@
struct v4l2_fh *vfh;
u32 *p = arg;
- if (ops->vidioc_s_priority)
- return ops->vidioc_s_priority(file, fh, *p);
vfd = video_devdata(file);
+ if (!test_bit(V4L2_FL_USES_V4L2_FH, &vfd->flags))
+ return -ENOTTY;
vfh = file->private_data;
return v4l2_prio_change(vfd->prio, &vfh->prio, *p);
}
diff --git a/drivers/media/v4l2-core/v4l2-mem2mem.c b/drivers/media/v4l2-core/v4l2-mem2mem.c
index 80c588f..73824a5 100644
--- a/drivers/media/v4l2-core/v4l2-mem2mem.c
+++ b/drivers/media/v4l2-core/v4l2-mem2mem.c
@@ -97,7 +97,7 @@
*/
void *v4l2_m2m_next_buf(struct v4l2_m2m_queue_ctx *q_ctx)
{
- struct v4l2_m2m_buffer *b = NULL;
+ struct v4l2_m2m_buffer *b;
unsigned long flags;
spin_lock_irqsave(&q_ctx->rdy_spinlock, flags);
@@ -119,7 +119,7 @@
*/
void *v4l2_m2m_buf_remove(struct v4l2_m2m_queue_ctx *q_ctx)
{
- struct v4l2_m2m_buffer *b = NULL;
+ struct v4l2_m2m_buffer *b;
unsigned long flags;
spin_lock_irqsave(&q_ctx->rdy_spinlock, flags);
diff --git a/drivers/media/v4l2-core/v4l2-of.c b/drivers/media/v4l2-core/v4l2-of.c
index b4ed9a9..83143d3 100644
--- a/drivers/media/v4l2-core/v4l2-of.c
+++ b/drivers/media/v4l2-core/v4l2-of.c
@@ -19,11 +19,10 @@
#include <media/v4l2-of.h>
-static void v4l2_of_parse_csi_bus(const struct device_node *node,
- struct v4l2_of_endpoint *endpoint)
+static int v4l2_of_parse_csi_bus(const struct device_node *node,
+ struct v4l2_of_endpoint *endpoint)
{
struct v4l2_of_bus_mipi_csi2 *bus = &endpoint->bus.mipi_csi2;
- u32 data_lanes[ARRAY_SIZE(bus->data_lanes)];
struct property *prop;
bool have_clk_lane = false;
unsigned int flags = 0;
@@ -32,16 +31,34 @@
prop = of_find_property(node, "data-lanes", NULL);
if (prop) {
const __be32 *lane = NULL;
- int i;
+ unsigned int i;
- for (i = 0; i < ARRAY_SIZE(data_lanes); i++) {
- lane = of_prop_next_u32(prop, lane, &data_lanes[i]);
+ for (i = 0; i < ARRAY_SIZE(bus->data_lanes); i++) {
+ lane = of_prop_next_u32(prop, lane, &v);
if (!lane)
break;
+ bus->data_lanes[i] = v;
}
bus->num_data_lanes = i;
- while (i--)
- bus->data_lanes[i] = data_lanes[i];
+ }
+
+ prop = of_find_property(node, "lane-polarities", NULL);
+ if (prop) {
+ const __be32 *polarity = NULL;
+ unsigned int i;
+
+ for (i = 0; i < ARRAY_SIZE(bus->lane_polarities); i++) {
+ polarity = of_prop_next_u32(prop, polarity, &v);
+ if (!polarity)
+ break;
+ bus->lane_polarities[i] = v;
+ }
+
+ if (i < 1 + bus->num_data_lanes /* clock + data */) {
+ pr_warn("%s: too few lane-polarities entries (need %u, got %u)\n",
+ node->full_name, 1 + bus->num_data_lanes, i);
+ return -EINVAL;
+ }
}
if (!of_property_read_u32(node, "clock-lanes", &v)) {
@@ -56,6 +73,8 @@
bus->flags = flags;
endpoint->bus_type = V4L2_MBUS_CSI2;
+
+ return 0;
}
static void v4l2_of_parse_parallel_bus(const struct device_node *node,
@@ -127,11 +146,15 @@
int v4l2_of_parse_endpoint(const struct device_node *node,
struct v4l2_of_endpoint *endpoint)
{
+ int rval;
+
of_graph_parse_endpoint(node, &endpoint->base);
endpoint->bus_type = 0;
memset(&endpoint->bus, 0, sizeof(endpoint->bus));
- v4l2_of_parse_csi_bus(node, endpoint);
+ rval = v4l2_of_parse_csi_bus(node, endpoint);
+ if (rval)
+ return rval;
/*
* Parse the parallel video bus properties only if none
* of the MIPI CSI-2 specific properties were found.
@@ -142,3 +165,64 @@
return 0;
}
EXPORT_SYMBOL(v4l2_of_parse_endpoint);
+
+/**
+ * v4l2_of_parse_link() - parse a link between two endpoints
+ * @node: pointer to the endpoint at the local end of the link
+ * @link: pointer to the V4L2 OF link data structure
+ *
+ * Fill the link structure with the local and remote nodes and port numbers.
+ * The local_node and remote_node fields are set to point to the local and
+ * remote port's parent nodes respectively (the port parent node being the
+ * parent node of the port node if that node isn't a 'ports' node, or the
+ * grand-parent node of the port node otherwise).
+ *
+ * A reference is taken to both the local and remote nodes, the caller must use
+ * v4l2_of_put_link() to drop the references when done with the link.
+ *
+ * Return: 0 on success, or -ENOLINK if the remote endpoint can't be found.
+ */
+int v4l2_of_parse_link(const struct device_node *node,
+ struct v4l2_of_link *link)
+{
+ struct device_node *np;
+
+ memset(link, 0, sizeof(*link));
+
+ np = of_get_parent(node);
+ of_property_read_u32(np, "reg", &link->local_port);
+ np = of_get_next_parent(np);
+ if (of_node_cmp(np->name, "ports") == 0)
+ np = of_get_next_parent(np);
+ link->local_node = np;
+
+ np = of_parse_phandle(node, "remote-endpoint", 0);
+ if (!np) {
+ of_node_put(link->local_node);
+ return -ENOLINK;
+ }
+
+ np = of_get_parent(np);
+ of_property_read_u32(np, "reg", &link->remote_port);
+ np = of_get_next_parent(np);
+ if (of_node_cmp(np->name, "ports") == 0)
+ np = of_get_next_parent(np);
+ link->remote_node = np;
+
+ return 0;
+}
+EXPORT_SYMBOL(v4l2_of_parse_link);
+
+/**
+ * v4l2_of_put_link() - drop references to nodes in a link
+ * @link: pointer to the V4L2 OF link data structure
+ *
+ * Drop references to the local and remote nodes in the link. This function must
+ * be called on every link parsed with v4l2_of_parse_link().
+ */
+void v4l2_of_put_link(struct v4l2_of_link *link)
+{
+ of_node_put(link->local_node);
+ of_node_put(link->remote_node);
+}
+EXPORT_SYMBOL(v4l2_of_put_link);
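A short usage sketch for the new link helpers; the endpoint node 'ep' (and how it was looked up) is assumed here and is not part of this patch:

	struct v4l2_of_link link;
	int ret;

	ret = v4l2_of_parse_link(ep, &link);
	if (ret)
		return ret;	/* -ENOLINK: no remote-endpoint phandle */

	pr_debug("local port %u -> %s port %u\n", link.local_port,
		 link.remote_node->full_name, link.remote_port);

	/* Drop the node references taken by v4l2_of_parse_link(). */
	v4l2_of_put_link(&link);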
diff --git a/drivers/media/v4l2-core/v4l2-subdev.c b/drivers/media/v4l2-core/v4l2-subdev.c
index 19a034e..6359606 100644
--- a/drivers/media/v4l2-core/v4l2-subdev.c
+++ b/drivers/media/v4l2-core/v4l2-subdev.c
@@ -93,8 +93,7 @@
err:
#if defined(CONFIG_MEDIA_CONTROLLER)
- if (entity)
- media_entity_put(entity);
+ media_entity_put(entity);
#endif
v4l2_fh_del(&subdev_fh->vfh);
v4l2_fh_exit(&subdev_fh->vfh);
@@ -262,7 +261,7 @@
if (rval)
return rval;
- return v4l2_subdev_call(sd, pad, get_fmt, subdev_fh, format);
+ return v4l2_subdev_call(sd, pad, get_fmt, subdev_fh->pad, format);
}
case VIDIOC_SUBDEV_S_FMT: {
@@ -272,7 +271,7 @@
if (rval)
return rval;
- return v4l2_subdev_call(sd, pad, set_fmt, subdev_fh, format);
+ return v4l2_subdev_call(sd, pad, set_fmt, subdev_fh->pad, format);
}
case VIDIOC_SUBDEV_G_CROP: {
@@ -289,7 +288,7 @@
sel.target = V4L2_SEL_TGT_CROP;
rval = v4l2_subdev_call(
- sd, pad, get_selection, subdev_fh, &sel);
+ sd, pad, get_selection, subdev_fh->pad, &sel);
crop->rect = sel.r;
@@ -311,7 +310,7 @@
sel.r = crop->rect;
rval = v4l2_subdev_call(
- sd, pad, set_selection, subdev_fh, &sel);
+ sd, pad, set_selection, subdev_fh->pad, &sel);
crop->rect = sel.r;
@@ -321,20 +320,28 @@
case VIDIOC_SUBDEV_ENUM_MBUS_CODE: {
struct v4l2_subdev_mbus_code_enum *code = arg;
+ if (code->which != V4L2_SUBDEV_FORMAT_TRY &&
+ code->which != V4L2_SUBDEV_FORMAT_ACTIVE)
+ return -EINVAL;
+
if (code->pad >= sd->entity.num_pads)
return -EINVAL;
- return v4l2_subdev_call(sd, pad, enum_mbus_code, subdev_fh,
+ return v4l2_subdev_call(sd, pad, enum_mbus_code, subdev_fh->pad,
code);
}
case VIDIOC_SUBDEV_ENUM_FRAME_SIZE: {
struct v4l2_subdev_frame_size_enum *fse = arg;
+ if (fse->which != V4L2_SUBDEV_FORMAT_TRY &&
+ fse->which != V4L2_SUBDEV_FORMAT_ACTIVE)
+ return -EINVAL;
+
if (fse->pad >= sd->entity.num_pads)
return -EINVAL;
- return v4l2_subdev_call(sd, pad, enum_frame_size, subdev_fh,
+ return v4l2_subdev_call(sd, pad, enum_frame_size, subdev_fh->pad,
fse);
}
@@ -359,10 +366,14 @@
case VIDIOC_SUBDEV_ENUM_FRAME_INTERVAL: {
struct v4l2_subdev_frame_interval_enum *fie = arg;
+ if (fie->which != V4L2_SUBDEV_FORMAT_TRY &&
+ fie->which != V4L2_SUBDEV_FORMAT_ACTIVE)
+ return -EINVAL;
+
if (fie->pad >= sd->entity.num_pads)
return -EINVAL;
- return v4l2_subdev_call(sd, pad, enum_frame_interval, subdev_fh,
+ return v4l2_subdev_call(sd, pad, enum_frame_interval, subdev_fh->pad,
fie);
}
@@ -374,7 +385,7 @@
return rval;
return v4l2_subdev_call(
- sd, pad, get_selection, subdev_fh, sel);
+ sd, pad, get_selection, subdev_fh->pad, sel);
}
case VIDIOC_SUBDEV_S_SELECTION: {
@@ -385,7 +396,7 @@
return rval;
return v4l2_subdev_call(
- sd, pad, set_selection, subdev_fh, sel);
+ sd, pad, set_selection, subdev_fh->pad, sel);
}
case VIDIOC_G_EDID: {
diff --git a/drivers/media/v4l2-core/videobuf2-core.c b/drivers/media/v4l2-core/videobuf2-core.c
index cc16e76..66ada01 100644
--- a/drivers/media/v4l2-core/videobuf2-core.c
+++ b/drivers/media/v4l2-core/videobuf2-core.c
@@ -1247,6 +1247,16 @@
{
unsigned int plane;
+ if (V4L2_TYPE_IS_OUTPUT(b->type)) {
+ if (WARN_ON_ONCE(b->bytesused == 0)) {
+ pr_warn_once("use of bytesused == 0 is deprecated and will be removed in the future,\n");
+ if (vb->vb2_queue->allow_zero_bytesused)
+ pr_warn_once("use VIDIOC_DECODER_CMD(V4L2_DEC_CMD_STOP) instead.\n");
+ else
+ pr_warn_once("use the actual size instead.\n");
+ }
+ }
+
if (V4L2_TYPE_IS_MULTIPLANAR(b->type)) {
if (b->memory == V4L2_MEMORY_USERPTR) {
for (plane = 0; plane < vb->num_planes; ++plane) {
@@ -1276,13 +1286,22 @@
* userspace clearly never bothered to set it and
* it's a safe assumption that they really meant to
* use the full plane sizes.
+ *
+ * Some drivers, e.g. old codec drivers, use bytesused == 0
+ * as a way to indicate that streaming is finished.
+ * In that case, the driver should use the
+ * allow_zero_bytesused flag to keep old userspace
+ * applications working.
*/
for (plane = 0; plane < vb->num_planes; ++plane) {
struct v4l2_plane *pdst = &v4l2_planes[plane];
struct v4l2_plane *psrc = &b->m.planes[plane];
- pdst->bytesused = psrc->bytesused ?
- psrc->bytesused : pdst->length;
+ if (vb->vb2_queue->allow_zero_bytesused)
+ pdst->bytesused = psrc->bytesused;
+ else
+ pdst->bytesused = psrc->bytesused ?
+ psrc->bytesused : pdst->length;
pdst->data_offset = psrc->data_offset;
}
}
@@ -1295,6 +1314,11 @@
*
* If bytesused == 0 for the output buffer, then fall back
* to the full buffer size as that's a sensible default.
+ *
+ * Some drivers, e.g. old codec drivers, use bytesused == 0 as
+ * a way to indicate that streaming is finished. In that case,
+ * the driver should use the allow_zero_bytesused flag to keep
+ * old userspace applications working.
*/
if (b->memory == V4L2_MEMORY_USERPTR) {
v4l2_planes[0].m.userptr = b->m.userptr;
@@ -1306,10 +1330,13 @@
v4l2_planes[0].length = b->length;
}
- if (V4L2_TYPE_IS_OUTPUT(b->type))
- v4l2_planes[0].bytesused = b->bytesused ?
- b->bytesused : v4l2_planes[0].length;
- else
+ if (V4L2_TYPE_IS_OUTPUT(b->type)) {
+ if (vb->vb2_queue->allow_zero_bytesused)
+ v4l2_planes[0].bytesused = b->bytesused;
+ else
+ v4l2_planes[0].bytesused = b->bytesused ?
+ b->bytesused : v4l2_planes[0].length;
+ } else
v4l2_planes[0].bytesused = 0;
}
@@ -2760,7 +2787,8 @@
unsigned int initial_index;
unsigned int q_count;
unsigned int dq_count;
- unsigned int flags;
+ unsigned read_once:1;
+ unsigned write_immediately:1;
};
/**
@@ -2798,14 +2826,16 @@
*/
count = 1;
- dprintk(3, "setting up file io: mode %s, count %d, flags %08x\n",
- (read) ? "read" : "write", count, q->io_flags);
+ dprintk(3, "setting up file io: mode %s, count %d, read_once %d, write_immediately %d\n",
+ (read) ? "read" : "write", count, q->fileio_read_once,
+ q->fileio_write_immediately);
fileio = kzalloc(sizeof(struct vb2_fileio_data), GFP_KERNEL);
if (fileio == NULL)
return -ENOMEM;
- fileio->flags = q->io_flags;
+ fileio->read_once = q->fileio_read_once;
+ fileio->write_immediately = q->fileio_write_immediately;
/*
* Request buffers and use MMAP type to force driver
@@ -3028,13 +3058,11 @@
/*
* Queue next buffer if required.
*/
- if (buf->pos == buf->size ||
- (!read && (fileio->flags & VB2_FILEIO_WRITE_IMMEDIATELY))) {
+ if (buf->pos == buf->size || (!read && fileio->write_immediately)) {
/*
* Check if this is the last buffer to read.
*/
- if (read && (fileio->flags & VB2_FILEIO_READ_ONCE) &&
- fileio->dq_count == 1) {
+ if (read && fileio->read_once && fileio->dq_count == 1) {
dprintk(3, "read limit reached\n");
return __vb2_cleanup_fileio(q);
}
@@ -3225,7 +3253,6 @@
int vb2_thread_stop(struct vb2_queue *q)
{
struct vb2_threadio_data *threadio = q->threadio;
- struct vb2_fileio_data *fileio = q->fileio;
int err;
if (threadio == NULL)
@@ -3411,6 +3438,8 @@
struct mutex *lock = vdev->queue->lock ? vdev->queue->lock : vdev->lock;
int err = -EBUSY;
+ if (!(vdev->queue->io_modes & VB2_WRITE))
+ return -EINVAL;
if (lock && mutex_lock_interruptible(lock))
return -ERESTARTSYS;
if (vb2_queue_is_busy(vdev, file))
@@ -3433,6 +3462,8 @@
struct mutex *lock = vdev->queue->lock ? vdev->queue->lock : vdev->lock;
int err = -EBUSY;
+ if (!(vdev->queue->io_modes & VB2_READ))
+ return -EINVAL;
if (lock && mutex_lock_interruptible(lock))
return -ERESTARTSYS;
if (vb2_queue_is_busy(vdev, file))
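A sketch of how a driver would opt into the behaviour added above during queue setup; the ctx/queue names are hypothetical and only the three flags come from this patch:

	struct vb2_queue *q = &ctx->out_queue;	/* hypothetical driver queue */

	q->type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
	q->io_modes = VB2_MMAP | VB2_USERPTR;
	/*
	 * Old codec drivers that used bytesused == 0 as an end-of-stream
	 * marker keep working by setting this; new drivers should rely on
	 * V4L2_DEC_CMD_STOP instead.
	 */
	q->allow_zero_bytesused = 1;
	/* Per-queue replacements for the removed VB2_FILEIO_* io_flags. */
	q->fileio_read_once = 1;
	q->fileio_write_immediately = 1;

	return vb2_queue_init(q);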
diff --git a/drivers/memory/Kconfig b/drivers/memory/Kconfig
index 191383d8..868036f 100644
--- a/drivers/memory/Kconfig
+++ b/drivers/memory/Kconfig
@@ -83,6 +83,15 @@
bool
depends on FSL_SOC
+config JZ4780_NEMC
+ bool "Ingenic JZ4780 SoC NEMC driver"
+ default y
+ depends on MACH_JZ4780
+ help
+ This driver is for the NAND/External Memory Controller (NEMC) in
+ the Ingenic JZ4780. This controller is used to handle external
+ memory devices such as NAND and SRAM.
+
source "drivers/memory/tegra/Kconfig"
endif
diff --git a/drivers/memory/Makefile b/drivers/memory/Makefile
index 6b65481..b670441 100644
--- a/drivers/memory/Makefile
+++ b/drivers/memory/Makefile
@@ -13,5 +13,6 @@
obj-$(CONFIG_FSL_IFC) += fsl_ifc.o
obj-$(CONFIG_MVEBU_DEVBUS) += mvebu-devbus.o
obj-$(CONFIG_TEGRA20_MC) += tegra20-mc.o
+obj-$(CONFIG_JZ4780_NEMC) += jz4780-nemc.o
obj-$(CONFIG_TEGRA_MC) += tegra/
diff --git a/drivers/memory/jz4780-nemc.c b/drivers/memory/jz4780-nemc.c
new file mode 100644
index 0000000..919d192
--- /dev/null
+++ b/drivers/memory/jz4780-nemc.c
@@ -0,0 +1,391 @@
+/*
+ * JZ4780 NAND/external memory controller (NEMC)
+ *
+ * Copyright (c) 2015 Imagination Technologies
+ * Author: Alex Smith <alex@alex-smith.me.uk>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published
+ * by the Free Software Foundation.
+ */
+
+#include <linux/clk.h>
+#include <linux/init.h>
+#include <linux/math64.h>
+#include <linux/of.h>
+#include <linux/of_address.h>
+#include <linux/of_device.h>
+#include <linux/of_platform.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+
+#include <linux/jz4780-nemc.h>
+
+#define NEMC_SMCRn(n) (0x14 + (((n) - 1) * 4))
+#define NEMC_NFCSR 0x50
+
+#define NEMC_SMCR_SMT BIT(0)
+#define NEMC_SMCR_BW_SHIFT 6
+#define NEMC_SMCR_BW_MASK (0x3 << NEMC_SMCR_BW_SHIFT)
+#define NEMC_SMCR_BW_8 (0 << 6)
+#define NEMC_SMCR_TAS_SHIFT 8
+#define NEMC_SMCR_TAS_MASK (0xf << NEMC_SMCR_TAS_SHIFT)
+#define NEMC_SMCR_TAH_SHIFT 12
+#define NEMC_SMCR_TAH_MASK (0xf << NEMC_SMCR_TAH_SHIFT)
+#define NEMC_SMCR_TBP_SHIFT 16
+#define NEMC_SMCR_TBP_MASK (0xf << NEMC_SMCR_TBP_SHIFT)
+#define NEMC_SMCR_TAW_SHIFT 20
+#define NEMC_SMCR_TAW_MASK (0xf << NEMC_SMCR_TAW_SHIFT)
+#define NEMC_SMCR_TSTRV_SHIFT 24
+#define NEMC_SMCR_TSTRV_MASK (0x3f << NEMC_SMCR_TSTRV_SHIFT)
+
+#define NEMC_NFCSR_NFEn(n) BIT(((n) - 1) << 1)
+#define NEMC_NFCSR_NFCEn(n) BIT((((n) - 1) << 1) + 1)
+#define NEMC_NFCSR_TNFEn(n) BIT(16 + (n) - 1)
+
+struct jz4780_nemc {
+ spinlock_t lock;
+ struct device *dev;
+ void __iomem *base;
+ struct clk *clk;
+ uint32_t clk_period;
+ unsigned long banks_present;
+};
+
+/**
+ * jz4780_nemc_num_banks() - count the number of banks referenced by a device
+ * @dev: device to count banks for, must be a child of the NEMC.
+ *
+ * Return: The number of unique NEMC banks referred to by the specified NEMC
+ * child device. Unique here means that a device that references the same bank
+ * multiple times in its "reg" property will only count once.
+ */
+unsigned int jz4780_nemc_num_banks(struct device *dev)
+{
+ const __be32 *prop;
+ unsigned int bank, count = 0;
+ unsigned long referenced = 0;
+ int i = 0;
+
+ while ((prop = of_get_address(dev->of_node, i++, NULL, NULL))) {
+ bank = of_read_number(prop, 1);
+ if (!(referenced & BIT(bank))) {
+ referenced |= BIT(bank);
+ count++;
+ }
+ }
+
+ return count;
+}
+EXPORT_SYMBOL(jz4780_nemc_num_banks);
+
+/**
+ * jz4780_nemc_set_type() - set the type of device connected to a bank
+ * @dev: child device of the NEMC.
+ * @bank: bank number to configure.
+ * @type: type of device connected to the bank.
+ */
+void jz4780_nemc_set_type(struct device *dev, unsigned int bank,
+ enum jz4780_nemc_bank_type type)
+{
+ struct jz4780_nemc *nemc = dev_get_drvdata(dev->parent);
+ uint32_t nfcsr;
+
+ nfcsr = readl(nemc->base + NEMC_NFCSR);
+
+ /* TODO: Support toggle NAND devices. */
+ switch (type) {
+ case JZ4780_NEMC_BANK_SRAM:
+ nfcsr &= ~(NEMC_NFCSR_TNFEn(bank) | NEMC_NFCSR_NFEn(bank));
+ break;
+ case JZ4780_NEMC_BANK_NAND:
+ nfcsr &= ~NEMC_NFCSR_TNFEn(bank);
+ nfcsr |= NEMC_NFCSR_NFEn(bank);
+ break;
+ }
+
+ writel(nfcsr, nemc->base + NEMC_NFCSR);
+}
+EXPORT_SYMBOL(jz4780_nemc_set_type);
+
+/**
+ * jz4780_nemc_assert() - (de-)assert a NAND device's chip enable pin
+ * @dev: child device of the NEMC.
+ * @bank: bank number of device.
+ * @assert: whether the chip enable pin should be asserted.
+ *
+ * (De-)asserts the chip enable pin for the NAND device connected to the
+ * specified bank.
+ */
+void jz4780_nemc_assert(struct device *dev, unsigned int bank, bool assert)
+{
+ struct jz4780_nemc *nemc = dev_get_drvdata(dev->parent);
+ uint32_t nfcsr;
+
+ nfcsr = readl(nemc->base + NEMC_NFCSR);
+
+ if (assert)
+ nfcsr |= NEMC_NFCSR_NFCEn(bank);
+ else
+ nfcsr &= ~NEMC_NFCSR_NFCEn(bank);
+
+ writel(nfcsr, nemc->base + NEMC_NFCSR);
+}
+EXPORT_SYMBOL(jz4780_nemc_assert);
+
+static uint32_t jz4780_nemc_clk_period(struct jz4780_nemc *nemc)
+{
+ unsigned long rate;
+
+ rate = clk_get_rate(nemc->clk);
+ if (!rate)
+ return 0;
+
+ /* Return in picoseconds. */
+ return div64_ul(1000000000000ull, rate);
+}
+
+static uint32_t jz4780_nemc_ns_to_cycles(struct jz4780_nemc *nemc, uint32_t ns)
+{
+ return ((ns * 1000) + nemc->clk_period - 1) / nemc->clk_period;
+}
+
+static bool jz4780_nemc_configure_bank(struct jz4780_nemc *nemc,
+ unsigned int bank,
+ struct device_node *node)
+{
+ uint32_t smcr, val, cycles;
+
+ /*
+ * Conversion of tBP and tAW cycle counts to values supported by the
+ * hardware (round up to the next supported value).
+ */
+ static const uint32_t convert_tBP_tAW[] = {
+ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10,
+
+ /* 11 - 12 -> 12 cycles */
+ 11, 11,
+
+ /* 13 - 15 -> 15 cycles */
+ 12, 12, 12,
+
+ /* 16 - 20 -> 20 cycles */
+ 13, 13, 13, 13, 13,
+
+ /* 21 - 25 -> 25 cycles */
+ 14, 14, 14, 14, 14,
+
+ /* 26 - 31 -> 31 cycles */
+ 15, 15, 15, 15, 15, 15
+ };
+
+ smcr = readl(nemc->base + NEMC_SMCRn(bank));
+ smcr &= ~NEMC_SMCR_SMT;
+
+ if (!of_property_read_u32(node, "ingenic,nemc-bus-width", &val)) {
+ smcr &= ~NEMC_SMCR_BW_MASK;
+ switch (val) {
+ case 8:
+ smcr |= NEMC_SMCR_BW_8;
+ break;
+ default:
+ /*
+ * Earlier SoCs support a 16-bit bus width (the 4780
+ * does not); until those are properly supported, error out.
+ */
+ dev_err(nemc->dev, "unsupported bus width: %u\n", val);
+ return false;
+ }
+ }
+
+ if (of_property_read_u32(node, "ingenic,nemc-tAS", &val) == 0) {
+ smcr &= ~NEMC_SMCR_TAS_MASK;
+ cycles = jz4780_nemc_ns_to_cycles(nemc, val);
+ if (cycles > 15) {
+ dev_err(nemc->dev, "tAS %u is too high (%u cycles)\n",
+ val, cycles);
+ return false;
+ }
+
+ smcr |= cycles << NEMC_SMCR_TAS_SHIFT;
+ }
+
+ if (of_property_read_u32(node, "ingenic,nemc-tAH", &val) == 0) {
+ smcr &= ~NEMC_SMCR_TAH_MASK;
+ cycles = jz4780_nemc_ns_to_cycles(nemc, val);
+ if (cycles > 15) {
+ dev_err(nemc->dev, "tAH %u is too high (%u cycles)\n",
+ val, cycles);
+ return false;
+ }
+
+ smcr |= cycles << NEMC_SMCR_TAH_SHIFT;
+ }
+
+ if (of_property_read_u32(node, "ingenic,nemc-tBP", &val) == 0) {
+ smcr &= ~NEMC_SMCR_TBP_MASK;
+ cycles = jz4780_nemc_ns_to_cycles(nemc, val);
+ if (cycles > 31) {
+ dev_err(nemc->dev, "tBP %u is too high (%u cycles)\n",
+ val, cycles);
+ return false;
+ }
+
+ smcr |= convert_tBP_tAW[cycles] << NEMC_SMCR_TBP_SHIFT;
+ }
+
+ if (of_property_read_u32(node, "ingenic,nemc-tAW", &val) == 0) {
+ smcr &= ~NEMC_SMCR_TAW_MASK;
+ cycles = jz4780_nemc_ns_to_cycles(nemc, val);
+ if (cycles > 31) {
+ dev_err(nemc->dev, "tAW %u is too high (%u cycles)\n",
+ val, cycles);
+ return false;
+ }
+
+ smcr |= convert_tBP_tAW[cycles] << NEMC_SMCR_TAW_SHIFT;
+ }
+
+ if (of_property_read_u32(node, "ingenic,nemc-tSTRV", &val) == 0) {
+ smcr &= ~NEMC_SMCR_TSTRV_MASK;
+ cycles = jz4780_nemc_ns_to_cycles(nemc, val);
+ if (cycles > 63) {
+ dev_err(nemc->dev, "tSTRV %u is too high (%u cycles)\n",
+ val, cycles);
+ return false;
+ }
+
+ smcr |= cycles << NEMC_SMCR_TSTRV_SHIFT;
+ }
+
+ writel(smcr, nemc->base + NEMC_SMCRn(bank));
+ return true;
+}
+
+static int jz4780_nemc_probe(struct platform_device *pdev)
+{
+ struct device *dev = &pdev->dev;
+ struct jz4780_nemc *nemc;
+ struct resource *res;
+ struct device_node *child;
+ const __be32 *prop;
+ unsigned int bank;
+ unsigned long referenced;
+ int i, ret;
+
+ nemc = devm_kzalloc(dev, sizeof(*nemc), GFP_KERNEL);
+ if (!nemc)
+ return -ENOMEM;
+
+ spin_lock_init(&nemc->lock);
+ nemc->dev = dev;
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ nemc->base = devm_ioremap_resource(dev, res);
+ if (IS_ERR(nemc->base)) {
+ dev_err(dev, "failed to get I/O memory\n");
+ return PTR_ERR(nemc->base);
+ }
+
+ writel(0, nemc->base + NEMC_NFCSR);
+
+ nemc->clk = devm_clk_get(dev, NULL);
+ if (IS_ERR(nemc->clk)) {
+ dev_err(dev, "failed to get clock\n");
+ return PTR_ERR(nemc->clk);
+ }
+
+ ret = clk_prepare_enable(nemc->clk);
+ if (ret) {
+ dev_err(dev, "failed to enable clock: %d\n", ret);
+ return ret;
+ }
+
+ nemc->clk_period = jz4780_nemc_clk_period(nemc);
+ if (!nemc->clk_period) {
+ dev_err(dev, "failed to calculate clock period\n");
+ clk_disable_unprepare(nemc->clk);
+ return -EINVAL;
+ }
+
+ /*
+ * Iterate over child devices, check that they do not conflict with
+ * each other, and register child devices for them. If a child device
+ * has invalid properties, it is ignored and no platform device is
+ * registered for it.
+ */
+ for_each_child_of_node(nemc->dev->of_node, child) {
+ referenced = 0;
+ i = 0;
+ while ((prop = of_get_address(child, i++, NULL, NULL))) {
+ bank = of_read_number(prop, 1);
+ if (bank < 1 || bank >= JZ4780_NEMC_NUM_BANKS) {
+ dev_err(nemc->dev,
+ "%s requests invalid bank %u\n",
+ child->full_name, bank);
+
+ /* Will continue the outer loop below. */
+ referenced = 0;
+ break;
+ }
+
+ referenced |= BIT(bank);
+ }
+
+ if (!referenced) {
+ dev_err(nemc->dev, "%s has no addresses\n",
+ child->full_name);
+ continue;
+ } else if (nemc->banks_present & referenced) {
+ dev_err(nemc->dev, "%s conflicts with another node\n",
+ child->full_name);
+ continue;
+ }
+
+ /* Configure bank parameters. */
+ for_each_set_bit(bank, &referenced, JZ4780_NEMC_NUM_BANKS) {
+ if (!jz4780_nemc_configure_bank(nemc, bank, child)) {
+ referenced = 0;
+ break;
+ }
+ }
+
+ if (referenced) {
+ if (of_platform_device_create(child, NULL, nemc->dev))
+ nemc->banks_present |= referenced;
+ }
+ }
+
+ platform_set_drvdata(pdev, nemc);
+ dev_info(dev, "JZ4780 NEMC initialised\n");
+ return 0;
+}
+
+static int jz4780_nemc_remove(struct platform_device *pdev)
+{
+ struct jz4780_nemc *nemc = platform_get_drvdata(pdev);
+
+ clk_disable_unprepare(nemc->clk);
+ return 0;
+}
+
+static const struct of_device_id jz4780_nemc_dt_match[] = {
+ { .compatible = "ingenic,jz4780-nemc" },
+ {},
+};
+
+static struct platform_driver jz4780_nemc_driver = {
+ .probe = jz4780_nemc_probe,
+ .remove = jz4780_nemc_remove,
+ .driver = {
+ .name = "jz4780-nemc",
+ .of_match_table = of_match_ptr(jz4780_nemc_dt_match),
+ },
+};
+
+static int __init jz4780_nemc_init(void)
+{
+ return platform_driver_register(&jz4780_nemc_driver);
+}
+subsys_initcall(jz4780_nemc_init);
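How a NAND or SRAM child driver would be expected to use the exported helpers; the bank number and the surrounding probe() context are placeholders, not taken from this patch:

	/* 'dev' is a child device of the NEMC node. */
	unsigned int bank = 1;	/* normally read from the child's "reg" cells */

	if (!jz4780_nemc_num_banks(dev))
		return -ENODEV;

	jz4780_nemc_set_type(dev, bank, JZ4780_NEMC_BANK_NAND);

	/* Assert chip enable around an access, then release it. */
	jz4780_nemc_assert(dev, bank, true);
	/* ... NAND command/address/data cycles ... */
	jz4780_nemc_assert(dev, bank, false);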
diff --git a/drivers/misc/bh1780gli.c b/drivers/misc/bh1780gli.c
index 4c4a59b..7f90ce5 100644
--- a/drivers/misc/bh1780gli.c
+++ b/drivers/misc/bh1780gli.c
@@ -230,6 +230,8 @@
{ },
};
+MODULE_DEVICE_TABLE(i2c, bh1780_id);
+
#ifdef CONFIG_OF
static const struct of_device_id of_bh1780_match[] = {
{ .compatible = "rohm,bh1780gli", },
diff --git a/drivers/misc/carma/carma-fpga-program.c b/drivers/misc/carma/carma-fpga-program.c
index 06166ac..0b1bd85 100644
--- a/drivers/misc/carma/carma-fpga-program.c
+++ b/drivers/misc/carma/carma-fpga-program.c
@@ -479,6 +479,7 @@
static noinline int fpga_program_cpu(struct fpga_dev *priv)
{
int ret;
+ unsigned long timeout;
/* Disable the programmer */
fpga_programmer_disable(priv);
@@ -497,8 +498,8 @@
goto out_disable_controller;
/* Wait for the interrupt handler to signal that programming finished */
- ret = wait_for_completion_timeout(&priv->completion, 2 * HZ);
- if (!ret) {
+ timeout = wait_for_completion_timeout(&priv->completion, 2 * HZ);
+ if (!timeout) {
dev_err(priv->dev, "Timed out waiting for completion\n");
ret = -ETIMEDOUT;
goto out_disable_controller;
@@ -536,6 +537,7 @@
struct sg_table table;
dma_cookie_t cookie;
int ret, i;
+ unsigned long timeout;
/* Disable the programmer */
fpga_programmer_disable(priv);
@@ -623,8 +625,8 @@
dev_dbg(priv->dev, "enabled the controller\n");
/* Wait for the interrupt handler to signal that programming finished */
- ret = wait_for_completion_timeout(&priv->completion, 2 * HZ);
- if (!ret) {
+ timeout = wait_for_completion_timeout(&priv->completion, 2 * HZ);
+ if (!timeout) {
dev_err(priv->dev, "Timed out waiting for completion\n");
ret = -ETIMEDOUT;
goto out_disable_controller;
@@ -1142,7 +1144,7 @@
return ret;
}
-static struct of_device_id fpga_of_match[] = {
+static const struct of_device_id fpga_of_match[] = {
{ .compatible = "carma,fpga-programmer", },
{},
};
diff --git a/drivers/misc/carma/carma-fpga.c b/drivers/misc/carma/carma-fpga.c
index 68cdfe1..5aba3fd 100644
--- a/drivers/misc/carma/carma-fpga.c
+++ b/drivers/misc/carma/carma-fpga.c
@@ -1486,7 +1486,7 @@
return 0;
}
-static struct of_device_id data_of_match[] = {
+static const struct of_device_id data_of_match[] = {
{ .compatible = "carma,carma-fpga", },
{},
};
diff --git a/drivers/misc/lis3lv02d/lis3lv02d.c b/drivers/misc/lis3lv02d/lis3lv02d.c
index 3ef4627..4739689 100644
--- a/drivers/misc/lis3lv02d/lis3lv02d.c
+++ b/drivers/misc/lis3lv02d/lis3lv02d.c
@@ -950,6 +950,7 @@
struct lis3lv02d_platform_data *pdata;
struct device_node *np = lis3->of_node;
u32 val;
+ s32 sval;
if (!lis3->of_node)
return 0;
@@ -1031,6 +1032,23 @@
pdata->wakeup_flags |= LIS3_WAKEUP_Z_LO;
if (of_get_property(np, "st,wakeup-z-hi", NULL))
pdata->wakeup_flags |= LIS3_WAKEUP_Z_HI;
+ if (of_get_property(np, "st,wakeup-threshold", &val))
+ pdata->wakeup_thresh = val;
+
+ if (of_get_property(np, "st,wakeup2-x-lo", NULL))
+ pdata->wakeup_flags2 |= LIS3_WAKEUP_X_LO;
+ if (of_get_property(np, "st,wakeup2-x-hi", NULL))
+ pdata->wakeup_flags2 |= LIS3_WAKEUP_X_HI;
+ if (of_get_property(np, "st,wakeup2-y-lo", NULL))
+ pdata->wakeup_flags2 |= LIS3_WAKEUP_Y_LO;
+ if (of_get_property(np, "st,wakeup2-y-hi", NULL))
+ pdata->wakeup_flags2 |= LIS3_WAKEUP_Y_HI;
+ if (of_get_property(np, "st,wakeup2-z-lo", NULL))
+ pdata->wakeup_flags2 |= LIS3_WAKEUP_Z_LO;
+ if (of_get_property(np, "st,wakeup2-z-hi", NULL))
+ pdata->wakeup_flags2 |= LIS3_WAKEUP_Z_HI;
+ if (of_get_property(np, "st,wakeup2-threshold", &val))
+ pdata->wakeup_thresh2 = val;
if (!of_property_read_u32(np, "st,highpass-cutoff-hz", &val)) {
switch (val) {
@@ -1054,29 +1072,29 @@
if (of_get_property(np, "st,hipass2-disable", NULL))
pdata->hipass_ctrl |= LIS3_HIPASS2_DISABLE;
- if (of_get_property(np, "st,axis-x", &val))
- pdata->axis_x = val;
- if (of_get_property(np, "st,axis-y", &val))
- pdata->axis_y = val;
- if (of_get_property(np, "st,axis-z", &val))
- pdata->axis_z = val;
+ if (of_property_read_s32(np, "st,axis-x", &sval) == 0)
+ pdata->axis_x = sval;
+ if (of_property_read_s32(np, "st,axis-y", &sval) == 0)
+ pdata->axis_y = sval;
+ if (of_property_read_s32(np, "st,axis-z", &sval) == 0)
+ pdata->axis_z = sval;
if (of_get_property(np, "st,default-rate", NULL))
pdata->default_rate = val;
- if (of_get_property(np, "st,min-limit-x", &val))
- pdata->st_min_limits[0] = val;
- if (of_get_property(np, "st,min-limit-y", &val))
- pdata->st_min_limits[1] = val;
- if (of_get_property(np, "st,min-limit-z", &val))
- pdata->st_min_limits[2] = val;
+ if (of_property_read_s32(np, "st,min-limit-x", &sval) == 0)
+ pdata->st_min_limits[0] = sval;
+ if (of_property_read_s32(np, "st,min-limit-y", &sval) == 0)
+ pdata->st_min_limits[1] = sval;
+ if (of_property_read_s32(np, "st,min-limit-z", &sval) == 0)
+ pdata->st_min_limits[2] = sval;
- if (of_get_property(np, "st,max-limit-x", &val))
- pdata->st_max_limits[0] = val;
- if (of_get_property(np, "st,max-limit-y", &val))
- pdata->st_max_limits[1] = val;
- if (of_get_property(np, "st,max-limit-z", &val))
- pdata->st_max_limits[2] = val;
+ if (of_property_read_s32(np, "st,max-limit-x", &sval) == 0)
+ pdata->st_max_limits[0] = sval;
+ if (of_property_read_s32(np, "st,max-limit-y", &sval) == 0)
+ pdata->st_max_limits[1] = sval;
+ if (of_property_read_s32(np, "st,max-limit-z", &sval) == 0)
+ pdata->st_max_limits[2] = sval;
lis3->pdata = pdata;
diff --git a/drivers/misc/lis3lv02d/lis3lv02d_i2c.c b/drivers/misc/lis3lv02d/lis3lv02d_i2c.c
index 63fe096..e3e7f1d 100644
--- a/drivers/misc/lis3lv02d/lis3lv02d_i2c.c
+++ b/drivers/misc/lis3lv02d/lis3lv02d_i2c.c
@@ -106,7 +106,7 @@
{ .as_array = { LIS3_DEV_X, LIS3_DEV_Y, LIS3_DEV_Z } };
#ifdef CONFIG_OF
-static struct of_device_id lis3lv02d_i2c_dt_ids[] = {
+static const struct of_device_id lis3lv02d_i2c_dt_ids[] = {
{ .compatible = "st,lis3lv02d" },
{}
};
diff --git a/drivers/misc/lis3lv02d/lis3lv02d_spi.c b/drivers/misc/lis3lv02d/lis3lv02d_spi.c
index bd06d0c..b2f6e16 100644
--- a/drivers/misc/lis3lv02d/lis3lv02d_spi.c
+++ b/drivers/misc/lis3lv02d/lis3lv02d_spi.c
@@ -61,7 +61,7 @@
{ .as_array = { 1, 2, 3 } };
#ifdef CONFIG_OF
-static struct of_device_id lis302dl_spi_dt_ids[] = {
+static const struct of_device_id lis302dl_spi_dt_ids[] = {
{ .compatible = "st,lis302dl-spi" },
{}
};
diff --git a/drivers/misc/mei/Makefile b/drivers/misc/mei/Makefile
index 8ebc6cd..518914a 100644
--- a/drivers/misc/mei/Makefile
+++ b/drivers/misc/mei/Makefile
@@ -21,3 +21,6 @@
obj-$(CONFIG_INTEL_MEI_TXE) += mei-txe.o
mei-txe-objs := pci-txe.o
mei-txe-objs += hw-txe.o
+
+mei-$(CONFIG_EVENT_TRACING) += mei-trace.o
+CFLAGS_mei-trace.o = -I$(src)
diff --git a/drivers/misc/mei/amthif.c b/drivers/misc/mei/amthif.c
index 40ea639..d2cd53e 100644
--- a/drivers/misc/mei/amthif.c
+++ b/drivers/misc/mei/amthif.c
@@ -48,10 +48,7 @@
{
/* reset iamthif parameters. */
dev->iamthif_current_cb = NULL;
- dev->iamthif_msg_buf_size = 0;
- dev->iamthif_msg_buf_index = 0;
dev->iamthif_canceled = false;
- dev->iamthif_ioctl = false;
dev->iamthif_state = MEI_IAMTHIF_IDLE;
dev->iamthif_timer = 0;
dev->iamthif_stall_timer = 0;
@@ -69,7 +66,6 @@
{
struct mei_cl *cl = &dev->iamthif_cl;
struct mei_me_client *me_cl;
- unsigned char *msg_buf;
int ret;
dev->iamthif_state = MEI_IAMTHIF_IDLE;
@@ -90,18 +86,6 @@
dev->iamthif_mtu = me_cl->props.max_msg_length;
dev_dbg(dev->dev, "IAMTHIF_MTU = %d\n", dev->iamthif_mtu);
- kfree(dev->iamthif_msg_buf);
- dev->iamthif_msg_buf = NULL;
-
- /* allocate storage for ME message buffer */
- msg_buf = kcalloc(dev->iamthif_mtu,
- sizeof(unsigned char), GFP_KERNEL);
- if (!msg_buf) {
- ret = -ENOMEM;
- goto out;
- }
-
- dev->iamthif_msg_buf = msg_buf;
ret = mei_cl_link(cl, MEI_IAMTHIF_HOST_CLIENT_ID);
if (ret < 0) {
@@ -194,30 +178,33 @@
dev_dbg(dev->dev, "woke up from sleep\n");
}
+ if (cb->status) {
+ rets = cb->status;
+ dev_dbg(dev->dev, "read operation failed %d\n", rets);
+ goto free;
+ }
dev_dbg(dev->dev, "Got amthif data\n");
dev->iamthif_timer = 0;
- if (cb) {
- timeout = cb->read_time +
- mei_secs_to_jiffies(MEI_IAMTHIF_READ_TIMER);
- dev_dbg(dev->dev, "amthif timeout = %lud\n",
- timeout);
+ timeout = cb->read_time +
+ mei_secs_to_jiffies(MEI_IAMTHIF_READ_TIMER);
+ dev_dbg(dev->dev, "amthif timeout = %lud\n",
+ timeout);
- if (time_after(jiffies, timeout)) {
- dev_dbg(dev->dev, "amthif Time out\n");
- /* 15 sec for the message has expired */
- list_del(&cb->list);
- rets = -ETIME;
- goto free;
- }
+ if (time_after(jiffies, timeout)) {
+ dev_dbg(dev->dev, "amthif Time out\n");
+ /* 15 sec for the message has expired */
+ list_del_init(&cb->list);
+ rets = -ETIME;
+ goto free;
}
/* if the whole message will fit remove it from the list */
if (cb->buf_idx >= *offset && length >= (cb->buf_idx - *offset))
- list_del(&cb->list);
+ list_del_init(&cb->list);
else if (cb->buf_idx > 0 && cb->buf_idx <= *offset) {
/* end of the message has been reached */
- list_del(&cb->list);
+ list_del_init(&cb->list);
rets = 0;
goto free;
}
@@ -225,15 +212,15 @@
* remove message from deletion list
*/
- dev_dbg(dev->dev, "amthif cb->response_buffer size - %d\n",
- cb->response_buffer.size);
+ dev_dbg(dev->dev, "amthif cb->buf size - %d\n",
+ cb->buf.size);
dev_dbg(dev->dev, "amthif cb->buf_idx - %lu\n", cb->buf_idx);
/* length is being truncated to PAGE_SIZE, however,
* the buf_idx may point beyond */
length = min_t(size_t, length, (cb->buf_idx - *offset));
- if (copy_to_user(ubuf, cb->response_buffer.data + *offset, length)) {
+ if (copy_to_user(ubuf, cb->buf.data + *offset, length)) {
dev_dbg(dev->dev, "failed to copy data to userland\n");
rets = -EFAULT;
} else {
@@ -252,126 +239,88 @@
}
/**
+ * mei_amthif_read_start - queue message for sending read credential
+ *
+ * @cl: host client
+ * @file: file pointer of message recipient
+ *
+ * Return: 0 on success, <0 on failure.
+ */
+static int mei_amthif_read_start(struct mei_cl *cl, struct file *file)
+{
+ struct mei_device *dev = cl->dev;
+ struct mei_cl_cb *cb;
+ size_t length = dev->iamthif_mtu;
+ int rets;
+
+ cb = mei_io_cb_init(cl, MEI_FOP_READ, file);
+ if (!cb) {
+ rets = -ENOMEM;
+ goto err;
+ }
+
+ rets = mei_io_cb_alloc_buf(cb, length);
+ if (rets)
+ goto err;
+
+ list_add_tail(&cb->list, &dev->ctrl_wr_list.list);
+
+ dev->iamthif_state = MEI_IAMTHIF_READING;
+ dev->iamthif_file_object = cb->file_object;
+ dev->iamthif_current_cb = cb;
+
+ return 0;
+err:
+ mei_io_cb_free(cb);
+ return rets;
+}
+
+/**
* mei_amthif_send_cmd - send amthif command to the ME
*
- * @dev: the device structure
+ * @cl: the host client
* @cb: mei call back struct
*
* Return: 0 on success, <0 on failure.
- *
*/
-static int mei_amthif_send_cmd(struct mei_device *dev, struct mei_cl_cb *cb)
+static int mei_amthif_send_cmd(struct mei_cl *cl, struct mei_cl_cb *cb)
{
- struct mei_msg_hdr mei_hdr;
- struct mei_cl *cl;
+ struct mei_device *dev;
int ret;
- if (!dev || !cb)
+ if (!cl->dev || !cb)
return -ENODEV;
- dev_dbg(dev->dev, "write data to amthif client.\n");
+ dev = cl->dev;
dev->iamthif_state = MEI_IAMTHIF_WRITING;
dev->iamthif_current_cb = cb;
dev->iamthif_file_object = cb->file_object;
dev->iamthif_canceled = false;
- dev->iamthif_ioctl = true;
- dev->iamthif_msg_buf_size = cb->request_buffer.size;
- memcpy(dev->iamthif_msg_buf, cb->request_buffer.data,
- cb->request_buffer.size);
- cl = &dev->iamthif_cl;
- ret = mei_cl_flow_ctrl_creds(cl);
+ ret = mei_cl_write(cl, cb, false);
if (ret < 0)
return ret;
- if (ret && mei_hbuf_acquire(dev)) {
- ret = 0;
- if (cb->request_buffer.size > mei_hbuf_max_len(dev)) {
- mei_hdr.length = mei_hbuf_max_len(dev);
- mei_hdr.msg_complete = 0;
- } else {
- mei_hdr.length = cb->request_buffer.size;
- mei_hdr.msg_complete = 1;
- }
+ if (cb->completed)
+ cb->status = mei_amthif_read_start(cl, cb->file_object);
- mei_hdr.host_addr = cl->host_client_id;
- mei_hdr.me_addr = cl->me_client_id;
- mei_hdr.reserved = 0;
- mei_hdr.internal = 0;
- dev->iamthif_msg_buf_index += mei_hdr.length;
- ret = mei_write_message(dev, &mei_hdr, dev->iamthif_msg_buf);
- if (ret)
- return ret;
-
- if (mei_hdr.msg_complete) {
- if (mei_cl_flow_ctrl_reduce(cl))
- return -EIO;
- dev->iamthif_flow_control_pending = true;
- dev->iamthif_state = MEI_IAMTHIF_FLOW_CONTROL;
- dev_dbg(dev->dev, "add amthif cb to write waiting list\n");
- dev->iamthif_current_cb = cb;
- dev->iamthif_file_object = cb->file_object;
- list_add_tail(&cb->list, &dev->write_waiting_list.list);
- } else {
- dev_dbg(dev->dev, "message does not complete, so add amthif cb to write list.\n");
- list_add_tail(&cb->list, &dev->write_list.list);
- }
- } else {
- list_add_tail(&cb->list, &dev->write_list.list);
- }
return 0;
}
/**
- * mei_amthif_write - write amthif data to amthif client
- *
- * @dev: the device structure
- * @cb: mei call back struct
- *
- * Return: 0 on success, <0 on failure.
- *
- */
-int mei_amthif_write(struct mei_device *dev, struct mei_cl_cb *cb)
-{
- int ret;
-
- if (!dev || !cb)
- return -ENODEV;
-
- ret = mei_io_cb_alloc_resp_buf(cb, dev->iamthif_mtu);
- if (ret)
- return ret;
-
- cb->fop_type = MEI_FOP_WRITE;
-
- if (!list_empty(&dev->amthif_cmd_list.list) ||
- dev->iamthif_state != MEI_IAMTHIF_IDLE) {
- dev_dbg(dev->dev,
- "amthif state = %d\n", dev->iamthif_state);
- dev_dbg(dev->dev, "AMTHIF: add cb to the wait list\n");
- list_add_tail(&cb->list, &dev->amthif_cmd_list.list);
- return 0;
- }
- return mei_amthif_send_cmd(dev, cb);
-}
-/**
* mei_amthif_run_next_cmd - send next amt command from queue
*
* @dev: the device structure
+ *
+ * Return: 0 on success, <0 on failure.
*/
-void mei_amthif_run_next_cmd(struct mei_device *dev)
+int mei_amthif_run_next_cmd(struct mei_device *dev)
{
+ struct mei_cl *cl = &dev->iamthif_cl;
struct mei_cl_cb *cb;
- int ret;
- if (!dev)
- return;
-
- dev->iamthif_msg_buf_size = 0;
- dev->iamthif_msg_buf_index = 0;
dev->iamthif_canceled = false;
- dev->iamthif_ioctl = true;
dev->iamthif_state = MEI_IAMTHIF_IDLE;
dev->iamthif_timer = 0;
dev->iamthif_file_object = NULL;
@@ -381,13 +330,48 @@
cb = list_first_entry_or_null(&dev->amthif_cmd_list.list,
typeof(*cb), list);
if (!cb)
- return;
- list_del(&cb->list);
- ret = mei_amthif_send_cmd(dev, cb);
- if (ret)
- dev_warn(dev->dev, "amthif write failed status = %d\n", ret);
+ return 0;
+
+ list_del_init(&cb->list);
+ return mei_amthif_send_cmd(cl, cb);
}
+/**
+ * mei_amthif_write - write amthif data to amthif client
+ *
+ * @cl: host client
+ * @cb: mei call back struct
+ *
+ * Return: 0 on success, <0 on failure.
+ */
+int mei_amthif_write(struct mei_cl *cl, struct mei_cl_cb *cb)
+{
+
+ struct mei_device *dev;
+
+ if (WARN_ON(!cl || !cl->dev))
+ return -ENODEV;
+
+ if (WARN_ON(!cb))
+ return -EINVAL;
+
+ dev = cl->dev;
+
+ list_add_tail(&cb->list, &dev->amthif_cmd_list.list);
+ return mei_amthif_run_next_cmd(dev);
+}
+
+/**
+ * mei_amthif_poll - the amthif poll function
+ *
+ * @dev: the device structure
+ * @file: pointer to file structure
+ * @wait: pointer to poll_table structure
+ *
+ * Return: poll mask
+ *
+ * Locking: called under "dev->device_lock" lock
+ */
unsigned int mei_amthif_poll(struct mei_device *dev,
struct file *file, poll_table *wait)
@@ -396,19 +380,12 @@
poll_wait(file, &dev->iamthif_cl.wait, wait);
- mutex_lock(&dev->device_lock);
- if (!mei_cl_is_connected(&dev->iamthif_cl)) {
+ if (dev->iamthif_state == MEI_IAMTHIF_READ_COMPLETE &&
+ dev->iamthif_file_object == file) {
- mask = POLLERR;
-
- } else if (dev->iamthif_state == MEI_IAMTHIF_READ_COMPLETE &&
- dev->iamthif_file_object == file) {
-
- mask |= (POLLIN | POLLRDNORM);
- dev_dbg(dev->dev, "run next amthif cb\n");
+ mask |= POLLIN | POLLRDNORM;
mei_amthif_run_next_cmd(dev);
}
- mutex_unlock(&dev->device_lock);
return mask;
}
@@ -427,71 +404,14 @@
int mei_amthif_irq_write(struct mei_cl *cl, struct mei_cl_cb *cb,
struct mei_cl_cb *cmpl_list)
{
- struct mei_device *dev = cl->dev;
- struct mei_msg_hdr mei_hdr;
- size_t len = dev->iamthif_msg_buf_size - dev->iamthif_msg_buf_index;
- u32 msg_slots = mei_data2slots(len);
- int slots;
- int rets;
+ int ret;
- rets = mei_cl_flow_ctrl_creds(cl);
- if (rets < 0)
- return rets;
+ ret = mei_cl_irq_write(cl, cb, cmpl_list);
+ if (ret)
+ return ret;
- if (rets == 0) {
- cl_dbg(dev, cl, "No flow control credentials: not sending.\n");
- return 0;
- }
-
- mei_hdr.host_addr = cl->host_client_id;
- mei_hdr.me_addr = cl->me_client_id;
- mei_hdr.reserved = 0;
- mei_hdr.internal = 0;
-
- slots = mei_hbuf_empty_slots(dev);
-
- if (slots >= msg_slots) {
- mei_hdr.length = len;
- mei_hdr.msg_complete = 1;
- /* Split the message only if we can write the whole host buffer */
- } else if (slots == dev->hbuf_depth) {
- msg_slots = slots;
- len = (slots * sizeof(u32)) - sizeof(struct mei_msg_hdr);
- mei_hdr.length = len;
- mei_hdr.msg_complete = 0;
- } else {
- /* wait for next time the host buffer is empty */
- return 0;
- }
-
- dev_dbg(dev->dev, MEI_HDR_FMT, MEI_HDR_PRM(&mei_hdr));
-
- rets = mei_write_message(dev, &mei_hdr,
- dev->iamthif_msg_buf + dev->iamthif_msg_buf_index);
- if (rets) {
- dev->iamthif_state = MEI_IAMTHIF_IDLE;
- cl->status = rets;
- list_del(&cb->list);
- return rets;
- }
-
- if (mei_cl_flow_ctrl_reduce(cl))
- return -EIO;
-
- dev->iamthif_msg_buf_index += mei_hdr.length;
- cl->status = 0;
-
- if (mei_hdr.msg_complete) {
- dev->iamthif_state = MEI_IAMTHIF_FLOW_CONTROL;
- dev->iamthif_flow_control_pending = true;
-
- /* save iamthif cb sent to amthif client */
- cb->buf_idx = dev->iamthif_msg_buf_index;
- dev->iamthif_current_cb = cb;
-
- list_move_tail(&cb->list, &dev->write_waiting_list.list);
- }
-
+ if (cb->completed)
+ cb->status = mei_amthif_read_start(cl, cb->file_object);
return 0;
}
@@ -500,83 +420,35 @@
* mei_amthif_irq_read_msg - read routine after ISR to
* handle the read amthif message
*
- * @dev: the device structure
+ * @cl: mei client
* @mei_hdr: header of amthif message
- * @complete_list: An instance of our list structure
+ * @cmpl_list: completed callbacks list
*
- * Return: 0 on success, <0 on failure.
+ * Return: -ENODEV if cb is NULL, 0 otherwise; the error status is in cb->status
*/
-int mei_amthif_irq_read_msg(struct mei_device *dev,
+int mei_amthif_irq_read_msg(struct mei_cl *cl,
struct mei_msg_hdr *mei_hdr,
- struct mei_cl_cb *complete_list)
+ struct mei_cl_cb *cmpl_list)
{
- struct mei_cl_cb *cb;
- unsigned char *buffer;
+ struct mei_device *dev;
+ int ret;
- BUG_ON(mei_hdr->me_addr != dev->iamthif_cl.me_client_id);
- BUG_ON(dev->iamthif_state != MEI_IAMTHIF_READING);
+ dev = cl->dev;
- buffer = dev->iamthif_msg_buf + dev->iamthif_msg_buf_index;
- BUG_ON(dev->iamthif_mtu < dev->iamthif_msg_buf_index + mei_hdr->length);
+ if (dev->iamthif_state != MEI_IAMTHIF_READING)
+ return 0;
- mei_read_slots(dev, buffer, mei_hdr->length);
-
- dev->iamthif_msg_buf_index += mei_hdr->length;
+ ret = mei_cl_irq_read_msg(cl, mei_hdr, cmpl_list);
+ if (ret)
+ return ret;
if (!mei_hdr->msg_complete)
return 0;
- dev_dbg(dev->dev, "amthif_message_buffer_index =%d\n",
- mei_hdr->length);
-
dev_dbg(dev->dev, "completed amthif read.\n ");
- if (!dev->iamthif_current_cb)
- return -ENODEV;
-
- cb = dev->iamthif_current_cb;
dev->iamthif_current_cb = NULL;
-
dev->iamthif_stall_timer = 0;
- cb->buf_idx = dev->iamthif_msg_buf_index;
- cb->read_time = jiffies;
- if (dev->iamthif_ioctl) {
- /* found the iamthif cb */
- dev_dbg(dev->dev, "complete the amthif read cb.\n ");
- dev_dbg(dev->dev, "add the amthif read cb to complete.\n ");
- list_add_tail(&cb->list, &complete_list->list);
- }
- return 0;
-}
-/**
- * mei_amthif_irq_read - prepares to read amthif data.
- *
- * @dev: the device structure.
- * @slots: free slots.
- *
- * Return: 0, OK; otherwise, error.
- */
-int mei_amthif_irq_read(struct mei_device *dev, s32 *slots)
-{
- u32 msg_slots = mei_data2slots(sizeof(struct hbm_flow_control));
-
- if (*slots < msg_slots)
- return -EMSGSIZE;
-
- *slots -= msg_slots;
-
- if (mei_hbm_cl_flow_control_req(dev, &dev->iamthif_cl)) {
- dev_dbg(dev->dev, "iamthif flow control failed\n");
- return -EIO;
- }
-
- dev_dbg(dev->dev, "iamthif flow control success\n");
- dev->iamthif_state = MEI_IAMTHIF_READING;
- dev->iamthif_flow_control_pending = false;
- dev->iamthif_msg_buf_index = 0;
- dev->iamthif_msg_buf_size = 0;
- dev->iamthif_stall_timer = MEI_IAMTHIF_STALL_TIMER;
- dev->hbuf_is_ready = mei_hbuf_is_ready(dev);
return 0;
}
@@ -588,17 +460,30 @@
*/
void mei_amthif_complete(struct mei_device *dev, struct mei_cl_cb *cb)
{
+
+ if (cb->fop_type == MEI_FOP_WRITE) {
+ if (!cb->status) {
+ dev->iamthif_stall_timer = MEI_IAMTHIF_STALL_TIMER;
+ mei_io_cb_free(cb);
+ return;
+ }
+ /*
+ * In case of an error, enqueue the write cb on the read-complete list
+ * so the error can be propagated to the reader.
+ */
+ list_add_tail(&cb->list, &dev->amthif_rd_complete_list.list);
+ wake_up_interruptible(&dev->iamthif_cl.wait);
+ return;
+ }
+
if (dev->iamthif_canceled != 1) {
dev->iamthif_state = MEI_IAMTHIF_READ_COMPLETE;
dev->iamthif_stall_timer = 0;
- memcpy(cb->response_buffer.data,
- dev->iamthif_msg_buf,
- dev->iamthif_msg_buf_index);
list_add_tail(&cb->list, &dev->amthif_rd_complete_list.list);
dev_dbg(dev->dev, "amthif read completed\n");
dev->iamthif_timer = jiffies;
dev_dbg(dev->dev, "dev->iamthif_timer = %ld\n",
- dev->iamthif_timer);
+ dev->iamthif_timer);
} else {
mei_amthif_run_next_cmd(dev);
}
@@ -623,26 +508,22 @@
static bool mei_clear_list(struct mei_device *dev,
const struct file *file, struct list_head *mei_cb_list)
{
- struct mei_cl_cb *cb_pos = NULL;
- struct mei_cl_cb *cb_next = NULL;
+ struct mei_cl *cl = &dev->iamthif_cl;
+ struct mei_cl_cb *cb, *next;
bool removed = false;
/* list all list member */
- list_for_each_entry_safe(cb_pos, cb_next, mei_cb_list, list) {
+ list_for_each_entry_safe(cb, next, mei_cb_list, list) {
/* check if list member associated with a file */
- if (file == cb_pos->file_object) {
- /* remove member from the list */
- list_del(&cb_pos->list);
+ if (file == cb->file_object) {
/* check if cb equal to current iamthif cb */
- if (dev->iamthif_current_cb == cb_pos) {
+ if (dev->iamthif_current_cb == cb) {
dev->iamthif_current_cb = NULL;
/* send flow control to iamthif client */
- mei_hbm_cl_flow_control_req(dev,
- &dev->iamthif_cl);
+ mei_hbm_cl_flow_control_req(dev, cl);
}
/* free all allocated buffers */
- mei_io_cb_free(cb_pos);
- cb_pos = NULL;
+ mei_io_cb_free(cb);
removed = true;
}
}
diff --git a/drivers/misc/mei/bus.c b/drivers/misc/mei/bus.c
index be767f4..4cf38c3 100644
--- a/drivers/misc/mei/bus.c
+++ b/drivers/misc/mei/bus.c
@@ -238,7 +238,7 @@
dev = cl->dev;
mutex_lock(&dev->device_lock);
- if (cl->state != MEI_FILE_CONNECTED) {
+ if (!mei_cl_is_connected(cl)) {
rets = -ENODEV;
goto out;
}
@@ -255,17 +255,13 @@
goto out;
}
- cb = mei_io_cb_init(cl, NULL);
+ cb = mei_cl_alloc_cb(cl, length, MEI_FOP_WRITE, NULL);
if (!cb) {
rets = -ENOMEM;
goto out;
}
- rets = mei_io_cb_alloc_req_buf(cb, length);
- if (rets < 0)
- goto out;
-
- memcpy(cb->request_buffer.data, buf, length);
+ memcpy(cb->buf.data, buf, length);
rets = mei_cl_write(cl, cb, blocking);
@@ -292,20 +288,21 @@
mutex_lock(&dev->device_lock);
- if (!cl->read_cb) {
- rets = mei_cl_read_start(cl, length);
- if (rets < 0)
- goto out;
- }
+ cb = mei_cl_read_cb(cl, NULL);
+ if (cb)
+ goto copy;
- if (cl->reading_state != MEI_READ_COMPLETE &&
- !waitqueue_active(&cl->rx_wait)) {
+ rets = mei_cl_read_start(cl, length, NULL);
+ if (rets && rets != -EBUSY)
+ goto out;
+
+ if (list_empty(&cl->rd_completed) && !waitqueue_active(&cl->rx_wait)) {
mutex_unlock(&dev->device_lock);
if (wait_event_interruptible(cl->rx_wait,
- cl->reading_state == MEI_READ_COMPLETE ||
- mei_cl_is_transitioning(cl))) {
+ (!list_empty(&cl->rd_completed)) ||
+ (!mei_cl_is_connected(cl)))) {
if (signal_pending(current))
return -EINTR;
@@ -313,23 +310,31 @@
}
mutex_lock(&dev->device_lock);
+
+ if (!mei_cl_is_connected(cl)) {
+ rets = -EBUSY;
+ goto out;
+ }
}
- cb = cl->read_cb;
-
- if (cl->reading_state != MEI_READ_COMPLETE) {
+ cb = mei_cl_read_cb(cl, NULL);
+ if (!cb) {
rets = 0;
goto out;
}
+copy:
+ if (cb->status) {
+ rets = cb->status;
+ goto free;
+ }
+
r_length = min_t(size_t, length, cb->buf_idx);
- memcpy(buf, cb->response_buffer.data, r_length);
+ memcpy(buf, cb->buf.data, r_length);
rets = r_length;
+free:
mei_io_cb_free(cb);
- cl->reading_state = MEI_IDLE;
- cl->read_cb = NULL;
-
out:
mutex_unlock(&dev->device_lock);
@@ -386,7 +391,7 @@
device->events = 0;
/* Prepare for the next read */
- mei_cl_read_start(device->cl, 0);
+ mei_cl_read_start(device->cl, 0, NULL);
}
int mei_cl_register_event_cb(struct mei_cl_device *device,
@@ -400,7 +405,7 @@
device->event_context = context;
INIT_WORK(&device->event_work, mei_bus_event_work);
- mei_cl_read_start(device->cl, 0);
+ mei_cl_read_start(device->cl, 0, NULL);
return 0;
}
@@ -441,8 +446,8 @@
mutex_unlock(&dev->device_lock);
- if (device->event_cb && !cl->read_cb)
- mei_cl_read_start(device->cl, 0);
+ if (device->event_cb)
+ mei_cl_read_start(device->cl, 0, NULL);
if (!device->ops || !device->ops->enable)
return 0;
@@ -462,54 +467,34 @@
dev = cl->dev;
+ if (device->ops && device->ops->disable)
+ device->ops->disable(device);
+
+ device->event_cb = NULL;
+
mutex_lock(&dev->device_lock);
- if (cl->state != MEI_FILE_CONNECTED) {
- mutex_unlock(&dev->device_lock);
+ if (!mei_cl_is_connected(cl)) {
dev_err(dev->dev, "Already disconnected");
-
- return 0;
+ err = 0;
+ goto out;
}
cl->state = MEI_FILE_DISCONNECTING;
err = mei_cl_disconnect(cl);
if (err < 0) {
- mutex_unlock(&dev->device_lock);
- dev_err(dev->dev,
- "Could not disconnect from the ME client");
-
- return err;
+ dev_err(dev->dev, "Could not disconnect from the ME client");
+ goto out;
}
/* Flush queues and remove any pending read */
- mei_cl_flush_queues(cl);
+ mei_cl_flush_queues(cl, NULL);
- if (cl->read_cb) {
- struct mei_cl_cb *cb = NULL;
-
- cb = mei_cl_find_read_cb(cl);
- /* Remove entry from read list */
- if (cb)
- list_del(&cb->list);
-
- cb = cl->read_cb;
- cl->read_cb = NULL;
-
- if (cb) {
- mei_io_cb_free(cb);
- cb = NULL;
- }
- }
-
- device->event_cb = NULL;
-
+out:
mutex_unlock(&dev->device_lock);
+ return err;
- if (!device->ops || !device->ops->disable)
- return 0;
-
- return device->ops->disable(device);
}
EXPORT_SYMBOL_GPL(mei_cl_disable_device);
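Condensed, the reworked bus receive path above boils down to the following; device locking and the wait on cl->rx_wait are omitted, and 'buf'/'length' are the caller's buffer:

	struct mei_cl_cb *cb;
	size_t r_length;
	ssize_t rets;

	cb = mei_cl_read_cb(cl, NULL);		/* data already completed? */
	if (!cb) {
		rets = mei_cl_read_start(cl, length, NULL);
		if (rets && rets != -EBUSY)
			return rets;
		/* ... wait until cl->rd_completed is non-empty ... */
		cb = mei_cl_read_cb(cl, NULL);
		if (!cb)
			return 0;
	}

	if (cb->status) {
		rets = cb->status;
	} else {
		r_length = min_t(size_t, length, cb->buf_idx);
		memcpy(buf, cb->buf.data, r_length);
		rets = r_length;
	}
	mei_io_cb_free(cb);
	return rets;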
diff --git a/drivers/misc/mei/client.c b/drivers/misc/mei/client.c
index dfbddfe..1e99ef6 100644
--- a/drivers/misc/mei/client.c
+++ b/drivers/misc/mei/client.c
@@ -48,14 +48,14 @@
*/
struct mei_me_client *mei_me_cl_get(struct mei_me_client *me_cl)
{
- if (me_cl)
- kref_get(&me_cl->refcnt);
+ if (me_cl && kref_get_unless_zero(&me_cl->refcnt))
+ return me_cl;
- return me_cl;
+ return NULL;
}
/**
- * mei_me_cl_release - unlink and free me client
+ * mei_me_cl_release - free me client
*
* Locking: called under "dev->device_lock" lock
*
@@ -65,9 +65,10 @@
{
struct mei_me_client *me_cl =
container_of(ref, struct mei_me_client, refcnt);
- list_del(&me_cl->list);
+
kfree(me_cl);
}
+
/**
* mei_me_cl_put - decrease me client refcount and free client if necessary
*
@@ -82,26 +83,85 @@
}
/**
+ * __mei_me_cl_del - delete me client from the list and decrease
+ * reference counter
+ *
+ * @dev: mei device
+ * @me_cl: me client
+ *
+ * Locking: dev->me_clients_rwsem
+ */
+static void __mei_me_cl_del(struct mei_device *dev, struct mei_me_client *me_cl)
+{
+ if (!me_cl)
+ return;
+
+ list_del(&me_cl->list);
+ mei_me_cl_put(me_cl);
+}
+
+/**
+ * mei_me_cl_add - add me client to the list
+ *
+ * @dev: mei device
+ * @me_cl: me client
+ */
+void mei_me_cl_add(struct mei_device *dev, struct mei_me_client *me_cl)
+{
+ down_write(&dev->me_clients_rwsem);
+ list_add(&me_cl->list, &dev->me_clients);
+ up_write(&dev->me_clients_rwsem);
+}
+
+/**
+ * __mei_me_cl_by_uuid - locate me client by uuid
+ * increases ref count
+ *
+ * @dev: mei device
+ * @uuid: me client uuid
+ *
+ * Return: me client or NULL if not found
+ *
+ * Locking: dev->me_clients_rwsem
+ */
+static struct mei_me_client *__mei_me_cl_by_uuid(struct mei_device *dev,
+ const uuid_le *uuid)
+{
+ struct mei_me_client *me_cl;
+ const uuid_le *pn;
+
+ WARN_ON(!rwsem_is_locked(&dev->me_clients_rwsem));
+
+ list_for_each_entry(me_cl, &dev->me_clients, list) {
+ pn = &me_cl->props.protocol_name;
+ if (uuid_le_cmp(*uuid, *pn) == 0)
+ return mei_me_cl_get(me_cl);
+ }
+
+ return NULL;
+}
+
+/**
* mei_me_cl_by_uuid - locate me client by uuid
* increases ref count
*
* @dev: mei device
* @uuid: me client uuid
*
- * Locking: called under "dev->device_lock" lock
- *
* Return: me client or NULL if not found
+ *
+ * Locking: dev->me_clients_rwsem
*/
-struct mei_me_client *mei_me_cl_by_uuid(const struct mei_device *dev,
+struct mei_me_client *mei_me_cl_by_uuid(struct mei_device *dev,
const uuid_le *uuid)
{
struct mei_me_client *me_cl;
- list_for_each_entry(me_cl, &dev->me_clients, list)
- if (uuid_le_cmp(*uuid, me_cl->props.protocol_name) == 0)
- return mei_me_cl_get(me_cl);
+ down_read(&dev->me_clients_rwsem);
+ me_cl = __mei_me_cl_by_uuid(dev, uuid);
+ up_read(&dev->me_clients_rwsem);
- return NULL;
+ return me_cl;
}
/**
@@ -111,22 +171,58 @@
* @dev: the device structure
* @client_id: me client id
*
- * Locking: called under "dev->device_lock" lock
- *
* Return: me client or NULL if not found
+ *
+ * Locking: dev->me_clients_rwsem
*/
struct mei_me_client *mei_me_cl_by_id(struct mei_device *dev, u8 client_id)
{
- struct mei_me_client *me_cl;
+ struct mei_me_client *__me_cl, *me_cl = NULL;
- list_for_each_entry(me_cl, &dev->me_clients, list)
- if (me_cl->client_id == client_id)
+ down_read(&dev->me_clients_rwsem);
+ list_for_each_entry(__me_cl, &dev->me_clients, list) {
+ if (__me_cl->client_id == client_id) {
+ me_cl = mei_me_cl_get(__me_cl);
+ break;
+ }
+ }
+ up_read(&dev->me_clients_rwsem);
+
+ return me_cl;
+}
+
+/**
+ * __mei_me_cl_by_uuid_id - locate me client by client id and uuid
+ * increases ref count
+ *
+ * @dev: the device structure
+ * @uuid: me client uuid
+ * @client_id: me client id
+ *
+ * Return: me client or null if not found
+ *
+ * Locking: dev->me_clients_rwsem
+ */
+static struct mei_me_client *__mei_me_cl_by_uuid_id(struct mei_device *dev,
+ const uuid_le *uuid, u8 client_id)
+{
+ struct mei_me_client *me_cl;
+ const uuid_le *pn;
+
+ WARN_ON(!rwsem_is_locked(&dev->me_clients_rwsem));
+
+ list_for_each_entry(me_cl, &dev->me_clients, list) {
+ pn = &me_cl->props.protocol_name;
+ if (uuid_le_cmp(*uuid, *pn) == 0 &&
+ me_cl->client_id == client_id)
return mei_me_cl_get(me_cl);
+ }
return NULL;
}
+
/**
* mei_me_cl_by_uuid_id - locate me client by client id and uuid
* increases ref count
@@ -135,21 +231,18 @@
* @uuid: me client uuid
* @client_id: me client id
*
- * Locking: called under "dev->device_lock" lock
- *
- * Return: me client or NULL if not found
+ * Return: me client or null if not found
*/
struct mei_me_client *mei_me_cl_by_uuid_id(struct mei_device *dev,
const uuid_le *uuid, u8 client_id)
{
struct mei_me_client *me_cl;
- list_for_each_entry(me_cl, &dev->me_clients, list)
- if (uuid_le_cmp(*uuid, me_cl->props.protocol_name) == 0 &&
- me_cl->client_id == client_id)
- return mei_me_cl_get(me_cl);
+ down_read(&dev->me_clients_rwsem);
+ me_cl = __mei_me_cl_by_uuid_id(dev, uuid, client_id);
+ up_read(&dev->me_clients_rwsem);
- return NULL;
+ return me_cl;
}
/**
@@ -162,12 +255,14 @@
*/
void mei_me_cl_rm_by_uuid(struct mei_device *dev, const uuid_le *uuid)
{
- struct mei_me_client *me_cl, *next;
+ struct mei_me_client *me_cl;
dev_dbg(dev->dev, "remove %pUl\n", uuid);
- list_for_each_entry_safe(me_cl, next, &dev->me_clients, list)
- if (uuid_le_cmp(*uuid, me_cl->props.protocol_name) == 0)
- mei_me_cl_put(me_cl);
+
+ down_write(&dev->me_clients_rwsem);
+ me_cl = __mei_me_cl_by_uuid(dev, uuid);
+ __mei_me_cl_del(dev, me_cl);
+ up_write(&dev->me_clients_rwsem);
}
/**
@@ -181,15 +276,14 @@
*/
void mei_me_cl_rm_by_uuid_id(struct mei_device *dev, const uuid_le *uuid, u8 id)
{
- struct mei_me_client *me_cl, *next;
- const uuid_le *pn;
+ struct mei_me_client *me_cl;
dev_dbg(dev->dev, "remove %pUl %d\n", uuid, id);
- list_for_each_entry_safe(me_cl, next, &dev->me_clients, list) {
- pn = &me_cl->props.protocol_name;
- if (me_cl->client_id == id && uuid_le_cmp(*uuid, *pn) == 0)
- mei_me_cl_put(me_cl);
- }
+
+ down_write(&dev->me_clients_rwsem);
+ me_cl = __mei_me_cl_by_uuid_id(dev, uuid, id);
+ __mei_me_cl_del(dev, me_cl);
+ up_write(&dev->me_clients_rwsem);
}
/**
@@ -203,12 +297,12 @@
{
struct mei_me_client *me_cl, *next;
+ down_write(&dev->me_clients_rwsem);
list_for_each_entry_safe(me_cl, next, &dev->me_clients, list)
- mei_me_cl_put(me_cl);
+ __mei_me_cl_del(dev, me_cl);
+ up_write(&dev->me_clients_rwsem);
}
-
-
/**
* mei_cl_cmp_id - tells if the clients are the same
*
@@ -227,7 +321,48 @@
}
/**
- * mei_io_list_flush - removes cbs belonging to cl.
+ * mei_io_cb_free - free mei_cb_private related memory
+ *
+ * @cb: mei callback struct
+ */
+void mei_io_cb_free(struct mei_cl_cb *cb)
+{
+ if (cb == NULL)
+ return;
+
+ list_del(&cb->list);
+ kfree(cb->buf.data);
+ kfree(cb);
+}
+
+/**
+ * mei_io_cb_init - allocate and initialize io callback
+ *
+ * @cl: mei client
+ * @type: operation type
+ * @fp: pointer to file structure
+ *
+ * Return: mei_cl_cb pointer or NULL;
+ */
+struct mei_cl_cb *mei_io_cb_init(struct mei_cl *cl, enum mei_cb_file_ops type,
+ struct file *fp)
+{
+ struct mei_cl_cb *cb;
+
+ cb = kzalloc(sizeof(struct mei_cl_cb), GFP_KERNEL);
+ if (!cb)
+ return NULL;
+
+ INIT_LIST_HEAD(&cb->list);
+ cb->file_object = fp;
+ cb->cl = cl;
+ cb->buf_idx = 0;
+ cb->fop_type = type;
+ return cb;
+}
+
+/**
+ * __mei_io_list_flush - removes and frees cbs belonging to cl.
*
* @list: an instance of our list structure
* @cl: host client, can be NULL for flushing the whole list
@@ -236,13 +371,12 @@
static void __mei_io_list_flush(struct mei_cl_cb *list,
struct mei_cl *cl, bool free)
{
- struct mei_cl_cb *cb;
- struct mei_cl_cb *next;
+ struct mei_cl_cb *cb, *next;
/* enable removing everything if no cl is specified */
list_for_each_entry_safe(cb, next, &list->list, list) {
if (!cl || mei_cl_cmp_id(cl, cb->cl)) {
- list_del(&cb->list);
+ list_del_init(&cb->list);
if (free)
mei_io_cb_free(cb);
}
@@ -260,7 +394,6 @@
__mei_io_list_flush(list, cl, false);
}
-
/**
* mei_io_list_free - removes cb belonging to cl and free them
*
@@ -273,103 +406,107 @@
}
/**
- * mei_io_cb_free - free mei_cb_private related memory
+ * mei_io_cb_alloc_buf - allocate callback buffer
*
- * @cb: mei callback struct
+ * @cb: io callback structure
+ * @length: size of the buffer
+ *
+ * Return: 0 on success
+ * -EINVAL if cb is NULL
+ * -ENOMEM if allocation failed
*/
-void mei_io_cb_free(struct mei_cl_cb *cb)
+int mei_io_cb_alloc_buf(struct mei_cl_cb *cb, size_t length)
{
- if (cb == NULL)
- return;
+ if (!cb)
+ return -EINVAL;
- kfree(cb->request_buffer.data);
- kfree(cb->response_buffer.data);
- kfree(cb);
+ if (length == 0)
+ return 0;
+
+ cb->buf.data = kmalloc(length, GFP_KERNEL);
+ if (!cb->buf.data)
+ return -ENOMEM;
+ cb->buf.size = length;
+ return 0;
}
/**
- * mei_io_cb_init - allocate and initialize io callback
+ * mei_cl_alloc_cb - a convenient wrapper for allocating read cb
*
- * @cl: mei client
- * @fp: pointer to file structure
+ * @cl: host client
+ * @length: size of the buffer
+ * @type: operation type
+ * @fp: associated file pointer (might be NULL)
*
- * Return: mei_cl_cb pointer or NULL;
+ * Return: cb on success and NULL on failure
*/
-struct mei_cl_cb *mei_io_cb_init(struct mei_cl *cl, struct file *fp)
+struct mei_cl_cb *mei_cl_alloc_cb(struct mei_cl *cl, size_t length,
+ enum mei_cb_file_ops type, struct file *fp)
{
struct mei_cl_cb *cb;
- cb = kzalloc(sizeof(struct mei_cl_cb), GFP_KERNEL);
+ cb = mei_io_cb_init(cl, type, fp);
if (!cb)
return NULL;
- mei_io_list_init(cb);
+ if (mei_io_cb_alloc_buf(cb, length)) {
+ mei_io_cb_free(cb);
+ return NULL;
+ }
- cb->file_object = fp;
- cb->cl = cl;
- cb->buf_idx = 0;
return cb;
}
/**
- * mei_io_cb_alloc_req_buf - allocate request buffer
+ * mei_cl_read_cb - find this cl's callback in the read list
+ * for a specific file
*
- * @cb: io callback structure
- * @length: size of the buffer
+ * @cl: host client
+ * @fp: file pointer (matching cb file object), may be NULL
*
- * Return: 0 on success
- * -EINVAL if cb is NULL
- * -ENOMEM if allocation failed
+ * Return: cb on success, NULL if cb is not found
*/
-int mei_io_cb_alloc_req_buf(struct mei_cl_cb *cb, size_t length)
+struct mei_cl_cb *mei_cl_read_cb(const struct mei_cl *cl, const struct file *fp)
{
- if (!cb)
- return -EINVAL;
+ struct mei_cl_cb *cb;
- if (length == 0)
- return 0;
+ list_for_each_entry(cb, &cl->rd_completed, list)
+ if (!fp || fp == cb->file_object)
+ return cb;
- cb->request_buffer.data = kmalloc(length, GFP_KERNEL);
- if (!cb->request_buffer.data)
- return -ENOMEM;
- cb->request_buffer.size = length;
- return 0;
+ return NULL;
}
+
/**
- * mei_io_cb_alloc_resp_buf - allocate response buffer
+ * mei_cl_read_cb_flush - free client's read pending and completed cbs
+ * for a specific file
*
- * @cb: io callback structure
- * @length: size of the buffer
- *
- * Return: 0 on success
- * -EINVAL if cb is NULL
- * -ENOMEM if allocation failed
+ * @cl: host client
+ * @fp: file pointer (matching cb file object), may be NULL
*/
-int mei_io_cb_alloc_resp_buf(struct mei_cl_cb *cb, size_t length)
+void mei_cl_read_cb_flush(const struct mei_cl *cl, const struct file *fp)
{
- if (!cb)
- return -EINVAL;
+ struct mei_cl_cb *cb, *next;
- if (length == 0)
- return 0;
+ list_for_each_entry_safe(cb, next, &cl->rd_completed, list)
+ if (!fp || fp == cb->file_object)
+ mei_io_cb_free(cb);
- cb->response_buffer.data = kmalloc(length, GFP_KERNEL);
- if (!cb->response_buffer.data)
- return -ENOMEM;
- cb->response_buffer.size = length;
- return 0;
+
+ list_for_each_entry_safe(cb, next, &cl->rd_pending, list)
+ if (!fp || fp == cb->file_object)
+ mei_io_cb_free(cb);
}
-
-
/**
* mei_cl_flush_queues - flushes queue lists belonging to cl.
*
* @cl: host client
+ * @fp: file pointer (matching cb file object), may be NULL
*
* Return: 0 on success, -EINVAL if cl or cl->dev is NULL.
*/
-int mei_cl_flush_queues(struct mei_cl *cl)
+int mei_cl_flush_queues(struct mei_cl *cl, const struct file *fp)
{
struct mei_device *dev;
@@ -379,13 +516,15 @@
dev = cl->dev;
cl_dbg(dev, cl, "remove list entry belonging to cl\n");
- mei_io_list_flush(&cl->dev->read_list, cl);
mei_io_list_free(&cl->dev->write_list, cl);
mei_io_list_free(&cl->dev->write_waiting_list, cl);
mei_io_list_flush(&cl->dev->ctrl_wr_list, cl);
mei_io_list_flush(&cl->dev->ctrl_rd_list, cl);
mei_io_list_flush(&cl->dev->amthif_cmd_list, cl);
mei_io_list_flush(&cl->dev->amthif_rd_complete_list, cl);
+
+ mei_cl_read_cb_flush(cl, fp);
+
return 0;
}
@@ -402,9 +541,10 @@
init_waitqueue_head(&cl->wait);
init_waitqueue_head(&cl->rx_wait);
init_waitqueue_head(&cl->tx_wait);
+ INIT_LIST_HEAD(&cl->rd_completed);
+ INIT_LIST_HEAD(&cl->rd_pending);
INIT_LIST_HEAD(&cl->link);
INIT_LIST_HEAD(&cl->device_link);
- cl->reading_state = MEI_IDLE;
cl->writing_state = MEI_IDLE;
cl->dev = dev;
}
@@ -429,31 +569,14 @@
}
/**
- * mei_cl_find_read_cb - find this cl's callback in the read list
+ * mei_cl_link - allocate host id in the host map
*
* @cl: host client
- *
- * Return: cb on success, NULL on error
- */
-struct mei_cl_cb *mei_cl_find_read_cb(struct mei_cl *cl)
-{
- struct mei_device *dev = cl->dev;
- struct mei_cl_cb *cb;
-
- list_for_each_entry(cb, &dev->read_list.list, list)
- if (mei_cl_cmp_id(cl, cb->cl))
- return cb;
- return NULL;
-}
-
-/** mei_cl_link: allocate host id in the host map
- *
- * @cl - host client
- * @id - fixed host id or -1 for generic one
+ * @id: fixed host id or MEI_HOST_CLIENT_ID_ANY (-1) for generic one
*
* Return: 0 on success
* -EINVAL on incorrect values
- * -ENONET if client not found
+ * -EMFILE if open count exceeded.
*/
int mei_cl_link(struct mei_cl *cl, int id)
{
@@ -535,28 +658,31 @@
void mei_host_client_init(struct work_struct *work)
{
- struct mei_device *dev = container_of(work,
- struct mei_device, init_work);
+ struct mei_device *dev =
+ container_of(work, struct mei_device, init_work);
struct mei_me_client *me_cl;
- struct mei_client_properties *props;
mutex_lock(&dev->device_lock);
- list_for_each_entry(me_cl, &dev->me_clients, list) {
- props = &me_cl->props;
- if (!uuid_le_cmp(props->protocol_name, mei_amthif_guid))
- mei_amthif_host_init(dev);
- else if (!uuid_le_cmp(props->protocol_name, mei_wd_guid))
- mei_wd_host_init(dev);
- else if (!uuid_le_cmp(props->protocol_name, mei_nfc_guid))
- mei_nfc_host_init(dev);
+ me_cl = mei_me_cl_by_uuid(dev, &mei_amthif_guid);
+ if (me_cl)
+ mei_amthif_host_init(dev);
+ mei_me_cl_put(me_cl);
- }
+ me_cl = mei_me_cl_by_uuid(dev, &mei_wd_guid);
+ if (me_cl)
+ mei_wd_host_init(dev);
+ mei_me_cl_put(me_cl);
+
+ me_cl = mei_me_cl_by_uuid(dev, &mei_nfc_guid);
+ if (me_cl)
+ mei_nfc_host_init(dev);
+ mei_me_cl_put(me_cl);
+
dev->dev_state = MEI_DEV_ENABLED;
dev->reset_count = 0;
-
mutex_unlock(&dev->device_lock);
pm_runtime_mark_last_busy(dev->dev);
@@ -620,13 +746,10 @@
return rets;
}
- cb = mei_io_cb_init(cl, NULL);
- if (!cb) {
- rets = -ENOMEM;
+ cb = mei_io_cb_init(cl, MEI_FOP_DISCONNECT, NULL);
+ rets = cb ? 0 : -ENOMEM;
+ if (rets)
goto free;
- }
-
- cb->fop_type = MEI_FOP_DISCONNECT;
if (mei_hbuf_acquire(dev)) {
if (mei_hbm_cl_disconnect_req(dev, cl)) {
@@ -727,13 +850,10 @@
return rets;
}
- cb = mei_io_cb_init(cl, file);
- if (!cb) {
- rets = -ENOMEM;
+ cb = mei_io_cb_init(cl, MEI_FOP_CONNECT, file);
+ rets = cb ? 0 : -ENOMEM;
+ if (rets)
goto out;
- }
-
- cb->fop_type = MEI_FOP_CONNECT;
/* run hbuf acquire last so we don't have to undo */
if (!mei_cl_is_other_connecting(cl) && mei_hbuf_acquire(dev)) {
@@ -756,7 +876,7 @@
mei_secs_to_jiffies(MEI_CL_CONNECT_TIMEOUT));
mutex_lock(&dev->device_lock);
- if (cl->state != MEI_FILE_CONNECTED) {
+ if (!mei_cl_is_connected(cl)) {
cl->state = MEI_FILE_DISCONNECTED;
/* something went really wrong */
if (!cl->status)
@@ -778,6 +898,37 @@
}
/**
+ * mei_cl_alloc_linked - allocate and link host client
+ *
+ * @dev: the device structure
+ * @id: fixed host id or MEI_HOST_CLIENT_ID_ANY (-1) for generic one
+ *
+ * Return: cl on success, ERR_PTR on failure
+ */
+struct mei_cl *mei_cl_alloc_linked(struct mei_device *dev, int id)
+{
+ struct mei_cl *cl;
+ int ret;
+
+ cl = mei_cl_allocate(dev);
+ if (!cl) {
+ ret = -ENOMEM;
+ goto err;
+ }
+
+ ret = mei_cl_link(cl, id);
+ if (ret)
+ goto err;
+
+ return cl;
+err:
+ kfree(cl);
+ return ERR_PTR(ret);
+}
+
+
+
+/**
* mei_cl_flow_ctrl_creds - checks flow_control credits for cl.
*
* @cl: private data of the file object
@@ -866,10 +1017,11 @@
*
* @cl: host client
* @length: number of bytes to read
+ * @fp: pointer to file structure
*
* Return: 0 on success, <0 on failure.
*/
-int mei_cl_read_start(struct mei_cl *cl, size_t length)
+int mei_cl_read_start(struct mei_cl *cl, size_t length, struct file *fp)
{
struct mei_device *dev;
struct mei_cl_cb *cb;
@@ -884,10 +1036,10 @@
if (!mei_cl_is_connected(cl))
return -ENODEV;
- if (cl->read_cb) {
- cl_dbg(dev, cl, "read is pending.\n");
+ /* HW currently supports only one pending read */
+ if (!list_empty(&cl->rd_pending))
return -EBUSY;
- }
+
me_cl = mei_me_cl_by_uuid_id(dev, &cl->cl_uuid, cl->me_client_id);
if (!me_cl) {
cl_err(dev, cl, "no such me client %d\n", cl->me_client_id);
@@ -904,29 +1056,21 @@
return rets;
}
- cb = mei_io_cb_init(cl, NULL);
- if (!cb) {
- rets = -ENOMEM;
- goto out;
- }
-
- rets = mei_io_cb_alloc_resp_buf(cb, length);
+ cb = mei_cl_alloc_cb(cl, length, MEI_FOP_READ, fp);
+ rets = cb ? 0 : -ENOMEM;
if (rets)
goto out;
- cb->fop_type = MEI_FOP_READ;
if (mei_hbuf_acquire(dev)) {
rets = mei_hbm_cl_flow_control_req(dev, cl);
if (rets < 0)
goto out;
- list_add_tail(&cb->list, &dev->read_list.list);
+ list_add_tail(&cb->list, &cl->rd_pending);
} else {
list_add_tail(&cb->list, &dev->ctrl_wr_list.list);
}
- cl->read_cb = cb;
-
out:
cl_dbg(dev, cl, "rpm: autosuspend\n");
pm_runtime_mark_last_busy(dev->dev);
@@ -964,7 +1108,7 @@
dev = cl->dev;
- buf = &cb->request_buffer;
+ buf = &cb->buf;
rets = mei_cl_flow_ctrl_creds(cl);
if (rets < 0)
@@ -999,7 +1143,7 @@
}
cl_dbg(dev, cl, "buf: size = %d idx = %lu\n",
- cb->request_buffer.size, cb->buf_idx);
+ cb->buf.size, cb->buf_idx);
rets = mei_write_message(dev, &mei_hdr, buf->data + cb->buf_idx);
if (rets) {
@@ -1011,6 +1155,7 @@
cl->status = 0;
cl->writing_state = MEI_WRITING;
cb->buf_idx += mei_hdr.length;
+ cb->completed = mei_hdr.msg_complete == 1;
if (mei_hdr.msg_complete) {
if (mei_cl_flow_ctrl_reduce(cl))
@@ -1048,7 +1193,7 @@
dev = cl->dev;
- buf = &cb->request_buffer;
+ buf = &cb->buf;
cl_dbg(dev, cl, "size=%d\n", buf->size);
@@ -1059,7 +1204,6 @@
return rets;
}
- cb->fop_type = MEI_FOP_WRITE;
cb->buf_idx = 0;
cl->writing_state = MEI_IDLE;
@@ -1099,6 +1243,7 @@
cl->writing_state = MEI_WRITING;
cb->buf_idx = mei_hdr.length;
+ cb->completed = mei_hdr.msg_complete == 1;
out:
if (mei_hdr.msg_complete) {
@@ -1151,11 +1296,10 @@
if (waitqueue_active(&cl->tx_wait))
wake_up_interruptible(&cl->tx_wait);
- } else if (cb->fop_type == MEI_FOP_READ &&
- MEI_READING == cl->reading_state) {
- cl->reading_state = MEI_READ_COMPLETE;
+ } else if (cb->fop_type == MEI_FOP_READ) {
+ list_add_tail(&cb->list, &cl->rd_completed);
if (waitqueue_active(&cl->rx_wait))
- wake_up_interruptible(&cl->rx_wait);
+ wake_up_interruptible_all(&cl->rx_wait);
else
mei_cl_bus_rx_event(cl);
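A minimal sketch of the ERR_PTR convention that mei_cl_alloc_linked() adopts above: return a real pointer on success and an encoded errno on failure, so callers test with IS_ERR()/PTR_ERR(). The demo_* names are illustrative, not part of the patch.

#include <linux/err.h>
#include <linux/errno.h>
#include <linux/slab.h>

struct demo_obj {
	int id;
};

static struct demo_obj *demo_alloc(void)
{
	struct demo_obj *obj = kzalloc(sizeof(*obj), GFP_KERNEL);

	if (!obj)
		return ERR_PTR(-ENOMEM);	/* encode errno in the pointer */
	return obj;
}

static int demo_use(void)
{
	struct demo_obj *obj = demo_alloc();

	if (IS_ERR(obj))
		return PTR_ERR(obj);		/* recover the errno */
	kfree(obj);
	return 0;
}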
diff --git a/drivers/misc/mei/client.h b/drivers/misc/mei/client.h
index cfcde8e..0a39e5d 100644
--- a/drivers/misc/mei/client.h
+++ b/drivers/misc/mei/client.h
@@ -31,7 +31,10 @@
void mei_me_cl_put(struct mei_me_client *me_cl);
struct mei_me_client *mei_me_cl_get(struct mei_me_client *me_cl);
-struct mei_me_client *mei_me_cl_by_uuid(const struct mei_device *dev,
+void mei_me_cl_add(struct mei_device *dev, struct mei_me_client *me_cl);
+void mei_me_cl_del(struct mei_device *dev, struct mei_me_client *me_cl);
+
+struct mei_me_client *mei_me_cl_by_uuid(struct mei_device *dev,
const uuid_le *uuid);
struct mei_me_client *mei_me_cl_by_id(struct mei_device *dev, u8 client_id);
struct mei_me_client *mei_me_cl_by_uuid_id(struct mei_device *dev,
@@ -44,10 +47,10 @@
/*
* MEI IO Functions
*/
-struct mei_cl_cb *mei_io_cb_init(struct mei_cl *cl, struct file *fp);
+struct mei_cl_cb *mei_io_cb_init(struct mei_cl *cl, enum mei_cb_file_ops type,
+ struct file *fp);
void mei_io_cb_free(struct mei_cl_cb *priv_cb);
-int mei_io_cb_alloc_req_buf(struct mei_cl_cb *cb, size_t length);
-int mei_io_cb_alloc_resp_buf(struct mei_cl_cb *cb, size_t length);
+int mei_io_cb_alloc_buf(struct mei_cl_cb *cb, size_t length);
/**
@@ -72,9 +75,14 @@
int mei_cl_link(struct mei_cl *cl, int id);
int mei_cl_unlink(struct mei_cl *cl);
-int mei_cl_flush_queues(struct mei_cl *cl);
-struct mei_cl_cb *mei_cl_find_read_cb(struct mei_cl *cl);
+struct mei_cl *mei_cl_alloc_linked(struct mei_device *dev, int id);
+struct mei_cl_cb *mei_cl_read_cb(const struct mei_cl *cl,
+ const struct file *fp);
+void mei_cl_read_cb_flush(const struct mei_cl *cl, const struct file *fp);
+struct mei_cl_cb *mei_cl_alloc_cb(struct mei_cl *cl, size_t length,
+ enum mei_cb_file_ops type, struct file *fp);
+int mei_cl_flush_queues(struct mei_cl *cl, const struct file *fp);
int mei_cl_flow_ctrl_creds(struct mei_cl *cl);
@@ -82,23 +90,25 @@
/*
* MEI input output function prototype
*/
+
+/**
+ * mei_cl_is_connected - host client is connected
+ *
+ * @cl: host client
+ *
+ * Return: true if the host client is connected
+ */
static inline bool mei_cl_is_connected(struct mei_cl *cl)
{
- return cl->dev &&
- cl->dev->dev_state == MEI_DEV_ENABLED &&
- cl->state == MEI_FILE_CONNECTED;
-}
-static inline bool mei_cl_is_transitioning(struct mei_cl *cl)
-{
- return MEI_FILE_INITIALIZING == cl->state ||
- MEI_FILE_DISCONNECTED == cl->state ||
- MEI_FILE_DISCONNECTING == cl->state;
+ return cl->state == MEI_FILE_CONNECTED;
}
bool mei_cl_is_other_connecting(struct mei_cl *cl);
int mei_cl_disconnect(struct mei_cl *cl);
int mei_cl_connect(struct mei_cl *cl, struct file *file);
-int mei_cl_read_start(struct mei_cl *cl, size_t length);
+int mei_cl_read_start(struct mei_cl *cl, size_t length, struct file *fp);
+int mei_cl_irq_read_msg(struct mei_cl *cl, struct mei_msg_hdr *hdr,
+ struct mei_cl_cb *cmpl_list);
int mei_cl_write(struct mei_cl *cl, struct mei_cl_cb *cb, bool blocking);
int mei_cl_irq_write(struct mei_cl *cl, struct mei_cl_cb *cb,
struct mei_cl_cb *cmpl_list);
diff --git a/drivers/misc/mei/debugfs.c b/drivers/misc/mei/debugfs.c
index b125380..d9cd7e6e 100644
--- a/drivers/misc/mei/debugfs.c
+++ b/drivers/misc/mei/debugfs.c
@@ -28,7 +28,7 @@
size_t cnt, loff_t *ppos)
{
struct mei_device *dev = fp->private_data;
- struct mei_me_client *me_cl, *n;
+ struct mei_me_client *me_cl;
size_t bufsz = 1;
char *buf;
int i = 0;
@@ -38,15 +38,14 @@
#define HDR \
" |id|fix| UUID |con|msg len|sb|refc|\n"
- mutex_lock(&dev->device_lock);
-
+ down_read(&dev->me_clients_rwsem);
list_for_each_entry(me_cl, &dev->me_clients, list)
bufsz++;
bufsz *= sizeof(HDR) + 1;
buf = kzalloc(bufsz, GFP_KERNEL);
if (!buf) {
- mutex_unlock(&dev->device_lock);
+ up_read(&dev->me_clients_rwsem);
return -ENOMEM;
}
@@ -56,10 +55,9 @@
if (dev->dev_state != MEI_DEV_ENABLED)
goto out;
- list_for_each_entry_safe(me_cl, n, &dev->me_clients, list) {
+ list_for_each_entry(me_cl, &dev->me_clients, list) {
- me_cl = mei_me_cl_get(me_cl);
- if (me_cl) {
+ if (mei_me_cl_get(me_cl)) {
pos += scnprintf(buf + pos, bufsz - pos,
"%2d|%2d|%3d|%pUl|%3d|%7d|%2d|%4d|\n",
i++, me_cl->client_id,
@@ -69,12 +67,13 @@
me_cl->props.max_msg_length,
me_cl->props.single_recv_buf,
atomic_read(&me_cl->refcnt.refcount));
- }
- mei_me_cl_put(me_cl);
+ mei_me_cl_put(me_cl);
+ }
}
+
out:
- mutex_unlock(&dev->device_lock);
+ up_read(&dev->me_clients_rwsem);
ret = simple_read_from_buffer(ubuf, cnt, ppos, buf, pos);
kfree(buf);
return ret;
@@ -118,7 +117,7 @@
pos += scnprintf(buf + pos, bufsz - pos,
"%2d|%2d|%4d|%5d|%2d|%2d|\n",
i, cl->me_client_id, cl->host_client_id, cl->state,
- cl->reading_state, cl->writing_state);
+ !list_empty(&cl->rd_completed), cl->writing_state);
i++;
}
out:
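The debugfs hunk above drops device_lock in favour of the new me_clients_rwsem for walking the ME client list. A minimal sketch of that reader-side pattern, with illustrative demo_* types rather than the driver's own:

#include <linux/list.h>
#include <linux/rwsem.h>

struct demo_dev {
	struct rw_semaphore clients_rwsem;
	struct list_head clients;
};

struct demo_client {
	struct list_head list;
};

static int demo_count_clients(struct demo_dev *dev)
{
	struct demo_client *cl;
	int count = 0;

	/* readers only need the semaphore for read; writers take it for write */
	down_read(&dev->clients_rwsem);
	list_for_each_entry(cl, &dev->clients, list)
		count++;
	up_read(&dev->clients_rwsem);

	return count;
}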
diff --git a/drivers/misc/mei/hbm.c b/drivers/misc/mei/hbm.c
index c8412d4..58da925 100644
--- a/drivers/misc/mei/hbm.c
+++ b/drivers/misc/mei/hbm.c
@@ -338,7 +338,8 @@
me_cl->client_id = res->me_addr;
me_cl->mei_flow_ctrl_creds = 0;
- list_add(&me_cl->list, &dev->me_clients);
+ mei_me_cl_add(dev, me_cl);
+
return 0;
}
@@ -638,7 +639,7 @@
continue;
if (mei_hbm_cl_addr_equal(cl, rs)) {
- list_del(&cb->list);
+ list_del_init(&cb->list);
break;
}
}
@@ -683,10 +684,9 @@
cl->state = MEI_FILE_DISCONNECTED;
cl->timer_count = 0;
- cb = mei_io_cb_init(cl, NULL);
+ cb = mei_io_cb_init(cl, MEI_FOP_DISCONNECT_RSP, NULL);
if (!cb)
return -ENOMEM;
- cb->fop_type = MEI_FOP_DISCONNECT_RSP;
cl_dbg(dev, cl, "add disconnect response as first\n");
list_add(&cb->list, &dev->ctrl_wr_list.list);
}
diff --git a/drivers/misc/mei/hw-me.c b/drivers/misc/mei/hw-me.c
index f8fd503..6fb75e6 100644
--- a/drivers/misc/mei/hw-me.c
+++ b/drivers/misc/mei/hw-me.c
@@ -25,6 +25,8 @@
#include "hw-me.h"
#include "hw-me-regs.h"
+#include "mei-trace.h"
+
/**
* mei_me_reg_read - Reads 32bit data from the mei device
*
@@ -61,45 +63,79 @@
*
* Return: ME_CB_RW register value (u32)
*/
-static u32 mei_me_mecbrw_read(const struct mei_device *dev)
+static inline u32 mei_me_mecbrw_read(const struct mei_device *dev)
{
return mei_me_reg_read(to_me_hw(dev), ME_CB_RW);
}
+
+/**
+ * mei_me_hcbww_write - write 32bit data to the host circular buffer
+ *
+ * @dev: the device structure
+ * @data: 32bit data to be written to the host circular buffer
+ */
+static inline void mei_me_hcbww_write(struct mei_device *dev, u32 data)
+{
+ mei_me_reg_write(to_me_hw(dev), H_CB_WW, data);
+}
+
/**
* mei_me_mecsr_read - Reads 32bit data from the ME CSR
*
- * @hw: the me hardware structure
+ * @dev: the device structure
*
* Return: ME_CSR_HA register value (u32)
*/
-static inline u32 mei_me_mecsr_read(const struct mei_me_hw *hw)
+static inline u32 mei_me_mecsr_read(const struct mei_device *dev)
{
- return mei_me_reg_read(hw, ME_CSR_HA);
+ u32 reg;
+
+ reg = mei_me_reg_read(to_me_hw(dev), ME_CSR_HA);
+ trace_mei_reg_read(dev->dev, "ME_CSR_HA", ME_CSR_HA, reg);
+
+ return reg;
}
/**
* mei_hcsr_read - Reads 32bit data from the host CSR
*
- * @hw: the me hardware structure
+ * @dev: the device structure
*
* Return: H_CSR register value (u32)
*/
-static inline u32 mei_hcsr_read(const struct mei_me_hw *hw)
+static inline u32 mei_hcsr_read(const struct mei_device *dev)
{
- return mei_me_reg_read(hw, H_CSR);
+ u32 reg;
+
+ reg = mei_me_reg_read(to_me_hw(dev), H_CSR);
+ trace_mei_reg_read(dev->dev, "H_CSR", H_CSR, reg);
+
+ return reg;
+}
+
+/**
+ * mei_hcsr_write - writes H_CSR register to the mei device
+ *
+ * @dev: the device structure
+ * @reg: new register value
+ */
+static inline void mei_hcsr_write(struct mei_device *dev, u32 reg)
+{
+ trace_mei_reg_write(dev->dev, "H_CSR", H_CSR, reg);
+ mei_me_reg_write(to_me_hw(dev), H_CSR, reg);
}
/**
* mei_hcsr_set - writes H_CSR register to the mei device,
* and ignores the H_IS bit for it is write-one-to-zero.
*
- * @hw: the me hardware structure
- * @hcsr: new register value
+ * @dev: the device structure
+ * @reg: new register value
*/
-static inline void mei_hcsr_set(struct mei_me_hw *hw, u32 hcsr)
+static inline void mei_hcsr_set(struct mei_device *dev, u32 reg)
{
- hcsr &= ~H_IS;
- mei_me_reg_write(hw, H_CSR, hcsr);
+ reg &= ~H_IS;
+ mei_hcsr_write(dev, reg);
}
/**
@@ -141,7 +177,7 @@
static void mei_me_hw_config(struct mei_device *dev)
{
struct mei_me_hw *hw = to_me_hw(dev);
- u32 hcsr = mei_hcsr_read(to_me_hw(dev));
+ u32 hcsr = mei_hcsr_read(dev);
/* Doesn't change in runtime */
dev->hbuf_depth = (hcsr & H_CBD) >> 24;
@@ -170,11 +206,10 @@
*/
static void mei_me_intr_clear(struct mei_device *dev)
{
- struct mei_me_hw *hw = to_me_hw(dev);
- u32 hcsr = mei_hcsr_read(hw);
+ u32 hcsr = mei_hcsr_read(dev);
if ((hcsr & H_IS) == H_IS)
- mei_me_reg_write(hw, H_CSR, hcsr);
+ mei_hcsr_write(dev, hcsr);
}
/**
* mei_me_intr_enable - enables mei device interrupts
@@ -183,11 +218,10 @@
*/
static void mei_me_intr_enable(struct mei_device *dev)
{
- struct mei_me_hw *hw = to_me_hw(dev);
- u32 hcsr = mei_hcsr_read(hw);
+ u32 hcsr = mei_hcsr_read(dev);
hcsr |= H_IE;
- mei_hcsr_set(hw, hcsr);
+ mei_hcsr_set(dev, hcsr);
}
/**
@@ -197,11 +231,10 @@
*/
static void mei_me_intr_disable(struct mei_device *dev)
{
- struct mei_me_hw *hw = to_me_hw(dev);
- u32 hcsr = mei_hcsr_read(hw);
+ u32 hcsr = mei_hcsr_read(dev);
hcsr &= ~H_IE;
- mei_hcsr_set(hw, hcsr);
+ mei_hcsr_set(dev, hcsr);
}
/**
@@ -211,12 +244,11 @@
*/
static void mei_me_hw_reset_release(struct mei_device *dev)
{
- struct mei_me_hw *hw = to_me_hw(dev);
- u32 hcsr = mei_hcsr_read(hw);
+ u32 hcsr = mei_hcsr_read(dev);
hcsr |= H_IG;
hcsr &= ~H_RST;
- mei_hcsr_set(hw, hcsr);
+ mei_hcsr_set(dev, hcsr);
/* complete this write before we set host ready on another CPU */
mmiowb();
@@ -231,8 +263,7 @@
*/
static int mei_me_hw_reset(struct mei_device *dev, bool intr_enable)
{
- struct mei_me_hw *hw = to_me_hw(dev);
- u32 hcsr = mei_hcsr_read(hw);
+ u32 hcsr = mei_hcsr_read(dev);
/* H_RST may be found lit before reset is started,
* for example if preceding reset flow hasn't completed.
@@ -242,8 +273,8 @@
if ((hcsr & H_RST) == H_RST) {
dev_warn(dev->dev, "H_RST is set = 0x%08X", hcsr);
hcsr &= ~H_RST;
- mei_hcsr_set(hw, hcsr);
- hcsr = mei_hcsr_read(hw);
+ mei_hcsr_set(dev, hcsr);
+ hcsr = mei_hcsr_read(dev);
}
hcsr |= H_RST | H_IG | H_IS;
@@ -254,13 +285,13 @@
hcsr &= ~H_IE;
dev->recvd_hw_ready = false;
- mei_me_reg_write(hw, H_CSR, hcsr);
+ mei_hcsr_write(dev, hcsr);
/*
* Host reads the H_CSR once to ensure that the
* posted write to H_CSR completes.
*/
- hcsr = mei_hcsr_read(hw);
+ hcsr = mei_hcsr_read(dev);
if ((hcsr & H_RST) == 0)
dev_warn(dev->dev, "H_RST is not set = 0x%08X", hcsr);
@@ -281,11 +312,10 @@
*/
static void mei_me_host_set_ready(struct mei_device *dev)
{
- struct mei_me_hw *hw = to_me_hw(dev);
- u32 hcsr = mei_hcsr_read(hw);
+ u32 hcsr = mei_hcsr_read(dev);
hcsr |= H_IE | H_IG | H_RDY;
- mei_hcsr_set(hw, hcsr);
+ mei_hcsr_set(dev, hcsr);
}
/**
@@ -296,8 +326,7 @@
*/
static bool mei_me_host_is_ready(struct mei_device *dev)
{
- struct mei_me_hw *hw = to_me_hw(dev);
- u32 hcsr = mei_hcsr_read(hw);
+ u32 hcsr = mei_hcsr_read(dev);
return (hcsr & H_RDY) == H_RDY;
}
@@ -310,8 +339,7 @@
*/
static bool mei_me_hw_is_ready(struct mei_device *dev)
{
- struct mei_me_hw *hw = to_me_hw(dev);
- u32 mecsr = mei_me_mecsr_read(hw);
+ u32 mecsr = mei_me_mecsr_read(dev);
return (mecsr & ME_RDY_HRA) == ME_RDY_HRA;
}
@@ -368,11 +396,10 @@
*/
static unsigned char mei_hbuf_filled_slots(struct mei_device *dev)
{
- struct mei_me_hw *hw = to_me_hw(dev);
u32 hcsr;
char read_ptr, write_ptr;
- hcsr = mei_hcsr_read(hw);
+ hcsr = mei_hcsr_read(dev);
read_ptr = (char) ((hcsr & H_CBRP) >> 8);
write_ptr = (char) ((hcsr & H_CBWP) >> 16);
@@ -439,7 +466,6 @@
struct mei_msg_hdr *header,
unsigned char *buf)
{
- struct mei_me_hw *hw = to_me_hw(dev);
unsigned long rem;
unsigned long length = header->length;
u32 *reg_buf = (u32 *)buf;
@@ -457,21 +483,21 @@
if (empty_slots < 0 || dw_cnt > empty_slots)
return -EMSGSIZE;
- mei_me_reg_write(hw, H_CB_WW, *((u32 *) header));
+ mei_me_hcbww_write(dev, *((u32 *) header));
for (i = 0; i < length / 4; i++)
- mei_me_reg_write(hw, H_CB_WW, reg_buf[i]);
+ mei_me_hcbww_write(dev, reg_buf[i]);
rem = length & 0x3;
if (rem > 0) {
u32 reg = 0;
memcpy(&reg, &buf[length - rem], rem);
- mei_me_reg_write(hw, H_CB_WW, reg);
+ mei_me_hcbww_write(dev, reg);
}
- hcsr = mei_hcsr_read(hw) | H_IG;
- mei_hcsr_set(hw, hcsr);
+ hcsr = mei_hcsr_read(dev) | H_IG;
+ mei_hcsr_set(dev, hcsr);
if (!mei_me_hw_is_ready(dev))
return -EIO;
@@ -487,12 +513,11 @@
*/
static int mei_me_count_full_read_slots(struct mei_device *dev)
{
- struct mei_me_hw *hw = to_me_hw(dev);
u32 me_csr;
char read_ptr, write_ptr;
unsigned char buffer_depth, filled_slots;
- me_csr = mei_me_mecsr_read(hw);
+ me_csr = mei_me_mecsr_read(dev);
buffer_depth = (unsigned char)((me_csr & ME_CBD_HRA) >> 24);
read_ptr = (char) ((me_csr & ME_CBRP_HRA) >> 8);
write_ptr = (char) ((me_csr & ME_CBWP_HRA) >> 16);
@@ -518,7 +543,6 @@
static int mei_me_read_slots(struct mei_device *dev, unsigned char *buffer,
unsigned long buffer_length)
{
- struct mei_me_hw *hw = to_me_hw(dev);
u32 *reg_buf = (u32 *)buffer;
u32 hcsr;
@@ -531,49 +555,59 @@
memcpy(reg_buf, &reg, buffer_length);
}
- hcsr = mei_hcsr_read(hw) | H_IG;
- mei_hcsr_set(hw, hcsr);
+ hcsr = mei_hcsr_read(dev) | H_IG;
+ mei_hcsr_set(dev, hcsr);
return 0;
}
/**
- * mei_me_pg_enter - write pg enter register
+ * mei_me_pg_set - write pg enter register
*
* @dev: the device structure
*/
-static void mei_me_pg_enter(struct mei_device *dev)
+static void mei_me_pg_set(struct mei_device *dev)
{
struct mei_me_hw *hw = to_me_hw(dev);
- u32 reg = mei_me_reg_read(hw, H_HPG_CSR);
+ u32 reg;
+
+ reg = mei_me_reg_read(hw, H_HPG_CSR);
+ trace_mei_reg_read(dev->dev, "H_HPG_CSR", H_HPG_CSR, reg);
reg |= H_HPG_CSR_PGI;
+
+ trace_mei_reg_write(dev->dev, "H_HPG_CSR", H_HPG_CSR, reg);
mei_me_reg_write(hw, H_HPG_CSR, reg);
}
/**
- * mei_me_pg_exit - write pg exit register
+ * mei_me_pg_unset - write pg exit register
*
* @dev: the device structure
*/
-static void mei_me_pg_exit(struct mei_device *dev)
+static void mei_me_pg_unset(struct mei_device *dev)
{
struct mei_me_hw *hw = to_me_hw(dev);
- u32 reg = mei_me_reg_read(hw, H_HPG_CSR);
+ u32 reg;
+
+ reg = mei_me_reg_read(hw, H_HPG_CSR);
+ trace_mei_reg_read(dev->dev, "H_HPG_CSR", H_HPG_CSR, reg);
WARN(!(reg & H_HPG_CSR_PGI), "PGI is not set\n");
reg |= H_HPG_CSR_PGIHEXR;
+
+ trace_mei_reg_write(dev->dev, "H_HPG_CSR", H_HPG_CSR, reg);
mei_me_reg_write(hw, H_HPG_CSR, reg);
}
/**
- * mei_me_pg_set_sync - perform pg entry procedure
+ * mei_me_pg_enter_sync - perform pg entry procedure
*
* @dev: the device structure
*
* Return: 0 on success, an error code otherwise
*/
-int mei_me_pg_set_sync(struct mei_device *dev)
+int mei_me_pg_enter_sync(struct mei_device *dev)
{
struct mei_me_hw *hw = to_me_hw(dev);
unsigned long timeout = mei_secs_to_jiffies(MEI_PGI_TIMEOUT);
@@ -591,7 +625,7 @@
mutex_lock(&dev->device_lock);
if (dev->pg_event == MEI_PG_EVENT_RECEIVED) {
- mei_me_pg_enter(dev);
+ mei_me_pg_set(dev);
ret = 0;
} else {
ret = -ETIME;
@@ -604,13 +638,13 @@
}
/**
- * mei_me_pg_unset_sync - perform pg exit procedure
+ * mei_me_pg_exit_sync - perform pg exit procedure
*
* @dev: the device structure
*
* Return: 0 on success, an error code otherwise
*/
-int mei_me_pg_unset_sync(struct mei_device *dev)
+int mei_me_pg_exit_sync(struct mei_device *dev)
{
struct mei_me_hw *hw = to_me_hw(dev);
unsigned long timeout = mei_secs_to_jiffies(MEI_PGI_TIMEOUT);
@@ -621,7 +655,7 @@
dev->pg_event = MEI_PG_EVENT_WAIT;
- mei_me_pg_exit(dev);
+ mei_me_pg_unset(dev);
mutex_unlock(&dev->device_lock);
wait_event_timeout(dev->wait_pg,
@@ -649,8 +683,7 @@
*/
static bool mei_me_pg_is_enabled(struct mei_device *dev)
{
- struct mei_me_hw *hw = to_me_hw(dev);
- u32 reg = mei_me_reg_read(hw, ME_CSR_HA);
+ u32 reg = mei_me_mecsr_read(dev);
if ((reg & ME_PGIC_HRA) == 0)
goto notsupported;
@@ -683,14 +716,13 @@
irqreturn_t mei_me_irq_quick_handler(int irq, void *dev_id)
{
struct mei_device *dev = (struct mei_device *) dev_id;
- struct mei_me_hw *hw = to_me_hw(dev);
- u32 csr_reg = mei_hcsr_read(hw);
+ u32 hcsr = mei_hcsr_read(dev);
- if ((csr_reg & H_IS) != H_IS)
+ if ((hcsr & H_IS) != H_IS)
return IRQ_NONE;
/* clear H_IS bit in H_CSR */
- mei_me_reg_write(hw, H_CSR, csr_reg);
+ mei_hcsr_write(dev, hcsr);
return IRQ_WAKE_THREAD;
}
diff --git a/drivers/misc/mei/hw-me.h b/drivers/misc/mei/hw-me.h
index d6567af..6022d52 100644
--- a/drivers/misc/mei/hw-me.h
+++ b/drivers/misc/mei/hw-me.h
@@ -71,8 +71,8 @@
struct mei_device *mei_me_dev_init(struct pci_dev *pdev,
const struct mei_cfg *cfg);
-int mei_me_pg_set_sync(struct mei_device *dev);
-int mei_me_pg_unset_sync(struct mei_device *dev);
+int mei_me_pg_enter_sync(struct mei_device *dev);
+int mei_me_pg_exit_sync(struct mei_device *dev);
irqreturn_t mei_me_irq_quick_handler(int irq, void *dev_id);
irqreturn_t mei_me_irq_thread_handler(int irq, void *dev_id);
diff --git a/drivers/misc/mei/hw-txe.c b/drivers/misc/mei/hw-txe.c
index 618ea72..7abafe7 100644
--- a/drivers/misc/mei/hw-txe.c
+++ b/drivers/misc/mei/hw-txe.c
@@ -412,7 +412,7 @@
mei_txe_br_reg_write(hw, HIER_REG, 0);
}
/**
- * mei_txe_intr_disable - enable all interrupts
+ * mei_txe_intr_enable - enable all interrupts
*
* @dev: the device structure
*/
diff --git a/drivers/misc/mei/init.c b/drivers/misc/mei/init.c
index 6ad049a..97353cf 100644
--- a/drivers/misc/mei/init.c
+++ b/drivers/misc/mei/init.c
@@ -389,6 +389,7 @@
INIT_LIST_HEAD(&dev->device_list);
INIT_LIST_HEAD(&dev->me_clients);
mutex_init(&dev->device_lock);
+ init_rwsem(&dev->me_clients_rwsem);
init_waitqueue_head(&dev->wait_hw_ready);
init_waitqueue_head(&dev->wait_pg);
init_waitqueue_head(&dev->wait_hbm_start);
@@ -396,7 +397,6 @@
dev->dev_state = MEI_DEV_INITIALIZING;
dev->reset_count = 0;
- mei_io_list_init(&dev->read_list);
mei_io_list_init(&dev->write_list);
mei_io_list_init(&dev->write_waiting_list);
mei_io_list_init(&dev->ctrl_wr_list);
diff --git a/drivers/misc/mei/interrupt.c b/drivers/misc/mei/interrupt.c
index 711cddf..3f84d2e 100644
--- a/drivers/misc/mei/interrupt.c
+++ b/drivers/misc/mei/interrupt.c
@@ -43,7 +43,7 @@
list_for_each_entry_safe(cb, next, &compl_list->list, list) {
cl = cb->cl;
- list_del(&cb->list);
+ list_del_init(&cb->list);
dev_dbg(dev->dev, "completing call back.\n");
if (cl == &dev->iamthif_cl)
@@ -68,91 +68,91 @@
return cl->host_client_id == mei_hdr->host_addr &&
cl->me_client_id == mei_hdr->me_addr;
}
+
/**
- * mei_cl_is_reading - checks if the client
- * is the one to read this message
+ * mei_irq_discard_msg - discard received message
*
- * @cl: mei client
- * @mei_hdr: header of mei message
- *
- * Return: true on match and false otherwise
+ * @dev: mei device
+ * @hdr: message header
*/
-static bool mei_cl_is_reading(struct mei_cl *cl, struct mei_msg_hdr *mei_hdr)
+static inline
+void mei_irq_discard_msg(struct mei_device *dev, struct mei_msg_hdr *hdr)
{
- return mei_cl_hbm_equal(cl, mei_hdr) &&
- cl->state == MEI_FILE_CONNECTED &&
- cl->reading_state != MEI_READ_COMPLETE;
+ /*
+ * no need to check for size as it is guaranteed
+ * that length fits into rd_msg_buf
+ */
+ mei_read_slots(dev, dev->rd_msg_buf, hdr->length);
+ dev_dbg(dev->dev, "discarding message " MEI_HDR_FMT "\n",
+ MEI_HDR_PRM(hdr));
}
/**
* mei_cl_irq_read_msg - process client message
*
- * @dev: the device structure
+ * @cl: reading client
* @mei_hdr: header of mei client message
- * @complete_list: An instance of our list structure
+ * @complete_list: completion list
*
- * Return: 0 on success, <0 on failure.
+ * Return: always 0
*/
-static int mei_cl_irq_read_msg(struct mei_device *dev,
- struct mei_msg_hdr *mei_hdr,
- struct mei_cl_cb *complete_list)
+int mei_cl_irq_read_msg(struct mei_cl *cl,
+ struct mei_msg_hdr *mei_hdr,
+ struct mei_cl_cb *complete_list)
{
- struct mei_cl *cl;
- struct mei_cl_cb *cb, *next;
+ struct mei_device *dev = cl->dev;
+ struct mei_cl_cb *cb;
unsigned char *buffer = NULL;
- list_for_each_entry_safe(cb, next, &dev->read_list.list, list) {
- cl = cb->cl;
- if (!mei_cl_is_reading(cl, mei_hdr))
- continue;
-
- cl->reading_state = MEI_READING;
-
- if (cb->response_buffer.size == 0 ||
- cb->response_buffer.data == NULL) {
- cl_err(dev, cl, "response buffer is not allocated.\n");
- list_del(&cb->list);
- return -ENOMEM;
- }
-
- if (cb->response_buffer.size < mei_hdr->length + cb->buf_idx) {
- cl_dbg(dev, cl, "message overflow. size %d len %d idx %ld\n",
- cb->response_buffer.size,
- mei_hdr->length, cb->buf_idx);
- buffer = krealloc(cb->response_buffer.data,
- mei_hdr->length + cb->buf_idx,
- GFP_KERNEL);
-
- if (!buffer) {
- list_del(&cb->list);
- return -ENOMEM;
- }
- cb->response_buffer.data = buffer;
- cb->response_buffer.size =
- mei_hdr->length + cb->buf_idx;
- }
-
- buffer = cb->response_buffer.data + cb->buf_idx;
- mei_read_slots(dev, buffer, mei_hdr->length);
-
- cb->buf_idx += mei_hdr->length;
- if (mei_hdr->msg_complete) {
- cl->status = 0;
- list_del(&cb->list);
- cl_dbg(dev, cl, "completed read length = %lu\n",
- cb->buf_idx);
- list_add_tail(&cb->list, &complete_list->list);
- }
- break;
+ cb = list_first_entry_or_null(&cl->rd_pending, struct mei_cl_cb, list);
+ if (!cb) {
+ cl_err(dev, cl, "pending read cb not found\n");
+ goto out;
}
- dev_dbg(dev->dev, "message read\n");
- if (!buffer) {
- mei_read_slots(dev, dev->rd_msg_buf, mei_hdr->length);
- dev_dbg(dev->dev, "discarding message " MEI_HDR_FMT "\n",
- MEI_HDR_PRM(mei_hdr));
+ if (!mei_cl_is_connected(cl)) {
+ cl_dbg(dev, cl, "not connected\n");
+ cb->status = -ENODEV;
+ goto out;
}
+ if (cb->buf.size == 0 || cb->buf.data == NULL) {
+ cl_err(dev, cl, "response buffer is not allocated.\n");
+ list_move_tail(&cb->list, &complete_list->list);
+ cb->status = -ENOMEM;
+ goto out;
+ }
+
+ if (cb->buf.size < mei_hdr->length + cb->buf_idx) {
+ cl_dbg(dev, cl, "message overflow. size %d len %d idx %ld\n",
+ cb->buf.size, mei_hdr->length, cb->buf_idx);
+ buffer = krealloc(cb->buf.data, mei_hdr->length + cb->buf_idx,
+ GFP_KERNEL);
+
+ if (!buffer) {
+ cb->status = -ENOMEM;
+ list_move_tail(&cb->list, &complete_list->list);
+ goto out;
+ }
+ cb->buf.data = buffer;
+ cb->buf.size = mei_hdr->length + cb->buf_idx;
+ }
+
+ buffer = cb->buf.data + cb->buf_idx;
+ mei_read_slots(dev, buffer, mei_hdr->length);
+
+ cb->buf_idx += mei_hdr->length;
+
+ if (mei_hdr->msg_complete) {
+ cb->read_time = jiffies;
+ cl_dbg(dev, cl, "completed read length = %lu\n", cb->buf_idx);
+ list_move_tail(&cb->list, &complete_list->list);
+ }
+
+out:
+ if (!buffer)
+ mei_irq_discard_msg(dev, mei_hdr);
+
return 0;
}
@@ -183,7 +183,6 @@
cl->state = MEI_FILE_DISCONNECTED;
cl->status = 0;
- list_del(&cb->list);
mei_io_cb_free(cb);
return ret;
@@ -263,7 +262,7 @@
return ret;
}
- list_move_tail(&cb->list, &dev->read_list.list);
+ list_move_tail(&cb->list, &cl->rd_pending);
return 0;
}
@@ -301,7 +300,7 @@
if (ret) {
cl->status = ret;
cb->buf_idx = 0;
- list_del(&cb->list);
+ list_del_init(&cb->list);
return ret;
}
@@ -378,25 +377,13 @@
goto end;
}
- if (mei_hdr->host_addr == dev->iamthif_cl.host_client_id &&
- MEI_FILE_CONNECTED == dev->iamthif_cl.state &&
- dev->iamthif_state == MEI_IAMTHIF_READING) {
-
- ret = mei_amthif_irq_read_msg(dev, mei_hdr, cmpl_list);
- if (ret) {
- dev_err(dev->dev, "mei_amthif_irq_read_msg failed = %d\n",
- ret);
- goto end;
- }
+ if (cl == &dev->iamthif_cl) {
+ ret = mei_amthif_irq_read_msg(cl, mei_hdr, cmpl_list);
} else {
- ret = mei_cl_irq_read_msg(dev, mei_hdr, cmpl_list);
- if (ret) {
- dev_err(dev->dev, "mei_cl_irq_read_msg failed = %d\n",
- ret);
- goto end;
- }
+ ret = mei_cl_irq_read_msg(cl, mei_hdr, cmpl_list);
}
+
reset_slots:
/* reset the number of slots and header */
*slots = mei_count_full_read_slots(dev);
@@ -449,21 +436,9 @@
cl = cb->cl;
cl->status = 0;
- list_del(&cb->list);
- if (cb->fop_type == MEI_FOP_WRITE &&
- cl != &dev->iamthif_cl) {
- cl_dbg(dev, cl, "MEI WRITE COMPLETE\n");
- cl->writing_state = MEI_WRITE_COMPLETE;
- list_add_tail(&cb->list, &cmpl_list->list);
- }
- if (cl == &dev->iamthif_cl) {
- cl_dbg(dev, cl, "check iamthif flow control.\n");
- if (dev->iamthif_flow_control_pending) {
- ret = mei_amthif_irq_read(dev, &slots);
- if (ret)
- return ret;
- }
- }
+ cl_dbg(dev, cl, "MEI WRITE COMPLETE\n");
+ cl->writing_state = MEI_WRITE_COMPLETE;
+ list_move_tail(&cb->list, &cmpl_list->list);
}
if (dev->wd_state == MEI_WD_STOPPING) {
@@ -587,10 +562,7 @@
if (--dev->iamthif_stall_timer == 0) {
dev_err(dev->dev, "timer: amthif hanged.\n");
mei_reset(dev);
- dev->iamthif_msg_buf_size = 0;
- dev->iamthif_msg_buf_index = 0;
dev->iamthif_canceled = false;
- dev->iamthif_ioctl = true;
dev->iamthif_state = MEI_IAMTHIF_IDLE;
dev->iamthif_timer = 0;
@@ -636,4 +608,3 @@
schedule_delayed_work(&dev->timer_work, 2 * HZ);
mutex_unlock(&dev->device_lock);
}
-
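The reworked mei_cl_irq_read_msg() above takes the oldest entry from the per-client pending list and moves it to the completion list. A self-contained sketch of that idiom (hypothetical demo_cb type, not from the patch):

#include <linux/list.h>

struct demo_cb {
	struct list_head list;
};

static struct demo_cb *demo_complete_oldest(struct list_head *pending,
					     struct list_head *completed)
{
	struct demo_cb *cb;

	/* returns NULL instead of a bogus entry when the list is empty */
	cb = list_first_entry_or_null(pending, struct demo_cb, list);
	if (!cb)
		return NULL;

	/* unlink from pending and queue on completed in one step */
	list_move_tail(&cb->list, completed);
	return cb;
}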
diff --git a/drivers/misc/mei/main.c b/drivers/misc/mei/main.c
index 47680c8..3e29681 100644
--- a/drivers/misc/mei/main.c
+++ b/drivers/misc/mei/main.c
@@ -58,24 +58,18 @@
mutex_lock(&dev->device_lock);
- cl = NULL;
-
- err = -ENODEV;
if (dev->dev_state != MEI_DEV_ENABLED) {
dev_dbg(dev->dev, "dev_state != MEI_ENABLED dev_state = %s\n",
mei_dev_state_str(dev->dev_state));
+ err = -ENODEV;
goto err_unlock;
}
- err = -ENOMEM;
- cl = mei_cl_allocate(dev);
- if (!cl)
+ cl = mei_cl_alloc_linked(dev, MEI_HOST_CLIENT_ID_ANY);
+ if (IS_ERR(cl)) {
+ err = PTR_ERR(cl);
goto err_unlock;
-
- /* open_handle_count check is handled in the mei_cl_link */
- err = mei_cl_link(cl, MEI_HOST_CLIENT_ID_ANY);
- if (err)
- goto err_unlock;
+ }
file->private_data = cl;
@@ -85,7 +79,6 @@
err_unlock:
mutex_unlock(&dev->device_lock);
- kfree(cl);
return err;
}
@@ -100,7 +93,6 @@
static int mei_release(struct inode *inode, struct file *file)
{
struct mei_cl *cl = file->private_data;
- struct mei_cl_cb *cb;
struct mei_device *dev;
int rets = 0;
@@ -114,33 +106,18 @@
rets = mei_amthif_release(dev, file);
goto out;
}
- if (cl->state == MEI_FILE_CONNECTED) {
+ if (mei_cl_is_connected(cl)) {
cl->state = MEI_FILE_DISCONNECTING;
cl_dbg(dev, cl, "disconnecting\n");
rets = mei_cl_disconnect(cl);
}
- mei_cl_flush_queues(cl);
+ mei_cl_flush_queues(cl, file);
cl_dbg(dev, cl, "removing\n");
mei_cl_unlink(cl);
-
- /* free read cb */
- cb = NULL;
- if (cl->read_cb) {
- cb = mei_cl_find_read_cb(cl);
- /* Remove entry from read list */
- if (cb)
- list_del(&cb->list);
-
- cb = cl->read_cb;
- cl->read_cb = NULL;
- }
-
file->private_data = NULL;
- mei_io_cb_free(cb);
-
kfree(cl);
out:
mutex_unlock(&dev->device_lock);
@@ -162,9 +139,8 @@
size_t length, loff_t *offset)
{
struct mei_cl *cl = file->private_data;
- struct mei_cl_cb *cb_pos = NULL;
- struct mei_cl_cb *cb = NULL;
struct mei_device *dev;
+ struct mei_cl_cb *cb = NULL;
int rets;
int err;
@@ -191,8 +167,8 @@
goto out;
}
- if (cl->read_cb) {
- cb = cl->read_cb;
+ cb = mei_cl_read_cb(cl, file);
+ if (cb) {
/* read what left */
if (cb->buf_idx > *offset)
goto copy_buffer;
@@ -208,7 +184,7 @@
*offset = 0;
}
- err = mei_cl_read_start(cl, length);
+ err = mei_cl_read_start(cl, length, file);
if (err && err != -EBUSY) {
dev_dbg(dev->dev,
"mei start read failure with status = %d\n", err);
@@ -216,8 +192,7 @@
goto out;
}
- if (MEI_READ_COMPLETE != cl->reading_state &&
- !waitqueue_active(&cl->rx_wait)) {
+ if (list_empty(&cl->rd_completed) && !waitqueue_active(&cl->rx_wait)) {
if (file->f_flags & O_NONBLOCK) {
rets = -EAGAIN;
goto out;
@@ -226,8 +201,8 @@
mutex_unlock(&dev->device_lock);
if (wait_event_interruptible(cl->rx_wait,
- MEI_READ_COMPLETE == cl->reading_state ||
- mei_cl_is_transitioning(cl))) {
+ (!list_empty(&cl->rd_completed)) ||
+ (!mei_cl_is_connected(cl)))) {
if (signal_pending(current))
return -EINTR;
@@ -235,26 +210,28 @@
}
mutex_lock(&dev->device_lock);
- if (mei_cl_is_transitioning(cl)) {
+ if (!mei_cl_is_connected(cl)) {
rets = -EBUSY;
goto out;
}
}
- cb = cl->read_cb;
-
+ cb = mei_cl_read_cb(cl, file);
if (!cb) {
- rets = -ENODEV;
- goto out;
- }
- if (cl->reading_state != MEI_READ_COMPLETE) {
rets = 0;
goto out;
}
- /* now copy the data to user space */
+
copy_buffer:
+ /* now copy the data to user space */
+ if (cb->status) {
+ rets = cb->status;
+ dev_dbg(dev->dev, "read operation failed %d\n", rets);
+ goto free;
+ }
+
dev_dbg(dev->dev, "buf.size = %d buf.idx= %ld\n",
- cb->response_buffer.size, cb->buf_idx);
+ cb->buf.size, cb->buf_idx);
if (length == 0 || ubuf == NULL || *offset > cb->buf_idx) {
rets = -EMSGSIZE;
goto free;
@@ -264,7 +241,7 @@
* however buf_idx may point beyond that */
length = min_t(size_t, length, cb->buf_idx - *offset);
- if (copy_to_user(ubuf, cb->response_buffer.data + *offset, length)) {
+ if (copy_to_user(ubuf, cb->buf.data + *offset, length)) {
dev_dbg(dev->dev, "failed to copy data to userland\n");
rets = -EFAULT;
goto free;
@@ -276,13 +253,8 @@
goto out;
free:
- cb_pos = mei_cl_find_read_cb(cl);
- /* Remove entry from read list */
- if (cb_pos)
- list_del(&cb_pos->list);
mei_io_cb_free(cb);
- cl->reading_state = MEI_IDLE;
- cl->read_cb = NULL;
+
out:
dev_dbg(dev->dev, "end mei read rets= %d\n", rets);
mutex_unlock(&dev->device_lock);
@@ -336,9 +308,8 @@
goto out;
}
- if (cl->state != MEI_FILE_CONNECTED) {
- dev_err(dev->dev, "host client = %d, is not connected to ME client = %d",
- cl->host_client_id, cl->me_client_id);
+ if (!mei_cl_is_connected(cl)) {
+ cl_err(dev, cl, "is not connected");
rets = -ENODEV;
goto out;
}
@@ -349,41 +320,22 @@
timeout = write_cb->read_time +
mei_secs_to_jiffies(MEI_IAMTHIF_READ_TIMER);
- if (time_after(jiffies, timeout) ||
- cl->reading_state == MEI_READ_COMPLETE) {
+ if (time_after(jiffies, timeout)) {
*offset = 0;
- list_del(&write_cb->list);
mei_io_cb_free(write_cb);
write_cb = NULL;
}
}
}
- /* free entry used in read */
- if (cl->reading_state == MEI_READ_COMPLETE) {
- *offset = 0;
- write_cb = mei_cl_find_read_cb(cl);
- if (write_cb) {
- list_del(&write_cb->list);
- mei_io_cb_free(write_cb);
- write_cb = NULL;
- cl->reading_state = MEI_IDLE;
- cl->read_cb = NULL;
- }
- } else if (cl->reading_state == MEI_IDLE)
- *offset = 0;
-
-
- write_cb = mei_io_cb_init(cl, file);
+ *offset = 0;
+ write_cb = mei_cl_alloc_cb(cl, length, MEI_FOP_WRITE, file);
if (!write_cb) {
rets = -ENOMEM;
goto out;
}
- rets = mei_io_cb_alloc_req_buf(write_cb, length);
- if (rets)
- goto out;
- rets = copy_from_user(write_cb->request_buffer.data, ubuf, length);
+ rets = copy_from_user(write_cb->buf.data, ubuf, length);
if (rets) {
dev_dbg(dev->dev, "failed to copy data from userland\n");
rets = -EFAULT;
@@ -391,7 +343,7 @@
}
if (cl == &dev->iamthif_cl) {
- rets = mei_amthif_write(dev, write_cb);
+ rets = mei_amthif_write(cl, write_cb);
if (rets) {
dev_err(dev->dev,
@@ -464,7 +416,7 @@
*/
if (uuid_le_cmp(data->in_client_uuid, mei_amthif_guid) == 0) {
dev_dbg(dev->dev, "FW Client is amthi\n");
- if (dev->iamthif_cl.state != MEI_FILE_CONNECTED) {
+ if (!mei_cl_is_connected(&dev->iamthif_cl)) {
rets = -ENODEV;
goto end;
}
@@ -588,6 +540,7 @@
*/
static unsigned int mei_poll(struct file *file, poll_table *wait)
{
+ unsigned long req_events = poll_requested_events(wait);
struct mei_cl *cl = file->private_data;
struct mei_device *dev;
unsigned int mask = 0;
@@ -599,27 +552,26 @@
mutex_lock(&dev->device_lock);
- if (!mei_cl_is_connected(cl)) {
+
+ if (dev->dev_state != MEI_DEV_ENABLED ||
+ !mei_cl_is_connected(cl)) {
mask = POLLERR;
goto out;
}
- mutex_unlock(&dev->device_lock);
-
-
- if (cl == &dev->iamthif_cl)
- return mei_amthif_poll(dev, file, wait);
-
- poll_wait(file, &cl->tx_wait, wait);
-
- mutex_lock(&dev->device_lock);
-
- if (!mei_cl_is_connected(cl)) {
- mask = POLLERR;
+ if (cl == &dev->iamthif_cl) {
+ mask = mei_amthif_poll(dev, file, wait);
goto out;
}
- mask |= (POLLIN | POLLRDNORM);
+ if (req_events & (POLLIN | POLLRDNORM)) {
+ poll_wait(file, &cl->rx_wait, wait);
+
+ if (!list_empty(&cl->rd_completed))
+ mask |= POLLIN | POLLRDNORM;
+ else
+ mei_cl_read_start(cl, 0, file);
+ }
out:
mutex_unlock(&dev->device_lock);
diff --git a/drivers/misc/mei/mei-trace.c b/drivers/misc/mei/mei-trace.c
new file mode 100644
index 0000000..388efb5
--- /dev/null
+++ b/drivers/misc/mei/mei-trace.c
@@ -0,0 +1,25 @@
+/*
+ *
+ * Intel Management Engine Interface (Intel MEI) Linux driver
+ * Copyright (c) 2015, Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ */
+#include <linux/module.h>
+
+/* sparse doesn't like tracepoint macros */
+#ifndef __CHECKER__
+#define CREATE_TRACE_POINTS
+#include "mei-trace.h"
+
+EXPORT_TRACEPOINT_SYMBOL(mei_reg_read);
+EXPORT_TRACEPOINT_SYMBOL(mei_reg_write);
+#endif /* __CHECKER__ */
diff --git a/drivers/misc/mei/mei-trace.h b/drivers/misc/mei/mei-trace.h
new file mode 100644
index 0000000..47e1bc6
--- /dev/null
+++ b/drivers/misc/mei/mei-trace.h
@@ -0,0 +1,74 @@
+/*
+ *
+ * Intel Management Engine Interface (Intel MEI) Linux driver
+ * Copyright (c) 2015, Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ */
+
+#if !defined(_MEI_TRACE_H_) || defined(TRACE_HEADER_MULTI_READ)
+#define _MEI_TRACE_H_
+
+#include <linux/stringify.h>
+#include <linux/types.h>
+#include <linux/tracepoint.h>
+
+#include <linux/device.h>
+
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM mei
+
+TRACE_EVENT(mei_reg_read,
+ TP_PROTO(const struct device *dev, const char *reg, u32 offs, u32 val),
+ TP_ARGS(dev, reg, offs, val),
+ TP_STRUCT__entry(
+ __string(dev, dev_name(dev))
+ __field(const char *, reg)
+ __field(u32, offs)
+ __field(u32, val)
+ ),
+ TP_fast_assign(
+ __assign_str(dev, dev_name(dev))
+ __entry->reg = reg;
+ __entry->offs = offs;
+ __entry->val = val;
+ ),
+ TP_printk("[%s] read %s:[%#x] = %#x",
+ __get_str(dev), __entry->reg, __entry->offs, __entry->val)
+);
+
+TRACE_EVENT(mei_reg_write,
+ TP_PROTO(const struct device *dev, const char *reg, u32 offs, u32 val),
+ TP_ARGS(dev, reg, offs, val),
+ TP_STRUCT__entry(
+ __string(dev, dev_name(dev))
+ __field(const char *, reg)
+ __field(u32, offs)
+ __field(u32, val)
+ ),
+ TP_fast_assign(
+ __assign_str(dev, dev_name(dev))
+ __entry->reg = reg;
+ __entry->offs = offs;
+ __entry->val = val;
+ ),
+ TP_printk("[%s] write %s[%#x] = %#x",
+ __get_str(dev), __entry->reg, __entry->offs, __entry->val)
+);
+
+#endif /* _MEI_TRACE_H_ */
+
+/* This part must be outside protection */
+#undef TRACE_INCLUDE_PATH
+#undef TRACE_INCLUDE_FILE
+#define TRACE_INCLUDE_PATH .
+#define TRACE_INCLUDE_FILE mei-trace
+#include <trace/define_trace.h>
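The TRACE_EVENT() definitions above expand into trace_mei_reg_read()/trace_mei_reg_write() helpers; only one compilation unit (mei-trace.c here) defines CREATE_TRACE_POINTS, every other file just includes the header and calls the helpers. A hedged caller sketch with illustrative names:

#include <linux/device.h>
#include <linux/io.h>
#include <linux/types.h>
#include "mei-trace.h"

static u32 demo_read_reg(struct device *dev, void __iomem *base, u32 offs)
{
	u32 val = readl(base + offs);

	/* emit the tracepoint with the register name, offset and value */
	trace_mei_reg_read(dev, "DEMO_REG", offs, val);
	return val;
}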
diff --git a/drivers/misc/mei/mei_dev.h b/drivers/misc/mei/mei_dev.h
index 6c6ce93..f066ecd 100644
--- a/drivers/misc/mei/mei_dev.h
+++ b/drivers/misc/mei/mei_dev.h
@@ -194,23 +194,25 @@
* @list: link in callback queue
* @cl: file client who is running this operation
* @fop_type: file operation type
- * @request_buffer: buffer to store request data
- * @response_buffer: buffer to store response data
+ * @buf: buffer for data associated with the callback
* @buf_idx: last read index
* @read_time: last read operation time stamp (iamthif)
* @file_object: pointer to file structure
+ * @status: io status of the cb
* @internal: communication between driver and FW flag
+ * @completed: the transfer or reception has completed
*/
struct mei_cl_cb {
struct list_head list;
struct mei_cl *cl;
enum mei_cb_file_ops fop_type;
- struct mei_msg_data request_buffer;
- struct mei_msg_data response_buffer;
+ struct mei_msg_data buf;
unsigned long buf_idx;
unsigned long read_time;
struct file *file_object;
+ int status;
u32 internal:1;
+ u32 completed:1;
};
/**
@@ -229,9 +231,9 @@
* @me_client_id: me/fw id
* @mei_flow_ctrl_creds: transmit flow credentials
* @timer_count: watchdog timer for operation completion
- * @reading_state: state of the rx
* @writing_state: state of the tx
- * @read_cb: current pending reading callback
+ * @rd_pending: pending read credits
+ * @rd_completed: completed read
*
* @device: device on the mei client bus
* @device_link: link to bus clients
@@ -249,9 +251,9 @@
u8 me_client_id;
u8 mei_flow_ctrl_creds;
u8 timer_count;
- enum mei_file_transaction_states reading_state;
enum mei_file_transaction_states writing_state;
- struct mei_cl_cb *read_cb;
+ struct list_head rd_pending;
+ struct list_head rd_completed;
/* MEI CL bus data */
struct mei_cl_device *device;
@@ -423,7 +425,6 @@
* @cdev : character device
* @minor : minor number allocated for device
*
- * @read_list : read completion list
* @write_list : write pending list
* @write_waiting_list : write completion list
* @ctrl_wr_list : pending control write list
@@ -460,6 +461,7 @@
* @version : HBM protocol version in use
* @hbm_f_pg_supported : hbm feature pgi protocol
*
+ * @me_clients_rwsem: rw lock over me_clients list
* @me_clients : list of FW clients
* @me_clients_map : FW clients bit map
* @host_clients_map : host clients id pool
@@ -480,12 +482,7 @@
* @iamthif_mtu : amthif client max message length
* @iamthif_timer : time stamp of current amthif command completion
* @iamthif_stall_timer : timer to detect amthif hang
- * @iamthif_msg_buf : amthif current message buffer
- * @iamthif_msg_buf_size : size of current amthif message request buffer
- * @iamthif_msg_buf_index : current index in amthif message request buffer
* @iamthif_state : amthif processor state
- * @iamthif_flow_control_pending: amthif waits for flow control
- * @iamthif_ioctl : wait for completion if amthif control message
* @iamthif_canceled : current amthif command is canceled
*
* @init_work : work item for the device init
@@ -503,7 +500,6 @@
struct cdev cdev;
int minor;
- struct mei_cl_cb read_list;
struct mei_cl_cb write_list;
struct mei_cl_cb write_waiting_list;
struct mei_cl_cb ctrl_wr_list;
@@ -556,6 +552,7 @@
struct hbm_version version;
unsigned int hbm_f_pg_supported:1;
+ struct rw_semaphore me_clients_rwsem;
struct list_head me_clients;
DECLARE_BITMAP(me_clients_map, MEI_CLIENTS_MAX);
DECLARE_BITMAP(host_clients_map, MEI_CLIENTS_MAX);
@@ -579,12 +576,7 @@
int iamthif_mtu;
unsigned long iamthif_timer;
u32 iamthif_stall_timer;
- unsigned char *iamthif_msg_buf; /* Note: memory has to be allocated */
- u32 iamthif_msg_buf_size;
- u32 iamthif_msg_buf_index;
enum iamthif_states iamthif_state;
- bool iamthif_flow_control_pending;
- bool iamthif_ioctl;
bool iamthif_canceled;
struct work_struct init_work;
@@ -662,8 +654,6 @@
int mei_amthif_host_init(struct mei_device *dev);
-int mei_amthif_write(struct mei_device *dev, struct mei_cl_cb *priv_cb);
-
int mei_amthif_read(struct mei_device *dev, struct file *file,
char __user *ubuf, size_t length, loff_t *offset);
@@ -675,13 +665,13 @@
struct mei_cl_cb *mei_amthif_find_read_list_entry(struct mei_device *dev,
struct file *file);
-void mei_amthif_run_next_cmd(struct mei_device *dev);
-
+int mei_amthif_write(struct mei_cl *cl, struct mei_cl_cb *cb);
+int mei_amthif_run_next_cmd(struct mei_device *dev);
int mei_amthif_irq_write(struct mei_cl *cl, struct mei_cl_cb *cb,
struct mei_cl_cb *cmpl_list);
void mei_amthif_complete(struct mei_device *dev, struct mei_cl_cb *cb);
-int mei_amthif_irq_read_msg(struct mei_device *dev,
+int mei_amthif_irq_read_msg(struct mei_cl *cl,
struct mei_msg_hdr *mei_hdr,
struct mei_cl_cb *complete_list);
int mei_amthif_irq_read(struct mei_device *dev, s32 *slots);
diff --git a/drivers/misc/mei/nfc.c b/drivers/misc/mei/nfc.c
index bb61a11..c3bcb63 100644
--- a/drivers/misc/mei/nfc.c
+++ b/drivers/misc/mei/nfc.c
@@ -482,8 +482,8 @@
int mei_nfc_host_init(struct mei_device *dev)
{
struct mei_nfc_dev *ndev;
- struct mei_cl *cl_info, *cl = NULL;
- struct mei_me_client *me_cl;
+ struct mei_cl *cl_info, *cl;
+ struct mei_me_client *me_cl = NULL;
int ret;
@@ -500,17 +500,6 @@
goto err;
}
- ndev->cl_info = mei_cl_allocate(dev);
- ndev->cl = mei_cl_allocate(dev);
-
- cl = ndev->cl;
- cl_info = ndev->cl_info;
-
- if (!cl || !cl_info) {
- ret = -ENOMEM;
- goto err;
- }
-
/* check for valid client id */
me_cl = mei_me_cl_by_uuid(dev, &mei_nfc_info_guid);
if (!me_cl) {
@@ -519,17 +508,21 @@
goto err;
}
+ cl_info = mei_cl_alloc_linked(dev, MEI_HOST_CLIENT_ID_ANY);
+ if (IS_ERR(cl_info)) {
+ ret = PTR_ERR(cl_info);
+ goto err;
+ }
+
cl_info->me_client_id = me_cl->client_id;
cl_info->cl_uuid = me_cl->props.protocol_name;
mei_me_cl_put(me_cl);
-
- ret = mei_cl_link(cl_info, MEI_HOST_CLIENT_ID_ANY);
- if (ret)
- goto err;
-
+ me_cl = NULL;
list_add_tail(&cl_info->device_link, &dev->device_list);
+ ndev->cl_info = cl_info;
+
/* check for valid client id */
me_cl = mei_me_cl_by_uuid(dev, &mei_nfc_guid);
if (!me_cl) {
@@ -538,16 +531,21 @@
goto err;
}
+ cl = mei_cl_alloc_linked(dev, MEI_HOST_CLIENT_ID_ANY);
+ if (IS_ERR(cl)) {
+ ret = PTR_ERR(cl);
+ goto err;
+ }
+
cl->me_client_id = me_cl->client_id;
cl->cl_uuid = me_cl->props.protocol_name;
mei_me_cl_put(me_cl);
-
- ret = mei_cl_link(cl, MEI_HOST_CLIENT_ID_ANY);
- if (ret)
- goto err;
+ me_cl = NULL;
list_add_tail(&cl->device_link, &dev->device_list);
+ ndev->cl = cl;
+
ndev->req_id = 1;
INIT_WORK(&ndev->init_work, mei_nfc_init);
@@ -557,6 +555,7 @@
return 0;
err:
+ mei_me_cl_put(me_cl);
mei_nfc_free(ndev);
return ret;
diff --git a/drivers/misc/mei/pci-me.c b/drivers/misc/mei/pci-me.c
index af44ee2..23f71f5 100644
--- a/drivers/misc/mei/pci-me.c
+++ b/drivers/misc/mei/pci-me.c
@@ -388,7 +388,7 @@
mutex_lock(&dev->device_lock);
if (mei_write_is_idle(dev))
- ret = mei_me_pg_set_sync(dev);
+ ret = mei_me_pg_enter_sync(dev);
else
ret = -EAGAIN;
@@ -413,7 +413,7 @@
mutex_lock(&dev->device_lock);
- ret = mei_me_pg_unset_sync(dev);
+ ret = mei_me_pg_exit_sync(dev);
mutex_unlock(&dev->device_lock);
diff --git a/drivers/misc/mei/pci-txe.c b/drivers/misc/mei/pci-txe.c
index c86e2dd..dcfcba4 100644
--- a/drivers/misc/mei/pci-txe.c
+++ b/drivers/misc/mei/pci-txe.c
@@ -63,7 +63,7 @@
}
}
/**
- * mei_probe - Device Initialization Routine
+ * mei_txe_probe - Device Initialization Routine
*
* @pdev: PCI device structure
* @ent: entry in mei_txe_pci_tbl
@@ -193,7 +193,7 @@
}
/**
- * mei_remove - Device Removal Routine
+ * mei_txe_remove - Device Removal Routine
*
* @pdev: PCI device structure
*
diff --git a/drivers/misc/mei/wd.c b/drivers/misc/mei/wd.c
index 475f1de..2725f86 100644
--- a/drivers/misc/mei/wd.c
+++ b/drivers/misc/mei/wd.c
@@ -160,9 +160,10 @@
*/
int mei_wd_stop(struct mei_device *dev)
{
+ struct mei_cl *cl = &dev->wd_cl;
int ret;
- if (dev->wd_cl.state != MEI_FILE_CONNECTED ||
+ if (!mei_cl_is_connected(cl) ||
dev->wd_state != MEI_WD_RUNNING)
return 0;
@@ -170,7 +171,7 @@
dev->wd_state = MEI_WD_STOPPING;
- ret = mei_cl_flow_ctrl_creds(&dev->wd_cl);
+ ret = mei_cl_flow_ctrl_creds(cl);
if (ret < 0)
goto err;
@@ -202,22 +203,25 @@
return ret;
}
-/*
+/**
* mei_wd_ops_start - wd start command from the watchdog core.
*
- * @wd_dev - watchdog device struct
+ * @wd_dev: watchdog device struct
*
* Return: 0 if success, negative errno code for failure
*/
static int mei_wd_ops_start(struct watchdog_device *wd_dev)
{
- int err = -ENODEV;
struct mei_device *dev;
+ struct mei_cl *cl;
+ int err = -ENODEV;
dev = watchdog_get_drvdata(wd_dev);
if (!dev)
return -ENODEV;
+ cl = &dev->wd_cl;
+
mutex_lock(&dev->device_lock);
if (dev->dev_state != MEI_DEV_ENABLED) {
@@ -226,8 +230,8 @@
goto end_unlock;
}
- if (dev->wd_cl.state != MEI_FILE_CONNECTED) {
- dev_dbg(dev->dev, "MEI Driver is not connected to Watchdog Client\n");
+ if (!mei_cl_is_connected(cl)) {
+ cl_dbg(dev, cl, "MEI Driver is not connected to Watchdog Client\n");
goto end_unlock;
}
@@ -239,10 +243,10 @@
return err;
}
-/*
+/**
* mei_wd_ops_stop - wd stop command from the watchdog core.
*
- * @wd_dev - watchdog device struct
+ * @wd_dev: watchdog device struct
*
* Return: 0 if success, negative errno code for failure
*/
@@ -261,10 +265,10 @@
return 0;
}
-/*
+/**
* mei_wd_ops_ping - wd ping command from the watchdog core.
*
- * @wd_dev - watchdog device struct
+ * @wd_dev: watchdog device struct
*
* Return: 0 if success, negative errno code for failure
*/
@@ -282,8 +286,8 @@
mutex_lock(&dev->device_lock);
- if (cl->state != MEI_FILE_CONNECTED) {
- dev_err(dev->dev, "wd: not connected.\n");
+ if (!mei_cl_is_connected(cl)) {
+ cl_err(dev, cl, "wd: not connected.\n");
ret = -ENODEV;
goto end;
}
@@ -311,11 +315,11 @@
return ret;
}
-/*
+/**
* mei_wd_ops_set_timeout - wd set timeout command from the watchdog core.
*
- * @wd_dev - watchdog device struct
- * @timeout - timeout value to set
+ * @wd_dev: watchdog device struct
+ * @timeout: timeout value to set
*
* Return: 0 if success, negative errno code for failure
*/
diff --git a/drivers/misc/mic/host/mic_boot.c b/drivers/misc/mic/host/mic_boot.c
index ff2b0fb..d9fa609 100644
--- a/drivers/misc/mic/host/mic_boot.c
+++ b/drivers/misc/mic/host/mic_boot.c
@@ -309,7 +309,7 @@
*/
void mic_prepare_suspend(struct mic_device *mdev)
{
- int rc;
+ unsigned long timeout;
#define MIC_SUSPEND_TIMEOUT (60 * HZ)
@@ -331,10 +331,10 @@
*/
mic_set_state(mdev, MIC_SUSPENDING);
mutex_unlock(&mdev->mic_mutex);
- rc = wait_for_completion_timeout(&mdev->reset_wait,
- MIC_SUSPEND_TIMEOUT);
+ timeout = wait_for_completion_timeout(&mdev->reset_wait,
+ MIC_SUSPEND_TIMEOUT);
/* Force reset the card if the shutdown completion timed out */
- if (!rc) {
+ if (!timeout) {
mutex_lock(&mdev->mic_mutex);
mic_set_state(mdev, MIC_SUSPENDED);
mutex_unlock(&mdev->mic_mutex);
@@ -348,10 +348,10 @@
*/
mic_set_state(mdev, MIC_SUSPENDED);
mutex_unlock(&mdev->mic_mutex);
- rc = wait_for_completion_timeout(&mdev->reset_wait,
- MIC_SUSPEND_TIMEOUT);
+ timeout = wait_for_completion_timeout(&mdev->reset_wait,
+ MIC_SUSPEND_TIMEOUT);
/* Force reset the card if the shutdown completion timed out */
- if (!rc)
+ if (!timeout)
mic_stop(mdev, true);
break;
default:
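The mic_boot.c change above reflects that wait_for_completion_timeout() returns an unsigned long: 0 on timeout, otherwise the remaining jiffies, so the result belongs in an unsigned long rather than an int. A minimal sketch of the corrected pattern (demo_* name is hypothetical):

#include <linux/completion.h>
#include <linux/jiffies.h>

static bool demo_wait_for_reset(struct completion *done)
{
	unsigned long left;

	left = wait_for_completion_timeout(done, 60 * HZ);
	if (!left)
		return false;	/* timed out */
	return true;		/* completed, 'left' jiffies remained */
}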
diff --git a/drivers/misc/mic/host/mic_intr.c b/drivers/misc/mic/host/mic_intr.c
index d686f28..b4ca6c8 100644
--- a/drivers/misc/mic/host/mic_intr.c
+++ b/drivers/misc/mic/host/mic_intr.c
@@ -363,8 +363,6 @@
{
int rc;
- pci_msi_off(pdev);
-
/* Enable intx */
pci_intx(pdev, 1);
rc = mic_setup_callbacks(mdev);
diff --git a/drivers/misc/sram.c b/drivers/misc/sram.c
index 21181fa..eeaaf5f 100644
--- a/drivers/misc/sram.c
+++ b/drivers/misc/sram.c
@@ -69,12 +69,23 @@
INIT_LIST_HEAD(&reserve_list);
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
- virt_base = devm_ioremap_resource(&pdev->dev, res);
- if (IS_ERR(virt_base))
- return PTR_ERR(virt_base);
+ if (!res) {
+ dev_err(&pdev->dev, "found no memory resource\n");
+ return -EINVAL;
+ }
size = resource_size(res);
+ if (!devm_request_mem_region(&pdev->dev,
+ res->start, size, pdev->name)) {
+ dev_err(&pdev->dev, "could not request region for resource\n");
+ return -EBUSY;
+ }
+
+ virt_base = devm_ioremap_wc(&pdev->dev, res->start, size);
+ if (IS_ERR(virt_base))
+ return PTR_ERR(virt_base);
+
sram = devm_kzalloc(&pdev->dev, sizeof(*sram), GFP_KERNEL);
if (!sram)
return -ENOMEM;
@@ -205,7 +216,7 @@
}
#ifdef CONFIG_OF
-static struct of_device_id sram_dt_ids[] = {
+static const struct of_device_id sram_dt_ids[] = {
{ .compatible = "mmio-sram" },
{}
};
diff --git a/drivers/misc/tifm_7xx1.c b/drivers/misc/tifm_7xx1.c
index a606c89..a37a42f 100644
--- a/drivers/misc/tifm_7xx1.c
+++ b/drivers/misc/tifm_7xx1.c
@@ -236,6 +236,7 @@
{
struct tifm_adapter *fm = pci_get_drvdata(dev);
int rc;
+ unsigned long timeout;
unsigned int good_sockets = 0, bad_sockets = 0;
unsigned long flags;
unsigned char new_ids[fm->num_sockets];
@@ -272,8 +273,8 @@
if (good_sockets) {
fm->finish_me = &finish_resume;
spin_unlock_irqrestore(&fm->lock, flags);
- rc = wait_for_completion_timeout(&finish_resume, HZ);
- dev_dbg(&dev->dev, "wait returned %d\n", rc);
+ timeout = wait_for_completion_timeout(&finish_resume, HZ);
+ dev_dbg(&dev->dev, "wait returned %lu\n", timeout);
writel(TIFM_IRQ_FIFOMASK(good_sockets)
| TIFM_IRQ_CARDMASK(good_sockets),
fm->addr + FM_CLEAR_INTERRUPT_ENABLE);
diff --git a/drivers/misc/vmw_vmci/vmci_driver.c b/drivers/misc/vmw_vmci/vmci_driver.c
index 032d35cf..b823f9a 100644
--- a/drivers/misc/vmw_vmci/vmci_driver.c
+++ b/drivers/misc/vmw_vmci/vmci_driver.c
@@ -113,5 +113,5 @@
MODULE_AUTHOR("VMware, Inc.");
MODULE_DESCRIPTION("VMware Virtual Machine Communication Interface.");
-MODULE_VERSION("1.1.1.0-k");
+MODULE_VERSION("1.1.3.0-k");
MODULE_LICENSE("GPL v2");
diff --git a/drivers/misc/vmw_vmci/vmci_host.c b/drivers/misc/vmw_vmci/vmci_host.c
index 66fc992..a721b5d 100644
--- a/drivers/misc/vmw_vmci/vmci_host.c
+++ b/drivers/misc/vmw_vmci/vmci_host.c
@@ -395,6 +395,12 @@
return -EFAULT;
}
+ if (VMCI_DG_SIZE(dg) != send_info.len) {
+ vmci_ioctl_err("datagram size mismatch\n");
+ kfree(dg);
+ return -EINVAL;
+ }
+
pr_devel("Datagram dst (handle=0x%x:0x%x) src (handle=0x%x:0x%x), payload (size=%llu bytes)\n",
dg->dst.context, dg->dst.resource,
dg->src.context, dg->src.resource,
diff --git a/drivers/misc/vmw_vmci/vmci_queue_pair.c b/drivers/misc/vmw_vmci/vmci_queue_pair.c
index 35f19a6..f42d9c4 100644
--- a/drivers/misc/vmw_vmci/vmci_queue_pair.c
+++ b/drivers/misc/vmw_vmci/vmci_queue_pair.c
@@ -295,12 +295,20 @@
{
u64 i;
struct vmci_queue *queue;
- const size_t num_pages = DIV_ROUND_UP(size, PAGE_SIZE) + 1;
- const size_t pas_size = num_pages * sizeof(*queue->kernel_if->u.g.pas);
- const size_t vas_size = num_pages * sizeof(*queue->kernel_if->u.g.vas);
- const size_t queue_size =
- sizeof(*queue) + sizeof(*queue->kernel_if) +
- pas_size + vas_size;
+ size_t pas_size;
+ size_t vas_size;
+ size_t queue_size = sizeof(*queue) + sizeof(*queue->kernel_if);
+ const u64 num_pages = DIV_ROUND_UP(size, PAGE_SIZE) + 1;
+
+ if (num_pages >
+ (SIZE_MAX - queue_size) /
+ (sizeof(*queue->kernel_if->u.g.pas) +
+ sizeof(*queue->kernel_if->u.g.vas)))
+ return NULL;
+
+ pas_size = num_pages * sizeof(*queue->kernel_if->u.g.pas);
+ vas_size = num_pages * sizeof(*queue->kernel_if->u.g.vas);
+ queue_size += pas_size + vas_size;
queue = vmalloc(queue_size);
if (!queue)
@@ -615,10 +623,15 @@
static struct vmci_queue *qp_host_alloc_queue(u64 size)
{
struct vmci_queue *queue;
- const size_t num_pages = DIV_ROUND_UP(size, PAGE_SIZE) + 1;
+ size_t queue_page_size;
+ const u64 num_pages = DIV_ROUND_UP(size, PAGE_SIZE) + 1;
const size_t queue_size = sizeof(*queue) + sizeof(*(queue->kernel_if));
- const size_t queue_page_size =
- num_pages * sizeof(*queue->kernel_if->u.h.page);
+
+ if (num_pages > (SIZE_MAX - queue_size) /
+ sizeof(*queue->kernel_if->u.h.page))
+ return NULL;
+
+ queue_page_size = num_pages * sizeof(*queue->kernel_if->u.h.page);
queue = kzalloc(queue_size + queue_page_size, GFP_KERNEL);
if (queue) {
@@ -737,7 +750,8 @@
produce_q->kernel_if->num_pages, 1,
produce_q->kernel_if->u.h.header_page);
if (retval < produce_q->kernel_if->num_pages) {
- pr_warn("get_user_pages(produce) failed (retval=%d)", retval);
+ pr_debug("get_user_pages_fast(produce) failed (retval=%d)",
+ retval);
qp_release_pages(produce_q->kernel_if->u.h.header_page,
retval, false);
err = VMCI_ERROR_NO_MEM;
@@ -748,7 +762,8 @@
consume_q->kernel_if->num_pages, 1,
consume_q->kernel_if->u.h.header_page);
if (retval < consume_q->kernel_if->num_pages) {
- pr_warn("get_user_pages(consume) failed (retval=%d)", retval);
+ pr_debug("get_user_pages_fast(consume) failed (retval=%d)",
+ retval);
qp_release_pages(consume_q->kernel_if->u.h.header_page,
retval, false);
qp_release_pages(produce_q->kernel_if->u.h.header_page,
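The two qp_*_alloc_queue() hunks above insert an overflow guard before computing an allocation size from a caller-controlled page count. A hedged, standalone rendition of the same check (plain user-space C with made-up names, not the VMCI code itself):

#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/*
 * Refuse the allocation instead of letting header_size + n * elem_size
 * wrap around SIZE_MAX; assumes elem_size is non-zero.
 */
static void *alloc_header_plus_array(size_t header_size, uint64_t n,
                                     size_t elem_size)
{
        if (n > (SIZE_MAX - header_size) / elem_size)
                return NULL;
        return calloc(1, header_size + n * elem_size);
}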
diff --git a/drivers/net/ethernet/tile/tilegx.c b/drivers/net/ethernet/tile/tilegx.c
index a789a20..a3f7610 100644
--- a/drivers/net/ethernet/tile/tilegx.c
+++ b/drivers/net/ethernet/tile/tilegx.c
@@ -1123,7 +1123,7 @@
addr + i * sizeof(struct tile_net_comps);
/* If this is a network cpu, create an iqueue. */
- if (cpu_isset(cpu, network_cpus_map)) {
+ if (cpumask_test_cpu(cpu, &network_cpus_map)) {
order = get_order(NOTIF_RING_SIZE);
page = homecache_alloc_pages(GFP_KERNEL, order, cpu);
if (page == NULL) {
@@ -1299,7 +1299,7 @@
int first_ring, ring;
int instance = mpipe_instance(dev);
struct mpipe_data *md = &mpipe_data[instance];
- int network_cpus_count = cpus_weight(network_cpus_map);
+ int network_cpus_count = cpumask_weight(&network_cpus_map);
if (!hash_default) {
netdev_err(dev, "Networking requires hash_default!\n");
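The tilegx hunks replace the old cpu_isset()/cpus_weight() macros with the cpumask_test_cpu()/cpumask_weight() accessors, which take a pointer to the struct cpumask rather than the mask itself. A short hedged sketch of the new calls (illustrative helper, not driver code):

#include <linux/cpumask.h>
#include <linux/printk.h>

static int example_count_network_cpus(const struct cpumask *mask, int cpu)
{
        int n = cpumask_weight(mask);   /* number of CPUs set in the mask */

        if (cpumask_test_cpu(cpu, mask))
                pr_debug("cpu %d is one of %d network cpus\n", cpu, n);
        return n;
}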
diff --git a/drivers/of/Kconfig b/drivers/of/Kconfig
index 7bcaeec..1470b52 100644
--- a/drivers/of/Kconfig
+++ b/drivers/of/Kconfig
@@ -34,7 +34,11 @@
# Hardly any platforms need this. It is safe to select, but only do so if you
# need it.
config OF_DYNAMIC
- bool
+ bool "Support for dynamic device trees" if OF_UNITTEST
+ help
+ On some platforms, the device tree can be manipulated at runtime.
+ While this option is selected automatically on such platforms, you
+ can enable it manually to improve device tree unit test coverage.
config OF_ADDRESS
def_bool y
@@ -87,5 +91,10 @@
bool "Device Tree overlays"
select OF_DYNAMIC
select OF_RESOLVE
+ help
+ Overlays are a method to dynamically modify part of the kernel's
+ device tree with dynamically loaded data.
+ While this option is selected automatically when needed, you can
+ enable it manually to improve device tree unit test coverage.
endmenu # OF
diff --git a/drivers/of/Makefile b/drivers/of/Makefile
index 7563f36..fcacb18 100644
--- a/drivers/of/Makefile
+++ b/drivers/of/Makefile
@@ -6,8 +6,7 @@
obj-$(CONFIG_OF_ADDRESS) += address.o
obj-$(CONFIG_OF_IRQ) += irq.o
obj-$(CONFIG_OF_NET) += of_net.o
-obj-$(CONFIG_OF_UNITTEST) += of_unittest.o
-of_unittest-objs := unittest.o unittest-data/testcases.dtb.o
+obj-$(CONFIG_OF_UNITTEST) += unittest.o
obj-$(CONFIG_OF_MDIO) += of_mdio.o
obj-$(CONFIG_OF_PCI) += of_pci.o
obj-$(CONFIG_OF_PCI_IRQ) += of_pci_irq.o
@@ -16,5 +15,7 @@
obj-$(CONFIG_OF_RESOLVE) += resolver.o
obj-$(CONFIG_OF_OVERLAY) += overlay.o
+obj-$(CONFIG_OF_UNITTEST) += unittest-data/
+
CFLAGS_fdt.o = -I$(src)/../../scripts/dtc/libfdt
CFLAGS_fdt_address.o = -I$(src)/../../scripts/dtc/libfdt
diff --git a/drivers/of/base.c b/drivers/of/base.c
index 4cc06c7..a1aa0c7 100644
--- a/drivers/of/base.c
+++ b/drivers/of/base.c
@@ -2109,13 +2109,44 @@
EXPORT_SYMBOL(of_graph_parse_endpoint);
/**
+ * of_graph_get_port_by_id() - get the port matching a given id
+ * @parent: pointer to the parent device node
+ * @id: id of the port
+ *
+ * Return: A 'port' node pointer with refcount incremented. The caller
+ * has to use of_node_put() on it when done.
+ */
+struct device_node *of_graph_get_port_by_id(struct device_node *parent, u32 id)
+{
+ struct device_node *node, *port;
+
+ node = of_get_child_by_name(parent, "ports");
+ if (node)
+ parent = node;
+
+ for_each_child_of_node(parent, port) {
+ u32 port_id = 0;
+
+ if (of_node_cmp(port->name, "port") != 0)
+ continue;
+ of_property_read_u32(port, "reg", &port_id);
+ if (id == port_id)
+ break;
+ }
+
+ of_node_put(node);
+
+ return port;
+}
+EXPORT_SYMBOL(of_graph_get_port_by_id);
+
+/**
* of_graph_get_next_endpoint() - get next endpoint node
* @parent: pointer to the parent device node
* @prev: previous endpoint node, or NULL to get first
*
* Return: An 'endpoint' node pointer with refcount incremented. Refcount
- * of the passed @prev node is not decremented, the caller have to use
- * of_node_put() on it when done.
+ * of the passed @prev node is decremented.
*/
struct device_node *of_graph_get_next_endpoint(const struct device_node *parent,
struct device_node *prev)
@@ -2151,12 +2182,6 @@
if (WARN_ONCE(!port, "%s(): endpoint %s has no parent node\n",
__func__, prev->full_name))
return NULL;
-
- /*
- * Avoid dropping prev node refcount to 0 when getting the next
- * child below.
- */
- of_node_get(prev);
}
while (1) {
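The new of_graph_get_port_by_id() helper added above returns the matching 'port' node with its refcount raised (or NULL if no such port exists), so the caller is responsible for dropping it. A hedged usage sketch, assuming the declaration is exported through <linux/of_graph.h>:

#include <linux/errno.h>
#include <linux/of.h>
#include <linux/of_graph.h>

static int example_parse_port1(struct device_node *dev_node)
{
        struct device_node *port;

        port = of_graph_get_port_by_id(dev_node, 1);
        if (!port)
                return -ENODEV;

        /* ... walk the endpoints below "port@1" here ... */

        of_node_put(port);      /* balance the refcount taken by the helper */
        return 0;
}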
diff --git a/drivers/of/of_net.c b/drivers/of/of_net.c
index 73e1418..d820f3e 100644
--- a/drivers/of/of_net.c
+++ b/drivers/of/of_net.c
@@ -38,6 +38,15 @@
}
EXPORT_SYMBOL_GPL(of_get_phy_mode);
+static const void *of_get_mac_addr(struct device_node *np, const char *name)
+{
+ struct property *pp = of_find_property(np, name, NULL);
+
+ if (pp && pp->length == ETH_ALEN && is_valid_ether_addr(pp->value))
+ return pp->value;
+ return NULL;
+}
+
/**
* Search the device tree for the best MAC address to use. 'mac-address' is
* checked first, because that is supposed to contain to "most recent" MAC
@@ -58,20 +67,16 @@
*/
const void *of_get_mac_address(struct device_node *np)
{
- struct property *pp;
+ const void *addr;
- pp = of_find_property(np, "mac-address", NULL);
- if (pp && (pp->length == 6) && is_valid_ether_addr(pp->value))
- return pp->value;
+ addr = of_get_mac_addr(np, "mac-address");
+ if (addr)
+ return addr;
- pp = of_find_property(np, "local-mac-address", NULL);
- if (pp && (pp->length == 6) && is_valid_ether_addr(pp->value))
- return pp->value;
+ addr = of_get_mac_addr(np, "local-mac-address");
+ if (addr)
+ return addr;
- pp = of_find_property(np, "address", NULL);
- if (pp && (pp->length == 6) && is_valid_ether_addr(pp->value))
- return pp->value;
-
- return NULL;
+ return of_get_mac_addr(np, "address");
}
EXPORT_SYMBOL(of_get_mac_address);
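After the refactor above, of_get_mac_address() still hands back either a pointer to a valid 6-byte address found in the device tree or NULL. A hedged caller sketch (hypothetical driver helper; the random-address fallback is an assumption, not something the patch prescribes):

#include <linux/etherdevice.h>
#include <linux/of_net.h>

static void example_set_mac_from_dt(struct net_device *ndev,
                                    struct device_node *np)
{
        const void *mac = of_get_mac_address(np);

        if (mac)
                ether_addr_copy(ndev->dev_addr, mac);
        else
                eth_hw_addr_random(ndev);       /* no usable DT address */
}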
diff --git a/drivers/of/unittest-data/.gitignore b/drivers/of/unittest-data/.gitignore
new file mode 100644
index 0000000..4b3cf8b
--- /dev/null
+++ b/drivers/of/unittest-data/.gitignore
@@ -0,0 +1,2 @@
+testcases.dtb
+testcases.dtb.S
diff --git a/drivers/of/unittest-data/Makefile b/drivers/of/unittest-data/Makefile
new file mode 100644
index 0000000..1ac5cc0
--- /dev/null
+++ b/drivers/of/unittest-data/Makefile
@@ -0,0 +1,7 @@
+obj-y += testcases.dtb.o
+
+targets += testcases.dtb testcases.dtb.S
+
+.SECONDARY: \
+ $(obj)/testcases.dtb.S \
+ $(obj)/testcases.dtb
diff --git a/drivers/of/unittest-data/tests-overlay.dtsi b/drivers/of/unittest-data/tests-overlay.dtsi
index 244226c..02ba56c 100644
--- a/drivers/of/unittest-data/tests-overlay.dtsi
+++ b/drivers/of/unittest-data/tests-overlay.dtsi
@@ -4,94 +4,94 @@
overlay-node {
/* test bus */
- selftestbus: test-bus {
+ unittestbus: test-bus {
compatible = "simple-bus";
#address-cells = <1>;
#size-cells = <0>;
- selftest100: test-selftest100 {
- compatible = "selftest";
+ unittest100: test-unittest100 {
+ compatible = "unittest";
status = "okay";
reg = <100>;
};
- selftest101: test-selftest101 {
- compatible = "selftest";
+ unittest101: test-unittest101 {
+ compatible = "unittest";
status = "disabled";
reg = <101>;
};
- selftest0: test-selftest0 {
- compatible = "selftest";
+ unittest0: test-unittest0 {
+ compatible = "unittest";
status = "disabled";
reg = <0>;
};
- selftest1: test-selftest1 {
- compatible = "selftest";
+ unittest1: test-unittest1 {
+ compatible = "unittest";
status = "okay";
reg = <1>;
};
- selftest2: test-selftest2 {
- compatible = "selftest";
+ unittest2: test-unittest2 {
+ compatible = "unittest";
status = "disabled";
reg = <2>;
};
- selftest3: test-selftest3 {
- compatible = "selftest";
+ unittest3: test-unittest3 {
+ compatible = "unittest";
status = "okay";
reg = <3>;
};
- selftest5: test-selftest5 {
- compatible = "selftest";
+ unittest5: test-unittest5 {
+ compatible = "unittest";
status = "disabled";
reg = <5>;
};
- selftest6: test-selftest6 {
- compatible = "selftest";
+ unittest6: test-unittest6 {
+ compatible = "unittest";
status = "disabled";
reg = <6>;
};
- selftest7: test-selftest7 {
- compatible = "selftest";
+ unittest7: test-unittest7 {
+ compatible = "unittest";
status = "disabled";
reg = <7>;
};
- selftest8: test-selftest8 {
- compatible = "selftest";
+ unittest8: test-unittest8 {
+ compatible = "unittest";
status = "disabled";
reg = <8>;
};
i2c-test-bus {
- compatible = "selftest-i2c-bus";
+ compatible = "unittest-i2c-bus";
status = "okay";
reg = <50>;
#address-cells = <1>;
#size-cells = <0>;
- test-selftest12 {
+ test-unittest12 {
reg = <8>;
- compatible = "selftest-i2c-dev";
+ compatible = "unittest-i2c-dev";
status = "disabled";
};
- test-selftest13 {
+ test-unittest13 {
reg = <9>;
- compatible = "selftest-i2c-dev";
+ compatible = "unittest-i2c-dev";
status = "okay";
};
- test-selftest14 {
+ test-unittest14 {
reg = <10>;
- compatible = "selftest-i2c-mux";
+ compatible = "unittest-i2c-mux";
status = "okay";
#address-cells = <1>;
@@ -104,7 +104,7 @@
test-mux-dev {
reg = <32>;
- compatible = "selftest-i2c-dev";
+ compatible = "unittest-i2c-dev";
status = "okay";
};
};
@@ -116,7 +116,7 @@
/* test enable using absolute target path */
overlay0 {
fragment@0 {
- target-path = "/testcase-data/overlay-node/test-bus/test-selftest0";
+ target-path = "/testcase-data/overlay-node/test-bus/test-unittest0";
__overlay__ {
status = "okay";
};
@@ -126,7 +126,7 @@
/* test disable using absolute target path */
overlay1 {
fragment@0 {
- target-path = "/testcase-data/overlay-node/test-bus/test-selftest1";
+ target-path = "/testcase-data/overlay-node/test-bus/test-unittest1";
__overlay__ {
status = "disabled";
};
@@ -136,7 +136,7 @@
/* test enable using label */
overlay2 {
fragment@0 {
- target = <&selftest2>;
+ target = <&unittest2>;
__overlay__ {
status = "okay";
};
@@ -146,7 +146,7 @@
/* test disable using label */
overlay3 {
fragment@0 {
- target = <&selftest3>;
+ target = <&unittest3>;
__overlay__ {
status = "disabled";
};
@@ -156,15 +156,15 @@
/* test insertion of a full node */
overlay4 {
fragment@0 {
- target = <&selftestbus>;
+ target = <&unittestbus>;
__overlay__ {
/* suppress DTC warning */
#address-cells = <1>;
#size-cells = <0>;
- test-selftest4 {
- compatible = "selftest";
+ test-unittest4 {
+ compatible = "unittest";
status = "okay";
reg = <4>;
};
@@ -175,7 +175,7 @@
/* test overlay apply revert */
overlay5 {
fragment@0 {
- target-path = "/testcase-data/overlay-node/test-bus/test-selftest5";
+ target-path = "/testcase-data/overlay-node/test-bus/test-unittest5";
__overlay__ {
status = "okay";
};
@@ -185,7 +185,7 @@
/* test overlays application and removal in sequence */
overlay6 {
fragment@0 {
- target-path = "/testcase-data/overlay-node/test-bus/test-selftest6";
+ target-path = "/testcase-data/overlay-node/test-bus/test-unittest6";
__overlay__ {
status = "okay";
};
@@ -193,7 +193,7 @@
};
overlay7 {
fragment@0 {
- target-path = "/testcase-data/overlay-node/test-bus/test-selftest7";
+ target-path = "/testcase-data/overlay-node/test-bus/test-unittest7";
__overlay__ {
status = "okay";
};
@@ -203,7 +203,7 @@
/* test overlays application and removal in bad sequence */
overlay8 {
fragment@0 {
- target-path = "/testcase-data/overlay-node/test-bus/test-selftest8";
+ target-path = "/testcase-data/overlay-node/test-bus/test-unittest8";
__overlay__ {
status = "okay";
};
@@ -211,7 +211,7 @@
};
overlay9 {
fragment@0 {
- target-path = "/testcase-data/overlay-node/test-bus/test-selftest8";
+ target-path = "/testcase-data/overlay-node/test-bus/test-unittest8";
__overlay__ {
property-foo = "bar";
};
@@ -227,16 +227,16 @@
#address-cells = <1>;
#size-cells = <0>;
- test-selftest10 {
- compatible = "selftest";
+ test-unittest10 {
+ compatible = "unittest";
status = "okay";
reg = <10>;
#address-cells = <1>;
#size-cells = <0>;
- test-selftest101 {
- compatible = "selftest";
+ test-unittest101 {
+ compatible = "unittest";
status = "okay";
reg = <1>;
};
@@ -255,16 +255,16 @@
#address-cells = <1>;
#size-cells = <0>;
- test-selftest11 {
- compatible = "selftest";
+ test-unittest11 {
+ compatible = "unittest";
status = "okay";
reg = <11>;
#address-cells = <1>;
#size-cells = <0>;
- test-selftest111 {
- compatible = "selftest";
+ test-unittest111 {
+ compatible = "unittest";
status = "okay";
reg = <1>;
};
@@ -277,7 +277,7 @@
/* test enable using absolute target path (i2c) */
overlay12 {
fragment@0 {
- target-path = "/testcase-data/overlay-node/test-bus/i2c-test-bus/test-selftest12";
+ target-path = "/testcase-data/overlay-node/test-bus/i2c-test-bus/test-unittest12";
__overlay__ {
status = "okay";
};
@@ -287,7 +287,7 @@
/* test disable using absolute target path (i2c) */
overlay13 {
fragment@0 {
- target-path = "/testcase-data/overlay-node/test-bus/i2c-test-bus/test-selftest13";
+ target-path = "/testcase-data/overlay-node/test-bus/i2c-test-bus/test-unittest13";
__overlay__ {
status = "disabled";
};
@@ -301,9 +301,9 @@
__overlay__ {
#address-cells = <1>;
#size-cells = <0>;
- test-selftest15 {
+ test-unittest15 {
reg = <11>;
- compatible = "selftest-i2c-mux";
+ compatible = "unittest-i2c-mux";
status = "okay";
#address-cells = <1>;
@@ -316,7 +316,7 @@
test-mux-dev {
reg = <32>;
- compatible = "selftest-i2c-dev";
+ compatible = "unittest-i2c-dev";
status = "okay";
};
};
diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c
index 52c45c7..e844907 100644
--- a/drivers/of/unittest.c
+++ b/drivers/of/unittest.c
@@ -25,115 +25,115 @@
#include "of_private.h"
-static struct selftest_results {
+static struct unittest_results {
int passed;
int failed;
-} selftest_results;
+} unittest_results;
-#define selftest(result, fmt, ...) ({ \
+#define unittest(result, fmt, ...) ({ \
bool failed = !(result); \
if (failed) { \
- selftest_results.failed++; \
+ unittest_results.failed++; \
pr_err("FAIL %s():%i " fmt, __func__, __LINE__, ##__VA_ARGS__); \
} else { \
- selftest_results.passed++; \
+ unittest_results.passed++; \
pr_debug("pass %s():%i\n", __func__, __LINE__); \
} \
failed; \
})
-static void __init of_selftest_find_node_by_name(void)
+static void __init of_unittest_find_node_by_name(void)
{
struct device_node *np;
const char *options;
np = of_find_node_by_path("/testcase-data");
- selftest(np && !strcmp("/testcase-data", np->full_name),
+ unittest(np && !strcmp("/testcase-data", np->full_name),
"find /testcase-data failed\n");
of_node_put(np);
/* Test if trailing '/' works */
np = of_find_node_by_path("/testcase-data/");
- selftest(!np, "trailing '/' on /testcase-data/ should fail\n");
+ unittest(!np, "trailing '/' on /testcase-data/ should fail\n");
np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
- selftest(np && !strcmp("/testcase-data/phandle-tests/consumer-a", np->full_name),
+ unittest(np && !strcmp("/testcase-data/phandle-tests/consumer-a", np->full_name),
"find /testcase-data/phandle-tests/consumer-a failed\n");
of_node_put(np);
np = of_find_node_by_path("testcase-alias");
- selftest(np && !strcmp("/testcase-data", np->full_name),
+ unittest(np && !strcmp("/testcase-data", np->full_name),
"find testcase-alias failed\n");
of_node_put(np);
/* Test if trailing '/' works on aliases */
np = of_find_node_by_path("testcase-alias/");
- selftest(!np, "trailing '/' on testcase-alias/ should fail\n");
+ unittest(!np, "trailing '/' on testcase-alias/ should fail\n");
np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
- selftest(np && !strcmp("/testcase-data/phandle-tests/consumer-a", np->full_name),
+ unittest(np && !strcmp("/testcase-data/phandle-tests/consumer-a", np->full_name),
"find testcase-alias/phandle-tests/consumer-a failed\n");
of_node_put(np);
np = of_find_node_by_path("/testcase-data/missing-path");
- selftest(!np, "non-existent path returned node %s\n", np->full_name);
+ unittest(!np, "non-existent path returned node %s\n", np->full_name);
of_node_put(np);
np = of_find_node_by_path("missing-alias");
- selftest(!np, "non-existent alias returned node %s\n", np->full_name);
+ unittest(!np, "non-existent alias returned node %s\n", np->full_name);
of_node_put(np);
np = of_find_node_by_path("testcase-alias/missing-path");
- selftest(!np, "non-existent alias with relative path returned node %s\n", np->full_name);
+ unittest(!np, "non-existent alias with relative path returned node %s\n", np->full_name);
of_node_put(np);
np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
- selftest(np && !strcmp("testoption", options),
+ unittest(np && !strcmp("testoption", options),
"option path test failed\n");
of_node_put(np);
np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
- selftest(np && !strcmp("test/option", options),
+ unittest(np && !strcmp("test/option", options),
"option path test, subcase #1 failed\n");
of_node_put(np);
np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options);
- selftest(np && !strcmp("test/option", options),
+ unittest(np && !strcmp("test/option", options),
"option path test, subcase #2 failed\n");
of_node_put(np);
np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
- selftest(np, "NULL option path test failed\n");
+ unittest(np, "NULL option path test failed\n");
of_node_put(np);
np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
&options);
- selftest(np && !strcmp("testaliasoption", options),
+ unittest(np && !strcmp("testaliasoption", options),
"option alias path test failed\n");
of_node_put(np);
np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
&options);
- selftest(np && !strcmp("test/alias/option", options),
+ unittest(np && !strcmp("test/alias/option", options),
"option alias path test, subcase #1 failed\n");
of_node_put(np);
np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
- selftest(np, "NULL option alias path test failed\n");
+ unittest(np, "NULL option alias path test failed\n");
of_node_put(np);
options = "testoption";
np = of_find_node_opts_by_path("testcase-alias", &options);
- selftest(np && !options, "option clearing test failed\n");
+ unittest(np && !options, "option clearing test failed\n");
of_node_put(np);
options = "testoption";
np = of_find_node_opts_by_path("/", &options);
- selftest(np && !options, "option clearing root node test failed\n");
+ unittest(np && !options, "option clearing root node test failed\n");
of_node_put(np);
}
-static void __init of_selftest_dynamic(void)
+static void __init of_unittest_dynamic(void)
{
struct device_node *np;
struct property *prop;
@@ -147,7 +147,7 @@
/* Array of 4 properties for the purpose of testing */
prop = kzalloc(sizeof(*prop) * 4, GFP_KERNEL);
if (!prop) {
- selftest(0, "kzalloc() failed\n");
+ unittest(0, "kzalloc() failed\n");
return;
}
@@ -155,20 +155,20 @@
prop->name = "new-property";
prop->value = "new-property-data";
prop->length = strlen(prop->value);
- selftest(of_add_property(np, prop) == 0, "Adding a new property failed\n");
+ unittest(of_add_property(np, prop) == 0, "Adding a new property failed\n");
/* Try to add an existing property - should fail */
prop++;
prop->name = "new-property";
prop->value = "new-property-data-should-fail";
prop->length = strlen(prop->value);
- selftest(of_add_property(np, prop) != 0,
+ unittest(of_add_property(np, prop) != 0,
"Adding an existing property should have failed\n");
/* Try to modify an existing property - should pass */
prop->value = "modify-property-data-should-pass";
prop->length = strlen(prop->value);
- selftest(of_update_property(np, prop) == 0,
+ unittest(of_update_property(np, prop) == 0,
"Updating an existing property should have passed\n");
/* Try to modify non-existent property - should pass*/
@@ -176,11 +176,11 @@
prop->name = "modify-property";
prop->value = "modify-missing-property-data-should-pass";
prop->length = strlen(prop->value);
- selftest(of_update_property(np, prop) == 0,
+ unittest(of_update_property(np, prop) == 0,
"Updating a missing property should have passed\n");
/* Remove property - should pass */
- selftest(of_remove_property(np, prop) == 0,
+ unittest(of_remove_property(np, prop) == 0,
"Removing a property should have passed\n");
/* Adding very large property - should pass */
@@ -188,13 +188,13 @@
prop->name = "large-property-PAGE_SIZEx8";
prop->length = PAGE_SIZE * 8;
prop->value = kzalloc(prop->length, GFP_KERNEL);
- selftest(prop->value != NULL, "Unable to allocate large buffer\n");
+ unittest(prop->value != NULL, "Unable to allocate large buffer\n");
if (prop->value)
- selftest(of_add_property(np, prop) == 0,
+ unittest(of_add_property(np, prop) == 0,
"Adding a large property should have passed\n");
}
-static int __init of_selftest_check_node_linkage(struct device_node *np)
+static int __init of_unittest_check_node_linkage(struct device_node *np)
{
struct device_node *child;
int count = 0, rc;
@@ -206,7 +206,7 @@
return -EINVAL;
}
- rc = of_selftest_check_node_linkage(child);
+ rc = of_unittest_check_node_linkage(child);
if (rc < 0)
return rc;
count += rc;
@@ -215,7 +215,7 @@
return count + 1;
}
-static void __init of_selftest_check_tree_linkage(void)
+static void __init of_unittest_check_tree_linkage(void)
{
struct device_node *np;
int allnode_count = 0, child_count;
@@ -225,11 +225,12 @@
for_each_of_allnodes(np)
allnode_count++;
- child_count = of_selftest_check_node_linkage(of_root);
+ child_count = of_unittest_check_node_linkage(of_root);
- selftest(child_count > 0, "Device node data structure is corrupted\n");
- selftest(child_count == allnode_count, "allnodes list size (%i) doesn't match"
- "sibling lists size (%i)\n", allnode_count, child_count);
+ unittest(child_count > 0, "Device node data structure is corrupted\n");
+ unittest(child_count == allnode_count,
+ "allnodes list size (%i) doesn't match sibling lists size (%i)\n",
+ allnode_count, child_count);
pr_debug("allnodes list size (%i); sibling lists size (%i)\n", allnode_count, child_count);
}
@@ -239,7 +240,7 @@
};
static DEFINE_HASHTABLE(phandle_ht, 8);
-static void __init of_selftest_check_phandles(void)
+static void __init of_unittest_check_phandles(void)
{
struct device_node *np;
struct node_hash *nh;
@@ -267,7 +268,7 @@
hash_add(phandle_ht, &nh->node, np->phandle);
phandle_count++;
}
- selftest(dup_count == 0, "Found %i duplicates in %i phandles\n",
+ unittest(dup_count == 0, "Found %i duplicates in %i phandles\n",
dup_count, phandle_count);
/* Clean up */
@@ -277,7 +278,7 @@
}
}
-static void __init of_selftest_parse_phandle_with_args(void)
+static void __init of_unittest_parse_phandle_with_args(void)
{
struct device_node *np;
struct of_phandle_args args;
@@ -290,10 +291,11 @@
}
rc = of_count_phandle_with_args(np, "phandle-list", "#phandle-cells");
- selftest(rc == 7, "of_count_phandle_with_args() returned %i, expected 7\n", rc);
+ unittest(rc == 7, "of_count_phandle_with_args() returned %i, expected 7\n", rc);
for (i = 0; i < 8; i++) {
bool passed = true;
+
rc = of_parse_phandle_with_args(np, "phandle-list",
"#phandle-cells", i, &args);
@@ -342,44 +344,44 @@
passed = false;
}
- selftest(passed, "index %i - data error on node %s rc=%i\n",
+ unittest(passed, "index %i - data error on node %s rc=%i\n",
i, args.np->full_name, rc);
}
/* Check for missing list property */
rc = of_parse_phandle_with_args(np, "phandle-list-missing",
"#phandle-cells", 0, &args);
- selftest(rc == -ENOENT, "expected:%i got:%i\n", -ENOENT, rc);
+ unittest(rc == -ENOENT, "expected:%i got:%i\n", -ENOENT, rc);
rc = of_count_phandle_with_args(np, "phandle-list-missing",
"#phandle-cells");
- selftest(rc == -ENOENT, "expected:%i got:%i\n", -ENOENT, rc);
+ unittest(rc == -ENOENT, "expected:%i got:%i\n", -ENOENT, rc);
/* Check for missing cells property */
rc = of_parse_phandle_with_args(np, "phandle-list",
"#phandle-cells-missing", 0, &args);
- selftest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
+ unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
rc = of_count_phandle_with_args(np, "phandle-list",
"#phandle-cells-missing");
- selftest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
+ unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
/* Check for bad phandle in list */
rc = of_parse_phandle_with_args(np, "phandle-list-bad-phandle",
"#phandle-cells", 0, &args);
- selftest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
+ unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
rc = of_count_phandle_with_args(np, "phandle-list-bad-phandle",
"#phandle-cells");
- selftest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
+ unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
/* Check for incorrectly formed argument list */
rc = of_parse_phandle_with_args(np, "phandle-list-bad-args",
"#phandle-cells", 1, &args);
- selftest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
+ unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
rc = of_count_phandle_with_args(np, "phandle-list-bad-args",
"#phandle-cells");
- selftest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
+ unittest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
}
-static void __init of_selftest_property_string(void)
+static void __init of_unittest_property_string(void)
{
const char *strings[4];
struct device_node *np;
@@ -392,79 +394,79 @@
}
rc = of_property_match_string(np, "phandle-list-names", "first");
- selftest(rc == 0, "first expected:0 got:%i\n", rc);
+ unittest(rc == 0, "first expected:0 got:%i\n", rc);
rc = of_property_match_string(np, "phandle-list-names", "second");
- selftest(rc == 1, "second expected:1 got:%i\n", rc);
+ unittest(rc == 1, "second expected:1 got:%i\n", rc);
rc = of_property_match_string(np, "phandle-list-names", "third");
- selftest(rc == 2, "third expected:2 got:%i\n", rc);
+ unittest(rc == 2, "third expected:2 got:%i\n", rc);
rc = of_property_match_string(np, "phandle-list-names", "fourth");
- selftest(rc == -ENODATA, "unmatched string; rc=%i\n", rc);
+ unittest(rc == -ENODATA, "unmatched string; rc=%i\n", rc);
rc = of_property_match_string(np, "missing-property", "blah");
- selftest(rc == -EINVAL, "missing property; rc=%i\n", rc);
+ unittest(rc == -EINVAL, "missing property; rc=%i\n", rc);
rc = of_property_match_string(np, "empty-property", "blah");
- selftest(rc == -ENODATA, "empty property; rc=%i\n", rc);
+ unittest(rc == -ENODATA, "empty property; rc=%i\n", rc);
rc = of_property_match_string(np, "unterminated-string", "blah");
- selftest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc);
+ unittest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc);
/* of_property_count_strings() tests */
rc = of_property_count_strings(np, "string-property");
- selftest(rc == 1, "Incorrect string count; rc=%i\n", rc);
+ unittest(rc == 1, "Incorrect string count; rc=%i\n", rc);
rc = of_property_count_strings(np, "phandle-list-names");
- selftest(rc == 3, "Incorrect string count; rc=%i\n", rc);
+ unittest(rc == 3, "Incorrect string count; rc=%i\n", rc);
rc = of_property_count_strings(np, "unterminated-string");
- selftest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc);
+ unittest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc);
rc = of_property_count_strings(np, "unterminated-string-list");
- selftest(rc == -EILSEQ, "unterminated string array; rc=%i\n", rc);
+ unittest(rc == -EILSEQ, "unterminated string array; rc=%i\n", rc);
/* of_property_read_string_index() tests */
rc = of_property_read_string_index(np, "string-property", 0, strings);
- selftest(rc == 0 && !strcmp(strings[0], "foobar"), "of_property_read_string_index() failure; rc=%i\n", rc);
+ unittest(rc == 0 && !strcmp(strings[0], "foobar"), "of_property_read_string_index() failure; rc=%i\n", rc);
strings[0] = NULL;
rc = of_property_read_string_index(np, "string-property", 1, strings);
- selftest(rc == -ENODATA && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
+ unittest(rc == -ENODATA && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
rc = of_property_read_string_index(np, "phandle-list-names", 0, strings);
- selftest(rc == 0 && !strcmp(strings[0], "first"), "of_property_read_string_index() failure; rc=%i\n", rc);
+ unittest(rc == 0 && !strcmp(strings[0], "first"), "of_property_read_string_index() failure; rc=%i\n", rc);
rc = of_property_read_string_index(np, "phandle-list-names", 1, strings);
- selftest(rc == 0 && !strcmp(strings[0], "second"), "of_property_read_string_index() failure; rc=%i\n", rc);
+ unittest(rc == 0 && !strcmp(strings[0], "second"), "of_property_read_string_index() failure; rc=%i\n", rc);
rc = of_property_read_string_index(np, "phandle-list-names", 2, strings);
- selftest(rc == 0 && !strcmp(strings[0], "third"), "of_property_read_string_index() failure; rc=%i\n", rc);
+ unittest(rc == 0 && !strcmp(strings[0], "third"), "of_property_read_string_index() failure; rc=%i\n", rc);
strings[0] = NULL;
rc = of_property_read_string_index(np, "phandle-list-names", 3, strings);
- selftest(rc == -ENODATA && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
+ unittest(rc == -ENODATA && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
strings[0] = NULL;
rc = of_property_read_string_index(np, "unterminated-string", 0, strings);
- selftest(rc == -EILSEQ && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
+ unittest(rc == -EILSEQ && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
rc = of_property_read_string_index(np, "unterminated-string-list", 0, strings);
- selftest(rc == 0 && !strcmp(strings[0], "first"), "of_property_read_string_index() failure; rc=%i\n", rc);
+ unittest(rc == 0 && !strcmp(strings[0], "first"), "of_property_read_string_index() failure; rc=%i\n", rc);
strings[0] = NULL;
rc = of_property_read_string_index(np, "unterminated-string-list", 2, strings); /* should fail */
- selftest(rc == -EILSEQ && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
+ unittest(rc == -EILSEQ && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
strings[1] = NULL;
/* of_property_read_string_array() tests */
rc = of_property_read_string_array(np, "string-property", strings, 4);
- selftest(rc == 1, "Incorrect string count; rc=%i\n", rc);
+ unittest(rc == 1, "Incorrect string count; rc=%i\n", rc);
rc = of_property_read_string_array(np, "phandle-list-names", strings, 4);
- selftest(rc == 3, "Incorrect string count; rc=%i\n", rc);
+ unittest(rc == 3, "Incorrect string count; rc=%i\n", rc);
rc = of_property_read_string_array(np, "unterminated-string", strings, 4);
- selftest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc);
+ unittest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc);
/* -- An incorrectly formed string should cause a failure */
rc = of_property_read_string_array(np, "unterminated-string-list", strings, 4);
- selftest(rc == -EILSEQ, "unterminated string array; rc=%i\n", rc);
+ unittest(rc == -EILSEQ, "unterminated string array; rc=%i\n", rc);
/* -- parsing the correctly formed strings should still work: */
strings[2] = NULL;
rc = of_property_read_string_array(np, "unterminated-string-list", strings, 2);
- selftest(rc == 2 && strings[2] == NULL, "of_property_read_string_array() failure; rc=%i\n", rc);
+ unittest(rc == 2 && strings[2] == NULL, "of_property_read_string_array() failure; rc=%i\n", rc);
strings[1] = NULL;
rc = of_property_read_string_array(np, "phandle-list-names", strings, 1);
- selftest(rc == 1 && strings[1] == NULL, "Overwrote end of string array; rc=%i, str='%s'\n", rc, strings[1]);
+ unittest(rc == 1 && strings[1] == NULL, "Overwrote end of string array; rc=%i, str='%s'\n", rc, strings[1]);
}
#define propcmp(p1, p2) (((p1)->length == (p2)->length) && \
(p1)->value && (p2)->value && \
!memcmp((p1)->value, (p2)->value, (p1)->length) && \
!strcmp((p1)->name, (p2)->name))
-static void __init of_selftest_property_copy(void)
+static void __init of_unittest_property_copy(void)
{
#ifdef CONFIG_OF_DYNAMIC
struct property p1 = { .name = "p1", .length = 0, .value = "" };
@@ -472,20 +474,20 @@
struct property *new;
new = __of_prop_dup(&p1, GFP_KERNEL);
- selftest(new && propcmp(&p1, new), "empty property didn't copy correctly\n");
+ unittest(new && propcmp(&p1, new), "empty property didn't copy correctly\n");
kfree(new->value);
kfree(new->name);
kfree(new);
new = __of_prop_dup(&p2, GFP_KERNEL);
- selftest(new && propcmp(&p2, new), "non-empty property didn't copy correctly\n");
+ unittest(new && propcmp(&p2, new), "non-empty property didn't copy correctly\n");
kfree(new->value);
kfree(new->name);
kfree(new);
#endif
}
-static void __init of_selftest_changeset(void)
+static void __init of_unittest_changeset(void)
{
#ifdef CONFIG_OF_DYNAMIC
struct property *ppadd, padd = { .name = "prop-add", .length = 0, .value = "" };
@@ -495,51 +497,51 @@
struct of_changeset chgset;
n1 = __of_node_dup(NULL, "/testcase-data/changeset/n1");
- selftest(n1, "testcase setup failure\n");
+ unittest(n1, "testcase setup failure\n");
n2 = __of_node_dup(NULL, "/testcase-data/changeset/n2");
- selftest(n2, "testcase setup failure\n");
+ unittest(n2, "testcase setup failure\n");
n21 = __of_node_dup(NULL, "%s/%s", "/testcase-data/changeset/n2", "n21");
- selftest(n21, "testcase setup failure %p\n", n21);
+ unittest(n21, "testcase setup failure %p\n", n21);
nremove = of_find_node_by_path("/testcase-data/changeset/node-remove");
- selftest(nremove, "testcase setup failure\n");
+ unittest(nremove, "testcase setup failure\n");
ppadd = __of_prop_dup(&padd, GFP_KERNEL);
- selftest(ppadd, "testcase setup failure\n");
+ unittest(ppadd, "testcase setup failure\n");
ppupdate = __of_prop_dup(&pupdate, GFP_KERNEL);
- selftest(ppupdate, "testcase setup failure\n");
+ unittest(ppupdate, "testcase setup failure\n");
parent = nremove->parent;
n1->parent = parent;
n2->parent = parent;
n21->parent = n2;
n2->child = n21;
ppremove = of_find_property(parent, "prop-remove", NULL);
- selftest(ppremove, "failed to find removal prop");
+ unittest(ppremove, "failed to find removal prop");
of_changeset_init(&chgset);
- selftest(!of_changeset_attach_node(&chgset, n1), "fail attach n1\n");
- selftest(!of_changeset_attach_node(&chgset, n2), "fail attach n2\n");
- selftest(!of_changeset_detach_node(&chgset, nremove), "fail remove node\n");
- selftest(!of_changeset_attach_node(&chgset, n21), "fail attach n21\n");
- selftest(!of_changeset_add_property(&chgset, parent, ppadd), "fail add prop\n");
- selftest(!of_changeset_update_property(&chgset, parent, ppupdate), "fail update prop\n");
- selftest(!of_changeset_remove_property(&chgset, parent, ppremove), "fail remove prop\n");
+ unittest(!of_changeset_attach_node(&chgset, n1), "fail attach n1\n");
+ unittest(!of_changeset_attach_node(&chgset, n2), "fail attach n2\n");
+ unittest(!of_changeset_detach_node(&chgset, nremove), "fail remove node\n");
+ unittest(!of_changeset_attach_node(&chgset, n21), "fail attach n21\n");
+ unittest(!of_changeset_add_property(&chgset, parent, ppadd), "fail add prop\n");
+ unittest(!of_changeset_update_property(&chgset, parent, ppupdate), "fail update prop\n");
+ unittest(!of_changeset_remove_property(&chgset, parent, ppremove), "fail remove prop\n");
mutex_lock(&of_mutex);
- selftest(!of_changeset_apply(&chgset), "apply failed\n");
+ unittest(!of_changeset_apply(&chgset), "apply failed\n");
mutex_unlock(&of_mutex);
/* Make sure node names are constructed correctly */
- selftest((np = of_find_node_by_path("/testcase-data/changeset/n2/n21")),
+ unittest((np = of_find_node_by_path("/testcase-data/changeset/n2/n21")),
"'%s' not added\n", n21->full_name);
of_node_put(np);
mutex_lock(&of_mutex);
- selftest(!of_changeset_revert(&chgset), "revert failed\n");
+ unittest(!of_changeset_revert(&chgset), "revert failed\n");
mutex_unlock(&of_mutex);
of_changeset_destroy(&chgset);
#endif
}
-static void __init of_selftest_parse_interrupts(void)
+static void __init of_unittest_parse_interrupts(void)
{
struct device_node *np;
struct of_phandle_args args;
@@ -553,6 +555,7 @@
for (i = 0; i < 4; i++) {
bool passed = true;
+
args.args_count = 0;
rc = of_irq_parse_one(np, i, &args);
@@ -560,7 +563,7 @@
passed &= (args.args_count == 1);
passed &= (args.args[0] == (i + 1));
- selftest(passed, "index %i - data error on node %s rc=%i\n",
+ unittest(passed, "index %i - data error on node %s rc=%i\n",
i, args.np->full_name, rc);
}
of_node_put(np);
@@ -573,6 +576,7 @@
for (i = 0; i < 4; i++) {
bool passed = true;
+
args.args_count = 0;
rc = of_irq_parse_one(np, i, &args);
@@ -605,13 +609,13 @@
default:
passed = false;
}
- selftest(passed, "index %i - data error on node %s rc=%i\n",
+ unittest(passed, "index %i - data error on node %s rc=%i\n",
i, args.np->full_name, rc);
}
of_node_put(np);
}
-static void __init of_selftest_parse_interrupts_extended(void)
+static void __init of_unittest_parse_interrupts_extended(void)
{
struct device_node *np;
struct of_phandle_args args;
@@ -625,6 +629,7 @@
for (i = 0; i < 7; i++) {
bool passed = true;
+
rc = of_irq_parse_one(np, i, &args);
/* Test the values from tests-phandle.dtsi */
@@ -674,13 +679,13 @@
passed = false;
}
- selftest(passed, "index %i - data error on node %s rc=%i\n",
+ unittest(passed, "index %i - data error on node %s rc=%i\n",
i, args.np->full_name, rc);
}
of_node_put(np);
}
-static struct of_device_id match_node_table[] = {
+static const struct of_device_id match_node_table[] = {
{ .data = "A", .name = "name0", }, /* Name alone is lowest priority */
{ .data = "B", .type = "type1", }, /* followed by type alone */
@@ -715,7 +720,7 @@
{ .path = "/testcase-data/match-node/name9", .data = "K", },
};
-static void __init of_selftest_match_node(void)
+static void __init of_unittest_match_node(void)
{
struct device_node *np;
const struct of_device_id *match;
@@ -724,37 +729,37 @@
for (i = 0; i < ARRAY_SIZE(match_node_tests); i++) {
np = of_find_node_by_path(match_node_tests[i].path);
if (!np) {
- selftest(0, "missing testcase node %s\n",
+ unittest(0, "missing testcase node %s\n",
match_node_tests[i].path);
continue;
}
match = of_match_node(match_node_table, np);
if (!match) {
- selftest(0, "%s didn't match anything\n",
+ unittest(0, "%s didn't match anything\n",
match_node_tests[i].path);
continue;
}
if (strcmp(match->data, match_node_tests[i].data) != 0) {
- selftest(0, "%s got wrong match. expected %s, got %s\n",
+ unittest(0, "%s got wrong match. expected %s, got %s\n",
match_node_tests[i].path, match_node_tests[i].data,
(const char *)match->data);
continue;
}
- selftest(1, "passed");
+ unittest(1, "passed");
}
}
-struct device test_bus = {
- .init_name = "unittest-bus",
+static const struct platform_device_info test_bus_info = {
+ .name = "unittest-bus",
};
-static void __init of_selftest_platform_populate(void)
+static void __init of_unittest_platform_populate(void)
{
int irq, rc;
struct device_node *np, *child, *grandchild;
- struct platform_device *pdev;
- struct of_device_id match[] = {
+ struct platform_device *pdev, *test_bus;
+ const struct of_device_id match[] = {
{ .compatible = "test-device", },
{}
};
@@ -765,43 +770,47 @@
/* Test that a missing irq domain returns -EPROBE_DEFER */
np = of_find_node_by_path("/testcase-data/testcase-device1");
pdev = of_find_device_by_node(np);
- selftest(pdev, "device 1 creation failed\n");
+ unittest(pdev, "device 1 creation failed\n");
irq = platform_get_irq(pdev, 0);
- selftest(irq == -EPROBE_DEFER, "device deferred probe failed - %d\n", irq);
+ unittest(irq == -EPROBE_DEFER, "device deferred probe failed - %d\n", irq);
/* Test that a parsing failure does not return -EPROBE_DEFER */
np = of_find_node_by_path("/testcase-data/testcase-device2");
pdev = of_find_device_by_node(np);
- selftest(pdev, "device 2 creation failed\n");
+ unittest(pdev, "device 2 creation failed\n");
irq = platform_get_irq(pdev, 0);
- selftest(irq < 0 && irq != -EPROBE_DEFER, "device parsing error failed - %d\n", irq);
+ unittest(irq < 0 && irq != -EPROBE_DEFER, "device parsing error failed - %d\n", irq);
- if (selftest(np = of_find_node_by_path("/testcase-data/platform-tests"),
- "No testcase data in device tree\n"));
+ np = of_find_node_by_path("/testcase-data/platform-tests");
+ unittest(np, "No testcase data in device tree\n");
+ if (!np)
return;
- if (selftest(!(rc = device_register(&test_bus)),
- "testbus registration failed; rc=%i\n", rc));
+ test_bus = platform_device_register_full(&test_bus_info);
+ rc = PTR_ERR_OR_ZERO(test_bus);
+ unittest(!rc, "testbus registration failed; rc=%i\n", rc);
+ if (rc)
return;
+ test_bus->dev.of_node = np;
+ of_platform_populate(np, match, NULL, &test_bus->dev);
for_each_child_of_node(np, child) {
- of_platform_populate(child, match, NULL, &test_bus);
for_each_child_of_node(child, grandchild)
- selftest(of_find_device_by_node(grandchild),
+ unittest(of_find_device_by_node(grandchild),
"Could not create device for node '%s'\n",
grandchild->name);
}
- of_platform_depopulate(&test_bus);
+ of_platform_depopulate(&test_bus->dev);
for_each_child_of_node(np, child) {
for_each_child_of_node(child, grandchild)
- selftest(!of_find_device_by_node(grandchild),
+ unittest(!of_find_device_by_node(grandchild),
"device didn't get destroyed '%s'\n",
grandchild->name);
}
- device_unregister(&test_bus);
+ platform_device_unregister(test_bus);
of_node_put(np);
}
@@ -866,13 +875,17 @@
}
/**
- * selftest_data_add - Reads, copies data from
+ * unittest_data_add - Reads, copies data from
* linked tree and attaches it to the live tree
*/
-static int __init selftest_data_add(void)
+static int __init unittest_data_add(void)
{
- void *selftest_data;
- struct device_node *selftest_data_node, *np;
+ void *unittest_data;
+ struct device_node *unittest_data_node, *np;
+ /*
+ * __dtb_testcases_begin[] and __dtb_testcases_end[] are magically
+ * created by cmd_dt_S_dtb in scripts/Makefile.lib
+ */
extern uint8_t __dtb_testcases_begin[];
extern uint8_t __dtb_testcases_end[];
const int size = __dtb_testcases_end - __dtb_testcases_begin;
@@ -885,27 +898,27 @@
}
/* creating copy */
- selftest_data = kmemdup(__dtb_testcases_begin, size, GFP_KERNEL);
+ unittest_data = kmemdup(__dtb_testcases_begin, size, GFP_KERNEL);
- if (!selftest_data) {
- pr_warn("%s: Failed to allocate memory for selftest_data; "
+ if (!unittest_data) {
+ pr_warn("%s: Failed to allocate memory for unittest_data; "
"not running tests\n", __func__);
return -ENOMEM;
}
- of_fdt_unflatten_tree(selftest_data, &selftest_data_node);
- if (!selftest_data_node) {
+ of_fdt_unflatten_tree(unittest_data, &unittest_data_node);
+ if (!unittest_data_node) {
pr_warn("%s: No tree to attach; not running tests\n", __func__);
return -ENODATA;
}
- of_node_set_flag(selftest_data_node, OF_DETACHED);
- rc = of_resolve_phandles(selftest_data_node);
+ of_node_set_flag(unittest_data_node, OF_DETACHED);
+ rc = of_resolve_phandles(unittest_data_node);
if (rc) {
pr_err("%s: Failed to resolve phandles (rc=%i)\n", __func__, rc);
return -EINVAL;
}
if (!of_root) {
- of_root = selftest_data_node;
+ of_root = unittest_data_node;
for_each_of_allnodes(np)
__of_attach_node_sysfs(np);
of_aliases = of_find_node_by_path("/aliases");
@@ -914,9 +927,10 @@
}
/* attach the sub-tree to live tree */
- np = selftest_data_node->child;
+ np = unittest_data_node->child;
while (np) {
struct device_node *next = np->sibling;
+
np->parent = of_root;
attach_node_and_children(np);
np = next;
@@ -926,7 +940,7 @@
#ifdef CONFIG_OF_OVERLAY
-static int selftest_probe(struct platform_device *pdev)
+static int unittest_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct device_node *np = dev->of_node;
@@ -944,7 +958,7 @@
return 0;
}
-static int selftest_remove(struct platform_device *pdev)
+static int unittest_remove(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct device_node *np = dev->of_node;
@@ -953,18 +967,18 @@
return 0;
}
-static struct of_device_id selftest_match[] = {
- { .compatible = "selftest", },
+static const struct of_device_id unittest_match[] = {
+ { .compatible = "unittest", },
{},
};
-static struct platform_driver selftest_driver = {
- .probe = selftest_probe,
- .remove = selftest_remove,
+static struct platform_driver unittest_driver = {
+ .probe = unittest_probe,
+ .remove = unittest_remove,
.driver = {
- .name = "selftest",
+ .name = "unittest",
.owner = THIS_MODULE,
- .of_match_table = of_match_ptr(selftest_match),
+ .of_match_table = of_match_ptr(unittest_match),
},
};
@@ -1046,7 +1060,7 @@
return 0;
}
-static const char *selftest_path(int nr, enum overlay_type ovtype)
+static const char *unittest_path(int nr, enum overlay_type ovtype)
{
const char *base;
static char buf[256];
@@ -1062,16 +1076,16 @@
buf[0] = '\0';
return buf;
}
- snprintf(buf, sizeof(buf) - 1, "%s/test-selftest%d", base, nr);
+ snprintf(buf, sizeof(buf) - 1, "%s/test-unittest%d", base, nr);
buf[sizeof(buf) - 1] = '\0';
return buf;
}
-static int of_selftest_device_exists(int selftest_nr, enum overlay_type ovtype)
+static int of_unittest_device_exists(int unittest_nr, enum overlay_type ovtype)
{
const char *path;
- path = selftest_path(selftest_nr, ovtype);
+ path = unittest_path(unittest_nr, ovtype);
switch (ovtype) {
case PDEV_OVERLAY:
@@ -1095,7 +1109,7 @@
static const char *bus_path = "/testcase-data/overlay-node/test-bus";
-static int of_selftest_apply_overlay(int selftest_nr, int overlay_nr,
+static int of_unittest_apply_overlay(int unittest_nr, int overlay_nr,
int *overlay_id)
{
struct device_node *np = NULL;
@@ -1103,7 +1117,7 @@
np = of_find_node_by_path(overlay_path(overlay_nr));
if (np == NULL) {
- selftest(0, "could not find overlay node @\"%s\"\n",
+ unittest(0, "could not find overlay node @\"%s\"\n",
overlay_path(overlay_nr));
ret = -EINVAL;
goto out;
@@ -1111,7 +1125,7 @@
ret = of_overlay_create(np);
if (ret < 0) {
- selftest(0, "could not create overlay from \"%s\"\n",
+ unittest(0, "could not create overlay from \"%s\"\n",
overlay_path(overlay_nr));
goto out;
}
@@ -1129,31 +1143,31 @@
}
/* apply an overlay while checking before and after states */
-static int of_selftest_apply_overlay_check(int overlay_nr, int selftest_nr,
+static int of_unittest_apply_overlay_check(int overlay_nr, int unittest_nr,
int before, int after, enum overlay_type ovtype)
{
int ret;
- /* selftest device must not be in before state */
- if (of_selftest_device_exists(selftest_nr, ovtype) != before) {
- selftest(0, "overlay @\"%s\" with device @\"%s\" %s\n",
+ /* unittest device must not be in before state */
+ if (of_unittest_device_exists(unittest_nr, ovtype) != before) {
+ unittest(0, "overlay @\"%s\" with device @\"%s\" %s\n",
overlay_path(overlay_nr),
- selftest_path(selftest_nr, ovtype),
+ unittest_path(unittest_nr, ovtype),
!before ? "enabled" : "disabled");
return -EINVAL;
}
- ret = of_selftest_apply_overlay(overlay_nr, selftest_nr, NULL);
+ ret = of_unittest_apply_overlay(overlay_nr, unittest_nr, NULL);
if (ret != 0) {
- /* of_selftest_apply_overlay already called selftest() */
+ /* of_unittest_apply_overlay already called unittest() */
return ret;
}
- /* selftest device must be to set to after state */
- if (of_selftest_device_exists(selftest_nr, ovtype) != after) {
- selftest(0, "overlay @\"%s\" failed to create @\"%s\" %s\n",
+ /* unittest device must be to set to after state */
+ if (of_unittest_device_exists(unittest_nr, ovtype) != after) {
+ unittest(0, "overlay @\"%s\" failed to create @\"%s\" %s\n",
overlay_path(overlay_nr),
- selftest_path(selftest_nr, ovtype),
+ unittest_path(unittest_nr, ovtype),
!after ? "enabled" : "disabled");
return -EINVAL;
}
@@ -1162,50 +1176,50 @@
}
/* apply an overlay and then revert it while checking before, after states */
-static int of_selftest_apply_revert_overlay_check(int overlay_nr,
- int selftest_nr, int before, int after,
+static int of_unittest_apply_revert_overlay_check(int overlay_nr,
+ int unittest_nr, int before, int after,
enum overlay_type ovtype)
{
int ret, ov_id;
- /* selftest device must be in before state */
- if (of_selftest_device_exists(selftest_nr, ovtype) != before) {
- selftest(0, "overlay @\"%s\" with device @\"%s\" %s\n",
+ /* unittest device must be in before state */
+ if (of_unittest_device_exists(unittest_nr, ovtype) != before) {
+ unittest(0, "overlay @\"%s\" with device @\"%s\" %s\n",
overlay_path(overlay_nr),
- selftest_path(selftest_nr, ovtype),
+ unittest_path(unittest_nr, ovtype),
!before ? "enabled" : "disabled");
return -EINVAL;
}
/* apply the overlay */
- ret = of_selftest_apply_overlay(overlay_nr, selftest_nr, &ov_id);
+ ret = of_unittest_apply_overlay(overlay_nr, unittest_nr, &ov_id);
if (ret != 0) {
- /* of_selftest_apply_overlay already called selftest() */
+ /* of_unittest_apply_overlay already called unittest() */
return ret;
}
- /* selftest device must be in after state */
- if (of_selftest_device_exists(selftest_nr, ovtype) != after) {
- selftest(0, "overlay @\"%s\" failed to create @\"%s\" %s\n",
+ /* unittest device must be in after state */
+ if (of_unittest_device_exists(unittest_nr, ovtype) != after) {
+ unittest(0, "overlay @\"%s\" failed to create @\"%s\" %s\n",
overlay_path(overlay_nr),
- selftest_path(selftest_nr, ovtype),
+ unittest_path(unittest_nr, ovtype),
!after ? "enabled" : "disabled");
return -EINVAL;
}
ret = of_overlay_destroy(ov_id);
if (ret != 0) {
- selftest(0, "overlay @\"%s\" failed to be destroyed @\"%s\"\n",
+ unittest(0, "overlay @\"%s\" failed to be destroyed @\"%s\"\n",
overlay_path(overlay_nr),
- selftest_path(selftest_nr, ovtype));
+ unittest_path(unittest_nr, ovtype));
return ret;
}
- /* selftest device must be again in before state */
- if (of_selftest_device_exists(selftest_nr, PDEV_OVERLAY) != before) {
- selftest(0, "overlay @\"%s\" with device @\"%s\" %s\n",
+ /* unittest device must be again in before state */
+ if (of_unittest_device_exists(unittest_nr, PDEV_OVERLAY) != before) {
+ unittest(0, "overlay @\"%s\" with device @\"%s\" %s\n",
overlay_path(overlay_nr),
- selftest_path(selftest_nr, ovtype),
+ unittest_path(unittest_nr, ovtype),
!before ? "enabled" : "disabled");
return -EINVAL;
}
@@ -1214,98 +1228,98 @@
}
/* test activation of device */
-static void of_selftest_overlay_0(void)
+static void of_unittest_overlay_0(void)
{
int ret;
/* device should enable */
- ret = of_selftest_apply_overlay_check(0, 0, 0, 1, PDEV_OVERLAY);
+ ret = of_unittest_apply_overlay_check(0, 0, 0, 1, PDEV_OVERLAY);
if (ret != 0)
return;
- selftest(1, "overlay test %d passed\n", 0);
+ unittest(1, "overlay test %d passed\n", 0);
}
/* test deactivation of device */
-static void of_selftest_overlay_1(void)
+static void of_unittest_overlay_1(void)
{
int ret;
/* device should disable */
- ret = of_selftest_apply_overlay_check(1, 1, 1, 0, PDEV_OVERLAY);
+ ret = of_unittest_apply_overlay_check(1, 1, 1, 0, PDEV_OVERLAY);
if (ret != 0)
return;
- selftest(1, "overlay test %d passed\n", 1);
+ unittest(1, "overlay test %d passed\n", 1);
}
/* test activation of device */
-static void of_selftest_overlay_2(void)
+static void of_unittest_overlay_2(void)
{
int ret;
/* device should enable */
- ret = of_selftest_apply_overlay_check(2, 2, 0, 1, PDEV_OVERLAY);
+ ret = of_unittest_apply_overlay_check(2, 2, 0, 1, PDEV_OVERLAY);
if (ret != 0)
return;
- selftest(1, "overlay test %d passed\n", 2);
+ unittest(1, "overlay test %d passed\n", 2);
}
/* test deactivation of device */
-static void of_selftest_overlay_3(void)
+static void of_unittest_overlay_3(void)
{
int ret;
/* device should disable */
- ret = of_selftest_apply_overlay_check(3, 3, 1, 0, PDEV_OVERLAY);
+ ret = of_unittest_apply_overlay_check(3, 3, 1, 0, PDEV_OVERLAY);
if (ret != 0)
return;
- selftest(1, "overlay test %d passed\n", 3);
+ unittest(1, "overlay test %d passed\n", 3);
}
/* test activation of a full device node */
-static void of_selftest_overlay_4(void)
+static void of_unittest_overlay_4(void)
{
int ret;
/* device should disable */
- ret = of_selftest_apply_overlay_check(4, 4, 0, 1, PDEV_OVERLAY);
+ ret = of_unittest_apply_overlay_check(4, 4, 0, 1, PDEV_OVERLAY);
if (ret != 0)
return;
- selftest(1, "overlay test %d passed\n", 4);
+ unittest(1, "overlay test %d passed\n", 4);
}
/* test overlay apply/revert sequence */
-static void of_selftest_overlay_5(void)
+static void of_unittest_overlay_5(void)
{
int ret;
/* device should disable */
- ret = of_selftest_apply_revert_overlay_check(5, 5, 0, 1, PDEV_OVERLAY);
+ ret = of_unittest_apply_revert_overlay_check(5, 5, 0, 1, PDEV_OVERLAY);
if (ret != 0)
return;
- selftest(1, "overlay test %d passed\n", 5);
+ unittest(1, "overlay test %d passed\n", 5);
}
/* test overlay application in sequence */
-static void of_selftest_overlay_6(void)
+static void of_unittest_overlay_6(void)
{
struct device_node *np;
int ret, i, ov_id[2];
- int overlay_nr = 6, selftest_nr = 6;
+ int overlay_nr = 6, unittest_nr = 6;
int before = 0, after = 1;
- /* selftest device must be in before state */
+ /* unittest device must be in before state */
for (i = 0; i < 2; i++) {
- if (of_selftest_device_exists(selftest_nr + i, PDEV_OVERLAY)
+ if (of_unittest_device_exists(unittest_nr + i, PDEV_OVERLAY)
!= before) {
- selftest(0, "overlay @\"%s\" with device @\"%s\" %s\n",
+ unittest(0, "overlay @\"%s\" with device @\"%s\" %s\n",
overlay_path(overlay_nr + i),
- selftest_path(selftest_nr + i,
+ unittest_path(unittest_nr + i,
PDEV_OVERLAY),
!before ? "enabled" : "disabled");
return;
@@ -1317,14 +1331,14 @@
np = of_find_node_by_path(overlay_path(overlay_nr + i));
if (np == NULL) {
- selftest(0, "could not find overlay node @\"%s\"\n",
+ unittest(0, "could not find overlay node @\"%s\"\n",
overlay_path(overlay_nr + i));
return;
}
ret = of_overlay_create(np);
if (ret < 0) {
- selftest(0, "could not create overlay from \"%s\"\n",
+ unittest(0, "could not create overlay from \"%s\"\n",
overlay_path(overlay_nr + i));
return;
}
@@ -1332,12 +1346,12 @@
}
for (i = 0; i < 2; i++) {
- /* selftest device must be in after state */
- if (of_selftest_device_exists(selftest_nr + i, PDEV_OVERLAY)
+ /* unittest device must be in after state */
+ if (of_unittest_device_exists(unittest_nr + i, PDEV_OVERLAY)
!= after) {
- selftest(0, "overlay @\"%s\" failed @\"%s\" %s\n",
+ unittest(0, "overlay @\"%s\" failed @\"%s\" %s\n",
overlay_path(overlay_nr + i),
- selftest_path(selftest_nr + i,
+ unittest_path(unittest_nr + i,
PDEV_OVERLAY),
!after ? "enabled" : "disabled");
return;
@@ -1347,36 +1361,36 @@
for (i = 1; i >= 0; i--) {
ret = of_overlay_destroy(ov_id[i]);
if (ret != 0) {
- selftest(0, "overlay @\"%s\" failed destroy @\"%s\"\n",
+ unittest(0, "overlay @\"%s\" failed destroy @\"%s\"\n",
overlay_path(overlay_nr + i),
- selftest_path(selftest_nr + i,
+ unittest_path(unittest_nr + i,
PDEV_OVERLAY));
return;
}
}
for (i = 0; i < 2; i++) {
- /* selftest device must be again in before state */
- if (of_selftest_device_exists(selftest_nr + i, PDEV_OVERLAY)
+ /* unittest device must be again in before state */
+ if (of_unittest_device_exists(unittest_nr + i, PDEV_OVERLAY)
!= before) {
- selftest(0, "overlay @\"%s\" with device @\"%s\" %s\n",
+ unittest(0, "overlay @\"%s\" with device @\"%s\" %s\n",
overlay_path(overlay_nr + i),
- selftest_path(selftest_nr + i,
+ unittest_path(unittest_nr + i,
PDEV_OVERLAY),
!before ? "enabled" : "disabled");
return;
}
}
- selftest(1, "overlay test %d passed\n", 6);
+ unittest(1, "overlay test %d passed\n", 6);
}
/* test overlay application in sequence */
-static void of_selftest_overlay_8(void)
+static void of_unittest_overlay_8(void)
{
struct device_node *np;
int ret, i, ov_id[2];
- int overlay_nr = 8, selftest_nr = 8;
+ int overlay_nr = 8, unittest_nr = 8;
/* we don't care about device state in this test */
@@ -1385,14 +1399,14 @@
np = of_find_node_by_path(overlay_path(overlay_nr + i));
if (np == NULL) {
- selftest(0, "could not find overlay node @\"%s\"\n",
+ unittest(0, "could not find overlay node @\"%s\"\n",
overlay_path(overlay_nr + i));
return;
}
ret = of_overlay_create(np);
if (ret < 0) {
- selftest(0, "could not create overlay from \"%s\"\n",
+ unittest(0, "could not create overlay from \"%s\"\n",
overlay_path(overlay_nr + i));
return;
}
@@ -1402,9 +1416,9 @@
/* now try to remove first overlay (it should fail) */
ret = of_overlay_destroy(ov_id[0]);
if (ret == 0) {
- selftest(0, "overlay @\"%s\" was destroyed @\"%s\"\n",
+ unittest(0, "overlay @\"%s\" was destroyed @\"%s\"\n",
overlay_path(overlay_nr + 0),
- selftest_path(selftest_nr,
+ unittest_path(unittest_nr,
PDEV_OVERLAY));
return;
}
@@ -1413,85 +1427,85 @@
for (i = 1; i >= 0; i--) {
ret = of_overlay_destroy(ov_id[i]);
if (ret != 0) {
- selftest(0, "overlay @\"%s\" not destroyed @\"%s\"\n",
+ unittest(0, "overlay @\"%s\" not destroyed @\"%s\"\n",
overlay_path(overlay_nr + i),
- selftest_path(selftest_nr,
+ unittest_path(unittest_nr,
PDEV_OVERLAY));
return;
}
}
- selftest(1, "overlay test %d passed\n", 8);
+ unittest(1, "overlay test %d passed\n", 8);
}
/* test insertion of a bus with parent devices */
-static void of_selftest_overlay_10(void)
+static void of_unittest_overlay_10(void)
{
int ret;
char *child_path;
/* device should disable */
- ret = of_selftest_apply_overlay_check(10, 10, 0, 1, PDEV_OVERLAY);
- if (selftest(ret == 0,
+ ret = of_unittest_apply_overlay_check(10, 10, 0, 1, PDEV_OVERLAY);
+ if (unittest(ret == 0,
"overlay test %d failed; overlay application\n", 10))
return;
- child_path = kasprintf(GFP_KERNEL, "%s/test-selftest101",
- selftest_path(10, PDEV_OVERLAY));
- if (selftest(child_path, "overlay test %d failed; kasprintf\n", 10))
+ child_path = kasprintf(GFP_KERNEL, "%s/test-unittest101",
+ unittest_path(10, PDEV_OVERLAY));
+ if (unittest(child_path, "overlay test %d failed; kasprintf\n", 10))
return;
ret = of_path_device_type_exists(child_path, PDEV_OVERLAY);
kfree(child_path);
- if (selftest(ret, "overlay test %d failed; no child device\n", 10))
+ if (unittest(ret, "overlay test %d failed; no child device\n", 10))
return;
}
/* test insertion of a bus with parent devices (and revert) */
-static void of_selftest_overlay_11(void)
+static void of_unittest_overlay_11(void)
{
int ret;
/* device should disable */
- ret = of_selftest_apply_revert_overlay_check(11, 11, 0, 1,
+ ret = of_unittest_apply_revert_overlay_check(11, 11, 0, 1,
PDEV_OVERLAY);
- if (selftest(ret == 0,
+ if (unittest(ret == 0,
"overlay test %d failed; overlay application\n", 11))
return;
}
#if IS_BUILTIN(CONFIG_I2C) && IS_ENABLED(CONFIG_OF_OVERLAY)
-struct selftest_i2c_bus_data {
+struct unittest_i2c_bus_data {
struct platform_device *pdev;
struct i2c_adapter adap;
};
-static int selftest_i2c_master_xfer(struct i2c_adapter *adap,
+static int unittest_i2c_master_xfer(struct i2c_adapter *adap,
struct i2c_msg *msgs, int num)
{
- struct selftest_i2c_bus_data *std = i2c_get_adapdata(adap);
+ struct unittest_i2c_bus_data *std = i2c_get_adapdata(adap);
(void)std;
return num;
}
-static u32 selftest_i2c_functionality(struct i2c_adapter *adap)
+static u32 unittest_i2c_functionality(struct i2c_adapter *adap)
{
return I2C_FUNC_I2C | I2C_FUNC_SMBUS_EMUL;
}
-static const struct i2c_algorithm selftest_i2c_algo = {
- .master_xfer = selftest_i2c_master_xfer,
- .functionality = selftest_i2c_functionality,
+static const struct i2c_algorithm unittest_i2c_algo = {
+ .master_xfer = unittest_i2c_master_xfer,
+ .functionality = unittest_i2c_functionality,
};
-static int selftest_i2c_bus_probe(struct platform_device *pdev)
+static int unittest_i2c_bus_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct device_node *np = dev->of_node;
- struct selftest_i2c_bus_data *std;
+ struct unittest_i2c_bus_data *std;
struct i2c_adapter *adap;
int ret;
@@ -1505,7 +1519,7 @@
std = devm_kzalloc(dev, sizeof(*std), GFP_KERNEL);
if (!std) {
- dev_err(dev, "Failed to allocate selftest i2c data\n");
+ dev_err(dev, "Failed to allocate unittest i2c data\n");
return -ENOMEM;
}
@@ -1518,7 +1532,7 @@
adap->nr = -1;
strlcpy(adap->name, pdev->name, sizeof(adap->name));
adap->class = I2C_CLASS_DEPRECATED;
- adap->algo = &selftest_i2c_algo;
+ adap->algo = &unittest_i2c_algo;
adap->dev.parent = dev;
adap->dev.of_node = dev->of_node;
adap->timeout = 5 * HZ;
@@ -1533,11 +1547,11 @@
return 0;
}
-static int selftest_i2c_bus_remove(struct platform_device *pdev)
+static int unittest_i2c_bus_remove(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct device_node *np = dev->of_node;
- struct selftest_i2c_bus_data *std = platform_get_drvdata(pdev);
+ struct unittest_i2c_bus_data *std = platform_get_drvdata(pdev);
dev_dbg(dev, "%s for node @%s\n", __func__, np->full_name);
i2c_del_adapter(&std->adap);
@@ -1545,21 +1559,21 @@
return 0;
}
-static struct of_device_id selftest_i2c_bus_match[] = {
- { .compatible = "selftest-i2c-bus", },
+static const struct of_device_id unittest_i2c_bus_match[] = {
+ { .compatible = "unittest-i2c-bus", },
{},
};
-static struct platform_driver selftest_i2c_bus_driver = {
- .probe = selftest_i2c_bus_probe,
- .remove = selftest_i2c_bus_remove,
+static struct platform_driver unittest_i2c_bus_driver = {
+ .probe = unittest_i2c_bus_probe,
+ .remove = unittest_i2c_bus_remove,
.driver = {
- .name = "selftest-i2c-bus",
- .of_match_table = of_match_ptr(selftest_i2c_bus_match),
+ .name = "unittest-i2c-bus",
+ .of_match_table = of_match_ptr(unittest_i2c_bus_match),
},
};
-static int selftest_i2c_dev_probe(struct i2c_client *client,
+static int unittest_i2c_dev_probe(struct i2c_client *client,
const struct i2c_device_id *id)
{
struct device *dev = &client->dev;
@@ -1575,7 +1589,7 @@
return 0;
};
-static int selftest_i2c_dev_remove(struct i2c_client *client)
+static int unittest_i2c_dev_remove(struct i2c_client *client)
{
struct device *dev = &client->dev;
struct device_node *np = client->dev.of_node;
@@ -1584,42 +1598,42 @@
return 0;
}
-static const struct i2c_device_id selftest_i2c_dev_id[] = {
- { .name = "selftest-i2c-dev" },
+static const struct i2c_device_id unittest_i2c_dev_id[] = {
+ { .name = "unittest-i2c-dev" },
{ }
};
-static struct i2c_driver selftest_i2c_dev_driver = {
+static struct i2c_driver unittest_i2c_dev_driver = {
.driver = {
- .name = "selftest-i2c-dev",
+ .name = "unittest-i2c-dev",
.owner = THIS_MODULE,
},
- .probe = selftest_i2c_dev_probe,
- .remove = selftest_i2c_dev_remove,
- .id_table = selftest_i2c_dev_id,
+ .probe = unittest_i2c_dev_probe,
+ .remove = unittest_i2c_dev_remove,
+ .id_table = unittest_i2c_dev_id,
};
#if IS_BUILTIN(CONFIG_I2C_MUX)
-struct selftest_i2c_mux_data {
+struct unittest_i2c_mux_data {
int nchans;
struct i2c_adapter *adap[];
};
-static int selftest_i2c_mux_select_chan(struct i2c_adapter *adap,
+static int unittest_i2c_mux_select_chan(struct i2c_adapter *adap,
void *client, u32 chan)
{
return 0;
}
-static int selftest_i2c_mux_probe(struct i2c_client *client,
+static int unittest_i2c_mux_probe(struct i2c_client *client,
const struct i2c_device_id *id)
{
int ret, i, nchans, size;
struct device *dev = &client->dev;
struct i2c_adapter *adap = to_i2c_adapter(dev->parent);
struct device_node *np = client->dev.of_node, *child;
- struct selftest_i2c_mux_data *stm;
+ struct unittest_i2c_mux_data *stm;
u32 reg, max_reg;
dev_dbg(dev, "%s for node @%s\n", __func__, np->full_name);
@@ -1643,7 +1657,7 @@
return -EINVAL;
}
- size = offsetof(struct selftest_i2c_mux_data, adap[nchans]);
+ size = offsetof(struct unittest_i2c_mux_data, adap[nchans]);
stm = devm_kzalloc(dev, size, GFP_KERNEL);
if (!stm) {
dev_err(dev, "Out of memory\n");
@@ -1652,7 +1666,7 @@
stm->nchans = nchans;
for (i = 0; i < nchans; i++) {
stm->adap[i] = i2c_add_mux_adapter(adap, dev, client,
- 0, i, 0, selftest_i2c_mux_select_chan, NULL);
+ 0, i, 0, unittest_i2c_mux_select_chan, NULL);
if (!stm->adap[i]) {
dev_err(dev, "Failed to register mux #%d\n", i);
for (i--; i >= 0; i--)
@@ -1666,11 +1680,11 @@
return 0;
};
-static int selftest_i2c_mux_remove(struct i2c_client *client)
+static int unittest_i2c_mux_remove(struct i2c_client *client)
{
struct device *dev = &client->dev;
struct device_node *np = client->dev.of_node;
- struct selftest_i2c_mux_data *stm = i2c_get_clientdata(client);
+ struct unittest_i2c_mux_data *stm = i2c_get_clientdata(client);
int i;
dev_dbg(dev, "%s for node @%s\n", __func__, np->full_name);
@@ -1679,166 +1693,166 @@
return 0;
}
-static const struct i2c_device_id selftest_i2c_mux_id[] = {
- { .name = "selftest-i2c-mux" },
+static const struct i2c_device_id unittest_i2c_mux_id[] = {
+ { .name = "unittest-i2c-mux" },
{ }
};
-static struct i2c_driver selftest_i2c_mux_driver = {
+static struct i2c_driver unittest_i2c_mux_driver = {
.driver = {
- .name = "selftest-i2c-mux",
+ .name = "unittest-i2c-mux",
.owner = THIS_MODULE,
},
- .probe = selftest_i2c_mux_probe,
- .remove = selftest_i2c_mux_remove,
- .id_table = selftest_i2c_mux_id,
+ .probe = unittest_i2c_mux_probe,
+ .remove = unittest_i2c_mux_remove,
+ .id_table = unittest_i2c_mux_id,
};
#endif
-static int of_selftest_overlay_i2c_init(void)
+static int of_unittest_overlay_i2c_init(void)
{
int ret;
- ret = i2c_add_driver(&selftest_i2c_dev_driver);
- if (selftest(ret == 0,
- "could not register selftest i2c device driver\n"))
+ ret = i2c_add_driver(&unittest_i2c_dev_driver);
+ if (unittest(ret == 0,
+ "could not register unittest i2c device driver\n"))
return ret;
- ret = platform_driver_register(&selftest_i2c_bus_driver);
- if (selftest(ret == 0,
- "could not register selftest i2c bus driver\n"))
+ ret = platform_driver_register(&unittest_i2c_bus_driver);
+ if (unittest(ret == 0,
+ "could not register unittest i2c bus driver\n"))
return ret;
#if IS_BUILTIN(CONFIG_I2C_MUX)
- ret = i2c_add_driver(&selftest_i2c_mux_driver);
- if (selftest(ret == 0,
- "could not register selftest i2c mux driver\n"))
+ ret = i2c_add_driver(&unittest_i2c_mux_driver);
+ if (unittest(ret == 0,
+ "could not register unittest i2c mux driver\n"))
return ret;
#endif
return 0;
}
-static void of_selftest_overlay_i2c_cleanup(void)
+static void of_unittest_overlay_i2c_cleanup(void)
{
#if IS_BUILTIN(CONFIG_I2C_MUX)
- i2c_del_driver(&selftest_i2c_mux_driver);
+ i2c_del_driver(&unittest_i2c_mux_driver);
#endif
- platform_driver_unregister(&selftest_i2c_bus_driver);
- i2c_del_driver(&selftest_i2c_dev_driver);
+ platform_driver_unregister(&unittest_i2c_bus_driver);
+ i2c_del_driver(&unittest_i2c_dev_driver);
}
-static void of_selftest_overlay_i2c_12(void)
+static void of_unittest_overlay_i2c_12(void)
{
int ret;
/* device should enable */
- ret = of_selftest_apply_overlay_check(12, 12, 0, 1, I2C_OVERLAY);
+ ret = of_unittest_apply_overlay_check(12, 12, 0, 1, I2C_OVERLAY);
if (ret != 0)
return;
- selftest(1, "overlay test %d passed\n", 12);
+ unittest(1, "overlay test %d passed\n", 12);
}
/* test deactivation of device */
-static void of_selftest_overlay_i2c_13(void)
+static void of_unittest_overlay_i2c_13(void)
{
int ret;
/* device should disable */
- ret = of_selftest_apply_overlay_check(13, 13, 1, 0, I2C_OVERLAY);
+ ret = of_unittest_apply_overlay_check(13, 13, 1, 0, I2C_OVERLAY);
if (ret != 0)
return;
- selftest(1, "overlay test %d passed\n", 13);
+ unittest(1, "overlay test %d passed\n", 13);
}
/* just check for i2c mux existence */
-static void of_selftest_overlay_i2c_14(void)
+static void of_unittest_overlay_i2c_14(void)
{
}
-static void of_selftest_overlay_i2c_15(void)
+static void of_unittest_overlay_i2c_15(void)
{
int ret;
/* device should enable */
- ret = of_selftest_apply_overlay_check(16, 15, 0, 1, I2C_OVERLAY);
+ ret = of_unittest_apply_overlay_check(16, 15, 0, 1, I2C_OVERLAY);
if (ret != 0)
return;
- selftest(1, "overlay test %d passed\n", 15);
+ unittest(1, "overlay test %d passed\n", 15);
}
#else
-static inline void of_selftest_overlay_i2c_14(void) { }
-static inline void of_selftest_overlay_i2c_15(void) { }
+static inline void of_unittest_overlay_i2c_14(void) { }
+static inline void of_unittest_overlay_i2c_15(void) { }
#endif
-static void __init of_selftest_overlay(void)
+static void __init of_unittest_overlay(void)
{
struct device_node *bus_np = NULL;
int ret;
- ret = platform_driver_register(&selftest_driver);
+ ret = platform_driver_register(&unittest_driver);
if (ret != 0) {
- selftest(0, "could not register selftest driver\n");
+ unittest(0, "could not register unittest driver\n");
goto out;
}
bus_np = of_find_node_by_path(bus_path);
if (bus_np == NULL) {
- selftest(0, "could not find bus_path \"%s\"\n", bus_path);
+ unittest(0, "could not find bus_path \"%s\"\n", bus_path);
goto out;
}
ret = of_platform_populate(bus_np, of_default_bus_match_table,
NULL, NULL);
if (ret != 0) {
- selftest(0, "could not populate bus @ \"%s\"\n", bus_path);
+ unittest(0, "could not populate bus @ \"%s\"\n", bus_path);
goto out;
}
- if (!of_selftest_device_exists(100, PDEV_OVERLAY)) {
- selftest(0, "could not find selftest0 @ \"%s\"\n",
- selftest_path(100, PDEV_OVERLAY));
+ if (!of_unittest_device_exists(100, PDEV_OVERLAY)) {
+ unittest(0, "could not find unittest0 @ \"%s\"\n",
+ unittest_path(100, PDEV_OVERLAY));
goto out;
}
- if (of_selftest_device_exists(101, PDEV_OVERLAY)) {
- selftest(0, "selftest1 @ \"%s\" should not exist\n",
- selftest_path(101, PDEV_OVERLAY));
+ if (of_unittest_device_exists(101, PDEV_OVERLAY)) {
+ unittest(0, "unittest1 @ \"%s\" should not exist\n",
+ unittest_path(101, PDEV_OVERLAY));
goto out;
}
- selftest(1, "basic infrastructure of overlays passed");
+ unittest(1, "basic infrastructure of overlays passed");
/* tests in sequence */
- of_selftest_overlay_0();
- of_selftest_overlay_1();
- of_selftest_overlay_2();
- of_selftest_overlay_3();
- of_selftest_overlay_4();
- of_selftest_overlay_5();
- of_selftest_overlay_6();
- of_selftest_overlay_8();
+ of_unittest_overlay_0();
+ of_unittest_overlay_1();
+ of_unittest_overlay_2();
+ of_unittest_overlay_3();
+ of_unittest_overlay_4();
+ of_unittest_overlay_5();
+ of_unittest_overlay_6();
+ of_unittest_overlay_8();
- of_selftest_overlay_10();
- of_selftest_overlay_11();
+ of_unittest_overlay_10();
+ of_unittest_overlay_11();
#if IS_BUILTIN(CONFIG_I2C)
- if (selftest(of_selftest_overlay_i2c_init() == 0, "i2c init failed\n"))
+ if (unittest(of_unittest_overlay_i2c_init() == 0, "i2c init failed\n"))
goto out;
- of_selftest_overlay_i2c_12();
- of_selftest_overlay_i2c_13();
- of_selftest_overlay_i2c_14();
- of_selftest_overlay_i2c_15();
+ of_unittest_overlay_i2c_12();
+ of_unittest_overlay_i2c_13();
+ of_unittest_overlay_i2c_14();
+ of_unittest_overlay_i2c_15();
- of_selftest_overlay_i2c_cleanup();
+ of_unittest_overlay_i2c_cleanup();
#endif
out:
@@ -1846,16 +1860,16 @@
}
#else
-static inline void __init of_selftest_overlay(void) { }
+static inline void __init of_unittest_overlay(void) { }
#endif
-static int __init of_selftest(void)
+static int __init of_unittest(void)
{
struct device_node *np;
int res;
- /* adding data for selftest */
- res = selftest_data_add();
+ /* adding data for unittest */
+ res = unittest_data_add();
if (res)
return res;
if (!of_aliases)
@@ -1868,27 +1882,27 @@
}
of_node_put(np);
- pr_info("start of selftest - you will see error messages\n");
- of_selftest_check_tree_linkage();
- of_selftest_check_phandles();
- of_selftest_find_node_by_name();
- of_selftest_dynamic();
- of_selftest_parse_phandle_with_args();
- of_selftest_property_string();
- of_selftest_property_copy();
- of_selftest_changeset();
- of_selftest_parse_interrupts();
- of_selftest_parse_interrupts_extended();
- of_selftest_match_node();
- of_selftest_platform_populate();
- of_selftest_overlay();
+ pr_info("start of unittest - you will see error messages\n");
+ of_unittest_check_tree_linkage();
+ of_unittest_check_phandles();
+ of_unittest_find_node_by_name();
+ of_unittest_dynamic();
+ of_unittest_parse_phandle_with_args();
+ of_unittest_property_string();
+ of_unittest_property_copy();
+ of_unittest_changeset();
+ of_unittest_parse_interrupts();
+ of_unittest_parse_interrupts_extended();
+ of_unittest_match_node();
+ of_unittest_platform_populate();
+ of_unittest_overlay();
/* Double check linkage after removing testcase data */
- of_selftest_check_tree_linkage();
+ of_unittest_check_tree_linkage();
- pr_info("end of selftest - %i passed, %i failed\n",
- selftest_results.passed, selftest_results.failed);
+ pr_info("end of unittest - %i passed, %i failed\n",
+ unittest_results.passed, unittest_results.failed);
return 0;
}
-late_initcall(of_selftest);
+late_initcall(of_unittest);
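The rename above is mechanical, but the calling convention is easier to follow with the unittest() helper in mind: it counts a pass or a failure in unittest_results, prints the message on failure, and evaluates to true when the check failed, which is why callers can write both "if (unittest(ret == 0, ...)) return;" and "unittest(1, "... passed\n", n);". Roughly (a sketch of the macro defined earlier in drivers/of/unittest.c, not verbatim):

#define unittest(result, fmt, ...) ({ \
	bool failed = !(result); \
	if (failed) { \
		unittest_results.failed++; \
		pr_err("FAIL %s():%i " fmt, __func__, __LINE__, ##__VA_ARGS__); \
	} else { \
		unittest_results.passed++; \
		pr_debug("pass %s():%i\n", __func__, __LINE__); \
	} \
	failed; \
})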
diff --git a/drivers/parisc/ccio-dma.c b/drivers/parisc/ccio-dma.c
index 6bc1680..02ff84f 100644
--- a/drivers/parisc/ccio-dma.c
+++ b/drivers/parisc/ccio-dma.c
@@ -916,7 +916,7 @@
/* Fast path single entry scatterlists. */
if (nents == 1) {
sg_dma_address(sglist) = ccio_map_single(dev,
- (void *)sg_virt_addr(sglist), sglist->length,
+ sg_virt(sglist), sglist->length,
direction);
sg_dma_len(sglist) = sglist->length;
return 1;
@@ -983,8 +983,8 @@
BUG_ON(!dev);
ioc = GET_IOC(dev);
- DBG_RUN_SG("%s() START %d entries, %08lx,%x\n",
- __func__, nents, sg_virt_addr(sglist), sglist->length);
+ DBG_RUN_SG("%s() START %d entries, %p,%x\n",
+ __func__, nents, sg_virt(sglist), sglist->length);
#ifdef CCIO_COLLECT_STATS
ioc->usg_calls++;
diff --git a/drivers/parisc/iommu-helpers.h b/drivers/parisc/iommu-helpers.h
index 8c33491..761e77b 100644
--- a/drivers/parisc/iommu-helpers.h
+++ b/drivers/parisc/iommu-helpers.h
@@ -30,9 +30,9 @@
unsigned long vaddr;
long size;
- DBG_RUN_SG(" %d : %08lx/%05x %08lx/%05x\n", nents,
+ DBG_RUN_SG(" %d : %08lx/%05x %p/%05x\n", nents,
(unsigned long)sg_dma_address(startsg), cnt,
- sg_virt_addr(startsg), startsg->length
+ sg_virt(startsg), startsg->length
);
@@ -66,7 +66,7 @@
BUG_ON(pdirp == NULL);
- vaddr = sg_virt_addr(startsg);
+ vaddr = (unsigned long)sg_virt(startsg);
sg_dma_len(dma_sg) += startsg->length;
size = startsg->length + dma_offset;
dma_offset = 0;
@@ -113,7 +113,7 @@
*/
contig_sg = startsg;
dma_len = startsg->length;
- dma_offset = sg_virt_addr(startsg) & ~IOVP_MASK;
+ dma_offset = startsg->offset;
/* PARANOID: clear entries */
sg_dma_address(startsg) = 0;
@@ -124,14 +124,13 @@
** it's always looking one "ahead".
*/
while(--nents > 0) {
- unsigned long prevstartsg_end, startsg_end;
+ unsigned long prev_end, sg_start;
- prevstartsg_end = sg_virt_addr(startsg) +
- startsg->length;
+ prev_end = (unsigned long)sg_virt(startsg) +
+ startsg->length;
startsg++;
- startsg_end = sg_virt_addr(startsg) +
- startsg->length;
+ sg_start = (unsigned long)sg_virt(startsg);
/* PARANOID: clear entries */
sg_dma_address(startsg) = 0;
@@ -150,10 +149,13 @@
break;
/*
- ** Next see if we can append the next chunk (i.e.
- ** it must end on one page and begin on another
+ * Next see if we can append the next chunk (i.e.
+ * it must end on one page and begin on another, or
+ * it must start on the same address as the previous
+ * entry ended.
*/
- if (unlikely(((prevstartsg_end | sg_virt_addr(startsg)) & ~PAGE_MASK) != 0))
+ if (unlikely((prev_end != sg_start) ||
+ ((prev_end | sg_start) & ~PAGE_MASK)))
break;
dma_len += startsg->length;
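The reworked check above only coalesces two scatterlist entries when the next entry begins exactly where the previous one ended and that join falls on a page boundary. A standalone rendering of the same test (assuming 4 KiB pages; PAGE_MASK here is a local stand-in for the kernel constant):

#include <stdio.h>

#define PAGE_MASK	(~0xFFFUL)	/* 4 KiB pages assumed for the example */

static int can_append(unsigned long prev_end, unsigned long sg_start)
{
	/* same condition as the patch, inverted: true means "keep merging" */
	return !((prev_end != sg_start) || ((prev_end | sg_start) & ~PAGE_MASK));
}

int main(void)
{
	printf("%d\n", can_append(0x10002000, 0x10002000)); /* 1: contiguous, page aligned */
	printf("%d\n", can_append(0x10002800, 0x10002800)); /* 0: contiguous, but mid-page */
	printf("%d\n", can_append(0x10002000, 0x10003000)); /* 0: hole between the entries */
	return 0;
}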
diff --git a/drivers/parisc/sba_iommu.c b/drivers/parisc/sba_iommu.c
index f074712..f1441e4 100644
--- a/drivers/parisc/sba_iommu.c
+++ b/drivers/parisc/sba_iommu.c
@@ -278,7 +278,7 @@
nents,
(unsigned long) sg_dma_address(startsg),
sg_dma_len(startsg),
- sg_virt_addr(startsg), startsg->length);
+ sg_virt(startsg), startsg->length);
startsg++;
}
}
@@ -945,8 +945,7 @@
/* Fast path single entry scatterlists. */
if (nents == 1) {
- sg_dma_address(sglist) = sba_map_single(dev,
- (void *)sg_virt_addr(sglist),
+ sg_dma_address(sglist) = sba_map_single(dev, sg_virt(sglist),
sglist->length, direction);
sg_dma_len(sglist) = sglist->length;
return 1;
@@ -1025,7 +1024,7 @@
#endif
DBG_RUN_SG("%s() START %d entries, %p,%x\n",
- __func__, nents, sg_virt_addr(sglist), sglist->length);
+ __func__, nents, sg_virt(sglist), sglist->length);
ioc = GET_IOC(dev);
diff --git a/drivers/pcmcia/omap_cf.c b/drivers/pcmcia/omap_cf.c
index 8170102..4e2f501 100644
--- a/drivers/pcmcia/omap_cf.c
+++ b/drivers/pcmcia/omap_cf.c
@@ -220,9 +220,7 @@
cf = kzalloc(sizeof *cf, GFP_KERNEL);
if (!cf)
return -ENOMEM;
- init_timer(&cf->timer);
- cf->timer.function = omap_cf_timer;
- cf->timer.data = (unsigned long) cf;
+ setup_timer(&cf->timer, omap_cf_timer, (unsigned long)cf);
cf->pdev = pdev;
platform_set_drvdata(pdev, cf);
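The timer change here, and the identical ones in pd6729, soc_common and yenta_socket below, replace the open-coded init_timer()/.function/.data triple with setup_timer(), which bundles the three steps (this is the pre-4.15 timer API; the in-tree macro goes through __setup_timer(), but it is functionally equivalent to this sketch):

#define setup_timer(timer, fn, data)		\
	do {					\
		init_timer(timer);		\
		(timer)->function = (fn);	\
		(timer)->data = (data);		\
	} while (0)

pd6729 and yenta_socket additionally fold the manual ".expires = jiffies + HZ; add_timer()" pair into a single mod_timer(&socket->poll_timer, jiffies + HZ), which sets the expiry and arms the timer in one call.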
diff --git a/drivers/pcmcia/pd6729.c b/drivers/pcmcia/pd6729.c
index 34ace48..0f70b4d 100644
--- a/drivers/pcmcia/pd6729.c
+++ b/drivers/pcmcia/pd6729.c
@@ -707,11 +707,9 @@
}
} else {
/* poll Card status change */
- init_timer(&socket->poll_timer);
- socket->poll_timer.function = pd6729_interrupt_wrapper;
- socket->poll_timer.data = (unsigned long)socket;
- socket->poll_timer.expires = jiffies + HZ;
- add_timer(&socket->poll_timer);
+ setup_timer(&socket->poll_timer, pd6729_interrupt_wrapper,
+ (unsigned long)socket);
+ mod_timer(&socket->poll_timer, jiffies + HZ);
}
for (i = 0; i < MAX_SOCKETS; i++) {
diff --git a/drivers/pcmcia/soc_common.c b/drivers/pcmcia/soc_common.c
index 933f465..eed5e9c 100644
--- a/drivers/pcmcia/soc_common.c
+++ b/drivers/pcmcia/soc_common.c
@@ -726,9 +726,8 @@
{
int ret;
- init_timer(&skt->poll_timer);
- skt->poll_timer.function = soc_common_pcmcia_poll_event;
- skt->poll_timer.data = (unsigned long)skt;
+ setup_timer(&skt->poll_timer, soc_common_pcmcia_poll_event,
+ (unsigned long)skt);
skt->poll_timer.expires = jiffies + SOC_PCMCIA_POLL_PERIOD;
ret = request_resource(&iomem_resource, &skt->res_skt);
diff --git a/drivers/pcmcia/yenta_socket.c b/drivers/pcmcia/yenta_socket.c
index 8a23ccb..965bd84 100644
--- a/drivers/pcmcia/yenta_socket.c
+++ b/drivers/pcmcia/yenta_socket.c
@@ -1236,11 +1236,9 @@
if (!socket->cb_irq || request_irq(socket->cb_irq, yenta_interrupt, IRQF_SHARED, "yenta", socket)) {
/* No IRQ or request_irq failed. Poll */
socket->cb_irq = 0; /* But zero is a valid IRQ number. */
- init_timer(&socket->poll_timer);
- socket->poll_timer.function = yenta_interrupt_wrapper;
- socket->poll_timer.data = (unsigned long)socket;
- socket->poll_timer.expires = jiffies + HZ;
- add_timer(&socket->poll_timer);
+ setup_timer(&socket->poll_timer, yenta_interrupt_wrapper,
+ (unsigned long)socket);
+ mod_timer(&socket->poll_timer, jiffies + HZ);
dev_printk(KERN_INFO, &dev->dev,
"no PCI IRQ, CardBus support disabled for this "
"socket.\n");
diff --git a/drivers/pinctrl/Kconfig b/drivers/pinctrl/Kconfig
index c6f299b..aeb5729 100644
--- a/drivers/pinctrl/Kconfig
+++ b/drivers/pinctrl/Kconfig
@@ -95,9 +95,11 @@
config PINCTRL_MESON
bool
+ depends on OF
select PINMUX
select PINCONF
select GENERIC_PINCONF
+ select GPIOLIB
select OF_GPIO
select REGMAP_MMIO
@@ -229,7 +231,8 @@
config PINCTRL_TB10X
bool
- depends on ARC_PLAT_TB10X
+ depends on OF && ARC_PLAT_TB10X
+ select GPIOLIB
endmenu
diff --git a/drivers/pinctrl/mediatek/Kconfig b/drivers/pinctrl/mediatek/Kconfig
index 5983cf5..6b3551c 100644
--- a/drivers/pinctrl/mediatek/Kconfig
+++ b/drivers/pinctrl/mediatek/Kconfig
@@ -2,6 +2,7 @@
config PINCTRL_MTK_COMMON
bool
+ depends on OF
select PINMUX
select GENERIC_PINCONF
select GPIOLIB
@@ -10,12 +11,14 @@
# For ARMv7 SoCs
config PINCTRL_MT8135
bool "Mediatek MT8135 pin control" if COMPILE_TEST && !MACH_MT8135
+ depends on OF
default MACH_MT8135
select PINCTRL_MTK_COMMON
# For ARMv8 SoCs
config PINCTRL_MT8173
bool "Mediatek MT8173 pin control"
+ depends on OF
depends on ARM64 || COMPILE_TEST
default ARM64 && ARCH_MEDIATEK
select PINCTRL_MTK_COMMON
diff --git a/drivers/pinctrl/nomadik/Kconfig b/drivers/pinctrl/nomadik/Kconfig
index d48a5aa..f4fcebf 100644
--- a/drivers/pinctrl/nomadik/Kconfig
+++ b/drivers/pinctrl/nomadik/Kconfig
@@ -30,9 +30,9 @@
config PINCTRL_NOMADIK
bool "Nomadik pin controller driver"
depends on ARCH_U8500 || ARCH_NOMADIK
+ depends on OF && GPIOLIB
select PINMUX
select PINCONF
- select GPIOLIB
select OF_GPIO
select GPIOLIB_IRQCHIP
diff --git a/drivers/remoteproc/da8xx_remoteproc.c b/drivers/remoteproc/da8xx_remoteproc.c
index 89fd057..f8d6a06 100644
--- a/drivers/remoteproc/da8xx_remoteproc.c
+++ b/drivers/remoteproc/da8xx_remoteproc.c
@@ -224,6 +224,7 @@
drproc = rproc->priv;
drproc->rproc = rproc;
+ rproc->has_iommu = false;
platform_set_drvdata(pdev, rproc);
diff --git a/drivers/remoteproc/omap_remoteproc.c b/drivers/remoteproc/omap_remoteproc.c
index e85f303..b74368a 100644
--- a/drivers/remoteproc/omap_remoteproc.c
+++ b/drivers/remoteproc/omap_remoteproc.c
@@ -202,6 +202,8 @@
oproc = rproc->priv;
oproc->rproc = rproc;
+ /* All existing OMAP IPU and DSP processors have an MMU */
+ rproc->has_iommu = true;
platform_set_drvdata(pdev, rproc);
diff --git a/drivers/remoteproc/remoteproc_core.c b/drivers/remoteproc/remoteproc_core.c
index 3cd85a6..11cdb11 100644
--- a/drivers/remoteproc/remoteproc_core.c
+++ b/drivers/remoteproc/remoteproc_core.c
@@ -94,19 +94,8 @@
struct device *dev = rproc->dev.parent;
int ret;
- /*
- * We currently use iommu_present() to decide if an IOMMU
- * setup is needed.
- *
- * This works for simple cases, but will easily fail with
- * platforms that do have an IOMMU, but not for this specific
- * rproc.
- *
- * This will be easily solved by introducing hw capabilities
- * that will be set by the remoteproc driver.
- */
- if (!iommu_present(dev->bus)) {
- dev_dbg(dev, "iommu not found\n");
+ if (!rproc->has_iommu) {
+ dev_dbg(dev, "iommu not present\n");
return 0;
}
diff --git a/drivers/remoteproc/ste_modem_rproc.c b/drivers/remoteproc/ste_modem_rproc.c
index 16b7b7b..dd193f3 100644
--- a/drivers/remoteproc/ste_modem_rproc.c
+++ b/drivers/remoteproc/ste_modem_rproc.c
@@ -289,6 +289,7 @@
sproc = rproc->priv;
sproc->mdev = mdev;
sproc->rproc = rproc;
+ rproc->has_iommu = false;
mdev->drv_data = sproc;
/* Provide callback functions to modem device */
diff --git a/drivers/s390/block/dasd.c b/drivers/s390/block/dasd.c
index 26a51dc..57fd663 100644
--- a/drivers/s390/block/dasd.c
+++ b/drivers/s390/block/dasd.c
@@ -579,7 +579,8 @@
{
dasd_get_device(device);
/* queue call to dasd_kick_device to the kernel event daemon. */
- schedule_work(&device->kick_work);
+ if (!schedule_work(&device->kick_work))
+ dasd_put_device(device);
}
EXPORT_SYMBOL(dasd_kick_device);
@@ -599,7 +600,8 @@
{
dasd_get_device(device);
/* queue call to dasd_reload_device to the kernel event daemon. */
- schedule_work(&device->reload_device);
+ if (!schedule_work(&device->reload_device))
+ dasd_put_device(device);
}
EXPORT_SYMBOL(dasd_reload_device);
@@ -619,7 +621,8 @@
{
dasd_get_device(device);
/* queue call to dasd_restore_device to the kernel event daemon. */
- schedule_work(&device->restore_device);
+ if (!schedule_work(&device->restore_device))
+ dasd_put_device(device);
}
/*
@@ -2163,18 +2166,22 @@
cqr->intrc = -ENOLINK;
continue;
}
- /* Don't try to start requests if device is stopped */
- if (interruptible) {
- rc = wait_event_interruptible(
- generic_waitq, !(device->stopped));
- if (rc == -ERESTARTSYS) {
- cqr->status = DASD_CQR_FAILED;
- maincqr->intrc = rc;
- continue;
- }
- } else
- wait_event(generic_waitq, !(device->stopped));
-
+ /*
+ * Don't try to start requests if device is stopped
+ * except path verification requests
+ */
+ if (!test_bit(DASD_CQR_VERIFY_PATH, &cqr->flags)) {
+ if (interruptible) {
+ rc = wait_event_interruptible(
+ generic_waitq, !(device->stopped));
+ if (rc == -ERESTARTSYS) {
+ cqr->status = DASD_CQR_FAILED;
+ maincqr->intrc = rc;
+ continue;
+ }
+ } else
+ wait_event(generic_waitq, !(device->stopped));
+ }
if (!cqr->callback)
cqr->callback = dasd_wakeup_cb;
@@ -2524,6 +2531,11 @@
__blk_end_request_all(req, -EIO);
return;
}
+
+ /* if device is stopped do not fetch new requests */
+ if (basedev->stopped)
+ return;
+
/* Now we try to fetch requests from the request queue */
while ((req = blk_peek_request(queue))) {
if (basedev->features & DASD_FEATURE_READONLY &&
diff --git a/drivers/s390/block/dasd_eckd.c b/drivers/s390/block/dasd_eckd.c
index 49b48a8..6215f64 100644
--- a/drivers/s390/block/dasd_eckd.c
+++ b/drivers/s390/block/dasd_eckd.c
@@ -1628,7 +1628,8 @@
return;
}
/* queue call to do_validate_server to the kernel event daemon. */
- schedule_work(&device->kick_validate);
+ if (!schedule_work(&device->kick_validate))
+ dasd_put_device(device);
}
static u32 get_fcx_max_data(struct dasd_device *device)
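Both this hunk and the three dasd.c hunks above apply the same reference-counting idiom: dasd_get_device() takes a reference that is owned by the queued work item and dropped by the work handler. schedule_work() returns false when that work is already pending, meaning no additional handler run (and therefore no additional put) will happen, so the caller must release the reference it just took or the device refcount leaks. Condensed (a sketch of the idiom, not the full functions):

dasd_get_device(device);		/* reference owned by the pending work   */
if (!schedule_work(&device->kick_work))
	dasd_put_device(device);	/* already queued: nobody else will put  */

static void do_kick_device(struct work_struct *work)
{
	struct dasd_device *device =
		container_of(work, struct dasd_device, kick_work);

	/* ... actual work ... */
	dasd_put_device(device);	/* drops the reference taken at schedule */
}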
diff --git a/drivers/s390/char/sclp_cmd.c b/drivers/s390/char/sclp_cmd.c
index 6e14999..7be7821 100644
--- a/drivers/s390/char/sclp_cmd.c
+++ b/drivers/s390/char/sclp_cmd.c
@@ -315,10 +315,29 @@
rc |= sclp_assign_storage(incr->rn);
else
sclp_unassign_storage(incr->rn);
+ if (rc == 0)
+ incr->standby = online ? 0 : 1;
}
return rc ? -EIO : 0;
}
+static bool contains_standby_increment(unsigned long start, unsigned long end)
+{
+ struct memory_increment *incr;
+ unsigned long istart;
+
+ list_for_each_entry(incr, &sclp_mem_list, list) {
+ istart = rn2addr(incr->rn);
+ if (end - 1 < istart)
+ continue;
+ if (start > istart + sclp_rzm - 1)
+ continue;
+ if (incr->standby)
+ return true;
+ }
+ return false;
+}
+
static int sclp_mem_notifier(struct notifier_block *nb,
unsigned long action, void *data)
{
@@ -334,8 +353,16 @@
for_each_clear_bit(id, sclp_storage_ids, sclp_max_storage_id + 1)
sclp_attach_storage(id);
switch (action) {
- case MEM_ONLINE:
case MEM_GOING_OFFLINE:
+ /*
+ * We do not allow to set memory blocks offline that contain
+ * standby memory. This is done to simplify the "memory online"
+ * case.
+ */
+ if (contains_standby_increment(start, start + size))
+ rc = -EPERM;
+ break;
+ case MEM_ONLINE:
case MEM_CANCEL_OFFLINE:
break;
case MEM_GOING_ONLINE:
@@ -361,6 +388,21 @@
.notifier_call = sclp_mem_notifier,
};
+static void __init align_to_block_size(unsigned long long *start,
+ unsigned long long *size)
+{
+ unsigned long long start_align, size_align, alignment;
+
+ alignment = memory_block_size_bytes();
+ start_align = roundup(*start, alignment);
+ size_align = rounddown(*start + *size, alignment) - start_align;
+
+ pr_info("Standby memory at 0x%llx (%lluM of %lluM usable)\n",
+ *start, size_align >> 20, *size >> 20);
+ *start = start_align;
+ *size = size_align;
+}
+
static void __init add_memory_merged(u16 rn)
{
static u16 first_rn, num;
@@ -382,7 +424,9 @@
goto skip_add;
if (memory_end_set && (start + size > memory_end))
size = memory_end - start;
- add_memory(0, start, size);
+ align_to_block_size(&start, &size);
+ if (size)
+ add_memory(0, start, size);
skip_add:
first_rn = rn;
num = 1;
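align_to_block_size() trims a standby range so that both ends sit on memory-block boundaries, because add_memory() only accepts whole blocks; the pr_info() then reports how much of the increment survived. Worked through with assumed numbers (a 256 MiB block size and a deliberately misaligned start):

#include <stdio.h>

/* standalone rendering of the same arithmetic; the kernel uses roundup()/rounddown() */
int main(void)
{
	unsigned long long start = 0x0fe00000ULL;	/* 254 MiB, not block aligned */
	unsigned long long size  = 0x20000000ULL;	/* 512 MiB standby increment  */
	unsigned long long align = 0x10000000ULL;	/* assumed 256 MiB block size */

	unsigned long long start_align = (start + align - 1) / align * align;		/* 0x10000000 */
	unsigned long long size_align  = (start + size) / align * align - start_align;	/* 0x10000000 */

	printf("Standby memory at 0x%llx (%lluM of %lluM usable)\n",
	       start, size_align >> 20, size >> 20);	/* prints: 256M of 512M usable */
	return 0;
}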
diff --git a/drivers/scsi/am53c974.c b/drivers/scsi/am53c974.c
index a6f5ee8..beea30e 100644
--- a/drivers/scsi/am53c974.c
+++ b/drivers/scsi/am53c974.c
@@ -476,6 +476,8 @@
goto fail_unmap_regs;
}
+ pci_set_drvdata(pdev, pep);
+
err = request_irq(pdev->irq, scsi_esp_intr, IRQF_SHARED,
DRV_MODULE_NAME, esp);
if (err < 0) {
@@ -496,8 +498,6 @@
/* Assume 40MHz clock */
esp->cfreq = 40000000;
- pci_set_drvdata(pdev, pep);
-
err = scsi_esp_register(esp, &pdev->dev);
if (err)
goto fail_free_irq;
@@ -507,6 +507,7 @@
fail_free_irq:
free_irq(pdev->irq, esp);
fail_unmap_command_block:
+ pci_set_drvdata(pdev, NULL);
pci_free_consistent(pdev, 16, esp->command_block,
esp->command_block_dma);
fail_unmap_regs:
@@ -530,6 +531,7 @@
scsi_esp_unregister(esp);
free_irq(pdev->irq, esp);
+ pci_set_drvdata(pdev, NULL);
pci_free_consistent(pdev, 16, esp->command_block,
esp->command_block_dma);
pci_iounmap(pdev, esp->regs);
diff --git a/drivers/scsi/hpsa.c b/drivers/scsi/hpsa.c
index a1cfbd3..8eab107 100644
--- a/drivers/scsi/hpsa.c
+++ b/drivers/scsi/hpsa.c
@@ -6632,14 +6632,12 @@
static void set_lockup_detected_for_all_cpus(struct ctlr_info *h, u32 value)
{
- int i, cpu;
+ int cpu;
- cpu = cpumask_first(cpu_online_mask);
- for (i = 0; i < num_online_cpus(); i++) {
+ for_each_online_cpu(cpu) {
u32 *lockup_detected;
lockup_detected = per_cpu_ptr(h->lockup_detected, cpu);
*lockup_detected = value;
- cpu = cpumask_next(cpu, cpu_online_mask);
}
wmb(); /* be sure the per-cpu variables are out to memory */
}
diff --git a/drivers/scsi/megaraid/megaraid_sas_fusion.c b/drivers/scsi/megaraid/megaraid_sas_fusion.c
index 675b5e7..5a0800d 100644
--- a/drivers/scsi/megaraid/megaraid_sas_fusion.c
+++ b/drivers/scsi/megaraid/megaraid_sas_fusion.c
@@ -1584,11 +1584,11 @@
fp_possible = io_info.fpOkForIo;
}
- /* Use smp_processor_id() for now until cmd->request->cpu is CPU
+ /* Use raw_smp_processor_id() for now until cmd->request->cpu is CPU
id by default, not CPU group id, otherwise all MSI-X queues won't
be utilized */
cmd->request_desc->SCSIIO.MSIxIndex = instance->msix_vectors ?
- smp_processor_id() % instance->msix_vectors : 0;
+ raw_smp_processor_id() % instance->msix_vectors : 0;
if (fp_possible) {
megasas_set_pd_lba(io_request, scp->cmd_len, &io_info, scp,
@@ -1693,7 +1693,10 @@
<< MR_RAID_CTX_RAID_FLAGS_IO_SUB_TYPE_SHIFT;
cmd->request_desc->SCSIIO.DevHandle = io_request->DevHandle;
cmd->request_desc->SCSIIO.MSIxIndex =
- instance->msix_vectors ? smp_processor_id() % instance->msix_vectors : 0;
+ instance->msix_vectors ?
+ raw_smp_processor_id() %
+ instance->msix_vectors :
+ 0;
os_timeout_value = scmd->request->timeout / HZ;
if (instance->secure_jbod_support &&
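The smp_processor_id() -> raw_smp_processor_id() switch in both hunks is about calling context rather than the value: the command-build path can run with preemption enabled, where smp_processor_id() emits a "using smp_processor_id() in preemptible" warning, while raw_smp_processor_id() skips that debug check. The number is only used to spread commands across the MSI-X reply queues, so a value that goes stale after a CPU migration is harmless:

/* any recent CPU number will do; it only load-balances the reply queues */
cmd->request_desc->SCSIIO.MSIxIndex = instance->msix_vectors ?
		raw_smp_processor_id() % instance->msix_vectors : 0;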
diff --git a/drivers/scsi/mvsas/mv_sas.c b/drivers/scsi/mvsas/mv_sas.c
index 2d5ab6d..454536c 100644
--- a/drivers/scsi/mvsas/mv_sas.c
+++ b/drivers/scsi/mvsas/mv_sas.c
@@ -441,14 +441,11 @@
static int mvs_task_prep_ata(struct mvs_info *mvi,
struct mvs_task_exec_info *tei)
{
- struct sas_ha_struct *sha = mvi->sas;
struct sas_task *task = tei->task;
struct domain_device *dev = task->dev;
struct mvs_device *mvi_dev = dev->lldd_dev;
struct mvs_cmd_hdr *hdr = tei->hdr;
struct asd_sas_port *sas_port = dev->port;
- struct sas_phy *sphy = dev->phy;
- struct asd_sas_phy *sas_phy = sha->sas_phy[sphy->number];
struct mvs_slot_info *slot;
void *buf_prd;
u32 tag = tei->tag, hdr_tag;
@@ -468,7 +465,7 @@
slot->tx = mvi->tx_prod;
del_q = TXQ_MODE_I | tag |
(TXQ_CMD_STP << TXQ_CMD_SHIFT) |
- (MVS_PHY_ID << TXQ_PHY_SHIFT) |
+ ((sas_port->phy_mask & TXQ_PHY_MASK) << TXQ_PHY_SHIFT) |
(mvi_dev->taskfileset << TXQ_SRS_SHIFT);
mvi->tx[mvi->tx_prod] = cpu_to_le32(del_q);
diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
index dcc4244..79beebf 100644
--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -3076,6 +3076,7 @@
ida_remove(&sd_index_ida, sdkp->index);
spin_unlock(&sd_index_lock);
+ blk_integrity_unregister(disk);
disk->private_data = NULL;
put_disk(disk);
put_device(&sdkp->device->sdev_gendev);
diff --git a/drivers/scsi/sd_dif.c b/drivers/scsi/sd_dif.c
index 14c7d42..5c06d29 100644
--- a/drivers/scsi/sd_dif.c
+++ b/drivers/scsi/sd_dif.c
@@ -77,7 +77,7 @@
disk->integrity->flags |= BLK_INTEGRITY_DEVICE_CAPABLE;
- if (!sdkp)
+ if (!sdkp->ATO)
return;
if (type == SD_DIF_TYPE3_PROTECTION)
diff --git a/drivers/spmi/Kconfig b/drivers/spmi/Kconfig
index bf1295e..c8d9956 100644
--- a/drivers/spmi/Kconfig
+++ b/drivers/spmi/Kconfig
@@ -12,7 +12,6 @@
config SPMI_MSM_PMIC_ARB
tristate "Qualcomm MSM SPMI Controller (PMIC Arbiter)"
- depends on ARM
depends on IRQ_DOMAIN
depends on ARCH_QCOM || COMPILE_TEST
default ARCH_QCOM
diff --git a/drivers/spmi/spmi-pmic-arb.c b/drivers/spmi/spmi-pmic-arb.c
index 20559ab..d7119db 100644
--- a/drivers/spmi/spmi-pmic-arb.c
+++ b/drivers/spmi/spmi-pmic-arb.c
@@ -1,4 +1,5 @@
-/* Copyright (c) 2012-2013, The Linux Foundation. All rights reserved.
+/*
+ * Copyright (c) 2012-2015, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -25,22 +26,18 @@
/* PMIC Arbiter configuration registers */
#define PMIC_ARB_VERSION 0x0000
+#define PMIC_ARB_VERSION_V2_MIN 0x20010000
#define PMIC_ARB_INT_EN 0x0004
-/* PMIC Arbiter channel registers */
-#define PMIC_ARB_CMD(N) (0x0800 + (0x80 * (N)))
-#define PMIC_ARB_CONFIG(N) (0x0804 + (0x80 * (N)))
-#define PMIC_ARB_STATUS(N) (0x0808 + (0x80 * (N)))
-#define PMIC_ARB_WDATA0(N) (0x0810 + (0x80 * (N)))
-#define PMIC_ARB_WDATA1(N) (0x0814 + (0x80 * (N)))
-#define PMIC_ARB_RDATA0(N) (0x0818 + (0x80 * (N)))
-#define PMIC_ARB_RDATA1(N) (0x081C + (0x80 * (N)))
-
-/* Interrupt Controller */
-#define SPMI_PIC_OWNER_ACC_STATUS(M, N) (0x0000 + ((32 * (M)) + (4 * (N))))
-#define SPMI_PIC_ACC_ENABLE(N) (0x0200 + (4 * (N)))
-#define SPMI_PIC_IRQ_STATUS(N) (0x0600 + (4 * (N)))
-#define SPMI_PIC_IRQ_CLEAR(N) (0x0A00 + (4 * (N)))
+/* PMIC Arbiter channel registers offsets */
+#define PMIC_ARB_CMD 0x00
+#define PMIC_ARB_CONFIG 0x04
+#define PMIC_ARB_STATUS 0x08
+#define PMIC_ARB_WDATA0 0x10
+#define PMIC_ARB_WDATA1 0x14
+#define PMIC_ARB_RDATA0 0x18
+#define PMIC_ARB_RDATA1 0x1C
+#define PMIC_ARB_REG_CHNL(N) (0x800 + 0x4 * (N))
/* Mapping Table */
#define SPMI_MAPPING_TABLE_REG(N) (0x0B00 + (4 * (N)))
@@ -52,6 +49,7 @@
#define SPMI_MAPPING_TABLE_LEN 255
#define SPMI_MAPPING_TABLE_TREE_DEPTH 16 /* Maximum of 16-bits */
+#define PPID_TO_CHAN_TABLE_SZ BIT(12) /* PPID is 12 bits, chan is 1 byte */
/* Ownership Table */
#define SPMI_OWNERSHIP_TABLE_REG(N) (0x0700 + (4 * (N)))
@@ -88,6 +86,7 @@
/* Maximum number of support PMIC peripherals */
#define PMIC_ARB_MAX_PERIPHS 256
+#define PMIC_ARB_MAX_CHNL 128
#define PMIC_ARB_PERIPH_ID_VALID (1 << 15)
#define PMIC_ARB_TIMEOUT_US 100
#define PMIC_ARB_MAX_TRANS_BYTES (8)
@@ -98,14 +97,17 @@
/* interrupt enable bit */
#define SPMI_PIC_ACC_ENABLE_BIT BIT(0)
+struct pmic_arb_ver_ops;
+
/**
* spmi_pmic_arb_dev - SPMI PMIC Arbiter object
*
- * @base: address of the PMIC Arbiter core registers.
+ * @rd_base: on v1 "core", on v2 "observer" register base off DT.
+ * @wr_base: on v1 "core", on v2 "chnls" register base off DT.
* @intr: address of the SPMI interrupt control registers.
* @cnfg: address of the PMIC Arbiter configuration registers.
* @lock: lock to synchronize accesses.
- * @channel: which channel to use for accesses.
+ * @channel: execution environment channel to use for accesses.
* @irq: PMIC ARB interrupt.
* @ee: the current Execution Environment
* @min_apid: minimum APID (used for bounding IRQ search)
@@ -113,10 +115,14 @@
* @mapping_table: in-memory copy of PPID -> APID mapping table.
* @domain: irq domain object for PMIC IRQ domain
* @spmic: SPMI controller object
- * @apid_to_ppid: cached mapping from APID to PPID
+ * @apid_to_ppid: in-memory copy of APID -> PPID mapping table.
+ * @ver_ops: version dependent operations.
+ * @ppid_to_chan in-memory copy of PPID -> channel (APID) mapping table.
+ * v2 only.
*/
struct spmi_pmic_arb_dev {
- void __iomem *base;
+ void __iomem *rd_base;
+ void __iomem *wr_base;
void __iomem *intr;
void __iomem *cnfg;
raw_spinlock_t lock;
@@ -129,17 +135,54 @@
struct irq_domain *domain;
struct spmi_controller *spmic;
u16 apid_to_ppid[256];
+ const struct pmic_arb_ver_ops *ver_ops;
+ u8 *ppid_to_chan;
+};
+
+/**
+ * pmic_arb_ver: version dependent functionality.
+ *
+ * @non_data_cmd: on v1 issues an spmi non-data command.
+ * on v2 no HW support, returns -EOPNOTSUPP.
+ * @offset: on v1 offset of per-ee channel.
+ * on v2 offset of per-ee and per-ppid channel.
+ * @fmt_cmd: formats a GENI/SPMI command.
+ * @owner_acc_status: on v1 offset of PMIC_ARB_SPMI_PIC_OWNERm_ACC_STATUSn
+ * on v2 offset of SPMI_PIC_OWNERm_ACC_STATUSn.
+ * @acc_enable: on v1 offset of PMIC_ARB_SPMI_PIC_ACC_ENABLEn
+ * on v2 offset of SPMI_PIC_ACC_ENABLEn.
+ * @irq_status: on v1 offset of PMIC_ARB_SPMI_PIC_IRQ_STATUSn
+ * on v2 offset of SPMI_PIC_IRQ_STATUSn.
+ * @irq_clear: on v1 offset of PMIC_ARB_SPMI_PIC_IRQ_CLEARn
+ * on v2 offset of SPMI_PIC_IRQ_CLEARn.
+ */
+struct pmic_arb_ver_ops {
+ /* spmi commands (read_cmd, write_cmd, cmd) functionality */
+ u32 (*offset)(struct spmi_pmic_arb_dev *dev, u8 sid, u16 addr);
+ u32 (*fmt_cmd)(u8 opc, u8 sid, u16 addr, u8 bc);
+ int (*non_data_cmd)(struct spmi_controller *ctrl, u8 opc, u8 sid);
+ /* Interrupts controller functionality (offset of PIC registers) */
+ u32 (*owner_acc_status)(u8 m, u8 n);
+ u32 (*acc_enable)(u8 n);
+ u32 (*irq_status)(u8 n);
+ u32 (*irq_clear)(u8 n);
};
static inline u32 pmic_arb_base_read(struct spmi_pmic_arb_dev *dev, u32 offset)
{
- return readl_relaxed(dev->base + offset);
+ return readl_relaxed(dev->rd_base + offset);
}
static inline void pmic_arb_base_write(struct spmi_pmic_arb_dev *dev,
u32 offset, u32 val)
{
- writel_relaxed(val, dev->base + offset);
+ writel_relaxed(val, dev->wr_base + offset);
+}
+
+static inline void pmic_arb_set_rd_cmd(struct spmi_pmic_arb_dev *dev,
+ u32 offset, u32 val)
+{
+ writel_relaxed(val, dev->rd_base + offset);
}
/**
@@ -168,15 +211,16 @@
pmic_arb_base_write(dev, reg, data);
}
-static int pmic_arb_wait_for_done(struct spmi_controller *ctrl)
+static int pmic_arb_wait_for_done(struct spmi_controller *ctrl,
+ void __iomem *base, u8 sid, u16 addr)
{
struct spmi_pmic_arb_dev *dev = spmi_controller_get_drvdata(ctrl);
u32 status = 0;
u32 timeout = PMIC_ARB_TIMEOUT_US;
- u32 offset = PMIC_ARB_STATUS(dev->channel);
+ u32 offset = dev->ver_ops->offset(dev, sid, addr) + PMIC_ARB_STATUS;
while (timeout--) {
- status = pmic_arb_base_read(dev, offset);
+ status = readl_relaxed(base + offset);
if (status & PMIC_ARB_STATUS_DONE) {
if (status & PMIC_ARB_STATUS_DENIED) {
@@ -211,26 +255,43 @@
return -ETIMEDOUT;
}
-/* Non-data command */
-static int pmic_arb_cmd(struct spmi_controller *ctrl, u8 opc, u8 sid)
+static int
+pmic_arb_non_data_cmd_v1(struct spmi_controller *ctrl, u8 opc, u8 sid)
{
struct spmi_pmic_arb_dev *pmic_arb = spmi_controller_get_drvdata(ctrl);
unsigned long flags;
u32 cmd;
int rc;
+ u32 offset = pmic_arb->ver_ops->offset(pmic_arb, sid, 0);
+
+ cmd = ((opc | 0x40) << 27) | ((sid & 0xf) << 20);
+
+ raw_spin_lock_irqsave(&pmic_arb->lock, flags);
+ pmic_arb_base_write(pmic_arb, offset + PMIC_ARB_CMD, cmd);
+ rc = pmic_arb_wait_for_done(ctrl, pmic_arb->wr_base, sid, 0);
+ raw_spin_unlock_irqrestore(&pmic_arb->lock, flags);
+
+ return rc;
+}
+
+static int
+pmic_arb_non_data_cmd_v2(struct spmi_controller *ctrl, u8 opc, u8 sid)
+{
+ return -EOPNOTSUPP;
+}
+
+/* Non-data command */
+static int pmic_arb_cmd(struct spmi_controller *ctrl, u8 opc, u8 sid)
+{
+ struct spmi_pmic_arb_dev *pmic_arb = spmi_controller_get_drvdata(ctrl);
+
+ dev_dbg(&ctrl->dev, "cmd op:0x%x sid:%d\n", opc, sid);
/* Check for valid non-data command */
if (opc < SPMI_CMD_RESET || opc > SPMI_CMD_WAKEUP)
return -EINVAL;
- cmd = ((opc | 0x40) << 27) | ((sid & 0xf) << 20);
-
- raw_spin_lock_irqsave(&pmic_arb->lock, flags);
- pmic_arb_base_write(pmic_arb, PMIC_ARB_CMD(pmic_arb->channel), cmd);
- rc = pmic_arb_wait_for_done(ctrl);
- raw_spin_unlock_irqrestore(&pmic_arb->lock, flags);
-
- return rc;
+ return pmic_arb->ver_ops->non_data_cmd(ctrl, opc, sid);
}
static int pmic_arb_read_cmd(struct spmi_controller *ctrl, u8 opc, u8 sid,
@@ -241,10 +302,11 @@
u8 bc = len - 1;
u32 cmd;
int rc;
+ u32 offset = pmic_arb->ver_ops->offset(pmic_arb, sid, addr);
if (bc >= PMIC_ARB_MAX_TRANS_BYTES) {
dev_err(&ctrl->dev,
- "pmic-arb supports 1..%d bytes per trans, but %d requested",
+ "pmic-arb supports 1..%d bytes per trans, but:%zu requested",
PMIC_ARB_MAX_TRANS_BYTES, len);
return -EINVAL;
}
@@ -259,20 +321,20 @@
else
return -EINVAL;
- cmd = (opc << 27) | ((sid & 0xf) << 20) | (addr << 4) | (bc & 0x7);
+ cmd = pmic_arb->ver_ops->fmt_cmd(opc, sid, addr, bc);
raw_spin_lock_irqsave(&pmic_arb->lock, flags);
- pmic_arb_base_write(pmic_arb, PMIC_ARB_CMD(pmic_arb->channel), cmd);
- rc = pmic_arb_wait_for_done(ctrl);
+ pmic_arb_set_rd_cmd(pmic_arb, offset + PMIC_ARB_CMD, cmd);
+ rc = pmic_arb_wait_for_done(ctrl, pmic_arb->rd_base, sid, addr);
if (rc)
goto done;
- pa_read_data(pmic_arb, buf, PMIC_ARB_RDATA0(pmic_arb->channel),
+ pa_read_data(pmic_arb, buf, offset + PMIC_ARB_RDATA0,
min_t(u8, bc, 3));
if (bc > 3)
pa_read_data(pmic_arb, buf + 4,
- PMIC_ARB_RDATA1(pmic_arb->channel), bc - 4);
+ offset + PMIC_ARB_RDATA1, bc - 4);
done:
raw_spin_unlock_irqrestore(&pmic_arb->lock, flags);
@@ -287,10 +349,11 @@
u8 bc = len - 1;
u32 cmd;
int rc;
+ u32 offset = pmic_arb->ver_ops->offset(pmic_arb, sid, addr);
if (bc >= PMIC_ARB_MAX_TRANS_BYTES) {
dev_err(&ctrl->dev,
- "pmic-arb supports 1..%d bytes per trans, but:%d requested",
+ "pmic-arb supports 1..%d bytes per trans, but:%zu requested",
PMIC_ARB_MAX_TRANS_BYTES, len);
return -EINVAL;
}
@@ -307,19 +370,19 @@
else
return -EINVAL;
- cmd = (opc << 27) | ((sid & 0xf) << 20) | (addr << 4) | (bc & 0x7);
+ cmd = pmic_arb->ver_ops->fmt_cmd(opc, sid, addr, bc);
/* Write data to FIFOs */
raw_spin_lock_irqsave(&pmic_arb->lock, flags);
- pa_write_data(pmic_arb, buf, PMIC_ARB_WDATA0(pmic_arb->channel)
- , min_t(u8, bc, 3));
+ pa_write_data(pmic_arb, buf, offset + PMIC_ARB_WDATA0,
+ min_t(u8, bc, 3));
if (bc > 3)
pa_write_data(pmic_arb, buf + 4,
- PMIC_ARB_WDATA1(pmic_arb->channel), bc - 4);
+ offset + PMIC_ARB_WDATA1, bc - 4);
/* Start the transaction */
- pmic_arb_base_write(pmic_arb, PMIC_ARB_CMD(pmic_arb->channel), cmd);
- rc = pmic_arb_wait_for_done(ctrl);
+ pmic_arb_base_write(pmic_arb, offset + PMIC_ARB_CMD, cmd);
+ rc = pmic_arb_wait_for_done(ctrl, pmic_arb->wr_base, sid, addr);
raw_spin_unlock_irqrestore(&pmic_arb->lock, flags);
return rc;
@@ -376,7 +439,7 @@
u32 status;
int id;
- status = readl_relaxed(pa->intr + SPMI_PIC_IRQ_STATUS(apid));
+ status = readl_relaxed(pa->intr + pa->ver_ops->irq_status(apid));
while (status) {
id = ffs(status) - 1;
status &= ~(1 << id);
@@ -402,7 +465,7 @@
for (i = first; i <= last; ++i) {
status = readl_relaxed(intr +
- SPMI_PIC_OWNER_ACC_STATUS(pa->ee, i));
+ pa->ver_ops->owner_acc_status(pa->ee, i));
while (status) {
id = ffs(status) - 1;
status &= ~(1 << id);
@@ -422,7 +485,7 @@
u8 data;
raw_spin_lock_irqsave(&pa->lock, flags);
- writel_relaxed(1 << irq, pa->intr + SPMI_PIC_IRQ_CLEAR(apid));
+ writel_relaxed(1 << irq, pa->intr + pa->ver_ops->irq_clear(apid));
raw_spin_unlock_irqrestore(&pa->lock, flags);
data = 1 << irq;
@@ -439,10 +502,11 @@
u8 data;
raw_spin_lock_irqsave(&pa->lock, flags);
- status = readl_relaxed(pa->intr + SPMI_PIC_ACC_ENABLE(apid));
+ status = readl_relaxed(pa->intr + pa->ver_ops->acc_enable(apid));
if (status & SPMI_PIC_ACC_ENABLE_BIT) {
status = status & ~SPMI_PIC_ACC_ENABLE_BIT;
- writel_relaxed(status, pa->intr + SPMI_PIC_ACC_ENABLE(apid));
+ writel_relaxed(status, pa->intr +
+ pa->ver_ops->acc_enable(apid));
}
raw_spin_unlock_irqrestore(&pa->lock, flags);
@@ -460,10 +524,10 @@
u8 data;
raw_spin_lock_irqsave(&pa->lock, flags);
- status = readl_relaxed(pa->intr + SPMI_PIC_ACC_ENABLE(apid));
+ status = readl_relaxed(pa->intr + pa->ver_ops->acc_enable(apid));
if (!(status & SPMI_PIC_ACC_ENABLE_BIT)) {
writel_relaxed(status | SPMI_PIC_ACC_ENABLE_BIT,
- pa->intr + SPMI_PIC_ACC_ENABLE(apid));
+ pa->intr + pa->ver_ops->acc_enable(apid));
}
raw_spin_unlock_irqrestore(&pa->lock, flags);
@@ -624,6 +688,91 @@
return 0;
}
+/* v1 offset per ee */
+static u32 pmic_arb_offset_v1(struct spmi_pmic_arb_dev *pa, u8 sid, u16 addr)
+{
+ return 0x800 + 0x80 * pa->channel;
+}
+
+/* v2 offset per ppid (chan) and per ee */
+static u32 pmic_arb_offset_v2(struct spmi_pmic_arb_dev *pa, u8 sid, u16 addr)
+{
+ u16 ppid = (sid << 8) | (addr >> 8);
+ u8 chan = pa->ppid_to_chan[ppid];
+
+ return 0x1000 * pa->ee + 0x8000 * chan;
+}
+
+static u32 pmic_arb_fmt_cmd_v1(u8 opc, u8 sid, u16 addr, u8 bc)
+{
+ return (opc << 27) | ((sid & 0xf) << 20) | (addr << 4) | (bc & 0x7);
+}
+
+static u32 pmic_arb_fmt_cmd_v2(u8 opc, u8 sid, u16 addr, u8 bc)
+{
+ return (opc << 27) | ((addr & 0xff) << 4) | (bc & 0x7);
+}
+
+static u32 pmic_arb_owner_acc_status_v1(u8 m, u8 n)
+{
+ return 0x20 * m + 0x4 * n;
+}
+
+static u32 pmic_arb_owner_acc_status_v2(u8 m, u8 n)
+{
+ return 0x100000 + 0x1000 * m + 0x4 * n;
+}
+
+static u32 pmic_arb_acc_enable_v1(u8 n)
+{
+ return 0x200 + 0x4 * n;
+}
+
+static u32 pmic_arb_acc_enable_v2(u8 n)
+{
+ return 0x1000 * n;
+}
+
+static u32 pmic_arb_irq_status_v1(u8 n)
+{
+ return 0x600 + 0x4 * n;
+}
+
+static u32 pmic_arb_irq_status_v2(u8 n)
+{
+ return 0x4 + 0x1000 * n;
+}
+
+static u32 pmic_arb_irq_clear_v1(u8 n)
+{
+ return 0xA00 + 0x4 * n;
+}
+
+static u32 pmic_arb_irq_clear_v2(u8 n)
+{
+ return 0x8 + 0x1000 * n;
+}
+
+static const struct pmic_arb_ver_ops pmic_arb_v1 = {
+ .non_data_cmd = pmic_arb_non_data_cmd_v1,
+ .offset = pmic_arb_offset_v1,
+ .fmt_cmd = pmic_arb_fmt_cmd_v1,
+ .owner_acc_status = pmic_arb_owner_acc_status_v1,
+ .acc_enable = pmic_arb_acc_enable_v1,
+ .irq_status = pmic_arb_irq_status_v1,
+ .irq_clear = pmic_arb_irq_clear_v1,
+};
+
+static const struct pmic_arb_ver_ops pmic_arb_v2 = {
+ .non_data_cmd = pmic_arb_non_data_cmd_v2,
+ .offset = pmic_arb_offset_v2,
+ .fmt_cmd = pmic_arb_fmt_cmd_v2,
+ .owner_acc_status = pmic_arb_owner_acc_status_v2,
+ .acc_enable = pmic_arb_acc_enable_v2,
+ .irq_status = pmic_arb_irq_status_v2,
+ .irq_clear = pmic_arb_irq_clear_v2,
+};
+
static const struct irq_domain_ops pmic_arb_irq_domain_ops = {
.map = qpnpint_irq_domain_map,
.xlate = qpnpint_irq_domain_dt_translate,
@@ -634,8 +783,10 @@
struct spmi_pmic_arb_dev *pa;
struct spmi_controller *ctrl;
struct resource *res;
- u32 channel, ee;
+ void __iomem *core;
+ u32 channel, ee, hw_ver;
int err, i;
+ bool is_v1;
ctrl = spmi_controller_alloc(&pdev->dev, sizeof(*pa));
if (!ctrl)
@@ -645,12 +796,65 @@
pa->spmic = ctrl;
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "core");
- pa->base = devm_ioremap_resource(&ctrl->dev, res);
- if (IS_ERR(pa->base)) {
- err = PTR_ERR(pa->base);
+ core = devm_ioremap_resource(&ctrl->dev, res);
+ if (IS_ERR(core)) {
+ err = PTR_ERR(core);
goto err_put_ctrl;
}
+ hw_ver = readl_relaxed(core + PMIC_ARB_VERSION);
+ is_v1 = (hw_ver < PMIC_ARB_VERSION_V2_MIN);
+
+ dev_info(&ctrl->dev, "PMIC Arb Version-%d (0x%x)\n", (is_v1 ? 1 : 2),
+ hw_ver);
+
+ if (is_v1) {
+ pa->ver_ops = &pmic_arb_v1;
+ pa->wr_base = core;
+ pa->rd_base = core;
+ } else {
+ u8 chan;
+ u16 ppid;
+ u32 regval;
+
+ pa->ver_ops = &pmic_arb_v2;
+
+ res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
+ "obsrvr");
+ pa->rd_base = devm_ioremap_resource(&ctrl->dev, res);
+ if (IS_ERR(pa->rd_base)) {
+ err = PTR_ERR(pa->rd_base);
+ goto err_put_ctrl;
+ }
+
+ res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
+ "chnls");
+ pa->wr_base = devm_ioremap_resource(&ctrl->dev, res);
+ if (IS_ERR(pa->wr_base)) {
+ err = PTR_ERR(pa->wr_base);
+ goto err_put_ctrl;
+ }
+
+ pa->ppid_to_chan = devm_kzalloc(&ctrl->dev,
+ PPID_TO_CHAN_TABLE_SZ, GFP_KERNEL);
+ if (!pa->ppid_to_chan) {
+ err = -ENOMEM;
+ goto err_put_ctrl;
+ }
+ /*
+ * PMIC_ARB_REG_CHNL is a table in HW mapping channel to ppid.
+ * ppid_to_chan is an in-memory invert of that table.
+ */
+ for (chan = 0; chan < PMIC_ARB_MAX_CHNL; ++chan) {
+ regval = readl_relaxed(core + PMIC_ARB_REG_CHNL(chan));
+ if (!regval)
+ continue;
+
+ ppid = (regval >> 8) & 0xFFF;
+ pa->ppid_to_chan[ppid] = chan;
+ }
+ }
+
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "intr");
pa->intr = devm_ioremap_resource(&ctrl->dev, res);
if (IS_ERR(pa->intr)) {
@@ -731,9 +935,6 @@
if (err)
goto err_domain_remove;
- dev_dbg(&ctrl->dev, "PMIC Arb Version 0x%x\n",
- pmic_arb_base_read(pa, PMIC_ARB_VERSION));
-
return 0;
err_domain_remove:
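On a v2 arbiter the per-transaction register block is no longer selected by the execution-environment channel alone: the (sid, addr) pair yields a PPID, the PPID is looked up in the table inverted at probe time, and the resulting channel together with the EE gives the base of the CMD/STATUS/WDATA/RDATA window. Putting the pieces from the patch together for one read, with made-up values:

/* example: sid = 0x2, addr = 0x4533, ee = 0, and ppid_to_chan[] says channel 3 */
u16 ppid       = (0x2 << 8) | (0x4533 >> 8);		/* 0x245                     */
u8  chan       = pa->ppid_to_chan[ppid];		/* 3 (assumed)               */
u32 offset     = 0x1000 * pa->ee + 0x8000 * chan;	/* 0x18000                   */
u32 cmd_reg    = offset + PMIC_ARB_CMD;			/* 0x18000, written last     */
u32 status_reg = offset + PMIC_ARB_STATUS;		/* 0x18008, polled for DONE  */
u32 rdata0_reg = offset + PMIC_ARB_RDATA0;		/* 0x18018, first data bytes */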
diff --git a/drivers/spmi/spmi.c b/drivers/spmi/spmi.c
index 1d92f51..9493843 100644
--- a/drivers/spmi/spmi.c
+++ b/drivers/spmi/spmi.c
@@ -1,4 +1,5 @@
-/* Copyright (c) 2012-2013, The Linux Foundation. All rights reserved.
+/*
+ * Copyright (c) 2012-2015, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -316,11 +317,6 @@
struct spmi_device *sdev = to_spmi_device(dev);
int err;
- /* Ensure the slave is in ACTIVE state */
- err = spmi_command_wakeup(sdev);
- if (err)
- goto fail_wakeup;
-
pm_runtime_get_noresume(dev);
pm_runtime_set_active(dev);
pm_runtime_enable(dev);
@@ -335,7 +331,6 @@
pm_runtime_disable(dev);
pm_runtime_set_suspended(dev);
pm_runtime_put_noidle(dev);
-fail_wakeup:
return err;
}
diff --git a/drivers/staging/media/bcm2048/radio-bcm2048.c b/drivers/staging/media/bcm2048/radio-bcm2048.c
index f28ffef..e9d0691 100644
--- a/drivers/staging/media/bcm2048/radio-bcm2048.c
+++ b/drivers/staging/media/bcm2048/radio-bcm2048.c
@@ -279,7 +279,7 @@
struct bcm2048_device {
struct i2c_client *client;
- struct video_device *videodev;
+ struct video_device videodev;
struct work_struct work;
struct completion compl;
struct mutex mutex;
@@ -1579,7 +1579,7 @@
bcm2048_parse_rds_rt_block(bdev, i, index+2, crc);
}
-static int bcm2048_parse_rds_rt(struct bcm2048_device *bdev)
+static void bcm2048_parse_rds_rt(struct bcm2048_device *bdev)
{
int i, index = 0, crc, match_b = 0, match_c = 0, match_d = 0;
@@ -1615,8 +1615,6 @@
match_b = 1;
}
}
-
- return 0;
}
static void bcm2048_parse_rds_ps_block(struct bcm2048_device *bdev, int i,
@@ -2273,7 +2271,7 @@
*/
static const struct v4l2_file_operations bcm2048_fops = {
.owner = THIS_MODULE,
- .ioctl = video_ioctl2,
+ .unlocked_ioctl = video_ioctl2,
/* for RDS read support */
.open = bcm2048_fops_open,
.release = bcm2048_fops_release,
@@ -2584,7 +2582,7 @@
static struct video_device bcm2048_viddev_template = {
.fops = &bcm2048_fops,
.name = BCM2048_DRIVER_NAME,
- .release = video_device_release,
+ .release = video_device_release_empty,
.ioctl_ops = &bcm2048_ioctl_ops,
};
@@ -2603,13 +2601,6 @@
goto exit;
}
- bdev->videodev = video_device_alloc();
- if (!bdev->videodev) {
- dev_dbg(&client->dev, "Failed to alloc video device.\n");
- err = -ENOMEM;
- goto free_bdev;
- }
-
bdev->client = client;
i2c_set_clientdata(client, bdev);
mutex_init(&bdev->mutex);
@@ -2622,16 +2613,16 @@
client->name, bdev);
if (err < 0) {
dev_err(&client->dev, "Could not request IRQ\n");
- goto free_vdev;
+ goto free_bdev;
}
dev_dbg(&client->dev, "IRQ requested.\n");
} else {
dev_dbg(&client->dev, "IRQ not configured. Using timeouts.\n");
}
- *bdev->videodev = bcm2048_viddev_template;
- video_set_drvdata(bdev->videodev, bdev);
- if (video_register_device(bdev->videodev, VFL_TYPE_RADIO, radio_nr)) {
+ bdev->videodev = bcm2048_viddev_template;
+ video_set_drvdata(&bdev->videodev, bdev);
+ if (video_register_device(&bdev->videodev, VFL_TYPE_RADIO, radio_nr)) {
dev_dbg(&client->dev, "Could not register video device.\n");
err = -EIO;
goto free_irq;
@@ -2654,18 +2645,13 @@
free_sysfs:
bcm2048_sysfs_unregister_properties(bdev, ARRAY_SIZE(attrs));
free_registration:
- video_unregister_device(bdev->videodev);
- /* video_unregister_device frees bdev->videodev */
- bdev->videodev = NULL;
+ video_unregister_device(&bdev->videodev);
skip_release = 1;
free_irq:
if (client->irq)
free_irq(client->irq, bdev);
-free_vdev:
- if (!skip_release)
- video_device_release(bdev->videodev);
- i2c_set_clientdata(client, NULL);
free_bdev:
+ i2c_set_clientdata(client, NULL);
kfree(bdev);
exit:
return err;
@@ -2674,16 +2660,13 @@
static int __exit bcm2048_i2c_driver_remove(struct i2c_client *client)
{
struct bcm2048_device *bdev = i2c_get_clientdata(client);
- struct video_device *vd;
if (!client->adapter)
return -ENODEV;
if (bdev) {
- vd = bdev->videodev;
-
bcm2048_sysfs_unregister_properties(bdev, ARRAY_SIZE(attrs));
- video_unregister_device(vd);
+ video_unregister_device(&bdev->videodev);
if (bdev->power_state)
bcm2048_set_power_state(bdev, BCM2048_POWER_OFF);
diff --git a/drivers/staging/media/davinci_vpfe/dm365_ipipe.c b/drivers/staging/media/davinci_vpfe/dm365_ipipe.c
index a425f71..1bbb90c 100644
--- a/drivers/staging/media/davinci_vpfe/dm365_ipipe.c
+++ b/drivers/staging/media/davinci_vpfe/dm365_ipipe.c
@@ -1414,17 +1414,17 @@
* __ipipe_get_format() - helper function for getting ipipe format
* @ipipe: pointer to ipipe private structure.
* @pad: pad number.
- * @fh: V4L2 subdev file handle.
+ * @cfg: V4L2 subdev pad config
* @which: wanted subdev format.
*
*/
static struct v4l2_mbus_framefmt *
__ipipe_get_format(struct vpfe_ipipe_device *ipipe,
- struct v4l2_subdev_fh *fh, unsigned int pad,
+ struct v4l2_subdev_pad_config *cfg, unsigned int pad,
enum v4l2_subdev_format_whence which)
{
if (which == V4L2_SUBDEV_FORMAT_TRY)
- return v4l2_subdev_get_try_format(fh, pad);
+ return v4l2_subdev_get_try_format(&ipipe->subdev, cfg, pad);
return &ipipe->formats[pad];
}
@@ -1432,14 +1432,14 @@
/*
* ipipe_try_format() - Handle try format by pad subdev method
* @ipipe: VPFE ipipe device.
- * @fh: V4L2 subdev file handle.
+ * @cfg: V4L2 subdev pad config
* @pad: pad num.
* @fmt: pointer to v4l2 format structure.
* @which : wanted subdev format
*/
static void
ipipe_try_format(struct vpfe_ipipe_device *ipipe,
- struct v4l2_subdev_fh *fh, unsigned int pad,
+ struct v4l2_subdev_pad_config *cfg, unsigned int pad,
struct v4l2_mbus_framefmt *fmt,
enum v4l2_subdev_format_whence which)
{
@@ -1475,22 +1475,22 @@
/*
* ipipe_set_format() - Handle set format by pads subdev method
* @sd: pointer to v4l2 subdev structure
- * @fh: V4L2 subdev file handle
+ * @cfg: V4L2 subdev pad config
* @fmt: pointer to v4l2 subdev format structure
* return -EINVAL or zero on success
*/
static int
-ipipe_set_format(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+ipipe_set_format(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct vpfe_ipipe_device *ipipe = v4l2_get_subdevdata(sd);
struct v4l2_mbus_framefmt *format;
- format = __ipipe_get_format(ipipe, fh, fmt->pad, fmt->which);
+ format = __ipipe_get_format(ipipe, cfg, fmt->pad, fmt->which);
if (format == NULL)
return -EINVAL;
- ipipe_try_format(ipipe, fh, fmt->pad, &fmt->format, fmt->which);
+ ipipe_try_format(ipipe, cfg, fmt->pad, &fmt->format, fmt->which);
*format = fmt->format;
if (fmt->which == V4L2_SUBDEV_FORMAT_TRY)
@@ -1512,11 +1512,11 @@
/*
* ipipe_get_format() - Handle get format by pads subdev method.
* @sd: pointer to v4l2 subdev structure.
- * @fh: V4L2 subdev file handle.
+ * @cfg: V4L2 subdev pad config
* @fmt: pointer to v4l2 subdev format structure.
*/
static int
-ipipe_get_format(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+ipipe_get_format(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct vpfe_ipipe_device *ipipe = v4l2_get_subdevdata(sd);
@@ -1524,7 +1524,7 @@
if (fmt->which == V4L2_SUBDEV_FORMAT_ACTIVE)
fmt->format = ipipe->formats[fmt->pad];
else
- fmt->format = *(v4l2_subdev_get_try_format(fh, fmt->pad));
+ fmt->format = *(v4l2_subdev_get_try_format(sd, cfg, fmt->pad));
return 0;
}
@@ -1532,11 +1532,11 @@
/*
* ipipe_enum_frame_size() - enum frame sizes on pads
* @sd: pointer to v4l2 subdev structure.
- * @fh: V4L2 subdev file handle.
+ * @cfg: V4L2 subdev pad config
* @fse: pointer to v4l2_subdev_frame_size_enum structure.
*/
static int
-ipipe_enum_frame_size(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+ipipe_enum_frame_size(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_frame_size_enum *fse)
{
struct vpfe_ipipe_device *ipipe = v4l2_get_subdevdata(sd);
@@ -1548,8 +1548,7 @@
format.code = fse->code;
format.width = 1;
format.height = 1;
- ipipe_try_format(ipipe, fh, fse->pad, &format,
- V4L2_SUBDEV_FORMAT_TRY);
+ ipipe_try_format(ipipe, cfg, fse->pad, &format, fse->which);
fse->min_width = format.width;
fse->min_height = format.height;
@@ -1559,8 +1558,7 @@
format.code = fse->code;
format.width = -1;
format.height = -1;
- ipipe_try_format(ipipe, fh, fse->pad, &format,
- V4L2_SUBDEV_FORMAT_TRY);
+ ipipe_try_format(ipipe, cfg, fse->pad, &format, fse->which);
fse->max_width = format.width;
fse->max_height = format.height;
@@ -1570,11 +1568,11 @@
/*
* ipipe_enum_mbus_code() - enum mbus codes for pads
* @sd: pointer to v4l2 subdev structure.
- * @fh: V4L2 subdev file handle
+ * @cfg: V4L2 subdev pad config
* @code: pointer to v4l2_subdev_mbus_code_enum structure
*/
static int
-ipipe_enum_mbus_code(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+ipipe_enum_mbus_code(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_mbus_code_enum *code)
{
switch (code->pad) {
@@ -1630,9 +1628,8 @@
* @sd: pointer to v4l2 subdev structure.
* @fh: V4L2 subdev file handle
*
- * Initialize all pad formats with default values. If fh is not NULL, try
- * formats are initialized on the file handle. Otherwise active formats are
- * initialized on the device.
+ * Initialize all pad formats with default values. Try formats are initialized
+ * on the file handle.
*/
static int
ipipe_init_formats(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh)
@@ -1641,19 +1638,19 @@
memset(&format, 0, sizeof(format));
format.pad = IPIPE_PAD_SINK;
- format.which = fh ? V4L2_SUBDEV_FORMAT_TRY : V4L2_SUBDEV_FORMAT_ACTIVE;
+ format.which = V4L2_SUBDEV_FORMAT_TRY;
format.format.code = MEDIA_BUS_FMT_SGRBG12_1X12;
format.format.width = IPIPE_MAX_OUTPUT_WIDTH_A;
format.format.height = IPIPE_MAX_OUTPUT_HEIGHT_A;
- ipipe_set_format(sd, fh, &format);
+ ipipe_set_format(sd, fh->pad, &format);
memset(&format, 0, sizeof(format));
format.pad = IPIPE_PAD_SOURCE;
- format.which = fh ? V4L2_SUBDEV_FORMAT_TRY : V4L2_SUBDEV_FORMAT_ACTIVE;
+ format.which = V4L2_SUBDEV_FORMAT_TRY;
format.format.code = MEDIA_BUS_FMT_UYVY8_2X8;
format.format.width = IPIPE_MAX_OUTPUT_WIDTH_A;
format.format.height = IPIPE_MAX_OUTPUT_HEIGHT_A;
- ipipe_set_format(sd, fh, &format);
+ ipipe_set_format(sd, fh->pad, &format);
return 0;
}
diff --git a/drivers/staging/media/davinci_vpfe/dm365_ipipeif.c b/drivers/staging/media/davinci_vpfe/dm365_ipipeif.c
index 17e105e..8b23054 100644
--- a/drivers/staging/media/davinci_vpfe/dm365_ipipeif.c
+++ b/drivers/staging/media/davinci_vpfe/dm365_ipipeif.c
@@ -544,12 +544,12 @@
/*
* ipipeif_enum_mbus_code() - Handle pixel format enumeration
* @sd: pointer to v4l2 subdev structure
- * @fh: V4L2 subdev file handle
+ * @cfg: V4L2 subdev pad config
* @code: pointer to v4l2_subdev_mbus_code_enum structure
* return -EINVAL or zero on success
*/
static int ipipeif_enum_mbus_code(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_mbus_code_enum *code)
{
switch (code->pad) {
@@ -577,11 +577,11 @@
/*
* ipipeif_get_format() - Handle get format by pads subdev method
* @sd: pointer to v4l2 subdev structure
- * @fh: V4L2 subdev file handle
+ * @cfg: V4L2 subdev pad config
* @fmt: pointer to v4l2 subdev format structure
*/
static int
-ipipeif_get_format(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+ipipeif_get_format(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct vpfe_ipipeif_device *ipipeif = v4l2_get_subdevdata(sd);
@@ -589,7 +589,7 @@
if (fmt->which == V4L2_SUBDEV_FORMAT_ACTIVE)
fmt->format = ipipeif->formats[fmt->pad];
else
- fmt->format = *(v4l2_subdev_get_try_format(fh, fmt->pad));
+ fmt->format = *(v4l2_subdev_get_try_format(sd, cfg, fmt->pad));
return 0;
}
@@ -600,14 +600,14 @@
/*
* ipipeif_try_format() - Handle try format by pad subdev method
* @ipipeif: VPFE ipipeif device.
- * @fh: V4L2 subdev file handle.
+ * @cfg: V4L2 subdev pad config
* @pad: pad num.
* @fmt: pointer to v4l2 format structure.
* @which : wanted subdev format
*/
static void
ipipeif_try_format(struct vpfe_ipipeif_device *ipipeif,
- struct v4l2_subdev_fh *fh, unsigned int pad,
+ struct v4l2_subdev_pad_config *cfg, unsigned int pad,
struct v4l2_mbus_framefmt *fmt,
enum v4l2_subdev_format_whence which)
{
@@ -641,7 +641,7 @@
}
static int
-ipipeif_enum_frame_size(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+ipipeif_enum_frame_size(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_frame_size_enum *fse)
{
struct vpfe_ipipeif_device *ipipeif = v4l2_get_subdevdata(sd);
@@ -653,8 +653,7 @@
format.code = fse->code;
format.width = 1;
format.height = 1;
- ipipeif_try_format(ipipeif, fh, fse->pad, &format,
- V4L2_SUBDEV_FORMAT_TRY);
+ ipipeif_try_format(ipipeif, cfg, fse->pad, &format, fse->which);
fse->min_width = format.width;
fse->min_height = format.height;
@@ -664,8 +663,7 @@
format.code = fse->code;
format.width = -1;
format.height = -1;
- ipipeif_try_format(ipipeif, fh, fse->pad, &format,
- V4L2_SUBDEV_FORMAT_TRY);
+ ipipeif_try_format(ipipeif, cfg, fse->pad, &format, fse->which);
fse->max_width = format.width;
fse->max_height = format.height;
@@ -675,18 +673,18 @@
/*
* __ipipeif_get_format() - helper function for getting ipipeif format
* @ipipeif: pointer to ipipeif private structure.
+ * @cfg: V4L2 subdev pad config
* @pad: pad number.
- * @fh: V4L2 subdev file handle.
* @which: wanted subdev format.
*
*/
static struct v4l2_mbus_framefmt *
__ipipeif_get_format(struct vpfe_ipipeif_device *ipipeif,
- struct v4l2_subdev_fh *fh, unsigned int pad,
+ struct v4l2_subdev_pad_config *cfg, unsigned int pad,
enum v4l2_subdev_format_whence which)
{
if (which == V4L2_SUBDEV_FORMAT_TRY)
- return v4l2_subdev_get_try_format(fh, pad);
+ return v4l2_subdev_get_try_format(&ipipeif->subdev, cfg, pad);
return &ipipeif->formats[pad];
}
@@ -694,22 +692,22 @@
/*
* ipipeif_set_format() - Handle set format by pads subdev method
* @sd: pointer to v4l2 subdev structure
- * @fh: V4L2 subdev file handle
+ * @cfg: V4L2 subdev pad config
* @fmt: pointer to v4l2 subdev format structure
* return -EINVAL or zero on success
*/
static int
-ipipeif_set_format(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+ipipeif_set_format(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct vpfe_ipipeif_device *ipipeif = v4l2_get_subdevdata(sd);
struct v4l2_mbus_framefmt *format;
- format = __ipipeif_get_format(ipipeif, fh, fmt->pad, fmt->which);
+ format = __ipipeif_get_format(ipipeif, cfg, fmt->pad, fmt->which);
if (format == NULL)
return -EINVAL;
- ipipeif_try_format(ipipeif, fh, fmt->pad, &fmt->format, fmt->which);
+ ipipeif_try_format(ipipeif, cfg, fmt->pad, &fmt->format, fmt->which);
*format = fmt->format;
if (fmt->which == V4L2_SUBDEV_FORMAT_TRY)
@@ -756,9 +754,8 @@
* @sd: VPFE ipipeif V4L2 subdevice
* @fh: V4L2 subdev file handle
*
- * Initialize all pad formats with default values. If fh is not NULL, try
- * formats are initialized on the file handle. Otherwise active formats are
- * initialized on the device.
+ * Initialize all pad formats with default values. Try formats are initialized
+ * on the file handle.
*/
static int
ipipeif_init_formats(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh)
@@ -768,19 +765,19 @@
memset(&format, 0, sizeof(format));
format.pad = IPIPEIF_PAD_SINK;
- format.which = fh ? V4L2_SUBDEV_FORMAT_TRY : V4L2_SUBDEV_FORMAT_ACTIVE;
+ format.which = V4L2_SUBDEV_FORMAT_TRY;
format.format.code = MEDIA_BUS_FMT_SGRBG12_1X12;
format.format.width = IPIPE_MAX_OUTPUT_WIDTH_A;
format.format.height = IPIPE_MAX_OUTPUT_HEIGHT_A;
- ipipeif_set_format(sd, fh, &format);
+ ipipeif_set_format(sd, fh->pad, &format);
memset(&format, 0, sizeof(format));
format.pad = IPIPEIF_PAD_SOURCE;
- format.which = fh ? V4L2_SUBDEV_FORMAT_TRY : V4L2_SUBDEV_FORMAT_ACTIVE;
+ format.which = V4L2_SUBDEV_FORMAT_TRY;
format.format.code = MEDIA_BUS_FMT_UYVY8_2X8;
format.format.width = IPIPE_MAX_OUTPUT_WIDTH_A;
format.format.height = IPIPE_MAX_OUTPUT_HEIGHT_A;
- ipipeif_set_format(sd, fh, &format);
+ ipipeif_set_format(sd, fh->pad, &format);
ipipeif_set_default_config(ipipeif);
diff --git a/drivers/staging/media/davinci_vpfe/dm365_isif.c b/drivers/staging/media/davinci_vpfe/dm365_isif.c
index bcf762b..80907b4 100644
--- a/drivers/staging/media/davinci_vpfe/dm365_isif.c
+++ b/drivers/staging/media/davinci_vpfe/dm365_isif.c
@@ -278,11 +278,11 @@
/*
* isif_try_format() - Try video format on a pad
* @isif: VPFE isif device
- * @fh: V4L2 subdev file handle
+ * @cfg: V4L2 subdev pad config
* @fmt: pointer to v4l2 subdev format structure
*/
static void
-isif_try_format(struct vpfe_isif_device *isif, struct v4l2_subdev_fh *fh,
+isif_try_format(struct vpfe_isif_device *isif, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
unsigned int width = fmt->format.width;
@@ -1394,11 +1394,11 @@
* __isif_get_format() - helper function for getting isif format
* @isif: pointer to isif private structure.
* @pad: pad number.
- * @fh: V4L2 subdev file handle.
+ * @cfg: V4L2 subdev pad config
* @which: wanted subdev format.
*/
static struct v4l2_mbus_framefmt *
-__isif_get_format(struct vpfe_isif_device *isif, struct v4l2_subdev_fh *fh,
+__isif_get_format(struct vpfe_isif_device *isif, struct v4l2_subdev_pad_config *cfg,
unsigned int pad, enum v4l2_subdev_format_whence which)
{
if (which == V4L2_SUBDEV_FORMAT_TRY) {
@@ -1407,32 +1407,32 @@
fmt.pad = pad;
fmt.which = which;
- return v4l2_subdev_get_try_format(fh, pad);
+ return v4l2_subdev_get_try_format(&isif->subdev, cfg, pad);
}
return &isif->formats[pad];
}
/*
-* isif_set_format() - set format on pad
-* @sd : VPFE ISIF device
-* @fh : V4L2 subdev file handle
-* @fmt : pointer to v4l2 subdev format structure
-*
-* Return 0 on success or -EINVAL if format or pad is invalid
-*/
+ * isif_set_format() - set format on pad
+ * @sd : VPFE ISIF device
+ * @cfg : V4L2 subdev pad config
+ * @fmt : pointer to v4l2 subdev format structure
+ *
+ * Return 0 on success or -EINVAL if format or pad is invalid
+ */
static int
-isif_set_format(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+isif_set_format(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct vpfe_isif_device *isif = v4l2_get_subdevdata(sd);
struct vpfe_device *vpfe_dev = to_vpfe_device(isif);
struct v4l2_mbus_framefmt *format;
- format = __isif_get_format(isif, fh, fmt->pad, fmt->which);
+ format = __isif_get_format(isif, cfg, fmt->pad, fmt->which);
if (format == NULL)
return -EINVAL;
- isif_try_format(isif, fh, fmt);
+ isif_try_format(isif, cfg, fmt);
memcpy(format, &fmt->format, sizeof(*format));
if (fmt->which == V4L2_SUBDEV_FORMAT_TRY)
@@ -1447,20 +1447,20 @@
/*
* isif_get_format() - Retrieve the video format on a pad
* @sd: VPFE ISIF V4L2 subdevice
- * @fh: V4L2 subdev file handle
+ * @cfg: V4L2 subdev pad config
* @fmt: pointer to v4l2 subdev format structure
*
* Return 0 on success or -EINVAL if the pad is invalid or doesn't correspond
* to the format type.
*/
static int
-isif_get_format(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+isif_get_format(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct vpfe_isif_device *vpfe_isif = v4l2_get_subdevdata(sd);
struct v4l2_mbus_framefmt *format;
- format = __isif_get_format(vpfe_isif, fh, fmt->pad, fmt->which);
+ format = __isif_get_format(vpfe_isif, cfg, fmt->pad, fmt->which);
if (format == NULL)
return -EINVAL;
@@ -1472,11 +1472,11 @@
/*
* isif_enum_frame_size() - enum frame sizes on pads
* @sd: VPFE isif V4L2 subdevice
- * @fh: V4L2 subdev file handle
+ * @cfg: V4L2 subdev pad config
* @code: pointer to v4l2_subdev_frame_size_enum structure
*/
static int
-isif_enum_frame_size(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+isif_enum_frame_size(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_frame_size_enum *fse)
{
struct vpfe_isif_device *isif = v4l2_get_subdevdata(sd);
@@ -1489,8 +1489,8 @@
format.format.code = fse->code;
format.format.width = 1;
format.format.height = 1;
- format.which = V4L2_SUBDEV_FORMAT_TRY;
- isif_try_format(isif, fh, &format);
+ format.which = fse->which;
+ isif_try_format(isif, cfg, &format);
fse->min_width = format.format.width;
fse->min_height = format.format.height;
@@ -1501,8 +1501,8 @@
format.format.code = fse->code;
format.format.width = -1;
format.format.height = -1;
- format.which = V4L2_SUBDEV_FORMAT_TRY;
- isif_try_format(isif, fh, &format);
+ format.which = fse->which;
+ isif_try_format(isif, cfg, &format);
fse->max_width = format.format.width;
fse->max_height = format.format.height;
@@ -1512,11 +1512,11 @@
/*
* isif_enum_mbus_code() - enum mbus codes for pads
* @sd: VPFE isif V4L2 subdevice
- * @fh: V4L2 subdev file handle
+ * @cfg: V4L2 subdev pad config
* @code: pointer to v4l2_subdev_mbus_code_enum structure
*/
static int
-isif_enum_mbus_code(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+isif_enum_mbus_code(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_mbus_code_enum *code)
{
switch (code->pad) {
@@ -1537,14 +1537,14 @@
/*
* isif_pad_set_selection() - set crop rectangle on pad
* @sd: VPFE isif V4L2 subdevice
- * @fh: V4L2 subdev file handle
+ * @cfg: V4L2 subdev pad config
* @code: pointer to v4l2_subdev_mbus_code_enum structure
*
* Return 0 on success, -EINVAL if pad is invalid
*/
static int
isif_pad_set_selection(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_selection *sel)
{
struct vpfe_isif_device *vpfe_isif = v4l2_get_subdevdata(sd);
@@ -1554,7 +1554,7 @@
if (sel->pad != ISIF_PAD_SINK || sel->target != V4L2_SEL_TGT_CROP)
return -EINVAL;
- format = __isif_get_format(vpfe_isif, fh, sel->pad, sel->which);
+ format = __isif_get_format(vpfe_isif, cfg, sel->pad, sel->which);
if (format == NULL)
return -EINVAL;
@@ -1577,7 +1577,7 @@
} else {
struct v4l2_rect *rect;
- rect = v4l2_subdev_get_try_crop(fh, ISIF_PAD_SINK);
+ rect = v4l2_subdev_get_try_crop(sd, cfg, ISIF_PAD_SINK);
memcpy(rect, &vpfe_isif->crop, sizeof(*rect));
}
return 0;
@@ -1586,14 +1586,14 @@
/*
* isif_pad_get_selection() - get crop rectangle on pad
* @sd: VPFE isif V4L2 subdevice
- * @fh: V4L2 subdev file handle
+ * @cfg: V4L2 subdev pad config
* @code: pointer to v4l2_subdev_mbus_code_enum structure
*
* Return 0 on success, -EINVAL if pad is invalid
*/
static int
isif_pad_get_selection(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_selection *sel)
{
struct vpfe_isif_device *vpfe_isif = v4l2_get_subdevdata(sd);
@@ -1605,7 +1605,7 @@
if (sel->which == V4L2_SUBDEV_FORMAT_TRY) {
struct v4l2_rect *rect;
- rect = v4l2_subdev_get_try_crop(fh, ISIF_PAD_SINK);
+ rect = v4l2_subdev_get_try_crop(sd, cfg, ISIF_PAD_SINK);
memcpy(&sel->r, rect, sizeof(*rect));
} else {
sel->r = vpfe_isif->crop;
@@ -1619,9 +1619,8 @@
* @sd: VPFE isif V4L2 subdevice
* @fh: V4L2 subdev file handle
*
- * Initialize all pad formats with default values. If fh is not NULL, try
- * formats are initialized on the file handle. Otherwise active formats are
- * initialized on the device.
+ * Initialize all pad formats with default values. Try formats are initialized
+ * on the file handle.
*/
static int
isif_init_formats(struct v4l2_subdev *sd,
@@ -1632,27 +1631,27 @@
memset(&format, 0, sizeof(format));
format.pad = ISIF_PAD_SINK;
- format.which = fh ? V4L2_SUBDEV_FORMAT_TRY : V4L2_SUBDEV_FORMAT_ACTIVE;
+ format.which = V4L2_SUBDEV_FORMAT_TRY;
format.format.code = MEDIA_BUS_FMT_SGRBG12_1X12;
format.format.width = MAX_WIDTH;
format.format.height = MAX_HEIGHT;
- isif_set_format(sd, fh, &format);
+ isif_set_format(sd, fh->pad, &format);
memset(&format, 0, sizeof(format));
format.pad = ISIF_PAD_SOURCE;
- format.which = fh ? V4L2_SUBDEV_FORMAT_TRY : V4L2_SUBDEV_FORMAT_ACTIVE;
+ format.which = V4L2_SUBDEV_FORMAT_TRY;
format.format.code = MEDIA_BUS_FMT_SGRBG12_1X12;
format.format.width = MAX_WIDTH;
format.format.height = MAX_HEIGHT;
- isif_set_format(sd, fh, &format);
+ isif_set_format(sd, fh->pad, &format);
memset(&sel, 0, sizeof(sel));
sel.pad = ISIF_PAD_SINK;
- sel.which = fh ? V4L2_SUBDEV_FORMAT_TRY : V4L2_SUBDEV_FORMAT_ACTIVE;
+ sel.which = V4L2_SUBDEV_FORMAT_TRY;
sel.target = V4L2_SEL_TGT_CROP;
sel.r.width = MAX_WIDTH;
sel.r.height = MAX_HEIGHT;
- isif_pad_set_selection(sd, fh, &sel);
+ isif_pad_set_selection(sd, fh->pad, &sel);
return 0;
}
diff --git a/drivers/staging/media/davinci_vpfe/dm365_resizer.c b/drivers/staging/media/davinci_vpfe/dm365_resizer.c
index 7cc8d1b..b649813 100644
--- a/drivers/staging/media/davinci_vpfe/dm365_resizer.c
+++ b/drivers/staging/media/davinci_vpfe/dm365_resizer.c
@@ -1288,19 +1288,19 @@
/*
* __resizer_get_format() - helper function for getting resizer format
* @sd: pointer to subdev.
- * @fh: V4L2 subdev file handle.
+ * @cfg: V4L2 subdev pad config
* @pad: pad number.
* @which: wanted subdev format.
 * Return wanted mbus frame format.
*/
static struct v4l2_mbus_framefmt *
-__resizer_get_format(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+__resizer_get_format(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg,
unsigned int pad, enum v4l2_subdev_format_whence which)
{
struct vpfe_resizer_device *resizer = v4l2_get_subdevdata(sd);
if (which == V4L2_SUBDEV_FORMAT_TRY)
- return v4l2_subdev_get_try_format(fh, pad);
+ return v4l2_subdev_get_try_format(sd, cfg, pad);
if (&resizer->crop_resizer.subdev == sd)
return &resizer->crop_resizer.formats[pad];
if (&resizer->resizer_a.subdev == sd)
@@ -1313,13 +1313,13 @@
/*
* resizer_try_format() - Handle try format by pad subdev method
* @sd: pointer to subdev.
- * @fh: V4L2 subdev file handle.
+ * @cfg: V4L2 subdev pad config
* @pad: pad num.
* @fmt: pointer to v4l2 format structure.
* @which: wanted subdev format.
*/
static void
-resizer_try_format(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+resizer_try_format(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg,
unsigned int pad, struct v4l2_mbus_framefmt *fmt,
enum v4l2_subdev_format_whence which)
{
@@ -1387,21 +1387,21 @@
/*
* resizer_set_format() - Handle set format by pads subdev method
* @sd: pointer to v4l2 subdev structure
- * @fh: V4L2 subdev file handle
+ * @cfg: V4L2 subdev pad config
* @fmt: pointer to v4l2 subdev format structure
* return -EINVAL or zero on success
*/
-static int resizer_set_format(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+static int resizer_set_format(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct vpfe_resizer_device *resizer = v4l2_get_subdevdata(sd);
struct v4l2_mbus_framefmt *format;
- format = __resizer_get_format(sd, fh, fmt->pad, fmt->which);
+ format = __resizer_get_format(sd, cfg, fmt->pad, fmt->which);
if (format == NULL)
return -EINVAL;
- resizer_try_format(sd, fh, fmt->pad, &fmt->format, fmt->which);
+ resizer_try_format(sd, cfg, fmt->pad, &fmt->format, fmt->which);
*format = fmt->format;
if (fmt->which == V4L2_SUBDEV_FORMAT_TRY)
@@ -1447,16 +1447,16 @@
/*
* resizer_get_format() - Retrieve the video format on a pad
* @sd: pointer to v4l2 subdev structure.
- * @fh: V4L2 subdev file handle.
+ * @cfg: V4L2 subdev pad config
* @fmt: pointer to v4l2 subdev format structure
* return -EINVAL or zero on success
*/
-static int resizer_get_format(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+static int resizer_get_format(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct v4l2_mbus_framefmt *format;
- format = __resizer_get_format(sd, fh, fmt->pad, fmt->which);
+ format = __resizer_get_format(sd, cfg, fmt->pad, fmt->which);
if (format == NULL)
return -EINVAL;
@@ -1468,11 +1468,11 @@
/*
* resizer_enum_frame_size() - enum frame sizes on pads
* @sd: Pointer to subdevice.
- * @fh: V4L2 subdev file handle.
+ * @cfg: V4L2 subdev pad config
* @code: pointer to v4l2_subdev_frame_size_enum structure.
*/
static int resizer_enum_frame_size(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_frame_size_enum *fse)
{
struct v4l2_mbus_framefmt format;
@@ -1483,8 +1483,7 @@
format.code = fse->code;
format.width = 1;
format.height = 1;
- resizer_try_format(sd, fh, fse->pad, &format,
- V4L2_SUBDEV_FORMAT_TRY);
+ resizer_try_format(sd, cfg, fse->pad, &format, fse->which);
fse->min_width = format.width;
fse->min_height = format.height;
@@ -1494,8 +1493,7 @@
format.code = fse->code;
format.width = -1;
format.height = -1;
- resizer_try_format(sd, fh, fse->pad, &format,
- V4L2_SUBDEV_FORMAT_TRY);
+ resizer_try_format(sd, cfg, fse->pad, &format, fse->which);
fse->max_width = format.width;
fse->max_height = format.height;
@@ -1505,11 +1503,11 @@
/*
* resizer_enum_mbus_code() - enum mbus codes for pads
* @sd: Pointer to subdevice.
- * @fh: V4L2 subdev file handle
+ * @cfg: V4L2 subdev pad config
* @code: pointer to v4l2_subdev_mbus_code_enum structure
*/
static int resizer_enum_mbus_code(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_mbus_code_enum *code)
{
if (code->pad == RESIZER_PAD_SINK) {
@@ -1532,14 +1530,13 @@
* @sd: Pointer to subdevice.
* @fh: V4L2 subdev file handle.
*
- * Initialize all pad formats with default values. If fh is not NULL, try
- * formats are initialized on the file handle. Otherwise active formats are
- * initialized on the device.
+ * Initialize all pad formats with default values. Try formats are
+ * initialized on the file handle.
*/
static int resizer_init_formats(struct v4l2_subdev *sd,
struct v4l2_subdev_fh *fh)
{
- __u32 which = fh ? V4L2_SUBDEV_FORMAT_TRY : V4L2_SUBDEV_FORMAT_ACTIVE;
+ __u32 which = V4L2_SUBDEV_FORMAT_TRY;
struct vpfe_resizer_device *resizer = v4l2_get_subdevdata(sd);
struct v4l2_subdev_format format;
@@ -1550,7 +1547,7 @@
format.format.code = MEDIA_BUS_FMT_YUYV8_2X8;
format.format.width = MAX_IN_WIDTH;
format.format.height = MAX_IN_HEIGHT;
- resizer_set_format(sd, fh, &format);
+ resizer_set_format(sd, fh->pad, &format);
memset(&format, 0, sizeof(format));
format.pad = RESIZER_CROP_PAD_SOURCE;
@@ -1558,7 +1555,7 @@
format.format.code = MEDIA_BUS_FMT_UYVY8_2X8;
format.format.width = MAX_IN_WIDTH;
format.format.height = MAX_IN_WIDTH;
- resizer_set_format(sd, fh, &format);
+ resizer_set_format(sd, fh->pad, &format);
memset(&format, 0, sizeof(format));
format.pad = RESIZER_CROP_PAD_SOURCE2;
@@ -1566,7 +1563,7 @@
format.format.code = MEDIA_BUS_FMT_UYVY8_2X8;
format.format.width = MAX_IN_WIDTH;
format.format.height = MAX_IN_WIDTH;
- resizer_set_format(sd, fh, &format);
+ resizer_set_format(sd, fh->pad, &format);
} else if (&resizer->resizer_a.subdev == sd) {
memset(&format, 0, sizeof(format));
format.pad = RESIZER_PAD_SINK;
@@ -1574,7 +1571,7 @@
format.format.code = MEDIA_BUS_FMT_YUYV8_2X8;
format.format.width = MAX_IN_WIDTH;
format.format.height = MAX_IN_HEIGHT;
- resizer_set_format(sd, fh, &format);
+ resizer_set_format(sd, fh->pad, &format);
memset(&format, 0, sizeof(format));
format.pad = RESIZER_PAD_SOURCE;
@@ -1582,7 +1579,7 @@
format.format.code = MEDIA_BUS_FMT_UYVY8_2X8;
format.format.width = IPIPE_MAX_OUTPUT_WIDTH_A;
format.format.height = IPIPE_MAX_OUTPUT_HEIGHT_A;
- resizer_set_format(sd, fh, &format);
+ resizer_set_format(sd, fh->pad, &format);
} else if (&resizer->resizer_b.subdev == sd) {
memset(&format, 0, sizeof(format));
format.pad = RESIZER_PAD_SINK;
@@ -1590,7 +1587,7 @@
format.format.code = MEDIA_BUS_FMT_YUYV8_2X8;
format.format.width = MAX_IN_WIDTH;
format.format.height = MAX_IN_HEIGHT;
- resizer_set_format(sd, fh, &format);
+ resizer_set_format(sd, fh->pad, &format);
memset(&format, 0, sizeof(format));
format.pad = RESIZER_PAD_SOURCE;
@@ -1598,7 +1595,7 @@
format.format.code = MEDIA_BUS_FMT_UYVY8_2X8;
format.format.width = IPIPE_MAX_OUTPUT_WIDTH_B;
format.format.height = IPIPE_MAX_OUTPUT_HEIGHT_B;
- resizer_set_format(sd, fh, &format);
+ resizer_set_format(sd, fh->pad, &format);
}
return 0;
diff --git a/drivers/staging/media/dt3155v4l/dt3155v4l.c b/drivers/staging/media/dt3155v4l/dt3155v4l.c
index 293ffda..52a8ffe 100644
--- a/drivers/staging/media/dt3155v4l/dt3155v4l.c
+++ b/drivers/staging/media/dt3155v4l/dt3155v4l.c
@@ -244,7 +244,7 @@
{
struct dt3155_priv *pd = vb2_get_drv_priv(q);
- mutex_unlock(pd->vdev->lock);
+ mutex_unlock(pd->vdev.lock);
}
static void
@@ -252,7 +252,7 @@
{
struct dt3155_priv *pd = vb2_get_drv_priv(q);
- mutex_lock(pd->vdev->lock);
+ mutex_lock(pd->vdev.lock);
}
static int
@@ -824,7 +824,7 @@
.fops = &dt3155_fops,
.ioctl_ops = &dt3155_ioctl_ops,
.minor = -1,
- .release = video_device_release,
+ .release = video_device_release_empty,
.tvnorms = DT3155_CURRENT_NORM,
};
@@ -901,28 +901,24 @@
err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
if (err)
return -ENODEV;
- pd = kzalloc(sizeof(*pd), GFP_KERNEL);
+ pd = devm_kzalloc(&pdev->dev, sizeof(*pd), GFP_KERNEL);
if (!pd)
return -ENOMEM;
- pd->vdev = video_device_alloc();
- if (!pd->vdev) {
- err = -ENOMEM;
- goto err_video_device_alloc;
- }
- *pd->vdev = dt3155_vdev;
+
+ pd->vdev = dt3155_vdev;
pci_set_drvdata(pdev, pd); /* for use in dt3155_remove() */
- video_set_drvdata(pd->vdev, pd); /* for use in video_fops */
+ video_set_drvdata(&pd->vdev, pd); /* for use in video_fops */
pd->users = 0;
pd->pdev = pdev;
INIT_LIST_HEAD(&pd->dmaq);
mutex_init(&pd->mux);
- pd->vdev->lock = &pd->mux; /* for locking v4l2_file_operations */
+ pd->vdev.lock = &pd->mux; /* for locking v4l2_file_operations */
spin_lock_init(&pd->lock);
pd->csr2 = csr2_init;
pd->config = config_init;
err = pci_enable_device(pdev);
if (err)
- goto err_enable_dev;
+ return err;
err = pci_request_region(pdev, 0, pci_name(pdev));
if (err)
goto err_req_region;
@@ -934,13 +930,13 @@
err = dt3155_init_board(pdev);
if (err)
goto err_init_board;
- err = video_register_device(pd->vdev, VFL_TYPE_GRABBER, -1);
+ err = video_register_device(&pd->vdev, VFL_TYPE_GRABBER, -1);
if (err)
goto err_init_board;
if (dt3155_alloc_coherent(&pdev->dev, DT3155_CHUNK_SIZE,
DMA_MEMORY_MAP))
dev_info(&pdev->dev, "preallocated 8 buffers\n");
- dev_info(&pdev->dev, "/dev/video%i is ready\n", pd->vdev->minor);
+ dev_info(&pdev->dev, "/dev/video%i is ready\n", pd->vdev.minor);
return 0; /* success */
err_init_board:
@@ -949,10 +945,6 @@
pci_release_region(pdev, 0);
err_req_region:
pci_disable_device(pdev);
-err_enable_dev:
- video_device_release(pd->vdev);
-err_video_device_alloc:
- kfree(pd);
return err;
}
@@ -962,15 +954,10 @@
struct dt3155_priv *pd = pci_get_drvdata(pdev);
dt3155_free_coherent(&pdev->dev);
- video_unregister_device(pd->vdev);
+ video_unregister_device(&pd->vdev);
pci_iounmap(pdev, pd->regs);
pci_release_region(pdev, 0);
pci_disable_device(pdev);
- /*
- * video_device_release() is invoked automatically
- * see: struct video_device dt3155_vdev
- */
- kfree(pd);
}
static const struct pci_device_id pci_ids[] = {
diff --git a/drivers/staging/media/dt3155v4l/dt3155v4l.h b/drivers/staging/media/dt3155v4l/dt3155v4l.h
index 2e4f89d..96f01a0 100644
--- a/drivers/staging/media/dt3155v4l/dt3155v4l.h
+++ b/drivers/staging/media/dt3155v4l/dt3155v4l.h
@@ -178,7 +178,7 @@
/**
* struct dt3155_priv - private data structure
*
- * @vdev: pointer to video_device structure
+ * @vdev: video_device structure
* @pdev: pointer to pci_dev structure
* @q pointer to vb2_queue structure
 * @curr_buf: pointer to current buffer
@@ -193,7 +193,7 @@
* @config: local copy of config register
*/
struct dt3155_priv {
- struct video_device *vdev;
+ struct video_device vdev;
struct pci_dev *pdev;
struct vb2_queue *q;
struct vb2_buffer *curr_buf;
diff --git a/drivers/staging/media/mn88472/mn88472.c b/drivers/staging/media/mn88472/mn88472.c
index 2a68582..a4cfcf5 100644
--- a/drivers/staging/media/mn88472/mn88472.c
+++ b/drivers/staging/media/mn88472/mn88472.c
@@ -19,7 +19,7 @@
static int mn88472_get_tune_settings(struct dvb_frontend *fe,
struct dvb_frontend_tune_settings *s)
{
- s->min_delay_ms = 400;
+ s->min_delay_ms = 800;
return 0;
}
@@ -178,8 +178,32 @@
ret = regmap_write(dev->regmap[0], 0x46, 0x00);
ret = regmap_write(dev->regmap[0], 0xae, 0x00);
- ret = regmap_write(dev->regmap[2], 0x08, 0x1d);
- ret = regmap_write(dev->regmap[0], 0xd9, 0xe3);
+
+ switch (dev->ts_mode) {
+ case SERIAL_TS_MODE:
+ ret = regmap_write(dev->regmap[2], 0x08, 0x1d);
+ break;
+ case PARALLEL_TS_MODE:
+ ret = regmap_write(dev->regmap[2], 0x08, 0x00);
+ break;
+ default:
+ dev_dbg(&client->dev, "ts_mode error: %d\n", dev->ts_mode);
+ ret = -EINVAL;
+ goto err;
+ }
+
+ switch (dev->ts_clock) {
+ case VARIABLE_TS_CLOCK:
+ ret = regmap_write(dev->regmap[0], 0xd9, 0xe3);
+ break;
+ case FIXED_TS_CLOCK:
+ ret = regmap_write(dev->regmap[0], 0xd9, 0xe1);
+ break;
+ default:
+ dev_dbg(&client->dev, "ts_clock error: %d\n", dev->ts_clock);
+ ret = -EINVAL;
+ goto err;
+ }
/* Reset demod */
ret = regmap_write(dev->regmap[2], 0xf8, 0x9f);
@@ -201,6 +225,7 @@
struct dtv_frontend_properties *c = &fe->dtv_property_cache;
int ret;
unsigned int utmp;
+ int lock = 0;
*status = 0;
@@ -211,21 +236,36 @@
switch (c->delivery_system) {
case SYS_DVBT:
+ ret = regmap_read(dev->regmap[0], 0x7F, &utmp);
+ if (ret)
+ goto err;
+ if ((utmp & 0xF) >= 0x09)
+ lock = 1;
+ break;
case SYS_DVBT2:
- /* FIXME: implement me */
- utmp = 0x08; /* DVB-C lock value */
+ ret = regmap_read(dev->regmap[2], 0x92, &utmp);
+ if (ret)
+ goto err;
+ if ((utmp & 0xF) >= 0x07)
+ *status |= FE_HAS_SIGNAL;
+ if ((utmp & 0xF) >= 0x0a)
+ *status |= FE_HAS_CARRIER;
+ if ((utmp & 0xF) >= 0x0d)
+ *status |= FE_HAS_VITERBI | FE_HAS_SYNC | FE_HAS_LOCK;
break;
case SYS_DVBC_ANNEX_A:
ret = regmap_read(dev->regmap[1], 0x84, &utmp);
if (ret)
goto err;
+ if ((utmp & 0xF) >= 0x08)
+ lock = 1;
break;
default:
ret = -EINVAL;
goto err;
}
- if (utmp == 0x08)
+ if (lock)
*status = FE_HAS_SIGNAL | FE_HAS_CARRIER | FE_HAS_VITERBI |
FE_HAS_SYNC | FE_HAS_LOCK;
@@ -242,6 +282,7 @@
int ret, len, remaining;
const struct firmware *fw = NULL;
u8 *fw_file = MN88472_FIRMWARE;
+ unsigned int tmp;
dev_dbg(&client->dev, "\n");
@@ -257,6 +298,17 @@
if (ret)
goto err;
+ /* check if firmware is already running */
+ ret = regmap_read(dev->regmap[0], 0xf5, &tmp);
+ if (ret)
+ goto err;
+
+ if (!(tmp & 0x1)) {
+ dev_info(&client->dev, "firmware already running\n");
+ dev->warm = true;
+ return 0;
+ }
+
/* request the firmware, this will block and timeout */
ret = request_firmware(&fw, fw_file, &client->dev);
if (ret) {
@@ -270,7 +322,7 @@
ret = regmap_write(dev->regmap[0], 0xf5, 0x03);
if (ret)
- goto err;
+ goto firmware_release;
for (remaining = fw->size; remaining > 0;
remaining -= (dev->i2c_wr_max - 1)) {
@@ -283,13 +335,27 @@
if (ret) {
dev_err(&client->dev,
"firmware download failed=%d\n", ret);
- goto err;
+ goto firmware_release;
}
}
+ /* parity check of firmware */
+ ret = regmap_read(dev->regmap[0], 0xf8, &tmp);
+ if (ret) {
+ dev_err(&client->dev,
+ "parity reg read failed=%d\n", ret);
+ goto err;
+ }
+ if (tmp & 0x10) {
+ dev_err(&client->dev,
+ "firmware parity check failed=0x%x\n", tmp);
+ goto err;
+ }
+ dev_err(&client->dev, "firmware parity check succeeded=0x%x\n", tmp);
+
ret = regmap_write(dev->regmap[0], 0xf5, 0x00);
if (ret)
- goto err;
+ goto firmware_release;
release_firmware(fw);
fw = NULL;
@@ -298,10 +364,9 @@
dev->warm = true;
return 0;
+firmware_release:
+ release_firmware(fw);
err:
- if (fw)
- release_firmware(fw);
-
dev_dbg(&client->dev, "failed=%d\n", ret);
return ret;
}
@@ -336,6 +401,8 @@
.delsys = {SYS_DVBT, SYS_DVBT2, SYS_DVBC_ANNEX_A},
.info = {
.name = "Panasonic MN88472",
+ .symbol_rate_min = 1000000,
+ .symbol_rate_max = 7200000,
.caps = FE_CAN_FEC_1_2 |
FE_CAN_FEC_2_3 |
FE_CAN_FEC_3_4 |
@@ -396,6 +463,8 @@
dev->i2c_wr_max = config->i2c_wr_max;
dev->xtal = config->xtal;
+ dev->ts_mode = config->ts_mode;
+ dev->ts_clock = config->ts_clock;
dev->client[0] = client;
dev->regmap[0] = regmap_init_i2c(dev->client[0], &regmap_config);
if (IS_ERR(dev->regmap[0])) {
diff --git a/drivers/staging/media/mn88472/mn88472_priv.h b/drivers/staging/media/mn88472/mn88472_priv.h
index b12b731..9ba8c8b 100644
--- a/drivers/staging/media/mn88472/mn88472_priv.h
+++ b/drivers/staging/media/mn88472/mn88472_priv.h
@@ -32,6 +32,8 @@
fe_delivery_system_t delivery_system;
bool warm; /* FW running */
u32 xtal;
+ int ts_mode;
+ int ts_clock;
};
#endif
diff --git a/drivers/staging/media/mn88473/mn88473.c b/drivers/staging/media/mn88473/mn88473.c
index 5baeb03..8b6736c 100644
--- a/drivers/staging/media/mn88473/mn88473.c
+++ b/drivers/staging/media/mn88473/mn88473.c
@@ -30,6 +30,7 @@
struct dtv_frontend_properties *c = &fe->dtv_property_cache;
int ret, i;
u32 if_frequency;
+ u64 tmp;
u8 delivery_system_val, if_val[3], bw_val[7];
dev_dbg(&client->dev,
@@ -62,32 +63,13 @@
goto err;
}
- switch (c->delivery_system) {
- case SYS_DVBT:
- case SYS_DVBT2:
- if (c->bandwidth_hz <= 6000000) {
- /* IF 3570000 Hz, BW 6000000 Hz */
- memcpy(if_val, "\x24\x8e\x8a", 3);
- memcpy(bw_val, "\xe9\x55\x55\x1c\x29\x1c\x29", 7);
- } else if (c->bandwidth_hz <= 7000000) {
- /* IF 4570000 Hz, BW 7000000 Hz */
- memcpy(if_val, "\x2e\xcb\xfb", 3);
- memcpy(bw_val, "\xc8\x00\x00\x17\x0a\x17\x0a", 7);
- } else if (c->bandwidth_hz <= 8000000) {
- /* IF 4570000 Hz, BW 8000000 Hz */
- memcpy(if_val, "\x2e\xcb\xfb", 3);
- memcpy(bw_val, "\xaf\x00\x00\x11\xec\x11\xec", 7);
- } else {
- ret = -EINVAL;
- goto err;
- }
- break;
- case SYS_DVBC_ANNEX_A:
- /* IF 5070000 Hz, BW 8000000 Hz */
- memcpy(if_val, "\x33\xea\xb3", 3);
+ if (c->bandwidth_hz <= 6000000) {
+ memcpy(bw_val, "\xe9\x55\x55\x1c\x29\x1c\x29", 7);
+ } else if (c->bandwidth_hz <= 7000000) {
+ memcpy(bw_val, "\xc8\x00\x00\x17\x0a\x17\x0a", 7);
+ } else if (c->bandwidth_hz <= 8000000) {
memcpy(bw_val, "\xaf\x00\x00\x11\xec\x11\xec", 7);
- break;
- default:
+ } else {
ret = -EINVAL;
goto err;
}
@@ -109,17 +91,12 @@
if_frequency = 0;
}
- switch (if_frequency) {
- case 3570000:
- case 4570000:
- case 5070000:
- break;
- default:
- dev_err(&client->dev, "IF frequency %d not supported\n",
- if_frequency);
- ret = -EINVAL;
- goto err;
- }
+ /* Calculate IF registers ( (1<<24)*IF / Xtal ) */
+ tmp = div_u64(if_frequency * (u64)(1<<24) + (dev->xtal / 2),
+ dev->xtal);
+ if_val[0] = ((tmp >> 16) & 0xff);
+ if_val[1] = ((tmp >> 8) & 0xff);
+ if_val[2] = ((tmp >> 0) & 0xff);
ret = regmap_write(dev->regmap[2], 0x05, 0x00);
ret = regmap_write(dev->regmap[2], 0xfb, 0x13);
@@ -194,7 +171,10 @@
{
struct i2c_client *client = fe->demodulator_priv;
struct mn88473_dev *dev = i2c_get_clientdata(client);
+ struct dtv_frontend_properties *c = &fe->dtv_property_cache;
int ret;
+ unsigned int utmp;
+ int lock = 0;
*status = 0;
@@ -203,8 +183,51 @@
goto err;
}
- *status = FE_HAS_SIGNAL | FE_HAS_CARRIER | FE_HAS_VITERBI |
- FE_HAS_SYNC | FE_HAS_LOCK;
+ switch (c->delivery_system) {
+ case SYS_DVBT:
+ ret = regmap_read(dev->regmap[0], 0x62, &utmp);
+ if (ret)
+ goto err;
+ if (!(utmp & 0xA0)) {
+ if ((utmp & 0xF) >= 0x03)
+ *status |= FE_HAS_SIGNAL;
+ if ((utmp & 0xF) >= 0x09)
+ lock = 1;
+ }
+ break;
+ case SYS_DVBT2:
+ ret = regmap_read(dev->regmap[2], 0x8B, &utmp);
+ if (ret)
+ goto err;
+ if (!(utmp & 0x40)) {
+ if ((utmp & 0xF) >= 0x07)
+ *status |= FE_HAS_SIGNAL;
+ if ((utmp & 0xF) >= 0x0a)
+ *status |= FE_HAS_CARRIER;
+ if ((utmp & 0xF) >= 0x0d)
+ *status |= FE_HAS_VITERBI | FE_HAS_SYNC | FE_HAS_LOCK;
+ }
+ break;
+ case SYS_DVBC_ANNEX_A:
+ ret = regmap_read(dev->regmap[1], 0x85, &utmp);
+ if (ret)
+ goto err;
+ if (!(utmp & 0x40)) {
+ ret = regmap_read(dev->regmap[1], 0x89, &utmp);
+ if (ret)
+ goto err;
+ if (utmp & 0x01)
+ lock = 1;
+ }
+ break;
+ default:
+ ret = -EINVAL;
+ goto err;
+ }
+
+ if (lock)
+ *status = FE_HAS_SIGNAL | FE_HAS_CARRIER | FE_HAS_VITERBI |
+ FE_HAS_SYNC | FE_HAS_LOCK;
return 0;
err:
@@ -219,11 +242,23 @@
int ret, len, remaining;
const struct firmware *fw = NULL;
u8 *fw_file = MN88473_FIRMWARE;
+ unsigned int tmp;
dev_dbg(&client->dev, "\n");
- if (dev->warm)
+ /* set cold state by default */
+ dev->warm = false;
+
+ /* check if firmware is already running */
+ ret = regmap_read(dev->regmap[0], 0xf5, &tmp);
+ if (ret)
+ goto err;
+
+ if (!(tmp & 0x1)) {
+ dev_info(&client->dev, "firmware already running\n");
+ dev->warm = true;
return 0;
+ }
/* request the firmware, this will block and timeout */
ret = request_firmware(&fw, fw_file, &client->dev);
@@ -254,6 +289,20 @@
}
}
+ /* parity check of firmware */
+ ret = regmap_read(dev->regmap[0], 0xf8, &tmp);
+ if (ret) {
+ dev_err(&client->dev,
+ "parity reg read failed=%d\n", ret);
+ goto err;
+ }
+ if (tmp & 0x10) {
+ dev_err(&client->dev,
+ "firmware parity check failed=0x%x\n", tmp);
+ goto err;
+ }
+ dev_err(&client->dev, "firmware parity check succeeded=0x%x\n", tmp);
+
ret = regmap_write(dev->regmap[0], 0xf5, 0x00);
if (ret)
goto err;
@@ -297,6 +346,8 @@
.delsys = {SYS_DVBT, SYS_DVBT2, SYS_DVBC_ANNEX_AC},
.info = {
.name = "Panasonic MN88473",
+ .symbol_rate_min = 1000000,
+ .symbol_rate_max = 7200000,
.caps = FE_CAN_FEC_1_2 |
FE_CAN_FEC_2_3 |
FE_CAN_FEC_3_4 |
@@ -356,6 +407,10 @@
}
dev->i2c_wr_max = config->i2c_wr_max;
+ if (!config->xtal)
+ dev->xtal = 25000000;
+ else
+ dev->xtal = config->xtal;
dev->client[0] = client;
dev->regmap[0] = regmap_init_i2c(dev->client[0], &regmap_config);
if (IS_ERR(dev->regmap[0])) {
diff --git a/drivers/staging/media/mn88473/mn88473_priv.h b/drivers/staging/media/mn88473/mn88473_priv.h
index 78af112..ef6f013 100644
--- a/drivers/staging/media/mn88473/mn88473_priv.h
+++ b/drivers/staging/media/mn88473/mn88473_priv.h
@@ -31,6 +31,7 @@
u16 i2c_wr_max;
fe_delivery_system_t delivery_system;
bool warm; /* FW running */
+ u32 xtal;
};
#endif
diff --git a/drivers/staging/media/omap4iss/iss_csi2.c b/drivers/staging/media/omap4iss/iss_csi2.c
index 2d96fb3..d7ff769 100644
--- a/drivers/staging/media/omap4iss/iss_csi2.c
+++ b/drivers/staging/media/omap4iss/iss_csi2.c
@@ -828,17 +828,17 @@
*/
static struct v4l2_mbus_framefmt *
-__csi2_get_format(struct iss_csi2_device *csi2, struct v4l2_subdev_fh *fh,
+__csi2_get_format(struct iss_csi2_device *csi2, struct v4l2_subdev_pad_config *cfg,
unsigned int pad, enum v4l2_subdev_format_whence which)
{
if (which == V4L2_SUBDEV_FORMAT_TRY)
- return v4l2_subdev_get_try_format(fh, pad);
+ return v4l2_subdev_get_try_format(&csi2->subdev, cfg, pad);
return &csi2->formats[pad];
}
static void
-csi2_try_format(struct iss_csi2_device *csi2, struct v4l2_subdev_fh *fh,
+csi2_try_format(struct iss_csi2_device *csi2, struct v4l2_subdev_pad_config *cfg,
unsigned int pad, struct v4l2_mbus_framefmt *fmt,
enum v4l2_subdev_format_whence which)
{
@@ -868,7 +868,7 @@
* compression.
*/
pixelcode = fmt->code;
- format = __csi2_get_format(csi2, fh, CSI2_PAD_SINK, which);
+ format = __csi2_get_format(csi2, cfg, CSI2_PAD_SINK, which);
memcpy(fmt, format, sizeof(*fmt));
/*
@@ -889,12 +889,12 @@
/*
* csi2_enum_mbus_code - Handle pixel format enumeration
* @sd : pointer to v4l2 subdev structure
- * @fh : V4L2 subdev file handle
+ * @cfg : V4L2 subdev pad config
* @code : pointer to v4l2_subdev_mbus_code_enum structure
* return -EINVAL or zero on success
*/
static int csi2_enum_mbus_code(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_mbus_code_enum *code)
{
struct iss_csi2_device *csi2 = v4l2_get_subdevdata(sd);
@@ -907,8 +907,8 @@
code->code = csi2_input_fmts[code->index];
} else {
- format = __csi2_get_format(csi2, fh, CSI2_PAD_SINK,
- V4L2_SUBDEV_FORMAT_TRY);
+ format = __csi2_get_format(csi2, cfg, CSI2_PAD_SINK,
+ code->which);
switch (code->index) {
case 0:
/* Passthrough sink pad code */
@@ -931,7 +931,7 @@
}
static int csi2_enum_frame_size(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_frame_size_enum *fse)
{
struct iss_csi2_device *csi2 = v4l2_get_subdevdata(sd);
@@ -943,7 +943,7 @@
format.code = fse->code;
format.width = 1;
format.height = 1;
- csi2_try_format(csi2, fh, fse->pad, &format, V4L2_SUBDEV_FORMAT_TRY);
+ csi2_try_format(csi2, cfg, fse->pad, &format, fse->which);
fse->min_width = format.width;
fse->min_height = format.height;
@@ -953,7 +953,7 @@
format.code = fse->code;
format.width = -1;
format.height = -1;
- csi2_try_format(csi2, fh, fse->pad, &format, V4L2_SUBDEV_FORMAT_TRY);
+ csi2_try_format(csi2, cfg, fse->pad, &format, fse->which);
fse->max_width = format.width;
fse->max_height = format.height;
@@ -963,17 +963,17 @@
/*
* csi2_get_format - Handle get format by pads subdev method
* @sd : pointer to v4l2 subdev structure
- * @fh : V4L2 subdev file handle
+ * @cfg: V4L2 subdev pad config
* @fmt: pointer to v4l2 subdev format structure
* return -EINVAL or zero on success
*/
-static int csi2_get_format(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+static int csi2_get_format(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct iss_csi2_device *csi2 = v4l2_get_subdevdata(sd);
struct v4l2_mbus_framefmt *format;
- format = __csi2_get_format(csi2, fh, fmt->pad, fmt->which);
+ format = __csi2_get_format(csi2, cfg, fmt->pad, fmt->which);
if (format == NULL)
return -EINVAL;
@@ -984,29 +984,29 @@
/*
* csi2_set_format - Handle set format by pads subdev method
* @sd : pointer to v4l2 subdev structure
- * @fh : V4L2 subdev file handle
+ * @cfg: V4L2 subdev pad config
* @fmt: pointer to v4l2 subdev format structure
* return -EINVAL or zero on success
*/
-static int csi2_set_format(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+static int csi2_set_format(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct iss_csi2_device *csi2 = v4l2_get_subdevdata(sd);
struct v4l2_mbus_framefmt *format;
- format = __csi2_get_format(csi2, fh, fmt->pad, fmt->which);
+ format = __csi2_get_format(csi2, cfg, fmt->pad, fmt->which);
if (format == NULL)
return -EINVAL;
- csi2_try_format(csi2, fh, fmt->pad, &fmt->format, fmt->which);
+ csi2_try_format(csi2, cfg, fmt->pad, &fmt->format, fmt->which);
*format = fmt->format;
/* Propagate the format from sink to source */
if (fmt->pad == CSI2_PAD_SINK) {
- format = __csi2_get_format(csi2, fh, CSI2_PAD_SOURCE,
+ format = __csi2_get_format(csi2, cfg, CSI2_PAD_SOURCE,
fmt->which);
*format = fmt->format;
- csi2_try_format(csi2, fh, CSI2_PAD_SOURCE, format, fmt->which);
+ csi2_try_format(csi2, cfg, CSI2_PAD_SOURCE, format, fmt->which);
}
return 0;
@@ -1048,7 +1048,7 @@
format.format.code = MEDIA_BUS_FMT_SGRBG10_1X10;
format.format.width = 4096;
format.format.height = 4096;
- csi2_set_format(sd, fh, &format);
+ csi2_set_format(sd, fh ? fh->pad : NULL, &format);
return 0;
}
diff --git a/drivers/staging/media/omap4iss/iss_ipipe.c b/drivers/staging/media/omap4iss/iss_ipipe.c
index a1a46ef..eaa82da 100644
--- a/drivers/staging/media/omap4iss/iss_ipipe.c
+++ b/drivers/staging/media/omap4iss/iss_ipipe.c
@@ -24,7 +24,7 @@
#include "iss_ipipe.h"
static struct v4l2_mbus_framefmt *
-__ipipe_get_format(struct iss_ipipe_device *ipipe, struct v4l2_subdev_fh *fh,
+__ipipe_get_format(struct iss_ipipe_device *ipipe, struct v4l2_subdev_pad_config *cfg,
unsigned int pad, enum v4l2_subdev_format_whence which);
static const unsigned int ipipe_fmts[] = {
@@ -176,11 +176,11 @@
}
static struct v4l2_mbus_framefmt *
-__ipipe_get_format(struct iss_ipipe_device *ipipe, struct v4l2_subdev_fh *fh,
+__ipipe_get_format(struct iss_ipipe_device *ipipe, struct v4l2_subdev_pad_config *cfg,
unsigned int pad, enum v4l2_subdev_format_whence which)
{
if (which == V4L2_SUBDEV_FORMAT_TRY)
- return v4l2_subdev_get_try_format(fh, pad);
+ return v4l2_subdev_get_try_format(&ipipe->subdev, cfg, pad);
return &ipipe->formats[pad];
}
@@ -188,12 +188,12 @@
/*
* ipipe_try_format - Try video format on a pad
* @ipipe: ISS IPIPE device
- * @fh : V4L2 subdev file handle
+ * @cfg: V4L2 subdev pad config
* @pad: Pad number
* @fmt: Format
*/
static void
-ipipe_try_format(struct iss_ipipe_device *ipipe, struct v4l2_subdev_fh *fh,
+ipipe_try_format(struct iss_ipipe_device *ipipe, struct v4l2_subdev_pad_config *cfg,
unsigned int pad, struct v4l2_mbus_framefmt *fmt,
enum v4l2_subdev_format_whence which)
{
@@ -220,7 +220,7 @@
break;
case IPIPE_PAD_SOURCE_VP:
- format = __ipipe_get_format(ipipe, fh, IPIPE_PAD_SINK, which);
+ format = __ipipe_get_format(ipipe, cfg, IPIPE_PAD_SINK, which);
memcpy(fmt, format, sizeof(*fmt));
fmt->code = MEDIA_BUS_FMT_UYVY8_1X16;
@@ -236,12 +236,12 @@
/*
* ipipe_enum_mbus_code - Handle pixel format enumeration
* @sd : pointer to v4l2 subdev structure
- * @fh : V4L2 subdev file handle
+ * @cfg : V4L2 subdev pad config
* @code : pointer to v4l2_subdev_mbus_code_enum structure
* return -EINVAL or zero on success
*/
static int ipipe_enum_mbus_code(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_mbus_code_enum *code)
{
switch (code->pad) {
@@ -268,7 +268,7 @@
}
static int ipipe_enum_frame_size(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_frame_size_enum *fse)
{
struct iss_ipipe_device *ipipe = v4l2_get_subdevdata(sd);
@@ -280,7 +280,7 @@
format.code = fse->code;
format.width = 1;
format.height = 1;
- ipipe_try_format(ipipe, fh, fse->pad, &format, V4L2_SUBDEV_FORMAT_TRY);
+ ipipe_try_format(ipipe, cfg, fse->pad, &format, fse->which);
fse->min_width = format.width;
fse->min_height = format.height;
@@ -290,7 +290,7 @@
format.code = fse->code;
format.width = -1;
format.height = -1;
- ipipe_try_format(ipipe, fh, fse->pad, &format, V4L2_SUBDEV_FORMAT_TRY);
+ ipipe_try_format(ipipe, cfg, fse->pad, &format, fse->which);
fse->max_width = format.width;
fse->max_height = format.height;
@@ -300,19 +300,19 @@
/*
* ipipe_get_format - Retrieve the video format on a pad
* @sd : ISP IPIPE V4L2 subdevice
- * @fh : V4L2 subdev file handle
+ * @cfg: V4L2 subdev pad config
* @fmt: Format
*
* Return 0 on success or -EINVAL if the pad is invalid or doesn't correspond
* to the format type.
*/
-static int ipipe_get_format(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+static int ipipe_get_format(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct iss_ipipe_device *ipipe = v4l2_get_subdevdata(sd);
struct v4l2_mbus_framefmt *format;
- format = __ipipe_get_format(ipipe, fh, fmt->pad, fmt->which);
+ format = __ipipe_get_format(ipipe, cfg, fmt->pad, fmt->which);
if (format == NULL)
return -EINVAL;
@@ -323,31 +323,31 @@
/*
* ipipe_set_format - Set the video format on a pad
* @sd : ISP IPIPE V4L2 subdevice
- * @fh : V4L2 subdev file handle
+ * @cfg: V4L2 subdev pad config
* @fmt: Format
*
* Return 0 on success or -EINVAL if the pad is invalid or doesn't correspond
* to the format type.
*/
-static int ipipe_set_format(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+static int ipipe_set_format(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct iss_ipipe_device *ipipe = v4l2_get_subdevdata(sd);
struct v4l2_mbus_framefmt *format;
- format = __ipipe_get_format(ipipe, fh, fmt->pad, fmt->which);
+ format = __ipipe_get_format(ipipe, cfg, fmt->pad, fmt->which);
if (format == NULL)
return -EINVAL;
- ipipe_try_format(ipipe, fh, fmt->pad, &fmt->format, fmt->which);
+ ipipe_try_format(ipipe, cfg, fmt->pad, &fmt->format, fmt->which);
*format = fmt->format;
/* Propagate the format from sink to source */
if (fmt->pad == IPIPE_PAD_SINK) {
- format = __ipipe_get_format(ipipe, fh, IPIPE_PAD_SOURCE_VP,
+ format = __ipipe_get_format(ipipe, cfg, IPIPE_PAD_SOURCE_VP,
fmt->which);
*format = fmt->format;
- ipipe_try_format(ipipe, fh, IPIPE_PAD_SOURCE_VP, format,
+ ipipe_try_format(ipipe, cfg, IPIPE_PAD_SOURCE_VP, format,
fmt->which);
}
@@ -388,7 +388,7 @@
format.format.code = MEDIA_BUS_FMT_SGRBG10_1X10;
format.format.width = 4096;
format.format.height = 4096;
- ipipe_set_format(sd, fh, &format);
+ ipipe_set_format(sd, fh ? fh->pad : NULL, &format);
return 0;
}
@@ -516,8 +516,6 @@
void omap4iss_ipipe_unregister_entities(struct iss_ipipe_device *ipipe)
{
- media_entity_cleanup(&ipipe->subdev.entity);
-
v4l2_device_unregister_subdev(&ipipe->subdev);
}
@@ -566,5 +564,7 @@
*/
void omap4iss_ipipe_cleanup(struct iss_device *iss)
{
- /* FIXME: are you sure there's nothing to do? */
+ struct iss_ipipe_device *ipipe = &iss->ipipe;
+
+ media_entity_cleanup(&ipipe->subdev.entity);
}
diff --git a/drivers/staging/media/omap4iss/iss_ipipeif.c b/drivers/staging/media/omap4iss/iss_ipipeif.c
index 3943fae..530ac84 100644
--- a/drivers/staging/media/omap4iss/iss_ipipeif.c
+++ b/drivers/staging/media/omap4iss/iss_ipipeif.c
@@ -361,24 +361,24 @@
static struct v4l2_mbus_framefmt *
__ipipeif_get_format(struct iss_ipipeif_device *ipipeif,
- struct v4l2_subdev_fh *fh, unsigned int pad,
+ struct v4l2_subdev_pad_config *cfg, unsigned int pad,
enum v4l2_subdev_format_whence which)
{
if (which == V4L2_SUBDEV_FORMAT_TRY)
- return v4l2_subdev_get_try_format(fh, pad);
+ return v4l2_subdev_get_try_format(&ipipeif->subdev, cfg, pad);
return &ipipeif->formats[pad];
}
/*
* ipipeif_try_format - Try video format on a pad
* @ipipeif: ISS IPIPEIF device
- * @fh : V4L2 subdev file handle
+ * @cfg: V4L2 subdev pad config
* @pad: Pad number
* @fmt: Format
*/
static void
ipipeif_try_format(struct iss_ipipeif_device *ipipeif,
- struct v4l2_subdev_fh *fh, unsigned int pad,
+ struct v4l2_subdev_pad_config *cfg, unsigned int pad,
struct v4l2_mbus_framefmt *fmt,
enum v4l2_subdev_format_whence which)
{
@@ -407,7 +407,7 @@
break;
case IPIPEIF_PAD_SOURCE_ISIF_SF:
- format = __ipipeif_get_format(ipipeif, fh, IPIPEIF_PAD_SINK,
+ format = __ipipeif_get_format(ipipeif, cfg, IPIPEIF_PAD_SINK,
which);
memcpy(fmt, format, sizeof(*fmt));
@@ -422,7 +422,7 @@
break;
case IPIPEIF_PAD_SOURCE_VP:
- format = __ipipeif_get_format(ipipeif, fh, IPIPEIF_PAD_SINK,
+ format = __ipipeif_get_format(ipipeif, cfg, IPIPEIF_PAD_SINK,
which);
memcpy(fmt, format, sizeof(*fmt));
@@ -441,12 +441,12 @@
/*
* ipipeif_enum_mbus_code - Handle pixel format enumeration
* @sd : pointer to v4l2 subdev structure
- * @fh : V4L2 subdev file handle
+ * @cfg : V4L2 subdev pad config
* @code : pointer to v4l2_subdev_mbus_code_enum structure
* return -EINVAL or zero on success
*/
static int ipipeif_enum_mbus_code(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_mbus_code_enum *code)
{
struct iss_ipipeif_device *ipipeif = v4l2_get_subdevdata(sd);
@@ -466,8 +466,8 @@
if (code->index != 0)
return -EINVAL;
- format = __ipipeif_get_format(ipipeif, fh, IPIPEIF_PAD_SINK,
- V4L2_SUBDEV_FORMAT_TRY);
+ format = __ipipeif_get_format(ipipeif, cfg, IPIPEIF_PAD_SINK,
+ code->which);
code->code = format->code;
break;
@@ -480,7 +480,7 @@
}
static int ipipeif_enum_frame_size(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_frame_size_enum *fse)
{
struct iss_ipipeif_device *ipipeif = v4l2_get_subdevdata(sd);
@@ -492,8 +492,7 @@
format.code = fse->code;
format.width = 1;
format.height = 1;
- ipipeif_try_format(ipipeif, fh, fse->pad, &format,
- V4L2_SUBDEV_FORMAT_TRY);
+ ipipeif_try_format(ipipeif, cfg, fse->pad, &format, fse->which);
fse->min_width = format.width;
fse->min_height = format.height;
@@ -503,8 +502,7 @@
format.code = fse->code;
format.width = -1;
format.height = -1;
- ipipeif_try_format(ipipeif, fh, fse->pad, &format,
- V4L2_SUBDEV_FORMAT_TRY);
+ ipipeif_try_format(ipipeif, cfg, fse->pad, &format, fse->which);
fse->max_width = format.width;
fse->max_height = format.height;
@@ -514,19 +512,19 @@
/*
* ipipeif_get_format - Retrieve the video format on a pad
* @sd : ISP IPIPEIF V4L2 subdevice
- * @fh : V4L2 subdev file handle
+ * @cfg: V4L2 subdev pad config
* @fmt: Format
*
* Return 0 on success or -EINVAL if the pad is invalid or doesn't correspond
* to the format type.
*/
-static int ipipeif_get_format(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+static int ipipeif_get_format(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct iss_ipipeif_device *ipipeif = v4l2_get_subdevdata(sd);
struct v4l2_mbus_framefmt *format;
- format = __ipipeif_get_format(ipipeif, fh, fmt->pad, fmt->which);
+ format = __ipipeif_get_format(ipipeif, cfg, fmt->pad, fmt->which);
if (format == NULL)
return -EINVAL;
@@ -537,39 +535,39 @@
/*
* ipipeif_set_format - Set the video format on a pad
* @sd : ISP IPIPEIF V4L2 subdevice
- * @fh : V4L2 subdev file handle
+ * @cfg: V4L2 subdev pad config
* @fmt: Format
*
* Return 0 on success or -EINVAL if the pad is invalid or doesn't correspond
* to the format type.
*/
-static int ipipeif_set_format(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+static int ipipeif_set_format(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct iss_ipipeif_device *ipipeif = v4l2_get_subdevdata(sd);
struct v4l2_mbus_framefmt *format;
- format = __ipipeif_get_format(ipipeif, fh, fmt->pad, fmt->which);
+ format = __ipipeif_get_format(ipipeif, cfg, fmt->pad, fmt->which);
if (format == NULL)
return -EINVAL;
- ipipeif_try_format(ipipeif, fh, fmt->pad, &fmt->format, fmt->which);
+ ipipeif_try_format(ipipeif, cfg, fmt->pad, &fmt->format, fmt->which);
*format = fmt->format;
/* Propagate the format from sink to source */
if (fmt->pad == IPIPEIF_PAD_SINK) {
- format = __ipipeif_get_format(ipipeif, fh,
+ format = __ipipeif_get_format(ipipeif, cfg,
IPIPEIF_PAD_SOURCE_ISIF_SF,
fmt->which);
*format = fmt->format;
- ipipeif_try_format(ipipeif, fh, IPIPEIF_PAD_SOURCE_ISIF_SF,
+ ipipeif_try_format(ipipeif, cfg, IPIPEIF_PAD_SOURCE_ISIF_SF,
format, fmt->which);
- format = __ipipeif_get_format(ipipeif, fh,
+ format = __ipipeif_get_format(ipipeif, cfg,
IPIPEIF_PAD_SOURCE_VP,
fmt->which);
*format = fmt->format;
- ipipeif_try_format(ipipeif, fh, IPIPEIF_PAD_SOURCE_VP, format,
+ ipipeif_try_format(ipipeif, cfg, IPIPEIF_PAD_SOURCE_VP, format,
fmt->which);
}
@@ -612,7 +610,7 @@
format.format.code = MEDIA_BUS_FMT_SGRBG10_1X10;
format.format.width = 4096;
format.format.height = 4096;
- ipipeif_set_format(sd, fh, &format);
+ ipipeif_set_format(sd, fh ? fh->pad : NULL, &format);
return 0;
}
@@ -773,8 +771,6 @@
void omap4iss_ipipeif_unregister_entities(struct iss_ipipeif_device *ipipeif)
{
- media_entity_cleanup(&ipipeif->subdev.entity);
-
v4l2_device_unregister_subdev(&ipipeif->subdev);
omap4iss_video_unregister(&ipipeif->video_out);
}
@@ -828,5 +824,7 @@
*/
void omap4iss_ipipeif_cleanup(struct iss_device *iss)
{
- /* FIXME: are you sure there's nothing to do? */
+ struct iss_ipipeif_device *ipipeif = &iss->ipipeif;
+
+ media_entity_cleanup(&ipipeif->subdev.entity);
}
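Editor's note: the iss_ipipeif.c hunks above follow the v4l2 core API change that replaced the per-file-handle struct v4l2_subdev_fh argument with a struct v4l2_subdev_pad_config in the subdev pad operations; the try-format helper now also takes the subdev pointer (v4l2_subdev_get_try_format(sd, cfg, pad)), and enumeration handlers honour the caller-supplied "which" instead of hard-coding TRY. The stand-alone sketch below models only the TRY vs ACTIVE selection; the types and names are simplified stand-ins, not the real v4l2 structures.

#include <stdio.h>

/* Simplified stand-ins for the v4l2 types used by the driver. */
enum whence { FORMAT_TRY, FORMAT_ACTIVE };

struct framefmt { unsigned int width, height, code; };

struct pad_config { struct framefmt try_fmt; };   /* one per pad, per file handle */

struct subdev {
	struct framefmt active[2];                /* device-wide ACTIVE formats */
};

/* Mirrors __ipipeif_get_format(): TRY formats come from the caller's pad
 * config array, ACTIVE formats from the device itself. */
static struct framefmt *get_format(struct subdev *sd, struct pad_config *cfg,
				   unsigned int pad, enum whence which)
{
	if (which == FORMAT_TRY)
		return &cfg[pad].try_fmt;
	return &sd->active[pad];
}

int main(void)
{
	struct subdev sd = { .active = { { 4096, 4096, 1 }, { 4096, 4096, 1 } } };
	struct pad_config cfg[2] = { { { 640, 480, 1 } }, { { 640, 480, 1 } } };

	/* A TRY lookup only touches the per-handle scratch state ... */
	get_format(&sd, cfg, 0, FORMAT_TRY)->width = 1280;
	/* ... while the ACTIVE state is left untouched. */
	printf("try=%u active=%u\n", cfg[0].try_fmt.width, sd.active[0].width);
	return 0;
}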
diff --git a/drivers/staging/media/omap4iss/iss_resizer.c b/drivers/staging/media/omap4iss/iss_resizer.c
index 3ab9728..5f69012 100644
--- a/drivers/staging/media/omap4iss/iss_resizer.c
+++ b/drivers/staging/media/omap4iss/iss_resizer.c
@@ -420,24 +420,24 @@
static struct v4l2_mbus_framefmt *
__resizer_get_format(struct iss_resizer_device *resizer,
- struct v4l2_subdev_fh *fh, unsigned int pad,
+ struct v4l2_subdev_pad_config *cfg, unsigned int pad,
enum v4l2_subdev_format_whence which)
{
if (which == V4L2_SUBDEV_FORMAT_TRY)
- return v4l2_subdev_get_try_format(fh, pad);
+ return v4l2_subdev_get_try_format(&resizer->subdev, cfg, pad);
return &resizer->formats[pad];
}
/*
* resizer_try_format - Try video format on a pad
* @resizer: ISS RESIZER device
- * @fh : V4L2 subdev file handle
+ * @cfg: V4L2 subdev pad config
* @pad: Pad number
* @fmt: Format
*/
static void
resizer_try_format(struct iss_resizer_device *resizer,
- struct v4l2_subdev_fh *fh, unsigned int pad,
+ struct v4l2_subdev_pad_config *cfg, unsigned int pad,
struct v4l2_mbus_framefmt *fmt,
enum v4l2_subdev_format_whence which)
{
@@ -465,7 +465,7 @@
case RESIZER_PAD_SOURCE_MEM:
pixelcode = fmt->code;
- format = __resizer_get_format(resizer, fh, RESIZER_PAD_SINK,
+ format = __resizer_get_format(resizer, cfg, RESIZER_PAD_SINK,
which);
memcpy(fmt, format, sizeof(*fmt));
@@ -492,12 +492,12 @@
/*
* resizer_enum_mbus_code - Handle pixel format enumeration
* @sd : pointer to v4l2 subdev structure
- * @fh : V4L2 subdev file handle
+ * @cfg: V4L2 subdev pad config
* @code : pointer to v4l2_subdev_mbus_code_enum structure
* return -EINVAL or zero on success
*/
static int resizer_enum_mbus_code(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_mbus_code_enum *code)
{
struct iss_resizer_device *resizer = v4l2_get_subdevdata(sd);
@@ -512,8 +512,8 @@
break;
case RESIZER_PAD_SOURCE_MEM:
- format = __resizer_get_format(resizer, fh, RESIZER_PAD_SINK,
- V4L2_SUBDEV_FORMAT_TRY);
+ format = __resizer_get_format(resizer, cfg, RESIZER_PAD_SINK,
+ code->which);
if (code->index == 0) {
code->code = format->code;
@@ -542,7 +542,7 @@
}
static int resizer_enum_frame_size(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_frame_size_enum *fse)
{
struct iss_resizer_device *resizer = v4l2_get_subdevdata(sd);
@@ -554,8 +554,7 @@
format.code = fse->code;
format.width = 1;
format.height = 1;
- resizer_try_format(resizer, fh, fse->pad, &format,
- V4L2_SUBDEV_FORMAT_TRY);
+ resizer_try_format(resizer, cfg, fse->pad, &format, fse->which);
fse->min_width = format.width;
fse->min_height = format.height;
@@ -565,8 +564,7 @@
format.code = fse->code;
format.width = -1;
format.height = -1;
- resizer_try_format(resizer, fh, fse->pad, &format,
- V4L2_SUBDEV_FORMAT_TRY);
+ resizer_try_format(resizer, cfg, fse->pad, &format, fse->which);
fse->max_width = format.width;
fse->max_height = format.height;
@@ -576,19 +574,19 @@
/*
* resizer_get_format - Retrieve the video format on a pad
* @sd : ISP RESIZER V4L2 subdevice
- * @fh : V4L2 subdev file handle
+ * @cfg: V4L2 subdev pad config
* @fmt: Format
*
* Return 0 on success or -EINVAL if the pad is invalid or doesn't correspond
* to the format type.
*/
-static int resizer_get_format(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+static int resizer_get_format(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct iss_resizer_device *resizer = v4l2_get_subdevdata(sd);
struct v4l2_mbus_framefmt *format;
- format = __resizer_get_format(resizer, fh, fmt->pad, fmt->which);
+ format = __resizer_get_format(resizer, cfg, fmt->pad, fmt->which);
if (format == NULL)
return -EINVAL;
@@ -599,32 +597,32 @@
/*
* resizer_set_format - Set the video format on a pad
* @sd : ISP RESIZER V4L2 subdevice
- * @fh : V4L2 subdev file handle
+ * @cfg: V4L2 subdev pad config
* @fmt: Format
*
* Return 0 on success or -EINVAL if the pad is invalid or doesn't correspond
* to the format type.
*/
-static int resizer_set_format(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+static int resizer_set_format(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct iss_resizer_device *resizer = v4l2_get_subdevdata(sd);
struct v4l2_mbus_framefmt *format;
- format = __resizer_get_format(resizer, fh, fmt->pad, fmt->which);
+ format = __resizer_get_format(resizer, cfg, fmt->pad, fmt->which);
if (format == NULL)
return -EINVAL;
- resizer_try_format(resizer, fh, fmt->pad, &fmt->format, fmt->which);
+ resizer_try_format(resizer, cfg, fmt->pad, &fmt->format, fmt->which);
*format = fmt->format;
/* Propagate the format from sink to source */
if (fmt->pad == RESIZER_PAD_SINK) {
- format = __resizer_get_format(resizer, fh,
+ format = __resizer_get_format(resizer, cfg,
RESIZER_PAD_SOURCE_MEM,
fmt->which);
*format = fmt->format;
- resizer_try_format(resizer, fh, RESIZER_PAD_SOURCE_MEM, format,
+ resizer_try_format(resizer, cfg, RESIZER_PAD_SOURCE_MEM, format,
fmt->which);
}
@@ -667,7 +665,7 @@
format.format.code = MEDIA_BUS_FMT_UYVY8_1X16;
format.format.width = 4096;
format.format.height = 4096;
- resizer_set_format(sd, fh, &format);
+ resizer_set_format(sd, fh ? fh->pad : NULL, &format);
return 0;
}
@@ -817,8 +815,6 @@
void omap4iss_resizer_unregister_entities(struct iss_resizer_device *resizer)
{
- media_entity_cleanup(&resizer->subdev.entity);
-
v4l2_device_unregister_subdev(&resizer->subdev);
omap4iss_video_unregister(&resizer->video_out);
}
@@ -872,5 +868,7 @@
*/
void omap4iss_resizer_cleanup(struct iss_device *iss)
{
- /* FIXME: are you sure there's nothing to do? */
+ struct iss_resizer_device *resizer = &iss->resizer;
+
+ media_entity_cleanup(&resizer->subdev.entity);
}
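Editor's note: both omap4iss subdevs also move media_entity_cleanup() out of the unregister path and into the module cleanup functions, presumably so the entity's resources are freed exactly once, in the final teardown path, after the subdev has been unregistered. The sketch below is a minimal model of that ordering under that assumption; the function names are illustrative stand-ins, not the driver's API.

#include <stdio.h>
#include <stdlib.h>

/* Stand-in for struct media_entity: cleanup() frees what init() allocated,
 * so it must run exactly once, after the last user is gone. */
struct entity { int *links; };

static void entity_init(struct entity *e)    { e->links = calloc(4, sizeof(int)); }
static void entity_cleanup(struct entity *e) { free(e->links); e->links = NULL; }

static void register_entities(struct entity *e)   { (void)e; printf("registered\n"); }
static void unregister_entities(struct entity *e) { (void)e; printf("unregistered\n"); }

int main(void)
{
	struct entity e;

	entity_init(&e);          /* subdev init path                    */
	register_entities(&e);
	unregister_entities(&e);  /* no entity cleanup here any more     */
	entity_cleanup(&e);       /* only the final cleanup path frees   */
	return 0;
}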
diff --git a/drivers/staging/media/omap4iss/iss_video.c b/drivers/staging/media/omap4iss/iss_video.c
index 55938cc..85c54fe 100644
--- a/drivers/staging/media/omap4iss/iss_video.c
+++ b/drivers/staging/media/omap4iss/iss_video.c
@@ -171,14 +171,14 @@
mbus->width = pix->width;
mbus->height = pix->height;
- for (i = 0; i < ARRAY_SIZE(formats); ++i) {
+ /* Skip the last format in the loop so that it will be selected if no
+ * match is found.
+ */
+ for (i = 0; i < ARRAY_SIZE(formats) - 1; ++i) {
if (formats[i].pixelformat == pix->pixelformat)
break;
}
- if (WARN_ON(i == ARRAY_SIZE(formats)))
- return;
-
mbus->code = formats[i].code;
mbus->colorspace = pix->colorspace;
mbus->field = pix->field;
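Editor's note: the iss_video.c hunk changes the pixel-format lookup so the loop stops one entry short of the table; if nothing matches, the index is left on the last entry, which then acts as the fallback instead of the old WARN_ON-and-return. A stand-alone illustration of the idiom follows; the table contents here are made up for the example, not the driver's actual format list.

#include <stdio.h>

struct fmt { unsigned int pixelformat; unsigned int code; };

static const struct fmt formats[] = {
	{ 0x56595559 /* 'YUYV' */, 0x2008 },
	{ 0x59565955 /* 'UYVY' */, 0x2006 },
	{ 0x3231564e /* 'NV12' */, 0x3001 },   /* last entry doubles as the default */
};

static unsigned int to_code(unsigned int pixelformat)
{
	unsigned int i;

	/* Skip the last entry so it is selected when no match is found. */
	for (i = 0; i < sizeof(formats) / sizeof(formats[0]) - 1; i++)
		if (formats[i].pixelformat == pixelformat)
			break;
	return formats[i].code;
}

int main(void)
{
	printf("%#x\n", to_code(0x59565955));  /* known   -> 0x2006            */
	printf("%#x\n", to_code(0xdeadbeef));  /* unknown -> 0x3001 (fallback) */
	return 0;
}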
diff --git a/drivers/tty/hvc/hvc_opal.c b/drivers/tty/hvc/hvc_opal.c
index 071551b..543b234 100644
--- a/drivers/tty/hvc/hvc_opal.c
+++ b/drivers/tty/hvc/hvc_opal.c
@@ -41,7 +41,7 @@
static const char hvc_opal_name[] = "hvc_opal";
-static struct of_device_id hvc_opal_match[] = {
+static const struct of_device_id hvc_opal_match[] = {
{ .name = "serial", .compatible = "ibm,opal-console-raw" },
{ .name = "serial", .compatible = "ibm,opal-console-hvsi" },
{ },
diff --git a/drivers/tty/n_gsm.c b/drivers/tty/n_gsm.c
index c434376..91abc00 100644
--- a/drivers/tty/n_gsm.c
+++ b/drivers/tty/n_gsm.c
@@ -2669,7 +2669,7 @@
static int gsm_mux_net_start_xmit(struct sk_buff *skb,
struct net_device *net)
{
- struct gsm_mux_net *mux_net = (struct gsm_mux_net *)netdev_priv(net);
+ struct gsm_mux_net *mux_net = netdev_priv(net);
struct gsm_dlci *dlci = mux_net->dlci;
muxnet_get(mux_net);
@@ -2698,7 +2698,7 @@
{
struct net_device *net = dlci->net;
struct sk_buff *skb;
- struct gsm_mux_net *mux_net = (struct gsm_mux_net *)netdev_priv(net);
+ struct gsm_mux_net *mux_net = netdev_priv(net);
muxnet_get(mux_net);
/* Allocate an sk_buff */
@@ -2727,7 +2727,7 @@
static int gsm_change_mtu(struct net_device *net, int new_mtu)
{
- struct gsm_mux_net *mux_net = (struct gsm_mux_net *)netdev_priv(net);
+ struct gsm_mux_net *mux_net = netdev_priv(net);
if ((new_mtu < 8) || (new_mtu > mux_net->dlci->gsm->mtu))
return -EINVAL;
net->mtu = new_mtu;
@@ -2763,7 +2763,7 @@
pr_debug("destroy network interface");
if (!dlci->net)
return;
- mux_net = (struct gsm_mux_net *)netdev_priv(dlci->net);
+ mux_net = netdev_priv(dlci->net);
muxnet_put(mux_net);
}
@@ -2801,7 +2801,7 @@
return -ENOMEM;
}
net->mtu = dlci->gsm->mtu;
- mux_net = (struct gsm_mux_net *)netdev_priv(net);
+ mux_net = netdev_priv(net);
mux_net->dlci = dlci;
kref_init(&mux_net->ref);
strncpy(nc->if_name, net->name, IFNAMSIZ); /* return net name */
@@ -2824,7 +2824,7 @@
}
/* Line discipline for real tty */
-struct tty_ldisc_ops tty_ldisc_packet = {
+static struct tty_ldisc_ops tty_ldisc_packet = {
.owner = THIS_MODULE,
.magic = TTY_LDISC_MAGIC,
.name = "n_gsm",
diff --git a/drivers/tty/serial/8250/8250.h b/drivers/tty/serial/8250/8250.h
index b008368..c43f74c 100644
--- a/drivers/tty/serial/8250/8250.h
+++ b/drivers/tty/serial/8250/8250.h
@@ -21,7 +21,6 @@
/* Filter function */
dma_filter_fn fn;
-
/* Parameter to the filter function */
void *rx_param;
void *tx_param;
@@ -53,7 +52,7 @@
unsigned int baud_base;
unsigned int port;
unsigned int irq;
- unsigned int flags;
+ upf_t flags;
unsigned char hub6;
unsigned char io_type;
unsigned char __iomem *iomem_base;
@@ -85,9 +84,6 @@
#define UART_BUG_THRE (1 << 3) /* UART has buggy THRE reassertion */
#define UART_BUG_PARITY (1 << 4) /* UART mishandles parity if FIFO enabled */
-#define PROBE_RSA (1 << 0)
-#define PROBE_ANY (~0)
-
#define HIGH_BITS_OFFSET ((sizeof(long)-sizeof(int))*8)
#ifdef CONFIG_SERIAL_8250_SHARE_IRQ
@@ -198,3 +194,20 @@
}
static inline void serial8250_release_dma(struct uart_8250_port *p) { }
#endif
+
+static inline int ns16550a_goto_highspeed(struct uart_8250_port *up)
+{
+ unsigned char status;
+
+ status = serial_in(up, 0x04); /* EXCR2 */
+#define PRESL(x) ((x) & 0x30)
+ if (PRESL(status) == 0x10) {
+ /* already in high speed mode */
+ return 0;
+ } else {
+ status &= ~0xB0; /* Disable LOCK, mask out PRESL[01] */
+ status |= 0x10; /* 1.625 divisor for baud_base --> 921600 */
+ serial_out(up, 0x04, status);
+ }
+ return 1;
+}
diff --git a/drivers/tty/serial/8250/8250_core.c b/drivers/tty/serial/8250/8250_core.c
index deae122..422ebea 100644
--- a/drivers/tty/serial/8250/8250_core.c
+++ b/drivers/tty/serial/8250/8250_core.c
@@ -31,7 +31,6 @@
#include <linux/tty.h>
#include <linux/ratelimit.h>
#include <linux/tty_flip.h>
-#include <linux/serial_core.h>
#include <linux/serial.h>
#include <linux/serial_8250.h>
#include <linux/nmi.h>
@@ -61,7 +60,7 @@
static int serial_index(struct uart_port *port)
{
- return (serial8250_reg.minor - 64) + port->line;
+ return port->minor - 64;
}
static unsigned int skip_txen_test; /* force skip of txen test at init time */
@@ -358,34 +357,46 @@
#if defined(CONFIG_MIPS_ALCHEMY) || defined(CONFIG_SERIAL_8250_RT288X)
/* Au1x00/RT288x UART hardware has a weird register layout */
-static const u8 au_io_in_map[] = {
- [UART_RX] = 0,
- [UART_IER] = 2,
- [UART_IIR] = 3,
- [UART_LCR] = 5,
- [UART_MCR] = 6,
- [UART_LSR] = 7,
- [UART_MSR] = 8,
+static const s8 au_io_in_map[8] = {
+ 0, /* UART_RX */
+ 2, /* UART_IER */
+ 3, /* UART_IIR */
+ 5, /* UART_LCR */
+ 6, /* UART_MCR */
+ 7, /* UART_LSR */
+ 8, /* UART_MSR */
+ -1, /* UART_SCR (unmapped) */
};
-static const u8 au_io_out_map[] = {
- [UART_TX] = 1,
- [UART_IER] = 2,
- [UART_FCR] = 4,
- [UART_LCR] = 5,
- [UART_MCR] = 6,
+static const s8 au_io_out_map[8] = {
+ 1, /* UART_TX */
+ 2, /* UART_IER */
+ 4, /* UART_FCR */
+ 5, /* UART_LCR */
+ 6, /* UART_MCR */
+ -1, /* UART_LSR (unmapped) */
+ -1, /* UART_MSR (unmapped) */
+ -1, /* UART_SCR (unmapped) */
};
static unsigned int au_serial_in(struct uart_port *p, int offset)
{
- offset = au_io_in_map[offset] << p->regshift;
- return __raw_readl(p->membase + offset);
+ if (offset >= ARRAY_SIZE(au_io_in_map))
+ return UINT_MAX;
+ offset = au_io_in_map[offset];
+ if (offset < 0)
+ return UINT_MAX;
+ return __raw_readl(p->membase + (offset << p->regshift));
}
static void au_serial_out(struct uart_port *p, int offset, int value)
{
- offset = au_io_out_map[offset] << p->regshift;
- __raw_writel(value, p->membase + offset);
+ if (offset >= ARRAY_SIZE(au_io_out_map))
+ return;
+ offset = au_io_out_map[offset];
+ if (offset < 0)
+ return;
+ __raw_writel(value, p->membase + (offset << p->regshift));
}
/* Au1x00 haven't got a standard divisor latch */
@@ -895,7 +906,7 @@
/*
* Exar ST16C2550 "A2" devices incorrectly detect as
* having an EFR, and report an ID of 0x0201. See
- * http://linux.derkeiler.com/Mailing-Lists/Kernel/2004-11/4812.html
+ * http://linux.derkeiler.com/Mailing-Lists/Kernel/2004-11/4812.html
*/
if (autoconfig_read_divisor_id(up) == 0x0201 && size_fifo(up) == 16)
return 1;
@@ -903,23 +914,6 @@
return 0;
}
-static inline int ns16550a_goto_highspeed(struct uart_8250_port *up)
-{
- unsigned char status;
-
- status = serial_in(up, 0x04); /* EXCR2 */
-#define PRESL(x) ((x) & 0x30)
- if (PRESL(status) == 0x10) {
- /* already in high speed mode */
- return 0;
- } else {
- status &= ~0xB0; /* Disable LOCK, mask out PRESL[01] */
- status |= 0x10; /* 1.625 divisor for baud_base --> 921600 */
- serial_out(up, 0x04, status);
- }
- return 1;
-}
-
/*
* We know that the chip has FIFOs. Does it have an EFR? The
* EFR is located in the same register position as the IIR and
@@ -1122,7 +1116,7 @@
* whether or not this UART is a 16550A or not, since this will
* determine whether or not we can use its FIFO features or not.
*/
-static void autoconfig(struct uart_8250_port *up, unsigned int probeflags)
+static void autoconfig(struct uart_8250_port *up)
{
unsigned char status1, scratch, scratch2, scratch3;
unsigned char save_lcr, save_mcr;
@@ -1245,22 +1239,15 @@
/*
* Only probe for RSA ports if we got the region.
*/
- if (port->type == PORT_16550A && probeflags & PROBE_RSA) {
- int i;
-
- for (i = 0 ; i < probe_rsa_count; ++i) {
- if (probe_rsa[i] == port->iobase && __enable_rsa(up)) {
- port->type = PORT_RSA;
- break;
- }
- }
- }
+ if (port->type == PORT_16550A && up->probe & UART_PROBE_RSA &&
+ __enable_rsa(up))
+ port->type = PORT_RSA;
#endif
serial_out(up, UART_LCR, save_lcr);
port->fifosize = uart_config[up->port.type].fifo_size;
- old_capabilities = up->capabilities;
+ old_capabilities = up->capabilities;
up->capabilities = uart_config[port->type].flags;
up->tx_loadsz = uart_config[port->type].tx_loadsz;
@@ -1907,6 +1894,48 @@
jiffies + uart_poll_timeout(&up->port) + HZ / 5);
}
+static int univ8250_setup_irq(struct uart_8250_port *up)
+{
+ struct uart_port *port = &up->port;
+ int retval = 0;
+
+ /*
+ * The above check will only give an accurate result the first time
+ * the port is opened so this value needs to be preserved.
+ */
+ if (up->bugs & UART_BUG_THRE) {
+ pr_debug("ttyS%d - using backup timer\n", serial_index(port));
+
+ up->timer.function = serial8250_backup_timeout;
+ up->timer.data = (unsigned long)up;
+ mod_timer(&up->timer, jiffies +
+ uart_poll_timeout(port) + HZ / 5);
+ }
+
+ /*
+ * If the "interrupt" for this port doesn't correspond with any
+ * hardware interrupt, we use a timer-based system. The original
+ * driver used to do this with IRQ0.
+ */
+ if (!port->irq) {
+ up->timer.data = (unsigned long)up;
+ mod_timer(&up->timer, jiffies + uart_poll_timeout(port));
+ } else
+ retval = serial_link_irq_chain(up);
+
+ return retval;
+}
+
+static void univ8250_release_irq(struct uart_8250_port *up)
+{
+ struct uart_port *port = &up->port;
+
+ del_timer_sync(&up->timer);
+ up->timer.function = serial8250_timeout;
+ if (port->irq)
+ serial_unlink_irq_chain(up);
+}
+
static unsigned int serial8250_tx_empty(struct uart_port *port)
{
struct uart_8250_port *up = up_to_u8250p(port);
@@ -2211,35 +2240,12 @@
if ((!(iir1 & UART_IIR_NO_INT) && (iir & UART_IIR_NO_INT)) ||
up->port.flags & UPF_BUG_THRE) {
up->bugs |= UART_BUG_THRE;
- pr_debug("ttyS%d - using backup timer\n",
- serial_index(port));
}
}
- /*
- * The above check will only give an accurate result the first time
- * the port is opened so this value needs to be preserved.
- */
- if (up->bugs & UART_BUG_THRE) {
- up->timer.function = serial8250_backup_timeout;
- up->timer.data = (unsigned long)up;
- mod_timer(&up->timer, jiffies +
- uart_poll_timeout(port) + HZ / 5);
- }
-
- /*
- * If the "interrupt" for this port doesn't correspond with any
- * hardware interrupt, we use a timer-based system. The original
- * driver used to do this with IRQ0.
- */
- if (!port->irq) {
- up->timer.data = (unsigned long)up;
- mod_timer(&up->timer, jiffies + uart_poll_timeout(port));
- } else {
- retval = serial_link_irq_chain(up);
- if (retval)
- goto out;
- }
+ retval = up->ops->setup_irq(up);
+ if (retval)
+ goto out;
/*
* Now, initialize the UART
@@ -2270,7 +2276,7 @@
is variable. So, let's just don't test if we receive
TX irq. This way, we'll never enable UART_BUG_TXEN.
*/
- if (skip_txen_test || up->port.flags & UPF_NO_TXEN_TEST)
+ if (up->port.flags & UPF_NO_TXEN_TEST)
goto dont_test_tx_en;
/*
@@ -2397,10 +2403,7 @@
serial_port_in(port, UART_RX);
serial8250_rpm_put(up);
- del_timer_sync(&up->timer);
- up->timer.function = serial8250_timeout;
- if (port->irq)
- serial_unlink_irq_chain(up);
+ up->ops->release_irq(up);
}
EXPORT_SYMBOL_GPL(serial8250_do_shutdown);
@@ -2719,6 +2722,8 @@
static unsigned int serial8250_port_size(struct uart_8250_port *pt)
{
+ if (pt->port.mapsize)
+ return pt->port.mapsize;
if (pt->port.iotype == UPIO_AU) {
if (pt->port.type == PORT_RT2880)
return 0x100;
@@ -2798,6 +2803,7 @@
}
}
+#ifdef CONFIG_SERIAL_8250_RSA
static int serial8250_request_rsa_resource(struct uart_8250_port *up)
{
unsigned long start = UART_RSA_BASE << up->port.regshift;
@@ -2832,14 +2838,13 @@
break;
}
}
+#endif
static void serial8250_release_port(struct uart_port *port)
{
struct uart_8250_port *up = up_to_u8250p(port);
serial8250_release_std_resource(up);
- if (port->type == PORT_RSA)
- serial8250_release_rsa_resource(up);
}
static int serial8250_request_port(struct uart_port *port)
@@ -2851,11 +2856,6 @@
return -ENODEV;
ret = serial8250_request_std_resource(up);
- if (ret == 0 && port->type == PORT_RSA) {
- ret = serial8250_request_rsa_resource(up);
- if (ret < 0)
- serial8250_release_std_resource(up);
- }
return ret;
}
@@ -3003,7 +3003,6 @@
static void serial8250_config_port(struct uart_port *port, int flags)
{
struct uart_8250_port *up = up_to_u8250p(port);
- int probeflags = PROBE_ANY;
int ret;
if (port->type == PORT_8250_CIR)
@@ -3017,15 +3016,11 @@
if (ret < 0)
return;
- ret = serial8250_request_rsa_resource(up);
- if (ret < 0)
- probeflags &= ~PROBE_RSA;
-
if (port->iotype != up->cur_iotype)
set_io_from_upio(port);
if (flags & UART_CONFIG_TYPE)
- autoconfig(up, probeflags);
+ autoconfig(up);
/* if access method is AU, it is a 16550 with a quirk */
if (port->type == PORT_16550A && port->iotype == UPIO_AU)
@@ -3038,8 +3033,6 @@
if (port->type != PORT_UNKNOWN && flags & UART_CONFIG_IRQ)
autoconfig_irq(up);
- if (port->type != PORT_RSA && probeflags & PROBE_RSA)
- serial8250_release_rsa_resource(up);
if (port->type == PORT_UNKNOWN)
serial8250_release_std_resource(up);
@@ -3073,7 +3066,7 @@
return uart_config[type].name;
}
-static struct uart_ops serial8250_pops = {
+static const struct uart_ops serial8250_pops = {
.tx_empty = serial8250_tx_empty,
.set_mctrl = serial8250_set_mctrl,
.get_mctrl = serial8250_get_mctrl,
@@ -3100,6 +3093,14 @@
#endif
};
+static const struct uart_ops *base_ops;
+static struct uart_ops univ8250_port_ops;
+
+static const struct uart_8250_ops univ8250_driver_ops = {
+ .setup_irq = univ8250_setup_irq,
+ .release_irq = univ8250_release_irq,
+};
+
static struct uart_8250_port serial8250_ports[UART_NR];
/**
@@ -3130,6 +3131,105 @@
}
EXPORT_SYMBOL(serial8250_set_isa_configurator);
+static void serial8250_init_port(struct uart_8250_port *up)
+{
+ struct uart_port *port = &up->port;
+
+ spin_lock_init(&port->lock);
+ port->ops = &serial8250_pops;
+
+ up->cur_iotype = 0xFF;
+}
+
+static void serial8250_set_defaults(struct uart_8250_port *up)
+{
+ struct uart_port *port = &up->port;
+
+ if (up->port.flags & UPF_FIXED_TYPE) {
+ unsigned int type = up->port.type;
+
+ if (!up->port.fifosize)
+ up->port.fifosize = uart_config[type].fifo_size;
+ if (!up->tx_loadsz)
+ up->tx_loadsz = uart_config[type].tx_loadsz;
+ if (!up->capabilities)
+ up->capabilities = uart_config[type].flags;
+ }
+
+ set_io_from_upio(port);
+
+ /* default dma handlers */
+ if (up->dma) {
+ if (!up->dma->tx_dma)
+ up->dma->tx_dma = serial8250_tx_dma;
+ if (!up->dma->rx_dma)
+ up->dma->rx_dma = serial8250_rx_dma;
+ }
+}
+
+#ifdef CONFIG_SERIAL_8250_RSA
+
+static void univ8250_config_port(struct uart_port *port, int flags)
+{
+ struct uart_8250_port *up = up_to_u8250p(port);
+
+ up->probe &= ~UART_PROBE_RSA;
+ if (port->type == PORT_RSA) {
+ if (serial8250_request_rsa_resource(up) == 0)
+ up->probe |= UART_PROBE_RSA;
+ } else if (flags & UART_CONFIG_TYPE) {
+ int i;
+
+ for (i = 0; i < probe_rsa_count; i++) {
+ if (probe_rsa[i] == up->port.iobase) {
+ if (serial8250_request_rsa_resource(up) == 0)
+ up->probe |= UART_PROBE_RSA;
+ break;
+ }
+ }
+ }
+
+ base_ops->config_port(port, flags);
+
+ if (port->type != PORT_RSA && up->probe & UART_PROBE_RSA)
+ serial8250_release_rsa_resource(up);
+}
+
+static int univ8250_request_port(struct uart_port *port)
+{
+ struct uart_8250_port *up = up_to_u8250p(port);
+ int ret;
+
+ ret = base_ops->request_port(port);
+ if (ret == 0 && port->type == PORT_RSA) {
+ ret = serial8250_request_rsa_resource(up);
+ if (ret < 0)
+ base_ops->release_port(port);
+ }
+
+ return ret;
+}
+
+static void univ8250_release_port(struct uart_port *port)
+{
+ struct uart_8250_port *up = up_to_u8250p(port);
+
+ if (port->type == PORT_RSA)
+ serial8250_release_rsa_resource(up);
+ base_ops->release_port(port);
+}
+
+static void univ8250_rsa_support(struct uart_ops *ops)
+{
+ ops->config_port = univ8250_config_port;
+ ops->request_port = univ8250_request_port;
+ ops->release_port = univ8250_release_port;
+}
+
+#else
+#define univ8250_rsa_support(x) do { } while (0)
+#endif /* CONFIG_SERIAL_8250_RSA */
+
static void __init serial8250_isa_init_ports(void)
{
struct uart_8250_port *up;
@@ -3148,21 +3248,27 @@
struct uart_port *port = &up->port;
port->line = i;
- spin_lock_init(&port->lock);
+ serial8250_init_port(up);
+ if (!base_ops)
+ base_ops = port->ops;
+ port->ops = &univ8250_port_ops;
init_timer(&up->timer);
up->timer.function = serial8250_timeout;
- up->cur_iotype = 0xFF;
+
+ up->ops = &univ8250_driver_ops;
/*
* ALPHA_KLUDGE_MCR needs to be killed.
*/
up->mcr_mask = ~ALPHA_KLUDGE_MCR;
up->mcr_force = ALPHA_KLUDGE_MCR;
-
- port->ops = &serial8250_pops;
}
+ /* chain base port ops to support Remote Supervisor Adapter */
+ univ8250_port_ops = *base_ops;
+ univ8250_rsa_support(&univ8250_port_ops);
+
if (share_irqs)
irqflag = IRQF_SHARED;
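Editor's note: the 8250 core hunks above split the universal/ISA driver from the shared port code: serial8250_pops becomes const, the universal driver copies it into a writable univ8250_port_ops, and, when CONFIG_SERIAL_8250_RSA is enabled, chains its RSA-aware config/request/release handlers in front of the saved base ops. The sketch below shows that "copy the base vtable, override a few slots, call through to the base" pattern with simplified stand-in types.

#include <stdio.h>

struct port;
struct uart_ops {
	void (*config_port)(struct port *p, int flags);
	int  (*request_port)(struct port *p);
};

struct port { const struct uart_ops *ops; };

/* Shared, immutable base implementation (serial8250_pops in the driver). */
static void base_config(struct port *p, int flags) { (void)p; printf("base config %d\n", flags); }
static int  base_request(struct port *p)           { (void)p; printf("base request\n"); return 0; }
static const struct uart_ops base_ops_impl = { base_config, base_request };

/* The universal driver keeps a pointer to the base and a writable copy. */
static const struct uart_ops *base_ops;
static struct uart_ops univ_ops;

static void univ_config(struct port *p, int flags)
{
	printf("RSA-aware pre-work\n");
	base_ops->config_port(p, flags);    /* chain to the base implementation */
}

int main(void)
{
	struct port p;

	base_ops = &base_ops_impl;
	univ_ops = *base_ops;               /* copy every slot ...              */
	univ_ops.config_port = univ_config; /* ... then override selected ones  */

	p.ops = &univ_ops;
	p.ops->config_port(&p, 1);
	p.ops->request_port(&p);
	return 0;
}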
@@ -3180,26 +3286,14 @@
port->membase = old_serial_port[i].iomem_base;
port->iotype = old_serial_port[i].io_type;
port->regshift = old_serial_port[i].iomem_reg_shift;
- set_io_from_upio(port);
+ serial8250_set_defaults(up);
+
port->irqflags |= irqflag;
if (serial8250_isa_config != NULL)
serial8250_isa_config(i, &up->port, &up->capabilities);
-
}
}
-static void
-serial8250_init_fixed_type_port(struct uart_8250_port *up, unsigned int type)
-{
- up->port.type = type;
- if (!up->port.fifosize)
- up->port.fifosize = uart_config[type].fifo_size;
- if (!up->tx_loadsz)
- up->tx_loadsz = uart_config[type].tx_loadsz;
- if (!up->capabilities)
- up->capabilities = uart_config[type].flags;
-}
-
static void __init
serial8250_register_ports(struct uart_driver *drv, struct device *dev)
{
@@ -3213,8 +3307,8 @@
up->port.dev = dev;
- if (up->port.flags & UPF_FIXED_TYPE)
- serial8250_init_fixed_type_port(up, up->port.type);
+ if (skip_txen_test)
+ up->port.flags |= UPF_NO_TXEN_TEST;
uart_add_one_port(drv, &up->port);
}
@@ -3236,10 +3330,9 @@
*
* The console_lock must be held when we get here.
*/
-static void
-serial8250_console_write(struct console *co, const char *s, unsigned int count)
+static void serial8250_console_write(struct uart_8250_port *up, const char *s,
+ unsigned int count)
{
- struct uart_8250_port *up = &serial8250_ports[co->index];
struct uart_port *port = &up->port;
unsigned long flags;
unsigned int ier;
@@ -3311,14 +3404,51 @@
serial8250_rpm_put(up);
}
-static int serial8250_console_setup(struct console *co, char *options)
+static void univ8250_console_write(struct console *co, const char *s,
+ unsigned int count)
{
- struct uart_port *port;
+ struct uart_8250_port *up = &serial8250_ports[co->index];
+
+ serial8250_console_write(up, s, count);
+}
+
+static unsigned int probe_baud(struct uart_port *port)
+{
+ unsigned char lcr, dll, dlm;
+ unsigned int quot;
+
+ lcr = serial_port_in(port, UART_LCR);
+ serial_port_out(port, UART_LCR, lcr | UART_LCR_DLAB);
+ dll = serial_port_in(port, UART_DLL);
+ dlm = serial_port_in(port, UART_DLM);
+ serial_port_out(port, UART_LCR, lcr);
+
+ quot = (dlm << 8) | dll;
+ return (port->uartclk / 16) / quot;
+}
+
+static int serial8250_console_setup(struct uart_port *port, char *options, bool probe)
+{
int baud = 9600;
int bits = 8;
int parity = 'n';
int flow = 'n';
+ if (!port->iobase && !port->membase)
+ return -ENODEV;
+
+ if (options)
+ uart_parse_options(options, &baud, &parity, &bits, &flow);
+ else if (probe)
+ baud = probe_baud(port);
+
+ return uart_set_options(port, port->cons, baud, parity, bits, flow);
+}
+
+static int univ8250_console_setup(struct console *co, char *options)
+{
+ struct uart_port *port;
+
/*
* Check whether an invalid uart number has been specified, and
* if so, search for the first available port that does have
@@ -3327,53 +3457,84 @@
if (co->index >= nr_uarts)
co->index = 0;
port = &serial8250_ports[co->index].port;
- if (!port->iobase && !port->membase)
+ /* link port to console */
+ port->cons = co;
+
+ return serial8250_console_setup(port, options, false);
+}
+
+/**
+ * univ8250_console_match - non-standard console matching
+ * @co: registering console
+ * @name: name from console command line
+ * @idx: index from console command line
+ * @options: ptr to option string from console command line
+ *
+ * Only attempts to match console command lines of the form:
+ * console=uart[8250],io|mmio|mmio32,<addr>[,<options>]
+ * console=uart[8250],0x<addr>[,<options>]
+ * This form is used to register an initial earlycon boot console and
+ * replace it with the serial8250_console at 8250 driver init.
+ *
+ * Performs console setup for a match (as required by interface)
+ * If no <options> are specified, then assume the h/w is already setup.
+ *
+ * Returns 0 if console matches; otherwise non-zero to use default matching
+ */
+static int univ8250_console_match(struct console *co, char *name, int idx,
+ char *options)
+{
+ char match[] = "uart"; /* 8250-specific earlycon name */
+ unsigned char iotype;
+ unsigned long addr;
+ int i;
+
+ if (strncmp(name, match, 4) != 0)
return -ENODEV;
- if (options)
- uart_parse_options(options, &baud, &parity, &bits, &flow);
+ if (uart_parse_earlycon(options, &iotype, &addr, &options))
+ return -ENODEV;
- return uart_set_options(port, co, baud, parity, bits, flow);
+ /* try to match the port specified on the command line */
+ for (i = 0; i < nr_uarts; i++) {
+ struct uart_port *port = &serial8250_ports[i].port;
+
+ if (port->iotype != iotype)
+ continue;
+ if ((iotype == UPIO_MEM || iotype == UPIO_MEM32) &&
+ (port->mapbase != addr))
+ continue;
+ if (iotype == UPIO_PORT && port->iobase != addr)
+ continue;
+
+ co->index = i;
+ port->cons = co;
+ return serial8250_console_setup(port, options, true);
+ }
+
+ return -ENODEV;
}
-static int serial8250_console_early_setup(void)
-{
- return serial8250_find_port_for_earlycon();
-}
-
-static struct console serial8250_console = {
+static struct console univ8250_console = {
.name = "ttyS",
- .write = serial8250_console_write,
+ .write = univ8250_console_write,
.device = uart_console_device,
- .setup = serial8250_console_setup,
- .early_setup = serial8250_console_early_setup,
+ .setup = univ8250_console_setup,
+ .match = univ8250_console_match,
.flags = CON_PRINTBUFFER | CON_ANYTIME,
.index = -1,
.data = &serial8250_reg,
};
-static int __init serial8250_console_init(void)
+static int __init univ8250_console_init(void)
{
serial8250_isa_init_ports();
- register_console(&serial8250_console);
+ register_console(&univ8250_console);
return 0;
}
-console_initcall(serial8250_console_init);
+console_initcall(univ8250_console_init);
-int serial8250_find_port(struct uart_port *p)
-{
- int line;
- struct uart_port *port;
-
- for (line = 0; line < nr_uarts; line++) {
- port = &serial8250_ports[line].port;
- if (uart_match_port(p, port))
- return line;
- }
- return -ENODEV;
-}
-
-#define SERIAL8250_CONSOLE &serial8250_console
+#define SERIAL8250_CONSOLE &univ8250_console
#else
#define SERIAL8250_CONSOLE NULL
#endif
@@ -3412,19 +3573,19 @@
p->iotype = port->iotype;
p->flags = port->flags;
p->mapbase = port->mapbase;
+ p->mapsize = port->mapsize;
p->private_data = port->private_data;
p->type = port->type;
p->line = port->line;
- set_io_from_upio(p);
+ serial8250_set_defaults(up_to_u8250p(p));
+
if (port->serial_in)
p->serial_in = port->serial_in;
if (port->serial_out)
p->serial_out = port->serial_out;
if (port->handle_irq)
p->handle_irq = port->handle_irq;
- else
- p->handle_irq = serial8250_default_handle_irq;
return 0;
}
@@ -3444,7 +3605,8 @@
port->type != PORT_8250) {
unsigned char canary = 0xa5;
serial_out(up, UART_SCR, canary);
- up->canary = canary;
+ if (serial_in(up, UART_SCR) == canary)
+ up->canary = canary;
}
uart_suspend_port(&serial8250_reg, port);
@@ -3666,6 +3828,7 @@
uart->port.flags = up->port.flags | UPF_BOOT_AUTOCONF;
uart->bugs = up->bugs;
uart->port.mapbase = up->port.mapbase;
+ uart->port.mapsize = up->port.mapsize;
uart->port.private_data = up->port.private_data;
uart->port.fifosize = up->port.fifosize;
uart->tx_loadsz = up->tx_loadsz;
@@ -3674,6 +3837,7 @@
uart->port.unthrottle = up->port.unthrottle;
uart->port.rs485_config = up->port.rs485_config;
uart->port.rs485 = up->port.rs485;
+ uart->dma = up->dma;
/* Take tx_loadsz from fifosize if it wasn't set separately */
if (uart->port.fifosize && !uart->tx_loadsz)
@@ -3682,10 +3846,14 @@
if (up->port.dev)
uart->port.dev = up->port.dev;
- if (up->port.flags & UPF_FIXED_TYPE)
- serial8250_init_fixed_type_port(uart, up->port.type);
+ if (skip_txen_test)
+ uart->port.flags |= UPF_NO_TXEN_TEST;
- set_io_from_upio(&uart->port);
+ if (up->port.flags & UPF_FIXED_TYPE)
+ uart->port.type = up->port.type;
+
+ serial8250_set_defaults(uart);
+
/* Possibly override default I/O functions. */
if (up->port.serial_in)
uart->port.serial_in = up->port.serial_in;
@@ -3710,13 +3878,6 @@
uart->dl_read = up->dl_read;
if (up->dl_write)
uart->dl_write = up->dl_write;
- if (up->dma) {
- uart->dma = up->dma;
- if (!uart->dma->tx_dma)
- uart->dma->tx_dma = serial8250_tx_dma;
- if (!uart->dma->rx_dma)
- uart->dma->rx_dma = serial8250_rx_dma;
- }
if (serial8250_isa_config != NULL)
serial8250_isa_config(0, &uart->port,
@@ -3747,9 +3908,11 @@
uart_remove_one_port(&serial8250_reg, &uart->port);
if (serial8250_isa_devs) {
uart->port.flags &= ~UPF_BOOT_AUTOCONF;
+ if (skip_txen_test)
+ uart->port.flags |= UPF_NO_TXEN_TEST;
uart->port.type = PORT_UNKNOWN;
uart->port.dev = &serial8250_isa_devs->dev;
- uart->capabilities = uart_config[uart->port.type].flags;
+ uart->capabilities = 0;
uart_add_one_port(&serial8250_reg, &uart->port);
} else {
uart->port.dev = NULL;
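Editor's note: the probe_baud() added to 8250_core.c (moved from 8250_early.c for the new console-match path) recovers the baud rate a boot loader already programmed: with DLAB set, DLL/DLM give back the 16x divisor, so baud = uartclk / (16 * divisor). A stand-alone check of the arithmetic with the classic 1.8432 MHz UART clock:

#include <stdio.h>

/* baud = uartclk / (16 * divisor), as read back from DLL/DLM with DLAB set */
static unsigned int probe_baud(unsigned int uartclk, unsigned char dll,
			       unsigned char dlm)
{
	unsigned int quot = (dlm << 8) | dll;

	return (uartclk / 16) / quot;
}

int main(void)
{
	/* 1.8432 MHz UART clock: divisor 1 -> 115200, divisor 12 -> 9600 */
	printf("%u\n", probe_baud(1843200, 1, 0));
	printf("%u\n", probe_baud(1843200, 12, 0));
	return 0;
}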
diff --git a/drivers/tty/serial/8250/8250_dw.c b/drivers/tty/serial/8250/8250_dw.c
index 6ae5b85..176f18f 100644
--- a/drivers/tty/serial/8250/8250_dw.c
+++ b/drivers/tty/serial/8250/8250_dw.c
@@ -17,7 +17,6 @@
#include <linux/io.h>
#include <linux/module.h>
#include <linux/serial_8250.h>
-#include <linux/serial_core.h>
#include <linux/serial_reg.h>
#include <linux/of.h>
#include <linux/of_irq.h>
@@ -364,9 +363,9 @@
}
if (of_property_read_bool(np, "cts-override")) {
- /* Always report DSR as active */
- data->msr_mask_on |= UART_MSR_DSR;
- data->msr_mask_off |= UART_MSR_DDSR;
+ /* Always report CTS as active */
+ data->msr_mask_on |= UART_MSR_CTS;
+ data->msr_mask_off |= UART_MSR_DCTS;
}
if (of_property_read_bool(np, "ri-override")) {
@@ -375,37 +374,16 @@
data->msr_mask_off |= UART_MSR_TERI;
}
- /* clock got configured through clk api, all done */
- if (p->uartclk)
- return 0;
-
- /* try to find out clock frequency from DT as fallback */
- if (of_property_read_u32(np, "clock-frequency", &val)) {
- dev_err(p->dev, "clk or clock-frequency not defined\n");
- return -EINVAL;
- }
- p->uartclk = val;
-
return 0;
}
static int dw8250_probe_acpi(struct uart_8250_port *up,
struct dw8250_data *data)
{
- const struct acpi_device_id *id;
struct uart_port *p = &up->port;
dw8250_setup_port(up);
- id = acpi_match_device(p->dev->driver->acpi_match_table, p->dev);
- if (!id)
- return -ENODEV;
-
- if (!p->uartclk)
- if (device_property_read_u32(p->dev, "clock-frequency",
- &p->uartclk))
- return -EINVAL;
-
p->iotype = UPIO_MEM32;
p->serial_in = dw8250_serial_in32;
p->serial_out = dw8250_serial_out32;
@@ -425,18 +403,24 @@
{
struct uart_8250_port uart = {};
struct resource *regs = platform_get_resource(pdev, IORESOURCE_MEM, 0);
- struct resource *irq = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
+ int irq = platform_get_irq(pdev, 0);
struct dw8250_data *data;
int err;
- if (!regs || !irq) {
- dev_err(&pdev->dev, "no registers/irq defined\n");
+ if (!regs) {
+ dev_err(&pdev->dev, "no registers defined\n");
return -EINVAL;
}
+ if (irq < 0) {
+ if (irq != -EPROBE_DEFER)
+ dev_err(&pdev->dev, "cannot get irq\n");
+ return irq;
+ }
+
spin_lock_init(&uart.port.lock);
uart.port.mapbase = regs->start;
- uart.port.irq = irq->start;
+ uart.port.irq = irq;
uart.port.handle_irq = dw8250_handle_irq;
uart.port.pm = dw8250_do_pm;
uart.port.type = PORT_8250;
@@ -453,12 +437,18 @@
return -ENOMEM;
data->usr_reg = DW_UART_USR;
+
+ /* Always ask for fixed clock rate from a property. */
+ device_property_read_u32(&pdev->dev, "clock-frequency",
+ &uart.port.uartclk);
+
+ /* If there is a separate baudclk, get the rate from it. */
data->clk = devm_clk_get(&pdev->dev, "baudclk");
if (IS_ERR(data->clk) && PTR_ERR(data->clk) != -EPROBE_DEFER)
data->clk = devm_clk_get(&pdev->dev, NULL);
if (IS_ERR(data->clk) && PTR_ERR(data->clk) == -EPROBE_DEFER)
return -EPROBE_DEFER;
- if (!IS_ERR(data->clk)) {
+ if (!IS_ERR_OR_NULL(data->clk)) {
err = clk_prepare_enable(data->clk);
if (err)
dev_warn(&pdev->dev, "could not enable optional baudclk: %d\n",
@@ -467,6 +457,12 @@
uart.port.uartclk = clk_get_rate(data->clk);
}
+ /* If no clock rate is defined, fail. */
+ if (!uart.port.uartclk) {
+ dev_err(&pdev->dev, "clock rate not defined\n");
+ return -EINVAL;
+ }
+
data->pclk = devm_clk_get(&pdev->dev, "apb_pclk");
if (IS_ERR(data->clk) && PTR_ERR(data->clk) == -EPROBE_DEFER) {
err = -EPROBE_DEFER;
@@ -629,6 +625,7 @@
{ "80860F0A", 0 },
{ "8086228A", 0 },
{ "APMC0D08", 0},
+ { "AMD0020", 0 },
{ },
};
MODULE_DEVICE_TABLE(acpi, dw8250_acpi_match);
@@ -649,3 +646,4 @@
MODULE_AUTHOR("Jamie Iles");
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Synopsys DesignWare 8250 serial port driver");
+MODULE_ALIAS("platform:dw-apb-uart");
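Editor's note: the 8250_dw.c probe hunks collapse the DT- and ACPI-specific clock handling into one precedence: read an optional "clock-frequency" property first, let a usable "baudclk" clock override it, and fail the probe only if neither yields a rate. A compact model of that decision, assuming the usual semantics that a missing source simply contributes nothing (0 below):

#include <stdio.h>

/* 0 means "not provided" for both sources in this model. */
static unsigned int pick_uartclk(unsigned int property_hz, unsigned int baudclk_hz)
{
	unsigned int uartclk = property_hz;   /* fixed rate from a property    */

	if (baudclk_hz)                       /* an enabled baudclk wins       */
		uartclk = baudclk_hz;
	return uartclk;                       /* 0 here means probe must fail  */
}

int main(void)
{
	printf("%u\n", pick_uartclk(1843200, 0));         /* property only     */
	printf("%u\n", pick_uartclk(1843200, 48000000));  /* baudclk overrides */
	printf("%u\n", pick_uartclk(0, 0));               /* error case        */
	return 0;
}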
diff --git a/drivers/tty/serial/8250/8250_early.c b/drivers/tty/serial/8250/8250_early.c
index c31a22b..8e11968 100644
--- a/drivers/tty/serial/8250/8250_early.c
+++ b/drivers/tty/serial/8250/8250_early.c
@@ -29,15 +29,12 @@
#include <linux/tty.h>
#include <linux/init.h>
#include <linux/console.h>
-#include <linux/serial_core.h>
#include <linux/serial_reg.h>
#include <linux/serial.h>
#include <linux/serial_8250.h>
#include <asm/io.h>
#include <asm/serial.h>
-static struct earlycon_device *early_device;
-
unsigned int __weak __init serial8250_early_in(struct uart_port *port, int offset)
{
switch (port->iotype) {
@@ -90,7 +87,8 @@
static void __init early_serial8250_write(struct console *console,
const char *s, unsigned int count)
{
- struct uart_port *port = &early_device->port;
+ struct earlycon_device *device = console->data;
+ struct uart_port *port = &device->port;
unsigned int ier;
/* Save the IER and disable interrupts preserving the UUE bit */
@@ -107,21 +105,6 @@
serial8250_early_out(port, UART_IER, ier);
}
-static unsigned int __init probe_baud(struct uart_port *port)
-{
- unsigned char lcr, dll, dlm;
- unsigned int quot;
-
- lcr = serial8250_early_in(port, UART_LCR);
- serial8250_early_out(port, UART_LCR, lcr | UART_LCR_DLAB);
- dll = serial8250_early_in(port, UART_DLL);
- dlm = serial8250_early_in(port, UART_DLM);
- serial8250_early_out(port, UART_LCR, lcr);
-
- quot = (dlm << 8) | dll;
- return (port->uartclk / 16) / quot;
-}
-
static void __init init_port(struct earlycon_device *device)
{
struct uart_port *port = &device->port;
@@ -147,52 +130,20 @@
const char *options)
{
if (!(device->port.membase || device->port.iobase))
- return 0;
+ return -ENODEV;
if (!device->baud) {
- device->baud = probe_baud(&device->port);
- snprintf(device->options, sizeof(device->options), "%u",
- device->baud);
- }
+ struct uart_port *port = &device->port;
+ unsigned int ier;
- init_port(device);
+ /* assume the device was initialized, only mask interrupts */
+ ier = serial8250_early_in(port, UART_IER);
+ serial8250_early_out(port, UART_IER, ier & UART_IER_UUE);
+ } else
+ init_port(device);
- early_device = device;
device->con->write = early_serial8250_write;
return 0;
}
EARLYCON_DECLARE(uart8250, early_serial8250_setup);
EARLYCON_DECLARE(uart, early_serial8250_setup);
-
-int __init setup_early_serial8250_console(char *cmdline)
-{
- char match[] = "uart8250";
-
- if (cmdline && cmdline[4] == ',')
- match[4] = '\0';
-
- return setup_earlycon(cmdline, match, early_serial8250_setup);
-}
-
-int serial8250_find_port_for_earlycon(void)
-{
- struct earlycon_device *device = early_device;
- struct uart_port *port = device ? &device->port : NULL;
- int line;
- int ret;
-
- if (!port || (!port->membase && !port->iobase))
- return -ENODEV;
-
- line = serial8250_find_port(port);
- if (line < 0)
- return -ENODEV;
-
- ret = update_console_cmdline("uart", 8250,
- "ttyS", line, device->options);
- if (ret < 0)
- ret = update_console_cmdline("uart", 0,
- "ttyS", line, device->options);
-
- return ret;
-}
diff --git a/drivers/tty/serial/8250/8250_em.c b/drivers/tty/serial/8250/8250_em.c
index ae5eaed..0b63812 100644
--- a/drivers/tty/serial/8250/8250_em.c
+++ b/drivers/tty/serial/8250/8250_em.c
@@ -21,7 +21,6 @@
#include <linux/io.h>
#include <linux/module.h>
#include <linux/serial_8250.h>
-#include <linux/serial_core.h>
#include <linux/serial_reg.h>
#include <linux/platform_device.h>
#include <linux/clk.h>
diff --git a/drivers/tty/serial/8250/8250_hp300.c b/drivers/tty/serial/8250/8250_hp300.c
index b488208..2891958 100644
--- a/drivers/tty/serial/8250/8250_hp300.c
+++ b/drivers/tty/serial/8250/8250_hp300.c
@@ -10,7 +10,6 @@
#include <linux/string.h>
#include <linux/kernel.h>
#include <linux/serial.h>
-#include <linux/serial_core.h>
#include <linux/serial_8250.h>
#include <linux/delay.h>
#include <linux/dio.h>
diff --git a/drivers/tty/serial/8250/8250_omap.c b/drivers/tty/serial/8250/8250_omap.c
index fe6d2e51..9289999 100644
--- a/drivers/tty/serial/8250/8250_omap.c
+++ b/drivers/tty/serial/8250/8250_omap.c
@@ -11,7 +11,6 @@
#include <linux/io.h>
#include <linux/module.h>
#include <linux/serial_8250.h>
-#include <linux/serial_core.h>
#include <linux/serial_reg.h>
#include <linux/tty_flip.h>
#include <linux/platform_device.h>
diff --git a/drivers/tty/serial/8250/8250_pci.c b/drivers/tty/serial/8250/8250_pci.c
index 892eb32..08da4d3 100644
--- a/drivers/tty/serial/8250/8250_pci.c
+++ b/drivers/tty/serial/8250/8250_pci.c
@@ -21,12 +21,14 @@
#include <linux/serial_core.h>
#include <linux/8250_pci.h>
#include <linux/bitops.h>
+#include <linux/rational.h>
#include <asm/byteorder.h>
#include <asm/io.h>
#include <linux/dmaengine.h>
#include <linux/platform_data/dma-dw.h>
+#include <linux/platform_data/dma-hsu.h>
#include "8250.h"
@@ -1392,45 +1394,22 @@
struct ktermios *old)
{
unsigned int baud = tty_termios_baud_rate(termios);
- unsigned int m, n;
+ unsigned long fref = 100000000, fuart = baud * 16;
+ unsigned long w = BIT(15) - 1;
+ unsigned long m, n;
u32 reg;
+ /* Get Fuart closer to Fref */
+ fuart *= rounddown_pow_of_two(fref / fuart);
+
/*
* For baud rates 0.5M, 1M, 1.5M, 2M, 2.5M, 3M, 3.5M and 4M the
* dividers must be adjusted.
*
* uartclk = (m / n) * 100 MHz, where m <= n
*/
- switch (baud) {
- case 500000:
- case 1000000:
- case 2000000:
- case 4000000:
- m = 64;
- n = 100;
- p->uartclk = 64000000;
- break;
- case 3500000:
- m = 56;
- n = 100;
- p->uartclk = 56000000;
- break;
- case 1500000:
- case 3000000:
- m = 48;
- n = 100;
- p->uartclk = 48000000;
- break;
- case 2500000:
- m = 40;
- n = 100;
- p->uartclk = 40000000;
- break;
- default:
- m = 2304;
- n = 3125;
- p->uartclk = 73728000;
- }
+ rational_best_approximation(fuart, fref, w, w, &m, &n);
+ p->uartclk = fuart;
/* Reset the clock */
reg = (m << BYT_PRV_CLK_M_VAL_SHIFT) | (n << BYT_PRV_CLK_N_VAL_SHIFT);
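Editor's note: the Baytrail set_termios hunk replaces the fixed baud table with a computed fractional divider: the 16x UART clock is scaled up by a power of two so it sits as close to the 100 MHz reference as possible, then rational_best_approximation() picks m/n (each at most 15 bits) with uartclk = fref * m / n. The stand-alone check below works the 115200 baud case by hand, assuming the kernel helper returns the exactly reduced fraction whenever it fits within the 15-bit bound (it does here: 9216/15625).

#include <stdio.h>

static unsigned long gcd_ul(unsigned long a, unsigned long b)
{
	while (b) {
		unsigned long t = a % b;
		a = b;
		b = t;
	}
	return a;
}

static unsigned long rounddown_pow_of_two(unsigned long x)
{
	unsigned long r = 1;

	while (r * 2 <= x)
		r *= 2;
	return r;
}

int main(void)
{
	unsigned long fref = 100000000, baud = 115200;
	unsigned long fuart = baud * 16;
	unsigned long g, m, n;

	fuart *= rounddown_pow_of_two(fref / fuart);  /* 1843200 * 32 = 58982400 */

	g = gcd_ul(fuart, fref);
	m = fuart / g;                                /* 9216, fits in 15 bits   */
	n = fref / g;                                 /* 15625, fits in 15 bits  */

	/* The 8250 core then programs divisor = fuart / (16 * baud) = 32. */
	printf("fuart=%lu m=%lu n=%lu check=%lu\n", fuart, m, n, fref / n * m);
	return 0;
}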
@@ -1525,6 +1504,167 @@
return ret;
}
+#define INTEL_MID_UART_PS 0x30
+#define INTEL_MID_UART_MUL 0x34
+#define INTEL_MID_UART_DIV 0x38
+
+static void intel_mid_set_termios(struct uart_port *p,
+ struct ktermios *termios,
+ struct ktermios *old,
+ unsigned long fref)
+{
+ unsigned int baud = tty_termios_baud_rate(termios);
+ unsigned short ps = 16;
+ unsigned long fuart = baud * ps;
+ unsigned long w = BIT(24) - 1;
+ unsigned long mul, div;
+
+ if (fref < fuart) {
+ /* Find prescaler value that satisfies Fuart < Fref */
+ if (fref > baud)
+ ps = fref / baud; /* baud rate too high */
+ else
+ ps = 1; /* PLL case */
+ fuart = baud * ps;
+ } else {
+ /* Get Fuart closer to Fref */
+ fuart *= rounddown_pow_of_two(fref / fuart);
+ }
+
+ rational_best_approximation(fuart, fref, w, w, &mul, &div);
+ p->uartclk = fuart * 16 / ps; /* core uses ps = 16 always */
+
+ writel(ps, p->membase + INTEL_MID_UART_PS); /* set PS */
+ writel(mul, p->membase + INTEL_MID_UART_MUL); /* set MUL */
+ writel(div, p->membase + INTEL_MID_UART_DIV);
+
+ serial8250_do_set_termios(p, termios, old);
+}
+
+static void intel_mid_set_termios_38_4M(struct uart_port *p,
+ struct ktermios *termios,
+ struct ktermios *old)
+{
+ intel_mid_set_termios(p, termios, old, 38400000);
+}
+
+static void intel_mid_set_termios_50M(struct uart_port *p,
+ struct ktermios *termios,
+ struct ktermios *old)
+{
+ /*
+ * The UART clock is 50 MHz, and the baud rate comes from:
+ * baud = 50M * MUL / (DIV * PS * DLAB)
+ */
+ intel_mid_set_termios(p, termios, old, 50000000);
+}
+
+static bool intel_mid_dma_filter(struct dma_chan *chan, void *param)
+{
+ struct hsu_dma_slave *s = param;
+
+ if (s->dma_dev != chan->device->dev || s->chan_id != chan->chan_id)
+ return false;
+
+ chan->private = s;
+ return true;
+}
+
+static int intel_mid_serial_setup(struct serial_private *priv,
+ const struct pciserial_board *board,
+ struct uart_8250_port *port, int idx,
+ int index, struct pci_dev *dma_dev)
+{
+ struct device *dev = port->port.dev;
+ struct uart_8250_dma *dma;
+ struct hsu_dma_slave *tx_param, *rx_param;
+
+ dma = devm_kzalloc(dev, sizeof(*dma), GFP_KERNEL);
+ if (!dma)
+ return -ENOMEM;
+
+ tx_param = devm_kzalloc(dev, sizeof(*tx_param), GFP_KERNEL);
+ if (!tx_param)
+ return -ENOMEM;
+
+ rx_param = devm_kzalloc(dev, sizeof(*rx_param), GFP_KERNEL);
+ if (!rx_param)
+ return -ENOMEM;
+
+ rx_param->chan_id = index * 2 + 1;
+ tx_param->chan_id = index * 2;
+
+ dma->rxconf.src_maxburst = 64;
+ dma->txconf.dst_maxburst = 64;
+
+ rx_param->dma_dev = &dma_dev->dev;
+ tx_param->dma_dev = &dma_dev->dev;
+
+ dma->fn = intel_mid_dma_filter;
+ dma->rx_param = rx_param;
+ dma->tx_param = tx_param;
+
+ port->port.type = PORT_16750;
+ port->port.flags |= UPF_FIXED_PORT | UPF_FIXED_TYPE;
+ port->dma = dma;
+
+ return pci_default_setup(priv, board, port, idx);
+}
+
+#define PCI_DEVICE_ID_INTEL_PNW_UART1 0x081b
+#define PCI_DEVICE_ID_INTEL_PNW_UART2 0x081c
+#define PCI_DEVICE_ID_INTEL_PNW_UART3 0x081d
+
+static int pnw_serial_setup(struct serial_private *priv,
+ const struct pciserial_board *board,
+ struct uart_8250_port *port, int idx)
+{
+ struct pci_dev *pdev = priv->dev;
+ struct pci_dev *dma_dev;
+ int index;
+
+ switch (pdev->device) {
+ case PCI_DEVICE_ID_INTEL_PNW_UART1:
+ index = 0;
+ break;
+ case PCI_DEVICE_ID_INTEL_PNW_UART2:
+ index = 1;
+ break;
+ case PCI_DEVICE_ID_INTEL_PNW_UART3:
+ index = 2;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ dma_dev = pci_get_slot(pdev->bus, PCI_DEVFN(PCI_SLOT(pdev->devfn), 3));
+
+ port->port.set_termios = intel_mid_set_termios_50M;
+
+ return intel_mid_serial_setup(priv, board, port, idx, index, dma_dev);
+}
+
+#define PCI_DEVICE_ID_INTEL_TNG_UART 0x1191
+
+static int tng_serial_setup(struct serial_private *priv,
+ const struct pciserial_board *board,
+ struct uart_8250_port *port, int idx)
+{
+ struct pci_dev *pdev = priv->dev;
+ struct pci_dev *dma_dev;
+ int index = PCI_FUNC(pdev->devfn);
+
+ /* Currently no support for HSU port0 */
+ if (index-- == 0)
+ return -ENODEV;
+
+ dma_dev = pci_get_slot(pdev->bus, PCI_DEVFN(5, 0));
+
+ port->port.set_termios = intel_mid_set_termios_38_4M;
+
+ return intel_mid_serial_setup(priv, board, port, idx, index, dma_dev);
+}
+
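Editor's note: intel_mid_set_termios() layers a prescaler on top of the same fractional scheme for the Penwell/Tangier HSU blocks: when the requested 16x UART clock would exceed the reference, PS is raised so Fuart = baud * PS stays below Fref, and uartclk is reported back as fuart * 16 / ps so the generic 8250 divisor computation still lands on the requested baud. The stand-alone check below works the high-baud branch for an illustrative 3.5 Mbaud request against the 38.4 MHz reference used by tng_serial_setup.

#include <stdio.h>

int main(void)
{
	unsigned long fref = 38400000, baud = 3500000;
	unsigned long ps = 16;
	unsigned long fuart = baud * ps;       /* 56 MHz > Fref, so rescale     */
	unsigned long uartclk, divisor;

	if (fref < fuart) {
		ps = fref / baud;              /* 38.4 MHz / 3.5 MHz = 10       */
		fuart = baud * ps;             /* 35 MHz, now below Fref        */
	}

	/* MUL/DIV would approximate fuart/fref = 35/38.4 = 175/192 here.      */
	uartclk = fuart * 16 / ps;             /* 56 MHz reported to the core   */
	divisor = uartclk / (16 * baud);       /* generic 8250 divisor = 1      */

	printf("ps=%lu fuart=%lu uartclk=%lu divisor=%lu\n",
	       ps, fuart, uartclk, divisor);
	return 0;
}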
static int
pci_omegapci_setup(struct serial_private *priv,
const struct pciserial_board *board,
@@ -1550,97 +1690,73 @@
struct uart_8250_port *port, int idx)
{
struct pci_dev *pdev = priv->dev;
- unsigned long base;
- unsigned long iobase;
- unsigned long ciobase = 0;
u8 config_base;
- u32 bar_data[3];
+ u16 iobase;
- /*
- * Find each UARTs offset in PCI configuraion space
- */
- switch (idx) {
- case 0:
- config_base = 0x40;
- break;
- case 1:
- config_base = 0x48;
- break;
- case 2:
- config_base = 0x50;
- break;
- case 3:
- config_base = 0x58;
- break;
- case 4:
- config_base = 0x60;
- break;
- case 5:
- config_base = 0x68;
- break;
- case 6:
- config_base = 0x70;
- break;
- case 7:
- config_base = 0x78;
- break;
- case 8:
- config_base = 0x80;
- break;
- case 9:
- config_base = 0x88;
- break;
- case 10:
- config_base = 0x90;
- break;
- case 11:
- config_base = 0x98;
- break;
- default:
- /* Unknown number of ports, get out of here */
- return -EINVAL;
- }
+ config_base = 0x40 + 0x08 * idx;
- if (idx < 4) {
- base = pci_resource_start(priv->dev, 3);
- ciobase = (int)(base + (0x8 * idx));
- }
+ /* Get the io address from configuration space */
+ pci_read_config_word(pdev, config_base + 4, &iobase);
- /* Get the io address dispatch from the BIOS */
- pci_read_config_dword(pdev, 0x24, &bar_data[0]);
- pci_read_config_dword(pdev, 0x20, &bar_data[1]);
- pci_read_config_dword(pdev, 0x1c, &bar_data[2]);
-
- /* Calculate Real IO Port */
- iobase = (bar_data[idx/4] & 0xffffffe0) + (idx % 4) * 8;
-
- dev_dbg(&pdev->dev, "%s: idx=%d iobase=0x%lx ciobase=0x%lx config_base=0x%2x\n",
- __func__, idx, iobase, ciobase, config_base);
-
- /* Enable UART I/O port */
- pci_write_config_byte(pdev, config_base + 0x00, 0x01);
-
- /* Select 128-byte FIFO and 8x FIFO threshold */
- pci_write_config_byte(pdev, config_base + 0x01, 0x33);
-
- /* LSB UART */
- pci_write_config_byte(pdev, config_base + 0x04, (u8)(iobase & 0xff));
-
- /* MSB UART */
- pci_write_config_byte(pdev, config_base + 0x05, (u8)((iobase & 0xff00) >> 8));
-
- /* irq number, this usually fails, but the spec says to do it anyway. */
- pci_write_config_byte(pdev, config_base + 0x06, pdev->irq);
+ dev_dbg(&pdev->dev, "%s: idx=%d iobase=0x%x", __func__, idx, iobase);
port->port.iotype = UPIO_PORT;
port->port.iobase = iobase;
- port->port.mapbase = 0;
- port->port.membase = NULL;
- port->port.regshift = 0;
return 0;
}
+static int pci_fintek_init(struct pci_dev *dev)
+{
+ unsigned long iobase;
+ u32 max_port, i;
+ u32 bar_data[3];
+ u8 config_base;
+
+ switch (dev->device) {
+ case 0x1104: /* 4 ports */
+ case 0x1108: /* 8 ports */
+ max_port = dev->device & 0xff;
+ break;
+ case 0x1112: /* 12 ports */
+ max_port = 12;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ /* Get the io address dispatch from the BIOS */
+ pci_read_config_dword(dev, 0x24, &bar_data[0]);
+ pci_read_config_dword(dev, 0x20, &bar_data[1]);
+ pci_read_config_dword(dev, 0x1c, &bar_data[2]);
+
+ for (i = 0; i < max_port; ++i) {
+ /* UART0 configuration offset start from 0x40 */
+ config_base = 0x40 + 0x08 * i;
+
+ /* Calculate Real IO Port */
+ iobase = (bar_data[i / 4] & 0xffffffe0) + (i % 4) * 8;
+
+ /* Enable UART I/O port */
+ pci_write_config_byte(dev, config_base + 0x00, 0x01);
+
+ /* Select 128-byte FIFO and 8x FIFO threshold */
+ pci_write_config_byte(dev, config_base + 0x01, 0x33);
+
+ /* LSB UART */
+ pci_write_config_byte(dev, config_base + 0x04,
+ (u8)(iobase & 0xff));
+
+ /* MSB UART */
+ pci_write_config_byte(dev, config_base + 0x05,
+ (u8)((iobase & 0xff00) >> 8));
+
+ pci_write_config_byte(dev, config_base + 0x06, dev->irq);
+ }
+
+ return max_port;
+}
+
static int skip_tx_en_setup(struct serial_private *priv,
const struct pciserial_board *board,
struct uart_8250_port *port, int idx)
@@ -1989,6 +2105,34 @@
},
{
.vendor = PCI_VENDOR_ID_INTEL,
+ .device = PCI_DEVICE_ID_INTEL_PNW_UART1,
+ .subvendor = PCI_ANY_ID,
+ .subdevice = PCI_ANY_ID,
+ .setup = pnw_serial_setup,
+ },
+ {
+ .vendor = PCI_VENDOR_ID_INTEL,
+ .device = PCI_DEVICE_ID_INTEL_PNW_UART2,
+ .subvendor = PCI_ANY_ID,
+ .subdevice = PCI_ANY_ID,
+ .setup = pnw_serial_setup,
+ },
+ {
+ .vendor = PCI_VENDOR_ID_INTEL,
+ .device = PCI_DEVICE_ID_INTEL_PNW_UART3,
+ .subvendor = PCI_ANY_ID,
+ .subdevice = PCI_ANY_ID,
+ .setup = pnw_serial_setup,
+ },
+ {
+ .vendor = PCI_VENDOR_ID_INTEL,
+ .device = PCI_DEVICE_ID_INTEL_TNG_UART,
+ .subvendor = PCI_ANY_ID,
+ .subdevice = PCI_ANY_ID,
+ .setup = tng_serial_setup,
+ },
+ {
+ .vendor = PCI_VENDOR_ID_INTEL,
.device = PCI_DEVICE_ID_INTEL_BSW_UART1,
.subvendor = PCI_ANY_ID,
.subdevice = PCI_ANY_ID,
@@ -2653,6 +2797,7 @@
.subvendor = PCI_ANY_ID,
.subdevice = PCI_ANY_ID,
.setup = pci_fintek_setup,
+ .init = pci_fintek_init,
},
{
.vendor = 0x1c29,
@@ -2660,6 +2805,7 @@
.subvendor = PCI_ANY_ID,
.subdevice = PCI_ANY_ID,
.setup = pci_fintek_setup,
+ .init = pci_fintek_init,
},
{
.vendor = 0x1c29,
@@ -2667,6 +2813,7 @@
.subvendor = PCI_ANY_ID,
.subdevice = PCI_ANY_ID,
.setup = pci_fintek_setup,
+ .init = pci_fintek_init,
},
/*
@@ -2864,6 +3011,8 @@
pbn_ADDIDATA_PCIe_8_3906250,
pbn_ce4100_1_115200,
pbn_byt,
+ pbn_pnw,
+ pbn_tng,
pbn_qrk,
pbn_omegapci,
pbn_NETMOS9900_2s_115200,
@@ -3630,6 +3779,16 @@
.uart_offset = 0x80,
.reg_shift = 2,
},
+ [pbn_pnw] = {
+ .flags = FL_BASE0,
+ .num_ports = 1,
+ .base_baud = 115200,
+ },
+ [pbn_tng] = {
+ .flags = FL_BASE0,
+ .num_ports = 1,
+ .base_baud = 1843200,
+ },
[pbn_qrk] = {
.flags = FL_BASE0,
.num_ports = 1,
@@ -4006,41 +4165,41 @@
pci_disable_device(dev);
}
-#ifdef CONFIG_PM
-static int pciserial_suspend_one(struct pci_dev *dev, pm_message_t state)
+#ifdef CONFIG_PM_SLEEP
+static int pciserial_suspend_one(struct device *dev)
{
- struct serial_private *priv = pci_get_drvdata(dev);
+ struct pci_dev *pdev = to_pci_dev(dev);
+ struct serial_private *priv = pci_get_drvdata(pdev);
if (priv)
pciserial_suspend_ports(priv);
- pci_save_state(dev);
- pci_set_power_state(dev, pci_choose_state(dev, state));
return 0;
}
-static int pciserial_resume_one(struct pci_dev *dev)
+static int pciserial_resume_one(struct device *dev)
{
+ struct pci_dev *pdev = to_pci_dev(dev);
+ struct serial_private *priv = pci_get_drvdata(pdev);
int err;
- struct serial_private *priv = pci_get_drvdata(dev);
-
- pci_set_power_state(dev, PCI_D0);
- pci_restore_state(dev);
if (priv) {
/*
* The device may have been disabled. Re-enable it.
*/
- err = pci_enable_device(dev);
+ err = pci_enable_device(pdev);
/* FIXME: We cannot simply error out here */
if (err)
- dev_err(&dev->dev, "Unable to re-enable ports, trying to continue.\n");
+ dev_err(dev, "Unable to re-enable ports, trying to continue.\n");
pciserial_resume_ports(priv);
}
return 0;
}
#endif
+static SIMPLE_DEV_PM_OPS(pciserial_pm_ops, pciserial_suspend_one,
+ pciserial_resume_one);
+
static struct pci_device_id serial_pci_tbl[] = {
/* Advantech use PCI_DEVICE_ID_ADVANTECH_PCI3620 (0x3620) as 'PCI_SUBVENDOR_ID' */
{ PCI_VENDOR_ID_ADVANTECH, PCI_DEVICE_ID_ADVANTECH_PCI3620,
@@ -5363,6 +5522,26 @@
pbn_byt },
/*
+ * Intel Penwell
+ */
+ { PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_PNW_UART1,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+ pbn_pnw},
+ { PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_PNW_UART2,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+ pbn_pnw},
+ { PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_PNW_UART3,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+ pbn_pnw},
+
+ /*
+ * Intel Tangier
+ */
+ { PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_TNG_UART,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+ pbn_tng},
+
+ /*
* Intel Quark x1000
*/
{ PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_QRK_UART,
@@ -5510,10 +5689,9 @@
.name = "serial",
.probe = pciserial_init_one,
.remove = pciserial_remove_one,
-#ifdef CONFIG_PM
- .suspend = pciserial_suspend_one,
- .resume = pciserial_resume_one,
-#endif
+ .driver = {
+ .pm = &pciserial_pm_ops,
+ },
.id_table = serial_pci_tbl,
.err_handler = &serial8250_err_handler,
};
diff --git a/drivers/tty/serial/8250/Kconfig b/drivers/tty/serial/8250/Kconfig
index 6f7f2d7..c350703 100644
--- a/drivers/tty/serial/8250/Kconfig
+++ b/drivers/tty/serial/8250/Kconfig
@@ -108,6 +108,7 @@
tristate "8250/16550 PCI device support" if EXPERT
depends on SERIAL_8250 && PCI
default SERIAL_8250
+ select RATIONAL
help
This builds standard PCI serial support. You may be able to
disable this feature if you only need legacy serial support.
diff --git a/drivers/tty/serial/Kconfig b/drivers/tty/serial/Kconfig
index d2501f0..f8120c1 100644
--- a/drivers/tty/serial/Kconfig
+++ b/drivers/tty/serial/Kconfig
@@ -20,7 +20,7 @@
config SERIAL_AMBA_PL010
tristate "ARM AMBA PL010 serial port support"
- depends on ARM_AMBA && (BROKEN || !ARCH_VERSATILE)
+ depends on ARM_AMBA
select SERIAL_CORE
help
This selects the ARM(R) AMBA(R) PrimeCell PL010 UART. If you have
@@ -483,16 +483,6 @@
your boot loader (lilo or loadlin) about how to pass options to the
kernel at boot time.)
-config SERIAL_MFD_HSU
- tristate "Medfield High Speed UART support"
- depends on PCI
- select SERIAL_CORE
-
-config SERIAL_MFD_HSU_CONSOLE
- bool "Medfile HSU serial console support"
- depends on SERIAL_MFD_HSU=y
- select SERIAL_CORE_CONSOLE
-
config SERIAL_BFIN
tristate "Blackfin serial port support"
depends on BLACKFIN
@@ -835,7 +825,7 @@
config SERIAL_PMACZILOG
tristate "Mac or PowerMac z85c30 ESCC support"
- depends on (M68K && MAC) || (PPC_OF && PPC_PMAC)
+ depends on (M68K && MAC) || PPC_PMAC
select SERIAL_CORE
help
This driver supports the Zilog z85C30 serial ports found on
@@ -878,7 +868,7 @@
config SERIAL_CPM
tristate "CPM SCC/SMC serial port support"
- depends on CPM2 || 8xx
+ depends on CPM2 || CPM1
select SERIAL_CORE
help
This driver supports the SCC and SMC serial ports on Motorola
@@ -1054,7 +1044,7 @@
config SERIAL_MSM
bool "MSM on-chip serial port support"
- depends on ARCH_MSM || ARCH_QCOM
+ depends on ARCH_QCOM
select SERIAL_CORE
config SERIAL_MSM_CONSOLE
@@ -1063,18 +1053,6 @@
select SERIAL_CORE_CONSOLE
select SERIAL_EARLYCON
-config SERIAL_MSM_HS
- tristate "MSM UART High Speed: Serial Driver"
- depends on ARCH_MSM7X00A || ARCH_MSM7X30 || ARCH_QSD8X50
- select SERIAL_CORE
- help
- If you have a machine based on MSM family of SoCs, you
- can enable its onboard high speed serial port by enabling
- this option.
-
- Choose M here to compile it as a module. The module will be
- called msm_serial_hs.
-
config SERIAL_VT8500
bool "VIA VT8500 on-chip serial port support"
depends on ARCH_VT8500
@@ -1153,7 +1131,7 @@
config SERIAL_OF_PLATFORM_NWPSERIAL
tristate "NWP serial port driver"
- depends on PPC_OF && PPC_DCR
+ depends on PPC_DCR
select SERIAL_OF_PLATFORM
select SERIAL_CORE_CONSOLE
select SERIAL_CORE
diff --git a/drivers/tty/serial/Makefile b/drivers/tty/serial/Makefile
index 599be4b..c3ac3d9 100644
--- a/drivers/tty/serial/Makefile
+++ b/drivers/tty/serial/Makefile
@@ -62,7 +62,6 @@
obj-$(CONFIG_SERIAL_ATMEL) += atmel_serial.o
obj-$(CONFIG_SERIAL_UARTLITE) += uartlite.o
obj-$(CONFIG_SERIAL_MSM) += msm_serial.o
-obj-$(CONFIG_SERIAL_MSM_HS) += msm_serial_hs.o
obj-$(CONFIG_SERIAL_NETX) += netx-serial.o
obj-$(CONFIG_SERIAL_OF_PLATFORM) += of_serial.o
obj-$(CONFIG_SERIAL_OF_PLATFORM_NWPSERIAL) += nwpserial.o
@@ -78,7 +77,6 @@
obj-$(CONFIG_SERIAL_GRLIB_GAISLER_APBUART) += apbuart.o
obj-$(CONFIG_SERIAL_ALTERA_JTAGUART) += altera_jtaguart.o
obj-$(CONFIG_SERIAL_VT8500) += vt8500_serial.o
-obj-$(CONFIG_SERIAL_MFD_HSU) += mfd.o
obj-$(CONFIG_SERIAL_IFX6X60) += ifx6x60.o
obj-$(CONFIG_SERIAL_PCH_UART) += pch_uart.o
obj-$(CONFIG_SERIAL_MSM_SMD) += msm_smd_tty.o
diff --git a/drivers/tty/serial/amba-pl011.c b/drivers/tty/serial/amba-pl011.c
index 8d94c19..5a4e9d5 100644
--- a/drivers/tty/serial/amba-pl011.c
+++ b/drivers/tty/serial/amba-pl011.c
@@ -58,6 +58,7 @@
#include <linux/pinctrl/consumer.h>
#include <linux/sizes.h>
#include <linux/io.h>
+#include <linux/workqueue.h>
#define UART_NR 14
@@ -156,7 +157,9 @@
unsigned int lcrh_tx; /* vendor-specific */
unsigned int lcrh_rx; /* vendor-specific */
unsigned int old_cr; /* state during shutdown */
+ struct delayed_work tx_softirq_work;
bool autorts;
+ unsigned int tx_irq_seen; /* 0=none, 1=1, 2=2 or more */
char type[12];
#ifdef CONFIG_DMA_ENGINE
/* DMA stuff */
@@ -164,6 +167,7 @@
bool using_rx_dma;
struct pl011_dmarx_data dmarx;
struct pl011_dmatx_data dmatx;
+ bool dma_probed;
#endif
};
@@ -261,10 +265,11 @@
}
}
-static void pl011_dma_probe_initcall(struct device *dev, struct uart_amba_port *uap)
+static void pl011_dma_probe(struct uart_amba_port *uap)
{
/* DMA is the sole user of the platform data right now */
struct amba_pl011_data *plat = dev_get_platdata(uap->port.dev);
+ struct device *dev = uap->port.dev;
struct dma_slave_config tx_conf = {
.dst_addr = uap->port.mapbase + UART01x_DR,
.dst_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE,
@@ -275,9 +280,14 @@
struct dma_chan *chan;
dma_cap_mask_t mask;
- chan = dma_request_slave_channel(dev, "tx");
+ uap->dma_probed = true;
+ chan = dma_request_slave_channel_reason(dev, "tx");
+ if (IS_ERR(chan)) {
+ if (PTR_ERR(chan) == -EPROBE_DEFER) {
+ uap->dma_probed = false;
+ return;
+ }
- if (!chan) {
/* We need platform data */
if (!plat || !plat->dma_filter) {
dev_info(uap->port.dev, "no DMA platform data\n");
@@ -385,63 +395,17 @@
}
}
-#ifndef MODULE
-/*
- * Stack up the UARTs and let the above initcall be done at device
- * initcall time, because the serial driver is called as an arch
- * initcall, and at this time the DMA subsystem is not yet registered.
- * At this point the driver will switch over to using DMA where desired.
- */
-struct dma_uap {
- struct list_head node;
- struct uart_amba_port *uap;
- struct device *dev;
-};
-
-static LIST_HEAD(pl011_dma_uarts);
-
-static int __init pl011_dma_initcall(void)
-{
- struct list_head *node, *tmp;
-
- list_for_each_safe(node, tmp, &pl011_dma_uarts) {
- struct dma_uap *dmau = list_entry(node, struct dma_uap, node);
- pl011_dma_probe_initcall(dmau->dev, dmau->uap);
- list_del(node);
- kfree(dmau);
- }
- return 0;
-}
-
-device_initcall(pl011_dma_initcall);
-
-static void pl011_dma_probe(struct device *dev, struct uart_amba_port *uap)
-{
- struct dma_uap *dmau = kzalloc(sizeof(struct dma_uap), GFP_KERNEL);
- if (dmau) {
- dmau->uap = uap;
- dmau->dev = dev;
- list_add_tail(&dmau->node, &pl011_dma_uarts);
- }
-}
-#else
-static void pl011_dma_probe(struct device *dev, struct uart_amba_port *uap)
-{
- pl011_dma_probe_initcall(dev, uap);
-}
-#endif
-
static void pl011_dma_remove(struct uart_amba_port *uap)
{
- /* TODO: remove the initcall if it has not yet executed */
if (uap->dmatx.chan)
dma_release_channel(uap->dmatx.chan);
if (uap->dmarx.chan)
dma_release_channel(uap->dmarx.chan);
}
-/* Forward declare this for the refill routine */
+/* Forward declare these for the refill routine */
static int pl011_dma_tx_refill(struct uart_amba_port *uap);
+static void pl011_start_tx_pio(struct uart_amba_port *uap);
/*
* The current DMA TX buffer has been sent.
@@ -479,14 +443,13 @@
return;
}
- if (pl011_dma_tx_refill(uap) <= 0) {
+ if (pl011_dma_tx_refill(uap) <= 0)
/*
* We didn't queue a DMA buffer for some reason, but we
* have data pending to be sent. Re-enable the TX IRQ.
*/
- uap->im |= UART011_TXIM;
- writew(uap->im, uap->port.membase + UART011_IMSC);
- }
+ pl011_start_tx_pio(uap);
+
spin_unlock_irqrestore(&uap->port.lock, flags);
}
@@ -664,12 +627,10 @@
if (!uap->dmatx.queued) {
if (pl011_dma_tx_refill(uap) > 0) {
uap->im &= ~UART011_TXIM;
- ret = true;
- } else {
- uap->im |= UART011_TXIM;
+ writew(uap->im, uap->port.membase +
+ UART011_IMSC);
+ } else
ret = false;
- }
- writew(uap->im, uap->port.membase + UART011_IMSC);
} else if (!(uap->dmacr & UART011_TXDMAE)) {
uap->dmacr |= UART011_TXDMAE;
writew(uap->dmacr,
@@ -1021,6 +982,9 @@
{
int ret;
+ if (!uap->dma_probed)
+ pl011_dma_probe(uap);
+
if (!uap->dmatx.chan)
return;
@@ -1142,7 +1106,7 @@
#else
/* Blank functions if the DMA engine is not available */
-static inline void pl011_dma_probe(struct device *dev, struct uart_amba_port *uap)
+static inline void pl011_dma_probe(struct uart_amba_port *uap)
{
}
@@ -1208,15 +1172,24 @@
pl011_dma_tx_stop(uap);
}
+static bool pl011_tx_chars(struct uart_amba_port *uap);
+
+/* Start TX with programmed I/O only (no DMA) */
+static void pl011_start_tx_pio(struct uart_amba_port *uap)
+{
+ uap->im |= UART011_TXIM;
+ writew(uap->im, uap->port.membase + UART011_IMSC);
+ if (!uap->tx_irq_seen)
+ pl011_tx_chars(uap);
+}
+
static void pl011_start_tx(struct uart_port *port)
{
struct uart_amba_port *uap =
container_of(port, struct uart_amba_port, port);
- if (!pl011_dma_tx_start(uap)) {
- uap->im |= UART011_TXIM;
- writew(uap->im, uap->port.membase + UART011_IMSC);
- }
+ if (!pl011_dma_tx_start(uap))
+ pl011_start_tx_pio(uap);
}
static void pl011_stop_rx(struct uart_port *port)
@@ -1274,40 +1247,87 @@
spin_lock(&uap->port.lock);
}
-static void pl011_tx_chars(struct uart_amba_port *uap)
+/*
+ * Transmit a character
+ * There must be at least one free entry in the TX FIFO to accept the char.
+ *
+ * Returns true if the FIFO might have space in it afterwards;
+ * returns false if the FIFO definitely became full.
+ */
+static bool pl011_tx_char(struct uart_amba_port *uap, unsigned char c)
+{
+ writew(c, uap->port.membase + UART01x_DR);
+ uap->port.icount.tx++;
+
+ if (likely(uap->tx_irq_seen > 1))
+ return true;
+
+ return !(readw(uap->port.membase + UART01x_FR) & UART01x_FR_TXFF);
+}
+
+static bool pl011_tx_chars(struct uart_amba_port *uap)
{
struct circ_buf *xmit = &uap->port.state->xmit;
int count;
+ if (unlikely(uap->tx_irq_seen < 2))
+ /*
+ * Initial FIFO fill level unknown: we must check TXFF
+ * after each write, so just try to fill up the FIFO.
+ */
+ count = uap->fifosize;
+ else /* tx_irq_seen >= 2 */
+ /*
+ * FIFO initially at least half-empty, so we can simply
+ * write half the FIFO without polling TXFF.
+
+ * Note: the *first* TX IRQ can still race with
+ * pl011_start_tx_pio(), which can result in the FIFO
+ * being fuller than expected in that case.
+ */
+ count = uap->fifosize >> 1;
+
+ /*
+ * If the FIFO is full we're guaranteed a TX IRQ at some later point,
+ * and can't transmit immediately in any case:
+ */
+ if (unlikely(uap->tx_irq_seen < 2 &&
+ readw(uap->port.membase + UART01x_FR) & UART01x_FR_TXFF))
+ return false;
+
if (uap->port.x_char) {
- writew(uap->port.x_char, uap->port.membase + UART01x_DR);
- uap->port.icount.tx++;
+ pl011_tx_char(uap, uap->port.x_char);
uap->port.x_char = 0;
- return;
+ --count;
}
if (uart_circ_empty(xmit) || uart_tx_stopped(&uap->port)) {
pl011_stop_tx(&uap->port);
- return;
+ goto done;
}
/* If we are using DMA mode, try to send some characters. */
if (pl011_dma_tx_irq(uap))
- return;
+ goto done;
- count = uap->fifosize >> 1;
- do {
- writew(xmit->buf[xmit->tail], uap->port.membase + UART01x_DR);
+ while (count-- > 0 && pl011_tx_char(uap, xmit->buf[xmit->tail])) {
xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1);
- uap->port.icount.tx++;
if (uart_circ_empty(xmit))
break;
- } while (--count > 0);
+ }
if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS)
uart_write_wakeup(&uap->port);
- if (uart_circ_empty(xmit))
+ if (uart_circ_empty(xmit)) {
pl011_stop_tx(&uap->port);
+ goto done;
+ }
+
+ if (unlikely(!uap->tx_irq_seen))
+ schedule_delayed_work(&uap->tx_softirq_work, uap->port.timeout);
+
+done:
+ return false;
}
static void pl011_modem_status(struct uart_amba_port *uap)
@@ -1334,6 +1354,28 @@
wake_up_interruptible(&uap->port.state->port.delta_msr_wait);
}
+static void pl011_tx_softirq(struct work_struct *work)
+{
+ struct delayed_work *dwork = to_delayed_work(work);
+ struct uart_amba_port *uap =
+ container_of(dwork, struct uart_amba_port, tx_softirq_work);
+
+ spin_lock(&uap->port.lock);
+ while (pl011_tx_chars(uap)) ;
+ spin_unlock(&uap->port.lock);
+}
+
+static void pl011_tx_irq_seen(struct uart_amba_port *uap)
+{
+ if (likely(uap->tx_irq_seen > 1))
+ return;
+
+ uap->tx_irq_seen++;
+ if (uap->tx_irq_seen < 2)
+ /* first TX IRQ */
+ cancel_delayed_work(&uap->tx_softirq_work);
+}
+
static irqreturn_t pl011_int(int irq, void *dev_id)
{
struct uart_amba_port *uap = dev_id;
@@ -1372,8 +1414,10 @@
if (status & (UART011_DSRMIS|UART011_DCDMIS|
UART011_CTSMIS|UART011_RIMIS))
pl011_modem_status(uap);
- if (status & UART011_TXIS)
+ if (status & UART011_TXIS) {
+ pl011_tx_irq_seen(uap);
pl011_tx_chars(uap);
+ }
if (pass_counter-- == 0)
break;
@@ -1577,7 +1621,7 @@
{
struct uart_amba_port *uap =
container_of(port, struct uart_amba_port, port);
- unsigned int cr, lcr_h, fbrd, ibrd;
+ unsigned int cr;
int retval;
retval = pl011_hwinit(port);
@@ -1595,30 +1639,8 @@
writew(uap->vendor->ifls, uap->port.membase + UART011_IFLS);
- /*
- * Provoke TX FIFO interrupt into asserting. Taking care to preserve
- * baud rate and data format specified by FBRD, IBRD and LCRH as the
- * UART may already be in use as a console.
- */
spin_lock_irq(&uap->port.lock);
- fbrd = readw(uap->port.membase + UART011_FBRD);
- ibrd = readw(uap->port.membase + UART011_IBRD);
- lcr_h = readw(uap->port.membase + uap->lcrh_rx);
-
- cr = UART01x_CR_UARTEN | UART011_CR_TXE | UART011_CR_LBE;
- writew(cr, uap->port.membase + UART011_CR);
- writew(0, uap->port.membase + UART011_FBRD);
- writew(1, uap->port.membase + UART011_IBRD);
- pl011_write_lcr_h(uap, 0);
- writew(0, uap->port.membase + UART01x_DR);
- while (readw(uap->port.membase + UART01x_FR) & UART01x_FR_BUSY)
- barrier();
-
- writew(fbrd, uap->port.membase + UART011_FBRD);
- writew(ibrd, uap->port.membase + UART011_IBRD);
- pl011_write_lcr_h(uap, lcr_h);
-
/* restore RTS and DTR */
cr = uap->old_cr & (UART011_CR_RTS | UART011_CR_DTR);
cr |= UART01x_CR_UARTEN | UART011_CR_RXE | UART011_CR_TXE;
@@ -1672,13 +1694,15 @@
container_of(port, struct uart_amba_port, port);
unsigned int cr;
+ cancel_delayed_work_sync(&uap->tx_softirq_work);
+
/*
* disable all interrupts
*/
spin_lock_irq(&uap->port.lock);
uap->im = 0;
writew(uap->im, uap->port.membase + UART011_IMSC);
- writew(0xffff, uap->port.membase + UART011_ICR);
+ writew(0xffff & ~UART011_TXIS, uap->port.membase + UART011_ICR);
spin_unlock_irq(&uap->port.lock);
pl011_dma_shutdown(uap);
@@ -2218,7 +2242,7 @@
uap->port.ops = &amba_pl011_pops;
uap->port.flags = UPF_BOOT_AUTOCONF;
uap->port.line = i;
- pl011_dma_probe(&dev->dev, uap);
+ INIT_DELAYED_WORK(&uap->tx_softirq_work, pl011_tx_softirq);
/* Ensure interrupts from this UART are masked and cleared */
writew(0, uap->port.membase + UART011_IMSC);
@@ -2233,7 +2257,8 @@
if (!amba_reg.state) {
ret = uart_register_driver(&amba_reg);
if (ret < 0) {
- pr_err("Failed to register AMBA-PL011 driver\n");
+ dev_err(&dev->dev,
+ "Failed to register AMBA-PL011 driver\n");
return ret;
}
}
@@ -2242,7 +2267,6 @@
if (ret) {
amba_ports[i] = NULL;
uart_unregister_driver(&amba_reg);
- pl011_dma_remove(uap);
}
return ret;
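
[Editor's note] The amba-pl011 rework above drops the old startup trick of provoking a TX interrupt through loopback and instead starts TX directly in PIO, keeping a delayed-work poller as a safety net until the first genuine TX interrupt is seen. A condensed sketch of that pattern — foo_* names are placeholders, and the driver's 0/1/2 tx_irq_seen states are simplified to first-IRQ detection:

#include <linux/interrupt.h>
#include <linux/jiffies.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>

struct foo_uart {
	spinlock_t		lock;
	struct delayed_work	tx_poll;	/* safety net until a TX IRQ is seen */
	unsigned int		tx_irq_seen;
};

static bool foo_tx_chars(struct foo_uart *u);		/* assumed: true while data remains */
static void foo_enable_tx_irq(struct foo_uart *u);	/* assumed register poke */

static void foo_tx_poll(struct work_struct *work)
{
	struct foo_uart *u = container_of(to_delayed_work(work),
					  struct foo_uart, tx_poll);

	spin_lock(&u->lock);
	while (foo_tx_chars(u))
		;
	spin_unlock(&u->lock);
}

static void foo_start_tx_pio(struct foo_uart *u)
{
	foo_enable_tx_irq(u);
	if (!u->tx_irq_seen) {
		/* TX IRQ not proven yet: push characters now and arm the
		 * poller in case the interrupt never arrives. */
		foo_tx_chars(u);
		schedule_delayed_work(&u->tx_poll, msecs_to_jiffies(20));
	}
}

static irqreturn_t foo_tx_irq(int irq, void *dev_id)
{
	struct foo_uart *u = dev_id;

	spin_lock(&u->lock);
	if (!u->tx_irq_seen++)
		cancel_delayed_work(&u->tx_poll);	/* interrupts work, drop the net */
	foo_tx_chars(u);
	spin_unlock(&u->lock);
	return IRQ_HANDLED;
}

In the driver itself the work item is set up in probe with INIT_DELAYED_WORK() and torn down in shutdown with cancel_delayed_work_sync(), as the hunks above show.
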
diff --git a/drivers/tty/serial/apbuart.c b/drivers/tty/serial/apbuart.c
index 4f0f95e..f3af317 100644
--- a/drivers/tty/serial/apbuart.c
+++ b/drivers/tty/serial/apbuart.c
@@ -572,7 +572,7 @@
return 0;
}
-static struct of_device_id apbuart_match[] = {
+static const struct of_device_id apbuart_match[] = {
{
.name = "GAISLER_APBUART",
},
diff --git a/drivers/tty/serial/ar933x_uart.c b/drivers/tty/serial/ar933x_uart.c
index 77fc9fa..1519d2c 100644
--- a/drivers/tty/serial/ar933x_uart.c
+++ b/drivers/tty/serial/ar933x_uart.c
@@ -649,7 +649,7 @@
id = 0;
}
- if (id > CONFIG_SERIAL_AR933X_NR_UARTS)
+ if (id >= CONFIG_SERIAL_AR933X_NR_UARTS)
return -EINVAL;
irq_res = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
diff --git a/drivers/tty/serial/atmel_serial.c b/drivers/tty/serial/atmel_serial.c
index 4e959c4..d58fe47 100644
--- a/drivers/tty/serial/atmel_serial.c
+++ b/drivers/tty/serial/atmel_serial.c
@@ -855,7 +855,7 @@
spin_lock_init(&atmel_port->lock_tx);
sg_init_table(&atmel_port->sg_tx, 1);
/* UART circular tx buffer is an aligned page. */
- BUG_ON((int)port->state->xmit.buf & ~PAGE_MASK);
+ BUG_ON(!PAGE_ALIGNED(port->state->xmit.buf));
sg_set_page(&atmel_port->sg_tx,
virt_to_page(port->state->xmit.buf),
UART_XMIT_SIZE,
@@ -1034,10 +1034,10 @@
spin_lock_init(&atmel_port->lock_rx);
sg_init_table(&atmel_port->sg_rx, 1);
/* UART circular rx buffer is an aligned page. */
- BUG_ON((int)port->state->xmit.buf & ~PAGE_MASK);
+ BUG_ON(!PAGE_ALIGNED(ring->buf));
sg_set_page(&atmel_port->sg_rx,
virt_to_page(ring->buf),
- ATMEL_SERIAL_RINGSIZE,
+ sizeof(struct atmel_uart_char) * ATMEL_SERIAL_RINGSIZE,
(int)ring->buf & ~PAGE_MASK);
nent = dma_map_sg(port->dev,
&atmel_port->sg_rx,
@@ -1554,7 +1554,7 @@
spin_unlock(&port->lock);
}
-static int atmel_init_property(struct atmel_uart_port *atmel_port,
+static void atmel_init_property(struct atmel_uart_port *atmel_port,
struct platform_device *pdev)
{
struct device_node *np = pdev->dev.of_node;
@@ -1595,7 +1595,6 @@
atmel_port->use_dma_tx = false;
}
- return 0;
}
static void atmel_init_rs485(struct uart_port *port,
@@ -1777,10 +1776,13 @@
if (retval)
goto free_irq;
+ tasklet_enable(&atmel_port->tasklet);
+
/*
* Initialize DMA (if necessary)
*/
atmel_init_property(atmel_port, pdev);
+ atmel_set_ops(port);
if (atmel_port->prepare_rx) {
retval = atmel_port->prepare_rx(port);
@@ -1879,6 +1881,7 @@
* Clear out any scheduled tasklets before
* we destroy the buffers
*/
+ tasklet_disable(&atmel_port->tasklet);
tasklet_kill(&atmel_port->tasklet);
/*
@@ -2256,8 +2259,8 @@
struct uart_port *port = &atmel_port->uart;
struct atmel_uart_data *pdata = dev_get_platdata(&pdev->dev);
- if (!atmel_init_property(atmel_port, pdev))
- atmel_set_ops(port);
+ atmel_init_property(atmel_port, pdev);
+ atmel_set_ops(port);
atmel_init_rs485(port, pdev);
@@ -2272,6 +2275,7 @@
tasklet_init(&atmel_port->tasklet, atmel_tasklet_func,
(unsigned long)port);
+ tasklet_disable(&atmel_port->tasklet);
memset(&atmel_port->rx_ring, 0, sizeof(atmel_port->rx_ring));
@@ -2581,8 +2585,8 @@
struct gpio_desc *gpiod;
p->gpios = mctrl_gpio_init(dev, 0);
- if (IS_ERR_OR_NULL(p->gpios))
- return -1;
+ if (IS_ERR(p->gpios))
+ return PTR_ERR(p->gpios);
for (i = 0; i < UART_GPIO_MAX; i++) {
gpiod = mctrl_gpio_to_gpiod(p->gpios, i);
@@ -2635,9 +2639,10 @@
spin_lock_init(&port->lock_suspended);
ret = atmel_init_gpios(port, &pdev->dev);
- if (ret < 0)
- dev_err(&pdev->dev, "%s",
- "Failed to initialize GPIOs. The serial port may not work as expected");
+ if (ret < 0) {
+ dev_err(&pdev->dev, "Failed to initialize GPIOs.");
+ goto err;
+ }
ret = atmel_init_port(port, pdev);
if (ret)
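
[Editor's note] Two of the atmel fixes above share a theme: treat mctrl_gpio_init() failures as real error codes (so, for instance, -EPROBE_DEFER propagates) and abort probe instead of limping on; the clps711x hunk below applies the same check. A minimal sketch of the pattern, with foo_* names assumed and the prototype normally coming from drivers/tty/serial/serial_mctrl_gpio.h:

#include <linux/device.h>
#include <linux/err.h>

struct mctrl_gpios;				/* opaque here; see serial_mctrl_gpio.h */
struct mctrl_gpios *mctrl_gpio_init(struct device *dev, unsigned int idx);

struct foo_port {
	struct mctrl_gpios *gpios;
};

static int foo_probe_gpios(struct foo_port *p, struct device *dev)
{
	p->gpios = mctrl_gpio_init(dev, 0);
	/* Hand real errors back to the core instead of collapsing them
	 * to -1; anything that is not an ERR_PTR is accepted as-is. */
	if (IS_ERR(p->gpios))
		return PTR_ERR(p->gpios);
	return 0;
}
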
diff --git a/drivers/tty/serial/bcm63xx_uart.c b/drivers/tty/serial/bcm63xx_uart.c
index 01d83df..681e0f3 100644
--- a/drivers/tty/serial/bcm63xx_uart.c
+++ b/drivers/tty/serial/bcm63xx_uart.c
@@ -854,7 +854,7 @@
ret = uart_add_one_port(&bcm_uart_driver, port);
if (ret) {
- ports[pdev->id].membase = 0;
+ ports[pdev->id].membase = NULL;
return ret;
}
platform_set_drvdata(pdev, port);
@@ -868,7 +868,7 @@
port = platform_get_drvdata(pdev);
uart_remove_one_port(&bcm_uart_driver, port);
/* mark port as free */
- ports[pdev->id].membase = 0;
+ ports[pdev->id].membase = NULL;
return 0;
}
diff --git a/drivers/tty/serial/bfin_uart.c b/drivers/tty/serial/bfin_uart.c
index 43b3e2c..155781e 100644
--- a/drivers/tty/serial/bfin_uart.c
+++ b/drivers/tty/serial/bfin_uart.c
@@ -464,6 +464,7 @@
int x_pos, pos;
unsigned long flags;
+ dma_disable_irq_nosync(uart->rx_dma_channel);
spin_lock_irqsave(&uart->rx_lock, flags);
/* 2D DMA RX buffer ring is used. Because curr_y_count and
@@ -496,6 +497,7 @@
}
spin_unlock_irqrestore(&uart->rx_lock, flags);
+ dma_enable_irq(uart->rx_dma_channel);
mod_timer(&(uart->rx_dma_timer), jiffies + DMA_RX_FLUSH_JIFFIES);
}
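
[Editor's note] The bfin_uart hunk brackets the timer-driven RX flush with dma_disable_irq_nosync()/dma_enable_irq() (Blackfin DMA helpers), presumably so the DMA completion handler cannot interleave with the flush while it holds rx_lock. A generic-API sketch of the same idea using the core disable_irq_nosync()/enable_irq() calls; foo_* names are assumed:

#include <linux/interrupt.h>
#include <linux/spinlock.h>

struct foo_port {
	spinlock_t	rx_lock;
	int		rx_irq;
};

static void foo_drain_rx(struct foo_port *p);	/* assumed */

static void foo_timer_flush(struct foo_port *p)
{
	unsigned long flags;

	/* Mask the RX completion IRQ so its handler cannot run against
	 * the same data while the timer path drains it. */
	disable_irq_nosync(p->rx_irq);
	spin_lock_irqsave(&p->rx_lock, flags);
	foo_drain_rx(p);
	spin_unlock_irqrestore(&p->rx_lock, flags);
	enable_irq(p->rx_irq);
}
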
diff --git a/drivers/tty/serial/clps711x.c b/drivers/tty/serial/clps711x.c
index 6e11c27..d5d2dd7 100644
--- a/drivers/tty/serial/clps711x.c
+++ b/drivers/tty/serial/clps711x.c
@@ -501,6 +501,8 @@
platform_set_drvdata(pdev, s);
s->gpios = mctrl_gpio_init(&pdev->dev, 0);
+ if (IS_ERR(s->gpios))
+ return PTR_ERR(s->gpios);
ret = uart_add_one_port(&clps711x_uart, &s->port);
if (ret)
diff --git a/drivers/tty/serial/cpm_uart/Makefile b/drivers/tty/serial/cpm_uart/Makefile
index e072724..896a5d5 100644
--- a/drivers/tty/serial/cpm_uart/Makefile
+++ b/drivers/tty/serial/cpm_uart/Makefile
@@ -6,6 +6,6 @@
# Select the correct platform objects.
cpm_uart-objs-$(CONFIG_CPM2) += cpm_uart_cpm2.o
-cpm_uart-objs-$(CONFIG_8xx) += cpm_uart_cpm1.o
+cpm_uart-objs-$(CONFIG_CPM1) += cpm_uart_cpm1.o
cpm_uart-objs := cpm_uart_core.o $(cpm_uart-objs-y)
diff --git a/drivers/tty/serial/cpm_uart/cpm_uart.h b/drivers/tty/serial/cpm_uart/cpm_uart.h
index cf34d26..0ad027b 100644
--- a/drivers/tty/serial/cpm_uart/cpm_uart.h
+++ b/drivers/tty/serial/cpm_uart/cpm_uart.h
@@ -19,7 +19,7 @@
#if defined(CONFIG_CPM2)
#include "cpm_uart_cpm2.h"
-#elif defined(CONFIG_8xx)
+#elif defined(CONFIG_CPM1)
#include "cpm_uart_cpm1.h"
#endif
diff --git a/drivers/tty/serial/cpm_uart/cpm_uart_core.c b/drivers/tty/serial/cpm_uart/cpm_uart_core.c
index fddb1fd..08431ad 100644
--- a/drivers/tty/serial/cpm_uart/cpm_uart_core.c
+++ b/drivers/tty/serial/cpm_uart/cpm_uart_core.c
@@ -1435,7 +1435,7 @@
return uart_remove_one_port(&cpm_reg, &pinfo->port);
}
-static struct of_device_id cpm_uart_match[] = {
+static const struct of_device_id cpm_uart_match[] = {
{
.compatible = "fsl,cpm1-smc-uart",
},
diff --git a/drivers/tty/serial/earlycon.c b/drivers/tty/serial/earlycon.c
index 64fe25a..5fdc9f3 100644
--- a/drivers/tty/serial/earlycon.c
+++ b/drivers/tty/serial/earlycon.c
@@ -10,6 +10,9 @@
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
#include <linux/console.h>
#include <linux/kernel.h>
#include <linux/init.h>
@@ -34,6 +37,10 @@
.con = &early_con,
};
+extern struct earlycon_id __earlycon_table[];
+static const struct earlycon_id __earlycon_table_sentinel
+ __used __section(__earlycon_table_end);
+
static const struct of_device_id __earlycon_of_table_sentinel
__used __section(__earlycon_of_table_end);
@@ -54,44 +61,29 @@
return base;
}
-static int __init parse_options(struct earlycon_device *device,
- char *options)
+static int __init parse_options(struct earlycon_device *device, char *options)
{
struct uart_port *port = &device->port;
- int mmio, mmio32, length;
+ int length;
unsigned long addr;
- if (!options)
- return -ENODEV;
+ if (uart_parse_earlycon(options, &port->iotype, &addr, &options))
+ return -EINVAL;
- mmio = !strncmp(options, "mmio,", 5);
- mmio32 = !strncmp(options, "mmio32,", 7);
- if (mmio || mmio32) {
- port->iotype = (mmio ? UPIO_MEM : UPIO_MEM32);
- options += mmio ? 5 : 7;
- addr = simple_strtoul(options, NULL, 0);
+ switch (port->iotype) {
+ case UPIO_MEM32:
+ port->regshift = 2; /* fall-through */
+ case UPIO_MEM:
port->mapbase = addr;
- if (mmio32)
- port->regshift = 2;
- } else if (!strncmp(options, "io,", 3)) {
- port->iotype = UPIO_PORT;
- options += 3;
- addr = simple_strtoul(options, NULL, 0);
+ break;
+ case UPIO_PORT:
port->iobase = addr;
- mmio = 0;
- } else if (!strncmp(options, "0x", 2)) {
- port->iotype = UPIO_MEM;
- addr = simple_strtoul(options, NULL, 0);
- port->mapbase = addr;
- } else {
+ break;
+ default:
return -EINVAL;
}
- port->uartclk = BASE_BAUD * 16;
-
- options = strchr(options, ',');
if (options) {
- options++;
device->baud = simple_strtoul(options, NULL, 0);
length = min(strcspn(options, " ") + 1,
(size_t)(sizeof(device->options)));
@@ -100,7 +92,7 @@
if (port->iotype == UPIO_MEM || port->iotype == UPIO_MEM32)
pr_info("Early serial console at MMIO%s 0x%llx (options '%s')\n",
- mmio32 ? "32" : "",
+ (port->iotype == UPIO_MEM32) ? "32" : "",
(unsigned long long)port->mapbase,
device->options);
else
@@ -111,34 +103,21 @@
return 0;
}
-int __init setup_earlycon(char *buf, const char *match,
- int (*setup)(struct earlycon_device *, const char *))
+static int __init register_earlycon(char *buf, const struct earlycon_id *match)
{
int err;
- size_t len;
struct uart_port *port = &early_console_dev.port;
- if (!buf || !match || !setup)
- return 0;
-
- len = strlen(match);
- if (strncmp(buf, match, len))
- return 0;
- if (buf[len] && (buf[len] != ','))
- return 0;
-
- buf += len + 1;
-
- err = parse_options(&early_console_dev, buf);
/* On parsing error, pass the options buf to the setup function */
- if (!err)
+ if (buf && !parse_options(&early_console_dev, buf))
buf = NULL;
+ port->uartclk = BASE_BAUD * 16;
if (port->mapbase)
port->membase = earlycon_map(port->mapbase, 64);
early_console_dev.con->data = &early_console_dev;
- err = setup(&early_console_dev, buf);
+ err = match->setup(&early_console_dev, buf);
if (err < 0)
return err;
if (!early_console_dev.con->write)
@@ -148,6 +127,77 @@
return 0;
}
+/**
+ * setup_earlycon - match and register earlycon console
+ * @buf: earlycon param string
+ *
+ * Registers the earlycon console matching the earlycon specified
+ * in the param string @buf. Acceptable param strings are of the form
+ * <name>,io|mmio|mmio32,<addr>,<options>
+ * <name>,0x<addr>,<options>
+ * <name>,<options>
+ * <name>
+ *
+ * Only for the third form does the earlycon setup() method receive the
+ * <options> string in the 'options' parameter; all other forms set
+ * the parameter to NULL.
+ *
+ * Returns 0 if an attempt to register the earlycon was made,
+ * otherwise negative error code
+ */
+int __init setup_earlycon(char *buf)
+{
+ const struct earlycon_id *match;
+
+ if (!buf || !buf[0])
+ return -EINVAL;
+
+ if (early_con.flags & CON_ENABLED)
+ return -EALREADY;
+
+ for (match = __earlycon_table; match->name[0]; match++) {
+ size_t len = strlen(match->name);
+
+ if (strncmp(buf, match->name, len))
+ continue;
+
+ if (buf[len]) {
+ if (buf[len] != ',')
+ continue;
+ buf += len + 1;
+ } else
+ buf = NULL;
+
+ return register_earlycon(buf, match);
+ }
+
+ return -ENOENT;
+}
+
+/* early_param wrapper for setup_earlycon() */
+static int __init param_setup_earlycon(char *buf)
+{
+ int err;
+
+ /*
+ * Just 'earlycon' is a valid param for devicetree earlycons;
+ * don't generate a warning from parse_early_params() in that case
+ */
+ if (!buf || !buf[0])
+ return 0;
+
+ err = setup_earlycon(buf);
+ if (err == -ENOENT) {
+ pr_warn("no match for %s\n", buf);
+ err = 0;
+ } else if (err == -EALREADY) {
+ pr_warn("already registered\n");
+ err = 0;
+ }
+ return err;
+}
+early_param("earlycon", param_setup_earlycon);
+
int __init of_setup_earlycon(unsigned long addr,
int (*setup)(struct earlycon_device *, const char *))
{
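
[Editor's note] The earlycon rework replaces the per-driver setup_earlycon(buf, match, setup) calls with a single table walk: drivers register an earlycon_id by name in __earlycon_table, and setup_earlycon() matches the "earlycon=" parameter against that table. A hedged sketch of the driver side — "foo" and its write hook are illustrative, not from the patch:

#include <linux/serial_core.h>

static void foo_early_write(struct console *con, const char *s,
			    unsigned int count);	/* assumed output hook */

static int __init foo_early_setup(struct earlycon_device *device,
				  const char *options)
{
	if (!device->port.membase)
		return -ENODEV;

	device->con->write = foo_early_write;
	return 0;
}
/*
 * Drops an earlycon_id named "foo" into __earlycon_table, so a boot
 * parameter such as "earlycon=foo,mmio32,0xff000000,115200" is matched
 * and routed to foo_early_setup() by the table walk added above.
 */
EARLYCON_DECLARE(foo, foo_early_setup);

Per the new kernel-doc, only the "<name>,<options>" form passes a non-NULL options string to the setup() method; the other forms hand it NULL.
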
diff --git a/drivers/tty/serial/fsl_lpuart.c b/drivers/tty/serial/fsl_lpuart.c
index 3ad1458..08ce76f 100644
--- a/drivers/tty/serial/fsl_lpuart.c
+++ b/drivers/tty/serial/fsl_lpuart.c
@@ -257,7 +257,7 @@
struct timer_list lpuart_timer;
};
-static struct of_device_id lpuart_dt_ids[] = {
+static const struct of_device_id lpuart_dt_ids[] = {
{
.compatible = "fsl,vf610-lpuart",
},
diff --git a/drivers/tty/serial/imx.c b/drivers/tty/serial/imx.c
index 0eb29b1..c8cfa06 100644
--- a/drivers/tty/serial/imx.c
+++ b/drivers/tty/serial/imx.c
@@ -1,13 +1,10 @@
/*
- * Driver for Motorola IMX serial ports
+ * Driver for Motorola/Freescale IMX serial ports
*
- * Based on drivers/char/serial.c, by Linus Torvalds, Theodore Ts'o.
+ * Based on drivers/char/serial.c, by Linus Torvalds, Theodore Ts'o.
*
- * Author: Sascha Hauer <sascha@saschahauer.de>
- * Copyright (C) 2004 Pengutronix
- *
- * Copyright (C) 2009 emlix GmbH
- * Author: Fabian Godehardt (added IrDA support for iMX)
+ * Author: Sascha Hauer <sascha@saschahauer.de>
+ * Copyright (C) 2004 Pengutronix
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -18,13 +15,6 @@
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
- *
- * [29-Mar-2005] Mike Lee
- * Added hardware handshake
*/
#if defined(CONFIG_SERIAL_IMX_CONSOLE) && defined(CONFIG_MAGIC_SYSRQ)
@@ -189,7 +179,7 @@
#define UART_NR 8
-/* i.mx21 type uart runs on all i.mx except i.mx1 */
+/* i.MX21 type uart runs on all i.mx except i.MX1 and i.MX6q */
enum imx_uart_type {
IMX1_UART,
IMX21_UART,
@@ -206,10 +196,8 @@
struct uart_port port;
struct timer_list timer;
unsigned int old_status;
- int txirq, rxirq, rtsirq;
unsigned int have_rtscts:1;
unsigned int dte_mode:1;
- unsigned int use_irda:1;
unsigned int irda_inv_rx:1;
unsigned int irda_inv_tx:1;
unsigned short trcv_delay; /* transceiver delay */
@@ -236,12 +224,6 @@
unsigned int ucr3;
};
-#ifdef CONFIG_IRDA
-#define USE_IRDA(sport) ((sport)->use_irda)
-#else
-#define USE_IRDA(sport) (0)
-#endif
-
static struct imx_uart_data imx_uart_devdata[] = {
[IMX1_UART] = {
.uts_reg = IMX1_UTS,
@@ -273,7 +255,7 @@
};
MODULE_DEVICE_TABLE(platform, imx_uart_devtype);
-static struct of_device_id imx_uart_dt_ids[] = {
+static const struct of_device_id imx_uart_dt_ids[] = {
{ .compatible = "fsl,imx6q-uart", .data = &imx_uart_devdata[IMX6Q_UART], },
{ .compatible = "fsl,imx1-uart", .data = &imx_uart_devdata[IMX1_UART], },
{ .compatible = "fsl,imx21-uart", .data = &imx_uart_devdata[IMX21_UART], },
@@ -376,48 +358,6 @@
struct imx_port *sport = (struct imx_port *)port;
unsigned long temp;
- if (USE_IRDA(sport)) {
- /* half duplex - wait for end of transmission */
- int n = 256;
- while ((--n > 0) &&
- !(readl(sport->port.membase + USR2) & USR2_TXDC)) {
- udelay(5);
- barrier();
- }
- /*
- * irda transceiver - wait a bit more to avoid
- * cutoff, hardware dependent
- */
- udelay(sport->trcv_delay);
-
- /*
- * half duplex - reactivate receive mode,
- * flush receive pipe echo crap
- */
- if (readl(sport->port.membase + USR2) & USR2_TXDC) {
- temp = readl(sport->port.membase + UCR1);
- temp &= ~(UCR1_TXMPTYEN | UCR1_TRDYEN);
- writel(temp, sport->port.membase + UCR1);
-
- temp = readl(sport->port.membase + UCR4);
- temp &= ~(UCR4_TCEN);
- writel(temp, sport->port.membase + UCR4);
-
- while (readl(sport->port.membase + URXD0) &
- URXD_CHARRDY)
- barrier();
-
- temp = readl(sport->port.membase + UCR1);
- temp |= UCR1_RRDYEN;
- writel(temp, sport->port.membase + UCR1);
-
- temp = readl(sport->port.membase + UCR4);
- temp |= UCR4_DREN;
- writel(temp, sport->port.membase + UCR4);
- }
- return;
- }
-
/*
* We are maybe in the SMP context, so if the DMA TX thread is running
* on other cpu, we have to wait for it to finish.
@@ -425,8 +365,23 @@
if (sport->dma_is_enabled && sport->dma_is_txing)
return;
- temp = readl(sport->port.membase + UCR1);
- writel(temp & ~UCR1_TXMPTYEN, sport->port.membase + UCR1);
+ temp = readl(port->membase + UCR1);
+ writel(temp & ~UCR1_TXMPTYEN, port->membase + UCR1);
+
+ /* in rs485 mode disable transmitter if shifter is empty */
+ if (port->rs485.flags & SER_RS485_ENABLED &&
+ readl(port->membase + USR2) & USR2_TXDC) {
+ temp = readl(port->membase + UCR2);
+ if (port->rs485.flags & SER_RS485_RTS_AFTER_SEND)
+ temp &= ~UCR2_CTS;
+ else
+ temp |= UCR2_CTS;
+ writel(temp, port->membase + UCR2);
+
+ temp = readl(port->membase + UCR4);
+ temp &= ~UCR4_TCEN;
+ writel(temp, port->membase + UCR4);
+ }
}
/*
@@ -620,15 +575,18 @@
struct imx_port *sport = (struct imx_port *)port;
unsigned long temp;
- if (USE_IRDA(sport)) {
- /* half duplex in IrDA mode; have to disable receive mode */
- temp = readl(sport->port.membase + UCR4);
- temp &= ~(UCR4_DREN);
- writel(temp, sport->port.membase + UCR4);
+ if (port->rs485.flags & SER_RS485_ENABLED) {
+ /* enable transmitter and shifter empty irq */
+ temp = readl(port->membase + UCR2);
+ if (port->rs485.flags & SER_RS485_RTS_ON_SEND)
+ temp &= ~UCR2_CTS;
+ else
+ temp |= UCR2_CTS;
+ writel(temp, port->membase + UCR2);
- temp = readl(sport->port.membase + UCR1);
- temp &= ~(UCR1_RRDYEN);
- writel(temp, sport->port.membase + UCR1);
+ temp = readl(port->membase + UCR4);
+ temp |= UCR4_TCEN;
+ writel(temp, port->membase + UCR4);
}
if (!sport->dma_is_enabled) {
@@ -636,16 +594,6 @@
writel(temp | UCR1_TXMPTYEN, sport->port.membase + UCR1);
}
- if (USE_IRDA(sport)) {
- temp = readl(sport->port.membase + UCR1);
- temp |= UCR1_TRDYEN;
- writel(temp, sport->port.membase + UCR1);
-
- temp = readl(sport->port.membase + UCR4);
- temp |= UCR4_TCEN;
- writel(temp, sport->port.membase + UCR4);
- }
-
if (sport->dma_is_enabled) {
if (sport->port.x_char) {
/* We have X-char to send, so enable TX IRQ and
@@ -796,6 +744,7 @@
unsigned int sts2;
sts = readl(sport->port.membase + USR1);
+ sts2 = readl(sport->port.membase + USR2);
if (sts & USR1_RRDY) {
if (sport->dma_is_enabled)
@@ -804,8 +753,10 @@
imx_rxint(irq, dev_id);
}
- if (sts & USR1_TRDY &&
- readl(sport->port.membase + UCR1) & UCR1_TXMPTYEN)
+ if ((sts & USR1_TRDY &&
+ readl(sport->port.membase + UCR1) & UCR1_TXMPTYEN) ||
+ (sts2 & USR2_TXDC &&
+ readl(sport->port.membase + UCR4) & UCR4_TCEN))
imx_txint(irq, dev_id);
if (sts & USR1_RTSD)
@@ -814,11 +765,10 @@
if (sts & USR1_AWAKE)
writel(USR1_AWAKE, sport->port.membase + USR1);
- sts2 = readl(sport->port.membase + USR2);
if (sts2 & USR2_ORE) {
dev_err(sport->port.dev, "Rx FIFO overrun\n");
sport->port.icount.overrun++;
- writel(sts2 | USR2_ORE, sport->port.membase + USR2);
+ writel(USR2_ORE, sport->port.membase + USR2);
}
return IRQ_HANDLED;
@@ -866,11 +816,13 @@
struct imx_port *sport = (struct imx_port *)port;
unsigned long temp;
- temp = readl(sport->port.membase + UCR2) & ~(UCR2_CTS | UCR2_CTSC);
- if (mctrl & TIOCM_RTS)
- temp |= UCR2_CTS | UCR2_CTSC;
-
- writel(temp, sport->port.membase + UCR2);
+ if (!(port->rs485.flags & SER_RS485_ENABLED)) {
+ temp = readl(sport->port.membase + UCR2);
+ temp &= ~(UCR2_CTS | UCR2_CTSC);
+ if (mctrl & TIOCM_RTS)
+ temp |= UCR2_CTS | UCR2_CTSC;
+ writel(temp, sport->port.membase + UCR2);
+ }
temp = readl(sport->port.membase + uts_reg(sport)) & ~UTS_LOOP;
if (mctrl & TIOCM_LOOP)
@@ -1156,9 +1108,6 @@
*/
temp = readl(sport->port.membase + UCR4);
- if (USE_IRDA(sport))
- temp |= UCR4_IRSC;
-
/* set the trigger level for CTS */
temp &= ~(UCR4_CTSTL_MASK << UCR4_CTSTL_SHF);
temp |= CTSTL << UCR4_CTSTL_SHF;
@@ -1181,10 +1130,12 @@
imx_uart_dma_init(sport);
spin_lock_irqsave(&sport->port.lock, flags);
+
/*
* Finally, clear and enable interrupts
*/
writel(USR1_RTSD, sport->port.membase + USR1);
+ writel(USR2_ORE, sport->port.membase + USR2);
if (sport->dma_is_inited && !sport->dma_is_enabled)
imx_enable_dma(sport);
@@ -1192,17 +1143,8 @@
temp = readl(sport->port.membase + UCR1);
temp |= UCR1_RRDYEN | UCR1_RTSDEN | UCR1_UARTEN;
- if (USE_IRDA(sport)) {
- temp |= UCR1_IREN;
- temp &= ~(UCR1_RTSDEN);
- }
-
writel(temp, sport->port.membase + UCR1);
- /* Clear any pending ORE flag before enabling interrupt */
- temp = readl(sport->port.membase + USR2);
- writel(temp | USR2_ORE, sport->port.membase + USR2);
-
temp = readl(sport->port.membase + UCR4);
temp |= UCR4_OREN;
writel(temp, sport->port.membase + UCR4);
@@ -1219,38 +1161,12 @@
writel(temp, sport->port.membase + UCR3);
}
- if (USE_IRDA(sport)) {
- temp = readl(sport->port.membase + UCR4);
- if (sport->irda_inv_rx)
- temp |= UCR4_INVR;
- else
- temp &= ~(UCR4_INVR);
- writel(temp | UCR4_DREN, sport->port.membase + UCR4);
-
- temp = readl(sport->port.membase + UCR3);
- if (sport->irda_inv_tx)
- temp |= UCR3_INVT;
- else
- temp &= ~(UCR3_INVT);
- writel(temp, sport->port.membase + UCR3);
- }
-
/*
* Enable modem status interrupts
*/
imx_enable_ms(&sport->port);
spin_unlock_irqrestore(&sport->port.lock, flags);
- if (USE_IRDA(sport)) {
- struct imxuart_platform_data *pdata;
- pdata = dev_get_platdata(sport->port.dev);
- sport->irda_inv_rx = pdata->irda_inv_rx;
- sport->irda_inv_tx = pdata->irda_inv_tx;
- sport->trcv_delay = pdata->transceiver_delay;
- if (pdata->irda_enable)
- pdata->irda_enable(1);
- }
-
return 0;
}
@@ -1286,13 +1202,6 @@
writel(temp, sport->port.membase + UCR2);
spin_unlock_irqrestore(&sport->port.lock, flags);
- if (USE_IRDA(sport)) {
- struct imxuart_platform_data *pdata;
- pdata = dev_get_platdata(sport->port.dev);
- if (pdata->irda_enable)
- pdata->irda_enable(0);
- }
-
/*
* Stop our timer.
*/
@@ -1305,8 +1214,6 @@
spin_lock_irqsave(&sport->port.lock, flags);
temp = readl(sport->port.membase + UCR1);
temp &= ~(UCR1_TXMPTYEN | UCR1_RRDYEN | UCR1_RTSDEN | UCR1_UARTEN);
- if (USE_IRDA(sport))
- temp &= ~(UCR1_IREN);
writel(temp, sport->port.membase + UCR1);
spin_unlock_irqrestore(&sport->port.lock, flags);
@@ -1320,7 +1227,7 @@
struct imx_port *sport = (struct imx_port *)port;
struct scatterlist *sgl = &sport->tx_sgl[0];
unsigned long temp;
- int i = 100, ubir, ubmr, ubrc, uts;
+ int i = 100, ubir, ubmr, uts;
if (!sport->dma_chan_tx)
return;
@@ -1345,7 +1252,6 @@
*/
ubir = readl(sport->port.membase + UBIR);
ubmr = readl(sport->port.membase + UBMR);
- ubrc = readl(sport->port.membase + UBRC);
uts = readl(sport->port.membase + IMX21_UTS);
temp = readl(sport->port.membase + UCR2);
@@ -1358,7 +1264,6 @@
/* Restore the registers */
writel(ubir, sport->port.membase + UBIR);
writel(ubmr, sport->port.membase + UBMR);
- writel(ubrc, sport->port.membase + UBRC);
writel(uts, sport->port.membase + IMX21_UTS);
}
@@ -1375,15 +1280,6 @@
uint64_t tdiv64;
/*
- * If we don't support modem control lines, don't allow
- * these to be set.
- */
- if (0) {
- termios->c_cflag &= ~(HUPCL | CRTSCTS | CMSPAR);
- termios->c_cflag |= CLOCAL;
- }
-
- /*
* We only support CS7 and CS8.
*/
while ((termios->c_cflag & CSIZE) != CS7 &&
@@ -1401,11 +1297,26 @@
if (termios->c_cflag & CRTSCTS) {
if (sport->have_rtscts) {
ucr2 &= ~UCR2_IRTS;
- ucr2 |= UCR2_CTSC;
+
+ if (port->rs485.flags & SER_RS485_ENABLED) {
+ /*
+ * RTS is mandatory for rs485 operation, so keep
+ * it under manual control and keep transmitter
+ * disabled.
+ */
+ if (!(port->rs485.flags &
+ SER_RS485_RTS_AFTER_SEND))
+ ucr2 |= UCR2_CTS;
+ } else {
+ ucr2 |= UCR2_CTSC;
+ }
} else {
termios->c_cflag &= ~CRTSCTS;
}
- }
+ } else if (port->rs485.flags & SER_RS485_ENABLED)
+ /* disable transmitter */
+ if (!(port->rs485.flags & SER_RS485_RTS_AFTER_SEND))
+ ucr2 |= UCR2_CTS;
if (termios->c_cflag & CSTOPB)
ucr2 |= UCR2_STPB;
@@ -1471,24 +1382,16 @@
sport->port.membase + UCR2);
old_txrxen &= (UCR2_TXEN | UCR2_RXEN);
- if (USE_IRDA(sport)) {
- /*
- * use maximum available submodule frequency to
- * avoid missing short pulses due to low sampling rate
- */
- div = 1;
- } else {
- /* custom-baudrate handling */
- div = sport->port.uartclk / (baud * 16);
- if (baud == 38400 && quot != div)
- baud = sport->port.uartclk / (quot * 16);
+ /* custom-baudrate handling */
+ div = sport->port.uartclk / (baud * 16);
+ if (baud == 38400 && quot != div)
+ baud = sport->port.uartclk / (quot * 16);
- div = sport->port.uartclk / (baud * 16);
- if (div > 7)
- div = 7;
- if (!div)
- div = 1;
- }
+ div = sport->port.uartclk / (baud * 16);
+ if (div > 7)
+ div = 7;
+ if (!div)
+ div = 1;
rational_best_approximation(16 * div * baud, sport->port.uartclk,
1 << 16, 1 << 16, &num, &denom);
@@ -1635,6 +1538,38 @@
}
#endif
+static int imx_rs485_config(struct uart_port *port,
+ struct serial_rs485 *rs485conf)
+{
+ struct imx_port *sport = (struct imx_port *)port;
+
+ /* unimplemented */
+ rs485conf->delay_rts_before_send = 0;
+ rs485conf->delay_rts_after_send = 0;
+ rs485conf->flags |= SER_RS485_RX_DURING_TX;
+
+ /* RTS is required to control the transmitter */
+ if (!sport->have_rtscts)
+ rs485conf->flags &= ~SER_RS485_ENABLED;
+
+ if (rs485conf->flags & SER_RS485_ENABLED) {
+ unsigned long temp;
+
+ /* disable transmitter */
+ temp = readl(sport->port.membase + UCR2);
+ temp &= ~UCR2_CTSC;
+ if (rs485conf->flags & SER_RS485_RTS_AFTER_SEND)
+ temp &= ~UCR2_CTS;
+ else
+ temp |= UCR2_CTS;
+ writel(temp, sport->port.membase + UCR2);
+ }
+
+ port->rs485 = *rs485conf;
+
+ return 0;
+}
+
static struct uart_ops imx_pops = {
.tx_empty = imx_tx_empty,
.set_mctrl = imx_set_mctrl,
@@ -1927,9 +1862,6 @@
if (of_get_property(np, "fsl,uart-has-rtscts", NULL))
sport->have_rtscts = 1;
- if (of_get_property(np, "fsl,irda-mode", NULL))
- sport->use_irda = 1;
-
if (of_get_property(np, "fsl,dte-mode", NULL))
sport->dte_mode = 1;
@@ -1958,9 +1890,6 @@
if (pdata->flags & IMXUART_HAVE_RTSCTS)
sport->have_rtscts = 1;
-
- if (pdata->flags & IMXUART_IRDA)
- sport->use_irda = 1;
}
static int serial_imx_probe(struct platform_device *pdev)
@@ -1969,6 +1898,7 @@
void __iomem *base;
int ret = 0;
struct resource *res;
+ int txirq, rxirq, rtsirq;
sport = devm_kzalloc(&pdev->dev, sizeof(*sport), GFP_KERNEL);
if (!sport)
@@ -1985,17 +1915,21 @@
if (IS_ERR(base))
return PTR_ERR(base);
+ rxirq = platform_get_irq(pdev, 0);
+ txirq = platform_get_irq(pdev, 1);
+ rtsirq = platform_get_irq(pdev, 2);
+
sport->port.dev = &pdev->dev;
sport->port.mapbase = res->start;
sport->port.membase = base;
sport->port.type = PORT_IMX,
sport->port.iotype = UPIO_MEM;
- sport->port.irq = platform_get_irq(pdev, 0);
- sport->rxirq = platform_get_irq(pdev, 0);
- sport->txirq = platform_get_irq(pdev, 1);
- sport->rtsirq = platform_get_irq(pdev, 2);
+ sport->port.irq = rxirq;
sport->port.fifosize = 32;
sport->port.ops = &imx_pops;
+ sport->port.rs485_config = imx_rs485_config;
+ sport->port.rs485.flags =
+ SER_RS485_RTS_ON_SEND | SER_RS485_RX_DURING_TX;
sport->port.flags = UPF_BOOT_AUTOCONF;
init_timer(&sport->timer);
sport->timer.function = imx_timeout;
@@ -2021,27 +1955,18 @@
* Allocate the IRQ(s) i.MX1 has three interrupts whereas later
* chips only have one interrupt.
*/
- if (sport->txirq > 0) {
- ret = devm_request_irq(&pdev->dev, sport->rxirq, imx_rxint, 0,
+ if (txirq > 0) {
+ ret = devm_request_irq(&pdev->dev, rxirq, imx_rxint, 0,
dev_name(&pdev->dev), sport);
if (ret)
return ret;
- ret = devm_request_irq(&pdev->dev, sport->txirq, imx_txint, 0,
+ ret = devm_request_irq(&pdev->dev, txirq, imx_txint, 0,
dev_name(&pdev->dev), sport);
if (ret)
return ret;
-
- /* do not use RTS IRQ on IrDA */
- if (!USE_IRDA(sport)) {
- ret = devm_request_irq(&pdev->dev, sport->rtsirq,
- imx_rtsint, 0,
- dev_name(&pdev->dev), sport);
- if (ret)
- return ret;
- }
} else {
- ret = devm_request_irq(&pdev->dev, sport->port.irq, imx_int, 0,
+ ret = devm_request_irq(&pdev->dev, rxirq, imx_int, 0,
dev_name(&pdev->dev), sport);
if (ret)
return ret;
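
[Editor's note] The large i.MX diff removes the dead IrDA paths and adds RS485 half-duplex handling: start_tx/stop_tx flip RTS according to SER_RS485_RTS_ON_SEND / SER_RS485_RTS_AFTER_SEND and use the transmitter-complete interrupt (UCR4_TCEN / USR2_TXDC) to know when the bus can be released. A trimmed sketch of the rs485_config callback shape, with the i.MX register writes folded into an assumed foo_park_rts() helper:

#include <linux/serial_core.h>

struct foo_port {
	struct uart_port port;
};

/* assumed: drive RTS to its idle level per SER_RS485_RTS_AFTER_SEND */
static void foo_park_rts(struct foo_port *p, struct serial_rs485 *rs485);

static int foo_rs485_config(struct uart_port *port,
			    struct serial_rs485 *rs485)
{
	struct foo_port *p = container_of(port, struct foo_port, port);

	/* Advertise only what the hardware can actually do. */
	rs485->delay_rts_before_send = 0;
	rs485->delay_rts_after_send = 0;
	rs485->flags |= SER_RS485_RX_DURING_TX;

	if (rs485->flags & SER_RS485_ENABLED)
		foo_park_rts(p, rs485);

	port->rs485 = *rs485;
	return 0;
}

As in the hunk, the callback is wired up in probe via port->rs485_config, and automatic hardware RTS control stays disabled while RS485 is enabled so the driver keeps RTS under manual control.
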
diff --git a/drivers/tty/serial/jsm/jsm_cls.c b/drivers/tty/serial/jsm/jsm_cls.c
index bfb0681..4eb12a9 100644
--- a/drivers/tty/serial/jsm/jsm_cls.c
+++ b/drivers/tty/serial/jsm/jsm_cls.c
@@ -570,7 +570,7 @@
* verified in the interrupt routine.
*/
- if (port > brd->nasync)
+ if (port >= brd->nasync)
return;
ch = brd->channels[port];
diff --git a/drivers/tty/serial/jsm/jsm_neo.c b/drivers/tty/serial/jsm/jsm_neo.c
index 7291c21..932b2ac 100644
--- a/drivers/tty/serial/jsm/jsm_neo.c
+++ b/drivers/tty/serial/jsm/jsm_neo.c
@@ -724,7 +724,7 @@
if (!brd)
return;
- if (port > brd->maxports)
+ if (port >= brd->maxports)
return;
ch = brd->channels[port];
@@ -840,7 +840,7 @@
if (!brd)
return;
- if (port > brd->maxports)
+ if (port >= brd->maxports)
return;
ch = brd->channels[port];
@@ -1180,7 +1180,7 @@
*/
/* Verify the port is in range. */
- if (port > brd->nasync)
+ if (port >= brd->nasync)
continue;
ch = brd->channels[port];
diff --git a/drivers/tty/serial/max3100.c b/drivers/tty/serial/max3100.c
index 79f9a9e..0773772 100644
--- a/drivers/tty/serial/max3100.c
+++ b/drivers/tty/serial/max3100.c
@@ -782,7 +782,7 @@
pdata = dev_get_platdata(&spi->dev);
max3100s[i]->crystal = pdata->crystal;
max3100s[i]->loopback = pdata->loopback;
- max3100s[i]->poll_time = pdata->poll_time * HZ / 1000;
+ max3100s[i]->poll_time = msecs_to_jiffies(pdata->poll_time);
if (pdata->poll_time > 0 && max3100s[i]->poll_time == 0)
max3100s[i]->poll_time = 1;
max3100s[i]->max3100_hw_suspend = pdata->max3100_hw_suspend;
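
[Editor's note] The max3100 hunk swaps an open-coded "msecs * HZ / 1000" for msecs_to_jiffies(), which rounds up and copes with HZ values that do not divide 1000. A one-line illustration wrapped in a helper; foo_poll_jiffies is just an assumed name:

#include <linux/jiffies.h>

static unsigned long foo_poll_jiffies(unsigned int poll_ms)
{
	/* At HZ=250 and poll_ms=1, "poll_ms * HZ / 1000" yields 0 (the
	 * timer would never fire); msecs_to_jiffies() rounds up to 1. */
	return msecs_to_jiffies(poll_ms);
}
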
diff --git a/drivers/tty/serial/mfd.c b/drivers/tty/serial/mfd.c
deleted file mode 100644
index 8fe4501..0000000
--- a/drivers/tty/serial/mfd.c
+++ /dev/null
@@ -1,1505 +0,0 @@
-/*
- * mfd.c: driver for High Speed UART device of Intel Medfield platform
- *
- * Refer pxa.c, 8250.c and some other drivers in drivers/serial/
- *
- * (C) Copyright 2010 Intel Corporation
- *
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; version 2
- * of the License.
- */
-
-/* Notes:
- * 1. DMA channel allocation: 0/1 channel are assigned to port 0,
- * 2/3 chan to port 1, 4/5 chan to port 3. Even number chans
- * are used for RX, odd chans for TX
- *
- * 2. The RI/DSR/DCD/DTR are not pinned out, DCD & DSR are always
- * asserted, only when the HW is reset the DDCD and DDSR will
- * be triggered
- */
-
-#if defined(CONFIG_SERIAL_MFD_HSU_CONSOLE) && defined(CONFIG_MAGIC_SYSRQ)
-#define SUPPORT_SYSRQ
-#endif
-
-#include <linux/module.h>
-#include <linux/init.h>
-#include <linux/console.h>
-#include <linux/sysrq.h>
-#include <linux/slab.h>
-#include <linux/serial_reg.h>
-#include <linux/circ_buf.h>
-#include <linux/delay.h>
-#include <linux/interrupt.h>
-#include <linux/tty.h>
-#include <linux/tty_flip.h>
-#include <linux/serial_core.h>
-#include <linux/serial_mfd.h>
-#include <linux/dma-mapping.h>
-#include <linux/pci.h>
-#include <linux/nmi.h>
-#include <linux/io.h>
-#include <linux/debugfs.h>
-#include <linux/pm_runtime.h>
-
-#define HSU_DMA_BUF_SIZE 2048
-
-#define chan_readl(chan, offset) readl(chan->reg + offset)
-#define chan_writel(chan, offset, val) writel(val, chan->reg + offset)
-
-#define mfd_readl(obj, offset) readl(obj->reg + offset)
-#define mfd_writel(obj, offset, val) writel(val, obj->reg + offset)
-
-static int hsu_dma_enable;
-module_param(hsu_dma_enable, int, 0);
-MODULE_PARM_DESC(hsu_dma_enable,
- "It is a bitmap to set working mode, if bit[x] is 1, then port[x] will work in DMA mode, otherwise in PIO mode.");
-
-struct hsu_dma_buffer {
- u8 *buf;
- dma_addr_t dma_addr;
- u32 dma_size;
- u32 ofs;
-};
-
-struct hsu_dma_chan {
- u32 id;
- enum dma_data_direction dirt;
- struct uart_hsu_port *uport;
- void __iomem *reg;
-};
-
-struct uart_hsu_port {
- struct uart_port port;
- unsigned char ier;
- unsigned char lcr;
- unsigned char mcr;
- unsigned int lsr_break_flag;
- char name[12];
- int index;
- struct device *dev;
-
- struct hsu_dma_chan *txc;
- struct hsu_dma_chan *rxc;
- struct hsu_dma_buffer txbuf;
- struct hsu_dma_buffer rxbuf;
- int use_dma; /* flag for DMA/PIO */
- int running;
- int dma_tx_on;
-};
-
-/* Top level data structure of HSU */
-struct hsu_port {
- void __iomem *reg;
- unsigned long paddr;
- unsigned long iolen;
- u32 irq;
-
- struct uart_hsu_port port[3];
- struct hsu_dma_chan chans[10];
-
- struct dentry *debugfs;
-};
-
-static inline unsigned int serial_in(struct uart_hsu_port *up, int offset)
-{
- unsigned int val;
-
- if (offset > UART_MSR) {
- offset <<= 2;
- val = readl(up->port.membase + offset);
- } else
- val = (unsigned int)readb(up->port.membase + offset);
-
- return val;
-}
-
-static inline void serial_out(struct uart_hsu_port *up, int offset, int value)
-{
- if (offset > UART_MSR) {
- offset <<= 2;
- writel(value, up->port.membase + offset);
- } else {
- unsigned char val = value & 0xff;
- writeb(val, up->port.membase + offset);
- }
-}
-
-#ifdef CONFIG_DEBUG_FS
-
-#define HSU_REGS_BUFSIZE 1024
-
-
-static ssize_t port_show_regs(struct file *file, char __user *user_buf,
- size_t count, loff_t *ppos)
-{
- struct uart_hsu_port *up = file->private_data;
- char *buf;
- u32 len = 0;
- ssize_t ret;
-
- buf = kzalloc(HSU_REGS_BUFSIZE, GFP_KERNEL);
- if (!buf)
- return 0;
-
- len += snprintf(buf + len, HSU_REGS_BUFSIZE - len,
- "MFD HSU port[%d] regs:\n", up->index);
-
- len += snprintf(buf + len, HSU_REGS_BUFSIZE - len,
- "=================================\n");
- len += snprintf(buf + len, HSU_REGS_BUFSIZE - len,
- "IER: \t\t0x%08x\n", serial_in(up, UART_IER));
- len += snprintf(buf + len, HSU_REGS_BUFSIZE - len,
- "IIR: \t\t0x%08x\n", serial_in(up, UART_IIR));
- len += snprintf(buf + len, HSU_REGS_BUFSIZE - len,
- "LCR: \t\t0x%08x\n", serial_in(up, UART_LCR));
- len += snprintf(buf + len, HSU_REGS_BUFSIZE - len,
- "MCR: \t\t0x%08x\n", serial_in(up, UART_MCR));
- len += snprintf(buf + len, HSU_REGS_BUFSIZE - len,
- "LSR: \t\t0x%08x\n", serial_in(up, UART_LSR));
- len += snprintf(buf + len, HSU_REGS_BUFSIZE - len,
- "MSR: \t\t0x%08x\n", serial_in(up, UART_MSR));
- len += snprintf(buf + len, HSU_REGS_BUFSIZE - len,
- "FOR: \t\t0x%08x\n", serial_in(up, UART_FOR));
- len += snprintf(buf + len, HSU_REGS_BUFSIZE - len,
- "PS: \t\t0x%08x\n", serial_in(up, UART_PS));
- len += snprintf(buf + len, HSU_REGS_BUFSIZE - len,
- "MUL: \t\t0x%08x\n", serial_in(up, UART_MUL));
- len += snprintf(buf + len, HSU_REGS_BUFSIZE - len,
- "DIV: \t\t0x%08x\n", serial_in(up, UART_DIV));
-
- if (len > HSU_REGS_BUFSIZE)
- len = HSU_REGS_BUFSIZE;
-
- ret = simple_read_from_buffer(user_buf, count, ppos, buf, len);
- kfree(buf);
- return ret;
-}
-
-static ssize_t dma_show_regs(struct file *file, char __user *user_buf,
- size_t count, loff_t *ppos)
-{
- struct hsu_dma_chan *chan = file->private_data;
- char *buf;
- u32 len = 0;
- ssize_t ret;
-
- buf = kzalloc(HSU_REGS_BUFSIZE, GFP_KERNEL);
- if (!buf)
- return 0;
-
- len += snprintf(buf + len, HSU_REGS_BUFSIZE - len,
- "MFD HSU DMA channel [%d] regs:\n", chan->id);
-
- len += snprintf(buf + len, HSU_REGS_BUFSIZE - len,
- "=================================\n");
- len += snprintf(buf + len, HSU_REGS_BUFSIZE - len,
- "CR: \t\t0x%08x\n", chan_readl(chan, HSU_CH_CR));
- len += snprintf(buf + len, HSU_REGS_BUFSIZE - len,
- "DCR: \t\t0x%08x\n", chan_readl(chan, HSU_CH_DCR));
- len += snprintf(buf + len, HSU_REGS_BUFSIZE - len,
- "BSR: \t\t0x%08x\n", chan_readl(chan, HSU_CH_BSR));
- len += snprintf(buf + len, HSU_REGS_BUFSIZE - len,
- "MOTSR: \t\t0x%08x\n", chan_readl(chan, HSU_CH_MOTSR));
- len += snprintf(buf + len, HSU_REGS_BUFSIZE - len,
- "D0SAR: \t\t0x%08x\n", chan_readl(chan, HSU_CH_D0SAR));
- len += snprintf(buf + len, HSU_REGS_BUFSIZE - len,
- "D0TSR: \t\t0x%08x\n", chan_readl(chan, HSU_CH_D0TSR));
- len += snprintf(buf + len, HSU_REGS_BUFSIZE - len,
- "D0SAR: \t\t0x%08x\n", chan_readl(chan, HSU_CH_D1SAR));
- len += snprintf(buf + len, HSU_REGS_BUFSIZE - len,
- "D0TSR: \t\t0x%08x\n", chan_readl(chan, HSU_CH_D1TSR));
- len += snprintf(buf + len, HSU_REGS_BUFSIZE - len,
- "D0SAR: \t\t0x%08x\n", chan_readl(chan, HSU_CH_D2SAR));
- len += snprintf(buf + len, HSU_REGS_BUFSIZE - len,
- "D0TSR: \t\t0x%08x\n", chan_readl(chan, HSU_CH_D2TSR));
- len += snprintf(buf + len, HSU_REGS_BUFSIZE - len,
- "D0SAR: \t\t0x%08x\n", chan_readl(chan, HSU_CH_D3SAR));
- len += snprintf(buf + len, HSU_REGS_BUFSIZE - len,
- "D0TSR: \t\t0x%08x\n", chan_readl(chan, HSU_CH_D3TSR));
-
- if (len > HSU_REGS_BUFSIZE)
- len = HSU_REGS_BUFSIZE;
-
- ret = simple_read_from_buffer(user_buf, count, ppos, buf, len);
- kfree(buf);
- return ret;
-}
-
-static const struct file_operations port_regs_ops = {
- .owner = THIS_MODULE,
- .open = simple_open,
- .read = port_show_regs,
- .llseek = default_llseek,
-};
-
-static const struct file_operations dma_regs_ops = {
- .owner = THIS_MODULE,
- .open = simple_open,
- .read = dma_show_regs,
- .llseek = default_llseek,
-};
-
-static int hsu_debugfs_init(struct hsu_port *hsu)
-{
- int i;
- char name[32];
-
- hsu->debugfs = debugfs_create_dir("hsu", NULL);
- if (!hsu->debugfs)
- return -ENOMEM;
-
- for (i = 0; i < 3; i++) {
- snprintf(name, sizeof(name), "port_%d_regs", i);
- debugfs_create_file(name, S_IFREG | S_IRUGO,
- hsu->debugfs, (void *)(&hsu->port[i]), &port_regs_ops);
- }
-
- for (i = 0; i < 6; i++) {
- snprintf(name, sizeof(name), "dma_chan_%d_regs", i);
- debugfs_create_file(name, S_IFREG | S_IRUGO,
- hsu->debugfs, (void *)&hsu->chans[i], &dma_regs_ops);
- }
-
- return 0;
-}
-
-static void hsu_debugfs_remove(struct hsu_port *hsu)
-{
- if (hsu->debugfs)
- debugfs_remove_recursive(hsu->debugfs);
-}
-
-#else
-static inline int hsu_debugfs_init(struct hsu_port *hsu)
-{
- return 0;
-}
-
-static inline void hsu_debugfs_remove(struct hsu_port *hsu)
-{
-}
-#endif /* CONFIG_DEBUG_FS */
-
-static void serial_hsu_enable_ms(struct uart_port *port)
-{
- struct uart_hsu_port *up =
- container_of(port, struct uart_hsu_port, port);
-
- up->ier |= UART_IER_MSI;
- serial_out(up, UART_IER, up->ier);
-}
-
-static void hsu_dma_tx(struct uart_hsu_port *up)
-{
- struct circ_buf *xmit = &up->port.state->xmit;
- struct hsu_dma_buffer *dbuf = &up->txbuf;
- int count;
-
- /* test_and_set_bit may be better, but anyway it's in lock protected mode */
- if (up->dma_tx_on)
- return;
-
- /* Update the circ buf info */
- xmit->tail += dbuf->ofs;
- xmit->tail &= UART_XMIT_SIZE - 1;
-
- up->port.icount.tx += dbuf->ofs;
- dbuf->ofs = 0;
-
- /* Disable the channel */
- chan_writel(up->txc, HSU_CH_CR, 0x0);
-
- if (!uart_circ_empty(xmit) && !uart_tx_stopped(&up->port)) {
- dma_sync_single_for_device(up->port.dev,
- dbuf->dma_addr,
- dbuf->dma_size,
- DMA_TO_DEVICE);
-
- count = CIRC_CNT_TO_END(xmit->head, xmit->tail, UART_XMIT_SIZE);
- dbuf->ofs = count;
-
- /* Reprogram the channel */
- chan_writel(up->txc, HSU_CH_D0SAR, dbuf->dma_addr + xmit->tail);
- chan_writel(up->txc, HSU_CH_D0TSR, count);
-
- /* Reenable the channel */
- chan_writel(up->txc, HSU_CH_DCR, 0x1
- | (0x1 << 8)
- | (0x1 << 16)
- | (0x1 << 24));
- up->dma_tx_on = 1;
- chan_writel(up->txc, HSU_CH_CR, 0x1);
- }
-
- if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS)
- uart_write_wakeup(&up->port);
-}
-
-/* The buffer is already cache coherent */
-static void hsu_dma_start_rx_chan(struct hsu_dma_chan *rxc,
- struct hsu_dma_buffer *dbuf)
-{
- dbuf->ofs = 0;
-
- chan_writel(rxc, HSU_CH_BSR, 32);
- chan_writel(rxc, HSU_CH_MOTSR, 4);
-
- chan_writel(rxc, HSU_CH_D0SAR, dbuf->dma_addr);
- chan_writel(rxc, HSU_CH_D0TSR, dbuf->dma_size);
- chan_writel(rxc, HSU_CH_DCR, 0x1 | (0x1 << 8)
- | (0x1 << 16)
- | (0x1 << 24) /* timeout bit, see HSU Errata 1 */
- );
- chan_writel(rxc, HSU_CH_CR, 0x3);
-}
-
-/* Protected by spin_lock_irqsave(port->lock) */
-static void serial_hsu_start_tx(struct uart_port *port)
-{
- struct uart_hsu_port *up =
- container_of(port, struct uart_hsu_port, port);
-
- if (up->use_dma) {
- hsu_dma_tx(up);
- } else if (!(up->ier & UART_IER_THRI)) {
- up->ier |= UART_IER_THRI;
- serial_out(up, UART_IER, up->ier);
- }
-}
-
-static void serial_hsu_stop_tx(struct uart_port *port)
-{
- struct uart_hsu_port *up =
- container_of(port, struct uart_hsu_port, port);
- struct hsu_dma_chan *txc = up->txc;
-
- if (up->use_dma)
- chan_writel(txc, HSU_CH_CR, 0x0);
- else if (up->ier & UART_IER_THRI) {
- up->ier &= ~UART_IER_THRI;
- serial_out(up, UART_IER, up->ier);
- }
-}
-
-/* This is always called in spinlock protected mode, so
- * modify timeout timer is safe here */
-static void hsu_dma_rx(struct uart_hsu_port *up, u32 int_sts,
- unsigned long *flags)
-{
- struct hsu_dma_buffer *dbuf = &up->rxbuf;
- struct hsu_dma_chan *chan = up->rxc;
- struct uart_port *port = &up->port;
- struct tty_port *tport = &port->state->port;
- int count;
-
- /*
- * First need to know how many is already transferred,
- * then check if its a timeout DMA irq, and return
- * the trail bytes out, push them up and reenable the
- * channel
- */
-
- /* Timeout IRQ, need wait some time, see Errata 2 */
- if (int_sts & 0xf00)
- udelay(2);
-
- /* Stop the channel */
- chan_writel(chan, HSU_CH_CR, 0x0);
-
- count = chan_readl(chan, HSU_CH_D0SAR) - dbuf->dma_addr;
- if (!count) {
- /* Restart the channel before we leave */
- chan_writel(chan, HSU_CH_CR, 0x3);
- return;
- }
-
- dma_sync_single_for_cpu(port->dev, dbuf->dma_addr,
- dbuf->dma_size, DMA_FROM_DEVICE);
-
- /*
- * Head will only wrap around when we recycle
- * the DMA buffer, and when that happens, we
- * explicitly set tail to 0. So head will
- * always be greater than tail.
- */
- tty_insert_flip_string(tport, dbuf->buf, count);
- port->icount.rx += count;
-
- dma_sync_single_for_device(up->port.dev, dbuf->dma_addr,
- dbuf->dma_size, DMA_FROM_DEVICE);
-
- /* Reprogram the channel */
- chan_writel(chan, HSU_CH_D0SAR, dbuf->dma_addr);
- chan_writel(chan, HSU_CH_D0TSR, dbuf->dma_size);
- chan_writel(chan, HSU_CH_DCR, 0x1
- | (0x1 << 8)
- | (0x1 << 16)
- | (0x1 << 24) /* timeout bit, see HSU Errata 1 */
- );
- spin_unlock_irqrestore(&up->port.lock, *flags);
- tty_flip_buffer_push(tport);
- spin_lock_irqsave(&up->port.lock, *flags);
-
- chan_writel(chan, HSU_CH_CR, 0x3);
-
-}
-
-static void serial_hsu_stop_rx(struct uart_port *port)
-{
- struct uart_hsu_port *up =
- container_of(port, struct uart_hsu_port, port);
- struct hsu_dma_chan *chan = up->rxc;
-
- if (up->use_dma)
- chan_writel(chan, HSU_CH_CR, 0x2);
- else {
- up->ier &= ~UART_IER_RLSI;
- up->port.read_status_mask &= ~UART_LSR_DR;
- serial_out(up, UART_IER, up->ier);
- }
-}
-
-static inline void receive_chars(struct uart_hsu_port *up, int *status,
- unsigned long *flags)
-{
- unsigned int ch, flag;
- unsigned int max_count = 256;
-
- do {
- ch = serial_in(up, UART_RX);
- flag = TTY_NORMAL;
- up->port.icount.rx++;
-
- if (unlikely(*status & (UART_LSR_BI | UART_LSR_PE |
- UART_LSR_FE | UART_LSR_OE))) {
-
- dev_warn(up->dev, "We really rush into ERR/BI case"
- "status = 0x%02x", *status);
- /* For statistics only */
- if (*status & UART_LSR_BI) {
- *status &= ~(UART_LSR_FE | UART_LSR_PE);
- up->port.icount.brk++;
- /*
- * We do the SysRQ and SAK checking
- * here because otherwise the break
- * may get masked by ignore_status_mask
- * or read_status_mask.
- */
- if (uart_handle_break(&up->port))
- goto ignore_char;
- } else if (*status & UART_LSR_PE)
- up->port.icount.parity++;
- else if (*status & UART_LSR_FE)
- up->port.icount.frame++;
- if (*status & UART_LSR_OE)
- up->port.icount.overrun++;
-
- /* Mask off conditions which should be ignored. */
- *status &= up->port.read_status_mask;
-
-#ifdef CONFIG_SERIAL_MFD_HSU_CONSOLE
- if (up->port.cons &&
- up->port.cons->index == up->port.line) {
- /* Recover the break flag from console xmit */
- *status |= up->lsr_break_flag;
- up->lsr_break_flag = 0;
- }
-#endif
- if (*status & UART_LSR_BI) {
- flag = TTY_BREAK;
- } else if (*status & UART_LSR_PE)
- flag = TTY_PARITY;
- else if (*status & UART_LSR_FE)
- flag = TTY_FRAME;
- }
-
- if (uart_handle_sysrq_char(&up->port, ch))
- goto ignore_char;
-
- uart_insert_char(&up->port, *status, UART_LSR_OE, ch, flag);
- ignore_char:
- *status = serial_in(up, UART_LSR);
- } while ((*status & UART_LSR_DR) && max_count--);
-
- spin_unlock_irqrestore(&up->port.lock, *flags);
- tty_flip_buffer_push(&up->port.state->port);
- spin_lock_irqsave(&up->port.lock, *flags);
-}
-
-static void transmit_chars(struct uart_hsu_port *up)
-{
- struct circ_buf *xmit = &up->port.state->xmit;
- int count;
-
- if (up->port.x_char) {
- serial_out(up, UART_TX, up->port.x_char);
- up->port.icount.tx++;
- up->port.x_char = 0;
- return;
- }
- if (uart_circ_empty(xmit) || uart_tx_stopped(&up->port)) {
- serial_hsu_stop_tx(&up->port);
- return;
- }
-
- /* The IRQ is for TX FIFO half-empty */
- count = up->port.fifosize / 2;
-
- do {
- serial_out(up, UART_TX, xmit->buf[xmit->tail]);
- xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1);
-
- up->port.icount.tx++;
- if (uart_circ_empty(xmit))
- break;
- } while (--count > 0);
-
- if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS)
- uart_write_wakeup(&up->port);
-
- if (uart_circ_empty(xmit))
- serial_hsu_stop_tx(&up->port);
-}
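The tail advance in transmit_chars() relies on UART_XMIT_SIZE being a power of two, so wrapping the circular-buffer index only needs an AND mask rather than a modulo. A minimal stand-alone sketch of that idiom (the buffer size and names below are illustrative, not the driver's own):

#include <stdio.h>

#define XMIT_SIZE 4096			/* power of two, like UART_XMIT_SIZE */

int main(void)
{
	unsigned int tail = XMIT_SIZE - 2;
	int i;

	/* stepping past the last slot wraps back to 0 without a divide */
	for (i = 0; i < 4; i++) {
		printf("tail = %u\n", tail);
		tail = (tail + 1) & (XMIT_SIZE - 1);
	}
	return 0;
}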
-
-static inline void check_modem_status(struct uart_hsu_port *up)
-{
- int status;
-
- status = serial_in(up, UART_MSR);
-
- if ((status & UART_MSR_ANY_DELTA) == 0)
- return;
-
- if (status & UART_MSR_TERI)
- up->port.icount.rng++;
- if (status & UART_MSR_DDSR)
- up->port.icount.dsr++;
-	/* We may only get DDCD during HW init and reset */
- if (status & UART_MSR_DDCD)
- uart_handle_dcd_change(&up->port, status & UART_MSR_DCD);
- /* Will start/stop_tx accordingly */
- if (status & UART_MSR_DCTS)
- uart_handle_cts_change(&up->port, status & UART_MSR_CTS);
-
- wake_up_interruptible(&up->port.state->port.delta_msr_wait);
-}
-
-/*
- * This handles the interrupt from one port.
- */
-static irqreturn_t port_irq(int irq, void *dev_id)
-{
- struct uart_hsu_port *up = dev_id;
- unsigned int iir, lsr;
- unsigned long flags;
-
- if (unlikely(!up->running))
- return IRQ_NONE;
-
- spin_lock_irqsave(&up->port.lock, flags);
- if (up->use_dma) {
- lsr = serial_in(up, UART_LSR);
- if (unlikely(lsr & (UART_LSR_BI | UART_LSR_PE |
- UART_LSR_FE | UART_LSR_OE)))
- dev_warn(up->dev,
-				"Got lsr irq while using DMA, lsr = 0x%02x\n",
- lsr);
- check_modem_status(up);
- spin_unlock_irqrestore(&up->port.lock, flags);
- return IRQ_HANDLED;
- }
-
- iir = serial_in(up, UART_IIR);
- if (iir & UART_IIR_NO_INT) {
- spin_unlock_irqrestore(&up->port.lock, flags);
- return IRQ_NONE;
- }
-
- lsr = serial_in(up, UART_LSR);
- if (lsr & UART_LSR_DR)
- receive_chars(up, &lsr, &flags);
- check_modem_status(up);
-
- /* lsr will be renewed during the receive_chars */
- if (lsr & UART_LSR_THRE)
- transmit_chars(up);
-
- spin_unlock_irqrestore(&up->port.lock, flags);
- return IRQ_HANDLED;
-}
-
-static inline void dma_chan_irq(struct hsu_dma_chan *chan)
-{
- struct uart_hsu_port *up = chan->uport;
- unsigned long flags;
- u32 int_sts;
-
- spin_lock_irqsave(&up->port.lock, flags);
-
- if (!up->use_dma || !up->running)
- goto exit;
-
- /*
-	 * In any case we need to read-clear the IRQ status;
-	 * there is a hardware bug, see Errata 5, HSD 2900918
- */
- int_sts = chan_readl(chan, HSU_CH_SR);
-
- /* Rx channel */
- if (chan->dirt == DMA_FROM_DEVICE)
- hsu_dma_rx(up, int_sts, &flags);
-
- /* Tx channel */
- if (chan->dirt == DMA_TO_DEVICE) {
- chan_writel(chan, HSU_CH_CR, 0x0);
- up->dma_tx_on = 0;
- hsu_dma_tx(up);
- }
-
-exit:
- spin_unlock_irqrestore(&up->port.lock, flags);
- return;
-}
-
-static irqreturn_t dma_irq(int irq, void *dev_id)
-{
- struct hsu_port *hsu = dev_id;
- u32 int_sts, i;
-
- int_sts = mfd_readl(hsu, HSU_GBL_DMAISR);
-
-	/* Currently only 6 channels may be used */
- for (i = 0; i < 6; i++) {
- if (int_sts & 0x1)
- dma_chan_irq(&hsu->chans[i]);
- int_sts >>= 1;
- }
-
- return IRQ_HANDLED;
-}
-
-static unsigned int serial_hsu_tx_empty(struct uart_port *port)
-{
- struct uart_hsu_port *up =
- container_of(port, struct uart_hsu_port, port);
- unsigned long flags;
- unsigned int ret;
-
- spin_lock_irqsave(&up->port.lock, flags);
- ret = serial_in(up, UART_LSR) & UART_LSR_TEMT ? TIOCSER_TEMT : 0;
- spin_unlock_irqrestore(&up->port.lock, flags);
-
- return ret;
-}
-
-static unsigned int serial_hsu_get_mctrl(struct uart_port *port)
-{
- struct uart_hsu_port *up =
- container_of(port, struct uart_hsu_port, port);
- unsigned char status;
- unsigned int ret;
-
- status = serial_in(up, UART_MSR);
-
- ret = 0;
- if (status & UART_MSR_DCD)
- ret |= TIOCM_CAR;
- if (status & UART_MSR_RI)
- ret |= TIOCM_RNG;
- if (status & UART_MSR_DSR)
- ret |= TIOCM_DSR;
- if (status & UART_MSR_CTS)
- ret |= TIOCM_CTS;
- return ret;
-}
-
-static void serial_hsu_set_mctrl(struct uart_port *port, unsigned int mctrl)
-{
- struct uart_hsu_port *up =
- container_of(port, struct uart_hsu_port, port);
- unsigned char mcr = 0;
-
- if (mctrl & TIOCM_RTS)
- mcr |= UART_MCR_RTS;
- if (mctrl & TIOCM_DTR)
- mcr |= UART_MCR_DTR;
- if (mctrl & TIOCM_OUT1)
- mcr |= UART_MCR_OUT1;
- if (mctrl & TIOCM_OUT2)
- mcr |= UART_MCR_OUT2;
- if (mctrl & TIOCM_LOOP)
- mcr |= UART_MCR_LOOP;
-
- mcr |= up->mcr;
-
- serial_out(up, UART_MCR, mcr);
-}
-
-static void serial_hsu_break_ctl(struct uart_port *port, int break_state)
-{
- struct uart_hsu_port *up =
- container_of(port, struct uart_hsu_port, port);
- unsigned long flags;
-
- spin_lock_irqsave(&up->port.lock, flags);
- if (break_state == -1)
- up->lcr |= UART_LCR_SBC;
- else
- up->lcr &= ~UART_LCR_SBC;
- serial_out(up, UART_LCR, up->lcr);
- spin_unlock_irqrestore(&up->port.lock, flags);
-}
-
-/*
- * What is special here:
- * 1. choose the 64B fifo mode
- * 2. start dma or pio depending on configuration
- * 3. we only allocate dma memory when needed
- */
-static int serial_hsu_startup(struct uart_port *port)
-{
- struct uart_hsu_port *up =
- container_of(port, struct uart_hsu_port, port);
- unsigned long flags;
-
- pm_runtime_get_sync(up->dev);
-
- /*
- * Clear the FIFO buffers and disable them.
- * (they will be reenabled in set_termios())
- */
- serial_out(up, UART_FCR, UART_FCR_ENABLE_FIFO);
- serial_out(up, UART_FCR, UART_FCR_ENABLE_FIFO |
- UART_FCR_CLEAR_RCVR | UART_FCR_CLEAR_XMIT);
- serial_out(up, UART_FCR, 0);
-
- /* Clear the interrupt registers. */
- (void) serial_in(up, UART_LSR);
- (void) serial_in(up, UART_RX);
- (void) serial_in(up, UART_IIR);
- (void) serial_in(up, UART_MSR);
-
- /* Now, initialize the UART, default is 8n1 */
- serial_out(up, UART_LCR, UART_LCR_WLEN8);
-
- spin_lock_irqsave(&up->port.lock, flags);
-
- up->port.mctrl |= TIOCM_OUT2;
- serial_hsu_set_mctrl(&up->port, up->port.mctrl);
-
- /*
- * Finally, enable interrupts. Note: Modem status interrupts
- * are set via set_termios(), which will be occurring imminently
- * anyway, so we don't enable them here.
- */
- if (!up->use_dma)
- up->ier = UART_IER_RLSI | UART_IER_RDI | UART_IER_RTOIE;
- else
- up->ier = 0;
- serial_out(up, UART_IER, up->ier);
-
- spin_unlock_irqrestore(&up->port.lock, flags);
-
- /* DMA init */
- if (up->use_dma) {
- struct hsu_dma_buffer *dbuf;
- struct circ_buf *xmit = &port->state->xmit;
-
- up->dma_tx_on = 0;
-
- /* First allocate the RX buffer */
- dbuf = &up->rxbuf;
- dbuf->buf = kzalloc(HSU_DMA_BUF_SIZE, GFP_KERNEL);
- if (!dbuf->buf) {
- up->use_dma = 0;
- goto exit;
- }
- dbuf->dma_addr = dma_map_single(port->dev,
- dbuf->buf,
- HSU_DMA_BUF_SIZE,
- DMA_FROM_DEVICE);
- dbuf->dma_size = HSU_DMA_BUF_SIZE;
-
- /* Start the RX channel right now */
- hsu_dma_start_rx_chan(up->rxc, dbuf);
-
- /* Next init the TX DMA */
- dbuf = &up->txbuf;
- dbuf->buf = xmit->buf;
- dbuf->dma_addr = dma_map_single(port->dev,
- dbuf->buf,
- UART_XMIT_SIZE,
- DMA_TO_DEVICE);
- dbuf->dma_size = UART_XMIT_SIZE;
-
- /* This should not be changed all around */
- chan_writel(up->txc, HSU_CH_BSR, 32);
- chan_writel(up->txc, HSU_CH_MOTSR, 4);
- dbuf->ofs = 0;
- }
-
-exit:
- /* And clear the interrupt registers again for luck. */
- (void) serial_in(up, UART_LSR);
- (void) serial_in(up, UART_RX);
- (void) serial_in(up, UART_IIR);
- (void) serial_in(up, UART_MSR);
-
- up->running = 1;
- return 0;
-}
-
-static void serial_hsu_shutdown(struct uart_port *port)
-{
- struct uart_hsu_port *up =
- container_of(port, struct uart_hsu_port, port);
- unsigned long flags;
-
- /* Disable interrupts from this port */
- up->ier = 0;
- serial_out(up, UART_IER, 0);
- up->running = 0;
-
- spin_lock_irqsave(&up->port.lock, flags);
- up->port.mctrl &= ~TIOCM_OUT2;
- serial_hsu_set_mctrl(&up->port, up->port.mctrl);
- spin_unlock_irqrestore(&up->port.lock, flags);
-
- /* Disable break condition and FIFOs */
- serial_out(up, UART_LCR, serial_in(up, UART_LCR) & ~UART_LCR_SBC);
- serial_out(up, UART_FCR, UART_FCR_ENABLE_FIFO |
- UART_FCR_CLEAR_RCVR |
- UART_FCR_CLEAR_XMIT);
- serial_out(up, UART_FCR, 0);
-
- pm_runtime_put(up->dev);
-}
-
-static void
-serial_hsu_set_termios(struct uart_port *port, struct ktermios *termios,
- struct ktermios *old)
-{
- struct uart_hsu_port *up =
- container_of(port, struct uart_hsu_port, port);
- unsigned char cval, fcr = 0;
- unsigned long flags;
- unsigned int baud, quot;
- u32 ps, mul;
-
- switch (termios->c_cflag & CSIZE) {
- case CS5:
- cval = UART_LCR_WLEN5;
- break;
- case CS6:
- cval = UART_LCR_WLEN6;
- break;
- case CS7:
- cval = UART_LCR_WLEN7;
- break;
- default:
- case CS8:
- cval = UART_LCR_WLEN8;
- break;
- }
-
- /* CMSPAR isn't supported by this driver */
- termios->c_cflag &= ~CMSPAR;
-
- if (termios->c_cflag & CSTOPB)
- cval |= UART_LCR_STOP;
- if (termios->c_cflag & PARENB)
- cval |= UART_LCR_PARITY;
- if (!(termios->c_cflag & PARODD))
- cval |= UART_LCR_EPAR;
-
-	/*
-	 * The base clk is 50MHz, and the baud rate comes from:
-	 *	baud = 50M * MUL / (DIV * PS * DLAB)
-	 *
-	 * For the basic low baud rates we can derive the divisor
-	 * directly from 2746800, e.g. 115200 = 2746800/24. The
-	 * higher baud rates are handled case by case, mainly by
-	 * adjusting the MUL/PS registers, while the DIV register is
-	 * kept at its default value 0x3d09 to keep things simple
-	 */
- baud = uart_get_baud_rate(port, termios, old, 0, 4000000);
-
- quot = 1;
- ps = 0x10;
- mul = 0x3600;
- switch (baud) {
- case 3500000:
- mul = 0x3345;
- ps = 0xC;
- break;
- case 1843200:
- mul = 0x2400;
- break;
- case 3000000:
- case 2500000:
- case 2000000:
- case 1500000:
- case 1000000:
- case 500000:
- /* mul/ps/quot = 0x9C4/0x10/0x1 will make a 500000 bps */
- mul = baud / 500000 * 0x9C4;
- break;
- default:
- /* Use uart_get_divisor to get quot for other baud rates */
- quot = 0;
- }
-
- if (!quot)
- quot = uart_get_divisor(port, baud);
-
- if ((up->port.uartclk / quot) < (2400 * 16))
- fcr = UART_FCR_ENABLE_FIFO | UART_FCR_HSU_64_1B;
- else if ((up->port.uartclk / quot) < (230400 * 16))
- fcr = UART_FCR_ENABLE_FIFO | UART_FCR_HSU_64_16B;
- else
- fcr = UART_FCR_ENABLE_FIFO | UART_FCR_HSU_64_32B;
-
- fcr |= UART_FCR_HSU_64B_FIFO;
-
- /*
- * Ok, we're now changing the port state. Do it with
- * interrupts disabled.
- */
- spin_lock_irqsave(&up->port.lock, flags);
-
- /* Update the per-port timeout */
- uart_update_timeout(port, termios->c_cflag, baud);
-
- up->port.read_status_mask = UART_LSR_OE | UART_LSR_THRE | UART_LSR_DR;
- if (termios->c_iflag & INPCK)
- up->port.read_status_mask |= UART_LSR_FE | UART_LSR_PE;
- if (termios->c_iflag & (IGNBRK | BRKINT | PARMRK))
- up->port.read_status_mask |= UART_LSR_BI;
-
- /* Characters to ignore */
- up->port.ignore_status_mask = 0;
- if (termios->c_iflag & IGNPAR)
- up->port.ignore_status_mask |= UART_LSR_PE | UART_LSR_FE;
- if (termios->c_iflag & IGNBRK) {
- up->port.ignore_status_mask |= UART_LSR_BI;
- /*
- * If we're ignoring parity and break indicators,
- * ignore overruns too (for real raw support).
- */
- if (termios->c_iflag & IGNPAR)
- up->port.ignore_status_mask |= UART_LSR_OE;
- }
-
- /* Ignore all characters if CREAD is not set */
- if ((termios->c_cflag & CREAD) == 0)
- up->port.ignore_status_mask |= UART_LSR_DR;
-
- /*
- * CTS flow control flag and modem status interrupts, disable
- * MSI by default
- */
- up->ier &= ~UART_IER_MSI;
- if (UART_ENABLE_MS(&up->port, termios->c_cflag))
- up->ier |= UART_IER_MSI;
-
- serial_out(up, UART_IER, up->ier);
-
- if (termios->c_cflag & CRTSCTS)
- up->mcr |= UART_MCR_AFE | UART_MCR_RTS;
- else
- up->mcr &= ~UART_MCR_AFE;
-
- serial_out(up, UART_LCR, cval | UART_LCR_DLAB); /* set DLAB */
- serial_out(up, UART_DLL, quot & 0xff); /* LS of divisor */
- serial_out(up, UART_DLM, quot >> 8); /* MS of divisor */
- serial_out(up, UART_LCR, cval); /* reset DLAB */
- serial_out(up, UART_MUL, mul); /* set MUL */
- serial_out(up, UART_PS, ps); /* set PS */
- up->lcr = cval; /* Save LCR */
- serial_hsu_set_mctrl(&up->port, up->port.mctrl);
- serial_out(up, UART_FCR, fcr);
- spin_unlock_irqrestore(&up->port.lock, flags);
-}
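As a worked sketch of the divisor selection in serial_hsu_set_termios() above (illustrative only, not part of the driver): with uartclk = 115200 * 24 * 16, the standard computation quot = uartclk / (16 * baud) yields 24 for 115200 bps, matching the 2746800/24 scalar mentioned in the comment, while the 500000-multiple rates keep quot = 1 and scale MUL instead, e.g. 2 Mbps uses mul = (2000000 / 500000) * 0x9C4 = 0x2710.

#include <stdio.h>

int main(void)
{
	unsigned int uartclk = 115200 * 24 * 16;	/* 44236800 Hz */
	unsigned int quot = uartclk / (16 * 115200);	/* 24 */
	unsigned int mul = (2000000 / 500000) * 0x9C4;	/* 0x2710 for 2 Mbps */

	printf("quot(115200) = %u, mul(2Mbps) = 0x%X\n", quot, mul);
	return 0;
}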
-
-static void
-serial_hsu_pm(struct uart_port *port, unsigned int state,
- unsigned int oldstate)
-{
-}
-
-static void serial_hsu_release_port(struct uart_port *port)
-{
-}
-
-static int serial_hsu_request_port(struct uart_port *port)
-{
- return 0;
-}
-
-static void serial_hsu_config_port(struct uart_port *port, int flags)
-{
- struct uart_hsu_port *up =
- container_of(port, struct uart_hsu_port, port);
- up->port.type = PORT_MFD;
-}
-
-static int
-serial_hsu_verify_port(struct uart_port *port, struct serial_struct *ser)
-{
- /* We don't want the core code to modify any port params */
- return -EINVAL;
-}
-
-static const char *
-serial_hsu_type(struct uart_port *port)
-{
- struct uart_hsu_port *up =
- container_of(port, struct uart_hsu_port, port);
- return up->name;
-}
-
-/* Mainly for uart console use */
-static struct uart_hsu_port *serial_hsu_ports[3];
-static struct uart_driver serial_hsu_reg;
-
-#ifdef CONFIG_SERIAL_MFD_HSU_CONSOLE
-
-#define BOTH_EMPTY (UART_LSR_TEMT | UART_LSR_THRE)
-
-/* Wait for transmitter & holding register to empty */
-static inline void wait_for_xmitr(struct uart_hsu_port *up)
-{
- unsigned int status, tmout = 1000;
-
- /* Wait up to 1ms for the character to be sent. */
- do {
- status = serial_in(up, UART_LSR);
-
- if (status & UART_LSR_BI)
- up->lsr_break_flag = UART_LSR_BI;
-
- if (--tmout == 0)
- break;
- udelay(1);
- } while (!(status & BOTH_EMPTY));
-
- /* Wait up to 1s for flow control if necessary */
- if (up->port.flags & UPF_CONS_FLOW) {
- tmout = 1000000;
- while (--tmout &&
- ((serial_in(up, UART_MSR) & UART_MSR_CTS) == 0))
- udelay(1);
- }
-}
-
-static void serial_hsu_console_putchar(struct uart_port *port, int ch)
-{
- struct uart_hsu_port *up =
- container_of(port, struct uart_hsu_port, port);
-
- wait_for_xmitr(up);
- serial_out(up, UART_TX, ch);
-}
-
-/*
- * Print a string to the serial port trying not to disturb
- * any possible real use of the port...
- *
- * The console_lock must be held when we get here.
- */
-static void
-serial_hsu_console_write(struct console *co, const char *s, unsigned int count)
-{
- struct uart_hsu_port *up = serial_hsu_ports[co->index];
- unsigned long flags;
- unsigned int ier;
- int locked = 1;
-
- touch_nmi_watchdog();
-
- local_irq_save(flags);
- if (up->port.sysrq)
- locked = 0;
- else if (oops_in_progress) {
- locked = spin_trylock(&up->port.lock);
- } else
- spin_lock(&up->port.lock);
-
- /* First save the IER then disable the interrupts */
- ier = serial_in(up, UART_IER);
- serial_out(up, UART_IER, 0);
-
- uart_console_write(&up->port, s, count, serial_hsu_console_putchar);
-
- /*
- * Finally, wait for transmitter to become empty
- * and restore the IER
- */
- wait_for_xmitr(up);
- serial_out(up, UART_IER, ier);
-
- if (locked)
- spin_unlock(&up->port.lock);
- local_irq_restore(flags);
-}
-
-static struct console serial_hsu_console;
-
-static int __init
-serial_hsu_console_setup(struct console *co, char *options)
-{
- struct uart_hsu_port *up;
- int baud = 115200;
- int bits = 8;
- int parity = 'n';
- int flow = 'n';
-
- if (co->index == -1 || co->index >= serial_hsu_reg.nr)
- co->index = 0;
- up = serial_hsu_ports[co->index];
- if (!up)
- return -ENODEV;
-
- if (options)
- uart_parse_options(options, &baud, &parity, &bits, &flow);
-
- return uart_set_options(&up->port, co, baud, parity, bits, flow);
-}
-
-static struct console serial_hsu_console = {
- .name = "ttyMFD",
- .write = serial_hsu_console_write,
- .device = uart_console_device,
- .setup = serial_hsu_console_setup,
- .flags = CON_PRINTBUFFER,
- .index = -1,
- .data = &serial_hsu_reg,
-};
-
-#define SERIAL_HSU_CONSOLE (&serial_hsu_console)
-#else
-#define SERIAL_HSU_CONSOLE NULL
-#endif
-
-static struct uart_ops serial_hsu_pops = {
- .tx_empty = serial_hsu_tx_empty,
- .set_mctrl = serial_hsu_set_mctrl,
- .get_mctrl = serial_hsu_get_mctrl,
- .stop_tx = serial_hsu_stop_tx,
- .start_tx = serial_hsu_start_tx,
- .stop_rx = serial_hsu_stop_rx,
- .enable_ms = serial_hsu_enable_ms,
- .break_ctl = serial_hsu_break_ctl,
- .startup = serial_hsu_startup,
- .shutdown = serial_hsu_shutdown,
- .set_termios = serial_hsu_set_termios,
- .pm = serial_hsu_pm,
- .type = serial_hsu_type,
- .release_port = serial_hsu_release_port,
- .request_port = serial_hsu_request_port,
- .config_port = serial_hsu_config_port,
- .verify_port = serial_hsu_verify_port,
-};
-
-static struct uart_driver serial_hsu_reg = {
- .owner = THIS_MODULE,
- .driver_name = "MFD serial",
- .dev_name = "ttyMFD",
- .major = TTY_MAJOR,
- .minor = 128,
- .nr = 3,
- .cons = SERIAL_HSU_CONSOLE,
-};
-
-#ifdef CONFIG_PM
-static int serial_hsu_suspend(struct pci_dev *pdev, pm_message_t state)
-{
- void *priv = pci_get_drvdata(pdev);
- struct uart_hsu_port *up;
-
- /* Make sure this is not the internal dma controller */
- if (priv && (pdev->device != 0x081E)) {
- up = priv;
- uart_suspend_port(&serial_hsu_reg, &up->port);
- }
-
- pci_save_state(pdev);
- pci_set_power_state(pdev, pci_choose_state(pdev, state));
- return 0;
-}
-
-static int serial_hsu_resume(struct pci_dev *pdev)
-{
- void *priv = pci_get_drvdata(pdev);
- struct uart_hsu_port *up;
- int ret;
-
- pci_set_power_state(pdev, PCI_D0);
- pci_restore_state(pdev);
-
- ret = pci_enable_device(pdev);
- if (ret)
- dev_warn(&pdev->dev,
- "HSU: can't re-enable device, try to continue\n");
-
- if (priv && (pdev->device != 0x081E)) {
- up = priv;
- uart_resume_port(&serial_hsu_reg, &up->port);
- }
- return 0;
-}
-
-static int serial_hsu_runtime_idle(struct device *dev)
-{
- pm_schedule_suspend(dev, 500);
- return -EBUSY;
-}
-
-static int serial_hsu_runtime_suspend(struct device *dev)
-{
- return 0;
-}
-
-static int serial_hsu_runtime_resume(struct device *dev)
-{
- return 0;
-}
-#else
-#define serial_hsu_suspend NULL
-#define serial_hsu_resume NULL
-#define serial_hsu_runtime_idle NULL
-#define serial_hsu_runtime_suspend NULL
-#define serial_hsu_runtime_resume NULL
-#endif
-
-static const struct dev_pm_ops serial_hsu_pm_ops = {
- .runtime_suspend = serial_hsu_runtime_suspend,
- .runtime_resume = serial_hsu_runtime_resume,
- .runtime_idle = serial_hsu_runtime_idle,
-};
-
-/* temporary global pointer until we settle on using one or four PCI devices */
-static struct hsu_port *phsu;
-
-static int serial_hsu_probe(struct pci_dev *pdev,
- const struct pci_device_id *ent)
-{
- struct uart_hsu_port *uport;
- int index, ret;
-
- printk(KERN_INFO "HSU: found PCI Serial controller(ID: %04x:%04x)\n",
- pdev->vendor, pdev->device);
-
- switch (pdev->device) {
- case 0x081B:
- index = 0;
- break;
- case 0x081C:
- index = 1;
- break;
- case 0x081D:
- index = 2;
- break;
- case 0x081E:
- /* internal DMA controller */
- index = 3;
- break;
- default:
- dev_err(&pdev->dev, "HSU: out of index!");
- return -ENODEV;
- }
-
- ret = pci_enable_device(pdev);
- if (ret)
- return ret;
-
- if (index == 3) {
- /* DMA controller */
- ret = request_irq(pdev->irq, dma_irq, 0, "hsu_dma", phsu);
- if (ret) {
- dev_err(&pdev->dev, "can not get IRQ\n");
- goto err_disable;
- }
- pci_set_drvdata(pdev, phsu);
- } else {
- /* UART port 0~2 */
- uport = &phsu->port[index];
- uport->port.irq = pdev->irq;
- uport->port.dev = &pdev->dev;
- uport->dev = &pdev->dev;
-
- ret = request_irq(pdev->irq, port_irq, 0, uport->name, uport);
- if (ret) {
- dev_err(&pdev->dev, "can not get IRQ\n");
- goto err_disable;
- }
- uart_add_one_port(&serial_hsu_reg, &uport->port);
-
- pci_set_drvdata(pdev, uport);
- }
-
- pm_runtime_put_noidle(&pdev->dev);
- pm_runtime_allow(&pdev->dev);
-
- return 0;
-
-err_disable:
- pci_disable_device(pdev);
- return ret;
-}
-
-static void hsu_global_init(void)
-{
- struct hsu_port *hsu;
- struct uart_hsu_port *uport;
- struct hsu_dma_chan *dchan;
- int i, ret;
-
- hsu = kzalloc(sizeof(struct hsu_port), GFP_KERNEL);
- if (!hsu)
- return;
-
- /* Get basic io resource and map it */
- hsu->paddr = 0xffa28000;
- hsu->iolen = 0x1000;
-
- if (!(request_mem_region(hsu->paddr, hsu->iolen, "HSU global")))
- pr_warn("HSU: error in request mem region\n");
-
- hsu->reg = ioremap_nocache((unsigned long)hsu->paddr, hsu->iolen);
- if (!hsu->reg) {
- pr_err("HSU: error in ioremap\n");
- ret = -ENOMEM;
- goto err_free_region;
- }
-
- /* Initialise the 3 UART ports */
- uport = hsu->port;
- for (i = 0; i < 3; i++) {
- uport->port.type = PORT_MFD;
- uport->port.iotype = UPIO_MEM;
- uport->port.mapbase = (resource_size_t)hsu->paddr
- + HSU_PORT_REG_OFFSET
- + i * HSU_PORT_REG_LENGTH;
- uport->port.membase = hsu->reg + HSU_PORT_REG_OFFSET
- + i * HSU_PORT_REG_LENGTH;
-
- sprintf(uport->name, "hsu_port%d", i);
- uport->port.fifosize = 64;
- uport->port.ops = &serial_hsu_pops;
- uport->port.line = i;
- uport->port.flags = UPF_IOREMAP;
-		/* set the maximum supported scalable rate to 2746800 bps */
- uport->port.uartclk = 115200 * 24 * 16;
-
- uport->running = 0;
- uport->txc = &hsu->chans[i * 2];
- uport->rxc = &hsu->chans[i * 2 + 1];
-
- serial_hsu_ports[i] = uport;
- uport->index = i;
-
- if (hsu_dma_enable & (1<<i))
- uport->use_dma = 1;
- else
- uport->use_dma = 0;
-
- uport++;
- }
-
- /* Initialise 6 dma channels */
- dchan = hsu->chans;
- for (i = 0; i < 6; i++) {
- dchan->id = i;
- dchan->dirt = (i & 0x1) ? DMA_FROM_DEVICE : DMA_TO_DEVICE;
- dchan->uport = &hsu->port[i/2];
- dchan->reg = hsu->reg + HSU_DMA_CHANS_REG_OFFSET +
- i * HSU_DMA_CHANS_REG_LENGTH;
-
- dchan++;
- }
-
- phsu = hsu;
- hsu_debugfs_init(hsu);
- return;
-
-err_free_region:
- release_mem_region(hsu->paddr, hsu->iolen);
- kfree(hsu);
- return;
-}
-
-static void serial_hsu_remove(struct pci_dev *pdev)
-{
- void *priv = pci_get_drvdata(pdev);
- struct uart_hsu_port *up;
-
- if (!priv)
- return;
-
- pm_runtime_forbid(&pdev->dev);
- pm_runtime_get_noresume(&pdev->dev);
-
- /* For port 0/1/2, priv is the address of uart_hsu_port */
- if (pdev->device != 0x081E) {
- up = priv;
- uart_remove_one_port(&serial_hsu_reg, &up->port);
- }
-
- free_irq(pdev->irq, priv);
- pci_disable_device(pdev);
-}
-
-/* First 3 are UART ports, and the 4th is the DMA */
-static const struct pci_device_id pci_ids[] = {
- { PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x081B) },
- { PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x081C) },
- { PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x081D) },
- { PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x081E) },
- {},
-};
-
-static struct pci_driver hsu_pci_driver = {
- .name = "HSU serial",
- .id_table = pci_ids,
- .probe = serial_hsu_probe,
- .remove = serial_hsu_remove,
- .suspend = serial_hsu_suspend,
- .resume = serial_hsu_resume,
- .driver = {
- .pm = &serial_hsu_pm_ops,
- },
-};
-
-static int __init hsu_pci_init(void)
-{
- int ret;
-
- hsu_global_init();
-
- ret = uart_register_driver(&serial_hsu_reg);
- if (ret)
- return ret;
-
- return pci_register_driver(&hsu_pci_driver);
-}
-
-static void __exit hsu_pci_exit(void)
-{
- pci_unregister_driver(&hsu_pci_driver);
- uart_unregister_driver(&serial_hsu_reg);
-
- hsu_debugfs_remove(phsu);
-
- kfree(phsu);
-}
-
-module_init(hsu_pci_init);
-module_exit(hsu_pci_exit);
-
-MODULE_LICENSE("GPL v2");
-MODULE_DEVICE_TABLE(pci, pci_ids);
diff --git a/drivers/tty/serial/mpc52xx_uart.c b/drivers/tty/serial/mpc52xx_uart.c
index 3308ef2..1589f17 100644
--- a/drivers/tty/serial/mpc52xx_uart.c
+++ b/drivers/tty/serial/mpc52xx_uart.c
@@ -1717,7 +1717,7 @@
/* OF Platform Driver */
/* ======================================================================== */
-static struct of_device_id mpc52xx_uart_of_match[] = {
+static const struct of_device_id mpc52xx_uart_of_match[] = {
#ifdef CONFIG_PPC_MPC52xx
{ .compatible = "fsl,mpc5200b-psc-uart", .data = &mpc5200b_psc_ops, },
{ .compatible = "fsl,mpc5200-psc-uart", .data = &mpc52xx_psc_ops, },
diff --git a/drivers/tty/serial/msm_serial.h b/drivers/tty/serial/msm_serial.h
index 3e1c713..737f69f 100644
--- a/drivers/tty/serial/msm_serial.h
+++ b/drivers/tty/serial/msm_serial.h
@@ -170,15 +170,6 @@
msm_serial_set_mnd_regs_tcxoby4(port);
}
-/*
- * TROUT has a specific defect that makes it report its uartclk
- * as 19.2MHz (TCXO) when it is actually 4.8MHz (TCXO/4). This special
- * cases TROUT to use the right clock.
- */
-#ifdef CONFIG_MACH_TROUT
-#define msm_serial_set_mnd_regs msm_serial_set_mnd_regs_tcxoby4
-#else
#define msm_serial_set_mnd_regs msm_serial_set_mnd_regs_from_uartclk
-#endif
#endif /* __DRIVERS_SERIAL_MSM_SERIAL_H */
diff --git a/drivers/tty/serial/msm_serial_hs.c b/drivers/tty/serial/msm_serial_hs.c
deleted file mode 100644
index 62da853..0000000
--- a/drivers/tty/serial/msm_serial_hs.c
+++ /dev/null
@@ -1,1874 +0,0 @@
-/*
- * MSM 7k/8k High speed uart driver
- *
- * Copyright (c) 2007-2011, Code Aurora Forum. All rights reserved.
- * Copyright (c) 2008 Google Inc.
- * Modified: Nick Pelly <npelly@google.com>
- *
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public License
- * version 2 as published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
- * See the GNU General Public License for more details.
- *
- * Has optional support for uart power management independent of linux
- * suspend/resume:
- *
- * RX wakeup.
- * UART wakeup can be triggered by RX activity (using a wakeup GPIO on the
- * UART RX pin). This should only be used if there is not a wakeup
- * GPIO on the UART CTS, and the first RX byte is known (for example, with the
- * Bluetooth Texas Instruments HCILL protocol), since the first RX byte will
- * always be lost. RTS will be asserted even while the UART is off in this mode
- * of operation. See msm_serial_hs_platform_data.rx_wakeup_irq.
- */
-
-#include <linux/module.h>
-
-#include <linux/serial.h>
-#include <linux/serial_core.h>
-#include <linux/tty.h>
-#include <linux/tty_flip.h>
-#include <linux/slab.h>
-#include <linux/init.h>
-#include <linux/interrupt.h>
-#include <linux/irq.h>
-#include <linux/io.h>
-#include <linux/ioport.h>
-#include <linux/kernel.h>
-#include <linux/timer.h>
-#include <linux/clk.h>
-#include <linux/platform_device.h>
-#include <linux/pm_runtime.h>
-#include <linux/dma-mapping.h>
-#include <linux/dmapool.h>
-#include <linux/wait.h>
-#include <linux/workqueue.h>
-
-#include <linux/atomic.h>
-#include <asm/irq.h>
-
-#include <mach/hardware.h>
-#include <mach/dma.h>
-#include <linux/platform_data/msm_serial_hs.h>
-
-/* HSUART Registers */
-#define UARTDM_MR1_ADDR 0x0
-#define UARTDM_MR2_ADDR 0x4
-
-/* Data Mover result codes */
-#define RSLT_FIFO_CNTR_BMSK (0xE << 28)
-#define RSLT_VLD BIT(1)
-
-/* write only register */
-#define UARTDM_CSR_ADDR 0x8
-#define UARTDM_CSR_115200 0xFF
-#define UARTDM_CSR_57600 0xEE
-#define UARTDM_CSR_38400 0xDD
-#define UARTDM_CSR_28800 0xCC
-#define UARTDM_CSR_19200 0xBB
-#define UARTDM_CSR_14400 0xAA
-#define UARTDM_CSR_9600 0x99
-#define UARTDM_CSR_7200 0x88
-#define UARTDM_CSR_4800 0x77
-#define UARTDM_CSR_3600 0x66
-#define UARTDM_CSR_2400 0x55
-#define UARTDM_CSR_1200 0x44
-#define UARTDM_CSR_600 0x33
-#define UARTDM_CSR_300 0x22
-#define UARTDM_CSR_150 0x11
-#define UARTDM_CSR_75 0x00
-
-/* write only register */
-#define UARTDM_TF_ADDR 0x70
-#define UARTDM_TF2_ADDR 0x74
-#define UARTDM_TF3_ADDR 0x78
-#define UARTDM_TF4_ADDR 0x7C
-
-/* write only register */
-#define UARTDM_CR_ADDR 0x10
-#define UARTDM_IMR_ADDR 0x14
-
-#define UARTDM_IPR_ADDR 0x18
-#define UARTDM_TFWR_ADDR 0x1c
-#define UARTDM_RFWR_ADDR 0x20
-#define UARTDM_HCR_ADDR 0x24
-#define UARTDM_DMRX_ADDR 0x34
-#define UARTDM_IRDA_ADDR 0x38
-#define UARTDM_DMEN_ADDR 0x3c
-
-/* UART_DM_NO_CHARS_FOR_TX */
-#define UARTDM_NCF_TX_ADDR 0x40
-
-#define UARTDM_BADR_ADDR 0x44
-
-#define UARTDM_SIM_CFG_ADDR 0x80
-/* Read Only register */
-#define UARTDM_SR_ADDR 0x8
-
-/* Read Only register */
-#define UARTDM_RF_ADDR 0x70
-#define UARTDM_RF2_ADDR 0x74
-#define UARTDM_RF3_ADDR 0x78
-#define UARTDM_RF4_ADDR 0x7C
-
-/* Read Only register */
-#define UARTDM_MISR_ADDR 0x10
-
-/* Read Only register */
-#define UARTDM_ISR_ADDR 0x14
-#define UARTDM_RX_TOTAL_SNAP_ADDR 0x38
-
-#define UARTDM_RXFS_ADDR 0x50
-
-/* Register field Mask Mapping */
-#define UARTDM_SR_PAR_FRAME_BMSK BIT(5)
-#define UARTDM_SR_OVERRUN_BMSK BIT(4)
-#define UARTDM_SR_TXEMT_BMSK BIT(3)
-#define UARTDM_SR_TXRDY_BMSK BIT(2)
-#define UARTDM_SR_RXRDY_BMSK BIT(0)
-
-#define UARTDM_CR_TX_DISABLE_BMSK BIT(3)
-#define UARTDM_CR_RX_DISABLE_BMSK BIT(1)
-#define UARTDM_CR_TX_EN_BMSK BIT(2)
-#define UARTDM_CR_RX_EN_BMSK BIT(0)
-
-/* UARTDM_CR channel_command bit value (register field is bits 8:4) */
-#define RESET_RX 0x10
-#define RESET_TX 0x20
-#define RESET_ERROR_STATUS 0x30
-#define RESET_BREAK_INT 0x40
-#define START_BREAK 0x50
-#define STOP_BREAK 0x60
-#define RESET_CTS 0x70
-#define RESET_STALE_INT 0x80
-#define RFR_LOW 0xD0
-#define RFR_HIGH 0xE0
-#define CR_PROTECTION_EN 0x100
-#define STALE_EVENT_ENABLE 0x500
-#define STALE_EVENT_DISABLE 0x600
-#define FORCE_STALE_EVENT 0x400
-#define CLEAR_TX_READY 0x300
-#define RESET_TX_ERROR 0x800
-#define RESET_TX_DONE 0x810
-
-#define UARTDM_MR1_AUTO_RFR_LEVEL1_BMSK 0xffffff00
-#define UARTDM_MR1_AUTO_RFR_LEVEL0_BMSK 0x3f
-#define UARTDM_MR1_CTS_CTL_BMSK 0x40
-#define UARTDM_MR1_RX_RDY_CTL_BMSK 0x80
-
-#define UARTDM_MR2_ERROR_MODE_BMSK 0x40
-#define UARTDM_MR2_BITS_PER_CHAR_BMSK 0x30
-
-/* bits per character configuration */
-#define FIVE_BPC (0 << 4)
-#define SIX_BPC (1 << 4)
-#define SEVEN_BPC (2 << 4)
-#define EIGHT_BPC (3 << 4)
-
-#define UARTDM_MR2_STOP_BIT_LEN_BMSK 0xc
-#define STOP_BIT_ONE (1 << 2)
-#define STOP_BIT_TWO (3 << 2)
-
-#define UARTDM_MR2_PARITY_MODE_BMSK 0x3
-
-/* Parity configuration */
-#define NO_PARITY 0x0
-#define EVEN_PARITY 0x1
-#define ODD_PARITY 0x2
-#define SPACE_PARITY 0x3
-
-#define UARTDM_IPR_STALE_TIMEOUT_MSB_BMSK 0xffffff80
-#define UARTDM_IPR_STALE_LSB_BMSK 0x1f
-
-/* These can be used for both ISR and IMR register */
-#define UARTDM_ISR_TX_READY_BMSK BIT(7)
-#define UARTDM_ISR_CURRENT_CTS_BMSK BIT(6)
-#define UARTDM_ISR_DELTA_CTS_BMSK BIT(5)
-#define UARTDM_ISR_RXLEV_BMSK BIT(4)
-#define UARTDM_ISR_RXSTALE_BMSK BIT(3)
-#define UARTDM_ISR_RXBREAK_BMSK BIT(2)
-#define UARTDM_ISR_RXHUNT_BMSK BIT(1)
-#define UARTDM_ISR_TXLEV_BMSK BIT(0)
-
-/* Field definitions for UART_DM_DMEN*/
-#define UARTDM_TX_DM_EN_BMSK 0x1
-#define UARTDM_RX_DM_EN_BMSK 0x2
-
-#define UART_FIFOSIZE 64
-#define UARTCLK 7372800
-
-/* Rx DMA request states */
-enum flush_reason {
- FLUSH_NONE,
- FLUSH_DATA_READY,
- FLUSH_DATA_INVALID, /* values after this indicate invalid data */
- FLUSH_IGNORE = FLUSH_DATA_INVALID,
- FLUSH_STOP,
- FLUSH_SHUTDOWN,
-};
-
-/* UART clock states */
-enum msm_hs_clk_states_e {
- MSM_HS_CLK_PORT_OFF, /* port not in use */
- MSM_HS_CLK_OFF, /* clock disabled */
- MSM_HS_CLK_REQUEST_OFF, /* disable after TX and RX flushed */
- MSM_HS_CLK_ON, /* clock enabled */
-};
-
-/* Track the forced RXSTALE flush during clock off sequence.
- * These states are only valid during MSM_HS_CLK_REQUEST_OFF */
-enum msm_hs_clk_req_off_state_e {
- CLK_REQ_OFF_START,
- CLK_REQ_OFF_RXSTALE_ISSUED,
- CLK_REQ_OFF_FLUSH_ISSUED,
- CLK_REQ_OFF_RXSTALE_FLUSHED,
-};
-
-/**
- * struct msm_hs_tx
- * @tx_ready_int_en: ok to dma more tx?
- * @dma_in_flight: tx dma in progress
- * @xfer: top level DMA command pointer structure
- * @command_ptr: third level command struct pointer
- * @command_ptr_ptr: second level command list struct pointer
- * @mapped_cmd_ptr: DMA view of third level command struct
- * @mapped_cmd_ptr_ptr: DMA view of second level command list struct
- * @tx_count: number of bytes to transfer in DMA transfer
- * @dma_base: DMA view of UART xmit buffer
- *
- * This structure describes a single Tx DMA transaction. MSM DMA
- * commands have two levels of indirection. The top level command
- * ptr points to a list of command ptr which in turn points to a
- * single DMA 'command'. In our case each Tx transaction consists
- * of a single second level pointer pointing to a 'box type' command.
- */
-struct msm_hs_tx {
- unsigned int tx_ready_int_en;
- unsigned int dma_in_flight;
- struct msm_dmov_cmd xfer;
- dmov_box *command_ptr;
- u32 *command_ptr_ptr;
- dma_addr_t mapped_cmd_ptr;
- dma_addr_t mapped_cmd_ptr_ptr;
- int tx_count;
- dma_addr_t dma_base;
-};
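A toy model of the two-level indirection described above may help: the top-level pointer hands the data mover a one-entry list, and that entry in turn points at the single box command. The types below are simplified stand-ins for illustration, not the real dmov structures.

#include <stdio.h>
#include <stdint.h>

/* simplified stand-in for a 'box type' DMA command */
struct toy_box {
	uint32_t src_row_addr;
	uint32_t num_rows;
};

int main(void)
{
	struct toy_box box = { .src_row_addr = 0x1000, .num_rows = 0x00010001 };

	/* second level: a one-entry command list pointing at the box */
	uintptr_t cmd_list[1] = { (uintptr_t)&box };

	/* top level: the address handed to the data mover is the list itself */
	uintptr_t top = (uintptr_t)cmd_list;

	/* following both levels of indirection reaches the box command */
	struct toy_box *resolved = (struct toy_box *)((uintptr_t *)top)[0];

	printf("num_rows = 0x%08x\n", (unsigned int)resolved->num_rows);
	return 0;
}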
-
-/**
- * struct msm_hs_rx
- * @flush: Rx DMA request state
- * @xfer: top level DMA command pointer structure
- * @cmdptr_dmaaddr: DMA view of second level command structure
- * @command_ptr: third level DMA command pointer structure
- * @command_ptr_ptr: second level DMA command list pointer
- * @mapped_cmd_ptr: DMA view of the third level command structure
- * @wait: wait for DMA completion before shutdown
- * @buffer: destination buffer for RX DMA
- * @rbuffer: DMA view of buffer
- * @pool: dma pool out of which coherent rx buffer is allocated
- * @tty_work: private work-queue for tty flip buffer push task
- *
- * This structure describes a single Rx DMA transaction. Rx DMA
- * transactions use box mode DMA commands.
- */
-struct msm_hs_rx {
- enum flush_reason flush;
- struct msm_dmov_cmd xfer;
- dma_addr_t cmdptr_dmaaddr;
- dmov_box *command_ptr;
- u32 *command_ptr_ptr;
- dma_addr_t mapped_cmd_ptr;
- wait_queue_head_t wait;
- dma_addr_t rbuffer;
- unsigned char *buffer;
- struct dma_pool *pool;
- struct work_struct tty_work;
-};
-
-/**
- * struct msm_hs_rx_wakeup
- * @irq: IRQ line to be configured as interrupt source on Rx activity
- * @ignore: boolean value. 1 = ignore the wakeup interrupt
- * @rx_to_inject: extra character to be inserted to Rx tty on wakeup
- * @inject_rx: 1 = insert rx_to_inject. 0 = do not insert extra character
- *
- * This is an optional structure required for UART Rx GPIO IRQ based
- * wakeup from low power state. UART wakeup can be triggered by RX activity
- * (using a wakeup GPIO on the UART RX pin). This should only be used if
- * there is not a wakeup GPIO on the UART CTS, and the first RX byte is
- * known (eg., with the Bluetooth Texas Instruments HCILL protocol),
- * known (e.g., with the Bluetooth Texas Instruments HCILL protocol),
- * while the UART is clocked off in this mode of operation.
- */
-struct msm_hs_rx_wakeup {
- int irq; /* < 0 indicates low power wakeup disabled */
- unsigned char ignore;
- unsigned char inject_rx;
- char rx_to_inject;
-};
-
-/**
- * struct msm_hs_port
- * @uport: embedded uart port structure
- * @imr_reg: shadow value of UARTDM_IMR
- * @clk: uart input clock handle
- * @tx: Tx transaction related data structure
- * @rx: Rx transaction related data structure
- * @dma_tx_channel: Tx DMA command channel
- * @dma_rx_channel: Rx DMA command channel
- * @dma_tx_crci: Tx channel rate control interface number
- * @dma_rx_crci: Rx channel rate control interface number
- * @clk_off_timer: Timer to poll DMA event completion before clock off
- * @clk_off_delay: clk_off_timer poll interval
- * @clk_state: overall clock state
- * @clk_req_off_state: post flush clock states
- * @rx_wakeup: optional rx_wakeup feature related data
- * @exit_lpm_cb: optional callback to exit low power mode
- *
- * Low level serial port structure.
- */
-struct msm_hs_port {
- struct uart_port uport;
- unsigned long imr_reg;
- struct clk *clk;
- struct msm_hs_tx tx;
- struct msm_hs_rx rx;
-
- int dma_tx_channel;
- int dma_rx_channel;
- int dma_tx_crci;
- int dma_rx_crci;
-
- struct hrtimer clk_off_timer;
- ktime_t clk_off_delay;
- enum msm_hs_clk_states_e clk_state;
- enum msm_hs_clk_req_off_state_e clk_req_off_state;
-
- struct msm_hs_rx_wakeup rx_wakeup;
- void (*exit_lpm_cb)(struct uart_port *);
-};
-
-#define MSM_UARTDM_BURST_SIZE 16 /* DM burst size (in bytes) */
-#define UARTDM_TX_BUF_SIZE UART_XMIT_SIZE
-#define UARTDM_RX_BUF_SIZE 512
-
-#define UARTDM_NR 2
-
-static struct msm_hs_port q_uart_port[UARTDM_NR];
-static struct platform_driver msm_serial_hs_platform_driver;
-static struct uart_driver msm_hs_driver;
-static struct uart_ops msm_hs_ops;
-static struct workqueue_struct *msm_hs_workqueue;
-
-#define UARTDM_TO_MSM(uart_port) \
- container_of((uart_port), struct msm_hs_port, uport)
-
-static unsigned int use_low_power_rx_wakeup(struct msm_hs_port
- *msm_uport)
-{
- return (msm_uport->rx_wakeup.irq >= 0);
-}
-
-static unsigned int msm_hs_read(struct uart_port *uport,
- unsigned int offset)
-{
- return ioread32(uport->membase + offset);
-}
-
-static void msm_hs_write(struct uart_port *uport, unsigned int offset,
- unsigned int value)
-{
- iowrite32(value, uport->membase + offset);
-}
-
-static void msm_hs_release_port(struct uart_port *port)
-{
- iounmap(port->membase);
-}
-
-static int msm_hs_request_port(struct uart_port *port)
-{
- port->membase = ioremap(port->mapbase, PAGE_SIZE);
- if (unlikely(!port->membase))
- return -ENOMEM;
-
- /* configure the CR Protection to Enable */
- msm_hs_write(port, UARTDM_CR_ADDR, CR_PROTECTION_EN);
- return 0;
-}
-
-static int msm_hs_remove(struct platform_device *pdev)
-{
-
- struct msm_hs_port *msm_uport;
- struct device *dev;
-
- if (pdev->id < 0 || pdev->id >= UARTDM_NR) {
-		printk(KERN_ERR "Invalid platform device ID = %d\n", pdev->id);
- return -EINVAL;
- }
-
- msm_uport = &q_uart_port[pdev->id];
- dev = msm_uport->uport.dev;
-
- dma_unmap_single(dev, msm_uport->rx.mapped_cmd_ptr, sizeof(dmov_box),
- DMA_TO_DEVICE);
- dma_pool_free(msm_uport->rx.pool, msm_uport->rx.buffer,
- msm_uport->rx.rbuffer);
- dma_pool_destroy(msm_uport->rx.pool);
-
- dma_unmap_single(dev, msm_uport->rx.cmdptr_dmaaddr, sizeof(u32),
- DMA_TO_DEVICE);
- dma_unmap_single(dev, msm_uport->tx.mapped_cmd_ptr_ptr, sizeof(u32),
- DMA_TO_DEVICE);
- dma_unmap_single(dev, msm_uport->tx.mapped_cmd_ptr, sizeof(dmov_box),
- DMA_TO_DEVICE);
-
- uart_remove_one_port(&msm_hs_driver, &msm_uport->uport);
- clk_put(msm_uport->clk);
-
- /* Free the tx resources */
- kfree(msm_uport->tx.command_ptr);
- kfree(msm_uport->tx.command_ptr_ptr);
-
- /* Free the rx resources */
- kfree(msm_uport->rx.command_ptr);
- kfree(msm_uport->rx.command_ptr_ptr);
-
- iounmap(msm_uport->uport.membase);
-
- return 0;
-}
-
-static int msm_hs_init_clk_locked(struct uart_port *uport)
-{
- int ret;
- struct msm_hs_port *msm_uport = UARTDM_TO_MSM(uport);
-
- ret = clk_enable(msm_uport->clk);
- if (ret) {
- printk(KERN_ERR "Error could not turn on UART clk\n");
- return ret;
- }
-
- /* Set up the MREG/NREG/DREG/MNDREG */
- ret = clk_set_rate(msm_uport->clk, uport->uartclk);
- if (ret) {
- printk(KERN_WARNING "Error setting clock rate on UART\n");
- clk_disable(msm_uport->clk);
- return ret;
- }
-
- msm_uport->clk_state = MSM_HS_CLK_ON;
- return 0;
-}
-
-/* Enable and Disable clocks (Used for power management) */
-static void msm_hs_pm(struct uart_port *uport, unsigned int state,
- unsigned int oldstate)
-{
- struct msm_hs_port *msm_uport = UARTDM_TO_MSM(uport);
-
- if (use_low_power_rx_wakeup(msm_uport) ||
- msm_uport->exit_lpm_cb)
- return; /* ignore linux PM states,
- use msm_hs_request_clock API */
-
- switch (state) {
- case 0:
- clk_enable(msm_uport->clk);
- break;
- case 3:
- clk_disable(msm_uport->clk);
- break;
- default:
- dev_err(uport->dev, "msm_serial: Unknown PM state %d\n",
- state);
- }
-}
-
-/*
- * Programs the UARTDM_CSR register with the correct bit rate.
- *
- * Interrupts should be disabled before we are called, as we
- * modify the baud rate and the receive stale interrupt level,
- * which depends on the bit rate. The goal is to have around
- * 8 ms before the stale event is indicated:
- *	rxstale = roundup((bit_rate * 0.008) / 10) + 1
- */
-static void msm_hs_set_bps_locked(struct uart_port *uport,
- unsigned int bps)
-{
- unsigned long rxstale;
- unsigned long data;
- struct msm_hs_port *msm_uport = UARTDM_TO_MSM(uport);
-
- switch (bps) {
- case 300:
- msm_hs_write(uport, UARTDM_CSR_ADDR, UARTDM_CSR_75);
- rxstale = 1;
- break;
- case 600:
- msm_hs_write(uport, UARTDM_CSR_ADDR, UARTDM_CSR_150);
- rxstale = 1;
- break;
- case 1200:
- msm_hs_write(uport, UARTDM_CSR_ADDR, UARTDM_CSR_300);
- rxstale = 1;
- break;
- case 2400:
- msm_hs_write(uport, UARTDM_CSR_ADDR, UARTDM_CSR_600);
- rxstale = 1;
- break;
- case 4800:
- msm_hs_write(uport, UARTDM_CSR_ADDR, UARTDM_CSR_1200);
- rxstale = 1;
- break;
- case 9600:
- msm_hs_write(uport, UARTDM_CSR_ADDR, UARTDM_CSR_2400);
- rxstale = 2;
- break;
- case 14400:
- msm_hs_write(uport, UARTDM_CSR_ADDR, UARTDM_CSR_3600);
- rxstale = 3;
- break;
- case 19200:
- msm_hs_write(uport, UARTDM_CSR_ADDR, UARTDM_CSR_4800);
- rxstale = 4;
- break;
- case 28800:
- msm_hs_write(uport, UARTDM_CSR_ADDR, UARTDM_CSR_7200);
- rxstale = 6;
- break;
- case 38400:
- msm_hs_write(uport, UARTDM_CSR_ADDR, UARTDM_CSR_9600);
- rxstale = 8;
- break;
- case 57600:
- msm_hs_write(uport, UARTDM_CSR_ADDR, UARTDM_CSR_14400);
- rxstale = 16;
- break;
- case 76800:
- msm_hs_write(uport, UARTDM_CSR_ADDR, UARTDM_CSR_19200);
- rxstale = 16;
- break;
- case 115200:
- msm_hs_write(uport, UARTDM_CSR_ADDR, UARTDM_CSR_28800);
- rxstale = 31;
- break;
- case 230400:
- msm_hs_write(uport, UARTDM_CSR_ADDR, UARTDM_CSR_57600);
- rxstale = 31;
- break;
- case 460800:
- msm_hs_write(uport, UARTDM_CSR_ADDR, UARTDM_CSR_115200);
- rxstale = 31;
- break;
- case 4000000:
- case 3686400:
- case 3200000:
- case 3500000:
- case 3000000:
- case 2500000:
- case 1500000:
- case 1152000:
- case 1000000:
- case 921600:
- msm_hs_write(uport, UARTDM_CSR_ADDR, UARTDM_CSR_115200);
- rxstale = 31;
- break;
- default:
- msm_hs_write(uport, UARTDM_CSR_ADDR, UARTDM_CSR_2400);
- /* default to 9600 */
- bps = 9600;
- rxstale = 2;
- break;
- }
- if (bps > 460800)
- uport->uartclk = bps * 16;
- else
- uport->uartclk = UARTCLK;
-
- if (clk_set_rate(msm_uport->clk, uport->uartclk)) {
- printk(KERN_WARNING "Error setting clock rate on UART\n");
- return;
- }
-
- data = rxstale & UARTDM_IPR_STALE_LSB_BMSK;
- data |= UARTDM_IPR_STALE_TIMEOUT_MSB_BMSK & (rxstale << 2);
-
- msm_hs_write(uport, UARTDM_IPR_ADDR, data);
-}
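For reference, a small stand-alone check of the UARTDM_IPR packing done at the end of this function, reusing the same field masks (illustrative only): the low five bits of rxstale land in the STALE_LSB field and any remaining bits are folded into the shifted MSB field.

#include <stdio.h>

#define UARTDM_IPR_STALE_TIMEOUT_MSB_BMSK 0xffffff80
#define UARTDM_IPR_STALE_LSB_BMSK 0x1f

int main(void)
{
	/* 31 is the largest value used above; 40 just exercises the MSB field */
	unsigned long samples[] = { 31, 40 };
	int i;

	for (i = 0; i < 2; i++) {
		unsigned long rxstale = samples[i];
		unsigned long data = rxstale & UARTDM_IPR_STALE_LSB_BMSK;

		data |= UARTDM_IPR_STALE_TIMEOUT_MSB_BMSK & (rxstale << 2);
		printf("rxstale=%lu -> IPR=0x%lx\n", rxstale, data);
	}
	return 0;
}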
-
-/*
- * termios : new ktermios
- * oldtermios: old ktermios previous setting
- *
- * Configure the serial port
- */
-static void msm_hs_set_termios(struct uart_port *uport,
- struct ktermios *termios,
- struct ktermios *oldtermios)
-{
- unsigned int bps;
- unsigned long data;
- unsigned long flags;
- unsigned int c_cflag = termios->c_cflag;
- struct msm_hs_port *msm_uport = UARTDM_TO_MSM(uport);
-
- spin_lock_irqsave(&uport->lock, flags);
- clk_enable(msm_uport->clk);
-
-	/* 300 is the minimum baud rate supported by the driver */
- bps = uart_get_baud_rate(uport, termios, oldtermios, 200, 4000000);
-
- /* Temporary remapping 200 BAUD to 3.2 mbps */
- if (bps == 200)
- bps = 3200000;
-
- msm_hs_set_bps_locked(uport, bps);
-
- data = msm_hs_read(uport, UARTDM_MR2_ADDR);
- data &= ~UARTDM_MR2_PARITY_MODE_BMSK;
- /* set parity */
- if (PARENB == (c_cflag & PARENB)) {
- if (PARODD == (c_cflag & PARODD))
- data |= ODD_PARITY;
- else if (CMSPAR == (c_cflag & CMSPAR))
- data |= SPACE_PARITY;
- else
- data |= EVEN_PARITY;
- }
-
- /* Set bits per char */
- data &= ~UARTDM_MR2_BITS_PER_CHAR_BMSK;
-
- switch (c_cflag & CSIZE) {
- case CS5:
- data |= FIVE_BPC;
- break;
- case CS6:
- data |= SIX_BPC;
- break;
- case CS7:
- data |= SEVEN_BPC;
- break;
- default:
- data |= EIGHT_BPC;
- break;
- }
- /* stop bits */
- if (c_cflag & CSTOPB) {
- data |= STOP_BIT_TWO;
- } else {
- /* otherwise 1 stop bit */
- data |= STOP_BIT_ONE;
- }
- data |= UARTDM_MR2_ERROR_MODE_BMSK;
- /* write parity/bits per char/stop bit configuration */
- msm_hs_write(uport, UARTDM_MR2_ADDR, data);
-
- /* Configure HW flow control */
- data = msm_hs_read(uport, UARTDM_MR1_ADDR);
-
- data &= ~(UARTDM_MR1_CTS_CTL_BMSK | UARTDM_MR1_RX_RDY_CTL_BMSK);
-
- if (c_cflag & CRTSCTS) {
- data |= UARTDM_MR1_CTS_CTL_BMSK;
- data |= UARTDM_MR1_RX_RDY_CTL_BMSK;
- }
-
- msm_hs_write(uport, UARTDM_MR1_ADDR, data);
-
- uport->ignore_status_mask = termios->c_iflag & INPCK;
- uport->ignore_status_mask |= termios->c_iflag & IGNPAR;
- uport->read_status_mask = (termios->c_cflag & CREAD);
-
- msm_hs_write(uport, UARTDM_IMR_ADDR, 0);
-
- /* Set Transmit software time out */
- uart_update_timeout(uport, c_cflag, bps);
-
- msm_hs_write(uport, UARTDM_CR_ADDR, RESET_RX);
- msm_hs_write(uport, UARTDM_CR_ADDR, RESET_TX);
-
- if (msm_uport->rx.flush == FLUSH_NONE) {
- msm_uport->rx.flush = FLUSH_IGNORE;
- msm_dmov_stop_cmd(msm_uport->dma_rx_channel, NULL, 1);
- }
-
- msm_hs_write(uport, UARTDM_IMR_ADDR, msm_uport->imr_reg);
-
- clk_disable(msm_uport->clk);
- spin_unlock_irqrestore(&uport->lock, flags);
-}
-
-/*
- * Standard API, Transmitter
- * Any character in the transmit shift register is sent
- */
-static unsigned int msm_hs_tx_empty(struct uart_port *uport)
-{
- unsigned int data;
- unsigned int ret = 0;
- struct msm_hs_port *msm_uport = UARTDM_TO_MSM(uport);
-
- clk_enable(msm_uport->clk);
-
- data = msm_hs_read(uport, UARTDM_SR_ADDR);
- if (data & UARTDM_SR_TXEMT_BMSK)
- ret = TIOCSER_TEMT;
-
- clk_disable(msm_uport->clk);
-
- return ret;
-}
-
-/*
- * Standard API, Stop transmitter.
- * Any character in the transmit shift register is sent as
- * well as the current data mover transfer.
- */
-static void msm_hs_stop_tx_locked(struct uart_port *uport)
-{
- struct msm_hs_port *msm_uport = UARTDM_TO_MSM(uport);
-
- msm_uport->tx.tx_ready_int_en = 0;
-}
-
-/*
- * Standard API, Stop receiver as soon as possible.
- *
- * This function immediately terminates the operation of the
- * channel receiver and any incoming characters are lost. None
- * of the receiver status bits are affected by this command, and
- * characters that are already in the receive FIFO remain there.
- */
-static void msm_hs_stop_rx_locked(struct uart_port *uport)
-{
- struct msm_hs_port *msm_uport = UARTDM_TO_MSM(uport);
- unsigned int data;
-
- clk_enable(msm_uport->clk);
-
- /* disable dlink */
- data = msm_hs_read(uport, UARTDM_DMEN_ADDR);
- data &= ~UARTDM_RX_DM_EN_BMSK;
- msm_hs_write(uport, UARTDM_DMEN_ADDR, data);
-
- /* Disable the receiver */
- if (msm_uport->rx.flush == FLUSH_NONE)
- msm_dmov_stop_cmd(msm_uport->dma_rx_channel, NULL, 1);
-
- if (msm_uport->rx.flush != FLUSH_SHUTDOWN)
- msm_uport->rx.flush = FLUSH_STOP;
-
- clk_disable(msm_uport->clk);
-}
-
-/* Transmit the next chunk of data */
-static void msm_hs_submit_tx_locked(struct uart_port *uport)
-{
- int left;
- int tx_count;
- dma_addr_t src_addr;
- struct msm_hs_port *msm_uport = UARTDM_TO_MSM(uport);
- struct msm_hs_tx *tx = &msm_uport->tx;
- struct circ_buf *tx_buf = &msm_uport->uport.state->xmit;
-
- if (uart_circ_empty(tx_buf) || uport->state->port.tty->stopped) {
- msm_hs_stop_tx_locked(uport);
- return;
- }
-
- tx->dma_in_flight = 1;
-
- tx_count = uart_circ_chars_pending(tx_buf);
-
- if (UARTDM_TX_BUF_SIZE < tx_count)
- tx_count = UARTDM_TX_BUF_SIZE;
-
- left = UART_XMIT_SIZE - tx_buf->tail;
-
- if (tx_count > left)
- tx_count = left;
-
- src_addr = tx->dma_base + tx_buf->tail;
- dma_sync_single_for_device(uport->dev, src_addr, tx_count,
- DMA_TO_DEVICE);
-
- tx->command_ptr->num_rows = (((tx_count + 15) >> 4) << 16) |
- ((tx_count + 15) >> 4);
- tx->command_ptr->src_row_addr = src_addr;
-
- dma_sync_single_for_device(uport->dev, tx->mapped_cmd_ptr,
- sizeof(dmov_box), DMA_TO_DEVICE);
-
- *tx->command_ptr_ptr = CMD_PTR_LP | DMOV_CMD_ADDR(tx->mapped_cmd_ptr);
-
- dma_sync_single_for_device(uport->dev, tx->mapped_cmd_ptr_ptr,
- sizeof(u32), DMA_TO_DEVICE);
-
- /* Save tx_count to use in Callback */
- tx->tx_count = tx_count;
- msm_hs_write(uport, UARTDM_NCF_TX_ADDR, tx_count);
-
- /* Disable the tx_ready interrupt */
- msm_uport->imr_reg &= ~UARTDM_ISR_TX_READY_BMSK;
- msm_hs_write(uport, UARTDM_IMR_ADDR, msm_uport->imr_reg);
- msm_dmov_enqueue_cmd(msm_uport->dma_tx_channel, &tx->xfer);
-}
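The num_rows computation in msm_hs_submit_tx_locked() rounds tx_count up to whole 16-byte data-mover rows and stores the same row count in both 16-bit halves of the field. A minimal stand-alone check of that arithmetic (the sample byte count is arbitrary):

#include <stdio.h>

int main(void)
{
	int tx_count = 37;				/* arbitrary sample */
	unsigned int rows = (tx_count + 15) >> 4;	/* round up to 16-byte rows */
	unsigned int num_rows = (rows << 16) | rows;	/* same count in both halves */

	printf("tx_count=%d -> rows=%u, num_rows=0x%08x\n",
	       tx_count, rows, num_rows);
	return 0;
}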
-
-/* Start to receive the next chunk of data */
-static void msm_hs_start_rx_locked(struct uart_port *uport)
-{
- struct msm_hs_port *msm_uport = UARTDM_TO_MSM(uport);
-
- msm_hs_write(uport, UARTDM_CR_ADDR, RESET_STALE_INT);
- msm_hs_write(uport, UARTDM_DMRX_ADDR, UARTDM_RX_BUF_SIZE);
- msm_hs_write(uport, UARTDM_CR_ADDR, STALE_EVENT_ENABLE);
- msm_uport->imr_reg |= UARTDM_ISR_RXLEV_BMSK;
- msm_hs_write(uport, UARTDM_IMR_ADDR, msm_uport->imr_reg);
-
- msm_uport->rx.flush = FLUSH_NONE;
- msm_dmov_enqueue_cmd(msm_uport->dma_rx_channel, &msm_uport->rx.xfer);
-
- /* might have finished RX and be ready to clock off */
- hrtimer_start(&msm_uport->clk_off_timer, msm_uport->clk_off_delay,
- HRTIMER_MODE_REL);
-}
-
-/* Enable the transmitter Interrupt */
-static void msm_hs_start_tx_locked(struct uart_port *uport)
-{
- struct msm_hs_port *msm_uport = UARTDM_TO_MSM(uport);
-
- clk_enable(msm_uport->clk);
-
- if (msm_uport->exit_lpm_cb)
- msm_uport->exit_lpm_cb(uport);
-
- if (msm_uport->tx.tx_ready_int_en == 0) {
- msm_uport->tx.tx_ready_int_en = 1;
- msm_hs_submit_tx_locked(uport);
- }
-
- clk_disable(msm_uport->clk);
-}
-
-/*
- * This routine is called when we are done with a DMA transfer
- *
- * This routine is registered with Data mover when we set
- * up a Data Mover transfer. It is called from Data mover ISR
- * when the DMA transfer is done.
- */
-static void msm_hs_dmov_tx_callback(struct msm_dmov_cmd *cmd_ptr,
- unsigned int result,
- struct msm_dmov_errdata *err)
-{
- unsigned long flags;
- struct msm_hs_port *msm_uport;
-
- /* DMA did not finish properly */
- WARN_ON((((result & RSLT_FIFO_CNTR_BMSK) >> 28) == 1) &&
- !(result & RSLT_VLD));
-
- msm_uport = container_of(cmd_ptr, struct msm_hs_port, tx.xfer);
-
- spin_lock_irqsave(&msm_uport->uport.lock, flags);
- clk_enable(msm_uport->clk);
-
- msm_uport->imr_reg |= UARTDM_ISR_TX_READY_BMSK;
- msm_hs_write(&msm_uport->uport, UARTDM_IMR_ADDR, msm_uport->imr_reg);
-
- clk_disable(msm_uport->clk);
- spin_unlock_irqrestore(&msm_uport->uport.lock, flags);
-}
-
-/*
- * This routine is called when we are done with a DMA transfer or
- * a flush has been sent to the data mover driver.
- *
- * This routine is registered with Data mover when we set up a Data Mover
- * transfer. It is called from Data mover ISR when the DMA transfer is done.
- */
-static void msm_hs_dmov_rx_callback(struct msm_dmov_cmd *cmd_ptr,
- unsigned int result,
- struct msm_dmov_errdata *err)
-{
- int retval;
- int rx_count;
- unsigned long status;
- unsigned int error_f = 0;
- unsigned long flags;
- unsigned int flush;
- struct tty_port *port;
- struct uart_port *uport;
- struct msm_hs_port *msm_uport;
-
- msm_uport = container_of(cmd_ptr, struct msm_hs_port, rx.xfer);
- uport = &msm_uport->uport;
-
- spin_lock_irqsave(&uport->lock, flags);
- clk_enable(msm_uport->clk);
-
- port = &uport->state->port;
-
- msm_hs_write(uport, UARTDM_CR_ADDR, STALE_EVENT_DISABLE);
-
- status = msm_hs_read(uport, UARTDM_SR_ADDR);
-
-	/* overflow is not connected to data in the FIFO */
- if (unlikely((status & UARTDM_SR_OVERRUN_BMSK) &&
- (uport->read_status_mask & CREAD))) {
- tty_insert_flip_char(port, 0, TTY_OVERRUN);
- uport->icount.buf_overrun++;
- error_f = 1;
- }
-
- if (!(uport->ignore_status_mask & INPCK))
- status = status & ~(UARTDM_SR_PAR_FRAME_BMSK);
-
- if (unlikely(status & UARTDM_SR_PAR_FRAME_BMSK)) {
-		/* Cannot tell the difference between parity & frame errors */
- uport->icount.parity++;
- error_f = 1;
- if (uport->ignore_status_mask & IGNPAR)
- tty_insert_flip_char(port, 0, TTY_PARITY);
- }
-
- if (error_f)
- msm_hs_write(uport, UARTDM_CR_ADDR, RESET_ERROR_STATUS);
-
- if (msm_uport->clk_req_off_state == CLK_REQ_OFF_FLUSH_ISSUED)
- msm_uport->clk_req_off_state = CLK_REQ_OFF_RXSTALE_FLUSHED;
-
- flush = msm_uport->rx.flush;
- if (flush == FLUSH_IGNORE)
- msm_hs_start_rx_locked(uport);
- if (flush == FLUSH_STOP)
- msm_uport->rx.flush = FLUSH_SHUTDOWN;
- if (flush >= FLUSH_DATA_INVALID)
- goto out;
-
- rx_count = msm_hs_read(uport, UARTDM_RX_TOTAL_SNAP_ADDR);
-
- if (0 != (uport->read_status_mask & CREAD)) {
- retval = tty_insert_flip_string(port, msm_uport->rx.buffer,
- rx_count);
- BUG_ON(retval != rx_count);
- }
-
- msm_hs_start_rx_locked(uport);
-
-out:
- clk_disable(msm_uport->clk);
-
- spin_unlock_irqrestore(&uport->lock, flags);
-
- if (flush < FLUSH_DATA_INVALID)
- queue_work(msm_hs_workqueue, &msm_uport->rx.tty_work);
-}
-
-static void msm_hs_tty_flip_buffer_work(struct work_struct *work)
-{
- struct msm_hs_port *msm_uport =
- container_of(work, struct msm_hs_port, rx.tty_work);
-
- tty_flip_buffer_push(&msm_uport->uport.state->port);
-}
-
-/*
- * Standard API, Current states of modem control inputs
- *
- * Since CTS can be handled entirely by HARDWARE we always
- * indicate clear to send and count on the TX FIFO to block when
- * it fills up.
- *
- * - TIOCM_DCD
- * - TIOCM_CTS
- * - TIOCM_DSR
- * - TIOCM_RI
- *	DCD and DSR are not supported and always read as high; RI reads as low.
- */
-static unsigned int msm_hs_get_mctrl_locked(struct uart_port *uport)
-{
- return TIOCM_DSR | TIOCM_CAR | TIOCM_CTS;
-}
-
-/*
- * True enables UART auto RFR, which indicates we are ready for data if the RX
- * buffer is not full. False disables auto RFR, and deasserts RFR to indicate
- * we are not ready for data. Must be called with UART clock on.
- */
-static void set_rfr_locked(struct uart_port *uport, int auto_rfr)
-{
- unsigned int data;
-
- data = msm_hs_read(uport, UARTDM_MR1_ADDR);
-
- if (auto_rfr) {
- /* enable auto ready-for-receiving */
- data |= UARTDM_MR1_RX_RDY_CTL_BMSK;
- msm_hs_write(uport, UARTDM_MR1_ADDR, data);
- } else {
- /* disable auto ready-for-receiving */
- data &= ~UARTDM_MR1_RX_RDY_CTL_BMSK;
- msm_hs_write(uport, UARTDM_MR1_ADDR, data);
- /* RFR is active low, set high */
- msm_hs_write(uport, UARTDM_CR_ADDR, RFR_HIGH);
- }
-}
-
-/*
- * Standard API, used to set or clear RFR
- */
-static void msm_hs_set_mctrl_locked(struct uart_port *uport,
- unsigned int mctrl)
-{
- unsigned int auto_rfr;
- struct msm_hs_port *msm_uport = UARTDM_TO_MSM(uport);
-
- clk_enable(msm_uport->clk);
-
- auto_rfr = TIOCM_RTS & mctrl ? 1 : 0;
- set_rfr_locked(uport, auto_rfr);
-
- clk_disable(msm_uport->clk);
-}
-
-/* Standard API, Enable modem status (CTS) interrupt */
-static void msm_hs_enable_ms_locked(struct uart_port *uport)
-{
- struct msm_hs_port *msm_uport = UARTDM_TO_MSM(uport);
-
- clk_enable(msm_uport->clk);
-
- /* Enable DELTA_CTS Interrupt */
- msm_uport->imr_reg |= UARTDM_ISR_DELTA_CTS_BMSK;
- msm_hs_write(uport, UARTDM_IMR_ADDR, msm_uport->imr_reg);
-
- clk_disable(msm_uport->clk);
-
-}
-
-/*
- * Standard API, Break Signal
- *
- * Control the transmission of a break signal: ctl == 0 terminates
- * the break signal, ctl != 0 starts it.
- */
-static void msm_hs_break_ctl(struct uart_port *uport, int ctl)
-{
- struct msm_hs_port *msm_uport = UARTDM_TO_MSM(uport);
-
- clk_enable(msm_uport->clk);
- msm_hs_write(uport, UARTDM_CR_ADDR, ctl ? START_BREAK : STOP_BREAK);
- clk_disable(msm_uport->clk);
-}
-
-static void msm_hs_config_port(struct uart_port *uport, int cfg_flags)
-{
- unsigned long flags;
-
- spin_lock_irqsave(&uport->lock, flags);
- if (cfg_flags & UART_CONFIG_TYPE) {
- uport->type = PORT_MSM;
- msm_hs_request_port(uport);
- }
- spin_unlock_irqrestore(&uport->lock, flags);
-}
-
-/* Handle CTS changes (Called from interrupt handler) */
-static void msm_hs_handle_delta_cts_locked(struct uart_port *uport)
-{
- struct msm_hs_port *msm_uport = UARTDM_TO_MSM(uport);
-
- clk_enable(msm_uport->clk);
-
- /* clear interrupt */
- msm_hs_write(uport, UARTDM_CR_ADDR, RESET_CTS);
- uport->icount.cts++;
-
- clk_disable(msm_uport->clk);
-
- /* clear the IOCTL TIOCMIWAIT if called */
- wake_up_interruptible(&uport->state->port.delta_msr_wait);
-}
-
-/* Check if the TX path is flushed, and if so clock off. Returns:
- *   0 - did not clock off, need to retry (still sending final byte)
- *  -1 - did not clock off, do not retry
- *   1 - we clocked off
- */
-static int msm_hs_check_clock_off_locked(struct uart_port *uport)
-{
- unsigned long sr_status;
- struct msm_hs_port *msm_uport = UARTDM_TO_MSM(uport);
- struct circ_buf *tx_buf = &uport->state->xmit;
-
- /* Cancel if tx tty buffer is not empty, dma is in flight,
- * or tx fifo is not empty, or rx fifo is not empty */
- if (msm_uport->clk_state != MSM_HS_CLK_REQUEST_OFF ||
- !uart_circ_empty(tx_buf) || msm_uport->tx.dma_in_flight ||
- (msm_uport->imr_reg & UARTDM_ISR_TXLEV_BMSK) ||
- !(msm_uport->imr_reg & UARTDM_ISR_RXLEV_BMSK)) {
- return -1;
- }
-
- /* Make sure the uart is finished with the last byte */
- sr_status = msm_hs_read(uport, UARTDM_SR_ADDR);
- if (!(sr_status & UARTDM_SR_TXEMT_BMSK))
- return 0; /* retry */
-
- /* Make sure forced RXSTALE flush complete */
- switch (msm_uport->clk_req_off_state) {
- case CLK_REQ_OFF_START:
- msm_uport->clk_req_off_state = CLK_REQ_OFF_RXSTALE_ISSUED;
- msm_hs_write(uport, UARTDM_CR_ADDR, FORCE_STALE_EVENT);
- return 0; /* RXSTALE flush not complete - retry */
- case CLK_REQ_OFF_RXSTALE_ISSUED:
- case CLK_REQ_OFF_FLUSH_ISSUED:
- return 0; /* RXSTALE flush not complete - retry */
- case CLK_REQ_OFF_RXSTALE_FLUSHED:
- break; /* continue */
- }
-
- if (msm_uport->rx.flush != FLUSH_SHUTDOWN) {
- if (msm_uport->rx.flush == FLUSH_NONE)
- msm_hs_stop_rx_locked(uport);
- return 0; /* come back later to really clock off */
- }
-
- /* we really want to clock off */
- clk_disable(msm_uport->clk);
- msm_uport->clk_state = MSM_HS_CLK_OFF;
-
- if (use_low_power_rx_wakeup(msm_uport)) {
- msm_uport->rx_wakeup.ignore = 1;
- enable_irq(msm_uport->rx_wakeup.irq);
- }
- return 1;
-}
-
-static enum hrtimer_restart msm_hs_clk_off_retry(struct hrtimer *timer)
-{
- unsigned long flags;
- int ret = HRTIMER_NORESTART;
- struct msm_hs_port *msm_uport = container_of(timer, struct msm_hs_port,
- clk_off_timer);
- struct uart_port *uport = &msm_uport->uport;
-
- spin_lock_irqsave(&uport->lock, flags);
-
- if (!msm_hs_check_clock_off_locked(uport)) {
- hrtimer_forward_now(timer, msm_uport->clk_off_delay);
- ret = HRTIMER_RESTART;
- }
-
- spin_unlock_irqrestore(&uport->lock, flags);
-
- return ret;
-}
-
-static irqreturn_t msm_hs_isr(int irq, void *dev)
-{
- unsigned long flags;
- unsigned long isr_status;
- struct msm_hs_port *msm_uport = dev;
- struct uart_port *uport = &msm_uport->uport;
- struct circ_buf *tx_buf = &uport->state->xmit;
- struct msm_hs_tx *tx = &msm_uport->tx;
- struct msm_hs_rx *rx = &msm_uport->rx;
-
- spin_lock_irqsave(&uport->lock, flags);
-
- isr_status = msm_hs_read(uport, UARTDM_MISR_ADDR);
-
- /* Uart RX starting */
- if (isr_status & UARTDM_ISR_RXLEV_BMSK) {
- msm_uport->imr_reg &= ~UARTDM_ISR_RXLEV_BMSK;
- msm_hs_write(uport, UARTDM_IMR_ADDR, msm_uport->imr_reg);
- }
- /* Stale rx interrupt */
- if (isr_status & UARTDM_ISR_RXSTALE_BMSK) {
- msm_hs_write(uport, UARTDM_CR_ADDR, STALE_EVENT_DISABLE);
- msm_hs_write(uport, UARTDM_CR_ADDR, RESET_STALE_INT);
-
- if (msm_uport->clk_req_off_state == CLK_REQ_OFF_RXSTALE_ISSUED)
- msm_uport->clk_req_off_state =
- CLK_REQ_OFF_FLUSH_ISSUED;
- if (rx->flush == FLUSH_NONE) {
- rx->flush = FLUSH_DATA_READY;
- msm_dmov_stop_cmd(msm_uport->dma_rx_channel, NULL, 1);
- }
- }
- /* tx ready interrupt */
- if (isr_status & UARTDM_ISR_TX_READY_BMSK) {
- /* Clear TX Ready */
- msm_hs_write(uport, UARTDM_CR_ADDR, CLEAR_TX_READY);
-
- if (msm_uport->clk_state == MSM_HS_CLK_REQUEST_OFF) {
- msm_uport->imr_reg |= UARTDM_ISR_TXLEV_BMSK;
- msm_hs_write(uport, UARTDM_IMR_ADDR,
- msm_uport->imr_reg);
- }
-
- /* Complete DMA TX transactions and submit new transactions */
- tx_buf->tail = (tx_buf->tail + tx->tx_count) & ~UART_XMIT_SIZE;
-
- tx->dma_in_flight = 0;
-
- uport->icount.tx += tx->tx_count;
- if (tx->tx_ready_int_en)
- msm_hs_submit_tx_locked(uport);
-
- if (uart_circ_chars_pending(tx_buf) < WAKEUP_CHARS)
- uart_write_wakeup(uport);
- }
- if (isr_status & UARTDM_ISR_TXLEV_BMSK) {
- /* TX FIFO is empty */
- msm_uport->imr_reg &= ~UARTDM_ISR_TXLEV_BMSK;
- msm_hs_write(uport, UARTDM_IMR_ADDR, msm_uport->imr_reg);
- if (!msm_hs_check_clock_off_locked(uport))
- hrtimer_start(&msm_uport->clk_off_timer,
- msm_uport->clk_off_delay,
- HRTIMER_MODE_REL);
- }
-
- /* Change in CTS interrupt */
- if (isr_status & UARTDM_ISR_DELTA_CTS_BMSK)
- msm_hs_handle_delta_cts_locked(uport);
-
- spin_unlock_irqrestore(&uport->lock, flags);
-
- return IRQ_HANDLED;
-}
-
-void msm_hs_request_clock_off_locked(struct uart_port *uport)
-{
- struct msm_hs_port *msm_uport = UARTDM_TO_MSM(uport);
-
- if (msm_uport->clk_state == MSM_HS_CLK_ON) {
- msm_uport->clk_state = MSM_HS_CLK_REQUEST_OFF;
- msm_uport->clk_req_off_state = CLK_REQ_OFF_START;
- if (!use_low_power_rx_wakeup(msm_uport))
- set_rfr_locked(uport, 0);
- msm_uport->imr_reg |= UARTDM_ISR_TXLEV_BMSK;
- msm_hs_write(uport, UARTDM_IMR_ADDR, msm_uport->imr_reg);
- }
-}
-
-/**
- * msm_hs_request_clock_off - request to (i.e. asynchronously) turn off uart
- * clock once pending TX is flushed and Rx DMA command is terminated.
- * @uport: uart_port structure for the device instance.
- *
- * This function puts the device into a partially active low power mode. It
- * waits to complete all pending tx transactions, flushes ongoing Rx DMA
- * command and terminates UART side Rx transaction, puts UART HW in non DMA
- * mode and then clocks off the device. A client calls this when no UART
- * data is expected. msm_request_clock_on() must be called before any further
- * UART data can be sent or received.
- */
-void msm_hs_request_clock_off(struct uart_port *uport)
-{
- unsigned long flags;
-
- spin_lock_irqsave(&uport->lock, flags);
- msm_hs_request_clock_off_locked(uport);
- spin_unlock_irqrestore(&uport->lock, flags);
-}
-
-void msm_hs_request_clock_on_locked(struct uart_port *uport)
-{
- struct msm_hs_port *msm_uport = UARTDM_TO_MSM(uport);
- unsigned int data;
-
- switch (msm_uport->clk_state) {
- case MSM_HS_CLK_OFF:
- clk_enable(msm_uport->clk);
- disable_irq_nosync(msm_uport->rx_wakeup.irq);
- /* fall-through */
- case MSM_HS_CLK_REQUEST_OFF:
- if (msm_uport->rx.flush == FLUSH_STOP ||
- msm_uport->rx.flush == FLUSH_SHUTDOWN) {
- msm_hs_write(uport, UARTDM_CR_ADDR, RESET_RX);
- data = msm_hs_read(uport, UARTDM_DMEN_ADDR);
- data |= UARTDM_RX_DM_EN_BMSK;
- msm_hs_write(uport, UARTDM_DMEN_ADDR, data);
- }
- hrtimer_try_to_cancel(&msm_uport->clk_off_timer);
- if (msm_uport->rx.flush == FLUSH_SHUTDOWN)
- msm_hs_start_rx_locked(uport);
- if (!use_low_power_rx_wakeup(msm_uport))
- set_rfr_locked(uport, 1);
- if (msm_uport->rx.flush == FLUSH_STOP)
- msm_uport->rx.flush = FLUSH_IGNORE;
- msm_uport->clk_state = MSM_HS_CLK_ON;
- break;
- case MSM_HS_CLK_ON:
- break;
- case MSM_HS_CLK_PORT_OFF:
- break;
- }
-}
-
-/**
- * msm_hs_request_clock_on - Switch the device from partially active low
- * power mode to fully active (i.e. clock on) mode.
- * @uport: uart_port structure for the device.
- *
- * This function switches on the input clock, puts UART HW into DMA mode
- * and enqueues an Rx DMA command if the device was in partially active
- * mode. It has no effect if called with the device in inactive state.
- */
-void msm_hs_request_clock_on(struct uart_port *uport)
-{
- unsigned long flags;
-
- spin_lock_irqsave(&uport->lock, flags);
- msm_hs_request_clock_on_locked(uport);
- spin_unlock_irqrestore(&uport->lock, flags);
-}
-
-static irqreturn_t msm_hs_rx_wakeup_isr(int irq, void *dev)
-{
- unsigned int wakeup = 0;
- unsigned long flags;
- struct msm_hs_port *msm_uport = dev;
- struct uart_port *uport = &msm_uport->uport;
-
- spin_lock_irqsave(&uport->lock, flags);
- if (msm_uport->clk_state == MSM_HS_CLK_OFF) {
- /* ignore the first irq - it is a pending irq that occurred
- * before enable_irq() */
- if (msm_uport->rx_wakeup.ignore)
- msm_uport->rx_wakeup.ignore = 0;
- else
- wakeup = 1;
- }
-
- if (wakeup) {
- /* the uart was clocked off during an rx, wake up and
- * optionally inject char into tty rx */
- msm_hs_request_clock_on_locked(uport);
- if (msm_uport->rx_wakeup.inject_rx) {
- tty_insert_flip_char(&uport->state->port,
- msm_uport->rx_wakeup.rx_to_inject,
- TTY_NORMAL);
- queue_work(msm_hs_workqueue, &msm_uport->rx.tty_work);
- }
- }
-
- spin_unlock_irqrestore(&uport->lock, flags);
-
- return IRQ_HANDLED;
-}
-
-static const char *msm_hs_type(struct uart_port *port)
-{
- return (port->type == PORT_MSM) ? "MSM_HS_UART" : NULL;
-}
-
-/* Called when port is opened */
-static int msm_hs_startup(struct uart_port *uport)
-{
- int ret;
- int rfr_level;
- unsigned long flags;
- unsigned int data;
- struct msm_hs_port *msm_uport = UARTDM_TO_MSM(uport);
- struct circ_buf *tx_buf = &uport->state->xmit;
- struct msm_hs_tx *tx = &msm_uport->tx;
- struct msm_hs_rx *rx = &msm_uport->rx;
-
- rfr_level = uport->fifosize;
- if (rfr_level > 16)
- rfr_level -= 16;
-
- tx->dma_base = dma_map_single(uport->dev, tx_buf->buf, UART_XMIT_SIZE,
- DMA_TO_DEVICE);
-
- /* do not let tty layer execute RX in global workqueue, use a
- * dedicated workqueue managed by this driver */
- uport->state->port.low_latency = 1;
-
- /* turn on uart clk */
- ret = msm_hs_init_clk_locked(uport);
- if (unlikely(ret)) {
- printk(KERN_ERR "Turning uartclk failed!\n");
- goto err_msm_hs_init_clk;
- }
-
- /* Set auto RFR Level */
- data = msm_hs_read(uport, UARTDM_MR1_ADDR);
- data &= ~UARTDM_MR1_AUTO_RFR_LEVEL1_BMSK;
- data &= ~UARTDM_MR1_AUTO_RFR_LEVEL0_BMSK;
- data |= (UARTDM_MR1_AUTO_RFR_LEVEL1_BMSK & (rfr_level << 2));
- data |= (UARTDM_MR1_AUTO_RFR_LEVEL0_BMSK & rfr_level);
- msm_hs_write(uport, UARTDM_MR1_ADDR, data);
-
- /* Make sure RXSTALE count is non-zero */
- data = msm_hs_read(uport, UARTDM_IPR_ADDR);
- if (!data) {
- data |= 0x1f & UARTDM_IPR_STALE_LSB_BMSK;
- msm_hs_write(uport, UARTDM_IPR_ADDR, data);
- }
-
- /* Enable Data Mover Mode */
- data = UARTDM_TX_DM_EN_BMSK | UARTDM_RX_DM_EN_BMSK;
- msm_hs_write(uport, UARTDM_DMEN_ADDR, data);
-
- /* Reset TX */
- msm_hs_write(uport, UARTDM_CR_ADDR, RESET_TX);
- msm_hs_write(uport, UARTDM_CR_ADDR, RESET_RX);
- msm_hs_write(uport, UARTDM_CR_ADDR, RESET_ERROR_STATUS);
- msm_hs_write(uport, UARTDM_CR_ADDR, RESET_BREAK_INT);
- msm_hs_write(uport, UARTDM_CR_ADDR, RESET_STALE_INT);
- msm_hs_write(uport, UARTDM_CR_ADDR, RESET_CTS);
- msm_hs_write(uport, UARTDM_CR_ADDR, RFR_LOW);
- /* Turn on Uart Receiver */
- msm_hs_write(uport, UARTDM_CR_ADDR, UARTDM_CR_RX_EN_BMSK);
-
- /* Turn on Uart Transmitter */
- msm_hs_write(uport, UARTDM_CR_ADDR, UARTDM_CR_TX_EN_BMSK);
-
- /* Initialize the tx */
- tx->tx_ready_int_en = 0;
- tx->dma_in_flight = 0;
-
- tx->xfer.complete_func = msm_hs_dmov_tx_callback;
- tx->xfer.execute_func = NULL;
-
- tx->command_ptr->cmd = CMD_LC |
- CMD_DST_CRCI(msm_uport->dma_tx_crci) | CMD_MODE_BOX;
-
- tx->command_ptr->src_dst_len = (MSM_UARTDM_BURST_SIZE << 16)
- | (MSM_UARTDM_BURST_SIZE);
-
- tx->command_ptr->row_offset = (MSM_UARTDM_BURST_SIZE << 16);
-
- tx->command_ptr->dst_row_addr =
- msm_uport->uport.mapbase + UARTDM_TF_ADDR;
-
-
- /* Turn on Uart Receive */
- rx->xfer.complete_func = msm_hs_dmov_rx_callback;
- rx->xfer.execute_func = NULL;
-
- rx->command_ptr->cmd = CMD_LC |
- CMD_SRC_CRCI(msm_uport->dma_rx_crci) | CMD_MODE_BOX;
-
- rx->command_ptr->src_dst_len = (MSM_UARTDM_BURST_SIZE << 16)
- | (MSM_UARTDM_BURST_SIZE);
- rx->command_ptr->row_offset = MSM_UARTDM_BURST_SIZE;
- rx->command_ptr->src_row_addr = uport->mapbase + UARTDM_RF_ADDR;
-
-
- msm_uport->imr_reg |= UARTDM_ISR_RXSTALE_BMSK;
- /* Enable reading the current CTS, no harm even if CTS is ignored */
- msm_uport->imr_reg |= UARTDM_ISR_CURRENT_CTS_BMSK;
-
- msm_hs_write(uport, UARTDM_TFWR_ADDR, 0); /* TXLEV on empty TX fifo */
-
-
- ret = request_irq(uport->irq, msm_hs_isr, IRQF_TRIGGER_HIGH,
- "msm_hs_uart", msm_uport);
- if (unlikely(ret)) {
- printk(KERN_ERR "Request msm_hs_uart IRQ failed!\n");
- goto err_request_irq;
- }
- if (use_low_power_rx_wakeup(msm_uport)) {
- ret = request_irq(msm_uport->rx_wakeup.irq,
- msm_hs_rx_wakeup_isr,
- IRQF_TRIGGER_FALLING,
- "msm_hs_rx_wakeup", msm_uport);
- if (unlikely(ret)) {
- printk(KERN_ERR "Request msm_hs_rx_wakeup IRQ failed!\n");
- free_irq(uport->irq, msm_uport);
- goto err_request_irq;
- }
- disable_irq(msm_uport->rx_wakeup.irq);
- }
-
- spin_lock_irqsave(&uport->lock, flags);
-
- msm_hs_write(uport, UARTDM_RFWR_ADDR, 0);
- msm_hs_start_rx_locked(uport);
-
- spin_unlock_irqrestore(&uport->lock, flags);
- ret = pm_runtime_set_active(uport->dev);
- if (ret)
- dev_err(uport->dev, "set active error:%d\n", ret);
- pm_runtime_enable(uport->dev);
-
- return 0;
-
-err_request_irq:
-err_msm_hs_init_clk:
- dma_unmap_single(uport->dev, tx->dma_base,
- UART_XMIT_SIZE, DMA_TO_DEVICE);
- return ret;
-}
-
-/* Initialize tx and rx data structures */
-static int uartdm_init_port(struct uart_port *uport)
-{
- int ret = 0;
- struct msm_hs_port *msm_uport = UARTDM_TO_MSM(uport);
- struct msm_hs_tx *tx = &msm_uport->tx;
- struct msm_hs_rx *rx = &msm_uport->rx;
-
- /* Allocate the command pointer. Needs to be 64 bit aligned */
- tx->command_ptr = kmalloc(sizeof(dmov_box), GFP_KERNEL | __GFP_DMA);
- if (!tx->command_ptr)
- return -ENOMEM;
-
- tx->command_ptr_ptr = kmalloc(sizeof(u32), GFP_KERNEL | __GFP_DMA);
- if (!tx->command_ptr_ptr) {
- ret = -ENOMEM;
- goto err_tx_command_ptr_ptr;
- }
-
- tx->mapped_cmd_ptr = dma_map_single(uport->dev, tx->command_ptr,
- sizeof(dmov_box), DMA_TO_DEVICE);
- tx->mapped_cmd_ptr_ptr = dma_map_single(uport->dev,
- tx->command_ptr_ptr,
- sizeof(u32), DMA_TO_DEVICE);
- tx->xfer.cmdptr = DMOV_CMD_ADDR(tx->mapped_cmd_ptr_ptr);
-
- init_waitqueue_head(&rx->wait);
-
- rx->pool = dma_pool_create("rx_buffer_pool", uport->dev,
- UARTDM_RX_BUF_SIZE, 16, 0);
- if (!rx->pool) {
- pr_err("%s(): cannot allocate rx_buffer_pool", __func__);
- ret = -ENOMEM;
- goto err_dma_pool_create;
- }
-
- rx->buffer = dma_pool_alloc(rx->pool, GFP_KERNEL, &rx->rbuffer);
- if (!rx->buffer) {
- pr_err("%s(): cannot allocate rx->buffer", __func__);
- ret = -ENOMEM;
- goto err_dma_pool_alloc;
- }
-
- /* Allocate the command pointer. Needs to be 64 bit aligned */
- rx->command_ptr = kmalloc(sizeof(dmov_box), GFP_KERNEL | __GFP_DMA);
- if (!rx->command_ptr) {
- pr_err("%s(): cannot allocate rx->command_ptr", __func__);
- ret = -ENOMEM;
- goto err_rx_command_ptr;
- }
-
- rx->command_ptr_ptr = kmalloc(sizeof(u32), GFP_KERNEL | __GFP_DMA);
- if (!rx->command_ptr_ptr) {
- pr_err("%s(): cannot allocate rx->command_ptr_ptr", __func__);
- ret = -ENOMEM;
- goto err_rx_command_ptr_ptr;
- }
-
- rx->command_ptr->num_rows = ((UARTDM_RX_BUF_SIZE >> 4) << 16) |
- (UARTDM_RX_BUF_SIZE >> 4);
-
- rx->command_ptr->dst_row_addr = rx->rbuffer;
-
- rx->mapped_cmd_ptr = dma_map_single(uport->dev, rx->command_ptr,
- sizeof(dmov_box), DMA_TO_DEVICE);
-
- *rx->command_ptr_ptr = CMD_PTR_LP | DMOV_CMD_ADDR(rx->mapped_cmd_ptr);
-
- rx->cmdptr_dmaaddr = dma_map_single(uport->dev, rx->command_ptr_ptr,
- sizeof(u32), DMA_TO_DEVICE);
- rx->xfer.cmdptr = DMOV_CMD_ADDR(rx->cmdptr_dmaaddr);
-
- INIT_WORK(&rx->tty_work, msm_hs_tty_flip_buffer_work);
-
- return ret;
-
-err_rx_command_ptr_ptr:
- kfree(rx->command_ptr);
-err_rx_command_ptr:
- dma_pool_free(msm_uport->rx.pool, msm_uport->rx.buffer,
- msm_uport->rx.rbuffer);
-err_dma_pool_alloc:
- dma_pool_destroy(msm_uport->rx.pool);
-err_dma_pool_create:
- dma_unmap_single(uport->dev, msm_uport->tx.mapped_cmd_ptr_ptr,
- sizeof(u32), DMA_TO_DEVICE);
- dma_unmap_single(uport->dev, msm_uport->tx.mapped_cmd_ptr,
- sizeof(dmov_box), DMA_TO_DEVICE);
- kfree(msm_uport->tx.command_ptr_ptr);
-err_tx_command_ptr_ptr:
- kfree(msm_uport->tx.command_ptr);
- return ret;
-}
-
-static int msm_hs_probe(struct platform_device *pdev)
-{
- int ret;
- struct uart_port *uport;
- struct msm_hs_port *msm_uport;
- struct resource *resource;
- const struct msm_serial_hs_platform_data *pdata =
- dev_get_platdata(&pdev->dev);
-
- if (pdev->id < 0 || pdev->id >= UARTDM_NR) {
- printk(KERN_ERR "Invalid plaform device ID = %d\n", pdev->id);
- return -EINVAL;
- }
-
- msm_uport = &q_uart_port[pdev->id];
- uport = &msm_uport->uport;
-
- uport->dev = &pdev->dev;
-
- resource = platform_get_resource(pdev, IORESOURCE_MEM, 0);
- if (unlikely(!resource))
- return -ENXIO;
-
- uport->mapbase = resource->start;
- uport->irq = platform_get_irq(pdev, 0);
- if (unlikely(uport->irq < 0))
- return -ENXIO;
-
- if (unlikely(irq_set_irq_wake(uport->irq, 1)))
- return -ENXIO;
-
- if (pdata == NULL || pdata->rx_wakeup_irq < 0)
- msm_uport->rx_wakeup.irq = -1;
- else {
- msm_uport->rx_wakeup.irq = pdata->rx_wakeup_irq;
- msm_uport->rx_wakeup.ignore = 1;
- msm_uport->rx_wakeup.inject_rx = pdata->inject_rx_on_wakeup;
- msm_uport->rx_wakeup.rx_to_inject = pdata->rx_to_inject;
-
- if (unlikely(msm_uport->rx_wakeup.irq < 0))
- return -ENXIO;
-
- if (unlikely(irq_set_irq_wake(msm_uport->rx_wakeup.irq, 1)))
- return -ENXIO;
- }
-
- if (pdata == NULL)
- msm_uport->exit_lpm_cb = NULL;
- else
- msm_uport->exit_lpm_cb = pdata->exit_lpm_cb;
-
- resource = platform_get_resource_byname(pdev, IORESOURCE_DMA,
- "uartdm_channels");
- if (unlikely(!resource))
- return -ENXIO;
-
- msm_uport->dma_tx_channel = resource->start;
- msm_uport->dma_rx_channel = resource->end;
-
- resource = platform_get_resource_byname(pdev, IORESOURCE_DMA,
- "uartdm_crci");
- if (unlikely(!resource))
- return -ENXIO;
-
- msm_uport->dma_tx_crci = resource->start;
- msm_uport->dma_rx_crci = resource->end;
-
- uport->iotype = UPIO_MEM;
- uport->fifosize = UART_FIFOSIZE;
- uport->ops = &msm_hs_ops;
- uport->flags = UPF_BOOT_AUTOCONF;
- uport->uartclk = UARTCLK;
- msm_uport->imr_reg = 0x0;
- msm_uport->clk = clk_get(&pdev->dev, "uartdm_clk");
- if (IS_ERR(msm_uport->clk))
- return PTR_ERR(msm_uport->clk);
-
- ret = uartdm_init_port(uport);
- if (unlikely(ret))
- return ret;
-
- msm_uport->clk_state = MSM_HS_CLK_PORT_OFF;
- hrtimer_init(&msm_uport->clk_off_timer, CLOCK_MONOTONIC,
- HRTIMER_MODE_REL);
- msm_uport->clk_off_timer.function = msm_hs_clk_off_retry;
- msm_uport->clk_off_delay = ktime_set(0, 1000000); /* 1ms */
-
- uport->line = pdev->id;
- return uart_add_one_port(&msm_hs_driver, uport);
-}
-
-static int __init msm_serial_hs_init(void)
-{
- int ret, i;
-
- /* Init all UARTS as non-configured */
- for (i = 0; i < UARTDM_NR; i++)
- q_uart_port[i].uport.type = PORT_UNKNOWN;
-
- msm_hs_workqueue = create_singlethread_workqueue("msm_serial_hs");
- if (unlikely(!msm_hs_workqueue))
- return -ENOMEM;
-
- ret = uart_register_driver(&msm_hs_driver);
- if (unlikely(ret)) {
- printk(KERN_ERR "%s failed to load\n", __func__);
- goto err_uart_register_driver;
- }
-
- ret = platform_driver_register(&msm_serial_hs_platform_driver);
- if (ret) {
- printk(KERN_ERR "%s failed to load\n", __func__);
- goto err_platform_driver_register;
- }
-
- return ret;
-
-err_platform_driver_register:
- uart_unregister_driver(&msm_hs_driver);
-err_uart_register_driver:
- destroy_workqueue(msm_hs_workqueue);
- return ret;
-}
-module_init(msm_serial_hs_init);
-
-/*
- * Called by the upper layer when port is closed.
- * - Disables the port
- * - Unhook the ISR
- */
-static void msm_hs_shutdown(struct uart_port *uport)
-{
- unsigned long flags;
- struct msm_hs_port *msm_uport = UARTDM_TO_MSM(uport);
-
- BUG_ON(msm_uport->rx.flush < FLUSH_STOP);
-
- spin_lock_irqsave(&uport->lock, flags);
- clk_enable(msm_uport->clk);
-
- /* Disable the transmitter */
- msm_hs_write(uport, UARTDM_CR_ADDR, UARTDM_CR_TX_DISABLE_BMSK);
- /* Disable the receiver */
- msm_hs_write(uport, UARTDM_CR_ADDR, UARTDM_CR_RX_DISABLE_BMSK);
-
- pm_runtime_disable(uport->dev);
- pm_runtime_set_suspended(uport->dev);
-
- /* Free the interrupt */
- free_irq(uport->irq, msm_uport);
- if (use_low_power_rx_wakeup(msm_uport))
- free_irq(msm_uport->rx_wakeup.irq, msm_uport);
-
- msm_uport->imr_reg = 0;
- msm_hs_write(uport, UARTDM_IMR_ADDR, msm_uport->imr_reg);
-
- wait_event(msm_uport->rx.wait, msm_uport->rx.flush == FLUSH_SHUTDOWN);
-
- clk_disable(msm_uport->clk); /* to balance local clk_enable() */
- if (msm_uport->clk_state != MSM_HS_CLK_OFF)
- clk_disable(msm_uport->clk); /* to balance clk_state */
- msm_uport->clk_state = MSM_HS_CLK_PORT_OFF;
-
- dma_unmap_single(uport->dev, msm_uport->tx.dma_base,
- UART_XMIT_SIZE, DMA_TO_DEVICE);
-
- spin_unlock_irqrestore(&uport->lock, flags);
-
- if (cancel_work_sync(&msm_uport->rx.tty_work))
- msm_hs_tty_flip_buffer_work(&msm_uport->rx.tty_work);
-}
-
-static void __exit msm_serial_hs_exit(void)
-{
- flush_workqueue(msm_hs_workqueue);
- destroy_workqueue(msm_hs_workqueue);
- platform_driver_unregister(&msm_serial_hs_platform_driver);
- uart_unregister_driver(&msm_hs_driver);
-}
-module_exit(msm_serial_hs_exit);
-
-#ifdef CONFIG_PM
-static int msm_hs_runtime_idle(struct device *dev)
-{
- /*
- * returning success from idle results in runtime suspend being
- * called
- */
- return 0;
-}
-
-static int msm_hs_runtime_resume(struct device *dev)
-{
- struct platform_device *pdev = container_of(dev, struct
- platform_device, dev);
- struct msm_hs_port *msm_uport = &q_uart_port[pdev->id];
-
- msm_hs_request_clock_on(&msm_uport->uport);
- return 0;
-}
-
-static int msm_hs_runtime_suspend(struct device *dev)
-{
- struct platform_device *pdev = container_of(dev, struct
- platform_device, dev);
- struct msm_hs_port *msm_uport = &q_uart_port[pdev->id];
-
- msm_hs_request_clock_off(&msm_uport->uport);
- return 0;
-}
-#else
-#define msm_hs_runtime_idle NULL
-#define msm_hs_runtime_resume NULL
-#define msm_hs_runtime_suspend NULL
-#endif
-
-static const struct dev_pm_ops msm_hs_dev_pm_ops = {
- .runtime_suspend = msm_hs_runtime_suspend,
- .runtime_resume = msm_hs_runtime_resume,
- .runtime_idle = msm_hs_runtime_idle,
-};
-
-static struct platform_driver msm_serial_hs_platform_driver = {
- .probe = msm_hs_probe,
- .remove = msm_hs_remove,
- .driver = {
- .name = "msm_serial_hs",
- .pm = &msm_hs_dev_pm_ops,
- },
-};
-
-static struct uart_driver msm_hs_driver = {
- .owner = THIS_MODULE,
- .driver_name = "msm_serial_hs",
- .dev_name = "ttyHS",
- .nr = UARTDM_NR,
- .cons = 0,
-};
-
-static struct uart_ops msm_hs_ops = {
- .tx_empty = msm_hs_tx_empty,
- .set_mctrl = msm_hs_set_mctrl_locked,
- .get_mctrl = msm_hs_get_mctrl_locked,
- .stop_tx = msm_hs_stop_tx_locked,
- .start_tx = msm_hs_start_tx_locked,
- .stop_rx = msm_hs_stop_rx_locked,
- .enable_ms = msm_hs_enable_ms_locked,
- .break_ctl = msm_hs_break_ctl,
- .startup = msm_hs_startup,
- .shutdown = msm_hs_shutdown,
- .set_termios = msm_hs_set_termios,
- .pm = msm_hs_pm,
- .type = msm_hs_type,
- .config_port = msm_hs_config_port,
- .release_port = msm_hs_release_port,
- .request_port = msm_hs_request_port,
-};
-
-MODULE_DESCRIPTION("High Speed UART Driver for the MSM chipset");
-MODULE_VERSION("1.2");
-MODULE_LICENSE("GPL v2");
diff --git a/drivers/tty/serial/mxs-auart.c b/drivers/tty/serial/mxs-auart.c
index d1298b6..f7e5825 100644
--- a/drivers/tty/serial/mxs-auart.c
+++ b/drivers/tty/serial/mxs-auart.c
@@ -176,7 +176,7 @@
};
MODULE_DEVICE_TABLE(platform, mxs_auart_devtype);
-static struct of_device_id mxs_auart_dt_ids[] = {
+static const struct of_device_id mxs_auart_dt_ids[] = {
{
.compatible = "fsl,imx28-auart",
.data = &mxs_auart_devtype[IMX28_AUART]
@@ -1155,14 +1155,14 @@
return 0;
}
-static bool mxs_auart_init_gpios(struct mxs_auart_port *s, struct device *dev)
+static int mxs_auart_init_gpios(struct mxs_auart_port *s, struct device *dev)
{
enum mctrl_gpio_idx i;
struct gpio_desc *gpiod;
s->gpios = mctrl_gpio_init(dev, 0);
- if (IS_ERR_OR_NULL(s->gpios))
- return false;
+ if (IS_ERR(s->gpios))
+ return PTR_ERR(s->gpios);
/* Block (enabled before) DMA option if RTS or CTS is GPIO line */
if (!RTS_AT_AUART() || !CTS_AT_AUART()) {
@@ -1180,7 +1180,7 @@
s->gpio_irq[i] = -EINVAL;
}
- return true;
+ return 0;
}
static void mxs_auart_free_gpio_irq(struct mxs_auart_port *s)
@@ -1276,9 +1276,11 @@
platform_set_drvdata(pdev, s);
- if (!mxs_auart_init_gpios(s, &pdev->dev))
- dev_err(&pdev->dev,
- "Failed to initialize GPIOs. The serial port may not work as expected\n");
+ ret = mxs_auart_init_gpios(s, &pdev->dev);
+ if (ret) {
+ dev_err(&pdev->dev, "Failed to initialize GPIOs.\n");
+ return ret;
+ }
/*
* Get the GPIO lines IRQ
diff --git a/drivers/tty/serial/of_serial.c b/drivers/tty/serial/of_serial.c
index 33fb94f..aa00154 100644
--- a/drivers/tty/serial/of_serial.c
+++ b/drivers/tty/serial/of_serial.c
@@ -89,6 +89,7 @@
spin_lock_init(&port->lock);
port->mapbase = resource.start;
+ port->mapsize = resource_size(&resource);
/* Check for shifted address mapping */
if (of_property_read_u32(np, "reg-offset", &prop) == 0)
@@ -155,7 +156,7 @@
/*
* Try to register a serial port
*/
-static struct of_device_id of_platform_serial_table[];
+static const struct of_device_id of_platform_serial_table[];
static int of_platform_serial_probe(struct platform_device *ofdev)
{
const struct of_device_id *match;
@@ -320,7 +321,7 @@
/*
* A few common types, add more as needed.
*/
-static struct of_device_id of_platform_serial_table[] = {
+static const struct of_device_id of_platform_serial_table[] = {
{ .compatible = "ns8250", .data = (void *)PORT_8250, },
{ .compatible = "ns16450", .data = (void *)PORT_16450, },
{ .compatible = "ns16550a", .data = (void *)PORT_16550A, },
diff --git a/drivers/tty/serial/omap-serial.c b/drivers/tty/serial/omap-serial.c
index 10256fa..211479a 100644
--- a/drivers/tty/serial/omap-serial.c
+++ b/drivers/tty/serial/omap-serial.c
@@ -1654,11 +1654,6 @@
up->port.type = PORT_OMAP;
up->port.iotype = UPIO_MEM;
up->port.irq = uartirq;
- up->wakeirq = wakeirq;
- if (!up->wakeirq)
- dev_info(up->port.dev, "no wakeirq for uart%d\n",
- up->port.line);
-
up->port.regshift = 2;
up->port.fifosize = 64;
up->port.ops = &serial_omap_pops;
@@ -1682,6 +1677,11 @@
goto err_port_line;
}
+ up->wakeirq = wakeirq;
+ if (!up->wakeirq)
+ dev_info(up->port.dev, "no wakeirq for uart%d\n",
+ up->port.line);
+
ret = serial_omap_probe_rs485(up, pdev->dev.of_node);
if (ret < 0)
goto err_rs485;
diff --git a/drivers/tty/serial/pmac_zilog.c b/drivers/tty/serial/pmac_zilog.c
index 8f51579..e156e39 100644
--- a/drivers/tty/serial/pmac_zilog.c
+++ b/drivers/tty/serial/pmac_zilog.c
@@ -1846,7 +1846,7 @@
#ifdef CONFIG_PPC_PMAC
-static struct of_device_id pmz_match[] =
+static const struct of_device_id pmz_match[] =
{
{
.name = "ch-a",
diff --git a/drivers/tty/serial/pxa.c b/drivers/tty/serial/pxa.c
index d5d0626..9becba6 100644
--- a/drivers/tty/serial/pxa.c
+++ b/drivers/tty/serial/pxa.c
@@ -824,7 +824,7 @@
};
#endif
-static struct of_device_id serial_pxa_dt_ids[] = {
+static const struct of_device_id serial_pxa_dt_ids[] = {
{ .compatible = "mrvl,pxa-uart", },
{ .compatible = "mrvl,mmp-uart", },
{}
diff --git a/drivers/tty/serial/sc16is7xx.c b/drivers/tty/serial/sc16is7xx.c
index df9a384..468354e 100644
--- a/drivers/tty/serial/sc16is7xx.c
+++ b/drivers/tty/serial/sc16is7xx.c
@@ -829,16 +829,32 @@
}
static int sc16is7xx_config_rs485(struct uart_port *port,
- struct serial_rs485 *rs485)
+ struct serial_rs485 *rs485)
{
- if (port->rs485.flags & SER_RS485_ENABLED)
- sc16is7xx_port_update(port, SC16IS7XX_EFCR_REG,
- SC16IS7XX_EFCR_AUTO_RS485_BIT,
- SC16IS7XX_EFCR_AUTO_RS485_BIT);
- else
- sc16is7xx_port_update(port, SC16IS7XX_EFCR_REG,
- SC16IS7XX_EFCR_AUTO_RS485_BIT,
- 0);
+ const u32 mask = SC16IS7XX_EFCR_AUTO_RS485_BIT |
+ SC16IS7XX_EFCR_RTS_INVERT_BIT;
+ u32 efcr = 0;
+
+ if (rs485->flags & SER_RS485_ENABLED) {
+ bool rts_during_rx, rts_during_tx;
+
+ rts_during_rx = rs485->flags & SER_RS485_RTS_AFTER_SEND;
+ rts_during_tx = rs485->flags & SER_RS485_RTS_ON_SEND;
+
+ efcr |= SC16IS7XX_EFCR_AUTO_RS485_BIT;
+
+ if (!rts_during_rx && rts_during_tx)
+ /* default */;
+ else if (rts_during_rx && !rts_during_tx)
+ efcr |= SC16IS7XX_EFCR_RTS_INVERT_BIT;
+ else
+ dev_err(port->dev,
+ "unsupported RTS signalling on_send:%d after_send:%d - exactly one of RS485 RTS flags should be set\n",
+ rts_during_tx, rts_during_rx);
+ }
+
+ sc16is7xx_port_update(port, SC16IS7XX_EFCR_REG, mask, efcr);
+
port->rs485 = *rs485;
return 0;
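The rewritten sc16is7xx_config_rs485() maps the generic serial_rs485 RTS flags onto the EFCR auto-RS485 and RTS-invert bits: RTS asserted only while sending is the hardware default, RTS asserted only between transmissions selects the invert bit, and any other combination is reported as unsupported. A minimal userspace sketch of picking the default polarity through the standard TIOCSRS485 ioctl (assuming fd is an already-open tty for this device; illustrative only, not part of the patch):

#include <sys/ioctl.h>
#include <linux/serial.h>

static int enable_rs485(int fd)
{
	struct serial_rs485 rs485 = { 0 };

	/* assert RTS while sending, release it after the transmission */
	rs485.flags = SER_RS485_ENABLED | SER_RS485_RTS_ON_SEND;
	return ioctl(fd, TIOCSRS485, &rs485);	/* < 0 on error, errno set */
}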
@@ -903,9 +919,11 @@
/* Disable all interrupts */
sc16is7xx_port_write(port, SC16IS7XX_IER_REG, 0);
/* Disable TX/RX */
- sc16is7xx_port_write(port, SC16IS7XX_EFCR_REG,
- SC16IS7XX_EFCR_RXDISABLE_BIT |
- SC16IS7XX_EFCR_TXDISABLE_BIT);
+ sc16is7xx_port_update(port, SC16IS7XX_EFCR_REG,
+ SC16IS7XX_EFCR_RXDISABLE_BIT |
+ SC16IS7XX_EFCR_TXDISABLE_BIT,
+ SC16IS7XX_EFCR_RXDISABLE_BIT |
+ SC16IS7XX_EFCR_TXDISABLE_BIT);
sc16is7xx_power(port, 0);
}
@@ -1048,6 +1066,7 @@
else
return PTR_ERR(s->clk);
} else {
+ clk_prepare_enable(s->clk);
freq = clk_get_rate(s->clk);
}
@@ -1120,6 +1139,9 @@
if (!ret)
return 0;
+ for (i = 0; i < s->uart.nr; i++)
+ uart_remove_one_port(&s->uart, &s->p[i].port);
+
mutex_destroy(&s->mutex);
#ifdef CONFIG_GPIOLIB
diff --git a/drivers/tty/serial/serial-tegra.c b/drivers/tty/serial/serial-tegra.c
index 48e6e41..1d5ea39 100644
--- a/drivers/tty/serial/serial-tegra.c
+++ b/drivers/tty/serial/serial-tegra.c
@@ -1251,7 +1251,7 @@
.support_clk_src_div = true,
};
-static struct of_device_id tegra_uart_of_match[] = {
+static const struct of_device_id tegra_uart_of_match[] = {
{
.compatible = "nvidia,tegra30-hsuart",
.data = &tegra30_uart_chip_data,
diff --git a/drivers/tty/serial/serial_core.c b/drivers/tty/serial/serial_core.c
index 6a1055a..eb5b03b 100644
--- a/drivers/tty/serial/serial_core.c
+++ b/drivers/tty/serial/serial_core.c
@@ -1118,8 +1118,7 @@
cprev = cnow;
}
-
- current->state = TASK_RUNNING;
+ __set_current_state(TASK_RUNNING);
remove_wait_queue(&port->delta_msr_wait, &wait);
return ret;
@@ -1766,7 +1765,7 @@
#endif
#if defined(CONFIG_SERIAL_CORE_CONSOLE) || defined(CONFIG_CONSOLE_POLL)
-/*
+/**
* uart_console_write - write a console message to a serial port
* @port: the port to write the message
* @s: array of characters
@@ -1810,6 +1809,52 @@
}
/**
+ * uart_parse_earlycon - Parse earlycon options
+ * @p: ptr to 2nd field (ie., just beyond '<name>,')
+ * @iotype: ptr for decoded iotype (out)
+ * @addr: ptr for decoded mapbase/iobase (out)
+ * @options: ptr for <options> field; NULL if not present (out)
+ *
+ * Decodes earlycon kernel command line parameters of the form
+ * earlycon=<name>,io|mmio|mmio32,<addr>,<options>
+ * console=<name>,io|mmio|mmio32,<addr>,<options>
+ *
+ * The optional form
+ * earlycon=<name>,0x<addr>,<options>
+ * console=<name>,0x<addr>,<options>
+ * is also accepted; the returned @iotype will be UPIO_MEM.
+ *
+ * Returns 0 on success or -EINVAL on failure
+ */
+int uart_parse_earlycon(char *p, unsigned char *iotype, unsigned long *addr,
+ char **options)
+{
+ if (strncmp(p, "mmio,", 5) == 0) {
+ *iotype = UPIO_MEM;
+ p += 5;
+ } else if (strncmp(p, "mmio32,", 7) == 0) {
+ *iotype = UPIO_MEM32;
+ p += 7;
+ } else if (strncmp(p, "io,", 3) == 0) {
+ *iotype = UPIO_PORT;
+ p += 3;
+ } else if (strncmp(p, "0x", 2) == 0) {
+ *iotype = UPIO_MEM;
+ } else {
+ return -EINVAL;
+ }
+
+ *addr = simple_strtoul(p, NULL, 0);
+ p = strchr(p, ',');
+ if (p)
+ p++;
+
+ *options = p;
+ return 0;
+}
+EXPORT_SYMBOL_GPL(uart_parse_earlycon);
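As a rough usage sketch (the address and option string here are invented for illustration, not taken from the patch), parsing the remainder of an "earlycon=<name>,mmio32,0xfe215040,115200n8" entry would give:

	unsigned char iotype;
	unsigned long addr;
	char *options;
	char buf[] = "mmio32,0xfe215040,115200n8";

	if (uart_parse_earlycon(buf, &iotype, &addr, &options) == 0) {
		/* iotype == UPIO_MEM32, addr == 0xfe215040, options == "115200n8" */
	}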
+
+/**
* uart_parse_options - Parse serial port baud/parity/bits/flow control.
* @options: pointer to option string
* @baud: pointer to an 'int' variable for the baud rate.
@@ -2637,6 +2682,7 @@
state->pm_state = UART_PM_STATE_UNDEFINED;
uport->cons = drv->cons;
+ uport->minor = drv->tty_driver->minor_start + uport->line;
/*
* If this port is a console, then the spinlock is already
diff --git a/drivers/tty/serial/serial_mctrl_gpio.c b/drivers/tty/serial/serial_mctrl_gpio.c
index a38596c..0ec756c 100644
--- a/drivers/tty/serial/serial_mctrl_gpio.c
+++ b/drivers/tty/serial/serial_mctrl_gpio.c
@@ -48,9 +48,6 @@
int value_array[UART_GPIO_MAX];
unsigned int count = 0;
- if (IS_ERR_OR_NULL(gpios))
- return;
-
for (i = 0; i < UART_GPIO_MAX; i++)
if (!IS_ERR_OR_NULL(gpios->gpio[i]) &&
mctrl_gpios_desc[i].dir_out) {
@@ -65,10 +62,7 @@
struct gpio_desc *mctrl_gpio_to_gpiod(struct mctrl_gpios *gpios,
enum mctrl_gpio_idx gidx)
{
- if (!IS_ERR_OR_NULL(gpios) && !IS_ERR_OR_NULL(gpios->gpio[gidx]))
- return gpios->gpio[gidx];
- else
- return NULL;
+ return gpios->gpio[gidx];
}
EXPORT_SYMBOL_GPL(mctrl_gpio_to_gpiod);
@@ -76,15 +70,8 @@
{
enum mctrl_gpio_idx i;
- /*
- * return it unchanged if the structure is not allocated
- */
- if (IS_ERR_OR_NULL(gpios))
- return *mctrl;
-
for (i = 0; i < UART_GPIO_MAX; i++) {
- if (!IS_ERR_OR_NULL(gpios->gpio[i]) &&
- !mctrl_gpios_desc[i].dir_out) {
+ if (gpios->gpio[i] && !mctrl_gpios_desc[i].dir_out) {
if (gpiod_get_value(gpios->gpio[i]))
*mctrl |= mctrl_gpios_desc[i].mctrl;
else
@@ -100,34 +87,26 @@
{
struct mctrl_gpios *gpios;
enum mctrl_gpio_idx i;
- int err;
gpios = devm_kzalloc(dev, sizeof(*gpios), GFP_KERNEL);
if (!gpios)
return ERR_PTR(-ENOMEM);
for (i = 0; i < UART_GPIO_MAX; i++) {
- gpios->gpio[i] = devm_gpiod_get_index(dev,
- mctrl_gpios_desc[i].name,
- idx);
-
- /*
- * The GPIOs are maybe not all filled,
- * this is not an error.
- */
- if (IS_ERR_OR_NULL(gpios->gpio[i]))
- continue;
+ enum gpiod_flags flags;
if (mctrl_gpios_desc[i].dir_out)
- err = gpiod_direction_output(gpios->gpio[i], 0);
+ flags = GPIOD_OUT_LOW;
else
- err = gpiod_direction_input(gpios->gpio[i]);
- if (err) {
- dev_dbg(dev, "Unable to set direction for %s GPIO",
- mctrl_gpios_desc[i].name);
- devm_gpiod_put(dev, gpios->gpio[i]);
- gpios->gpio[i] = NULL;
- }
+ flags = GPIOD_IN;
+
+ gpios->gpio[i] =
+ devm_gpiod_get_index_optional(dev,
+ mctrl_gpios_desc[i].name,
+ idx, flags);
+
+ if (IS_ERR(gpios->gpio[i]))
+ return ERR_CAST(gpios->gpio[i]);
}
return gpios;
@@ -138,9 +117,6 @@
{
enum mctrl_gpio_idx i;
- if (IS_ERR_OR_NULL(gpios))
- return;
-
for (i = 0; i < UART_GPIO_MAX; i++)
if (!IS_ERR_OR_NULL(gpios->gpio[i]))
devm_gpiod_put(dev, gpios->gpio[i]);
diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c
index 5b50c79..e7d6566 100644
--- a/drivers/tty/serial/sh-sci.c
+++ b/drivers/tty/serial/sh-sci.c
@@ -844,14 +844,32 @@
struct tty_port *tport = &port->state->port;
struct sci_port *s = to_sci_port(port);
struct plat_sci_reg *reg;
- int copied = 0;
+ int copied = 0, offset;
+ u16 status, bit;
- reg = sci_getreg(port, SCLSR);
+ switch (port->type) {
+ case PORT_SCIF:
+ case PORT_HSCIF:
+ offset = SCLSR;
+ break;
+ case PORT_SCIFA:
+ case PORT_SCIFB:
+ offset = SCxSR;
+ break;
+ default:
+ return 0;
+ }
+
+ reg = sci_getreg(port, offset);
if (!reg->size)
return 0;
- if ((serial_port_in(port, SCLSR) & (1 << s->overrun_bit))) {
- serial_port_out(port, SCLSR, 0);
+ status = serial_port_in(port, offset);
+ bit = 1 << s->overrun_bit;
+
+ if (status & bit) {
+ status &= ~bit;
+ serial_port_out(port, offset, status);
port->icount.overrun++;
@@ -996,16 +1014,24 @@
static irqreturn_t sci_mpxed_interrupt(int irq, void *ptr)
{
- unsigned short ssr_status, scr_status, err_enabled;
- unsigned short slr_status = 0;
+ unsigned short ssr_status, scr_status, err_enabled, orer_status = 0;
struct uart_port *port = ptr;
struct sci_port *s = to_sci_port(port);
irqreturn_t ret = IRQ_NONE;
ssr_status = serial_port_in(port, SCxSR);
scr_status = serial_port_in(port, SCSCR);
- if (port->type == PORT_SCIF || port->type == PORT_HSCIF)
- slr_status = serial_port_in(port, SCLSR);
+ switch (port->type) {
+ case PORT_SCIF:
+ case PORT_HSCIF:
+ orer_status = serial_port_in(port, SCLSR);
+ break;
+ case PORT_SCIFA:
+ case PORT_SCIFB:
+ orer_status = ssr_status;
+ break;
+ }
+
err_enabled = scr_status & port_rx_irq_mask(port);
/* Tx Interrupt */
@@ -1033,10 +1059,8 @@
ret = sci_br_interrupt(irq, ptr);
/* Overrun Interrupt */
- if (port->type == PORT_SCIF || port->type == PORT_HSCIF) {
- if (slr_status & 0x01)
- sci_handle_fifo_overrun(port);
- }
+ if (orer_status & (1 << s->overrun_bit))
+ sci_handle_fifo_overrun(port);
return ret;
}
@@ -1967,18 +1991,40 @@
#ifdef CONFIG_SERIAL_SH_SCI_DMA
/*
- * Calculate delay for 1.5 DMA buffers: see
- * drivers/serial/serial_core.c::uart_update_timeout(). With 10 bits
- * (CS8), 250Hz, 115200 baud and 64 bytes FIFO, the above function
+ * Calculate delay for 2 DMA buffers (4 FIFO).
+ * See drivers/serial/serial_core.c::uart_update_timeout(). With 10
+ * bits (CS8), 250Hz, 115200 baud and 64 bytes FIFO, the above function
* calculates 1 jiffie for the data plus 5 jiffies for the "slop(e)."
- * Then below we calculate 3 jiffies (12ms) for 1.5 DMA buffers (3 FIFO
- * sizes), but it has been found out experimentally, that this is not
- * enough: the driver too often needlessly runs on a DMA timeout. 20ms
- * as a minimum seem to work perfectly.
+ * Then below we calculate 5 jiffies (20ms) for 2 DMA buffers (4 FIFO
+ * sizes), but when performing a faster transfer the value obtained by
+ * this formula may not be enough. Therefore, if the value is smaller
+ * than 20 msec, 20 msec is used as the DMA timeout.
*/
if (s->chan_rx) {
- s->rx_timeout = (port->timeout - HZ / 50) * s->buf_len_rx * 3 /
- port->fifosize / 2;
+ unsigned int bits;
+
+ /* byte size and parity */
+ switch (termios->c_cflag & CSIZE) {
+ case CS5:
+ bits = 7;
+ break;
+ case CS6:
+ bits = 8;
+ break;
+ case CS7:
+ bits = 9;
+ break;
+ default:
+ bits = 10;
+ break;
+ }
+
+ if (termios->c_cflag & CSTOPB)
+ bits++;
+ if (termios->c_cflag & PARENB)
+ bits++;
+ s->rx_timeout = DIV_ROUND_UP((s->buf_len_rx * 2 * bits * HZ) /
+ (baud / 10), 10);
dev_dbg(port->dev, "DMA Rx t-out %ums, tty t-out %u jiffies\n",
s->rx_timeout * 1000 / HZ, port->timeout);
if (s->rx_timeout < msecs_to_jiffies(20))
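To put rough numbers on the new formula (the buffer length is an assumed value, not taken from the patch): with 8N1 framing the switch above gives bits = 10, so for baud = 115200, HZ = 250 and a buf_len_rx of 64 bytes the expression works out to DIV_ROUND_UP((64 * 2 * 10 * 250) / (115200 / 10), 10) = DIV_ROUND_UP(320000 / 11520, 10) = DIV_ROUND_UP(27, 10) = 3 jiffies, about 12 ms at HZ = 250. That is below the 20 ms floor, so the check immediately above would raise the timeout to msecs_to_jiffies(20), as the comment describes.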
diff --git a/drivers/tty/serial/sirfsoc_uart.c b/drivers/tty/serial/sirfsoc_uart.c
index 27ed0e9..9de3eab 100644
--- a/drivers/tty/serial/sirfsoc_uart.c
+++ b/drivers/tty/serial/sirfsoc_uart.c
@@ -1269,7 +1269,7 @@
#endif
};
-static struct of_device_id sirfsoc_uart_ids[] = {
+static const struct of_device_id sirfsoc_uart_ids[] = {
{ .compatible = "sirf,prima2-uart", .data = &sirfsoc_uart,},
{ .compatible = "sirf,atlas7-uart", .data = &sirfsoc_uart},
{ .compatible = "sirf,prima2-usp-uart", .data = &sirfsoc_usp},
diff --git a/drivers/tty/serial/sprd_serial.c b/drivers/tty/serial/sprd_serial.c
index bca975f..582d272 100644
--- a/drivers/tty/serial/sprd_serial.c
+++ b/drivers/tty/serial/sprd_serial.c
@@ -493,6 +493,8 @@
return -EINVAL;
if (port->irq != ser->irq)
return -EINVAL;
+ if (port->iotype != ser->io_type)
+ return -EINVAL;
return 0;
}
@@ -707,7 +709,7 @@
up->dev = &pdev->dev;
up->line = index;
up->type = PORT_SPRD;
- up->iotype = SERIAL_IO_PORT;
+ up->iotype = UPIO_MEM;
up->uartclk = SPRD_DEF_RATE;
up->fifosize = SPRD_FIFO_SIZE;
up->ops = &serial_sprd_ops;
@@ -754,6 +756,7 @@
return ret;
}
+#ifdef CONFIG_PM_SLEEP
static int sprd_suspend(struct device *dev)
{
struct sprd_uart_port *sup = dev_get_drvdata(dev);
@@ -771,6 +774,7 @@
return 0;
}
+#endif
static SIMPLE_DEV_PM_OPS(sprd_pm_ops, sprd_suspend, sprd_resume);
diff --git a/drivers/tty/serial/st-asc.c b/drivers/tty/serial/st-asc.c
index 712b03a..d625664 100644
--- a/drivers/tty/serial/st-asc.c
+++ b/drivers/tty/serial/st-asc.c
@@ -720,7 +720,7 @@
}
#ifdef CONFIG_OF
-static struct of_device_id asc_match[] = {
+static const struct of_device_id asc_match[] = {
{ .compatible = "st,asc", },
{},
};
diff --git a/drivers/tty/serial/uartlite.c b/drivers/tty/serial/uartlite.c
index 189f52e..708eead 100644
--- a/drivers/tty/serial/uartlite.c
+++ b/drivers/tty/serial/uartlite.c
@@ -622,7 +622,7 @@
#if defined(CONFIG_OF)
/* Match table for of_platform binding */
-static struct of_device_id ulite_of_match[] = {
+static const struct of_device_id ulite_of_match[] = {
{ .compatible = "xlnx,opb-uartlite-1.00.b", },
{ .compatible = "xlnx,xps-uartlite-1.00.a", },
{}
diff --git a/drivers/tty/serial/ucc_uart.c b/drivers/tty/serial/ucc_uart.c
index 14d10fc..7d2532b 100644
--- a/drivers/tty/serial/ucc_uart.c
+++ b/drivers/tty/serial/ucc_uart.c
@@ -1473,7 +1473,7 @@
return 0;
}
-static struct of_device_id ucc_uart_match[] = {
+static const struct of_device_id ucc_uart_match[] = {
{
.type = "serial",
.compatible = "ucc_uart",
diff --git a/drivers/tty/serial/xilinx_uartps.c b/drivers/tty/serial/xilinx_uartps.c
index cff531a..f218ec6 100644
--- a/drivers/tty/serial/xilinx_uartps.c
+++ b/drivers/tty/serial/xilinx_uartps.c
@@ -37,10 +37,7 @@
#define CDNS_UART_MINOR 0 /* works best with devtmpfs */
#define CDNS_UART_NR_PORTS 2
#define CDNS_UART_FIFO_SIZE 64 /* FIFO size */
-#define CDNS_UART_REGISTER_SPACE 0xFFF
-
-#define cdns_uart_readl(offset) ioread32(port->membase + offset)
-#define cdns_uart_writel(val, offset) iowrite32(val, port->membase + offset)
+#define CDNS_UART_REGISTER_SPACE 0x1000
/* Rx Trigger level */
static int rx_trigger_level = 56;
@@ -195,7 +192,7 @@
/* Read the interrupt status register to determine which
* interrupt(s) is/are active.
*/
- isrstatus = cdns_uart_readl(CDNS_UART_ISR_OFFSET);
+ isrstatus = readl(port->membase + CDNS_UART_ISR_OFFSET);
/*
* There is no hardware break detection, so we interpret framing
@@ -203,14 +200,15 @@
* there's another non-zero byte at the end of the sequence.
*/
if (isrstatus & CDNS_UART_IXR_FRAMING) {
- while (!(cdns_uart_readl(CDNS_UART_SR_OFFSET) &
+ while (!(readl(port->membase + CDNS_UART_SR_OFFSET) &
CDNS_UART_SR_RXEMPTY)) {
- if (!cdns_uart_readl(CDNS_UART_FIFO_OFFSET)) {
+ if (!readl(port->membase + CDNS_UART_FIFO_OFFSET)) {
port->read_status_mask |= CDNS_UART_IXR_BRK;
isrstatus &= ~CDNS_UART_IXR_FRAMING;
}
}
- cdns_uart_writel(CDNS_UART_IXR_FRAMING, CDNS_UART_ISR_OFFSET);
+ writel(CDNS_UART_IXR_FRAMING,
+ port->membase + CDNS_UART_ISR_OFFSET);
}
/* drop byte with parity error if IGNPAR specified */
@@ -223,9 +221,9 @@
if ((isrstatus & CDNS_UART_IXR_TOUT) ||
(isrstatus & CDNS_UART_IXR_RXTRIG)) {
/* Receive Timeout Interrupt */
- while ((cdns_uart_readl(CDNS_UART_SR_OFFSET) &
- CDNS_UART_SR_RXEMPTY) != CDNS_UART_SR_RXEMPTY) {
- data = cdns_uart_readl(CDNS_UART_FIFO_OFFSET);
+ while (!(readl(port->membase + CDNS_UART_SR_OFFSET) &
+ CDNS_UART_SR_RXEMPTY)) {
+ data = readl(port->membase + CDNS_UART_FIFO_OFFSET);
/* Non-NULL byte after BREAK is garbage (99%) */
if (data && (port->read_status_mask &
@@ -275,8 +273,8 @@
/* Dispatch an appropriate handler */
if ((isrstatus & CDNS_UART_IXR_TXEMPTY) == CDNS_UART_IXR_TXEMPTY) {
if (uart_circ_empty(&port->state->xmit)) {
- cdns_uart_writel(CDNS_UART_IXR_TXEMPTY,
- CDNS_UART_IDR_OFFSET);
+ writel(CDNS_UART_IXR_TXEMPTY,
+ port->membase + CDNS_UART_IDR_OFFSET);
} else {
numbytes = port->fifosize;
/* Break if no more data available in the UART buffer */
@@ -287,9 +285,9 @@
* and write it to the cdns_uart's TX_FIFO
* register.
*/
- cdns_uart_writel(
- port->state->xmit.buf[port->state->xmit.
- tail], CDNS_UART_FIFO_OFFSET);
+ writel(port->state->xmit.buf[
+ port->state->xmit.tail],
+ port->membase + CDNS_UART_FIFO_OFFSET);
port->icount.tx++;
@@ -307,7 +305,7 @@
}
}
- cdns_uart_writel(isrstatus, CDNS_UART_ISR_OFFSET);
+ writel(isrstatus, port->membase + CDNS_UART_ISR_OFFSET);
/* be sure to release the lock and tty before leaving */
spin_unlock_irqrestore(&port->lock, flags);
@@ -397,14 +395,14 @@
&div8);
/* Write new divisors to hardware */
- mreg = cdns_uart_readl(CDNS_UART_MR_OFFSET);
+ mreg = readl(port->membase + CDNS_UART_MR_OFFSET);
if (div8)
mreg |= CDNS_UART_MR_CLKSEL;
else
mreg &= ~CDNS_UART_MR_CLKSEL;
- cdns_uart_writel(mreg, CDNS_UART_MR_OFFSET);
- cdns_uart_writel(cd, CDNS_UART_BAUDGEN_OFFSET);
- cdns_uart_writel(bdiv, CDNS_UART_BAUDDIV_OFFSET);
+ writel(mreg, port->membase + CDNS_UART_MR_OFFSET);
+ writel(cd, port->membase + CDNS_UART_BAUDGEN_OFFSET);
+ writel(bdiv, port->membase + CDNS_UART_BAUDDIV_OFFSET);
cdns_uart->baud = baud;
return calc_baud;
@@ -451,9 +449,9 @@
spin_lock_irqsave(&cdns_uart->port->lock, flags);
/* Disable the TX and RX to set baud rate */
- ctrl_reg = cdns_uart_readl(CDNS_UART_CR_OFFSET);
+ ctrl_reg = readl(port->membase + CDNS_UART_CR_OFFSET);
ctrl_reg |= CDNS_UART_CR_TX_DIS | CDNS_UART_CR_RX_DIS;
- cdns_uart_writel(ctrl_reg, CDNS_UART_CR_OFFSET);
+ writel(ctrl_reg, port->membase + CDNS_UART_CR_OFFSET);
spin_unlock_irqrestore(&cdns_uart->port->lock, flags);
@@ -478,11 +476,11 @@
spin_lock_irqsave(&cdns_uart->port->lock, flags);
/* Set TX/RX Reset */
- ctrl_reg = cdns_uart_readl(CDNS_UART_CR_OFFSET);
+ ctrl_reg = readl(port->membase + CDNS_UART_CR_OFFSET);
ctrl_reg |= CDNS_UART_CR_TXRST | CDNS_UART_CR_RXRST;
- cdns_uart_writel(ctrl_reg, CDNS_UART_CR_OFFSET);
+ writel(ctrl_reg, port->membase + CDNS_UART_CR_OFFSET);
- while (cdns_uart_readl(CDNS_UART_CR_OFFSET) &
+ while (readl(port->membase + CDNS_UART_CR_OFFSET) &
(CDNS_UART_CR_TXRST | CDNS_UART_CR_RXRST))
cpu_relax();
@@ -491,11 +489,11 @@
* enable bit and RX enable bit to enable the transmitter and
* receiver.
*/
- cdns_uart_writel(rx_timeout, CDNS_UART_RXTOUT_OFFSET);
- ctrl_reg = cdns_uart_readl(CDNS_UART_CR_OFFSET);
+ writel(rx_timeout, port->membase + CDNS_UART_RXTOUT_OFFSET);
+ ctrl_reg = readl(port->membase + CDNS_UART_CR_OFFSET);
ctrl_reg &= ~(CDNS_UART_CR_TX_DIS | CDNS_UART_CR_RX_DIS);
ctrl_reg |= CDNS_UART_CR_TX_EN | CDNS_UART_CR_RX_EN;
- cdns_uart_writel(ctrl_reg, CDNS_UART_CR_OFFSET);
+ writel(ctrl_reg, port->membase + CDNS_UART_CR_OFFSET);
spin_unlock_irqrestore(&cdns_uart->port->lock, flags);
@@ -517,14 +515,14 @@
if (uart_circ_empty(&port->state->xmit) || uart_tx_stopped(port))
return;
- status = cdns_uart_readl(CDNS_UART_CR_OFFSET);
+ status = readl(port->membase + CDNS_UART_CR_OFFSET);
/* Set the TX enable bit and clear the TX disable bit to enable the
* transmitter.
*/
- cdns_uart_writel((status & ~CDNS_UART_CR_TX_DIS) | CDNS_UART_CR_TX_EN,
- CDNS_UART_CR_OFFSET);
+ writel((status & ~CDNS_UART_CR_TX_DIS) | CDNS_UART_CR_TX_EN,
+ port->membase + CDNS_UART_CR_OFFSET);
- while (numbytes-- && ((cdns_uart_readl(CDNS_UART_SR_OFFSET) &
+ while (numbytes-- && ((readl(port->membase + CDNS_UART_SR_OFFSET) &
CDNS_UART_SR_TXFULL)) != CDNS_UART_SR_TXFULL) {
/* Break if no more data available in the UART buffer */
if (uart_circ_empty(&port->state->xmit))
@@ -533,9 +531,8 @@
/* Get the data from the UART circular buffer and
* write it to the cdns_uart's TX_FIFO register.
*/
- cdns_uart_writel(
- port->state->xmit.buf[port->state->xmit.tail],
- CDNS_UART_FIFO_OFFSET);
+ writel(port->state->xmit.buf[port->state->xmit.tail],
+ port->membase + CDNS_UART_FIFO_OFFSET);
port->icount.tx++;
/* Adjust the tail of the UART buffer and wrap
@@ -544,9 +541,9 @@
port->state->xmit.tail = (port->state->xmit.tail + 1) &
(UART_XMIT_SIZE - 1);
}
- cdns_uart_writel(CDNS_UART_IXR_TXEMPTY, CDNS_UART_ISR_OFFSET);
+ writel(CDNS_UART_IXR_TXEMPTY, port->membase + CDNS_UART_ISR_OFFSET);
/* Enable the TX Empty interrupt */
- cdns_uart_writel(CDNS_UART_IXR_TXEMPTY, CDNS_UART_IER_OFFSET);
+ writel(CDNS_UART_IXR_TXEMPTY, port->membase + CDNS_UART_IER_OFFSET);
if (uart_circ_chars_pending(&port->state->xmit) < WAKEUP_CHARS)
uart_write_wakeup(port);
@@ -560,10 +557,10 @@
{
unsigned int regval;
- regval = cdns_uart_readl(CDNS_UART_CR_OFFSET);
+ regval = readl(port->membase + CDNS_UART_CR_OFFSET);
regval |= CDNS_UART_CR_TX_DIS;
/* Disable the transmitter */
- cdns_uart_writel(regval, CDNS_UART_CR_OFFSET);
+ writel(regval, port->membase + CDNS_UART_CR_OFFSET);
}
/**
@@ -574,10 +571,10 @@
{
unsigned int regval;
- regval = cdns_uart_readl(CDNS_UART_CR_OFFSET);
+ regval = readl(port->membase + CDNS_UART_CR_OFFSET);
regval |= CDNS_UART_CR_RX_DIS;
/* Disable the receiver */
- cdns_uart_writel(regval, CDNS_UART_CR_OFFSET);
+ writel(regval, port->membase + CDNS_UART_CR_OFFSET);
}
/**
@@ -590,7 +587,8 @@
{
unsigned int status;
- status = cdns_uart_readl(CDNS_UART_SR_OFFSET) & CDNS_UART_SR_TXEMPTY;
+ status = readl(port->membase + CDNS_UART_SR_OFFSET) &
+ CDNS_UART_SR_TXEMPTY;
return status ? TIOCSER_TEMT : 0;
}
@@ -607,15 +605,15 @@
spin_lock_irqsave(&port->lock, flags);
- status = cdns_uart_readl(CDNS_UART_CR_OFFSET);
+ status = readl(port->membase + CDNS_UART_CR_OFFSET);
if (ctl == -1)
- cdns_uart_writel(CDNS_UART_CR_STARTBRK | status,
- CDNS_UART_CR_OFFSET);
+ writel(CDNS_UART_CR_STARTBRK | status,
+ port->membase + CDNS_UART_CR_OFFSET);
else {
if ((status & CDNS_UART_CR_STOPBRK) == 0)
- cdns_uart_writel(CDNS_UART_CR_STOPBRK | status,
- CDNS_UART_CR_OFFSET);
+ writel(CDNS_UART_CR_STOPBRK | status,
+ port->membase + CDNS_UART_CR_OFFSET);
}
spin_unlock_irqrestore(&port->lock, flags);
}
@@ -638,17 +636,18 @@
spin_lock_irqsave(&port->lock, flags);
/* Wait for the transmit FIFO to empty before making changes */
- if (!(cdns_uart_readl(CDNS_UART_CR_OFFSET) & CDNS_UART_CR_TX_DIS)) {
- while (!(cdns_uart_readl(CDNS_UART_SR_OFFSET) &
+ if (!(readl(port->membase + CDNS_UART_CR_OFFSET) &
+ CDNS_UART_CR_TX_DIS)) {
+ while (!(readl(port->membase + CDNS_UART_SR_OFFSET) &
CDNS_UART_SR_TXEMPTY)) {
cpu_relax();
}
}
/* Disable the TX and RX to set baud rate */
- ctrl_reg = cdns_uart_readl(CDNS_UART_CR_OFFSET);
+ ctrl_reg = readl(port->membase + CDNS_UART_CR_OFFSET);
ctrl_reg |= CDNS_UART_CR_TX_DIS | CDNS_UART_CR_RX_DIS;
- cdns_uart_writel(ctrl_reg, CDNS_UART_CR_OFFSET);
+ writel(ctrl_reg, port->membase + CDNS_UART_CR_OFFSET);
/*
* Min baud rate = 6bps and Max Baud Rate is 10Mbps for 100Mhz clk
@@ -667,20 +666,20 @@
uart_update_timeout(port, termios->c_cflag, baud);
/* Set TX/RX Reset */
- ctrl_reg = cdns_uart_readl(CDNS_UART_CR_OFFSET);
+ ctrl_reg = readl(port->membase + CDNS_UART_CR_OFFSET);
ctrl_reg |= CDNS_UART_CR_TXRST | CDNS_UART_CR_RXRST;
- cdns_uart_writel(ctrl_reg, CDNS_UART_CR_OFFSET);
+ writel(ctrl_reg, port->membase + CDNS_UART_CR_OFFSET);
/*
* Clear the RX disable and TX disable bits and then set the TX enable
* bit and RX enable bit to enable the transmitter and receiver.
*/
- ctrl_reg = cdns_uart_readl(CDNS_UART_CR_OFFSET);
+ ctrl_reg = readl(port->membase + CDNS_UART_CR_OFFSET);
ctrl_reg &= ~(CDNS_UART_CR_TX_DIS | CDNS_UART_CR_RX_DIS);
ctrl_reg |= CDNS_UART_CR_TX_EN | CDNS_UART_CR_RX_EN;
- cdns_uart_writel(ctrl_reg, CDNS_UART_CR_OFFSET);
+ writel(ctrl_reg, port->membase + CDNS_UART_CR_OFFSET);
- cdns_uart_writel(rx_timeout, CDNS_UART_RXTOUT_OFFSET);
+ writel(rx_timeout, port->membase + CDNS_UART_RXTOUT_OFFSET);
port->read_status_mask = CDNS_UART_IXR_TXEMPTY | CDNS_UART_IXR_RXTRIG |
CDNS_UART_IXR_OVERRUN | CDNS_UART_IXR_TOUT;
@@ -700,7 +699,7 @@
CDNS_UART_IXR_TOUT | CDNS_UART_IXR_PARITY |
CDNS_UART_IXR_FRAMING | CDNS_UART_IXR_OVERRUN;
- mode_reg = cdns_uart_readl(CDNS_UART_MR_OFFSET);
+ mode_reg = readl(port->membase + CDNS_UART_MR_OFFSET);
/* Handling Data Size */
switch (termios->c_cflag & CSIZE) {
@@ -741,7 +740,7 @@
cval |= CDNS_UART_MR_PARITY_NONE;
}
cval |= mode_reg & 1;
- cdns_uart_writel(cval, CDNS_UART_MR_OFFSET);
+ writel(cval, port->membase + CDNS_UART_MR_OFFSET);
spin_unlock_irqrestore(&port->lock, flags);
}
@@ -762,52 +761,53 @@
return retval;
/* Disable the TX and RX */
- cdns_uart_writel(CDNS_UART_CR_TX_DIS | CDNS_UART_CR_RX_DIS,
- CDNS_UART_CR_OFFSET);
+ writel(CDNS_UART_CR_TX_DIS | CDNS_UART_CR_RX_DIS,
+ port->membase + CDNS_UART_CR_OFFSET);
/* Set the Control Register with TX/RX Enable, TX/RX Reset,
* no break chars.
*/
- cdns_uart_writel(CDNS_UART_CR_TXRST | CDNS_UART_CR_RXRST,
- CDNS_UART_CR_OFFSET);
+ writel(CDNS_UART_CR_TXRST | CDNS_UART_CR_RXRST,
+ port->membase + CDNS_UART_CR_OFFSET);
- status = cdns_uart_readl(CDNS_UART_CR_OFFSET);
+ status = readl(port->membase + CDNS_UART_CR_OFFSET);
/* Clear the RX disable and TX disable bits and then set the TX enable
* bit and RX enable bit to enable the transmitter and receiver.
*/
- cdns_uart_writel((status & ~(CDNS_UART_CR_TX_DIS | CDNS_UART_CR_RX_DIS))
+ writel((status & ~(CDNS_UART_CR_TX_DIS | CDNS_UART_CR_RX_DIS))
| (CDNS_UART_CR_TX_EN | CDNS_UART_CR_RX_EN |
- CDNS_UART_CR_STOPBRK), CDNS_UART_CR_OFFSET);
+ CDNS_UART_CR_STOPBRK),
+ port->membase + CDNS_UART_CR_OFFSET);
/* Set the Mode Register with normal mode,8 data bits,1 stop bit,
* no parity.
*/
- cdns_uart_writel(CDNS_UART_MR_CHMODE_NORM | CDNS_UART_MR_STOPMODE_1_BIT
+ writel(CDNS_UART_MR_CHMODE_NORM | CDNS_UART_MR_STOPMODE_1_BIT
| CDNS_UART_MR_PARITY_NONE | CDNS_UART_MR_CHARLEN_8_BIT,
- CDNS_UART_MR_OFFSET);
+ port->membase + CDNS_UART_MR_OFFSET);
/*
* Set the RX FIFO Trigger level to use most of the FIFO, but it
* can be tuned with a module parameter
*/
- cdns_uart_writel(rx_trigger_level, CDNS_UART_RXWM_OFFSET);
+ writel(rx_trigger_level, port->membase + CDNS_UART_RXWM_OFFSET);
/*
* Receive Timeout register is enabled but it
* can be tuned with a module parameter
*/
- cdns_uart_writel(rx_timeout, CDNS_UART_RXTOUT_OFFSET);
+ writel(rx_timeout, port->membase + CDNS_UART_RXTOUT_OFFSET);
/* Clear out any pending interrupts before enabling them */
- cdns_uart_writel(cdns_uart_readl(CDNS_UART_ISR_OFFSET),
- CDNS_UART_ISR_OFFSET);
+ writel(readl(port->membase + CDNS_UART_ISR_OFFSET),
+ port->membase + CDNS_UART_ISR_OFFSET);
/* Set the Interrupt Registers with desired interrupts */
- cdns_uart_writel(CDNS_UART_IXR_TXEMPTY | CDNS_UART_IXR_PARITY |
+ writel(CDNS_UART_IXR_TXEMPTY | CDNS_UART_IXR_PARITY |
CDNS_UART_IXR_FRAMING | CDNS_UART_IXR_OVERRUN |
CDNS_UART_IXR_RXTRIG | CDNS_UART_IXR_TOUT,
- CDNS_UART_IER_OFFSET);
+ port->membase + CDNS_UART_IER_OFFSET);
return retval;
}
@@ -821,12 +821,12 @@
int status;
/* Disable interrupts */
- status = cdns_uart_readl(CDNS_UART_IMR_OFFSET);
- cdns_uart_writel(status, CDNS_UART_IDR_OFFSET);
+ status = readl(port->membase + CDNS_UART_IMR_OFFSET);
+ writel(status, port->membase + CDNS_UART_IDR_OFFSET);
/* Disable the TX and RX */
- cdns_uart_writel(CDNS_UART_CR_TX_DIS | CDNS_UART_CR_RX_DIS,
- CDNS_UART_CR_OFFSET);
+ writel(CDNS_UART_CR_TX_DIS | CDNS_UART_CR_RX_DIS,
+ port->membase + CDNS_UART_CR_OFFSET);
free_irq(port->irq, port);
}
@@ -928,7 +928,7 @@
{
u32 val;
- val = cdns_uart_readl(CDNS_UART_MODEMCR_OFFSET);
+ val = readl(port->membase + CDNS_UART_MODEMCR_OFFSET);
val &= ~(CDNS_UART_MODEMCR_RTS | CDNS_UART_MODEMCR_DTR);
@@ -937,7 +937,7 @@
if (mctrl & TIOCM_DTR)
val |= CDNS_UART_MODEMCR_DTR;
- cdns_uart_writel(val, CDNS_UART_MODEMCR_OFFSET);
+ writel(val, port->membase + CDNS_UART_MODEMCR_OFFSET);
}
#ifdef CONFIG_CONSOLE_POLL
@@ -947,17 +947,18 @@
int c;
/* Disable all interrupts */
- imr = cdns_uart_readl(CDNS_UART_IMR_OFFSET);
- cdns_uart_writel(imr, CDNS_UART_IDR_OFFSET);
+ imr = readl(port->membase + CDNS_UART_IMR_OFFSET);
+ writel(imr, port->membase + CDNS_UART_IDR_OFFSET);
/* Check if FIFO is empty */
- if (cdns_uart_readl(CDNS_UART_SR_OFFSET) & CDNS_UART_SR_RXEMPTY)
+ if (readl(port->membase + CDNS_UART_SR_OFFSET) & CDNS_UART_SR_RXEMPTY)
c = NO_POLL_CHAR;
else /* Read a character */
- c = (unsigned char) cdns_uart_readl(CDNS_UART_FIFO_OFFSET);
+ c = (unsigned char) readl(
+ port->membase + CDNS_UART_FIFO_OFFSET);
/* Enable interrupts */
- cdns_uart_writel(imr, CDNS_UART_IER_OFFSET);
+ writel(imr, port->membase + CDNS_UART_IER_OFFSET);
return c;
}
@@ -967,22 +968,24 @@
u32 imr;
/* Disable all interrupts */
- imr = cdns_uart_readl(CDNS_UART_IMR_OFFSET);
- cdns_uart_writel(imr, CDNS_UART_IDR_OFFSET);
+ imr = readl(port->membase + CDNS_UART_IMR_OFFSET);
+ writel(imr, port->membase + CDNS_UART_IDR_OFFSET);
/* Wait until FIFO is empty */
- while (!(cdns_uart_readl(CDNS_UART_SR_OFFSET) & CDNS_UART_SR_TXEMPTY))
+ while (!(readl(port->membase + CDNS_UART_SR_OFFSET) &
+ CDNS_UART_SR_TXEMPTY))
cpu_relax();
/* Write a character */
- cdns_uart_writel(c, CDNS_UART_FIFO_OFFSET);
+ writel(c, port->membase + CDNS_UART_FIFO_OFFSET);
/* Wait until FIFO is empty */
- while (!(cdns_uart_readl(CDNS_UART_SR_OFFSET) & CDNS_UART_SR_TXEMPTY))
+ while (!(readl(port->membase + CDNS_UART_SR_OFFSET) &
+ CDNS_UART_SR_TXEMPTY))
cpu_relax();
/* Enable interrupts */
- cdns_uart_writel(imr, CDNS_UART_IER_OFFSET);
+ writel(imr, port->membase + CDNS_UART_IER_OFFSET);
return;
}
@@ -1010,7 +1013,7 @@
#endif
};
-static struct uart_port cdns_uart_port[2];
+static struct uart_port cdns_uart_port[CDNS_UART_NR_PORTS];
/**
* cdns_uart_get_port - Configure the port from platform device resource info
@@ -1038,7 +1041,6 @@
/* At this point, we've got an empty uart_port struct, initialize it */
spin_lock_init(&port->lock);
port->membase = NULL;
- port->iobase = 1; /* mark port in use */
port->irq = 0;
port->type = PORT_UNKNOWN;
port->iotype = UPIO_MEM32;
@@ -1057,8 +1059,8 @@
*/
static void cdns_uart_console_wait_tx(struct uart_port *port)
{
- while ((cdns_uart_readl(CDNS_UART_SR_OFFSET) & CDNS_UART_SR_TXEMPTY)
- != CDNS_UART_SR_TXEMPTY)
+ while (!(readl(port->membase + CDNS_UART_SR_OFFSET) &
+ CDNS_UART_SR_TXEMPTY))
barrier();
}
@@ -1070,7 +1072,7 @@
static void cdns_uart_console_putchar(struct uart_port *port, int ch)
{
cdns_uart_console_wait_tx(port);
- cdns_uart_writel(ch, CDNS_UART_FIFO_OFFSET);
+ writel(ch, port->membase + CDNS_UART_FIFO_OFFSET);
}
static void cdns_early_write(struct console *con, const char *s, unsigned n)
@@ -1112,24 +1114,24 @@
spin_lock_irqsave(&port->lock, flags);
/* save and disable interrupt */
- imr = cdns_uart_readl(CDNS_UART_IMR_OFFSET);
- cdns_uart_writel(imr, CDNS_UART_IDR_OFFSET);
+ imr = readl(port->membase + CDNS_UART_IMR_OFFSET);
+ writel(imr, port->membase + CDNS_UART_IDR_OFFSET);
/*
* Make sure that the tx part is enabled. Set the TX enable bit and
* clear the TX disable bit to enable the transmitter.
*/
- ctrl = cdns_uart_readl(CDNS_UART_CR_OFFSET);
- cdns_uart_writel((ctrl & ~CDNS_UART_CR_TX_DIS) | CDNS_UART_CR_TX_EN,
- CDNS_UART_CR_OFFSET);
+ ctrl = readl(port->membase + CDNS_UART_CR_OFFSET);
+ writel((ctrl & ~CDNS_UART_CR_TX_DIS) | CDNS_UART_CR_TX_EN,
+ port->membase + CDNS_UART_CR_OFFSET);
uart_console_write(port, s, count, cdns_uart_console_putchar);
cdns_uart_console_wait_tx(port);
- cdns_uart_writel(ctrl, CDNS_UART_CR_OFFSET);
+ writel(ctrl, port->membase + CDNS_UART_CR_OFFSET);
/* restore interrupt state */
- cdns_uart_writel(imr, CDNS_UART_IER_OFFSET);
+ writel(imr, port->membase + CDNS_UART_IER_OFFSET);
if (locked)
spin_unlock_irqrestore(&port->lock, flags);
@@ -1153,8 +1155,9 @@
if (co->index < 0 || co->index >= CDNS_UART_NR_PORTS)
return -EINVAL;
- if (!port->mapbase) {
- pr_debug("console on ttyPS%i not present\n", co->index);
+ if (!port->membase) {
+ pr_debug("console on " CDNS_UART_TTY_NAME "%i not present\n",
+ co->index);
return -ENODEV;
}
@@ -1240,13 +1243,14 @@
spin_lock_irqsave(&port->lock, flags);
/* Empty the receive FIFO 1st before making changes */
- while (!(cdns_uart_readl(CDNS_UART_SR_OFFSET) &
+ while (!(readl(port->membase + CDNS_UART_SR_OFFSET) &
CDNS_UART_SR_RXEMPTY))
- cdns_uart_readl(CDNS_UART_FIFO_OFFSET);
+ readl(port->membase + CDNS_UART_FIFO_OFFSET);
/* set RX trigger level to 1 */
- cdns_uart_writel(1, CDNS_UART_RXWM_OFFSET);
+ writel(1, port->membase + CDNS_UART_RXWM_OFFSET);
/* disable RX timeout interrupts */
- cdns_uart_writel(CDNS_UART_IXR_TOUT, CDNS_UART_IDR_OFFSET);
+ writel(CDNS_UART_IXR_TOUT,
+ port->membase + CDNS_UART_IDR_OFFSET);
spin_unlock_irqrestore(&port->lock, flags);
}
@@ -1285,28 +1289,30 @@
spin_lock_irqsave(&port->lock, flags);
/* Set TX/RX Reset */
- ctrl_reg = cdns_uart_readl(CDNS_UART_CR_OFFSET);
+ ctrl_reg = readl(port->membase + CDNS_UART_CR_OFFSET);
ctrl_reg |= CDNS_UART_CR_TXRST | CDNS_UART_CR_RXRST;
- cdns_uart_writel(ctrl_reg, CDNS_UART_CR_OFFSET);
- while (cdns_uart_readl(CDNS_UART_CR_OFFSET) &
+ writel(ctrl_reg, port->membase + CDNS_UART_CR_OFFSET);
+ while (readl(port->membase + CDNS_UART_CR_OFFSET) &
(CDNS_UART_CR_TXRST | CDNS_UART_CR_RXRST))
cpu_relax();
/* restore rx timeout value */
- cdns_uart_writel(rx_timeout, CDNS_UART_RXTOUT_OFFSET);
+ writel(rx_timeout, port->membase + CDNS_UART_RXTOUT_OFFSET);
/* Enable Tx/Rx */
- ctrl_reg = cdns_uart_readl(CDNS_UART_CR_OFFSET);
+ ctrl_reg = readl(port->membase + CDNS_UART_CR_OFFSET);
ctrl_reg &= ~(CDNS_UART_CR_TX_DIS | CDNS_UART_CR_RX_DIS);
ctrl_reg |= CDNS_UART_CR_TX_EN | CDNS_UART_CR_RX_EN;
- cdns_uart_writel(ctrl_reg, CDNS_UART_CR_OFFSET);
+ writel(ctrl_reg, port->membase + CDNS_UART_CR_OFFSET);
spin_unlock_irqrestore(&port->lock, flags);
} else {
spin_lock_irqsave(&port->lock, flags);
/* restore original rx trigger level */
- cdns_uart_writel(rx_trigger_level, CDNS_UART_RXWM_OFFSET);
+ writel(rx_trigger_level,
+ port->membase + CDNS_UART_RXWM_OFFSET);
/* enable RX timeout interrupt */
- cdns_uart_writel(CDNS_UART_IXR_TOUT, CDNS_UART_IER_OFFSET);
+ writel(CDNS_UART_IXR_TOUT,
+ port->membase + CDNS_UART_IER_OFFSET);
spin_unlock_irqrestore(&port->lock, flags);
}
@@ -1458,7 +1464,7 @@
}
/* Match table for of_platform binding */
-static struct of_device_id cdns_uart_of_match[] = {
+static const struct of_device_id cdns_uart_of_match[] = {
{ .compatible = "xlnx,xuartps", },
{ .compatible = "cdns,uart-r1p8", },
{}
diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c
index 2bb4dfc..e569546 100644
--- a/drivers/tty/tty_io.c
+++ b/drivers/tty/tty_io.c
@@ -1025,11 +1025,17 @@
}
EXPORT_SYMBOL(start_tty);
-/* We limit tty time update visibility to every 8 seconds or so. */
static void tty_update_time(struct timespec *time)
{
unsigned long sec = get_seconds();
- if (abs(sec - time->tv_sec) & ~7)
+
+ /*
+ * We only care if the two values differ in anything other than the
+ * lower three bits (i.e. every 8 seconds). If so, then we can update
+ * the time of the tty device, otherwise it could be construed as a
+ * security leak to let userspace know the exact timing of the tty.
+ */
+ if ((sec ^ time->tv_sec) & ~7)
time->tv_sec = sec;
}
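In the new test the cached and current second counts are XORed and the low three bits are masked off, so the stored timestamp only advances when the two values differ outside the 8-second granularity; the old abs()-based variant mixed a signed difference with that mask. A minimal userspace sketch of the same masking check (the helper name and plain time_t fields are illustrative, not part of the patch): for example, 100 and 103 leave the cached value untouched, while 100 and 108 update it.

	#include <time.h>

	/* Refresh *cached only when it differs from now in any bit above
	 * the low three, i.e. beyond the 8-second granularity. */
	static void update_time_coarse(time_t *cached, time_t now)
	{
		if ((now ^ *cached) & ~7)
			*cached = now;
	}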
@@ -3593,6 +3599,13 @@
}
static DEVICE_ATTR(active, S_IRUGO, show_cons_active, NULL);
+static struct attribute *cons_dev_attrs[] = {
+ &dev_attr_active.attr,
+ NULL
+};
+
+ATTRIBUTE_GROUPS(cons_dev);
+
static struct device *consdev;
void console_sysfs_notify(void)
@@ -3617,12 +3630,11 @@
if (cdev_add(&console_cdev, MKDEV(TTYAUX_MAJOR, 1), 1) ||
register_chrdev_region(MKDEV(TTYAUX_MAJOR, 1), 1, "/dev/console") < 0)
panic("Couldn't register /dev/console driver\n");
- consdev = device_create(tty_class, NULL, MKDEV(TTYAUX_MAJOR, 1), NULL,
- "console");
+ consdev = device_create_with_groups(tty_class, NULL,
+ MKDEV(TTYAUX_MAJOR, 1), NULL,
+ cons_dev_groups, "console");
if (IS_ERR(consdev))
consdev = NULL;
- else
- WARN_ON(device_create_file(consdev, &dev_attr_active) < 0);
#ifdef CONFIG_VT
vty_init(&console_fops);
diff --git a/drivers/tty/vt/vt.c b/drivers/tty/vt/vt.c
index 6e00572..4a24eb2 100644
--- a/drivers/tty/vt/vt.c
+++ b/drivers/tty/vt/vt.c
@@ -1237,7 +1237,7 @@
struct rgb { u8 r; u8 g; u8 b; };
-struct rgb rgb_from_256(int i)
+static struct rgb rgb_from_256(int i)
{
struct rgb c;
if (i < 8) { /* Standard colours. */
@@ -1573,7 +1573,7 @@
case 11: /* set bell duration in msec */
if (vc->vc_npar >= 1)
vc->vc_bell_duration = (vc->vc_par[1] < 2000) ?
- vc->vc_par[1] * HZ / 1000 : 0;
+ msecs_to_jiffies(vc->vc_par[1]) : 0;
else
vc->vc_bell_duration = DEFAULT_BELL_DURATION;
break;
@@ -3041,17 +3041,24 @@
}
static DEVICE_ATTR(active, S_IRUGO, show_tty_active, NULL);
+static struct attribute *vt_dev_attrs[] = {
+ &dev_attr_active.attr,
+ NULL
+};
+
+ATTRIBUTE_GROUPS(vt_dev);
+
int __init vty_init(const struct file_operations *console_fops)
{
cdev_init(&vc0_cdev, console_fops);
if (cdev_add(&vc0_cdev, MKDEV(TTY_MAJOR, 0), 1) ||
register_chrdev_region(MKDEV(TTY_MAJOR, 0), 1, "/dev/vc/0") < 0)
panic("Couldn't register /dev/tty0 driver\n");
- tty0dev = device_create(tty_class, NULL, MKDEV(TTY_MAJOR, 0), NULL, "tty0");
+ tty0dev = device_create_with_groups(tty_class, NULL,
+ MKDEV(TTY_MAJOR, 0), NULL,
+ vt_dev_groups, "tty0");
if (IS_ERR(tty0dev))
tty0dev = NULL;
- else
- WARN_ON(device_create_file(tty0dev, &dev_attr_active) < 0);
vcs_init();
@@ -3423,42 +3430,26 @@
}
-static struct device_attribute device_attrs[] = {
- __ATTR(bind, S_IRUGO|S_IWUSR, show_bind, store_bind),
- __ATTR(name, S_IRUGO, show_name, NULL),
+static DEVICE_ATTR(bind, S_IRUGO|S_IWUSR, show_bind, store_bind);
+static DEVICE_ATTR(name, S_IRUGO, show_name, NULL);
+
+static struct attribute *con_dev_attrs[] = {
+ &dev_attr_bind.attr,
+ &dev_attr_name.attr,
+ NULL
};
+ATTRIBUTE_GROUPS(con_dev);
+
static int vtconsole_init_device(struct con_driver *con)
{
- int i;
- int error = 0;
-
con->flag |= CON_DRIVER_FLAG_ATTR;
- dev_set_drvdata(con->dev, con);
- for (i = 0; i < ARRAY_SIZE(device_attrs); i++) {
- error = device_create_file(con->dev, &device_attrs[i]);
- if (error)
- break;
- }
-
- if (error) {
- while (--i >= 0)
- device_remove_file(con->dev, &device_attrs[i]);
- con->flag &= ~CON_DRIVER_FLAG_ATTR;
- }
-
- return error;
+ return 0;
}
static void vtconsole_deinit_device(struct con_driver *con)
{
- int i;
-
- if (con->flag & CON_DRIVER_FLAG_ATTR) {
- for (i = 0; i < ARRAY_SIZE(device_attrs); i++)
- device_remove_file(con->dev, &device_attrs[i]);
- con->flag &= ~CON_DRIVER_FLAG_ATTR;
- }
+ con->flag &= ~CON_DRIVER_FLAG_ATTR;
}
/**
@@ -3621,11 +3612,11 @@
if (retval)
goto err;
- con_driver->dev = device_create(vtconsole_class, NULL,
- MKDEV(0, con_driver->node),
- NULL, "vtcon%i",
- con_driver->node);
-
+ con_driver->dev =
+ device_create_with_groups(vtconsole_class, NULL,
+ MKDEV(0, con_driver->node),
+ con_driver, con_dev_groups,
+ "vtcon%i", con_driver->node);
if (IS_ERR(con_driver->dev)) {
printk(KERN_WARNING "Unable to create device for %s; "
"errno = %ld\n", con_driver->desc,
@@ -3739,10 +3730,11 @@
struct con_driver *con = &registered_con_driver[i];
if (con->con && !con->dev) {
- con->dev = device_create(vtconsole_class, NULL,
- MKDEV(0, con->node),
- NULL, "vtcon%i",
- con->node);
+ con->dev =
+ device_create_with_groups(vtconsole_class, NULL,
+ MKDEV(0, con->node),
+ con, con_dev_groups,
+ "vtcon%i", con->node);
if (IS_ERR(con->dev)) {
printk(KERN_WARNING "Unable to create "
diff --git a/drivers/tty/vt/vt_ioctl.c b/drivers/tty/vt/vt_ioctl.c
index 2bd78e2..97d5a74 100644
--- a/drivers/tty/vt/vt_ioctl.c
+++ b/drivers/tty/vt/vt_ioctl.c
@@ -388,7 +388,7 @@
* Generate the tone for the appropriate number of ticks.
* If the time is zero, turn off sound ourselves.
*/
- ticks = HZ * ((arg >> 16) & 0xffff) / 1000;
+ ticks = msecs_to_jiffies((arg >> 16) & 0xffff);
count = ticks ? (arg & 0xffff) : 0;
if (count)
count = PIT_TICK_RATE / count;
diff --git a/drivers/uio/uio.c b/drivers/uio/uio.c
index 6276f13..65bf067 100644
--- a/drivers/uio/uio.c
+++ b/drivers/uio/uio.c
@@ -835,7 +835,15 @@
info->uio_dev = idev;
if (info->irq && (info->irq != UIO_IRQ_CUSTOM)) {
- ret = devm_request_irq(idev->dev, info->irq, uio_interrupt,
+ /*
+ * Note that we deliberately don't use devm_request_irq
+ * here. The parent module can unregister the UIO device
+ * and call pci_disable_msi, which requires that this
+ * irq has been freed. However, the device may have open
+ * FDs at the time of unregister and therefore may not be
+ * freed until they are released.
+ */
+ ret = request_irq(info->irq, uio_interrupt,
info->irq_flags, info->name, idev);
if (ret)
goto err_request_irq;
@@ -871,6 +879,8 @@
uio_dev_del_attributes(idev);
+ free_irq(idev->info->irq, idev);
+
device_destroy(&uio_class, MKDEV(uio_major, idev->minor));
return;
diff --git a/drivers/usb/gadget/function/f_uvc.c b/drivers/usb/gadget/function/f_uvc.c
index 76891ad..cf0df8f 100644
--- a/drivers/usb/gadget/function/f_uvc.c
+++ b/drivers/usb/gadget/function/f_uvc.c
@@ -222,7 +222,7 @@
v4l2_event.type = UVC_EVENT_DATA;
uvc_event->data.length = req->actual;
memcpy(&uvc_event->data.data, req->buf, req->actual);
- v4l2_event_queue(uvc->vdev, &v4l2_event);
+ v4l2_event_queue(&uvc->vdev, &v4l2_event);
}
}
@@ -256,7 +256,7 @@
memset(&v4l2_event, 0, sizeof(v4l2_event));
v4l2_event.type = UVC_EVENT_SETUP;
memcpy(&uvc_event->req, ctrl, sizeof(uvc_event->req));
- v4l2_event_queue(uvc->vdev, &v4l2_event);
+ v4l2_event_queue(&uvc->vdev, &v4l2_event);
return 0;
}
@@ -315,7 +315,7 @@
memset(&v4l2_event, 0, sizeof(v4l2_event));
v4l2_event.type = UVC_EVENT_CONNECT;
uvc_event->speed = cdev->gadget->speed;
- v4l2_event_queue(uvc->vdev, &v4l2_event);
+ v4l2_event_queue(&uvc->vdev, &v4l2_event);
uvc->state = UVC_STATE_CONNECTED;
}
@@ -343,7 +343,7 @@
memset(&v4l2_event, 0, sizeof(v4l2_event));
v4l2_event.type = UVC_EVENT_STREAMOFF;
- v4l2_event_queue(uvc->vdev, &v4l2_event);
+ v4l2_event_queue(&uvc->vdev, &v4l2_event);
uvc->state = UVC_STATE_CONNECTED;
return 0;
@@ -370,7 +370,7 @@
memset(&v4l2_event, 0, sizeof(v4l2_event));
v4l2_event.type = UVC_EVENT_STREAMON;
- v4l2_event_queue(uvc->vdev, &v4l2_event);
+ v4l2_event_queue(&uvc->vdev, &v4l2_event);
return USB_GADGET_DELAYED_STATUS;
default:
@@ -388,7 +388,7 @@
memset(&v4l2_event, 0, sizeof(v4l2_event));
v4l2_event.type = UVC_EVENT_DISCONNECT;
- v4l2_event_queue(uvc->vdev, &v4l2_event);
+ v4l2_event_queue(&uvc->vdev, &v4l2_event);
uvc->state = UVC_STATE_DISCONNECTED;
@@ -435,24 +435,19 @@
uvc_register_video(struct uvc_device *uvc)
{
struct usb_composite_dev *cdev = uvc->func.config->cdev;
- struct video_device *video;
/* TODO reference counting. */
- video = video_device_alloc();
- if (video == NULL)
- return -ENOMEM;
+ uvc->vdev.v4l2_dev = &uvc->v4l2_dev;
+ uvc->vdev.fops = &uvc_v4l2_fops;
+ uvc->vdev.ioctl_ops = &uvc_v4l2_ioctl_ops;
+ uvc->vdev.release = video_device_release_empty;
+ uvc->vdev.vfl_dir = VFL_DIR_TX;
+ uvc->vdev.lock = &uvc->video.mutex;
+ strlcpy(uvc->vdev.name, cdev->gadget->name, sizeof(uvc->vdev.name));
- video->v4l2_dev = &uvc->v4l2_dev;
- video->fops = &uvc_v4l2_fops;
- video->ioctl_ops = &uvc_v4l2_ioctl_ops;
- video->release = video_device_release;
- video->vfl_dir = VFL_DIR_TX;
- strlcpy(video->name, cdev->gadget->name, sizeof(video->name));
+ video_set_drvdata(&uvc->vdev, uvc);
- uvc->vdev = video;
- video_set_drvdata(video, uvc);
-
- return video_register_device(video, VFL_TYPE_GRABBER, -1);
+ return video_register_device(&uvc->vdev, VFL_TYPE_GRABBER, -1);
}
#define UVC_COPY_DESCRIPTOR(mem, dst, desc) \
@@ -765,8 +760,6 @@
error:
v4l2_device_unregister(&uvc->v4l2_dev);
- if (uvc->vdev)
- video_device_release(uvc->vdev);
if (uvc->control_ep)
uvc->control_ep->driver_data = NULL;
@@ -897,7 +890,7 @@
INFO(cdev, "%s\n", __func__);
- video_unregister_device(uvc->vdev);
+ video_unregister_device(&uvc->vdev);
v4l2_device_unregister(&uvc->v4l2_dev);
uvc->control_ep->driver_data = NULL;
uvc->video.ep->driver_data = NULL;
@@ -918,6 +911,7 @@
if (uvc == NULL)
return ERR_PTR(-ENOMEM);
+ mutex_init(&uvc->video.mutex);
uvc->state = UVC_STATE_DISCONNECTED;
opts = fi_to_f_uvc_opts(fi);
diff --git a/drivers/usb/gadget/function/uvc.h b/drivers/usb/gadget/function/uvc.h
index f67695c..ebe409b 100644
--- a/drivers/usb/gadget/function/uvc.h
+++ b/drivers/usb/gadget/function/uvc.h
@@ -115,6 +115,7 @@
unsigned int width;
unsigned int height;
unsigned int imagesize;
+ struct mutex mutex; /* protects frame parameters */
/* Requests */
unsigned int req_size;
@@ -143,7 +144,7 @@
struct uvc_device
{
- struct video_device *vdev;
+ struct video_device vdev;
struct v4l2_device v4l2_dev;
enum uvc_state state;
struct usb_function func;
diff --git a/drivers/usb/gadget/function/uvc_queue.c b/drivers/usb/gadget/function/uvc_queue.c
index 8ea8b3b..d617c39 100644
--- a/drivers/usb/gadget/function/uvc_queue.c
+++ b/drivers/usb/gadget/function/uvc_queue.c
@@ -104,29 +104,16 @@
spin_unlock_irqrestore(&queue->irqlock, flags);
}
-static void uvc_wait_prepare(struct vb2_queue *vq)
-{
- struct uvc_video_queue *queue = vb2_get_drv_priv(vq);
-
- mutex_unlock(&queue->mutex);
-}
-
-static void uvc_wait_finish(struct vb2_queue *vq)
-{
- struct uvc_video_queue *queue = vb2_get_drv_priv(vq);
-
- mutex_lock(&queue->mutex);
-}
-
static struct vb2_ops uvc_queue_qops = {
.queue_setup = uvc_queue_setup,
.buf_prepare = uvc_buffer_prepare,
.buf_queue = uvc_buffer_queue,
- .wait_prepare = uvc_wait_prepare,
- .wait_finish = uvc_wait_finish,
+ .wait_prepare = vb2_ops_wait_prepare,
+ .wait_finish = vb2_ops_wait_finish,
};
-int uvcg_queue_init(struct uvc_video_queue *queue, enum v4l2_buf_type type)
+int uvcg_queue_init(struct uvc_video_queue *queue, enum v4l2_buf_type type,
+ struct mutex *lock)
{
int ret;
@@ -135,6 +122,7 @@
queue->queue.drv_priv = queue;
queue->queue.buf_struct_size = sizeof(struct uvc_buffer);
queue->queue.ops = &uvc_queue_qops;
+ queue->queue.lock = lock;
queue->queue.mem_ops = &vb2_vmalloc_memops;
queue->queue.timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC
| V4L2_BUF_FLAG_TSTAMP_SRC_EOF;
@@ -142,7 +130,6 @@
if (ret)
return ret;
- mutex_init(&queue->mutex);
spin_lock_init(&queue->irqlock);
INIT_LIST_HEAD(&queue->irqqueue);
queue->flags = 0;
@@ -155,9 +142,7 @@
*/
void uvcg_free_buffers(struct uvc_video_queue *queue)
{
- mutex_lock(&queue->mutex);
vb2_queue_release(&queue->queue);
- mutex_unlock(&queue->mutex);
}
/*
@@ -168,22 +153,14 @@
{
int ret;
- mutex_lock(&queue->mutex);
ret = vb2_reqbufs(&queue->queue, rb);
- mutex_unlock(&queue->mutex);
return ret ? ret : rb->count;
}
int uvcg_query_buffer(struct uvc_video_queue *queue, struct v4l2_buffer *buf)
{
- int ret;
-
- mutex_lock(&queue->mutex);
- ret = vb2_querybuf(&queue->queue, buf);
- mutex_unlock(&queue->mutex);
-
- return ret;
+ return vb2_querybuf(&queue->queue, buf);
}
int uvcg_queue_buffer(struct uvc_video_queue *queue, struct v4l2_buffer *buf)
@@ -191,18 +168,14 @@
unsigned long flags;
int ret;
- mutex_lock(&queue->mutex);
ret = vb2_qbuf(&queue->queue, buf);
if (ret < 0)
- goto done;
+ return ret;
spin_lock_irqsave(&queue->irqlock, flags);
ret = (queue->flags & UVC_QUEUE_PAUSED) != 0;
queue->flags &= ~UVC_QUEUE_PAUSED;
spin_unlock_irqrestore(&queue->irqlock, flags);
-
-done:
- mutex_unlock(&queue->mutex);
return ret;
}
@@ -213,13 +186,7 @@
int uvcg_dequeue_buffer(struct uvc_video_queue *queue, struct v4l2_buffer *buf,
int nonblocking)
{
- int ret;
-
- mutex_lock(&queue->mutex);
- ret = vb2_dqbuf(&queue->queue, buf, nonblocking);
- mutex_unlock(&queue->mutex);
-
- return ret;
+ return vb2_dqbuf(&queue->queue, buf, nonblocking);
}
/*
@@ -231,24 +198,12 @@
unsigned int uvcg_queue_poll(struct uvc_video_queue *queue, struct file *file,
poll_table *wait)
{
- unsigned int ret;
-
- mutex_lock(&queue->mutex);
- ret = vb2_poll(&queue->queue, file, wait);
- mutex_unlock(&queue->mutex);
-
- return ret;
+ return vb2_poll(&queue->queue, file, wait);
}
int uvcg_queue_mmap(struct uvc_video_queue *queue, struct vm_area_struct *vma)
{
- int ret;
-
- mutex_lock(&queue->mutex);
- ret = vb2_mmap(&queue->queue, vma);
- mutex_unlock(&queue->mutex);
-
- return ret;
+ return vb2_mmap(&queue->queue, vma);
}
#ifndef CONFIG_MMU
@@ -260,12 +215,7 @@
unsigned long uvcg_queue_get_unmapped_area(struct uvc_video_queue *queue,
unsigned long pgoff)
{
- unsigned long ret;
-
- mutex_lock(&queue->mutex);
- ret = vb2_get_unmapped_area(&queue->queue, 0, 0, pgoff, 0);
- mutex_unlock(&queue->mutex);
- return ret;
+ return vb2_get_unmapped_area(&queue->queue, 0, 0, pgoff, 0);
}
#endif
@@ -327,18 +277,17 @@
unsigned long flags;
int ret = 0;
- mutex_lock(&queue->mutex);
if (enable) {
ret = vb2_streamon(&queue->queue, queue->queue.type);
if (ret < 0)
- goto done;
+ return ret;
queue->sequence = 0;
queue->buf_used = 0;
} else {
ret = vb2_streamoff(&queue->queue, queue->queue.type);
if (ret < 0)
- goto done;
+ return ret;
spin_lock_irqsave(&queue->irqlock, flags);
INIT_LIST_HEAD(&queue->irqqueue);
@@ -353,8 +302,6 @@
spin_unlock_irqrestore(&queue->irqlock, flags);
}
-done:
- mutex_unlock(&queue->mutex);
return ret;
}
diff --git a/drivers/usb/gadget/function/uvc_queue.h b/drivers/usb/gadget/function/uvc_queue.h
index 03919c7..01ca9ea 100644
--- a/drivers/usb/gadget/function/uvc_queue.h
+++ b/drivers/usb/gadget/function/uvc_queue.h
@@ -41,7 +41,6 @@
struct uvc_video_queue {
struct vb2_queue queue;
- struct mutex mutex; /* Protects queue */
unsigned int flags;
__u32 sequence;
@@ -57,7 +56,8 @@
return vb2_is_streaming(&queue->queue);
}
-int uvcg_queue_init(struct uvc_video_queue *queue, enum v4l2_buf_type type);
+int uvcg_queue_init(struct uvc_video_queue *queue, enum v4l2_buf_type type,
+ struct mutex *lock);
void uvcg_free_buffers(struct uvc_video_queue *queue);
diff --git a/drivers/usb/gadget/function/uvc_v4l2.c b/drivers/usb/gadget/function/uvc_v4l2.c
index 8b818fd..f4ccbd5 100644
--- a/drivers/usb/gadget/function/uvc_v4l2.c
+++ b/drivers/usb/gadget/function/uvc_v4l2.c
@@ -14,7 +14,6 @@
#include <linux/device.h>
#include <linux/errno.h>
#include <linux/list.h>
-#include <linux/mutex.h>
#include <linux/videodev2.h>
#include <linux/vmalloc.h>
#include <linux/wait.h>
@@ -77,7 +76,8 @@
strlcpy(cap->bus_info, dev_name(&cdev->gadget->dev),
sizeof(cap->bus_info));
- cap->capabilities = V4L2_CAP_VIDEO_OUTPUT | V4L2_CAP_STREAMING;
+ cap->device_caps = V4L2_CAP_VIDEO_OUTPUT | V4L2_CAP_STREAMING;
+ cap->capabilities = cap->device_caps | V4L2_CAP_DEVICE_CAPS;
return 0;
}
@@ -312,8 +312,10 @@
uvc_function_disconnect(uvc);
+ mutex_lock(&video->mutex);
uvcg_video_enable(video, 0);
uvcg_free_buffers(&video->queue);
+ mutex_unlock(&video->mutex);
file->private_data = NULL;
v4l2_fh_del(&handle->vfh);
@@ -357,7 +359,7 @@
.owner = THIS_MODULE,
.open = uvc_v4l2_open,
.release = uvc_v4l2_release,
- .ioctl = video_ioctl2,
+ .unlocked_ioctl = video_ioctl2,
.mmap = uvc_v4l2_mmap,
.poll = uvc_v4l2_poll,
#ifndef CONFIG_MMU
diff --git a/drivers/usb/gadget/function/uvc_video.c b/drivers/usb/gadget/function/uvc_video.c
index 50a5e63..3d0d5d9 100644
--- a/drivers/usb/gadget/function/uvc_video.c
+++ b/drivers/usb/gadget/function/uvc_video.c
@@ -391,7 +391,8 @@
video->imagesize = 320 * 240 * 2;
/* Initialize the video buffers queue. */
- uvcg_queue_init(&video->queue, V4L2_BUF_TYPE_VIDEO_OUTPUT);
+ uvcg_queue_init(&video->queue, V4L2_BUF_TYPE_VIDEO_OUTPUT,
+ &video->mutex);
return 0;
}
diff --git a/drivers/video/fbdev/hyperv_fb.c b/drivers/video/fbdev/hyperv_fb.c
index 4254336..807ee22 100644
--- a/drivers/video/fbdev/hyperv_fb.c
+++ b/drivers/video/fbdev/hyperv_fb.c
@@ -415,7 +415,8 @@
struct fb_info *info = hv_get_drvdata(hdev);
struct hvfb_par *par = info->par;
struct synthvid_msg *msg = (struct synthvid_msg *)par->init_buf;
- int t, ret = 0;
+ int ret = 0;
+ unsigned long t;
memset(msg, 0, sizeof(struct synthvid_msg));
msg->vid_hdr.type = SYNTHVID_VERSION_REQUEST;
@@ -488,7 +489,8 @@
struct fb_info *info = hv_get_drvdata(hdev);
struct hvfb_par *par = info->par;
struct synthvid_msg *msg = (struct synthvid_msg *)par->init_buf;
- int t, ret = 0;
+ int ret = 0;
+ unsigned long t;
/* Send VRAM location */
memset(msg, 0, sizeof(struct synthvid_msg));
diff --git a/drivers/video/fbdev/imxfb.c b/drivers/video/fbdev/imxfb.c
index 3b6a3c8..84d1d29 100644
--- a/drivers/video/fbdev/imxfb.c
+++ b/drivers/video/fbdev/imxfb.c
@@ -183,7 +183,7 @@
};
MODULE_DEVICE_TABLE(platform, imxfb_devtype);
-static struct of_device_id imxfb_of_dev_id[] = {
+static const struct of_device_id imxfb_of_dev_id[] = {
{
.compatible = "fsl,imx1-fb",
.data = &imxfb_devtype[IMX1_FB],
diff --git a/drivers/video/fbdev/omap2/displays-new/connector-dvi.c b/drivers/video/fbdev/omap2/displays-new/connector-dvi.c
index 0cdc974..a8ce920 100644
--- a/drivers/video/fbdev/omap2/displays-new/connector-dvi.c
+++ b/drivers/video/fbdev/omap2/displays-new/connector-dvi.c
@@ -37,7 +37,7 @@
.hsync_level = OMAPDSS_SIG_ACTIVE_HIGH,
.data_pclk_edge = OMAPDSS_DRIVE_SIG_RISING_EDGE,
.de_level = OMAPDSS_SIG_ACTIVE_HIGH,
- .sync_pclk_edge = OMAPDSS_DRIVE_SIG_OPPOSITE_EDGES,
+ .sync_pclk_edge = OMAPDSS_DRIVE_SIG_FALLING_EDGE,
};
struct panel_drv_data {
diff --git a/drivers/video/fbdev/omap2/displays-new/encoder-tfp410.c b/drivers/video/fbdev/omap2/displays-new/encoder-tfp410.c
index 92919a7..d9048b3 100644
--- a/drivers/video/fbdev/omap2/displays-new/encoder-tfp410.c
+++ b/drivers/video/fbdev/omap2/displays-new/encoder-tfp410.c
@@ -114,12 +114,21 @@
dssdev->state = OMAP_DSS_DISPLAY_DISABLED;
}
+static void tfp410_fix_timings(struct omap_video_timings *timings)
+{
+ timings->data_pclk_edge = OMAPDSS_DRIVE_SIG_RISING_EDGE;
+ timings->sync_pclk_edge = OMAPDSS_DRIVE_SIG_RISING_EDGE;
+ timings->de_level = OMAPDSS_SIG_ACTIVE_HIGH;
+}
+
static void tfp410_set_timings(struct omap_dss_device *dssdev,
struct omap_video_timings *timings)
{
struct panel_drv_data *ddata = to_panel_data(dssdev);
struct omap_dss_device *in = ddata->in;
+ tfp410_fix_timings(timings);
+
ddata->timings = *timings;
dssdev->panel.timings = *timings;
@@ -140,6 +149,8 @@
struct panel_drv_data *ddata = to_panel_data(dssdev);
struct omap_dss_device *in = ddata->in;
+ tfp410_fix_timings(timings);
+
return in->ops.dpi->check_timings(in, timings);
}
diff --git a/drivers/video/fbdev/omap2/displays-new/panel-lgphilips-lb035q02.c b/drivers/video/fbdev/omap2/displays-new/panel-lgphilips-lb035q02.c
index 27d4fcf..9974a37 100644
--- a/drivers/video/fbdev/omap2/displays-new/panel-lgphilips-lb035q02.c
+++ b/drivers/video/fbdev/omap2/displays-new/panel-lgphilips-lb035q02.c
@@ -37,7 +37,7 @@
.hsync_level = OMAPDSS_SIG_ACTIVE_LOW,
.data_pclk_edge = OMAPDSS_DRIVE_SIG_RISING_EDGE,
.de_level = OMAPDSS_SIG_ACTIVE_HIGH,
- .sync_pclk_edge = OMAPDSS_DRIVE_SIG_OPPOSITE_EDGES,
+ .sync_pclk_edge = OMAPDSS_DRIVE_SIG_FALLING_EDGE,
};
struct panel_drv_data {
diff --git a/drivers/video/fbdev/omap2/displays-new/panel-sharp-ls037v7dw01.c b/drivers/video/fbdev/omap2/displays-new/panel-sharp-ls037v7dw01.c
index 18b19b6..eae2637 100644
--- a/drivers/video/fbdev/omap2/displays-new/panel-sharp-ls037v7dw01.c
+++ b/drivers/video/fbdev/omap2/displays-new/panel-sharp-ls037v7dw01.c
@@ -54,7 +54,7 @@
.hsync_level = OMAPDSS_SIG_ACTIVE_LOW,
.data_pclk_edge = OMAPDSS_DRIVE_SIG_RISING_EDGE,
.de_level = OMAPDSS_SIG_ACTIVE_HIGH,
- .sync_pclk_edge = OMAPDSS_DRIVE_SIG_OPPOSITE_EDGES,
+ .sync_pclk_edge = OMAPDSS_DRIVE_SIG_FALLING_EDGE,
};
#define to_panel_data(p) container_of(p, struct panel_drv_data, dssdev)
diff --git a/drivers/video/fbdev/omap2/displays-new/panel-sony-acx565akm.c b/drivers/video/fbdev/omap2/displays-new/panel-sony-acx565akm.c
index 337ccc5..90cbc4c 100644
--- a/drivers/video/fbdev/omap2/displays-new/panel-sony-acx565akm.c
+++ b/drivers/video/fbdev/omap2/displays-new/panel-sony-acx565akm.c
@@ -108,7 +108,7 @@
.data_pclk_edge = OMAPDSS_DRIVE_SIG_RISING_EDGE,
.de_level = OMAPDSS_SIG_ACTIVE_HIGH,
- .sync_pclk_edge = OMAPDSS_DRIVE_SIG_OPPOSITE_EDGES,
+ .sync_pclk_edge = OMAPDSS_DRIVE_SIG_FALLING_EDGE,
};
#define to_panel_data(p) container_of(p, struct panel_drv_data, dssdev)
diff --git a/drivers/video/fbdev/omap2/displays-new/panel-tpo-td028ttec1.c b/drivers/video/fbdev/omap2/displays-new/panel-tpo-td028ttec1.c
index fbba0b8..9edc511 100644
--- a/drivers/video/fbdev/omap2/displays-new/panel-tpo-td028ttec1.c
+++ b/drivers/video/fbdev/omap2/displays-new/panel-tpo-td028ttec1.c
@@ -58,7 +58,7 @@
.data_pclk_edge = OMAPDSS_DRIVE_SIG_FALLING_EDGE,
.de_level = OMAPDSS_SIG_ACTIVE_HIGH,
- .sync_pclk_edge = OMAPDSS_DRIVE_SIG_OPPOSITE_EDGES,
+ .sync_pclk_edge = OMAPDSS_DRIVE_SIG_RISING_EDGE,
};
#define JBT_COMMAND 0x000
diff --git a/drivers/video/fbdev/omap2/displays-new/panel-tpo-td043mtea1.c b/drivers/video/fbdev/omap2/displays-new/panel-tpo-td043mtea1.c
index 5aba76b..79e4a02 100644
--- a/drivers/video/fbdev/omap2/displays-new/panel-tpo-td043mtea1.c
+++ b/drivers/video/fbdev/omap2/displays-new/panel-tpo-td043mtea1.c
@@ -91,7 +91,7 @@
.hsync_level = OMAPDSS_SIG_ACTIVE_LOW,
.data_pclk_edge = OMAPDSS_DRIVE_SIG_FALLING_EDGE,
.de_level = OMAPDSS_SIG_ACTIVE_HIGH,
- .sync_pclk_edge = OMAPDSS_DRIVE_SIG_OPPOSITE_EDGES,
+ .sync_pclk_edge = OMAPDSS_DRIVE_SIG_RISING_EDGE,
};
#define to_panel_data(p) container_of(p, struct panel_drv_data, dssdev)
diff --git a/drivers/video/fbdev/omap2/dss/core.c b/drivers/video/fbdev/omap2/dss/core.c
index d5d9212..1675175 100644
--- a/drivers/video/fbdev/omap2/dss/core.c
+++ b/drivers/video/fbdev/omap2/dss/core.c
@@ -179,10 +179,14 @@
switch (v) {
case PM_SUSPEND_PREPARE:
+ case PM_HIBERNATION_PREPARE:
+ case PM_RESTORE_PREPARE:
DSSDBG("suspending displays\n");
return dss_suspend_all_devices();
case PM_POST_SUSPEND:
+ case PM_POST_HIBERNATION:
+ case PM_POST_RESTORE:
DSSDBG("resuming displays\n");
return dss_resume_all_devices();
diff --git a/drivers/video/fbdev/omap2/dss/dispc.c b/drivers/video/fbdev/omap2/dss/dispc.c
index 31b743c..f4fc77d 100644
--- a/drivers/video/fbdev/omap2/dss/dispc.c
+++ b/drivers/video/fbdev/omap2/dss/dispc.c
@@ -123,6 +123,9 @@
struct regmap *syscon_pol;
u32 syscon_pol_offset;
+
+ /* DISPC_CONTROL & DISPC_CONFIG lock */
+ spinlock_t control_lock;
} dispc;
enum omap_color_component {
@@ -261,7 +264,16 @@
static void mgr_fld_write(enum omap_channel channel,
enum mgr_reg_fields regfld, int val) {
const struct dispc_reg_field rfld = mgr_desc[channel].reg_desc[regfld];
+ const bool need_lock = rfld.reg == DISPC_CONTROL || rfld.reg == DISPC_CONFIG;
+ unsigned long flags;
+
+ if (need_lock)
+ spin_lock_irqsave(&dispc.control_lock, flags);
+
REG_FLD_MOD(rfld.reg, val, rfld.high, rfld.low);
+
+ if (need_lock)
+ spin_unlock_irqrestore(&dispc.control_lock, flags);
}
#define SR(reg) \
@@ -1126,6 +1138,7 @@
int fifo;
u8 start, end;
u32 unit;
+ int i;
unit = dss_feat_get_buffer_size_unit();
@@ -1165,6 +1178,20 @@
dispc.fifo_assignment[OMAP_DSS_GFX] = OMAP_DSS_WB;
dispc.fifo_assignment[OMAP_DSS_WB] = OMAP_DSS_GFX;
}
+
+ /*
+ * Set up default fifo thresholds.
+ */
+ for (i = 0; i < dss_feat_get_num_ovls(); ++i) {
+ u32 low, high;
+ const bool use_fifomerge = false;
+ const bool manual_update = false;
+
+ dispc_ovl_compute_fifo_thresholds(i, &low, &high,
+ use_fifomerge, manual_update);
+
+ dispc_ovl_set_fifo_threshold(i, low, high);
+ }
}
static u32 dispc_ovl_get_fifo_size(enum omap_plane plane)
@@ -1278,6 +1305,63 @@
}
EXPORT_SYMBOL(dispc_ovl_compute_fifo_thresholds);
+static void dispc_ovl_set_mflag(enum omap_plane plane, bool enable)
+{
+ int bit;
+
+ if (plane == OMAP_DSS_GFX)
+ bit = 14;
+ else
+ bit = 23;
+
+ REG_FLD_MOD(DISPC_OVL_ATTRIBUTES(plane), enable, bit, bit);
+}
+
+static void dispc_ovl_set_mflag_threshold(enum omap_plane plane,
+ int low, int high)
+{
+ dispc_write_reg(DISPC_OVL_MFLAG_THRESHOLD(plane),
+ FLD_VAL(high, 31, 16) | FLD_VAL(low, 15, 0));
+}
+
+static void dispc_init_mflag(void)
+{
+ int i;
+
+ /*
+ * HACK: NV12 color format and MFLAG seem to have problems working
+ * together: using two displays, and having an NV12 overlay on one of
+ * the displays will cause underflows/synclosts when MFLAG_CTRL=2.
+ * Changing MFLAG thresholds and PRELOAD to certain values seems to
+ * remove the errors, but there doesn't seem to be a clear logic on
+ * which values work and which do not.
+ *
+ * As a work-around, force MFLAG to be always on.
+ */
+ dispc_write_reg(DISPC_GLOBAL_MFLAG_ATTRIBUTE,
+ (1 << 0) | /* MFLAG_CTRL = force always on */
+ (0 << 2)); /* MFLAG_START = disable */
+
+ for (i = 0; i < dss_feat_get_num_ovls(); ++i) {
+ u32 size = dispc_ovl_get_fifo_size(i);
+ u32 unit = dss_feat_get_buffer_size_unit();
+ u32 low, high;
+
+ dispc_ovl_set_mflag(i, true);
+
+ /*
+ * The simulation team suggests the thresholds below:
+ * HT = fifosize * 5 / 8;
+ * LT = fifosize * 4 / 8;
+ */
+
+ low = size * 4 / 8 / unit;
+ high = size * 5 / 8 / unit;
+
+ dispc_ovl_set_mflag_threshold(i, low, high);
+ }
+}
+
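The watermarks programmed above are expressed in the platform's buffer-size units: the low threshold sits at 4/8 and the high threshold at 5/8 of the overlay FIFO size. A small standalone sketch of that arithmetic, using assumed example values (the function name, the 2048-byte FIFO and the 16-byte unit are illustrative, not taken from the driver):

	/* With a 2048-byte FIFO and a 16-byte buffer unit this yields
	 * low = 2048 * 4 / 8 / 16 = 64 and high = 2048 * 5 / 8 / 16 = 80. */
	static void mflag_thresholds(unsigned int size, unsigned int unit,
				     unsigned int *low, unsigned int *high)
	{
		*low  = size * 4 / 8 / unit;	/* LT = fifosize * 4/8 */
		*high = size * 5 / 8 / unit;	/* HT = fifosize * 5/8 */
	}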
static void dispc_ovl_set_fir(enum omap_plane plane,
int hinc, int vinc,
enum omap_color_component color_comp)
@@ -2322,6 +2406,11 @@
if (width == out_width && height == out_height)
return 0;
+ if (pclk == 0 || mgr_timings->pixelclock == 0) {
+ DSSERR("cannot calculate scaling settings: pclk is zero\n");
+ return -EINVAL;
+ }
+
if ((caps & OMAP_DSS_OVL_CAP_SCALE) == 0)
return -EINVAL;
@@ -2441,7 +2530,7 @@
unsigned long pclk = dispc_plane_pclk_rate(plane);
unsigned long lclk = dispc_plane_lclk_rate(plane);
- if (paddr == 0)
+ if (paddr == 0 && rotation_type != OMAP_DSS_ROT_TILER)
return -EINVAL;
out_width = out_width == 0 ? width : out_width;
@@ -2915,7 +3004,7 @@
{
u32 timing_h, timing_v, l;
- bool onoff, rf, ipc;
+ bool onoff, rf, ipc, vs, hs, de;
timing_h = FLD_VAL(hsw-1, dispc.feat->sw_start, 0) |
FLD_VAL(hfp-1, dispc.feat->fp_start, 8) |
@@ -2927,6 +3016,39 @@
dispc_write_reg(DISPC_TIMING_H(channel), timing_h);
dispc_write_reg(DISPC_TIMING_V(channel), timing_v);
+ switch (vsync_level) {
+ case OMAPDSS_SIG_ACTIVE_LOW:
+ vs = true;
+ break;
+ case OMAPDSS_SIG_ACTIVE_HIGH:
+ vs = false;
+ break;
+ default:
+ BUG();
+ }
+
+ switch (hsync_level) {
+ case OMAPDSS_SIG_ACTIVE_LOW:
+ hs = true;
+ break;
+ case OMAPDSS_SIG_ACTIVE_HIGH:
+ hs = false;
+ break;
+ default:
+ BUG();
+ }
+
+ switch (de_level) {
+ case OMAPDSS_SIG_ACTIVE_LOW:
+ de = true;
+ break;
+ case OMAPDSS_SIG_ACTIVE_HIGH:
+ de = false;
+ break;
+ default:
+ BUG();
+ }
+
switch (data_pclk_edge) {
case OMAPDSS_DRIVE_SIG_RISING_EDGE:
ipc = false;
@@ -2934,22 +3056,18 @@
case OMAPDSS_DRIVE_SIG_FALLING_EDGE:
ipc = true;
break;
- case OMAPDSS_DRIVE_SIG_OPPOSITE_EDGES:
default:
BUG();
}
+ /* always use the 'rf' setting */
+ onoff = true;
+
switch (sync_pclk_edge) {
- case OMAPDSS_DRIVE_SIG_OPPOSITE_EDGES:
- onoff = false;
- rf = false;
- break;
case OMAPDSS_DRIVE_SIG_FALLING_EDGE:
- onoff = true;
rf = false;
break;
case OMAPDSS_DRIVE_SIG_RISING_EDGE:
- onoff = true;
rf = true;
break;
default:
@@ -2958,10 +3076,10 @@
l = FLD_VAL(onoff, 17, 17) |
FLD_VAL(rf, 16, 16) |
- FLD_VAL(de_level, 15, 15) |
+ FLD_VAL(de, 15, 15) |
FLD_VAL(ipc, 14, 14) |
- FLD_VAL(hsync_level, 13, 13) |
- FLD_VAL(vsync_level, 12, 12);
+ FLD_VAL(hs, 13, 13) |
+ FLD_VAL(vs, 12, 12);
dispc_write_reg(DISPC_POL_FREQ(channel), l);
@@ -3569,6 +3687,9 @@
if (dispc.feat->mstandby_workaround)
REG_FLD_MOD(DISPC_MSTANDBY_CTRL, 1, 0, 0);
+
+ if (dss_has_feature(FEAT_MFLAG))
+ dispc_init_mflag();
}
static const struct dispc_features omap24xx_dispc_feats __initconst = {
@@ -3770,6 +3891,8 @@
dispc.pdev = pdev;
+ spin_lock_init(&dispc.control_lock);
+
r = dispc_init_features(dispc.pdev);
if (r)
return r;
diff --git a/drivers/video/fbdev/omap2/dss/display.c b/drivers/video/fbdev/omap2/dss/display.c
index 2412a0d..ef5b902 100644
--- a/drivers/video/fbdev/omap2/dss/display.c
+++ b/drivers/video/fbdev/omap2/dss/display.c
@@ -295,7 +295,7 @@
OMAPDSS_DRIVE_SIG_RISING_EDGE :
OMAPDSS_DRIVE_SIG_FALLING_EDGE;
- ovt->sync_pclk_edge = OMAPDSS_DRIVE_SIG_OPPOSITE_EDGES;
+ ovt->sync_pclk_edge = ovt->data_pclk_edge;
}
EXPORT_SYMBOL(videomode_to_omap_video_timings);
diff --git a/drivers/video/fbdev/omap2/dss/dsi.c b/drivers/video/fbdev/omap2/dss/dsi.c
index 5081f6f..28b0bc1 100644
--- a/drivers/video/fbdev/omap2/dss/dsi.c
+++ b/drivers/video/fbdev/omap2/dss/dsi.c
@@ -4137,7 +4137,7 @@
dsi->timings.vsync_level = OMAPDSS_SIG_ACTIVE_HIGH;
dsi->timings.data_pclk_edge = OMAPDSS_DRIVE_SIG_RISING_EDGE;
dsi->timings.de_level = OMAPDSS_SIG_ACTIVE_HIGH;
- dsi->timings.sync_pclk_edge = OMAPDSS_DRIVE_SIG_OPPOSITE_EDGES;
+ dsi->timings.sync_pclk_edge = OMAPDSS_DRIVE_SIG_FALLING_EDGE;
dss_mgr_set_timings(mgr, &dsi->timings);
diff --git a/drivers/video/fbdev/omap2/dss/dss.c b/drivers/video/fbdev/omap2/dss/dss.c
index a6d10d4..7f978b6 100644
--- a/drivers/video/fbdev/omap2/dss/dss.c
+++ b/drivers/video/fbdev/omap2/dss/dss.c
@@ -38,6 +38,7 @@
#include <linux/regmap.h>
#include <linux/of.h>
#include <linux/regulator/consumer.h>
+#include <linux/suspend.h>
#include <video/omapdss.h>
@@ -1138,6 +1139,8 @@
dss_debugfs_create_file("dss", dss_dump_regs);
+ pm_set_vt_switch(0);
+
return 0;
err_pll_init:
diff --git a/drivers/video/fbdev/omap2/dss/dss_features.c b/drivers/video/fbdev/omap2/dss/dss_features.c
index 376270b..b0b6dfd 100644
--- a/drivers/video/fbdev/omap2/dss/dss_features.c
+++ b/drivers/video/fbdev/omap2/dss/dss_features.c
@@ -440,7 +440,7 @@
static const struct dss_param_range am43xx_dss_param_range[] = {
[FEAT_PARAM_DSS_FCK] = { 0, 200000000 },
- [FEAT_PARAM_DSS_PCD] = { 2, 255 },
+ [FEAT_PARAM_DSS_PCD] = { 1, 255 },
[FEAT_PARAM_DOWNSCALE] = { 1, 4 },
[FEAT_PARAM_LINEWIDTH] = { 1, 1024 },
};
diff --git a/drivers/video/fbdev/omap2/dss/hdmi5_core.c b/drivers/video/fbdev/omap2/dss/hdmi5_core.c
index a3cfe3d..bfc0c4c 100644
--- a/drivers/video/fbdev/omap2/dss/hdmi5_core.c
+++ b/drivers/video/fbdev/omap2/dss/hdmi5_core.c
@@ -55,7 +55,7 @@
const unsigned ss_scl_low = 4700; /* ns */
const unsigned fs_scl_high = 600; /* ns */
const unsigned fs_scl_low = 1300; /* ns */
- const unsigned sda_hold = 300; /* ns */
+ const unsigned sda_hold = 1000; /* ns */
const unsigned sfr_div = 10;
unsigned long long sfr;
unsigned v;
diff --git a/drivers/video/fbdev/omap2/dss/omapdss-boot-init.c b/drivers/video/fbdev/omap2/dss/omapdss-boot-init.c
index 42b87f9..8b6f6d5 100644
--- a/drivers/video/fbdev/omap2/dss/omapdss-boot-init.c
+++ b/drivers/video/fbdev/omap2/dss/omapdss-boot-init.c
@@ -164,20 +164,15 @@
pn = of_graph_get_remote_port_parent(n);
- if (!pn) {
- of_node_put(n);
+ if (!pn)
continue;
- }
if (!of_device_is_available(pn) || omapdss_list_contains(pn)) {
of_node_put(pn);
- of_node_put(n);
continue;
}
omapdss_walk_device(pn, false);
-
- of_node_put(n);
}
}
diff --git a/drivers/video/fbdev/omap2/dss/rfbi.c b/drivers/video/fbdev/omap2/dss/rfbi.c
index 28e694b..065effc 100644
--- a/drivers/video/fbdev/omap2/dss/rfbi.c
+++ b/drivers/video/fbdev/omap2/dss/rfbi.c
@@ -869,7 +869,7 @@
rfbi.timings.vsync_level = OMAPDSS_SIG_ACTIVE_HIGH;
rfbi.timings.data_pclk_edge = OMAPDSS_DRIVE_SIG_RISING_EDGE;
rfbi.timings.de_level = OMAPDSS_SIG_ACTIVE_HIGH;
- rfbi.timings.sync_pclk_edge = OMAPDSS_DRIVE_SIG_OPPOSITE_EDGES;
+ rfbi.timings.sync_pclk_edge = OMAPDSS_DRIVE_SIG_FALLING_EDGE;
dss_mgr_set_timings(mgr, &rfbi.timings);
}
diff --git a/drivers/video/fbdev/omap2/omapfb/omapfb-main.c b/drivers/video/fbdev/omap2/omapfb/omapfb-main.c
index 22f07f8..4f0cbb5 100644
--- a/drivers/video/fbdev/omap2/omapfb/omapfb-main.c
+++ b/drivers/video/fbdev/omap2/omapfb/omapfb-main.c
@@ -2073,7 +2073,7 @@
} else {
timings->data_pclk_edge = OMAPDSS_DRIVE_SIG_RISING_EDGE;
timings->de_level = OMAPDSS_SIG_ACTIVE_HIGH;
- timings->sync_pclk_edge = OMAPDSS_DRIVE_SIG_OPPOSITE_EDGES;
+ timings->sync_pclk_edge = OMAPDSS_DRIVE_SIG_FALLING_EDGE;
}
timings->pixelclock = PICOS2KHZ(var->pixclock) * 1000;
@@ -2223,7 +2223,7 @@
} else {
t->data_pclk_edge = OMAPDSS_DRIVE_SIG_RISING_EDGE;
t->de_level = OMAPDSS_SIG_ACTIVE_HIGH;
- t->sync_pclk_edge = OMAPDSS_DRIVE_SIG_OPPOSITE_EDGES;
+ t->sync_pclk_edge = OMAPDSS_DRIVE_SIG_FALLING_EDGE;
}
t->x_res = m->xres;
diff --git a/drivers/video/fbdev/pm3fb.c b/drivers/video/fbdev/pm3fb.c
index 4bf3273..77b99ed 100644
--- a/drivers/video/fbdev/pm3fb.c
+++ b/drivers/video/fbdev/pm3fb.c
@@ -1479,9 +1479,9 @@
fb_dealloc_cmap(&info->cmap);
#ifdef CONFIG_MTRR
- if (par->mtrr_handle >= 0)
- mtrr_del(par->mtrr_handle, info->fix.smem_start,
- info->fix.smem_len);
+ if (par->mtrr_handle >= 0)
+ mtrr_del(par->mtrr_handle, info->fix.smem_start,
+ info->fix.smem_len);
#endif /* CONFIG_MTRR */
iounmap(info->screen_base);
release_mem_region(fix->smem_start, fix->smem_len);
diff --git a/drivers/video/fbdev/pxafb.c b/drivers/video/fbdev/pxafb.c
index da2431e..7245611 100644
--- a/drivers/video/fbdev/pxafb.c
+++ b/drivers/video/fbdev/pxafb.c
@@ -1285,7 +1285,7 @@
mutex_unlock(&fbi->ctrlr_lock);
set_current_state(TASK_INTERRUPTIBLE);
- schedule_timeout(30 * HZ / 1000);
+ schedule_timeout(msecs_to_jiffies(30));
}
pr_debug("%s(): task ending\n", __func__);
@@ -1460,7 +1460,7 @@
#ifdef CONFIG_FB_PXA_SMARTPANEL
if (fbi->lccr0 & LCCR0_LCDT) {
wait_for_completion_timeout(&fbi->refresh_done,
- 200 * HZ / 1000);
+ msecs_to_jiffies(200));
return;
}
#endif
@@ -1472,7 +1472,7 @@
lcd_writel(fbi, LCCR0, lccr0);
lcd_writel(fbi, LCCR0, lccr0 | LCCR0_DIS);
- wait_for_completion_timeout(&fbi->disable_done, 200 * HZ / 1000);
+ wait_for_completion_timeout(&fbi->disable_done, msecs_to_jiffies(200));
/* disable LCD controller clock */
clk_disable_unprepare(fbi->clk);
diff --git a/drivers/video/fbdev/sh_mobile_lcdcfb.c b/drivers/video/fbdev/sh_mobile_lcdcfb.c
index d3013cd..82c0a8c 100644
--- a/drivers/video/fbdev/sh_mobile_lcdcfb.c
+++ b/drivers/video/fbdev/sh_mobile_lcdcfb.c
@@ -1461,7 +1461,7 @@
unsigned int rop3;
char *endp;
- rop3 = !!simple_strtoul(buf, &endp, 10);
+ rop3 = simple_strtoul(buf, &endp, 10);
if (isspace(*endp))
endp++;
@@ -2605,7 +2605,6 @@
unsigned int max_size;
unsigned int i;
- mutex_init(&ch->open_lock);
ch->notify = sh_mobile_lcdc_display_notify;
/* Validate the format. */
@@ -2704,7 +2703,7 @@
struct resource *res;
int num_channels;
int error;
- int i;
+ int irq, i;
if (!pdata) {
dev_err(&pdev->dev, "no platform data defined\n");
@@ -2712,8 +2711,8 @@
}
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
- i = platform_get_irq(pdev, 0);
- if (!res || i < 0) {
+ irq = platform_get_irq(pdev, 0);
+ if (!res || irq < 0) {
dev_err(&pdev->dev, "cannot get platform resources\n");
return -ENOENT;
}
@@ -2726,16 +2725,18 @@
priv->dev = &pdev->dev;
priv->meram_dev = pdata->meram_dev;
+ for (i = 0; i < ARRAY_SIZE(priv->ch); i++)
+ mutex_init(&priv->ch[i].open_lock);
platform_set_drvdata(pdev, priv);
- error = request_irq(i, sh_mobile_lcdc_irq, 0,
+ error = request_irq(irq, sh_mobile_lcdc_irq, 0,
dev_name(&pdev->dev), priv);
if (error) {
dev_err(&pdev->dev, "unable to request irq\n");
goto err1;
}
- priv->irq = i;
+ priv->irq = irq;
atomic_set(&priv->hw_usecnt, -1);
for (i = 0, num_channels = 0; i < ARRAY_SIZE(pdata->ch); i++) {
diff --git a/drivers/video/fbdev/sm501fb.c b/drivers/video/fbdev/sm501fb.c
index e8d4121..d0a4e2f 100644
--- a/drivers/video/fbdev/sm501fb.c
+++ b/drivers/video/fbdev/sm501fb.c
@@ -1606,7 +1606,7 @@
info->fbmem_len = resource_size(res);
/* clear framebuffer memory - avoids garbage data on unused fb */
- memset(info->fbmem, 0, info->fbmem_len);
+ memset_io(info->fbmem, 0, info->fbmem_len);
/* clear palette ram - undefined at power on */
for (k = 0; k < (256 * 3); k++)
diff --git a/drivers/video/fbdev/via/via_clock.c b/drivers/video/fbdev/via/via_clock.c
index db1e392..bf269fa 100644
--- a/drivers/video/fbdev/via/via_clock.c
+++ b/drivers/video/fbdev/via/via_clock.c
@@ -30,7 +30,7 @@
#include "global.h"
#include "debug.h"
-const char *via_slap = "Please slap VIA Technologies to motivate them "
+static const char *via_slap = "Please slap VIA Technologies to motivate them "
"releasing full documentation for your platform!\n";
static inline u32 cle266_encode_pll(struct via_pll_config pll)
diff --git a/drivers/w1/masters/mxc_w1.c b/drivers/w1/masters/mxc_w1.c
index 53bf2c8..a462175 100644
--- a/drivers/w1/masters/mxc_w1.c
+++ b/drivers/w1/masters/mxc_w1.c
@@ -166,7 +166,7 @@
return 0;
}
-static struct of_device_id mxc_w1_dt_ids[] = {
+static const struct of_device_id mxc_w1_dt_ids[] = {
{ .compatible = "fsl,imx21-owire" },
{ /* sentinel */ }
};
diff --git a/drivers/w1/masters/omap_hdq.c b/drivers/w1/masters/omap_hdq.c
index 03321d6..e7d4489 100644
--- a/drivers/w1/masters/omap_hdq.c
+++ b/drivers/w1/masters/omap_hdq.c
@@ -72,7 +72,7 @@
static int omap_hdq_probe(struct platform_device *pdev);
static int omap_hdq_remove(struct platform_device *pdev);
-static struct of_device_id omap_hdq_dt_ids[] = {
+static const struct of_device_id omap_hdq_dt_ids[] = {
{ .compatible = "ti,omap3-1w" },
{}
};
diff --git a/drivers/w1/masters/w1-gpio.c b/drivers/w1/masters/w1-gpio.c
index b99a932..8f7848c 100644
--- a/drivers/w1/masters/w1-gpio.c
+++ b/drivers/w1/masters/w1-gpio.c
@@ -68,7 +68,7 @@
}
#if defined(CONFIG_OF)
-static struct of_device_id w1_gpio_dt_ids[] = {
+static const struct of_device_id w1_gpio_dt_ids[] = {
{ .compatible = "w1-gpio" },
{}
};
diff --git a/fs/9p/v9fs.h b/fs/9p/v9fs.h
index 099c771..fb9ffcb 100644
--- a/fs/9p/v9fs.h
+++ b/fs/9p/v9fs.h
@@ -78,7 +78,6 @@
* @cache: cache mode of type &p9_cache_modes
* @cachetag: the tag of the cache associated with this session
* @fscache: session cookie associated with FS-Cache
- * @options: copy of options string given by user
* @uname: string user name to mount hierarchy as
* @aname: mount specifier for remote hierarchy
* @maxdata: maximum data to be sent/recvd per protocol message
diff --git a/fs/9p/vfs_addr.c b/fs/9p/vfs_addr.c
index be35d05..e9e0437 100644
--- a/fs/9p/vfs_addr.c
+++ b/fs/9p/vfs_addr.c
@@ -231,9 +231,7 @@
/**
* v9fs_direct_IO - 9P address space operation for direct I/O
* @iocb: target I/O control block
- * @iov: array of vectors that define I/O buffer
* @pos: offset in file to begin the operation
- * @nr_segs: size of iovec array
*
* The presence of v9fs_direct_IO() in the address space ops vector
* allows open() O_DIRECT flags which would have failed otherwise.
diff --git a/fs/9p/vfs_file.c b/fs/9p/vfs_file.c
index 2a9dd37..1ef16bd 100644
--- a/fs/9p/vfs_file.c
+++ b/fs/9p/vfs_file.c
@@ -151,7 +151,7 @@
{
struct p9_flock flock;
struct p9_fid *fid;
- uint8_t status;
+ uint8_t status = P9_LOCK_ERROR;
int res = 0;
unsigned char fl_type;
@@ -196,7 +196,7 @@
for (;;) {
res = p9_client_lock_dotl(fid, &flock, &status);
if (res < 0)
- break;
+ goto out_unlock;
if (status != P9_LOCK_BLOCKED)
break;
@@ -214,14 +214,16 @@
case P9_LOCK_BLOCKED:
res = -EAGAIN;
break;
+ default:
+ WARN_ONCE(1, "unknown lock status code: %d\n", status);
+ /* fallthrough */
case P9_LOCK_ERROR:
case P9_LOCK_GRACE:
res = -ENOLCK;
break;
- default:
- BUG();
}
+out_unlock:
/*
* In case the server returned an error for the lock request, revert
* it locally
diff --git a/fs/Kconfig b/fs/Kconfig
index ec35851..011f433 100644
--- a/fs/Kconfig
+++ b/fs/Kconfig
@@ -32,6 +32,7 @@
source "fs/ocfs2/Kconfig"
source "fs/btrfs/Kconfig"
source "fs/nilfs2/Kconfig"
+source "fs/f2fs/Kconfig"
config FS_DAX
bool "Direct Access (DAX) support"
@@ -217,7 +218,6 @@
source "fs/sysv/Kconfig"
source "fs/ufs/Kconfig"
source "fs/exofs/Kconfig"
-source "fs/f2fs/Kconfig"
endif # MISC_FILESYSTEMS
diff --git a/fs/exec.c b/fs/exec.c
index 02bfd98..49a1c61 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -1275,6 +1275,53 @@
spin_unlock(&p->fs->lock);
}
+static void bprm_fill_uid(struct linux_binprm *bprm)
+{
+ struct inode *inode;
+ unsigned int mode;
+ kuid_t uid;
+ kgid_t gid;
+
+ /* clear any previous set[ug]id data from a previous binary */
+ bprm->cred->euid = current_euid();
+ bprm->cred->egid = current_egid();
+
+ if (bprm->file->f_path.mnt->mnt_flags & MNT_NOSUID)
+ return;
+
+ if (task_no_new_privs(current))
+ return;
+
+ inode = file_inode(bprm->file);
+ mode = READ_ONCE(inode->i_mode);
+ if (!(mode & (S_ISUID|S_ISGID)))
+ return;
+
+ /* Be careful if suid/sgid is set */
+ mutex_lock(&inode->i_mutex);
+
+ /* reload atomically mode/uid/gid now that lock held */
+ mode = inode->i_mode;
+ uid = inode->i_uid;
+ gid = inode->i_gid;
+ mutex_unlock(&inode->i_mutex);
+
+ /* We ignore suid/sgid if there are no mappings for them in the ns */
+ if (!kuid_has_mapping(bprm->cred->user_ns, uid) ||
+ !kgid_has_mapping(bprm->cred->user_ns, gid))
+ return;
+
+ if (mode & S_ISUID) {
+ bprm->per_clear |= PER_CLEAR_ON_SETID;
+ bprm->cred->euid = uid;
+ }
+
+ if ((mode & (S_ISGID | S_IXGRP)) == (S_ISGID | S_IXGRP)) {
+ bprm->per_clear |= PER_CLEAR_ON_SETID;
+ bprm->cred->egid = gid;
+ }
+}
+
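The refactored helper keeps the old semantics: euid is inherited only for S_ISUID, and egid only when both S_ISGID and the group-execute bit are set, since setgid without group execute marks a mandatory-locking candidate rather than a setgid executable. A userspace-style sketch of that last test (the helper name is illustrative, not part of the patch):

	#include <stdbool.h>
	#include <sys/stat.h>

	/* A mode with S_ISGID but without S_IXGRP is a mandatory-locking
	 * candidate, so the new credentials must not inherit the file's gid. */
	static bool inherits_egid(mode_t mode)
	{
		return (mode & (S_ISGID | S_IXGRP)) == (S_ISGID | S_IXGRP);
	}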
/*
* Fill the binprm structure from the inode.
* Check permissions, then read the first 128 (BINPRM_BUF_SIZE) bytes
@@ -1283,36 +1330,9 @@
*/
int prepare_binprm(struct linux_binprm *bprm)
{
- struct inode *inode = file_inode(bprm->file);
- umode_t mode = inode->i_mode;
int retval;
-
- /* clear any previous set[ug]id data from a previous binary */
- bprm->cred->euid = current_euid();
- bprm->cred->egid = current_egid();
-
- if (!(bprm->file->f_path.mnt->mnt_flags & MNT_NOSUID) &&
- !task_no_new_privs(current) &&
- kuid_has_mapping(bprm->cred->user_ns, inode->i_uid) &&
- kgid_has_mapping(bprm->cred->user_ns, inode->i_gid)) {
- /* Set-uid? */
- if (mode & S_ISUID) {
- bprm->per_clear |= PER_CLEAR_ON_SETID;
- bprm->cred->euid = inode->i_uid;
- }
-
- /* Set-gid? */
- /*
- * If setgid is set but no group execute bit then this
- * is a candidate for mandatory locking, not a setgid
- * executable.
- */
- if ((mode & (S_ISGID | S_IXGRP)) == (S_ISGID | S_IXGRP)) {
- bprm->per_clear |= PER_CLEAR_ON_SETID;
- bprm->cred->egid = inode->i_gid;
- }
- }
+ bprm_fill_uid(bprm);
/* fill in binprm security blob */
retval = security_bprm_set_creds(bprm);
diff --git a/fs/ext4/Kconfig b/fs/ext4/Kconfig
index efea5d5c..18228c2 100644
--- a/fs/ext4/Kconfig
+++ b/fs/ext4/Kconfig
@@ -64,6 +64,23 @@
If you are not using a security module that requires using
extended attributes for file security labels, say N.
+config EXT4_FS_ENCRYPTION
+ bool "Ext4 Encryption"
+ depends on EXT4_FS
+ select CRYPTO_AES
+ select CRYPTO_CBC
+ select CRYPTO_ECB
+ select CRYPTO_XTS
+ select CRYPTO_CTS
+ select CRYPTO_SHA256
+ select KEYS
+ select ENCRYPTED_KEYS
+ help
+ Enable encryption of ext4 files and directories. This
+ feature is similar to ecryptfs, but it is more memory
+ efficient since it avoids caching the encrypted and
+ decrypted pages in the page cache.
+
config EXT4_DEBUG
bool "EXT4 debugging support"
depends on EXT4_FS
diff --git a/fs/ext4/Makefile b/fs/ext4/Makefile
index 0310fec..75285ea 100644
--- a/fs/ext4/Makefile
+++ b/fs/ext4/Makefile
@@ -8,7 +8,9 @@
ioctl.o namei.o super.o symlink.o hash.o resize.o extents.o \
ext4_jbd2.o migrate.o mballoc.o block_validity.o move_extent.o \
mmp.o indirect.o extents_status.o xattr.o xattr_user.o \
- xattr_trusted.o inline.o
+ xattr_trusted.o inline.o readpage.o
ext4-$(CONFIG_EXT4_FS_POSIX_ACL) += acl.o
ext4-$(CONFIG_EXT4_FS_SECURITY) += xattr_security.o
+ext4-$(CONFIG_EXT4_FS_ENCRYPTION) += crypto_policy.o crypto.o \
+ crypto_key.o crypto_fname.o
diff --git a/fs/ext4/acl.c b/fs/ext4/acl.c
index d40c8db..69b1e73 100644
--- a/fs/ext4/acl.c
+++ b/fs/ext4/acl.c
@@ -4,11 +4,6 @@
* Copyright (C) 2001-2003 Andreas Gruenbacher, <agruen@suse.de>
*/
-#include <linux/init.h>
-#include <linux/sched.h>
-#include <linux/slab.h>
-#include <linux/capability.h>
-#include <linux/fs.h>
#include "ext4_jbd2.h"
#include "ext4.h"
#include "xattr.h"
diff --git a/fs/ext4/balloc.c b/fs/ext4/balloc.c
index 83a6f49..955bf49 100644
--- a/fs/ext4/balloc.c
+++ b/fs/ext4/balloc.c
@@ -14,7 +14,6 @@
#include <linux/time.h>
#include <linux/capability.h>
#include <linux/fs.h>
-#include <linux/jbd2.h>
#include <linux/quotaops.h>
#include <linux/buffer_head.h>
#include "ext4.h"
@@ -641,8 +640,6 @@
* fail EDQUOT for metadata, but we do account for it.
*/
if (!(*errp) && (flags & EXT4_MB_DELALLOC_RESERVED)) {
- spin_lock(&EXT4_I(inode)->i_block_reservation_lock);
- spin_unlock(&EXT4_I(inode)->i_block_reservation_lock);
dquot_alloc_block_nofail(inode,
EXT4_C2B(EXT4_SB(inode->i_sb), ar.len));
}
diff --git a/fs/ext4/bitmap.c b/fs/ext4/bitmap.c
index b610779..4a606afb 100644
--- a/fs/ext4/bitmap.c
+++ b/fs/ext4/bitmap.c
@@ -8,7 +8,6 @@
*/
#include <linux/buffer_head.h>
-#include <linux/jbd2.h>
#include "ext4.h"
unsigned int ext4_count_free(char *bitmap, unsigned int numchars)
diff --git a/fs/ext4/block_validity.c b/fs/ext4/block_validity.c
index 41eb9dc..3522340 100644
--- a/fs/ext4/block_validity.c
+++ b/fs/ext4/block_validity.c
@@ -16,7 +16,6 @@
#include <linux/swap.h>
#include <linux/pagemap.h>
#include <linux/blkdev.h>
-#include <linux/mutex.h>
#include <linux/slab.h>
#include "ext4.h"
diff --git a/fs/ext4/crypto.c b/fs/ext4/crypto.c
new file mode 100644
index 0000000..8ff1527
--- /dev/null
+++ b/fs/ext4/crypto.c
@@ -0,0 +1,558 @@
+/*
+ * linux/fs/ext4/crypto.c
+ *
+ * Copyright (C) 2015, Google, Inc.
+ *
+ * This contains encryption functions for ext4
+ *
+ * Written by Michael Halcrow, 2014.
+ *
+ * Filename encryption additions
+ * Uday Savagaonkar, 2014
+ * Encryption policy handling additions
+ * Ildar Muslukhov, 2014
+ *
+ * This has not yet undergone a rigorous security audit.
+ *
+ * The usage of AES-XTS should conform to recommendations in NIST
+ * Special Publication 800-38E and IEEE P1619/D16.
+ */
+
+#include <crypto/hash.h>
+#include <crypto/sha.h>
+#include <keys/user-type.h>
+#include <keys/encrypted-type.h>
+#include <linux/crypto.h>
+#include <linux/ecryptfs.h>
+#include <linux/gfp.h>
+#include <linux/kernel.h>
+#include <linux/key.h>
+#include <linux/list.h>
+#include <linux/mempool.h>
+#include <linux/module.h>
+#include <linux/mutex.h>
+#include <linux/random.h>
+#include <linux/scatterlist.h>
+#include <linux/spinlock_types.h>
+
+#include "ext4_extents.h"
+#include "xattr.h"
+
+/* Encryption added and removed here! (L: */
+
+static unsigned int num_prealloc_crypto_pages = 32;
+static unsigned int num_prealloc_crypto_ctxs = 128;
+
+module_param(num_prealloc_crypto_pages, uint, 0444);
+MODULE_PARM_DESC(num_prealloc_crypto_pages,
+ "Number of crypto pages to preallocate");
+module_param(num_prealloc_crypto_ctxs, uint, 0444);
+MODULE_PARM_DESC(num_prealloc_crypto_ctxs,
+ "Number of crypto contexts to preallocate");
+
+static mempool_t *ext4_bounce_page_pool;
+
+static LIST_HEAD(ext4_free_crypto_ctxs);
+static DEFINE_SPINLOCK(ext4_crypto_ctx_lock);
+
+/**
+ * ext4_release_crypto_ctx() - Releases an encryption context
+ * @ctx: The encryption context to release.
+ *
+ * If the encryption context was allocated from the pre-allocated pool, returns
+ * it to that pool. Else, frees it.
+ *
+ * If there's a bounce page in the context, this frees that.
+ */
+void ext4_release_crypto_ctx(struct ext4_crypto_ctx *ctx)
+{
+ unsigned long flags;
+
+ if (ctx->bounce_page) {
+ if (ctx->flags & EXT4_BOUNCE_PAGE_REQUIRES_FREE_ENCRYPT_FL)
+ __free_page(ctx->bounce_page);
+ else
+ mempool_free(ctx->bounce_page, ext4_bounce_page_pool);
+ ctx->bounce_page = NULL;
+ }
+ ctx->control_page = NULL;
+ if (ctx->flags & EXT4_CTX_REQUIRES_FREE_ENCRYPT_FL) {
+ if (ctx->tfm)
+ crypto_free_tfm(ctx->tfm);
+ kfree(ctx);
+ } else {
+ spin_lock_irqsave(&ext4_crypto_ctx_lock, flags);
+ list_add(&ctx->free_list, &ext4_free_crypto_ctxs);
+ spin_unlock_irqrestore(&ext4_crypto_ctx_lock, flags);
+ }
+}
+
+/**
+ * ext4_alloc_and_init_crypto_ctx() - Allocates and inits an encryption context
+ * @mask: The allocation mask.
+ *
+ * Return: An allocated and initialized encryption context on success. An error
+ * value or NULL otherwise.
+ */
+static struct ext4_crypto_ctx *ext4_alloc_and_init_crypto_ctx(gfp_t mask)
+{
+ struct ext4_crypto_ctx *ctx = kzalloc(sizeof(struct ext4_crypto_ctx),
+ mask);
+
+ if (!ctx)
+ return ERR_PTR(-ENOMEM);
+ return ctx;
+}
+
+/**
+ * ext4_get_crypto_ctx() - Gets an encryption context
+ * @inode: The inode for which we are doing the crypto
+ *
+ * Allocates and initializes an encryption context.
+ *
+ * Return: An allocated and initialized encryption context on success; error
+ * value or NULL otherwise.
+ */
+struct ext4_crypto_ctx *ext4_get_crypto_ctx(struct inode *inode)
+{
+ struct ext4_crypto_ctx *ctx = NULL;
+ int res = 0;
+ unsigned long flags;
+ struct ext4_encryption_key *key = &EXT4_I(inode)->i_encryption_key;
+
+ if (!ext4_read_workqueue)
+ ext4_init_crypto();
+
+ /*
+ * We first try getting the ctx from a free list because in
+ * the common case the ctx will have an allocated and
+ * initialized crypto tfm, so it's probably a worthwhile
+ * optimization. For the bounce page, we first try getting it
+ * from the kernel allocator because that's just about as fast
+ * as getting it from a list and because a cache of free pages
+ * should generally be a "last resort" option for a filesystem
+ * to be able to do its job.
+ */
+ spin_lock_irqsave(&ext4_crypto_ctx_lock, flags);
+ ctx = list_first_entry_or_null(&ext4_free_crypto_ctxs,
+ struct ext4_crypto_ctx, free_list);
+ if (ctx)
+ list_del(&ctx->free_list);
+ spin_unlock_irqrestore(&ext4_crypto_ctx_lock, flags);
+ if (!ctx) {
+ ctx = ext4_alloc_and_init_crypto_ctx(GFP_NOFS);
+ if (IS_ERR(ctx)) {
+ res = PTR_ERR(ctx);
+ goto out;
+ }
+ ctx->flags |= EXT4_CTX_REQUIRES_FREE_ENCRYPT_FL;
+ } else {
+ ctx->flags &= ~EXT4_CTX_REQUIRES_FREE_ENCRYPT_FL;
+ }
+
+ /* Allocate a new Crypto API context if we don't already have
+ * one or if it isn't the right mode. */
+ BUG_ON(key->mode == EXT4_ENCRYPTION_MODE_INVALID);
+ if (ctx->tfm && (ctx->mode != key->mode)) {
+ crypto_free_tfm(ctx->tfm);
+ ctx->tfm = NULL;
+ ctx->mode = EXT4_ENCRYPTION_MODE_INVALID;
+ }
+ if (!ctx->tfm) {
+ switch (key->mode) {
+ case EXT4_ENCRYPTION_MODE_AES_256_XTS:
+ ctx->tfm = crypto_ablkcipher_tfm(
+ crypto_alloc_ablkcipher("xts(aes)", 0, 0));
+ break;
+ case EXT4_ENCRYPTION_MODE_AES_256_GCM:
+ /* TODO(mhalcrow): AEAD w/ gcm(aes);
+ * crypto_aead_setauthsize() */
+ ctx->tfm = ERR_PTR(-ENOTSUPP);
+ break;
+ default:
+ BUG();
+ }
+ if (IS_ERR_OR_NULL(ctx->tfm)) {
+ res = PTR_ERR(ctx->tfm);
+ ctx->tfm = NULL;
+ goto out;
+ }
+ ctx->mode = key->mode;
+ }
+ BUG_ON(key->size != ext4_encryption_key_size(key->mode));
+
+ /* There shouldn't be a bounce page attached to the crypto
+ * context at this point. */
+ BUG_ON(ctx->bounce_page);
+
+out:
+ if (res) {
+ if (!IS_ERR_OR_NULL(ctx))
+ ext4_release_crypto_ctx(ctx);
+ ctx = ERR_PTR(res);
+ }
+ return ctx;
+}
+
+struct workqueue_struct *ext4_read_workqueue;
+static DEFINE_MUTEX(crypto_init);
+
+/**
+ * ext4_exit_crypto() - Shutdown the ext4 encryption system
+ */
+void ext4_exit_crypto(void)
+{
+ struct ext4_crypto_ctx *pos, *n;
+
+ list_for_each_entry_safe(pos, n, &ext4_free_crypto_ctxs, free_list) {
+ if (pos->bounce_page) {
+ if (pos->flags &
+ EXT4_BOUNCE_PAGE_REQUIRES_FREE_ENCRYPT_FL) {
+ __free_page(pos->bounce_page);
+ } else {
+ mempool_free(pos->bounce_page,
+ ext4_bounce_page_pool);
+ }
+ }
+ if (pos->tfm)
+ crypto_free_tfm(pos->tfm);
+ kfree(pos);
+ }
+ INIT_LIST_HEAD(&ext4_free_crypto_ctxs);
+ if (ext4_bounce_page_pool)
+ mempool_destroy(ext4_bounce_page_pool);
+ ext4_bounce_page_pool = NULL;
+ if (ext4_read_workqueue)
+ destroy_workqueue(ext4_read_workqueue);
+ ext4_read_workqueue = NULL;
+}
+
+/**
+ * ext4_init_crypto() - Set up for ext4 encryption.
+ *
+ * We only call this when we start accessing encrypted files, since it
+ * results in memory getting allocated that wouldn't otherwise be used.
+ *
+ * Return: Zero on success, non-zero otherwise.
+ */
+int ext4_init_crypto(void)
+{
+ int i, res;
+
+ mutex_lock(&crypto_init);
+ if (ext4_read_workqueue)
+ goto already_initialized;
+ ext4_read_workqueue = alloc_workqueue("ext4_crypto", WQ_HIGHPRI, 0);
+ if (!ext4_read_workqueue) {
+ res = -ENOMEM;
+ goto fail;
+ }
+
+ for (i = 0; i < num_prealloc_crypto_ctxs; i++) {
+ struct ext4_crypto_ctx *ctx;
+
+ ctx = ext4_alloc_and_init_crypto_ctx(GFP_KERNEL);
+ if (IS_ERR(ctx)) {
+ res = PTR_ERR(ctx);
+ goto fail;
+ }
+ list_add(&ctx->free_list, &ext4_free_crypto_ctxs);
+ }
+
+ ext4_bounce_page_pool =
+ mempool_create_page_pool(num_prealloc_crypto_pages, 0);
+ if (!ext4_bounce_page_pool) {
+ res = -ENOMEM;
+ goto fail;
+ }
+already_initialized:
+ mutex_unlock(&crypto_init);
+ return 0;
+fail:
+ ext4_exit_crypto();
+ mutex_unlock(&crypto_init);
+ return res;
+}
+
+void ext4_restore_control_page(struct page *data_page)
+{
+ struct ext4_crypto_ctx *ctx =
+ (struct ext4_crypto_ctx *)page_private(data_page);
+
+ set_page_private(data_page, (unsigned long)NULL);
+ ClearPagePrivate(data_page);
+ unlock_page(data_page);
+ ext4_release_crypto_ctx(ctx);
+}
+
+/**
+ * ext4_crypt_complete() - The completion callback for page encryption
+ * @req: The asynchronous encryption request context
+ * @res: The result of the encryption operation
+ */
+static void ext4_crypt_complete(struct crypto_async_request *req, int res)
+{
+ struct ext4_completion_result *ecr = req->data;
+
+ if (res == -EINPROGRESS)
+ return;
+ ecr->res = res;
+ complete(&ecr->completion);
+}
+
+typedef enum {
+ EXT4_DECRYPT = 0,
+ EXT4_ENCRYPT,
+} ext4_direction_t;
+
+static int ext4_page_crypto(struct ext4_crypto_ctx *ctx,
+ struct inode *inode,
+ ext4_direction_t rw,
+ pgoff_t index,
+ struct page *src_page,
+ struct page *dest_page)
+
+{
+ u8 xts_tweak[EXT4_XTS_TWEAK_SIZE];
+ struct ablkcipher_request *req = NULL;
+ DECLARE_EXT4_COMPLETION_RESULT(ecr);
+ struct scatterlist dst, src;
+ struct ext4_inode_info *ei = EXT4_I(inode);
+ struct crypto_ablkcipher *atfm = __crypto_ablkcipher_cast(ctx->tfm);
+ int res = 0;
+
+ BUG_ON(!ctx->tfm);
+ BUG_ON(ctx->mode != ei->i_encryption_key.mode);
+
+ if (ctx->mode != EXT4_ENCRYPTION_MODE_AES_256_XTS) {
+ printk_ratelimited(KERN_ERR
+ "%s: unsupported crypto algorithm: %d\n",
+ __func__, ctx->mode);
+ return -ENOTSUPP;
+ }
+
+ crypto_ablkcipher_clear_flags(atfm, ~0);
+ crypto_tfm_set_flags(ctx->tfm, CRYPTO_TFM_REQ_WEAK_KEY);
+
+ res = crypto_ablkcipher_setkey(atfm, ei->i_encryption_key.raw,
+ ei->i_encryption_key.size);
+ if (res) {
+ printk_ratelimited(KERN_ERR
+ "%s: crypto_ablkcipher_setkey() failed\n",
+ __func__);
+ return res;
+ }
+ req = ablkcipher_request_alloc(atfm, GFP_NOFS);
+ if (!req) {
+ printk_ratelimited(KERN_ERR
+ "%s: crypto_request_alloc() failed\n",
+ __func__);
+ return -ENOMEM;
+ }
+ ablkcipher_request_set_callback(
+ req, CRYPTO_TFM_REQ_MAY_BACKLOG | CRYPTO_TFM_REQ_MAY_SLEEP,
+ ext4_crypt_complete, &ecr);
+
+ BUILD_BUG_ON(EXT4_XTS_TWEAK_SIZE < sizeof(index));
+ memcpy(xts_tweak, &index, sizeof(index));
+ memset(&xts_tweak[sizeof(index)], 0,
+ EXT4_XTS_TWEAK_SIZE - sizeof(index));
+
+ sg_init_table(&dst, 1);
+ sg_set_page(&dst, dest_page, PAGE_CACHE_SIZE, 0);
+ sg_init_table(&src, 1);
+ sg_set_page(&src, src_page, PAGE_CACHE_SIZE, 0);
+ ablkcipher_request_set_crypt(req, &src, &dst, PAGE_CACHE_SIZE,
+ xts_tweak);
+ if (rw == EXT4_DECRYPT)
+ res = crypto_ablkcipher_decrypt(req);
+ else
+ res = crypto_ablkcipher_encrypt(req);
+ if (res == -EINPROGRESS || res == -EBUSY) {
+ BUG_ON(req->base.data != &ecr);
+ wait_for_completion(&ecr.completion);
+ res = ecr.res;
+ }
+ ablkcipher_request_free(req);
+ if (res) {
+ printk_ratelimited(
+ KERN_ERR
+ "%s: crypto_ablkcipher_encrypt() returned %d\n",
+ __func__, res);
+ return res;
+ }
+ return 0;
+}
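+
+/*
+ * Illustration of the tweak layout used above (assuming a 64-bit
+ * little-endian host, where sizeof(pgoff_t) == 8): for the page at index 5
+ * the 16-byte XTS tweak becomes
+ *
+ *   05 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
+ *
+ * i.e. the page index copied verbatim and zero-padded to
+ * EXT4_XTS_TWEAK_SIZE.
+ */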
+
+/**
+ * ext4_encrypt() - Encrypts a page
+ * @inode: The inode for which the encryption should take place
+ * @plaintext_page: The page to encrypt. Must be locked.
+ *
+ * Allocates a ciphertext page and encrypts plaintext_page into it using the ctx
+ * encryption context.
+ *
+ * Called on the page write path. The caller must call
+ * ext4_restore_control_page() on the returned ciphertext page to
+ * release the bounce buffer and the encryption context.
+ *
+ * Return: An allocated page with the encrypted content on success. Else, an
+ * error value or NULL.
+ */
+struct page *ext4_encrypt(struct inode *inode,
+ struct page *plaintext_page)
+{
+ struct ext4_crypto_ctx *ctx;
+ struct page *ciphertext_page = NULL;
+ int err;
+
+ BUG_ON(!PageLocked(plaintext_page));
+
+ ctx = ext4_get_crypto_ctx(inode);
+ if (IS_ERR(ctx))
+ return (struct page *) ctx;
+
+ /* The encryption operation will require a bounce page. */
+ ciphertext_page = alloc_page(GFP_NOFS);
+ if (!ciphertext_page) {
+ /* This is a potential bottleneck, but at least we'll have
+ * forward progress. */
+ ciphertext_page = mempool_alloc(ext4_bounce_page_pool,
+ GFP_NOFS);
+ if (WARN_ON_ONCE(!ciphertext_page)) {
+ ciphertext_page = mempool_alloc(ext4_bounce_page_pool,
+ GFP_NOFS | __GFP_WAIT);
+ }
+ ctx->flags &= ~EXT4_BOUNCE_PAGE_REQUIRES_FREE_ENCRYPT_FL;
+ } else {
+ ctx->flags |= EXT4_BOUNCE_PAGE_REQUIRES_FREE_ENCRYPT_FL;
+ }
+ ctx->bounce_page = ciphertext_page;
+ ctx->control_page = plaintext_page;
+ err = ext4_page_crypto(ctx, inode, EXT4_ENCRYPT, plaintext_page->index,
+ plaintext_page, ciphertext_page);
+ if (err) {
+ ext4_release_crypto_ctx(ctx);
+ return ERR_PTR(err);
+ }
+ SetPagePrivate(ciphertext_page);
+ set_page_private(ciphertext_page, (unsigned long)ctx);
+ lock_page(ciphertext_page);
+ return ciphertext_page;
+}
+
+/**
+ * ext4_decrypt() - Decrypts a page in-place
+ * @ctx: The encryption context.
+ * @page: The page to decrypt. Must be locked.
+ *
+ * Decrypts page in-place using the ctx encryption context.
+ *
+ * Called from the read completion callback.
+ *
+ * Return: Zero on success, non-zero otherwise.
+ */
+int ext4_decrypt(struct ext4_crypto_ctx *ctx, struct page *page)
+{
+ BUG_ON(!PageLocked(page));
+
+ return ext4_page_crypto(ctx, page->mapping->host,
+ EXT4_DECRYPT, page->index, page, page);
+}
+
+/*
+ * Convenience function which takes care of allocating and
+ * deallocating the encryption context
+ */
+int ext4_decrypt_one(struct inode *inode, struct page *page)
+{
+ int ret;
+
+ struct ext4_crypto_ctx *ctx = ext4_get_crypto_ctx(inode);
+
+ if (!ctx)
+ return -ENOMEM;
+ ret = ext4_decrypt(ctx, page);
+ ext4_release_crypto_ctx(ctx);
+ return ret;
+}
+
+int ext4_encrypted_zeroout(struct inode *inode, struct ext4_extent *ex)
+{
+ struct ext4_crypto_ctx *ctx;
+ struct page *ciphertext_page = NULL;
+ struct bio *bio;
+ ext4_lblk_t lblk = ex->ee_block;
+ ext4_fsblk_t pblk = ext4_ext_pblock(ex);
+ unsigned int len = ext4_ext_get_actual_len(ex);
+ int err = 0;
+
+ BUG_ON(inode->i_sb->s_blocksize != PAGE_CACHE_SIZE);
+
+ ctx = ext4_get_crypto_ctx(inode);
+ if (IS_ERR(ctx))
+ return PTR_ERR(ctx);
+
+ ciphertext_page = alloc_page(GFP_NOFS);
+ if (!ciphertext_page) {
+ /* This is a potential bottleneck, but at least we'll have
+ * forward progress. */
+ ciphertext_page = mempool_alloc(ext4_bounce_page_pool,
+ GFP_NOFS);
+ if (WARN_ON_ONCE(!ciphertext_page)) {
+ ciphertext_page = mempool_alloc(ext4_bounce_page_pool,
+ GFP_NOFS | __GFP_WAIT);
+ }
+ ctx->flags &= ~EXT4_BOUNCE_PAGE_REQUIRES_FREE_ENCRYPT_FL;
+ } else {
+ ctx->flags |= EXT4_BOUNCE_PAGE_REQUIRES_FREE_ENCRYPT_FL;
+ }
+ ctx->bounce_page = ciphertext_page;
+
+ while (len--) {
+ err = ext4_page_crypto(ctx, inode, EXT4_ENCRYPT, lblk,
+ ZERO_PAGE(0), ciphertext_page);
+ if (err)
+ goto errout;
+
+ bio = bio_alloc(GFP_KERNEL, 1);
+ if (!bio) {
+ err = -ENOMEM;
+ goto errout;
+ }
+ bio->bi_bdev = inode->i_sb->s_bdev;
+ bio->bi_iter.bi_sector = pblk;
+ err = bio_add_page(bio, ciphertext_page,
+ inode->i_sb->s_blocksize, 0);
+ if (err) {
+ bio_put(bio);
+ goto errout;
+ }
+ err = submit_bio_wait(WRITE, bio);
+ if (err)
+ goto errout;
+ }
+ err = 0;
+errout:
+ ext4_release_crypto_ctx(ctx);
+ return err;
+}
+
+bool ext4_valid_contents_enc_mode(uint32_t mode)
+{
+ return (mode == EXT4_ENCRYPTION_MODE_AES_256_XTS);
+}
+
+/**
+ * ext4_validate_encryption_key_size() - Validate the encryption key size
+ * @mode: The key mode.
+ * @size: The key size to validate.
+ *
+ * Return: The validated key size for @mode. Zero if invalid.
+ */
+uint32_t ext4_validate_encryption_key_size(uint32_t mode, uint32_t size)
+{
+ if (size == ext4_encryption_key_size(mode))
+ return size;
+ return 0;
+}
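+
+/*
+ * For reference: with the only supported contents mode, AES-256-XTS, the
+ * expected size is EXT4_AES_256_XTS_KEY_SIZE (64 bytes, i.e. two 256-bit
+ * AES keys), so any other size makes this helper return 0.
+ */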
diff --git a/fs/ext4/crypto_fname.c b/fs/ext4/crypto_fname.c
new file mode 100644
index 0000000..ca2f594
--- /dev/null
+++ b/fs/ext4/crypto_fname.c
@@ -0,0 +1,709 @@
+/*
+ * linux/fs/ext4/crypto_fname.c
+ *
+ * Copyright (C) 2015, Google, Inc.
+ *
+ * This contains functions for filename crypto management in ext4
+ *
+ * Written by Uday Savagaonkar, 2014.
+ *
+ * This has not yet undergone a rigorous security audit.
+ *
+ */
+
+#include <crypto/hash.h>
+#include <crypto/sha.h>
+#include <keys/encrypted-type.h>
+#include <keys/user-type.h>
+#include <linux/crypto.h>
+#include <linux/gfp.h>
+#include <linux/kernel.h>
+#include <linux/key.h>
+#include <linux/list.h>
+#include <linux/mempool.h>
+#include <linux/random.h>
+#include <linux/scatterlist.h>
+#include <linux/spinlock_types.h>
+
+#include "ext4.h"
+#include "ext4_crypto.h"
+#include "xattr.h"
+
+/**
+ * ext4_dir_crypt_complete() - Completion callback for filename crypto requests
+ */
+static void ext4_dir_crypt_complete(struct crypto_async_request *req, int res)
+{
+ struct ext4_completion_result *ecr = req->data;
+
+ if (res == -EINPROGRESS)
+ return;
+ ecr->res = res;
+ complete(&ecr->completion);
+}
+
+bool ext4_valid_filenames_enc_mode(uint32_t mode)
+{
+ return (mode == EXT4_ENCRYPTION_MODE_AES_256_CTS);
+}
+
+/**
+ * ext4_fname_encrypt() -
+ *
+ * This function encrypts the input filename, and returns the length of the
+ * ciphertext. Errors are returned as negative numbers. We trust the caller to
+ * allocate sufficient memory for the oname string.
+ */
+static int ext4_fname_encrypt(struct ext4_fname_crypto_ctx *ctx,
+ const struct qstr *iname,
+ struct ext4_str *oname)
+{
+ u32 ciphertext_len;
+ struct ablkcipher_request *req = NULL;
+ DECLARE_EXT4_COMPLETION_RESULT(ecr);
+ struct crypto_ablkcipher *tfm = ctx->ctfm;
+ int res = 0;
+ char iv[EXT4_CRYPTO_BLOCK_SIZE];
+ struct scatterlist sg[1];
+ char *workbuf;
+
+ if (iname->len <= 0 || iname->len > ctx->lim)
+ return -EIO;
+
+ ciphertext_len = (iname->len < EXT4_CRYPTO_BLOCK_SIZE) ?
+ EXT4_CRYPTO_BLOCK_SIZE : iname->len;
+ ciphertext_len = (ciphertext_len > ctx->lim)
+ ? ctx->lim : ciphertext_len;
+
+ /* Allocate request */
+ req = ablkcipher_request_alloc(tfm, GFP_NOFS);
+ if (!req) {
+ printk_ratelimited(
+ KERN_ERR "%s: crypto_request_alloc() failed\n", __func__);
+ return -ENOMEM;
+ }
+ ablkcipher_request_set_callback(req,
+ CRYPTO_TFM_REQ_MAY_BACKLOG | CRYPTO_TFM_REQ_MAY_SLEEP,
+ ext4_dir_crypt_complete, &ecr);
+
+ /* Map the workpage */
+ workbuf = kmap(ctx->workpage);
+
+ /* Copy the input */
+ memcpy(workbuf, iname->name, iname->len);
+ if (iname->len < ciphertext_len)
+ memset(workbuf + iname->len, 0, ciphertext_len - iname->len);
+
+ /* Initialize IV */
+ memset(iv, 0, EXT4_CRYPTO_BLOCK_SIZE);
+
+ /* Create encryption request */
+ sg_init_table(sg, 1);
+ sg_set_page(sg, ctx->workpage, PAGE_SIZE, 0);
+ ablkcipher_request_set_crypt(req, sg, sg, iname->len, iv);
+ res = crypto_ablkcipher_encrypt(req);
+ if (res == -EINPROGRESS || res == -EBUSY) {
+ BUG_ON(req->base.data != &ecr);
+ wait_for_completion(&ecr.completion);
+ res = ecr.res;
+ }
+ if (res >= 0) {
+ /* Copy the result to output */
+ memcpy(oname->name, workbuf, ciphertext_len);
+ res = ciphertext_len;
+ }
+ kunmap(ctx->workpage);
+ ablkcipher_request_free(req);
+ if (res < 0) {
+ printk_ratelimited(
+ KERN_ERR "%s: Error (error code %d)\n", __func__, res);
+ }
+ oname->len = ciphertext_len;
+ return res;
+}
+
+/*
+ * ext4_fname_decrypt()
+ * This function decrypts the input filename and returns the length of the
+ * plaintext. Errors are returned as negative numbers. We trust the caller to
+ * allocate sufficient memory for the oname string.
+ */
+static int ext4_fname_decrypt(struct ext4_fname_crypto_ctx *ctx,
+ const struct ext4_str *iname,
+ struct ext4_str *oname)
+{
+ struct ext4_str tmp_in[2], tmp_out[1];
+ struct ablkcipher_request *req = NULL;
+ DECLARE_EXT4_COMPLETION_RESULT(ecr);
+ struct scatterlist sg[1];
+ struct crypto_ablkcipher *tfm = ctx->ctfm;
+ int res = 0;
+ char iv[EXT4_CRYPTO_BLOCK_SIZE];
+ char *workbuf;
+
+ if (iname->len <= 0 || iname->len > ctx->lim)
+ return -EIO;
+
+ tmp_in[0].name = iname->name;
+ tmp_in[0].len = iname->len;
+ tmp_out[0].name = oname->name;
+
+ /* Allocate request */
+ req = ablkcipher_request_alloc(tfm, GFP_NOFS);
+ if (!req) {
+ printk_ratelimited(
+ KERN_ERR "%s: crypto_request_alloc() failed\n", __func__);
+ return -ENOMEM;
+ }
+ ablkcipher_request_set_callback(req,
+ CRYPTO_TFM_REQ_MAY_BACKLOG | CRYPTO_TFM_REQ_MAY_SLEEP,
+ ext4_dir_crypt_complete, &ecr);
+
+ /* Map the workpage */
+ workbuf = kmap(ctx->workpage);
+
+ /* Copy the input */
+ memcpy(workbuf, iname->name, iname->len);
+
+ /* Initialize IV */
+ memset(iv, 0, EXT4_CRYPTO_BLOCK_SIZE);
+
+ /* Create encryption request */
+ sg_init_table(sg, 1);
+ sg_set_page(sg, ctx->workpage, PAGE_SIZE, 0);
+ ablkcipher_request_set_crypt(req, sg, sg, iname->len, iv);
+ res = crypto_ablkcipher_decrypt(req);
+ if (res == -EINPROGRESS || res == -EBUSY) {
+ BUG_ON(req->base.data != &ecr);
+ wait_for_completion(&ecr.completion);
+ res = ecr.res;
+ }
+ if (res >= 0) {
+ /* Copy the result to output */
+ memcpy(oname->name, workbuf, iname->len);
+ res = iname->len;
+ }
+ kunmap(ctx->workpage);
+ ablkcipher_request_free(req);
+ if (res < 0) {
+ printk_ratelimited(
+ KERN_ERR "%s: Error in ext4_fname_encrypt (error code %d)\n",
+ __func__, res);
+ return res;
+ }
+
+ oname->len = strnlen(oname->name, iname->len);
+ return oname->len;
+}
+
+/**
+ * ext4_fname_encode_digest() -
+ *
+ * Encodes the input digest using characters from the set [a-zA-Z0-9_+].
+ * The encoded string is roughly 4/3 times the size of the input string.
+ */
+int ext4_fname_encode_digest(char *dst, char *src, u32 len)
+{
+ static const char *lookup_table =
+ "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789_+";
+ u32 current_chunk, num_chunks, i;
+ char tmp_buf[3];
+ u32 c0, c1, c2, c3;
+
+ current_chunk = 0;
+ num_chunks = len/3;
+ for (i = 0; i < num_chunks; i++) {
+ c0 = src[3*i] & 0x3f;
+ c1 = (((src[3*i]>>6)&0x3) | ((src[3*i+1] & 0xf)<<2)) & 0x3f;
+ c2 = (((src[3*i+1]>>4)&0xf) | ((src[3*i+2] & 0x3)<<4)) & 0x3f;
+ c3 = (src[3*i+2]>>2) & 0x3f;
+ dst[4*i] = lookup_table[c0];
+ dst[4*i+1] = lookup_table[c1];
+ dst[4*i+2] = lookup_table[c2];
+ dst[4*i+3] = lookup_table[c3];
+ }
+ if (i*3 < len) {
+ memset(tmp_buf, 0, 3);
+ memcpy(tmp_buf, &src[3*i], len-3*i);
+ c0 = tmp_buf[0] & 0x3f;
+ c1 = (((tmp_buf[0]>>6)&0x3) | ((tmp_buf[1] & 0xf)<<2)) & 0x3f;
+ c2 = (((tmp_buf[1]>>4)&0xf) | ((tmp_buf[2] & 0x3)<<4)) & 0x3f;
+ c3 = (tmp_buf[2]>>2) & 0x3f;
+ dst[4*i] = lookup_table[c0];
+ dst[4*i+1] = lookup_table[c1];
+ dst[4*i+2] = lookup_table[c2];
+ dst[4*i+3] = lookup_table[c3];
+ i++;
+ }
+ return (i * 4);
+}
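+
+/*
+ * Worked example of the encoding above (illustrative only): the input
+ * bytes {0x01, 0x02, 0x03} form one 3-byte chunk, giving
+ *
+ *   c0 = 0x01 -> 'b', c1 = 0x08 -> 'i', c2 = 0x30 -> 'W', c3 = 0x00 -> 'a'
+ *
+ * so the chunk encodes to "biWa". A 32-byte digest therefore encodes to
+ * 11 chunks, i.e. 44 output characters.
+ */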
+
+/**
+ * ext4_fname_hash() -
+ *
+ * This function computes the hash of the input filename, and sets the output
+ * buffer to the *encoded* digest. It returns the length of the digest as its
+ * return value. Errors are returned as negative numbers. We trust the caller
+ * to allocate sufficient memory for the oname string.
+ */
+static int ext4_fname_hash(struct ext4_fname_crypto_ctx *ctx,
+ const struct ext4_str *iname,
+ struct ext4_str *oname)
+{
+ struct scatterlist sg;
+ struct hash_desc desc = {
+ .tfm = (struct crypto_hash *)ctx->htfm,
+ .flags = CRYPTO_TFM_REQ_MAY_SLEEP
+ };
+ int res = 0;
+
+ if (iname->len <= EXT4_FNAME_CRYPTO_DIGEST_SIZE) {
+ res = ext4_fname_encode_digest(oname->name, iname->name,
+ iname->len);
+ oname->len = res;
+ return res;
+ }
+
+ sg_init_one(&sg, iname->name, iname->len);
+ res = crypto_hash_init(&desc);
+ if (res) {
+ printk(KERN_ERR
+ "%s: Error initializing crypto hash; res = [%d]\n",
+ __func__, res);
+ goto out;
+ }
+ res = crypto_hash_update(&desc, &sg, iname->len);
+ if (res) {
+ printk(KERN_ERR
+ "%s: Error updating crypto hash; res = [%d]\n",
+ __func__, res);
+ goto out;
+ }
+ res = crypto_hash_final(&desc,
+ &oname->name[EXT4_FNAME_CRYPTO_DIGEST_SIZE]);
+ if (res) {
+ printk(KERN_ERR
+ "%s: Error finalizing crypto hash; res = [%d]\n",
+ __func__, res);
+ goto out;
+ }
+ /* Encode the digest as a printable string--this will increase the
+ * size of the digest */
+ oname->name[0] = 'I';
+ res = ext4_fname_encode_digest(oname->name+1,
+ &oname->name[EXT4_FNAME_CRYPTO_DIGEST_SIZE],
+ EXT4_FNAME_CRYPTO_DIGEST_SIZE) + 1;
+ oname->len = res;
+out:
+ return res;
+}
+
+/**
+ * ext4_free_fname_crypto_ctx() -
+ *
+ * Frees up a crypto context.
+ */
+void ext4_free_fname_crypto_ctx(struct ext4_fname_crypto_ctx *ctx)
+{
+ if (ctx == NULL || IS_ERR(ctx))
+ return;
+
+ if (ctx->ctfm && !IS_ERR(ctx->ctfm))
+ crypto_free_ablkcipher(ctx->ctfm);
+ if (ctx->htfm && !IS_ERR(ctx->htfm))
+ crypto_free_hash(ctx->htfm);
+ if (ctx->workpage && !IS_ERR(ctx->workpage))
+ __free_page(ctx->workpage);
+ kfree(ctx);
+}
+
+/**
+ * ext4_put_fname_crypto_ctx() - Release a filename crypto context
+ *
+ * Returns the crypto context to the free list. If the free list is above a
+ * threshold, completely frees up the context and returns the memory.
+ *
+ * TODO: Currently we free the crypto context directly. Eventually we should
+ * add code to return it to the free list. Such an approach would increase
+ * the efficiency of directory lookups.
+ */
+void ext4_put_fname_crypto_ctx(struct ext4_fname_crypto_ctx **ctx)
+{
+ if (*ctx == NULL || IS_ERR(*ctx))
+ return;
+ ext4_free_fname_crypto_ctx(*ctx);
+ *ctx = NULL;
+}
+
+/**
+ * ext4_search_fname_crypto_ctx() - Look up an existing crypto context for @key (currently a stub that always returns NULL)
+ */
+static struct ext4_fname_crypto_ctx *ext4_search_fname_crypto_ctx(
+ const struct ext4_encryption_key *key)
+{
+ return NULL;
+}
+
+/**
+ * ext4_alloc_fname_crypto_ctx() - Allocate a new filename crypto context for @key
+ */
+struct ext4_fname_crypto_ctx *ext4_alloc_fname_crypto_ctx(
+ const struct ext4_encryption_key *key)
+{
+ struct ext4_fname_crypto_ctx *ctx;
+
+ ctx = kmalloc(sizeof(struct ext4_fname_crypto_ctx), GFP_NOFS);
+ if (ctx == NULL)
+ return ERR_PTR(-ENOMEM);
+ if (key->mode == EXT4_ENCRYPTION_MODE_INVALID) {
+ /* This also sets the key mode to invalid, since the
+ * enum value for EXT4_ENCRYPTION_MODE_INVALID is zero */
+ memset(&ctx->key, 0, sizeof(ctx->key));
+ } else {
+ memcpy(&ctx->key, key, sizeof(struct ext4_encryption_key));
+ }
+ ctx->has_valid_key = (EXT4_ENCRYPTION_MODE_INVALID == key->mode)
+ ? 0 : 1;
+ ctx->ctfm_key_is_ready = 0;
+ ctx->ctfm = NULL;
+ ctx->htfm = NULL;
+ ctx->workpage = NULL;
+ return ctx;
+}
+
+/**
+ * ext4_get_fname_crypto_ctx() -
+ *
+ * Allocates a free crypto context and initializes it to hold
+ * the crypto material for the inode.
+ *
+ * Return: NULL if not encrypted. Error value on error. Valid pointer otherwise.
+ */
+struct ext4_fname_crypto_ctx *ext4_get_fname_crypto_ctx(
+ struct inode *inode, u32 max_ciphertext_len)
+{
+ struct ext4_fname_crypto_ctx *ctx;
+ struct ext4_inode_info *ei = EXT4_I(inode);
+ int res;
+
+ /* Check if the crypto policy is set on the inode */
+ res = ext4_encrypted_inode(inode);
+ if (res == 0)
+ return NULL;
+
+ if (!ext4_has_encryption_key(inode))
+ ext4_generate_encryption_key(inode);
+
+ /* Get a crypto context based on the key.
+ * A new context is allocated if no context matches the requested key.
+ */
+ ctx = ext4_search_fname_crypto_ctx(&(ei->i_encryption_key));
+ if (ctx == NULL)
+ ctx = ext4_alloc_fname_crypto_ctx(&(ei->i_encryption_key));
+ if (IS_ERR(ctx))
+ return ctx;
+
+ if (ctx->has_valid_key) {
+ if (ctx->key.mode != EXT4_ENCRYPTION_MODE_AES_256_CTS) {
+ printk_once(KERN_WARNING
+ "ext4: unsupported key mode %d\n",
+ ctx->key.mode);
+ return ERR_PTR(-ENOKEY);
+ }
+
+ /* As a first cut, we allocate a new tfm on every call.
+ * Later, we will keep the tfm around in case the key gets
+ * re-used */
+ if (ctx->ctfm == NULL) {
+ ctx->ctfm = crypto_alloc_ablkcipher("cts(cbc(aes))",
+ 0, 0);
+ }
+ if (IS_ERR(ctx->ctfm)) {
+ res = PTR_ERR(ctx->ctfm);
+ printk(
+ KERN_DEBUG "%s: error (%d) allocating crypto tfm\n",
+ __func__, res);
+ ctx->ctfm = NULL;
+ ext4_put_fname_crypto_ctx(&ctx);
+ return ERR_PTR(res);
+ }
+ if (ctx->ctfm == NULL) {
+ printk(
+ KERN_DEBUG "%s: could not allocate crypto tfm\n",
+ __func__);
+ ext4_put_fname_crypto_ctx(&ctx);
+ return ERR_PTR(-ENOMEM);
+ }
+ if (ctx->workpage == NULL)
+ ctx->workpage = alloc_page(GFP_NOFS);
+ if (IS_ERR(ctx->workpage)) {
+ res = PTR_ERR(ctx->workpage);
+ printk(
+ KERN_DEBUG "%s: error (%d) allocating work page\n",
+ __func__, res);
+ ctx->workpage = NULL;
+ ext4_put_fname_crypto_ctx(&ctx);
+ return ERR_PTR(res);
+ }
+ if (ctx->workpage == NULL) {
+ printk(
+ KERN_DEBUG "%s: could not allocate work page\n",
+ __func__);
+ ext4_put_fname_crypto_ctx(&ctx);
+ return ERR_PTR(-ENOMEM);
+ }
+ ctx->lim = max_ciphertext_len;
+ crypto_ablkcipher_clear_flags(ctx->ctfm, ~0);
+ crypto_tfm_set_flags(crypto_ablkcipher_tfm(ctx->ctfm),
+ CRYPTO_TFM_REQ_WEAK_KEY);
+
+ /* If we are lucky, we will get a context that is already
+ * set up with the right key. Else, we will have to
+ * set the key */
+ if (!ctx->ctfm_key_is_ready) {
+ /* Since our crypto objectives for filename encryption
+ * are pretty weak,
+ * we directly use the inode master key */
+ res = crypto_ablkcipher_setkey(ctx->ctfm,
+ ctx->key.raw, ctx->key.size);
+ if (res) {
+ ext4_put_fname_crypto_ctx(&ctx);
+ return ERR_PTR(-EIO);
+ }
+ ctx->ctfm_key_is_ready = 1;
+ } else {
+ /* In the current implementation, the key should never be
+ * marked "ready" for a context that has just been
+ * allocated. So we should never reach here */
+ BUG();
+ }
+ }
+ if (ctx->htfm == NULL)
+ ctx->htfm = crypto_alloc_hash("sha256", 0, CRYPTO_ALG_ASYNC);
+ if (IS_ERR(ctx->htfm)) {
+ res = PTR_ERR(ctx->htfm);
+ printk(KERN_DEBUG "%s: error (%d) allocating hash tfm\n",
+ __func__, res);
+ ctx->htfm = NULL;
+ ext4_put_fname_crypto_ctx(&ctx);
+ return ERR_PTR(res);
+ }
+ if (ctx->htfm == NULL) {
+ printk(KERN_DEBUG "%s: could not allocate hash tfm\n",
+ __func__);
+ ext4_put_fname_crypto_ctx(&ctx);
+ return ERR_PTR(-ENOMEM);
+ }
+
+ return ctx;
+}
+
+/**
+ * ext4_fname_crypto_round_up() -
+ *
+ * Return: @size rounded up to the next multiple of @blksize
+ */
+u32 ext4_fname_crypto_round_up(u32 size, u32 blksize)
+{
+ return ((size+blksize-1)/blksize)*blksize;
+}
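+
+/*
+ * For example, with EXT4_CRYPTO_BLOCK_SIZE == 16:
+ * ext4_fname_crypto_round_up(1, 16) == 16,
+ * ext4_fname_crypto_round_up(16, 16) == 16 and
+ * ext4_fname_crypto_round_up(17, 16) == 32.
+ */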
+
+/**
+ * ext4_fname_crypto_namelen_on_disk() - Compute the on-disk length of a name of @namelen bytes
+ */
+int ext4_fname_crypto_namelen_on_disk(struct ext4_fname_crypto_ctx *ctx,
+ u32 namelen)
+{
+ u32 ciphertext_len;
+
+ if (ctx == NULL)
+ return -EIO;
+ if (!(ctx->has_valid_key))
+ return -EACCES;
+ ciphertext_len = (namelen < EXT4_CRYPTO_BLOCK_SIZE) ?
+ EXT4_CRYPTO_BLOCK_SIZE : namelen;
+ ciphertext_len = (ciphertext_len > ctx->lim)
+ ? ctx->lim : ciphertext_len;
+ return (int) ciphertext_len;
+}
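+
+/*
+ * For instance, a 5-byte name (with a sufficiently large ctx->lim) reports
+ * EXT4_CRYPTO_BLOCK_SIZE (16) bytes on disk, since short names are padded
+ * up to one crypto block before encryption.
+ */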
+
+/**
+ * ext4_fname_crypto_alloc_buffer() -
+ *
+ * Allocates an output buffer that is sufficient for the crypto operation
+ * specified by the context and the direction.
+ */
+int ext4_fname_crypto_alloc_buffer(struct ext4_fname_crypto_ctx *ctx,
+ u32 ilen, struct ext4_str *crypto_str)
+{
+ unsigned int olen;
+
+ if (!ctx)
+ return -EIO;
+ olen = ext4_fname_crypto_round_up(ilen, EXT4_CRYPTO_BLOCK_SIZE);
+ crypto_str->len = olen;
+ if (olen < EXT4_FNAME_CRYPTO_DIGEST_SIZE*2)
+ olen = EXT4_FNAME_CRYPTO_DIGEST_SIZE*2;
+ /* The allocated buffer can hold one more character, to null-terminate
+ * the string */
+ crypto_str->name = kmalloc(olen+1, GFP_NOFS);
+ if (!(crypto_str->name))
+ return -ENOMEM;
+ return 0;
+}
+
+/**
+ * ext4_fname_crypto_free_buffer() -
+ *
+ * Frees the buffer allocated for crypto operation.
+ */
+void ext4_fname_crypto_free_buffer(struct ext4_str *crypto_str)
+{
+ if (!crypto_str)
+ return;
+ kfree(crypto_str->name);
+ crypto_str->name = NULL;
+}
+
+/**
+ * ext4_fname_disk_to_usr() - converts a filename from disk space to user space
+ */
+int _ext4_fname_disk_to_usr(struct ext4_fname_crypto_ctx *ctx,
+ const struct ext4_str *iname,
+ struct ext4_str *oname)
+{
+ if (ctx == NULL)
+ return -EIO;
+ if (iname->len < 3) {
+ /* Check for . and .. */
+ if (iname->name[0] == '.' && iname->name[iname->len-1] == '.') {
+ oname->name[0] = '.';
+ oname->name[iname->len-1] = '.';
+ oname->len = iname->len;
+ return oname->len;
+ }
+ }
+ if (ctx->has_valid_key)
+ return ext4_fname_decrypt(ctx, iname, oname);
+ else
+ return ext4_fname_hash(ctx, iname, oname);
+}
+
+int ext4_fname_disk_to_usr(struct ext4_fname_crypto_ctx *ctx,
+ const struct ext4_dir_entry_2 *de,
+ struct ext4_str *oname)
+{
+ struct ext4_str iname = {.name = (unsigned char *) de->name,
+ .len = de->name_len };
+
+ return _ext4_fname_disk_to_usr(ctx, &iname, oname);
+}
+
+
+/**
+ * ext4_fname_usr_to_disk() - converts a filename from user space to disk space
+ */
+int ext4_fname_usr_to_disk(struct ext4_fname_crypto_ctx *ctx,
+ const struct qstr *iname,
+ struct ext4_str *oname)
+{
+ int res;
+
+ if (ctx == NULL)
+ return -EIO;
+ if (iname->len < 3) {
+ /* Check for . and .. */
+ if (iname->name[0] == '.' &&
+ iname->name[iname->len-1] == '.') {
+ oname->name[0] = '.';
+ oname->name[iname->len-1] = '.';
+ oname->len = iname->len;
+ return oname->len;
+ }
+ }
+ if (ctx->has_valid_key) {
+ res = ext4_fname_encrypt(ctx, iname, oname);
+ return res;
+ }
+ /* Without a proper key, a user is not allowed to modify the filenames
+ * in a directory. Consequently, a user space name cannot be mapped to
+ * a disk-space name */
+ return -EACCES;
+}
+
+/*
+ * Calculate the htree hash from a filename from user space
+ */
+int ext4_fname_usr_to_hash(struct ext4_fname_crypto_ctx *ctx,
+ const struct qstr *iname,
+ struct dx_hash_info *hinfo)
+{
+ struct ext4_str tmp, tmp2;
+ int ret = 0;
+
+ if (!ctx || !ctx->has_valid_key ||
+ ((iname->name[0] == '.') &&
+ ((iname->len == 1) ||
+ ((iname->name[1] == '.') && (iname->len == 2))))) {
+ ext4fs_dirhash(iname->name, iname->len, hinfo);
+ return 0;
+ }
+
+ /* First encrypt the plaintext name */
+ ret = ext4_fname_crypto_alloc_buffer(ctx, iname->len, &tmp);
+ if (ret < 0)
+ return ret;
+
+ ret = ext4_fname_encrypt(ctx, iname, &tmp);
+ if (ret < 0)
+ goto out;
+
+ tmp2.len = (4 * ((EXT4_FNAME_CRYPTO_DIGEST_SIZE + 2) / 3)) + 1;
+ tmp2.name = kmalloc(tmp2.len + 1, GFP_KERNEL);
+ if (tmp2.name == NULL) {
+ ret = -ENOMEM;
+ goto out;
+ }
+
+ ret = ext4_fname_hash(ctx, &tmp, &tmp2);
+ if (ret > 0)
+ ext4fs_dirhash(tmp2.name, tmp2.len, hinfo);
+ ext4_fname_crypto_free_buffer(&tmp2);
+out:
+ ext4_fname_crypto_free_buffer(&tmp);
+ return ret;
+}
+
+/**
+ * ext4_fname_disk_to_hash() - computes the htree hash from a filename on disk
+ */
+int ext4_fname_disk_to_hash(struct ext4_fname_crypto_ctx *ctx,
+ const struct ext4_dir_entry_2 *de,
+ struct dx_hash_info *hinfo)
+{
+ struct ext4_str iname = {.name = (unsigned char *) de->name,
+ .len = de->name_len};
+ struct ext4_str tmp;
+ int ret;
+
+ if (!ctx ||
+ ((iname.name[0] == '.') &&
+ ((iname.len == 1) ||
+ ((iname.name[1] == '.') && (iname.len == 2))))) {
+ ext4fs_dirhash(iname.name, iname.len, hinfo);
+ return 0;
+ }
+
+ tmp.len = (4 * ((EXT4_FNAME_CRYPTO_DIGEST_SIZE + 2) / 3)) + 1;
+ tmp.name = kmalloc(tmp.len + 1, GFP_KERNEL);
+ if (tmp.name == NULL)
+ return -ENOMEM;
+
+ ret = ext4_fname_hash(ctx, &iname, &tmp);
+ if (ret > 0)
+ ext4fs_dirhash(tmp.name, tmp.len, hinfo);
+ ext4_fname_crypto_free_buffer(&tmp);
+ return ret;
+}
diff --git a/fs/ext4/crypto_key.c b/fs/ext4/crypto_key.c
new file mode 100644
index 0000000..c8392af
--- /dev/null
+++ b/fs/ext4/crypto_key.c
@@ -0,0 +1,165 @@
+/*
+ * linux/fs/ext4/crypto_key.c
+ *
+ * Copyright (C) 2015, Google, Inc.
+ *
+ * This contains encryption key functions for ext4
+ *
+ * Written by Michael Halcrow, Ildar Muslukhov, and Uday Savagaonkar, 2015.
+ */
+
+#include <keys/encrypted-type.h>
+#include <keys/user-type.h>
+#include <linux/random.h>
+#include <linux/scatterlist.h>
+#include <uapi/linux/keyctl.h>
+
+#include "ext4.h"
+#include "xattr.h"
+
+static void derive_crypt_complete(struct crypto_async_request *req, int rc)
+{
+ struct ext4_completion_result *ecr = req->data;
+
+ if (rc == -EINPROGRESS)
+ return;
+
+ ecr->res = rc;
+ complete(&ecr->completion);
+}
+
+/**
+ * ext4_derive_key_aes() - Derive a key using AES-128-ECB
+ * @deriving_key: Encryption key used for derivation.
+ * @source_key: Source key to which to apply derivation.
+ * @derived_key: Derived key.
+ *
+ * Return: Zero on success; non-zero otherwise.
+ */
+static int ext4_derive_key_aes(char deriving_key[EXT4_AES_128_ECB_KEY_SIZE],
+ char source_key[EXT4_AES_256_XTS_KEY_SIZE],
+ char derived_key[EXT4_AES_256_XTS_KEY_SIZE])
+{
+ int res = 0;
+ struct ablkcipher_request *req = NULL;
+ DECLARE_EXT4_COMPLETION_RESULT(ecr);
+ struct scatterlist src_sg, dst_sg;
+ struct crypto_ablkcipher *tfm = crypto_alloc_ablkcipher("ecb(aes)", 0,
+ 0);
+
+ if (IS_ERR(tfm)) {
+ res = PTR_ERR(tfm);
+ tfm = NULL;
+ goto out;
+ }
+ crypto_ablkcipher_set_flags(tfm, CRYPTO_TFM_REQ_WEAK_KEY);
+ req = ablkcipher_request_alloc(tfm, GFP_NOFS);
+ if (!req) {
+ res = -ENOMEM;
+ goto out;
+ }
+ ablkcipher_request_set_callback(req,
+ CRYPTO_TFM_REQ_MAY_BACKLOG | CRYPTO_TFM_REQ_MAY_SLEEP,
+ derive_crypt_complete, &ecr);
+ res = crypto_ablkcipher_setkey(tfm, deriving_key,
+ EXT4_AES_128_ECB_KEY_SIZE);
+ if (res < 0)
+ goto out;
+ sg_init_one(&src_sg, source_key, EXT4_AES_256_XTS_KEY_SIZE);
+ sg_init_one(&dst_sg, derived_key, EXT4_AES_256_XTS_KEY_SIZE);
+ ablkcipher_request_set_crypt(req, &src_sg, &dst_sg,
+ EXT4_AES_256_XTS_KEY_SIZE, NULL);
+ res = crypto_ablkcipher_encrypt(req);
+ if (res == -EINPROGRESS || res == -EBUSY) {
+ BUG_ON(req->base.data != &ecr);
+ wait_for_completion(&ecr.completion);
+ res = ecr.res;
+ }
+
+out:
+ if (req)
+ ablkcipher_request_free(req);
+ if (tfm)
+ crypto_free_ablkcipher(tfm);
+ return res;
+}
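+
+/*
+ * Put differently (purely descriptive): the per-inode key is obtained by
+ * ECB-encrypting the 64-byte master key with the 16-byte per-inode nonce
+ * acting as an AES-128 key. The nonce comes from the on-disk encryption
+ * context; see ext4_generate_encryption_key() below.
+ */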
+
+/**
+ * ext4_generate_encryption_key() - generates an encryption key
+ * @inode: The inode to generate the encryption key for.
+ */
+int ext4_generate_encryption_key(struct inode *inode)
+{
+ struct ext4_inode_info *ei = EXT4_I(inode);
+ struct ext4_encryption_key *crypt_key = &ei->i_encryption_key;
+ char full_key_descriptor[EXT4_KEY_DESC_PREFIX_SIZE +
+ (EXT4_KEY_DESCRIPTOR_SIZE * 2) + 1];
+ struct key *keyring_key = NULL;
+ struct ext4_encryption_key *master_key;
+ struct ext4_encryption_context ctx;
+ struct user_key_payload *ukp;
+ struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
+ int res = ext4_xattr_get(inode, EXT4_XATTR_INDEX_ENCRYPTION,
+ EXT4_XATTR_NAME_ENCRYPTION_CONTEXT,
+ &ctx, sizeof(ctx));
+
+ if (res != sizeof(ctx)) {
+ if (res > 0)
+ res = -EINVAL;
+ goto out;
+ }
+ res = 0;
+
+ if (S_ISREG(inode->i_mode))
+ crypt_key->mode = ctx.contents_encryption_mode;
+ else if (S_ISDIR(inode->i_mode) || S_ISLNK(inode->i_mode))
+ crypt_key->mode = ctx.filenames_encryption_mode;
+ else {
+ printk(KERN_ERR "ext4 crypto: Unsupported inode type.\n");
+ BUG();
+ }
+ crypt_key->size = ext4_encryption_key_size(crypt_key->mode);
+ BUG_ON(!crypt_key->size);
+ if (DUMMY_ENCRYPTION_ENABLED(sbi)) {
+ memset(crypt_key->raw, 0x42, EXT4_AES_256_XTS_KEY_SIZE);
+ goto out;
+ }
+ memcpy(full_key_descriptor, EXT4_KEY_DESC_PREFIX,
+ EXT4_KEY_DESC_PREFIX_SIZE);
+ sprintf(full_key_descriptor + EXT4_KEY_DESC_PREFIX_SIZE,
+ "%*phN", EXT4_KEY_DESCRIPTOR_SIZE,
+ ctx.master_key_descriptor);
+ full_key_descriptor[EXT4_KEY_DESC_PREFIX_SIZE +
+ (2 * EXT4_KEY_DESCRIPTOR_SIZE)] = '\0';
+ keyring_key = request_key(&key_type_logon, full_key_descriptor, NULL);
+ if (IS_ERR(keyring_key)) {
+ res = PTR_ERR(keyring_key);
+ keyring_key = NULL;
+ goto out;
+ }
+ BUG_ON(keyring_key->type != &key_type_logon);
+ ukp = ((struct user_key_payload *)keyring_key->payload.data);
+ if (ukp->datalen != sizeof(struct ext4_encryption_key)) {
+ res = -EINVAL;
+ goto out;
+ }
+ master_key = (struct ext4_encryption_key *)ukp->data;
+ BUILD_BUG_ON(EXT4_AES_128_ECB_KEY_SIZE !=
+ EXT4_KEY_DERIVATION_NONCE_SIZE);
+ BUG_ON(master_key->size != EXT4_AES_256_XTS_KEY_SIZE);
+ res = ext4_derive_key_aes(ctx.nonce, master_key->raw, crypt_key->raw);
+out:
+ if (keyring_key)
+ key_put(keyring_key);
+ if (res < 0)
+ crypt_key->mode = EXT4_ENCRYPTION_MODE_INVALID;
+ return res;
+}
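+
+/*
+ * The keyring lookup above uses a "logon" key whose description is the
+ * "ext4:" prefix followed by the 8-byte master key descriptor rendered as
+ * 16 hex characters, e.g. "ext4:0123456789abcdef" (a hypothetical value).
+ */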
+
+int ext4_has_encryption_key(struct inode *inode)
+{
+ struct ext4_inode_info *ei = EXT4_I(inode);
+ struct ext4_encryption_key *crypt_key = &ei->i_encryption_key;
+
+ return (crypt_key->mode != EXT4_ENCRYPTION_MODE_INVALID);
+}
diff --git a/fs/ext4/crypto_policy.c b/fs/ext4/crypto_policy.c
new file mode 100644
index 0000000..30eaf9e
--- /dev/null
+++ b/fs/ext4/crypto_policy.c
@@ -0,0 +1,194 @@
+/*
+ * linux/fs/ext4/crypto_policy.c
+ *
+ * Copyright (C) 2015, Google, Inc.
+ *
+ * This contains encryption policy functions for ext4
+ *
+ * Written by Michael Halcrow, 2015.
+ */
+
+#include <linux/random.h>
+#include <linux/string.h>
+#include <linux/types.h>
+
+#include "ext4.h"
+#include "xattr.h"
+
+static int ext4_inode_has_encryption_context(struct inode *inode)
+{
+ int res = ext4_xattr_get(inode, EXT4_XATTR_INDEX_ENCRYPTION,
+ EXT4_XATTR_NAME_ENCRYPTION_CONTEXT, NULL, 0);
+ return (res > 0);
+}
+
+/*
+ * check whether the policy is consistent with the encryption context
+ * for the inode
+ */
+static int ext4_is_encryption_context_consistent_with_policy(
+ struct inode *inode, const struct ext4_encryption_policy *policy)
+{
+ struct ext4_encryption_context ctx;
+ int res = ext4_xattr_get(inode, EXT4_XATTR_INDEX_ENCRYPTION,
+ EXT4_XATTR_NAME_ENCRYPTION_CONTEXT, &ctx,
+ sizeof(ctx));
+ if (res != sizeof(ctx))
+ return 0;
+ return (memcmp(ctx.master_key_descriptor, policy->master_key_descriptor,
+ EXT4_KEY_DESCRIPTOR_SIZE) == 0 &&
+ (ctx.contents_encryption_mode ==
+ policy->contents_encryption_mode) &&
+ (ctx.filenames_encryption_mode ==
+ policy->filenames_encryption_mode));
+}
+
+static int ext4_create_encryption_context_from_policy(
+ struct inode *inode, const struct ext4_encryption_policy *policy)
+{
+ struct ext4_encryption_context ctx;
+ int res = 0;
+
+ ctx.format = EXT4_ENCRYPTION_CONTEXT_FORMAT_V1;
+ memcpy(ctx.master_key_descriptor, policy->master_key_descriptor,
+ EXT4_KEY_DESCRIPTOR_SIZE);
+ if (!ext4_valid_contents_enc_mode(policy->contents_encryption_mode)) {
+ printk(KERN_WARNING
+ "%s: Invalid contents encryption mode %d\n", __func__,
+ policy->contents_encryption_mode);
+ res = -EINVAL;
+ goto out;
+ }
+ if (!ext4_valid_filenames_enc_mode(policy->filenames_encryption_mode)) {
+ printk(KERN_WARNING
+ "%s: Invalid filenames encryption mode %d\n", __func__,
+ policy->filenames_encryption_mode);
+ res = -EINVAL;
+ goto out;
+ }
+ ctx.contents_encryption_mode = policy->contents_encryption_mode;
+ ctx.filenames_encryption_mode = policy->filenames_encryption_mode;
+ BUILD_BUG_ON(sizeof(ctx.nonce) != EXT4_KEY_DERIVATION_NONCE_SIZE);
+ get_random_bytes(ctx.nonce, EXT4_KEY_DERIVATION_NONCE_SIZE);
+
+ res = ext4_xattr_set(inode, EXT4_XATTR_INDEX_ENCRYPTION,
+ EXT4_XATTR_NAME_ENCRYPTION_CONTEXT, &ctx,
+ sizeof(ctx), 0);
+out:
+ if (!res)
+ ext4_set_inode_flag(inode, EXT4_INODE_ENCRYPT);
+ return res;
+}
+
+int ext4_process_policy(const struct ext4_encryption_policy *policy,
+ struct inode *inode)
+{
+ if (policy->version != 0)
+ return -EINVAL;
+
+ if (!ext4_inode_has_encryption_context(inode)) {
+ if (!ext4_empty_dir(inode))
+ return -ENOTEMPTY;
+ return ext4_create_encryption_context_from_policy(inode,
+ policy);
+ }
+
+ if (ext4_is_encryption_context_consistent_with_policy(inode, policy))
+ return 0;
+
+ printk(KERN_WARNING "%s: Policy inconsistent with encryption context\n",
+ __func__);
+ return -EINVAL;
+}
+
+int ext4_get_policy(struct inode *inode, struct ext4_encryption_policy *policy)
+{
+ struct ext4_encryption_context ctx;
+
+ int res = ext4_xattr_get(inode, EXT4_XATTR_INDEX_ENCRYPTION,
+ EXT4_XATTR_NAME_ENCRYPTION_CONTEXT,
+ &ctx, sizeof(ctx));
+ if (res != sizeof(ctx))
+ return -ENOENT;
+ if (ctx.format != EXT4_ENCRYPTION_CONTEXT_FORMAT_V1)
+ return -EINVAL;
+ policy->version = 0;
+ policy->contents_encryption_mode = ctx.contents_encryption_mode;
+ policy->filenames_encryption_mode = ctx.filenames_encryption_mode;
+ memcpy(&policy->master_key_descriptor, ctx.master_key_descriptor,
+ EXT4_KEY_DESCRIPTOR_SIZE);
+ return 0;
+}
+
+int ext4_is_child_context_consistent_with_parent(struct inode *parent,
+ struct inode *child)
+{
+ struct ext4_encryption_context parent_ctx, child_ctx;
+ int res;
+
+ if ((parent == NULL) || (child == NULL)) {
+ pr_err("parent %p child %p\n", parent, child);
+ BUG_ON(1);
+ }
+ /* no restrictions if the parent directory is not encrypted */
+ if (!ext4_encrypted_inode(parent))
+ return 1;
+ res = ext4_xattr_get(parent, EXT4_XATTR_INDEX_ENCRYPTION,
+ EXT4_XATTR_NAME_ENCRYPTION_CONTEXT,
+ &parent_ctx, sizeof(parent_ctx));
+ if (res != sizeof(parent_ctx))
+ return 0;
+ /* if the child directory is not encrypted, this is always a problem */
+ if (!ext4_encrypted_inode(child))
+ return 0;
+ res = ext4_xattr_get(child, EXT4_XATTR_INDEX_ENCRYPTION,
+ EXT4_XATTR_NAME_ENCRYPTION_CONTEXT,
+ &child_ctx, sizeof(child_ctx));
+ if (res != sizeof(child_ctx))
+ return 0;
+ return (memcmp(parent_ctx.master_key_descriptor,
+ child_ctx.master_key_descriptor,
+ EXT4_KEY_DESCRIPTOR_SIZE) == 0 &&
+ (parent_ctx.contents_encryption_mode ==
+ child_ctx.contents_encryption_mode) &&
+ (parent_ctx.filenames_encryption_mode ==
+ child_ctx.filenames_encryption_mode));
+}
+
+/**
+ * ext4_inherit_context() - Sets a child context from its parent
+ * @parent: Parent inode from which the context is inherited.
+ * @child: Child inode that inherits the context from @parent.
+ *
+ * Return: Zero on success, non-zero otherwise
+ */
+int ext4_inherit_context(struct inode *parent, struct inode *child)
+{
+ struct ext4_encryption_context ctx;
+ int res = ext4_xattr_get(parent, EXT4_XATTR_INDEX_ENCRYPTION,
+ EXT4_XATTR_NAME_ENCRYPTION_CONTEXT,
+ &ctx, sizeof(ctx));
+
+ if (res != sizeof(ctx)) {
+ if (DUMMY_ENCRYPTION_ENABLED(EXT4_SB(parent->i_sb))) {
+ ctx.format = EXT4_ENCRYPTION_CONTEXT_FORMAT_V1;
+ ctx.contents_encryption_mode =
+ EXT4_ENCRYPTION_MODE_AES_256_XTS;
+ ctx.filenames_encryption_mode =
+ EXT4_ENCRYPTION_MODE_AES_256_CTS;
+ memset(ctx.master_key_descriptor, 0x42,
+ EXT4_KEY_DESCRIPTOR_SIZE);
+ res = 0;
+ } else {
+ goto out;
+ }
+ }
+ get_random_bytes(ctx.nonce, EXT4_KEY_DERIVATION_NONCE_SIZE);
+ res = ext4_xattr_set(child, EXT4_XATTR_INDEX_ENCRYPTION,
+ EXT4_XATTR_NAME_ENCRYPTION_CONTEXT, &ctx,
+ sizeof(ctx), 0);
+out:
+ if (!res)
+ ext4_set_inode_flag(child, EXT4_INODE_ENCRYPT);
+ return res;
+}
diff --git a/fs/ext4/dir.c b/fs/ext4/dir.c
index c24143e..61db51a5 100644
--- a/fs/ext4/dir.c
+++ b/fs/ext4/dir.c
@@ -22,10 +22,8 @@
*/
#include <linux/fs.h>
-#include <linux/jbd2.h>
#include <linux/buffer_head.h>
#include <linux/slab.h>
-#include <linux/rbtree.h>
#include "ext4.h"
#include "xattr.h"
@@ -110,7 +108,10 @@
int err;
struct inode *inode = file_inode(file);
struct super_block *sb = inode->i_sb;
+ struct buffer_head *bh = NULL;
int dir_has_error = 0;
+ struct ext4_fname_crypto_ctx *enc_ctx = NULL;
+ struct ext4_str fname_crypto_str = {.name = NULL, .len = 0};
if (is_dx_dir(inode)) {
err = ext4_dx_readdir(file, ctx);
@@ -127,17 +128,28 @@
if (ext4_has_inline_data(inode)) {
int has_inline_data = 1;
- int ret = ext4_read_inline_dir(file, ctx,
+ err = ext4_read_inline_dir(file, ctx,
&has_inline_data);
if (has_inline_data)
- return ret;
+ return err;
+ }
+
+ enc_ctx = ext4_get_fname_crypto_ctx(inode, EXT4_NAME_LEN);
+ if (IS_ERR(enc_ctx))
+ return PTR_ERR(enc_ctx);
+ if (enc_ctx) {
+ err = ext4_fname_crypto_alloc_buffer(enc_ctx, EXT4_NAME_LEN,
+ &fname_crypto_str);
+ if (err < 0) {
+ ext4_put_fname_crypto_ctx(&enc_ctx);
+ return err;
+ }
}
offset = ctx->pos & (sb->s_blocksize - 1);
while (ctx->pos < inode->i_size) {
struct ext4_map_blocks map;
- struct buffer_head *bh = NULL;
map.m_lblk = ctx->pos >> EXT4_BLOCK_SIZE_BITS(sb);
map.m_len = 1;
@@ -180,6 +192,7 @@
(unsigned long long)ctx->pos);
ctx->pos += sb->s_blocksize - offset;
brelse(bh);
+ bh = NULL;
continue;
}
set_buffer_verified(bh);
@@ -226,25 +239,44 @@
offset += ext4_rec_len_from_disk(de->rec_len,
sb->s_blocksize);
if (le32_to_cpu(de->inode)) {
- if (!dir_emit(ctx, de->name,
- de->name_len,
- le32_to_cpu(de->inode),
- get_dtype(sb, de->file_type))) {
- brelse(bh);
- return 0;
+ if (enc_ctx == NULL) {
+ /* Directory is not encrypted */
+ if (!dir_emit(ctx, de->name,
+ de->name_len,
+ le32_to_cpu(de->inode),
+ get_dtype(sb, de->file_type)))
+ goto done;
+ } else {
+ /* Directory is encrypted */
+ err = ext4_fname_disk_to_usr(enc_ctx,
+ de, &fname_crypto_str);
+ if (err < 0)
+ goto errout;
+ if (!dir_emit(ctx,
+ fname_crypto_str.name, err,
+ le32_to_cpu(de->inode),
+ get_dtype(sb, de->file_type)))
+ goto done;
}
}
ctx->pos += ext4_rec_len_from_disk(de->rec_len,
sb->s_blocksize);
}
- offset = 0;
+ if ((ctx->pos < inode->i_size) && !dir_relax(inode))
+ goto done;
brelse(bh);
- if (ctx->pos < inode->i_size) {
- if (!dir_relax(inode))
- return 0;
- }
+ bh = NULL;
+ offset = 0;
}
- return 0;
+done:
+ err = 0;
+errout:
+#ifdef CONFIG_EXT4_FS_ENCRYPTION
+ ext4_put_fname_crypto_ctx(&enc_ctx);
+ ext4_fname_crypto_free_buffer(&fname_crypto_str);
+#endif
+ brelse(bh);
+ return err;
}
static inline int is_32bit_api(void)
@@ -384,10 +416,15 @@
/*
* Given a directory entry, enter it into the fname rb tree.
+ *
+ * When filename encryption is enabled, the dirent will hold the
+ * encrypted filename, while the htree will hold the decrypted filename.
+ * The decrypted filename is passed in via the ent_name parameter.
*/
int ext4_htree_store_dirent(struct file *dir_file, __u32 hash,
__u32 minor_hash,
- struct ext4_dir_entry_2 *dirent)
+ struct ext4_dir_entry_2 *dirent,
+ struct ext4_str *ent_name)
{
struct rb_node **p, *parent = NULL;
struct fname *fname, *new_fn;
@@ -398,17 +435,17 @@
p = &info->root.rb_node;
/* Create and allocate the fname structure */
- len = sizeof(struct fname) + dirent->name_len + 1;
+ len = sizeof(struct fname) + ent_name->len + 1;
new_fn = kzalloc(len, GFP_KERNEL);
if (!new_fn)
return -ENOMEM;
new_fn->hash = hash;
new_fn->minor_hash = minor_hash;
new_fn->inode = le32_to_cpu(dirent->inode);
- new_fn->name_len = dirent->name_len;
+ new_fn->name_len = ent_name->len;
new_fn->file_type = dirent->file_type;
- memcpy(new_fn->name, dirent->name, dirent->name_len);
- new_fn->name[dirent->name_len] = 0;
+ memcpy(new_fn->name, ent_name->name, ent_name->len);
+ new_fn->name[ent_name->len] = 0;
while (*p) {
parent = *p;
diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
index c8eb32e..ef267ad 100644
--- a/fs/ext4/ext4.h
+++ b/fs/ext4/ext4.h
@@ -422,7 +422,7 @@
EXT4_INODE_DIRTY = 8,
EXT4_INODE_COMPRBLK = 9, /* One or more compressed clusters */
EXT4_INODE_NOCOMPR = 10, /* Don't compress */
- EXT4_INODE_ENCRYPT = 11, /* Compression error */
+ EXT4_INODE_ENCRYPT = 11, /* Encrypted file */
/* End compression flags --- maybe not all used */
EXT4_INODE_INDEX = 12, /* hash-indexed directory */
EXT4_INODE_IMAGIC = 13, /* AFS directory */
@@ -582,6 +582,15 @@
#define EXT4_FREE_BLOCKS_NOFREE_FIRST_CLUSTER 0x0010
#define EXT4_FREE_BLOCKS_NOFREE_LAST_CLUSTER 0x0020
+/* Encryption algorithms */
+#define EXT4_ENCRYPTION_MODE_INVALID 0
+#define EXT4_ENCRYPTION_MODE_AES_256_XTS 1
+#define EXT4_ENCRYPTION_MODE_AES_256_GCM 2
+#define EXT4_ENCRYPTION_MODE_AES_256_CBC 3
+#define EXT4_ENCRYPTION_MODE_AES_256_CTS 4
+
+#include "ext4_crypto.h"
+
/*
* ioctl commands
*/
@@ -603,6 +612,9 @@
#define EXT4_IOC_RESIZE_FS _IOW('f', 16, __u64)
#define EXT4_IOC_SWAP_BOOT _IO('f', 17)
#define EXT4_IOC_PRECACHE_EXTENTS _IO('f', 18)
+#define EXT4_IOC_SET_ENCRYPTION_POLICY _IOR('f', 19, struct ext4_encryption_policy)
+#define EXT4_IOC_GET_ENCRYPTION_PWSALT _IOW('f', 20, __u8[16])
+#define EXT4_IOC_GET_ENCRYPTION_POLICY _IOW('f', 21, struct ext4_encryption_policy)
#if defined(__KERNEL__) && defined(CONFIG_COMPAT)
/*
@@ -939,6 +951,11 @@
/* Precomputed uuid+inum+igen checksum for seeding inode checksums */
__u32 i_csum_seed;
+
+#ifdef CONFIG_EXT4_FS_ENCRYPTION
+ /* Encryption params */
+ struct ext4_encryption_key i_encryption_key;
+#endif
};
/*
@@ -1142,7 +1159,8 @@
__le32 s_raid_stripe_width; /* blocks on all data disks (N*stride)*/
__u8 s_log_groups_per_flex; /* FLEX_BG group size */
__u8 s_checksum_type; /* metadata checksum algorithm used */
- __le16 s_reserved_pad;
+ __u8 s_encryption_level; /* versioning level for encryption */
+ __u8 s_reserved_pad; /* Padding to next 32bits */
__le64 s_kbytes_written; /* nr of lifetime kilobytes written */
__le32 s_snapshot_inum; /* Inode number of active snapshot */
__le32 s_snapshot_id; /* sequential ID of active snapshot */
@@ -1169,7 +1187,9 @@
__le32 s_overhead_clusters; /* overhead blocks/clusters in fs */
__le32 s_backup_bgs[2]; /* groups with sparse_super2 SBs */
__u8 s_encrypt_algos[4]; /* Encryption algorithms in use */
- __le32 s_reserved[105]; /* Padding to the end of the block */
+ __u8 s_encrypt_pw_salt[16]; /* Salt used for string2key algorithm */
+ __le32 s_lpf_ino; /* Location of the lost+found inode */
+ __le32 s_reserved[100]; /* Padding to the end of the block */
__le32 s_checksum; /* crc32c(superblock) */
};
@@ -1180,8 +1200,16 @@
/*
* run-time mount flags
*/
-#define EXT4_MF_MNTDIR_SAMPLED 0x0001
-#define EXT4_MF_FS_ABORTED 0x0002 /* Fatal error detected */
+#define EXT4_MF_MNTDIR_SAMPLED 0x0001
+#define EXT4_MF_FS_ABORTED 0x0002 /* Fatal error detected */
+#define EXT4_MF_TEST_DUMMY_ENCRYPTION 0x0004
+
+#ifdef CONFIG_EXT4_FS_ENCRYPTION
+#define DUMMY_ENCRYPTION_ENABLED(sbi) (unlikely((sbi)->s_mount_flags & \
+ EXT4_MF_TEST_DUMMY_ENCRYPTION))
+#else
+#define DUMMY_ENCRYPTION_ENABLED(sbi) (0)
+#endif
/* Number of quota types we support */
#define EXT4_MAXQUOTAS 2
@@ -1351,6 +1379,12 @@
struct ratelimit_state s_err_ratelimit_state;
struct ratelimit_state s_warning_ratelimit_state;
struct ratelimit_state s_msg_ratelimit_state;
+
+#ifdef CONFIG_EXT4_FS_ENCRYPTION
+ /* Encryption */
+ uint32_t s_file_encryption_mode;
+ uint32_t s_dir_encryption_mode;
+#endif
};
static inline struct ext4_sb_info *EXT4_SB(struct super_block *sb)
@@ -1466,6 +1500,18 @@
#define EXT4_SB(sb) (sb)
#endif
+/*
+ * Returns true if the inode is encrypted
+ */
+static inline int ext4_encrypted_inode(struct inode *inode)
+{
+#ifdef CONFIG_EXT4_FS_ENCRYPTION
+ return ext4_test_inode_flag(inode, EXT4_INODE_ENCRYPT);
+#else
+ return 0;
+#endif
+}
+
#define NEXT_ORPHAN(inode) EXT4_I(inode)->i_dtime
/*
@@ -1575,8 +1621,9 @@
EXT4_FEATURE_INCOMPAT_EXTENTS| \
EXT4_FEATURE_INCOMPAT_64BIT| \
EXT4_FEATURE_INCOMPAT_FLEX_BG| \
- EXT4_FEATURE_INCOMPAT_MMP | \
- EXT4_FEATURE_INCOMPAT_INLINE_DATA)
+ EXT4_FEATURE_INCOMPAT_MMP | \
+ EXT4_FEATURE_INCOMPAT_INLINE_DATA | \
+ EXT4_FEATURE_INCOMPAT_ENCRYPT)
#define EXT4_FEATURE_RO_COMPAT_SUPP (EXT4_FEATURE_RO_COMPAT_SPARSE_SUPER| \
EXT4_FEATURE_RO_COMPAT_LARGE_FILE| \
EXT4_FEATURE_RO_COMPAT_GDT_CSUM| \
@@ -2001,6 +2048,99 @@
struct ext4_group_desc *gdp);
ext4_fsblk_t ext4_inode_to_goal_block(struct inode *);
+/* crypto_policy.c */
+int ext4_is_child_context_consistent_with_parent(struct inode *parent,
+ struct inode *child);
+int ext4_inherit_context(struct inode *parent, struct inode *child);
+void ext4_to_hex(char *dst, char *src, size_t src_size);
+int ext4_process_policy(const struct ext4_encryption_policy *policy,
+ struct inode *inode);
+int ext4_get_policy(struct inode *inode,
+ struct ext4_encryption_policy *policy);
+
+/* crypto.c */
+bool ext4_valid_contents_enc_mode(uint32_t mode);
+uint32_t ext4_validate_encryption_key_size(uint32_t mode, uint32_t size);
+extern struct workqueue_struct *ext4_read_workqueue;
+struct ext4_crypto_ctx *ext4_get_crypto_ctx(struct inode *inode);
+void ext4_release_crypto_ctx(struct ext4_crypto_ctx *ctx);
+void ext4_restore_control_page(struct page *data_page);
+struct page *ext4_encrypt(struct inode *inode,
+ struct page *plaintext_page);
+int ext4_decrypt(struct ext4_crypto_ctx *ctx, struct page *page);
+int ext4_decrypt_one(struct inode *inode, struct page *page);
+int ext4_encrypted_zeroout(struct inode *inode, struct ext4_extent *ex);
+
+#ifdef CONFIG_EXT4_FS_ENCRYPTION
+int ext4_init_crypto(void);
+void ext4_exit_crypto(void);
+static inline int ext4_sb_has_crypto(struct super_block *sb)
+{
+ return EXT4_HAS_INCOMPAT_FEATURE(sb, EXT4_FEATURE_INCOMPAT_ENCRYPT);
+}
+#else
+static inline int ext4_init_crypto(void) { return 0; }
+static inline void ext4_exit_crypto(void) { }
+static inline int ext4_sb_has_crypto(struct super_block *sb)
+{
+ return 0;
+}
+#endif
+
+/* crypto_fname.c */
+bool ext4_valid_filenames_enc_mode(uint32_t mode);
+u32 ext4_fname_crypto_round_up(u32 size, u32 blksize);
+int ext4_fname_crypto_alloc_buffer(struct ext4_fname_crypto_ctx *ctx,
+ u32 ilen, struct ext4_str *crypto_str);
+int _ext4_fname_disk_to_usr(struct ext4_fname_crypto_ctx *ctx,
+ const struct ext4_str *iname,
+ struct ext4_str *oname);
+int ext4_fname_disk_to_usr(struct ext4_fname_crypto_ctx *ctx,
+ const struct ext4_dir_entry_2 *de,
+ struct ext4_str *oname);
+int ext4_fname_usr_to_disk(struct ext4_fname_crypto_ctx *ctx,
+ const struct qstr *iname,
+ struct ext4_str *oname);
+int ext4_fname_usr_to_hash(struct ext4_fname_crypto_ctx *ctx,
+ const struct qstr *iname,
+ struct dx_hash_info *hinfo);
+int ext4_fname_disk_to_hash(struct ext4_fname_crypto_ctx *ctx,
+ const struct ext4_dir_entry_2 *de,
+ struct dx_hash_info *hinfo);
+int ext4_fname_crypto_namelen_on_disk(struct ext4_fname_crypto_ctx *ctx,
+ u32 namelen);
+
+#ifdef CONFIG_EXT4_FS_ENCRYPTION
+void ext4_put_fname_crypto_ctx(struct ext4_fname_crypto_ctx **ctx);
+struct ext4_fname_crypto_ctx *ext4_get_fname_crypto_ctx(struct inode *inode,
+ u32 max_len);
+void ext4_fname_crypto_free_buffer(struct ext4_str *crypto_str);
+#else
+static inline
+void ext4_put_fname_crypto_ctx(struct ext4_fname_crypto_ctx **ctx) { }
+static inline
+struct ext4_fname_crypto_ctx *ext4_get_fname_crypto_ctx(struct inode *inode,
+ u32 max_len)
+{
+ return NULL;
+}
+static inline void ext4_fname_crypto_free_buffer(struct ext4_str *p) { }
+#endif
+
+
+/* crypto_key.c */
+int ext4_generate_encryption_key(struct inode *inode);
+
+#ifdef CONFIG_EXT4_FS_ENCRYPTION
+int ext4_has_encryption_key(struct inode *inode);
+#else
+static inline int ext4_has_encryption_key(struct inode *inode)
+{
+ return 0;
+}
+#endif
+
+
/* dir.c */
extern int __ext4_check_dir_entry(const char *, unsigned int, struct inode *,
struct file *,
@@ -2011,17 +2151,20 @@
unlikely(__ext4_check_dir_entry(__func__, __LINE__, (dir), (filp), \
(de), (bh), (buf), (size), (offset)))
extern int ext4_htree_store_dirent(struct file *dir_file, __u32 hash,
- __u32 minor_hash,
- struct ext4_dir_entry_2 *dirent);
+ __u32 minor_hash,
+ struct ext4_dir_entry_2 *dirent,
+ struct ext4_str *ent_name);
extern void ext4_htree_free_dir_info(struct dir_private_info *p);
extern int ext4_find_dest_de(struct inode *dir, struct inode *inode,
struct buffer_head *bh,
void *buf, int buf_size,
const char *name, int namelen,
struct ext4_dir_entry_2 **dest_de);
-void ext4_insert_dentry(struct inode *inode,
+int ext4_insert_dentry(struct inode *dir,
+ struct inode *inode,
struct ext4_dir_entry_2 *de,
int buf_size,
+ const struct qstr *iname,
const char *name, int namelen);
static inline void ext4_update_dx_flag(struct inode *inode)
{
@@ -2099,6 +2242,7 @@
extern int ext4_trim_fs(struct super_block *, struct fstrim_range *);
/* inode.c */
+int ext4_inode_is_fast_symlink(struct inode *inode);
struct buffer_head *ext4_getblk(handle_t *, struct inode *, ext4_lblk_t, int);
struct buffer_head *ext4_bread(handle_t *, struct inode *, ext4_lblk_t, int);
int ext4_get_block_write(struct inode *inode, sector_t iblock,
@@ -2189,6 +2333,7 @@
void *entry_buf,
int buf_size,
int csum_size);
+extern int ext4_empty_dir(struct inode *inode);
/* resize.c */
extern int ext4_group_add(struct super_block *sb,
@@ -2698,6 +2843,10 @@
de->file_type = ext4_type_by_mode[(mode & S_IFMT)>>S_SHIFT];
}
+/* readpages.c */
+extern int ext4_mpage_readpages(struct address_space *mapping,
+ struct list_head *pages, struct page *page,
+ unsigned nr_pages);
/* symlink.c */
extern const struct inode_operations ext4_symlink_inode_operations;
diff --git a/fs/ext4/ext4_crypto.h b/fs/ext4/ext4_crypto.h
new file mode 100644
index 0000000..c2ba35a
--- /dev/null
+++ b/fs/ext4/ext4_crypto.h
@@ -0,0 +1,147 @@
+/*
+ * linux/fs/ext4/ext4_crypto.h
+ *
+ * Copyright (C) 2015, Google, Inc.
+ *
+ * This contains encryption header content for ext4
+ *
+ * Written by Michael Halcrow, 2015.
+ */
+
+#ifndef _EXT4_CRYPTO_H
+#define _EXT4_CRYPTO_H
+
+#include <linux/fs.h>
+
+#define EXT4_KEY_DESCRIPTOR_SIZE 8
+
+/* Policy provided via an ioctl on the topmost directory */
+struct ext4_encryption_policy {
+ char version;
+ char contents_encryption_mode;
+ char filenames_encryption_mode;
+ char master_key_descriptor[EXT4_KEY_DESCRIPTOR_SIZE];
+} __attribute__((__packed__));
+
+#define EXT4_ENCRYPTION_CONTEXT_FORMAT_V1 1
+#define EXT4_KEY_DERIVATION_NONCE_SIZE 16
+
+/**
+ * Encryption context for inode
+ *
+ * Protector format:
+ * 1 byte: Protector format (1 = this version)
+ * 1 byte: File contents encryption mode
+ * 1 byte: File names encryption mode
+ * 1 byte: Reserved
+ * 8 bytes: Master Key descriptor
+ * 16 bytes: Encryption Key derivation nonce
+ */
+struct ext4_encryption_context {
+ char format;
+ char contents_encryption_mode;
+ char filenames_encryption_mode;
+ char reserved;
+ char master_key_descriptor[EXT4_KEY_DESCRIPTOR_SIZE];
+ char nonce[EXT4_KEY_DERIVATION_NONCE_SIZE];
+} __attribute__((__packed__));
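+
+/*
+ * With the packed attribute, sizeof(struct ext4_encryption_context) is
+ * 1 + 1 + 1 + 1 + 8 + 16 = 28 bytes; the ext4_xattr_get()/ext4_xattr_set()
+ * callers check for exactly sizeof(ctx) bytes.
+ */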
+
+/* Encryption parameters */
+#define EXT4_XTS_TWEAK_SIZE 16
+#define EXT4_AES_128_ECB_KEY_SIZE 16
+#define EXT4_AES_256_GCM_KEY_SIZE 32
+#define EXT4_AES_256_CBC_KEY_SIZE 32
+#define EXT4_AES_256_CTS_KEY_SIZE 32
+#define EXT4_AES_256_XTS_KEY_SIZE 64
+#define EXT4_MAX_KEY_SIZE 64
+
+#define EXT4_KEY_DESC_PREFIX "ext4:"
+#define EXT4_KEY_DESC_PREFIX_SIZE 5
+
+struct ext4_encryption_key {
+ uint32_t mode;
+ char raw[EXT4_MAX_KEY_SIZE];
+ uint32_t size;
+};
+
+#define EXT4_CTX_REQUIRES_FREE_ENCRYPT_FL 0x00000001
+#define EXT4_BOUNCE_PAGE_REQUIRES_FREE_ENCRYPT_FL 0x00000002
+
+struct ext4_crypto_ctx {
+ struct crypto_tfm *tfm; /* Crypto API context */
+ struct page *bounce_page; /* Ciphertext page on write path */
+ struct page *control_page; /* Original page on write path */
+ struct bio *bio; /* The bio for this context */
+ struct work_struct work; /* Work queue for read complete path */
+ struct list_head free_list; /* Free list */
+ int flags; /* Flags */
+ int mode; /* Encryption mode for tfm */
+};
+
+struct ext4_completion_result {
+ struct completion completion;
+ int res;
+};
+
+#define DECLARE_EXT4_COMPLETION_RESULT(ecr) \
+ struct ext4_completion_result ecr = { \
+ COMPLETION_INITIALIZER((ecr).completion), 0 }
+
+static inline int ext4_encryption_key_size(int mode)
+{
+ switch (mode) {
+ case EXT4_ENCRYPTION_MODE_AES_256_XTS:
+ return EXT4_AES_256_XTS_KEY_SIZE;
+ case EXT4_ENCRYPTION_MODE_AES_256_GCM:
+ return EXT4_AES_256_GCM_KEY_SIZE;
+ case EXT4_ENCRYPTION_MODE_AES_256_CBC:
+ return EXT4_AES_256_CBC_KEY_SIZE;
+ case EXT4_ENCRYPTION_MODE_AES_256_CTS:
+ return EXT4_AES_256_CTS_KEY_SIZE;
+ default:
+ BUG();
+ }
+ return 0;
+}
+
+#define EXT4_FNAME_NUM_SCATTER_ENTRIES 4
+#define EXT4_CRYPTO_BLOCK_SIZE 16
+#define EXT4_FNAME_CRYPTO_DIGEST_SIZE 32
+
+struct ext4_str {
+ unsigned char *name;
+ u32 len;
+};
+
+struct ext4_fname_crypto_ctx {
+ u32 lim;
+ char tmp_buf[EXT4_CRYPTO_BLOCK_SIZE];
+ struct crypto_ablkcipher *ctfm;
+ struct crypto_hash *htfm;
+ struct page *workpage;
+ struct ext4_encryption_key key;
+ unsigned has_valid_key : 1;
+ unsigned ctfm_key_is_ready : 1;
+};
+
+/**
+ * For encrypted symlinks, the ciphertext length is stored at the beginning
+ * of the string in little-endian format.
+ */
+struct ext4_encrypted_symlink_data {
+ __le16 len;
+ char encrypted_path[1];
+} __attribute__((__packed__));
+
+/**
+ * This function is used to calculate the disk space required to
+ * store a filename of length l in encrypted symlink format.
+ */
+static inline u32 encrypted_symlink_data_len(u32 l)
+{
+ if (l < EXT4_CRYPTO_BLOCK_SIZE)
+ l = EXT4_CRYPTO_BLOCK_SIZE;
+ return (l + sizeof(struct ext4_encrypted_symlink_data) - 1);
+}
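+
+/*
+ * Example: for a 10-byte target path, l is first rounded up to
+ * EXT4_CRYPTO_BLOCK_SIZE (16) and the result is 16 + sizeof(__le16) = 18
+ * bytes; the trailing "- 1" cancels the one-byte encrypted_path[1]
+ * placeholder, leaving room for the length prefix.
+ */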
+
+#endif /* _EXT4_CRYPTO_H */
diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
index bed4308..973816b 100644
--- a/fs/ext4/extents.c
+++ b/fs/ext4/extents.c
@@ -1717,12 +1717,6 @@
{
unsigned short ext1_ee_len, ext2_ee_len;
- /*
- * Make sure that both extents are initialized. We don't merge
- * unwritten extents so that we can be sure that end_io code has
- * the extent that was written properly split out and conversion to
- * initialized is trivial.
- */
if (ext4_ext_is_unwritten(ex1) != ext4_ext_is_unwritten(ex2))
return 0;
@@ -3128,6 +3122,9 @@
ee_len = ext4_ext_get_actual_len(ex);
ee_pblock = ext4_ext_pblock(ex);
+ if (ext4_encrypted_inode(inode))
+ return ext4_encrypted_zeroout(inode, ex);
+
ret = sb_issue_zeroout(inode->i_sb, ee_pblock, ee_len, GFP_NOFS);
if (ret > 0)
ret = 0;
@@ -4535,19 +4532,7 @@
*/
reserved_clusters = get_reserved_cluster_alloc(inode,
map->m_lblk, allocated);
- if (map_from_cluster) {
- if (reserved_clusters) {
- /*
- * We have clusters reserved for this range.
- * But since we are not doing actual allocation
- * and are simply using blocks from previously
- * allocated cluster, we should release the
- * reservation and not claim quota.
- */
- ext4_da_update_reserve_space(inode,
- reserved_clusters, 0);
- }
- } else {
+ if (!map_from_cluster) {
BUG_ON(allocated_clusters < reserved_clusters);
if (reserved_clusters < allocated_clusters) {
struct ext4_inode_info *ei = EXT4_I(inode);
@@ -4803,12 +4788,6 @@
else
max_blocks -= lblk;
- flags = EXT4_GET_BLOCKS_CREATE_UNWRIT_EXT |
- EXT4_GET_BLOCKS_CONVERT_UNWRITTEN |
- EXT4_EX_NOCACHE;
- if (mode & FALLOC_FL_KEEP_SIZE)
- flags |= EXT4_GET_BLOCKS_KEEP_SIZE;
-
mutex_lock(&inode->i_mutex);
/*
@@ -4825,15 +4804,28 @@
ret = inode_newsize_ok(inode, new_size);
if (ret)
goto out_mutex;
- /*
- * If we have a partial block after EOF we have to allocate
- * the entire block.
- */
- if (partial_end)
- max_blocks += 1;
}
+ flags = EXT4_GET_BLOCKS_CREATE_UNWRIT_EXT;
+ if (mode & FALLOC_FL_KEEP_SIZE)
+ flags |= EXT4_GET_BLOCKS_KEEP_SIZE;
+
+ /* Preallocate the range including the unaligned edges */
+ if (partial_begin || partial_end) {
+ ret = ext4_alloc_file_blocks(file,
+ round_down(offset, 1 << blkbits) >> blkbits,
+ (round_up((offset + len), 1 << blkbits) -
+ round_down(offset, 1 << blkbits)) >> blkbits,
+ new_size, flags, mode);
+ if (ret)
+ goto out_mutex;
+
+ }
+
+ /* Zero range excluding the unaligned edges */
if (max_blocks > 0) {
+ flags |= (EXT4_GET_BLOCKS_CONVERT_UNWRITTEN |
+ EXT4_EX_NOCACHE);
/* Now release the pages and zero block aligned part of pages*/
truncate_pagecache_range(inode, start, end - 1);
@@ -4847,19 +4839,6 @@
flags, mode);
if (ret)
goto out_dio;
- /*
- * Remove entire range from the extent status tree.
- *
- * ext4_es_remove_extent(inode, lblk, max_blocks) is
- * NOT sufficient. I'm not sure why this is the case,
- * but let's be conservative and remove the extent
- * status tree for the entire inode. There should be
- * no outstanding delalloc extents thanks to the
- * filemap_write_and_wait_range() call above.
- */
- ret = ext4_es_remove_extent(inode, 0, EXT_MAX_BLOCKS);
- if (ret)
- goto out_dio;
}
if (!partial_begin && !partial_end)
goto out_dio;
@@ -4922,6 +4901,20 @@
ext4_lblk_t lblk;
unsigned int blkbits = inode->i_blkbits;
+ /*
+ * Encrypted inodes can't handle collapse range or insert
+ * range since we would need to re-encrypt blocks with a
+ * different IV or XTS tweak (which are based on the logical
+ * block number).
+ *
+ * XXX It's not clear why zero range isn't working, but we'll
+ * leave it disabled for encrypted inodes for now. This is a
+ * bug we should fix....
+ */
+ if (ext4_encrypted_inode(inode) &&
+ (mode & (FALLOC_FL_COLLAPSE_RANGE | FALLOC_FL_ZERO_RANGE)))
+ return -EOPNOTSUPP;
+
/* Return error if mode is not supported */
if (mode & ~(FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE |
FALLOC_FL_COLLAPSE_RANGE | FALLOC_FL_ZERO_RANGE))
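
The reworked zero-range path above first preallocates the whole byte range, including any unaligned head and tail, as unwritten extents, and only afterwards converts the block-aligned middle. Below is a small stand-alone sketch of the rounding arithmetic used for that preallocation call, assuming 4K blocks (blkbits = 12); the helper names are local to the example, not kernel API.

	/* Illustrative block rounding for the "preallocate unaligned edges" step. */
	#include <stdio.h>

	#define BLKBITS 12			/* assume 4K blocks */
	#define BLKSIZE (1UL << BLKBITS)

	static unsigned long rdown(unsigned long x) { return x & ~(BLKSIZE - 1); }
	static unsigned long rup(unsigned long x) { return (x + BLKSIZE - 1) & ~(BLKSIZE - 1); }

	int main(void)
	{
		unsigned long offset = 5000, len = 10000;	/* byte range to zero */
		unsigned long first = rdown(offset) >> BLKBITS;
		unsigned long count = (rup(offset + len) - rdown(offset)) >> BLKBITS;

		/* Bytes 5000..14999 touch blocks 1..3, so three blocks are covered. */
		printf("first block %lu, %lu blocks\n", first, count);
		return 0;
	}
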
diff --git a/fs/ext4/extents_status.c b/fs/ext4/extents_status.c
index e04d457..d33d5a68 100644
--- a/fs/ext4/extents_status.c
+++ b/fs/ext4/extents_status.c
@@ -9,12 +9,10 @@
*
* Ext4 extents status tree core functions.
*/
-#include <linux/rbtree.h>
#include <linux/list_sort.h>
#include <linux/proc_fs.h>
#include <linux/seq_file.h>
#include "ext4.h"
-#include "extents_status.h"
#include <trace/events/ext4.h>
diff --git a/fs/ext4/file.c b/fs/ext4/file.c
index e576d68..0613c25 100644
--- a/fs/ext4/file.c
+++ b/fs/ext4/file.c
@@ -20,7 +20,6 @@
#include <linux/time.h>
#include <linux/fs.h>
-#include <linux/jbd2.h>
#include <linux/mount.h>
#include <linux/path.h>
#include <linux/quotaops.h>
@@ -221,6 +220,13 @@
static int ext4_file_mmap(struct file *file, struct vm_area_struct *vma)
{
+ struct inode *inode = file->f_mapping->host;
+
+ if (ext4_encrypted_inode(inode)) {
+ int err = ext4_generate_encryption_key(inode);
+ if (err)
+ return 0;
+ }
file_accessed(file);
if (IS_DAX(file_inode(file))) {
vma->vm_ops = &ext4_dax_vm_ops;
@@ -238,6 +244,7 @@
struct vfsmount *mnt = filp->f_path.mnt;
struct path path;
char buf[64], *cp;
+ int ret;
if (unlikely(!(sbi->s_mount_flags & EXT4_MF_MNTDIR_SAMPLED) &&
!(sb->s_flags & MS_RDONLY))) {
@@ -276,11 +283,17 @@
* writing and the journal is present
*/
if (filp->f_mode & FMODE_WRITE) {
- int ret = ext4_inode_attach_jinode(inode);
+ ret = ext4_inode_attach_jinode(inode);
if (ret < 0)
return ret;
}
- return dquot_file_open(inode, filp);
+ ret = dquot_file_open(inode, filp);
+ if (!ret && ext4_encrypted_inode(inode)) {
+ ret = ext4_generate_encryption_key(inode);
+ if (ret)
+ ret = -EACCES;
+ }
+ return ret;
}
/*
diff --git a/fs/ext4/fsync.c b/fs/ext4/fsync.c
index a8bc47f..e9d632e 100644
--- a/fs/ext4/fsync.c
+++ b/fs/ext4/fsync.c
@@ -26,7 +26,6 @@
#include <linux/fs.h>
#include <linux/sched.h>
#include <linux/writeback.h>
-#include <linux/jbd2.h>
#include <linux/blkdev.h>
#include "ext4.h"
diff --git a/fs/ext4/hash.c b/fs/ext4/hash.c
index 3d586f0..e026aa9 100644
--- a/fs/ext4/hash.c
+++ b/fs/ext4/hash.c
@@ -10,7 +10,6 @@
*/
#include <linux/fs.h>
-#include <linux/jbd2.h>
#include <linux/cryptohash.h>
#include "ext4.h"
diff --git a/fs/ext4/ialloc.c b/fs/ext4/ialloc.c
index ac644c3..2cf18a2 100644
--- a/fs/ext4/ialloc.c
+++ b/fs/ext4/ialloc.c
@@ -14,7 +14,6 @@
#include <linux/time.h>
#include <linux/fs.h>
-#include <linux/jbd2.h>
#include <linux/stat.h>
#include <linux/string.h>
#include <linux/quotaops.h>
@@ -997,6 +996,12 @@
ei->i_block_group = group;
ei->i_last_alloc_group = ~0;
+ /* If the directory is encrypted, then we should encrypt the inode. */
+ if ((S_ISDIR(mode) || S_ISREG(mode) || S_ISLNK(mode)) &&
+ (ext4_encrypted_inode(dir) ||
+ DUMMY_ENCRYPTION_ENABLED(sbi)))
+ ext4_set_inode_flag(inode, EXT4_INODE_ENCRYPT);
+
ext4_set_inode_flags(inode);
if (IS_DIRSYNC(inode))
ext4_handle_sync(handle);
@@ -1029,11 +1034,28 @@
ext4_set_inode_state(inode, EXT4_STATE_NEW);
ei->i_extra_isize = EXT4_SB(sb)->s_want_extra_isize;
-
+#ifdef CONFIG_EXT4_FS_ENCRYPTION
+ if ((sbi->s_file_encryption_mode == EXT4_ENCRYPTION_MODE_INVALID) &&
+ (sbi->s_dir_encryption_mode == EXT4_ENCRYPTION_MODE_INVALID)) {
+ ei->i_inline_off = 0;
+ if (EXT4_HAS_INCOMPAT_FEATURE(sb,
+ EXT4_FEATURE_INCOMPAT_INLINE_DATA))
+ ext4_set_inode_state(inode,
+ EXT4_STATE_MAY_INLINE_DATA);
+ } else {
+ /* Inline data and encryption are incompatible;
+ * we turn off inline data since encryption is enabled. */
+ ei->i_inline_off = 1;
+ if (EXT4_HAS_INCOMPAT_FEATURE(sb,
+ EXT4_FEATURE_INCOMPAT_INLINE_DATA))
+ ext4_clear_inode_state(inode,
+ EXT4_STATE_MAY_INLINE_DATA);
+ }
+#else
ei->i_inline_off = 0;
if (EXT4_HAS_INCOMPAT_FEATURE(sb, EXT4_FEATURE_INCOMPAT_INLINE_DATA))
ext4_set_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA);
-
+#endif
ret = inode;
err = dquot_alloc_inode(inode);
if (err)
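
The ialloc hunk above makes new inodes inherit encryption from their parent: regular files, directories and symlinks created in an encrypted directory (or with dummy encryption enabled) get EXT4_INODE_ENCRYPT, and inline data is kept off for them. Below is a rough stand-alone restatement of that rule; dir_encrypted and dummy_mode stand in for ext4_encrypted_inode(dir) and DUMMY_ENCRYPTION_ENABLED(sbi).

	/* Sketch of the "new inodes inherit encryption" rule, illustrative only. */
	#include <stdbool.h>
	#include <stdio.h>
	#include <sys/stat.h>

	static bool should_encrypt_new_inode(mode_t mode, bool dir_encrypted,
					     bool dummy_mode)
	{
		/* Only regular files, directories and symlinks get the flag. */
		if (!S_ISREG(mode) && !S_ISDIR(mode) && !S_ISLNK(mode))
			return false;
		return dir_encrypted || dummy_mode;
	}

	int main(void)
	{
		printf("%d\n", should_encrypt_new_inode(S_IFREG | 0644, true, false));
		printf("%d\n", should_encrypt_new_inode(S_IFSOCK | 0644, true, false));
		return 0;	/* prints 1 then 0 */
	}
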
diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
index 4b143fe..feb2caf 100644
--- a/fs/ext4/inline.c
+++ b/fs/ext4/inline.c
@@ -11,11 +11,13 @@
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
+
+#include <linux/fiemap.h>
+
#include "ext4_jbd2.h"
#include "ext4.h"
#include "xattr.h"
#include "truncate.h"
-#include <linux/fiemap.h>
#define EXT4_XATTR_SYSTEM_DATA "data"
#define EXT4_MIN_INLINE_DATA_SIZE ((sizeof(__le32) * EXT4_N_BLOCKS))
@@ -972,7 +974,7 @@
offset = 0;
while ((void *)de < dlimit) {
de_len = ext4_rec_len_from_disk(de->rec_len, inline_size);
- trace_printk("de: off %u rlen %u name %*.s nlen %u ino %u\n",
+ trace_printk("de: off %u rlen %u name %.*s nlen %u ino %u\n",
offset, de_len, de->name_len, de->name,
de->name_len, le32_to_cpu(de->inode));
if (ext4_check_dir_entry(dir, NULL, de, bh,
@@ -1014,7 +1016,8 @@
err = ext4_journal_get_write_access(handle, iloc->bh);
if (err)
return err;
- ext4_insert_dentry(inode, de, inline_size, name, namelen);
+ ext4_insert_dentry(dir, inode, de, inline_size, &dentry->d_name,
+ name, namelen);
ext4_show_inline_dir(dir, iloc->bh, inline_start, inline_size);
@@ -1327,6 +1330,7 @@
struct ext4_iloc iloc;
void *dir_buf = NULL;
struct ext4_dir_entry_2 fake;
+ struct ext4_str tmp_str;
ret = ext4_get_inode_loc(inode, &iloc);
if (ret)
@@ -1398,8 +1402,10 @@
continue;
if (de->inode == 0)
continue;
- err = ext4_htree_store_dirent(dir_file,
- hinfo->hash, hinfo->minor_hash, de);
+ tmp_str.name = de->name;
+ tmp_str.len = de->name_len;
+ err = ext4_htree_store_dirent(dir_file, hinfo->hash,
+ hinfo->minor_hash, de, &tmp_str);
if (err) {
count = err;
goto out;
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index b49cf6e..366476e 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -20,7 +20,6 @@
#include <linux/fs.h>
#include <linux/time.h>
-#include <linux/jbd2.h>
#include <linux/highuid.h>
#include <linux/pagemap.h>
#include <linux/quotaops.h>
@@ -36,7 +35,6 @@
#include <linux/kernel.h>
#include <linux/printk.h>
#include <linux/slab.h>
-#include <linux/ratelimit.h>
#include <linux/bitops.h>
#include "ext4_jbd2.h"
@@ -140,7 +138,7 @@
/*
* Test whether an inode is a fast symlink.
*/
-static int ext4_inode_is_fast_symlink(struct inode *inode)
+int ext4_inode_is_fast_symlink(struct inode *inode)
{
int ea_blocks = EXT4_I(inode)->i_file_acl ?
EXT4_CLUSTER_SIZE(inode->i_sb) >> 9 : 0;
@@ -887,6 +885,95 @@
static int ext4_get_block_write_nolock(struct inode *inode, sector_t iblock,
struct buffer_head *bh_result, int create);
+
+#ifdef CONFIG_EXT4_FS_ENCRYPTION
+static int ext4_block_write_begin(struct page *page, loff_t pos, unsigned len,
+ get_block_t *get_block)
+{
+ unsigned from = pos & (PAGE_CACHE_SIZE - 1);
+ unsigned to = from + len;
+ struct inode *inode = page->mapping->host;
+ unsigned block_start, block_end;
+ sector_t block;
+ int err = 0;
+ unsigned blocksize = inode->i_sb->s_blocksize;
+ unsigned bbits;
+ struct buffer_head *bh, *head, *wait[2], **wait_bh = wait;
+ bool decrypt = false;
+
+ BUG_ON(!PageLocked(page));
+ BUG_ON(from > PAGE_CACHE_SIZE);
+ BUG_ON(to > PAGE_CACHE_SIZE);
+ BUG_ON(from > to);
+
+ if (!page_has_buffers(page))
+ create_empty_buffers(page, blocksize, 0);
+ head = page_buffers(page);
+ bbits = ilog2(blocksize);
+ block = (sector_t)page->index << (PAGE_CACHE_SHIFT - bbits);
+
+ for (bh = head, block_start = 0; bh != head || !block_start;
+ block++, block_start = block_end, bh = bh->b_this_page) {
+ block_end = block_start + blocksize;
+ if (block_end <= from || block_start >= to) {
+ if (PageUptodate(page)) {
+ if (!buffer_uptodate(bh))
+ set_buffer_uptodate(bh);
+ }
+ continue;
+ }
+ if (buffer_new(bh))
+ clear_buffer_new(bh);
+ if (!buffer_mapped(bh)) {
+ WARN_ON(bh->b_size != blocksize);
+ err = get_block(inode, block, bh, 1);
+ if (err)
+ break;
+ if (buffer_new(bh)) {
+ unmap_underlying_metadata(bh->b_bdev,
+ bh->b_blocknr);
+ if (PageUptodate(page)) {
+ clear_buffer_new(bh);
+ set_buffer_uptodate(bh);
+ mark_buffer_dirty(bh);
+ continue;
+ }
+ if (block_end > to || block_start < from)
+ zero_user_segments(page, to, block_end,
+ block_start, from);
+ continue;
+ }
+ }
+ if (PageUptodate(page)) {
+ if (!buffer_uptodate(bh))
+ set_buffer_uptodate(bh);
+ continue;
+ }
+ if (!buffer_uptodate(bh) && !buffer_delay(bh) &&
+ !buffer_unwritten(bh) &&
+ (block_start < from || block_end > to)) {
+ ll_rw_block(READ, 1, &bh);
+ *wait_bh++ = bh;
+ decrypt = ext4_encrypted_inode(inode) &&
+ S_ISREG(inode->i_mode);
+ }
+ }
+ /*
+ * If we issued read requests, let them complete.
+ */
+ while (wait_bh > wait) {
+ wait_on_buffer(*--wait_bh);
+ if (!buffer_uptodate(*wait_bh))
+ err = -EIO;
+ }
+ if (unlikely(err))
+ page_zero_new_buffers(page, from, to);
+ else if (decrypt)
+ err = ext4_decrypt_one(inode, page);
+ return err;
+}
+#endif
+
static int ext4_write_begin(struct file *file, struct address_space *mapping,
loff_t pos, unsigned len, unsigned flags,
struct page **pagep, void **fsdata)
@@ -949,11 +1036,19 @@
/* In case writeback began while the page was unlocked */
wait_for_stable_page(page);
+#ifdef CONFIG_EXT4_FS_ENCRYPTION
+ if (ext4_should_dioread_nolock(inode))
+ ret = ext4_block_write_begin(page, pos, len,
+ ext4_get_block_write);
+ else
+ ret = ext4_block_write_begin(page, pos, len,
+ ext4_get_block);
+#else
if (ext4_should_dioread_nolock(inode))
ret = __block_write_begin(page, pos, len, ext4_get_block_write);
else
ret = __block_write_begin(page, pos, len, ext4_get_block);
-
+#endif
if (!ret && ext4_should_journal_data(inode)) {
ret = ext4_walk_page_buffers(handle, page_buffers(page),
from, to, NULL,
@@ -2575,7 +2670,12 @@
/* In case writeback began while the page was unlocked */
wait_for_stable_page(page);
+#ifdef CONFIG_EXT4_FS_ENCRYPTION
+ ret = ext4_block_write_begin(page, pos, len,
+ ext4_da_get_block_prep);
+#else
ret = __block_write_begin(page, pos, len, ext4_da_get_block_prep);
+#endif
if (ret < 0) {
unlock_page(page);
ext4_journal_stop(handle);
@@ -2821,7 +2921,7 @@
ret = ext4_readpage_inline(inode, page);
if (ret == -EAGAIN)
- return mpage_readpage(page, ext4_get_block);
+ return ext4_mpage_readpages(page->mapping, NULL, page, 1);
return ret;
}
@@ -2836,7 +2936,7 @@
if (ext4_has_inline_data(inode))
return 0;
- return mpage_readpages(mapping, pages, nr_pages, ext4_get_block);
+ return ext4_mpage_readpages(mapping, pages, NULL, nr_pages);
}
static void ext4_invalidatepage(struct page *page, unsigned int offset,
@@ -3033,6 +3133,9 @@
get_block_func = ext4_get_block_write;
dio_flags = DIO_LOCKING;
}
+#ifdef CONFIG_EXT4_FS_ENCRYPTION
+ BUG_ON(ext4_encrypted_inode(inode) && S_ISREG(inode->i_mode));
+#endif
if (IS_DAX(inode))
ret = dax_do_io(iocb, inode, iter, offset, get_block_func,
ext4_end_io_dio, dio_flags);
@@ -3097,6 +3200,11 @@
size_t count = iov_iter_count(iter);
ssize_t ret;
+#ifdef CONFIG_EXT4_FS_ENCRYPTION
+ if (ext4_encrypted_inode(inode) && S_ISREG(inode->i_mode))
+ return 0;
+#endif
+
/*
* If we are doing data journalling we don't support O_DIRECT
*/
@@ -3261,6 +3369,13 @@
/* Uhhuh. Read error. Complain and punt. */
if (!buffer_uptodate(bh))
goto unlock;
+ if (S_ISREG(inode->i_mode) &&
+ ext4_encrypted_inode(inode)) {
+ /* We expect the key to be set. */
+ BUG_ON(!ext4_has_encryption_key(inode));
+ BUG_ON(blocksize != PAGE_CACHE_SIZE);
+ WARN_ON_ONCE(ext4_decrypt_one(inode, page));
+ }
}
if (ext4_should_journal_data(inode)) {
BUFFER_TRACE(bh, "get write access");
@@ -4096,7 +4211,8 @@
inode->i_op = &ext4_dir_inode_operations;
inode->i_fop = &ext4_dir_operations;
} else if (S_ISLNK(inode->i_mode)) {
- if (ext4_inode_is_fast_symlink(inode)) {
+ if (ext4_inode_is_fast_symlink(inode) &&
+ !ext4_encrypted_inode(inode)) {
inode->i_op = &ext4_fast_symlink_inode_operations;
nd_terminate_link(ei->i_data, inode->i_size,
sizeof(ei->i_data) - 1);
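
Two user-visible effects of the inode.c changes above: buffered writes go through the crypto-aware ext4_block_write_begin(), and ext4_direct_IO() returns 0 for encrypted regular files, so O_DIRECT I/O quietly falls back to the (decrypting) buffered path. Below is a hedged user-space sketch of the latter behaviour; the mount point and file name are hypothetical.

	/* Illustrative: O_DIRECT on an encrypted ext4 file still succeeds,
	 * the data just goes through the page cache inside the kernel. */
	#define _GNU_SOURCE
	#include <fcntl.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>
	#include <unistd.h>

	int main(void)
	{
		void *buf;
		int fd = open("/mnt/crypt/file", O_WRONLY | O_CREAT | O_DIRECT, 0600);

		if (fd < 0 || posix_memalign(&buf, 4096, 4096))
			return 1;
		memset(buf, 'x', 4096);
		/* Succeeds even though ext4_direct_IO() bailed out internally;
		 * the data is written (and encrypted) via the buffered path. */
		printf("wrote %zd bytes\n", write(fd, buf, 4096));
		close(fd);
		free(buf);
		return 0;
	}
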
diff --git a/fs/ext4/ioctl.c b/fs/ext4/ioctl.c
index f58a0d1..2cb9e17 100644
--- a/fs/ext4/ioctl.c
+++ b/fs/ext4/ioctl.c
@@ -8,12 +8,12 @@
*/
#include <linux/fs.h>
-#include <linux/jbd2.h>
#include <linux/capability.h>
#include <linux/time.h>
#include <linux/compat.h>
#include <linux/mount.h>
#include <linux/file.h>
+#include <linux/random.h>
#include <asm/uaccess.h>
#include "ext4_jbd2.h"
#include "ext4.h"
@@ -196,6 +196,16 @@
return err;
}
+static int uuid_is_zero(__u8 u[16])
+{
+ int i;
+
+ for (i = 0; i < 16; i++)
+ if (u[i])
+ return 0;
+ return 1;
+}
+
long ext4_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
struct inode *inode = file_inode(filp);
@@ -615,7 +625,78 @@
}
case EXT4_IOC_PRECACHE_EXTENTS:
return ext4_ext_precache(inode);
+ case EXT4_IOC_SET_ENCRYPTION_POLICY: {
+#ifdef CONFIG_EXT4_FS_ENCRYPTION
+ struct ext4_encryption_policy policy;
+ int err = 0;
+ if (copy_from_user(&policy,
+ (struct ext4_encryption_policy __user *)arg,
+ sizeof(policy))) {
+ err = -EFAULT;
+ goto encryption_policy_out;
+ }
+
+ err = ext4_process_policy(&policy, inode);
+encryption_policy_out:
+ return err;
+#else
+ return -EOPNOTSUPP;
+#endif
+ }
+ case EXT4_IOC_GET_ENCRYPTION_PWSALT: {
+ int err, err2;
+ struct ext4_sb_info *sbi = EXT4_SB(sb);
+ handle_t *handle;
+
+ if (!ext4_sb_has_crypto(sb))
+ return -EOPNOTSUPP;
+ if (uuid_is_zero(sbi->s_es->s_encrypt_pw_salt)) {
+ err = mnt_want_write_file(filp);
+ if (err)
+ return err;
+ handle = ext4_journal_start_sb(sb, EXT4_HT_MISC, 1);
+ if (IS_ERR(handle)) {
+ err = PTR_ERR(handle);
+ goto pwsalt_err_exit;
+ }
+ err = ext4_journal_get_write_access(handle, sbi->s_sbh);
+ if (err)
+ goto pwsalt_err_journal;
+ generate_random_uuid(sbi->s_es->s_encrypt_pw_salt);
+ err = ext4_handle_dirty_metadata(handle, NULL,
+ sbi->s_sbh);
+ pwsalt_err_journal:
+ err2 = ext4_journal_stop(handle);
+ if (err2 && !err)
+ err = err2;
+ pwsalt_err_exit:
+ mnt_drop_write_file(filp);
+ if (err)
+ return err;
+ }
+ if (copy_to_user((void *) arg, sbi->s_es->s_encrypt_pw_salt,
+ 16))
+ return -EFAULT;
+ return 0;
+ }
+ case EXT4_IOC_GET_ENCRYPTION_POLICY: {
+#ifdef CONFIG_EXT4_FS_ENCRYPTION
+ struct ext4_encryption_policy policy;
+ int err = 0;
+
+ if (!ext4_encrypted_inode(inode))
+ return -ENOENT;
+ err = ext4_get_policy(inode, &policy);
+ if (err)
+ return err;
+ if (copy_to_user((void *)arg, &policy, sizeof(policy)))
+ return -EFAULT;
+ return 0;
+#else
+ return -EOPNOTSUPP;
+#endif
+ }
default:
return -ENOTTY;
}
@@ -680,6 +761,9 @@
case FITRIM:
case EXT4_IOC_RESIZE_FS:
case EXT4_IOC_PRECACHE_EXTENTS:
+ case EXT4_IOC_SET_ENCRYPTION_POLICY:
+ case EXT4_IOC_GET_ENCRYPTION_PWSALT:
+ case EXT4_IOC_GET_ENCRYPTION_POLICY:
break;
default:
return -ENOIOCTLCMD;
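
From user space, the new EXT4_IOC_SET_ENCRYPTION_POLICY ioctl is issued on a directory with a filled-in struct ext4_encryption_policy. The sketch below is only illustrative: the structure layout, the ioctl number and the mode value are assumed from the rest of this patch (ext4.h, not shown in this excerpt), and the path is hypothetical; in practice the definitions from the kernel header should be used.

	#include <fcntl.h>
	#include <stdio.h>
	#include <string.h>
	#include <unistd.h>
	#include <sys/ioctl.h>
	#include <linux/ioctl.h>

	struct ext4_encryption_policy {		/* assumed layout */
		char version;
		char contents_encryption_mode;
		char filenames_encryption_mode;
		char flags;
		char master_key_descriptor[8];
	} __attribute__((__packed__));

	/* Assumed ioctl number; take the real one from ext4.h. */
	#define EXT4_IOC_SET_ENCRYPTION_POLICY \
		_IOR('f', 19, struct ext4_encryption_policy)

	int main(void)
	{
		struct ext4_encryption_policy p;
		int fd = open("/mnt/crypt/dir", O_RDONLY);	/* hypothetical path */

		memset(&p, 0, sizeof(p));
		p.contents_encryption_mode = 1;	/* assumed: AES_256_XTS */
		memcpy(p.master_key_descriptor, "testdesc", 8);
		if (fd < 0 || ioctl(fd, EXT4_IOC_SET_ENCRYPTION_POLICY, &p))
			perror("set policy");
		return 0;
	}
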
diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
index 2291923..ef22cd9 100644
--- a/fs/ext4/namei.c
+++ b/fs/ext4/namei.c
@@ -26,7 +26,6 @@
#include <linux/fs.h>
#include <linux/pagemap.h>
-#include <linux/jbd2.h>
#include <linux/time.h>
#include <linux/fcntl.h>
#include <linux/stat.h>
@@ -254,8 +253,9 @@
struct dx_hash_info *hinfo,
struct dx_frame *frame);
static void dx_release(struct dx_frame *frames);
-static int dx_make_map(struct ext4_dir_entry_2 *de, unsigned blocksize,
- struct dx_hash_info *hinfo, struct dx_map_entry map[]);
+static int dx_make_map(struct inode *dir, struct ext4_dir_entry_2 *de,
+ unsigned blocksize, struct dx_hash_info *hinfo,
+ struct dx_map_entry map[]);
static void dx_sort_map(struct dx_map_entry *map, unsigned count);
static struct ext4_dir_entry_2 *dx_move_dirents(char *from, char *to,
struct dx_map_entry *offsets, int count, unsigned blocksize);
@@ -586,8 +586,10 @@
unsigned bcount;
};
-static struct stats dx_show_leaf(struct dx_hash_info *hinfo, struct ext4_dir_entry_2 *de,
- int size, int show_names)
+static struct stats dx_show_leaf(struct inode *dir,
+ struct dx_hash_info *hinfo,
+ struct ext4_dir_entry_2 *de,
+ int size, int show_names)
{
unsigned names = 0, space = 0;
char *base = (char *) de;
@@ -600,12 +602,80 @@
{
if (show_names)
{
+#ifdef CONFIG_EXT4_FS_ENCRYPTION
+ int len;
+ char *name;
+ struct ext4_str fname_crypto_str
+ = {.name = NULL, .len = 0};
+ struct ext4_fname_crypto_ctx *ctx = NULL;
+ int res;
+
+ name = de->name;
+ len = de->name_len;
+ ctx = ext4_get_fname_crypto_ctx(dir,
+ EXT4_NAME_LEN);
+ if (IS_ERR(ctx)) {
+ printk(KERN_WARNING "Error acquiring"
+ " crypto ctxt--skipping crypto\n");
+ ctx = NULL;
+ }
+ if (ctx == NULL) {
+ /* Directory is not encrypted */
+ ext4fs_dirhash(de->name,
+ de->name_len, &h);
+ printk("%*.s:(U)%x.%u ", len,
+ name, h.hash,
+ (unsigned) ((char *) de
+ - base));
+ } else {
+ /* Directory is encrypted */
+ res = ext4_fname_crypto_alloc_buffer(
+ ctx, de->name_len,
+ &fname_crypto_str);
+ if (res < 0) {
+ printk(KERN_WARNING "Error "
+ "allocating crypto "
+ "buffer--skipping "
+ "crypto\n");
+ ext4_put_fname_crypto_ctx(&ctx);
+ ctx = NULL;
+ }
+ res = ext4_fname_disk_to_usr(ctx, de,
+ &fname_crypto_str);
+ if (res < 0) {
+ printk(KERN_WARNING "Error "
+ "converting filename "
+ "from disk to usr"
+ "\n");
+ name = "??";
+ len = 2;
+ } else {
+ name = fname_crypto_str.name;
+ len = fname_crypto_str.len;
+ }
+ res = ext4_fname_disk_to_hash(ctx, de,
+ &h);
+ if (res < 0) {
+ printk(KERN_WARNING "Error "
+ "converting filename "
+ "from disk to htree"
+ "\n");
+ h.hash = 0xDEADBEEF;
+ }
+ printk("%*.s:(E)%x.%u ", len, name,
+ h.hash, (unsigned) ((char *) de
+ - base));
+ ext4_put_fname_crypto_ctx(&ctx);
+ ext4_fname_crypto_free_buffer(
+ &fname_crypto_str);
+ }
+#else
int len = de->name_len;
char *name = de->name;
- while (len--) printk("%c", *name++);
ext4fs_dirhash(de->name, de->name_len, &h);
- printk(":%x.%u ", h.hash,
+ printk("%*.s:%x.%u ", len, name, h.hash,
(unsigned) ((char *) de - base));
+#endif
}
space += EXT4_DIR_REC_LEN(de->name_len);
names++;
@@ -623,7 +693,6 @@
unsigned count = dx_get_count(entries), names = 0, space = 0, i;
unsigned bcount = 0;
struct buffer_head *bh;
- int err;
printk("%i indexed blocks...\n", count);
for (i = 0; i < count; i++, entries++)
{
@@ -637,7 +706,8 @@
continue;
stats = levels?
dx_show_entries(hinfo, dir, ((struct dx_node *) bh->b_data)->entries, levels - 1):
- dx_show_leaf(hinfo, (struct ext4_dir_entry_2 *) bh->b_data, blocksize, 0);
+ dx_show_leaf(dir, hinfo, (struct ext4_dir_entry_2 *)
+ bh->b_data, blocksize, 0);
names += stats.names;
space += stats.space;
bcount += stats.bcount;
@@ -687,8 +757,28 @@
if (hinfo->hash_version <= DX_HASH_TEA)
hinfo->hash_version += EXT4_SB(dir->i_sb)->s_hash_unsigned;
hinfo->seed = EXT4_SB(dir->i_sb)->s_hash_seed;
+#ifdef CONFIG_EXT4_FS_ENCRYPTION
+ if (d_name) {
+ struct ext4_fname_crypto_ctx *ctx = NULL;
+ int res;
+
+ /* Check if the directory is encrypted */
+ ctx = ext4_get_fname_crypto_ctx(dir, EXT4_NAME_LEN);
+ if (IS_ERR(ctx)) {
+ ret_err = ERR_PTR(PTR_ERR(ctx));
+ goto fail;
+ }
+ res = ext4_fname_usr_to_hash(ctx, d_name, hinfo);
+ if (res < 0) {
+ ret_err = ERR_PTR(res);
+ goto fail;
+ }
+ ext4_put_fname_crypto_ctx(&ctx);
+ }
+#else
if (d_name)
ext4fs_dirhash(d_name->name, d_name->len, hinfo);
+#endif
hash = hinfo->hash;
if (root->info.unused_flags & 1) {
@@ -773,6 +863,7 @@
brelse(frame->bh);
frame--;
}
+
if (ret_err == ERR_PTR(ERR_BAD_DX_DIR))
ext4_warning(dir->i_sb,
"Corrupt dir inode %lu, running e2fsck is "
@@ -878,6 +969,8 @@
struct buffer_head *bh;
struct ext4_dir_entry_2 *de, *top;
int err = 0, count = 0;
+ struct ext4_fname_crypto_ctx *ctx = NULL;
+ struct ext4_str fname_crypto_str = {.name = NULL, .len = 0}, tmp_str;
dxtrace(printk(KERN_INFO "In htree dirblock_to_tree: block %lu\n",
(unsigned long)block));
@@ -889,6 +982,24 @@
top = (struct ext4_dir_entry_2 *) ((char *) de +
dir->i_sb->s_blocksize -
EXT4_DIR_REC_LEN(0));
+#ifdef CONFIG_EXT4_FS_ENCRYPTION
+ /* Check if the directory is encrypted */
+ ctx = ext4_get_fname_crypto_ctx(dir, EXT4_NAME_LEN);
+ if (IS_ERR(ctx)) {
+ err = PTR_ERR(ctx);
+ brelse(bh);
+ return err;
+ }
+ if (ctx != NULL) {
+ err = ext4_fname_crypto_alloc_buffer(ctx, EXT4_NAME_LEN,
+ &fname_crypto_str);
+ if (err < 0) {
+ ext4_put_fname_crypto_ctx(&ctx);
+ brelse(bh);
+ return err;
+ }
+ }
+#endif
for (; de < top; de = ext4_next_entry(de, dir->i_sb->s_blocksize)) {
if (ext4_check_dir_entry(dir, NULL, de, bh,
bh->b_data, bh->b_size,
@@ -897,21 +1008,52 @@
/* silently ignore the rest of the block */
break;
}
+#ifdef CONFIG_EXT4_FS_ENCRYPTION
+ err = ext4_fname_disk_to_hash(ctx, de, hinfo);
+ if (err < 0) {
+ count = err;
+ goto errout;
+ }
+#else
ext4fs_dirhash(de->name, de->name_len, hinfo);
+#endif
if ((hinfo->hash < start_hash) ||
((hinfo->hash == start_hash) &&
(hinfo->minor_hash < start_minor_hash)))
continue;
if (de->inode == 0)
continue;
- if ((err = ext4_htree_store_dirent(dir_file,
- hinfo->hash, hinfo->minor_hash, de)) != 0) {
- brelse(bh);
- return err;
+ if (ctx == NULL) {
+ /* Directory is not encrypted */
+ tmp_str.name = de->name;
+ tmp_str.len = de->name_len;
+ err = ext4_htree_store_dirent(dir_file,
+ hinfo->hash, hinfo->minor_hash, de,
+ &tmp_str);
+ } else {
+ /* Directory is encrypted */
+ err = ext4_fname_disk_to_usr(ctx, de,
+ &fname_crypto_str);
+ if (err < 0) {
+ count = err;
+ goto errout;
+ }
+ err = ext4_htree_store_dirent(dir_file,
+ hinfo->hash, hinfo->minor_hash, de,
+ &fname_crypto_str);
+ }
+ if (err != 0) {
+ count = err;
+ goto errout;
}
count++;
}
+errout:
brelse(bh);
+#ifdef CONFIG_EXT4_FS_ENCRYPTION
+ ext4_put_fname_crypto_ctx(&ctx);
+ ext4_fname_crypto_free_buffer(&fname_crypto_str);
+#endif
return count;
}
@@ -935,6 +1077,7 @@
int count = 0;
int ret, err;
__u32 hashval;
+ struct ext4_str tmp_str;
dxtrace(printk(KERN_DEBUG "In htree_fill_tree, start hash: %x:%x\n",
start_hash, start_minor_hash));
@@ -970,14 +1113,22 @@
/* Add '.' and '..' from the htree header */
if (!start_hash && !start_minor_hash) {
de = (struct ext4_dir_entry_2 *) frames[0].bh->b_data;
- if ((err = ext4_htree_store_dirent(dir_file, 0, 0, de)) != 0)
+ tmp_str.name = de->name;
+ tmp_str.len = de->name_len;
+ err = ext4_htree_store_dirent(dir_file, 0, 0,
+ de, &tmp_str);
+ if (err != 0)
goto errout;
count++;
}
if (start_hash < 2 || (start_hash ==2 && start_minor_hash==0)) {
de = (struct ext4_dir_entry_2 *) frames[0].bh->b_data;
de = ext4_next_entry(de, dir->i_sb->s_blocksize);
- if ((err = ext4_htree_store_dirent(dir_file, 2, 0, de)) != 0)
+ tmp_str.name = de->name;
+ tmp_str.len = de->name_len;
+ err = ext4_htree_store_dirent(dir_file, 2, 0,
+ de, &tmp_str);
+ if (err != 0)
goto errout;
count++;
}
@@ -1035,17 +1186,33 @@
* Create map of hash values, offsets, and sizes, stored at end of block.
* Returns number of entries mapped.
*/
-static int dx_make_map(struct ext4_dir_entry_2 *de, unsigned blocksize,
- struct dx_hash_info *hinfo,
+static int dx_make_map(struct inode *dir, struct ext4_dir_entry_2 *de,
+ unsigned blocksize, struct dx_hash_info *hinfo,
struct dx_map_entry *map_tail)
{
int count = 0;
char *base = (char *) de;
struct dx_hash_info h = *hinfo;
+#ifdef CONFIG_EXT4_FS_ENCRYPTION
+ struct ext4_fname_crypto_ctx *ctx = NULL;
+ int err;
+
+ ctx = ext4_get_fname_crypto_ctx(dir, EXT4_NAME_LEN);
+ if (IS_ERR(ctx))
+ return PTR_ERR(ctx);
+#endif
while ((char *) de < base + blocksize) {
if (de->name_len && de->inode) {
+#ifdef CONFIG_EXT4_FS_ENCRYPTION
+ err = ext4_fname_disk_to_hash(ctx, de, &h);
+ if (err < 0) {
+ ext4_put_fname_crypto_ctx(&ctx);
+ return err;
+ }
+#else
ext4fs_dirhash(de->name, de->name_len, &h);
+#endif
map_tail--;
map_tail->hash = h.hash;
map_tail->offs = ((char *) de - base)>>2;
@@ -1056,6 +1223,9 @@
/* XXX: do we need to check rec_len == 0 case? -Chris */
de = ext4_next_entry(de, blocksize);
}
+#ifdef CONFIG_EXT4_FS_ENCRYPTION
+ ext4_put_fname_crypto_ctx(&ctx);
+#endif
return count;
}
@@ -1106,57 +1276,107 @@
* `len <= EXT4_NAME_LEN' is guaranteed by caller.
* `de != NULL' is guaranteed by caller.
*/
-static inline int ext4_match (int len, const char * const name,
- struct ext4_dir_entry_2 * de)
+static inline int ext4_match(struct ext4_fname_crypto_ctx *ctx,
+ struct ext4_str *fname_crypto_str,
+ int len, const char * const name,
+ struct ext4_dir_entry_2 *de)
{
- if (len != de->name_len)
- return 0;
+ int res;
+
if (!de->inode)
return 0;
- return !memcmp(name, de->name, len);
+
+#ifdef CONFIG_EXT4_FS_ENCRYPTION
+ if (ctx) {
+ /* Directory is encrypted */
+ res = ext4_fname_disk_to_usr(ctx, de, fname_crypto_str);
+ if (res < 0)
+ return res;
+ if (len != res)
+ return 0;
+ res = memcmp(name, fname_crypto_str->name, len);
+ return (res == 0) ? 1 : 0;
+ }
+#endif
+ if (len != de->name_len)
+ return 0;
+ res = memcmp(name, de->name, len);
+ return (res == 0) ? 1 : 0;
}
/*
* Returns 0 if not found, -1 on failure, and 1 on success
*/
-int search_dir(struct buffer_head *bh,
- char *search_buf,
- int buf_size,
- struct inode *dir,
- const struct qstr *d_name,
- unsigned int offset,
- struct ext4_dir_entry_2 **res_dir)
+int search_dir(struct buffer_head *bh, char *search_buf, int buf_size,
+ struct inode *dir, const struct qstr *d_name,
+ unsigned int offset, struct ext4_dir_entry_2 **res_dir)
{
struct ext4_dir_entry_2 * de;
char * dlimit;
int de_len;
const char *name = d_name->name;
int namelen = d_name->len;
+ struct ext4_fname_crypto_ctx *ctx = NULL;
+ struct ext4_str fname_crypto_str = {.name = NULL, .len = 0};
+ int res;
+
+ ctx = ext4_get_fname_crypto_ctx(dir, EXT4_NAME_LEN);
+ if (IS_ERR(ctx))
+ return -1;
+
+ if (ctx != NULL) {
+ /* Allocate buffer to hold maximum name length */
+ res = ext4_fname_crypto_alloc_buffer(ctx, EXT4_NAME_LEN,
+ &fname_crypto_str);
+ if (res < 0) {
+ ext4_put_fname_crypto_ctx(&ctx);
+ return -1;
+ }
+ }
de = (struct ext4_dir_entry_2 *)search_buf;
dlimit = search_buf + buf_size;
while ((char *) de < dlimit) {
/* this code is executed quadratically often */
/* do minimal checking `by hand' */
+ if ((char *) de + de->name_len <= dlimit) {
+ res = ext4_match(ctx, &fname_crypto_str, namelen,
+ name, de);
+ if (res < 0) {
+ res = -1;
+ goto return_result;
+ }
+ if (res > 0) {
+ /* found a match - just to be sure, do
+ * a full check */
+ if (ext4_check_dir_entry(dir, NULL, de, bh,
+ bh->b_data,
+ bh->b_size, offset)) {
+ res = -1;
+ goto return_result;
+ }
+ *res_dir = de;
+ res = 1;
+ goto return_result;
+ }
- if ((char *) de + namelen <= dlimit &&
- ext4_match (namelen, name, de)) {
- /* found a match - just to be sure, do a full check */
- if (ext4_check_dir_entry(dir, NULL, de, bh, bh->b_data,
- bh->b_size, offset))
- return -1;
- *res_dir = de;
- return 1;
}
/* prevent looping on a bad block */
de_len = ext4_rec_len_from_disk(de->rec_len,
dir->i_sb->s_blocksize);
- if (de_len <= 0)
- return -1;
+ if (de_len <= 0) {
+ res = -1;
+ goto return_result;
+ }
offset += de_len;
de = (struct ext4_dir_entry_2 *) ((char *) de + de_len);
}
- return 0;
+
+ res = 0;
+return_result:
+ ext4_put_fname_crypto_ctx(&ctx);
+ ext4_fname_crypto_free_buffer(&fname_crypto_str);
+ return res;
}
static int is_dx_internal_node(struct inode *dir, ext4_lblk_t block,
@@ -1345,6 +1565,9 @@
ext4_lblk_t block;
int retval;
+#ifdef CONFIG_EXT4_FS_ENCRYPTION
+ *res_dir = NULL;
+#endif
frame = dx_probe(d_name, dir, &hinfo, frames);
if (IS_ERR(frame))
return (struct buffer_head *) frame;
@@ -1417,6 +1640,18 @@
ino);
return ERR_PTR(-EIO);
}
+ if (!IS_ERR(inode) && ext4_encrypted_inode(dir) &&
+ (S_ISREG(inode->i_mode) || S_ISDIR(inode->i_mode) ||
+ S_ISLNK(inode->i_mode)) &&
+ !ext4_is_child_context_consistent_with_parent(dir,
+ inode)) {
+ iput(inode);
+ ext4_warning(inode->i_sb,
+ "Inconsistent encryption contexts: %lu/%lu\n",
+ (unsigned long) dir->i_ino,
+ (unsigned long) inode->i_ino);
+ return ERR_PTR(-EPERM);
+ }
}
return d_splice_alias(inode, dentry);
}
@@ -1541,7 +1776,7 @@
/* create map in the end of data2 block */
map = (struct dx_map_entry *) (data2 + blocksize);
- count = dx_make_map((struct ext4_dir_entry_2 *) data1,
+ count = dx_make_map(dir, (struct ext4_dir_entry_2 *) data1,
blocksize, hinfo, map);
map -= count;
dx_sort_map(map, count);
@@ -1564,7 +1799,8 @@
hash2, split, count-split));
/* Fancy dance to stay within two buffers */
- de2 = dx_move_dirents(data1, data2, map + split, count - split, blocksize);
+ de2 = dx_move_dirents(data1, data2, map + split, count - split,
+ blocksize);
de = dx_pack_dirents(data1, blocksize);
de->rec_len = ext4_rec_len_to_disk(data1 + (blocksize - csum_size) -
(char *) de,
@@ -1580,8 +1816,10 @@
initialize_dirent_tail(t, blocksize);
}
- dxtrace(dx_show_leaf (hinfo, (struct ext4_dir_entry_2 *) data1, blocksize, 1));
- dxtrace(dx_show_leaf (hinfo, (struct ext4_dir_entry_2 *) data2, blocksize, 1));
+ dxtrace(dx_show_leaf(dir, hinfo, (struct ext4_dir_entry_2 *) data1,
+ blocksize, 1));
+ dxtrace(dx_show_leaf(dir, hinfo, (struct ext4_dir_entry_2 *) data2,
+ blocksize, 1));
/* Which block gets the new entry? */
if (hinfo->hash >= hash2) {
@@ -1618,15 +1856,48 @@
int nlen, rlen;
unsigned int offset = 0;
char *top;
+ struct ext4_fname_crypto_ctx *ctx = NULL;
+ struct ext4_str fname_crypto_str = {.name = NULL, .len = 0};
+ int res;
+
+ ctx = ext4_get_fname_crypto_ctx(dir, EXT4_NAME_LEN);
+ if (IS_ERR(ctx))
+ return -1;
+
+ if (ctx != NULL) {
+ /* Calculate record length needed to store the entry */
+ res = ext4_fname_crypto_namelen_on_disk(ctx, namelen);
+ if (res < 0) {
+ ext4_put_fname_crypto_ctx(&ctx);
+ return res;
+ }
+ reclen = EXT4_DIR_REC_LEN(res);
+
+ /* Allocate buffer to hold maximum name length */
+ res = ext4_fname_crypto_alloc_buffer(ctx, EXT4_NAME_LEN,
+ &fname_crypto_str);
+ if (res < 0) {
+ ext4_put_fname_crypto_ctx(&ctx);
+ return -1;
+ }
+ }
de = (struct ext4_dir_entry_2 *)buf;
top = buf + buf_size - reclen;
while ((char *) de <= top) {
if (ext4_check_dir_entry(dir, NULL, de, bh,
- buf, buf_size, offset))
- return -EIO;
- if (ext4_match(namelen, name, de))
- return -EEXIST;
+ buf, buf_size, offset)) {
+ res = -EIO;
+ goto return_result;
+ }
+ /* Provide crypto context and crypto buffer to ext4 match */
+ res = ext4_match(ctx, &fname_crypto_str, namelen, name, de);
+ if (res < 0)
+ goto return_result;
+ if (res > 0) {
+ res = -EEXIST;
+ goto return_result;
+ }
nlen = EXT4_DIR_REC_LEN(de->name_len);
rlen = ext4_rec_len_from_disk(de->rec_len, buf_size);
if ((de->inode ? rlen - nlen : rlen) >= reclen)
@@ -1634,26 +1905,62 @@
de = (struct ext4_dir_entry_2 *)((char *)de + rlen);
offset += rlen;
}
- if ((char *) de > top)
- return -ENOSPC;
- *dest_de = de;
- return 0;
+ if ((char *) de > top)
+ res = -ENOSPC;
+ else {
+ *dest_de = de;
+ res = 0;
+ }
+return_result:
+ ext4_put_fname_crypto_ctx(&ctx);
+ ext4_fname_crypto_free_buffer(&fname_crypto_str);
+ return res;
}
-void ext4_insert_dentry(struct inode *inode,
- struct ext4_dir_entry_2 *de,
- int buf_size,
- const char *name, int namelen)
+int ext4_insert_dentry(struct inode *dir,
+ struct inode *inode,
+ struct ext4_dir_entry_2 *de,
+ int buf_size,
+ const struct qstr *iname,
+ const char *name, int namelen)
{
int nlen, rlen;
+ struct ext4_fname_crypto_ctx *ctx = NULL;
+ struct ext4_str fname_crypto_str = {.name = NULL, .len = 0};
+ struct ext4_str tmp_str;
+ int res;
+
+ ctx = ext4_get_fname_crypto_ctx(dir, EXT4_NAME_LEN);
+ if (IS_ERR(ctx))
+ return -EIO;
+ /* By default, the input name would be written to the disk */
+ tmp_str.name = (unsigned char *)name;
+ tmp_str.len = namelen;
+ if (ctx != NULL) {
+ /* Directory is encrypted */
+ res = ext4_fname_crypto_alloc_buffer(ctx, EXT4_NAME_LEN,
+ &fname_crypto_str);
+ if (res < 0) {
+ ext4_put_fname_crypto_ctx(&ctx);
+ return -ENOMEM;
+ }
+ res = ext4_fname_usr_to_disk(ctx, iname, &fname_crypto_str);
+ if (res < 0) {
+ ext4_put_fname_crypto_ctx(&ctx);
+ ext4_fname_crypto_free_buffer(&fname_crypto_str);
+ return res;
+ }
+ tmp_str.name = fname_crypto_str.name;
+ tmp_str.len = fname_crypto_str.len;
+ }
nlen = EXT4_DIR_REC_LEN(de->name_len);
rlen = ext4_rec_len_from_disk(de->rec_len, buf_size);
if (de->inode) {
struct ext4_dir_entry_2 *de1 =
- (struct ext4_dir_entry_2 *)((char *)de + nlen);
+ (struct ext4_dir_entry_2 *)((char *)de + nlen);
de1->rec_len = ext4_rec_len_to_disk(rlen - nlen, buf_size);
de->rec_len = ext4_rec_len_to_disk(nlen, buf_size);
de = de1;
@@ -1661,9 +1968,14 @@
de->file_type = EXT4_FT_UNKNOWN;
de->inode = cpu_to_le32(inode->i_ino);
ext4_set_de_type(inode->i_sb, de, inode->i_mode);
- de->name_len = namelen;
- memcpy(de->name, name, namelen);
+ de->name_len = tmp_str.len;
+
+ memcpy(de->name, tmp_str.name, tmp_str.len);
+ ext4_put_fname_crypto_ctx(&ctx);
+ ext4_fname_crypto_free_buffer(&fname_crypto_str);
+ return 0;
}
+
/*
* Add a new entry into a directory (leaf) block. If de is non-NULL,
* it points to a directory entry which is guaranteed to be large
@@ -1700,8 +2012,12 @@
return err;
}
- /* By now the buffer is marked for journaling */
- ext4_insert_dentry(inode, de, blocksize, name, namelen);
+ /* By now the buffer is marked for journaling. Due to crypto operations,
+ * the following function call may fail */
+ err = ext4_insert_dentry(dir, inode, de, blocksize, &dentry->d_name,
+ name, namelen);
+ if (err < 0)
+ return err;
/*
* XXX shouldn't update any times until successful
@@ -1733,8 +2049,13 @@
struct inode *inode, struct buffer_head *bh)
{
struct inode *dir = dentry->d_parent->d_inode;
+#ifdef CONFIG_EXT4_FS_ENCRYPTION
+ struct ext4_fname_crypto_ctx *ctx = NULL;
+ int res;
+#else
const char *name = dentry->d_name.name;
int namelen = dentry->d_name.len;
+#endif
struct buffer_head *bh2;
struct dx_root *root;
struct dx_frame frames[2], *frame;
@@ -1748,7 +2069,13 @@
struct dx_hash_info hinfo;
ext4_lblk_t block;
struct fake_dirent *fde;
- int csum_size = 0;
+ int csum_size = 0;
+
+#ifdef CONFIG_EXT4_FS_ENCRYPTION
+ ctx = ext4_get_fname_crypto_ctx(dir, EXT4_NAME_LEN);
+ if (IS_ERR(ctx))
+ return PTR_ERR(ctx);
+#endif
if (ext4_has_metadata_csum(inode->i_sb))
csum_size = sizeof(struct ext4_dir_entry_tail);
@@ -1815,7 +2142,18 @@
if (hinfo.hash_version <= DX_HASH_TEA)
hinfo.hash_version += EXT4_SB(dir->i_sb)->s_hash_unsigned;
hinfo.seed = EXT4_SB(dir->i_sb)->s_hash_seed;
+#ifdef CONFIG_EXT4_FS_ENCRYPTION
+ res = ext4_fname_usr_to_hash(ctx, &dentry->d_name, &hinfo);
+ if (res < 0) {
+ ext4_put_fname_crypto_ctx(&ctx);
+ ext4_mark_inode_dirty(handle, dir);
+ brelse(bh);
+ return res;
+ }
+ ext4_put_fname_crypto_ctx(&ctx);
+#else
ext4fs_dirhash(name, namelen, &hinfo);
+#endif
memset(frames, 0, sizeof(frames));
frame = frames;
frame->entries = entries;
@@ -1865,7 +2203,7 @@
struct inode *inode)
{
struct inode *dir = dentry->d_parent->d_inode;
- struct buffer_head *bh;
+ struct buffer_head *bh = NULL;
struct ext4_dir_entry_2 *de;
struct ext4_dir_entry_tail *t;
struct super_block *sb;
@@ -1889,14 +2227,14 @@
return retval;
if (retval == 1) {
retval = 0;
- return retval;
+ goto out;
}
}
if (is_dx(dir)) {
retval = ext4_dx_add_entry(handle, dentry, inode);
if (!retval || (retval != ERR_BAD_DX_DIR))
- return retval;
+ goto out;
ext4_clear_inode_flag(dir, EXT4_INODE_INDEX);
dx_fallback++;
ext4_mark_inode_dirty(handle, dir);
@@ -1908,14 +2246,15 @@
return PTR_ERR(bh);
retval = add_dirent_to_buf(handle, dentry, inode, NULL, bh);
- if (retval != -ENOSPC) {
- brelse(bh);
- return retval;
- }
+ if (retval != -ENOSPC)
+ goto out;
if (blocks == 1 && !dx_fallback &&
- EXT4_HAS_COMPAT_FEATURE(sb, EXT4_FEATURE_COMPAT_DIR_INDEX))
- return make_indexed_dir(handle, dentry, inode, bh);
+ EXT4_HAS_COMPAT_FEATURE(sb, EXT4_FEATURE_COMPAT_DIR_INDEX)) {
+ retval = make_indexed_dir(handle, dentry, inode, bh);
+ bh = NULL; /* make_indexed_dir releases bh */
+ goto out;
+ }
brelse(bh);
}
bh = ext4_append(handle, dir, &block);
@@ -1931,6 +2270,7 @@
}
retval = add_dirent_to_buf(handle, dentry, inode, de, bh);
+out:
brelse(bh);
if (retval == 0)
ext4_set_inode_state(inode, EXT4_STATE_NEWENTRY);
@@ -2237,7 +2577,20 @@
inode->i_op = &ext4_file_inode_operations;
inode->i_fop = &ext4_file_operations;
ext4_set_aops(inode);
- err = ext4_add_nondir(handle, dentry, inode);
+ err = 0;
+#ifdef CONFIG_EXT4_FS_ENCRYPTION
+ if (!err && (ext4_encrypted_inode(dir) ||
+ DUMMY_ENCRYPTION_ENABLED(EXT4_SB(dir->i_sb)))) {
+ err = ext4_inherit_context(dir, inode);
+ if (err) {
+ clear_nlink(inode);
+ unlock_new_inode(inode);
+ iput(inode);
+ }
+ }
+#endif
+ if (!err)
+ err = ext4_add_nondir(handle, dentry, inode);
if (!err && IS_DIRSYNC(dir))
ext4_handle_sync(handle);
}
@@ -2418,6 +2771,14 @@
err = ext4_init_new_dir(handle, dir, inode);
if (err)
goto out_clear_inode;
+#ifdef CONFIG_EXT4_FS_ENCRYPTION
+ if (ext4_encrypted_inode(dir) ||
+ DUMMY_ENCRYPTION_ENABLED(EXT4_SB(dir->i_sb))) {
+ err = ext4_inherit_context(dir, inode);
+ if (err)
+ goto out_clear_inode;
+ }
+#endif
err = ext4_mark_inode_dirty(handle, inode);
if (!err)
err = ext4_add_entry(handle, dentry, inode);
@@ -2450,7 +2811,7 @@
/*
* routine to check that the specified directory is empty (for rmdir)
*/
-static int empty_dir(struct inode *inode)
+int ext4_empty_dir(struct inode *inode)
{
unsigned int offset;
struct buffer_head *bh;
@@ -2718,7 +3079,7 @@
goto end_rmdir;
retval = -ENOTEMPTY;
- if (!empty_dir(inode))
+ if (!ext4_empty_dir(inode))
goto end_rmdir;
handle = ext4_journal_start(dir, EXT4_HT_DIR,
@@ -2828,16 +3189,25 @@
{
handle_t *handle;
struct inode *inode;
- int l, err, retries = 0;
+ int err, len = strlen(symname);
int credits;
+ bool encryption_required;
+ struct ext4_str disk_link;
+ struct ext4_encrypted_symlink_data *sd = NULL;
- l = strlen(symname)+1;
- if (l > dir->i_sb->s_blocksize)
+ disk_link.len = len + 1;
+ disk_link.name = (char *) symname;
+
+ encryption_required = (ext4_encrypted_inode(dir) ||
+ DUMMY_ENCRYPTION_ENABLED(EXT4_SB(dir->i_sb)));
+ if (encryption_required)
+ disk_link.len = encrypted_symlink_data_len(len) + 1;
+ if (disk_link.len > dir->i_sb->s_blocksize)
return -ENAMETOOLONG;
dquot_initialize(dir);
- if (l > EXT4_N_BLOCKS * 4) {
+ if ((disk_link.len > EXT4_N_BLOCKS * 4)) {
/*
* For non-fast symlinks, we just allocate inode and put it on
* orphan list in the first transaction => we need bitmap,
@@ -2856,16 +3226,49 @@
credits = EXT4_DATA_TRANS_BLOCKS(dir->i_sb) +
EXT4_INDEX_EXTRA_TRANS_BLOCKS + 3;
}
-retry:
+
inode = ext4_new_inode_start_handle(dir, S_IFLNK|S_IRWXUGO,
&dentry->d_name, 0, NULL,
EXT4_HT_DIR, credits);
handle = ext4_journal_current_handle();
- err = PTR_ERR(inode);
- if (IS_ERR(inode))
- goto out_stop;
+ if (IS_ERR(inode)) {
+ if (handle)
+ ext4_journal_stop(handle);
+ return PTR_ERR(inode);
+ }
- if (l > EXT4_N_BLOCKS * 4) {
+ if (encryption_required) {
+ struct ext4_fname_crypto_ctx *ctx = NULL;
+ struct qstr istr;
+ struct ext4_str ostr;
+
+ sd = kzalloc(disk_link.len, GFP_NOFS);
+ if (!sd) {
+ err = -ENOMEM;
+ goto err_drop_inode;
+ }
+ err = ext4_inherit_context(dir, inode);
+ if (err)
+ goto err_drop_inode;
+ ctx = ext4_get_fname_crypto_ctx(inode,
+ inode->i_sb->s_blocksize);
+ if (IS_ERR_OR_NULL(ctx)) {
+ /* We just set the policy, so ctx should not be NULL */
+ err = (ctx == NULL) ? -EIO : PTR_ERR(ctx);
+ goto err_drop_inode;
+ }
+ istr.name = (const unsigned char *) symname;
+ istr.len = len;
+ ostr.name = sd->encrypted_path;
+ err = ext4_fname_usr_to_disk(ctx, &istr, &ostr);
+ ext4_put_fname_crypto_ctx(&ctx);
+ if (err < 0)
+ goto err_drop_inode;
+ sd->len = cpu_to_le16(ostr.len);
+ disk_link.name = (char *) sd;
+ }
+
+ if ((disk_link.len > EXT4_N_BLOCKS * 4)) {
inode->i_op = &ext4_symlink_inode_operations;
ext4_set_aops(inode);
/*
@@ -2881,9 +3284,10 @@
drop_nlink(inode);
err = ext4_orphan_add(handle, inode);
ext4_journal_stop(handle);
+ handle = NULL;
if (err)
goto err_drop_inode;
- err = __page_symlink(inode, symname, l, 1);
+ err = __page_symlink(inode, disk_link.name, disk_link.len, 1);
if (err)
goto err_drop_inode;
/*
@@ -2895,34 +3299,37 @@
EXT4_INDEX_EXTRA_TRANS_BLOCKS + 1);
if (IS_ERR(handle)) {
err = PTR_ERR(handle);
+ handle = NULL;
goto err_drop_inode;
}
set_nlink(inode, 1);
err = ext4_orphan_del(handle, inode);
- if (err) {
- ext4_journal_stop(handle);
- clear_nlink(inode);
+ if (err)
goto err_drop_inode;
- }
} else {
/* clear the extent format for fast symlink */
ext4_clear_inode_flag(inode, EXT4_INODE_EXTENTS);
- inode->i_op = &ext4_fast_symlink_inode_operations;
- memcpy((char *)&EXT4_I(inode)->i_data, symname, l);
- inode->i_size = l-1;
+ inode->i_op = encryption_required ?
+ &ext4_symlink_inode_operations :
+ &ext4_fast_symlink_inode_operations;
+ memcpy((char *)&EXT4_I(inode)->i_data, disk_link.name,
+ disk_link.len);
+ inode->i_size = disk_link.len - 1;
}
EXT4_I(inode)->i_disksize = inode->i_size;
err = ext4_add_nondir(handle, dentry, inode);
if (!err && IS_DIRSYNC(dir))
ext4_handle_sync(handle);
-out_stop:
if (handle)
ext4_journal_stop(handle);
- if (err == -ENOSPC && ext4_should_retry_alloc(dir->i_sb, &retries))
- goto retry;
+ kfree(sd);
return err;
err_drop_inode:
+ if (handle)
+ ext4_journal_stop(handle);
+ kfree(sd);
+ clear_nlink(inode);
unlock_new_inode(inode);
iput(inode);
return err;
@@ -2937,7 +3344,9 @@
if (inode->i_nlink >= EXT4_LINK_MAX)
return -EMLINK;
-
+ if (ext4_encrypted_inode(dir) &&
+ !ext4_is_child_context_consistent_with_parent(dir, inode))
+ return -EPERM;
dquot_initialize(dir);
retry:
@@ -3238,6 +3647,14 @@
if (!old.bh || le32_to_cpu(old.de->inode) != old.inode->i_ino)
goto end_rename;
+ if ((old.dir != new.dir) &&
+ ext4_encrypted_inode(new.dir) &&
+ !ext4_is_child_context_consistent_with_parent(new.dir,
+ old.inode)) {
+ retval = -EPERM;
+ goto end_rename;
+ }
+
new.bh = ext4_find_entry(new.dir, &new.dentry->d_name,
&new.de, &new.inlined);
if (IS_ERR(new.bh)) {
@@ -3258,12 +3675,18 @@
EXT4_INDEX_EXTRA_TRANS_BLOCKS + 2);
if (!(flags & RENAME_WHITEOUT)) {
handle = ext4_journal_start(old.dir, EXT4_HT_DIR, credits);
- if (IS_ERR(handle))
- return PTR_ERR(handle);
+ if (IS_ERR(handle)) {
+ retval = PTR_ERR(handle);
+ handle = NULL;
+ goto end_rename;
+ }
} else {
whiteout = ext4_whiteout_for_rename(&old, credits, &handle);
- if (IS_ERR(whiteout))
- return PTR_ERR(whiteout);
+ if (IS_ERR(whiteout)) {
+ retval = PTR_ERR(whiteout);
+ whiteout = NULL;
+ goto end_rename;
+ }
}
if (IS_DIRSYNC(old.dir) || IS_DIRSYNC(new.dir))
@@ -3272,7 +3695,7 @@
if (S_ISDIR(old.inode->i_mode)) {
if (new.inode) {
retval = -ENOTEMPTY;
- if (!empty_dir(new.inode))
+ if (!ext4_empty_dir(new.inode))
goto end_rename;
} else {
retval = -EMLINK;
@@ -3346,8 +3769,9 @@
ext4_dec_count(handle, old.dir);
if (new.inode) {
- /* checked empty_dir above, can't have another parent,
- * ext4_dec_count() won't work for many-linked dirs */
+ /* checked ext4_empty_dir above, can't have another
+ * parent, ext4_dec_count() won't work for many-linked
+ * dirs */
clear_nlink(new.inode);
} else {
ext4_inc_count(handle, new.dir);
@@ -3427,8 +3851,11 @@
handle = ext4_journal_start(old.dir, EXT4_HT_DIR,
(2 * EXT4_DATA_TRANS_BLOCKS(old.dir->i_sb) +
2 * EXT4_INDEX_EXTRA_TRANS_BLOCKS + 2));
- if (IS_ERR(handle))
- return PTR_ERR(handle);
+ if (IS_ERR(handle)) {
+ retval = PTR_ERR(handle);
+ handle = NULL;
+ goto end_rename;
+ }
if (IS_DIRSYNC(old.dir) || IS_DIRSYNC(new.dir))
ext4_handle_sync(handle);
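
Directory-entry matching above now funnels through ext4_match() with an optional filename-crypto context: for an encrypted directory each on-disk name is decrypted into a scratch buffer before comparison, and crypto failures propagate as negative values rather than "no match". Below is a stand-alone restatement of that control flow; decrypt_name() is a trivial stand-in for ext4_fname_disk_to_usr(), not a kernel function.

	#include <stdbool.h>
	#include <stdio.h>
	#include <string.h>

	struct dirent_sk { const char *name; int name_len; bool in_use; };

	/* Trivial stand-in: a real implementation would decrypt de->name. */
	static int decrypt_name(const struct dirent_sk *de, char *out, int outlen)
	{
		int n = de->name_len < outlen ? de->name_len : outlen;

		memcpy(out, de->name, n);
		return n;
	}

	static int match(bool dir_encrypted, const char *name, int len,
			 const struct dirent_sk *de, char *scratch, int scratchlen)
	{
		if (!de->in_use)
			return 0;
		if (dir_encrypted) {
			int n = decrypt_name(de, scratch, scratchlen);

			if (n < 0)
				return n;	/* crypto error, not "no match" */
			return n == len && !memcmp(name, scratch, len);
		}
		return de->name_len == len && !memcmp(name, de->name, len);
	}

	int main(void)
	{
		struct dirent_sk de = { "foo", 3, true };
		char scratch[255];

		printf("%d %d\n",
		       match(false, "foo", 3, &de, scratch, sizeof(scratch)),
		       match(true, "bar", 3, &de, scratch, sizeof(scratch)));
		return 0;	/* prints 1 0 */
	}
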
diff --git a/fs/ext4/page-io.c b/fs/ext4/page-io.c
index 4649842..5765f88 100644
--- a/fs/ext4/page-io.c
+++ b/fs/ext4/page-io.c
@@ -8,7 +8,6 @@
#include <linux/fs.h>
#include <linux/time.h>
-#include <linux/jbd2.h>
#include <linux/highuid.h>
#include <linux/pagemap.h>
#include <linux/quotaops.h>
@@ -24,7 +23,6 @@
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/mm.h>
-#include <linux/ratelimit.h>
#include "ext4_jbd2.h"
#include "xattr.h"
@@ -68,6 +66,10 @@
bio_for_each_segment_all(bvec, bio, i) {
struct page *page = bvec->bv_page;
+#ifdef CONFIG_EXT4_FS_ENCRYPTION
+ struct page *data_page = NULL;
+ struct ext4_crypto_ctx *ctx = NULL;
+#endif
struct buffer_head *bh, *head;
unsigned bio_start = bvec->bv_offset;
unsigned bio_end = bio_start + bvec->bv_len;
@@ -77,6 +79,15 @@
if (!page)
continue;
+#ifdef CONFIG_EXT4_FS_ENCRYPTION
+ if (!page->mapping) {
+ /* The bounce data pages are unmapped. */
+ data_page = page;
+ ctx = (struct ext4_crypto_ctx *)page_private(data_page);
+ page = ctx->control_page;
+ }
+#endif
+
if (error) {
SetPageError(page);
set_bit(AS_EIO, &page->mapping->flags);
@@ -101,8 +112,13 @@
} while ((bh = bh->b_this_page) != head);
bit_spin_unlock(BH_Uptodate_Lock, &head->b_state);
local_irq_restore(flags);
- if (!under_io)
+ if (!under_io) {
+#ifdef CONFIG_EXT4_FS_ENCRYPTION
+ if (ctx)
+ ext4_restore_control_page(data_page);
+#endif
end_page_writeback(page);
+ }
}
}
@@ -377,6 +393,7 @@
static int io_submit_add_bh(struct ext4_io_submit *io,
struct inode *inode,
+ struct page *page,
struct buffer_head *bh)
{
int ret;
@@ -390,7 +407,7 @@
if (ret)
return ret;
}
- ret = bio_add_page(io->io_bio, bh->b_page, bh->b_size, bh_offset(bh));
+ ret = bio_add_page(io->io_bio, page, bh->b_size, bh_offset(bh));
if (ret != bh->b_size)
goto submit_and_retry;
io->io_next_block++;
@@ -403,6 +420,7 @@
struct writeback_control *wbc,
bool keep_towrite)
{
+ struct page *data_page = NULL;
struct inode *inode = page->mapping->host;
unsigned block_start, blocksize;
struct buffer_head *bh, *head;
@@ -462,19 +480,29 @@
set_buffer_async_write(bh);
} while ((bh = bh->b_this_page) != head);
- /* Now submit buffers to write */
bh = head = page_buffers(page);
+
+ if (ext4_encrypted_inode(inode) && S_ISREG(inode->i_mode)) {
+ data_page = ext4_encrypt(inode, page);
+ if (IS_ERR(data_page)) {
+ ret = PTR_ERR(data_page);
+ data_page = NULL;
+ goto out;
+ }
+ }
+
+ /* Now submit buffers to write */
do {
if (!buffer_async_write(bh))
continue;
- ret = io_submit_add_bh(io, inode, bh);
+ ret = io_submit_add_bh(io, inode,
+ data_page ? data_page : page, bh);
if (ret) {
/*
* We only get here on ENOMEM. Not much else
* we can do but mark the page as dirty, and
* better luck next time.
*/
- redirty_page_for_writepage(wbc, page);
break;
}
nr_submitted++;
@@ -483,6 +511,11 @@
/* Error stopped previous loop? Clean up buffers... */
if (ret) {
+ out:
+ if (data_page)
+ ext4_restore_control_page(data_page);
+ printk_ratelimited(KERN_ERR "%s: ret = %d\n", __func__, ret);
+ redirty_page_for_writepage(wbc, page);
do {
clear_buffer_async_write(bh);
bh = bh->b_this_page;
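
In the writeback path above, encrypted regular files are never written in place: ext4_encrypt() produces a bounce page, the bounce page is what io_submit_add_bh() attaches to the bio, and on completion the original page is recovered through the crypto context stored in the bounce page. Below is a toy stand-alone sketch of that submit/complete pairing; all types and helpers are illustrative, not kernel API.

	#include <stdbool.h>
	#include <stdio.h>

	struct page_sk { int id; struct page_sk *control_page; };

	static struct page_sk bounce;		/* pretend bounce page */

	static struct page_sk *encrypt_page(struct page_sk *page)
	{
		bounce.id = page->id + 1000;	/* "ciphertext" copy */
		bounce.control_page = page;	/* remember the original */
		return &bounce;
	}

	static struct page_sk *page_for_bio(struct page_sk *page, bool encrypted)
	{
		return encrypted ? encrypt_page(page) : page;	/* else write in place */
	}

	int main(void)
	{
		struct page_sk p = { .id = 7, .control_page = NULL };
		struct page_sk *out = page_for_bio(&p, true);

		/* On completion, the original is recovered via control_page. */
		printf("submit page %d, complete page %d\n", out->id,
		       out->control_page->id);
		return 0;
	}
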
diff --git a/fs/ext4/readpage.c b/fs/ext4/readpage.c
new file mode 100644
index 0000000..171b9ac
--- /dev/null
+++ b/fs/ext4/readpage.c
@@ -0,0 +1,328 @@
+/*
+ * linux/fs/ext4/readpage.c
+ *
+ * Copyright (C) 2002, Linus Torvalds.
+ * Copyright (C) 2015, Google, Inc.
+ *
+ * This was originally taken from fs/mpage.c
+ *
+ * The ext4_mpage_readpages() function here is intended to replace
+ * mpage_readpages() in the general case, not just for encrypted
+ * files. It has some limitations (see below), where it will fall
+ * back to block_read_full_page(), but these limitations should only
+ * be hit when page_size != block_size.
+ *
+ * This will allow us to attach a callback function to support ext4
+ * encryption.
+ *
+ * If anything unusual happens, such as:
+ *
+ * - encountering a page which has buffers
+ * - encountering a page which has a non-hole after a hole
+ * - encountering a page with non-contiguous blocks
+ *
+ * then this code just gives up and calls the buffer_head-based read function.
+ * It does handle a page which has holes at the end - that is a common case:
+ * the end-of-file on blocksize < PAGE_CACHE_SIZE setups.
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/export.h>
+#include <linux/mm.h>
+#include <linux/kdev_t.h>
+#include <linux/gfp.h>
+#include <linux/bio.h>
+#include <linux/fs.h>
+#include <linux/buffer_head.h>
+#include <linux/blkdev.h>
+#include <linux/highmem.h>
+#include <linux/prefetch.h>
+#include <linux/mpage.h>
+#include <linux/writeback.h>
+#include <linux/backing-dev.h>
+#include <linux/pagevec.h>
+#include <linux/cleancache.h>
+
+#include "ext4.h"
+
+/*
+ * Call ext4_decrypt on every single page, reusing the encryption
+ * context.
+ */
+static void completion_pages(struct work_struct *work)
+{
+#ifdef CONFIG_EXT4_FS_ENCRYPTION
+ struct ext4_crypto_ctx *ctx =
+ container_of(work, struct ext4_crypto_ctx, work);
+ struct bio *bio = ctx->bio;
+ struct bio_vec *bv;
+ int i;
+
+ bio_for_each_segment_all(bv, bio, i) {
+ struct page *page = bv->bv_page;
+
+ int ret = ext4_decrypt(ctx, page);
+ if (ret) {
+ WARN_ON_ONCE(1);
+ SetPageError(page);
+ } else
+ SetPageUptodate(page);
+ unlock_page(page);
+ }
+ ext4_release_crypto_ctx(ctx);
+ bio_put(bio);
+#else
+ BUG();
+#endif
+}
+
+static inline bool ext4_bio_encrypted(struct bio *bio)
+{
+#ifdef CONFIG_EXT4_FS_ENCRYPTION
+ return unlikely(bio->bi_private != NULL);
+#else
+ return false;
+#endif
+}
+
+/*
+ * I/O completion handler for multipage BIOs.
+ *
+ * The mpage code never puts partial pages into a BIO (except for end-of-file).
+ * If a page does not map to a contiguous run of blocks then it simply falls
+ * back to block_read_full_page().
+ *
+ * Why is this? If a page's completion depends on a number of different BIOs
+ * which can complete in any order (or at the same time) then determining the
+ * status of that page is hard. See end_buffer_async_read() for the details.
+ * There is no point in duplicating all that complexity.
+ */
+static void mpage_end_io(struct bio *bio, int err)
+{
+ struct bio_vec *bv;
+ int i;
+
+ if (ext4_bio_encrypted(bio)) {
+ struct ext4_crypto_ctx *ctx = bio->bi_private;
+
+ if (err) {
+ ext4_release_crypto_ctx(ctx);
+ } else {
+ INIT_WORK(&ctx->work, completion_pages);
+ ctx->bio = bio;
+ queue_work(ext4_read_workqueue, &ctx->work);
+ return;
+ }
+ }
+ bio_for_each_segment_all(bv, bio, i) {
+ struct page *page = bv->bv_page;
+
+ if (!err) {
+ SetPageUptodate(page);
+ } else {
+ ClearPageUptodate(page);
+ SetPageError(page);
+ }
+ unlock_page(page);
+ }
+
+ bio_put(bio);
+}
+
+int ext4_mpage_readpages(struct address_space *mapping,
+ struct list_head *pages, struct page *page,
+ unsigned nr_pages)
+{
+ struct bio *bio = NULL;
+ unsigned page_idx;
+ sector_t last_block_in_bio = 0;
+
+ struct inode *inode = mapping->host;
+ const unsigned blkbits = inode->i_blkbits;
+ const unsigned blocks_per_page = PAGE_CACHE_SIZE >> blkbits;
+ const unsigned blocksize = 1 << blkbits;
+ sector_t block_in_file;
+ sector_t last_block;
+ sector_t last_block_in_file;
+ sector_t blocks[MAX_BUF_PER_PAGE];
+ unsigned page_block;
+ struct block_device *bdev = inode->i_sb->s_bdev;
+ int length;
+ unsigned relative_block = 0;
+ struct ext4_map_blocks map;
+
+ map.m_pblk = 0;
+ map.m_lblk = 0;
+ map.m_len = 0;
+ map.m_flags = 0;
+
+ for (page_idx = 0; nr_pages; page_idx++, nr_pages--) {
+ int fully_mapped = 1;
+ unsigned first_hole = blocks_per_page;
+
+ prefetchw(&page->flags);
+ if (pages) {
+ page = list_entry(pages->prev, struct page, lru);
+ list_del(&page->lru);
+ if (add_to_page_cache_lru(page, mapping,
+ page->index, GFP_KERNEL))
+ goto next_page;
+ }
+
+ if (page_has_buffers(page))
+ goto confused;
+
+ block_in_file = (sector_t)page->index << (PAGE_CACHE_SHIFT - blkbits);
+ last_block = block_in_file + nr_pages * blocks_per_page;
+ last_block_in_file = (i_size_read(inode) + blocksize - 1) >> blkbits;
+ if (last_block > last_block_in_file)
+ last_block = last_block_in_file;
+ page_block = 0;
+
+ /*
+ * Map blocks using the previous result first.
+ */
+ if ((map.m_flags & EXT4_MAP_MAPPED) &&
+ block_in_file > map.m_lblk &&
+ block_in_file < (map.m_lblk + map.m_len)) {
+ unsigned map_offset = block_in_file - map.m_lblk;
+ unsigned last = map.m_len - map_offset;
+
+ for (relative_block = 0; ; relative_block++) {
+ if (relative_block == last) {
+ /* needed? */
+ map.m_flags &= ~EXT4_MAP_MAPPED;
+ break;
+ }
+ if (page_block == blocks_per_page)
+ break;
+ blocks[page_block] = map.m_pblk + map_offset +
+ relative_block;
+ page_block++;
+ block_in_file++;
+ }
+ }
+
+ /*
+ * Then do more ext4_map_blocks() calls until we are
+ * done with this page.
+ */
+ while (page_block < blocks_per_page) {
+ if (block_in_file < last_block) {
+ map.m_lblk = block_in_file;
+ map.m_len = last_block - block_in_file;
+
+ if (ext4_map_blocks(NULL, inode, &map, 0) < 0) {
+ set_error_page:
+ SetPageError(page);
+ zero_user_segment(page, 0,
+ PAGE_CACHE_SIZE);
+ unlock_page(page);
+ goto next_page;
+ }
+ }
+ if ((map.m_flags & EXT4_MAP_MAPPED) == 0) {
+ fully_mapped = 0;
+ if (first_hole == blocks_per_page)
+ first_hole = page_block;
+ page_block++;
+ block_in_file++;
+ continue;
+ }
+ if (first_hole != blocks_per_page)
+ goto confused; /* hole -> non-hole */
+
+ /* Contiguous blocks? */
+ if (page_block && blocks[page_block-1] != map.m_pblk-1)
+ goto confused;
+ for (relative_block = 0; ; relative_block++) {
+ if (relative_block == map.m_len) {
+ /* needed? */
+ map.m_flags &= ~EXT4_MAP_MAPPED;
+ break;
+ } else if (page_block == blocks_per_page)
+ break;
+ blocks[page_block] = map.m_pblk+relative_block;
+ page_block++;
+ block_in_file++;
+ }
+ }
+ if (first_hole != blocks_per_page) {
+ zero_user_segment(page, first_hole << blkbits,
+ PAGE_CACHE_SIZE);
+ if (first_hole == 0) {
+ SetPageUptodate(page);
+ unlock_page(page);
+ goto next_page;
+ }
+ } else if (fully_mapped) {
+ SetPageMappedToDisk(page);
+ }
+ if (fully_mapped && blocks_per_page == 1 &&
+ !PageUptodate(page) && cleancache_get_page(page) == 0) {
+ SetPageUptodate(page);
+ goto confused;
+ }
+
+ /*
+ * This page will go to BIO. Do we need to send this
+ * BIO off first?
+ */
+ if (bio && (last_block_in_bio != blocks[0] - 1)) {
+ submit_and_realloc:
+ submit_bio(READ, bio);
+ bio = NULL;
+ }
+ if (bio == NULL) {
+ struct ext4_crypto_ctx *ctx = NULL;
+
+ if (ext4_encrypted_inode(inode) &&
+ S_ISREG(inode->i_mode)) {
+ ctx = ext4_get_crypto_ctx(inode);
+ if (IS_ERR(ctx))
+ goto set_error_page;
+ }
+ bio = bio_alloc(GFP_KERNEL,
+ min_t(int, nr_pages, bio_get_nr_vecs(bdev)));
+ if (!bio) {
+ if (ctx)
+ ext4_release_crypto_ctx(ctx);
+ goto set_error_page;
+ }
+ bio->bi_bdev = bdev;
+ bio->bi_iter.bi_sector = blocks[0] << (blkbits - 9);
+ bio->bi_end_io = mpage_end_io;
+ bio->bi_private = ctx;
+ }
+
+ length = first_hole << blkbits;
+ if (bio_add_page(bio, page, length, 0) < length)
+ goto submit_and_realloc;
+
+ if (((map.m_flags & EXT4_MAP_BOUNDARY) &&
+ (relative_block == map.m_len)) ||
+ (first_hole != blocks_per_page)) {
+ submit_bio(READ, bio);
+ bio = NULL;
+ } else
+ last_block_in_bio = blocks[blocks_per_page - 1];
+ goto next_page;
+ confused:
+ if (bio) {
+ submit_bio(READ, bio);
+ bio = NULL;
+ }
+ if (!PageUptodate(page))
+ block_read_full_page(page, ext4_get_block);
+ else
+ unlock_page(page);
+ next_page:
+ if (pages)
+ page_cache_release(page);
+ }
+ BUG_ON(pages && !list_empty(pages));
+ if (bio)
+ submit_bio(READ, bio);
+ return 0;
+}
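
The read completion above is split across two contexts: mpage_end_io() runs in interrupt context and only decides what to do, while the actual decryption happens later in completion_pages() on ext4_read_workqueue. Below is a toy stand-alone sketch of that hand-off; the helper names are illustrative stand-ins, not kernel API.

	#include <stdbool.h>
	#include <stdio.h>

	struct bio_sk { void *crypto_ctx; bool failed; };

	static void finish_inline(struct bio_sk *b)
	{
		printf("pages marked %s\n", b->failed ? "error" : "uptodate");
	}

	static void queue_decrypt_work(struct bio_sk *b)
	{
		(void)b;
		printf("deferred: decrypt, then unlock pages\n");
	}

	static void end_io(struct bio_sk *b)
	{
		if (b->crypto_ctx && !b->failed) {
			/* like queue_work(ext4_read_workqueue, &ctx->work) */
			queue_decrypt_work(b);
			return;
		}
		finish_inline(b);
	}

	int main(void)
	{
		struct bio_sk plain = { 0 }, enc = { .crypto_ctx = &plain };

		end_io(&plain);
		end_io(&enc);
		return 0;
	}
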
diff --git a/fs/ext4/super.c b/fs/ext4/super.c
index d348c7d..821f22d 100644
--- a/fs/ext4/super.c
+++ b/fs/ext4/super.c
@@ -21,7 +21,6 @@
#include <linux/fs.h>
#include <linux/time.h>
#include <linux/vmalloc.h>
-#include <linux/jbd2.h>
#include <linux/slab.h>
#include <linux/init.h>
#include <linux/blkdev.h>
@@ -323,22 +322,6 @@
ext4_commit_super(sb, 1);
}
-/*
- * The del_gendisk() function uninitializes the disk-specific data
- * structures, including the bdi structure, without telling anyone
- * else. Once this happens, any attempt to call mark_buffer_dirty()
- * (for example, by ext4_commit_super), will cause a kernel OOPS.
- * This is a kludge to prevent these oops until we can put in a proper
- * hook in del_gendisk() to inform the VFS and file system layers.
- */
-static int block_device_ejected(struct super_block *sb)
-{
- struct inode *bd_inode = sb->s_bdev->bd_inode;
- struct backing_dev_info *bdi = inode_to_bdi(bd_inode);
-
- return bdi->dev == NULL;
-}
-
static void ext4_journal_commit_callback(journal_t *journal, transaction_t *txn)
{
struct super_block *sb = journal->j_private;
@@ -893,6 +876,9 @@
atomic_set(&ei->i_ioend_count, 0);
atomic_set(&ei->i_unwritten, 0);
INIT_WORK(&ei->i_rsv_conversion_work, ext4_end_io_rsv_work);
+#ifdef CONFIG_EXT4_FS_ENCRYPTION
+ ei->i_encryption_key.mode = EXT4_ENCRYPTION_MODE_INVALID;
+#endif
return &ei->vfs_inode;
}
@@ -1120,7 +1106,7 @@
Opt_commit, Opt_min_batch_time, Opt_max_batch_time, Opt_journal_dev,
Opt_journal_path, Opt_journal_checksum, Opt_journal_async_commit,
Opt_abort, Opt_data_journal, Opt_data_ordered, Opt_data_writeback,
- Opt_data_err_abort, Opt_data_err_ignore,
+ Opt_data_err_abort, Opt_data_err_ignore, Opt_test_dummy_encryption,
Opt_usrjquota, Opt_grpjquota, Opt_offusrjquota, Opt_offgrpjquota,
Opt_jqfmt_vfsold, Opt_jqfmt_vfsv0, Opt_jqfmt_vfsv1, Opt_quota,
Opt_noquota, Opt_barrier, Opt_nobarrier, Opt_err,
@@ -1211,6 +1197,7 @@
{Opt_init_itable, "init_itable"},
{Opt_noinit_itable, "noinit_itable"},
{Opt_max_dir_size_kb, "max_dir_size_kb=%u"},
+ {Opt_test_dummy_encryption, "test_dummy_encryption"},
{Opt_removed, "check=none"}, /* mount option from ext2/3 */
{Opt_removed, "nocheck"}, /* mount option from ext2/3 */
{Opt_removed, "reservation"}, /* mount option from ext2/3 */
@@ -1412,6 +1399,7 @@
{Opt_jqfmt_vfsv0, QFMT_VFS_V0, MOPT_QFMT},
{Opt_jqfmt_vfsv1, QFMT_VFS_V1, MOPT_QFMT},
{Opt_max_dir_size_kb, 0, MOPT_GTE0},
+ {Opt_test_dummy_encryption, 0, MOPT_GTE0},
{Opt_err, 0, 0}
};
@@ -1588,6 +1576,15 @@
}
*journal_ioprio =
IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE, arg);
+ } else if (token == Opt_test_dummy_encryption) {
+#ifdef CONFIG_EXT4_FS_ENCRYPTION
+ sbi->s_mount_flags |= EXT4_MF_TEST_DUMMY_ENCRYPTION;
+ ext4_msg(sb, KERN_WARNING,
+ "Test dummy encryption mode enabled");
+#else
+ ext4_msg(sb, KERN_WARNING,
+ "Test dummy encryption mount option ignored");
+#endif
} else if (m->flags & MOPT_DATAJ) {
if (is_remount) {
if (!sbi->s_journal)
@@ -2685,11 +2682,13 @@
EXT4_INFO_ATTR(lazy_itable_init);
EXT4_INFO_ATTR(batched_discard);
EXT4_INFO_ATTR(meta_bg_resize);
+EXT4_INFO_ATTR(encryption);
static struct attribute *ext4_feat_attrs[] = {
ATTR_LIST(lazy_itable_init),
ATTR_LIST(batched_discard),
ATTR_LIST(meta_bg_resize),
+ ATTR_LIST(encryption),
NULL,
};
@@ -3448,6 +3447,11 @@
if (sb->s_bdev->bd_part)
sbi->s_sectors_written_start =
part_stat_read(sb->s_bdev->bd_part, sectors[1]);
+#ifdef CONFIG_EXT4_FS_ENCRYPTION
+ /* Modes of operation for file and directory encryption. */
+ sbi->s_file_encryption_mode = EXT4_ENCRYPTION_MODE_AES_256_XTS;
+ sbi->s_dir_encryption_mode = EXT4_ENCRYPTION_MODE_INVALID;
+#endif
/* Cleanup superblock name */
for (cp = sb->s_id; (cp = strchr(cp, '/'));)
@@ -3692,6 +3696,13 @@
}
}
+ if (EXT4_HAS_INCOMPAT_FEATURE(sb, EXT4_FEATURE_INCOMPAT_ENCRYPT) &&
+ es->s_encryption_level) {
+ ext4_msg(sb, KERN_ERR, "Unsupported encryption level %d",
+ es->s_encryption_level);
+ goto failed_mount;
+ }
+
if (sb->s_blocksize != blocksize) {
/* Validate the filesystem blocksize */
if (!sb_set_blocksize(sb, blocksize)) {
@@ -4054,6 +4065,13 @@
}
}
+ if (unlikely(sbi->s_mount_flags & EXT4_MF_TEST_DUMMY_ENCRYPTION) &&
+ !(sb->s_flags & MS_RDONLY) &&
+ !EXT4_HAS_INCOMPAT_FEATURE(sb, EXT4_FEATURE_INCOMPAT_ENCRYPT)) {
+ EXT4_SET_INCOMPAT_FEATURE(sb, EXT4_FEATURE_INCOMPAT_ENCRYPT);
+ ext4_commit_super(sb, 1);
+ }
+
/*
* Get the # of file system overhead blocks from the
* superblock if present.
@@ -4570,7 +4588,7 @@
struct buffer_head *sbh = EXT4_SB(sb)->s_sbh;
int error = 0;
- if (!sbh || block_device_ejected(sb))
+ if (!sbh)
return error;
if (buffer_write_io_error(sbh)) {
/*
diff --git a/fs/ext4/symlink.c b/fs/ext4/symlink.c
index ff37119..136ca0e 100644
--- a/fs/ext4/symlink.c
+++ b/fs/ext4/symlink.c
@@ -18,13 +18,101 @@
*/
#include <linux/fs.h>
-#include <linux/jbd2.h>
#include <linux/namei.h>
#include "ext4.h"
#include "xattr.h"
+#ifdef CONFIG_EXT4_FS_ENCRYPTION
static void *ext4_follow_link(struct dentry *dentry, struct nameidata *nd)
{
+ struct page *cpage = NULL;
+ char *caddr, *paddr = NULL;
+ struct ext4_str cstr, pstr;
+ struct inode *inode = dentry->d_inode;
+ struct ext4_fname_crypto_ctx *ctx = NULL;
+ struct ext4_encrypted_symlink_data *sd;
+ loff_t size = min_t(loff_t, i_size_read(inode), PAGE_SIZE - 1);
+ int res;
+ u32 plen, max_size = inode->i_sb->s_blocksize;
+
+ if (!ext4_encrypted_inode(inode))
+ return page_follow_link_light(dentry, nd);
+
+ ctx = ext4_get_fname_crypto_ctx(inode, inode->i_sb->s_blocksize);
+ if (IS_ERR(ctx))
+ return ctx;
+
+ if (ext4_inode_is_fast_symlink(inode)) {
+ caddr = (char *) EXT4_I(dentry->d_inode)->i_data;
+ max_size = sizeof(EXT4_I(dentry->d_inode)->i_data);
+ } else {
+ cpage = read_mapping_page(inode->i_mapping, 0, NULL);
+ if (IS_ERR(cpage)) {
+ ext4_put_fname_crypto_ctx(&ctx);
+ return cpage;
+ }
+ caddr = kmap(cpage);
+ caddr[size] = 0;
+ }
+
+ /* Symlink is encrypted */
+ sd = (struct ext4_encrypted_symlink_data *)caddr;
+ cstr.name = sd->encrypted_path;
+ cstr.len = le32_to_cpu(sd->len);
+ if ((cstr.len +
+ sizeof(struct ext4_encrypted_symlink_data) - 1) >
+ max_size) {
+ /* Symlink data on the disk is corrupted */
+ res = -EIO;
+ goto errout;
+ }
+ plen = (cstr.len < EXT4_FNAME_CRYPTO_DIGEST_SIZE*2) ?
+ EXT4_FNAME_CRYPTO_DIGEST_SIZE*2 : cstr.len;
+ paddr = kmalloc(plen + 1, GFP_NOFS);
+ if (!paddr) {
+ res = -ENOMEM;
+ goto errout;
+ }
+ pstr.name = paddr;
+ res = _ext4_fname_disk_to_usr(ctx, &cstr, &pstr);
+ if (res < 0)
+ goto errout;
+ /* Null-terminate the name */
+ if (res <= plen)
+ paddr[res] = '\0';
+ nd_set_link(nd, paddr);
+ ext4_put_fname_crypto_ctx(&ctx);
+ if (cpage) {
+ kunmap(cpage);
+ page_cache_release(cpage);
+ }
+ return NULL;
+errout:
+ ext4_put_fname_crypto_ctx(&ctx);
+ if (cpage) {
+ kunmap(cpage);
+ page_cache_release(cpage);
+ }
+ kfree(paddr);
+ return ERR_PTR(res);
+}
+
+static void ext4_put_link(struct dentry *dentry, struct nameidata *nd,
+ void *cookie)
+{
+ struct page *page = cookie;
+
+ if (!page) {
+ kfree(nd_get_link(nd));
+ } else {
+ kunmap(page);
+ page_cache_release(page);
+ }
+}
+#endif
+
+static void *ext4_follow_fast_link(struct dentry *dentry, struct nameidata *nd)
+{
struct ext4_inode_info *ei = EXT4_I(dentry->d_inode);
nd_set_link(nd, (char *) ei->i_data);
return NULL;
@@ -32,8 +120,13 @@
const struct inode_operations ext4_symlink_inode_operations = {
.readlink = generic_readlink,
+#ifdef CONFIG_EXT4_FS_ENCRYPTION
+ .follow_link = ext4_follow_link,
+ .put_link = ext4_put_link,
+#else
.follow_link = page_follow_link_light,
.put_link = page_put_link,
+#endif
.setattr = ext4_setattr,
.setxattr = generic_setxattr,
.getxattr = generic_getxattr,
@@ -43,7 +136,7 @@
const struct inode_operations ext4_fast_symlink_inode_operations = {
.readlink = generic_readlink,
- .follow_link = ext4_follow_link,
+ .follow_link = ext4_follow_fast_link,
.setattr = ext4_setattr,
.setxattr = generic_setxattr,
.getxattr = generic_getxattr,
diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
index 1e09fc7..759842f 100644
--- a/fs/ext4/xattr.c
+++ b/fs/ext4/xattr.c
@@ -55,7 +55,6 @@
#include <linux/slab.h>
#include <linux/mbcache.h>
#include <linux/quotaops.h>
-#include <linux/rwsem.h>
#include "ext4_jbd2.h"
#include "ext4.h"
#include "xattr.h"
@@ -639,8 +638,7 @@
free += EXT4_XATTR_LEN(name_len);
}
if (i->value) {
- if (free < EXT4_XATTR_SIZE(i->value_len) ||
- free < EXT4_XATTR_LEN(name_len) +
+ if (free < EXT4_XATTR_LEN(name_len) +
EXT4_XATTR_SIZE(i->value_len))
return -ENOSPC;
}
diff --git a/fs/ext4/xattr.h b/fs/ext4/xattr.h
index 29bedf5..ddc0957 100644
--- a/fs/ext4/xattr.h
+++ b/fs/ext4/xattr.h
@@ -23,6 +23,7 @@
#define EXT4_XATTR_INDEX_SECURITY 6
#define EXT4_XATTR_INDEX_SYSTEM 7
#define EXT4_XATTR_INDEX_RICHACL 8
+#define EXT4_XATTR_INDEX_ENCRYPTION 9
struct ext4_xattr_header {
__le32 h_magic; /* magic number for identification */
@@ -98,6 +99,8 @@
extern const struct xattr_handler ext4_xattr_trusted_handler;
extern const struct xattr_handler ext4_xattr_security_handler;
+#define EXT4_XATTR_NAME_ENCRYPTION_CONTEXT "c"
+
extern ssize_t ext4_listxattr(struct dentry *, char *, size_t);
extern int ext4_xattr_get(struct inode *, int, const char *, void *, size_t);
diff --git a/fs/f2fs/Kconfig b/fs/f2fs/Kconfig
index 94e2d2f..05f0f66 100644
--- a/fs/f2fs/Kconfig
+++ b/fs/f2fs/Kconfig
@@ -1,5 +1,5 @@
config F2FS_FS
- tristate "F2FS filesystem support (EXPERIMENTAL)"
+ tristate "F2FS filesystem support"
depends on BLOCK
help
F2FS is based on Log-structured File System (LFS), which supports
diff --git a/fs/f2fs/acl.c b/fs/f2fs/acl.c
index 7422027..4320ffa 100644
--- a/fs/f2fs/acl.c
+++ b/fs/f2fs/acl.c
@@ -351,13 +351,11 @@
*acl = f2fs_acl_clone(p, GFP_NOFS);
if (!*acl)
- return -ENOMEM;
+ goto no_mem;
ret = f2fs_acl_create_masq(*acl, mode);
- if (ret < 0) {
- posix_acl_release(*acl);
- return -ENOMEM;
- }
+ if (ret < 0)
+ goto no_mem_clone;
if (ret == 0) {
posix_acl_release(*acl);
@@ -378,6 +376,12 @@
*default_acl = NULL;
*acl = NULL;
return 0;
+
+no_mem_clone:
+ posix_acl_release(*acl);
+no_mem:
+ posix_acl_release(p);
+ return -ENOMEM;
}
int f2fs_init_acl(struct inode *inode, struct inode *dir, struct page *ipage,
diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
index 7f794b7..a5e17a2 100644
--- a/fs/f2fs/checkpoint.c
+++ b/fs/f2fs/checkpoint.c
@@ -276,7 +276,7 @@
if (!clear_page_dirty_for_io(page))
goto continue_unlock;
- if (f2fs_write_meta_page(page, &wbc)) {
+ if (mapping->a_ops->writepage(page, &wbc)) {
unlock_page(page);
break;
}
@@ -464,20 +464,19 @@
void recover_orphan_inodes(struct f2fs_sb_info *sbi)
{
- block_t start_blk, orphan_blkaddr, i, j;
+ block_t start_blk, orphan_blocks, i, j;
if (!is_set_ckpt_flags(F2FS_CKPT(sbi), CP_ORPHAN_PRESENT_FLAG))
return;
set_sbi_flag(sbi, SBI_POR_DOING);
- start_blk = __start_cp_addr(sbi) + 1 +
- le32_to_cpu(F2FS_RAW_SUPER(sbi)->cp_payload);
- orphan_blkaddr = __start_sum_addr(sbi) - 1;
+ start_blk = __start_cp_addr(sbi) + 1 + __cp_payload(sbi);
+ orphan_blocks = __start_sum_addr(sbi) - 1 - __cp_payload(sbi);
- ra_meta_pages(sbi, start_blk, orphan_blkaddr, META_CP);
+ ra_meta_pages(sbi, start_blk, orphan_blocks, META_CP);
- for (i = 0; i < orphan_blkaddr; i++) {
+ for (i = 0; i < orphan_blocks; i++) {
struct page *page = get_meta_page(sbi, start_blk + i);
struct f2fs_orphan_block *orphan_blk;
@@ -615,7 +614,7 @@
unsigned long blk_size = sbi->blocksize;
unsigned long long cp1_version = 0, cp2_version = 0;
unsigned long long cp_start_blk_no;
- unsigned int cp_blks = 1 + le32_to_cpu(F2FS_RAW_SUPER(sbi)->cp_payload);
+ unsigned int cp_blks = 1 + __cp_payload(sbi);
block_t cp_blk_no;
int i;
@@ -796,6 +795,7 @@
* writing back dentry pages in the freeing inode.
*/
f2fs_submit_merged_bio(sbi, DATA, WRITE);
+ cond_resched();
}
goto retry;
}
@@ -884,7 +884,7 @@
__u32 crc32 = 0;
void *kaddr;
int i;
- int cp_payload_blks = le32_to_cpu(F2FS_RAW_SUPER(sbi)->cp_payload);
+ int cp_payload_blks = __cp_payload(sbi);
/*
* This avoids to conduct wrong roll-forward operations and uses
@@ -1048,17 +1048,18 @@
struct f2fs_checkpoint *ckpt = F2FS_CKPT(sbi);
unsigned long long ckpt_ver;
- trace_f2fs_write_checkpoint(sbi->sb, cpc->reason, "start block_ops");
-
mutex_lock(&sbi->cp_mutex);
if (!is_sbi_flag_set(sbi, SBI_IS_DIRTY) &&
- cpc->reason != CP_DISCARD && cpc->reason != CP_UMOUNT)
+ (cpc->reason == CP_FASTBOOT || cpc->reason == CP_SYNC))
goto out;
if (unlikely(f2fs_cp_error(sbi)))
goto out;
if (f2fs_readonly(sbi->sb))
goto out;
+
+ trace_f2fs_write_checkpoint(sbi->sb, cpc->reason, "start block_ops");
+
if (block_operations(sbi))
goto out;
@@ -1085,6 +1086,10 @@
unblock_operations(sbi);
stat_inc_cp_count(sbi->stat_info);
+
+ if (cpc->reason == CP_RECOVERY)
+ f2fs_msg(sbi->sb, KERN_NOTICE,
+ "checkpoint: version = %llx", ckpt_ver);
out:
mutex_unlock(&sbi->cp_mutex);
trace_f2fs_write_checkpoint(sbi->sb, cpc->reason, "finish checkpoint");
@@ -1103,14 +1108,9 @@
im->ino_num = 0;
}
- /*
- * considering 512 blocks in a segment 8 blocks are needed for cp
- * and log segment summaries. Remaining blocks are used to keep
- * orphan entries with the limitation one reserved segment
- * for cp pack we can have max 1020*504 orphan entries
- */
sbi->max_orphans = (sbi->blocks_per_seg - F2FS_CP_PACKS -
- NR_CURSEG_TYPE) * F2FS_ORPHANS_PER_BLOCK;
+ NR_CURSEG_TYPE - __cp_payload(sbi)) *
+ F2FS_ORPHANS_PER_BLOCK;
}
int __init create_checkpoint_caches(void)
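
The max_orphans change above accounts for the cp_payload blocks when sizing the orphan area. As a rough worked example, assuming the common constants (512 blocks per segment, F2FS_CP_PACKS == 2, NR_CURSEG_TYPE == 6 and F2FS_ORPHANS_PER_BLOCK == 1020 — assumed here, not taken from this diff):

/* Hypothetical constants, for illustration only. */
enum { BLOCKS_PER_SEG = 512, CP_PACKS = 2, CURSEG_TYPES = 6, ORPHANS_PER_BLOCK = 1020 };

static unsigned int max_orphans(unsigned int cp_payload_blocks)
{
	/* same shape as the sbi->max_orphans assignment above */
	return (BLOCKS_PER_SEG - CP_PACKS - CURSEG_TYPES - cp_payload_blocks)
			* ORPHANS_PER_BLOCK;
}

max_orphans(0) is 504 * 1020 = 514080, and every checkpoint payload block now lowers the limit by another 1020 entries.
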
diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index 319eda5..b91b0e1 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -25,6 +25,9 @@
#include "trace.h"
#include <trace/events/f2fs.h>
+static struct kmem_cache *extent_tree_slab;
+static struct kmem_cache *extent_node_slab;
+
static void f2fs_read_end_io(struct bio *bio, int err)
{
struct bio_vec *bvec;
@@ -197,7 +200,7 @@
* ->node_page
* update block addresses in the node page
*/
-static void __set_data_blkaddr(struct dnode_of_data *dn)
+void set_data_blkaddr(struct dnode_of_data *dn)
{
struct f2fs_node *rn;
__le32 *addr_array;
@@ -226,7 +229,7 @@
trace_f2fs_reserve_new_block(dn->inode, dn->nid, dn->ofs_in_node);
dn->data_blkaddr = NEW_ADDR;
- __set_data_blkaddr(dn);
+ set_data_blkaddr(dn);
mark_inode_dirty(dn->inode);
sync_inode_page(dn);
return 0;
@@ -248,73 +251,62 @@
return err;
}
-static int check_extent_cache(struct inode *inode, pgoff_t pgofs,
- struct buffer_head *bh_result)
+static void f2fs_map_bh(struct super_block *sb, pgoff_t pgofs,
+ struct extent_info *ei, struct buffer_head *bh_result)
+{
+ unsigned int blkbits = sb->s_blocksize_bits;
+ size_t max_size = bh_result->b_size;
+ size_t mapped_size;
+
+ clear_buffer_new(bh_result);
+ map_bh(bh_result, sb, ei->blk + pgofs - ei->fofs);
+ mapped_size = (ei->fofs + ei->len - pgofs) << blkbits;
+ bh_result->b_size = min(max_size, mapped_size);
+}
+
+static bool lookup_extent_info(struct inode *inode, pgoff_t pgofs,
+ struct extent_info *ei)
{
struct f2fs_inode_info *fi = F2FS_I(inode);
pgoff_t start_fofs, end_fofs;
block_t start_blkaddr;
- if (is_inode_flag_set(fi, FI_NO_EXTENT))
- return 0;
-
- read_lock(&fi->ext.ext_lock);
+ read_lock(&fi->ext_lock);
if (fi->ext.len == 0) {
- read_unlock(&fi->ext.ext_lock);
- return 0;
+ read_unlock(&fi->ext_lock);
+ return false;
}
stat_inc_total_hit(inode->i_sb);
start_fofs = fi->ext.fofs;
end_fofs = fi->ext.fofs + fi->ext.len - 1;
- start_blkaddr = fi->ext.blk_addr;
+ start_blkaddr = fi->ext.blk;
if (pgofs >= start_fofs && pgofs <= end_fofs) {
- unsigned int blkbits = inode->i_sb->s_blocksize_bits;
- size_t count;
-
- set_buffer_new(bh_result);
- map_bh(bh_result, inode->i_sb,
- start_blkaddr + pgofs - start_fofs);
- count = end_fofs - pgofs + 1;
- if (count < (UINT_MAX >> blkbits))
- bh_result->b_size = (count << blkbits);
- else
- bh_result->b_size = UINT_MAX;
-
+ *ei = fi->ext;
stat_inc_read_hit(inode->i_sb);
- read_unlock(&fi->ext.ext_lock);
- return 1;
+ read_unlock(&fi->ext_lock);
+ return true;
}
- read_unlock(&fi->ext.ext_lock);
- return 0;
+ read_unlock(&fi->ext_lock);
+ return false;
}
-void update_extent_cache(struct dnode_of_data *dn)
+static bool update_extent_info(struct inode *inode, pgoff_t fofs,
+ block_t blkaddr)
{
- struct f2fs_inode_info *fi = F2FS_I(dn->inode);
- pgoff_t fofs, start_fofs, end_fofs;
+ struct f2fs_inode_info *fi = F2FS_I(inode);
+ pgoff_t start_fofs, end_fofs;
block_t start_blkaddr, end_blkaddr;
int need_update = true;
- f2fs_bug_on(F2FS_I_SB(dn->inode), dn->data_blkaddr == NEW_ADDR);
-
- /* Update the page address in the parent node */
- __set_data_blkaddr(dn);
-
- if (is_inode_flag_set(fi, FI_NO_EXTENT))
- return;
-
- fofs = start_bidx_of_node(ofs_of_node(dn->node_page), fi) +
- dn->ofs_in_node;
-
- write_lock(&fi->ext.ext_lock);
+ write_lock(&fi->ext_lock);
start_fofs = fi->ext.fofs;
end_fofs = fi->ext.fofs + fi->ext.len - 1;
- start_blkaddr = fi->ext.blk_addr;
- end_blkaddr = fi->ext.blk_addr + fi->ext.len - 1;
+ start_blkaddr = fi->ext.blk;
+ end_blkaddr = fi->ext.blk + fi->ext.len - 1;
/* Drop and initialize the matched extent */
if (fi->ext.len == 1 && fofs == start_fofs)
@@ -322,24 +314,24 @@
/* Initial extent */
if (fi->ext.len == 0) {
- if (dn->data_blkaddr != NULL_ADDR) {
+ if (blkaddr != NULL_ADDR) {
fi->ext.fofs = fofs;
- fi->ext.blk_addr = dn->data_blkaddr;
+ fi->ext.blk = blkaddr;
fi->ext.len = 1;
}
goto end_update;
}
/* Front merge */
- if (fofs == start_fofs - 1 && dn->data_blkaddr == start_blkaddr - 1) {
+ if (fofs == start_fofs - 1 && blkaddr == start_blkaddr - 1) {
fi->ext.fofs--;
- fi->ext.blk_addr--;
+ fi->ext.blk--;
fi->ext.len++;
goto end_update;
}
/* Back merge */
- if (fofs == end_fofs + 1 && dn->data_blkaddr == end_blkaddr + 1) {
+ if (fofs == end_fofs + 1 && blkaddr == end_blkaddr + 1) {
fi->ext.len++;
goto end_update;
}
@@ -351,8 +343,7 @@
fi->ext.len = fofs - start_fofs;
} else {
fi->ext.fofs = fofs + 1;
- fi->ext.blk_addr = start_blkaddr +
- fofs - start_fofs + 1;
+ fi->ext.blk = start_blkaddr + fofs - start_fofs + 1;
fi->ext.len -= fofs - start_fofs + 1;
}
} else {
@@ -366,27 +357,583 @@
need_update = true;
}
end_update:
- write_unlock(&fi->ext.ext_lock);
- if (need_update)
- sync_inode_page(dn);
+ write_unlock(&fi->ext_lock);
+ return need_update;
+}
+
+static struct extent_node *__attach_extent_node(struct f2fs_sb_info *sbi,
+ struct extent_tree *et, struct extent_info *ei,
+ struct rb_node *parent, struct rb_node **p)
+{
+ struct extent_node *en;
+
+ en = kmem_cache_alloc(extent_node_slab, GFP_ATOMIC);
+ if (!en)
+ return NULL;
+
+ en->ei = *ei;
+ INIT_LIST_HEAD(&en->list);
+
+ rb_link_node(&en->rb_node, parent, p);
+ rb_insert_color(&en->rb_node, &et->root);
+ et->count++;
+ atomic_inc(&sbi->total_ext_node);
+ return en;
+}
+
+static void __detach_extent_node(struct f2fs_sb_info *sbi,
+ struct extent_tree *et, struct extent_node *en)
+{
+ rb_erase(&en->rb_node, &et->root);
+ et->count--;
+ atomic_dec(&sbi->total_ext_node);
+
+ if (et->cached_en == en)
+ et->cached_en = NULL;
+}
+
+static struct extent_tree *__find_extent_tree(struct f2fs_sb_info *sbi,
+ nid_t ino)
+{
+ struct extent_tree *et;
+
+ down_read(&sbi->extent_tree_lock);
+ et = radix_tree_lookup(&sbi->extent_tree_root, ino);
+ if (!et) {
+ up_read(&sbi->extent_tree_lock);
+ return NULL;
+ }
+ atomic_inc(&et->refcount);
+ up_read(&sbi->extent_tree_lock);
+
+ return et;
+}
+
+static struct extent_tree *__grab_extent_tree(struct inode *inode)
+{
+ struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ struct extent_tree *et;
+ nid_t ino = inode->i_ino;
+
+ down_write(&sbi->extent_tree_lock);
+ et = radix_tree_lookup(&sbi->extent_tree_root, ino);
+ if (!et) {
+ et = f2fs_kmem_cache_alloc(extent_tree_slab, GFP_NOFS);
+ f2fs_radix_tree_insert(&sbi->extent_tree_root, ino, et);
+ memset(et, 0, sizeof(struct extent_tree));
+ et->ino = ino;
+ et->root = RB_ROOT;
+ et->cached_en = NULL;
+ rwlock_init(&et->lock);
+ atomic_set(&et->refcount, 0);
+ et->count = 0;
+ sbi->total_ext_tree++;
+ }
+ atomic_inc(&et->refcount);
+ up_write(&sbi->extent_tree_lock);
+
+ return et;
+}
+
+static struct extent_node *__lookup_extent_tree(struct extent_tree *et,
+ unsigned int fofs)
+{
+ struct rb_node *node = et->root.rb_node;
+ struct extent_node *en;
+
+ if (et->cached_en) {
+ struct extent_info *cei = &et->cached_en->ei;
+
+ if (cei->fofs <= fofs && cei->fofs + cei->len > fofs)
+ return et->cached_en;
+ }
+
+ while (node) {
+ en = rb_entry(node, struct extent_node, rb_node);
+
+ if (fofs < en->ei.fofs) {
+ node = node->rb_left;
+ } else if (fofs >= en->ei.fofs + en->ei.len) {
+ node = node->rb_right;
+ } else {
+ et->cached_en = en;
+ return en;
+ }
+ }
+ return NULL;
+}
+
+static struct extent_node *__try_back_merge(struct f2fs_sb_info *sbi,
+ struct extent_tree *et, struct extent_node *en)
+{
+ struct extent_node *prev;
+ struct rb_node *node;
+
+ node = rb_prev(&en->rb_node);
+ if (!node)
+ return NULL;
+
+ prev = rb_entry(node, struct extent_node, rb_node);
+ if (__is_back_mergeable(&en->ei, &prev->ei)) {
+ en->ei.fofs = prev->ei.fofs;
+ en->ei.blk = prev->ei.blk;
+ en->ei.len += prev->ei.len;
+ __detach_extent_node(sbi, et, prev);
+ return prev;
+ }
+ return NULL;
+}
+
+static struct extent_node *__try_front_merge(struct f2fs_sb_info *sbi,
+ struct extent_tree *et, struct extent_node *en)
+{
+ struct extent_node *next;
+ struct rb_node *node;
+
+ node = rb_next(&en->rb_node);
+ if (!node)
+ return NULL;
+
+ next = rb_entry(node, struct extent_node, rb_node);
+ if (__is_front_mergeable(&en->ei, &next->ei)) {
+ en->ei.len += next->ei.len;
+ __detach_extent_node(sbi, et, next);
+ return next;
+ }
+ return NULL;
+}
+
+static struct extent_node *__insert_extent_tree(struct f2fs_sb_info *sbi,
+ struct extent_tree *et, struct extent_info *ei,
+ struct extent_node **den)
+{
+ struct rb_node **p = &et->root.rb_node;
+ struct rb_node *parent = NULL;
+ struct extent_node *en;
+
+ while (*p) {
+ parent = *p;
+ en = rb_entry(parent, struct extent_node, rb_node);
+
+ if (ei->fofs < en->ei.fofs) {
+ if (__is_front_mergeable(ei, &en->ei)) {
+ f2fs_bug_on(sbi, !den);
+ en->ei.fofs = ei->fofs;
+ en->ei.blk = ei->blk;
+ en->ei.len += ei->len;
+ *den = __try_back_merge(sbi, et, en);
+ return en;
+ }
+ p = &(*p)->rb_left;
+ } else if (ei->fofs >= en->ei.fofs + en->ei.len) {
+ if (__is_back_mergeable(ei, &en->ei)) {
+ f2fs_bug_on(sbi, !den);
+ en->ei.len += ei->len;
+ *den = __try_front_merge(sbi, et, en);
+ return en;
+ }
+ p = &(*p)->rb_right;
+ } else {
+ f2fs_bug_on(sbi, 1);
+ }
+ }
+
+ return __attach_extent_node(sbi, et, ei, parent, p);
+}
+
+static unsigned int __free_extent_tree(struct f2fs_sb_info *sbi,
+ struct extent_tree *et, bool free_all)
+{
+ struct rb_node *node, *next;
+ struct extent_node *en;
+ unsigned int count = et->count;
+
+ node = rb_first(&et->root);
+ while (node) {
+ next = rb_next(node);
+ en = rb_entry(node, struct extent_node, rb_node);
+
+ if (free_all) {
+ spin_lock(&sbi->extent_lock);
+ if (!list_empty(&en->list))
+ list_del_init(&en->list);
+ spin_unlock(&sbi->extent_lock);
+ }
+
+ if (free_all || list_empty(&en->list)) {
+ __detach_extent_node(sbi, et, en);
+ kmem_cache_free(extent_node_slab, en);
+ }
+ node = next;
+ }
+
+ return count - et->count;
+}
+
+static void f2fs_init_extent_tree(struct inode *inode,
+ struct f2fs_extent *i_ext)
+{
+ struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ struct extent_tree *et;
+ struct extent_node *en;
+ struct extent_info ei;
+
+ if (le32_to_cpu(i_ext->len) < F2FS_MIN_EXTENT_LEN)
+ return;
+
+ et = __grab_extent_tree(inode);
+
+ write_lock(&et->lock);
+ if (et->count)
+ goto out;
+
+ set_extent_info(&ei, le32_to_cpu(i_ext->fofs),
+ le32_to_cpu(i_ext->blk), le32_to_cpu(i_ext->len));
+
+ en = __insert_extent_tree(sbi, et, &ei, NULL);
+ if (en) {
+ et->cached_en = en;
+
+ spin_lock(&sbi->extent_lock);
+ list_add_tail(&en->list, &sbi->extent_list);
+ spin_unlock(&sbi->extent_lock);
+ }
+out:
+ write_unlock(&et->lock);
+ atomic_dec(&et->refcount);
+}
+
+static bool f2fs_lookup_extent_tree(struct inode *inode, pgoff_t pgofs,
+ struct extent_info *ei)
+{
+ struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ struct extent_tree *et;
+ struct extent_node *en;
+
+ trace_f2fs_lookup_extent_tree_start(inode, pgofs);
+
+ et = __find_extent_tree(sbi, inode->i_ino);
+ if (!et)
+ return false;
+
+ read_lock(&et->lock);
+ en = __lookup_extent_tree(et, pgofs);
+ if (en) {
+ *ei = en->ei;
+ spin_lock(&sbi->extent_lock);
+ if (!list_empty(&en->list))
+ list_move_tail(&en->list, &sbi->extent_list);
+ spin_unlock(&sbi->extent_lock);
+ stat_inc_read_hit(sbi->sb);
+ }
+ stat_inc_total_hit(sbi->sb);
+ read_unlock(&et->lock);
+
+ trace_f2fs_lookup_extent_tree_end(inode, pgofs, en);
+
+ atomic_dec(&et->refcount);
+ return en ? true : false;
+}
+
+static void f2fs_update_extent_tree(struct inode *inode, pgoff_t fofs,
+ block_t blkaddr)
+{
+ struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ struct extent_tree *et;
+ struct extent_node *en = NULL, *en1 = NULL, *en2 = NULL, *en3 = NULL;
+ struct extent_node *den = NULL;
+ struct extent_info ei, dei;
+ unsigned int endofs;
+
+ trace_f2fs_update_extent_tree(inode, fofs, blkaddr);
+
+ et = __grab_extent_tree(inode);
+
+ write_lock(&et->lock);
+
+ /* 1. lookup and remove existing extent info in cache */
+ en = __lookup_extent_tree(et, fofs);
+ if (!en)
+ goto update_extent;
+
+ dei = en->ei;
+ __detach_extent_node(sbi, et, en);
+
+ /* 2. if extent can be split more, split and insert the left part */
+ if (dei.len > 1) {
+ /* insert left part of split extent into cache */
+ if (fofs - dei.fofs >= F2FS_MIN_EXTENT_LEN) {
+ set_extent_info(&ei, dei.fofs, dei.blk,
+ fofs - dei.fofs);
+ en1 = __insert_extent_tree(sbi, et, &ei, NULL);
+ }
+
+ /* insert right part of split extent into cache */
+ endofs = dei.fofs + dei.len - 1;
+ if (endofs - fofs >= F2FS_MIN_EXTENT_LEN) {
+ set_extent_info(&ei, fofs + 1,
+ fofs - dei.fofs + dei.blk, endofs - fofs);
+ en2 = __insert_extent_tree(sbi, et, &ei, NULL);
+ }
+ }
+
+update_extent:
+ /* 3. update extent in extent cache */
+ if (blkaddr) {
+ set_extent_info(&ei, fofs, blkaddr, 1);
+ en3 = __insert_extent_tree(sbi, et, &ei, &den);
+ }
+
+ /* 4. update in global extent list */
+ spin_lock(&sbi->extent_lock);
+ if (en && !list_empty(&en->list))
+ list_del(&en->list);
+ /*
+ * en1 and en2 are split from en; they become smaller and smaller
+ * fragments after splitting several times. So if the length is smaller
+ * than F2FS_MIN_EXTENT_LEN, we will not add them into the extent tree.
+ */
+ if (en1)
+ list_add_tail(&en1->list, &sbi->extent_list);
+ if (en2)
+ list_add_tail(&en2->list, &sbi->extent_list);
+ if (en3) {
+ if (list_empty(&en3->list))
+ list_add_tail(&en3->list, &sbi->extent_list);
+ else
+ list_move_tail(&en3->list, &sbi->extent_list);
+ }
+ if (den && !list_empty(&den->list))
+ list_del(&den->list);
+ spin_unlock(&sbi->extent_lock);
+
+ /* 5. release extent node */
+ if (en)
+ kmem_cache_free(extent_node_slab, en);
+ if (den)
+ kmem_cache_free(extent_node_slab, den);
+
+ write_unlock(&et->lock);
+ atomic_dec(&et->refcount);
+}
+
+void f2fs_preserve_extent_tree(struct inode *inode)
+{
+ struct extent_tree *et;
+ struct extent_info *ext = &F2FS_I(inode)->ext;
+ bool sync = false;
+
+ if (!test_opt(F2FS_I_SB(inode), EXTENT_CACHE))
+ return;
+
+ et = __find_extent_tree(F2FS_I_SB(inode), inode->i_ino);
+ if (!et) {
+ if (ext->len) {
+ ext->len = 0;
+ update_inode_page(inode);
+ }
+ return;
+ }
+
+ read_lock(&et->lock);
+ if (et->count) {
+ struct extent_node *en;
+
+ if (et->cached_en) {
+ en = et->cached_en;
+ } else {
+ struct rb_node *node = rb_first(&et->root);
+
+ if (!node)
+ node = rb_last(&et->root);
+ en = rb_entry(node, struct extent_node, rb_node);
+ }
+
+ if (__is_extent_same(ext, &en->ei))
+ goto out;
+
+ *ext = en->ei;
+ sync = true;
+ } else if (ext->len) {
+ ext->len = 0;
+ sync = true;
+ }
+out:
+ read_unlock(&et->lock);
+ atomic_dec(&et->refcount);
+
+ if (sync)
+ update_inode_page(inode);
+}
+
+void f2fs_shrink_extent_tree(struct f2fs_sb_info *sbi, int nr_shrink)
+{
+ struct extent_tree *treevec[EXT_TREE_VEC_SIZE];
+ struct extent_node *en, *tmp;
+ unsigned long ino = F2FS_ROOT_INO(sbi);
+ struct radix_tree_iter iter;
+ void **slot;
+ unsigned int found;
+ unsigned int node_cnt = 0, tree_cnt = 0;
+
+ if (!test_opt(sbi, EXTENT_CACHE))
+ return;
+
+ if (available_free_memory(sbi, EXTENT_CACHE))
+ return;
+
+ spin_lock(&sbi->extent_lock);
+ list_for_each_entry_safe(en, tmp, &sbi->extent_list, list) {
+ if (!nr_shrink--)
+ break;
+ list_del_init(&en->list);
+ }
+ spin_unlock(&sbi->extent_lock);
+
+ down_read(&sbi->extent_tree_lock);
+ while ((found = radix_tree_gang_lookup(&sbi->extent_tree_root,
+ (void **)treevec, ino, EXT_TREE_VEC_SIZE))) {
+ unsigned i;
+
+ ino = treevec[found - 1]->ino + 1;
+ for (i = 0; i < found; i++) {
+ struct extent_tree *et = treevec[i];
+
+ atomic_inc(&et->refcount);
+ write_lock(&et->lock);
+ node_cnt += __free_extent_tree(sbi, et, false);
+ write_unlock(&et->lock);
+ atomic_dec(&et->refcount);
+ }
+ }
+ up_read(&sbi->extent_tree_lock);
+
+ down_write(&sbi->extent_tree_lock);
+ radix_tree_for_each_slot(slot, &sbi->extent_tree_root, &iter,
+ F2FS_ROOT_INO(sbi)) {
+ struct extent_tree *et = (struct extent_tree *)*slot;
+
+ if (!atomic_read(&et->refcount) && !et->count) {
+ radix_tree_delete(&sbi->extent_tree_root, et->ino);
+ kmem_cache_free(extent_tree_slab, et);
+ sbi->total_ext_tree--;
+ tree_cnt++;
+ }
+ }
+ up_write(&sbi->extent_tree_lock);
+
+ trace_f2fs_shrink_extent_tree(sbi, node_cnt, tree_cnt);
+}
+
+void f2fs_destroy_extent_tree(struct inode *inode)
+{
+ struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ struct extent_tree *et;
+ unsigned int node_cnt = 0;
+
+ if (!test_opt(sbi, EXTENT_CACHE))
+ return;
+
+ et = __find_extent_tree(sbi, inode->i_ino);
+ if (!et)
+ goto out;
+
+ /* free all extent info belonging to this extent tree */
+ write_lock(&et->lock);
+ node_cnt = __free_extent_tree(sbi, et, true);
+ write_unlock(&et->lock);
+
+ atomic_dec(&et->refcount);
+
+ /* try to find and delete extent tree entry in radix tree */
+ down_write(&sbi->extent_tree_lock);
+ et = radix_tree_lookup(&sbi->extent_tree_root, inode->i_ino);
+ if (!et) {
+ up_write(&sbi->extent_tree_lock);
+ goto out;
+ }
+ f2fs_bug_on(sbi, atomic_read(&et->refcount) || et->count);
+ radix_tree_delete(&sbi->extent_tree_root, inode->i_ino);
+ kmem_cache_free(extent_tree_slab, et);
+ sbi->total_ext_tree--;
+ up_write(&sbi->extent_tree_lock);
+out:
+ trace_f2fs_destroy_extent_tree(inode, node_cnt);
return;
}
+void f2fs_init_extent_cache(struct inode *inode, struct f2fs_extent *i_ext)
+{
+ if (test_opt(F2FS_I_SB(inode), EXTENT_CACHE))
+ f2fs_init_extent_tree(inode, i_ext);
+
+ write_lock(&F2FS_I(inode)->ext_lock);
+ get_extent_info(&F2FS_I(inode)->ext, *i_ext);
+ write_unlock(&F2FS_I(inode)->ext_lock);
+}
+
+static bool f2fs_lookup_extent_cache(struct inode *inode, pgoff_t pgofs,
+ struct extent_info *ei)
+{
+ if (is_inode_flag_set(F2FS_I(inode), FI_NO_EXTENT))
+ return false;
+
+ if (test_opt(F2FS_I_SB(inode), EXTENT_CACHE))
+ return f2fs_lookup_extent_tree(inode, pgofs, ei);
+
+ return lookup_extent_info(inode, pgofs, ei);
+}
+
+void f2fs_update_extent_cache(struct dnode_of_data *dn)
+{
+ struct f2fs_inode_info *fi = F2FS_I(dn->inode);
+ pgoff_t fofs;
+
+ f2fs_bug_on(F2FS_I_SB(dn->inode), dn->data_blkaddr == NEW_ADDR);
+
+ if (is_inode_flag_set(fi, FI_NO_EXTENT))
+ return;
+
+ fofs = start_bidx_of_node(ofs_of_node(dn->node_page), fi) +
+ dn->ofs_in_node;
+
+ if (test_opt(F2FS_I_SB(dn->inode), EXTENT_CACHE))
+ return f2fs_update_extent_tree(dn->inode, fofs,
+ dn->data_blkaddr);
+
+ if (update_extent_info(dn->inode, fofs, dn->data_blkaddr))
+ sync_inode_page(dn);
+}
+
struct page *find_data_page(struct inode *inode, pgoff_t index, bool sync)
{
struct address_space *mapping = inode->i_mapping;
struct dnode_of_data dn;
struct page *page;
+ struct extent_info ei;
int err;
struct f2fs_io_info fio = {
.type = DATA,
.rw = sync ? READ_SYNC : READA,
};
+ /*
+ * If sync is false, it needs to check its block allocation.
+ * This is needed and triggered by two flows:
+ * gc and truncate_partial_data_page.
+ */
+ if (!sync)
+ goto search;
+
page = find_get_page(mapping, index);
if (page && PageUptodate(page))
return page;
f2fs_put_page(page, 0);
+search:
+ if (f2fs_lookup_extent_cache(inode, index, &ei)) {
+ dn.data_blkaddr = ei.blk + index - ei.fofs;
+ goto got_it;
+ }
set_new_dnode(&dn, inode, NULL, NULL, 0);
err = get_dnode_of_data(&dn, index, LOOKUP_NODE);
@@ -401,6 +948,7 @@
if (unlikely(dn.data_blkaddr == NEW_ADDR))
return ERR_PTR(-EINVAL);
+got_it:
page = grab_cache_page(mapping, index);
if (!page)
return ERR_PTR(-ENOMEM);
@@ -435,6 +983,7 @@
struct address_space *mapping = inode->i_mapping;
struct dnode_of_data dn;
struct page *page;
+ struct extent_info ei;
int err;
struct f2fs_io_info fio = {
.type = DATA,
@@ -445,6 +994,11 @@
if (!page)
return ERR_PTR(-ENOMEM);
+ if (f2fs_lookup_extent_cache(inode, index, &ei)) {
+ dn.data_blkaddr = ei.blk + index - ei.fofs;
+ goto got_it;
+ }
+
set_new_dnode(&dn, inode, NULL, NULL, 0);
err = get_dnode_of_data(&dn, index, LOOKUP_NODE);
if (err) {
@@ -458,6 +1012,7 @@
return ERR_PTR(-ENOENT);
}
+got_it:
if (PageUptodate(page))
return page;
@@ -569,19 +1124,26 @@
if (unlikely(is_inode_flag_set(F2FS_I(dn->inode), FI_NO_ALLOC)))
return -EPERM;
+
+ dn->data_blkaddr = datablock_addr(dn->node_page, dn->ofs_in_node);
+ if (dn->data_blkaddr == NEW_ADDR)
+ goto alloc;
+
if (unlikely(!inc_valid_block_count(sbi, dn->inode, 1)))
return -ENOSPC;
+alloc:
get_node_info(sbi, dn->nid, &ni);
set_summary(&sum, dn->nid, dn->ofs_in_node, ni.version);
if (dn->ofs_in_node == 0 && dn->inode_page == dn->node_page)
seg = CURSEG_DIRECT_IO;
- allocate_data_block(sbi, NULL, NULL_ADDR, &dn->data_blkaddr, &sum, seg);
+ allocate_data_block(sbi, NULL, dn->data_blkaddr, &dn->data_blkaddr,
+ &sum, seg);
/* direct IO doesn't use extent cache to maximize the performance */
- __set_data_blkaddr(dn);
+ set_data_blkaddr(dn);
/* update i_size */
fofs = start_bidx_of_node(ofs_of_node(dn->node_page), fi) +
@@ -615,7 +1177,10 @@
end_offset = ADDRS_PER_PAGE(dn.node_page, F2FS_I(inode));
while (dn.ofs_in_node < end_offset && len) {
- if (dn.data_blkaddr == NULL_ADDR) {
+ block_t blkaddr;
+
+ blkaddr = datablock_addr(dn.node_page, dn.ofs_in_node);
+ if (blkaddr == NULL_ADDR || blkaddr == NEW_ADDR) {
if (__allocate_data_block(&dn))
goto sync_out;
allocated = true;
@@ -659,13 +1224,16 @@
int mode = create ? ALLOC_NODE : LOOKUP_NODE_RA;
pgoff_t pgofs, end_offset;
int err = 0, ofs = 1;
+ struct extent_info ei;
bool allocated = false;
/* Get the page offset from the block offset(iblock) */
pgofs = (pgoff_t)(iblock >> (PAGE_CACHE_SHIFT - blkbits));
- if (check_extent_cache(inode, pgofs, bh_result))
+ if (f2fs_lookup_extent_cache(inode, pgofs, &ei)) {
+ f2fs_map_bh(inode->i_sb, pgofs, &ei, bh_result);
goto out;
+ }
if (create)
f2fs_lock_op(F2FS_I_SB(inode));
@@ -682,7 +1250,7 @@
goto put_out;
if (dn.data_blkaddr != NULL_ADDR) {
- set_buffer_new(bh_result);
+ clear_buffer_new(bh_result);
map_bh(bh_result, inode->i_sb, dn.data_blkaddr);
} else if (create) {
err = __allocate_data_block(&dn);
@@ -727,6 +1295,7 @@
if (err)
goto sync_out;
allocated = true;
+ set_buffer_new(bh_result);
blkaddr = dn.data_blkaddr;
}
/* Give more consecutive addresses for the readahead */
@@ -813,8 +1382,10 @@
fio->blk_addr = dn.data_blkaddr;
/* This page is already truncated */
- if (fio->blk_addr == NULL_ADDR)
+ if (fio->blk_addr == NULL_ADDR) {
+ ClearPageUptodate(page);
goto out_writepage;
+ }
set_page_writeback(page);
@@ -827,10 +1398,15 @@
need_inplace_update(inode))) {
rewrite_data_page(page, fio);
set_inode_flag(F2FS_I(inode), FI_UPDATE_WRITE);
+ trace_f2fs_do_write_data_page(page, IPU);
} else {
write_data_page(page, &dn, fio);
- update_extent_cache(&dn);
+ set_data_blkaddr(&dn);
+ f2fs_update_extent_cache(&dn);
+ trace_f2fs_do_write_data_page(page, OPU);
set_inode_flag(F2FS_I(inode), FI_APPEND_WRITE);
+ if (page->index == 0)
+ set_inode_flag(F2FS_I(inode), FI_FIRST_BLOCK_WRITTEN);
}
out_writepage:
f2fs_put_dnode(&dn);
@@ -909,6 +1485,8 @@
clear_cold_data(page);
out:
inode_dec_dirty_pages(inode);
+ if (err)
+ ClearPageUptodate(page);
unlock_page(page);
if (need_balance_fs)
f2fs_balance_fs(sbi);
@@ -935,7 +1513,6 @@
{
struct inode *inode = mapping->host;
struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
- bool locked = false;
int ret;
long diff;
@@ -950,15 +1527,13 @@
available_free_memory(sbi, DIRTY_DENTS))
goto skip_write;
+ /* during POR, we don't need to trigger writepage at all. */
+ if (unlikely(is_sbi_flag_set(sbi, SBI_POR_DOING)))
+ goto skip_write;
+
diff = nr_pages_to_write(sbi, DATA, wbc);
- if (!S_ISDIR(inode->i_mode)) {
- mutex_lock(&sbi->writepages);
- locked = true;
- }
ret = write_cache_pages(mapping, wbc, __f2fs_writepage, mapping);
- if (locked)
- mutex_unlock(&sbi->writepages);
f2fs_submit_merged_bio(sbi, DATA, WRITE);
@@ -1236,6 +1811,37 @@
return generic_block_bmap(mapping, block, get_data_block);
}
+void init_extent_cache_info(struct f2fs_sb_info *sbi)
+{
+ INIT_RADIX_TREE(&sbi->extent_tree_root, GFP_NOIO);
+ init_rwsem(&sbi->extent_tree_lock);
+ INIT_LIST_HEAD(&sbi->extent_list);
+ spin_lock_init(&sbi->extent_lock);
+ sbi->total_ext_tree = 0;
+ atomic_set(&sbi->total_ext_node, 0);
+}
+
+int __init create_extent_cache(void)
+{
+ extent_tree_slab = f2fs_kmem_cache_create("f2fs_extent_tree",
+ sizeof(struct extent_tree));
+ if (!extent_tree_slab)
+ return -ENOMEM;
+ extent_node_slab = f2fs_kmem_cache_create("f2fs_extent_node",
+ sizeof(struct extent_node));
+ if (!extent_node_slab) {
+ kmem_cache_destroy(extent_tree_slab);
+ return -ENOMEM;
+ }
+ return 0;
+}
+
+void destroy_extent_cache(void)
+{
+ kmem_cache_destroy(extent_node_slab);
+ kmem_cache_destroy(extent_tree_slab);
+}
+
const struct address_space_operations f2fs_dblock_aops = {
.readpage = f2fs_read_data_page,
.readpages = f2fs_read_data_pages,
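
To recap the f2fs_update_extent_tree() flow introduced above: when a single block at file offset fofs is remapped, the extent covering it is detached, the left and right remainders are re-inserted only if each is still at least F2FS_MIN_EXTENT_LEN blocks long, and the new one-block extent goes in last. A stand-alone sketch of just the split-length decision (the struct and helper names are made up; the thresholds follow the diff):

struct ext { unsigned int fofs, len; };

#define MIN_EXTENT_LEN 64	/* the new F2FS_MIN_EXTENT_LEN value */

static void split_lengths(const struct ext *old, unsigned int fofs,
			  unsigned int *left_len, unsigned int *right_len)
{
	*left_len  = fofs - old->fofs;			/* blocks before the update */
	*right_len = old->fofs + old->len - 1 - fofs;	/* blocks after the update  */
}

For an extent {fofs = 0, len = 200} updated at offset 100, both remainders (100 and 99 blocks) are kept; updated at offset 10, the 10-block left part is dropped and only the 189-block right part is re-inserted.
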
diff --git a/fs/f2fs/debug.c b/fs/f2fs/debug.c
index e671373..f5388f3 100644
--- a/fs/f2fs/debug.c
+++ b/fs/f2fs/debug.c
@@ -35,6 +35,8 @@
/* validation check of the segment numbers */
si->hit_ext = sbi->read_hit_ext;
si->total_ext = sbi->total_hit_ext;
+ si->ext_tree = sbi->total_ext_tree;
+ si->ext_node = atomic_read(&sbi->total_ext_node);
si->ndirty_node = get_pages(sbi, F2FS_DIRTY_NODES);
si->ndirty_dent = get_pages(sbi, F2FS_DIRTY_DENTS);
si->ndirty_dirs = sbi->n_dirty_dirs;
@@ -185,6 +187,9 @@
si->cache_mem += sbi->n_dirty_dirs * sizeof(struct inode_entry);
for (i = 0; i <= UPDATE_INO; i++)
si->cache_mem += sbi->im[i].ino_num * sizeof(struct ino_entry);
+ si->cache_mem += sbi->total_ext_tree * sizeof(struct extent_tree);
+ si->cache_mem += atomic_read(&sbi->total_ext_node) *
+ sizeof(struct extent_node);
si->page_mem = 0;
npages = NODE_MAPPING(sbi)->nrpages;
@@ -260,13 +265,20 @@
seq_printf(s, "CP calls: %d\n", si->cp_count);
seq_printf(s, "GC calls: %d (BG: %d)\n",
si->call_count, si->bg_gc);
- seq_printf(s, " - data segments : %d\n", si->data_segs);
- seq_printf(s, " - node segments : %d\n", si->node_segs);
- seq_printf(s, "Try to move %d blocks\n", si->tot_blks);
- seq_printf(s, " - data blocks : %d\n", si->data_blks);
- seq_printf(s, " - node blocks : %d\n", si->node_blks);
+ seq_printf(s, " - data segments : %d (%d)\n",
+ si->data_segs, si->bg_data_segs);
+ seq_printf(s, " - node segments : %d (%d)\n",
+ si->node_segs, si->bg_node_segs);
+ seq_printf(s, "Try to move %d blocks (BG: %d)\n", si->tot_blks,
+ si->bg_data_blks + si->bg_node_blks);
+ seq_printf(s, " - data blocks : %d (%d)\n", si->data_blks,
+ si->bg_data_blks);
+ seq_printf(s, " - node blocks : %d (%d)\n", si->node_blks,
+ si->bg_node_blks);
seq_printf(s, "\nExtent Hit Ratio: %d / %d\n",
si->hit_ext, si->total_ext);
+ seq_printf(s, "\nExtent Tree Count: %d\n", si->ext_tree);
+ seq_printf(s, "\nExtent Node Count: %d\n", si->ext_node);
seq_puts(s, "\nBalancing F2FS Async:\n");
seq_printf(s, " - inmem: %4d, wb: %4d\n",
si->inmem_pages, si->wb_pages);
diff --git a/fs/f2fs/dir.c b/fs/f2fs/dir.c
index b74097a..3a3302a 100644
--- a/fs/f2fs/dir.c
+++ b/fs/f2fs/dir.c
@@ -59,9 +59,8 @@
[S_IFLNK >> S_SHIFT] = F2FS_FT_SYMLINK,
};
-void set_de_type(struct f2fs_dir_entry *de, struct inode *inode)
+void set_de_type(struct f2fs_dir_entry *de, umode_t mode)
{
- umode_t mode = inode->i_mode;
de->file_type = f2fs_type_by_mode[(mode & S_IFMT) >> S_SHIFT];
}
@@ -127,22 +126,19 @@
*max_slots = 0;
while (bit_pos < d->max) {
if (!test_bit_le(bit_pos, d->bitmap)) {
- if (bit_pos == 0)
- max_len = 1;
- else if (!test_bit_le(bit_pos - 1, d->bitmap))
- max_len++;
bit_pos++;
+ max_len++;
continue;
}
+
de = &d->dentry[bit_pos];
if (early_match_name(name->len, namehash, de) &&
!memcmp(d->filename[bit_pos], name->name, name->len))
goto found;
- if (max_slots && *max_slots >= 0 && max_len > *max_slots) {
+ if (max_slots && max_len > *max_slots)
*max_slots = max_len;
- max_len = 0;
- }
+ max_len = 0;
/* remain bug on condition */
if (unlikely(!de->name_len))
@@ -219,14 +215,14 @@
unsigned int max_depth;
unsigned int level;
+ *res_page = NULL;
+
if (f2fs_has_inline_dentry(dir))
return find_in_inline_dir(dir, child, res_page);
if (npages == 0)
return NULL;
- *res_page = NULL;
-
name_hash = f2fs_dentry_hash(child);
max_depth = F2FS_I(dir)->i_current_depth;
@@ -285,7 +281,7 @@
lock_page(page);
f2fs_wait_on_page_writeback(page, type);
de->ino = cpu_to_le32(inode->i_ino);
- set_de_type(de, inode);
+ set_de_type(de, inode->i_mode);
f2fs_dentry_kunmap(dir, page);
set_page_dirty(page);
dir->i_mtime = dir->i_ctime = CURRENT_TIME;
@@ -331,14 +327,14 @@
de->hash_code = 0;
de->ino = cpu_to_le32(inode->i_ino);
memcpy(d->filename[0], ".", 1);
- set_de_type(de, inode);
+ set_de_type(de, inode->i_mode);
de = &d->dentry[1];
de->hash_code = 0;
de->name_len = cpu_to_le16(2);
de->ino = cpu_to_le32(parent->i_ino);
memcpy(d->filename[1], "..", 2);
- set_de_type(de, inode);
+ set_de_type(de, parent->i_mode);
test_and_set_bit_le(0, (void *)d->bitmap);
test_and_set_bit_le(1, (void *)d->bitmap);
@@ -435,7 +431,7 @@
void update_parent_metadata(struct inode *dir, struct inode *inode,
unsigned int current_depth)
{
- if (is_inode_flag_set(F2FS_I(inode), FI_NEW_INODE)) {
+ if (inode && is_inode_flag_set(F2FS_I(inode), FI_NEW_INODE)) {
if (S_ISDIR(inode->i_mode)) {
inc_nlink(dir);
set_inode_flag(F2FS_I(dir), FI_UPDATE_DIR);
@@ -450,7 +446,7 @@
set_inode_flag(F2FS_I(dir), FI_UPDATE_DIR);
}
- if (is_inode_flag_set(F2FS_I(inode), FI_INC_LINK))
+ if (inode && is_inode_flag_set(F2FS_I(inode), FI_INC_LINK))
clear_inode_flag(F2FS_I(inode), FI_INC_LINK);
}
@@ -474,30 +470,47 @@
goto next;
}
+void f2fs_update_dentry(nid_t ino, umode_t mode, struct f2fs_dentry_ptr *d,
+ const struct qstr *name, f2fs_hash_t name_hash,
+ unsigned int bit_pos)
+{
+ struct f2fs_dir_entry *de;
+ int slots = GET_DENTRY_SLOTS(name->len);
+ int i;
+
+ de = &d->dentry[bit_pos];
+ de->hash_code = name_hash;
+ de->name_len = cpu_to_le16(name->len);
+ memcpy(d->filename[bit_pos], name->name, name->len);
+ de->ino = cpu_to_le32(ino);
+ set_de_type(de, mode);
+ for (i = 0; i < slots; i++)
+ test_and_set_bit_le(bit_pos + i, (void *)d->bitmap);
+}
+
/*
* Caller should grab and release a rwsem by calling f2fs_lock_op() and
* f2fs_unlock_op().
*/
int __f2fs_add_link(struct inode *dir, const struct qstr *name,
- struct inode *inode)
+ struct inode *inode, nid_t ino, umode_t mode)
{
unsigned int bit_pos;
unsigned int level;
unsigned int current_depth;
unsigned long bidx, block;
f2fs_hash_t dentry_hash;
- struct f2fs_dir_entry *de;
unsigned int nbucket, nblock;
size_t namelen = name->len;
struct page *dentry_page = NULL;
struct f2fs_dentry_block *dentry_blk = NULL;
+ struct f2fs_dentry_ptr d;
int slots = GET_DENTRY_SLOTS(namelen);
- struct page *page;
+ struct page *page = NULL;
int err = 0;
- int i;
if (f2fs_has_inline_dentry(dir)) {
- err = f2fs_add_inline_entry(dir, name, inode);
+ err = f2fs_add_inline_entry(dir, name, inode, ino, mode);
if (!err || err != -EAGAIN)
return err;
else
@@ -547,30 +560,31 @@
add_dentry:
f2fs_wait_on_page_writeback(dentry_page, DATA);
- down_write(&F2FS_I(inode)->i_sem);
- page = init_inode_metadata(inode, dir, name, NULL);
- if (IS_ERR(page)) {
- err = PTR_ERR(page);
- goto fail;
+ if (inode) {
+ down_write(&F2FS_I(inode)->i_sem);
+ page = init_inode_metadata(inode, dir, name, NULL);
+ if (IS_ERR(page)) {
+ err = PTR_ERR(page);
+ goto fail;
+ }
}
- de = &dentry_blk->dentry[bit_pos];
- de->hash_code = dentry_hash;
- de->name_len = cpu_to_le16(namelen);
- memcpy(dentry_blk->filename[bit_pos], name->name, name->len);
- de->ino = cpu_to_le32(inode->i_ino);
- set_de_type(de, inode);
- for (i = 0; i < slots; i++)
- test_and_set_bit_le(bit_pos + i, &dentry_blk->dentry_bitmap);
+
+ make_dentry_ptr(&d, (void *)dentry_blk, 1);
+ f2fs_update_dentry(ino, mode, &d, name, dentry_hash, bit_pos);
+
set_page_dirty(dentry_page);
- /* we don't need to mark_inode_dirty now */
- F2FS_I(inode)->i_pino = dir->i_ino;
- update_inode(inode, page);
- f2fs_put_page(page, 1);
+ if (inode) {
+ /* we don't need to mark_inode_dirty now */
+ F2FS_I(inode)->i_pino = dir->i_ino;
+ update_inode(inode, page);
+ f2fs_put_page(page, 1);
+ }
update_parent_metadata(dir, inode, current_depth);
fail:
- up_write(&F2FS_I(inode)->i_sem);
+ if (inode)
+ up_write(&F2FS_I(inode)->i_sem);
if (is_inode_flag_set(F2FS_I(dir), FI_UPDATE_DIR)) {
update_inode_page(dir);
@@ -669,6 +683,7 @@
if (bit_pos == NR_DENTRY_IN_BLOCK) {
truncate_hole(dir, page->index, page->index + 1);
clear_page_dirty_for_io(page);
+ ClearPagePrivate(page);
ClearPageUptodate(page);
inode_dec_dirty_pages(dir);
}
diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
index 7fa3313..c06a25e 100644
--- a/fs/f2fs/f2fs.h
+++ b/fs/f2fs/f2fs.h
@@ -50,6 +50,7 @@
#define F2FS_MOUNT_FLUSH_MERGE 0x00000400
#define F2FS_MOUNT_NOBARRIER 0x00000800
#define F2FS_MOUNT_FASTBOOT 0x00001000
+#define F2FS_MOUNT_EXTENT_CACHE 0x00002000
#define clear_opt(sbi, option) (sbi->mount_opt.opt &= ~F2FS_MOUNT_##option)
#define set_opt(sbi, option) (sbi->mount_opt.opt |= F2FS_MOUNT_##option)
@@ -102,6 +103,7 @@
CP_UMOUNT,
CP_FASTBOOT,
CP_SYNC,
+ CP_RECOVERY,
CP_DISCARD,
};
@@ -216,6 +218,15 @@
#define F2FS_IOC_RELEASE_VOLATILE_WRITE _IO(F2FS_IOCTL_MAGIC, 4)
#define F2FS_IOC_ABORT_VOLATILE_WRITE _IO(F2FS_IOCTL_MAGIC, 5)
+/*
+ * should be the same as XFS_IOC_GOINGDOWN.
+ * Flags for the going-down operation used by FS_IOC_GOINGDOWN
+ */
+#define F2FS_IOC_SHUTDOWN _IOR('X', 125, __u32) /* Shutdown */
+#define F2FS_GOING_DOWN_FULLSYNC 0x0 /* going down with full sync */
+#define F2FS_GOING_DOWN_METASYNC 0x1 /* going down with metadata */
+#define F2FS_GOING_DOWN_NOSYNC 0x2 /* going down */
+
#if defined(__KERNEL__) && defined(CONFIG_COMPAT)
/*
* ioctl commands in 32 bit emulation
@@ -273,14 +284,34 @@
#define MAX_DIR_RA_PAGES 4 /* maximum ra pages of dir */
+/* vector size for gang lookup in the extent cache radix tree */
+#define EXT_TREE_VEC_SIZE 64
+
/* for in-memory extent cache entry */
-#define F2FS_MIN_EXTENT_LEN 16 /* minimum extent length */
+#define F2FS_MIN_EXTENT_LEN 64 /* minimum extent length */
+
+/* number of extent info in extent cache we try to shrink */
+#define EXTENT_CACHE_SHRINK_NUMBER 128
struct extent_info {
- rwlock_t ext_lock; /* rwlock for consistency */
- unsigned int fofs; /* start offset in a file */
- u32 blk_addr; /* start block address of the extent */
- unsigned int len; /* length of the extent */
+ unsigned int fofs; /* start offset in a file */
+ u32 blk; /* start block address of the extent */
+ unsigned int len; /* length of the extent */
+};
+
+struct extent_node {
+ struct rb_node rb_node; /* rb node located in rb-tree */
+ struct list_head list; /* node in global extent list of sbi */
+ struct extent_info ei; /* extent info */
+};
+
+struct extent_tree {
+ nid_t ino; /* inode number */
+ struct rb_root root; /* root of extent info rb-tree */
+ struct extent_node *cached_en; /* recently accessed extent node */
+ rwlock_t lock; /* protect extent info rb-tree */
+ atomic_t refcount; /* reference count of rb-tree */
+ unsigned int count; /* # of extent nodes in rb-tree */
};
/*
@@ -309,6 +340,7 @@
nid_t i_xattr_nid; /* node id that contains xattrs */
unsigned long long xattr_ver; /* cp version of xattr modification */
struct extent_info ext; /* in-memory extent cache entry */
+ rwlock_t ext_lock; /* rwlock for single extent cache */
struct inode_entry *dirty_dir; /* the pointer of dirty dir */
struct radix_tree_root inmem_root; /* radix tree for inmem pages */
@@ -319,21 +351,51 @@
static inline void get_extent_info(struct extent_info *ext,
struct f2fs_extent i_ext)
{
- write_lock(&ext->ext_lock);
ext->fofs = le32_to_cpu(i_ext.fofs);
- ext->blk_addr = le32_to_cpu(i_ext.blk_addr);
+ ext->blk = le32_to_cpu(i_ext.blk);
ext->len = le32_to_cpu(i_ext.len);
- write_unlock(&ext->ext_lock);
}
static inline void set_raw_extent(struct extent_info *ext,
struct f2fs_extent *i_ext)
{
- read_lock(&ext->ext_lock);
i_ext->fofs = cpu_to_le32(ext->fofs);
- i_ext->blk_addr = cpu_to_le32(ext->blk_addr);
+ i_ext->blk = cpu_to_le32(ext->blk);
i_ext->len = cpu_to_le32(ext->len);
- read_unlock(&ext->ext_lock);
+}
+
+static inline void set_extent_info(struct extent_info *ei, unsigned int fofs,
+ u32 blk, unsigned int len)
+{
+ ei->fofs = fofs;
+ ei->blk = blk;
+ ei->len = len;
+}
+
+static inline bool __is_extent_same(struct extent_info *ei1,
+ struct extent_info *ei2)
+{
+ return (ei1->fofs == ei2->fofs && ei1->blk == ei2->blk &&
+ ei1->len == ei2->len);
+}
+
+static inline bool __is_extent_mergeable(struct extent_info *back,
+ struct extent_info *front)
+{
+ return (back->fofs + back->len == front->fofs &&
+ back->blk + back->len == front->blk);
+}
+
+static inline bool __is_back_mergeable(struct extent_info *cur,
+ struct extent_info *back)
+{
+ return __is_extent_mergeable(back, cur);
+}
+
+static inline bool __is_front_mergeable(struct extent_info *cur,
+ struct extent_info *front)
+{
+ return __is_extent_mergeable(cur, front);
}
struct f2fs_nm_info {
@@ -502,6 +564,10 @@
META,
NR_PAGE_TYPE,
META_FLUSH,
+ INMEM, /* the below types are used by tracepoints only. */
+ INMEM_DROP,
+ IPU,
+ OPU,
};
struct f2fs_io_info {
@@ -559,7 +625,6 @@
struct mutex cp_mutex; /* checkpoint procedure lock */
struct rw_semaphore cp_rwsem; /* blocking FS operations */
struct rw_semaphore node_write; /* locking node writes */
- struct mutex writepages; /* mutex for writepages() */
wait_queue_head_t cp_wait;
struct inode_management im[MAX_INO_ENTRY]; /* manage inode cache */
@@ -571,6 +636,14 @@
struct list_head dir_inode_list; /* dir inode list */
spinlock_t dir_inode_lock; /* for dir inode list lock */
+ /* for extent tree cache */
+ struct radix_tree_root extent_tree_root;/* cache extent cache entries */
+ struct rw_semaphore extent_tree_lock; /* locking extent radix tree */
+ struct list_head extent_list; /* lru list for shrinker */
+ spinlock_t extent_lock; /* locking extent lru list */
+ int total_ext_tree; /* extent tree count */
+ atomic_t total_ext_node; /* extent info count */
+
/* basic filesystem units */
unsigned int log_sectors_per_block; /* log2 sectors per block */
unsigned int log_blocksize; /* log2 block size */
@@ -920,12 +993,17 @@
return 0;
}
+static inline block_t __cp_payload(struct f2fs_sb_info *sbi)
+{
+ return le32_to_cpu(F2FS_RAW_SUPER(sbi)->cp_payload);
+}
+
static inline void *__bitmap_ptr(struct f2fs_sb_info *sbi, int flag)
{
struct f2fs_checkpoint *ckpt = F2FS_CKPT(sbi);
int offset;
- if (le32_to_cpu(F2FS_RAW_SUPER(sbi)->cp_payload) > 0) {
+ if (__cp_payload(sbi) > 0) {
if (flag == NAT_BITMAP)
return &ckpt->sit_nat_version_bitmap;
else
@@ -1166,8 +1244,10 @@
FI_NEED_IPU, /* used for ipu per file */
FI_ATOMIC_FILE, /* indicate atomic file */
FI_VOLATILE_FILE, /* indicate volatile file */
+ FI_FIRST_BLOCK_WRITTEN, /* indicate #0 data block was written */
FI_DROP_CACHE, /* drop dirty page cache */
FI_DATA_EXIST, /* indicate data exists */
+ FI_INLINE_DOTS, /* indicate inline dot dentries */
};
static inline void set_inode_flag(struct f2fs_inode_info *fi, int flag)
@@ -1204,6 +1284,8 @@
set_inode_flag(fi, FI_INLINE_DENTRY);
if (ri->i_inline & F2FS_DATA_EXIST)
set_inode_flag(fi, FI_DATA_EXIST);
+ if (ri->i_inline & F2FS_INLINE_DOTS)
+ set_inode_flag(fi, FI_INLINE_DOTS);
}
static inline void set_raw_inline(struct f2fs_inode_info *fi,
@@ -1219,6 +1301,8 @@
ri->i_inline |= F2FS_INLINE_DENTRY;
if (is_inode_flag_set(fi, FI_DATA_EXIST))
ri->i_inline |= F2FS_DATA_EXIST;
+ if (is_inode_flag_set(fi, FI_INLINE_DOTS))
+ ri->i_inline |= F2FS_INLINE_DOTS;
}
static inline int f2fs_has_inline_xattr(struct inode *inode)
@@ -1264,6 +1348,11 @@
return is_inode_flag_set(F2FS_I(inode), FI_DATA_EXIST);
}
+static inline int f2fs_has_inline_dots(struct inode *inode)
+{
+ return is_inode_flag_set(F2FS_I(inode), FI_INLINE_DOTS);
+}
+
static inline bool f2fs_is_atomic_file(struct inode *inode)
{
return is_inode_flag_set(F2FS_I(inode), FI_ATOMIC_FILE);
@@ -1274,6 +1363,11 @@
return is_inode_flag_set(F2FS_I(inode), FI_VOLATILE_FILE);
}
+static inline bool f2fs_is_first_block_written(struct inode *inode)
+{
+ return is_inode_flag_set(F2FS_I(inode), FI_FIRST_BLOCK_WRITTEN);
+}
+
static inline bool f2fs_is_drop_cache(struct inode *inode)
{
return is_inode_flag_set(F2FS_I(inode), FI_DROP_CACHE);
@@ -1290,12 +1384,6 @@
return is_inode_flag_set(F2FS_I(inode), FI_INLINE_DENTRY);
}
-static inline void *inline_dentry_addr(struct page *page)
-{
- struct f2fs_inode *ri = F2FS_INODE(page);
- return (void *)&(ri->i_addr[1]);
-}
-
static inline void f2fs_dentry_kunmap(struct inode *dir, struct page *page)
{
if (!f2fs_has_inline_dentry(dir))
@@ -1363,7 +1451,7 @@
* dir.c
*/
extern unsigned char f2fs_filetype_table[F2FS_FT_MAX];
-void set_de_type(struct f2fs_dir_entry *, struct inode *);
+void set_de_type(struct f2fs_dir_entry *, umode_t);
struct f2fs_dir_entry *find_target_dentry(struct qstr *, int *,
struct f2fs_dentry_ptr *);
bool f2fs_fill_dentries(struct dir_context *, struct f2fs_dentry_ptr *,
@@ -1382,7 +1470,10 @@
void f2fs_set_link(struct inode *, struct f2fs_dir_entry *,
struct page *, struct inode *);
int update_dent_inode(struct inode *, const struct qstr *);
-int __f2fs_add_link(struct inode *, const struct qstr *, struct inode *);
+void f2fs_update_dentry(nid_t ino, umode_t mode, struct f2fs_dentry_ptr *,
+ const struct qstr *, f2fs_hash_t , unsigned int);
+int __f2fs_add_link(struct inode *, const struct qstr *, struct inode *, nid_t,
+ umode_t);
void f2fs_delete_entry(struct f2fs_dir_entry *, struct page *, struct inode *,
struct inode *);
int f2fs_do_tmpfile(struct inode *, struct inode *);
@@ -1392,7 +1483,7 @@
static inline int f2fs_add_link(struct dentry *dentry, struct inode *inode)
{
return __f2fs_add_link(dentry->d_parent->d_inode, &dentry->d_name,
- inode);
+ inode, inode->i_ino, inode->i_mode);
}
/*
@@ -1519,14 +1610,22 @@
struct f2fs_io_info *);
void f2fs_submit_page_mbio(struct f2fs_sb_info *, struct page *,
struct f2fs_io_info *);
+void set_data_blkaddr(struct dnode_of_data *);
int reserve_new_block(struct dnode_of_data *);
int f2fs_reserve_block(struct dnode_of_data *, pgoff_t);
-void update_extent_cache(struct dnode_of_data *);
+void f2fs_shrink_extent_tree(struct f2fs_sb_info *, int);
+void f2fs_destroy_extent_tree(struct inode *);
+void f2fs_init_extent_cache(struct inode *, struct f2fs_extent *);
+void f2fs_update_extent_cache(struct dnode_of_data *);
+void f2fs_preserve_extent_tree(struct inode *);
struct page *find_data_page(struct inode *, pgoff_t, bool);
struct page *get_lock_data_page(struct inode *, pgoff_t);
struct page *get_new_data_page(struct inode *, struct page *, pgoff_t, bool);
int do_write_data_page(struct page *, struct f2fs_io_info *);
int f2fs_fiemap(struct inode *inode, struct fiemap_extent_info *, u64, u64);
+void init_extent_cache_info(struct f2fs_sb_info *);
+int __init create_extent_cache(void);
+void destroy_extent_cache(void);
void f2fs_invalidate_page(struct page *, unsigned int, unsigned int);
int f2fs_release_page(struct page *, gfp_t);
@@ -1554,7 +1653,7 @@
struct f2fs_sb_info *sbi;
int all_area_segs, sit_area_segs, nat_area_segs, ssa_area_segs;
int main_area_segs, main_area_sections, main_area_zones;
- int hit_ext, total_ext;
+ int hit_ext, total_ext, ext_tree, ext_node;
int ndirty_node, ndirty_dent, ndirty_dirs, ndirty_meta;
int nats, dirty_nats, sits, dirty_sits, fnids;
int total_count, utilization;
@@ -1566,7 +1665,9 @@
int dirty_count, node_pages, meta_pages;
int prefree_count, call_count, cp_count;
int tot_segs, node_segs, data_segs, free_segs, free_secs;
+ int bg_node_segs, bg_data_segs;
int tot_blks, data_blks, node_blks;
+ int bg_data_blks, bg_node_blks;
int curseg[NR_CURSEG_TYPE];
int cursec[NR_CURSEG_TYPE];
int curzone[NR_CURSEG_TYPE];
@@ -1615,31 +1716,36 @@
((sbi)->block_count[(curseg)->alloc_type]++)
#define stat_inc_inplace_blocks(sbi) \
(atomic_inc(&(sbi)->inplace_count))
-#define stat_inc_seg_count(sbi, type) \
+#define stat_inc_seg_count(sbi, type, gc_type) \
do { \
struct f2fs_stat_info *si = F2FS_STAT(sbi); \
(si)->tot_segs++; \
- if (type == SUM_TYPE_DATA) \
+ if (type == SUM_TYPE_DATA) { \
si->data_segs++; \
- else \
+ si->bg_data_segs += (gc_type == BG_GC) ? 1 : 0; \
+ } else { \
si->node_segs++; \
+ si->bg_node_segs += (gc_type == BG_GC) ? 1 : 0; \
+ } \
} while (0)
#define stat_inc_tot_blk_count(si, blks) \
(si->tot_blks += (blks))
-#define stat_inc_data_blk_count(sbi, blks) \
+#define stat_inc_data_blk_count(sbi, blks, gc_type) \
do { \
struct f2fs_stat_info *si = F2FS_STAT(sbi); \
stat_inc_tot_blk_count(si, blks); \
si->data_blks += (blks); \
+ si->bg_data_blks += (gc_type == BG_GC) ? (blks) : 0; \
} while (0)
-#define stat_inc_node_blk_count(sbi, blks) \
+#define stat_inc_node_blk_count(sbi, blks, gc_type) \
do { \
struct f2fs_stat_info *si = F2FS_STAT(sbi); \
stat_inc_tot_blk_count(si, blks); \
si->node_blks += (blks); \
+ si->bg_node_blks += (gc_type == BG_GC) ? (blks) : 0; \
} while (0)
int f2fs_build_stats(struct f2fs_sb_info *);
@@ -1661,10 +1767,10 @@
#define stat_inc_seg_type(sbi, curseg)
#define stat_inc_block_count(sbi, curseg)
#define stat_inc_inplace_blocks(sbi)
-#define stat_inc_seg_count(si, type)
+#define stat_inc_seg_count(sbi, type, gc_type)
#define stat_inc_tot_blk_count(si, blks)
-#define stat_inc_data_blk_count(si, blks)
-#define stat_inc_node_blk_count(sbi, blks)
+#define stat_inc_data_blk_count(sbi, blks, gc_type)
+#define stat_inc_node_blk_count(sbi, blks, gc_type)
static inline int f2fs_build_stats(struct f2fs_sb_info *sbi) { return 0; }
static inline void f2fs_destroy_stats(struct f2fs_sb_info *sbi) { }
@@ -1688,6 +1794,7 @@
*/
bool f2fs_may_inline(struct inode *);
void read_inline_data(struct page *, struct page *);
+bool truncate_inline_inode(struct page *, u64);
int f2fs_read_inline_data(struct inode *, struct page *);
int f2fs_convert_inline_page(struct dnode_of_data *, struct page *);
int f2fs_convert_inline_inode(struct inode *);
@@ -1697,7 +1804,8 @@
struct page **);
struct f2fs_dir_entry *f2fs_parent_inline_dir(struct inode *, struct page **);
int make_empty_inline_dir(struct inode *inode, struct inode *, struct page *);
-int f2fs_add_inline_entry(struct inode *, const struct qstr *, struct inode *);
+int f2fs_add_inline_entry(struct inode *, const struct qstr *, struct inode *,
+ nid_t, umode_t);
void f2fs_delete_inline_entry(struct f2fs_dir_entry *, struct page *,
struct inode *, struct inode *);
bool f2fs_empty_inline_dir(struct inode *);
diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
index df6a059..a6f3f61 100644
--- a/fs/f2fs/file.c
+++ b/fs/f2fs/file.c
@@ -241,6 +241,8 @@
* will be used only for fsynced inodes after checkpoint.
*/
try_to_fix_pino(inode);
+ clear_inode_flag(fi, FI_APPEND_WRITE);
+ clear_inode_flag(fi, FI_UPDATE_WRITE);
goto out;
}
sync_nodes:
@@ -433,8 +435,12 @@
continue;
dn->data_blkaddr = NULL_ADDR;
- update_extent_cache(dn);
+ set_data_blkaddr(dn);
+ f2fs_update_extent_cache(dn);
invalidate_blocks(sbi, blkaddr);
+ if (dn->ofs_in_node == 0 && IS_INODE(dn->node_page))
+ clear_inode_flag(F2FS_I(dn->inode),
+ FI_FIRST_BLOCK_WRITTEN);
nr_free++;
}
if (nr_free) {
@@ -454,15 +460,16 @@
truncate_data_blocks_range(dn, ADDRS_PER_BLOCK);
}
-static int truncate_partial_data_page(struct inode *inode, u64 from)
+static int truncate_partial_data_page(struct inode *inode, u64 from,
+ bool force)
{
unsigned offset = from & (PAGE_CACHE_SIZE - 1);
struct page *page;
- if (!offset)
+ if (!offset && !force)
return 0;
- page = find_data_page(inode, from >> PAGE_CACHE_SHIFT, false);
+ page = find_data_page(inode, from >> PAGE_CACHE_SHIFT, force);
if (IS_ERR(page))
return 0;
@@ -473,7 +480,8 @@
f2fs_wait_on_page_writeback(page, DATA);
zero_user(page, offset, PAGE_CACHE_SIZE - offset);
- set_page_dirty(page);
+ if (!force)
+ set_page_dirty(page);
out:
f2fs_put_page(page, 1);
return 0;
@@ -487,6 +495,7 @@
pgoff_t free_from;
int count = 0, err = 0;
struct page *ipage;
+ bool truncate_page = false;
trace_f2fs_truncate_blocks_enter(inode, from);
@@ -502,7 +511,10 @@
}
if (f2fs_has_inline_data(inode)) {
+ if (truncate_inline_inode(ipage, from))
+ set_page_dirty(ipage);
f2fs_put_page(ipage, 1);
+ truncate_page = true;
goto out;
}
@@ -533,7 +545,7 @@
/* lastly zero out the first data page */
if (!err)
- err = truncate_partial_data_page(inode, from);
+ err = truncate_partial_data_page(inode, from, truncate_page);
trace_f2fs_truncate_blocks_exit(inode, err);
return err;
@@ -997,6 +1009,9 @@
if (!f2fs_is_volatile_file(inode))
return 0;
+ if (!f2fs_is_first_block_written(inode))
+ return truncate_partial_data_page(inode, 0, true);
+
punch_hole(inode, 0, F2FS_BLKSIZE);
return 0;
}
@@ -1029,6 +1044,41 @@
return ret;
}
+static int f2fs_ioc_shutdown(struct file *filp, unsigned long arg)
+{
+ struct inode *inode = file_inode(filp);
+ struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ struct super_block *sb = sbi->sb;
+ __u32 in;
+
+ if (!capable(CAP_SYS_ADMIN))
+ return -EPERM;
+
+ if (get_user(in, (__u32 __user *)arg))
+ return -EFAULT;
+
+ switch (in) {
+ case F2FS_GOING_DOWN_FULLSYNC:
+ sb = freeze_bdev(sb->s_bdev);
+ if (sb && !IS_ERR(sb)) {
+ f2fs_stop_checkpoint(sbi);
+ thaw_bdev(sb->s_bdev, sb);
+ }
+ break;
+ case F2FS_GOING_DOWN_METASYNC:
+ /* do checkpoint only */
+ f2fs_sync_fs(sb, 1);
+ f2fs_stop_checkpoint(sbi);
+ break;
+ case F2FS_GOING_DOWN_NOSYNC:
+ f2fs_stop_checkpoint(sbi);
+ break;
+ default:
+ return -EINVAL;
+ }
+ return 0;
+}
+
static int f2fs_ioc_fitrim(struct file *filp, unsigned long arg)
{
struct inode *inode = file_inode(filp);
@@ -1078,6 +1128,8 @@
return f2fs_ioc_release_volatile_write(filp);
case F2FS_IOC_ABORT_VOLATILE_WRITE:
return f2fs_ioc_abort_volatile_write(filp);
+ case F2FS_IOC_SHUTDOWN:
+ return f2fs_ioc_shutdown(filp, arg);
case FITRIM:
return f2fs_ioc_fitrim(filp, arg);
default:
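The new F2FS_IOC_SHUTDOWN handler above is driven entirely from user space. Below is a minimal sketch of invoking it; the ioctl request code and the F2FS_GOING_DOWN_* values are assumed to mirror the definitions this series adds to fs/f2fs/f2fs.h (not visible in this hunk), so treat the numeric constants as illustrative rather than authoritative.

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/types.h>

/* Assumed to mirror the definitions added to fs/f2fs/f2fs.h by this
 * series (not shown in this hunk); treat the numbers as illustrative. */
#ifndef F2FS_IOC_SHUTDOWN
#define F2FS_IOC_SHUTDOWN               _IOR('X', 125, __u32)
#define F2FS_GOING_DOWN_FULLSYNC        0x0     /* freeze bdev, checkpoint, stop */
#define F2FS_GOING_DOWN_METASYNC        0x1     /* checkpoint, then stop */
#define F2FS_GOING_DOWN_NOSYNC          0x2     /* stop checkpointing only */
#endif

int main(int argc, char **argv)
{
        __u32 how = F2FS_GOING_DOWN_METASYNC;
        int fd;

        if (argc < 2) {
                fprintf(stderr, "usage: %s <file-on-f2fs>\n", argv[0]);
                return 1;
        }
        /* Needs CAP_SYS_ADMIN, as checked in f2fs_ioc_shutdown() above. */
        fd = open(argv[1], O_RDONLY);
        if (fd < 0 || ioctl(fd, F2FS_IOC_SHUTDOWN, &how) < 0) {
                perror("F2FS_IOC_SHUTDOWN");
                return 1;
        }
        close(fd);
        return 0;
}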
diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
index 76adbc3..ed58211 100644
--- a/fs/f2fs/gc.c
+++ b/fs/f2fs/gc.c
@@ -435,7 +435,7 @@
set_page_dirty(node_page);
}
f2fs_put_page(node_page, 1);
- stat_inc_node_blk_count(sbi, 1);
+ stat_inc_node_blk_count(sbi, 1, gc_type);
}
if (initial) {
@@ -622,7 +622,7 @@
if (IS_ERR(data_page))
continue;
move_data_page(inode, data_page, gc_type);
- stat_inc_data_blk_count(sbi, 1);
+ stat_inc_data_blk_count(sbi, 1, gc_type);
}
}
@@ -680,7 +680,7 @@
}
blk_finish_plug(&plug);
- stat_inc_seg_count(sbi, GET_SUM_TYPE((&sum->footer)));
+ stat_inc_seg_count(sbi, GET_SUM_TYPE((&sum->footer)), gc_type);
stat_inc_call_count(sbi->stat_info);
f2fs_put_page(sum_page, 1);
diff --git a/fs/f2fs/inline.c b/fs/f2fs/inline.c
index 1484c00..8140e4f 100644
--- a/fs/f2fs/inline.c
+++ b/fs/f2fs/inline.c
@@ -21,7 +21,7 @@
if (f2fs_is_atomic_file(inode))
return false;
- if (!S_ISREG(inode->i_mode))
+ if (!S_ISREG(inode->i_mode) && !S_ISLNK(inode->i_mode))
return false;
if (i_size_read(inode) > MAX_INLINE_DATA)
@@ -50,10 +50,19 @@
SetPageUptodate(page);
}
-static void truncate_inline_data(struct page *ipage)
+bool truncate_inline_inode(struct page *ipage, u64 from)
{
+ void *addr;
+
+ if (from >= MAX_INLINE_DATA)
+ return false;
+
+ addr = inline_data_addr(ipage);
+
f2fs_wait_on_page_writeback(ipage, NODE);
- memset(inline_data_addr(ipage), 0, MAX_INLINE_DATA);
+ memset(addr + from, 0, MAX_INLINE_DATA - from);
+
+ return true;
}
int f2fs_read_inline_data(struct inode *inode, struct page *page)
@@ -122,7 +131,8 @@
set_page_writeback(page);
fio.blk_addr = dn->data_blkaddr;
write_data_page(page, dn, &fio);
- update_extent_cache(dn);
+ set_data_blkaddr(dn);
+ f2fs_update_extent_cache(dn);
f2fs_wait_on_page_writeback(page, DATA);
if (dirty)
inode_dec_dirty_pages(dn->inode);
@@ -131,7 +141,7 @@
set_inode_flag(F2FS_I(dn->inode), FI_APPEND_WRITE);
/* clear inline data and flag after data writeback */
- truncate_inline_data(dn->inode_page);
+ truncate_inline_inode(dn->inode_page, 0);
clear_out:
stat_dec_inline_inode(dn->inode);
f2fs_clear_inline_inode(dn->inode);
@@ -245,7 +255,7 @@
if (f2fs_has_inline_data(inode)) {
ipage = get_node_page(sbi, inode->i_ino);
f2fs_bug_on(sbi, IS_ERR(ipage));
- truncate_inline_data(ipage);
+ truncate_inline_inode(ipage, 0);
f2fs_clear_inline_inode(inode);
update_inode(inode, ipage);
f2fs_put_page(ipage, 1);
@@ -363,7 +373,7 @@
set_page_dirty(page);
/* clear inline dir and flag after data writeback */
- truncate_inline_data(ipage);
+ truncate_inline_inode(ipage, 0);
stat_dec_inline_dir(dir);
clear_inode_flag(F2FS_I(dir), FI_INLINE_DENTRY);
@@ -380,21 +390,18 @@
}
int f2fs_add_inline_entry(struct inode *dir, const struct qstr *name,
- struct inode *inode)
+ struct inode *inode, nid_t ino, umode_t mode)
{
struct f2fs_sb_info *sbi = F2FS_I_SB(dir);
struct page *ipage;
unsigned int bit_pos;
f2fs_hash_t name_hash;
- struct f2fs_dir_entry *de;
size_t namelen = name->len;
struct f2fs_inline_dentry *dentry_blk = NULL;
+ struct f2fs_dentry_ptr d;
int slots = GET_DENTRY_SLOTS(namelen);
- struct page *page;
+ struct page *page = NULL;
int err = 0;
- int i;
-
- name_hash = f2fs_dentry_hash(name);
ipage = get_node_page(sbi, dir->i_ino);
if (IS_ERR(ipage))
@@ -410,32 +417,34 @@
goto out;
}
- down_write(&F2FS_I(inode)->i_sem);
- page = init_inode_metadata(inode, dir, name, ipage);
- if (IS_ERR(page)) {
- err = PTR_ERR(page);
- goto fail;
+ if (inode) {
+ down_write(&F2FS_I(inode)->i_sem);
+ page = init_inode_metadata(inode, dir, name, ipage);
+ if (IS_ERR(page)) {
+ err = PTR_ERR(page);
+ goto fail;
+ }
}
f2fs_wait_on_page_writeback(ipage, NODE);
- de = &dentry_blk->dentry[bit_pos];
- de->hash_code = name_hash;
- de->name_len = cpu_to_le16(namelen);
- memcpy(dentry_blk->filename[bit_pos], name->name, name->len);
- de->ino = cpu_to_le32(inode->i_ino);
- set_de_type(de, inode);
- for (i = 0; i < slots; i++)
- test_and_set_bit_le(bit_pos + i, &dentry_blk->dentry_bitmap);
+
+ name_hash = f2fs_dentry_hash(name);
+ make_dentry_ptr(&d, (void *)dentry_blk, 2);
+ f2fs_update_dentry(ino, mode, &d, name, name_hash, bit_pos);
+
set_page_dirty(ipage);
/* we don't need to mark_inode_dirty now */
- F2FS_I(inode)->i_pino = dir->i_ino;
- update_inode(inode, page);
- f2fs_put_page(page, 1);
+ if (inode) {
+ F2FS_I(inode)->i_pino = dir->i_ino;
+ update_inode(inode, page);
+ f2fs_put_page(page, 1);
+ }
update_parent_metadata(dir, inode, 0);
fail:
- up_write(&F2FS_I(inode)->i_sem);
+ if (inode)
+ up_write(&F2FS_I(inode)->i_sem);
if (is_inode_flag_set(F2FS_I(dir), FI_UPDATE_DIR)) {
update_inode(dir, ipage);
diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c
index 2d002e3..e622ec9 100644
--- a/fs/f2fs/inode.c
+++ b/fs/f2fs/inode.c
@@ -51,6 +51,15 @@
}
}
+static bool __written_first_block(struct f2fs_inode *ri)
+{
+ block_t addr = le32_to_cpu(ri->i_addr[0]);
+
+ if (addr != NEW_ADDR && addr != NULL_ADDR)
+ return true;
+ return false;
+}
+
static void __set_inode_rdev(struct inode *inode, struct f2fs_inode *ri)
{
if (S_ISCHR(inode->i_mode) || S_ISBLK(inode->i_mode)) {
@@ -130,7 +139,8 @@
fi->i_pino = le32_to_cpu(ri->i_pino);
fi->i_dir_level = ri->i_dir_level;
- get_extent_info(&fi->ext, ri->i_ext);
+ f2fs_init_extent_cache(inode, &ri->i_ext);
+
get_inline_info(fi, ri);
/* check data exist */
@@ -140,6 +150,9 @@
/* get rdev by using inline_info */
__get_inode_rdev(inode, ri);
+ if (__written_first_block(ri))
+ set_inode_flag(F2FS_I(inode), FI_FIRST_BLOCK_WRITTEN);
+
f2fs_put_page(node_page, 1);
stat_inc_inline_inode(inode);
@@ -220,7 +233,11 @@
ri->i_links = cpu_to_le32(inode->i_nlink);
ri->i_size = cpu_to_le64(i_size_read(inode));
ri->i_blocks = cpu_to_le64(inode->i_blocks);
+
+ read_lock(&F2FS_I(inode)->ext_lock);
set_raw_extent(&F2FS_I(inode)->ext, &ri->i_ext);
+ read_unlock(&F2FS_I(inode)->ext_lock);
+
set_raw_inline(F2FS_I(inode), ri);
ri->i_atime = cpu_to_le64(inode->i_atime.tv_sec);
@@ -328,6 +345,12 @@
no_delete:
stat_dec_inline_dir(inode);
stat_dec_inline_inode(inode);
+
+ /* update extent info in inode */
+ if (inode->i_nlink)
+ f2fs_preserve_extent_tree(inode);
+ f2fs_destroy_extent_tree(inode);
+
invalidate_mapping_pages(NODE_MAPPING(sbi), inode->i_ino, inode->i_ino);
if (xnid)
invalidate_mapping_pages(NODE_MAPPING(sbi), xnid, xnid);
diff --git a/fs/f2fs/namei.c b/fs/f2fs/namei.c
index e79639a9..407dde3 100644
--- a/fs/f2fs/namei.c
+++ b/fs/f2fs/namei.c
@@ -14,6 +14,7 @@
#include <linux/sched.h>
#include <linux/ctype.h>
#include <linux/dcache.h>
+#include <linux/namei.h>
#include "f2fs.h"
#include "node.h"
@@ -187,6 +188,44 @@
return d_obtain_alias(f2fs_iget(child->d_inode->i_sb, ino));
}
+static int __recover_dot_dentries(struct inode *dir, nid_t pino)
+{
+ struct f2fs_sb_info *sbi = F2FS_I_SB(dir);
+ struct qstr dot = QSTR_INIT(".", 1);
+ struct qstr dotdot = QSTR_INIT("..", 2);
+ struct f2fs_dir_entry *de;
+ struct page *page;
+ int err = 0;
+
+ f2fs_lock_op(sbi);
+
+ de = f2fs_find_entry(dir, &dot, &page);
+ if (de) {
+ f2fs_dentry_kunmap(dir, page);
+ f2fs_put_page(page, 0);
+ } else {
+ err = __f2fs_add_link(dir, &dot, NULL, dir->i_ino, S_IFDIR);
+ if (err)
+ goto out;
+ }
+
+ de = f2fs_find_entry(dir, &dotdot, &page);
+ if (de) {
+ f2fs_dentry_kunmap(dir, page);
+ f2fs_put_page(page, 0);
+ } else {
+ err = __f2fs_add_link(dir, &dotdot, NULL, pino, S_IFDIR);
+ }
+out:
+ if (!err) {
+ clear_inode_flag(F2FS_I(dir), FI_INLINE_DOTS);
+ mark_inode_dirty(dir);
+ }
+
+ f2fs_unlock_op(sbi);
+ return err;
+}
+
static struct dentry *f2fs_lookup(struct inode *dir, struct dentry *dentry,
unsigned int flags)
{
@@ -206,6 +245,16 @@
inode = f2fs_iget(dir->i_sb, ino);
if (IS_ERR(inode))
return ERR_CAST(inode);
+
+ if (f2fs_has_inline_dots(inode)) {
+ int err;
+
+ err = __recover_dot_dentries(inode, dir->i_ino);
+ if (err) {
+ iget_failed(inode);
+ return ERR_PTR(err);
+ }
+ }
}
return d_splice_alias(inode, dentry);
@@ -247,6 +296,23 @@
return err;
}
+static void *f2fs_follow_link(struct dentry *dentry, struct nameidata *nd)
+{
+ struct page *page;
+
+ page = page_follow_link_light(dentry, nd);
+ if (IS_ERR(page))
+ return page;
+
+ /* this is broken symlink case */
+ if (*nd_get_link(nd) == 0) {
+ kunmap(page);
+ page_cache_release(page);
+ return ERR_PTR(-ENOENT);
+ }
+ return page;
+}
+
static int f2fs_symlink(struct inode *dir, struct dentry *dentry,
const char *symname)
{
@@ -276,6 +342,17 @@
d_instantiate(dentry, inode);
unlock_new_inode(inode);
+ /*
+ * Let's flush symlink data in order to avoid broken symlink as much as
+ * possible. Nevertheless, fsyncing is the best way, but there is no
+ * way to get a file descriptor in order to flush that.
+ *
+ * Note that, it needs to do dir->fsync to make this recoverable.
+ * If the symlink path is stored into inline_data, there is no
+ * performance regression.
+ */
+ filemap_write_and_wait_range(inode->i_mapping, 0, symlen - 1);
+
if (IS_DIRSYNC(dir))
f2fs_sync_fs(sbi->sb, 1);
return err;
@@ -693,6 +770,8 @@
f2fs_unlock_op(sbi);
alloc_nid_done(sbi, inode->i_ino);
+
+ stat_inc_inline_inode(inode);
d_tmpfile(dentry, inode);
unlock_new_inode(inode);
return 0;
@@ -729,7 +808,7 @@
const struct inode_operations f2fs_symlink_inode_operations = {
.readlink = generic_readlink,
- .follow_link = page_follow_link_light,
+ .follow_link = f2fs_follow_link,
.put_link = page_put_link,
.getattr = f2fs_getattr,
.setattr = f2fs_setattr,
diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
index 97bd9d3..8ab0cf1 100644
--- a/fs/f2fs/node.c
+++ b/fs/f2fs/node.c
@@ -41,7 +41,9 @@
/* only uses low memory */
avail_ram = val.totalram - val.totalhigh;
- /* give 25%, 25%, 50%, 50% memory for each components respectively */
+ /*
+ * give 25%, 25%, 50%, 50%, 50% memory for each components respectively
+ */
if (type == FREE_NIDS) {
mem_size = (nm_i->fcnt * sizeof(struct free_nid)) >>
PAGE_CACHE_SHIFT;
@@ -62,6 +64,11 @@
mem_size += (sbi->im[i].ino_num *
sizeof(struct ino_entry)) >> PAGE_CACHE_SHIFT;
res = mem_size < ((avail_ram * nm_i->ram_thresh / 100) >> 1);
+ } else if (type == EXTENT_CACHE) {
+ mem_size = (sbi->total_ext_tree * sizeof(struct extent_tree) +
+ atomic_read(&sbi->total_ext_node) *
+ sizeof(struct extent_node)) >> PAGE_CACHE_SHIFT;
+ res = mem_size < ((avail_ram * nm_i->ram_thresh / 100) >> 1);
} else {
if (sbi->sb->s_bdi->dirty_exceeded)
return false;
@@ -494,7 +501,7 @@
/* if inline_data is set, should not report any block indices */
if (f2fs_has_inline_data(dn->inode) && index) {
- err = -EINVAL;
+ err = -ENOENT;
f2fs_put_page(npage[0], 1);
goto release_out;
}
@@ -995,6 +1002,7 @@
get_node_info(sbi, page->index, &ni);
if (unlikely(ni.blk_addr == NULL_ADDR)) {
+ ClearPageUptodate(page);
f2fs_put_page(page, 1);
return -ENOENT;
}
@@ -1306,6 +1314,7 @@
/* This page is already truncated */
if (unlikely(ni.blk_addr == NULL_ADDR)) {
+ ClearPageUptodate(page);
dec_page_count(sbi, F2FS_DIRTY_NODES);
unlock_page(page);
return 0;
@@ -1821,6 +1830,7 @@
struct f2fs_nat_block *nat_blk;
struct nat_entry *ne, *cur;
struct page *page = NULL;
+ struct f2fs_nm_info *nm_i = NM_I(sbi);
/*
* there are two steps to flush nat entries:
@@ -1874,7 +1884,9 @@
f2fs_bug_on(sbi, set->entry_cnt);
+ down_write(&nm_i->nat_tree_lock);
radix_tree_delete(&NM_I(sbi)->nat_set_root, set->set);
+ up_write(&nm_i->nat_tree_lock);
kmem_cache_free(nat_entry_set_slab, set);
}
@@ -1902,6 +1914,7 @@
if (!__has_cursum_space(sum, nm_i->dirty_nat_cnt, NAT_JOURNAL))
remove_nats_in_journal(sbi);
+ down_write(&nm_i->nat_tree_lock);
while ((found = __gang_lookup_nat_set(nm_i,
set_idx, SETVEC_SIZE, setvec))) {
unsigned idx;
@@ -1910,6 +1923,7 @@
__adjust_nat_entry_set(setvec[idx], &sets,
MAX_NAT_JENTRIES(sum));
}
+ up_write(&nm_i->nat_tree_lock);
/* flush dirty nats in nat entry set */
list_for_each_entry_safe(set, tmp, &sets, set_list)
diff --git a/fs/f2fs/node.h b/fs/f2fs/node.h
index f405bbf..c56026f 100644
--- a/fs/f2fs/node.h
+++ b/fs/f2fs/node.h
@@ -120,6 +120,7 @@
NAT_ENTRIES, /* indicates the cached nat entry */
DIRTY_DENTS, /* indicates dirty dentry pages */
INO_ENTRIES, /* indicates inode entries */
+ EXTENT_CACHE, /* indicates extent cache */
BASE_CHECK, /* check kernel status */
};
diff --git a/fs/f2fs/recovery.c b/fs/f2fs/recovery.c
index 41afb95..8d8ea99 100644
--- a/fs/f2fs/recovery.c
+++ b/fs/f2fs/recovery.c
@@ -93,10 +93,9 @@
}
retry:
de = f2fs_find_entry(dir, &name, &page);
- if (de && inode->i_ino == le32_to_cpu(de->ino)) {
- clear_inode_flag(F2FS_I(inode), FI_INC_LINK);
+ if (de && inode->i_ino == le32_to_cpu(de->ino))
goto out_unmap_put;
- }
+
if (de) {
einode = f2fs_iget(inode->i_sb, le32_to_cpu(de->ino));
if (IS_ERR(einode)) {
@@ -115,7 +114,7 @@
iput(einode);
goto retry;
}
- err = __f2fs_add_link(dir, &name, inode);
+ err = __f2fs_add_link(dir, &name, inode, inode->i_ino, inode->i_mode);
if (err)
goto out_err;
@@ -187,11 +186,7 @@
goto next;
entry = get_fsync_inode(head, ino_of_node(page));
- if (entry) {
- if (IS_INODE(page) && is_dent_dnode(page))
- set_inode_flag(F2FS_I(entry->inode),
- FI_INC_LINK);
- } else {
+ if (!entry) {
if (IS_INODE(page) && is_dent_dnode(page)) {
err = recover_inode_page(sbi, page);
if (err)
@@ -212,8 +207,10 @@
if (IS_ERR(entry->inode)) {
err = PTR_ERR(entry->inode);
kmem_cache_free(fsync_entry_slab, entry);
- if (err == -ENOENT)
+ if (err == -ENOENT) {
+ err = 0;
goto next;
+ }
break;
}
list_add_tail(&entry->list, head);
@@ -256,6 +253,7 @@
struct f2fs_summary_block *sum_node;
struct f2fs_summary sum;
struct page *sum_page, *node_page;
+ struct dnode_of_data tdn = *dn;
nid_t ino, nid;
struct inode *inode;
unsigned int offset;
@@ -283,17 +281,15 @@
/* Use the locked dnode page and inode */
nid = le32_to_cpu(sum.nid);
if (dn->inode->i_ino == nid) {
- struct dnode_of_data tdn = *dn;
tdn.nid = nid;
+ if (!dn->inode_page_locked)
+ lock_page(dn->inode_page);
tdn.node_page = dn->inode_page;
tdn.ofs_in_node = le16_to_cpu(sum.ofs_in_node);
- truncate_data_blocks_range(&tdn, 1);
- return 0;
+ goto truncate_out;
} else if (dn->nid == nid) {
- struct dnode_of_data tdn = *dn;
tdn.ofs_in_node = le16_to_cpu(sum.ofs_in_node);
- truncate_data_blocks_range(&tdn, 1);
- return 0;
+ goto truncate_out;
}
/* Get the node page */
@@ -317,18 +313,33 @@
bidx = start_bidx_of_node(offset, F2FS_I(inode)) +
le16_to_cpu(sum.ofs_in_node);
- if (ino != dn->inode->i_ino) {
- truncate_hole(inode, bidx, bidx + 1);
+ /*
+ * if inode page is locked, unlock temporarily, but its reference
+ * count keeps alive.
+ */
+ if (ino == dn->inode->i_ino && dn->inode_page_locked)
+ unlock_page(dn->inode_page);
+
+ set_new_dnode(&tdn, inode, NULL, NULL, 0);
+ if (get_dnode_of_data(&tdn, bidx, LOOKUP_NODE))
+ goto out;
+
+ if (tdn.data_blkaddr == blkaddr)
+ truncate_data_blocks_range(&tdn, 1);
+
+ f2fs_put_dnode(&tdn);
+out:
+ if (ino != dn->inode->i_ino)
iput(inode);
- } else {
- struct dnode_of_data tdn;
- set_new_dnode(&tdn, inode, dn->inode_page, NULL, 0);
- if (get_dnode_of_data(&tdn, bidx, LOOKUP_NODE))
- return 0;
- if (tdn.data_blkaddr != NULL_ADDR)
- truncate_data_blocks_range(&tdn, 1);
- f2fs_put_page(tdn.node_page, 1);
- }
+ else if (dn->inode_page_locked)
+ lock_page(dn->inode_page);
+ return 0;
+
+truncate_out:
+ if (datablock_addr(tdn.node_page, tdn.ofs_in_node) == blkaddr)
+ truncate_data_blocks_range(&tdn, 1);
+ if (dn->inode->i_ino == nid && !dn->inode_page_locked)
+ unlock_page(dn->inode_page);
return 0;
}
@@ -384,7 +395,9 @@
src = datablock_addr(dn.node_page, dn.ofs_in_node);
dest = datablock_addr(page, dn.ofs_in_node);
- if (src != dest && dest != NEW_ADDR && dest != NULL_ADDR) {
+ if (src != dest && dest != NEW_ADDR && dest != NULL_ADDR &&
+ dest >= MAIN_BLKADDR(sbi) && dest < MAX_BLKADDR(sbi)) {
+
if (src == NULL_ADDR) {
err = reserve_new_block(&dn);
/* We should not get -ENOSPC */
@@ -401,14 +414,13 @@
/* write dummy data page */
recover_data_page(sbi, NULL, &sum, src, dest);
dn.data_blkaddr = dest;
- update_extent_cache(&dn);
+ set_data_blkaddr(&dn);
+ f2fs_update_extent_cache(&dn);
recovered++;
}
dn.ofs_in_node++;
}
- /* write node page in place */
- set_summary(&sum, dn.nid, 0, 0);
if (IS_INODE(dn.node_page))
sync_inode_page(&dn);
@@ -552,7 +564,7 @@
mutex_unlock(&sbi->cp_mutex);
} else if (need_writecp) {
struct cp_control cpc = {
- .reason = CP_SYNC,
+ .reason = CP_RECOVERY,
};
mutex_unlock(&sbi->cp_mutex);
write_checkpoint(sbi, &cpc);
diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
index daee4ab..f939660 100644
--- a/fs/f2fs/segment.c
+++ b/fs/f2fs/segment.c
@@ -205,6 +205,8 @@
list_add_tail(&new->list, &fi->inmem_pages);
inc_page_count(F2FS_I_SB(inode), F2FS_INMEM_PAGES);
mutex_unlock(&fi->inmem_lock);
+
+ trace_f2fs_register_inmem_page(page, INMEM);
}
void commit_inmem_pages(struct inode *inode, bool abort)
@@ -238,11 +240,13 @@
f2fs_wait_on_page_writeback(cur->page, DATA);
if (clear_page_dirty_for_io(cur->page))
inode_dec_dirty_pages(inode);
+ trace_f2fs_commit_inmem_page(cur->page, INMEM);
do_write_data_page(cur->page, &fio);
submit_bio = true;
}
f2fs_put_page(cur->page, 1);
} else {
+ trace_f2fs_commit_inmem_page(cur->page, INMEM_DROP);
put_page(cur->page);
}
radix_tree_delete(&fi->inmem_root, cur->page->index);
@@ -277,6 +281,9 @@
void f2fs_balance_fs_bg(struct f2fs_sb_info *sbi)
{
+ /* try to shrink extent cache when there is no enough memory */
+ f2fs_shrink_extent_tree(sbi, EXTENT_CACHE_SHRINK_NUMBER);
+
/* check the # of cached NAT entries and prefree segments */
if (try_to_free_nats(sbi, NAT_ENTRY_PER_BLOCK) ||
excess_prefree_segs(sbi) ||
@@ -549,7 +556,7 @@
end = __find_rev_next_zero_bit(dmap, max_blocks, start + 1);
- if (end - start < cpc->trim_minlen)
+ if (force && end - start < cpc->trim_minlen)
continue;
__add_discard_entry(sbi, cpc, start, end);
@@ -1164,6 +1171,7 @@
curseg = CURSEG_I(sbi, type);
mutex_lock(&curseg->curseg_mutex);
+ mutex_lock(&sit_i->sentry_lock);
/* direct_io'ed data is aligned to the segment for better performance */
if (direct_io && curseg->next_blkoff)
@@ -1178,7 +1186,6 @@
*/
__add_sum_entry(sbi, type, sum);
- mutex_lock(&sit_i->sentry_lock);
__refresh_next_blkoff(sbi, curseg);
stat_inc_block_count(sbi, curseg);
@@ -1730,6 +1737,9 @@
mutex_lock(&curseg->curseg_mutex);
mutex_lock(&sit_i->sentry_lock);
+ if (!sit_i->dirty_sentries)
+ goto out;
+
/*
* add and account sit entries of dirty bitmap in sit entry
* set temporarily
@@ -1744,9 +1754,6 @@
if (!__has_cursum_space(sum, sit_i->dirty_sentries, SIT_JOURNAL))
remove_sits_in_journal(sbi);
- if (!sit_i->dirty_sentries)
- goto out;
-
/*
* there are two steps to flush sit entries:
* #1, flush sit entries to journal in current cold data summary block.
diff --git a/fs/f2fs/segment.h b/fs/f2fs/segment.h
index 7fd3511..85d7fa7 100644
--- a/fs/f2fs/segment.h
+++ b/fs/f2fs/segment.h
@@ -336,7 +336,8 @@
clear_bit(segno, free_i->free_segmap);
free_i->free_segments++;
- next = find_next_bit(free_i->free_segmap, MAIN_SEGS(sbi), start_segno);
+ next = find_next_bit(free_i->free_segmap,
+ start_segno + sbi->segs_per_sec, start_segno);
if (next >= start_segno + sbi->segs_per_sec) {
clear_bit(secno, free_i->free_secmap);
free_i->free_sections++;
diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
index f2fe666..160b883 100644
--- a/fs/f2fs/super.c
+++ b/fs/f2fs/super.c
@@ -57,6 +57,8 @@
Opt_flush_merge,
Opt_nobarrier,
Opt_fastboot,
+ Opt_extent_cache,
+ Opt_noinline_data,
Opt_err,
};
@@ -78,6 +80,8 @@
{Opt_flush_merge, "flush_merge"},
{Opt_nobarrier, "nobarrier"},
{Opt_fastboot, "fastboot"},
+ {Opt_extent_cache, "extent_cache"},
+ {Opt_noinline_data, "noinline_data"},
{Opt_err, NULL},
};
@@ -367,6 +371,12 @@
case Opt_fastboot:
set_opt(sbi, FASTBOOT);
break;
+ case Opt_extent_cache:
+ set_opt(sbi, EXTENT_CACHE);
+ break;
+ case Opt_noinline_data:
+ clear_opt(sbi, INLINE_DATA);
+ break;
default:
f2fs_msg(sb, KERN_ERR,
"Unrecognized mount option \"%s\" or missing value",
@@ -392,7 +402,7 @@
atomic_set(&fi->dirty_pages, 0);
fi->i_current_depth = 1;
fi->i_advise = 0;
- rwlock_init(&fi->ext.ext_lock);
+ rwlock_init(&fi->ext_lock);
init_rwsem(&fi->i_sem);
INIT_RADIX_TREE(&fi->inmem_root, GFP_NOFS);
INIT_LIST_HEAD(&fi->inmem_pages);
@@ -591,6 +601,8 @@
seq_puts(seq, ",disable_ext_identify");
if (test_opt(sbi, INLINE_DATA))
seq_puts(seq, ",inline_data");
+ else
+ seq_puts(seq, ",noinline_data");
if (test_opt(sbi, INLINE_DENTRY))
seq_puts(seq, ",inline_dentry");
if (!f2fs_readonly(sbi->sb) && test_opt(sbi, FLUSH_MERGE))
@@ -599,6 +611,8 @@
seq_puts(seq, ",nobarrier");
if (test_opt(sbi, FASTBOOT))
seq_puts(seq, ",fastboot");
+ if (test_opt(sbi, EXTENT_CACHE))
+ seq_puts(seq, ",extent_cache");
seq_printf(seq, ",active_logs=%u", sbi->active_logs);
return 0;
@@ -959,7 +973,7 @@
struct buffer_head *raw_super_buf;
struct inode *root;
long err = -EINVAL;
- bool retry = true;
+ bool retry = true, need_fsck = false;
char *options = NULL;
int i;
@@ -984,6 +998,7 @@
sbi->active_logs = NR_CURSEG_TYPE;
set_opt(sbi, BG_GC);
+ set_opt(sbi, INLINE_DATA);
#ifdef CONFIG_F2FS_FS_XATTR
set_opt(sbi, XATTR_USER);
@@ -1020,7 +1035,6 @@
sbi->raw_super = raw_super;
sbi->raw_super_buf = raw_super_buf;
mutex_init(&sbi->gc_mutex);
- mutex_init(&sbi->writepages);
mutex_init(&sbi->cp_mutex);
init_rwsem(&sbi->node_write);
clear_sbi_flag(sbi, SBI_POR_DOING);
@@ -1072,6 +1086,8 @@
INIT_LIST_HEAD(&sbi->dir_inode_list);
spin_lock_init(&sbi->dir_inode_lock);
+ init_extent_cache_info(sbi);
+
init_ino_entry_info(sbi);
/* setup f2fs internal modules */
@@ -1146,9 +1162,6 @@
if (err)
goto free_proc;
- if (!retry)
- set_sbi_flag(sbi, SBI_NEED_FSCK);
-
/* recover fsynced data */
if (!test_opt(sbi, DISABLE_ROLL_FORWARD)) {
/*
@@ -1160,8 +1173,13 @@
err = -EROFS;
goto free_kobj;
}
+
+ if (need_fsck)
+ set_sbi_flag(sbi, SBI_NEED_FSCK);
+
err = recover_fsync_data(sbi);
if (err) {
+ need_fsck = true;
f2fs_msg(sb, KERN_ERR,
"Cannot recover all fsync data errno=%ld", err);
goto free_kobj;
@@ -1212,7 +1230,7 @@
/* give only one another chance */
if (retry) {
- retry = 0;
+ retry = false;
shrink_dcache_sb(sb);
goto try_onemore;
}
@@ -1278,10 +1296,13 @@
err = create_checkpoint_caches();
if (err)
goto free_segment_manager_caches;
+ err = create_extent_cache();
+ if (err)
+ goto free_checkpoint_caches;
f2fs_kset = kset_create_and_add("f2fs", NULL, fs_kobj);
if (!f2fs_kset) {
err = -ENOMEM;
- goto free_checkpoint_caches;
+ goto free_extent_cache;
}
err = register_filesystem(&f2fs_fs_type);
if (err)
@@ -1292,6 +1313,8 @@
free_kset:
kset_unregister(f2fs_kset);
+free_extent_cache:
+ destroy_extent_cache();
free_checkpoint_caches:
destroy_checkpoint_caches();
free_segment_manager_caches:
@@ -1309,6 +1332,7 @@
remove_proc_entry("fs/f2fs", NULL);
f2fs_destroy_root_stats();
unregister_filesystem(&f2fs_fs_type);
+ destroy_extent_cache();
destroy_checkpoint_caches();
destroy_segment_manager_caches();
destroy_node_manager_caches();
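The two options added above, "extent_cache" and "noinline_data", are ordinary -o mount options parsed by the match_table in super.c. A minimal sketch of enabling them from user space, with device and mount point as placeholders:

#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
        /* /dev/vdb and /mnt/f2fs are placeholders for a real f2fs device
         * and mount point; the option string is what super.c parses above. */
        if (mount("/dev/vdb", "/mnt/f2fs", "f2fs", 0,
                  "extent_cache,noinline_data") < 0) {
                perror("mount");
                return 1;
        }
        return 0;
}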
diff --git a/fs/f2fs/xattr.c b/fs/f2fs/xattr.c
index 5072bf9..b0fd2f2 100644
--- a/fs/f2fs/xattr.c
+++ b/fs/f2fs/xattr.c
@@ -135,7 +135,8 @@
if (strcmp(name, "") != 0)
return -EINVAL;
- *((char *)buffer) = F2FS_I(inode)->i_advise;
+ if (buffer)
+ *((char *)buffer) = F2FS_I(inode)->i_advise;
return sizeof(char);
}
@@ -152,6 +153,7 @@
return -EINVAL;
F2FS_I(inode)->i_advise |= *(char *)value;
+ mark_inode_dirty(inode);
return 0;
}
diff --git a/fs/fs_pin.c b/fs/fs_pin.c
index b06c987..611b540 100644
--- a/fs/fs_pin.c
+++ b/fs/fs_pin.c
@@ -9,8 +9,8 @@
void pin_remove(struct fs_pin *pin)
{
spin_lock(&pin_lock);
- hlist_del(&pin->m_list);
- hlist_del(&pin->s_list);
+ hlist_del_init(&pin->m_list);
+ hlist_del_init(&pin->s_list);
spin_unlock(&pin_lock);
spin_lock_irq(&pin->wait.lock);
pin->done = 1;
diff --git a/fs/namespace.c b/fs/namespace.c
index 82ef140..1f4f9da 100644
--- a/fs/namespace.c
+++ b/fs/namespace.c
@@ -632,14 +632,17 @@
*/
struct mount *__lookup_mnt_last(struct vfsmount *mnt, struct dentry *dentry)
{
- struct mount *p, *res;
- res = p = __lookup_mnt(mnt, dentry);
+ struct mount *p, *res = NULL;
+ p = __lookup_mnt(mnt, dentry);
if (!p)
goto out;
+ if (!(p->mnt.mnt_flags & MNT_UMOUNT))
+ res = p;
hlist_for_each_entry_continue(p, mnt_hash) {
if (&p->mnt_parent->mnt != mnt || p->mnt_mountpoint != dentry)
break;
- res = p;
+ if (!(p->mnt.mnt_flags & MNT_UMOUNT))
+ res = p;
}
out:
return res;
@@ -795,10 +798,8 @@
/*
* vfsmount lock must be held for write
*/
-static void detach_mnt(struct mount *mnt, struct path *old_path)
+static void unhash_mnt(struct mount *mnt)
{
- old_path->dentry = mnt->mnt_mountpoint;
- old_path->mnt = &mnt->mnt_parent->mnt;
mnt->mnt_parent = mnt;
mnt->mnt_mountpoint = mnt->mnt.mnt_root;
list_del_init(&mnt->mnt_child);
@@ -811,6 +812,26 @@
/*
* vfsmount lock must be held for write
*/
+static void detach_mnt(struct mount *mnt, struct path *old_path)
+{
+ old_path->dentry = mnt->mnt_mountpoint;
+ old_path->mnt = &mnt->mnt_parent->mnt;
+ unhash_mnt(mnt);
+}
+
+/*
+ * vfsmount lock must be held for write
+ */
+static void umount_mnt(struct mount *mnt)
+{
+ /* old mountpoint will be dropped when we can do that */
+ mnt->mnt_ex_mountpoint = mnt->mnt_mountpoint;
+ unhash_mnt(mnt);
+}
+
+/*
+ * vfsmount lock must be held for write
+ */
void mnt_set_mountpoint(struct mount *mnt,
struct mountpoint *mp,
struct mount *child_mnt)
@@ -1078,6 +1099,13 @@
rcu_read_unlock();
list_del(&mnt->mnt_instance);
+
+ if (unlikely(!list_empty(&mnt->mnt_mounts))) {
+ struct mount *p, *tmp;
+ list_for_each_entry_safe(p, tmp, &mnt->mnt_mounts, mnt_child) {
+ umount_mnt(p);
+ }
+ }
unlock_mount_hash();
if (likely(!(mnt->mnt.mnt_flags & MNT_INTERNAL))) {
@@ -1298,17 +1326,15 @@
static void namespace_unlock(void)
{
- struct hlist_head head = unmounted;
+ struct hlist_head head;
- if (likely(hlist_empty(&head))) {
- up_write(&namespace_sem);
- return;
- }
+ hlist_move_list(&unmounted, &head);
- head.first->pprev = &head.first;
- INIT_HLIST_HEAD(&unmounted);
up_write(&namespace_sem);
+ if (likely(hlist_empty(&head)))
+ return;
+
synchronize_rcu();
group_pin_kill(&head);
@@ -1319,49 +1345,63 @@
down_write(&namespace_sem);
}
+enum umount_tree_flags {
+ UMOUNT_SYNC = 1,
+ UMOUNT_PROPAGATE = 2,
+ UMOUNT_CONNECTED = 4,
+};
/*
* mount_lock must be held
* namespace_sem must be held for write
- * how = 0 => just this tree, don't propagate
- * how = 1 => propagate; we know that nobody else has reference to any victims
- * how = 2 => lazy umount
*/
-void umount_tree(struct mount *mnt, int how)
+static void umount_tree(struct mount *mnt, enum umount_tree_flags how)
{
- HLIST_HEAD(tmp_list);
+ LIST_HEAD(tmp_list);
struct mount *p;
+ if (how & UMOUNT_PROPAGATE)
+ propagate_mount_unlock(mnt);
+
+ /* Gather the mounts to umount */
for (p = mnt; p; p = next_mnt(p, mnt)) {
- hlist_del_init_rcu(&p->mnt_hash);
- hlist_add_head(&p->mnt_hash, &tmp_list);
+ p->mnt.mnt_flags |= MNT_UMOUNT;
+ list_move(&p->mnt_list, &tmp_list);
}
- hlist_for_each_entry(p, &tmp_list, mnt_hash)
+ /* Hide the mounts from mnt_mounts */
+ list_for_each_entry(p, &tmp_list, mnt_list) {
list_del_init(&p->mnt_child);
+ }
- if (how)
+ /* Add propogated mounts to the tmp_list */
+ if (how & UMOUNT_PROPAGATE)
propagate_umount(&tmp_list);
- while (!hlist_empty(&tmp_list)) {
- p = hlist_entry(tmp_list.first, struct mount, mnt_hash);
- hlist_del_init_rcu(&p->mnt_hash);
+ while (!list_empty(&tmp_list)) {
+ bool disconnect;
+ p = list_first_entry(&tmp_list, struct mount, mnt_list);
list_del_init(&p->mnt_expire);
list_del_init(&p->mnt_list);
__touch_mnt_namespace(p->mnt_ns);
p->mnt_ns = NULL;
- if (how < 2)
+ if (how & UMOUNT_SYNC)
p->mnt.mnt_flags |= MNT_SYNC_UMOUNT;
- pin_insert_group(&p->mnt_umount, &p->mnt_parent->mnt, &unmounted);
+ disconnect = !(((how & UMOUNT_CONNECTED) &&
+ mnt_has_parent(p) &&
+ (p->mnt_parent->mnt.mnt_flags & MNT_UMOUNT)) ||
+ IS_MNT_LOCKED_AND_LAZY(p));
+
+ pin_insert_group(&p->mnt_umount, &p->mnt_parent->mnt,
+ disconnect ? &unmounted : NULL);
if (mnt_has_parent(p)) {
- hlist_del_init(&p->mnt_mp_list);
- put_mountpoint(p->mnt_mp);
mnt_add_count(p->mnt_parent, -1);
- /* old mountpoint will be dropped when we can do that */
- p->mnt_ex_mountpoint = p->mnt_mountpoint;
- p->mnt_mountpoint = p->mnt.mnt_root;
- p->mnt_parent = p;
- p->mnt_mp = NULL;
+ if (!disconnect) {
+ /* Don't forget about p */
+ list_add_tail(&p->mnt_child, &p->mnt_parent->mnt_mounts);
+ } else {
+ umount_mnt(p);
+ }
}
change_mnt_propagation(p, MS_PRIVATE);
}
@@ -1447,14 +1487,14 @@
if (flags & MNT_DETACH) {
if (!list_empty(&mnt->mnt_list))
- umount_tree(mnt, 2);
+ umount_tree(mnt, UMOUNT_PROPAGATE);
retval = 0;
} else {
shrink_submounts(mnt);
retval = -EBUSY;
if (!propagate_mount_busy(mnt, 2)) {
if (!list_empty(&mnt->mnt_list))
- umount_tree(mnt, 1);
+ umount_tree(mnt, UMOUNT_PROPAGATE|UMOUNT_SYNC);
retval = 0;
}
}
@@ -1480,13 +1520,20 @@
namespace_lock();
mp = lookup_mountpoint(dentry);
- if (!mp)
+ if (IS_ERR_OR_NULL(mp))
goto out_unlock;
lock_mount_hash();
while (!hlist_empty(&mp->m_list)) {
mnt = hlist_entry(mp->m_list.first, struct mount, mnt_mp_list);
- umount_tree(mnt, 2);
+ if (mnt->mnt.mnt_flags & MNT_UMOUNT) {
+ struct mount *p, *tmp;
+ list_for_each_entry_safe(p, tmp, &mnt->mnt_mounts, mnt_child) {
+ hlist_add_head(&p->mnt_umount.s_list, &unmounted);
+ umount_mnt(p);
+ }
+ }
+ else umount_tree(mnt, UMOUNT_CONNECTED);
}
unlock_mount_hash();
put_mountpoint(mp);
@@ -1648,7 +1695,7 @@
out:
if (res) {
lock_mount_hash();
- umount_tree(res, 0);
+ umount_tree(res, UMOUNT_SYNC);
unlock_mount_hash();
}
return q;
@@ -1660,8 +1707,11 @@
{
struct mount *tree;
namespace_lock();
- tree = copy_tree(real_mount(path->mnt), path->dentry,
- CL_COPY_ALL | CL_PRIVATE);
+ if (!check_mnt(real_mount(path->mnt)))
+ tree = ERR_PTR(-EINVAL);
+ else
+ tree = copy_tree(real_mount(path->mnt), path->dentry,
+ CL_COPY_ALL | CL_PRIVATE);
namespace_unlock();
if (IS_ERR(tree))
return ERR_CAST(tree);
@@ -1672,7 +1722,7 @@
{
namespace_lock();
lock_mount_hash();
- umount_tree(real_mount(mnt), 0);
+ umount_tree(real_mount(mnt), UMOUNT_SYNC);
unlock_mount_hash();
namespace_unlock();
}
@@ -1855,7 +1905,7 @@
out_cleanup_ids:
while (!hlist_empty(&tree_list)) {
child = hlist_entry(tree_list.first, struct mount, mnt_hash);
- umount_tree(child, 0);
+ umount_tree(child, UMOUNT_SYNC);
}
unlock_mount_hash();
cleanup_group_ids(source_mnt, NULL);
@@ -2035,7 +2085,7 @@
err = graft_tree(mnt, parent, mp);
if (err) {
lock_mount_hash();
- umount_tree(mnt, 0);
+ umount_tree(mnt, UMOUNT_SYNC);
unlock_mount_hash();
}
out2:
@@ -2406,7 +2456,7 @@
while (!list_empty(&graveyard)) {
mnt = list_first_entry(&graveyard, struct mount, mnt_expire);
touch_mnt_namespace(mnt->mnt_ns);
- umount_tree(mnt, 1);
+ umount_tree(mnt, UMOUNT_PROPAGATE|UMOUNT_SYNC);
}
unlock_mount_hash();
namespace_unlock();
@@ -2477,7 +2527,7 @@
m = list_first_entry(&graveyard, struct mount,
mnt_expire);
touch_mnt_namespace(m->mnt_ns);
- umount_tree(m, 1);
+ umount_tree(m, UMOUNT_PROPAGATE|UMOUNT_SYNC);
}
}
}
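For reference, the MNT_DETACH branch rewritten above is the one taken by a lazy unmount from user space, which now reaches umount_tree() as UMOUNT_PROPAGATE rather than the old "how = 2". A small sketch of triggering it (the path is a placeholder):

#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
        /* Lazy unmount: do_umount() sees MNT_DETACH and calls
         * umount_tree(mnt, UMOUNT_PROPAGATE) as in the hunk above. */
        if (umount2("/mnt/scratch", MNT_DETACH) < 0) {
                perror("umount2");
                return 1;
        }
        return 0;
}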
diff --git a/fs/ocfs2/cluster/heartbeat.c b/fs/ocfs2/cluster/heartbeat.c
index 8e19b9d..16eff45 100644
--- a/fs/ocfs2/cluster/heartbeat.c
+++ b/fs/ocfs2/cluster/heartbeat.c
@@ -1312,9 +1312,7 @@
int ret = -ENOMEM;
o2hb_debug_dir = debugfs_create_dir(O2HB_DEBUG_DIR, NULL);
- if (IS_ERR_OR_NULL(o2hb_debug_dir)) {
- ret = o2hb_debug_dir ?
- PTR_ERR(o2hb_debug_dir) : -ENOMEM;
+ if (!o2hb_debug_dir) {
mlog_errno(ret);
goto bail;
}
@@ -1327,9 +1325,7 @@
sizeof(o2hb_live_node_bitmap),
O2NM_MAX_NODES,
o2hb_live_node_bitmap);
- if (IS_ERR_OR_NULL(o2hb_debug_livenodes)) {
- ret = o2hb_debug_livenodes ?
- PTR_ERR(o2hb_debug_livenodes) : -ENOMEM;
+ if (!o2hb_debug_livenodes) {
mlog_errno(ret);
goto bail;
}
@@ -1342,9 +1338,7 @@
sizeof(o2hb_live_region_bitmap),
O2NM_MAX_REGIONS,
o2hb_live_region_bitmap);
- if (IS_ERR_OR_NULL(o2hb_debug_liveregions)) {
- ret = o2hb_debug_liveregions ?
- PTR_ERR(o2hb_debug_liveregions) : -ENOMEM;
+ if (!o2hb_debug_liveregions) {
mlog_errno(ret);
goto bail;
}
@@ -1358,9 +1352,7 @@
sizeof(o2hb_quorum_region_bitmap),
O2NM_MAX_REGIONS,
o2hb_quorum_region_bitmap);
- if (IS_ERR_OR_NULL(o2hb_debug_quorumregions)) {
- ret = o2hb_debug_quorumregions ?
- PTR_ERR(o2hb_debug_quorumregions) : -ENOMEM;
+ if (!o2hb_debug_quorumregions) {
mlog_errno(ret);
goto bail;
}
@@ -1374,9 +1366,7 @@
sizeof(o2hb_failed_region_bitmap),
O2NM_MAX_REGIONS,
o2hb_failed_region_bitmap);
- if (IS_ERR_OR_NULL(o2hb_debug_failedregions)) {
- ret = o2hb_debug_failedregions ?
- PTR_ERR(o2hb_debug_failedregions) : -ENOMEM;
+ if (!o2hb_debug_failedregions) {
mlog_errno(ret);
goto bail;
}
@@ -2010,8 +2000,7 @@
reg->hr_debug_dir =
debugfs_create_dir(config_item_name(&reg->hr_item), dir);

- if (IS_ERR_OR_NULL(reg->hr_debug_dir)) {
- ret = reg->hr_debug_dir ? PTR_ERR(reg->hr_debug_dir) : -ENOMEM;
+ if (!reg->hr_debug_dir) {
mlog_errno(ret);
goto bail;
}
@@ -2024,9 +2013,7 @@
O2HB_DB_TYPE_REGION_LIVENODES,
sizeof(reg->hr_live_node_bitmap),
O2NM_MAX_NODES, reg);
- if (IS_ERR_OR_NULL(reg->hr_debug_livenodes)) {
- ret = reg->hr_debug_livenodes ?
- PTR_ERR(reg->hr_debug_livenodes) : -ENOMEM;
+ if (!reg->hr_debug_livenodes) {
mlog_errno(ret);
goto bail;
}
@@ -2038,9 +2025,7 @@
sizeof(*(reg->hr_db_regnum)),
O2HB_DB_TYPE_REGION_NUMBER,
0, O2NM_MAX_NODES, reg);
- if (IS_ERR_OR_NULL(reg->hr_debug_regnum)) {
- ret = reg->hr_debug_regnum ?
- PTR_ERR(reg->hr_debug_regnum) : -ENOMEM;
+ if (!reg->hr_debug_regnum) {
mlog_errno(ret);
goto bail;
}
@@ -2052,9 +2037,7 @@
sizeof(*(reg->hr_db_elapsed_time)),
O2HB_DB_TYPE_REGION_ELAPSED_TIME,
0, 0, reg);
- if (IS_ERR_OR_NULL(reg->hr_debug_elapsed_time)) {
- ret = reg->hr_debug_elapsed_time ?
- PTR_ERR(reg->hr_debug_elapsed_time) : -ENOMEM;
+ if (!reg->hr_debug_elapsed_time) {
mlog_errno(ret);
goto bail;
}
@@ -2066,16 +2049,13 @@
sizeof(*(reg->hr_db_pinned)),
O2HB_DB_TYPE_REGION_PINNED,
0, 0, reg);
- if (IS_ERR_OR_NULL(reg->hr_debug_pinned)) {
- ret = reg->hr_debug_pinned ?
- PTR_ERR(reg->hr_debug_pinned) : -ENOMEM;
+ if (!reg->hr_debug_pinned) {
mlog_errno(ret);
goto bail;
}
- return 0;
+ ret = 0;
bail:
- debugfs_remove_recursive(reg->hr_debug_dir);
return ret;
}
diff --git a/fs/ocfs2/dlmglue.c b/fs/ocfs2/dlmglue.c
index 956edf6..8b23aa2 100644
--- a/fs/ocfs2/dlmglue.c
+++ b/fs/ocfs2/dlmglue.c
@@ -2959,7 +2959,7 @@
osb->osb_debug_root,
osb,
&ocfs2_dlm_debug_fops);
- if (IS_ERR_OR_NULL(dlm_debug->d_locking_state)) {
+ if (!dlm_debug->d_locking_state) {
ret = -EINVAL;
mlog(ML_ERROR,
"Unable to create locking state debugfs file.\n");
diff --git a/fs/ocfs2/super.c b/fs/ocfs2/super.c
index 837ddce..403c566 100644
--- a/fs/ocfs2/super.c
+++ b/fs/ocfs2/super.c
@@ -1112,7 +1112,7 @@
osb->osb_debug_root = debugfs_create_dir(osb->uuid_str,
ocfs2_debugfs_root);
- if (IS_ERR_OR_NULL(osb->osb_debug_root)) {
+ if (!osb->osb_debug_root) {
status = -EINVAL;
mlog(ML_ERROR, "Unable to create per-mount debugfs root.\n");
goto read_super_error;
@@ -1122,7 +1122,7 @@
osb->osb_debug_root,
osb,
&ocfs2_osb_debug_fops);
- if (IS_ERR_OR_NULL(osb->osb_ctxt)) {
+ if (!osb->osb_ctxt) {
status = -EINVAL;
mlog_errno(status);
goto read_super_error;
@@ -1606,9 +1606,8 @@
}
ocfs2_debugfs_root = debugfs_create_dir("ocfs2", NULL);
- if (IS_ERR_OR_NULL(ocfs2_debugfs_root)) {
- status = ocfs2_debugfs_root ?
- PTR_ERR(ocfs2_debugfs_root) : -ENOMEM;
+ if (!ocfs2_debugfs_root) {
+ status = -ENOMEM;
mlog(ML_ERROR, "Unable to create ocfs2 debugfs root.\n");
goto out4;
}
diff --git a/fs/pnode.c b/fs/pnode.c
index 260ac8f..6367e1e 100644
--- a/fs/pnode.c
+++ b/fs/pnode.c
@@ -362,6 +362,46 @@
}
/*
+ * Clear MNT_LOCKED when it can be shown to be safe.
+ *
+ * mount_lock lock must be held for write
+ */
+void propagate_mount_unlock(struct mount *mnt)
+{
+ struct mount *parent = mnt->mnt_parent;
+ struct mount *m, *child;
+
+ BUG_ON(parent == mnt);
+
+ for (m = propagation_next(parent, parent); m;
+ m = propagation_next(m, parent)) {
+ child = __lookup_mnt_last(&m->mnt, mnt->mnt_mountpoint);
+ if (child)
+ child->mnt.mnt_flags &= ~MNT_LOCKED;
+ }
+}
+
+/*
+ * Mark all mounts that the MNT_LOCKED logic will allow to be unmounted.
+ */
+static void mark_umount_candidates(struct mount *mnt)
+{
+ struct mount *parent = mnt->mnt_parent;
+ struct mount *m;
+
+ BUG_ON(parent == mnt);
+
+ for (m = propagation_next(parent, parent); m;
+ m = propagation_next(m, parent)) {
+ struct mount *child = __lookup_mnt_last(&m->mnt,
+ mnt->mnt_mountpoint);
+ if (child && (!IS_MNT_LOCKED(child) || IS_MNT_MARKED(m))) {
+ SET_MNT_MARK(child);
+ }
+ }
+}
+
+/*
* NOTE: unmounting 'mnt' naturally propagates to all other mounts its
* parent propagates to.
*/
@@ -378,13 +418,16 @@
struct mount *child = __lookup_mnt_last(&m->mnt,
mnt->mnt_mountpoint);
/*
- * umount the child only if the child has no
- * other children
+ * umount the child only if the child has no children
+ * and the child is marked safe to unmount.
*/
- if (child && list_empty(&child->mnt_mounts)) {
+ if (!child || !IS_MNT_MARKED(child))
+ continue;
+ CLEAR_MNT_MARK(child);
+ if (list_empty(&child->mnt_mounts)) {
list_del_init(&child->mnt_child);
- hlist_del_init_rcu(&child->mnt_hash);
- hlist_add_before_rcu(&child->mnt_hash, &mnt->mnt_hash);
+ child->mnt.mnt_flags |= MNT_UMOUNT;
+ list_move_tail(&child->mnt_list, &mnt->mnt_list);
}
}
}
@@ -396,11 +439,14 @@
*
* vfsmount lock must be held for write
*/
-int propagate_umount(struct hlist_head *list)
+int propagate_umount(struct list_head *list)
{
struct mount *mnt;
- hlist_for_each_entry(mnt, list, mnt_hash)
+ list_for_each_entry_reverse(mnt, list, mnt_list)
+ mark_umount_candidates(mnt);
+
+ list_for_each_entry(mnt, list, mnt_list)
__propagate_umount(mnt);
return 0;
}
diff --git a/fs/pnode.h b/fs/pnode.h
index 4a24635..7114ce6 100644
--- a/fs/pnode.h
+++ b/fs/pnode.h
@@ -19,6 +19,9 @@
#define IS_MNT_MARKED(m) ((m)->mnt.mnt_flags & MNT_MARKED)
#define SET_MNT_MARK(m) ((m)->mnt.mnt_flags |= MNT_MARKED)
#define CLEAR_MNT_MARK(m) ((m)->mnt.mnt_flags &= ~MNT_MARKED)
+#define IS_MNT_LOCKED(m) ((m)->mnt.mnt_flags & MNT_LOCKED)
+#define IS_MNT_LOCKED_AND_LAZY(m) \
+ (((m)->mnt.mnt_flags & (MNT_LOCKED|MNT_SYNC_UMOUNT)) == MNT_LOCKED)
#define CL_EXPIRE 0x01
#define CL_SLAVE 0x02
@@ -40,14 +43,14 @@
void change_mnt_propagation(struct mount *, int);
int propagate_mnt(struct mount *, struct mountpoint *, struct mount *,
struct hlist_head *);
-int propagate_umount(struct hlist_head *);
+int propagate_umount(struct list_head *);
int propagate_mount_busy(struct mount *, int);
+void propagate_mount_unlock(struct mount *);
void mnt_release_group_id(struct mount *);
int get_dominating_id(struct mount *mnt, const struct path *root);
unsigned int mnt_get_count(struct mount *mnt);
void mnt_set_mountpoint(struct mount *, struct mountpoint *,
struct mount *);
-void umount_tree(struct mount *, int);
struct mount *copy_tree(struct mount *, struct dentry *, int);
bool is_path_reachable(struct mount *, struct dentry *,
const struct path *root);
diff --git a/include/asm-generic/gpio.h b/include/asm-generic/gpio.h
index 383ade1..9bb0d11 100644
--- a/include/asm-generic/gpio.h
+++ b/include/asm-generic/gpio.h
@@ -5,7 +5,6 @@
#include <linux/types.h>
#include <linux/errno.h>
#include <linux/of.h>
-#include <linux/pinctrl/pinctrl.h>
#ifdef CONFIG_GPIOLIB
@@ -139,53 +138,6 @@
gpiod_unexport(gpio_to_desc(gpio));
}
-#ifdef CONFIG_PINCTRL
-
-/**
- * struct gpio_pin_range - pin range controlled by a gpio chip
- * @head: list for maintaining set of pin ranges, used internally
- * @pctldev: pinctrl device which handles corresponding pins
- * @range: actual range of pins controlled by a gpio controller
- */
-
-struct gpio_pin_range {
- struct list_head node;
- struct pinctrl_dev *pctldev;
- struct pinctrl_gpio_range range;
-};
-
-int gpiochip_add_pin_range(struct gpio_chip *chip, const char *pinctl_name,
- unsigned int gpio_offset, unsigned int pin_offset,
- unsigned int npins);
-int gpiochip_add_pingroup_range(struct gpio_chip *chip,
- struct pinctrl_dev *pctldev,
- unsigned int gpio_offset, const char *pin_group);
-void gpiochip_remove_pin_ranges(struct gpio_chip *chip);
-
-#else
-
-static inline int
-gpiochip_add_pin_range(struct gpio_chip *chip, const char *pinctl_name,
- unsigned int gpio_offset, unsigned int pin_offset,
- unsigned int npins)
-{
- return 0;
-}
-static inline int
-gpiochip_add_pingroup_range(struct gpio_chip *chip,
- struct pinctrl_dev *pctldev,
- unsigned int gpio_offset, const char *pin_group)
-{
- return 0;
-}
-
-static inline void
-gpiochip_remove_pin_ranges(struct gpio_chip *chip)
-{
-}
-
-#endif /* CONFIG_PINCTRL */
-
#else /* !CONFIG_GPIOLIB */
static inline bool gpio_is_valid(int number)
diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
index 5c48c58..8bd374d 100644
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -153,6 +153,14 @@
#define TRACE_SYSCALLS()
#endif
+#ifdef CONFIG_SERIAL_EARLYCON
+#define EARLYCON_TABLE() STRUCT_ALIGN(); \
+ VMLINUX_SYMBOL(__earlycon_table) = .; \
+ *(__earlycon_table) \
+ *(__earlycon_table_end)
+#else
+#define EARLYCON_TABLE()
+#endif
#define ___OF_TABLE(cfg, name) _OF_TABLE_##cfg(name)
#define __OF_TABLE(cfg, name) ___OF_TABLE(cfg, name)
@@ -508,6 +516,7 @@
CPUIDLE_METHOD_OF_TABLES() \
KERNEL_DTB() \
IRQCHIP_OF_MATCH_TABLE() \
+ EARLYCON_TABLE() \
EARLYCON_OF_TABLES()
#define INIT_TEXT \
diff --git a/include/drm/bridge/dw_hdmi.h b/include/drm/bridge/dw_hdmi.h
index 5a4f490..de13bfc 100644
--- a/include/drm/bridge/dw_hdmi.h
+++ b/include/drm/bridge/dw_hdmi.h
@@ -38,17 +38,18 @@
u16 curr[DW_HDMI_RES_MAX];
};
-struct dw_hdmi_sym_term {
+struct dw_hdmi_phy_config {
unsigned long mpixelclock;
u16 sym_ctr; /*clock symbol and transmitter control*/
u16 term; /*transmission termination value*/
+ u16 vlev_ctr; /* voltage level control */
};
struct dw_hdmi_plat_data {
enum dw_hdmi_devtype dev_type;
const struct dw_hdmi_mpll_config *mpll_cfg;
const struct dw_hdmi_curr_ctrl *cur_ctr;
- const struct dw_hdmi_sym_term *sym_term;
+ const struct dw_hdmi_phy_config *phy_config;
enum drm_mode_status (*mode_valid)(struct drm_connector *connector,
struct drm_display_mode *mode);
};
diff --git a/include/drm/drmP.h b/include/drm/drmP.h
index e928625..62c40777 100644
--- a/include/drm/drmP.h
+++ b/include/drm/drmP.h
@@ -104,6 +104,9 @@
* PRIME: used in the prime code.
* This is the category used by the DRM_DEBUG_PRIME() macro.
*
+ * ATOMIC: used in the atomic code.
+ * This is the category used by the DRM_DEBUG_ATOMIC() macro.
+ *
* Enabling verbose debug messages is done through the drm.debug parameter,
* each category being enabled by a bit.
*
@@ -121,6 +124,7 @@
#define DRM_UT_DRIVER 0x02
#define DRM_UT_KMS 0x04
#define DRM_UT_PRIME 0x08
+#define DRM_UT_ATOMIC 0x10
extern __printf(2, 3)
void drm_ut_debug_printk(const char *function_name,
@@ -207,6 +211,11 @@
if (unlikely(drm_debug & DRM_UT_PRIME)) \
drm_ut_debug_printk(__func__, fmt, ##args); \
} while (0)
+#define DRM_DEBUG_ATOMIC(fmt, args...) \
+ do { \
+ if (unlikely(drm_debug & DRM_UT_ATOMIC)) \
+ drm_ut_debug_printk(__func__, fmt, ##args); \
+ } while (0)
/*@}*/
@@ -244,7 +253,6 @@
unsigned int cmd;
int flags;
drm_ioctl_t *func;
- unsigned int cmd_drv;
const char *name;
};
@@ -253,8 +261,13 @@
* ioctl, for use by drm_ioctl().
*/
-#define DRM_IOCTL_DEF_DRV(ioctl, _func, _flags) \
- [DRM_IOCTL_NR(DRM_##ioctl)] = {.cmd = DRM_##ioctl, .func = _func, .flags = _flags, .cmd_drv = DRM_IOCTL_##ioctl, .name = #ioctl}
+#define DRM_IOCTL_DEF_DRV(ioctl, _func, _flags) \
+ [DRM_IOCTL_NR(DRM_IOCTL_##ioctl) - DRM_COMMAND_BASE] = { \
+ .cmd = DRM_IOCTL_##ioctl, \
+ .func = _func, \
+ .flags = _flags, \
+ .name = #ioctl \
+ }
/* Event queued up for userspace to read */
struct drm_pending_event {
@@ -922,6 +935,7 @@
extern void drm_vblank_off(struct drm_device *dev, int crtc);
extern void drm_vblank_on(struct drm_device *dev, int crtc);
extern void drm_crtc_vblank_off(struct drm_crtc *crtc);
+extern void drm_crtc_vblank_reset(struct drm_crtc *crtc);
extern void drm_crtc_vblank_on(struct drm_crtc *crtc);
extern void drm_vblank_cleanup(struct drm_device *dev);
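A short sketch of how a driver would use the new ATOMIC debug category; the hook name and message are illustrative. At run time the category is enabled through the drm.debug module parameter bit shown above, e.g. drm.debug=0x10 for DRM_UT_ATOMIC alone.

#include <drm/drmP.h>
#include <drm/drm_atomic.h>

/* Illustrative driver hook; the name and message are placeholders. */
static int my_atomic_check(struct drm_device *dev,
                           struct drm_atomic_state *state)
{
        DRM_DEBUG_ATOMIC("checking atomic state %p on %s\n",
                         state, dev->driver->name);
        /* driver-specific validation would go here */
        return 0;
}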
diff --git a/include/drm/drm_atomic.h b/include/drm/drm_atomic.h
index 51168a8..c1571034 100644
--- a/include/drm/drm_atomic.h
+++ b/include/drm/drm_atomic.h
@@ -75,4 +75,28 @@
int __must_check drm_atomic_commit(struct drm_atomic_state *state);
int __must_check drm_atomic_async_commit(struct drm_atomic_state *state);
+#define for_each_connector_in_state(state, connector, connector_state, __i) \
+ for ((__i) = 0; \
+ (connector) = (state)->connectors[__i], \
+ (connector_state) = (state)->connector_states[__i], \
+ (__i) < (state)->num_connector; \
+ (__i)++) \
+ if (connector)
+
+#define for_each_crtc_in_state(state, crtc, crtc_state, __i) \
+ for ((__i) = 0; \
+ (crtc) = (state)->crtcs[__i], \
+ (crtc_state) = (state)->crtc_states[__i], \
+ (__i) < (state)->dev->mode_config.num_crtc; \
+ (__i)++) \
+ if (crtc_state)
+
+#define for_each_plane_in_state(state, plane, plane_state, __i) \
+ for ((__i) = 0; \
+ (plane) = (state)->planes[__i], \
+ (plane_state) = (state)->plane_states[__i], \
+ (__i) < (state)->dev->mode_config.num_total_plane; \
+ (__i)++) \
+ if (plane_state)
+
#endif /* DRM_ATOMIC_H_ */
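A minimal sketch of what the new iterators are for: walking only the objects carried in a given atomic state, skipping empty slots. The helper function name is illustrative.

#include <drm/drmP.h>
#include <drm/drm_atomic.h>

/* Illustrative: log every CRTC touched by an atomic update. */
static void dump_crtc_states(struct drm_atomic_state *state)
{
        struct drm_crtc *crtc;
        struct drm_crtc_state *crtc_state;
        int i;

        for_each_crtc_in_state(state, crtc, crtc_state, i)
                DRM_DEBUG_ATOMIC("crtc %u: enable=%d\n",
                                 drm_crtc_index(crtc), crtc_state->enable);
}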
diff --git a/include/drm/drm_atomic_helper.h b/include/drm/drm_atomic_helper.h
index 8039d54..d665781 100644
--- a/include/drm/drm_atomic_helper.h
+++ b/include/drm/drm_atomic_helper.h
@@ -43,9 +43,9 @@
void drm_atomic_helper_wait_for_vblanks(struct drm_device *dev,
struct drm_atomic_state *old_state);
-void drm_atomic_helper_commit_pre_planes(struct drm_device *dev,
- struct drm_atomic_state *state);
-void drm_atomic_helper_commit_post_planes(struct drm_device *dev,
+void drm_atomic_helper_commit_modeset_disables(struct drm_device *dev,
+ struct drm_atomic_state *state);
+void drm_atomic_helper_commit_modeset_enables(struct drm_device *dev,
struct drm_atomic_state *old_state);
int drm_atomic_helper_prepare_planes(struct drm_device *dev,
@@ -87,20 +87,34 @@
/* default implementations for state handling */
void drm_atomic_helper_crtc_reset(struct drm_crtc *crtc);
+void __drm_atomic_helper_crtc_duplicate_state(struct drm_crtc *crtc,
+ struct drm_crtc_state *state);
struct drm_crtc_state *
drm_atomic_helper_crtc_duplicate_state(struct drm_crtc *crtc);
+void __drm_atomic_helper_crtc_destroy_state(struct drm_crtc *crtc,
+ struct drm_crtc_state *state);
void drm_atomic_helper_crtc_destroy_state(struct drm_crtc *crtc,
struct drm_crtc_state *state);
void drm_atomic_helper_plane_reset(struct drm_plane *plane);
+void __drm_atomic_helper_plane_duplicate_state(struct drm_plane *plane,
+ struct drm_plane_state *state);
struct drm_plane_state *
drm_atomic_helper_plane_duplicate_state(struct drm_plane *plane);
+void __drm_atomic_helper_plane_destroy_state(struct drm_plane *plane,
+ struct drm_plane_state *state);
void drm_atomic_helper_plane_destroy_state(struct drm_plane *plane,
struct drm_plane_state *state);
void drm_atomic_helper_connector_reset(struct drm_connector *connector);
+void
+__drm_atomic_helper_connector_duplicate_state(struct drm_connector *connector,
+ struct drm_connector_state *state);
struct drm_connector_state *
drm_atomic_helper_connector_duplicate_state(struct drm_connector *connector);
+void
+__drm_atomic_helper_connector_destroy_state(struct drm_connector *connector,
+ struct drm_connector_state *state);
void drm_atomic_helper_connector_destroy_state(struct drm_connector *connector,
struct drm_connector_state *state);
diff --git a/include/drm/drm_crtc.h b/include/drm/drm_crtc.h
index 920e21a..ca71c03 100644
--- a/include/drm/drm_crtc.h
+++ b/include/drm/drm_crtc.h
@@ -53,7 +53,6 @@
#define DRM_MODE_OBJECT_FB 0xfbfbfbfb
#define DRM_MODE_OBJECT_BLOB 0xbbbbbbbb
#define DRM_MODE_OBJECT_PLANE 0xeeeeeeee
-#define DRM_MODE_OBJECT_BRIDGE 0xbdbdbdbd
#define DRM_MODE_OBJECT_ANY 0
struct drm_mode_object {
@@ -202,6 +201,7 @@
const struct drm_framebuffer_funcs *funcs;
unsigned int pitches[4];
unsigned int offsets[4];
+ uint64_t modifier[4];
unsigned int width;
unsigned int height;
/* depth can be 15 or 16 */
@@ -466,7 +466,7 @@
int framedur_ns, linedur_ns, pixeldur_ns;
/* if you are using the helper */
- void *helper_private;
+ const void *helper_private;
struct drm_object_properties properties;
@@ -596,7 +596,7 @@
struct drm_crtc *crtc;
struct drm_bridge *bridge;
const struct drm_encoder_funcs *funcs;
- void *helper_private;
+ const void *helper_private;
};
/* should we poll this connector for connects and disconnects */
@@ -700,7 +700,7 @@
/* requested DPMS state */
int dpms;
- void *helper_private;
+ const void *helper_private;
/* forced on connector */
struct drm_cmdline_mode cmdline_mode;
@@ -829,6 +829,7 @@
* @possible_crtcs: pipes this plane can be bound to
* @format_types: array of formats supported by this plane
* @format_count: number of formats supported
+ * @format_default: driver hasn't supplied supported formats for the plane
* @crtc: currently bound CRTC
* @fb: currently bound fb
* @old_fb: Temporary tracking of the old fb while a modeset is ongoing. Used by
@@ -849,6 +850,7 @@
uint32_t possible_crtcs;
uint32_t *format_types;
uint32_t format_count;
+ bool format_default;
struct drm_crtc *crtc;
struct drm_framebuffer *fb;
@@ -861,7 +863,7 @@
enum drm_plane_type type;
- void *helper_private;
+ const void *helper_private;
struct drm_plane_state *state;
};
@@ -912,7 +914,7 @@
};
/**
- * struct struct drm_atomic_state - the global state object for atomic updates
+ * struct drm_atomic_state - the global state object for atomic updates
* @dev: parent DRM device
* @allow_modeset: allow full modeset
* @legacy_cursor_update: hint to enforce legacy cursor ioctl semantics
@@ -972,7 +974,7 @@
* struct drm_mode_config_funcs - basic driver provided mode setting functions
* @fb_create: create a new framebuffer object
* @output_poll_changed: function to handle output configuration changes
- * @atomic_check: check whether a give atomic state update is possible
+ * @atomic_check: check whether a given atomic state update is possible
* @atomic_commit: commit an atomic state update previously verified with
* atomic_check()
*
@@ -1155,6 +1157,9 @@
/* whether async page flip is supported or not */
bool async_page_flip;
+ /* whether the driver supports fb modifiers */
+ bool allow_fb_modifiers;
+
/* cursor size */
uint32_t cursor_width, cursor_height;
};
@@ -1259,6 +1264,8 @@
extern void drm_plane_cleanup(struct drm_plane *plane);
extern unsigned int drm_plane_index(struct drm_plane *plane);
extern void drm_plane_force_disable(struct drm_plane *plane);
+extern int drm_plane_check_pixel_format(const struct drm_plane *plane,
+ u32 format);
extern void drm_crtc_get_hv_timing(const struct drm_display_mode *mode,
int *hdisplay, int *vdisplay);
extern int drm_crtc_check_viewport(const struct drm_crtc *crtc,
diff --git a/include/drm/drm_crtc_helper.h b/include/drm/drm_crtc_helper.h
index c250a22..c8fc187 100644
--- a/include/drm/drm_crtc_helper.h
+++ b/include/drm/drm_crtc_helper.h
@@ -89,6 +89,7 @@
int (*mode_set)(struct drm_crtc *crtc, struct drm_display_mode *mode,
struct drm_display_mode *adjusted_mode, int x, int y,
struct drm_framebuffer *old_fb);
+ /* Actually set the mode for atomic helpers, optional */
void (*mode_set_nofb)(struct drm_crtc *crtc);
/* Move the crtc on the current fb to the given position *optional* */
@@ -119,7 +120,7 @@
* @mode_fixup: try to fixup proposed mode for this connector
* @prepare: part of the disable sequence, called before the CRTC modeset
* @commit: called after the CRTC modeset
- * @mode_set: set this mode
+ * @mode_set: set this mode, optional for atomic helpers
* @get_crtc: return CRTC that the encoder is currently attached to
* @detect: connection status detection
* @disable: disable encoder when not in use (overrides DPMS off)
@@ -196,19 +197,19 @@
static inline void drm_crtc_helper_add(struct drm_crtc *crtc,
const struct drm_crtc_helper_funcs *funcs)
{
- crtc->helper_private = (void *)funcs;
+ crtc->helper_private = funcs;
}
static inline void drm_encoder_helper_add(struct drm_encoder *encoder,
const struct drm_encoder_helper_funcs *funcs)
{
- encoder->helper_private = (void *)funcs;
+ encoder->helper_private = funcs;
}
static inline void drm_connector_helper_add(struct drm_connector *connector,
const struct drm_connector_helper_funcs *funcs)
{
- connector->helper_private = (void *)funcs;
+ connector->helper_private = funcs;
}
extern void drm_helper_resume_force_mode(struct drm_device *dev);
diff --git a/include/drm/drm_dp_helper.h b/include/drm/drm_dp_helper.h
index 7e25030..523f04c 100644
--- a/include/drm/drm_dp_helper.h
+++ b/include/drm/drm_dp_helper.h
@@ -42,6 +42,8 @@
* 1.2 formally includes both eDP and DPI definitions.
*/
+#define DP_AUX_MAX_PAYLOAD_BYTES 16
+
#define DP_AUX_I2C_WRITE 0x0
#define DP_AUX_I2C_READ 0x1
#define DP_AUX_I2C_STATUS 0x2
@@ -92,6 +94,15 @@
# define DP_MSA_TIMING_PAR_IGNORED (1 << 6) /* eDP */
# define DP_OUI_SUPPORT (1 << 7)
+#define DP_RECEIVE_PORT_0_CAP_0 0x008
+# define DP_LOCAL_EDID_PRESENT (1 << 1)
+# define DP_ASSOCIATED_TO_PRECEDING_PORT (1 << 2)
+
+#define DP_RECEIVE_PORT_0_BUFFER_SIZE 0x009
+
+#define DP_RECEIVE_PORT_1_CAP_0 0x00a
+#define DP_RECEIVE_PORT_1_BUFFER_SIZE 0x00b
+
#define DP_I2C_SPEED_CAP 0x00c /* DPI */
# define DP_I2C_SPEED_1K 0x01
# define DP_I2C_SPEED_5K 0x02
@@ -101,8 +112,19 @@
# define DP_I2C_SPEED_1M 0x20
#define DP_EDP_CONFIGURATION_CAP 0x00d /* XXX 1.2? */
+# define DP_ALTERNATE_SCRAMBLER_RESET_CAP (1 << 0)
+# define DP_FRAMING_CHANGE_CAP (1 << 1)
+# define DP_DPCD_DISPLAY_CONTROL_CAPABLE (1 << 3) /* edp v1.2 or higher */
+
#define DP_TRAINING_AUX_RD_INTERVAL 0x00e /* XXX 1.2? */
+#define DP_ADAPTER_CAP 0x00f /* 1.2 */
+# define DP_FORCE_LOAD_SENSE_CAP (1 << 0)
+# define DP_ALTERNATE_I2C_PATTERN_CAP (1 << 1)
+
+#define DP_SUPPORTED_LINK_RATES 0x010 /* eDP 1.4 */
+# define DP_MAX_SUPPORTED_RATES 8 /* 16-bit little-endian */
+
/* Multiple stream transport */
#define DP_FAUX_CAP 0x020 /* 1.2 */
# define DP_FAUX_CAP_1 (1 << 0)
@@ -110,10 +132,56 @@
#define DP_MSTM_CAP 0x021 /* 1.2 */
# define DP_MST_CAP (1 << 0)
+#define DP_NUMBER_OF_AUDIO_ENDPOINTS 0x022 /* 1.2 */
+
+/* AV_SYNC_DATA_BLOCK 1.2 */
+#define DP_AV_GRANULARITY 0x023
+# define DP_AG_FACTOR_MASK (0xf << 0)
+# define DP_AG_FACTOR_3MS (0 << 0)
+# define DP_AG_FACTOR_2MS (1 << 0)
+# define DP_AG_FACTOR_1MS (2 << 0)
+# define DP_AG_FACTOR_500US (3 << 0)
+# define DP_AG_FACTOR_200US (4 << 0)
+# define DP_AG_FACTOR_100US (5 << 0)
+# define DP_AG_FACTOR_10US (6 << 0)
+# define DP_AG_FACTOR_1US (7 << 0)
+# define DP_VG_FACTOR_MASK (0xf << 4)
+# define DP_VG_FACTOR_3MS (0 << 4)
+# define DP_VG_FACTOR_2MS (1 << 4)
+# define DP_VG_FACTOR_1MS (2 << 4)
+# define DP_VG_FACTOR_500US (3 << 4)
+# define DP_VG_FACTOR_200US (4 << 4)
+# define DP_VG_FACTOR_100US (5 << 4)
+
+#define DP_AUD_DEC_LAT0 0x024
+#define DP_AUD_DEC_LAT1 0x025
+
+#define DP_AUD_PP_LAT0 0x026
+#define DP_AUD_PP_LAT1 0x027
+
+#define DP_VID_INTER_LAT 0x028
+
+#define DP_VID_PROG_LAT 0x029
+
+#define DP_REP_LAT 0x02a
+
+#define DP_AUD_DEL_INS0 0x02b
+#define DP_AUD_DEL_INS1 0x02c
+#define DP_AUD_DEL_INS2 0x02d
+/* End of AV_SYNC_DATA_BLOCK */
+
+#define DP_RECEIVER_ALPM_CAP 0x02e /* eDP 1.4 */
+# define DP_ALPM_CAP (1 << 0)
+
+#define DP_SINK_DEVICE_AUX_FRAME_SYNC_CAP 0x02f /* eDP 1.4 */
+# define DP_AUX_FRAME_SYNC_CAP (1 << 0)
+
#define DP_GUID 0x030 /* 1.2 */
#define DP_PSR_SUPPORT 0x070 /* XXX 1.2? */
# define DP_PSR_IS_SUPPORTED 1
+# define DP_PSR2_IS_SUPPORTED 2 /* eDP 1.4 */
+
#define DP_PSR_CAPS 0x071 /* XXX 1.2? */
# define DP_PSR_NO_TRAIN_ON_EXIT 1
# define DP_PSR_SETUP_TIME_330 (0 << 1)
@@ -153,6 +221,7 @@
/* link configuration */
#define DP_LINK_BW_SET 0x100
+# define DP_LINK_RATE_TABLE 0x00 /* eDP 1.4 */
# define DP_LINK_BW_1_62 0x06
# define DP_LINK_BW_2_7 0x0a
# define DP_LINK_BW_5_4 0x14 /* 1.2 */
@@ -168,11 +237,12 @@
# define DP_TRAINING_PATTERN_3 3 /* 1.2 */
# define DP_TRAINING_PATTERN_MASK 0x3
-# define DP_LINK_QUAL_PATTERN_DISABLE (0 << 2)
-# define DP_LINK_QUAL_PATTERN_D10_2 (1 << 2)
-# define DP_LINK_QUAL_PATTERN_ERROR_RATE (2 << 2)
-# define DP_LINK_QUAL_PATTERN_PRBS7 (3 << 2)
-# define DP_LINK_QUAL_PATTERN_MASK (3 << 2)
+/* DPCD 1.1 only. For DPCD >= 1.2 see per-lane DP_LINK_QUAL_LANEn_SET */
+# define DP_LINK_QUAL_PATTERN_11_DISABLE (0 << 2)
+# define DP_LINK_QUAL_PATTERN_11_D10_2 (1 << 2)
+# define DP_LINK_QUAL_PATTERN_11_ERROR_RATE (2 << 2)
+# define DP_LINK_QUAL_PATTERN_11_PRBS7 (3 << 2)
+# define DP_LINK_QUAL_PATTERN_11_MASK (3 << 2)
# define DP_RECOVERED_CLOCK_OUT_EN (1 << 4)
# define DP_LINK_SCRAMBLING_DISABLE (1 << 5)
@@ -215,17 +285,63 @@
/* bitmask as for DP_I2C_SPEED_CAP */
#define DP_EDP_CONFIGURATION_SET 0x10a /* XXX 1.2? */
+# define DP_ALTERNATE_SCRAMBLER_RESET_ENABLE (1 << 0)
+# define DP_FRAMING_CHANGE_ENABLE (1 << 1)
+# define DP_PANEL_SELF_TEST_ENABLE (1 << 7)
+
+#define DP_LINK_QUAL_LANE0_SET 0x10b /* DPCD >= 1.2 */
+#define DP_LINK_QUAL_LANE1_SET 0x10c
+#define DP_LINK_QUAL_LANE2_SET 0x10d
+#define DP_LINK_QUAL_LANE3_SET 0x10e
+# define DP_LINK_QUAL_PATTERN_DISABLE 0
+# define DP_LINK_QUAL_PATTERN_D10_2 1
+# define DP_LINK_QUAL_PATTERN_ERROR_RATE 2
+# define DP_LINK_QUAL_PATTERN_PRBS7 3
+# define DP_LINK_QUAL_PATTERN_80BIT_CUSTOM 4
+# define DP_LINK_QUAL_PATTERN_HBR2_EYE 5
+# define DP_LINK_QUAL_PATTERN_MASK 7
+
+#define DP_TRAINING_LANE0_1_SET2 0x10f
+#define DP_TRAINING_LANE2_3_SET2 0x110
+# define DP_LANE02_POST_CURSOR2_SET_MASK (3 << 0)
+# define DP_LANE02_MAX_POST_CURSOR2_REACHED (1 << 2)
+# define DP_LANE13_POST_CURSOR2_SET_MASK (3 << 4)
+# define DP_LANE13_MAX_POST_CURSOR2_REACHED (1 << 6)
#define DP_MSTM_CTRL 0x111 /* 1.2 */
# define DP_MST_EN (1 << 0)
# define DP_UP_REQ_EN (1 << 1)
# define DP_UPSTREAM_IS_SRC (1 << 2)
+#define DP_AUDIO_DELAY0 0x112 /* 1.2 */
+#define DP_AUDIO_DELAY1 0x113
+#define DP_AUDIO_DELAY2 0x114
+
+#define DP_LINK_RATE_SET 0x115 /* eDP 1.4 */
+# define DP_LINK_RATE_SET_SHIFT 0
+# define DP_LINK_RATE_SET_MASK (7 << 0)
+
+#define DP_RECEIVER_ALPM_CONFIG 0x116 /* eDP 1.4 */
+# define DP_ALPM_ENABLE (1 << 0)
+# define DP_ALPM_LOCK_ERROR_IRQ_HPD_ENABLE (1 << 1)
+
+#define DP_SINK_DEVICE_AUX_FRAME_SYNC_CONF 0x117 /* eDP 1.4 */
+# define DP_AUX_FRAME_SYNC_ENABLE (1 << 0)
+# define DP_IRQ_HPD_ENABLE (1 << 1)
+
+#define DP_UPSTREAM_DEVICE_DP_PWR_NEED 0x118 /* 1.2 */
+# define DP_PWR_NOT_NEEDED (1 << 0)
+
+#define DP_AUX_FRAME_SYNC_VALUE 0x15c /* eDP 1.4 */
+# define DP_AUX_FRAME_SYNC_VALID (1 << 0)
+
#define DP_PSR_EN_CFG 0x170 /* XXX 1.2? */
# define DP_PSR_ENABLE (1 << 0)
# define DP_PSR_MAIN_LINK_ACTIVE (1 << 1)
# define DP_PSR_CRC_VERIFICATION (1 << 2)
# define DP_PSR_FRAME_CAPTURE (1 << 3)
+# define DP_PSR_SELECTIVE_UPDATE (1 << 4)
+# define DP_PSR_IRQ_HPD_WITH_CRC_ERRORS (1 << 5)
#define DP_ADAPTER_CTRL 0x1a0
# define DP_ADAPTER_CTRL_FORCE_LOAD_SENSE (1 << 0)
@@ -332,6 +448,49 @@
# define DP_SET_POWER_D3 0x2
# define DP_SET_POWER_MASK 0x3
+#define DP_EDP_DPCD_REV 0x700 /* eDP 1.2 */
+# define DP_EDP_11 0x00
+# define DP_EDP_12 0x01
+# define DP_EDP_13 0x02
+# define DP_EDP_14 0x03
+
+#define DP_EDP_GENERAL_CAP_1 0x701
+
+#define DP_EDP_BACKLIGHT_ADJUSTMENT_CAP 0x702
+
+#define DP_EDP_GENERAL_CAP_2 0x703
+
+#define DP_EDP_GENERAL_CAP_3 0x704 /* eDP 1.4 */
+
+#define DP_EDP_DISPLAY_CONTROL_REGISTER 0x720
+
+#define DP_EDP_BACKLIGHT_MODE_SET_REGISTER 0x721
+
+#define DP_EDP_BACKLIGHT_BRIGHTNESS_MSB 0x722
+#define DP_EDP_BACKLIGHT_BRIGHTNESS_LSB 0x723
+
+#define DP_EDP_PWMGEN_BIT_COUNT 0x724
+#define DP_EDP_PWMGEN_BIT_COUNT_CAP_MIN 0x725
+#define DP_EDP_PWMGEN_BIT_COUNT_CAP_MAX 0x726
+
+#define DP_EDP_BACKLIGHT_CONTROL_STATUS 0x727
+
+#define DP_EDP_BACKLIGHT_FREQ_SET 0x728
+
+#define DP_EDP_BACKLIGHT_FREQ_CAP_MIN_MSB 0x72a
+#define DP_EDP_BACKLIGHT_FREQ_CAP_MIN_MID 0x72b
+#define DP_EDP_BACKLIGHT_FREQ_CAP_MIN_LSB 0x72c
+
+#define DP_EDP_BACKLIGHT_FREQ_CAP_MAX_MSB 0x72d
+#define DP_EDP_BACKLIGHT_FREQ_CAP_MAX_MID 0x72e
+#define DP_EDP_BACKLIGHT_FREQ_CAP_MAX_LSB 0x72f
+
+#define DP_EDP_DBC_MINIMUM_BRIGHTNESS_SET 0x732
+#define DP_EDP_DBC_MAXIMUM_BRIGHTNESS_SET 0x733
+
+#define DP_EDP_REGIONAL_BACKLIGHT_BASE 0x740 /* eDP 1.4 */
+#define DP_EDP_REGIONAL_BACKLIGHT_0 0x741 /* eDP 1.4 */
+
#define DP_SIDEBAND_MSG_DOWN_REQ_BASE 0x1000 /* 1.2 MST */
#define DP_SIDEBAND_MSG_UP_REP_BASE 0x1200 /* 1.2 MST */
#define DP_SIDEBAND_MSG_DOWN_REP_BASE 0x1400 /* 1.2 MST */
@@ -350,6 +509,7 @@
#define DP_PSR_ERROR_STATUS 0x2006 /* XXX 1.2? */
# define DP_PSR_LINK_CRC_ERROR (1 << 0)
# define DP_PSR_RFB_STORAGE_ERROR (1 << 1)
+# define DP_PSR_VSC_SDP_UNCORRECTABLE_ERROR (1 << 2) /* eDP 1.4 */
#define DP_PSR_ESI 0x2007 /* XXX 1.2? */
# define DP_PSR_CAPS_CHANGE (1 << 0)
@@ -363,6 +523,9 @@
# define DP_PSR_SINK_INTERNAL_ERROR 7
# define DP_PSR_SINK_STATE_MASK 0x07
+#define DP_RECEIVER_ALPM_STATUS 0x200b /* eDP 1.4 */
+# define DP_ALPM_LOCK_TIMEOUT_ERROR (1 << 0)
+
/* DP 1.2 Sideband message defines */
/* peer device type - DP 1.2a Table 2-92 */
#define DP_PEER_DEVICE_NONE 0x0
@@ -519,6 +682,9 @@
* transactions. The drm_dp_aux_register_i2c_bus() function registers an
* I2C adapter that can be passed to drm_probe_ddc(). Upon removal, drivers
* should call drm_dp_aux_unregister_i2c_bus() to remove the I2C adapter.
+ * The I2C adapter uses long transfers by default; if a partial response is
+ * received, the adapter will drop down to the size given by the partial
+ * response for this transaction only.
*
* Note that the aux helper code assumes that the .transfer() function
* only modifies the reply field of the drm_dp_aux_msg structure. The
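(For reference, an illustrative sketch — not part of the patch above — of a driver-side .transfer() hook under the behaviour described in the new comment: the I2C-over-AUX helper uses transfers of up to DP_AUX_MAX_PAYLOAD_BYTES and, if the hook returns fewer bytes than requested, drops down to that smaller size for the rest of the transaction. my_hw_aux_xfer() is a made-up hardware accessor.)

    /* assumes <drm/drm_dp_helper.h> and <linux/kernel.h> */
    static ssize_t example_aux_transfer(struct drm_dp_aux *aux,
                                        struct drm_dp_aux_msg *msg)
    {
            size_t len = min_t(size_t, msg->size, DP_AUX_MAX_PAYLOAD_BYTES);
            int ret;

            /* my_hw_aux_xfer() stands in for the real hardware access */
            ret = my_hw_aux_xfer(aux, msg->request, msg->address,
                                 msg->buffer, len);
            if (ret < 0)
                    return ret;

            msg->reply = 0; /* 0 is ACK for both native and I2C-over-AUX replies */
            return ret;     /* fewer bytes than msg->size reports a partial transfer */
    }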
diff --git a/include/drm/drm_dp_mst_helper.h b/include/drm/drm_dp_mst_helper.h
index 00c1da9..a250781 100644
--- a/include/drm/drm_dp_mst_helper.h
+++ b/include/drm/drm_dp_mst_helper.h
@@ -486,6 +486,8 @@
bool drm_dp_mst_allocate_vcpi(struct drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_port *port, int pbn, int *slots);
+int drm_dp_mst_get_vcpi_slots(struct drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_port *port);
+
void drm_dp_mst_reset_vcpi_slots(struct drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_port *port);
diff --git a/include/drm/drm_edid.h b/include/drm/drm_edid.h
index 87d85e8..7990501 100644
--- a/include/drm/drm_edid.h
+++ b/include/drm/drm_edid.h
@@ -215,6 +215,8 @@
#define DRM_ELD_VER 0
# define DRM_ELD_VER_SHIFT 3
# define DRM_ELD_VER_MASK (0x1f << 3)
+# define DRM_ELD_VER_CEA861D (2 << 3) /* supports 861D or below */
+# define DRM_ELD_VER_CANNED (0x1f << 3)
#define DRM_ELD_BASELINE_ELD_LEN 2 /* in dwords! */
diff --git a/include/drm/drm_fb_helper.h b/include/drm/drm_fb_helper.h
index 21b944c..0dfd94def 100644
--- a/include/drm/drm_fb_helper.h
+++ b/include/drm/drm_fb_helper.h
@@ -44,6 +44,25 @@
int x, y;
};
+/**
+ * struct drm_fb_helper_surface_size - describes fbdev size and scanout surface size
+ * @fb_width: fbdev width
+ * @fb_height: fbdev height
+ * @surface_width: scanout buffer width
+ * @surface_height: scanout buffer height
+ * @surface_bpp: scanout buffer bpp
+ * @surface_depth: scanout buffer depth
+ *
+ * Note that the scanout surface width/height may be larger than the fbdev
+ * width/height. In case of multiple displays, the scanout surface is sized
+ * according to the largest width/height (so it is large enough for all CRTCs
+ * to scanout). But the fbdev width/height is sized to the minimum width/
+ * height of all the displays. This ensures that fbcon fits on the smallest
+ * of the attached displays.
+ *
+ * So what is passed to drm_fb_helper_fill_var() should be fb_width/fb_height,
+ * rather than the surface size.
+ */
struct drm_fb_helper_surface_size {
u32 fb_width;
u32 fb_height;
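(A minimal sketch — assuming the usual fbdev emulation flow, not taken from this patch — of the relevant part of a driver's ->fb_probe() hook, illustrating the kernel-doc above: the scanout buffer is sized to surface_width x surface_height, while fbcon is told about the possibly smaller fb_width x fb_height. With, say, a 1920x1080 and a 1280x720 display attached, the surface is 1920x1080 but the fbdev var is 1280x720.)

    static int example_fb_probe(struct drm_fb_helper *helper,
                                struct drm_fb_helper_surface_size *sizes)
    {
            /* allocation of helper->fb and helper->fbdev is assumed/elided */
            struct fb_info *info = helper->fbdev;

            drm_fb_helper_fill_fix(info, helper->fb->pitches[0],
                                   helper->fb->depth);
            /* pass fb_width/fb_height here, not the surface size */
            drm_fb_helper_fill_var(info, helper, sizes->fb_width,
                                   sizes->fb_height);
            return 0;
    }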
diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index 1e6ae14..7a592d7 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -149,14 +149,16 @@
static inline void
drm_gem_object_unreference_unlocked(struct drm_gem_object *obj)
{
- if (obj && !atomic_add_unless(&obj->refcount.refcount, -1, 1)) {
- struct drm_device *dev = obj->dev;
+ struct drm_device *dev;
- mutex_lock(&dev->struct_mutex);
- if (likely(atomic_dec_and_test(&obj->refcount.refcount)))
- drm_gem_object_free(&obj->refcount);
+ if (!obj)
+ return;
+
+ dev = obj->dev;
+ if (kref_put_mutex(&obj->refcount, drm_gem_object_free, &dev->struct_mutex))
mutex_unlock(&dev->struct_mutex);
- }
+ else
+ might_lock(&dev->struct_mutex);
}
int drm_gem_handle_create(struct drm_file *file_priv,
diff --git a/include/drm/drm_modes.h b/include/drm/drm_modes.h
index d92f6dd..0616188 100644
--- a/include/drm/drm_modes.h
+++ b/include/drm/drm_modes.h
@@ -92,7 +92,7 @@
#define CRTC_STEREO_DOUBLE (1 << 1) /* adjust timings for stereo modes */
#define CRTC_NO_DBLSCAN (1 << 2) /* don't adjust doublescan */
#define CRTC_NO_VSCAN (1 << 3) /* don't adjust doublescan */
-#define CRTC_STEREO_DOUBLE_ONLY (CRTC_NO_DBLSCAN | CRTC_NO_VSCAN)
+#define CRTC_STEREO_DOUBLE_ONLY (CRTC_STEREO_DOUBLE | CRTC_NO_DBLSCAN | CRTC_NO_VSCAN)
#define DRM_MODE_FLAG_3D_MAX DRM_MODE_FLAG_3D_SIDE_BY_SIDE_HALF
diff --git a/include/drm/drm_panel.h b/include/drm/drm_panel.h
index 1fbcc96..13ff44b 100644
--- a/include/drm/drm_panel.h
+++ b/include/drm/drm_panel.h
@@ -29,6 +29,7 @@
struct drm_connector;
struct drm_device;
struct drm_panel;
+struct display_timing;
/**
* struct drm_panel_funcs - perform operations on a given panel
@@ -38,6 +39,8 @@
* @enable: enable panel (turn on back light, etc.)
* @get_modes: add modes to the connector that the panel is attached to and
* return the number of modes added
+ * @get_timings: copy display timings into the provided array and return
+ * the number of display timings available
*
* The .prepare() function is typically called before the display controller
* starts to transmit video data. Panel drivers can use this to turn the panel
@@ -68,6 +71,8 @@
int (*prepare)(struct drm_panel *panel);
int (*enable)(struct drm_panel *panel);
int (*get_modes)(struct drm_panel *panel);
+ int (*get_timings)(struct drm_panel *panel, unsigned int num_timings,
+ struct display_timing *timings);
};
struct drm_panel {
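(An illustrative sketch — not from this patch — of a panel driver filling in the new ->get_timings() hook under the contract in the kernel-doc above: copy up to num_timings entries and return how many timings the panel provides in total. example_timing and its values are made up; struct display_timing comes from <video/display_timing.h>.)

    static const struct display_timing example_timing = {
            .pixelclock = { 66000000, 70000000, 74000000 }, /* min, typ, max */
            .hactive = { 1280, 1280, 1280 },
            .vactive = { 800, 800, 800 },
            /* remaining porch/sync fields elided */
    };

    static int example_get_timings(struct drm_panel *panel,
                                   unsigned int num_timings,
                                   struct display_timing *timings)
    {
            if (timings && num_timings >= 1)
                    timings[0] = example_timing;

            return 1; /* one timing available in total */
    }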
diff --git a/include/drm/drm_plane_helper.h b/include/drm/drm_plane_helper.h
index 31c11d3..96e1628 100644
--- a/include/drm/drm_plane_helper.h
+++ b/include/drm/drm_plane_helper.h
@@ -59,9 +59,11 @@
*/
struct drm_plane_helper_funcs {
int (*prepare_fb)(struct drm_plane *plane,
- struct drm_framebuffer *fb);
+ struct drm_framebuffer *fb,
+ const struct drm_plane_state *new_state);
void (*cleanup_fb)(struct drm_plane *plane,
- struct drm_framebuffer *fb);
+ struct drm_framebuffer *fb,
+ const struct drm_plane_state *old_state);
int (*atomic_check)(struct drm_plane *plane,
struct drm_plane_state *state);
@@ -74,7 +76,7 @@
static inline void drm_plane_helper_add(struct drm_plane *plane,
const struct drm_plane_helper_funcs *funcs)
{
- plane->helper_private = (void *)funcs;
+ plane->helper_private = funcs;
}
extern int drm_plane_helper_check_update(struct drm_plane *plane,
@@ -98,10 +100,6 @@
extern int drm_primary_helper_disable(struct drm_plane *plane);
extern void drm_primary_helper_destroy(struct drm_plane *plane);
extern const struct drm_plane_funcs drm_primary_helper_funcs;
-extern struct drm_plane *drm_primary_helper_create_plane(struct drm_device *dev,
- const uint32_t *formats,
- int num_formats);
-
int drm_plane_helper_update(struct drm_plane *plane, struct drm_crtc *crtc,
struct drm_framebuffer *fb,
diff --git a/include/drm/i915_pciids.h b/include/drm/i915_pciids.h
index d016dc5..6133723 100644
--- a/include/drm/i915_pciids.h
+++ b/include/drm/i915_pciids.h
@@ -208,40 +208,41 @@
#define INTEL_VLV_D_IDS(info) \
INTEL_VGA_DEVICE(0x0155, info)
-#define _INTEL_BDW_M(gt, id, info) \
- INTEL_VGA_DEVICE((((gt) - 1) << 4) | (id), info)
-#define _INTEL_BDW_D(gt, id, info) \
- INTEL_VGA_DEVICE((((gt) - 1) << 4) | (id), info)
-
-#define _INTEL_BDW_M_IDS(gt, info) \
- _INTEL_BDW_M(gt, 0x1602, info), /* Halo */ \
- _INTEL_BDW_M(gt, 0x1606, info), /* ULT */ \
- _INTEL_BDW_M(gt, 0x160B, info), /* ULT */ \
- _INTEL_BDW_M(gt, 0x160E, info) /* ULX */
-
-#define _INTEL_BDW_D_IDS(gt, info) \
- _INTEL_BDW_D(gt, 0x160A, info), /* Server */ \
- _INTEL_BDW_D(gt, 0x160D, info) /* Workstation */
-
-#define INTEL_BDW_GT12M_IDS(info) \
- _INTEL_BDW_M_IDS(1, info), \
- _INTEL_BDW_M_IDS(2, info)
+#define INTEL_BDW_GT12M_IDS(info) \
+ INTEL_VGA_DEVICE(0x1602, info), /* GT1 ULT */ \
+ INTEL_VGA_DEVICE(0x1606, info), /* GT1 ULT */ \
+ INTEL_VGA_DEVICE(0x160B, info), /* GT1 Iris */ \
+ INTEL_VGA_DEVICE(0x160E, info), /* GT1 ULX */ \
+ INTEL_VGA_DEVICE(0x1612, info), /* GT2 Halo */ \
+ INTEL_VGA_DEVICE(0x1616, info), /* GT2 ULT */ \
+ INTEL_VGA_DEVICE(0x161B, info), /* GT2 ULT */ \
+ INTEL_VGA_DEVICE(0x161E, info) /* GT2 ULX */
#define INTEL_BDW_GT12D_IDS(info) \
- _INTEL_BDW_D_IDS(1, info), \
- _INTEL_BDW_D_IDS(2, info)
+ INTEL_VGA_DEVICE(0x160A, info), /* GT1 Server */ \
+ INTEL_VGA_DEVICE(0x160D, info), /* GT1 Workstation */ \
+ INTEL_VGA_DEVICE(0x161A, info), /* GT2 Server */ \
+ INTEL_VGA_DEVICE(0x161D, info) /* GT2 Workstation */
#define INTEL_BDW_GT3M_IDS(info) \
- _INTEL_BDW_M_IDS(3, info)
+ INTEL_VGA_DEVICE(0x1622, info), /* ULT */ \
+ INTEL_VGA_DEVICE(0x1626, info), /* ULT */ \
+ INTEL_VGA_DEVICE(0x162B, info), /* Iris */ \
+ INTEL_VGA_DEVICE(0x162E, info) /* ULX */
#define INTEL_BDW_GT3D_IDS(info) \
- _INTEL_BDW_D_IDS(3, info)
+ INTEL_VGA_DEVICE(0x162A, info), /* Server */ \
+ INTEL_VGA_DEVICE(0x162D, info) /* Workstation */
#define INTEL_BDW_RSVDM_IDS(info) \
- _INTEL_BDW_M_IDS(4, info)
+ INTEL_VGA_DEVICE(0x1632, info), /* ULT */ \
+ INTEL_VGA_DEVICE(0x1636, info), /* ULT */ \
+ INTEL_VGA_DEVICE(0x163B, info), /* Iris */ \
+ INTEL_VGA_DEVICE(0x163E, info) /* ULX */
#define INTEL_BDW_RSVDD_IDS(info) \
- _INTEL_BDW_D_IDS(4, info)
+ INTEL_VGA_DEVICE(0x163A, info), /* Server */ \
+ INTEL_VGA_DEVICE(0x163D, info) /* Workstation */
#define INTEL_BDW_M_IDS(info) \
INTEL_BDW_GT12M_IDS(info), \
@@ -259,21 +260,31 @@
INTEL_VGA_DEVICE(0x22b2, info), \
INTEL_VGA_DEVICE(0x22b3, info)
-#define INTEL_SKL_IDS(info) \
- INTEL_VGA_DEVICE(0x1916, info), /* ULT GT2 */ \
+#define INTEL_SKL_GT1_IDS(info) \
INTEL_VGA_DEVICE(0x1906, info), /* ULT GT1 */ \
- INTEL_VGA_DEVICE(0x1926, info), /* ULT GT3 */ \
- INTEL_VGA_DEVICE(0x1921, info), /* ULT GT2F */ \
INTEL_VGA_DEVICE(0x190E, info), /* ULX GT1 */ \
+ INTEL_VGA_DEVICE(0x1902, info), /* DT GT1 */ \
+ INTEL_VGA_DEVICE(0x190B, info), /* Halo GT1 */ \
+ INTEL_VGA_DEVICE(0x190A, info) /* SRV GT1 */
+
+#define INTEL_SKL_GT2_IDS(info) \
+ INTEL_VGA_DEVICE(0x1916, info), /* ULT GT2 */ \
+ INTEL_VGA_DEVICE(0x1921, info), /* ULT GT2F */ \
INTEL_VGA_DEVICE(0x191E, info), /* ULX GT2 */ \
INTEL_VGA_DEVICE(0x1912, info), /* DT GT2 */ \
- INTEL_VGA_DEVICE(0x1902, info), /* DT GT1 */ \
INTEL_VGA_DEVICE(0x191B, info), /* Halo GT2 */ \
- INTEL_VGA_DEVICE(0x192B, info), /* Halo GT3 */ \
- INTEL_VGA_DEVICE(0x190B, info), /* Halo GT1 */ \
INTEL_VGA_DEVICE(0x191A, info), /* SRV GT2 */ \
- INTEL_VGA_DEVICE(0x192A, info), /* SRV GT3 */ \
- INTEL_VGA_DEVICE(0x190A, info), /* SRV GT1 */ \
INTEL_VGA_DEVICE(0x191D, info) /* WKS GT2 */
+#define INTEL_SKL_GT3_IDS(info) \
+ INTEL_VGA_DEVICE(0x1926, info), /* ULT GT3 */ \
+ INTEL_VGA_DEVICE(0x192B, info), /* Halo GT3 */ \
+ INTEL_VGA_DEVICE(0x192A, info) /* SRV GT3 */ \
+
+#define INTEL_SKL_IDS(info) \
+ INTEL_SKL_GT1_IDS(info), \
+ INTEL_SKL_GT2_IDS(info), \
+ INTEL_SKL_GT3_IDS(info)
+
+
#endif /* _I915_PCIIDS_H */
diff --git a/include/dt-bindings/clock/exynos3250.h b/include/dt-bindings/clock/exynos3250.h
index 961b9c1..aab088d 100644
--- a/include/dt-bindings/clock/exynos3250.h
+++ b/include/dt-bindings/clock/exynos3250.h
@@ -282,4 +282,65 @@
*/
#define NR_CLKS_DMC 21
+/*
+ * CMU ISP
+ */
+
+/* Dividers */
+
+#define CLK_DIV_ISP1 1
+#define CLK_DIV_ISP0 2
+#define CLK_DIV_MCUISP1 3
+#define CLK_DIV_MCUISP0 4
+#define CLK_DIV_MPWM 5
+
+/* Gates */
+
+#define CLK_UART_ISP 8
+#define CLK_WDT_ISP 9
+#define CLK_PWM_ISP 10
+#define CLK_I2C1_ISP 11
+#define CLK_I2C0_ISP 12
+#define CLK_MPWM_ISP 13
+#define CLK_MCUCTL_ISP 14
+#define CLK_PPMUISPX 15
+#define CLK_PPMUISPMX 16
+#define CLK_QE_LITE1 17
+#define CLK_QE_LITE0 18
+#define CLK_QE_FD 19
+#define CLK_QE_DRC 20
+#define CLK_QE_ISP 21
+#define CLK_CSIS1 22
+#define CLK_SMMU_LITE1 23
+#define CLK_SMMU_LITE0 24
+#define CLK_SMMU_FD 25
+#define CLK_SMMU_DRC 26
+#define CLK_SMMU_ISP 27
+#define CLK_GICISP 28
+#define CLK_CSIS0 29
+#define CLK_MCUISP 30
+#define CLK_LITE1 31
+#define CLK_LITE0 32
+#define CLK_FD 33
+#define CLK_DRC 34
+#define CLK_ISP 35
+#define CLK_QE_ISPCX 36
+#define CLK_QE_SCALERP 37
+#define CLK_QE_SCALERC 38
+#define CLK_SMMU_SCALERP 39
+#define CLK_SMMU_SCALERC 40
+#define CLK_SCALERP 41
+#define CLK_SCALERC 42
+#define CLK_SPI1_ISP 43
+#define CLK_SPI0_ISP 44
+#define CLK_SMMU_ISPCX 45
+#define CLK_ASYNCAXIM 46
+#define CLK_SCLK_MPWM_ISP 47
+
+/*
+ * Total number of clocks of CMU_ISP.
+ * NOTE: Must be equal to last clock ID increased by one.
+ */
+#define NR_CLKS_ISP 48
+
#endif /* _DT_BINDINGS_CLOCK_SAMSUNG_EXYNOS3250_CLOCK_H */
diff --git a/include/dt-bindings/clock/exynos5433.h b/include/dt-bindings/clock/exynos5433.h
new file mode 100644
index 0000000..5bd80d5
--- /dev/null
+++ b/include/dt-bindings/clock/exynos5433.h
@@ -0,0 +1,1403 @@
+/*
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Chanwoo Choi <cw00.choi@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef _DT_BINDINGS_CLOCK_EXYNOS5433_H
+#define _DT_BINDINGS_CLOCK_EXYNOS5433_H
+
+/* CMU_TOP */
+#define CLK_FOUT_ISP_PLL 1
+#define CLK_FOUT_AUD_PLL 2
+
+#define CLK_MOUT_AUD_PLL 10
+#define CLK_MOUT_ISP_PLL 11
+#define CLK_MOUT_AUD_PLL_USER_T 12
+#define CLK_MOUT_MPHY_PLL_USER 13
+#define CLK_MOUT_MFC_PLL_USER 14
+#define CLK_MOUT_BUS_PLL_USER 15
+#define CLK_MOUT_ACLK_HEVC_400 16
+#define CLK_MOUT_ACLK_CAM1_333 17
+#define CLK_MOUT_ACLK_CAM1_552_B 18
+#define CLK_MOUT_ACLK_CAM1_552_A 19
+#define CLK_MOUT_ACLK_ISP_DIS_400 20
+#define CLK_MOUT_ACLK_ISP_400 21
+#define CLK_MOUT_ACLK_BUS0_400 22
+#define CLK_MOUT_ACLK_MSCL_400_B 23
+#define CLK_MOUT_ACLK_MSCL_400_A 24
+#define CLK_MOUT_ACLK_GSCL_333 25
+#define CLK_MOUT_ACLK_G2D_400_B 26
+#define CLK_MOUT_ACLK_G2D_400_A 27
+#define CLK_MOUT_SCLK_JPEG_C 28
+#define CLK_MOUT_SCLK_JPEG_B 29
+#define CLK_MOUT_SCLK_JPEG_A 30
+#define CLK_MOUT_SCLK_MMC2_B 31
+#define CLK_MOUT_SCLK_MMC2_A 32
+#define CLK_MOUT_SCLK_MMC1_B 33
+#define CLK_MOUT_SCLK_MMC1_A 34
+#define CLK_MOUT_SCLK_MMC0_D 35
+#define CLK_MOUT_SCLK_MMC0_C 36
+#define CLK_MOUT_SCLK_MMC0_B 37
+#define CLK_MOUT_SCLK_MMC0_A 38
+#define CLK_MOUT_SCLK_SPI4 39
+#define CLK_MOUT_SCLK_SPI3 40
+#define CLK_MOUT_SCLK_UART2 41
+#define CLK_MOUT_SCLK_UART1 42
+#define CLK_MOUT_SCLK_UART0 43
+#define CLK_MOUT_SCLK_SPI2 44
+#define CLK_MOUT_SCLK_SPI1 45
+#define CLK_MOUT_SCLK_SPI0 46
+#define CLK_MOUT_ACLK_MFC_400_C 47
+#define CLK_MOUT_ACLK_MFC_400_B 48
+#define CLK_MOUT_ACLK_MFC_400_A 49
+#define CLK_MOUT_SCLK_ISP_SENSOR2 50
+#define CLK_MOUT_SCLK_ISP_SENSOR1 51
+#define CLK_MOUT_SCLK_ISP_SENSOR0 52
+#define CLK_MOUT_SCLK_ISP_UART 53
+#define CLK_MOUT_SCLK_ISP_SPI1 54
+#define CLK_MOUT_SCLK_ISP_SPI0 55
+#define CLK_MOUT_SCLK_PCIE_100 56
+#define CLK_MOUT_SCLK_UFSUNIPRO 57
+#define CLK_MOUT_SCLK_USBHOST30 58
+#define CLK_MOUT_SCLK_USBDRD30 59
+#define CLK_MOUT_SCLK_SLIMBUS 60
+#define CLK_MOUT_SCLK_SPDIF 61
+#define CLK_MOUT_SCLK_AUDIO1 62
+#define CLK_MOUT_SCLK_AUDIO0 63
+#define CLK_MOUT_SCLK_HDMI_SPDIF 64
+
+#define CLK_DIV_ACLK_FSYS_200 100
+#define CLK_DIV_ACLK_IMEM_SSSX_266 101
+#define CLK_DIV_ACLK_IMEM_200 102
+#define CLK_DIV_ACLK_IMEM_266 103
+#define CLK_DIV_ACLK_PERIC_66_B 104
+#define CLK_DIV_ACLK_PERIC_66_A 105
+#define CLK_DIV_ACLK_PERIS_66_B 106
+#define CLK_DIV_ACLK_PERIS_66_A 107
+#define CLK_DIV_SCLK_MMC1_B 108
+#define CLK_DIV_SCLK_MMC1_A 109
+#define CLK_DIV_SCLK_MMC0_B 110
+#define CLK_DIV_SCLK_MMC0_A 111
+#define CLK_DIV_SCLK_MMC2_B 112
+#define CLK_DIV_SCLK_MMC2_A 113
+#define CLK_DIV_SCLK_SPI1_B 114
+#define CLK_DIV_SCLK_SPI1_A 115
+#define CLK_DIV_SCLK_SPI0_B 116
+#define CLK_DIV_SCLK_SPI0_A 117
+#define CLK_DIV_SCLK_SPI2_B 118
+#define CLK_DIV_SCLK_SPI2_A 119
+#define CLK_DIV_SCLK_UART2 120
+#define CLK_DIV_SCLK_UART1 121
+#define CLK_DIV_SCLK_UART0 122
+#define CLK_DIV_SCLK_SPI4_B 123
+#define CLK_DIV_SCLK_SPI4_A 124
+#define CLK_DIV_SCLK_SPI3_B 125
+#define CLK_DIV_SCLK_SPI3_A 126
+#define CLK_DIV_SCLK_I2S1 127
+#define CLK_DIV_SCLK_PCM1 128
+#define CLK_DIV_SCLK_AUDIO1 129
+#define CLK_DIV_SCLK_AUDIO0 130
+#define CLK_DIV_ACLK_GSCL_111 131
+#define CLK_DIV_ACLK_GSCL_333 132
+#define CLK_DIV_ACLK_HEVC_400 133
+#define CLK_DIV_ACLK_MFC_400 134
+#define CLK_DIV_ACLK_G2D_266 135
+#define CLK_DIV_ACLK_G2D_400 136
+#define CLK_DIV_ACLK_G3D_400 137
+#define CLK_DIV_ACLK_BUS0_400 138
+#define CLK_DIV_ACLK_BUS1_400 139
+#define CLK_DIV_SCLK_PCIE_100 140
+#define CLK_DIV_SCLK_USBHOST30 141
+#define CLK_DIV_SCLK_UFSUNIPRO 142
+#define CLK_DIV_SCLK_USBDRD30 143
+#define CLK_DIV_SCLK_JPEG 144
+#define CLK_DIV_ACLK_MSCL_400 145
+#define CLK_DIV_ACLK_ISP_DIS_400 146
+#define CLK_DIV_ACLK_ISP_400 147
+#define CLK_DIV_ACLK_CAM0_333 148
+#define CLK_DIV_ACLK_CAM0_400 149
+#define CLK_DIV_ACLK_CAM0_552 150
+#define CLK_DIV_ACLK_CAM1_333 151
+#define CLK_DIV_ACLK_CAM1_400 152
+#define CLK_DIV_ACLK_CAM1_552 153
+#define CLK_DIV_SCLK_ISP_UART 154
+#define CLK_DIV_SCLK_ISP_SPI1_B 155
+#define CLK_DIV_SCLK_ISP_SPI1_A 156
+#define CLK_DIV_SCLK_ISP_SPI0_B 157
+#define CLK_DIV_SCLK_ISP_SPI0_A 158
+#define CLK_DIV_SCLK_ISP_SENSOR2_B 159
+#define CLK_DIV_SCLK_ISP_SENSOR2_A 160
+#define CLK_DIV_SCLK_ISP_SENSOR1_B 161
+#define CLK_DIV_SCLK_ISP_SENSOR1_A 162
+#define CLK_DIV_SCLK_ISP_SENSOR0_B 163
+#define CLK_DIV_SCLK_ISP_SENSOR0_A 164
+
+#define CLK_ACLK_PERIC_66 200
+#define CLK_ACLK_PERIS_66 201
+#define CLK_ACLK_FSYS_200 202
+#define CLK_SCLK_MMC2_FSYS 203
+#define CLK_SCLK_MMC1_FSYS 204
+#define CLK_SCLK_MMC0_FSYS 205
+#define CLK_SCLK_SPI4_PERIC 206
+#define CLK_SCLK_SPI3_PERIC 207
+#define CLK_SCLK_UART2_PERIC 208
+#define CLK_SCLK_UART1_PERIC 209
+#define CLK_SCLK_UART0_PERIC 210
+#define CLK_SCLK_SPI2_PERIC 211
+#define CLK_SCLK_SPI1_PERIC 212
+#define CLK_SCLK_SPI0_PERIC 213
+#define CLK_SCLK_SPDIF_PERIC 214
+#define CLK_SCLK_I2S1_PERIC 215
+#define CLK_SCLK_PCM1_PERIC 216
+#define CLK_SCLK_SLIMBUS 217
+#define CLK_SCLK_AUDIO1 218
+#define CLK_SCLK_AUDIO0 219
+#define CLK_ACLK_G2D_266 220
+#define CLK_ACLK_G2D_400 221
+#define CLK_ACLK_G3D_400 222
+#define CLK_ACLK_IMEM_SSX_266 223
+#define CLK_ACLK_BUS0_400 224
+#define CLK_ACLK_BUS1_400 225
+#define CLK_ACLK_IMEM_200 226
+#define CLK_ACLK_IMEM_266 227
+#define CLK_SCLK_PCIE_100_FSYS 228
+#define CLK_SCLK_UFSUNIPRO_FSYS 229
+#define CLK_SCLK_USBHOST30_FSYS 230
+#define CLK_SCLK_USBDRD30_FSYS 231
+#define CLK_ACLK_GSCL_111 232
+#define CLK_ACLK_GSCL_333 233
+#define CLK_SCLK_JPEG_MSCL 234
+#define CLK_ACLK_MSCL_400 235
+#define CLK_ACLK_MFC_400 236
+#define CLK_ACLK_HEVC_400 237
+#define CLK_ACLK_ISP_DIS_400 238
+#define CLK_ACLK_ISP_400 239
+#define CLK_ACLK_CAM0_333 240
+#define CLK_ACLK_CAM0_400 241
+#define CLK_ACLK_CAM0_552 242
+#define CLK_ACLK_CAM1_333 243
+#define CLK_ACLK_CAM1_400 244
+#define CLK_ACLK_CAM1_552 245
+#define CLK_SCLK_ISP_SENSOR2 246
+#define CLK_SCLK_ISP_SENSOR1 247
+#define CLK_SCLK_ISP_SENSOR0 248
+#define CLK_SCLK_ISP_MCTADC_CAM1 249
+#define CLK_SCLK_ISP_UART_CAM1 250
+#define CLK_SCLK_ISP_SPI1_CAM1 251
+#define CLK_SCLK_ISP_SPI0_CAM1 252
+#define CLK_SCLK_HDMI_SPDIF_DISP 253
+
+#define TOP_NR_CLK 254
+
+/* CMU_CPIF */
+#define CLK_FOUT_MPHY_PLL 1
+
+#define CLK_MOUT_MPHY_PLL 2
+
+#define CLK_DIV_SCLK_MPHY 10
+
+#define CLK_SCLK_MPHY_PLL 11
+#define CLK_SCLK_UFS_MPHY 11
+
+#define CPIF_NR_CLK 12
+
+/* CMU_MIF */
+#define CLK_FOUT_MEM0_PLL 1
+#define CLK_FOUT_MEM1_PLL 2
+#define CLK_FOUT_BUS_PLL 3
+#define CLK_FOUT_MFC_PLL 4
+#define CLK_DOUT_MFC_PLL 5
+#define CLK_DOUT_BUS_PLL 6
+#define CLK_DOUT_MEM1_PLL 7
+#define CLK_DOUT_MEM0_PLL 8
+
+#define CLK_MOUT_MFC_PLL_DIV2 10
+#define CLK_MOUT_BUS_PLL_DIV2 11
+#define CLK_MOUT_MEM1_PLL_DIV2 12
+#define CLK_MOUT_MEM0_PLL_DIV2 13
+#define CLK_MOUT_MFC_PLL 14
+#define CLK_MOUT_BUS_PLL 15
+#define CLK_MOUT_MEM1_PLL 16
+#define CLK_MOUT_MEM0_PLL 17
+#define CLK_MOUT_CLK2X_PHY_C 18
+#define CLK_MOUT_CLK2X_PHY_B 19
+#define CLK_MOUT_CLK2X_PHY_A 20
+#define CLK_MOUT_CLKM_PHY_C 21
+#define CLK_MOUT_CLKM_PHY_B 22
+#define CLK_MOUT_CLKM_PHY_A 23
+#define CLK_MOUT_ACLK_MIFNM_200 24
+#define CLK_MOUT_ACLK_MIFNM_400 25
+#define CLK_MOUT_ACLK_DISP_333_B 26
+#define CLK_MOUT_ACLK_DISP_333_A 27
+#define CLK_MOUT_SCLK_DECON_VCLK_C 28
+#define CLK_MOUT_SCLK_DECON_VCLK_B 29
+#define CLK_MOUT_SCLK_DECON_VCLK_A 30
+#define CLK_MOUT_SCLK_DECON_ECLK_C 31
+#define CLK_MOUT_SCLK_DECON_ECLK_B 32
+#define CLK_MOUT_SCLK_DECON_ECLK_A 33
+#define CLK_MOUT_SCLK_DECON_TV_ECLK_C 34
+#define CLK_MOUT_SCLK_DECON_TV_ECLK_B 35
+#define CLK_MOUT_SCLK_DECON_TV_ECLK_A 36
+#define CLK_MOUT_SCLK_DSD_C 37
+#define CLK_MOUT_SCLK_DSD_B 38
+#define CLK_MOUT_SCLK_DSD_A 39
+#define CLK_MOUT_SCLK_DSIM0_C 40
+#define CLK_MOUT_SCLK_DSIM0_B 41
+#define CLK_MOUT_SCLK_DSIM0_A 42
+#define CLK_MOUT_SCLK_DECON_TV_VCLK_C 46
+#define CLK_MOUT_SCLK_DECON_TV_VCLK_B 47
+#define CLK_MOUT_SCLK_DECON_TV_VCLK_A 48
+#define CLK_MOUT_SCLK_DSIM1_C 49
+#define CLK_MOUT_SCLK_DSIM1_B 50
+#define CLK_MOUT_SCLK_DSIM1_A 51
+
+#define CLK_DIV_SCLK_HPM_MIF 55
+#define CLK_DIV_ACLK_DREX1 56
+#define CLK_DIV_ACLK_DREX0 57
+#define CLK_DIV_CLK2XPHY 58
+#define CLK_DIV_ACLK_MIF_266 59
+#define CLK_DIV_ACLK_MIFND_133 60
+#define CLK_DIV_ACLK_MIF_133 61
+#define CLK_DIV_ACLK_MIFNM_200 62
+#define CLK_DIV_ACLK_MIF_200 63
+#define CLK_DIV_ACLK_MIF_400 64
+#define CLK_DIV_ACLK_BUS2_400 65
+#define CLK_DIV_ACLK_DISP_333 66
+#define CLK_DIV_ACLK_CPIF_200 67
+#define CLK_DIV_SCLK_DSIM1 68
+#define CLK_DIV_SCLK_DECON_TV_VCLK 69
+#define CLK_DIV_SCLK_DSIM0 70
+#define CLK_DIV_SCLK_DSD 71
+#define CLK_DIV_SCLK_DECON_TV_ECLK 72
+#define CLK_DIV_SCLK_DECON_VCLK 73
+#define CLK_DIV_SCLK_DECON_ECLK 74
+#define CLK_DIV_MIF_PRE 75
+
+#define CLK_CLK2X_PHY1 80
+#define CLK_CLK2X_PHY0 81
+#define CLK_CLKM_PHY1 82
+#define CLK_CLKM_PHY0 83
+#define CLK_RCLK_DREX1 84
+#define CLK_RCLK_DREX0 85
+#define CLK_ACLK_DREX1_TZ 86
+#define CLK_ACLK_DREX0_TZ 87
+#define CLK_ACLK_DREX1_PEREV 88
+#define CLK_ACLK_DREX0_PEREV 89
+#define CLK_ACLK_DREX1_MEMIF 90
+#define CLK_ACLK_DREX0_MEMIF 91
+#define CLK_ACLK_DREX1_SCH 92
+#define CLK_ACLK_DREX0_SCH 93
+#define CLK_ACLK_DREX1_BUSIF 94
+#define CLK_ACLK_DREX0_BUSIF 95
+#define CLK_ACLK_DREX1_BUSIF_RD 96
+#define CLK_ACLK_DREX0_BUSIF_RD 97
+#define CLK_ACLK_DREX1 98
+#define CLK_ACLK_DREX0 99
+#define CLK_ACLK_ASYNCAXIM_ATLAS_CCIX 100
+#define CLK_ACLK_ASYNCAXIS_ATLAS_MIF 101
+#define CLK_ACLK_ASYNCAXIM_ATLAS_MIF 102
+#define CLK_ACLK_ASYNCAXIS_MIF_IMEM 103
+#define CLK_ACLK_ASYNCAXIS_NOC_P_CCI 104
+#define CLK_ACLK_ASYNCAXIM_NOC_P_CCI 105
+#define CLK_ACLK_ASYNCAXIS_CP1 106
+#define CLK_ACLK_ASYNCAXIM_CP1 107
+#define CLK_ACLK_ASYNCAXIS_CP0 108
+#define CLK_ACLK_ASYNCAXIM_CP0 109
+#define CLK_ACLK_ASYNCAXIS_DREX1_3 110
+#define CLK_ACLK_ASYNCAXIM_DREX1_3 111
+#define CLK_ACLK_ASYNCAXIS_DREX1_1 112
+#define CLK_ACLK_ASYNCAXIM_DREX1_1 113
+#define CLK_ACLK_ASYNCAXIS_DREX1_0 114
+#define CLK_ACLK_ASYNCAXIM_DREX1_0 115
+#define CLK_ACLK_ASYNCAXIS_DREX0_3 116
+#define CLK_ACLK_ASYNCAXIM_DREX0_3 117
+#define CLK_ACLK_ASYNCAXIS_DREX0_1 118
+#define CLK_ACLK_ASYNCAXIM_DREX0_1 119
+#define CLK_ACLK_ASYNCAXIS_DREX0_0 120
+#define CLK_ACLK_ASYNCAXIM_DREX0_0 121
+#define CLK_ACLK_AHB2APB_MIF2P 122
+#define CLK_ACLK_AHB2APB_MIF1P 123
+#define CLK_ACLK_AHB2APB_MIF0P 124
+#define CLK_ACLK_IXIU_CCI 125
+#define CLK_ACLK_XIU_MIFSFRX 126
+#define CLK_ACLK_MIFNP_133 127
+#define CLK_ACLK_MIFNM_200 128
+#define CLK_ACLK_MIFND_133 129
+#define CLK_ACLK_MIFND_400 130
+#define CLK_ACLK_CCI 131
+#define CLK_ACLK_MIFND_266 132
+#define CLK_ACLK_PPMU_DREX1S3 133
+#define CLK_ACLK_PPMU_DREX1S1 134
+#define CLK_ACLK_PPMU_DREX1S0 135
+#define CLK_ACLK_PPMU_DREX0S3 136
+#define CLK_ACLK_PPMU_DREX0S1 137
+#define CLK_ACLK_PPMU_DREX0S0 138
+#define CLK_ACLK_BTS_APOLLO 139
+#define CLK_ACLK_BTS_ATLAS 140
+#define CLK_ACLK_ACE_SEL_APOLL 141
+#define CLK_ACLK_ACE_SEL_ATLAS 142
+#define CLK_ACLK_AXIDS_CCI_MIFSFRX 143
+#define CLK_ACLK_AXIUS_ATLAS_CCI 144
+#define CLK_ACLK_AXISYNCDNS_CCI 145
+#define CLK_ACLK_AXISYNCDN_CCI 146
+#define CLK_ACLK_AXISYNCDN_NOC_D 147
+#define CLK_ACLK_ASYNCACEM_APOLLO_CCI 148
+#define CLK_ACLK_ASYNCACEM_ATLAS_CCI 149
+#define CLK_ACLK_ASYNCAPBS_MIF_CSSYS 150
+#define CLK_ACLK_BUS2_400 151
+#define CLK_ACLK_DISP_333 152
+#define CLK_ACLK_CPIF_200 153
+#define CLK_PCLK_PPMU_DREX1S3 154
+#define CLK_PCLK_PPMU_DREX1S1 155
+#define CLK_PCLK_PPMU_DREX1S0 156
+#define CLK_PCLK_PPMU_DREX0S3 157
+#define CLK_PCLK_PPMU_DREX0S1 158
+#define CLK_PCLK_PPMU_DREX0S0 159
+#define CLK_PCLK_BTS_APOLLO 160
+#define CLK_PCLK_BTS_ATLAS 161
+#define CLK_PCLK_ASYNCAXI_NOC_P_CCI 162
+#define CLK_PCLK_ASYNCAXI_CP1 163
+#define CLK_PCLK_ASYNCAXI_CP0 164
+#define CLK_PCLK_ASYNCAXI_DREX1_3 165
+#define CLK_PCLK_ASYNCAXI_DREX1_1 166
+#define CLK_PCLK_ASYNCAXI_DREX1_0 167
+#define CLK_PCLK_ASYNCAXI_DREX0_3 168
+#define CLK_PCLK_ASYNCAXI_DREX0_1 169
+#define CLK_PCLK_ASYNCAXI_DREX0_0 170
+#define CLK_PCLK_MIFSRVND_133 171
+#define CLK_PCLK_PMU_MIF 172
+#define CLK_PCLK_SYSREG_MIF 173
+#define CLK_PCLK_GPIO_ALIVE 174
+#define CLK_PCLK_ABB 175
+#define CLK_PCLK_PMU_APBIF 176
+#define CLK_PCLK_DDR_PHY1 177
+#define CLK_PCLK_DREX1 178
+#define CLK_PCLK_DDR_PHY0 179
+#define CLK_PCLK_DREX0 180
+#define CLK_PCLK_DREX0_TZ 181
+#define CLK_PCLK_DREX1_TZ 182
+#define CLK_PCLK_MONOTONIC_CNT 183
+#define CLK_PCLK_RTC 184
+#define CLK_SCLK_DSIM1_DISP 185
+#define CLK_SCLK_DECON_TV_VCLK_DISP 186
+#define CLK_SCLK_FREQ_DET_BUS_PLL 187
+#define CLK_SCLK_FREQ_DET_MFC_PLL 188
+#define CLK_SCLK_FREQ_DET_MEM0_PLL 189
+#define CLK_SCLK_FREQ_DET_MEM1_PLL 190
+#define CLK_SCLK_DSIM0_DISP 191
+#define CLK_SCLK_DSD_DISP 192
+#define CLK_SCLK_DECON_TV_ECLK_DISP 193
+#define CLK_SCLK_DECON_VCLK_DISP 194
+#define CLK_SCLK_DECON_ECLK_DISP 195
+#define CLK_SCLK_HPM_MIF 196
+#define CLK_SCLK_MFC_PLL 197
+#define CLK_SCLK_BUS_PLL 198
+#define CLK_SCLK_BUS_PLL_APOLLO 199
+#define CLK_SCLK_BUS_PLL_ATLAS 200
+
+#define MIF_NR_CLK 201
+
+/* CMU_PERIC */
+#define CLK_PCLK_SPI2 1
+#define CLK_PCLK_SPI1 2
+#define CLK_PCLK_SPI0 3
+#define CLK_PCLK_UART2 4
+#define CLK_PCLK_UART1 5
+#define CLK_PCLK_UART0 6
+#define CLK_PCLK_HSI2C3 7
+#define CLK_PCLK_HSI2C2 8
+#define CLK_PCLK_HSI2C1 9
+#define CLK_PCLK_HSI2C0 10
+#define CLK_PCLK_I2C7 11
+#define CLK_PCLK_I2C6 12
+#define CLK_PCLK_I2C5 13
+#define CLK_PCLK_I2C4 14
+#define CLK_PCLK_I2C3 15
+#define CLK_PCLK_I2C2 16
+#define CLK_PCLK_I2C1 17
+#define CLK_PCLK_I2C0 18
+#define CLK_PCLK_SPI4 19
+#define CLK_PCLK_SPI3 20
+#define CLK_PCLK_HSI2C11 21
+#define CLK_PCLK_HSI2C10 22
+#define CLK_PCLK_HSI2C9 23
+#define CLK_PCLK_HSI2C8 24
+#define CLK_PCLK_HSI2C7 25
+#define CLK_PCLK_HSI2C6 26
+#define CLK_PCLK_HSI2C5 27
+#define CLK_PCLK_HSI2C4 28
+#define CLK_SCLK_SPI4 29
+#define CLK_SCLK_SPI3 30
+#define CLK_SCLK_SPI2 31
+#define CLK_SCLK_SPI1 32
+#define CLK_SCLK_SPI0 33
+#define CLK_SCLK_UART2 34
+#define CLK_SCLK_UART1 35
+#define CLK_SCLK_UART0 36
+#define CLK_ACLK_AHB2APB_PERIC2P 37
+#define CLK_ACLK_AHB2APB_PERIC1P 38
+#define CLK_ACLK_AHB2APB_PERIC0P 39
+#define CLK_ACLK_PERICNP_66 40
+#define CLK_PCLK_SCI 41
+#define CLK_PCLK_GPIO_FINGER 42
+#define CLK_PCLK_GPIO_ESE 43
+#define CLK_PCLK_PWM 44
+#define CLK_PCLK_SPDIF 45
+#define CLK_PCLK_PCM1 46
+#define CLK_PCLK_I2S1 47
+#define CLK_PCLK_ADCIF 48
+#define CLK_PCLK_GPIO_TOUCH 49
+#define CLK_PCLK_GPIO_NFC 50
+#define CLK_PCLK_GPIO_PERIC 51
+#define CLK_PCLK_PMU_PERIC 52
+#define CLK_PCLK_SYSREG_PERIC 53
+#define CLK_SCLK_IOCLK_SPI4 54
+#define CLK_SCLK_IOCLK_SPI3 55
+#define CLK_SCLK_SCI 56
+#define CLK_SCLK_SC_IN 57
+#define CLK_SCLK_PWM 58
+#define CLK_SCLK_IOCLK_SPI2 59
+#define CLK_SCLK_IOCLK_SPI1 60
+#define CLK_SCLK_IOCLK_SPI0 61
+#define CLK_SCLK_IOCLK_I2S1_BCLK 62
+#define CLK_SCLK_SPDIF 63
+#define CLK_SCLK_PCM1 64
+#define CLK_SCLK_I2S1 65
+
+#define CLK_DIV_SCLK_SCI 70
+#define CLK_DIV_SCLK_SC_IN 71
+
+#define PERIC_NR_CLK 72
+
+/* CMU_PERIS */
+#define CLK_PCLK_HPM_APBIF 1
+#define CLK_PCLK_TMU1_APBIF 2
+#define CLK_PCLK_TMU0_APBIF 3
+#define CLK_PCLK_PMU_PERIS 4
+#define CLK_PCLK_SYSREG_PERIS 5
+#define CLK_PCLK_CMU_TOP_APBIF 6
+#define CLK_PCLK_WDT_APOLLO 7
+#define CLK_PCLK_WDT_ATLAS 8
+#define CLK_PCLK_MCT 9
+#define CLK_PCLK_HDMI_CEC 10
+#define CLK_ACLK_AHB2APB_PERIS1P 11
+#define CLK_ACLK_AHB2APB_PERIS0P 12
+#define CLK_ACLK_PERISNP_66 13
+#define CLK_PCLK_TZPC12 14
+#define CLK_PCLK_TZPC11 15
+#define CLK_PCLK_TZPC10 16
+#define CLK_PCLK_TZPC9 17
+#define CLK_PCLK_TZPC8 18
+#define CLK_PCLK_TZPC7 19
+#define CLK_PCLK_TZPC6 20
+#define CLK_PCLK_TZPC5 21
+#define CLK_PCLK_TZPC4 22
+#define CLK_PCLK_TZPC3 23
+#define CLK_PCLK_TZPC2 24
+#define CLK_PCLK_TZPC1 25
+#define CLK_PCLK_TZPC0 26
+#define CLK_PCLK_SECKEY_APBIF 27
+#define CLK_PCLK_CHIPID_APBIF 28
+#define CLK_PCLK_TOPRTC 29
+#define CLK_PCLK_CUSTOM_EFUSE_APBIF 30
+#define CLK_PCLK_ANTIRBK_CNT_APBIF 31
+#define CLK_PCLK_OTP_CON_APBIF 32
+#define CLK_SCLK_ASV_TB 33
+#define CLK_SCLK_TMU1 34
+#define CLK_SCLK_TMU0 35
+#define CLK_SCLK_SECKEY 36
+#define CLK_SCLK_CHIPID 37
+#define CLK_SCLK_TOPRTC 38
+#define CLK_SCLK_CUSTOM_EFUSE 39
+#define CLK_SCLK_ANTIRBK_CNT 40
+#define CLK_SCLK_OTP_CON 41
+
+#define PERIS_NR_CLK 42
+
+/* CMU_FSYS */
+#define CLK_MOUT_ACLK_FSYS_200_USER 1
+#define CLK_MOUT_SCLK_MMC2_USER 2
+#define CLK_MOUT_SCLK_MMC1_USER 3
+#define CLK_MOUT_SCLK_MMC0_USER 4
+#define CLK_MOUT_SCLK_UFS_MPHY_USER 5
+#define CLK_MOUT_SCLK_PCIE_100_USER 6
+#define CLK_MOUT_SCLK_UFSUNIPRO_USER 7
+#define CLK_MOUT_SCLK_USBHOST30_USER 8
+#define CLK_MOUT_SCLK_USBDRD30_USER 9
+#define CLK_MOUT_PHYCLK_USBHOST30_UHOST30_PIPE_PCLK_USER 10
+#define CLK_MOUT_PHYCLK_USBHOST30_UHOST30_PHYCLOCK_USER 11
+#define CLK_MOUT_PHYCLK_USBHOST20_PHY_HSIC1_USER 12
+#define CLK_MOUT_PHYCLK_USBHOST20_PHY_CLK48MOHCI_USER 13
+#define CLK_MOUT_PHYCLK_USBHOST20_PHY_PHYCLOCK_USER 14
+#define CLK_MOUT_PHYCLK_USBHOST20_PHY_PHY_FREECLK_USER 15
+#define CLK_MOUT_PHYCLK_USBDRD30_UDRD30_PIPE_PCLK_USER 16
+#define CLK_MOUT_PHYCLK_USBDRD30_UDRD30_PHYCLOCK_USER 17
+#define CLK_MOUT_PHYCLK_UFS_RX1_SYMBOL_USER 18
+#define CLK_MOUT_PHYCLK_UFS_RX0_SYMBOL_USER 19
+#define CLK_MOUT_PHYCLK_UFS_TX1_SYMBOL_USER 20
+#define CLK_MOUT_PHYCLK_UFS_TX0_SYMBOL_USER 21
+#define CLK_MOUT_PHYCLK_LLI_MPHY_TO_UFS_USER 22
+#define CLK_MOUT_SCLK_MPHY 23
+
+#define CLK_PHYCLK_USBDRD30_UDRD30_PHYCLOCK_PHY 25
+#define CLK_PHYCLK_USBDRD30_UDRD30_PIPE_PCLK_PHY 26
+#define CLK_PHYCLK_USBHOST30_UHOST30_PHYCLOCK_PHY 27
+#define CLK_PHYCLK_USBHOST30_UHOST30_PIPE_PCLK_PHY 28
+#define CLK_PHYCLK_USBHOST20_PHY_FREECLK_PHY 29
+#define CLK_PHYCLK_USBHOST20_PHY_PHYCLOCK_PHY 30
+#define CLK_PHYCLK_USBHOST20_PHY_CLK48MOHCI_PHY 31
+#define CLK_PHYCLK_USBHOST20_PHY_HSIC1_PHY 32
+#define CLK_PHYCLK_UFS_TX0_SYMBOL_PHY 33
+#define CLK_PHYCLK_UFS_RX0_SYMBOL_PHY 34
+#define CLK_PHYCLK_UFS_TX1_SYMBOL_PHY 35
+#define CLK_PHYCLK_UFS_RX1_SYMBOL_PHY 36
+#define CLK_PHYCLK_LLI_MPHY_TO_UFS_PHY 37
+
+#define CLK_ACLK_PCIE 50
+#define CLK_ACLK_PDMA1 51
+#define CLK_ACLK_TSI 52
+#define CLK_ACLK_MMC2 53
+#define CLK_ACLK_MMC1 54
+#define CLK_ACLK_MMC0 55
+#define CLK_ACLK_UFS 56
+#define CLK_ACLK_USBHOST20 57
+#define CLK_ACLK_USBHOST30 58
+#define CLK_ACLK_USBDRD30 59
+#define CLK_ACLK_PDMA0 60
+#define CLK_SCLK_MMC2 61
+#define CLK_SCLK_MMC1 62
+#define CLK_SCLK_MMC0 63
+#define CLK_PDMA1 64
+#define CLK_PDMA0 65
+#define CLK_ACLK_XIU_FSYSPX 66
+#define CLK_ACLK_AHB_USBLINKH1 67
+#define CLK_ACLK_SMMU_PDMA1 68
+#define CLK_ACLK_BTS_PCIE 69
+#define CLK_ACLK_AXIUS_PDMA1 70
+#define CLK_ACLK_SMMU_PDMA0 71
+#define CLK_ACLK_BTS_UFS 72
+#define CLK_ACLK_BTS_USBHOST30 73
+#define CLK_ACLK_BTS_USBDRD30 74
+#define CLK_ACLK_AXIUS_PDMA0 75
+#define CLK_ACLK_AXIUS_USBHS 76
+#define CLK_ACLK_AXIUS_FSYSSX 77
+#define CLK_ACLK_AHB2APB_FSYSP 78
+#define CLK_ACLK_AHB2AXI_USBHS 79
+#define CLK_ACLK_AHB_USBLINKH0 80
+#define CLK_ACLK_AHB_USBHS 81
+#define CLK_ACLK_AHB_FSYSH 82
+#define CLK_ACLK_XIU_FSYSX 83
+#define CLK_ACLK_XIU_FSYSSX 84
+#define CLK_ACLK_FSYSNP_200 85
+#define CLK_ACLK_FSYSND_200 86
+#define CLK_PCLK_PCIE_CTRL 87
+#define CLK_PCLK_SMMU_PDMA1 88
+#define CLK_PCLK_PCIE_PHY 89
+#define CLK_PCLK_BTS_PCIE 90
+#define CLK_PCLK_SMMU_PDMA0 91
+#define CLK_PCLK_BTS_UFS 92
+#define CLK_PCLK_BTS_USBHOST30 93
+#define CLK_PCLK_BTS_USBDRD30 94
+#define CLK_PCLK_GPIO_FSYS 95
+#define CLK_PCLK_PMU_FSYS 96
+#define CLK_PCLK_SYSREG_FSYS 97
+#define CLK_SCLK_PCIE_100 98
+#define CLK_PHYCLK_USBHOST30_UHOST30_PIPE_PCLK 99
+#define CLK_PHYCLK_USBHOST30_UHOST30_PHYCLOCK 100
+#define CLK_PHYCLK_UFS_RX1_SYMBOL 101
+#define CLK_PHYCLK_UFS_RX0_SYMBOL 102
+#define CLK_PHYCLK_UFS_TX1_SYMBOL 103
+#define CLK_PHYCLK_UFS_TX0_SYMBOL 104
+#define CLK_PHYCLK_USBHOST20_PHY_HSIC1 105
+#define CLK_PHYCLK_USBHOST20_PHY_CLK48MOHCI 106
+#define CLK_PHYCLK_USBHOST20_PHY_PHYCLOCK 107
+#define CLK_PHYCLK_USBHOST20_PHY_FREECLK 108
+#define CLK_PHYCLK_USBDRD30_UDRD30_PIPE_PCLK 109
+#define CLK_PHYCLK_USBDRD30_UDRD30_PHYCLOCK 110
+#define CLK_SCLK_MPHY 111
+#define CLK_SCLK_UFSUNIPRO 112
+#define CLK_SCLK_USBHOST30 113
+#define CLK_SCLK_USBDRD30 114
+
+#define FSYS_NR_CLK 115
+
+/* CMU_G2D */
+#define CLK_MUX_ACLK_G2D_266_USER 1
+#define CLK_MUX_ACLK_G2D_400_USER 2
+
+#define CLK_DIV_PCLK_G2D 3
+
+#define CLK_ACLK_SMMU_MDMA1 4
+#define CLK_ACLK_BTS_MDMA1 5
+#define CLK_ACLK_BTS_G2D 6
+#define CLK_ACLK_ALB_G2D 7
+#define CLK_ACLK_AXIUS_G2DX 8
+#define CLK_ACLK_ASYNCAXI_SYSX 9
+#define CLK_ACLK_AHB2APB_G2D1P 10
+#define CLK_ACLK_AHB2APB_G2D0P 11
+#define CLK_ACLK_XIU_G2DX 12
+#define CLK_ACLK_G2DNP_133 13
+#define CLK_ACLK_G2DND_400 14
+#define CLK_ACLK_MDMA1 15
+#define CLK_ACLK_G2D 16
+#define CLK_ACLK_SMMU_G2D 17
+#define CLK_PCLK_SMMU_MDMA1 18
+#define CLK_PCLK_BTS_MDMA1 19
+#define CLK_PCLK_BTS_G2D 20
+#define CLK_PCLK_ALB_G2D 21
+#define CLK_PCLK_ASYNCAXI_SYSX 22
+#define CLK_PCLK_PMU_G2D 23
+#define CLK_PCLK_SYSREG_G2D 24
+#define CLK_PCLK_G2D 25
+#define CLK_PCLK_SMMU_G2D 26
+
+#define G2D_NR_CLK 27
+
+/* CMU_DISP */
+#define CLK_FOUT_DISP_PLL 1
+
+#define CLK_MOUT_DISP_PLL 2
+#define CLK_MOUT_SCLK_DSIM1_USER 3
+#define CLK_MOUT_SCLK_DSIM0_USER 4
+#define CLK_MOUT_SCLK_DSD_USER 5
+#define CLK_MOUT_SCLK_DECON_TV_ECLK_USER 6
+#define CLK_MOUT_SCLK_DECON_VCLK_USER 7
+#define CLK_MOUT_SCLK_DECON_ECLK_USER 8
+#define CLK_MOUT_SCLK_DECON_TV_VCLK_USER 9
+#define CLK_MOUT_ACLK_DISP_333_USER 10
+#define CLK_MOUT_PHYCLK_MIPIDPHY1_BITCLKDIV8_USER 11
+#define CLK_MOUT_PHYCLK_MIPIDPHY1_RXCLKESC0_USER 12
+#define CLK_MOUT_PHYCLK_MIPIDPHY0_BITCLKDIV8_USER 13
+#define CLK_MOUT_PHYCLK_MIPIDPHY0_RXCLKESC0_USER 14
+#define CLK_MOUT_PHYCLK_HDMIPHY_TMDS_CLKO_USER 15
+#define CLK_MOUT_PHYCLK_HDMIPHY_PIXEL_CLKO_USER 16
+#define CLK_MOUT_SCLK_DSIM0 17
+#define CLK_MOUT_SCLK_DECON_TV_ECLK 18
+#define CLK_MOUT_SCLK_DECON_VCLK 19
+#define CLK_MOUT_SCLK_DECON_ECLK 20
+#define CLK_MOUT_SCLK_DSIM1_B_DISP 21
+#define CLK_MOUT_SCLK_DSIM1_A_DISP 22
+#define CLK_MOUT_SCLK_DECON_TV_VCLK_C_DISP 23
+#define CLK_MOUT_SCLK_DECON_TV_VCLK_B_DISP 24
+#define CLK_MOUT_SCLK_DECON_TV_VCLK_A_DISP 25
+
+#define CLK_DIV_SCLK_DSIM1_DISP 30
+#define CLK_DIV_SCLK_DECON_TV_VCLK_DISP 31
+#define CLK_DIV_SCLK_DSIM0_DISP 32
+#define CLK_DIV_SCLK_DECON_TV_ECLK_DISP 33
+#define CLK_DIV_SCLK_DECON_VCLK_DISP 34
+#define CLK_DIV_SCLK_DECON_ECLK_DISP 35
+#define CLK_DIV_PCLK_DISP 36
+
+#define CLK_ACLK_DECON_TV 40
+#define CLK_ACLK_DECON 41
+#define CLK_ACLK_SMMU_TV1X 42
+#define CLK_ACLK_SMMU_TV0X 43
+#define CLK_ACLK_SMMU_DECON1X 44
+#define CLK_ACLK_SMMU_DECON0X 45
+#define CLK_ACLK_BTS_DECON_TV_M3 46
+#define CLK_ACLK_BTS_DECON_TV_M2 47
+#define CLK_ACLK_BTS_DECON_TV_M1 48
+#define CLK_ACLK_BTS_DECON_TV_M0 49
+#define CLK_ACLK_BTS_DECON_NM4 50
+#define CLK_ACLK_BTS_DECON_NM3 51
+#define CLK_ACLK_BTS_DECON_NM2 52
+#define CLK_ACLK_BTS_DECON_NM1 53
+#define CLK_ACLK_BTS_DECON_NM0 54
+#define CLK_ACLK_AHB2APB_DISPSFR2P 55
+#define CLK_ACLK_AHB2APB_DISPSFR1P 56
+#define CLK_ACLK_AHB2APB_DISPSFR0P 57
+#define CLK_ACLK_AHB_DISPH 58
+#define CLK_ACLK_XIU_TV1X 59
+#define CLK_ACLK_XIU_TV0X 60
+#define CLK_ACLK_XIU_DECON1X 61
+#define CLK_ACLK_XIU_DECON0X 62
+#define CLK_ACLK_XIU_DISP1X 63
+#define CLK_ACLK_XIU_DISPNP_100 64
+#define CLK_ACLK_DISP1ND_333 65
+#define CLK_ACLK_DISP0ND_333 66
+#define CLK_PCLK_SMMU_TV1X 67
+#define CLK_PCLK_SMMU_TV0X 68
+#define CLK_PCLK_SMMU_DECON1X 69
+#define CLK_PCLK_SMMU_DECON0X 70
+#define CLK_PCLK_BTS_DECON_TV_M3 71
+#define CLK_PCLK_BTS_DECON_TV_M2 72
+#define CLK_PCLK_BTS_DECON_TV_M1 73
+#define CLK_PCLK_BTS_DECON_TV_M0 74
+#define CLK_PCLK_BTS_DECONM4 75
+#define CLK_PCLK_BTS_DECONM3 76
+#define CLK_PCLK_BTS_DECONM2 77
+#define CLK_PCLK_BTS_DECONM1 78
+#define CLK_PCLK_BTS_DECONM0 79
+#define CLK_PCLK_MIC1 80
+#define CLK_PCLK_PMU_DISP 81
+#define CLK_PCLK_SYSREG_DISP 82
+#define CLK_PCLK_HDMIPHY 83
+#define CLK_PCLK_HDMI 84
+#define CLK_PCLK_MIC0 85
+#define CLK_PCLK_DSIM1 86
+#define CLK_PCLK_DSIM0 87
+#define CLK_PCLK_DECON_TV 88
+#define CLK_PHYCLK_MIPIDPHY1_BITCLKDIV8 89
+#define CLK_PHYCLK_MIPIDPHY1_RXCLKESC0 90
+#define CLK_SCLK_RGB_TV_VCLK_TO_DSIM1 91
+#define CLK_SCLK_RGB_TV_VCLK_TO_MIC1 92
+#define CLK_SCLK_DSIM1 93
+#define CLK_SCLK_DECON_TV_VCLK 94
+#define CLK_PHYCLK_MIPIDPHY0_BITCLKDIV8 95
+#define CLK_PHYCLK_MIPIDPHY0_RXCLKESC0 96
+#define CLK_PHYCLK_HDMIPHY_TMDS_CLKO 97
+#define CLK_PHYCLK_HDMI_PIXEL 98
+#define CLK_SCLK_RGB_VCLK_TO_SMIES 99
+#define CLK_SCLK_FREQ_DET_DISP_PLL 100
+#define CLK_SCLK_RGB_VCLK_TO_DSIM0 101
+#define CLK_SCLK_RGB_VCLK_TO_MIC0 102
+#define CLK_SCLK_DSD 103
+#define CLK_SCLK_HDMI_SPDIF 104
+#define CLK_SCLK_DSIM0 105
+#define CLK_SCLK_DECON_TV_ECLK 106
+#define CLK_SCLK_DECON_VCLK 107
+#define CLK_SCLK_DECON_ECLK 108
+#define CLK_SCLK_RGB_VCLK 109
+#define CLK_SCLK_RGB_TV_VCLK 110
+
+#define DISP_NR_CLK 111
+
+/* CMU_AUD */
+#define CLK_MOUT_AUD_PLL_USER 1
+#define CLK_MOUT_SCLK_AUD_PCM 2
+#define CLK_MOUT_SCLK_AUD_I2S 3
+
+#define CLK_DIV_ATCLK_AUD 4
+#define CLK_DIV_PCLK_DBG_AUD 5
+#define CLK_DIV_ACLK_AUD 6
+#define CLK_DIV_AUD_CA5 7
+#define CLK_DIV_SCLK_AUD_SLIMBUS 8
+#define CLK_DIV_SCLK_AUD_UART 9
+#define CLK_DIV_SCLK_AUD_PCM 10
+#define CLK_DIV_SCLK_AUD_I2S 11
+
+#define CLK_ACLK_INTR_CTRL 12
+#define CLK_ACLK_AXIDS2_LPASSP 13
+#define CLK_ACLK_AXIDS1_LPASSP 14
+#define CLK_ACLK_AXI2APB1_LPASSP 15
+#define CLK_ACLK_AXI2APH_LPASSP 16
+#define CLK_ACLK_SMMU_LPASSX 17
+#define CLK_ACLK_AXIDS0_LPASSP 18
+#define CLK_ACLK_AXI2APB0_LPASSP 19
+#define CLK_ACLK_XIU_LPASSX 20
+#define CLK_ACLK_AUDNP_133 21
+#define CLK_ACLK_AUDND_133 22
+#define CLK_ACLK_SRAMC 23
+#define CLK_ACLK_DMAC 24
+#define CLK_PCLK_WDT1 25
+#define CLK_PCLK_WDT0 26
+#define CLK_PCLK_SFR1 27
+#define CLK_PCLK_SMMU_LPASSX 28
+#define CLK_PCLK_GPIO_AUD 29
+#define CLK_PCLK_PMU_AUD 30
+#define CLK_PCLK_SYSREG_AUD 31
+#define CLK_PCLK_AUD_SLIMBUS 32
+#define CLK_PCLK_AUD_UART 33
+#define CLK_PCLK_AUD_PCM 34
+#define CLK_PCLK_AUD_I2S 35
+#define CLK_PCLK_TIMER 36
+#define CLK_PCLK_SFR0_CTRL 37
+#define CLK_ATCLK_AUD 38
+#define CLK_PCLK_DBG_AUD 39
+#define CLK_SCLK_AUD_CA5 40
+#define CLK_SCLK_JTAG_TCK 41
+#define CLK_SCLK_SLIMBUS_CLKIN 42
+#define CLK_SCLK_AUD_SLIMBUS 43
+#define CLK_SCLK_AUD_UART 44
+#define CLK_SCLK_AUD_PCM 45
+#define CLK_SCLK_I2S_BCLK 46
+#define CLK_SCLK_AUD_I2S 47
+
+#define AUD_NR_CLK 48
+
+/* CMU_BUS{0|1|2} */
+#define CLK_DIV_PCLK_BUS_133 1
+
+#define CLK_ACLK_AHB2APB_BUSP 2
+#define CLK_ACLK_BUSNP_133 3
+#define CLK_ACLK_BUSND_400 4
+#define CLK_PCLK_BUSSRVND_133 5
+#define CLK_PCLK_PMU_BUS 6
+#define CLK_PCLK_SYSREG_BUS 7
+
+#define CLK_MOUT_ACLK_BUS2_400_USER 8 /* Only CMU_BUS2 */
+#define CLK_ACLK_BUS2BEND_400 9 /* Only CMU_BUS2 */
+#define CLK_ACLK_BUS2RTND_400 10 /* Only CMU_BUS2 */
+
+#define BUSx_NR_CLK 11
+
+/* CMU_G3D */
+#define CLK_FOUT_G3D_PLL 1
+
+#define CLK_MOUT_ACLK_G3D_400 2
+#define CLK_MOUT_G3D_PLL 3
+
+#define CLK_DIV_SCLK_HPM_G3D 4
+#define CLK_DIV_PCLK_G3D 5
+#define CLK_DIV_ACLK_G3D 6
+#define CLK_ACLK_BTS_G3D1 7
+#define CLK_ACLK_BTS_G3D0 8
+#define CLK_ACLK_ASYNCAPBS_G3D 9
+#define CLK_ACLK_ASYNCAPBM_G3D 10
+#define CLK_ACLK_AHB2APB_G3DP 11
+#define CLK_ACLK_G3DNP_150 12
+#define CLK_ACLK_G3DND_600 13
+#define CLK_ACLK_G3D 14
+#define CLK_PCLK_BTS_G3D1 15
+#define CLK_PCLK_BTS_G3D0 16
+#define CLK_PCLK_PMU_G3D 17
+#define CLK_PCLK_SYSREG_G3D 18
+#define CLK_SCLK_HPM_G3D 19
+
+#define G3D_NR_CLK 20
+
+/* CMU_GSCL */
+#define CLK_MOUT_ACLK_GSCL_111_USER 1
+#define CLK_MOUT_ACLK_GSCL_333_USER 2
+
+#define CLK_ACLK_BTS_GSCL2 3
+#define CLK_ACLK_BTS_GSCL1 4
+#define CLK_ACLK_BTS_GSCL0 5
+#define CLK_ACLK_AHB2APB_GSCLP 6
+#define CLK_ACLK_XIU_GSCLX 7
+#define CLK_ACLK_GSCLNP_111 8
+#define CLK_ACLK_GSCLRTND_333 9
+#define CLK_ACLK_GSCLBEND_333 10
+#define CLK_ACLK_GSD 11
+#define CLK_ACLK_GSCL2 12
+#define CLK_ACLK_GSCL1 13
+#define CLK_ACLK_GSCL0 14
+#define CLK_ACLK_SMMU_GSCL0 15
+#define CLK_ACLK_SMMU_GSCL1 16
+#define CLK_ACLK_SMMU_GSCL2 17
+#define CLK_PCLK_BTS_GSCL2 18
+#define CLK_PCLK_BTS_GSCL1 19
+#define CLK_PCLK_BTS_GSCL0 20
+#define CLK_PCLK_PMU_GSCL 21
+#define CLK_PCLK_SYSREG_GSCL 22
+#define CLK_PCLK_GSCL2 23
+#define CLK_PCLK_GSCL1 24
+#define CLK_PCLK_GSCL0 25
+#define CLK_PCLK_SMMU_GSCL0 26
+#define CLK_PCLK_SMMU_GSCL1 27
+#define CLK_PCLK_SMMU_GSCL2 28
+
+#define GSCL_NR_CLK 29
+
+/* CMU_APOLLO */
+#define CLK_FOUT_APOLLO_PLL 1
+
+#define CLK_MOUT_APOLLO_PLL 2
+#define CLK_MOUT_BUS_PLL_APOLLO_USER 3
+#define CLK_MOUT_APOLLO 4
+
+#define CLK_DIV_CNTCLK_APOLLO 5
+#define CLK_DIV_PCLK_DBG_APOLLO 6
+#define CLK_DIV_ATCLK_APOLLO 7
+#define CLK_DIV_PCLK_APOLLO 8
+#define CLK_DIV_ACLK_APOLLO 9
+#define CLK_DIV_APOLLO2 10
+#define CLK_DIV_APOLLO1 11
+#define CLK_DIV_SCLK_HPM_APOLLO 12
+#define CLK_DIV_APOLLO_PLL 13
+
+#define CLK_ACLK_ATBDS_APOLLO_3 14
+#define CLK_ACLK_ATBDS_APOLLO_2 15
+#define CLK_ACLK_ATBDS_APOLLO_1 16
+#define CLK_ACLK_ATBDS_APOLLO_0 17
+#define CLK_ACLK_ASATBSLV_APOLLO_3_CSSYS 18
+#define CLK_ACLK_ASATBSLV_APOLLO_2_CSSYS 19
+#define CLK_ACLK_ASATBSLV_APOLLO_1_CSSYS 20
+#define CLK_ACLK_ASATBSLV_APOLLO_0_CSSYS 21
+#define CLK_ACLK_ASYNCACES_APOLLO_CCI 22
+#define CLK_ACLK_AHB2APB_APOLLOP 23
+#define CLK_ACLK_APOLLONP_200 24
+#define CLK_PCLK_ASAPBMST_CSSYS_APOLLO 25
+#define CLK_PCLK_PMU_APOLLO 26
+#define CLK_PCLK_SYSREG_APOLLO 27
+#define CLK_CNTCLK_APOLLO 28
+#define CLK_SCLK_HPM_APOLLO 29
+#define CLK_SCLK_APOLLO 30
+
+#define APOLLO_NR_CLK 31
+
+/* CMU_ATLAS */
+#define CLK_FOUT_ATLAS_PLL 1
+
+#define CLK_MOUT_ATLAS_PLL 2
+#define CLK_MOUT_BUS_PLL_ATLAS_USER 3
+#define CLK_MOUT_ATLAS 4
+
+#define CLK_DIV_CNTCLK_ATLAS 5
+#define CLK_DIV_PCLK_DBG_ATLAS 6
+#define CLK_DIV_ATCLK_ATLASO 7
+#define CLK_DIV_PCLK_ATLAS 8
+#define CLK_DIV_ACLK_ATLAS 9
+#define CLK_DIV_ATLAS2 10
+#define CLK_DIV_ATLAS1 11
+#define CLK_DIV_SCLK_HPM_ATLAS 12
+#define CLK_DIV_ATLAS_PLL 13
+
+#define CLK_ACLK_ATB_AUD_CSSYS 14
+#define CLK_ACLK_ATB_APOLLO3_CSSYS 15
+#define CLK_ACLK_ATB_APOLLO2_CSSYS 16
+#define CLK_ACLK_ATB_APOLLO1_CSSYS 17
+#define CLK_ACLK_ATB_APOLLO0_CSSYS 18
+#define CLK_ACLK_ASYNCAHBS_CSSYS_SSS 19
+#define CLK_ACLK_ASYNCAXIS_CSSYS_CCIX 20
+#define CLK_ACLK_ASYNCACES_ATLAS_CCI 21
+#define CLK_ACLK_AHB2APB_ATLASP 22
+#define CLK_ACLK_ATLASNP_200 23
+#define CLK_PCLK_ASYNCAPB_AUD_CSSYS 24
+#define CLK_PCLK_ASYNCAPB_ISP_CSSYS 25
+#define CLK_PCLK_ASYNCAPB_APOLLO_CSSYS 26
+#define CLK_PCLK_PMU_ATLAS 27
+#define CLK_PCLK_SYSREG_ATLAS 28
+#define CLK_PCLK_SECJTAG 29
+#define CLK_CNTCLK_ATLAS 30
+#define CLK_SCLK_FREQ_DET_ATLAS_PLL 31
+#define CLK_SCLK_HPM_ATLAS 32
+#define CLK_TRACECLK 33
+#define CLK_CTMCLK 34
+#define CLK_HCLK_CSSYS 35
+#define CLK_PCLK_DBG_CSSYS 36
+#define CLK_PCLK_DBG 37
+#define CLK_ATCLK 38
+#define CLK_SCLK_ATLAS 39
+
+#define ATLAS_NR_CLK 40
+
+/* CMU_MSCL */
+#define CLK_MOUT_SCLK_JPEG_USER 1
+#define CLK_MOUT_ACLK_MSCL_400_USER 2
+#define CLK_MOUT_SCLK_JPEG 3
+
+#define CLK_DIV_PCLK_MSCL 4
+
+#define CLK_ACLK_BTS_JPEG 5
+#define CLK_ACLK_BTS_M2MSCALER1 6
+#define CLK_ACLK_BTS_M2MSCALER0 7
+#define CLK_ACLK_AHB2APB_MSCL0P 8
+#define CLK_ACLK_XIU_MSCLX 9
+#define CLK_ACLK_MSCLNP_100 10
+#define CLK_ACLK_MSCLND_400 11
+#define CLK_ACLK_JPEG 12
+#define CLK_ACLK_M2MSCALER1 13
+#define CLK_ACLK_M2MSCALER0 14
+#define CLK_ACLK_SMMU_M2MSCALER0 15
+#define CLK_ACLK_SMMU_M2MSCALER1 16
+#define CLK_ACLK_SMMU_JPEG 17
+#define CLK_PCLK_BTS_JPEG 18
+#define CLK_PCLK_BTS_M2MSCALER1 19
+#define CLK_PCLK_BTS_M2MSCALER0 20
+#define CLK_PCLK_PMU_MSCL 21
+#define CLK_PCLK_SYSREG_MSCL 22
+#define CLK_PCLK_JPEG 23
+#define CLK_PCLK_M2MSCALER1 24
+#define CLK_PCLK_M2MSCALER0 25
+#define CLK_PCLK_SMMU_M2MSCALER0 26
+#define CLK_PCLK_SMMU_M2MSCALER1 27
+#define CLK_PCLK_SMMU_JPEG 28
+#define CLK_SCLK_JPEG 29
+
+#define MSCL_NR_CLK 30
+
+/* CMU_MFC */
+#define CLK_MOUT_ACLK_MFC_400_USER 1
+
+#define CLK_DIV_PCLK_MFC 2
+
+#define CLK_ACLK_BTS_MFC_1 3
+#define CLK_ACLK_BTS_MFC_0 4
+#define CLK_ACLK_AHB2APB_MFCP 5
+#define CLK_ACLK_XIU_MFCX 6
+#define CLK_ACLK_MFCNP_100 7
+#define CLK_ACLK_MFCND_400 8
+#define CLK_ACLK_MFC 9
+#define CLK_ACLK_SMMU_MFC_1 10
+#define CLK_ACLK_SMMU_MFC_0 11
+#define CLK_PCLK_BTS_MFC_1 12
+#define CLK_PCLK_BTS_MFC_0 13
+#define CLK_PCLK_PMU_MFC 14
+#define CLK_PCLK_SYSREG_MFC 15
+#define CLK_PCLK_MFC 16
+#define CLK_PCLK_SMMU_MFC_1 17
+#define CLK_PCLK_SMMU_MFC_0 18
+
+#define MFC_NR_CLK 19
+
+/* CMU_HEVC */
+#define CLK_MOUT_ACLK_HEVC_400_USER 1
+
+#define CLK_DIV_PCLK_HEVC 2
+
+#define CLK_ACLK_BTS_HEVC_1 3
+#define CLK_ACLK_BTS_HEVC_0 4
+#define CLK_ACLK_AHB2APB_HEVCP 5
+#define CLK_ACLK_XIU_HEVCX 6
+#define CLK_ACLK_HEVCNP_100 7
+#define CLK_ACLK_HEVCND_400 8
+#define CLK_ACLK_HEVC 9
+#define CLK_ACLK_SMMU_HEVC_1 10
+#define CLK_ACLK_SMMU_HEVC_0 11
+#define CLK_PCLK_BTS_HEVC_1 12
+#define CLK_PCLK_BTS_HEVC_0 13
+#define CLK_PCLK_PMU_HEVC 14
+#define CLK_PCLK_SYSREG_HEVC 15
+#define CLK_PCLK_HEVC 16
+#define CLK_PCLK_SMMU_HEVC_1 17
+#define CLK_PCLK_SMMU_HEVC_0 18
+
+#define HEVC_NR_CLK 19
+
+/* CMU_ISP */
+#define CLK_MOUT_ACLK_ISP_DIS_400_USER 1
+#define CLK_MOUT_ACLK_ISP_400_USER 2
+
+#define CLK_DIV_PCLK_ISP_DIS 3
+#define CLK_DIV_PCLK_ISP 4
+#define CLK_DIV_ACLK_ISP_D_200 5
+#define CLK_DIV_ACLK_ISP_C_200 6
+
+#define CLK_ACLK_ISP_D_GLUE 7
+#define CLK_ACLK_SCALERP 8
+#define CLK_ACLK_3DNR 9
+#define CLK_ACLK_DIS 10
+#define CLK_ACLK_SCALERC 11
+#define CLK_ACLK_DRC 12
+#define CLK_ACLK_ISP 13
+#define CLK_ACLK_AXIUS_SCALERP 14
+#define CLK_ACLK_AXIUS_SCALERC 15
+#define CLK_ACLK_AXIUS_DRC 16
+#define CLK_ACLK_ASYNCAHBM_ISP2P 17
+#define CLK_ACLK_ASYNCAHBM_ISP1P 18
+#define CLK_ACLK_ASYNCAXIS_DIS1 19
+#define CLK_ACLK_ASYNCAXIS_DIS0 20
+#define CLK_ACLK_ASYNCAXIM_DIS1 21
+#define CLK_ACLK_ASYNCAXIM_DIS0 22
+#define CLK_ACLK_ASYNCAXIM_ISP2P 23
+#define CLK_ACLK_ASYNCAXIM_ISP1P 24
+#define CLK_ACLK_AHB2APB_ISP2P 25
+#define CLK_ACLK_AHB2APB_ISP1P 26
+#define CLK_ACLK_AXI2APB_ISP2P 27
+#define CLK_ACLK_AXI2APB_ISP1P 28
+#define CLK_ACLK_XIU_ISPEX1 29
+#define CLK_ACLK_XIU_ISPEX0 30
+#define CLK_ACLK_ISPND_400 31
+#define CLK_ACLK_SMMU_SCALERP 32
+#define CLK_ACLK_SMMU_3DNR 33
+#define CLK_ACLK_SMMU_DIS1 34
+#define CLK_ACLK_SMMU_DIS0 35
+#define CLK_ACLK_SMMU_SCALERC 36
+#define CLK_ACLK_SMMU_DRC 37
+#define CLK_ACLK_SMMU_ISP 38
+#define CLK_ACLK_BTS_SCALERP 39
+#define CLK_ACLK_BTS_3DR 40
+#define CLK_ACLK_BTS_DIS1 41
+#define CLK_ACLK_BTS_DIS0 42
+#define CLK_ACLK_BTS_SCALERC 43
+#define CLK_ACLK_BTS_DRC 44
+#define CLK_ACLK_BTS_ISP 45
+#define CLK_PCLK_SMMU_SCALERP 46
+#define CLK_PCLK_SMMU_3DNR 47
+#define CLK_PCLK_SMMU_DIS1 48
+#define CLK_PCLK_SMMU_DIS0 49
+#define CLK_PCLK_SMMU_SCALERC 50
+#define CLK_PCLK_SMMU_DRC 51
+#define CLK_PCLK_SMMU_ISP 52
+#define CLK_PCLK_BTS_SCALERP 53
+#define CLK_PCLK_BTS_3DNR 54
+#define CLK_PCLK_BTS_DIS1 55
+#define CLK_PCLK_BTS_DIS0 56
+#define CLK_PCLK_BTS_SCALERC 57
+#define CLK_PCLK_BTS_DRC 58
+#define CLK_PCLK_BTS_ISP 59
+#define CLK_PCLK_ASYNCAXI_DIS1 60
+#define CLK_PCLK_ASYNCAXI_DIS0 61
+#define CLK_PCLK_PMU_ISP 62
+#define CLK_PCLK_SYSREG_ISP 63
+#define CLK_PCLK_CMU_ISP_LOCAL 64
+#define CLK_PCLK_SCALERP 65
+#define CLK_PCLK_3DNR 66
+#define CLK_PCLK_DIS_CORE 67
+#define CLK_PCLK_DIS 68
+#define CLK_PCLK_SCALERC 69
+#define CLK_PCLK_DRC 70
+#define CLK_PCLK_ISP 71
+#define CLK_SCLK_PIXELASYNCS_DIS 72
+#define CLK_SCLK_PIXELASYNCM_DIS 73
+#define CLK_SCLK_PIXELASYNCS_SCALERP 74
+#define CLK_SCLK_PIXELASYNCM_ISPD 75
+#define CLK_SCLK_PIXELASYNCS_ISPC 76
+#define CLK_SCLK_PIXELASYNCM_ISPC 77
+
+#define ISP_NR_CLK 78
+
+/* CMU_CAM0 */
+#define CLK_PHYCLK_RXBYTEECLKHS0_S4_PHY 1
+#define CLK_PHYCLK_RXBYTEECLKHS0_S2A_PHY 2
+
+#define CLK_MOUT_ACLK_CAM0_333_USER 3
+#define CLK_MOUT_ACLK_CAM0_400_USER 4
+#define CLK_MOUT_ACLK_CAM0_552_USER 5
+#define CLK_MOUT_PHYCLK_RXBYTECLKHS0_S4_USER 6
+#define CLK_MOUT_PHYCLK_RXBYTECLKHS0_S2A_USER 7
+#define CLK_MOUT_ACLK_LITE_D_B 8
+#define CLK_MOUT_ACLK_LITE_D_A 9
+#define CLK_MOUT_ACLK_LITE_B_B 10
+#define CLK_MOUT_ACLK_LITE_B_A 11
+#define CLK_MOUT_ACLK_LITE_A_B 12
+#define CLK_MOUT_ACLK_LITE_A_A 13
+#define CLK_MOUT_ACLK_CAM0_400 14
+#define CLK_MOUT_ACLK_CSIS1_B 15
+#define CLK_MOUT_ACLK_CSIS1_A 16
+#define CLK_MOUT_ACLK_CSIS0_B 17
+#define CLK_MOUT_ACLK_CSIS0_A 18
+#define CLK_MOUT_ACLK_3AA1_B 19
+#define CLK_MOUT_ACLK_3AA1_A 20
+#define CLK_MOUT_ACLK_3AA0_B 21
+#define CLK_MOUT_ACLK_3AA0_A 22
+#define CLK_MOUT_SCLK_LITE_FREECNT_C 23
+#define CLK_MOUT_SCLK_LITE_FREECNT_B 24
+#define CLK_MOUT_SCLK_LITE_FREECNT_A 25
+#define CLK_MOUT_SCLK_PIXELASYNC_LITE_C_B 26
+#define CLK_MOUT_SCLK_PIXELASYNC_LITE_C_A 27
+#define CLK_MOUT_SCLK_PIXELASYNC_LITE_C_INIT_B 28
+#define CLK_MOUT_SCLK_PIXELASYNC_LITE_C_INIT_A 29
+
+#define CLK_DIV_PCLK_CAM0_50 30
+#define CLK_DIV_ACLK_CAM0_200 31
+#define CLK_DIV_ACLK_CAM0_BUS_400 32
+#define CLK_DIV_PCLK_LITE_D 33
+#define CLK_DIV_ACLK_LITE_D 34
+#define CLK_DIV_PCLK_LITE_B 35
+#define CLK_DIV_ACLK_LITE_B 36
+#define CLK_DIV_PCLK_LITE_A 37
+#define CLK_DIV_ACLK_LITE_A 38
+#define CLK_DIV_ACLK_CSIS1 39
+#define CLK_DIV_ACLK_CSIS0 40
+#define CLK_DIV_PCLK_3AA1 41
+#define CLK_DIV_ACLK_3AA1 42
+#define CLK_DIV_PCLK_3AA0 43
+#define CLK_DIV_ACLK_3AA0 44
+#define CLK_DIV_SCLK_PIXELASYNC_LITE_C 45
+#define CLK_DIV_PCLK_PIXELASYNC_LITE_C 46
+#define CLK_DIV_SCLK_PIXELASYNC_LITE_C_INIT 47
+
+#define CLK_ACLK_CSIS1 50
+#define CLK_ACLK_CSIS0 51
+#define CLK_ACLK_3AA1 52
+#define CLK_ACLK_3AA0 53
+#define CLK_ACLK_LITE_D 54
+#define CLK_ACLK_LITE_B 55
+#define CLK_ACLK_LITE_A 56
+#define CLK_ACLK_AHBSYNCDN 57
+#define CLK_ACLK_AXIUS_LITE_D 58
+#define CLK_ACLK_AXIUS_LITE_B 59
+#define CLK_ACLK_AXIUS_LITE_A 60
+#define CLK_ACLK_ASYNCAPBM_3AA1 61
+#define CLK_ACLK_ASYNCAPBS_3AA1 62
+#define CLK_ACLK_ASYNCAPBM_3AA0 63
+#define CLK_ACLK_ASYNCAPBS_3AA0 64
+#define CLK_ACLK_ASYNCAPBM_LITE_D 65
+#define CLK_ACLK_ASYNCAPBS_LITE_D 66
+#define CLK_ACLK_ASYNCAPBM_LITE_B 67
+#define CLK_ACLK_ASYNCAPBS_LITE_B 68
+#define CLK_ACLK_ASYNCAPBM_LITE_A 69
+#define CLK_ACLK_ASYNCAPBS_LITE_A 70
+#define CLK_ACLK_ASYNCAXIM_ISP0P 71
+#define CLK_ACLK_ASYNCAXIM_3AA1 72
+#define CLK_ACLK_ASYNCAXIS_3AA1 73
+#define CLK_ACLK_ASYNCAXIM_3AA0 74
+#define CLK_ACLK_ASYNCAXIS_3AA0 75
+#define CLK_ACLK_ASYNCAXIM_LITE_D 76
+#define CLK_ACLK_ASYNCAXIS_LITE_D 77
+#define CLK_ACLK_ASYNCAXIM_LITE_B 78
+#define CLK_ACLK_ASYNCAXIS_LITE_B 79
+#define CLK_ACLK_ASYNCAXIM_LITE_A 80
+#define CLK_ACLK_ASYNCAXIS_LITE_A 81
+#define CLK_ACLK_AHB2APB_ISPSFRP 82
+#define CLK_ACLK_AXI2APB_ISP0P 83
+#define CLK_ACLK_AXI2AHB_ISP0P 84
+#define CLK_ACLK_XIU_IS0X 85
+#define CLK_ACLK_XIU_ISP0EX 86
+#define CLK_ACLK_CAM0NP_276 87
+#define CLK_ACLK_CAM0ND_400 88
+#define CLK_ACLK_SMMU_3AA1 89
+#define CLK_ACLK_SMMU_3AA0 90
+#define CLK_ACLK_SMMU_LITE_D 91
+#define CLK_ACLK_SMMU_LITE_B 92
+#define CLK_ACLK_SMMU_LITE_A 93
+#define CLK_ACLK_BTS_3AA1 94
+#define CLK_ACLK_BTS_3AA0 95
+#define CLK_ACLK_BTS_LITE_D 96
+#define CLK_ACLK_BTS_LITE_B 97
+#define CLK_ACLK_BTS_LITE_A 98
+#define CLK_PCLK_SMMU_3AA1 99
+#define CLK_PCLK_SMMU_3AA0 100
+#define CLK_PCLK_SMMU_LITE_D 101
+#define CLK_PCLK_SMMU_LITE_B 102
+#define CLK_PCLK_SMMU_LITE_A 103
+#define CLK_PCLK_BTS_3AA1 104
+#define CLK_PCLK_BTS_3AA0 105
+#define CLK_PCLK_BTS_LITE_D 106
+#define CLK_PCLK_BTS_LITE_B 107
+#define CLK_PCLK_BTS_LITE_A 108
+#define CLK_PCLK_ASYNCAXI_CAM1 109
+#define CLK_PCLK_ASYNCAXI_3AA1 110
+#define CLK_PCLK_ASYNCAXI_3AA0 111
+#define CLK_PCLK_ASYNCAXI_LITE_D 112
+#define CLK_PCLK_ASYNCAXI_LITE_B 113
+#define CLK_PCLK_ASYNCAXI_LITE_A 114
+#define CLK_PCLK_PMU_CAM0 115
+#define CLK_PCLK_SYSREG_CAM0 116
+#define CLK_PCLK_CMU_CAM0_LOCAL 117
+#define CLK_PCLK_CSIS1 118
+#define CLK_PCLK_CSIS0 119
+#define CLK_PCLK_3AA1 120
+#define CLK_PCLK_3AA0 121
+#define CLK_PCLK_LITE_D 122
+#define CLK_PCLK_LITE_B 123
+#define CLK_PCLK_LITE_A 124
+#define CLK_PHYCLK_RXBYTECLKHS0_S4 125
+#define CLK_PHYCLK_RXBYTECLKHS0_S2A 126
+#define CLK_SCLK_LITE_FREECNT 127
+#define CLK_SCLK_PIXELASYNCM_3AA1 128
+#define CLK_SCLK_PIXELASYNCM_3AA0 129
+#define CLK_SCLK_PIXELASYNCS_3AA0 130
+#define CLK_SCLK_PIXELASYNCM_LITE_C 131
+#define CLK_SCLK_PIXELASYNCM_LITE_C_INIT 132
+#define CLK_SCLK_PIXELASYNCS_LITE_C_INIT 133
+
+#define CAM0_NR_CLK 134
+
+/* CMU_CAM1 */
+#define CLK_PHYCLK_RXBYTEECLKHS0_S2B 1
+
+#define CLK_MOUT_SCLK_ISP_UART_USER 2
+#define CLK_MOUT_SCLK_ISP_SPI1_USER 3
+#define CLK_MOUT_SCLK_ISP_SPI0_USER 4
+#define CLK_MOUT_ACLK_CAM1_333_USER 5
+#define CLK_MOUT_ACLK_CAM1_400_USER 6
+#define CLK_MOUT_ACLK_CAM1_552_USER 7
+#define CLK_MOUT_PHYCLK_RXBYTECLKHS0_S2B_USER 8
+#define CLK_MOUT_ACLK_CSIS2_B 9
+#define CLK_MOUT_ACLK_CSIS2_A 10
+#define CLK_MOUT_ACLK_FD_B 11
+#define CLK_MOUT_ACLK_FD_A 12
+#define CLK_MOUT_ACLK_LITE_C_B 13
+#define CLK_MOUT_ACLK_LITE_C_A 14
+
+#define CLK_DIV_SCLK_ISP_WPWM 15
+#define CLK_DIV_PCLK_CAM1_83 16
+#define CLK_DIV_PCLK_CAM1_166 17
+#define CLK_DIV_PCLK_DBG_CAM1 18
+#define CLK_DIV_ATCLK_CAM1 19
+#define CLK_DIV_ACLK_CSIS2 20
+#define CLK_DIV_PCLK_FD 21
+#define CLK_DIV_ACLK_FD 22
+#define CLK_DIV_PCLK_LITE_C 23
+#define CLK_DIV_ACLK_LITE_C 24
+
+#define CLK_ACLK_ISP_GIC 25
+#define CLK_ACLK_FD 26
+#define CLK_ACLK_LITE_C 27
+#define CLK_ACLK_CSIS2 28
+#define CLK_ACLK_ASYNCAPBM_FD 29
+#define CLK_ACLK_ASYNCAPBS_FD 30
+#define CLK_ACLK_ASYNCAPBM_LITE_C 31
+#define CLK_ACLK_ASYNCAPBS_LITE_C 32
+#define CLK_ACLK_ASYNCAHBS_SFRISP2H2 33
+#define CLK_ACLK_ASYNCAHBS_SFRISP2H1 34
+#define CLK_ACLK_ASYNCAXIM_CA5 35
+#define CLK_ACLK_ASYNCAXIS_CA5 36
+#define CLK_ACLK_ASYNCAXIS_ISPX2 37
+#define CLK_ACLK_ASYNCAXIS_ISPX1 38
+#define CLK_ACLK_ASYNCAXIS_ISPX0 39
+#define CLK_ACLK_ASYNCAXIM_ISPEX 40
+#define CLK_ACLK_ASYNCAXIM_ISP3P 41
+#define CLK_ACLK_ASYNCAXIS_ISP3P 42
+#define CLK_ACLK_ASYNCAXIM_FD 43
+#define CLK_ACLK_ASYNCAXIS_FD 44
+#define CLK_ACLK_ASYNCAXIM_LITE_C 45
+#define CLK_ACLK_ASYNCAXIS_LITE_C 46
+#define CLK_ACLK_AHB2APB_ISP5P 47
+#define CLK_ACLK_AHB2APB_ISP3P 48
+#define CLK_ACLK_AXI2APB_ISP3P 49
+#define CLK_ACLK_AHB_SFRISP2H 50
+#define CLK_ACLK_AXI_ISP_HX_R 51
+#define CLK_ACLK_AXI_ISP_CX_R 52
+#define CLK_ACLK_AXI_ISP_HX 53
+#define CLK_ACLK_AXI_ISP_CX 54
+#define CLK_ACLK_XIU_ISPX 55
+#define CLK_ACLK_XIU_ISPEX 56
+#define CLK_ACLK_CAM1NP_333 57
+#define CLK_ACLK_CAM1ND_400 58
+#define CLK_ACLK_SMMU_ISPCPU 59
+#define CLK_ACLK_SMMU_FD 60
+#define CLK_ACLK_SMMU_LITE_C 61
+#define CLK_ACLK_BTS_ISP3P 62
+#define CLK_ACLK_BTS_FD 63
+#define CLK_ACLK_BTS_LITE_C 64
+#define CLK_ACLK_AHBDN_SFRISP2H 65
+#define CLK_ACLK_AHBDN_ISP5P 66
+#define CLK_ACLK_AXIUS_ISP3P 67
+#define CLK_ACLK_AXIUS_FD 68
+#define CLK_ACLK_AXIUS_LITE_C 69
+#define CLK_PCLK_SMMU_ISPCPU 70
+#define CLK_PCLK_SMMU_FD 71
+#define CLK_PCLK_SMMU_LITE_C 72
+#define CLK_PCLK_BTS_ISP3P 73
+#define CLK_PCLK_BTS_FD 74
+#define CLK_PCLK_BTS_LITE_C 75
+#define CLK_PCLK_ASYNCAXIM_CA5 76
+#define CLK_PCLK_ASYNCAXIM_ISPEX 77
+#define CLK_PCLK_ASYNCAXIM_ISP3P 78
+#define CLK_PCLK_ASYNCAXIM_FD 79
+#define CLK_PCLK_ASYNCAXIM_LITE_C 80
+#define CLK_PCLK_PMU_CAM1 81
+#define CLK_PCLK_SYSREG_CAM1 82
+#define CLK_PCLK_CMU_CAM1_LOCAL 83
+#define CLK_PCLK_ISP_MCTADC 84
+#define CLK_PCLK_ISP_WDT 85
+#define CLK_PCLK_ISP_PWM 86
+#define CLK_PCLK_ISP_UART 87
+#define CLK_PCLK_ISP_MCUCTL 88
+#define CLK_PCLK_ISP_SPI1 89
+#define CLK_PCLK_ISP_SPI0 90
+#define CLK_PCLK_ISP_I2C2 91
+#define CLK_PCLK_ISP_I2C1 92
+#define CLK_PCLK_ISP_I2C0 93
+#define CLK_PCLK_ISP_MPWM 94
+#define CLK_PCLK_FD 95
+#define CLK_PCLK_LITE_C 96
+#define CLK_PCLK_CSIS2 97
+#define CLK_SCLK_ISP_I2C2 98
+#define CLK_SCLK_ISP_I2C1 99
+#define CLK_SCLK_ISP_I2C0 100
+#define CLK_SCLK_ISP_PWM 101
+#define CLK_PHYCLK_RXBYTECLKHS0_S2B 102
+#define CLK_SCLK_LITE_C_FREECNT 103
+#define CLK_SCLK_PIXELASYNCM_FD 104
+#define CLK_SCLK_ISP_MCTADC 105
+#define CLK_SCLK_ISP_UART 106
+#define CLK_SCLK_ISP_SPI1 107
+#define CLK_SCLK_ISP_SPI0 108
+#define CLK_SCLK_ISP_MPWM 109
+#define CLK_PCLK_DBG_ISP 110
+#define CLK_ATCLK_ISP 111
+#define CLK_SCLK_ISP_CA5 112
+
+#define CAM1_NR_CLK 113
+
+#endif /* _DT_BINDINGS_CLOCK_EXYNOS5433_H */
diff --git a/include/dt-bindings/clock/qcom,gcc-ipq806x.h b/include/dt-bindings/clock/qcom,gcc-ipq806x.h
index 04fb29a..ebd63fd 100644
--- a/include/dt-bindings/clock/qcom,gcc-ipq806x.h
+++ b/include/dt-bindings/clock/qcom,gcc-ipq806x.h
@@ -288,5 +288,6 @@
#define UBI32_CORE2_CLK_SRC 278
#define UBI32_CORE1_CLK 279
#define UBI32_CORE2_CLK 280
+#define EBI2_AON_CLK 281
#endif
diff --git a/include/dt-bindings/clock/qcom,gcc-msm8916.h b/include/dt-bindings/clock/qcom,gcc-msm8916.h
new file mode 100644
index 0000000..e430f64
--- /dev/null
+++ b/include/dt-bindings/clock/qcom,gcc-msm8916.h
@@ -0,0 +1,156 @@
+/*
+ * Copyright 2015 Linaro Limited
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef _DT_BINDINGS_CLK_MSM_GCC_8916_H
+#define _DT_BINDINGS_CLK_MSM_GCC_8916_H
+
+#define GPLL0 0
+#define GPLL0_VOTE 1
+#define BIMC_PLL 2
+#define BIMC_PLL_VOTE 3
+#define GPLL1 4
+#define GPLL1_VOTE 5
+#define GPLL2 6
+#define GPLL2_VOTE 7
+#define PCNOC_BFDCD_CLK_SRC 8
+#define SYSTEM_NOC_BFDCD_CLK_SRC 9
+#define CAMSS_AHB_CLK_SRC 10
+#define APSS_AHB_CLK_SRC 11
+#define CSI0_CLK_SRC 12
+#define CSI1_CLK_SRC 13
+#define GFX3D_CLK_SRC 14
+#define VFE0_CLK_SRC 15
+#define BLSP1_QUP1_I2C_APPS_CLK_SRC 16
+#define BLSP1_QUP1_SPI_APPS_CLK_SRC 17
+#define BLSP1_QUP2_I2C_APPS_CLK_SRC 18
+#define BLSP1_QUP2_SPI_APPS_CLK_SRC 19
+#define BLSP1_QUP3_I2C_APPS_CLK_SRC 20
+#define BLSP1_QUP3_SPI_APPS_CLK_SRC 21
+#define BLSP1_QUP4_I2C_APPS_CLK_SRC 22
+#define BLSP1_QUP4_SPI_APPS_CLK_SRC 23
+#define BLSP1_QUP5_I2C_APPS_CLK_SRC 24
+#define BLSP1_QUP5_SPI_APPS_CLK_SRC 25
+#define BLSP1_QUP6_I2C_APPS_CLK_SRC 26
+#define BLSP1_QUP6_SPI_APPS_CLK_SRC 27
+#define BLSP1_UART1_APPS_CLK_SRC 28
+#define BLSP1_UART2_APPS_CLK_SRC 29
+#define CCI_CLK_SRC 30
+#define CAMSS_GP0_CLK_SRC 31
+#define CAMSS_GP1_CLK_SRC 32
+#define JPEG0_CLK_SRC 33
+#define MCLK0_CLK_SRC 34
+#define MCLK1_CLK_SRC 35
+#define CSI0PHYTIMER_CLK_SRC 36
+#define CSI1PHYTIMER_CLK_SRC 37
+#define CPP_CLK_SRC 38
+#define CRYPTO_CLK_SRC 39
+#define GP1_CLK_SRC 40
+#define GP2_CLK_SRC 41
+#define GP3_CLK_SRC 42
+#define BYTE0_CLK_SRC 43
+#define ESC0_CLK_SRC 44
+#define MDP_CLK_SRC 45
+#define PCLK0_CLK_SRC 46
+#define VSYNC_CLK_SRC 47
+#define PDM2_CLK_SRC 48
+#define SDCC1_APPS_CLK_SRC 49
+#define SDCC2_APPS_CLK_SRC 50
+#define APSS_TCU_CLK_SRC 51
+#define USB_HS_SYSTEM_CLK_SRC 52
+#define VCODEC0_CLK_SRC 53
+#define GCC_BLSP1_AHB_CLK 54
+#define GCC_BLSP1_SLEEP_CLK 55
+#define GCC_BLSP1_QUP1_I2C_APPS_CLK 56
+#define GCC_BLSP1_QUP1_SPI_APPS_CLK 57
+#define GCC_BLSP1_QUP2_I2C_APPS_CLK 58
+#define GCC_BLSP1_QUP2_SPI_APPS_CLK 59
+#define GCC_BLSP1_QUP3_I2C_APPS_CLK 60
+#define GCC_BLSP1_QUP3_SPI_APPS_CLK 61
+#define GCC_BLSP1_QUP4_I2C_APPS_CLK 62
+#define GCC_BLSP1_QUP4_SPI_APPS_CLK 63
+#define GCC_BLSP1_QUP5_I2C_APPS_CLK 64
+#define GCC_BLSP1_QUP5_SPI_APPS_CLK 65
+#define GCC_BLSP1_QUP6_I2C_APPS_CLK 66
+#define GCC_BLSP1_QUP6_SPI_APPS_CLK 67
+#define GCC_BLSP1_UART1_APPS_CLK 68
+#define GCC_BLSP1_UART2_APPS_CLK 69
+#define GCC_BOOT_ROM_AHB_CLK 70
+#define GCC_CAMSS_CCI_AHB_CLK 71
+#define GCC_CAMSS_CCI_CLK 72
+#define GCC_CAMSS_CSI0_AHB_CLK 73
+#define GCC_CAMSS_CSI0_CLK 74
+#define GCC_CAMSS_CSI0PHY_CLK 75
+#define GCC_CAMSS_CSI0PIX_CLK 76
+#define GCC_CAMSS_CSI0RDI_CLK 77
+#define GCC_CAMSS_CSI1_AHB_CLK 78
+#define GCC_CAMSS_CSI1_CLK 79
+#define GCC_CAMSS_CSI1PHY_CLK 80
+#define GCC_CAMSS_CSI1PIX_CLK 81
+#define GCC_CAMSS_CSI1RDI_CLK 82
+#define GCC_CAMSS_CSI_VFE0_CLK 83
+#define GCC_CAMSS_GP0_CLK 84
+#define GCC_CAMSS_GP1_CLK 85
+#define GCC_CAMSS_ISPIF_AHB_CLK 86
+#define GCC_CAMSS_JPEG0_CLK 87
+#define GCC_CAMSS_JPEG_AHB_CLK 88
+#define GCC_CAMSS_JPEG_AXI_CLK 89
+#define GCC_CAMSS_MCLK0_CLK 90
+#define GCC_CAMSS_MCLK1_CLK 91
+#define GCC_CAMSS_MICRO_AHB_CLK 92
+#define GCC_CAMSS_CSI0PHYTIMER_CLK 93
+#define GCC_CAMSS_CSI1PHYTIMER_CLK 94
+#define GCC_CAMSS_AHB_CLK 95
+#define GCC_CAMSS_TOP_AHB_CLK 96
+#define GCC_CAMSS_CPP_AHB_CLK 97
+#define GCC_CAMSS_CPP_CLK 98
+#define GCC_CAMSS_VFE0_CLK 99
+#define GCC_CAMSS_VFE_AHB_CLK 100
+#define GCC_CAMSS_VFE_AXI_CLK 101
+#define GCC_CRYPTO_AHB_CLK 102
+#define GCC_CRYPTO_AXI_CLK 103
+#define GCC_CRYPTO_CLK 104
+#define GCC_OXILI_GMEM_CLK 105
+#define GCC_GP1_CLK 106
+#define GCC_GP2_CLK 107
+#define GCC_GP3_CLK 108
+#define GCC_MDSS_AHB_CLK 109
+#define GCC_MDSS_AXI_CLK 110
+#define GCC_MDSS_BYTE0_CLK 111
+#define GCC_MDSS_ESC0_CLK 112
+#define GCC_MDSS_MDP_CLK 113
+#define GCC_MDSS_PCLK0_CLK 114
+#define GCC_MDSS_VSYNC_CLK 115
+#define GCC_MSS_CFG_AHB_CLK 116
+#define GCC_OXILI_AHB_CLK 117
+#define GCC_OXILI_GFX3D_CLK 118
+#define GCC_PDM2_CLK 119
+#define GCC_PDM_AHB_CLK 120
+#define GCC_PRNG_AHB_CLK 121
+#define GCC_SDCC1_AHB_CLK 122
+#define GCC_SDCC1_APPS_CLK 123
+#define GCC_SDCC2_AHB_CLK 124
+#define GCC_SDCC2_APPS_CLK 125
+#define GCC_GTCU_AHB_CLK 126
+#define GCC_JPEG_TBU_CLK 127
+#define GCC_MDP_TBU_CLK 128
+#define GCC_SMMU_CFG_CLK 129
+#define GCC_VENUS_TBU_CLK 130
+#define GCC_VFE_TBU_CLK 131
+#define GCC_USB2A_PHY_SLEEP_CLK 132
+#define GCC_USB_HS_AHB_CLK 133
+#define GCC_USB_HS_SYSTEM_CLK 134
+#define GCC_VENUS0_AHB_CLK 135
+#define GCC_VENUS0_AXI_CLK 136
+#define GCC_VENUS0_VCODEC0_CLK 137
+
+#endif
diff --git a/include/dt-bindings/clock/tegra124-car-common.h b/include/dt-bindings/clock/tegra124-car-common.h
index ae2eb17..a215609 100644
--- a/include/dt-bindings/clock/tegra124-car-common.h
+++ b/include/dt-bindings/clock/tegra124-car-common.h
@@ -297,7 +297,7 @@
#define TEGRA124_CLK_PLL_C4 270
#define TEGRA124_CLK_PLL_DP 271
#define TEGRA124_CLK_PLL_E_MUX 272
-#define TEGRA124_CLK_PLLD_DSI 273
+#define TEGRA124_CLK_PLL_D_DSI_OUT 273
/* 274 */
/* 275 */
/* 276 */
diff --git a/include/dt-bindings/media/xilinx-vip.h b/include/dt-bindings/media/xilinx-vip.h
new file mode 100644
index 0000000..6298fec
--- /dev/null
+++ b/include/dt-bindings/media/xilinx-vip.h
@@ -0,0 +1,39 @@
+/*
+ * Xilinx Video IP Core
+ *
+ * Copyright (C) 2013-2015 Ideas on Board
+ * Copyright (C) 2013-2015 Xilinx, Inc.
+ *
+ * Contacts: Hyun Kwon <hyun.kwon@xilinx.com>
+ * Laurent Pinchart <laurent.pinchart@ideasonboard.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef __DT_BINDINGS_MEDIA_XILINX_VIP_H__
+#define __DT_BINDINGS_MEDIA_XILINX_VIP_H__
+
+/*
+ * Video format codes as defined in "AXI4-Stream Video IP and System Design
+ * Guide".
+ */
+#define XVIP_VF_YUV_422 0
+#define XVIP_VF_YUV_444 1
+#define XVIP_VF_RBG 2
+#define XVIP_VF_YUV_420 3
+#define XVIP_VF_YUVA_422 4
+#define XVIP_VF_YUVA_444 5
+#define XVIP_VF_RGBA 6
+#define XVIP_VF_YUVA_420 7
+#define XVIP_VF_YUVD_422 8
+#define XVIP_VF_YUVD_444 9
+#define XVIP_VF_RGBD 10
+#define XVIP_VF_YUVD_420 11
+#define XVIP_VF_MONO_SENSOR 12
+#define XVIP_VF_CUSTOM2 13
+#define XVIP_VF_CUSTOM3 14
+#define XVIP_VF_CUSTOM4 15
+
+#endif /* __DT_BINDINGS_MEDIA_XILINX_VIP_H__ */
diff --git a/include/dt-bindings/reset/qcom,gcc-msm8916.h b/include/dt-bindings/reset/qcom,gcc-msm8916.h
new file mode 100644
index 0000000..3d90410
--- /dev/null
+++ b/include/dt-bindings/reset/qcom,gcc-msm8916.h
@@ -0,0 +1,108 @@
+/*
+ * Copyright 2015 Linaro Limited
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef _DT_BINDINGS_RESET_MSM_GCC_8916_H
+#define _DT_BINDINGS_RESET_MSM_GCC_8916_H
+
+#define GCC_BLSP1_BCR 0
+#define GCC_BLSP1_QUP1_BCR 1
+#define GCC_BLSP1_UART1_BCR 2
+#define GCC_BLSP1_QUP2_BCR 3
+#define GCC_BLSP1_UART2_BCR 4
+#define GCC_BLSP1_QUP3_BCR 5
+#define GCC_BLSP1_QUP4_BCR 6
+#define GCC_BLSP1_QUP5_BCR 7
+#define GCC_BLSP1_QUP6_BCR 8
+#define GCC_IMEM_BCR 9
+#define GCC_SMMU_BCR 10
+#define GCC_APSS_TCU_BCR 11
+#define GCC_SMMU_XPU_BCR 12
+#define GCC_PCNOC_TBU_BCR 13
+#define GCC_PRNG_BCR 14
+#define GCC_BOOT_ROM_BCR 15
+#define GCC_CRYPTO_BCR 16
+#define GCC_SEC_CTRL_BCR 17
+#define GCC_AUDIO_CORE_BCR 18
+#define GCC_ULT_AUDIO_BCR 19
+#define GCC_DEHR_BCR 20
+#define GCC_SYSTEM_NOC_BCR 21
+#define GCC_PCNOC_BCR 22
+#define GCC_TCSR_BCR 23
+#define GCC_QDSS_BCR 24
+#define GCC_DCD_BCR 25
+#define GCC_MSG_RAM_BCR 26
+#define GCC_MPM_BCR 27
+#define GCC_SPMI_BCR 28
+#define GCC_SPDM_BCR 29
+#define GCC_MM_SPDM_BCR 30
+#define GCC_BIMC_BCR 31
+#define GCC_RBCPR_BCR 32
+#define GCC_TLMM_BCR 33
+#define GCC_USB_HS_BCR 34
+#define GCC_USB2A_PHY_BCR 35
+#define GCC_SDCC1_BCR 36
+#define GCC_SDCC2_BCR 37
+#define GCC_PDM_BCR 38
+#define GCC_SNOC_BUS_TIMEOUT0_BCR 39
+#define GCC_PCNOC_BUS_TIMEOUT0_BCR 40
+#define GCC_PCNOC_BUS_TIMEOUT1_BCR 41
+#define GCC_PCNOC_BUS_TIMEOUT2_BCR 42
+#define GCC_PCNOC_BUS_TIMEOUT3_BCR 43
+#define GCC_PCNOC_BUS_TIMEOUT4_BCR 44
+#define GCC_PCNOC_BUS_TIMEOUT5_BCR 45
+#define GCC_PCNOC_BUS_TIMEOUT6_BCR 46
+#define GCC_PCNOC_BUS_TIMEOUT7_BCR 47
+#define GCC_PCNOC_BUS_TIMEOUT8_BCR 48
+#define GCC_PCNOC_BUS_TIMEOUT9_BCR 49
+#define GCC_MMSS_BCR 50
+#define GCC_VENUS0_BCR 51
+#define GCC_MDSS_BCR 52
+#define GCC_CAMSS_PHY0_BCR 53
+#define GCC_CAMSS_CSI0_BCR 54
+#define GCC_CAMSS_CSI0PHY_BCR 55
+#define GCC_CAMSS_CSI0RDI_BCR 56
+#define GCC_CAMSS_CSI0PIX_BCR 57
+#define GCC_CAMSS_PHY1_BCR 58
+#define GCC_CAMSS_CSI1_BCR 59
+#define GCC_CAMSS_CSI1PHY_BCR 60
+#define GCC_CAMSS_CSI1RDI_BCR 61
+#define GCC_CAMSS_CSI1PIX_BCR 62
+#define GCC_CAMSS_ISPIF_BCR 63
+#define GCC_CAMSS_CCI_BCR 64
+#define GCC_CAMSS_MCLK0_BCR 65
+#define GCC_CAMSS_MCLK1_BCR 66
+#define GCC_CAMSS_GP0_BCR 67
+#define GCC_CAMSS_GP1_BCR 68
+#define GCC_CAMSS_TOP_BCR 69
+#define GCC_CAMSS_MICRO_BCR 70
+#define GCC_CAMSS_JPEG_BCR 71
+#define GCC_CAMSS_VFE_BCR 72
+#define GCC_CAMSS_CSI_VFE0_BCR 73
+#define GCC_OXILI_BCR 74
+#define GCC_GMEM_BCR 75
+#define GCC_CAMSS_AHB_BCR 76
+#define GCC_MDP_TBU_BCR 77
+#define GCC_GFX_TBU_BCR 78
+#define GCC_GFX_TCU_BCR 79
+#define GCC_MSS_TBU_AXI_BCR 80
+#define GCC_MSS_TBU_GSS_AXI_BCR 81
+#define GCC_MSS_TBU_Q6_AXI_BCR 82
+#define GCC_GTCU_AHB_BCR 83
+#define GCC_SMMU_CFG_BCR 84
+#define GCC_VFE_TBU_BCR 85
+#define GCC_VENUS_TBU_BCR 86
+#define GCC_JPEG_TBU_BCR 87
+#define GCC_PRONTO_TBU_BCR 88
+#define GCC_SMMU_CATS_BCR 89
+
+#endif
diff --git a/include/linux/clk-provider.h b/include/linux/clk-provider.h
index 5591ea7..df69531 100644
--- a/include/linux/clk-provider.h
+++ b/include/linux/clk-provider.h
@@ -541,7 +541,7 @@
extern const struct clk_ops clk_gpio_gate_ops;
struct clk *clk_register_gpio_gate(struct device *dev, const char *name,
- const char *parent_name, struct gpio_desc *gpio,
+ const char *parent_name, unsigned gpio, bool active_low,
unsigned long flags);
void of_gpio_clk_gate_setup(struct device_node *node);
diff --git a/include/linux/clk/at91_pmc.h b/include/linux/clk/at91_pmc.h
index c8e3b3d..7669f76 100644
--- a/include/linux/clk/at91_pmc.h
+++ b/include/linux/clk/at91_pmc.h
@@ -20,10 +20,10 @@
extern void __iomem *at91_pmc_base;
#define at91_pmc_read(field) \
- __raw_readl(at91_pmc_base + field)
+ readl_relaxed(at91_pmc_base + field)
#define at91_pmc_write(field, value) \
- __raw_writel(value, at91_pmc_base + field)
+ writel_relaxed(value, at91_pmc_base + field)
#else
.extern at91_pmc_base
#endif
diff --git a/include/linux/console.h b/include/linux/console.h
index 7571a16..9f50fb4 100644
--- a/include/linux/console.h
+++ b/include/linux/console.h
@@ -123,7 +123,7 @@
struct tty_driver *(*device)(struct console *, int *);
void (*unblank)(void);
int (*setup)(struct console *, char *);
- int (*early_setup)(void);
+ int (*match)(struct console *, char *name, int idx, char *options);
short flags;
short index;
int cflag;
@@ -141,7 +141,6 @@
extern struct console *early_console;
extern int add_preferred_console(char *name, int idx, char *options);
-extern int update_console_cmdline(char *name, int idx, char *name_new, int idx_new, char *options);
extern void register_console(struct console *);
extern int unregister_console(struct console *);
extern struct console *console_drivers;
diff --git a/include/linux/cpumask.h b/include/linux/cpumask.h
index 086549a..27e285b 100644
--- a/include/linux/cpumask.h
+++ b/include/linux/cpumask.h
@@ -11,6 +11,7 @@
#include <linux/bitmap.h>
#include <linux/bug.h>
+/* Don't assign or return these: may not be this big! */
typedef struct cpumask { DECLARE_BITMAP(bits, NR_CPUS); } cpumask_t;
/**
@@ -289,11 +290,11 @@
* @cpumask: the cpumask pointer
*
* Returns 1 if @cpu is set in @cpumask, else returns 0
- *
- * No static inline type checking - see Subtlety (1) above.
*/
-#define cpumask_test_cpu(cpu, cpumask) \
- test_bit(cpumask_check(cpu), cpumask_bits((cpumask)))
+static inline int cpumask_test_cpu(int cpu, const struct cpumask *cpumask)
+{
+ return test_bit(cpumask_check(cpu), cpumask_bits((cpumask)));
+}
/**
* cpumask_test_and_set_cpu - atomically test and set a cpu in a cpumask
@@ -609,9 +610,7 @@
*/
static inline size_t cpumask_size(void)
{
- /* FIXME: Once all cpumask assignments are eliminated, this
- * can be nr_cpumask_bits */
- return BITS_TO_LONGS(NR_CPUS) * sizeof(long);
+ return BITS_TO_LONGS(nr_cpumask_bits) * sizeof(long);
}
/*
@@ -768,7 +767,7 @@
#if NR_CPUS <= BITS_PER_LONG
#define CPU_BITS_ALL \
{ \
- [BITS_TO_LONGS(NR_CPUS)-1] = CPU_MASK_LAST_WORD \
+ [BITS_TO_LONGS(NR_CPUS)-1] = BITMAP_LAST_WORD_MASK(NR_CPUS) \
}
#else /* NR_CPUS > BITS_PER_LONG */
@@ -776,7 +775,7 @@
#define CPU_BITS_ALL \
{ \
[0 ... BITS_TO_LONGS(NR_CPUS)-2] = ~0UL, \
- [BITS_TO_LONGS(NR_CPUS)-1] = CPU_MASK_LAST_WORD \
+ [BITS_TO_LONGS(NR_CPUS)-1] = BITMAP_LAST_WORD_MASK(NR_CPUS) \
}
#endif /* NR_CPUS > BITS_PER_LONG */
@@ -797,32 +796,18 @@
nr_cpu_ids);
}
-/*
- *
- * From here down, all obsolete. Use cpumask_ variants!
- *
- */
-#ifndef CONFIG_DISABLE_OBSOLETE_CPUMASK_FUNCTIONS
-#define cpumask_of_cpu(cpu) (*get_cpu_mask(cpu))
-
-#define CPU_MASK_LAST_WORD BITMAP_LAST_WORD_MASK(NR_CPUS)
-
#if NR_CPUS <= BITS_PER_LONG
-
#define CPU_MASK_ALL \
(cpumask_t) { { \
- [BITS_TO_LONGS(NR_CPUS)-1] = CPU_MASK_LAST_WORD \
+ [BITS_TO_LONGS(NR_CPUS)-1] = BITMAP_LAST_WORD_MASK(NR_CPUS) \
} }
-
#else
-
#define CPU_MASK_ALL \
(cpumask_t) { { \
[0 ... BITS_TO_LONGS(NR_CPUS)-2] = ~0UL, \
- [BITS_TO_LONGS(NR_CPUS)-1] = CPU_MASK_LAST_WORD \
+ [BITS_TO_LONGS(NR_CPUS)-1] = BITMAP_LAST_WORD_MASK(NR_CPUS) \
} }
-
-#endif
+#endif /* NR_CPUS > BITS_PER_LONG */
#define CPU_MASK_NONE \
(cpumask_t) { { \
@@ -834,143 +819,4 @@
[0] = 1UL \
} }
-#if NR_CPUS == 1
-#define first_cpu(src) ({ (void)(src); 0; })
-#define next_cpu(n, src) ({ (void)(src); 1; })
-#define any_online_cpu(mask) 0
-#define for_each_cpu_mask(cpu, mask) \
- for ((cpu) = 0; (cpu) < 1; (cpu)++, (void)mask)
-#else /* NR_CPUS > 1 */
-int __first_cpu(const cpumask_t *srcp);
-int __next_cpu(int n, const cpumask_t *srcp);
-
-#define first_cpu(src) __first_cpu(&(src))
-#define next_cpu(n, src) __next_cpu((n), &(src))
-#define any_online_cpu(mask) cpumask_any_and(&mask, cpu_online_mask)
-#define for_each_cpu_mask(cpu, mask) \
- for ((cpu) = -1; \
- (cpu) = next_cpu((cpu), (mask)), \
- (cpu) < NR_CPUS; )
-#endif /* SMP */
-
-#if NR_CPUS <= 64
-
-#define for_each_cpu_mask_nr(cpu, mask) for_each_cpu_mask(cpu, mask)
-
-#else /* NR_CPUS > 64 */
-
-int __next_cpu_nr(int n, const cpumask_t *srcp);
-#define for_each_cpu_mask_nr(cpu, mask) \
- for ((cpu) = -1; \
- (cpu) = __next_cpu_nr((cpu), &(mask)), \
- (cpu) < nr_cpu_ids; )
-
-#endif /* NR_CPUS > 64 */
-
-#define cpus_addr(src) ((src).bits)
-
-#define cpu_set(cpu, dst) __cpu_set((cpu), &(dst))
-static inline void __cpu_set(int cpu, volatile cpumask_t *dstp)
-{
- set_bit(cpu, dstp->bits);
-}
-
-#define cpu_clear(cpu, dst) __cpu_clear((cpu), &(dst))
-static inline void __cpu_clear(int cpu, volatile cpumask_t *dstp)
-{
- clear_bit(cpu, dstp->bits);
-}
-
-#define cpus_setall(dst) __cpus_setall(&(dst), NR_CPUS)
-static inline void __cpus_setall(cpumask_t *dstp, unsigned int nbits)
-{
- bitmap_fill(dstp->bits, nbits);
-}
-
-#define cpus_clear(dst) __cpus_clear(&(dst), NR_CPUS)
-static inline void __cpus_clear(cpumask_t *dstp, unsigned int nbits)
-{
- bitmap_zero(dstp->bits, nbits);
-}
-
-/* No static inline type checking - see Subtlety (1) above. */
-#define cpu_isset(cpu, cpumask) test_bit((cpu), (cpumask).bits)
-
-#define cpu_test_and_set(cpu, cpumask) __cpu_test_and_set((cpu), &(cpumask))
-static inline int __cpu_test_and_set(int cpu, cpumask_t *addr)
-{
- return test_and_set_bit(cpu, addr->bits);
-}
-
-#define cpus_and(dst, src1, src2) __cpus_and(&(dst), &(src1), &(src2), NR_CPUS)
-static inline int __cpus_and(cpumask_t *dstp, const cpumask_t *src1p,
- const cpumask_t *src2p, unsigned int nbits)
-{
- return bitmap_and(dstp->bits, src1p->bits, src2p->bits, nbits);
-}
-
-#define cpus_or(dst, src1, src2) __cpus_or(&(dst), &(src1), &(src2), NR_CPUS)
-static inline void __cpus_or(cpumask_t *dstp, const cpumask_t *src1p,
- const cpumask_t *src2p, unsigned int nbits)
-{
- bitmap_or(dstp->bits, src1p->bits, src2p->bits, nbits);
-}
-
-#define cpus_xor(dst, src1, src2) __cpus_xor(&(dst), &(src1), &(src2), NR_CPUS)
-static inline void __cpus_xor(cpumask_t *dstp, const cpumask_t *src1p,
- const cpumask_t *src2p, unsigned int nbits)
-{
- bitmap_xor(dstp->bits, src1p->bits, src2p->bits, nbits);
-}
-
-#define cpus_andnot(dst, src1, src2) \
- __cpus_andnot(&(dst), &(src1), &(src2), NR_CPUS)
-static inline int __cpus_andnot(cpumask_t *dstp, const cpumask_t *src1p,
- const cpumask_t *src2p, unsigned int nbits)
-{
- return bitmap_andnot(dstp->bits, src1p->bits, src2p->bits, nbits);
-}
-
-#define cpus_equal(src1, src2) __cpus_equal(&(src1), &(src2), NR_CPUS)
-static inline int __cpus_equal(const cpumask_t *src1p,
- const cpumask_t *src2p, unsigned int nbits)
-{
- return bitmap_equal(src1p->bits, src2p->bits, nbits);
-}
-
-#define cpus_intersects(src1, src2) __cpus_intersects(&(src1), &(src2), NR_CPUS)
-static inline int __cpus_intersects(const cpumask_t *src1p,
- const cpumask_t *src2p, unsigned int nbits)
-{
- return bitmap_intersects(src1p->bits, src2p->bits, nbits);
-}
-
-#define cpus_subset(src1, src2) __cpus_subset(&(src1), &(src2), NR_CPUS)
-static inline int __cpus_subset(const cpumask_t *src1p,
- const cpumask_t *src2p, unsigned int nbits)
-{
- return bitmap_subset(src1p->bits, src2p->bits, nbits);
-}
-
-#define cpus_empty(src) __cpus_empty(&(src), NR_CPUS)
-static inline int __cpus_empty(const cpumask_t *srcp, unsigned int nbits)
-{
- return bitmap_empty(srcp->bits, nbits);
-}
-
-#define cpus_weight(cpumask) __cpus_weight(&(cpumask), NR_CPUS)
-static inline int __cpus_weight(const cpumask_t *srcp, unsigned int nbits)
-{
- return bitmap_weight(srcp->bits, nbits);
-}
-
-#define cpus_shift_left(dst, src, n) \
- __cpus_shift_left(&(dst), &(src), (n), NR_CPUS)
-static inline void __cpus_shift_left(cpumask_t *dstp,
- const cpumask_t *srcp, int n, int nbits)
-{
- bitmap_shift_left(dstp->bits, srcp->bits, n, nbits);
-}
-#endif /* !CONFIG_DISABLE_OBSOLETE_CPUMASK_FUNCTIONS */
-
#endif /* __LINUX_CPUMASK_H */
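
The cpumask.h changes above drop the obsolete cpu_*/cpus_* helpers and turn cpumask_test_cpu() into a type-checked static inline, so callers are expected to use the pointer-based cpumask_* API throughout. A minimal, hedged sketch of what a converted caller looks like (the variable name and the chosen CPU are purely illustrative):

#include <linux/cpumask.h>
#include <linux/gfp.h>
#include <linux/printk.h>

/* Illustrative only: the cpumask_* replacements for the removed helpers. */
static void example_mask_usage(void)
{
	cpumask_var_t mask;

	if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
		return;

	cpumask_set_cpu(3, mask);			/* was cpu_set() */
	if (cpumask_test_cpu(3, mask))			/* now a static inline */
		pr_info("cpu 3 is set\n");

	cpumask_and(mask, mask, cpu_online_mask);	/* was cpus_and() */

	free_cpumask_var(mask);
}
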
diff --git a/include/linux/device-mapper.h b/include/linux/device-mapper.h
index fd23978..51cc1de 100644
--- a/include/linux/device-mapper.h
+++ b/include/linux/device-mapper.h
@@ -605,9 +605,4 @@
return (n << SECTOR_SHIFT);
}
-/*-----------------------------------------------------------------
- * Helper for block layer and dm core operations
- *---------------------------------------------------------------*/
-int dm_underlying_device_busy(struct request_queue *q);
-
#endif /* _LINUX_DEVICE_MAPPER_H */
diff --git a/include/linux/dma/hsu.h b/include/linux/dma/hsu.h
new file mode 100644
index 0000000..234393a
--- /dev/null
+++ b/include/linux/dma/hsu.h
@@ -0,0 +1,48 @@
+/*
+ * Driver for the High Speed UART DMA
+ *
+ * Copyright (C) 2015 Intel Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef _DMA_HSU_H
+#define _DMA_HSU_H
+
+#include <linux/device.h>
+#include <linux/interrupt.h>
+
+#include <linux/platform_data/dma-hsu.h>
+
+struct hsu_dma;
+
+/**
+ * struct hsu_dma_chip - representation of HSU DMA hardware
+ * @dev: struct device of the DMA controller
+ * @irq: irq line
+ * @regs: memory mapped I/O space
+ * @length: I/O space length
+ * @offset: offset of the I/O space where registers are located
+ * @hsu: struct hsu_dma that is filled in by ->probe()
+ * @pdata: platform data for the DMA controller if provided
+ */
+struct hsu_dma_chip {
+ struct device *dev;
+ int irq;
+ void __iomem *regs;
+ unsigned int length;
+ unsigned int offset;
+ struct hsu_dma *hsu;
+ struct hsu_dma_platform_data *pdata;
+};
+
+/* Export to the internal users */
+irqreturn_t hsu_dma_irq(struct hsu_dma_chip *chip, unsigned short nr);
+
+/* Export to the platform drivers */
+int hsu_dma_probe(struct hsu_dma_chip *chip);
+int hsu_dma_remove(struct hsu_dma_chip *chip);
+
+#endif /* _DMA_HSU_H */
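
As a rough illustration of the new HSU DMA interface, a platform glue driver would fill a struct hsu_dma_chip and hand it to hsu_dma_probe(). The sketch below is hypothetical (device names, resource layout and error handling are assumptions), and the interrupt wiring through hsu_dma_irq() is omitted:

#include <linux/dma/hsu.h>
#include <linux/err.h>
#include <linux/gfp.h>
#include <linux/io.h>
#include <linux/platform_device.h>

static int example_hsu_platform_probe(struct platform_device *pdev)
{
	struct hsu_dma_chip *chip;
	struct resource *res;
	int ret;

	chip = devm_kzalloc(&pdev->dev, sizeof(*chip), GFP_KERNEL);
	if (!chip)
		return -ENOMEM;

	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	chip->regs = devm_ioremap_resource(&pdev->dev, res);
	if (IS_ERR(chip->regs))
		return PTR_ERR(chip->regs);

	chip->dev = &pdev->dev;
	chip->irq = platform_get_irq(pdev, 0);
	chip->length = resource_size(res);
	chip->offset = 0;			/* registers start at the window base */

	ret = hsu_dma_probe(chip);		/* fills chip->hsu on success */
	if (ret)
		return ret;

	platform_set_drvdata(pdev, chip);
	return 0;
}
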
diff --git a/include/linux/f2fs_fs.h b/include/linux/f2fs_fs.h
index a23556c..591f8c3 100644
--- a/include/linux/f2fs_fs.h
+++ b/include/linux/f2fs_fs.h
@@ -153,7 +153,7 @@
*/
struct f2fs_extent {
__le32 fofs; /* start file offset of the extent */
- __le32 blk_addr; /* start block address of the extent */
+ __le32 blk; /* start block address of the extent */
__le32 len; /* length of the extent */
} __packed;
@@ -178,6 +178,7 @@
#define F2FS_INLINE_DATA 0x02 /* file inline data flag */
#define F2FS_INLINE_DENTRY 0x04 /* file inline dentry flag */
#define F2FS_DATA_EXIST 0x08 /* file inline data exist flag */
+#define F2FS_INLINE_DOTS 0x10 /* file having implicit dot dentries */
#define MAX_INLINE_DATA (sizeof(__le32) * (DEF_ADDRS_PER_INODE - \
F2FS_INLINE_XATTR_ADDRS - 1))
diff --git a/include/linux/fixp-arith.h b/include/linux/fixp-arith.h
index 3089d73..d4686fe 100644
--- a/include/linux/fixp-arith.h
+++ b/include/linux/fixp-arith.h
@@ -1,6 +1,8 @@
#ifndef _FIXP_ARITH_H
#define _FIXP_ARITH_H
+#include <linux/math64.h>
+
/*
* Simplistic fixed-point arithmetics.
* Hmm, I'm probably duplicating some code :(
@@ -29,59 +31,126 @@
#include <linux/types.h>
-/* The type representing fixed-point values */
-typedef s16 fixp_t;
-
-#define FRAC_N 8
-#define FRAC_MASK ((1<<FRAC_N)-1)
-
-/* Not to be used directly. Use fixp_{cos,sin} */
-static const fixp_t cos_table[46] = {
- 0x0100, 0x00FF, 0x00FF, 0x00FE, 0x00FD, 0x00FC, 0x00FA, 0x00F8,
- 0x00F6, 0x00F3, 0x00F0, 0x00ED, 0x00E9, 0x00E6, 0x00E2, 0x00DD,
- 0x00D9, 0x00D4, 0x00CF, 0x00C9, 0x00C4, 0x00BE, 0x00B8, 0x00B1,
- 0x00AB, 0x00A4, 0x009D, 0x0096, 0x008F, 0x0087, 0x0080, 0x0078,
- 0x0070, 0x0068, 0x005F, 0x0057, 0x004F, 0x0046, 0x003D, 0x0035,
- 0x002C, 0x0023, 0x001A, 0x0011, 0x0008, 0x0000
+static const s32 sin_table[] = {
+ 0x00000000, 0x023be165, 0x04779632, 0x06b2f1d2, 0x08edc7b6, 0x0b27eb5c,
+ 0x0d61304d, 0x0f996a26, 0x11d06c96, 0x14060b67, 0x163a1a7d, 0x186c6ddd,
+ 0x1a9cd9ac, 0x1ccb3236, 0x1ef74bf2, 0x2120fb82, 0x234815ba, 0x256c6f9e,
+ 0x278dde6e, 0x29ac379f, 0x2bc750e8, 0x2ddf003f, 0x2ff31bdd, 0x32037a44,
+ 0x340ff241, 0x36185aee, 0x381c8bb5, 0x3a1c5c56, 0x3c17a4e7, 0x3e0e3ddb,
+ 0x3fffffff, 0x41ecc483, 0x43d464fa, 0x45b6bb5d, 0x4793a20f, 0x496af3e1,
+ 0x4b3c8c11, 0x4d084650, 0x4ecdfec6, 0x508d9210, 0x5246dd48, 0x53f9be04,
+ 0x55a6125a, 0x574bb8e5, 0x58ea90c2, 0x5a827999, 0x5c135399, 0x5d9cff82,
+ 0x5f1f5ea0, 0x609a52d1, 0x620dbe8a, 0x637984d3, 0x64dd894f, 0x6639b039,
+ 0x678dde6d, 0x68d9f963, 0x6a1de735, 0x6b598ea1, 0x6c8cd70a, 0x6db7a879,
+ 0x6ed9eba0, 0x6ff389de, 0x71046d3c, 0x720c8074, 0x730baeec, 0x7401e4bf,
+ 0x74ef0ebb, 0x75d31a5f, 0x76adf5e5, 0x777f903b, 0x7847d908, 0x7906c0af,
+ 0x79bc384c, 0x7a6831b8, 0x7b0a9f8c, 0x7ba3751c, 0x7c32a67c, 0x7cb82884,
+ 0x7d33f0c8, 0x7da5f5a3, 0x7e0e2e31, 0x7e6c924f, 0x7ec11aa3, 0x7f0bc095,
+ 0x7f4c7e52, 0x7f834ecf, 0x7fb02dc4, 0x7fd317b3, 0x7fec09e1, 0x7ffb025e,
+ 0x7fffffff
};
-
-/* a: 123 -> 123.0 */
-static inline fixp_t fixp_new(s16 a)
+/**
+ * __fixp_sin32() returns the sin of an angle in degrees
+ *
+ * @degrees: angle, in degrees, from 0 to 360.
+ *
+ * The returned value ranges from -0x7fffffff to +0x7fffffff.
+ */
+static inline s32 __fixp_sin32(int degrees)
{
- return a<<FRAC_N;
+ s32 ret;
+ bool negative = false;
+
+ if (degrees > 180) {
+ negative = true;
+ degrees -= 180;
+ }
+ if (degrees > 90)
+ degrees = 180 - degrees;
+
+ ret = sin_table[degrees];
+
+ return negative ? -ret : ret;
}
-/* a: 0xFFFF -> -1.0
- 0x8000 -> 1.0
- 0x0000 -> 0.0
-*/
-static inline fixp_t fixp_new16(s16 a)
+/**
+ * fixp_sin32() returns the sin of an angle in degrees
+ *
+ * @degrees: angle, in degrees. The angle can be positive or negative
+ *
+ * The returned value ranges from -0x7fffffff to +0x7fffffff.
+ */
+static inline s32 fixp_sin32(int degrees)
{
- return ((s32)a)>>(16-FRAC_N);
+ degrees = (degrees % 360 + 360) % 360;
+
+ return __fixp_sin32(degrees);
}
-static inline fixp_t fixp_cos(unsigned int degrees)
+/* cos(x) = sin(x + 90 degrees) */
+#define fixp_cos32(v) fixp_sin32((v) + 90)
+
+/*
+ * 16 bits variants
+ *
+ * The returned value ranges from -0x7fff to 0x7fff
+ */
+
+#define fixp_sin16(v) (fixp_sin32(v) >> 16)
+#define fixp_cos16(v) (fixp_cos32(v) >> 16)
+
+/**
+ * fixp_sin32_rad() - calculates the sin of an angle in radians
+ *
+ * @radians: angle, in radians
+ * @twopi: value to be used for 2*pi
+ *
+ * Provides a variant for the cases where just 360
+ * values are not enough. This function uses linear
+ * interpolation to a wider range of values given by
+ * the twopi argument.
+ *
+ * Experimental tests gave a maximum difference of
+ * 0.000038 between the value calculated by sin() and
+ * the one produced by this function, when twopi is
+ * equal to 360000. That seems to be enough precision
+ * for practical purposes.
+ *
+ * Please notice that too large values of twopi could cause
+ * overflows, so the routine will not allow values of twopi
+ * bigger than 2^18.
+ */
+static inline s32 fixp_sin32_rad(u32 radians, u32 twopi)
{
- int quadrant = (degrees / 90) & 3;
- unsigned int i = degrees % 90;
+ int degrees;
+ s32 v1, v2, dx, dy;
+ s64 tmp;
- if (quadrant == 1 || quadrant == 3)
- i = 90 - i;
+ /*
+ * Avoid too large values for twopi, as we don't want overflows.
+ */
+ BUG_ON(twopi > 1 << 18);
- i >>= 1;
+ degrees = (radians * 360) / twopi;
+ tmp = radians - (degrees * twopi) / 360;
- return (quadrant == 1 || quadrant == 2)? -cos_table[i] : cos_table[i];
+ degrees = (degrees % 360 + 360) % 360;
+ v1 = __fixp_sin32(degrees);
+
+ v2 = fixp_sin32(degrees + 1);
+
+ dx = twopi / 360;
+ dy = v2 - v1;
+
+ tmp *= dy;
+
+ return v1 + div_s64(tmp, dx);
}
-static inline fixp_t fixp_sin(unsigned int degrees)
-{
- return -fixp_cos(degrees + 90);
-}
+/* cos(x) = sin(x + pi/2 radians) */
-static inline fixp_t fixp_mult(fixp_t a, fixp_t b)
-{
- return ((s32)(a*b))>>FRAC_N;
-}
+#define fixp_cos32_rad(rad, twopi) \
+ fixp_sin32_rad(rad + twopi / 4, twopi)
#endif
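
The new fixed-point trigonometry helpers replace the old 8-bit-fraction cosine table with a 32-bit sine table and add a radians variant that linearly interpolates between adjacent one-degree entries. A hedged usage sketch (the angles and the twopi value are arbitrary examples):

#include <linux/fixp-arith.h>

/* Illustrative use of the new fixed-point helpers. */
static void example_fixp(void)
{
	s32 s, c;

	/* Whole degrees: results scaled so that 1.0 == 0x7fffffff. */
	s = fixp_sin32(30);		/* sin(30 deg) ~= 0x40000000, i.e. 0.5 */
	c = fixp_cos16(60);		/* 16-bit variant, ~0x4000 */

	/*
	 * Finer resolution: express the angle as a fraction of 2*pi.
	 * With twopi = 360000, 15000 corresponds to 15 degrees; the helper
	 * linearly interpolates between the 15 and 16 degree table entries.
	 */
	s = fixp_sin32_rad(15000, 360000);

	(void)s;
	(void)c;
}
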
diff --git a/include/linux/fs_pin.h b/include/linux/fs_pin.h
index 9dc4e03..3886b3b 100644
--- a/include/linux/fs_pin.h
+++ b/include/linux/fs_pin.h
@@ -13,6 +13,8 @@
static inline void init_fs_pin(struct fs_pin *p, void (*kill)(struct fs_pin *))
{
init_waitqueue_head(&p->wait);
+ INIT_HLIST_NODE(&p->s_list);
+ INIT_HLIST_NODE(&p->m_list);
p->kill = kill;
}
diff --git a/include/linux/gpio/consumer.h b/include/linux/gpio/consumer.h
index 45afc2d..3a7c9ff 100644
--- a/include/linux/gpio/consumer.h
+++ b/include/linux/gpio/consumer.h
@@ -16,6 +16,15 @@
*/
struct gpio_desc;
+/**
+ * Struct containing an array of descriptors that can be obtained using
+ * gpiod_get_array().
+ */
+struct gpio_descs {
+ unsigned int ndescs;
+ struct gpio_desc *desc[];
+};
+
#define GPIOD_FLAGS_BIT_DIR_SET BIT(0)
#define GPIOD_FLAGS_BIT_DIR_OUT BIT(1)
#define GPIOD_FLAGS_BIT_DIR_VAL BIT(2)
@@ -34,6 +43,9 @@
#ifdef CONFIG_GPIOLIB
+/* Return the number of GPIOs associated with a device / function */
+int gpiod_count(struct device *dev, const char *con_id);
+
/* Acquire and dispose GPIOs */
struct gpio_desc *__must_check __gpiod_get(struct device *dev,
const char *con_id,
@@ -49,7 +61,14 @@
const char *con_id,
unsigned int index,
enum gpiod_flags flags);
+struct gpio_descs *__must_check gpiod_get_array(struct device *dev,
+ const char *con_id,
+ enum gpiod_flags flags);
+struct gpio_descs *__must_check gpiod_get_array_optional(struct device *dev,
+ const char *con_id,
+ enum gpiod_flags flags);
void gpiod_put(struct gpio_desc *desc);
+void gpiod_put_array(struct gpio_descs *descs);
struct gpio_desc *__must_check __devm_gpiod_get(struct device *dev,
const char *con_id,
@@ -64,7 +83,14 @@
struct gpio_desc *__must_check
__devm_gpiod_get_index_optional(struct device *dev, const char *con_id,
unsigned int index, enum gpiod_flags flags);
+struct gpio_descs *__must_check devm_gpiod_get_array(struct device *dev,
+ const char *con_id,
+ enum gpiod_flags flags);
+struct gpio_descs *__must_check
+devm_gpiod_get_array_optional(struct device *dev, const char *con_id,
+ enum gpiod_flags flags);
void devm_gpiod_put(struct device *dev, struct gpio_desc *desc);
+void devm_gpiod_put_array(struct device *dev, struct gpio_descs *descs);
int gpiod_get_direction(struct gpio_desc *desc);
int gpiod_direction_input(struct gpio_desc *desc);
@@ -110,9 +136,15 @@
struct gpio_desc *fwnode_get_named_gpiod(struct fwnode_handle *fwnode,
const char *propname);
struct gpio_desc *devm_get_gpiod_from_child(struct device *dev,
+ const char *con_id,
struct fwnode_handle *child);
#else /* CONFIG_GPIOLIB */
+static inline int gpiod_count(struct device *dev, const char *con_id)
+{
+ return 0;
+}
+
static inline struct gpio_desc *__must_check __gpiod_get(struct device *dev,
const char *con_id,
enum gpiod_flags flags)
@@ -142,6 +174,20 @@
return ERR_PTR(-ENOSYS);
}
+static inline struct gpio_descs *__must_check
+gpiod_get_array(struct device *dev, const char *con_id,
+ enum gpiod_flags flags)
+{
+ return ERR_PTR(-ENOSYS);
+}
+
+static inline struct gpio_descs *__must_check
+gpiod_get_array_optional(struct device *dev, const char *con_id,
+ enum gpiod_flags flags)
+{
+ return ERR_PTR(-ENOSYS);
+}
+
static inline void gpiod_put(struct gpio_desc *desc)
{
might_sleep();
@@ -150,6 +196,14 @@
WARN_ON(1);
}
+static inline void gpiod_put_array(struct gpio_descs *descs)
+{
+ might_sleep();
+
+ /* GPIO can never have been requested */
+ WARN_ON(1);
+}
+
static inline struct gpio_desc *__must_check
__devm_gpiod_get(struct device *dev,
const char *con_id,
@@ -181,6 +235,20 @@
return ERR_PTR(-ENOSYS);
}
+static inline struct gpio_descs *__must_check
+devm_gpiod_get_array(struct device *dev, const char *con_id,
+ enum gpiod_flags flags)
+{
+ return ERR_PTR(-ENOSYS);
+}
+
+static inline struct gpio_descs *__must_check
+devm_gpiod_get_array_optional(struct device *dev, const char *con_id,
+ enum gpiod_flags flags)
+{
+ return ERR_PTR(-ENOSYS);
+}
+
static inline void devm_gpiod_put(struct device *dev, struct gpio_desc *desc)
{
might_sleep();
@@ -189,6 +257,15 @@
WARN_ON(1);
}
+static inline void devm_gpiod_put_array(struct device *dev,
+ struct gpio_descs *descs)
+{
+ might_sleep();
+
+ /* GPIO can never have been requested */
+ WARN_ON(1);
+}
+
static inline int gpiod_get_direction(const struct gpio_desc *desc)
{
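
The gpiod_get_array() family added above returns a struct gpio_descs holding ndescs descriptors, so consumers can request a whole group of GPIOs in one call. A hypothetical consumer (the "led" connection id is only an example) might look like:

#include <linux/device.h>
#include <linux/err.h>
#include <linux/gpio/consumer.h>

/* Hypothetical consumer: request all "led" GPIOs of a device at once. */
static int example_get_leds(struct device *dev)
{
	struct gpio_descs *leds;
	unsigned int i;

	leds = devm_gpiod_get_array(dev, "led", GPIOD_OUT_LOW);
	if (IS_ERR(leds))
		return PTR_ERR(leds);

	dev_info(dev, "%u led GPIOs\n", leds->ndescs);
	for (i = 0; i < leds->ndescs; i++)
		gpiod_set_value(leds->desc[i], 1);

	return 0;	/* the devm_ variant is released automatically */
}
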
diff --git a/include/linux/gpio/driver.h b/include/linux/gpio/driver.h
index c497c62..f1b3659 100644
--- a/include/linux/gpio/driver.h
+++ b/include/linux/gpio/driver.h
@@ -6,6 +6,7 @@
#include <linux/irq.h>
#include <linux/irqchip/chained_irq.h>
#include <linux/irqdomain.h>
+#include <linux/pinctrl/pinctrl.h>
struct device;
struct gpio_desc;
@@ -173,6 +174,53 @@
#endif /* CONFIG_GPIOLIB_IRQCHIP */
+#ifdef CONFIG_PINCTRL
+
+/**
+ * struct gpio_pin_range - pin range controlled by a gpio chip
+ * @head: list for maintaining set of pin ranges, used internally
+ * @pctldev: pinctrl device which handles corresponding pins
+ * @range: actual range of pins controlled by a gpio controller
+ */
+
+struct gpio_pin_range {
+ struct list_head node;
+ struct pinctrl_dev *pctldev;
+ struct pinctrl_gpio_range range;
+};
+
+int gpiochip_add_pin_range(struct gpio_chip *chip, const char *pinctl_name,
+ unsigned int gpio_offset, unsigned int pin_offset,
+ unsigned int npins);
+int gpiochip_add_pingroup_range(struct gpio_chip *chip,
+ struct pinctrl_dev *pctldev,
+ unsigned int gpio_offset, const char *pin_group);
+void gpiochip_remove_pin_ranges(struct gpio_chip *chip);
+
+#else
+
+static inline int
+gpiochip_add_pin_range(struct gpio_chip *chip, const char *pinctl_name,
+ unsigned int gpio_offset, unsigned int pin_offset,
+ unsigned int npins)
+{
+ return 0;
+}
+static inline int
+gpiochip_add_pingroup_range(struct gpio_chip *chip,
+ struct pinctrl_dev *pctldev,
+ unsigned int gpio_offset, const char *pin_group)
+{
+ return 0;
+}
+
+static inline void
+gpiochip_remove_pin_ranges(struct gpio_chip *chip)
+{
+}
+
+#endif /* CONFIG_PINCTRL */
+
struct gpio_desc *gpiochip_request_own_desc(struct gpio_chip *chip, u16 hwnum,
const char *label);
void gpiochip_free_own_desc(struct gpio_desc *desc);
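
The pin-range helpers moved into gpio/driver.h are stubbed out when CONFIG_PINCTRL is disabled, so a GPIO controller driver can call them unconditionally. A hedged sketch (the pinctrl name and pin numbers are made up):

#include <linux/gpio/driver.h>

/* Hypothetical: map GPIOs 0..15 of this chip onto pins 32..47 of "my_pinctrl". */
static int example_add_range(struct gpio_chip *chip)
{
	/*
	 * With CONFIG_PINCTRL disabled this is a stub returning 0, so the
	 * caller needs no #ifdef of its own.
	 */
	return gpiochip_add_pin_range(chip, "my_pinctrl", 0, 32, 16);
}
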
diff --git a/include/linux/host1x.h b/include/linux/host1x.h
index 464f338..d2ba7d3 100644
--- a/include/linux/host1x.h
+++ b/include/linux/host1x.h
@@ -135,6 +135,7 @@
u32 host1x_syncpt_id(struct host1x_syncpt *sp);
u32 host1x_syncpt_read_min(struct host1x_syncpt *sp);
u32 host1x_syncpt_read_max(struct host1x_syncpt *sp);
+u32 host1x_syncpt_read(struct host1x_syncpt *sp);
int host1x_syncpt_incr(struct host1x_syncpt *sp);
u32 host1x_syncpt_incr_max(struct host1x_syncpt *sp, u32 incrs);
int host1x_syncpt_wait(struct host1x_syncpt *sp, u32 thresh, long timeout,
diff --git a/include/linux/hsi/hsi.h b/include/linux/hsi/hsi.h
index 3ec0630..5dd60c2 100644
--- a/include/linux/hsi/hsi.h
+++ b/include/linux/hsi/hsi.h
@@ -135,9 +135,9 @@
* @device: Driver model representation of the device
* @tx_cfg: HSI TX configuration
* @rx_cfg: HSI RX configuration
- * e_handler: Callback for handling port events (RX Wake High/Low)
- * pclaimed: Keeps tracks if the clients claimed its associated HSI port
- * nb: Notifier block for port events
+ * @e_handler: Callback for handling port events (RX Wake High/Low)
+ * @pclaimed: Keeps track of whether the client claimed its associated HSI port
+ * @nb: Notifier block for port events
*/
struct hsi_client {
struct device device;
diff --git a/include/linux/hyperv.h b/include/linux/hyperv.h
index 5a2ba67..902c37a 100644
--- a/include/linux/hyperv.h
+++ b/include/linux/hyperv.h
@@ -646,12 +646,13 @@
};
struct vmbus_channel {
+ /* Unique channel id */
+ int id;
+
struct list_head listentry;
struct hv_device *device_obj;
- struct work_struct work;
-
enum vmbus_channel_state state;
struct vmbus_channel_offer_channel offermsg;
@@ -672,7 +673,6 @@
struct hv_ring_buffer_info outbound; /* send to parent */
struct hv_ring_buffer_info inbound; /* receive from parent */
spinlock_t inbound_lock;
- struct workqueue_struct *controlwq;
struct vmbus_close_msg close_msg;
@@ -758,6 +758,9 @@
* link up channels based on their CPU affinity.
*/
struct list_head percpu_list;
+
+ int num_sc;
+ int next_oc;
};
static inline void set_channel_read_state(struct vmbus_channel *c, bool state)
@@ -861,6 +864,14 @@
enum vmbus_packet_type type,
u32 flags);
+extern int vmbus_sendpacket_ctl(struct vmbus_channel *channel,
+ void *buffer,
+ u32 bufferLen,
+ u64 requestid,
+ enum vmbus_packet_type type,
+ u32 flags,
+ bool kick_q);
+
extern int vmbus_sendpacket_pagebuffer(struct vmbus_channel *channel,
struct hv_page_buffer pagebuffers[],
u32 pagecount,
@@ -868,6 +879,15 @@
u32 bufferlen,
u64 requestid);
+extern int vmbus_sendpacket_pagebuffer_ctl(struct vmbus_channel *channel,
+ struct hv_page_buffer pagebuffers[],
+ u32 pagecount,
+ void *buffer,
+ u32 bufferlen,
+ u64 requestid,
+ u32 flags,
+ bool kick_q);
+
extern int vmbus_sendpacket_multipagebuffer(struct vmbus_channel *channel,
struct hv_multipage_buffer *mpb,
void *buffer,
@@ -1107,6 +1127,16 @@
}
/*
+ * NetworkDirect. This is the guest RDMA service.
+ * {8c2eaf3d-32a7-4b09-ab99-bd1f1c86b501}
+ */
+#define HV_ND_GUID \
+ .guid = { \
+ 0x3d, 0xaf, 0x2e, 0x8c, 0xa7, 0x32, 0x09, 0x4b, \
+ 0xab, 0x99, 0xbd, 0x1f, 0x1c, 0x86, 0xb5, 0x01 \
+ }
+
+/*
* Common header for Hyper-V ICs
*/
@@ -1213,6 +1243,7 @@
int hv_vss_init(struct hv_util_service *);
void hv_vss_deinit(void);
void hv_vss_onchannelcallback(void *);
+void hv_process_channel_removal(struct vmbus_channel *channel, u32 relid);
extern struct resource hyperv_mmio;
diff --git a/include/linux/io.h b/include/linux/io.h
index 4cc299c..986f2bf 100644
--- a/include/linux/io.h
+++ b/include/linux/io.h
@@ -72,6 +72,8 @@
resource_size_t size);
void __iomem *devm_ioremap_nocache(struct device *dev, resource_size_t offset,
resource_size_t size);
+void __iomem *devm_ioremap_wc(struct device *dev, resource_size_t offset,
+ resource_size_t size);
void devm_iounmap(struct device *dev, void __iomem *addr);
int check_signature(const volatile void __iomem *io_addr,
const unsigned char *signature, int length);
diff --git a/include/linux/iommu-common.h b/include/linux/iommu-common.h
index 6be5c86..bbced83 100644
--- a/include/linux/iommu-common.h
+++ b/include/linux/iommu-common.h
@@ -15,41 +15,37 @@
spinlock_t lock;
};
-struct iommu_table;
-
-struct iommu_tbl_ops {
- unsigned long (*cookie_to_index)(u64, void *);
- void (*demap)(void *, unsigned long, unsigned long);
- void (*reset)(struct iommu_table *);
-};
-
-struct iommu_table {
- unsigned long page_table_map_base;
- unsigned long page_table_shift;
+struct iommu_map_table {
+ unsigned long table_map_base;
+ unsigned long table_shift;
unsigned long nr_pools;
- const struct iommu_tbl_ops *iommu_tbl_ops;
+ void (*lazy_flush)(struct iommu_map_table *);
unsigned long poolsize;
- struct iommu_pool arena_pool[IOMMU_NR_POOLS];
+ struct iommu_pool pools[IOMMU_NR_POOLS];
u32 flags;
#define IOMMU_HAS_LARGE_POOL 0x00000001
+#define IOMMU_NO_SPAN_BOUND 0x00000002
+#define IOMMU_NEED_FLUSH 0x00000004
struct iommu_pool large_pool;
unsigned long *map;
};
-extern void iommu_tbl_pool_init(struct iommu_table *iommu,
+extern void iommu_tbl_pool_init(struct iommu_map_table *iommu,
unsigned long num_entries,
- u32 page_table_shift,
- const struct iommu_tbl_ops *iommu_tbl_ops,
- bool large_pool, u32 npools);
+ u32 table_shift,
+ void (*lazy_flush)(struct iommu_map_table *),
+ bool large_pool, u32 npools,
+ bool skip_span_boundary_check);
extern unsigned long iommu_tbl_range_alloc(struct device *dev,
- struct iommu_table *iommu,
+ struct iommu_map_table *iommu,
unsigned long npages,
unsigned long *handle,
- unsigned int pool_hash);
+ unsigned long mask,
+ unsigned int align_order);
-extern void iommu_tbl_range_free(struct iommu_table *iommu,
+extern void iommu_tbl_range_free(struct iommu_map_table *iommu,
u64 dma_addr, unsigned long npages,
- bool do_demap, void *demap_arg);
+ unsigned long entry);
#endif
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 38daa45..0546b87 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -51,9 +51,33 @@
bool force_aperture; /* DMA only allowed in mappable range? */
};
+/* Domain feature flags */
+#define __IOMMU_DOMAIN_PAGING (1U << 0) /* Support for iommu_map/unmap */
+#define __IOMMU_DOMAIN_DMA_API (1U << 1) /* Domain for use in DMA-API
+ implementation */
+#define __IOMMU_DOMAIN_PT (1U << 2) /* Domain is identity mapped */
+
+/*
+ * These are the possible domain types
+ *
+ * IOMMU_DOMAIN_BLOCKED - All DMA is blocked, can be used to isolate
+ * devices
+ * IOMMU_DOMAIN_IDENTITY - DMA addresses are system physical addresses
+ * IOMMU_DOMAIN_UNMANAGED - DMA mappings managed by IOMMU-API user, used
+ * for VMs
+ * IOMMU_DOMAIN_DMA - Internally used for DMA-API implementations.
+ * This flag allows IOMMU drivers to implement
+ * certain optimizations for these domains
+ */
+#define IOMMU_DOMAIN_BLOCKED (0U)
+#define IOMMU_DOMAIN_IDENTITY (__IOMMU_DOMAIN_PT)
+#define IOMMU_DOMAIN_UNMANAGED (__IOMMU_DOMAIN_PAGING)
+#define IOMMU_DOMAIN_DMA (__IOMMU_DOMAIN_PAGING | \
+ __IOMMU_DOMAIN_DMA_API)
+
struct iommu_domain {
+ unsigned type;
const struct iommu_ops *ops;
- void *priv;
iommu_fault_handler_t handler;
void *handler_token;
struct iommu_domain_geometry geometry;
@@ -113,8 +137,11 @@
*/
struct iommu_ops {
bool (*capable)(enum iommu_cap);
- int (*domain_init)(struct iommu_domain *domain);
- void (*domain_destroy)(struct iommu_domain *domain);
+
+ /* Domain allocation and freeing by the iommu driver */
+ struct iommu_domain *(*domain_alloc)(unsigned iommu_domain_type);
+ void (*domain_free)(struct iommu_domain *);
+
int (*attach_dev)(struct iommu_domain *domain, struct device *dev);
void (*detach_dev)(struct iommu_domain *domain, struct device *dev);
int (*map)(struct iommu_domain *domain, unsigned long iova,
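
With domain_init/domain_destroy replaced by domain_alloc/domain_free, an IOMMU driver allocates its own domain structure with struct iommu_domain embedded and returns a pointer to that member. A hedged sketch of the allocation side (struct example_domain is hypothetical, not a real driver):

#include <linux/iommu.h>
#include <linux/slab.h>

struct example_domain {
	/* driver-private page table state would live here */
	struct iommu_domain domain;
};

static struct iommu_domain *example_domain_alloc(unsigned type)
{
	struct example_domain *dom;

	/* This sketch only supports IOMMU-API managed domains. */
	if (type != IOMMU_DOMAIN_UNMANAGED)
		return NULL;

	dom = kzalloc(sizeof(*dom), GFP_KERNEL);
	if (!dom)
		return NULL;

	return &dom->domain;
}

static void example_domain_free(struct iommu_domain *domain)
{
	kfree(container_of(domain, struct example_domain, domain));
}
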
diff --git a/include/linux/jz4780-nemc.h b/include/linux/jz4780-nemc.h
new file mode 100644
index 0000000..e7f1cc7
--- /dev/null
+++ b/include/linux/jz4780-nemc.h
@@ -0,0 +1,43 @@
+/*
+ * JZ4780 NAND/external memory controller (NEMC)
+ *
+ * Copyright (c) 2015 Imagination Technologies
+ * Author: Alex Smith <alex@alex-smith.me.uk>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#ifndef __LINUX_JZ4780_NEMC_H__
+#define __LINUX_JZ4780_NEMC_H__
+
+#include <linux/types.h>
+
+struct device;
+
+/*
+ * Number of NEMC banks. Note that there are actually 6, but they are numbered
+ * from 1.
+ */
+#define JZ4780_NEMC_NUM_BANKS 7
+
+/**
+ * enum jz4780_nemc_bank_type - device types which can be connected to a bank
+ * @JZ4780_NEMC_BANK_SRAM: SRAM
+ * @JZ4780_NEMC_BANK_NAND: NAND
+ */
+enum jz4780_nemc_bank_type {
+ JZ4780_NEMC_BANK_SRAM,
+ JZ4780_NEMC_BANK_NAND,
+};
+
+extern unsigned int jz4780_nemc_num_banks(struct device *dev);
+
+extern void jz4780_nemc_set_type(struct device *dev, unsigned int bank,
+ enum jz4780_nemc_bank_type type);
+extern void jz4780_nemc_assert(struct device *dev, unsigned int bank,
+ bool assert);
+
+#endif /* __LINUX_JZ4780_NEMC_H__ */
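
The NEMC interface above exposes per-bank configuration to child devices (typically the NAND controller). A hypothetical child driver might use it as below; bank 1 is just an example, and the meaning of the assert line depends on the board wiring:

#include <linux/jz4780-nemc.h>

/* Hypothetical NAND child driver configuring its NEMC bank. */
static void example_nemc_setup(struct device *dev)
{
	unsigned int banks = jz4780_nemc_num_banks(dev);

	if (!banks)
		return;

	jz4780_nemc_set_type(dev, 1, JZ4780_NEMC_BANK_NAND);
	jz4780_nemc_assert(dev, 1, true);	/* drive bank 1's assert signal */
}
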
diff --git a/include/linux/kconfig.h b/include/linux/kconfig.h
index 63ca8da..b33c779 100644
--- a/include/linux/kconfig.h
+++ b/include/linux/kconfig.h
@@ -36,6 +36,15 @@
#define IS_MODULE(option) config_enabled(option##_MODULE)
/*
+ * IS_REACHABLE(CONFIG_FOO) evaluates to 1 if the currently compiled
+ * code can call a function defined in code compiled based on CONFIG_FOO.
+ * This is similar to IS_ENABLED(), but returns false when invoked from
+ * built-in code when CONFIG_FOO is set to 'm'.
+ */
+#define IS_REACHABLE(option) (config_enabled(option) || \
+ (config_enabled(option##_MODULE) && config_enabled(MODULE)))
+
+/*
* IS_ENABLED(CONFIG_FOO) evaluates to 1 if CONFIG_FOO is set to 'y' or 'm',
* 0 otherwise.
*/
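
IS_REACHABLE() differs from IS_ENABLED() only when the option is =m and the calling code is built in: the call below is compiled out in that case instead of producing an unresolvable symbol. CONFIG_FOO and foo_do_something() are placeholders, not real symbols:

#include <linux/kconfig.h>

/*
 * Hypothetical: CONFIG_FOO provides foo_do_something(). When this file is
 * built into the kernel but CONFIG_FOO=m, IS_ENABLED(CONFIG_FOO) is still 1
 * even though the symbol cannot be resolved; IS_REACHABLE(CONFIG_FOO) is 0
 * in that case, so the branch (and the call) is eliminated.
 */
static void example_use_foo(void)
{
	if (IS_REACHABLE(CONFIG_FOO))
		foo_do_something();
}
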
diff --git a/include/linux/mfd/arizona/core.h b/include/linux/mfd/arizona/core.h
index f970105..16a498f 100644
--- a/include/linux/mfd/arizona/core.h
+++ b/include/linux/mfd/arizona/core.h
@@ -127,7 +127,7 @@
struct regmap_irq_chip_data *aod_irq_chip;
struct regmap_irq_chip_data *irq_chip;
- bool hpdet_magic;
+ bool hpdet_clamp;
unsigned int hp_ena;
struct mutex clk_lock;
diff --git a/include/linux/miscdevice.h b/include/linux/miscdevice.h
index ee80dd7..819077c 100644
--- a/include/linux/miscdevice.h
+++ b/include/linux/miscdevice.h
@@ -52,6 +52,7 @@
#define MISC_DYNAMIC_MINOR 255
struct device;
+struct attribute_group;
struct miscdevice {
int minor;
@@ -60,6 +61,7 @@
struct list_head list;
struct device *parent;
struct device *this_device;
+ const struct attribute_group **groups;
const char *nodename;
umode_t mode;
};
diff --git a/include/linux/mount.h b/include/linux/mount.h
index c2c561d..f822c3c 100644
--- a/include/linux/mount.h
+++ b/include/linux/mount.h
@@ -61,6 +61,7 @@
#define MNT_DOOMED 0x1000000
#define MNT_SYNC_UMOUNT 0x2000000
#define MNT_MARKED 0x4000000
+#define MNT_UMOUNT 0x8000000
struct vfsmount {
struct dentry *mnt_root; /* root of the mounted tree */
@@ -92,6 +93,6 @@
extern void mnt_set_expiry(struct vfsmount *mnt, struct list_head *expiry_list);
extern void mark_mounts_for_expiry(struct list_head *mounts);
-extern dev_t name_to_dev_t(char *name);
+extern dev_t name_to_dev_t(const char *name);
#endif /* _LINUX_MOUNT_H */
diff --git a/include/linux/of.h b/include/linux/of.h
index 9bfcc18..5f124f6 100644
--- a/include/linux/of.h
+++ b/include/linux/of.h
@@ -622,6 +622,38 @@
return NULL;
}
+static inline int of_node_check_flag(struct device_node *n, unsigned long flag)
+{
+ return 0;
+}
+
+static inline int of_node_test_and_set_flag(struct device_node *n,
+ unsigned long flag)
+{
+ return 0;
+}
+
+static inline void of_node_set_flag(struct device_node *n, unsigned long flag)
+{
+}
+
+static inline void of_node_clear_flag(struct device_node *n, unsigned long flag)
+{
+}
+
+static inline int of_property_check_flag(struct property *p, unsigned long flag)
+{
+ return 0;
+}
+
+static inline void of_property_set_flag(struct property *p, unsigned long flag)
+{
+}
+
+static inline void of_property_clear_flag(struct property *p, unsigned long flag)
+{
+}
+
#define of_match_ptr(_ptr) NULL
#define of_match_node(_matches, _node) NULL
#endif /* CONFIG_OF */
diff --git a/include/linux/of_graph.h b/include/linux/of_graph.h
index befef42..7bc92e0 100644
--- a/include/linux/of_graph.h
+++ b/include/linux/of_graph.h
@@ -14,6 +14,8 @@
#ifndef __LINUX_OF_GRAPH_H
#define __LINUX_OF_GRAPH_H
+#include <linux/types.h>
+
/**
* struct of_endpoint - the OF graph endpoint data structure
* @port: identifier (value of reg property) of a port this endpoint belongs to
@@ -26,9 +28,21 @@
const struct device_node *local_node;
};
+/**
+ * for_each_endpoint_of_node - iterate over every endpoint in a device node
+ * @parent: parent device node containing ports and endpoints
+ * @child: loop variable pointing to the current endpoint node
+ *
+ * When breaking out of the loop, of_node_put(child) has to be called manually.
+ */
+#define for_each_endpoint_of_node(parent, child) \
+ for (child = of_graph_get_next_endpoint(parent, NULL); child != NULL; \
+ child = of_graph_get_next_endpoint(parent, child))
+
#ifdef CONFIG_OF
int of_graph_parse_endpoint(const struct device_node *node,
struct of_endpoint *endpoint);
+struct device_node *of_graph_get_port_by_id(struct device_node *node, u32 id);
struct device_node *of_graph_get_next_endpoint(const struct device_node *parent,
struct device_node *previous);
struct device_node *of_graph_get_remote_port_parent(
@@ -42,6 +56,12 @@
return -ENOSYS;
}
+static inline struct device_node *of_graph_get_port_by_id(
+ struct device_node *node, u32 id)
+{
+ return NULL;
+}
+
static inline struct device_node *of_graph_get_next_endpoint(
const struct device_node *parent,
struct device_node *previous)
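
for_each_endpoint_of_node() takes the reference handling of of_graph_get_next_endpoint() off the caller's hands except when leaving the loop early. A hedged sketch that stops at the endpoint of port 1 (the port number is arbitrary):

#include <linux/of.h>
#include <linux/of_graph.h>

/* Hypothetical: find the endpoint on port 1 and stop iterating early. */
static struct device_node *example_find_port1_ep(struct device_node *dev_node)
{
	struct device_node *ep;
	struct of_endpoint endpoint;

	for_each_endpoint_of_node(dev_node, ep) {
		if (of_graph_parse_endpoint(ep, &endpoint))
			continue;
		if (endpoint.port == 1)
			return ep;	/* still holds a reference; caller must of_node_put() */
	}

	return NULL;
}
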
diff --git a/include/linux/platform_data/dma-hsu.h b/include/linux/platform_data/dma-hsu.h
new file mode 100644
index 0000000..8a1f6a4
--- /dev/null
+++ b/include/linux/platform_data/dma-hsu.h
@@ -0,0 +1,25 @@
+/*
+ * Driver for the High Speed UART DMA
+ *
+ * Copyright (C) 2015 Intel Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef _PLATFORM_DATA_DMA_HSU_H
+#define _PLATFORM_DATA_DMA_HSU_H
+
+#include <linux/device.h>
+
+struct hsu_dma_slave {
+ struct device *dma_dev;
+ int chan_id;
+};
+
+struct hsu_dma_platform_data {
+ unsigned short nr_channels;
+};
+
+#endif /* _PLATFORM_DATA_DMA_HSU_H */
diff --git a/include/linux/platform_data/msm_serial_hs.h b/include/linux/platform_data/msm_serial_hs.h
deleted file mode 100644
index 98a2046..0000000
--- a/include/linux/platform_data/msm_serial_hs.h
+++ /dev/null
@@ -1,49 +0,0 @@
-/*
- * Copyright (C) 2008 Google, Inc.
- * Author: Nick Pelly <npelly@google.com>
- *
- * This software is licensed under the terms of the GNU General Public
- * License version 2, as published by the Free Software Foundation, and
- * may be copied, distributed, and modified under those terms.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- */
-
-#ifndef __ASM_ARCH_MSM_SERIAL_HS_H
-#define __ASM_ARCH_MSM_SERIAL_HS_H
-
-#include <linux/serial_core.h>
-
-/* API to request the uart clock off or on for low power management
- * Clients should call request_clock_off() when no uart data is expected,
- * and must call request_clock_on() before any further uart data can be
- * received. */
-extern void msm_hs_request_clock_off(struct uart_port *uport);
-extern void msm_hs_request_clock_on(struct uart_port *uport);
-
-/**
- * struct msm_serial_hs_platform_data
- * @rx_wakeup_irq: Rx activity irq
- * @rx_to_inject: extra character to be inserted to Rx tty on wakeup
- * @inject_rx: 1 = insert rx_to_inject. 0 = do not insert extra character
- * @exit_lpm_cb: function called before every Tx transaction
- *
- * This is an optional structure required for UART Rx GPIO IRQ based
- * wakeup from low power state. UART wakeup can be triggered by RX activity
- * (using a wakeup GPIO on the UART RX pin). This should only be used if
- * there is not a wakeup GPIO on the UART CTS, and the first RX byte is
- * known (eg., with the Bluetooth Texas Instruments HCILL protocol),
- * since the first RX byte will always be lost. RTS will be asserted even
- * while the UART is clocked off in this mode of operation.
- */
-struct msm_serial_hs_platform_data {
- int rx_wakeup_irq;
- unsigned char inject_rx_on_wakeup;
- char rx_to_inject;
- void (*exit_lpm_cb)(struct uart_port *);
-};
-
-#endif
diff --git a/include/linux/platform_data/serial-imx.h b/include/linux/platform_data/serial-imx.h
index 3cc2e3c..a938eba 100644
--- a/include/linux/platform_data/serial-imx.h
+++ b/include/linux/platform_data/serial-imx.h
@@ -20,14 +20,9 @@
#define ASMARM_ARCH_UART_H
#define IMXUART_HAVE_RTSCTS (1<<0)
-#define IMXUART_IRDA (1<<1)
struct imxuart_platform_data {
unsigned int flags;
- void (*irda_enable)(int enable);
- unsigned int irda_inv_rx:1;
- unsigned int irda_inv_tx:1;
- unsigned short transceiver_delay;
};
#endif
diff --git a/include/linux/remoteproc.h b/include/linux/remoteproc.h
index 9e7e745..78b8a9b 100644
--- a/include/linux/remoteproc.h
+++ b/include/linux/remoteproc.h
@@ -404,6 +404,7 @@
* @table_ptr: pointer to the resource table in effect
* @cached_table: copy of the resource table
* @table_csum: checksum of the resource table
+ * @has_iommu: flag to indicate if remote processor is behind an MMU
*/
struct rproc {
struct klist_node node;
@@ -435,6 +436,7 @@
struct resource_table *table_ptr;
struct resource_table *cached_table;
u32 table_csum;
+ bool has_iommu;
};
/* we currently support only two vrings per rvdev */
diff --git a/include/linux/serial_8250.h b/include/linux/serial_8250.h
index a8efa23..78097e7 100644
--- a/include/linux/serial_8250.h
+++ b/include/linux/serial_8250.h
@@ -60,6 +60,20 @@
};
struct uart_8250_dma;
+struct uart_8250_port;
+
+/**
+ * 8250 core driver operations
+ *
+ * @setup_irq() Set up irq handling. The universal 8250 driver links this
+ * @setup_irq() Set up irq handling. The universal 8250 driver links this
+ * port to the irq chain. Other drivers may @request_irq().
+ * @release_irq() Undo irq handling. The universal 8250 driver unlinks
+ * the port from the irq chain.
+ */
+struct uart_8250_ops {
+ int (*setup_irq)(struct uart_8250_port *);
+ void (*release_irq)(struct uart_8250_port *);
+};
/*
* This should be used by drivers which want to register
@@ -88,6 +102,8 @@
unsigned char canary; /* non-zero during system sleep
* if no_console_suspend
*/
+ unsigned char probe;
+#define UART_PROBE_RSA (1 << 0)
/*
* Some bits in registers are cleared on a read, so they must
@@ -100,6 +116,7 @@
unsigned char msr_saved_flags;
struct uart_8250_dma *dma;
+ const struct uart_8250_ops *ops;
/* 8250 specific callbacks */
int (*dl_read)(struct uart_8250_port *);
@@ -118,11 +135,8 @@
extern int early_serial_setup(struct uart_port *port);
-extern int serial8250_find_port(struct uart_port *p);
-extern int serial8250_find_port_for_earlycon(void);
extern unsigned int serial8250_early_in(struct uart_port *port, int offset);
extern void serial8250_early_out(struct uart_port *port, int offset, int value);
-extern int setup_early_serial8250_console(char *cmdline);
extern void serial8250_do_set_termios(struct uart_port *port,
struct ktermios *termios, struct ktermios *old);
extern int serial8250_do_startup(struct uart_port *port);
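
A driver-specific irq scheme would hook into the new ops roughly as sketched below (the my_* functions are placeholders; only the uart_8250_ops wiring comes from the header):

    #include <linux/serial_8250.h>

    static int my_setup_irq(struct uart_8250_port *up)
    {
            /* e.g. request a dedicated irq instead of joining the shared 8250 chain */
            return 0;
    }

    static void my_release_irq(struct uart_8250_port *up)
    {
            /* undo whatever my_setup_irq() did */
    }

    static const struct uart_8250_ops my_8250_ops = {
            .setup_irq   = my_setup_irq,
            .release_irq = my_release_irq,
    };

    /* at probe time: up->ops = &my_8250_ops; */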
diff --git a/include/linux/serial_core.h b/include/linux/serial_core.h
index d10965f..025dad9 100644
--- a/include/linux/serial_core.h
+++ b/include/linux/serial_core.h
@@ -235,7 +235,9 @@
const struct uart_ops *ops;
unsigned int custom_divisor;
unsigned int line; /* port index */
+ unsigned int minor;
resource_size_t mapbase; /* for ioremap */
+ resource_size_t mapsize;
struct device *dev; /* parent device */
unsigned char hub6; /* this should be in the 8250 driver */
unsigned char suspended;
@@ -336,24 +338,29 @@
char options[16]; /* e.g., 115200n8 */
unsigned int baud;
};
-int setup_earlycon(char *buf, const char *match,
- int (*setup)(struct earlycon_device *, const char *));
+struct earlycon_id {
+ char name[16];
+ int (*setup)(struct earlycon_device *, const char *options);
+} __aligned(32);
+
+extern int setup_earlycon(char *buf);
extern int of_setup_earlycon(unsigned long addr,
int (*setup)(struct earlycon_device *, const char *));
-#define EARLYCON_DECLARE(name, func) \
-static int __init name ## _setup_earlycon(char *buf) \
-{ \
- return setup_earlycon(buf, __stringify(name), func); \
-} \
-early_param("earlycon", name ## _setup_earlycon);
+#define EARLYCON_DECLARE(_name, func) \
+ static const struct earlycon_id __earlycon_##_name \
+ __used __section(__earlycon_table) \
+ = { .name = __stringify(_name), \
+ .setup = func }
#define OF_EARLYCON_DECLARE(name, compat, fn) \
_OF_DECLARE(earlycon, name, compat, fn, void *)
struct uart_port *uart_get_console(struct uart_port *ports, int nr,
struct console *c);
+int uart_parse_earlycon(char *p, unsigned char *iotype, unsigned long *addr,
+ char **options);
void uart_parse_options(char *options, int *baud, int *parity, int *bits,
int *flow);
int uart_set_options(struct uart_port *port, struct console *co, int baud,
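
With the table-based registration an early console no longer defines its own early_param handler; it only declares a setup function (a sketch, with the console name and the my_early_write hook assumed):

    static int __init my_early_console_setup(struct earlycon_device *device,
                                             const char *options)
    {
            device->con->write = my_early_write;    /* hypothetical polled write hook */
            return 0;
    }
    EARLYCON_DECLARE(myuart, my_early_console_setup);
    /* selected on the command line with earlycon=myuart,<addr>,<options> */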
diff --git a/include/linux/serial_mfd.h b/include/linux/serial_mfd.h
deleted file mode 100644
index 2b071e0..0000000
--- a/include/linux/serial_mfd.h
+++ /dev/null
@@ -1,47 +0,0 @@
-#ifndef _SERIAL_MFD_H_
-#define _SERIAL_MFD_H_
-
-/* HW register offset definition */
-#define UART_FOR 0x08
-#define UART_PS 0x0C
-#define UART_MUL 0x0D
-#define UART_DIV 0x0E
-
-#define HSU_GBL_IEN 0x0
-#define HSU_GBL_IST 0x4
-
-#define HSU_GBL_INT_BIT_PORT0 0x0
-#define HSU_GBL_INT_BIT_PORT1 0x1
-#define HSU_GBL_INT_BIT_PORT2 0x2
-#define HSU_GBL_INT_BIT_IRI 0x3
-#define HSU_GBL_INT_BIT_HDLC 0x4
-#define HSU_GBL_INT_BIT_DMA 0x5
-
-#define HSU_GBL_ISR 0x8
-#define HSU_GBL_DMASR 0x400
-#define HSU_GBL_DMAISR 0x404
-
-#define HSU_PORT_REG_OFFSET 0x80
-#define HSU_PORT0_REG_OFFSET 0x80
-#define HSU_PORT1_REG_OFFSET 0x100
-#define HSU_PORT2_REG_OFFSET 0x180
-#define HSU_PORT_REG_LENGTH 0x80
-
-#define HSU_DMA_CHANS_REG_OFFSET 0x500
-#define HSU_DMA_CHANS_REG_LENGTH 0x40
-
-#define HSU_CH_SR 0x0 /* channel status reg */
-#define HSU_CH_CR 0x4 /* control reg */
-#define HSU_CH_DCR 0x8 /* descriptor control reg */
-#define HSU_CH_BSR 0x10 /* max fifo buffer size reg */
-#define HSU_CH_MOTSR 0x14 /* minimum ocp transfer size */
-#define HSU_CH_D0SAR 0x20 /* desc 0 start addr */
-#define HSU_CH_D0TSR 0x24 /* desc 0 transfer size */
-#define HSU_CH_D1SAR 0x28
-#define HSU_CH_D1TSR 0x2C
-#define HSU_CH_D2SAR 0x30
-#define HSU_CH_D2TSR 0x34
-#define HSU_CH_D3SAR 0x38
-#define HSU_CH_D3TSR 0x3C
-
-#endif
diff --git a/include/linux/smp.h b/include/linux/smp.h
index be91db2..c441407 100644
--- a/include/linux/smp.h
+++ b/include/linux/smp.h
@@ -18,7 +18,7 @@
struct llist_node llist;
smp_call_func_t func;
void *info;
- u16 flags;
+ unsigned int flags;
};
/* total number of cpus in this system (may exceed NR_CPUS) */
diff --git a/include/media/adv7604.h b/include/media/adv7604.h
index aa1c447..9ecf353 100644
--- a/include/media/adv7604.h
+++ b/include/media/adv7604.h
@@ -47,16 +47,16 @@
};
/* Input Color Space (IO register 0x02, [7:4]) */
-enum adv7604_inp_color_space {
- ADV7604_INP_COLOR_SPACE_LIM_RGB = 0,
- ADV7604_INP_COLOR_SPACE_FULL_RGB = 1,
- ADV7604_INP_COLOR_SPACE_LIM_YCbCr_601 = 2,
- ADV7604_INP_COLOR_SPACE_LIM_YCbCr_709 = 3,
- ADV7604_INP_COLOR_SPACE_XVYCC_601 = 4,
- ADV7604_INP_COLOR_SPACE_XVYCC_709 = 5,
- ADV7604_INP_COLOR_SPACE_FULL_YCbCr_601 = 6,
- ADV7604_INP_COLOR_SPACE_FULL_YCbCr_709 = 7,
- ADV7604_INP_COLOR_SPACE_AUTO = 0xf,
+enum adv76xx_inp_color_space {
+ ADV76XX_INP_COLOR_SPACE_LIM_RGB = 0,
+ ADV76XX_INP_COLOR_SPACE_FULL_RGB = 1,
+ ADV76XX_INP_COLOR_SPACE_LIM_YCbCr_601 = 2,
+ ADV76XX_INP_COLOR_SPACE_LIM_YCbCr_709 = 3,
+ ADV76XX_INP_COLOR_SPACE_XVYCC_601 = 4,
+ ADV76XX_INP_COLOR_SPACE_XVYCC_709 = 5,
+ ADV76XX_INP_COLOR_SPACE_FULL_YCbCr_601 = 6,
+ ADV76XX_INP_COLOR_SPACE_FULL_YCbCr_709 = 7,
+ ADV76XX_INP_COLOR_SPACE_AUTO = 0xf,
};
/* Select output format (IO register 0x03, [4:2]) */
@@ -66,38 +66,39 @@
ADV7604_OP_FORMAT_MODE2 = 0x08,
};
-enum adv7604_drive_strength {
- ADV7604_DR_STR_MEDIUM_LOW = 1,
- ADV7604_DR_STR_MEDIUM_HIGH = 2,
- ADV7604_DR_STR_HIGH = 3,
+enum adv76xx_drive_strength {
+ ADV76XX_DR_STR_MEDIUM_LOW = 1,
+ ADV76XX_DR_STR_MEDIUM_HIGH = 2,
+ ADV76XX_DR_STR_HIGH = 3,
};
-enum adv7604_int1_config {
- ADV7604_INT1_CONFIG_OPEN_DRAIN,
- ADV7604_INT1_CONFIG_ACTIVE_LOW,
- ADV7604_INT1_CONFIG_ACTIVE_HIGH,
- ADV7604_INT1_CONFIG_DISABLED,
+/* INT1 Configuration (IO register 0x40, [1:0]) */
+enum adv76xx_int1_config {
+ ADV76XX_INT1_CONFIG_OPEN_DRAIN,
+ ADV76XX_INT1_CONFIG_ACTIVE_LOW,
+ ADV76XX_INT1_CONFIG_ACTIVE_HIGH,
+ ADV76XX_INT1_CONFIG_DISABLED,
};
-enum adv7604_page {
- ADV7604_PAGE_IO,
+enum adv76xx_page {
+ ADV76XX_PAGE_IO,
ADV7604_PAGE_AVLINK,
- ADV7604_PAGE_CEC,
- ADV7604_PAGE_INFOFRAME,
+ ADV76XX_PAGE_CEC,
+ ADV76XX_PAGE_INFOFRAME,
ADV7604_PAGE_ESDP,
ADV7604_PAGE_DPP,
- ADV7604_PAGE_AFE,
- ADV7604_PAGE_REP,
- ADV7604_PAGE_EDID,
- ADV7604_PAGE_HDMI,
- ADV7604_PAGE_TEST,
- ADV7604_PAGE_CP,
+ ADV76XX_PAGE_AFE,
+ ADV76XX_PAGE_REP,
+ ADV76XX_PAGE_EDID,
+ ADV76XX_PAGE_HDMI,
+ ADV76XX_PAGE_TEST,
+ ADV76XX_PAGE_CP,
ADV7604_PAGE_VDP,
- ADV7604_PAGE_MAX,
+ ADV76XX_PAGE_MAX,
};
/* Platform dependent definition */
-struct adv7604_platform_data {
+struct adv76xx_platform_data {
/* DIS_PWRDNB: 1 if the PWRDNB pin is unused and unconnected */
unsigned disable_pwrdnb:1;
@@ -116,7 +117,7 @@
enum adv7604_op_format_mode_sel op_format_mode_sel;
/* Configuration of the INT1 pin */
- enum adv7604_int1_config int1_config;
+ enum adv76xx_int1_config int1_config;
/* IO register 0x02 */
unsigned alt_gamma:1;
@@ -134,9 +135,9 @@
unsigned inv_llc_pol:1;
/* IO register 0x14 */
- enum adv7604_drive_strength dr_str_data;
- enum adv7604_drive_strength dr_str_clk;
- enum adv7604_drive_strength dr_str_sync;
+ enum adv76xx_drive_strength dr_str_data;
+ enum adv76xx_drive_strength dr_str_clk;
+ enum adv76xx_drive_strength dr_str_sync;
/* IO register 0x30 */
unsigned output_bus_lsb_to_msb:1;
@@ -145,11 +146,11 @@
unsigned hdmi_free_run_mode;
/* i2c addresses: 0 == use default */
- u8 i2c_addresses[ADV7604_PAGE_MAX];
+ u8 i2c_addresses[ADV76XX_PAGE_MAX];
};
-enum adv7604_pad {
- ADV7604_PAD_HDMI_PORT_A = 0,
+enum adv76xx_pad {
+ ADV76XX_PAD_HDMI_PORT_A = 0,
ADV7604_PAD_HDMI_PORT_B = 1,
ADV7604_PAD_HDMI_PORT_C = 2,
ADV7604_PAD_HDMI_PORT_D = 3,
@@ -158,7 +159,7 @@
/* The source pad is either 1 (ADV7611) or 6 (ADV7604) */
ADV7604_PAD_SOURCE = 6,
ADV7611_PAD_SOURCE = 1,
- ADV7604_PAD_MAX = 7,
+ ADV76XX_PAD_MAX = 7,
};
#define V4L2_CID_ADV_RX_ANALOG_SAMPLING_PHASE (V4L2_CID_DV_CLASS_BASE + 0x1000)
@@ -166,7 +167,7 @@
#define V4L2_CID_ADV_RX_FREE_RUN_COLOR (V4L2_CID_DV_CLASS_BASE + 0x1002)
/* notify events */
-#define ADV7604_HOTPLUG 1
-#define ADV7604_FMT_CHANGE 2
+#define ADV76XX_HOTPLUG 1
+#define ADV76XX_FMT_CHANGE 2
#endif
diff --git a/include/media/davinci/vpfe_capture.h b/include/media/davinci/vpfe_capture.h
index 288772e..28bcd71 100644
--- a/include/media/davinci/vpfe_capture.h
+++ b/include/media/davinci/vpfe_capture.h
@@ -102,7 +102,7 @@
struct vpfe_device {
/* V4l2 specific parameters */
/* Identifies video device for this channel */
- struct video_device *video_dev;
+ struct video_device video_dev;
/* sub devices */
struct v4l2_subdev **sd;
/* vpfe cfg */
diff --git a/include/media/media-entity.h b/include/media/media-entity.h
index e004591..0c003d8 100644
--- a/include/media/media-entity.h
+++ b/include/media/media-entity.h
@@ -44,6 +44,15 @@
unsigned long flags; /* Pad flags (MEDIA_PAD_FL_*) */
};
+/**
+ * struct media_entity_operations - Media entity operations
+ * @link_setup: Notify the entity of link changes. The operation can
+ * return an error, in which case link setup will be
+ * cancelled. Optional.
+ * @link_validate: Return whether a link is valid from the entity's point of
+ * @link_validate: Return whether a link is valid from the entity's point of
+ * view. The media_entity_pipeline_start() function
+ * validates all links by calling this operation. Optional.
+ */
struct media_entity_operations {
int (*link_setup)(struct media_entity *entity,
const struct media_pad *local,
@@ -87,17 +96,7 @@
struct {
u32 major;
u32 minor;
- } v4l;
- struct {
- u32 major;
- u32 minor;
- } fb;
- struct {
- u32 card;
- u32 device;
- u32 subdevice;
- } alsa;
- int dvb;
+ } dev;
/* Sub-device specifications */
/* Nothing needed yet */
diff --git a/include/media/mt9p031.h b/include/media/mt9p031.h
index b1e63f2b..1ba3612 100644
--- a/include/media/mt9p031.h
+++ b/include/media/mt9p031.h
@@ -5,12 +5,10 @@
/*
* struct mt9p031_platform_data - MT9P031 platform data
- * @reset: Chip reset GPIO (set to -1 if not used)
* @ext_freq: Input clock frequency
* @target_freq: Pixel clock frequency
*/
struct mt9p031_platform_data {
- int reset;
int ext_freq;
int target_freq;
};
diff --git a/include/media/omap3isp.h b/include/media/omap3isp.h
index 398279d..048f8f9 100644
--- a/include/media/omap3isp.h
+++ b/include/media/omap3isp.h
@@ -45,7 +45,7 @@
};
/**
- * struct isp_parallel_platform_data - Parallel interface platform data
+ * struct isp_parallel_cfg - Parallel interface configuration
* @data_lane_shift: Data lane shifter
* ISP_LANE_SHIFT_0 - CAMEXT[13:0] -> CAM[13:0]
* ISP_LANE_SHIFT_2 - CAMEXT[13:2] -> CAM[11:0]
@@ -62,7 +62,7 @@
* @data_pol: Data polarity
* 0 - Normal, 1 - One's complement
*/
-struct isp_parallel_platform_data {
+struct isp_parallel_cfg {
unsigned int data_lane_shift:2;
unsigned int clk_pol:1;
unsigned int hs_pol:1;
@@ -105,7 +105,7 @@
};
/**
- * struct isp_ccp2_platform_data - CCP2 interface platform data
+ * struct isp_ccp2_cfg - CCP2 interface configuration
* @strobe_clk_pol: Strobe/clock polarity
* 0 - Non Inverted, 1 - Inverted
* @crc: Enable the cyclic redundancy check
@@ -117,7 +117,7 @@
* ISP_CCP2_PHY_DATA_STROBE - Data/strobe physical layer
* @vpclk_div: Video port output clock control
*/
-struct isp_ccp2_platform_data {
+struct isp_ccp2_cfg {
unsigned int strobe_clk_pol:1;
unsigned int crc:1;
unsigned int ccp2_mode:1;
@@ -127,39 +127,31 @@
};
/**
- * struct isp_csi2_platform_data - CSI2 interface platform data
+ * struct isp_csi2_cfg - CSI2 interface configuration
* @crc: Enable the cyclic redundancy check
- * @vpclk_div: Video port output clock control
*/
-struct isp_csi2_platform_data {
+struct isp_csi2_cfg {
unsigned crc:1;
- unsigned vpclk_div:2;
struct isp_csiphy_lanes_cfg lanecfg;
};
-struct isp_subdev_i2c_board_info {
- struct i2c_board_info *board_info;
- int i2c_adapter_id;
-};
-
-struct isp_v4l2_subdevs_group {
- struct isp_subdev_i2c_board_info *subdevs;
+struct isp_bus_cfg {
enum isp_interface_type interface;
union {
- struct isp_parallel_platform_data parallel;
- struct isp_ccp2_platform_data ccp2;
- struct isp_csi2_platform_data csi2;
+ struct isp_parallel_cfg parallel;
+ struct isp_ccp2_cfg ccp2;
+ struct isp_csi2_cfg csi2;
} bus; /* gcc < 4.6.0 chokes on anonymous union initializers */
};
-struct isp_platform_xclk {
- const char *dev_id;
- const char *con_id;
+struct isp_platform_subdev {
+ struct i2c_board_info *board_info;
+ int i2c_adapter_id;
+ struct isp_bus_cfg *bus;
};
struct isp_platform_data {
- struct isp_platform_xclk xclks[2];
- struct isp_v4l2_subdevs_group *subdevs;
+ struct isp_platform_subdev *subdevs;
void (*set_constraints)(struct isp_device *isp, bool enable);
};
diff --git a/include/media/ov2659.h b/include/media/ov2659.h
new file mode 100644
index 0000000..4216adc
--- /dev/null
+++ b/include/media/ov2659.h
@@ -0,0 +1,34 @@
+/*
+ * Omnivision OV2659 CMOS Image Sensor driver
+ *
+ * Copyright (C) 2015 Texas Instruments, Inc.
+ *
+ * Benoit Parrot <bparrot@ti.com>
+ * Lad, Prabhakar <prabhakar.csengg@gmail.com>
+ *
+ * This program is free software; you may redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 of the License.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#ifndef OV2659_H
+#define OV2659_H
+
+/**
+ * struct ov2659_platform_data - ov2659 driver platform data
+ * @link_frequency: target pixel clock frequency
+ */
+struct ov2659_platform_data {
+ s64 link_frequency;
+};
+
+#endif /* OV2659_H */
diff --git a/include/media/saa7146_vv.h b/include/media/saa7146_vv.h
index 944ecdf..92766f7 100644
--- a/include/media/saa7146_vv.h
+++ b/include/media/saa7146_vv.h
@@ -178,8 +178,8 @@
};
/* from saa7146_fops.c */
-int saa7146_register_device(struct video_device **vid, struct saa7146_dev* dev, char *name, int type);
-int saa7146_unregister_device(struct video_device **vid, struct saa7146_dev* dev);
+int saa7146_register_device(struct video_device *vid, struct saa7146_dev *dev, char *name, int type);
+int saa7146_unregister_device(struct video_device *vid, struct saa7146_dev *dev);
void saa7146_buffer_finish(struct saa7146_dev *dev, struct saa7146_dmaqueue *q, int state);
void saa7146_buffer_next(struct saa7146_dev *dev, struct saa7146_dmaqueue *q,int vbi);
int saa7146_buffer_queue(struct saa7146_dev *dev, struct saa7146_dmaqueue *q, struct saa7146_buf *buf);
diff --git a/include/media/v4l2-clk.h b/include/media/v4l2-clk.h
index 0b36cc1..3ef6e3d 100644
--- a/include/media/v4l2-clk.h
+++ b/include/media/v4l2-clk.h
@@ -22,14 +22,15 @@
struct module;
struct device;
+struct clk;
struct v4l2_clk {
struct list_head list;
const struct v4l2_clk_ops *ops;
const char *dev_id;
- const char *id;
int enable;
struct mutex lock; /* Protect the enable count */
atomic_t use_count;
+ struct clk *clk;
void *priv;
};
@@ -43,7 +44,7 @@
struct v4l2_clk *v4l2_clk_register(const struct v4l2_clk_ops *ops,
const char *dev_name,
- const char *name, void *priv);
+ void *priv);
void v4l2_clk_unregister(struct v4l2_clk *clk);
struct v4l2_clk *v4l2_clk_get(struct device *dev, const char *id);
void v4l2_clk_put(struct v4l2_clk *clk);
@@ -55,14 +56,13 @@
struct module;
struct v4l2_clk *__v4l2_clk_register_fixed(const char *dev_id,
- const char *id, unsigned long rate, struct module *owner);
+ unsigned long rate, struct module *owner);
void v4l2_clk_unregister_fixed(struct v4l2_clk *clk);
static inline struct v4l2_clk *v4l2_clk_register_fixed(const char *dev_id,
- const char *id,
unsigned long rate)
{
- return __v4l2_clk_register_fixed(dev_id, id, rate, THIS_MODULE);
+ return __v4l2_clk_register_fixed(dev_id, rate, THIS_MODULE);
}
#define v4l2_clk_name_i2c(name, size, adap, client) snprintf(name, size, \
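
After dropping the clock "id", registration keys only on the consumer device name; a host driver registering a fixed rate for a sensor might do something like this sketch (the i2c client name and rate are illustrative):

    #include <linux/err.h>
    #include <media/v4l2-clk.h>

    static struct v4l2_clk *example_register_sensor_clk(void)
    {
            /* "1-0010" is an assumed i2c device name; 24 MHz is an assumed rate */
            return v4l2_clk_register_fixed("1-0010", 24000000);
    }
    /* later: v4l2_clk_unregister_fixed(clk); */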
diff --git a/include/media/v4l2-dev.h b/include/media/v4l2-dev.h
index 3e4fddf..acbcd2f 100644
--- a/include/media/v4l2-dev.h
+++ b/include/media/v4l2-dev.h
@@ -65,7 +65,6 @@
ssize_t (*read) (struct file *, char __user *, size_t, loff_t *);
ssize_t (*write) (struct file *, const char __user *, size_t, loff_t *);
unsigned int (*poll) (struct file *, struct poll_table_struct *);
- long (*ioctl) (struct file *, unsigned int, unsigned long);
long (*unlocked_ioctl) (struct file *, unsigned int, unsigned long);
#ifdef CONFIG_COMPAT
long (*compat_ioctl32) (struct file *, unsigned int, unsigned long);
diff --git a/include/media/v4l2-device.h b/include/media/v4l2-device.h
index ffb69da..9c58157 100644
--- a/include/media/v4l2-device.h
+++ b/include/media/v4l2-device.h
@@ -58,8 +58,6 @@
struct v4l2_ctrl_handler *ctrl_handler;
/* Device's priority state */
struct v4l2_prio_state prio;
- /* BKL replacement mutex. Temporary solution only. */
- struct mutex ioctl_lock;
/* Keep track of the references to this struct. */
struct kref ref;
/* Release function that is called when the ref count goes to 0. */
diff --git a/include/media/v4l2-ioctl.h b/include/media/v4l2-ioctl.h
index 8537983..8fbbd76 100644
--- a/include/media/v4l2-ioctl.h
+++ b/include/media/v4l2-ioctl.h
@@ -23,12 +23,6 @@
/* VIDIOC_QUERYCAP handler */
int (*vidioc_querycap)(struct file *file, void *fh, struct v4l2_capability *cap);
- /* Priority handling */
- int (*vidioc_g_priority) (struct file *file, void *fh,
- enum v4l2_priority *p);
- int (*vidioc_s_priority) (struct file *file, void *fh,
- enum v4l2_priority p);
-
/* VIDIOC_ENUM_FMT handlers */
int (*vidioc_enum_fmt_vid_cap) (struct file *file, void *fh,
struct v4l2_fmtdesc *f);
diff --git a/include/media/v4l2-of.h b/include/media/v4l2-of.h
index 70fa7b7..f831c9c 100644
--- a/include/media/v4l2-of.h
+++ b/include/media/v4l2-of.h
@@ -29,12 +29,15 @@
* @data_lanes: an array of physical data lane indexes
* @clock_lane: physical lane index of the clock lane
* @num_data_lanes: number of data lanes
+ * @lane_polarities: polarity of the lanes. The order is the same as that of
+ * the physical lanes.
*/
struct v4l2_of_bus_mipi_csi2 {
unsigned int flags;
unsigned char data_lanes[4];
unsigned char clock_lane;
unsigned short num_data_lanes;
+ bool lane_polarities[5];
};
/**
@@ -66,9 +69,26 @@
struct list_head head;
};
+/**
+ * struct v4l2_of_link - a link between two endpoints
+ * @local_node: pointer to device_node of this endpoint
+ * @local_port: identifier of the port this endpoint belongs to
+ * @remote_node: pointer to device_node of the remote endpoint
+ * @remote_port: identifier of the port the remote endpoint belongs to
+ */
+struct v4l2_of_link {
+ struct device_node *local_node;
+ unsigned int local_port;
+ struct device_node *remote_node;
+ unsigned int remote_port;
+};
+
#ifdef CONFIG_OF
int v4l2_of_parse_endpoint(const struct device_node *node,
struct v4l2_of_endpoint *endpoint);
+int v4l2_of_parse_link(const struct device_node *node,
+ struct v4l2_of_link *link);
+void v4l2_of_put_link(struct v4l2_of_link *link);
#else /* CONFIG_OF */
static inline int v4l2_of_parse_endpoint(const struct device_node *node,
@@ -77,6 +97,16 @@
return -ENOSYS;
}
+static inline int v4l2_of_parse_link(const struct device_node *node,
+ struct v4l2_of_link *link)
+{
+ return -ENOSYS;
+}
+
+static inline void v4l2_of_put_link(struct v4l2_of_link *link)
+{
+}
+
#endif /* CONFIG_OF */
#endif /* _V4L2_OF_H */
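
A bridge driver resolving the far end of an OF graph connection would use the new link helpers roughly as below (error handling trimmed; how the endpoint node is obtained is assumed):

    #include <media/v4l2-of.h>

    static int example_parse_remote(struct device_node *endpoint)
    {
            struct v4l2_of_link link;
            int ret;

            ret = v4l2_of_parse_link(endpoint, &link);
            if (ret)
                    return ret;

            /* link.remote_node / link.remote_port now describe the peer */
            v4l2_of_put_link(&link);        /* drops the node references taken above */
            return 0;
    }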
diff --git a/include/media/v4l2-subdev.h b/include/media/v4l2-subdev.h
index 5beeb87..2f0a345 100644
--- a/include/media/v4l2-subdev.h
+++ b/include/media/v4l2-subdev.h
@@ -332,8 +332,6 @@
struct v4l2_subdev_frame_interval *interval);
int (*s_frame_interval)(struct v4l2_subdev *sd,
struct v4l2_subdev_frame_interval *interval);
- int (*enum_framesizes)(struct v4l2_subdev *sd, struct v4l2_frmsizeenum *fsize);
- int (*enum_frameintervals)(struct v4l2_subdev *sd, struct v4l2_frmivalenum *fival);
int (*s_dv_timings)(struct v4l2_subdev *sd,
struct v4l2_dv_timings *timings);
int (*g_dv_timings)(struct v4l2_subdev *sd,
@@ -482,6 +480,18 @@
struct v4l2_subdev_ir_parameters *params);
};
+/*
+ * Used for storing subdev pad information. This structure only needs
+ * to be passed to the pad op if the 'which' field of the main argument
+ * is set to V4L2_SUBDEV_FORMAT_TRY. For V4L2_SUBDEV_FORMAT_ACTIVE it is
+ * safe to pass NULL.
+ */
+struct v4l2_subdev_pad_config {
+ struct v4l2_mbus_framefmt try_fmt;
+ struct v4l2_rect try_crop;
+ struct v4l2_rect try_compose;
+};
+
/**
* struct v4l2_subdev_pad_ops - v4l2-subdev pad level operations
* @get_frame_desc: get the current low level media bus frame parameters.
@@ -489,21 +499,26 @@
* may be adjusted by the subdev driver to device capabilities.
*/
struct v4l2_subdev_pad_ops {
- int (*enum_mbus_code)(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+ int (*enum_mbus_code)(struct v4l2_subdev *sd,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_mbus_code_enum *code);
int (*enum_frame_size)(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_frame_size_enum *fse);
int (*enum_frame_interval)(struct v4l2_subdev *sd,
- struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_frame_interval_enum *fie);
- int (*get_fmt)(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+ int (*get_fmt)(struct v4l2_subdev *sd,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *format);
- int (*set_fmt)(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+ int (*set_fmt)(struct v4l2_subdev *sd,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *format);
- int (*get_selection)(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+ int (*get_selection)(struct v4l2_subdev *sd,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_selection *sel);
- int (*set_selection)(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+ int (*set_selection)(struct v4l2_subdev *sd,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_selection *sel);
int (*get_edid)(struct v4l2_subdev *sd, struct v4l2_edid *edid);
int (*set_edid)(struct v4l2_subdev *sd, struct v4l2_edid *edid);
@@ -625,11 +640,7 @@
struct v4l2_subdev_fh {
struct v4l2_fh vfh;
#if defined(CONFIG_VIDEO_V4L2_SUBDEV_API)
- struct {
- struct v4l2_mbus_framefmt try_fmt;
- struct v4l2_rect try_crop;
- struct v4l2_rect try_compose;
- } *pad;
+ struct v4l2_subdev_pad_config *pad;
#endif
};
@@ -639,17 +650,17 @@
#if defined(CONFIG_VIDEO_V4L2_SUBDEV_API)
#define __V4L2_SUBDEV_MK_GET_TRY(rtype, fun_name, field_name) \
static inline struct rtype * \
- v4l2_subdev_get_try_##fun_name(struct v4l2_subdev_fh *fh, \
- unsigned int pad) \
+ fun_name(struct v4l2_subdev *sd, \
+ struct v4l2_subdev_pad_config *cfg, \
+ unsigned int pad) \
{ \
- BUG_ON(pad >= vdev_to_v4l2_subdev( \
- fh->vfh.vdev)->entity.num_pads); \
- return &fh->pad[pad].field_name; \
+ BUG_ON(pad >= sd->entity.num_pads); \
+ return &cfg[pad].field_name; \
}
-__V4L2_SUBDEV_MK_GET_TRY(v4l2_mbus_framefmt, format, try_fmt)
-__V4L2_SUBDEV_MK_GET_TRY(v4l2_rect, crop, try_crop)
-__V4L2_SUBDEV_MK_GET_TRY(v4l2_rect, compose, try_compose)
+__V4L2_SUBDEV_MK_GET_TRY(v4l2_mbus_framefmt, v4l2_subdev_get_try_format, try_fmt)
+__V4L2_SUBDEV_MK_GET_TRY(v4l2_rect, v4l2_subdev_get_try_crop, try_crop)
+__V4L2_SUBDEV_MK_GET_TRY(v4l2_rect, v4l2_subdev_get_try_compose, try_compose)
#endif
extern const struct v4l2_file_operations v4l2_subdev_fops;
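
Under the new prototypes a pad operation receives the pad config instead of the file handle; a get_fmt sketch (my_sensor_active_format() is a made-up driver helper, not part of the API):

    static int my_get_fmt(struct v4l2_subdev *sd,
                          struct v4l2_subdev_pad_config *cfg,
                          struct v4l2_subdev_format *format)
    {
            if (format->which == V4L2_SUBDEV_FORMAT_TRY)
                    format->format = *v4l2_subdev_get_try_format(sd, cfg, format->pad);
            else
                    format->format = my_sensor_active_format(sd);  /* driver-specific */
            return 0;
    }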
diff --git a/include/media/videobuf2-core.h b/include/media/videobuf2-core.h
index bd2cec2..a5790fd 100644
--- a/include/media/videobuf2-core.h
+++ b/include/media/videobuf2-core.h
@@ -134,17 +134,6 @@
};
/**
- * enum vb2_fileio_flags - flags for selecting a mode of the file io emulator,
- * by default the 'streaming' style is used by the file io emulator
- * @VB2_FILEIO_READ_ONCE: report EOF after reading the first buffer
- * @VB2_FILEIO_WRITE_IMMEDIATELY: queue buffer after each write() call
- */
-enum vb2_fileio_flags {
- VB2_FILEIO_READ_ONCE = (1 << 0),
- VB2_FILEIO_WRITE_IMMEDIATELY = (1 << 1),
-};
-
-/**
* enum vb2_buffer_state - current video buffer state
* @VB2_BUF_STATE_DEQUEUED: buffer under userspace control
* @VB2_BUF_STATE_PREPARING: buffer is being prepared in videobuf
@@ -346,7 +335,9 @@
*
* @type: queue type (see V4L2_BUF_TYPE_* in linux/videodev2.h
* @io_modes: supported io methods (see vb2_io_modes enum)
- * @io_flags: additional io flags (see vb2_fileio_flags enum)
+ * @fileio_read_once: report EOF after reading the first buffer
+ * @fileio_write_immediately: queue buffer after each write() call
+ * @allow_zero_bytesused: allow bytesused == 0 to be passed to the driver
* @lock: pointer to a mutex that protects the vb2_queue struct. The
* driver can set this to a mutex to let the v4l2 core serialize
* the queuing ioctls. If the driver wants to handle locking
@@ -396,7 +387,10 @@
struct vb2_queue {
enum v4l2_buf_type type;
unsigned int io_modes;
- unsigned int io_flags;
+ unsigned fileio_read_once:1;
+ unsigned fileio_write_immediately:1;
+ unsigned allow_zero_bytesused:1;
+
struct mutex *lock;
struct v4l2_fh *owner;
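
Drivers that previously set io_flags now set the individual bitfields when initializing the queue; shown in isolation here, with the rest of the probe code assumed:

    #include <media/videobuf2-core.h>

    static void example_init_queue(struct vb2_queue *q)
    {
            q->type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
            q->io_modes = VB2_MMAP | VB2_READ;
            q->fileio_read_once = 1;        /* was VB2_FILEIO_READ_ONCE in io_flags */
            q->allow_zero_bytesused = 1;    /* new opt-in for legacy userspace */
    }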
diff --git a/include/trace/events/clk.h b/include/trace/events/clk.h
new file mode 100644
index 0000000..7586072
--- /dev/null
+++ b/include/trace/events/clk.h
@@ -0,0 +1,198 @@
+/*
+ * Copyright (c) 2014-2015, The Linux Foundation. All rights reserved.
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM clk
+
+#if !defined(_TRACE_CLK_H) || defined(TRACE_HEADER_MULTI_READ)
+#define _TRACE_CLK_H
+
+#include <linux/tracepoint.h>
+
+struct clk_core;
+
+DECLARE_EVENT_CLASS(clk,
+
+ TP_PROTO(struct clk_core *core),
+
+ TP_ARGS(core),
+
+ TP_STRUCT__entry(
+ __string( name, core->name )
+ ),
+
+ TP_fast_assign(
+ __assign_str(name, core->name);
+ ),
+
+ TP_printk("%s", __get_str(name))
+);
+
+DEFINE_EVENT(clk, clk_enable,
+
+ TP_PROTO(struct clk_core *core),
+
+ TP_ARGS(core)
+);
+
+DEFINE_EVENT(clk, clk_enable_complete,
+
+ TP_PROTO(struct clk_core *core),
+
+ TP_ARGS(core)
+);
+
+DEFINE_EVENT(clk, clk_disable,
+
+ TP_PROTO(struct clk_core *core),
+
+ TP_ARGS(core)
+);
+
+DEFINE_EVENT(clk, clk_disable_complete,
+
+ TP_PROTO(struct clk_core *core),
+
+ TP_ARGS(core)
+);
+
+DEFINE_EVENT(clk, clk_prepare,
+
+ TP_PROTO(struct clk_core *core),
+
+ TP_ARGS(core)
+);
+
+DEFINE_EVENT(clk, clk_prepare_complete,
+
+ TP_PROTO(struct clk_core *core),
+
+ TP_ARGS(core)
+);
+
+DEFINE_EVENT(clk, clk_unprepare,
+
+ TP_PROTO(struct clk_core *core),
+
+ TP_ARGS(core)
+);
+
+DEFINE_EVENT(clk, clk_unprepare_complete,
+
+ TP_PROTO(struct clk_core *core),
+
+ TP_ARGS(core)
+);
+
+DECLARE_EVENT_CLASS(clk_rate,
+
+ TP_PROTO(struct clk_core *core, unsigned long rate),
+
+ TP_ARGS(core, rate),
+
+ TP_STRUCT__entry(
+ __string( name, core->name )
+ __field(unsigned long, rate )
+ ),
+
+ TP_fast_assign(
+ __assign_str(name, core->name);
+ __entry->rate = rate;
+ ),
+
+ TP_printk("%s %lu", __get_str(name), (unsigned long)__entry->rate)
+);
+
+DEFINE_EVENT(clk_rate, clk_set_rate,
+
+ TP_PROTO(struct clk_core *core, unsigned long rate),
+
+ TP_ARGS(core, rate)
+);
+
+DEFINE_EVENT(clk_rate, clk_set_rate_complete,
+
+ TP_PROTO(struct clk_core *core, unsigned long rate),
+
+ TP_ARGS(core, rate)
+);
+
+DECLARE_EVENT_CLASS(clk_parent,
+
+ TP_PROTO(struct clk_core *core, struct clk_core *parent),
+
+ TP_ARGS(core, parent),
+
+ TP_STRUCT__entry(
+ __string( name, core->name )
+ __string( pname, parent->name )
+ ),
+
+ TP_fast_assign(
+ __assign_str(name, core->name);
+ __assign_str(pname, parent->name);
+ ),
+
+ TP_printk("%s %s", __get_str(name), __get_str(pname))
+);
+
+DEFINE_EVENT(clk_parent, clk_set_parent,
+
+ TP_PROTO(struct clk_core *core, struct clk_core *parent),
+
+ TP_ARGS(core, parent)
+);
+
+DEFINE_EVENT(clk_parent, clk_set_parent_complete,
+
+ TP_PROTO(struct clk_core *core, struct clk_core *parent),
+
+ TP_ARGS(core, parent)
+);
+
+DECLARE_EVENT_CLASS(clk_phase,
+
+ TP_PROTO(struct clk_core *core, int phase),
+
+ TP_ARGS(core, phase),
+
+ TP_STRUCT__entry(
+ __string( name, core->name )
+ __field( int, phase )
+ ),
+
+ TP_fast_assign(
+ __assign_str(name, core->name);
+ __entry->phase = phase;
+ ),
+
+ TP_printk("%s %d", __get_str(name), (int)__entry->phase)
+);
+
+DEFINE_EVENT(clk_phase, clk_set_phase,
+
+ TP_PROTO(struct clk_core *core, int phase),
+
+ TP_ARGS(core, phase)
+);
+
+DEFINE_EVENT(clk_phase, clk_set_phase_complete,
+
+ TP_PROTO(struct clk_core *core, int phase),
+
+ TP_ARGS(core, phase)
+);
+
+#endif /* _TRACE_CLK_H */
+
+/* This part must be outside protection */
+#include <trace/define_trace.h>
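
The clk core is expected to emit these paired events around each operation; a hedged sketch of a call site (do_enable() stands in for the real enable sequence, which is not shown in this header):

    /* in the one .c file that instantiates the tracepoints, e.g. drivers/clk/clk.c */
    #define CREATE_TRACE_POINTS
    #include <trace/events/clk.h>

    static int clk_core_enable(struct clk_core *core)
    {
            int ret;

            trace_clk_enable(core);             /* records core->name */
            ret = do_enable(core);              /* placeholder for the actual enable path */
            trace_clk_enable_complete(core);
            return ret;
    }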
diff --git a/include/trace/events/f2fs.h b/include/trace/events/f2fs.h
index 36f4536..e202dec 100644
--- a/include/trace/events/f2fs.h
+++ b/include/trace/events/f2fs.h
@@ -44,7 +44,11 @@
{ NODE, "NODE" }, \
{ DATA, "DATA" }, \
{ META, "META" }, \
- { META_FLUSH, "META_FLUSH" })
+ { META_FLUSH, "META_FLUSH" }, \
+ { INMEM, "INMEM" }, \
+ { INMEM_DROP, "INMEM_DROP" }, \
+ { IPU, "IN-PLACE" }, \
+ { OPU, "OUT-OF-PLACE" })
#define F2FS_BIO_MASK(t) (t & (READA | WRITE_FLUSH_FUA))
#define F2FS_BIO_EXTRA_MASK(t) (t & (REQ_META | REQ_PRIO))
@@ -104,6 +108,7 @@
{ CP_UMOUNT, "Umount" }, \
{ CP_FASTBOOT, "Fastboot" }, \
{ CP_SYNC, "Sync" }, \
+ { CP_RECOVERY, "Recovery" }, \
{ CP_DISCARD, "Discard" })
struct victim_sel_policy;
@@ -884,6 +889,13 @@
TP_ARGS(page, type)
);
+DEFINE_EVENT(f2fs__page, f2fs_do_write_data_page,
+
+ TP_PROTO(struct page *page, int type),
+
+ TP_ARGS(page, type)
+);
+
DEFINE_EVENT(f2fs__page, f2fs_readpage,
TP_PROTO(struct page *page, int type),
@@ -905,6 +917,20 @@
TP_ARGS(page, type)
);
+DEFINE_EVENT(f2fs__page, f2fs_register_inmem_page,
+
+ TP_PROTO(struct page *page, int type),
+
+ TP_ARGS(page, type)
+);
+
+DEFINE_EVENT(f2fs__page, f2fs_commit_inmem_page,
+
+ TP_PROTO(struct page *page, int type),
+
+ TP_ARGS(page, type)
+);
+
TRACE_EVENT(f2fs_writepages,
TP_PROTO(struct inode *inode, struct writeback_control *wbc, int type),
@@ -1041,6 +1067,140 @@
__entry->nobarrier ? "skip (nobarrier)" : "issue",
__entry->flush_merge ? " with flush_merge" : "")
);
+
+TRACE_EVENT(f2fs_lookup_extent_tree_start,
+
+ TP_PROTO(struct inode *inode, unsigned int pgofs),
+
+ TP_ARGS(inode, pgofs),
+
+ TP_STRUCT__entry(
+ __field(dev_t, dev)
+ __field(ino_t, ino)
+ __field(unsigned int, pgofs)
+ ),
+
+ TP_fast_assign(
+ __entry->dev = inode->i_sb->s_dev;
+ __entry->ino = inode->i_ino;
+ __entry->pgofs = pgofs;
+ ),
+
+ TP_printk("dev = (%d,%d), ino = %lu, pgofs = %u",
+ show_dev_ino(__entry),
+ __entry->pgofs)
+);
+
+TRACE_EVENT_CONDITION(f2fs_lookup_extent_tree_end,
+
+ TP_PROTO(struct inode *inode, unsigned int pgofs,
+ struct extent_node *en),
+
+ TP_ARGS(inode, pgofs, en),
+
+ TP_CONDITION(en),
+
+ TP_STRUCT__entry(
+ __field(dev_t, dev)
+ __field(ino_t, ino)
+ __field(unsigned int, pgofs)
+ __field(unsigned int, fofs)
+ __field(u32, blk)
+ __field(unsigned int, len)
+ ),
+
+ TP_fast_assign(
+ __entry->dev = inode->i_sb->s_dev;
+ __entry->ino = inode->i_ino;
+ __entry->pgofs = pgofs;
+ __entry->fofs = en->ei.fofs;
+ __entry->blk = en->ei.blk;
+ __entry->len = en->ei.len;
+ ),
+
+ TP_printk("dev = (%d,%d), ino = %lu, pgofs = %u, "
+ "ext_info(fofs: %u, blk: %u, len: %u)",
+ show_dev_ino(__entry),
+ __entry->pgofs,
+ __entry->fofs,
+ __entry->blk,
+ __entry->len)
+);
+
+TRACE_EVENT(f2fs_update_extent_tree,
+
+ TP_PROTO(struct inode *inode, unsigned int pgofs, block_t blkaddr),
+
+ TP_ARGS(inode, pgofs, blkaddr),
+
+ TP_STRUCT__entry(
+ __field(dev_t, dev)
+ __field(ino_t, ino)
+ __field(unsigned int, pgofs)
+ __field(u32, blk)
+ ),
+
+ TP_fast_assign(
+ __entry->dev = inode->i_sb->s_dev;
+ __entry->ino = inode->i_ino;
+ __entry->pgofs = pgofs;
+ __entry->blk = blkaddr;
+ ),
+
+ TP_printk("dev = (%d,%d), ino = %lu, pgofs = %u, blkaddr = %u",
+ show_dev_ino(__entry),
+ __entry->pgofs,
+ __entry->blk)
+);
+
+TRACE_EVENT(f2fs_shrink_extent_tree,
+
+ TP_PROTO(struct f2fs_sb_info *sbi, unsigned int node_cnt,
+ unsigned int tree_cnt),
+
+ TP_ARGS(sbi, node_cnt, tree_cnt),
+
+ TP_STRUCT__entry(
+ __field(dev_t, dev)
+ __field(unsigned int, node_cnt)
+ __field(unsigned int, tree_cnt)
+ ),
+
+ TP_fast_assign(
+ __entry->dev = sbi->sb->s_dev;
+ __entry->node_cnt = node_cnt;
+ __entry->tree_cnt = tree_cnt;
+ ),
+
+ TP_printk("dev = (%d,%d), shrunk: node_cnt = %u, tree_cnt = %u",
+ show_dev(__entry),
+ __entry->node_cnt,
+ __entry->tree_cnt)
+);
+
+TRACE_EVENT(f2fs_destroy_extent_tree,
+
+ TP_PROTO(struct inode *inode, unsigned int node_cnt),
+
+ TP_ARGS(inode, node_cnt),
+
+ TP_STRUCT__entry(
+ __field(dev_t, dev)
+ __field(ino_t, ino)
+ __field(unsigned int, node_cnt)
+ ),
+
+ TP_fast_assign(
+ __entry->dev = inode->i_sb->s_dev;
+ __entry->ino = inode->i_ino;
+ __entry->node_cnt = node_cnt;
+ ),
+
+ TP_printk("dev = (%d,%d), ino = %lu, destroyed: node_cnt = %u",
+ show_dev_ino(__entry),
+ __entry->node_cnt)
+);
+
#endif /* _TRACE_F2FS_H */
/* This part must be outside protection */
diff --git a/include/trace/events/filemap.h b/include/trace/events/filemap.h
index 0421f49..42febb6 100644
--- a/include/trace/events/filemap.h
+++ b/include/trace/events/filemap.h
@@ -18,14 +18,14 @@
TP_ARGS(page),
TP_STRUCT__entry(
- __field(struct page *, page)
+ __field(unsigned long, pfn)
__field(unsigned long, i_ino)
__field(unsigned long, index)
__field(dev_t, s_dev)
),
TP_fast_assign(
- __entry->page = page;
+ __entry->pfn = page_to_pfn(page);
__entry->i_ino = page->mapping->host->i_ino;
__entry->index = page->index;
if (page->mapping->host->i_sb)
@@ -37,8 +37,8 @@
TP_printk("dev %d:%d ino %lx page=%p pfn=%lu ofs=%lu",
MAJOR(__entry->s_dev), MINOR(__entry->s_dev),
__entry->i_ino,
- __entry->page,
- page_to_pfn(__entry->page),
+ pfn_to_page(__entry->pfn),
+ __entry->pfn,
__entry->index << PAGE_SHIFT)
);
diff --git a/include/trace/events/kmem.h b/include/trace/events/kmem.h
index 4ad10ba..81ea598 100644
--- a/include/trace/events/kmem.h
+++ b/include/trace/events/kmem.h
@@ -154,18 +154,18 @@
TP_ARGS(page, order),
TP_STRUCT__entry(
- __field( struct page *, page )
+ __field( unsigned long, pfn )
__field( unsigned int, order )
),
TP_fast_assign(
- __entry->page = page;
+ __entry->pfn = page_to_pfn(page);
__entry->order = order;
),
TP_printk("page=%p pfn=%lu order=%d",
- __entry->page,
- page_to_pfn(__entry->page),
+ pfn_to_page(__entry->pfn),
+ __entry->pfn,
__entry->order)
);
@@ -176,18 +176,18 @@
TP_ARGS(page, cold),
TP_STRUCT__entry(
- __field( struct page *, page )
+ __field( unsigned long, pfn )
__field( int, cold )
),
TP_fast_assign(
- __entry->page = page;
+ __entry->pfn = page_to_pfn(page);
__entry->cold = cold;
),
TP_printk("page=%p pfn=%lu order=0 cold=%d",
- __entry->page,
- page_to_pfn(__entry->page),
+ pfn_to_page(__entry->pfn),
+ __entry->pfn,
__entry->cold)
);
@@ -199,22 +199,22 @@
TP_ARGS(page, order, gfp_flags, migratetype),
TP_STRUCT__entry(
- __field( struct page *, page )
+ __field( unsigned long, pfn )
__field( unsigned int, order )
__field( gfp_t, gfp_flags )
__field( int, migratetype )
),
TP_fast_assign(
- __entry->page = page;
+ __entry->pfn = page ? page_to_pfn(page) : -1UL;
__entry->order = order;
__entry->gfp_flags = gfp_flags;
__entry->migratetype = migratetype;
),
TP_printk("page=%p pfn=%lu order=%d migratetype=%d gfp_flags=%s",
- __entry->page,
- __entry->page ? page_to_pfn(__entry->page) : 0,
+ __entry->pfn != -1UL ? pfn_to_page(__entry->pfn) : NULL,
+ __entry->pfn != -1UL ? __entry->pfn : 0,
__entry->order,
__entry->migratetype,
show_gfp_flags(__entry->gfp_flags))
@@ -227,20 +227,20 @@
TP_ARGS(page, order, migratetype),
TP_STRUCT__entry(
- __field( struct page *, page )
+ __field( unsigned long, pfn )
__field( unsigned int, order )
__field( int, migratetype )
),
TP_fast_assign(
- __entry->page = page;
+ __entry->pfn = page ? page_to_pfn(page) : -1UL;
__entry->order = order;
__entry->migratetype = migratetype;
),
TP_printk("page=%p pfn=%lu order=%u migratetype=%d percpu_refill=%d",
- __entry->page,
- __entry->page ? page_to_pfn(__entry->page) : 0,
+ __entry->pfn != -1UL ? pfn_to_page(__entry->pfn) : NULL,
+ __entry->pfn != -1UL ? __entry->pfn : 0,
__entry->order,
__entry->migratetype,
__entry->order == 0)
@@ -260,7 +260,7 @@
TP_ARGS(page, order, migratetype),
TP_printk("page=%p pfn=%lu order=%d migratetype=%d",
- __entry->page, page_to_pfn(__entry->page),
+ pfn_to_page(__entry->pfn), __entry->pfn,
__entry->order, __entry->migratetype)
);
@@ -275,7 +275,7 @@
alloc_migratetype, fallback_migratetype),
TP_STRUCT__entry(
- __field( struct page *, page )
+ __field( unsigned long, pfn )
__field( int, alloc_order )
__field( int, fallback_order )
__field( int, alloc_migratetype )
@@ -284,7 +284,7 @@
),
TP_fast_assign(
- __entry->page = page;
+ __entry->pfn = page_to_pfn(page);
__entry->alloc_order = alloc_order;
__entry->fallback_order = fallback_order;
__entry->alloc_migratetype = alloc_migratetype;
@@ -294,8 +294,8 @@
),
TP_printk("page=%p pfn=%lu alloc_order=%d fallback_order=%d pageblock_order=%d alloc_migratetype=%d fallback_migratetype=%d fragmenting=%d change_ownership=%d",
- __entry->page,
- page_to_pfn(__entry->page),
+ pfn_to_page(__entry->pfn),
+ __entry->pfn,
__entry->alloc_order,
__entry->fallback_order,
pageblock_order,
diff --git a/include/trace/events/vmscan.h b/include/trace/events/vmscan.h
index 69590b6..f66476b 100644
--- a/include/trace/events/vmscan.h
+++ b/include/trace/events/vmscan.h
@@ -336,18 +336,18 @@
TP_ARGS(page, reclaim_flags),
TP_STRUCT__entry(
- __field(struct page *, page)
+ __field(unsigned long, pfn)
__field(int, reclaim_flags)
),
TP_fast_assign(
- __entry->page = page;
+ __entry->pfn = page_to_pfn(page);
__entry->reclaim_flags = reclaim_flags;
),
TP_printk("page=%p pfn=%lu flags=%s",
- __entry->page,
- page_to_pfn(__entry->page),
+ pfn_to_page(__entry->pfn),
+ __entry->pfn,
show_reclaim_flags(__entry->reclaim_flags))
);
diff --git a/include/uapi/drm/drm.h b/include/uapi/drm/drm.h
index 01b2d6d..ff6ef62 100644
--- a/include/uapi/drm/drm.h
+++ b/include/uapi/drm/drm.h
@@ -630,6 +630,7 @@
*/
#define DRM_CAP_CURSOR_WIDTH 0x8
#define DRM_CAP_CURSOR_HEIGHT 0x9
+#define DRM_CAP_ADDFB2_MODIFIERS 0x10
/** DRM_IOCTL_GET_CAP ioctl argument type */
struct drm_get_cap {
diff --git a/include/uapi/drm/drm_fourcc.h b/include/uapi/drm/drm_fourcc.h
index a284f11..0773582 100644
--- a/include/uapi/drm/drm_fourcc.h
+++ b/include/uapi/drm/drm_fourcc.h
@@ -129,4 +129,82 @@
#define DRM_FORMAT_YUV444 fourcc_code('Y', 'U', '2', '4') /* non-subsampled Cb (1) and Cr (2) planes */
#define DRM_FORMAT_YVU444 fourcc_code('Y', 'V', '2', '4') /* non-subsampled Cr (1) and Cb (2) planes */
+
+/*
+ * Format Modifiers:
+ *
+ * Format modifiers describe, typically, a re-ordering or modification
+ * of the data in a plane of an FB. This can be used to express tiled/
+ * swizzled formats, or compression, or a combination of the two.
+ *
+ * The upper 8 bits of the format modifier are a vendor-id as assigned
+ * below. The lower 56 bits are assigned as the vendor sees fit.
+ * below. The lower 56 bits are assigned as the vendor sees fit.
+ */
+
+/* Vendor Ids: */
+#define DRM_FORMAT_MOD_NONE 0
+#define DRM_FORMAT_MOD_VENDOR_INTEL 0x01
+#define DRM_FORMAT_MOD_VENDOR_AMD 0x02
+#define DRM_FORMAT_MOD_VENDOR_NV 0x03
+#define DRM_FORMAT_MOD_VENDOR_SAMSUNG 0x04
+#define DRM_FORMAT_MOD_VENDOR_QCOM 0x05
+/* add more to the end as needed */
+
+#define fourcc_mod_code(vendor, val) \
+ ((((u64)DRM_FORMAT_MOD_VENDOR_## vendor) << 56) | (val & 0x00ffffffffffffffULL))
+
+/*
+ * Format Modifier tokens:
+ *
+ * When adding a new token, please document the layout with a code comment,
+ * similar to the fourcc codes above. drm_fourcc.h is considered the
+ * authoritative source for all of these.
+ */
+
+/* Intel framebuffer modifiers */
+
+/*
+ * Intel X-tiling layout
+ *
+ * This is a tiled layout using 4Kb tiles (except on gen2, where the tiles are 2Kb)
+ * in row-major layout. Within the tile, bytes are laid out row-major, with
+ * a platform-dependent stride. On top of that the memory can apply
+ * platform-dependent swizzling of some higher address bits into bit6.
+ *
+ * This format is highly platform specific and not useful for cross-driver
+ * sharing. It exists since on a given platform it does uniquely identify the
+ * layout in a simple way for i915-specific userspace.
+ */
+#define I915_FORMAT_MOD_X_TILED fourcc_mod_code(INTEL, 1)
+
+/*
+ * Intel Y-tiling layout
+ *
+ * This is a tiled layout using 4Kb tiles (except on gen2, where the tiles are 2Kb)
+ * in row-major layout. Within the tile, bytes are laid out in OWORD (16 byte)
+ * chunks column-major, with a platform-dependent height. On top of that the
+ * memory can apply platform-dependent swizzling of some higher address bits
+ * into bit6.
+ *
+ * This format is highly platform specific and not useful for cross-driver
+ * sharing. It exists since on a given platform it does uniquely identify the
+ * layout in a simple way for i915-specific userspace.
+ */
+#define I915_FORMAT_MOD_Y_TILED fourcc_mod_code(INTEL, 2)
+
+/*
+ * Intel Yf-tiling layout
+ *
+ * This is a tiled layout using 4Kb tiles in row-major layout.
+ * Within the tile, pixels are laid out in sixteen 256 byte units / sub-tiles which
+ * are arranged in four groups (two wide, two high) with column-major layout.
+ * Each group therefore consists of four 256 byte units, which are also laid
+ * out as 2x2 column-major.
+ * 256 byte units are made of four 64 byte blocks of pixels, producing
+ * either a square block or a 2:1 unit.
+ * 64 byte blocks of pixels contain four pixel rows of 16 bytes, where the width
+ * in pixels depends on the pixel depth.
+ */
+#define I915_FORMAT_MOD_Yf_TILED fourcc_mod_code(INTEL, 3)
+
#endif /* DRM_FOURCC_H */
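
Since a modifier is just a 64-bit token with the vendor id in the top 8 bits, new entries are composed with fourcc_mod_code() exactly like the Intel tokens above; a hypothetical example:

    /* same construction as I915_FORMAT_MOD_X_TILED above; this token is made up */
    #define EXAMPLE_FORMAT_MOD_FOO  fourcc_mod_code(SAMSUNG, 1)

    /* expands to (0x04ULL << 56) | 1, i.e. DRM_FORMAT_MOD_VENDOR_SAMSUNG in bits 63:56 */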
diff --git a/include/uapi/drm/drm_mode.h b/include/uapi/drm/drm_mode.h
index ca788e0..dbeba94 100644
--- a/include/uapi/drm/drm_mode.h
+++ b/include/uapi/drm/drm_mode.h
@@ -336,6 +336,7 @@
};
#define DRM_MODE_FB_INTERLACED (1<<0) /* for interlaced framebuffers */
+#define DRM_MODE_FB_MODIFIERS (1<<1) /* enables ->modifier[] */
struct drm_mode_fb_cmd2 {
__u32 fb_id;
@@ -356,10 +357,18 @@
* So it would consist of Y as offsets[0] and UV as
* offsets[1]. Note that offsets[0] will generally
* be 0 (but this is not required).
+ *
+ * To accommodate tiled, compressed, etc. formats, a per-plane
+ * modifier can be specified. The default value of zero
+ * indicates "native" format as specified by the fourcc.
+ * The vendor-specific modifier token allows, for example,
+ * different tiling/swizzling patterns on different planes.
+ * See discussion above of DRM_FORMAT_MOD_xxx.
*/
__u32 handles[4];
__u32 pitches[4]; /* pitch for each plane */
__u32 offsets[4]; /* offset of each plane */
+ __u64 modifier[4]; /* ie, tiling, compressed (per plane) */
};
#define DRM_MODE_FB_DIRTY_ANNOTATE_COPY 0x01
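
Userspace opting in to per-plane modifiers sets the new flag and fills modifier[] alongside handles/pitches/offsets; a sketch for a single-plane X-tiled framebuffer (bo_handle and the sizes are made-up values):

    struct drm_mode_fb_cmd2 cmd = {
            .width        = 1920,
            .height       = 1080,
            .pixel_format = DRM_FORMAT_XRGB8888,
            .flags        = DRM_MODE_FB_MODIFIERS,
            .handles      = { bo_handle },                  /* assumed GEM handle */
            .pitches      = { 1920 * 4 },
            .modifier     = { I915_FORMAT_MOD_X_TILED },    /* per plane, see drm_fourcc.h */
    };
    /* submitted via DRM_IOCTL_MODE_ADDFB2 once DRM_CAP_ADDFB2_MODIFIERS is advertised */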
diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h
index 6eed16b..551b673 100644
--- a/include/uapi/drm/i915_drm.h
+++ b/include/uapi/drm/i915_drm.h
@@ -270,7 +270,7 @@
#define DRM_IOCTL_I915_OVERLAY_PUT_IMAGE DRM_IOW(DRM_COMMAND_BASE + DRM_I915_OVERLAY_PUT_IMAGE, struct drm_intel_overlay_put_image)
#define DRM_IOCTL_I915_OVERLAY_ATTRS DRM_IOWR(DRM_COMMAND_BASE + DRM_I915_OVERLAY_ATTRS, struct drm_intel_overlay_attrs)
#define DRM_IOCTL_I915_SET_SPRITE_COLORKEY DRM_IOWR(DRM_COMMAND_BASE + DRM_I915_SET_SPRITE_COLORKEY, struct drm_intel_sprite_colorkey)
-#define DRM_IOCTL_I915_GET_SPRITE_COLORKEY DRM_IOWR(DRM_COMMAND_BASE + DRM_I915_SET_SPRITE_COLORKEY, struct drm_intel_sprite_colorkey)
+#define DRM_IOCTL_I915_GET_SPRITE_COLORKEY DRM_IOWR(DRM_COMMAND_BASE + DRM_I915_GET_SPRITE_COLORKEY, struct drm_intel_sprite_colorkey)
#define DRM_IOCTL_I915_GEM_WAIT DRM_IOWR(DRM_COMMAND_BASE + DRM_I915_GEM_WAIT, struct drm_i915_gem_wait)
#define DRM_IOCTL_I915_GEM_CONTEXT_CREATE DRM_IOWR (DRM_COMMAND_BASE + DRM_I915_GEM_CONTEXT_CREATE, struct drm_i915_gem_context_create)
#define DRM_IOCTL_I915_GEM_CONTEXT_DESTROY DRM_IOW (DRM_COMMAND_BASE + DRM_I915_GEM_CONTEXT_DESTROY, struct drm_i915_gem_context_destroy)
@@ -347,6 +347,9 @@
#define I915_PARAM_HAS_COHERENT_PHYS_GTT 29
#define I915_PARAM_MMAP_VERSION 30
#define I915_PARAM_HAS_BSD2 31
+#define I915_PARAM_REVISION 32
+#define I915_PARAM_SUBSLICE_TOTAL 33
+#define I915_PARAM_EU_TOTAL 34
typedef struct drm_i915_getparam {
int param;
diff --git a/include/uapi/drm/nouveau_drm.h b/include/uapi/drm/nouveau_drm.h
index 0d7608d..5507eea 100644
--- a/include/uapi/drm/nouveau_drm.h
+++ b/include/uapi/drm/nouveau_drm.h
@@ -39,6 +39,7 @@
#define NOUVEAU_GEM_DOMAIN_VRAM (1 << 1)
#define NOUVEAU_GEM_DOMAIN_GART (1 << 2)
#define NOUVEAU_GEM_DOMAIN_MAPPABLE (1 << 3)
+#define NOUVEAU_GEM_DOMAIN_COHERENT (1 << 4)
#define NOUVEAU_GEM_TILE_COMP 0x00030000 /* nv50-only */
#define NOUVEAU_GEM_TILE_LAYOUT_MASK 0x0000ff00
diff --git a/include/uapi/drm/radeon_drm.h b/include/uapi/drm/radeon_drm.h
index 50d0fb4..871e73f 100644
--- a/include/uapi/drm/radeon_drm.h
+++ b/include/uapi/drm/radeon_drm.h
@@ -1034,6 +1034,10 @@
#define RADEON_INFO_VRAM_USAGE 0x1e
#define RADEON_INFO_GTT_USAGE 0x1f
#define RADEON_INFO_ACTIVE_CU_COUNT 0x20
+#define RADEON_INFO_CURRENT_GPU_TEMP 0x21
+#define RADEON_INFO_CURRENT_GPU_SCLK 0x22
+#define RADEON_INFO_CURRENT_GPU_MCLK 0x23
+#define RADEON_INFO_READ_REG 0x24
struct drm_radeon_info {
uint32_t request;
diff --git a/include/uapi/drm/tegra_drm.h b/include/uapi/drm/tegra_drm.h
index c15d781..5391780 100644
--- a/include/uapi/drm/tegra_drm.h
+++ b/include/uapi/drm/tegra_drm.h
@@ -36,7 +36,8 @@
struct drm_tegra_gem_mmap {
__u32 handle;
- __u32 offset;
+ __u32 pad;
+ __u64 offset;
};
struct drm_tegra_syncpt_read {
diff --git a/include/uapi/linux/Kbuild b/include/uapi/linux/Kbuild
index 38df234..640954b 100644
--- a/include/uapi/linux/Kbuild
+++ b/include/uapi/linux/Kbuild
@@ -447,5 +447,6 @@
header-y += x25.h
header-y += xattr.h
header-y += xfrm.h
+header-y += xilinx-v4l2-controls.h
header-y += zorro.h
header-y += zorro_ids.h
diff --git a/include/uapi/linux/am437x-vpfe.h b/include/uapi/linux/am437x-vpfe.h
index 9b03033f..d757743 100644
--- a/include/uapi/linux/am437x-vpfe.h
+++ b/include/uapi/linux/am437x-vpfe.h
@@ -21,6 +21,8 @@
#ifndef AM437X_VPFE_USER_H
#define AM437X_VPFE_USER_H
+#include <linux/videodev2.h>
+
enum vpfe_ccdc_data_size {
VPFE_CCDC_DATA_16BITS = 0,
VPFE_CCDC_DATA_15BITS,
diff --git a/include/uapi/linux/dm-ioctl.h b/include/uapi/linux/dm-ioctl.h
index 889f3a5..eac8c36 100644
--- a/include/uapi/linux/dm-ioctl.h
+++ b/include/uapi/linux/dm-ioctl.h
@@ -267,9 +267,9 @@
#define DM_DEV_SET_GEOMETRY _IOWR(DM_IOCTL, DM_DEV_SET_GEOMETRY_CMD, struct dm_ioctl)
#define DM_VERSION_MAJOR 4
-#define DM_VERSION_MINOR 30
+#define DM_VERSION_MINOR 31
#define DM_VERSION_PATCHLEVEL 0
-#define DM_VERSION_EXTRA "-ioctl (2014-12-22)"
+#define DM_VERSION_EXTRA "-ioctl (2015-3-12)"
/* Status bits */
#define DM_READONLY_FLAG (1 << 0) /* In/Out */
diff --git a/include/uapi/linux/media-bus-format.h b/include/uapi/linux/media-bus-format.h
index 23b4090..190d491 100644
--- a/include/uapi/linux/media-bus-format.h
+++ b/include/uapi/linux/media-bus-format.h
@@ -33,22 +33,32 @@
#define MEDIA_BUS_FMT_FIXED 0x0001
-/* RGB - next is 0x100e */
+/* RGB - next is 0x1018 */
+#define MEDIA_BUS_FMT_RGB444_1X12 0x1016
#define MEDIA_BUS_FMT_RGB444_2X8_PADHI_BE 0x1001
#define MEDIA_BUS_FMT_RGB444_2X8_PADHI_LE 0x1002
#define MEDIA_BUS_FMT_RGB555_2X8_PADHI_BE 0x1003
#define MEDIA_BUS_FMT_RGB555_2X8_PADHI_LE 0x1004
+#define MEDIA_BUS_FMT_RGB565_1X16 0x1017
#define MEDIA_BUS_FMT_BGR565_2X8_BE 0x1005
#define MEDIA_BUS_FMT_BGR565_2X8_LE 0x1006
#define MEDIA_BUS_FMT_RGB565_2X8_BE 0x1007
#define MEDIA_BUS_FMT_RGB565_2X8_LE 0x1008
#define MEDIA_BUS_FMT_RGB666_1X18 0x1009
+#define MEDIA_BUS_FMT_RBG888_1X24 0x100e
+#define MEDIA_BUS_FMT_RGB666_1X24_CPADHI 0x1015
+#define MEDIA_BUS_FMT_RGB666_1X7X3_SPWG 0x1010
+#define MEDIA_BUS_FMT_BGR888_1X24 0x1013
+#define MEDIA_BUS_FMT_GBR888_1X24 0x1014
#define MEDIA_BUS_FMT_RGB888_1X24 0x100a
#define MEDIA_BUS_FMT_RGB888_2X12_BE 0x100b
#define MEDIA_BUS_FMT_RGB888_2X12_LE 0x100c
+#define MEDIA_BUS_FMT_RGB888_1X7X4_SPWG 0x1011
+#define MEDIA_BUS_FMT_RGB888_1X7X4_JEIDA 0x1012
#define MEDIA_BUS_FMT_ARGB8888_1X32 0x100d
+#define MEDIA_BUS_FMT_RGB888_1X32_PADHI 0x100f
-/* YUV (including grey) - next is 0x2024 */
+/* YUV (including grey) - next is 0x2026 */
#define MEDIA_BUS_FMT_Y8_1X8 0x2001
#define MEDIA_BUS_FMT_UV8_1X8 0x2015
#define MEDIA_BUS_FMT_UYVY8_1_5X8 0x2002
@@ -65,6 +75,10 @@
#define MEDIA_BUS_FMT_YUYV10_2X10 0x200b
#define MEDIA_BUS_FMT_YVYU10_2X10 0x200c
#define MEDIA_BUS_FMT_Y12_1X12 0x2013
+#define MEDIA_BUS_FMT_UYVY12_2X12 0x201c
+#define MEDIA_BUS_FMT_VYUY12_2X12 0x201d
+#define MEDIA_BUS_FMT_YUYV12_2X12 0x201e
+#define MEDIA_BUS_FMT_YVYU12_2X12 0x201f
#define MEDIA_BUS_FMT_UYVY8_1X16 0x200f
#define MEDIA_BUS_FMT_VYUY8_1X16 0x2010
#define MEDIA_BUS_FMT_YUYV8_1X16 0x2011
@@ -74,16 +88,14 @@
#define MEDIA_BUS_FMT_VYUY10_1X20 0x201b
#define MEDIA_BUS_FMT_YUYV10_1X20 0x200d
#define MEDIA_BUS_FMT_YVYU10_1X20 0x200e
-#define MEDIA_BUS_FMT_YUV10_1X30 0x2016
-#define MEDIA_BUS_FMT_AYUV8_1X32 0x2017
-#define MEDIA_BUS_FMT_UYVY12_2X12 0x201c
-#define MEDIA_BUS_FMT_VYUY12_2X12 0x201d
-#define MEDIA_BUS_FMT_YUYV12_2X12 0x201e
-#define MEDIA_BUS_FMT_YVYU12_2X12 0x201f
+#define MEDIA_BUS_FMT_VUY8_1X24 0x2024
+#define MEDIA_BUS_FMT_YUV8_1X24 0x2025
#define MEDIA_BUS_FMT_UYVY12_1X24 0x2020
#define MEDIA_BUS_FMT_VYUY12_1X24 0x2021
#define MEDIA_BUS_FMT_YUYV12_1X24 0x2022
#define MEDIA_BUS_FMT_YVYU12_1X24 0x2023
+#define MEDIA_BUS_FMT_YUV10_1X30 0x2016
+#define MEDIA_BUS_FMT_AYUV8_1X32 0x2017
/* Bayer - next is 0x3019 */
#define MEDIA_BUS_FMT_SBGGR8_1X8 0x3001
diff --git a/include/uapi/linux/media.h b/include/uapi/linux/media.h
index d847c76..4e816be 100644
--- a/include/uapi/linux/media.h
+++ b/include/uapi/linux/media.h
@@ -50,7 +50,14 @@
#define MEDIA_ENT_T_DEVNODE_V4L (MEDIA_ENT_T_DEVNODE + 1)
#define MEDIA_ENT_T_DEVNODE_FB (MEDIA_ENT_T_DEVNODE + 2)
#define MEDIA_ENT_T_DEVNODE_ALSA (MEDIA_ENT_T_DEVNODE + 3)
-#define MEDIA_ENT_T_DEVNODE_DVB (MEDIA_ENT_T_DEVNODE + 4)
+#define MEDIA_ENT_T_DEVNODE_DVB_FE (MEDIA_ENT_T_DEVNODE + 4)
+#define MEDIA_ENT_T_DEVNODE_DVB_DEMUX (MEDIA_ENT_T_DEVNODE + 5)
+#define MEDIA_ENT_T_DEVNODE_DVB_DVR (MEDIA_ENT_T_DEVNODE + 6)
+#define MEDIA_ENT_T_DEVNODE_DVB_CA (MEDIA_ENT_T_DEVNODE + 7)
+#define MEDIA_ENT_T_DEVNODE_DVB_NET (MEDIA_ENT_T_DEVNODE + 8)
+
+/* Legacy symbol. Use it to avoid userspace compilation breakages */
+#define MEDIA_ENT_T_DEVNODE_DVB MEDIA_ENT_T_DEVNODE_DVB_FE
#define MEDIA_ENT_T_V4L2_SUBDEV (2 << MEDIA_ENT_TYPE_SHIFT)
#define MEDIA_ENT_T_V4L2_SUBDEV_SENSOR (MEDIA_ENT_T_V4L2_SUBDEV + 1)
@@ -59,6 +66,8 @@
/* A converter of analogue video to its digital representation. */
#define MEDIA_ENT_T_V4L2_SUBDEV_DECODER (MEDIA_ENT_T_V4L2_SUBDEV + 4)
+#define MEDIA_ENT_T_V4L2_SUBDEV_TUNER (MEDIA_ENT_T_V4L2_SUBDEV + 5)
+
#define MEDIA_ENT_FL_DEFAULT (1 << 0)
struct media_entity_desc {
@@ -78,17 +87,48 @@
struct {
__u32 major;
__u32 minor;
- } v4l;
- struct {
- __u32 major;
- __u32 minor;
- } fb;
+ } dev;
+
+#if 1
+ /*
+ * TODO: this shouldn't have been added without
+ * actual drivers that use this. When the first real driver
+ * appears that sets this information, special attention
+ * should be given to whether this information is 1) enough, and
+ * 2) able to deal with udev rules that rename devices. The struct
+ * dev would not be sufficient for this since that does not
+ * contain the subdevice information. In addition, struct dev
+ * can only refer to a single device, and not to multiple (e.g.
+ * pcm and mixer devices).
+ *
+ * So for now mark this as a to do.
+ */
struct {
__u32 card;
__u32 device;
__u32 subdevice;
} alsa;
+#endif
+
+#if 1
+ /*
+ * DEPRECATED: previous node specifications. Kept just to
+ * avoid breaking compilation, but media_entity_desc.dev
+ * should be used instead. In particular, alsa and dvb
+ * fields below are wrong: for all devnodes, there should
+ * be just major/minor inside the struct, as this is enough
+ * to represent any devnode, no matter what type.
+ */
+ struct {
+ __u32 major;
+ __u32 minor;
+ } v4l;
+ struct {
+ __u32 major;
+ __u32 minor;
+ } fb;
int dvb;
+#endif
/* Sub-device specifications */
/* Nothing needed yet */
diff --git a/include/uapi/linux/serial_reg.h b/include/uapi/linux/serial_reg.h
index 00adb01..e9b4cb0 100644
--- a/include/uapi/linux/serial_reg.h
+++ b/include/uapi/linux/serial_reg.h
@@ -242,25 +242,6 @@
#define UART_FCR_PXAR32 0xc0 /* receive FIFO threshold = 32 */
/*
- * Intel MID on-chip HSU (High Speed UART) defined bits
- */
-#define UART_FCR_HSU_64_1B 0x00 /* receive FIFO treshold = 1 */
-#define UART_FCR_HSU_64_16B 0x40 /* receive FIFO treshold = 16 */
-#define UART_FCR_HSU_64_32B 0x80 /* receive FIFO treshold = 32 */
-#define UART_FCR_HSU_64_56B 0xc0 /* receive FIFO treshold = 56 */
-
-#define UART_FCR_HSU_16_1B 0x00 /* receive FIFO treshold = 1 */
-#define UART_FCR_HSU_16_4B 0x40 /* receive FIFO treshold = 4 */
-#define UART_FCR_HSU_16_8B 0x80 /* receive FIFO treshold = 8 */
-#define UART_FCR_HSU_16_14B 0xc0 /* receive FIFO treshold = 14 */
-
-#define UART_FCR_HSU_64B_FIFO 0x20 /* chose 64 bytes FIFO */
-#define UART_FCR_HSU_16B_FIFO 0x00 /* chose 16 bytes FIFO */
-
-#define UART_FCR_HALF_EMPT_TXI 0x00 /* trigger TX_EMPT IRQ for half empty */
-#define UART_FCR_FULL_EMPT_TXI 0x08 /* trigger TX_EMPT IRQ for full empty */
-
-/*
* These register definitions are for the 16C950
*/
#define UART_ASR 0x01 /* Additional Status Register */
diff --git a/include/uapi/linux/v4l2-dv-timings.h b/include/uapi/linux/v4l2-dv-timings.h
index 6c8f159..c039f1d 100644
--- a/include/uapi/linux/v4l2-dv-timings.h
+++ b/include/uapi/linux/v4l2-dv-timings.h
@@ -48,14 +48,15 @@
.type = V4L2_DV_BT_656_1120, \
V4L2_INIT_BT_TIMINGS(720, 480, 1, 0, \
13500000, 19, 62, 57, 4, 3, 15, 4, 3, 16, \
- V4L2_DV_BT_STD_CEA861, V4L2_DV_FL_HALF_LINE) \
+ V4L2_DV_BT_STD_CEA861, \
+ V4L2_DV_FL_HALF_LINE | V4L2_DV_FL_IS_CE_VIDEO) \
}
#define V4L2_DV_BT_CEA_720X480P59_94 { \
.type = V4L2_DV_BT_656_1120, \
V4L2_INIT_BT_TIMINGS(720, 480, 0, 0, \
27000000, 16, 62, 60, 9, 6, 30, 0, 0, 0, \
- V4L2_DV_BT_STD_CEA861, 0) \
+ V4L2_DV_BT_STD_CEA861, V4L2_DV_FL_IS_CE_VIDEO) \
}
/* Note: these are the nominal timings, for HDMI links this format is typically
@@ -64,14 +65,15 @@
.type = V4L2_DV_BT_656_1120, \
V4L2_INIT_BT_TIMINGS(720, 576, 1, 0, \
13500000, 12, 63, 69, 2, 3, 19, 2, 3, 20, \
- V4L2_DV_BT_STD_CEA861, V4L2_DV_FL_HALF_LINE) \
+ V4L2_DV_BT_STD_CEA861, \
+ V4L2_DV_FL_HALF_LINE | V4L2_DV_FL_IS_CE_VIDEO) \
}
#define V4L2_DV_BT_CEA_720X576P50 { \
.type = V4L2_DV_BT_656_1120, \
V4L2_INIT_BT_TIMINGS(720, 576, 0, 0, \
27000000, 12, 64, 68, 5, 5, 39, 0, 0, 0, \
- V4L2_DV_BT_STD_CEA861, 0) \
+ V4L2_DV_BT_STD_CEA861, V4L2_DV_FL_IS_CE_VIDEO) \
}
#define V4L2_DV_BT_CEA_1280X720P24 { \
@@ -88,7 +90,7 @@
V4L2_INIT_BT_TIMINGS(1280, 720, 0, \
V4L2_DV_HSYNC_POS_POL | V4L2_DV_VSYNC_POS_POL, \
74250000, 2420, 40, 220, 5, 5, 20, 0, 0, 0, \
- V4L2_DV_BT_STD_CEA861, 0) \
+ V4L2_DV_BT_STD_CEA861, V4L2_DV_FL_IS_CE_VIDEO) \
}
#define V4L2_DV_BT_CEA_1280X720P30 { \
@@ -96,7 +98,8 @@
V4L2_INIT_BT_TIMINGS(1280, 720, 0, \
V4L2_DV_HSYNC_POS_POL | V4L2_DV_VSYNC_POS_POL, \
74250000, 1760, 40, 220, 5, 5, 20, 0, 0, 0, \
- V4L2_DV_BT_STD_CEA861, V4L2_DV_FL_CAN_REDUCE_FPS) \
+ V4L2_DV_BT_STD_CEA861, \
+ V4L2_DV_FL_CAN_REDUCE_FPS | V4L2_DV_FL_IS_CE_VIDEO) \
}
#define V4L2_DV_BT_CEA_1280X720P50 { \
@@ -104,7 +107,7 @@
V4L2_INIT_BT_TIMINGS(1280, 720, 0, \
V4L2_DV_HSYNC_POS_POL | V4L2_DV_VSYNC_POS_POL, \
74250000, 440, 40, 220, 5, 5, 20, 0, 0, 0, \
- V4L2_DV_BT_STD_CEA861, 0) \
+ V4L2_DV_BT_STD_CEA861, V4L2_DV_FL_IS_CE_VIDEO) \
}
#define V4L2_DV_BT_CEA_1280X720P60 { \
@@ -112,7 +115,8 @@
V4L2_INIT_BT_TIMINGS(1280, 720, 0, \
V4L2_DV_HSYNC_POS_POL | V4L2_DV_VSYNC_POS_POL, \
74250000, 110, 40, 220, 5, 5, 20, 0, 0, 0, \
- V4L2_DV_BT_STD_CEA861, V4L2_DV_FL_CAN_REDUCE_FPS) \
+ V4L2_DV_BT_STD_CEA861, \
+ V4L2_DV_FL_CAN_REDUCE_FPS | V4L2_DV_FL_IS_CE_VIDEO) \
}
#define V4L2_DV_BT_CEA_1920X1080P24 { \
@@ -120,7 +124,8 @@
V4L2_INIT_BT_TIMINGS(1920, 1080, 0, \
V4L2_DV_HSYNC_POS_POL | V4L2_DV_VSYNC_POS_POL, \
74250000, 638, 44, 148, 4, 5, 36, 0, 0, 0, \
- V4L2_DV_BT_STD_CEA861, V4L2_DV_FL_CAN_REDUCE_FPS) \
+ V4L2_DV_BT_STD_CEA861, \
+ V4L2_DV_FL_CAN_REDUCE_FPS | V4L2_DV_FL_IS_CE_VIDEO) \
}
#define V4L2_DV_BT_CEA_1920X1080P25 { \
@@ -128,7 +133,7 @@
V4L2_INIT_BT_TIMINGS(1920, 1080, 0, \
V4L2_DV_HSYNC_POS_POL | V4L2_DV_VSYNC_POS_POL, \
74250000, 528, 44, 148, 4, 5, 36, 0, 0, 0, \
- V4L2_DV_BT_STD_CEA861, 0) \
+ V4L2_DV_BT_STD_CEA861, V4L2_DV_FL_IS_CE_VIDEO) \
}
#define V4L2_DV_BT_CEA_1920X1080P30 { \
@@ -136,7 +141,8 @@
V4L2_INIT_BT_TIMINGS(1920, 1080, 0, \
V4L2_DV_HSYNC_POS_POL | V4L2_DV_VSYNC_POS_POL, \
74250000, 88, 44, 148, 4, 5, 36, 0, 0, 0, \
- V4L2_DV_BT_STD_CEA861, V4L2_DV_FL_CAN_REDUCE_FPS) \
+ V4L2_DV_BT_STD_CEA861, \
+ V4L2_DV_FL_CAN_REDUCE_FPS | V4L2_DV_FL_IS_CE_VIDEO) \
}
#define V4L2_DV_BT_CEA_1920X1080I50 { \
@@ -144,7 +150,8 @@
V4L2_INIT_BT_TIMINGS(1920, 1080, 1, \
V4L2_DV_HSYNC_POS_POL | V4L2_DV_VSYNC_POS_POL, \
74250000, 528, 44, 148, 2, 5, 15, 2, 5, 16, \
- V4L2_DV_BT_STD_CEA861, V4L2_DV_FL_HALF_LINE) \
+ V4L2_DV_BT_STD_CEA861, \
+ V4L2_DV_FL_HALF_LINE | V4L2_DV_FL_IS_CE_VIDEO) \
}
#define V4L2_DV_BT_CEA_1920X1080P50 { \
@@ -152,7 +159,7 @@
V4L2_INIT_BT_TIMINGS(1920, 1080, 0, \
V4L2_DV_HSYNC_POS_POL | V4L2_DV_VSYNC_POS_POL, \
148500000, 528, 44, 148, 4, 5, 36, 0, 0, 0, \
- V4L2_DV_BT_STD_CEA861, 0) \
+ V4L2_DV_BT_STD_CEA861, V4L2_DV_FL_IS_CE_VIDEO) \
}
#define V4L2_DV_BT_CEA_1920X1080I60 { \
@@ -161,7 +168,8 @@
V4L2_DV_HSYNC_POS_POL | V4L2_DV_VSYNC_POS_POL, \
74250000, 88, 44, 148, 2, 5, 15, 2, 5, 16, \
V4L2_DV_BT_STD_CEA861, \
- V4L2_DV_FL_CAN_REDUCE_FPS | V4L2_DV_FL_HALF_LINE) \
+ V4L2_DV_FL_CAN_REDUCE_FPS | \
+ V4L2_DV_FL_HALF_LINE | V4L2_DV_FL_IS_CE_VIDEO) \
}
#define V4L2_DV_BT_CEA_1920X1080P60 { \
@@ -170,77 +178,83 @@
V4L2_DV_HSYNC_POS_POL | V4L2_DV_VSYNC_POS_POL, \
148500000, 88, 44, 148, 4, 5, 36, 0, 0, 0, \
V4L2_DV_BT_STD_DMT | V4L2_DV_BT_STD_CEA861, \
- V4L2_DV_FL_CAN_REDUCE_FPS) \
+ V4L2_DV_FL_CAN_REDUCE_FPS | V4L2_DV_FL_IS_CE_VIDEO) \
}
#define V4L2_DV_BT_CEA_3840X2160P24 { \
.type = V4L2_DV_BT_656_1120, \
V4L2_INIT_BT_TIMINGS(3840, 2160, 0, V4L2_DV_HSYNC_POS_POL, \
297000000, 1276, 88, 296, 8, 10, 72, 0, 0, 0, \
- V4L2_DV_BT_STD_CEA861, V4L2_DV_FL_CAN_REDUCE_FPS) \
+ V4L2_DV_BT_STD_CEA861, \
+ V4L2_DV_FL_CAN_REDUCE_FPS | V4L2_DV_FL_IS_CE_VIDEO) \
}
#define V4L2_DV_BT_CEA_3840X2160P25 { \
.type = V4L2_DV_BT_656_1120, \
V4L2_INIT_BT_TIMINGS(3840, 2160, 0, V4L2_DV_HSYNC_POS_POL, \
297000000, 1056, 88, 296, 8, 10, 72, 0, 0, 0, \
- V4L2_DV_BT_STD_CEA861, 0) \
+ V4L2_DV_BT_STD_CEA861, V4L2_DV_FL_IS_CE_VIDEO) \
}
#define V4L2_DV_BT_CEA_3840X2160P30 { \
.type = V4L2_DV_BT_656_1120, \
V4L2_INIT_BT_TIMINGS(3840, 2160, 0, V4L2_DV_HSYNC_POS_POL, \
297000000, 176, 88, 296, 8, 10, 72, 0, 0, 0, \
- V4L2_DV_BT_STD_CEA861, V4L2_DV_FL_CAN_REDUCE_FPS) \
+ V4L2_DV_BT_STD_CEA861, \
+ V4L2_DV_FL_CAN_REDUCE_FPS | V4L2_DV_FL_IS_CE_VIDEO) \
}
#define V4L2_DV_BT_CEA_3840X2160P50 { \
.type = V4L2_DV_BT_656_1120, \
V4L2_INIT_BT_TIMINGS(3840, 2160, 0, V4L2_DV_HSYNC_POS_POL, \
594000000, 1056, 88, 296, 8, 10, 72, 0, 0, 0, \
- V4L2_DV_BT_STD_CEA861, 0) \
+ V4L2_DV_BT_STD_CEA861, V4L2_DV_FL_IS_CE_VIDEO) \
}
#define V4L2_DV_BT_CEA_3840X2160P60 { \
.type = V4L2_DV_BT_656_1120, \
V4L2_INIT_BT_TIMINGS(3840, 2160, 0, V4L2_DV_HSYNC_POS_POL, \
594000000, 176, 88, 296, 8, 10, 72, 0, 0, 0, \
- V4L2_DV_BT_STD_CEA861, V4L2_DV_FL_CAN_REDUCE_FPS) \
+ V4L2_DV_BT_STD_CEA861, \
+ V4L2_DV_FL_CAN_REDUCE_FPS | V4L2_DV_FL_IS_CE_VIDEO) \
}
#define V4L2_DV_BT_CEA_4096X2160P24 { \
.type = V4L2_DV_BT_656_1120, \
V4L2_INIT_BT_TIMINGS(4096, 2160, 0, V4L2_DV_HSYNC_POS_POL, \
297000000, 1020, 88, 296, 8, 10, 72, 0, 0, 0, \
- V4L2_DV_BT_STD_CEA861, V4L2_DV_FL_CAN_REDUCE_FPS) \
+ V4L2_DV_BT_STD_CEA861, \
+ V4L2_DV_FL_CAN_REDUCE_FPS | V4L2_DV_FL_IS_CE_VIDEO) \
}
#define V4L2_DV_BT_CEA_4096X2160P25 { \
.type = V4L2_DV_BT_656_1120, \
V4L2_INIT_BT_TIMINGS(4096, 2160, 0, V4L2_DV_HSYNC_POS_POL, \
297000000, 968, 88, 128, 8, 10, 72, 0, 0, 0, \
- V4L2_DV_BT_STD_CEA861, 0) \
+ V4L2_DV_BT_STD_CEA861, V4L2_DV_FL_IS_CE_VIDEO) \
}
#define V4L2_DV_BT_CEA_4096X2160P30 { \
.type = V4L2_DV_BT_656_1120, \
V4L2_INIT_BT_TIMINGS(4096, 2160, 0, V4L2_DV_HSYNC_POS_POL, \
297000000, 88, 88, 128, 8, 10, 72, 0, 0, 0, \
- V4L2_DV_BT_STD_CEA861, V4L2_DV_FL_CAN_REDUCE_FPS) \
+ V4L2_DV_BT_STD_CEA861, \
+ V4L2_DV_FL_CAN_REDUCE_FPS | V4L2_DV_FL_IS_CE_VIDEO) \
}
#define V4L2_DV_BT_CEA_4096X2160P50 { \
.type = V4L2_DV_BT_656_1120, \
V4L2_INIT_BT_TIMINGS(4096, 2160, 0, V4L2_DV_HSYNC_POS_POL, \
594000000, 968, 88, 128, 8, 10, 72, 0, 0, 0, \
- V4L2_DV_BT_STD_CEA861, 0) \
+ V4L2_DV_BT_STD_CEA861, V4L2_DV_FL_IS_CE_VIDEO) \
}
#define V4L2_DV_BT_CEA_4096X2160P60 { \
.type = V4L2_DV_BT_656_1120, \
V4L2_INIT_BT_TIMINGS(4096, 2160, 0, V4L2_DV_HSYNC_POS_POL, \
594000000, 88, 88, 128, 8, 10, 72, 0, 0, 0, \
- V4L2_DV_BT_STD_CEA861, V4L2_DV_FL_CAN_REDUCE_FPS) \
+ V4L2_DV_BT_STD_CEA861, \
+ V4L2_DV_FL_CAN_REDUCE_FPS | V4L2_DV_FL_IS_CE_VIDEO) \
}
diff --git a/include/uapi/linux/v4l2-subdev.h b/include/uapi/linux/v4l2-subdev.h
index e0a7e3da..dbce2b554 100644
--- a/include/uapi/linux/v4l2-subdev.h
+++ b/include/uapi/linux/v4l2-subdev.h
@@ -69,12 +69,14 @@
* @pad: pad number, as reported by the media API
* @index: format index during enumeration
* @code: format code (MEDIA_BUS_FMT_ definitions)
+ * @which: format type (from enum v4l2_subdev_format_whence)
*/
struct v4l2_subdev_mbus_code_enum {
__u32 pad;
__u32 index;
__u32 code;
- __u32 reserved[9];
+ __u32 which;
+ __u32 reserved[8];
};
/**
@@ -82,6 +84,7 @@
* @pad: pad number, as reported by the media API
* @index: format index during enumeration
* @code: format code (MEDIA_BUS_FMT_ definitions)
+ * @which: format type (from enum v4l2_subdev_format_whence)
*/
struct v4l2_subdev_frame_size_enum {
__u32 index;
@@ -91,7 +94,8 @@
__u32 max_width;
__u32 min_height;
__u32 max_height;
- __u32 reserved[9];
+ __u32 which;
+ __u32 reserved[8];
};
/**
@@ -113,6 +117,7 @@
* @width: frame width in pixels
* @height: frame height in pixels
* @interval: frame interval in seconds
+ * @which: format type (from enum v4l2_subdev_format_whence)
*/
struct v4l2_subdev_frame_interval_enum {
__u32 index;
@@ -121,7 +126,8 @@
__u32 width;
__u32 height;
struct v4l2_fract interval;
- __u32 reserved[9];
+ __u32 which;
+ __u32 reserved[8];
};
/**
diff --git a/include/uapi/linux/videodev2.h b/include/uapi/linux/videodev2.h
index fbdc360..fa376f7 100644
--- a/include/uapi/linux/videodev2.h
+++ b/include/uapi/linux/videodev2.h
@@ -268,9 +268,10 @@
enum v4l2_quantization {
/*
- * The default for R'G'B' quantization is always full range. For
- * Y'CbCr the quantization is always limited range, except for
- * SYCC, XV601, XV709 or JPEG: those are full range.
+ * The default for R'G'B' quantization is always full range, except
+ * for the BT2020 colorspace. For Y'CbCr the quantization is always
+ * limited range, except for COLORSPACE_JPEG, SYCC, XV601 or XV709:
+ * those are full range.
*/
V4L2_QUANTIZATION_DEFAULT = 0,
V4L2_QUANTIZATION_FULL_RANGE = 1,
@@ -1187,6 +1188,12 @@
exactly the same number of half-lines. Whether half-lines can be detected
or used depends on the hardware. */
#define V4L2_DV_FL_HALF_LINE (1 << 3)
+/* If set, then this is a Consumer Electronics (CE) video format. Such formats
+ * differ from other formats (commonly called IT formats) in that if RGB
+ * encoding is used then by default the RGB values use limited range (i.e.
+ * use the range 16-235) as opposed to 0-255. All formats defined in CEA-861
+ * except for the 640x480 format are CE formats. */
+#define V4L2_DV_FL_IS_CE_VIDEO (1 << 4)
/* A few useful defines to calculate the total blanking and frame sizes */
#define V4L2_DV_BT_BLANKING_WIDTH(bt) \
@@ -1456,6 +1463,7 @@
#define V4L2_CTRL_FLAG_WRITE_ONLY 0x0040
#define V4L2_CTRL_FLAG_VOLATILE 0x0080
#define V4L2_CTRL_FLAG_HAS_PAYLOAD 0x0100
+#define V4L2_CTRL_FLAG_EXECUTE_ON_WRITE 0x0200
/* Query flags, to be ORed with the control ID */
#define V4L2_CTRL_FLAG_NEXT_CTRL 0x80000000
@@ -1841,8 +1849,8 @@
*/
struct v4l2_plane_pix_format {
__u32 sizeimage;
- __u16 bytesperline;
- __u16 reserved[7];
+ __u32 bytesperline;
+ __u16 reserved[6];
} __attribute__ ((packed));
/**
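
A side note on V4L2_DV_FL_IS_CE_VIDEO introduced above: it only records a default, namely that CE formats keep R'G'B' samples in the limited 16-235 range rather than 0-255. A standalone sketch of that scaling, illustrative only and not part of the patch:

#include <stdio.h>

/* Map an 8-bit full-range value (0-255) onto the limited range (16-235). */
static unsigned char full_to_limited(unsigned char v)
{
	return (unsigned char)(16 + (v * 219 + 127) / 255);	/* rounded to nearest */
}

int main(void)
{
	printf("%u %u %u\n", full_to_limited(0),
	       full_to_limited(128), full_to_limited(255));	/* 16 126 235 */
	return 0;
}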
diff --git a/include/uapi/linux/xilinx-v4l2-controls.h b/include/uapi/linux/xilinx-v4l2-controls.h
new file mode 100644
index 0000000..fb495b9
--- /dev/null
+++ b/include/uapi/linux/xilinx-v4l2-controls.h
@@ -0,0 +1,73 @@
+/*
+ * Xilinx Controls Header
+ *
+ * Copyright (C) 2013-2015 Ideas on Board
+ * Copyright (C) 2013-2015 Xilinx, Inc.
+ *
+ * Contacts: Hyun Kwon <hyun.kwon@xilinx.com>
+ * Laurent Pinchart <laurent.pinchart@ideasonboard.com>
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef __UAPI_XILINX_V4L2_CONTROLS_H__
+#define __UAPI_XILINX_V4L2_CONTROLS_H__
+
+#include <linux/v4l2-controls.h>
+
+#define V4L2_CID_XILINX_OFFSET 0xc000
+#define V4L2_CID_XILINX_BASE (V4L2_CID_USER_BASE + V4L2_CID_XILINX_OFFSET)
+
+/*
+ * Private Controls for Xilinx Video IPs
+ */
+
+/*
+ * Xilinx TPG Video IP
+ */
+
+#define V4L2_CID_XILINX_TPG (V4L2_CID_USER_BASE + 0xc000)
+
+/* Draw cross hairs */
+#define V4L2_CID_XILINX_TPG_CROSS_HAIRS (V4L2_CID_XILINX_TPG + 1)
+/* Enable a moving box */
+#define V4L2_CID_XILINX_TPG_MOVING_BOX (V4L2_CID_XILINX_TPG + 2)
+/* Mask out a color component */
+#define V4L2_CID_XILINX_TPG_COLOR_MASK (V4L2_CID_XILINX_TPG + 3)
+/* Enable a stuck pixel feature */
+#define V4L2_CID_XILINX_TPG_STUCK_PIXEL (V4L2_CID_XILINX_TPG + 4)
+/* Enable a noisy output */
+#define V4L2_CID_XILINX_TPG_NOISE (V4L2_CID_XILINX_TPG + 5)
+/* Enable the motion feature */
+#define V4L2_CID_XILINX_TPG_MOTION (V4L2_CID_XILINX_TPG + 6)
+/* Configure the motion speed of moving patterns */
+#define V4L2_CID_XILINX_TPG_MOTION_SPEED (V4L2_CID_XILINX_TPG + 7)
+/* The row of horizontal cross hair location */
+#define V4L2_CID_XILINX_TPG_CROSS_HAIR_ROW (V4L2_CID_XILINX_TPG + 8)
+/* The column of vertical cross hair location */
+#define V4L2_CID_XILINX_TPG_CROSS_HAIR_COLUMN (V4L2_CID_XILINX_TPG + 9)
+/* Set starting point of sine wave for horizontal component */
+#define V4L2_CID_XILINX_TPG_ZPLATE_HOR_START (V4L2_CID_XILINX_TPG + 10)
+/* Set speed of the horizontal component */
+#define V4L2_CID_XILINX_TPG_ZPLATE_HOR_SPEED (V4L2_CID_XILINX_TPG + 11)
+/* Set starting point of sine wave for vertical component */
+#define V4L2_CID_XILINX_TPG_ZPLATE_VER_START (V4L2_CID_XILINX_TPG + 12)
+/* Set speed of the vertical component */
+#define V4L2_CID_XILINX_TPG_ZPLATE_VER_SPEED (V4L2_CID_XILINX_TPG + 13)
+/* Moving box size */
+#define V4L2_CID_XILINX_TPG_BOX_SIZE (V4L2_CID_XILINX_TPG + 14)
+/* Moving box color */
+#define V4L2_CID_XILINX_TPG_BOX_COLOR (V4L2_CID_XILINX_TPG + 15)
+/* Upper limit count of generated stuck pixels */
+#define V4L2_CID_XILINX_TPG_STUCK_PIXEL_THRESH (V4L2_CID_XILINX_TPG + 16)
+/* Noise level */
+#define V4L2_CID_XILINX_TPG_NOISE_GAIN (V4L2_CID_XILINX_TPG + 17)
+
+#endif /* __UAPI_XILINX_V4L2_CONTROLS_H__ */
diff --git a/include/video/imx-ipu-v3.h b/include/video/imx-ipu-v3.h
index 73390c1..85dedca 100644
--- a/include/video/imx-ipu-v3.h
+++ b/include/video/imx-ipu-v3.h
@@ -39,7 +39,7 @@
struct videomode mode;
- u32 pixel_fmt;
+ u32 bus_format;
u32 v_to_h_sync;
#define IPU_DI_CLKMODE_SYNC (1 << 0)
diff --git a/include/video/omapdss.h b/include/video/omapdss.h
index c8ed15d..f001a35 100644
--- a/include/video/omapdss.h
+++ b/include/video/omapdss.h
@@ -129,14 +129,13 @@
};
enum omap_dss_signal_level {
- OMAPDSS_SIG_ACTIVE_HIGH = 0,
- OMAPDSS_SIG_ACTIVE_LOW = 1,
+ OMAPDSS_SIG_ACTIVE_LOW,
+ OMAPDSS_SIG_ACTIVE_HIGH,
};
enum omap_dss_signal_edge {
- OMAPDSS_DRIVE_SIG_OPPOSITE_EDGES,
- OMAPDSS_DRIVE_SIG_RISING_EDGE,
OMAPDSS_DRIVE_SIG_FALLING_EDGE,
+ OMAPDSS_DRIVE_SIG_RISING_EDGE,
};
enum omap_dss_venc_type {
diff --git a/include/video/samsung_fimd.h b/include/video/samsung_fimd.h
index a20e4a3..0530e5a 100644
--- a/include/video/samsung_fimd.h
+++ b/include/video/samsung_fimd.h
@@ -289,6 +289,11 @@
#define VIDISD14C_ALPHA1_B_LIMIT 0xf
#define VIDISD14C_ALPHA1_B(_x) ((_x) << 0)
+#define VIDW_ALPHA 0x021c
+#define VIDW_ALPHA_R(_x) ((_x) << 16)
+#define VIDW_ALPHA_G(_x) ((_x) << 8)
+#define VIDW_ALPHA_B(_x) ((_x) << 0)
+
/* Video buffer addresses */
#define VIDW_BUF_START(_buff) (0xA0 + ((_buff) * 8))
#define VIDW_BUF_START1(_buff) (0xA4 + ((_buff) * 8))
@@ -436,6 +441,12 @@
#define BLENDCON_NEW_8BIT_ALPHA_VALUE (1 << 0)
#define BLENDCON_NEW_4BIT_ALPHA_VALUE (0 << 0)
+/* Display port clock control */
+#define DP_MIE_CLKCON 0x27c
+#define DP_MIE_CLK_DISABLE 0x0
+#define DP_MIE_CLK_DP_ENABLE 0x2
+#define DP_MIE_CLK_MIE_ENABLE 0x3
+
/* Notes on per-window bpp settings
*
* Value Win0 Win1 Win2 Win3 Win 4
diff --git a/init/Kconfig b/init/Kconfig
index 3b9df1a..dc24dec 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1047,12 +1047,6 @@
the kmem extension can use it to guarantee that no group of processes
will ever exhaust kernel resources alone.
- WARNING: Current implementation lacks reclaim support. That means
- allocation attempts will fail when close to the limit even if there
- are plenty of kmem available for reclaim. That makes this option
- unusable in real life so DO NOT SELECT IT unless for development
- purposes.
-
config CGROUP_HUGETLB
bool "HugeTLB Resource Controller for Control Groups"
depends on HUGETLB_PAGE
diff --git a/init/do_mounts.c b/init/do_mounts.c
index eb41008..8369ffa 100644
--- a/init/do_mounts.c
+++ b/init/do_mounts.c
@@ -207,7 +207,7 @@
* bangs.
*/
-dev_t name_to_dev_t(char *name)
+dev_t name_to_dev_t(const char *name)
{
char s[32];
char *p;
@@ -226,8 +226,9 @@
if (strncmp(name, "/dev/", 5) != 0) {
unsigned maj, min;
+ char dummy;
- if (sscanf(name, "%u:%u", &maj, &min) == 2) {
+ if (sscanf(name, "%u:%u%c", &maj, &min, &dummy) == 2) {
res = MKDEV(maj, min);
if (maj != MAJOR(res) || min != MINOR(res))
goto fail;
@@ -286,6 +287,7 @@
done:
return res;
}
+EXPORT_SYMBOL_GPL(name_to_dev_t);
static int __init root_dev_setup(char *line)
{
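
The added "%c" conversion is the usual sscanf idiom for rejecting trailing characters: it can only match when something follows the two numbers, so insisting on exactly two successful conversions accepts "maj:min" and nothing longer. A standalone illustration of the idiom, outside the kernel:

#include <stdio.h>

static int parse_majmin(const char *s, unsigned int *maj, unsigned int *min)
{
	char dummy;

	/* Exactly two conversions means no characters were left over. */
	return sscanf(s, "%u:%u%c", maj, min, &dummy) == 2;
}

int main(void)
{
	unsigned int maj, min;

	printf("%d\n", parse_majmin("8:1", &maj, &min));	/* 1: accepted */
	printf("%d\n", parse_majmin("8:1x", &maj, &min));	/* 0: rejected */
	return 0;
}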
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index ba77ab5..a0831e1 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -551,7 +551,21 @@
static void print_lock(struct held_lock *hlock)
{
- print_lock_name(hlock_class(hlock));
+ /*
+ * We can be called locklessly through debug_show_all_locks() so be
+ * extra careful, the hlock might have been released and cleared.
+ */
+ unsigned int class_idx = hlock->class_idx;
+
+ /* Don't re-read hlock->class_idx, can't use READ_ONCE() on bitfields: */
+ barrier();
+
+ if (!class_idx || (class_idx - 1) >= MAX_LOCKDEP_KEYS) {
+ printk("<RELEASED>\n");
+ return;
+ }
+
+ print_lock_name(lock_classes + class_idx - 1);
printk(", at: ");
print_ip_sym(hlock->acquire_ip);
}
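
print_lock() now takes one snapshot of class_idx, stops the compiler from re-reading the bitfield, and validates the snapshot before using it, because debug_show_all_locks() may run while the lock is being released. A minimal userspace sketch of the same snapshot-then-validate pattern, with invented names:

#include <stdio.h>

#define MAX_KEYS 8
#define barrier() __asm__ __volatile__("" ::: "memory")

static const char *names[MAX_KEYS] = { "lock_a", "lock_b" };

struct held {
	unsigned int class_idx;		/* may be cleared concurrently */
};

static void print_held(struct held *h)
{
	unsigned int idx = h->class_idx;	/* single snapshot */

	barrier();	/* keep the compiler from re-reading h->class_idx */

	if (!idx || idx - 1 >= MAX_KEYS || !names[idx - 1]) {
		printf("<RELEASED>\n");
		return;
	}
	printf("%s\n", names[idx - 1]);
}

int main(void)
{
	struct held held = { .class_idx = 1 };
	struct held gone = { .class_idx = 0 };

	print_held(&held);	/* lock_a */
	print_held(&gone);	/* <RELEASED> */
	return 0;
}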
diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
index 879edfc..c099b08 100644
--- a/kernel/printk/printk.c
+++ b/kernel/printk/printk.c
@@ -2017,24 +2017,6 @@
return __add_preferred_console(name, idx, options, NULL);
}
-int update_console_cmdline(char *name, int idx, char *name_new, int idx_new, char *options)
-{
- struct console_cmdline *c;
- int i;
-
- for (i = 0, c = console_cmdline;
- i < MAX_CMDLINECONSOLES && c->name[0];
- i++, c++)
- if (strcmp(c->name, name) == 0 && c->index == idx) {
- strlcpy(c->name, name_new, sizeof(c->name));
- c->options = options;
- c->index = idx_new;
- return i;
- }
- /* not found */
- return -1;
-}
-
bool console_suspend_enabled = true;
EXPORT_SYMBOL(console_suspend_enabled);
@@ -2436,9 +2418,6 @@
if (preferred_console < 0 || bcon || !console_drivers)
preferred_console = selected_console;
- if (newcon->early_setup)
- newcon->early_setup();
-
/*
* See if we want to use this console driver. If we
* didn't select a console we take the first one
@@ -2464,23 +2443,27 @@
for (i = 0, c = console_cmdline;
i < MAX_CMDLINECONSOLES && c->name[0];
i++, c++) {
- BUILD_BUG_ON(sizeof(c->name) != sizeof(newcon->name));
- if (strcmp(c->name, newcon->name) != 0)
- continue;
- if (newcon->index >= 0 &&
- newcon->index != c->index)
- continue;
- if (newcon->index < 0)
- newcon->index = c->index;
+ if (!newcon->match ||
+ newcon->match(newcon, c->name, c->index, c->options) != 0) {
+ /* default matching */
+ BUILD_BUG_ON(sizeof(c->name) != sizeof(newcon->name));
+ if (strcmp(c->name, newcon->name) != 0)
+ continue;
+ if (newcon->index >= 0 &&
+ newcon->index != c->index)
+ continue;
+ if (newcon->index < 0)
+ newcon->index = c->index;
- if (_braille_register_console(newcon, c))
- return;
+ if (_braille_register_console(newcon, c))
+ return;
- if (newcon->setup &&
- newcon->setup(newcon, console_cmdline[i].options) != 0)
- break;
+ if (newcon->setup &&
+ newcon->setup(newcon, c->options) != 0)
+ break;
+ }
+
newcon->flags |= CON_ENABLED;
- newcon->index = c->index;
if (i == selected_console) {
newcon->flags |= CON_CONSDEV;
preferred_console = selected_console;
diff --git a/kernel/smp.c b/kernel/smp.c
index f38a1e6..0785447 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -19,7 +19,7 @@
enum {
CSD_FLAG_LOCK = 0x01,
- CSD_FLAG_WAIT = 0x02,
+ CSD_FLAG_SYNCHRONOUS = 0x02,
};
struct call_function_data {
@@ -107,7 +107,7 @@
*/
static void csd_lock_wait(struct call_single_data *csd)
{
- while (csd->flags & CSD_FLAG_LOCK)
+ while (smp_load_acquire(&csd->flags) & CSD_FLAG_LOCK)
cpu_relax();
}
@@ -121,19 +121,17 @@
* to ->flags with any subsequent assignments to other
* fields of the specified call_single_data structure:
*/
- smp_mb();
+ smp_wmb();
}
static void csd_unlock(struct call_single_data *csd)
{
- WARN_ON((csd->flags & CSD_FLAG_WAIT) && !(csd->flags & CSD_FLAG_LOCK));
+ WARN_ON(!(csd->flags & CSD_FLAG_LOCK));
/*
* ensure we're all done before releasing data:
*/
- smp_mb();
-
- csd->flags &= ~CSD_FLAG_LOCK;
+ smp_store_release(&csd->flags, 0);
}
static DEFINE_PER_CPU_SHARED_ALIGNED(struct call_single_data, csd_data);
@@ -144,13 +142,16 @@
* ->func, ->info, and ->flags set.
*/
static int generic_exec_single(int cpu, struct call_single_data *csd,
- smp_call_func_t func, void *info, int wait)
+ smp_call_func_t func, void *info)
{
- struct call_single_data csd_stack = { .flags = 0 };
- unsigned long flags;
-
-
if (cpu == smp_processor_id()) {
+ unsigned long flags;
+
+ /*
+ * We can unlock early even for the synchronous on-stack case,
+ * since we're doing this from the same CPU..
+ */
+ csd_unlock(csd);
local_irq_save(flags);
func(info);
local_irq_restore(flags);
@@ -158,24 +159,14 @@
}
- if ((unsigned)cpu >= nr_cpu_ids || !cpu_online(cpu))
+ if ((unsigned)cpu >= nr_cpu_ids || !cpu_online(cpu)) {
+ csd_unlock(csd);
return -ENXIO;
-
-
- if (!csd) {
- csd = &csd_stack;
- if (!wait)
- csd = this_cpu_ptr(&csd_data);
}
- csd_lock(csd);
-
csd->func = func;
csd->info = info;
- if (wait)
- csd->flags |= CSD_FLAG_WAIT;
-
/*
* The list addition should be visible before sending the IPI
* handler locks the list to pull the entry off it because of
@@ -190,9 +181,6 @@
if (llist_add(&csd->llist, &per_cpu(call_single_queue, cpu)))
arch_send_call_function_single_ipi(cpu);
- if (wait)
- csd_lock_wait(csd);
-
return 0;
}
@@ -250,8 +238,17 @@
}
llist_for_each_entry_safe(csd, csd_next, entry, llist) {
- csd->func(csd->info);
- csd_unlock(csd);
+ smp_call_func_t func = csd->func;
+ void *info = csd->info;
+
+ /* Do we wait until *after* callback? */
+ if (csd->flags & CSD_FLAG_SYNCHRONOUS) {
+ func(info);
+ csd_unlock(csd);
+ } else {
+ csd_unlock(csd);
+ func(info);
+ }
}
/*
@@ -274,6 +271,8 @@
int smp_call_function_single(int cpu, smp_call_func_t func, void *info,
int wait)
{
+ struct call_single_data *csd;
+ struct call_single_data csd_stack = { .flags = CSD_FLAG_LOCK | CSD_FLAG_SYNCHRONOUS };
int this_cpu;
int err;
@@ -292,7 +291,16 @@
WARN_ON_ONCE(cpu_online(this_cpu) && irqs_disabled()
&& !oops_in_progress);
- err = generic_exec_single(cpu, NULL, func, info, wait);
+ csd = &csd_stack;
+ if (!wait) {
+ csd = this_cpu_ptr(&csd_data);
+ csd_lock(csd);
+ }
+
+ err = generic_exec_single(cpu, csd, func, info);
+
+ if (wait)
+ csd_lock_wait(csd);
put_cpu();
@@ -321,7 +329,15 @@
int err = 0;
preempt_disable();
- err = generic_exec_single(cpu, csd, csd->func, csd->info, 0);
+
+ /* We could deadlock if we have to wait here with interrupts disabled! */
+ if (WARN_ON_ONCE(csd->flags & CSD_FLAG_LOCK))
+ csd_lock_wait(csd);
+
+ csd->flags = CSD_FLAG_LOCK;
+ smp_wmb();
+
+ err = generic_exec_single(cpu, csd, csd->func, csd->info);
preempt_enable();
return err;
@@ -433,6 +449,8 @@
struct call_single_data *csd = per_cpu_ptr(cfd->csd, cpu);
csd_lock(csd);
+ if (wait)
+ csd->flags |= CSD_FLAG_SYNCHRONOUS;
csd->func = func;
csd->info = info;
llist_add(&csd->llist, &per_cpu(call_single_queue, cpu));
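
The csd_lock_wait()/csd_unlock() rework above swaps a full memory barrier for a release store paired with an acquire load on csd->flags, so whatever the handler wrote before unlocking is guaranteed visible to the waiter. A userspace sketch of that pairing with C11 atomics, single-threaded and illustrative only:

#include <stdatomic.h>
#include <stdio.h>

#define CSD_FLAG_LOCK 0x01

struct csd {
	atomic_uint flags;
	int payload;
};

/* Waiter: the acquire load pairs with the release store in csd_unlock(). */
static void csd_lock_wait(struct csd *csd)
{
	while (atomic_load_explicit(&csd->flags, memory_order_acquire) & CSD_FLAG_LOCK)
		;	/* cpu_relax() in the kernel */
}

/* Owner: everything written before this store is visible to the waiter. */
static void csd_unlock(struct csd *csd)
{
	atomic_store_explicit(&csd->flags, 0, memory_order_release);
}

int main(void)
{
	static struct csd csd;

	atomic_store_explicit(&csd.flags, CSD_FLAG_LOCK, memory_order_relaxed);
	csd.payload = 42;		/* ordered before the release below */
	csd_unlock(&csd);
	csd_lock_wait(&csd);		/* the acquire pairs with that release */
	printf("%d\n", csd.payload);	/* prints 42 */
	return 0;
}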
diff --git a/kernel/time/clockevents.c b/kernel/time/clockevents.c
index 25d942d..11dc22a 100644
--- a/kernel/time/clockevents.c
+++ b/kernel/time/clockevents.c
@@ -440,7 +440,7 @@
mutex_unlock(&clockevents_mutex);
return ret;
}
-EXPORT_SYMBOL_GPL(clockevents_unbind);
+EXPORT_SYMBOL_GPL(clockevents_unbind_device);
/* Sanity check of state transition callbacks */
static int clockevents_sanity_check(struct clock_event_device *dev)
diff --git a/lib/Kconfig b/lib/Kconfig
index f544022..601965a 100644
--- a/lib/Kconfig
+++ b/lib/Kconfig
@@ -396,10 +396,6 @@
them on the stack. This is a bit more expensive, but avoids
stack overflow.
-config DISABLE_OBSOLETE_CPUMASK_FUNCTIONS
- bool "Disable obsolete cpumask functions" if DEBUG_PER_CPU_MAPS
- depends on BROKEN
-
config CPU_RMAP
bool
depends on SMP
diff --git a/lib/cpumask.c b/lib/cpumask.c
index 5ab1553..830dd5d 100644
--- a/lib/cpumask.c
+++ b/lib/cpumask.c
@@ -5,27 +5,6 @@
#include <linux/export.h>
#include <linux/bootmem.h>
-int __first_cpu(const cpumask_t *srcp)
-{
- return min_t(int, NR_CPUS, find_first_bit(srcp->bits, NR_CPUS));
-}
-EXPORT_SYMBOL(__first_cpu);
-
-int __next_cpu(int n, const cpumask_t *srcp)
-{
- return min_t(int, NR_CPUS, find_next_bit(srcp->bits, NR_CPUS, n+1));
-}
-EXPORT_SYMBOL(__next_cpu);
-
-#if NR_CPUS > 64
-int __next_cpu_nr(int n, const cpumask_t *srcp)
-{
- return min_t(int, nr_cpu_ids,
- find_next_bit(srcp->bits, nr_cpu_ids, n+1));
-}
-EXPORT_SYMBOL(__next_cpu_nr);
-#endif
-
/**
* cpumask_next_and - get the next cpu in *src1p & *src2p
* @n: the cpu prior to the place to search (ie. return will be > @n)
@@ -90,13 +69,6 @@
dump_stack();
}
#endif
- /* FIXME: Bandaid to save us from old primitives which go to NR_CPUS. */
- if (*mask) {
- unsigned char *ptr = (unsigned char *)cpumask_bits(*mask);
- unsigned int tail;
- tail = BITS_TO_LONGS(NR_CPUS - nr_cpumask_bits) * sizeof(long);
- memset(ptr + cpumask_size() - tail, 0, tail);
- }
return *mask != NULL;
}
diff --git a/lib/devres.c b/lib/devres.c
index 0f1dd2e..fbe2aac 100644
--- a/lib/devres.c
+++ b/lib/devres.c
@@ -72,6 +72,34 @@
EXPORT_SYMBOL(devm_ioremap_nocache);
/**
+ * devm_ioremap_wc - Managed ioremap_wc()
+ * @dev: Generic device to remap IO address for
+ * @offset: BUS offset to map
+ * @size: Size of map
+ *
+ * Managed ioremap_wc(). Map is automatically unmapped on driver detach.
+ */
+void __iomem *devm_ioremap_wc(struct device *dev, resource_size_t offset,
+ resource_size_t size)
+{
+ void __iomem **ptr, *addr;
+
+ ptr = devres_alloc(devm_ioremap_release, sizeof(*ptr), GFP_KERNEL);
+ if (!ptr)
+ return NULL;
+
+ addr = ioremap_wc(offset, size);
+ if (addr) {
+ *ptr = addr;
+ devres_add(dev, ptr);
+ } else
+ devres_free(ptr);
+
+ return addr;
+}
+EXPORT_SYMBOL(devm_ioremap_wc);
+
+/**
* devm_iounmap - Managed iounmap()
* @dev: Generic device to unmap for
* @addr: Address to unmap
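
devm_ioremap_wc() follows the standard devres pattern: allocate a devres record, perform the mapping, and register the record so it is undone on driver detach. A hypothetical probe() fragment using it might look like this (the 'res' and 'priv' names are invented for illustration):

	priv->regs = devm_ioremap_wc(&pdev->dev, res->start, resource_size(res));
	if (!priv->regs)
		return -ENOMEM;
	/* No explicit iounmap(): devres tears the mapping down on detach. */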
diff --git a/lib/iommu-common.c b/lib/iommu-common.c
index fac4f35..a1a517c 100644
--- a/lib/iommu-common.c
+++ b/lib/iommu-common.c
@@ -9,37 +9,72 @@
#include <linux/iommu-helper.h>
#include <linux/iommu-common.h>
#include <linux/dma-mapping.h>
+#include <linux/hash.h>
#ifndef DMA_ERROR_CODE
#define DMA_ERROR_CODE (~(dma_addr_t)0x0)
#endif
-#define IOMMU_LARGE_ALLOC 15
+unsigned long iommu_large_alloc = 15;
+
+static DEFINE_PER_CPU(unsigned int, iommu_pool_hash);
+
+static inline bool need_flush(struct iommu_map_table *iommu)
+{
+ return (iommu->lazy_flush != NULL &&
+ (iommu->flags & IOMMU_NEED_FLUSH) != 0);
+}
+
+static inline void set_flush(struct iommu_map_table *iommu)
+{
+ iommu->flags |= IOMMU_NEED_FLUSH;
+}
+
+static inline void clear_flush(struct iommu_map_table *iommu)
+{
+ iommu->flags &= ~IOMMU_NEED_FLUSH;
+}
+
+static void setup_iommu_pool_hash(void)
+{
+ unsigned int i;
+ static bool do_once;
+
+ if (do_once)
+ return;
+ do_once = true;
+ for_each_possible_cpu(i)
+ per_cpu(iommu_pool_hash, i) = hash_32(i, IOMMU_POOL_HASHBITS);
+}
/*
- * Initialize iommu_pool entries for the iommu_table. `num_entries'
+ * Initialize iommu_pool entries for the iommu_map_table. `num_entries'
* is the number of table entries. If `large_pool' is set to true,
* the top 1/4 of the table will be set aside for pool allocations
- * of more than IOMMU_LARGE_ALLOC pages.
+ * of more than iommu_large_alloc pages.
*/
-extern void iommu_tbl_pool_init(struct iommu_table *iommu,
+extern void iommu_tbl_pool_init(struct iommu_map_table *iommu,
unsigned long num_entries,
- u32 page_table_shift,
- const struct iommu_tbl_ops *iommu_tbl_ops,
- bool large_pool, u32 npools)
+ u32 table_shift,
+ void (*lazy_flush)(struct iommu_map_table *),
+ bool large_pool, u32 npools,
+ bool skip_span_boundary_check)
{
unsigned int start, i;
struct iommu_pool *p = &(iommu->large_pool);
+ setup_iommu_pool_hash();
if (npools == 0)
iommu->nr_pools = IOMMU_NR_POOLS;
else
iommu->nr_pools = npools;
BUG_ON(npools > IOMMU_NR_POOLS);
- iommu->page_table_shift = page_table_shift;
- iommu->iommu_tbl_ops = iommu_tbl_ops;
+ iommu->table_shift = table_shift;
+ iommu->lazy_flush = lazy_flush;
start = 0;
+ if (skip_span_boundary_check)
+ iommu->flags |= IOMMU_NO_SPAN_BOUND;
if (large_pool)
iommu->flags |= IOMMU_HAS_LARGE_POOL;
@@ -48,11 +83,11 @@
else
iommu->poolsize = (num_entries * 3 / 4)/iommu->nr_pools;
for (i = 0; i < iommu->nr_pools; i++) {
- spin_lock_init(&(iommu->arena_pool[i].lock));
- iommu->arena_pool[i].start = start;
- iommu->arena_pool[i].hint = start;
+ spin_lock_init(&(iommu->pools[i].lock));
+ iommu->pools[i].start = start;
+ iommu->pools[i].hint = start;
start += iommu->poolsize; /* start for next pool */
- iommu->arena_pool[i].end = start - 1;
+ iommu->pools[i].end = start - 1;
}
if (!large_pool)
return;
@@ -65,121 +100,136 @@
EXPORT_SYMBOL(iommu_tbl_pool_init);
unsigned long iommu_tbl_range_alloc(struct device *dev,
- struct iommu_table *iommu,
+ struct iommu_map_table *iommu,
unsigned long npages,
unsigned long *handle,
- unsigned int pool_hash)
+ unsigned long mask,
+ unsigned int align_order)
{
+ unsigned int pool_hash = __this_cpu_read(iommu_pool_hash);
unsigned long n, end, start, limit, boundary_size;
- struct iommu_pool *arena;
+ struct iommu_pool *pool;
int pass = 0;
unsigned int pool_nr;
unsigned int npools = iommu->nr_pools;
unsigned long flags;
bool large_pool = ((iommu->flags & IOMMU_HAS_LARGE_POOL) != 0);
- bool largealloc = (large_pool && npages > IOMMU_LARGE_ALLOC);
+ bool largealloc = (large_pool && npages > iommu_large_alloc);
unsigned long shift;
+ unsigned long align_mask = 0;
+
+ if (align_order > 0)
+ align_mask = 0xffffffffffffffffl >> (64 - align_order);
/* Sanity check */
if (unlikely(npages == 0)) {
- printk_ratelimited("npages == 0\n");
+ WARN_ON_ONCE(1);
return DMA_ERROR_CODE;
}
if (largealloc) {
- arena = &(iommu->large_pool);
- spin_lock_irqsave(&arena->lock, flags);
+ pool = &(iommu->large_pool);
pool_nr = 0; /* to keep compiler happy */
} else {
/* pick out pool_nr */
pool_nr = pool_hash & (npools - 1);
- arena = &(iommu->arena_pool[pool_nr]);
-
- /* find first available unlocked pool */
- while (!spin_trylock_irqsave(&(arena->lock), flags)) {
- pool_nr = (pool_nr + 1) & (iommu->nr_pools - 1);
- arena = &(iommu->arena_pool[pool_nr]);
- }
+ pool = &(iommu->pools[pool_nr]);
}
+ spin_lock_irqsave(&pool->lock, flags);
again:
if (pass == 0 && handle && *handle &&
- (*handle >= arena->start) && (*handle < arena->end))
+ (*handle >= pool->start) && (*handle < pool->end))
start = *handle;
else
- start = arena->hint;
+ start = pool->hint;
- limit = arena->end;
+ limit = pool->end;
/* The case below can happen if we have a small segment appended
* to a large, or when the previous alloc was at the very end of
- * the available space. If so, go back to the beginning and flush.
+ * the available space. If so, go back to the beginning. If a
+ * flush is needed, it will get done based on the return value
+ * from iommu_area_alloc() below.
*/
- if (start >= limit) {
- start = arena->start;
- if (iommu->iommu_tbl_ops->reset != NULL)
- iommu->iommu_tbl_ops->reset(iommu);
+ if (start >= limit)
+ start = pool->start;
+ shift = iommu->table_map_base >> iommu->table_shift;
+ if (limit + shift > mask) {
+ limit = mask - shift + 1;
+ /* If we're constrained on address range, first try
+ * at the masked hint to avoid O(n) search complexity,
+ * but on second pass, start at 0 in pool 0.
+ */
+ if ((start & mask) >= limit || pass > 0) {
+ spin_unlock(&(pool->lock));
+ pool = &(iommu->pools[0]);
+ spin_lock(&(pool->lock));
+ start = pool->start;
+ } else {
+ start &= mask;
+ }
}
if (dev)
boundary_size = ALIGN(dma_get_seg_boundary(dev) + 1,
- 1 << iommu->page_table_shift);
+ 1 << iommu->table_shift);
else
- boundary_size = ALIGN(1ULL << 32, 1 << iommu->page_table_shift);
+ boundary_size = ALIGN(1ULL << 32, 1 << iommu->table_shift);
- shift = iommu->page_table_map_base >> iommu->page_table_shift;
- boundary_size = boundary_size >> iommu->page_table_shift;
+ boundary_size = boundary_size >> iommu->table_shift;
/*
- * if the iommu has a non-trivial cookie <-> index mapping, we set
+ * if the skip_span_boundary_check had been set during init, we set
* things up so that iommu_is_span_boundary() merely checks if the
* (index + npages) < num_tsb_entries
*/
- if (iommu->iommu_tbl_ops->cookie_to_index != NULL) {
+ if ((iommu->flags & IOMMU_NO_SPAN_BOUND) != 0) {
shift = 0;
boundary_size = iommu->poolsize * iommu->nr_pools;
}
n = iommu_area_alloc(iommu->map, limit, start, npages, shift,
- boundary_size, 0);
+ boundary_size, align_mask);
if (n == -1) {
if (likely(pass == 0)) {
/* First failure, rescan from the beginning. */
- arena->hint = arena->start;
- if (iommu->iommu_tbl_ops->reset != NULL)
- iommu->iommu_tbl_ops->reset(iommu);
+ pool->hint = pool->start;
+ set_flush(iommu);
pass++;
goto again;
} else if (!largealloc && pass <= iommu->nr_pools) {
- spin_unlock(&(arena->lock));
+ spin_unlock(&(pool->lock));
pool_nr = (pool_nr + 1) & (iommu->nr_pools - 1);
- arena = &(iommu->arena_pool[pool_nr]);
- while (!spin_trylock(&(arena->lock))) {
- pool_nr = (pool_nr + 1) & (iommu->nr_pools - 1);
- arena = &(iommu->arena_pool[pool_nr]);
- }
- arena->hint = arena->start;
+ pool = &(iommu->pools[pool_nr]);
+ spin_lock(&(pool->lock));
+ pool->hint = pool->start;
+ set_flush(iommu);
pass++;
goto again;
} else {
/* give up */
- spin_unlock_irqrestore(&(arena->lock), flags);
- return DMA_ERROR_CODE;
+ n = DMA_ERROR_CODE;
+ goto bail;
}
}
+ if (n < pool->hint || need_flush(iommu)) {
+ clear_flush(iommu);
+ iommu->lazy_flush(iommu);
+ }
end = n + npages;
-
- arena->hint = end;
+ pool->hint = end;
/* Update handle for SG allocations */
if (handle)
*handle = end;
- spin_unlock_irqrestore(&(arena->lock), flags);
+bail:
+ spin_unlock_irqrestore(&(pool->lock), flags);
return n;
}
EXPORT_SYMBOL(iommu_tbl_range_alloc);
-static struct iommu_pool *get_pool(struct iommu_table *tbl,
+static struct iommu_pool *get_pool(struct iommu_map_table *tbl,
unsigned long entry)
{
struct iommu_pool *p;
@@ -193,31 +243,27 @@
unsigned int pool_nr = entry / tbl->poolsize;
BUG_ON(pool_nr >= tbl->nr_pools);
- p = &tbl->arena_pool[pool_nr];
+ p = &tbl->pools[pool_nr];
}
return p;
}
-void iommu_tbl_range_free(struct iommu_table *iommu, u64 dma_addr,
- unsigned long npages, bool do_demap, void *demap_arg)
+/* Caller supplies the index of the entry into the iommu map table
+ * itself when the mapping from dma_addr to the entry is not the
+ * default addr->entry mapping below.
+ */
+void iommu_tbl_range_free(struct iommu_map_table *iommu, u64 dma_addr,
+ unsigned long npages, unsigned long entry)
{
- unsigned long entry;
struct iommu_pool *pool;
unsigned long flags;
- unsigned long shift = iommu->page_table_shift;
+ unsigned long shift = iommu->table_shift;
- if (iommu->iommu_tbl_ops->cookie_to_index != NULL) {
- entry = (*iommu->iommu_tbl_ops->cookie_to_index)(dma_addr,
- demap_arg);
- } else {
- entry = (dma_addr - iommu->page_table_map_base) >> shift;
- }
+ if (entry == DMA_ERROR_CODE) /* use default addr->entry mapping */
+ entry = (dma_addr - iommu->table_map_base) >> shift;
pool = get_pool(iommu, entry);
spin_lock_irqsave(&(pool->lock), flags);
- if (do_demap && iommu->iommu_tbl_ops->demap != NULL)
- (*iommu->iommu_tbl_ops->demap)(demap_arg, entry, npages);
-
bitmap_clear(iommu->map, entry, npages);
spin_unlock_irqrestore(&(pool->lock), flags);
}
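
One detail worth calling out in iommu_tbl_range_alloc(): for align_order > 0, the expression 0xffffffffffffffffl >> (64 - align_order) yields a mask of the low align_order bits, the same value as (1 << align_order) - 1. A quick userspace check of that identity:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	unsigned int order;

	for (order = 1; order < 64; order++) {
		uint64_t shifted = 0xffffffffffffffffULL >> (64 - order);
		uint64_t classic = (1ULL << order) - 1;

		if (shifted != classic) {
			printf("mismatch at order %u\n", order);
			return 1;
		}
	}
	printf("align_mask == (1 << align_order) - 1 for orders 1..63\n");
	return 0;
}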
diff --git a/lib/test-hexdump.c b/lib/test-hexdump.c
index 9846ff7..c227cc4 100644
--- a/lib/test-hexdump.c
+++ b/lib/test-hexdump.c
@@ -48,7 +48,7 @@
char test[32 * 3 + 2 + 32 + 1];
char real[32 * 3 + 2 + 32 + 1];
char *p;
- const char **result;
+ const char * const *result;
size_t l = len;
int gs = groupsize, rs = rowsize;
unsigned int i;
diff --git a/net/9p/protocol.c b/net/9p/protocol.c
index e9d0f0c..16d2875 100644
--- a/net/9p/protocol.c
+++ b/net/9p/protocol.c
@@ -275,7 +275,7 @@
}
break;
case 'R':{
- int16_t *nwqid = va_arg(ap, int16_t *);
+ uint16_t *nwqid = va_arg(ap, uint16_t *);
struct p9_qid **wqids =
va_arg(ap, struct p9_qid **);
@@ -440,7 +440,7 @@
stbuf->n_gid, stbuf->n_muid);
} break;
case 'V':{
- int32_t count = va_arg(ap, int32_t);
+ uint32_t count = va_arg(ap, uint32_t);
struct iov_iter *from =
va_arg(ap, struct iov_iter *);
errcode = p9pdu_writef(pdu, proto_version, "d",
@@ -471,7 +471,7 @@
}
break;
case 'R':{
- int16_t nwqid = va_arg(ap, int);
+ uint16_t nwqid = va_arg(ap, int);
struct p9_qid *wqids =
va_arg(ap, struct p9_qid *);
diff --git a/net/9p/trans_fd.c b/net/9p/trans_fd.c
index 3e3d82d..bced8c0 100644
--- a/net/9p/trans_fd.c
+++ b/net/9p/trans_fd.c
@@ -734,6 +734,7 @@
opts->port = P9_PORT;
opts->rfd = ~0;
opts->wfd = ~0;
+ opts->privport = 0;
if (!params)
return 0;
@@ -1013,7 +1014,6 @@
{
int err;
struct p9_fd_opts opts;
- struct p9_trans_fd *p;
parse_opts(args, &opts);
@@ -1026,7 +1026,6 @@
if (err < 0)
return err;
- p = (struct p9_trans_fd *) client->trans;
p9_conn_create(client);
return 0;
diff --git a/net/9p/trans_rdma.c b/net/9p/trans_rdma.c
index 14ad43b..3533d2a 100644
--- a/net/9p/trans_rdma.c
+++ b/net/9p/trans_rdma.c
@@ -139,6 +139,7 @@
int sq_depth;
int rq_depth;
long timeout;
+ int privport;
};
/*
@@ -146,7 +147,10 @@
*/
enum {
/* Options that take integer arguments */
- Opt_port, Opt_rq_depth, Opt_sq_depth, Opt_timeout, Opt_err,
+ Opt_port, Opt_rq_depth, Opt_sq_depth, Opt_timeout,
+ /* Options that take no argument */
+ Opt_privport,
+ Opt_err,
};
static match_table_t tokens = {
@@ -154,6 +158,7 @@
{Opt_sq_depth, "sq=%u"},
{Opt_rq_depth, "rq=%u"},
{Opt_timeout, "timeout=%u"},
+ {Opt_privport, "privport"},
{Opt_err, NULL},
};
@@ -175,6 +180,7 @@
opts->sq_depth = P9_RDMA_SQ_DEPTH;
opts->rq_depth = P9_RDMA_RQ_DEPTH;
opts->timeout = P9_RDMA_TIMEOUT;
+ opts->privport = 0;
if (!params)
return 0;
@@ -193,13 +199,13 @@
if (!*p)
continue;
token = match_token(p, tokens, args);
- if (token == Opt_err)
- continue;
- r = match_int(&args[0], &option);
- if (r < 0) {
- p9_debug(P9_DEBUG_ERROR,
- "integer field, but no integer?\n");
- continue;
+ if ((token != Opt_err) && (token != Opt_privport)) {
+ r = match_int(&args[0], &option);
+ if (r < 0) {
+ p9_debug(P9_DEBUG_ERROR,
+ "integer field, but no integer?\n");
+ continue;
+ }
}
switch (token) {
case Opt_port:
@@ -214,6 +220,9 @@
case Opt_timeout:
opts->timeout = option;
break;
+ case Opt_privport:
+ opts->privport = 1;
+ break;
default:
continue;
}
@@ -607,6 +616,23 @@
return 0;
}
+static int p9_rdma_bind_privport(struct p9_trans_rdma *rdma)
+{
+ struct sockaddr_in cl = {
+ .sin_family = AF_INET,
+ .sin_addr.s_addr = htonl(INADDR_ANY),
+ };
+ int port, err = -EINVAL;
+
+ for (port = P9_DEF_MAX_RESVPORT; port >= P9_DEF_MIN_RESVPORT; port--) {
+ cl.sin_port = htons((ushort)port);
+ err = rdma_bind_addr(rdma->cm_id, (struct sockaddr *)&cl);
+ if (err != -EADDRINUSE)
+ break;
+ }
+ return err;
+}
+
/**
 * trans_create_rdma - Transport method for creating a transport instance
* @client: client instance
@@ -642,6 +668,16 @@
/* Associate the client with the transport */
client->trans = rdma;
+ /* Bind to a privileged port if we need to */
+ if (opts.privport) {
+ err = p9_rdma_bind_privport(rdma);
+ if (err < 0) {
+ pr_err("%s (%d): problem binding to privport: %d\n",
+ __func__, task_pid_nr(current), -err);
+ goto error;
+ }
+ }
+
/* Resolve the server's address */
rdma->addr.sin_family = AF_INET;
rdma->addr.sin_addr.s_addr = in_aton(addr);
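
p9_rdma_bind_privport() walks downward through the reserved port range and keeps retrying only while the bind fails with EADDRINUSE. A standalone sketch of the same loop using a plain TCP socket; the 1023..512 range here stands in for the P9_DEF_*_RESVPORT constants and is an assumption of this example:

#include <stdio.h>
#include <errno.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <sys/socket.h>

static int bind_privport(int fd)
{
	struct sockaddr_in cl = {
		.sin_family = AF_INET,
		.sin_addr.s_addr = htonl(INADDR_ANY),
	};
	int port, err = -1;

	for (port = 1023; port >= 512; port--) {
		cl.sin_port = htons((unsigned short)port);
		err = bind(fd, (struct sockaddr *)&cl, sizeof(cl));
		if (err == 0 || errno != EADDRINUSE)
			break;	/* success, or an error that retrying will not fix */
	}
	return err;
}

int main(void)
{
	int fd = socket(AF_INET, SOCK_STREAM, 0);

	if (fd < 0)
		return 1;
	if (bind_privport(fd) != 0)
		fprintf(stderr, "bind: %s\n", strerror(errno));
	close(fd);
	return 0;
}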
diff --git a/net/9p/trans_virtio.c b/net/9p/trans_virtio.c
index e62bcbb..9dd49ca 100644
--- a/net/9p/trans_virtio.c
+++ b/net/9p/trans_virtio.c
@@ -525,7 +525,10 @@
vdev = dev_to_virtio(dev);
chan = vdev->priv;
- return snprintf(buf, chan->tag_len + 1, "%s", chan->tag);
+ memcpy(buf, chan->tag, chan->tag_len);
+ buf[chan->tag_len] = 0;
+
+ return chan->tag_len + 1;
}
static DEVICE_ATTR(mount_tag, 0444, p9_mount_tag_show, NULL);
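
The mount_tag change matters because chan->tag is not NUL-terminated; formatting it through "%s" could read past its end, so copying tag_len bytes and terminating by hand is the safe pattern. An illustration outside the kernel:

#include <stdio.h>
#include <string.h>

int main(void)
{
	char raw[4] = { 'r', 'o', 'o', 't' };	/* no trailing '\0' */
	size_t tag_len = sizeof(raw);
	char buf[16];

	memcpy(buf, raw, tag_len);
	buf[tag_len] = 0;
	printf("%s\n", buf);			/* prints "root" */
	return 0;
}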
diff --git a/net/bluetooth/hidp/core.c b/net/bluetooth/hidp/core.c
index a05b9db..9070dfd 100644
--- a/net/bluetooth/hidp/core.c
+++ b/net/bluetooth/hidp/core.c
@@ -1313,7 +1313,8 @@
struct socket *ctrl_sock,
struct socket *intr_sock)
{
- u32 valid_flags = 0;
+ u32 valid_flags = BIT(HIDP_VIRTUAL_CABLE_UNPLUG) |
+ BIT(HIDP_BOOT_PROTOCOL_MODE);
struct hidp_session *session;
struct l2cap_conn *conn;
struct l2cap_chan *chan;
diff --git a/scripts/checkkconfigsymbols.py b/scripts/checkkconfigsymbols.py
old mode 100644
new mode 100755
index e9cc689..74086a5
--- a/scripts/checkkconfigsymbols.py
+++ b/scripts/checkkconfigsymbols.py
@@ -1,8 +1,8 @@
#!/usr/bin/env python
-"""Find Kconfig identifiers that are referenced but not defined."""
+"""Find Kconfig symbols that are referenced but not defined."""
-# (c) 2014 Valentin Rothberg <valentinrothberg@gmail.com>
+# (c) 2014-2015 Valentin Rothberg <Valentin.Rothberg@lip6.fr>
# (c) 2014 Stefan Hengelein <stefan.hengelein@fau.de>
#
# Licensed under the terms of the GNU GPL License version 2
@@ -10,7 +10,9 @@
import os
import re
+import sys
from subprocess import Popen, PIPE, STDOUT
+from optparse import OptionParser
# regex expressions
@@ -32,22 +34,149 @@
REGEX_FILTER_FEATURES = re.compile(r"[A-Za-z0-9]$")
+def parse_options():
+ """The user interface of this module."""
+ usage = "%prog [options]\n\n" \
+ "Run this tool to detect Kconfig symbols that are referenced but " \
+ "not defined in\nKconfig. The output of this tool has the " \
+ "format \'Undefined symbol\\tFile list\'\n\n" \
+ "If no option is specified, %prog will default to check your\n" \
+ "current tree. Please note that specifying commits will " \
+ "\'git reset --hard\'\nyour current tree! You may save " \
+ "uncommitted changes to avoid losing data."
+
+ parser = OptionParser(usage=usage)
+
+ parser.add_option('-c', '--commit', dest='commit', action='store',
+ default="",
+ help="Check if the specified commit (hash) introduces "
+ "undefined Kconfig symbols.")
+
+ parser.add_option('-d', '--diff', dest='diff', action='store',
+ default="",
+ help="Diff undefined symbols between two commits. The "
+ "input format is based on Git log's "
+ "\'commit1..commit2\'.")
+
+ parser.add_option('', '--force', dest='force', action='store_true',
+ default=False,
+ help="Reset current Git tree even when it's dirty.")
+
+ (opts, _) = parser.parse_args()
+
+ if opts.commit and opts.diff:
+ sys.exit("Please specify only one option at once.")
+
+ if opts.diff and not re.match(r"^[\w\-\.]+\.\.[\w\-\.]+$", opts.diff):
+ sys.exit("Please specify valid input in the following format: "
+ "\'commit1..commit2\'")
+
+ if opts.commit or opts.diff:
+ if not opts.force and tree_is_dirty():
+ sys.exit("The current Git tree is dirty (see 'git status'). "
+ "Running this script may\ndelete important data since it "
+ "calls 'git reset --hard' for some performance\nreasons. "
+ " Please run this script in a clean Git tree or pass "
+ "'--force' if you\nwant to ignore this warning and "
+ "continue.")
+
+ return opts
+
+
def main():
"""Main function of this module."""
+ opts = parse_options()
+
+ if opts.commit or opts.diff:
+ head = get_head()
+
+ # get commit range
+ commit_a = None
+ commit_b = None
+ if opts.commit:
+ commit_a = opts.commit + "~"
+ commit_b = opts.commit
+ elif opts.diff:
+ split = opts.diff.split("..")
+ commit_a = split[0]
+ commit_b = split[1]
+ undefined_a = {}
+ undefined_b = {}
+
+ # get undefined items before the commit
+ execute("git reset --hard %s" % commit_a)
+ undefined_a = check_symbols()
+
+ # get undefined items for the commit
+ execute("git reset --hard %s" % commit_b)
+ undefined_b = check_symbols()
+
+ # report cases that are present for the commit but not before
+ for feature in sorted(undefined_b):
+ # feature has not been undefined before
+ if not feature in undefined_a:
+ files = sorted(undefined_b.get(feature))
+ print "%s\t%s" % (feature, ", ".join(files))
+ # check if there are new files that reference the undefined feature
+ else:
+ files = sorted(undefined_b.get(feature) -
+ undefined_a.get(feature))
+ if files:
+ print "%s\t%s" % (feature, ", ".join(files))
+
+ # reset to head
+ execute("git reset --hard %s" % head)
+
+ # default to check the entire tree
+ else:
+ undefined = check_symbols()
+ for feature in sorted(undefined):
+ files = sorted(undefined.get(feature))
+ print "%s\t%s" % (feature, ", ".join(files))
+
+
+def execute(cmd):
+ """Execute %cmd and return stdout. Exit in case of error."""
+ pop = Popen(cmd, stdout=PIPE, stderr=STDOUT, shell=True)
+ (stdout, _) = pop.communicate() # wait until finished
+ if pop.returncode != 0:
+ sys.exit(stdout)
+ return stdout
+
+
+def tree_is_dirty():
+ """Return true if the current working tree is dirty (i.e., if any file has
+ been added, deleted, modified, renamed or copied but not committed)."""
+ stdout = execute("git status --porcelain")
+ for line in stdout:
+ if re.findall(r"[URMADC]{1}", line[:2]):
+ return True
+ return False
+
+
+def get_head():
+ """Return commit hash of current HEAD."""
+ stdout = execute("git rev-parse HEAD")
+ return stdout.strip('\n')
+
+
+def check_symbols():
+ """Find undefined Kconfig symbols and return a dict with the symbol as key
+ and a list of referencing files as value."""
source_files = []
kconfig_files = []
defined_features = set()
referenced_features = dict() # {feature: [files]}
# use 'git ls-files' to get the worklist
- pop = Popen("git ls-files", stdout=PIPE, stderr=STDOUT, shell=True)
- (stdout, _) = pop.communicate() # wait until finished
+ stdout = execute("git ls-files")
if len(stdout) > 0 and stdout[-1] == "\n":
stdout = stdout[:-1]
for gitfile in stdout.rsplit("\n"):
- if ".git" in gitfile or "ChangeLog" in gitfile or \
- ".log" in gitfile or os.path.isdir(gitfile):
+ if ".git" in gitfile or "ChangeLog" in gitfile or \
+ ".log" in gitfile or os.path.isdir(gitfile) or \
+ gitfile.startswith("tools/"):
continue
if REGEX_FILE_KCONFIG.match(gitfile):
kconfig_files.append(gitfile)
@@ -61,7 +190,7 @@
for kfile in kconfig_files:
parse_kconfig_file(kfile, defined_features, referenced_features)
- print "Undefined symbol used\tFile list"
+ undefined = {} # {feature: [files]}
for feature in sorted(referenced_features):
# filter some false positives
if feature == "FOO" or feature == "BAR" or \
@@ -72,8 +201,8 @@
# avoid false positives for kernel modules
if feature[:-len("_MODULE")] in defined_features:
continue
- files = referenced_features.get(feature)
- print "%s\t%s" % (feature, ", ".join(files))
+ undefined[feature] = referenced_features.get(feature)
+ return undefined
def parse_source_file(sfile, referenced_features):
diff --git a/scripts/coccinelle/misc/irqf_oneshot.cocci b/scripts/coccinelle/misc/irqf_oneshot.cocci
index 6cfde94..a24a754 100644
--- a/scripts/coccinelle/misc/irqf_oneshot.cocci
+++ b/scripts/coccinelle/misc/irqf_oneshot.cocci
@@ -12,11 +12,13 @@
virtual report
@r1@
+expression dev;
expression irq;
expression thread_fn;
expression flags;
position p;
@@
+(
request_threaded_irq@p(irq, NULL, thread_fn,
(
flags | IRQF_ONESHOT
@@ -24,13 +26,24 @@
IRQF_ONESHOT
)
, ...)
+|
+devm_request_threaded_irq@p(dev, irq, NULL, thread_fn,
+(
+flags | IRQF_ONESHOT
+|
+IRQF_ONESHOT
+)
+, ...)
+)
@depends on patch@
+expression dev;
expression irq;
expression thread_fn;
expression flags;
position p != r1.p;
@@
+(
request_threaded_irq@p(irq, NULL, thread_fn,
(
-0
@@ -40,6 +53,17 @@
+flags | IRQF_ONESHOT
)
, ...)
+|
+devm_request_threaded_irq@p(dev, irq, NULL, thread_fn,
+(
+-0
++IRQF_ONESHOT
+|
+-flags
++flags | IRQF_ONESHOT
+)
+, ...)
+)
@depends on context@
position p != r1.p;
diff --git a/scripts/extract-ikconfig b/scripts/extract-ikconfig
index e186242..3b42f25 100755
--- a/scripts/extract-ikconfig
+++ b/scripts/extract-ikconfig
@@ -61,6 +61,7 @@
try_decompress 'BZh' xy bunzip2
try_decompress '\135\0\0\0' xxx unlzma
try_decompress '\211\114\132' xy 'lzop -d'
+try_decompress '\002\041\114\030' xyy 'lz4 -d -l'
# Bail out:
echo "$me: Cannot find kernel config." >&2
diff --git a/sound/soc/codecs/arizona.c b/sound/soc/codecs/arizona.c
index 57da0ce..eff4b4d 100644
--- a/sound/soc/codecs/arizona.c
+++ b/sound/soc/codecs/arizona.c
@@ -840,8 +840,8 @@
priv->arizona->hp_ena &= ~mask;
priv->arizona->hp_ena |= val;
- /* Force off if HPDET magic is active */
- if (priv->arizona->hpdet_magic)
+ /* Force off if HPDET clamp is active */
+ if (priv->arizona->hpdet_clamp)
val = 0;
regmap_update_bits_async(arizona->regmap, ARIZONA_OUTPUT_ENABLES_1,
diff --git a/tools/hv/Makefile b/tools/hv/Makefile
index 99ffe61..a8ab795 100644
--- a/tools/hv/Makefile
+++ b/tools/hv/Makefile
@@ -3,7 +3,7 @@
CC = $(CROSS_COMPILE)gcc
PTHREAD_LIBS = -lpthread
WARNINGS = -Wall -Wextra
-CFLAGS = $(WARNINGS) -g $(PTHREAD_LIBS)
+CFLAGS = $(WARNINGS) -g $(PTHREAD_LIBS) $(shell getconf LFS_CFLAGS)
all: hv_kvp_daemon hv_vss_daemon hv_fcopy_daemon
%: %.c
diff --git a/tools/hv/hv_vss_daemon.c b/tools/hv/hv_vss_daemon.c
index 5e63f70..506dd01 100644
--- a/tools/hv/hv_vss_daemon.c
+++ b/tools/hv/hv_vss_daemon.c
@@ -81,6 +81,7 @@
char match[] = "/dev/";
FILE *mounts;
struct mntent *ent;
+ char errdir[1024] = {0};
unsigned int cmd;
int error = 0, root_seen = 0, save_errno = 0;
@@ -115,6 +116,8 @@
goto err;
}
+ endmntent(mounts);
+
if (root_seen) {
error |= vss_do_freeze("/", cmd);
if (error && operation == VSS_OP_FREEZE)
@@ -124,16 +127,19 @@
goto out;
err:
save_errno = errno;
+ if (ent) {
+ strncpy(errdir, ent->mnt_dir, sizeof(errdir)-1);
+ endmntent(mounts);
+ }
vss_operate(VSS_OP_THAW);
/* Call syslog after we thaw all filesystems */
if (ent)
syslog(LOG_ERR, "FREEZE of %s failed; error:%d %s",
- ent->mnt_dir, save_errno, strerror(save_errno));
+ errdir, save_errno, strerror(save_errno));
else
syslog(LOG_ERR, "FREEZE of / failed; error:%d %s", save_errno,
strerror(save_errno));
out:
- endmntent(mounts);
return error;
}
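
The hv_vss_daemon fix copies ent->mnt_dir into a local buffer before calling endmntent(), because the mntent data is owned by the stream and becomes invalid once it is closed. A standalone illustration of the copy-before-release pattern:

#include <stdio.h>
#include <string.h>
#include <mntent.h>

int main(void)
{
	char errdir[1024] = {0};
	FILE *mounts = setmntent("/proc/mounts", "r");
	struct mntent *ent;

	if (!mounts)
		return 1;
	ent = getmntent(mounts);
	if (ent)
		strncpy(errdir, ent->mnt_dir, sizeof(errdir) - 1);
	endmntent(mounts);			/* ent must not be used past this point */
	if (errdir[0])
		printf("first mount point: %s\n", errdir);
	return 0;
}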
diff --git a/tools/perf/Documentation/perf-kmem.txt b/tools/perf/Documentation/perf-kmem.txt
index 150253c..23219c6 100644
--- a/tools/perf/Documentation/perf-kmem.txt
+++ b/tools/perf/Documentation/perf-kmem.txt
@@ -3,7 +3,7 @@
NAME
----
-perf-kmem - Tool to trace/measure kernel memory(slab) properties
+perf-kmem - Tool to trace/measure kernel memory properties
SYNOPSIS
--------
@@ -46,6 +46,12 @@
--raw-ip::
Print raw ip instead of symbol
+--slab::
+ Analyze SLAB allocator events.
+
+--page::
+ Analyze page allocator events
+
SEE ALSO
--------
linkperf:perf-record[1]
diff --git a/tools/perf/builtin-kmem.c b/tools/perf/builtin-kmem.c
index 4ebf65c..63ea013 100644
--- a/tools/perf/builtin-kmem.c
+++ b/tools/perf/builtin-kmem.c
@@ -22,6 +22,11 @@
#include <linux/string.h>
#include <locale.h>
+static int kmem_slab;
+static int kmem_page;
+
+static long kmem_page_size;
+
struct alloc_stat;
typedef int (*sort_fn_t)(struct alloc_stat *, struct alloc_stat *);
@@ -226,6 +231,244 @@
return 0;
}
+static u64 total_page_alloc_bytes;
+static u64 total_page_free_bytes;
+static u64 total_page_nomatch_bytes;
+static u64 total_page_fail_bytes;
+static unsigned long nr_page_allocs;
+static unsigned long nr_page_frees;
+static unsigned long nr_page_fails;
+static unsigned long nr_page_nomatch;
+
+static bool use_pfn;
+
+#define MAX_MIGRATE_TYPES 6
+#define MAX_PAGE_ORDER 11
+
+static int order_stats[MAX_PAGE_ORDER][MAX_MIGRATE_TYPES];
+
+struct page_stat {
+ struct rb_node node;
+ u64 page;
+ int order;
+ unsigned gfp_flags;
+ unsigned migrate_type;
+ u64 alloc_bytes;
+ u64 free_bytes;
+ int nr_alloc;
+ int nr_free;
+};
+
+static struct rb_root page_tree;
+static struct rb_root page_alloc_tree;
+static struct rb_root page_alloc_sorted;
+
+static struct page_stat *search_page(unsigned long page, bool create)
+{
+ struct rb_node **node = &page_tree.rb_node;
+ struct rb_node *parent = NULL;
+ struct page_stat *data;
+
+ while (*node) {
+ s64 cmp;
+
+ parent = *node;
+ data = rb_entry(*node, struct page_stat, node);
+
+ cmp = data->page - page;
+ if (cmp < 0)
+ node = &parent->rb_left;
+ else if (cmp > 0)
+ node = &parent->rb_right;
+ else
+ return data;
+ }
+
+ if (!create)
+ return NULL;
+
+ data = zalloc(sizeof(*data));
+ if (data != NULL) {
+ data->page = page;
+
+ rb_link_node(&data->node, parent, node);
+ rb_insert_color(&data->node, &page_tree);
+ }
+
+ return data;
+}
+
+static int page_stat_cmp(struct page_stat *a, struct page_stat *b)
+{
+ if (a->page > b->page)
+ return -1;
+ if (a->page < b->page)
+ return 1;
+ if (a->order > b->order)
+ return -1;
+ if (a->order < b->order)
+ return 1;
+ if (a->migrate_type > b->migrate_type)
+ return -1;
+ if (a->migrate_type < b->migrate_type)
+ return 1;
+ if (a->gfp_flags > b->gfp_flags)
+ return -1;
+ if (a->gfp_flags < b->gfp_flags)
+ return 1;
+ return 0;
+}
+
+static struct page_stat *search_page_alloc_stat(struct page_stat *stat, bool create)
+{
+ struct rb_node **node = &page_alloc_tree.rb_node;
+ struct rb_node *parent = NULL;
+ struct page_stat *data;
+
+ while (*node) {
+ s64 cmp;
+
+ parent = *node;
+ data = rb_entry(*node, struct page_stat, node);
+
+ cmp = page_stat_cmp(data, stat);
+ if (cmp < 0)
+ node = &parent->rb_left;
+ else if (cmp > 0)
+ node = &parent->rb_right;
+ else
+ return data;
+ }
+
+ if (!create)
+ return NULL;
+
+ data = zalloc(sizeof(*data));
+ if (data != NULL) {
+ data->page = stat->page;
+ data->order = stat->order;
+ data->gfp_flags = stat->gfp_flags;
+ data->migrate_type = stat->migrate_type;
+
+ rb_link_node(&data->node, parent, node);
+ rb_insert_color(&data->node, &page_alloc_tree);
+ }
+
+ return data;
+}
+
+static bool valid_page(u64 pfn_or_page)
+{
+ if (use_pfn && pfn_or_page == -1UL)
+ return false;
+ if (!use_pfn && pfn_or_page == 0)
+ return false;
+ return true;
+}
+
+static int perf_evsel__process_page_alloc_event(struct perf_evsel *evsel,
+ struct perf_sample *sample)
+{
+ u64 page;
+ unsigned int order = perf_evsel__intval(evsel, sample, "order");
+ unsigned int gfp_flags = perf_evsel__intval(evsel, sample, "gfp_flags");
+ unsigned int migrate_type = perf_evsel__intval(evsel, sample,
+ "migratetype");
+ u64 bytes = kmem_page_size << order;
+ struct page_stat *stat;
+ struct page_stat this = {
+ .order = order,
+ .gfp_flags = gfp_flags,
+ .migrate_type = migrate_type,
+ };
+
+ if (use_pfn)
+ page = perf_evsel__intval(evsel, sample, "pfn");
+ else
+ page = perf_evsel__intval(evsel, sample, "page");
+
+ nr_page_allocs++;
+ total_page_alloc_bytes += bytes;
+
+ if (!valid_page(page)) {
+ nr_page_fails++;
+ total_page_fail_bytes += bytes;
+
+ return 0;
+ }
+
+ /*
+ * This is to find the current page (with correct gfp flags and
+ * migrate type) at free event.
+ */
+ stat = search_page(page, true);
+ if (stat == NULL)
+ return -ENOMEM;
+
+ stat->order = order;
+ stat->gfp_flags = gfp_flags;
+ stat->migrate_type = migrate_type;
+
+ this.page = page;
+ stat = search_page_alloc_stat(&this, true);
+ if (stat == NULL)
+ return -ENOMEM;
+
+ stat->nr_alloc++;
+ stat->alloc_bytes += bytes;
+
+ order_stats[order][migrate_type]++;
+
+ return 0;
+}
+
+static int perf_evsel__process_page_free_event(struct perf_evsel *evsel,
+ struct perf_sample *sample)
+{
+ u64 page;
+ unsigned int order = perf_evsel__intval(evsel, sample, "order");
+ u64 bytes = kmem_page_size << order;
+ struct page_stat *stat;
+ struct page_stat this = {
+ .order = order,
+ };
+
+ if (use_pfn)
+ page = perf_evsel__intval(evsel, sample, "pfn");
+ else
+ page = perf_evsel__intval(evsel, sample, "page");
+
+ nr_page_frees++;
+ total_page_free_bytes += bytes;
+
+ stat = search_page(page, false);
+ if (stat == NULL) {
+ pr_debug2("missing free at page %"PRIx64" (order: %d)\n",
+ page, order);
+
+ nr_page_nomatch++;
+ total_page_nomatch_bytes += bytes;
+
+ return 0;
+ }
+
+ this.page = page;
+ this.gfp_flags = stat->gfp_flags;
+ this.migrate_type = stat->migrate_type;
+
+ rb_erase(&stat->node, &page_tree);
+ free(stat);
+
+ stat = search_page_alloc_stat(&this, false);
+ if (stat == NULL)
+ return -ENOENT;
+
+ stat->nr_free++;
+ stat->free_bytes += bytes;
+
+ return 0;
+}
+
typedef int (*tracepoint_handler)(struct perf_evsel *evsel,
struct perf_sample *sample);
@@ -270,8 +513,9 @@
return 100.0 - (100.0 * n_req / n_alloc);
}
-static void __print_result(struct rb_root *root, struct perf_session *session,
- int n_lines, int is_caller)
+static void __print_slab_result(struct rb_root *root,
+ struct perf_session *session,
+ int n_lines, int is_caller)
{
struct rb_node *next;
struct machine *machine = &session->machines.host;
@@ -323,9 +567,56 @@
printf("%.105s\n", graph_dotted_line);
}
-static void print_summary(void)
+static const char * const migrate_type_str[] = {
+ "UNMOVABL",
+ "RECLAIM",
+ "MOVABLE",
+ "RESERVED",
+ "CMA/ISLT",
+ "UNKNOWN",
+};
+
+static void __print_page_result(struct rb_root *root,
+ struct perf_session *session __maybe_unused,
+ int n_lines)
{
- printf("\nSUMMARY\n=======\n");
+ struct rb_node *next = rb_first(root);
+ const char *format;
+
+ printf("\n%.80s\n", graph_dotted_line);
+ printf(" %-16s | Total alloc (KB) | Hits | Order | Mig.type | GFP flags\n",
+ use_pfn ? "PFN" : "Page");
+ printf("%.80s\n", graph_dotted_line);
+
+ if (use_pfn)
+ format = " %16llu | %'16llu | %'9d | %5d | %8s | %08lx\n";
+ else
+ format = " %016llx | %'16llu | %'9d | %5d | %8s | %08lx\n";
+
+ while (next && n_lines--) {
+ struct page_stat *data;
+
+ data = rb_entry(next, struct page_stat, node);
+
+ printf(format, (unsigned long long)data->page,
+ (unsigned long long)data->alloc_bytes / 1024,
+ data->nr_alloc, data->order,
+ migrate_type_str[data->migrate_type],
+ (unsigned long)data->gfp_flags);
+
+ next = rb_next(next);
+ }
+
+ if (n_lines == -1)
+ printf(" ... | ... | ... | ... | ... | ... \n");
+
+ printf("%.80s\n", graph_dotted_line);
+}
+
+static void print_slab_summary(void)
+{
+ printf("\nSUMMARY (SLAB allocator)");
+ printf("\n========================\n");
printf("Total bytes requested: %'lu\n", total_requested);
printf("Total bytes allocated: %'lu\n", total_allocated);
printf("Total bytes wasted on internal fragmentation: %'lu\n",
@@ -335,13 +626,73 @@
printf("Cross CPU allocations: %'lu/%'lu\n", nr_cross_allocs, nr_allocs);
}
-static void print_result(struct perf_session *session)
+static void print_page_summary(void)
+{
+ int o, m;
+ u64 nr_alloc_freed = nr_page_frees - nr_page_nomatch;
+ u64 total_alloc_freed_bytes = total_page_free_bytes - total_page_nomatch_bytes;
+
+ printf("\nSUMMARY (page allocator)");
+ printf("\n========================\n");
+ printf("%-30s: %'16lu [ %'16"PRIu64" KB ]\n", "Total allocation requests",
+ nr_page_allocs, total_page_alloc_bytes / 1024);
+ printf("%-30s: %'16lu [ %'16"PRIu64" KB ]\n", "Total free requests",
+ nr_page_frees, total_page_free_bytes / 1024);
+ printf("\n");
+
+ printf("%-30s: %'16lu [ %'16"PRIu64" KB ]\n", "Total alloc+freed requests",
+ nr_alloc_freed, (total_alloc_freed_bytes) / 1024);
+ printf("%-30s: %'16lu [ %'16"PRIu64" KB ]\n", "Total alloc-only requests",
+ nr_page_allocs - nr_alloc_freed,
+ (total_page_alloc_bytes - total_alloc_freed_bytes) / 1024);
+ printf("%-30s: %'16lu [ %'16"PRIu64" KB ]\n", "Total free-only requests",
+ nr_page_nomatch, total_page_nomatch_bytes / 1024);
+ printf("\n");
+
+ printf("%-30s: %'16lu [ %'16"PRIu64" KB ]\n", "Total allocation failures",
+ nr_page_fails, total_page_fail_bytes / 1024);
+ printf("\n");
+
+ printf("%5s %12s %12s %12s %12s %12s\n", "Order", "Unmovable",
+ "Reclaimable", "Movable", "Reserved", "CMA/Isolated");
+ printf("%.5s %.12s %.12s %.12s %.12s %.12s\n", graph_dotted_line,
+ graph_dotted_line, graph_dotted_line, graph_dotted_line,
+ graph_dotted_line, graph_dotted_line);
+
+ for (o = 0; o < MAX_PAGE_ORDER; o++) {
+ printf("%5d", o);
+ for (m = 0; m < MAX_MIGRATE_TYPES - 1; m++) {
+ if (order_stats[o][m])
+ printf(" %'12d", order_stats[o][m]);
+ else
+ printf(" %12c", '.');
+ }
+ printf("\n");
+ }
+}
+
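The summary above pairs each observed free against an earlier allocation when one was seen: frees with no match are reported as "free-only", and allocations never freed during the trace end up as "alloc-only". A quick numeric sketch of that bookkeeping with made-up counters:

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
	/* hypothetical totals gathered while processing the trace */
	uint64_t nr_page_allocs = 1000, nr_page_frees = 900, nr_page_nomatch = 50;

	uint64_t nr_alloc_freed = nr_page_frees - nr_page_nomatch;	/* 850 */
	uint64_t nr_alloc_only  = nr_page_allocs - nr_alloc_freed;	/* 150 */

	printf("alloc+freed: %" PRIu64 ", alloc-only: %" PRIu64 ", free-only: %" PRIu64 "\n",
	       nr_alloc_freed, nr_alloc_only, nr_page_nomatch);
	return 0;
}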
+static void print_slab_result(struct perf_session *session)
{
if (caller_flag)
- __print_result(&root_caller_sorted, session, caller_lines, 1);
+ __print_slab_result(&root_caller_sorted, session, caller_lines, 1);
if (alloc_flag)
- __print_result(&root_alloc_sorted, session, alloc_lines, 0);
- print_summary();
+ __print_slab_result(&root_alloc_sorted, session, alloc_lines, 0);
+ print_slab_summary();
+}
+
+static void print_page_result(struct perf_session *session)
+{
+ if (alloc_flag)
+ __print_page_result(&page_alloc_sorted, session, alloc_lines);
+ print_page_summary();
+}
+
+static void print_result(struct perf_session *session)
+{
+ if (kmem_slab)
+ print_slab_result(session);
+ if (kmem_page)
+ print_page_result(session);
}
struct sort_dimension {
@@ -353,8 +704,8 @@
static LIST_HEAD(caller_sort);
static LIST_HEAD(alloc_sort);
-static void sort_insert(struct rb_root *root, struct alloc_stat *data,
- struct list_head *sort_list)
+static void sort_slab_insert(struct rb_root *root, struct alloc_stat *data,
+ struct list_head *sort_list)
{
struct rb_node **new = &(root->rb_node);
struct rb_node *parent = NULL;
@@ -383,8 +734,8 @@
rb_insert_color(&data->node, root);
}
-static void __sort_result(struct rb_root *root, struct rb_root *root_sorted,
- struct list_head *sort_list)
+static void __sort_slab_result(struct rb_root *root, struct rb_root *root_sorted,
+ struct list_head *sort_list)
{
struct rb_node *node;
struct alloc_stat *data;
@@ -396,26 +747,79 @@
rb_erase(node, root);
data = rb_entry(node, struct alloc_stat, node);
- sort_insert(root_sorted, data, sort_list);
+ sort_slab_insert(root_sorted, data, sort_list);
+ }
+}
+
+static void sort_page_insert(struct rb_root *root, struct page_stat *data)
+{
+ struct rb_node **new = &root->rb_node;
+ struct rb_node *parent = NULL;
+
+ while (*new) {
+ struct page_stat *this;
+ int cmp = 0;
+
+ this = rb_entry(*new, struct page_stat, node);
+ parent = *new;
+
+ /* TODO: support more sort key */
+ cmp = data->alloc_bytes - this->alloc_bytes;
+
+ if (cmp > 0)
+ new = &parent->rb_left;
+ else
+ new = &parent->rb_right;
+ }
+
+ rb_link_node(&data->node, parent, new);
+ rb_insert_color(&data->node, root);
+}
+
+static void __sort_page_result(struct rb_root *root, struct rb_root *root_sorted)
+{
+ struct rb_node *node;
+ struct page_stat *data;
+
+ for (;;) {
+ node = rb_first(root);
+ if (!node)
+ break;
+
+ rb_erase(node, root);
+ data = rb_entry(node, struct page_stat, node);
+ sort_page_insert(root_sorted, data);
}
}
static void sort_result(void)
{
- __sort_result(&root_alloc_stat, &root_alloc_sorted, &alloc_sort);
- __sort_result(&root_caller_stat, &root_caller_sorted, &caller_sort);
+ if (kmem_slab) {
+ __sort_slab_result(&root_alloc_stat, &root_alloc_sorted,
+ &alloc_sort);
+ __sort_slab_result(&root_caller_stat, &root_caller_sorted,
+ &caller_sort);
+ }
+ if (kmem_page) {
+ __sort_page_result(&page_alloc_tree, &page_alloc_sorted);
+ }
}
static int __cmd_kmem(struct perf_session *session)
{
int err = -EINVAL;
+ struct perf_evsel *evsel;
const struct perf_evsel_str_handler kmem_tracepoints[] = {
+ /* slab allocator */
{ "kmem:kmalloc", perf_evsel__process_alloc_event, },
{ "kmem:kmem_cache_alloc", perf_evsel__process_alloc_event, },
{ "kmem:kmalloc_node", perf_evsel__process_alloc_node_event, },
{ "kmem:kmem_cache_alloc_node", perf_evsel__process_alloc_node_event, },
{ "kmem:kfree", perf_evsel__process_free_event, },
{ "kmem:kmem_cache_free", perf_evsel__process_free_event, },
+ /* page allocator */
+ { "kmem:mm_page_alloc", perf_evsel__process_page_alloc_event, },
+ { "kmem:mm_page_free", perf_evsel__process_page_free_event, },
};
if (!perf_session__has_traces(session, "kmem record"))
@@ -426,10 +830,20 @@
goto out;
}
+ evlist__for_each(session->evlist, evsel) {
+ if (!strcmp(perf_evsel__name(evsel), "kmem:mm_page_alloc") &&
+ perf_evsel__field(evsel, "pfn")) {
+ use_pfn = true;
+ break;
+ }
+ }
+
setup_pager();
err = perf_session__process_events(session);
- if (err != 0)
+ if (err != 0) {
+ pr_err("error during process events: %d\n", err);
goto out;
+ }
sort_result();
print_result(session);
out:
@@ -612,6 +1026,22 @@
return 0;
}
+static int parse_slab_opt(const struct option *opt __maybe_unused,
+ const char *arg __maybe_unused,
+ int unset __maybe_unused)
+{
+ kmem_slab = (kmem_page + 1);
+ return 0;
+}
+
+static int parse_page_opt(const struct option *opt __maybe_unused,
+ const char *arg __maybe_unused,
+ int unset __maybe_unused)
+{
+ kmem_page = (kmem_slab + 1);
+ return 0;
+}
+
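Note that the two option callbacks above do not simply set a flag: each records one more than the other mode's current value, so whichever of --slab/--page appears last ends up with the larger number while both remain non-zero. Within this patch only "non-zero" matters, but the relative values preserve the order in which the options were given. A small sketch of the resulting values, with hypothetical command lines reduced to plain calls:

#include <stdio.h>

/* mirrors parse_slab_opt()/parse_page_opt(): each side records (other + 1) */
static int slab_mode, page_mode;

static void opt_slab(void) { slab_mode = page_mode + 1; }
static void opt_page(void) { page_mode = slab_mode + 1; }

int main(void)
{
	opt_page();		/* as if: perf kmem stat --page --slab */
	opt_slab();
	printf("slab=%d page=%d\n", slab_mode, page_mode);	/* slab=2 page=1 */

	slab_mode = page_mode = 0;
	opt_slab();		/* as if: perf kmem stat --slab --page */
	opt_page();
	printf("slab=%d page=%d\n", slab_mode, page_mode);	/* slab=1 page=2 */
	return 0;
}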
static int parse_line_opt(const struct option *opt __maybe_unused,
const char *arg, int unset __maybe_unused)
{
@@ -634,6 +1064,8 @@
{
const char * const record_args[] = {
"record", "-a", "-R", "-c", "1",
+ };
+ const char * const slab_events[] = {
"-e", "kmem:kmalloc",
"-e", "kmem:kmalloc_node",
"-e", "kmem:kfree",
@@ -641,10 +1073,19 @@
"-e", "kmem:kmem_cache_alloc_node",
"-e", "kmem:kmem_cache_free",
};
+ const char * const page_events[] = {
+ "-e", "kmem:mm_page_alloc",
+ "-e", "kmem:mm_page_free",
+ };
unsigned int rec_argc, i, j;
const char **rec_argv;
rec_argc = ARRAY_SIZE(record_args) + argc - 1;
+ if (kmem_slab)
+ rec_argc += ARRAY_SIZE(slab_events);
+ if (kmem_page)
+ rec_argc += ARRAY_SIZE(page_events);
+
rec_argv = calloc(rec_argc + 1, sizeof(char *));
if (rec_argv == NULL)
@@ -653,6 +1094,15 @@
for (i = 0; i < ARRAY_SIZE(record_args); i++)
rec_argv[i] = strdup(record_args[i]);
+ if (kmem_slab) {
+ for (j = 0; j < ARRAY_SIZE(slab_events); j++, i++)
+ rec_argv[i] = strdup(slab_events[j]);
+ }
+ if (kmem_page) {
+ for (j = 0; j < ARRAY_SIZE(page_events); j++, i++)
+ rec_argv[i] = strdup(page_events[j]);
+ }
+
for (j = 1; j < (unsigned int)argc; j++, i++)
rec_argv[i] = argv[j];
@@ -679,6 +1129,10 @@
OPT_CALLBACK('l', "line", NULL, "num", "show n lines", parse_line_opt),
OPT_BOOLEAN(0, "raw-ip", &raw_ip, "show raw ip instead of symbol"),
OPT_BOOLEAN('f', "force", &file.force, "don't complain, do it"),
+ OPT_CALLBACK_NOOPT(0, "slab", NULL, NULL, "Analyze slab allocator",
+ parse_slab_opt),
+ OPT_CALLBACK_NOOPT(0, "page", NULL, NULL, "Analyze page allocator",
+ parse_page_opt),
OPT_END()
};
const char *const kmem_subcommands[] = { "record", "stat", NULL };
@@ -695,6 +1149,9 @@
if (!argc)
usage_with_options(kmem_usage, kmem_options);
+ if (kmem_slab == 0 && kmem_page == 0)
+ kmem_slab = 1; /* for backward compatibility */
+
if (!strncmp(argv[0], "rec", 3)) {
symbol__init(NULL);
return __cmd_record(argc, argv);
@@ -706,6 +1163,17 @@
if (session == NULL)
return -1;
+ if (kmem_page) {
+ struct perf_evsel *evsel = perf_evlist__first(session->evlist);
+
+ if (evsel == NULL || evsel->tp_format == NULL) {
+ pr_err("invalid event found.. aborting\n");
+ return -1;
+ }
+
+ kmem_page_size = pevent_get_page_size(evsel->tp_format->pevent);
+ }
+
symbol__init(&session->header.env);
if (!strcmp(argv[0], "stat")) {
diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c
index 30545ce..d8bb616 100644
--- a/tools/perf/util/probe-event.c
+++ b/tools/perf/util/probe-event.c
@@ -332,6 +332,7 @@
else {
result->offset += pp->offset;
result->line += pp->line;
+ result->retprobe = pp->retprobe;
ret = 0;
}
@@ -654,65 +655,6 @@
return ntevs;
}
-/*
- * Find a src file from a DWARF tag path. Prepend optional source path prefix
- * and chop off leading directories that do not exist. Result is passed back as
- * a newly allocated path on success.
- * Return 0 if file was found and readable, -errno otherwise.
- */
-static int get_real_path(const char *raw_path, const char *comp_dir,
- char **new_path)
-{
- const char *prefix = symbol_conf.source_prefix;
-
- if (!prefix) {
- if (raw_path[0] != '/' && comp_dir)
- /* If not an absolute path, try to use comp_dir */
- prefix = comp_dir;
- else {
- if (access(raw_path, R_OK) == 0) {
- *new_path = strdup(raw_path);
- return *new_path ? 0 : -ENOMEM;
- } else
- return -errno;
- }
- }
-
- *new_path = malloc((strlen(prefix) + strlen(raw_path) + 2));
- if (!*new_path)
- return -ENOMEM;
-
- for (;;) {
- sprintf(*new_path, "%s/%s", prefix, raw_path);
-
- if (access(*new_path, R_OK) == 0)
- return 0;
-
- if (!symbol_conf.source_prefix) {
- /* In case of searching comp_dir, don't retry */
- zfree(new_path);
- return -errno;
- }
-
- switch (errno) {
- case ENAMETOOLONG:
- case ENOENT:
- case EROFS:
- case EFAULT:
- raw_path = strchr(++raw_path, '/');
- if (!raw_path) {
- zfree(new_path);
- return -ENOENT;
- }
- continue;
-
- default:
- zfree(new_path);
- return -errno;
- }
- }
-}
-
#define LINEBUF_SIZE 256
#define NR_ADDITIONAL_LINES 2
diff --git a/tools/perf/util/probe-finder.c b/tools/perf/util/probe-finder.c
index e307423..b5bf9d5 100644
--- a/tools/perf/util/probe-finder.c
+++ b/tools/perf/util/probe-finder.c
@@ -855,11 +855,22 @@
static int find_probe_point_lazy(Dwarf_Die *sp_die, struct probe_finder *pf)
{
int ret = 0;
+ char *fpath;
if (intlist__empty(pf->lcache)) {
+ const char *comp_dir;
+
+ comp_dir = cu_get_comp_dir(&pf->cu_die);
+ ret = get_real_path(pf->fname, comp_dir, &fpath);
+ if (ret < 0) {
+ pr_warning("Failed to find source file path.\n");
+ return ret;
+ }
+
/* Matching lazy line pattern */
- ret = find_lazy_match_lines(pf->lcache, pf->fname,
+ ret = find_lazy_match_lines(pf->lcache, fpath,
pf->pev->point.lazy_line);
+ free(fpath);
if (ret <= 0)
return ret;
}
@@ -1055,7 +1066,7 @@
if (pp->function)
ret = find_probe_point_by_func(pf);
else if (pp->lazy_line)
- ret = find_probe_point_lazy(NULL, pf);
+ ret = find_probe_point_lazy(&pf->cu_die, pf);
else {
pf->lno = pp->line;
ret = find_probe_point_by_line(pf);
@@ -1622,3 +1633,61 @@
return (ret < 0) ? ret : lf.found;
}
+/*
+ * Find a src file from a DWARF tag path. Prepend optional source path prefix
+ * and chop off leading directories that do not exist. Result is passed back as
+ * a newly allocated path on success.
+ * Return 0 if file was found and readable, -errno otherwise.
+ */
+int get_real_path(const char *raw_path, const char *comp_dir,
+ char **new_path)
+{
+ const char *prefix = symbol_conf.source_prefix;
+
+ if (!prefix) {
+ if (raw_path[0] != '/' && comp_dir)
+ /* If not an absolute path, try to use comp_dir */
+ prefix = comp_dir;
+ else {
+ if (access(raw_path, R_OK) == 0) {
+ *new_path = strdup(raw_path);
+ return *new_path ? 0 : -ENOMEM;
+ } else
+ return -errno;
+ }
+ }
+
+ *new_path = malloc((strlen(prefix) + strlen(raw_path) + 2));
+ if (!*new_path)
+ return -ENOMEM;
+
+ for (;;) {
+ sprintf(*new_path, "%s/%s", prefix, raw_path);
+
+ if (access(*new_path, R_OK) == 0)
+ return 0;
+
+ if (!symbol_conf.source_prefix) {
+ /* In case of searching comp_dir, don't retry */
+ zfree(new_path);
+ return -errno;
+ }
+
+ switch (errno) {
+ case ENAMETOOLONG:
+ case ENOENT:
+ case EROFS:
+ case EFAULT:
+ raw_path = strchr(++raw_path, '/');
+ if (!raw_path) {
+ zfree(new_path);
+ return -ENOENT;
+ }
+ continue;
+
+ default:
+ zfree(new_path);
+ return -errno;
+ }
+ }
+}
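get_real_path(), moved here from probe-event.c so find_probe_point_lazy() can resolve lazy-line patterns, first tries the configured source prefix (or the DWARF comp_dir), then keeps chopping leading components off raw_path until a readable file is found. A simplified, self-contained sketch of that retry loop, with hypothetical paths and without symbol_conf/zfree; the real function additionally inspects errno and only retries when a source prefix is configured:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Try prefix/raw_path; on failure drop the leading component of raw_path and retry. */
static char *resolve_source(const char *prefix, const char *raw_path)
{
	char *buf = malloc(strlen(prefix) + strlen(raw_path) + 2);

	if (!buf)
		return NULL;

	for (;;) {
		sprintf(buf, "%s/%s", prefix, raw_path);
		if (access(buf, R_OK) == 0)
			return buf;			/* found a readable file */

		raw_path = strchr(raw_path + 1, '/');
		if (!raw_path) {			/* nothing left to chop */
			free(buf);
			return NULL;
		}
		raw_path++;				/* skip the '/' itself */
	}
}

int main(void)
{
	/* hypothetical: sources installed under /usr/src/linux */
	char *p = resolve_source("/usr/src/linux", "drivers/net/dummy.c");

	printf("%s\n", p ? p : "not found");
	free(p);
	return 0;
}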
diff --git a/tools/perf/util/probe-finder.h b/tools/perf/util/probe-finder.h
index 92590b2..ebf8c8c 100644
--- a/tools/perf/util/probe-finder.h
+++ b/tools/perf/util/probe-finder.h
@@ -55,6 +55,10 @@
struct variable_list **vls,
int max_points, bool externs);
+/* Find a src file from a DWARF tag path */
+int get_real_path(const char *raw_path, const char *comp_dir,
+ char **new_path);
+
struct probe_finder {
struct perf_probe_event *pev; /* Target probe event */
diff --git a/tools/power/x86/turbostat/Makefile b/tools/power/x86/turbostat/Makefile
index d1b3a36..4039854 100644
--- a/tools/power/x86/turbostat/Makefile
+++ b/tools/power/x86/turbostat/Makefile
@@ -1,8 +1,12 @@
CC = $(CROSS_COMPILE)gcc
-BUILD_OUTPUT := $(PWD)
+BUILD_OUTPUT := $(CURDIR)
PREFIX := /usr
DESTDIR :=
+ifeq ("$(origin O)", "command line")
+ BUILD_OUTPUT := $(O)
+endif
+
turbostat : turbostat.c
CFLAGS += -Wall
CFLAGS += -DMSRHEADER='"../../../../arch/x86/include/uapi/asm/msr-index.h"'
diff --git a/tools/power/x86/turbostat/turbostat.8 b/tools/power/x86/turbostat/turbostat.8
index feea7ad..05b8fc3 100644
--- a/tools/power/x86/turbostat/turbostat.8
+++ b/tools/power/x86/turbostat/turbostat.8
@@ -20,9 +20,11 @@
The second method is to omit the command,
and turbostat displays statistics every 5 seconds.
The 5-second interval can be changed using the --interval option.
-
+.PP
Some information is not available on older processors.
.SS Options
+Options can be specified with a single or double '-', and only as much of the option
+name as is needed to distinguish it from other options is required. Note that options are case-sensitive.
\fB--Counter MSR#\fP shows the delta of the specified 64-bit MSR counter.
.PP
\fB--counter MSR#\fP shows the delta of the specified 32-bit MSR counter.
@@ -55,16 +57,20 @@
The \fBcommand\fP parameter forks \fBcommand\fP, and upon its exit,
displays the statistics gathered since it was forked.
.PP
-.SH FIELD DESCRIPTIONS
+.SH DEFAULT FIELD DESCRIPTIONS
.nf
-\fBPackage\fP processor package number.
-\fBCore\fP processor core number.
-\fBCPU\fP Linux CPU (logical processor) number.
-Note that multiple CPUs per core indicate support for Intel(R) Hyper-Threading Technology.
+\fBCPU\fP Linux CPU (logical processor) number. Yes, it is okay that on many systems the CPUs are not listed in numerical order -- for efficiency reasons, turbostat runs in topology order, so HT siblings appear together.
\fBAVG_MHz\fP number of cycles executed divided by time elapsed.
\fB%Busy\fP percent of the interval that the CPU retired instructions, aka. % of time in "C0" state.
\fBBzy_MHz\fP average clock rate while the CPU was busy (in "c0" state).
\fBTSC_MHz\fP average MHz that the TSC ran during the entire interval.
+.fi
+.PP
+.SH DEBUG FIELD DESCRIPTIONS
+.nf
+\fBPackage\fP processor package number.
+\fBCore\fP processor core number.
+Note that multiple CPUs per core indicate support for Intel(R) Hyper-Threading Technology (HT).
\fBCPU%c1, CPU%c3, CPU%c6, CPU%c7\fP show the percentage residency in hardware core idle states.
\fBCoreTmp\fP Degrees Celsius reported by the per-core Digital Thermal Sensor.
\fBPkgTmp\fP Degrees Celsius reported by the per-package Package Thermal Monitor.
@@ -81,63 +87,76 @@
Without any parameters, turbostat displays statistics every 5 seconds.
(override interval with "-i sec" option, or specify a command
for turbostat to fork).
+.nf
+[root@hsw]# ./turbostat
+ CPU Avg_MHz %Busy Bzy_MHz TSC_MHz
+ - 488 12.51 3898 3498
+ 0 0 0.01 3885 3498
+ 4 3897 99.99 3898 3498
+ 1 0 0.00 3861 3498
+ 5 0 0.00 3882 3498
+ 2 1 0.02 3894 3498
+ 6 2 0.06 3898 3498
+ 3 0 0.00 3849 3498
+ 7 0 0.00 3877 3498
+
+.fi
+.SH DEBUG EXAMPLE
+The "--debug" option prints additional system information before measurements:
The first row of statistics is a summary for the entire system.
For residency % columns, the summary is a weighted average.
For Temperature columns, the summary is the column maximum.
For Watts columns, the summary is a system total.
Subsequent rows show per-CPU statistics.
-
.nf
-[root@ivy]# ./turbostat
- Core CPU Avg_MHz %Busy Bzy_MHz TSC_MHz SMI CPU%c1 CPU%c3 CPU%c6 CPU%c7 CoreTmp PkgTmp Pkg%pc2 Pkg%pc3 Pkg%pc6 Pkg%pc7 PkgWatt CorWatt GFXWatt
- - - 6 0.36 1596 3492 0 0.59 0.01 99.04 0.00 23 24 23.82 0.01 72.47 0.00 6.40 1.01 0.00
- 0 0 9 0.58 1596 3492 0 0.28 0.01 99.13 0.00 23 24 23.82 0.01 72.47 0.00 6.40 1.01 0.00
- 0 4 1 0.07 1596 3492 0 0.79
- 1 1 10 0.65 1596 3492 0 0.59 0.00 98.76 0.00 23
- 1 5 5 0.28 1596 3492 0 0.95
- 2 2 10 0.66 1596 3492 0 0.41 0.01 98.92 0.00 23
- 2 6 2 0.10 1597 3492 0 0.97
- 3 3 3 0.20 1596 3492 0 0.44 0.00 99.37 0.00 23
- 3 7 5 0.31 1596 3492 0 0.33
-.fi
-.SH DEBUG EXAMPLE
-The "--debug" option prints additional system information before measurements:
-
-.nf
-turbostat version 4.0 10-Feb, 2015 - Len Brown <lenb@kernel.org>
-CPUID(0): GenuineIntel 13 CPUID levels; family:model:stepping 0x6:3a:9 (6:58:9)
+turbostat version 4.1 10-Feb, 2015 - Len Brown <lenb@kernel.org>
+CPUID(0): GenuineIntel 13 CPUID levels; family:model:stepping 0x6:3c:3 (6:60:3)
CPUID(6): APERF, DTS, PTM, EPB
-RAPL: 851 sec. Joule Counter Range, at 77 Watts
-cpu0: MSR_NHM_PLATFORM_INFO: 0x81010f0012300
-16 * 100 = 1600 MHz max efficiency
+RAPL: 3121 sec. Joule Counter Range, at 84 Watts
+cpu0: MSR_NHM_PLATFORM_INFO: 0x80838f3012300
+8 * 100 = 800 MHz max efficiency
35 * 100 = 3500 MHz TSC frequency
-cpu0: MSR_IA32_POWER_CTL: 0x0014005d (C1E auto-promotion: DISabled)
-cpu0: MSR_NHM_SNB_PKG_CST_CFG_CTL: 0x1e008402 (UNdemote-C3, UNdemote-C1, demote-C3, demote-C1, locked: pkg-cstate-limit=2: pc6n)
+cpu0: MSR_IA32_POWER_CTL: 0x0004005d (C1E auto-promotion: DISabled)
+cpu0: MSR_NHM_SNB_PKG_CST_CFG_CTL: 0x1e000400 (UNdemote-C3, UNdemote-C1, demote-C3, demote-C1, UNlocked: pkg-cstate-limit=0: pc0)
cpu0: MSR_NHM_TURBO_RATIO_LIMIT: 0x25262727
37 * 100 = 3700 MHz max turbo 4 active cores
38 * 100 = 3800 MHz max turbo 3 active cores
39 * 100 = 3900 MHz max turbo 2 active cores
39 * 100 = 3900 MHz max turbo 1 active cores
cpu0: MSR_IA32_ENERGY_PERF_BIAS: 0x00000006 (balanced)
-cpu0: MSR_RAPL_POWER_UNIT: 0x000a1003 (0.125000 Watts, 0.000015 Joules, 0.000977 sec.)
-cpu0: MSR_PKG_POWER_INFO: 0x01e00268 (77 W TDP, RAPL 60 - 0 W, 0.000000 sec.)
-cpu0: MSR_PKG_POWER_LIMIT: 0x30000148268 (UNlocked)
-cpu0: PKG Limit #1: ENabled (77.000000 Watts, 1.000000 sec, clamp DISabled)
-cpu0: PKG Limit #2: DISabled (96.000000 Watts, 0.000977* sec, clamp DISabled)
+cpu0: MSR_CORE_PERF_LIMIT_REASONS, 0x31200000 (Active: ) (Logged: Auto-HWP, Amps, MultiCoreTurbo, Transitions, )
+cpu0: MSR_GFX_PERF_LIMIT_REASONS, 0x00000000 (Active: ) (Logged: )
+cpu0: MSR_RING_PERF_LIMIT_REASONS, 0x0d000000 (Active: ) (Logged: Amps, PkgPwrL1, PkgPwrL2, )
+cpu0: MSR_RAPL_POWER_UNIT: 0x000a0e03 (0.125000 Watts, 0.000061 Joules, 0.000977 sec.)
+cpu0: MSR_PKG_POWER_INFO: 0x000002a0 (84 W TDP, RAPL 0 - 0 W, 0.000000 sec.)
+cpu0: MSR_PKG_POWER_LIMIT: 0x428348001a82a0 (UNlocked)
+cpu0: PKG Limit #1: ENabled (84.000000 Watts, 8.000000 sec, clamp DISabled)
+cpu0: PKG Limit #2: ENabled (105.000000 Watts, 0.002441* sec, clamp DISabled)
cpu0: MSR_PP0_POLICY: 0
cpu0: MSR_PP0_POWER_LIMIT: 0x00000000 (UNlocked)
cpu0: Cores Limit: DISabled (0.000000 Watts, 0.000977 sec, clamp DISabled)
cpu0: MSR_PP1_POLICY: 0
cpu0: MSR_PP1_POWER_LIMIT: 0x00000000 (UNlocked)
cpu0: GFX Limit: DISabled (0.000000 Watts, 0.000977 sec, clamp DISabled)
-cpu0: MSR_IA32_TEMPERATURE_TARGET: 0x00691400 (105 C)
-cpu0: MSR_IA32_PACKAGE_THERM_STATUS: 0x884e0000 (27 C)
-cpu0: MSR_IA32_THERM_STATUS: 0x88580000 (17 C +/- 1)
-cpu1: MSR_IA32_THERM_STATUS: 0x885a0000 (15 C +/- 1)
-cpu2: MSR_IA32_THERM_STATUS: 0x88570000 (18 C +/- 1)
-cpu3: MSR_IA32_THERM_STATUS: 0x884e0000 (27 C +/- 1)
- ...
+cpu0: MSR_IA32_TEMPERATURE_TARGET: 0x00641400 (100 C)
+cpu0: MSR_IA32_PACKAGE_THERM_STATUS: 0x88340800 (48 C)
+cpu0: MSR_IA32_THERM_STATUS: 0x88340000 (48 C +/- 1)
+cpu1: MSR_IA32_THERM_STATUS: 0x88440000 (32 C +/- 1)
+cpu2: MSR_IA32_THERM_STATUS: 0x88450000 (31 C +/- 1)
+cpu3: MSR_IA32_THERM_STATUS: 0x88490000 (27 C +/- 1)
+ Core CPU Avg_MHz %Busy Bzy_MHz TSC_MHz SMI CPU%c1 CPU%c3 CPU%c6 CPU%c7 CoreTmp PkgTmp PkgWatt CorWatt GFXWatt
+ - - 493 12.64 3898 3498 0 12.64 0.00 0.00 74.72 47 47 21.62 13.74 0.00
+ 0 0 4 0.11 3894 3498 0 99.89 0.00 0.00 0.00 47 47 21.62 13.74 0.00
+ 0 4 3897 99.98 3898 3498 0 0.02
+ 1 1 7 0.17 3887 3498 0 0.04 0.00 0.00 99.79 32
+ 1 5 0 0.00 3885 3498 0 0.21
+ 2 2 29 0.76 3895 3498 0 0.10 0.01 0.01 99.13 32
+ 2 6 2 0.06 3896 3498 0 0.80
+ 3 3 1 0.02 3832 3498 0 0.03 0.00 0.00 99.95 28
+ 3 7 0 0.00 3879 3498 0 0.04
+^C
+
.fi
The \fBmax efficiency\fP frequency, a.k.a. Low Frequency Mode, is the frequency
available at the minimum package voltage. The \fBTSC frequency\fP is the base
@@ -147,6 +166,9 @@
The remaining rows show what maximum turbo frequency is possible
depending on the number of idle cores. Note that not all information is
available on all processors.
+.PP
+The --debug option adds additional columns to the measurement output, including CPU idle power-state residency and processor temperature sensor readings.
+See the field definitions above.
.SH FORK EXAMPLE
If turbostat is invoked with a command, it will fork that command
and output the statistics gathered when the command exits.
@@ -154,27 +176,23 @@
until ^C while the other CPUs are mostly idle:
.nf
-root@ivy: turbostat cat /dev/zero > /dev/null
+root@hsw: turbostat cat /dev/zero > /dev/null
^C
- Core CPU Avg_MHz %Busy Bzy_MHz TSC_MHz SMI CPU%c1 CPU%c3 CPU%c6 CPU%c7 CoreTmp PkgTmp Pkg%pc2 Pkg%pc3 Pkg%pc6 Pkg%pc7 PkgWatt CorWatt GFXWatt
- - - 496 12.75 3886 3492 0 13.16 0.04 74.04 0.00 36 36 0.00 0.00 0.00 0.00 23.15 17.65 0.00
- 0 0 22 0.57 3830 3492 0 0.83 0.02 98.59 0.00 27 36 0.00 0.00 0.00 0.00 23.15 17.65 0.00
- 0 4 9 0.24 3829 3492 0 1.15
- 1 1 4 0.09 3783 3492 0 99.91 0.00 0.00 0.00 36
- 1 5 3880 99.82 3888 3492 0 0.18
- 2 2 17 0.44 3813 3492 0 0.77 0.04 98.75 0.00 28
- 2 6 12 0.32 3823 3492 0 0.89
- 3 3 16 0.43 3844 3492 0 0.63 0.11 98.84 0.00 30
- 3 7 4 0.11 3827 3492 0 0.94
-30.372243 sec
+ CPU Avg_MHz %Busy Bzy_MHz TSC_MHz
+ - 482 12.51 3854 3498
+ 0 0 0.01 1960 3498
+ 4 0 0.00 2128 3498
+ 1 0 0.00 3003 3498
+ 5 3854 99.98 3855 3498
+ 2 0 0.01 3504 3498
+ 6 3 0.08 3884 3498
+ 3 0 0.00 2553 3498
+ 7 0 0.00 2126 3498
+10.783983 sec
.fi
-Above the cycle soaker drives cpu5 up its 3.8 GHz turbo limit
-while the other processors are generally in various states of idle.
-
-Note that cpu1 and cpu5 are HT siblings within core1.
-As cpu5 is very busy, it prevents its sibling, cpu1,
-from entering a c-state deeper than c1.
+Above, the cycle soaker drives cpu5 up to its 3.9 GHz turbo limit.
+The first row shows the average MHz and %Busy across all the processors in the system.
Note that the Avg_MHz column reflects the total number of cycles executed
divided by the measurement interval. If the %Busy column is 100%,
diff --git a/tools/power/x86/turbostat/turbostat.c b/tools/power/x86/turbostat/turbostat.c
index 2d089ca..bac98ca 100644
--- a/tools/power/x86/turbostat/turbostat.c
+++ b/tools/power/x86/turbostat/turbostat.c
@@ -57,6 +57,7 @@
unsigned int do_pc6;
unsigned int do_pc7;
unsigned int do_c8_c9_c10;
+unsigned int do_skl_residency;
unsigned int do_slm_cstates;
unsigned int use_c1_residency_msr;
unsigned int has_aperf;
@@ -65,8 +66,6 @@
unsigned int genuine_intel;
unsigned int has_invariant_tsc;
unsigned int do_nhm_platform_info;
-unsigned int do_nhm_turbo_ratio_limit;
-unsigned int do_ivt_turbo_ratio_limit;
unsigned int extra_msr_offset32;
unsigned int extra_msr_offset64;
unsigned int extra_delta_offset32;
@@ -84,11 +83,14 @@
unsigned int do_ptm;
unsigned int tcc_activation_temp;
unsigned int tcc_activation_temp_override;
-double rapl_power_units, rapl_energy_units, rapl_time_units;
+double rapl_power_units, rapl_time_units;
+double rapl_dram_energy_units, rapl_energy_units;
double rapl_joule_counter_range;
unsigned int do_core_perf_limit_reasons;
unsigned int do_gfx_perf_limit_reasons;
unsigned int do_ring_perf_limit_reasons;
+unsigned int crystal_hz;
+unsigned long long tsc_hz;
#define RAPL_PKG (1 << 0)
/* 0x610 MSR_PKG_POWER_LIMIT */
@@ -101,18 +103,18 @@
#define RAPL_DRAM (1 << 3)
/* 0x618 MSR_DRAM_POWER_LIMIT */
/* 0x619 MSR_DRAM_ENERGY_STATUS */
- /* 0x61c MSR_DRAM_POWER_INFO */
#define RAPL_DRAM_PERF_STATUS (1 << 4)
/* 0x61b MSR_DRAM_PERF_STATUS */
+#define RAPL_DRAM_POWER_INFO (1 << 5)
+ /* 0x61c MSR_DRAM_POWER_INFO */
-#define RAPL_CORES (1 << 5)
+#define RAPL_CORES (1 << 6)
/* 0x638 MSR_PP0_POWER_LIMIT */
/* 0x639 MSR_PP0_ENERGY_STATUS */
-#define RAPL_CORE_POLICY (1 << 6)
+#define RAPL_CORE_POLICY (1 << 7)
/* 0x63a MSR_PP0_POLICY */
-
-#define RAPL_GFX (1 << 7)
+#define RAPL_GFX (1 << 8)
/* 0x640 MSR_PP1_POWER_LIMIT */
/* 0x641 MSR_PP1_ENERGY_STATUS */
/* 0x642 MSR_PP1_POLICY */
@@ -159,6 +161,10 @@
unsigned long long pc8;
unsigned long long pc9;
unsigned long long pc10;
+ unsigned long long pkg_wtd_core_c0;
+ unsigned long long pkg_any_core_c0;
+ unsigned long long pkg_any_gfxe_c0;
+ unsigned long long pkg_both_core_gfxe_c0;
unsigned int package_id;
unsigned int energy_pkg; /* MSR_PKG_ENERGY_STATUS */
unsigned int energy_dram; /* MSR_DRAM_ENERGY_STATUS */
@@ -292,8 +298,7 @@
if (has_aperf)
outp += sprintf(outp, " Bzy_MHz");
outp += sprintf(outp, " TSC_MHz");
- if (do_smi)
- outp += sprintf(outp, " SMI");
+
if (extra_delta_offset32)
outp += sprintf(outp, " count 0x%03X", extra_delta_offset32);
if (extra_delta_offset64)
@@ -302,6 +307,13 @@
outp += sprintf(outp, " MSR 0x%03X", extra_msr_offset32);
if (extra_msr_offset64)
outp += sprintf(outp, " MSR 0x%03X", extra_msr_offset64);
+
+ if (!debug)
+ goto done;
+
+ if (do_smi)
+ outp += sprintf(outp, " SMI");
+
if (do_nhm_cstates)
outp += sprintf(outp, " CPU%%c1");
if (do_nhm_cstates && !do_slm_cstates)
@@ -316,6 +328,13 @@
if (do_ptm)
outp += sprintf(outp, " PkgTmp");
+ if (do_skl_residency) {
+ outp += sprintf(outp, " Totl%%C0");
+ outp += sprintf(outp, " Any%%C0");
+ outp += sprintf(outp, " GFX%%C0");
+ outp += sprintf(outp, " CPUGFX%%");
+ }
+
if (do_pc2)
outp += sprintf(outp, " Pkg%%pc2");
if (do_pc3)
@@ -359,6 +378,7 @@
outp += sprintf(outp, " time");
}
+ done:
outp += sprintf(outp, "\n");
}
@@ -396,6 +416,12 @@
if (p) {
outp += sprintf(outp, "package: %d\n", p->package_id);
+
+ outp += sprintf(outp, "Weighted cores: %016llX\n", p->pkg_wtd_core_c0);
+ outp += sprintf(outp, "Any cores: %016llX\n", p->pkg_any_core_c0);
+ outp += sprintf(outp, "Any GFX: %016llX\n", p->pkg_any_gfxe_c0);
+ outp += sprintf(outp, "CPU + GFX: %016llX\n", p->pkg_both_core_gfxe_c0);
+
outp += sprintf(outp, "pc2: %016llX\n", p->pc2);
if (do_pc3)
outp += sprintf(outp, "pc3: %016llX\n", p->pc3);
@@ -487,10 +513,6 @@
/* TSC_MHz */
outp += sprintf(outp, "%8.0f", 1.0 * t->tsc/units/interval_float);
- /* SMI */
- if (do_smi)
- outp += sprintf(outp, "%8d", t->smi_count);
-
/* delta */
if (extra_delta_offset32)
outp += sprintf(outp, " %11llu", t->extra_delta32);
@@ -506,6 +528,13 @@
if (extra_msr_offset64)
outp += sprintf(outp, " 0x%016llx", t->extra_msr64);
+ if (!debug)
+ goto done;
+
+ /* SMI */
+ if (do_smi)
+ outp += sprintf(outp, "%8d", t->smi_count);
+
if (do_nhm_cstates) {
if (!skip_c1)
outp += sprintf(outp, "%8.2f", 100.0 * t->c1/t->tsc);
@@ -531,9 +560,18 @@
if (!(t->flags & CPU_IS_FIRST_CORE_IN_PACKAGE))
goto done;
+ /* PkgTmp */
if (do_ptm)
outp += sprintf(outp, "%8d", p->pkg_temp_c);
+ /* Totl%C0, Any%C0 GFX%C0 CPUGFX% */
+ if (do_skl_residency) {
+ outp += sprintf(outp, "%8.2f", 100.0 * p->pkg_wtd_core_c0/t->tsc);
+ outp += sprintf(outp, "%8.2f", 100.0 * p->pkg_any_core_c0/t->tsc);
+ outp += sprintf(outp, "%8.2f", 100.0 * p->pkg_any_gfxe_c0/t->tsc);
+ outp += sprintf(outp, "%8.2f", 100.0 * p->pkg_both_core_gfxe_c0/t->tsc);
+ }
+
if (do_pc2)
outp += sprintf(outp, "%8.2f", 100.0 * p->pc2/t->tsc);
if (do_pc3)
@@ -565,7 +603,7 @@
if (do_rapl & RAPL_GFX)
outp += sprintf(outp, fmt8, p->energy_gfx * rapl_energy_units / interval_float);
if (do_rapl & RAPL_DRAM)
- outp += sprintf(outp, fmt8, p->energy_dram * rapl_energy_units / interval_float);
+ outp += sprintf(outp, fmt8, p->energy_dram * rapl_dram_energy_units / interval_float);
if (do_rapl & RAPL_PKG_PERF_STATUS)
outp += sprintf(outp, fmt8, 100.0 * p->rapl_pkg_perf_status * rapl_time_units / interval_float);
if (do_rapl & RAPL_DRAM_PERF_STATUS)
@@ -582,7 +620,7 @@
p->energy_gfx * rapl_energy_units);
if (do_rapl & RAPL_DRAM)
outp += sprintf(outp, fmt8,
- p->energy_dram * rapl_energy_units);
+ p->energy_dram * rapl_dram_energy_units);
if (do_rapl & RAPL_PKG_PERF_STATUS)
outp += sprintf(outp, fmt8, 100.0 * p->rapl_pkg_perf_status * rapl_time_units / interval_float);
if (do_rapl & RAPL_DRAM_PERF_STATUS)
@@ -636,6 +674,13 @@
void
delta_package(struct pkg_data *new, struct pkg_data *old)
{
+
+ if (do_skl_residency) {
+ old->pkg_wtd_core_c0 = new->pkg_wtd_core_c0 - old->pkg_wtd_core_c0;
+ old->pkg_any_core_c0 = new->pkg_any_core_c0 - old->pkg_any_core_c0;
+ old->pkg_any_gfxe_c0 = new->pkg_any_gfxe_c0 - old->pkg_any_gfxe_c0;
+ old->pkg_both_core_gfxe_c0 = new->pkg_both_core_gfxe_c0 - old->pkg_both_core_gfxe_c0;
+ }
old->pc2 = new->pc2 - old->pc2;
if (do_pc3)
old->pc3 = new->pc3 - old->pc3;
@@ -782,6 +827,11 @@
c->c7 = 0;
c->core_temp_c = 0;
+ p->pkg_wtd_core_c0 = 0;
+ p->pkg_any_core_c0 = 0;
+ p->pkg_any_gfxe_c0 = 0;
+ p->pkg_both_core_gfxe_c0 = 0;
+
p->pc2 = 0;
if (do_pc3)
p->pc3 = 0;
@@ -826,6 +876,13 @@
if (!(t->flags & CPU_IS_FIRST_CORE_IN_PACKAGE))
return 0;
+ if (do_skl_residency) {
+ average.packages.pkg_wtd_core_c0 += p->pkg_wtd_core_c0;
+ average.packages.pkg_any_core_c0 += p->pkg_any_core_c0;
+ average.packages.pkg_any_gfxe_c0 += p->pkg_any_gfxe_c0;
+ average.packages.pkg_both_core_gfxe_c0 += p->pkg_both_core_gfxe_c0;
+ }
+
average.packages.pc2 += p->pc2;
if (do_pc3)
average.packages.pc3 += p->pc3;
@@ -873,6 +930,13 @@
average.cores.c6 /= topo.num_cores;
average.cores.c7 /= topo.num_cores;
+ if (do_skl_residency) {
+ average.packages.pkg_wtd_core_c0 /= topo.num_packages;
+ average.packages.pkg_any_core_c0 /= topo.num_packages;
+ average.packages.pkg_any_gfxe_c0 /= topo.num_packages;
+ average.packages.pkg_both_core_gfxe_c0 /= topo.num_packages;
+ }
+
average.packages.pc2 /= topo.num_packages;
if (do_pc3)
average.packages.pc3 /= topo.num_packages;
@@ -979,6 +1043,16 @@
if (!(t->flags & CPU_IS_FIRST_CORE_IN_PACKAGE))
return 0;
+ if (do_skl_residency) {
+ if (get_msr(cpu, MSR_PKG_WEIGHTED_CORE_C0_RES, &p->pkg_wtd_core_c0))
+ return -10;
+ if (get_msr(cpu, MSR_PKG_ANY_CORE_C0_RES, &p->pkg_any_core_c0))
+ return -11;
+ if (get_msr(cpu, MSR_PKG_ANY_GFXE_C0_RES, &p->pkg_any_gfxe_c0))
+ return -12;
+ if (get_msr(cpu, MSR_PKG_BOTH_CORE_GFXE_C0_RES, &p->pkg_both_core_gfxe_c0))
+ return -13;
+ }
if (do_pc3)
if (get_msr(cpu, MSR_PKG_C3_RESIDENCY, &p->pc3))
return -9;
@@ -1055,49 +1129,77 @@
#define PCL_6R 9 /* PC6 Retention */
#define PCL__7 10 /* PC7 */
#define PCL_7S 11 /* PC7 Shrink */
-#define PCLUNL 12 /* Unlimited */
+#define PCL__8 12 /* PC8 */
+#define PCL__9 13 /* PC9 */
+#define PCLUNL 14 /* Unlimited */
int pkg_cstate_limit = PCLUKN;
char *pkg_cstate_limit_strings[] = { "reserved", "unknown", "pc0", "pc1", "pc2",
- "pc3", "pc4", "pc6", "pc6n", "pc6r", "pc7", "pc7s", "unlimited"};
+ "pc3", "pc4", "pc6", "pc6n", "pc6r", "pc7", "pc7s", "pc8", "pc9", "unlimited"};
-int nhm_pkg_cstate_limits[8] = {PCL__0, PCL__1, PCL__3, PCL__6, PCL__7, PCLRSV, PCLRSV, PCLUNL};
-int snb_pkg_cstate_limits[8] = {PCL__0, PCL__2, PCL_6N, PCL_6R, PCL__7, PCL_7S, PCLRSV, PCLUNL};
-int hsw_pkg_cstate_limits[8] = {PCL__0, PCL__2, PCL__3, PCL__6, PCL__7, PCL_7S, PCLRSV, PCLUNL};
-int slv_pkg_cstate_limits[8] = {PCL__0, PCL__1, PCLRSV, PCLRSV, PCL__4, PCLRSV, PCL__6, PCL__7};
-int amt_pkg_cstate_limits[8] = {PCL__0, PCL__1, PCL__2, PCLRSV, PCLRSV, PCLRSV, PCL__6, PCL__7};
-int phi_pkg_cstate_limits[8] = {PCL__0, PCL__2, PCL_6N, PCL_6R, PCLRSV, PCLRSV, PCLRSV, PCLUNL};
+int nhm_pkg_cstate_limits[16] = {PCL__0, PCL__1, PCL__3, PCL__6, PCL__7, PCLRSV, PCLRSV, PCLUNL, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV};
+int snb_pkg_cstate_limits[16] = {PCL__0, PCL__2, PCL_6N, PCL_6R, PCL__7, PCL_7S, PCLRSV, PCLUNL, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV};
+int hsw_pkg_cstate_limits[16] = {PCL__0, PCL__2, PCL__3, PCL__6, PCL__7, PCL_7S, PCL__8, PCL__9, PCLUNL, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV};
+int slv_pkg_cstate_limits[16] = {PCL__0, PCL__1, PCLRSV, PCLRSV, PCL__4, PCLRSV, PCL__6, PCL__7, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV};
+int amt_pkg_cstate_limits[16] = {PCL__0, PCL__1, PCL__2, PCLRSV, PCLRSV, PCLRSV, PCL__6, PCL__7, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV};
+int phi_pkg_cstate_limits[16] = {PCL__0, PCL__2, PCL_6N, PCL_6R, PCLRSV, PCLRSV, PCLRSV, PCLUNL, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV};
-void print_verbose_header(void)
+static void
+dump_nhm_platform_info(void)
{
unsigned long long msr;
unsigned int ratio;
- if (!do_nhm_platform_info)
- return;
-
get_msr(0, MSR_NHM_PLATFORM_INFO, &msr);
fprintf(stderr, "cpu0: MSR_NHM_PLATFORM_INFO: 0x%08llx\n", msr);
ratio = (msr >> 40) & 0xFF;
- fprintf(stderr, "%d * %.0f = %.0f MHz max efficiency\n",
+ fprintf(stderr, "%d * %.0f = %.0f MHz max efficiency frequency\n",
ratio, bclk, ratio * bclk);
ratio = (msr >> 8) & 0xFF;
- fprintf(stderr, "%d * %.0f = %.0f MHz TSC frequency\n",
+ fprintf(stderr, "%d * %.0f = %.0f MHz base frequency\n",
ratio, bclk, ratio * bclk);
get_msr(0, MSR_IA32_POWER_CTL, &msr);
fprintf(stderr, "cpu0: MSR_IA32_POWER_CTL: 0x%08llx (C1E auto-promotion: %sabled)\n",
msr, msr & 0x2 ? "EN" : "DIS");
- if (!do_ivt_turbo_ratio_limit)
- goto print_nhm_turbo_ratio_limits;
+ return;
+}
- get_msr(0, MSR_IVT_TURBO_RATIO_LIMIT, &msr);
+static void
+dump_hsw_turbo_ratio_limits(void)
+{
+ unsigned long long msr;
+ unsigned int ratio;
- fprintf(stderr, "cpu0: MSR_IVT_TURBO_RATIO_LIMIT: 0x%08llx\n", msr);
+ get_msr(0, MSR_TURBO_RATIO_LIMIT2, &msr);
+
+ fprintf(stderr, "cpu0: MSR_TURBO_RATIO_LIMIT2: 0x%08llx\n", msr);
+
+ ratio = (msr >> 8) & 0xFF;
+ if (ratio)
+ fprintf(stderr, "%d * %.0f = %.0f MHz max turbo 18 active cores\n",
+ ratio, bclk, ratio * bclk);
+
+ ratio = (msr >> 0) & 0xFF;
+ if (ratio)
+ fprintf(stderr, "%d * %.0f = %.0f MHz max turbo 17 active cores\n",
+ ratio, bclk, ratio * bclk);
+ return;
+}
+
+static void
+dump_ivt_turbo_ratio_limits(void)
+{
+ unsigned long long msr;
+ unsigned int ratio;
+
+ get_msr(0, MSR_TURBO_RATIO_LIMIT1, &msr);
+
+ fprintf(stderr, "cpu0: MSR_TURBO_RATIO_LIMIT1: 0x%08llx\n", msr);
ratio = (msr >> 56) & 0xFF;
if (ratio)
@@ -1138,30 +1240,18 @@
if (ratio)
fprintf(stderr, "%d * %.0f = %.0f MHz max turbo 9 active cores\n",
ratio, bclk, ratio * bclk);
+ return;
+}
-print_nhm_turbo_ratio_limits:
- get_msr(0, MSR_NHM_SNB_PKG_CST_CFG_CTL, &msr);
+static void
+dump_nhm_turbo_ratio_limits(void)
+{
+ unsigned long long msr;
+ unsigned int ratio;
-#define SNB_C1_AUTO_UNDEMOTE (1UL << 27)
-#define SNB_C3_AUTO_UNDEMOTE (1UL << 28)
+ get_msr(0, MSR_TURBO_RATIO_LIMIT, &msr);
- fprintf(stderr, "cpu0: MSR_NHM_SNB_PKG_CST_CFG_CTL: 0x%08llx", msr);
-
- fprintf(stderr, " (%s%s%s%s%slocked: pkg-cstate-limit=%d: %s)\n",
- (msr & SNB_C3_AUTO_UNDEMOTE) ? "UNdemote-C3, " : "",
- (msr & SNB_C1_AUTO_UNDEMOTE) ? "UNdemote-C1, " : "",
- (msr & NHM_C3_AUTO_DEMOTE) ? "demote-C3, " : "",
- (msr & NHM_C1_AUTO_DEMOTE) ? "demote-C1, " : "",
- (msr & (1 << 15)) ? "" : "UN",
- (unsigned int)msr & 7,
- pkg_cstate_limit_strings[pkg_cstate_limit]);
-
- if (!do_nhm_turbo_ratio_limit)
- return;
-
- get_msr(0, MSR_NHM_TURBO_RATIO_LIMIT, &msr);
-
- fprintf(stderr, "cpu0: MSR_NHM_TURBO_RATIO_LIMIT: 0x%08llx\n", msr);
+ fprintf(stderr, "cpu0: MSR_TURBO_RATIO_LIMIT: 0x%08llx\n", msr);
ratio = (msr >> 56) & 0xFF;
if (ratio)
@@ -1202,7 +1292,30 @@
if (ratio)
fprintf(stderr, "%d * %.0f = %.0f MHz max turbo 1 active cores\n",
ratio, bclk, ratio * bclk);
+ return;
+}
+static void
+dump_nhm_cst_cfg(void)
+{
+ unsigned long long msr;
+
+ get_msr(0, MSR_NHM_SNB_PKG_CST_CFG_CTL, &msr);
+
+#define SNB_C1_AUTO_UNDEMOTE (1UL << 27)
+#define SNB_C3_AUTO_UNDEMOTE (1UL << 28)
+
+ fprintf(stderr, "cpu0: MSR_NHM_SNB_PKG_CST_CFG_CTL: 0x%08llx", msr);
+
+ fprintf(stderr, " (%s%s%s%s%slocked: pkg-cstate-limit=%d: %s)\n",
+ (msr & SNB_C3_AUTO_UNDEMOTE) ? "UNdemote-C3, " : "",
+ (msr & SNB_C1_AUTO_UNDEMOTE) ? "UNdemote-C1, " : "",
+ (msr & NHM_C3_AUTO_DEMOTE) ? "demote-C3, " : "",
+ (msr & NHM_C1_AUTO_DEMOTE) ? "demote-C1, " : "",
+ (msr & (1 << 15)) ? "" : "UN",
+ (unsigned int)msr & 7,
+ pkg_cstate_limit_strings[pkg_cstate_limit]);
+ return;
}
void free_all_buffers(void)
@@ -1483,7 +1596,8 @@
struct stat sb;
if (stat("/dev/cpu/0/msr", &sb))
- err(-5, "no /dev/cpu/0/msr, Try \"# modprobe msr\" ");
+ if (system("/sbin/modprobe msr > /dev/null 2>&1"))
+ err(-5, "no /dev/cpu/0/msr, Try \"# modprobe msr\" ");
}
void check_permissions()
@@ -1573,6 +1687,8 @@
case 0x47: /* BDW */
case 0x4F: /* BDX */
case 0x56: /* BDX-DE */
+ case 0x4E: /* SKL */
+ case 0x5E: /* SKL */
pkg_cstate_limits = hsw_pkg_cstate_limits;
break;
case 0x37: /* BYT */
@@ -1590,7 +1706,7 @@
}
get_msr(0, MSR_NHM_SNB_PKG_CST_CFG_CTL, &msr);
- pkg_cstate_limit = pkg_cstate_limits[msr & 0x7];
+ pkg_cstate_limit = pkg_cstate_limits[msr & 0xF];
return 1;
}
@@ -1615,11 +1731,48 @@
switch (model) {
case 0x3E: /* IVB Xeon */
+ case 0x3F: /* HSW Xeon */
return 1;
default:
return 0;
}
}
+int has_hsw_turbo_ratio_limit(unsigned int family, unsigned int model)
+{
+ if (!genuine_intel)
+ return 0;
+
+ if (family != 6)
+ return 0;
+
+ switch (model) {
+ case 0x3F: /* HSW Xeon */
+ return 1;
+ default:
+ return 0;
+ }
+}
+
+static void
+dump_cstate_pstate_config_info(family, model)
+{
+ if (!do_nhm_platform_info)
+ return;
+
+ dump_nhm_platform_info();
+
+ if (has_hsw_turbo_ratio_limit(family, model))
+ dump_hsw_turbo_ratio_limits();
+
+ if (has_ivt_turbo_ratio_limit(family, model))
+ dump_ivt_turbo_ratio_limits();
+
+ if (has_nhm_turbo_ratio_limit(family, model))
+ dump_nhm_turbo_ratio_limits();
+
+ dump_nhm_cst_cfg();
+}
+
/*
* print_epb()
@@ -1690,35 +1843,35 @@
get_msr(cpu, MSR_CORE_PERF_LIMIT_REASONS, &msr);
fprintf(stderr, "cpu%d: MSR_CORE_PERF_LIMIT_REASONS, 0x%08llx", cpu, msr);
fprintf(stderr, " (Active: %s%s%s%s%s%s%s%s%s%s%s%s%s%s)",
- (msr & 1 << 0) ? "PROCHOT, " : "",
- (msr & 1 << 1) ? "ThermStatus, " : "",
- (msr & 1 << 2) ? "bit2, " : "",
- (msr & 1 << 4) ? "Graphics, " : "",
- (msr & 1 << 5) ? "Auto-HWP, " : "",
- (msr & 1 << 6) ? "VR-Therm, " : "",
- (msr & 1 << 8) ? "Amps, " : "",
- (msr & 1 << 9) ? "CorePwr, " : "",
- (msr & 1 << 10) ? "PkgPwrL1, " : "",
- (msr & 1 << 11) ? "PkgPwrL2, " : "",
- (msr & 1 << 12) ? "MultiCoreTurbo, " : "",
- (msr & 1 << 13) ? "Transitions, " : "",
+ (msr & 1 << 15) ? "bit15, " : "",
(msr & 1 << 14) ? "bit14, " : "",
- (msr & 1 << 15) ? "bit15, " : "");
+ (msr & 1 << 13) ? "Transitions, " : "",
+ (msr & 1 << 12) ? "MultiCoreTurbo, " : "",
+ (msr & 1 << 11) ? "PkgPwrL2, " : "",
+ (msr & 1 << 10) ? "PkgPwrL1, " : "",
+ (msr & 1 << 9) ? "CorePwr, " : "",
+ (msr & 1 << 8) ? "Amps, " : "",
+ (msr & 1 << 6) ? "VR-Therm, " : "",
+ (msr & 1 << 5) ? "Auto-HWP, " : "",
+ (msr & 1 << 4) ? "Graphics, " : "",
+ (msr & 1 << 2) ? "bit2, " : "",
+ (msr & 1 << 1) ? "ThermStatus, " : "",
+ (msr & 1 << 0) ? "PROCHOT, " : "");
fprintf(stderr, " (Logged: %s%s%s%s%s%s%s%s%s%s%s%s%s%s)\n",
- (msr & 1 << 16) ? "PROCHOT, " : "",
- (msr & 1 << 17) ? "ThermStatus, " : "",
- (msr & 1 << 18) ? "bit18, " : "",
- (msr & 1 << 20) ? "Graphics, " : "",
- (msr & 1 << 21) ? "Auto-HWP, " : "",
- (msr & 1 << 22) ? "VR-Therm, " : "",
- (msr & 1 << 24) ? "Amps, " : "",
- (msr & 1 << 25) ? "CorePwr, " : "",
- (msr & 1 << 26) ? "PkgPwrL1, " : "",
- (msr & 1 << 27) ? "PkgPwrL2, " : "",
- (msr & 1 << 28) ? "MultiCoreTurbo, " : "",
- (msr & 1 << 29) ? "Transitions, " : "",
+ (msr & 1 << 31) ? "bit31, " : "",
(msr & 1 << 30) ? "bit30, " : "",
- (msr & 1 << 31) ? "bit31, " : "");
+ (msr & 1 << 29) ? "Transitions, " : "",
+ (msr & 1 << 28) ? "MultiCoreTurbo, " : "",
+ (msr & 1 << 27) ? "PkgPwrL2, " : "",
+ (msr & 1 << 26) ? "PkgPwrL1, " : "",
+ (msr & 1 << 25) ? "CorePwr, " : "",
+ (msr & 1 << 24) ? "Amps, " : "",
+ (msr & 1 << 22) ? "VR-Therm, " : "",
+ (msr & 1 << 21) ? "Auto-HWP, " : "",
+ (msr & 1 << 20) ? "Graphics, " : "",
+ (msr & 1 << 18) ? "bit18, " : "",
+ (msr & 1 << 17) ? "ThermStatus, " : "",
+ (msr & 1 << 16) ? "PROCHOT, " : "");
}
if (do_gfx_perf_limit_reasons) {
@@ -1784,6 +1937,25 @@
}
}
+/*
+ * rapl_dram_energy_units_probe()
+ * Energy units are either hard-coded, or come from RAPL Energy Unit MSR.
+ */
+static double
+rapl_dram_energy_units_probe(int model, double rapl_energy_units)
+{
+ /* only called for genuine_intel, family 6 */
+
+ switch (model) {
+ case 0x3F: /* HSX */
+ case 0x4F: /* BDX */
+ case 0x56: /* BDX-DE */
+ return (rapl_dram_energy_units = 15.3 / 1000000);
+ default:
+ return (rapl_energy_units);
+ }
+}
+
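On HSX/BDX/BDX-DE the DRAM domain uses a fixed 15.3 micro-joule energy unit instead of the unit encoded in MSR_RAPL_POWER_UNIT, which is why DRAM watts are now scaled by rapl_dram_energy_units rather than rapl_energy_units. A small arithmetic sketch with made-up counter readings:

#include <stdio.h>

int main(void)
{
	/* hypothetical MSR_DRAM_ENERGY_STATUS samples, taken 1 second apart */
	unsigned int before = 1000000, after = 1450000;
	double interval_sec = 1.0;
	double dram_energy_units = 15.3 / 1000000;	/* 15.3 uJ fixed unit on HSX/BDX */

	double joules = (after - before) * dram_energy_units;
	printf("DRAM power: %.2f W\n", joules / interval_sec);	/* ~6.89 W */
	return 0;
}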
/*
* rapl_probe()
@@ -1812,14 +1984,18 @@
case 0x47: /* BDW */
do_rapl = RAPL_PKG | RAPL_CORES | RAPL_CORE_POLICY | RAPL_GFX | RAPL_PKG_POWER_INFO;
break;
+ case 0x4E: /* SKL */
+ case 0x5E: /* SKL */
+ do_rapl = RAPL_PKG | RAPL_DRAM | RAPL_DRAM_PERF_STATUS | RAPL_PKG_PERF_STATUS | RAPL_PKG_POWER_INFO;
+ break;
case 0x3F: /* HSX */
case 0x4F: /* BDX */
case 0x56: /* BDX-DE */
- do_rapl = RAPL_PKG | RAPL_DRAM | RAPL_DRAM_PERF_STATUS | RAPL_PKG_PERF_STATUS | RAPL_PKG_POWER_INFO;
+ do_rapl = RAPL_PKG | RAPL_DRAM | RAPL_DRAM_POWER_INFO | RAPL_DRAM_PERF_STATUS | RAPL_PKG_PERF_STATUS | RAPL_PKG_POWER_INFO;
break;
case 0x2D:
case 0x3E:
- do_rapl = RAPL_PKG | RAPL_CORES | RAPL_CORE_POLICY | RAPL_DRAM | RAPL_PKG_PERF_STATUS | RAPL_DRAM_PERF_STATUS | RAPL_PKG_POWER_INFO;
+ do_rapl = RAPL_PKG | RAPL_CORES | RAPL_CORE_POLICY | RAPL_DRAM | RAPL_DRAM_POWER_INFO | RAPL_PKG_PERF_STATUS | RAPL_DRAM_PERF_STATUS | RAPL_PKG_POWER_INFO;
break;
case 0x37: /* BYT */
case 0x4D: /* AVN */
@@ -1839,6 +2015,8 @@
else
rapl_energy_units = 1.0 / (1 << (msr >> 8 & 0x1F));
+ rapl_dram_energy_units = rapl_dram_energy_units_probe(model, rapl_energy_units);
+
time_unit = msr >> 16 & 0xF;
if (time_unit == 0)
time_unit = 0xA;
@@ -2009,19 +2187,18 @@
((msr >> 48) & 1) ? "EN" : "DIS");
}
- if (do_rapl & RAPL_DRAM) {
+ if (do_rapl & RAPL_DRAM_POWER_INFO) {
if (get_msr(cpu, MSR_DRAM_POWER_INFO, &msr))
return -6;
-
fprintf(stderr, "cpu%d: MSR_DRAM_POWER_INFO,: 0x%08llx (%.0f W TDP, RAPL %.0f - %.0f W, %f sec.)\n",
cpu, msr,
((msr >> 0) & RAPL_POWER_GRANULARITY) * rapl_power_units,
((msr >> 16) & RAPL_POWER_GRANULARITY) * rapl_power_units,
((msr >> 32) & RAPL_POWER_GRANULARITY) * rapl_power_units,
((msr >> 48) & RAPL_TIME_GRANULARITY) * rapl_time_units);
-
-
+ }
+ if (do_rapl & RAPL_DRAM) {
if (get_msr(cpu, MSR_DRAM_POWER_LIMIT, &msr))
return -9;
fprintf(stderr, "cpu%d: MSR_DRAM_POWER_LIMIT: 0x%08llx (%slocked)\n",
@@ -2090,6 +2267,8 @@
case 0x47: /* BDW */
case 0x4F: /* BDX */
case 0x56: /* BDX-DE */
+ case 0x4E: /* SKL */
+ case 0x5E: /* SKL */
return 1;
}
return 0;
@@ -2110,11 +2289,35 @@
switch (model) {
case 0x45: /* HSW */
case 0x3D: /* BDW */
+ case 0x4E: /* SKL */
+ case 0x5E: /* SKL */
return 1;
}
return 0;
}
+/*
+ * SKL adds support for additional MSRS:
+ *
+ * MSR_PKG_WEIGHTED_CORE_C0_RES 0x00000658
+ * MSR_PKG_ANY_CORE_C0_RES 0x00000659
+ * MSR_PKG_ANY_GFXE_C0_RES 0x0000065A
+ * MSR_PKG_BOTH_CORE_GFXE_C0_RES 0x0000065B
+ */
+int has_skl_msrs(unsigned int family, unsigned int model)
+{
+ if (!genuine_intel)
+ return 0;
+
+ switch (model) {
+ case 0x4E: /* SKL */
+ case 0x5E: /* SKL */
+ return 1;
+ }
+ return 0;
+}
+
+
int is_slm(unsigned int family, unsigned int model)
{
@@ -2228,7 +2431,7 @@
return 0;
}
-void check_cpuid()
+void process_cpuid()
{
unsigned int eax, ebx, ecx, edx, max_level;
unsigned int fms, family, model, stepping;
@@ -2294,6 +2497,41 @@
do_ptm ? "" : "No ",
has_epb ? "" : "No ");
+ if (max_level > 0x15) {
+ unsigned int eax_crystal;
+ unsigned int ebx_tsc;
+
+ /*
+ * CPUID 15H TSC/Crystal ratio, possibly Crystal Hz
+ */
+ eax_crystal = ebx_tsc = crystal_hz = edx = 0;
+ __get_cpuid(0x15, &eax_crystal, &ebx_tsc, &crystal_hz, &edx);
+
+ if (ebx_tsc != 0) {
+
+ if (debug && (ebx != 0))
+ fprintf(stderr, "CPUID(0x15): eax_crystal: %d ebx_tsc: %d ecx_crystal_hz: %d\n",
+ eax_crystal, ebx_tsc, crystal_hz);
+
+ if (crystal_hz == 0)
+ switch(model) {
+ case 0x4E: /* SKL */
+ case 0x5E: /* SKL */
+ crystal_hz = 24000000; /* 24 MHz */
+ break;
+ default:
+ crystal_hz = 0;
+ }
+
+ if (crystal_hz) {
+ tsc_hz = (unsigned long long) crystal_hz * ebx_tsc / eax_crystal;
+ if (debug)
+ fprintf(stderr, "TSC: %lld MHz (%d Hz * %d / %d / 1000000)\n",
+ tsc_hz / 1000000, crystal_hz, ebx_tsc, eax_crystal);
+ }
+ }
+ }
+
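CPUID leaf 0x15 reports the TSC/crystal clock ratio as EBX/EAX and, on some parts, the crystal frequency in ECX; when ECX reads zero the code falls back to the known 24 MHz crystal on SKL, and the TSC rate follows directly from the ratio. A worked example with hypothetical leaf values:

#include <stdio.h>

int main(void)
{
	/* hypothetical CPUID(0x15) results on a SKL part */
	unsigned int eax_crystal = 2;		/* denominator of TSC/crystal ratio */
	unsigned int ebx_tsc = 292;		/* numerator of TSC/crystal ratio */
	unsigned int crystal_hz = 24000000;	/* ECX was 0, so assume 24 MHz */

	unsigned long long tsc_hz =
		(unsigned long long)crystal_hz * ebx_tsc / eax_crystal;

	printf("TSC: %llu MHz\n", tsc_hz / 1000000);	/* 3504 MHz */
	return 0;
}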
do_nhm_platform_info = do_nhm_cstates = do_smi = probe_nhm_msrs(family, model);
do_snb_cstates = has_snb_msrs(family, model);
do_pc2 = do_snb_cstates && (pkg_cstate_limit >= PCL__2);
@@ -2301,18 +2539,19 @@
do_pc6 = (pkg_cstate_limit >= PCL__6);
do_pc7 = do_snb_cstates && (pkg_cstate_limit >= PCL__7);
do_c8_c9_c10 = has_hsw_msrs(family, model);
+ do_skl_residency = has_skl_msrs(family, model);
do_slm_cstates = is_slm(family, model);
bclk = discover_bclk(family, model);
- do_nhm_turbo_ratio_limit = do_nhm_platform_info && has_nhm_turbo_ratio_limit(family, model);
- do_ivt_turbo_ratio_limit = has_ivt_turbo_ratio_limit(family, model);
rapl_probe(family, model);
perf_limit_reasons_probe(family, model);
+ if (debug)
+ dump_cstate_pstate_config_info();
+
return;
}
-
void help()
{
fprintf(stderr,
@@ -2428,14 +2667,14 @@
if (debug > 1)
fprintf(stderr, "max_core_id %d, sizing for %d cores per package\n",
max_core_id, topo.num_cores_per_pkg);
- if (!summary_only && topo.num_cores_per_pkg > 1)
+ if (debug && !summary_only && topo.num_cores_per_pkg > 1)
show_core = 1;
topo.num_packages = max_package_id + 1;
if (debug > 1)
fprintf(stderr, "max_package_id %d, sizing for %d packages\n",
max_package_id, topo.num_packages);
- if (!summary_only && topo.num_packages > 1)
+ if (debug && !summary_only && topo.num_packages > 1)
show_pkg = 1;
topo.num_threads_per_core = max_siblings;
@@ -2550,14 +2789,11 @@
{
check_dev_msr();
check_permissions();
- check_cpuid();
+ process_cpuid();
setup_all_buffers();
if (debug)
- print_verbose_header();
-
- if (debug)
for_all_cpus(print_epb, ODD_COUNTERS);
if (debug)
@@ -2634,7 +2870,7 @@
}
void print_version() {
- fprintf(stderr, "turbostat version 4.1 10-Feb, 2015"
+ fprintf(stderr, "turbostat version 4.5 2 Apr, 2015"
" - Len Brown <lenb@kernel.org>\n");
}
diff --git a/tools/testing/selftests/x86/Makefile b/tools/testing/selftests/x86/Makefile
index f0a7918..ddf6356 100644
--- a/tools/testing/selftests/x86/Makefile
+++ b/tools/testing/selftests/x86/Makefile
@@ -1,6 +1,6 @@
.PHONY: all all_32 all_64 check_build32 clean run_tests
-TARGETS_C_BOTHBITS := sigreturn
+TARGETS_C_BOTHBITS := sigreturn single_step_syscall
BINARIES_32 := $(TARGETS_C_BOTHBITS:%=%_32)
BINARIES_64 := $(TARGETS_C_BOTHBITS:%=%_64)
diff --git a/tools/testing/selftests/x86/run_x86_tests.sh b/tools/testing/selftests/x86/run_x86_tests.sh
index 3d3ec65..3fc19b3 100644
--- a/tools/testing/selftests/x86/run_x86_tests.sh
+++ b/tools/testing/selftests/x86/run_x86_tests.sh
@@ -3,9 +3,11 @@
# This is deliberately minimal. IMO kselftests should provide a standard
# script here.
./sigreturn_32 || exit 1
+./single_step_syscall_32 || exit 1
if [[ "$uname -p" -eq "x86_64" ]]; then
./sigreturn_64 || exit 1
+ ./single_step_syscall_64 || exit 1
fi
exit 0
diff --git a/tools/testing/selftests/x86/single_step_syscall.c b/tools/testing/selftests/x86/single_step_syscall.c
new file mode 100644
index 0000000..50c2635
--- /dev/null
+++ b/tools/testing/selftests/x86/single_step_syscall.c
@@ -0,0 +1,181 @@
+/*
+ * single_step_syscall.c - single-steps various x86 syscalls
+ * Copyright (c) 2014-2015 Andrew Lutomirski
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * This is a very simple series of tests that makes system calls with
+ * the TF flag set. This exercises some nasty kernel code in the
+ * SYSENTER case: SYSENTER does not clear TF, so SYSENTER with TF set
+ * immediately issues #DB from CPL 0. This requires special handling in
+ * the kernel.
+ */
+
+#define _GNU_SOURCE
+
+#include <sys/time.h>
+#include <time.h>
+#include <stdlib.h>
+#include <sys/syscall.h>
+#include <unistd.h>
+#include <stdio.h>
+#include <string.h>
+#include <inttypes.h>
+#include <sys/mman.h>
+#include <sys/signal.h>
+#include <sys/ucontext.h>
+#include <asm/ldt.h>
+#include <err.h>
+#include <setjmp.h>
+#include <stddef.h>
+#include <stdbool.h>
+#include <sys/ptrace.h>
+#include <sys/user.h>
+
+static void sethandler(int sig, void (*handler)(int, siginfo_t *, void *),
+ int flags)
+{
+ struct sigaction sa;
+ memset(&sa, 0, sizeof(sa));
+ sa.sa_sigaction = handler;
+ sa.sa_flags = SA_SIGINFO | flags;
+ sigemptyset(&sa.sa_mask);
+ if (sigaction(sig, &sa, 0))
+ err(1, "sigaction");
+}
+
+static volatile sig_atomic_t sig_traps;
+
+#ifdef __x86_64__
+# define REG_IP REG_RIP
+# define WIDTH "q"
+#else
+# define REG_IP REG_EIP
+# define WIDTH "l"
+#endif
+
+static unsigned long get_eflags(void)
+{
+ unsigned long eflags;
+ asm volatile ("pushf" WIDTH "\n\tpop" WIDTH " %0" : "=rm" (eflags));
+ return eflags;
+}
+
+static void set_eflags(unsigned long eflags)
+{
+ asm volatile ("push" WIDTH " %0\n\tpopf" WIDTH
+ : : "rm" (eflags) : "flags");
+}
+
+#define X86_EFLAGS_TF (1UL << 8)
+
+static void sigtrap(int sig, siginfo_t *info, void *ctx_void)
+{
+ ucontext_t *ctx = (ucontext_t*)ctx_void;
+
+ if (get_eflags() & X86_EFLAGS_TF) {
+ set_eflags(get_eflags() & ~X86_EFLAGS_TF);
+ printf("[WARN]\tSIGTRAP handler had TF set\n");
+ _exit(1);
+ }
+
+ sig_traps++;
+
+ if (sig_traps == 10000 || sig_traps == 10001) {
+ printf("[WARN]\tHit %d SIGTRAPs with si_addr 0x%lx, ip 0x%lx\n",
+ (int)sig_traps,
+ (unsigned long)info->si_addr,
+ (unsigned long)ctx->uc_mcontext.gregs[REG_IP]);
+ }
+}
+
+static void check_result(void)
+{
+ unsigned long new_eflags = get_eflags();
+ set_eflags(new_eflags & ~X86_EFLAGS_TF);
+
+ if (!sig_traps) {
+ printf("[FAIL]\tNo SIGTRAP\n");
+ exit(1);
+ }
+
+ if (!(new_eflags & X86_EFLAGS_TF)) {
+ printf("[FAIL]\tTF was cleared\n");
+ exit(1);
+ }
+
+ printf("[OK]\tSurvived with TF set and %d traps\n", (int)sig_traps);
+ sig_traps = 0;
+}
+
+int main()
+{
+ int tmp;
+
+ sethandler(SIGTRAP, sigtrap, 0);
+
+ printf("[RUN]\tSet TF and check nop\n");
+ set_eflags(get_eflags() | X86_EFLAGS_TF);
+ asm volatile ("nop");
+ check_result();
+
+#ifdef __x86_64__
+ printf("[RUN]\tSet TF and check syscall-less opportunistic sysret\n");
+ set_eflags(get_eflags() | X86_EFLAGS_TF);
+ extern unsigned char post_nop[];
+ asm volatile ("pushf" WIDTH "\n\t"
+ "pop" WIDTH " %%r11\n\t"
+ "nop\n\t"
+ "post_nop:"
+ : : "c" (post_nop) : "r11");
+ check_result();
+#endif
+
+ printf("[RUN]\tSet TF and check int80\n");
+ set_eflags(get_eflags() | X86_EFLAGS_TF);
+ asm volatile ("int $0x80" : "=a" (tmp) : "a" (SYS_getpid));
+ check_result();
+
+ /*
+ * This test is particularly interesting if fast syscalls use
+ * SYSENTER: it triggers a nasty design flaw in SYSENTER.
+ * Specifically, SYSENTER does not clear TF, so either SYSENTER
+ * or the next instruction traps at CPL0. (Of course, Intel
+ * mostly forgot to document exactly what happens here.) So we
+ * get a CPL0 fault with usergs (on 64-bit kernels) and possibly
+ * no stack. The only sane way the kernel can possibly handle
+ * it is to clear TF on return from the #DB handler, but this
+ * happens way too early to set TF in the saved pt_regs, so the
+ * kernel has to do something clever to avoid losing track of
+ * the TF bit.
+ *
+ * Needless to say, we've had bugs in this area.
+ */
+ syscall(SYS_getpid); /* Force symbol binding without TF set. */
+ printf("[RUN]\tSet TF and check a fast syscall\n");
+ set_eflags(get_eflags() | X86_EFLAGS_TF);
+ syscall(SYS_getpid);
+ check_result();
+
+ /* Now make sure that another fast syscall doesn't set TF again. */
+ printf("[RUN]\tFast syscall with TF cleared\n");
+ fflush(stdout); /* Force a syscall */
+ if (get_eflags() & X86_EFLAGS_TF) {
+ printf("[FAIL]\tTF is now set\n");
+ exit(1);
+ }
+ if (sig_traps) {
+ printf("[FAIL]\tGot SIGTRAP\n");
+ exit(1);
+ }
+ printf("[OK]\tNothing unexpected happened\n");
+
+ return 0;
+}